\section{Introduction} Considerable progress has recently been achieved in the spectroscopy of hydrogen molecular ions. Several rovibrational transitions in HD$^+$ have been measured with relative uncertainties in the 10$^{-11}$--10$^{-12}$ range through Doppler-free spectroscopy of ultracold trapped ions in the Lamb-Dicke regime~\cite{Schiller20,Patra20,Kortunov21}, approaching or even exceeding the precision of theoretical predictions~\cite{Korobov17}. This has allowed improved determinations of the proton-to-electron mass ratio, as well as tests of QED that constrain hypothetical ``fifth forces'' between hadrons~\cite{Schiller20,Germann21}. These results, together with other ongoing projects~\cite{Zhong15,Schmidt20} and prospects of reaching higher precision~\cite{Schiller14,Karr14}, e.g.\ using quantum-logic spectroscopy schemes~\cite{Schmidt05,Wolf16,Chou20}, provide strong motivation to improve the theory further. In our paper~\cite{Korobov17}, we calculated the frequencies of fundamental vibrational transitions in the nonrelativistic QED approach, including corrections up to the $m\alpha^8$ order. Since then, several new advances in the theory of hydrogenlike atoms have been achieved~\cite{YPP19,Szafron19,Karshenboim19}, which allow improved results to be obtained for the corresponding correction terms in the hydrogen molecular ions. In addition, we found that several contributions that had been neglected in our previous treatment are comparable in magnitude to the estimated error bar, and thus should be included. Finally, changes in the recommended values of fundamental constants (not only the nucleus-to-electron mass ratios, but also the Rydberg constant and the proton and deuteron radii) between the previous (2014) CODATA adjustment~\cite{CODATA14} and the most recent one (2018)~\cite{CODATA18} (see Table~\ref{tab:CODATA-fc}) also affect our theoretical predictions and their uncertainties. 
The aim of this work is to reanalyse the theory, with particular emphasis on the evaluation of QED corrections at orders $m\alpha^7$ and $m\alpha^8$ in the framework of the adiabatic approximation, where we systematically include ``vibrational'' corrections, i.e.\ the second-order perturbation terms due to the perturbation of the vibrational wavefunction. Improved theoretical rovibrational transition frequencies are given for experimentally relevant transitions, and the impact of these new results on the determination of the proton-to-electron mass ratio is illustrated. We use atomic units throughout this paper ($\hbar\!=\!m_e\!=\!e\!=\!1$). Other constants used in the calculations are taken from the CODATA18 adjustment~\cite{CODATA18} (see Table~\ref{tab:CODATA-fc}), including the fine structure constant, $\alpha=7.297\,352\,5693(11)\!\times\!10^{-3}$. \section{Nuclear size and polarizability corrections} \label{sec2} In our previous calculations~\cite{Korobov17,Korobov06}, we only included the leading-order nuclear finite-size correction, see Eq.~(6) in~\cite{Korobov06}. Some higher-order nuclear corrections are not negligible at the current level of theoretical accuracy, in particular the deuteron polarizability~\cite{Friar97}. Here, we follow the notations used in~\cite{CODATA14}. According to Eq.~(34) of~\cite{CODATA14}, we write the $m\alpha^5$ deuteron polarizability contribution as \begin{equation} E_{\rm pol}^{(5)}(\hbox{D}) = [-21.37(8)]\, \left\langle\pi\delta(\mathbf{r}_d)\right\rangle\, (\hbox{kHz}). \end{equation} For example, this results in a 0.33~kHz shift for the frequency of the fundamental vibrational transition $(L=0,v=0)\to(0,1)$, which is comparable to the overall theoretical uncertainty of 0.5~kHz for this transition (see Table~\ref{ftE}). 
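As a numerical aside, every correction of this ``state-independent'' type shifts a transition frequency through the difference of delta-function expectation values between the two levels. The following minimal sketch illustrates the arithmetic; the expectation values are illustrative placeholders (not values computed in this work), chosen only so that the $-21.37$~kHz coefficient quoted above turns a difference of about $-0.015$~a.u.\ into a shift of a few tenths of a kHz.

```python
# How a "state-independent" correction shifts a transition frequency:
# coefficient (kHz per unit of <pi*delta(r_d)>, atomic units) times the
# difference of expectation values between the two levels.
COEFF_POL_KHZ = -21.37  # deuteron polarizability coefficient quoted in the text

def transition_shift_khz(coeff_khz, exp_val_lower, exp_val_upper):
    """Shift of the (lower -> upper) transition frequency, in kHz."""
    return coeff_khz * (exp_val_upper - exp_val_lower)

# ILLUSTRATIVE expectation values for the two vibrational levels (hypothetical,
# not from this work); only their difference matters.
shift = transition_shift_khz(COEFF_POL_KHZ, 0.6600, 0.6446)
print(f"polarizability shift: {shift:.2f} kHz")  # about 0.33 kHz by construction
```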
Nuclear finite-size corrections at the same order~\cite{Friar97b} are written as (see Eq.~(59) in~\cite{CODATA14}) \begin{equation} E_{\rm fns}^{(5)}(\hbox{D}) = -(2R_{\infty}c)\frac{2\pi}{3}C_{\eta}\left(\frac{R_d}{a_0}\right)^3\,\left\langle\pi\delta(\mathbf{r}_d)\right\rangle = [-0.57(3)]\, \left\langle\pi\delta(\mathbf{r}_d)\right\rangle\, (\hbox{kHz}), \end{equation} where $a_0$ is the Bohr radius, $R_d=2.12799(74)$~fm the rms charge radius of the deuteron, and $C_{\eta}=2.0(1)$. The contribution at the next order ($m\alpha^6$), unlike the previous ones, is not a ``state-independent'' term proportional to the squared value of the wavefunction at the electron-nucleus coalescence point, and would require an independent calculation. However, this term can be estimated from its value for the hydrogen atom ground state by using the LCAO approximation for the electronic wavefunction: \[ \psi_{_{\rm LCAO}}^{}(\mathbf{r}) = \frac{1}{\sqrt{2}}\bigl(\psi_{1s}(\mathbf{r}_p)\!+\!\psi_{1s}(\mathbf{r}_d)\bigr), \] where $\psi_{1s}$ is the hydrogen ground state wavefunction. Under this approximation, one gets from Eq.~(59) of~\cite{CODATA14}: \begin{equation} E_{\rm fns}^{(6)}(\hbox{D}) = -(2R_{\infty}c)\frac{2\pi}{3}\left(\frac{R_d}{a_0}\right)^2 (Z\alpha)^2\left(C_{\theta}-\ln\frac{ZR_d}{a_0}\right)\,\left\langle\pi\delta(\mathbf{r}_d)\right\rangle = [3.96(2)]\, \left\langle\pi\delta(\mathbf{r}_d)\right\rangle\, (\hbox{kHz}), \end{equation} with $C_{\theta}=0.38(4)$. Only the uncertainty from $C_{\theta}$ is indicated in this equation. Due to the employed LCAO approximation, one may estimate the uncertainty of $E_{\rm fns}^{(6)}(\hbox{D})$ as equal to the nonlogarithmic term, which is still much smaller than the overall theoretical uncertainty. In the proton case, all the corrections considered above are negligibly small at the current level of accuracy. 
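To make the logic of this estimate explicit (a sketch of the scaling argument; the overlap estimate below is ours, not part of the calculation), note that within the LCAO ansatz the electron density at the deuteron ($\mathbf{r}_d=0$, $\mathbf{r}_p=\mathbf{R}$) is
\[
\left\langle \delta(\mathbf{r}_d) \right\rangle_{\rm LCAO}
= \frac{1}{2}\,\bigl|\psi_{1s}(0)+\psi_{1s}(R)\bigr|^2
\approx \frac{1}{2}\,\bigl|\psi_{1s}(0)\bigr|^2,
\]
since $\psi_{1s}(R)/\psi_{1s}(0)=e^{-R}$ (in atomic units) is exponentially suppressed near the equilibrium distance $R\approx 2$~a.u. This shows why a hydrogen 1S coefficient can simply multiply the exact molecular expectation value $\left\langle\pi\delta(\mathbf{r}_d)\right\rangle$, with the neglected cross and overlap terms setting the accuracy of the estimate.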
\begin{table}[t] \begin{center} \begin{tabular}{l@{\hspace{2mm}}l@{\hspace{2mm}}l@{\hspace{2mm}}l} \hline\hline quantity & symbol & value & uncertainty \\ \hline \vrule width 0pt height 11pt proton charge radius & $r_p$ & $0.8751(61)\!\times\!10^{-15}\mbox{ m}$ & $7.0\!\times\!10^{-3}$ \\ & & $0.8414(19)\!\times\!10^{-15}\mbox{ m}$ & $2.2\!\times\!10^{-3}$ \\ Rydberg constant & $R_\infty=\alpha^2m_ec/2h$ & $10\,973\,731.568\,508(65) \mbox{ m}^{-1}$ & $5.9\!\times\!10^{-12}$ \\ & & $10\,973\,731.568\,160(21) \mbox{ m}^{-1}$ & $1.9\!\times\!10^{-12}$ \\ proton-to-electron & $\mu_p=m_p/m_e$ & $1836.152\,673\,89(17)$ & $9.5\!\times\!10^{-11}$ \\ mass ratio & & $1836.152\,673\,43(11)$ & $6.0\!\times\!10^{-11}$ \\ deuteron-to-electron & $\mu_d=m_d/m_e$ & $3670.482\,967\,85(13)$ & $3.5\!\times\!10^{-11}$ \\ mass ratio & & $3670.482\,967\,88(13)$ & $3.5\!\times\!10^{-11}$ \\ \hline\hline \end{tabular} \end{center} \vspace*{-5mm} \caption{Reevaluation of the fundamental constants of atomic physics by CODATA in 2018. CODATA14 (resp. CODATA18) values are given in the upper (resp. lower) line.} \label{tab:CODATA-fc} \end{table} \section{$m\alpha^7$ and $m\alpha^8$-order corrections in the adiabatic approximation}\label{sec3} Relativistic and QED corrections at the orders $m\alpha^4$ to $m\alpha^6$ have been evaluated in a full three-body approach using precise variational wavefunctions~\cite{Korobov06,Korobov12}, except for the $m\alpha^6$ relativistic correction~\cite{Korobov17,KorobovJPB07}. For calculation of $m\alpha^7$ and higher order one- and two-loop corrections we use the Born-Oppenheimer approach, where the states of the molecule are taken in the form \begin{equation}\label{BO} \Psi^{\rm BO} = \phi_{\rm el}(\mathbf{r};R)\chi_{\rm BO}^{}(R). 
\end{equation} The electronic wavefunction obeys the clamped-nuclei Schr\"odinger equation for a bound electron \begin{equation}\label{BO_el} \left[H_{\rm el}-\mathcal{E}_{\rm el}(R)\right]\phi_{\rm el}=0, \end{equation} where \[ H_{\rm el} = \frac{p^2}{2m} + V + \frac{Z_1Z_2}{R}, \qquad V = -\frac{Z_1}{r_1} - \frac{Z_2}{r_2}. \] Here $H_{\rm el}$ is the electronic Hamiltonian, $Z_1$ and $Z_2$ are the charges of the nuclei, and $r_1$, $r_2$ are the distances from the electron to nuclei 1 and 2, respectively. The wavefunction $\chi_{\rm BO}^{}(R)$ describes the relative nuclear motion, and is a solution of \begin{equation}\label{radial} \left(H_{\rm vb}\!-\!E_0\right)\chi_{\rm BO}^{} = \left[-\frac{\nabla_R^2}{2\mu_N}\!+\!\mathcal{E}_{\rm el}(R)\!-\!E_0\right]\chi_{\rm BO}^{} = 0, \end{equation} where $\mu_N=M_1M_2/(M_1\!+\!M_2)$ is the reduced mass of the nuclei. Instead of the Born-Oppenheimer solution $\chi_{\rm BO}^{}(R)$ we use the adiabatic solution $\chi_{\rm ad}(R)$, which also includes the adiabatic correction \begin{equation} \mathcal{E}_{\rm ad}(R) = \mathcal{E}_{\rm el} +\int d\mathbf{r} \left\langle \phi_{\rm el}\left| \frac{\mathbf{p}^2}{8\mu_N}+\frac{\mathbf{P}^2}{2\mu_N}-\frac{\kappa}{2\mu_N}\mathbf{p}\!\cdot\!\mathbf{P} \right|\phi_{\rm el} \right\rangle, \end{equation} where $\mathbf{p}$ is the electron momentum in the center-of-mass frame, $\mathbf{P}$ the relative momentum of the two nuclei, and $\kappa = (M_1\!-\!M_2)/(M_1\!+\!M_2)$ is the asymmetry parameter. See Ref.~\cite{Wolniewicz80}, or the review by Carrington {\em et al.}~\cite{Carrington89}, for more details. The one-loop self-energy correction of order $m\alpha^7$ to the energy of a bound electron in the two-center problem (non-recoil limit) has been determined in~\cite{JCP_PRL05,JCP_PRA05,Korobov13,KorobovPRL14,KorobovPRA14}. 
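As a numerical aside, the radial problem of Eq.~(\ref{radial}) can be solved by direct grid diagonalization. The sketch below substitutes a Morse potential with illustrative parameters (not those of HD$^+$) for the adiabatic curve $\mathcal{E}_{\rm ad}(R)$, so the lowest eigenvalue can be checked against the analytic Morse spectrum.

```python
# Numerical sketch of the radial problem (H_vb - E0) chi = 0: finite-difference
# diagonalization of H_vb = -(1/2 mu) d^2/dR^2 + V(R). A Morse potential with
# ILLUSTRATIVE parameters stands in for the adiabatic curve E_ad(R).
import numpy as np

MU = 1000.0               # reduced nuclear mass (illustrative; not HD+)
D, A, RE = 0.1, 1.0, 2.0  # Morse depth, range and equilibrium distance (a.u.)

def potential(R):
    """Morse potential with V(inf) = 0 and minimum -D at R = RE."""
    return D * (1.0 - np.exp(-A * (R - RE)))**2 - D

def vibrational_levels(n_grid=700, r_min=0.8, r_max=7.0):
    """Eigenvalues of the finite-difference radial Hamiltonian."""
    R, h = np.linspace(r_min, r_max, n_grid, retstep=True)
    kinetic = (np.diag(np.full(n_grid, 2.0))
               - np.diag(np.ones(n_grid - 1), 1)
               - np.diag(np.ones(n_grid - 1), -1)) / (2.0 * MU * h**2)
    return np.linalg.eigvalsh(kinetic + np.diag(potential(R)))

levels = vibrational_levels()
omega = A * np.sqrt(2.0 * D / MU)                 # harmonic frequency of the well
e0_analytic = -D + omega / 2.0 - omega**2 / (16.0 * D)
print(f"E0 numeric  = {levels[0]:.6f} a.u.")
print(f"E0 analytic = {e0_analytic:.6f} a.u.")
```

For the Morse stand-in the finite-difference ground state reproduces the analytic level $E_0=-D+\omega/2-\omega^2/(16D)$ to well below the grid tolerance; the real calculation replaces the model potential by the tabulated adiabatic curve.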
The electronic part of the correction can be calculated using the effective Hamiltonian of Eq.~(6) from Ref.~\cite{KorobovPRA14}: \begin{equation}\label{EffHSE} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-SE}^{(7)} = \left\langle \chi_{\rm ad}\bigl|\mathcal{E}_{\rm 1loop-SE}^{(7)}(R)\bigr|\chi_{\rm ad} \right\rangle, \end{array} \end{equation} where numerical data for the $\mathcal{E}_{\rm 1loop-SE}^{(7)}(R)$ effective potential may be found in the Supplemental Material to \cite{Korobov13}. The one-loop vacuum polarization (Uehling potential) contribution has been considered in~\cite{KarrVP17}. The adiabatic approximation was also compared there with full three-body calculations, confirming that it is accurate to $\mathcal{O}(m/M)$ (where $m/M$ is the electron-to-nucleus mass ratio). The electronic part of the correction can be written as \begin{equation}\label{EffHVP} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-VP}^{(7+)} = \left\langle \chi_{\rm ad}\bigl|\mathcal{E}_{\rm 1loop-VP}^{(7+)}(R)\bigr|\chi_{\rm ad} \right\rangle, \end{array} \end{equation} where $\mathcal{E}_{\rm 1loop-VP}^{(7+)}(R)$ is given by Eq.~(16) of~\cite{KarrVP17}. As shown in~\cite{Korobov17,KarrVP17}, beyond the above electronic contributions one also needs to include vibrational corrections. 
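In practice such electronic averages are evaluated from tabulated effective potentials. The following self-contained sketch shows the mechanics; both the grid data and the vibrational density are illustrative stand-ins, not the actual curves of the Supplemental Material.

```python
# Sketch: averaging a tabulated effective radial potential over the vibrational
# state, Delta E = <chi | E(R) | chi>. All grid data here are ILLUSTRATIVE.
import numpy as np

# Tabulation points standing in for an effective potential such as E^(7)(R).
R_tab = np.linspace(1.0, 4.0, 13)
E_tab = np.exp(-R_tab)                 # stand-in radial curve (a.u.)

def expectation(R, chi, R_tab, E_tab):
    """<chi|E(R)|chi> on a uniform grid, with chi normalized to sum(chi^2)*h = 1."""
    h = R[1] - R[0]
    return np.sum(chi**2 * np.interp(R, R_tab, E_tab)) * h

R = np.linspace(1.0, 4.0, 3001)
h = R[1] - R[0]
chi = np.exp(-(R - 2.0)**2 / (4 * 0.2**2))   # model vibrational ground state
chi /= np.sqrt(np.sum(chi**2) * h)           # normalize on the grid
print(f"<E(R)> = {expectation(R, chi, R_tab, E_tab):.4f} a.u.")
```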
The latter are second-order perturbation terms stemming from the perturbation of the vibrational wavefunction by the leading relativistic and radiative corrections to the adiabatic potential $\mathcal{E}_{\rm ad}(R)$, namely \begin{equation} \label{leading} \begin{array}{@{}l}\displaystyle \mathcal{E}_{\rm BP}^{(4)}(R) = \alpha^2\Bigl\langle -\frac{p^4}{8m^3}+\frac{\pi\rho}{2m^3}+H_{\rm so} \Bigr\rangle_{\big|{R}}\,, \\[3mm]\displaystyle \mathcal{E}_{\rm SE}^{(5)}(R) = \alpha^3\frac{4}{3}\left[\ln{\frac{1}{\alpha^2}}-\beta(R)+\frac{5}{6}\right] \Bigl\langle Z_1\delta(\mathbf{r}_1)+Z_2\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,, \\[3mm]\displaystyle \mathcal{E}_{\rm VP}^{(5)}(R) = -\alpha^3\frac{4}{15} \Bigl\langle Z_1\delta(\mathbf{r}_1)+Z_2\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,. \end{array} \end{equation} Here $\rho=\boldsymbol{\nabla}^2V/(4\pi)$, $H_{\rm so}$ is the electron spin-orbit Hamiltonian (see~\cite{KorobovJPB07} for details), and $\beta(R)$ is the nonrelativistic Bethe logarithm for the bound electron in the two-center problem, whose values as a function of $R$ may be found in the Supplemental Material to Ref.~\cite{Korobov13} or in \cite{Kolos}. The $m\alpha^7$-order vibrational correction from one-loop self-energy and vacuum polarization is then obtained via the second-order perturbation formalism as \begin{equation}\label{1loop-vb-a7} \Delta E_{\rm vb}^{(7)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \bigl(\mathcal{E}_{\rm SE}^{(5)}(R)+\mathcal{E}_{\rm VP}^{(5)}(R) \bigr) \bigr| \chi_{\rm ad} \right\rangle, \end{equation} where $Q'$ is the projector onto the subspace orthogonal to the vibrational state $\chi_{\rm ad}(R)$. The difference with respect to our previous calculations is that $\mathcal{E}_{\rm SE}^{(5)}(R)$ in Eq.~(\ref{leading}) now includes the contribution from the electron anomalous magnetic moment, which was omitted in Eq.~(10) of~\cite{Korobov17}. 
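The structure of Eq.~(\ref{1loop-vb-a7}) can be made concrete with a spectral-sum sketch: diagonalize a model $H_{\rm vb}$ and evaluate $2\sum_{n\neq 0}\langle 0|A|n\rangle\langle n|B|0\rangle/(E_0-E_n)$. The potential and the radial operators $A(R)$, $B(R)$ below are illustrative stand-ins for $\mathcal{E}_{\rm BP}^{(4)}(R)$ and $\mathcal{E}_{\rm SE}^{(5)}(R)+\mathcal{E}_{\rm VP}^{(5)}(R)$, not the actual curves.

```python
# Second-order "vibrational" term  2 <chi0| A Q'(E0 - H_vb)^{-1} Q' B |chi0>
# evaluated as a sum over excited eigenstates of a model radial Hamiltonian.
import numpy as np

MU, D, A_M, RE = 1000.0, 0.1, 1.0, 2.0   # illustrative Morse parameters

def hamiltonian(n_grid=600, r_min=0.8, r_max=7.0):
    R, h = np.linspace(r_min, r_max, n_grid, retstep=True)
    kinetic = (np.diag(np.full(n_grid, 2.0))
               - np.diag(np.ones(n_grid - 1), 1)
               - np.diag(np.ones(n_grid - 1), -1)) / (2.0 * MU * h**2)
    potential = D * (1.0 - np.exp(-A_M * (R - RE)))**2 - D
    return kinetic + np.diag(potential), R

def second_order_vb(A_of_R, B_of_R):
    """2 * sum_{n>0} <0|A|n><n|B|0> / (E0 - En) for multiplicative A(R), B(R)."""
    H, R = hamiltonian()
    E, C = np.linalg.eigh(H)             # columns of C: eigenstates chi_n on the grid
    amp_a = C.T @ (C[:, 0] * A_of_R(R))  # discrete matrix elements <chi_n|A|chi_0>
    amp_b = C.T @ (C[:, 0] * B_of_R(R))
    return 2.0 * np.sum(amp_a[1:] * amp_b[1:] / (E[0] - E[1:]))

# Model operators; for A = B each term is -|<n|A|0>|^2/(En - E0), hence negative.
shift = second_order_vb(lambda R: np.exp(-R), lambda R: 1.0 / R)
print(f"model second-order vibrational term: {shift:.3e} a.u.")
```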
We now turn to the $m\alpha^8$ order, starting with the one-loop self-energy correction. The higher-order remainder ($m\alpha^8$ and above) for this contribution can be estimated from the hydrogen 1S-state results by using the LCAO approximation: \begin{equation}\label{EffH} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-1loop}^{(8+)} = \alpha^5\left\langle \chi_{\rm ad}\bigl| \bigl(G_{\rm SE}(1S)-A_{60}\bigr)\Bigl\langle Z_1^3\delta(\mathbf{r}_1)+Z_2^3\delta(\mathbf{r}_2) \Bigr\rangle \bigr|\chi_{\rm ad} \right\rangle, \end{array} \end{equation} where $G_{\rm SE}(1S)=-30.29024(2)$ is the higher-order remainder and $A_{60}=-30.924149$; both numbers are taken from Table~II of \cite{YPP19}. This is more accurate than the treatment of Ref.~\cite{Korobov17}, where only the $m\alpha^8$-order term was estimated. The theoretical uncertainty is estimated as equal to $\left| \Delta E_{\rm el-1loop}^{(8+)} - \Delta E_{\rm el-1loop}^{(8log)} \right|$, where $\Delta E_{\rm el-1loop}^{(8log)}$ is the known $m\alpha^8$-order logarithmic term (first term of Eq.~(14) in~\cite{Korobov17}). The second-order vibrational contribution is expressed as \begin{equation}\label{1loop-vb-a8} \Delta E_{\rm vb-1loop}^{(8)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \bigl(\mathcal{E}_{\rm SE}^{(6)}(R)+\mathcal{E}_{\rm VP}^{(6)}(R) \bigr) \bigr| \chi_{\rm ad} \right\rangle, \end{equation} where \[ \begin{array}{@{}l}\displaystyle \mathcal{E}_{\rm SE}^{(6)}(R) = \alpha^4\left[\frac{139}{32}-2\ln{2}\right]\pi \Bigl\langle Z_1^2\delta(\mathbf{r}_1)+Z_2^2\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,, \\[3mm]\displaystyle \mathcal{E}_{\rm VP}^{(6)}(R) = \alpha^4\frac{5}{48}\pi \Bigl\langle Z_1^2\delta(\mathbf{r}_1)+Z_2^2\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,. \end{array} \] Finally, we consider two-loop $m\alpha^8$-order corrections. 
For hydrogen-like atoms, the two-loop correction at orders $m\alpha^8$ and higher is generally expressed in the form~\cite{YPP19,CODATA18} \begin{equation} \begin{array}{@{}l}\displaystyle E_{\rm 2loop}^{(8+)} = \frac{(Z\alpha)^6}{\pi^2 n^3} \Bigl[ B_{63}\ln^3(Z\alpha)^{-2}\! +\!B_{62}\ln^2(Z\alpha)^{-2}\! +B_{61}\ln(Z\alpha)^{-2}+G^{\rm 2loop}(Z\alpha) \Bigr], \end{array} \end{equation} where $G^{\rm 2loop}(Z\alpha)$ is the higher-order remainder calculated in \cite{Yerokhin09,Yerokhin18}. We adopt similar notation for hydrogen molecular ions: \begin{equation} \label{2-loop} \begin{array}{@{}l}\displaystyle E_{\rm 2loop}^{(8+)} = \frac{\alpha^6}{\pi} \left\langle \chi_{\rm ad}\bigl| \mathcal{B}_{63}(R) \ln^3(\alpha^{-2}) \!+\! \mathcal{B}_{62}(R) \ln^2(\alpha^{-2}) \!+\! \mathcal{B}_{61}(R) \ln(\alpha^{-2}) \!+\! G^{\rm 2loop}(1S) \Bigl\langle Z_1^3\delta(\mathbf{r}_1)\!+\!Z_2^3\delta(\mathbf{r}_2) \Bigr\rangle \bigr|\chi_{\rm ad} \right\rangle. \end{array} \end{equation} Again, the higher-order remainder is estimated using the LCAO approximation. In our calculations, we adopted the value $G^{\rm 2loop}(1S) = -94.5(6.6)$ from~\cite{Karshenboim19}. This is more accurate than our previous treatment~\cite{Korobov17}, where only the $m\alpha^8$-order correction was estimated, using $B_{60}(1S)$ instead of $G^{\rm 2loop}(1S)$. The theoretical uncertainty is estimated as equal to the term proportional to $G^{\rm 2loop}(1S)$, after subtraction of the known term of order~$m\alpha^9\ln^2(\alpha)$ (Eq.~(26) of~\cite{YPP19}). Calculation of the $\mathcal{B}_{6k}(R)$ effective potentials for the two-center problem is described in~\cite{Korobov17}. Since then, a new contribution to the $B_{61}$ coefficient in hydrogen-like atoms from light-by-light scattering diagrams has been found, yielding a correction $B^{\rm LbL}_{61} = 0.830\,309$ for $S$ states~\cite{Szafron19}. 
For the two-center problem we thus add the following term to $\mathcal{B}_{61}(R)$: \begin{equation}\label{B61:LbL} \mathcal{B}_{61}^{\rm LbL}(R) = B^{\rm LbL}_{61} \Bigl\langle Z_1^3\delta(\mathbf{r}_1)+Z_2^3\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,. \end{equation} The second-order contribution due to vibrational motion is expressed as \begin{equation}\label{eq:2loop_a8_vb} \begin{array}{@{}l} \Delta E_{\rm vb-2loop}^{(8)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q'\mathcal{E}_{\rm 2loop}^{(6)}(R) \bigr| \chi_{\rm ad} \right\rangle \\[3mm]\displaystyle\hspace{22mm} +\left\langle \chi_{\rm ad} \bigl| \bigl(\mathcal{E}_{\rm SE}^{(5)}(R)+\mathcal{E}_{\rm VP}^{(5)}(R) \bigr) Q' (E_0-H_{\rm vb})^{-1} Q' \bigl(\mathcal{E}_{\rm SE}^{(5)}(R)+\mathcal{E}_{\rm VP}^{(5)}(R) \bigr) \bigr| \chi_{\rm ad} \right\rangle, \end{array} \end{equation} with \[ \mathcal{E}_{\rm 2loop}^{(6)}(R) = \frac{\alpha^4}{\pi^2}\left[0.538941\right] \pi\Bigl\langle Z_1\delta(\mathbf{r}_1)+Z_2\delta(\mathbf{r}_2) \Bigr\rangle_{\big|{R}}\,. \] Due to the presence of a logarithmic term in $\mathcal{E}_{\rm SE}^{(5)}(R)$, the vibrational correction is enhanced by a factor of $\ln^2(\alpha^{-2})$ and contributes to the $\mathcal{B}_{62}$, $\mathcal{B}_{61}$ and nonlogarithmic terms. It results in a 1.14~kHz shift for the fundamental vibrational transition; the neglect of this term in~\cite{Korobov17} was thus not justified. The last corrections requiring new consideration are the muonic and hadronic vacuum polarization corrections. The muonic term is~(see Eq.~(14) of~\cite{YPP19}) \begin{equation} \mathcal{E}_{\mu{\rm VP}}(R) = \left( \frac{m_e}{m_{\mu}} \right)^2 \mathcal{E}_{\rm VP}^{(5)} (R) \,, \end{equation} and the hadronic term may be written as~(Eq.~(15) of~\cite{YPP19}) \begin{equation} \mathcal{E}_{\rm hadVP}(R) = 0.671(15) \, \mathcal{E}_{\mu{\rm VP}}(R). 
\end{equation} The sum of these two contributions shifts the fundamental transition frequency by 0.25~kHz. \section{Results} \label{sec4} Our results for the frequency of the fundamental transition $(L=0,v=0)\to(0,1)$ in HD$^+$ are presented in Table~\ref{ftE} and compared with previous results from Ref.~\cite{Korobov17}. The change in the nonrelativistic transition frequency $\nu_{nr}$ is mainly due to the changes in the nucleus-to-electron mass ratios (mostly $\mu_p$) and in the Rydberg constant between the 2014 and 2018 CODATA adjustments. The shift in $\nu_{\alpha^2}$ is due to the nuclear corrections; it stems from the CODATA18 values of the proton and deuteron radii, and from the inclusion of the higher-order finite-size and polarizability corrections described in Sec.~\ref{sec2}. The change in $\nu_{\alpha^5}$ comes from the correction of the vibrational contribution [Eq.~(\ref{1loop-vb-a7})]. Finally, several improvements have been made in the calculation of $\nu_{\alpha^6}$, as detailed in Sec.~\ref{sec3}: inclusion of vibrational contributions, estimation of the all-order remainder in both one-loop and two-loop corrections, and inclusion of light-by-light scattering diagrams in the two-loop correction. The total shift of the transition frequency is +6.4~kHz, of which +5.4~kHz is due to the shifts in the fundamental constants (+2.7~kHz for $\nu_{nr}$, and +2.7~kHz for the leading-order finite-size correction in $\nu_{\alpha^2}$), and +1.0~kHz comes from the new contributions discussed in Secs.~\ref{sec2} and \ref{sec3}. The theoretical uncertainty is dominated by the one-loop (Eq.~(\ref{EffH})) and two-loop (term proportional to $G^{\rm 2loop}(1S)$ in Eq.~(\ref{2-loop})) higher-order remainders (see discussion in~\cite{Korobov17}). In the uncertainty from fundamental constants, the largest contribution by far is that of the proton-to-electron mass ratio $\mu_p$ (1.7~kHz and 1.1~kHz using the 2014 and 2018 CODATA values, respectively). 
\begin{table}[t] \begin{center} \caption{Fundamental transition frequency $\nu_{01}$ for the $\mbox{HD}^+$ molecular ion (in kHz). CODATA14 recommended values of fundamental constants were used in~\cite{Korobov17}, and the latest CODATA18 values are used in the present work. Nuclear size and polarizability corrections are included in $\nu_{\alpha^2}$, and ``other'' corrections correspond to the muonic and hadronic vacuum polarization. Theoretical uncertainties of contributions at each order in $\alpha$, if not negligible, are indicated within parentheses. In the final value of the transition frequency, the first error is the theoretical uncertainty, and the second one is due to the uncertainty of fundamental constants.}\label{ftE} \begin{tabular}{l@{\hspace{16mm}}d@{\hspace{16mm}}d} \hline\hline \vrule height 10.5pt width 0pt depth 3.5pt & \hbox{Ref.~\cite{Korobov17}} & \hbox{this work} \\ \hline \vrule height 10pt width 0pt $\nu_{nr}$ & 57\,349\,439\,952.4 & 57\,349\,439\,955.1 \\ $\nu_{\alpha^2}$ & 958\,151.7 & 958\,154.6 \\ $\nu_{\alpha^3}$ & -242\,126.3 & -242\,126.3 \\ $\nu_{\alpha^4}$ & -1708.9(1) & -1708.9(1)\\ $\nu_{\alpha^5}$ & 106.4(1) & 105.9(1)\\ $\nu_{\alpha^6}$ & -2.0(5) & -0.8(5)\\ other & & 0.25 \\ \hline \vrule height 10pt width 0pt $\nu_{tot}$ & 57\,350\,154\,373.4(0.5)(1.8) & 57\,350\,154\,379.8(0.5)(1.3) \\ \hline\hline \end{tabular} \end{center} \vspace*{-3mm} \end{table} Theoretical frequencies for the rovibrational transitions measured in recent experiments are presented in Table~\ref{theor_exp}. They are in very good agreement with experimental results in all cases. For the $(0,0)\!\to\!(1,1)$ transition, the combined uncertainty $u=0.86$~kHz ($u^{\rm exp}=0.16$~kHz and $u^{\rm theor,spin}=0.85$~kHz) is given; for the $(3,0)\!\to\!(3,9)$ transition, the revised experimental value from~\cite{Germann21} is used. 
Numerical results of calculations for all the contributions considered in Sec.~\ref{sec3} (with the $m\alpha^6$-order relativistic correction as well) for a wide range of rovibrational states are given in the Supplemental Material~\cite{SM}. \begin{table}[h] \begin{center} \caption{Theoretical and experimental spin-averaged transition frequencies (in kHz). CODATA18 values of fundamental constants were used in the calculations. For theoretical values, the first uncertainty is due to as-yet uncalculated terms and to the approximations used in the theory, while the second is due to the uncertainty of the CODATA18 recommended mass values.}\label{theor_exp} \begin{tabular}{c@{\hspace{8mm}}c@{\hspace{8mm}}c} \hline\hline $(L,v)\to(L',v')$ & theory & experiment \\ \hline $(0,0)\to(1,0)$ & 1\,314\,925\,752.932(19)(61) & 1\,314\,925\,752.910(17) \\ $(0,0)\to(1,1)$ & 58\,605\,052\,163.9(0.5)(1.3) & 58\,605\,052\,164.24(86) \\ $(3,0)\to(3,9)$ & 415\,264\,925\,502.8(3.3)(6.7) & 415\,264\,925\,501.8(1.3) \\ \hline\hline \end{tabular} \end{center} \end{table} Using these improved theoretical predictions, we give in Table~\ref{mupe} some updated determinations of the proton-to-electron mass ratio from HD$^+$ spectroscopy. We follow the least-squares fitting procedure used in the CODATA adjustments and described in Appendix~E of~\cite{CODATA98}. The dependence of HD$^+$ transition frequencies on the fundamental constants is linearized using a first-order Taylor expansion around their starting (CODATA18) values; the first-derivative coefficients are obtained by the methods described in~\cite{Schiller05,Karr16}. Following the CODATA approach, we adjust the individual particle masses rather than mass ratios. In more detail, we use CODATA18 values of $m_d$, $R_{\infty}$, $r_d$, $r_p$, include as an additional data point the latest measurement of the electron mass~\cite{Kohler15}, and solve for the electron and proton masses. 
$m_p/m_e$ and its uncertainty are then deduced from the adjusted values of $m_p$ and $m_e$, taking the correlation between them into account. The covariance matrix of the HD$^+$ input data (see~\cite{CODATA98} for its definition) is built including the three (mutually uncorrelated) sources of uncertainty: experimental, theoretical, and parametric, the latter stemming from the uncertainties of fundamental constants that are not directly involved in the adjustment ($m_d$, $R_{\infty}$, $r_d$, $r_p$). When several HD$^+$ measurements are combined, the following assumptions are made regarding correlations: (i) experimental uncertainties are uncorrelated, (ii) theoretical uncertainties are fully correlated, and (iii) correlations between parametric uncertainties are included using the correlation coefficients between fundamental constants available from~\cite{CODATA-web}. Values obtained in this way from each single HD$^+$ experiment are given in the first three lines of Table~\ref{mupe}, and the combined result from the three measurements in the fourth line. Note that its uncertainty is only slightly reduced with respect to those of the individual determinations, due to the strong correlations between them. The combined value, although slightly higher (by 1.4 combined standard deviations), is in good agreement with that obtained from recent high-precision mass spectrometry measurements of $m_p$~\cite{Heisse19}, $m_d$~\cite{Rau20}, $m($HD$^+)$~\cite{Rau20}, and $m_d/m_p$~\cite{Fink20} in Penning traps: $m_p/m_e = 1836.152\,673\,343(60)$~\cite{Kohler15,Rau20}. Finally, the HD$^+$ data can be combined with the mass spectrometry measurements and the CODATA18 values of $R_{\infty}$, $r_d$, $r_p$ to simultaneously determine the three particle masses, $m_e$, $m_p$ and $m_d$. The mass ratios $m_p/m_e$ and $m_d/m_p$ deduced from this adjustment are shown in the last line of Table~\ref{mupe}. 
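The combination procedure above can be sketched as a small generalized least-squares problem. The numbers below are synthetic (one adjusted parameter, two fictitious measurements with uncorrelated experimental errors and a fully correlated theory error) and only illustrate why the combined uncertainty stays close to the individual ones.

```python
# Correlated least-squares combination: minimize (y - A x)^T C^-1 (y - A x)
# with a covariance C carrying a shared (fully correlated) theory error.
import numpy as np

def gls(design, y, cov):
    """Generalized least squares; returns (parameter estimate, parameter covariance)."""
    cinv = np.linalg.inv(cov)
    pcov = np.linalg.inv(design.T @ cinv @ design)
    return pcov @ design.T @ cinv @ y, pcov

design = np.array([[1.0], [1.0]])   # sensitivity of each datum to the parameter
y = np.array([0.8, 1.1])            # SYNTHETIC observed-minus-predicted values, kHz
sig_exp = np.array([0.9, 0.9])      # uncorrelated experimental sigmas, kHz
sig_th = 0.5                        # fully correlated theoretical sigma, kHz
cov = np.diag(sig_exp**2) + sig_th**2 * np.ones((2, 2))

xhat, pcov = gls(design, y, cov)
print(f"adjusted parameter: {xhat[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} kHz")
```

Because the theory error is common to both data, the parameter variance never drops below $\sigma_{\rm th}^2$, mirroring the weak uncertainty reduction noted above for the combined HD$^+$ result.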
Relative uncertainties of $1.8 \times 10^{-11}$ (for $m_p/m_e$) and $1.6 \times 10^{-11}$ (for $m_d/m_p$) are obtained, improved by factors of 3.3 and 3.5 with respect to CODATA18. \begin{table}[h] \begin{center} \caption{Determinations of mass ratios using HD$^+$ spectroscopy.}\label{mupe} \begin{tabular}{c@{\hspace{8mm}}c@{\hspace{8mm}}c} \hline\hline data & $m_p/m_e$ & $m_d/m_p$ \\ \hline $(0,0)\to(1,0)$ & 1836.152\,673\,480(63) & \\ $(0,0)\to(1,1)$ & 1836.152\,673\,40(10) & \\ $(3,0)\to(3,9)$ & 1836.152\,673\,457(73) & \\ \hline HD$^+$ & 1836.152\,673\,466(61) & \\ \hline HD$^+$/Penning & 1836.152\,673\,454(33) & 1.999\,007\,501\,243(31) \\ \hline\hline \end{tabular} \end{center} \end{table} In conclusion, we have presented a revised and improved theory of spin-averaged transition frequencies in hydrogen molecular ions. Further progress in precision now requires calculations of the nonlogarithmic $m\alpha^8$-order one- and two-loop corrections. \emph{Acknowledgements.} The authors thank J.C.J.~Koelemeij for his help with the least-squares adjustments of fundamental constants. V.I.K. acknowledges support of the Russian Foundation for Basic Research under Grant No.~19-02-00058-a. \section{The adiabatic approximation} \vspace{5mm} {\bf 1.} Relativistic correction of order $m\alpha^6$. Electronic and vibrational contributions to this term have been derived in~\cite{KorobovJPB07} and~\cite{Korobov17}, respectively: \begin{equation}\label{a6rel} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el}^{(6)} = \left\langle \chi_{\rm ad}\bigl|\mathcal{E}_{\rm rel}^{(6)}(R)\bigr|\chi_{\rm ad} \right\rangle, \\[3mm]\displaystyle \Delta E_{\rm vb}^{(6)} = \left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \mathcal{E}_{\rm BP}^{(4)}(R) \bigr| \chi_{\rm ad} \right\rangle. 
\end{array} \end{equation} Here $\mathcal{E}_{\rm rel}^{(6)}(R)$ is taken from~\cite{KorobovJPB07}, Eq.~(8), and $\mathcal{E}_{\rm BP}^{(4)}(R)$ from the SM of~\cite{KorobovMP18}. \begin{table}[h] \begin{center} \begin{tabular}{|@{\hspace{2mm}}c@{\hspace{3mm}}c@{\hspace{3mm}}l@{\hspace{3mm}}l@{\hspace{3mm}}l@{\hspace{3mm}}l@{\hspace{3mm}}l@{\hspace{2mm}}|} \hline\hline & & $~~~L=0$ & $~~~L=1$ & $~~~L=2$ & $~~~L=3$ & $~~~L=4$ \\ \hline $v=0$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0420430 & $-$0.0420454 & $-$0.0420502 & $-$0.0420579 & $-$0.0420687 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0238898 & $-$0.0237857 & $-$0.0235791 & $-$0.0232735 & $-$0.0228742 \\ \hline $v=1$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0427380 & $-$0.0427413 & $-$0.0427481 & $-$0.0427586 & $-$0.0427733 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0218020 & $-$0.0217043 & $-$0.0215105 & $-$0.0212239 & $-$0.0208493 \\ \hline $v=2$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0434825 & $-$0.0434869 & $-$0.0434958 & $-$0.0435095 & $-$0.0435285 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0198265 & $-$0.0197348 & $-$0.0195529 & $-$0.0192839 & $-$0.0189325 \\ \hline $v=3$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0442783 & $-$0.0442838 & $-$0.0442950 & $-$0.0443122 & $-$0.0443357 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0179527 & $-$0.0178666 & $-$0.0176959 & $-$0.0174435 & $-$0.0171137 \\ \hline $v=4$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0451263 & $-$0.0451331 & $-$0.0451467 & $-$0.0451676 & $-$0.0451960 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0161720 & $-$0.0160912 & $-$0.0159310 & $-$0.0156943 & $-$0.0153850 \\ \hline $v=5$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0460270 & $-$0.0460351 & $-$0.0460513 & $-$0.0460760 & $-$0.0461096 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0144778 & $-$0.0144021 & $-$0.0142519 & $-$0.0140301 & $-$0.0137405 \\ \hline $v=6$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0469799 & $-$0.0469893 & $-$0.0470082 & $-$0.0470368 & $-$0.0470758 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0128655 & $-$0.0127946 & $-$0.0126542 & $-$0.0124467 & 
$-$0.0121759 \\ \hline $v=7$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0479835 & $-$0.0479942 & $-$0.0480158 & $-$0.0480485 & $-$0.0480929 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.0113325 & $-$0.0112664 & $-$0.0111354 & $-$0.0109420 & $-$0.0106896 \\ \hline $v=8$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0490350 & $-$0.0490471 & $-$0.0490714 & $-$0.0491082 & $-$0.0491579 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.00987872& $-$0.00981731& $-$0.00969566& $-$0.00951608& $-$0.00928192\\ \hline $v=9$ & $\Delta E_{\rm el}^{(6)}$ & $-$0.0501302 & $-$0.0501437 & $-$0.0501707 & $-$0.0502115 & $-$0.0502665 \\ & $\Delta E_{\rm vb}^{(6)}$ & $-$0.00850621& $-$0.00844949& $-$0.00833719& $-$0.00817151& $-$0.00795567\\ \hline\hline \end{tabular} \caption{Relativistic corrections of order $m\alpha^6$ [in units of $\alpha^4\!\times\!\hbox{(1 a.u.)}$].}\label{tab:a6rel} \end{center} \end{table} \clearpage \newpage {\bf 2.} One-loop self-energy correction of order $m\alpha^7$: \begin{equation}\label{a7se} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-SE}^{(7)} = \left\langle \chi_{\rm ad}\bigl|\mathcal{E}_{\rm 1loop-SE}^{(7)}(R)\bigr|\chi_{\rm ad} \right\rangle = \alpha^5 \left( A_{62}^{\rm el} \ln^2(\alpha^{-2}) \!+\! A_{61}^{\rm el} \ln(\alpha^{-2}) \!+\! A_{60}^{\rm el} \right), \\[3mm]\displaystyle \Delta E_{\rm vb-SE}^{(7)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \mathcal{E}_{\rm SE}^{(5)}(R) \bigr| \chi_{\rm ad} \right\rangle = \alpha^5 \left( A_{61}^{\rm vb} \ln(\alpha^{-2}) \!+\! A_{60}^{\rm vb} \right), \end{array} \end{equation} where $\mathcal{E}_{\rm 1loop-SE}^{(7)}(R)$ is taken from Eq.~(20) of~\cite{KorobovPRA14} and from the SM of~\cite{Korobov13}. 
Data for the leading-order corrections $\mathcal{E}_{\rm BP}^{(4)}(R)$ and $\mathcal{E}_{\rm SE}^{(5)}(R)$ can be found in the SM to Ref.~\cite{KorobovMP18} (see also~\cite{Korobov13} for details on the nonrelativistic Bethe logarithm $\beta(R)$, which enters in $\mathcal{E}_{\rm SE}^{(5)}(R)$). \begin{table}[h] \begin{center} \begin{tabular}{|@{\hspace{2mm}}c@{\hspace{4mm}}c@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{2mm}}|} \hline\hline & & $L=0~~~$ & $L=1~~~$ & $L=2~~~$ & $L=3~~~$ & $L=4~~~$ \\ \hline $v=0$ & $A_{61}^{\rm el}$ & 2.06947 & 2.06781 & 2.06453 & 2.05964 & 2.05321 \\ & $A_{60}^{\rm el}$ & $-$12.4697 & $-$12.4589 & $-$12.4375 & $-$12.4056 & $-$12.3637 \\ & $A_{61}^{\rm vb}$ & 0.170602 & 0.169947 & 0.168648 & 0.166724 & 0.164207 \\ & $A_{60}^{\rm vb}$ & $-$0.375948 & $-$0.374488 & $-$0.371591 & $-$0.367306 & $-$0.361699 \\ \hline $v=1$ & $A_{61}^{\rm el}$ & 2.03005 & 2.02849 & 2.02540 & 2.02080 & 2.01475 \\ & $A_{60}^{\rm el}$ & $-$12.1997 & $-$12.1896 & $-$12.1693 & $-$12.1393 & $-$12.0998 \\ & $A_{61}^{\rm vb}$ & 0.156671 & 0.156054 & 0.154830 & 0.153019 & 0.150649 \\ & $A_{60}^{\rm vb}$ & $-$0.346002 & $-$0.344624 & $-$0.341890 & $-$0.337846 & $-$0.332555 \\ \hline $v=2$ & $A_{61}^{\rm el}$ & 1.99367 & 1.99221 & 1.98930 & 1.98497 & 1.97929 \\ & $A_{60}^{\rm el}$ & $-$11.9484 & $-$11.9389 & $-$11.9198 & $-$11.8915 & $-$11.8543 \\ & $A_{61}^{\rm vb}$ & 0.143445 & 0.142864 & 0.141711 & 0.140004 & 0.137770 \\ & $A_{60}^{\rm vb}$ & $-$0.317508 & $-$0.316206 & $-$0.313624 & $-$0.309804 & $-$0.304807 \\ \hline $v=3$ & $A_{61}^{\rm el}$ & 1.96018 & 1.95880 & 1.95607 & 1.95202 & 1.94668 \\ & $A_{60}^{\rm el}$ & $-$11.7149 & $-$11.7059 & $-$11.6880 & $-$11.6613 & $-$11.6263 \\ & $A_{61}^{\rm vb}$ & 0.130853 & 0.130304 & 0.129216 & 0.127606 & 0.125499 \\ & $A_{60}^{\rm vb}$ & $-$0.290313 & $-$0.289082 & $-$0.286641 & $-$0.283030 & $-$0.278308 \\ \hline $v=4$ & $A_{61}^{\rm el}$ & 1.92944 & 1.92815 & 1.92559 & 
1.92179 & 1.91680 \\ & $A_{60}^{\rm el}$ & $-$11.4982 & $-$11.4897 & $-$11.4729 & $-$11.4478 & $-$11.4149 \\ & $A_{61}^{\rm vb}$ & 0.118829 & 0.118311 & 0.117284 & 0.115764 & 0.113776 \\ & $A_{60}^{\rm vb}$ & $-$0.264284 & $-$0.263120 & $-$0.260812 & $-$0.257397 & $-$0.252932 \\ \hline $v=5$ & $A_{61}^{\rm el}$ & 1.90132 & 1.90012 & 1.89773 & 1.89418 & 1.88952 \\ & $A_{60}^{\rm el}$ & $-$11.2975 & $-$11.2895 & $-$11.2737 & $-$11.2502 & $-$11.2193 \\ & $A_{61}^{\rm vb}$ & 0.107320 & 0.106831 & 0.105862 & 0.104428 & 0.102551 \\ & $A_{60}^{\rm vb}$ & $-$0.239310 & $-$0.238209 & $-$0.236026 & $-$0.232797 & $-$0.228574 \\ \hline $v=6$ & $A_{61}^{\rm el}$ & 1.87573 & 1.87461 & 1.87238 & 1.86907 & 1.86473 \\ & $A_{60}^{\rm el}$ & $-$11.1120 & $-$11.1046 & $-$11.0897 & $-$11.0677 & $-$11.0388 \\ & $A_{61}^{\rm vb}$ & 0.096285 & 0.095823 & 0.094909 & 0.093555 & 0.091785 \\ & $A_{60}^{\rm vb}$ & $-$0.215304 & $-$0.214262 & $-$0.212198 & $-$0.209145 & $-$0.205154 \\ \hline $v=7$ & $A_{61}^{\rm el}$ & 1.85254 & 1.85150 & 1.84943 & 1.84635 & 1.84232 \\ & $A_{60}^{\rm el}$ & $-$10.9412 & $-$10.9342 & $-$10.9204 & $-$10.8998 & $-$10.8728 \\ & $A_{61}^{\rm vb}$ & 0.085691 & 0.085256 & 0.084394 & 0.083118 & 0.081449 \\ & $A_{60}^{\rm vb}$ & $-$0.192197 & $-$0.191213 & $-$0.189263 & $-$0.186378 & $-$0.182608 \\ \hline $v=8$ & $A_{61}^{\rm el}$ & 1.83166 & 1.83069 & 1.82878 & 1.82594 & 1.82221 \\ & $A_{60}^{\rm el}$ & $-$10.7844 & $-$10.7779 & $-$10.7650 & $-$10.7458 & $-$10.7207 \\ & $A_{61}^{\rm vb}$ & 0.075520 & 0.075110 & 0.074298 & 0.073097 & 0.071527 \\ & $A_{60}^{\rm vb}$ & $-$0.169946 & $-$0.169018 & $-$0.167178 & $-$0.164458 & $-$0.160903 \\ \hline $v=9$ & $A_{61}^{\rm el}$ & 1.81299 & 1.81210 & 1.81034 & 1.80772 & 1.80429 \\ & $A_{60}^{\rm el}$ & $-$10.6410 & $-$10.6350 & $-$10.6230 & $-$10.6053 & $-$10.5820 \\ & $A_{61}^{\rm vb}$ & 0.065763 & 0.065378 & 0.064615 & 0.063488 & 0.062014 \\ & $A_{60}^{\rm vb}$ & $-$0.148535 & $-$0.147661 & $-$0.145929 & $-$0.143370 & 
$-$0.140025 \\ \hline\hline \end{tabular} \caption{Radiative one-loop self-energy corrections of order $m\alpha^7$ [in units of $\alpha^5\ln^k{\alpha}\!\times\!\hbox{(1 a.u.)}$ for $A_{6k}$].}\label{tab:a7se} \end{center} \end{table} \clearpage \newpage {\bf 3.} The one-loop vacuum polarization contribution of orders $m\alpha^7$ and $m\alpha^8$ (Uehling potential) was discussed in~\cite{KarrVP17}. It may be written in the form: \begin{equation}\label{a7vp} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-VP}^{(7-8)} = \left\langle \chi_{\rm ad}\bigl|\mathcal{E}_{\rm 1loop-VP}^{(7-8)}(R)\bigr|\chi_{\rm ad} \right\rangle = \alpha^5 \left( V_{61}^{\rm el} \ln(\alpha^{-2}) \!+\! V_{60}^{\rm el} \right), \\[3mm]\displaystyle \Delta E_{\rm vb-VP}^{(7)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \mathcal{E}_{\rm VP}^{(5)}(R) \bigr| \chi_{\rm ad} \right\rangle = \alpha^5 \, V_{60}^{\rm vb}, \end{array} \end{equation} Here $\mathcal{E}_{\rm 1loop-VP}^{(7-8)}(R)$ is taken from Eq.~(16) of~\cite{KarrVP17}; the first two terms in the $\alpha$-expansion of the Uehling potential are taken into account in the calculation of matrix elements. $\mathcal{E}_{\rm VP}^{(5)}(R)$ is proportional to delta-function expectation values (see Eq.~(10) of the main paper). 
\begin{table}[h] \begin{center} \begin{tabular}{|@{\hspace{2mm}}c@{\hspace{4mm}}c@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{2mm}}|} \hline\hline & & $L=0~~~$ & $L=1~~~$ & $L=2~~~$ & $L=3~~~$ & $L=4~~~$ \\ \hline $v=0$ & $V_{60}^{\rm el}$ & $-$0.208342 & $-$0.208197 & $-$0.207910 & $-$0.207483 & $-$0.206923 \\ & $V_{60}^{\rm vb}$ & $-$0.0341205 & $-$0.0339895 & $-$0.0337295 & $-$0.0333449 & $-$0.0328414 \\ \hline $v=1$ & $V_{60}^{\rm el}$ & $-$0.205650 & $-$0.205514 & $-$0.205244 & $-$0.204843 & $-$0.204317 \\ & $V_{60}^{\rm vb}$ & $-$0.0313341 & $-$0.0312107 & $-$0.0309660 & $-$0.0306038 & $-$0.0301298 \\ \hline $v=2$ & $V_{60}^{\rm el}$ & $-$0.203229 & $-$0.203102 & $-$0.202848 & $-$0.202473 & $-$0.201981 \\ & $V_{60}^{\rm vb}$ & $-$0.0286891 & $-$0.0285728 & $-$0.0283421 & $-$0.0280007 & $-$0.0275540 \\ \hline $v=3$ & $V_{60}^{\rm el}$ & $-$0.201071 & $-$0.200952 & $-$0.200716 & $-$0.200366 & $-$0.199906 \\ & $V_{60}^{\rm vb}$ & $-$0.0261705 & $-$0.0260608 & $-$0.0258432 & $-$0.0255212 & $-$0.0250999 \\ \hline $v=4$ & $V_{60}^{\rm el}$ & $-$0.199170 & $-$0.199059 & $-$0.198839 & $-$0.198513 & $-$0.198086 \\ & $V_{60}^{\rm vb}$ & $-$0.0237657 & $-$0.0236622 & $-$0.0234568 & $-$0.0231529 & $-$0.0227553 \\ \hline $v=5$ & $V_{60}^{\rm el}$ & $-$0.197516 & $-$0.197414 & $-$0.197210 & $-$0.196908 & $-$0.196513 \\ & $V_{60}^{\rm vb}$ & $-$0.0214640 & $-$0.0213662 & $-$0.0211724 & $-$0.0208855 & $-$0.0205103 \\ \hline $v=6$ & $V_{60}^{\rm el}$ & $-$0.196103 & $-$0.196009 & $-$0.195821 & $-$0.195543 & $-$0.195180 \\ & $V_{60}^{\rm vb}$ & $-$0.0192569 & $-$0.0191647 & $-$0.0189817 & $-$0.0187110 & $-$0.0183570 \\ \hline $v=7$ & $V_{60}^{\rm el}$ & $-$0.194923 & $-$0.194836 & $-$0.194665 & $-$0.194411 & $-$0.194078 \\ & $V_{60}^{\rm vb}$ & $-$0.0171382 & $-$0.0170512 & $-$0.0168787 & $-$0.0166235 & $-$0.0162898 \\ \hline $v=8$ & $V_{60}^{\rm el}$ & $-$0.193968 & $-$0.193890 & $-$0.193733 & $-$0.193503 & $-$0.193201 \\ & 
$V_{60}^{\rm vb}$ & $-$0.0151039 & $-$0.0150220 & $-$0.0148596 & $-$0.0146194 & $-$0.0143053 \\ \hline $v=9$ & $V_{60}^{\rm el}$ & $-$0.193231 & $-$0.193160 & $-$0.193020 & $-$0.192812 & $-$0.192541 \\ & $V_{60}^{\rm vb}$ & $-$0.0131525 & $-$0.0130756 & $-$0.0129230 & $-$0.0126975 & $-$0.0124028 \\ \hline\hline \end{tabular} \caption{Radiative one-loop vacuum polarization corrections of orders $m\alpha^7$ and $m\alpha^8$ [in units of $\alpha^5\!\times\!\hbox{(1 a.u.)}$].}\label{tab:a7VP} \end{center} \end{table} \clearpage \newpage {\bf 4.} One-loop self-energy correction of order $m\alpha^8$. The electronic contribution defined by Eq.~(12) of the main text involves delta functions, whereas the vibrational contribution is written as \begin{equation}\label{1loop-vb-a8} \Delta E_{\rm vb-1loop}^{(8)} = 2\left\langle \chi_{\rm ad} \bigl| \mathcal{E}_{\rm BP}^{(4)}(R) Q' (E_0-H_{\rm vb})^{-1} Q' \bigl(\mathcal{E}_{\rm SE}^{(6)}(R)+\mathcal{E}_{\rm VP}^{(6)}(R) \bigr) \bigr| \chi_{\rm ad} \right\rangle = \alpha^6 \, A_{70}^{\rm vb}. \end{equation} Here too, the terms $\mathcal{E}_{\rm SE}^{(6)}(R)$ and $\mathcal{E}_{\rm VP}^{(6)}(R)$ involve delta-function expectation values (see main text). 
\begin{table}[h] \begin{center} \begin{tabular}{|@{\hspace{2mm}}c@{\hspace{4mm}}c@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{2mm}}|} \hline\hline & & $L=0~~~$ & $L=1~~~$ & $L=2~~~$ & $L=3~~~$ & $L=4~~~$ \\ \hline $v=0$ & $A_{70}^{\rm vb}$ & 1.23069 & 1.22596 & 1.21659 & 1.20271 & 1.18455 \\ $v=1$ & $A_{70}^{\rm vb}$ & 1.13019 & 1.12574 & 1.11691 & 1.10384 & 1.08675 \\ $v=2$ & $A_{70}^{\rm vb}$ & 1.03478 & 1.03059 & 1.02227 & 1.00995 & 0.99384 \\ $v=3$ & $A_{70}^{\rm vb}$ & 0.94394 & 0.93999 & 0.93214 & 0.92052 & 0.90532 \\ $v=4$ & $A_{70}^{\rm vb}$ & 0.85720 & 0.85347 & 0.84606 & 0.83510 & 0.82076 \\ $v=5$ & $A_{70}^{\rm vb}$ & 0.77418 & 0.77066 & 0.76366 & 0.75332 & 0.73978 \\ $v=6$ & $A_{70}^{\rm vb}$ & 0.69458 & 0.69125 & 0.68465 & 0.67489 & 0.66212 \\ $v=7$ & $A_{70}^{\rm vb}$ & 0.61816 & 0.61502 & 0.60880 & 0.59959 & 0.58755 \\ $v=8$ & $A_{70}^{\rm vb}$ & 0.54478 & 0.54183 & 0.53597 & 0.52730 & 0.51598 \\ $v=9$ & $A_{70}^{\rm vb}$ & 0.47440 & 0.47162 & 0.46612 & 0.45799 & 0.44735 \\ \hline\hline \end{tabular} \caption{Radiative one-loop self-energy vibrational correction of order $m\alpha^8$ [in units of $\alpha^6\!\times\!\hbox{(1 a.u.)}$].}\label{tab:a8se} \end{center} \end{table} \clearpage \newpage {\bf 5.} Two-loop $m\alpha^8$-order corrections. The electronic contribution (calculated in~\cite{Korobov17}) and the vibrational contribution are written in Eqs.~(15) and (17) of the main text, respectively. They can be expressed in the form: \vspace*{-3mm} \begin{equation}\label{2loop-a8} \begin{array}{@{}l} \displaystyle \Delta E_{\rm el-2loop}^{(8+)} = \frac{\alpha^6}{\pi} \left( B_{63}^{\rm el} \ln^3(\alpha^{-2}) \!+\! B_{62}^{\rm el} \ln^2(\alpha^{-2}) \!+\! B_{61}^{\rm el} \ln(\alpha^{-2}) \!+\! G_{60}^{\rm el} \right), \\[3mm]\displaystyle \Delta E_{\rm vb-2loop}^{(8)} = \frac{\alpha^6}{\pi} \left( B_{62}^{\rm vb} \ln^2(\alpha^{-2}) \!+\! B_{61}^{\rm vb} \ln(\alpha^{-2}) \!+\! B_{60}^{\rm vb} \right).
\end{array} \end{equation} \vspace*{-4mm} \begin{table}[h] \begin{center} \begin{tabular}{|@{\hspace{2mm}}c@{\hspace{4mm}}c@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{2mm}}|} \hline\hline & & $L=0~~~$ & $L=1~~~$ & $L=2~~~$ & $L=3~~~$ & $L=4~~~$ \\ \hline $v=0$ & $B_{62}^{\rm el}$ & 0.570282 & 0.568869 & 0.566054 & 0.561863 & 0.556331 \\ & $B_{61}^{\rm el}$ & 17.3590 & 17.3478 & 17.3255 & 17.2923 & 17.2488 \\ & $B_{62}^{\rm vb}$ & $-$0.304790 & $-$0.303780 & $-$0.301774 & $-$0.298803 & $-$0.294912 \\ & $B_{61}^{\rm vb}$ & 1.46513 & 1.46021 & 1.45046 & 1.43601 & 1.41709 \\ & $B_{60}^{\rm vb}$ & $-$1.73867 & $-$1.73277 & $-$1.72108 & $-$1.70376 & $-$1.68109 \\ \hline $v=1$ & $B_{62}^{\rm el}$ & 0.522533 & 0.521181 & 0.518489 & 0.514480 & 0.509186 \\ & $B_{61}^{\rm el}$ & 17.1126 & 17.1021 & 17.0812 & 17.0502 & 17.0094 \\ & $B_{62}^{\rm vb}$ & $-$0.282101 & $-$0.281147 & $-$0.279253 & $-$0.276448 & $-$0.272774 \\ & $B_{61}^{\rm vb}$ & 1.35863 & 1.35398 & 1.34475 & 1.33108 & 1.31318 \\ & $B_{60}^{\rm vb}$ & $-$1.61535 & $-$1.60976 & $-$1.59867 & $-$1.58226 & $-$1.56077 \\ \hline $v=2$ & $B_{62}^{\rm el}$ & 0.476256 & 0.474960 & 0.472379 & 0.468534 & 0.463456 \\ & $B_{61}^{\rm el}$ & 16.8899 & 16.8801 & 16.8605 & 16.8316 & 16.7935 \\ & $B_{62}^{\rm vb}$ & $-$0.260536 & $-$0.259634 & $-$0.257844 & $-$0.255192 & $-$0.251719 \\ & $B_{61}^{\rm vb}$ & 1.25722 & 1.25282 & 1.24407 & 1.23113 & 1.21418 \\ & $B_{60}^{\rm vb}$ & $-$1.49771 & $-$1.49241 & $-$1.48189 & $-$1.46631 & $-$1.44593 \\ \hline $v=3$ & $B_{62}^{\rm el}$ & 0.431275 & 0.430029 & 0.427548 & 0.423851 & 0.418968 \\ & $B_{61}^{\rm el}$ & 16.6901 & 16.6809 & 16.6627 & 16.6357 & 16.6002 \\ & $B_{62}^{\rm vb}$ & $-$0.239976 & $-$0.239122 & $-$0.237427 & $-$0.234917 & $-$0.231630 \\ & $B_{61}^{\rm vb}$ & 1.16037 & 1.15619 & 1.14790 & 1.13562 & 1.11955 \\ & $B_{60}^{\rm vb}$ & $-$1.38515 & $-$1.38011 & $-$1.37011 & $-$1.35532 & $-$1.33596 \\ \hline $v=4$ & $B_{62}^{\rm el}$ 
& 0.387434 & 0.386234 & 0.383843 & 0.380281 & 0.375574 \\ & $B_{61}^{\rm el}$ & 16.5123 & 16.5038 & 16.4869 & 16.4618 & 16.4289 \\ & $B_{62}^{\rm vb}$ & $-$0.220313 & $-$0.219504 & $-$0.217898 & $-$0.215520 & $-$0.212405 \\ & $B_{61}^{\rm vb}$ & 1.06758 & 1.06361 & 1.05574 & 1.04410 & 1.02885 \\ & $B_{60}^{\rm vb}$ & $-$1.27711 & $-$1.27233 & $-$1.26283 & $-$1.24877 & $-$1.23037 \\ \hline $v=5$ & $B_{62}^{\rm el}$ & 0.344605 & 0.343446 & 0.341138 & 0.337699 & 0.333153 \\ & $B_{61}^{\rm el}$ & 16.3556 & 16.3478 & 16.3321 & 16.3090 & 16.2786 \\ & $B_{62}^{\rm vb}$ & $-$0.201456 & $-$0.200689 & $-$0.199166 & $-$0.196911 & $-$0.193957 \\ & $B_{61}^{\rm vb}$ & 0.97843 & 0.97467 & 0.96720 & 0.95613 & 0.94165 \\ & $B_{60}^{\rm vb}$ & $-$1.17315 & $-$1.16859 & $-$1.15956 & $-$1.14618 & $-$1.12868 \\ \hline $v=6$ & $B_{62}^{\rm el}$ & 0.302686 & 0.301565 & 0.299333 & 0.296005 & 0.291607 \\ & $B_{61}^{\rm el}$ & 16.2194 & 16.2122 & 16.1978 & 16.1765 & 16.1485 \\ & $B_{62}^{\rm vb}$ & $-$0.183326 & $-$0.182598 & $-$0.181153 & $-$0.179014 & $-$0.176211 \\ & $B_{61}^{\rm vb}$ & 0.89258 & 0.88900 & 0.88190 & 0.87139 & 0.85763 \\ & $B_{60}^{\rm vb}$ & $-$1.07284 & $-$1.06851 & $-$1.05991 & $-$1.04719 & $-$1.03054 \\ \hline $v=7$ & $B_{62}^{\rm el}$ & 0.261604 & 0.260518 & 0.258356 & 0.255133 & 0.250871 \\ & $B_{61}^{\rm el}$ & 16.1029 & 16.0963 & 16.0831 & 16.0636 & 16.0381 \\ & $B_{62}^{\rm vb}$ & $-$0.165860 & $-$0.165169 & $-$0.163798 & $-$0.161767 & $-$0.159108 \\ & $B_{61}^{\rm vb}$ & 0.80972 & 0.80632 & 0.79957 & 0.78958 & 0.77651 \\ & $B_{60}^{\rm vb}$ & $-$0.97587 & $-$0.97174 & $-$0.96356 & $-$0.95145 & $-$0.93560 \\ \hline $v=8$ & $B_{62}^{\rm el}$ & 0.221319 & 0.220267 & 0.218171 & 0.215046 & 0.210914 \\ & $B_{61}^{\rm el}$ & 16.0053 & 15.9992 & 15.9873 & 15.9696 & 15.9464 \\ & $B_{62}^{\rm vb}$ & $-$0.149008 & $-$0.148353 & $-$0.147051 & $-$0.145124 & $-$0.142601 \\ & $B_{61}^{\rm vb}$ & 0.72963 & 0.72640 & 0.71998 & 0.71049 & 0.69807 \\ & $B_{60}^{\rm vb}$ & 
$-$0.88195 & $-$0.87803 & $-$0.87024 & $-$0.85872 & $-$0.84365 \\ \hline $v=9$ & $B_{62}^{\rm el}$ & 0.181830 & 0.180810 & 0.178778 & 0.175747 & 0.171740 \\ & $B_{61}^{\rm el}$ & 15.9257 & 15.9202 & 15.9094 & 15.8935 & 15.8726 \\ & $B_{62}^{\rm vb}$ & $-$0.132737 & $-$0.132115 & $-$0.130881 & $-$0.129054 & $-$0.126662 \\ & $B_{61}^{\rm vb}$ & 0.65214 & 0.64907 & 0.64298 & 0.63397 & 0.62218 \\ & $B_{60}^{\rm vb}$ & $-$0.79092 & $-$0.78719 & $-$0.77978 & $-$0.76883 & $-$0.75450 \\ \hline\hline \end{tabular} \caption{Radiative two-loop self-energy corrections of order $m\alpha^8$ [in units of $(\alpha^6/\pi)\ln^k{\alpha}\!\times\!\hbox{(1 a.u.)}$ for $B_{6k}$].}\label{tab:a8_2loop} \vspace*{-10mm} \end{center} \end{table} \clearpage
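As an illustration of how the tabulated coefficients enter the expansions above, the following minimal Python sketch (not part of the supplement itself) evaluates the electronic one-loop self-energy term of Eq.~(\ref{a7se}) for the $(v=0,\,L=0)$ level using the coefficients of Table~\ref{tab:a7se}. The numerical value of $\alpha$ and the Hartree-to-kHz conversion are standard constants supplied by the example, and $A_{62}^{\rm el}$ is set to zero here only because it is not tabulated in this supplement.

```python
import math

# Fine-structure constant (CODATA 2018) and 1 a.u. of energy in kHz;
# these constants are inserted for the example, not taken from the tables.
alpha = 7.2973525693e-3
HARTREE_KHZ = 6.579683920502e12  # 1 hartree = 6.5797e12 kHz

def delta_E_el_SE7(A62, A61, A60):
    """Electronic one-loop self-energy term of order m*alpha^7, Eq. (a7se):
    alpha^5 * (A62 ln^2(alpha^-2) + A61 ln(alpha^-2) + A60), in atomic units."""
    L = math.log(alpha**-2)
    return alpha**5 * (A62 * L**2 + A61 * L + A60)

# (v=0, L=0) coefficients from Table a7se; A62 is set to zero here
# only because it is not tabulated in this supplement.
corr_au = delta_E_el_SE7(A62=0.0, A61=2.06947, A60=-12.4697)
corr_khz = corr_au * HARTREE_KHZ
```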
\section{INTRODUCTION} The field of autonomous driving has recently witnessed rapid development, and a number of novel behavior planning algorithms have been proposed, many of them impressive achievements. However, it is almost impossible for a single behavior planner to drive through the wide variety of real, complex scenarios. Consider, for example, a passenger who rides in an autonomous car every day. The car should be able to drive both in cities and on highways, which involve various intersections, roundabouts, and merging and following scenarios (as shown in Figure \ref{fig:wholemap}). A single planning algorithm, such as an optimization-based planner, may never find a solution in every complex traffic condition. The hyperparameters of one planner or policy may work well at intersections but are very likely to fail in highway merging scenarios, because the optimal driving settings for each scenario, e.g., the optimal speed and spacing, are difficult to capture with a single fixed planner. The most difficult aspect of this driving problem lies in the rapidly changing environment: the road curvature may change, and the number of obstacles may vary across these self-driving scenarios. For example, when an autonomous vehicle is making a left turn at an intersection, it has to consider the number of oncoming vehicles, their actual speeds, and their distances to the intersection. When the same vehicle is merging into the next lane on a highway, it must analyze similar information, but on an entirely different, larger scale for distance, velocity, etc. We therefore have to find an adaptive policy that can make high-quality decisions in various scenarios.
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/wholemap.png} \caption{Various complex real-world traffic conditions in which autonomous vehicles (red) must drive safely and efficiently.} \label{fig:wholemap} \vspace{-6mm} \end{figure} Each behavior and motion planner in existing work has its own advantages and disadvantages. For example, learning-based methods can recognize the specific properties of different scenarios with little effort, but their results are hard to interpret and to generalize to other scenarios. Non-learning-based methods, on the other hand, can ensure the safety of the agent and handle many similar cases without much modification, but it is difficult to tune their hyperparameters (e.g., the detailed cost function of motion planning) to be compatible with various traffic scenarios involving different numbers of interactive agents. Based on the analysis of these limitations, we design a hierarchical model that exploits the advantages of both reinforcement learning and non-learning-based policies. In this paper, we make the following contributions: \begin{itemize} \item We propose a \textbf{H}ierarchical behavior planning framework with low-level safe \textbf{C}on\textbf{t}rollers and a high-level \textbf{R}einforcement \textbf{L}earning policy \textbf{(H-CtRL)}. It is an adaptive behavior planner that can make high-quality decisions in many complex traffic scenarios. \item We construct a simulator that reproduces traffic scenes from real-world traffic datasets, so that the proposed method can be trained and tested in realistic scenarios. \item We test H-CtRL under real-world traffic conditions and show that it is capable of handling different planning tasks in various scenarios.
\end{itemize} \section{RELATED WORK} \subsection{Non-Learning Based Methods} Non-learning-based decision-making modules are considered safe and interpretable \cite{mcnaughton2011motion, paden2016survey}, but these planners tend to be too conservative. In \cite{defensive, bayesianPersuasive}, this problem was addressed by making the autonomous agent more cognizant of and reactive to obstacles. The authors of \cite{leverageHuman} proposed another framework that leverages the effects on human actions to make planning more interactive. However, the aforementioned methods all suffer, to varying degrees, from long computation times. The Constrained Iterative Linear Quadratic Regulator \cite{CILQR} significantly reduces the computation time while preserving safety. We aim to inherit the safety guarantees and fast computation of these methods, and select non-learning-based planners as our low-level safe controllers. \subsection{Supervised Learning} The application of supervised learning to autonomous driving dates back to the work in \cite{Alvinn}. The authors of \cite{end2endforAV} then learned a map directly from raw pixels to steering commands, where the concept of imitation learning (IL) began to surface. A more robust perception-action model was developed in \cite{endtoendPerception}. To enhance the safety of IL, \cite{sun2018fast} proposed a hierarchical framework that combines a high-level IL policy with a low-level MPC controller to improve efficiency and safety. Similarly, to make IL generalizable and able to deal with complex urban scenarios, the authors of \cite{DILinurban} learned policies from offline connected driving data and integrated a safety controller at test time. \subsection{Reinforcement Learning} Reinforcement learning (RL) has also been extensively explored in autonomous driving. The algorithm in \cite{DRLframework} adopted recurrent neural networks for information integration and learned an effective driving policy in simulation.
The work in \cite{virtualtoreal} developed a realistic translation network to make sim-to-real transfer possible. \cite{scalable, interpretable} developed robust policies to make self-driving cars capable of driving through complex urban scenarios. One can also incorporate prediction models \cite{evolvegraph, conditional} to build model-based RL planners. We believe RL is a suitable candidate for the high-level policy: it can learn from experience which low-level controller is the most suitable at a specific time step. Hierarchical Reinforcement Learning (HRL) can make the learning process more sample-efficient; the idea is to reuse the well-trained network of one sub-goal on other similar tasks \cite{HRL, towardshrl}. There are also many variants of HRL. The work in \cite{learningHRL} integrated a sampling-based motion planner with a high-level RL policy, which can solve long-horizon problems. Similarly, \cite{hoel2019combining} combined deep reinforcement learning with Monte Carlo sampling to achieve tactical decision making for autonomous vehicles. The authors of \cite{thananjeyan2020safety} developed SAVED, which augments the safety of model-based reinforcement learning with value estimation from demonstrations. Planning only in normal scenarios is not enough for reliable self-driving cars, so the authors of \cite{rl-il} developed a hybrid method with RL and IL policies to plan safely in near-accident scenarios. The authors of \cite{socialAttention} proposed an attention-based architecture that can deal with a varying number of obstacles and the interactions in between. We adopt the basic idea of reusing low-level controllers and aim to design a novel planning module that works in various traffic conditions: the high-level RL is trained to recognize and react to different environments, while the low-level conventional controllers fulfill the goals sent from the high-level RL and guarantee safety at the same time.
\section{Problem Statement} Throughout the paper, we focus on the behavior planning problem in different complex urban traffic scenarios. There is one ego agent and many other obstacle cars in the environment, each with its own behavior pattern. We thus need a mechanism to model the evolution of each scene and the interactions among agents, and we formulate the problem as a Partially Observable Markov Decision Process (POMDP). A POMDP is defined as a tuple $\langle\mathcal{S}, \mathcal{A}, \mathcal{O}, \mathcal{T}, \mathcal{Z}, \mathcal{R}, \gamma\rangle$. $\mathcal{S}$ denotes the state space and $s\in \mathcal{S}$ is a state of the environment. $\mathcal{A}$ is the action space and $a \in \mathcal{A}$ is an action taken by the ego agent. $o \in \mathcal{O}$ is an observation received by the ego agent. The transition model $\mathcal{T}(s, a, s')$ gives the probability of transitioning from the current state-action pair $(s, a)$ to $s'$ at the next time step. The observation model $\mathcal{Z}(s, o)$ gives the probability of receiving the observation $o$ in the state $s$. The reward function $\mathcal{R}(s, a)$ yields a specific reward for a state-action pair $(s, a)$, and the discount factor is denoted by $\gamma$. The overall objective is to maximize the expected discounted reward and find the corresponding optimal policy \begin{equation} \pi^* =\arg \max_{\pi} \mathbb{E}\left[ \sum_{t = 0}^{\infty}\gamma^t \mathcal{R}(s^t, \pi(o^t)) \right] \end{equation} where $s^t, \; o^t $ are the state and the observation of the environment at time step $t$, respectively. \section{METHOD} \subsection{H-CtRL Framework} We propose to solve the aforementioned POMDP with a hierarchical behavior planning framework, H-CtRL (shown in Figure \ref{fig:pipeline}), which can be viewed as an integrated solver for the POMDP.
We input the current state of the environment into the framework, and it outputs actions to be executed by the ego agent. The hierarchical framework consists of a collection of low-level controllers with safety constraints and a high-level reinforcement learning policy that manages them all. We aim to find an optimal high-level policy $\pi^*$ that can take advantage of the right controller at each time step when performing behavior planning for the autonomous agent. In detail, the state $s_t$ of the problem at time step $t$ is designed to be the collection of low-dimensional states of all agents present in the environment at $t$, namely, \begin{equation} \begin{aligned} &s_t = \left[ s_t^0 \;\; || \;\; s_t^1 \;\; || \;\; \dots \;\; || \;\; s_t^m \right]\\ &s_t^i = \begin{bmatrix}x_t^i & y_t^i & v_t^i & \theta_t^i \end{bmatrix}, \; i \in [0, m] \end{aligned} \end{equation} \begin{figure}[h] \vspace{3pt} \centering \includegraphics[width=0.27\textwidth]{figures/pipeline.png} \caption{The hierarchical framework with low-level safe controllers and a high-level RL algorithm.} \label{fig:pipeline} \end{figure} where $s_t^i,\; i \in [0, m]$ is the low-dimensional state of the $i$-th agent in the environment at time step $t$ (note that $s_t^0$ denotes the state of the ego agent), and $s_t$ is the state of the environment. The operator $||$ denotes concatenation. The action at time step $t$ is denoted by $a_t = [acc_t \;\; \delta_t]$, where $acc_t$ and $\delta_t$ are the acceleration and the steering angle of the ego agent, respectively. We choose the states of the $k$ nearest neighbors around the ego vehicle at time step $t$ as the observation, namely, $o_t = [o_t^1 \;\; || \;\; o_t^2 \;\; || \;\; \dots \;\; || \;\; o_t^k]$, where $o_t^i, \; i\in [1, k]$ is the state of the $i$-th nearest obstacle around the ego car. Following \cite{bicyclemodel}, we select the bicycle model as the transition model for the ego agent.
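For concreteness, one Euler integration step of the kinematic bicycle model used as the ego transition can be sketched as follows; the wheelbase split $(l_f, l_r)$, the integration time step, and the function interface are illustrative assumptions rather than values from the paper.

```python
import math

def bicycle_step(x, y, v, theta, acc, delta, dt=0.1, lf=1.2, lr=1.4):
    """One Euler step of the kinematic bicycle model: the ego state
    (x, y, v, theta) evolves under the action a_t = [acc, delta].
    lf/lr are the distances from the center of gravity to the axles."""
    beta = math.atan(lr / (lf + lr) * math.tan(delta))  # slip angle at the CG
    x += v * math.cos(theta + beta) * dt
    y += v * math.sin(theta + beta) * dt
    theta += v / lr * math.sin(beta) * dt
    v += acc * dt
    return x, y, v, theta
```

With zero steering and zero acceleration the model reduces to straight-line motion at constant speed, which is a quick sanity check of the implementation.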
As for the other vehicles in the scene, their states evolve according to the definition of the environment or the simulator. \textbf{Low-Level Safe Controllers} consist of a set of $n$ non-learning-based controllers, denoted by $\{ f_1, f_2, \dots, f_n\}$. They act as the workers of the hierarchical framework, responsible for the specific tasks assigned by the high-level coordinator. Each $f_i, \; i \in [1,n]$ has its own behavior pattern, cooperative or selfish, defensive or aggressive. Although they lead to different driving strategies, they all enforce their own safety constraints when performing motion planning. For example, sampling-based planners assign little probability to areas where an accident is likely to happen, and optimization-based controllers assign a large cost in the objective function when the output lands in a dangerous area. Since the chosen low-level controller is the one whose action directly affects the evolution of the environment, these safety constraints of the low-level controllers guarantee the safety of the whole hierarchical behavior planning framework. Specifically, a low-level controller takes in the observation $o_t$ and returns an action $a_t$ at time step $t$. \begin{algorithm} \caption{H-CtRL Training Algorithm}\label{alg:hctrl} \begin{algorithmic}[1] \State \textbf{Initialize} the simulation environment $env$ and get an initial observation $o_0$. \State \textbf{Initialize} the policy and the target Q-network with weights $\theta$ and $\theta^-$. Set $\theta^- \gets \theta$. \State \textbf{Instantiate} an empty replay buffer $\mathcal{B}$ with a maximum length of $l_B$. \For {$h \gets 0$ \textbf{to} $N$ steps} \State Select an action $a_h \gets \arg \max_{a_h} Q_\theta (o_h, a_h)$ according to the $\epsilon$-greedy policy.
\State Given $o_h$, the chosen low-level controller corresponding to $a_h$ acts for $p$ time steps in $env$ to get the next observation $o_{h+1}$ and the reward $r_h$. \State Push the transition $\begin{bmatrix} o_h & a_h & r_h & o_{h+1} \end{bmatrix}$ into $\mathcal{B}$. \State Update the weights $\theta$ of the policy Q-network using the replay buffer $\mathcal{B}$. \If {$h$ \textbf{mod} $target\_update\_frequency == 0$} \State $\theta^- \gets \theta.copy()$ \EndIf \If {the episode is done} \State Record the cumulative reward of this episode. \State Reset $env$ and get a new initial observation. \EndIf \EndFor \end{algorithmic} \end{algorithm} \textbf{The High-Level Reinforcement Learning} policy is the coordinator of this framework. It makes observations from the environment and chooses the low-level controller that is most suitable given the current observation. Therefore, the action space of the high-level RL is reduced and discretized from the original continuous space to a finite set of low-level controller ids $\{ 1, 2, \dots, n \}$. The actions of the RL can be viewed as intermediate actions within the hierarchical framework, whereas the final actions that directly affect the environment are the outputs of the low-level controllers. Note that, to reduce the number of high-level decisions within one episode and improve the stability of the algorithm, the high-level RL switches its choice of low-level controller only every $p$ time steps: within each window of $p$ environment time steps, the single chosen low-level controller plans trajectories consistently and is not overridden by the RL. We therefore introduce a new time step $h$ for the high-level RL, where the original time step $t=h \cdot p$ corresponds to time step $h$ of the RL. The state and the observation at each time step of the RL problem are defined accordingly as $s_h$ and $o_h$.
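The policy and target Q-networks in Algorithm \ref{alg:hctrl} are updated with a double-Q target \cite{ddqn}, in which the policy network selects the next high-level action and the target network evaluates it; this decoupling reduces the overestimation bias of vanilla Q-learning. A minimal sketch of the target computation is given below; the batch layout and the discount factor are illustrative assumptions.

```python
import numpy as np

def ddqn_targets(q_policy_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN targets for a batch of transitions (o_h, a_h, r_h, o_{h+1}).
    q_policy_next / q_target_next: Q-values of o_{h+1} from the policy and
    target networks, shape (batch, n_controllers); dones flags terminal steps."""
    best_actions = np.argmax(q_policy_next, axis=1)                # selection
    next_q = q_target_next[np.arange(len(rewards)), best_actions]  # evaluation
    return rewards + gamma * (1.0 - dones) * next_q
```

The regression loss for the policy network is then the squared difference between these targets and $Q_\theta(o_h, a_h)$ over the sampled batch.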
One way to obtain $s_h$ and $o_h$ is to convert the RL time scale back to the original scale: $s_h$ and $o_h$ are then the state and the observation at time step $t=h \cdot p$ in the environment. The transition model of the RL is also defined on the new time scale, namely, $\mathcal{T}(s_h, a_h, s_{h+1})$, where $s_{h+1}$ is the state reached from $s_h$ after applying $p$ actions of the low-level controller corresponding to $a_h$. Because of the existence of the low-level controllers, we no longer need to worry about the tedious design details of the reward function $\mathcal{R}(s_h, a_h)$, and can simply adopt a high-level one that encourages completing the planning task as fast as possible without any collision. In detail, the ego agent receives a positive reward proportional to its progress along its reference trajectory, and a negative reward if the episode terminates early due to a collision, a low-level controller failure, or other factors. In general, the high-level RL can be trained with any model-free RL algorithm. In this paper, we choose the Double Deep Q-Network (DDQN) of \cite{ddqn} to learn our high-level RL policy, as it is more stable and has lower variance. The pseudo-code for the hierarchical planning framework is shown in Algorithm \ref{alg:hctrl}. \section{EXPERIMENTS} \subsection{Simulator} We constructed our own simulation environment based on the INTERACTION dataset \cite{interactiondataset} and the OpenAI Gym toolkit. The road maps in the simulator are loaded from the INTERACTION dataset map collections, which contain various real-world traffic scenarios recorded in many different countries. After the simulator finishes constructing the road map, we specify an initial time step from which vehicle data are loaded. The states of all vehicles other than the ego agent are loaded from the dataset at each time step.
Since these vehicle data were all collected from real-world traffic, we can simulate relatively realistic traffic conditions. The ego agent in the simulator is our self-driving car equipped with H-CtRL, whose transition model is the bicycle model. When running experiments in the simulator, we input one low-level action, $a_t = [acc_t \; \delta_t]$, and the simulator takes one step and outputs a reward, an observation, and a boolean indicating whether the episode has terminated, according to the bicycle model and the dataset. \subsection{Scenarios} We consider two road maps from the INTERACTION dataset and design different driving tasks in each of them. \begin{itemize} \item \textbf{TC\_BGR\_Intersection\_VA (VA)}. This is a map of a busy and complex intersection, which makes it difficult for the ego agent to avoid collisions. \item \textbf{DR\_USA\_Roundabout\_SR (SR)}. This map is collected from a real-world roundabout. It is larger than the intersection map and thus more difficult to navigate. \end{itemize} Since there are four directions in both scenarios, we design similar tasks in each of them: the ego agent should navigate through the traffic safely to make unprotected left turns, unprotected right turns, and straight crossings. By unprotected left or right turns, we mean that, according to the traffic rules, the agent must yield to other vehicles when turning left on green lights or turning right on red lights. \subsection{Low-level Controllers} Generally speaking, we can choose any mature non-learning-based planner to make the proposed hierarchical framework inclusive and powerful. In this paper, we consider $n=9$ different Constrained Iterative Linear Quadratic Regulators (CILQR) \cite{CILQR} as the set of low-level controllers.
The objective of CILQR is to find an optimal control sequence, namely an optimal action sequence $a^*$ given an initial observation $o_0$, that minimizes a cost function: \begin{equation} \begin{aligned} &a^*, o^* = \arg \min_{a, o} \left\{ \phi(o_N) + \sum_{t=0}^{N-1}L(o_t, a_t) \right\}\\ \text{s.t.}\;\; &o_{t+1} = f(o_t, a_t), \;\; t = 0, 1, \dots, N-1\\ &g_t(o_t, a_t)<0, \;\; t = 0, 1, \dots, N-1\\ &g_N(o_N)<0 \end{aligned} \label{eq:opt-cilqr} \end{equation} where $N$ is the planning horizon, $L(\cdot)$ and $\phi(\cdot)$ are the cost functions, $f(\cdot)$ is the transition model, and the $g_t(\cdot)$'s are the safety and dynamics constraints. From a theoretical point of view, Chen \textit{et al.} proved (Theorem 1 in \cite{CILQR}) that for the problem in Equation~\ref{eq:opt-cilqr}, the output trajectories $\{ o_t^{(k)}, a_t^{(k)} \}$ at the $k$-th iteration converge to a local optimum as $ k\rightarrow \infty $ when using the CILQR algorithm. Compared to other non-learning-based methods, CILQR solves the optimal control problem with non-convex constraints and non-linear system dynamics much faster, with a guarantee of convergence; learning-based methods neither enforce constraints on the dynamics nor guarantee convergence. CILQR has been tested in on-road driving scenarios and shown to avoid obstacles successfully. However, its main drawback is that it tends to be very aggressive if its reference speed is too high. For example, when the ego agent is following slow traffic in an urban scenario, it always tries to pass the cars in front whenever it finds a gap. This maneuver style may cause serious problems, because a sudden move is highly likely to result in collisions in such dense traffic with many occlusions. This dangerous behavior stems mainly from the high reference speed that the controller attempts to track.
Since the objective function penalizes deviation from the reference trajectory, the controller is willing to sacrifice safety to bypass obstacles. Therefore, we apply the high-level RL policy to choose the most suitable reference and the most appropriate setting for the low-level controllers. We design a fixed finite set of candidate reference speeds for the high-level RL to choose from. The set includes 9 possible discrete speeds: $v_{\text{ref}}\in \{0, 2, 3, 4, 5, 6, 7, 8, 9\}$~(m/s). Each reference speed corresponds to a different CILQR controller with a different behavior pattern. For example, the one with reference speed $v_{\text{ref}} = 0$~m/s is the most conservative, because it yields to any obstacle in the environment, whereas the one with $v_{\text{ref}} = 9$~m/s is the most aggressive, since it tries its best to track the high reference speed at the expense of safety. The high-level RL aims to balance safety and passing time given the current observations, so as to make the ego agent capable of handling various complex scenarios.
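With the nine reference speeds as the discrete high-level action set, the Double DQN update used to train the policy takes the standard form below. This is a sketch: the networks are abstracted as plain callables, and all names are illustrative.

```python
def ddqn_target(r, s_next, done, q_online, q_target, gamma=0.99, n_actions=9):
    """Double DQN target value for one transition.

    The online network selects the best next action, while the target
    network evaluates it -- the decoupling that reduces overestimation
    bias compared to vanilla DQN.
    """
    if done:
        return r
    a_star = max(range(n_actions), key=lambda a: q_online(s_next, a))
    return r + gamma * q_target(s_next, a_star)
```

During training, the Q-network over the nine controller indices is regressed toward this target for each sampled high-level transition.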
\subsection{Baseline Methods} \begin{figure*}[thpb] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/tc_1_good.png} \caption{VA: 1-Entering} \label{fig:tc1} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/tc_2_good.png} \caption{VA: 2-Waiting} \label{fig:tc2} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/tc_3_good.png} \caption{VA: 3-Exiting} \label{fig:tc3} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/sr_1_good.png} \caption{SR: 1-Entering} \label{fig:sr1} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/sr_2_good.png} \caption{SR: 2-Waiting} \label{fig:sr2} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/sr_3_good.png} \caption{SR: 3-Following} \label{fig:sr3} \end{subfigure} \caption{The planned trajectories in both maps. The ego car is always red and the obstacles are always purple. The future trajectories for the next 10 high-level timesteps $h$ are plotted as a line with markers. A green line means H-CtRL chooses a low-level CILQR planner with a high reference speed, while a red line means H-CtRL chooses one with a low reference speed. The darker the red, the lower the speed.} \label{fig:path-results} \end{figure*} We compared the following policies in the experiments: \begin{itemize} \item \textbf{CILQR\#3}. The third low-level CILQR controller, with a reference speed of $3$~m/s. No high-level RL policy is used. \item \textbf{CILQR\#9}. The ninth low-level CILQR controller, with a reference speed of $9$~m/s. No high-level RL policy is used. \item \textbf{Random}.
The hierarchical framework in which the high-level RL is replaced by a random sampler, choosing low-level controllers at random. \item \textbf{H-CtRL}. Our proposed hierarchical behavior planning framework. \end{itemize} It would be tedious to compare and list results for all low-level CILQR controllers, so we only choose two representative ones here. CILQR\#3 plans trajectories that track a low reference speed, which results in a rather conservative driving strategy, whereas CILQR\#9 is just the opposite, resulting in an aggressive driving strategy. \subsection{Implementation Details} We trained and tested the RL in our proposed hierarchical framework in the two maps separately. At the beginning of each episode, we initialized the position of the ego vehicle at the edge of the road map, perturbed by Gaussian noise. The initial timestep at which obstacles were loaded from the dataset was randomly sampled between original timesteps $600$ and $900$ in VA and between $100$ and $400$ in SR. Each timestep in the simulator, as well as in the CILQR controllers, lasts $0.1$~s, whereas each timestep of the high-level RL lasts $1.0$~s with $p=10$. The goal of each episode was also chosen randomly: turn left, turn right, or go straight. The observation fed into the high-level RL was the observation given by the simulator plus the goal of the episode. The RL policy was represented by a neural network with two fully connected hidden layers. According to the high-level decision, the same observation was then fed into the low-level controller to plan executable actions in the simulator for the ego agent. \begin{center} \begin{table}[] \centering \caption{Average episode return, collision rate, and completion rate within a 50~s time limit, based on 100 episodes in each map. } \begin{tabular}{c c c c} \Xhline{4\arrayrulewidth}\\[-1em] Method & Aver. Epi.
Return & Collision Rate & Completion Rate \\ \hline\\[-1em] \multicolumn{4}{l}{\textbf{Intersection (VA)}}\\[2pt] CILQR\#3 & 80.06 & 0.07 & 0.24 \\ CILQR\#9 & 37.84 & 0.59 & 0.09 \\ Random & 51.23 & 0.36 & 0.17 \\ H-CtRL & 86.59 & 0.10 & 0.85 \\ \hline\\[-1em] \multicolumn{4}{l}{\textbf{Roundabout (SR)}}\\[2pt] CILQR\#3 & 73.52 & 0.05 & 0.17 \\ CILQR\#9 & 48.64 & 0.47 & 0.22 \\ Random & 55.13 & 0.32 & 0.28 \\ H-CtRL & 90.29 & 0.08 & 0.91 \\ \Xhline{4\arrayrulewidth} \end{tabular} \label{tab:results} \end{table} \end{center} \section{RESULTS} \subsection{Statistics} We ran each policy for 100 episodes in VA and SR separately, and compared the average episode return, the collision rate, and the completion rate within a 50~s time limit. As shown in Table \ref{tab:results}, the proposed H-CtRL performs best in terms of average episode return: it is much higher than CILQR\#9 and Random, with only CILQR\#3 close to it. Considering the experimental setting, where initial states of the environment are sampled randomly over a wide range, we can safely conclude that our proposed H-CtRL handles various situations better than each individual low-level planner. It also outperforms the random high-level switching policy, so the high-level RL successfully learned useful skills for navigating complex urban traffic. In terms of collision rate, CILQR\#3 performs best, followed by H-CtRL. CILQR\#9 is the most aggressive policy, as implied by its high reference speed. Within the 50-second time limit per episode, only the proposed H-CtRL completes the task reliably in both maps. CILQR\#3 is too conservative and drives at a low speed, making it almost impossible to finish the task in time. CILQR\#9 is just the opposite: it drives too fast to recognize danger and avoid collisions in time.
The random switching policy fails to reach a high completion rate because of a mixture of the aforementioned reasons. Looking at the collision rate and the completion rate together, we can conclude that H-CtRL strikes a good balance between safety and operation time. \subsection{Visualization} To show the details of what happened in both VA and SR, we visualize one representative episode in each map. Figures \ref{fig:tc1} - \ref{fig:tc3} show an episode in VA, where the ego car was trying to make a left turn while the traffic light was green. According to traffic rules, it must yield to oncoming cars from across the road. As shown in Figure \ref{fig:tc1}, the high-level RL first chose to drive at a high speed of approximately $6$~m/s (green markers) into the entrance of the intersection. It then decided to slow down in front of the intersection (orange markers). As cars continued to come from the opposite direction, the high-level RL planned a perfect stop to stay put (Figure \ref{fig:tc2}). After all cars had crossed the intersection, it decided to accelerate and exit the intersection as fast as possible (Figure \ref{fig:tc3}). An episode in SR is visualized in Figures \ref{fig:sr1} - \ref{fig:sr3}. In this episode, the ego agent was trying to go straight across the roundabout. When approaching the roundabout, the high-level RL chose to drive at a high speed as usual (green markers in Figure \ref{fig:sr1}). When about to enter the roundabout, it decided to slow down (Figure \ref{fig:sr2}) so that it could observe the surroundings and avoid collisions if necessary. After exiting the roundabout (Figure \ref{fig:sr3}), it first accelerated toward the goal position.
However, several obstacles were driving slowly in front, so it chose a low-level controller with a very low reference speed ($v_{\text{ref}}=2$~m/s). The low-level CILQR controller then helped the ego agent slow down and avoid collisions. We visualize the velocity and the task progress of H-CtRL for the SR episode described above, and compare them with those of CILQR\#3 and CILQR\#9 in Figure \ref{fig:lines}. Neither CILQR\#3 nor CILQR\#9 managed to finish the task in time: the ego agent with CILQR\#9 collided with an obstacle, whereas the one with CILQR\#3 ran out of time before reaching the goal. Only the agent with H-CtRL successfully finished this episode. We can therefore conclude that our proposed H-CtRL is able to handle these complex urban traffic conditions safely and efficiently. \begin{figure}[thbp] \centering \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\linewidth]{figures/speed-pos.png} \caption{The velocity vs. the position along the trajectory of the ego agent.} \label{fig:velocity} \end{subfigure} \hfill \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\linewidth]{figures/progress.png} \caption{The task progress vs. the timestep of the ego agent.} \label{fig:progress} \end{subfigure} \caption{A visualization of the velocity and the task progress of the ego agent driving in SR. The proposed H-CtRL is compared to CILQR\#3 (with $v_\text{ref} = 3$~m/s) and CILQR\#9 (with $v_\text{ref} = 9$~m/s). The black cross means that the episode was terminated early without reaching the goal position. } \label{fig:lines} \end{figure} \subsection{Failure Cases} When training and testing our proposed method, we discovered that the ego agent braked very hard whenever it observed obstacles in front or the front vehicle slowed down.
We believe this is caused by the low-level CILQR controllers, which only consider the safety constraints without optimizing for comfort. Generally speaking, our proposed hierarchical framework can accommodate many low-level planners. Future work will therefore add more low-level planners into the framework and train them all together, to obtain a more powerful and generalizable behavior planner. \section{CONCLUSION} In this paper, we proposed a general behavior planner for autonomous vehicles based on reinforcement learning and safe motion planners. By combining the power of low-level safe controllers with a high-level reinforcement learning coordinator, various complex urban traffic conditions can be handled within this general framework. We built a simulator that reproduces scenarios from a real-world traffic dataset, and the proposed algorithm was trained and tested on such real traffic data. Compared to the baseline methods, the proposed framework achieved both a high completion rate and a low collision rate, verifying its ability to handle various traffic scenarios with satisfactory performance in both safety and efficiency. \addtolength{\textheight}{-9.5cm} \bibliographystyle{ieeetr}
\section{\label{sec:intro}Introduction} Numerous geophysical and astrophysical flows present a two-layer configuration, with a turbulent convective layer standing above or below a stably stratified one. Examples include planetary atmospheres, stellar interiors, and possibly the outermost layer of the Earth's liquid core \cite{hirose_composition_2013}. The dynamics of coupled stratified and convective layers are quite complex. Due to the convective motions, internal gravity waves (IGWs) are generated at the interface between the two layers and propagate in the stratified one. IGWs transport energy and momentum \cite{rogers_internal_2012,bretherton_momentum_1969} from where they are generated to where they are damped. Thanks to their transport properties and non-linear interactions, IGWs are able to generate and sustain large-scale horizontal flows \cite{plumb_interaction_1977,rogers_internal_2012}. Examples of such large-scale flows driven by IGWs are the oscillations of equatorial zonal winds observed in some planets' atmospheres \cite{fouchet_equatorial_2008,leovy_quasiquadrennial_1991}, including the Earth, where the phenomenon is called the Quasi-Biennial Oscillation (QBO) \cite{baldwin_quasi-biennial_2001}.\\ The generation of IGWs by turbulent dynamics has been studied in various experiments. Generation by a single buoyant plume was experimentally studied by Ansong \& Sutherland \cite{ansong_internal_2010}, who characterised the penetration of the plume within the stratified layer and the spectral characteristics of the generated IGWs. They found that the peak frequency of the generated IGWs lies in a range close to $0.7N$, where $N$ is the Brunt-V\"ais\"al\"a (or buoyancy) frequency, and that the radial wavenumber is set by the plume cap and not by the width of the plume at the interface. Deardorff et al. \cite{deardorff_laboratory_1969} and later Michaelian et al.
\cite{michaelian_coupling_2002} studied the effect of penetrative convection in a stratified layer in a transient, Rayleigh-B\'enard type experiment. Stratification was initially set up thermally from the top to the bottom of the tank. Then, the fluid was suddenly warmed up at the bottom, triggering Rayleigh-B\'enard convection. IGWs were measured transiently \cite{michaelian_coupling_2002} while the stratified (resp. convective) layer was decreasing (resp. increasing) in size. Eventually, there was no more stratified layer to sustain the propagation of IGWs. Townsend \cite{townsend_natural_1964} introduced an original set-up to study the quasi-steady generation of IGWs by Rayleigh-B\'enard convection. Using the fact that water's maximum density is around $4^{\circ}$C, a two-layer system is spontaneously generated by cooling the bottom of a tank at $0^{\circ}$C and heating its top above $4^{\circ}$C. The density gradient is unstable at temperatures below 4 $^{\circ}$C and stable at temperatures above. This creates a self-organising system, with a turbulent convective layer adjacent to a stratified layer. With dye visualisation and temperature measurements, he observed IGWs propagating close to the interface between the two layers. The $4^\circ$C convection was also studied experimentally by Le Gal \cite{legal_penetrative_1997} in a laminar flow, at low Rayleigh number. Convection displayed a hexagonal pattern, and viscous entrainment of the fluid above the convective cells was observed. Perrard et al. \cite{perrard_experimental_2013} and Le Bars et al. \cite{bars_experimental_2015} re-investigated this setup in a quasi two-dimensional tank using Particle Image Velocimetry (PIV) and temperature measurements to obtain detailed data on the IGWs generated by the convection. They observed a wide spectrum of waves generated at different frequencies.
Favoured frequencies were related to the differential attenuation length of the waves depending on frequency, in good agreement with linear theory. No large-scale flow in the stratified layer was observed in this 2D geometry. Numerical simulations of the same configuration were performed by Lecoanet et al. \cite{lecoanet_numerical_2015}. They demonstrated that IGWs are mainly generated by the Reynolds stresses in the bulk of the convection below the interface, rather than by the displacement of the interface between the two layers. Numerical studies by Couston et al. \cite{couston_dynamics_2017,couston_order_2018,couston_energy_2018} extended these results by considering a generic non-linear equation of state (piecewise linear around an inversion temperature, with adjustable slopes), both in 2D and 3D horizontally periodic geometries. Various flow regimes and the energy transport by IGWs were quantitatively studied. Interestingly, long simulations -- accessible in 2D studies only -- showed, for low Prandtl numbers ($Pr < 1$), a large-scale horizontal flow with reversals in the stratified layer, similar to the QBO phenomenon introduced above \cite{couston_order_2018}. \\ Several experiments have investigated the generation and reversal of a large-scale horizontal mean flow in a stratified domain, driven by IGWs. The well-known experiment designed by Plumb and McEwan \cite{plumb_instability_1978}, later reproduced and extended by Semin et al. \cite{semin_generation_2016,semin_nonlinear_2018}, is capable of driving a QBO from the mechanical forcing of a standing wave pattern (\textit{i.e.} two waves with the same frequency and opposite phase speed) in a salt-water stratification. With this system, Plumb and McEwan managed to observe a few oscillations of the driven large-scale flow before the stratification was destroyed by mixing.
The experiment gave results in good agreement with theory \cite{richard_s._lindzen_theory_1968,lindzen_updated_1972,plumb_interaction_1977}, notably with reversals starting away from the forcing and propagating towards it. Semin et al. \cite{semin_generation_2016,semin_nonlinear_2018} improved the system by constantly injecting fluid to rebuild the stratification, while removing mixed fluid close to the forcing. This method allowed the experiment to run longer and enabled a study of the nature of the bifurcation in the Plumb model, which can be either supercritical or subcritical depending on the dominant dissipative process. In those experimental realisations of the QBO mechanism, the wave forcing remains monochromatic, as opposed to the natural mechanism where it is due to chaotic tropical storms \cite{baldwin_quasi-biennial_2001}. The forcing is driven by interface displacements, in contrast with the observations of \cite{lecoanet_numerical_2015}. Besides, only the stratified layer is modelled. It thus remains a challenge to observe experimentally a large-scale, reversing flow driven by a turbulent source and a wide range of naturally excited IGWs. \\ In the present study, we extend the work of Townsend \cite{townsend_natural_1964,perrard_experimental_2013,bars_experimental_2015} in a cylindrical, 3D geometry reminiscent of Plumb and McEwan's set-up \cite{plumb_instability_1978,semin_generation_2016,semin_nonlinear_2018}. Our purpose is threefold: to characterise the generation of IGWs in such a self-organising two-layer system, to quantify the coupling between the layers, and to investigate the possible generation of large-scale horizontal flows. Our experiments are complemented by direct numerical simulations of the same configuration. The experimental setup and numerical model are presented in section \ref{sec:methods}, results are analysed in section \ref{sec:results}, and conclusions and future works are discussed in section \ref{sec:discussion}.
\section{\label{sec:methods}Methods} \subsection{Experimental set-up} The set-up consists of a cubic tank whose lateral boundaries are made of 2 cm thick acrylic walls. The bottom boundary is a chrome-plated copper plate in which refrigerated fluid is forced to circulate. The top boundary is a commercial, transparent electric heater. The tank inner dimensions are $32 \times32$ cm for the horizontal section and $H=20$ cm in height. Preliminary experiments were conducted in this cubic geometry. Eventually, a cylinder of outer diameter $D = 29$ cm and thickness $e = 0.4 $ cm was added inside the cubic tank, to reproduce the axisymmetric geometry of \cite{plumb_instability_1978,semin_generation_2016,semin_nonlinear_2018}, which seems prone to the development of large-scale flows. We are interested in the flow within the cylinder: the fluid in the gap between the cylinder and the cubic tank follows similar dynamics and thus imposes on the working fluid a (close to) no-flux lateral boundary condition. The temperature of the bottom boundary is controlled by water circulating in the copper plate. Water is cooled down by a thermal bath with a fixed temperature set at $-1.25^{\circ}$C. Due to thermal losses in the pipes feeding the copper plate, the bottom tank temperature is $0.2 \pm 0.05^\circ$C. Plate thickness and circulation circuit were optimised so as to ensure a uniform temperature over the whole plate. At the top boundary, the heater is set at a temperature of $35^{\circ}$C. Its temperature control is custom made: a PT100 probe measures the heater temperature in real time, driving through a feed-back loop the input power injected in the heater. This is a very simple and inexpensive system to impose a temperature while keeping a transparent top boundary, allowing visualisation and velocity measurements by PIV. Nonetheless, it is necessary to point out that the temperature over the heater area is not perfectly homogeneous.
Temperature is maximal at the centre, where $T \sim 38^{\circ}$C, while the edges are indeed at the requested $T = 35\pm 0.1^\circ$C. This inhomogeneity of the top temperature by $\delta T = 3^{\circ}$C induces slow convective motions below the heater, in a $\sim 2$ cm high layer. By performing an experiment where the whole fluid is stably stratified with an overall temperature gradient similar to the one in the stratified layer studied here, but above the density reversal at $4^{\circ}$C (i.e. bottom boundary set at 10$^{\circ}$C and top boundary at 70$^{\circ}$C), we have checked that those top convective motions have no significant impact on the dynamics of the two-layer system. It is also important to note that despite the thick acrylic wall and the intermediate fluid layer between the cylinder and the tank, the working region is not fully thermally insulated on the sides. Nevertheless, our fully stratified test experiment showed no fluid motion driven by these lateral losses. \\ The equation of state of water is non-linear, with a maximum density close to 4$^{\circ}$C (International Equation of State of Seawater, 1980): \begin{equation} \begin{split} \rho(T)= 999.842594+6.793952\times10^{-2}\,T-9.095290\times10^{-3}\,T^2+1.001685\times10^{-4}\,T^3 \\ -1.120083\times10^{-6}\,T^4+6.536332\times10^{-9}\,T^5.\\ \end{split} \label{eq:eos_eau} \end{equation} Thus, due to the imposed temperatures at the top and bottom boundaries, the bottom part of the tank, between 0$^{\circ}$C and 4$^{\circ}$C, is convectively unstable (see figure \ref{fig:setup}). Cold plumes detach from the bottom plate and rise in the tank due to buoyancy. Reciprocally, ``hot'' fluid sinks from the 4$^{\circ}$C interface due to gravity. While convective motion takes place in the lower layer, the upper part of the tank, between 4$^{\circ}$C and 35$^{\circ}$C, is stably stratified, with an assumed linear temperature profile at equilibrium.
The temperature is indeed linear for an ideal case without thermal losses, assuming that the stratified layer has a bottom boundary at fixed temperature $4^\circ$C and top boundary at $35^\circ$C (\textit{i.e.} constant diffusive flux through the whole layer). However, the density profile is not linear, due to the non-linear equation of state of water. Stratification is characterised by the Brunt-V\"ais\"al\"a frequency $N^* = \frac{1}{2 \pi} \sqrt{-\frac{g}{\rho_0} \frac{\partial \rho}{\partial z}}$. Because of the non-linear equation of state, $N^*$ is not constant with depth, as shown in figure \ref{fig:setup}. For simplicity, we also use below the global buoyancy frequency defined as $N = \frac{1}{2 \pi} \sqrt{-\frac{g}{\rho_0} \frac{\Delta \rho}{\Delta z}}$, where ${\Delta \rho}$ is the global density contrast within the stratified layer of depth ${\Delta z}$. \\ \begin{figure}[h] \includegraphics[scale=.3]{schema_cuve_profilN_NB_v3.png} \caption{2D sketch of the tank. A cylinder (light grey shaded area) is placed in a larger cubic tank. The system is cooled down at the bottom at 0$^{\circ}$C and heated up at the top at 35$^{\circ}$C. The bottom half is convective with an almost constant density, apart from the bottom boundary layer. The upper half is stably stratified and waves are generated due to the fluid motions in the convective layer. The graph on the right shows the theoretical profile for the buoyancy frequency $N^*$. It is computed considering a linear temperature profile and the equation of state of water (\ref{eq:eos_eau}). The dashed line is the global buoyancy frequency $N$ calculated on the stratified layer. The various length scales are the cylinder diameter $D$, the vertical extent of the tank $H$ and the vertical extent of the convective layer $h$. $\delta$ is the minimal width between the outer square tank and the inner cylinder. 
The $x$, $y$ and $z$-velocity components are noted $u$, $v$ and $w$ respectively.} \label{fig:setup} \end{figure} Before starting the experiment, the bath and the heater are allowed to reach their assigned temperatures. Then, the upper half of the tank is filled with water stratified in temperature from 4$^{\circ}$C to 35$^{\circ}$C, using the double bucket technique \cite{oster_density_1965}. The bottom half is filled with water at a temperature close to 4$^{\circ}$C. This filling process avoids the tremendously long transient that would otherwise be needed to reach steady state by thermal diffusion. Typically, we fill the tank at the end of a day and start the measurements the next day, in order to let the system reach its equilibrium state overnight. Each experiment then lasts four days, with no change in the location of the interface. Note that this steady interface position is the result of the heat budget equilibrium between the convective heat flux extracted from the bottom plate, the diffusive heat flux through the stratified layer, and the lateral heat losses. To perform PIV measurements, particles are chosen as small as possible and with a density as close as possible to that of water, in order to avoid their sedimentation in the stratified layer over the long duration of the experiment. We use fluorescent orange polyethylene micro-spheres whose size ranges from $10 \, \mathrm{\mu m}$ to $20 \, \mathrm{\mu m}$ and whose density is $1.00 \pm 0.01$~g/cc. The fluorescent property allows us, with a high-pass filter on the camera, to remove any laser reflection, significantly enhancing the image quality. The tank is illuminated with a green laser ($532$~nm) whose power is set at $1$~W. We perform side-view PIV to measure the spectral characteristics of the convection and IGWs, and top-view PIV to observe the large-scale flow and its fluctuations over time. The camera used for the side-view PIV is a HiSense Zyla, $2560 \times 2160$ pixels recorded on 12 bits.
Acquisition rate is $2$~Hz with $100$~ms exposure time. Typical acquisition time for spectral characteristics is $50$~min. For the top view, we use a Point Grey camera, $1920\times1080$ pixels on 8~bits. Exposure time is $1$~s, acquisition rate $0.1$~Hz and acquisition time $8$~hours. Captured movies are processed either by the DantecDynamics software DynamicStudio for the side view or by DPIVSoft \cite{meunier2003analysis} for the top view. Both are resolved into $32\times32$ pixel boxes with 50\% overlapping. Side-view PIV is performed in the middle of the tank at $y=16$~cm, in a laser sheet crossing the cylinder along its diameter. This holds for all figures shown in the $(x,z)$ plane and is thus not repeated in the results section. The vertical fields (see an example in figure \ref{fig:LSCexpe}) do not show the whole $(x,z)$ plane (where the origin $(0,0)$ is located in the bottom left corner of the cubic tank): we indeed chose to zoom in, in order to have the best resolution for the very weak motions in the stratified layer. The interface between the layers is located at $11 \mathrm{~cm} \leqslant z \leqslant 12$~cm. The typical Rayleigh number for the convection based on this depth is $Ra = 7 \times 10^6$, and the global Brunt-V\"ais\"al\"a frequency is $N = 1.35 \times 10^{-1}$~Hz. \begin{figure*}[h] \centering \includegraphics[scale = .45]{champ_int_730_colored.pdf} \hspace{.1cm} \includegraphics[scale = .45]{LSC_0510_1800_colored.pdf} \caption{(Left) Instantaneous velocity field. An ascending plume is visible at $x=150$ mm, transported by the large-scale circulation. (Right) Large-scale circulation in the convective layer, obtained by time-averaging velocities over a 50-minute signal. The large-scale circulation is a counter-clockwise cell. No inversion of the circulation was observed in our experiment. The velocities below $z=10$ mm are noisy and thus not shown here.
Maximum instantaneous velocities are $2.5$ times bigger than the maximum averaged velocities. The left edge of the cylinder is located at $x=19$~mm and the centre of the cylinder is at $x=160$~mm. Approximately $40$~mm of the right side of the cylinder is not captured by our PIV measurements.} \label{fig:LSCexpe} \end{figure*} To observe the large-scale flow, horizontal views at different heights are performed. A linear translation axis platform from Igus, driven by homemade software, is used to automate the process. With two mirrors, one fixed at the exit of the laser head and the other fixed on the translating platform, it is possible to sweep along one direction with the laser sheet. We typically make measurements for 15~s in each of 11 successive planes separated by 0.5~cm. The full scan duration is about 3~min, and is repeated for at least 8~hours. \\ The cylindrical geometry described here differs from the cylindrical shell geometry in \cite{plumb_instability_1978,semin_generation_2016, semin_nonlinear_2018}. We first tried to work in that annular geometry by adding a second, smaller cylinder in our cubic tank. Three different gap sizes were tested, but none showed any interesting dynamics. Indeed, the convection was too confined in the shell to provide an efficient chaotic excitation, and IGWs did not propagate well within the stratification, being attenuated quite quickly by wall friction. During these tests, we observed the most interesting dynamics within the innermost cylinder, so we decided to use that geometry. The point of the cylindrical shell geometry is that it is a close analogue to the equatorial band of the stratosphere where the QBO takes place. By working in a cylinder, the geometric analogy is lost but the physics of the problem remains the same, still a priori allowing for large-scale, reversing, axisymmetric horizontal flows.
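As a consistency check, both the density inversion near 4$^{\circ}$C and the quoted global buoyancy frequency can be recovered directly from the equation of state (\ref{eq:eos_eau}). The layer depth used below is our estimate from the stated interface position ($z \approx 11$-$12$ cm) and tank height (20 cm), not a value given explicitly in the text.

```python
import math


def rho(T):
    """Water density [kg/m^3] from the polynomial equation of state (IES 80)."""
    return (999.842594 + 6.793952e-2 * T - 9.095290e-3 * T**2
            + 1.001685e-4 * T**3 - 1.120083e-6 * T**4 + 6.536332e-9 * T**5)


# A fine scan over 0-10 degC recovers the well-known density maximum near 3.98 degC.
T_max = max((i / 1000 for i in range(10001)), key=rho)

# Global buoyancy frequency N = (1/2pi) sqrt(g/rho0 * drho/dz) over the
# stratified layer (4 to 35 degC). dz ~ 8.5 cm is our estimate of the
# stratified-layer depth from the interface position and tank height.
g, dz = 9.81, 0.085
N = math.sqrt(g / rho(4.0) * (rho(4.0) - rho(35.0)) / dz) / (2 * math.pi)
```

This yields $N \approx 0.13$~Hz, consistent with the quoted $N = 1.35 \times 10^{-1}$~Hz.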
\clearpage \subsection{Numerical model}\label{sec:methods_num} To complement the experiments, we also performed Direct Numerical Simulations (DNS) of the same configuration. We solve the Navier-Stokes equations using a non-Oberbeck-Boussinesq model. The density variations are considered small compared to the reference density, $\frac{\Delta \rho }{\rho_o} \ll 1$. Therefore, density fluctuations only appear in the buoyancy force. However, temperature variations affect the value of the thermal expansion coefficient, to account for the non-linear equation of state of water. Variations of the thermal diffusivity $\kappa$ and kinematic viscosity $\nu$ are neglected. Governing equations are given in dimensionless form by: \begin{align} \label{eq:momentum} \frac{\partial\bm{u}}{\partial t}+\bm{u}\cdot\nabla\bm{u} = & -\nabla P+Pr\nabla^2\bm{u}-Pr Ra \ \theta^2 \bm{e}_z \\ \label{eq:heat} \frac{\partial \theta}{\partial t}+\bm{u}\cdot\nabla\theta = & \ \nabla^2 \theta \\ \label{eq:mass} \nabla\cdot\bm{u} = & \ 0 \end{align} where we used the depth $H$ of the container as a unit of length, the thermal diffusive timescale $H^2/\kappa$ as a unit of time and the difference between the bottom and the inversion temperature $T_0 - T_i$ as a temperature scale. These equations are characterised by the Prandtl number $Pr=\nu/\kappa$, the global Rayleigh number $Ra=\alpha g (T_0-T_i)^2 H^3/(\nu\kappa)$ and the imposed dimensionless top temperature $\theta_{top}=(T_{top}-T_i)/(T_0-T_i)$. Note that the quadratic temperature term in the momentum equation is a direct consequence of the nonlinear equation of state of water given by equation~\eqref{eq:eos_eau}, which is approximated in our model by the quadratic law $\rho(T) \approx \rho_0 (1-\alpha(T-T_i)^2)$. The coefficient $\alpha$ is not the usual thermal expansion coefficient: it has units of $(\degree \mathrm{C})^{-2}$ and is given by $\alpha\approx8.1\times10^{-6}(\degree \mathrm{C})^{-2}$ (see also \cite{lecoanet_numerical_2015}).
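As a consistency check, the dimensionless control parameters can be recovered from standard water properties. The sketch below assumes values for $\nu$ and $\kappa$ (only $\alpha$ is given in the text) and uses the experimental temperatures:

```python
# Assumed water properties near 4 degC (nu and kappa are not given in the text)
nu = 1.0e-6        # kinematic viscosity [m^2/s]
kappa = 1.5e-7     # thermal diffusivity [m^2/s]
alpha = 8.1e-6     # quadratic expansion coefficient [1/degC^2] (from the text)
g = 9.81           # gravitational acceleration [m/s^2]
H = 0.20           # domain depth [m]
T0, Ti, Ttop = 0.0, 4.0, 35.0   # bottom, inversion, top temperatures [degC]

Pr = nu / kappa                                       # Prandtl number
Ra = alpha * g * (T0 - Ti)**2 * H**3 / (nu * kappa)   # global Rayleigh number
theta_top = (Ttop - Ti) / (T0 - Ti)                   # dimensionless top temperature

print(f"Pr = {Pr:.1f}, Ra = {Ra:.1e}, theta_top = {theta_top:.2f}")
```

With these assumed properties one recovers $Pr \approx 7$, $Ra \approx 7 \times 10^7$ and $\theta_{top} = -7.75$, i.e. the experimental values quoted in the next paragraph.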
We consider a cylindrical fluid cavity of diameter $D=3H/2$ as in the experiments. Both horizontal plates are assumed to be no-slip and with fixed temperature. The side wall is assumed to be no-slip and perfectly insulating. This is of course not the case in the experiment, for which lateral heat losses are inevitable and the top temperature is not exactly constant, but the objective is to check whether the conclusions drawn from the experimental results are robust and do not depend on these effects. Since the experiment runs with water, we use $Pr=7$. The Rayleigh number of the experiment is $Ra=7 \times 10^7$ while its dimensionless top temperature is $\theta_{top}=-7.75$. If we were to run the simulation with these parameters, the interface would be located very close to the top boundary. This is not the case in the experiment because of the lateral heat losses, which tend to reduce the effective Rayleigh number. For that reason, and instead of taking into account these losses as in \cite{lecoanet_numerical_2015}, we kept the insulating lateral boundaries and used the slightly adjusted parameters $Ra=10^7$ and $\theta_{top} = -11$ instead, which leads to an interface located approximately at $z\approx120$~mm, as in the experiment. The Rayleigh number could not be lowered below $10^7$ in order to keep the convective flow turbulent enough; we thus had to increase the top temperature to keep the interface located at $z\approx120$~mm. We perform DNS of equations~\eqref{eq:momentum}-\eqref{eq:mass} using the spectral element solver Nek5000 \citep{Nek5000}. The global geometry is decomposed into hexahedral elements, with vertical refinement close to the horizontal boundaries and around the mid-plane where the inversion isotherm is expected to be located. Velocity, buoyancy and pressure variables are represented as tensor product Lagrange polynomials of order $N$ and $N-2$ based on Gauss or Gauss-Lobatto quadrature points.
The total number of grid points is given by $\mathcal{E}N^3$ where $\mathcal{E}$ is the number of elements. For all the results discussed in this paper, the number of elements is $\mathcal{E}=8960$ and we use a polynomial order of $N=11$. Numerical convergence was checked by increasing the polynomial order $N$. Time integration is performed with a third-order mixed implicit-explicit scheme. The simulations are initialised with a small amplitude buoyancy perturbation and a temperature profile varying linearly between the top and bottom boundaries. Simulations are run until a quasi-steady state is reached, after which data is accumulated to compute statistics. \section{\label{sec:results}Results} \subsection{\label{sec:results_exp}Experiments} \subsubsection{\label{sec:results_conv}Convection} The side view PIV is used to quantify horizontal and vertical velocities in the convection zone. Examples of vertical velocities measured at a given location are shown in figure \ref{fig:panache}, for both ascending cold and descending hot structures. Measurements are consistent with the numerical simulations \cite{lecoanet_numerical_2015,couston_dynamics_2017}, showing intense, localised, cold rising plumes and more diffusive descending plumes. Moreover, these structures are advected by a large-scale circulation encompassing the whole convective layer, as shown in figure \ref{fig:LSCexpe}. \begin{figure}[h] \centering {\label{fig:panacheup}\includegraphics[scale=0.43]{sonde_panache_up.pdf}} \hspace{.2pt} {\label{fig:panachedown}\includegraphics[scale=0.43]{sonde_panache_down.pdf}} \caption{Time evolution of the vertical velocity $w$ within: (Left) upward plumes at $x = 200$ mm, $z = 45$ mm, (Right) downward structures at $x= 100$ mm, $z = 95$ mm. } \label{fig:panache} \end{figure} Spectral analysis is performed to extract the power spectral density (PSD) from the velocity signals. Figure \ref{fig:spectreconv} shows the PSD of the convection vertical velocity $w$.
In both panels, the spectrum is flat with significant energy at low frequencies, and the energy then drops above some cut-off frequency. The left panel of figure \ref{fig:spectreconv} shows the vertical velocity PSD at a single point close to the top of the convective layer. A small peak can be seen close to $f = 10^{-2}$~Hz. This is the quasi-periodic signal of the plumes dropping from the top thermal boundary layer. The theoretical characteristic time of convection can be computed from \cite{gortler_convection_1966}: \begin{equation} \tau = \frac{h^2}{\pi \kappa}\left(\frac{\Delta T}{\Delta T_{local}} \times \frac{Ra_c}{Ra}\right)^{2/3} \end{equation} \begin{figure*}[b] \centering \includegraphics[scale = .43]{spectre_1_point_cd67_58_vf.pdf} \hspace{.1cm} \includegraphics[scale = .43]{spectre_W_50toend_10to57_vf.pdf} \caption{PSD of the vertical velocity fluctuations. (Left): PSD computed at a single point $x=100$~mm, $z=95$~mm (signal shown in figure \ref{fig:panache} right). The plume forcing frequency can be seen around $f = 10^{-2}$~Hz (red dashed line). (Right): PSD spatially averaged over the whole convective cell in the measured $(x,z)$ plane (all points above $z=10$~mm and below $z=110$~mm). } \label{fig:spectreconv} \end{figure*} with $h$ the height of the convective layer, $\kappa$ the thermal diffusivity, $\Delta T$ the temperature difference between the top and bottom of the convective domain, $\Delta T_{local}$ the temperature difference between the top and bottom of the thermal boundary layer, and $Ra_c$ the critical Rayleigh number. The critical Rayleigh number in the presence of free and solid interfaces and for fixed temperature is $Ra_c = 1100.65$. For our experiment, the characteristic time is $\tau = 96$~s, thus the characteristic frequency is $1/ \tau \sim 10^{-2}$~Hz, which is close to the observed peak in the left panel of figure \ref{fig:spectreconv}. At frequencies lower than this characteristic frequency, the spectrum is flat.
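As an illustration, the characteristic time above can be evaluated numerically. The convective layer height and the ratio $\Delta T/\Delta T_{local}$ are not given explicitly in the text, so the values below are assumptions; the result nevertheless reproduces the order of magnitude $\tau \sim 100$~s quoted above:

```python
import math

# Assumed values (only Ra and Ra_c appear explicitly in the text)
kappa = 1.5e-7      # thermal diffusivity of water [m^2/s]
h = 0.11            # convective layer height [m] (interface near z = 110 mm)
Ra = 7e6            # Rayleigh number of the convective layer
Ra_c = 1100.65      # critical Rayleigh number (rigid-free, fixed temperature)
ratio = 2.0         # assumed Delta T / Delta T_local

tau = h**2 / (math.pi * kappa) * (ratio * Ra_c / Ra) ** (2 / 3)
f_plume = 1.0 / tau
print(f"tau ~ {tau:.0f} s, plume frequency ~ {f_plume:.1e} Hz")
```

With these assumptions one finds $\tau$ of order $100$~s, hence a plume frequency of order $10^{-2}$~Hz, consistent with the observed spectral peak.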
This flat low-frequency spectrum is explained by the combined effect of the randomness of the plumes (see figure \ref{fig:panache}) and of the slow fluctuations of the large-scale circulation. The right panel of figure \ref{fig:spectreconv} shows the PSD of the vertical velocities, averaged over the whole convective cell in the $(x,z)$ plane. It shows a similar trend, with a lower cut-off frequency compared to the left panel spectrum. Indeed, the plume signal is more localised and less intense on average than the large-scale circulation signal, which hence dominates the space-averaged PSD. The probability density function (PDF) of the vertical velocities in the whole convective layer, $\mathrm{P}(w)$, is computed and shown in figure \ref{fig:pdf}. It is normalised such that $\int \mathrm{P}(w)\mathrm{d}w = 1$. The PDF describes important features of the convection: it is skewed towards positive values, with positive velocities reaching higher magnitudes than negative velocities, \textit{i.e.} the ascending plumes are stronger than the descending structures. However, the central part of the PDF is close to a Gaussian profile. The distribution obtained here is in good agreement with the probability density function computed in an idealised 2D numerical model by Couston et al. \cite{couston_dynamics_2017}. Note that this asymmetry is specific to our configuration, for which the usual upside-down symmetry of Boussinesq Rayleigh-B\'enard convection is broken. \begin{figure} \centering \includegraphics[scale = .6]{pdf_conv.pdf} \caption{Probability density function of the vertical velocities in the convective layer. All PIV points under $z=110$~mm have been used to compute the PDF.} \label{fig:pdf} \end{figure} \subsubsection{\label{sec:results:buffer}Buffer layer} An intermediate layer (called the buffer layer in the following) is present between the convective layer and the stratified layer. It was first reported in the quasi-2D 4$^{\circ}$C convection experiment by Perrard et al.
\cite{perrard_experimental_2013}. Their temperature measurements showed that the buffer layer corresponds to the area where the temperature goes from 4$^{\circ}$C to 8$^{\circ}$C. This actually corresponds to the overshooting region for rising cold plumes (note that this type of convection is called ``penetrative convection'' because of this effect). Indeed, since the equation of state of water is close to quadratic around $4^{\circ}$C, the densities at e.g. $0^{\circ}$C and $8^{\circ}$C are the same, and the 8$^{\circ}$C isotherm is the theoretical maximum height reachable by an ascending cold plume at $0^{\circ}$C in the absence of thermal diffusion. At the same time, the overall density profile between $4^{\circ}$C and $8^{\circ}$C is stable, as in the stratified layer above $8^{\circ}$C. The buffer layer is thus a very specific location supporting simultaneously convective overshooting motions and IGWs, as observed with PIV measurements \cite{perrard_experimental_2013}. \begin{figure}[h] \centering \includegraphics[scale = .6]{buffer_U-xavg.pdf} \caption{Time evolution of the horizontal average of the horizontal velocity, noted $u_X$. Red (resp. blue) regions correspond to a mean flow going towards the right (resp. left).} \label{fig:bufferlayer} \end{figure} To complete the description of this buffer layer, now using velocity measurements, we plot in figure \ref{fig:bufferlayer} the spatio-temporal diagram of the horizontal average of the horizontal velocity $u$. The diagram exhibits a strong shear around $z=120$ mm. Since the fluid is going in opposite directions above and below $z=120$ mm with a sharp interface, viscous entrainment by the convective layer is excluded. A specific thermal coupling might explain the observed dynamics, as sketched in figure \ref{fig:schema_couplage}.
Indeed, when a cold ascending plume from the convection zone reaches the interface and overshoots in the buffer region, its associated velocity perturbation dissipates more rapidly than its temperature perturbation. Due to gravity, the distorted part of the buffer region tends to sink back to its initial state (pictured by the green arrows), while the fluid above the buffer layer moves towards the impact point of the plume to take the place of the falling water (pictured by the red arrows). The buffer layer then needs some compensating fluid from the convective layer. This mechanism works when the velocity perturbation of the plume at the interface dissipates more rapidly than the thermal perturbation, hence for a Prandtl number $Pr \geq 1$. One might expect the shearing zone to decrease in size and amplitude when thermal diffusion increases (i.e. when the Prandtl number decreases), since the overshooting rising cold plume will then equilibrate thermally more rapidly during its ascent. This may explain why no interfacial shear was reported in the systematic numerical studies of \cite{couston_dynamics_2017,couston_order_2018}, where $Pr \leq 1$. Global temperature field measurements (using e.g. Temperature-Dependent Laser-Induced Fluorescence) are now required to confirm or invalidate the proposed model, but those are beyond the scope of the present paper. Note that, by extension, we call ``buffer layer'' in the following the region including the $T=4^\circ$C to $T=8^\circ$C overshooting region and the shear region. In the experiment, the shear region extends from $z=120$~mm to $z \approx135$~mm. \begin{figure}[htb] \centering \includegraphics[scale = .75]{schema_couplage_final2.pdf} \caption{Sketch of the thermal coupling between the convective and buffer layers. On the left, a cold plume moves upwards towards the interface between the two layers.
On the right, isotherms are deflected due to the impact.} \label{fig:schema_couplage} \end{figure} \clearpage \subsubsection{\label{sec:results_waves}Internal gravity waves} \begin{figure}[h] \centering \includegraphics[scale = .83]{visu_onde_colormap_col_02to03_axisequal_schema.pdf} \hspace{.1cm} \includegraphics[scale = .53]{panache_v1.pdf} \caption{Velocity fields showing propagating IGWs. (Top) Velocities in the $(x,z)$ plane. The signal is frequency-filtered to enhance the visualisation of oscillatory motions: only frequencies between $0.02$ and $0.03$ Hz are shown, propagating at an angle of roughly $75^\circ$ with the vertical. The angle of propagation is the angle between the constant phase lines and the vertical. (Bottom) Velocities in the $(x,y)$ plane located at $z \approx 125$~mm. In the $(x,y)$ plane, IGWs take the form of oscillating rings. Note that this figure is from a previous experiment without any internal cylinder and is therefore only displayed here as an illustration of the IGWs seen from above.} \label{fig:ondeN} \end{figure} The convective motions induce a Reynolds stress at and below the interface, which generates IGWs propagating in the stratified area \cite{lecoanet_numerical_2015}. An example is shown in figure \ref{fig:ondeN}. The vector field has been frequency-filtered in the band $[0.02-0.03]$ Hz to isolate a single propagating wave train. We measure an angle close to $\theta \simeq 75^{\circ}$ between constant phase lines and the vertical. This observation is in good agreement with the inviscid dispersion relation $\omega = \pm N \cos(\theta)$, which relates the frequency and the propagation angle of IGWs. Indeed, at $z=120$~mm, $N \sim 0.1$~Hz, thus $\theta = \cos^{-1}\left(\frac{\omega}{N}\right) = 78.5^\circ$. The motion within the stratified area is a superposition of many such IGWs oscillating at different frequencies.
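The propagation-angle estimate from the dispersion relation can be checked in a one-line computation, using the values given in the text ($N \sim 0.1$~Hz and the lower bound $f = 0.02$~Hz of the filtered band):

```python
import math

N = 0.1    # buoyancy frequency near z = 120 mm [Hz]
f = 0.02   # lower bound of the frequency-filtered band [Hz]

# Inviscid IGW dispersion relation: omega = N cos(theta)
theta = math.degrees(math.acos(f / N))
print(f"theta = {theta:.1f} degrees")  # ~78.5 degrees from the vertical
```

This reproduces the quoted $78.5^\circ$, close to the measured $\simeq 75^\circ$.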
To further investigate the wave signal, wave spectra are plotted in figure \ref{fig:spectro}, showing the power spectral density of oscillatory motions within the stratified layer at every height, averaged horizontally at each height. The grey line is the theoretically computed buoyancy frequency profile. Figure \ref{fig:spectro} shows that energy is present in a wide frequency band, from the lowest measured frequencies to the buoyancy frequency $N$. Low-frequency motions ($f < 4\times10^{-3}$~Hz) are very intense and propagate high into the stratified layer. Motions with frequencies ranging from $4 \times 10^{-3}$~Hz to $N$ are less intense, but still propagate into the stratified layer. Motions at frequencies higher than the buoyancy frequency $N$ are greatly attenuated within a centimetre, as IGWs with frequencies larger than $N$ are evanescent. The weak signal at low frequencies above $z=180$ mm comes from the convective motions due to the non-homogeneous heating at the top. These motions are confined to the very top of the experimental container. \begin{figure*}[t] \centering \includegraphics[scale=0.55]{propagation_nonorm_120_vff.pdf} \hspace{.1cm} \includegraphics[scale =.55]{plot_attenuation_norm_v2.pdf} \caption{(Left) Power spectral density of the absolute velocity $\sqrt{u^2+w^2}$ above the convective layer. The grey curve shows the theoretical buoyancy frequency profile, assuming a linear temperature profile from $4^\circ$C to $35^\circ$C. (Right) Two selected profiles (taken at the frequencies shown by dashed lines on the left graph) of the PSDs re-scaled by the PSD at the top of the convective layer, \textit{i.e.} $z=120$~mm (noted $\mathrm{PSD_o}$).
The PIV measurements are performed for 50 minutes and the results (using the pwelch Matlab function) are horizontally averaged at each height to obtain the averaged power spectral density.} \label{fig:spectro} \end{figure*} The right panel of figure \ref{fig:spectro} shows two vertical profiles of the PSD re-scaled by the PSD at the top of the convective layer ($z=120$~mm), taken at two different frequencies. The energy decrease is quite similar between $z=120$~mm and $z=140$~mm for both frequencies. However, for $z>140$~mm, the energy at the higher frequency decreases more slowly than the energy at the lower frequency. This dependence of the attenuation length on the frequency of the signal is a characteristic of IGWs. Indeed, the dispersion relation of IGWs relates the frequency and the wave vector direction. Moreover, energy propagates perpendicularly to the wave vector for IGWs (i.e. group and phase velocities are perpendicular). The closer the wave frequency $\omega$ is to $N$, the more horizontally the phase propagates, hence the more vertically the energy propagates. High-frequency waves are thus capable of transporting energy to high altitudes before being damped. On the contrary, waves with low frequency compared to $N$ propagate energy almost horizontally, and are thus attenuated before reaching high altitudes. At frequencies $f<4\times 10^{-3}$~Hz, a lot of energy is observed and the attenuation length does not depend on the frequency. There is no reason why IGWs should disappear below a certain frequency, but we would expect the attenuation length to keep decreasing with decreasing frequency. We thus deduce that IGWs at frequencies $f \leqslant 4\times 10^{-3}$~Hz are hidden in the energy spectrum by some very energetic, large-scale, slowly-varying flow, which we describe below. More than one order of magnitude separates the buoyancy frequency from the fastest large-scale flow fluctuations. The large-scale flow penetrates deep into the stratified layer.
It globally decreases in amplitude with height, but with some local increases at $z\sim 125$ mm (i.e. close to the interface between the convective and buffer layers) and $z \sim 145$ mm. The IGW signal can be seen between $f = 4 \times 10^{-3}$ Hz and the buoyancy frequency. A peak that reaches the top of the stratified layer is seen around $f= 1.2 \times 10^{-2}$ Hz, \textit{i.e.} the same frequency as the convective forcing discussed in section \ref{sec:results_conv}. It corresponds to the strong excitation provided by the cold rising and hot sinking turbulent plumes. However, the left panel of figure \ref{fig:spectro} also shows a sudden drop of the energy at frequencies $f>1.2 \times 10^{-2}$~Hz. Indeed, wave attenuation is strong at these frequencies, even though they are close to (but below) the buoyancy frequency $N$. Actually, energy dissipation also depends on the square of the wave vector norm. There is no reason for all excited waves to have the same wave vector norm; one could even expect that the fastest waves are excited by the fastest, hence smallest, convective patterns, and are thus also at the smallest scales: they then dissipate more rapidly. \subsubsection{\label{sec:results_lsf}Large-scale flow in the stratified layer} Figure \ref{fig:spectro} shows an important amount of energy at low frequencies, which has been interpreted as the signature of a large-scale, slowly-varying flow in the stratified layer. We now investigate the nature of these fluctuations to see if they relate to reversals similar to the QBO. Figure \ref{fig:meanflow} shows horizontal vector fields at the same depth at different times. In figure \ref{fig:meanflow}(a), the flow goes counter-clockwise inside the cylinder. Figure \ref{fig:meanflow}(b) shows that two counter-rotating vortices with smaller typical velocities have appeared. Figure \ref{fig:meanflow}(c) shows a mostly clockwise rotating flow, where one of the preceding eddies has nearly disappeared.
The large-scale flow thus evolves drastically over time. A criterion is computed to extract from those fields a typical mean velocity that accounts for the ``direction'' of the large-scale flow: as illustrated in figure \ref{fig:critere}, we compute a mean azimuthal velocity, taken along a ring centred on the cylinder axis. Other criteria to extract a representative value for the large-scale flow direction have also been tested, including: the mean vorticity over the cylinder area, the average of the azimuthal velocity over several rings with different radii, and the azimuthal velocity averaged over thick rings. They all give similar results for the large-scale flow measurement. \\ \begin{figure*}[t] \centering \includegraphics[scale = .8]{evolution_mf_vff.pdf} \caption{Horizontal velocity vector fields in the stratified layer at different times. The laser sheet is located at $z=150$~mm. The large-scale flow reverses from (a) to (c). The time between (a) and (c) is approximately half an hour. Maximum velocities are $0.1$ mm/s.} \label{fig:meanflow} \end{figure*} \begin{figure}[t] \centering \includegraphics[scale = .45]{critere_mf.pdf} \caption{Criterion used to extract a significant value for the large-scale flow and its direction: the azimuthal velocity is averaged over the ring shown in red.} \label{fig:critere} \end{figure} In order to investigate the vertical phase propagation of the reversals, and thus to compare the observed reversal dynamics with a QBO-like phenomenon, the setup has been equipped with a linear translating platform that allows us to sweep the horizontal laser sheet along the vertical. Horizontal velocities are measured in a horizontal plane, every $5$~mm, from the top of the convective layer ($z=110$~mm) to the middle of the stratified layer ($z=160$~mm).
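The ring-averaging criterion described above can be sketched as follows, assuming the PIV fields are available as 2D arrays on a regular grid (function and variable names are illustrative, not taken from the actual processing code):

```python
import numpy as np

def ring_azimuthal_mean(u, v, x, y, xc, yc, r, dr):
    """Mean azimuthal velocity over a thin ring of radius r (width dr)
    centred at (xc, yc); u, v are 2D horizontal velocity components
    sampled on the grid defined by the 1D coordinate arrays x, y."""
    X, Y = np.meshgrid(x, y)
    dx, dy = X - xc, Y - yc
    rad = np.hypot(dx, dy)
    mask = np.abs(rad - r) < dr / 2
    # projection on the unit azimuthal vector (-dy, dx)/rad
    u_theta = (-dy * u + dx * v) / np.maximum(rad, 1e-12)
    return u_theta[mask].mean()
```

A positive (resp. negative) ring-averaged azimuthal velocity then flags a counter-clockwise (resp. clockwise) large-scale flow; for a solid-body rotation of rate $\Omega$ the function returns $\Omega r$.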
Any trace of downward phase propagation of the reversals, as observed for the QBO on Earth \cite{baldwin_quasi-biennial_2001} and in the historical Plumb experiment \cite{plumb_interaction_1977, semin_nonlinear_2018}, would be significant evidence for a QBO-like phenomenon in the experiment. Indeed, the phase propagation of the reversals due to non-linear IGW interactions is theorised as follows: an IGW propagating in a stratified layer with a horizontal phase velocity in the same direction as the existing base flow propagates upward until reaching a critical height $z_c$, where it deposits all its energy locally. At $z=z_c$, the flow accelerates. Thus the critical height where the flow is intense enough to damp the wave is lowered. As time goes on, this critical height moves towards the location where the waves are emitted. Here, the waves are emitted at the bottom of the stratified layer. We would therefore expect a downward phase propagation if the reversals were driven by non-linear IGW interactions. \begin{figure*} \centering \includegraphics[scale=0.7]{QBOfinal_avecpentediffusionv2.pdf} \caption{Reversals of the large-scale flow. $z=115 - 120$ mm is the convective/buffer layer interface. Ascending plumes often perturb the buffer layer flow. The velocity was measured at 11 heights, marked by the ticks on the vertical axis of the figure, and interpolated in between. The slope of the black dotted lines represents the viscous coupling phase velocity.} \label{fig:QBO} \end{figure*} We performed long-time experiments (around 8 hours). Typical results extracted from the criterion described above are shown in figure \ref{fig:QBO}. Blue (resp. red) patches represent a large-scale flow going counter-clockwise (resp. clockwise).
The present measurements mainly confirm the interpretation of figure \ref{fig:spectro} for the lowest frequencies: the large-scale flow is horizontal, extends over the whole depth of the stratified layer with an amplitude attenuated with height, and exhibits slow reversals. Additionally, some intense events at $z = 110$ mm are directly related to penetrative plumes from the convection. Reversal times range from $400$~s to $1800$~s. However, no downward phase propagation of the reversals is observed. On the contrary, the reversals seem to occur along the whole stratification height at the same time, or even with a rapid upward phase propagation. Since the phase propagation is not towards the location where the waves are emitted, the reversals are unlikely to be driven by non-linear IGW interactions. However, as seen in section \ref{sec:results_waves}, IGWs propagate in the stratified layer and carry energy. Therefore, they do give energy to the large-scale flow through non-linear interactions, yet this process is not dominant in the reversal dynamics. \\ Since the reversals observed in figure \ref{fig:QBO} do not show a downward phase propagation, we look for mechanisms other than the QBO mechanism to explain them. Two other mechanisms can be investigated. The first one relies on a specific convective dynamics within the overall stratified layer, driven by horizontal gradients related to imperfect top and side boundary conditions. The second mechanism relies on viscous coupling with the underlying convective and buffer layers. Our fully stratified reference experiment described in section \ref{sec:methods} precludes the first scenario. Indeed, setting the bottom boundary at 10$^{\circ}$C and the top boundary at 70$^{\circ}$C, no motion is observed in the bottom $3/4$ of the tank.
In this test-experiment, the top $1/4$ of the tank is animated by convective motions due to the non-homogeneous top heat source (in the standard $4^\circ$C experiment, where $T_{top} = 35\degree$C, only $\sim 2$ cm are affected by the convection at the top of the tank, because the non-homogeneity of the heat source is less important at lower temperature, so the horizontal convection is weaker). However, these motions are inefficient at generating waves below and at driving any observable large-scale flow away from the top region. \begin{figure}[h] \includegraphics[scale = 0.55]{balayage_corr_vf.pdf} \caption{Velocity vector fields in the horizontal plane. Different columns represent different sweeping cycles $t^*$ (one sweeping cycle corresponds to the 11 steps needed to go from the lowest position $z=110$~mm to the highest position $z=160$~mm). Different rows represent different heights within the same sweeping cycle: the first row is the top of the convection $z=110$~mm, the second row is in the buffer layer $z=120$~mm and the third row is in the stratified layer $z=145$~mm. Convective plumes are easily noticeable in the first row fields.} \label{fig:champ_correle} \end{figure} \begin{figure}[h] \includegraphics[scale = 0.55]{balayage_NOcorr_vf.pdf} \caption{Same as figure \ref{fig:champ_correle} but for different sweeping cycles. Note that in this set of velocity fields, the buffer and stratified layers are less correlated than they are in figure \ref{fig:champ_correle}.} \label{fig:champ_pascorrele} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.6]{correProdscalaire_BW_vf.pdf} \caption{Velocity correlation between the three layers. Dashed black lines mark the $-0.5$ and $0.5$ values. Correlation coefficients are window-averaged over $10$~min to smooth the curves.} \label{fig:correlation} \end{figure} This leaves viscous entrainment as a possible driving mechanism.
The dotted lines in figure \ref{fig:QBO} show a theoretical viscous time, computed as the time for viscous entrainment to drive 20\% of the horizontal velocity at $z = 115$ mm up to $z=160$ mm, starting from a base state at rest. The 20\% value corresponds to the measured value of the large-scale flow at $z=160$ mm compared to the value at $z = 115$ mm (noted $u_b$). The theoretical viscous entrainment time is given by $t = \frac{z^2}{4\,\nu\, \left[\mathrm{erf}^{-1}\left(\frac{u}{u_b}-1\right)\right]^{2}}$. The reversals occur on a time scale comparable to this theoretical viscous time. The similarity between the slope of the dashed lines and the slope of the upward phases suggests that the reversals are driven viscously. However, the existence of the buffer layer and its associated intense shear, with horizontal velocities opposite to those of the convective layer below (see figure \ref{fig:bufferlayer}), precludes a direct viscous coupling between the convective and stratified layers. Besides, no reversal has been observed in the convective region. We thus propose a thermal coupling between the convective and buffer layers, as described in section \ref{sec:results:buffer}, associated with a viscous coupling between the buffer and stratified layers. To further quantify this possibility, figures \ref{fig:champ_correle} and \ref{fig:champ_pascorrele} show horizontal velocity fields at different heights and at different times. For each of the columns shown, the first row is the mean flow in the convective layer at depth $z=110$~mm, the second row is the mean flow in the buffer layer at depth $z=120$~mm and the third row is the mean flow in the stratified layer at depth $z=145$~mm. The correlation coefficients through time between (i) the convective and buffer layers, (ii) the buffer and stratified layers, and (iii) the convective and stratified layers have been computed.
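The viscous entrainment time can be evaluated with a short script. The distance $z = 45$~mm (from $z=115$~mm to $z=160$~mm) and the viscosity of water are assumptions consistent with the text; the inverse error function is implemented by bisection to keep the sketch dependency-free:

```python
import math

def erfinv(y, lo=-6.0, hi=6.0, tol=1e-12):
    """Inverse error function via bisection (erf is monotonically increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu = 1.0e-6     # kinematic viscosity of water [m^2/s] (assumed)
z = 0.045       # distance from z = 115 mm to z = 160 mm [m] (assumed)
ratio = 0.20    # u/u_b: 20% of the velocity at z = 115 mm

t_visc = z**2 / (4 * nu * erfinv(ratio - 1.0)**2)
print(f"viscous entrainment time ~ {t_visc:.0f} s")
```

With these assumptions the estimate is of order $600$~s, within the observed $400$--$1800$~s range of reversal times.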
This correlation coefficient is computed, for each position, as the scalar product of the horizontal velocity vectors at two different heights, rescaled by the product of the norms of the velocity vectors at the two heights, \textit{i.e.}: \begin{equation} R_{ij} = R(x_i,y_j) = \frac{u(x_i,y_j,z_1) \times u(x_i,y_j,z_2) + v(x_i,y_j,z_1) \times v(x_i,y_j,z_2)}{\left(u(x_i,y_j,z_1)^2+v(x_i,y_j,z_1)^2\right)^{1/2} \times \left(u(x_i,y_j,z_2)^2 + v(x_i,y_j,z_2)^2 \right)^{1/2}} \end{equation} This gives a correlation coefficient $R_{ij}$ for each PIV position in the horizontal plane. The global correlation coefficient $R$ is computed by spatially averaging the local correlation coefficients. Results are shown in figure \ref{fig:correlation}. The convective and buffer layers are negatively correlated: the correlation coefficient is most of the time close to $R=-0.5$. This can also be seen at all times in figures \ref{fig:champ_correle} and \ref{fig:champ_pascorrele}, where horizontal velocities in the convective and buffer layers have opposite directions. A diverging flow coming from an impinging plume in the convective zone corresponds to a converging flow in the buffer layer towards the impact zone, confirming the thermal coupling mechanism described in section \ref{sec:results:buffer}. This converging flow may lead either to a clockwise or anticlockwise azimuthal mean flow, depending on the details of the chaotic excitation from the convective plumes. The correlation coefficient between the convective and stratified layers can be positive or negative, and its absolute value remains below 0.2 most of the time. The correlation coefficient between the buffer and stratified layers shows large temporal variations. However, it always remains positive.
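The layer-to-layer correlation defined above can be sketched as follows, assuming the horizontal velocity components at the two heights are available as 2D arrays (names are illustrative):

```python
import numpy as np

def layer_correlation(u1, v1, u2, v2, eps=1e-12):
    """Pointwise normalised scalar product between horizontal velocity
    fields at two heights (R_ij above), then spatial average (global R)."""
    num = u1 * u2 + v1 * v2
    den = np.sqrt(u1**2 + v1**2) * np.sqrt(u2**2 + v2**2)
    R_ij = num / np.maximum(den, eps)   # eps guards against zero velocities
    return R_ij.mean()
```

By construction, $R = 1$ for identical fields and $R = -1$ for exactly opposite fields, which makes the $R \approx -0.5$ anti-correlation between the convective and buffer layers easy to interpret.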
At a given time, the large-scale flow in the stratified layer may switch between a regime strongly dominated by the buffer layer (see also figure \ref{fig:champ_correle}) and a second regime where the flow in the stratified layer is quite different from the flow in the buffer layer (see also figure \ref{fig:champ_pascorrele}). We thus conclude that the stratified layer is globally viscously driven by the buffer layer. However, the stratified layer exhibits additional complexities. These might be due to IGWs interacting with the large-scale flow. The results from Couston et al. \cite{couston_order_2018} show that the lower the Prandtl number, the more regular the QBO. In the experiment, the Prandtl number is close to $Pr = 7$: the typical associated QBO-type flow is irregular, with low amplitude. We thus propose that a large-scale flow driven by the non-linear interaction of IGWs superimposes on the viscously driven flow, but remains secondary. We do not know at this point how to disentangle these two potential contributions from the available data. \clearpage \subsection{\label{sec:results_num} Numerical simulations} The experimental results are not sufficient to explain, with complete certainty, the origin of the buffer layer and of the large-scale flow observed in the stratified layer. In addition, the effects of the lateral heat losses and of the top temperature heterogeneity are difficult to distinguish. To answer these questions, 3D DNS of a configuration similar to our experiments are performed, reproducing the 4$^{\circ}$C convection but with idealised boundary conditions (i.e. no flux on the sides, and fixed temperatures at the top and bottom). As mentioned in section \ref{sec:methods_num}, the Rayleigh number $Ra$ and $T_{top}$ are tuned so that the interface depths in the experiment and in the numerical simulation are similar. We have $Ra=10^7$ and $T_{top}=48^\circ$C. All the numerical simulations are run in dimensionless form, but results are shown in dimensional values.
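The conversion from dimensionless to dimensional values, using the scales defined hereafter, can be sketched as follows ($\kappa \approx 1.5 \times 10^{-7}$~m$^2$/s is the assumed thermal diffusivity of water; the helper names are ours):

```python
H = 0.2             # m, vertical extent of the numerical domain
kappa = 1.5e-7      # m^2/s, assumed thermal diffusivity of water
tau = H**2 / kappa  # thermal diffusive timescale, ~2.67e5 s

def theta(T, T_i=4.0, T_0=0.0):
    """Dimensionless temperature, with T_i the inversion temperature of the
    equation of state (4 C) and T_0 the bottom temperature (0 C)."""
    return (T - T_i) / (T_0 - T_i)

def dimensional_time(t_star):
    """Convert a dimensionless simulation time into seconds."""
    return t_star * tau
```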
The length scale is $H = 200$~mm, the vertical extent of the whole domain (the diameter is hence $D=300$~mm), the timescale is the thermal diffusive time $\tau = \frac{H^2}{\kappa} = \frac{0.2^2}{1.5 \times 10^{-7}} = 2.67 \times 10^{5}$~s, and the temperature is given by the dimensionless temperature $\theta = \frac{T - T_i}{T_0 - T_i}$, where $T, T_i, T_0$ are respectively the dimensional temperature, the inversion temperature of the equation of state (i.e. $4^\circ$C), and the bottom temperature (i.e. $0^\circ$C). Results for sections \ref{sec:results_num_conv} - \ref{sec:results_num_igw} are computed from an $(x,z)$ vertical plane located along a cylinder diameter. \subsubsection{\label{sec:results_num_conv}Large-scale circulation in the convection zone and buffer layer} Figure \ref{fig:LSC_num} shows that a large-scale circulation takes place in the convective layer. It consists of a cell filling the whole convective layer, and exhibits no reversal over the whole course of the simulation. The fluid rotates counter-clockwise in the vertical plane. This is qualitatively consistent with the mean flow observed in the experiment and shown in the right panel of figure \ref{fig:LSCexpe}. As in the experiments, a counter-current exists at the top of the convective layer at $z= 120$~mm, creating a strong shear and demonstrating the existence of a buffer layer in the numerical simulation as well. \begin{figure}[h] \centering \includegraphics[scale = .47]{quiver_inst_color.pdf} \hspace{.1cm} \includegraphics[scale = .47]{quiver_moyen_color.pdf} \caption{(Left) Instantaneous velocity field. An ascending plume is visible at $x=230$ mm. (Right) Large-scale circulation in the convective layer obtained by time-averaging velocities over a 50-minute recording. The large-scale circulation is a counter-clockwise cell.
Maximum instantaneous velocities are $3$ times larger than the maximum averaged velocities.} \label{fig:LSC_num} \end{figure} \begin{figure*}[t] \centering \includegraphics[scale = 0.5]{Spatio_temp_Ux_v3_dim.pdf} \hspace{.1cm} \includegraphics[scale = 0.5]{profil_T_all.pdf} \caption{(Left) Horizontal average of the horizontal velocity $u$ over a vertical cross-section in the middle of the tank. The buffer layer can be seen above $z=120$~mm. A stationary large-scale circulation is present in the convective layer, even if it appears quite perturbed at the end of the signal. (Right) Temperature profiles along the $z$-axis.} \label{fig:num_buffer} \end{figure*} The space-time diagram of the mean horizontal flow shown in figure \ref{fig:num_buffer} confirms this. Observing the buffer layer in the absence of side thermal losses and top temperature heterogeneity is an additional argument that it is not an artefact of imperfect experimental conditions. We also observe that the flow within the convective layer remains positive through time at the bottom and negative at the top. This is evidence of the steady large-scale circulation taking place in the convective layer. Some events appear at $t > 1.42 \times 10^{5}$~s and are interpreted as quasi-reversals of the large-scale circulation. \begin{figure}[t] \centering \includegraphics[scale=.5]{quiverIsotherme.pdf} \caption{Velocity field and temperature isotherms at the end of an upward plume impact on the interface.} \label{fig:num_quiverisoT} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=.6]{corre_WetU.pdf} \caption{(Left) Spatio-temporal diagram of the horizontal velocity $u$ at $z=128$~mm. (Right) Spatio-temporal diagram of the vertical velocity $w$ at $z=108$~mm.
The event at $t \approx 1.052 \times 10^5$~s is shown in figure \ref{fig:num_quiverisoT}.} \label{fig:num_correWU} \end{figure} The temperature profile along the $z$ axis is also plotted on the right panel of figure \ref{fig:num_buffer}. The figure shows a temporal and horizontal average of the temperature field (black thick curve), two temporal averages at two different positions, $x=14$~mm (left side of the tank - dashed grey) and $x=277$~mm (right side - dotted grey), and an instantaneous profile at $x=145$~mm (middle of the tank - thick grey with crosses). The thermal boundary layer can be seen between $z=0$~mm and $z=10$~mm. Then, between $z=10$~mm and $z=100$~mm lies a layer of constant temperature $T \sim 2.8^\circ$C. For $100 \mathrm{~mm} \leqslant z \leqslant 115 \mathrm{~mm}$, the temperature profile evolves from constant to linear, and is linear for $z > 115$~mm. The $T=4^\circ$C (respectively $T=8^\circ$C) isotherm is located at $z = 110$~mm (resp. $z = 120$~mm). Note that the temporal averages of the temperature profiles are different on the left and right sides of the tank: the constant-temperature region extends to $z=90$~mm on the left side, whereas it extends to $z=115$~mm on the right side. This suggests that the convective / buffer layer interface does not lie at a single height over the whole tank but is a function of time and space, very likely because of the large-scale circulation. Thus, the thermal coupling described in section \ref{sec:results:buffer} will likely occur at different heights, depending on time and horizontal position. The thermal coupling schematised in figure \ref{fig:schema_couplage} is indeed found in the numerical simulation, as represented in figure \ref{fig:num_quiverisoT}: an upward plume impacting the convective / buffer layer interface is seen. The isotherms ranging from $T=4^\circ$C to $T=11^\circ$C are deflected upward, due to the plume bringing cold fluid upward.
On the contrary, the isotherms at $T = 12$--$14^\circ$C are deflected downward by the converging flow. Isotherms at $T \geqslant 15^\circ$C remain horizontal. After the impact on the interface, the plume is deflected outwards. One could expect the fluid above the impact to be viscously entrained by this outward deflection. However, as observed in figure \ref{fig:num_quiverisoT} for the simulation and in figures \ref{fig:champ_correle}-\ref{fig:champ_pascorrele} for the experiment, the fluid above the interface moves towards the plume, \textit{i.e.} in the opposite direction to the fluid below, hence explaining the observed shear (see figures \ref{fig:num_quiverisoT} and \ref{fig:schema_couplage}). The time evolution of these dynamics is shown in figure \ref{fig:num_correWU}, which displays the horizontal velocity $u$ in the shear layer at $z=128$~mm and the vertical velocity $w$ in the convective layer at $z=108$~mm. Comparing the two panels of figure \ref{fig:num_correWU} shows that upward plumes are concomitant with converging horizontal velocities towards the plume impact. Indeed, the spatio-temporal diagram of $w$ exhibits localised strong upward plumes. These plumes, as suggested by the dashed black lines, are correlated in time and space with converging horizontal velocities. For instance, an upward plume is seen at $x\approx220$~mm and $t\approx1.043 \times 10^5$~s. At the same horizontal position and time, the positive horizontal velocity becomes stronger and the negative horizontal velocity patch increases in size to reach $x\approx220$~mm. These converging events occur shortly after the impact of the plumes. It can thus be concluded that the plumes induce the converging flow, as suggested by our explanation in section \ref{sec:results:buffer}.
\clearpage \subsubsection{\label{sec:results_num_igw}Internal gravity waves} \begin{figure}[h] \centering \includegraphics[scale = 0.55]{spectro_vfinale.pdf} \hspace{.1cm} \includegraphics[scale = .55]{plot_attenuation_norm_num_v2.pdf} \caption{(Left) Power spectral density of the absolute velocity $\sqrt{u^2+w^2}$ in the buffer and stratified layers. The grey curve shows the buoyancy frequency profile computed from the spatial and temporal average of the temperature field. (Right) Two selected profiles (taken at the frequencies shown by the dashed lines on the left graph) of the PSDs re-scaled by the PSD at the top of the convective layer, \textit{i.e.} at $z=118$~mm.} \label{fig:num_spectro} \end{figure} PSDs are computed in the stratified and buffer layers and are plotted in figure \ref{fig:num_spectro}. As for the experiment (figure \ref{fig:spectro}), numerical results show oscillatory motions at different frequencies attenuated with height. Experimental results (figure \ref{fig:spectro}) and numerical results (figure \ref{fig:num_spectro}) show strongly similar dynamics: most of the energy is present at low frequencies ($f<3 \times 10^{-3}$~Hz). Motions with frequencies ranging from $3 \times 10^{-3}$~Hz to $N$ are less intense, and almost no energy is seen at frequencies $f>N$. The right panel of figure \ref{fig:num_spectro} shows two selected vertical profiles (indicated by the dashed lines on the left panel) of the PSD re-scaled by the PSD at $z=118$~mm. The energy at the higher frequency ($f=1.4\times10^{-2}$~Hz) decreases more slowly than the energy at the lower frequency ($f=5.0\times10^{-3}$~Hz). As for the experimental results, this is in agreement with the dispersion relation of IGWs.
The overall behaviour of the wave spectra is similar in the experiment and in the numerical simulation: the attenuation length is independent of frequency in the low-frequency part of the signal, confirming the viscous coupling origin of the large-scale flow, and increases when the frequency goes towards $N$ in the wave domain. \subsubsection{Large-scale flow within the stratified layer} \begin{figure*}[h] \centering \includegraphics[scale = 0.6]{Spatiotemp_Vtheta_cyl_v2_square_dim.pdf} \hspace{.1cm} \includegraphics[scale = 0.6]{Spatiotemp_Vtheta_cyl_v2_zoom_dim.pdf} \caption{Spatio-temporal diagrams of the azimuthal average of the azimuthal velocity inside a virtual cylinder of radius $r = 140~$mm. The bottom figure is a zoom on the stratified zone, delimited in the top figure by the black square. The slope of the black lines shows the theoretical viscous diffusive time.} \label{fig:num_vthetacyl} \end{figure*} Similarly to what has been done for the experimental data, figure \ref{fig:num_vthetacyl} shows the mean azimuthal velocity over the whole height of a virtual cylinder of radius $r = 140$ mm. We observe reversals within the convective layer ($z<120$ mm), which are not systematically correlated with the signal in the stratified layer. The mean velocity in the stratified layer also exhibits reversals. They are characterised by an upward phase propagation from the buffer zone at $z=120$ mm, as shown in the zoom (bottom panel of figure \ref{fig:num_vthetacyl}). The phase velocity seen in figure \ref{fig:num_vthetacyl} is in good agreement with the theoretical time for viscous propagation $t = \frac{z^2}{4\,\nu\, \mathrm{erf}^{-2}\left(\frac{u}{u_b}-1\right)}$. This corroborates the fact that the reversals observed within the stratified layer are viscously driven by the dynamics occurring in the buffer layer, as seen in the experiment. Reversal times range from $300$~s to $1500$~s.
These reversal times are similar to the experimental ones, though slightly shorter (numerical reversals are $\sim20$\% faster than experimental ones). \clearpage \section{Conclusion}\label{sec:discussion} The 4$^{\circ}$C convection experiment, originally performed by Townsend \cite{townsend_natural_1964}, has been re-investigated using long-term PIV measurements in a vertical cross-section, and in several horizontal cross-sections within the stratified layer. The latter measurements have allowed us to investigate, for the first time, the long-term horizontal mean flow in the stratified layer. Experiments have been complemented by direct numerical simulations. The first result of this paper is the confirmation, in 3D and with ideal boundary conditions, of the presence of a buffer layer, including an overshooting region, as first observed by Perrard \cite{perrard_experimental_2013}, and a shear region. We have argued that the buffer layer is driven by thermal coupling with the convection, due to the non-linear equation of state of water, and that this mechanism is a priori related to a Prandtl number larger than one. The second result is that the buffer layer viscously drives slow reversals of the horizontal large-scale flow within the stratified layer. Additionally, IGWs at different frequencies propagate in the stratified layer. They likely interact with the horizontal large-scale flow, and probably also produce a reversing flow, which superimposes on the viscously driven one. From Couston et al. \cite{couston_order_2018}, we know that the Prandtl number has a strong influence on this QBO-like mechanism: the lower the Prandtl number, the stronger the amplitude of the QBO. In water, $Pr \sim 7$, and the expected amplitude of the large-scale QBO flow is weak, hence dominated by the viscous driving. Further experimental studies at lower Prandtl number should allow deciphering the two contributions.
One could for instance suggest using gas as a working fluid; however, the absence of a density reversal around a given temperature will necessitate considering either transient experiments like \cite{deardorff_laboratory_1969, michaelian_coupling_2002}, or two-gas experiments, which might then be prone to double diffusive instabilities. Experimentally, the question also remains why the only successful QBO experiment has been performed in salty water, hence with a Schmidt number (equivalent to $Pr$) of 700. \begin{figure}[h] \centering \includegraphics[scale=.6]{U_avgX.pdf} \caption{Horizontal average of the horizontal velocity $u$ over a vertical cross-section in the middle of the numerical domain for $Pr = 0.1$.} \label{fig:num_pr01} \end{figure} In the meantime, it is straightforward to change the Prandtl number in the numerical simulation of our set-up. We have thus run a second simulation with the same Rayleigh number $Ra = 10^7$ and top temperature $\theta_{top} = 11$, but with $Pr = 0.1$. In this simulation, as shown in figure \ref{fig:num_pr01}, no buffer layer is observed, but strong signatures of a QBO-like mechanism are visible, marked by a downward phase propagation of the reversals of the large-scale flow. This configuration thus deserves a more systematic study in the future. \section*{Acknowledgements} The authors acknowledge funding by the European Research Council under the European Union's Horizon 2020 research and innovation program through Grant No. 681835-FLUDYCO-ERC-2015-CoG. This work was granted access to the HPC resources of Aix-Marseille Universit\'e financed by the project Equip@Meso (ANR-10-EQPX-29-01) of the program ``Investissements d'Avenir'' supervised by the Agence Nationale de la Recherche. Computations were also conducted with the support of the HPC resources of GENCI-IDRIS (Grant No. A0060407543).
\section{\label{sec:intro}Introduction} Numerous geophysical and astrophysical flows present a two-layer configuration, with a turbulent convective layer standing above or below a stably stratified one. Examples include planetary atmospheres, stellar interiors, and possibly the outermost layer of the Earth's liquid core \cite{hirose_composition_2013}. The dynamics of coupled stratified and convective layers are quite complex. Due to the convective motions, internal gravity waves (IGWs) are generated at the interface between the two layers, and propagate in the stratified one. IGWs transport energy and momentum \cite{rogers_internal_2012,bretherton_momentum_1969} from where they are generated to where they are damped. Thanks to their transport properties and non-linear interactions, IGWs are able to generate and sustain large-scale horizontal flows \cite{plumb_interaction_1977,rogers_internal_2012}. Examples of such large-scale flows driven by IGWs are the oscillations of equatorial zonal winds observed in some planets' atmospheres \cite{fouchet_equatorial_2008,leovy_quasiquadrennial_1991}, including the Earth, where it is called the Quasi-Biennial Oscillation (QBO) \cite{baldwin_quasi-biennial_2001}.\\ The generation of IGWs by turbulent dynamics has been studied in various experiments. The generation by a single buoyant plume was experimentally studied by Ansong \& Sutherland \cite{ansong_internal_2010}. The penetration of the plume within the stratified layer and the spectral characteristics of the generated IGWs were studied. They found that the peak frequency of the generated IGWs lies in a range close to $0.7N$, where $N$ is the Brunt-V\"ais\"al\"a (or buoyancy) frequency, and that the radial wavenumber is set by the plume cap and not by the width of the plume at the interface. Deardorff et al. \cite{deardorff_laboratory_1969} and later Michaelian et al.
\cite{michaelian_coupling_2002} studied the effect of penetrative convection in a stratified layer in a transient, Rayleigh-B\'enard type experiment. Stratification was initially set up thermally, from the top to the bottom of the tank. Then, the fluid was suddenly warmed up at the bottom, triggering Rayleigh-B\'enard convection. IGWs were measured transiently \cite{michaelian_coupling_2002} while the stratified (resp. convective) layer was decreasing (resp. increasing) in size. Eventually, there was no more stratified layer to sustain the propagation of IGWs. Townsend \cite{townsend_natural_1964} introduced an original set-up to study the quasi-steady generation of IGWs by Rayleigh-B\'enard convection. Using the fact that the maximum density of water occurs around $4^{\circ}$C, a two-layer system is spontaneously generated by cooling the bottom of a tank at $0^{\circ}$C and heating its top above $4^{\circ}$C. The density gradient is unstable at temperatures below 4$^{\circ}$C and stable at temperatures above. This creates a self-organising system, with a turbulent convective layer adjacent to a stratified layer. With dye visualisation and temperature measurements, he observed IGWs propagating close to the interface between the two layers. The $4^\circ$C convection was also studied experimentally by Le Gal \cite{legal_penetrative_1997} in a laminar flow, at low Rayleigh number. Convection displayed a hexagonal pattern, and viscous entrainment of the fluid above the convective cells was observed. Perrard et al. \cite{perrard_experimental_2013} and Le Bars et al. \cite{bars_experimental_2015} re-investigated this setup in a quasi two-dimensional tank, using Particle Image Velocimetry (PIV) and temperature measurements to obtain detailed data on the IGWs generated by the convection. They observed a wide spectrum of waves generated at different frequencies.
Favoured frequencies were related to the frequency-dependent attenuation length of the waves, in good agreement with linear theory. No large-scale flow in the stratified layer was observed in this 2D geometry. Numerical simulations of the same configuration were performed by Lecoanet et al. \cite{lecoanet_numerical_2015}. They demonstrated that IGWs are mainly generated by the Reynolds stresses in the bulk of the convection below the interface, rather than by the displacement of the interface between the two layers. Numerical studies by Couston et al. \cite{couston_dynamics_2017,couston_order_2018,couston_energy_2018} extended these results by considering a generic non-linear equation of state (piecewise linear around an inversion temperature, with adjustable slopes), both in 2D and 3D horizontally periodic geometries. Various flow regimes and the energy transport by IGWs were quantitatively studied. Interestingly, long simulations -- accessible in 2D studies only -- showed, for low Prandtl numbers ($Pr < 1$), a large-scale horizontal flow with reversals in the stratified layer, similar to the QBO phenomenon introduced above \cite{couston_order_2018}. \\ Several experiments have investigated the generation and reversal of a large-scale horizontal mean flow in a stratified domain, driven by IGWs. The well-known experiment designed by Plumb and McEwan \cite{plumb_instability_1978}, later reproduced and extended by Semin et al. \cite{semin_generation_2016,semin_nonlinear_2018}, is capable of driving a QBO from the mechanical forcing of a standing wave pattern (\textit{i.e.} two waves with the same frequency and opposite phase speeds) in a salty water stratification. With this system, Plumb and McEwan managed to observe a few oscillations of the driven large-scale flow before the stratification was destroyed by mixing.
The experiment gave results in good agreement with the theory \cite{richard_s._lindzen_theory_1968,lindzen_updated_1972,plumb_interaction_1977}, notably with reversals starting away from the forcing and propagating towards it. Semin et al. \cite{semin_generation_2016,semin_nonlinear_2018} improved the system by constantly injecting fluid to rebuild the stratification, while removing mixed fluid close to the forcing. This method allowed the experiment to run longer, and enabled a study of the nature of the bifurcation in the Plumb model, which can be either supercritical or subcritical, depending on the dominant dissipative process. In these experimental realisations of the QBO mechanism, the wave forcing remains monochromatic, as opposed to the natural mechanism, where it is due to chaotic tropical storms \cite{baldwin_quasi-biennial_2001}. The forcing is driven by interface displacements, as opposed to the observations of \cite{lecoanet_numerical_2015}. Besides, only the stratified layer is modelled. It thus remains a challenge to observe experimentally a large-scale, reversing flow driven by a turbulent source and a wide range of naturally excited IGWs. \\ In the present study, we extend the work of Townsend \cite{townsend_natural_1964,perrard_experimental_2013,bars_experimental_2015} in a cylindrical, 3D geometry reminiscent of Plumb and McEwan's set-up \cite{plumb_instability_1978,semin_generation_2016,semin_nonlinear_2018}. Our purpose is threefold: to characterise the generation of IGWs in such a self-organising two-layer system, to quantify the coupling between the layers, and to investigate the possible generation of large-scale horizontal flows. Our experiments are complemented by direct numerical simulations of the same configuration. The experimental setup and numerical model are presented in section \ref{sec:methods}, results are analysed in section \ref{sec:results}, and conclusions and future works are discussed in section \ref{sec:discussion}.
\section{\label{sec:methods}Methods} \subsection{Experimental set-up} The set-up consists of a cubic tank whose lateral boundaries are made of 2 cm thick acrylic walls. The bottom boundary is a chrome-plated copper plate in which refrigerated fluid is forced to circulate. The top boundary is a commercial, transparent electric heater. The tank inner dimensions are $32 \times 32$ cm for the horizontal section and $H=20$ cm in height. Preliminary experiments were conducted in this cubic geometry. Eventually, a cylinder of outer diameter $D = 29$ cm and thickness $e = 0.4$ cm was added inside the cubic tank, to reproduce the axisymmetric geometry of \cite{plumb_instability_1978,semin_generation_2016,semin_nonlinear_2018}, which seems prone to the development of large-scale flows. We are interested in the flow within the cylinder: the fluid in the gap between the cylinder and the cubic tank follows a similar dynamics, and thus imposes on the working fluid a (close to) no-flux lateral boundary condition. The temperature of the bottom boundary is controlled by water circulating in the copper plate. Water is cooled down by a thermal bath with a fixed temperature set at $-1.25^{\circ}$C. Due to thermal losses in the pipes feeding the copper plate, the bottom tank temperature is $0.2 \pm 0.05^\circ$C. Plate thickness and circulation circuit were optimised so as to ensure a uniform temperature over the whole plate. At the top boundary, the heater is set at a temperature of $35^{\circ}$C. Its temperature control is custom made: a PT100 probe measures the heater temperature in real time and drives, through a feedback loop, the input power injected into the heater. This is a very simple and inexpensive system to impose a temperature while keeping a transparent top boundary, allowing visualisation and velocity measurements by PIV. Nonetheless, it is necessary to point out that the temperature over the heater area is not perfectly homogeneous.
Temperature is maximal at the centre, where $T \sim 38^{\circ}$C, while the edges are indeed at the requested $T = 35\pm 0.1^\circ$C. This inhomogeneity of the top temperature by $\delta T = 3^{\circ}$C induces slow convective motions below the heater, in a $\sim 2$ cm high layer. By performing an experiment where the whole fluid is stably stratified, with an overall temperature gradient similar to the one in the stratified layer studied here but above the density reversal at $4^{\circ}$C (i.e. bottom boundary set at 10$^{\circ}$C and top boundary at 70$^{\circ}$C), we have checked that those top convective motions have no significant impact on the dynamics of the two-layer system. It is also important to note that, despite the thick acrylic wall and the intermediate fluid layer between the cylinder and the tank, the working region is not fully thermally insulated on the sides. Nevertheless, our fully stratified test experiment has shown no motion within the fluid driven by these lateral losses. \\ The equation of state of water is non-linear, with a maximum density close to 4$^{\circ}$C (International Equation of State of Seawater, 1980): \begin{equation} \begin{split} \rho(T)= 999.842594+6.793952\times10^{-2}\,T-9.095290\times10^{-3}\,T^2+1.001685\times10^{-4}\,T^3 \\ -1.120083\times10^{-6}\,T^4+6.536332\times10^{-9}\,T^5.\\ \end{split} \label{eq:eos_eau} \end{equation} Thus, due to the imposed temperatures at the top and bottom boundaries, the bottom part of the tank, between 0$^{\circ}$C and 4$^{\circ}$C, is convectively unstable (see figure \ref{fig:setup}). Cold plumes detach from the bottom plate and rise in the tank due to buoyancy. Reciprocally, ``hot'' fluid sinks from the 4$^{\circ}$C interface due to gravity. While convective motion takes place in the lower layer, the upper part of the tank, between 4$^{\circ}$C and 35$^{\circ}$C, is stably stratified, with an assumed linear temperature profile at equilibrium.
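A quick numerical check of equation (\ref{eq:eos_eau}) recovers the density contrast across the stratified layer and the global buoyancy frequency $N$ defined below. This is a sketch: the stratified-layer depth $\Delta z \approx 8$~cm used here is an assumed round value, not a measured one.

```python
import math

def rho(T):
    """Density of pure water (kg/m^3), polynomial fit of the equation of state."""
    return (999.842594 + 6.793952e-2 * T - 9.095290e-3 * T**2
            + 1.001685e-4 * T**3 - 1.120083e-6 * T**4 + 6.536332e-9 * T**5)

g = 9.81                     # m/s^2
drho = rho(4.0) - rho(35.0)  # density contrast across the stratified layer, ~6 kg/m^3
dz = 0.08                    # m, assumed depth of the stratified layer
# global buoyancy frequency N = (1/2pi) sqrt(g/rho0 * drho/dz), in Hz
N = math.sqrt(g / rho(4.0) * drho / dz) / (2.0 * math.pi)
```

With these numbers, $N \approx 0.14$~Hz, consistent with the global value $N = 1.35 \times 10^{-1}$~Hz quoted for the experiment.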
The temperature is indeed linear for an ideal case without thermal losses, assuming that the stratified layer has a bottom boundary at fixed temperature $4^\circ$C and top boundary at $35^\circ$C (\textit{i.e.} constant diffusive flux through the whole layer). However, the density profile is not linear, due to the non-linear equation of state of water. Stratification is characterised by the Brunt-V\"ais\"al\"a frequency $N^* = \frac{1}{2 \pi} \sqrt{-\frac{g}{\rho_0} \frac{\partial \rho}{\partial z}}$. Because of the non-linear equation of state, $N^*$ is not constant with depth, as shown in figure \ref{fig:setup}. For simplicity, we also use below the global buoyancy frequency defined as $N = \frac{1}{2 \pi} \sqrt{-\frac{g}{\rho_0} \frac{\Delta \rho}{\Delta z}}$, where ${\Delta \rho}$ is the global density contrast within the stratified layer of depth ${\Delta z}$. \\ \begin{figure}[h] \includegraphics[scale=.3]{schema_cuve_profilN_NB_v3.png} \caption{2D sketch of the tank. A cylinder (light grey shaded area) is placed in a larger cubic tank. The system is cooled down at the bottom at 0$^{\circ}$C and heated up at the top at 35$^{\circ}$C. The bottom half is convective with an almost constant density, apart from the bottom boundary layer. The upper half is stably stratified and waves are generated due to the fluid motions in the convective layer. The graph on the right shows the theoretical profile for the buoyancy frequency $N^*$. It is computed considering a linear temperature profile and the equation of state of water (\ref{eq:eos_eau}). The dashed line is the global buoyancy frequency $N$ calculated on the stratified layer. The various length scales are the cylinder diameter $D$, the vertical extent of the tank $H$ and the vertical extent of the convective layer $h$. $\delta$ is the minimal width between the outer square tank and the inner cylinder. 
The $x$, $y$ and $z$-velocity components are noted $u$, $v$ and $w$, respectively.} \label{fig:setup} \end{figure} Before starting the experiment, the bath and the heater are allowed to reach their assigned temperatures. Then, the upper half of the tank is filled with water stratified in temperature from 4$^{\circ}$C to 35$^{\circ}$C, using the double-bucket technique \cite{oster_density_1965}. The bottom half is filled with water at a temperature close to 4$^{\circ}$C. This filling process is used to avoid tremendously long transients before reaching steady state by thermal diffusion. Typically, we fill the tank at the end of a day and start the measurements the next day, in order to let the system reach its equilibrium state overnight. Each experiment then lasts four days, with no change in the location of the interface. Note that this steady interface position is the result of the heat budget equilibrium between the convective heat flux extracted from the bottom plate, the diffusive heat flux through the stratified layer, and the lateral heat losses. To perform PIV measurements, particles are chosen as small as possible, and with a density as close as possible to that of water, in order to avoid their sedimentation in the stratified layer over the long duration of the experiment. We use fluorescent orange polyethylene micro-spheres whose size ranges from $10 \, \mathrm{\mu m}$ to $20 \, \mathrm{\mu m}$ and whose density is $1.00 \pm 0.01$~g/cm$^3$. The fluorescent property allows us, with a high-pass filter on the camera, to remove any laser reflection, significantly enhancing the image quality. The tank is illuminated with a green ($532$~nm) laser, with power set at $1$~W. We perform side-view PIV to measure the convection and IGW spectral characteristics, and top-view PIV to observe the large-scale flow and its fluctuations over time. The camera used for the side-view PIV is a HiSense Zyla ($2560 \times 2160$ pixels, recorded on 12 bits).
Acquisition rate is $2$~Hz with $100$~ms exposure time. Typical acquisition time for spectral characteristics is $50$~min. For the top view, we use a Point Grey camera with $1920\times1080$ pixels on 8~bits. Exposure time is $1$~s, acquisition rate $0.1$~Hz and acquisition time $8$~hours. Captured movies are processed either by the DantecDynamics software DynamicStudio for the side view or by DPIVSoft \cite{meunier2003analysis} for the top view. Both are resolved into $32\times32$ pixel boxes with 50\% overlap. Side view PIVs are performed in the middle of the tank, at $y=16$~cm, in a laser sheet crossing the cylinder along its diameter. This holds for all figures shown in the $(x,z)$ plane and is thus not repeated in the results section. The vertical fields (see an example in figure \ref{fig:LSCexpe}) do not show the whole $(x,z)$ plane (where the origin $(0,0)$ is located in the bottom left corner of the cubic tank): we chose to zoom in to obtain the best resolution for the very weak motions in the stratified layer. The interface between the layers is located between $z = 11$~cm and $z = 12$~cm. The typical Rayleigh number for the convection based on this depth is $Ra = 7 \times 10^6$, and the global Brunt-V\"ais\"al\"a frequency is $N = 1.35 \times 10^{-1}$~Hz. \begin{figure*}[h] \centering \includegraphics[scale = .45]{champ_int_730_colored.pdf} \hspace{.1cm} \includegraphics[scale = .45]{LSC_0510_1800_colored.pdf} \caption{(Left) Instantaneous velocity field. An ascending plume is visible at $x=150$~mm, advected by the large-scale circulation. (Right) Large-scale circulation in the convective layer obtained by time-averaging velocities over a 50-minute signal. The large-scale circulation is a counter-clockwise cell. No inversion of the circulation was observed in our experiment. The velocities below $z=10$~mm are noisy and thus not shown here.
Maximum instantaneous velocities are $2.5$ times larger than the maximum averaged velocities. The left edge of the cylinder is located at $x=19$~mm and the centre of the cylinder is at $x=160$~mm. Approximately $40$~mm of the right side of the cylinder is not captured by our PIV measurements.} \label{fig:LSCexpe} \end{figure*} To observe the large-scale flow, horizontal views at different heights are performed. A linear translation axis platform from Igus, driven by homemade software, is used to automate the process. With two mirrors, one fixed at the exit of the laser head and the other fixed on the translating platform, it is possible to sweep the laser sheet along one direction. We typically make measurements for 15~s in each of 11 successive planes separated by 0.5~cm. The full scan lasts about 3~min, and is repeated for at least 8~hours. \\ The cylindrical geometry described here differs from the cylindrical shell geometry of \cite{plumb_instability_1978,semin_generation_2016, semin_nonlinear_2018}. We first tried to work in that annular geometry by adding a second, smaller cylinder in our cubic tank. Three different gap sizes were tested but none showed any interesting dynamics. Indeed, the convection was too confined in the shell to provide an efficient chaotic excitation, and IGWs did not propagate well within the stratification, being quickly attenuated by wall friction. During these tests, we observed the most interesting dynamics within the innermost cylinder, so we decided to use that geometry. The point of the cylindrical shell geometry is that it is a close analogue to the equatorial band of the stratosphere where the QBO takes place. By working in a cylinder, the geometric analogy is lost but the physics of the problem remains the same, still a priori allowing for large-scale, reversing, axisymmetric horizontal flows.
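The non-monotonic $N^*$ profile sketched in figure \ref{fig:setup} can be reproduced in a few lines from a quadratic approximation of the equation of state of water, $\rho(T) \approx \rho_0(1-\alpha(T-T_i)^2)$. The sketch below assumes a linear temperature profile and a stratified-layer depth of 12~cm; these values, and the EOS coefficients, are illustrative assumptions rather than the exact experimental parameters:

```python
import numpy as np

# Quadratic equation of state near the density maximum of water,
# rho(T) ~ rho0 * (1 - alpha*(T - Ti)^2); coefficients are assumed values.
rho0, alpha, Ti = 1000.0, 8.1e-6, 4.0   # kg/m^3, (degC)^-2, degC
g = 9.81                                 # m/s^2

# Assumed linear temperature profile across the stratified layer
# (4 degC at its base, 35 degC at the top), depth dz = 12 cm (assumption).
dz = 0.12
z = np.linspace(0.0, dz, 200)
T = 4.0 + (35.0 - 4.0) * z / dz

rho = rho0 * (1.0 - alpha * (T - Ti) ** 2)

# Local buoyancy frequency N* (in Hz) and its global counterpart N.
drho_dz = np.gradient(rho, z)
Nstar = np.sqrt(np.maximum(-g / rho0 * drho_dz, 0.0)) / (2.0 * np.pi)
N = np.sqrt(g / rho0 * (rho[0] - rho[-1]) / dz) / (2.0 * np.pi)

print(f"N* at base: {Nstar[0]:.3f} Hz, N* at top: {Nstar[-1]:.3f} Hz")
print(f"global N: {N:.3f} Hz")
```

As in figure \ref{fig:setup}, $N^*$ vanishes at the base of the layer (where $T \approx 4^{\circ}$C) and increases with height, while the global $N$ lands close to the quoted $1.35 \times 10^{-1}$~Hz.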
\clearpage \subsection{Numerical model}\label{sec:methods_num} To complement the experiments, we also performed Direct Numerical Simulations (DNS) of the same configuration. We solve the Navier-Stokes equations using a non-Oberbeck-Boussinesq model. The density variations are considered small compared to the reference density, $\frac{\Delta \rho}{\rho_0} \ll 1$. Therefore, density fluctuations only appear in the buoyancy force. However, temperature variations affect the value of the thermal expansion coefficient, to account for the non-linear equation of state of water. Variations of the thermal diffusivity $\kappa$ and kinematic viscosity $\nu$ are neglected. The governing equations are given in dimensionless form by: \begin{align} \label{eq:momentum} \frac{\partial\bm{u}}{\partial t}+\bm{u}\cdot\nabla\bm{u} = & -\nabla P+Pr\nabla^2\bm{u}-Pr Ra \ \theta^2 \bm{e}_z \\ \label{eq:heat} \frac{\partial \theta}{\partial t}+\bm{u}\cdot\nabla\theta = & \ \nabla^2 \theta \\ \label{eq:mass} \nabla\cdot\bm{u} = & \ 0 \end{align} where we used the depth $H$ of the container as the unit of length, the thermal diffusive timescale $H^2/\kappa$ as the unit of time, and the difference between the bottom and the inversion temperature, $T_0 - T_i$, as the temperature scale. These equations are characterised by the Prandtl number $Pr=\nu/\kappa$, the global Rayleigh number $Ra=\alpha g (T_0-T_i)^2 H^3/(\nu\kappa)$ and the imposed dimensionless top temperature $\theta_{top}=(T_{top}-T_i)/(T_0-T_i)$. Note that the quadratic temperature term in the momentum equation is a direct consequence of the non-linear equation of state of water given by equation~\eqref{eq:eos_eau}, which is approximated in our model by the quadratic law $\rho(T) \approx \rho_0 (1-\alpha(T-T_i)^2)$. The coefficient $\alpha$ is not the usual thermal expansion coefficient but has units of $(\degree \mathrm{C})^{-2}$ and is given by $\alpha\approx8.1\times10^{-6}(\degree \mathrm{C})^{-2}$ (see also \cite{lecoanet_numerical_2015}).
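As a sanity check, the dimensionless control parameters can be evaluated directly from these definitions. In the sketch below the water properties $\nu$, $\kappa$ and the tank height $H$ are assumed typical values (they are not quoted in this section); the temperatures are those imposed in the experiment:

```python
# Dimensionless control parameters evaluated from the definitions above.
# nu, kappa and H are assumed values (standard cold-water properties and
# an assumed tank height); temperatures are the imposed plate temperatures.
g = 9.81            # m/s^2
alpha = 8.1e-6      # (degC)^-2, quadratic EOS coefficient
nu = 1.0e-6         # m^2/s, kinematic viscosity of water (assumed)
kappa = 1.4e-7      # m^2/s, thermal diffusivity of water (assumed)
H = 0.20            # m, tank height (assumed)
T0, Ti, Ttop = 0.0, 4.0, 35.0   # degC

Pr = nu / kappa
Ra = alpha * g * (T0 - Ti) ** 2 * H ** 3 / (nu * kappa)
theta_top = (Ttop - Ti) / (T0 - Ti)

print(f"Pr = {Pr:.1f}, Ra = {Ra:.1e}, theta_top = {theta_top:.2f}")
```

With these assumed properties one recovers $Pr \approx 7$, $Ra \approx 7 \times 10^7$ and $\theta_{top} = -7.75$, the values quoted for the experiment in the next paragraph.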
We consider a cylindrical fluid cavity of diameter $D=3H/2$ as in the experiments. Both horizontal plates are assumed to be no-slip and at fixed temperature. The side wall is assumed to be no-slip and perfectly insulating. This is of course not the case in the experiment, for which lateral heat losses are inevitable and the top temperature is not exactly constant, but the objective is to check whether the conclusions drawn from the experimental results are robust and do not depend on these effects. Since the experiment runs with water, we use $Pr=7$. The Rayleigh number of the experiment is $Ra=7 \times 10^7$ while its dimensionless top temperature is $\theta_{top}=-7.75$. If we were to run the simulation with these parameters, the interface would be located very close to the top boundary. This is not the case in the experiment because of the lateral heat losses, which tend to reduce the effective Rayleigh number. For that reason, and instead of taking these losses into account as in \cite{lecoanet_numerical_2015}, we keep the insulating lateral boundaries and use the slightly adjusted parameters $Ra=10^7$ and $\theta_{top} = -11$ instead, which leads to an interface located approximately at $z\approx120$~mm, as in the experiment. The Rayleigh number could not be lowered below $10^7$ in order to keep the convective flow turbulent enough, so we had to increase the magnitude of the dimensionless top temperature to keep the interface located at $z\approx120$~mm. We perform DNS of equations~\eqref{eq:momentum}-\eqref{eq:mass} using the spectral element solver Nek5000 \citep{Nek5000}. The global geometry is decomposed into hexahedral elements, with vertical refinement close to the horizontal boundaries and around the mid-plane where the inversion isotherm is expected to be located. Velocity, buoyancy and pressure variables are represented as tensor-product Lagrange polynomials of order $N$ and $N-2$, based on Gauss-Lobatto and Gauss quadrature points respectively.
The total number of grid points is given by $\mathcal{E}N^3$ where $\mathcal{E}$ is the number of elements. For all the results discussed in this paper, the number of elements is $\mathcal{E}=8960$ and we use a polynomial order of $N=11$. Numerical convergence was checked by increasing the polynomial order $N$. Time integration is performed with a third-order mixed implicit-explicit scheme. The simulations are initialised with a small-amplitude buoyancy perturbation and a temperature profile varying linearly between the top and bottom boundaries. Simulations are run until a quasi-steady state is reached, after which data is accumulated to compute statistics. \section{\label{sec:results}Results} \subsection{\label{sec:results_exp}Experiments} \subsubsection{\label{sec:results_conv}Convection} Side view PIV is used to quantify horizontal and vertical velocities in the convection zone. Examples of vertical velocities measured at a given location are shown in figure \ref{fig:panache}, for both ascending cold and descending hot structures. Measurements are consistent with the numerical simulations \cite{lecoanet_numerical_2015,couston_dynamics_2017} showing intense, localised, cold rising plumes and more diffuse descending plumes. Moreover, these structures are advected by a large-scale circulation encompassing the whole convective layer, as shown in figure \ref{fig:LSCexpe}. \begin{figure}[h] \centering {\label{fig:panacheup}\includegraphics[scale=0.43]{sonde_panache_up.pdf}} \hspace{.2pt} {\label{fig:panachedown}\includegraphics[scale=0.43]{sonde_panache_down.pdf}} \caption{Time evolution of the vertical velocity $w$ within: (Left) upward plumes at $x = 200$ mm, $z = 45$ mm, (Right) downward structures at $x= 100$ mm, $z = 95$ mm. } \label{fig:panache} \end{figure} Spectral analysis is performed to extract the power spectral density (PSD) from the velocity signals. Figure \ref{fig:spectreconv} shows the PSD of the convection vertical velocity $w$.
In both panels, the spectrum is flat and carries most of the energy at low frequencies, then the energy drops above a cut-off frequency. The left panel of figure \ref{fig:spectreconv} shows the vertical velocity PSD at a single point close to the top of the convective layer. A small peak can be seen close to $f = 10^{-2}$~Hz. This is the quasi-periodic signal of the plumes detaching from the top thermal boundary layer. The theoretical characteristic time of convection can be computed from \cite{gortler_convection_1966}: \begin{equation} \tau = \frac{h^2}{\pi \kappa}\left(\frac{\Delta T}{\Delta T_{local}} \times \frac{Ra_c}{Ra}\right)^{2/3} \end{equation} \begin{figure*}[b] \centering \includegraphics[scale = .43]{spectre_1_point_cd67_58_vf.pdf} \hspace{.1cm} \includegraphics[scale = .43]{spectre_W_50toend_10to57_vf.pdf} \caption{PSD of the vertical velocity fluctuations. (Left): PSD computed at a single point $x=100$~mm, $z=95$~mm (signal shown in figure \ref{fig:panache} right). The plume forcing frequency can be seen around $f = 10^{-2}$~Hz (red dashed line). (Right): PSD spatially averaged over the whole convective cell in the measured $(x,z)$ plane (all points above $z=10$~mm and below $z=110$~mm). } \label{fig:spectreconv} \end{figure*} with $h$ the height of the convective layer, $\kappa$ the thermal diffusivity, $\Delta T$ the temperature difference between the top and bottom of the convective domain, $\Delta T_{local}$ the temperature difference across the thermal boundary layer, and $Ra_c$ the critical Rayleigh number. The critical Rayleigh number for one free and one rigid boundary with fixed temperatures is $Ra_c = 1100.65$. For our experiment, the characteristic time is $\tau = 96$~s, thus the characteristic frequency is $1/ \tau \sim 10^{-2}$~Hz, which is close to the observed peak in the left panel of figure \ref{fig:spectreconv}. At frequencies lower than this characteristic frequency, the spectrum is flat.
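Plugging numbers into the expression above recovers the quoted characteristic time; in this sketch the boundary-layer temperature ratio $\Delta T/\Delta T_{local} \approx 1.1$ and the thermal diffusivity of water are assumptions, while $h$, $Ra$ and $Ra_c$ are the values quoted in the text:

```python
import math

# Goertler estimate of the convective turnover time (expression above).
h = 0.115            # m, convective layer depth (from the quoted interface height)
kappa = 1.4e-7       # m^2/s, thermal diffusivity of water (assumed)
Ra, Ra_c = 7e6, 1100.65
dT_ratio = 1.1       # Delta T / Delta T_local (assumed for this sketch)

tau = h**2 / (math.pi * kappa) * (dT_ratio * Ra_c / Ra) ** (2.0 / 3.0)
print(f"tau = {tau:.0f} s  ->  1/tau = {1.0/tau:.1e} Hz")
```

This gives $\tau$ of order $90$~s, hence $1/\tau \sim 10^{-2}$~Hz, consistent with the plume peak in the left panel of figure \ref{fig:spectreconv}.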
This flat low-frequency spectrum is explained by the combined effect of the randomness of the plumes (see figure \ref{fig:panache}) and of the slow fluctuations of the large-scale circulation. The right panel of figure \ref{fig:spectreconv} shows the PSD of vertical velocities, averaged over the whole convective cell in the $(x,z)$ plane. It shows a similar trend, with a lower cut-off frequency than the single-point spectrum of the left panel. Indeed, the plume signal is more localised and less intense on average than the large-scale circulation signal, which hence dominates the space-averaged PSD. The probability density function (PDF) of the vertical velocities in the whole convective layer, $\mathrm{P}(w)$, is computed and shown in figure \ref{fig:pdf}. It is normalised such that $\int \mathrm{P}(w)\mathrm{d}w = 1$. The PDF describes important features of the convection: it is skewed towards positive values, with positive velocities reaching higher magnitudes than negative velocities, \textit{i.e.} the ascending plumes are stronger than the descending structures. However, the central part of the PDF is close to a Gaussian profile. The distribution obtained here is in good agreement with the probability density function computed in an idealised 2D numerical model by Couston et al. \cite{couston_dynamics_2017}. Note that this asymmetry is specific to our system, for which the usual upside-down symmetry of Boussinesq Rayleigh-B\'enard convection is broken. \begin{figure} \centering \includegraphics[scale = .6]{pdf_conv.pdf} \caption{Probability density function of the vertical velocities in the convective layer. All PIV points under $z=110$~mm have been used to compute the PDF.} \label{fig:pdf} \end{figure} \subsubsection{\label{sec:results:buffer}Buffer layer} An intermediate layer (named the buffer layer in the following) is present between the convective layer and the stratified layer. It was first reported in the quasi-2D 4$^{\circ}$C convection experiment by Perrard et al.
\cite{perrard_experimental_2013}. Their temperature measurements showed that the buffer layer corresponds to the area where the temperature goes from 4$^{\circ}$C to 8$^{\circ}$C. This actually corresponds to the overshooting region for rising cold plumes (note that this type of convection is called ``penetrative convection'' because of this effect). Indeed, since the density of water is nearly quadratic in temperature around $4^{\circ}$C, densities at e.g. $0^{\circ}$C and $8^{\circ}$C are the same, and the 8$^{\circ}$C isotherm is the theoretical maximum height reachable by an ascending cold plume at $0^{\circ}$C in the absence of thermal diffusion. At the same time, the overall density profile between $4^{\circ}$C and $8^{\circ}$C is stable, as in the stratified layer above $8^{\circ}$C. The buffer layer is thus a very specific location, simultaneously supporting convective overshooting motions and IGWs, as observed with PIV measurements \cite{perrard_experimental_2013}. \begin{figure}[h] \centering \includegraphics[scale = .6]{buffer_U-xavg.pdf} \caption{Time evolution of the horizontal average of the horizontal velocity, noted $u_X$. Red (resp. blue) regions correspond to mean flow going towards the right (resp. left).} \label{fig:bufferlayer} \end{figure} To complete the description of this buffer layer, now using velocity measurements, we plot in figure \ref{fig:bufferlayer} the spatio-temporal graph of the horizontal average of the horizontal velocity $u$. The graph exhibits a strong shear around $z=120$~mm. Since the fluid is going in opposite directions above and below $z=120$~mm with a sharp interface, viscous entrainment by the convective layer is excluded. A special kind of thermal coupling might explain the observed dynamics, as sketched in figure \ref{fig:schema_couplage}.
Indeed, when a cold ascending plume from the convection zone reaches the interface and overshoots into the buffer region, its associated velocity perturbation dissipates more rapidly than its temperature perturbation. Due to gravity, the distorted part of the buffer region tends to sink back to its initial state (pictured by the green arrows), while the fluid above the buffer layer moves towards the impact point of the plume to take the place of the falling water (pictured by the red arrows). The buffer layer then draws compensating fluid from the convective layer. This mechanism works when the velocity perturbation of the plume at the interface dissipates more rapidly than the thermal perturbation, hence for a Prandtl number $Pr \geq 1$. One might expect the shearing zone to decrease in size and amplitude when thermal diffusion increases (i.e. when the Prandtl number decreases), since the overshooting rising cold plume will then equilibrate thermally more rapidly during its ascent. This may explain why no interfacial shear was reported in the systematic numerical studies of \cite{couston_dynamics_2017,couston_order_2018} where $Pr \leq 1$. Global temperature field measurements (using e.g. temperature-dependent laser-induced fluorescence) are now required to confirm or invalidate the proposed model, but these are beyond the scope of the present paper. Note that, by extension, we call ``buffer layer'' in the following the region including the $T=4^\circ$C to $T=8^\circ$C overshooting region and the shear region. In the experiment, the shear region extends from $z=120$~mm to $z \approx 135$~mm. \begin{figure}[htb] \centering \includegraphics[scale = .75]{schema_couplage_final2.pdf} \caption{Sketch of the thermal coupling between the convective and buffer layers. On the left, a cold plume moves upwards towards the interface between the two layers.
On the right, isotherms are deflected due to the impact.} \label{fig:schema_couplage} \end{figure} \clearpage \subsubsection{\label{sec:results_waves}Internal gravity waves} \begin{figure}[h] \centering \includegraphics[scale = .83]{visu_onde_colormap_col_02to03_axisequal_schema.pdf} \hspace{.1cm} \includegraphics[scale = .53]{panache_v1.pdf} \caption{Velocity fields showing propagating IGWs. (Top) Velocities in the $(x,z)$ plane. The signal is frequency-filtered to enhance the visualisation of oscillatory motions: only frequencies between $0.02$ and $0.03$~Hz are shown, propagating at an angle of roughly $75^\circ$ with the vertical. The angle of propagation is the angle between the constant phase lines and the vertical. (Bottom) Velocities in the $(x,y)$ plane located at $z \approx 125$~mm. In the $(x,y)$ plane, IGWs take the form of oscillating rings. Note that this figure is from a previous experiment without any internal cylinder and is therefore only displayed here as an illustration of the IGWs seen from above.} \label{fig:ondeN} \end{figure} The convective motions induce a Reynolds stress at and below the interface which generates IGWs propagating in the stratified area \cite{lecoanet_numerical_2015}. An example is shown in figure \ref{fig:ondeN}. The vector field has been frequency-filtered in the band $[0.02-0.03]$~Hz to isolate a single propagating wave train. We can measure an angle close to $\theta \simeq 75^{\circ}$ between constant phase lines and the vertical. This observation is in good agreement with the inviscid dispersion relation $\omega = \pm N \mathrm{cos}(\theta)$, which relates the frequency and the propagation angle of IGWs. Indeed, at $z=120$~mm, $N \sim 0.1$~Hz, thus $\theta = \mathrm{cos^{-1}}\left(\frac{\omega}{N}\right) = 78.5^\circ$. The motion within the stratified area is a superposition of many such IGWs oscillating at different frequencies.
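For the wave train isolated in figure \ref{fig:ondeN}, the dispersion relation directly yields the propagation angles across the filtered band; a minimal numerical check with the values quoted above:

```python
import math

# Propagation angle from the inviscid dispersion relation omega = N cos(theta),
# where theta is the angle between constant phase lines and the vertical.
def propagation_angle(f, N):
    return math.degrees(math.acos(f / N))

N = 0.1  # Hz, buoyancy frequency near the interface (value from the text)
for f in (0.02, 0.025, 0.03):  # Hz, band isolated in the figure
    print(f"f = {f:.3f} Hz -> theta = {propagation_angle(f, N):.1f} deg")
```

The band spans angles from about $72^\circ$ to $78.5^\circ$ with the vertical, bracketing the measured $\theta \simeq 75^{\circ}$.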
To further investigate the wave signal, wave spectra are plotted in figure \ref{fig:spectro}, showing the power spectral density of oscillatory motions within the stratified layer, horizontally averaged at each height. The grey line is the theoretically computed buoyancy frequency profile. Figure \ref{fig:spectro} shows that energy is present in a wide frequency band, from the lowest measured frequencies up to the buoyancy frequency $N$. Low-frequency motions ($f < 4\times10^{-3}$~Hz) are very intense and propagate high into the stratified layer. Motions with frequencies ranging from $4 \times 10^{-3}$~Hz to $N$ are less intense, but still propagate into the stratified layer. Motions at frequencies higher than the buoyancy frequency $N$ are greatly attenuated within a centimetre, since IGWs with frequencies larger than $N$ are evanescent. The weak signal at low frequencies above $z=180$~mm comes from convective motions due to the non-homogeneous heating at the top. These motions are confined to the very top of the experimental container. \begin{figure*}[t] \centering \includegraphics[scale=0.55]{propagation_nonorm_120_vff.pdf} \hspace{.1cm} \includegraphics[scale =.55]{plot_attenuation_norm_v2.pdf} \caption{(Left) Power spectral density of the absolute velocity $\sqrt{u^2+w^2}$ above the convective layer. The grey curve shows the theoretical buoyancy frequency profile, assuming a linear temperature profile from $4^\circ$C to $35^\circ$C. (Right) Two selected profiles (taken at the frequencies shown by dashed lines on the left graph) of the PSDs re-scaled by the PSD at the top of the convective layer, \textit{i.e.} at $z=120$~mm (noted $\mathrm{PSD_o}$).
The PIV measurements are performed for 50 minutes and the resulting spectra (computed with the pwelch Matlab function) are horizontally averaged at each height to obtain the averaged power spectral density.} \label{fig:spectro} \end{figure*} The right panel of figure \ref{fig:spectro} shows two vertical profiles of the PSD re-scaled by the PSD at the top of the convective layer ($z=120$~mm), taken at two different frequencies. The energy decrease is quite similar between $z=120$~mm and $z=140$~mm for both frequencies. However, for $z>140$~mm, the energy at the higher frequency decreases more slowly than the energy at the lower frequency. This dependence of the attenuation length on the signal frequency is characteristic of IGWs. Indeed, the dispersion relation of IGWs relates the frequency and the wave vector direction. Moreover, for IGWs, energy propagates perpendicularly to the wave vector (i.e. group and phase velocities are perpendicular). The closer the wave frequency $\omega$ is to $N$, the more horizontally the phase propagates, hence the more vertically the energy propagates. High-frequency waves are thus capable of transporting energy to high altitudes before being damped. Conversely, waves with low frequencies compared to $N$ propagate energy almost horizontally, and are thus attenuated before reaching high altitudes. At frequencies $f<4\times 10^{-3}$~Hz, much energy is present and the attenuation length does not depend on the frequency. There is no reason why IGWs should disappear below a certain frequency, but we would expect the attenuation length to keep decreasing with decreasing frequency. We thus deduce that IGWs at frequencies $f \leqslant 4\times 10^{-3}$~Hz are hidden in the energy spectrum by some very energetic large-scale slowly-varying flow, which we will describe below. More than one order of magnitude separates the buoyancy frequency from the fastest large-scale flow fluctuations. The large-scale flow penetrates deep into the stratified layer.
It globally decreases in amplitude with height, but with some local increases at $z\sim 125$~mm (i.e. close to the interface between the convective and buffer layers) and $z \sim 145$~mm. The IGW signal can be seen between $f = 4 \times 10^{-3}$~Hz and the buoyancy frequency. A peak that reaches the top of the stratified layer is seen around $f= 1.2 \times 10^{-2}$~Hz, \textit{i.e.} the same frequency as the convective forcing discussed in section \ref{sec:results_conv}. It corresponds to the strong excitation provided by the cold rising and hot sinking turbulent plumes. However, the left panel of figure \ref{fig:spectro} also shows a sudden drop of the energy at frequencies $f>1.2 \times 10^{-2}$~Hz. Indeed, wave attenuation is strong at these frequencies, even though they are close to (but below) the buoyancy frequency $N$. This is because energy dissipation also depends on the squared norm of the wave vector. There is no reason for all excited waves to have the same wave vector norm; one could even expect the fastest waves to be excited by the fastest, hence smallest, convective patterns, and thus to be at the smallest scales: they then dissipate more rapidly. \subsubsection{\label{sec:results_lsf}Large-scale flow in the stratified layer} Figure \ref{fig:spectro} shows a large amount of energy at low frequencies, which has been interpreted as the signature of a large-scale slowly-varying flow in the stratified layer. We will now investigate the nature of these fluctuations to see if they relate to reversals similar to the QBO. Figure \ref{fig:meanflow} shows horizontal vector fields at the same depth at different times. In figure \ref{fig:meanflow}(a), the flow goes counter-clockwise inside the cylinder. Figure \ref{fig:meanflow}(b) shows that two counter-rotating vortices with smaller typical velocities have appeared. Figure \ref{fig:meanflow}(c) shows a mostly clockwise rotating flow, where one eddy of the preceding pair has nearly disappeared.
The large-scale flow thus evolves drastically over time. A criterion accounting for the ``direction'' of the large-scale flow is computed to extract a typical mean velocity from those fields: as illustrated in figure \ref{fig:critere}, we compute the mean azimuthal velocity along a ring centred on the cylinder axis. Other criteria to extract a representative value for the large-scale flow direction have also been tested, including the mean vorticity over the cylinder area, the average of the azimuthal velocity over several rings with different radii, and the azimuthal velocity averaged over thick rings. They all give similar results for the large-scale flow measurement. \\ \begin{figure*}[t] \centering \includegraphics[scale = .8]{evolution_mf_vff.pdf} \caption{Horizontal velocity vector fields in the stratified layer at different times. The laser sheet is located at $z=150$~mm. The large-scale flow reverses from (a) to (c). The time between (a) and (c) is approximately half an hour. Maximum velocities are $0.1$~mm/s.} \label{fig:meanflow} \end{figure*} \begin{figure}[t] \centering \includegraphics[scale = .45]{critere_mf.pdf} \caption{Criterion used to extract a significant value for the large-scale flow and its direction: the azimuthal velocity is averaged over the ring shown in red.} \label{fig:critere} \end{figure} In order to investigate the vertical phase propagation of the reversals, and thus to compare the observed reversal dynamics with a QBO-like phenomenon, the setup has been equipped with a linear translating platform that allows the horizontal laser sheet to be swept along the vertical. Horizontal velocities are measured in a horizontal plane, every $5$~mm from the top of the convective layer ($z=110$~mm) to the middle of the stratified layer ($z=160$~mm).
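The ring-average criterion can be sketched in a few lines; function and variable names below are ours, and a solid-body-rotation field is used as a synthetic check:

```python
import numpy as np

def ring_averaged_azimuthal_velocity(x, y, u, v, xc, yc, r0, dr):
    """Mean azimuthal velocity over a thin ring of radius r0 centred on
    (xc, yc); positive values correspond to counter-clockwise flow.
    (Sketch of the criterion described in the text.)"""
    X, Y = np.meshgrid(x, y)
    rx, ry = X - xc, Y - yc
    r = np.hypot(rx, ry)
    mask = np.abs(r - r0) < dr
    # The azimuthal unit vector is (-ry, rx)/r.
    u_theta = (-ry * u + rx * v) / np.where(r > 0, r, 1.0)
    return u_theta[mask].mean()

# Synthetic check: solid-body rotation at angular rate omega gives
# u_theta = omega * r on the ring.
x = y = np.linspace(-0.1, 0.1, 101)
X, Y = np.meshgrid(x, y)
omega = 0.5
u, v = -omega * Y, omega * X
res = ring_averaged_azimuthal_velocity(x, y, u, v, 0.0, 0.0, 0.05, 0.005)
print(res)
```

The sign of this single number gives the ``direction'' of the large-scale flow, and its magnitude a typical velocity; averaging over thicker rings or several radii, as tested in the text, only changes the weighting of the same quantity.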
Any trace of downward phase propagation of the reversals, as observed for the QBO on Earth \cite{baldwin_quasi-biennial_2001} and in the historical Plumb experiment \cite{plumb_interaction_1977, semin_nonlinear_2018}, would be significant evidence for a QBO-like phenomenon in the experiment. Indeed, the phase propagation of the reversals due to non-linear IGW interactions is theorised as follows: an IGW propagating in a stratified layer with a horizontal phase velocity in the same direction as the existing base flow propagates upward until reaching a critical height $z_c$, where it deposits all its energy locally. At $z=z_c$, the flow accelerates. Thus the critical height where the flow is intense enough to damp the wave is lowered. As time goes on, this critical height moves towards the location where the waves are emitted. Here, the waves are emitted at the bottom of the stratified layer. We would therefore expect a downward phase propagation if the reversals were driven by non-linear IGW interactions. \begin{figure*} \centering \includegraphics[scale=0.7]{QBOfinal_avecpentediffusionv2.pdf} \caption{Reversals of the large-scale flow. The convective/buffer layer interface lies at $z=115 - 120$~mm. Ascending plumes often perturb the buffer layer flow. The velocity was measured at 11 heights, marked by the ticks on the vertical axis of the figure, and interpolated in between. The slope of the black dotted lines represents the viscous coupling phase velocity.} \label{fig:QBO} \end{figure*} We performed long-duration experiments (around 8~hours). Typical results extracted from the criterion described above are shown in figure \ref{fig:QBO}. Blue patches (resp. red patches) represent large-scale flow going counter-clockwise (resp. clockwise).
The present measurements mainly confirm the interpretation of figure \ref{fig:spectro} for the lowest frequencies: the large-scale flow is horizontal, extends over the whole depth of the stratified layer with an amplitude attenuated with height, and exhibits slow reversals. Additionally, some intense events at $z = 110$~mm are directly related to penetrative plumes from the convection. Reversal times range from $400$~s to $1800$~s. However, no downward phase propagation of the reversals is observed. On the contrary, the reversals seem to occur along the whole stratification height at the same time, or even with a rapid upward phase propagation. Since the phase propagation is not towards the location where the waves are emitted, the reversals are unlikely to be driven by the non-linear interactions of IGWs. However, as seen in section \ref{sec:results_waves}, IGWs propagate in the stratified layer and carry energy. Therefore, they do transfer energy to the large-scale flow through non-linear interactions, yet this process is not dominant in the reversal dynamics. \\ Since the reversals observed in figure \ref{fig:QBO} do not exhibit downward phase propagation, we look for mechanisms other than the QBO mechanism to explain them. Two other mechanisms can be investigated. The first one relies on a specific convective dynamics within the overall stratified layer, driven by horizontal gradients related to imperfect top and side boundary conditions. The second mechanism relies on viscous coupling with the underlying convective and buffer layers. Our fully stratified reference experiment described in section \ref{sec:methods} precludes the first scenario. Indeed, with the bottom boundary set at 10$^{\circ}$C and the top boundary at 70$^{\circ}$C, no motion is observed in the bottom $3/4$ of the tank.
In this test experiment, the top $1/4$ of the tank exhibits convective motions due to the non-homogeneous top heat source (in the standard $4^\circ$C experiment, where $T_{top} = 35\degree$C, only $\sim 2$~cm at the top of the tank are affected by convection, because the non-homogeneity of the heat source is less pronounced at lower temperature and the horizontal convection is thus weaker). However, these motions are inefficient at generating waves below and at driving any large-scale flow observable away from the top region. \begin{figure}[h] \includegraphics[scale = 0.55]{balayage_corr_vf.pdf} \caption{Velocity vector fields in the horizontal plane. Different columns represent different sweeping cycles $t^*$ (one sweeping cycle corresponds to the 11 steps needed to go from the lowest position $z=110$~mm to the highest position $z=160$~mm). Different rows represent different heights within the same sweeping cycle: the first row is the top of the convection ($z=110$~mm), the second row is in the buffer layer ($z=120$~mm) and the third row is in the stratified layer ($z=145$~mm). Convective plumes are easily noticeable in the first-row fields.} \label{fig:champ_correle} \end{figure} \begin{figure}[h] \includegraphics[scale = 0.55]{balayage_NOcorr_vf.pdf} \caption{Same as figure \ref{fig:champ_correle} but for different sweeping cycles. Note that in this set of velocity fields, the buffer and stratified layers are less correlated than they are in figure \ref{fig:champ_correle}.} \label{fig:champ_pascorrele} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.6]{correProdscalaire_BW_vf.pdf} \caption{Velocity correlation between the three layers. Dashed black lines mark the $-0.5$ and $0.5$ values. Correlation coefficients are window-averaged over $10$~min to smooth the curves.} \label{fig:correlation} \end{figure} This leaves viscous entrainment as a possible driving mechanism.
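The timescale of such viscous entrainment can be estimated from the error-function velocity profile of an impulsively started plate (Stokes' first problem), $u(z,t)/u_b = 1 - \mathrm{erf}\!\left(z/2\sqrt{\nu t}\right)$. A hedged sketch with the heights and the 20\% velocity fraction quoted in this section (the kinematic viscosity is an assumed standard value for water, and the inverse error function is obtained here by bisection):

```python
import math

def erfinv(y, lo=0.0, hi=6.0, tol=1e-12):
    """Invert math.erf on [0, 1) by bisection (erf is monotonic)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu = 1.0e-6          # m^2/s, kinematic viscosity of water (assumed)
z = 0.160 - 0.115    # m, from z = 115 mm up to z = 160 mm
fraction = 0.20      # u/u_b measured at z = 160 mm

# Time for the erf profile to reach the given velocity fraction at height z.
t = z**2 / (4.0 * nu * erfinv(1.0 - fraction) ** 2)
print(f"t = {t:.0f} s")
```

This estimate, of order $600$~s, falls within the observed range of reversal times, consistent with the comparison made below between the dotted lines of figure \ref{fig:QBO} and the upward phases.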
The dotted lines on figure \ref{fig:QBO} show a theoretical viscous time, computed from the time for viscous entrainment to drive 20\% of the horizontal velocity at $z = 115$ mm to $z=160$ mm, starting from a base state flow at rest. The 20\% value corresponds to the measured value of the large scale flow at $z=160$ mm compared to the value at $z = 115$ mm (noted $u_b$). The theoretical viscous entrainment time is given by $t = \frac{z^2}{4\,\nu\, \mathrm{erf}^{-2}\left(\left(\frac{u}{u_b}-1\right)\right)}$. The reversals occur on a time scale comparable to this theoretical viscous time. The similarity between the slope of the dotted lines and the slope of the upward phases suggests that reversals are driven viscously. However, the existence of the buffer layer and its associated intense shear, with horizontal velocities opposite to those of the convective layer below (see figure \ref{fig:bufferlayer}), precludes direct viscous coupling between the convective and stratified layers. Besides, no reversal has been observed in the convective region. We thus propose a thermal coupling between the convective and buffer layers as seen in section \ref{sec:results:buffer}, associated with a viscous coupling between the buffer and stratified ones. To further quantify this possibility, figures \ref{fig:champ_correle} and \ref{fig:champ_pascorrele} show horizontal velocity fields at different heights and at different times. For each of the columns shown, the first row is the mean flow in the convective layer at depth $z=110$~mm, the second row is the mean flow in the buffer layer at depth $z=120$~mm and the third row is the mean flow in the stratified layer at depth $z=145$~mm. The correlation coefficients through time between (i) the convective and buffer layers, (ii) the buffer and stratified layers, and (iii) the convective and stratified layers have been computed.
Each coefficient consists of the scalar product of the velocity vectors at each position at two different heights, rescaled by the product of the norms of the velocity vectors at the two heights, \textit{i.e.}: \begin{equation} R_{ij} = R(x_i,y_j) = \frac{u(x_i,y_j,z_1) \times u(x_i,y_j,z_2) + v(x_i,y_j,z_1) \times v(x_i,y_j,z_2)}{\left(u(x_i,y_j,z_1)^2+v(x_i,y_j,z_1)^2\right)^{1/2} \times \left(u(x_i,y_j,z_2)^2 + v(x_i,y_j,z_2)^2 \right)^{1/2}} \end{equation} This gives a correlation coefficient $R_{ij}$ for each PIV position in the horizontal plane. The global correlation coefficient $R$ is computed by spatially averaging the local correlation coefficients. Results are shown in figure \ref{fig:correlation}. The convective and buffer layers are negatively correlated: the correlation coefficient is most of the time close to $R=-0.5$. This can also be seen at all times in figures \ref{fig:champ_correle} and \ref{fig:champ_pascorrele}, where horizontal velocities in the convective and buffer layers have opposite directions. A diverging flow coming from an impinging plume in the convective zone corresponds to a converging flow in the buffer layer towards the impact zone, hence confirming the thermal coupling mechanism described in section \ref{sec:results:buffer}. This converging flow may lead either to a clockwise or anticlockwise azimuthal mean flow, depending on the details of the chaotic excitation from the convective plumes. The correlation coefficient between the convective and stratified layers can be positive or negative, and is in any case most of the time less than 0.2 in absolute value. The correlation coefficient between the buffer and stratified layers shows large temporal variations. However, it always remains positive.
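As an illustration of this diagnostic, the computation of $R_{ij}$ and of the global coefficient $R$ can be sketched in a few lines (the toy $2\times2$ fields below are ours, not the PIV data):

```python
import math

def correlation_field(u1, v1, u2, v2, eps=1e-12):
    # Local coefficient R_ij: normalized scalar product of the horizontal
    # velocity vectors measured at two heights z1 and z2, per PIV grid point
    return [[(u1[i][j] * u2[i][j] + v1[i][j] * v2[i][j])
             / max(math.hypot(u1[i][j], v1[i][j])
                   * math.hypot(u2[i][j], v2[i][j]), eps)
             for j in range(len(u1[0]))]
            for i in range(len(u1))]

def global_correlation(R):
    # Spatial average of the local coefficients
    flat = [r for row in R for r in row]
    return sum(flat) / len(flat)

# Toy fields: the flow at z2 is exactly opposite to the flow at z1, giving
# R = -1, the signature of the convective/buffer layer anticorrelation
u1 = [[1.0, 2.0], [0.5, -1.0]]; v1 = [[0.3, -0.2], [1.0, 0.4]]
u2 = [[-x for x in row] for row in u1]; v2 = [[-x for x in row] for row in v1]
print(global_correlation(correlation_field(u1, v1, u2, v2)))  # close to -1.0
```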
At a given time, the large-scale flow in the stratified layer may switch between a regime strongly dominated by the buffer layer (see also figure \ref{fig:champ_correle}), and a second regime where the flow in the stratified layer is quite different from the flow in the buffer layer (see also figure \ref{fig:champ_pascorrele}). We thus conclude that the stratified layer is globally viscously driven by the buffer layer. However, the stratified layer exhibits additional complexities. These might be due to IGWs interacting with the large-scale flow. The results from Couston et al. \cite{couston_order_2018} show that the lower the Prandtl number, the more regular the QBO. In the experiment, the Prandtl number is close to $Pr = 7$: the typical associated QBO-type flow is irregular, with low amplitude. We thus propose that a large-scale flow driven by the non-linear interactions of IGWs superimposes on the viscously driven flow, but remains secondary. We do not know at this point how to disentangle those two potential contributions from the available data. \clearpage \subsection{\label{sec:results_num} Numerical simulations} The experimental results are not sufficient to explain, with complete certainty, the origin of the buffer layer and of the large-scale flow observed in the stratified layer. In addition, the effects of the lateral heat losses and top temperature heterogeneity are difficult to distinguish. To answer these questions, 3D DNS of a configuration similar to our experiments are performed, reproducing the 4$^{\circ}$C convection but with idealised boundary conditions (i.e. no flux on the sides, and fixed temperature at the top and bottom). As mentioned in section \ref{sec:methods_num}, the Rayleigh number $Ra$ and $T_{top}$ are tuned so that the interface depths in the experiment and the numerical simulation are similar. We have $Ra=10^7$ and $T_{top}=48^\circ$C. All the numerical simulations are run in dimensionless form, but results are shown in dimensional values.
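As an aside, the redimensionalisation is a one-line conversion; a minimal sketch using the scales defined in the next paragraph ($H=0.2$~m and $\kappa=1.5\times10^{-7}$~m$^2$/s):

```python
# Scales used to convert the dimensionless simulation output to dimensional values
H = 0.2             # domain height [m]
kappa = 1.5e-7      # thermal diffusivity of water [m^2/s]
tau = H**2 / kappa  # thermal diffusive time [s]
print(tau)          # ~2.67e5 s, i.e. about three days

def theta(T, T_i=4.0, T_0=0.0):
    # Dimensionless temperature: theta = (T - T_i) / (T_0 - T_i)
    return (T - T_i) / (T_0 - T_i)

# The density-inversion temperature (4 degC) maps to theta = 0
# and the bottom temperature (0 degC) to theta = 1
print(theta(4.0), theta(0.0))
```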
The length scale is $H = 200$~mm, the vertical extent of the whole domain (the diameter is $D=300$~mm), the timescale is the thermal diffusive time $\tau = \frac{H^2}{\kappa} = \frac{0.2^2}{1.5 \times 10^{-7}} = 2.67 \times 10^{5}$~s, and the temperature is given by the dimensionless temperature $\theta = \frac{T - T_i}{T_0 - T_i}$, where $T, T_i, T_0$ are respectively the dimensional temperature, the inversion temperature of the equation of state (i.e. $4^\circ$C), and the bottom temperature (i.e. $0^\circ$C). Results for sections \ref{sec:results_num_conv} - \ref{sec:results_num_igw} are computed from a $(x,z)$ vertical plane located along a cylinder diameter. \subsubsection{\label{sec:results_num_conv}Large-scale circulation in the convection zone and buffer layer} Figure \ref{fig:LSC_num} shows that a large-scale circulation takes place in the convective layer. It consists of a cell filling the whole convective layer, and exhibits no reversal over the whole course of the simulation. The fluid rotates counterclockwise in the vertical plane. This is qualitatively consistent with the mean flow observed in the experiment and shown in the right panel of figure \ref{fig:LSCexpe}. As in the experiments, a counter current exists at the top of the convective layer at $z= 120$~mm, creating a strong shear and demonstrating the existence of a buffer layer in the numerical simulation as well. \begin{figure}[h] \centering \includegraphics[scale = .47]{quiver_inst_color.pdf} \hspace{.1cm} \includegraphics[scale = .47]{quiver_moyen_color.pdf} \caption{(Left) Instantaneous velocity field. An ascending plume is visible at $x=230$ mm. (Right) Large-scale circulation in the convective layer obtained by time-averaging velocities over a 50 minutes recording. The large-scale circulation is a counterclockwise cell.
Maximum instantaneous velocities are three times larger than the maximum averaged velocities.} \label{fig:LSC_num} \end{figure} \begin{figure*}[t] \centering \includegraphics[scale = 0.5]{Spatio_temp_Ux_v3_dim.pdf} \hspace{.1cm} \includegraphics[scale = 0.5]{profil_T_all.pdf} \caption{(Left) Horizontal average of the horizontal velocity $u$ over a vertical cross-section in the middle of the tank. The buffer layer can be seen above $z=120$~mm. A stationary large-scale circulation is present in the convective layer, even if it appears quite perturbed at the end of the signal. (Right) Temperature profiles along the $z$-axis.} \label{fig:num_buffer} \end{figure*} The space-time diagram of the mean horizontal flow shown in figure \ref{fig:num_buffer} confirms this. Observing the buffer layer in the absence of side thermal losses and top temperature heterogeneity is an additional argument that it is not an artefact of imperfect experimental conditions. We also observe that the flow within the convection stays positive through time at the bottom and negative at the top. This is evidence of the steady large-scale circulation taking place in the convective layer. Some events appear at $t > 1.42 \times 10^{5}$~s and are interpreted as quasi-reversals of the large-scale circulation. \begin{figure}[t] \centering \includegraphics[scale=.5]{quiverIsotherme.pdf} \caption{Velocity field and temperature isotherms at the end of an upward plume impact on the interface.} \label{fig:num_quiverisoT} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=.6]{corre_WetU.pdf} \caption{(Left) Spatio-temporal diagram of the horizontal velocity $u$ at $z=128$~mm. (Right) Spatio-temporal diagram of the vertical velocity $w$ at $z=108$~mm.
The event at $t \approx 1.052 \times 10^5$~s is shown in figure \ref{fig:num_quiverisoT}.} \label{fig:num_correWU} \end{figure} The temperature profile along the $z$ axis is also plotted on the right panel of figure \ref{fig:num_buffer}. The figure shows a temporal and horizontal average of the temperature field (black thick curve), two temporal averages at two different positions $x=14$~mm (left side of the tank - dashed grey) and $x=277$~mm (right side - dotted grey) and an instantaneous profile at $x=145$~mm (middle of the tank - thick grey with crosses). The thermal boundary layer can be seen, between $z=0$~mm and $z=10$~mm. Then, between $z=10$~mm and $z=100$~mm lies a layer of constant temperature $T \sim 2.8^\circ$C. In the range $100 \mathrm{~mm} \leqslant z \leqslant 115 \mathrm{~mm}$, the temperature profile evolves from constant to linear, the latter holding for $z > 115$~mm. The $T=4^\circ$C (respectively $T=8^\circ$C) isotherm is located at $z = 110$~mm (resp. $z = 120$~mm). Note that the temporal averages of the temperature profiles differ between the left and right sides of the tank. Indeed, the constant temperature height goes to $z=90$~mm for the left side whereas it goes to $z=115$~mm for the right side. This suggests that the convective / buffer layer interface does not lie at a single height over the whole tank but is a function of time and space. This is very likely due to the large-scale circulation. Thus, the thermal coupling described in section \ref{sec:results:buffer} will likely occur at different heights, depending on time and horizontal position. The thermal coupling as schematised in figure \ref{fig:schema_couplage} can be found in the numerical simulation. This is represented in figure \ref{fig:num_quiverisoT}. An upward plume impacting the convective / buffer layer interface is seen. The isotherms ranging from $T=4^\circ$C to $T=11^\circ$C are deflected upward, due to the plume bringing cold fluid upward.
On the contrary, the isotherms $T = 12$--$14^\circ$C are deflected downward by the converging flow. Isotherms at $T \geqslant 15^\circ$C remain horizontal. After the impact on the interface, the plume is deflected outwards. One could expect the fluid above the impact to be viscously entrained by this outward deflection. However, as observed in figure \ref{fig:num_quiverisoT} for the simulation and figures \ref{fig:champ_correle}-\ref{fig:champ_pascorrele} for the experiment, the fluid above the interface is going towards the plume, \textit{i.e.} in the opposite direction of the fluid below, hence explaining the observed shear (see figures \ref{fig:num_quiverisoT} and \ref{fig:schema_couplage}). Figure \ref{fig:num_correWU} shows the time evolution of these dynamics: the horizontal velocity $u$ in the shear layer at $z=128$~mm and the vertical velocity $w$ in the convective layer at $z=108$~mm. Comparing the two panels of figure \ref{fig:num_correWU} shows that upward plumes are concomitant with converging horizontal velocities towards the plume impact. Indeed, the spatio-temporal diagram of $w$ exhibits local strong upward plumes. These plumes, as suggested by the dashed black lines, are correlated in time and space with converging horizontal velocities. For instance, an upward plume is seen at $x\approx220$~mm and $t\approx1.043 \times 10^5$~s. At the same horizontal position and time, the positive horizontal velocity becomes stronger and the negative horizontal velocity patch increases in size to reach $x\approx220$~mm. The converging horizontal velocity events occur shortly after the impact of the plumes. Thus, it can be concluded that the plumes induce the converging flow, as suggested by our explanation in section \ref{sec:results:buffer}.
\clearpage \subsubsection{\label{sec:results_num_igw}Internal gravity waves} \begin{figure}[h] \centering \includegraphics[scale = 0.55]{spectro_vfinale.pdf} \hspace{.1cm} \includegraphics[scale = .55]{plot_attenuation_norm_num_v2.pdf} \caption{(Left) Power spectral density of the absolute velocity $\sqrt{u^2+w^2}$ in the buffer and stratified layers. The grey curve shows the buoyancy frequency profile computed from the spatial and temporal average of the temperature field. (Right) Two selected profiles (taken at frequencies shown by dashed lines on the left graph) of the PSDs re-scaled by the PSD at the top of the convective layer, \textit{i.e.} $z=118$~mm.} \label{fig:num_spectro} \end{figure} PSDs are computed in the stratified and buffer layers and are plotted in figure \ref{fig:num_spectro}. As for the experiment (figure \ref{fig:spectro}), numerical results show oscillatory motions at different frequencies attenuated with height. Experimental results (figure \ref{fig:spectro}) and numerical results (figure \ref{fig:num_spectro}) show strongly similar dynamics: most of the energy is present at low frequencies ($f<3 \times 10^{-3}$~Hz). The motions with frequencies ranging from $3 \times 10^{-3}$~Hz to $N$ are less intense, and almost no energy is seen at frequencies $f>N$. The right panel of figure \ref{fig:num_spectro} shows two selected vertical profiles (shown by the white dashed lines on the left panel) of the PSD re-scaled by the PSD at $z=118$~mm. The energy for the higher frequency ($f=1.4\times10^{-2}$~Hz) decreases more slowly than the energy for the lower frequency ($f=5.0\times10^{-3}$~Hz). As for the experimental results, this is in agreement with the dispersion relation of IGWs.
The overall behaviour of the wave spectra is similar in the experiment and the numerical simulation: the attenuation length is independent of frequency in the low-frequency range, thus confirming the viscous coupling origin of the large-scale flow, and increases as the frequency approaches $N$ in the wave domain. \subsubsection{Large-scale flow within the stratified layer} \begin{figure*}[h] \centering \includegraphics[scale = 0.6]{Spatiotemp_Vtheta_cyl_v2_square_dim.pdf} \hspace{.1cm} \includegraphics[scale = 0.6]{Spatiotemp_Vtheta_cyl_v2_zoom_dim.pdf} \caption{Spatio-temporal diagrams of the azimuthal averaging of the azimuthal velocity inside a virtual cylinder of radius $r = 140~$mm. The bottom figure is a zoom on the stratified zone, delimited in the top figure by the black square. The slope of the black lines shows the theoretical viscous diffusive time.} \label{fig:num_vthetacyl} \end{figure*} Similarly to what has been done for the experimental data, figure \ref{fig:num_vthetacyl} shows the mean azimuthal velocity over the whole height of a virtual cylinder of radius $r = 140$ mm. We observe reversals within the convective layer ($z<120$ mm), which are not systematically correlated with the signal in the stratified layer. The mean velocity in the stratified layer also exhibits reversals. They are characterised by an upward phase propagation from the buffer zone at $z=120$ mm, as shown in the zoom (bottom panel of figure \ref{fig:num_vthetacyl}). The phase velocity seen in figure \ref{fig:num_vthetacyl} is in good agreement with the theoretical time for viscous propagation $t = \frac{z^2}{4\,\nu\, \mathrm{erf}^{-2}\left(\left(\frac{u}{u_b}-1\right)\right)}$. This corroborates the fact that the reversals observed within the stratified layer are viscously driven by the dynamics occurring in the buffer layer, as was seen in the experiment. Reversal times range from $300$~s to $1500$~s.
These reversal times are similar to the experimental ones, though slightly shorter (numerical reversals are $\sim20$\% faster than experimental ones). \clearpage \section{Conclusion}\label{sec:discussion} The 4$^{\circ}$C convection experiment, originally performed by Townsend \cite{townsend_natural_1964}, has been re-investigated using long-term PIV measurements in a vertical cross-section, and in several horizontal cross-sections within the stratified layer. The latter measurements have allowed us to investigate for the first time the long-term horizontal mean flow in the stratified layer. Experiments have been complemented by direct numerical simulations. The first result of this paper is the confirmation, in 3D and with ideal boundary conditions, of the presence of a buffer layer, including an overshooting region as first observed by Perrard \cite{perrard_experimental_2013}, and a shear region. We have argued that the buffer layer is driven by thermal coupling with the convection, due to the non-linear equation of state of water, and that this mechanism is a priori related to a Prandtl number larger than one. The second result is that the buffer layer viscously drives slow reversals of the horizontal large-scale flow within the stratified layer. Additionally, IGWs at different frequencies propagate in the stratified layer. They likely interact with the horizontal large-scale flow, and probably also produce a reversing flow, which superimposes on the viscously driven one. From Couston et al. \cite{couston_order_2018}, we know that the Prandtl number has a strong influence on this QBO-like mechanism: the lower the Prandtl number, the stronger the amplitude of the QBO. In water, $Pr \sim 7$, and the expected amplitude of the large-scale QBO flow is weak, hence dominated by the viscous driving. Further experimental studies at lower Prandtl number should allow the two contributions to be disentangled.
One could for instance suggest using gas as a working fluid; however, the absence of density reversal around a given temperature will necessitate considering either transient experiments such as \cite{deardorff_laboratory_1969, michaelian_coupling_2002}, or two-gas experiments which might then be prone to double-diffusive instabilities. Experimentally, the question also remains why the only successful QBO experiment has been performed in salty water, hence with a Schmidt number (equivalent to $Pr$) of 700. \begin{figure}[h] \centering \includegraphics[scale=.6]{U_avgX.pdf} \caption{Horizontal average of the horizontal velocity $u$ over a vertical cross section in the middle of the numerical domain for $Pr = 0.1$.} \label{fig:num_pr01} \end{figure} In the meantime, it is straightforward to change the Prandtl number in the numerical simulation of our set-up. We have thus run a second simulation with the same Rayleigh number $Ra = 10^7$ and top temperature $\theta_{top} = 11$ but with $Pr = 0.1$. In this simulation, as shown in figure \ref{fig:num_pr01}, no buffer layer is observed, but strong signatures of a QBO-like mechanism are visible, marked by downward phase propagation of the reversals of the large-scale flow. This configuration thus deserves a more systematic study in the future. \section*{Acknowledgements} The authors acknowledge funding by the European Research Council under the European Union's Horizon 2020 research and innovation program through Grant No. 681835-FLUDYCO-ERC-2015-CoG. This work was granted access to the HPC resources of Aix-Marseille Universit\'e financed by the project Equip@Meso (ANR-10-EQPX-29-01) of the program “Investissements d'Avenir” supervised by the Agence Nationale de la Recherche. Computations were also conducted with the support of the HPC resources of GENCI-IDRIS (Grant No. A0060407543).
\section*{ Introduction} The classical Myers's theorem (\cite{M}) and Cheng's maximal diameter theorem (\cite{Ch}) in Riemannian geometry are well known. As for their generalizations, we can refer to \cite{BL}, \cite{Q} and \cite{R}. In Finsler geometry, the Myers-type theorem (\cite{Sh1}) states that if $(M,F,d\mu)$ is a complete, connected Finsler $n$-manifold such that $\textmd{Ric}\geq (n-1)k > 0$, then its diameter $Diam(M)\leq\frac{\pi}{\sqrt{k}}$. Using the weighted Ricci curvature condition, Ohta (\cite{O1}) proved an analogue of Myers's theorem (see Lemma 1.2 in Section 1 below). As in the case of Riemannian geometry, it is natural to ask what happens if the diameter attains its maximal value. In \cite{KY}, the authors obtained the following Cheng-type maximal diameter theorem. \vspace{1.5mm} \hspace{-4.5mm}\textbf{Theorem A.} (\cite{KY}) \emph{Let $(M,F,d\mu)$ be a complete reversible connected Finsler $n$-manifold with the Busemann-Hausdorff volume form. If the weighted Ricci curvature satisfies $\emph{Ric}_n\geq (n-1)k > 0$ and $Diam(M)=\frac{\pi}{\sqrt{k}}$, then $(M,F)$ is isometric to the Euclidean sphere $\mathbb{S}^n(\frac{1}{\sqrt{k}})$.} \vspace{1mm} This result describes the rigidity for reversible Finsler manifolds, and then reduces to the Riemannian case. However, as we know, most Finsler manifolds are not reversible, such as Randers spaces. We might, of course, wonder whether or not there exist general Finsler manifolds attaining the maximal diameter. In this paper, we will give a positive answer to this question. Let $\mathbb{S}^n$ be an $n$-sphere equipped with a Finsler metric $F$ and a volume form $d\mu$. If it has constant flag curvature $k$, vanishing $S$-curvature and $Diam=\frac{\pi}{\sqrt{k}}$, then we call it a standard Finsler sphere and denote it by $(\mathfrak{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$. This definition is inspired by Bao-Shen's result (see Example 2.4 below).
Clearly, the Euclidean sphere $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ is a standard Finsler sphere. If $F$ is a Randers metric, we call it a standard Randers sphere denoted by $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$. Here $F$ is determined by navigation data $(\mathfrak{g},W)$, where $\mathfrak{g}$ is the standard sphere metric and $W$ is a Killing vector. More details are given in Section 2 below. \vspace{1.5mm} \hspace{-4.5mm}\textbf{Theorem 0.1.} \emph{Let $(M,F,d\mu)$ be a complete connected Finsler n-manifold with the Busemann-Hausdorff volume form. If the weighted Ricci curvature satisfies $\emph{Ric}_n\geq(n-1)k>0$ and $Diam(M)=\frac{\pi}{\sqrt{k}}$, then $(M,F)$ is isometric to a standard Finsler sphere. In particular, if $n\geq 3$ and F is an $(\alpha,\beta)$ metric, then $(M,F)$ is isometric to a standard Randers sphere.} \vspace{1mm} The weighted Ricci curvature condition $\textmd{Ric}_n\geq(n-1)k>0$ means that $\textmd{Ric}\geq(n-1)k>0$ and the $S$-curvature $S=0$ (see the definition in Sec.1 below). The above definition of the standard Finsler sphere is reasonable. We will see that in the Finsler setting the vanishing $S$-curvature is a necessary condition and one cannot deduce the maximal diameter from the condition of constant flag curvature $k$ alone. If $F$ is reversible, the Finsler sphere is just the standard Euclidean sphere $\mathbb{S}^n(\frac{1}{\sqrt{k}})$, and if $F$ is not reversible, the Finsler spheres never exist singly but always in pairs. Unfortunately, we cannot further characterize the standard Finsler sphere metric. This is because the classification of Finsler manifolds with constant flag curvature remains unknown. Theorem 0.1 covers Theorem A and shows that, apart from the Euclidean sphere, the maximal diameter can be attained by countless Finsler metrics on a sphere.
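For concreteness, we recall that the Zermelo navigation description yields the Randers metric explicitly: given the navigation data $(\mathfrak{g},W)$ with $\mathfrak{g}(W,W)<1$, one has $$F(x,y)=\frac{\sqrt{\lambda\,\mathfrak{g}(y,y)+\mathfrak{g}(W,y)^2}}{\lambda}-\frac{\mathfrak{g}(W,y)}{\lambda},\qquad \lambda:=1-\mathfrak{g}(W,W),$$ which is reversible only when $W=0$, in which case it reduces to the standard sphere metric; a nonzero Killing field $W$ thus produces the non-Riemannian standard Randers spheres above.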
In general, if the manifold has an arbitrary volume form $d\mu$ with a suitable restriction, and the weighted Ricci curvature satisfies $\textmd{Ric}_N\geq(N-1)k>0$ for some real number $N\in[n,\infty)$, the conclusion still holds (see Theorems 2.1 and 2.8 below). Since the Finsler manifolds discussed are not necessarily reversible, some methods in \cite{KY} do not work and we have to explore a new approach. For example, the reverse of a geodesic $\gamma$ is not necessarily a geodesic, the distance $d_F(p,q)\neq d_F(q,p)$, the gradient and Laplacian of a function satisfy $\nabla(-f)\neq-\nabla f,\Delta(-f)\neq-\Delta f$ in general, and so on. These lead to many difficulties in computation and reasoning. For this, we make full use of the reverse Finsler metric $\overleftarrow{F}$, which is related to the metric $F$ through its geodesics, distances, gradients, Laplacians, curvatures, and so on. This technique gives another way to deal with nonreversible Finsler manifolds (see the proof of Theorem 2.1 below for details). As an application, we can use it to characterize the rigidity of Finsler manifolds when the first eigenvalue of the Finsler-Laplacian attains its lower bound. More precisely, we obtain the following Obata-type rigidity theorem: \vspace{1.5mm} \hspace{-4.5mm}\textbf{Theorem 0.2.} \emph{Let $(M,F,d\mu)$ be a complete connected Finsler n-manifold with the Busemann-Hausdorff volume form. If the weighted Ricci curvature satisfies $\emph{Ric}_n\geq(n-1)k>0$, then the first eigenvalue of the Finsler-Laplacian $\lambda_1=nk$ if and only if $(M,F)$ is isometric to a standard Finsler sphere. In particular, if $n\geq 3$ and F is an $(\alpha,\beta)$ metric, then the equality holds if and only if $(M,F)$ is isometric to a standard Randers sphere.} \vspace{1mm} In \cite{YHS1}, the authors obtained an Obata-type rigidity theorem by using Theorem A. Since the condition there is rather strong, the manifold which attains the lower bound of the first eigenvalue reduces to the Euclidean sphere.
By contrast, Theorem 0.2 demonstrates that there are infinitely many non-Riemannian manifolds satisfying such a property. In addition, we also construct the first eigenfunctions of the Finsler-Laplacian (see the proofs of Theorems 3.1 and 3.2 below). The paper is organized as follows. In Section 1, some fundamental concepts and formulas which are necessary for the present paper are given, and some lemmas are collected. The maximal diameter theorem and the Obata-type rigidity theorem are then proved in Section 2 and Section 3, respectively. \section{ Preliminaries} Let $M$ be a smooth $n$-manifold and $\pi :TM\to M$ be the natural projection from the tangent bundle $TM$. Let $(x,y)$ be a point of $TM$ with $x\in M$, $y\in T_xM$, and let $(x^i,y^i)$ be the local coordinates on $TM$ with $y=y^i\partial /\partial x^i$. A {\it Finsler metric} on $ M$ is a function $F:TM\to [0,+\infty )$ satisfying the following properties: (i){\it Regularity}: $F(x,y)$ is smooth in $TM\setminus 0$; (ii){\it Positive homogeneity}: $F(x,\lambda y)=\lambda F(x,y)$ for $\lambda >0$; (iii){\it Strong convexity}: The fundamental quadratic form $$g:=g_{ij}(x,y)dx^i\otimes dx^j,\qquad g_{ij}:=\frac 12[F^2]_{y^i y^j}$$ is positive definite. Let $X=X^{i}\frac{\partial}{\partial x^{i}}$ be a vector field. Then the \emph{covariant derivative} of $X$ by $v\in T_{x}M$ with reference vector $w\in T_{x}M\backslash 0$ is defined by $$D^{w}_{v}X(x):=\left\{v^{j}\frac{\partial X^{i}}{\partial x^{j}}(x) +\Gamma^{i}_{jk}(w)v^{j}X^{k}(x)\right\}\frac{\partial}{\partial x^{i}},$$ where $\Gamma^{i}_{jk}$ denote the coefficients of the Chern connection.
Given two linearly independent vectors $V,W\in T_{x}M\backslash0$, the flag curvature is defined by $$K(V,W):=\frac{g_{V}(R^{V}(V,W)W,V)}{g_{V}(V,V)g_{V}(W,W)-g_{V}(V,W)^{2}},$$ where $R^{V}$ is the \emph{Chern curvature}: $$R^{V}(X,Y)Z=D^{V}_{X}D^{V}_{Y}Z-D^{V}_{Y}D^{V}_{X}Z-D^{V}_{[X,Y]}Z.$$ Then the Ricci curvature for $(M,F)$ is defined as $$\textmd{Ric}(V)=\sum_{\alpha=1}^{n-1} K(V,e_{\alpha}),$$ where $e_{1},\cdots,e_{n-1},\frac{V}{F(V)}$ form an orthonormal basis of $T_{x}M$ with respect to $g_{V}$. Let $(M,F,d\mu)$ be a Finsler $n$-manifold. Given a vector $V\in T_xM$, let $\gamma :(-\varepsilon ,\varepsilon )\to M$ be a geodesic with $\gamma (0)=x,~\dot\gamma (0)=V$. Define $${\dot S}(V):=F^{-2}(V)\frac{d}{d t}[S(\gamma (t),\dot\gamma(t))]_{t=0},$$ where $S(V)$ denotes the $S$-curvature at $(x,V)$. The {\it weighted Ricci curvature} of $(M,F,d\mu)$ is defined by (see \cite{O1}) $$\left\{\begin{array}{l}{\textmd{Ric}}_{n}(V):=\left\{\begin{array}{l}{\textmd{Ric}}(V)+\dot S(V),\quad{\rm for}~~S(V)=0,\\ -\infty,\qquad\qquad\qquad{\rm otherwise},\end{array}\right.\\ {\textmd{Ric}}_{N}(V):={\textmd{Ric}}(V)+\dot S(V)-\frac {S(V)^{2}}{(N-n)F(V)^{2}},~~\forall~~N\in (n,\infty),\\ {\textmd{Ric}}_{\infty}(V):={\textmd{Ric}}(V)+\dot S(V),\end{array}\right.$$ For a smooth function $u$, the \emph{gradient vector} of $u$ at $x$ is defined by $\nabla u(x):=\mathcal{L}^{-1}(du)$, where $\mathcal{L}:T_xM\to T_x^*M$ is the Legendre transform. Let $V=V^{i}\frac{\partial}{\partial x^{i}}$ be a smooth vector field on $M$. The \emph{divergence} of $V$ with respect to an arbitrary volume form $d\mu$ is defined by $$ \textmd{div}V :=\sum_{i=1}^{n}\left(\frac{\partial V^{i}}{\partial x^{i}}+V^{i}\frac{\partial \Phi}{\partial x^{i}}\right),$$ where $d\mu =e^{\Phi}dx$. Then the \emph{Finsler-Laplacian} of $u$ can be defined by(\cite{GS}) $$\Delta u:=\textmd{div}(\nabla u), $$ where the equality is in the weak $W^{1,2}(M)$ sense. 
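For orientation, these notions reduce to the familiar Riemannian ones: if $F(x,y)=\sqrt{g_{ij}(x)y^iy^j}$ for a Riemannian metric $g$ and $d\mu$ is the Riemannian volume form, then $g_V=g$ for every $V$, the flag curvature $K(V,W)$ is the sectional curvature of the plane spanned by $V$ and $W$, and the $S$-curvature vanishes, so that $$\textmd{Ric}_n(V)=\textmd{Ric}(V),\qquad \textmd{Ric}_N(V)=\textmd{Ric}(V)~~\textmd{for~all}~~N\in(n,\infty],$$ i.e. the weighted Ricci curvature conditions below all reduce to the usual bound $\textmd{Ric}\geq(N-1)k$.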
We remark here that, up to now, besides this definition, there have been several other definitions of Finsler-Laplacians, introduced by Antonelli-Zastawniak (\cite{AL}), Bao-Lackey (\cite{BL2}), Barthelm\'{e} (\cite{B}) and Centore (\cite{C}), respectively. Using the Finsler-Laplacian, Ohta obtained the following comparison theorems under the weighted Ricci curvature condition. \begin{lemma}\cite{OS} Let $(M,F,d\mu)$ be a Finsler $n$-manifold. If the weighted Ricci curvature satisfies $\emph{Ric}_{N}\geq (N-1)k>0,N\in[n,\infty)$, then the Laplacian of the distance function $r(x)=d_{F}(p,x)$ from any given point $p\in M$ can be estimated as follows: $$\Delta r\leq(N-1)\sqrt{k}\cot(\sqrt{k}r)$$ pointwise on $M\backslash (\{p\}\cup \textmd{Cut}(p))$ and in the sense of distributions on $M\backslash \{p\}$. \end{lemma} \begin{lemma}\cite{O1} Let $(M,F,d\mu)$ be a Finsler $n$-manifold. If the weighted Ricci curvature satisfies $\emph{Ric}_{N}\geq (N-1)k>0,N\in[n,\infty)$, then $Diam M\leq\frac{\pi}{\sqrt{k}}$, and for any $0<r<R$, it holds that $$\max\left\{\frac{\emph{vol}^{d\mu}_FB^+_x(R)}{\emph{vol}^{d\mu}_FB^+_x(r)},\frac{\emph{vol}^{d\mu}_FB^-_x(R)}{\emph{vol}^{d\mu}_FB^-_x(r)}\right\} \leq \frac{\int_0^{R}(\frac{\sin \sqrt{k}t}{\sqrt{k}})^{N-1}dt}{\int_0^{r}(\frac{\sin \sqrt{k}t}{\sqrt{k}})^{N-1}dt}.$$ \end{lemma} \vspace{1mm} \section{The maximal diameter theorem} We now consider Finsler manifolds whose diameter attains its maximal value. It is shown that they must have constant flag curvature and vanishing $S$-curvature. But it seems very difficult to characterize the manifolds further, since the classification problem has not been solved so far. There are two main obstacles to overcome in our proof. Firstly, Finsler metrics are not reversible in general. This brings about difficulties in some calculations and arguments: estimates that rely on the reversibility of $F$ no longer work, and some quantities must be computed precisely.
For this, we have to make full use of the forward (resp. backward) geodesics (balls), the forward (resp. backward) distance function, the reverse Finsler metric and some corresponding comparison theorems. Secondly, we have to impose a constraint on the volume form $d\mu$, which is satisfied by the Busemann-Hausdorff volume form if $N=n$. By means of this, we can prove that for any point $p\in M$, there exists a point $q$ such that $d_F(p,q)$ attains the diameter. Thus, by the arbitrariness of the point $p$, we can further compute the flag curvature and the $S$-curvature. Let $(r,\theta)$ be the polar coordinate around $z\in M$ and write the volume form as $d\mu=\sigma_z(r,\theta)drd\theta$. Set $S_xM:=\{y|y\in T_xM,F(y)=1\}$. Then we can state \begin{theorem} Let $(M,F,d\mu)$ be a complete connected Finsler n-manifold. If the weighted Ricci curvature and the volume form satisfy $\emph{Ric}_{N}\geq (N-1)k>0$, $\lim\limits_{r\to0}\int_{S_zM}\frac{\sigma_z(r,\theta)}{r^{N-1}}d\theta=C,\forall z\in M$ for some real number $C>0,N\in[n,\infty)$, and $Diam M=\frac{\pi}{\sqrt{k}}$, then the flag curvature $K=k$, the $S$-curvature $S=0$, and $M$ is homeomorphic to the $n$-sphere $\mathbb{S}^{n}$. \end{theorem} \begin{remark} Theorem 2.1 means that if $M$ attains its maximal diameter, then $N=n$ and thus $\textmd{Ric}_{N}=\textmd{Ric}_{n}=\textmd{Ric}$. In other words, for any fixed number $N>n$, the diameter of $M$ cannot achieve $\frac{\pi}{\sqrt{k}}$. \end{remark} \begin{proof} Take $p,q\in M$ such that $d_F(p,q)=\frac{\pi}{\sqrt{k}}$. Let $r_p^+(x)=d_F(p,x)$ be the forward distance function from $p$ and $r_q^-(x)=d_F(x,q)$ be the backward distance function from $q$. For any point $x\in M$, we claim: \begin{align}\label{1} d_F(p,x)+d_F(x,q)=d_F(p,q)=\frac{\pi}{\sqrt{k}}. \end{align} If not, then there exists a number $\varepsilon>0$ such that $d_F(p,x)+d_F(x,q)=\frac{\pi}{\sqrt{k}}+2\varepsilon.$ Set $r_1=d_F(p,x)-\varepsilon$ and $r_2=d_F(x,q)-\varepsilon$.
It is clear that $r_1>0,r_2>0$. Then $B_p^+(r_1)$, $B_q^-(r_2)$ and $B_x^{\pm}(\varepsilon)$ are pairwise disjoint, where $B_p^+(r_1)$ and $B_q^-(r_2)$ are the forward (resp.\ backward) geodesic balls centered at $p$ (resp.\ $q$) of radius $r_1$ (resp.\ $r_2$) and $B_x^{\pm}(\varepsilon)=B_x^{+}(\varepsilon)\cap B_x^{-}(\varepsilon)$. In fact, if $y\in B_p^+(r_1)\cap B_q^-(r_2)$, which means that $d_F(p,y)< r_1$ and $d_F(y,q)< r_2$, then $$d_F(p,q)\leq d_F(p,y)+d_F(y,q) < r_1+r_2=d_F(p,x)+d_F(x,q)-2\varepsilon=\frac{\pi}{\sqrt{k}}.$$ This is a contradiction. On the other hand, if $y\in B_p^+(r_1)\cap B_x^{\pm}(\varepsilon)$, which means that $d_F(p,y)<r_1$ and $d_F(y,x)< \varepsilon, d_F(x,y)<\varepsilon $, then $$ r_1+ \varepsilon=d_F(p,x)\leq d_F(p,y)+d_F(y,x)<r_1+ \varepsilon,$$ which is also a contradiction. By a similar argument, we can further conclude that $B_q^-(r_2)\cap B_x^{\pm}(\varepsilon)=\emptyset$. Next, by the volume comparison theorem (Lemma 1.2), we have \begin{align}\label{2} 1&=\frac{\textmd{vol}_F^{d\mu}(M)}{\textmd{vol}_F^{d\mu}(M)}\geq \frac{\textmd{vol}_F^{d\mu}B_p^+(r_1)+\textmd{vol}_F^{d\mu}B_q^-(r_2)+ \textmd{vol}_F^{d\mu}(B_x^{\pm}(\varepsilon))}{\textmd{vol}_F^{d\mu}(M)}\nonumber\\ &\geq\frac{\int_0^{r_1}(\frac{\sin \sqrt{k}t}{\sqrt{k}})^{N-1}dt+ \int_0^{r_2}(\frac{\sin \sqrt{k}t}{\sqrt{k}})^{N-1}dt}{\int_0^{\frac{\pi}{\sqrt{k}}}(\frac{\sin \sqrt{k}t}{\sqrt{k}})^{N-1}dt} +\frac{\textmd{vol}_F^{d\mu}(B_x^{\pm}(\varepsilon))}{\textmd{vol}_F^{d\mu}(M)}. \end{align} Note that $r_1+r_2=\frac{\pi}{\sqrt{k}}$. Hence, $$\int_0^{r_1}\left(\frac{\sin \sqrt{k}t}{\sqrt{k}}\right)^{N-1}dt =\int_{r_2}^{\frac{\pi}{\sqrt{k}}}\left(\frac{\sin \sqrt{k}t}{\sqrt{k}}\right)^{N-1}dt.$$ Therefore, (\ref{2}) can be rewritten as follows $$1\geq1+\frac{\textmd{vol}_F^{d\mu}(B_x^{\pm}(\varepsilon))}{\textmd{vol}_F^{d\mu}(M)},$$ which implies that $\textmd{vol}_F^{d\mu}(B_x^{\pm}(\varepsilon))=0$ and hence $\varepsilon=0$. This contradicts the assumption above that $\varepsilon>0$.
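The integral identity used above can be verified directly: under the substitution $t=\frac{\pi}{\sqrt{k}}-s$, using $r_1+r_2=\frac{\pi}{\sqrt{k}}$ and $\sin(\pi-x)=\sin x$, one finds $$\int_{r_2}^{\frac{\pi}{\sqrt{k}}}\left(\frac{\sin \sqrt{k}t}{\sqrt{k}}\right)^{N-1}dt =\int_{0}^{r_1}\left(\frac{\sin(\pi-\sqrt{k}s)}{\sqrt{k}}\right)^{N-1}ds =\int_{0}^{r_1}\left(\frac{\sin \sqrt{k}s}{\sqrt{k}}\right)^{N-1}ds.$$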
From (\ref{1}) we obtain $$r_p^+(x)+r_q^-(x)=\frac{\pi}{\sqrt{k}},$$ which gives $$\Delta r_p^+(x)=\Delta\left(\frac{\pi}{\sqrt{k}}-r_q^-(x)\right)=\Delta(-r_q^-(x))=-\overleftarrow{\Delta}r_q^-(x),$$ where $\overleftarrow{\Delta}$ denotes the Laplacian of the reverse Finsler metric $\overleftarrow{F}(x,y):=F(x,-y)$. Further, for the reverse Finsler metric $\overleftarrow{F}$, $d_{\overleftarrow{F}}(p,q)=d_{F}(q,p)$, $\overleftarrow{\nabla} u=-\nabla(-u)$ and $\overleftarrow{\textmd{Ric}}_N(x,y)=\textmd{Ric}_N(x,-y)$. Since $\textmd{Ric}_N(x,y)\geq(N-1)k,\forall y\in T_xM$, we also have $\overleftarrow{\textmd{Ric}}_N(x,y)\geq(N-1)k,\forall y\in T_xM$. Thus, applying the Laplacian comparison theorem (Lemma 1.1) to the reverse Finsler metric $\overleftarrow{F}$, we have \begin{align} (N-1)\sqrt{k}\cot(\sqrt{k}r_p^+(x))&\geq\Delta r_p^+(x)= -\overleftarrow{\Delta}(r_q^-(x))\nonumber\\ &\geq-(N-1)\sqrt{k}\cot(\sqrt{k}r_q^-(x))\nonumber\\ &=(N-1)\sqrt{k}\cot(\sqrt{k}r_p^+(x))\nonumber \end{align} which yields \begin{align}\label{a0} \Delta r_p^+(x)=(N-1)\sqrt{k}\cot(\sqrt{k}r_p^+(x)). \end{align} In the following, we write $r$ instead of $r_p^+(x)$ for simplicity. Direct computation gives \begin{align}\label{3} \frac{\partial}{\partial r}(\Delta r)+\frac{(\Delta r)^2}{N-1}=-(N-1)k. \end{align} Let $S_{p}(r(x))$ be the forward geodesic sphere of radius $r(x)$ centered at $p$. Choosing a local $g_{\nabla r}$-orthonormal frame $E_{1},\cdots,E_{n-1}$ of $S_{p}(r(x))$ near $x$, we get local vector fields $E_{1},\cdots,E_{n-1},E_{n}=\nabla r$ by parallel transport along geodesic rays. Thus, it follows from \cite{WX} that \begin{align}\label{4} \frac{\partial}{\partial r}\textmd{tr}_{\nabla r}H(r)=-\textmd{Ric}(\nabla r)-\sum_{i,j}[H(r)(E_i,E_j)]^2, \end{align} where $H(r)$ is the Hessian of the distance function $r$. On the other hand, one has (\cite{WX}) \begin{align}\label{5} \Delta r=\textmd{tr}_{\nabla r}H(r)-S(\nabla r).
\end{align} Therefore, from (\ref{3})-(\ref{5}), we derive \begin{align}\label{6} -(N-1)k&=\frac{\partial}{\partial r}(\Delta r)+\frac{(\Delta r)^2}{N-1}\nonumber\\ &=\frac{\partial}{\partial r}(\textmd{tr}_{\nabla r}H(r)-S(\nabla r))+\frac{1}{N-1}(\textmd{tr}_{\nabla r}H(r)-S(\nabla r))^2\nonumber\\ &\leq\frac{\partial}{\partial r}\textmd{tr}_{\nabla r}H(r)-\dot{S}(\nabla r) +\frac{1}{n-1}(\textmd{tr}_{\nabla r}H(r))^2+\frac{S(\nabla r)^2}{N-n}\nonumber\\ &\leq\frac{\partial}{\partial r}\textmd{tr}_{\nabla r}H(r)-\dot{S}(\nabla r)+\sum_{i,j}[H(r)(E_i,E_j)]^2+\frac{S(\nabla r)^2}{N-n}\nonumber\\ &=-\textmd{Ric}(\nabla r)-\dot{S}(\nabla r)+\frac{S(\nabla r)^2}{N-n}\nonumber\\ &=-\textmd{Ric}_N(\nabla r)\leq-(N-1)k, \end{align} where the first inequality follows from the elementary inequality below with $a=\textmd{tr}_{\nabla r}H(r)$ and $b=S(\nabla r)$: \begin{align}\label{7} \frac{(a-b)^2}{N-1}&=\frac{a^2}{n-1}+\frac{b^2}{N-n}-\frac{N-n}{(n-1)(N-1)}(a+\frac{n-1}{N-n}b)^2\nonumber\\ &\leq\frac{a^2}{n-1}+\frac{b^2}{N-n}. \end{align} Using (\ref{6}) and (\ref{7}), we obtain \begin{align} \left\{ \begin{array}{ll} &\frac{\textmd{tr}_{\nabla r}H(r)}{n-1}=\frac{-S(\nabla r)}{N-n}=\frac{\Delta r}{N-1},\\ &\\ & \sum_{i,j}[H(r)(E_i,E_j)]^2=\frac{1}{n-1}(\textmd{tr}_{\nabla r}H(r))^2.\nonumber \end{array}\right. \end{align} Thus, \begin{align}\label{8} \nabla^2r(E_i,E_j)&:=H(r)(E_i,E_j)\nonumber\\ &=\left\{ \begin{array}{ll} \frac{\textmd{tr}_{\nabla r}H(r)}{n-1}=\frac{-S(\nabla r)}{N-n}=\frac{\Delta r}{N-1}=\sqrt{k}\cot(\sqrt{k}r),&i=j<n \\ 0,&i\neq j. \end{array}\right. \end{align} Now we calculate the flag curvature of $(M,F)$. By (\ref{8}) we observe that $\{E_i\}_{i=1}^{n-1}$ are $(n-1)$ eigenvectors of $\nabla^2r$. That is, $$D^{\nabla r}_{E_i}\nabla r=\sqrt{k}\cot(\sqrt{k}r)E_i,\qquad i=1,\cdots,n-1.$$ Since $\nabla r$ is a geodesic field on $(M,F)$, the flag curvature $K(\nabla r;\cdot)$ is equal to the sectional curvature of the Riemannian manifold $(M,g_{\nabla r})$.
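For completeness, we note that the identity behind (\ref{7}) (here $N>n$) can be checked by expanding the square: $$\frac{N-n}{(n-1)(N-1)}\left(a+\frac{n-1}{N-n}b\right)^2 =\frac{(N-n)a^2}{(n-1)(N-1)}+\frac{2ab}{N-1}+\frac{(n-1)b^2}{(N-n)(N-1)},$$ and subtracting this from $\frac{a^2}{n-1}+\frac{b^2}{N-n}$ leaves exactly $$\frac{a^2}{N-1}+\frac{b^2}{N-1}-\frac{2ab}{N-1}=\frac{(a-b)^2}{N-1}.$$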
Note that $\{E_i\}_{i=1}^{n-1}$ are $(n-1)$ eigenvectors of $\nabla^2r$ and parallel along the geodesic ray. By a straightforward computation, we get, for $1\leq i\leq n-1$, \begin{align} K(\nabla r;E_i)&=R^{\nabla r}(E_i,\nabla r,E_i,\nabla r)=g_{\nabla r}(R^{\nabla r}(E_i,\nabla r)\nabla r,E_i)\nonumber\\ &=g_{\nabla r}(D^{\nabla r}_{E_i}D^{\nabla r}_{\nabla r}\nabla r-D^{\nabla r}_{\nabla r}D^{\nabla r}_{E_i}\nabla r -D^{\nabla r}_{[E_i,\nabla r]}\nabla r,E_i)\nonumber\\ &=-g_{\nabla r}(D^{\nabla r}_{\nabla r}(\sqrt{k}\cot(\sqrt{k}r))E_i+ D^{\nabla r}_{D^{\nabla r}_{E_i}\nabla r-D^{\nabla r}_{\nabla r}E_i}\nabla r,E_i)\nonumber\\ &=-g_{\nabla r}(-k\csc^2(\sqrt{k}r)E_i+D^{\nabla r}_{\sqrt{k}\cot(\sqrt{k}r)E_i}\nabla r,E_i)\nonumber\\ &=k\csc^2(\sqrt{k}r)-\sqrt{k}\cot(\sqrt{k}r)g_{\nabla r}(D^{\nabla r}_{E_i}\nabla r,E_i)\nonumber\\ &=k\csc^2(\sqrt{k}r)-k\cot^2(\sqrt{k}r)\nonumber\\ &=k.\nonumber \end{align} We have proved that for any $x\in M$, $K(x,\nabla r;\cdot)=k$, where $r$ is the distance function from $p$. Next we will prove that along any direction $V\in T_xM$, $K(x,V;\cdot)=k$, which yields $K\equiv k$. For this, we first show that for any fixed point $p'\in M$ there exists a point $q'\in M$ such that $d_F(p',q')=\frac{\pi}{\sqrt{k}}$. If this is not true, then there is a small $\varepsilon>0$ such that $B^+_{p'}(\frac{\pi}{\sqrt{k}}-\varepsilon)=M$.
Define $$f(x,r):=\frac{\textmd{vol}_F^{d\mu}(B^+_{x}(r))}{\int_0^r(\frac{\sin\sqrt{k}t}{\sqrt{k}})^{N-1}dt}.$$ Then \begin{align}\label{9} f(p',\frac{\pi}{\sqrt{k}}-\varepsilon)&=\frac{\textmd{vol}_F^{d\mu}(B^+_{p'}(\frac{\pi}{\sqrt{k}}-\varepsilon))} {\int_0^{\frac{\pi}{\sqrt{k}}-\varepsilon}(\frac{\sin\sqrt{k}t}{\sqrt{k}})^{N-1}dt} = \frac{\textmd{vol}_F^{d\mu}(M)}{\int_0^{\frac{\pi}{\sqrt{k}}-\varepsilon}(\frac{\sin\sqrt{k}t}{\sqrt{k}})^{N-1}dt}\nonumber\\ &=\frac{\textmd{vol}_F^{d\mu}(B^+_{p}(\frac{\pi}{\sqrt{k}}-\varepsilon))+\textmd{vol}_F^{d\mu}(B^-_{q}(\varepsilon))} {\int_0^{\frac{\pi}{\sqrt{k}}-\varepsilon}(\frac{\sin\sqrt{k}t}{\sqrt{k}})^{N-1}dt}\nonumber\\ &>\frac{\textmd{vol}_F^{d\mu}(B^+_{p}(\frac{\pi}{\sqrt{k}}-\varepsilon))} {\int_0^{\frac{\pi}{\sqrt{k}}-\varepsilon}(\frac{\sin\sqrt{k}t}{\sqrt{k}})^{N-1}dt} = f(p,\frac{\pi}{\sqrt{k}}-\varepsilon), \end{align} where the third equality is due to (\ref{1}). Since $\textmd{Ric}_N\geq (N-1)k$, by the Laplacian comparison theorem (Lemma 1.1), we have \begin{align}\label{a1} \Delta r\leq(N-1)\sqrt{k}\cot(\sqrt{k}r)=(N-1)\frac{(\sin\sqrt{k}r)'}{\sin\sqrt{k}r}, \end{align} where $r$ is the distance function from any fixed point $z\in M$. Let $(r,\theta)$ be the polar coordinates of $x$. Then $r(x)=F(v),\theta^{\alpha}(x)=\theta^{\alpha}(\frac{v}{F(v)})$, where $v=\exp^{-1}_{z}(x)$. Thus, the above inequality shows \begin{align}\label{a2} \frac{\partial}{\partial r}\log\sigma_z\leq\frac{\partial}{\partial r}\log\tilde{\sigma}, \qquad \tilde{\sigma}:=\left(\frac{\sin\sqrt{k}r}{\sqrt{k}}\right)^{N-1}. \end{align} Integrating both sides from $\delta$ to $r$ and letting $\delta\to0$ gives \begin{align}\label{a3} \frac{\sigma_z(r,\theta)}{\tilde{\sigma}(r)}\leq\lim_{\delta\to0}\frac{\sigma_z(\delta,\theta)}{\tilde{\sigma}(\delta)}. \end{align} Note that if $r$ is the distance function from $p$, then, by (\ref{a0}), equality holds in (\ref{a1})-(\ref{a3}).
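Here we have used the standard fact (cf.\ \cite{WX,O1}) that, in the polar coordinates around $z$, the Laplacian of the distance function equals the logarithmic derivative of the volume density, $$\Delta r=\frac{\partial}{\partial r}\log\sigma_z(r,\theta)\qquad \textmd{on } M\backslash(\{z\}\cup \textmd{Cut}(z)),$$ while a direct computation gives $\frac{\partial}{\partial r}\log\tilde{\sigma}(r)=(N-1)\sqrt{k}\cot(\sqrt{k}r)$, so that (\ref{a1}) is exactly (\ref{a2}).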
Hence, using the condition on $d\mu$ of Theorem 2.1, we have \begin{align} f(p,\frac{\pi}{\sqrt{k}}-\varepsilon) &= \lim_{r\to0}\frac{\textmd{vol}_F^{d\mu}(B^+_{p}(r))} {\int_0^{r}\tilde{\sigma}dt} =\lim\limits_{r\to0}\int_{S_pM}\frac{\sigma_p(r,\theta)}{r^{N-1}}d\theta =C,\nonumber\\ f(p',\frac{\pi}{\sqrt{k}}-\varepsilon) &\leq \lim_{r\to0}\frac{\textmd{vol}_F^{d\mu}(B^+_{p'}(r))} {\int_0^{r}\tilde{\sigma}dt} =\lim\limits_{r\to0}\int_{S_{p'}M}\frac{\sigma_{p'}(r,\theta)}{r^{N-1}}d\theta =C,\nonumber \end{align} which contradicts (\ref{9}). In what follows, we prove that along any direction $V\in T_xM$, $K(x,V;\cdot)=k,\forall x\in M$. First, we draw a minimal geodesic $\overleftarrow{\eta}$ of the reverse Finsler metric $\overleftarrow{F}$ satisfying $\overleftarrow{\eta}(0)=x,\overleftarrow{\eta}'(0)=\frac{-V}{F(V)}$. Then its reverse $\eta$ is a normal minimal geodesic of $F$ satisfying $\eta(0)=x,\dot{\eta}(0)=\frac{V}{F(V)}$. Choose $p'= \overleftarrow{\eta}(\delta)$ for some small $\delta>0$. Then $d_F(p',x)=L(\eta_{\widehat{p'x}})=\delta$. Second, let $q'$ be the point such that $d_F(p',q')=\frac{\pi}{\sqrt{k}}$ and draw a minimal geodesic $\tilde{\gamma}$ from $x$ to $q'$. Then, by (\ref{1}), we see that $\eta|_{\widehat{p'x}}\cup\tilde{\gamma}$ is a minimal geodesic from $p'$ to $q'$. Thus, by the same argument, we obtain $K(x,V;\cdot)=k$. Next we prove that $(M,F,d\mu)$ has vanishing $S$ curvature. From (\ref{8}) we have $$S(x,\nabla r)=-(N-n)\sqrt{k}\cot(\sqrt{k}r),$$ where $r(x)=d_F(p,x)$. Choose an arbitrary point $p'$ on the minimal geodesic $\gamma$ from $p$ to $x$ and let $q'$ be the point such that $d_F(p',q')=\frac{\pi}{\sqrt{k}}$. Then $\gamma|_{p'x}$ is the minimal geodesic from $p'$ to $x$ and we can extend $\gamma$ to pass through $q'$. Write $r_1:=d_F(p,x),r_2:=d_F(p',x)$.
Then $\nabla r_1=\nabla r_2$ and $$-(N-n)\sqrt{k}\cot(\sqrt{k}r_1)=S(x,\nabla r_1)=S(x,\nabla r_2)=-(N-n)\sqrt{k}\cot(\sqrt{k}r_2).$$ Since $r_1\neq r_2$, we have $N=n$, which yields $S(x,\nabla r)=0$. By the arbitrariness of the points $p,p'$ and $x$, for any vector $V\in T_xM$, we can choose a suitable geodesic $\gamma$ passing through $x$ and satisfying $\nabla r(x)=\frac{V}{F(V)}$. This gives $S(x,V)=0,\forall x\in M,\forall V\in T_xM$. Let $p,q\in M$ be the points as above. Then $d_F(p,q)=\frac{\pi}{\sqrt{k}}$. It follows from (\ref{1}) that, for any point $x$, there exists a minimal geodesic $\gamma$ from $p$ to $q$ passing through $x$. This means that $x$ is not a cut point of $p$. If it were, there would be two minimal geodesics $\eta_1$ and $\eta_2$ from $p$ to $x$. Then $\eta_1\cup\gamma_{\widehat{xq}}$ and $\eta_2\cup\gamma_{\widehat{xq}}$ would be two minimal geodesics from $p$ to $q$ passing through $x$. This is impossible. Therefore, by the arbitrariness of $x$, we conclude that $\textmd{Cut}(p)=\{q\}$. Thus $$\exp_{p}:T_{p}M\supset \textbf{B}_{p}(\frac{\pi}{\sqrt{k}})\longrightarrow M^{n}\backslash\{q\}$$ is a diffeomorphism. On the other hand, $$\exp_{\tilde{p}}:T_{\tilde{p}}\mathbb{S}^{n}\supset \textbf{B}_{\tilde{p}}(\pi)\longrightarrow \mathbb{S}^{n}\backslash\{\tilde{q}\}$$ is also a diffeomorphism, where $\mathbb{S}^{n}$ is the $n$-sphere and $\tilde{p},\tilde{q}$ are the south pole and north pole, respectively. Let $(\tilde{r},\tilde{\theta}^{\alpha})$ be the polar coordinate system of $T_{\tilde{p}}\mathbb{S}^{n}$ and $(r,\theta^{\alpha})$ be the polar coordinate system of $T_{p}M$. Define $h:T_{\tilde{p}}\mathbb{S}^{n}\longrightarrow T_{p}M$ by $r=\frac{\tilde{r}}{\sqrt{k}},\theta^{\alpha}=\tilde{\theta}^{\alpha}$. Then $h$ is a diffeomorphism.
Now we define $\psi:M^{n}\longrightarrow \mathbb{S}^{n}$ by $$\psi(x)=\left\{\begin{array}{cc} \exp_{\tilde{p}}\circ h^{-1}\circ \exp_{p}^{-1}(x) & x\neq q \\ \tilde{q} & x=q \end{array}\right.$$ Clearly, $\psi$ is a homeomorphism. That is, $M$ is homeomorphic to $\mathbb{S}^{n}$. \end{proof} In \cite{KY}, the authors obtained the maximum diameter theorem for reversible Finsler manifolds by using the conditions of Ricci curvature $\textmd{Ric} \geq(n-1)k>0$ and vanishing $S$ curvature. In terms of the weighted Ricci curvature defined in Section 1 above, these conditions can also be written as $\textmd{Ric}_n\geq(n-1)k>0$ (see Theorem A). Note that a reversible Finsler sphere is actually the Euclidean sphere (see Remark 0.1 above). Then, from Theorem 2.1, we generalize Theorem A as follows. \begin{corollary} Let $(M,F,d\mu)$ be a complete connected Finsler $n$-manifold with the Busemann-Hausdorff volume form. If the weighted Ricci curvature satisfies $\emph{Ric}_n\geq(n-1)k>0$ and $Diam(M)=\frac{\pi}{\sqrt{k}}$, then $(M,F)$ is isometric to a standard Finsler sphere. \end{corollary} \begin{proof} Note that the Busemann-Hausdorff volume form satisfies $\lim\limits_{r\to0}\frac{\textmd{vol}^{d\mu}_F(B^+_x(r))}{\textmd{vol}\mathbb{B}^n(r)}=1,\forall x\in M$, where $B_x^+(r)$ is the forward geodesic ball of $M$ and $\mathbb{B}^n(r)$ is the Euclidean ball (\cite{Sh2}). In this case, the condition on $d\mu$ in Theorem 2.1 is satisfied for $N=n$. From Theorem 2.1, we have $K=k$ and $M$ is homeomorphic to $\mathbb{S}^n$. Thus we can view $(M,F,d\mu)$ as a sphere with constant flag curvature and vanishing $S$ curvature. Hence, it is a standard Finsler sphere. \end{proof} As is well known, there are infinitely many nonreversible Finsler metrics with constant flag curvature on the sphere $\mathbb{S}^n$. Since these metrics have not been classified completely, we cannot characterize the manifolds when the diameter attains its maximum.
However, the following example shows that the maximum diameter can be achieved in the non-Riemannian case. \begin{example} \cite{BS} View $\mathbb{S}^3$ as a compact Lie group. Let $\zeta^1,\zeta^2,\zeta^3$ be the standard right invariant 1-forms on $\mathbb{S}^3$ satisfying $$d\zeta^1=2\zeta^2\wedge\zeta^3,\quad d\zeta^2=2\zeta^3\wedge\zeta^1,\quad d\zeta^3=2\zeta^1\wedge\zeta^2.$$ For $k\geq1$, define $$\alpha_k(y)=\sqrt{(k\zeta^1(y))^2+k(\zeta^2(y))^2+k(\zeta^3(y))^2},\quad \beta_k(y)=\sqrt{k^2-k}\zeta^1(y).$$ Then $F_k=\alpha_k+\beta_k$ is a Randers metric on $\mathbb{S}^3$ satisfying $$K\equiv1,\quad S\equiv0,\quad Diam(\mathbb{S}^3,F_k)=\pi.$$ \end{example} \vspace{3mm} In what follows, we focus on Randers spaces. Let $\mathfrak{g}$ be the standard sphere metric and $W=W^i\frac{\partial}{\partial x^i}$ be a Killing vector field on $\mathbb{S}^n(\frac{1}{\sqrt{k}})$. Then the sectional curvature $K_{\mathfrak{g}}=k$. Define a Randers metric by \begin{align}\label{b} F=\frac{\sqrt{\lambda \mathfrak{g}^2+W_0^2}}{\lambda}-\frac{W_0}{\lambda}, \end{align} where $\lambda=1-\|W\|^2_{\mathfrak{g}}$ and $W_0=W_iy^i$ with $W_i=\mathfrak{g}_{ij}W^j$. Then the sphere $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ is equipped with a Randers metric $F$ of constant flag curvature $k$ (see \cite{BCS},\cite{BRS} for details). We call it a \emph{standard Randers sphere} and denote it by $\mathcal{S}^n(\frac{1}{\sqrt{k}})$. \begin{proposition} On a standard Randers sphere $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$ with the Busemann-Hausdorff volume form, we have \begin{enumerate} \item $S=0$; \item $\emph{vol}^{d\mu}_F(\mathcal{S}^n(\frac{1}{\sqrt{k}}))=\emph{vol}_\mathfrak{g}(\mathbb{S}^n(\frac{1}{\sqrt{k}}))$; \item $Diam(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F)=\frac{\pi}{\sqrt{k}}$. \end{enumerate} Clearly, $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$ is naturally a standard Finsler sphere. \end{proposition} \begin{proof} In (\ref{b}), $W$ is a Killing vector field. This is equivalent to $S=0$ (see \cite{BCS}).
Since $d\mu$ is the Busemann-Hausdorff volume form, we know that $dV_F = dV_\mathfrak{g}$. Thus, $$\textmd{vol}^{d\mu}_F(\mathcal{S}^n(\frac{1}{\sqrt{k}}))=\textmd{vol}_\mathfrak{g}(\mathbb{S}^n(\frac{1}{\sqrt{k}})).$$ Now fix $p\in \mathcal{S}^n(\frac{1}{\sqrt{k}})$. Using $K=k$ and Theorem 18.3.1 in \cite{Sh1}, there exists $q\in \mathcal{S}^n(\frac{1}{\sqrt{k}})$ such that $$\exp_p(\frac{\pi}{\sqrt{k}}\xi)=q,\qquad \forall\xi\in S_p(\mathcal{S}^n(\frac{1}{\sqrt{k}})),$$ where $S_p(\mathcal{S}^n(\frac{1}{\sqrt{k}})):=\{v|v\in T_p(\mathcal{S}^n(\frac{1}{\sqrt{k}})),F(v)=1\}$. From the proof of the volume comparison theorem (\cite{Sh2}, or Theorem 16.1.1, p.250, \cite{Sh1}), $$\textmd{vol}^{d\mu}_F(B^+_p(r))\leq\sigma_n(r),$$ where $\sigma_n(r)$ denotes the volume of the metric ball of radius $r$ in $\mathbb{S}^n(\frac{1}{\sqrt{k}})$. The equality holds if and only if $B^+_p(r)\subset \mathcal{D}_p$, i.e., $\mathbf{i}_p\geq r$. By the Bonnet-Myers theorem, $Diam(\mathcal{S}^n(\frac{1}{\sqrt{k}}))\leq\frac{\pi}{\sqrt{k}}$, which means $\overline{B^+_p(\frac{\pi}{\sqrt{k}})}=\mathcal{S}^n(\frac{1}{\sqrt{k}})$. Therefore, $$\textmd{vol}^{d\mu}_F(B^+_p(\frac{\pi}{\sqrt{k}}))=\textmd{vol}^{d\mu}_F(\mathcal{S}^n(\frac{1}{\sqrt{k}})) =\textmd{vol}_\mathfrak{g}(\mathbb{S}^n(\frac{1}{\sqrt{k}}))=\sigma_n(\frac{\pi}{\sqrt{k}}).$$ We deduce that $\mathbf{i}_p\geq \frac{\pi}{\sqrt{k}}$, which yields $d_F(p,q)=\frac{\pi}{\sqrt{k}}$. \end{proof} \begin{remark} In \cite{Sh3}, the author studied reversible Finsler manifolds with constant flag curvature. For the nonreversible case, we show in Proposition 2.5 that there are infinitely many Randers metrics on $\mathcal{S}^n(\frac{1}{\sqrt{k}})$ with constant flag curvature $K=k$ and vanishing $S$ curvature. Moreover, they have the same diameter and volume as the Euclidean sphere, but they are not necessarily isometric to each other. Write $F\triangleq(\mathfrak{g},W)$ if $F$ is expressed by (\ref{b}).
Set $$\mathfrak{F}:=\{F\,|\,F\triangleq(\mathfrak{g},W),\ W \textmd{ is a Killing vector field with } \|W\|_{\mathfrak{g}}<1\}.$$ Then $\mathfrak{F}$ determines all standard Randers spheres $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F)$, and in particular includes $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$. Fix a Killing vector field $W$ and let $W_a:=aW,a\in[0,\frac{1}{\|W\|_{\mathfrak{g}}})$. Then each $W_a$ is also a Killing vector field satisfying $\|W_a\|_{\mathfrak{g}}<1$ and $\big\{(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F\triangleq(\mathfrak{g},W_a))\big\}$ make up a family of standard Randers spheres. \end{remark} \begin{remark} Let $F=\alpha\phi(\frac{\beta}{\alpha})$ be an $(\alpha,\beta)$ metric. Then, by Theorem 1.1 in \cite{CST}, a standard $(\alpha,\beta)$ sphere is actually a standard Randers sphere if $\phi$ is a polynomial. Moreover, when $n\geq3$, by Theorem 0.4 in \cite{ZH}, every standard $(\alpha,\beta)$ sphere is a standard Randers sphere for all $\phi$. \end{remark} \begin{theorem} Let $(M,F,d\mu)$ be a complete connected Randers $n$-manifold. If the weighted Ricci curvature and the volume form satisfy $\emph{Ric}_N\geq(N-1)k>0,$ $\lim\limits_{r\to0}\int_{S_zM}\frac{\sigma_z(r,\theta)}{r^{N-1}}d\theta=C, \forall z\in M$ for some real number $C>0,N\in[n,\infty)$ and $Diam(M)=\frac{\pi}{\sqrt{k}}$, then $(M,F)$ is isometric to a standard Randers sphere. \end{theorem} \begin{proof} Suppose that the Randers metric $F$ is given by \begin{align}\label{10} F=\frac{\sqrt{\lambda h^2+W_0^2}}{\lambda}-\frac{W_0}{\lambda},\quad W_0=W_iy^i, \end{align} where $h=\sqrt{h_{ij}(x)y^iy^j}$ is a Riemannian metric, $W=W^i\frac{\partial}{\partial x^i}$ is a vector field on $M$, and $$W_i=h_{ij}W^j,\quad \lambda:=1-W_i W^i=1-h(x,W)^2.$$ Under the conditions of Theorem 2.8, it follows from Theorem 2.1 that the flag curvature of $F$ is $K=k$ and $S=0$.
First, according to Theorem 1.1 in \cite{O2}, only (constant multiples of) the Busemann-Hausdorff measure can satisfy $S\equiv0$ on Randers spaces. Thus we may assume that $d\mu$ is the Busemann-Hausdorff volume form and $S_{BH}=0$, which is equivalent to $W$ being a Killing vector field on $M$. Second, for a Randers metric $F$ expressed above, it follows from \cite{BCS} that $F$ has constant flag curvature $K=k$ if and only if $h$ has constant sectional curvature $K_h=k+c^2$ and $S_{BH}=(n+1)cF$. Since $S_{BH}=0$, we have $c=0$ and thus $K_h=k$. By Theorem 2.1, $M$ is homeomorphic to $\mathbb{S}^n$. As a result, $(M,h)$ is a compact simply connected Riemannian manifold of sectional curvature $K_h=k$. Therefore, $(M,h)$ is isometric to the Euclidean sphere $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$, where $\mathfrak{g}$ denotes the standard sphere metric on $\mathbb{S}^n(\frac{1}{\sqrt{k}})$. Now the Randers metric $F$ defined on $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ is given by $$F=\frac{\sqrt{\lambda \mathfrak{g}^2+W_0^2}}{\lambda}-\frac{W_0}{\lambda},$$ where $W$ is a Killing vector field on $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$. \end{proof} \vspace{3mm} \hspace{-4mm}\emph{Proof of Theorem 0.1}. The first part of Theorem 0.1 follows from Corollary 2.3 directly. Notice that Theorem 0.4 in \cite{ZH} shows that a regular non-Randers $(\alpha, \beta)$-metric with isotropic $S$-curvature and scalar flag curvature on a Finsler $n$-manifold $(n\geq3)$ must be a Minkowski metric. Combining this with Theorem 2.8, the second part of Theorem 0.1 follows. $\hspace{120mm}\square$ \section{Some applications to the first eigenvalue} In this section, we use the maximum diameter theorem to describe the rigidity of the Finsler manifolds on which the first eigenvalue attains its lower bound.
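Throughout this section, we use the variational characterization of the first eigenvalue (see \cite{GS,YHS1}): $$\lambda_1=\inf\left\{\frac{\int_MF(\nabla u)^2d\mu}{\int_Mu^2d\mu}\ \Big|\ u\in H^1(M)\backslash\{0\},\ \int_Mu\,d\mu=0\right\},$$ so that every zero-mean test function provides an upper bound for $\lambda_1$.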
First we give the following: \begin{lemma} Let $(\mathfrak{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$ be a standard Finsler sphere, and $r(x)=d_F(p,x)$ be the distance function from a fixed point $p\in\mathfrak{S}^n$. Then $$\tilde{f}=-\cos(\sqrt{k}r),\quad 0\leq r\leq\frac{\pi}{\sqrt{k}}$$ is a first eigenfunction with $\lambda_1= nk$. \end{lemma} \begin{proof} For a standard Finsler sphere $\mathfrak{S}^n(\frac{1}{\sqrt{k}})$, it follows from the proof of Theorem 2.1 that $\Delta r=(n-1)\sqrt{k}\cot(\sqrt{k}r)$. Notice that the volume form satisfies (see also the proof of Theorem 2.1) $\frac{\partial}{\partial r}\log\sigma_p(r,\theta)=\frac{\partial}{\partial r}\log\tilde{\sigma}(r)$, which yields $$\frac{\sigma_p(R,\theta)}{\tilde{\sigma}(R)}=\frac{\sigma_p(r,\theta)}{\tilde{\sigma}(r)}:=C(\theta), \quad r\leq R\leq\frac{\pi}{\sqrt{k}},$$ where $\tilde{\sigma}(r)=(\frac{\sin(\sqrt{k}r)}{\sqrt{k}})^{n-1}$. Therefore, \begin{align} \int_{\mathfrak{S}^n}\tilde{f}d\mu&=\int_0^{\frac{\pi}{\sqrt{k}}}\int_{S_pM}\tilde{f}\sigma_p(r,\theta)drd\theta =\int_0^{\frac{\pi}{\sqrt{k}}}\int_{S_pM}\tilde{f}C(\theta)\tilde{\sigma}(r)drd\theta\nonumber\\ &=-\int_0^{\frac{\pi}{\sqrt{k}}}\cos(\sqrt{k}r)(\frac{\sin(\sqrt{k}r)}{\sqrt{k}})^{n-1}dr\int_{S_pM}C(\theta)d\theta=0.\nonumber \end{align} Moreover, $\nabla \tilde{f}=\sqrt{k}\sin(\sqrt{k}r)\nabla r,0< r<\frac{\pi}{\sqrt{k}}$, which means that $\nabla \tilde{f}$ and $\nabla r$ have the same direction. Thus \begin{align} \Delta \tilde{f}&=\tilde{f}'\Delta r+\tilde{f}''=\sqrt{k}\sin(\sqrt{k}r)\times (n-1)\sqrt{k}\cot(\sqrt{k}r)+k\cos(\sqrt{k}r)\nonumber\\ &=nk\cos(\sqrt{k}r)=-nk\tilde{f}.\nonumber \end{align} Therefore, $\tilde{f}$ is a first eigenfunction of $(\mathfrak{S}^n,F,d\mu)$ with eigenvalue $nk$. \end{proof} \begin{theorem} Let $(M,F,d\mu)$ be a complete connected Finsler $n$-manifold with the Busemann-Hausdorff volume form.
If the weighted Ricci curvature satisfies $\emph{Ric}_n\geq(n-1)k>0$, then the first eigenvalue of the Finsler-Laplacian satisfies $$\lambda_1\geq nk.$$ The equality holds if and only if $(M, F)$ is isometric to a standard Finsler sphere. \end{theorem} \begin{proof} The estimate of the first eigenvalue is proved in \cite{YHS1}. If the equality holds, we deduce that $Diam(M)=\frac{\pi}{\sqrt{k}}$ (\cite{YHS1}). Then by Corollary 2.3, $(M, F)$ is isometric to $\mathfrak{S}^n(\frac{1}{\sqrt{k}})$. Conversely, the equality case follows from Lemma 3.1. \end{proof} \begin{theorem} Let $(M,F,d\mu)$ be a complete connected Randers $n$-manifold. If the weighted Ricci curvature satisfies $\emph{Ric}_N\geq(N-1)k>0$ for some real number $N\in[n,\infty)$, then the first eigenvalue of the Finsler-Laplacian satisfies $$\lambda_1\geq Nk.$$ Moreover, if the volume form satisfies $\lim\limits_{r\to0}\int_{S_zM}\frac{\sigma_z(r,\theta)}{r^{N-1}}d\theta=C, \forall z\in M$, then the equality holds if and only if $(M,F)$ is isometric to a standard Randers sphere. \end{theorem} \begin{remark} Theorem 3.2 shows that the lower bound of the first eigenvalue of the Finsler-Laplacian can be attained on a standard Finsler sphere. However, we are still unable to characterize the sphere in more detail. In Theorem 3.3, we narrow the scope to Randers manifolds. This is because standard Randers spheres are completely understood, even though there are infinitely many of them. \end{remark} \begin{proof} If the equality holds, we have $Diam(M)=\frac{\pi}{\sqrt{k}}$ (\cite{YHS1}). Then the conclusion follows from Theorem 2.8 directly. To prove the converse, we point out that, by Proposition 2.5, for a standard Randers sphere $\mathcal{S}^n(\frac{1}{\sqrt{k}})$, $K=k,S=0$ and $\textmd{Ric}_N=\textmd{Ric}=(n-1)k$, and thus $N=n,\lambda_1\geq nk$. Therefore, we only need to construct a Randers metric on the sphere $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ such that the first eigenvalue of the Finsler-Laplacian attains its lower bound $nk$.
In Lemma 3.1 we have obtained the first eigenfunction $\tilde{f}$. In the following, we want to give another first eigenfunction via a different method. Let $p,q$ be the north pole and south pole of the Euclidean sphere $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$, respectively. Let $\varphi(t,x)$ be the rotation transformation on $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ satisfying $\varphi(t,p)=p$ and $\varphi(t,q)=q$ for any $t$. Then $\varphi(t,x)$ is an isometric transformation on $\mathbb{S}^n(\frac{1}{\sqrt{k}})$ and $X=\frac{\partial\varphi(t,x)}{\partial t}$ is a Killing vector field. It is easy to see that $X\bot\nabla^{\mathfrak{g}} \rho$, where $\rho(x)=d_\mathfrak{g}(p,x)$ is the distance function and $\nabla^{\mathfrak{g}} \rho$ is the gradient with respect to $\mathfrak{g}$. On the other hand, it is well known that the first eigenfunction $f$ of $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$ is a radial function, i.e., $f(\rho,\theta)=f(\rho)$, where $\rho(x)=d_\mathfrak{g}(p,x)$. Thus, we have $$X(f)=0.$$ Note that the volume form of a Randers sphere $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$ is the Busemann-Hausdorff volume form. Therefore, $d\mu=dV_{\mathfrak{g}}$, which yields \begin{align}\label{11} \int_M|f|^2d\mu=\int_M|f|^2dV_{\mathfrak{g}}. \end{align} Recall that the dual metric of (\ref{10}) is $$F^{\ast}:=h^{\ast}+W^{\ast}=\sqrt{h^{ij}\xi_{i}\xi_{j}}+W^{i}\xi_{i},$$ where $(h^{ij})=(h_{ij})^{-1}$ and $W^{i}=W_{j}h^{ij}$. Thus, for a $C^{1}$ function $f$, we have $$F(\nabla f)=F^{\ast}(df)=h^{\ast}(df)+W^{i}f_{i}=h(\nabla^{h} f)+W(f).$$ If $f$ is the first eigenfunction of $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$ and $W=X$ is the Killing vector field as above, then we have $F(\nabla f)=\mathfrak{g}(\nabla^{\mathfrak{g}} f)$, which gives \begin{align}\label{12} \int_MF(\nabla f)^2d\mu=\int_M\mathfrak{g}(\nabla^{\mathfrak{g}} f)^2dV_{\mathfrak{g}}.
\end{align} Combining (\ref{11}) and (\ref{12}), and noting that the first eigenvalue of $(\mathbb{S}^n(\frac{1}{\sqrt{k}}),\mathfrak{g})$ is $nk$, we obtain $$\lambda_1\leq\frac{\int_MF(\nabla f)^2d\mu}{\int_M|f|^2d\mu}= \frac{\int_M\mathfrak{g}(\nabla^{\mathfrak{g}} f)^2dV_{\mathfrak{g}}}{\int_M|f|^2dV_{\mathfrak{g}}} =nk.$$ On the other hand, we know from the first assertion of Theorem 3.2 that $\lambda_1\geq nk$. Thus $\lambda_1=nk$. This implies that $f$ is also a first eigenfunction of the Randers sphere $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F)$. \end{proof} The existence of the first eigenfunction of the Finsler Laplacian is proved in \cite{GS}. However, until now no explicit first eigenfunction had been found in the non-Riemannian case. It seems very difficult to do the computation, since the Finsler Laplacian is a nonlinear operator. \begin{remark} In the proof of Lemma 3.1, we first construct an explicit first eigenfunction $\tilde{f}$ of the Finsler Laplacian on the sphere $(\mathfrak{S}^n,F,d\mu)$, which means that $-\tilde{f}$ is a first eigenfunction of the reverse sphere $(\mathfrak{S}^n,\overleftarrow{F},d\mu)$. In particular, on a standard Randers sphere $(\mathcal{S}^n(\frac{1}{\sqrt{k}}),F,d\mu)$, we give two first eigenfunctions $\tilde{f}$ and $f$ (see the proof of Theorem 3.3), since they are not necessarily equal. \end{remark} \vspace{3mm} \hspace{-4mm}\emph{Proof of Theorem 0.2}.\quad It follows directly from Theorems 3.2-3.3 and Theorem 0.4 in \cite{ZH}. $\hspace{120mm}\square$ \vspace{3mm} \bibliographystyle{amsplain}
\section*{Acknowledgments}\end{small}} \newcommand\altaffilmark[1]{$^{#1}$} \newcommand\altaffiltext[1]{$^{#1}$} \voffset=-0.6in \title[The Acoustic RDI]{The Resonant Drag Instability (RDI): Acoustic Modes \vspace{-0.5cm}} \vspace{-0.2cm} \author[Hopkins \&\ Squire]{ \parbox[t]{\textwidth}{ Philip F.~Hopkins\altaffilmark{1}, \&\ Jonathan Squire\altaffilmark{1} } \vspace*{6pt} \\ \altaffiltext{1}{TAPIR, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125, USA} \vspace{-0.5cm} } \date{Submitted to MNRAS, July 2017\vspace{-0.6cm}} \begin{document} \maketitle \label{firstpage} \vspace{-0.2cm} \begin{abstract} \vspace{-0.2cm} Recently, Squire \&\ Hopkins (2017) showed any coupled dust-gas mixture is subject to a class of linear ``resonant drag instabilities'' (RDI). These can drive large dust-to-gas ratio fluctuations even at arbitrarily small dust-to-gas mass ratios $\mu$. Here, we explore the RDI in the simple case where the gas satisfies neutral hydrodynamics and supports acoustic waves ($\omega^{2}=c_{s}^{2}\,k^{2}$). The gas and dust are coupled via an arbitrary drag law and subject to external accelerations (e.g.\ gravity, radiation pressure). If there is any dust drift velocity, the system is unstable. The instabilities exist for {\em all} dust-to-gas ratios $\mu$ and their growth rates depend only weakly on $\mu$ around resonance, as $\sim\mu^{1/3}$ or $\sim \mu^{1/2}$ (depending on wavenumber). The behavior changes depending on whether the drift velocity is larger or smaller than the sound speed $c_{s}$. In the supersonic limit a ``resonant'' instability appears with growth rate increasing {\em without limit} with wavenumber, even for vanishingly small $\mu$ and values of the coupling strength (``stopping time''). In the subsonic limit instabilities always exist, but their growth rates no longer increase indefinitely towards small wavelengths. 
The dimensional scalings and qualitative behavior of the instability do not depend sensitively on the drag law or equation-of-state of the gas. The instabilities directly drive exponentially growing dust-to-gas-ratio fluctuations, which can be large even when the modes are otherwise weak. We discuss physical implications for cool-star winds, AGN-driven winds and torii, and starburst winds: the instabilities alter the character of these outflows and could drive clumping and/or turbulence in the dust and gas. \end{abstract} \begin{keywords} instabilities --- turbulence --- ISM: kinematics and dynamics --- star formation: general --- galaxies: formation --- planets and satellites: formation\vspace{-0.5cm} \end{keywords} \vspace{-1.1cm} \section{Introduction} \label{sec:intro} Astrophysical fluids are replete with dust, and the dynamics of the dust-gas mixture in these ``dusty fluids'' are critical to astro-chemistry, star and planet formation, ``feedback'' from stars and active galactic nuclei (AGN) in galaxy formation, the origins and evolution heavy elements, cooling in the inter-stellar medium, stellar evolution in cool stars, and more. Dust is also ubiquitous as a source of extinction or contamination in almost all astrophysical contexts. As such, it is critical to understand how dust and gas interact, and whether these interactions produce phenomena that could segregate or produce novel dynamics or instabilities in the gas or dust. Recently, \citet{squire.hopkins:RDI} (henceforth SH) showed that there exists a general class of previously unrecognized instabilities of dust-gas mixtures. The SH\ ``resonant drag instability'' (RDI) generically appears whenever a gas system that supports some wave or linear perturbation mode (in the absence of dust) also contains dust moving with a finite drift velocity ${\bf w}_{s}}%{{\boldsymbol{\alpha}}$ relative to the gas. 
This is unstable at a wide range of wavenumbers, but the fastest-growing instabilities occur at a ``resonance'' between the phase velocity ($v_{p} = \omega_{0}/|{\bf k}|$) of the ``natural'' wave that would be present in the gas (absent dust), and the dust drift velocity projected along the wavevector direction (${\bf w}_{s}\cdot \hat{\bf k} \approx v_{p}$).\footnote{Equivalently, we can write the resonance condition as ${\bf w}_{s}\cdot{\bf k} \approx \omega_{0}$, where $\omega_{0} = v_{p}\,|{\bf k}|$ is the natural frequency a wave would have in the gas, absent dust drag. Note this is a resonance condition for a given (single) Fourier mode -- it does not require two different modes actually be present.} Some previously well-studied instabilities -- most notably the ``streaming instability'' of grains in protostellar disks \citep{youdin.goodman:2005.streaming.instability.derivation}, which is related to a resonance with the disk's epicyclic oscillations (i.e.\ has maximal growth rates when ${\bf w}_{s} \cdot {\bf k} \approx \Omega$) -- belong to the general RDI category. These instabilities {\em directly} generate fluctuations in the dust-to-gas ratio and the relative dynamics of the dust and gas, making them potentially critical for the host of phenomena above (see, e.g., \citealt{chiang:2010.planetesimal.formation.review} for applications of the disk streaming instability). The relative dust-gas drift velocity ${\bf w}_{s}$ and the ensuing instabilities can arise for a myriad of reasons. For example, in the photospheres of cool stars, in the interstellar medium of star-forming molecular clouds or galaxies, and in the obscuring ``torus'' or narrow-line region around an AGN, dust is accelerated by absorbed radiation from the stars/AGN, generating movement relative to the gas.
Similarly, in a proto-stellar disk, gas is supported via pressure, while grains (without such pressure support) gradually sediment. In both cases, a drag force, which couples the dust to the gas, then causes the dust to accelerate the gas, or vice versa. While there has been an extensive literature on such mechanisms -- e.g.,\ radiation-pressure driven winds -- there has been surprisingly little focus on the question of whether the dust can stably transfer momentum to gas under these conditions. We will argue that these processes are all inherently unstable. Perhaps the simplest example of the RDI occurs when one considers ideal, inviscid hydrodynamics, where the only wave (absent dust) is a sound wave. This ``acoustic RDI'' has not yet been studied, despite having potentially important implications for a wide variety of astrophysical systems. In this paper, we therefore explore this manifestation of the RDI in detail. We show that homogeneous gas, coupled to dust via some drag law, is generically unstable to a spectrum of exponentially-growing linear instabilities (both resonant and non-resonant), regardless of the form of the dust drag law, the magnitude of the drift velocity, the dust-to-gas ratio, the drag coefficient or ``stopping time,'' and the source of the drift velocity. If the drift velocity exceeds the sound speed, the ``resonance'' condition is always met and the growth rate increases without limit at short wavelengths. We present the basic derivation and linearized equations-of-motion in \S~\ref{sec:deriv}, including various extensions and caveats (more detail in the Appendices). In \S~\ref{sec:general.modes}, we then derive the stability conditions, growth rates, and structure of the unstable modes for arbitrary drag laws, showing in \S~\ref{sec:draglaws} how this specializes to various physical cases (Epstein drag, Stokes drag, and Coulomb drag).
The discussion of \S~\ref{sec:general.modes}--\S~\ref{sec:draglaws} is necessarily rather involved, covering a variety of different unstable modes in different physical regimes, and the reader more interested in applications may wish to read just the general overview in \S~\ref{sec:general overview of modes}, the discussion of mode structure in \S~\ref{sec:mode.structure}, and skim through the relevant drag laws of \S~\ref{sec:draglaws}. We briefly discuss the non-linear regime (\S~\ref{sec:nonlinear}), scales where our analysis breaks down (\S~\ref{sec:breakdown}), and the relation of these instabilities to those discussed in previous literature (\S~\ref{sec:previous.work}), before considering applications to different astrophysical systems including cool-star winds, starbursts, AGN obscuring tori and narrow-line regions, and protoplanetary disks (\S~\ref{sec:applications}). We conclude in \S~\ref{sec:summary}. \vspace{-0.5cm} \section{Basic Equations \&\ Linear Perturbations} \label{sec:deriv} \begin{figure*} \plotsidesize{figs/dwind_instab_vs_alpha_tsconstant.pdf}{0.99} \vspace{-0.25cm} \caption{Linear growth rates of the acoustic RDI. We show the growth rate $\Im{(\omega)}$ of the fastest-growing unstable mode (in units of the equilibrium dust drag timescale or ``stopping time'' $\langle t_{s} \rangle$; Eq.~\eqref{eqn:general}), for dust moving through gas with drift/streaming velocity ${\bf w}_{s} = w_{s}\,c_{s}\,\hat{{\bf w}}_{s}$ (Eq.~\eqref{eqn:mean.v.offset}; $c_{s}$ is the gas sound speed), for different $w_{s}$ (\S~\ref{sec:deriv}). Here we assume a mean dust-to-gas mass ratio $\mu=0.1$ (Eq.~\eqref{eqn:mean.v.offset}), constant drag coefficient ($\zeta_{s}=\zeta_{w}=0$; Eq.~\eqref{eqn:ts.general}), and a homogeneous background (\S~\ref{sec:pressure.gradients}).
{\em Left:} Growth rate vs.\ wavenumber ${\bf k}$ (\S~\ref{sec:general.modes}), in terms of the dimensionless $\kappa_{\|} \equiv {\bf k}\cdot {\bf w}_{s}\,\langle t_{s} \rangle = k\,w_{s}\,\langle t_{s} \rangle \cos{\theta}$ (Eq.~\eqref{eqn:kappa.definition}), and angle $\cos{\theta}\equiv \hat{\bf k} \cdot \hat{{\bf w}}_{s}$ between the wavevector ${\bf k}$ and ${\bf w}_{s}$. For ``subsonic'' cases with $w_{s} < 1$, modes are unstable at long wavelengths (see \S~\ref{sec:long.wavelength}) with growth rates $\propto \kappa_{\|}^{2/3}$ (Eq.~\eqref{eqn:longwave.mode}), then saturate at a maximum growth rate, and are stabilized at high-$k$ (\S~\ref{sec:subsonic.long.wavelenghts}). We show the fastest-growing angle $\cos{\theta}=1$ for $w_{s}<1$. Note that up to their saturation value, the different-$w_{s}$ cases behave identically. For ``supersonic'' cases with $w_{s} \ge 1$, all $k$ are unstable; at most angles the growth rate saturates at a constant value (the ``quasi-sound'' mode in \S~\ref{sec:intermediate}), but for $\cos{\theta}=\pm1/w_{s}$ the ``resonant'' RDI appears (\S~\ref{sec:resonance}), where the drift velocity in the direction $\hat{\bf k}$ is resonant with the natural response frequency of the system (a sound wave), and the growth rates increase without limit as $\propto \kappa_{\|}^{1/2}$ (Eq.~\eqref{eqn:longwave.mode.midk}) and $\propto \kappa_{\|}^{1/3}$ (Eq.~\eqref{eqn:omega.resonant}) at intermediate and high $\kappa_{\|}$, respectively. {\em Right:} Maximum growth rate (over all $k$) as a function of angle.
For $w_{s}<1$ this is maximized at finite growth rate, at $\cos{\theta}=\pm1$; for $w_{s} \ge1$, the maximum growth rates diverge around the ``resonant angle.'' \vspace{-0.25cm} \label{fig:growth.rate.demo}} \end{figure*} \vspace{-0.05cm} \subsection{General Case with Constant Streaming} \label{sec:free.streaming} Consider a mixture of gas and a second component which can be approximated as a pressure-free fluid (at least for {\em linear} perturbations; see \citealt{youdin.goodman:2005.streaming.instability.derivation} and App.~A of \citealt{Jacquet:2011cy}), interacting via some generalized drag law. We will refer to this second component as ``dust'' henceforth. For now we consider an ideal, inviscid gas, so the system is described by mass and momentum conservation for both fluids: \begin{align} \nonumber \partialAB{\rho}{t} + \nabla\cdot ({\bf u}\,\rho) &= 0,\\ \nonumber \left(\partialAB{}{t} + {\bf u}\cdot\nabla \right){\bf u} &= -\frac{\nabla P}{\rho} + {\bf g} + \frac{\rho_{d}}{\rho}\,\frac{({\bf v}-{\bf u})}{t_{s}},\\ \nonumber \partialAB{\rho_{d}}{t} + \nabla\cdot ({\bf v}\,\rho_{d}) &= 0,\\ \label{eqn:general} \left(\partialAB{}{t} + {\bf v}\cdot\nabla \right){\bf v} &= -\frac{({\bf v}-{\bf u})}{t_{s}} + {\bf g} + {\bf a}, \end{align} where ($\rho,\,{\bf u}$) and ($\rho_{d},\,{\bf v}$) are the density and velocity of the gas and dust, respectively; ${\bf g}$ is the external acceleration of the gas while ${\bf g}+{\bf a}$ is the external acceleration of dust (i.e., ${\bf a}$ is the difference in the dust and gas acceleration), and $P$ is the gas pressure. We assume a barotropic equation of state with sound speed $c_{s}^{2}=\partial P/\partial \rho$ and polytropic index $\gamma$ (see \S~\ref{sec:epstein} for further details). The dust experiences a drag acceleration ${\bf a}_{\rm drag} = -({\bf v}-{\bf u})/t_{s}$ with an arbitrary drag coefficient $t_{s}$, known as the ``stopping time'' (which can be a function of other properties).
The $t_{s}$-dependent drag term in the gas momentum equation is the ``back-reaction'' -- its form is dictated by conservation of momentum. The equilibrium (steady-state), spatially-homogeneous solution to Eq.~\eqref{eqn:general} is the dust and gas accelerating together at the same rate, with a constant relative drift velocity ${\bf w}_{s}$: \begin{align} \nonumber \rho^{h} &= \langle \rho \rangle = \rho_{0}, \\ \nonumber \rho_{d}^{h} &= \langle \rho_{d} \rangle = \rho_{d,\,0} \equiv \mu\,\rho_{0}, \\ \nonumber {\bf u}^{h} &= \langle {\bf u} \rangle = {\bf u}_{0} + \left[{\bf g} + {\bf a}\,\left(\frac{\mu}{1+\mu}\right) \right]\,t,\\ \nonumber {\bf v}^{h} &= \langle {\bf v} \rangle = \langle {\bf u} \rangle + {\bf w}_{s},\\ \label{eqn:mean.v.offset} {\bf w}_{s} &\equiv \frac{{\bf a}\,\langle t_{s} \rangle}{1+\mu} = \frac{{\bf a}\, t_{s}^{h}(\rho^{h},\,{\bf w}_{s},\,...)}{1+\mu}, \end{align} where we define the total mass-ratio between the two fluids as $\mu\equiv \langle \rho_{d} \rangle / \langle \rho \rangle$, and $\langle t_{s} \rangle = t_{s}(\langle \rho \rangle,\,\langle {\bf v} \rangle,\,...)$ is the value of $t_{s}$ for the homogeneous solution.\footnote{Eq.~\ref{eqn:general} also admits {\em non-equilibrium} but spatially homogeneous solutions with an additional initial transient/decaying drift $\Delta{\bf w}_{0} = {\bf w}_{0}\,\exp{(-t/\langle t_{s} \rangle)}$ (Eq.~\ref{eqn:mean.v.offset} with $\langle {\bf u} \rangle \rightarrow {\bf u}_{0} + [{\bf g}+{\bf a}\,\mu/(1+\mu)]\,t - (\mu/(1+\mu))\,\Delta{\bf w}_{0}$, $\langle {\bf v} \rangle \rightarrow \langle {\bf u} \rangle + {\bf w}_{s} + \Delta{\bf w}_{0}$).
If we consider modes with growth timescales $1/\Im{(\omega)} \gg \langle t_{s} \rangle$, then $\Delta{\bf w}_{0}\rightarrow 0$ decays rapidly and our analysis is unchanged by such initial transient drifts; alternatively, if $1/\Im{(\omega)} \ll \langle t_{s} \rangle$, then $\Delta{\bf w}_{0}\approx {\bf w}_{0}$ is approximately constant and our analysis is identical with the replacement ${\bf w}_{s} \rightarrow {\bf w}_{s} + {\bf w}_{0}$.} Note that $\langle t_{s} \rangle$ can depend on ${\bf w}_{s}$, so Eq.~\eqref{eqn:mean.v.offset} is in general a non-linear equation for ${\bf w}_{s}$. Let us also define the normalized drift speed $w_{s} \equiv |{\bf w}_{s}|/c_{s}$, which is a key parameter in determining stability properties and will be used extensively below. (Note that this definition of $w_{s}$ differs from that of SH: this dimensionless version is more convenient throughout this work because of our focus on the acoustic resonance; see \S~\ref{sec:General dispersion relation}.) We now consider small perturbations $\delta$: $\rho = \rho^{h} + \delta\rho$, ${\bf u} = {\bf u}^{h} + \delta {\bf u}$, etc., and adopt a free-falling frame moving with the homogeneous gas solution $\langle {\bf u} \rangle$ (see App.~\ref{sec:accel.frame} for details).
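The equilibrium of Eq.~\eqref{eqn:mean.v.offset} provides a useful numerical cross-check before linearizing: integrating the spatially uniform limit of Eq.~\eqref{eqn:general} should relax to the constant drift ${\bf w}_{s} = {\bf a}\,\langle t_{s} \rangle/(1+\mu)$, with both fluids accelerating together at ${\bf g} + {\bf a}\,\mu/(1+\mu)$. A minimal sketch (all parameter values are arbitrary illustrative choices, not taken from this paper):

```python
# Integrate the spatially uniform limit of Eq. (1): all gradients vanish,
# leaving two coupled ODEs for the (1D) gas and dust velocities u, v.
# Parameter values below are arbitrary illustrative choices.
mu, t_s, g, a = 0.1, 1.0, -1.0, 2.0     # dust-to-gas ratio, stopping time, accelerations
u, v, dt = 0.0, 0.0, 1e-3
for _ in range(int(30 * t_s / dt)):     # ~30 stopping times: ample for relaxation
    drag = (v - u) / t_s                # drag acceleration on the dust
    u += dt * (g + mu * drag)           # back-reaction accelerates the gas
    v += dt * (g + a - drag)            # drag + external acceleration on the dust
w_s = a * t_s / (1 + mu)                # terminal drift, Eq. (2)
assert abs((v - u) - w_s) < 1e-6        # relative velocity reached its fixed point
accel = g + mu * (v - u) / t_s          # common acceleration of both fluids
assert abs(accel - (g + a * mu / (1 + mu))) < 1e-5
```

The same fixed point holds when $t_{s}$ itself depends on $|{\bf v}-{\bf u}|$, in which case the last equality of Eq.~\eqref{eqn:mean.v.offset} must be iterated to convergence.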
Linearizing Eq.~\eqref{eqn:general}, we obtain, \begin{align} \nonumber \partialAB{\delta\rho}{t} =& -\rho_{0}\,\nabla\cdot \delta{\bf u},\\ \nonumber \partialAB{\delta{\bf u}}{t} =& -c_{s}^{2}\,\frac{\nabla \delta \rho}{\rho_{0}} + \mu\,\frac{(\delta {\bf v}-\delta{\bf u})}{\langle t_{s} \rangle} \\ \nonumber &- \mu\,\frac{{\bf w}_{s}}{\langle t_{s} \rangle}\,\left( \frac{\delta t_{s}}{\langle t_{s} \rangle} + \frac{\delta \rho}{\rho_{0}} - \frac{\delta \rho_{d}}{\mu\,\rho_{0}} \right), \\ \nonumber \left( \partialAB{}{t} + {\bf w}_{s}\cdot\nabla \right)\delta \rho_{d} =& -\mu\,\rho_{0}\,\nabla\cdot \delta{\bf v},\\ \label{eqn:linearized} \left( \partialAB{}{t} + {\bf w}_{s}\cdot\nabla \right)\delta {\bf v} =& -\frac{(\delta {\bf v}-\delta{\bf u})}{\langle t_{s} \rangle} + \frac{{\bf w}_{s}\,\delta t_{s}}{\langle t_{s} \rangle^{2}}, \end{align} where all coordinates here now refer to those in the free-falling frame, and we have defined $\delta t_{s}$ as the linearized perturbation to $t_{s}$; i.e.\ $t_{s} \equiv \langle t_{s} \rangle + \delta t_{s}(\delta \rho,\,\delta {\bf v},\, ...) + \mathcal{O}(\delta^{2})$. We now Fourier decompose each variable, $\delta \propto \exp{[\iimag\,({\bf k}\cdot {\bf {x}} - \omega\,{t})]}$, and define the parallel and perpendicular components of ${\bf k} \equiv k_{\|}\,\hat{{\bf w}}_{s} + k_{\bot}\,\hat{\bf k}_{\bot}$. Because of the symmetry of the problem, the solutions are independent of the orientation of ${\bf k}_{\bot}$ in the plane perpendicular to $\hat{{\bf w}}_{s}$.
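The linearized system of Eq.~\eqref{eqn:linearized} can also be solved numerically for a single Fourier mode, which provides a direct cross-check on the analytic growth rates derived below. The sketch that follows is our own illustration (not from the original paper), assuming a constant drag coefficient ($\delta t_{s}=0$) and code units $c_{s}=\langle t_{s} \rangle=\rho_{0}=1$: it assembles $\partial_{t}X = \mathsf{M}\,X$ for the state $X=(\delta\rho,\,\delta{\bf u},\,\delta\rho_{d},\,\delta{\bf v})$ with $\nabla \rightarrow \iimag\,{\bf k}$, and reads off $\Im(\omega)$ as the largest real part among the eigenvalues of $\mathsf{M}$:

```python
# Numerical solution of the linearized system, Eq. (3), for one Fourier mode,
# with constant drag coefficient (delta t_s = 0) and units c_s = <t_s> = rho_0 = 1.
# State vector: [d_rho, du_x, du_y, d_rho_d, dv_x, dv_y], with w_s along x.
import numpy as np

def growth_rate(k, cos_theta, w_s, mu):
    kx, ky = k * cos_theta, k * np.sqrt(1.0 - cos_theta**2)
    M = np.zeros((6, 6), dtype=complex)
    M[0, 1], M[0, 2] = -1j * kx, -1j * ky                 # gas continuity
    M[1, 0] = -1j * kx - mu * w_s                         # gas momentum (x): pressure + back-reaction
    M[1, 1], M[1, 3], M[1, 4] = -mu, w_s, mu
    M[2, 0], M[2, 2], M[2, 5] = -1j * ky, -mu, mu         # gas momentum (y)
    M[3, 3] = -1j * w_s * kx                              # dust continuity, advected at w_s
    M[3, 4], M[3, 5] = -1j * mu * kx, -1j * mu * ky
    M[4, 1], M[4, 4] = 1.0, -1j * w_s * kx - 1.0          # dust momentum (x): drag
    M[5, 2], M[5, 5] = 1.0, -1j * w_s * kx - 1.0          # dust momentum (y)
    return np.linalg.eigvals(M).real.max()                # Im(omega) of fastest mode

# No drift: only damped (or neutral) sound waves, no instability.
assert growth_rate(k=10.0, cos_theta=1.0, w_s=0.0, mu=0.1) < 1e-8
# Supersonic drift at the resonant angle cos(theta) = 1/w_s: unstable,
# with growth rates that continue to rise toward high k.
g_lo = growth_rate(10.0, 0.5, 2.0, 0.01)
g_hi = growth_rate(1000.0, 0.5, 2.0, 0.01)
assert 0 < g_lo < g_hi
```

Setting $w_{s}=0$ recovers stable, drag-damped sound waves, while for supersonic drift at the resonant angle $\cos{\theta} = 1/w_{s}$ the growth rate keeps increasing with $k$, consistent with the scalings derived in \S~\ref{sec:resonance}.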
The density equations trivially evaluate to $\delta \rho = \rho_{0}\,\omega^{-1}\,{\bf k}\cdot \delta {\bf u}$ and $\delta \rho_{d} = \mu\,\rho_{0}\,(\omega - {\bf w}_{s}\cdot {\bf k})^{-1}\,{\bf k}\cdot \delta {\bf v}$, and the momentum equations can be written \begin{align} \nonumber \omega\,\delta{\bf {u}} + \mu\,(\omega - {\bf w}_{s}\cdot{\bf k})\,\delta{\bf {v}} &= \frac{( c_{s}^{2}\,\langle t_{s} \rangle\,{\bf k} - \iimag\,\mu\,{\bf w}_{s} )\,{\bf k}\cdot\delta{\bf {u}}}{\omega\,\langle t_{s} \rangle} \\ \nonumber &\ \ \ \ \ \ \ \ \ + \frac{(\iimag\,\mu\,{\bf w}_{s})\,{\bf k}\cdot\delta{\bf {v}}}{(\omega-{\bf w}_{s}\cdot{\bf k})\,\langle t_{s} \rangle}, \\ \label{eqn:linearized.fourier} \iimag\,{\bf w}_{s}\,\frac{\delta{t_{s}}}{\langle t_{s} \rangle} &= \langle t_{s} \rangle\,(\omega - {\bf w}_{s}\cdot{\bf k})\,\delta{\bf {v}} + \iimag\,(\delta{\bf {v}} - \delta{\bf {u}}). \end{align} In this form, the first equation is the total momentum equation for the combined gas+dust mixture. The second equation encodes our ignorance about $t_{s}$. A couple of important results are immediately clear from here and Eq.~\eqref{eqn:linearized}. After removing the homogeneous solution, ${\bf g}$ vanishes: an identical uniform acceleration on dust and gas produces no interesting behavior. More precisely, as derived in detail in App.~\ref{sec:accel.frame}, a transformation from the free-falling frame, which moves with velocity $\langle {\bf u} \rangle = {\bf u}_{0} + [{\bf g} + {\bf a}\,\mu/(1+\mu)]\,t$, back into the stationary frame, is exactly equivalent to making the replacement $\omega \rightarrow \omega + {\bf u}_{0}\cdot {\bf k} + (t/2)\,[ {\bf g} + {\bf a}\,\mu/(1+\mu) ]\cdot {\bf k}$.
In other words, the only difference between working in the stationary and free-falling frames is a trivial phase-shift of the modes. This implies that the acceleration ${\bf a}$ is important only insofar as it produces a non-vanishing dust-gas drift velocity ${\bf w}_{s}$, and any source producing the same equilibrium drift will produce the same linear instabilities. Finally, we note that if ${\bf a}=\mathbf{0}$, then ${\bf w}_{s}=\mathbf{0}$ and the equations become those for a coupled pair of sound waves with friction (all modes are stable or decay). This also occurs if $\delta {\bf u}$ and $\delta {\bf v}$ are strictly perpendicular to ${\bf w}_{s}$. In this manuscript, we will consider only single-wave perturbations in linear perturbation theory -- i.e.\ the dispersion relation and ensuing instabilities studied here involve a single wave at a given ${\bf k}$ and $\omega({\bf k})$, as opposed to, e.g.,\ higher-order two-wave interactions involving waves with different $\omega_{1}$, $\omega_{2}$. To be clear, although the waves we study necessarily involve both gas and dust, the drag coupling means that the two phases cannot be considered separately. To make further progress, we require a functional form for $t_{s}$ to determine $\delta t_{s}$. For most physically interesting drag laws, $t_{s}$ depends on some combination of the density, temperature, and velocity offset $|{\bf v}-{\bf u}|$ (more below). Therefore, for now, we consider an {\em arbitrary} $t_{s}$ of the form $t_{s} = t_{s}(\rho,\,T,\,c_{s},\,{\bf v}-{\bf u})$. We will assume there is some equation-of-state which can relate perturbations in $T$ and $c_{s}$ to $\rho$.
Then the linearized form obeys, \begin{align} \label{eqn:ts.general} \frac{\delta {t_{s}}}{\langle t_{s} \rangle} &= -\zeta_{s}\,\frac{\delta{\rho}}{\rho_{0}} - \zeta_{w}\,\frac{{\bf w}_{s}\cdot\left(\delta{\bf {v}} - \delta{\bf {u}} \right) }{|{\bf w}_{s}|^{2}}, \end{align} where $\zeta_{s}$ and $\zeta_{w}$ are the drag coefficients\footnote{Note that we label the $\delta \rho/\rho_{0}$ coefficient in Eq.~\eqref{eqn:ts.general} as $\zeta_{s}$ because it encodes the dependence of $t_{s}$ on density at constant entropy; see App.~\ref{sec:hydrostatic.generalized}.} that depend on the form of $t_{s}$ (see \S~\ref{sec:draglaws}). \vspace{-0.5cm} \subsection{Gas Supported By Pressure Gradients and Arbitrarily-Stratified Systems} \label{sec:pressure.gradients} Above we considered a homogeneous, freely-falling system. Another physically relevant case is when the gas is stationary (hydrostatic), which requires a pressure gradient (with $\nabla P_{0} = \rho_{0}\,{\bf g} + \rho_{d,\,0}\,{\bf w}_{s}/\langle t_{s} \rangle$). This will generally involve stratification in other properties as well (e.g.\ gas and dust density), so more broadly we can consider arbitrary stratification of the background quantities $P_{0}$, $\rho_{0}$, $\rho_{d,\,0}$, and ${\bf w}_{s}$. As usual, if we allow such gradients, we must restrict our analysis to spatial scales shorter than the background gradient scale-length $L_{0}$ (e.g.\ $k \gg |\nabla U_{0}|/|U_{0}|\sim 1/L_{0}$, for each variable $U_{0}$), or else a global solution (with appropriate boundary conditions, etc.) is obviously needed. Moreover, we must also require $|{\bf w}_{s}|\,\langle t_{s} \rangle \ll L_{0}$, or else the timescale for the dust to ``drift through'' the system scale-length is much shorter than the stopping time (and no equilibrium can develop).
So our analysis should be considered local in space and time, with these criteria imposing maximum spatial scales and timescales over which it is applicable (with actual values that are, of course, problem-dependent). We discuss these scales with various applications in \S~\ref{sec:breakdown}. In App.~\ref{sec:hydrostatic.generalized}, we re-derive our results, for all unstable modes considered in this paper, for hydrostatic systems with arbitrary stratification in $P_{0}$, $\rho_{0}$, $\rho_{d,\,0}$, and ${\bf w}_{s}$. Provided we meet the conditions above required for our derivation to be valid (i.e.\ $k \gg 1/L_{0}$), we show: \begin{itemize} \item{\bf (1):} The existence and qualitative (e.g.\ dimensional, leading-order) scalings of all the instabilities analyzed here in the homogeneous case are un-altered by stratification terms, and the leading-order corrections to both the real and imaginary parts (growth rates and phase velocities) of the relevant modes are fractionally small. \item{\bf (2):} Pressure gradients (the term required to make the system hydrostatic) enter especially weakly at high-$k$ (at third-from-leading order in $1/(L_{0}\,k)$) in the behavior of the instabilities studied here. The dominant correction from stratification is from non-vanishing $\nabla \cdot {\bf w}_{s} \sim \rho_{d,\,0}^{-1}\, {\bf w}_{s} \cdot \nabla \rho_{d,\,0}$, i.e.\ a background dust density and drift velocity gradient along the direction of the drift. The sense of the resulting correction is simply that modes moving in the direction of the drift are stretched or compressed along with the background dust flow. The correction is large only if the timescale for the dust to drift through the dust-density gradient scale-length is short compared to mode growth timescales.
\item{\bf (3):} The leading-order corrections from stratification are not systematically stabilizing or de-stabilizing (they can increase or decrease the growth rates). \item{\bf (4):} Stratification introduces new instabilities. For example, even when the {gas} is stably stratified, stratification leads to new linear modes in the gas, e.g.\ Brunt-V\"ais\"al\"a\ buoyancy oscillations. As shown in SH, if these modes exist in the gas, there is a corresponding RDI (the Brunt-V\"ais\"al\"a\ RDI studied in SH), which has maximal growth rates when ${\bf w}_{s}\cdot{\bf k} = \pm (k_{\bot}/k)\,N_{BV}$, i.e.\ when ${\bf w}_{s}\cdot{\bf k}$ matches the Brunt-V\"ais\"al\"a\ frequency $N_{BV}$. We defer a detailed study of these modes to future work, since they are not acoustic instabilities and have fundamentally different behaviors and dimensional scalings (e.g.\ resonance exists for all $w_{s}$, but the growth rates are always lower than those of the acoustic RDI at high-$k$ if $w_{s} > 1$). \end{itemize} In what follows, we will take the homogeneous (free-falling) case to be our ``default'' reference case, for two reasons. (1) The homogeneous and stratified cases exhibit the same qualitative behaviors, instabilities, and modes in all limits we wish to study, but the mathematical expressions are considerably simpler in the homogeneous case. And (2), as discussed in \S~\ref{sec:applications}, the situations where the acoustic RDI is of the greatest astrophysical interest involve dust-driven winds (e.g.\ in cool stars, star-forming regions, AGN tori, etc.). Such systems are generally better approximated as being freely accelerating than in hydrostatic equilibrium. \vspace{-0.5cm} \subsection{Neglected physics} \subsubsection{Magnetized Gas and Dust} \label{sec:mhd} In this paper, we focus for simplicity on a pure hydrodynamic fluid.
If the system is sufficiently magnetized, new wave families appear (e.g.\ shear Alfv\'en, slow, and fast magnetosonic waves in MHD). SH\ show that slow and fast magnetosonic waves, just like the acoustic waves here, are subject to the RDI (even when there is no Lorentz force on the dust). For resonant modes, when the projected dust streaming velocity (${\bf w}_{s}\cdot \hat{\bf k}$) matches either the slow or fast wave phase velocity, the qualitative behavior is similar to the acoustic RDI studied here (\S~\ref{sec:resonance}). Further, as for the hydrodynamic modes studied in detail below (\S~\ref{sec:general.modes}), even modes that are not resonant can still be unstable (but, unsurprisingly, the MHD-dust system is more complicated; see \citealt{tytarenko:two.fluid.drift.intabilities}). Another effect, which was not included in SH, is grain charge. If the gas is magnetized and the grains are sufficiently charged, then Lorentz forces may dominate over the aerodynamic drag laws we consider here. This regime is relevant to many astrophysical systems (even, e.g., cosmic ray instabilities; \citealp{kulsrud.1969:streaming.instability,Bell.cosmic.rays}). Lorentz forces will alter the equilibrium solution, and introduce additional dependence of the mode structure on the direction of ${\bf k}$ via cross-product terms (terms perpendicular to both the mean drift and magnetic field), although they do not generally suppress (and in many cases actually enhance) the RDI. For these reasons, we defer a more detailed study of MHD to future work. \vspace{-0.5cm} \subsubsection{Multi-Species Dust} \label{sec:dust.species} Astrophysical dust is distributed over a broad spectrum of sizes (and other internal properties), producing different $t_{s}$, ${\bf v}$, ${\bf a}$ for different species. Consider de-composing the dust into sub-species $i$.
Since the dust is pressure-free, the dust continuity and momentum equations in Eq.~\eqref{eqn:general} simply become a pair of equations for each sub-species $i$. Each has a continuity equation for $\rho_{d,\,i}$ (where $\rho_{d} = \sum_{i}\,\rho_{d,\,i}$) and a momentum equation for ${\bf v}_{i}$, each with their own acceleration ${\bf a}_{i}$ and drag $t_{s,\,i}$, but otherwise identical form to Eq.~\eqref{eqn:general}. The gas continuity equation is identical, and the gas momentum equation is modified by the replacement of the drag term $\rho_{d}\,({\bf v}-{\bf u})/t_{s} \rightarrow \sum_{i}\,\rho_{d,\,i}\,({\bf v}_{i}-{\bf u})/t_{s,\,i}$. The homogeneous solution now features each grain species moving with ${\bf w}_{s,\,i}$, where ${\bf w}_{s,\,i} \propto {\bf a}_{i}\,t_{s,\,i}$, so the sum in the gas momentum equation becomes $\sum_{i}\,\rho_{d,\,i}\,({\bf v}_{i}-{\bf u})/t_{s,\,i} \sim \sum_{i}\,\mu_{i}\,{\bf a}_{i}$. The most important grain property is usually size (this, to leading order, determines other properties such as charge). For a canonical spectrum of individual dust grain sizes ($R_{d}$), the total dust mass contained in a logarithmic interval of size scales as $\mu_{i} \propto d\mu / d\ln{R_{d}} \propto R_{d}^{0.5}$, i.e.\ most of the dust mass is concentrated in the largest grains \citep{mathis:1977.grain.sizes,draine:2003.dust.review}. Further, for any physical drag law (see \S~\ref{sec:draglaws}), $t_{s,\,i}$ increases with $R_{d}$. In most situations, we expect $|{\bf a}_{i}|$ to depend only weakly on $R_{d}$.
This occurs: (i) if the difference in dust-gas acceleration is sourced by gravity or pressure support for the gas, (ii) when the gas is directly accelerated by some additional force (e.g.\ radiative line-driving), or (iii) when the dust is radiatively accelerated by long-wavelength radiation.\footnote{If dust is radiatively accelerated by a total incident flux ${\bf F}_{\lambda}$ centered on some wavelength $\lambda$, the acceleration is ${\bf a} \approx {\bf F}_{\lambda}\,Q_{\lambda}\,\pi\,R_{d}^{2} / (c\,m_{d}) \propto Q_{\lambda}/R_{d}$, where $m_{d}\propto \bar{\rho}_{d}\,R_{d}^{3}$ is the grain mass and $Q_{\lambda}$ is the absorption efficiency, which scales as $Q_{\lambda}\sim1$ for $\lambda \ll R_{d}$ and $Q_{\lambda} \sim R_{d}/\lambda$ for $\lambda \gg R_{d}$. So the acceleration scales $\propto 1/R_{d}$ for $\lambda \ll R_{d}$ and is independent of grain size for $\lambda \gg R_{d}$. For ISM dust, the typical sizes of the largest grains are $\sim 0.1\,\mu\,{\rm m} \sim 1000\,$\AA, so for many sources we expect to be in the long-wavelength limit (even in cases where sources peak at $\ll 1000\,$\AA, gas, not dust, will typically be the dominant opacity source).} Therefore, in these cases, all of the relevant terms in the problem are dominated by the largest grains, which contain most of the mass. We therefore think of the derivation here as applying to ``large grains.'' The finite width of the grain size distribution is expected to broaden the resonances discussed below (since there is not exactly one $w_{s,\,i}$, there will be a range of resonant angles), but not significantly change the dynamics. Much smaller grains can effectively be considered tightly-coupled to the gas (they will simply increase the average weight of the gas). However, in some circumstances -- for example, acceleration of grains by high-frequency radiation -- we may have $|{\bf a}_{i}| \propto R_{d}^{-1}$.
In these cases, the ``back-reaction'' term on the gas is dominated by small grains; however, those also have the smallest $w_{s,\,i}$, and may therefore have slower instability growth rates. There can therefore be some competition between effects at different grain sizes, and the different sizes may influence one another via their effects on the gas. This will be explored in future numerical simulations. \vspace{-0.5cm} \subsubsection{Viscosity} \label{sec:hydro.dissipation} We neglect dissipative processes in the gas in Eqs.~\eqref{eqn:linearized}--\eqref{eqn:linearized.fourier} (e.g., bulk viscosity). Clearly, including this physics will create a minimum scale below which RDI modes may be damped. This is discussed more in \S~\ref{sec:breakdown}. \vspace{-0.5cm} \section{Unstable Modes: General Case} \label{sec:general.modes} In this section, we outline, in full detail, the behavior of the dispersion relation that results from Eq.~\eqref{eqn:linearized.fourier}. While the completely general case must be solved numerically, we can derive analytic expressions that highlight key scalings for all interesting physical regimes. To guide the reader, we start with a general overview of the different branches of the dispersion relation in \S~\ref{sec:general overview of modes}, referring to the relevant subsections for detailed derivations. For those readers most interested in a basic picture of the instability, Figs.~\ref{fig:growth.rate.demo}--\ref{fig:growth.rate.mu} give a simple overview of the dispersion relation and its fastest-growing modes. \vspace{-0.5cm} \subsection{Overview of results}\label{sec:general overview of modes} In general, the coupled gas-dust dispersion relation (Eq.~\eqref{eqn:dispersion.full} below) admits at least two unstable modes, sometimes more. This leads to a plethora of different scalings, each valid in different regimes, which we study in detail throughout \S~\ref{sec:General dispersion relation}--\ref{sec:mode.structure}.
The purpose of this section is then to provide a ``road map'' to help the reader navigate the discussion. An important concept, discussed above and in SH, is a mode ``resonance.'' This occurs here when ${\bf w}_{s} \cdot \hat{\bf k} = \pm c_{s}$, and thus is always possible (for some $\hat{\bf k}$) when $|{\bf w}_{s}| \ge c_{s}$ ($w_{s} \ge 1$). As shown in SH, when $\mu\ll1$ (and $k\gg \mu$), modes at the resonant angle are the fastest growing, and will thus be the most important for dynamics. In the context of the analysis presented below, we will see that the dispersion relation changes character at resonance, and we must therefore analyze these specific mode angles separately. The connection to the matrix-based analysis of SH, which treated only the modes at the resonant angle, is outlined in App.~\ref{app: matrix relationship}. A clear illustration of the importance of the resonant angle is shown in the right-hand panel of Fig.~\ref{fig:growth.rate.demo}. Below, we separate our discussion into the following modes (i.e., regimes/branches of the dispersion relation): \vspace{-0.2cm} \begin{description} \item[\emph{\bf (i) Decoupling instability, \S~\ref{sec:decoupling}:}] If $\zeta_{w}<-1$, the drag on the dust decreases with increasing $w_{s}$ sufficiently rapidly that the dust and the gas completely decouple, causing an instability which separates the two. This instability exists for all ${\bf k}$, but is not usually physically relevant (see \S~\ref{sec:coulomb}). \item[\emph{\bf (ii) Long-wavelength modes, \S~\ref{sec:long.wavelength}:}] At long wavelengths, the two unstable branches of the dispersion relation merge. This instability, which has a growth rate that scales as $\Im(\omega) \sim k^{2/3}$, persists for all $\mu$, any $w_{s}$, and any $\zeta_{s}$ and $\zeta_{w}$ (except $\zeta_{w}=0$, $\zeta_{s}=1$).
This mode has a unique structure which resembles neither a modified sound wave nor free dust drift. \item[\emph{\bf (iii) The ``quasi-sound'' mode, \S~\ref{sec:intermediate}:}] At shorter wavelengths, the two branches of the dispersion relation split in two. We term the first of these the ``quasi-sound'' mode. The mode structure resembles a modified sound wave. When $w_{s} \gtrsim 1$, the quasi-sound mode is unstable for all $k$, with $\Im(\omega)\sim k^{0}$ (i.e., the growth rate is constant). At resonance (\S~\ref{sec:intermediate.mode.at.resonance}), the quasi-sound mode is subdominant and its growth rate declines with increasing $k$. The quasi-sound mode is stable for subsonic streaming ($w_{s}<1$). \item[\emph{\bf (iv) The ``quasi-drift'' mode, \S~\ref{sec:slow}:}] The second shorter-wavelength branch is the ``quasi-drift'' mode. The mode structure resembles modified free (undamped) grain drift. At the resonant mode angle (\S~\ref{sec:resonance}), the quasi-drift mode is the dominant mode in the system, with a growth rate that increases without bound as $k\rightarrow \infty$. Over an intermediate range of wavelengths $\Im(\omega) \sim k^{1/2}$, while for sufficiently short wavelengths $\Im(\omega) \sim k^{1/3}$. At resonance, the mode structure also becomes ``sound wave-like'' in the gas, in some respects (\S~\ref{sec:mode.structure}). Away from resonance (e.g., if $w_{s} <1$), the quasi-drift mode is either stable or its growth rate saturates at a constant value (i.e., $\Im(\omega)\sim k^{0}$), depending on $w_{s}$ and $\zeta_{s}/(1+\zeta_{w})$. \item[\emph{\bf (v) The ``uninteresting'' mode:}] For certain parameter choices a third unstable mode appears (it would be a fourth unstable mode if $\zeta_{w}<-1$, when the decoupling instability also exists).
We do not analyze this mode further because it always has a (significantly) lower growth rate than either the quasi-sound or quasi-drift modes. \end{description} We also discuss the subsonic limit $w_{s}<1$ separately in more detail (\S~\ref{sec:subsonic.long.wavelenghts}), so as to highlight key scalings for this important physical regime. Finally, in \S~\ref{sec:mode.structure}, we consider the structure of the eigenmodes for the fastest-growing modes (the long-wavelength mode and the resonant version of the quasi-drift mode), emphasizing how the resonant modes directly seed large dust-to-gas-ratio fluctuations in the gas. \vspace{-0.5cm} \subsection{General dispersion relation}\label{sec:General dispersion relation} Before continuing, let us define the problem. For brevity of notation, we will work in units of $\rho_{0}$, $c_{s}$, and $\langle t_{s} \rangle$ (i.e.\ length units $c_{s}\,\langle t_{s} \rangle$), viz., \begin{align} w_{s} &\equiv \frac{|{\bf w}_{s}|}{c_{s}} \ \ , \ \ \omega \rightarrow \omega\,\langle t_{s} \rangle \ \ , \ \ k \rightarrow k\,c_{s}\,\langle t_{s} \rangle.
\label{eqn:dimensionless vars} \end{align} Inserting the general form for $t_{s}$ (Eq.~\eqref{eqn:ts.general}) into Eq.~\eqref{eqn:linearized.fourier}, we obtain the dispersion relation \begin{align} \label{eqn:dispersion.full} 0 =&\, A_{\omega}\,B_{\omega} \\ \nonumber A_{\omega} \equiv&\, \mu + (\omega + \iimag\,\mu)\,(\tilde{\omega} + \iimag) \\ \nonumber B_{\omega} \equiv&\, \tilde{\omega}\,(k_{\|}^{2} - k^{2})\, {\Bigl[} \tilde{\omega}^{3} + \tilde{\omega}^{2}\{\kappa_{\|} + \iimag\,[1 + \tilde{\zeta}_{w}(1+\mu)] \} \\ \nonumber &\, + \iimag\,\tilde{\omega}\,\{ \kappa_{\|}\,(1+\tilde{\zeta}_{w}) + \iimag\,\tilde{\zeta}_{w}\,(1+\mu) \} - \kappa_{\|}\{\mu + \tilde{\zeta}_{w}\,(1-\mu) \} {\Bigr]} \\ \nonumber &\, + \left[ \tilde{\omega}^{2} + \tilde{\omega}\,\{\kappa_{\|} + \iimag\,(1+\mu)\} + \iimag\,\kappa_{\|} \right]\, {\Bigl[} \tilde{\omega}\,(\tilde{\omega}+\iimag\,\tilde{\zeta}_{w})\,(\omega^{2}-k_{\|}^{2}) \\ \nonumber &\, + \iimag\,\mu\, \{ \tilde{\omega}^{3}\,\tilde{\zeta}_{w} + \tilde{\omega}^{2}\,\kappa_{\|}\,(1+\tilde{\zeta}_{w}-\zeta_{s}) - \iimag\,\kappa_{\|}^{2}\,(\tilde{\zeta}_{w}-\zeta_{s}) \} {\Bigr]} \end{align} where \begin{align} \nonumber \tilde{\omega} &\equiv \omega - \kappa_{\|} \ \ , \ \ \tilde{\zeta}_{w} \equiv 1 + \zeta_{w} \\ \label{eqn:kappa.definition} \kappa_{\|} &\equiv {\bf k} \cdot {\bf w}_{s} = k_{\|}\,w_{s} = w_{s}\,k\,\cos{\theta}. \end{align} (Note that $\cos\theta$, where $\theta$ is the angle between $\hat{\bf k}$ and $\hat{{\bf w}}_{s}$, was denoted $\psi_{kw}$ in SH\ to allow for simpler notation in the MHD case.) App.~\ref{sec:hydrostatic.generalized} gives more general expressions for stratified media. Our task is to analyze the solutions to Eq.~\eqref{eqn:dispersion.full}.
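As a concrete illustration, Eq.~\eqref{eqn:dispersion.full} can be solved numerically by transcribing $B_{\omega}$ as a degree-six polynomial in $\omega$ and finding its roots. The sketch below is our own illustration (assuming \texttt{numpy}; the function name is ours, not from the text); it checks the $\mu\rightarrow0$ limit, where the roots should reduce to the drag-free modes $\omega \simeq 0$, $\omega \simeq \pm k$ (sound waves), $\omega \simeq \kappa_{\|}$ (free drift), and the damped roots $\kappa_{\|} - \iimag$, $\kappa_{\|} - \iimag\,\tilde{\zeta}_{w}$:

```python
import numpy as np

def dispersion_roots(mu, ws, costh, k, zeta_s=0.0, zeta_w=0.0):
    """All six roots of B_omega = 0 (Eq. dispersion.full), in code units
    (omega in 1/<t_s>, k in 1/(c_s <t_s>)), built with poly1d arithmetic."""
    ztw = 1.0 + zeta_w                  # tilde-zeta_w
    kpar = k * costh                    # k_parallel
    kap = ws * k * costh                # kappa_parallel = k . w_s
    w = np.poly1d([1.0, 0.0])           # the variable omega
    wt = np.poly1d([1.0, -kap])         # omega_tilde = omega - kappa_parallel
    B = (wt * (kpar**2 - k**2) *
            (wt**3
             + wt**2 * (kap + 1j * (1.0 + ztw * (1.0 + mu)))
             + 1j * wt * (kap * (1.0 + ztw) + 1j * ztw * (1.0 + mu))
             - kap * (mu + ztw * (1.0 - mu)))
         + (wt**2 + wt * (kap + 1j * (1.0 + mu)) + 1j * kap) *
            (wt * (wt + 1j * ztw) * (w**2 - kpar**2)
             + 1j * mu * (wt**3 * ztw + wt**2 * kap * (1.0 + ztw - zeta_s)
                          - 1j * kap**2 * (ztw - zeta_s))))
    return np.roots(B.coeffs)
```

For example, with $\mu=10^{-8}$, $w_{s}=2$, $\cos\theta=0.3$, $k=1$ (so $\kappa_{\|}=0.6$), the six roots cluster around $0$, $\pm1$, $0.6$, and $0.6-\iimag$ (doubly, since $\tilde{\zeta}_{w}=1$ here), as expected.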
Fig.~\ref{fig:growth.rate.demo} plots the growth rate of the fastest-growing modes at each $\kappa_{\|}$ for a range of $w_{s}$, determined by exact numerical solution of Eq.~\eqref{eqn:dispersion.full}. Figs.~\ref{fig:mode.structure}, \ref{fig:growth.rate.draglaw}, and \ref{fig:growth.rate.mu} show additional examples. \vspace{-0.5cm} \subsubsection{General considerations} In Eq.~\eqref{eqn:dispersion.full}, $A_{\omega}$ has the uninteresting zeros $2\omega = \kappa_{\|} - \iimag\,(1+\mu) \pm [\kappa_{\|}^{2} - (1+\mu)^{2} - \iimag\,2\,\kappa_{\|}\,(1-\mu)]^{1/2}$. These are damped longitudinal sound waves which decay (${\Im}(\omega) \le 0$) on a timescale $\sim \langle t_{s} \rangle$ for all $\mu$ and $\kappa_{\|}$; they are independent of $\zeta_{s}$ and $\zeta_{w}$. The interesting solutions therefore satisfy $B_{\omega}=0$, a sixth-order polynomial in $\omega$. For fully-perpendicular modes (${\bf k}={\bf k}_{\bot}$), $B_{\omega}=0$ simplifies to $\omega^{2}\,(\omega + \iimag\,\tilde{\zeta}_{w}\,[1+\mu])\,[\omega^{2}\,(\iimag\,[1+\mu]+\omega) - k^{2}\,(\iimag+\omega)]=0$; this has the solutions $\omega=0$, $\omega=-\iimag\,(1+\mu)\,\tilde{\zeta}_{w}$, and the solutions to $\omega^{2}\,(\iimag\,[1+\mu]+\omega) - k^{2}\,(\iimag+\omega)=0$, which correspond to damped perpendicular sound waves and decay (${\Im}(\omega)<0$) for all physical $\mu>0$. For the general physical situation, with $\tilde{\zeta}_{w} > 0$, all unstable modes must thus have $k_{\parallel}\neq 0$. \vspace{-0.5cm} \subsection{Decoupling Instability}\label{sec:decoupling} Before considering the more general case with $k_{\parallel}\neq 0$, it is worth noting that the perpendicular ($k_{\parallel}=0$) mode above, $\omega=-\iimag\,(1+\mu)\,\tilde{\zeta}_{w}$, is unstable if $\tilde{\zeta}_{w} < 0$, i.e.\ $\zeta_{w} < -1$.
Physically, $\tilde{\zeta}_{w} < 0$ is the statement that the dust-gas coupling becomes weaker at higher relative velocities, so instability can occur when dust and gas decouple from one another (the gas decelerates and returns to its equilibrium without dust coupling, while the dust accelerates and moves faster and faster, further increasing their velocity separation). As discussed below (\S~\ref{sec:coulomb}), this could occur for Coulomb drag with $w_{s} \gg 1$; however, in this regime Coulomb drag will never realistically dominate over Epstein or Stokes drag, so we do not expect this instability to be physically relevant. \vspace{-0.5cm} \subsection{Long-Wavelength Instability: $\kappa_{\|}\ll \hat{\mu}$} \label{sec:long.wavelength} We now examine the case of long wavelengths (small $k$). If we consider terms in $\omega$ up to $\mathcal{O}(k)$ for $k\ll \hat{\mu}$, and expand $B_{\omega}$, we obtain $\omega^{3}\,\tilde{\zeta}_{w}\,(1+\mu) = \iimag\,\mu\,(\tilde{\zeta}_{w}-\zeta_{s})\,\kappa_{\|}^{2}$ to leading order. For $\tilde{\zeta}_{w}-\zeta_{s}>0$, this has two unstable roots with the same imaginary part but oppositely-signed real parts (waves propagating in opposite directions are degenerate).
Solving $B_{\omega}$ up to $\mathcal{O}(k)$ gives: \begin{align} \nonumber \omega(\kappa_{\|}\ll\hat{\mu}) &\approx \begin{cases} {\displaystyle \kappa_{0} + \frac{\pm \sqrt{3} + \iimag}{2}\,\left(1 - \frac{\zeta_{s}}{\tilde{\zeta}_{w}}\right)^{\frac{1}{3}}\hat{\mu}^{1/3}\,\kappa_{\|}^{2/3}} \ \ & \hfill{(\zeta_{s} < \tilde{\zeta}_{w})} \\ \\ {\displaystyle \kappa_{0} + \iimag\,\left(\frac{\zeta_{s}}{\tilde{\zeta}_{w}}-1 \right)^{\frac{1}{3}}\hat{\mu}^{1/3}\,\kappa_{\|}^{2/3}} \ \ & \hfill{(\zeta_{s} > \tilde{\zeta}_{w})} \end{cases} \\ \label{eqn:longwave.mode} \kappa_{0} \equiv {\Bigl[} 1& + \mu\,\left(2 + \frac{\zeta_{s}-1}{\tilde{\zeta}_{w}} \right) {\Bigr]} \, \frac{\kappa_{\|}}{3\,(1+\mu)} \ \ \ , \ \ \ \hat{\mu} \equiv \frac{\mu}{1+\mu} \end{align} Note that this mode depends only on $\kappa_{\|} = w_{s}\,k\,\cos{\theta}$ at this order; the dependence on $w_{s}$ is implicit. The growth rate rises towards shorter wavelengths, but sub-linearly. Most notably, instability exists at {\em all} dust abundances $\mu$ (and depends only weakly on that abundance, with the $1/3$ power), wavelengths $\kappa_{\|}$ (for $\kappa_{\|}\ll \hat{\mu}$), accelerations or $w_{s}$, and drag coefficients $\zeta_{s}$ and $\zeta_{w}$.\footnote{\label{foot:when.instab.vanishes}Note that in the pathological case $\zeta_{s} = \tilde{\zeta}_{w} = 1+\zeta_{w}$, our approximation in Eq.~\eqref{eqn:longwave.mode} vanishes, but an exact solution to Eq.~\eqref{eqn:dispersion.full} still exhibits low-$k$ instability, albeit with a reduced growth rate. The reason is that the leading-order term on which Eq.~\eqref{eqn:longwave.mode} is based vanishes, so the growth rate scales with a higher power of $\kappa_{\|}$. Instability only vanishes completely at low $k$ when $\zeta_{s}=1$ and $\zeta_{w}=0$, exactly.} This mode is fundamentally distinct from either a modified sound wave or a modified dust drift mode.
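The growing roots of the leading-order cubic $\omega^{3}\,\tilde{\zeta}_{w}\,(1+\mu) = \iimag\,\mu\,(\tilde{\zeta}_{w}-\zeta_{s})\,\kappa_{\|}^{2}$, and hence the $\hat{\mu}^{1/3}\,\kappa_{\|}^{2/3}$ growth-rate scaling of Eq.~\eqref{eqn:longwave.mode}, can be checked directly; a minimal sketch (our own, assuming \texttt{numpy}; names are ours):

```python
import numpy as np

def longwave_growth(mu, kap, zeta_s=0.0, zeta_w=0.0):
    """Growth rate of the long-wavelength mode from the leading-order cubic
    omega^3 * ztw * (1+mu) = i * mu * (ztw - zeta_s) * kap^2  (valid for kap << mu_hat)."""
    ztw = 1.0 + zeta_w
    roots = np.roots([ztw * (1.0 + mu), 0.0, 0.0,
                      -1j * mu * (ztw - zeta_s) * kap**2])
    return roots.imag.max()

# Closed form (zeta_s < ztw branch of Eq. longwave.mode):
# Im(omega) = (1/2) * (1 - zeta_s/ztw)^(1/3) * mu_hat^(1/3) * kap^(2/3)
mu, kap, zs, zw = 0.01, 1e-4, 0.5, 0.3
muhat, ztw = mu / (1.0 + mu), 1.0 + zw
analytic = 0.5 * (1.0 - zs / ztw) ** (1.0 / 3) * muhat ** (1.0 / 3) * kap ** (2.0 / 3)
```

The numerically-selected growing root reproduces the closed-form prefactor, and doubling $\kappa_{\|}$ three times multiplies the growth rate by $8^{2/3}=4$.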
Rather, it is essentially a one-dimensional mode of a pressure-free, two-fluid system with drift between the two phases. To see this, we note that the pressure force on the gas scales as $\nabla P \sim k\,c_{s}^{2}\,\delta\rho$, while the drift forces scale $\propto \mu$. So, at sufficiently small $k\ll \mu$, the pressure force becomes small compared to the drag force of the dust on the gas. Perturbations perpendicular to the drift are damped on the stopping time, but parallel perturbations can grow. As a result, one can recover all of the properties of this mode by simplifying to a pressure-free, one-dimensional system (${\bf k}$, $\delta {\bf u}$, $\delta {\bf v}$ parallel to ${\bf w}_{s}$). At long wavelengths in particular, one might wonder whether the presence of gradients or inhomogeneity in the equilibrium solution might modify the mode here. In App.~\ref{sec:hydrostatic.generalized}, we consider a system in hydrostatic equilibrium supported by pressure gradients, with arbitrary stratification of any of the background quantities $P_{0}$, $\rho_{0}$, $\rho_{d,\,0}$, ${\bf w}_{s}$. We show that the leading-order correction to this mode can be written as $\omega \rightarrow \omega\,(1 + \epsilon)$ with $\epsilon \sim \hat{\mu}^{1/3}\,\kappa_{\|}^{2/3}\,(k\,\mu/|\nabla\mu|)^{-1}$. But $\hat{\mu} \ll 1$, generally, and $\kappa_{\|} \ll \hat{\mu} \ll 1$ for this mode, so the correction term is small unless $k^{-1} \gg \mu/|\nabla\mu|$; i.e.\ unless we go to wavelengths much larger than the background gradient-scale length (of $\mu$). Obviously, in this case a global solution, with appropriate boundary conditions, would be needed. \begin{figure*} \plotsidesize{figs/dwind_instab_modes.pdf}{0.75} \vspace{-0.25cm} \caption{Spatial structure of the modes in Fig.~\ref{fig:growth.rate.demo} (see \S~\ref{sec:mode.structure}).
Here we take $\mu=0.01$, $\zeta_{s}=\zeta_{w}=0$, $w_{s}=10$, and $\cos{\theta}$ as shown, and plot the perturbed dust density $\delta \rho_{d}$ and gas density $\delta \rho$ (in units of $\rho_{0}$, the mean density), and the perturbed dust velocity $\delta {\bf v}$ and gas velocity $\delta {\bf u}$ (in units of $c_{s}$). The overall amplitude of the linear perturbation ($y$-axis normalization) is arbitrary. We separate the velocities into the magnitude of the component parallel to ${\bf k}$ ($\delta {\bf v} \cdot \hat{\bf k}$) and perpendicular to it ($\delta {\bf v} \times \hat{\bf k}$). We show the spatial structure over one period, for a given $k$ (in units of $(c_{s}\,\langle t_{s} \rangle)^{-1}$). In all cases, a lag between the dust and gas density perturbations arises because the dust decelerates when moving through the denser gas, which generates a ``pileup'' and stronger dust-density peak, which in turn amplifies the gas response. {\em Top:} The long-wavelength mode (\S~\ref{sec:long.wavelength}) exhibits a nearly-coherent dust-gas oscillation, with $\delta \rho_{d} \approx \mu\,\delta \rho$ to leading order (the lag is higher-order). This is not a modified sound wave, however: the phase/group velocities scale $\propto k^{-1/3}$ (Eq.~\ref{eqn:longwave.mode}), the velocity and density responses are offset by a phase lag, and the gas+dust density perturbation is weak ($|\delta \rho|/\rho_{0} \ll |\delta {\bf v}|/c_{s}$; note we multiply the plotted $\delta \rho$ by $10$, and $\delta\rho_{d}$ by $10/\mu$). {\em Middle:} Resonant mode (\S~\ref{sec:resonance}), at intermediate wavelengths where the growth rate scales $\propto k^{1/2}$ (Eq.~\ref{eqn:longwave.mode.midk}). The wavespeed, gas density, and velocity in the $\hat{\bf k}$ direction now behave like a sound wave.
The dust lag is larger (phase angle $\sim \pi/6$) and, because of the ``resonance,'' where the dust motion along the $\hat{\bf k}$ direction exactly matches the wavespeed, the effects above add coherently and generate a much stronger dust response with $|\delta \rho_{d}|/|\delta \rho| \sim (2\,\mu\,k)^{1/2}$, a factor $\sim (2\,k/\mu)^{1/2} \sim 20$ larger than the mean dust-to-gas ratio. Note the large perpendicular velocities also present. {\em Bottom:} Resonant mode, at short wavelengths (where growth rates scale $\propto k^{1/3}$; Eq.~\ref{eqn:omega.resonant}). This is similar to the intermediate-wavelength case, except that perpendicular velocities become negligible, the dust velocity response $\delta {\bf v}$ becomes weaker, and the dust density response continues to become stronger, with $|\delta \rho_{d}|/|\delta \rho| \sim (4\,\mu\,k)^{1/3}$, a factor $\sim 1000$ larger here than the mean dust-to-gas ratio $\mu$. \vspace{-0.25cm} \label{fig:mode.structure}} \end{figure*} \vspace{-0.5cm} \subsection{Short(er)-Wavelength Instabilities: $\kappa_{\|}\gg \hat{\mu}$} At high $k$ there are at least two different unstable solutions. If we assume a dispersion relation of the form $\omega \sim \mathcal{O}(k^{1}) + \mathcal{O}(k^{\nu})$ where $\nu<1$, and expand $B_{\omega}$ to leading order in $k^{-1} \ll 1$, we obtain a dispersion relation $0 = \omega\,(\omega-\kappa_{\|})^{3}\,(\omega^{2} - k^{2})\,(1 + \mathcal{O}(k^{-1}))$. This is solved by $\omega = \pm k + \mathcal{O}(k^{\nu})$ or $\omega = \kappa_{\|} + \mathcal{O}(k^{\nu})$, each of which produces a high-$k$ branch of the dispersion relation. In the following sections, \ref{sec:intermediate}--\ref{sec:slow}, we study each of these branches in detail. We term the first branch, with $\omega = \pm k + \mathcal{O}(k^{\nu})$, the ``quasi-sound'' mode (\S~\ref{sec:intermediate}); to leading order this is just a sound wave (the natural mode in the gas, absent drag: $\omega = \pm c_{s}\,k$).
We term the second branch, with $\omega = \kappa_{\|} + \mathcal{O}(k^{\nu})$, the ``quasi-drift'' mode (\S~\ref{sec:slow}); to leading order this is ``free drift'' (the natural mode in the dust, absent drag: $\omega = {\bf w}_{s}\cdot{\bf k}$). In the analysis of each of these, we must treat modes with the resonant angle, $\cos\theta =\pm 1/w_{s}$, separately, because the dispersion relation fundamentally changes character. The quasi-drift mode at resonance (\S~\ref{sec:resonance}) is the fastest-growing mode in the system (when $w_{s}>1$ and $\mu\ll1$), with growth rates that increase \emph{without bound} as $k\rightarrow \infty$. This is the resonance condition for the acoustic RDI case considered in SH\ (see also App.~\ref{app: matrix relationship}). \vspace{-0.5cm} \subsection{Short(er)-Wavelength Instability: The ``Quasi-sound'' Mode} \label{sec:intermediate} To leading order, the quasi-sound mode satisfies $\omega = \pm k$ (the sound wave dispersion relation). Consider the next-leading-order term; i.e.\ assume $\omega = \omega_{\rm QS} = \pm k + \omegaZ + \mathcal{O}(k^{-1})$ (where $\omegaZ$ is a term that is independent of $k$) and expand the dispersion relation to leading order in $k^{-1}$ (it will transpire that the solution here is valid for all $k \gg w_{s}\,\mu$). This produces a simple linear leading-order dispersion relation for both the $\pm$ cases: \begin{align} \label{eqn:omega.med} \omega_{\rm QS} &\approx \pm\, k - \iimag\,\frac{\mu\,(1 + \zeta_{w}\,\cos^{2}{\theta} \pm w_{s}\,(1-\zeta_{s})\,\cos{\theta})}{2} \end{align} where the ``$+$'' mode applies the $+$ to all $\pm$, and vice versa.
Because both signs of $\cos{\theta}$ are allowed, it follows that the modes are unstable (${\Im}(\omega) > 0$) if \begin{align} \label{eqn:eta.intermediate.mode.1} w_{s}\,{|}(1 - \zeta_{s})\,\cos{\theta}{|} &> 1 + \zeta_{w}\,\cos^{2}{\theta}. \end{align} Because $\zeta_{w}$ and $\zeta_{s}$ are generally order-unity or smaller, Eq.~\eqref{eqn:eta.intermediate.mode.1} implies that $w_{s} \gtrsim 1$ is required for this mode to be unstable. For $\zeta_{w}<1$, the more common physical case (see \S~\ref{sec:draglaws}), we also see that the condition (Eq.~\eqref{eqn:eta.intermediate.mode.1}) is first met for parallel modes ($\cos\theta=\pm1$), and that their growth rate (Eq.~\eqref{eqn:omega.med}) is larger than that of oblique modes.\footnote{For the parallel case, the general dispersion relation $B_{\omega}$ simplifies to $B_{\omega} \rightarrow A_{\omega}\,B_{\omega}^{\prime}$ with \begin{align} \nonumber B_{\omega}^{\prime} &= \kappa_{\|}\,w_{s}^{2}\,\mu\,(\omega\,\tilde{\zeta}_{w} -\kappa_{\|}\,\zeta_{s}) + \tilde{\omega}\,{\Bigl(} (\tilde{\omega}+\iimag\tilde{\zeta}_{w})\,(\omega^{2}\,w_{s}^{2}-\kappa_{\|}^{2}) \\ \nonumber & + \iimag\,w_{s}^{2}\,\mu\,( \omega^{2}\,\tilde{\zeta}_{w} + \kappa_{\|}\,\{\kappa_{\|}\,(\zeta_{s}-1) + \iimag\,\tilde{\zeta}_{w} \} - \omega\,\kappa_{\|}\,(\tilde{\zeta}_{w}+\zeta_{s} - 1) ) {\Bigr)} \end{align} } Comparing the long-wavelength result in Eq.~\eqref{eqn:longwave.mode} to Eq.~\eqref{eqn:omega.med}, we see that the growth rate increases with $k$ until it saturates at the constant value given by Eq.~\eqref{eqn:omega.med}, above $k \gtrsim w_{s}\,\mu$. For $w_{s} \lesssim 1$, the mode becomes stable above $k\gtrsim w_{s}\,\mu$.
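The $k$-independent growth rate of Eq.~\eqref{eqn:omega.med} and the instability criterion of Eq.~\eqref{eqn:eta.intermediate.mode.1} can be evaluated together; a minimal sketch (our own, in pure Python; function names are ours):

```python
def quasi_sound_growth(mu, ws, costh, zeta_s=0.0, zeta_w=0.0):
    """Im(omega_QS) for the '+' and '-' branches of Eq. (omega.med); k-independent."""
    return [-0.5 * mu * (1.0 + zeta_w * costh**2 + s * ws * (1.0 - zeta_s) * costh)
            for s in (+1, -1)]

def quasi_sound_unstable(ws, costh, zeta_s=0.0, zeta_w=0.0):
    """Instability criterion, Eq. (eta.intermediate.mode.1)."""
    return ws * abs((1.0 - zeta_s) * costh) > 1.0 + zeta_w * costh**2
```

For supersonic parallel streaming ($w_{s}=2$, $\cos\theta=1$, $\zeta_{s}=\zeta_{w}=0$) the ``$-$'' branch grows at $\Im(\omega)=\mu/2$, while for $w_{s}=0.5$ both branches are damped, consistent with the $w_{s}\gtrsim1$ requirement above.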
In App.~\ref{sec:hydrostatic.generalized} we show that, up to this order in $k$, the behavior of this mode is identical in hydrostatic or arbitrarily stratified media (the leading-order corrections appear at order $\sim 1/(k\,L_{0})$, where $L_{0}$ is the gradient scale-length of the system). \vspace{-0.5cm} \subsubsection{The Quasi-sound Mode at Resonance}\label{sec:intermediate.mode.at.resonance} When $w_{s}\,\cos{\theta}=\pm1$, the behavior of the quasi-sound mode is modified (the series expansion we used is no longer valid; see \S~\ref{sec:resonance}). If we follow the same branch of the dispersion relation, then instead of the growth rate becoming constant at high $k$, it peaks around $\kappa_{\|}\sim\hat{\mu}$ at a value ${\Im}(\omega) \approx \hat{\mu}/4$, and then declines with increasing $\kappa_{\|}$. It is therefore the less interesting branch in this limit, because the quasi-drift branch produces much larger growth rates. \vspace{-0.5cm} \subsection{Short(er)-Wavelength Instability: The ``Quasi-drift'' Mode} \label{sec:slow} We now consider the quasi-drift mode branch of the high-$k$ limit of $\omega$, with leading order $\omega = \kappa_{\|}$ (the free-drift dispersion relation).
Assuming $\omega = \omega_{\rm QD} = \kappa_{\|} + \omegaZ + \mathcal{O}(k^{-1})$, and expanding to leading order in $k$, we obtain the leading-order cubic relation \begin{align} 0 = &\omegaZ\,(\omegaZ+\iimag)\,(\omegaZ + \iimag\,\tilde{\zeta}_{w})\,(1-w_{s}^{2}\,\cos^{2}{\theta}) - \mu\,(\iimag\,(\tilde{\zeta}_{w}-\zeta_{s})\,w_{s}^{2}\,\cos^{2}{\theta} \nonumber\\ &+ \omegaZ\,(1-\tilde{\zeta}_{w} + (\tilde{\zeta}_{w}\,(1+w_{s}^{2})-w_{s}^{2}\,\zeta_{s} - 1)\,\cos^{2}{\theta})).\label{eqn:slowmode.dispersion.relation} \end{align} Equation~\eqref{eqn:slowmode.dispersion.relation} is solvable in closed form, but the expressions are tedious and unintuitive.\footnote{Eq.~\eqref{eqn:slowmode.dispersion.relation} does provide a simple closed-form solution if $\cos\theta=\pm 1$ (parallel modes) or $\zeta_{w}=0$; in these cases the growing mode solutions are: \begin{align} \nonumber \omega_{\rm QD}(|\cos{\theta}| = 1) &\approx \kappa_{\|} + \iimag\,\frac{\tilde{\zeta}_{w}}{2}\,\left[-1 + \left({1 + \frac{4\,\mu\,(\tilde{\zeta}_{w}-\zeta_{s})}{\tilde{\zeta}_{w}^{2}\,(1-w_{s}^{-2})}} \right)^{1/2}\right] \\ \nonumber \omega_{\rm QD}(\zeta_{w}=0) &\approx \kappa_{\|} + \iimag\,\frac{1}{2}\,\left[-1 + \left({1 + \frac{4\,\mu\,(1-\zeta_{s})}{1-(w_{s}\,\cos{\theta})^{-2}}} \right)^{1/2} \right] \end{align} } For clarity of presentation, if we consider $\mu \ll 1$, the expression factors into a damped solution with $\omegaZ = -\iimag$, and a quadratic that gives a damped and a growing solution, which simplifies to: \begin{align} \label{eqn:omega.slow} \omega_{\rm QD}(\mu\ll1) &\approx \kappa_{\|} + \iimag\,\frac{(w_{s}\,\cos{\theta})^{2}\mu}{(w_{s}\,\cos{\theta})^{2}-1}\,\left(1 - \frac{\zeta_{s}}{\tilde{\zeta}_{w}} \right) \end{align} This illustrates the general form of the full expression.
In particular, we see that the expressions become invalid ($\Im (\omega) \rightarrow \infty$) at the resonant angle $w_{s}^{2}\cos^{2}\theta =1$, which will be treated separately below (\S~\ref{sec:resonance}). The requirement for instability (from the general version of Eq.~\eqref{eqn:omega.slow}) is: \begin{align} \label{eqn:slowmode.stability} (w_{s}^{2}\,\cos^{2}{\theta}-1)\,(1-\zeta_{s}/\tilde{\zeta}_{w}) \ge 0. \end{align} We thus see that if $\zeta_{s}/\tilde{\zeta}_{w} < 1$ (the more common physical case), this mode is unstable for $w_{s}\,|\cos{\theta}| > 1$; if $\zeta_{s}/\tilde{\zeta}_{w} > 1$, however, the mode is stable for $w_{s}\,|\cos{\theta}| > 1$ but becomes unstable for $w_{s}\,|\cos{\theta}| < 1$. Away from resonance (i.e., with $|w_{s}\cos{\theta}| \ne 1$), we see that, like the quasi-sound mode, the quasi-drift mode is described by the long-wavelength solution from \S~\ref{sec:long.wavelength}, with a growth rate that increases with $k$ until it saturates at the constant value of Eq.~\eqref{eqn:omega.slow}: roughly $\sim w_{s}^{2}\,\mu$ for $w_{s} < 1$ or $\sim \mu$ for $w_{s} > 1$. Comparing the growth rates (Eq.~\eqref{eqn:omega.slow} and Eq.~\eqref{eqn:longwave.mode}), we see this occurs at $k\gtrsim \mu\,w_{s}^{2} / (1 + w_{s}^{3})$ (i.e.\ $\sim w_{s}^{2}\,\mu$ for $w_{s} < 1$, $\sim \mu/w_{s}$ for $w_{s} > 1$).
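As a numerical cross-check, the cubic Eq.~\eqref{eqn:slowmode.dispersion.relation} can be solved directly and compared with the $\mu\ll1$ approximation of Eq.~\eqref{eqn:omega.slow}; a minimal sketch (our own, assuming \texttt{numpy}; names are ours):

```python
import numpy as np

def quasi_drift_growth(mu, ws, costh, zeta_s=0.0, zeta_w=0.0):
    """Growth rate Im(omegaZ) of the growing root of Eq. (slowmode.dispersion.relation):
    omegaZ*(omegaZ+i)*(omegaZ+i*ztw)*a - mu*(i*d + omegaZ*b) = 0, written as a cubic."""
    ztw = 1.0 + zeta_w
    c2 = (ws * costh)**2
    a = 1.0 - c2
    b = 1.0 - ztw + (ztw * (1.0 + ws**2) - ws**2 * zeta_s - 1.0) * costh**2
    d = (ztw - zeta_s) * c2
    roots = np.roots([a, 1j * a * (1.0 + ztw), -a * ztw - mu * b, -1j * mu * d])
    return roots.imag.max()

# mu << 1 approximation, Eq. (omega.slow): Im ~ mu * c2/(c2 - 1) * (1 - zeta_s/ztw)
mu, ws, costh, zs, zw = 1e-4, 2.0, 1.0, 0.5, 0.3
c2, ztw = (ws * costh)**2, 1.0 + zw
approx = mu * c2 / (c2 - 1.0) * (1.0 - zs / ztw)
```

The cubic's growing root matches Eq.~\eqref{eqn:omega.slow} to $\mathcal{O}(\mu)$ relative accuracy, and flipping to $\zeta_{s}/\tilde{\zeta}_{w}>1$ with supersonic $w_{s}\,|\cos\theta|>1$ stabilizes the mode, as Eq.~\eqref{eqn:slowmode.stability} requires.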
In App.~\ref{sec:hydrostatic.generalized}, we note that in an arbitrarily stratified background, a constant correction to the growth rate of this mode appears at leading order, with the form $\omega_{\rm QD} \rightarrow \omega_{\rm QD} - \iimag\,\nabla\cdot{\bf w}_{s}$ (or $\omega_{\rm QD} \rightarrow \omega_{\rm QD} + \iimag\,\rho_{d,\,0}^{-1}\,{\bf w}_{s}\cdot\nabla\rho_{d,\,0}$, since the dust density and velocity are related by continuity). Because this mode moves with the mean dust motion ($\omega\approx\kappa_{\|}$ to leading order), this is just the statement that, if there is a non-zero divergence of the background drift, the perturbation is correspondingly stretched or compressed along with the mean flow. The correction is important only if the timescale for the dust to ``drift through'' the global gradient scale-length (in $\rho_{d,\,0}$ or ${\bf w}_{s}$) is short compared to the growth time. \vspace{-0.1cm} \subsubsection{The Quasi-drift Mode at Resonance} \label{sec:resonance} When $w_{s} \ge 1$, Eq.~\eqref{eqn:omega.slow} (and its generalization, valid at all $\mu$) diverges as $\cos{\theta} \rightarrow \pm 1/w_{s}$. In this case the ``saturation'' or maximum growth rate of the mode becomes infinite. What actually occurs is that the growth rate continues to increase {\em without limit} with increasing $k$. In this limit, our previous series expansion at high $k$ is invalid: we must return to $B_{\omega}$ and insert $w_{s}\,\cos{\theta}=\pm1$; i.e.\ $k^{2}=\kappa_{\|}^{2}$ or ${\bf k}\cdot {\bf w}_{s} = \omega_{\rm sound} \equiv \pm c_{s}\,k$, the resonance condition for the RDI.
Note that when the resonant condition is met, the mode satisfies $\omega = {\bf w}_{s}\cdot{\bf k} = \pm c_{s}\,k$ -- i.e.\ to leading order it simultaneously satisfies the dispersion relation of gas absent drag (a sound wave) {\em and} dust absent drag (free drift). This effectively eliminates the restoring forces in the system, so the resulting dispersion relation\footnote{If the resonant condition is satisfied and $\zeta_{s}=\zeta_{w}=0$, the dispersion relation has the simple form $\tilde{\omega}^{2}\,[\tilde{\omega}+\iimag\,(1+\mu)]\,(\omega+k) = -\mu\,k^{2}$.} has growing solutions with ${\Im}(\omega_{\ast}) > 0$ for {\em all} $\kappa_{\|}$, and {\em the growth rate increases monotonically with $\kappa_{\|}$ without limit} (here and below we use $\omega_{\ast}$ to denote the resonant frequency).\footnote{Note that at long wavelengths, $k \ll \hat{\mu}$, the series expansion in Eq.~\eqref{eqn:longwave.mode} is still accurate and we just obtain the solutions in \S~\ref{sec:long.wavelength}, even at resonance.} There are two relevant regimes for this mode at resonance: {\bf (1) The Intermediate-wavelength (``mid-$k$'' or ``low-$\mu$'') Resonant Mode:} If $\hat{\mu} \ll k \ll \hat{\mu}^{-1}$, the resonant solutions to $B_{\omega_{\ast}}=0$ give: \begin{align} \nonumber \omega_{\ast}(\hat{\mu} &\ll \kappa_{\|} \ll \hat{\mu}^{-1}) \approx \kappa_{1} + \frac{\iimag\pm1}{2}\,\left( {\Bigl |}1 - \frac{\zeta_{s}}{\tilde{\zeta}_{w}} {\Bigr|}\,{\hat{\mu}\,\kappa_{\|}} \right)^{1/2} \\ \kappa_{1} &\equiv \left[ 1 - \frac{\hat{\mu}}{4}\,\left(1 - \frac{\tilde{\zeta}_{w}\,\zeta_{w} + \zeta_{s}\,w_{s}^{2}}{\tilde{\zeta}_{w}^{2}\,w_{s}^{2}} \right) \right]\,\kappa_{\|} - \iimag\frac{(\tilde{\zeta}_{w}-\zeta_{s})\,\hat{\mu}}{8\,\tilde{\zeta}_{w}}.\label{eqn:longwave.mode.midk} \end{align} As expected, to $\mathcal{O}({\mu}^{1/2})$, this matches the ``acoustic RDI'' expression derived in SH, with the resonance between the dust drift
velocity and the natural phase velocity of an acoustic wave without dust (the exact correspondence is explained in detail in App.~\ref{app: matrix relationship}). {\bf (2) The Short-wavelength (``high-$k$'') Resonant Mode:} At larger $\kappa_{\|}\gg \hat{\mu}^{-1}$, expanding $\omega \sim \mathcal{O}(k)$ to leading order in $k \gg 1$ shows that the leading-order term must obey $\omega = \pm \kappa_{\|} = \pm k$, as before. Now expand to the next two orders in $k$ as $\omega_{\ast} = k + \omega_{1/3}\,k^{1/3} + \omegaZ$, where again $\omegaZ$ denotes a $k$-independent part (it is easy to verify that with $\nu\ge0$, any term $\omega = k + \omega_{\nu}\, k^{\nu}$, other than $\nu=0$ and $\nu=1/3$, must have $\omega_{\nu}=0$ to satisfy the dispersion relation to next-leading order in $k$). This gives $2\,\omega_{1/3}^{3} + (1+\zeta_{w}/w_{s}^{2}-\zeta_{s})\,\mu=0$, and a simple linear expression for $\omegaZ$. There is always one purely real root, one decaying root, and one unstable (${\Im}(\omega) > 0$) root. Taking the unstable root, we obtain the ``high-$k$'' resonant mode: \begin{align} \label{eqn:omega.resonant}\omega_{\ast}(k \gg \hat{\mu}^{-1}) &\approx \kappa_{\|} + (\iimag\,\sqrt{3} \pm 1)\,\left( \frac{|\Theta|\,\mu\,\kappa_{\|}}{16} \right)^{1/3} - \iimag\,\omegaZ \\ \nonumber \Theta &\equiv 1+\frac{\zeta_{w}}{w_{s}^{2}}-\zeta_{s} \\ \nonumber \omegaZ &\equiv \frac{(1+\Theta)\,\mu}{6} + \frac{1 + (\tilde{\zeta}_{w}^{2}-1)/w_{s}^{2} - \tilde{\zeta}_{w}\,\zeta_{s}}{3\,\Theta}, \end{align} where the sign in the $\pm$ part of the real part of $\omega_{\ast}$ is ``$+$'' if $\Theta > 0$ and ``$-$'' if $\Theta < 0$. Again this is just the high-$k$ expression for the acoustic RDI derived in SH.
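The resonant growth rates can be recovered numerically from the simplified resonant dispersion relation quoted in the footnote above, $\tilde{\omega}^{2}\,[\tilde{\omega}+\iimag\,(1+\mu)]\,(\omega+k) = -\mu\,k^{2}$ (valid for $\zeta_{s}=\zeta_{w}=0$, taking $\kappa_{\|}=k$); a minimal sketch (our own, assuming \texttt{numpy}; the quartic expansion is ours):

```python
import numpy as np

def resonant_growth(mu, k):
    """Max growth rate of (w-k)^2 * (w-k+i(1+mu)) * (w+k) + mu*k^2 = 0,
    the resonant (zeta_s = zeta_w = 0) dispersion relation, expanded as a quartic in w."""
    M = 1.0 + mu
    coeffs = [1.0,
              -2.0 * k + 1j * M,
              -1j * M * k,
              2.0 * k**3 - 1j * M * k**2,
              -k**4 + 1j * M * k**3 + mu * k**2]
    return np.roots(coeffs).imag.max()

mu = 1e-5
muhat = mu / (1.0 + mu)
# mid-k regime (muhat << k << 1/muhat), Eq. (longwave.mode.midk) with zeta_s = zeta_w = 0:
# Im(omega_*) ~ (1/2) * sqrt(muhat * kappa_par)
midk = 0.5 * np.sqrt(muhat * 1.0)
```

At $k=1$ the numerical growth rate matches $\frac{1}{2}(\hat{\mu}\,\kappa_{\|})^{1/2}$, and quadrupling $k$ doubles it, confirming the $\kappa_{\|}^{1/2}$ scaling; pushing $k\gg\hat{\mu}^{-1}$ instead recovers the $\kappa_{\|}^{1/3}$ scaling of Eq.~\eqref{eqn:omega.resonant}.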
Note that, formally, the intermediate-wavelength (mid-$k$) and short-wavelength (high-$k$) resonant modes do not necessarily represent the same branch of the dispersion relation (they are distinct modes even at resonance, one of which is the fastest-growing at intermediate $k$, the other at high $k$). However, for $\zeta_{s} \le 1$, they are degenerate, and the resonant mode behavior transitions smoothly between the two limits with increasing $k$. Qualitatively, the resonant modes grow in a similar way to the long-wavelength instability Eq.~\eqref{eqn:longwave.mode}. The scaling of the growth rate flattens with increasing $\kappa_{\|}$, from $\omega\sim \kappa_{\|}^{2/3}$ (for $\kappa_{\|} \ll \hat{\mu}$), to $\omega_{\ast}\sim \kappa_{\|}^{1/2}$ (for $\hat{\mu} \ll \kappa_{\|} \ll \hat{\mu}^{-1}$), to $\omega_{\ast}\sim \kappa_{\|}^{1/3}$ (for $\hat{\mu}^{-1}\ll \kappa_{\|}$). Comparison to the quasi-sound mode (Eq.~\eqref{eqn:omega.med}) or the quasi-drift mode away from resonance (Eq.~\eqref{eqn:omega.slow}) shows that the resonant mode (Eqs.~\eqref{eqn:longwave.mode.midk} and \eqref{eqn:omega.resonant}) always grows fastest. Because resonance requires $w_{s}\,\cos{\theta}=\pm1$, we have: $k_{\|} = k\,\cos{\theta} = \pm k/w_{s}$, $k_{\bot} = |{\bf k}_{\bot}| = k\,\sin{\theta} = k\,(1 - w_{s}^{-2})^{1/2}$, and $k_{\|}/k_{\bot} = \pm 1/\sqrt{w_{s}^{2}-1}$. For modest $w_{s}\gtrsim 1$, the resonant mode is primarily parallel ($\cos{\theta} \sim \pm1$), but for large $w_{s} \gg 1$, the resonant mode becomes increasingly perpendicular, with $\theta \rightarrow \pi/2$ and $k_{\bot} \gg |k_{\|}|$.
We can estimate the width of the resonant angle in Fig.~\ref{fig:growth.rate.demo} -- i.e., the range of angles over which the growth rate is similar to its maximum -- by combining the maximum growth rate at resonance (Eqs.~\eqref{eqn:longwave.mode.midk}--\eqref{eqn:omega.resonant}) with the growth rate of the quasi-drift mode away from resonance (Eq.~\eqref{eqn:omega.slow}). This gives $\Delta\cos{\theta} \sim \mu/(w_{s}\,\omega_{\ast})$, where $\omega_{\ast} \sim (\mu\,k)^{1/2}$ (at mid $k$) or $\omega_{\ast} \sim (\mu\,k)^{1/3}$ (at high $k$). We see that the resonance is broader at larger $\mu$, lower $w_{s}$, and lower $k$. Similar to the out-of-resonance quasi-drift modes, if we consider arbitrarily stratified, hydrostatic backgrounds (App.~\ref{sec:hydrostatic.generalized}), the dispersion relation differs (to leading order in $\sim 1/k$) only in a constant offset in the growth rate (i.e.\ in the $\kappa_{1}$ term in Eq.~\ref{eqn:longwave.mode.midk} or the $\omegaZ$ term in Eq.~\ref{eqn:omega.resonant}) of order $\sim \nabla\cdot{\bf w}_{s}$. This correction is always unimportant for the ``high-$k$'' resonant mode, and for the ``mid-$k$'' resonant mode over the upper range of $k$ in which that mode exists. But it can, in principle, be a significant correction at the lower-$k$ range of the ``mid-$k$'' mode ($k\sim \hat{\mu}$) if $\hat{\mu}$ is very small (see App.~\ref{sec:hydrostatic.generalized} for details). At high $k$ and at resonance, anti-aligned solutions of the form $\omega = -k + \omegaZ + \mathcal{O}(k^{-1})$ are also admitted. These have the simple solution $\omegaZ \approx -\iimag\,(\zeta_{w} + w_{s}\,\zeta_{s})\,\mu/(2\,w_{s})$, which is growing only if $\zeta_{w} + w_{s}\,\zeta_{s} < 0$.
\vspace{-0.5cm} \subsection{Subsonic ($w_{s} < 1$) Modes} \label{sec:subsonic.long.wavelenghts} In \S~\ref{sec:slow} above, we saw that when $w_{s}>1$ (and $\hat{\mu}\ll1$) the fastest-growing modes are the long-wavelength mode (at low $k$) and the acoustic RDI ``resonant'' modes (at high $k$). When the streaming is subsonic ($w_{s}<1$) this resonance is no longer possible, and the quasi-sound mode (\S~\ref{sec:intermediate}) is also stabilized. It thus seems helpful to cover the subsonic mode structure in a self-contained manner, which is the purpose of this section. We collect some of the results derived in \S~\ref{sec:long.wavelength}--\S~\ref{sec:slow} and derive a new limit of the subsonic quasi-drift mode. At low $k$, the long-wavelength solutions from \S~\ref{sec:long.wavelength} continue to be unstable. Moreover, the ``quasi-drift'' mode in Eq.~\eqref{eqn:omega.slow} is still unstable if $\zeta_{s} > \tilde{\zeta}_{w}$ (see Eq.~\eqref{eqn:slowmode.stability}; in this case all $k$ are unstable). The mode then grows as in Eq.~\eqref{eqn:longwave.mode} until saturating at a maximum growth rate given by Eq.~\eqref{eqn:omega.slow}: approximately $\sim w_{s}^{2}\,\mu$, for $k \gtrsim w_{s}^{2}\,\mu$. From the form of Eq.~\eqref{eqn:omega.slow} we can also see that for $w_{s}<1$ the most rapidly-growing mode has $\cos{\theta}=\pm1$, i.e.\ the modes are parallel. If $\tilde{\zeta}_{w} > \zeta_{s}$ (and $w_{s} < 1$), the quasi-drift mode is stabilized for $k\gg1$. However, it persists over some intermediate range of $k$, which was not captured by Eq.~\eqref{eqn:omega.slow} because of our assumption there that $k\gg1$. Specifically, the growth of $\Im(\omega)$ with $\kappa_{\|}$ saturates at a similar point, but then $\Im(\omega)$ turns over and vanishes at finite $k \gtrsim w_{s}$.
Since we are interested in small $w_{s}$ and low $k$, we assume $\omega\sim \omegaZ + \omega_{1}\,w_{s} + \omega_{2}\,w_{s}^{2}$ and $k\sim \mathcal{O}(w_{s})$, and expand the dispersion relation to leading order in $w_{s}$. This gives two results: (i) that $\omegaZ$ must vanish, and (ii) that $\omega_{1}$ must obey $\omega_{1}\,(\omega_{1}^{2}\,(1+\mu) - (k/w_{s})^{2})=0$. This gives the leading-order solution $\omega = \pm k_{\|}/\sqrt{1+\mu}$. Plugging in either the $+$ or $-$ root (they give the same growth rate), we solve for the second-order term, to obtain the relation \begin{align} \nonumber \omega_{\rm subsonic} &\approx k_{\|}\,\left( \pm\frac{1}{(1+\mu)^{1/2}} + \frac{(\zeta_{s}+\zeta_{w}\,w_{s})\,w_{s}\,\mu}{2\,(1+\mu)\,\tilde{\zeta}_{w}}\right) \\ \label{eqn:omega.lowk} &+ \iimag\,\frac{\mu}{2}\,\left( w_{s}^{2}\,(\tilde{\zeta}_{w}-\zeta_{s}) - \frac{k_{\|}^{2}}{(1+\mu)^{2}} \right). \end{align} We see that this subsonic quasi-drift mode is unstable for $k_{\|}<w_{s}\,(1+\mu)\,(\tilde{\zeta}_{w}-\zeta_{s})^{1/2}$. We reiterate that Eq.~\eqref{eqn:omega.lowk} is valid only for $\tilde{\zeta}_{w} > \zeta_{s}$; otherwise Eq.~\eqref{eqn:omega.slow} applies and all $k$ are unstable. \vspace{-0.5cm} \subsection{Mode Structure} \label{sec:mode.structure} In this section we discuss the structure of the eigenmodes in ($\delta \rho,\delta {\bf u},\delta \rho_{d},\delta {\bf v}$). We focus on the most relevant (fastest-growing) modes in the three limits: (i) $\kappa_{\parallel}\ll \hat{\mu}$ (dispersion relation in Eq.~\eqref{eqn:longwave.mode}), (ii) $\hat{\mu}\ll \kappa_{\parallel}\ll \hat{\mu}^{-1}$ (Eq.~\eqref{eqn:longwave.mode.midk}), and (iii) $\kappa_{\parallel}\gg \hat{\mu}^{-1}$ (Eq.~\eqref{eqn:omega.resonant}). In the subsonic streaming limit $w_{s}<1$, the long-wavelength mode is the most relevant.
Examples of each are shown in Fig.~\ref{fig:mode.structure}. \vspace{-0.2cm} \begin{enumerate} \item{\bf Long-Wavelength Mode} ($\kappa_{\|} \ll \hat{\mu}$; Eq.~\eqref{eqn:longwave.mode}): As $k\rightarrow 0$, the fastest-growing mode has ${\bf k} \propto {\bf w}_{s}$ (i.e.\ $\cos{\theta}=\pm 1$), and the perturbed velocities are parallel: $\delta {\bf v} \propto \delta {\bf u} \propto {\bf k} \propto {\bf w}_{s}$. Moreover, $\delta {\bf v} \approx \delta{\bf u}$ and $\delta \rho_{d} \approx \mu\,\delta {\rho}$. In other words, the mode simply features {\em coherent} oscillations of the dust and gas together, because these modes have wavelengths larger than the deceleration length of the dust. To leading order, the mode does not generate fluctuations in the dust-to-gas ratio. A second-order phase offset does appear between the dust and gas perturbations, and this drives the growth; but the offset is weak and the growth rate is correspondingly small. However, as we noted above, the long-wavelength mode is not a perturbed sound wave (coupled dust-gas sound waves exist at low $k$, but these are damped). It is a unique, approximately one-dimensional, pressure-free, two-fluid mode. The phase and group velocities scale as $\sim {\bf w}_{s}\,(k\,w_{s}/\mu)^{-1/3} \propto k^{-1/3}$, diverging as $k\rightarrow0$ because of the leading-order term in $\omega \propto k^{2/3}$.
There is also a phase offset, whereby the velocity perturbations lead (follow) the density perturbations by a phase angle of $\sim \pi/6$ for $w_{s} > 1$ ($w_{s} < 1$).\footnote{The phase angle $\pi/6$ (the argument of $\iimag^{1/3}$) appears repeatedly because the dominant imaginary terms in the dispersion relation are cubic.} This implies that the gas density response to the velocity perturbations is distinct from a sound wave, satisfying $\delta\rho/\rho_{0} \sim w_{s}^{-1}\,(\kappa_{\|}/\mu)^{1/3}\,|\delta {\bf v}/c_{s}| \sim [k/(\mu\,w_{s}^{2})]^{1/3}\,|\delta {\bf v}/c_{s}|$. \item{\bf Resonant Mode}, {\bf Intermediate-Wavelengths} ($\hat{\mu} \ll \kappa_{\|} \ll \hat{\mu}^{-1}$; Eq.~\eqref{eqn:longwave.mode.midk}): For intermediate $k$ with $w_{s} \ge 1$, the fastest-growing mode has ${\bf k}$ oriented at the resonant angle $\cos{\theta} = \pm 1/w_{s}$ (i.e.\ $\kappa_{\|}=k$, with $k_{\|} = \pm k/w_{s}$), so for $w_{s} \gg1$ it is increasingly transverse ($k \approx k_{\bot}$). To leading order in $k$ and $\mu$, $\omega \approx c_{s}\,k$, so the wave phase/group velocity is $c_{s}\,\hat{\bf k}$. This is the key RDI resonance: the wavespeed (approximately) matches the natural wavespeed of the system without dust (in this case, the sound speed), with a wavevector angle $\cos{\theta}=\pm1/w_{s}$, such that the dust drift velocity ({in the direction of the wave propagation}) is also equal to that wavespeed: ${\bf w}_{s} \cdot \hat{\bf k} = c_{s}$. In other words, the bulk dust is co-moving with the wave in the direction $\hat{\bf k}$. For $\mu\ll1$, the gas density response behaves like a sound wave, $\delta \rho/\rho_{0} \approx \hat{\bf k}\cdot \delta {\bf u}/c_{s}$, in phase with the velocity in the $\hat{\bf k}$-direction.
However, the dust density response now lags by a phase angle $\sim \pi/6$, and, more importantly, the resonance generates a strong dust density response: $|\delta \rho_{d}| \sim (2\,\mu\,\kappa_{\|})^{1/2}\, |\delta \rho|$. We see the dust-density fluctuation is enhanced by a factor $\sim (2\,\kappa_{\|}/\mu)^{1/2} \gg 1$ relative to the mean ($\mu$), which is much stronger than for the long-wavelength mode (with $\delta \rho_{d} \sim \mu\,\delta\rho$). The resonant mode can thus generate very large dust-to-gas fluctuations even for otherwise weak modes, and the magnitude of the induced dust response increases at shorter wavelengths. Effectively, as the dust moves into the gas density peak from the wave, it decelerates, producing a trailing ``pileup'' of dust density behind the gas density peak, which can be large. This dust-density peak then accelerates the gas, amplifying the wave. Because of the resonance with both drift and sound speeds, these effects add coherently as the wave propagates, leading to the exponential growth of the mode. One further interesting feature of this mode deserves mention: the velocities ($\delta {\bf v} \approx \delta {\bf u}$ here) are not fully aligned with $\hat{\bf k}$ but have a component in the ${\bf k}_{\bot}$ direction,\footnote{Note that for $w_{s} \gg1$, the ${\bf k}_{\bot}$ direction is approximately the $\hat{{\bf w}}_{s}$ direction.} which leads the velocity in the $\hat{\bf k}$ direction by a phase angle $\sim \pi/4$. This is a response to the dust streaming in the ${\bf k}_{\bot}$ direction, and the amplitude of this term decreases with $k$. \item{\bf Resonant Mode}, {\bf Short-Wavelengths} ($\kappa_{\|} \gg \hat{\mu}^{-1}$; Eq.~\eqref{eqn:omega.resonant}): At high $k$ with $w_{s} \ge1$, the details of the resonant mode (and the scaling of the growth rate) change.
The resonant condition remains the same as at mid $k$: the mode propagates with wavespeed $c_{s}\,\hat{\bf k}$ along the resonant angle $\cos{\theta}=\pm 1/w_{s}$, and the gas behaves like a sound wave (the velocities are now aligned, $\delta {\bf u} \propto \delta {\bf v} \propto {\bf k}$). This generates a strong dust response with the slightly-modified scaling $|\delta \rho_{d} |/|\delta \rho| \sim (4\,\mu\,\kappa_{\|})^{1/3} \gg 1$ (scaling like the growth rate), with $\delta \rho_{d}$ lagging the gas mode by a phase angle $\sim \pi/6$. Importantly, $|\delta \rho_{d}|/|\delta \rho|$ continues to increase indefinitely with $k$, and in this regime the dust density perturbation becomes {\em larger} than the gas density perturbation in absolute units (even though the mean dust density is smaller than the gas density by a factor $\mu$). The dust velocity $\delta {\bf v}$ is parallel to $\delta {\bf u}$, but with a smaller amplitude, $|\delta {\bf v}| / |\delta {\bf u}| \sim (\mu\,\kappa_{\|}/2)^{-1/3} \ll 1$, and $\delta {\bf v}$ leads $\delta {\bf u}$ by a phase angle $\sim \pi/6$. \end{enumerate} \begin{figure*} \plotsidesize{figs/dwind_instab_vs_alpha_vs_lawEOS.pdf}{0.85} \vspace{-0.25cm} \caption{Growth rates of the most-rapidly-growing unstable mode as a function of wavenumber and drift velocity, as in Fig.~\ref{fig:growth.rate.demo}, for different drag laws (see \S~\ref{sec:draglaws}). Here we take $\mu=0.01$, and marginalize over angle (the most rapidly-growing cases are $\cos{\theta}=1$ for $w_{s}<1$ or $\cos{\theta}=\pm1/w_{s}$ for $w_{s} \ge 1$). {\em Top Left:} Arbitrary constant $\zeta_{s}$, $\zeta_{w}$ parameterization of $t_{s}$ (Eq.~\eqref{eqn:ts.general}) with $\zeta_{s}=2$, $\zeta_{w}=0$ (thick lines) or $\zeta_{s}=0$, $\zeta_{w}=1$ (thin lines).
As shown in \S~\ref{sec:general.modes}, the dependence on these parameters is weak; the largest effect is to determine, when $w_{s} < 1$, whether all $k$ are unstable (if $\zeta_{s} > 1+\zeta_{w}$) or only small $k$ ($\zeta_{s} < 1+\zeta_{w}$), but the maximum growth rates in these cases are very similar. {\em Top Right:} Epstein drag (\S~\ref{sec:epstein}), with gas equation-of-state parameters $\gamma=5/3$ (thick) or $\gamma=2/3$ (thin). The qualitative behavior is identical, with modest normalization differences, and the transition between regimes for $w_{s}<1$ ($\zeta_{s} = 1+\zeta_{w}$) occurring at $\gamma^{-1} = 1 - 9\,\pi\,w_{s}^{2}/64$ ($\zeta_{s}$, $\zeta_{w}$ depend on $\gamma$ and $w_{s}$). Note that the low saturation value of the $\gamma=5/3$, $w_{s}=0.9$ case occurs because it is very close to this singular value ($(1+\zeta_{w})-\zeta_{s} \approx 0.02$). {\em Bottom Left:} Stokes drag (\S~\ref{sec:stokes}). The dependence on $\gamma$ is weak, and for all $\gamma < 3$, high-$k$ modes with $w_{s}<1$ are stable. {\em Bottom Right:} Coulomb drag (\S~\ref{sec:coulomb}; here $\Gamma=1$). For long-wavelength modes with $w_{s} < 1$, and all high-$k$ modes, the qualitative behavior is similar to the other laws, although normalization differences are more obvious. The high growth-rate, low-$k$ modes with $w_{s} > 1$ are a different instability, which manifests because when $w_{s} > 1$ in Coulomb drag, increasing the dust-gas velocity {\em decreases} the drag acceleration, so the dust speeds up and the system ``self-decouples.'' Physically, Epstein or Stokes drag should be dominant over Coulomb drag in this limit. \vspace{-0.25cm} \label{fig:growth.rate.draglaw}} \end{figure*} \vspace{-0.5cm} \section{Drag Physics} \label{sec:draglaws} In this section, we consider different physical drag laws.
This involves inserting specific forms of $\zeta_{s}$ and $\zeta_{w}$ into the dispersion relations derived in \S~\ref{sec:general.modes}. Numerically calculated growth rates for representative cases are shown for comparison in Fig.~\ref{fig:growth.rate.draglaw}. We also show, as illustrative cases, two arbitrary but constant, order-unity choices: $(\zeta_{s},\zeta_{w}) = (0,1)$ and $(\zeta_{s},\zeta_{w}) =(2,0)$. The former case illustrates that with $\zeta_{s} < \tilde{\zeta}_{w}$, the qualitative behavior of the modes is largely similar to the constant-$t_{s}$ case in Fig.~\ref{fig:growth.rate.demo}. The latter shows that when $\zeta_{s} > \tilde{\zeta}_{w}$, the dominant effect is to extend the instability of sub-sonic ($w_{s}<1$) cases to high $k$. For simplicity of notation, we again use the dimensionless variables of Eq.~\eqref{eqn:dimensionless vars} in this section. \vspace{-0.5cm} \subsection{Constant Drag Coefficient} \label{sec:constantdrag} The simplest case is $t_{s} = $\,constant, so $\delta {t_{s}} = 0$ -- i.e.\ $\zeta_{s}=\zeta_{w} = 0$ (and $\tilde{\zeta}_{w}=1$). The characteristic polynomial simplifies to $B_{\omega}=A_{\omega}\,B^{\prime}_{\omega}$ with $B^{\prime}_{\omega}\equiv\tilde{\omega}\,(\tilde{\omega}+\iimag)\,(\omega^{2}-k^{2}) + \iimag\,\mu\,(\omega^{2}\,\tilde{\omega} - \kappa_{\|}^{2}\{ \tilde{\omega} + \iimag\})$. Since $\tilde{\zeta}_{w}=1$, all pure-perpendicular modes are damped or stable. The long-wavelength modes are unstable, with growth rates \begin{align} \omega(\kappa_{\|} \ll \hat{\mu}) & = \kappa_{\|} + \frac{\pm \sqrt{3} + \iimag}{2}\,\hat{\mu}^{1/3}\,\kappa_{\|}^{2/3}. \end{align} For $w_{s} < 1$, these cut off at high $k$, with $\omega \approx (\mu/2)\,(w_{s}^{2} - k_{z}^{2}/(1+\mu)^{2})$ (Eq.~\eqref{eqn:omega.lowk}).
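For this constant-drag case, the stability boundary of the subsonic branch takes a particularly simple form: setting $\Im(\omega)=0$ in the expression above gives

```latex
\begin{align}
\nonumber \frac{\mu}{2}\left( w_{s}^{2} - \frac{k_{z}^{2}}{(1+\mu)^{2}} \right) = 0
\ \ \Rightarrow\ \ k_{z}^{\rm crit} = w_{s}\,(1+\mu),
\end{align}
```

i.e.\ all parallel wavenumbers below $w_{s}\,(1+\mu)$ (in our dimensionless units) are unstable, with the plateau growth rate $\approx \mu\,w_{s}^{2}/2$.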
For $w_{s} \ge 1$, at large $k$ the quasi-sound mode (Eq.~\eqref{eqn:omega.med}) is present, with growth rate $=\mu\,(w_{s}\,|\cos{\theta}| - 1)/2$, so the most rapidly-growing mode is parallel. The quasi-drift mode (Eq.~\eqref{eqn:omega.slow}) is present with growth rate $\sim \mu/[1 - (w_{s}\,\cos{\theta})^{-2}]$. At resonance ($\cos{\theta}\rightarrow \pm 1/w_{s}$), the growth rate is \begin{align} \omega_{\ast} &= \begin{cases} {\displaystyle \kappa_{\|}\,\left(1-\frac{\hat{\mu}}{4}\right) - \iimag\frac{\hat{\mu}}{8} + \frac{(1+\iimag)}{2}\,({\hat{\mu}\,\kappa_{\|}})^{1/2} \ \ \hfill{(\hat{\mu} \ll \kappa_{\|} \ll \hat{\mu}^{-1})} } \\ \\ {\displaystyle \kappa_{\|} - \iimag\,\frac{1+\mu}{3} + (1+\iimag\,\sqrt{3})\,\left(\frac{\mu\,\kappa_{\|}}{16}\right)^{1/3} \ \ \ \ \hfill{(\kappa_{\|} \gg \hat{\mu}^{-1})}. }\\ \end{cases} \end{align} Examples of this case ($\zeta_{s}=\zeta_{w} = 0$) are shown in Fig.~\ref{fig:growth.rate.demo}; they are similar to the other cases with $\zeta_{s} < \tilde{\zeta}_{w}$ in Fig.~\ref{fig:growth.rate.draglaw}. \vspace{-0.5cm} \subsection{Epstein Drag} \label{sec:epstein} The general expression\footnote{Equation~\eqref{eqn:ts.epstein} is actually a convenient polynomial approximation, given in \citet{draine.salpeter:ism.dust.dynamics}, to the more complicated dependence on $|{\bf v}-{\bf u}|$. However, using the more complicated expression yields negligible ($\sim 1\%$) differences for any parameters considered here.} (including physical dimensions) for the drag coefficient in the Epstein limit is: \begin{align} \label{eqn:ts.epstein} t_{s} &= \sqrt{\frac{\pi\,\gamma}{8}}\,\frac{\bar{\rho}_{d}\,R_{d}}{\rho\,c_{s}}\,\left( 1 + \epsteincoeff\,\frac{|{\bf v}-{\bf u}|^{2}}{c_{s}^{2}} \right)^{-1/2}, \ \ \ \ \epsteincoeff \equiv \frac{9\,\pi\,\gamma}{128}.
\end{align} where $\bar{\rho}_{d}$ is the internal (material) density of the aerodynamic particle and $R_{d}$ is the particle (grain) radius. For astrophysical dust, $\bar{\rho}_{d}\sim 1-3\,{\rm g\,cm^{-3}}$, and $R_{d} \sim 0.001 - 1\,\mu$m in the ISM, or in denser environments $R_{d} \sim 0.1-1000\,\mu$m (e.g., protoplanetary disks, SNe ejecta, or cool-star atmospheres; \citealt{draine:2003.dust.review}). Note that Epstein drag depends on the {\em isothermal} sound speed, $c_{\rm iso} \equiv \sqrt{k_{B}\,T/m_{\rm eff}}$ (where $m_{\rm eff}$ is the mean molecular weight). However, because we work in units of the sound speed $c_{s}\equiv \sqrt{\partial P/\partial \rho}$, we relate the two via the usual equation-of-state parameter $\gamma$, \begin{align} \gamma &\equiv \frac{c_{s}^{2}}{c_{\rm iso}^{2}} = \frac{\rho}{P}\,\partialAB{P}{\rho}, \end{align} and will assume $\gamma$ is constant under linear perturbations. We emphasize that the $\gamma$ here is the appropriate $\gamma$ describing how the temperature responds to compression or expansion on a wave-crossing time -- roughly the same $\gamma$ appropriate for a sound wave. This means that external heating or cooling processes are important for $\gamma$ only if the heating/cooling time is shorter than the sound-crossing time (otherwise we typically expect the adiabatic $\gamma$). Note that because $t_{s}$ now depends on $\langle |{\bf v}-{\bf u}| \rangle = |{\bf w}_{s}|$, Eq.~\eqref{eqn:mean.v.offset} for the drift velocity, ${\bf w}_{s} = {\bf a}\,\langle t_{s} \rangle/(1+\mu)$, is implicit. Define $\driftvelmagi{0} \equiv |{\bf a}|\,t_{0}/(c_{s}\,(1+\mu))$, where $t_{0} \equiv (\pi\,\gamma/8)^{1/2}\,\bar{\rho}_{d}\,R_{d}/(\rho_{0}\,c_{s})$ is the stopping time at zero relative velocity.
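Closing this implicit relation is straightforward: substituting Eq.~\eqref{eqn:ts.epstein} into ${\bf w}_{s} = {\bf a}\,\langle t_{s} \rangle/(1+\mu)$ gives $w_{s} = \driftvelmagi{0}\,(1+\epsteincoeff\,w_{s}^{2})^{-1/2}$, and squaring yields a quadratic in $w_{s}^{2}$:

```latex
\begin{align}
\nonumber \epsteincoeff\,w_{s}^{4} + w_{s}^{2} - \driftvelmagi{0}^{2} = 0,
\end{align}
```

whose positive root gives the closed-form solution quoted next.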
Then the solution of Eq.~\eqref{eqn:mean.v.offset} is \begin{equation} w_{s}^{2} = \frac{1}{2\,\epsteincoeff}\left[ (1+4\,\epsteincoeff\,\driftvelmagi{0}^{2})^{1/2}-1\right],\label{eqn:ws.in.epstein} \end{equation} which reduces to $w_{s} \approx \driftvelmagi{0}$ for $|{\bf a}| \ll c_{s}/t_{0}$, or $w_{s} \approx \epsteincoeff^{-1/4}\,\driftvelmagi{0}^{1/2}$ for $|{\bf a}| \gg c_{s}/t_{0}$. With Eq.~\eqref{eqn:ts.epstein} for $t_{s}$ and Eq.~\eqref{eqn:ws.in.epstein} for $w_{s}$, $\delta{t_{s}}$ follows Eq.~\eqref{eqn:ts.general} with \begin{align} \zeta_{s} &= \frac{\gamma+1+2\,\epsteincoeff\,w_{s}^{2}}{2\,(1+\epsteincoeff\,w_{s}^{2})}\ , \ \ \ \ \ \zeta_{w} = \frac{\epsteincoeff\,w_{s}^{2}}{1+\epsteincoeff\,w_{s}^{2}}. \end{align} From this we can derive the relevant instability behavior for different $\gamma$ and $w_{s}$. Note that $\zeta_{s}>0$ and $\zeta_{w}>0$, so the ``decoupling'' instability (which requires $\tilde{\zeta}_{w} < 0$) is not present. In Fig.~\ref{fig:growth.rate.draglaw}, for this case (as well as for Stokes and Coulomb drag), we show values of $\Im(\omega)$ for two values of $\gamma=2/3,\,5/3$ (and a range of $w_{s}$), which determine $\zeta_{s}$, $\zeta_{w}$. The two values of $\gamma$ are chosen to bracket the range where the behavior changes ($\zeta_{s} < \tilde{\zeta}_{w}$ and $\zeta_{s} > \tilde{\zeta}_{w}$) and to be qualitatively representative of cases where cooling (on the mode-crossing time) is either inefficient ($\gamma=5/3$, i.e.\ adiabatic) or efficient ($\gamma=2/3$, approximately valid in the dense/cold ISM). \vspace{-0.5cm} \subsubsection{Super-sonic streaming ($w_{s} \gg 1$)} \label{sec:epstein:supersonic} In the $w_{s} \gg 1$ limit, $\zeta_{s}\rightarrow1 + \mathcal{O}(w_{s}^{-2})$ (independent of $\gamma$) and $\zeta_{w}\rightarrow 1$.
This stabilizes the quasi-sound modes (Eq.~\eqref{eqn:omega.med}) because at high $w_{s}$ the $\zeta_{w}$ term dominates over ($1-\zeta_{s}$), viz., the stronger coupling at high relative velocity stabilizes the modes. The long-wavelength modes (Eq.~\eqref{eqn:longwave.mode}) are present and saturate in the quasi-drift/resonant mode, with growth rate $\Im(\omega) \sim \mu\,[1-(w_{s}\,\cos{\theta})^{-2}]^{-1}\,(1-\zeta_{s}/\tilde{\zeta}_{w})$, which approaches $\Im(\omega)\sim \mu/2$ for $w_{s} \gg 1$ out of resonance. At resonance, we insert the full expressions for $\zeta_{s}$ and $\zeta_{w}$ into Eq.~\eqref{eqn:longwave.mode.midk} and Eq.~\eqref{eqn:omega.resonant}. This gives \begin{align} \label{eqn:resonant.epstein.mid}\omega_{\ast} \approx&\,k\,\omegaZ_{\Re} - \frac{\iimag\,\hat{\mu}}{8}\left(\frac{\tilde{\zeta}_{w}-\zeta_{s}}{\tilde{\zeta}_{w}}\right) + \frac{\iimag\pm1}{2}\left( \left| \frac{\tilde{\zeta}_{w}-\zeta_{s}}{\tilde{\zeta}_{w}}\right| \hat{\mu} \,k \right)^{1/2}, \\ & \frac{\tilde{\zeta}_{w}-\zeta_{s}}{\tilde{\zeta}_{w}} = \frac{1+2\,\epsteincoeff\,w_{s}^{2}-\gamma}{2+4\,\epsteincoeff\,w_{s}^{2}} = \frac{1}{2}+\mathcal{O}(w_{s}^{-2}),\nonumber \\ & \omegaZ_{\Re}= 1+ \frac{3\,\hat{\mu}}{16}\,\left( 1 + \mathcal{O}(w_{s}^{-2}) \right),\nonumber \end{align} in the ``mid-$k$'' regime (we show the lowest-order terms in $w_{s}^{-1}$ for simplicity), and \begin{align} \label{eqn:resonant.epstein} \omega_{\ast} \approx&\, k - \iimag\,\omegaZ + (\iimag\,\sqrt{3}+1)\,\left( \frac{|\Theta|\,\mu\,k}{16} \right)^{1/3} \\ \nonumber &\Theta =\, \frac{1-\gamma+2\,\epsteincoeff}{2\,(1 + \epsteincoeff\,w_{s}^{2})} = \frac{1-\gamma+2\,\epsteincoeff}{2\,\epsteincoeff\,w_{s}^{2}} + \mathcal{O}(w_{s}^{-4}), \\ \nonumber & \omegaZ = -\frac{2\,\epsteincoeff\,w_{s}^{2}}{3\,(1-\gamma+2\,\epsteincoeff)} +
\mathcal{O}(w_{s}^{0}), \end{align} in the ``high-$k$'' regime. We see that in the mid-$k$ regime the growth rate is mostly independent of $w_{s}$ and $\gamma$, while in the high-$k$ regime the growth rate decreases, ${\Im}(\omega_{\ast}) \propto w_{s}^{-2/3}$, at large $w_{s}$. The dependence on $\gamma$ is weak. At mid $k$, we see from Eq.~\eqref{eqn:resonant.epstein.mid} that the growth rate declines as we approach the point where $\tilde{\zeta}_{w}-\zeta_{s} =0$, which occurs at $w_{s}^{2} = 64\,(\gamma-1)/(9\,\pi\,\gamma)$. This implies that unless the gas equation of state is very stiff -- specifically, $\gamma>64/(64-9\,\pi)\approx 1.8$ -- this ``stable point'' does not exist for $w_{s}>1$ (a necessary condition for resonant modes). Even for $\gamma \gtrsim 1.8$, the point of stability occurs only at a specific $w_{s}$, and so is unlikely to be of physical significance. At high $k$, we see somewhat similar behavior, with the growth rate declining as $\gamma$ approaches the point where $\Theta =0$ (and $\omegaZ$ diverges), at $\gamma = 64/(64-9\,\pi)\approx 1.8$. In fact, at exactly this point our series expansion breaks down (since $\omegaZ$ diverges); a resonant mode still exists, but with a growth rate that increases more slowly with $k$: \begin{equation} \label{eqn:resonant.epstein.high.reduced}\omega_{\ast} = k + \left(\sin\frac{\pi}{8}+ \iimag\,\cos\frac{\pi}{8} \right)\,\left(\frac{(w_{s}^{2}-1)\,\epsteincoeff\,\mu\, k}{2\,(1+\epsteincoeff\,w_{s}^{2})}\right)^{1/4}.\end{equation} Again, it seems unlikely that this specific point, $\gamma\approx 1.8$, is of particular physical significance (and in any case, the system is still unstable, just with the reduced growth rate of Eq.~\eqref{eqn:resonant.epstein.high.reduced}). \vspace{-0.5cm} \subsubsection{Sub-Sonic ($w_{s} \ll 1$)} \label{sec:epstein.subsonic} Now consider $w_{s}\ll1$.
In this limit, $\zeta_{s}= (\gamma+1)/2+ \mathcal{O}(w_{s}^{2})$ and $\zeta_{w}= \epsteincoeff\,w_{s}^{2}+ \mathcal{O}(w_{s}^{4})$; i.e., the velocity-dependent terms in $t_{s}$ become second-order, as expected. For $w_{s} < 1$ the resonant and quasi-sound modes are stabilized. We also see that the type of unstable mode depends on the value of $\gamma$: if $\gamma > 1$, then $\zeta_{s}/\tilde{\zeta}_{w} \approx (\gamma+1)/2> 1$, which implies the ``subsonic'' mode at low $k$ from Eq.~\eqref{eqn:omega.lowk} is stabilized, but the ``quasi-drift'' mode from Eq.~\eqref{eqn:omega.slow} is unstable; if $\gamma< 1$, the ``quasi-drift'' mode at $k \gtrsim 1$ becomes damped at high $k$, and the ``subsonic'' low-$k$ expression from Eq.~\eqref{eqn:omega.lowk} is unstable. The ``quasi-drift'' modes, relevant for $\gamma \gtrsim 1$, have growth rates that increase with $k$ for $k \ll \hat{\mu}$ (the long-wavelength mode; Eq.~\eqref{eqn:longwave.mode}), then saturate at a constant maximum for $k\gtrsim 1$ (i.e.\ all modes with wavelengths shorter than the length scale $\sim c_{s}\,\langle t_{s} \rangle$ have similar growth rates). For large $k$ and $w_{s} \ll 1$, the growth rate from Eq.~\eqref{eqn:omega.slow} is $\Im(\omega)\approx w_{s}^{2}\,\cos^{2}{\theta}\,\mu\,(\gamma-1)/2$. The ``subsonic'' mode (Eq.~\eqref{eqn:omega.lowk}), relevant for very soft equations of state with $\gamma\lesssim1$, has a maximum growth rate $\Im(\omega) \approx w_{s}^{2}\,\mu\,(\tilde{\zeta}_{w}-\zeta_{s})/2 \approx w_{s}^{2}\,\mu\,(1-\gamma)/4$, which again occurs for parallel modes. The mode is stabilized at short wavelengths, $k\gtrsim (1+\mu)\,w_{s}\,\sqrt{1-\gamma}$. Overall, we see that for {\em all} $\gamma$, there is an unstable parallel mode at low $w_{s} \ll 1$, with maximum growth rate $\sim w_{s}^{2}\,\mu$.
The difference is that for $\gamma>1$ the unstable modes are quasi-drift modes, which are unstable at all $k$ and propagate with velocity ${\bf w}_{s}$ when $k\gg 1$; for $\gamma<1$ the instability exists only for long-wavelength modes, $k \lesssim w_{s}$, which propagate with velocity $\pm c_{s}\,\hat{{\bf w}}_{s}/\sqrt{1+\mu}$. Again there is one critical point, where $\tilde{\zeta}_{w}-\zeta_{s}=0$, or $w_{s}^{2} = 64\,(\gamma-1)/(9\,\pi\,\gamma)$, at which the standard long-wavelength instability vanishes. This occurs only for a specific $w_{s}$ at a given $\gamma$, so it is unlikely to be of physical significance for most $\gamma$. Again, at this point there is in fact still an instability, albeit with a reduced growth rate (see footnote~\ref{foot:when.instab.vanishes}, near Eq.~\eqref{eqn:longwave.mode}; the instability only truly vanishes at $\zeta_{w}=0$, $\zeta_{s}=1$ exactly). However, one does approach this vanishing point for $\gamma=1$ as $w_{s}\rightarrow0$. This leads to a cautionary note: it is common in some sub-sonic ($w_{s} \ll c_{s}$) applications to drop the term in $|{\bf v}-{\bf u}|^{2}/c_{s}^{2}$ in Eq.~\eqref{eqn:ts.epstein} (i.e.\ simply taking $t_{s} \propto 1/\rho\,c_{s}$), for simplicity. If the gas is also isothermal ($\gamma=1$), this would give $\zeta_{w}=0$, $\zeta_{s}=1$ exactly, and the instabilities would vanish for $w_{s} \ll 1$. However, this can be misleading: although the term in $|{\bf v}-{\bf u}|^{2}/c_{s}^{2}$ is small, it does give rise to a non-zero (albeit small) growth rate. Moreover, if the equation of state is even slightly non-isothermal (e.g.\ $\gamma=0.9,\,1.1$), the instability is not suppressed strongly.
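The location of this critical point follows directly from the Epstein coefficients above, whose numerators give

```latex
\begin{align}
\nonumber \tilde{\zeta}_{w}-\zeta_{s} \propto 1+2\,\epsteincoeff\,w_{s}^{2}-\gamma = 0
\ \ \Rightarrow\ \ w_{s}^{2} = \frac{\gamma-1}{2\,\epsteincoeff} = \frac{64\,(\gamma-1)}{9\,\pi\,\gamma},
\end{align}
```

which has a real solution only for $\gamma>1$, and a super-sonic solution ($w_{s}>1$) only for $\gamma > 64/(64-9\,\pi) \approx 1.8$, consistent with the discussion of the resonant modes in \S~\ref{sec:epstein:supersonic}.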
Also, we caution that the appropriate equation of state here is that relevant under local, small-scale compression by dust and sound waves (not necessarily the same as the effective equation of state of e.g.\ a vertical atmosphere). \vspace{-0.5cm} \subsection{Stokes Drag} \label{sec:stokes} The expression for drag in the Stokes limit -- which is valid for an intermediate range of grain sizes, when $R_{d} \gtrsim (9/4)\,\lambda_{\rm mfp}$ but $\mathrm{Re}_{\mathrm{grain}}\equiv R_{d}\,|{\bf w}_{s}|/(\lambda_{\mathrm{mfp}}\, c_{s})\lesssim 1$ -- is given by multiplying the Epstein expression (Eq.~\eqref{eqn:ts.epstein}) by $(4\,R_{d}) / (9\,\lambda_{\rm mfp})$. Here $\lambda_{\rm mfp} \propto 1/(\rho\,\sigma_{\mathrm{gas}})$ is the gas mean free path, $\sigma_{\mathrm{gas}}$ is the gas collision cross section, and $\mathrm{Re}_{\mathrm{grain}}$ is the Reynolds number of the streaming grain. We can solve implicitly for the dust streaming velocity ${\bf w}_{s}$, which is the same as in the Epstein case (since $t_{s}$ depends on $|{\bf v}-{\bf u}|$ in the same manner). However, the absolute value of $t_{s}$ only determines our units, and the behavior of interest depends only on the coefficients $\zeta_{s}$ and $\zeta_{w}$. Since $R_{d}$ is a material property of the dust and $\sigma_{\rm gas}$ an intrinsic property of the gas, the important aspect of the Stokes drag law is that it multiplies the Epstein law by one power of $\rho$. Although it is certainly possible that $\sigma_{\rm gas}$ might depend on density and/or temperature, lacking a specific physical model for this we take it to be constant for now. This simply gives $\zeta_{s}\rightarrow \zeta_{s} - 1$, relative to the scalings for Epstein drag.
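To make the origin of this shift explicit (using the sign convention implied by the Epstein coefficients above, i.e.\ $\zeta_{s} = -\partial \ln t_{s}/\partial \ln \rho$ at fixed $|{\bf v}-{\bf u}|$): since $\lambda_{\rm mfp}\propto 1/(\rho\,\sigma_{\rm gas})$ with constant $\sigma_{\rm gas}$,

```latex
\begin{align}
\nonumber t_{s}^{\rm Stokes} = t_{s}^{\rm Epstein}\,\frac{4\,R_{d}}{9\,\lambda_{\rm mfp}}
\propto t_{s}^{\rm Epstein}\,\rho
\ \ \Rightarrow\ \ \zeta_{s}^{\rm Stokes} = \zeta_{s}^{\rm Epstein} - 1,
\end{align}
```

while the dependence on $|{\bf v}-{\bf u}|$, and hence $\zeta_{w}$, is unchanged.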
When $w_{s} \ll 1$ (cf.\ \S~\ref{sec:epstein.subsonic} for Epstein drag), $\zeta_{s}= (\gamma-1)/2+ \mathcal{O}(w_{s}^{2})$ and $\zeta_{w}= \epsteincoeff\,w_{s}^{2}+ \mathcal{O}(w_{s}^{4})$, and the quasi-sound and resonant modes are stabilized (because $w_{s}<1$). The quasi-drift (high-$k$) mode is stabilized for $1-\zeta_{s}/\tilde{\zeta}_{w} \approx (3-\gamma)/2 > 0$, viz., so long as $\gamma < 3$ (which is expected in almost all physical situations) the quasi-drift mode is damped. However, for all $\gamma < 3$, the subsonic low-$k$ mode (Eq.~\eqref{eqn:omega.lowk}) is unstable for $k \lesssim w_{s}$, with maximum growth rate $\Im(\omega)\approx w_{s}^{2}\,\mu\,(3-\gamma)/4$. This is larger (smaller) than the Epstein-drag growth rate for $\gamma <5/3$ ($\gamma>5/3$). In the limit $w_{s} \gg 1$, the Stokes drag expression cannot formally apply, because $R_{d}>\lambda_{\mathrm{mfp}}$ then implies $\mathrm{Re}_{\mathrm{grain}}=R_{d}\,|{\bf w}_{s}|/(\lambda_{\mathrm{mfp}}\,c_{s})\gtrsim 1$. When this is the case -- either because $w_{s}$ is large or (more commonly) because $R_{d}$ is large -- there is no longer a simple drag law, because the grain develops a turbulent wake. This tends to increase the drag above the Stokes estimate (the turbulence increases the drag), with a stronger and stronger effect as $\mathrm{Re}_{\mathrm{grain}}$ increases. Given some empirically determined scaling of $t_{s}$ with $R_{d}$, $\rho$, $w_{s}$, etc.\ (see, e.g., \citealt{clair.hamielec:sphere.drag} for subsonic drag), one could still qualitatively consider such turbulent drag within the framework above, with the properties of the turbulence determining $\zeta_{s}$ and $\zeta_{w}$.
We do not do this here, but note that because $\mathrm{Re}_{\mathrm{grain}}$ increases with $w_{s}$ and $\rho$ (through $\lambda_{\mathrm{mfp}}$), we expect $t_{s}$ to decrease with $w_{s}$ and $\rho$, viz., $\zeta_{s}>0$ and $\zeta_{w}>0$. The general scalings are thus likely similar to the Epstein case, but with a larger $\zeta_{w}$ for $w_{s}\ll 1$, because the velocity dependence of the drag remains significant even for subsonic streaming. Of course, we can still simply calculate what the mode growth rates would be if the usual Stokes expression applied even for $w_{s} \gtrsim 1$. This is shown in Fig.~\ref{fig:growth.rate.draglaw}, for the sake of completeness. \vspace{-0.5cm} \subsection{Coulomb Drag} \label{sec:coulomb} The standard expression\footnote{Again, Eq.~\eqref{eqn:drag.coulomb} is a polynomial approximation to a more complex dependence on $|{\bf v}-{\bf u}|$, given in \citet{draine.salpeter:ism.dust.dynamics}. However, using this approximation versus the full expression makes no important difference to our results.} (in physical units) for $t_{s}$ in the Coulomb drag limit is \begin{align} \label{eqn:drag.coulomb} t_{s} &= \sqrt{\frac{\pi\,\gamma}{2}}\,\frac{\bar{\rho}_{d}\,R_{d}}{\rho\,c_{s}\,\ln{\Lambda}}\,\left(\frac{k_{B}\,T}{z_{i}\,e\,U} \right)^{2}\,\left[ 1 + \coulombcoeff\,\frac{|{\bf v}-{\bf u}|^{3}}{c_{s}^{3}} \right] \\ \nonumber \Lambda &\equiv \frac{3\,k_{B}\,T}{2\,R_{d}\,z_{i}\,e^{2}\,U}\,\sqrt{\frac{m_{i}\,k_{B}\,T}{\pi\,\rho}} \ \ \ \ \ , \ \ \ \ \ \coulombcoeff \equiv \sqrt{\frac{2\,\gamma^{3}}{9\,\pi}} \end{align} where $\ln{\Lambda}$ is the Coulomb logarithm, $e$ is the electron charge, $z_{i}$ is the mean gas ion charge, $m_{i}$ is the mean molecular weight, $T\propto \rho^{\gamma-1}$ is the gas temperature, and $U$ is the electrostatic potential of the grains, $U\sim Z_{\rm grain}\,e/R_{d}$ (where $Z_{\rm grain}$ is the grain charge).
The behavior of $U$ is complicated and depends on a wide variety of environmental factors: in the different regimes considered by \citet{draine.salpeter:ism.dust.dynamics}, there are regimes where $U\sim$\,constant and others where $U \propto Z_{\rm grain} \propto T$; we therefore parameterize the dependence by $U\propto T^{\Gamma}$. With this ansatz, we obtain \begin{align} \nonumber \zeta_{s} &= 1 +2\,(\gamma-1)\,\Gamma - \frac{3\,(\gamma-1)}{2\,(1+\coulombcoeff\,w_{s}^{3})} - \frac{1-(3-2\,\Gamma)\,(\gamma-1)}{2\,\ln{\Lambda}}, \\ \zeta_{w} &= -\frac{3\,\coulombcoeff\,w_{s}^{3}}{1+\coulombcoeff\,w_{s}^{3}} < 0. \end{align} For relevant astrophysical conditions, $\ln{\Lambda} \sim 15-20$, so the $\ln{\Lambda}$ term in $\zeta_{s}$ is unimportant. In general, Coulomb drag is sub-dominant to Epstein or Stokes drag under astrophysical conditions when the direct effects of magnetic fields on grains (i.e., Lorentz forces) are not important. Nonetheless, the qualitative structure of the scaling produces similar features to the Epstein and Stokes drag laws, and we consider it here for completeness. In fact, grains influenced by Coulomb drag are significantly ``more unstable'' than those influenced by Epstein or Stokes drag. For $w_{s} \ll 1$, $\zeta_{s}\rightarrow [(3\,\gamma-4) + (5-3\,\gamma)\,\ln{\Lambda}]/(2\,\ln{\Lambda}) \approx (5-3\,\gamma)/2$ if $\Gamma=0$, and $\zeta_{s}\rightarrow [(\gamma-2) + (1+\gamma)\,\ln{\Lambda}]/(2\,\ln{\Lambda}) \approx (1+\gamma)/2$ if $\Gamma=1$. Since $\tilde{\zeta}_{w}\rightarrow1$, the ``quasi-drift'' mode is unstable if $\zeta_{s} > 1$ (for $\Gamma=0$ this requires $\gamma < (-4 + 3\,\ln{\Lambda})/(3\,(-1 + \ln{\Lambda})) \approx 0.98$; for $\Gamma=1$ this requires $\gamma > (2+\ln{\Lambda})/(1+\ln{\Lambda}) \approx 1.05$).
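The limiting values quoted above are easy to verify numerically. As a sketch (ours, not part of the paper; the function name and the default $\ln\Lambda=17$ are illustrative choices), evaluating $\zeta_{s}$ and $\zeta_{w}$ from the expressions above:

```python
import math

def coulomb_zetas(gamma, ws, Gamma, ln_lambda=17.0):
    """zeta_s and zeta_w for the Coulomb drag law above (ws = |w_s|/c_s)."""
    a1 = math.sqrt(2.0 * gamma**3 / (9.0 * math.pi))  # the coefficient sqrt(2*gamma^3/(9*pi))
    zeta_s = (1.0 + 2.0 * (gamma - 1.0) * Gamma
              - 3.0 * (gamma - 1.0) / (2.0 * (1.0 + a1 * ws**3))
              - (1.0 - (3.0 - 2.0 * Gamma) * (gamma - 1.0)) / (2.0 * ln_lambda))
    zeta_w = -3.0 * a1 * ws**3 / (1.0 + a1 * ws**3)
    return zeta_s, zeta_w

# Subsonic limit, Gamma = 0: zeta_s -> (5 - 3*gamma)/2 up to the
# O(1/ln Lambda) correction, and zeta_w -> 0:
zs, zw = coulomb_zetas(gamma=5.0 / 3.0, ws=1e-4, Gamma=0)

# Supersonic limit: zeta_w -> -3, the value driving the decoupling instability:
zs_hi, zw_hi = coulomb_zetas(gamma=5.0 / 3.0, ws=100.0, Gamma=0)
```

The same function reproduces the quoted stability thresholds for the quasi-drift mode by scanning $\gamma$ at fixed $\Gamma$.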
As noted above for the Epstein case (\S~\ref{sec:epstein.subsonic}), because $\zeta_{w}\rightarrow0$ at small $w_{s}$, the scaling of the ``subsonic'' low-$k$ mode is essentially reversed from the ``quasi-drift'' high-$k$ mode: when the ``quasi-drift'' mode is stable at high-$k$ ($\zeta_{s} < 1$) the ``subsonic'' mode is unstable at low-$k$, and when the ``quasi-drift'' mode is unstable ($\zeta_{s} > 1$) the ``subsonic'' mode is stable. In either case, whichever of the two is unstable has growth rate $\Im(\omega)\sim w_{s}^{2}\,\mu\,|\zeta_{s}|/2$. For $w_{s} \gg 1$, the drag force {\em decreases} rapidly for $|{\bf v}-{\bf u}| \gg c_{s}$ (i.e.\ $\zeta_{w} \lesssim -1$ when $w_{s} \gg 1$). In this regime, one never expects Coulomb drag to dominate over Epstein drag (which becomes more tightly-coupled at high $w_{s}$), and in fact Coulomb drag alone does not allow self-consistent solutions for the equilibrium ${\bf w}_{s}$ in Eq.~\eqref{eqn:mean.v.offset} without an additional Epstein or Stokes term when $w_{s} \gg 1$, but we consider the case briefly for completeness. We see that $\zeta_{s} \approx 1$ for $\Gamma=0$, and $\zeta_{s}\approx 2\,\gamma-1$ for $\Gamma=1$. More importantly, $\zeta_{w} \rightarrow -3$. This produces the fast-growing ``decoupling instability'' (\S~\ref{sec:decoupling}), which affects {\em all} wavenumbers and has a growth rate ${\Im}(\omega) \approx -\tilde{\zeta}_{w}\,(1+\mu) \approx 2\,(1+\mu)$. These modes arise from decoupling of the gas and dust: if the dust starts to move faster relative to the gas, $t_{s}$ increases (the coupling becomes weaker), so the terminal/relative velocity increases further, and so on.
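The runaway feedback just described can be illustrated with a toy integration (ours, not from the paper; units and the coefficient $a_{1}\approx0.57$, i.e.\ $\sqrt{2\gamma^{3}/9\pi}$ for $\gamma=5/3$, are illustrative). For a Coulomb-like stopping time $t_{s}\propto 1+a_{1}w^{3}$, the drag deceleration $w/t_{s}$ is bounded, so a sufficiently strong driving acceleration has no equilibrium drift and $w$ runs away:

```python
def drag_accel(w, a1=0.57):
    """Drag deceleration w/t_s for a Coulomb-like law, t_s = 1 + a1*w^3
    (in units of the w -> 0 stopping time and the sound speed)."""
    return w / (1.0 + a1 * w**3)

# The drag deceleration has a finite maximum (near w ~ 1):
max_drag = max(drag_accel(0.01 * i) for i in range(1, 2000))

# Drive with an acceleration exceeding that maximum: dw/dt = a - w/t_s(w)
# never reaches zero, so the drift speed grows without bound -- there is
# no self-consistent equilibrium w_s without an extra Epstein/Stokes term.
a, w, dt = 2.0 * max_drag, 1.0, 1e-3
for _ in range(20000):
    w += (a - drag_accel(w)) * dt
# after t = 20 stopping times, w has run far past any candidate equilibrium
```

This is only the equilibrium-level statement of the decoupling feedback; the full linear growth rate quoted above comes from the perturbed equations.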
If we ignore the decoupling mode, we see that each of the other modes we have discussed is still present: the high-$k$ resonant mode (Eq.~\eqref{eqn:omega.resonant}) has $\Theta=(4-3\,\gamma)/(2\,\ln{\Lambda})$ for $\Gamma=0$ and $\Theta\approx2\,(1-\gamma)$ for $\Gamma=1$. \begin{figure*} \plotsidesize{figs/dwind_instab_vs_alpha_vs_mu.pdf}{0.95} \vspace{-0.25cm} \caption{Growth rates of the most-rapidly-growing unstable mode as a function of wavenumber and drift velocity, as Fig.~\ref{fig:growth.rate.demo}, for different dust-to-gas ratios $\mu=0.001,\ 0.01,\ 1,\ 100$ (the $\mu=0.1$ case is in Fig.~\ref{fig:growth.rate.demo}). For simplicity we take a constant drag coefficient ($\zeta_{s}=\zeta_{w}=0$, as Fig.~\ref{fig:growth.rate.demo}), and marginalize over angle at each $\kappa_{\|}$. As shown in \S~\ref{sec:general.modes}, the dependence on $\mu$ at a given $\kappa_{\|}$ is quite weak. At low $\mu \ll 1$, the low and high-$k$ growth rates scale $\propto \mu^{1/3}$, with a slightly stronger $\propto \mu^{1/2}$ dependence around $\kappa_{\|}\sim 1$. At large $\mu \gtrsim 1$, the low and intermediate-$k$ growth rates become independent of $\mu$ (because they scale with $\hat{\mu}\equiv\mu/(1+\mu) \rightarrow 1$ for large $\mu$); the high-$k$ growth rate continues to increase weakly, as $\mu^{1/3}$. In the sub-sonic ($w_{s}<1$) case, however, the maximum wavenumber where the growth rate either saturates or the mode becomes stable increases with $\mu$, so that the maximum growth rate (marginalizing over $k$) increases roughly $\propto \mu^{2/3}$. For the super-sonic ($w_{s} > 1$) case, all wavelengths are unstable independent of $\mu$, so there is no such dependence.
\vspace{-0.25cm} \label{fig:growth.rate.mu}} \end{figure*} \vspace{-0.5cm} \section{Non-Linear Behavior \&\ Turbulence} \label{sec:nonlinear} The non-linear behavior of the coupled dust-gas system is complex and chaotic, and will be studied in future work with numerical simulations (Moseley et al., in prep.). Here, we briefly speculate on some possible saturation mechanisms of the acoustic RDI and subsonic instabilities. For $w_{s} \ge 1$, the resonant mode at the shortest wavelengths will grow fastest, with the dust density aligning locally into crests at the phase peaks with orientation $\cos{\theta} = \pm 1/w_{s}$. These will launch small-scale perturbations in the transverse directions in the gas. Because these modes are short-wavelength, we do not expect them to be coherent on large scales, so this will drive small-scale turbulence in the gas in the transverse directions, while in the $\hat{{\bf w}}_{s}$ direction, the modes will be stretched by the drift. For $w_{s} < 1$, the modes grow more slowly and, depending on $\zeta_{s}$ and $\zeta_{w}$ (see \S~\ref{sec:subsonic.long.wavelenghts}), either saturate to a constant growth rate or turn over above a critical $k \gtrsim w_{s}$. Thus, most of the power on large scales will be in modes of order this wavelength ($k^{-1}\sim c_{s}^{2}/(\mu\,|{\bf a}|)$). If $\mu\ll1$, dust will go strongly non-linear before the gas does, but eventually the non-linear terms will likely lead to turbulence in the gas and dust, at least for $\mu$ not too small. Gas turbulence can then enhance dust-to-gas fluctuations (see e.g.\ numerical experiments with dust in super-sonic turbulence in \citealt{hopkins.2016:dust.gas.molecular.cloud.dynamics.sims,lee:dynamics.charged.dust.gmcs}). Eventually sharp dust filaments will form, and as the modes grow beyond this point, dust trajectories will cross and the fluid approximation for the dust will break down.
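As a concrete illustration of this geometry (our sketch, not from the paper; function names are ours), the resonant-mode angle and the subsonic outer scale can be computed directly:

```python
import math

def resonance_angle_deg(ws):
    """Angle between the wavevector and the drift for the resonant mode,
    cos(theta) = 1/w_s; defined only for supersonic drift (w_s >= 1)."""
    if ws < 1.0:
        raise ValueError("no acoustic resonance for subsonic drift")
    return math.degrees(math.acos(1.0 / ws))

def subsonic_outer_scale(cs, mu, a):
    """Order-of-magnitude wavelength k^-1 ~ c_s^2/(mu*|a|) carrying most of
    the large-scale power when w_s < 1."""
    return cs**2 / (mu * a)

# For w_s = 1 the resonance is parallel to the drift; it rotates toward
# perpendicular as w_s grows:
angle_marginal = resonance_angle_deg(1.0)   # 0 degrees
angle_fast = resonance_angle_deg(2.0)       # 60 degrees
```

This makes explicit why the supersonic modes are oblique (and increasingly perpendicular at large $w_{s}$), while the subsonic power concentrates at a single drift-set scale.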
Rayleigh-Taylor type secondary instabilities will likely appear, as regions with higher gas density are accelerated more rapidly, while those without dust are not dragged efficiently. It also seems possible that for $\mu\ll1$ and/or $w_{s}$ not very large, the modes saturate in a laminar way (e.g., by changing shape, or if the dust fluid approximation breaks down). We can crudely guess the saturation amplitude of the non-linear turbulence by comparing the energy input (per unit mass) from the imposed acceleration (without including the bulk acceleration of the system), \begin{equation} \frac{dE_{\mathrm{accel}}}{dm\,dt} \sim \frac{d(m_{\rm dust}\,v_{\rm dust-gas}^{2})/dt}{m_{\rm dust} + m_{\rm gas}} \sim \frac{m_{\rm dust}\,\langle {\bf v}_{\rm dust-gas} \rangle\cdot {\bf a}}{m_{\rm dust} + m_{\rm gas}} \sim \frac{\mu\,|{\bf w}_{s}|^{2}}{(1+\mu)\,\langle t_{s}\rangle},\label{eqn:forcing.energy.input}\end{equation} to the specific energy decay rate of turbulence \begin{equation} \frac{dE_{\mathrm{turb}}}{dm\,dt} \sim -\frac{v_{\rm eddy}^{2}}{t_{\rm eddy}} \sim -\frac{\delta v_{\rm sat}^{3}}{\lambda},\label{eqn:turb.energy.diss}\end{equation} where $\lambda$ is the driving scale of the turbulence. Equating Eq.~\eqref{eqn:forcing.energy.input} and Eq.~\eqref{eqn:turb.energy.diss} gives $\delta v_{\rm sat} \sim \,(\hat{\mu}\,|{\bf w}_{s}|^{2}\,\lambda/\langle t_{s}\rangle)^{1/3}$. For each regime of the RDI, we can then equate the turbulent dissipation rate $t_{\mathrm{diss}}^{-1}\sim t_{\mathrm{eddy}}^{-1}\sim v_{\mathrm{eddy}}/\lambda\sim (\mu |{\bf w}_{s}|^{2}/\langle t_{s}\rangle)^{1/3}\lambda^{-2/3}$ to the growth rate $\Im(\omega)$, which should (in principle) allow for the estimation of a characteristic scale and saturation amplitude in the resulting turbulence.
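A minimal numerical sketch of this estimate (ours; all quantities in arbitrary code units, function names illustrative) makes the scalings explicit:

```python
def dv_sat(mu, ws, lam, ts):
    """Saturation amplitude dv ~ (mu_hat * w_s^2 * lambda / <t_s>)^(1/3),
    from balancing the forcing and dissipation rates above."""
    mu_hat = mu / (1.0 + mu)
    return (mu_hat * ws**2 * lam / ts) ** (1.0 / 3.0)

def t_diss_inv(mu, ws, lam, ts):
    """Turbulent dissipation rate 1/t_eddy ~ dv_sat/lambda at scale lambda."""
    return dv_sat(mu, ws, lam, ts) / lam

# The estimate scales as dv ~ lambda^(1/3) at fixed forcing, so the
# dissipation rate falls as lambda^(-2/3); e.g., increasing the driving
# scale by a factor of 8 doubles dv and quarters t_diss^-1:
ratio = dv_sat(0.01, 1.0, 8.0, 1.0) / dv_sat(0.01, 1.0, 1.0, 1.0)  # = 2
```

Comparing `t_diss_inv` against the growth-rate scalings of each regime reproduces the (absence of a) characteristic scale discussed next.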
However, one finds that: (i) in the low-$k$ regime, with $\Im(\omega)\sim (\hat{\mu}/\langle t_{s}\rangle )^{1/3}(|{\bf w}_{s}| k)^{2/3}$, the two are identical and there is no obvious characteristic $\lambda$; (ii) in the mid-$k$ regime, with $\Im(\omega)\sim (\hat{\mu}\, c_{s} k/\langle t_{s}\rangle)^{1/2}$, the characteristic scale is $\lambda/(c_{s}\langle t_{s}\rangle )\sim w_{s}^{4}\hat{\mu}^{-1}$, which is outside of the range of validity of the mid-$k$ regime; and (iii) in the high-$k$ regime, with $\Im(\omega)\sim (\hat{\mu}\,c_{s} k/\langle t_{s}\rangle^{2})^{1/3}$, the characteristic scale is $\lambda/(c_{s}\langle t_{s}\rangle )\sim w_{s}^{2}$, which is outside of the range of validity of the high-$k$ regime (if $\hat{\mu}<1$). Thus, we see that there is no obvious way for the system to choose a scale for resonant modes in \emph{any} wavelength regime. What we instead expect is that turbulence will begin on small scales and grow to larger and larger $\lambda$, up to the scale of the system (if given sufficiently long time periods). One might also expect that the characteristic scale would increase in time, in some way proportional to the growth rate at a given $\lambda$.
This suggests that $\lambda\sim t^{3}$ ($\delta v \sim t$) at early times (with the instability growing in the high-$k$ regime), $\lambda\sim t^{2}$ ($\delta v \sim t^{2/3}$) at intermediate times (in the mid-$k$ regime), then slowing to $\lambda \sim t^{3/2}$ ($\delta v \sim t^{1/2}$) at longer times (in the long-wavelength regime).\footnote{Of course, actually resolving this shift in simulations would generally require an unfeasibly large dynamic range.} This qualitative behavior -- viz., turbulence that moves to larger and larger scales as a function of time -- is observed in simulations of cosmic-ray-driven instabilities, which have some similar characteristics to the dust-gas instabilities studied here (see, e.g., \citealt{2009ApJ...694..626R,2017MNRAS.469.1849M}). \vspace{-0.5cm} \section{Scales Where Our Analysis Breaks Down} \label{sec:breakdown} We now briefly review the scales where our analysis breaks down. \begin{enumerate} \item{\bf Non-Linearity \&\ Orbit-Crossing:} If there is sufficiently sharp structure in the velocity or density fields, the dust trajectories become self-intersecting and the fluid approximation is invalid (for dust). In this limit numerical simulations must be used to integrate particle trajectories directly. This should not occur in the linear regime (see App.~A of \citealt{Jacquet:2011cy} for more discussion). \item{\bf Smallest Spatial Scales:} At sufficiently short wavelengths (high $k$) approaching the gas mean-free-path, dissipative effects will be important.\footnote{More precisely, the fluid viscosity is important when $\omega\, u \sim \nu_{\mathrm{vis}} k^{2}u$, where $u$ is the perturbed gas velocity, and $\nu_{\mathrm{vis}}\sim c_{s}\lambda_{\rm mfp}^{\rm gas}$ is the kinematic viscosity. For $\omega\sim c_{s} k$, as is the case for the acoustic RDI here, we find that viscosity is important when $k\sim1/ \lambda_{\rm mfp}^{\rm gas}$.
} For ionized gas, this scale is $\lambda_{\rm mfp}^{\rm gas} \sim 10^{12}\,{\rm cm}\,(T/10^{4}\,{\rm K})^{2}\,(n_{\rm gas}/{\rm cm^{-3}})^{-1}$. If we assume Epstein drag with modest $w_{s}\sim 1$, this gives a dimensionless $\kappa_{\rm max} \sim (2\pi\,c_{s}\,\langle t_{s} \rangle/\lambda_{\rm mfp}) \sim 10^{9}\,(R_{d}/\mu\,{\rm m})\,(T/10^{4}\,{\rm K})^{-2} \gg 1$. In the dust, the fluid approximation breaks down on scales comparable to the dust-particle separation, $\lambda_{\mathrm{sep}}^{\mathrm{dust}}\sim 10^{5}\,{\rm cm}\,(R_{d}/{\rm \mu\,m})\,(n_{\rm gas}/1\,{\rm cm^{-3}})^{-1/3}\,(\mu/0.01)^{-1/3}$, which is much smaller than $\lambda_{\rm mfp}^{\rm gas}$ under most astrophysical conditions. Because each of these minimum scales (for the gas and the dust) is small, very small wavelengths (e.g., up to $\kappa_{\|} \sim k_{\rm max} c_{s} \langle t_{s}\rangle \sim 10^{9}$ in Figs.~\ref{fig:growth.rate.demo}, \ref{fig:growth.rate.draglaw}, and \ref{fig:growth.rate.mu}) are astrophysically relevant. \item{\bf Largest Spatial Scales:} At low $k$, we eventually hit new scale lengths (e.g.\ the gas pressure scale-length). The physical scale where $\kappa_{\|}\sim 1$, i.e.,\ where $k^{-1}\sim c_{s}\,\langle t_{s} \rangle$, can be large. For example, with Epstein drag at $w_{s}\sim 1$ this is $k^{-1} \sim 10^{20}\,{\rm cm}\,(R_{d}/\mu\,{\rm m})\,(n_{\rm gas}/{\rm cm^{-3}})^{-1}$. For relatively low-density starburst regions or GMCs affected by massive stars, this is only $\sim 100$ times smaller than the system scale, so the long-wavelength instability ($k c_{s}\,\langle t_{s} \rangle \ll \mu$) will likely require a global analysis.
However, in e.g.\ cool stars the densities are much higher and the scales correspondingly smaller; e.g., for $\rho \sim \rho_{-12}\,10^{-12}\,{\rm g\,cm^{-3}}$ we obtain $k_{\rm min} c_{s}\,\langle t_{s} \rangle \sim 10^{-5}\,(R_{\rm min}/100\,R_{\rm sun})^{-1}\,(R_{d}/\mu\,{\rm m})\,\rho_{-12}^{-1}$ (see \S~\ref{sec:applications} for more details). \item{\bf Maximum Timescales}: Dust with speed $|{\bf w}_{s}|$ will drift through a system of size $L_{0}$ on a timescale $t_{\rm drift} \sim L_{0} / |{\bf w}_{s}|$. An instability must grow faster than this to be astrophysically relevant. In App.~\ref{sec:hydrostatic.generalized} we show that this is equivalent to the condition for background dust stratification terms to be unimportant. In units of the stopping time, the relevant timescale is $L_{0}/(|{\bf w}_{s}|\,\langle t_{s} \rangle) = w_{s}^{-1} \, L_{0}/(c_{s}\,\langle t_{s} \rangle)$ -- i.e.\ the timescale criterion is closely related to the requirement that we consider modes smaller than the largest spatial scales. Another maximum timescale is set by the time for the equilibrium solution (dust+gas) to be accelerated out of the system of size $\sim L_{0}$, i.e.\ $t_{\rm acc} \sim (2\,L_{0}/|\hat{\mu}\,{\bf a}|)^{1/2}$. Noting $|{\bf w}_{s}|\sim |{\bf a}|\,t_{s}/(1+\mu)$, we have $t_{\rm acc}/t_{s} \sim \hat{\mu}^{-1/2}\,(t_{\rm drift}/t_{s})^{1/2}$, so (since $\hat{\mu}\ll 1$) this is generally a less-stringent criterion.
\end{enumerate} \vspace{-0.5cm} \section{Relation to Previous Work} \label{sec:previous.work} \subsection{Winds from Cool Stars} \label{sec:previous.work:coolstar.winds} In the context of dust-driven winds from red giants and other cool stars, there has been extensive work on other dust-related instabilities (involving thermal instability, dust formation, Rayleigh-Taylor instabilities, magnetic cycles, etc.; see \citealt{macgregor:grains.cool.star.eventually.decouple,hartquist:bfield.dust.coupling.cool.star.winds, sandin:agb.wind.sims,soker:agb.star.magnetic.cycle.instabilities,soker:2002.arcs.around.agb.stars.from.instability,simis:2001.shells.around.dust.agb.winds,woitke:2d.agb.wind.simulations,woitke:2d.rad.pressure.dust.agb.wind.models}), but these are physically distinct from the instabilities studied here. Of course, simulations with the appropriate physics -- namely, (1) explicit integration of a drag law with gas back-reaction (and compressible gas), (2) trans-sonic $w_{s}$, (3) multi-dimensional (2D/3D) domains, and (4) sufficient resolution (for the high-$k$ resonant modes) -- should see the RDI. Most studies to date do not meet these conditions. Moreover, they often include other complicated physics (e.g.\ opacity and self-shielding, dust formation) which are certainly important, but make it difficult to identify the specific instability channel we describe here. However, some authors have previously identified aspects of the instabilities described above. \citet{morris:1993.cool.wind.dust.drag.instability.slow.saturated.mode} performed a much simpler linear stability analysis on a two-fluid mixture subject to drag (see also \citealt{mastrodemos:dust.gas.coupling.cool.winds.spherical.symmetry}), and noted two unstable solutions whose growth rates saturated at high-$k$: these are the ``quasi-drift'' and ``quasi-sound'' modes identified here.
However, they assumed: (1) zero gas pressure (effectively $w_{s}\rightarrow \infty$), preventing identification of stability criteria; (2) a constant coupling coefficient; and (3) spherical symmetry (of the perturbations), which eliminates the resonant modes. \citet{deguchi:1997.dust.envelope.pne.spherical.drag.instability.quasi.resonant} followed this up allowing for non-zero gas pressure, but retaining spherical symmetry and imposing the assumption that the dust always exactly follows the local equilibrium drift velocity. This suppresses all instabilities except the resonant mode at exactly $w_{s}=1$. To our knowledge, the scaling of these instabilities and the existence of the resonant instability for all $k$ and all $w_{s} > 1$ have not been discussed previously in the literature. \vspace{-0.5cm} \subsection{Starburst and AGN Winds} \label{sec:previous.work:agn.winds} In models of starbursts and AGN, there is a long literature discussing radiation pressure on grains as an acceleration mechanism for outflows or a driver of turbulence \citep[see e.g.][]{heckman:1990.sb.superwinds,scoville:2001.dust.pressure.in.sb.regions,thompson:rad.pressure,krumholz:2009.rad.dom.region.dynamics,hopkins:twostage.feedback,hopkins:rad.pressure.sf.fb,murray:molcloud.disrupt.by.rad.pressure,kuiper:2012.rad.pressure.outflow.vs.rt.method,wise:2012.rad.pressure.effects}. But almost all calculations to date treat dust and gas as perfectly-coupled (so the RDI cannot appear).
The RDI is not related to the ``radiative Rayleigh-Taylor'' instability of a radiation pressure-supported gas+dust fluid \citep{krumholz:2012.rad.pressure.rt.instab,davis:2014.rad.pressure.outflows}, to non-linear hydrodynamic instabilities generated by e.g.\ pressure gradients or entropy inversions ultimately sourced by dust ``lifting'' material \citep[e.g.][]{berruyer:dust.wind.unstable.pressure.gradient}, nor to the dust sedimentation effects in ambipolar diffusion in molecular clouds discussed in \citet{cochran:dust.bounded.hII.regions.outflow,sandford:radiatively.driven.dust.bounded.gmc.globules}. None of these other classes of instability involves local dust-to-gas ratio fluctuations. There has recently been more work exploring dust-gas de-coupling in molecular cloud turbulence and shocks (integrating the explicit dust dynamics; see \citealt{hopkins.2016:dust.gas.molecular.cloud.dynamics.sims,lee:dynamics.charged.dust.gmcs,monceau:shock.cloud.dust.disperson}), which has shown this can have important effects on cooling, dust growth, and star formation. However, these studies did not identify instabilities, or include the necessary physics to capture the RDI. \vspace{-0.5cm} \subsection{Proto-Planetary Disks} \label{sec:previous.work:disks} There has been extensive study of dust-gas instabilities and dynamics in proto-planetary disks \citep{youdin.goodman:2005.streaming.instability.derivation,johansen:2007.streaming.instab.sims,carballido:2008.grain.streaming.instab.sims,bai:2010.grain.streaming.sims.test,bai:2010.grain.streaming.vs.diskparams,pan:2011.grain.clustering.midstokes.sims,dittrich:2013.grain.clustering.mri.disk.sims,jalali:2013.streaming.instability.largescales,hopkins:2014.pebble.pile.formation,2017arXiv170802945L}.
As mentioned in SH, the well-studied ``streaming instability'' \citep{youdin.goodman:2005.streaming.instability.derivation} is in fact an example of an RDI (although this has not been noted before in this context), a connection that will be explored in detail in future work. However, in the streaming instability, the wave with which the dust drift ``resonates'' is not a sound wave, but epicyclic oscillations of the gas. Similarly, as shown in SH\ (see also App.~\ref{sec:hydrostatic.generalized}), Brunt-V\"ais\"al\"a\ oscillations create an RDI, which may be of importance in proto-planetary disks \citep[this is likely the cause for the instability seen in][]{lambrechts:bv.rdi}. The acoustic RDI has not been explored in this literature. In fact, it is common in these studies to simplify by assuming incompressible gas (enforcing $\delta\rho=0$), in which case all of the acoustic instabilities studied here vanish. Finally, it is worth noting that dust-induced instabilities that occur due to the mass loading of the gas caused by dust \citep[see, e.g.,][]{Garaud:2004,2012ApJ...744..101T} or from changes to its thermodynamic properties (e.g., \citealt{2015MNRAS.453L..78L}, and some of the instabilities discussed in \citealt{2017arXiv170802945L}), are not in the RDI class, because they do not rely on the finite drift velocity between the dust and gas phases. \vspace{-0.5cm} \subsection{Plasma Instabilities} \label{sec:previous.work:plasma} As noted in SH, the most general RDI is closely related to instabilities of two-fluid plasmas (see, e.g.,\ \citealt{tytarenko:two.fluid.drift.intabilities} for an in-depth analysis of a closely related coupled neutral gas-MHD instability). These include the \citet{wardle:instability.mhd.shocks.with.slip} instability and cosmic ray streaming instabilities \citep{kulsrud.1969:streaming.instability,Bell.cosmic.rays}. However, these are quite distinct physical systems and the instabilities have different linear behaviors. 
\vspace{-0.5cm} \section{Astrophysical Applications} \label{sec:applications} There are a number of astrophysical contexts where this specific example of the SH\ instability may be important, which we review here. In the discussions below, we estimate the radiative acceleration of the dust from ${\bf a}\sim{\bf F}_{\lambda}\,Q_{\lambda}\,\bar{\rho}_{d}/ (c\,R_{d})$, where $|{\bf F}_{\lambda}|\sim L/ r^{2}$ is the incident flux of radiation from a source of luminosity $L$ at distance $r$, $c$ is the speed of light, and $Q_{\lambda}$ is the absorption efficiency ($Q_{\lambda}\sim 1$ for very large grains, $Q_{\lambda}\propto R_{d}$ for smaller grains; see \S~\ref{sec:dust.species}). \begin{enumerate} \item{\bf AGN-Driven Outflows and the AGN ``Torus'':} Around a luminous AGN, gas and dust are strongly differentially accelerated by radiation pressure. There is some dust sublimation radius close to the AGN, interior to which dust is destroyed. The instabilities must occur outside this region in the dusty ``torus,'' or further out still, in the galactic narrow-line region. We assume the AGN has luminosity $L \sim L_{46}\,10^{46}\,{\rm erg\,s^{-1}}$, and normalize the radius $r$ of the dusty torus to the dust sublimation radius, i.e., $r\sim \tilde{r}\,r_{\mathrm{sub}}\sim 0.3\,{\rm pc}\,\tilde{r}\,L_{46}^{1/2}$. For a midplane column density $ n_{\rm gas}\,r \sim N_{26}\,10^{26}\,{\rm cm^{-2}}$, and gas temperature $T\sim 1000\,$K, we find that we are in the highly super-sonic limit with $w_{s} \sim 100\,L_{46}^{1/4}\,(\tilde{r}\,N_{26})^{-1/2}$ (dust is in the Epstein regime; see Eq.~\ref{eqn:ws.in.epstein}).
For grains with size $R_{d} \sim R_{d,\mu}\,\mu{\rm m}$, the stopping time is $\langle t_{s} \rangle\sim 0.01\,{\rm yr}\,R_{d,\mu}\,L_{46}^{1/4}\,\tilde{r}^{3/2}\,N_{26}^{-1/2}$ and the characteristic length scale is $c_{s}\,\langle t_{s} \rangle\sim 6\times10^{10}\,{\rm cm}\,R_{d,\mu}\,L_{46}^{1/4}\,\tilde{r}^{3/2}\,(T_{1000}/N_{26})^{1/2}$ (this is $\sim 10^{-7}\,r$, and $\sim 1000$ times the viscous scale). Thus, the large-scale dynamics are in the long-wavelength regime ($k\,c_{s}\,\langle t_{s} \rangle\ll \hat{\mu}$), with growth timescales (see Eq.~\ref{eqn:longwave.mode}) $\Im(\omega)^{-1}\sim 30\,{\rm yr}\,R_{d,\mu}^{1/3}\,L_{46}^{-1/12}\,N_{26}^{1/6}\,\tilde{r}^{5/6}\,(Z/Z_{\sun})^{-1/3}\,(\lambda / 0.1\,{\rm pc})^{2/3}$ (where $\lambda$ is the mode wavelength and we assume the dust-to-gas mass ratio scales with $Z/Z_{\sun}$). This is faster than the dynamical time, and the turbulent eddy turnover time, on essentially every scale inside the torus. Much smaller-scale modes ($\lambda \ll {\rm au}$) fall into the mid-$k$ resonant regime, with the fastest growth timescales of $\Im(\omega)^{-1}\sim 10-100\,{\rm hours}$ for modes approaching the viscous scale ($\lambda\sim 10^{7-8}\,{\rm cm}$). Thus, essentially all luminous AGN ($L \gtrsim 10^{42}\,{\rm erg\,s^{-1}}$) should exhibit regions, in the ``clumpy torus'' surrounding the AGN as well as in radiation-pressure-driven AGN outflows, which are subject to the super-sonic instabilities described above.
This may provide a natural explanation for clumpiness, velocity sub-structure, and turbulence in the torus \citep[see e.g.][]{krolik:clumpy.torii,mason:ngc1068.torus.obs,sanchez:circinus.torus.mass,nenkova:clumpy.torus.model.1,thompson:dust.em.from.unobscured.agn,mor:2009.torus.structure.from.fitting.obs,hoenig:clumpy.torus.modeling,hopkins:m31.disk,hopkins:torus,hopkins:qso.stellar.fb.together,deo:2011.z2.clumpy.torii}, as well as observed time-variability in AGN obscuration \citep{mckernan:1998.agn.occultation.by.clumpy.outflow,risaliti:nh.column.variability}. It is, of course, critical to understand whether this directly alters the AGN-driven winds in the torus region, a subject that will be addressed in future numerical simulations \citep[see e.g.][]{ciottiostriker:recycling,murray:momentum.winds,elitzur:torus.wind,miller:2008.clumpy.agn.disk.wind,roth:2012.rad.transfer.agn,wada:torus.mol.gas.hydro.sims}. As noted above, the instability requires only a dust-gas drift velocity, and this can instead be sourced by AGN line-driving of the \emph{gas} in the narrow/broad line regions. In this case, the scaling of $w_{s}$ depends on the opacity of the gas, but for plausible values in the narrow-line region, and similar luminosities and densities to those used above, we find $w_{s} \gtrsim 10^{2}-10^{3}$. \item{\bf Starburst Regions, Radiation-Pressure Driven Winds, and Dust in the ISM around Massive Stars:} Similarly, consider dusty gas in molecular clouds and HII regions surrounding regions with massive stars. It has been widely postulated that radiation pressure on dust (either single-scattering of optical/UV light or multiple-scattering of IR photons) can drive local outflows from these regions, unbinding dense clumps and GMCs, and stirring GMC or ISM-scale turbulence.
Assuming geometric absorption of radiation by the dust ($Q_{\lambda}\sim 1$), a random patch of gas in a GMC (with temperature $T\sim T_{100}\,100\,$K, density $n\sim n_{10}\,10\,{\rm cm^{-3}}$) at a distance $r\sim r_{\rm pc}\,{\rm pc}$ from a source with luminosity $L\sim L_{1000}\,1000\,L_{\sun}$ has $w_{s} \sim 10\,L_{1000}^{1/2}\,n_{10}^{-1/2}\,r_{\rm pc}^{-1}$. Similarly, consider a GMC of some arbitrary total mass $M_{\rm cl}$ and total size $r\sim r_{10}\,10\,{\rm pc}$, which has converted a fraction $\sim 0.1\,\epsilon_{0.1}$ of its mass into stars. If we assume a typical light-to-mass ratio for young stellar populations ($\sim 1100\,L_{\sun}/M_{\sun}$), we find $w_{s} \sim 10\,r_{10}^{1/2}\,\epsilon_{0.1}^{1/2}$. For smaller (typical ISM) $R_{d}\sim 0.1\,R_{d,0.1}\,\mu{\rm m}$, the corresponding (Epstein) stopping time is $\langle t_{s} \rangle \sim 10^{4}\,{\rm yr}\,R_{d,0.1}\,\Sigma_{100}^{-1}\,(r_{10}/\epsilon_{0.1})^{1/2}$ (where $\Sigma_{100} = \Sigma/100\,M_{\sun}\,{\rm pc}^{-2}$ is the cloud surface density), with scale $c_{s}\langle t_{s} \rangle \sim 0.006\,{\rm pc}\,T_{100}^{1/2}\,(\langle t_{s} \rangle/10^{4}\,{\rm yr})$. So, depending on grain size and gas temperature/density, directly observable ($\gtrsim 0.1\,{\rm pc}$) scales fall in the resonant mid-$k$ regime (larger dust) or long-wavelength regime (smaller dust), with growth timescales $t_{\rm grow}/t_{\rm dyn} \sim 0.03\,R_{d,0.1}^{1/2}\,(\lambda/0.1\,{\rm pc})^{1/2}\,(Z/Z_{\sun})^{-1/2}\,(r_{10}\,T_{100}\,\epsilon_{0.1})^{-1/4}$ (where $t_{\rm dyn}=1/\sqrt{G\,\rho}\sim 10\,{\rm Myr}\,(r_{10}/\Sigma_{100})^{1/2}$). Therefore, we again expect these instabilities to be important.
They may fundamentally alter the ability of radiation pressure from massive stars to drive outflows and source local turbulence \citep[a subject of considerable interest and controversy; see][]{murray:momentum.winds,thompson:rad.pressure,krumholz:2007.rhd.protostar.modes,schartmann:2009.stellar.fb.effects.on.torus,hopkins:rad.pressure.sf.fb,hopkins:dense.gas.tracers,hopkins:2013.fire,guszejnov.2015:feedback.imf.invariance,grudic:sfe.cluster.form.surface.density}. They will also directly source dust-to-gas fluctuations, which can in turn drive abundance anomalies in next-generation stars \citep{hopkins:totally.metal.stars,hopkins.conroy.2015:metal.poor.star.abundances.dust}, as well as altering the dust growth, chemistry, and cooling physics of the clouds \citep{goldsmith:molecular.dust.cooling.gmcs,dopke.2013:fragmentation.all.dust.levels.but.enhanced.with.crit.dust,ji:2014.si.dust.cooling.threshold.for.early.stars,chiaki:2014.critical.dust.abundance.for.cooling}. \item{\bf Cool Star (AGB and Red Giant) Winds and PNe:} In the photospheres and envelopes of cool stars, dust forms and is accelerated by continuum radiation pressure. This contributes to the launching and acceleration of winds, and potentially defines key wind properties, such as their ``clumpiness'' and variability in time and space. There has been extensive study of accelerating dust-gas mixtures in this context (see references in \S~\ref{sec:previous.work:coolstar.winds}). Consider an expanding photosphere/wind ($\rho=\dot{M}/(4\pi\,r^{2}\,v_{\rm wind})$) with $v_{\rm wind}\sim v_{10}\,10\,{\rm km\,s^{-1}}$, $\dot{M}\sim \dot{M}_{-3}\,10^{-3}\,M_{\sun}\,{\rm yr^{-1}}$, and gas temperature $T\sim T_{1000}\,1000\,$K (in the outflow) around a giant with luminosity $L\sim L_{5}\,10^{5}\,L_{\sun}$. Assuming geometric absorption, we obtain $w_{s} \sim 2\,(L_{5}\,v_{10}/\dot{M}_{-3}\,T_{1000})^{1/2}$.
We therefore expect $w_{s}\sim 1$ (but with a broad range, $w_{s}\sim 0.1\rightarrow 10$, or larger) for plausible parameters of different cool stars, and different locations of the grains within the photosphere and wind. The corresponding (Epstein) stopping time is $\langle t_{s} \rangle \sim 1\,{\rm sec}\,R_{d,0.1}\,r_{100}^{2}\,(v_{10}/L_{5}\,\dot{M}_{-3})^{1/2}$ (where $r_{100} \equiv r/100\,R_{\sun}$) and the relevant scales are $c_{s}\,\langle t_{s} \rangle \sim 3\times10^{5}\,{\rm cm}\,T_{1000}^{1/2}\,(\langle t_{s} \rangle/{\rm sec})$. So large-scale modes ($\lambda\gtrsim 10^{8}\,{\rm cm}$) are in the long-wavelength (low-$k$) limit. However, the mean free path is very small, $\lambda_{\rm mfp} \sim 10\,{\rm cm}\,r_{100}^{2}\,v_{10}/\dot{M}_{-3}$, implying that the full dynamic range of the mid-$k$ and high-$k$ resonant modes is also present when $w_{s} \ge 1$. The growth timescale for the largest (low-$k$) modes scales as $t_{\rm grow}/t_{\rm wind} \sim 0.02\,v_{10}^{4/3}\,(R_{d,0.1}\,r_{100}\,Z/\dot{M}_{-3}\,Z_{\sun})^{1/3}\,T_{1000}^{-1/2}\,(\lambda / r)^{2/3}$, where $t_{\rm wind}=r/v_{\rm wind} \sim 0.2\,{\rm yr}\,r_{100}/v_{10}$, suggesting all modes can grow in a wind dynamical time. Approaching the viscous scale (in the high-$k$ regime), $t_{\rm grow}$ reaches $\sim 0.1\,{\rm sec}\,R_{d,0.1}^{2/3}\,T_{1000}^{-1/2}\,(Z/Z_{\sun})^{-1/3}\,(\lambda_{\rm mfp}/10\,{\rm cm})$. This places the instability in perhaps the most interesting range, where certain regimes of the outflows (with $w_{s} \lesssim 1$, but not vanishingly small) would be subject to the long-wavelength instability, and other regimes (with $w_{s} \gtrsim 1$) would be subject to the short-wavelength acoustic RDI.
The long-wavelength instability, which grows fastest in the direction parallel to ${\bf w}_{s}$, could perhaps explain large-scale features such as dust ``shells'' or ``arcs'' \citep[similar to ideas proposed by][]{morris:1993.cool.wind.dust.drag.instability.slow.saturated.mode,1994A&A...288..255W,deguchi:1997.dust.envelope.pne.spherical.drag.instability.quasi.resonant}. In contrast, regimes with $w_{s}\gtrsim 1$, where the fastest-growing modes are short-wavelength and oblique, would likely develop non-linearly into turbulence, seeding clumpy sub-structure in the winds and in emission \citep[a subject of considerable interest; see e.g.][]{1998A&A...333L..51W,2003ApJ...582L..39F,young:2003.clumpy.wind.models,2007Natur.447.1094Z,2010ApJ...724L.133A,2012A&A...537A..35C}. The latter would almost certainly trigger secondary non-linear instabilities by driving large dust-gas clumping; for example via radiative Rayleigh-Taylor instabilities, dust opacity/self-shielding effects, and dust collisions/growth in the wind. \item{\bf Proto-planetary Disks:} As discussed in \S~\ref{sec:previous.work}, instabilities of the coupled dust-gas system in proto-planetary disks are particularly interesting, given their implications for planet formation and observable disk properties. In proto-planetary disks we expect drift velocities to be highly subsonic. For a disk with parameters following \citet{chiang:2010.planetesimal.formation.review} at radius $r\sim r_{10}\,10\,{\rm au}$ and surface density $\Sigma \sim \Sigma_{\rm MMSN}\,1000\,{\rm g\,cm^{-2}}\,(r/{\rm au})^{-1.5}$, pebbles with size $R_{d}\sim R_{d,{\rm cm}}\,{\rm cm}$ will have $w_{s}\sim 0.005\,r_{10}^{25/14}\,R_{d,{\rm cm}}\,\Sigma_{\rm MMSN}^{-1}$ \citep{nakagawa:1986.grain.drift.solution}. Since $w_{s} \ll 1$ we expect the growth rate of the instabilities here to have a maximum value $\Im(\omega)\sim w_{s}^{2}\,\mu\,t_{s}^{-1}$.
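For concreteness, the disk estimate above can be written as a pair of one-line functions. This is a sketch with our own illustrative names; the prefactor and exponents are those quoted in the text from \citet{nakagawa:1986.grain.drift.solution}.

```python
# Subsonic drift parameter for pebbles in an MMSN-like disk:
#   w_s ~ 0.005 r10^(25/14) R_cm / Sigma_MMSN,
# and the corresponding maximum growth rate in the w_s << 1 regime:
#   Im(omega) ~ w_s^2 mu / t_s.

def ws_disk(r10=1.0, Rd_cm=1.0, Sigma_MMSN=1.0):
    """Drift parameter for cm-sized pebbles at r ~ 10 au (fiducial normalizations)."""
    return 0.005 * r10 ** (25.0 / 14.0) * Rd_cm / Sigma_MMSN

def max_growth_rate(ws, mu, ts):
    """Maximum growth rate for w_s << 1: Im(omega) ~ w_s^2 mu / t_s."""
    return ws**2 * mu / ts

ws = ws_disk()  # ~0.005 for fiducial parameters
# The w_s^2 suppression makes the acoustic RDI very slow here:
print(max_growth_rate(ws, mu=0.01, ts=1.0))
```

The $w_{s}^{2}$ suppression is the quantitative reason, stated in the following paragraph, why the acoustic RDI is unlikely to dominate in disks.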
For plausible disk parameters this rate is much slower than the rate $\sim v_{\rm drift}/r$ at which the grains drift radially through the disk. Given this relatively low growth rate, we do not expect this particular sound-wave resonance (the acoustic RDI) to be dominant. However, we {\em do} expect other examples from the broad class of RDI resonances to be interesting. For example, as noted in SH\ and above, the well-studied disk ``streaming instability'' is an RDI associated with the disk epicyclic frequency. Other wave families such as Brunt-V\"ais\"al\"a\ oscillations, slow magnetosonic, and Hall magnetosonic-cyclotron waves are also present with slow phase velocities, which can give rise to much larger growth rates (as compared to the acoustic RDI studied here) when $w_{s}\ll 1$. These will be studied in future work (Squire \&\ Hopkins, in prep.). \end{enumerate} \vspace{-0.5cm} \section{Conclusions} \label{sec:summary} \subsection{Summary} We study the acoustic family of the class of ``resonant drag instabilities'' (RDI) explored in SH. Such instabilities can occur when a relative drift velocity arises between the dust and gas in a coupled dust-gas mixture (due, for example, to different radiative forces on the dust and the gas, or pressure support of the gas). SH\ studied a general gas system and showed that if the gas (absent dust) supports some undamped waves, a streaming velocity that ``resonates with'' the wave phase velocity usually creates an instability (the RDI). In this work, we focus on the case where the gas is governed by neutral hydrodynamics and supports sound waves, studying the ``acoustic RDI'' (resonance with sound waves) and a collection of other non-resonant unstable modes (these are important in certain regimes, e.g., at long wavelengths or high dust-to-gas ratios).
Although neutral hydrodynamics is perhaps the simplest gas system possible, these instabilities have not (to our knowledge) been studied or identified in previous literature, despite their likely relevance for a wide variety of astrophysical systems. We identify a spectrum of exponentially-growing linear instabilities which {\em directly} source fluctuations in the dust-to-gas ratio. Under certain conditions {\em all} wavelengths feature unstable modes, some of which have growth rates that increase without limit with increasing wavenumber. We show that the basic qualitative behaviors (dimensional scalings and nature of the fastest-growing modes) are not sensitive to the gas equation-of-state, the form of the drag law (constant drag coefficient, Epstein, Stokes, or Coulomb drag), the dust-to-gas ratio, or other details, although these do quantitatively alter the predictions. We derive stability conditions and simple closed analytic expressions for the growth rates of the instability (\S~\ref{sec:general.modes}). There is one critical dimensionless parameter that determines the system's qualitative behavior, viz., the ratio of the mean dust drift velocity ($|{\bf v}_{\rm dust} - {\bf u}_{\rm gas}|^{\rm drift}$) to the gas sound speed $c_{s}$: \begin{align} w_{s} &\equiv \frac{|{\bf w}_{s}|}{c_{s}} = \frac{|{\bf v}_{\rm dust} - {\bf u}_{\rm gas}|^{\rm drift}}{c_{s}} = \frac{|\Delta {\bf a}_{\rm dust-gas}|\,\langle t_{s}({\bf a},\,\rho,\,...) \rangle}{c_{s}\,(1+\mu)}. \end{align} Here, the drift velocity ${\bf w}_{s}$ is the ``terminal'' velocity when the dust and gas experience accelerations which differ by some amount $\Delta {\bf a}_{\rm dust-gas}$, $t_{s}$ is the drag coefficient or ``stopping time'' (determined by the drag law), and $\mu$ is the dust-to-gas mass ratio.
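The definition above amounts to a simple terminal-velocity balance between the differential acceleration and the drag; as a minimal sketch (the function and argument names are ours):

```python
def drift_parameter(delta_a, t_s, c_s, mu):
    """w_s = |Delta a| <t_s> / (c_s (1 + mu)):
    terminal dust-gas drift speed in units of the gas sound speed."""
    return delta_a * t_s / (c_s * (1.0 + mu))

# Example in units where c_s = t_s = 1: a differential acceleration of 2
# with dust-to-gas ratio mu = 1 gives exactly sonic drift, w_s = 1.
print(drift_parameter(delta_a=2.0, t_s=1.0, c_s=1.0, mu=1.0))
```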
When $w_{s} \ge 1$, i.e.\ when the dust is moving supersonically relative to the gas, the system is strongly unstable at {\em all} wavelengths. There are multiple unstable modes but the acoustic RDI from SH\ (\S~\ref{sec:resonance}) is the most rapidly growing. The growth rate $\Im(\omega)$ increases {\em without limit} with increasing wavenumber $k$ as $\Im(\omega)\sim (\mu\,k\,c_{s} / t_{s})^{1/2}$ (in a mid range of $k$) or $\Im(\omega)\sim (\mu\,k\,c_{s} / t_{s}^{2})^{1/3}$ (at high $k$), independent of $w_{s}$. These modes propagate at a critical angle $\cos{\theta} = \pm 1/w_{s}$ with respect to the drift direction; the wavespeed is the normal sound speed, and the drift velocity along the wavevector $\hat{\bf k}$ exactly matches this, allowing the dust to coherently push gas, and generate density perturbations. The denser gas then decelerates the dust further, causing a pileup, which runs away. For modes at angles that do not match the resonance condition ($\cos{\theta} \ne \pm 1/w_{s}$), the growth rates saturate at finite values (i.e., $\Im(\omega)$ does not increase indefinitely with $k$). When $w_{s} < 1$, i.e.\ when the dust is moving subsonically relative to the gas, the resonance above does not exist but there are still unstable, long-wavelength modes whose growth rate peaks or saturates above some wavenumber $k \propto w_{s}/(c_{s}t_{s})$, with maximum growth rate $\Im(\omega)\sim w_{s}^{2}\,\mu/t_{s}$. \vspace{-0.5cm} \subsection{Implications, Caveats, \&\ Future Work} In all cases, the instabilities drive dust-gas segregation and local fluctuations in the dust-to-gas ratio, compressible fluctuations in the gas density and velocity, and clumping within the dust (\S~\ref{sec:mode.structure}).
Non-linearly, we expect them to saturate by breaking up into turbulent motions (in both dust and gas) which can be subsonic or supersonic, and in both cases can give rise to large separations between dense gas-dominated and dust-dominated regions. We provide simple estimates for the saturated turbulent amplitude (\S~\ref{sec:nonlinear}). We discuss some astrophysical implications of these instabilities (\S~\ref{sec:applications}) and argue that the ``resonant'' instability is likely to be important in the dusty gas around AGN (in the torus or narrow-line regions), starbursts, giant molecular clouds, and other massive-star forming regions, where $w_{s} \gg 1$ almost everywhere. In the winds and photospheres of cool stars, simple estimates suggest $w_{s} \sim 1$, with a broad range depending on the local conditions and location in the atmosphere. Thus, we again expect these instabilities to be important. In each of these regimes, the instability may fundamentally alter the ability of the system to drive winds via radiation pressure (on the dust or the gas), and could source turbulence, velocity sub-structure, clumping, and potentially observable inhomogeneities in the winds. More detailed conclusions will require numerical simulations of the non-linear evolution of these systems. Our analytic results here make it clear what physics must be included to study such instabilities -- in particular, physical drag laws (with realistic density and velocity dependence) and backreaction from the dust to the gas -- and the range of scales that must be resolved. Most previous studies of such systems either did not include the appropriate drag physics or lacked the resolution to treat these modes properly. This is especially challenging for the resonant mode: because the growth rate increases without limit at high $k$, it could (in principle) become more important and grow ever-faster as the simulation resolution increases.
We have focused on a relatively simple case here, namely gas with a pure acoustic wave in the absence of dust. This ignores, for example, magnetic fields, which alter the mode structure and could influence the grain ``drag'' directly (if the grains are charged). As shown in SH, the RDI generically exists for systems that support undamped linear waves, so we expect a similar rich phenomenology of instabilities (both resonant and non-resonant) in other systems. However it is outside the scope of this work to explore these in detail. Another topic which we will explore in more detail is the influence of a broad size spectrum of dust grains. This is discussed in \S~\ref{sec:dust.species}, where we argue that under most conditions, we can think of the results of this work as being relevant for the large grains (specifically, the largest grains which contain a large fraction of the grain mass), because these dominate the mass and back-reaction on the gas. However as shown there, under some circumstances there is a complicated mix of terms dominated by small grains and others dominated by large grains, which could couple indirectly. Moreover, because the RDI can resonate with any wave family, it is possible that (for example) small, tightly-coupled grains (which may be more stable if considered in isolation) generate wave families to which larger grains can couple via the RDI (or vice versa). \vspace{-0.7cm} \begin{small}\section*{Acknowledgments}\end{small} We would like to thank our anonymous referee, as well as E.~S.~Phinney and E.~Quataert for helpful discussions. Support for PFH \&\ JS was provided by an Alfred P. Sloan Research Fellowship, NASA ATP Grant NNX14AH35G, and NSF Collaborative Research Grant \#1411920 and CAREER grant \#1455342. JS was funded in part by the Gordon and Betty Moore Foundation through Grant GBMF5076 to Lars Bildsten, Eliot Quataert and E. Sterl Phinney.\\ \vspace{-0.2cm}
\section{Introduction} In this paper, we assume that $\Omega$ is a simply connected bounded domain in $\RR^3$ with smooth boundary and investigate the following system in $\Omega$, \begin{equation}\label{equation1.1} \begin{cases} \rho_t+\dive (\rho u)=0,\,\,\rho\geq 0,\\ (\rho u)_t+\dive(\rho u\otimes u)-\dive[2\mu(\rho)D(u)]+\nabla \pi=0,\\ \dive u=c_0\Delta\psi(\rho),\,\,\psi(\rho):=\rho^{-1}, \end{cases} \end{equation} where $u=(u_1,u_2,u_3)$, $\rho$ and $\pi$ stand for the unknown velocity field, density and pressure respectively, $c_0>0$ is a fixed constant, $\mu$ is a positive function and \begin{equation}\label{equation12} \mu(s)\in C^\infty(0,\infty). \end{equation} The deformation tensor $D(u)$ is given by \begin{equation} D(u) = \frac{1}{2}\left[\nabla u+(\nabla u)^t\right]=\frac{1}{2}(\partial_iu_j+\partial_ju_i),\quad 1\leq i,j\leq 3. \end{equation} The system is equipped with the initial data \begin{equation}\label{equation1.4} u(x,0)=u_0(x),\quad \rho(x,0)=\rho_0(x),\quad x\in\Omega \end{equation} and one of the following boundary conditions: \begin{equation}\label{equation1.6} n\cdot \nabla\rho=0,\quad u\cdot n=0,\,\,\curle u\times n=-B\cdot u\quad \mathrm{on}\,\,\partial\Omega\times(0,T)\tag{A} \end{equation} where $B=B(x)$ is a smooth positive semi-definite matrix, or \begin{equation}\label{equation1.7} n\cdot \nabla\rho=0,\quad u=0\quad \mathrm{on}\,\,\partial\Omega\times(0,T).\tag{B} \end{equation} The combustion model is the low Mach number limit of the fully compressible Navier-Stokes equations, see \cite{lions1}, and it is tightly linked with the non-homogeneous incompressible Navier-Stokes equations (taking $c_0=0$) and the homogeneous one (taking $\rho$ to be a constant). There are many works studying the combustion model \eqref{equation1.1} and the problems associated with it. The study of the system \eqref{equation1.1}, introduced by A. Majda \cite{majda}, dates back to the 1980s. P.
Embid \cite{embid} proved the local-in-time well-posedness for classical solutions of the system \eqref{equation1.1} with the periodic boundary condition. The local well-posedness was also considered by H. B. da Veiga \cite{daveiga} with $\eqref{equation1.1}_3$ replaced by Fick's law $\psi(\rho)=\log \rho$. Danchin-Liao \cite{danchin} established the local well-posedness in critical homogeneous Besov spaces under some smallness assumptions, and that in non-homogeneous Besov spaces for arbitrarily large data. Concerning the global-in-time existence of weak and strong solutions of \eqref{equation1.1} and related problems, P. Secchi \cite{secchi} proved that there exists a unique global strong solution in the two-dimensional domain provided the diffusion coefficient $c_0$ is small enough. He also considered the limiting behavior of the solutions as $c_0\to 0^+$ in the 2D and 3D cases and the convergence towards the corresponding solutions of the non-homogeneous incompressible Navier-Stokes equations. Another remarkable work comes from P. Lions \cite{lions2}, where he showed the global existence of weak solutions under only a small perturbation of a constant density, without any restriction on the initial velocity. However, in \cite{lions2}, he only gives the proof for the $\RR^2$ and periodic cases. Also in \cite{danchin}, Danchin-Liao proved the existence of solutions in critical homogeneous Besov spaces provided the initial density is close to a constant and the initial velocity is small enough. For large initial data, Bresch-Essoufi-Sy \cite{bresch2} showed the global existence of weak solutions for the combustion model in dimensions 2 and 3 by taking $\mu(\rho)$ to be the specific function $\frac{c_0}{2}\log \rho$; then, in \cite{bresch1}, Bresch-Giovangigli-Zatorska relaxed the restriction on $\mu(\rho)$ by renormalizing the mass equation. Recently, W.
Tan \cite{tan} proved the global existence of weak and strong solutions for the system \eqref{equation1.1} with general coefficients $\mu(\rho)$ in $\eqref{equation1.1}_2$ and $\psi(\rho)$ in $\eqref{equation1.1}_3$ provided $\norm{\nabla\rho}_{L^2}$ is small enough. Another model related to the system \eqref{equation1.1} is the so-called Kazhikhov-Smagulov type model, see \eqref{equation1.18}. In \cite{caixiaoyun,aaa}, Cai-Liao-Sun established the global-in-time existence of strong solutions to the initial-boundary value problem of a 2D Kazhikhov-Smagulov type model for incompressible non-homogeneous fluids with mass diffusion for arbitrarily large initial data. For other works on the classical Kazhikhov-Smagulov model, we refer the reader to \cite{antontsev,beirao}. If the diffusion coefficient $c_0$ tends to zero, \eqref{equation1.1} reduces to the general non-homogeneous incompressible Navier-Stokes equations. There are also plenty of works studying it with a general viscosity coefficient $\mu(\rho)$; we refer the reader to \cite{abidi,cai2,cho,he2021,jun,huang,lions1} and the references therein. In the final part of this paper we focus on the mechanism of blowup and the structure of possible singularities of strong solutions to the Navier-Stokes system. The blowup criterion for Leray-Hopf weak solutions to the 3D incompressible homogeneous Navier-Stokes equations was first given by J. Serrin \cite{serrin1962}: if a weak solution $u$ satisfies \begin{equation}\label{serrin1.5} u\in L^s(0,T;L^r),\quad \frac{2}{s}+\frac{3}{r}\leq 1,\quad 3<r\leq \infty, \end{equation} then it is regular. Later, He-Xin \cite{he2005} showed that Serrin's criterion \eqref{serrin1.5} still holds even in the case of the incompressible MHD equations. For the non-homogeneous incompressible Navier-Stokes equations, H.
Kim \cite{kim2006} showed that if $(\rho,u)$ blows up at $T^*$, then \begin{equation} \lim_{T\to T^*}\norm{u}_{L^s(0,T;L^r_w)}=\infty\quad \text{for all}\quad\frac{2}{s}+\frac{3}{r}\leq 1,\quad 3<r\leq \infty. \end{equation} In a recent work, X. Zhong \cite{zhong2017} obtained the blowup criterion \eqref{serrin1.5} for the non-homogeneous incompressible heat-conducting Navier-Stokes flows in bounded domains of $\RR^3$. For compressible fluids, we refer the reader to \cite{huang2013serrin,HLX,xu2012blow} and the references therein. However, the theory for the 3D combustion model with a general viscosity coefficient in a bounded domain is still missing. Therefore, our goal is to obtain the global existence of strong solutions with small initial data and to extend Serrin's blow-up criterion to \eqref{equation1.1}. Before stating the main theorem, let us explain some notation and conventions used throughout the paper. First, we define strong solutions as follows. \begin{Definition} $(\rho,u,\pi)$ is called a strong solution of \eqref{equation1.1} on $\Omega\times(0,T)$, if \eqref{equation1.1} holds almost everywhere in $\Omega\times(0,T)$ such that \begin{equation}\label{strong} \begin{cases} \alpha\leq \rho\leq \beta,\\ \rho\in C([0,T];H^2)\cap L^2(0,T;H^3),\rho_t\in C([0,T];L^2)\cap L^2(0,T;L^2),\\ u\in C([0,T];H^1)\cap L^2(0,T;H^2),u_t\in L^2(0,T;L^2),\\ \pi\in L^2(0,T;H^1). \end{cases} \end{equation} In particular, if \eqref{strong} holds for all $T\in (0,\infty)$, we call $(\rho,u,\pi)$ a global strong solution.
\end{Definition} For $1 \leq p \leq \infty$ and integers $k \geq 1$, the standard Sobolev spaces and other functional spaces are defined as follows: $$\begin{cases} L^p=L^p(\Omega), \quad W^{k, p}=W^{k, p}(\Omega), \quad H^k=W^{k, 2}, \\ W_0^{k,p}=\overline{C_0^\infty}\,\,\text{closure in the norm of } W^{k,p},\\ \|\cdot\|_{B_1 \cap B_2}=\|\cdot\|_{B_1}+\|\cdot\|_{B_2}, \text { for two Banach spaces } B_1 \text { and } B_2, \\ H^1_\omega:=\left\{u\in H^1: u\cdot n=0,\,\,\curle u\times n=-B\cdot u\,\,\mathrm{on}\,\,\partial\Omega\right\}. \end{cases}$$ Next, we set $$ \int f d x := \int_{\Omega} f d x,\quad\int_{\partial} f := \int_{\partial\Omega} f d S $$ and $$f_\Omega:=\frac{1}{|\Omega|}\int f,$$ which is the average of a function $f$ over $\Omega$. The weak, weak* and strong convergence of a sequence $\{f^n\}$ are respectively denoted by \begin{equation*} f^n\wconverge f,\quad f^n\wsconverge f,\quad f^n\sconverge f. \end{equation*} Finally, for two $3 \times 3$ matrices $A=\left\{a_{i j}\right\}, B=\left\{b_{i j}\right\}$, the symbol $A: B$ represents the trace of $A B$, that is, $$A:B:=\mathrm{tr}(AB)=\sum_{i,j=1}^3a_{ij}b_{ji}.$$ Now, we give our main theorems. The first theorem concerns the global existence of strong solutions for \eqref{equation1.1} when $\Omega$ is a bounded domain. \begin{Theorem}\label{Theorem1.1} Suppose that $\Omega\subset\RR^3$ is a simply connected bounded domain with smooth boundary and $(\rho_0,u_0)$ satisfies \begin{equation}\label{equation1.8} 0<\alpha\leq \rho_0\leq\beta<\infty,\quad x\in \Omega, \end{equation} the compatibility condition \begin{equation}\label{equation1.3} \begin{cases} \dive u_0=c_0\Delta \rho_0^{-1},&x\in \Omega\\ u_0\cdot n=c_0n\cdot\nabla \rho_0^{-1},&x\in \partial\Omega\\ \end{cases} \end{equation} and $u_0\in H^1_\omega$, if $u$ satisfies the boundary condition \eqref{equation1.6}; $u_0\in H^1_0$, if $u$ satisfies the boundary condition \eqref{equation1.7}.
Then there exists a positive constant $\delta$ depending only on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that if \begin{equation} \norm{\nabla u_0}_{L^2}\leq \delta \end{equation} and $\pi$ satisfies the normalized condition \begin{equation}\label{normalized} \int\pi=0, \end{equation} the system \eqref{equation1.1}--\eqref{equation1.4}, \eqref{equation1.6} or \eqref{equation1.7} admits a unique global strong solution $(\rho,u,\pi)$. \end{Theorem} Next, we give the Serrin-type blowup criterion. \begin{Theorem}\label{Theorem1.3} If $(\rho, u,\pi)$ is a local strong solution on $\Omega\times(0,T^*)$ and $T^*<\infty$ is the maximal time of existence, then \begin{equation}\label{serrin} \lim_{T\to T^*}\norm{u}_{L^s(0,T;L^r)}=\infty, \end{equation} where $r$ and $s$ satisfy the relation \begin{equation} \frac{2}{s}+\frac{3}{r}\leq 1,\quad 3<r\leq \infty. \end{equation} \end{Theorem} \begin{Remark} Our main theorems hold for all functions $\mu(s)>0$ satisfying \eqref{equation12}, even if $\mu(s)\to\infty$ as $s\to 0^+$, under the smallness assumption on $\norm{\nabla u_0}_{L^2}$. Theorem \ref{Theorem1.1} is the first result giving the existence of strong solutions for \eqref{equation1.1} with a general viscosity coefficient in an arbitrary 3D bounded domain. Theorem \ref{Theorem1.3} is parallel to the classical Serrin condition for the 3D non-homogeneous Navier-Stokes equations. \end{Remark} \begin{Remark} Compared with the works \cite{huang,zhang2015}, which obtain global strong solutions for the non-homogeneous incompressible Navier-Stokes equations with a density-dependent viscosity coefficient $\mu(\rho)$ and Dirichlet boundary conditions, our result can be seen as an extension from a divergence-free velocity field $u$, $\dive u=0$, to a non-divergence-free one, that is, $\eqref{equation1.1}_3$.
\end{Remark} \begin{Remark} In our proof of the theorem, we only need $\psi(s)\in C^3(0,\infty)$; thus, more general $\psi(\rho)$ can also be considered under the same assumptions. \end{Remark} \begin{Remark} From the hypothesis of Theorem \ref{Theorem1.1}, one may notice that we do not impose any assumption on the regularity of $\rho_0$ (except for the size restriction \eqref{equation1.8}). This is mainly because of the compatibility condition \eqref{equation1.3}. Indeed, for example, if $u_0\in H^1_\omega$, one can solve the following elliptic problem \begin{equation*} \begin{cases} c_0\Delta\rho_0^{-1}=\dive u_0,&x\in \Omega,\\ n\cdot \nabla\rho_0^{-1}=0,&x\in \partial\Omega, \end{cases} \end{equation*} from which the regularity of $\rho_0$ is completely determined by that of $u_0$. More precisely, we have, for all $1<p\leq 6$, \begin{equation}\label{1.16} \begin{cases} \norm{\nabla\rho_0}_{L^p}\leq C(p)\norm{u_0}_{L^p},\\ \norm{\nabla\rho_0}_{H^1}\leq C\norm{\nabla u_0}_{L^2}. \end{cases} \end{equation} \end{Remark} Now, we give some comments about the analysis throughout the whole paper. Generally speaking, in order to overcome the fact that $u$ is not divergence-free, our proof of Theorem \ref{Theorem1.1} is based on two types of decomposition.
For the first case, that is, $u$ satisfying the boundary condition \eqref{equation1.6}, in view of $\eqref{equation1.1}_3$ we may write \begin{equation}\label{equation1.17} v = u-c_0\nabla\rho^{-1}. \end{equation} Consequently, using \eqref{equation1.17}, the original system \eqref{equation1.1} can be changed into the following Kazhikhov-Smagulov type model, \begin{equation}\label{equation1.18} \begin{cases} \rho_t +v\cdot\nabla\rho + c_0\rho^{-2}\abs{\nabla\rho}^2 - c_0\rho^{-1}\Delta\rho=0,\\ \\ \begin{cases} (\rho v)_t +\dive(\rho v\otimes v) - \dive{[2\mu(\rho)D(v)]}+ \nabla \pi_1=c_0\dive{\left[2\mu(\rho)\nabla^2\rho^{-1}\right]}\\ - c_0 \dive{\left(\rho v\otimes\nabla\rho^{-1}\right)} -\dive\left(c_0\rho\nabla\rho^{-1}\otimes v\right)-c_0^2 \dive{\left(\rho \nabla\rho^{-1}\otimes\nabla\rho^{-1}\right)}, \end{cases}\\ \\ \dive{v} = 0, \end{cases} \end{equation} where $\pi_1=\pi-c_0(\log\rho)_t$ is a modified pressure. Then, one can find that the mass equation $\eqref{equation1.1}_1$ becomes a parabolic-type equation, which provides us some higher regularity properties for $\rho$; on the other hand, $v$ is divergence-free, which allows us to use some ``standard'' treatments from the classical incompressible Navier-Stokes equations. Thus, in Section \ref{section3}, we will mainly discuss the system \eqref{equation1.18} and derive the a priori estimates of $(\rho,v)$. Here, we explain the definition of $v_0$, the initial value of $v$, and the boundary condition related to $v$. From the compatibility condition \eqref{equation1.3}, we can find a unique function $v_0$ defined by \begin{equation}\label{1.19} v_0:=u_0-c_0\nabla\rho_0^{-1}. \end{equation} Then, we may impose $v_0$ as the initial value of $v$.
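For the reader's convenience, the divergence-free property of $v$ claimed in $\eqref{equation1.18}_3$ follows in one line from \eqref{equation1.17} and $\eqref{equation1.1}_3$:

```latex
\begin{equation*}
\dive v \;=\; \dive u - c_0\,\dive\nabla\rho^{-1}
        \;=\; c_0\Delta\rho^{-1} - c_0\Delta\rho^{-1} \;=\; 0 .
\end{equation*}
```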
Of course, in view of the estimates \eqref{1.16}, $v_0$ is also controlled by $u_0$, that is, \begin{equation}\label{1.17} \begin{cases} \norm{v_0}_{L^p}\leq C(p)\norm{u_0}_{L^p},\\ \norm{\nabla v_0}_{L^2}\leq C\norm{\nabla u_0}_{L^2}. \end{cases} \end{equation} For the boundary condition, if $u$ satisfies the condition \eqref{equation1.6}, applying $\curle$ to \eqref{equation1.17} implies that $v$ satisfies \begin{equation}\label{A'} \curle v\times n=-B\cdot(v+c_0\nabla\rho^{-1})\quad\mathrm{on}\,\,\partial\Omega\times(0,T).\tag{A'} \end{equation} In this case, we say that $(\rho,v)$ (or $v$) satisfies the condition \eqref{A'}. In addition, from \eqref{1.19}, we can obtain the compatibility condition corresponding to $(\rho_0,v_0)$, that is, \begin{equation} \begin{cases} \dive v_0=0,&x\in \Omega\\ v_0\cdot n=0,\curle v_0\times n=-B\cdot(v_0+c_0\nabla\rho_0^{-1}),&x\in \partial\Omega\\ \end{cases} \end{equation} provided $u_0\in H^1_\omega$. To sum up, the sketch of our proof is given by \begin{equation*} (\rho_0,u_0)\xRightarrow{\eqref{1.19}}(\rho_0,v_0)\implies \mathrm{existence\,\,of}\,\,(\rho,v)\xRightarrow{\eqref{equation1.17}}\mathrm{existence\,\,of}\,\,(\rho,u). \end{equation*} Another difficulty in this situation comes from the boundary integrals. To overcome it, we mainly adapt the idea from Cai-Li \cite{cai}. Since $v \cdot n=0$ on $\partial \Omega$, we have $$ v=v^{\perp} \times n \quad\text {on } \partial \Omega, $$ where $v^\perp= -v\times n$. Then, for $f\in H^1$, $$\abs{\int_\partial v\cdot\nabla f}=\abs{\int_\partial v^\perp\times n\cdot \nabla f}=\abs{\int \curle v^\perp\cdot \nabla f}\leq C\norm{v}_{H^1}\norm{\nabla f}_{L^2},$$ which clearly has advantages over using the trace inequality, since the latter needs $f\in H^2$.
For $u$ satisfying \eqref{equation1.7}, the situation is somewhat different, since, in every case that follows, $v$ satisfies a non-homogeneous Dirichlet boundary condition, that is, \begin{equation} v=-c_0\nabla\rho^{-1}\quad\mathrm{on}\,\,\partial\Omega\times (0,T). \end{equation} Such a condition brings too many high-order derivatives, so that the boundary integrals are no longer controllable, especially when we treat the energy estimates for $v$. Therefore, we shall apply another type of decomposition, whose idea comes from Lemma \ref{llemma2.2} (see Section \ref{section2}). From it, one can find a function $Q=\cB[c_0\Delta\rho^{-1}]$, where $\cB$ is the Bogovski\v i operator. As a consequence, $u$ will be split into \begin{equation}\label{1.21} u=w+Q, \end{equation} and, hence, one can hope to get the energy estimates for the system \eqref{equation1.1}. The advantage of the above decomposition is obvious: on the one hand, from Lemma \ref{llemma2.2}, $Q$ is ``almost'' $\nabla\rho$; in other words, for all $1<p<\infty$, $Q$ has the following bounds \begin{equation}\label{1.23} \begin{cases} \norm{Q}_{L^p}\leq C\norm{\nabla\rho}_{L^p},\\ \norm{Q}_{H^1}\leq C\left(\norm{\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^{3}}\norm{\nabla\rho}_{L^{6}}\right),\\ \norm{Q_t}_{L^p}\leq C\left(\norm{\nabla\rho_t}_{L^p}+\norm{|\rho_t||\nabla\rho|}_{L^p}\right); \end{cases} \end{equation} on the other hand, it is easy to check that $w$ vanishes on the boundary, which will not generate any boundary term in the energy estimates. Therefore, the strategy of the proof can be summarized as follows \begin{equation*} \begin{aligned} (\rho_0,u_0)\xRightarrow[\eqref{1.23}]{\eqref{1.21}}\mathrm{estimates\,\,for}\,\,(\rho,u)\implies\cdots \end{aligned} \end{equation*} At last, to prove Theorem \ref{Theorem1.3}, we mainly adapt the proofs mentioned above with slight changes.
We first suppose that \eqref{serrin} is false, that is, \begin{equation}\label{serrin'} \lim_{T\to T^*}\norm{u}_{L^s(0,T;L^r)}\leq M_0<\infty; \end{equation} then, following the proof of Theorem \ref{Theorem1.1}, one may obtain bounds for $(\rho,u,\pi)$ satisfying \eqref{strong}, which contradicts the maximality of $T^*$. However, when it comes to the higher order estimates of $(\rho,v)$ (or $(\rho,u)$), one has to control $$\normf{|\nabla\rho|^3}_{L^2}=\norm{\nabla\rho}_{L^6}^3,$$ due to the nonlinear terms $$\rho^{-2}\abs{\nabla\rho}^2,\quad c_0^2 \dive{\left(\rho \nabla\rho^{-1}\otimes\nabla\rho^{-1}\right)}$$ in $\eqref{equation1.18}_1$ and $\eqref{equation1.18}_2$, which fails to be bounded by the Serrin condition \eqref{serrin'}. To overcome this, we change $\eqref{equation1.18}_1$ into \begin{equation} \rho_t+v\cdot \nabla\rho-c_0\Delta\log\rho=0, \end{equation} which leads us to estimate $\log\rho$ thanks to the pure transport structure $\rho_t+v\cdot \nabla\rho$ and the dissipation term $-\Delta\log\rho$; see Section \ref{section4} for details. The rest of this paper is organized as follows. In Section \ref{section2}, we give some elementary results which will be used later. Section \ref{section3} is devoted to the a priori estimates for system \eqref{equation1.1} and the proof of Theorem \ref{Theorem1.1}. Finally, in Section \ref{section4}, we give the proof of Theorem \ref{Theorem1.3}. \section{Preliminaries}\label{section2} First, we give the following local existence result for system \eqref{equation1.1}. We have already proved this in the 2D case in our previous work \cite{zjw}, and the 3D case can be established step by step after some minor adaptations. \begin{Lemma}\label{local} Assume that $(\rho_0,u_0)$ satisfies the same conditions as in Theorem \ref{Theorem1.1} and $\Omega\subset \RR^3$ is a simply connected bounded domain with smooth boundary. Let $\pi$ satisfy the condition \eqref{normalized}.
Then there exists a positive time $T_1$ depending on $\Omega$, $c_0$, $\alpha$, $\beta$ and $\norm{u_0}_{H^1}$ so that the problem \eqref{equation1.1}--\eqref{equation1.4}, \eqref{equation1.6} admits a unique strong solution $(\rho, u,\pi)$ on $\Omega\times(0,T_1)$. Moreover, if $\mu(\rho)$ is a positive constant, then the above result also holds for the condition \eqref{equation1.7}. \end{Lemma} \begin{Remark}\label{local2} Even if we restrict $\mu(\rho)=\mu$ to a positive constant in the case \eqref{equation1.7}, the existence result can be extended to $\mu(\rho)=\mu(\rho_\epsilon)$ (see \cite{zjw} for details), where \begin{equation*} \rho_\epsilon\in C^\infty(\overline\Omega),\quad \alpha\leq \rho_\epsilon\leq \beta,\quad \rho_\epsilon\sconverge \rho\quad\text{in }W^{k,p}\text{ for all }\rho\in W^{k,p},\,k\in \NN,\,1\leq p<\infty. \end{equation*} This extension will help us fill the gap between the existence of local strong solutions and that of global ones when $(\rho,u)$ satisfies the condition \eqref{equation1.7}. \end{Remark} Next, we give the well-known Gagliardo-Nirenberg inequalities, which will be frequently used later. \begin{Lemma}[Gagliardo-Nirenberg \cite{leoni,nirenberg}] \label{Lemma221} Assume that $\Omega$ is a bounded domain in $\mathbb{R}^3$ with smooth boundary. Then there exist generic constants $C$ and $C_1$, depending only on $p$ and $\Omega$, such that, for all $p \in[2,6]$ and $f \in H^1$, \begin{gather*} \|f\|_{L^p(\Omega)} \leq C\|f\|_{L^2}^{\frac{6-p}{2 p}}\|\nabla f\|_{L^2}^{\frac{3 p-6}{2 p}}+C_1\|f\|_{L^2}. \end{gather*} Moreover, if either $\left.f \cdot n\right|_{\partial \Omega}=0$ or $f_\Omega=0$, we can choose $C_1=0$. \end{Lemma} The next two lemmas can be found in \cite{aramaki,von}. \begin{Lemma}\label{lemma22} Let $\Omega$ be a bounded simply connected domain in $\mathbb{R}^3$ with smooth boundary. Assume that $k\geq 0$ is an integer and $1<p<\infty$.
Then for all $u\in W^{{k+1},p}$ with $u\cdot n=0$ on $\partial \Omega$, there exists a positive constant $C=C(k,p,\Omega)$ such that \begin{equation*} \norm{u}_{W^{{k+1},p}}\leq C\left(\norm{\dive{u}}_{ W^{{k},p}}+\norm{\curle{u}}_{ W^{{k},p}}\right). \end{equation*} \end{Lemma} \begin{Lemma}\label{lemma23} Suppose that $\Omega$ is a bounded simply connected domain in $\mathbb{R}^3$ with smooth boundary. Let $k \geq 0$ be an integer, $1<p<\infty$. Then for $u\in W^{k+1, p}$ with $u \times n=0$ on $\partial \Omega$, there exists a constant $C=C(k, p, \Omega)$ such that $$ \|u\|_{W^{k+1, p}} \leq C\left(\|\operatorname{div} u\|_{W^{k, p}}+\|\operatorname{curl} u\|_{W^{k, p}}+\|u\|_{L^p}\right). $$ \end{Lemma} Next, consider the problem \begin{equation}\label{laplace} \begin{cases} \dive u=f, & x \in \Omega, \\ u =\Phi, & x \in \partial \Omega, \end{cases} \end{equation} where $\Omega$ is a bounded smooth domain in $\mathbb{R}^3$. We have the following standard estimates, which will be used to eliminate the non-homogeneity of the equations. \begin{Lemma}[\cite{galdi}, Theorem III.3.3]\label{llemma2.2} Suppose that $\Phi\cdot n=0$ on $\partial\Omega$ and $f_\Omega =0$. Then, \begin{enumerate} \item[1)] If $\Phi=0$, there exists a bounded linear operator $\mathcal{B}=\left[\mathcal{B}_1, \mathcal{B}_2,\mathcal{B}_3\right]$, \begin{equation*} \mathcal{B}: \{f\in L^{p}:f_\Omega =0\} \mapsto \left[W_0^{1,p}\right]^3 \end{equation*} such that \begin{equation*} \|\mathcal{B}[f]\|_{W^{1, p}} \leq C(p)\|f\|_{L^{p}}, \end{equation*} for all $p \in(1, \infty)$, and the function $Q=\mathcal{B}[f]$ solves the problem \eqref{laplace}. Moreover, if $f=\dive g$ with a certain $g \in L^r,\left.g \cdot n\right|_{\partial \Omega}=0$, then for any $r \in(1, \infty)$ \begin{equation*} \|\mathcal{B}[f]\|_{L^r} \leq C(r)\|g\|_{L^r} . \end{equation*} The operator $\cB$ is the so-called Bogovski\v i operator.
\item[2)] If $f=0$, there exists a bounded linear operator $\cC=[\cC_1,\cC_2,\cC_3]$, $$\cC: \{\Phi: \Phi\cdot n|_{\partial\Omega}=0,\,\,\dive\Phi\in L^p\}\mapsto \left[W^{1,p}\right]^3$$ such that $$\norm{\cC[\Phi]}_{W^{1,p}}\leq C(p)\norm{\dive \Phi}_{L^p},$$ for all $p\in (1,\infty)$, and the function $R=\cC[\Phi]$ solves the problem \eqref{laplace}. \end{enumerate} \end{Lemma} The next two lemmas about the estimates of the Stokes system are important for the higher order estimates of $v$. \begin{Lemma}\label{lemma2.3} Let $\Omega$ be a bounded simply connected domain in $\mathbb{R}^3$ with smooth boundary and let $(u,p)$ satisfy the following Stokes equations \begin{equation}\label{equation2.1} \begin{cases} -\Delta u+\nabla p=F, &x\in\Omega,\\ \dive u=0, &x\in\Omega, \end{cases} \end{equation} where $p$ is normalized by the condition $\int p=0$ and $F\in L^2$. Then, we have the following conclusions: \begin{enumerate} \item[(1)] If $u$ satisfies the boundary condition $u\cdot n=0,\,\curle u\times n=\Phi$ on $\partial\Omega$, where $\Phi\in H^1$ is a function defined on $\Omega$, then there exists a positive constant $C$ depending only on $\Omega$ such that \begin{equation}\label{equation2.2} \norm{u}_{H^2} +\norm{p}_{H^1}\leq C(\normf{F}_{L^2}+\norm{\Phi}_{H^1}). \end{equation} \item[(2)] If $u$ satisfies the boundary condition $u=\Phi$ on $\partial\Omega$, where $\Phi\in H^2$ is a function defined on $\Omega$, then there exists a positive constant $C$ depending only on $\Omega$ such that \begin{equation}\label{equation2.2'} \norm{u}_{H^2} +\norm{p}_{H^1}\leq C(\normf{F}_{L^2}+\norm{\Phi}_{H^2}). \end{equation} \end{enumerate} \end{Lemma} \begin{proof} We only give the proof for $(1)$, since $(2)$ can be found in \cite{galdi}, Chapter IV.
Multiplying $u$ on both sides of $\eqref{equation2.1}_1$ and integrating by parts, one has \begin{equation*} \int |\curle u|^2 =\int_\partial \Phi\cdot u +\int F\cdot u, \end{equation*} which, using Lemma \ref{lemma22} and the trace inequality, implies that \begin{equation}\label{equation23} \norm{u}_{H^1}\leq C\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}\right). \end{equation} Then, $\nabla p\in H^{-1}$ and, using the condition $\int p=0$, we have \begin{equation}\label{equation24} \norm{p}_{L^2}\leq C\left(\norm{F}_{L^2}+\norm{u}_{H^1}\right). \end{equation} Next, applying $\curle$ to $\eqref{equation2.1}_1$ leads to the following Laplace equations, \begin{equation*} -\Delta\curle u=\curle F. \end{equation*} Then, multiplying by $\curle u-\Phi^\perp$ and integrating over $\Omega$ gives \begin{align*} &\int |\curle \curle u|^2-\int \curle \curle u\cdot \curle\Phi^\perp+\int_\partial (n\times \curle \curle u)\cdot \left(\curle u-\Phi^\perp\right)\\ &=\int F\cdot \left(\curle\curle u-\curle\Phi^\perp\right)+\int_\partial (n\times F)\cdot \left(\curle u-\Phi^\perp\right), \end{align*} that is, using the identity $a\cdot (b\times c)=b\cdot (c\times a)=c\cdot (a\times b)$, \begin{align*} \int |\curle \curle u|^2-\int \curle \curle u\cdot \curle\Phi^\perp=\int F\cdot \left(\curle\curle u-\curle\Phi^\perp\right), \end{align*} which implies that \begin{equation*} \norm{\curle\curle u}_{L^2}\leq C\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}\right). \end{equation*} It follows from Lemmas \ref{lemma22}--\ref{lemma23} and \eqref{equation23} that \begin{equation}\label{equation27} \norm{u}_{H^2}\leq C\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}+\norm{u}_{L^2}\right). \end{equation} Because of the uniqueness of the Stokes system, one can eliminate the $L^2$-norm of $u$ on the right-hand side of \eqref{equation27}. On the other hand, of course, we have $$\norm{p}_{H^1}\leq C\norm{\nabla p}_{L^2}\leq C(\norm{\Delta u}_{L^2}+\norm{F}_{L^2}).$$ Thus, along with \eqref{equation27}, we complete the proof.
\end{proof} \begin{Lemma}\label{lemma26} Let $\Omega$ be a bounded simply connected domain in $\mathbb{R}^3$ with smooth boundary. Let $(u,p)$ be a strong solution of the following Stokes type system, \begin{equation}\label{equation28} \begin{cases} -\dive[2\mu(\rho)D(u)]+\nabla p=F, &x\in\Omega,\\ \dive u=0, &x\in\Omega, \end{cases} \end{equation} where $p$ is normalized by the condition $\int p=0$, $F\in L^2$ and $$0<\underline\mu \leq \mu(\rho)\leq \overline\mu<\infty,\,\,\,\,\nabla\mu(\rho)\in L^r,\quad r\in (3,\infty].$$ Then, we have the following results: \begin{enumerate} \item[(1)] If $u$ satisfies the boundary condition $u\cdot n=0,\,\curle u\times n=\Phi$ on $\partial\Omega$, where $\Phi\in H^1$ is a function defined on $\Omega$, then there exists a positive constant $C$ depending only on $\underline\mu$, $\overline\mu$ and $\Omega$ such that \begin{equation*} \norm{u}_{H^2}+\norm{p}_{H^1}\leq C\left[\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{\nabla u}_{L^2}+\left(1+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\right)\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}\right)\right]. \end{equation*} \item[(2)] If $u$ satisfies the boundary condition $u=\Phi$ on $\partial\Omega$, where $\Phi\in H^2$ is a function defined on $\Omega$, then there exists a positive constant $C$ depending only on $\underline\mu$, $\overline\mu$ and $\Omega$ such that \begin{equation*} \norm{u}_{H^2}+\norm{p}_{H^1}\leq C\left[\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{\nabla u}_{L^2}+\left(1+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\right)\left(\norm{F}_{L^2}+\norm{\Phi}_{H^2}\right)\right]. \end{equation*} \end{enumerate} \end{Lemma} \begin{proof} We again only give the proof of $(1)$, since $(2)$ can be checked in a similar way. Using Lemma \ref{llemma2.2}, we can find a function $R=\cC[\Phi]$ such that $\dive R=0$ and $R|_{\partial\Omega}=\Phi$.
Then, we rewrite $\eqref{equation28}_1$ as \begin{equation}\label{equation29} -\operatorname{div}[2 \mu(\rho) D(u-R)]+\nabla p=F+\operatorname{div}[2 \mu(\rho) D(R)]. \end{equation} Multiplying $u-R$ on both sides of \eqref{equation29}, integrating by parts like in Lemma \ref{lemma2.3} and using the control $\norm{R}_{H^1}\leq C\norm{\dive \Phi}_{L^2}$, one has \begin{equation} \norm{u}_{H^1}+\norm{p}_{L^2}\leq C\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}\right). \end{equation} Next, converting $\eqref{equation28}_1$ into the form \begin{equation} -\Delta u+\nabla\left[\frac{p}{\mu(\rho)}\right]=\frac{F}{\mu(\rho)}+\frac{2\nabla\mu(\rho)\cdot D(u)}{\mu(\rho)} -\frac{p\nabla\mu(\rho)}{\mu(\rho)^2}, \end{equation} then using $\eqref{equation2.2}_2$ and Poincar\'e's inequality, we have \begin{equation*} \begin{aligned} \norm{u}_{H^2}+\norm{p}_{H^1}&\leq C\left(\norm{u}_{H^2}+\norm{\nabla\frac{p}{\mu(\rho)}}_{L^2}+\norm{\frac{\nabla\mu(\rho)\cdot D(u)}{\mu(\rho)}}_{L^2}+\norm{\frac{p\nabla\mu(\rho)}{\mu(\rho)^2}}_{L^2}\right)\\ &\leq C(\normf{F}_{L^2}+\norm{\Phi}_{H^1}+\norm{\nabla\mu(\rho)}_{L^r}\norm{\nabla u}_{L^{\frac{2r}{r-2}}}+\norm{\nabla\mu(\rho)}_{L^r}\norm{p}_{L^{\frac{2r}{r-2}}})\\ &\leq C\left(\normf{F}_{L^2}+\norm{\Phi}_{H^1}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{\nabla u}_{L^2}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{p}_{L^2}\right)\\ &\quad+\frac{1}{2}\left(\norm{u}_{H^2}+\norm{p}_{H^1}\right), \end{aligned} \end{equation*} which implies that \begin{equation*} \begin{aligned} \norm{u}_{H^2}+\norm{p}_{H^1}&\leq C\left(\normf{F}_{L^2}+\norm{\Phi}_{H^1}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{\nabla u}_{L^2}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{p}_{L^2}\right)\\ &\leq C\left[\normf{F}_{L^2}+\norm{\Phi}_{H^1}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\norm{\nabla u}_{L^2}+\norm{\nabla\mu(\rho)}^{\frac{r}{r-3}}_{L^r}\left(\norm{F}_{L^2}+\norm{\Phi}_{H^1}\right)\right]. 
\end{aligned} \end{equation*} This completes the proof. \end{proof} Next, we consider the H\"older continuity of $\rho$ and the non-divergence type Stokes model. \begin{Lemma}[\cite{aaa,ladyzhenskaia,SUN2013,zjw}]\label{lemma2.7} Let $v\in L^s(0,T;L^r)$, $\dive v = 0$, $v \cdot n = 0$ and let $\rho\in C([0,T];L^2)\cap L^2(0,T;H^1)$ be the weak solution of equation $\eqref{equation1.8}_1$, $\alpha\leq \rho\leq\beta$. Let $\rho$ satisfy $n\cdot \nabla\rho=0$ on $\partial\Omega$, where $\Omega\subset\RR^3$ is a bounded domain with smooth boundary. Suppose that $\rho_0\in C^{\gamma_0}(\overline\Omega)$ for some $\gamma_0\in (0,1)$; then $\rho$ is H\"older continuous. More precisely, $\rho\in C^{\gamma,\frac{\gamma}{2}}(\overline Q_T)$ for some $\gamma$ depending only on $\gamma_0$, $\alpha$ and $\beta$. \end{Lemma} \begin{Lemma}\label{lemma2.8} Let $\Omega$ be a bounded simply connected domain in $\mathbb{R}^3$ with smooth boundary. Let $(u,p)$ be a strong solution of the following Stokes type system, \begin{equation}\label{equation2.14} \begin{cases} -\mu(x)\Delta u+\nabla p=F, &x\in\Omega,\\ \dive u=0, &x\in\Omega, \end{cases} \end{equation} where $p$ is normalized by the condition $\int p=0$, $F\in L^2$ and $$0<\underline\mu \leq \mu(x)\leq \overline\mu<\infty,\quad\mu(x)\in C(\overline\Omega).$$ Then, we have the following results: \begin{enumerate} \item[(1)] If $u$ satisfies the boundary condition $u\cdot n=0,\,\curle u\times n=\Phi$ on $\partial\Omega$, where $\Phi\in H^1$ is a function defined on $\Omega$. Then there exists a positive constant $C$ depending only on $\Omega$ such that \begin{equation}\label{2.15} \norm{u}_{H^2} +\norm{p}_{H^1}\leq C(\normf{F}_{L^2}+\norm{\Phi}_{H^1}). \end{equation} \item[(2)] If $u$ satisfies the boundary condition $u=\Phi$ on $\partial\Omega$, where $\Phi\in H^2$ is a function defined on $\Omega$.
Then there exists a positive constant $C$ depending only on $\Omega$ such that \begin{equation}\label{2.16} \norm{u}_{H^2} +\norm{p}_{H^1}\leq C(\normf{F}_{L^2}+\norm{\Phi}_{H^2}). \end{equation} \end{enumerate} \end{Lemma} \begin{proof} The proof of Lemma \ref{lemma2.8} is an easy consequence of the argument of freezing the coefficients, since we already have the conclusion when $\mu\equiv \text{constant}$ from Lemma \ref{lemma2.3}. \end{proof} Finally, for subsection \ref{P12}, we need the following lemma. \begin{Lemma}[Simon \cite{novotny, simon}]\label{lemma221} Let $X\hookrightarrow B\hookrightarrow Y$ be three Banach spaces with compact imbedding $X\hookrightarrow\hookrightarrow Y$. Further, let there exist $0<\theta<1$ and $M>0$ such that \begin{equation*} \norm{v}_{B}\leq M\norm{v}_X^{1-\theta}\norm{v}_Y^\theta,\,\,\, \text{for all}\,\,v\in X\cap Y. \end{equation*} Denote for $T>0$, \begin{equation*} W(0,T):= W^{s_0,r_0}(0,T;X)\cap W^{s_1,r_1}(0,T;Y) \end{equation*} with $s_0,s_1\in\RR$, $r_1,r_0\in [1,\infty]$, and \begin{equation*} s_\theta:=(1-\theta)s_0+\theta s_1,\,\,\frac{1}{r_\theta}:=\frac{1-\theta}{r_0}+\frac{\theta}{r_1},\,\,s^*:=s_\theta-\frac{1}{r_\theta}. \end{equation*} Assume that $s_\theta>0$ and $F$ is a bounded set in $W(0,T)$. \begin{enumerate} \item[(1)] If $s^*\leq 0$, then $F$ is precompact in $L^p(0,T;B)$ for all $1\leq p<-\frac{1}{s^*}$. \item[(2)] If $s^*> 0$, then $F$ is precompact in $C([0,T];B)$. \end{enumerate} \end{Lemma} \section{Proof of Theorem \ref{Theorem1.1}}\label{section3} In this section, we assume $u_0\in C^\infty(\overline\Omega)\cap H^1_0$ (or $H^1_\omega$), and we always suppose that the assumptions in Theorem \ref{Theorem1.1} hold. In the following proof, in order to simplify the notation, we denote by $\varepsilon_i$, $i\in\NN_+$, arbitrarily small numbers belonging to $(0, 1/2]$, and we write $C_{\varepsilon_i}$ to emphasize the dependence of the constant $C$ on $\varepsilon_i$.
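Throughout this section, cubic products of norms are repeatedly split by combining Lemma \ref{Lemma221} with Young's inequality; this splitting is the origin of the constants $C_{\varepsilon_i}$. As an illustrative sketch (assuming that Lemma \ref{Lemma221} applies with $C_1=0$ and that $\normf{\nabla^2\rho}_{L^2}\leq C\norm{\Delta\rho}_{L^2}$, as is the case under the Neumann condition on $\rho$), a typical step, such as the bound for $G_1$ below, reads
% Illustration only: interpolation with p=3, followed by Young's inequality.
\begin{align*}
\norm{v}_{L^6}\norm{\nabla\rho}_{L^3}\norm{\Delta\rho}_{L^2}
&\leq C\norm{\nabla v}_{L^2}\norm{\nabla\rho}_{L^2}^{\frac{1}{2}}\norm{\Delta\rho}_{L^2}^{\frac{3}{2}}\\
&\leq C_{\varepsilon}\norm{\nabla v}_{L^2}^{4}\norm{\nabla\rho}_{L^2}^{2}+\varepsilon\norm{\Delta\rho}_{L^2}^{2},
\end{align*}
where the last step uses $xy\leq C_{\varepsilon}x^{4}+\varepsilon y^{\frac{4}{3}}$ with $x=\norm{\nabla v}_{L^2}\norm{\nabla\rho}_{L^2}^{\frac{1}{2}}$ and $y=\norm{\Delta\rho}_{L^2}^{\frac{3}{2}}$.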
\subsection{A Priori Estimates} \subsubsection{Case \eqref{equation1.6}} The key to the proof is to derive the following proposition. Using the idea from \cite{huang,zhang2015}, we first assume the bounds \eqref{condition2.5} and obtain the a priori estimates of $(\rho,v)$ (see below). Then, the a priori estimates in Lemmas \ref{Lemma2.2}--\ref{lemma33} lead to the smaller bounds \eqref{32}, which means that we can close the energy estimates. \begin{Proposition}\label{prop3.1} There exists a positive constant $\delta$ depending on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that, if $\norm{\nabla u_0}_{L^2}\leq \delta$ and \begin{equation}\label{condition2.5} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^6}\leq 2,\quad\int_0^T\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\,dt\leq 2\norm{\nabla u_0}^2_{L^2}, \end{equation} then, one has \begin{equation}\label{32} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^6}\leq 1,\quad\int_0^T\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\,dt\leq \norm{\nabla u_0}^2_{L^2}. \end{equation} \end{Proposition} We first prove the lower order estimates of $(\rho,v)$. \begin{Lemma}\label{Lemma2.2} Let $(\rho,v)$ be a smooth solution of \eqref{equation1.18}. Then $\alpha\leq\rho\leq\beta$ and there exists a positive constant $C$ depending only on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that, for all $T\in (0,\infty)$, \begin{equation}\label{2.7} \sup_{t\in[0,T]}\norm{\rho-(\rho_0)_\Omega}_{L^2}^2+\int_0^T\norm{\nabla\rho}_{L^2}^2\,dt\leq C\norm{\nabla u_0}_{L^2}^2.
\end{equation} Furthermore, if $\norm{\nabla u_0}_{L^2}\leq 1$ and the condition \eqref{condition2.5} holds, one has \begin{gather} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^2}^2+\int_0^T\left(\norm{\nabla\rho}_{L^3}^4+ \norm{\Delta\rho}_{L^2}^2\right)\,dt\leq C\norm{\nabla u_0}^2_{L^2},\label{2.8}\\ \sup_{t\in[0,T]}\norm{v}^2_{L^2}+\int_0^T\left(\norm{v}_{L^3}^4+\norm{\nabla v}_{L^2}^2\right)\,dt\leq C\norm{\nabla u_0}^2_{L^2}.\label{2.9} \end{gather} \end{Lemma} \begin{proof} First of all, $\alpha\leq \rho\leq \beta$ is a consequence of the standard maximum principle. Next, multiplying $\rho-(\rho_0)_\Omega$ on both sides of $\eqref{equation1.18}_1$ and integrating over $\Omega$, one has \begin{equation*} \left(\norm{\rho-(\rho_0)_\Omega}_{L^2}^2\right)_t+\nu\norm{\nabla\rho}_{L^2}^2\leq 0. \end{equation*} Therefore, \eqref{2.7} is an easy consequence of Gr\"onwall's inequality and the control $$\norm{\rho_0-(\rho_0)_\Omega}_{L^2}\leq C\norm{\nabla\rho_0}_{L^2}\leq C\norm{\nabla u_0}_{L^2}.$$ To prove \eqref{2.8}, multiplying $\eqref{equation1.18}_1$ by $-\Delta\rho$ and integrating over $\Omega$, one has \begin{equation}\label{eq3.7} \begin{aligned} \left(\int \frac{1}{2}|\nabla\rho|^2\right)_t+\int c_0\rho^{-1} |\Delta\rho|^2&=\int (v\cdot\nabla\rho)\Delta\rho+\int c_0\rho^{-2} |\nabla\rho|^2\Delta\rho\\ &:=\sum_{i=1}^2 G_i, \end{aligned} \end{equation} where, using Lemma \ref{Lemma221}, \begin{equation} \begin{cases} |G_1|\leq C\norm{v}_{L^6}\norm{\nabla\rho}_{L^3}\norm{\Delta\rho}_{L^2}\leq C_{\varepsilon_1}\norm{\nabla v}^4_{L^2}\norm{\nabla\rho}_{L^2}^2+\varepsilon_1\norm{\Delta\rho}_{L^2}^2,\\ |G_2|\leq C\norm{\nabla\rho}_{L^6}\norm{\nabla\rho}_{L^3}\norm{\Delta\rho}_{L^2}\leq C_{\varepsilon_2}\norm{\Delta\rho}^4_{L^2}\norm{\nabla\rho}_{L^2}^2+\varepsilon_2\norm{\Delta\rho}_{L^2}^2.
\end{cases} \end{equation} Thus, we have \begin{equation}\label{33.8} \left(\norm{\nabla\rho}_{L^2}^2\right)_t+\nu\norm{\Delta\rho}_{L^2}^2\leq C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}^4_{L^2}\right)\norm{\nabla\rho}_{L^2}^2. \end{equation} Applying Gr\"onwall's inequality to \eqref{33.8} and using condition \eqref{condition2.5}, we obtain \eqref{2.8}. With the help of \eqref{2.7}--\eqref{2.8}, we next come to the proof of \eqref{2.9}. Multiplying $v$ on both sides of $\eqref{equation1.18}_2$ and integrating over $\Omega$, one has \begin{equation}\label{218} \begin{aligned} \left(\frac{1}{2}\int \rho|v|^2\right)_t-\int \dive [2\mu(\rho)D(v)]\cdot v &=- \int 2c_0\mu(\rho)\nabla^2\rho^{-1}: \nabla v+ \int c_0 \rho v \cdot \nabla v\cdot \nabla\rho^{-1}\\ &\quad+\int c_0^2 \rho \nabla\rho^{-1} \cdot \nabla v\cdot \nabla\rho^{-1}\\ &\quad+\int_\partial 2c_0\mu(\rho)n\cdot \nabla^2\rho^{-1}\cdot v\\ &:=\sum_{i=1}^4H_i. \end{aligned} \end{equation} For the second term on the left-hand side, using $\Delta v=-\curle\curle v$ and Lemmas \ref{Lemma221} and \ref{lemma23}, we have \begin{equation}\label{219} \begin{aligned} -\int \dive [2\mu(\rho)D(v)]\cdot v &=\int \mu(\rho)\curle\curle v\cdot v- \int 2\mu'(\rho)\nabla\rho\cdot D(v)\cdot v\\ &=\int \mu(\rho)|\curle v|^2+\int_\partial \mu(\rho)v\cdot B\cdot v+\int_\partial c_0\mu(\rho)v\cdot B\cdot \nabla\rho^{-1} \\ &\quad+\int \curle v\cdot \left(\nabla\mu(\rho)\times v\right)- \int 2\nabla\mu(\rho)\cdot D(v)\cdot v\\ &\geq \underline\mu\left(\norm{\curle v}_{L^2}^2+\int_\partial v\cdot B\cdot v\right)-C\norm{v}_{H^1}\norm{\nabla\rho}_{H^1}\\ &\quad-C\norm{\nabla\rho}_{L^6}\norm{v}_{L^3}\norm{\nabla v}_{L^2}\\ &\geq \nu\norm{\nabla v}_{L^2}^2-C\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^4\norm{v}_{L^2}^2\right).
\end{aligned} \end{equation} Next, for $H_1$--$H_4$, one has, applying Lemma \ref{Lemma221}, \begin{equation}\label{220} \begin{cases} |H_1|&\!\!\!\!\leq C\left(\norm{\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^6}\norm{\nabla\rho}_{L^3}\right)\norm{\nabla v}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_1}\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^4\norm{\nabla\rho}_{L^2}^2\right)+\varepsilon_1\norm{\nabla v}_{L^2}^2,\\ |H_2|&\!\!\!\!\leq C\norm{\nabla\rho}_{L^6}\norm{v}_{L^3}\norm{\nabla v}_{L^2}\leq C_{\varepsilon_2}\norm{\Delta\rho}_{L^2}^4\norm{v}_{L^2}^2+\varepsilon_2\norm{\nabla v}_{L^2}^2,\\ |H_3|&\!\!\!\!\leq C\norm{\nabla\rho}_{L^6}\norm{\nabla\rho}_{L^3}\norm{\nabla v}_{L^2}\leq C_{\varepsilon_3}\norm{\Delta\rho}_{L^2}^4\norm{\nabla\rho}_{L^2}^2+C_{\varepsilon_3}\norm{\Delta\rho}_{L^2}^2+\varepsilon_3\norm{\nabla v}_{L^2}^2. \end{cases} \end{equation} and, using the fact that \begin{equation*} v\cdot \nabla^2\rho^{-1}\cdot n=-v\cdot \nabla n\cdot \nabla \rho^{-1}\,\,\mathrm{on}\,\,\partial\Omega\times(0,T), \end{equation*} \begin{equation}\label{221} \begin{aligned} |H_4|&=\abs{\int_\partial 2c_0\mu(\rho)n\cdot \nabla^2\rho^{-1}\cdot v}=\abs{\int_\partial 2c_0\mu(\rho) v\cdot\nabla n\cdot \nabla\rho^{-1}}\\ &=\abs{\int_\partial 2c_0\mu(\rho)(v^\perp\times n)\cdot\nabla n\cdot \nabla\rho^{-1}}\\ &=\abs{\int 2c_0\mu(\rho)\curle v^\perp\cdot \nabla n\cdot \nabla\rho^{-1} -\int 2c_0v^\perp\cdot \curle\left[\mu(\rho)\nabla n\cdot \nabla\rho^{-1}\right]}\\ &=\left|\int 2c_0\mu(\rho)\curle v^\perp\cdot \nabla n\cdot \nabla\rho^{-1}+\int 2c_0\mu(\rho)v^\perp\cdot \curle\left(\nabla n\cdot \nabla\rho^{-1}\right)\right.\\ &\quad-\left.\int 2c_0\nabla\mu(\rho)\times\left(\nabla n\cdot \nabla\rho^{-1}\right)\cdot v^\perp\right|\\ &\leq C\norm{v}_{H^1}\norm{\nabla\rho}_{H^1}+C\norm{\nabla\rho}_{L^3}^2\norm{v}_{L^3}\leq C_{\varepsilon_4}\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla\rho}_{L^3}^4\right)+\varepsilon_4\norm{\nabla v}_{L^2}^2. 
\end{aligned} \end{equation} Combining \eqref{219}--\eqref{221}, we deduce from \eqref{218} that \begin{equation}\label{3314} \begin{aligned} &\left(\norm{\sqrt\rho v}^2_{L^2}\right)_t+\nu\norm{\nabla v}_{L^2}^2\\ &\leq C\left(\norm{\Delta\rho}_{L^2}^4\norm{\sqrt\rho v}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^4\norm{\nabla\rho}_{L^2}^2+\norm{\nabla\rho}_{L^3}^4\right). \end{aligned} \end{equation} Using Gr\"onwall's inequality, Lemma \ref{Lemma221}, \eqref{1.17}, \eqref{condition2.5} and \eqref{2.8}, we have \begin{equation*} \begin{aligned} \sup_{t\in[0,T]}\norm{v}^2_{L^2}+\int_0^T\left(\norm{v}_{L^3}^4+\norm{\nabla v}_{L^2}^2\right)\,dt&\leq C\left(\norm{\nabla u_0}_{L^2}^2+\norm{v_0}_{L^2}^2\right)\\ &\leq C\norm{\nabla u_0}_{L^2}^2, \end{aligned} \end{equation*} which gives \eqref{2.9}. Thus, we complete the proof of Lemma \ref{Lemma2.2}. \end{proof} Next, we prove the higher order estimates for $(\rho, v)$, that is, \begin{Lemma}\label{lemma33} Let $(\rho,v,\pi_1)$ be a smooth solution of \eqref{equation1.18}. Suppose that $\norm{\nabla u_0}_{L^2}\leq 1$ and the condition \eqref{condition2.5} holds. Then there exists a positive constant $C$ depending only on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that, for all $T\in (0,\infty)$, \begin{gather} \sup_{t\in[0,T]}\cP(t)+\int_0^T\left(\cQ(t)+\norm{\pi}_{H^1}^2\right)\,dt\leq C\norm{\nabla u_0}^2_{L^2},\label{2.12} \end{gather} where \begin{gather*} \cP(t):=\norm{\nabla v}_{L^2}^2+\norm{\Delta\rho}^2_{L^2}+\norm{\rho_t}_{L^2}^2,\\ \cQ(t):=\norm{v_t}_{L^2}^2+\norm{v}_{H^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla\rho_t}_{L^2}^2.
\end{gather*} \end{Lemma} \begin{proof} We first apply the operator $-\nabla\Delta\rho\cdot \nabla$ to both sides of $\eqref{equation1.18}_1$ and then integrate over $\Omega$ to obtain \begin{equation}\label{2.20} \begin{aligned} \left(\int \frac{1}{2}|\Delta\rho|^2\right)_t+\int c_0\rho^{-1}|\nabla\Delta\rho|^2&=\int \nabla\Delta\rho\cdot \nabla v\cdot \nabla\rho+\int v\cdot \nabla^2\rho\cdot\nabla\Delta\rho\\ &\quad+\int \nabla \left[\frac{c_0}{\rho^2}|\nabla\rho|^2\right]\cdot\nabla\Delta\rho+\int \frac{c_0}{\rho^2}\Delta\rho\nabla\rho\cdot\nabla\Delta\rho\\ &:=\sum_{i=1}^4 I_i, \end{aligned} \end{equation} where, applying Lemma \ref{Lemma221}, \begin{equation}\label{2.21} \begin{cases} |I_1|&\!\!\!\!\leq C\norm{\nabla v}_{L^3}\norm{\nabla\rho}_{L^6}\norm{\nabla\Delta\rho}_{L^2}\leq C_{\varepsilon_1}\norm{\Delta\rho}_{L^2}^4\norm{\nabla v}_{L^2}^2+\varepsilon_1\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{v}_{H^2}^2\right),\\ |I_2|&\!\!\!\!\leq C\norm{v}_{L^6}\norm{\Delta\rho}_{L^3}\norm{\nabla\Delta\rho}_{L^2}\leq C_{\varepsilon_2}\norm{\nabla v}_{L^2}^4\norm{\Delta\rho}_{L^2}^2+\varepsilon_2\norm{\nabla\Delta\rho}_{L^2}^2,\\ |I_3|&\!\!\!\!\leq C\left(\norm{\nabla\rho}_{L^6}^3+\norm{\nabla\rho}_{L^6}\normf{\nabla^2\rho}_{L^3}\right)\norm{\nabla\Delta\rho}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_3}\norm{\Delta\rho}_{L^2}^4\norm{\Delta\rho}_{L^2}^2+\varepsilon_3\norm{\nabla\Delta\rho}_{L^2}^2,\\ |I_4|&\!\!\!\!\leq C\norm{\nabla\rho}_{L^6}\normf{\Delta\rho}_{L^3}\norm{\nabla\Delta\rho}_{L^2}\leq C_{\varepsilon_4}\norm{\Delta\rho}_{L^2}^4\norm{\Delta\rho}_{L^2}^2+\varepsilon_4\norm{\nabla\Delta\rho}_{L^2}^2.
\end{cases} \end{equation} Thus, substituting \eqref{2.21} into \eqref{2.20} leads to \begin{equation}\label{318} \begin{aligned} \left(\norm{\Delta\rho}_{L^2}^2\right)_t+\nu \norm{\nabla\Delta\rho}_{L^2}^2&\leq C_\varepsilon\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)+\varepsilon\norm{v}_{H^2}^2. \end{aligned} \end{equation} For the higher order estimates of $v$, multiplying $v_t$ on both sides of $\eqref{equation1.18}_2$ and integrating over $\Omega$ leads to \begin{equation}\label{2.23} \begin{aligned} \int \rho|v_t|^2-\int\dive[2\mu(\rho)D(v)]\cdot v_t&=-\int\rho u\cdot \nabla v\cdot v_t+ \int c_0\dive{\left[2\mu(\rho)\nabla^2\rho^{-1}\right]}\cdot v_t\\ &\quad-\int c_0 \dive{\left(\rho v\otimes\nabla\rho^{-1}\right)}\cdot v_t\\ &\quad -\int c_0^2 \dive{\left(\rho \nabla\rho^{-1}\otimes\nabla\rho^{-1}\right)}\cdot v_t\\ &:=\sum_{i=1}^4 J_i. \end{aligned} \end{equation} For the second term on the left-hand side, we have \begin{align} -\int\dive[2\mu(\rho)D(v)]\cdot v_t&=\int \mu(\rho)\curle\curle v\cdot v_t-\int 2\nabla\mu(\rho)\cdot D(v)\cdot v_t\notag\\ &=\int_\partial \mu(\rho)v_t\cdot B\cdot v+\int_\partial c_0 \mu(\rho)v_t\cdot B\cdot \nabla\rho^{-1}+\int \mu(\rho)\curle v\cdot \curle v_t\notag\\ &\quad+\int \curle v\cdot\left[\nabla\mu(\rho)\times v_t\right]-\int 2\nabla\mu(\rho)\cdot D(v)\cdot v_t\notag\\ &=\frac{1}{2}\left(\int_\partial \mu(\rho)v\cdot B\cdot v+\int \mu(\rho)|\curle v|^2\right)_t+\int_\partial c_0 \mu(\rho)v_t\cdot B\cdot \nabla\rho^{-1} \notag\\ &\quad-\int_\partial \frac{1}{2}\mu(\rho)_tv\cdot B\cdot v-\int \frac{1}{2}\mu(\rho)_t |\curle v|^2\notag\\ &\quad+\int \curle v\cdot\left[\nabla\mu(\rho)\times v_t\right]-\int 2\nabla\mu(\rho)\cdot D(v)\cdot v_t\notag\\ &:= \frac{1}{2}\left(\int_\partial \mu(\rho)v\cdot B\cdot v+\int \mu(\rho)|\curle v|^2\right)_t +\sum_{i=1}^5K_i.\label{2.25} \end{align} For $K_1$--$K_5$, we have, using Lemma \ref{Lemma221},
\begin{equation}\label{equ3.21} \begin{aligned} K_1&=-\int_\partial c_0 \frac{\mu(\rho)}{\rho^2}(v_t^\perp \times n)\cdot B\cdot \nabla\rho\\ &=\int c_0 \frac{\mu(\rho)}{\rho^2}\curle v_t^\perp\cdot B\cdot \nabla\rho-\int c_0 v_t^\perp\cdot \left[\nabla\frac{\mu(\rho)}{\rho^2}\times (B\cdot \nabla\rho)\right]\\ &\quad-\int c_0\frac{\mu(\rho)}{\rho^2} v_t^\perp\cdot \curle(B\cdot \nabla\rho)\\ &=M'(t)-\int c_0\curle v^\perp\cdot B\cdot \left[\frac{\mu(\rho)}{\rho^2}\nabla\rho\right]_t-\int c_0 v_t^\perp\cdot \left[\nabla\frac{\mu(\rho)}{\rho^2}\times (B\cdot \nabla\rho)\right]\\ &\quad-\int c_0\frac{\mu(\rho)}{\rho^2} v_t^\perp\cdot \curle(B\cdot \nabla\rho)\\ &\leq M'(t)+C\norm{\nabla v}_{L^2}\left(\norm{\nabla\rho_t}_{L^2}+\norm{\rho_t}_{L^6}\norm{\nabla\rho}_{L^3}\right)\\ &\quad+C\norm{v_t}_{L^2}\left(\norm{\nabla\rho}_{L^6}\norm{\nabla\rho}_{L^3}+\norm{\Delta\rho}_{L^2}\right)\\ &\leq M'(t)+C_{\varepsilon_1}\left(\norm{\nabla\rho}^4_{L^3}\norm{\nabla v}_{L^2}^2+\norm{\nabla v}^2_{L^2}\right)+\varepsilon_1\norm{\nabla\rho_t}_{L^2}^2\\ &\quad+C_{\varepsilon_2}\left(\norm{\Delta\rho}^4_{L^2}\norm{\nabla\rho}^2_{L^2}+\norm{\Delta\rho}^2_{L^2}\right)+\varepsilon_2\norm{v_t}^2_{L^2}, \end{aligned} \end{equation} where \begin{equation*} M(t):=\int c_0 \frac{\mu(\rho)}{\rho^2}\curle v^\perp\cdot B\cdot \nabla\rho, \end{equation*} and \begin{equation}\label{equ3.22} \begin{aligned} K_2&=-\int_\partial \frac{1}{2}\mu(\rho)_tv\cdot B\cdot v=\int_\partial \frac{1}{2}\mu(\rho)_t (n\times v^\perp)\cdot B\cdot v\\ &=\int \frac{1}{2}\mu(\rho)_t\curle v^\perp \cdot B\cdot v-\int \frac{1}{2} v^\perp \cdot\left[ \nabla\mu(\rho)_t\times (B\cdot v)\right]-\int \frac{1}{2}\mu(\rho)_t v^\perp \cdot \curle(B\cdot v)\\ &\leq C\norm{\rho_t}_{L^6}\norm{v}_{L^3}\norm{\nabla v}_{L^2}+C\norm{v}_{L^6}\norm{v}_{L^3}\norm{\nabla\rho_t}_{L^2}+C\norm{v}_{L^3}\norm{v}_{L^6}\norm{\nabla\rho}_{L^3}\norm{\rho_t}_{L^6}\\ &\leq C_{\varepsilon_3}\left(\norm{v}_{L^3}^4\norm{\nabla 
v}_{L^2}^2+\norm{\nabla\rho}_{L^3}^4\norm{\nabla v}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)+\varepsilon_3\norm{\nabla\rho_t}_{L^2}^2 \end{aligned} \end{equation} and \begin{equation}\label{2.28} \begin{cases} |K_3|\leq C\norm{\rho_t}_{L^6}\norm{\nabla v}_{L^3}\norm{\nabla v}_{L^2}\leq C_{\varepsilon_4}\norm{\nabla v}^4_{L^2}\norm{\nabla v}_{L^2}^2+\varepsilon_4\left(\norm{v}_{H^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right),\\ |K_4|\leq C\norm{\nabla v}_{L^3}\norm{\nabla\rho}_{L^6}\norm{v_t}_{L^2}\leq C_{\varepsilon_5}\norm{\Delta\rho}^4_{L^2}\norm{\nabla v}_{L^2}^2+\varepsilon_5\left(\norm{v}_{H^2}^2+\norm{v_t}_{L^2}^2\right),\\ |K_5|\leq C\norm{\nabla v}_{L^3}\norm{\nabla\rho}_{L^6}\norm{v_t}_{L^2}\leq C_{\varepsilon_6}\norm{\Delta\rho}^4_{L^2}\norm{\nabla v}_{L^2}^2+\varepsilon_6\left(\norm{v}_{H^2}^2+\norm{v_t}_{L^2}^2\right). \end{cases} \end{equation} Combining \eqref{2.25}--\eqref{2.28}, we have \begin{equation}\label{228} \begin{aligned} -\int\dive[2\mu(\rho)D(v)]\cdot v_t &\geq \frac{1}{2}\left(\int_\partial \mu(\rho)v\cdot B\cdot v+\int \mu(\rho)|\curle v|^2\right)_t+M'(t)\\ &\quad-C_\varepsilon\left(\norm{\nabla\rho}_{L^3}^4+\norm{\Delta\rho}_{L^2}^4+\norm{v}_{L^3}^4+\norm{\nabla v}_{L^2}^4\right)\norm{\nabla v}_{L^2}^2\\ &\quad-C_\varepsilon\norm{\nabla v}^2_{L^2}-C_{\varepsilon}\left(\norm{\Delta\rho}^4_{L^2}\norm{\nabla\rho}^2_{L^2}+\norm{\Delta\rho}^2_{L^2}\right)\\ &\quad-\varepsilon\left(\norm{v}_{H^2}^2+\norm{v_t}_{L^2}^2\right). 
\end{aligned} \end{equation} Next, we turn to estimate $J_1$--$J_4$ and apply Lemma \ref{Lemma221}, that is, \begin{equation}\label{2.29} \begin{cases} |J_1|\leq C\left(\norm{\nabla\rho}_{L^6}+\norm{v}_{L^6}\right)\norm{\nabla v}_{L^3}\norm{v_t}_{L^2}\\ \quad\quad\!\leq C_{\varepsilon_1}\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\norm{\nabla v}_{L^2}^2+\varepsilon_1\left(\norm{v}_{H^2}^2+\norm{v_t}_{L^2}^2\right),\\ |J_2|\leq C\left(\norm{\nabla\rho}_{L^6}^3+\norm{\nabla\rho}_{L^6}\norm{\Delta\rho}_{L^3}+\norm{\nabla\Delta\rho}_{L^2}\right)\norm{v_t}_{L^2}\\ \quad\quad\!\leq C_{\varepsilon_2}\left(\norm{\Delta\rho}_{L^2}^4\norm{\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2\right)+\varepsilon_2\norm{v_t}_{L^2}^2,\\ |J_3|\leq C\left(\norm{\nabla\rho}_{L^6}^2\norm{v}_{L^6}+\norm{\nabla\rho}_{L^6}\norm{\nabla v}_{L^3}+\norm{\Delta\rho}_{L^3}\norm{v}_{L^6}\right)\norm{v_t}_{L^2}\\ \quad\quad\!\leq C_{\varepsilon_3}\left(\norm{\Delta\rho}_{L^2}^4\norm{\nabla v}_{L^2}^2+\norm{\nabla v}_{L^2}^4\norm{\Delta\rho}_{L^2}^2\right)\\ \quad\quad\quad\!+\varepsilon_3\left(\norm{\Delta v}_{L^2}^2+\norm{v_t}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2\right),\\ |J_4|\leq C\left(\norm{\nabla\rho}_{L^6}^3+\norm{\nabla\rho}_{L^6}\norm{\Delta\rho}_{L^3}\right)\norm{v_t}_{L^2}\\ \quad\quad\!\leq C_{\varepsilon_4}\left(\norm{\Delta\rho}_{L^2}^4\norm{\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2\right)+\varepsilon_4\norm{v_t}_{L^2}^2.\\ \end{cases} \end{equation} Now, substituting \eqref{228} and \eqref{2.29} into \eqref{2.23}, one can deduce that \begin{equation}\label{2.30} \begin{aligned} &\left(\int_\partial \mu(\rho)v\cdot B\cdot v+\int \mu(\rho)|\curle v|^2\right)_t +\nu\norm{v_t}_{L^2}^2+M'(t)\\ &\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^3}^4+\norm{\Delta\rho}_{L^2}^4+\norm{v}_{L^3}^4+\norm{\nabla v}_{L^2}^4+1\right)\norm{\nabla v}_{L^2}^2\\ 
&\quad+C_{\varepsilon}\left(\norm{\Delta\rho}^4_{L^2}\norm{\Delta\rho}^2_{L^2}+\norm{\nabla\Delta\rho}^2_{L^2}\right)+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right). \end{aligned} \end{equation} For simplicity, we rewrite \eqref{2.30} as \begin{equation}\label{327} \begin{aligned} \left(\norm{\nabla v}_{L^2}^2\right)_t +\nu\norm{v_t}_{L^2}^2+M'(t)&\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^3}^4+\norm{\Delta\rho}_{L^2}^4+\norm{v}_{L^3}^4+\norm{\nabla v}_{L^2}^4+1\right)\norm{\nabla v}_{L^2}^2\\ &\quad+C_{\varepsilon}\left(\norm{\Delta\rho}^4_{L^2}\norm{\Delta\rho}^2_{L^2}+\norm{\nabla\Delta\rho}^2_{L^2}\right)+\varepsilon\norm{v}_{H^2}^2\\ &\leq C_\varepsilon\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\rho}^2_{L^2}\right)\\ &\quad+C_{\varepsilon}\left(\norm{\nabla v}^2_{L^2}+\norm{\nabla\Delta\rho}^2_{L^2}\right)+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right), \end{aligned} \end{equation} since, by the positivity of $B$ and Lemma \ref{lemma22}, $$\int_\partial \mu(\rho)v\cdot B\cdot v\geq 0,\,\,\,\,\int \mu(\rho)|\curle v|^2\sim \norm{\nabla v}_{L^2}^2,$$ and these terms do not influence the results after applying Gr\"onwall's inequality to \eqref{2.30}. We still need to estimate $\norm{v}_{H^2}$. To this end, we convert $\eqref{equation1.18}_2$ into the form \begin{equation}\label{328} -\dive[2\mu(\rho)D(v)]+\nabla \pi=F, \end{equation} where \begin{equation}\label{F} \begin{aligned} F&:=-\rho v_t -\rho u\cdot\nabla v +c_0\dive{\left[2\mu(\rho)\nabla^2\rho^{-1}\right]}- c_0 \dive{\left(\rho v\otimes\nabla\rho^{-1}\right)}\\ &\,\,\quad -c_0^2 \dive{\left(\rho \nabla\rho^{-1}\otimes\nabla\rho^{-1}\right)}+c_0\nabla(\log\rho)_t.
\end{aligned} \end{equation} In order to use Lemma \ref{lemma26}, in view of the embedding $L^2\hookrightarrow H^{-1}$, one should estimate $\norm{F}_{L^2}$, that is, \begin{equation}\label{329} \begin{aligned} \norm{F}^2_{L^2}&\leq C\left(\norm{v_t}^2_{L^2}+\norm{\nabla\rho_t}^2_{L^2}\right)+C\left[\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\norm{\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2\right]\\ &\quad+C_{\varepsilon_1}\norm{\Delta\rho}_{L^2}^4\norm{\nabla v}_{L^2}^2+\varepsilon_1\norm{v}_{H^2}^2, \end{aligned} \end{equation} where we have used $$\norm{\nabla(\log\rho)_t}_{L^2}\leq C\left(\norm{\nabla\rho_t}_{L^2}+\norm{\rho_t}_{L^3}\norm{\nabla\rho}_{L^6}\right)\leq C\norm{\nabla\rho_t}_{L^2}$$ from Lemma \ref{Lemma221} and condition \eqref{condition2.5}. On the other hand, in this case, $\Phi:=-B\cdot(v+c_0\nabla\rho^{-1})$, where $\Phi$ is as in Lemma \ref{lemma26}. Hence, applying Poincar\'e's inequality leads to \begin{equation}\label{330} \begin{aligned} \norm{\Phi}^2_{H^1}&\leq C\left(\norm{v}_{H^1}^2+\norm{\nabla\rho}_{H^1}^2+\norm{\nabla\rho}_{L^3}^2\norm{\nabla\rho}_{L^6}^2\right)\\ &\leq C\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^4\norm{\nabla\rho}_{L^2}^2\right).
\end{aligned} \end{equation} Combining \eqref{329}--\eqref{330} and using Lemma \ref{lemma26}, condition \eqref{condition2.5} and Poincar\'e's inequality, we deduce from \eqref{328} that \begin{equation*} \begin{aligned} \norm{v}_{H^2}^2+\norm{\pi}^2_{H^1}&\leq C\left[\norm{\nabla\rho}^4_{L^6}\norm{\nabla v}^2_{L^2}+\left(1+\norm{\nabla\rho}^4_{L^6}\right)\left(\norm{F}^2_{L^2}+\norm{\Phi}^2_{H^1}\right)\right]\\ &\leq C\left(\normf{F}^2_{L^2}+\norm{\Phi}^2_{H^1}+\norm{\nabla v}^2_{L^2}\right)\\ &\leq C\left(\norm{v_t}^2_{L^2}+\norm{\nabla\rho_t}^2_{L^2}\right)+C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\norm{\Delta\rho}_{L^2}^2+C\norm{\nabla\Delta\rho}_{L^2}^2\\ &\quad+C\norm{\nabla v}_{L^2}^2+C_{\varepsilon}\norm{\Delta\rho}_{L^2}^4\norm{\nabla v}_{L^2}^2+\varepsilon\norm{v}_{H^2}^2, \end{aligned} \end{equation*} which gives \begin{equation}\label{331} \begin{aligned} \norm{v}_{H^2}^2+\norm{\pi}^2_{H^1}&\leq C\left(\norm{v_t}^2_{L^2}+\norm{\nabla\rho_t}^2_{L^2}\right)+C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\ &\quad+C\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right). \end{aligned} \end{equation} Combining \eqref{327} and \eqref{331}, one has \begin{equation}\label{332} \begin{aligned} \left(\norm{\nabla v}_{L^2}^2\right)_t +\nu\norm{v_t}_{L^2}^2+\nu\norm{v}_{H^2}^2+M'(t)&\leq C_\varepsilon\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\rho}^2_{L^2}\right)\\ &\quad+C_\varepsilon\left(\norm{\nabla\Delta\rho}^2_{L^2}+\norm{\nabla v}^2_{L^2}\right)+\varepsilon\norm{\nabla\rho_t}^2_{L^2}. \end{aligned} \end{equation} Finally, we estimate $\nabla\rho_t$.
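Before doing so, let us record the elementary Young-type inequality used repeatedly above and below to produce the $\varepsilon$-terms: for any $a,b\geq 0$ and $\varepsilon>0$,
\begin{equation*}
ab\leq \varepsilon b^2+C_\varepsilon a^2,\qquad C_\varepsilon:=\frac{1}{4\varepsilon},
\end{equation*}
which allows every term carrying the factor $\varepsilon$ to be absorbed into the corresponding left-hand side once $\varepsilon$ is chosen suitably small.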
Applying $\partial_t$ to both sides of $\eqref{equation1.18}_1$, multiplying the result by $\rho_t$ and integrating over $\Omega$ yield that \begin{align} \left(\int\frac{1}{2}\abs{\rho_t}^2\right)_t +\int c_0\rho^{-1}\abs{\nabla\rho_t}^2 &=-\int (v_t\cdot\nabla\rho)\rho_t+ \int 2c_0 \rho^{-3}\abs{\rho_t}^2\abs{\nabla\rho}^2\notag\\ &\quad-\int 2c_0\rho^{-1}\left(\nabla\rho\cdot\nabla\rho_t\right)\rho_t-\int c_0\rho^{-1}\abs{\rho_t}^2\Delta\rho\notag\\ &:= \sum_{i=1}^4L_i.\label{33.5} \end{align} It follows from Lemma \ref{Lemma221} that \begin{equation}\label{33.6} \begin{cases} |L_1|\leq \norm{v_t}_{L^2}\norm{\nabla\rho}_{L^6}\norm{\rho_t}_{L^3}\leq C_{\varepsilon_1}\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+\varepsilon_1\left(\norm{v_t}_{L^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right),\\ |L_2|\leq \norm{\nabla\rho}_{L^2}\norm{\nabla\rho}_{L^6}\norm{\rho_t}_{L^3}\leq C_{\varepsilon_2}\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+\varepsilon_2\left(\norm{\nabla\rho}_{L^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right),\\ |L_3|\leq \norm{\nabla\rho_t}_{L^2}\norm{\nabla\rho}_{L^6}\norm{\rho_t}_{L^3}\leq C_{\varepsilon_3}\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+\varepsilon_3\norm{\nabla\rho_t}_{L^2}^2,\\ |L_4|\leq \norm{\Delta\rho}_{L^2}\norm{\rho_t}_{L^6}\norm{\rho_t}_{L^3}\leq C_{\varepsilon_4}\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+\varepsilon_4\norm{\nabla\rho_t}_{L^2}^2. \end{cases} \end{equation} Combining \eqref{33.5} and \eqref{33.6} leads to \begin{equation} \left(\norm{\rho_t}_{L^2}^2\right)_t+\nu\norm{\nabla\rho_t}_{L^2}^2\leq C_\varepsilon\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+\varepsilon\left(\norm{v_t}_{L^2}^2+\norm{\nabla\rho}_{L^2}^2\right).
\end{equation} This, along with \eqref{332}, yields \begin{equation}\label{3336} \begin{aligned} &\left(\norm{\nabla v}_{L^2}^2+\norm{\rho_t}_{L^2}^2\right)_t +\nu\norm{v_t}_{L^2}^2+\nu\norm{v}_{H^2}^2+\nu\norm{\nabla\rho_t}_{L^2}^2+M'(t)\\ &\leq C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\cP(t)+C\left(\norm{\nabla\Delta\rho}^2_{L^2}+\norm{\nabla v}^2_{L^2}\right). \end{aligned} \end{equation} On the other hand, combining \eqref{318} and \eqref{331} leads to \begin{equation} \begin{aligned} \left(\norm{\Delta\rho}_{L^2}^2\right)_t+\nu \norm{\nabla\Delta\rho}_{L^2}^2&\leq C_\varepsilon\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\ &\quad+\varepsilon\left(\norm{\nabla v}_{L^2}^2+\norm{v_t}_{L^2}^2+\norm{\nabla \rho_t}_{L^2}^2\right), \end{aligned} \end{equation} that is, \begin{equation}\label{333} \begin{aligned} \frac{\nu}{2} \norm{\nabla\Delta\rho}_{L^2}^2&\leq -\frac{\nu}{2} \norm{\nabla\Delta\rho}_{L^2}^2-\left(\norm{\Delta\rho}_{L^2}^2\right)_t\\ &\quad+ C_\varepsilon\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\ &\quad+\varepsilon\left(\norm{\nabla v}_{L^2}^2+\norm{v_t}_{L^2}^2+\norm{\nabla \rho_t}_{L^2}^2\right).
\end{aligned} \end{equation} Thus, substituting \eqref{333} into \eqref{3336} and choosing $\varepsilon$ small enough, we obtain \begin{equation*} \begin{aligned} &\left(\frac{2C}{\nu}\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2+\norm{\rho_t}_{L^2}^2\right)_t +\frac{\nu}{2}\left(\frac{2C}{\nu}\norm{\nabla\Delta\rho}_{L^2}^2+\norm{v_t}_{L^2}^2+\norm{\nabla\rho_t}_{L^2}^2\right)+M'(t)\\ &\leq C\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\cP(t)+C\norm{\nabla v}_{L^2}^2, \end{aligned} \end{equation*} or, equivalently, using the definitions of $\cP(t)$ and $\cQ(t)$, \begin{equation*} \begin{aligned} \cP'(t)+\nu\cQ(t)+M'(t)\leq C\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\cP(t)+C\norm{\nabla v}_{L^2}^2. \end{aligned} \end{equation*} Then, combining this with Lemma \ref{Lemma2.2} and using Gr$\mathrm{\ddot{o}}$nwall's inequality gives \begin{equation}\label{334} \sup_{t\in[0,T]}\cP(t)+\int_0^T\cQ(t)\,dt\leq C\norm{\nabla u_0}_{L^2}^2, \end{equation} where we have used the control \begin{equation*} \begin{aligned} \norm{\rho_{t,0}}_{L^2}^2&\leq C\left(\norm{\nabla\rho_0}_{L^3}^2\norm{v_0}_{L^6}^2+\norm{\nabla\rho_0}_{L^3}^2\norm{\nabla\rho_0}_{L^6}^2+\norm{\Delta\rho_0}_{L^2}^2\right)\\ &\leq C\left(\norm{\Delta\rho_0}_{L^2}^2\norm{\nabla v_0}_{L^2}^2+\norm{\Delta\rho_0}_{L^2}^4+\norm{\Delta\rho_0}_{L^2}^2\right)\\ &\leq C\left(\norm{\nabla u_0}_{L^2}^4+\norm{\nabla u_0}_{L^2}^2\right)\leq C\norm{\nabla u_0}_{L^2}^2, \end{aligned} \end{equation*} and the following estimates \begin{equation}\label{341} \begin{aligned} \sup_{t\in[0,T]}M(t)&\leq \varepsilon \sup_{t\in[0,T]}\norm{\nabla v}_{L^2}^2+C_\varepsilon\sup_{t\in[0,T]}\norm{\nabla\rho}_{L^2}^2\\ &\leq \varepsilon \sup_{t\in[0,T]}\cP(t)+C_\varepsilon\norm{\nabla u_0}_{L^2}^2, \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \abs{e^{\int_0^T h(t)\,dt}\int_0^T M'(t)e^{-\int_0^t h(s)\,ds}\,dt}&\leq \varepsilon\sup_{t\in[0,T]}\norm{\nabla
v}_{L^2}^2+C_\varepsilon\sup_{t\in[0,T]}\norm{\nabla \rho}_{L^2}^2\\ &\leq \varepsilon \sup_{t\in[0,T]}\cP(t)+C_\varepsilon\norm{\nabla u_0}_{L^2}^2, \end{aligned} \end{equation} where $h(t)$ is an integrable function on $[0,\infty)$. Finally, plugging \eqref{334} into \eqref{331}, we have \begin{equation*} \int_0^T\norm{\pi}_{H^1}^2\,dt\leq C\norm{\nabla u_0}_{L^2}^2, \end{equation*} which, together with \eqref{334}, completes the proof of \eqref{2.12}. \end{proof} Now, we turn back to the proof of Proposition \ref{prop3.1}. \begin{proof}[Proof of Proposition \ref{prop3.1}] From Lemma \ref{lemma33} and the Sobolev embedding theorem, we have \begin{equation*} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^6}\leq C_1\sup_{t\in[0,T]}\norm{\Delta\rho}_{L^2}\leq C_1C\norm{\nabla u_0}_{L^2}, \end{equation*} where $C$ is as in Lemma \ref{lemma33} and $C_1$ is the Sobolev embedding constant. Thus, if we choose \begin{equation}\label{335} \norm{\nabla u_0}_{L^2}\leq \delta_1:= (C_1C)^{-1}, \end{equation} we can derive the first part of \eqref{32}. For the rest of \eqref{32}, using Lemma \ref{lemma33} again leads to \begin{equation*} \begin{aligned} \int_0^T\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\,dt&\leq \left(\sup_{t\in[0,T]}\norm{\Delta\rho}_{L^2}^2+\sup_{t\in[0,T]}\norm{\nabla v}_{L^2}^2\right)\int_0^T \left(\norm{\Delta\rho}^2_{L^2}+\norm{\nabla v}^2_{L^2}\right)\,dt\\ &\leq \lambda^{-1} C^2\norm{\nabla u_0}_{L^2}^4\\ &\leq \norm{\nabla u_0}_{L^2}^2 \end{aligned} \end{equation*} provided \begin{equation}\label{336} \norm{\nabla u_0}_{L^2}\leq \delta_2:= \lambda^{1/2} C^{-1}, \end{equation} where $C$ is as in Lemma \ref{lemma33}. It follows from \eqref{335} and \eqref{336} that one should choose $\delta:=\min\{1,\delta_1,\delta_2\}$. Of course, such $\delta$ depends only on $\Omega$, $c_0$, $\alpha$ and $\beta$ and, therefore, we have established \eqref{32}.
\end{proof} \subsubsection{Case \eqref{equation1.7}}\label{subsection3.2} Similarly to the preceding subsection, we are going to prove the following proposition. \begin{Proposition}\label{prop3.4} There exists a positive constant $\hat\delta$ depending on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that, if $\norm{\nabla u_0}_{L^2}\leq \hat\delta$ and \begin{equation}\label{3.39} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^6}\leq 2,\quad\int_0^T\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\,dt\leq 2\norm{\nabla u_0}^2_{L^2}, \end{equation} then, one has \begin{equation}\label{3.40} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^6}\leq 1,\quad\int_0^T\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\,dt\leq \norm{\nabla u_0}^2_{L^2}. \end{equation} \end{Proposition} One should notice that the norms of $v$ and $u$ are equivalent in the following sense under condition \eqref{3.39}, \begin{equation}\label{342} \begin{aligned} \norm{v}_{L^p}+\norm{\nabla\rho}_{L^p}&\sim \norm{u}_{L^p}+\norm{\nabla\rho}_{L^p},\\ \norm{\nabla v}_{L^p}+\norm{\Delta\rho}_{L^p}+\norm{\nabla\rho}_{L^{2p}}^2&\sim \norm{\nabla u}_{L^p}+\norm{\Delta\rho}_{L^p}+\norm{\nabla\rho}_{L^{2p}}^2,\\ \norm{\Delta v}_{L^2}+\norm{\nabla\Delta\rho}_{L^2}&\sim\norm{\Delta u}_{L^2}+\norm{\nabla\Delta\rho}_{L^2}, \end{aligned} \end{equation} where $\eqref{342}_3$ is deduced from \begin{equation*} \begin{aligned} \norm{\Delta v}_{L^2}&\leq C\left(\norm{\Delta u}_{L^2}+\norm{\nabla\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^6}\normf{\nabla^2\rho}_{L^3}+\norm{\nabla\rho}_{L^6}^3\right)\\ &\leq C\left(\norm{\Delta u}_{L^2}+\norm{\nabla\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^6}\norm{\nabla\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^6}^2\norm{\nabla\Delta\rho}_{L^2}\right)\\ &\leq C\left(\norm{\Delta u}_{L^2}+\norm{\nabla\Delta\rho}_{L^2}\right) \end{aligned} \end{equation*} and vice versa. Now, we come to the proof. We can easily derive a lemma analogous to Lemma \ref{Lemma2.2}, which is given as follows.
\begin{Lemma}\label{lemma3.5} Let $(\rho,u,\pi)$ be a smooth solution of \eqref{equation1.1}, then there exists a positive constant $C$ depending only on $\Omega$, $c_0$, $\alpha$ and $\beta$ such that, for all $T\in (0,\infty)$, \begin{equation}\label{343} \sup_{t\in[0,T]}\norm{\rho-(\rho_0)_\Omega}_{L^2}^2+\int_0^T\norm{\nabla\rho}_{L^2}^2\,dt\leq C\norm{\nabla u_0}_{L^2}^2. \end{equation} Furthermore, if $\norm{\nabla u_0}_{L^2}\leq 1$ and the condition \eqref{3.39} holds, one has \begin{gather} \sup_{t\in[0,T]}\norm{\nabla\rho}_{L^2}^2+\int_0^T\left(\norm{\nabla\rho}_{L^3}^4+ \norm{\Delta\rho}_{L^2}^2\right)\,dt\leq C\norm{\nabla u_0}^2_{L^2},\label{344}\\ \sup_{t\in[0,T]}\cF(t)+\int_0^T\left(\cG(t)+\norm{\pi}_{H^1}^2\right)\,dt\leq C\norm{\nabla u_0}^2_{L^2},\label{3347} \end{gather} where \begin{equation*} \begin{gathered} \cF(t):=\normf{\nabla u}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\rho_t}_{L^2}^2,\\ \cG(t):= \norm{\nabla\Delta\rho}_{L^2}^2+\norm{u_t}_{L^2}^2+\norm{\Delta u}_{L^2}^2+\norm{\nabla\rho_t}_{L^2}^2. \end{gathered} \end{equation*} \end{Lemma} \begin{proof} Estimate \eqref{343} has been proved in Lemma \ref{Lemma2.2}, and \eqref{344} is a trivial consequence of \eqref{33.8}. Indeed, using \eqref{342}, condition \eqref{3.39}, Lemma \ref{Lemma221} and Poincar\'e's inequality leads to $$ \begin{aligned} \norm{\nabla v}_{L^2}^4&\leq C\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4+\norm{\nabla\rho}_{L^3}^4\norm{\nabla\rho}_{L^6}^4\right)\\ &\leq C\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right), \end{aligned} $$ and, thus, \begin{equation*} \left(\norm{\nabla\rho}_{L^2}^2\right)_t+\nu\norm{\Delta\rho}_{L^2}^2\leq C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla u}^4_{L^2}\right)\norm{\nabla\rho}_{L^2}^2. \end{equation*} To prove \eqref{3347}, we first derive the lower-order estimate of $u$.
Multiplying both sides of $\eqref{equation1.1}_2$ by $w$ and integrating over $\Omega$, one has \begin{equation}\label{346} \begin{aligned} \left(\int \frac{1}{2}\rho|u|^2\right)_t+\int 2\mu(\rho)|D(u)|^2&=\int \rho u_t\cdot Q +\int \rho u\cdot \nabla u\cdot Q-\int \dive[2\mu(\rho)D(u)]\cdot Q\\ &=\int \rho u_t\cdot Q +\int \rho u\cdot \nabla u\cdot Q+\int 2\mu(\rho)D(u)\cdot \nabla Q\\ &:=\sum_{i=1}^3 M_i, \end{aligned} \end{equation} where, from Lemma \ref{Lemma221}, \begin{equation}\label{347} \begin{cases} |M_1|\leq C\norm{u_t}_{L^2}\norm{Q}_{L^2}\leq C_{\varepsilon_1}\norm{\nabla\rho}_{L^2}^2+\varepsilon_1\norm{u_t}_{L^2}^2,\\ |M_2|\leq C\norm{u}_{L^3}\norm{\nabla u}_{L^2}\norm{\nabla\rho}_{L^6}\leq C_{\varepsilon_2}\norm{\Delta\rho}_{L^2}^4\norm{u}_{L^2}^2+\varepsilon_2\norm{\nabla u}_{L^2}^2,\\ |M_3|\leq C\norm{\nabla u}_{L^2}\norm{\nabla Q}_{L^2}\leq C_{\varepsilon_3}\norm{\Delta\rho}_{L^2}^2+\varepsilon_3\norm{\nabla u}_{L^2}^2. \end{cases} \end{equation} Here, we have used the following control \begin{equation*} \begin{aligned} \norm{\nabla Q}_{L^2}&\leq C\left(\norm{\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^4}^2\right)\leq C\left(\norm{\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^3}\norm{\nabla\rho}_{L^6}\right)\leq C\norm{\Delta\rho}_{L^2}. \end{aligned} \end{equation*} Thus, substituting \eqref{347} into \eqref{346} gives \begin{equation}\label{348} \left(\norm{\sqrt\rho u}_{L^2}^2\right)_t+\nu\norm{\nabla u}_{L^2}^2\leq C_\varepsilon\norm{\Delta\rho}_{L^2}^4\norm{\sqrt\rho u}_{L^2}^2+C_\varepsilon\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)+\varepsilon\norm{u_t}_{L^2}^2.
\end{equation} Multiplying both sides of $\eqref{equation1.1}_2$ by $w_t$ and integrating over $\Omega$, one has \begin{equation}\label{349} \begin{aligned} \int\rho|u_t|^2+\left(\int\mu(\rho)|D(u)|^2\right)_t&=-\int\rho u_t\cdot Q_t-\int\rho u\cdot \nabla u\cdot w_t \\ &\quad+\int\mu(\rho)_t|D(u)|^2-\int\dive[2\mu(\rho)D(u)]\cdot Q_t\\ &:=\sum_{i=1}^4N_i, \end{aligned} \end{equation} where, using Lemma \ref{Lemma221}, \begin{equation}\label{350} \begin{cases} |N_1|&\!\!\!\!\leq C\norm{u_t}_{L^2}\norm{Q_t}_{L^2}\leq C_{\varepsilon_1}\norm{\nabla\rho_t}_{L^2}^2+\varepsilon_1\norm{u_t}_{L^2}^2,\\ |N_2|&\!\!\!\!\leq C\norm{u}_{L^6}\norm{\nabla u}_{L^3}\norm{w_t}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_2}\norm{\nabla u}_{L^2}^4\norm{\nabla u}_{L^2}^2+\varepsilon_2\left(\norm{\nabla\rho_t}_{L^2}^2+\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2\right)\\ &\!\!\!\!\quad+\varepsilon_2\left(\norm{\Delta u}_{L^2}^2+\norm{u_t}_{L^2}^2\right),\\ |N_3|&\!\!\!\!\leq C\norm{\rho_t}_{L^3}\norm{\nabla u}_{L^2}\norm{\nabla u}_{L^6}\leq C_{\varepsilon_3}\norm{\nabla u}_{L^2}^4\norm{\rho_t}_{L^2}^2+C\norm{\nabla\rho_t}_{L^2}^2+\varepsilon_3\norm{\Delta u}_{L^2}^2,\\ |N_4|&\!\!\!\!\leq C\left(\norm{\nabla\rho}_{L^6}\norm{\nabla u}_{L^3}+\norm{\Delta u}_{L^2}\right)\norm{Q_t}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_4}\norm{\Delta\rho}_{L^2}^4\norm{\nabla u}_{L^2}^2+C_{\varepsilon_4}\norm{\nabla\rho_t}_{L^2}^2+\varepsilon_4\norm{\Delta u}_{L^2}^2, \end{cases} \end{equation} where we have used $$\norm{Q_t}_{L^2}\leq C\left(\norm{\nabla\rho_t}_{L^2}+\norm{\nabla\rho}_{L^6}\norm{\rho_t}_{L^3}\right)\leq C\norm{\nabla\rho_t}_{L^2}.$$ Combining \eqref{349} and \eqref{350} leads to \begin{equation}\label{351} \begin{aligned} \left(\normf{\sqrt{\mu(\rho)}|D(u)|}_{L^2}^2\right)_t+\nu\norm{u_t}_{L^2}^2&\leq C_\varepsilon\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta \rho}_{L^2}^4\right)\left(\norm{\nabla u}_{L^2}^2+\norm{\rho_t}_{L^2}^2\right)\\
&\quad+C_\varepsilon\norm{\nabla\rho_t}_{L^2}^2+\varepsilon\norm{\Delta u}_{L^2}^2. \end{aligned} \end{equation} To get $\norm{\Delta u}_{L^2}$, we follow the proof of \eqref{328}--\eqref{331} and use Lemma \ref{lemma26} $(2)$ with $\Phi=-c_0\nabla\rho^{-1}$ and condition \eqref{3.39} to deduce \begin{align*} \norm{v}^2_{H^2}+\norm{\nabla \pi}^2_{L^2}&\leq C\left[\norm{\nabla\rho}^4_{L^6}\norm{\nabla v}^2_{L^2}+\left(1+\norm{\nabla\rho}^4_{L^6}\right)\left(\norm{F}_{L^2}^2+\norm{\Phi}_{H^2}^2\right)\right]\\ &\leq C\left(\normf{F}^2_{L^2}+\norm{\Phi}^2_{H^2}+\norm{\Delta\rho}^4_{L^2}\norm{\nabla v}^2_{L^2}\right)\\ &\leq C\left(\norm{v_t}^2_{L^2}+\norm{\nabla\rho_t}^2_{L^2}\right)+C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla v}_{L^2}^4\right)\norm{\Delta\rho}_{L^2}^2+C\norm{\nabla\Delta\rho}_{L^2}^2\\ &\quad+C\norm{\nabla v}_{L^2}^2+C_{\varepsilon}\left(\norm{\nabla v}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\norm{\nabla v}_{L^2}^2+\varepsilon\norm{v}_{H^2}^2, \end{align*} where $F$ is as in \eqref{F}--\eqref{329}.
Thus, we still have \eqref{331} and, if we convert $v$ into $u$ and $\rho$ by \eqref{342} and condition \eqref{3.39}, we can derive the bounds for $\Delta u$, that is, \begin{equation}\label{352} \begin{aligned} \norm{\Delta u}_{L^2}^2+\norm{\pi}^2_{H^1}&\leq C\left(\norm{\Delta\rho}_{L^2}^4+\norm{\nabla u}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla u}_{L^2}^2\right)\\ &\quad+C\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla u}_{L^2}^2+\norm{u_t}^2_{L^2}+\norm{\nabla\rho_t}^2_{L^2}\right). \end{aligned} \end{equation} Combining \eqref{348}, \eqref{351} and \eqref{352} and choosing $\varepsilon$ small enough, one has \begin{equation}\label{3354} \begin{aligned} &\left(\normf{\nabla u}_{L^2}^2\right)_t+\frac{\nu}{2}\norm{\nabla u}_{L^2}^2+\frac{\varepsilon}{2C}\norm{\Delta u}_{L^2}^2+\frac{\nu}{2}\norm{u_t}_{L^2}^2\\ &\leq C_\varepsilon\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta \rho}_{L^2}^4\right)\cF(t)+C_\varepsilon\norm{\nabla\rho_t}_{L^2}^2+\varepsilon\norm{\nabla\Delta\rho}_{L^2}^2+C\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right), \end{aligned} \end{equation} where we have used $$\norm{\sqrt\rho u}_{L^2}+\normf{\sqrt{\mu(\rho)}|D(u)|}_{L^2}\sim \norm{\nabla u}_{L^2}.$$ Similarly, converting $v$ into $u$ and $\rho$, we can also obtain an analogous estimate from \eqref{318}, that is, \begin{equation*} \begin{aligned} \left(\norm{\Delta\rho}_{L^2}^2\right)_t+\nu \norm{\nabla\Delta\rho}_{L^2}^2&\leq C\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\norm{\Delta\rho}_{L^2}^2\\ &\quad+C_{\varepsilon_1}\norm{\Delta\rho}_{L^2}^4\norm{\nabla u}_{L^2}^2+\varepsilon_1\norm{\Delta u}_{L^2}^2, \end{aligned} \end{equation*} which, combining with \eqref{352}, gives \begin{equation}\label{355} \begin{aligned} \left(\norm{\Delta\rho}_{L^2}^2\right)_t+\nu \norm{\nabla\Delta\rho}_{L^2}^2&\leq C_{\varepsilon_1}\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\nabla u}_{L^2}^2\right)\\
&\quad+{\varepsilon_1}\left(\norm{\nabla u}_{L^2}^2+\norm{\nabla \rho_t}_{L^2}^2\right). \end{aligned} \end{equation} Combining \eqref{3354}--\eqref{355} and taking $\varepsilon,\,\varepsilon_1$ suitably small yield that, for some $\nu>0$, \begin{equation}\label{3356} \begin{aligned} &\left(\normf{\nabla u}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)_t+\nu\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla u}_{L^2}^2+\norm{\Delta u}_{L^2}^2+\norm{u_t}_{L^2}^2\right)\\ &\leq C\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta \rho}_{L^2}^4\right)\cF(t)+C\norm{\nabla\rho_t}_{L^2}^2+C\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right). \end{aligned} \end{equation} On the other hand, from \eqref{33.6}, we can deduce similarly that \begin{equation}\label{356} \left(\norm{\rho_t}_{L^2}^2\right)_t+\nu\norm{\nabla\rho_t}_{L^2}^2\leq C_{\varepsilon_2}\norm{\Delta\rho}_{L^2}^4\norm{\rho_t}_{L^2}^2+C_{\varepsilon_2}\norm{\nabla\rho}_{L^2}^2+\varepsilon_2\norm{u_t}_{L^2}^2. \end{equation} Then, multiplying \eqref{356} by $2C$ and plugging it into \eqref{3356}, choosing $\varepsilon_2$ sufficiently small and using Poincar\'e's inequality, we have, for some positive constant $\nu$, \begin{equation}\label{3361} \cF'(t)+\nu\cG(t)\leq C\left(\norm{\nabla u}_{L^2}^4+\norm{\Delta\rho}_{L^2}^4\right)\cF(t)+C\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right), \end{equation} where we have used the following equivalent norms for convenience, \begin{equation*} \begin{gathered} \cF(t)\sim \normf{\nabla u}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+2C\norm{\rho_t}_{L^2}^2,\\ \cG(t)\sim \norm{\nabla\Delta\rho}_{L^2}^2+\norm{u_t}_{L^2}^2+\norm{\Delta u}_{L^2}^2+2C\norm{\nabla\rho_t}_{L^2}^2 \end{gathered} \end{equation*} and these equivalences do not have an influence on the final result after applying Gr$\mathrm{\ddot{o}}$nwall's inequality.
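For the reader's convenience, the version of Gr$\mathrm{\ddot{o}}$nwall's inequality invoked here and below is the standard differential form with a source term: if $y,\,f,\,g\geq 0$, $h\in L^1(0,T)$ and
\begin{equation*}
y'(t)+g(t)\leq h(t)y(t)+f(t)\quad\text{on }[0,T],
\end{equation*}
then
\begin{equation*}
\sup_{t\in[0,T]}y(t)+\int_0^T g(t)\,dt\leq C\left(y(0)+\int_0^T f(t)\,dt\right),
\end{equation*}
where $C$ depends only on $\norm{h}_{L^1(0,T)}$.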
Thus, we get the higher-order estimates for $(\rho, u)$ by using Gr$\mathrm{\ddot{o}}$nwall's inequality and \eqref{343}--\eqref{344}, and the estimate of $\pi$ can be obtained from \eqref{352}. Consequently, we obtain the estimate \eqref{3347} and finish the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop3.4}] The proof of Proposition \ref{prop3.4} is exactly the same as that of Proposition \ref{prop3.1} and, thus, we omit it and leave the details to the reader. \end{proof} \subsection{Proof of Theorem \ref{Theorem1.1}}\label{P12} With the uniform bounds at hand, the proof is rather simple. We first prove the case \eqref{equation1.6}. Using Lemma \ref{local}, there exists a unique strong solution $(\rho,u)$ of \eqref{equation1.1} on $\Omega\times(0,T_1)$ with initial data $(\rho_0,u_0)$ satisfying the boundary condition \eqref{equation1.6}, for some positive time $T_1$. Then, one may use the a priori estimates, Proposition \ref{prop3.1} and Lemmas \ref{32}--\ref{lemma33} to extend the strong solution $(\rho,u)$ globally in time. Indeed, if $T_1<\infty$ is the maximal time for existence, then using the uniform bounds, we have \begin{equation} (\rho,u)(x,T_1):=\lim_{t\to T_1^-}(\rho,u)(x,t)\text{ in the sense of }H^2\times H^1 \end{equation} satisfying the conditions imposed on the initial data, that is, $\alpha\leq\rho(T_1)\leq \beta$ and $u(T_1)\in H^1_\omega$, at the time $T_1$. Furthermore, it is easy to check that $(\rho,u)(x,T_1)$ satisfies the compatibility condition \eqref{equation1.3}. Therefore, we can take $(\rho,u)(x,T_1)$ as the initial data and apply Lemma \ref{local} to extend the strong solution beyond $T_1$. This contradicts the maximality of $T_1$ and, hence, we finish the proof of Theorem \ref{Theorem1.1} for the case \eqref{equation1.6}.
However, for $(\rho,u)$ satisfying \eqref{equation1.7}, we can use Lemma \ref{local} and Remark \ref{local2} to extend $(\rho,u)$ on $\Omega\times(0,T_1)$ to a global one for every fixed $\epsilon\in (0,1]$. Then, using the a priori estimates, Proposition \ref{prop3.4} and Lemma \ref{lemma3.5}, we can get uniform bounds for $(\rho^\epsilon,u^\epsilon,\pi^\epsilon)$, for all $\epsilon\in (0,1]$. More precisely, one has, as $\epsilon\to 0^+$, \begin{equation}\label{equation363} \begin{cases} \rho^\epsilon\wsconverge \rho\quad \text{in }C([0,T];H^2)\cap L^2(0,T;H^3),\\ \rho_t^\epsilon\wsconverge \rho_t\quad \text{in }C([0,T];L^2)\cap L^2(0,T;H^1),\\ u^\epsilon\wsconverge u\quad \text{in }C([0,T];H^1)\cap L^2(0,T;H^2),\\ u_t^\epsilon\wsconverge u_t\quad \text{in }C([0,T];H^1)\cap L^2(0,T;L^2),\\ \pi^\epsilon\wconverge \pi\quad \text{in } L^2(0,T;H^1).\\ \end{cases} \end{equation} Then, after applying Lemma \ref{lemma221}, we may derive that \begin{equation}\label{equation364} \begin{cases} \rho^\epsilon\longrightarrow\rho \text{ uniformly for all }(x,t)\in \overline\Omega\times[0,T],\\ \rho^\epsilon\sconverge \rho \quad\text{in }C([0,T];H^2),\\ u^\epsilon\sconverge u\quad\text{in }C([0,T];H^1). \end{cases} \end{equation} Estimates \eqref{equation363} and \eqref{equation364} are enough to let $\epsilon\to 0^+$ and recover the original system \eqref{equation1.1}. The uniqueness can be obtained by a method similar to that in \cite{zjw}. \section{Proof of Theorem \ref{Theorem1.3}}\label{section4} We first prove the blowup criterion. Throughout this section, we let $(\rho,u,\pi)$ be a strong solution described in Theorem \ref{Theorem1.3} and $\tilde C$ be a positive generic constant depending on $c_0$, $\alpha$, $\beta$, $T^*$, $M_0$ and $\norm{u_0}_{H^1}$.
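Let us also note that, consistently with the Young exponents appearing in the estimates below, the pair $(r,s)$ in \eqref{serrin} is of the standard scaling-invariant Serrin form
\begin{equation*}
\frac{2}{s}+\frac{3}{r}=1,\qquad 3<r\leq \infty;
\end{equation*}
the precise admissible range is the one stated in Theorem \ref{Theorem1.3}.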
Suppose that \eqref{serrin} were false, that is, for some $r$ and $s$, \begin{equation}\label{4.1} \lim_{T\to T^*}\norm{u}_{L^s(0,T;L^r)}\leq M_0<\infty, \end{equation} or, equivalently, \begin{equation*} \lim_{T\to T^*}\left(\norm{v}_{L^s(0,T;L^r)}+\norm{\nabla\rho}_{L^s(0,T;L^r)}\right)\leq M_0. \end{equation*} We want to show that the following estimate holds. \begin{Proposition}\label{prop4.1} Under the above condition, one has, for all $T\in[0,T^*)$, \begin{equation} \sup_{t\in[0,T]}\left(\norm{\rho_t}_{L^2}^2+\norm{\rho}^2_{H^2}+\norm{u}^2_{H^1}\right)+\int_0^T\left(\norm{\rho_t}_{H^1}^2+\norm{\rho}_{H^3}^2+\norm{\nabla u}_{H^1}^2\right)\,dt\leq \tilde C. \end{equation} \end{Proposition} The proof of Proposition \ref{prop4.1} will be divided into the following two parts. \subsection{Case for $(\rho,u)$ satisfying \eqref{equation1.6}}\label{11} The first lemma is part of Lemma \ref{Lemma2.2}; we state it here for convenience. \begin{Lemma}\label{lemma4.2} The following bounds hold for condition \eqref{equation1.6} and for all $T\in[0,T^*)$, that is, \begin{equation} \alpha\leq\rho\leq\beta,\quad\sup_{t\in[0,T]}\norm{\rho-(\rho_0)_\Omega}_{L^2}^2+\nu\int_0^T\norm{\nabla\rho}_{L^2}^2\,dt\leq \norm{\rho_0-(\rho_0)_\Omega}_{L^2}^2. \end{equation} \end{Lemma} Next, we give the lower-order bounds for $(\log\rho,v)$ as follows. \begin{Lemma}\label{lemma44} Suppose that \eqref{4.1} holds and $(\rho,u)$ satisfies \eqref{equation1.6}, then one has \begin{equation}\label{equation4.4} \sup_{t\in[0,T]}\left(\norm{\nabla\log\rho}^2_{L^2}+\norm{v}^2_{L^2}\right)+\int_0^T\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\,dt\leq \tilde C.
\end{equation} \end{Lemma} \begin{proof} We first change \eqref{equation1.18} to the form \begin{equation}\label{log} (\log\rho)_t +v\cdot\nabla\log\rho - c_0\rho^{-1}\Delta\log\rho=0, \end{equation} then, multiplying both sides of \eqref{log} by $-\Delta\log\rho$, integrating over $\Omega$ and using Lemma \ref{Lemma221}, we obtain \begin{equation}\label{44.6} \begin{aligned} &\left(\frac{1}{2}\int |\nabla\log\rho|^2\right)_t+\int c_0\rho^{-1}|\Delta\log\rho|^2\\ &=\int (v\cdot \nabla\log\rho)\Delta\log\rho\\ &\leq \norm{\nabla\log\rho}_{L^r}\norm{v}_{L^{\frac{2r}{r-2}}}\norm{\Delta\log\rho}_{L^2}\\ &\leq C_\varepsilon\norm{\nabla\log\rho}_{L^r}^2\norm{v}_{L^2}^{\frac{2r-6}{r}}\norm{\nabla v}_{L^2}^{\frac{6}{r}}+\varepsilon\norm{\Delta\log\rho}_{L^2}^2\\ &\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{v}^2_{L^2}+\varepsilon\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right), \end{aligned} \end{equation} that is, \begin{equation}\label{equation4.6} \left(\norm{\nabla\log\rho}_{L^2}^2\right)_t+\nu\norm{\Delta\log\rho}_{L^2}^2\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{v}^2_{L^2}+\varepsilon\norm{\nabla v}_{L^2}^2.
\end{equation} To estimate the remaining part of \eqref{equation4.4}, it follows from \eqref{218}--\eqref{219} that \begin{equation}\label{equation4.8} \begin{aligned} \left(\norm{\sqrt{\rho}v}_{L^2}^2\right)_t+\nu\norm{\nabla v}_{L^2}^2&\leq C\left(\norm{\nabla\log\rho}_{H^1}\norm{\nabla v}_{L^2}+\norm{\nabla\log\rho}_{L^r}\norm{v}_{L^{\frac{2r}{r-2}}}\norm{\nabla v}_{L^2}\right)\\ &\quad+\sum_{i=1}^4H_i. \end{aligned} \end{equation} For $H_1$--$H_4$, using Lemma \ref{Lemma221}, we have, from \eqref{220}--\eqref{221}, \begin{equation}\label{equation4.9} \begin{cases} |H_1|&\!\!\!\!\leq C\left(\norm{\Delta\log\rho}_{L^2}+\norm{\nabla\log\rho}_{L^r}\norm{\nabla\log\rho}_{L^{\frac{2r}{r-2}}}\right)\norm{\nabla v}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_1}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla\log\rho}_{L^2}^2+\varepsilon_1\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right),\\ |H_2|&\!\!\!\!\leq C\norm{\nabla\log\rho}_{L^r}\norm{v}_{L^{\frac{2r}{r-2}}}\norm{\nabla v}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_2}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{v}_{L^2}^2+\varepsilon_2\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right),\\ |H_3|&\!\!\!\!\leq C\norm{\nabla\log\rho}_{L^r}\norm{\nabla\log\rho}_{L^{\frac{2r}{r-2}}}\norm{\nabla v}_{L^2}\\ &\!\!\!\!\leq C_{\varepsilon_3}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla\log\rho}_{L^2}^2+\varepsilon_3\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right),\\ |H_4|&\!\!\!\!\leq C\left(\norm{v}_{H^1}\norm{\nabla\log\rho}_{H^1}+\norm{\nabla\log\rho}_{L^r}\norm{v}_{L^{\frac{2r}{r-2}}}\norm{\nabla\log\rho}_{L^2}\right)\\ &\!\!\!\!\leq C_{\varepsilon_4}\left[\norm{\Delta\log\rho}_{L^2}^2+\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{v}_{L^2}^2\right]+\varepsilon_4\left(\norm{\nabla v}_{L^2}^2+\norm{\nabla\log\rho}_{L^2}^2\right).
\end{cases} \end{equation} Combining \eqref{equation4.8} and \eqref{equation4.9}, we deduce that \begin{equation}\label{equation4.10} \left(\norm{\sqrt{\rho}v}_{L^2}^2\right)_t+\nu\norm{\nabla v}_{L^2}^2\leq C\left(\norm{\nabla\rho}_{L^r}^s+1\right)\left(\norm{v}_{L^2}^2+\norm{\nabla\log\rho}_{L^2}^2\right)+C\norm{\Delta\log\rho}_{L^2}^2. \end{equation} Multiplying \eqref{equation4.10} by $2\varepsilon$, combining it with \eqref{equation4.6} and then choosing $\varepsilon$ suitably small gives \begin{equation*} \begin{aligned} &\left(\frac{1}{2C}\norm{\sqrt{\rho}v}_{L^2}^2+\norm{\nabla\log\rho}_{L^2}^2\right)_t+\frac{\nu}{2}\left(\frac{1}{2C}\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right)\\ &\leq C\left(\norm{\nabla\rho}_{L^r}^s+1\right)\left(\norm{v}_{L^2}^2+\norm{\nabla\log\rho}_{L^2}^2\right), \end{aligned} \end{equation*} which, using Gr$\mathrm{\ddot{o}}$nwall's inequality, condition \eqref{4.1} and Lemma \ref{lemma4.2}, implies \eqref{equation4.4}. \end{proof} \begin{Lemma}\label{lemma45} Suppose that \eqref{4.1} holds and $(\rho,u)$ satisfies \eqref{equation1.6}, then \begin{equation}\label{4.11} \begin{aligned} \sup_{t\in[0,T]}\tilde\cP(t)+\int_0^T\left(\tilde\cQ(t)+\norm{\pi}_{H^1}^2\right)\,dt\leq \tilde C, \end{aligned} \end{equation} where \begin{gather*} \tilde\cP(t):=\norm{(\log\rho)_t}_{L^2}^2+\norm{\Delta\rho}^2_{L^2}+\norm{\Delta\log\rho}^2_{L^2}+\norm{\nabla v}^2_{L^2},\\ \tilde\cQ(t):=\norm{\nabla(\log\rho)_t}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\log\rho}_{L^2}^2+\normf{\nabla^2 v}_{L^2}^2.
\end{gather*}
\end{Lemma}
\begin{proof}
Applying $-\nabla\Delta\log\rho\cdot\nabla$ to both sides of \eqref{log} and integrating over $\Omega$, we have
\begin{equation}\label{44.12}
\begin{aligned}
\left(\int \frac{1}{2}|\Delta\log\rho|^2\right)_t+\int c_0\rho^{-1}|\nabla\Delta\log\rho|^2&=\int \nabla\Delta\log\rho\cdot \nabla v\cdot \nabla\log\rho\\
&\quad+\int c_0\rho^{-1}\Delta\log\rho\nabla\log\rho\cdot\nabla\Delta\log\rho\\
&\quad+\int v\cdot \nabla^2\log\rho\cdot\nabla\Delta\log\rho\\
&:=\sum_{i=1}^3 O_i,
\end{aligned}
\end{equation}
where, for the terms $O_1$ and $O_2$, we use Lemma \ref{Lemma221} to get
\begin{equation}\label{4.13}
\begin{cases}
|O_1|&\!\!\!\!\leq \norm{\nabla\log\rho}_{L^r}\norm{\nabla v}_{L^{\frac{2r}{r-2}}}\norm{\nabla\Delta\log\rho}_{L^2}\\
&\!\!\!\!\leq C_{\varepsilon_1}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla v}_{L^2}^2+\varepsilon_1\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\normf{v}_{H^2}^2\right),\\
|O_2|&\!\!\!\!\leq C\norm{\nabla\log\rho}_{L^r}\norm{\Delta\log\rho}_{L^{\frac{2r}{r-2}}}\norm{\nabla\Delta\log\rho}_{L^2}\\
&\!\!\!\!\leq C_{\varepsilon_2}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\Delta\log\rho}_{L^2}^2+\varepsilon_2\norm{\nabla\Delta\log\rho}_{L^2}^2.
\end{cases}
\end{equation}
For $O_3$, we integrate by parts to get
\begin{equation}
\begin{aligned}\label{4.14}
O_3&=\int v_i\partial_{ij}\log\rho\partial_j\Delta\log\rho\\
&=\int_\partial (v_i\partial_{ij}\log\rho n_j)\Delta\log\rho-\int (\partial_jv_i\partial_{ij}\log\rho)\Delta\log\rho\\
&=\int_\partial (v_i\partial_{ij}\log\rho n_j)\Delta\log\rho-\int_\partial (\partial_jv_i\partial_{j}\log\rho n_i)\Delta\log\rho+\int \partial_jv_i\partial_{j}\log\rho\partial_i\Delta\log\rho\\
&:=B_1+B_2+B_3.
\end{aligned}
\end{equation}
Since the simplest part $B_3$ can be handled similarly to $O_1$, we only need to estimate $B_1$ and $B_2$.
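Indeed, arguing exactly as for $O_1$ (again via H\"older's inequality and Lemma \ref{Lemma221}), one may bound $B_3$ by
\begin{equation*}
|B_3|\leq \norm{\nabla\log\rho}_{L^r}\norm{\nabla v}_{L^{\frac{2r}{r-2}}}\norm{\nabla\Delta\log\rho}_{L^2}
\leq C_{\varepsilon}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla v}_{L^2}^2+\varepsilon\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\normf{v}_{H^2}^2\right).
\end{equation*}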
First, using the boundary condition $v\cdot n=n\cdot \nabla\log\rho=0$ and Lemma \ref{Lemma221}, we have \begin{equation} \begin{aligned} B_1&=\int_\partial v\cdot \nabla^2\log\rho\cdot n\Delta\log\rho\\ &=-\int_\partial v\cdot\nabla n \cdot\nabla\log\rho \Delta\log\rho\\ &=\int_\partial (n\times v^\perp)\cdot\nabla n \cdot\nabla\log\rho \Delta\log\rho\\ &=\int (\curle v^\perp\cdot\nabla n \cdot\nabla\log\rho)\Delta\log\rho-\int v^\perp\cdot\left[\nabla\Delta\log\rho\times(\nabla n \cdot\nabla\log\rho)\right] \\ &\quad-\int v^\perp\cdot\curle(\nabla n \cdot\nabla\log\rho) \Delta\log\rho\\ &\leq C\norm{\nabla\log\rho}_{L^r}\norm{\nabla v}_{L^{\frac{2r}{r-2}}}\norm{\Delta\log\rho}_{L^2}+C\norm{\nabla\log\rho}_{L^r}\norm{v}_{L^{\frac{2r}{r-2}}}\norm{\nabla\Delta\log\rho}_{L^2}\\ &\quad+C\norm{v}_{L^3}\norm{\Delta\log\rho}_{L^6}\norm{\Delta\log\rho}_{L^2}\\ &\leq C_{\varepsilon_3}(\norm{\nabla\rho}_{L^r}^s+1)\norm{\nabla v}_{L^2}^2+C_{\varepsilon_3}\norm{v}^4_{L^3}\norm{\Delta\log\rho}^2_{L^2}+\varepsilon_3\norm{\nabla\Delta\log\rho}^2_{L^2}. \end{aligned} \end{equation} Hence, \begin{equation}\label{4.16} \begin{aligned} |B_1|\leq C_{\varepsilon_3}(\norm{\nabla\rho}_{L^r}^s+1)\norm{\nabla v}_{L^2}^2+C_{\varepsilon_3}\norm{v}^4_{L^3}\norm{\Delta\log\rho}^2_{L^2}+\varepsilon_3\norm{\nabla\Delta\log\rho}^2_{L^2}. 
\end{aligned} \end{equation} Similarly, for $B_2$, one has \begin{equation} \begin{aligned} B_2&=-\int_\partial \nabla\log\rho\cdot \nabla v\cdot n\Delta\log\rho\\ &=\int_\partial \nabla\log\rho\cdot \nabla n\cdot v\Delta\log\rho\\ &=\int_\partial \nabla\log\rho\cdot \nabla n\cdot (v^\perp \times n)\Delta\log\rho\\ &=-\int (\nabla\log\rho\cdot \nabla n\cdot \curle v^\perp) \Delta\log\rho+\int \nabla\Delta\log\rho\times(\nabla\log\rho\cdot \nabla n)\cdot v^\perp\\ &\quad+\int \Delta\log\rho\curle(\nabla\log\rho\cdot \nabla n)\cdot v^\perp\\ &\leq C_{\varepsilon_4}(\norm{\nabla\rho}_{L^r}^s+1)\norm{\nabla v}_{L^2}^2+C_{\varepsilon_4}\norm{v}^4_{L^3}\norm{\Delta\log\rho}^2_{L^2}+\varepsilon_4\norm{\nabla\Delta\log\rho}^2_{L^2}, \end{aligned} \end{equation} that is, \begin{equation}\label{4.18} |B_2|\leq C_{\varepsilon_4}(\norm{\nabla\rho}_{L^r}^s+1)\norm{\nabla v}_{L^2}^2+C_{\varepsilon_4}\norm{v}^4_{L^3}\norm{\Delta\log\rho}^2_{L^2}+\varepsilon_4\norm{\nabla\Delta\log\rho}^2_{L^2}. \end{equation} Combining \eqref{4.13}--\eqref{4.14}, \eqref{4.16} and \eqref{4.18}, one can deduce that \begin{equation}\label{4.19} \begin{aligned} &\left(\norm{\Delta\log\rho}_{L^2}^2\right)_t+\nu\norm{\nabla\Delta\log\rho}_{L^2}^2\\ &\leq C\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}^4_{L^3}+1\right)\norm{\Delta\log\rho}_{L^2}^2+C_\varepsilon\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla v}_{L^2}^2+\varepsilon\norm{v}_{H^2}^2. \end{aligned} \end{equation} On the other hand, we slightly change \eqref{2.20} (more precisely, $I_3$) into the form \begin{equation} \begin{aligned} \left(\int \frac{1}{2}|\Delta\rho|^2\right)_t+\int c_0\rho^{-1}|\nabla\Delta\rho|^2&=\int \nabla\Delta\rho\cdot \nabla v\cdot \nabla\rho+\int v\cdot \nabla^2\rho\cdot\nabla\Delta\rho\\ &\quad+c_0\int \nabla|\nabla\log\rho|^2\cdot\nabla\Delta\rho+\int c_0\rho^{-2}\Delta\rho\nabla\rho\cdot\nabla\Delta\rho\\ &:=\sum_{i=1}^4 I_i. 
\end{aligned}
\end{equation}
Then, exactly following the proof of \eqref{4.13}--\eqref{4.18}, we can obtain an estimate similar to \eqref{4.19}, that is,
\begin{equation}\label{44.21}
\begin{aligned}
\left(\norm{\Delta\rho}_{L^2}^2\right)_t+\nu\norm{\nabla\Delta\rho}_{L^2}^2&\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^3}^4+1\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right)\\
&\quad+C_\varepsilon\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla v}_{L^2}^2+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla\Delta\log\rho}_{L^2}^2\right),
\end{aligned}
\end{equation}
which, together with \eqref{4.19}, yields
\begin{equation}\label{4.22}
\begin{aligned}
&\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right)_t+\nu\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\log\rho}_{L^2}^2\right)\\
&\leq C\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^3}^4+1\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)+\varepsilon\norm{v}_{H^2}^2.
\end{aligned}
\end{equation}
For the estimate of $(\log\rho)_t$, differentiating \eqref{log} with respect to $t$, multiplying by $(\log\rho)_t$, and integrating over $\Omega$, one has
\begin{equation}\label{4.20}
\begin{aligned}
&\left(\frac{1}{2}\int|(\log\rho)_t|^2\right)_t+\int c_0\rho^{-1}|\nabla(\log\rho)_t|^2\\
&=-\int c_0\rho^{-1}\nabla(\log\rho)_t\cdot\nabla\log\rho(\log\rho)_t-\int v_t\cdot\nabla\log\rho(\log\rho)_t+\int c_0\rho^{-1}|(\log\rho)_t|^2|\nabla\log\rho|^2\\
&:=\sum_{i=1}^3P_i,
\end{aligned}
\end{equation}
where, using Lemma \ref{Lemma221},
\begin{equation}\label{4.21}
\begin{cases}
|P_1|&\!\!\!\!\leq C\norm{\nabla\log\rho}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\norm{\nabla(\log\rho)_t}_{L^2}\\
&\!\!\!\!\leq C_{\varepsilon_1}(\norm{\nabla\rho}_{L^r}^s+1)\norm{(\log\rho)_t}_{L^2}^2+\varepsilon_1\norm{\nabla(\log\rho)_t}_{L^2}^2,\\
|P_2|&\!\!\!\!\leq
\norm{\nabla\log\rho}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\norm{v_t}_{L^2}\\
&\!\!\!\!\leq C_{\varepsilon_2}(\norm{\nabla\rho}_{L^r}^s+1)\norm{(\log\rho)_t}_{L^2}^2+\varepsilon_2\norm{v_t}_{L^2}^2,\\
|P_3|&\!\!\!\!\leq \norm{\nabla\log\rho}^2_{L^r}\norm{(\log\rho)_t}^2_{L^{\frac{2r}{r-2}}}\\
&\!\!\!\!\leq C_{\varepsilon_3}(\norm{\nabla\rho}_{L^r}^s+1)\norm{(\log\rho)_t}_{L^2}^2+\varepsilon_3\norm{\nabla(\log\rho)_t}_{L^2}^2.
\end{cases}
\end{equation}
Combining \eqref{4.20} and \eqref{4.21} leads to
\begin{equation}\label{4.25}
\left(\norm{(\log\rho)_t}_{L^2}^2\right)_t+\nu\norm{\nabla(\log\rho)_t}_{L^2}^2\leq C_\varepsilon(\norm{\nabla\rho}_{L^r}^s+1)\norm{(\log\rho)_t}_{L^2}^2+\varepsilon\norm{v_t}_{L^2}^2.
\end{equation}
We still need to treat the higher order bounds for $v$. The proof is basically the same as that of \eqref{2.25}--\eqref{2.28}, and the main differences one should notice are the terms $K_3$ and $J_2$--$J_4$. For $K_3$,
\begin{equation}\label{4.26}
\begin{aligned}
K_3&=-\int \frac{1}{2}\mu(\rho)_t |\curle v|^2\\
&=-\frac{1}{2}\int_{\partial}(n\times v)\cdot \curle v\mu(\rho)_t- \frac{1}{2}\int \nabla\mu(\rho)_t\times \curle v\cdot v-\frac{1}{2}\int \mu(\rho)_t\Delta v\cdot v\\
&=-\frac{1}{2}\int_\partial \rho\mu'(\rho)(\log\rho)_tv\cdot B\cdot v- \frac{1}{2}\int \rho\mu'(\rho)\nabla(\log\rho)_t\times \curle v\cdot v\\
&\quad- \frac{1}{2}\int \left[\rho\mu'(\rho)+\rho^2\mu''(\rho)\right](\log\rho)_t\nabla\log\rho\times \curle v\cdot v-\frac{1}{2}\int \mu(\rho)_t\Delta v\cdot v\\
&\leq |K_2|+C\left(\norm{\nabla(\log\rho)_t}_{L^2}+\norm{\nabla\rho}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\right)\norm{v}_{L^r}\norm{\nabla v}_{L^{\frac{2r}{r-2}}}\\
&\quad+C\norm{v}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\norm{\Delta v}_{L^2}\\
&\leq |K_2|+C_\varepsilon\left(\norm{v}_{L^r}^s+\norm{\nabla\rho}_{L^r}^s+1\right)\left(\norm{\nabla v}_{L^2}^2+\norm{(\log\rho)_t}_{L^2}^2\right)\\
&\quad+\varepsilon\left(\norm{\nabla(\log\rho)_t}_{L^2}^2+\norm{v}_{H^2}^2\right),
\end{aligned}
\end{equation}
while, for $J_2$--$J_4$, using the relation
$$\nabla^2\rho^{-1}=\frac{1}{\rho^2}\nabla^2\rho-\frac{2}{\rho}\nabla^2\log\rho$$
and Lemma \ref{Lemma221}, we have
\begin{equation}\label{4.27}
\begin{aligned}
J_2&=\int c_0\dive{\left[2\mu(\rho)\nabla^2\rho^{-1}\right]}\cdot v_t\\
&\leq C\norm{\nabla\rho}_{L^r}\normf{\nabla^2\rho^{-1}}_{L^{\frac{2r}{r-2}}}\norm{v_t}_{L^2}+C\norm{\nabla\Delta\rho^{-1}}_{L^2}\norm{v_t}_{L^2}\\
&\leq C_{\varepsilon_1}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\left(\normf{\Delta\rho}_{L^2}^2+\normf{\Delta\log\rho}_{L^2}^2\right)\\
&\quad+C_{\varepsilon_1}\left(\normf{\nabla\Delta\rho}_{L^2}^2+\normf{\nabla\Delta\log\rho}_{L^2}^2\right)+\varepsilon_1\norm{v_t}_{L^2}^2,
\end{aligned}
\end{equation}
\begin{equation}\label{4.28}
\begin{aligned}
J_3&=-\int c_0 \dive{\left(\rho v\otimes\nabla\rho^{-1}\right)}\cdot v_t=\int c_0 \dive{\left( v\otimes\nabla\log\rho\right)}\cdot v_t\\
&\leq C\norm{\nabla\rho}_{L^r}\norm{\nabla v}_{L^{\frac{2r}{r-2}}}\norm{v_t}_{L^2}+C\norm{v}_{L^r}\normf{\nabla^2\log\rho}_{L^{\frac{2r}{r-2}}}\norm{v_t}_{L^2}\\
&\leq C_{\varepsilon_2}\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^r}^s+1\right)\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\
&\quad+\varepsilon_2\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{v}_{H^2}^2+\norm{v_t}_{L^2}^2\right),
\end{aligned}
\end{equation}
\begin{equation}\label{4.29}
\begin{aligned}
J_4&=-\int c_0^2 \dive{\left(\rho \nabla\rho^{-1}\otimes\nabla\rho^{-1}\right)}\cdot v_t=\int c_0^2 \dive{\left(\nabla\log\rho\otimes\nabla\rho^{-1}\right)}\cdot v_t\\
&\leq C\norm{\nabla\rho}_{L^r}\left(\normf{\nabla^2\log\rho}_{L^{\frac{2r}{r-2}}}+\normf{\nabla^2\rho^{-1}}_{L^{\frac{2r}{r-2}}}\right)\norm{v_t}_{L^2}\\
&\leq C_{\varepsilon_3}\left(\norm{\nabla\rho}_{L^r}^s+1\right)\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)\\
&\quad+\varepsilon_3\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}_{L^2}^2+\norm{v_t}_{L^2}^2\right).
\end{aligned}
\end{equation}
Therefore, modifying the corresponding norms of $(v,\nabla\rho)$ from \eqref{2.25}--\eqref{2.28} into the $L^r$-norms and combining with \eqref{4.26}--\eqref{4.29}, we have
\begin{equation}\label{4.30}
\begin{aligned}
&\left(\int_\partial \mu(\rho)v\cdot B\cdot v+\int \mu(\rho)|\curle v|^2\right)_t +\nu\norm{v_t}_{L^2}^2+M'(t)\\
&\leq C_\varepsilon\left(\norm{\nabla\rho}_{L^3}^4+\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^3}^4+\norm{v}_{L^r}^s+1\right)\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)\\
&\quad+C_{\varepsilon}\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right)+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla(\log\rho)_t}_{L^2}^2\right).
\end{aligned}
\end{equation}
For the sake of simplicity, as we have explained in \eqref{327} and \eqref{341}, we can rewrite \eqref{4.30} into
\begin{equation}\label{4.31}
\begin{aligned}
\left(\norm{\nabla v}_{L^2}^2\right)_t +\nu\norm{v_t}_{L^2}^2&\leq C_\varepsilon\left[\cI(t)+1\right]\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)\\
&\quad+C_{\varepsilon}\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right)+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla(\log\rho)_t}_{L^2}^2\right),
\end{aligned}
\end{equation}
where $\cI(t)$ is an integrable function over $(0,T^*)$.
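For the reader's convenience, we sketch the verification of the relation $\nabla^2\rho^{-1}=\rho^{-2}\nabla^2\rho-2\rho^{-1}\nabla^2\log\rho$ used above: a direct computation gives
\begin{equation*}
\partial_{ij}\rho^{-1}=\partial_i\left(-\rho^{-2}\partial_j\rho\right)=2\rho^{-3}\partial_i\rho\,\partial_j\rho-\rho^{-2}\partial_{ij}\rho,\qquad
\partial_{ij}\log\rho=\rho^{-1}\partial_{ij}\rho-\rho^{-2}\partial_i\rho\,\partial_j\rho,
\end{equation*}
and eliminating the quadratic term $\rho^{-2}\partial_i\rho\,\partial_j\rho$ between the two identities yields
\begin{equation*}
\partial_{ij}\rho^{-1}=\frac{1}{\rho^2}\partial_{ij}\rho-\frac{2}{\rho}\partial_{ij}\log\rho.
\end{equation*}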
For the $H^2$-norm of $v$, in analogy with \eqref{328}--\eqref{331} and applying Lemma \ref{Lemma221}, one has
\begin{equation}\label{44.32}
\begin{aligned}
\norm{F}^2_{L^2}&\leq C_{\varepsilon}\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^r}^s+1\right)\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\
&\quad+C\left(\norm{v_t}^2_{L^2}+\norm{\nabla(\log\rho)_t}^2_{L^2}\right)+\varepsilon\left(\norm{v}_{H^2}^2+\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right),\\
\norm{\Phi}^2_{H^1}&\leq C\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right),
\end{aligned}
\end{equation}
where $\Phi:=-B\cdot[v+c_0\nabla\rho^{-1}]$. Thus, from Lemma \ref{lemma2.8} and \eqref{2.15}, we have
\begin{equation}\label{4.33}
\begin{aligned}
\norm{v}_{H^2}^2+\norm{\pi}_{H^1}^2&\leq C_{\varepsilon}\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^r}^s+1\right)\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\
&\quad+C\left(\norm{v_t}^2_{L^2}+\norm{\nabla(\log\rho)_t}^2_{L^2}\right)+\varepsilon\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right),
\end{aligned}
\end{equation}
which, combined with \eqref{4.30}, gives
\begin{equation}\label{4.34}
\begin{aligned}
\left(\norm{\nabla v}_{L^2}^2\right)_t +\frac{\varepsilon}{2C}\norm{v}_{H^2}^2+\frac{\nu}{2}\norm{v_t}_{L^2}^2&\leq C_\varepsilon\left[\cI(t)+1\right]\left(\norm{\nabla v}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)\\
&\quad+C_{\varepsilon}\left(\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right)\\
&\quad+\varepsilon\norm{\nabla(\log\rho)_t}_{L^2}^2.
\end{aligned}
\end{equation}
Thus, combining \eqref{4.22}, \eqref{4.25}, \eqref{4.33} and \eqref{4.34} via an approach similar to that in \eqref{332}--\eqref{333}, and then applying Gr\"{o}nwall's inequality, we deduce the estimate \eqref{4.11}.
\end{proof}

\begin{Remark}
From the proof above, one should notice that it is the convection term $\rho u\cdot \nabla u$ that forces us to use Serrin's condition on $v$. In fact, we could directly use the bound $u\in L^s(0,T;L^r)$ in \eqref{44.6} to get the lower bounds for $\log\rho$ (see also Lemma \ref{lemma46}), but, in order to illustrate this point, we insist on using only $\nabla\rho\in L^s(0,T;L^r)$.
\end{Remark}

Now, we turn back to prove Proposition \ref{prop4.1} for $(\rho,u)$ satisfying \eqref{equation1.6}.
\begin{proof}[Proof of Proposition \ref{prop4.1}]
Combining Lemmas \ref{lemma44}--\ref{lemma45}, we can get Proposition \ref{prop4.1}. The only point one should notice is that
\begin{equation*}
\begin{aligned}
\norm{\nabla\rho_t}_{L^2}&\leq C\left(\norm{\rho_t\nabla\rho}_{L^2}+\norm{\nabla(\log\rho)_t}_{L^2}\right)\\
&\leq C\left(\norm{\Delta\rho}^2_{L^2}\norm{\rho_t}_{L^2}+\norm{\nabla(\log\rho)_t}_{L^2}\right)+\frac{1}{2}\norm{\nabla\rho_t}_{L^2},
\end{aligned}
\end{equation*}
that is,
\begin{equation*}
\int_0^T\norm{\nabla\rho_t}^2_{L^2}\,dt\leq C\left(\sup_{t\in[0,T]}\norm{\Delta\rho}^2_{L^2}\sup_{t\in[0,T]}\norm{\rho_t}^2_{L^2}\int_0^T\norm{\Delta\rho}^2_{L^2}\,dt+\int_0^T\norm{\nabla(\log\rho)_t}^2_{L^2}\,dt\right)\leq \tilde C.
\end{equation*}
\end{proof}

\subsection{Case for $(\rho,u)$ satisfying \eqref{equation1.7}}
We basically follow the proof in subsection \ref{11}. Because of the nonlinear term $|\nabla\rho|^2$, one still has to estimate $\rho$ together with $\log\rho$.
For later use, we collect some bounds from \eqref{1.23}:
\begin{equation}\label{Q1}
\norm{\nabla Q}_{L^2}\leq C\left(\norm{\Delta\rho}_{L^2}+\norm{\nabla\rho}_{L^r}\norm{\nabla\rho}_{L^{\frac{2r}{r-2}}}\right)\leq C\left[\norm{\Delta\rho}_{L^2}+\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{\nabla\rho}_{L^2}^2\right],
\end{equation}
\begin{equation}\label{Q2}
\begin{aligned}
\norm{Q_t}_{L^2}&\leq C\left(\norm{\nabla(\log\rho)_t}_{L^2}+\norm{\nabla\rho}_{L^r}\norm{\rho_t}_{L^{\frac{2r}{r-2}}}\right)\\
&\leq C\left[\norm{\nabla(\log\rho)_t}_{L^2}+\left(\norm{\nabla\rho}_{L^r}^s+1\right)\norm{(\log\rho)_t}_{L^2}^2\right].
\end{aligned}
\end{equation}
First, we give a lemma for the lower order bounds of $\rho$.
\begin{Lemma}\label{lemma46}
Suppose that $(\rho,u)$ satisfies the condition \eqref{equation1.7}. Then, for all $T\in (0,T^*)$, Lemma \ref{lemma4.2} holds and
\begin{equation}
\sup_{t\in[0,T]}\left(\norm{\nabla\rho}_{L^2}^2+\norm{\nabla\log\rho}_{L^2}^2\right)+\int_0^T\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right)\,dt\leq \tilde C.
\end{equation}
\end{Lemma}
\begin{proof}
The estimates for $\rho$ and $\log\rho$ come from \eqref{eq3.7} and \eqref{44.6}, respectively, and we only give the proof for $\log\rho$ here, since the other one can be proved similarly. From \eqref{44.6}, we have
\begin{equation}\label{44.40}
\begin{aligned}
\left(\frac{1}{2}\int |\nabla\log\rho|^2\right)_t+\int c_0\rho^{-1}|\Delta\log\rho|^2&=\int (v\cdot \nabla\log\rho)\Delta\log\rho\\
&\leq\norm{v}_{L^{r}} \norm{\nabla\log\rho}_{L^\frac{2r}{r-2}}\norm{\Delta\log\rho}_{L^2}\\
&\leq C_\varepsilon\left(\norm{v}_{L^r}^s+1\right)\norm{\nabla\log\rho}^2_{L^2}+\varepsilon\norm{\Delta\log\rho}_{L^2}^2\\
&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\norm{\nabla\log\rho}^2_{L^2}+\varepsilon\norm{\Delta\log\rho}_{L^2}^2.
\end{aligned}
\end{equation}
Then, applying \eqref{4.1} and Gr\"{o}nwall's inequality to \eqref{44.40}, we conclude the proof.
\end{proof}

\begin{Lemma}
Suppose that $(\rho,u)$ satisfies the condition \eqref{equation1.7}. Then,
\begin{equation}\label{con}
\sup_{t\in [0,T]}\tilde\cF(t)+\int_0^T\left(\tilde\cG(t)+\norm{\pi}_{H^1}^2\right)\,dt\leq \tilde C,
\end{equation}
where
\begin{equation*}
\begin{gathered}
\tilde\cF(t):=\normf{u}_{H^1}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{(\log\rho)_t}_{L^2}^2,\\
\tilde\cG(t):= \norm{\nabla\Delta\rho}_{L^2}^2+ \norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{u_t}_{L^2}^2+\norm{\Delta u}_{L^2}^2+\norm{\nabla(\log\rho)_t}_{L^2}^2.
\end{gathered}
\end{equation*}
\end{Lemma}
\begin{proof}
On the one hand, we follow the proof from \eqref{44.12} to \eqref{4.22} and replace all $\norm{v}_{L^3}^4$ by $\norm{v}_{L^r}^s$ via Lemma \ref{Lemma221} to obtain
\begin{equation}\label{44.39}
\begin{aligned}
&\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2\right)_t+\nu\left(\norm{\nabla\Delta\rho}_{L^2}^2+\norm{\nabla\Delta\log\rho}_{L^2}^2\right)\\
&\leq C\left(\norm{\nabla\rho}_{L^r}^s+\norm{v}_{L^r}^s+1\right)\left(\norm{\Delta\rho}_{L^2}^2+\norm{\Delta\log\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)+\varepsilon\norm{v}_{H^2}^2\\
&\leq C\left(\norm{u}_{L^r}^s+1\right)\tilde\cF(t)+\varepsilon\norm{v}_{H^2}^2.
\end{aligned}
\end{equation}
On the other hand, we still have \eqref{4.25}, that is,
\begin{equation}\label{44.45}
\begin{aligned}
\left(\norm{(\log\rho)_t}_{L^2}^2\right)_t+\nu\norm{\nabla(\log\rho)_t}_{L^2}^2&\leq C_\varepsilon(\norm{\nabla\rho}_{L^r}^s+1)\norm{(\log\rho)_t}_{L^2}^2+\varepsilon\norm{v_t}_{L^2}^2\\
&\leq C_\varepsilon(\norm{u}_{L^r}^s+1)\tilde\cF(t)+\varepsilon\norm{u_t}_{L^2}^2.
\end{aligned}
\end{equation}
Here, we have used the facts that
$$\norm{\nabla v}_{L^2}^2\leq C\left(\norm{\nabla u}_{L^2}^2+\normf{\nabla^2\rho^{-1}}_{L^2}^2\right)\leq C\left(\norm{\nabla u}_{L^2}^2+\normf{\Delta\rho}_{L^2}^2+\normf{\Delta\log\rho}_{L^2}^2\right),$$
$$\norm{v_t}_{L^2}^2\leq C\left(\norm{u_t}_{L^2}^2+\normf{\nabla\rho^{-1}_t}_{L^2}^2\right)\leq C\left(\norm{u_t}_{L^2}^2+\normf{\nabla(\log\rho)_t}_{L^2}^2+\normf{\nabla\rho}_{L^r}^2\normf{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}^2\right).$$
For $u$, similarly to the proof in subsection \ref{11}, applying Serrin's condition \eqref{4.1} to \eqref{346}--\eqref{351} and using \eqref{Q1}--\eqref{Q2}, we can derive that
\begin{equation}\label{44.41}
\begin{aligned}
&\left(\norm{\sqrt\rho u}_{L^2}^2\right)_t+\nu\norm{\nabla u}_{L^2}^2\\
&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\left(\norm{\sqrt\rho u}_{L^2}^2+\norm{\nabla\rho}_{L^2}^2\right)+C_\varepsilon\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)+\varepsilon\norm{u_t}_{L^2}^2\\
&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\tilde\cF(t)+C_\varepsilon\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right)+\varepsilon\norm{u_t}_{L^2}^2,
\end{aligned}
\end{equation}
and
\begin{equation}\label{44.42}
\begin{aligned}
\left(\normf{\sqrt{\mu(\rho)}|D(u)|}_{L^2}^2\right)_t+\nu\norm{u_t}_{L^2}^2&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\tilde\cF(t)+C_\varepsilon\norm{\nabla(\log\rho)_t}_{L^2}^2+\varepsilon\norm{\Delta u}_{L^2}^2,
\end{aligned}
\end{equation}
where the only term we need to be concerned with is
\begin{equation*}
N_3=\int\mu(\rho)_t|D(u)|^2\quad\text{ in }\eqref{350}.
\end{equation*}
However, this term can be handled by integrating by parts:
\begin{equation*}
\begin{aligned}
N_3&=\int\mu(\rho)_t|D(u)|^2\\
&=-\int \nabla\mu(\rho)_t\cdot D(u)\cdot u-\int \frac{1}{2}\mu(\rho)_t\Delta u\cdot u\\
&=-\int \rho\mu'(\rho)\nabla(\log\rho)_t\cdot D(u)\cdot u-\int (\log\rho)_t \left(\rho\mu'(\rho)\right)'\nabla\rho\cdot D(u)\cdot u\\
&\quad-\int \frac{1}{2}\rho\mu'(\rho)(\log\rho)_t\Delta u\cdot u\\
&\leq C\norm{u}_{L^r}\norm{\nabla u}_{L^{\frac{2r}{r-2}}}\norm{\nabla(\log\rho)_t}_{L^2}+C\norm{\nabla\rho}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\norm{u}_{L^r}\norm{\nabla u}_{L^{\frac{2r}{r-2}}}\\
&\quad+C\norm{u}_{L^r}\norm{(\log\rho)_t}_{L^{\frac{2r}{r-2}}}\norm{\Delta u}_{L^2}\\
&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\left(\norm{\nabla u}_{L^2}^2+\norm{(\log\rho)_t}_{L^2}^2\right)+\varepsilon\left(\norm{\nabla(\log\rho)_t}_{L^2}^2+\norm{\Delta u}_{L^2}^2\right)\\
&\leq C_\varepsilon\left(\norm{u}_{L^r}^s+1\right)\tilde\cF(t)+\varepsilon\left(\norm{\nabla(\log\rho)_t}_{L^2}^2+\norm{v}_{H^2}^2\right),
\end{aligned}
\end{equation*}
where we have used
$$
\begin{aligned}
\norm{\Delta u}^2_{L^2}&\leq C\left(\norm{v}_{H^2}^2+\norm{\nabla\Delta\rho^{-1}}_{L^2}^2\right)\\
&\leq C\left(\norm{v}_{H^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}+\norm{\nabla\Delta\log\rho}^2_{L^2}\right)\\
&\quad+C\norm{\nabla\rho}^2_{L^r}\left(\norm{\Delta\rho}_{L^{\frac{2r}{r-2}}}^2+\norm{\Delta\log\rho}_{L^{\frac{2r}{r-2}}}^2\right).
\end{aligned}
$$
To estimate $\Delta u$, we apply Lemmas \ref{Lemma221}, \ref{lemma2.7} and \ref{lemma2.8} to \eqref{328} and then use \eqref{44.32}--\eqref{4.33} with $\Phi=-c_0\nabla\rho^{-1}$ and
$$
\begin{aligned}
\norm{\Phi}_{H^2}&\leq C\norm{\nabla\Delta\rho^{-1}}_{L^2}\\
&\leq C\left(\norm{\nabla\Delta\rho}_{L^2}+\norm{\nabla\Delta\log\rho}_{L^2}\right)\\
&\quad+C\norm{\nabla\rho}_{L^r}\left(\norm{\Delta\rho}_{L^{\frac{2r}{r-2}}}+\norm{\Delta\log\rho}_{L^{\frac{2r}{r-2}}}\right)
\end{aligned}
$$
to deduce that
\begin{equation}\label{44.43}
\begin{aligned}
\norm{v}_{H^2}^2+\norm{\pi}_{H^1}^2&\leq C\left(\norm{u}_{L^r}^s+1\right)\left(\norm{\Delta\log\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2+\norm{\nabla v}_{L^2}^2\right)\\
&\quad+C\left(\norm{v_t}^2_{L^2}+\norm{\nabla(\log\rho)_t}^2_{L^2}+\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right)\\
&\leq C\left(\norm{u}_{L^r}^s+1\right)\tilde\cF(t)\\
&\quad+C\left(\norm{u_t}^2_{L^2}+\norm{\nabla(\log\rho)_t}^2_{L^2}+\norm{\nabla\Delta\log\rho}_{L^2}^2+\norm{\nabla\Delta\rho}^2_{L^2}\right).
\end{aligned}
\end{equation}
Now, collecting the bounds \eqref{44.39}--\eqref{44.43} and following the proof from \eqref{3354} to \eqref{3361}, one has
\begin{equation}\label{44.44}
\begin{aligned}
\tilde\cF'(t)+\nu\tilde\cG(t)\leq C(\norm{u}_{L^r}^s+1)\tilde\cF(t)+C\left(\norm{\nabla\rho}_{L^2}^2+\norm{\Delta\rho}_{L^2}^2\right).
\end{aligned}
\end{equation}
Applying Gr\"{o}nwall's inequality and Lemma \ref{lemma46} to \eqref{44.44} and then turning back to \eqref{44.43}, we obtain \eqref{con}.
\end{proof}

The proof of Proposition \ref{prop4.1} is the same as that at the end of subsection \ref{11}; we omit it and leave it to the reader.

\subsection{Proof of Theorem \ref{Theorem1.3}}
Since the constant $\tilde C$ in Proposition \ref{prop4.1} is independent of $T\in (0,T^*)$, we can let $t\to T^*$ and consider $(\rho,u)(x,T^*)$ as the initial data. Then, following the proof in subsection \ref{P12}, we can derive a contradiction with the maximality of $T^*$. Therefore, we complete the proof of Theorem \ref{Theorem1.3}.

\bibliographystyle{abbrv}
\section{Introduction}
\label{intro}
We assume throughout that $\Omega $ is a bounded domain in ${\mathbb{C}}^n$. Let ${\mathbb{D}}$ stand for the unit disk in ${\mathbb{C}}$. The classical Lempert function with pole at $a \in \Omega $ \cite{Lempert} is defined by
$$
\ell_a (z):=\inf \big\{ \log|\zeta|:\exists \varphi\in Hol (\mathbb D,\Omega), \varphi(0)=z, \varphi(\zeta)=a\big\}.
$$
Given a finite number of points $a_j \in \Omega$, $j=1,...,N$, Coman \cite{Coman} extended this to:
\begin{multline}
\label{coman}
\ell (z):=\ell_{\{a_1,\dots,a_N\}} (z):= \inf \big\{ \sum^N_{j=1} \log|\zeta_j|: \\
\exists \varphi\in Hol (\mathbb D,\Omega) : \varphi(0)=z, \varphi(\zeta_j)=a_j, j=1,...,N \big\} .
\end{multline}
The \emph{Green function} for the same poles is
\begin{multline*}
g := \sup \left\lbrace u \in PSH(\Omega, \mathbb R_-) : u(z) \le \log |z-a_j|+C_j, \right. \\
\left. \mbox{ for } z \mbox{ in a neighborhood of } a_j, j=1,...,N \right\rbrace ,
\end{multline*}
where $PSH(\Omega, \mathbb R_-)$ stands for the set of all negative plurisubharmonic functions in $\Omega$. The inequality $g(z)\le \ell(z)$ always holds, and it is known that it can be strict \cite{CarlWieg}, \cite{TraoTh}, \cite{NikoZwo}. If $\ell$ ever turns out to be plurisubharmonic itself, then $\ell$ must be equal to $g$ \cite{Coman}.

There are natural extensions of the definition of the Green function. In one dimension, considering a finite number of poles in the same location $a$, say $m$ poles, has a natural interpretation in terms of multiplicities: the point mass in the Riesz measure of the Green function is multiplied by $m$. Locally, the Green function behaves like $\log |f|$, where $f$ is a holomorphic function vanishing at $a$ with multiplicity $m$. Lelong and Rashkovskii \cite{lelongrash}, \cite{Rash} defined a generalized Green function. The function $\log |z|$ was replaced by ``local indicators'', i.e.
circled plurisubharmonic functions $\Psi$ whose Monge-Amp\`ere measure $(dd^c \Psi)^n$ is concentrated at the origin, such that whenever $\log |w_j|=c\log |z_j|$ for all $j \in \{1,\dots,n\}$, then $\Psi(w)=c\Psi(z)$. This has the advantage of allowing the consideration of non-isotropic singularities such as $\max(2\log |z_1|, \log |z_2|)$, but the ``circled'' condition privileges certain coordinate axes, so that the class is not invariant under linear changes of variables. We will have to remove this restriction to obtain a class large enough to describe some natural limits.

In several complex variables, we would like to know which notion of multiplicity can arise when we take limits of ordinary Green (or Lempert) functions with several poles tending to the same point. This idea was put to use in \cite{TraoTh} to exhibit an example where a Lempert function with four poles is different from the corresponding Green function. The definition of a generalized Lempert function chosen in \cite{TraoTh} had some drawbacks --- essentially, it was not monotonic with respect to its system of poles (in an appropriate sense) \cite[Proposition 4.3]{TraoTh} and did not pass to the limit in some very simple situations \cite[Theorem 6.3]{TTppt}. We recall that monotonicity holds when no multiplicities are present, see \cite{WikstromAMS} and \cite[Proposition 3.1]{TraoTh} for the convex case, and the more recent \cite{NiPfl} for arbitrary domains and weighted Lempert functions, or more generally when a subset of the original set of poles is considered with the same generalized local indicators.

In Section \ref{definition}, we successively define a class of indicators, a subclass which is useful to produce ``monomial'' examples, a notion of multiplicity for values attained by an analytic disk, and a generalization of Coman's Lempert function to systems of poles with generalized local indicators, different from \cite{TraoTh}.
In Section \ref{main}, we state our two main results: monotonicity, and convergence under certain restrictive (but, we hope, natural) conditions. Further sections are devoted to the proofs of those results. Finally, in Section \ref{compprev} we summarize the differences between our new definition and that given in \cite{TraoTh}.

The first named author would like to thank Nikolai Nikolov for stimulating discussions on this topic, and his colleague Anne Bauval from Toulouse for showing him a nice purely combinatorial proof of Lemma \ref{order}. Special thanks are due to the referee for his very thorough reading of our paper.

\section{Definitions}
\label{definition}

\begin{defn} \cite{lelongrash}
\label{LRindic}
Let $\Psi \in PSH({\mathbb{D}}^n)$. We call $\Psi$ a \emph{local indicator} and write $\Psi \in \mathcal I_0$ if
\begin{enumerate}
\item $\Psi$ is bounded from above on ${\mathbb{D}}^n$;
\item $\Psi$ is circled, i.e. $\Psi (z_1,\dots,z_n)$ depends only on $(|z_1|, \dots, |z_n|)$;
\item for any $c>0$, $\Psi (|z_1|^c, \dots, |z_n|^c) =c \Psi (|z_1|, \dots, |z_n|)$.
\end{enumerate}
As a consequence, $(dd^c \Psi)^n = \tau_\Psi \delta_0$ for some $\tau_\Psi \ge 0$.
\end{defn}
Notice that if $\Psi_1 \in PSH({\mathbb{D}}^n)$, $\Psi_2 \in PSH({\mathbb{D}}^m)$, and they are both local indicators, then
$$
\Psi (z,z'):= \max ( \Psi_1(z), \Psi_2(z'))
$$
defines a local indicator on ${\mathbb{D}}^{n+m}$.

We need to remove the restriction to a single coordinate system in Definition \ref{LRindic}.
\begin{defn}
\label{genindic}
We call $\Psi$ a \emph{generalized local indicator}, and we write $\Psi \in \mathcal I$, if there exist a neighborhood $U$ of $0$, a local indicator $\Psi_0 \in \mathcal I_0$ and a one-to-one linear map $L$ of ${\mathbb{C}}^n$ to itself such that $L(U) \subset {\mathbb{D}}^n$ and $\Psi = \Psi_0 \circ L$.
\end{defn}
We will concentrate on a class of simple examples.
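For instance (the specific choices of $L$ and $\Psi_0$ below serve only as an illustration), take $n=2$, $L(z_1,z_2):=\tfrac12 (z_1+z_2,\, z_1-z_2)$ and $\Psi_0(w_1,w_2):=\max (2\log |w_1|, \log |w_2|) \in \mathcal I_0$; then, for a suitable neighborhood $U$ of $0$,
\begin{equation*}
\Psi(z) = \Psi_0 \circ L (z) = \max \left( 2\log \frac{|z_1+z_2|}{2},\ \log \frac{|z_1-z_2|}{2} \right)
\end{equation*}
belongs to $\mathcal I$, but is not circled: for $0<t<1$, the points $(t,t)$ and $(t,-t)$ have the same coordinatewise moduli, while $\Psi(t,t)=2\log t \neq \log t=\Psi(t,-t)$.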
Given two vectors $z, w \in {\mathbb{C}}^n$, their standard Hermitian product is denoted by $z\cdot \bar w := \sum_j z_j \bar w_j$. We also write $\| z \| := |z \cdot \bar z |^{1/2}.$
\begin{defn}
\label{elemindic}
We say that $\Psi$ is an \emph{elementary local indicator} if there exists a basis $\{v_1, \dots, v_n\}$ of vectors of ${\mathbb{C}}^n$ and scalars $m_j \in {\mathbb{R}}_+$, $1\le j \le n$, such that for $z\in {\mathbb{D}}^n$,
\begin{equation}
\label{elindform}
\Psi(z) = \max_ {1\le j \le n} m_j \log |z \cdot \bar v_j | .
\end{equation}
\end{defn}
One easily checks that any elementary local indicator is a generalized local indicator. The most interesting case is the one for which the basis is orthonormal. In fact, it is essentially the only case.
\begin{lemma}
\label{ortho}
Given an elementary local indicator $\Psi$ as in Definition \ref{elemindic}, there exists an orthonormal basis $\{\tilde v_1, \dots, \tilde v_n\}$ of ${\mathbb{C}}^n$ such that the associated elementary local indicator $\tilde \Psi (z) := \max_ {1\le j \le n} m_j \log |z \cdot \overline {{\tilde v}_j} |$ verifies $\tilde \Psi - \Psi \in L^\infty({\mathbb{D}}^n)$.
\end{lemma}
As a consequence, we could have restricted the map $L$ in Definition \ref{genindic} to be unitary, and it would not have changed things in any essential way. The proof of Lemma \ref{ortho} is given in Section \ref{easypf} below.
\begin{lemma} \cite[example in Section 3]{lelongrash}, \cite{Rash}
\label{mass}
If $\Psi$ is an elementary local indicator, then $\tau_\Psi = m_1 \cdots m_n.$
\end{lemma}
We take the same definition of the generalized Green function as in \cite{lelongrash}.
\begin{defn}
\label{defgreen}
Let $\Omega $ be a bounded domain in ${\mathbb{C}}^n$.
Given $$ S := \{ (a_j,\Psi_j), 1\le j \le N\}, \mbox{ where } a_j \in \Omega, a_j \neq a_k \mbox{ for }j \neq k, \Psi_j \in \mathcal I, $$ its \emph{Green function} is \begin{multline*} G_S := \sup \left\lbrace u \in PSH(\Omega, \mathbb R_-) : u(z) \le \Psi_j (z) + C_j, \right. \\ \left. \mbox{ for } z \mbox{ in a neighborhood of } a_j, j=1,\dots,N \right\rbrace . \end{multline*} \end{defn} To generalize the Lempert function, the first step is to quantify the way in which an analytic disk, i.e. an element of $Hol ({\mathbb{D}},\Omega)$, meets a pole equipped with a generalized local indicator. \begin{defn} \label{multphia} Let $\alpha \in {\mathbb{D}}$, $a \in \Omega$, $\Psi \in \mathcal I$. Then the \emph{multiplicity} of $\varphi \in Hol ({\mathbb{D}},\Omega)$ at $\alpha$, with respect to $a$, is given by \begin{eqnarray*} \mbox{If } \varphi (\alpha) =a, & \mbox{ then } & m_{\varphi, a, \Psi} (\alpha) := \min \left( \tau_\Psi , \liminf_{\zeta \to 0} \frac{\Psi(\varphi(\alpha+\zeta)-a)}{\log |\zeta|} \right); \\ \mbox{if } \varphi(\alpha) \neq a , &\mbox{ then }& m_{\varphi, a, \Psi} (\alpha) :=0. \end{eqnarray*} \end{defn} Notice that if $\Psi_1-\Psi_2$ is locally bounded near the origin, then $m_{\varphi, a, \Psi_1} (\alpha) =m_{\varphi, a, \Psi_2} (\alpha)$. The quantity $\liminf_{\zeta \to 0} \frac{\Psi(\varphi(\alpha+\zeta)-a)}{\log |\zeta|}$ is exactly the Lelong number at $0$ of the subharmonic function $\Psi \circ \varphi$, compare with \cite[pp. 334--335]{RashSig}. Truncating at the level of the local Monge-Amp\`ere mass $\tau_\Psi$ will turn out to be convenient in Definition \ref{defLempert} and in the proofs that use it. It is useful to see what this means in the case of elementary local indicators.
{\bf Elementary examples.} Suppose that $\alpha=0$, $a=0$, and that $\Psi(z) = \max_ {1\le j \le n} m_j \log |z_j | .$ We write $$ \varphi(\zeta) = (\varphi_1(\zeta), \dots , \varphi_n(\zeta)), $$ and define the valuations $$ \nu_j := \nu_j (0,\varphi) := \min \{ k : (\frac{d}{d\zeta})^k \varphi_j(0) \neq 0 \}. $$ Then we have \begin{equation} \label{multex} m_{\varphi, 0, \Psi} (0) = \min \left( \min_ {1\le j \le n} m_j \nu_j , \prod_{j=1}^n m_j \right). \end{equation} {\bf Example 1.} If $m_j=1$ for all $j$, then $m_{\varphi, 0, \Psi} (0)=1$ if $\varphi(0)=0$, and $m_{\varphi, 0, \Psi} (0)=0$ otherwise. This is the basic case where one just records whether a point has been hit by the analytic disk or not. {\bf Example 2.} In more general cases, the use of an elementary local indicator will impose higher-order differential conditions on the map $\varphi$. For instance, if $m_1=2$ and $m_j=1$, $2\le j \le n$, then \begin{eqnarray*} m_{\varphi, 0, \Psi} (0)& =& 0 \mbox{ if } \varphi(0) \neq 0; \\ m_{\varphi, 0, \Psi} (0)& =& 1 \mbox{ if } \varphi(0) = 0 \mbox{ and } \varphi_j'(0) \neq 0 \mbox{ for some }j \in \{2,\dots, n\}; \\ m_{\varphi, 0, \Psi} (0)& =& 2 \mbox{ if } \varphi(0) = 0 \mbox{ and } \varphi_j'(0) = 0\mbox{ for all }j \in \{2,\dots, n\}. \end{eqnarray*} \begin{defn} \label{defLempert} Given a system $S$ as in Definition \ref{defgreen}, we write $\tau_j := \tau_{\Psi_j}$. Let $\varphi \in Hol({\mathbb{D}},\Omega)$ and $A_j \subset {\mathbb{D}}$, $1\le j \le N$. We say that $(\varphi, (A_j)_{1\le j \le N})$ is \emph{admissible (for $S$, $z$)} if $$ \varphi (0)=z; \quad A_j \subset \varphi^{-1}(a_j) \mbox{ and } \sum_{\alpha \in A_j} m_{\varphi, a_j, \Psi_j} (\alpha) \le \tau_j , 1\le j \le N. $$ In this case, we write (with the convention that $0 \cdot \infty =0$) $$ \mathcal S (\varphi, (A_j)_{1\le j \le N}):= \sum_{j=1}^N \sum_{\alpha \in A_j} m_{\varphi, a_j, \Psi_j} (\alpha) \log |\alpha| .
$$ Then the generalized Lempert function is defined by \begin{multline*} \mathcal L^\Omega_S (z):= \mathcal L_S (z) \\ := \inf \left\lbrace \mathcal S (\varphi, (A_j)_{1\le j \le N}) : (\varphi, (A_j)_{1\le j \le N}) \mbox{ is admissible for }S, z \right\rbrace . \end{multline*} \end{defn} Notice that we allow any of the $A_j$ to be the empty set (in which case the $j$-th term drops from the sum). Consider the \emph{single poles case} where \begin{equation} \label{sglp} \mbox{ for each }j, \quad \Psi_j (z) = \max_ {1\le l \le n} \log |z_l | ,\mbox{ or }\Psi_j (z) = \log \|z \| \end{equation} -- it is the same, since both functions differ by a bounded term near $0$; in fact, one could use any norm that is homogeneous under complex scalar multiplication. In this case, $\tau_j=1$ for every $j$. With a slight abuse of notation, we write $S = \{ a_1, \dots , a_N\}.$ Then $\mathcal L_S (z)= \min_{S'\subset S} \ell_{S'}(z)$, where $\ell_S$ is defined in \eqref{coman}. And in fact $\min_{S'\subset S} \ell_{S'}(z) = \ell_{S}(z)$ \cite{NiPfl} (see also \cite{Wikstrom}, \cite{WikstromAMS} for the case when the domain $\Omega$ is convex). The Lempert function is different from the functionals considered by Poletsky and others in that it is restricted to one pre-image per pole $a_j$ (thus the Lempert function can fail to be equal to the corresponding Green function). In our definition, the number of pre-images per pole is bounded above by the Monge-Amp\`ere mass at that pole of its generalized local indicator. In \cite{TraoTh}, each pole could only have one pre-image, but (essentially) $\varphi$ had to hit the pole with maximum multiplicity at that pre-image. Although Definition \ref{defLempert} may seem contrived, it is required to obtain a reasonable convergence result, Theorem \ref{conv}. See the discussion in Section \ref{compprev}. We remark right away that the usual relationship holds between this generalized Lempert function and the corresponding Green function.
\begin{lemma} \label{inegGL} For $\Omega $ a bounded domain, for any system $S$ as in Definition \ref{defgreen}, for any $z \in \Omega$, $G_S (z) \le \mathcal L_S (z)$. \end{lemma} \begin{proof} If $\varphi \in Hol({\mathbb{D}},\Omega)$, and $u \in PSH_- (\Omega)$ is a member of the defining family for the Green function of $S$, then $u \circ \varphi$ is subharmonic and negative on ${\mathbb{D}}$. Furthermore, if $(\varphi, (A_j)_{1\le j \le N})$ is admissible (for $S$, $z$) and $\alpha \in A_j$, then given any $\varepsilon>0$, for $|\zeta|$ small enough, $$ u \circ \varphi (\alpha +\zeta ) \le C_j + \Psi_j (\varphi (\alpha +\zeta )-a_j) \le C_j + (m_{\varphi, a_j, \Psi_j} (\alpha) - \varepsilon ) \log |\zeta| . $$ So $u \circ \varphi$ is a member of the defining family for the Green function on ${\mathbb{D}}$ with poles $\alpha$ and weights $m_{\varphi, a_j, \Psi_j} (\alpha) - \varepsilon$ at $\alpha$. This implies that $$ u \circ \varphi (\zeta) \le \sum_{j=1}^N \sum_{\alpha \in A_j} (m_{\varphi, a_j, \Psi_j} (\alpha) - \varepsilon ) \log \left| \frac{\alpha -\zeta}{1-\zeta \bar \alpha} \right| . $$ Letting $\varepsilon$ tend to $0$ and setting $\zeta=0$, we get $u (z) \le \mathcal S (\varphi, (A_j)_{1\le j \le N})$. Passing to the supremum over $u$, then to the infimum over $(\varphi, (A_j)_{1\le j \le N})$, we get the Lemma. \end{proof} \section{Main Results} \label{main} We start with a remark. \begin{lemma} \label{smallerS} If $S$ is as in Definition \ref{defgreen}, $1\le N'\le N$, and $$ S':= \{ (a_j,\Psi_j), 1\le j \le N'\}, $$ then for any $z \in \Omega$, $\mathcal L_{S'} (z) \ge \mathcal L_{S} (z)$. \end{lemma} \begin{proof} If we take $A_j=\emptyset$ for $N'+1 \le j \le N$, any member of the defining family for $\mathcal L_{S'} (z)$ becomes a member of the defining family for $\mathcal L_{S} (z)$, and the sum remains the same. \end{proof} The above lemma goes in the direction of monotonicity of the Lempert function with respect to its system of poles. 
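{\bf Example.} In dimension one, both functions can be computed explicitly, and Lemma \ref{inegGL} becomes an equality. Take $\Omega={\mathbb{D}}$ and let $S=\{a_1,\dots,a_N\}$ be as in the single poles case \eqref{sglp}. For $z\in {\mathbb{D}}$, the automorphism $\varphi(\zeta) := \frac{\zeta+z}{1+\bar z \zeta}$ of the disk satisfies $\varphi(0)=z$ and $\varphi(\alpha_j)=a_j$ for $\alpha_j := \frac{a_j-z}{1-\bar z a_j}$, so taking $A_j:=\{\alpha_j\}$ yields $$ \mathcal L_S(z) \le \sum_{j=1}^N \log|\alpha_j| = \sum_{j=1}^N \log \left| \frac{a_j-z}{1-\bar a_j z} \right| = G_S(z), $$ the right-hand side being the usual Green function of the disk with poles $a_1,\dots,a_N$; Lemma \ref{inegGL} then gives $\mathcal L_S(z)=G_S(z)$. Removing a pole from $S$ removes one negative term from the sum, which illustrates Lemma \ref{smallerS}.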
For the Green function, it is immediate that the more poles there are, the more negative the function must be. More generally, the more negative the generalized local indicators are (removing a pole corresponds to replacing a local indicator by $0$), the more negative the function must be. This is not immediately apparent in Definition \ref{defLempert}, but it does hold for elementary local indicators. \begin{theorem} \label{monotone} Let $\Omega$ be a bounded domain in ${\mathbb{C}}^n$, $$ S := \{ (a_j,\Psi_j), 1\le j \le N\}, S' := \{ (a_j,\Psi'_j), 1\le j \le N\}, \mbox{ where } a_j \in \Omega, $$ and $\Psi_j$, $\Psi'_j$, are elementary local indicators such that $\Psi_j \le \Psi'_j + C_j$ in a neighborhood of $0$, $C_j \in {\mathbb{R}}$, $1\le j \le N$. Then $\mathcal L_{S'} (z) \ge \mathcal L_{S} (z)$, for all $z \in \Omega$. \end{theorem} The proof is given in Section \ref{pfmono}. Now we turn to a result about the convergence of some families of (ordinary) Lempert functions with single poles, whose limits can be described naturally as generalized Lempert functions. Note that the proof of the next theorem does not require the relatively difficult Theorem \ref{monotone}, only the easy Lemma \ref{smallerS}. For $z\in {\mathbb{C}}^n \setminus \{0\}$, we denote by $[z]$ the equivalence class of $z$ in the complex projective space $\mathbb P^{n-1}$. \begin{theorem} \label{conv} Let $\Omega$ be a bounded and convex domain in ${\mathbb{C}}^n$. Let $0\le M \le N$ be integers. For $\varepsilon$ belonging to a neighborhood of $0$ in ${\mathbb{C}}$, using the simplified notation of the single pole case \eqref{sglp}, let $$ S(\varepsilon) := \left\lbrace a_j(\varepsilon), 1\le j\le M ; a'_j(\varepsilon), a''_j(\varepsilon), M+1\le j\le N \right\rbrace \subset \Omega.
$$ Suppose that all the points of $S(\varepsilon)$ are distinct for any fixed $\varepsilon$, that $$ \lim_{\varepsilon\to 0} a_j(\varepsilon) = a_j \in \Omega, 1\le j\le M ; $$ $$ \lim_{\varepsilon\to 0} a'_j(\varepsilon) = \lim_{\varepsilon\to 0} a''_j(\varepsilon) =a_j \in \Omega, M+1\le j\le N ; $$ and that \begin{equation} \label{limproj} \lim_{\varepsilon\to 0} [a''_j(\varepsilon)-a'_j(\varepsilon)] = [v_j], \end{equation} where the limit is with respect to the distance in $\mathbb P^{n-1}$ and the representative $v_j$ is chosen of unit norm. Let $\Psi_j (z) := \log \| z \|$, $1\le j\le M$. Denote by $\pi_j$ the orthogonal projection onto $\{v_j\}^\bot$, $M+1\le j\le N $, and by $\Psi_j$ the generalized local indicator $$ \Psi_j (z) := \max (\log \| \pi_j(z)\|, 2 \log |z\cdot \bar v_j|), \quad M+1\le j\le N. $$ Set $S:=\{(a_j,\Psi_j), 1 \le j \le N\}$. Then $$ \lim_{\varepsilon\to 0} \ell_{S(\varepsilon)} (z) = \lim_{\varepsilon\to 0} \mathcal L_{S(\varepsilon)} (z) = \mathcal L_{S} (z) \mbox{ for all } z \in \Omega. $$ \end{theorem} Remarks: (a) as in the comments after \eqref{sglp}, one could replace $\Psi_j$ by an elementary local indicator; (b) the convexity requirement is imposed by Lemma \ref{relax}, and we conjecture that it is not essential. Note that in the case where $a'_j(\varepsilon)=a_j$ does not depend on $\varepsilon$, the hypothesis \eqref{limproj} means that the point $a''_j(\varepsilon)$ converges to a limit in the blow-up of ${\mathbb{C}}^n$ around the point $a_j$. It seems to us that this is the only reasonable convergence result that can be obtained for a family of ordinary Lempert functions. If \eqref{limproj} is not satisfied, one can find two distinct limit points for our family of Lempert functions. Thus hypothesis \eqref{limproj} is required.
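{\bf Example.} To fix ideas, take $n=2$, $M=0$, $N=1$, and the pair of poles $a'_1(\varepsilon):=0$, $a''_1(\varepsilon):=(\varepsilon,\varepsilon^2)$ in a bounded convex domain $\Omega$ containing the origin. Then $[a''_1(\varepsilon)-a'_1(\varepsilon)]=[(1,\varepsilon)] \to [(1,0)]$ in $\mathbb P^1$, so hypothesis \eqref{limproj} holds with $v_1=(1,0)$; here $\pi_1(z)=(0,z_2)$ and $z\cdot \bar v_1 = z_1$, so $$ \Psi_1(z) = \max ( \log|z_2|, 2\log|z_1| ). $$ Theorem \ref{conv} then says that the ordinary Lempert functions with the two simple poles $0$ and $(\varepsilon,\varepsilon^2)$ converge, as $\varepsilon\to 0$, to the generalized Lempert function with the single weighted pole $(0,\Psi_1)$.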
We are restricting ourselves to the case where no more than two points converge to the same point: examples where three points converge to the origin in the bidisk are explicitly studied in \cite{Th3p}, and show that the situation leads to results that probably cannot be described in terms of our generalized local indicators. The proof is given in Section \ref{pfconv}. \section{Proof of Lemma \ref{ortho}} \label{easypf} Multiplying one of the vectors $v_j$ by a scalar only modifies the function $\Psi$ by a bounded additive term, so it will be enough to exhibit an orthogonal basis of vectors complying with the conclusion of the Lemma. Renumber the vectors $v_j$ so that we have $0\le m_1 \le \cdots \le m_n$. Using the Gram-Schmidt orthogonalization process, we produce an orthogonal system of vectors $\tilde v_k$ such that $\mbox{Span} (\tilde v_1, \dots, \tilde v_k) = \mbox{Span} ( v_1, \dots, v_k)$ for any $k$, $1\le k \le n$. We proceed by induction on the dimension $n$. When $n=1$ the property is immediate. Assume that the result holds up to dimension $n-1$. Write $$ \Psi_1(z) := \max_ {1\le j \le n-1} m_j \log |z \cdot \bar v_j | , \quad \tilde \Psi_1 (z) := \max_ {1\le j \le n-1} m_j \log |z \cdot \overline {{\tilde v}_j} |. $$ Denote $z_n:= z \cdot \overline {{\tilde v}_n} $. It is enough to obtain the estimates on a neighborhood $U$ of $0$. We choose it so that for $z \in U$, $|z_n|\le 1$ and $ \Psi_1 (z), \tilde \Psi_1 (z) \le 0$. Since $v_n=\tilde v_n - w$, where $w \in \mbox{Span}( v_1, \dots, v_{n-1})$, we have \begin{multline} \label{psiw} \Psi (z) = \max ( \Psi_1 (z'), m_n \log |z_n-z'\cdot \bar w |), \\ \tilde \Psi (z) = \max ( \tilde \Psi_1 (z'), m_n \log |z_n |), \end{multline} where $z'$ stands for the orthogonal projection of $z$ on $\mbox{Span}( v_1, \dots, v_{n-1})$ $=\mbox{Span}( \tilde v_1, \dots, \tilde v_{n-1})$.
By the induction hypothesis, $ \Psi_1=\tilde \Psi_1 + O(1),$ so it is enough to prove that $$ \Psi'(z):= \max ( \Psi_1 (z'), m_n \log |z_n |) $$ differs from $\Psi(z)$ by a bounded additive term. There is a constant $C_0>0$ such that $\Psi_1 (z') \ge m_{n-1} \log \| z' \| - \log C_0$, for $z'\in U$. Choose a constant $A>1$ large enough so that $ \|w\| (C_0/A)^{1/ m_{n-1}} < 1/2$. Then, since $ \Psi_1 (z) \le 0$ and $m_{n-1}\le m_{n}$, \begin{multline} \label{zw} |z'\cdot \bar w | \le \|w\| C_0^{1/ m_{n-1}} \exp( \frac{\Psi_1 (z') }{m_{n-1}}) \\ \le \|w\| C_0^{1/ m_{n-1}} \exp( \frac{\Psi_1 (z') }{m_{n}}). \end{multline} {\it Case 1.} $\Psi_1 (z') \ge m_n \log |z_n| - \log A.$ By the inequality above, $\Psi'(z) \le \Psi_1 (z') + \log A \le \Psi (z) + \log A$. On the other hand, using \eqref{zw}, we get $$ |z_n-z'\cdot \bar w |^{m_{n}} \le \left(A^{1/m_{n-1}} + \|w\| C_0^{1/m_{n-1}} \right)^{m_{n}} \exp( \Psi_1 (z') ), $$ so $\Psi(z) \le \Psi_1 (z') +O(1) \le \Psi' (z) +O(1)$. {\it Case 2.} $\Psi_1 (z') \le m_n \log |z_n| - \log A.$ Then \eqref{zw} and the choice of $A$ imply $$ |z'\cdot \bar w | \le \|w\| C_0^{1/ m_{n-1}} \exp\left( \log |z_n| -\frac{ \log A}{m_{n-1}} \right) \le \frac12 |z_n|, $$ thus \eqref{psiw} implies that $$ \Psi'(z) +\log \frac12 \le \Psi(z) \le \Psi'(z) +\log \frac32. $$ \section{Proof of Theorem \ref{monotone}} \label{pfmono} Without loss of generality, we may assume that $\tau'_j>0$ for all $j$. We have $\Psi_j \le \Psi'_j + C_j$ in a neighborhood of $0$ and $$ \mbox{supp}\, (dd^c \Psi_j)^n \subset \{ 0 \}, \quad \mbox{supp}\, (dd^c \Psi'_j)^n \subset \{ 0 \}. $$ Thus it follows from Bedford and Taylor's comparison theorem \cite{BT}, \cite[p. 126, Theorem 3.7.1]{Kli} that $\tau_j \ge \tau'_j>0$. For any $\alpha, a_j$, \begin{equation} \label{mgtm} m_{\varphi, a_j, \Psi_j} (\alpha) \ge m_{\varphi, a_j, \Psi'_j} (\alpha). 
\end{equation} Therefore \begin{equation} \label{comp_sums} \sum_{j=1}^N \sum_{\alpha \in A_j} m_{\varphi, a_j, \Psi_j} (\alpha) \log |\alpha| \le \sum_{j=1}^N \sum_{\alpha \in A_j} m_{\varphi, a_j, \Psi'_j} (\alpha) \log |\alpha| . \end{equation} To finish the proof, it suffices to show that the family over which we take the infimum is smaller for $\mathcal L_{S'} (z)$ than the one for $\mathcal L_{S} (z)$. This can be checked for each $j$ separately, hence we drop the index $j$. \begin{lemma} \label{comp_sets} Let $\Omega$ be a bounded domain in ${\mathbb{C}}^n$. If $\Psi, \Psi'$ are elementary local indicators such that $\Psi \le \Psi' + C$ and $\tau':=\tau_{\Psi'}>0$, if $A\subset {\mathbb{D}}$, $a\in \Omega$ and $\varphi \in Hol({\mathbb{D}},\Omega)$ verify $$ \sum_{\alpha \in A} m_{\varphi, a, \Psi'} (\alpha) \le \tau', $$ then $$ \sum_{\alpha \in A} m_{\varphi, a, \Psi} (\alpha) \le \tau:=\tau_{\Psi}. $$ \end{lemma} \begin{proof} Since the point $a$ plays no role, we assume $a=0$ and write $m_{\varphi, 0, \Psi} (\alpha)= m_{\varphi, \Psi} (\alpha)$. By \eqref{mgtm}, we may assume that $m_{\varphi, \Psi} (\alpha) >0$ for all $\alpha \in A$: if $m_{\varphi, \Psi} (\alpha) =0$, then $m_{\varphi, \Psi'} (\alpha) =0$ as well, so discarding such points changes neither sum in \eqref{comp_sums}. Using Lemma \ref{ortho}, we reduce ourselves to the case where the elementary local indicators are given by orthonormal systems of vectors. We use the same ``valuations'' as in the Elementary Example: $$ \nu_j (\alpha) := \nu_j (\alpha ,\varphi) := \min \{ k : (\frac{d}{d\zeta})^k (\varphi (\zeta) \cdot \bar v_j ) (\alpha )\neq 0 \}, $$ and $\nu'_j (\alpha)$ is defined analogously using the vectors $v'_j$. {\bf Case 1.} There exists $\alpha_0$ such that $m_{\varphi, \Psi'} (\alpha_0) = \tau'$. Then the hypothesis of Lemma \ref{comp_sets} implies that for all $\alpha \in A\setminus \{\alpha_0\}$, $m_{\varphi, \Psi'} (\alpha) =0$, so that $\min_{1\le k \le n} m'_k \nu'_k(\alpha)=0$. Since $\tau'>0$, we have $m'_k>0$ for all $k$, so there must exist $k$ such that $\nu'_k(\alpha)=0$.
Then $\varphi(\alpha)\neq 0$, which implies that $m_{\varphi, \Psi} (\alpha) =0$, and $$ \sum_{\alpha \in A} m_{\varphi, \Psi} (\alpha) = m_{\varphi, \Psi} (\alpha_0) \le \tau, $$ by definition of the multiplicity. {\bf Case 2.} For all $\alpha \in A$, $m_{\varphi, \Psi'} (\alpha) < \tau'$. Therefore $m_{\varphi, \Psi'} (\alpha) = \min_{1\le k \le n} m'_k \nu'_k(\alpha)$, and since we always have $m_{\varphi, \Psi} (\alpha) \le \min_{1\le k \le n} m_k \nu_k(\alpha)$, it becomes enough to work with those quantities in \eqref{comp_sums}. By dividing by $\tau$ and $\tau'$ respectively, it will be enough to prove the following Lemma. \end{proof} \begin{lemma} \label{dirmult} Under the hypotheses of Lemma \ref{comp_sets} and Case 2 above, for each $\alpha \in A$, $$ \frac{\min_{1\le k \le n} m'_k \nu'_k(\alpha)}{\prod_{k=1}^n m'_k} \ge \frac{\min_{1\le k \le n} m_k \nu_k(\alpha)}{\prod_{k=1}^n m_k} . $$ \end{lemma} \begin{proof} Since we are now dealing with a single $\alpha$, we also drop it from the notation. We introduce a binary relation on the index set $\{1,\dots,n\}$. \begin{defn} Given $k, l \in \{1,\dots,n\}$, we say that $k \mathcal R l$ if and only if $v_k \cdot \bar v'_l \neq 0$. \end{defn} \begin{lemma} \label{relmult} If $\Psi' + C \ge \Psi$ and $k \mathcal R l$, then $m_k \ge m'_l$. \end{lemma} \begin{proof} For any nonzero $\lambda \in \mathbb C$, $$ \Psi'(\lambda v'_l)= m'_l \log |\lambda| + m'_l \log \|v'_l\|^2, $$ while, for $|\lambda|$ small enough, $$ \Psi(\lambda v'_l)= \max_{1\le j \le n} \left( m_j (\log |\lambda| + \log |v_j \cdot \bar v'_l |) \right) =(\min_{k: k \mathcal R l} m_k) \log |\lambda| + O(1), $$ therefore by letting $\lambda$ tend to $0$ we see that $\min_{k: k \mathcal R l} m_k \ge m'_l$. 
\end{proof} \begin{lemma} \label{relval} If $\Psi' + C \ge \Psi$, then \begin{enumerate} \item $\nu'_l \ge \min \{ \nu_k : k \mathcal R l \},$ \item $\nu_k \ge \min \{ \nu'_l : k \mathcal R l \}.$ \end{enumerate} \end{lemma} \begin{proof} We will use and prove part (1) only. The other one has a similar proof. Since $v'_l$ is orthogonal to $v_k$ unless $k \mathcal R l$, there exist complex scalars $c_k$ such that $v'_l =\sum_{k : k \mathcal R l} c_k v_k$, thus for $\varphi $ as in Lemma \ref{comp_sets}, $$ \varphi(\zeta) \cdot \bar v'_l = \sum_{k : k \mathcal R l} \bar c_k \varphi(\zeta) \cdot \bar v_k . $$ Now take $m < \nu_k = \nu_k(\alpha, \varphi)$, for all $k$ such that $k \mathcal R l$. Then $$ (\frac{d}{d\zeta})^m (\varphi \cdot \bar v'_l ) (\alpha ) = \sum_{k : k \mathcal R l} \bar c_k (\frac{d}{d\zeta})^m (\varphi (\zeta) \cdot \bar v_k ) (\alpha) = 0, $$ so we must have $\nu'_l > m$, which proves the result. \end{proof} Now renumber the vectors $v'_l$ so that $\min_k (m'_k \nu'_k) = m'_1 \nu'_1$. Pick an index $k_0$ such that $k_0 \mathcal R 1$ and $\nu_{k_0} = \min \{ \nu_k : k \mathcal R 1 \}$. By renumbering the vectors $v_k$, we may assume $k_0=1$. By Lemma \ref{relval}, we then have $\nu'_1 \ge \nu_1$. The conclusion of Lemma \ref{dirmult} thus reduces to: \begin{equation} \label{ineqfin} \frac{ \nu'_1}{\prod_{k=2}^n m'_k} \ge \frac{\nu_1}{\prod_{k=2}^n m_k} . \end{equation} This is a consequence of the next result. \begin{lemma} \label{order} There exists a bijection $\sigma$ from $\{2,\dots,n\}$ onto itself such that for any $l \in \{2,\dots,n\}$, $ \sigma(l)\mathcal R l $. \end{lemma} This lemma will be proved below. It implies $$ \prod_{k=2}^n m_k = \prod_{l=2}^n m_{\sigma(l)} \ge \prod_{l=2}^n m'_l, $$ by Lemma \ref{relmult}, so \eqref{ineqfin} holds and this concludes the proof of Lemma \ref{dirmult}.
\end{proof} \begin{proof*}{\it Proof of Lemma \ref{order} } Denote $A:=(a_{kl})_{2\le k,l \le n} := (v_k \cdot \overline{v'_l})_{2\le k,l \le n}$. First we prove that this matrix is nonsingular. Let $\pi$ be the orthogonal projection on $\{v'_1\}^\bot$. Since $v'_l \bot v'_1$ for $l\ge 2$, we have $a_{kl} = \pi(v_k) \cdot \overline{v'_l}$, so $A$ is nonsingular as soon as the vectors $\pi(v_k)$, $2\le k \le n$, span $\{v'_1\}^\bot$. If $\mbox{rank}\, \{\pi(v_k), 2 \le k \le n\} < n-1$, there exists $w \in \{v'_1\}^\bot$, $w\neq 0$, such that $w \bot \pi(v_k)$, $2\le k \le n$. This implies $w \bot v_k$, $2\le k \le n$. Since we have orthogonal bases, $v_1= \lambda w$, for some $\lambda \in {\mathbb{C}}$. So $v_1 \cdot \overline{v'_1}=0$, which contradicts the fact that $1\mathcal R 1$. We construct the bijection $\sigma$ by induction on $n$. For $n=2$ it is obvious. Suppose that the property holds for $n-1$. Then $$ 0\neq \mbox{\rm det} A = \sum_{k=2}^n (-1)^k a_{k2} \mbox{\rm det} A_k, $$ where $A_k$ stands for the minor matrix with the first column and the $k$-th row removed. There must be some $k$ for which $a_{k2} \mbox{\rm det} A_k \neq 0$. Let $\sigma(2)=k$; the induction hypothesis gives us a bijection $\sigma'$ from $\{3,\dots,n\}$ to $\{2,\dots,n\}\setminus \{k\}$ such that $a_{\sigma'(l)l}\neq 0$, and this finishes the proof. \end{proof*} \section{Proof of Theorem \ref{conv}} \label{pfconv} \begin{proof*}{\it Proof of Theorem \ref{conv} } First observe that we can relax the conditions used in Definition \ref{defLempert}. \begin{lemma} \label{relax} Let $\Omega$ be a convex bounded domain in $\mathbb C^n$ containing the origin, and let $z\in \Omega$. (i) Let $a_j \in \Omega$, $\Psi_j \in \mathcal I$ and, as in Definition \ref{defgreen}, $$ S := \left\lbrace (a_j, \Psi_j), 1\le j\le N \right\rbrace .
$$ Suppose that for any $\delta >0$, there exists a map $\varphi^\delta $ holomorphic from ${\mathbb{D}}$ to $(1+\delta)\Omega$ and sets $(A_j(\delta))_{1\le j \le N}$ such that $(\varphi^\delta, (A_j(\delta))_{1\le j \le N})$ is admissible for $ S,z$ with respect to $(1+\delta)\Omega$ and $$ \mathcal S (\varphi^\delta, (A_j(\delta))_{1\le j \le N}) \le \ell + h(\delta), $$ where $ h(\delta)\ge 0$, $\lim_{\delta \to 0} h(\delta)= 0$. Then $\mathcal L^\Omega_S (z) \le \ell.$ (ii) For $\varepsilon$ in a neighborhood $V$ of $0$ in ${\mathbb{C}}$, let $a_j(\varepsilon) \in \Omega$, $1\le j\le N$. $$ S(\varepsilon) := \left\lbrace (a_j(\varepsilon), \Psi_j), 1\le j\le N \right\rbrace. $$ Let $g: V \longrightarrow {\mathbb{R}}_+^*$ be such that $\lim_{\varepsilon\to0}g(\varepsilon)=0$. Then $$ \limsup_{\varepsilon\to0} \mathcal L^\Omega_{S(\varepsilon)} (z) \le \limsup_{\varepsilon\to0} \mathcal L^{(1+g(\varepsilon))\Omega}_{S(\varepsilon)} (z). $$ \end{lemma} \begin{proof} Without loss of generality, we may assume $z=0$. Let $$ \Omega_r := \left\{ \varphi (\zeta) : \varphi \in Hol({\mathbb{D}}, \Omega), \varphi(0)=0, |\zeta|<r \right\}. $$ A bounded convex domain is Kobayashi complete hyperbolic \cite[Proposition 6.9 (b), p. 88]{Dineen}, so $\Omega_r $ is relatively compact in $\Omega $. Let $\rho_\Omega$ stand for the Minkowski function of $\Omega$: $$ \rho_\Omega(z):= \inf \{ r>0 : \frac{z}r \in \Omega\}. $$ We set $\gamma_\Omega (r) := \sup_{\Omega_r } \rho_\Omega.$ The function $\gamma$ is increasing and continuous from $(0,1)$ to itself. For any $\mu \in (0,1)$, $\phi \in Hol({\mathbb{D}}, {\mathbb{C}}^n)$, denote $\phi_\mu (\zeta) := \phi (\mu\zeta)$. Note that for any points and generalized local indicators, $m_{\phi_\mu, a, \Psi} (\alpha/\mu) = m_{\phi, a, \Psi} (\alpha)$. 
Take $\varphi^\delta$ as in Part (i) of the Lemma, in particular $\varphi^\delta(0)=0$, so by construction of $\gamma$, $$ \frac1{(1+\delta)}\varphi_\mu^\delta ({\mathbb{D}}) \subset \gamma(\mu) \Omega . $$ Choose some $\mu(\delta)$ such that $ \gamma(\mu(\delta)) = (1+\delta)^{-1}$, and set $\tilde \varphi^\delta := \varphi_{\mu(\delta)}^\delta $, then $\tilde \varphi^\delta \in Hol({\mathbb{D}}, \Omega)$. Note that $\lim_{\delta\to 0} \mu(\delta)=1$, by the relative compactness of each $\Omega_r$. Let $$ \tilde A_j (\delta) := \left\lbrace \frac{\alpha}{\mu(\delta)}: \alpha \in A_j (\delta), |\alpha|< \mu(\delta) \right\rbrace . $$ Then \begin{multline} \label{contract} \left| \mathcal S (\tilde \varphi^\delta, (\tilde A_j(\delta))_{1\le j \le N}) - \mathcal S (\varphi^\delta, (A_j(\delta))_{1\le j \le N}) \right| \\ = \left| \sum_j \sum_{\alpha \in A_j, |\alpha|< \mu(\delta)} m_{\varphi^\delta, a_j, \Psi_j} (\alpha) | \log \mu(\delta)| - \sum_j \sum_{\alpha \in A_j, |\alpha|\ge \mu(\delta)} m_{\varphi^\delta, a_j, \Psi_j} (\alpha) \log |\alpha| \right| \\ \le 2 (\sum_j \tau_{\Psi_j}) |\log \mu(\delta)|, \end{multline} and this last quantity tends to $0$, which concludes the proof of (i). To prove (ii), take maps $\varphi^\varepsilon$ and systems of points $(A_j(\varepsilon))$, admissible for $S(\varepsilon)$, such that $$ \lim_{\varepsilon\to 0} \mathcal S ( \varphi^\varepsilon, ( A_j(\varepsilon))_{1\le j \le N}) = \limsup_{\varepsilon\to0} \mathcal L^{(1+g(\varepsilon))\Omega}_{S(\varepsilon)} (0). 
$$ Use the above proof with $\delta=g(\varepsilon)$ to construct maps $\tilde \varphi^\varepsilon$ into $\Omega$ and systems of points $(\tilde A_j(\varepsilon))$, admissible for $S(\varepsilon)$, such that \begin{equation*} \left| \mathcal S (\tilde \varphi^\varepsilon, (\tilde A_j(\varepsilon))_{1\le j \le N}) - \mathcal S (\varphi^\varepsilon, (A_j(\varepsilon))_{1\le j \le N}) \right| \le 2 (\sum_j \tau_{\Psi_j}) |\log \mu(g(\varepsilon))|, \end{equation*} and by definition $\mathcal S (\tilde \varphi^\varepsilon, (\tilde A_j(\varepsilon))_{1\le j \le N}) \ge \mathcal L^\Omega_{S(\varepsilon)} (0) $. \end{proof} Consider as in Theorem \ref{conv} a bounded convex domain $\Omega$, and distinct points $a_j \in \Omega$, $1\le j \le N$. Let $z \in \Omega \setminus \{a_j, 1 \le j \le N\}$ (otherwise the property is trivially true). Again we may assume $z=0$. By Lemma \ref{relax} applied to $S(\delta)=S$ for any $\delta$, to show that \begin{equation} \label{liminfgr} \mathcal L_S (z) \le \liminf_{\varepsilon\to 0} \mathcal L_{S(\varepsilon)} (z)=: \ell , \end{equation} it will be enough to provide an increasing function $g$ such that $g(0)=0$ and, for any $\delta>0$, a map $\varphi^\delta \in Hol({\mathbb{D}},(1+g(\delta))\Omega)$ and subsets $(A_j^\delta )_{1\le j \le N}$ of ${\mathbb{D}}$ such that $(\varphi^\delta , (A_j^\delta )_{1\le j \le N})$ is admissible for $S, z$ with respect to $(1+g(\delta))\Omega$ and that $$ \mathcal S (\varphi^\delta, (A_j^\delta)_{1\le j \le N}) = \ell.
$$ The systems $S(\varepsilon)$ all have single poles, so the definition of $\ell$ means that there exist $\varphi_m \in Hol ({\mathbb{D}}, \Omega)$, $\varepsilon_m\to 0$, and points $\alpha_{j,m}, \alpha'_{j,m}, \alpha''_{j,m} \in {\mathbb{D}} $ such that $\varphi_m ( \alpha_{j,m})= a_j(\varepsilon_m),$ $1\le j \le M$, and $\varphi_m ( \alpha'_{j,m})= a'_j(\varepsilon_m),$ $\varphi_m ( \alpha''_{j,m})= a''_j(\varepsilon_m),$ $M+1\le j \le N$; and they satisfy $$ \sum_{j=1}^M \log | \alpha_{j,m} | + \sum_{j=M+1}^N \left( \log |\alpha'_{j,m} | + \log | \alpha''_{j,m} | \right) = \ell + \delta (m) , $$ with $\lim_{m\to \infty} \delta (m) =0$. Passing to a subsequence, for which we keep the same notations, we may assume that $\alpha_{j,m}\to \alpha_{j} \in \overline {\mathbb{D}},$ $\alpha'_{j,m}\to \alpha'_{j} \in \overline {\mathbb{D}},$ $\alpha''_{j,m}\to \alpha''_{j} \in \overline {\mathbb{D}}$ as $m\to \infty$, and that $\varphi_m \to \tilde \varphi \in Hol ({\mathbb{D}}, \overline \Omega)$ uniformly on compact subsets of ${\mathbb{D}}$. Furthermore, by compactness of the unit circle, there exists $\tilde {v_j} \in [v_j] \cap S^{2n-1}$ such that, taking a further subsequence, $$ \lim_{m\to \infty} \frac{a''_j(\varepsilon_m)-a'_j(\varepsilon_m)}{\| a''_j(\varepsilon_m)-a'_j(\varepsilon_m) \|} = \tilde {v_j}. $$ By renumbering the points and exchanging $a'_j$ and $a''_j$ as needed, we may assume that there are integers $M' \le M$, $M\le N_1 \le N_2 \le N_3 \le N$ such that \begin{eqnarray*} \alpha_j \in {\mathbb{D}} &\mbox{ for }& 1 \le j \le M' \\ \alpha_j \in \partial {\mathbb{D}} &\mbox{ for }& M'+1 \le j \le M\\ \alpha'_j = \alpha''_j\in {\mathbb{D}} &\mbox{ for }& M+1 \le j \le N_1 \\ |\alpha'_j| < |\alpha''_j| <1 &\mbox{ for }& N_1+1 \le j \le N_2 \\ |\alpha'_j |<1 , |\alpha''_j| =1 &\mbox{ for }& N_2+1 \le j \le N_3 \\ |\alpha'_j|=|\alpha''_j|=1 &\mbox{ for }& N_3+1 \le j \le N. 
\end{eqnarray*} Then \begin{multline*} \ell = \lim_{m\to\infty} \left( \sum_{j=1}^M \log |\alpha_{j,m}| + \sum_{j=M+1}^N \left( \log |\alpha'_{j,m}| + \log |\alpha''_{j,m}|\right) \right) \\ = \sum_{j=1}^{M'} \log |\alpha_{j}| + \sum_{j=M+1}^{N_1} 2 \log |\alpha'_{j}| + \sum_{j=N_1+1}^{N_2} \left( \log |\alpha'_{j}| + \log |\alpha''_{j}| \right) + \sum_{j=N_2+1}^{N_3} \log |\alpha'_{j}| . \end{multline*} Now we choose \begin{eqnarray*} A_j = \{\alpha_j \} &\mbox{ for }& 1 \le j \le M' \\ A_j = \emptyset &\mbox{ for }& M'+1 \le j \le M\\ A_j = \{\alpha'_j \} &\mbox{ for }& M+1 \le j \le N_1 \\ A_j = \{\alpha'_j, \alpha''_j\} &\mbox{ for }& N_1+1 \le j \le N_2 \\ A_j = \{\alpha'_j \} &\mbox{ for }& N_2+1 \le j \le N_3 \\ A_j = \emptyset &\mbox{ for }& N_3+1 \le j \le N. \end{eqnarray*} Notice that $(\tilde \varphi, (A_j)_{1\le j \le N})$ hits the correct points but doesn't necessarily produce an admissible choice, because for some $j$, $N_1+1 \le j \le N_2$, we could have $$ m_{\tilde \varphi, a_j, \Psi_j} (\alpha'_j) + m_{\tilde \varphi, a_j, \Psi_j} (\alpha''_j) > 2=\tau_j. $$ So, in order to apply Lemma \ref{relax} with $\delta\to 0$, we set $A_j^\delta =A_j$ for any $\delta>0$ and $$ \tilde \varphi^\delta (\zeta) := \tilde \varphi (\zeta) + \delta \zeta \left[ \prod_1^{M'} (\zeta-\alpha_j) \prod_{M+1}^{N_1} (\zeta-\alpha'_j)^2 \prod_{N_1+1}^{N_2} (\zeta-\alpha'_j) (\zeta-\alpha''_j) \prod_{N_2+1}^{N_3} (\zeta-\alpha'_j) \right] \, v, $$ where $v \in {\mathbb{C}}^n$ is a unit vector chosen such that $\pi_j(v) \neq 0$, $N_1+1 \le j \le N_3$. For any $\alpha \in \cup_1^N A_j$, $\tilde \varphi^\delta (\alpha) = \tilde \varphi (\alpha)$. There is a constant $C>0$ such that $\tilde \varphi^\delta ({\mathbb{D}}) \subset \Omega + C \delta B(0,1)$. All the following considerations apply when $\delta$ is small enough. 
For $1\le j \le M'$, $m_{\tilde \varphi^\delta, a_j, \Psi_j} (\alpha_j)=1$, because $\tilde \varphi^\delta$ takes on the correct value, and the multiplicity cannot be more than $1=\tau_j$ in those cases. For $N_1+1\le j \le N_3$, we have $$ \pi_j ((\tilde \varphi^\delta)'(\alpha'_j)) = \pi_j ((\tilde \varphi)'(\alpha'_j)) + \delta p_j \pi_j(v), $$ where $p_j$ is some complex scalar which does not depend on $\delta$, so for $\delta >0$ and small enough, this projection does not vanish and we have $m_{\tilde \varphi^\delta, a_j, \Psi_j} (\alpha'_j)=1$. An analogous argument shows that $m_{\tilde \varphi^\delta, a_j, \Psi_j} (\alpha''_j)=1$ for $N_1+1\le j \le N_2$. For $M+1 \le j \le N_1$, we have $$ (\tilde \varphi^\delta)'(\alpha'_j) = (\tilde \varphi)'(\alpha'_j), $$ and by the uniform convergence on compact sets, $$ (\tilde \varphi)'(\alpha'_j) = \lim_{m\to\infty} \frac{\varphi_m (\alpha'_{j,m})-\varphi_m (\alpha''_{j,m})}{\alpha'_{j,m}-\alpha''_{j,m}} = \lim_{m\to\infty} \frac{a'_j(\varepsilon_m)-a''_j(\varepsilon_m)}{\alpha'_{j,m}-\alpha''_{j,m}}, $$ which must be collinear with $v_j$ by definition. Therefore $m_{\tilde \varphi^\delta, a_j, \Psi_j} (\alpha'_j)=2$ for $M+1 \le j \le N_1$. Thus $(\tilde \varphi^\delta, (A_j)_{1\le j \le N})$ is admissible for $S, 0$ and $\mathcal S(\tilde \varphi^\delta, (A_j)_{1\le j \le N})=\ell$, which proves \eqref{liminfgr}. Now we need to show that \begin{equation} \label{limsupsm} \mathcal L_S(z) \ge \limsup_{\varepsilon\to0} \mathcal L_{S(\varepsilon)}(z). \end{equation} We use Lemma \ref{relax}(ii).
For any $\delta >0$, we need to construct a positive function $g$ such that $\lim_{\varepsilon\to0}g(\varepsilon)=0$ and, for $\varepsilon$ small enough, $\varphi^\varepsilon \in Hol({\mathbb{D}}(1+g(\varepsilon))\Omega)$ and sets $(A_j(\varepsilon))_{1\le j \le N}$ such that $ (\varphi^\varepsilon, (A_j(\varepsilon))_{1\le j \le N})$ is admissible for $S(\varepsilon),0$ and $$ \mathcal S (\varphi^\varepsilon, (A_j(\varepsilon))_{1\le j \le N}) \le \mathcal L_S(z) +\delta. $$ We start with an admissible choice $(\varphi, (A_j)_{1\le j \le N})$ for $S$, such that $$ \mathcal S (\varphi, (A_j)_{1\le j \le N}) \le \mathcal L_S (z) + \delta/2. $$ To fix notations, suppose that, after renumbering and exchanging the points as needed, there exist integers $M'\le M$, $N_1, N_2, N_3 \in \{ M, \dots N \}$ such that \begin{eqnarray*} A_j = \{\alpha_j \} &\mbox{ for }& 1 \le j \le M' ,\\ A_j = \emptyset &\mbox{ for }& M'+1 \le j \le M,\\ A_j = \{\alpha'_j \}, m_{\varphi, a_j, \Psi_j} (\alpha'_j)=2 &\mbox{ for }& M+1 \le j \le N_1 ,\\ A_j = \{\alpha'_j, \alpha''_j\}, \alpha'_j \neq \alpha''_j &\mbox{ for }& N_1+1 \le j \le N_2 ,\\ A_j = \{\alpha'_j \}, m_{\varphi, a_j, \Psi_j} (\alpha'_j)=1 &\mbox{ for }& N_2+1 \le j \le N_3 ,\\ A_j = \emptyset &\mbox{ for }& N_3+1 \le j \le N. \end{eqnarray*} The definition of $\Psi_j$ (see the computations performed in the Elementary example) implies that, for $M+1 \le j \le N_1$, $\varphi'(\alpha'_j) \cdot \bar w = 0$, for any $w \in v_j^\perp$. We perturb $\varphi$ to make sure that, on the other hand, $\varphi'(\alpha'_j) \cdot \bar v_j \neq 0$ in the same index range. 
For $\eta(\varepsilon) \in \mathbb C$ to be chosen later, set \begin{multline*} \tilde \varphi (\zeta) := \varphi (\zeta) + \\ \eta(\varepsilon) \, \left[ \zeta \, \prod_1^{M'} (\zeta - \alpha_j) \prod_{j=N_1+1}^{N_2} (\zeta - \alpha'_j)(\zeta - \alpha''_j) \prod_{j=N_2+1}^{N_3} (\zeta - \alpha'_j) \right] \times \\ \times \left\{ \sum_{j=M+1}^{N_1} \left[ (\zeta - \alpha'_j) \prod_{M+1\le k \le N_1, k\neq j} (\zeta - \alpha'_k)^2 \right] \, v_j \right\}. \end{multline*} The map $\tilde \varphi$ depends on $\varepsilon$ and is admissible again. We have positive constants $C_1, C_2, C_3$ such that \begin{itemize} \item $\tilde \varphi'(\alpha'_j)=\lambda_j v_j$, with $C_1^{-1} |\eta(\varepsilon)| \le |\lambda_j | \le C_1 |\eta(\varepsilon)| $, \item $\|\tilde \varphi - \varphi\|_\infty \le C_2 |\eta(\varepsilon)|$, \item $\tilde \varphi({\mathbb{D}}) \subset (1+C_3|\eta(\varepsilon)|) \Omega$; \end{itemize} in particular $\tilde \varphi$ will be bounded by constants independent of $\varepsilon$, along with all its derivatives on any given compact subset of ${\mathbb{D}}$. For $M+1 \le j \le N$ and $\varepsilon$ in a neighborhood of $0$, $a''_j(\varepsilon) - a'_j(\varepsilon) = n_j(\varepsilon) v_j(\varepsilon)$, where $\|v_j(\varepsilon)\|=1$, $\lim_{\varepsilon\to 0} v_j(\varepsilon) = v_j$ and $n_j(\varepsilon) \in {\mathbb{C}}$. For $|\varepsilon|$ small enough, we now may define \begin{multline*} A_j(\varepsilon) := A_j, \mbox{ for } 1 \le j \le M, N_1+1 \le j \le N, \\ \mbox{ and } A_j(\varepsilon) :=\{ \alpha'_j, \alpha'_j +\frac{n_j(\varepsilon)}{\lambda_j} \}, \mbox{ for } M+1 \le j \le N_1. \end{multline*} We shall need to add to $\tilde \varphi$ a vector-valued correcting term obtained by Lagrange interpolation. To this end, we write $B(\varepsilon) := \cup_j A_j(\varepsilon)$, and values to be interpolated, $w (\alpha)$, for $\alpha \in B(\varepsilon)$. 
Let \begin{eqnarray*} w(\alpha_j ) := a_j(\varepsilon) - a_j = a_j(\varepsilon) - \tilde \varphi(\alpha_j ) &\mbox{ for }& 1 \le j \le M' ,\\ w(\alpha'_j ) := a'_j(\varepsilon) - a_j = a'_j(\varepsilon) - \tilde \varphi(\alpha'_j ) &\mbox{ for }& M+1 \le j \le N_1 ,\\ w(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) := a''_j(\varepsilon) - \tilde \varphi(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) &\mbox{ for }& M+1 \le j \le N_1 ,\\ w(\alpha'_j ) := a'_j(\varepsilon) - a_j = a'_j(\varepsilon) - \tilde \varphi(\alpha'_j ) &\mbox{ for }& N_1+1 \le j \le N_2 ,\\ w(\alpha''_j ) := a''_j(\varepsilon) - a_j = a''_j(\varepsilon) - \tilde \varphi(\alpha''_j ) &\mbox{ for }& N_1+1 \le j \le N_2 ,\\ w(\alpha'_j ) := a'_j(\varepsilon) - a_j = a'_j(\varepsilon) - \tilde \varphi(\alpha'_j ) &\mbox{ for }& N_2+1 \le j \le N_3 . \end{eqnarray*} We denote by $P_\varepsilon$ the solution to the interpolation problem $$ \left(P(\alpha)= w(\alpha): \alpha \in B(\varepsilon) \right). $$ Let $\varphi^\varepsilon := \tilde \varphi + P_\varepsilon \in Hol({\mathbb{D}}, \Omega^\varepsilon)$. The domain $\Omega^\varepsilon$ will be specified below. By construction $ (\varphi^\varepsilon, (A_j(\varepsilon))_{1\le j \le N})$ is admissible for $S(\varepsilon)$, and for $|\varepsilon|$ small enough, $$ \mathcal S (\varphi^\varepsilon, (A_j(\varepsilon))_{1\le j \le N}) \le \mathcal L_S(z) +\delta, $$ provided that, for $M+1 \le j \le N_1$, \begin{equation} \label{lambdacond} \lim_{\varepsilon\to 0} \frac{n_j(\varepsilon)}{\lambda_j} = 0. \end{equation} Now we need to show that the correction is small, more precisely, that we can choose $\eta(\varepsilon)$ so that the above condition is satisfied and $\lim_{\varepsilon\to 0} \|P_\varepsilon\|_\infty = 0$. Then we can choose a function $g$ tending to $0$ such that $$ \Omega^\varepsilon = (1+g(\varepsilon) ) \Omega \supset (1+C_3|\eta(\varepsilon)|) \Omega + B(0,\|P_\varepsilon\|_\infty).
$$ Write $\Pi_\alpha$ for the unique (scalar) polynomial of degree less or equal to $d:= \# B(\varepsilon) - 1$ ($d$ does not depend on $\varepsilon$) such that $$ \Pi_\alpha (\alpha) = 1, \Pi_\alpha (\beta) = 0 \mbox{ for any } \beta \in B(\varepsilon) \setminus \{\alpha\}. $$ Then $$ P_\varepsilon = \sum_{\alpha \in B(\varepsilon)} \Pi_\alpha w(\alpha). $$ For $\alpha \in \bigcup_{1 \le j \le M, N_1+1 \le j \le N} A_j$, $\|\Pi_\alpha\|_\infty$ is uniformly bounded, because $\mbox{dist} (\alpha, B(\varepsilon) \setminus \{\alpha\}) \ge \gamma > 0$ with $\gamma$ independent of $\varepsilon$. It also follows from the hypotheses of the theorem and the choice of $w$ that $$ \lim_{\varepsilon\to 0} \, \max \{ \|w(\alpha)\| , \alpha \in \bigcup_{1 \le j \le M, N_1+1 \le j \le N} A_j\} = 0. $$ For $M+1 \le j \le N_1$, we need an elementary lemma about Lagrange interpolation. \begin{lemma} \label{Lagrange} Let $x_0, \dots , x_d \in {\mathbb{D}}$, $w_0, w_1 \in {\mathbb{C}}^n$. Suppose that there exists $\gamma >0$ such that $|x_0-x_1|\le \gamma $ and $\mbox{dist} ([x_0,x_1], \{x_2, \dots, x_d\})\ge 2 \gamma$, where $[x_0,x_1]$ is the real line segment from $x_0$ to $x_1$. Let $P$ be the unique (${\mathbb{C}}^n$-valued) polynomial of degree less or equal to $d$ such that $$ P(x_0)=w_0, P(x_1)=w_1, P(x_j)=0, 2 \le j \le d. $$ Then there exist constants $L_1, L_0$ depending only on $\gamma$ and $d$ such that $$ \sup_{\zeta \in {\mathbb{D}}} \|P(\zeta)\| \le L_1 \left\| \frac{w_1-w_0}{x_1-x_0} \right\| + L_0 \|w_0\|. $$ \end{lemma} We will prove this Lemma a little later. 
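Before the proof, here is a quick numerical illustration of the bound in Lemma \ref{Lagrange} (this snippet is our own sanity check, not part of the argument; the nodes $0$, $h$, $\pm 0.6$ and the divided difference $2$ are arbitrary choices): as the two close nodes merge, $\sup_{\mathbb D}|P|$ stays bounded even though each Lagrange basis polynomial individually blows up like $1/h$.

```python
import numpy as np

def lagrange_basis(alpha, nodes):
    """The basis polynomial equal to 1 at alpha and 0 at the other nodes."""
    others = [b for b in nodes if b != alpha]
    return lambda z: np.prod([(z - b) / (alpha - b) for b in others], axis=0)

# sample points covering the closed unit disk
t = np.linspace(0, 2 * np.pi, 200)
r = np.linspace(0, 1, 50)
grid = (r[:, None] * np.exp(1j * t[None, :])).ravel()

sups = []
for h in [0.1, 0.01, 0.001]:
    nodes = [0.0, h, 0.6, -0.6]
    w0, w1 = 1.0, 1.0 + 2.0 * h          # divided difference (w1 - w0)/h = 2
    P = lambda z: lagrange_basis(0.0, nodes)(z) * w0 + lagrange_basis(h, nodes)(z) * w1
    sups.append(np.abs(P(grid)).max())
```

The suprema stay of the same moderate size for all three gaps $h$, as the lemma predicts.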
It yields, for $M+1 \le j \le N_1$, \begin{multline*} \sup_{\zeta \in {\mathbb{D}}} \left\| \Pi_{\alpha'_j}(\zeta) w(\alpha'_j ) + \Pi_{\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}} (\zeta) w(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) \right\| \\ \le L_1 \left|\frac{\lambda_j} {n_j(\varepsilon)}\right| \left\| a''_j(\varepsilon) - \tilde \varphi(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) \right\| + L_0 \|a'_j(\varepsilon) - a_j\|. \end{multline*} We now estimate the first term in the last sum above. By the Taylor formula, $$ a''_j(\varepsilon) - \tilde \varphi(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) = a''_j(\varepsilon) - a'_j(\varepsilon) - n_j(\varepsilon) v_j + R_2 (\varepsilon) = n_j(\varepsilon) ( v_j(\varepsilon) - v_j ) + R_2 (\varepsilon), $$ where $\| R_2 (\varepsilon)\| \le C |n_j(\varepsilon)|^2|\lambda_j|^{-2}$ with $C$ a constant independent of $\varepsilon$ by the boundedness of the derivatives of $\tilde \varphi$. Finally \begin{multline*} \sup_{\zeta \in {\mathbb{D}}} \left\| \Pi_{\alpha'_j}(\zeta) w(\alpha'_j ) + \Pi_{\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}} (\zeta) w(\alpha'_j + \frac{n_j(\varepsilon)}{\lambda_j}) \right\| \\ \le C \left( \|v_j(\varepsilon) - v_j\| |\eta(\varepsilon)| + |n_j(\varepsilon)||\eta(\varepsilon)|^{-1} + \|a'_j(\varepsilon) - a_j\| \right). \end{multline*} To satisfy \eqref{lambdacond}, we need to have $\lim_{\varepsilon\to 0} n_j(\varepsilon)/\eta(\varepsilon)=0$; to make sure, in addition, that the whole sum above tends to $0$ as $\varepsilon$ tends to $0$, it will be enough to choose $\eta(\varepsilon)$ going to zero, but more slowly than $|n_j(\varepsilon)| =\|a''_j(\varepsilon) - a'_j(\varepsilon)\|$, for $M+1 \le j \le N_1$. \end{proof*} \begin{proof*}{\it Proof of Lemma \ref{Lagrange}} Let $$ Q(X,Y):= \prod_2^d \frac{X-x_k}{Y-x_k}. $$ Then $Q$ and all of its derivatives are bounded for $X \in \overline {\mathbb{D}}$ and $Y\in [x_0,x_1]$.
\begin{multline*} P(X) = \frac{X-x_0}{x_1-x_0} Q(X,x_1) w_1 + \frac{X-x_1}{x_0-x_1} Q(X,x_0) w_0 \\ = \frac{w_1 - w_0}{x_1-x_0} (X-x_0) Q(X, x_1) + \left( Q(X,x_1) - (X-x_1) \frac{Q(X, x_1)-Q(X, x_0)}{x_0-x_1} \right) \, w_0. \end{multline*} Then the conclusion follows from the boundedness of $Q$ and $Q'$ and the mean value theorem. \end{proof*} \section{Comparison with previous results} \label{compprev} In \cite{TraoTh}, we had used a different definition for a Lempert function with multiplicities. We state it with the same notations as in Definition \ref{defLempert}. \begin{defn} \label{prevdef} Given a system $S$ as in Definition \ref{defgreen}, we write $\tau_j := \tau_{\Psi_j}$. Let $\varphi \in Hol({\mathbb{D}},\Omega)$ and $\alpha_j \in {\mathbb{D}}$, $1\le j \le N$. We say that $(\varphi, (\alpha_j)_{1\le j \le N})$ is \emph{admissible (for $S$, $z$) in the old sense} if \begin{multline*} \varphi (0)=z, \mbox{ and there exists } U_j \mbox{\rm\ a neighborhood of }\alpha_j \\ \mbox{ s.t. } \Psi_j (\varphi(\zeta) -a_j) \le \tau_j \log|\zeta-\alpha_j| + C_j, \forall \zeta \in U_j, 1\le j \le N. \end{multline*} In this case, we write (with the convention that $0 \cdot \infty =0$) $$ \mathcal S (\varphi, (\alpha_j)_{1\le j \le N}):= \sum_{j=1}^N \tau_j \log |\alpha_j| . $$ Then the \emph{old generalized Lempert function} is defined by \begin{multline*} L^\Omega_S (z):= L_S (z) \\ := \inf \left\lbrace \mathcal S (\varphi, (\alpha_j)_{1\le j \le N}) : (\varphi, (\alpha_j)_{1\le j \le N}) \mbox{ is admissible for }S, z \mbox{ in the old sense } \right\rbrace . \end{multline*} \end{defn} Recall also that since the functional $L$ did not enjoy monotonicity properties, another definition was given in \cite{TraoTh}. \begin{defn} \label{modifLemp} Let $S:= \{(a_j,\Psi_j): 1 \le j \le N\}$ and $S_1:= \{(a_j,\Psi^1_j): 1 \le j \le N\}$ where $a_j \in \Omega$ and $\Psi_j$, $\Psi^1_j$ are local indicators.
We define $$ \tilde L_S (z) := \inf \{ L_{S_1}(z) : \Psi^1_j \ge \Psi_j + C_j, 1 \le j \le N\} . $$ \end{defn} \begin{lemma} \label{comparison} If $S=\{(a_j, \Psi_j), 1\le j \le N\}$, where the $\Psi_j$ are elementary local indicators, then for any $z \in \Omega$, $\mathcal L_S (z) \le \tilde L_S (z)$. \end{lemma} \begin{proof} Since the functional $\mathcal L$ is monotonic by Theorem \ref{monotone}, it will be enough to show that $\mathcal L_S (z) \le L_S (z)$ for any system $S$. If we have a map $\varphi$ which is admissible in the sense of Definition \ref{prevdef}, we can take $A_j:=\{\alpha_j\}$, and $\Psi_j (\varphi(\zeta) -a_j) \le \tau_j \log|\zeta-\alpha_j| + C_j$ implies that $m_{\varphi, a_j, \Psi_j} (\alpha_j) \ge \tau_j$, which by Definition \ref{multphia} means that $m_{\varphi, a_j, \Psi_j} (\alpha_j) = \tau_j$. So any such $\varphi$ is admissible in the sense of Definition \ref{defLempert}, and $$ \mathcal S (\varphi, (\alpha_j)_{1\le j \le N}) = \mathcal S (\varphi, (A_j)_{1\le j \le N}), $$ and the desired inequality follows. \end{proof} We now return to the study of the example presented in \cite{TraoTh}. Let us recall the notations. For $z \in \mathbb D^2$, $$ \Psi_0 (z) := \max(\log|z_1|,\log|z_2|), \quad \Psi_V (z) := \max(\log|z_1|,2\log|z_2|). $$ Here $V$ stands for ``vertical'', for obvious reasons: for $a \in \mathbb D^2$, \newline $\Psi_j ( \varphi (\zeta) -a) \le \tau_j \log|\zeta -\zeta_0| + C$ translates to ($\tau_0=1$, $\tau_V=2$): \begin{eqnarray*} \varphi (\zeta_0) = a, &\mbox{ when }& j =0, \\ \varphi (\zeta_0) = a, \varphi'_1(\zeta_0) =0 &\mbox{ when }& j =V . \end{eqnarray*} For $a$, $b \in \mathbb D$ and $\varepsilon \in \mathbb C$, let \begin{eqnarray*} S_\varepsilon &:=& \{ ((a,0), \Psi_0);((b,0), \Psi_0);((b,\varepsilon), \Psi_0);((a,\varepsilon), \Psi_0) \} \\ S &:=& \{ ((a,0), \Psi_V);((b,0), \Psi_V) \}. \end{eqnarray*} Those are product set situations, and the Green functions are explicitly known.
For $w \in \mathbb D$, denote by $\phi_w$ the unique involutive holomorphic automorphism of the disk which exchanges $0$ and $w$: $$ \phi_w (\zeta) := \frac{w-\zeta}{1-\zeta \bar w}. $$ Then \begin{multline*} G_S (z_1,z_2) = \max \left( \log |\phi_a(z_1) \phi_b(z_2)|, 2 \log |z_2| \right), \\ G_{S_\varepsilon} (z_1,z_2) = \max \left( \log |\phi_a(z_1) \phi_b(z_2)|, \log |z_2 \phi_\varepsilon(z_2)| \right). \end{multline*} The following is proved in \cite[p. 397]{TraoTh}. \begin{prop} \label{basicineq} If $b=-a$ and $|a|^2<|\gamma|<|a|$, then $G_S (0,\gamma)<\tilde L_S (0,\gamma)$. \end{prop} It follows from our Theorem \ref{conv} that for any $z \in \mathbb D^2$, $\lim_{\varepsilon\to 0} L_{S_\varepsilon} (z) = \mathcal L_S(z)$, and in particular, using Lemma \ref{comparison}, we find again the result laboriously obtained in \cite[Proposition 6.1]{TTppt}: $\limsup_{\varepsilon\to 0} L_{S_\varepsilon} (z) \le \tilde L_S(z)$. It is a consequence of \cite[Theorem 5.1]{TraoTh} (or equivalently \cite[Theorem 6.2]{TTppt}) that for $b=-a$ and $|a|^{3/2} < |\gamma| <|a|$, then $\mathcal L_S (0,\gamma) > G_S (0,\gamma) $; the motivation then was to obtain the counterexample $ L_{S_\varepsilon} (0,\gamma) > G_{S_\varepsilon} (0,\gamma)$ for $|\varepsilon|$ small enough. On the other hand, when $ |\gamma| < |a|^{3/2}$, the old generalized Lempert function doesn't provide the correct limit of the single pole Lempert functions. \begin{prop} \label{distinct} For $b=-a$ and $|a|^2 < |\gamma| < |a|^{3/2}$, $\mathcal L_S (0,\gamma) < \tilde L_S (0,\gamma) $. \end{prop} \begin{proof} Since Proposition \ref{basicineq} implies that $\tilde L_S (0,\gamma) > G_S (0,\gamma) = 2 \log |a|$, it will be enough to provide a mapping $\varphi$ and sets $A_1, A_2$ admissible in the sense of Definition \ref{defLempert} such that $\mathcal S (\varphi; A_1, A_2) \le 2 \log |a|$. We restrict ourselves to $a>0$. 
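As a quick numerical sanity check (ours, not part of the argument) of the properties of $\phi_w$ used below, namely that it is an involutive self-map of the disk exchanging $0$ and $w$:

```python
import numpy as np

def phi(w, z):
    """The disk automorphism phi_w(z) = (w - z) / (1 - z * conj(w))."""
    return (w - z) / (1 - z * np.conj(w))

rng = np.random.default_rng(0)
w = 0.3 + 0.4j                         # an arbitrary point of the unit disk
# random sample points in the open unit disk
zs = 0.7 * np.sqrt(rng.random(100)) * np.exp(2j * np.pi * rng.random(100))
```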
We now choose $A_1:=\{\zeta_1, \zeta_4\}$, $A_2:=\{\zeta_2\}$, with $$ \zeta_2:= \sqrt{a}, \quad \zeta_1 := \phi_{\zeta_2} \left( \sqrt{\frac{2a}{1+a^2}} \right), \quad \zeta_4 := \phi_{\zeta_2} \left( -\sqrt{\frac{2a}{1+a^2}} \right), $$ and $$ \varphi_1 (\zeta) := \phi_{-a} \left( - \phi_{\zeta_2} (\zeta)^2 \right), \varphi_2 (\zeta) := \frac{\gamma}{\zeta_1\zeta_2\zeta_4} \phi_{\zeta_1} (\zeta) \phi_{\zeta_2} (\zeta) \phi_{\zeta_4} (\zeta). $$ From those definitions it is clear that $\varphi_1 (\mathbb D) \subset \mathbb D$ and that $$ \varphi_1 (\zeta_2) = -a, \varphi_1' (\zeta_2) =0; \quad \varphi_2 (\zeta_j) = 0, \mbox{ for } j = 1, 2, 4. $$ Furthermore, using the involutivity of $\phi_{\zeta_2} $, $$ \varphi_1 (\zeta_1) = \varphi_1 (\zeta_4) = \phi_{-a} \left(- \frac{2a}{1+a^2}\right) = \phi_{-a} \left( \phi_{-a} (a) \right) = a. $$ So the map $\varphi$ hits the poles, and $$ m_{\varphi, (a,0), \Psi_V} (\zeta_1) \ge 1, \quad m_{\varphi, (a,0), \Psi_V} (\zeta_4) \ge 1, \quad m_{\varphi, (-a,0), \Psi_V} (\zeta_2) = 2. $$ To see that actually $m_{\varphi, (a,0), \Psi_V} (\zeta_j)=1$, for $j=1, 4$, notice that, since $\varphi_1$ only admits one critical point, $\zeta_2$, and since $\zeta_1\neq \zeta_2$ and $\zeta_4\neq \zeta_2$, we must have $ \varphi_1' (\zeta_j)\neq 0$, $j=1, 4$. Thus $\varphi$ is admissible in the sense of Definition \ref{defLempert}, and $$ \mathcal S (\varphi; A_1, A_2) = \log |\zeta_1| + 2 \log |\zeta_2| +\log |\zeta_4| = \log |\zeta_1 \zeta_4 \zeta_2^2|. $$ We need to compute \begin{multline*} \zeta_1 \zeta_4= \phi_{\sqrt a} \left( \sqrt{\frac{2a}{1+a^2}} \right) \cdot \phi_{\sqrt a} \left( -\sqrt{\frac{2a}{1+a^2}} \right) \\ = \frac{\sqrt a - \sqrt{\frac{2a}{1+a^2}} }{1-\sqrt a \sqrt{\frac{2a}{1+a^2}}} \cdot \frac{\sqrt a + \sqrt{\frac{2a}{1+a^2}} }{1 + \sqrt a \sqrt{\frac{2a}{1+a^2}}} =\frac{ a - \frac{2a}{1+a^2} }{1- a \frac{2a}{1+a^2} } = \phi_a \left( \phi_a (-a) \right) = -a. 
\end{multline*} From this we deduce $|\zeta_1 \zeta_2 \zeta_4| = a^{3/2} > |\gamma|$, and therefore $\varphi_2 (\mathbb D) \subset \mathbb D$; and $ |\zeta_1 \zeta_4 \zeta_2^2| = a^2$, therefore $$ \mathcal S (\varphi; A_1, A_2) \le \log |\zeta_1 \zeta_4 \zeta_2^2| = 2 \log |a|, $$ q.e.d. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Many machine learning models can be formulated as the following empirical risk minimization problem: \begin{align}\label{equation:object} \min_{{\bf w}\in {\mathbb R}^d} F({\bf w}) := \frac{1}{n}\sum_{i=1}^n f({\bf w};\xi_i), \end{align} where ${\bf w}$ denotes the model parameter, $\xi_i$ denotes the $i$th training instance, $n$ is the number of training instances, and $d$ is the size of the model. For example, let $\xi_i = ({\bf a}_i,y_i)$, where ${\bf a}_i$ denotes the feature vector of the $i$th training instance and $y_i$ denotes the label. Then in logistic regression $f({\bf w};\xi_i) = \log(1 + e^{-y_i{\bf a}_i^T{\bf w}}) + \frac{\lambda}{2}\|{\bf w}\|^2$, and in SVM $f({\bf w};\xi_i) = \max(0, 1-y_i{\bf a}_i^T{\bf w}) + \frac{\lambda}{2}\|{\bf w}\|^2$. Many deep learning models can also be formulated as~(\ref{equation:object}). An efficient way to solve (\ref{equation:object}) is stochastic gradient descent~(SGD)~\citep{Robbins&Monro:1951}. In each iteration, SGD calculates one stochastic gradient $\nabla f({\bf w};\xi_i)$ and updates ${\bf w}$ by ${\bf w} \leftarrow {\bf w} - \eta \nabla f({\bf w};\xi_i)$, or updates ${\bf w}$ with a mini-batch of stochastic gradients. Inspired by momentum and Nesterov's accelerated gradient descent, momentum SGD~(MSGD)~\citep{article,DBLP:journals/siamjo/Tseng98,DBLP:journals/mp/Lan12,DBLP:journals/corr/KingmaB14} has been proposed and is widely used in machine learning. In practice, MSGD can achieve better performance than SGD~\citep{DBLP:conf/nips/KrizhevskySH12,DBLP:conf/icml/SutskeverMDH13}. Many machine learning platforms like TensorFlow, PyTorch and MXNet adopt MSGD as one of their optimization methods. With the rapid growth of data, distributed SGD~(DSGD)~\citep{DBLP:journals/jmlr/DekelGSX12,DBLP:conf/kdd/LiZCS14} has attracted much attention since it can calculate a batch of stochastic gradients in parallel.
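As a concrete illustration of the SGD update on the logistic regression loss above, the following sketch (ours, using NumPy and synthetic data; not tied to any particular platform) computes the stochastic gradient of $f({\bf w};\xi_i)$ and performs the update ${\bf w} \leftarrow {\bf w} - \eta \nabla f({\bf w};\xi_i)$:

```python
import numpy as np

def logistic_grad(w, a_i, y_i, lam):
    """Stochastic gradient of f(w; xi_i) = log(1 + exp(-y_i a_i^T w)) + (lam/2) ||w||^2."""
    z = -y_i * a_i.dot(w)
    return -y_i * a_i / (1.0 + np.exp(-z)) + lam * w

def sgd(w, data, eta, lam, steps, rng):
    """Plain SGD: one randomly sampled training instance per step."""
    n = len(data)
    for _ in range(steps):
        a_i, y_i = data[rng.integers(n)]
        w = w - eta * logistic_grad(w, a_i, y_i, lam)
    return w

rng = np.random.default_rng(0)
# Synthetic data: the label is the sign of the first feature.
data = [(a, np.sign(a[0])) for a in rng.standard_normal((200, 5))]
w = sgd(np.zeros(5), data, eta=0.1, lam=0.01, steps=2000, rng=rng)
```

A mini-batch version simply averages `logistic_grad` over several sampled instances before applying the update.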
DSGD can be formulated as follows: \begin{align}\label{equation:dsgd} {\bf w}_{t+1} = {\bf w}_t - \frac{\eta_t}{p}\sum_{k=1}^{p} {\bf g}_{t,k}, \end{align} where $p$ is the number of workers and ${\bf g}_{t,k}$ is the stochastic gradient~(or a mini-batch of stochastic gradients) calculated by the $k$th worker. DSGD can be implemented on distributed frameworks like the parameter server and all-reduce frameworks. Each worker calculates ${\bf g}_{t,k}$ and sends it to the server or to the other workers for updating ${\bf w}$. Recently, more and more large models, such as deep learning models, are used in machine learning to improve the generalization ability. This makes ${\bf g}_{t,k}$ a high-dimensional vector. Due to the latency and limited bandwidth of the network, communication cost has become the bottleneck of traditional DSGD and distributed MSGD~(DMSGD). For example, when we implement DSGD on a parameter server, the server needs to receive $p$ high-dimensional vectors from the workers, which leads to a communication traffic jam and makes the convergence of DSGD slow. Hence, we need to compress ${\bf g}_{t,k}$ to reduce the communication cost. Recently, researchers have proposed two main categories of communication compression techniques for reducing the communication cost in DSGD and DMSGD. The first category is quantization~\citep{DBLP:conf/nips/WenXYWWCL17,DBLP:conf/nips/AlistarhG0TV17,DBLP:conf/nips/JiangA18}. In machine learning problems, $32$-bit floating point numbers are typically adopted for representation. Quantization methods quantize the value~(gradient or parameter) representation from 32 bits to some low bit-width like 8 bits or 4 bits. Since the quantized gradients in most methods are an unbiased estimation of the original ones, the convergence rate of these methods has the same order of magnitude as that of DSGD, although it is slower due to the extra quantization variance. It is easy to see that the communication cost can be reduced by 31 fold in the ideal case.
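The quantization idea can be sketched as follows (a generic unbiased stochastic rounding scheme of our own, not the exact codec of any cited method): each coordinate is mapped onto $2^b$ uniform levels and rounded up with probability equal to its fractional part, so the dequantized vector equals the original in expectation.

```python
import numpy as np

def quantize(g, bits, rng):
    """Stochastically round g onto 2**bits uniform levels; returns (codes, scale, offset)."""
    levels = 2 ** bits - 1
    gmin, gmax = g.min(), g.max()
    scale = (gmax - gmin) / levels if gmax > gmin else 1.0
    x = (g - gmin) / scale                       # in [0, levels]
    low = np.floor(x)
    q = low + (rng.random(g.shape) < (x - low))  # round up w.p. the fractional part
    return q.astype(np.uint8), scale, gmin

def dequantize(q, scale, gmin):
    return q * scale + gmin

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)                    # stand-in for a gradient vector
q, scale, gmin = quantize(g, bits=4, rng=rng)    # 4-bit codes instead of 32-bit floats
g_hat = dequantize(q, scale, gmin)
```

Each dequantized coordinate differs from the original by at most one bin width, and the rounding is unbiased coordinate-wise.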
In practice, at least 4 bits should be adopted for representation in most cases to keep the original accuracy. In these cases, the communication cost is reduced by 7 fold. The other category is based on sparsified gradients~\citep{DBLP:conf/emnlp/AjiH17,DBLP:conf/nips/AlistarhH0KKR18,DBLP:conf/nips/StichCJ18,DBLP:conf/icml/KarimireddyRSJ19,DBLP:conf/icml/TangYLZL19}, which is called \emph{sparse communication}. In sparse communication, after calculating the update vector ${\bf g}_{t,k}$ at each iteration, each worker only sends a subset of the coordinates of ${\bf g}_{t,k}$, denoted as $S({\bf g}_{t,k})$. Here, $S({\bf g}_{t,k})$ is a sparse vector, and hence it reduces the communication cost. In recent works~\citep{DBLP:conf/emnlp/AjiH17,DBLP:conf/iclr/LinHM0D18}, each worker will typically remember those values which are not sent, i.e., ${\bf g}_{t,k} - S({\bf g}_{t,k})$, and store them in \emph{memory} rather than dropping them. The difference ${\bf g}_{t,k} - S({\bf g}_{t,k})$ is called the \emph{memory gradient}, and it will be used to calculate the next update vector ${\bf g}_{t+1,k}$. This is intuitively necessary because a subset of the coordinates of one stochastic gradient cannot reflect the real descent direction and can make errors with higher probability than the original stochastic gradient. This memory gradient based sparse communication strategy has been widely adopted by recent communication compression methods and has achieved better performance than quantization methods and other sparse communication methods without memory gradient. Among these memory gradient based sparse communication methods, some are designed for vanilla SGD~(including signSGD)~\citep{DBLP:conf/emnlp/AjiH17,DBLP:conf/nips/AlistarhH0KKR18,DBLP:conf/nips/StichCJ18,DBLP:conf/icml/KarimireddyRSJ19,DBLP:conf/icml/TangYLZL19}.
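The memory gradient mechanism can be sketched as follows (top-$s$ selection; the function and variable names are ours, and `sent` plays the role of the sparse vector actually communicated):

```python
import numpy as np

def sparsify_with_memory(g, u, s):
    """Send the s largest-magnitude coordinates of g + u; keep the rest as memory.

    Returns (sent, new_memory) with sent + new_memory == g + u."""
    acc = g + u                                # add back what was not sent before
    mask = np.zeros_like(acc, dtype=bool)
    mask[np.argsort(np.abs(acc))[-s:]] = True  # top-s coordinates by magnitude
    sent = np.where(mask, acc, 0.0)            # the sparse vector actually communicated
    memory = np.where(mask, 0.0, acc)          # remembered for the next iteration
    return sent, memory

rng = np.random.default_rng(0)
g = rng.standard_normal(10)                    # stand-in for one worker's update vector
u = np.zeros(10)                               # memory starts empty
sent, u = sparsify_with_memory(g, u, s=2)
```

No information is ever dropped: what is not sent now is accumulated and becomes eligible for sending later.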
The convergence rate of vanilla SGD with sparse communication has been proved in~\citep{DBLP:conf/nips/AlistarhH0KKR18,DBLP:conf/nips/StichCJ18,DBLP:conf/icml/KarimireddyRSJ19,DBLP:conf/icml/TangYLZL19}. Very recently, there has appeared one sparse communication method for distributed MSGD~(DMSGD), called deep gradient compression~(DGC)~\citep{DBLP:conf/iclr/LinHM0D18}, which has achieved better performance than vanilla DSGD with sparse communication in practice. However, the convergence theory of DGC is still lacking. Furthermore, although DGC uses momentum SGD, the momentum in DGC is calculated by each worker, and hence it is a local momentum without global information. In this paper, we propose a novel method, called \emph{\underline{g}}lobal \emph{\underline{m}}omentum \emph{\underline{c}}ompression~(GMC), for sparse communication in DMSGD, which includes DSGD as a special case. The main contributions of this paper are summarized as follows: \begin{itemize} \item GMC combines the memory gradient and momentum SGD to achieve sparse communication for DMSGD~(DSGD). But different from DGC, which adopts local momentum, GMC adopts global momentum. \item We theoretically prove the convergence rate of GMC for both convex and non-convex problems. To the best of our knowledge, this is the first work that proves the convergence of DMSGD with sparse communication and memory gradient. \item Empirical results show that, compared with the DMSGD counterpart without sparse communication, GMC can reduce the communication cost by approximately 100 fold without loss of generalization accuracy. \item GMC can also achieve comparable~(sometimes better) performance compared with DGC, with an extra theoretical guarantee.
\end{itemize} \section{Preliminary} In this paper, we use $\|\cdot\|$ to denote the $L_2$ norm, use ${\bf w}^*$ to denote the optimal solution of (\ref{equation:object}), use $\nabla f({\bf w}; {\mathcal I})$ to denote one stochastic gradient with respect to a mini-batch of samples ${\mathcal I}$ such that $\nabla f({\bf w};\mathcal{I}) = \frac{1}{|{\mathcal I}|}\sum_{\xi_i \in {\mathcal I}} \nabla f({\bf w};\xi_i)$ and ${\mathbb E}_{\mathcal{I}}[\nabla f({\bf w};\mathcal{I})|{\bf w}] = \nabla F({\bf w})$, use $\odot$ to denote the element-wise product, use ${\bf 1}$ to denote the vector $(1,1,\ldots,1)^T\in {\mathbb R}^d$, and use ${\bf I}$ to denote an identity matrix. For a vector ${\bf a}$, we use $a^{(j)}$ to denote its $j$th coordinate value, and $\|{\bf a}\|_0$ denotes the number of non-zero values in ${\bf a}$. \begin{definition}\label{def:smooth loss function} (smooth function)~Function $h(\cdot)$ is $L$-smooth~($L>0$) if $$|h({\bf w}) - h({\bf w}') - \nabla h({\bf w}')^T({\bf w} - {\bf w}')| \leq \frac{L}{2}\|{\bf w} - {\bf w}'\|^2, \forall {\bf w},{\bf w}'.$$ \end{definition} \begin{definition}\label{def:strong convex object} (strongly convex function) Function $h(\cdot)$ is $\mu$-strongly convex~($\mu>0$) if $$h({\bf w}) \geq h({\bf w}')+\nabla h({\bf w}')^T({\bf w} - {\bf w}') + \frac{\mu}{2}\|{\bf w} - {\bf w}'\|^2, \forall {\bf w},{\bf w}'.$$ If $\mu = 0$, we call $h(\cdot)$ a convex function. \end{definition} \subsection{Distributed Momentum SGD} The widely used momentum SGD~(MSGD)~\citep{article} for solving~(\ref{equation:object}) can be written as \begin{align*} {\bf g}_t = &\beta{\bf g}_{t-1} + \eta_t\nabla f({\bf w}_t;{\mathcal I}_t), \\ {\bf w}_{t+1} = &{\bf w}_t - {\bf g}_t. \end{align*} Here, ${\bf g}_t$ is the Polyak momentum and $\nabla f({\bf w}_t;{\mathcal I}_t)$ is an unbiased stochastic gradient of $F({\bf w}_t)$.
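As a quick numerical sanity check (ours, with synthetic gradient vectors and a constant step size), this momentum-buffer form produces exactly the same iterates as the equivalent update ${\bf w}_{t+1} = {\bf w}_t - \eta\nabla f({\bf w}_t;{\mathcal I}_t) + \beta({\bf w}_t - {\bf w}_{t-1})$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, beta, eta = 30, 4, 0.9, 0.05
grads = rng.standard_normal((T, d))   # stand-ins for the stochastic gradients

# Form 1: explicit momentum buffer g_t = beta * g_{t-1} + eta * grad_t.
w1, g = np.zeros(d), np.zeros(d)
for t in range(T):
    g = beta * g + eta * grads[t]
    w1 = w1 - g

# Form 2: the same iterates written via the parameter difference w_t - w_{t-1}.
w2, w2_prev = np.zeros(d), np.zeros(d)
for t in range(T):
    w2, w2_prev = w2 - eta * grads[t] + beta * (w2 - w2_prev), w2
```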
Since ${\bf g}_t=({\bf w}_t - {\bf w}_{t+1})$, it can also be written as \begin{align}\label{eq:momentum SGD} {\bf w}_{t+1} = &{\bf w}_t - \eta_t(\nabla f({\bf w}_t;{\mathcal I}_t) + \frac{\beta}{\eta_{t}}({\bf w}_{t-1} - {\bf w}_{t})). \end{align} Please note that if $\beta = 0$, MSGD degenerates to SGD. One simple way to implement distributed MSGD~(DMSGD) is to let each worker calculate some stochastic gradients in parallel and then aggregate the stochastic gradients of all workers to get $\nabla f({\bf w}_t;{\mathcal I}_t) + \beta({\bf w}_{t-1} - {\bf w}_{t})/\eta_{t}$. The update process of ${\bf w}$ in this way is totally equivalent to serial MSGD. We call $({\bf w}_{t-1} - {\bf w}_{t})/\eta_t$ the \emph{global momentum}, because it captures the global information from all workers. Another way to implement DMSGD is to use \emph{local momentum}: \begin{align*} {\bf g}_{t,k} = &\beta{\bf g}_{t-1,k} + \frac{\eta_t}{p}{\bf v}_{t,k}, k = 1,2,\ldots,p,\\ {\bf w}_{t+1} = &{\bf w}_t - \sum_{k=1}^{p}{\bf g}_{t,k}, \end{align*} where ${\bf v}_{t,k}$ is the stochastic gradient calculated by the $k$th worker and $\sum_{k=1}^{p}{\bf v}_{t,k}/p = \nabla f({\bf w}_t;{\mathcal I}_t)$. ${\bf g}_{t-1,k}/\eta_t$ is the \emph{local momentum}. We will find that DGC~\citep{DBLP:conf/iclr/LinHM0D18} degenerates to this DMSGD with local momentum when it does not adopt sparse communication. Since $\sum_{k=1}^{p}{\bf g}_{t,k} = ({\bf w}_{t} - {\bf w}_{t+1})$, this DMSGD with local momentum can also be written in the formulation of~(\ref{eq:momentum SGD}). Hence, the global momentum contains all the information of the local momentum. Please note that if sparse communication is adopted, the update rule of DGC cannot capture all the information in the global momentum. In a later section, we will see that the global momentum is better than the local momentum when using the memory gradient for sparse communication. Recently, there has appeared another distributed SGD method using local momentum~\citep{hrs}.
In~\citep{hrs}, the local momentums on the workers also need to be unified into the global momentum after several iterations, which means that the local momentum cannot be applied independently for too many iterations. \section{Method: Global Momentum Compression} Assume we have $p$ workers. $D_i$ denotes the data stored on the $i$th worker and $\bigcup_{i=1}^p D_i \ = \{\xi_1,\xi_2,\ldots,\xi_n\}$. Our method Global Momentum Compression~(GMC) mainly performs the following operations: \begin{itemize} \item Each worker calculates ${\bf g}_{t,k} = \frac{1}{pb}\sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i) - \frac{\beta}{p\eta_t}({\bf w}_t - {\bf w}_{t-1})$, where $|{\mathcal I}_{t,k}| = b$; \item Each worker generates a sparse vector ${\bf m}_{t,k}$ and sends ${\bf m}_{t,k}\odot({\bf g}_{t,k}+{\bf u}_{t,k})$; \item Each worker updates ${\bf u}_{t+1,k} = ({\bf 1} - {\bf m}_{t,k})\odot({\bf g}_{t,k}+{\bf u}_{t,k})$; \item Update the parameter ${\bf w}_{t+1} = {\bf w}_t - \eta_t\sum_{k=1}^p{\bf m}_{t,k} \odot ({\bf g}_{t,k}+{\bf u}_{t,k})$. \end{itemize} Below, we introduce the framework and two key elements of GMC: the memory gradient and the global momentum. \subsection{Framework of GMC} GMC can be easily implemented on the all-reduce distributed framework, in which each worker sends the sparse vector ${\bf m}_{t,k}\odot({\bf g}_{t,k}+{\bf u}_{t,k})$ to all the other workers, and then each worker updates ${\bf w}_{t+1}$ after receiving the sparse vectors from the other workers. Recently, the parameter server~\citep{DBLP:conf/osdi/LiAPSAJLSS14} has become one of the most popular distributed frameworks in machine learning due to its scalability. GMC can also be implemented on a parameter server. The details are shown in Algorithm \ref{alg:gmc}. The difference between GMC and traditional DSGD on a parameter server is that in GMC, after updating ${\bf w}_{t+1}$, the server sends ${\bf w}_{t+1} - {\bf w}_t$ to the workers instead of ${\bf w}_{t+1}$.
Since ${\bf m}_{t,k}$ is sparse, ${\bf w}_{t+1} - {\bf w}_t$ is sparse as well. Then sending ${\bf w}_{t+1} - {\bf w}_t$ can reduce the communication cost. In our experiments, we find that GMC can make $\|{\bf w}_{t+1} - {\bf w}_t\|_0\leq 0.01d$ without loss of accuracy when training large scale models. Workers can get ${\bf w}_{t+1}$ by ${\bf w}_{t+1} = {\bf w}_t + ({\bf w}_{t+1} - {\bf w}_t)$. \begin{remark} We can also use the memory gradient technique on server to make $({\bf w}_{t+1} -{\bf w}_t)$ as sparse as ${\bf m}_{t,k}$, which is similar to that in~\citep{DBLP:conf/icml/TangYLZL19}. The convergence analysis for GMC is also suitable for it. It is out of the scope of this paper. Hence, we discuss it in Appendix~\ref{appendix:memory on server}. \end{remark} \begin{algorithm}[t] \caption{Global Momentum Compression~(GMC) on Parameter Server} \label{alg:gmc} \begin{algorithmic}[1] \STATE Initialization: $p$ workers, ${\bf w}_{-1} = {\bf w}_0$, $\beta \in [0,1)$, batch size $b$; \STATE Set ${\bf g}_{0,k} = {\bf u}_{0,k} = 0,k=1,\ldots,p,$ \FOR {$t=0,1,2,...T-1$} \STATE \underline{Workers:} \FOR {$k=1,2\ldots,p$, \textbf{each worker parallelly}} \IF {$t>0$} \STATE Receive ${\bf w}_t - {\bf w}_{t-1}$ from server; \STATE Get ${\bf w}_t$ by ${\bf w}_t = {\bf w}_{t-1} + ({\bf w}_t - {\bf w}_{t-1})$; \ENDIF \STATE Randomly pick a mini-batch of training data ${\mathcal I}_{t,k}\subseteq D_k$ with $|{\mathcal I}_{t,k}| = b$; \STATE ${\bf g}_{t,k} = \frac{1}{pb}\sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i) - \frac{\beta}{p\eta_t}({\bf w}_t - {\bf w}_{t-1})$; \STATE Generate a sparse vector ${\bf m}_{t,k}\in \{0,1\}^d$; \STATE Send ${\bf m}_{t,k}\odot({\bf g}_{t,k}+{\bf u}_{t,k})$ to server; \STATE ${\bf u}_{t+1,k} = ({\bf 1} - {\bf m}_{t,k})\odot({\bf g}_{t,k}+{\bf u}_{t,k})$, $k=1,2,\ldots,p$; \ENDFOR \STATE \underline{Server:} \STATE ${\bf w}_{t+1} = {\bf w}_t - \eta_t\sum_{k=1}^p{\bf m}_{t,k} \odot ({\bf g}_{t,k}+{\bf u}_{t,k})$; \STATE Send ${\bf 
w}_{t+1} - {\bf w}_t$ to the workers; \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Necessity of Memory Gradient} In GMC, after sending a sparse vector ${\bf m}_{t,k}\odot({\bf g}_{t,k}+{\bf u}_{t,k})$, each worker remembers the coordinates which are not sent and stores them in ${\bf u}_{t+1,k}$: \begin{align}\label{eq:memory gradient} {\bf u}_{t+1,k} = ({\bf 1} - {\bf m}_{t,k})\odot({\bf g}_{t,k}+{\bf u}_{t,k}). \end{align} So we call ${\bf u}_{t,k}$ the \emph{memory gradient}, which is important for the convergence guarantee of GMC. Here, we give an intuitive explanation of why GMC needs to remember the coordinate values which are not sent. We consider the simple case where $\beta = 0$, which means ${\bf g}_{t,k}$ is a stochastic gradient of $F({\bf w})$ and GMC degenerates to~\citep{DBLP:conf/emnlp/AjiH17}. Since ${\bf m}_{t,k}$ is a sparse vector, GMC can be seen as a method that achieves sparse communication by combining stochastic coordinate descent~(SCD)~\citep{DBLP:journals/siamjo/Nesterov12} and DSGD. In SCD, each $-\nabla F({\bf w})^{(j)}{\bf e}_j$ is a true descent direction. When we use a stochastic gradient $\nabla f({\bf w};{\mathcal I})$ to replace $\nabla F({\bf w})$, $-\nabla f({\bf w};{\mathcal I})^{(j)}{\bf e}_j$ will introduce error with high probability, especially when ${\bf m}_{t,k}$ adopts the top-$s$ strategy (choosing the $s$ coordinates with the largest absolute values~\citep{DBLP:conf/nips/AlistarhH0KKR18}). To further explain the importance of the memory gradient, we consider the following simple example: let $n = p = 2$, and define $f({\bf w};\xi_1) = (-\alpha,\epsilon){\bf w}, f({\bf w};\xi_2) = (\alpha+\epsilon,\gamma){\bf w}$, where ${\bf w} \in [-1,0]\times[-1,0], 0<\epsilon<\alpha<\gamma<\alpha+\epsilon$, $f({\bf w};\xi_1)$ is on the first worker, and $f({\bf w};\xi_2)$ is on the second worker. Then we run GMC to solve $\min F({\bf w}) = \frac{1}{2}(f({\bf w};\xi_1) + f({\bf w};\xi_2))$. The ${\bf m}_{t,k}$ adopts the top-$1$ strategy.
If we do not use the memory gradient, which means each worker directly sends ${\bf m}_{t,k}\odot{\bf g}_{t,k}$, then the first worker will send $(-\alpha/2,0)^T$, the second worker will send $((\alpha+\epsilon)/2,0)^T$, and ${\bf w} \leftarrow {\bf w} - \eta_t(\epsilon/2,0)^T$. We observe that $w^{(2)}$ will never be updated. This is due to the spuriously large gradient values which mislead ${\bf m}_{t,k}$. Since $\nabla F({\bf w}) = (\epsilon/2, (\gamma+\epsilon)/2)^T$, the second coordinate has the truly larger gradient, so we should mainly have updated $w^{(2)}$. However, in the two stochastic functions $f({\bf w};\xi_1), f({\bf w};\xi_2)$, the first coordinate has the larger absolute value, so they mislead ${\bf m}_{t,k}$ and cause the error. If we use the memory gradient, at the beginning ${\bf m}_{t,1} = {\bf m}_{t,2} = (1,0)^T$. After some iterations, they will become ${\bf m}_{t,1} = (0,1)^T$ and ${\bf m}_{t,2} = (0,1)^T$ due to the memory gradient. Specifically, let $t_1, t_2$ be two integers satisfying $\alpha/\epsilon\leq t_1 < \alpha/\epsilon + 1, (\alpha+\epsilon)/\gamma\leq t_2 < (\alpha+\epsilon)/\gamma + 1$; then it is easy to verify that ${\bf m}_{st_1,1} = (0,1)^T, {\bf m}_{st_2,2} = (0,1)^T, \forall s\geq 1$. This implies that if we use the memory gradient, both $w^{(1)}$ and $w^{(2)}$ will be updated, so GMC can make ${\bf w}$ converge to the optimum $(-1,-1)^T$. Hence, the memory gradient is necessary for sparse communication: it overcomes the disadvantage of naively combining DSGD and SCD. \subsection{Benefit of Global Momentum} In GMC, each worker calculates ${\bf g}_{t,k}$ as \begin{align}\label{eq:g_gmc} {\bf g}_{t,k} = \frac{1}{pb}\sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i) - \frac{\beta}{p\eta_t}({\bf w}_t - {\bf w}_{t-1}). \end{align} When $\beta = 0$, it degenerates to that of gradient dropping~\citep{DBLP:conf/emnlp/AjiH17}, denoted as ${\bf g}_{t,k}^{(\mbox{\tiny{GD}})}$.
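The two-coordinate example from the previous subsection, which uses exactly this $\beta=0$ setting, can be checked numerically. A minimal Python sketch, where the concrete values of $\alpha$, $\epsilon$, $\gamma$ are an arbitrary choice satisfying $0<\epsilon<\alpha<\gamma<\alpha+\epsilon$ and only the set of ever-selected coordinates is tracked:

```python
import numpy as np

alpha, eps, gamma = 0.5, 0.1, 0.55        # any values with 0 < eps < alpha < gamma < alpha + eps
grads = [np.array([-alpha, eps]),         # gradient of f(w; xi_1) on worker 1
         np.array([alpha + eps, gamma])]  # gradient of f(w; xi_2) on worker 2

def run(use_memory, steps=50):
    u = [np.zeros(2), np.zeros(2)]        # memory gradients u_{t,k}
    updated = np.zeros(2, dtype=bool)     # which coordinates were ever sent
    for _ in range(steps):
        for k in range(2):
            v = grads[k] / 2 + (u[k] if use_memory else 0.0)  # g_{t,k} + u_{t,k}
            j = int(np.argmax(np.abs(v)))                     # top-1 mask m_{t,k}
            updated[j] = True
            if use_memory:                # remember the unsent coordinate
                u[k] = v.copy()
                u[k][j] = 0.0
    return updated.tolist()

print(run(use_memory=False))  # [True, False]: w^(2) is never updated
print(run(use_memory=True))   # [True, True]:  both coordinates get selected
```

Without the memory gradient the first coordinate wins the top-1 selection on both workers forever, while with memory the unsent $\epsilon$- and $\gamma$-entries accumulate until they dominate, matching the $t_1,t_2$ analysis above.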
We can see that GMC uses the global momentum $({\bf w}_{t-1} - {\bf w}_{t})/\eta_t$. In DGC~\citep{DBLP:conf/iclr/LinHM0D18}, by contrast, ${\bf g}_{t,k}^{(\mbox{\tiny{DGC}})}$ is calculated by \begin{align}\label{eq:g_dgc} {\bf g}_{t,k}^{(\mbox{\tiny{DGC}})} = \frac{1}{pb} \sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i) + \beta{\bf g}_{t-1,k}^{(\mbox{\tiny{DGC}})}, \end{align} which uses the local momentum ${\bf g}_{t-1,k}^{(\mbox{\tiny{DGC}})}$. If we set ${\bf m}_{t,k} = {\bf 1}$, then DGC is equivalent to GMC and $\sum_{k=1}^{p}{\bf g}_{t,k}^{(\mbox{\tiny{DGC}})} = ({\bf w}_{t} - {\bf w}_{t+1})/\eta_t$. If ${\bf m}_{t,k}$ is sparse, according to the update rule for the memory gradient ${\bf u}_{t,k}$ in~(\ref{eq:memory gradient}), in GMC each ${\bf u}_{t,k}$ will contain partial information of the global momentum, while in DGC each ${\bf u}_{t,k}^{(\mbox{\tiny{DGC}})}$ only contains partial information of the local momentum. Assume that ${\mathbb E}[F({\bf w}_t)]$ converges to $F({\bf w}^*)$; then ${\bf w}_t - {\bf w}_{t-1}$ is a descent direction with high probability. Since ${\mathcal I}_{t,k} \subseteq D_k$, which only contains partial information of the whole training data, the global momentum $({\bf w}_{t-1} - {\bf w}_{t})/\eta_t$ can compensate for the error between the stochastic gradient and the full gradient. Specifically, if $({\bf w}_t - {\bf w}_{t-1})^T(-\nabla F({\bf w}_t)) \geq 0$, then we get that \begin{align*} {\bf g}_{t,k}^T\nabla F({\bf w}_t)= & (\frac{1}{pb}\sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i) - \frac{\beta}{p\eta_t}({\bf w}_t - {\bf w}_{t-1}))^T\nabla F({\bf w}_t) \\ \geq & \frac{1}{pb}\sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w}_t;\xi_i)^T\nabla F({\bf w}_t) \\ = & ({\bf g}_{t,k}^{(\mbox{\tiny{GD}})})^T\nabla F({\bf w}_t). \end{align*} This implies that ${\bf g}_{t,k}$ is a better estimate of $\nabla F({\bf w}_t)$ than ${\bf g}_{t,k}^{(\mbox{\tiny{GD}})}$.
Hence, the ${\bf m}_{t,k}$ in GMC can be sparser than that in~\citep{DBLP:conf/emnlp/AjiH17}. \begin{remark} There are other ways to combine the global momentum and the memory gradient. For example, we can maintain the global momentum on the server. However, they suffer from a loss of accuracy. More discussion is given in Appendix~\ref{appendix:global momentum}. \end{remark} \section{Convergence of GMC} In this section, we prove the convergence rate of GMC for both convex and non-convex problems. The proofs are provided in the Appendix. We define a diagonal matrix ${\bf M}_{t,k}\in {\mathbb R}^{d\times d}$ such that $\mbox{diag}({\bf M}_{t,k}) = {\bf m}_{t,k}$ to replace the symbol $\odot$. Then we get that: \begin{align} {\bf w}_{t+1} = & {\bf w}_t - \eta_t\sum_{k=1}^p{\bf M}_{t,k}({\bf g}_{t,k}+{\bf u}_{t,k}), \label{update:w} \\ {\bf u}_{t+1,k} = & ({\bf I} - {\bf M}_{t,k})({\bf g}_{t,k}+{\bf u}_{t,k}), k=1,2,\ldots, p. \end{align} For convenience, we denote $\nabla f({\bf w};{\mathcal I}_t) = \frac{1}{pb}\sum_{k=1}^{p} \sum_{\xi_i \in {\mathcal I}_{t,k}} \nabla f({\bf w};\xi_i)$, $\tilde{{\bf u}}_t = \sum_{k=1}^{p}{\bf u}_{t,k}$, $\tilde{{\bf g}}_t = \sum_{k=1}^{p}{\bf g}_{t,k}$. By eliminating ${\bf M}_{t,k}$ mathematically, we obtain \begin{align}\label{eq:update rule} {\bf w}_{t+1} = & {\bf w}_t - \eta_t\nabla f({\bf w}_t;{\mathcal I}_t) + \beta({\bf w}_t - {\bf w}_{t-1}) - \eta_t\tilde{{\bf u}}_{t} + \eta_t\tilde{{\bf u}}_{t+1}. \end{align} We can see that if we do not use compression for communication, then $\tilde{{\bf u}}_t = 0$ and (\ref{eq:update rule}) is the same as standard momentum SGD in (\ref{eq:momentum SGD}). First, we present the following lemma: \begin{lemma}\label{lemma:w to x} Let ${\bf x}_t = {\bf w}_t + \frac{\beta}{1-\beta}({\bf w}_t - {\bf w}_{t-1}) - \frac{\eta_t}{1-\beta}\tilde{{\bf u}}_t$.
Then we have: \begin{align}\label{eq:update of x} {\bf x}_{t+1} = {\bf x}_t - \frac{\eta_t}{1-\beta}\nabla f({\bf w}_t;{\mathcal I}_t) + \frac{\eta_t - \eta_{t+1}}{1-\beta}\tilde{{\bf u}}_{t+1} \end{align} \end{lemma} Equation (\ref{eq:update of x}) is similar to the update equation of SGD except for the additional term $\mathcal{O}(\tilde{{\bf u}}_{t+1})$. Based on this observation, we only need to prove the convergence of ${\bf x}_t$ and bound $\|{\bf x}_t - {\bf w}_t\|$. To get the convergence results, we further make the following three assumptions: \begin{assumption}\label{assume:bounded gradient} (bounded gradient)~${\mathbb E}\|\nabla f({\bf w}_t;\xi_i) \|^2 \leq G^2, \forall t$. \end{assumption} \begin{assumption}\label{assume:bounded u} (bounded memory)~${\mathbb E}\|\tilde{{\bf u}}_t \|^2 \leq U^2, \forall t$. \end{assumption} \begin{assumption}\label{assume:bounded parameter} (bounded parameter)~${\mathbb E}\|{\bf x}_t-{\bf w}^*\|^2 \leq D^2, \forall t$. \end{assumption} \begin{remark} Assumption \ref{assume:bounded gradient} is common in stochastic optimization. Assumption \ref{assume:bounded u} is easy to guarantee; we discuss it at the end of this section. Assumption \ref{assume:bounded parameter} is only for convenience and is used in the strongly convex and general convex cases. We can add a projection operation to equation (\ref{update:w}) to guarantee Assumption~\ref{assume:bounded parameter}~(if both ${\bf w}_t$ and $\tilde{{\bf u}}_t$ are bounded, ${\bf x}_t$ is bounded as well), which can be written as ${\bf w}_{t+1} = \Pi_\Omega({\bf w}_t - \eta_t\sum_{k=1}^p{\bf M}_{t,k}({\bf g}_{t,k}+{\bf u}_{t,k}))$. It also guarantees Assumption \ref{assume:bounded gradient}. Although in this case equations (\ref{eq:update rule}) and (\ref{eq:update of x}) may fail, this does not affect the inequalities in the convergence analysis for strongly convex and general convex functions.
Specifically, the following inequalities are still true: $ {\mathbb E}\|{\bf x}_t - {\bf w}_t\|^2 \leq \mathcal{O}(\eta_t^2)$ and $\|{\bf x}_{t+1} - {\bf w}^*\|^2 \leq \|{\bf x}_t - {\bf w}^* - \frac{\eta_t}{1-\beta} \nabla f({\bf w}_t;\mathcal{I}_t) - \frac{\eta_t - \eta_{t+1}}{1-\beta}\tilde{{\bf u}}_{t+1}\|^2$. More details are given in Appendix~\ref{appendix:bounded parameter}. \end{remark} For the new variable ${\bf x}_t$ in Lemma \ref{lemma:w to x}, the gap between ${\bf x}_t$ and ${\bf w}_t$ has the following property: \begin{lemma}\label{lemma:w-x} With Assumptions~\ref{assume:bounded gradient} and \ref{assume:bounded u}, we get that \begin{itemize} \item If $\eta_t = \eta$, then \begin{align} {\mathbb E}\|{\bf x}_{t} - {\bf w}_{t}\|^2 \leq [\frac{2\beta^2(2G^2 + 2U^2)}{(1-\beta)^4} + \frac{2U^2}{(1-\beta)^2}]\eta^2. \end{align} \item If $\eta_t = \frac{r}{(t+q)^\rho}, \rho \in [0.5,1], r>0, q\geq 0$, then \begin{align} {\mathbb E}\|{\bf x}_t - {\bf w}_t\|^2 \leq [\frac{2\beta^2(2G^2 + 2U^2)\max\{(1-\beta)A,2\}}{(1-\beta)^4} + \frac{2U^2}{(1-\beta)^2}]\eta_t^2, \nonumber \end{align} where $A = \max_{t=0}^{t_0}\{\frac{\eta_t^2 + \beta\eta_{t-1}^2 + \ldots + \beta^t\eta_0^2}{\eta_{t+1}^2}\}$, and $t_0$ satisfies $(\frac{t_0+1+q}{t_0+2+q})^{2\rho} \geq \frac{1+\beta}{2}$. \end{itemize} \end{lemma} The learning rates $\eta_t$ provided in Lemma \ref{lemma:w-x} are common in the convergence analysis of DSGD, and the lemma tells us that ${\mathbb E}\|{\bf x}_t - {\bf w}_t\|^2 \leq \mathcal{O}(\eta_t^2)$. For convenience, below we use the constant $C_{r,q,\rho,\beta,\eta}$ such that ${\mathbb E}\|{\bf x}_t - {\bf w}_t\|^2 \leq C_{r,q,\rho,\beta,\eta}\eta_t^2$. Then we have the following convergence results: \begin{theorem}\label{theorem:gmc strong convex} (strongly convex case)~Let $F(\cdot)$ be $L$-smooth and $\mu$-strongly convex.
With Assumptions~\ref{assume:bounded gradient}, \ref{assume:bounded u}, and \ref{assume:bounded parameter}, $\eta_t = \frac{1-\beta}{\mu (t+1)}$, and $m=\lceil T/2 \rceil$, we obtain \begin{align*} \frac{1}{m}\sum_{t=T-m}^{T-1}{\mathbb E}(F({\bf w}_t) - F({\bf w}^*)) \leq \frac{3A + 2G \sqrt{C_{(1-\beta)/\mu,1,1,\beta,0}}(1-\beta)}{\mu T}, \end{align*} where $A = \max\{\mu^2D^2,2(1-\beta)\sqrt{C_{(1-\beta)/\mu,1,1,\beta,0}}LD + \mu UD + 2G^2 + 2U^2\}$. \end{theorem} \begin{theorem}\label{theorem:gmc general convex} (convex case)~Let $F(\cdot)$ be a convex function. With Assumptions~\ref{assume:bounded gradient}, \ref{assume:bounded u}, and \ref{assume:bounded parameter}, and $\eta_t = \frac{1-\beta}{\sqrt{t+1}}$, we obtain \begin{align*} \sum_{t=0}^{T-1} \frac{2}{\sqrt{t+1}}{\mathbb E}(F({\bf w}_t) - F({\bf w}^*)) \leq \|{\bf w}_0 - {\bf w}^*\|^2 + A\log(T), \end{align*} where $A = 2(1-\beta)\sqrt{C_{1-\beta,1,0.5,\beta,0}}G + 2UD + 2G^2 + 2U^2$. \end{theorem} Theorem \ref{theorem:gmc general convex} implies that if the objective function is convex, GMC has a convergence rate of $\mathcal{O}(\log(T)/\sqrt{T})$. For non-convex functions, we have the following result: \begin{theorem}\label{theorem:gmc nonconvex} (non-convex case)~Let $F(\cdot)$ be $L$-smooth. With Assumptions~\ref{assume:bounded gradient} and \ref{assume:bounded u}, and a constant learning rate $\eta_t = \eta$, we obtain \begin{align*} \frac{1}{(1-\beta)T}\sum_{t=0}^{T-1}{\mathbb E}\|\nabla F({\bf w}_t)\|^2 \leq \frac{F({\bf w}_0) - F({\bf w}^*)}{T\eta} + A\eta, \end{align*} where $A = \frac{LG\sqrt{C_{0,0,1,\beta,\eta}}}{1-\beta} + \frac{LG^2}{2(1-\beta)^2}$. By taking $\eta = \mathcal{O}(1/\sqrt{T})$, it is easy to see that GMC has a convergence rate of $\mathcal{O}(1/\sqrt{T})$ when the objective function is non-convex. \end{theorem} In the previous theorems, we need ${\mathbb E}\|\tilde{{\bf u}}_t\|^2 \leq U^2$~(Assumption \ref{assume:bounded u}).
According to its definition, we only need ${\mathbb E}\|{\bf u}_{t,k}\|^2$ to be bounded, which is mainly related to the choice of ${\bf m}_{t,k}$. According to the update rule of ${\bf u}_{t,k}$, we can intuitively set ${\bf m}_{t,k}$ by choosing $s$ values randomly or the top-$s$ values from $\{|g_{t,k}^{(j)}+u_{t,k}^{(j)}||j=1,\ldots,d\}$. It is easy to get the following theorem: \begin{theorem}\label{theorem:bound_u} Let $\|{\bf m}_{t,k}\|_0 = s$ and ${\bf m}_{t,k}$ adopt the random or top-$s$ strategy. If ${\mathbb E}\|{\bf g}_{t,k}\|^2 \leq g^2, \forall t,k$, then we have ${\mathbb E}\|{\bf u}_{t,k}\|^2 \leq 2(d-s)(2d+s)g^2/s^2$. \end{theorem} Since ${\bf g}_{t,k}^{(\mbox{\tiny{DGC}})}$ in (\ref{eq:g_dgc}) is bounded, ${\mathbb E}\|{\bf u}_{t,k}\|^2$ is bounded in DGC under both strategies. However, it is not easy to prove this in GMC when using the two strategies for ${\bf m}_{t,k}$, due to the global momentum and the non-increasing step size. Theoretically, we set a large threshold $\tilde{\theta}>0$ in advance and denote $\theta_{t,k}^s$ as the $s$-th largest value of $\{|g_{t,k}^{(j)}+u_{t,k}^{(j)}||j=1,\ldots,d\}$. Then in each iteration, we can choose the values which are larger than $\min\{\tilde{\theta}, \theta_{t,k}^s\}$. Thus we get that ${\mathbb E}\|{\bf u}_{t,k}\|^2 \leq d\tilde{\theta}^2$. In practice, the large threshold $\tilde{\theta}$ is never activated in the choice of ${\bf m}_{t,k}$. \section{Experiments} We conduct experiments on a PyTorch-based parameter server with one server and eight workers. Each worker has access to one K40 GPU. We compare with distributed momentum SGD~(DMSGD) and DGC~\citep{DBLP:conf/iclr/LinHM0D18}. In DGC, the momentum factor masking is used~\citep{DBLP:conf/iclr/LinHM0D18}. We set $\beta = 0.9$ for GMC and DGC. In our experiments, we consider the communication cost on the server, which is the busiest node. It includes receiving vectors from the $p$ workers and sending one vector to the $p$ workers.
So the cost of DMSGD is $2pd$. In GMC and DGC, since ${\bf m}_{t,k}$ is sparse, workers send the vectors using the $(key, value)$ structure. The cost of each $(key, value)$ pair is $2$. The server sends ${\bf w}_{t+1} - {\bf w}_t$ using this structure as well. Hence, the cost of GMC and DGC is $2(\sum_{k=1}^{p}\|{\bf m}_{t,k}\|_0 + p\|{\bf w}_{t+1} - {\bf w}_t\|_0)$, and the communication compression ratio~(CR) is: $CR=\frac{1}{T}\sum_{t=0}^{T-1}\frac{1}{pd}(\sum_{k=1}^{p}\|{\bf m}_{t,k}\|_0 + p\|{\bf w}_{t+1} - {\bf w}_t\|_0)$. Here, all numbers have the same unit~(one float value). \textbf{Convex model.} We use the MNIST dataset and the logistic regression~(LR) model to evaluate GMC on a convex problem. Since the dataset and model are small, ${\bf m}_{t,k}$ directly adopts the top-$s$ strategy with $\|{\bf m}_{t,k}\|_0 = s$, where $s = 0.01d$ or $0.001d$. We use 4 workers for this experiment. We train LR~(weight decay: 0.0001, batch size: 128) for 30 epochs. The results are in Table~\ref{tab:convex}. We can see that GMC gets the same training loss and test accuracy as DMSGD under different sparsity levels. According to the definition of CR, the communication compression ratio is smaller than $(1+p)s/d = 5s/d$. The compression ratios in Table~\ref{tab:convex} are consistent with this bound and proportional to $s/d$. We can find that, compared with DMSGD, GMC can reduce the communication cost by more than 200-fold without loss of accuracy. Furthermore, GMC achieves comparable performance to DGC. \textbf{Non-convex model.} We use the CIFAR-10 dataset and two popular deep models~(AlexNet, ResNet20) to evaluate GMC on non-convex problems. Since the model size is large, in GMC and DGC we use the approximate top-$K$ strategy for ${\bf m}_{t,k}$: given a vector ${\bf a} = (a^{(1)}, a^{(2)}, \ldots, a^{(d)})$, we first randomly choose a set of indexes $S$ with $|S| = 0.01d$. We get the threshold $\theta$ such that $|\{j||a^{(j)}|\geq \theta, j\in S\}| = 0.001|S|$.
Then we choose the indexes $\{j||a^{(j)}|\geq \theta, j=1,2,\ldots,d\}$, which implies that $\|{\bf m}_{t,k}\|_0$ is approximately $0.001d$. We use both $4$ and $8$ workers with a total batch size of 128. The results are shown in Figure~\ref{fig:resnet cifar10} and Table~\ref{tab:non-convex}. First, according to Figure~\ref{fig:resnet cifar10}~(a), GMC and DGC have the same training loss and test accuracy as DMSGD on ResNet20. Compared to ResNet20, AlexNet has more parameters. In Figure~\ref{fig:resnet cifar10}~(b), we can see that GMC also gets the same loss and accuracy as DMSGD. When using 4 workers, GMC is better than DGC in test accuracy. From Table~\ref{tab:non-convex}, we can find that, compared with DMSGD, GMC can reduce the communication cost by more than 100-fold without loss of accuracy. Furthermore, compared with DGC, GMC achieves comparable~(sometimes better) accuracy with a comparable communication compression ratio. \begin{figure*}\label{fig:resnet cifar10} \centering \subfigure[ResNet20 on CIFAR-10]{ \includegraphics[width = 3.5cm]{Cifar10_ResNet20_Loss_m4} \includegraphics[width = 3.5cm]{Cifar10_ResNet20_Top1Acc_m4} \includegraphics[width = 3.5cm]{Cifar10_ResNet20_Loss_m8} \includegraphics[width = 3.5cm]{Cifar10_ResNet20_Top1Acc_m8} } \subfigure[AlexNet on CIFAR-10]{ \includegraphics[width = 3.5cm]{Cifar10_AlexNet_Loss_m4} \includegraphics[width = 3.5cm]{Cifar10_AlexNet_Top1Acc_m4} \includegraphics[width = 3.5cm]{Cifar10_AlexNet_Loss_m8} \includegraphics[width = 3.5cm]{Cifar10_AlexNet_Top1Acc_m8} } \caption{Training process using different numbers of workers} \end{figure*} \begin{minipage}{\textwidth} \begin{minipage}[t]{0.45\textwidth} \centering \tiny \makeatletter\def\@captype{table}\makeatother \caption{\small Training logistic regression}\label{tab:convex} \begin{tabular}{lllll} \toprule Methods & $s/d$ & Loss & Accuracy & CR \\ \midrule DMSGD & 1 & 0.2627 & 92.19\% & 1 \\ \midrule DGC & 0.01 & 0.2695 & 92.25\% & 4.867\% \\ & 0.001 & 0.2636 &
92.32\% & 0.447\% \\ \midrule GMC & 0.01 & 0.2621 & 92.32\% & 4.913\% \\ & 0.001 & 0.2638 & 92.29\% & 0.441\% \\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}[t]{0.45\textwidth} \centering \tiny \makeatletter\def\@captype{table}\makeatother \caption{\small Training ResNet and AlexNet}\label{tab:non-convex} \begin{tabular}{lllll} \toprule Models & GPUs & Methods & Accuracy & CR \\ \midrule ResNet20 & 4 & DMSGD & 91.93\% & 1 \\ & ~ & DGC & 91.82\% & 0.361\% \\ & ~ & GMC & 91.57\% & 0.402\% \\ \cmidrule(r){2-5} & 8 & DMSGD & 91.79\% & 1 \\ & ~ & DGC & 91.77\% & 0.648\% \\ & ~ & GMC & 92.05\% & 0.718\% \\ \midrule AlexNet & 4 & DMSGD & 75.76\% & 1 \\ & ~ & DGC & 74.66\% & 0.479\% \\ & ~ & GMC & 76.40\% & 0.517\% \\ \cmidrule(r){2-5} & 8 & DMSGD & 76.08\% & 1 \\ & ~ & DGC & 75.19\% & 0.849\% \\ & ~ & GMC & 75.48\% & 0.890\% \\ \bottomrule \end{tabular} \end{minipage} \end{minipage} \section{Conclusion} In this paper, we propose a novel method, called \emph{\underline{g}}lobal \emph{\underline{m}}omentum \emph{\underline{c}}ompression~(GMC), for sparse communication in distributed momentum SGD~(DMSGD). To the best of our knowledge, this is the first work that proves the convergence of DMSGD with sparse communication and memory gradient. Empirical results show that GMC can achieve state-of-the-art performance.
\section{Introduction} Video Object Segmentation (VOS) is a fundamental task in computer vision with many potential applications, including augmented reality~\cite{ngan2011video} and self-driving cars~\cite{zhang2016instance}. In this paper, we focus on semi-supervised VOS, which targets segmenting a particular object across the entire video sequence based on the object mask given at the first frame. \zongxin{The development of semi-supervised VOS can benefit many related tasks, such as video instance segmentation~\cite{vis,Feng_2019_ICCV} and interactive video object segmentation~\cite{oh2019fast,miao2020memory,liangmemory}.} Early VOS works~\cite{osvos,onavos,premvos} rely on fine-tuning with the first frame in evaluation, which heavily slows down the inference speed. Recent works (\emph{e.g.},~\cite{osmn,feelvos,spacetime}) aim to avoid fine-tuning and achieve better run-time. In these works, STMVOS~\cite{spacetime} introduces memory networks to learn to read sequence information and outperforms all the fine-tuning-based methods. However, STMVOS relies on simulating extensive frame sequences using large image datasets~\cite{voc,coco,cheng2014global,shi2015hierarchical,semantic} for training. The simulated data significantly boosts the performance of STMVOS but makes the training procedure elaborate. Without simulated data, FEELVOS~\cite{feelvos} adopts a semantic pixel-wise embedding together with a global (between the first and current frames) and a local (between the previous and current frames) matching mechanism to guide the prediction. The matching mechanism is simple and fast, but the performance is not comparable with STMVOS. \begin{wrapfigure}[17]{R}{0.55\textwidth} \centering \includegraphics[width=0.98\linewidth]{figs/CI.pdf} \caption{CI means collaborative integration. There are two foreground sheep (pink and blue). In the top line, neglecting background matching leads to confusion in the sheep's prediction.
In the bottom line, we relieve the confusion problem by introducing background matching (dot-line arrow).}\label{fig:cl} \end{wrapfigure} Even though the efforts mentioned above have made significant progress, current state-of-the-art works pay little attention to the feature embedding of the background region in videos and focus only on exploring robust matching strategies for the foreground object(s). Intuitively, it is easy to extract the foreground region from a video when precisely removing all the background. Moreover, modern video scenes commonly contain many similar objects, such as the cars in car racing, the people in a conference, and the animals on a farm. In these cases, neglecting to integrate foreground and background embeddings traps VOS in an unexpected background confusion problem. As shown in Fig.~\ref{fig:cl}, if we focus only on foreground matching like FEELVOS, a similar object of the same kind (a sheep here) in the background can easily confuse the prediction of the foreground object. This observation motivates us to treat the background equally with the foreground, so that better feature embeddings can be learned to relieve the background confusion and promote the accuracy of VOS. We propose a novel framework for Collaborative video object segmentation by Foreground-Background Integration (CFBI) based on the above motivation. Different from the above methods, we extract embeddings and perform matching not only for the foreground target in the reference frame but also for the background region, which relieves the background confusion. Besides, our framework extracts two types of embedding (\emph{i.e.}, pixel-level and instance-level embeddings) for each video frame to cover different scales of features. Like FEELVOS, we employ pixel-level embedding to match all the objects' details with the same global \& local mechanism.
However, pixel-level matching alone is neither sufficient nor robust for matching objects at larger scales and may bring unexpected noise due to the pixel-wise diversity. Thus we introduce instance-level embedding to help the segmentation of large-scale objects by using attention mechanisms. Moreover, we propose a collaborative ensembler to aggregate the foreground \& background and pixel-level \& instance-level information and learn the collaborative relationship among them implicitly. \zongxin{For better convergence, we adopt a balanced random-crop scheme in training to avoid the learned attributes being biased toward the background attributes.} All these proposed strategies can significantly improve the quality of the learned collaborative embeddings for conducting VOS while keeping the network simple yet effective. We perform extensive experiments on DAVIS~\cite{davis2016,davis2017} and YouTube-VOS~\cite{youtubevos} to validate the effectiveness of the proposed CFBI approach. Without any bells and whistles (such as the use of simulated data, fine-tuning or post-processing), CFBI outperforms all other state-of-the-art methods on the validation splits of DAVIS 2016 (ours, $\mathcal{J}\&\mathcal{F}$ $\mathbf{89.4\%}$), DAVIS 2017 ($\mathbf{81.9\%}$) and YouTube-VOS ($\mathbf{81.4\%}$) while keeping a competitive single-object inference speed of about 5 FPS. By additionally applying multi-scale \& flip augmentation at the testing stage, the accuracy can be further boosted to $\mathbf{90.1\%}$, $\mathbf{83.3\%}$ and $\mathbf{82.7\%}$, respectively. We hope our simple yet effective CFBI will serve as a solid baseline and ease future research on VOS. \section{Related Work} \noindent\textbf{Semi-supervised Video Object Segmentation.} Many previous methods for semi-supervised VOS rely on fine-tuning at test time. Among them, OSVOS~\cite{osvos} and MoNet~\cite{xiao2018monet} fine-tune the network on the first-frame ground-truth at test time.
OnAVOS~\cite{onavos} extends the first-frame fine-tuning by an online adaptation mechanism, \emph{i.e.}, online fine-tuning. MaskTrack~\cite{masktrack} uses optical flow to propagate the segmentation mask from one frame to the next. PReMVOS~\cite{premvos} combines four different neural networks (including an optical flow network~\cite{flownet}) using extensive fine-tuning and a merging algorithm. Despite achieving promising results, all these methods are seriously slowed down by fine-tuning during inference. Some other recent works (\emph{e.g.},~\cite{osmn,favos}) aim to avoid fine-tuning and achieve a better run-time. OSMN~\cite{osmn} employs two networks to extract the instance-level information and make segmentation predictions, respectively. PML~\cite{pml} learns a pixel-wise embedding with the nearest neighbor classifier. Similar to PML, VideoMatch~\cite{videomatch} uses a soft matching layer that maps the pixels of the current frame to the first frame in a learned embedding space. Following PML and VideoMatch, FEELVOS~\cite{feelvos} extends the pixel-level matching mechanism by additionally matching between the current frame and the previous frame. Compared to the methods with fine-tuning, FEELVOS achieves a much higher speed, but there is still a gap in accuracy. Like FEELVOS, RGMP~\cite{rgmp} and STMVOS~\cite{spacetime} do not require any fine-tuning. STMVOS, which leverages a memory network to store and read the information from past frames, outperforms all the previous methods. However, STMVOS relies on an elaborate training procedure using extensive simulated data generated from multiple datasets. Moreover, the above methods do not focus on background matching. Our CFBI utilizes both the pixel-level and instance-level embeddings to guide prediction. Furthermore, we propose a collaborative integration method by additionally learning background embedding.
\noindent\textbf{Attention Mechanisms.} Recent works introduce the attention mechanism into convolutional networks (\emph{e.g.},~\cite{attention_conv1,attention_conv2}). Following them, SE-Nets~\cite{senet} introduced a lightweight gating mechanism that focuses on enhancing the representational power of the convolutional network by modeling channel attention. Inspired by SE-Nets, CFBI uses an instance-level average pooling method to embed collaborative instance information from pixel-level embeddings. After that, we apply a channel-wise attention mechanism to help guide the prediction. Compared to OSMN, which employs an additional convolutional network to extract instance-level embedding, our instance-level attention method is more efficient and lightweight. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/overview.pdf} \caption{An \textbf{overview} of CFBI. F-G denotes Foreground-Background. We use \textcolor{red}{red} and \textcolor{blue}{blue} to indicate foreground and background separately. The deeper the red or blue color, the higher the confidence. Given the first frame ($t=1$), previous frame ($t=T-1$), and current frame ($t=T$), we first extract their pixel-wise embeddings using a backbone network. Second, we separate the first and previous frame embeddings into foreground and background pixels based on their masks. After that, we use F-G pixel-level matching and instance-level attention to guide our collaborative ensembler network to generate a prediction.} \label{fig:overview} \end{figure} \section{Method}\label{sec:model} \noindent\textbf{Overview.} Learning foreground feature embedding has been well explored in previous works (\emph{e.g.},~\cite{osmn,feelvos}). OSMN proposes to conduct instance-level matching, but such a matching scheme fails to consider the feature diversity among the details of the target's appearance and results in coarse predictions.
PML and FEELVOS alternatively adopt pixel-level matching by matching each pixel of the target, which effectively takes the feature diversity into account and achieves promising performance. Nevertheless, performing pixel-level matching may bring unexpected noise when some background pixels have an appearance similar to the foreground ones (Fig.~\ref{fig:cl}). To overcome the problems raised by the above methods and distinguish the foreground objects from the background, we present Collaborative video object segmentation by Foreground-Background Integration (CFBI), as shown in Figure~\ref{fig:overview}. We use red and blue to indicate foreground and background separately. First, beyond learning feature embedding from foreground pixels, our CFBI also considers embedding learning from background pixels for collaboration. Such a learning scheme will encourage the feature embeddings of the target object and its corresponding background to be contrastive, improving the segmentation results accordingly. Second, we further conduct the embedding matching at both the pixel level and the instance level with the collaboration of pixels from the foreground and background. For the pixel-level matching, we improve the robustness of the local matching under various object moving rates. For the instance-level matching, we design an instance-level attention mechanism to augment the pixel-level matching efficiently. Moreover, to implicitly aggregate the learned foreground \& background and pixel-level \& instance-level information, we employ a collaborative ensembler to construct large receptive fields and make precise predictions. \subsection{Collaborative Pixel-level Matching} For the pixel-level matching, we adopt a global and local matching mechanism similar to FEELVOS to introduce guidance information from the first and previous frames, respectively.
Unlike previous methods~\cite{pml,feelvos}, we additionally incorporate background information and apply multiple windows in the local matching, as shown in the middle of Fig.~\ref{fig:overview}. To incorporate background information, we first redesign the pixel distance of~\cite{feelvos} to further distinguish the foreground from the background. Let $B_t$ and $F_t$ denote the pixel sets of the background and of all the foreground objects of frame $t$, respectively. We define a new distance between pixel $p$ of the current frame $T$ and pixel $q$ of frame $t$ in terms of their corresponding embeddings, $e_p$ and $e_q$, by \begin{equation} \label{equ:distance} D_t(p,q)= \begin{cases} 1-\frac{2}{1+\exp(||e_p-e_q||^2+b_B)} & \text{if } q \in B_t\\ 1-\frac{2}{1+\exp(||e_p-e_q||^2+b_F)} & \text{if } q \in F_t \end{cases}, \end{equation} where $b_B$ and $b_F$ are a trainable background bias and foreground bias. We introduce these two biases to enable our model to further learn the difference between foreground and background distances. \noindent\textbf{Foreground-Background Global Matching.} Let $\mathcal{P}_t$ denote the set of all pixels (with a stride of 4) at time $t$, and let $\mathcal{P}_{t,o}\subseteq \mathcal{P}_{t}$ be the set of pixels at time $t$ that belong to the foreground object $o$. The global foreground matching between one pixel $p$ of the current frame $T$ and the pixels of the first reference frame (\emph{i.e.}, $t=1$) is \begin{equation} \label{equ:global_f} G_{T,o}(p)=\min_{q\in\mathcal{P}_{1,o}} D_1(p,q). \end{equation} Similarly, let $\mathcal{\overline{P}}_{t,o} =\mathcal{P}_t \backslash \mathcal{P}_{t,o}$ denote the set of relative background pixels of object $o$ at time $t$; the global background matching is \begin{equation} \label{equ:global_b} \overline{G}_{T,o}(p)=\min_{q\in\mathcal{\overline{P}}_{1,o}} D_{1}(p,q).
\end{equation} \noindent\textbf{Foreground-Background Multi-Local Matching.} \setlength{\intextsep}{-10pt} \begin{wrapfigure}[20]{R}{0.37\textwidth} \center \subfloat[Slow moving rate]{ \label{fig:slow} \includegraphics[width=0.9\linewidth]{figs/slow.pdf} } \subfloat[Fast moving rate]{ \label{fig:fast} \includegraphics[width=0.9\linewidth]{figs/fast.pdf} } \caption{The moving rate of objects across two adjacent frames varies largely between sequences. Examples are from YouTube-VOS~\cite{youtubevos}.}\label{fig:offset} \end{wrapfigure} \noindent In FEELVOS, the local matching is limited to a single fixed neighborhood of pixels, but the offset of objects across two adjacent frames in VOS is variable, as shown in Fig.~\ref{fig:offset}. Thus, we propose to apply the local matching mechanism at different scales and let the network learn how to select an appropriate local scale, which makes our framework more robust to various moving rates of objects. Notably, we reuse the intermediate results of the local matching with the largest window to compute the matching on the smaller windows, so the additional computational cost of our multi-local matching is negligible. \setlength{\intextsep}{0pt} Formally, let $K=\{k_1,k_2,...,k_n\}$ denote all the neighborhood sizes and $H(p,k)$ denote the neighborhood set of pixels that are at most $k$ pixels away from $p$ in both $x$ and $y$ directions; our foreground multi-local matching between the current frame $T$ and its previous frame $T-1$ is \begin{equation} ML_{T,o}(p,K)=\{L_{T,o}(p,k_1),L_{T,o}(p,k_2),...,L_{T,o}(p,k_n)\}, \end{equation} where \begin{equation} \label{equ:local_f} L_{T,o}(p,k)= \begin{cases} \min_{q\in\mathcal{P}^{p,k}_{T-1,o}} D_{T-1}(p,q) & \text{if }\mathcal{P}^{p,k}_{T-1,o}\neq\emptyset \\ 1 & \text{otherwise} \end{cases}. \end{equation} Here, $\mathcal{P}^{p,k}_{T-1,o}:=\mathcal{P}_{T-1,o}\cap H(p,k)$ denotes the pixels in the local window (or neighborhood).
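For concreteness, the distance of Eq.~(\ref{equ:distance}) and the foreground multi-local matching can be sketched in NumPy as follows; array layouts and function names are ours for illustration, and the background case only swaps the mask and the bias:

```python
import numpy as np

def fb_distance(e_p, e_q, bias):
    """Distance of Eq. (1): 1 - 2 / (1 + exp(||e_p - e_q||^2 + b)),
    with b = b_F for foreground pixels q and b = b_B for background
    ones (both trainable scalars, initialized to 0 in the paper)."""
    d2 = np.sum((e_p - e_q) ** 2, axis=-1)
    return 1.0 - 2.0 / (1.0 + np.exp(d2 + bias))

def multi_local_match(cur_emb, prev_emb, prev_mask, p, windows, bias=0.0):
    """Foreground multi-local matching for one pixel p = (y, x): for each
    window size k, the minimum distance to the object pixels
    (prev_mask == True) of the previous frame inside the (2k+1)x(2k+1)
    neighborhood H(p, k), or 1 if that neighborhood holds no object pixel."""
    H, W, _ = prev_emb.shape
    y, x = p
    out = []
    for k in windows:
        y0, y1 = max(0, y - k), min(H, y + k + 1)
        x0, x1 = max(0, x - k), min(W, x + k + 1)
        nb_emb = prev_emb[y0:y1, x0:x1].reshape(-1, prev_emb.shape[-1])
        nb_msk = prev_mask[y0:y1, x0:x1].reshape(-1)
        if nb_msk.any():
            out.append(float(fb_distance(cur_emb[y, x], nb_emb[nb_msk], bias).min()))
        else:
            out.append(1.0)  # empty neighborhood: maximum distance
    return out
```

Note that identical embeddings give a distance of exactly 0 when the bias is 0, matching the lower end of the $[0,1)$ range of Eq.~(\ref{equ:distance}).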
And our background multi-local matching is \begin{equation} \overline{ML}_{T,o}(p,K)=\{\overline{L}_{T,o}(p,k_1),\overline{L}_{T,o}(p,k_2),...,\overline{L}_{T,o}(p,k_n)\}, \end{equation} where \begin{equation} \label{equ:local_b} \overline{L}_{T,o}(p,k)= \begin{cases} \min_{q\in\mathcal{\overline{P}}_{T-1,o}^{p,k}} D_{T-1}(p,q) & \text{if }\mathcal{\overline{P}}_{T-1,o}^{p,k}\neq\emptyset \\ 1 & \text{otherwise} \end{cases}. \end{equation} Similarly, $\mathcal{\overline{P}}^{p,k}_{T-1,o}:=\mathcal{\overline{P}}_{T-1,o}\cap H(p,k)$. In addition to the global and multi-local matching maps, we concatenate the pixel-level embedding feature and mask of the previous frame with the current-frame feature. FEELVOS demonstrates the effectiveness of concatenating the previous mask. Following this, we empirically find that introducing the previous embedding further improves the performance ($\mathcal{J}$\&$\mathcal{F}$) by about $0.5\%$. In summary, the output of our collaborative pixel-level matching is a concatenation of (1) the pixel-level embedding of the current frame, (2) the pixel-level embedding and mask of the previous frame, (3) the multi-local matching map, and (4) the global matching map, as shown in the bottom box of Fig.~\ref{fig:overview}. \setlength{\intextsep}{-10pt} \begin{wrapfigure}[22]{r}{0.3\textwidth} \center \includegraphics[width=0.98\linewidth]{figs/ins_a.pdf} \caption{The trainable part of the instance-level attention. $C_e$ denotes the channel dimension of the pixel-wise embedding. $H$, $W$, and $C$ denote the height, width, and channel dimension of the CE features.} \label{fig:instance} \end{wrapfigure} \subsection{Collaborative Instance-level Attention} As shown on the right of Fig.~\ref{fig:overview}, we further design a collaborative instance-level attention mechanism to guide the segmentation of large-scale objects.
After obtaining the pixel-level embeddings of the first and previous frames, we separate them into foreground and background pixels (\emph{i.e.}, $\mathcal{P}_{1,o}$, $\mathcal{\overline{P}}_{1,o}$, $\mathcal{P}_{T-1,o}$, and $\mathcal{\overline{P}}_{T-1,o}$) according to their masks. Then, we apply channel-wise average pooling to each group of pixels to generate a total of four instance-level embedding vectors and concatenate these vectors into one collaborative instance-level guidance vector. Thus, the guidance vector contains information from both the first and previous frames, and from both the foreground and background regions. \setlength{\intextsep}{0pt} In order to efficiently utilize the instance-level information, we employ an attention mechanism to adjust our Collaborative Ensembler (CE). We show a detailed illustration in Fig.~\ref{fig:instance}. Inspired by SE-Nets~\cite{senet}, we leverage a fully-connected (FC) layer (we found this setting performs better than the two FC layers adopted by SE-Nets) and a non-linear activation function to construct a gate for the input of each Res-Block in the CE. The gate adjusts the scale of the input feature in a channel-wise manner. By introducing collaborative instance-level attention, we can leverage the full scale of foreground-background information to further guide the prediction. Information with a large (instance-level) receptive field is useful for relieving local ambiguities~\cite{torralba2003contextual}, which are inevitable with a small (pixel-wise) receptive field. \subsection{Collaborative Ensembler (CE)} In the lower right of Fig.~\ref{fig:overview}, we design a collaborative ensembler to construct large receptive fields for aggregating pixel-level and instance-level information and to implicitly learn the collaborative relationship between foreground and background.
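A minimal NumPy sketch of the instance-level attention gate from the previous subsection: four pixel groups are average-pooled channel-wise, concatenated into a guidance vector, and passed through a single FC layer with an activation. All shapes and the sigmoid choice are our assumptions (the paper only specifies one FC layer and a non-linear activation), and for brevity we pool all four groups from a single embedding map, whereas CFBI pools the first- and previous-frame embeddings separately:

```python
import numpy as np

def instance_level_gate(emb, masks, W, b):
    """Build a channel-wise gate from instance-level embeddings.

    emb:   (H, W, C_e) pixel-wise embedding map (illustrative).
    masks: four boolean (H, W) masks -- foreground/background groups.
    W, b:  weights of a single FC layer, shapes (4*C_e, C) and (C,).
    Returns a (C,) gate applied channel-wise to a CE input feature.
    """
    pooled = []
    for m in masks:
        # channel-wise average pooling over each pixel group
        pooled.append(emb[m].mean(axis=0) if m.any() else np.zeros(emb.shape[-1]))
    guidance = np.concatenate(pooled)  # collaborative guidance vector, (4*C_e,)
    gate = 1.0 / (1.0 + np.exp(-(guidance @ W + b)))  # sigmoid gate in (0, 1)
    return gate
```

In use, a CE feature map of shape `(H', W', C)` would be rescaled as `features * gate[None, None, :]`.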
Inspired by ResNets~\cite{resnet} and DeepLabs~\cite{deeplab,deeplabv3p}, both of which have shown significant representational power in image segmentation tasks, our CE uses a downsample-upsample structure that contains three stages of Res-Blocks~\cite{resnet} and an Atrous Spatial Pyramid Pooling (ASPP)~\cite{deeplabv3p} module. The numbers of Res-Blocks in Stages 1, 2, and 3 are $2$, $3$, and $3$, respectively. Besides, we employ dilated convolutional layers to enlarge the receptive fields efficiently. The dilation rates of the $3\times3$ convolutional layers of the Res-Blocks in one stage are $1$, $2$, and $4$, respectively ($1$ and $2$ for Stage 1). At the beginning of Stage 2 and Stage 3, the feature maps are downsampled by the first Res-Block with a stride of 2. After these three stages, we employ an ASPP and a Decoder~\cite{deeplabv3p} module to further increase the receptive fields, upsample the features, and refine the prediction in collaboration with the low-level backbone features. \section{Implementation Details} \setlength{\intextsep}{-10pt} \begin{wrapfigure}[11]{r}{0.5\textwidth} \center\vspace{-9mm} \subfloat[Normal]{ \label{fig:normal_crop} \includegraphics[width=0.426\linewidth]{figs/normal_crop.pdf} } \subfloat[Balanced]{ \label{fig:balanced_crop} \includegraphics[width=0.52\linewidth]{figs/balanced_crop.pdf} } \caption{When using normal random-crop, some red windows contain few or no foreground pixels. To relieve this problem, we propose balanced random-crop.}\label{fig:crop} \end{wrapfigure} For better convergence, we modify the random-crop augmentation and the training method of previous methods~\cite{spacetime,feelvos}. \setlength{\intextsep}{0pt} \noindent\textbf{Balanced Random-Crop.} As shown in Fig.~\ref{fig:crop}, there is an apparent imbalance between the numbers of foreground and background pixels in VOS datasets. Such an imbalance tends to bias models toward background attributes.
In order to relieve this problem, we adopt a balanced random-crop scheme, which crops a sequence of frames (\emph{i.e.}, the first frame, the previous frame, and the current frame) using the same crop window and restricts the cropped region of the first frame to contain enough foreground information. The restriction method is simple yet effective. Specifically, balanced random-crop checks whether the randomly cropped frame contains enough pixels from foreground objects; if not, the cropping operation is repeated until an acceptable crop is obtained. \noindent\textbf{Sequential Training.} In the training stage, FEELVOS predicts only one step in each iteration, and the guidance masks come from the ground-truth data. RGMP and STMVOS use previous guidance information (mask or feature memory) during training, which is more consistent with the inference stage and performs better. In the evaluation stage, the previous guidance masks are always generated by the network in the previous inference steps. Following RGMP, we train the network on a sequence of consecutive frames in each SGD iteration. In each iteration, we randomly sample a batch of video sequences. For each video sequence, we randomly sample one frame as the reference frame and $N+1$ consecutive frames as the previous frame and the current-frame sequence of $N$ frames. When predicting the first frame, we use the ground truth of the previous frame as the previous mask. When predicting the following frames, we use the latest prediction as the previous mask. \begin{figure}[t!]
\centering \includegraphics[width=0.9\linewidth]{figs/comparison.pdf} \caption{Qualitative comparison with STMVOS on DAVIS 2017. In the first video, STMVOS fails to track the gun after occlusion and blur. In the second video, STMVOS more easily confuses parts of the bicycle and the person.} \label{fig:comparison} \end{figure} \noindent\textbf{Training Details.} Following FEELVOS, we use the DeepLabv3+~\cite{deeplabv3p} architecture as the backbone of our network. However, our backbone is based on the dilated ResNet-101~\cite{deeplabv3p} instead of Xception-65~\cite{xception} to save computational resources. We apply batch normalization (BN)~\cite{bn} in our backbone and pre-train it on ImageNet~\cite{deng2009imagenet} and COCO~\cite{coco}. The backbone is followed by one depth-wise separable convolution that extracts the pixel-wise embedding with a stride of 4. We initialize $b_B$ and $b_F$ to $0$. For the multi-local matching, we further downsample the embedding feature to half its size using bilinear interpolation to save GPU memory. Besides, the window sizes in our setting are $K=\{2, 4, 6, 8, 10, 12\}$. For the collaborative ensembler, we apply group normalization (GN)~\cite{gn} and gated channel transformation~\cite{gct} to improve training stability and performance when using a small batch size. For sequential training, the length of the current sequence is $N=3$, which strikes a good balance between computational resources and network performance. We use the DAVIS 2017~\cite{davis2017} training set (60 videos) and the YouTube-VOS~\cite{youtubevos} training set (3471 videos) as the training data. We downsample all the videos to 480P resolution, which is the same as the default setting in DAVIS. We adopt SGD with a momentum of $0.9$ and apply a bootstrapped cross-entropy loss, which only considers the $15\%$ hardest pixels. During the training stage, we freeze the parameters of BN in the backbone.
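The bootstrapped cross-entropy loss mentioned above can be sketched as follows; shapes and names are illustrative, and in training it runs on the flattened per-pixel softmax outputs of the batch:

```python
import numpy as np

def bootstrapped_ce(probs, labels, ratio=0.15):
    """Bootstrapped cross-entropy: average the per-pixel cross-entropy
    over only the `ratio` fraction of hardest pixels.

    probs:  (N, K) predicted class probabilities for N pixels.
    labels: (N,) integer ground-truth class indices.
    """
    n = probs.shape[0]
    # per-pixel negative log-likelihood of the true class
    per_pixel = -np.log(probs[np.arange(n), labels] + 1e-12)
    k = max(1, int(round(ratio * n)))   # number of hardest pixels kept
    hardest = np.sort(per_pixel)[-k:]   # top-k per-pixel losses
    return float(hardest.mean())
```

Averaging over only the hardest pixels focuses the gradient on difficult regions (\emph{e.g.}, object boundaries) instead of the abundant, easily classified background.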
For the experiments on YouTube-VOS, we use a learning rate of $0.01$ for $100,000$ steps with a batch size of 4 videos (\emph{i.e.}, 20 frames in total) per GPU using $2$ Tesla V100 GPUs. The training time on YouTube-VOS is about 5 days. For DAVIS, we use a learning rate of $0.006$ for $50,000$ steps with a batch size of 3 videos (\emph{i.e.}, 15 frames in total) per GPU using $2$ GPUs. We apply flipping, scaling, and balanced random-crop as data augmentations. The cropped window size is $465\times 465$. For the multi-scale testing, we apply the scales of $\{1.0, 1.15, 1.3, 1.5\}$ and $\{2.0, 2.15, 2.3\}$ for YouTube-VOS and DAVIS, respectively. CFBI achieves similar results in PyTorch~\cite{pytorch} and PaddlePaddle~\cite{paddlepaddle}. \setlength{\intextsep}{-3pt} \begin{wraptable}[28]{r}{0.55\textwidth} \centering\vspace{-9mm} \caption{The quantitative evaluation on YouTube-VOS~\cite{youtubevos}. F, S, and $^*$ respectively denote fine-tuning at test time, using simulated data in the training process, and performing model ensemble in evaluation.
CFBI$^{MS}$ denotes using a multi-scale and flip strategy in evaluation.}\label{tab:youtubevos} \begin{tabular}{lccccccc} \toprule[1.5pt] & & & & \multicolumn{2}{c}{Seen} & \multicolumn{2}{c}{Unseen} \\ \midrule[1pt] Methods & F & S & Avg & $\mathcal{J}$ & $\mathcal{F}$ & $\mathcal{J}$ & $\mathcal{F}$ \\ \midrule[1pt] \multicolumn{8}{c}{\textit{Validation 2018 Split}} \\ \midrule[1pt] AG~\cite{agame} & & & 66.1 & 67.8 & - & 60.8 & - \\ PReM~\cite{premvos} & \checkmark & & 66.9 & 71.4 & 75.9 & 56.5 & 63.7 \\ BoLT~\cite{boltvos} & \checkmark & & 71.1 & 71.6 & - & 64.3 & - \\ STM$^-$~\cite{spacetime} & & & 68.2 & - & - & - & - \\ STM~\cite{spacetime} & & \checkmark & 79.4 & 79.7 & 84.2 & 72.8 & 80.9 \\ \hline CFBI & & & \textbf{81.4} & \textbf{81.1} & \textbf{85.8} & \textbf{75.3} & \textbf{83.4} \\ CFBI$^{MS}$ & & & \textbf{82.7} & \textbf{82.2} & \textbf{86.8} & \textbf{76.9} & \textbf{85.0} \\ \midrule[1pt] \multicolumn{8}{c}{\textit{Validation 2019 Split}} \\ \midrule[1pt] CFBI & & & \textbf{81.0} & \textbf{80.6} & \textbf{85.1} & \textbf{75.2} & \textbf{83.0} \\ CFBI$^{MS}$ & & & \textbf{82.4} & \textbf{81.8} & \textbf{86.1} & \textbf{76.9} & \textbf{84.8} \\ \midrule[1pt] \multicolumn{8}{c}{\textit{Testing 2019 Split}} \\ \midrule[1pt] MST$^*$~\cite{mst} & & \checkmark & 81.7 & 80.0 & 83.3 & \textbf{77.9} & 85.5 \\ EMN$^*$~\cite{emn} & & \checkmark & 81.8 & \textbf{80.7} & \textbf{84.7} & 77.3 & 84.7 \\ \hline CFBI & & & 81.5 & 79.6 & 84.0 & 77.3 & 85.3 \\ CFBI$^{MS}$ & & & \textbf{82.2} & 80.4 & \textbf{84.7} & \textbf{77.9} & \textbf{85.7} \\ \bottomrule[1.5pt] \end{tabular} \end{wraptable} \section{Experiments} Following the previous state-of-the-art method~\cite{spacetime}, we evaluate our method on YouTube-VOS~\cite{youtubevos}, DAVIS 2016~\cite{davis2016}, and DAVIS 2017~\cite{davis2017}. For the evaluation on YouTube-VOS, we train our model on the YouTube-VOS training set~\cite{youtubevos} (3471 videos).
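For reference, the region similarity $\mathcal{J}$ reported throughout these benchmarks is a per-object mask IoU; a minimal sketch, assuming binary masks (the convention for two empty masks is ours):

```python
import numpy as np

def j_score(pred, gt):
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, gt).sum() / union)
```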
For DAVIS, we train our model on the DAVIS-2017 training set~\cite{davis2017} (60 videos). Both DAVIS 2016 and 2017 are evaluated using an identical model trained on DAVIS 2017 for a fair comparison with previous works~\cite{feelvos,spacetime}. Furthermore, we provide DAVIS results using both DAVIS 2017 and YouTube-VOS for training, following recent works~\cite{feelvos,spacetime}. \setlength{\intextsep}{0pt} The evaluation metrics are the $\mathcal{J}$ score, computed as the average IoU between the prediction and the ground-truth mask; the $\mathcal{F}$ score, computed as the average boundary similarity between the boundaries of the prediction and the ground truth; and their mean ($\mathcal{J}$\&$\mathcal{F}$). We evaluate our results on the official evaluation server or use the official tools. \subsection{Comparison with State-of-the-art Methods} \noindent \textbf{YouTube-VOS}~\cite{youtubevos} is the latest large-scale dataset for multi-object video segmentation. Compared to the popular DAVIS benchmark, which consists of $120$ videos, YouTube-VOS is about 37 times larger. In detail, the dataset contains 3471 videos in the training set (65 categories), 507 videos in the validation set (26 additional unseen categories), and 541 videos in the test set (29 additional unseen categories). Due to the existence of unseen object categories, the YouTube-VOS validation set is well suited to measuring the generalization ability of different methods. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{figs/quality.pdf} \caption{Qualitative results on DAVIS 2017 and YouTube-VOS. In the first video, we succeed in tracking many similar-looking sheep. In the second video, our CFBI tracks the person and the dog with the red mask well after occlusion. In the last video, CFBI fails to segment one hand of the right person (the white box).
A possible reason is that the two persons are too similar and close.} \label{fig:quality} \end{figure} \begin{wraptable}[21]{r}{0.58\textwidth} \centering \caption{The quantitative evaluation on DAVIS 2016~\cite{davis2016} validation set. (\textbf{Y}) denotes using YouTube-VOS for training.}\label{tab:davis2016} \begin{tabular}{l c c c c c c} \toprule[1.5pt] Methods & F & S & Avg & $\mathcal{J}$ & $\mathcal{F}$ & t/s \\ \midrule[1pt] OSMN~\cite{osmn} & & & - & 74.0 & & 0.14 \\ PML~\cite{pml} & & & 77.4 & 75.5 & 79.3 & 0.28 \\ VideoMatch~\cite{videomatch} & & & 80.9 & 81.0 & 80.8 & 0.32 \\ RGMP$^-$~\cite{rgmp} & & & 68.8 & 68.6 & 68.9 & 0.14 \\ RGMP~\cite{rgmp} & & \checkmark & 81.8 & 81.5 & 82.0 & 0.14 \\ A-GAME~\cite{agame} (\textbf{Y}) & & & 82.1 & 82.2 & 82.0 & \textbf{0.07} \\ FEELVOS~\cite{feelvos} (\textbf{Y}) & & & 81.7 & 81.1 & 82.2 & 0.45 \\ OnAVOS~\cite{onavos}{} & \checkmark & & 85.0 & 85.7 & 84.2 & 13 \\ PReMVOS~\cite{premvos} & \checkmark & & 86.8 & 84.9 & 88.6 & 32.8 \\ STMVOS~\cite{spacetime} & & \checkmark & 86.5 & 84.8 & 88.1 & 0.16 \\ STMVOS~\cite{spacetime} (\textbf{Y}) & & \checkmark & \textbf{89.3} & \textbf{88.7} & 89.9 & 0.16 \\ \hline CFBI & & & 86.1 & 85.3 & 86.9 & 0.18 \\ CFBI (\textbf{Y}) & & & \textbf{89.4} & 88.3 & \textbf{90.5} & 0.18 \\ CFBI$^{MS}$ (\textbf{Y}) & & & \textbf{90.7} & \textbf{89.6} & \textbf{91.7} & 9 \\ \bottomrule[1.5pt] \end{tabular} \end{wraptable} As shown in Table~\ref{tab:youtubevos}, we compare our method to existing methods on both Validation 2018 and Testing 2019 splits. Without using any bells and whistles, like fine-tuning at test time~\cite{osvos,onavos} or pre-training on larger augmented simulated data~\cite{rgmp,spacetime}, our method achieves an average score of $\mathbf{81.4\%}$, which significantly outperforms all other methods in every evaluation metric. 
Particularly, the $81.4\%$ result is $2.0\%$ higher than that of the previous state-of-the-art method, STMVOS, which uses extensive simulated data from~\cite{coco,voc,cheng2014global,semantic,shi2015hierarchical} for training. Without simulated data, the performance of STMVOS drops from $79.4\%$ to $68.2\%$. Moreover, we further boost our performance to $\mathbf{82.7\%}$ by applying a multi-scale and flip strategy during the evaluation. We also compare our method with two of the best results on the Testing 2019 split, \emph{i.e.}, the \textit{Rank 1} (EMN~\cite{emn}) and \textit{Rank 2} (MST~\cite{mst}) results in the 2nd Large-scale Video Object Segmentation Challenge. Without applying model ensemble, our single-model result ($\mathbf{82.2\%}$) outperforms the \textit{Rank 1} result ($81.8\%$) in the unseen and average metrics, which further demonstrates our generalization ability and effectiveness. \noindent \textbf{DAVIS 2016}~\cite{davis2016} contains 20 videos, each annotated with high-quality masks for a single target object. We compare our CFBI method with state-of-the-art methods in Table~\ref{tab:davis2016}. On the DAVIS-2016 validation set, our method trained with the additional YouTube-VOS training set achieves an average score of $\mathbf{89.4\%}$, which is slightly better than STMVOS ($89.3\%$), a method using simulated data as mentioned before. The accuracy gap between CFBI and STMVOS on DAVIS is smaller than the gap on YouTube-VOS. A possible reason is that DAVIS is small and easy to over-fit. Compared to a fairer baseline (\emph{i.e.}, FEELVOS), whose setting is the same as ours, the proposed CFBI not only achieves much better accuracy ($\mathbf{89.4\%}$ \emph{vs.}\hspace{-0.8mm} $81.7\%$) but also maintains a comparably fast inference speed ($0.18s$ \emph{vs.}\hspace{-0.8mm} $0.45s$). Applying multi-scale and flip augmentation for evaluation further improves the performance from $\mathbf{89.4\%}$ to $\mathbf{90.7\%}$.
However, this strategy costs much more inference time ($9s$). \setlength{\intextsep}{-3pt} \begin{wraptable}[29]{r}{0.52\textwidth} \caption{The quantitative evaluation on DAVIS-2017~\cite{davis2017}. (\textbf{Y}) denotes using YouTube-VOS for training.}\label{tab:davis2017} \begin{center} \begin{tabular}{l c c c c c} \toprule[1.5pt] Methods & F & S & Avg & $\mathcal{J}$ & $\mathcal{F}$ \\ \midrule[1pt] \multicolumn{6}{c}{\textit{Validation Split}} \\ \midrule[1pt] OSMN~\cite{osmn} & & & 54.8 & 52.5 & 57.1 \\ VideoMatch~\cite{videomatch} & & & 62.4 & 56.5 & 68.2 \\ OnAVOS~\cite{onavos} & \checkmark & & 63.6 & 61.0 & 66.1 \\ RGMP~\cite{rgmp} & & \checkmark & 66.7 & 64.8 & 68.6 \\ A-GAME~\cite{agame} (\textbf{Y}) & & & 70.0 & 67.2 & 72.7 \\ FEELVOS~\cite{feelvos} (\textbf{Y}) & & & 71.5 & 69.1 & 74.0 \\ PReMVOS~\cite{premvos} & \checkmark & & 77.8 & 73.9 & 81.7 \\ STMVOS~\cite{spacetime} & & \checkmark & 71.6 & 69.2 & 74.0 \\ STMVOS~\cite{spacetime} (\textbf{Y}) & & \checkmark & \textbf{81.8} & \textbf{79.2} & 84.3 \\ \hline CFBI & & & 74.9 & 72.1 & 77.7 \\ CFBI (\textbf{Y}) & & & \textbf{81.9} & \textbf{79.1} & \textbf{84.6} \\ CFBI$^{MS}$ (\textbf{Y}) & & & \textbf{83.3} & \textbf{80.5} & \textbf{86.0} \\ \midrule[1pt] \multicolumn{6}{c}{\textit{Testing Split}} \\ \midrule[1pt] OSMN~\cite{osmn} & & & 41.3 & 37.7 & 44.9 \\ OnAVOS~\cite{onavos} & \checkmark & & 56.5 & 53.4 & 59.6 \\ RGMP~\cite{rgmp} & & \checkmark & 52.9 & 51.3 & 54.4 \\ FEELVOS~\cite{feelvos} (\textbf{Y}) & & & 57.8 & 55.2 & 60.5 \\ PReMVOS~\cite{premvos} & \checkmark & & 71.6 & 67.5 & 75.7 \\ STMVOS~\cite{spacetime} (\textbf{Y}) & & \checkmark & 72.2 & 69.3 & 75.2 \\ \hline CFBI (\textbf{Y})& & & \textbf{74.8} & \textbf{71.1} & \textbf{78.5} \\ CFBI$^{MS}$ (\textbf{Y}) & & & \textbf{77.5} & \textbf{73.8} & \textbf{81.1} \\ \bottomrule[1.5pt] \end{tabular} \end{center} \end{wraptable} \noindent \textbf{DAVIS 2017}~\cite{davis2017} is a multi-object extension of DAVIS 2016. The validation set of DAVIS 2017 consists of 59 objects in 30 videos.
Next, we evaluate the generalization ability of our model on the popular DAVIS-2017 benchmark. As shown in Table~\ref{tab:davis2017}, our CFBI achieves a significant improvement over FEELVOS ($\mathbf{81.9\%}$ \emph{vs.}\hspace{-0.8mm} $71.5\%$). Besides, our CFBI without using simulated data is slightly better than the previous state-of-the-art method, STMVOS ($\mathbf{81.9\%}$ \emph{vs.}\hspace{-0.8mm} $81.8\%$). We show some examples compared with STMVOS in Fig.~\ref{fig:comparison}. As in previous experiments, augmentation in evaluation further boosts the results to a higher score of $\mathbf{83.3\%}$. We also evaluate our method on the testing split of DAVIS 2017, which is much more challenging than the validation split. As shown in Table~\ref{tab:davis2017}, we significantly outperform STMVOS ($72.2\%$) by $\textbf{2.6\%}$. By applying augmentation, we can further boost the result to $\textbf{77.5\%}$. These strong results demonstrate that our method has the best generalization ability among the latest methods. \noindent \textbf{Qualitative Results.} We show more results of CFBI on the validation sets of DAVIS 2017 ($\mathbf{81.9\%}$) and YouTube-VOS ($\mathbf{81.4\%}$) in Fig.~\ref{fig:quality}. It can be seen that CFBI is capable of producing accurate segmentation under challenging situations, such as large motion, occlusion, blur, and similar objects. In the \emph{sheep} video, CFBI succeeds in tracking five selected sheep inside a crowded flock. In the \emph{judo} video, CFBI fails to segment one hand of the right person. A possible reason is that the two persons are too similar in appearance and too close in position. Besides, their hands are blurred due to the fast motion. \subsection{Ablation Study} \setlength{\intextsep}{-2pt} \begin{wraptable}[15]{r}{0.4\textwidth} \centering \caption{Ablation of background embedding. P and I respectively denote the pixel-level matching and instance-level attention.
$^*$ denotes removing the foreground and background biases.}\label{tab:ablation_a} \setlength{\tabcolsep}{6.5pt} \begin{tabular}{l c c c c} \toprule[1.5pt] P & I & Avg & $\mathcal{J}$ & $\mathcal{F}$ \\ \midrule[1pt] \checkmark & \checkmark & 74.9 & 72.1 & 77.7 \\ \hline \checkmark$^*$ & \checkmark & 72.8 & 69.5 & 76.1 \\ \checkmark & & 73.0 & 69.9 & 76.0 \\ & \checkmark & 72.3 & 69.1 & 75.4 \\ & & 70.9 & 68.2 & 73.6 \\ \bottomrule[1.5pt] \end{tabular} \end{wraptable} We analyze the effect of each component proposed in CFBI on the DAVIS-2017 validation set. Following FEELVOS, we only use the DAVIS-2017 training set as training data for these experiments. \noindent \textbf{Background Embedding.} As shown in Table~\ref{tab:ablation_a}, we first analyze the influence of removing the background embedding while keeping only the foreground, as in~\cite{feelvos,osmn}. Without any background mechanism, the result of our method drops heavily from $74.9\%$ to $70.9\%$. This result shows that it is essential to embed both foreground and background features collaboratively. Besides, removing the background information from the pixel-level matching or the instance-level attention decreases the result to $73.0\%$ or $72.3\%$, respectively. Thus, the pixel-level matching is more sensitive to the background embedding than the instance-level attention. A possible reason for this phenomenon is that background pixels similar to the foreground are more likely to exist than background instances similar to the foreground. Finally, we remove the foreground and background biases, $b_F$ and $b_B$, from the distance metric, and the result drops to $72.8\%$, which further shows that the distance between foreground pixels and the distance between background pixels should be considered separately.
\setlength{\intextsep}{-3pt} \begin{wraptable}[11]{r}{0.55\textwidth} \centering \caption{Ablation of other components.}\label{tab:ablation_b} \begin{tabular}{l c c c c} \toprule[1.5pt] & Ablation & Avg & $\mathcal{J}$ & $\mathcal{F}$ \\ \midrule[1pt] 0 & Ours (CFBI) & 74.9 & 72.1 & 77.7 \\ \hline 1 & w/o multi-local windows & 73.8 & 70.8 & 76.8 \\ 2 & w/o sequential training & 73.3 & 70.8 & 75.7 \\ 3 & w/o collaborative ensembler & 73.3 & 70.5 & 76.1 \\ 4 & w/o balanced random-crop & 72.8 & 69.8 & 75.8 \\ 5 & w/o instance-level attention & 72.7 & 69.8 & 75.5 \\ \hline 6 & baseline (FEELVOS) & 68.3 & 65.6 & 70.9 \\ \bottomrule[1.5pt] \end{tabular} \end{wraptable} \noindent \textbf{Other Components.} The ablation study of the other proposed components is shown in Table~\ref{tab:ablation_b}. Line 0 ($74.9\%$) is the result of the proposed CFBI, and Line 6 ($68.3\%$) is the baseline (FEELVOS) reproduced by us. Under the same setting, our CFBI significantly outperforms the baseline. In Line 1, we use only one local neighborhood window for the local matching, following the setting of FEELVOS, which degrades the result from $74.9\%$ to $73.8\%$. This demonstrates that our multi-local matching module is more robust and effective than the single-local matching module of FEELVOS. Notably, the computational complexity of multi-local matching depends mainly on the largest local window size, because we reuse the intermediate results of the local matching with the largest window to compute the matching on smaller windows. In Line 2, we replace our sequential training by using ground-truth masks instead of network predictions as the previous mask. By doing this, the performance of CFBI drops from $74.9\%$ to $73.3\%$, which shows the effectiveness of our sequential training under the same setting. In Line 3, we replace our collaborative ensembler with 4 depth-wise separable convolutional layers. This architecture is the same as the dynamic segmentation head of~\cite{feelvos}.
Compared to our collaborative ensembler, the dynamic segmentation head has much smaller receptive fields and performs $1.6\%$ worse. In Line 4, we use normal random-crop instead of our balanced random-crop during the training process. In this situation, the performance drops by $2.1\%$ to $72.8\%$ as well. As expected, our balanced random-crop succeeds in relieving the model from being biased toward background attributes. In Line 5, we disable the use of instance-level attention as guidance information for the collaborative ensembler, which means we only use pixel-level information to guide the prediction. In this case, the result deteriorates even further to $72.7\%$, which proves that instance-level information can further assist segmentation on top of pixel-level information. In summary, we have demonstrated the effectiveness of each proposed component of CFBI. For VOS, it is necessary to embed both foreground and background features. Besides, the model becomes more robust by combining pixel-level and instance-level information, and by using more local windows in the matching between two consecutive frames. Apart from this, the proposed balanced random-crop and sequential training are simple yet effective in improving training performance. \section{Conclusion} This paper proposes a novel framework for video object segmentation by introducing collaborative foreground-background integration and achieves new state-of-the-art results on three popular benchmarks. Specifically, we encourage the feature embeddings of the foreground target and its corresponding background to be contrastive. Moreover, we integrate both pixel-level and instance-level embeddings to make our framework robust to various object scales while keeping the network simple and fast. We hope CFBI will serve as a solid baseline and facilitate future research on VOS and related areas, such as video object tracking and interactive video editing.
\noindent \textbf{Acknowledgements.} This work is partly supported by ARC DP200100938 and ARC DECRA DE190101315. \bibliographystyle{splncs04}
\section{Introduction} Low-dimensional systems have been very popular among theorists for a number of decades now \cite{Mattis,Cazalilla}, not without a reason. On one hand, many of these allow for an exact treatment. The Bethe {\it ansatz} solves a number of one-dimensional problems exactly, such as the Lieb-Liniger gas \cite{LiebLiniger} or Heisenberg's model \cite{IntroBethe}, while systems with supersymmetric Hamiltonians \cite{Cooper}, such as Sutherland's model \cite{Sutherlandpaper} or the attractive Lieb-Liniger model \cite{Mattisprivate}, allow for a trivial evaluation of their ground state wave functions \cite{Mattis}. On the other hand, one-dimensional systems have physical properties of interest, and can become strongly correlated, as is the case of the Tonks-Girardeau gas \cite{Girardeau,Yukalov} -- a one-dimensional system of impenetrable bosons sharing many of its properties with the ideal Fermi gas -- which was experimentally realized in \cite{Paredes}. More recently, the metastability of the so-called super Tonks-Girardeau gas -- a strongly attractive one-dimensional Bose gas with no bound clusters -- has been proposed \cite{Astra1,Batchelor,Astra2} and subsequently experimentally realized \cite{Haller}. This unique system attracts much current interest \cite{Chen,Kormos,Yin,Carnicero}. There is strong theoretical evidence \cite{Batchelor} of a close relation between the lowest-lying super Tonks-Girardeau (sTG) gas state and a system of one-dimensional (1D) hard spheres in the low-density limit. This is indeed very appealing, since the hard-sphere gas is extremely simple to solve with a variety of methods (see e.g. \cite{Girardeau}), while a good description of the 1D Bose gas in the sTG regime requires, in principle, accurate numerical calculations, such as the diffusion and variational Monte Carlo methods employed in \cite{Astra2}. 
This evidence was then used to conjecture that, in a harmonic trap, the sTG and 1D hard-sphere Bose gases are equivalent \cite{AstraGirardeau}; we call this statement the Astrakharchik-Girardeau conjecture for the trapped case. In this Letter, we prove a theorem which states the equivalence between 1D Bose gases interacting via hard-sphere potentials and certain momentum-dependent contact interactions -- which we also define in general -- in the untrapped case. Our theorem represents a weaker, though exact, version of the Astrakharchik-Girardeau conjecture in free space. The two-body interactions in our new model, which we call the extended hard-sphere (eHS) system, are Fourier-transformable, and we are therefore able to write down the system's Hamiltonian in momentum representation. For the Lieb-Liniger gas, we obtain the energy of both the ground state with strong repulsive interactions and the lowest sTG state with strong attraction to first order in perturbation theory, departing from the exactly solvable eHS model as the zeroth-order reference Hamiltonian. Last but not least, we obtain the so-called Tan relations \cite{Olsha2,Tan1,Tan2,Tan3,Braaten,Castin,Combescot,VZM,Zwerger} for the Lieb-Liniger gas and use them to calculate Tan's contact -- related to short-distance correlations -- in the homogeneous and trapped cases, the latter within the local density approximation. \section{System Hamiltonians} The Hamiltonians for the hard-sphere (HS) and Lieb-Liniger (LL) systems with $N$ identical bosons of mass $m$ have the form \begin{equation} H=\sum_{i=1}^N \frac{\hat{p}_i^2}{2m}+\sum_{i<j=1}^N V(x_i-x_j).\label{Ham1} \end{equation} For the LL model, $V(x)= g_{\text{LL}} \delta(x)$, with $g_{\text{LL}}=-2\hbar^2/ma$ a constant and $a$ the two-body scattering length; for the HS model, $V(x)=0$ ($\infty$) for $|x|>a$ ($\le a$), with $a>0$ a constant hard-sphere diameter. \section{Two-body problem} We begin by considering the two-boson problem. 
After separation of center-of-mass ($X=(x_1+x_2)/2$) and relative ($x=x_1-x_2$) coordinates, Hamiltonian (\ref{Ham1}) reads \begin{equation} H=-\frac{\hbar^2}{2\mu} \frac{\partial^2}{\partial x^2}+V(x),\label{HamTwoBody} \end{equation} with $\mu=m/2$ the reduced mass of the two-boson system. The stationary Schr\"odinger equation $H\psi=E\psi$ at positive energies $E=\hbar^2k^2/2\mu$ is solved by $\psi(x)=\sin(k|x|+\theta_{\alpha})$. The wave functions of the HS model are given by $\psi(x)$ for $|x|>a$ and are zero for $|x|\le a$, while for the LL model they are given by $\psi(x)$ for all $x$. The phase shifts, with self-explanatory subscripts, are given by \begin{align} \tan\theta_{\text{LL}} &= \frac{\hbar^2 k}{\mu g_{\text{LL}}} \label{phaseshiftsLL}\\ \tan\theta_{\text{HS}} &= -\tan(ka).\label{phaseshiftsHS} \end{align} Note that if, in Eq. (\ref{phaseshiftsLL}), $g_{\text{LL}}$ is made momentum-dependent, $g_{\text{LL}}\to g(k)$, then the HS phase-shifts (\ref{phaseshiftsHS}) are recovered by choosing \begin{equation} g(k)=-\frac{\hbar^2k}{\mu}\cot(ka).\label{momentumdependentg} \end{equation} The above relation, carefully stated, provides the desired mapping between HS and Dirac delta interactions in one dimension. \section{Definition of momentum-dependent interactions} Any analytic function of an operator is defined by its expansion in powers of the operator \cite{Conway}. Given that, define an even analytic function of the momentum operator $f(\hat{k})$, with $\hat{k}=\hat{p}/\hbar$. The action of $f(\hat{k})$ on a plane-wave is given by $f(\hat{k})e^{iqx}=f(q) e^{iqx}$. It is also necessary to consider its action on functions of the form $\sin(q|x|)$, with discontinuous derivatives at the origin, which is given by $\sum_{n=0}^{\infty}(-1)^nf^{(2n)}(0) (\partial_{x})^{2n} \sin(q|x|)/(2n)!$. All even derivatives of $\sin(q|x|)$ include undesired Dirac deltas $\delta(x)$. 
A simple way to deal with this problem is to restrict the action of $f$ to positions $x>0$ or $x<0$, which avoids Dirac deltas and has no influence on differentiable functions ($\sim \cos(kx)$) at the origin. We can therefore define momentum-dependent contact interactions as follows: {\it Definition}. Let $f$ be an even, analytic function. A momentum-dependent contact interaction is an operator $\hat{W}_k$ with the action \begin{equation} [\hat{W}_k\psi](x) = \delta(x) \lim_{x\to 0^+} f\left(-i\frac{\partial}{\partial x}\right) \psi(x).\label{momentumdependentW} \end{equation} With the above definition, $\hat{W}_k$ is not Hermitian. This is a general property of momentum-dependent pseudopotentials, shared by the famous partial-wave pseudopotentials of Huang and Yang \cite{HuangYang}. \section{Equivalence between hard-sphere and contact interactions} From the above considerations, the proof of the equivalence between the HS states and the states corresponding to momentum-dependent contact interactions for the two-body case is immediate: {\it Theorem}. Let $H$ be defined by Eq. (\ref{HamTwoBody}), with \begin{equation} V(x)=\delta(x) g(\hat{k}),\label{potential1} \end{equation} where $g(\hat{k})=-(\hbar^2 \hat{k}/\mu) \cot(a\hat{k})$, as in Eq. (\ref{momentumdependentg}), and the limit $x\to 0^+$ as in Eq. (\ref{momentumdependentW}) is assumed. Let $\psi_k:\mathbb{R}\to \mathbb{C}$ be the bosonic scattering wave functions of $H$, at energies $\hbar^2k^2/2\mu$, $k$ real. Then, the restrictions $\phi_k$ of $\psi_k$ to $\mathcal{D}=\mathbb{R}-(-a,a)$, $a>0$, are the bosonic scattering wave functions at the same energies for the problem of two hard spheres of diameter $a$. 
Remarks: (i) $a>0$ is necessary for the mapping, although Hamiltonian (\ref{HamTwoBody}) with potential (\ref{potential1}) is well-defined for $a<0$, so we regard $a$ as the scattering length of the model, which we call extended hard-sphere (eHS) model; (ii) it is fundamental in the above theorem that the energies are positive, since there exists an unphysical, infinitely bound state which becomes the identically zero function in the restricted domain of the HS wave-functions. \section{Many-body problem} Fortunately, the many-boson problem with contact interactions is exactly solvable by means of the Bethe {\it ansatz}. This leaves us with the equivalence of the HS and the extended model, stated as follows. {\it Theorem}. Let $H$ be defined by Eq. (\ref{Ham1}), with \begin{equation} V(x_i-x_j) = \delta(x_i-x_j) g(\hat{k}_{ij}),\label{potential2} \end{equation} where $k_{ij}=(k_i-k_j)/2$, $g(\hat{k})=-(\hbar^2 \hat{k}/\mu) \cot(a\hat{k})$ with $\mu=m/2$, and the limit $x_i-x_j\to 0^+$ as in Eq. (\ref{momentumdependentW}) is assumed. Let $\psi_{k_1,k_2,\ldots,k_N}:\mathbb{R}^N \to \mathbb{C}$ be a bosonic scattering wave function of $H$ with energy $E=\sum_{i=1}^N \hbar^2 k_i^2 / 2m$ and $k_i$ real, $i=1,\ldots,N$. Then, the restrictions $\phi_{k_1,k_2,\ldots,k_N}$ of $\psi_{k_1,k_2,\ldots,k_N}$ to the domain $\mathcal{D}=\mathbb{R}^N-\cup_{i<j=1}^N \mathcal{I}^{a}_{i,j}$, with \begin{equation} \mathcal{I}^{a}_{i,j}=\{(x_1,x_2,\ldots,x_N) \in \mathbb{R}^N | |x_i-x_j|<a \}, \end{equation} are the bosonic scattering states for the $N$-boson problem with HS interactions of diameter $a>0$ at the same energies. The same conclusion holds valid in a box of length $L>Na$. {\it Proof}. Showing that Bethe {\it ansatz} (BA) wave functions constitute the bosonic scattering states of Hamiltonian (\ref{Ham1}) with interactions (\ref{potential2}) is trivial: the momentum-dependent interactions only multiply each term of the BA by a constant (which depends on the relative momenta, obviously). 
Since the potentials have zero range, scattering occurs without diffraction \cite{BeautifulModels} and therefore the model is exactly solvable via BA. For the finite box case, the same holds evidently true. Now, because the two-body scattering phase shifts for the eHS model are identical to the HS phase shifts, the BA equations are identical, too. In the HS case, the saturation density, for which the ground state energy at finite densities (in a finite box) diverges, is given by $N/L = 1/a$. At that point the wave functions simply do not exist. Now assume that $N/L<1/a$ and take the restriction $\phi_{k_1,\ldots,k_N}$ of a BA wave function $\psi_{k_1,\ldots,k_N}$ to $\mathcal{D}$. Since the BA equations for the two models are identical, $\phi_{k_1,\ldots,k_N}$ are the scattering wave functions for the HS problem. QED. \section{Momentum space} We now Fourier-transform general momentum-dependent contact interactions, including the particular eHS interactions. Our starting point is the interaction in terms of the bosonic field operators $\hat{\psi}$, \begin{equation} W_k=\frac{1}{2} \int \hat{\psi}^{\dagger}(x) \hat{\psi}^{\dagger}(x') W_k(x-x') \hat{\psi}(x) \hat{\psi}(x') dx' dx.\label{potentialspace} \end{equation} The field operators are expanded in the plane-wave basis as $\hat{\psi}(x) = \frac{1}{\sqrt{L}} \sum_{p} a_p e^{i p x}$, with $L$ the length of the system, and $a_p$ the bosonic annihilation operators in momentum space. Inserting this expansion into Eq. (\ref{potentialspace}), we obtain the momentum representation of $W_k$, \begin{equation} W_k = \frac{1}{2L} \sum_{p_1,p_2,q} g(p_{1,2}) a_{p_1+q}^{\dagger}a_{p_2-q}^{\dagger}a_{p_1}a_{p_2},\label{fourier} \end{equation} where $p_{1,2}\equiv (p_1-p_2)/2$. In particular, Eq. (\ref{fourier}) applied to the extended HS interactions provides, for the first time, a simple momentum representation for a very singular potential. 
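As a quick numerical consistency check of the above (the code, units $\hbar=m=1$, and function names below are our own illustrative choices, not part of the original derivation), one can verify that (i) inserting the coupling of Eq. (\ref{momentumdependentg}) into the LL phase-shift formula (\ref{phaseshiftsLL}) reproduces the HS phase shift (\ref{phaseshiftsHS}), and (ii) since the HS phase shift $\theta=-ka$ acts as an excluded volume, the BA ground-state momenta are free-fermion momenta in a reduced box of length $L-Na$, reproducing the closed-form HS ground-state energy of \cite{Girardeau}:

```python
import math

HBAR, M = 1.0, 1.0   # units hbar = m = 1
MU = M / 2           # two-body reduced mass

def g_ehs(k, a):
    """Momentum-dependent eHS coupling g(k) = -(hbar^2 k / mu) cot(ka)."""
    return -(HBAR**2 * k / MU) / math.tan(k * a)

# (i) tan(theta_LL) evaluated with g -> g(k) equals the HS result -tan(ka)
a = 0.7
for k in (0.1, 0.5, 1.3, 2.0):
    tan_theta = HBAR**2 * k / (MU * g_ehs(k, a))
    assert abs(tan_theta + math.tan(k * a)) < 1e-12

# (ii) HS Bethe ansatz: theta = -ka is an excluded volume, so the ground
# state consists of free-fermion momenta in a box of length L - N*a
def e_hs_bethe(N, L, a):
    L_eff = L - N * a
    ks = [2 * math.pi * (j - (N - 1) / 2) / L_eff for j in range(N)]
    return sum(HBAR**2 * k**2 / (2 * M) for k in ks)

def e_hs_closed(N, L, a):
    """Girardeau's closed form E = pi^2 hbar^2 rho^2 (N^2 - 1) / (6 m N (1 - rho a)^2)."""
    rho = N / L
    return math.pi**2 * HBAR**2 * rho**2 * (N**2 - 1) / (6 * M * N * (1 - rho * a)**2)

assert abs(e_hs_bethe(7, 20.0, 0.4) - e_hs_closed(7, 20.0, 0.4)) < 1e-9
```

Part (ii) also makes the excluded-volume structure of the closed-form energy used later in the text transparent: the hard spheres simply remove a volume $Na$ from the box.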
\section{Strongly-interacting delta Bose gas as a perturbation from the hard-sphere model} A major inconvenience of using singular interactions such as the HS potential is that it is not possible to perform perturbation expansions in the small parameter $\rho a$, with $\rho=N/L$ the particle density. Remarkably, the ground-state energy of the strongly repulsive ($-\rho a \ll 1$) LL gas coincides asymptotically with the ground state energy of the HS model, albeit with $a<0$ \cite{LiebLiniger}, that is, the eHS model with $a<0$. With the novel momentum-dependent interactions developed in this work, we are able to perform perturbation theory for the strongly attractive or repulsive LL gas, starting from the exact ground state for the eHS model, for both $a>0$ and $a<0$. The success of such a perturbative treatment, which is seen {\it a posteriori}, is {\it a priori} expected, since the maximal difference of two-body T-matrices for the eHS and LL models at density $\rho$ behaves as $(T_{\mathrm{eHS}}-T_{\mathrm{LL}})/\rho = O[(\rho a)^3]$, while the maximal difference between fermionized (Tonks-Girardeau) and LL T-matrices behaves as $(T_{\mathrm{TG}}-T_{\mathrm{LL}})/\rho = O(\rho a)$, which is two orders worse in the small parameter $\rho a$. These estimates show that, effectively, first-order perturbation theory from the eHS model corresponds to a third-order expansion from the fermionized Bose gas. The LL Hamiltonian $H_{\text{LL}}$ can be written in terms of the eHS Hamiltonian $H_{\text{eHS}}$ (Eq. (\ref{Ham1}) with momentum-dependent interactions (\ref{momentumdependentg})) as \begin{equation} H_{\text{LL}} = H_{\text{eHS}}+\sum_{i<j=1}^N\delta(x_i-x_j) [g_{\text{LL}}-g(\hat{k}_{ij})], \end{equation} where $g_{\text{LL}}$ is the LL interaction strength and $g(\hat{k}_{ij})$ is given by Eq. (\ref{momentumdependentg}), with the limit $x_i-x_j\to 0^+$ as in Eq. (\ref{momentumdependentW}) implicitly assumed. 
Making use of the bosonic symmetry of the particles, we can show that in any state of the eHS Hamiltonian, the expectation value $E^{(1)}$ of the perturbation is given by $E^{(1)}=\langle\delta(x_1-x_2)\rangle \sum_{i<j=1}^N\tilde{g}(k_{ij})$, with $\{k_{ij}\}_{ij}$ the set of BA relative momenta of the particular eHS (or HS) state, and $\tilde{g}(k_{ij})=g_{\text{LL}}-g(k_{ij})$. In the thermodynamic limit (TL), $E^{(1)}$ is well-defined for $\rho a <1/2$, which sets an upper bound on the radius of convergence of the perturbation expansion. We now apply the Hellmann-Feynman theorem, $dE_{\text{HS}}/da = \sum_{i<j}\langle \delta(x_i-x_j)\rangle dg(\hat{k}_{ij})/da$, where $E_{\text{HS}}$ is the ground state energy of the HS Bose gas, given by \cite{Girardeau} \begin{equation} E_{\text{HS}}=\frac{\pi^2\hbar^2\rho^2}{6m(1-\rho a)^2}\frac{N^2-1}{N}.\label{energyHS} \end{equation} For the first-order energy correction we obtain $E^{(1)}= \mathcal{C}(\rho,a) dE_{\text{HS}}/da$, with \begin{equation} \mathcal{C}(\rho,a) = \frac{\sum_{i<j=1}^{N} \tilde{g}(k_{ij})}{\sum_{i<j=1}^N \frac{dg}{d a}(k_{ij})}.\label{Ca} \end{equation} In the TL, the density of states (in the BA \cite{BeautifulModels}) for the HS Bose gas is a constant times a step function, and therefore we can write \begin{equation} \mathcal{C}(\rho,a) = \frac{\int_{-q}^{q}dk_1 \int_{-q}^{q} dk_2 \tilde{g}(k_{12})}{\int_{-q}^{q} dk_1 \int_{-q}^{q} dk_2 \frac{dg}{d a}(k_{12})}\approx -\frac{a(qa)^2}{18+(qa)^2},\label{Caint} \end{equation} where $q=\pi \rho /(1-a\rho)$, and where the approximate equality is valid for $q|a|\ll 1$. \begin{figure} \includegraphics[width=0.44\textwidth]{./figure-hard-spheres-fig} \caption{Main figure: energy per particle as a function of $\rho a$ for the sTG gas in 1st order perturbation theory (black solid line), compared to the HS result (dashed red line) and Monte Carlo results of ref. \cite{Astra2} (blue dots). 
Inset: inverse compressibility for the sTG gas to 1st order (black solid line) and for the HS gas (red dashed line).} \label{fig} \end{figure} The perturbative correction for the repulsive LL gas ($a<0$) yields a minor improvement with respect to the eHS asymptotic value, so we concentrate on the attractive case, i.e. the sTG gas. In Fig. \ref{fig} we show the energy per particle in the TL to first order in perturbation theory, $E/N\approx (E_{\text{HS}} + E^{(1)})/N$, as a function of the gas parameter $\rho a$, and compare our results with existing Variational Monte Carlo (VMC) data from ref. \cite{Astra2} and with the HS result, Eq. (\ref{energyHS}). Our results are in excellent agreement with the VMC calculations up to $\rho a \approx 0.25$. Beyond that point and until $\rho a \approx 0.3$ our results deviate from VMC, but the trend is still correct. For $\rho a > 0.3$ our calculation is no longer sufficient to describe the sTG gas, as it overestimates the energy in this region. In Fig. \ref{fig} we also show the inverse compressibility $mc^2=\rho \partial_{\rho} \mu$, with $\mu$ the chemical potential, as a function of $\rho a$ to first order in perturbation theory. Our results reproduce the overall features calculated in \cite{Astra2} and are in good agreement until $\rho a \approx 0.2$, beyond which our calculation largely overestimates the fitted VMC results. \section{Tan's contact for the sTG gas}\label{sectionTan} A quantity which has attracted much recent theoretical and experimental interest is the so-called contact \cite{Tan1}, denoted by $\mathcal{I}$. 
This is defined for bosonic and fermionic systems with zero-range interactions in any dimension \cite{Olsha2,Tan1,Castin,Combescot,VZM,Zwerger} as the coefficient of the asymptotic part of the momentum distribution $n_{\sigma}(\mathbf{k})$, $\mathcal{I}=\Omega\lim_{|\mathbf{k}|\to \infty} k^4 n_{\sigma}(\mathbf{k})$, where $\Omega=L^D$ is the volume, $D$ is the dimension and $\sigma$ is the spin component (omitted for spinless bosons). In the 1D case, relevant here, it is related to short-distance correlations \cite{Olsha2,Gora}. Relations between different properties of the system and the contact are generally known as Tan relations. We focus here on Tan relations for the LL gas with or without a trap, which we then apply to the sTG gas. Tan relations can be proved in parallel to the higher-dimensional cases \cite{Tan1,Tan2,Tan3,VZM} by using the 1D version of the so-called $\eta$-selector \cite{Tan1,ValienteTan,VZM,Tanarxiv}. This reads \begin{equation} \eta(k)=1+\frac{\pi \hbar}{mg_{\mathrm{LL}}}\delta(1/|k|). \end{equation} The energy of the system is given by \begin{equation} E=\frac{\hbar^2}{2m}\sum_k \eta(k)k^2 n_k + \langle \mathcal{W} \rangle, \end{equation} with $\mathcal{W}\equiv \sum_{i=1}^N W(x_i)$ the total single-particle trapping potential. The adiabatic energy theorem reads \begin{equation} \frac{dE}{da}=\frac{\hbar^2 \mathcal{I}}{m}. \end{equation} The two relations above were already known for the homogeneous case \cite{LiebLiniger,Olsha2}. The generalized virial theorem, assuming $W(x)\propto x^{\beta}$, is given by \begin{equation} E=\frac{\beta + 2}{2} \langle \mathcal{W} \rangle -\frac{\hbar^2\mathcal{I}}{2m} a.\label{virial} \end{equation} Last, the pressure relation, which is only valid in the homogeneous case, is given by \begin{equation} PL=2E+\frac{\hbar^2\mathcal{I}a}{2m}. \end{equation} In Fig. 
\ref{fig-contact}, we plot the contact per particle $\mathcal{I}/N$ for the homogeneous sTG gas obtained via perturbation theory from the eHS gas in the thermodynamic limit. As expected, deviations of the perturbative contact from the asymptotic HS result are more pronounced than for the energy as the gas parameter grows. The contact for a given quantum state of a system and, in particular, for a stationary state of the Schr\"odinger equation, is a positive quantity. A violation of this property implies that the state under consideration is not a physical state of the system. As observed in Fig. \ref{fig-contact}, the contact exhibits a maximum at $\rho a\approx 0.25$, where perturbation theory is still approximately correct, showing that it will eventually become negative at a given critical value of the gas parameter $(\rho a)_c$ where a super-Tonks-Girardeau gas cannot exist. The VMC data of ref. \cite{Astra2} show that this is indeed the case, and from their fit one can estimate $(\rho a)_c\approx 0.4-0.45$. For completeness, we note that, although our calculation is not quantitatively correct for $\rho a \approx 0.25-0.3$, the perturbative contact becomes negative at $(\rho a)_c \approx 0.38$. \begin{figure} \includegraphics[width=0.44\textwidth]{./CONTACT} \caption{Main figure: contact per particle as a function of $\rho a$ for sTG gas in 1st order perturbation theory (black solid line), compared to HS result (dashed red line). Inset: contact for harmonically trapped sTG to 1st order in the LDA from virial theorem, Eq. (\ref{virial}), (black solid line) compared to contact for trapped HS gas (red dashed line).} \label{fig-contact} \end{figure} In a realistic experiment, the sTG gas is created under harmonic confinement \cite{Haller}. A qualitative picture of the trapped system can be inferred from the homogeneous Bose gas via the local density approximation (LDA) \cite{Stringari}. 
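The homogeneous perturbative contact discussed above can be obtained by applying the adiabatic theorem, $\mathcal{I}=(m/\hbar^2)\,dE/da$, to $E\approx E_{\text{HS}}+E^{(1)}$, with $\mathcal{C}(\rho,a)$ evaluated from the exact integral form in Eq. (\ref{Caint}). The sketch below is our own rough implementation (units $\hbar=m=1$, $\rho=1$; grid sizes, step sizes and tolerances are illustrative, not the ones used for the figures): it checks the small-$q|a|$ limit of $\mathcal{C}$ and the sign change of the contact near $\rho a\approx 0.38$.

```python
import numpy as np

RHO = 1.0  # density; units hbar = m = 1, so mu = 1/2 and g_LL = -2/a

def C_exact(a, n=600):
    """C(rho, a) from the integral form of Eq. (13), by midpoint quadrature."""
    q = np.pi * RHO / (1 - a * RHO)
    k = -q + (np.arange(n) + 0.5) * (2 * q / n)
    k1, k2 = np.meshgrid(k, k)
    k12 = (k1 - k2) / 2
    k12 = np.where(np.abs(k12) < 1e-9, 1e-9, k12)     # regularize k12 = 0
    gtilde = -2.0 / a + 2.0 * k12 / np.tan(k12 * a)   # g_LL - g(k12)
    dgda = 2.0 * k12**2 / np.sin(k12 * a)**2          # dg/da at k12
    return gtilde.mean() / dgda.mean()   # the common measure cancels in the ratio

def energy_per_particle(a):
    """E/N = E_HS/N + C * d(E_HS/N)/da in the thermodynamic limit."""
    e_hs = np.pi**2 * RHO**2 / (6 * (1 - RHO * a)**2)
    de_hs = np.pi**2 * RHO**3 / (3 * (1 - RHO * a)**3)
    return e_hs + C_exact(a) * de_hs

def contact_per_particle(a, h=1e-3):
    """Adiabatic theorem: I/N = (m/hbar^2) d(E/N)/da, by central differences."""
    return (energy_per_particle(a + h) - energy_per_particle(a - h)) / (2 * h)

# small-q|a| limit: C -> -a (qa)^2 / (18 + (qa)^2)
a = 0.02
qa = np.pi * RHO * a / (1 - a * RHO)
assert abs(C_exact(a) / (-a * qa**2 / (18 + qa**2)) - 1) < 0.02

# the perturbative contact is positive at small gas parameter and changes
# sign between rho*a = 0.30 and 0.45, consistent with (rho a)_c ~ 0.38
assert contact_per_particle(0.10) > 0
assert contact_per_particle(0.30) > 0
assert contact_per_particle(0.45) < 0
```

The sign change of this finite-difference contact is the same instability signature read off from the figure.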
Within the LDA, we calculate the contact $\mathcal{I}$ for a harmonically trapped sTG gas ($W(x)=m\omega^2 x^2/2$) by making use of the virial theorem, Eq. (\ref{virial}), a quantity accessible with current experimental techniques \cite{Jin}. We plot it in Fig. \ref{fig-contact} as a function of $Na^2/a_{\mathrm{ho}}^2$, where $a_{\mathrm{ho}}=\sqrt{\hbar/m\omega}$. The appearance of a maximum for the contact and its subsequent depletion -- related to the gas-phase instability, as noted above for the homogeneous case -- would be clear experimental signatures of the sTG gas. \section{Concluding remarks} We have shown that the Bose gas with hard-sphere interactions is equivalent to a many-boson system with momentum-dependent contact interparticle potentials. The resulting model is Fourier-transformable, and constitutes, for the first time, a simple momentum representation for the singular one-dimensional hard-sphere potential. As an important application, we have used our equivalent, soft-core model as a starting point to obtain the properties of the attractive and repulsive delta Bose gases in perturbation theory; the results are in good agreement with the large-scale numerical simulations of ref. \cite{Astra2} within the regime of applicability of the perturbative expansion. Universal Tan relations for the Lieb-Liniger gas are also derived and applied to estimate the contact for the super Tonks-Girardeau gas within our perturbation-theoretic approach. The momentum-dependent zero-range interactions in this work have been carefully defined and are general. Therefore, they are applicable to other one-dimensional systems: we can use them to construct integrable approximations to non-integrable systems from their two-body phase shifts, and treat the difference between the original and model interactions as a weak perturbation. \acknowledgments I am very grateful to Marvin D. Girardeau for many useful discussions during the final stages of his work in ref. 
\cite{AstraGirardeau}, and thank the authors of \cite{Astra2} for providing their Monte Carlo data for comparison. Useful correspondence with Murray T. Batchelor and Lukas F. Buchmann is gratefully acknowledged. The author was supported by a Villum Kann Rasmussen block scholarship.
\section{Introduction and main results} Cluster algebras were introduced by Fomin and Zelevinsky~\cite{FZ} in an effort to describe dual canonical bases in the universal enveloping algebras of Borel subalgebras of simple complex Lie algebras. A cluster algebra possesses a distinguished set of generators, \emph{cluster variables}, organized into groups of the same cardinality, called \emph{clusters}, which form a combinatorial structure described by an \emph{exchange graph}, where clusters correspond to vertices of the exchange graph. The generators of neighboring clusters are algebraically dependent, and the corresponding relations are encoded by an \emph{exchange matrix} or, equivalently, by an \emph{exchange quiver}. In the famous paper~\cite{FZ2} Fomin and Zelevinsky obtained a Cartan-Killing type classification of all cluster algebras of finite type, i.e. cluster algebras having only finitely many distinct cluster variables. A wider class of cluster algebras is formed by cluster algebras {\em of finite mutation type}, which have finitely many exchange matrices (but are allowed to have infinitely many cluster variables). These algebras have found various applications, including ones in quantum field theories (see e.g.~\cite{CV,CV2}). Skew-symmetric cluster algebras (i.e., those with skew-symmetric exchange matrices) of finite mutation type were classified in~\cite{FeSTu}: it was shown there that such an algebra either has rank at most two, or corresponds to a triangulated surface, or belongs to one of finitely many exceptional mutation classes. The approach in~\cite{FeSTu} was based on a computer-assisted analysis of the combinatorial structure of the exchange graph of mutation-finite cluster algebras. This paper is written in an effort to develop a more conceptual characterization of the mutation-finiteness phenomenon. 
We expect the property of mutation finiteness to be related to the existence of a positive semi-definite symmetric form, in analogy with the classification of finite type cluster algebras \cite{FZ2} and with the classification of reflection groups of finite and affine types~\cite{Co}. In the present paper, we follow the path started in \cite{BGZ,Se1,Se,S2}, characterizing mutation-finite cluster algebras of rank at least 3 using associated quadratic forms called {\em quasi-Cartan companions}. The notion of a quasi-Cartan companion of a skew-symmetric matrix $B$ (or, equivalently, of the corresponding quiver) was introduced in~\cite{BGZ} as a symmetric matrix whose off-diagonal entries have the same moduli as those of $B$ (we recall the precise definitions in Section~\ref{semi-def}). It was proved in~\cite{BGZ} that a matrix $B$ defines a cluster algebra of finite type if and only if it has a positive definite quasi-Cartan companion and all the cycles in the quiver associated to $B$ are cyclically oriented. This result was extended to the case of algebras of affine type in~\cite{Se}, where it was proved that a matrix defines a cluster algebra of affine type if and only if it has a positive semi-definite quasi-Cartan companion of corank one satisfying some additional {\em admissibility conditions} (see Section~\ref{adm-sec}). Our first construction provides the following result (for brevity, we formulate everything in terms of quivers; see Section~\ref{background} for details). \setcounter{section}{3} \setcounter{theorem}{13} \begin{theorem} Let $Q$ be a connected quiver of finite mutation type with at least $3$ vertices. Then $Q$ has a positive semi-definite quasi-Cartan companion. \end{theorem} As a corollary, we obtain the following characterization of finite mutation classes. 
\begin{cor} A connected quiver $Q$ with at least three vertices is mutation-finite if and only if every quiver in the mutation class of $Q$ has a positive semi-definite quasi-Cartan companion. \end{cor} A quasi-Cartan companion $A$ of a quiver $Q$ can be mutated along with the quiver (we recall the definition given in~\cite{BGZ} and our geometric interpretation of it in Section~\ref{semi-def}). Viewing $A$ as the matrix of a quadratic form on a real vector space, and thus as the Gram matrix of a certain basis (called a {\em companion basis}~\cite{P1,P2}), the mutation corresponds to a change of basis (we call this procedure a {\em mutation of a basis}). However, the mutated matrix $\mu_k(A)$ may not be a quasi-Cartan companion of the mutated quiver $\mu_k(Q)$. A notion of a $k$-{\em compatible} companion was introduced in~\cite{BGZ} to guarantee that $\mu_k(A)$ is again a quasi-Cartan companion of $\mu_k(Q)$. We introduce a notion of a {\em fully compatible} companion which is $k$-compatible for every vertex $k$, and thus its mutation in every direction leads to a quasi-Cartan companion of the mutated quiver. We then prove the following result. \setcounter{theorem}{16} \begin{theorem} Let $Q$ be a mutation-finite quiver with at least $3$ vertices. If $Q$ is not the quiver shown in Fig.~\ref{killhope}, then $Q$ has a fully compatible positive semi-definite quasi-Cartan companion. \end{theorem} Although every mutation of a fully compatible positive semi-definite quasi-Cartan companion provided by Theorem~\ref{thm fully} is again a positive semi-definite quasi-Cartan companion, it may not be fully compatible, and thus further mutation may not lead to a quasi-Cartan companion. So, the main question we want to explore is when we are able to mutate a quasi-Cartan companion throughout the whole mutation class of a quiver (we call such a quasi-Cartan companion a {\em symmetric twin} of the quiver). 
This is the case, for example, for quivers without oriented cycles: it was proved in~\cite{ST} that a quasi-Cartan companion of such a quiver with all off-diagonal entries non-positive can be mutated along any mutation sequence to produce a quasi-Cartan companion again. Our main result is the following. For every unpunctured surface we pick a specific representative from its mutation class of quivers, and then construct a companion basis ${\bf u}$ belonging to an extended affine root system of type $A_{n-n_0}^{[n_0]}$ (we recall the definitions in Section~\ref{eawg}), which agrees with the results of~\cite{CdZ} (where $n$ is the rank of the quiver, and $n_0$ is the dimension of the kernel of the corresponding quadratic form, which can be expressed in terms of the Euler characteristic of $S$). Then the next theorem states that this collection of roots gives rise to a positive semi-definite symmetric twin for every quiver in the mutation class. \setcounter{section}{5} \setcounter{theorem}{4} \begin{theorem} Given a quiver $Q$ constructed from a triangulation of an unpunctured surface, there exists a companion basis ${\bf u}$ for $Q$ such that for any mutation sequence $\mu$ the Gram matrix of vectors $\mu({\bf u})$ is a positive semi-definite quasi-Cartan companion for $\mu(Q)$. \end{theorem} Moreover, we claim in Corollary~\ref{all-un} that the choice of such a basis ${\bf u}$ is essentially unique up to a linear isometry and sign changes of vectors in ${\bf u}$. We approach Theorem~\ref{c-ind} in two different ways. The first one is through the groups constructed in~\cite{FeTu} from quivers originating from unpunctured surfaces. With any such quiver we can then associate two groups, whose generators are involutions indexed by vertices of the quiver: a group $G$ constructed in~\cite{FeTu} (which is a quotient of a certain Coxeter group), and an extended affine Weyl group $W$ generated by reflections. These groups turn out to be related in the following way. 
\setcounter{section}{5} \setcounter{theorem}{0} \begin{theorem} There exists a surjective homomorphism $\varphi: G\to W$ taking generating involutions of $G$ to generating reflections of $W$. \end{theorem} In finite and affine types the groups $G$ and $W$ are actually isomorphic, which gives rise to the following conjecture. \begin{conjecture} The map $\varphi$ in Theorem~\ref{homo} is an isomorphism. \end{conjecture} We note that both groups $G$ and $W$ can also be defined for all $9$ exceptional mutation-finite classes of types $E$ (see Fig.~\ref{exceptional-fig}), and the conjecture holds in these cases (see Remark~\ref{iso-el}). Assuming Conjecture~\ref{iso}, Theorem~\ref{c-ind} holds easily (see Prop.~\ref{ind}). As we do not have a proof of Conjecture~\ref{iso}, we prove Theorem~\ref{c-ind} in a different way by using the notion of an {\em admissible} quasi-Cartan companion introduced in~\cite{Se}. The admissibility condition is stronger than full compatibility, so Theorem~\ref{c-ind} can be deduced from the following result which we prove in Section~\ref{adm-sec}. \setcounter{section}{6} \setcounter{theorem}{1} \begin{prop}\footnote{While preparing the paper we were informed by Ahmet Seven that he has obtained an independent proof of Proposition~\ref{adm}.} Given a quiver $Q$ constructed from a triangulation of an unpunctured surface, there exists an admissible quasi-Cartan companion $A$ of $Q$ such that for any mutation sequence $\mu$ the matrix $\mu(A)$ is an admissible quasi-Cartan companion of $\mu(Q)$. { In particular, $A$ is a symmetric twin of $Q$.} \end{prop} \bigskip The paper is organized as follows. Section~\ref{background} contains the essential facts about mutations of quivers, and about finite mutation classes. In Section~\ref{semi-def}, we first recall the basics on quasi-Cartan companions, and then construct positive semi-definite quasi-Cartan companions for all mutation-finite quivers. 
In Section~\ref{group}, we discuss the group constructed from a quiver in~\cite{FeTu}: after recalling the presentation, we show that the group depends on three numerical parameters only, namely, on the topological type of the surface (genus and the number of boundary components), and the number of marked points. In other words, the group turns out to be independent of the distribution of marked points amongst the boundary components (note that if Conjecture~\ref{iso} is true, the group depends on two numerical parameters only). In Section~\ref{eawg}, we associate with every unpunctured surface an extended affine Weyl group of type $A$, and then discuss the relations between this group and the one constructed above. Finally, Section~\ref{adm-sec} is devoted to the proof of Proposition~\ref{adm} and Theorem~\ref{c-ind} by using admissible quasi-Cartan companions. We also discuss the geometric interpretation of the admissibility condition. \subsection*{Acknowledgements} The authors are grateful to the anonymous referee for valuable comments. The authors would like to express their gratitude to the Research Institute for Mathematical Sciences, Kyoto, and the organizers of the program on Cluster Algebras at RIMS in the Spring of 2019. M.S. is also grateful to the Research in Pairs Program at the Mathematisches Forschungsinstitut Oberwolfach (Summer 2019) and the Mathematical Sciences Research Institute, Berkeley (Fall 2019) for their hospitality and the outstanding working conditions they provided. \setcounter{section}{1} \section{Quivers of finite mutation type} \label{background} In this section, we recall the essential notions on mutations of quivers of finite type, affine type, and finite mutation type. For details see~\cite{FST}. \subsection{Quivers and mutations} \label{dm} An $n\times n$ skew-symmetric integer matrix $B$ can be encoded by a {\em quiver} $Q$ which is a (multi)-graph with oriented edges (called {\it arrows}). Vertices of $Q$ are labeled by $[1,\dots,n]$.
If $b_{ij}>0$, we join vertices $i$ and $j$ by $b_{ij}$ arrows directed from $i$ to $j$. Throughout the paper we assume that all diagrams are connected (equivalently, the matrix $B$ is assumed to be indecomposable). For every vertex $k$ of a quiver $Q$ one can define an involutive operation $\mu_k$ called {\it mutation of $Q$ in direction $k$}. This operation produces a new quiver denoted by $\mu_k(Q)$ which can be obtained from $Q$ in the following way (see~\cite{FZ}): \begin{itemize} \item orientations of all arrows incident to the vertex $k$ are reversed; \item for every pair of vertices $(i,j)$ such that $Q$ contains arrows directed from $i$ to $k$ and from $k$ to $j$, the number of arrows joining $i$ and $j$ changes as described in Figure~\ref{quivermut}. \end{itemize} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/mutdef.eps,width=0.3\linewidth} \put(-132,20){\small $a$} \put(-96,20){\small $b$} \put(-46,20){\small $a$} \put(-10,20){\small $b$} \put(-115,-7){\small $c$} \put(-30,-7){\small $d$} \put(-110,50){\small $k$} \put(-25,50){\small $k$} \put(-73,32){\small $\mu_k$}\\ $\pm{c}\pm{d}={ab}$ \caption{Mutations of quivers. The sign before ${c}$ (resp., ${d}$) is positive if the three vertices form an oriented cycle, and negative otherwise. Either $c$ or $d$ may vanish. If $ab=0$, then neither the value of $c$ nor the orientation of the corresponding arrow changes.} \label{quivermut} \end{center} \end{figure} Given a quiver $Q$, its {\it mutation class} is the set of all quivers obtained from the given one by all sequences of iterated mutations. All quivers from one mutation class are called {\it mutation-equivalent}. \subsection{Finite type} A quiver is of {\it finite type} if it is mutation-equivalent to an orientation of a simply-laced Dynkin diagram. So, a quiver of finite type is of one of the following mutation types: $A_n$, $D_n$, $E_6$, $E_7$ or $E_8$.
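For readers who wish to experiment, the matrix counterpart of the mutation rule recalled above is the standard formula $b'_{ij}=-b_{ij}$ if $i=k$ or $j=k$, and $b'_{ij}=b_{ij}+\tfrac{1}{2}\left(|b_{ik}|b_{kj}+b_{ik}|b_{kj}|\right)$ otherwise. A minimal computational sketch (the function name is ours):

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric integer matrix B in direction k."""
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # b'_ij = b_ij + (|b_ik| b_kj + b_ik |b_kj|) / 2 (always an integer)
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# An oriented 3-cycle 1 -> 2 -> 3 -> 1:
B = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
assert mutate(mutate(B, 0), 0) == B  # mutation is involutive
```

Mutating twice in the same direction returns the original matrix, reflecting the involutivity of $\mu_k$.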
It is shown in~\cite{FZ2} that the mutation classes of quivers of finite type are in one-to-one correspondence with skew-symmetric cluster algebras of finite type. In particular, this implies that any subquiver of a quiver of finite type is also of finite type. \subsection{Affine type} A quiver is of {\it affine type} if it is mutation-equivalent to an orientation of a simply-laced affine Dynkin diagram different from an oriented cycle. A quiver of affine type is of one of the following mutation types: $\widetilde A_{k,n-k}$, $0<k<n$ (see Remark~\ref{a}), $\widetilde D_n$, $\widetilde E_6$, $\widetilde E_7$ or $\widetilde E_8$. \begin{remark} \label{a} Let $\widetilde D$ be an affine Dynkin diagram different from $\widetilde A_n$. Then all orientations of $\widetilde D$ are mutation-equivalent. The orientations of $\widetilde A_{n-1}$ split into $[n/2]$ mutation classes $\widetilde A_{k,n-k}$ (where by $[x]$ we mean the integer part of $x$): each class contains a cyclic representative with exactly two changes of orientation, with $k$ consecutive arrows in one direction and $n-k$ in the other, $0<k<n$. \end{remark} We will heavily use the following statement. \begin{prop}[\cite{BMR,Z}] \label{subd of aff} Any subquiver of a quiver of affine type is either of finite or of affine type. \end{prop} \subsection{Finite mutation type } A quiver is called {\it mutation-finite} (or {\it of finite mutation type}) if its mutation class is finite. As shown in~\cite{FeSTu}, a quiver of finite mutation type either has only two vertices, or corresponds to a triangulated surface (see Section~\ref{triang-sec}), or belongs to one of finitely many exceptional mutation classes. \begin{theorem}[\cite{FeSTu}] \label{class} Let $\Gamma$ be a mutation-finite diagram with at least $3$ vertices.
Then either $\Gamma$ arises from a triangulated surface, or $\Gamma$ is mutation-equivalent to one of $11$ exceptional diagrams $E_6,E_7,E_8, \widetilde E_6,\widetilde E_7,\widetilde E_8,E_6^{(1,1)}\!,E_7^{(1,1)}\!,E_8^{(1,1)}\!,X_6,X_7$ shown in Fig.~\ref{exceptional-fig}. \end{theorem} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/otvet.eps,width=0.89\linewidth} \put(-420,305){$E_6$} \put(-420,245){$E_7$} \put(-420,185){$E_8$} \put(-420,110){$\widetilde E_6$} \put(-420,48){$\widetilde E_7$} \put(-420,-10){$\widetilde E_8$} \put(-210,305){$E_6^{(1,1)}$} \put(-210,245){$E_7^{(1,1)}$} \put(-210,185){$E_8^{(1,1)}$} \put(-160,110){$X_6$} \put(-160,10){$X_7$} \end{center} \caption{Exceptional finite mutation classes} \label{exceptional-fig} \end{figure} \subsection{Triangulated surfaces and block-decomposable quivers} \label{triang-sec} The correspondence between quivers of finite mutation type and triangulated surfaces is developed in~\cite{FST}. Here we briefly recall the basic definitions. By a {\it surface} we mean a genus $g$ orientable surface with $r$ boundary components and a finite set of marked points, with at least one marked point at each boundary component. A non-boundary marked point is called a {\it puncture}. An (ideal) {\it triangulation} of a surface is a triangulation with the vertices of triangles at the marked points. We allow self-folded triangles and follow~\cite{FST} considering triangulations as {\it tagged triangulations} (however, we are neither reproducing nor using all the details in this paper).
Given a triangulated surface, one constructs a quiver in the following way: \begin{itemize} \item vertices of the quiver correspond to the (non-boundary) edges of a triangulation; \item two vertices are connected by an arrow if they correspond to two sides of the same triangle (i.e., there is one simple arrow between the given two vertices for every such triangle); inside each triangle the arrows are oriented counter-clockwise (with respect to some orientation of the surface); \item two arrows with different directions connecting the same vertices cancel out; two arrows in the same direction result in a double arrow; \item for a self-folded triangle (with two sides identified), the two vertices of the quiver corresponding to the sides of this triangle are not joined by an arrow; a vertex corresponding to the ``inner'' side of the triangle is connected to other vertices in the same way as the vertex corresponding to the outer side of the triangle. \end{itemize} It is shown in~\cite{FST} that any surface can be cut into {\it elementary surfaces}; we list their quivers in Fig.~\ref{blocks-list}. We use {\it white} color for the vertices corresponding to the ``exterior'' edges of these elementary surfaces (such vertices are called {\em open}) and {\it black} for the vertices corresponding to ``interior'' edges. The quivers in Fig.~\ref{blocks-list} are called {\it blocks}. Depending on a block, we call it {\it a block of type} ${\rm{I}}$, ${\rm{II}}$ etc. As elementary surfaces are glued to each other to form a triangulated surface, the blocks are glued to form a {\it block-decomposition} of a bigger quiver.
A connected quiver $Q$ is called {\it block-decomposable} (or simply, {\it decomposable}) if it can be obtained from a collection of blocks by identifying white (i.e., open) vertices of different blocks along some partial matching (matching of vertices of the same block is not allowed), where two simple arrows with the same endpoints and opposite directions cancel out, and two arrows with the same endpoints and the same directions form a double arrow. A non-connected quiver $Q$ is called block-decomposable if every connected component of $Q$ is either decomposable or a single vertex. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/blocks.eps,width=0.99\linewidth} \put(-420,-30){I} \put(-345,-30){II} \put(-275,-30){IIIa} \put(-215,-30){IIIb} \put(-125,-30){IV} \put(-45,-30){V} \caption{Blocks used to obtain quivers from triangulations} \label{blocks-list} \end{center} \end{figure} Block-decomposable quivers are in one-to-one correspondence with adjacency matrices of arcs of ideal (tagged) triangulations of bordered two-dimensional surfaces with marked points (see~\cite[Section~13]{FST} for the detailed explanations). Mutations of block-decomposable quivers correspond to flips of (tagged) triangulations. In particular, this implies that the mutation class of any block-decomposable quiver is finite, and any subquiver of a block-decomposable quiver is block-decomposable too. Theorem~\ref{class} shows that block-decomposable quivers almost exhaust mutation-finite ones. We will use the surface presentations of block-decomposable quivers of finite and affine type, see Table~\ref{surface realizations}.
\begin{table}[!h] \begin{center} \caption{Surfaces corresponding to quivers of finite and affine type} \label{surface realizations} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Finite types}\\ \hline \raisebox{0mm}{$A_n$, $n\ge 1$} &\raisebox{-1mm}{\epsfig{file=./pic/a.eps,width=0.25\linewidth}} &\raisebox{0mm}{\small disk}\\ \hline \raisebox{5mm}{$D_n$, $n\ge 4$}&\epsfig{file=./pic/d.eps,width=0.25\linewidth} &\raisebox{4mm}{\small punctured disk}\\ \hline \multicolumn{3}{|c|}{Affine types}\\ \hline \raisebox{4mm}{$\widetilde A_{k,n-k}$, $n>k\ge 1$} & \raisebox{0mm}{\epsfig{file=./pic/ta.eps,width=0.25\linewidth}}&\raisebox{4mm}{\small annulus}\\ \hline \raisebox{5mm}{$\widetilde D_n$, $n\ge 4$}& \epsfig{file=./pic/td.eps,width=0.25\linewidth}&\raisebox{5mm}{\small twice punctured disk}\\ \hline \end{tabular} \end{center} \end{table} \begin{remark} A mutation class $\widetilde A_{k,n-k}$ (of affine type $\widetilde A_{n-1}$) corresponds to an annulus with $k$ marked points on one boundary component and $n-k$ on the other. \end{remark} \subsection{Subquivers of mutation-finite quivers} \label{subd} In this section, we list some technical facts we are going to use in the sequel. \subsubsection*{Oriented cycles in mutation-finite quivers} It is easy to see that there are two types of mutation-finite oriented chordless cycles: simply-laced cycles (they are of finite type $D_n$) and a cycle of length three with $(1,1,2)$ arrows (of type $\widetilde A_{2,1}$). Note that a quiver of type $D_n$ for $n\ge 4$ corresponds to a punctured disk, so these will not appear in quivers constructed from unpunctured surfaces. \subsubsection*{Non-oriented cycles in mutation-finite quivers} It is also a well known fact that all non-oriented cycles in mutation-finite quivers are simply-laced (and thus of type $\widetilde A_{k,n-k}$ for some $k$). We will also use the following statement proved by Seven in~\cite{Se1}. 
\begin{prop}[Proposition~2.1(iv),~\cite{Se1}] \label{non-or} Let $Q$ be a simply-laced mutation-finite quiver and let $C\subset Q$ be a non-oriented chordless cycle. Then for each vertex $v\in Q$ the number of arrows connecting $v$ with $C$ is even. \end{prop} \section{Quasi-Cartan companions} \label{semi-def} In~\cite{BGZ}, Barot, Geiss and Zelevinsky introduced the notion of a quasi-Cartan companion of a skew-symmetrizable matrix and defined its mutation. As we restrict ourselves to quivers, we reproduce below their definitions for skew-symmetric matrices. \subsection{Definitions and basic properties} \begin{definition}[Quasi-Cartan companion] Let $B$ be an $n\times n$ skew-symmetric matrix. An $n\times n$ symmetric matrix $A$ is a {\it quasi-Cartan companion} of $B$ if $|a_{ij}|=|b_{ij}|$ for all $i\ne j$ and $a_{ii}=2$. \end{definition} \begin{remark} A quasi-Cartan companion contains the same information as the skew-symmetric matrix $B$ together with the choice of signs assigned to each (unordered) pair of indices $(i,j)$, $1\le i,j \le n$ with non-zero $b_{ij}$ (sign of the entry $a_{ij}=a_{ji}$ in $A$). Pictorially, we will represent a quasi-Cartan companion by labelling the arrows of a quiver with the signs of the corresponding elements. We will also say that a quasi-Cartan companion of a skew-symmetric matrix is a quasi-Cartan companion of the corresponding quiver. \end{remark} \begin{definition} Given a quiver $Q$ and its quasi-Cartan companion $A$, consider a quadratic vector space $V$ defined by the quadratic form $A$. Let ${\bf v}= \{v_1,\dots,v_n\}$ be basis vectors in $V$ for which $A$ serves as the Gram matrix, i.e. $(v_i,v_j)=a_{ij}$. Generalizing the definition of Parsons~\cite{P1,P2}, we will call the set of vectors $\bf v$ a {\it companion basis of $Q$}. \end{definition} \begin{definition}[Mutation of quasi-Cartan companions] Let $A$ be a quasi-Cartan companion of $B$.
A mutation $\mu_k$ of $A$ is defined as $\mu_k(A)=A'$ where $$ a_{ij}'=\begin{cases} 2 & \text{if $i=j$; }\\ \mathrm{sgn }(b_{ik})a_{ik} & \text{if $j=k$; }\\ -\mathrm{sgn }(b_{kj})a_{kj} & \text{if $i=k$; }\\ a_{ij}-\mathrm{sgn }(a_{ik}a_{kj})[b_{ik}b_{kj}]_+ & \text{otherwise. }\\ \end{cases} $$ \end{definition} \begin{remark} \label{geometric realisation} There is a geometric interpretation of the mutation of quasi-Cartan companions, as follows. Let $\bf v$ be a companion basis for $Q$. Then it is straightforward to check that the elements $a_{ij}'$ of $\mu_k(A)=A'$ satisfy $a_{ij}'=\langle v_i',v_j'\rangle$, where $$ v_i'=\begin{cases} -v_i & \text{if $i=k$; }\\ v_i-\langle v_i,v_k\rangle v_k & \text{if $b_{i,k}>0$; }\\ v_i & \text{otherwise}.\\ \end{cases} $$ In other words, a mutation of a quasi-Cartan companion corresponds to the reflection of some of the vectors of the companion basis. In particular, $\mu_k(A)$ and $A$ define the same quadratic form (written in different bases). \end{remark} Note that the result of a mutation of a quasi-Cartan companion is not always a quasi-Cartan companion of the mutated matrix, see Fig.~\ref{ex-mut}. 
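The mutation formula and its geometric realisation can be compared directly on a small example. The following sketch (function names are ours) implements both descriptions and checks that they agree on the oriented $3$-cycle of Fig.~\ref{ex-mut}:

```python
def sgn(x):
    return (x > 0) - (x < 0)

def mutate_companion(A, B, k):
    """Mutation of a quasi-Cartan companion A of B in direction k, per the formula above."""
    n = len(A)
    Ap = [[2] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if j == k:
                Ap[i][j] = sgn(B[i][k]) * A[i][k]
            elif i == k:
                Ap[i][j] = -sgn(B[k][j]) * A[k][j]
            else:
                Ap[i][j] = A[i][j] - sgn(A[i][k] * A[k][j]) * max(B[i][k] * B[k][j], 0)
    return Ap

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def mutate_basis(vs, B, k):
    """Reflections realising the companion mutation, as in the remark above."""
    out = []
    for i, v in enumerate(vs):
        if i == k:
            out.append([-x for x in v])
        elif B[i][k] > 0:
            c = dot(v, vs[k])
            out.append([x - c * y for x, y in zip(v, vs[k])])
        else:
            out.append(list(v))
    return out

# Oriented 3-cycle with companion basis e1-e2, e3-e1, e2-e3 (all signs negative):
B = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
vs = [[1, -1, 0], [-1, 0, 1], [0, 1, -1]]
A = [[dot(u, v) for v in vs] for u in vs]
A1 = mutate_companion(A, B, 1)
vs1 = mutate_basis(vs, B, 1)
assert A1 == [[dot(u, v) for v in vs1] for u in vs1]  # the two descriptions agree
```

Here the entry $a'_{13}=-2$ of the mutated companion has no matching arrow in the mutated quiver (the arrows between the corresponding vertices cancel), illustrating the failure discussed above.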
\begin{figure}[!h] \begin{center} \epsfig{file=./pic/a_c.eps,width=0.4\linewidth} \put(-100,31){$\mu_2$} \put(-210,-12){$e_1-e_2$} \put(-140,-12){$e_2-e_3$} \put(-150,45){$e_3-e_1$} \put(-80,-12){$e_1-e_2$} \put(-10,-12){$e_2-e_1$} \put(-20,45){$e_1-e_3$} \put(30,20){\scriptsize $\mu_2(A)=\begin{pmatrix}2& 1 & -2 \\ 1& 2 & -1 \\ -2 & -1 & 2 \end{pmatrix} $} \put(-290,20){\scriptsize $A=\begin{pmatrix}2& -1 & -1 \\ -1& 2 & -1 \\ -1 & -1 & 2 \end{pmatrix} $} \caption{A mutation transforming a quasi-Cartan companion of the quiver to a matrix which is not a quasi-Cartan companion of the mutated quiver.} \label{ex-mut} \end{center} \end{figure} In~\cite{BGZ}, Barot, Geiss and Zelevinsky described a sufficient condition for a quasi-Cartan companion to ensure that the result of the mutation $\mu_k$ is a quasi-Cartan companion of the mutated matrix. This is provided by the notion of $k$-compatibility. \begin{definition}[$k$-compatibility] A quasi-Cartan companion $A$ is {\it $k$-compatible } if for every $i,j\ne k$ one has $$ \begin{cases} a_{ij}a_{jk}a_{ki}>0 & \text{if $(i,j,k)$ form an oriented cycle,}\\ a_{ij}a_{jk}a_{ki}\le 0 & \text{otherwise}.\\ \end{cases} $$ \end{definition} \begin{lemma}[\cite{BGZ}] \label{l k-comp} Let $A$ be a $k$-compatible quasi-Cartan companion for $B$. Then $\mu_k(A)$ is a $k$-compatible quasi-Cartan companion for $\mu_k(B)$. \end{lemma} \begin{definition}[full compatibility] A quasi-Cartan companion is {\it fully compatible} if it is $k$-compatible for every $k\in \{1,\dots, n \}$. \end{definition} \begin{remark} \label{sign} Given a fully compatible quasi-Cartan companion, we can change the sign of any vector in a companion basis to obtain a new fully compatible quasi-Cartan companion. The resulting matrix differs by the signs of off-diagonal entries in a given row and column. \end{remark} To construct an example of a fully compatible quasi-Cartan companion, one can take any acyclic quiver and label all its arrows with the negative sign. 
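The compatibility conditions above are easy to test mechanically. A sketch (naming is ours) that checks full compatibility over all triples of distinct indices, confirming in particular that the all-negative companion of an acyclic quiver is fully compatible while the all-negative companion of an oriented $3$-cycle is not:

```python
def sgn(x):
    return (x > 0) - (x < 0)

def is_fully_compatible(A, B):
    """Check k-compatibility of a quasi-Cartan companion A of B for every direction k."""
    n = len(A)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if len({i, j, k}) < 3:
                    continue
                p = A[i][j] * A[j][k] * A[k][i]
                # (i, j, k) form an oriented cycle iff b_ij, b_jk, b_ki share a sign
                oriented = sgn(B[i][j]) == sgn(B[j][k]) == sgn(B[k][i]) != 0
                if oriented:
                    if p <= 0:
                        return False
                elif p > 0:
                    return False
    return True

# An acyclic path 1 -> 2 -> 3 with all arrows labelled by the negative sign:
B_path = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
A_path = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
assert is_fully_compatible(A_path, B_path)
```

For an oriented $3$-cycle, full compatibility forces an even number of negative signs on its arrows, in line with the sign condition in the definition.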
\begin{theorem}[\cite{S2,ST}] \label{ST} Let $Q$ be an acyclic quiver and $A$ be its quasi-Cartan companion with $a_{ij}\le 0$ for all $i\ne j$. Then for every sequence $\mu$ of mutations the matrix $\mu(A)$ is a fully compatible quasi-Cartan companion of $\mu(Q)$. \end{theorem} \begin{remark} \label{fin,aff} In particular, Theorem~\ref{ST} can be applied in all finite and affine cases. If $B$ is of finite type, the quasi-Cartan companion described in Theorem~\ref{ST} is a positive definite quasi-Cartan matrix (cf. Remark~\ref{geometric realisation}). Similarly, if $B$ is of affine type, then the corresponding quasi-Cartan companion is positive semi-definite. \end{remark} \begin{remark} \label{ell} One can extend the above construction of a positive semi-definite quasi-Cartan companion to the case of the elliptic quivers $E_6^{(1,1)}$, $E_7^{(1,1)}$, $E_8^{(1,1)}$ in the following way: \begin{itemize} \item[-] Let $Q$ be a quiver of one of the types above, and consider a subquiver $Q'$ obtained by removing one of the ends of the double arrow. The quiver $Q'$ is of type $\widetilde E_6$, $\widetilde E_7$ or $\widetilde E_8$ respectively. Now consider a quasi-Cartan companion $A'$ of $Q'$ (with all arrows labelled by the negative sign). \item[-] Let $x$ be the removed node in $Q$, and let $x'$ be the other end of the double arrow. Let $v_i$ be the vector assigned to $x'$ in a companion basis giving rise to $A'$. Assigning to $x$ a copy of $v_i$ we obtain a positive semi-definite quasi-Cartan companion $A$ of $Q$. \end{itemize} One can check explicitly by computation that for each mutation sequence $\mu$ the matrix $\mu(A)$ is a fully compatible quasi-Cartan companion of $\mu(Q)$ (this also follows from Remark~\ref{iso-el} together with Proposition~\ref{adm}). \end{remark} \begin{remark} \label{basis} Vectors constructed in Remark~\ref{ell} are clearly linearly dependent.
However, we can slightly amend the construction by adding to $v_i$ a new basis vector lying in the kernel of the quadratic form. In this way we obtain a collection of linearly independent vectors which we may rightfully call a companion basis. We will follow this procedure throughout the paper. \end{remark} \subsection{Positive semi-definite companions and mutation finiteness} In this section, we prove the following theorem. \begin{theorem} \label{thm fin mut type} Let $Q$ be a connected quiver of finite mutation type with at least 3 vertices. Then $Q$ has a positive semi-definite quasi-Cartan companion. \end{theorem} \begin{proof} As follows from the classification of quivers of finite mutation type (see~\cite{FeSTu}), a quiver of finite mutation type is either of rank 2, or arises from a surface, or belongs to one of 11 exceptional mutation classes. For the exceptional quivers of finite and affine type the statement follows from Remark~\ref{fin,aff}. For quivers of the types $E_6^{(1,1)}$, $E_7^{(1,1)}$, $E_8^{(1,1)}$ a positive semi-definite quasi-Cartan companion is constructed in Remark~\ref{ell}. The mutation classes of the quivers $X_6$ and $X_7$ are very small (containing $6$ and $2$ quivers respectively), so for them one can check the statement directly. We are left to consider the case of quivers arising from triangulations of surfaces. To build (a companion basis for) a quasi-Cartan companion of a quiver $Q$ originating from a given triangulation, we will assign vectors $v_1,\dots, v_n$ to the arcs of the triangulation, and the quasi-Cartan companion $A$ will be constructed as the Gram matrix of these vectors, i.e. $a_{ij}=(v_i,v_j)$. Let $t$ be the number of triangles in the triangulation. Consider a Euclidean $t$-dimensional space with an orthonormal basis $e_1,\dots,e_{t}$. To construct the vectors $v_i$, we first assign the basis vectors $e_1,\dots,e_t$ to the triangles $T_1,\dots,T_t$ of the triangulation.
To an arc contained in the triangles $T_i$ and $T_j$ we assign a vector $e_i+e_j$ or $e_i-e_j$, as in Fig.~\ref{triang}. It is straightforward to see that the vectors constructed in this way provide a quasi-Cartan companion of the quiver $Q$. As they all lie in the Euclidean space, the quasi-Cartan companion is positive semi-definite. In view of Remark~\ref{basis}, we can also assume the vectors $v_1,\dots, v_n$ to be linearly independent. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/triang.eps,width=0.95\linewidth} \put(-373,150){\color{red} $i$} \put(-373,195){\color{red}$j$} \put(-365,180){\color{blue} $e_i+e_j$} \put(-221,150){\color{red} $i$} \put(-221,195){\color{red} $j$} \put(-203,180){\color{blue} $e_i-e_j$} \put(-263,180){\color{blue} $e_i+e_j$} \put(-105,175){\color{red} $i$} \put(-76,180){\color{red} $j$} \put(-55,180){\color{blue} $e_i-e_j$} \put(-76,160){\color{blue} $e_i+e_j$} \put(-105,175){\color{red} $i$} \put(-76,180){\color{red} $j$} \put(-55,180){\color{blue} $e_i-e_j$} \put(-76,160){\color{blue} $e_i+e_j$} \put(-365,60){\color{red} $i$} \put(-340,60){\color{red} $j$} \put(-325,109){\color{blue} $e_i+e_j$} \put(-285,68){\color{blue} $e_i+e_j$} \put(-180,73){\color{red} $i$} \put(-165,47){\color{red} $j$} \put(-165,95){\color{red} $k$} \put(-145,53){\color{blue} $e_i-e_j$} \put(-145,27){\color{blue} $e_i+e_j$} \put(-145,102){\color{blue} $e_i-e_k$} \put(-145,120){\color{blue} $e_i+e_k$} \caption{Construction of vectors for quivers from triangulations. Triangles $i$ and $j$ are separated by an edge assigned with vector $e_i+e_j$, unless there are two common edges meeting at a puncture, see the configuration in the middle of the top row. 
An internal edge of a self-folded triangle $j$ surrounded by triangle $i$ is assigned with vector $e_i-e_j$.} \label{triang} \end{center} \end{figure} \begin{cor} \label{char} A connected quiver $Q$ of rank higher than $2$ is mutation-finite if and only if every quiver in the mutation class of $Q$ has a positive semi-definite quasi-Cartan companion. \end{cor} \begin{proof} In view of Theorem~\ref{thm fin mut type} it is sufficient to show that every mutation-infinite quiver $Q$ is mutation-equivalent to a quiver not admitting a positive semi-definite quasi-Cartan companion. According to the well-known criterion (see e.g.~\cite[Corollary 8]{DO}), we can always find a quiver mutation-equivalent to $Q$ containing an arrow of weight at least $3$. Then any quasi-Cartan companion of such a quiver is clearly indefinite. \end{proof} It is natural to ask whether there exists a {\it fully compatible} positive semi-definite quasi-Cartan companion for every mutation-finite quiver. \begin{remark} It is easy to check via case-by-case inspection (taking into account Remark~\ref{sign}) that the quiver on Fig.~\ref{killhope} does not admit any fully compatible quasi-Cartan companion. One can also check that this quiver corresponds to a closed torus with two punctures, and it has one block decomposition only~\cite{Gu1,Gu2} (which consists of four blocks of type II). In particular, this quiver is not a subquiver of any larger quiver arising from a triangulation. \end{remark} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/killhope.eps,width=0.35\linewidth} \caption{This quiver admits no fully compatible quasi-Cartan companion. Shaded triangles label non-oriented cycles.} \label{killhope} \end{center} \end{figure} In the next section we will show that this example is unique in the class of mutation-finite quivers. \subsection{Fully compatible positive semi-definite companions} The main result of this section is the following theorem.
\begin{theorem} \label{thm fully} Let $Q$ be a mutation-finite quiver with more than $2$ vertices. If $Q$ is not the quiver shown in Fig.~\ref{killhope} then $Q$ has a fully compatible positive semi-definite quasi-Cartan companion. \end{theorem} We need to show the statement for quivers originating from triangulations and for quivers from the eleven exceptional mutation classes. \begin{lemma} \label{exceptional} Theorem~\ref{thm fully} holds for quivers from the eleven exceptional mutation classes. \end{lemma} \begin{proof} For finite, affine and elliptic quivers, the statement follows from Remarks~\ref{fin,aff} and~\ref{ell}. For the quivers in the mutation classes $X_6$ and $X_7$ the statement can be checked directly. \end{proof} It remains to prove the theorem for the case of quivers from triangulations. As in the proof of Theorem~\ref{thm fin mut type}, we will use vectors of the form $\pm e_i\pm e_j$, where $e_1,\dots,e_n$ is an orthonormal basis of a Euclidean space (here the vectors $\{e_i\}$ correspond to the triangles in the triangulation). In view of Remark~\ref{basis}, we assume the vectors are linearly independent. We need the following technical definitions. \begin{definition} If a vertex of $Q$ is assigned a vector $v=\pm e_i \pm e_j$, we say that the set $\{e_i,e_j\}$ is the {\em support} of the vector $v$. \end{definition} \begin{definition}[Disjoint support companion basis] Suppose that $Q$ is a quiver decomposed into blocks, and suppose that vectors $\{v_k\}=\{ \pm e_{i_k} \pm e_{j_k}\}$ provide a companion basis for $Q$.
We say that $\{v_i\}$ is a {\it disjoint support companion basis} if for every open vertex $p_k$ of the block decomposition of $Q$ (i.e., an open vertex of some block which is not matched with any other) the following holds: if $p_k$ is not connected to any other open vertex in the block decomposition of $Q$ then the support of the vector $v_k$ assigned to $p_k$ is not contained in the union of all supports of other vectors $\{v_i\}$ for $i\ne k$. \end{definition} We will use the following two technical lemmas concerning disjoint support companion bases. \begin{lemma} \label{bl_} Every block has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. \end{lemma} \begin{proof} \label{bl} The required companion bases are provided in Fig.~\ref{blocks}. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/blocks.eps,width=0.99\linewidth} \put(-460,27){\scriptsize $e_1+e_2$} \put(-420,27){\scriptsize $e_2+e_3$} \put(-380,27){\scriptsize $e_1+e_2$} \put(-335,27){\scriptsize $e_2+e_3$} \put(-375,75){\scriptsize $e_1+e_3$} \put(-300,27){\scriptsize $e_2+e_3$} \put(-260,27){\scriptsize $e_2-e_3$} \put(-290,75){\scriptsize $e_1+e_2$} \put(-225,27){\scriptsize $e_2+e_3$} \put(-180,27){\scriptsize $e_2-e_3$} \put(-230,75){\scriptsize $e_1+e_2$} \put(-167,42){\scriptsize $e_1\!+\!e_2$} \put(-95,42){\scriptsize $e_2\!+\!e_3$} \put(-150,75){\scriptsize $e_2+e_4$} \put(-150,-5){\scriptsize $e_2-e_4$} \put(-95,75){\scriptsize $e_2-e_4$} \put(-95,-5){\scriptsize $e_2-e_3$} \put(-25,75){\scriptsize $e_2+e_3$} \put(-25,-5){\scriptsize $e_2+e_4$} \put(-30,35){\scriptsize $e_1\!+\!e_2$} \caption{Fully compatible quasi-Cartan companions for blocks} \label{blocks} \end{center} \end{figure} \begin{lemma} \label{disjoint} Let $Q$ be a quiver decomposed into blocks. 
Suppose that $Q_1$ and $Q_2$ are subquivers of $Q$ such that $Q= Q_1\cup Q_2$, every block of the decomposition of $Q$ lies entirely either in $Q_1$ or in $Q_2$, and the intersection $Q_1\cap Q_2$ contains no arrows. Suppose also that $Q_1$ and $Q_2$ have quasi-Cartan companions with disjoint support companion bases. Then $Q$ also has a quasi-Cartan companion with a disjoint support companion basis. \end{lemma} \begin{proof} Suppose that $\{p_1,\dots, p_k\}=Q_1\cap Q_2$ are vertices in the intersection. Consider disjoint support companion bases ${\bf w}=\{\pm w_i\pm w_j\}$ and ${\bf u}=\{\pm u_i\pm u_j\}$ of $Q_1$ and $Q_2$. Let $W$ and $U$ be vector spaces spanned by vectors $\{w_i\}$ and $\{u_i\}$ respectively. Denote by $w_{i_m}$ ($u_{i_m}$ resp.) the vectors showing up in the expressions assigned to vertices $p_m$, $m=1,\dots,k$. Now form the vector space $U+W$ by identifying $w_{i_m}=u_{i_m}$, and extend the quadratic forms from $U$ and $W$ to $U+W$ by $(w_i,u_j)=0$ for all the remaining $i,j$. This provides the required disjoint support companion basis. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/disj.eps,width=0.5\linewidth} \put(-170, 40){$Q_1$} \put(-80, 40){$Q_2$} \put(-190, 20){\small $\pm w_i\pm w_j$} \put(-80, 20){\small $\pm u_i\pm u_j$} \caption{To the proof of Lemma~\ref{disjoint}.} \label{disj} \end{center} \end{figure} \end{proof} To prove Theorem~\ref{thm fully}, we first show the statement for quivers with a block decomposition containing blocks of type II only. More precisely, for such quivers we will prove the existence of a quasi-Cartan companion with all required properties and additionally a disjoint support companion basis. This will be done by induction on the number of non-oriented triangles in the quiver (see Lemma~\ref{no_nonor} for the base of the induction, i.e. the case when all triangles are oriented, and Lemma~\ref{step} for the induction step).
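Companion bases such as those of Lemma~\ref{bl_} can also be sanity-checked numerically. The following sketch (ours; standard library only) verifies positive semi-definiteness of a Gram matrix via the classical criterion that a symmetric real matrix is positive semi-definite if and only if all its principal minors are non-negative, applied to the basis $e_1+e_2$, $e_2+e_3$, $e_1+e_3$ for a block of type II:

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; adequate for the small matrices here
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def gram(vs):
    return [[sum(x * y for x, y in zip(u, v)) for v in vs] for u in vs]

def is_psd(A):
    # symmetric A is positive semi-definite iff every principal minor is >= 0
    n = len(A)
    return all(det([[A[i][j] for j in S] for i in S]) >= 0
               for r in range(1, n + 1) for S in combinations(range(n), r))

# Companion basis for a block of type II (an oriented triangle, all signs "+"):
basis_II = [[1, 1, 0],   # e1 + e2
            [0, 1, 1],   # e2 + e3
            [1, 0, 1]]   # e1 + e3
A = gram(basis_II)
assert A == [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
assert is_psd(A)
```

Since the vectors live in a Euclidean space, positivity is automatic here; the check is useful mainly for experimenting with other sign choices.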
We then show that one can also include blocks of type I and IV, and then finally we use Lemmas~\ref{bl_} and~\ref{disjoint} to conclude the theorem for block decompositions containing blocks of the remaining types IIIa, IIIb and V. \begin{lemma} \label{no_nonor} Let $Q$ be a quiver decomposed into several copies of block II. Suppose that $Q$ contains no non-oriented cycles of length 3. Then $Q$ has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. \end{lemma} \begin{proof} We will use the same construction as in the proof of Theorem~\ref{thm fin mut type}, see Fig.~\ref{triang}. Since we only have copies of block II in the decomposition, the arcs labelled by anything different from $e_i+e_j$ for some $i,j$ will only occur when two copies of block II are attached along two vertices to create a vanishing arrow. However, the corresponding vertex of the quiver is not a part of any triangle in the quiver (it is a vertex in an oriented quadrilateral). By assumption, all triangles in $Q$ are oriented, and as we have just seen, every arrow in a triangle is labelled by ``+''. Therefore the quasi-Cartan companion is fully compatible. Furthermore, observe that any open vertex of the decomposition of $Q$ corresponds to an arc of a triangle $T_k$ such that the two other arcs of $T_k$ lie at the boundary. Therefore, only one arc of $T_k$ corresponds to a vertex of $Q$, and thus the basis vector $e_k$ belongs only to the support of the vector corresponding to this open vertex, so we get a disjoint support companion basis. \end{proof} \begin{lemma} \label{step} Let $Q$ be a quiver decomposed into several copies of block II. Suppose that $Q$ is not the quiver shown in Fig.~\ref{killhope}. Then $Q$ has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. \end{lemma} \begin{proof} To show the statement we will proceed by induction on the number of non-oriented triangles in $Q$.
Lemma~\ref{no_nonor} constitutes the base of the induction (no non-oriented triangles in $Q$). Suppose that the statement is known for every quiver with fewer than $n$ non-oriented triangles, and consider a quiver $Q$ with $n$ non-oriented triangles. Let $p,q,r$ be vertices of some non-oriented triangle in $Q$. Clearly, each of the edges $pq,qr,rp$ belongs to its own block of type II (two of these blocks may have a second vertex in common), so the configuration of blocks forming the triangle $pqr$ looks like one of the three configurations shown in Fig.~\ref{3blocks}, up to a symmetry obtained by changing the direction of all arrows (the orientations of the edges of the non-oriented triangle determine all other arrows in the three blocks; furthermore, two of the three open vertices may be attached to each other; this gives 4 possibilities, two of which coincide up to reversing all arrows). Denote this configuration by $C$. Notice that, as shown in Fig.~\ref{3blocks}, the configuration itself has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/3blocks.eps,width=0.9\linewidth} \put(-435,93){\scriptsize $e_1\!+\!e_3$} \put(-320,93){\scriptsize $e_2\!+\!e_3$} \put(-357,44){\scriptsize $e_1\!-\!e_2$} \put(-357,130){\scriptsize $e_3\!+\!f_3$} \put(-435,-3){\scriptsize $e_1\!+\!f_1$} \put(-320,-3){\scriptsize $e_2\!+\!f_2$} \put(-280,96){\scriptsize $e_1\!+\!e_3$} \put(-180,96){\scriptsize $e_2\!-\!e_3$} \put(-210,44){\scriptsize $e_1\!+\!e_2$} \put(-270,-3){\scriptsize $e_1\!+\!f_1$} \put(-170,-3){\scriptsize $e_2\!-\!e_3$} \put(-80,130){\scriptsize $e_1\!+\!f_1$} \put(-40,-3){\scriptsize $e_2\!+\!e_3$} \put(-115,93){\scriptsize $e_1\!+\!e_2$} \put(-5,93){\scriptsize $e_1\!+\!e_3$} \put(-40,44){\scriptsize $e_2\!-\!e_3$} \caption{Configurations containing a non-oriented triangle.
} \label{3blocks} \end{center} \end{figure} An additional triangle attached to that configuration may be attached along one, two or three vertices. We will consider these three cases. \medskip \noindent {\bf Case 1:} First, suppose that every triangle attached to $C$ is attached by at most one vertex. Then every edge of $Q$ either belongs to $C$ or to $Q\setminus C$ (here we understand $Q\setminus C$ as a subquiver spanned by all vertices contained in at least one block not lying in $C$). Notice that $Q\setminus C$ is a quiver from a triangulation containing open vertices (so it is different from the quiver shown in Fig.~\ref{killhope}). Also, $Q\setminus C$ contains a smaller number of non-oriented triangles than $Q$ (as it does not contain $pqr$). Hence, by the inductive assumption, $Q\setminus C$ has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. Furthermore, since every triangle attached to $C$ is attached only along one vertex, no two vertices in the intersection of $C$ and $Q\setminus C$ are adjacent in $Q\setminus C$. As shown in Fig.~\ref{3blocks}, $C$ has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. By Lemma~\ref{disjoint}, this implies that the quiver $Q$ itself has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. \medskip \noindent {\bf Case 2:} Next, suppose that there is a triangle $T$ attached to $C$ along 2 vertices. Then the quiver $T \cup C$ will be one of the quivers shown in Fig.~\ref{2vert} (again, we identify configurations obtained by reversing all arrows). Notice that each of these quivers has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis (see Fig.~\ref{2vert}). Furthermore, if no triangle is attached to two vertices of $T\cup C$ simultaneously, then the reasoning of Case 1 shows the statement for $Q$.
If some triangle $T'$ is attached to the two open vertices of $T\cup C$, then the quiver $T'\cup T\cup C$ also has fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis (to see this one can identify $f:=f_1=f_2$ and assign the vector $f+f_0$ to the vertex of $T'$ which does not lie in $T\cup C$, see Fig.~\ref{2vert}). Then the same reasoning as in Case 1 shows the statement for $Q$. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/2vert.eps,width=0.9\linewidth} \put(-352,120){\scriptsize $e_1\!+\!e_3$} \put(-318,81){\scriptsize $e_3\!\!+\!\!e_4$} \put(-327,23){\scriptsize $e_4\!+\!f_2$} \put(-432,92){\scriptsize $e_1\!+\!f_1$} \put(-370,90){\scriptsize $e_2\!-\!e_3$} \put(-410,23){\scriptsize $e_1\!+\!e_2$} \put(-346,56){\scriptsize $e_2\!\!+\!\!e_4$} \put(-305,92){\scriptsize $e_1\!\!+\!\!f_1$} \put(-252,120){\scriptsize $e_1\!+\!e_3$} \put(-275,23){\scriptsize $e_1\!+\!e_2$} \put(-218,23){\scriptsize $e_4\!+\!f_2$} \put(-224,56){\scriptsize $e_2\!\!+\!\!e_4$} \put(-198,81){\scriptsize $e_3\!\!+\!\!e_4$} \put(-250,90){\scriptsize $e_2\!-\!e_3$} \put(-127,0){\scriptsize $e_1\!+\!f_1$} \put(-127,120){\scriptsize $e_3\!+\!f_2$} \put(-153,33){\scriptsize $e_1\!+\!e_2$} \put(-100,26){\scriptsize $e_1\!\!+\!\!e_4$} \put(-127,56){\scriptsize $e_2\!\!+\!\!e_4$} \put(-100,81){\scriptsize $e_3\!\!+\!\!e_4$} \put(-150,92){\scriptsize $e_2\!-\!e_3$} \put(-27,0){\scriptsize $e_1\!+\!f_1$} \put(-27,120){\scriptsize $e_3\!+\!f_2$} \put(-53,33){\scriptsize $e_1\!+\!e_2$} \put(-1,26){\scriptsize $-e_1\!\!+\!\!e_4$} \put(-26,56){\scriptsize $e_2\!\!+\!\!e_4$} \put(2,80){\scriptsize $e_3\!\!+\!\!e_4$} \put(-51,91){\scriptsize $e_2\!-\!e_3$} \caption{Triangle $T$ attached to $C$ by two vertices. } \label{2vert} \end{center} \end{figure} \medskip \noindent {\bf Case 3:} Finally, suppose that there is a triangle $T$ attached to $C$ along 3 vertices. Then the quiver $T \cup C$ will be one of the quivers shown in Fig.~\ref{3vert}. 
The first of these quivers coincides with the exceptional quiver shown in Fig.~\ref{killhope} and has no fully compatible quasi-Cartan companion. The second one has a fully compatible quasi-Cartan companion (see Fig.~\ref{3vert} for the corresponding vectors). It has no open vertices, so it cannot be a part of any larger quiver from a triangulation. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/3vert.eps,width=0.6\linewidth} \put(-68,107){\scriptsize $e_1\!+\!e_3$} \put(-104,-6){\scriptsize $e_1\!+\!e_2$} \put(-30,-6){\scriptsize $e_1\!-\!e_4$} \put(-49,33){\scriptsize $e_2\!\!+\!\!e_4$} \put(-110,69){\scriptsize $e_2\!\!-\!\!e_3$} \put(-23,69){\scriptsize $e_3\!\!+\!\!e_4$} \caption{Triangle attached to $C$ by three vertices. } \label{3vert} \end{center} \end{figure} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm fully}:] To complete the proof of Theorem~\ref{thm fully}, it is sufficient to show that there is the required quasi-Cartan companion for block decompositions containing blocks other than those of type II. We will first show that one can add blocks of types I and IV, and then that one can add blocks of types IIIa, IIIb or V. For the blocks of types I and IV, we can substitute such a block in the block decomposition by a block of type II (of course, this slightly changes the surface); we will then obtain a different quiver $Q'$. After this substitution we will never obtain the quiver shown in Fig.~\ref{killhope} (as this quiver does not contain open vertices while the process of substitution does introduce such vertices). So, $Q'$ has the required quasi-Cartan companion with a disjoint support companion basis. To get a quasi-Cartan companion for $Q$, we just remove the extra vertex in the case of a block of type I, and add an additional vector as in Fig.~\ref{bl_I_IV} in the case of a block of type IV. Finally, to treat the blocks of types IIIa, IIIb and V as well, let $Q_1$ be the union of all such blocks. Let $Q_2$ be the union of all other blocks in $Q$.
As each block of type IIIa, IIIb or V has a unique open vertex, the subquiver $Q_1\cap Q_2$ has no arrows. Furthermore, as $Q_2$ is a quiver without blocks of type IIIa, IIIb and V, it has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. In view of Lemma~\ref{bl_}, the quiver $Q_1$ also has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis. So, by Lemma~\ref{disjoint} the quiver $Q$ itself has a fully compatible positive semi-definite quasi-Cartan companion with a disjoint support companion basis, as required. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/bl_I_IV.eps,width=0.7\linewidth} \put(-205,115){\scriptsize $e_1\!+\!e_2$} \put(-138,115){\scriptsize $e_1\!+\!e_3$} \put(-155,155){\scriptsize $e_1\!+\!f_1$} \put(-80,115){\scriptsize $e_1\!+\!e_2$} \put(-13,115){\scriptsize $e_1\!+\!e_3$} \put(-205,43){\scriptsize $e_1\!+\!e_2$} \put(-138,43){\scriptsize $e_1\!+\!e_3$} \put(-155,83){\scriptsize $e_1\!+\!f_1$} \put(-80,43){\scriptsize $e_1\!+\!e_2$} \put(-13,43){\scriptsize $e_1\!+\!e_3$} \put(-28,83){\scriptsize $e_1\!+\!f_1$} \put(-28,16){\scriptsize $e_1\!-\!f_1$} \caption{Constructing companions for quivers with blocks of type I and IV.} \label{bl_I_IV} \end{center} \end{figure} \begin{remark} A mutation of a fully compatible companion is not necessarily a fully compatible quasi-Cartan companion, see Fig.~\ref{ex_puncture}. 
\end{remark} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/punct.eps,width=0.6\linewidth} \put(-150,50){$\mu_1$} \put(-270,78){$1$} \put(-310,70){\small $e_3-e_1$} \put(-310,0){\small $e_4-e_1$} \put(-195,70){\small $e_2+e_3$} \put(-195,0){\small $e_2+e_4$} \put(-269,35){\small $e_1\!+\!e_2$} \put(-110,70){\small $e_1-e_3$} \put(-110,0){\small $e_4-e_3$} \put(5,70){\small $e_2+e_3$} \put(5,0){\small $e_2+e_4$} \put(-68,30){\small $e_1\!+\!e_2$} \caption{Mutation of a fully compatible quasi-Cartan companion is not always fully compatible: after the mutation there is an oriented triangle labelled with vectors $e_2+e_3$, $e_2+e_4$ and $e_4-e_3$.} \label{ex_puncture} \end{center} \end{figure} \section{Group from an (unpunctured) surface quiver} \label{group} The construction described in this section was initiated in~\cite{BM} for the case of quivers of finite type and then extended in~\cite{FeTu} to affine type quivers, quivers from unpunctured surfaces and exceptional mutation-finite quivers. \subsection{Construction of the group} \label{group-constr} Here we present the construction for the case of an unpunctured surface.
Given a quiver $Q$ from an unpunctured surface, we construct a group $G=G(Q)$ as follows: \begin{itemize} \item the generators $s_1,\dots,s_n$ of $G$ correspond to the vertices of $Q$; \item there are five types of relations: \begin{itemize} \item[(R1)] $s_i^2=e$ for all $i=1,\dots,n$; \item[(R2)] $(s_is_j)^{m_{ij}}= e$ for all $i,j$ not joined by a double arrow, where $$ m_{ij}=\begin{cases} 2,& \text{if $i,j$ are not joined}; \\ 3,& \text{if $i,j$ are joined by a single arrow}; \end{cases} $$ \item[(R3)] (cycle relations) \\ $(s_1\ s_2s_3s_2)^2=e$ for every subquiver of $Q$ shown in Fig.~\ref{rel}(a) and \\ $(s_1\ s_2s_3s_2)^3=e$ for every subquiver of $Q$ shown in Fig.~\ref{rel}(b) respectively; \item[(R4)] ($\widetilde A_3$-relation) \\ $(s_1\ s_2s_3s_4s_3s_2)^2=e$ for every subquiver of $Q$ shown in Fig.~\ref{rel}(c); \item[(R5)] (handle relations) \\ $(s_1\ s_2s_3s_4s_3s_2)^3=e$ for every subquiver of $Q$ shown in Fig.~\ref{rel}(d); \\ $(s_1\ s_2s_3s_4s_5s_4s_3s_2)^2=e$ for every subquiver of $Q$ shown in Fig.~\ref{rel}(e).
\end{itemize} \end{itemize} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/rel-ext.eps,width=0.99\linewidth} \put(-458,25){\scriptsize $1$} \put(-433,65){\scriptsize $2$} \put(-400,25){\scriptsize $3$} \put(-380,25){\scriptsize $1$} \put(-358,65){\scriptsize $2$} \put(-323,25){\scriptsize $3$} \put(-300,25){\scriptsize $1$} \put(-280,65){\scriptsize $2$} \put(-245,25){\scriptsize $3$} \put(-280,-3){\scriptsize $4$} \put(-210,25){\scriptsize $1$} \put(-190,65){\scriptsize $2$} \put(-155,25){\scriptsize $3$} \put(-190,-3){\scriptsize $4$} \put(-105,25){\scriptsize $1$} \put(-85,65){\scriptsize $2$} \put(-50,25){\scriptsize $3$} \put(-85,-3){\scriptsize $4$} \put(1,25){\scriptsize $5$} \put(-460,-43){\small (a)} \put(-380,-43){\small (b)} \put(-300,-43){\small (c)} \put(-210,-43){\small (d)} \put(-105,-43){\small (e)} \put(-460,-23){\scriptsize $(s_1\ s_2s_3s_2)^2=e$ } \put(-380,-23){\scriptsize $(s_1\ s_2s_3s_2)^3=e$} \put(-300,-23){\scriptsize $(s_1\ s_2s_3s_4s_3s_2)^2=e$ } \put(-210,-23){\scriptsize $(s_1\ s_2s_3s_4s_3s_2)^3=e$ } \put(-105,-23){\scriptsize $(s_1\ s_2s_3s_4s_5s_4s_3s_2)^2=e$} \put(-430,93){ Cycle relations: } \put(-300,93){\normalsize $\widetilde A_3$-relation: } \put(-170,93){ Handle relations:} \caption{Relations R3,R4,R5 for the group $G$. } \label{rel} \end{center} \end{figure} \begin{theorem}[\cite{BM},\cite{FeTu}] \label{G invar} If $Q$ is a quiver arising from an unpunctured surface and $G=G(Q)$ is a group defined as above, then $G$ is invariant under the mutations of $Q$. \end{theorem} \begin{remark} If $Q$ is a quiver and $\mu_k(Q)$ is a mutation of $Q$ in the direction $k$, then the isomorphism of groups $G_1=G(Q)$ and $G_2=G(\mu_k(Q))$ can be described as follows. If $\{s_i\}$ and $\{t_i\}$ are the generators of $G(Q)$ and $G(\mu_k(Q))$ described above, then $$ t_i=\begin{cases} s_ks_is_k,& \text{if $Q$ contains an arrow from $i$ to $k$};\\ s_i, & \text{ otherwise}. 
\end{cases} $$ \end{remark} \begin{remark} Theorem~\ref{G invar} implies that the group $G$ does not depend on the choice of triangulation of the corresponding surface $S$, so one can say that $G=G(S)$ is the group assigned to the topological surface $S$. \end{remark} \begin{remark} Relations (R1) and (R2) define a Coxeter group; the other relations turn $G$ into a quotient of a Coxeter group. However, in the cases of finite and affine quivers, by choosing an acyclic representative $Q$ one can see that there are no non-Coxeter relations in $G$, so $G$ is a Coxeter group itself. Applying Theorem~\ref{G invar}, we see that $G$ is a Coxeter group for any quiver of finite or affine type. \end{remark} \subsection{Moving the marked points from one boundary component to another} In Section~\ref{group-constr} we recalled the construction of a group for any quiver from an unpunctured surface $S$. It is natural to ask whether the group $G=G(S)$ uniquely defines the surface $S$. The main result of this section indicates that it does not: one can move boundary marked points from one boundary component to another without changing the group. \begin{theorem} \label{boundaries} Let $S_{g,b}$ be an unpunctured surface of genus $g$ with $b$ boundary components. Then the group $G(S_{g,b})$ does not depend on the distribution of the boundary marked points along the boundary components of the surface (it depends only on $g$, $b$ and the total number of boundary marked points). \end{theorem} Denote by $G(S_{g,b};k_1,\dots,k_b)$ the group constructed from $S_{g,b}$ with $k_1,\dots,k_b$ marked points on the boundary components, $k_i\ge 1$. \begin{lemma} \label{l 13-22} $G(S_{0,3};1,1,3)\cong G(S_{0,3};1,2,2)$. \end{lemma} \begin{proof} Let $S_1$ and $S_2$ be the surfaces with $(1,1,3)$ and $(1,2,2)$ boundary marked points respectively. Consider the triangulations of $S_1$ and $S_2$ as in Fig.~\ref{13-22}.
Let $s_i$ be the generators for $G(S_1)$ corresponding to the triangulation, and let $\{t_i\}$ be the set of generators of $G(S_1)$ satisfying $$ t_i= \begin{cases} s_4s_5s_4, & \text{if $i=5$,}\\ s_i, & \text{otherwise.} \end{cases} $$ We will show that the defining relations for $G(S_1)$, rewritten in terms of the $t_i$, become exactly the defining relations for $G(S_2)$ in the triangulation shown in Fig.~\ref{13-22}, with $t_i$ corresponding to the $i$-th arc. Indeed, all the relations coincide except for the Coxeter relations and cycle relations involving arcs in the dark quadrilaterals, and these remaining relations can be checked directly. For example, the Coxeter relation $(s_5s_6)^2=e$ will turn into the cycle relation $( t_4t_5t_4\ t_6)^2=e$, while the cycle relation $(s_3\ s_4s_5s_4)^2=e$ turns into a Coxeter relation $ (t_3t_5)^2=e$. Moreover, every defining relation for $G(S_2)$ is obtained from the defining relations for $S_1$ in this way, which implies that the groups coincide. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/13-22.eps,width=0.99\linewidth} \put(-422,30){\scriptsize $1$} \put(-382,40){\scriptsize $3$} \put(-360,50){\scriptsize $4$} \put(-365,65){\scriptsize $5$} \put(-338,40){\scriptsize $6$} \put(-318,40){\scriptsize $7$} \put(-297,40){\scriptsize $8$} \put(-365,85){\scriptsize $2$} \put(-165,30){\scriptsize $1$} \put(-138,40){\scriptsize $3$} \put(-116,40){\scriptsize $4$} \put(-100,70){\scriptsize $5$} \put(-82,40){\scriptsize $6$} \put(-60,40){\scriptsize $7$} \put(-42,40){\scriptsize $8$} \put(-100,90){\scriptsize $2$} \caption{Moving a boundary marked point from one boundary component to another.} \label{13-22} \end{center} \end{figure} \begin{lemma} \label{insert} $G(S_{0,3};1,1,m)=G(S_{0,3};1,2,m-1)$ for any $m\ge 2$. \end{lemma} \begin{proof} To show the statement we will slightly adjust the proof of Lemma~\ref{l 13-22}.
Namely, we will insert one more arc between the arcs labelled $7$ and $8$ in both parts of Fig.~\ref{13-22} without changing anything else in the proof (see Fig.~\ref{13-22ins}, left). Repeating this several times we see that $G(S_{0,3};1,1,m)=G(S_{0,3};1,2,m-1)$ for all $m\ge 3$. For $m=2$ the statement holds trivially. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/13-22ins.eps,width=0.96\linewidth} \put(-156,20){\scriptsize $1$} \put(-132,30){\scriptsize $3$} \put(-118,40){\scriptsize $4$} \put(-105,60){\scriptsize $5$} \put(-93,40){\scriptsize $6$} \put(-74,40){\scriptsize $7$} \put(-37,46){\scriptsize $8$} \put(-100,95){\scriptsize $2$} \put(-395,20){\scriptsize $1$} \put(-371,30){\scriptsize $3$} \put(-357,40){\scriptsize $4$} \put(-344,60){\scriptsize $5$} \put(-332,40){\scriptsize $6$} \put(-313,40){\scriptsize $7$} \put(-276,46){\scriptsize $8$} \put(-339,95){\scriptsize $2$} \caption{Inserting more boundary marked points to each of the three boundary components.} \label{13-22ins} \end{center} \end{figure} \begin{lemma} If $k+l+m=k'+l'+m'$ then $G(S_{0,3};k,l,m)=G(S_{0,3};k',l',m')$. \end{lemma} \begin{proof} We will prove that $G(S_{0,3};k,l,m)=G(S_{0,3};k,l+1,m-1)$ for any $k,l\ge 1$, $m\ge 2$. For this we modify the reasoning in the proof of Lemma~\ref{l 13-22} once again: in Lemma~\ref{insert} we increased $m$ by inserting a new arc between arcs $7$ and $8$; similarly, we can increase the number $k$ by inserting an arc between the arcs labelled $1$ and $2$, and we can increase the number $l$ by inserting an arc between the arcs labelled $1$ and $3$ (see Fig.~\ref{13-22ins}, right). So, we can move a boundary marked point from any boundary component to any other boundary component. \end{proof} \begin{lemma} \label{3} If $k+l+m=k'+l'+m'$ and $b\ge 3$ then $G(S_{g,b};k,l,m)=G(S_{g,b};k',l',m')$. \end{lemma} \begin{proof} Consider once more Fig.~\ref{13-22} and the proof of Lemma~\ref{l 13-22}.
We can increase the genus or the number of boundary components of the surface $S$ by attaching a triangulated surface $S_{1,1}$ (a torus with 2 boundary marked points on a unique boundary component) or $S_{0,2}$ (an annulus with 1 marked point on one boundary component and 2 marked points on the other). We will attach this small surface along one of its boundary segments to the boundary segment of $S$ lying in Fig.~\ref{13-22} between arcs $7$ and $8$. As this does not affect the shaded regions, it will not affect the proof. \end{proof} It remains to consider the case when we cannot choose three boundary components as in Lemma~\ref{l 13-22}, i.e.\ the case when we only have 2 boundary components (in the case $b=1$ there is nothing to prove). \begin{lemma} \label{2} If $k+l=k'+l'$ then $G(S_{g,2};k,l)=G(S_{g,2};k',l')$. \end{lemma} \begin{proof} The proof is by induction on $g$. The base, $g=0$, is known: in this case we deal with an annulus, and the group $G$ is the affine Weyl group $\widetilde A_{k+l}$. Assume that the lemma is known for all surfaces of genus $g$. To increase the genus of $S=S_{g,2}$, we cut $S$ along any arc $\alpha$ connecting two boundary components and insert a handle $S_{1,1}$ (with two marked points on the boundary) between the sides $\alpha_1$ and $\alpha_2$ of the cut as in Fig.~\ref{torus}. As a result, we obtain a surface $S'=S_{g+1,2}$. We then choose a triangulation of the handle so that the arcs $\alpha_1$ and $\alpha_2$ are not adjacent (for this it is sufficient to include the arc $\beta$ separating $\alpha_1$ from $\alpha_2$ at both ends and going through the handle in between). Cutting $S'$ along $\alpha_1$ and $\alpha_2$, we obtain two connected components: a torus $S_{1,1}$ and some other surface $P$ of genus $g$. Let $G(P)$ be the corresponding group for this surface.
Then $G(S')$ is an amalgamated product of $G(P)$ and $G(S_{1,1})$ along the common subgroup $\langle s_{\alpha_1},s_{\alpha_2}\ | \ (s_{\alpha_1}s_{\alpha_2})^2=e\rangle$. As neither $G(P)$ nor $G(S_{1,1})$ depends on the distribution of the boundary marked points, we get the lemma. \end{proof} \begin{remark} While considering block decompositions of $P$ and $S_{1,1}$, we glue two additional triangles along $\alpha_1$ and $\alpha_2$ to each of them, so that the arcs $\alpha_1$ and $\alpha_2$ become interior ones (and thus correspond to generators of the groups). When gluing surfaces together we remove these four triangles. \end{remark} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/torus_.eps,width=0.49\linewidth} \put(-170,55){$\alpha_1$} \put(-170,25){$\alpha_2$} \put(-130,43){\color{ForestGreen} $\beta$} \caption{Inserting an extra handle. } \label{torus} \end{center} \end{figure} Lemmas~\ref{2} and~\ref{3} together prove Theorem~\ref{boundaries}. \section{Extended affine Weyl group for a surface} \label{eawg} In this section we provide a construction of another group $W(S)$ from a bordered marked unpunctured surface $S$, and then explore its relation to the group $G(S)$ defined above. \subsection{Extended affine Weyl group} We recall the definition of extended affine Weyl groups following~\cite{MS,AS} (these groups are called ``toroidal Weyl groups'' in~\cite{MS}). Let $V$ be a quadratic space with a quadratic form $\langle\cdot,\cdot\rangle$ of signature $(n_+,n_0)$, where $n_0$ is the dimension of the radical $V_0$. Choose a maximal positive-definite subspace $V_+$, i.e.\ $\dim V_+=n_+$ and $V= V_+\oplus V_0$. An {\it extended affine root system} $R$ is a set of roots (vectors) in $V$, such that $R$ is discrete, indecomposable, reduced and closed under reflections with respect to the hyperplanes orthogonal to the real roots in $R$ (for detailed definitions and properties we refer to~\cite{Sa1,AABGP}).
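For later use we spell out how these reflections act (a standard formula, recorded here for the reader's convenience): for a real root $u$, i.e.\ a root with $\langle u,u\rangle\neq 0$, the corresponding reflection is

```latex
$$
 r_u(x) \;=\; x \,-\, \frac{2\langle u, x\rangle}{\langle u,u\rangle}\, u ,
$$
```

and for any two real roots $u,w$ one has the standard identity $r_u r_w r_u = r_{r_u(w)}$; thus closedness of $R$ under reflections means precisely that the reflections in real roots permute the roots of $R$.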
Choose bases $\{v_1,\dots,v_{n_+}\}$ and $\{\delta_1,\dots,\delta_{n_0}\}$ in $V_+$ and $V_0$ respectively, and consider the space $V\oplus V_0^*= V_+\oplus V_0 \oplus V_0^*$ with basis $\{v_1,\dots,v_{n_+},\delta_1,\dots,\delta_{n_0}, \delta_1^*, \dots, \delta_{n_0}^* \}$. We extend the quadratic form $\langle\cdot,\cdot\rangle$ to $V\oplus V_0^*$ by $$ \langle v_i, \delta_j^*\rangle=0 \quad \text{for all } i=1,\dots,n_+,\ j=1,\dots,n_0, \qquad\qquad \langle \delta_i, \delta_j^*\rangle= \begin{cases} 1,& \text{if $i=j$};\\ 0,& \text{otherwise}, \end{cases} $$ and $\langle \delta_i^*, \delta_j^*\rangle=0$ for all $i,j$. Then one can consider the action of the reflections in the real roots of $R$ on $V\oplus V_0^*$. The {\em extended affine Weyl group} $W=W(R)$ is the group acting on $V\oplus V_0^*$ generated by the reflections in the real roots of $R$. \subsection{Construction of a special triangulation.} \label{triang-sp} We now take an unpunctured surface $S$ with a particular choice of triangulation as described below. For this triangulation we consider the corresponding quiver $Q$ and construct a positive semi-definite fully compatible quasi-Cartan companion. The reflections in the companion basis will generate an extended affine Weyl group of type $A_{n_+}^{[n_0]}$ for certain $n_+,n_0$. An unpunctured surface $S$ contains the following features: boundary components (each with a number of boundary marked points) and handles. To construct the triangulation we do the following: \begin{itemize} \item[--] Choose any boundary component $b_0$ and a marked point $p$ on it. \item[--] Consider three arcs (loops) $x,y,z$ as in Fig.~\ref{tr}, top: \begin{itemize} \item[-] all three of them have both ends at $p$; \item[-] $x$ separates the boundary component $b_0$; \item[-] $y$ separates all other boundary components; \item[-] $z$ separates all handles. \end{itemize} \item[--] Triangulate the region separated by $x$ as in Fig.~\ref{tr}, bottom left. We schematically show the corresponding quiver in Fig.~\ref{tr}, bottom right.
\item[--] Triangulate the regions separated by arcs $y$ and $z$ as in Fig.~\ref{tr3}. Use the triangulations of handles and of annuli shown in the middle of Fig.~\ref{tr3} (notice, that the triangulation of the annulus includes a region triangulated as the domain separated by $x$, cf. Fig.~\ref{tr}, bottom left.) \end{itemize} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/tr2.eps,width=0.98\linewidth} \put(-220,235){$x$} \put(-250,218){$z$} \put(-203,218){$y$} \put(-90,75){$x$} \put(-75,55){$y$} \put(-230,110){$p$} \put(-125,55){$z$} \put(-370,35){$x$} \put(-220,55){$x$} \put(-170,30){handles} \put(-50,30){holes} \caption{Constructing the triangulation } \label{tr} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/tr3.eps,width=0.98\linewidth} \put(-415,265){$z$} \put(-438,219){\scriptsize $z_1$} \put(-397,219){\scriptsize $z_2$} \put(-363,219){\scriptsize $z_3$} \put(-330,219){\scriptsize $z_4$} \put(-177,165){$z_i$} \put(-210,220){$z_i$} \put(-145,245){$z$} \put(-135,219){\scriptsize $z_1$} \put(-105,219){\scriptsize $z_2$} \put(-70,219){\scriptsize $z_3$} \put(-50,245){\scriptsize $z_4$} \put(-315,110){$y$} \put(-405,90){\scriptsize $y_4$} \put(-385,90){\scriptsize $y_3$} \put(-350,90){\scriptsize $y_2$} \put(-313,90){\scriptsize $y_1$} \put(-315,110){$y$} \put(-1,110){$y$} \put(-235,110){$y_i$} \put(-160,-3){$y_i$} \put(-100,107){\scriptsize $y_4$} \put(-77,80){\scriptsize $y_3$} \put(-47,80){\scriptsize $y_2$} \put(-13,80){\scriptsize $y_1$} \caption{Constructing the triangulation, cont.: regions with handles (top) and holes (bottom). } \label{tr3} \end{center} \end{figure} The quiver corresponding to the constructed triangulation is shown in Fig.~\ref{tr-q}. 
\begin{figure}[!h] \begin{center} \epsfig{file=./pic/tr-q.eps,width=0.95\linewidth} \put(-205,175){$x$} \put(-175,135){$y$} \put(-240,135){$z$} \caption{The quiver } \label{tr-q} \end{center} \end{figure} \medskip \noindent {\bf Construction of the root system.} Next, we construct a companion basis by assigning vectors to the nodes of $Q$. For this, we first consider the subquiver $Q'$ of $Q$ obtained by removing from $Q$ all nodes labelled by squares in Fig.~\ref{tr-q-vect}. Notice that $Q'$ is a quiver of type $A_m$ for some $m$ (see e.g.~\cite{H}). Label the nodes of $Q'$ by roots in the root system of type $A_m$ (more precisely, of type $A_{n-2g-b+1}$, where $n$ is the number of nodes in $Q$, $g$ is the genus of the surface and $b$ is the number of boundary components), providing a fully compatible quasi-Cartan companion for $Q'$. We denote the vector space spanned by the companion basis by $V_+$. We now extend the quadratic form to an $n$-dimensional vector space by adding a $(2g+b-1)$-dimensional radical $V_0$. Having done that, we assign to the remaining nodes of $Q$ the vectors as in Fig.~\ref{tr-q-vect}: these additional vectors are constructed from the vectors associated to the adjacent nodes and the basis vectors of the radical. More precisely, for each boundary component we have one radical vector ($\delta_k$ in Fig.~\ref{tr-q-vect}) and for each handle we have two radical vectors ($\delta_{ij}^{1,2}$ in Fig.~\ref{tr-q-vect}). Let $u_1,\dots,u_n$ be the vectors obtained, and denote ${\bf u}=\{u_1,\dots,u_n \}$. These vectors define a positive semi-definite fully compatible quasi-Cartan companion of $Q$. \medskip Let $r_i=r_{u_i}$ be the reflection with respect to the vector $u_i$, and denote by $W=W({\bf u},Q)$ the group generated by the reflections $r_i$, acting on $V_+\oplus V_0$. We can also consider the action of $W$ on $V_+\oplus V_0\oplus V_0^*$.
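The following small computation (an illustration of ours, assuming the roots are normalized so that $\langle v,v\rangle=2$, as for roots of type $A$) shows why the radical vectors make the group infinite. For $v\in V_+$ with $\langle v,v\rangle=2$ and $\delta$ in the radical (e.g.\ $v=v_k$ and $\delta=\delta_k$ in Fig.~\ref{tr-q-vect}), one has $r_{v+\delta}(x)=x-\langle v,x\rangle(v+\delta)$ for every $x\in V_+\oplus V_0$, and hence, using $r_v(\delta)=\delta$,

```latex
$$
 r_v r_{v+\delta}(x)
 \;=\; r_v(x) - \langle v, x\rangle\bigl(r_v(v)+\delta\bigr)
 \;=\; x - \langle v, x\rangle\,\delta ,
 \qquad x\in V_+\oplus V_0 .
$$
```

Thus the product of the reflections in $v$ and $v+\delta$ is a translation in the radical direction; its powers act by $x\mapsto x-m\langle v,x\rangle\,\delta$ and are pairwise distinct, so the group generated by these two reflections already has infinite order.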
By construction, $W$ is an extended affine Weyl group of type $A_{n_+}^{[n_0]}$, where the signature of the corresponding quadratic form is given by $(n_+,n_0)=(n-2g-b+1,2g+b-1)$. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/tr-q-vect.eps,width=0.95\linewidth} \put(-41,93){\color{blue} $v_k\!\!+\!\!\delta_k$} \put(-27,147){\color{blue} $v_k$} \put(-410,149){\color{blue} $v_i$} \put(-410,98){\color{blue} $v_j$} \put(-463,110){\color{blue} $v_i\!\!+\!\!v_j\!\!+\!\!\delta_{ij}^1$} \put(-388,146){\color{blue} $v_i\!\!+\!\!v_j\!\!+\!\!\delta_{ij}^2$} \put(-392,133){\color{blue} \large / } \caption{Constructing the quasi-Cartan companion. } \label{tr-q-vect} \end{center} \end{figure} \subsection{Homomorphism of groups} Given an unpunctured surface $S$, we triangulate it as in Section~\ref{triang-sp} and obtain a quiver $Q$. Then we can construct two groups from the same quiver $Q$: the group $G=G(Q)$, generated by involutions $s_i$ with relations as in Section~\ref{group}, and the extended affine Weyl group $W=W({\bf u},Q)$ generated by the reflections $r_i=r_{u_i}$. \begin{theorem} \label{homo} The mapping $s_i\mapsto r_i$ of the generators extends to a surjective homomorphism $\varphi: G\to W$. \end{theorem} \begin{proof} As the generators of $G$ and $W$ are in bijection, we only need to show that the map $\varphi$ takes each defining relation of $G$ to a relation which holds in the reflection group $W$. This is clearly the case for relations of types (R1) and (R2) by the construction of $W$. Relations (R4) follow from~\cite[Theorem 1]{Sa3}. For the remaining relations (R3) and (R5), the verification is straightforward: we write the matrices of the reflections $r_i$ explicitly and check the corresponding relations; these matrices are presented in the appendix. \end{proof} \begin{conjecture} \label{iso} The map $\varphi$ in Theorem~\ref{homo} is an isomorphism, i.e. $G(Q)\cong W({\bf u},Q)$.
\end{conjecture} \begin{remark} \label{aff} Conjecture~\ref{iso} holds for surfaces of genus $0$ with at most two boundary components: in this case both groups are either finite or affine Weyl groups~\cite{BM,FeTu}. \end{remark} \begin{remark} \label{iso-el} Conjecture~\ref{iso} also holds for exceptional mutation-finite quivers $E_6^{(1,1)}$, $E_7^{(1,1)}$ and $E_8^{(1,1)}$ in the following sense. The construction of a group in Section~\ref{group-constr} can also be applied to the mutation classes of the quivers listed above, see~\cite{FeTu}. The presentations of extended affine Weyl groups for elliptic root systems of these types are given in~\cite{Sa3}. Comparing the presentations, we see that the groups are isomorphic. \end{remark} We will now state our main result: the group $W$ is invariant under mutations, i.e. one can apply mutations to the quasi-Cartan companions. \begin{theorem} \label{c-ind} For any mutation sequence $\mu$, the vectors $\mu({\bf u})$ provide a positive semi-definite quasi-Cartan companion for $\mu(Q)$. \end{theorem} Our proof of Theorem~\ref{c-ind} is based on the notion of an {\em admissible} quasi-Cartan companion introduced by Seven~\cite{Se}; we prove the theorem in Section~\ref{adm-sec}. At the same time, Theorem~\ref{c-ind} can also be considered as a corollary of Conjecture~\ref{iso} (if it holds). \begin{prop} \label{ind} Conjecture~\ref{iso} implies Theorem~\ref{c-ind}. \end{prop} \begin{proof} We will proceed inductively to prove the following claim:\\ \noindent {\bf Claim.} {\it Suppose there is a quiver $Q$ and a vector system ${\bf u} =\{u_i \}$ satisfying the following conditions: \begin{itemize} \item[-] ${\bf u}$ is a companion basis for $Q$; \item[-] the reflections $\{r_i\}$ (where $r_i=r_{u_i}$) generate an extended affine Weyl group isomorphic to $G=G(Q)$ via the mapping $s_i\mapsto r_{u_i}$. \item[] Then for any mutation $\mu_k$ the quiver $\mu_k(Q)$ and the vector system $\mu_k({\bf u})$ satisfy the same conditions.
\end{itemize} } It is sufficient to show this inductive statement, as we can start with the quiver $Q$ and the corresponding vectors $\{u_i\}$ constructed in Section~\ref{triang-sp} and hence satisfying the assumption. Let $Q'=\mu_k(Q)$. The group $G(Q')$ coincides with the group $G$ with generators given by $$t_i=\begin{cases} s_ks_is_k & \text{ if $Q$ contains an arrow from $i$ to $k$ },\\ s_i &\text{ otherwise}. \end{cases} $$ Applying $\varphi$ (which is an isomorphism by the assumption of the proposition) to these new generators, we obtain the set of reflections in $W$ given by $$ r_i'=\begin{cases} r_kr_ir_k & \text{ if $Q$ contains an arrow from $i$ to $k$ },\\ r_i &\text{ otherwise}. \end{cases} $$ Notice that each of these new reflections $r_i'$ is a reflection with respect to a new vector $u_i'$ obtained from $u_i$ by a reflection with respect to $u_k$ (see Fig.~\ref{fig-ind}). According to Remark~\ref{geometric realisation}, this is exactly the action of the mutation $\mu_k$ on the vectors $\{u_i\}$. In particular, the order of the element $t_it_j$ in $G$ coincides with the order of the element $r_i'r_j'$ in $W$, and hence the vectors $\{u_i'\}$ provide a quasi-Cartan companion for $Q'$. \end{proof} \begin{figure}[!h] \begin{center} \epsfig{file=./pic/diagr.eps,width=0.3\linewidth} \put(-72,125){$Q$} \put(-155,82){$G(Q)$} \put(-240,82){$(s_1,\dots,s_n) =$} \put(-5,82){$W$} \put(20,82){$=(r_{u_1},\dots,r_{u_n})$} \put(-74,43){$Q'$} \put(-157,0){$G(Q')$} \put(-240,0){$(t_1,\dots,t_n) =$} \put(-5,0){$W'$} \put(20,0){$= (r_{u_1'},\dots,r_{u_n'})$} \put(-40,88){$\varphi$} \put(-40,7){$\varphi$} \put(-80,100){$\mu_k$} \put(-148,55){$\mu_k$} \caption{To Proposition~\ref{ind}. } \label{fig-ind} \end{center} \end{figure} \section{Admissible quasi-Cartan companions} \label{adm-sec} \subsection{Proof of Theorem~\ref{c-ind}} In~\cite{Se}, Seven generalized the notion of a fully compatible quasi-Cartan companion of a quiver $Q$.
\begin{definition}[\cite{Se}] A quasi-Cartan companion $A$ is {\it admissible } if the following holds: for every chordless cycle $i_1,\dots,i_k$ the cyclic product $\prod_{l=1}^{k}(-a_{i_l,i_{l+1}})$ (indices taken modulo $k$) is negative if the cycle is oriented, and positive otherwise. \end{definition} One can easily see that restricting the admissibility condition to $3$-cycles leads to the definition of a fully compatible companion, and thus an admissible companion is always fully compatible. However, the converse may not be true. Note that the quasi-Cartan companion constructed in Section~\ref{triang-sp} is fully compatible and positive semi-definite, and the underlying quiver does not contain any chordless cycles of length more than $3$; thus the companion is admissible. Therefore, to prove Theorem~\ref{c-ind}, it is sufficient to prove the following statement. \begin{prop} \label{adm} Let the quiver $Q_0$ and a vector system ${\bf u} =\{u_i \}$ be those constructed in Section~\ref{triang-sp}, and assume that $\mu$ is a sequence of mutations such that $\mu({\bf u})$ provides an admissible quasi-Cartan companion of $\mu(Q_0)$. Then for any mutation $\mu_k$ the vector system $\mu_k(\mu(\bf u))$ provides an admissible quasi-Cartan companion of $\mu_k\mu(Q_0)$. \end{prop} { \begin{definition} Given a quiver $Q$ and its quasi-Cartan companion $A$, let us call $A$ a {\em symmetric twin} of $Q$ if every mutation sequence $\mu$ takes $A$ to a quasi-Cartan companion of $\mu(Q)$. \end{definition} Then Proposition~\ref{adm} implies the following corollary. \begin{cor} \label{all-un} Every quiver constructed from a triangulation of an unpunctured surface has a symmetric twin. The twin is unique up to simultaneous sign changes of rows and columns. \end{cor} \begin{proof} By Proposition~\ref{adm}, the Gram matrix of the vector system ${\bf u}$ is a symmetric twin of $Q_0$, and thus every quiver mutation-equivalent to it also has a symmetric twin. To prove the uniqueness, notice that a symmetric twin must be admissible.
Indeed, if a quasi-Cartan companion $A$ of $Q$ is not admissible, then there exists a chordless cycle $C$ in $Q$ on which the admissibility condition fails. The cycle $C$ itself is a quiver of type $D_k$ or $\widetilde A_{m,n}$. It is now easy to check that the restriction of $A$ onto $C$ is not a symmetric twin of $C$, which implies that $A$ is not a twin of $Q$. Finally, it is proved in~\cite{Se} that an admissible quasi-Cartan companion to a quiver, if it exists, is unique up to simultaneous change of sign of rows and columns, which completes the proof. \end{proof} } \begin{remark} \label{all-ex} Corollary~\ref{all-un} also holds for all exceptional finite mutation classes except for $X_6$ and $X_7$. Indeed, for quivers of finite and affine types this follows from Remark~\ref{aff} (or directly from~\cite{S2}), and for elliptic quivers $E_6^{(1,1)}$, $E_7^{(1,1)}$, $E_8^{(1,1)}$ this follows from Remark~\ref{iso-el}. \end{remark} Let us now prove Proposition~\ref{adm}. First, we use the properties of admissible companions to prove the proposition for a very restricted set of surfaces. \begin{lemma} \label{ell-inv} Proposition~\ref{adm} holds for quivers $Q_0$ constructed from a surface $S$ that is either a genus $0$ surface with three boundary components, or a genus $1$ surface with one boundary component. \end{lemma} \begin{proof} We will proceed inductively: suppose that $Q=\mu(Q_0)$ for some sequence of mutations $\mu$, and that $A=\mu(A^0)$ is its admissible quasi-Cartan companion, where $A^0$ is the admissible companion of $Q_0$ constructed as a Gram matrix of vectors ${\bf u}$. Choose any vertex $k$. To prove that the mutation $\mu_k(A)$ is admissible, we need to show that the admissibility condition holds for every chordless cycle $Q'_c$ in the quiver $Q'=\mu_k(Q)$. As $Q'$ is a quiver constructed from an unpunctured surface, $Q'_c$ can be either oriented of length $3$, or non-oriented. Observe that $Q'_c$ is of affine or finite type.
In particular, if the vertex $k$ belongs to $Q'_c$, then the quiver $\mu_k(Q'_c)$ is also of affine or finite type. Since $\mu_k(Q'_c)$ is a subquiver of $Q$, the restriction of $A$ to it is also admissible, and thus its mutation restricted to $Q'_c$ is admissible by~\cite[Corollary 1.8]{S2}. Therefore, we can now assume that the vertex $k$ does not belong to $Q'_c$; denote $\widetilde Q'_c= Q'_c\cup\{k\}$. Suppose first that $Q'_c$ contains exactly three vertices, and consider a subquiver $\mu_k(\widetilde Q'_c)$ of $Q$. This subquiver contains four vertices and is mutation-finite. It is easy to see that this implies that either it is of finite or affine type, or it is the quiver shown in Fig.~\ref{dread} (it represents a triangulation of a genus $1$ surface with a unique marked point on its boundary). In the former case the restriction of $\mu_k(A)$ to $\widetilde Q'_c$ is admissible by~\cite{Se}, and in the latter case we just need to check that every mutation of the quiver in Fig.~\ref{dread} leads to an admissible companion, which is a short and straightforward calculation (note that an admissible companion is unique up to simultaneous change of signs of rows and columns~\cite[Theorem 2.11]{Se}, so we need to choose one admissible companion, e.g. the one shown in Fig.~\ref{dread}, and perform four mutations). \begin{figure} \begin{center} \epsfig{file=./pic/dread.eps,width=0.24\linewidth} \put(-97,85){\small $e_1-e_2$} \put(-150,38){\small $e_1-e_3$} \put(-45,38){\small $e_1-e_3$} \put(-97,-10){\small $e_2-e_3$} \end{center} \caption{The unique quiver corresponding to triangulations of a torus with one boundary component, and its admissible quasi-Cartan companion} \label{dread} \end{figure} Now suppose that $Q'_c$ contains more than three vertices, in particular, it is non-oriented. By Prop.~\ref{non-or}, any vertex is connected to $Q'_c$ by an even number of arrows.
Combining this with the fact that the valence of a vertex in a quiver originating from an unpunctured triangulated surface does not exceed $4$~\cite{FST}, we see that $k$ is connected to $Q'_c$ either by $2$ or by $4$ arrows. First, suppose that $k$ is connected to $Q'_c$ by $2$ arrows. This cannot be a double arrow, otherwise $\widetilde Q'_c$ would be mutation-infinite. If $k$ is connected to non-neighboring vertices of $Q'_c$, then $\widetilde Q'_c$ contains $3$ cycles of length at least $4$ (and thus non-oriented). This contradicts~\cite[Proposition~2.1]{Se1}, according to which a mutation-finite quiver with at least two non-oriented cycles must contain an oriented cycle. Thus, $k$ is connected to two neighboring vertices of $Q'_c$ and forms with them an oriented cycle of length $3$. Then $\mu_k(\widetilde Q'_c)$ is an affine subquiver of $Q$, and thus the restriction of $\mu_k(A)$ to $\widetilde Q'_c$ is admissible by~\cite{Se}. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/wheel1.eps,width=0.27\linewidth} \put(-60,60){\small $k$} \put(-60,-10){\small $i$} \end{center} \caption{A quiver $\widetilde Q'_c$ for $k$ incident to a double arrow. ``Non-oriented'' arrows can be oriented in any way} \label{wheel1} \end{figure} Suppose now that $k$ is connected to $Q'_c$ by $4$ arrows. If $k$ is connected to some vertex (say, $i$) by a double arrow, then $k$ and $i$ must form oriented cycles with both neighbors of $i$ in $Q'_c$, otherwise the subquiver formed by four vertices $k$, $i$ and two neighbors of $i$ is mutation-infinite. Therefore, $\widetilde Q'_c$ has the form shown in Fig.~\ref{wheel1}. A short explicit calculation shows that if the restriction of $A$ to $\mu_k(\widetilde Q'_c)$ is admissible, then the restriction of $\mu_k(A)$ to $\widetilde Q'_c$ is also admissible. So, we can now assume that $k$ is connected to $4$ distinct vertices of $Q'_c$. Let us look at the possible structure of the quiver $\widetilde Q'_c$.
By~\cite[Proposition~2.1]{Se1} mentioned above, $k$ must belong to at least one oriented cycle (which is of length $3$ since there are no punctures). Suppose first that $k$ belongs to one oriented cycle only. Then $\widetilde Q'_c$ contains $3$ non-oriented cycles, see Fig.~\ref{1cycle}. Removing either of the two vertices of the oriented triangle distinct from $k$, we obtain a subquiver with at least two non-oriented cycles and no oriented cycles. By~\cite[Proposition~2.1]{Se1}, this subquiver is mutation-infinite. \begin{figure}[!h] \begin{center} \epsfig{file=./pic/1cycle.eps,width=0.30\linewidth} \put(-70,15){\small $k$} \put(-110,35){\small $C_1$} \put(-73,60){\small $C_2$} \put(-40,35){\small $C_3$} \end{center} \caption{A quiver $\widetilde Q'_c$ for $k$ belonging to exactly one oriented cycle. Cycles $C_i$ are non-oriented and can be of any lengths} \label{1cycle} \end{figure} Therefore, we can now assume that $k$ belongs to $2$ oriented cycles. If these cycles have a common arrow, then they form a subquiver on $4$ vertices composed of two oriented triangles sharing an edge, which corresponds to a self-folded triangle and thus can never show up in a quiver of an unpunctured surface (see~\cite{Gu1,Gu2}). Thus, $\widetilde Q'_c$ consists of two oriented cycles and two non-oriented cycles ``between'' them, see Fig.~\ref{2cycles}. Mutating this quiver at $k$, we obtain one of the two quivers shown in Fig.~\ref{2cycles}. Now an easy computation shows that an admissible companion to the latter mutates to an admissible companion to the former, which completes the proof.
\begin{figure}[!h] \begin{center} \epsfig{file=./pic/2cycles.eps,width=0.95\linewidth} \put(-398,80){\small $k$} \put(-443,110){\small $i_1$} \put(-443,30){\small $i_2$} \put(-358,110){\small $j_1$} \put(-358,30){\small $j_2$} \put(-349,75){\small $\mu_k$} \put(-294,80){\small $k$} \put(-339,110){\small $i_1$} \put(-339,30){\small $j_2$} \put(-254,110){\small $j_1$} \put(-254,30){\small $i_2$} \put(-142,80){\small $k$} \put(-187,110){\small $i_1$} \put(-187,30){\small $i_2$} \put(-102,110){\small $j_1$} \put(-102,30){\small $j_2$} \put(-90,77){\small $\mu_k$} \put(-39,80){\small $k$} \put(-84,110){\small $i_1$} \put(-84,30){\small $i_2$} \put(1,110){\small $j_1$} \put(1,30){\small $j_2$} \end{center} \caption{Quivers $\widetilde Q'_c$ for $k$ belonging to two oriented cycles and their mutations. Shaded cycles are non-oriented and can be of any lengths} \label{2cycles} \end{figure} \end{proof} We can now complete the proof of Prop.~\ref{adm} (and thus of Theorem~\ref{c-ind}). \begin{proof}[Proof of Proposition~\ref{adm}] Let $Q$ be a quiver constructed from a triangulation of an unpunctured surface, and let $A$ be its admissible quasi-Cartan companion. We need to prove that the companion $\mu_k(A)$ of the quiver $Q'=\mu_k(Q)$ is also admissible. As in the proof of Lemma~\ref{ell-inv}, we need to prove that the admissibility condition holds for every chordless cycle $Q'_c$ in $Q'$. Since $Q'_c$ is a cycle, it is a quiver of affine type $\widetilde A_{m-1}$ or finite type $A_m$ (type $D_m$ for $m>3$ is excluded as there are no punctures), where $m$ is the number of vertices in $Q'_c$. In the former case it corresponds to a triangulated annulus, and in the latter case to a triangulated polygon. We now consider a subquiver $\widetilde Q'_c= Q'_c\cup\{k\}$ of $Q'$ and determine what the surface corresponding to $\widetilde Q'_c$ can look like. Observe that $Q'_c$ is a subquiver of $\widetilde Q'_c$ obtained by removing one vertex.
In the language of surfaces, the operation of removing one vertex from a quiver is equivalent to cutting the surface along the corresponding edge of a triangulation. Thus, the surface for $\widetilde Q'_c$ can be obtained from the surface for $Q'_c$ by gluing two segments of the boundary (without creating any punctures), or by attaching to the surface for $Q'_c$ a single triangle along one boundary edge. By gluing two segments of boundary of a polygon or by attaching a triangle to it we can obtain either an annulus or a polygon again (a closed sphere is excluded), so $\widetilde Q'_c$ is again of affine or finite type. By gluing two segments of boundary of an annulus we can obtain either a $3$-holed sphere (if the segments belong to the same boundary component) or a torus with one boundary component (if the segments belong to distinct components). Similarly, by attaching a triangle to an annulus along one edge we can obtain an annulus only. Thus $\widetilde Q'_c$ is either of affine type, or of one of the types covered by Lemma~\ref{ell-inv}. We now observe that the restriction $A_c$ of $A$ to the subquiver $\mu_k(\widetilde Q'_c)$ of $Q$ is an admissible quasi-Cartan companion. Since the quiver $\mu_k(\widetilde Q'_c)$ is either of finite type, or of affine type, or of one of the types covered by Lemma~\ref{ell-inv}, we deduce that $\mu_k(A_c)$ is also an admissible companion of $\widetilde Q'_c$ either by~\cite{Se} or by Lemma~\ref{ell-inv}, which completes the proof. \end{proof} \subsection{Constructing an admissible quasi-Cartan companion from a triangulation} \label{qCC-c} Given an arbitrary triangulation $T$ of an unpunctured surface (or, equivalently, its quiver $Q$), Proposition~\ref{adm} provides a way to construct an admissible quasi-Cartan companion from $T$ without making use of any mutations. Let $S$ be a surface of genus $g$ with $b$ boundary components, and let $T$ be any triangulation of it.
As before, we denote by $n$ the number of interior edges of $T$ (it depends on the number of marked points); we may assume $n\ge 3$. Cut along some edges of $T$ to obtain a topological disk. Euler characteristic considerations imply that the number of cuts is equal to $2g+b-1$. As in Section~\ref{semi-def}, index all the triangles of $T$ (there are $n-2g-b+2$ of them), and consider a Euclidean vector space of dimension $n-2g-b+2$ with orthonormal basis $e_i$. Assign to every edge of $T$ on the disk the vector $e_i+e_j$ if the edge belongs to triangles $i$ and $j$. Now, consider any other edge of $T$ and assume it belongs to triangles $i$ and $j$. If we glue the disk along this edge only, we obtain an annulus. Therefore, this edge belongs in $Q$ either to an oriented cycle of length $3$ (if triangles $i$ and $j$ have a common edge in the disk), or to a non-oriented cycle (otherwise). In the former case the admissibility condition implies that the inner product of the corresponding vector with the other two vectors in the cycle must be positive, and thus we can take a vector $e_i+e_j$ without loss of generality. In the latter case the admissibility condition implies that we must take either $e_i+e_j$ (if the length of the cycle is even) or $e_i-e_j$ (if the length of the cycle is odd). Therefore, we have assigned a vector to every edge of a triangulation $T$ and obtained a positive semi-definite quasi-Cartan companion $A$. \begin{cor} \label{alg} $A$ is an admissible quasi-Cartan companion of $Q$. \end{cor} \begin{proof} The moduli of inner products of the constructed vectors match the moduli of the entries of the matrix $B$, and the choice of signs in the construction satisfying the admissibility condition was unique (up to changing the sign of any of the vectors). Therefore, if $A$ is not admissible, then $Q$ has no admissible companions. However, Proposition~\ref{adm} implies that there exists an admissible quasi-Cartan companion of $Q$, which completes the proof.
\end{proof} \begin{remark} \label{a2} Note that although the vectors we constructed have the form $e_i\pm e_j$, all of them belong to a finite root system of type $A_{n-2g-b+1}$ spanned by the vectors corresponding to the interior edges of the triangulation of the disk. \end{remark} \subsection{Admissible companions and reflection groups} Finally, we would like to mention a geometric interpretation of the admissibility condition. \begin{prop} \label{lin-ind} Let $Q$ be a quiver of affine type $\tilde A_{p,q}$, and let $A$ be a quasi-Cartan companion with companion basis ${\bf u}=\{u_1,\dots,u_{p+q}\}$. For any subquiver $Q'$ of $Q$ denote by $W(A,Q')$ the group generated by reflections in vectors of $\bf u$ assigned to vertices of $Q'$. Then $A$ is admissible if and only if for any $Q'\subset Q$ of affine or finite type the group $W(A,Q')$ is isomorphic to an affine (respectively, finite) Weyl group. \end{prop} \begin{proof} Recall that, according to~\cite{Se}, any quiver of affine type has a unique admissible quasi-Cartan companion up to equivalence (corresponding to changing the signs of some of the vectors in the companion basis), and any mutation of an admissible companion of an affine quiver results in an admissible companion again. Thus, if $A$ is admissible, then it can be obtained by mutations from an admissible companion of an affine Dynkin diagram for which the isomorphism between the corresponding reflection group $W(A,Q)$ and $\tilde A_{p+q-1}$ is obvious. We can then use the claim from the proof of Proposition~\ref{ind} to show that $W(A,Q)$ is also isomorphic to $\tilde A_{p+q-1}$. Restricting this reasoning to any affine subquiver, we obtain the ``only if'' statement. Conversely, assume that $A$ is not admissible. This implies that there is a chordless cycle for which the admissibility condition does not hold.
There are three types of chordless cycles in quivers of type $\tilde A$: oriented cycles of length $3$ with weights $(1,1,1)$ or $(1,1,2)$, and non-oriented cycles with all weights equal to one. If the admissibility condition is broken for a non-oriented cycle $Q'$, then $W(A,Q')$ is a finite Weyl group of type $D$; if it is broken for a cycle of type $(1,1,2)$, then the corresponding reflection group is isomorphic to a group generated by reflections in the sides of a hyperbolic triangle with angles $(\pi/3,\pi/3,0)$. As both of these types of cycles are affine subquivers themselves, we see that in both cases we have an affine subquiver $Q'$ with the group $W(A,Q')$ not being an affine Weyl group. Finally, if the admissibility condition is broken for an oriented $3$-cycle with weights $(1,1,1)$, then the corresponding reflection group is isomorphic to an affine Weyl group $\tilde A_2$. As the cycle itself is a quiver of finite type $A_3$, we again come to a contradiction. \end{proof} Combining Propositions~\ref{lin-ind} and~\ref{adm} we obtain the following result. \begin{cor} \label{aff-adm} Let the quiver $Q_0$ and a vector system ${\bf u} =\{u_i \}$ be those constructed in Section~\ref{triang-sp}. Then for any sequence of mutations $\mu$ and any subquiver $Q'\subset \mu(Q_0)$ of affine or finite type the group $W(\mu(A),Q')$, where $A$ is the Gram matrix of ${\bf u}$, is isomorphic to an affine (respectively, finite) Weyl group. \end{cor} \section*{Appendix: Calculations in the proof of Theorem~\ref{homo}} We present below the calculations for the relations of types (R3) and (R5), which are required to prove Theorem~\ref{homo}.
We index the vertices as in Fig.~\ref{rel}(e), and assign the following vectors to the vertices: $$u_1=e_2-e_3+\delta_1,\quad u_2=e_1-e_3,\quad u_3=e_2-e_3+\delta_2,\quad u_4=e_1-e_2,\quad u_5=e_1-e_4.$$ Here we assume that the vector space $V_+$ with basis $\{v_1,\dots,v_{n_+}\}$ is embedded in the vector space $V'_+$ of dimension $n_++1$ with orthonormal basis $\{e_i\}$ (and thus the whole space $V\oplus V_0^*$ is embedded in the space with basis $\{e_1,\dots,e_{n_++1},\delta_1,\dots,\delta_{n_0},\delta_1^*,\dots,\delta_{n_0}^*\}$), and all roots lying in $V_+$ are of the form $e_i-e_j$ (see Section~\ref{eawg} for the notation and further details). The action of the group $W$ on $V=V_+\oplus V_0$ can be naturally extended to an action on $V'=V'_+\oplus V_0$. It is easy to see that reflections $s_1,\dots,s_5$ in the vectors $u_1,\dots,u_5$ may only act non-trivially on the $8$-dimensional subspace of $V'$ spanned by vectors $\{e_1,e_2,e_3,e_4,\delta_1,\delta_2,\delta_1^*,\delta_2^*\}$. Therefore, to verify the relations (R3) and (R5) we may write down the matrices of the reflections in the basis above and calculate the required products. 
The matrices of the reflections in this basis have the following form: $$ s_1 = \begin{pmatrix} 1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& -1& 0\\ 0& 1& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& -1& 1& 0& 1& 0& -1& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix}\qquad\qquad s_2 = \begin{pmatrix} 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix} $$ $$ s_3 = \begin{pmatrix} 1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& -1\\ 0& 1& 0& 0& 0& 0& 0& 1\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& -1& 1& 0& 0& 1& 0& -1\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix} \qquad\qquad s_4 = \begin{pmatrix} 0& 1& 0& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& 0\\ 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix} $$ $$ s_5 = \begin{pmatrix} 0& 0& 0& 1& 0& 0& 0& 0\\ 0& 1& 0& 0& 0& 0& 0& 0\\ 0& 0& 1& 0& 0& 0& 0& 0\\ 1& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 1& 0& 0& 0\\ 0& 0& 0& 0& 0& 1& 0& 0\\ 0& 0& 0& 0& 0& 0& 1& 0\\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix} $$ Now a direct computation shows that both $(s_1\ s_2s_3s_4s_3s_2)^3$ and $(s_1\ s_2s_3s_4s_5s_4s_3s_2)^2$ are identity matrices, which verifies relations (R5). Finally, to verify the relation (R3), we compute $(s_2\ s_4s_5s_4)^2$ which also turns out to be an identity matrix, as required. Note that the relations (R1) and (R2) can also be checked straightforwardly using the matrices provided above, and to verify (R4) we need to assign a different vector to the vertex $v_4$, say, $e_2-e_4$ (cf. Fig.~\ref{rel}(c)).
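The relation checks above lend themselves to a mechanical verification. The following sketch (our own illustration, not part of the original text) rebuilds the five reflections from the vectors $u_1,\dots,u_5$ with respect to the bilinear form in which the $e_i$ are orthonormal, the $\delta_i$ and $\delta_i^*$ are isotropic, and $\langle\delta_i,\delta_j^*\rangle=\delta_{ij}$; it then checks agreement with the printed matrix of $s_1$ and verifies the relations (R3) and (R5).

```python
# Basis order: e1, e2, e3, e4, d1, d2, d1s, d2s  (d = delta, ds = delta^*)
N = 8

# Gram matrix of the bilinear form: e_i orthonormal, d_i and d_i^* isotropic,
# with <d_i, d_j^*> = delta_{ij}.
G = [[0] * N for _ in range(N)]
for i in range(4):
    G[i][i] = 1
G[4][6] = G[6][4] = 1
G[5][7] = G[7][5] = 1

def form(x, y):
    return sum(x[i] * G[i][j] * y[j] for i in range(N) for j in range(N))

def reflection(u):
    """Matrix (rows) of x -> x - <x,u> u, valid since <u,u> = 2 for each u_i."""
    assert form(u, u) == 2
    mat = [[0] * N for _ in range(N)]
    for j in range(N):
        e = [0] * N
        e[j] = 1
        image = [e[i] - form(e, u) * u[i] for i in range(N)]
        for i in range(N):
            mat[i][j] = image[i]
    return mat

def mul(*ms):
    out = ms[0]
    for m in ms[1:]:
        out = [[sum(out[i][k] * m[k][j] for k in range(N)) for j in range(N)]
               for i in range(N)]
    return out

def power(m, p):
    return mul(*([m] * p))

I = [[1 if i == j else 0 for j in range(N)] for i in range(N)]

# Vectors assigned to the vertices in the appendix:
u1 = [0, 1, -1, 0, 1, 0, 0, 0]   # e2 - e3 + d1
u2 = [1, 0, -1, 0, 0, 0, 0, 0]   # e1 - e3
u3 = [0, 1, -1, 0, 0, 1, 0, 0]   # e2 - e3 + d2
u4 = [1, -1, 0, 0, 0, 0, 0, 0]   # e1 - e2
u5 = [1, 0, 0, -1, 0, 0, 0, 0]   # e1 - e4
s1, s2, s3, s4, s5 = map(reflection, (u1, u2, u3, u4, u5))

# s1 agrees with the matrix printed above.
s1_printed = [
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, -1, 0],
    [0, 1, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, -1, 1, 0, 1, 0, -1, 0],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
]
assert s1 == s1_printed

# Relations (R5): (s1 s2s3s4s3s2)^3 = id and (s1 s2s3s4s5s4s3s2)^2 = id.
assert power(mul(s1, s2, s3, s4, s3, s2), 3) == I
assert power(mul(s1, s2, s3, s4, s5, s4, s3, s2), 2) == I

# Relation (R3): (s2 s4s5s4)^2 = id.
assert power(mul(s2, s4, s5, s4), 2) == I
print("relations (R3) and (R5) verified")
```

All arithmetic is exact integer arithmetic, so the check involves no floating-point tolerance.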
\section{Introduction} \label{sec:intro} Cataclysmic Variables (CVs) are a class of close binary system, with a typical orbital period of a few hours, where a white dwarf primary accretes material from its companion via Roche lobe overflow. The secondary is normally a late-type main sequence star although examples with an evolved companion do exist (e.g.\ GK~Per). In the absence of a significant white dwarf magnetic field, material arrives at the primary after processing through an accretion disc. The dwarf nova subgroup shows outbursts where the luminosity of the system increases by around 2--5~mag. Although not strictly periodic, these recur on a typical timescale for each system ranging from tens of days to tens of years. The dwarf novae are further subdivided based on the properties of the outbursts. We will be interested in the SU~UMa type where the systems show occasional superoutbursts which have a brighter maximum ($\sim0.7$~mag.) and longer duration ($\sim5$~times) than normal outbursts. The most favoured explanation for dwarf nova outbursts involves an ionization instability where the accretion disc mass increases until a critical surface density \begin{equation} \Sigma_{\rm max} = 114~\mbox{kg}~\mbox{m}^{-2} \left(\frac{r}{10^{8}~\mbox{m}}\right)^{1.05} M_{1}^{-0.35} \alpha_{C}^{0.86} \end{equation} is reached at some radius $r$. When this occurs, the disc switches into a ``hot'' state with higher viscosity that causes a larger mass transport rate through the disc and increased luminosity. The disc mass now steadily decreases until a second critical surface density \begin{equation} \Sigma_{\rm min} = 82.5~\mbox{kg}~\mbox{m}^{-2} \left(\frac{r}{10^{8}~\mbox{m}}\right)^{1.05} M_{1}^{-0.35} \alpha_{H}^{0.8} \end{equation} is reached at some point. Upon meeting this condition, the disc transfers back to the ``cold'' quiescent state and the cycle repeats.
In these expressions $\alpha_{C}$ and $\alpha_{H}$ are the Shakura-Sunyaev viscosity parameters in the cold and hot states respectively and the primary mass $M_{1}$ is measured in solar masses \citep{cannizzo88}. SU~UMa superoutbursts also have the property of showing superhumps. Here, an additional periodicity ($P_{\rm sh}$) a few percent longer than the orbital period ($P_{\rm orb}$) is apparent in the lightcurve. This is believed to arise from a precessing, eccentric accretion disc driven by a resonance between the orbiting disc material and the secondary. CVs in general and the SU~UMa systems in particular are well-reviewed by \citet{warner95}. Some systems other than dwarf novae also show superhumps which, by analogy, are believed to share a common origin with an eccentric disc now permanently present giving rise to ``permanent superhumps'' \citep{retter00}. Similarly, some Low-Mass X-ray Binaries (LMXBs), analogues of CVs where the primary is now a neutron star, have also been discovered to have superhumps, e.g.\ KV~UMa \citep{zurita02}. There is also a related phenomenon of ``negative'' superhumps which occur on a period a few percent {\it shorter} than $P_{\rm orb}$. This is believed to arise from precession of a disc warp and we will not consider these in this paper. The intent of this paper is to test our understanding of the (positive) superhump phenomenon by comparing observation to the predictions of theoretical expressions for $P_{\rm sh}$. Since $P_{\rm sh}$ is directly observable, if we can relate it to the fundamental parameters of a system, we will have a method to indirectly measure such parameters.
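For orientation, the two critical surface densities above translate directly into code. The sketch below is our own numerical illustration of the formulas exactly as printed; the sample values of $r$, $M_{1}$, $\alpha_{C}$ and $\alpha_{H}$ are illustrative assumptions, not values taken from this paper.

```python
def sigma_max(r, m1, alpha_c):
    """Critical surface density (kg m^-2) triggering the jump to the hot
    state; r in metres, m1 in solar masses, alpha_c the cold-state viscosity."""
    return 114.0 * (r / 1e8) ** 1.05 * m1 ** -0.35 * alpha_c ** 0.86

def sigma_min(r, m1, alpha_h):
    """Critical surface density (kg m^-2) at which the disc returns to the
    cold quiescent state; alpha_h is the hot-state viscosity parameter."""
    return 82.5 * (r / 1e8) ** 1.05 * m1 ** -0.35 * alpha_h ** 0.8

# Illustrative (assumed) parameters: r = 1e8 m, M1 = 0.8 Msun.
print("Sigma_max:", sigma_max(1e8, 0.8, 0.02))
print("Sigma_min:", sigma_min(1e8, 0.8, 0.1))
```

Note the shared $r^{1.05} M_{1}^{-0.35}$ scaling of the two thresholds; only the normalisation and the viscosity exponent differ.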
\section{Theoretical Background} \label{sec:theback} \citet{lubow91,lubow91b,lubow92} derived the final precession rate $\omega$ for an eccentric disc as the sum of three terms: \begin{equation} \omega=\omega_{\rm dyn} +\omega_{\rm press} + \omega_{\rm tran} \end{equation} where $\omega_{\rm dyn}$ is the dynamical precession frequency, $\omega_{\rm press}$ is a pressure related term and $\omega_{\rm tran}$ is a transient term. This latter contribution is related to the time derivative of the mode giving rise to the dynamical precession. Thus, it may be important in the development phase of the superhumps but not in steady state. As a result, we shall not consider it in detail except to note that it can have either sign and thus can either act to increase or decrease the precession rate. The dynamical term is the one arising from the resonance and is examined in section~\ref{sec:dynthry}. The pressure term acts to slow the precession and it is summarised in section~\ref{sec:presthry}. \subsection{Dynamical Precession Theory} \label{sec:dynthry} \citet{hirose90} derived the general expression for the ratio of the dynamical disc precession $\omega_{\rm dyn}$ and orbital $\omega_{\rm orb}$ frequencies in terms of the mass ratio and radius of disc material. Their equation (8) is \begin{equation} \frac{\omega_{\rm dyn}}{\omega_{\rm orb}} = \frac{q}{\left(1+q\right)^{\frac{1}{2}}} \left[\frac{1}{2r^{\frac{1}{2}}} \frac{d}{d\!r} \left(r^{2} \frac{d\!B_{0}}{d\!r}\right)\right] \label{eqn:osaki8} \end{equation} where \begin{equation} B_{0}(r) = \frac{1}{2}b^{0}_{\frac{1}{2}} = F\left(\frac{1}{2},\frac{1}{2},1, r^{2}\right) \end{equation} \citep{brumberg95} is the zeroth order Laplace coefficient given in terms of the hypergeometric function $F$, $q=M_{2}/M_{1}~(<1)$ is the mass-ratio and $r$ is the radius of orbiting material expressed as a fraction of the separation $d$. 
This evaluates to \begin{equation} \frac{\omega_{\rm dyn}}{\omega_{\rm orb}} = \frac{3}{4}\frac{q}{\left(1+q\right)^{\frac{1}{2}}} r^{\frac{3}{2}}\sum_{n=1}^{\infty} a_{n} r^{2(n-1)} \label{eqn:omrat} \end{equation} where the coefficients are given by \begin{equation} a_{n}=\frac{2}{3}(2n)(2n+1)\prod_{m=1}^{n} \left(\frac{2m-1}{2m}\right)^{2} \label{eqn:coeffrel} \end{equation} \citep{pearson03}. \citet{lubow92} used the fixed value of $r=0.477$ enigmatically described as ``corrected for the presence of the companion'' and thus presumably in the limit of $q\rightarrow0$. \citet{FKR}, however, give the radius for $j$:$j-1$ resonances as \begin{equation} r_{j} =\frac{1}{j^{\frac{2}{3}}\left(1+q\right)^{\frac{1}{3}}}. \label{eqn:resrad} \end{equation} This evaluates to $r=0.481$ for the case of $j=3$ and vanishing $q$: very close to the value used by \citet{lubow92}, but retaining accuracy for $q\neq0$. Substituting into (\ref{eqn:omrat}) gives \begin{equation} \frac{\omega_{\rm dyn}}{\omega_{\rm orb}} = \frac{3}{4j}\frac{q}{1+q} \sum_{n=1}^{\infty} \frac{a_{n}}{\left[j^{2}{(1+q)}\right]^{\frac{2(n-1)}{3}}}. \label{eqn:omratfull} \end{equation} The canonical approximation \begin{equation} P_{\rm dyn}\approx\frac{3.85(1+q)}{q}P_{\rm orb} \end{equation} \citep{warner95} is recovered by setting $j=3$ and evaluating the summation with $q=0.16$. The limiting mass ratio $q\approx0.22$ found by \citet{whitehurst88a} arises from the largest value for which $r_{3}$ remains within the last stable stream line \citep{molnar92}. Numerical simulations, however, still produce identifiable superhumps up to a mass ratio of $q\approx0.33$ \citep{whitehurst94,murray00b}. \citet{whitehurst91} used an approximation for the disc tidal radius \begin{equation} R_{\rm T}\approx\beta R_{\rm L,1} \label{eqn:tidalrad} \end{equation} with $\beta\approx0.9$.
When coupled to Eggleton's formula \citep{eggleton83} for the primary's Roche lobe radius \begin{eqnarray} R_{\rm L,1} & = &\frac{0.49q^{-\frac{2}{3}}}{0.6q^{-\frac{2}{3}} +\ln(1+q^{-\frac{1}{3}})}\\ & \equiv & E(q^{-1}) \label{eqn:eggformula} \end{eqnarray} and equated to the 3:2 resonance radius in equation~(\ref{eqn:resrad}), this gives a limiting mass ratio of $q_{\rm max}=0.28$, although this is sensitive to the choice of $\beta$. It should be noted, however, that this differs from the often cited value of $q_{\rm max}=0.33$ quoted in that paper, as equation (5) there contains an incorrect power of $q$. A slightly less {\it ad hoc} expression for the tidal radius comes from fitting to the simulations of \citet{paczynski77} \begin{equation} R_{\rm T}=\frac{0.60}{1+q}~~~~~~~~~~~0.03<q<1. \end{equation} Equating this to equation~(\ref{eqn:resrad}) gives a limiting mass ratio of \begin{equation} q=(0.6)^{\frac{3}{2}}j-1 \label{eqn:qmax} \end{equation} which sets a maximum $q_{\rm max}=0.39$ for a 3:2 resonance. \subsection{Pressure Contribution} \label{sec:presthry} \citet{lubow92} showed that the pressure term can be expressed as \begin{equation} \omega_{\rm press}=-\frac{k^{2} c^{2}}{2\omega_{\rm p}} \label{eqn:ompress1} \end{equation} where $\omega_{\rm p}$ is the angular orbital frequency of a parcel of gas in the disc, $k$ is the radial wavenumber of the mode and $c$ is the gas sound speed. Clearly the pressure term acts in the opposite, retrograde sense to the dynamical term. For a spiral wave, the pitch angle $i$ is related to $k$ by \begin{equation} \tan i =\frac{1}{k r}. \label{eqn:kirel} \end{equation} From the resonance condition \citep[e.g.][eqn.~3.37]{warner95}, we have \begin{equation} (j-1)(\omega_{\rm p}-\omega)=j(\omega_{\rm p}-\omega_{\rm orb}) \end{equation} which becomes, when $\omega\ll\omega_{\rm orb}$, \begin{equation} \omega_{\rm p}=j\omega_{\rm orb}.
\label{eqn:ompapprox} \end{equation} Hence, for the 3:2 resonance $\omega_{\rm p}=3\omega_{\rm orb}$. For the fixed radius $r=0.477$, \citet{montgomery01} corrected earlier errors to derive a contribution \begin{equation} \omega_{\rm press}=-0.7325\omega_{\rm orb} \left(\frac{c}{\omega_{\rm orb} d} \frac{1}{\tan i}\right)^{2}. \end{equation} For our general case using (\ref{eqn:ompapprox}) and where $r$ is given by equation~(\ref{eqn:resrad}), we have \begin{equation} \omega_{\rm press}=-\frac{j^{\frac{1}{3}}}{2}(1+q)^{\frac{2}{3}} \omega_{\rm orb} \left(\frac{c}{\omega_{\rm orb} d} \frac{1}{\tan i} \right)^{2}. \label{eqn:ompress} \end{equation} To proceed further, we need to understand the behaviour of the final dimensionless term in brackets. Since values for $c$ and $i$ may vary according to the peculiar characteristics of any particular system, we will examine this in more detail in section~\ref{sec:presscomp}. \section{Comparison} \label{sec:comp} Observers normally present their measurements of the precession period in terms of the period excess \begin{equation} \epsilon = \frac{P_{\rm sh}-P_{\rm orb}}{P_{\rm orb}} \end{equation} or equivalently (noting $\omega_{\rm sh}=\omega_{\rm orb}-\omega$) \begin{eqnarray} \epsilon & = & \frac{\omega}{\omega_{\rm orb}-\omega} \\ & = & \left[\left(\frac{\omega_{\rm dyn}+\omega_{\rm press}}{\omega_{\rm orb}} \right)^{-1}-1\right]^{-1}\\ &\approx& \frac{\omega_{\rm dyn}+\omega_{\rm press}}{\omega_{\rm orb}} ~~~~~~~~~~~~~~\mbox{if~}\omega_{\rm dyn},\omega_{\rm press}\ll\omega_{\rm orb}.
\end{eqnarray} Since there are relatively few systems with accurately measured values of $q$, it is often convenient to make use of the theoretical relation \begin{equation} M_{2}\approx0.11P_{\rm orb} \end{equation} \citep{FKR} that follows from the assumption that the secondary has a main sequence structure, or an observationally derived equivalent \begin{equation} M_{2}=(0.038\pm0.003)P_{\rm orb}^{1.58\pm0.09} \label{eqn:m2prel} \end{equation} \citep{smith98}. In both cases $M$ is measured in solar masses and $P_{\rm orb}$ in hours. \subsection{Dynamical precession only} \label{sec:dynonly} For different assumed values of $M_{1}$ we can plot theoretically predicted lines on the $\epsilon$--$P_{\rm orb}$ plane. \citet{murray00} used equation~(\ref{eqn:omrat}) with the fixed radius value given by \citet{lubow92} to compare the observed distribution with theory in just this way. He concluded that ``superhumps observations {\it cannot} be adequately explained in terms of purely dynamical precession''. However, including the $q$ dependence of $r$, as in equation~(\ref{eqn:omratfull}), leads us to a different conclusion. Figure~\ref{fig:epsporb} reproduces the comparison of \citet{murray00} using both methods. The lines are plotted for $M_{1}=0.76$, $0.76\pm0.22$ and $M_{1}=1.44$. This shows that the distribution is compatible with the boundary imposed by the condition $M_{\rm wd}<1.44\,M_{\odot}$ when the full $q$ dependence is included. In fact, the most we can conclude is that the distribution of superhumping systems suggests that they have a primary mass higher than the general CV population. This would not be entirely surprising, since being a superhumping system requires a small mass ratio, which will thus tend to select higher $M_{1}$ systems. Also apparent is the deviation of the secondary from a main sequence structure at the short period turnoff.
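The ingredients of such comparisons (the Eggleton lobe radius, the limiting mass ratios, $q$ from the $M_{2}$--$P_{\rm orb}$ relation and the conversion from precession rate to period excess) can be collected in a short numerical sketch (Python; ours, with illustrative function names):

```python
import math

def eggleton(x):
    """Eggleton (1983) Roche-lobe radius E(x) (eqn:eggformula)."""
    c = x ** (2.0 / 3.0)
    return 0.49 * c / (0.6 * c + math.log(1.0 + x ** (1.0 / 3.0)))

def q_max_paczynski(j):
    """Limiting q from the Paczynski tidal radius 0.60/(1+q) (eqn:qmax)."""
    return 0.6 ** 1.5 * j - 1.0

def q_max_eggleton(j, beta=0.9):
    """Limiting q where beta*E(1/q) meets the resonance radius (bisection)."""
    f = lambda q: (beta * eggleton(1.0 / q)
                   - 1.0 / (j ** (2.0 / 3.0) * (1.0 + q) ** (1.0 / 3.0)))
    lo, hi = 0.01, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:    # tidal radius still exceeds the resonance radius
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def q_from_porb(porb_hr, m1):
    """q from M2 = 0.038 Porb^1.58 (eqn:m2prel), with M1 in solar masses."""
    return 0.038 * porb_hr ** 1.58 / m1

def period_excess(omega_ratio):
    """epsilon from omega/omega_orb, using eps = omega/(omega_orb - omega)."""
    return omega_ratio / (1.0 - omega_ratio)

print(round(q_max_eggleton(3), 2), round(q_max_paczynski(3), 2))   # 0.28 0.39
```

The two printed limits reproduce the $q_{\rm max}$ values quoted above for the Eggleton-based and Paczynski tidal radii respectively.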
\begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5]{murraycomp.ps} \caption{Comparison of the observed superhump data from \protect\citet{patterson98} with the model from \citet{murray00}, where the resonant radius is independent of $q$ (solid), and with a model including the $q$ dependence (dashed). Both families of curves show the theoretical lines derived using the mean $(0.76)$ and $\pm1\sigma$ $(\pm0.22)$ values for $M_{1}$ of all CVs, and the $M_{2}$--$P_{\rm orb}$ relation (equation \protect\ref{eqn:m2prel}), from \protect\citet{smith98}, as well as a further line with $M_{1}=1.44$. $\epsilon$ decreases with increasing $M_{1}$. The turn to lower $\epsilon$ at $P_{\rm orb}<1.4$-h may arise from the secondary becoming degenerate and deviating from the main sequence structure assumed for the theoretical curves.} \protect\label{fig:epsporb} \end{center} \end{figure} More convincing evidence for the need for a pressure related effect on the precession rate is provided by the data for $\epsilon$ and $q$ plotted in Figure~\ref{fig:epsofq}. The 11 systems on this plot are those used in \citet{patterson05}. These have directly determined values of $q$. The exception is OY~Car, which appears in Fig.~9 of that paper but not in the corresponding Table~7; we take the values for this system from \citet{patterson01}. We can see that the data are certainly not compatible with the assumption of a 3:2 resonance. If anything, they cluster around the prediction for a 4:3 resonance, although given the way the resonances `pile up' one can argue that it is inevitable that some resonance would fall near the data. It is intriguing to note, however, just how close the two most accurately measured values of $q$, for the systems XZ~Eri and DV~UMa \citep{feline04b}, lie to the 4:3 theoretical line (see Figure~\ref{fig:epsofqzoom}).
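The separation of the resonance lines can be illustrated numerically: for a representative $q=0.12$ the purely dynamical period excess of equation (\ref{eqn:omratfull}) at the 3:2 resonance is roughly $1.6$ times that at 4:3, the latter lying much nearer the observed values (a Python sketch, ours):

```python
def a_coeff(n):
    """Series coefficient a_n of equation (eqn:coeffrel)."""
    prod = 1.0
    for m in range(1, n + 1):
        prod *= ((2.0 * m - 1.0) / (2.0 * m)) ** 2
    return (2.0 / 3.0) * (2 * n) * (2 * n + 1) * prod

def eps_dyn(j, q, nterms=40):
    """Dynamical period excess at the j:(j-1) resonance (eqn:omratfull),
    converted to epsilon via eps = x/(1-x) with x = omega/omega_orb."""
    s = sum(a_coeff(n) / (j * j * (1.0 + q)) ** (2.0 * (n - 1) / 3.0)
            for n in range(1, nterms + 1))
    x = 0.75 * q / (1.0 + q) * s / j
    return x / (1.0 - x)

q = 0.12   # representative of the eclipsing calibration systems
print(round(eps_dyn(3, q), 3), round(eps_dyn(4, q), 3))   # ~0.044 vs ~0.028
```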
It would thus be extremely interesting to see modelling using the method employed by these authors applied to the other eclipsing systems to determine similarly accurate values for $q$. We summarise the $\chi^{2}$ value for the data against each resonance in Table~\ref{tab:reschi2} and note the remarkable reduction in $\chi^{2}$ for the 4:3 model. If a purely dynamical 4:3 resonance were the correct model, the probability that the value of $\chi^{2}$ would exceed the measured value is 0.23. \begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {qpattcomp.ps} \caption{Comparison of the observed superhump data from \protect\citet{patterson05} (circles), for systems with accurately measured values of $q$, and the dynamical precession rates of discs with radii calculated under different assumptions. The resonance period excesses are calculated using equation~(\ref{eqn:omratfull}) and the discs with radii equal to the tidal and (unphysically) the Roche lobe radius are calculated using equation~(\ref{eqn:omrat}). The stars mark the additional `challenging' systems U~Gem and TV~Col. } \protect\label{fig:epsofq} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {qpattcompzoom2.ps} \caption{A closer view of the data used in the comparison of the observed superhump data from \protect\citet{patterson05}, for systems with accurately measured values of $q$, and the dynamical precession rates of discs at resonant radii (solid line). Also plotted are the model lines including an approximation to pressure effect (Model A; dashed line). 
} \protect\label{fig:epsofqzoom} \end{center} \end{figure} \begin{table} \begin{center} \begin{tabular}{cc} \hline Resonance & $\frac{\chi^{2}}{N_{\rm obs}}$ \\ \hline 3:2 & 49.5 \\ 4:3 & 1.28 \\ 5:4 & 18.9 \\ \hline \end{tabular} \caption{Summary of the $\chi^{2}$ statistic comparing purely dynamical resonant precession to the observed period excess.} \end{center} \protect\label{tab:reschi2} \end{table} We have followed the precedent of \citet{patterson05} in the systems considered above, but we mention here briefly two further systems: U~Gem and TV~Col. On the grounds that U~Gem does not show superhumps, \citet{patterson05} used its observed mass ratio to place an upper limit on the system BB~Dor, which does have a measured period excess. However, \citet{smak04} reported the detection of superhumps in the 1984 outburst of U~Gem. As a result, we dropped BB~Dor from the analysis. Similarly, \citet{retter03} report a superhump detection near the expected 6.3-h period in TV~Col from an exhaustive search of archival data and from fresh 2001 observations. Two other campaigns, however, failed to confirm this result \citep{patterson05}. Neither U~Gem nor TV~Col fits comfortably within the context of the precessing disc theory set out above. However, both have noticeably large errors on the value of their period excess. In the case of U~Gem this reflects a systematic trend, as the putative superhump period drifted to longer values over time. The outburst was also notable for its unusual length, completely out of character for the system. To reach the observed period excess would require the disc to extend beyond the tidally truncated radius, which is not impossible for a disc of finite viscosity \citep{lyndenbell74,papaloizou77}, but is uncomfortably close to the value we would get if the disc had a radius equal to that of the primary's entire Roche lobe!
The most plausible explanation for such a large period excess and the unusual character is that the transient term $\omega_{\rm tran}$ has become important and that the change in the observed period excess reflects a change in the rate of growth of the disc eccentricity. The uncertainties in the values for TV~Col make it a difficult system to assess. The expected disc radius is again much larger than the tidal truncation radius and in addition to the large range of allowed $q$ there is the possibility of unknown systematic errors due to the multiple components of the emission lines \citep{hellier93}. We thus neglect this system also but echo the call of \citet{retter03} for a search for permanent superhumps in similarly long period systems. Despite the encouraging agreement with a 4:3 resonance there are several theoretical hurdles to overcome before we could accept such an explanation. The theory outlined in Section~\ref{sec:theback} is only a summary of the extensive literature in this area (eg \citet{goldreich78,goldreich79,borderies83}). In outline, the analytical method employed is to decompose the effective potential into its harmonic components, to introduce this into the fluid equations in a standard linear perturbation analysis and to look at the response of the disc material. The behaviour of the disc in superhumping systems has a close analogue in galactic dynamics. The ``dynamical resonance'' above corresponds to the inner Lindblad resonance of the system and there is a similar result to that given, for example by \citet{binneybook}, that leads to spiral waves being generated in the disc. It is these spiral waves that couple with the tides to excite disc eccentricity. There is also a corotational resonance that acts to suppress eccentricity. 
Cursory consideration shows the fundamental contradiction of treating the dissipation arising from an eccentric disc (an inherently collective phenomenon) by the precession of single particle orbits at a resonant radius as carried out above. As a first attempt to correct for the untreated collective effects, we might assume that the precession can be characterised by an effective radius ($r_{\rm eff}$) interior to $r_{3}$ that would produce dynamical precession with the observed period. The values for $r_{\rm eff}$ derived from the observed superhump period excesses of the \citet{patterson05} systems with well measured $q$ are given in Table~\ref{tab:reff}. Assuming that the ratio $r_{\rm eff}/r_{3}$ is a constant for all systems, we derive a best value for it of 0.827. This places $r_{\rm eff}$ extremely close to $r_{4}$ with a barely different value for the total $\chi^{2}$. In fact, the radius calculated from a $j=4$ resonance differs from this best possible radius by a surprisingly small 0.2\%! \begin{table} \begin{center} \begin{tabular}{lc} \hline System & $\frac{r_{\rm eff}}{r_{3}}$ \\\hline WZ Sge & 0.718 \\ OY Car & 0.770 \\ XZ Eri & 0.844 \\ IY UMa & 0.787 \\ Z Cha & 0.864 \\ HT Cas & 0.816 \\ DV UMa & 0.831 \\ OU Vir & 0.762 \\ V2051 Oph & 0.707\\ DW UMa & 0.873 \\ UU Aqr & 0.886 \\ \hline \end{tabular} \caption{Values for $r_{\rm eff}$ derived from the observed period excesses for the \protect\citet{patterson05} calibration systems.} \protect\label{tab:reff} \end{center} \end{table} The above result notwithstanding, a treatment that deals explicitly with the coupling of the tides, spiral arms and disc eccentricity ought to be preferred. This is exactly the approach used by \citet{lubow91} to derive the additional term related to pressure effects, to which we turn next.
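The near-coincidence is not accidental: from equation (\ref{eqn:resrad}) the resonance radii scale as $j^{-2/3}$ at fixed $q$, so $r_{4}/r_{3}=(3/4)^{2/3}$ independently of $q$. A one-line check (Python, ours):

```python
# r_j scales as j^(-2/3) at fixed q (eqn:resrad), so the ratio is q-independent.
ratio_43 = (3.0 / 4.0) ** (2.0 / 3.0)
best_fit = 0.827          # best-fitting r_eff/r_3 quoted in the text
# prints the ratio and its percentage difference from the fitted value
print(round(ratio_43, 4), round(abs(best_fit / ratio_43 - 1.0) * 100, 1))  # 0.8255 0.2
```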
One of the important results from that paper was the recognition that the 3:2 resonance ``is unique in that it is the innermost resonance for which an eccentric Lindblad resonance appears without an overlapping eccentric corotational resonance\ldots This property allows that resonance to easily excite eccentricity.'' Such a conclusion is a strong argument against any dynamical resonance other than 3:2 being an important factor in the disc behaviour. The ability of a secondary magnetic field to give rise to a precession rate characteristic of a higher resonance \citep{pearson97,pearson03} may reflect the fact that the perturbation in that case is no longer small and capable of representation by a linear analysis. \subsection{Inclusion of Pressure Effects} \label{sec:presscomp} Proceeding under the assumption that the excited resonance is 3:2 and that the difference between the measured and expected $\epsilon$ is due to the pressure effect, we can update the analysis of \citet{murray00} for all the systems with measured $q$. These are summarised in Table~\ref{tab:press}. For the final column we have used values for $M_{1}$ from \citet{patterson05}, except for V2051 Oph which we take from \citet{rkcat}.
\begin{table*} \begin{center} \begin{tabular}{lcccccc} \hline System & $\omega$ & $\omega_{\rm dyn}$ & $\omega_{\rm press}$ & $\sqrt{2\eta_{\rm A}}$ & $\frac{c}{\omega_{\rm orb} d}$ & $c$ \\ & (d$^{-1}$) & (d$^{-1}$) & (d$^{-1}$) & & & ($10^{4}$ m s$^{-1}$) \\\hline WZ Sge & 1.01 & 2.13 & -1.12 & 0.12 & 0.036 & 2.07 \\ OY Car & 1.98 & 3.59 & -1.61 & 0.14 & 0.044 & 2.16 \\ XZ Eri & 2.70 & 4.02 & -1.32 & 0.13 & 0.039 & 2.02 \\ IY UMa & 2.15 & 3.71 & -1.56 & 0.15 & 0.047 & 2.29 \\ Z Cha & 2.96 & 4.17 & -1.21 & 0.13 & 0.041 & 1.80 \\ HT Cas & 2.73 & 4.34 & -1.62 & 0.15 & 0.047 & 2.14 \\ DV UMa & 2.43 & 3.73 & -1.30 & 0.15 & 0.046 & 2.38 \\ OU Vir & 2.73 & 4.99 & -2.26 & 0.18 & 0.055 & 2.87 \\ V2051 Oph & 2.93 & 6.20 & -3.27 & 0.20 & 0.061 & 3.21 \\ DW UMa & 2.78 & 3.79 & -1.01 & 0.16 & 0.049 & 1.99 \\ UU Aqr & 2.52 & 3.33 & -0.81 & 0.16 & 0.048 & 1.78 \\ \hline \end{tabular} \caption{Derived pressure force contribution to the precession of the \protect\citet{patterson05} calibration systems. The last two columns assume $i=17^{\circ}$.} \protect\label{tab:press} \end{center} \end{table*} To make progress with our comparison we need values for $c$ and $i$ in equation~(\ref{eqn:ompress}). We will consider two cases: A) where $\frac{c \cot i}{\omega_{\rm orb} d}$ is fixed for all systems and B) where $c$ is evaluated using an analytic expression taken from detailed disc models. \subsubsection{Model A} If we assume that the final bracketed term of equation~(\ref{eqn:ompress}) is constant we can rewrite it as \begin{equation} \frac{\omega_{\rm press}}{\omega_{\rm orb}}=-j^{\frac{1}{3}} \eta_{\rm A} \left(1+q\right)^{\frac{2}{3}} \end{equation} where \begin{equation} \eta_{\rm A}=\frac{1}{2}\left(\frac{c \cot i}{\omega_{\rm orb} d} \right)^{2}. \end{equation} \citet{lubow92} gave a range of 0.01--0.05 for the ratio $\frac{c}{\omega_{\rm orb} d}$. \citet{murray00} used a value of 0.05 in his considerations to derive values for $i$.
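Anticipating the representative values adopted below ($c/(\omega_{\rm orb}d)=0.05$ and a pitch angle $i=17^{\circ}$), the Model A prefactor and pressure term can be evaluated directly (Python sketch, ours):

```python
import math

def eta_A(c_over_wd, incl_deg):
    """eta_A = (1/2) * (c * cot(i) / (omega_orb * d))**2."""
    return 0.5 * (c_over_wd / math.tan(math.radians(incl_deg))) ** 2

def omega_press_ratio(j, q, eta):
    """Model A pressure term: omega_press/omega_orb = -j^(1/3) eta (1+q)^(2/3)."""
    return -(j ** (1.0 / 3.0)) * eta * (1.0 + q) ** (2.0 / 3.0)

# c/(omega_orb d) = 0.05 (upper end of the Lubow 1992 range), i = 17 degrees
print(round(eta_A(0.05, 17.0), 4))   # 0.0134
```

The printed value reproduces the $\eta_{\rm A}=0.0134$ adopted in the comparison that follows, and the pressure term is retrograde (negative) as required.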
Using this value and the mean $i=17^\circ$ found from simulations by \citet{montgomery01}, we have $\eta_{\rm A}=0.0134$. Compared to the data for the eclipsing systems, this gives $\frac{\chi^{2}}{N_{\rm obs}}=3.3$. Allowing $\eta_{\rm A}$ to be a free parameter, however, we derive an optimal value of $\eta_{\rm A}=0.0107$ with a corresponding $\frac{\chi^{2}}{N_{\rm obs}-1}=1.42$. Figure \ref{fig:epsofqzoom} shows the comparison for this value in graphical form. The corresponding plot to Figure~\ref{fig:epsporb}, comparing a purely dynamical 3:2 resonance and a model including pressure, is shown in Figure~\ref{fig:epspcomp}. It should be noted here that the values assumed for $M_{1}$ in producing the lines come from the weighted mean derived by \citet{smith98} for {\it dwarf nova} systems ($M_{1}=0.69\pm0.01$) rather than that for all systems used by \citet{murray00}. We have also updated the data to the latest compilation from \citet{patterson05}. Coupling the mean primary mass with $q_{\rm max}=0.39$, we can derive an equivalent $P_{\rm orb,max}=3.5$-h. For the ultimate restriction $M_{1}<1.44M_{\odot}$ we have $P_{\rm orb,max}=5.5$-h (coincidentally the same as that of TV~Col). \begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {epspcomp.ps} \caption{ Comparison of the observed superhump data from \protect\citet{patterson05} with a model with purely dynamical precession (solid) and including the pressure effect (dashed). Both families of curves show the theoretical lines derived using the mean ($0.69$) and $\pm1\sigma$ ($\pm0.01$) values for $M_{1}$ found for {\it dwarf novae}, and the $M_{2}$--$P_{\rm orb}$ relation (equation \protect\ref{eqn:m2prel}) from \protect\citet{smith98}, as well as a further line with $M_{1}=1.44$. $\epsilon$ decreases with increasing $M_{1}$. The horizontal lines mark the limiting precession rate derived from $q_{\rm max}=0.39$ in either case.
} \protect\label{fig:epspcomp} \end{center} \end{figure} We might like to compare the implied distribution of $M_{1}$ from our model with that of \citet{smith98}. However, this is precluded by the selection effect alluded to in section~\ref{sec:dynonly}. Systems with longer periods (and thus higher $M_{2}$) can accommodate more massive primaries and still fit within the $q_{\rm max}$ limitation. Hence, the distribution of $M_{1}$ would be expected to differ between all dwarf novae and the superhumping subset. Figure~\ref{fig:epspcomp} also shows systems lying above the limiting $\epsilon$ allowed by our formulation of the pressure term. This, along with the range of derived $\eta_{\rm A}$ shown in Table~\ref{tab:press} and the poorer fit to the data than a simple resonance model, suggests that a one-size-fits-all value for $\eta$ is not appropriate. \subsubsection{Model B} Since the effect of the pressure term relies on the sound speed in the disc, we turn to detailed models of hot discs to evaluate this in terms of fundamental parameters. From equation~A1 of \citet{cannizzo92a} we have the mid-plane temperature in terms of the mass transport rate through the disc \begin{equation} T_{\rm mid}=\left(\frac{64}{9}\frac{\sigma}{\kappa_{0}}\right)^{-\frac{1}{10}} \left(\frac{\mu_{\rm H}}{R}\right)^{\frac{1}{4}} \omega_{\rm p}^{\frac{1}{2}} \alpha_{\rm H}^{-\frac{1}{5}} \left(\frac{\dot{M}}{2\pi}\right)^{\frac{3}{10}} \label{eqn:tmid} \end{equation} where $\sigma$ is the Stefan-Boltzmann constant, $R$ is the gas constant, $\alpha_{\rm H}$ and $\mu_{\rm H}$ are the Shakura-Sunyaev viscosity parameter and mean molecular weight in the hot state respectively, and assuming an opacity $\kappa=\kappa_{0} \rho T^{-3.5}$ where $\kappa_{0}=2.8\times10^{23}~\mbox{m}^{2}~\mbox{kg}^{-1}$ is appropriate \citep{cannizzo88}.
We can find a suitable value of $\dot{M}$ for an outbursting dwarf nova using the approach of \citet{cannizzo93} and \citet{cannizzo88} by assuming that the disc fills to a mass $fM_{\rm max}$. $M_{\rm max}$ is the maximum mass the disc can hold in the cold state without exceeding $\Sigma_{\rm max}$ at some point ie. \begin{eqnarray} fM_{\rm max} & = & f \int_0^{r_{\rm d}} 2 \pi r \Sigma_{\rm max}(r)\,{\rm d}r \\ & = & f \frac{2\pi r_{\rm d}^{2}}{3.05} \Sigma_{\rm max}(r_{\rm d}). \label{eqn:fmmax} \end{eqnarray} Now, equation~A3 of \citet{cannizzo92a} gives the surface density in the hot state as \begin{eqnarray} \Sigma & = & \left(\frac{64}{9}\frac{\sigma}{\kappa_{0}}\right)^{\frac{1}{10}} \left(\frac{\mu_{\rm H}}{R}\right)^{\frac{3}{4}} \omega_{\rm p}^{\frac{1}{2}} \alpha_{\rm H}^{-\frac{4}{5}} \left(\frac{\dot{M}}{2\pi}\right)^{\frac{7}{10}} \\ \nonumber & = & 405~\mbox{kg}~\mbox{m}^{-2} \mu_{\rm H}^{\frac{3}{4}} \alpha_{\rm H}^{-\frac{4}{5}} M_{1}^{\frac{1}{4}} \left(\frac{r}{10^{8}~\mbox{m}}\right)^{-\frac{3}{4}} \\ & & \times \left(\frac{\dot{M}}{10^{-10}~\mbox{M}_{\odot}~{\mbox{y}^{-1}}} \right)^{\frac{7}{10}} \end{eqnarray} where we have used $\omega_{\rm p}=\left(GM_{1}M_{\odot}r^{-3}\right)^{\frac{1}{2}}$.
Integrating this and equating it to the expression for $fM_{\rm max}$ from (\ref{eqn:fmmax}), we can rearrange for the mass transport rate through the disc \begin{eqnarray} \dot{M} & = & 9.67\times10^{-8}~\mbox{kg}~\mbox{s}^{-1}~ \left(\frac{\alpha_{\rm H}}{0.1}\right)^{1.14} \left(\frac{\alpha_{\rm C}}{0.02}\right)^{-1.23} \mu_{\rm H}^{-1.07} \nonumber \\ & & \times r_{\rm d}^{2.57} M_{1}^{-0.86} \left(\frac{f}{0.4}\right)^{1.43} \end{eqnarray} which, with equation~(\ref{eqn:tmid}), gives \begin{eqnarray} c^{2} & = & \frac{\gamma k T}{\mu_{H} m_{\rm H}} \\ & = & 5.66\times10^{4}~\mbox{m}^{1.229}~\mbox{s}^{-\frac{3}{2}} \left(\frac{\alpha_{\rm H}}{0.1}\right)^{0.142} \left(\frac{\alpha_{\rm C}}{0.02}\right)^{-0.369} \nonumber \\ && \times \mu_{\rm H}^{-1.071} r_{\rm d}^{0.771} M_{1}^{-0.258} \left(\frac{f}{0.4}\right)^{0.429} \omega_{p}^{\frac{1}{2}}. \end{eqnarray} Finally, we can combine this with equations~(\ref{eqn:resrad}), (\ref{eqn:ompress1}), (\ref{eqn:kirel}) and (\ref{eqn:ompapprox}) to get \begin{eqnarray} \frac{\omega_{\rm press}}{\omega_{\rm orb}} & = & -2.83\times10^{4} j^{\frac{5}{6}} (1+q)^{\frac{2}{3}} \cot^{2} i \left(\frac{\alpha_{\rm H}}{0.1}\right)^{0.142} \nonumber \\ & & \times \left(\frac{\alpha_{\rm C}}{0.02}\right)^{-0.369} \mu_{\rm H}^{-1.071} r_{\rm d}^{0.771} M_{1}^{-0.258} \nonumber \\ & & \times \left(\frac{f}{0.4}\right)^{0.429} \omega_{\rm orb}^{-\frac{3}{2}}. \label{eqn:finala} \end{eqnarray} Since we want an expression for $\omega_{\rm press}$ in terms of $q$ only, we look to eliminate $\omega_{\rm orb}$ by using a form of the mass-radius relation recommended by \citet{smith98} \begin{equation} \frac{R_{2}}{R_{\odot}}= (0.91\pm0.09)M_{2}^{0.75\pm0.04}. \label{eqn:massradrel} \end{equation} When this is equated to the size of the secondary's Roche lobe from (\ref{eqn:eggformula}), we can rearrange for the separation \begin{equation} d=\frac{0.91M_{2}^{0.75} R_{\odot}}{E(q)}. 
\end{equation} Using this with Kepler's Law \begin{equation} \omega_{\rm orb}=\left[GM_{1}(1+q)M_{\odot}\right]^\frac{1}{2} d^{-\frac{3}{2}} \end{equation} and a disc radius $r_{\rm d}=\beta R_{\rm L,1}$, we arrive at the final expression \begin{eqnarray} \frac{\omega_{\rm press}}{\omega_{\rm orb}} & = & -j^{\frac{5}{6}} \eta_{\rm B} \frac{\left[E(q^{-1})\right]^{0.771}} {\left[E(q)\right]^{1.021}} \frac{q^{0.766}}{(1+q)^{\frac{1}{12}}} \label{eqn:finompress} \end{eqnarray} where \begin{eqnarray} \eta_{\rm B} & = & 0.0209 \cot^{2} i \left(\frac{\alpha_{\rm H}}{0.1}\right)^{0.142} \left(\frac{\alpha_{\rm C}}{0.02}\right)^{-0.369} \nonumber \\ & & \times \mu_{\rm H}^{-1.071} \left(\frac{f}{0.4}\right)^{0.429} \left(\frac{\beta}{0.9}\right)^{0.771} M_{1}^{-0.242}. \end{eqnarray} Hence, we can see that although the pressure term is not purely expressible as a function of $q$ alone, the various parameters are either expected to remain reasonably constant between systems (eg. $\mu_{\rm H}$) or enter in with a weak dependency such as $M_{1}^{-0.242}$. As mentioned above, the observed distribution for {\it all} CVs is $M_{1}=0.76\pm0.22$, which would produce a variation of only $\sim7\%$ in the predicted pressure contribution. The dwarf nova subsample had an even smaller range for $M_{1}$. The final expression for the precession rate was fitted to the calibration systems of \citet{patterson05} using a single free parameter as the constant of proportionality to the functional form of $q$ in (\ref{eqn:finompress}). The best fitting value of $\eta_{\rm B}=0.0109$ gives $\frac{\chi^{2}}{N_{\rm obs}-1}=1.04$. With the typical values used above this implies $i=61^{\circ}$. A comparison with the data is plotted in Figure~\ref{fig:finevq}. For the lowest $q$ systems the effect of the pressure term actually makes $\epsilon$ negative, ie. forces the precession to become retrograde.
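Equations (\ref{eqn:omratfull}) and (\ref{eqn:finompress}) combine into a one-parameter model that is easy to invert numerically. The sketch below (Python, ours) adopts the fitted $\eta_{\rm B}=0.0109$; a simple bisection then recovers, for example, $q\simeq0.140$ from the $\epsilon=0.0320$ of V1159~Ori, and confirms the retrograde (negative $\epsilon$) behaviour at very small $q$:

```python
import math

def eggleton(x):
    """Eggleton (1983) Roche-lobe radius E(x) (eqn:eggformula)."""
    c = x ** (2.0 / 3.0)
    return 0.49 * c / (0.6 * c + math.log(1.0 + x ** (1.0 / 3.0)))

def a_coeff(n):
    """Series coefficient a_n (eqn:coeffrel)."""
    prod = 1.0
    for m in range(1, n + 1):
        prod *= ((2.0 * m - 1.0) / (2.0 * m)) ** 2
    return (2.0 / 3.0) * (2 * n) * (2 * n + 1) * prod

def total_ratio(q, j=3, eta_b=0.0109, nterms=40):
    """(omega_dyn + omega_press)/omega_orb from eqns (omratfull) + (finompress)."""
    s = sum(a_coeff(n) / (j * j * (1.0 + q)) ** (2.0 * (n - 1) / 3.0)
            for n in range(1, nterms + 1))
    dyn = 0.75 * q / (1.0 + q) * s / j
    press = (-(j ** (5.0 / 6.0)) * eta_b * eggleton(1.0 / q) ** 0.771
             / eggleton(q) ** 1.021 * q ** 0.766 / (1.0 + q) ** (1.0 / 12.0))
    return dyn + press

def epsilon(q):
    x = total_ratio(q)
    return x / (1.0 - x)

def q_from_epsilon(eps_obs):
    """Invert epsilon(q) by bisection over a range where it increases with q."""
    lo, hi = 0.03, 0.45
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if epsilon(mid) < eps_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(q_from_epsilon(0.0320), 3))   # ~0.140 (cf. V1159 Ori in the table)
print(epsilon(0.02) < 0.0)                # True: retrograde at very small q
```

This reproduces the machinery used to construct the derived-parameter table; the systematic uncertainties discussed in the text are of course not captured by the single fitted constant.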
\begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {finevq3.ps} \caption{Comparison of the calibration systems from \protect\citet{patterson05} with a model with purely dynamical precession (solid) and including the detailed pressure term in equation~(\ref{eqn:finompress}) (dashed).} \protect\label{fig:finevq} \end{center} \end{figure} Inverting the process for systems of known $\epsilon$ but unknown $q$ allows us to derive values for the 88 systems in Table~9 of \citet{patterson05}. These are listed in Table~\ref{tab:qded}. The errors quoted in the table reflect the observational errors propagated as appropriate, but not the systematic errors that may arise from variations of the parameters in the \citet{smith98} relations. It is apparent that some systems have predicted values of $q$ in excess of the expected limit of $q_{\rm max}=0.28$ for $\beta=0.9$. The largest derived value of $q=0.437$ can be accommodated if $\beta=0.94$. If the resonance were to transition to 4:3 at some $q_{\rm max}$, the derived values of $q$ would be even higher. However, axisymmetric structure models were used in the above derivation. As $r_{j}$ approaches $R_{\rm T}$ this will become an increasingly invalid assumption. It might be reasonably expected then that these high $q$ systems have true mass ratios close to $q_{\rm max}$ although, as we have seen, this limit is not well determined. It is also worth noting just how close the derived $M_{1}$ in EG~Cnc is to the Chandrasekhar limit. Although the mass-period relation we have used (equation~\ref{eqn:massradrel}) was derived from observations, there are strong theoretical grounds for expecting a significantly different relation for the lowest mass secondaries once they become degenerate or semi-degenerate. EG~Cnc is towards the low end of the range of $M_{2}$ where this effect may be important and we may be extrapolating the relation beyond its range of validity.
A non-main-sequence star would be expected to have a larger radius for the same mass as a main-sequence equivalent. This would lead to a larger pressure effect for a given value of $q$. As a result we would tend to underestimate both $M_{2}$ and $q$ when such deviation becomes important. In principle, analytic forms for the mass-radius relations for the appropriate ranges of $M_{2}$ could be included in place of (\ref{eqn:massradrel}). \begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {epspcompfull.ps} \caption{Comparison of the observed superhump data from \protect\citet{patterson05} with a model including the full pressure effect. The curves use the mean ($0.69$) and $\pm1\sigma$ ($0.01$) values for $M_{1}$ found for {\it dwarf novae} and in the $M_{2}$--$P_{\rm orb}$ relation from \protect\citet{smith98} as well as a further line with $M_{1}=1.44$. $\epsilon$ decreases with increasing $M_{1}$. } \protect\label{fig:epspcompfull} \end{center} \end{figure} \begin{table*} \protect\label{tab:qded} \begin{center} \begin{tabular}{lccccc} \hline Name & $P_{\rm orb}$(d) & $\epsilon$ & $q$ & $M_{2}$ & $M_{1}$ \\ \hline DI UMa & 0.05456( 1) & 0.0133( 6) & 0.071( 2) & 0.058 & 0.821(25) \\ V844Her & 0.05464( 1) & 0.0243( 9) & 0.111( 3) & 0.058 & 0.526(16) \\ LLAND & 0.05505( 1) & 0.0290(36) & 0.128(14) & 0.059 & 0.459(49) \\ SDSS0137-09 & 0.05537( 4) & 0.0248(20) & 0.113( 7) & 0.060 & 0.528(35) \\ ASAS0025+12 & 0.05605( 5) & 0.0206(21) & 0.097( 8) & 0.061 & 0.624(49) \\ AL Com & 0.05667( 3) & 0.0120( 7) & 0.066( 3) & 0.062 & 0.933(35) \\ WZ Sge & 0.05669( 1) & 0.0092( 7) & 0.056( 3) & 0.062 & 1.100(49) \\ RX1839+26 & 0.05669( 5) & 0.0173(20) & 0.085( 7) & 0.062 & 0.724(61) \\ PU CMa & 0.05669( 5) & 0.0222(20) & 0.103( 7) & 0.062 & 0.599(43) \\ SW UMa & 0.05681(14) & 0.0245(27) & 0.112(10) & 0.062 & 0.556(50) \\ HV Vir & 0.05707( 1) & 0.0200( 9) & 0.095( 3) & 0.062 & 0.657(23) \\ MM Hya & 0.05759( 1) & 0.0184(10) & 0.089( 4) & 0.063 & 0.710(29) \\ WX Cet & 
0.05829( 4) & 0.0199(15) & 0.095( 5) & 0.065 & 0.682(39) \\ KV Dra & 0.05876( 7) & 0.0233(22) & 0.107( 8) & 0.065 & 0.610(46) \\ T Leo & 0.05882( 1) & 0.0236(14) & 0.108( 5) & 0.066 & 0.605(29) \\ EG Cnc & 0.05997( 9) & 0.0067( 8) & 0.047( 3) & 0.068 & 1.433(88) \\ V1040 Cen & 0.06028(10) & 0.0310(27) & 0.136(10) & 0.068 & 0.501(38) \\ RX Vol & 0.06030(20) & 0.0178(20) & 0.087( 7) & 0.068 & 0.782(65) \\ AQ Eri & 0.06094( 6) & 0.0284(21) & 0.126( 8) & 0.069 & 0.549(34) \\ XZ Eri & 0.06116( 1) & 0.0270(16) & 0.121( 6) & 0.070 & 0.576(28) \\ CP Pup & 0.06145( 6) & 0.0171(20) & 0.085( 7) & 0.070 & 0.830(71) \\ V1159 Ori & 0.06218( 1) & 0.0320(11) & 0.140( 4) & 0.072 & 0.512(15) \\ V2051 Oph & 0.06243( 1) & 0.0281(25) & 0.125( 9) & 0.072 & 0.576(43) \\ V436 Cen & 0.06250(20) & 0.0212(32) & 0.099(12) & 0.072 & 0.725(85) \\ BC UMa & 0.06261( 1) & 0.0306(14) & 0.134( 5) & 0.072 & 0.538(21) \\ HO Del & 0.06266(16) & 0.0276(35) & 0.123(13) & 0.072 & 0.588(63) \\ EK TrA & 0.06288( 5) & 0.0321(25) & 0.140(10) & 0.073 & 0.519(35) \\ TV Crv & 0.06290(20) & 0.0325(32) & 0.142(12) & 0.073 & 0.513(44) \\ VY Aqr & 0.06309( 4) & 0.0203(15) & 0.096( 5) & 0.073 & 0.761(43) \\ OY Car & 0.06312( 1) & 0.0203(15) & 0.096( 5) & 0.073 & 0.761(43) \\ RX1131+43 & 0.06331( 8) & 0.0259(16) & 0.117( 6) & 0.074 & 0.630(32) \\ ER UMa & 0.06336( 3) & 0.0314(11) & 0.138( 4) & 0.074 & 0.536(17) \\ DM Lyr & 0.06546( 6) & 0.0281(31) & 0.125(12) & 0.078 & 0.620(58) \\ UV Per & 0.06489(11) & 0.0234(23) & 0.108( 8) & 0.077 & 0.711(56) \\ AK Cnc & 0.06510(20) & 0.0368(33) & 0.158(13) & 0.077 & 0.486(40) \\ AO Oct & 0.06557(13) & 0.0242(39) & 0.111(14) & 0.078 & 0.704(92) \\ SX LMi & 0.06717(11) & 0.0347(25) & 0.150(10) & 0.081 & 0.538(35) \\ SS UMi & 0.06778( 4) & 0.0360(15) & 0.155( 6) & 0.082 & 0.528(20) \\ KS UMa & 0.06796(10) & 0.0241(30) & 0.110(11) & 0.082 & 0.747(75) \\ V1208 Tau & 0.06810(20) & 0.0374(28) & 0.161(11) & 0.083 & 0.514(35) \\ RZ Sge & 0.06828( 2) & 0.0306(28) & 0.134(11) & 0.083 & 
0.617(49) \\ TY Psc & 0.06833( 5) & 0.0347(15) & 0.150( 6) & 0.083 & 0.553(21) \\ IR Gem & 0.06840(30) & 0.0351(66) & 0.152(26) & 0.083 & 0.548(93) \\ V699 Oph & 0.06890(20) & 0.0197(28) & 0.094(10) & 0.084 & 0.895(97) \\ CY UMa & 0.06957( 4) & 0.0364(14) & 0.157( 5) & 0.085 & 0.545(19) \\ FO And & 0.07161(18) & 0.0349(40) & 0.151(16) & 0.089 & 0.592(61) \\ OU Vir & 0.07271( 1) & 0.0326(15) & 0.142( 6) & 0.092 & 0.643(26) \\ VZ Pyx & 0.07332( 3) & 0.0333(20) & 0.145( 8) & 0.093 & 0.641(34) \\ CC Cnc & 0.07352( 5) & 0.0487(27) & 0.207(11) & 0.093 & 0.449(25) \\ HT Cas & 0.07365( 1) & 0.0330(30) & 0.144(12) & 0.093 & 0.651(52) \\ IY UMa & 0.07391( 1) & 0.0260(13) & 0.117( 5) & 0.094 & 0.802(33) \\ VW Hyi & 0.07427( 1) & 0.0331( 8) & 0.144( 3) & 0.095 & 0.658(13) \\ Z Cha & 0.07450( 1) & 0.0364( 9) & 0.157( 4) & 0.095 & 0.607(14) \\ QW Ser & 0.07453(10) & 0.0331(40) & 0.144(15) & 0.095 & 0.661(71) \\ WX Hyi & 0.07481( 1) & 0.0346(14) & 0.150( 5) & 0.096 & 0.639(23) \\ BK Lyn & 0.07498( 5) & 0.0479( 7) & 0.204( 3) & 0.096 & 0.471( 7) \\ RZ Leo & 0.07604( 1) & 0.0347(25) & 0.150(10) & 0.098 & 0.654(42) \\ AW Gem & 0.07621(10) & 0.0422(27) & 0.180(11) & 0.099 & 0.548(33) \\ SU UMa & 0.07635( 5) & 0.0317(12) & 0.139( 5) & 0.099 & 0.713(23) \\ \end{tabular} \caption{Derived parameters for all the systems with measured period excesses from \protect\citet{patterson05}. The errors on $q$ reflect the observational errors propagated appropriately. $M_{2}$ has been calculated using equation (\ref{eqn:m2prel}) and $M_{1}=M_{2}/q$. No allowance has been made for the systematic error that would arise from the errors in the parameters in this relation.
Similarly, no error is quoted for $M_{2}$ since this is dominated by the assumption of main sequence structure.} \end{center} \end{table*} \addtocounter{table}{-1} \begin{table*} \begin{center} \begin{tabular}{lccccc} \hline Star & $P_{\rm orb}$(d) & $\epsilon$ & $q$ & $M_{2}$ & $M_{1}$ \\ \hline SDSS1730+62 & 0.07655( 9) & 0.0376(22) & 0.162( 9) & 0.099 & 0.615(33) \\ HS Vir & 0.07690(20) & 0.0477(23) & 0.203(10) & 0.100 & 0.493(23) \\ V503 Cyg & 0.07770(20) & 0.0430(27) & 0.183(11) & 0.102 & 0.555(34) \\ V359 Cen & 0.07990(30) & 0.0388(40) & 0.166(16) & 0.106 & 0.639(61) \\ CU Vel & 0.07850(20) & 0.0293(36) & 0.130(14) & 0.103 & 0.798(83) \\ NSV 9923 & 0.07910(20) & 0.0412(30) & 0.176(12) & 0.105 & 0.595(41) \\ BR Lup & 0.07950(20) & 0.0340(40) & 0.147(15) & 0.105 & 0.716(75) \\ V1974 Cyg & 0.08126( 1) & 0.0471(10) & 0.201( 4) & 0.109 & 0.544(11) \\ TU Crt & 0.08209( 9) & 0.0397(22) & 0.170( 9) & 0.111 & 0.653(34) \\ TY PsA & 0.08414(18) & 0.0417(22) & 0.178( 9) & 0.115 & 0.648(32) \\ KK Tel & 0.08453(21) & 0.0368(31) & 0.158(12) & 0.116 & 0.734(56) \\ V452 Cas & 0.08460(20) & 0.0497(33) & 0.212(14) & 0.116 & 0.550(37) \\ DV Uma & 0.08585( 1) & 0.0343(11) & 0.149( 4) & 0.119 & 0.801(23) \\ YZ Cnc & 0.08680(20) & 0.0553(26) & 0.236(12) & 0.121 & 0.514(25) \\ GX Cas & 0.08902(16) & 0.0449(25) & 0.191(11) & 0.126 & 0.660(37) \\ NY Ser & 0.09775(19) & 0.0623(35) & 0.268(16) & 0.146 & 0.546(33) \\ V348 Pup & 0.10184( 1) & 0.0640(40) & 0.276(19) & 0.156 & 0.565(39) \\ V795 Her & 0.10826( 1) & 0.0760(10) & 0.336( 5) & 0.172 & 0.512( 8) \\ V592 Cas & 0.11506( 1) & 0.0625( 5) & 0.269( 2) & 0.189 & 0.703( 6) \\ TU Men & 0.11720(20) & 0.0717(32) & 0.314(16) & 0.195 & 0.621(32) \\ AH Men & 0.12721( 6) & 0.0887(16) & 0.406( 9) & 0.222 & 0.546(13) \\ DW Uma & 0.13661( 1) & 0.0644(20) & 0.278(10) & 0.248 & 0.893(31) \\ TT Ari & 0.13755( 1) & 0.0847( 7) & 0.383( 4) & 0.251 & 0.655( 7) \\ V603 Aql & 0.13810(20) & 0.0572(51) & 0.244(23) & 0.252 & 1.032(97) \\ PX And & 0.14635( 1) 
& 0.0898(14) & 0.412( 8) & 0.277 & 0.671(13) \\ V533 Her & 0.14730(20) & 0.0719(20) & 0.315(10) & 0.279 & 0.888(29) \\ BB Dor & 0.14920(10) & 0.0939(10) & 0.437( 6) & 0.285 & 0.652( 9) \\ BH Lyn & 0.15575( 1) & 0.0790(30) & 0.352(16) & 0.305 & 0.868(40) \\ UU Aqr & 0.16358( 1) & 0.0702(14) & 0.306( 7) & 0.330 & 1.077(25) \\ \end{tabular} \caption{{\bf Cont.} Derived parameters for all the systems with measured period excesses from \protect\citet{patterson05}.} \end{center} \end{table*} As a disc outburst proceeds, we would expect $\dot{M}$ to steadily decrease. Since $c^{2}\propto\dot{M}^{\frac{3}{10}}$ this would cause the pressure term to also shrink and thus the period excess to increase during an outburst. Observationally, the opposite appears to be the case, with the period excess decreasing with time \citep{patterson93}. The only available parameter to counter this is $\tan i$, implying that the pitch angle $i$ increases during an outburst. Although the treatment of \citet{lubow91} includes the coupling of the tides, spiral waves and eccentricity, it shares the shortcoming of a purely dynamical resonance in that the properties are characterised by those at a single radius. A recent preprint of a paper by \citet{goodchild06} has addressed this problem with a detailed analysis solving the equations for the disc behaviour integrated over the whole range of disc radii. Their final equation describing the generation, damping and dynamics of eccentricity in the disc is
\begin{eqnarray}
2r\Omega \frac{\partial E}{\partial t} & = & \frac{i E}{\rho} \frac{\partial p}{\partial r} + \frac{i}{r^{2} \rho} \frac{\partial}{\partial r} \left( \left(\gamma-i\alpha_{\rm b}\right) p r^{3} \frac{\partial E}{\partial r}\right) + \nonumber \\
& & \frac{i q \Omega^{2} r^{3}}{2 d^{2}} \left( b^{1}_{\frac{3}{2}}\left(\frac{r}{d}\right) E\right) + 2\xi r \Omega E \, \delta(r - r_{\rm res}). \label{eqn:goodeqn}
\end{eqnarray}
Here $E$ is the complex eccentricity $E=e {\rm e}^{i \omega}$, $p$ and $\rho$ are the local disc pressure and density, $\gamma-i\alpha_{\rm b}$ is a complex adiabatic exponent, $\xi=2.08\omega_{\rm orb}q^{2} r_{\rm res}$ is the eccentricity growth rate for a resonance at radius $r_{\rm res}$ from \citet{lubow91} and $\Omega$ is the angular velocity of the orbiting disc material. Unsurprisingly, the full solution has to be found numerically. These authors do so with undisturbed (vertically integrated) structure distributions given by \begin{eqnarray} P & = & P_{\rm sc} \left(\frac{r}{r_{\rm sc}}\right)^{-\frac{3}{2}} \left(1-\sqrt{\frac{r_{\rm in}}{r}}\right) \tanh\left(\frac{r_{\rm out}-r}{\nu r_{\rm sc}}\right) \label{eqn:PGood}\\ \Sigma & = & \Sigma_{\rm sc} \left(\frac{r}{r_{\rm sc}}\right)^{-\frac{3}{4}} \left(1-\sqrt{\frac{r_{\rm in}}{r}}\right)^{0.7} \tanh\left(\frac{r_{\rm out}-r}{\nu r_{\rm sc}}\right) \label{eqn:SigmaGood} \end{eqnarray} where $r_{\rm out}$ and $r_{\rm in}$ are the outer and inner disc radii and $P_{\rm sc}$ and $\Sigma_{\rm sc}$ are the pressure and surface density at the scaling radius $r_{\rm sc}$. Considering the first bracketed terms in either case, the \citet{cannizzo92a} equations used above have the same radial dependence. The second bracketed terms reflect the behaviour close to the inner radius. The final terms were chosen to implement the boundary conditions at the outer edge of the disc. Since we would expect the inner and outer terms to only become important close to the limiting mass ratios, and given the similarity otherwise of the radial dependences of the equations to those used earlier, we do not need to repeat the integration here but compare the final results produced. The results in Figure~8 of \citet{goodchild06} show a similar excursion to a negative period excess for small $q$.
The curves also turn up to high $\epsilon$ as they asymptotically approach $q_{\rm max}$, which our expression does not produce. However, the turn off occurs very close to $q_{\rm max}$ for the best fitting curve, and the details of the behaviour here depend on the functional form chosen to implement the boundary conditions for the outer edge of the disc in equations (\ref{eqn:PGood}) and (\ref{eqn:SigmaGood}). Away from the extreme mass ratios the two sets of results are very close. \subsection{Empirical Fit} In an attempt to calibrate an empirical relation between $\epsilon$ and $q$, \citet{patterson05} extended the earlier fit of \begin{equation} \epsilon=0.22q \label{eqn:pattlinapp} \end{equation} \citep{patterson01} to \begin{equation} \epsilon=0.18q +0.29q^{2}. \label{eqn:pattquadapp} \end{equation} We can view these as phenomenologically derived equivalents to the Maclaurin series for our analytic expression. Given the complexity of the full expressions, however, rather than carry out the necessary differentiation, it is simplest to generate $\epsilon(q)$ predictions from our formula numerically and then find the best fitting polynomial to these values using standard methods \citep{numrec}. This approach gives the approximate formula \begin{equation} \epsilon=3.5\times10^{-4}+0.24q-0.12q^{2} \label{eqn:ourquad} \end{equation} over the range $0.01<q<0.4$. This expression differs from the empirical one both in a very small constant offset and in a quadratic term that produces curvature in the opposite sense. This can be attributed to removing the $q_{\rm max}$ limitation imposed by U~Gem for BB~Dor. \citet{goodchild06} derive a best fit formula for their numerical integrations of (\ref{eqn:goodeqn}) of \begin{equation} \epsilon=-4.1\times10^{-4}+0.2076q. \label{eqn:goodlin} \end{equation} These polynomial forms are compared graphically in Figure~\ref{fig:empcomp}.
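The numerical fitting step just described is straightforward to reproduce; the following is a minimal sketch (not the paper's actual code), using the quoted approximation (\ref{eqn:ourquad}) as a stand-in for the full analytic expression:

```python
import numpy as np

# Sample an assumed epsilon(q) relation on a grid over 0.01 < q < 0.4 and
# recover the best-fitting quadratic by least squares.  The generating
# coefficients below are the approximate values quoted in the text.
q = np.linspace(0.01, 0.4, 200)
eps = 3.5e-4 + 0.24 * q - 0.12 * q**2

c2, c1, c0 = np.polyfit(q, eps, deg=2)  # coefficients, highest power first
```

Because the sampled relation is itself a quadratic, the fit recovers the generating coefficients essentially exactly; applied to samples of the full analytic expression, the same call yields its best quadratic approximation.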
The linear empirical fit, the full integration polynomial and our result all appear in good agreement. \begin{figure} \begin{center} \includegraphics [angle=270,scale=0.5] {empcomp.ps} \caption{Comparison of the observed superhump data from \protect\citet{patterson05} with the polynomial approximations for empirical fits (equations \protect\ref{eqn:pattlinapp} and \protect\ref{eqn:pattquadapp}) (solid), this work (\ref{eqn:ourquad}) (dashed) and full integration (\ref{eqn:goodlin}) (dot-dashed). } \protect\label{fig:empcomp} \end{center} \end{figure} \section{Summary} We have shown that the standard dynamical method of calculating the precession rate of superhumping CVs with a 3:2 resonance provides such a poor fit to the data that a 4:3 resonance is actually a better fit. We have confirmed the importance of including the pressure related term in the calculation of the precession rate \citep{murray00}, which fits the data better than any pure resonance model. The pressure term has been reduced to a function of $q$ and the total, analytic precession rate shown to be equivalent to the empirically derived expressions of \citet{patterson05}. These analytic expressions also produce precession rates in good agreement with those from the detailed integrations carried out by \citet{goodchild06}. This formulation has been used to calculate values of $q$ for systems in which they would otherwise be unknown. \section*{ACKNOWLEDGEMENTS} I thank Robert Hynes and Juhan Frank for helpful remarks and advice regarding the work in this paper and the anonymous referee for insightful remarks that improved the presentation of these results. \\
\section*{Introduction} We know that when the dimensions of a light source are small compared with the distance between the source and a receiver of its light, the source may be treated as point-like. One way to test the limits of validity of this approximation is to measure how the light intensity arriving at a given "point" falls off as the distance between the source and the detector is increased, keeping the luminous power of the source fixed. If the light intensity falls off with the inverse square of this distance, we may consider the source point-like. One may also argue that such a source emits light equally in all directions, i.e. the only symmetry compatible with this configuration is spherical, with the light source at the central position. More rigorous arguments are also available. In the ray model of light, any closed region surrounding the source receives a certain flux of light rays, independent of distance, in close analogy with the electric field lines emanating from a point electric charge \cite{griffiths}. The amount of energy reaching a given infinitesimal region depends directly on the density of light rays, on the area of this region, and on the scalar product between the area vector (normal to the region) and the unit vector $\hat{k}$ along the propagation direction of the rays. With the notion of ray density, defined as the number of rays crossing a given area divided by the value of that area, we see that the density of light rays propagating within a given solid angle subtended at the source decreases as we move away from the source. This happens because the area increases while the number of light rays stays the same [see figure (\ref{fig:fonte_luz})].
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.56]{fluxo_linhas12.pdf} \end{center} \caption{Schematic representation of a cone of light from a source that emits equally in all directions. Note that the surface density of light rays falls off with the square of the distance, as discussed in the text. We use seven rays for simplicity of representation, since the number of light rays is not finite; even so, the ray density decreases with the same functional form.} \label{fig:fonte_luz} \end{figure} Since the area of the sphere grows as $r^{2}$ (where $r$ is the distance between the source and the detector), for a point source the density of light rays -- and hence the light intensity -- is proportional to $r^{-2}$, that is: \begin{equation} I \propto r^{-2} . \label{eq:1} \end{equation} This is a topic of great relevance in physics, especially for the understanding of several models, among which we highlight Newtonian universal gravitation and Coulomb's law. In both cases the fields of point particles show the same behaviour ($E \propto r^{-2}$), apart from more complex charge distributions (dipoles and higher-order multipoles). To discuss this subject we introduce the use of smartphones, devices that are part of students' everyday life. There is earlier work in this direction; interested readers may consult some references \cite{artigo_int1, artigo_int2, artigo_gota, artigo_ondas, diss_leo}. \section*{Smartphones as measuring instruments for physical quantities} Smartphones and tablets, so widespread nowadays, can serve as personal computers and as instruments for the direct measurement of physical quantities relevant to physics teaching.
Easily accessed sensors and software (apps) come pre-installed on these devices and can be used to measure acceleration, angular velocity, sound intensity, magnetic field, position (via GPS) and light intensity. Apps can be downloaded to read, store and display the measured data. To measure light intensity we use the lux meter located in the camera of smartphones and tablets. Several light-metering apps can be downloaded free of charge for the three main platforms (Android, iOS and WindowsPhone) \cite{android, ios, windowsphone}. Once one of these apps is properly installed on a smartphone or tablet, you will be able to measure with good precision the light intensity reaching the device's digital camera. Now, using the flash light of a second device as the light source\footnote{Most of these devices allow the flash LED to be used as a flashlight [see figure (\ref{fig:fonte_celular})], and nothing prevents the use of an ordinary lamp.}, we fix one of the devices and vary the distance between them, recording the corresponding values of light intensity and distance [see figures (\ref{fig:fonte_celular}) and (\ref{fig:grafico})]. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.65]{celular.jpg} \includegraphics[scale=0.09]{fontes.jpg} \end{center} \caption{On the left, a photograph of a smartphone with its flash activated.
On the right, a schematic representation of the experiment described in the text.} \label{fig:fonte_celular} \end{figure} \begin{figure}[!htb] \begin{center} \vspace{0.6cm} \includegraphics[scale=0.27]{grafico12.pdf} \end{center} \caption{Graph of the light intensity as a function of the distance between source and sensor, together with the parameter values $a = 3.72 \pm 0.82$ and $b = -2.09 \pm 0.07$ obtained by fitting the function $f(x) = ax^{b}$.} \label{fig:grafico} \end{figure} The graph in figure (\ref{fig:grafico}) was produced with a free app downloaded from the \textit{Applestore} \cite{dataanalysis}, which can plot a set of points and quickly fit a curve to them. The fit was performed with a power-law function of the form $y = ax^{b}$. The exponent $b$ obtained for the measured values is close to $-2$, accurate to the first decimal place. This means that the distances used in the experiment are large compared with the size of the source, and therefore this type of light source can be treated as point-like. It is worth noting that the results shown here remain reproducible even in the presence of background light, provided one first finds the distance at which the intensity of the source becomes comparable to the background, and takes it as an upper limit. Anyone wishing to reproduce this experiment with an incandescent lamp should take care to start collecting data at somewhat larger distances. We suggest estimating the length of the tungsten filament and starting the measurements at distances greater than three times that estimate. \section*{Conclusions and Perspectives} We have seen that this simple, readily accessible technique can help students understand that the symmetry with which a physical system disperses or propagates has a strong influence on the way the phenomenon behaves.
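The power-law fit shown in figure (\ref{fig:grafico}) can also be reproduced offline; the following is a minimal sketch using synthetic readings (the values below are invented for illustration, not the measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic intensity readings following an inverse-square law with a
# little multiplicative noise, fitted with f(x) = a * x**b as in the text.
rng = np.random.default_rng(0)
r = np.linspace(0.2, 2.0, 15)  # distances (arbitrary units, assumed)
I = 3.7 * r**-2.0 * (1 + 0.02 * rng.standard_normal(r.size))

def power_law(x, a, b):
    return a * x**b

(a_fit, b_fit), _ = curve_fit(power_law, r, I, p0=(1.0, -1.0))
# b_fit comes out close to -2, consistent with a point-like source.
```

Any plotting or fitting tool that supports the model $y = ax^{b}$ will give equivalent results for real classroom data.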
Spherical symmetry appears in many physical models, and an experiment such as this one is valuable in learning interactions. Using a converging lens to collimate the light beam and repeating the experiment will show that the behaviour of the intensity as a function of distance from the source changes markedly: this time the intensity will not decrease, or will decrease only very little. This is an excellent example of how the symmetry of a physical process alters its behaviour. If, right after the spherical lens, we place a cylindrical one, we should observe a fall-off with the first power of the distance. An arrangement as simple and practical as this offers many possibilities. \section*{Acknowledgements} The authors are grateful to the funding agency CAPES. \bibliographystyle{apsrev.bst}
\section{Introduction} \label{sec:introduction} Large optical surveys demonstrated that galaxies evolve through mergers from star-forming spirals, through a transition region, to massive elliptical galaxies \citep[e.g.][]{bell04,faber07,schawinski14}. Outflows and the energetic feedback from active galactic nuclei (AGN) are widely believed to play a crucial role in building the observed luminosity function of galaxies and in the co-evolution of supermassive black holes and their host galaxies \citep[e.g.][]{benson03,croton06}. However, it is still debated how the energy released from AGN is coupled with the surrounding matter. The narrow-line region (NLR), located beyond the sphere of influence of supermassive black holes, provides an ideal laboratory to explore the connection between the central AGN and the host galaxy. Given that the typical extent of NLRs is in the range of a few hundred to about a thousand pc, for nearby AGN these regions are well resolved, allowing detailed morphological and diagnostic studies. Investigations of the NLR reveal the presence of bright narrow emission lines at a wide range of energies, from [O~III] to X-rays \citep[e.g.][]{heckman14,netzer15}. The presence of emission lines at various wavelengths suggests a common ionization mechanism. A long-standing debate is whether the NLR of AGN is ionized by the nucleus or by shocks driven by radio jets. Although observational studies suggest that in radio galaxies jets may be responsible for the ionization of optical emission-line material \citep[e.g.][]{nesvadba08,lanz15}, a consensus is still lacking \citep[e.g.][]{robinson00}. Similarly, the dominant emission mechanism is also debated in radio-quiet AGN. Studies of nearby Seyfert galaxies suggest that their X-ray spectrum is consistent with photoionized gas \citep{bianchi06}.
The main arguments hinting that photoionization is the main excitation mechanism are (1) the morphological similarity between the diffuse X-ray and the [O~III] emission; (2) the approximately constant flux ratios between the [O~III] and soft X-ray emission; and (3) the acceptable fit obtained by describing the observed spectra with photoionized gas models. However, several studies hint that the role of collisional ionization may be non-negligible in Seyfert galaxies \citep{capetti99,kukula99,maksym16}. To probe the ionization mechanism of the hot gas in the NLR of Seyfert galaxies, it is indispensable to perform detailed studies of nearby Seyferts with prime multi-wavelength data. As we demonstrate below, Mrk~3 is the ideal candidate for such a study. Mrk~3 (UGC~3426) is an early-type (S0) galaxy\footnote{Morphological classification taken from HyperLeda.} at $z=0.013509$, which hosts a luminous Seyfert 2 AGN. Given its brightness and proximity, Mrk~3 was the subject of a wide range of multi-wavelength observations. The \textit{Hubble Space Telescope} (\textit{HST}) [O~III] survey of nearby AGN showed that Mrk~3 is the second brightest source, after the spiral galaxy NGC~1068 \citep{schmitt03}. The \textit{HST} images show that the NLR of Mrk~3 exhibits a series of emission-line knots with an S-shaped morphology. Radio observations reveal the presence of a pair of jet knots, whose position angle is consistent with that of the NLR \citep{kukula93,kukula99}. Based on the spectroscopic study carried out with \textit{HST}, \citet{capetti99} argue that the NLR is a high-density shell that was shock heated by the jet. However, based on long-slit spectra obtained with \textit{HST} and by utilizing photoionization models, \citet{collins05,collins09} suggest that the NLR is dominated by photoionization. Thus, the mechanism responsible for ionizing the diffuse gas in the NLR of Mrk~3 remains a matter of debate.
Mrk~3 has been explored with the X-ray grating spectrometers of \textit{Chandra} and \textit{XMM-Newton}. Based on the analysis of a 100 ks \textit{Chandra} HETG observation and by extracting the spectrum of an 8 pixel ($\approx4\arcsec$ wide) region, \citet{sako00} suggested that the main excitation mechanism of the X-ray emitting plasma is photoionization. In agreement with this, the \textit{XMM-Newton} RGS spectrum of Mrk~3 also hinted that the soft X-ray emission is dominated by photoionized gas \citep{bianchi05}. However, previous X-ray studies did not explore the NLR of Mrk~3 at spatial scales comparable to those of the \textit{HST} and radio images. Indeed, these works probed the entire NLR as a single region and did not take into account its complex structure. \begin{table} \caption{The list of analyzed \textit{Chandra} observations.} \begin{minipage}{8.75cm} \renewcommand{\arraystretch}{1.3} \centering \begin{tabular}{c c c c} \hline Obs ID & $T_{\rm{obs}}$ (ks) & Grating & Date\\ \hline 873 & 100.6 & HETG & 2000/03/18 \\ 12874 & 77.1 & HETG & 2011/04/19 \\ 12875 & 29.9 & HETG & 2011/04/25 \\ 13254 & 31.5 & HETG & 2011/08/26 \\ 13261 & 22.1 & HETG & 2011/05/02 \\ 13263 & 19.7 & HETG & 2011/04/28 \\ 13264 & 35.8 & HETG & 2011/04/27 \\ 13406 & 21.4 & HETG & 2011/05/03 \\ 14331 & 51.2 & HETG & 2011/08/28 \\ 12293 & 30.6 & NONE & 2012/01/09 \\ \hline \end{tabular} \end{minipage} \label{tab:list1} \end{table} \begin{figure*} \begin{center} \leavevmode \epsfxsize=16cm\epsfbox{fig1.eps} \caption{The raw \textit{Chandra} ACIS-S image of the central regions of Mrk~3 in the $0.3-2$ keV (soft) and $4-8$ keV (hard) band. The soft band image is extended and elongated in the east-west direction, whereas the hard band image appears to be point-like. Based on the \textit{Chandra} PSF in the $4-8$ keV band, about 90\% of the counts are expected to be enclosed within a $2\arcsec$ radius for a point source.
The hard band image demonstrates that more than $90\%$ of the counts are within this region, confirming that this image is consistent with a point source. In the $0.3-2$ keV band the 90\% enclosed fraction of counts corresponds to about $0.8\arcsec$. Only 35\% of the counts are within this region, demonstrating that besides the AGN another extended X-ray emitting component is present, originating from ionized hot gas in the NLR of Mrk~3. In the soft band image we overplot the regions that were used to extract X-ray energy spectra for constructing the temperature profile (see Section \ref{sec:shock} and Figure \ref{fig:sb}). In the hard band image we overplot the logarithmic intensity levels taken from the $0.3-2$ keV band image, which are in stark contrast with the point-like distribution of the counts in the $4-8$ keV band.} \vspace{0.5cm} \label{fig:zero} \end{center} \end{figure*} Given the prototypical nature of Mrk~3 and the wealth of available multi-wavelength data, it is a prime laboratory to explore the mechanisms responsible for exciting the diffuse X-ray gas. In this work, we focus on analyzing \textit{Chandra} observations of Mrk~3 to distinguish between collisional ionization from the small-scale radio jet and photoionization from the AGN radiation field. This project is greatly facilitated by the deep \textit{Chandra} HETG observations that allow us to perform high-resolution spectroscopy of the NLR at $0.5\arcsec$ spatial scales -- nearly an order of magnitude smaller than applied in previous X-ray grating studies. This work is structured as follows. In Section 2 we introduce the data and describe the main steps of the analysis. In Section 3 we present our results: we discuss the deconvolved X-ray image of the diffuse emission, study the surface brightness and temperature structure of the hot gas, and probe the \textit{Chandra} HETG spectra in seven distinct locations as a function of radius from the nucleus.
We discuss our results in Section 4 and argue that both photoionization and collisional ionization play a role in the NLR of Mrk~3. We summarize in Section 5. The luminosity distance of Mrk~3 is $D_{\rm L} = 58.8$ Mpc and the corresponding angular scale is $278 \ \rm{pc \ arcsec^{-1}}$. All uncertainties listed in the paper are $1\sigma$ errors. \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=8.5cm\epsfbox{fig2.eps} \caption{X-ray energy spectrum of a circular region with $2\arcsec$ radius centered on the nucleus of Mrk~3. The spectrum was fit with a two component model consisting of a thermal model and a power law model. The column density was fixed at the Galactic value. The best-fit model is overplotted. The bottom panel shows the residuals of the fit.} \vspace{0.7cm} \label{fig:spec_central} \end{center} \end{figure} \section{The Chandra data} \label{sec:chandra} The \textit{Chandra} X-ray Observatory observed Mrk~3 in nine pointings with HETG/ACIS-S for a total of 389.3 ks. In addition, one pointing with an exposure time of 30.6 ks was carried out with \textit{Chandra} ACIS-S in imaging mode. The details of the individual observations are listed in Table \ref{tab:list1}. The data were reduced with standard CIAO\footnote{http://cxc.harvard.edu/ciao/} software package tools (CIAO version 4.5, CALDB version 4.6.7). To analyze the HETG data, we reprocessed all observations, which ensures that the most recent calibration updates are applied. We used standard CIAO tools to create the region masks (\textit{tg\_create\_mask}) and extract the spectra (\textit{tg\_extract}). Throughout the analysis we only consider the first order dispersed spectra for the Medium Energy Grating (MEG) and High Energy Grating (HEG). To maximize the signal-to-noise ratios of the spectra, we combined the $\pm1$ orders of each grating.
To probe the spectral variability of Mrk~3, we investigated the individual exposures and found that the spectra are consistent and the count rates measured in the $6-10 \ \rm{\AA}$ wavelength range exhibit $\lesssim7\%$ variations. Therefore, we combined the spectra from all individual observations to obtain a single first order HEG and MEG spectrum. Finally, we produced grating response files for each observation by employing the \textit{mkgarf} and \textit{mkgrmf} tools, which were then combined. The imaging observation was analyzed following the main steps outlined in \citet{emery17}. First, we used the \textit{chandra\_repro} tool to reprocess the observations. Then we searched for high background periods using a light curve that was extracted from the $2.3-7.3$ keV energy range, the band most sensitive to flares \citep{hickox06}. Using the \textit{deflare} tool and applying $3\sigma$ clipping, we did not find any high background time periods, hence the total exposure time of the imaging observation remains $30.6$ ks. Given that we aim to explore the gaseous X-ray emission around Mrk~3, bright point sources -- mostly originating from low-mass X-ray binaries or background AGN -- need to be identified and removed. To detect the point sources, we utilized the \textit{wavdetect} tool. The resulting source regions were excluded from the analysis of the diffuse emission. Although we identify several point sources, including the nuclear source associated with the galaxy, aside from the central AGN none of them lies in the proximity of the NLR. To account for the background emission when studying the diffuse emission, we utilize nearby regions, which ensures the precise subtraction of both the instrumental and sky background components. Exposure maps were produced for the images to correct for vignetting effects.
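The $3\sigma$ clipping screen applied with \textit{deflare} can be sketched in a few lines; the following is a toy NumPy illustration of the iterative clipping idea (not the CIAO implementation, and the light curve below is synthetic):

```python
import numpy as np

# Iteratively clip light-curve bins that deviate by more than nsigma
# from the mean of the currently accepted bins.
def sigma_clip_good_times(rates, nsigma=3.0, max_iter=10):
    good = np.ones(rates.size, dtype=bool)
    for _ in range(max_iter):
        mu, sig = rates[good].mean(), rates[good].std()
        new_good = np.abs(rates - mu) <= nsigma * sig
        if np.array_equal(new_good, good):
            break  # converged
        good = new_good
    return good

rng = np.random.default_rng(1)
rates = rng.normal(1.0, 0.05, 300)  # quiescent count rates (cts/s, invented)
rates[100:105] = 3.0                # injected background flare
good = sigma_clip_good_times(rates)
# The flare bins are rejected; nearly all quiescent bins are kept.
```

In practice the same logic operates on binned count rates from the flare-sensitive band before the good-time intervals are recomputed.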
To create the exposure maps, we used a spectral weights file that was computed by assuming an optically-thin thermal plasma emission model with $N_{\rm H} = 9.67 \times 10^{20} \ \rm{cm^{-2}}$ column density, $kT= 0.85$ keV temperature, and $Z=0.24$ Solar metallicity. This model represents the best-fit average spectrum of the hot gaseous emission in the NLR of Mrk~3 (see Section \ref{sec:basic}). \begin{figure*}[t] \begin{center} \leavevmode \epsfxsize=5.75in\epsfbox{fig3.eps} \caption{Deconvolved $0.3-2$ keV band \textit{Chandra} image of the NLR, where one pixel corresponds to $0.148\arcsec$ ($41.1$ pc). The scale bar corresponds to $\approx0.36\arcsec$. Overplotted are the [O~III] intensity levels taken from the \textit{HST} Faint Object Camera. The contour levels are $[1.67, 3.33, 5.00, 6.67, 8.33, 10.00]\times10^{-16} \ \rm{erg \ s^{-1} \ cm^{-2}}$. The overall morphology of the X-ray and [O~III] images is similar, but the X-ray emission exhibits a broader distribution. The [O~III]-to-X-ray flux ratios are non-uniform across the east-west axis: we observed $\mathcal{R}_{\rm [O~III]/X} \approx 5.6 $ towards the east and $\mathcal{R}_{\rm [O~III]/X} \approx 2.2 $ towards the west. While these flux ratios are consistent with those obtained for other Seyfert galaxies, the higher ionization state towards the west indicates that shock heating may play a role.} \vspace{0.7cm} \label{fig:deconvolveoiii} \end{center} \end{figure*} \section{Results} \subsection{X-ray images of Mrk~3} \label{sec:image} In Figure~\ref{fig:zero} we depict the $0.3-2$ keV (soft) and $4-8$ keV (hard) band X-ray images of the central $20\arcsec \times 20\arcsec $ ($5.56 \times 5.56 $ kpc) region around Mrk~3 based on the sole imaging observation (Obs ID: 12293). Both images reveal the presence of a bright nuclear point source. However, the overall distribution of X-ray photons is strikingly different.
The hard band image appears to be round and symmetric, whereas the soft band image shows an elongated structure in the east-west direction. To probe whether the distribution of photons can be explained by a bright point source, we construct the \textit{Chandra} point spread function (PSF) for both energy ranges. Based on the hard band PSF we expect that $\sim90\%$ of the photons should be enclosed within a circular region with radius of $2\arcsec$. In agreement with this, we find that somewhat more than $90\%$ of the photons are encircled within this radius, implying that the hard band emission can be explained by the bright AGN. The PSF extracted for the soft band predicts that the $90\%$ encircled radius is $0.8\arcsec$. However, within this radius only $\sim35\%$ of the photons are included, implying that beyond the nuclear source an extended X-ray emitting component is present. This diffuse X-ray emitting component in the NLR of Mrk~3, originating from hot X-ray gas, is the main focus of our study. \subsection{Average properties of the hot gas in the NLR} \label{sec:basic} We establish the nature and average characteristics of the extended emission within the NLR by extracting an X-ray energy spectrum using the ACIS-S imaging observation. We utilize a circular region with $2\arcsec$ ($556$ pc) radius centered on the nucleus of Mrk~3. We note that this region covers most of the NLR. We fit the resulting spectrum with a two component model consisting of an absorbed optically-thin thermal plasma emission model (\textsc{APEC}) and a power law model. The thermal component describes the gaseous emission, while the power law component accounts for the emission associated with the nuclear source and the population of unresolved X-ray binaries. The column density was fixed at the Galactic value. The spectrum and the best-fit model are shown in Figure \ref{fig:spec_central}.
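The encircled-count test used above can be made concrete with a short routine; the following is a toy sketch with synthetic event positions (the distributions below are invented for illustration, not the Mrk~3 data):

```python
import numpy as np

# Fraction of events (positions in arcsec, relative to the nucleus)
# falling inside a given radius.
def encircled_fraction(x, y, radius):
    r = np.hypot(x, y)
    return np.mean(r <= radius)

rng = np.random.default_rng(2)
# Toy "hard band": point-source-like, tightly concentrated counts.
xh, yh = rng.normal(0, 0.8, (2, 2000))
# Toy "soft band": a compact core plus an extended east-west component.
xs = np.concatenate([rng.normal(0, 0.4, 700), rng.normal(0, 4.0, 1300)])
ys = np.concatenate([rng.normal(0, 0.4, 700), rng.normal(0, 1.5, 1300)])

f_hard = encircled_fraction(xh, yh, 2.0)
f_soft = encircled_fraction(xs, ys, 0.8)
# f_hard is high (consistent with a point source), while f_soft falls well
# below the PSF expectation, signalling an extended component.
```

Comparing such fractions against the PSF prediction at the same radius is the essence of the point-source test described in the text.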
Based on the fit performed in the $0.5-2$ keV band, we confirm the presence of a significant gaseous component. The average best-fit temperature of the hot gas is $kT=0.83\pm0.03$ keV and the metallicity is $Z=0.24^{+0.24}_{-0.09}$ Solar using the \citet{anders89} abundance table. Given the stellar mass of the galaxy ($M_{\rm \star} = 1.6 \times 10^{11} \ \rm{M_{\odot}}$), the metallicity of the gas is relatively low. Indeed, other massive early-type galaxies exhibit approximately Solar metallicities \citep[e.g.][]{ji09}, whereas lower mass gas-poor ellipticals have sub-Solar metallicities \citep{bogdan12}, similar to that observed in Mrk~3. The slope of the power law component is $\Gamma = 1.96\pm0.10$, which is similar to that obtained by \citet{guainazzi16}, who performed a thorough analysis of the spectral properties of the AGN. In addition, these authors reported that Mrk~3 has a heavily absorbed continuum emission with $N_{\rm H} = (0.8-1.1)\times10^{24} \ \rm{cm^{-2}}$. However, due to the high absorbing column this emission component does not add a notable contribution at energies below $\lesssim5$ keV, hence our results are not affected by this emission in any significant way. The absorption corrected $0.3-2$ keV band luminosity of the thermal component is $L_{\rm 0.3-2keV} = 6.7\times10^{40} \ \rm{erg \ s^{-1}}$. We note that the observed X-ray luminosity and gas temperature are in broad agreement with the scaling relations established for massive early-type galaxies \citep{goulding16}. Based on the best-fit spectral model we compute the emission measure of the gas and obtain $\int n_e n_H dV = 7.9\times10^{63} \ \rm{cm^{-3}} $. By using an admittedly simplistic approach and assuming uniform density and spherical symmetry for the gas distribution, we estimate the average gas density $n_e=0.61 \ \rm {cm^{-3}}$ and obtain a total gas mass of $M = 1.1\times10^7 \ \rm{M_{\odot}}$ within the studied volume.
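As a cross-check, the density and gas-mass numbers quoted above follow from a few lines of arithmetic; the sketch below assumes, as in the text, uniform density and spherical symmetry (radius $2\arcsec = 556$ pc), and for simplicity takes $n_e \approx n_H$:

```python
import math

pc_cm = 3.086e18         # 1 pc in cm
m_p = 1.673e-24          # proton mass, g
M_sun = 1.989e33         # solar mass, g

r = 556 * pc_cm                       # sphere radius, cm
V = 4.0 / 3.0 * math.pi * r**3        # volume, cm^3
EM = 7.9e63                           # emission measure, int n_e n_H dV, cm^-3

n_e = math.sqrt(EM / V)               # average electron density, cm^-3
M_gas = n_e * m_p * V / M_sun         # total gas mass, solar masses
# n_e ~ 0.61 cm^-3 and M_gas ~ 1.1e7 M_sun, matching the quoted values.
```

A more careful treatment would distinguish $n_e$ from $n_H$ and include the helium contribution to the mass, but at this level of approximation the simple estimate suffices.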
This gas mass is comparable to that obtained by \citet{collins09} from \textit{HST} observations of the NLR. \begin{figure*}[t] \begin{center} \leavevmode \epsfxsize=5.75in\epsfbox{fig4.eps} \caption{Same as Figure \ref{fig:deconvolveoiii}, but overplotted are the intensity levels of the 18 cm EVN and Merlin radio image \citep{kukula99}. The intensity levels of the contours are identical to those presented in Figure 2 of \citet{kukula99}. The image demonstrates the S-shaped morphology of the extended X-ray emission, which is in good agreement with the structure of the radio jet. Note that the X-ray emission envelops the radio emission. The asymmetric radio jets, which extend to about $200$ pc and $400$ pc toward the east and west, signify the presence of a strong shock towards the west and a weaker shock towards the east \citep{kukula99}. The main features in the radio emission, namely the hotspot on the western side and the two bright radio components on the eastern side, are highlighted and further discussed in Section \ref{sec:radio}. } \vspace{0.7cm} \label{fig:deconvolve} \end{center} \end{figure*} \subsection{High-resolution images} \label{sec:deconvolved} \subsubsection{Deconvolved X-ray image} High-resolution radio and optical observations demonstrate the complex structure of the NLR. Given that the native $0.492\arcsec$ per pixel \textit{Chandra} resolution is lower than the resolution of the radio and optical images, for a more appropriate comparison we enhance the resolution of the ACIS imaging. This allows us to explore the spatial structure of the diffuse X-ray emission at finer angular scales. To this end, we apply the Lucy-Richardson deconvolution algorithm. Since the observed \textit{Chandra} image is the intrinsic brightness distribution of the source (in our case the hot gas in the NLR of Mrk~3) convolved with the point spread function (PSF) of the detector, it is indispensable to have a good understanding of the PSF.
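The Lucy-Richardson iteration underlying the \textsc{CIAO} \textit{arestore} task can be sketched as follows. This is a minimal illustration of the algorithm, not the actual \textsc{CIAO} implementation: each iteration multiplies the current estimate by the PSF-correlated ratio of the data to the re-blurred estimate.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=100):
    """Minimal Richardson-Lucy deconvolution: iteratively refine an
    estimate so that, blurred with the PSF, it reproduces the data."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]          # correlation = convolution with flipped PSF
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

As noted below for the real data, the iteration tends to sharpen features, so point-like structures converge toward ever more compact peaks.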
To construct an accurate image of the PSF, we used the Chandra Ray Tracer (ChaRT). Specifically, we ran a ray-trace simulation using the ChaRT web interface\footnote{http://cxc.harvard.edu/ciao/PSFs/chart2/runchart.html}; the resulting set of rays was then projected onto the detector plane to produce a pseudo-event file. We binned this PSF event file to a fraction of the native ACIS pixels and created an image of the PSF in the $0.5-2$ keV band. For the Lucy-Richardson deconvolution we utilized the $0.5-2$ keV band X-ray image binned to $30\%$ of the native ACIS resolution and the similarly binned PSF image obtained from ChaRT. We used the \textsc{CIAO} \textit{arestore} task to carry out the deconvolution and iterated 100 times. We constructed deconvolved images at several resolutions and found that for the NLR of Mrk~3 the best result is achieved when $30\%$ of the native ACIS resolution is used. The deconvolved \textit{Chandra} image, shown in Figures~\ref{fig:deconvolveoiii} and \ref{fig:deconvolve}, has a pixel size of $0.148\arcsec$. We note that the applied Lucy-Richardson deconvolution technique tends to sharpen features. Therefore, the true X-ray light distribution is slightly more extended than seen on the deconvolved images. \begin{figure*}[t] \begin{center} \leavevmode \epsfxsize=8.5cm\epsfbox{fig5a.eps} \hspace{0.5cm} \epsfxsize=8.5cm\epsfbox{fig5b.eps} \vspace{0.3cm} \caption{$0.3-2$ keV band X-ray surface brightness (left panel) and temperature (right panel) profiles of the diffuse gas towards the east and west of the NLR. On the left panel the solid histogram shows the PSF obtained from \textit{Chandra} ray tracing. Note the drop in surface brightness and temperature at $\sim2\arcsec$ towards the east and west. The observed surface brightness (hence density) and temperature ratios across the edge signify the presence of shock fronts.
Based on the de-projected density jumps and the Rankine-Hugoniot jump conditions we derive shock Mach numbers of $M=2.5^{+1.0}_{-0.6}$ and $M=1.5\pm0.2$ towards the west and east, respectively. The temperature jumps suggest comparable Mach numbers.} \vspace{0.7cm} \label{fig:sb} \end{center} \end{figure*} \subsubsection{Comparing the X-ray and [O~III] morphology} The morphological similarity and the nearly uniform flux ratios between the [O~III] line emission and the gaseous X-ray emission were used to argue that photoionization is the main excitation mechanism in Seyfert galaxies \citep{bianchi06}. In addition, \citet{bianchi06} studied a sample of radio galaxies and obtained similar conclusions. Motivated by this, we compare the [O~III] and X-ray morphology and flux ratios ($\mathcal{R}_{\rm [O~III]/X} = F_{\rm [O~III]}/F_{\rm 0.5-2keV}$) in the NLR of Mrk~3. In Figure~\ref{fig:deconvolveoiii} we present the deconvolved \textit{Chandra} image of the central regions of Mrk~3 and overplot the intensity levels of the [O~III] $\lambda5007$ emission observed by the \textit{HST}. The [O~III] image was taken with the Faint Object Camera at an angular resolution of $\approx0.1\arcsec$. There is an overall agreement between the distribution of the X-ray light and the [O~III] intensity levels as both images exhibit a characteristic S-shaped morphology. However, the emission from the hot X-ray gas has a broader distribution and surrounds the [O~III] emission. To compute the flux ratios, we utilize the [O~III] fluxes measured by \citet{collins05} and the X-ray fluxes obtained from the $0.5-2$ keV band \textit{Chandra} images. Given the different angular resolution of the two images, we derive the average flux ratios in two regions corresponding to the east and west of the NLR within $1 \arcsec$ radius from the nucleus. The flux ratios are different on the two sides of the AGN.
Specifically, we obtain $\mathcal{R}_{\rm [O~III]/X} \approx 5.6 $ towards the east and a significantly lower value, $\mathcal{R}_{\rm [O~III]/X} \approx 2.2 $, towards the west. The former ratio agrees with those obtained by \citet{bianchi06}, suggesting that photoionization plays a notable role in exciting the gas. However, the $\mathcal{R}_{\rm [O~III]/X}$ ratio on the western side of Mrk~3 is significantly lower and is comparable to that of sources that contain small-scale radio sources \citep{bianchi06,balmaverde12}. Therefore, the higher ionization state towards the west hints that the interaction between the jet and the ISM may play -- at least -- a complementary role in the ionization of the gas. Thus, the non-uniform flux ratios and the broader distribution of the X-ray emission in the NLR of Mrk~3 suggest that photoionization may not be the sole excitation mechanism. \bigskip \subsubsection{Comparing the X-ray and radio morphology} \label{sec:radio} The high-resolution 18 cm EVN and Merlin radio images of Mrk~3 (see the intensity levels in Figure \ref{fig:deconvolve}) reveal jets with an S-shaped structure and a remarkable hotspot on the western side \citep{kukula93,kukula99}. These authors suggest that the S-shaped morphology of the radio jet may be due either to a change in the jet axis or to the jet interacting with the rotating interstellar medium. Moreover, they suggest that the characteristics of the hotspot on the west may signify the presence of a shock, where the radio jet is interacting with the surrounding material. On the eastern side a similar hotspot is not observed, but two bright radio components are present at $<100$ pc from the nucleus. These features are marked as R1 and R2 in Figure \ref{fig:deconvolve}. \citet{kukula99} suggest that these radio components may have played a role in thermalizing the kinetic energy of the eastern jet, hence reducing the jet's Mach number and leading to a weaker eastern shock.
To compare the morphology of the X-ray and radio emission, in Figure \ref{fig:deconvolve} we show the deconvolved X-ray image with the 18 cm radio intensity levels overplotted. This image reveals that the overall morphology between the X-ray and radio structure is similar since both images show the S-shaped morphology. However, the radio jets are narrow, whereas the gaseous X-ray emission is significantly broader and surrounds the radio emission. This hints that collisional ionization may play a role and the gas may be driven by shocks \citep{wilson01,massaro09}. Motivated by this, we investigate the X-ray data to identify potential shocks in the NLR. \subsection{Detection of shocks in the NLR} \label{sec:shock} To search for possible shocks in the NLR, we investigate the surface brightness and temperature distribution of the X-ray gas. We extract the profiles using circular wedges with position angles of $135\degr-225\degr$ and $315\degr-405\degr$, where $0\degr$ and $90\degr$ correspond to west and north, respectively. While both the surface brightness and temperature profiles are extracted using wedges with these position angles, the widths of the individual wedges differ. To extract the surface brightness profiles, we used regions with widths of $0.5\arcsec-1\arcsec$, while for the temperature profile the extraction regions had widths of $1\arcsec-6\arcsec$ depending on the brightness of the diffuse emission. The surface brightness profiles, extracted from the $0.3-2$ keV energy range towards the east and west of the NLR, are depicted in the left panel of Figure \ref{fig:sb}. Along with the surface brightness profiles, we also show the expected brightness distribution of the PSF obtained from ChaRT.
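The wedge-based profile extraction can be illustrated with a short sketch. The function below is hypothetical (not the actual extraction tool) and adopts the position-angle convention of the text, with $0\degr$ at west and $90\degr$ at north, so the $315\degr-405\degr$ wedge straddles the western direction:

```python
import numpy as np

def wedge_profile(x, y, x0, y0, pa_min, pa_max, r_edges):
    """Counts per unit area in radial bins, restricted to a position-angle
    wedge. Angles in degrees: 0 = west, 90 = north (convention of the text);
    wedges crossing 360 deg (e.g. 315-405) are handled by the modulo test."""
    dx, dy = np.asarray(x, float) - x0, np.asarray(y, float) - y0
    r = np.hypot(dx, dy)
    pa = np.degrees(np.arctan2(dy, -dx)) % 360.0   # 0 = west, 90 = north
    width = (pa_max - pa_min) % 360.0
    in_wedge = (pa - pa_min) % 360.0 < width
    counts, _ = np.histogram(r[in_wedge], bins=r_edges)
    # sector area of each annulus, for conversion to surface brightness
    area = (width / 360.0) * np.pi * np.diff(np.asarray(r_edges) ** 2)
    return counts / area
```

Binning event positions this way, with separate radial bin widths for the brightness and temperature profiles, reproduces the extraction scheme described above.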
As discussed in Section \ref{sec:image}, the PSF has a significantly narrower distribution than the diffuse emission, thereby demonstrating that the extended emission cannot be associated with the bright nuclear source. The surface brightness profiles reveal a notable jump at $\sim2\arcsec$ towards the east and west. \begin{figure*} \begin{center} \leavevmode \epsfxsize=18.5cm\epsfbox{fig6_2.eps} \vspace{-1.2cm} \caption{The \textit{Chandra} MEG spectra of the NLR of Mrk~3 in the $6-9 \ \rm{\AA}$ wavelength range. The five panels show five different extraction regions from the east (top panel) towards the west (bottom panel). The extraction regions had a width of $0.5\arcsec$ ($139$ pc). The systemic positions of the major lines are marked with solid lines. The wavelengths are corrected for the cosmological redshift (z=0.013509) of Mrk~3. Note that we do not detect a significant redshift or blueshift between the east and west sides of the NLR or relative to the systemic position of the emission lines.} \label{fig:spec1} \end{center} \end{figure*} Although the surface brightness profile demonstrates the presence of jumps, de-projection analysis needs to be performed to determine the exact position and the magnitude of the corresponding density jumps. Therefore, we utilize the \textsc{proffit} software package \citep{eckert11} and construct de-projected density profiles. We assume spherical symmetry for the gas density within each wedge and assume that the gas density can be described with a broken power law model inside and outside the edge. We obtain density jumps of $n_1/n_0=2.68\pm0.54$ at $r_{\rm cut} = 2.14\arcsec \pm 0.08\arcsec$ (or $595\pm22$ pc) towards the west and $n_1/n_0=1.72\pm0.26$ at $r_{\rm cut} = 1.80\arcsec \pm0.08\arcsec$ (or $478\pm22$ pc) towards the east. In the right panel of Figure \ref{fig:sb}, we show the temperature profile of the hot gas towards the east and west.
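The forward model behind such a de-projection can be sketched numerically: a broken power-law density with a jump at $r_{\rm cut}$ is squared (the emissivity of thermal gas scales as $n^2$) and integrated along the line of sight. This is a simplified illustration, not the \textsc{proffit} fit itself; the slopes and outer radius are illustrative choices, not fitted values.

```python
import numpy as np

def surface_brightness(R, jump, r_cut, slope_in=0.5, slope_out=1.2, r_max=50.0):
    """Project an emissivity ~ n(r)^2 along the line of sight, where n(r)
    is a broken power law with a density jump at r_cut. All radii share
    the same (arbitrary) unit; slopes and r_max are illustrative."""
    def density(r):
        inner = jump * (r / r_cut) ** (-slope_in)
        outer = (r / r_cut) ** (-slope_out)
        return np.where(r < r_cut, inner, outer)
    sb = []
    for Ri in np.atleast_1d(R):
        # SB(R) = integral of 2 n^2(r) r / sqrt(r^2 - R^2) dr from R to r_max
        r = np.linspace(Ri * 1.0001, r_max, 4000)
        f = 2.0 * density(r) ** 2 * r / np.sqrt(r ** 2 - Ri ** 2)
        sb.append(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))  # trapezoid rule
    return np.array(sb)
```

With the measured western jump ($n_1/n_0=2.68$ at $r_{\rm cut}=2.14\arcsec$), the projected profile shows the characteristic edge: a smoothly declining profile with a pronounced brightness drop across $r_{\rm cut}$, which is the signature fitted during the de-projection.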
To measure the temperature of the gas, we construct X-ray spectra of each region and fit them with a model consisting of an \textsc{APEC} thermal emission model with the metallicities fixed at $Z=0.24$ Solar (Section \ref{sec:basic}) and a power law model. The latter component accounts for the emission arising from the AGN at $r\lesssim2\arcsec$ and from the population of unresolved low-mass X-ray binaries at $r\gtrsim 2 \arcsec$ \citep{irwin03,gilfanov04}. The best-fit spectra are depicted in Appendix A. The profiles reveal a significant drop at $\sim2\arcsec$ towards the west and a smaller jump towards the east. Specifically, we observe temperature jumps of $T_{1}/T_{0}=1.23\pm0.24$ and $T_{1}/T_{0}=2.67\pm0.39$ towards the east and west, respectively. Based on the presence of a sharp surface brightness jump, and the observed density and temperature ratios across the edge, we conclude that the observed discontinuity on the western side of Mrk~3 is a shock front \citep[e.g.][]{markevitch07,emery17}. To compute the shock Mach number ($M \equiv v/c_{\rm s}$) and the corresponding velocity, we utilize the Rankine-Hugoniot jump conditions \citep{landau59,markevitch07}, which directly connect the pre-shock and post-shock density and temperature with the Mach number. Given that we measure both the density and temperature jumps in the NLR of Mrk~3, we can derive the Mach numbers using these two independent approaches. Based on the pre-shock and post-shock densities we find $M=2.5^{+1.0}_{-0.6}$ towards the west and $M=1.5\pm0.2$ towards the east. Based on the magnitude of the temperature jump, we derive shock Mach numbers of $M=2.4\pm0.3$ and $M=1.5^{+1.0}_{-0.5}$ towards the west and east, respectively. We emphasize that the Mach numbers obtained from the density and temperature jumps are in excellent agreement with each other. For a 0.7 keV plasma the sound speed is $c_{\rm s} = \sqrt{(\gamma kT)/( \mu m_{\rm H})} = 420 \ \rm{km \ s^{-1}}$ using $\gamma = 5/3$ and $\mu = 0.62 $.
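The Mach numbers and sound speed above follow from the standard Rankine-Hugoniot relations; a short sketch with $\gamma=5/3$ and $\mu=0.62$ as in the text:

```python
import math

GAMMA = 5.0 / 3.0

def mach_from_density(r):
    """Invert the Rankine-Hugoniot density jump r = n1/n0 for the Mach number:
    r = (gamma+1) M^2 / ((gamma-1) M^2 + 2)."""
    return math.sqrt(2.0 * r / ((GAMMA + 1.0) - (GAMMA - 1.0) * r))

def temperature_jump(mach):
    """Rankine-Hugoniot temperature jump T1/T0 for a given Mach number."""
    m2 = mach ** 2
    return ((2.0 * GAMMA * m2 - (GAMMA - 1.0)) * ((GAMMA - 1.0) * m2 + 2.0)) \
        / ((GAMMA + 1.0) ** 2 * m2)

def sound_speed(kT_keV, mu=0.62):
    """Adiabatic sound speed in km/s for a plasma of temperature kT (keV)."""
    kT_erg = kT_keV * 1.6022e-9
    m_H = 1.6726e-24  # g
    return math.sqrt(GAMMA * kT_erg / (mu * m_H)) / 1.0e5

m_west = mach_from_density(2.68)   # ~2.5, from the western density jump
m_east = mach_from_density(1.72)   # ~1.5, from the eastern density jump
cs = sound_speed(0.7)              # ~420 km/s for the 0.7 keV pre-shock gas
```

The shock velocities quoted next are then simply $v = M c_{\rm s}$, and the measured temperature jump of $\sim2.67$ towards the west is consistent with `temperature_jump(2.4)`.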
Hence, the velocities are $v = 1050^{+420}_{-252} \ \rm{km \ s^{-1}}$ and $v = 630\pm84 \ \rm{km \ s^{-1}}$ towards the west and east, respectively. The presence of shocks suggests that the hot gas in the NLR of Mrk~3 is undergoing collisional ionization due to the interaction between the radio jet and the circumnuclear material. \subsection{High-resolution spectrum of the NLR} \label{sec:hetg} The bipolar morphology of the X-ray emitting gas combined with the presence of small-scale radio jets and the detection of shocks suggest the presence of an outflow. To characterize the properties of the outflow, we utilize the HETG spectrum of the NLR. Due to the relative proximity of Mrk~3 and the superb angular resolution of \textit{Chandra}, we can perform spatially resolved X-ray spectral diagnostics on the outflow. To this end, we study the emission line spectra at two locations on either side of the nucleus with extraction regions that have a width of $0.5\arcsec$ ($139$ pc). Although the outflow is traced out to larger radii in the ACIS images, the signal-to-noise ratio is not sufficiently high to explore the high-resolution spectrum of the outflow beyond these regions. In Figure \ref{fig:spec1} we show the dispersed spectra of the individual extraction regions in the most relevant $6-9 \ \mathrm{\AA}$ wavelength range. To obtain these spectra, the plus and minus orders were combined, and the depicted wavelengths are corrected for the cosmological redshift. To fit the spectral lines, we utilize a model consisting of an absorbed power law model to account for the continuum and a series of Gaussian lines. When fitting the lines, we fixed the slope of the power law at $\Gamma=1.7$, but left the normalization as a free parameter. The centroid, width, and normalization of the Gaussian lines were also free parameters.
The best-fit line centroid wavelengths were corrected for the cosmological redshift and then compared with the laboratory measurements of strong lines based on the NIST database \citep{verner96}. The spectra reveal a series of H-like and He-like emission lines along with fluorescence lines. In general, the set of identified lines and their best-fit wavelengths are in good agreement with those of \citet{sako00}. In this work, we compare the best-fit line centroids of the strongest emission lines between the east and west side of the nucleus and probe whether the outflowing gas shows a significant blueshift or redshift. Detailed modeling of the emission line spectrum will be the subject of a future paper. Based on the HETG spectra we find that, within measurement uncertainties, the line centroids towards the east and west agree with the laboratory wavelengths (Table \ref{tab:list2}). In addition, they do not show a statistically significant difference between the east and west side of the NLR. Specifically, all lines agree with the expected wavelengths within the $1\sigma$ uncertainties, except for the Si line, for which we measure a $1.5\sigma$ offset from the laboratory wavelength. In the absence of redshifted and blueshifted line centroids, we place upper limits on the outflow velocity of the hot gas. The upper limits typically remain below a few hundred $ \rm{km \ s^{-1}}$. The detailed constraints on the outflow velocity of the gas are listed in Table \ref{tab:list3}. We note that these velocities are significantly lower than those inferred from the Rankine-Hugoniot jump conditions (Section \ref{sec:shock}). This difference is likely caused by the orientation of the NLR, which has an inclination of $5\degr$, implying that it is virtually in the plane of the sky \citep{crenshaw10}. Therefore, if the outflowing gas propagates along the plane of the sky and does not have a significant velocity towards (and away from) the observer, the projected velocities will be close to 0.
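The velocity constraints in Table \ref{tab:list3} follow from the standard Doppler relation applied to the fitted line centroids; a minimal sketch (the example shift below is illustrative, not a measured value):

```python
C_KMS = 2.9979e5  # speed of light in km/s

def los_velocity(lam_obs, lam_lab):
    """Line-of-sight velocity implied by a line-centroid shift;
    positive values are redshifts (motion away from the observer)."""
    return C_KMS * (lam_obs - lam_lab) / lam_lab

# an unshifted centroid implies zero line-of-sight velocity
v_zero = los_velocity(6.183, 6.183)
# an illustrative 0.01 Angstrom shift at the Mg XII 8.420 A line
v_example = los_velocity(8.430, 8.420)   # ~356 km/s
```

Since the fitted centroids agree with the laboratory wavelengths within their uncertainties, the inferred projected velocities cluster around zero, with upper limits set by the centroid errors.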
Hence, the low outflow velocities computed from the HETG data indirectly indicate that the outflow propagates almost in the plane of the sky. The low outflow velocities observed in the \textit{Chandra} HETG data are at odds with the results of \citet{capetti99}, who identified [O~III] emission lines shifted by several hundred $\rm{km \ s^{-1}}$. We speculate that the observed velocity difference might be due to the decoupled nature of the cold [O~III] and the hot X-ray gas. In this picture, the rotating cold gas has a different velocity and temperature structure than the X-ray gas. Therefore, the shock driven by the radio jet will not drag the cold and hot gas components with the same velocity, implying that these gaseous components remain decoupled. In addition, we mention that due to the $\approx0.1\arcsec$ angular resolution of the \textit{HST} Faint Object Camera, \citet{capetti99} extracted narrow regions that were mostly coincident with the locations of bright radio components. In contrast, the \textit{Chandra} HETG spectra cover notably larger regions of $0.5\arcsec$ width, implying that these regions include brighter and fainter parts of the emission. This difference might also contribute to the observed velocity difference. However, further exploring the velocity difference would require a dedicated analysis, which is beyond the scope of this paper. \begin{figure} \begin{center} \leavevmode \epsfxsize=8.5cm\epsfbox{fig7.eps} \caption{The intensities of the resonance, intercombination, and forbidden lines of the Si XIII triplet as a function of distance from the nucleus based on \textit{Chandra} HETG data. The east (negative distances) and west (positive distances) sides show different line ratios, despite the comparable $\mathcal{G}$-ratios. The line ratios hint that multiple ionization processes may play a role.
In the central regions and towards the east photoionization is dominant, while collisional ionization may play a non-negligible role towards the west.} \vspace{0.7cm} \label{fig:gratio} \end{center} \end{figure} \subsection{Line ratios of He-like ions} \label{sec:ratios} The line ratios of He-like triplets, and in particular the $G$ ratios, are suitable for probing the ionization state of the gas. Due to the high energy resolution of HETG, the three most intense lines, namely the resonance ($\rm{1s^2 \ ^1S_0 - 1s2p \ ^1P_1 }$), the intercombination ($\rm{1s^2 \ ^1S_0 - 1s2p \ ^3P_{2,1} }$), and the forbidden lines ($\rm{1s^2 \ ^1S_0 - 1s2s \ ^3S_1 }$), can be individually resolved. Following \citet{porquet10}, we derive the $G$ ratio as: $$ G (T_{\rm{e}}) = \frac{F+I}{R} \ , $$ where $R$, $I$, and $F$ refer to the resonance, intercombination, and forbidden line strengths, respectively. In Mrk~3, the most prominent He-like ion is the Si XIII triplet at $\sim6.7 \ \rm{\AA}$. The He-like lines of Mg and Ne are also detected, but these lines are significantly weaker due to the lower effective area of MEG and the relatively high absorbing column, and hence, cannot be used to compute constraining G ratios. Based on a 100 ks \textit{Chandra} HETG observation, \citet{sako00} concluded that the G-ratios are inconsistent with a pure collisional plasma and are marginally consistent with a photoionized plasma. However, these line ratios were obtained by treating the entire NLR as a single region. Due to the presence of shocks, it is plausible that the line ratios show variation in the east-west direction. Therefore, we characterize the ionization state of the gas as a function of central distance by computing the $G$ ratios of the Si XIII triplet in seven distinct locations.
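The $G$ ratio computation from the fitted line fluxes can be sketched as follows, with simple Gaussian error propagation on the three (assumed independent) line strengths; in practice the uncertainties come from the spectral fits themselves:

```python
import math

def g_ratio(F, I, R, sigma_F=0.0, sigma_I=0.0, sigma_R=0.0):
    """G = (F + I) / R with first-order Gaussian error propagation on the
    forbidden (F), intercombination (I), and resonance (R) line fluxes,
    treating the three uncertainties as independent."""
    G = (F + I) / R
    var = (sigma_F ** 2 + sigma_I ** 2) / R ** 2 \
        + ((F + I) ** 2 * sigma_R ** 2) / R ** 4
    return G, math.sqrt(var)
```

For example, equal summed triplet and resonance fluxes give $G=1$, near the middle of the $\mathcal{G} = 0.7-1.1$ range measured in the NLR.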
The central region is centered on the nucleus of Mrk~3 and has a width of $0.5\arcsec$, while the regions at $0.5\arcsec$ (139 pc), $1\arcsec$ (278 pc), and $1.5\arcsec$ (417 pc) radii towards the east and west each comprise $0.5\arcsec$ wide extraction regions. To fit the lines, we used Gaussian line profiles (\textsc{agauss} in \textsc{XSpec}). Based on the fits we find that the G-ratios in the NLR are in the range of $\mathcal{G} = 0.7-1.1$, and $\mathcal{G} = 1.6\pm0.7$ in the center. Although the G-ratios are comparable within uncertainties at every radius, the line strengths of the resonance, intercombination, and forbidden lines exhibit stark differences. As demonstrated in Figure \ref{fig:gratio}, the intercombination line is weak or virtually absent towards the west, while it is prominent in the central region and in the east of the NLR. In addition, the intensities of the resonance and forbidden lines are comparable towards the west, while the resonance lines are about a factor of $3$ stronger than the forbidden lines towards the east. These results hint that multiple processes may be responsible for ionizing the hot gas.
\begin{table*} \footnotesize \centering \caption{The list of strong emission lines observable in the nucleus and in the NLR of Mrk~3.} \begin{minipage}{14cm} \renewcommand{\arraystretch}{1.8} \begin{tabular}{c c c c c c c} \hline & & -278 pc$^\dagger$ & -139 pc$^\dagger$ & Nucleus & 139 pc$^\dagger$ & 278 pc$^\dagger$ \\ \hline Ion & $\lambda_{\rm{lab}}$ & $\lambda_{\rm{obs}}$ & $\lambda_{\rm{obs}}$ & $\lambda_{\rm{obs}}$ & $\lambda_{\rm{obs}}$ & $\lambda_{\rm{obs}}$ \\ & ($\rm{\AA}$) &($\rm{\AA}$) & ($\rm{\AA}$) & ($\rm{\AA}$) & ($\rm{\AA}$) & ($\rm{\AA}$) \\ \hline Si XIV & 6.183 & $6.173^{+0.023}_{-0.019}$ & $6.177^{+0.012}_{-0.003}$ & $6.184^{+0.005}_{-0.004}$ & $6.185^{+0.007}_{-0.003}$ & $6.186^{+0.007}_{-0.003}$ \\ Si XIII (R)$^\ddagger$ & 6.648 & $6.661^{+0.010}_{-0.007}$ & $6.667^{+0.016}_{-0.021}$ & $6.656^{+0.011}_{-0.011}$ & $6.644^{+0.006}_{-0.010}$ & $6.644^{+0.005}_{-0.004}$ \\ Si XIII (F)$^\ddagger$ & 6.740 & $-$ & $-$ & $6.737^{+0.005}_{-0.008}$ & $6.741^{+0.022}_{-0.007}$ & $6.739^{+0.005}_{-0.013}$ \\ Si K$\alpha$ & 7.128 & $7.122^{+0.009}_{-0.008}$ & $7.123^{+0.002}_{-0.002}$ & $7.121^{+0.002}_{-0.002}$ & $7.121^{+0.003}_{-0.002}$ & $7.121^{+0.003}_{-0.002}$ \\ Mg XII & 8.420 & $8.408^{+0.012}_{-0.010}$ & $8.412^{+0.009}_{-0.003}$ & $8.421^{+0.003}_{-0.003}$ & $8.422^{+0.010}_{-0.009}$ & $8.421^{+0.012}_{-0.009}$ \\ Ne X & 12.132 & $12.115^{+0.011}_{-0.009}$ & $12.131^{+0.015}_{-0.011}$ & $12.132^{+0.011}_{-0.018}$ & $12.135^{+0.011}_{-0.012}$ & $12.135^{+0.010}_{-0.012}$ \\ \hline \end{tabular} \flushleft $^\dagger$ Negative and positive offsets correspond to east and west, respectively. \\ $^\ddagger$(R) and (F) represent the resonance and forbidden lines, respectively. 
\\ \end{minipage} \label{tab:list2} \end{table*} Although the G-ratios are similar to those expected for a collisional plasma, the observed values may be influenced by resonance line scattering, which is relevant for high absorbing column densities ($N_{\rm HI} \gtrsim 10^{21} \ \rm{cm^{-2}} $). This, in turn, could enhance the intensities of the resonance lines, thereby decreasing the G-ratios of a photoionized plasma and mimicking collisional ionization \citep{porquet01,porquet10}. To probe whether resonance line scattering plays a role in the NLR of Mrk~3, we rely on \citet{collins05}, who probed the geometry of the NLR and the extinction as a function of angular position. These authors found that Mrk~3 hosts inner gas disks, which result in a positive extinction gradient from west to east. Specifically, \citet{collins05} measured $E(B-V)=0.12-0.16$ towards the west and $E(B-V)=0.2-0.4$ towards the east. We convert these values to hydrogen column densities following \citet{shull85} as $N_{\rm HI}=5.2 \times 10^{21} \ \rm{cm^{-2}} \times \rm{E(B-V)}$, and conclude that the $E(B-V)$ color excess corresponds to $N_{\rm HI} = (0.6-0.8) \times 10^{21} \ \rm{cm^{-2}}$ and $N_{\rm HI} = (1.0-2.1) \times 10^{21} \ \rm{cm^{-2}}$ towards the east and west, respectively. Thus, resonance line scattering is expected to increase the resonance line intensities, and hence decrease the G-ratios towards the west. As opposed to this, due to the relatively low column densities, resonance line scattering is not expected to significantly influence the observed G-ratios towards the east. Overall, the line intensities of the Si XIII triplet and the $G$-ratios hint that both excitation mechanisms -- photoionization and collisional ionization -- may be present in the NLR of Mrk~3. Specifically, in the central regions and towards the east the main ionizing mechanism may be photoionization, whereas collisional ionization may play a role on the west.
\section{Discussion} \label{sec:discussion} \subsection{Excitation mechanisms} There is a significant debate about the ionization process of the thermal gas in the NLR of Seyfert galaxies. The observed X-ray emission may either originate from photoionized gas or may be due to gas shock heated by the radio jet. Detailed morphological studies of a sample of Seyfert galaxies pointed out the nearly constant [O~III]-to-X-ray flux ratios in the NLR \citep{bianchi06}. Specifically, these studies found a median value of $\mathcal{R}_{[O~III]/X} = 5 $ and a scatter of about 0.3 dex. These arguments suggest that a common ionizing source, photoionization from the nuclear source, may be responsible for the observed emission. However, this simple picture may break down when galaxies with small-scale radio jets are investigated. These galaxies exhibit lower [O~III]-to-X-ray flux ratios, indicating a higher level of ionization. This implies that photoionization may not be the only ionizing source, but the interaction between the radio jets and the dense ISM may also play a role. The picture, in which photoionization is the main ionization mechanism, is further challenged when the morphology of the X-ray gas and radio emission is compared. Specifically, in several radio galaxies (e.g. 3C 293, 3C 305, NGC 4258) the X-ray emission exhibits a broader distribution than the radio jets, hinting that shock heating may play a role in heating the gas to X-ray temperatures \citep[e.g.][]{wilson01,massaro09,lanz15}. Our results obtained for the NLR of Mrk~3 can be summarized as follows. \begin{itemize} \item The X-ray gas and the [O~III] emission share similar morphology. However, the X-ray light distribution is more extended in the east-west direction than the [O~III] emission. \item The [O~III]-to-X-ray flux ratios are non-uniform across the NLR.
In the central regions and in the east they are $\mathcal{R}_{\rm [O~III]/X} \approx 5.6 $, while towards the west the observed ratio drops to $\mathcal{R}_{\rm [O~III]/X} \approx 2.2 $. \item The X-ray and radio morphology shows generally similar structures, but the X-ray emission is significantly broader and surrounds the radio emission. \item We detect shocks with $M= 2.4\pm0.3$ and $M=1.5^{+1.0}_{-0.5}$ toward the west and east, respectively. The shock front towards the west is approximately consistent with the location of the radio hotspot. \item The line ratios of the Si XIII triplets do not favor photoionization as the sole ionizing source in the western regions of the NLR. \end{itemize} Overall, these results strongly suggest that photoionization \textit{and} collisional excitation act together as excitation mechanisms in the NLR of Mrk~3. This result is at odds with the canonical picture, which hypothesized that photoionization is the main excitation mechanism \citep{bianchi06,balmaverde12}. However, this canonical picture may be overly simplistic and may not reflect the complexity of Seyfert galaxies, most of which produce small-scale, weak, bipolar radio-emitting jets \citep{thean00,lal04}. Indeed, small-scale radio jets that are confined within the host galaxy are expected to interact with the surrounding dense interstellar material, which can give rise to shock heating \citep[e.g.][]{baum92,best00}. Therefore, it is plausible that shock heating plays a general, possibly complementary, role in the ionization of the gas surrounding the nuclei.
\begin{table} \caption{Constraints on the gas outflow velocities} \begin{minipage}{8.75cm} \renewcommand{\arraystretch}{2} \centering \begin{tabular}{c | c c c c } \hline Ion& $-278$ pc$^\dagger$ & $-139$ pc$^\dagger$ & $139$ pc$^\dagger$ & $278$ pc$^\dagger$ \\ & $\rm{km \ s^{-1}}$ &$\rm{km \ s^{-1}}$ &$\rm{km \ s^{-1}}$ &$\rm{km \ s^{-1}}$ \\ \hline Si XIV & $146^{+340}_{-146}$ & $97^{+340}_{-146}$ & $49^{+243}_{-194}$ & $291^{+631}_{-146}$ \\ Si XIII (R)$^\ddagger$ & $-181^{+226}_{-181}$ & $181^{+271}_{-451}$ & $361^{+496}_{-496}$ & $857^{+722}_{-948}$ \\ Si XIII (F)$^\ddagger$ & $-45^{+223}_{-579}$ & $45^{+1024}_{-312}$ & ... & ... \\ Mg XII & $36^{+428}_{-321}$ & $71^{+356}_{-321}$ & $35^{+107}_{-107}$ & $285^{+321}_{-107}$ \\ Ne X & $74^{+247}_{-297}$ & $74^{+247}_{-296}$ & $0^{+247}_{-445}$ & $-25^{+371}_{-272}$ \\ \hline \end{tabular} \flushleft $^\dagger$ Negative and positive offsets correspond to east and west, respectively. \\ $^\ddagger$(R) and (F) represent the resonance and forbidden lines, respectively. \\ \end{minipage} \vspace{0.75cm} \label{tab:list3} \end{table} \subsection{Large-scale gas} \label{sec:large} To study the diffuse emission on galaxy scales, we extract an X-ray energy spectrum using an elliptical region with $54.5\arcsec$ and $38.2\arcsec$ axis radii and a position angle of $20\degr$. This region corresponds to the total elliptical aperture of the galaxy as measured by the 2MASS Large Galaxy Atlas \citep{jarrett03}. Since we aim to study the large-scale diffuse emission, we omit the counts originating from the NLR by excluding an elliptical region with $3.3\arcsec \times 2.2\arcsec$ radii centered on the center of Mrk~3. To fit the spectrum of the large-scale diffuse emission, we employ a two-component model consisting of an absorbed \textsc{apec} thermal emission model and a power law model. We fixed the column density at the Galactic value and the slope of the power law at $\Gamma=1.56$.
We find a best-fit temperature and abundance of $kT=0.77\pm0.05$ keV and $Z=0.09^{+0.08}_{-0.04}$ Solar. With these parameters we obtain the absorption corrected $0.3-2$ keV band luminosity of $L_{\rm{0.3-2keV}} = 4.9\times10^{40} \ \rm{erg \ s^{-1}}$, which corresponds to a bolometric luminosity of $L_{\rm{bol}} = 8.3 \times10^{40} \ \rm{erg \ s^{-1}}$. Based on the normalization of the spectrum, we compute the emission measure of the gas and derive the total gas mass following Section \ref{sec:basic}, obtaining $M_{\rm gas} = 1.0\times10^{9} \ \rm{M_{\odot}}$. To place the X-ray luminosity and gas mass of the galaxy into a broader context, we compute the X-ray luminosity per unit K-band luminosity. We derive the K-band luminosity of the galaxy based on its apparent K-band magnitude ($m_{\rm K} = 8.97$) and obtain $L_{\rm K} = 1.8\times10^{11} \ \rm{L_{\rm \odot}}$. Using the $0.3-2$ keV band X-ray luminosity, we find that the specific X-ray emissivity of Mrk~3 is $L_{\rm{0.3-2keV}}/L_{\rm K} = 2.7\times10^{29} \ \rm{erg \ s^{-1} \ L^{-1}_{K,\odot}}$. This value exceeds that obtained in low luminosity ellipticals, but is comparable to the emissivities found in more massive (non-BCG) ellipticals \citep[e.g.][]{bogdan11,goulding16}. Although the NLR demonstrated an outflow in the east-west direction, it is not clear whether the gas is expelled from the galaxy or is retained in the gravitational potential well. If a galactic-scale outflow is present, it may be powered either by the energy input of Type Ia Supernovae or by the AGN. In this picture, the outflowing gas is replenished by the stellar yields originating from evolved stars, which are estimated to shed mass at a rate of $0.0021 \ (L_{\rm K}/L_{\rm K,\odot}) \ \rm{M_{\odot} \ Gyr^{-1}}$ \citep{knapp92}. Given the K-band luminosity of Mrk~3, we estimate that the mass loss rate from evolved stars is $\dot{M} = 0.38 \ \rm{M_{\odot} \ yr^{-1}}$.
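The mass-return estimate is a direct scaling of the K-band luminosity; a sketch that also yields the replenishment time scale of the hot gas:

```python
def stellar_mass_loss_rate(L_K):
    """Mass return from evolved stars, 0.0021 (L_K / L_K,sun) M_sun/Gyr
    (Knapp et al. 1992), converted to M_sun/yr."""
    return 0.0021 * L_K / 1.0e9

L_K = 1.8e11                           # K-band luminosity in L_K,sun
mdot = stellar_mass_loss_rate(L_K)     # ~0.38 M_sun/yr
t_repl = 1.0e9 / mdot                  # years to replenish ~1e9 M_sun of hot gas
```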
This implies that the replenishment time scale of the total observed gas mass is about $t_{\rm{repl}} = 2.6\times10^9$ years. To lift the gas from the potential well of the galaxy, an energy injection rate of $E_{\rm lift} = 7.2 \dot{M} \sigma^2$ is required \citep{david06}, where $\sigma=274 \ \rm{km \ s^{-1}}$ corresponds to the central stellar velocity dispersion. We thus find that the rate of energy input required to lift the gas is $E_{\rm lift} = 4.1\times10^{41} \ \rm{erg \ s^{-1}}$. The available energy from Type Ia Supernovae can be computed by assuming that each supernova releases $10^{51} \ \rm{erg}$ of energy and by computing the Type Ia Supernova rate of the galaxy using the frequency established by \citet{mannucci05} and the K-band luminosity of the galaxy. Hence we obtain a Type Ia Supernova frequency of $6.4\times10^{-3} \ \rm{yr^{-1}}$, implying a total heating rate of $E_{\rm{SNIa}} = 2.0\times10^{41} \ \rm{erg \ s^{-1}}$. This value falls a factor of about two short of the rate required to lift the gas from the potential of Mrk~3, hinting that Type Ia Supernovae cannot provide sufficient energy to drive a galaxy-scale outflow. The minimum power required to drive a galactic-scale outflow is about a factor of five lower than the kinetic power ($E_{\rm kin} \gtrsim 2\times 10^{42} \ \rm{erg \ s^{-1}}$) from the AGN \citep{capetti99}, hinting that the AGN is able to expel the gas from the galaxy. However, the large hot gas mass in the galaxy, combined with the long replenishment time of the gas, argues against the existence of a large-scale outflow that would remove the gas from the gravitational potential well of the galaxy. Instead, it is more likely that the energy from the AGN plays a role in heating the X-ray gas, possibly driving it to larger radii. \section{Summary} In this work we analyzed \textit{Chandra} X-ray observations of the NLR of Markarian 3.
By combining imaging and grating spectroscopy data, we reached the following conclusions: \begin{itemize} \item We confirmed the presence of X-ray emitting gas in the NLR of the galaxy. The average gas temperature and metallicity are $kT=0.85$ keV and $Z=0.24$ Solar. \item We deconvolved the X-ray image to probe the structure of the gas at small angular scales. The X-ray morphology of the hot gas was confronted with the radio and [O~III] morphology. We found that while the X-ray gas exhibits an S-shaped morphology, similar to those observed at other wavelengths, the hot gaseous emission has a broader distribution than the radio or [O~III] emission. \item We demonstrated the presence of shocks towards the west ($M=2.4\pm0.3$) and towards the east ($M=1.5^{+1.0}_{-0.5}$). This detection suggests that shock heating due to the interaction between the radio jets and the dense interstellar material may play a non-negligible role in the ionization of the gas. \item Spectroscopic analysis of the Si XIII triplet (resonance, intercombination, forbidden) lines suggests that both photoionization and collisional ionization may excite the hot gas. \item Using the high-resolution spectra we compared the best-fit line centroids between the east and west sides of the NLR. We did not find statistically significant differences, which hints at projected outflow velocities significantly lower than those inferred from the Rankine-Hugoniot jump conditions. This difference implies that the outflow likely propagates along the plane of the sky. \item Given the common nature of small-scale radio jets in Seyfert galaxies, it is plausible that collisional ionization plays a role in the excitation of the hot gas in the NLR of other Seyfert galaxies as well. \end{itemize} \smallskip \begin{small} \noindent \textit{Acknowledgements.} We thank the referee for the constructive comments. This research has made use of \textit{Chandra} data provided by the Chandra X-ray Center.
The publication makes use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO. This work has also made use of the NASA/IPAC Extragalactic Database (NED). We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). \'A.B., R.P.K., and W.R.R. acknowledge support from the Smithsonian Institution. F.A-S. acknowledges support from \textit{Chandra} grant GO3-14131X. \end{small}
\section*{Highlights} \begin{itemize} \item Complexity characteristics of the scientific collaboration networks of several world-renowned scholars are studied. \item The scientific collaboration community of H. Eugene Stanley self-organizes into a scale-free hierarchy, but this is seen exclusively in the weighted network representation. \item Such a network organization indicates that during its evolution the spread of weights stimulates a balanced growth of interconnections among the nodes. \item Collaboration networks of other outstanding and prolific scholars often develop a star-like form in which the central hubs constitute outliers. \item Spectral decomposition of the normalised Laplacian matrices is shown to be efficient in disentangling internal community ties. \end{itemize} \section{Introduction} \label{intro} The accelerating process of world globalization embraces and pervades all aspects of human activity. Contemporary means and standards of conducting scientific investigations deserve special attention in this context, as their progress constitutes both a condition and a result of this globalization process. Indeed, the world's most advanced contemporary scientific initiatives are based on multinational and often even highly multidisciplinary collaborations. Some of them, like the ones carrying out the high-energy physics experiments at CERN and at DESY in Europe, at Fermilab and at Brookhaven in the US, at KEK in Japan, or the ones conducting the global astronomical sky observations, are largely administratively arranged as far as their organization and the staff involved are concerned. Typically this predetermines the co-authorship composition, usually very numerous, of the resulting, also numerous, publications. However, more spontaneous and at the same time more dynamical forms of scientific cooperation have recently emerged.
In most cases they are driven by the contemporary interdisciplinary trends in research, such that they involve a group of renowned scientists (or even a single one) who, by their ability to create a scientifically stimulating environment, attract others to a productive collaboration, which proliferates further through various disciplines and diversified co-authorship compositions (Adams, 2012). Paul Erd\H{o}s, the famous Hungarian mathematician (De~Castro \& Grossman, 1999), who wrote over 1400 papers with over 500 co-authors and thus inspired the concept of the Erd\H{o}s number, can be considered a forerunner. At present an even more spectacular cascading of scientific collaboration of this kind can be observed. In this regard H.~Eugene Stanley, professor at Boston University, whose scientific activity comprises a broad range of areas such as \textit{Aggregation, Viscous Fingering, Statistical Physics, Phase Transitions, Critical Phenomena, Granular Materials, Surface Physics, Econophysics, Chemistry, Water, Social Networks, Physiology, Medicine, and Neuroscience}, and his constantly increasing number of collaborators create a particularly interesting phenomenon to study. H.E.~Stanley's index of $h=125$, due to ${\it N} = 1208$ published articles co-authored in total by 738 scientists as of March 28, 2016, as listed by the \textit{Web of Science (WoS)}, with all these figures constantly increasing (currently $h=134$ and ${\it N} = 1301$), provides formal evidence of this great success, and his scientific collaboration network (SCN) thus deserves particular attention.
Studying the characteristics of various aspects of scientific collaboration potentially constitutes a significant contribution towards understanding the structure and dynamics of social interactions (Luukkonen, Persson, \& Sivertsen, 1992; Katz, 1994; Grossman \& Ion, 1995; Jin, Girvan \& Newman, 2001; Liljeros et al., 2001; Jiang et al., 2013) but, first of all, it is of great importance for an efficient stimulation of future science development (Wilsdon, 2011; Ausloos, 2013; Mi\'skiewicz, 2013; Bourgrine, 2014; Ausloos, 2014a; Rotundo, 2014). Quantifying the properties of scientific collaboration networks in an informative and transparent way has become highly facilitated (Barab\'asi et al., 2002; Li et al., 2007; Palla, Barab\'asi \& Vicsek, 2007; Lee, Goh, Kahng \& Kim, 2010; Liu, Xu, Small \& Chi, 2011) thanks to the great advances in the field of network theory (Albert \& Barab\'asi, 2002). Most of the existing related works study the global properties of collaboration networks (de~Solla~Price, 1965; Wagner \& Leydesdorff, 2005; Wuchty, Jones \& Uzzi, 2007; Freeman, Ganguli \& Murciano-Goroff, 2014), including their evolutionary aspects (Newman, Strogatz \& Watts, 2001a; Newman, 2001b; Newman, 2001c; Newman, 2004; Tomassini \& Luthi, 2007), or occasionally point to individual country contributions (He, 2009; Perc, 2010). Fewer works focus on the characteristics of selected scientists (Ding, 2011) in their creative role and on the range of their influence in the collaboration network. In order to make this issue and the related characteristics even more exposed, here, for several of the most outstanding scholars working in the domain of the exact sciences, with a particular focus on H. Eugene Stanley, we generate their collaboration networks based exclusively on all the publications involving that particular scholar.
Nodes then represent all the authors who appeared in any of the common publications and links among them are assigned when their names appear together in the same publication. By construction, the node representing the author X defining such a network constitutes the central hub, and all the other nodes in such a network have the collaboration number 1 relative to X, which by analogy to the Erd\H{o}s number can be termed the X number 1 (X $\#1$). \section{Network construction and description} \label{res} All the results presented in this work have been obtained using the data downloaded from the \textit{Web of Science}. This website provides one of the most reliable and complete scientometrics sources. It covers many scientific disciplines belonging to the exact sciences, to engineering, as well as to the life sciences. Still, ensuring that all scientists are clearly identifiable and distinguishable, as needed in the present analysis, appears to be a highly non-trivial task. There are several elements that demand special care. One particularly important element is the proper distinction of different scientists. As has already been estimated (Newman, Strogatz \& Watts, 2001a; Newman, 2001b; Perc, 2010), about $5\%$ of all scientists have the same initials and surnames. What is even more troublesome is that there exist different scientists having both the same name and the same surname. In order to overcome such ambiguity, an additional criterion of the scientific affiliation has been applied. This of course helps, but does not resolve the problem entirely due to the significant mobility of scientists. Another problem is the presence of typos in the names and surnames. Such possible errors have been taken care of by applying the \textit{Levenshtein measure} (Levenshtein, 1966) to strings of letters, here representing the names and surnames.
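The typo-detection step described above can be illustrated with a minimal dynamic-programming implementation of the Levenshtein distance; the author strings below are hypothetical examples, not records from the actual dataset:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) transforming string a into string b."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Two hypothetical author strings differing by a single typo:
print(levenshtein("Barabasi, A.L.", "Barabassi, A.L."))  # -> 1
```

A small distance between two otherwise distinct author strings then flags a likely misspelling of the same name.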
As in essentially all network cases, the topology of an SCN can be expressed by its adjacency matrix $\bf A$ whose elements $a_{ij}$ assume the value 1, thus expressing the existence of a link, if the authors $i$ and $j$ co-author at least one publication. Otherwise $a_{ij}$ equals 0. The corresponding $i$-th node degree $k_i = \sum_{j=1}^N a_{ij}$, where $N$ is the total number of authors (nodes) within the network. A complete description of an SCN requires, however, taking into account not only its topology but also the weights of the links among the nodes (Newman, 2001c; Boccaletti et al., 2006). In an SCN the weight of a given link is determined by the number $n_{ij}$ of publications co-authored by the $i$-th and $j$-th authors. The so-weighted $i$-th node degree, denoted as $k^w$, can be written as $k^w_i=\sum_{j=1}^N a_{ij}n_{ij}$. A more sophisticated way of introducing the strengths of the collaborative ties is to account for the varying number $m_l$ of co-authors of the corresponding publication $l$ by defining \begin{equation} {\it s}_{ij}=\sum_l {{\delta^l_i\delta^l_j}\over{m_l-1}}, \label{strength} \end{equation} where $\delta^l_i$ is 1 if the author $i$ is a co-author of the publication $l$ and zero otherwise, and $l$ runs over all the publications involved (Newman, 2001c). Thus, \begin{equation} s_i=\sum_{j(\neq i)} {\it s}_{ij} \end{equation} expresses the collaborative strength of the author $i$ since, as can easily be verified by substitution, $s_i=\sum_k \delta^k_i$ just equals the number of papers that $i$ has co-authored with others. Distributions of the above three variants of the node degrees will be studied below. Another topologically informative network measure is the clustering coefficient, which for a node $i$ with $k_{i}$ links (edges) is defined as \begin{equation} C_{i}=2q_{i}/k_{i}(k_{i}-1), \end{equation} where $q_{i}$ is the number of edges between the $k_{i}$ neighbours of $i$.
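These definitions can be made concrete with a toy example; the publication list below is invented purely for illustration (author ``X'' standing in for a central hub), and the script computes $k_i$, $k^w_i$, the strength $s_i$ of Eq.~(\ref{strength}) and the clustering coefficient $C_i$ directly from the author sets:

```python
from itertools import combinations
from collections import defaultdict

# Toy corpus: each publication is the set of its authors (invented example)
pubs = [{"X", "A", "B"}, {"X", "A"}, {"X", "B", "C"}, {"X", "A"}]

n = defaultdict(int)    # n[i,j]: number of joint publications (link weight)
s = defaultdict(float)  # s[i,j]: collaborative tie strength, Eq. (1)
for authors in pubs:
    m = len(authors)
    for i, j in combinations(sorted(authors), 2):
        n[i, j] += 1
        s[i, j] += 1.0 / (m - 1)

nodes = sorted(set().union(*pubs))
k   = {i: sum(1       for (a, b) in n if i in (a, b)) for i in nodes}  # k_i
k_w = {i: sum(n[a, b] for (a, b) in n if i in (a, b)) for i in nodes}  # k^w_i
st  = {i: sum(s[a, b] for (a, b) in s if i in (a, b)) for i in nodes}  # s_i

def clustering(i):
    """C_i = 2 q_i / (k_i (k_i - 1)), from the unweighted topology."""
    nb = sorted({b if a == i else a for (a, b) in n if i in (a, b)})
    q = sum(1 for a, b in combinations(nb, 2) if (a, b) in n)
    return 2.0 * q / (len(nb) * (len(nb) - 1)) if len(nb) > 1 else 0.0

print(k["X"], k_w["X"], st["X"], round(clustering("X"), 3))  # -> 3 6 4.0 0.667
```

For the author ``X'' in this toy corpus, $s_X=4$ indeed recovers the number of papers that X has co-authored with others, as stated below Eq.~(\ref{strength}).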
In the case of a hierarchical network, the clustering coefficient of a node with $k$ links follows the scaling law $C(k) \sim k^{-1}$ (Ravasz \& Barab\'asi, 2003). Extension of the clustering coefficient concept to incorporate weights is not unique, however. Several equally acceptable definitions exist in the literature, but they result in different distributions and will therefore not be considered here. \section{Stanley's scientific collaboration network} \label{HES} A sketch of the network central to the present study is shown in Fig.~1. This is H. Eugene Stanley's (HES) scientific collaboration network, in which 738 nodes represent the scientists co-authoring publications with HES up to March 28, 2016. By analogy to the Erd\H{o}s number, these are thus all the scientists whose Stanley number equals 1. Links are drawn between the nodes if there exists a publication co-authored by the corresponding scientists. By construction the node representing HES is linked with all the other nodes and thus constitutes the central hub in this network. There are also direct links between other nodes, reflecting the presence of multiple-author publications. As some of the scientists co-authored many publications with HES in various author compositions, they give rise to several sub-hubs whose initials are explicitly indicated in Fig.~1. The numbers of common publications with HES (denoted as $L_{\rm HES}$), together with their full names, are listed in Table~1. This Table also lists some other selected names from the Stanley $\#1$ network, their number of publications $L_{\rm HES}$ co-authored by HES, their total number $L_{\rm TOT}$ of publications and, for those whose networks are explicitly drawn in Figs.~4-6, the numbers (in parentheses) of publications with no co-authors ($L_1$).
Explicitly indicated are also those lower-rank nodes in this particular network that represent other renowned scientists whose own scientific collaboration networks have an interesting organization. Depending on the number of common publications, the links have different weights. This is reflected in the line thickness in Fig.~1. \begin{figure}[!h] \centering \includegraphics[scale=0.3]{fig_1.eps} \caption{H.~Eugene Stanley's (HES) scientific collaboration network (Stanley $\#1$) determined by all his 1208 publications listed by the {\it Web of Science} as of March 28, 2016. Nodes denote all co-authors of these publications and links are drawn between those authors whose names appear in the same publication. The three lower panels include (i) the cumulative degree distributions, both unweighted $P(k)$ (gray dots) and weighted $P(k^w)$ (black dots), (ii) the cumulative strength distribution $P(s)$ and (iii) the clustering coefficient $C(k)$ distribution, all characterizing this network. Fits are indicated by the dashed lines, while the slope indicated by the dotted line serves to guide the eye.} \label{fig1} \end{figure} The structure of the network in Fig.~1 already visually indicates its hierarchical organization. This organization appears, however, subtle, which finds quantitative evidence in terms of the relations between the three node-degree-related measures introduced above, i.e., $k$, $k^w$ and $s$. Their cumulative distributions, defined as \begin{equation} P(X \geq x) \equiv \int_{x}^{\infty} P(x')dx' , \end{equation} where $x$ denotes either $k$, $k^w$ or $s$, are shown in the log-log scale in the first and the second lower panels of Fig.~1. The scaling exponents are evaluated by linear regression in the log-log scale using \textit{Maximum Likelihood Estimation (MLE)}, and the $R^2$ coefficient of this regression reflects the goodness of fit (Clauset, Shalizi \& Newman, 2009).
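The exponent estimation can be sketched with the standard continuous-tail (Hill-type) maximum likelihood estimator of Clauset, Shalizi \& Newman (2009); the sample below is synthetic, drawn from a pure power law, rather than the actual WoS degree data:

```python
import math, random

def powerlaw_mle(data, xmin):
    """Continuous MLE for the exponent alpha of a power-law tail
    p(x) ~ x^(-alpha), x >= xmin (Clauset, Shalizi & Newman, 2009).
    The cumulative-distribution exponent is then gamma = alpha - 1."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic sample from a pure power law with differential exponent
# alpha = 3 (the value quoted for P(k) below), via inverse transform:
random.seed(0)
sample = [(1.0 - random.random()) ** (-0.5) for _ in range(50000)]

alpha = powerlaw_mle(sample, xmin=1.0)
print(f"alpha ~ {alpha:.2f} (cumulative gamma ~ {alpha - 1:.2f})")  # close to 3
```

With $5\times10^4$ samples the estimator recovers the input exponent to within about a percent.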
\begin{table}[!h] \begin{center} {\small \begin{tabular}{ | c | c |} \hline Author & $L_{\rm HES}/L_{\rm TOT} \ (L_1)$ \\ \hline \hline Stanley, H.E. (HES) & 1208/1208 (47) \\ \hline Havlin, S. (SH) & 307/697 (9) \\ \hline Buldyrev, S.V. (SB) & 267/319 (4) \\ \hline Amaral, L.A.N. (LA) & 81/165 \\ \hline Sciortino, F. (FS) & 69/153 \\ \hline Ivanov, P.C. (PI) & 65/117 \\ \hline Peng, C.-K. (CP) & 62/158 \\ \hline Goldberger, A.L. (AG) & 62/376 \\ \hline Plerou, V. (VP) & 55/59 \\ \hline Podobnik, B. (BP) & 49/118 \\ \hline Barab\'asi, A.-L. (AB) & 22/263 (26) \\ \hline Kert\'esz, J. (JK) & 11/243 \\ \hline Vicsek, T. (TV) & 6/235 (25) \\ \hline Ausloos, M. (MA) & 2/555 (41) \\ \hline Tsallis, C. (CT) & 1/367 (60) \\ \hline \end{tabular} } \end{center} \caption{Statistics of H.E. Stanley (HES) and some of his collaborators: the total number of publications of a given author ($L_{\rm TOT}$), the number of publications co-authored with HES ($L_{\rm HES}$), and, in parentheses, the number of single-author publications ($L_1$).} \label{tab1} \end{table} Clearly, in all three distributions there are segments where straight-line fits can be applied, pointing to some underlying scale-free effects that this network develops. At the same time, there are several essential quantitative differences in the corresponding characteristics, especially between the unweighted ($P(k)$) and the weighted ($P(k^w)$, $P(s)$) cases. For $P(k)$ a straight-line fit $P(X \geq k) \sim k^{-\gamma}$ applies in the $k$-interval of about $10 - 100$ with $\gamma \approx 2$, which in the differential representation corresponds to an exponent of 3. This indicates that the intermediate $k$-degree nodes develop links that not only make them belong to the preferential-attachment universality class of networks, but even correspond exactly to the Barab\'asi-Albert model (Albert \& Barab\'asi, 2002). The central hub, HES, possesses, however, a disproportionately larger degree $k$ and forms a clear outlier.
Very interestingly, this effect disappears almost completely when the link weights, expressed either through $k^w$ or $s$, are taken into account. Within these two measures, the node degree distributions, including HES, tend to align along the same straight line. The corresponding best fit for $P(k^w)$ results in $\gamma_w \approx 1.01$ and for $P(s)$ in $\gamma_s \approx 1.14$. In fact, in $P(s)$ such a fit applies to all the scales, and even the initial flattening of the Zipf-Mandelbrot type for the low-degree nodes, seen in $P(k)$ and in $P(k^w)$ and present also in other scientometrics analyses (Ausloos, 2014b), disappears. This resembles observations made in the linguistic context (Kulig, Kwapie\'n, Stanisz \& Dro\.zd\.z, 2017). There, including punctuation marks in the Zipfian analysis, in addition to words, corrects an analogous flattening, such that the Mandelbrot amendment appears largely redundant and our language emerges as a more consistent composition on all scales. The present result may thus be taken as an additional indication that the strength of the collaborative ties, as defined by Eq.~(\ref{strength}), offers the most consistent way of weighting the authors' contributions. The overall hierarchical organization of the HES network is also confirmed by the clustering coefficient $C(k)$ of a node with $k$ degrees, whose scaling law $C(k) \sim k^{-1}$ (Ravasz \& Barab\'asi, 2003) appears to be convincingly obeyed asymptotically, as can be seen in the lower rightmost panel of Fig.~1.
\section{Networks of Erd\H{o}s and Witten} \label{NEW} In order to confront the organization of the HES network with other possible organizations of networks of this kind, in Fig.~2 we show two analogous scientific collaboration networks: one for Paul Erd\H{o}s and another for the influential mathematical physicist Edward Witten, whose $h$-index of 131 in March 2016 (currently $h=134$) can be identified as the highest among active researchers representing or originating from the exact sciences. The scientometrics characteristics, including the numbers of articles published, the total numbers of co-authors in these published articles and the corresponding $h$-indices of these two scientists and of HES, are listed in Table~2. \begin{center} \begin{tabular}{ | c | c | c | c |} \hline Based on the \textit{Web of Science} & P.~Erd\H{o}s & E.~Witten & H.E.~Stanley \\ \hline \hline Number of articles & 1246 & 319 & 1208 \\ \hline Number of collaborators & 391 & 140 & 738 \\ \hline \textit{h}--index & 60 & 131 & 125 \\ \hline \end{tabular} \label{tab2} \end{center} The {\it Web of Science (WoS)} (http://www.webofknowledge.com) does not list all commonly recognized Erd\H{o}s publications. A much more extended list of Erd\H{o}s' works, including all those listed by {\it WoS}, is provided by the Erd\H{o}s Number Project, which studies research collaborations among mathematicians and is maintained at Oakland University (http://oakland.edu/enp/). Exceptionally, it is thus this list, instead of {\it WoS}, that is used here to construct the Erd\H{o}s collaboration network. This source qualifies 1246 of Erd\H{o}s' works as scientific publications, with 391 co-authors. At the same time this site notes that the total number of Erd\H{o}s' publications equals 1525, with 511 co-authors; apparently not all fulfill the criteria imposed.
As far as Witten is concerned, all his recognized publications are listed by {\it WoS}, including even three multi-author conference contributions. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{fig_2.eps} \caption{Paul Erd\H{o}s' (left) and Edward Witten's (right) scientific collaboration networks, thus Erd\H{o}s $\#1$ and Witten $\#1$. Open dots in Witten's case indicate the cumulative degree distribution $P(k)$ when the three clusters of nodes seen in this network are included. Otherwise the same convention as in Fig.~1 is used.} \label{fig2} \end{figure} Clearly, the topology of the two networks in Fig.~2 is not as extended in terms of the number of internal links as that of HES. They both have a visibly dominant star-like component and scaling of the purely topology-related $P(k)$ (gray dots in Fig.~2) can hardly be claimed. When the weights are taken into account, however, some partial scaling appears. In the Erd\H{o}s network the weighted degree distribution $P(k^w)$ resulting from links among the other nodes displays an approximate scaling over almost two decades in $k^w$, with $\gamma_w \approx 1.55$. It thus has a lower intensity of weighted links than in the HES case. As a result the Erd\H{o}s node still constitutes an outlier whose $k^w$ is separated from all the others by another order of magnitude. Similarly, the clustering coefficient $C(k)$ is much compressed towards smaller $k$ as compared to HES and only the one point representing Erd\H{o}s himself remains an outlier. Interestingly, however, it is placed not much below the slope of $\gamma=1$. Witten's node in his collaboration network is also an outlier, but the links among the other nodes give rise to essentially no scaling, either in their weighted degree distribution $P(k^w)$ (open circles) or in the clustering coefficient $C(k)$. This non-homogeneity in Witten's network appears to be caused predominantly by the three multi-author publications mentioned above.
Removing these three publications results in the distributions sketched by the black dots. The two characteristics then become qualitatively similar to those of Erd\H{o}s, with an even lower intensity of links, as expressed by the corresponding $\gamma_w=1.47$ in the distribution of weighted degrees. As far as the range and quality of scaling are concerned, a similar tendency is displayed by the strength distribution $P(s)$ shown in the lower panels of Fig.~2. It is thus evident that, contrary to the HES network, the two hubs, Erd\H{o}s' and Witten's, stay in their SCNs far outside the distributions resulting from the node degrees of the other members of the corresponding networks. For completeness one may mention here the two obvious extremes of collaboration networks. On the one side, there is the case of no co-authors at all throughout the entire scientific activity (like, for instance, that of Paul A.M. Dirac). The collaboration network then reduces to a trivial single node. On the opposite side, there are large collaborations of a nearly fixed number of participants publishing papers always in the same author compositions. In the corresponding collaboration network any node is connected to all the remaining nodes and thus all of them have the same number of connections, thus the same degree. This kind of network approximately represents large, predominantly administratively established collaborations. \begin{figure}[!h] \centering \includegraphics[scale=0.45]{fig_3.eps} \caption{Time evolution of H.~Eugene Stanley's (HES) scientific collaboration network (Stanley $\#1$) between 1975 and 2015 with the snapshots taken every 10 years, with the corresponding clustering coefficient and degree, both unweighted and weighted, distributions in the lower panels.
The same convention as in Fig.~1 is used.} \label{fig3} \end{figure} All the above possibilities indicate that the HES network, with its hierarchical organization through all levels, is an exception rather than a rule and therefore indeed deserves special attention. Such a collaboration network is of course a dynamical phenomenon and, definitely, some time is needed to attain not only this kind of richness but also a proper balance of links between all the participating nodes. The way the HES network has been evolving since the beginning of his scientific activity is illustrated in Fig.~3 with five snapshots taken every 10 years between 1975 and 2015, with the corresponding degree, both unweighted and weighted, and clustering coefficient distributions in the lower panels. Judging from these degree distributions, a fully developed hierarchical organization had already been attained by around 1995. \section{Networks of Ausloos, Barab\'asi, Buldyrev, Havlin, Tsallis, and Vicsek} As can be seen in Fig.~1, the two dominant sub-hubs in the HES network are those representing Shlomo Havlin (SH) and Sergey V. Buldyrev (SB), the long-term principal collaborators of H. Eugene Stanley. Their own collaboration networks, respectively Havlin $\#1$ and Buldyrev $\#1$, thus overlap sizeably with the one of HES. It is therefore natural to expect that they share some characteristics of its hierarchical organization. These two networks are shown in Fig.~4 together with the corresponding degree, both $k$ and $k^w$, strength $s$, and clustering coefficient $C(k)$ distributions. Some parallels can easily be seen, like, for instance, the fact that in both networks their central hubs, SH and SB, essentially align with the overall trend of the weighted degree distributions in these networks and, thus, are not outliers.
The quality of scaling of the weighted degree distributions is somewhat poorer, especially for SB, as compared to HES, but if one insists on fitting a single straight line, then the result for both $P(k^w)$ and $P(s)$ is consistent with $\gamma \approx 1$ in both networks, similarly as for HES. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{fig_4.eps} \caption{Shlomo Havlin's (SH, left) and Sergey V.~Buldyrev's (SB, right) scientific collaboration networks, thus Havlin $\#1$ and Buldyrev $\#1$, with the corresponding cumulative unweighted and weighted degree distributions and the clustering coefficient distributions. The same convention as in Fig.~1 is used.} \label{fig4} \end{figure} Among the nodes that appear in the HES scientific collaboration network, one can identify many extremely renowned scholars. Several of them are explicitly indicated in Fig.~1 and among those some constitute further significant hubs in addition to the two shown in Fig.~4. The other nodes, rather peripheral in this network, are explicitly indicated for the reason that their own scientific collaboration networks also display a diverse organization. Four such cases, including Marcel Ausloos (MA), Albert-L\'aszl\'o Barab\'asi (AB), Constantino Tsallis (CT), and Tam\'as Vicsek (TV), are shown in Figs.~5 and 6. Clearly, out of these four, the most homogeneous hierarchical organization is revealed by the scientific collaboration network of Marcel Ausloos (Ausloos $\#1$), whose weighted degree distributions, in both $k^w$ and $s$, scale over more than two decades with the scaling exponents, $\gamma_w$ and $\gamma_s$, very close to unity. In this respect, it most resembles the HES case over the period 1995-2005. The four dominant sub-hubs in the MA network are due to Rudi Cloots (143 common publications with MA), Nicolas Vandewalle (75 common publications), Philippe Vanderbemden (62 common publications), and Andr\'e Rulmont (54 common publications).
Similarly, the clustering coefficient of a node with $k$ links for the Ausloos network follows the scaling law $C(k) \sim k^{-1}$ with a quality comparable to the HES case. The Barab\'asi network (Barab\'asi $\#1$), on the other hand, develops no homogeneous hierarchical organization, as the lack of scaling both in all three variants of the node degree distributions and in the clustering coefficient distribution indicates. This, actually, can be inferred directly from the structure of this network, as its concentrations of nodes originate from sparsely connected clusters, some of them mediated exclusively by the central hub, here Barab\'asi himself. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{fig_5.eps} \caption{Marcel Ausloos' (MA, left) and Albert-L\'aszl\'o Barab\'asi's (AB, right) scientific collaboration networks, thus Ausloos $\#1$ and Barab\'asi $\#1$, with the corresponding cumulative unweighted and weighted degree distributions and the clustering coefficient distributions. The same convention as in Fig.~1 is used.} \label{fig5} \end{figure} The other two networks, Tsallis $\#1$ and Vicsek $\#1$, qualitatively resemble the Erd\H{o}s star-like network, with a sparser connectivity of the nodes in Tsallis' network than in Vicsek's, as indicated schematically (since the scaling is only approximate here) by the straight lines with the slopes $\gamma_w \approx 1.4$ (Tsallis) and $\gamma_w \approx 1.3$ (Vicsek). The central hubs constitute outliers separated from all other nodes in both cases. Consistently, the clustering coefficients develop distributions similar to those of the Erd\H{o}s case. \begin{figure}[!h] \centering \includegraphics[scale=0.6]{fig_6.eps} \caption{Constantino Tsallis' (CT, left) and Tam\'as Vicsek's (TV, right) scientific collaboration networks, thus Tsallis $\#1$ and Vicsek $\#1$, with the corresponding cumulative unweighted and weighted degree distributions and the clustering coefficient distributions.
The same convention as in Fig.~1 is used.} \label{fig6} \end{figure} \section{Relation to weighted network model} Comparison of the above results clearly shows that it is the weighted network representation that allows a more complete and informative disentangling of local differences in the scientific collaboration networks and, thus, offers a scientifically more advanced framework. Modelling the observed characteristics of such networks is, however, much more difficult than in the unweighted case. The main reason for this difficulty is that their growth involves mutual influence of the dynamics of links and weights, as well as possible elements of accelerated growth (Dorogovtsev \& Mendes, 2002) allowing the appearance of new links between already existing nodes. Apparently, it is for these reasons that little progress has been attained so far in the rigorous modeling of complex weighted networks. The most closely related existing model of weight-driven growing networks (Barrat, Barthelemy \& Vespignani, 2004), even though still much simplified in relation to the dynamics underlying the evolution of the networks considered here, offers some preliminary unified view and quantifies ranges for the exponents of the degree distributions. In this model the usual preferential attachment (Barab\'asi \& Albert, 1999) is extended to the rule ``busy get busier'' (Barthelemy, Barrat, Pastor-Satorras \& Vespignani, 2005), in which new nodes connect preferentially to the nodes carrying larger weights and being more central in terms of the interaction strength.
Accordingly, the local rearrangements of weights between $i$ and its neighbours $j$ obey the simple rule \begin{equation} w_{ij} \rightarrow w_{ij} + \Delta w_{ij}, \qquad \textrm{where} \qquad \Delta w_{ij} = \delta {w_{ij} \over s_i}, \label{delta} \end{equation} which means that a new link with the node $i$ induces a total increase of activity $\delta$ that is proportionally distributed among the links departing from this node. The model thus involves only one parameter $\delta$, which reflects the fraction of weight transmitted by the new link onto the others. $\delta < 1$ corresponds to a situation in which a new connection does not lead to more intense activity on the existing links. In particular, for $\delta = 0$ the arrival of a new link does not affect the existing weights and the model becomes topologically equivalent to the Barab\'asi-Albert model (Barab\'asi \& Albert, 1999). On the other hand, for $\delta > 1$ a new link multiplicatively bursts the weight on the neighbours. Within this model, in the large time limit, one obtains a power law scaling of the weighted degree distribution with the scaling exponent for the cumulative distribution \begin{equation} \gamma_s = 1 + {1 \over {2 \delta +1}}, \end{equation} which takes values $\gamma_s \in [1,2]$. In the networks considered above, by their construction, all the new links involve the central hubs. From the perspective of this model, a value of the corresponding empirically determined $\gamma_s$ can be viewed as an indication of how such `centers of condensation' stimulate mutual interactions of their nearest neighbours. HES, with his $\gamma_s \approx 1.1$ and, thus, $\delta$ of about 5, appears very stimulative. Interestingly, the same applies to Marcel Ausloos, though his (Ausloos $\#1$) network is not as large. However, it only weakly overlaps with the HES network (unlike Havlin's and Buldyrev's) and can thus be considered independently. The opposite can be inferred for the networks from Figs.~2 and 6.
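The growth rule of Eq.~(\ref{delta}) is straightforward to simulate. The sketch below is a minimal illustration of the weight-driven model discussed above, not a faithful reproduction of the original study; the fully connected seed, the unit link weight $w_0=1$, the choice $m=2$ links per new node, and the sequential weight updates are our own simplifying assumptions.

```python
import numpy as np

def bbv_network(n_nodes, delta, m=2, seed=0):
    """Minimal sketch of the weight-driven model of Barrat, Barthelemy
    & Vespignani (2004): every new node attaches m links of weight
    w0 = 1, chosen preferentially by strength s_i, and each new link to
    node t triggers Delta w_tj = delta * w_tj / s_t on t's existing links."""
    rng = np.random.default_rng(seed)
    w = {}                      # symmetric weights, keyed by (min, max) node pair
    s = np.zeros(n_nodes)       # node strengths s_i = sum_j w_ij
    for i in range(m + 1):      # fully connected seed of m + 1 nodes
        for j in range(i):
            w[(j, i)] = 1.0
            s[i] += 1.0
            s[j] += 1.0
    for new in range(m + 1, n_nodes):
        probs = s[:new] / s[:new].sum()
        targets = rng.choice(new, size=m, replace=False, p=probs)
        for t in targets:
            s_t = s[t]
            for pair in list(w):            # redistribute delta over t's links
                if t in pair:
                    j = pair[0] if pair[1] == t else pair[1]
                    dw = delta * w[pair] / s_t
                    w[pair] += dw
                    s[j] += dw
            w[(t, new)] = 1.0               # the new link itself, weight w0 = 1
            s[t] += 1.0 + delta             # w0 plus the redistributed delta
            s[new] += 1.0
    return w, s

w, s = bbv_network(300, delta=1.0, m=2, seed=1)
```

In the large-network limit the model predicts the cumulative strength exponent $\gamma_s = 1 + 1/(2\delta+1)$, and setting $\delta=0$ reduces the dynamics to the topological Barab\'asi-Albert case.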
The central hubs of the networks in Figs.~2 and 6, with $\gamma_s < 1$, appear largely neutral in influencing interactions among their nearest neighbours. As far as this kind of neutrality is concerned, an extreme case is the Erd\H{o}s SCN, whose $\delta$ consistent with the observed $\gamma_s$ would be close to zero. \section{Spectral decomposition of Laplacian matrices} The adjacency matrix $\bf A$ records all the information about the nodes and how they are interconnected. The most mathematically consistent way of formulating this information is in terms of the Laplacian matrix (Bapat, 2014) \begin{equation} {\bf L} = {\bf D} - {\bf A}, \label{laplace} \end{equation} where $\bf D$ is a diagonal matrix composed of the nodes' weighted degrees. Equivalently, a normalized Laplacian matrix \begin{equation} {\bf {\hat L}} = {\bf D}^{-1/2} {\bf L} {\bf D}^{-1/2} = {\bf I} - {\bf D}^{-1/2} {\bf A} {\bf D}^{-1/2} \label{nlaplace} \end{equation} is more appropriate for comparative purposes (Chung, 1997). Thus, studying spectral properties of the Laplacian matrix offers an alternative way of getting insight into the organization of the corresponding network (Merris, 1994). This implies solving the equation \begin{equation} {\bf {\hat L}} {\bf x}_i = \lambda_i {\bf x}_i , \end{equation} which determines the eigenvectors ${\bf x}_i$ and the corresponding eigenvalues $\lambda_i$. Since ${\bf {\hat L}}$ can be expressed as a product \begin{equation} {\bf {\hat L}} = {\bf B}{\bf B}^T, \label{BB} \end{equation} where ${\bf B} = {\bf D}^{-1/2}{\bf B}_0$ and ${\bf B}_0$ is the incidence matrix, whose rows are indexed by the vertices and whose columns are indexed by the edges of the network, all the eigenvalues of the normalized Laplacian are real and non-negative. Furthermore, by construction the sum of entries in all the rows (or columns) of ${\bf L}$ is zero, which makes the Laplacian singular and, thus, results in one zero eigenvalue (inherited by ${\bf {\hat L}}$) representing the most collective mode (Dro\.zd\.z, Kwapie\'n, Speth \& W\'ojcik, 2002).
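The spectral statements above can be checked on a toy topology. The sketch below is illustrative only (the star network and the unit weights are our own choice): it constructs ${\bf {\hat L}}$ of Eq.~(\ref{nlaplace}) for a single central hub with five leaves and exposes the zero mode, the non-negative spectrum bounded by 2, and its centering at unity.

```python
import numpy as np

def normalized_laplacian(A):
    """L_hat = I - D^{-1/2} A D^{-1/2} for a symmetric, non-negative
    weighted adjacency matrix A; D holds the weighted degrees."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, np.inf) ** -0.5  # isolated nodes map to 0
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Star network: one central hub connected to n - 1 leaves with unit weight
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0
lam = np.linalg.eigvalsh(normalized_laplacian(A))  # sorted ascending
```

For the star one finds the eigenvalues $\{0, 1, 1, 1, 1, 2\}$: one collective zero mode, a degenerate bulk at unity, and the upper bound 2, an extreme version of the hub-dominated spectra discussed in the text.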
A null hypothesis of purely random links in such networks thus corresponds to the Wishart ensemble of random matrices with the reduced rank (Janik \& Nowak, 2003). In Fig.~7 we show the eigenvalue distributions of the normalized Laplacian matrices constructed from the strength $s$ for all nine networks considered above, as well as for an additional one composed of 2220 nodes, which includes all papers by HES and also all papers by each of the six authors (MA, AB, SB, SH, CT, TV), whose own networks overlap with that of HES and are shown separately in Figs.~4-6. This extended network thus already involves many Stanley $\#2$ nodes and is denoted as {\it All (Stanley)}. Consistent with the structure of the normalized Laplacian matrix, Eq.~(\ref{nlaplace}), the eigenvalues are centered on both sides of unity and one zero eigenvalue is of course always present. Some correlation between the relative locations of these eigenvalues and the weighted degree distributions of the corresponding networks in Figs.~1-6 can also be seen. When the central node constitutes an outlier in the degree distribution, the other eigenvalues of $\bf {\hat L}$, while developing a gap with respect to the zero mode, are spread more uniformly as compared to the case of no outlier. In the latter case, the connectivity of the nodes in the corresponding network is larger, which drives a larger fraction of eigenvalues to concentrate closer to unity, consistent with the Wishart-type product structure (Eq.~(\ref{BB})) of the matrix ${\bf {\hat L}}$. \begin{figure}[!h] \centering \includegraphics[scale=0.42]{fig_7.eps} \caption{Eigenvalue spectra of the normalized Laplacian matrices generated from the strength $s_{ij}$ of links as in Eq.~(\ref{strength}) for the scientific collaboration networks determined by all the publications involving the authors listed along the horizontal axis.
The additional {\it All (Stanley)} network results from all publications by Stanley and by the other six authors (Ausloos, Barab\'asi, Buldyrev, Havlin, Tsallis, and Vicsek), whose own collaboration networks, overlapping with Stanley's, are shown separately in Figs.~4-6.} \label{fig7} \end{figure} A very valuable insight into the organization of networks is offered by the eigenvectors ${\bf x}_i$ with components $\{ x^j_i \}$ ($\sum_j |x^j_i|^2 = 1$), as they reflect the composition of the orthogonal `modes' in the network (Kwapie\'n \& Dro\.zd\.z, 2012) and may thus project out communities within the corresponding network (Lancichinetti, Fortunato \& Kert\'esz, 2009). The potential of such a procedure for the most involved case considered here, {\it All (Stanley)}, is demonstrated in Fig.~8, which shows the distributions of the eigenvector components $x^j_i$ for the six eigenvectors corresponding to the eigenvalues starting from the lowest ($i=1$) and going upwards to $i=6$. The $i=1$ eigenvector is seen to represent the most collective structure, as essentially all its components contribute visibly. The arrows indicate to whom the largest contributions belong. In $i=1$ they are seen to belong to the dominant hubs and the largest contribution here is due to HES. The other eigenvectors are already much more selective (and involve negative signs as well). $i=2$ is dominated by Ausloos and $i=3$ by Tsallis, together with their respective collaborators. $i=4$ again represents a mixture of the names from $i=1$, but here Buldyrev's sector somewhat dominates. $i=5$ and $i=6$, on the other hand, project out the Barab\'asi and Vicsek sectors and their own collaborative relations. Such a composition of the consecutive eigenvectors is quite understandable when mutual overlaps and convolutions of the sub-communities inside the {\it All (Stanley)} network are explicitly inspected.
Thus it indicates the potential utility of the procedure sketched here for community detection in complex networks of larger size, where a visual identification of such effects is most likely prohibitively difficult. \vspace{0.5cm} \begin{figure}[!h] \centering \hspace{-1cm} \includegraphics[scale=0.5]{fig_8.eps} \caption{Components $x^j_i$ of the six consecutive eigenvectors $i=1,...,6$, starting with the `zero mode' upwards, of the normalised Laplacian matrix $\bf {\hat L}$ defined by Eq.~(\ref{nlaplace}), whose entries are constructed from the collaborative strength $s_{ij}$ (Eq.~(\ref{strength})) for the scientific collaboration network denoted as {\it All (Stanley)}.} \label{fig8} \end{figure} \section{Summary} As the above explicit inspection of the scientific collaboration networks of several renowned scholars shows, their structure reveals a rich diversity of quantifiable characteristics. This points to the utility of such a representation for scientometric and bibliometric studies, to quantify characteristics of communities, and offers an interesting perspective to inspect the mechanisms that lie behind the development and the spreading tendencies of contemporary scientific collaboration. From the complex networks point of view, the most interesting network is the one of H. Eugene Stanley (Stanley $\#1$), which is of a large size and develops a visibly hierarchical organization through all its scales. Notably, this is seen in the weighted network representation and confirmed by the resulting degree and clustering coefficient distributions. A particularly significant fact in this context is that the central node defining this network, HES, obeys the same functional form in these two distributions as all the other nodes belonging to this network.
Interestingly, as Fig.~3 shows, this network attained such an organization already around 1995 and largely preserved it through the next 20 years, even though it more than doubled in size until the present. Similar characteristics are observed in the Ausloos $\#1$ and, to some extent, also in the Havlin $\#1$ networks, though these two are about a factor of two smaller and the latter in addition strongly overlaps with the HES network. Self-organizing growth of such networks demands, of course, a special balance between the number of new nodes incoming in time (with new publications) and the appearance of new links (co-authorships) among the nodes already belonging to the network, with some elements of preferential attachment in distributing the links, so that the hierarchical organization builds up and the weighted degree distribution assumes the scale-free form. An existing relevant model, in which the usual preferential attachment (Barab\'asi \& Albert, 1999) is extended to the rule ``busy get busier'' (Barthelemy, Barrat, Pastor-Satorras \& Vespignani, 2005), where new nodes connect preferentially to the nodes carrying larger weights and being more central in terms of the strength of interactions, offers some quantitative insight into the underlying mechanism. In particular, it indicates that, in networks with such characteristics as the one of HES, the spread of weight is stimulated to multiplicative bursts over the neighbouring nodes, and this leads to a balanced distribution of interconnections among them, such that the scale-free Stanley $\#1$ hierarchy develops. The unquestionably interdisciplinary and diversified character of the corresponding HES (and largely also Ausloos) scientific activity may constitute a significant factor that facilitates, and is even likely to favour, the spontaneous generation of such interconnections, as is needed for their balanced growth.
In the cases of more specialized activity, the growth of the corresponding networks may progress through a smaller number of mutual interconnections. This is then expected to result in a deficit of such interconnections relative to the number of links acquired by the central hub and, therefore, such a hub is more likely to maintain an outlier position in the degree distribution. Relating the Stanley $\#1$ and Ausloos $\#1$ networks to those of Erd\H{o}s or Witten, whose scientific activities are more specialized, indicates that this may indeed be a relevant element boosting the potential of this kind of networks to become hierarchical. Of course, the development of such a hierarchy involves and is embedded in many overlapping communities (Palla, Der\'enyi, Farkas \& Vicsek, 2005). A sample spectral analysis in Section~6 of the normalised Laplacian matrix corresponding to the extended scientific collaboration network, including also a number of Stanley $\#2$ nodes, points to the usefulness of such a procedure for identifying and characterising the related component communities in weighted networks. \newpage
\section*{Architecture} Figure \ref{architecture} shows how this PIC architecture allows us to realize a fully integrated coherent optical neural network (FICONN) through the following stages: (i) the \textit{transmitter} (TX) maps input vectors $\mathbf{x}_{(j)}$ to an optical field vector $\mathbf{a}_{(j)}^{(1)}$ by splitting an input laser field into MZI modulators $m=1,2,...,6$, each of which encodes one element of $\mathbf{x}_{(j)}$ into the amplitude $A_m$ and phase $\phi_m$ of the transverse electric field component $a_{(j),m}^{(1)} = A_m e^{i (\omega t + \phi_m)}$; (ii) the \textit{coherent matrix multiplication unit} (CMXU), consisting of an MZI mesh \cite{bogaerts_programmable_2020,shen_deep_2017,zhang_optical_2021}, transforms $\mathbf{a}^{(1)} \rightarrow \mathbf{b}^{(1)} = U^{(1)} \mathbf{a}^{(1)}$ through passive optical interference; and (iii) the \textit{programmable nonlinear optical function unit} (NOFU) applies the activation function to yield the input to the next layer, $\mathbf{a}^{(2)}=f(\mathbf{b}^{(1)})$. Following the input layer, the PIC directly transmits the optically-encoded signal into a hidden layer, composed of another CMXU and six NOFUs, that implements the transformation $\mathbf{a}^{(3)} = f(U^{(2)}\mathbf{a}^{(2)})$. The final layer $U^{(3)}$, implemented with a third CMXU, maps $\mathbf{a}^{(3)}$ to the output $\mathbf{b}^{(3)}$. Inference therefore proceeds entirely in the optical domain without photodiode readout, amplifiers, or digitization between layers. An \textit{integrated coherent receiver} (ICR), shown in Figure \ref{architecture}(iv), reads out the amplitude and phase of the DNN output by homodyning each element of the vector $\mathbf{b}^{(3)}$ with a common local oscillator field $E_\mathrm{LO}$. The DNN output is read out by transimpedance amplifiers that convert the photocurrent vector $\mathbf{i}^{PD}$ to a voltage vector $\mathbf{V}^{PD}$.
$\mathbf{V}^{PD}$ is digitized and then normalized by the sum of voltages measured across all channels, $\sum \mathbf{V}^{PD}$, to yield a quasi-probability distribution $\mathbf{V}^{\mathrm{norm}}$ for a classification task. Each sample $\mathbf{x}_{(j)}$ is assigned the label corresponding to the highest probability, i.e.\ $\mathrm{argmax}(\mathbf{V}^{\mathrm{norm}})$. \begin{figure*} \includegraphics[width=7in]{Fig2.png} \caption{\textbf{a)} Microscope image of the fabricated PIC. Key subsystems of the circuit are highlighted in the same color as the architecture depicted in Figure \ref{architecture}. The signal path through the PIC is indicated in white, while the local oscillator path is outlined in blue. \textbf{b)} Photonic packaging of the PIC for lab testing. Insets show side and top-down views of the packaged PIC. \textbf{c)} The fabricated transmitter splits off light coupled into the PIC to a local oscillator and fans out the remainder to six input channels. The inset shows the measured optical response of a typical channel. \textbf{d)} The coherent matrix multiplication unit is implemented with a Mach-Zehnder interferometer mesh. Each MZI comprises two directional couplers (DCs), an internal phase shifter $\theta_1$ between the two splitters, and an external phase shifter $\theta_2$ on one output mode. The histogram shows the measured fidelity of 500 arbitrary unitary matrices implemented on a single layer using a ``direct'' approach (orange) and an approach that takes into account hardware errors and thermal crosstalk (blue). \textbf{e)} The integrated coherent receiver (ICR). \textbf{f)} One channel of the ICR. Signal and LO are interfered on a 50-50 MMI and measured using balanced detectors.
} \label{PIC} \end{figure*} \section*{Experiment} We implemented the FICONN architecture in a commercial silicon photonic foundry process incorporating low-loss edge couplers and waveguides, compact phase shifters, high-speed waveguide-integrated germanium photodiodes, and efficient carrier-based microring modulators. Figure 2a shows the PIC, fabricated in a silicon-on-insulator (SOI) process, which monolithically integrates all FICONN subsystems. Demonstrating our system, which required control of 169 active devices and stable optical coupling, necessitated developing a custom photonic package for lab testing. This package, shown in Figure 2b, interfaces the active devices on chip to driver electronics through 236 wirebonds to a printed circuit board. Input light is coupled into the circuit through a single channel of a polarization-maintaining fiber array glued to the chip facet with index-matching epoxy. No light is coupled out of the PIC, as all readout is done on chip with the ICR. We measured an end-to-end loss for our system of 10 dB, including 2.5 dB fiber-to-chip coupling loss. As the depth of our system is 91 layers of optical components from input to readout, the end-to-end loss implies a per-component insertion loss of less than 0.1 dB, enabling single-shot inference across all DNN layers without optical re-amplification. The key subsystems of the PIC are depicted in Figures 2c$-$f. In Figure 2c, we show the transmitter for encoding input vectors into the FICONN. The light coupled into the chip is first split with an MZI into a local oscillator (LO) path, which is directed to the ICR, and a signal path, which is fanned out to six channels. Each channel of the transmitter comprises an MZI, which programs the amplitude of one element of $\mathbf{a}^{(1)}_{(j)}$, and a phase shifter on the output that encodes the phase. 
As the inset shows, a typical channel realizes more than 40 dB of extinction, enabling programming of input vectors with more than 13 bits of precision. Figure \ref{PIC}d shows the coherent matrix multiplication unit (CMXU), which computes linear transformations in the DNN. The CMXU comprises an MZI mesh \cite{bogaerts_programmable_2020} of 15 devices, connected in the Clements configuration \cite{clements_optimal_2016}, which implements an arbitrary $6 \times 6$ unitary operation $U^{(1)}$ on the optical fields $\mathbf{a}^{(1)}$. As $U^{(1)}$ is unitary, the CMXU conserves optical power in the system, with the exception of component insertion losses. Unitary weighting, which redistributes light between optical modes but does not attenuate it, minimizes optical losses and enables single-shot DNN inference without re-amplification or readout between layers. Training unitary layers also avoids the vanishing gradient problem, improving optimization of deep and recurrent neural networks \cite{pmlr-v70-jing17a}. We benchmarked the matrix accuracy of the CMXU by programming 500 random $6 \times 6$ unitary matrices sampled from the Haar measure into the device and measuring the fidelity $F = \mathrm{Tr}[U^\dagger_\mathrm{programmed} U_\mathrm{measured}]/6$. In the histogram in Figure \ref{PIC}d, we show the measured fidelity obtained with a ``direct'' programming, where we algorithmically decompose the phase shifter settings as outlined in \cite{clements_optimal_2016}, and with a modified programming that corrects for hardware errors, losses, and thermal crosstalk \cite{Bandyopadhyay:21, hamerly_stability_2021, hamerly_accurate_2021}. While the direct programming only achieves a matrix fidelity of $\langle F \rangle = 0.900 \pm 0.031$, correcting for hardware non-idealities improves this value to $\langle F \rangle = 0.987 \pm 0.007$ for the CMXU. To the best of our knowledge, this is the highest reported fidelity for a programmable photonic matrix processor.
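The fidelity benchmark is easy to reproduce in simulation. The following sketch is an illustration, not the calibration code; taking the modulus of the trace, which discards a physically irrelevant global phase, is our convention. It draws a Haar-random $6 \times 6$ unitary and evaluates $F$ for a perfect and for a slightly phase-perturbed realization.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-random n x n unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diag(r) / np.abs(np.diag(r))  # phase correction for Haar measure
    return q * ph

def fidelity(u_prog, u_meas):
    """F = |Tr(U_prog^dagger U_meas)| / n, equal to 1 iff the two
    unitaries agree up to a global phase."""
    n = u_prog.shape[0]
    return abs(np.trace(u_prog.conj().T @ u_meas)) / n

rng = np.random.default_rng(0)
u = haar_unitary(6, rng)
f_perfect = fidelity(u, u)
# small random phase errors on the output modes degrade the fidelity
errors = np.diag(np.exp(1j * 0.05 * rng.standard_normal(6)))
f_noisy = fidelity(u, errors @ u)
```

Averaging such fidelities over many Haar-random matrices mimics the benchmark reported in the histogram of Figure \ref{PIC}d.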
Figure \ref{PIC}e shows the integrated coherent receiver (ICR), which measures the amplitude and phase of the output signal $\mathbf{b}^{(3)}$ of the DNN. Each channel, as shown in Figure \ref{PIC}f, interferes the signal field with the LO using a 50-50 multimode interferometer (MMI) and measures the outputs with a pair of balanced detectors. A phase shifter is used to select the quadrature being read out. \begin{figure*} \includegraphics[width=7in]{Fig3.png} \caption{\textbf{a)} The fabricated NOFU. A programmable MZI determines the fraction of light tapped off to the photodiode, and a waveguide delay line synchronizes the optical and electrical pulses. A \textit{pn}-doped microring resonator modulates the incident field. \textbf{b)} Circuit diagram of resonant EO nonlinearity. The photocurrent $I_p$ directly drives a \textit{pn}-doped resonant modulator. No amplifier stage is required between the two and the devices are directly connected on chip. By adjusting the bias voltage $V_B$, the nonlinearity can be operated in forward or reverse bias. \textbf{c)} Left: Detuning of the cavity resonance at various incident optical powers when operated in carrier injection mode ($V_B > 0$). Right: Cavity detuning in carrier depletion mode ($V_B < 0$). Our system realizes close to a linewidth detuning without the use of any amplifier, improving energy consumption and latency of the nonlinearity. A full linewidth detuning can be realized by further engineering the cavity finesse. \textbf{d)} Activation functions measured on chip. Arbitrary function shapes can be realized by adjusting the cavity detuning $\Delta \lambda$ and fraction of light $\beta$ tapped off to the photodiode.} \label{activation} \end{figure*} The programmable nonlinear optical function unit (NOFU) is shown in Figure 3. To realize a programmable coherent optical activation function, we developed the resonant electro-optical nonlinearity shown schematically in Figure \ref{architecture}(iii).
This device directs a fraction $\beta$ of the incident optical power $|b|^2$ into a photodiode by programming the phase shift $\theta$ in an MZI. The photodiode is electrically connected to a $pn$-doped resonant microring modulator, and the resultant photocurrent (or photovoltage) detunes the resonance by either injecting (or depleting) carriers from the waveguide. The remainder of the incident signal field passes into the microring resonator; the nonlinear modulation of the electric field $b$ by the cavity, which is dependent on the incident optical power $|b|^2$, results in a coherent nonlinear optical function for DNNs. Setting the detuning of the cavity and the fraction of optical power tapped off to the photodiode determines the implemented function. Figure \ref{activation}a shows the fabricated device, where the photodiode output is directly connected on chip to the modulator. An integrated heater aligns the microring resonance to the programmed detuning, and an optical delay line placed between the tunable coupler and modulator synchronizes the optical and electrical pulses. \begin{figure*} \includegraphics[width=7in]{Fig4.pdf} \caption{\textbf{a)} A multivariate cost function $\mathcal{L}(\mathbf{\Theta})$ can be minimized by computing the directional derivative of the function along a random direction (black). This directs the optimization along the component of the gradient (red) parallel to the search direction. Over multiple iterations, the steps taken along random directions average to follow the direction of steepest descent to the minimum. \textbf{b)} \textit{In situ} training procedure. At every iteration, the directional derivative of the cost function $\mathcal{L(\mathbf{\Theta})}$ is computed in hardware along a randomly chosen direction $\mathbf{\Delta}$ in the search space. $\mathbf{\Delta}$ is chosen from a Bernoulli distribution to be $\pm \delta$. 
The weights $\mathbf{\Theta}$ are then updated by the measured derivative following a learning rate $\eta$ chosen as a hyperparameter of the optimization. \textbf{c)} \textit{In situ} training of a photonic DNN for vowel classification. We obtain 92.7\% accuracy on a test set, which is the same as the performance (92.7\%) obtained on a digital model with the same number of weights. Despite not having direct access to gradients, our approach produces a training curve similar to those produced by standard gradient descent algorithms.} \label{training} \end{figure*} The electrical circuit for the NOFU is shown in Figure \ref{activation}b. Incident light generates a reverse current in the photodiode; depending on the bias voltage $V_B$, this either injects carriers into the modulator or generates a photovoltage that depletes the modulator of carriers. Figure \ref{activation}c shows the device response in injection (left) and depletion modes (right). In injection mode, optical power modulates both the loss and phase of the resonator, producing a strong nonlinear response to the incident field $b$. In depletion mode, we observe nearly a linewidth detuning when the incident light is switched on vs.\ off, which is induced by the voltage produced by the photodiode. The NOFU is designed to implement programmable nonlinear activation functions at high speeds with ultra-low energy consumption. This required separately optimizing the cavity parameters, which determine the microring response time, and closely integrating the photodiode and modulator together on the PIC to minimize total device capacitance, and therefore the $RC$ time delay. In injection mode, we found that 75~$\mu$A photocurrent was sufficient to detune the resonator by a linewidth. As each NOFU performs the equivalent of two multiplications in digital electronics, over a carrier lifetime of $\sim$1 ns this corresponds to an energy consumption of 30 fJ per nonlinear operation (NLOP). 
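A simple way to see how tapping power and detuning a cavity yields a coherent nonlinearity is the following toy model. It is our own illustration, not the device equations: the Lorentzian cavity response, the parameters $\beta$, $\delta_0$, $g$, and their values are all assumptions. In this model the photocurrent-induced detuning grows linearly with the tapped power and the remaining field passes through a critically coupled resonator.

```python
import numpy as np

def nofu(b, beta=0.2, delta0=0.0, g=2.0):
    """Toy NOFU activation: a fraction beta of |b|^2 is tapped to the
    photodiode, the resulting photocurrent detunes a critically coupled
    Lorentzian cavity by g linewidths per unit tapped power (on top of a
    static detuning delta0), and the remaining field is transmitted
    through the detuned cavity."""
    b = np.asarray(b, dtype=complex)
    tapped = beta * np.abs(b) ** 2
    delta = delta0 + g * tapped              # detuning in linewidths
    t = 1j * delta / (1.0 + 1j * delta)      # cavity amplitude transmission
    return np.sqrt(1.0 - beta) * t * b

# Low inputs are strongly suppressed while high inputs pass almost
# unattenuated, giving a smooth ReLU-like coherent activation
weak, strong = nofu(0.1), nofu(10.0)
```

Varying $\beta$ and $\delta_0$ reshapes the function, mimicking the programmability shown in Figure \ref{activation}d.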
Compared to prior approaches \cite{williamson_reprogrammable_2020, ashtiani_-chip_2022}, the NOFU directly drives the modulator through the photodiode and eliminates the amplifier stage between them. This greatly improves the latency and energy efficiency of the device, as high speed transimpedance amplifiers consume hundreds of milliwatts of power \cite{ahmed_34gbaud_2018}. For our device, incorporating such an amplifier would have increased the power consumption by two orders of magnitude to about 3 pJ/NLOP. Our design, which eliminates intermediate amplifier circuitry and is therefore ``receiverless'' \cite{miller_attojoule_2017}, is not only more energy-efficient, but also eliminates the latency introduced by the amplifier. In Figure \ref{activation}d, we show several of the activation functions measured on chip. The programmability of the device enables a wide range of nonlinear optical functions to be realized. By tuning the fraction of power tapped off to the photodiode and the relative detuning of the cavity, we can not only program the form of the nonlinear function, but also train it during model optimization. \section*{\textit{In situ} training} The accuracy of the inference output depends on the model parameters $\mathbf{\Theta}$ of the FICONN system, comprising a total of $N_\mathrm{model}= N_\mathrm{layer} N_\mathrm{neuron}^2 + 2N_\mathrm{neuron}(N_\mathrm{layer}-1) = 132$ real-valued phase shifter settings controlled with 16 bits of precision. These parameters can either be determined offline by training on a digital computer \cite{shen_deep_2017, sludds_delocalized_2022, bernstein_single-shot_2022}, using a digital model of the hardware \cite{wright_deep_2022}, or by training the hardware parameters \textit{in situ}. Training \textit{in situ} takes advantage of low latency inference on optical hardware, reducing the time and energy required for model optimization. 
Previous work on \textit{in situ} training has focused on developing optical implementations of ``backpropagation,'' which is the standard for training electronic DNNs \cite{hughes_training_2018, pai2022}. However, these approaches train only the linear layers of a photonic system and require evaluating gradients of activation functions on a digital system, thereby limiting the optical acceleration obtained by computing a multi-layer DNN in a single shot. Alternatively, genetic algorithms have been used to optimize weights on chip \cite{zhang_efficient_2021}, but they are challenging to scale to large model sizes and require many generations to converge. We trained the model parameters of the FICONN \textit{in situ}, including those of the activation functions, by evaluating the derivatives of those parameters directly on the hardware. To the best of our knowledge, this is the first demonstration of \textit{in situ} training of a photonic DNN. Our approach, which is based on prior work on \textit{in situ} optimization of analog VLSI neural networks \cite{Cauwenberghs_1992, spall_1998}, is robust to noise, performs gradient descent on average, and is guaranteed to converge to a local minimum. Moreover, it is not limited to our specific system, but can be generalized to any hardware architecture for photonic DNNs. A direct approach to computing the gradient on hardware would be to perturb the model parameters $\mathbf{\Theta} = [\Theta_1, \Theta_2, ..., \Theta_N]$ one weight at a time and repeatedly batch the training set through the system \cite{shen_deep_2017}. This procedure produces a forward difference estimate of the loss gradient $\nabla \mathcal L(\mathbf{\Theta})$ with respect to all weights. Moreover, since the derivatives are evaluated directly on chip, this procedure extends to other hardware parameters, such as the detuning and fraction of power tapped off in the NOFU. 
The drawback to this approach is that for $N$ parameters, it requires batching the training set through the hardware $2N$ times. Our approach varies all model parameters $\mathbf{\Theta}$ simultaneously. Figure \ref{training}b sketches the optimization procedure. Instead of perturbing the parameters one weight at a time, during training the system perturbs all parameters along a random direction $\mathbf{\Delta}$ in the search space, i.e. $\mathbf{\Theta} \rightarrow \mathbf{\Theta} + \mathbf{\Delta}=\mathbf{\Theta} + [\delta_1, \delta_2, ..., \delta_N]$. At each iteration the system then computes the directional derivative: \begin{equation} \nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta}) = \frac{\mathcal{L}(\mathbf{\Theta} + \mathbf{\Delta})-\mathcal{L}(\mathbf{\Theta} - \mathbf{\Delta})}{2||\mathbf{\Delta}||} \end{equation} As in standard gradient descent, the weights $\mathbf{\Theta}$ are then updated to $\mathbf{\Theta} \rightarrow \mathbf{\Theta} - \eta \nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta}) \mathbf{\Delta}$, where $\eta$ is a learning rate chosen as a hyperparameter of the system. Compared to the forward difference approach outlined earlier, our approach requires batching the training set through the hardware only twice per iteration. Moreover, we obtain true estimates of the cost function $\mathcal L$ and the derivative $\nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta})$, ensuring that component errors or errors in calibration do not affect the accuracy of training. Unlike other derivative-free optimization methods, our approach tracks the direction of steepest descent on average, as errors in the gradient direction average out to zero over multiple epochs \cite{Cauwenberghs_1992, spall_1998} (see Supplementary Information [SI]). We implemented \textit{in situ} training of $\mathbf{\Theta}$, which includes weights and nonlinear function parameters, for a standard vowel classification task (dataset available at \cite{vowels}).
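The update loop described above can be summarized in a few lines. The sketch below is a NumPy illustration in which a toy quadratic loss stands in for the hardware forward pass; the parameter values (perturbation size, learning rate, iteration count) are arbitrary choices of ours.

```python
import numpy as np

def train_step(loss, theta, delta_mag, eta, rng):
    """One in situ iteration: perturb all parameters at once along a
    random Bernoulli direction, estimate the directional derivative from
    two loss evaluations, and step the weights against it."""
    direction = delta_mag * rng.choice([-1.0, 1.0], size=theta.shape)
    d = (loss(theta + direction) - loss(theta - direction)) / (2.0 * np.linalg.norm(direction))
    return theta - eta * d * direction

# Toy check: minimize a quadratic bowl in 132 dimensions, the same
# parameter count as the FICONN model
rng = np.random.default_rng(0)
loss = lambda t: float(np.sum(t ** 2))
theta = np.ones(132)
initial_loss = loss(theta)
for _ in range(5000):
    theta = train_step(loss, theta, delta_mag=0.01, eta=0.5, rng=rng)
final_loss = loss(theta)
```

Each iteration costs only two loss evaluations regardless of the number of parameters, which is what makes the scheme attractive with hardware in the loop.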
At each epoch, we batched a training set of 540 samples into the system and implemented the optimization loop described in Figure \ref{training}b with a learning rate $\eta = 0.002$. We reserved part of the data (294 samples) to evaluate the trained model on inputs it had not seen before. The top plot of Figure \ref{training}c shows the classification accuracy on both datasets during training. Our system achieves over 96\% accuracy on the training set, and over 92\% accuracy on the test set, as shown in the confusion matrices at the bottom. We found that a digital model trained on the same task obtained the same accuracy on the test set. Each epoch batches the training set only three times through the system: twice to evaluate the derivative $\nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta})$ and once to evaluate $\mathcal L(\mathbf{\Theta})$ at the current parameter set $\mathbf{\Theta}$. We observed that the system quickly trained to an accuracy exceeding 80\%, and then slowly asymptoted to a training accuracy of 96\%. This behavior resembles the optimization trajectories of other first-order methods for training DNNs in electronics, such as stochastic gradient descent. Moreover, our system successfully trains using only 16-bit precision for the weights. Lower-precision weights reduce memory requirements for training; however, digital systems are challenging to train with fewer than 32 bits due to numerical errors in gradients accumulating during backpropagation \cite{micikevicius2018mixed}. \section*{Discussion} An important DNN metric is the latency $\tau_\text{latency}$ of inference, i.e.\ the time delay between input of a vector and the DNN output. For the FICONN, $\tau_\text{latency}$ is dominated by the optical propagation delay, which we estimate from the PIC subsystems (as described in the SI) as $3\tau_\text{CMXU} + 2\tau_\text{NOFU} + \tau_\text{TX to U1} + \tau_\text{U3 to RX} + \tau_\text{U-turn} \approx$ 435 ps.
Each inference requires $N_\text{OPS}\approx 2 M N^2$ matrix operations (see SI), where $M$ is the number of layers and $N$ the number of modes. Dividing the FICONN's energy consumed during $\tau_\text{latency}$ by $N_\text{OPS}$ upper-bounds the energy-per-operation as \begin{equation} E_\text{OP} \approx \frac{\tau_\text{latency}}{2} \left [P_\text{PS} + \frac{P_\text{NOFU}}{N} + \frac{P_\text{TX} + P_\text{ICR}}{MN} \right ], \label{energy} \end{equation} where $P_j$ denotes the power dissipation of subsystem $j$. In the SI, we estimate upper bounds to the on-chip energy consumption of 9.8 pJ/OP and a throughput of $N_\text{OPS}/\tau_\text{latency}\approx 0.53$ tera-operations per second (TOPS) per inference. \begin{table}[h] \begin{tabular}{|c|c|c|c|c|c|} $N$ & Phase shifter & $E_\text{OP}$ & $E_\text{total,est}$ & $\tau_\text{latency}$ & TOPS \\ \hline 6 & \vtop{\hbox{\strut Thermal}\hbox{\strut (this work)}} & 9.8 pJ/OP & 11.7 pJ/OP & 435 ps & 0.53 \\ 6 & \vtop{\hbox{\strut Undercut}\hbox{\strut thermal \cite{dong_thermally_2010}}} & 35 fJ/OP & 546 fJ/OP & 140 ps & 12 \\ 6 & MEMS \cite{baghdadi_dual_2021, gyger_reconfigurable_2021} & 1.6 fJ/OP & 513 fJ/OP & 140 ps & 12 \\ 64 & MEMS \cite{baghdadi_dual_2021, gyger_reconfigurable_2021} & 0.84 fJ/OP & 54 fJ/OP & 1.4 ns & 1240 \\ 128 & MEMS \cite{baghdadi_dual_2021, gyger_reconfigurable_2021} & 0.79 fJ/OP & 27 fJ/OP & 2.7 ns & 4940 \\ 256 & MEMS \cite{baghdadi_dual_2021, gyger_reconfigurable_2021} & 0.77 fJ/OP & 14 fJ/OP & 5.4 ns & 19700 \end{tabular} \caption{Performance metrics for a three-layer FICONN with $N$ neurons. We list the on-chip energy consumption $E_\text{OP}$, as well as an estimate of the total power dissipation $E_\text{total,est}$ including optimized driver electronics. The predicted metrics assume inference on large batches of vectors with resonant modulators at 50 GHz \cite{timurdogan_ultralow_2014}. 
For latency, we assume a device length of 500~$\mu$m and an optimized layout, while our reported latency uses the actual waveguide layout fabricated on the PIC.} \label{table} \end{table} The FICONN's power consumption is dominated by the thermal phase shifters, which require $\sim$25 mW of electrical power to produce a $\pi$ phase shift. Table \ref{table} lists the key parameters of our proof-of-concept FICONN (top row), along with estimates for alternative published phase shifter technologies for varying $N$ and $M=3$. These estimates suggest that low-power quasi-static phase shifters in combination with high-speed modulators \cite{timurdogan_ultralow_2014} could push total energy consumption to $\sim 10$ fJ/OP for large systems, while maintaining ns latencies and throughputs of thousands of TOPS. In comparison, systolic arrays such as the tensor processing unit (TPU) require at minimum $N+1$ clock cycles for a single $N \times N$ matrix-vector multiplication. A three-layer DNN with $N=256$ neurons would require $\sim$1 $\mu$s to compute at a 700 MHz clock speed \cite{Jouppi2017}, which is more than two orders of magnitude longer than in a photonic processor. The ultra-low latency of inference in the FICONN could greatly improve the speed of training models, which consumes significant energy \cite{Strubell_Ganesh_McCallum_2020} and has motivated a search for efficient scheduling algorithms that reduce training time \cite{You_ImageNet_2018}. \textit{In situ} training could also improve the performance of DNN models, as training with noise has been suggested to regularize models, preventing overfitting \cite{Camuto_Explicit_2020} and improving their adversarial robustness \cite{Liu_2018_ECCV} to small changes in the input. This regularization can be implemented automatically by leveraging quantum noise in hardware. 
We observed this effect in our own experiments; while both the FICONN and a digital system obtained similar performance on the test set for the classification task studied, the digital system overfit the model, achieving perfect accuracy on the training set (see SI). Finally, our implementation of \textit{in situ} training, which does not require a digital system for computing gradients, is compatible with feedback-based ``self-learning'' photonic devices \cite{feldmann_all-optical_2019, Marquadt_2021}, enabling fast, autonomous training of models without any external input. The FICONN, which is implemented in a foundry-fabricated photonic integrated circuit, could be scaled to larger sizes with current-day technologies. Silicon photonic foundries have already produced functional systems of up to tens of thousands of components \cite{sun_large-scale_2013}. Spectral multiplexing, for instance through integration of microcomb sources with silicon photonics \cite{shu_microcomb-driven_2022}, could enable classifying data simultaneously across many wavelength channels, further reducing energy consumption and increasing throughput. The system's energy consumption could be further improved by optimizing the NOFU; while our implementation makes use of microring resonators, photonic crystal modulators \cite{nozaki_femtofarad_2019}, microdisks \cite{timurdogan_ultralow_2014}, or hybrid integration of lithium niobate \cite{li_all-optical_2022, wang_integrated_2018} would reduce the activation function energy to less than 1 fJ/NLOP. Our implementation of the FICONN makes use of feedforward unitary circuits, which implement fully-connected layers in a DNN. However, this architecture can be generalized to other types of neural networks. For example, temporal or frequency data may be classified using recirculating waveguide meshes \cite{perez-lopez_multipurpose_2020}, which can implement feedback and resonant filters.
Such a system, where phase shifter settings are trained \textit{in situ} \cite{perez-lopez_multipurpose_2020, mak_wavelength_2020}, may be used for intelligent processing of microwave signals in the optical domain. \section*{Conclusion} We have demonstrated a coherent optical DNN on a single chip that performs both inference and \textit{in situ} training. The FICONN system introduces inline nonlinear activation functions based on modulators driven by ``receiverless'' photodetection, eliminating the latency and power consumption introduced by optical-to-electrical conversion between DNN layers and preserving phase information for optical data to be processed coherently. The system fabrication relied entirely on commercial foundry photolithography, potentially enabling scaling to wafer-level systems. Scaling these systems up to hundreds of modes would lower energy consumption to $\sim$10 fJ/OP, while maintaining latencies orders of magnitude lower than electronics. Moreover, we have demonstrated \textit{in situ} training of DNNs by estimating derivatives with respect to model parameters directly on hardware. Our approach is also generalizable to other photonic DNN hardware currently being studied. \textit{In situ} training, which takes advantage of the optically-accelerated forward pass enabled by receiverless hardware, opens the path to a new generation of devices that learn in real time for sensing, autonomous driving, and telecommunications. \section*{Methods} {\footnotesize\par} \noindent\footnotesize{}\textbf{Photonic integrated circuit.} The photonic integrated circuit (PIC) was fabricated in a commercial silicon photonics process by Elenion Technologies. Waveguides were defined in a crystalline silicon layer cladded by silicon dioxide, and the optical signals were routed with partially-etched waveguides to minimize propagation loss and backscattering.
Most signal routing was done with single-mode waveguides, while longer-distance propagation used multimode waveguides to further reduce transmission losses. Input light ($\lambda = 1564$ nm) was edge coupled into the chip, while output signals were measured on chip with waveguide-integrated germanium photodiodes. Mach-Zehnder interferometers were programmed using 200 $\mu$m long thermal phase shifters, which induce a refractive index change by locally heating the waveguide. The nonlinear optical function unit was realized with a 20 $\mu$m radius microring resonator where the waveguide core is \textit{pn} doped. Light was coupled into the system through a polarization-maintaining fiber array glued to the chip facet. The PIC was bonded to a copper plane on a printed circuit board for heatsinking and electrically driven through 236 wirebonds. We thermally stabilized the system using a Peltier module connected to a precision feedback system that locked the chip temperature to $31 \pm 0.004^\circ$C (Arroyo Instruments 5400). Devices on the PIC were electrically controlled through a 192-channel software programmable current source (Qontrol Systems Q8iv). Each channel sources up to 24 mA of current with 16 bits of precision, corresponding to approximately 0.4 mrad precision in our system. For faster transmission of input vectors into the DNN, we designed a custom 16-bit current driver system that used a microcontroller to buffer the training set in memory, which enabled training at the maximum DAC speed rather than the speed of the serial connection to the computer. This system was paired with a custom receiver board, which read out the photodiodes on chip with a transimpedance amplifier and 18-bit ADC.\\ \noindent\footnotesize{}\textbf{System characterization.} Light coupled into the PIC was split into local oscillator (LO) and signal paths with a programmable MZI.
The LO is directed to the ICR, while the signal is fanned out to the six channels of the transmitter through a splitting tree of 50-50 multimode interferometers (MMIs). Each channel of the transmitter was calibrated using a photodiode on the drop port of the MZI. For each mode, we swept the current $I$ driven into the thermal phase shifter and measured the output transmission $T(I)$. To produce a mapping between current and phase for an MZI, we fit the expression $A \pm B \cos (p_4 I^4 + p_3 I^3 + p_2 I^2 + p_1 I + p_0)$ to $T(I)$. The sign of the expression depends on whether transmission is measured at the cross ($+$) or bar ($-$) port. The first two meshes were calibrated at their output using the photodiodes that drive the NOFUs, while the final mesh was calibrated with the receiver. To characterize an uncalibrated mesh, we began by transmitting light down the main diagonal. Light was transmitted into input 1 and the internal phase shifters along the main diagonal were optimized in a round robin fashion to maximize the signal at output 6. This procedure deterministically initialized the main diagonal to the ``cross'' ($\theta_1 = 0$) state, as there is only one possible path in the circuit between the two modes. The devices were then characterized by routing light down diagonals of the circuit. External phase shifters were calibrated by programming ``meta-MZIs'' into the device, as described in \cite{prabhu_accelerating_2020}. To correct for hardware errors, we programmed 300 Haar random unitaries into the mesh and measured the output from transmitting 100 random input vectors. The measured data was fit to a software model of an MZI mesh that incorporates the effects of beamsplitter errors, waveguide losses, and thermal crosstalk. The software model's parameters were optimized to fit the measured data using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm. 
We found the software model is able to predict the hardware outputs with an average fidelity of 0.969 $\pm$ 0.023. Hardware error correction was implemented by determining the required hardware settings to implement a desired matrix within the software model and then porting them to the chip. Similar to our previous work \cite{Bandyopadhyay:21}, our approach here efficiently corrects for component errors, as no real-time optimization is done on the hardware. However, fitting the device response to a software model eliminates the need to calibrate component errors one at a time. The fidelity results shown in Fig.\ \ref{PIC} were obtained by programming random unitary matrices $U_\text{programmed}$ sampled from the Haar measure into the circuit and sequentially transmitting the columns of $U^\dagger_\text{programmed}$ to compute the metric $F = \mathrm{Tr}[U^\dagger_\text{programmed} U_\text{hardware}]/N$. As the inverse of a unitary matrix is its adjoint, for a perfect hardware implementation of $U_\mathrm{programmed}$ the quantity $F$ should equal 1. Insertion losses in the CMXU were inferred by transmitting light down different paths in the circuit and fitting the measured photocurrent to the number of devices light passed through. We measured the insertion loss per MZI to be $0.22 \pm 0.05$ dB, corresponding to a loss per CMXU of $1.32 \pm 0.30$ dB. The wavelength spectra shown in Figure \ref{activation}c were measured on a test structure of the NOFU by varying the incident optical power. We measured the microring to have a quality factor $Q = 8300$ when no current is injected into the device, which corresponds to a cavity lifetime $\tau_\text{NL} = Q/\omega$ of 6.6 ps. The activation functions measured in Figure \ref{activation}d were obtained with the integrated coherent receiver on the PIC. To operate in injection mode, $V_B$ must be sufficiently high to ensure the modulator can be forward biased ($\sim$0.7 V). 
Otherwise, the device operates in photovoltaic mode and generates a reverse bias of $\sim$0.3 V across the modulator, which increases the width of the junction depletion zone and removes carriers from the waveguide. \\ \noindent\footnotesize{}\textbf{\textit{In situ} training.} To demonstrate \textit{in situ} training, we used vowel classification data from the Hillenbrand database (available at \cite{vowels}). We used the first three formants $F_1, F_2, F_3$ at steady state and at 50\% of the vowel duration as the six input features for each datapoint. Each input vector was normalized to ensure that the maximum value was 1. We used 540 samples for training and evaluated the performance of the model on a test set of 294 samples. We initialized the weights for the unitary layers randomly over the Haar measure \cite{pai_matrix_2019}. At every epoch, we performed the following optimization loop: \begin{enumerate} \item Perturb the system parameters $\mathbf{\Theta}$ by a random displacement $\pm \mathbf{\Delta}$. $\mathbf{\Delta}$ is a vector of the same length as $\mathbf{\Theta}$ whose elements are chosen from a Bernoulli distribution to be $\pm \delta$. $\delta$ is a hyperparameter of the optimization; we used $\delta = 0.05$ in our experiments. \item Batch the training set through the system and compute the categorical cross-entropy loss $\mathcal L = -\sum_j \mathbf{y}_{(j)}^\text{train} \cdot \log \mathbf{V}_{(j)}^\text{norm}$ for hardware parameters $\mathbf{\Theta} \pm \mathbf{\Delta}$. \item Estimate the directional derivative $\nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta})$ along $\mathbf{\Delta}$ and update the hardware parameters $\mathbf{\Theta} \rightarrow \mathbf{\Theta} - \eta \nabla_\mathbf{\Delta} \mathcal{L}(\mathbf{\Theta})\mathbf{\Delta}$. We found that $\eta = 0.002$ provided stable convergence.
\end{enumerate} During training a software feedback loop stabilized the power coupled into the chip, as variations in optical power affected the response of the electro-optic nonlinearity. The \textit{in situ} training experiments were conducted with the NOFU in injection mode ($V_B = 0.8$ V) due to the wider range of nonlinear functions we could realize on chip. We optimized our device design for carrier injection, which accounts for the comparatively lower efficiency in depletion mode. However, even our non-optimized design realizes nearly a full linewidth detuning in depletion; therefore, we expect a modest improvement in the cavity finesse would be sufficient to realize full modulation in future iterations. \\ \noindent\footnotesize{}\textbf{Data availability.} The data that support the plots in this paper are available from the corresponding authors upon reasonable request.\\ \noindent\footnotesize{}\textbf{Code availability.} The code used to generate the results of this paper is available from the corresponding authors upon reasonable request.\\ \noindent\footnotesize{}\textbf{Acknowledgments.} S.B. was supported by a National Science Foundation (NSF) Graduate Research Fellowship under grant no. 1745302, NSF awards 1839159 (RAISE-TAQS) and 2040695 (Convergence Accelerator), and the Air Force Office of Scientific Research (AFOSR) under award number FA9550-20-1-0113. A.S. was also supported by an NSF Graduate Research Fellowship and the aforementioned AFOSR award, as well as NSF award 1946976 (EAGER) and NTT Research.\\ \\ \noindent The authors would like to acknowledge Paul Gaudette and Dr. 
David Scott of Optelligent for packaging the photonic integrated circuit; Dr.\ Ruizhi Shi and Dr.\ Hang Guan for feedback on the photonics layout; Dr.\ Sri Krishna Vadlamani for discussions on hardware-aware training; Liane Bernstein for discussions on DNN applications and feedback on the manuscript; Dr.\ Jacques Carolan and Mihika Prabhu for discussions on chip packaging and testing of the photonics; and Dave Lewis for assistance with the use of machining tools.\\ \noindent\footnotesize{}\textbf{Competing interests.} S.B., R.H., and D.E.\ have filed US Patent Applications 17/556,033 and 17/711,640 on error correction algorithms for programmable photonics. N.H.\ is CEO of Lightmatter. D.B.\ is Chief Scientist at Lightmatter. M.S.\ is VP, Packaging, Photonics, \& Mixed-Signal at Luminous Computing. M.H.\ is President of Luminous Computing. D.E.\ holds shares in Lightmatter, but received no support for this work. The other authors declare no competing interests.\\ \noindent\footnotesize{}\textbf{Author contributions.} S.B.\ and D.E.\ conceived the experiments. S.B.\ designed the photonic integrated circuit, chip packaging, and control electronics, calibrated the system, and conducted the experiments. A.S.\ assisted with characterizing the electro-optical nonlinearity. S.B., S.K., and D.E.\ developed the \textit{in situ} training scheme. R.H.\ assisted with developing calibration procedures for the system and interpreting the results of the \textit{in situ} training experiments. N.H.\ and D.B.\ architected the photonic integrated circuit. D.B. performed preliminary evaluation of the PIC in Tensorflow. M.S.\ and M.H.\ fabricated the photonic integrated circuit. S.B.\ and D.E.\ wrote the manuscript with input from all authors.\\ \\\noindent\footnotesize{}\textbf{Supplementary information} is available for this paper.\\ \\ \noindent\footnotesize{}\textbf{Correspondence and requests for materials} should be addressed to Saumil Bandyopadhyay or Dirk Englund. 
\bibliographystyle{naturemag} \section{Device characterization} \subsection*{Transmitter} A Mach-Zehnder interferometer (MZI) performs the programmable $2\times2$ unitary operation: \begin{equation} U(\theta_1, \theta_2) = ie^{i \theta_1/2} \begin{bmatrix} e^{i\theta_2} \sin(\theta_1/2) & e^{i\theta_2} \cos(\theta_1/2) \\ \cos(\theta_1/2) & -\sin(\theta_1/2) \\ \end{bmatrix} \end{equation} To characterize a single MZI, we first input light into one port of the device and measure the output transmission $T(\theta_1) = P_\text{out} / P_\text{in}$. For an ideal device, the output transmission at the bar port is \begin{equation} T_{\rm bar}(\theta_1) = \sin^2 \left ( \frac{\theta_1}{2} \right) \end{equation} and at the cross port: \begin{equation} T_{\rm cross}(\theta_1) = \cos^2 \left ( \frac{\theta_1}{2} \right) \end{equation} In a fabricated MZI, $\theta_1$ is determined by the total dissipated power $I \times V(I)$, where $I$ is the programmed current and $V(I)$ is the voltage dropped across the device. To characterize a device in the transmitter, where the cross port of each channel has a photodiode, we sweep $I$ and measure the voltage $V(I)$ and output transmission $T(I)$. We fit the expressions: \begin{equation} V(I) = a_4 I^4 + a_3 I^3 + a_2 I^2 + a_1 I \end{equation} \begin{equation} T_{\rm cross} = A + B \cos \left ( \frac{I V(I)}{P_\pi} \pi + p_0 \right ) \label{cross} \end{equation} where $a_4, a_3, a_2, a_1, A, B, P_\pi, p_0$ are fitting parameters. Here, $P_\pi$ is the total dissipated power required to induce a $\pi$ phase shift, $p_0$ is the static phase difference between the two interferometer arms, and $1/(A-B)$ is the interferometer extinction ratio. We found that a fourth-order polynomial was required to fit the voltage-current relationship of the heaters, which became non-Ohmic at high currents due to self-heating and velocity saturation of the carriers. 
If we measured $T(I)$ at the bar port, the latter expression would instead be: \begin{equation} T_{\rm bar} = A - B \cos \left ( \frac{I V(I)}{P_\pi} \pi + p_0 \right ) \label{bar} \end{equation} This yields a mapping between the current $I$ and the programmed phase $\theta_1(I)$ of the form: \begin{equation} \theta_1(I) = p_4 I^4 + p_3 I^3 + p_2 I^2 + p_1 I + p_0 \end{equation} \subsection*{Coherent matrix multiplication unit (CMXU)} The procedure for characterizing the CMXU is sketched in Figure \ref{characterize_mzi}. We use the photodiodes at the output of each matrix processor; for the first two layers, we use the photodiode at each nonlinear optical function unit (NOFU), while the last layer is calibrated with the system's receiver. \begin{figure*}[h] \includegraphics[width=7in]{FigS1.pdf} \caption{Calibration procedure for internal phase shifters in the CMXU. The devices along the main diagonal and antidiagonal are calibrated first. Once these devices are characterized, the remainder of the phase shifters can be calibrated by programming devices along the main diagonal.} \label{characterize_mzi} \end{figure*} In an uncalibrated mesh, light will scatter randomly through the circuit as $p_0$ is random for each device. To characterize the circuit, we first input light into the top input (input 1) of the mesh and measure the transmission at the bottom output (output 6). We then optimize the internal phase shifters along the main diagonal in a round robin fashion to maximize the signal at output 6. This procedure deterministically initializes the main diagonal to the cross state ($\theta_1 = 0$), as there is only one possible path between input 1 and output 6. Having initialized the diagonal, we can then calibrate each device along it by sweeping the phase shifter $\theta_1$, measuring $T$ at output 6, and fitting equation \ref{cross} to the data. The antidiagonal, connecting input 6 to output 1, is calibrated in the same way. 
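The per-device fit to equation \ref{cross} can be carried out with a standard least-squares routine. In the sketch below, the heater voltage model, the parameter values, and the synthetic sweep are illustrative assumptions, not measured device data or the calibration code used for the PIC:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed, mildly non-Ohmic heater voltage model (illustrative coefficients).
def V(I):
    return 250.0 * I + 2.0e3 * I**2

def T_cross(I, A, B, P_pi, p0):
    # Equation (cross): transmission vs. dissipated electrical power I*V(I).
    return A + B * np.cos(I * V(I) / P_pi * np.pi + p0)

# Synthetic calibration sweep with a little measurement noise.
A_true, B_true, P_pi_true, p0_true = 0.5, 0.45, 25e-3, 1.2  # P_pi in watts
I = np.linspace(0.0, 12e-3, 200)  # current sweep up to 12 mA
rng = np.random.default_rng(1)
T_meas = T_cross(I, A_true, B_true, P_pi_true, p0_true) + 1e-3 * rng.normal(size=I.size)

# Least-squares fit recovers A, B, the power-per-pi P_pi, and the static phase p0.
(A_fit, B_fit, P_pi_fit, p0_fit), _ = curve_fit(
    T_cross, I, T_meas, p0=[0.5, 0.4, 24e-3, 1.0])
```

The recovered $P_\pi$ and $p_0$ then define the current-to-phase mapping $\theta_1(I)$ used to program the device.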
Having characterized the main diagonals, we can calibrate the remaining devices in a similar fashion. For instance, inputting light into mode 1 and setting the top left MZI in the circuit, which is already calibrated, to the bar state provides access to the first subdiagonal. The uncalibrated devices can then be characterized with the same procedure as was used for the main diagonal. We show the full calibration sequence in Figure \ref{characterize_mzi}. This protocol calibrates all internal phase shifters $\theta_1$ in a matrix processor. The external phase shifters $\theta_2$ are calibrated using ``meta-MZIs,'' as shown in Figure \ref{ext_mzi}. A ``meta-MZI'' consists of two MZIs in columns $i-1, i+1$ that are programmed to implement a 50-50 beamsplitter ($\theta_1 = \pi/2$). This subcircuit now functions as an effective MZI, where the relative phase difference between two external phase shifters $\theta_{2,a}, \theta_{2,b}$ is equivalent to the setting of the internal phase shifter in a discrete device. We fix one of the two external phase shifters to $I=0$, sweep the current programmed into the other, and measure the output transmission $T$. Fitting the data to equation \ref{cross} or \ref{bar}, depending on the port at which $T$ is measured, calibrates the static phase difference $\Delta \phi(I=0) = \theta_{2,b}(I=0)-\theta_{2,a}(I=0)$. Repeating this procedure for all devices produces a linear system of equations that can be inverted to find the static phase $p_0$ for each external heater. More details on this procedure can be found in \cite{prabhu_accelerating_2020}. \begin{figure*}[h] \includegraphics[width=7in]{FigS2.pdf} \caption{``Meta-MZI'' for calibrating external phase shifters. Two MZIs in columns $i-1, i+1$ are set to implement a 50-50 beamsplitter.
The output transmission of this meta-interferometer, which functions exactly like a discrete MZI, is dependent on the phase difference between the external phase shifters $\Delta \phi = \theta_{2,b}-\theta_{2,a}$.} \label{ext_mzi} \end{figure*} \section{Nonlinear optical function unit} Figure \ref{NOFU}a shows the measured response of a typical nonlinear optical function unit. The resonator is designed to be overcoupled, as the injection of carriers increases the round-trip loss in the ring. Thus, as current is increased, the resonator transitions through the critical coupling regime to being undercoupled at large incident powers. In Figure \ref{NOFU}b, we show the device resonance when no current is injected into the device. The device has a quality factor $Q \approx 8300$, which corresponds to a photon lifetime of 6.6 ps and ensures that the cavity response does not limit the speed of the device. \begin{figure*} \includegraphics[width=7in]{FigS3.pdf} \caption{a) NOFU response vs. injected photocurrent. We assume a photodiode responsivity of 1 A/W. The resonator is initially overcoupled, and transitions to undercoupling as the incident optical power is increased. b) NOFU resonance when no power is incident on the photodiode. We measure $Q = 8293$, which corresponds to a photon lifetime of 6.6 ps. c) Phase shift $\Delta \phi$ in cavity vs. incident photocurrent. d) Round-trip amplitude loss $a$ as a function of incident photocurrent. As photocurrent increases, more carriers are injected into the waveguide, increasing the loss of the optical signal inside the resonator.} \label{NOFU} \end{figure*} The phase response and round-trip attenuation $a$ as a function of the photocurrent $I$ are shown in Figure \ref{NOFU}c and d, respectively. Assuming a photodiode responsivity of $\sim 1$ A/W, we find that about 75 $\mu$W is sufficient to detune the NOFU by a linewidth.
As we bias the device to 0.8 V in our experiment, the power consumption during operation is therefore $\sim 60$ $\mu$W ($\approx 75$ $\mu$A $\times$ 0.8 V). \section{Correcting hardware errors} Static component errors, such as beamsplitter imperfections or transmission losses, affect the accuracy of matrices programmed into the CMXU. Since we use thermal phase shifters for programming the matrix processors, thermal crosstalk between devices will also impact the performance of the device. In this section, we outline the hardware error correction procedures used to obtain high matrix accuracies in the FICONN. \subsection*{Transmitter correction} For the transmitter, we corrected thermal crosstalk between components by directly measuring the $12 \times 6$ crosstalk matrix $M$, where $M_\text{ij}$ denotes the crosstalk on channel $i$ produced by an aggressor channel $j$. This quantity was measured by driving channel $i$ and measuring the output transmission $T$ at different current settings for channel $j$. For each measurement, we fit equation \ref{cross} to the data to extract the static phase $p_0$. Thermal crosstalk will cause $p_0$ to vary as a function of the settings in channel $j$ due to parasitic heating; we fit a linear expression to this data to extract the crosstalk coefficient $M_\text{ij}$. \begin{figure*} \includegraphics[width=7in]{FigS4.pdf} \caption{a) To determine the elements of the thermal crosstalk matrix $M$, we drive an aggressor channel $j$ while characterizing the static phase $p_0$ of channel $i$. As an example, here we characterize $M_{12}$ by plotting the static phase of channel 1 as a function of the phase setting of channel 2. We fit a linear function to this data to find a crosstalk coefficient of $M_{12} = -0.00735$. b) We benchmark the effectiveness of thermal crosstalk correction by repeatedly trying to program a channel to $\theta_1 = \pi/2$, while setting all other channels to random values.
We then determine the actual phase implemented by measuring the output transmission $T$ and computing $2\arccos \sqrt{T}$. As an example, here we show the results for channel 2, where over 500 random experiments thermal crosstalk correction greatly improves the repeatability of programming a channel to a desired phase.} \label{thermal_crosstalk} \end{figure*} Figure \ref{thermal_crosstalk}a shows an example of this procedure, where we extracted the crosstalk on channel 1 produced by channel 2. Having obtained $M$, we can now compute the phase settings $\mathbf{\Phi}$ for a desired programming $\mathbf{\Phi}^\prime$ by computing: \begin{equation} \mathbf{\Phi} = M^{-1}(\mathbf{\Phi}^\prime - \mathbf{\Phi}_0) + \mathbf{\Phi}_0 \end{equation} where $\mathbf{\Phi}_0$ is the static phase for each channel. We neglected crosstalk on the external phase shifters of the transmitter, which program the phase of the input $\mathbf{a}^{(1)}$, as we did not have coherent detection directly at the transmitter output. In order to benchmark the correction protocol for each transmitter channel, we repeatedly attempted to program $\theta_1 = \pi/2$ while setting all other channels to a random phase setting. Shown in Figure \ref{thermal_crosstalk}b is the phase setting actually implemented by the transmitter channel for 500 such experiments, which we extracted by measuring the transmission $T$ and computing $2 \arccos \sqrt{T}$. Thermal crosstalk correction greatly improves both the accuracy and repeatability of each channel; for example, the measured phase on channel 2 improves from $0.493 \pm 0.015$ to $0.501 \pm 0.003$ following correction. \subsection*{CMXU correction} Correcting for thermal crosstalk in the CMXU is more challenging. As the CMXU is a mesh of interferometers, changing the programming of aggressor channels can introduce phases and redirect light through the circuit in unexpected ways.
These effects are challenging to disentangle from pure thermal crosstalk when the circuit also has other component errors, such as beamsplitter imperfections and device loss, making it difficult to directly measure $M$. To address this, we instead developed a ``digital twin'' of the hardware, which modeled in software the response of a device with known beamsplitter errors, waveguide losses, and thermal crosstalk. As the effects of all of these imperfections are known \textit{a priori} for Mach-Zehnder interferometer meshes \cite{Bandyopadhyay:21, hamerly_stability_2021}, we can fit a software model, where these imperfections are initially unknown model parameters, to data taken on the real device. If the software model can accurately reproduce measurements from the hardware, the parameters found to describe the device imperfections can be used to deterministically correct errors on the real hardware. We note here that our approach is not a ``black-box'' or neural network model of the device. Our model is based on the physics of how Mach-Zehnder interferometer meshes behave, and thus the parameters we find are realistic and correspond to true physical attributes of the device, such as the error for a particular directional coupler. We fit the model to a dataset obtained by programming 300 random unitary matrices into the chip and measuring the response to 100 randomly selected input vectors. Our software model, which is written in \verb|JAX| for auto-differentiability, is fit to the measured data using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm. We found that our software model was able to predict hardware outputs with an average fidelity $F = \mathrm{Tr}[U^\dagger_\text{measured} U_\text{software}]/N$ of 0.969 $\pm$ 0.023. 
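The fitting idea can be illustrated in miniature. The sketch below is not our \verb|JAX| mesh model; it fits the single unknown coupler error of one Mach-Zehnder interferometer to synthetic ``measured'' transmissions, with all parameter values chosen purely for illustration.

```python
import numpy as np

# Minimal stand-in for the "digital twin": a single MZI whose directional
# couplers deviate from 50:50 by an unknown angle eps.  We fit eps to
# synthetic "hardware" data, mirroring (in miniature) how the full mesh
# model is fit to measured unitaries.  All numbers here are illustrative.

def coupler(eta):
    # directional coupler with field-coupling angle eta (ideal: pi/4)
    return np.array([[np.cos(eta), 1j * np.sin(eta)],
                     [1j * np.sin(eta), np.cos(eta)]])

def bar_transmission(theta, eps):
    # power at the bar port of coupler-phase-coupler with coupler error eps
    ps = np.diag([np.exp(1j * theta), 1.0])
    mzi = coupler(np.pi / 4 + eps) @ ps @ coupler(np.pi / 4 + eps)
    return np.abs(mzi[0, 0]) ** 2

# synthetic measurements from a "device" with a known ground-truth error
eps_true = 0.07
thetas = np.linspace(0.1, np.pi - 0.1, 40)
measured = np.array([bar_transmission(t, eps_true) for t in thetas])

# grid-search fit; bar-port power is even in eps, so only |eps| is
# identifiable from this measurement and we restrict the search to eps >= 0
eps_grid = np.linspace(0.0, 0.2, 401)
loss = [np.sum((np.array([bar_transmission(t, e) for t in thetas])
                - measured) ** 2) for e in eps_grid]
eps_fit = eps_grid[int(np.argmin(loss))]
```

In the real model every coupler error, loss, and crosstalk coefficient of the mesh is such a fit parameter, and the grid search is replaced by L-BFGS over the full random-unitary dataset.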
The data shown in Figure 2d of the main text was taken by first implementing a ``direct'' programming, where the phase shifter settings were obtained using the Clements decomposition, and then by using the settings obtained from the software model. \section{Digital model for vowel classification} We trained a digital model on the vowel classification task to benchmark the performance of \textit{in situ} training on our system. The two models had the same number of neurons ($3 \times 6^2 = 108$), but the weights of the digital model, unlike those of our system, were unconstrained and could be arbitrary real matrices. We trained the digital model with a tanh nonlinearity, as we obtained very poor performance on the test set using a ReLU function. When training, we normalized the output with a softmax function and used the categorical cross-entropy loss function, as we did in the \textit{in situ} training experiment. \begin{figure*} \includegraphics[width=7in]{FigS5.pdf} \caption{Performance of the digital model on the vowel classification task. The model overfits the training set, achieving 100\% accuracy, but performance on the test set is comparable to the accuracy achieved by our system (92.7\% on the digital model vs. 92.7\% on the FICONN).} \label{digital_model} \end{figure*} The performance of the digital model is shown in Figure \ref{digital_model}. The performance on the test set is similar to that obtained by our system. However, the digital model is significantly overfit, achieving perfect (100\%) accuracy on the training set. One possible explanation for why our system does not overfit as much is the presence of analog noise, which has been suggested to function as regularization during DNN training \cite{Camuto_Explicit_2020}. \section{Stochastic optimization performs gradient descent on average} Unlike other derivative-free optimizers, the stochastic optimization approach we use performs gradient descent on average.
Here we illustrate this by adapting the proof provided in \cite{Cauwenberghs_1992}. Suppose we are optimizing a DNN with model parameters $\mathbf{\Theta}$ and error functional $\mathcal{L}(\mathbf{\Theta})$. Gradient descent iteratively optimizes the parameters with the update rule: \begin{equation} \Delta \mathbf{\Theta} = -\eta \frac{\partial \mathcal{L}}{\partial \mathbf{\Theta}} \end{equation} Assuming $\eta > 0$ is sufficiently small, this update rule will converge to a local minimum of $\mathcal{L}(\mathbf{\Theta})$. Finite difference methods for analog hardware attempt to compute $\partial \mathcal{L}/\partial \mathbf{\Theta}$ by perturbing one parameter at a time in the system. Each epoch therefore requires $2N$ evaluations of the model on the training set for $N$ model parameters. Alternatively, one could perturb all model parameters at once by a random vector $\mathbf{\Pi} = [\pi_1, \pi_2, ..., \pi_N]$, where the elements of $\mathbf{\Pi}$ are randomly and independently chosen from the vertices of an $N$-dimensional hypercube. The update rule here is: \begin{align} \Delta \mathbf{\Theta} &= -\mu \frac{\mathcal{L}(\mathbf{\Theta} + \mathbf{\Pi}) - \mathcal{L}(\mathbf{\Theta} - \mathbf{\Pi})}{2 ||\mathbf{\Pi}||} \mathbf{\Pi}\\ &= -\frac{\mu}{2 |\pi| \sqrt{N}} [\mathcal{L}(\mathbf{\Theta} + \mathbf{\Pi}) - \mathcal{L}(\mathbf{\Theta} - \mathbf{\Pi})] \mathbf{\Pi} \end{align} where $\mu$ is the learning rate. Assuming that the elements of $\mathbf{\Pi}$ are independently drawn from a Bernoulli distribution as $\pm \pi$, we can substitute $||\mathbf{\Pi}||$ as $|\pi| \sqrt{N}$. We note here that $\mu \neq \eta$, and in practice $\mu$ can be much larger than $\eta$ while preserving stable convergence.
We Taylor expand the expression $[\mathcal{L}(\mathbf{\Theta} + \mathbf{\Pi}) - \mathcal{L}(\mathbf{\Theta} - \mathbf{\Pi})]$ as: \begin{equation} 2\sum_i \frac{\partial \mathcal{L}}{\partial \theta_i} \pi_i \end{equation} Substituting this into the update rule, we get: \begin{align} \Delta \mathbf{\Theta} &= -\frac{\mu}{|\pi| \sqrt{N}} \left ( \sum_i \frac{\partial \mathcal{L}}{\partial \theta_i} \pi_i \right ) \mathbf{\Pi}\\ &= -\frac{\mu}{|\pi| \sqrt{N}} \left ( \sum_i \frac{\partial \mathcal{L}}{\partial \theta_i} \pi_i \right ) [\pi_1, \pi_2, ..., \pi_N] \end{align} Since the $\pi_i$ are independently chosen, $\mathrm{E}[\pi_i \pi_j] = 0$ if $i \neq j$. Therefore, the expected parameter update $\mathrm{E}[\Delta \mathbf{\Theta}]$ is: \begin{align} \mathrm{E}[\Delta \mathbf{\Theta}] &= -\frac{\mu}{|\pi| \sqrt{N}} \left ( \sum_i \frac{\partial \mathcal{L}}{\partial \theta_i} \mathrm{E}[\pi_i^2] \hat{x}_i \right )\\ &= -\frac{\mu}{|\pi| \sqrt{N}} \left ( \sum_i \frac{\partial \mathcal{L}}{\partial \theta_i} |\pi|^2 \hat{x}_i \right )\\ &= -\frac{\mu |\pi|}{\sqrt{N}} \frac{\partial \mathcal{L}}{\partial \mathbf{\Theta}} \end{align} We therefore find that, on average, this procedure performs gradient descent with an effective learning rate $\eta = \mu |\pi| / \sqrt{N}$. \section{Latency and energy efficiency} \subsection*{Latency} The optical propagation delay of the FICONN is the time-of-flight through the photonic circuit, which is $3\tau_\text{CMXU} + 2\tau_\text{NOFU} + \tau_\text{TX to U1} + \tau_\text{U3 to RX} + \tau_\text{U-turn}$. For each subsystem, we computed the waveguide length from the PIC design. While in the actual device waveguides transition between fully-etched and partially-etched geometries, which have different group indices, when estimating latency we conservatively assumed that all waveguides have a ridge geometry, which has a higher group index ($n_g \approx 4.2$) and therefore longer propagation time. 
The NOFU response time $\tau_{\text{NOFU}}$ includes the cavity lifetime (6.6 ps), as well as time-of-flight through the tunable MZI and delay line. We also considered propagation time between the transmitter and first CMXU $\tau_\text{TX to U1}$ and between the last CMXU and the receiver $\tau_\text{U3 to RX}$. Due to chip area constraints, we connected different subsystems with U-turns, as shown in Figure 2a of the main text; these waveguides account for an additional 7.5 mm of propagation. In total, the waveguide length from transmitter to receiver is 29.8 mm, which, together with the cavity lifetime (6.6 ps $\times$ 2), corresponds to about 435 ps propagation time. \begin{table}[tb] \begin{tabular}{l|c|c} Parameter & Value & Reference \\ \hline Digital-to-analog conversion (transmitter, 1 GHz) & 26 mW & \cite{sedighi_low-power_dac_2012}\\ Digital-to-analog conversion (transmitter, 50 GHz) & 560 mW & \cite{greshishchev_60_2019}\\ Digital-to-analog conversion (weights) & 27.5 $\mu$W & \cite{ltc1662}\\ Resonant modulator (transmitter) & ~~0.9 fJ/bit at 25 Gb/s $\rightarrow$ 22.5 $\mu$W~~ & \cite{timurdogan_ultralow_2014}\\ Phase shifter (weights, thermal) & 37.5 mW & Measured\\ Phase shifter (weights, MEMS) & 75 $\mu$W & \cite{gyger_reconfigurable_2021}\\ NOFU (injection mode) & 60 $\mu$W & Measured\\ NOFU (depletion mode) & ~~18 fJ/clock cycle~~ & ~~Estimated, see discussion~~\\ Transimpedance amplifier (receiver, 1 GHz) & 57 mW & \cite{sedighi_low-power_tia_2012}\\ Transimpedance amplifier (receiver, 50 GHz) & 313 mW & \cite{ahmed_34gbaud_2018}\\ Analog-to-digital conversion (receiver, 1 GHz) & 2.55 mW & \cite{oh_8b_2020}\\ Analog-to-digital conversion (receiver, 50 GHz) & 150 mW & \cite{adcs}\\ \end{tabular} \caption{Parameters used for energy efficiency calculation.} \label{energy} \end{table} \subsection*{Energy efficiency} \noindent The FICONN architecture with $N$ modes and $M$ layers performs $2 M N^2 + 2 (M-1) N$ operations per inference, where the first
term accounts for linear matrix operations and the second term refers to the nonlinear activation function. For large $N$ the first term dominates and we approximate the total number of operations as $2 M N^2$. The total energy consumption per operation of the system for a single inference can therefore be approximated as $\tau_\text{latency} P_\text{total} / (2 M N^2)$, where $P_\text{total}$ is the total power consumption of the photonics, drivers, and readout electronics and $\tau_\text{latency}$ is the time required for a single inference. The system requires $MN^2$ phase shifters, $N$ transmitters, $N$ receivers, and $MN$ nonlinear optical function units, making the total power consumption $P_\text{total} = MN^2 P_\text{PS} + MN P_\text{NOFU} + N(P_\text{TX} + P_\text{ICR})$. Substituting this into the expression for energy consumption per operation produces equation 2 in the main text. Our device performs $2 M N^2 + 2 (M-1) N = 240$ operations per inference, where $M = 3$ and $N = 6$. The phase shifters require about 25 mW per $\pi$ phase shift; as the internal phase shifters only require up to $\pi$ phase shift, while the external phase shifters require up to $2 \pi$, we assume the average power consumption per phase shifter is 37.5 mW. The phase shifter contribution to the energy per operation is therefore $144 \times (37.5~\text{mW}) \times (435~\text{ps}) / (240~\text{OPs}) = 9.8$ pJ/OP, where we include phase shifters for both model parameters and the transmitter. This energy requirement would reduce substantially with the use of undercut thermal phase shifters \cite{dong_thermally_2010}, which reduce power dissipation by an order of magnitude, or MEMS-actuated devices \cite{baghdadi_dual_2021, gyger_reconfigurable_2021}, both of which are available in silicon photonic foundries. 
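The arithmetic above is compact enough to check directly; the snippet below reproduces the quoted phase-shifter contribution from the numbers in the text.

```python
# Reproduce the phase-shifter energy-per-operation figure quoted above:
# 144 phase shifters at an average of 37.5 mW, 435 ps end-to-end latency,
# and 2*M*N^2 + 2*(M-1)*N = 240 operations per inference for M=3, N=6.
M, N = 3, 6
ops = 2 * M * N**2 + 2 * (M - 1) * N   # 240 operations per inference
n_ps, p_ps = 144, 37.5e-3              # phase-shifter count, avg. power [W]
tau = 435e-12                          # end-to-end propagation time [s]
e_ps = n_ps * p_ps * tau / ops         # energy per operation [J]
print(f"{e_ps * 1e12:.1f} pJ/OP")      # -> 9.8 pJ/OP
```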
The nonlinear optical function unit consumes 60 $\mu$W of power, which contributes about $12 \times (60~\mu\text{W}) \times (435~\text{ps}) / (240~\text{OPs}) = 1.3$ fJ/OP to this total. We used benchtop electronics in our demonstration to control the devices on chip. In practice, the devices would be controlled by a custom CMOS integrated circuit. To estimate the energy contribution of the electronic driver and readout circuitry, we use values reported in the literature for DACs, TIAs, and ADCs, which are shown in Table \ref{energy}. We assume a clock speed of 1 GHz, as the NOFU in injection mode would have a response time of $\sim$1 ns. The transmitter would require high-speed DACs for programming input vectors into the system, while the receiver would require high-speed TIAs and ADCs for readout. The model parameters can be controlled by lower-speed electronics, as they are not anticipated to change frequently; even when training the system \textit{in situ}, the parameters will only be updated after an entire batch is evaluated. Using these values, we find the worst-case energy consumption of electronics for our device would be $[12 \times (26~\text{mW} + 57~\text{mW} + 2.55~\text{mW}) + 132 \times (27.5~\mu\text{W})] \times (435~\text{ps}) / (240~\text{OPs}) = 1.9$ pJ/OP, yielding a total system performance of 11.7 pJ/OP. \subsection*{Scaling} The latency and energy efficiency of the FICONN would be further improved by performing inference on large batches of vectors, as they can be transmitted into the PIC at a faster rate than the end-to-end propagation delay. For $W$ input vectors, the total latency of the system is $\tau_\text{latency} + (W-1)/f_\text{BW}$, where $f_\text{BW}$ is the total bandwidth of the system. For large $W$, the latter term dominates the latency. 
The system then performs $2f_\text{BW}(MN^2 + (M-1)N)$ TOPS and the energy efficiency becomes: \begin{equation} E_\text{OP} \approx \frac{1}{2f_\text{BW}} \left [P_\text{PS} + \frac{P_\text{NOFU}}{N} + \frac{P_\text{TX} + P_\text{ICR}}{MN} \right ] \end{equation} The ultimate speed and energy efficiency of our architecture is therefore determined by the rate at which input vectors can be transmitted into the DNN, rather than the end-to-end propagation time. In our implementation the NOFU is realized with a carrier injection device, which would typically be limited to $\sim$1~ns response time due to the relatively long recombination time in silicon. While single-nanosecond latency is already far lower than what is achievable in conventional electronics, optimizing the NOFU to operate in depletion mode, where the bandwidth is $RC$-limited, could improve this even further to system bandwidths on the order of 50 GHz \cite{sun_128_2019}. Resonant, high-speed modulators in silicon photonics have previously been shown to consume less than 1 fJ/bit \cite{timurdogan_ultralow_2014}. In Table 1 of the main text, the predicted energy consumption of future iterations of the FICONN assumes a clock speed of 50 GHz with the NOFU operating in depletion mode. To estimate the energy of the NOFU in depletion mode, which would be required to operate at faster clock rates, we assume a capacitance of 200 fF based on reported capacitances for photodiodes ($\sim$15 fF) \cite{novack_germanium_2013}, modulators ($\sim$60 fF, estimated using the depletion capacitance of a 20 $\mu$m microring modulator), and interconnects (200 aF/$\mu$m) \cite{miller_attojoule_2017}. Assuming a 0.3 V drive voltage, this yields an energy consumption of $CV^2 = 18$ fJ/NLOP, consistent with the 18 fJ/clock cycle entry in Table \ref{energy}. Performance improves even further for larger system sizes, as the energy cost of the electronics and nonlinearity is amortized by the number of modes $N$.
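As a hedged illustration of this scaling relation, the snippet below evaluates the batched energy-per-operation formula using the 50 GHz component powers from Table \ref{energy}; the component powers of an optimized future system are assumptions, so the resulting numbers are indicative only.

```python
# Batched energy model E_OP = (1/2f)[P_PS + P_NOFU/N + (P_TX + P_ICR)/(M N)],
# evaluated with illustrative 50 GHz parameters (assumed, not measured):
f_bw   = 50e9                    # system clock rate [Hz]
p_ps   = 75e-6                   # MEMS phase shifter [W]
p_nofu = 18e-15 * f_bw           # depletion-mode NOFU, 18 fJ per cycle [W]
p_tx   = 560e-3 + 22.5e-6        # high-speed DAC + resonant modulator [W]
p_icr  = 313e-3 + 150e-3         # TIA + ADC [W]

def e_op(n, m):
    # energy per operation [J] for an n-mode, m-layer receiverless system
    return (p_ps + p_nofu / n + (p_tx + p_icr) / (m * n)) / (2 * f_bw)

for n in (6, 34, 380):
    print(n, e_op(n, 3))
```

With these assumed values the per-mode transmitter and receiver electronics dominate, so $E_\text{OP}$ falls roughly as $1/N$, in line with the mode-count thresholds discussed in this section.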
Figure \ref{scaling} shows the expected energy consumption of a receiverless photonic DNN, like our system, and one that reads out optical signals between layers as a function of $N$ and $M$. We also show the current performance region of application-specific integrated circuits for DNNs, which consume approximately 0.1$-$1 pJ per operation \cite{Jouppi2017, shao_simba_2019}, the estimated performance of our current system (11.7 pJ/OP), and that of an optimized version with low-power phase shifters and high-speed electronics (513 fJ/OP). \begin{figure*} \includegraphics[width=3.5in]{FigS6.png} \caption{The energy efficiency per operation of a photonic DNN as a function of number of modes ($N$), number of layers ($M$), and whether the architecture is receiverless (RL) or requires readout between each intermediate layer (R). The shaded region indicates the energy efficiencies of current-day electronic ASICs for DNNs ($\sim$0.1$-$1 pJ/OP). An optimized version of our system, with low-power phase shifters and high-speed electronics, would already be competitive with current ASICs.} \label{scaling} \end{figure*} For a receiverless, three-layer system such as ours, a circuit with $N=34$ modes is sufficient to achieve energy efficiencies below 100 fJ/OP, while $N>380$ attains efficiencies better than 10 fJ/OP. This scaling also improves greatly with the number of layers $M$; for a device with $M=10$ DNN layers, a circuit as small as $N=10$ modes yields efficiencies superior to digital electronics. We assume a clock rate of 50 GHz in Figure \ref{scaling} to minimize the system latency; however, as the power dissipation of electronics scales nonlinearly with the bandwidth, it may be advantageous to operate at slower clock rates when energy efficiency is the primary concern. Direct processing of optical data would further lower energy consumption, as high-speed DACs \cite{greshishchev_60_2019} are no longer required to encode data onto an optical carrier.
High-speed readout and digitization of the outputs \cite{ahmed_34gbaud_2018, adcs}, however, is still required. This accounts for the comparatively poorer scaling of energy efficiency for devices that read out between layers. A three-layer system that performs intermediate readout needs to have nearly twice as many modes as a receiverless circuit to achieve energy efficiency below 100 fJ/OP. Figure \ref{scaling} shows how the repeated cost of readout results in little improvement in energy efficiency as $M$ increases. As an example, a receiverless system with $M=10, N=114$ attains efficiencies of 10 fJ/OP, while a system with intermediate readout would require $N \approx 575$ to achieve the same efficiency. \bibliographystyle{naturemag}
\section{Introduction}\label{sec1} Throughout this paper, by $\rho(M)$ we mean the usual spectral radius of a real square matrix $M\in\mathbb{R}^{d\times d}$, where $2\le d<+\infty$. For an arbitrary finite family of real matrices \bean \bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}, \eean its \textit{generalized spectral radius $\pmb{\rho}(\bA)$}, first introduced by Daubechies and Lagarias in \cite{DL92-01}, is defined by \bean \pmb{\rho}(\bA)=\sup_{n\ge1}\max_{M\in\bA^n}\sqrt[n]{\rho(M)}\quad\left(\,=\limsup_{n\to+\infty}\max_{M\in\bA^n}\sqrt[n]{\rho(M)}\right), \eean where \begin{equation*} \bA^n=\left\{\stackrel{n\textrm{-folds}}{\overbrace{M_1\dotsm M_n}}\,\bigg\vert\, M_i\in\bA\textrm{ for }1\le i\le n\right\}\quad \forall n\ge1. \end{equation*} According to the Berger-Wang spectral formula~\cite{BW92} (also see \cite{Els, Dai-JMAA} for simple proofs), this quantity is very important for many branches of pure and applied mathematics, such as numerical computation of matrices, differential equations, coding theory, wavelets, stability analysis of random matrices, control theory, and combinatorics. See, for example, \cite{Bar, Jun}. Therefore, the following finite-step realization property for the accurate computation of $\pmb{\rho}(\bA)$ becomes very interesting and important, because it makes the stability question algorithmically decidable; see, e.g., \cite[Proposition~2.9]{Bar}. \begin{prob}\label{prob1} Does there exist a finite-length word that realizes $\pmb{\rho}$ for $\bA$? In other words, does there exist any $M^*\in\bA^{n^*}$ such that $\pmb{\rho}(\bA)=\sqrt[\leftroot{-2}\uproot{2}n^*]{\rho(M^*)}$, for some $n^*\ge1$? \end{prob} If one can find such a word $M^*$ for some $n^*\ge1$, then $\bA$ is said to possess \textit{the spectral finiteness}.
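As a small numerical illustration of Problem~\ref{prob1} (not part of the argument of this paper), one can enumerate all words of length $n$ and record $\max_{M\in\bA^n}\sqrt[n]{\rho(M)}$; for the symmetric pair chosen below, the supremum is attained already by a word of length one (cf.\ Theorem~A below).

```python
import numpy as np
from functools import reduce
from itertools import product

# Brute-force lower bounds for the generalized spectral radius: enumerate
# all words of length n over A = {A_1, ..., A_K} and record the largest
# rho(M)^(1/n).  The matrices below are an illustrative symmetric pair,
# for which the supremum is attained already by a word of length 1.
def rho(m):
    return max(abs(np.linalg.eigvals(m)))

def best_word_value(mats, n):
    best = 0.0
    for word in product(range(len(mats)), repeat=n):
        m = reduce(np.matmul, [mats[k] for k in word])
        best = max(best, rho(m) ** (1.0 / n))
    return best

A1 = np.array([[2.0, 1.0], [1.0, 0.0]])   # rho(A1) = 1 + sqrt(2)
A2 = np.array([[1.0, 0.0], [0.0, -1.5]])  # rho(A2) = 1.5
vals = [best_word_value([A1, A2], n) for n in range(1, 7)]
```

Here `vals[0]` already equals $\max_k\rho(A_k)=1+\sqrt2$, and no longer word improves on it.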
This spectral finiteness, for any bounded $\bA$, was conjectured respectively by Pyatnitski\v{i} (see, e.g.,~\cite{PR91,SWMW07}), by Daubechies and Lagarias in~\cite{DL92-01}, by Gurvits in~\cite{Gur95}, and by Lagarias and Wang in~\cite{LW95}. It was disproved first by Bousch and Mairesse in \cite{BM}, and then by Blondel \textit{et al.} in \cite{BTV} and by Kozyakin in~\cite{Koz05, Koz07}, all of whom established the existence of counterexamples in the case where $d=2$ and $K=2$; moreover, an explicit expression for such a counterexample has been found in the recent work of Hare \textit{et al.}~\cite{HMST}. However, an affirmative solution to Problem~\ref{prob1} is very important, because it implies an effective computation of $\pmb{\rho}(\bA)$ and the decidability of stability within finitely many steps of computation. There have been some sufficient (and necessary) conditions for the spectral finiteness for some types of $\bA$, based on and involving Barabanov norms, polytope norms, ergodic theory or some limit properties of $\bA$; see, for example, Gurvits~\cite{Gur95}, Lagarias and Wang~\cite{LW95}, Guglielmi, Wirth and Zennaro~\cite{GWZ05}, Kozyakin~\cite{Koz07}, Dai, Huang and Xiao~\cite{DHX-pro}, and Dai and Kozyakin~\cite{DK}. But these theoretical criteria seem difficult to employ directly to decide whether an explicit family $\bA$, or even a pair $\{A,B\}\subset\mathbb{R}^{2\times 2}$, has the spectral finiteness. As far as we know from the literature, there are only a few results on such explicit families of matrices $\bA$, as follows. \begin{thm-A}[{Theys~\cite{Theys}, also \cite{JB08}}] If $A_1,\dotsc,A_K\in\mathbb{R}^{d\times d}$ are all symmetric matrices, then $\bA$ has the spectral finiteness such that $\brho(\bA)=\max_{1\le k\le K}\rho(A_k)$. \end{thm-A} A more general version of this theorem is the following.
\begin{thm-B}[{Barabanov~\cite[Proposition~2.2]{Bar}}] If a finite set $\bA\subset\mathbb{R}^{d\times d}$ only contains normal matrices, then the spectral finiteness holds. \end{thm-B} For a matrix $A$, by $A^T$ we mean its transpose. Another generalization of Theorem~A is the following. \begin{thm-C}[{Plischke and Wirth~\cite[Proposition~18]{PW:LAA08}}] If $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$ is symmetric in the sense that $A_k^T\in\bA$ for all $1\le k\le K$, then $\bA$ has the spectral finiteness. \end{thm-C} Jungers and Blondel~\cite{JB08} proved that the spectral finiteness holds for any pair of $2\times 2$ $\{0,1\}$-matrices. A more general result than this is \begin{thm-D}[{Cicone \textit{et al.}~\cite{CGSZ10}}] If $A_1$ and $A_2$ are $2\times 2$ sign-matrices, that is, $A_1,A_2$ belong to $\{0,\pm1\}^{2\times 2}$, then the spectral finiteness holds for $\{A_1,A_2\}$. \end{thm-D} The following results are of a different type. \begin{thm-E}[{Br\"{o}ker and Zhou~\cite{BZ}}] If $\bA=\{A, B\}\subset\mathbb{R}^{2\times 2}$ satisfies $\det(A)<0$ and $\det(B)<0$, then $\brho(\bA)=\max\left\{\rho(A),\rho(B), \sqrt{\rho(AB)}\right\}$. \end{thm-E} \begin{thm-F}[{M\"{o}{\ss}ner~\cite{Mo}}] If $\bA=\{L, R\}\subset\mathbb{R}^{2\times 2}$ satisfies $L=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)R\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$, then $\bA$ has the spectral finiteness with $\brho(\bA)=\max\left\{\rho(L), \sqrt{\rho(LR)}\right\}$. \end{thm-F} \begin{thm-G}[{Guglielmi \textit{et al.}~\cite[Theorem~4]{GMV}}] Let $\bA=\{A,B\}$ satisfy \begin{equation*} A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\quad\textrm{and}\quad B=\begin{pmatrix}a&-b\\-c&d\end{pmatrix},\quad \textrm{where }a,b,c,d\in\mathbb{R}.
\end{equation*} Then $\bA$ has the spectral finiteness such that \begin{equation*} \brho(\bA)= \begin{cases} \rho(A)=\rho(B)& \textrm{if }bc\ge0,\\ \sqrt{\rho(AB)}& \textrm{if }bc<0.\end{cases} \end{equation*} \end{thm-G} \begin{thm-H}[Dai \textit{et al.}~\cite{DHLX11}] If one of $A, B\in\mathbb{R}^{d\times d}$ is of rank one, then the spectral finiteness property holds for $\{A,B\}$. \end{thm-H} We will present a new criterion for the spectral finiteness of a finite subset of $\mathbb{R}^{d\times d}$, see Theorem~\ref{thm1} in Section~\ref{sec2}, which generalizes Theorems~A,~C and G. From this we can obtain some checkable sufficient conditions for the spectral finiteness. Finally, in Section~\ref{sec3}, we will improve the main theorem of Kozyakin~\cite{Koz07} to obtain a necessary and sufficient condition for the spectral finiteness of a type of $2$-by-$2$ matrix set $\bA$; see Theorem~\ref{thm8}. \section{Symmetric optimal words and the spectral finiteness}\label{sec2} We let $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$ be an arbitrarily given set, where $2\le K<\infty$ and $2\le d<\infty$. By $\|\cdot\|$, we denote the spectral norm on $\mathbb{R}^{d\times d}$ induced by the Euclidean vector norm; that is, $\|M\|=\sqrt{\rho(M^TM)}$. Let $$ \bK=\{1,2,\dotsc,K\}\quad \textrm{and}\quad \bK^n=\stackrel{n\textit{-folds}}{\overbrace{\bK\times\dotsm\times\bK}}. $$ For any word $w=(k_1,\dotsc,k_n)\in\bK^n$ of length $n$, we write $\bA(w)=A_{k_1}\dotsm A_{k_n}\in\bA^n$. A word $w^*=(k_1^*,\dotsc,k_n^*)\in\bK^n$ of length $n$ is called an \textit{$(\bA,n)$-optimal word}, provided that it satisfies the condition \begin{equation*} \|\bA(w^*)\|=\max_{w\in\bK^n}\|\bA(w)\|. \end{equation*} This section is mainly devoted to proving the following criterion for the spectral finiteness of $\bA$, which generalizes Theorems~A and C and the first part of Theorem~G. \begin{theorem}\label{thm1} Let $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$.
If there exists an $(\bA,n^*)$-optimal word $w^*$ with $\bA(w^*)^T\bA(w^*)\in\bA^{2n^*}$ $(\textrm{resp. } \bA(w^*)\bA(w^*)^T\in\bA^{2n^*})$ for some $n^*\ge1$, then $\bA$ has the spectral finiteness such that \begin{equation*} \brho(\bA)=\sqrt[\leftroot{-3}n^*]{\|\bA(w^*)\|}=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)^T\bA(w^*)\right)}\quad\left(=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)\bA(w^*)^T\right)}\right). \end{equation*} \end{theorem} \begin{proof} Let $w^*$ be an $(\bA,n^*)$-optimal word of length $n^*$, which is such that $\bA(w^*)^T\bA(w^*)\in\bA^{2n^*}$ (resp. $\bA(w^*)\bA(w^*)^T\in\bA^{2n^*}$), for some $n^*\ge1$. Then from the Berger-Wang spectral formula~\cite{BW92}, it follows that \begin{equation*}\begin{split} \brho(\bA)&=\inf_{n\ge1}\max_{w\in\bK^n}\sqrt[n]{\|\bA(w)\|}\le\sqrt[\leftroot{-3}n^*]{\|\bA(w^*)\|}\\ &=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)^T\bA(w^*)\right)}\\ &=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)\bA(w^*)^T\right)}\\ &\le\brho(\bA). \end{split}\end{equation*} This implies the desired result and ends the proof of Theorem~\ref{thm1}. \end{proof} For the case where $\bA$ is symmetric as in Theorem~C, one can find an $(\bA,1)$-optimal word $w^*$ such that both $\bA(w^*)^T\bA(w^*)$ and $\bA(w^*)\bA(w^*)^T$ belong to $\bA^{2}$. On the other hand, the following simple example shows that our Theorem~\ref{thm1} is a genuine extension of Theorem~C. \begin{example}\label{example2} Let $\bA$ consist of the following three matrices: \begin{equation*} A_1=\begin{pmatrix}1&1&2\\0&1&1\\0&0&1\end{pmatrix},\quad A_2=\begin{pmatrix}1&0&0\\1&1&0\\2&1&1\end{pmatrix},\quad \textrm{and}\quad A_3=\begin{pmatrix}\cos\alpha&\sin\alpha&0\\-\sin\alpha&\cos\alpha&0\\0&0&\sqrt{\frac{3-\sqrt{5}}{2}}\end{pmatrix}. \end{equation*} It is evident that $\bA$ is not symmetric. However, $w^*=(1)$ is an $(\bA,1)$-optimal word such that \begin{equation*} \bA(w^*)^T\bA(w^*)=A_2A_1\in\bA^{2} \end{equation*} and so $\brho(\bA)=\sqrt{\rho(A_2A_1)}$.
\end{example} As a consequence of Theorem~\ref{thm1}, we can obtain the following checkable criterion for the spectral finiteness of a kind of $\bA$. \begin{cor}\label{cor3} Let $\bA$ consist of the following $K+2$ matrices: \begin{equation*} A_0=\begin{pmatrix}a&b\\c&d\end{pmatrix},\;A_1=\begin{pmatrix}a_{11}&r_1b\\r_1c&d_{11}\end{pmatrix},\;\dotsc,\;A_K=\begin{pmatrix}a_{KK}&r_Kb\\r_Kc&d_{KK}\end{pmatrix},\; \textrm{and}\; B=\begin{pmatrix}b_{11}&r\sqrt{|b|}\\r\sqrt{|c|}&b_{22}\end{pmatrix}, \end{equation*} where $r, r_1,\dotsc,r_K$ are all constants. If $bc\ge0$ and $\|B\|\le\max_{0\le i\le K}\rho(A_i)$, then $\bA$ has the spectral finiteness and moreover \begin{equation*} \brho(\bA)=\max_{0\le k\le K}\rho(A_k). \end{equation*} \end{cor} \begin{proof} If $bc=0$ then the statement holds trivially. Next, we assume $bc>0$. Let $k^*\in\{0,1,\dotsc,K\}$ be such that \begin{equation*} \rho(A_{k^*})=\max_{0\le k\le K}\rho(A_k), \end{equation*} and we put \begin{equation*} Q=\begin{pmatrix}q_1&0\\0&q_2\end{pmatrix} \end{equation*} which is such that $$q_1q_2\not=0\quad \textrm{and}\quad \frac{q_1}{q_2}=\sqrt{\frac{c}{b}}.$$ Then, \begin{gather*} QA_0Q^{-1}=\begin{pmatrix}a&\sqrt{bc}\\\sqrt{bc}&d\end{pmatrix},\\ QA_1Q^{-1}=\begin{pmatrix}a_{11}&r_1\sqrt{bc}\\r_1\sqrt{bc}&d_{11}\end{pmatrix},\\ \vdots\quad\vdots\quad \vdots\\ QA_KQ^{-1}=\begin{pmatrix}a_{KK}&r_K^{}\sqrt{bc}\\r_K^{}\sqrt{bc}&d_{KK}\end{pmatrix},\\ \intertext{and} QBQ^{-1}=\begin{pmatrix}b_{11}&r\sqrt{|c|}\\r\sqrt{|b|}&b_{22}\end{pmatrix}=B^T. \end{gather*} So, \begin{equation*} \rho(A_i)=\|QA_iQ^{-1}\|\quad \textrm{for }0\le i\le K,\quad\textrm{and}\quad\|B^T\|\le\max_{0\le i\le K}\|QA_iQ^{-1}\|. \end{equation*} Thus, $w^*=(k^*)$ is a $(Q\bA Q^{-1},1)$-optimal word with $\left(QA_{k^*}Q^{-1}\right)^T\left(QA_{k^*}Q^{-1}\right)=\left(QA_{k^*}Q^{-1}\right)^2\in\left(Q\bA Q^{-1}\right)^{2}$. From Theorem~\ref{thm1}, this thus proves Corollary~\ref{cor3}. \end{proof} Corollary~\ref{cor3} generalizes the first part of Theorem~G stated in Section~\ref{sec1}.
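The similarity trick in the proof above is easy to verify numerically. The snippet below (with arbitrary illustrative entries satisfying $bc>0$, not taken from the paper) checks that the diagonal conjugation symmetrizes $A_0$, so that $\rho(A_0)=\|QA_0Q^{-1}\|$:

```python
import numpy as np

# For bc > 0, Q = diag(q1, q2) with q1/q2 = sqrt(c/b) conjugates
# A0 = [[a, b], [c, d]] into a symmetric matrix, whose spectral norm
# equals its spectral radius.  Entries below are arbitrary with b*c > 0.
a, b, c, d = 1.0, 2.0, 0.5, -0.7
A0 = np.array([[a, b], [c, d]])
Q = np.diag([np.sqrt(c / b), 1.0])            # q1/q2 = sqrt(c/b)
S = Q @ A0 @ np.linalg.inv(Q)                 # symmetric: [[a, sqrt(bc)], [sqrt(bc), d]]
spec_radius = max(abs(np.linalg.eigvals(A0))) # rho(A0)
spec_norm = np.linalg.norm(S, 2)              # ||Q A0 Q^{-1}||
```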
A special case of Corollary~\ref{cor3} is the following which is of independent interest. \begin{cor}\label{cor4} Let $\bA$ consist of \begin{equation*} A=\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}\quad \textrm{and}\quad B=\begin{pmatrix}a&b\\c&d\end{pmatrix}. \end{equation*} If $bc\ge0$, then $\brho(\bA)=\max\{\rho(A), \rho(B)\}$. \end{cor} Now we are naturally concerned with the following. \begin{prob}\label{prob2} What can we say for $\bA$ without the constraint condition $bc\ge0$ in Corollary~\ref{cor4}? \end{prob} First, a special case might be simply observed as follows. \begin{prop}\label{prop5} Let $A,B\in\mathbb{R}^{d\times d}$, where $2\le d<\infty$, be a pair of matrices such that \bean A=\begin{pmatrix}a_1&0&\dotsm&0\\0&a_2&\dotsm&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\dotsm&a_d\end{pmatrix}\quad\textrm{and}\quad B=\begin{pmatrix}0&\dotsm&0&b_1\\0&\dotsm&b_2&0\\\vdots&{}&\vdots&\vdots\\b_d&\dotsm&0&0\end{pmatrix}. \eean Then $\bA=\{A,B\}$ is such that $\brho(\bA)=\max\{\rho(A), \rho(B)\}$. \end{prop} \begin{proof} We will only prove the statement in the case of $d=3$, since the other cases may be similarly proved. By replacing $A$ and $B$ with $A/\brho$ and $B/\brho$ respectively if necessary, there is no loss of generality in assuming $\brho(\bA)=1$. By contradiction, we assume \begin{equation*} \rho(A)=\max\{|a_1|,|a_2|, |a_3|\}<1 \end{equation*} and \begin{equation*} \rho(B)=\max\left\{|b_2|,\sqrt{|b_1b_3|}\right\}<1. \end{equation*} Let $\{(m_k,n_k)\}_{k=1}^{+\infty}$ be an arbitrary sequence of positive integer pairs. We claim that \begin{equation*} \|A^{m_1}B^{n_1}A^{m_2}B^{n_2}\dotsm A^{m_k}B^{n_k}\|\to0 \end{equation*} as $k\to+\infty$.
Indeed, the claim follows from the following simple computation: \begin{gather*} A^m=\begin{pmatrix}a_1^m&0&0\\0&a_2^m&0\\0&0&a_3^m\end{pmatrix},\quad B^n=\begin{cases}\begin{pmatrix}(b_1b_3)^{n^\prime}&0&0\\0&b_2^{2n^\prime}&0\\0&0&(b_3b_1)^{n^\prime}\end{pmatrix}& \textrm{if }n=2n^\prime,\\\begin{pmatrix}(b_1b_3)^{n^\prime}&0&0\\0&b_2^{2n^\prime}&0\\0&0&(b_3b_1)^{n^\prime}\end{pmatrix}B& \textrm{if }n=2n^\prime+1;\end{cases} \end{gather*} and for any constants $q_i,c_i, d_i$ for $i=1,2,3$, \begin{gather*} \begin{pmatrix}q_1&0&0\\0&q_2&0\\0&0&q_3\end{pmatrix}\begin{pmatrix}0&0&c_1\\0&c_2&0\\c_3&0&0\end{pmatrix} =\begin{pmatrix}0&0&q_1c_1\\0&q_2c_2&0\\q_3c_3&0&0\end{pmatrix},\\ \intertext{and} \begin{pmatrix}0&0&c_1\\0&c_2&0\\c_3&0&0\end{pmatrix} \begin{pmatrix}0&0&d_1\\0&d_2&0\\d_3&0&0\end{pmatrix}=\begin{pmatrix}c_1d_3&0&0\\0&c_2d_2&0\\0&0&c_3d_1\end{pmatrix}. \end{gather*} This claim contradicts $\brho(\bA)=1$, and so it implies that $\brho(\bA)=\max\{\rho(A), \rho(B)\}$. \end{proof} It should be noted that although $\rho(B)<1$ and $\|A\|<1$ in the proof of Proposition~\ref{prop5} under the contradiction assumption, $\|B\|>1$ may still happen; for example, \begin{equation*} B=\begin{pmatrix}0&0&6/5\\0&4/5&0\\2/5&0&0\end{pmatrix} \end{equation*} is such that $\rho(B)=4/5<1$ but $\|B\|=6/5>1$. This is just the nontrivial point of the above proof of Proposition~\ref{prop5}. For Problem~\ref{prob2}, we cannot, however, expect a general positive solution, as disproved by the following counterexample. \begin{example}\label{example6} Let \begin{equation*} A_0=\alpha\begin{pmatrix} -3&3.5\\-4&4.5 \end{pmatrix}\quad \textrm{and}\quad A_1=\beta\begin{pmatrix}0.5&0\\0&1\end{pmatrix} \end{equation*} where $\alpha>0$, $\beta>0$, and, in the notation of Corollary~\ref{cor4}, $bc=-14\alpha^2<0$. Then $\bA=\{A_0,A_1\}$ cannot be simultaneously symmetrized, and there exists a pair $\alpha,\beta$ for which $\bA$ does not have the spectral finiteness.
\end{example} \begin{proof} Putting $Q=\left(\begin{smallmatrix}-0.5& 1 \\ 0 & 1\end{smallmatrix}\right)$, we have \begin{equation*} B_0:=Q^{-1}A_0Q=\alpha\begin{pmatrix}1 & 0 \\ 2 & 0.5 \end{pmatrix} \quad\textrm{and}\quad B_{1}:=Q^{-1}A_1Q=\beta\begin{pmatrix}0.5 & 1 \\ 0 & 1\end{pmatrix}. \end{equation*} According to Kozyakin~\cite[Theorem~10, Lemma~12 and Theorem~6]{Koz07}, there always exists a pair of real numbers $\alpha>0,\beta>0$ such that $\{B_0, B_1\}$, and so $\bA$, does not have the spectral finiteness. On the other hand, if $\{A_0,A_1\}$ could be simultaneously symmetrized for some pair $\alpha>0,\beta>0$, then, since $\alpha$ and $\beta$ are positive scalars, it could be simultaneously symmetrized for every such pair; hence $\{A_0, A_1\}$, and thus $\{B_0, B_1\}$, would have the spectral finiteness by Theorem~A for all $\alpha>0,\beta>0$. This is a contradiction. Therefore, $\{A_0,A_1\}$ cannot be simultaneously symmetrized for any $\alpha>0,\beta>0$. This proves the statement of Example~\ref{example6}. \end{proof} Meanwhile, this argument shows that the constraint condition ``$bc\ge0$" in Corollary~\ref{cor3}, and even in Corollary~\ref{cor4}, is crucial for the spectral finiteness in our situation. Given an arbitrary set $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$, although its periodic stability implies that it is stable almost surely in terms of arbitrary Markovian measures, as shown in Dai, Huang and Xiao~\cite{DHX11-aut} for the discrete-time case and in Dai~\cite{Dai-JDE} for the continuous-time case, its absolute stability is generally undecidable; see, e.g., Blondel and Tsitsiklis~\cite{BT97, BT00, BT00-aut}. However, under suitable additional conditions, Corollary~\ref{cor3} is equivalent to the statement ``periodic stability $\Rightarrow$ absolute stability''.
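As a quick numerical sanity check (not part of the original argument), the short script below confirms the remark made after the proof of Proposition~\ref{prop5}: the matrix $B$ given there satisfies $\rho(B)=4/5<1$ while $\|B\|=6/5>1$, and yet mixed products $A^{m_1}B^{n_1}\dotsm$ decay in norm. The diagonal $A$ is an arbitrary contracting example chosen here for illustration.

```python
import numpy as np

# B from the remark after Proposition 5; A is an arbitrary contracting
# diagonal matrix chosen for illustration (rho(A) = 0.9 < 1).
A = np.diag([0.9, 0.7, 0.5])
B = np.array([[0.0, 0.0, 6 / 5],
              [0.0, 4 / 5, 0.0],
              [2 / 5, 0.0, 0.0]])

rho_B = max(abs(B[1, 1]), np.sqrt(abs(B[0, 2] * B[2, 0])))  # = 4/5 < 1
norm_B = np.linalg.norm(B, 2)                               # = 6/5 > 1

# Norms of alternating products A^{m_1} B^{n_1} ... still decay.
P = np.eye(3)
for m, n in [(1, 1), (2, 3), (1, 2), (3, 1)] * 5:
    P = P @ np.linalg.matrix_power(A, m) @ np.linalg.matrix_power(B, n)

print(rho_B, norm_B, np.linalg.norm(P, 2))
```

The spectral radius stays below one even though the operator norm of $B$ exceeds one, which is exactly why the proof must exploit the structure of the products rather than submultiplicativity alone.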
\begin{prop}\label{prop7} Let $\bA$ consist of the following $K+2$ matrices: \begin{equation*} A_0=\begin{pmatrix}a&b\\c&d\end{pmatrix},\;A_1=\begin{pmatrix}a_{11}&r_1b\\r_1c&d_{11}\end{pmatrix},\;\dotsc,\;A_K=\begin{pmatrix}a_{KK}&r_Kb\\r_Kc&d_{KK}\end{pmatrix},\; \textrm{and}\; B=\begin{pmatrix}b_{11}&r\sqrt{|b|}\\r\sqrt{|c|}&b_{22}\end{pmatrix}, \end{equation*} where $r, r_1,\dotsc,r_K$ are all constants, such that $bc\ge0$ and $\|B\|\le\max_{0\le i\le K}\rho(A_i)$. Then $\bA$ is absolutely stable if and only if $\rho(A_k)<1$ for all $0\le k\le K$. \end{prop} \begin{proof} The statement is obvious and we omit the details here. \end{proof} In fact, the absolute stability of $\bA$ is decidable in the situation of Theorem~\ref{thm1}. \section{Kozyakin's model}\label{sec3} In \cite{Koz07}, Kozyakin systematically studied the spectral finiteness of $\bA$ consisting of the following two matrices: \begin{equation*} A_0=\alpha\begin{pmatrix}a&b\\0&1\end{pmatrix}\quad \textrm{and} \quad A_1=\beta\begin{pmatrix}1&0\\c&d\end{pmatrix}, \end{equation*} where $a,b,c,d,\alpha$, and $\beta$ are all real constants, such that \begin{equation*} \alpha,\beta>0\quad \textrm{and}\quad bc\ge1\ge a>0,\;d>0.\leqno{(\mathrm{K})} \end{equation*} Let $\brho=\brho(\bA)$. We first note that, by \cite{Bar}, there exists a Barabanov norm $\pmb{\|}\cdot\pmb{\|}$ on $\mathbb{R}^2$; i.e., $$ \brho\pmb{\|}x\pmb{\|}=\max\{\pmb{\|}A_0x\pmb{\|}, \pmb{\|}A_1x\pmb{\|}\}\quad\forall x\in\mathbb{R}^2.
$$ And so for any $x_0\in\mathbb{R}^2\setminus\{0\}$, one can find a corresponding (B-extremal) switching law $$ \mathfrak{i}_{\bcdot}(x_0)\colon\{1,2,\dotsc\}\rightarrow\{0,1\} $$ such that $$\pmb{\|}A_{\mathfrak{i}_n(x_0)}\dotsm A_{\mathfrak{i}_1(x_0)}x_0\pmb{\|}=\pmb{\|}x_0\pmb{\|}\brho^n\quad\forall n\ge1.$$ Then from Kozyakin~\cite[Theorem~6]{Koz07}, it follows that there exists the limit \begin{equation*} \sigma(\bA):=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\mathfrak{i}_k(x_0), \end{equation*} called the \textit{switching frequency} of $\bA$, which does not depend on the choices of $x_0$ and the (B-extremal) switching law $\mathfrak{i}_{\bcdot}(x_0)$. Kozyakin (cf.~\cite[Theorem~10]{Koz07}) asserted that if $\sigma(\bA)$ is irrational, then $\bA$ does not have the spectral finiteness. We now show that the converse also holds. \begin{theorem}\label{thm8} Under condition $(\mathrm{K})$, $\bA$ has the spectral finiteness if and only if its switching frequency $\sigma(\bA)$ is rational. \end{theorem} \begin{proof} If $\sigma(\bA)$ is an irrational number, then it follows from \cite[Theorem~10]{Koz07} that $\bA$ does not have the spectral finiteness. Next, assume $\sigma(\bA)$ is rational. Then \cite[Theorem~6]{Koz07} implies that one can find some $x_0\in\mathbb{R}^2\setminus\{0\}$ and a corresponding periodic switching law, say \begin{equation*} \mathfrak{i}_{\bcdot}(x_0)=(\uwave{i_1i_2\dotsm i_\pi}\,\uwave{i_1i_2\dotsm i_\pi}\,\uwave{i_1i_2\dotsm i_\pi}\,\dotsm), \end{equation*} such that $$\pmb{\|}A_{\mathfrak{i}_n(x_0)}\dotsm A_{\mathfrak{i}_1(x_0)}x_0\pmb{\|}=\pmb{\|}x_0\pmb{\|}\brho^n\quad\forall n\ge1,$$ where $\brho=\brho(\bA)$. Therefore, it holds that $$\pmb{\|}(A_{i_\pi}\dotsm A_{i_1})^n\pmb{\|}\ge\brho^{n\pi}\quad\forall n\ge1.$$ Moreover, from the classical Gel'fand spectral formula we have \bean \rho(A_{i_\pi}\dotsm A_{i_1})=\lim_{n\to\infty}\sqrt[n]{\pmb{\|}(A_{i_\pi}\dotsm A_{i_1})^n\pmb{\|}}\ge\brho^{\pi}.
\eean Thus, $\brho(\bA)=\sqrt[\pi]{\rho(A_{i_\pi}\dotsm A_{i_1})}$, which means the spectral finiteness. This completes the proof of Theorem~\ref{thm8}. \end{proof} This result improves \cite[Theorem~10]{Koz07} and should be convenient for applications. Let us consider an explicit example. \begin{example}\label{example9} Let $\bB=\{B_0,B_1\}$ be such that \bean B_0=\begin{pmatrix}a&b\\0&1\end{pmatrix}\quad \textrm{and} \quad B_1=\begin{pmatrix}1&0\\c&d\end{pmatrix}, \eean where $a,b,c,d\in\mathbb{R}$. \end{example} We will divide our arguments into several cases. 1). If $ad=0$, then either $\mathrm{rank}(B_0)=1$ or $\mathrm{rank}(B_1)=1$, and so $\bB$ has the spectral finiteness from Theorem~H stated in Section~\ref{sec1}. 2). If $bc=0$, then $\bB$ has the spectral finiteness from Corollary~\ref{cor4} stated in Section~\ref{sec2}. 3). If $a<0$ and $d<0$, then $\bB$ has the spectral finiteness from Theorem~E stated in Section~\ref{sec1}. 4). If $a=d$ and $b=c$, then $\bB$ has the spectral finiteness from Theorem~F stated in Section~\ref{sec1} such that $\brho(\bB)=\max\left\{\rho(B_0), \sqrt{\rho(B_0B_1)}\right\}$. 5). Next, let $ad\not=0, bc\not=0$, and define \begin{equation*} Q=\begin{pmatrix}\frac{a-1}{b}&1\\0&1\end{pmatrix}. \end{equation*} When $a\not=1$, we obtain \begin{equation*} QB_0Q^{-1}=\begin{pmatrix}a&0\\0&1\end{pmatrix}\quad \textrm{and}\quad QB_1Q^{-1}=\begin{pmatrix}1+\frac{bc}{a-1}&\frac{(d-1)(a-1)-bc}{a-1}\\\frac{bc}{a-1}&d-\frac{bc}{a-1}\end{pmatrix}. \end{equation*} Note that \begin{equation*} \frac{(d-1)(a-1)-bc}{a-1}\times\frac{bc}{a-1}\ge0\quad \mathrm{iff}\quad [(1-a)(1-d)-bc]\times bc\ge0. \end{equation*} Hence, if \begin{equation*} [(1-a)(1-d)-bc]\times bc\ge0, \end{equation*} then from Corollary~\ref{cor4}, it follows that $\bB$ has the spectral finiteness with $$\brho(\bB)=\max\{\rho(B_0), \rho(B_1)\}.$$ 6). If $a=d=1$ and $bc\ge1$, then $\bB$ has the spectral finiteness from Theorem~\ref{thm8}.
Indeed, in this case, \cite[Lemma~12]{Koz07} implies that the switching frequency $\sigma(\bB)=\frac{1}{2}$ is rational, and then Theorem~\ref{thm8} implies the spectral finiteness of $\bB$. We notice that our cases 1)\,--\,5) are beyond Kozyakin's condition $(\mathrm{K})$. \section*{\textbf{Acknowledgments}} \noindent The author would like to thank Professors Y.~Huang and M.~Xiao for helpful discussions, and is particularly grateful to Professor Victor Kozyakin for his helpful comments on Theorem~\ref{thm8}. \bibliographystyle{amsplain}
\section{} \robbert{no shadowing}\\ We want a UE to be served by approximately $L_{u}$ O-RUs. Thus, we need to define a circle around the UE which contains the $L_{u}$ closest O-RUs. The O-RUs are distributed as a Poisson point process. For this, we can use Campbell's theorem. We define the Borel set $\mathcal{B}$ on the Poisson point process, which in our case specializes to a circular area with radius $R$: \begin{equation} \label{eq:campbell} \E{N(\mathcal{B})} = \int_{0}^{R} 2\lambda(r) \pi r dr \end{equation} We must define a distance-based intensity measure on the Borel set where the intensity only counts the O-RUs that have a path loss lower than the threshold, \begin{equation} \lambda(r) = \bar{\lambda} P(\beta(r) > \bar{\beta}) \end{equation} For each of these O-RUs in this Borel set, the path loss is a deterministic function of the distance (no shadow fading yet), \begin{equation} \beta(r) = a + b\log(r), \end{equation} Let us define $\bar{r}$ as the distance at which the path loss reaches the threshold we defined before, \begin{equation} \bar{r} = 10^{\frac{\bar{\beta} - a}{b}} \end{equation} Thus, we can easily define the probability of an O-RU at distance $r$ falling within the path loss threshold, \begin{equation} P(\beta(r) > \bar{\beta}) = \begin{cases} 1 \quad, r < \bar{r} \\ 0 \quad, r \geq \bar{r}. \end{cases} \end{equation} This probability allows us to solve the integral in Equation \ref{eq:campbell}, \begin{equation} \begin{split} \E{N(\mathcal{B})} &= \int_{0}^{\infty} 2 \bar{\lambda}P(\beta(r) > \bar{\beta})\pi r dr \\ &= \int_{0}^{\bar{r}} 2 \bar{\lambda} \pi r dr \\ &= \bar{\lambda} \pi \bar{r}^2 \\ L_u &= \bar{\lambda} \pi 10^{2\frac{\bar{\beta} - a}{b}}.
\end{split} \end{equation} By fixing the number of O-RUs that we would like to serve each UE, we can invert this to obtain a path-loss threshold \begin{equation} \bar{\beta} = \frac{b}{2}\log_{10}\left(\frac{L_u}{\bar{\lambda} \pi}\right) + a \end{equation} Next, we generalize this to a scenario where shadow fading is included. \section{} \robbert{shadowing}\\ We want a UE to be served by approximately $L_{u}$ O-RUs. Thus, we need to define a circle around the UE which contains the $L_{u}$ closest O-RUs. The O-RUs are distributed as a Poisson point process. For this, we can use Campbell's theorem. We define the Borel set $\mathcal{B}$ on the Poisson point process, which in our case specializes to a circular area with radius $R$: \begin{equation} \E{N(\mathcal{B})} = \int_{0}^{R} 2\lambda(r) \pi r dr \end{equation} We must define a distance-based intensity measure on the Borel set where the intensity only counts the O-RUs that have a path loss lower than the threshold, \begin{equation} \lambda(r) = \bar{\lambda} P(\beta(r) > \bar{\beta}) \end{equation} For each of these O-RUs in this Borel set, the path loss is then distributed as a log-normal, \begin{equation} \beta(r) = a + b\log(r) + x_{sf}, \end{equation} where $x_{sf} \sim \mathcal{N}(0, \sigma_{sf}^2)$.
Thus $\beta(r) \sim \mathcal{N}(a + b\log(r), \sigma_{sf}^2)$. Define $\bar{x}(r)$ as the shadowing value at which the threshold is reached, i.e. \begin{equation} \bar{\beta} < a + b\log(r) + \bar{x}(r) \end{equation} The limit can be found as \begin{equation} \bar{x}(r) = \bar{\beta} - a - b\log(r) \end{equation} \begin{equation} \begin{split} P(\beta(r) > \bar{\beta}) &= P(a + b\log(r) + x_{sf} > \bar{\beta}) \\ &= P(x_{sf} > \bar{\beta} - a - b\log(r)) \\ &= P(x_{sf}> \bar{x}(r)) \\ &= Q\left(\frac{\bar{x}(r)}{\sigma_{sf}}\right) \end{split} \end{equation} \begin{equation} \E{N(\mathcal{B})} = \int_{0}^{\infty} 2 \bar{\lambda}Q\left(\frac{\bar{x}(r)}{\sigma_{sf}}\right) \pi r dr \end{equation} Instead of looking at the actual area of the Borel set, we can also take $\bar{\beta}$ as the limiting factor; writing $f(r) = Q\left(\frac{\bar{x}(r)}{\sigma_{sf}}\right)$ and integrating by parts, \begin{equation} \E{N(\bar{\beta})} = \begin{bmatrix} f(r) \bar{\lambda} r^2 \pi\end{bmatrix}_{r =0}^{r=\infty} - \int_0^{\infty} f'(r) \bar{\lambda} \pi r^2 dr \end{equation} with \begin{equation} f'(r) = \frac{1}{\sigma_{sf}\sqrt{2\pi}} \exp \left(-\frac{1}{2} \left(\frac{\bar{x}(r)}{\sigma_{sf}} \right) ^2 \right) \log_{10}(e)\frac{b}{r} \end{equation} and \begin{equation} \begin{split} \bar{x}^2(r) &= \bar{\beta}^2 - 2a\bar{\beta} -2 \bar{\beta} b \log(r) + a^2 + 2ab\log(r) + (b \log(r))^2 \\ &= (\bar{\beta} - a)^2 - 2b\log(r)(\bar{\beta} - a) + (b \log(r))^2 \end{split} \end{equation} This can be lower/upper bounded by arbitrary approximations of Q-functions; unfortunately, these do not lead to expressions that are solvable for $\bar{\beta}$, which is the parameter for which we would like to solve. Another possible solution is to solve for $r$ and then increase it such that O-RUs at the edge are included with 99\% probability.
This will however lead to \section{} The variance of a sum over the PPP is also given by Campbell's theorem. This implies that if we use the counting function as the measurable function on the Borel set defined as the area in the Poisson point process, the variance is the same as the expected value; thus, for a large serving cluster, the tail distribution is quite heavy either way, and any range-based metric for O-RU selection will not provide a tight approximation for the number of O-RUs that serve a certain UE. \section{} We can also condition on the distance via $\bar{r}$; we then get the following result, i.e. $\mathcal{B}$ is a circle with radius $\bar{r}$. \begin{equation} \E{N(\mathcal{B})} = \int_{0}^{\bar{r}} 2\lambda(r) \pi r dr \end{equation} Since the tail distribution of the counting operation on the Poisson point process is quite heavy anyway, \begin{equation} \begin{aligned} \text{Var}[N(\mathcal{B})] &=\lambda \int_{\mathcal{B}} f^2 \\ &= \lambda \int_{\mathcal{B}} 1 \\ &= \lambda \pi \bar{r}^2 \label{eq:var} \end{aligned} \end{equation} In order to look at the tightness of the expected value of O-RUs serving a UE, we should consider the dispersion around the mean of $N(\mathcal{B})$. The variance is equal to the expected value, and thus the standard deviation satisfies \begin{equation} \sqrt{\text{Var}[N(\mathcal{B})]} = \sqrt{\E{N(\mathcal{B})}} \end{equation} Thus, for a larger serving cluster, the relative error is smaller, \begin{equation} \frac{\sqrt{\text{Var}[N(\mathcal{B})]}}{\E{N(\mathcal{B})}} = \frac{1}{\sqrt{\E{N(\mathcal{B})}}} \end{equation} Consequently, a distance-based metric will give tighter approximations of the number of O-RUs for larger serving clusters.
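As a quick numerical check of the no-shadowing result above (all parameter values below are illustrative assumptions, not from the text), the closed form $L_u=\bar{\lambda}\pi 10^{2(\bar{\beta}-a)/b}$ and its inversion for $\bar{\beta}$ are mutually consistent, and the PPP relative dispersion behaves as $1/\sqrt{\E{N(\mathcal{B})}}$:

```python
import numpy as np

# Assumed illustrative values: O-RU density, path-loss parameters (b < 0),
# and target serving-cluster size.
lam_bar = 1e-3
a, b = -30.0, -37.0
L_u = 8

# Threshold from inverting Campbell's result, then the limit radius and E[N].
beta_bar = (b / 2) * np.log10(L_u / (lam_bar * np.pi)) + a
r_bar = 10 ** ((beta_bar - a) / b)
E_N = lam_bar * np.pi * r_bar ** 2           # Campbell: E[N] = lambda * pi * r_bar^2

# For a PPP, Var[N] = E[N], so the relative dispersion is 1/sqrt(E[N]).
rel_dispersion = 1 / np.sqrt(E_N)
print(E_N, rel_dispersion)
```

The round trip recovers $\E{N(\mathcal{B})}=L_u$ exactly, since $\bar{r}=\sqrt{L_u/(\bar{\lambda}\pi)}$ by construction.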
The received power at this limit distance is then given as a normally distributed random variable with mean $ a + b\log({\bar{r}})$ and variance $\sigma_{\text{sf}}^2$, \begin{equation} \beta(\bar{r}) = a + b\log({\bar{r}}) + x_{\text{sf}} \end{equation} We define $\bar{\beta}$ as the upper limit on the path loss for which we still serve a certain UE, \begin{equation} \bar{\beta} = a + b\log({\bar{r}}) + \gamma \end{equation} \section{} \begin{equation} \begin{aligned} \E{P} &= \int_{\mathcal{B}} \beta(r) \lambda \pi r dr \\ \end{aligned} \end{equation} For tightness, consider the second moment, \begin{equation} \E{P^2} = \int_{\mathcal{B}} \beta(r)^2 \lambda \pi r dr \end{equation} \end{comment} \section{Cluster Formation} \label{sec:cluster} In this section, we define two clustering strategies for determining the set of O-RUs that serve a UE. To define the two clustering strategies, we introduce the concepts of \emph{primary O-DU and O-RU}, \emph{measurement cluster} and \emph{serving cluster}. When a UE first connects to the network, that UE chooses a primary O-RU, and the O-DU that controls that O-RU logically becomes the primary O-DU. Physical layer processing is split between O-RU and O-DU, i.e. the O-RU applies the combining vector; the O-DU computes the combining vector, estimates the channel, and combines the decoded signals from the O-RUs based on large-scale fading coefficients. The concept of the primary O-RU (and O-DU) is important in our network, as the primary O-RU determines: \begin{itemize} \item {\bf Handover:} The handover, as described in Section~\ref{sec:handover}, occurs when a new primary O-RU is selected. \item {\bf Measurement cluster:} The measurement cluster is a set of O-RUs located in proximity to the primary O-RU. With the primary O-RU, the measurement cluster is uniquely defined. \end{itemize} Two {\bf serving cluster} strategies are proposed in this section: a \emph{Fixed} serving cluster formation and an \emph{Opportunistic} serving cluster formation.
The first one is fixed each time the UE triggers a handover. With such a handover, a new primary O-RU and measurement cluster are determined, and a serving cluster is statically defined by the Near-RT RIC based on the reported channel gain measurements. The second method allows each O-DU, which performs the channel estimation for its O-RUs in the measurement cluster of a particular UE, to decide locally whether any of those O-RUs should aid the signal decoding for the considered UE. Below, we detail the procedures for selecting the primary O-RU and measurement cluster and the two serving cluster formation methods. In this work, we assume the UL and DL to be completely power-reciprocal. \subsection{Primary O-RU and Measurement Cluster} Upon connection to the network, a UE instantiates a connection to a single O-RU and O-DU combination. This can be done via a procedure as outlined in \cite{Bjornson_Sanguinetti_2020}. In short, this method implies that a UE determines which O-RU provides the best channel gain towards the UE via a Downlink (DL) control signal. The UE then connects to this primary O-RU and its O-DU. Based on the primary O-RU location, the Near-RT RIC will then form a measurement cluster of O-RUs, $\mathcal{M}^{\text{m}}_k$, for that specific UE $k$. The goal of the measurement cluster is to limit the number of O-RUs on which the channel gain should be measured for a specific UE when determining the serving cluster. The serving cluster is a subset of O-RUs $ \mathcal{M}^{\text{s}}_k$ that aid in the joint precoding/decoding for a specific UE $k$. Note that these O-RUs can belong to one or multiple O-DUs, thus potentially requiring O-DUs to cooperate. In this work, we take the measurement cluster to be the set of $|\mathcal{M}^{\text{m}}_k|$ O-RUs that are located the closest to the primary O-RU. We take the size of the measurement cluster to be twice as large as the serving cluster, i.e. $|\mathcal{M}^{\text{m}}_k| = 2|\mathcal{M}^{\text{s}}_k|$.
The size of the serving cluster is a network-wide parameter. The measurement cluster is the same for every method we discuss later and is uniquely defined by the primary O-RU location. Every O-DU that controls an O-RU $l \in \mathcal{M}^{\text{m}}_k$ is then notified that it should track the channel gain, $\beta_{l,k}$, for that specific UE $k$ in the relevant O-RUs' streams. In Section~\ref{section:mobility}, we explain how the large-scale fading in the channel is modelled as a function of UE mobility. Thus, the access procedure, including the measurement cluster allocation, can be summarised as follows: \begin{enumerate} \item A UE completes an initial access procedure to the primary O-DU that controls the primary O-RU from which the UE has the highest DL channel gain $\beta_{l,k}$. \item The primary O-DU requests the Near-RT RIC to designate a measurement cluster for the newly attached UE. \item The Near-RT RIC designates a set of O-RUs as a measurement cluster, $\mathcal{M}^{\text{m}}_k$, for a specific UE; the O-DUs that control these O-RUs track the received power of the pilots of these UEs in the relevant O-RUs' streams. \end{enumerate}% This measurement cluster allows us to define two methods for finding a serving cluster, which we discuss in Subsections~\ref{subsec:uetrig} and~\ref{subsec:nettrig}. In this work, we employ two clustering strategies, fixed clustering and opportunistic tracking, and compare them against two baselines, ubiquitous CF and a cellular system with only the primary O-DU serving.
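A minimal sketch of this access procedure follows (synthetic positions and a hypothetical distance-based path-loss law; all numeric values are illustrative assumptions): the primary O-RU is the one with the highest DL channel gain, and the measurement cluster is the $2|\mathcal{M}^{\text{s}}_k|$ O-RUs nearest to it.

```python
import numpy as np

rng = np.random.default_rng(0)
L, Ms = 32, 4                   # number of O-RUs, serving-cluster size (assumed)
Mm = 2 * Ms                     # measurement cluster is twice the serving cluster
oru_pos = rng.uniform(0, 500, size=(L, 2))
ue_pos = rng.uniform(0, 500, size=2)

# Hypothetical distance-based DL channel gain in dB (no shadowing here).
dist = np.linalg.norm(oru_pos - ue_pos, axis=1)
beta_dB = -30.0 - 37.0 * np.log10(dist)

primary = int(np.argmax(beta_dB))                          # best DL gain
d_to_primary = np.linalg.norm(oru_pos - oru_pos[primary], axis=1)
meas_cluster = np.argsort(d_to_primary)[:Mm]               # Mm closest O-RUs
print(primary, sorted(meas_cluster.tolist()))
```

Since the primary O-RU is at distance zero from itself, it is always part of its own measurement cluster.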
\subsection{Fixed Cluster Formation}% \label{subsec:uetrig} \begin{algorithm} \caption{Initial Fixed Serving Cluster Formation }\label{alg:UCInitial} \begin{algorithmic}[1] \FOR{$k = 1 \dots K$} \STATE $l^{\ast}_k = \arg\max_l \beta_{l,k}[0]$ \STATE $\mathcal{M}^{\text{s}}_k[0] \gets O_k[0]$ \STATE{$ \bar{P}_k \gets 10 \log_{10} \left( \sum_{l \in \mathcal{M}^{\text{s}}_k[0]}\beta_{l, k}[0]\right)$} \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{figures/ue-centric.pdf} \caption{Signalling flow for the Fixed Clustering; upon trigger by a UE, the O-DUs which control O-RUs in the measurement cluster (orange) measure the channel gain $\beta_{l,k}$ in the received stream from those O-RUs. The O-DUs then transfer these channel gains to the Near-RT RIC. The Near-RT RIC then notifies the O-DUs of the set of UEs, $\mathcal{D}_l$, to be served on each O-RU $l$. } \end{figure}% After the measurement cluster is obtained, the O-DUs that control O-RUs in the measurement cluster report the UL channel gains, $\beta_{l,k}$, for UE $k$ to the Near-RT RIC. In Algorithm~\ref{alg:UCInitial} we show how the fixed serving cluster is formed for a specific UE $k$ by the Near-RT RIC. The $|\mathcal{M}^{\text{s}}_k|$ O-RUs with the highest UL channel gain are added to the serving cluster, and a metric $\bar{P}_k$, the sum of DL channel gains from the serving cluster to UE $k$, is determined. This metric represents the total signal quality in the cluster when it is created. This metric is also updated regularly from channel estimates and used by the handover mechanism, which will be discussed in Section~\ref{sec:handover}. We define $O_k[t]$ as the function that maps the set of all O-RUs in the measurement cluster to a serving cluster which contains only the $|\mathcal{M}^{\text{s}}_k|$ O-RUs that experience the highest UL channel gains from that UE at time $t$.
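The serving-cluster selection and the metric $\bar{P}_k$ of Algorithm~\ref{alg:UCInitial} can be sketched as follows (synthetic linear-scale gains; the sizes and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
Ms = 4                                        # serving-cluster size |M^s_k| (assumed)
beta = rng.uniform(1e-12, 1e-9, size=8)       # gains of the measurement-cluster O-RUs

# O_k[0]: keep only the Ms O-RUs with the highest UL channel gain.
serving = np.argsort(beta)[-Ms:]

# P_bar_k: total gain of the serving cluster, expressed in dB.
P_bar = 10 * np.log10(beta[serving].sum())
print(sorted(serving.tolist()), P_bar)
```

Every O-RU kept in the serving cluster has a gain at least as high as every O-RU that was left out, which is exactly the top-$|\mathcal{M}^{\text{s}}_k|$ selection the algorithm describes.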
The main advantage of this method is that the number of O-RUs, $|\mathcal{M}^{\text{s}}_k|$, can be chosen by the Near-RT RIC. The main disadvantage is the large amount of signalling between the O-DUs and the Near-RT RIC. \subsection{Opportunistic Cluster Formation} \label{subsec:nettrig} \begin{figure}[htb] \centering \includegraphics[width=0.8\linewidth]{figures/opportunistic.pdf} \caption{Signalling flow for the Opportunistic Clustering; the O-DUs which control O-RUs in the measurement cluster (orange) measure the channel gain $\beta_{l,k}$ in the received stream from those O-RUs. The O-DUs can locally decide which UEs should be served on its O-RUs. The O-DU then notifies the Near-RT RIC of this set of UEs, $\mathcal{D}_l$, for each of its O-RUs. } \end{figure} By this method, after the allocation of the primary O-RU, primary O-DU and measurement cluster, the O-DUs can decide locally if they serve a specific UE on O-RUs that have unused resources, without involving the Near-RT RIC. O-DUs decide opportunistically by only selecting the UEs with the highest UL channel gains. Since O-DUs can decide autonomously, we need a way to limit the size of the serving cluster. We achieve this by both limiting the measurement cluster and placing an upper limit on the number of UEs that a specific O-RU can serve opportunistically. By this method, we attempt to select only the best UEs per O-RU. This is a different approach to achieving the same goal as the fixed clustering scheme: a limited number of O-RUs serving each UE. In the fixed clustering approach, the Near-RT RIC selects the best O-RUs for a specific UE and the serving cluster has a deterministic size. In the opportunistic approach, the O-DU selects the best UEs for each O-RU and the size of the serving cluster is upper bounded by the number of UEs per O-RU and the size of the measurement cluster. Algorithm~\ref{algo:NCInitial} shows how the O-RUs can be loaded maximally.
\\ In contrast to the fixed clustering strategy, this opportunistic strategy avoids a network-wide procedure executed at the Near-RT RIC. We define $\mathcal{D}_l$ as the set of all UEs that are served by O-RU $l$. The number of UEs that are selected for an O-RU has an upper limit, which we design here to be the number of antennas on the O-RU, $N$. Naturally, we let the UEs that use an O-RU as a primary take priority over those served opportunistically by the same O-RU. We define the function $Q^{(w)}_l[t]$ as the function that maps all the UEs whose measurement cluster contains O-RU $l$ to the subset of $w$ UEs that achieve the best channel towards O-RU $l$ at time instance $t$. This function loads an O-RU up to its limit, including the UEs that use it as a primary O-RU, i.e. an O-RU that can serve $N$ UEs and currently serves $K^{\star}_l$ UEs as primary O-RU could take an additional $N - K^{\star}_l$ UEs opportunistically. Notice the importance of the measurement cluster to the number of O-RUs that serve a specific UE, as only O-RUs in its measurement cluster can potentially be used to serve that UE. The authors of \cite{Bursalioglu_Caire_Mungara_Papadopoulos_Wang_2019} also propose an opportunistic clustering scheme where O-RUs can decide locally if they serve a specific UE. However, they base this decision on pilot contamination and do not consider UE mobility.
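The per-O-RU selection $Q_l^{(w)}[t]$ can be sketched as follows (synthetic gains; antenna and UE counts are assumptions): an O-RU with $N$ antennas already serving $K^{\star}_l$ primary UEs opportunistically takes the $w=N-K^{\star}_l$ best remaining candidates.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8                        # antennas on O-RU l = max UEs it may serve (assumed)
K_primary = 3                # UEs using this O-RU as their primary O-RU (assumed)

# UL gains of UEs whose measurement cluster contains O-RU l (hypothetical values).
gains = {k: float(g) for k, g in enumerate(rng.uniform(1e-12, 1e-9, size=20))}

# Q_l^(w): the w candidate UEs with the best channel towards O-RU l.
w = N - K_primary
opportunistic = sorted(gains, key=gains.get, reverse=True)[:w]
print(w, opportunistic)
```

The decision uses only quantities the O-DU already has locally (the tracked gains of UEs whose measurement cluster contains this O-RU), which is why no Near-RT RIC involvement is needed.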
\begin{algorithm} \caption{Initial Opportunistic Cluster Formation }\label{algo:NCInitial} \begin{algorithmic}[1] \FOR{$k = 1 \dots K$} \STATE{\textbf{Assign O-RU with highest gain to each UE (primary O-RU)}} \STATE $l^{\ast}_k = \arg\max_l \beta_{l,k}[0]$ \ENDFOR \FOR{$l = 1 \dots L$} \STATE{\textbf{For each O-RU, add more UEs up to the maximum of $N$}} \STATE $\mathcal{D}_l[0] \gets Q_l^{(N - K^{\star}_l)}[0] \cup \{k: l^{\ast}_k = l\}$ \STATE{$\bar{\beta}^{\text{dB}}_{l, k} \gets \beta^{\text{dB}}_{l, k}[0] \qquad \forall k \in \mathcal{D}_l$} \ENDFOR \end{algorithmic} \end{algorithm} The main advantage of this method is the low computational cost and load on the E2 interface. The only required signalling is the notification of the measurement clusters from the Near-RT RIC to the O-DUs. This message can also indicate which O-DU serves as the primary O-DU for a specific UE. This way, an O-DU knows with which O-DU it should instantiate inter-O-DU interfaces to enable cooperative decoding. The main disadvantage of this method is that there are no guarantees on the number of serving O-RUs beyond the primary O-RU. \subsection{Ubiquitous Cell-Free} By this method of operation, every UE is served by every single O-RU in the network and thus $|\mathcal{M}^s_k| = L$. We use this as an upper bound on the performance of the clustering. It is expected that performance drops when only part of the network is used to serve a specific UE. By comparing to the optimal, ubiquitous case, we can evaluate the performance gap to a heuristic handover scheme. \subsection{Cellular} In the cellular case, every UE is served only by O-RUs connected to a single O-DU. Hence, only $|\mathcal{M}^s_k| = L/C$ O-RUs cooperate to serve a single UE. Thus, the cellular case is used as a lower bound on the performance to show the benefits of implementing our proposed inter-O-DU interface.
The disadvantages are twofold: first, the number of O-RUs is inherently limited by the size of the O-DU; second, the set is never optimal, as there is a high probability that O-RUs from a different O-DU have a better channel to the UE than the selected O-RUs. This method of operation is similar to canonical Distributed MIMO. \section{Conclusion} In this work, we mapped a UL detection method from the CF mMIMO state-of-the-art to the O-RAN architecture. We proposed a temporal channel model based on the shadow fading for CF mMIMO and have shown its effect on handover frequency. We discussed two clustering and handover strategies, mapped them to the O-RAN architecture and benchmarked them via our selected UL detection method. We find that the opportunistic clustering works as well as the fixed clustering method, and even better at high speeds, with significantly less signalling per handover. We also demonstrated that CF mMIMO is much more resilient against UE mobility than classical cellular systems (Figure~\ref{fig:comparison}). UE mobility in CF mMIMO is a research area still facing many complex problems, especially if user-centric clusters are considered instead of ubiquitous CF. We highlighted the significant synergy between O-RAN and the state-of-the-art in CF mMIMO, and that O-RAN could be a strong enabler for CF mMIMO networks with only minimal changes to its architecture. \section{Handover} \label{sec:handover} If a serving cluster can only use a subset of the O-RUs in the network and we consider UE mobility, any selected subset of O-RUs will become suboptimal as the UE moves away from its cluster. Hence, in this section, we propose updating strategies for the clusters from Section~\ref{sec:cluster}. For the fixed clustering in Section~\ref{subsec:uetrig}, we define a threshold for the entire cluster based on the DL power received at the UE.
For the opportunistic cluster in Section~\ref{subsec:nettrig}, we define a threshold per O-RU such that UEs select their primary O-RU and the O-DUs can decide locally which UEs to serve on which O-RUs opportunistically. We discuss these handover procedures at discrete times $t$, which are spaced apart at the same intervals, $T_s$, as in Section~\ref{section:mobility}. \subsection{Handover for Fixed Clustering} \begin{algorithm}[h!] \caption{Handover for Fixed Clustering }\label{alg:cap} \begin{algorithmic}[1] \STATE \textbf{Initialize:} \STATE \hspace*{\algorithmicindent}\parbox[t]{.8\linewidth}{\raggedright $\mathcal{M}^{\text{s}}_k[0] \gets O_k[0]$} \STATE \hspace*{\algorithmicindent}\parbox[t]{.8\linewidth}{\raggedright $\bar{P}_k \gets 10\log_{10} \left( \sum_{l \in \mathcal{M}^{\text{s}}_k[0]} \beta_{l,k}[0] \right)$} \FOR{$t = 1 \dots T$} \STATE{$P_k[t] \gets 10\log_{10} \left( \sum_{l \in \mathcal{M}^{\text{s}}_k[t]} \beta_{l,k}[t] \right)$} \IF{$\bar{P}_k - P_k[t] > M^{\text{F}}_{\text{HO}}$} \STATE{$\mathcal{M}^{\text{s}}_k[t] \gets O_k[t]$} \STATE{$\bar{P}_k \gets 10\log_{10} \left( \sum_{l \in \mathcal{M}^{\text{s}}_k[t]} \beta_{l,k}[t] \right)$} \ELSE \STATE $\mathcal{M}^{\text{s}}_k[t] \gets \mathcal{M}^{\text{s}}_k[t-1]$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} First, we describe a handover strategy for our fixed clustering strategy. In this case, the UE monitors the average received power $P_k[t]$, and triggers a handover if it drops below $\bar{P}_k - M^{\text{F}}_{\text{HO}}$, where $M^{\text{F}}_{\text{HO}}$ is a hysteresis threshold. The threshold brings a trade-off between system performance (higher SE) and signalling overhead (less frequent handovers). When the UE triggers a handover, it selects a new primary O-RU with the highest DL channel gain. The new primary O-DU then requests a new measurement cluster around the new primary O-RU and, subsequently, a serving cluster for that UE from the Near-RT RIC.
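The trigger condition of Algorithm~\ref{alg:cap} can be sketched numerically (gains and threshold below are assumed for illustration): the handover fires once the cluster-wide received power has dropped more than $M^{\text{F}}_{\text{HO}}$~dB below the value stored at cluster formation.

```python
import numpy as np

M_HO = 3.0                                    # hysteresis threshold in dB (assumed)
beta0 = np.array([2e-10, 1e-10, 5e-11])       # serving-cluster gains at formation
P_bar = 10 * np.log10(beta0.sum())            # reference power \bar{P}_k

# UE moved away: all gains dropped by a factor 0.4 (about 4 dB).
beta_t = 0.4 * beta0
P_t = 10 * np.log10(beta_t.sum())
trigger = (P_bar - P_t) > M_HO                # handover condition of the algorithm
print(round(P_bar - P_t, 2), trigger)        # prints 3.98 True
```

A larger $M^{\text{F}}_{\text{HO}}$ would tolerate this 3.98~dB drop and keep the stale cluster, trading spectral efficiency for fewer handovers, which is exactly the trade-off described above.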
To keep track of the received power, we assume that, in the DL control channel, the UE learns the DL channel gain from its O-RUs. Different works have argued in favour of DL pilots in CF mMIMO \cite{Interdonato_Ngo_Frenger_Larsson_2019}, and thus we see this as a valid assumption. It is also possible to transmit the UL measurements back to the UE in a control channel and to use these under mild reciprocity assumptions. We provide pseudocode in Algorithm~\ref{alg:cap}. \begin{comment} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{figures/UE-Centric Handover.png} \caption{Signalling flow for UE Centric Handover} \end{figure} \end{comment} \subsection{Opportunistic Cluster Tracking} \begin{algorithm}[htb] \caption{UE Tracking in Opportunistic Clustering}\label{alg:oppHO} \begin{algorithmic}[1] \FOR{$t=1\dots T$} \FOR{$k = 1 \dots K$ } \STATE{$\bar{l} \gets \arg\max_l \beta^{\text{dB}}_{l, k}$} \IF{$(\beta^{\text{dB}}_{\bar{l}, k}[t] > \beta^{\text{dB}}_{l^{\ast}_k, k}[t] + M^{\text{O}}_{\text{HO}}) \land (\bar{l} \neq l^{\ast}_k)$} \STATE{\textbf{Primary O-RU handover}} \STATE $\mathcal{D}_{l^{\ast}_k}[t] \gets Q_{l^{\ast}_k}^{(N - K^{\star}_{l^{\ast}_k} + 1)}[t]$ \STATE {$l^{\ast}_k \gets \bar{l}$} \STATE $\mathcal{D}_{\bar{l}}[t] \gets Q_{\bar{l}}^{(N - K^{\star}_{\bar{l}})}[t]$ \ENDIF \ENDFOR \FOR{$l = 1\dots L$ } \IF{$\exists \bar{k} \notin \mathcal{D}_l[t]: \beta^{\text{dB}}_{l, \bar{k}}[t] > \beta^{\text{dB}}_{l,k}[t] + M^{\text{O}}_{\text{HO}}, \quad k \in \mathcal{D}_l[t]$ } \STATE{\textbf{Opportunistic O-RU Reload}} \STATE{$\mathcal{D}_l[t] \gets Q_l^{(N - K^{\star}_l)}[t] \cup \{k: l^{\ast}_k = l\} $} \ELSE \STATE{$\mathcal{D}_l[t] \gets \mathcal{D}_l[t-1]$} \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} We also define a handover strategy for our clustering with opportunistic tracking. In opportunistic tracking, the cluster updates on two different levels: 1) the primary O-RU; 2) the opportunistic serving by the O-DU.
Every UE is connected to one O-RU as its primary one, denoted by $l^{\ast}_k$. A change of the primary O-RU requires the UE to perform an actual handover, hence we call this the \textit{primary O-RU handover}. To limit the signalling between the O-DUs and the Near-RT RIC, it is logical to make the UE responsible for the handover of its primary O-RU, as it is the UE's only persistent connection. A UE selects a new primary O-RU when it detects a significantly higher DL channel gain to a different O-RU in its measurement cluster. If that UE makes its new primary O-RU exceed its limit of served UEs, that O-RU drops the weakest UE it is currently serving opportunistically. When the primary O-RU is updated, the measurement cluster changes accordingly, i.e. the closest O-RUs to the new primary O-RU become the new measurement cluster.

Additionally, each O-DU can dynamically change the set of UEs it is opportunistically serving on its O-RUs. We call this an \textit{opportunistic O-RU reload}. Once an O-DU detects a UE with a significantly higher UL channel gain than the UEs it is currently serving on one of its O-RUs, the serving set for that O-RU is updated opportunistically by the O-DU. The O-DU achieves this by selecting the UEs with that O-RU in their measurement cluster with the highest UL channel gains (via $Q_l^{(w)}[t]$ from Section~\ref{subsec:nettrig}). Any changes in the opportunistic serving can happen dynamically, as this does not change any connections between UEs and their primary O-RU. We introduce the handover threshold as $M^{\text{O}}_{\text{HO}}$ and design it to be the same for the handover of the primary O-RU and the opportunistic addition of extra UEs. However, we acknowledge that it might be worthwhile to use different thresholds for the primary handover and the opportunistic O-RU reload. We outline the algorithm in Algorithm~\ref{alg:oppHO}. \subsection{Ubiquitous} Under this scheme, the UE is served by every O-RU in the system.
The UE can move anywhere without significant losses because the LSFD combiner can calculate new combining weights $\mathbf{a}_k$ based on the changing channel for every O-RU. Hence, no handover is needed. However, it is helpful to highlight the performance gap to this method. \subsection{Cellular} \begin{algorithm}[htb] \caption{Cellular Handover}\label{alg:cellHO} \begin{algorithmic}[1] \FOR{$t=1\dots T$} \FOR{$k = 1 \dots K$} \STATE{$\bar{l} \gets \arg\max_{l \notin \mathcal{M}^{\text{s}}_k} \beta^{\text{dB}}_{l, k}[t]$} \IF{$(\beta^{\text{dB}}_{\bar{l}, k}[t] > \beta^{\text{dB}}_{l^{\ast}_k, k}[t] + M^{\text{C}}_{\text{HO}})$} \STATE $\mathcal{M}_k^{\text{s}}[t] \gets \{\mathcal{L}_c | \bar{l} \in \mathcal{L}_c\}$ \ELSE \STATE $\mathcal{M}_k^{\text{s}}[t] \gets \mathcal{M}_k^{\text{s}}[t-1]$ \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Under this scheme, a UE requests a handover when it detects a significantly higher DL channel gain to an O-RU in a different O-DU than the one by which it is currently served. The threshold for this handover is $M^{\text{C}}_{\text{HO}}$. The algorithm is described in Algorithm~\ref{alg:cellHO}. \section{Introduction} The next generation of wireless networks will require enormous data rates. To this end, many solutions have been proposed, Cell-Free Massive MIMO (CF mMIMO) being amongst the most popular ones. It has been studied thoroughly by many researchers \cite{7421222} since its initial conception \cite{Ngo_Ashikhmin_Yang_Larsson_Marzetta_2015}. O-RAN, on the other hand, only recently gained significant traction in the research community. In this work, we show considerable synergy between these two and demonstrate that their combination can enable the next generation of wireless networks. \subsection{O-RAN} Recently, O-RAN has emerged as a way to organise mobile networks. This new architecture proves to be interesting for practical deployments of CF mMIMO networks for two reasons.
First, the physical layer is split between the O-RAN Distributed Units (O-DUs) and Radio Units (O-RUs). Second, extra functional blocks are introduced, enabling AI and containerised service orchestration on the network. These include the Near-Real Time RAN Intelligent Controller (Near-RT RIC), the Non-RT RIC, and the Service Management and Orchestration (SMO) framework \cite{ORAN-OVERVIEW}. The new interfaces and options for network-wide control can be exploited to achieve cooperation amongst O-RUs, even beyond the borders of the O-DUs. This work focuses on how an O-RAN network can achieve cooperation and smooth handover between its O-DUs via our proposed inter-O-DU interface, the E2 interface, and xApps in the Near-RT RIC. An xApp is a function in the Near-RT RIC which optimises the network via the E2 interface. The authors of \cite{Orhan_Swamy_Tetzlaff_Nassar_Nikopour_Talwar_2021} demonstrate a typical use-case for an xApp, which is network-wide handover management for load balancing and interference management in a cellular network. The reader is referred to \cite{Polese_Bonati} for an overview of the O-RAN architecture. \subsection{Cell-Free Massive MIMO} CF mMIMO refers to a network with many more Access Points (APs) than User Equipment (UEs), where APs cooperate via coherent joint transmission and reception. Numerous promising algorithms have been proposed, many of which target the physical layer \cite{Ammar_Adve_Shahbazpanahi_Boudreau_Srinivas_2022, Chen_Zhang_Zhang, Elhoushy_Ibrahim_Hamouda_2022}. However, many questions remain open in resource scheduling, including pilot allocation, power control, and user clustering. In particular, it is unclear how to organise these networks practically and which changes and extensions to current standards are required to let APs cooperate optimally.
The authors of \cite{Demir_Masoudi_Bjornson_Cavdar_2022} propose a way to integrate CF mMIMO into the Cloud-RAN (C-RAN) architecture while optimising for energy efficiency. The optimisation problem is tailored specifically towards a C-RAN deployment. They split the total power consumption into parts for Optical Network Units, APs, and DUs in the cloud. An initial attempt at standardisation of cooperation across APs was made in the LTE standard via Coordinated Multipoint Joint Transmission (CoMP-JT). Unfortunately, the standard never clearly specified how the cooperating APs should organise their transmissions and coordinate amongst themselves. The main problem was limited interest from system vendors since legacy APs could only achieve marginal gains from the cooperative serving of UEs \cite{fantini_zirwas_thiele_aziz_baracca_dohler_nakamura_2016}. CF mMIMO attempts to overcome this by utilising algorithms from Massive MIMO in TDD, densely deployed networks and user-centric transmission \cite{Interdonato_Bjornson_Quoc_Ngo_Frenger_Larsson_2019}. User-centric clustering was already identified as an important research direction for CoMP networks \cite{Bassoy_Farooq_Imran_Imran_2017}. Many seminal works discuss a Central Processing Unit (CPU) which is responsible for the higher parts of the physical layer processing. The centralisation of all computations in a central unit should be avoided, as this imposes immense requirements on the fronthaul capacity and computing power of that single unit. The idea that the CPU need not be a single entity has existed for quite some time and is explored in \cite{Bjornson_Sanguinetti_2020, Interdonato_Frenger_Larsson_2019, Li_Sun_Ji_Chen_2022, Riera-Palou_Femenias_2019}. The authors in \cite{Bjornson_Sanguinetti_2020} claim that the CPU should not be thought of as a physically central unit but as a collection of algorithms that can be run anywhere in the network.
\cite{Interdonato_Frenger_Larsson_2019} proposes an architecture where multiple CPUs cooperate to serve their UEs via user-centric clustering. They propose an algorithm with performance comparable to the canonical, ubiquitous cell-free case while being scalable. This idea is reflected in our work, where O-DUs decode the signal collaboratively. The Near-RT RIC serves as an additional network-wide controller which executes specific optimisation algorithms. \subsection{State of the Art in Mobility Modeling} The current temporal models for CF mMIMO are rather limited. \cite{Zheng_Zhang_Bjornson_Ai_2021, Chopra_Murthy_Papazafeiropoulos_2021, Zheng_Zhang_Shi_Jiang_Ai_2022, Anand_Murthy_Chopra_2022} use the Jakes autocorrelation model, where the different channels are assumed to be from the same distribution, and the autocorrelation is modelled via a Bessel function. In \cite{DAndrea_Interdonato_Buzzi_2021}, the UE moves along a predetermined straight line. Their mmWave channel consists of an LoS component and an NLoS component, both updated at discrete sampling times. The location of the UE determines the AoA and the path loss; thus, the LoS component can be computed directly. The NLoS components are computed as an autoregressive function via the Jakes autocorrelation model. They primarily focus on beamforming that is more robust to UE mobility. In \cite{Xiao_Mahonen_Simic_2022}, the probability of handover in Network MIMO is studied using distance-based metrics. As such, they do not model the channel itself but only the movement of the UEs. \cite{Zaher_Bjornson_Petrova_2022} uses a blockage map of an urban location; hence, the shadowing is consistent between UEs and along the path of the UE. This is interesting as it mimics real-life shadowing very closely, although it does not account for moving blockers. Unfortunately, it is computationally very taxing and scenario-dependent. They also use the concept of soft handover, but they are more focused on pilot allocation.
\subsection{Synergy between O-RAN and Cell-Free} The O-RAN Alliance builds upon the continuing trend of increased disaggregation and softwarisation in the RAN. \begin{comment} This trend has already passed four uniquely identifiable stages. First, there was Distributed RAN (D-RAN), where the Remote Radio Heads (RRHs) and Baseband Processing Unit (BBU) are located together at a specific cell site and connected to the core via a backhaul network. Second, there was C-RAN \cite{Wu_Zhang_Hong_Wen_2015}, where the BBUs are collocated at a shared pool. Third, there was vRAN, which refers to a system where all baseband processing is run as software in the cloud instead of dedicated hardware devices. In classical vRAN the chosen functional split was 8. In this case, all of the complexity is located on the BBU. This leads to a cheap, easily deployable RRH. Unfortunately, the fronthaul requirements and complexity at the BBU are high when cooperation between multiple RRHs is required \cite{Rodriguez_Guillemin_Ferrieux_Thomas_2020}. \end{comment} This trend has already passed three uniquely identifiable stages: 1) Distributed RAN (D-RAN) has the BBU and the RRH both at the cell site; 2) Cloud RAN (C-RAN) \cite{Wu_Zhang_Hong_Wen_2015} collocates the BBUs at a shared pool; 3) Virtual RAN (vRAN) \cite{Rodriguez_Guillemin_Ferrieux_Thomas_2020} decouples the software from the hardware by implementing parts of the RAN as standalone software components. 3GPP has determined a range of physical layer splits (1--8) \cite{Larsen_Checko_Christiansen_2019}: as the split number increases, more functions move to the BBU \cite{Checko_Christiansen_Yan_Scolari_Kardaras_Berger_Dittmann_2015}, and the RRH becomes correspondingly simpler. O-RAN, which uses split 7.2x, is just one example of these possible functional splits per 3GPP's specifications. The disaggregation of the RAN is also an essential aspect of implementing cell-free networks.
The theoretical CF literature emphasises this strongly. In most seminal CF papers \cite{Nayebi_Ashikhmin_Marzetta_Rao_2016, Ngo_Ashikhmin_Yang_Larsson_Marzetta_2017}, the APs are quite cheap and mostly responsible for the channel estimation and, in the case of Large-Scale Fading Decoding (LSFD), for computing a local signal estimate \cite{Interdonato_Bjornson_Quoc_Ngo_Frenger_Larsson_2019}. Numerous works in theoretical CF have discussed how the functional split should be made between an AP and the CPU \cite{Bjornson_Sanguinetti_2019}. We see great value in mapping the current state-of-the-art CF mMIMO research onto the O-RAN architecture. The CPU can be disaggregated into the Near-RT RIC and a set of cooperating O-DUs. Cooperation between distributed APs has previously been proposed in many scenarios under different terms such as CoMP-JT, D-MIMO, CF, and so on. At this point in the development of the O-RAN architecture, it is vital to identify and select the most significant contributions from these many different topics. A first physical layer-focused mapping between CF mMIMO and O-RAN was proposed in \cite{Ranjbar_Girycki_Rahman_Pollin_Moonen_Vinogradov_2022}, where the joint computation of a precoder is studied across both O-RAN Radio Units (O-RUs) and O-DUs. They conclude that enabling cooperation not only between O-RUs but also between as many O-DUs as possible is the best approach in terms of spectral efficiency. Unfortunately, a high degree of cooperation requires significant computing power and signalling capacity between the O-DUs due to the large amounts of channel state information (CSI) needed to calculate the precoder. In this paper, we extend this mapping by also considering user mobility: as a UE moves, its allocation of O-DUs and O-RUs should change, resulting in a full or partial handover.
\begin{comment} To limit fronthaul signalling, many solutions have been proposed in the state-of-the-art, some target the quantization of the signals that are transmitted over this interface \cite{Khorsandmanesh_Bjornson_Jalden_2022, Burr_Bashar_Maryopi_2018}, others tackle a dimensionality reduction in the fronthaul interface via e.g. principal component analysis \cite{Kanno_Ito_Ohseki_Yamazaki_Kishi_Choi_Chen_Molisch_2022} or Large Scale Fading Decoding \cite{Nayebi_Ashikhmin_Marzetta_Rao_2016}, where we transport only a so-called effective gain over the fronthaul. The fronthaul interface in O-RAN and its relation to beamforming is studied extensively in \cite{Mohsin_Batalla_Pallis_Mastorakis_Markakis_Mavromoustakis_2021}. They conclude that a CSI-based method is best for adhering to latency requirements. By this method, the O-DU transmits user-specific CSI to its It has been show the solution for the DL problem can be used for the UL and vice versa \cite{Liu_Patil_Yu_2016}. \end{comment} \begin{comment} \robbert{Load Balancing on O-DUs}\\ In the end, the CPU from those theoretical CF mMIMO works isn't a physical unit per se, it is a set of algorithms that can be ran almost anywhere. Thus, the algorithms for a specific UE could be allocated to a certain O-DU depending on a predefined policy. Two extreme cases are to distribute the processing maximally across the different O-DUs, this leads to cheaper O-DUs (as they need much less processing power), on the other side, we can combine as much processing as possible. The second approach is presented in \cite{Sigwele_Alam_Pillai_Hu_2017}, they claim that it leads to energy efficiency as there will be many O-DUs that are not necessary in decoding signals for any of the UEs, especially if the total load on the system is not very high. These unused O-DUs could be partially turned off to a sleep-like state in order to save large amounts of energy. 
The main downside to the O-DUs should be overdimensioned to accomodate this consolidation of computations. An intermediate solution is to impose an upper bound on the amount of processing that can be collocated in a single O-DU \cite{Ha_Le_2016}. They solve the power minimization problem based constrained via cloud computational capacity and user QoS. \end{comment} \subsection{Contributions} In a practical CF mMIMO network, it is realistic to assume coherent combining only at the AP \cite{Ozdogan_Bjornson_Zhang_2019, Zhang_Zhang_Hu_Yang_Zhu_2021,Ganesan_Sarvendranath_Larsson_2021}. This greatly reduces implementation costs and makes it possible to achieve CF mMIMO with minor changes to the O-RAN architecture, as detailed in this paper. This motivates us to use a local decoder at every O-RU and then combine based on large-scale fading statistics; hence, we choose the method from \cite{Demir_Bjornson_Sanguinetti_2021}. In \cite{Ranjbar_Girycki_Rahman_Pollin_Moonen_Vinogradov_2022}, it was also shown that inter-DU cooperation leads to great benefits. Our main contributions are as follows: \begin{itemize} \item We propose how to map an Uplink (UL) detection method \cite{Demir_Bjornson_Sanguinetti_2021} to the O-RAN framework and combine this with the new concepts of measurement and serving clusters. For this mapping, we further detail the information exchange across the relevant existing interfaces between O-RU, O-DU and Near-RT RIC and build upon the work from \cite{selfCite}. \item We propose a way to evolve the channel temporally in CF mMIMO. This method evolves only the large-scale fading of the channel, which allows studying the impact of large-scale fading on cluster update algorithms operating over large time intervals. \item We propose two clustering and handover strategies and quantify their performance via our selected UL detection method.
We detail how initial clusters could be determined and how the serving cluster can be updated based on a trigger by the user or opportunistically based on information locally available for each O-RU. Subsequently, we map these to the O-RAN framework and discuss the signalling flow over the existing E2 interface. \end{itemize} \subsection{Outline} First, we define our clustering strategies in Section~\ref{sec:cluster}. In Section~\ref{section:system_model}, we give an overview of our CF mMIMO system model and extend it to allow for temporal evolution. Subsequently, in Section~\ref{sec:handover}, we provide handover strategies for the clustering strategies in Section~\ref{sec:cluster}. In Section~\ref{section:oran}, we map all of the above to the O-RAN architecture and provide practical guidelines on how they could be implemented with minimal changes to O-RAN. Finally, in Section~\ref{sec:results}, we give numerical results for the proposed solutions. \subsection{Notation} A diagonal matrix with the elements $x_i$ on the main diagonal is denoted by $\text{diag}(x_0, x_1, \cdots, x_N)$. The cardinality of a set $\mathcal{S}$ is written as $|\mathcal{S}|$. For matrices and vectors alike, $\mathbf{A}^T$ is the transpose, $\mathbf{A}^H$ is the Hermitian conjugate, and $\mathbf{A}^{\ast}$ is the complex conjugate. The $l$-th element of vector $\mathbf{a}$ is denoted by $[\mathbf{a}]_l$. The expected value of $x$ is denoted by $\E{x}$. We also list our commonly used symbols in Table~\ref{table:symbols}. \input{symbols.tex} \subsection{Proposed Long-term Evolution Mobility Model} \label{section:mobility} Next, we explain how the channel model can be evolved temporally. The UEs move in a straight line from their starting position at a random angle $\theta_k \sim \mathcal{U}[0, 2\pi]$. The moving direction, $\theta_k$, and the speed of the UE, $v_k$, are fixed for each simulation run. The sample time of the simulation is $T_{S}$.
UE $k$'s position at time $t$ is then \begin{equation} \begin{aligned} x_k[t] &= x_k[t-1] + v_k T_{S} \cos{(\theta_k)}, \\ y_k[t] &= y_k[t-1] + v_k T_{S} \sin{(\theta_k)}. \end{aligned} \end{equation} If the deployment of O-RUs is dense, a minor movement of a UE could induce a significant change in the Angle of Arrival (AoA) for the received signal at an O-RU and thus change the covariance matrix of the channel significantly (see Eq.~\ref{eq:covariance}). Therefore, we do not expect the channel itself to be strongly correlated over multiple time samples, but we expect the large-scale fading to be correlated. Hence, we model shadow fading as an autoregressive function. From \cite{Zhenyu_Wang_Tameh_Nix_2008}, it is known that the shadow fading at two points with distance $d$ is correlated as $e^{-\alpha d}$, where $\alpha$ is the reciprocal of the decorrelation distance. We take $\alpha$ to be $\frac{1}{20\,\text{m}}$ in this example. This is the recommended value for a typical European city \cite{Zhenyu_Wang_Tameh_Nix_2008}. \cite{He_Zhong_Ai_Oestges_2015} indicates that the decorrelation distance becomes larger for more rural areas. We consider a relatively dense deployment of O-RUs, which is typical for an urban area, and thus our value for $\alpha$ is valid. We assume that the shadow fading follows the same distribution during our simulation. A UE moves a distance of $v_k T_{S}$ between two subsequent samples. Thus, we can model the shadow fading $F_{l,k}[t]$ at any of the subsequent time intervals as \begin{equation} F_{l,k}[t] = \rho_k F_{l,k}[t-1] + \sqrt{1 - \rho_k^2} F_{l,k}^{\text{new}}, \end{equation} where $\rho_k = e^{-\alpha v_k T_{S}}$ is the correlation coefficient between two subsequent realizations and $F_{l,k}^{\text{new}}$ is a new realization drawn from the distribution $\mathcal{N}(0, \sigma_{\text{sf}}^2)$. Our model yields correlated shadow fading between different samples of a specific UE but not between different UEs.
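As a minimal sketch, the position update and the autoregressive shadow-fading step above can be simulated as follows. The function names are ours, and the shadow-fading standard deviation `SIGMA_SF` is an illustrative value, as it is not fixed in this section:

```python
# Sketch of the mobility and AR(1) shadow-fading model.
# ALPHA is the reciprocal of the 20 m decorrelation distance from the text;
# SIGMA_SF (dB) is an illustrative value, not taken from the paper.
import math
import random

ALPHA = 1.0 / 20.0   # [1/m]
SIGMA_SF = 4.0       # shadow-fading std dev in dB (assumed)

def move(x, y, v_k, t_s, theta_k):
    """Straight-line position update over one sample interval t_s."""
    return (x + v_k * t_s * math.cos(theta_k),
            y + v_k * t_s * math.sin(theta_k))

def evolve_shadow_fading(f_prev, v_k, t_s, rng=random):
    """One AR(1) step: F[t] = rho*F[t-1] + sqrt(1-rho^2)*F_new."""
    rho = math.exp(-ALPHA * v_k * t_s)   # correlation over distance v_k*t_s
    f_new = rng.gauss(0.0, SIGMA_SF)     # fresh realization ~ N(0, sigma_sf^2)
    return rho * f_prev + math.sqrt(1.0 - rho ** 2) * f_new
```

Because the innovation is scaled by $\sqrt{1-\rho_k^2}$, the marginal distribution of the shadow fading stays $\mathcal{N}(0, \sigma_{\text{sf}}^2)$ at every time step, consistent with the assumption in the text.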
This leads to a path loss model of the form \begin{equation} \beta^{\text{dB}}_{l,k}[t] = a + b\log_{10}(d_{l,k}[t]) + F_{l,k}[t], \end{equation} where $a$ is the path loss at the reference distance, $b$ is the distance-dependent path loss coefficient, and $d_{l,k}[t]$ is the distance between UE $k$ and O-RU $l$. The closest existing model is the one in \cite{DAndrea_Interdonato_Buzzi_2021}, where the individual NLoS components are updated. In this work, we update the shadow fading coefficient as a whole and evolve it using an empirically validated model.

Shadow fading plays an important role in the proposed solution because the measurement cluster is constructed by taking the closest O-RUs to the primary one. Due to shadow fading, the O-RUs that are closer to the UE might not be optimal, as they could be (partially) blocked; O-RUs that are further away might provide a higher channel gain. Generally, the probability of further O-RUs being better than closer ones increases with increasing shadow fading variance. Although we have chosen a fixed measurement cluster size, it might be valuable to make this size dependent on the shadow fading variance. Because we consider a dense deployment of O-RUs, a relatively small movement of the UE might induce a large change in the angle of arrival for a particular UE--O-RU association. Hence, the channel covariance matrices should be regenerated at every new time instance. To calculate the second-order statistics of the channel, we use the one-ring scattering model \cite{massivemimobook} to calculate the elements of the covariance matrix, \begin{equation} \label{eq:covariance} [\mathbf{R}_{l,k}]_{m, n}[t] = \beta_{l, k}[t] \int e^{2 \pi j d_H (n - m) \sin(\bar{\phi})}f(\bar{\phi},t)d\bar{\phi}, \end{equation} where $f(\bar{\phi},t)$ is the time-dependent distribution of the possible angles of arrival.
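Eq.~(\ref{eq:covariance}) can be approximated numerically. The sketch below assumes a uniform angular distribution around the nominal angle of arrival; the function name, the grid-based integration, and the parameter values are illustrative assumptions:

```python
# Numerical sketch of the one-ring covariance matrix.
# beta: large-scale gain; d_h: antenna spacing in wavelengths;
# phi: nominal angle of arrival; xi: angular spread half-width.
# The expectation over delta ~ U[-xi, xi] is approximated on a midpoint grid.
import cmath
import math

def one_ring_covariance(beta, n_ant, d_h, phi, xi, n_samples=2000):
    """[R]_{m,n} = beta * E[ exp(2*pi*j*d_h*(n-m)*sin(phi + delta)) ]."""
    R = [[0j] * n_ant for _ in range(n_ant)]
    for m in range(n_ant):
        for n in range(n_ant):
            acc = 0j
            for s in range(n_samples):
                # Midpoint rule over [-xi, xi]; weight 1/n_samples plays
                # the role of the uniform density f(phi_bar) * d(phi_bar).
                delta = -xi + (2.0 * xi) * (s + 0.5) / n_samples
                acc += cmath.exp(2j * math.pi * d_h * (n - m)
                                 * math.sin(phi + delta))
            R[m][n] = beta * acc / n_samples
    return R
```

By construction the diagonal entries equal $\beta_{l,k}$ and the matrix is Hermitian, as required for a covariance matrix.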
This distribution can be either Gaussian, Laplacian or uniform around the true angle of arrival of the signal. For this work, we consider a uniform distribution. In this case \begin{equation} \bar{\phi} = \phi + \delta, \qquad \delta \sim \mathcal{U}[-\xi, \xi], \end{equation} where $\phi$ is the true angle of arrival and $\xi$ models the richness of the scattering environment. \section{Integration into O-RAN} \label{section:oran} The recent adoption of the O-RAN framework is a strong enabler for CF mMIMO networks. The disaggregation of the network and the introduction of the Near-RT RIC prove to be extremely useful for such deployments. This section expands on the discussion in our previous work \cite{selfCite}. We provide an overview of how our clustering (Section~\ref{sec:cluster}) and handover (Section~\ref{sec:handover}) strategies could be implemented with minimal changes to the O-RAN architecture. We focus on how the E2 interface and the Near-RT RIC can be used. \begin{comment} \begin{figure} \centering \includegraphics{figures/figure_final.pdf} \caption{Integration of CF mMIMO into the O-RAN architecture} \end{figure} \end{comment} \subsection{E2 - Service Models} The first important aspect is the E2 interface. It allows the Near-RT RIC to control its E2 nodes; we mainly consider the O-DUs, as there is no E2 interface to the O-RUs. This control is implemented via so-called E2 Service Models (E2SMs) \cite{ORAN-E2}. Currently, four E2SMs exist: \begin{itemize} \item Network Interface (E2SM-NI): monitors and modifies messages sent on 3GPP interfaces (S1, X2, NG, Xn, F1, E1). \item Key Performance Measurement (E2SM-KPM): monitors RAN and UE performance. \item RAN Control (E2SM-RC): monitors and modifies RAN parameters per UE. \item Cell Configuration and Control (E2SM-CCC): monitors and modifies RAN parameters per E2-node or cell.
\end{itemize} The following paragraphs indicate how we use the E2SM-KPM and E2SM-RC to enable our clustering and handover strategies. \subsection{Fixed Clustering and Handover} We first detail the usage of the E2 interface for the fixed clustering and handover strategy. \begin{enumerate} \item {\em Initial Access}; The UE connects to the network via its primary O-DU. This O-DU then requests a measurement cluster from the Near-RT RIC. \item {\em Measurement Cluster Allocation}; The Near-RT RIC allocates the nearest O-RUs to the primary O-RU for that UE as a measurement cluster and signals this to the participating O-DUs. This notification can be sent via the E2SM-KPM. \item {\em Serving Cluster Allocation}; The participating O-DUs report the received power for the UE via the E2SM-KPM. This service model enables the Near-RT RIC to subscribe to UE-specific measurements. An xApp in the Near-RT RIC can then exploit these measurements to select the fixed cluster. The Near-RT RIC then signals the ID of the UE and the ID of the primary O-DU, via the E2SM-KPM, to every O-DU that has any O-RUs in the serving cluster for that UE. This way, the secondary O-DUs know where to forward the partially decoded signals. \item {\em Handover}; If the UE measures that its received power has deteriorated too much, it hands over to a new primary O-RU. The new primary O-DU notifies the Near-RT RIC, which then allocates a new measurement cluster and, subsequently, a new serving cluster. \end{enumerate} The E2SM-KPM already supports performance monitoring per UE. Hence, the way we intend to use it would require only minimal changes. The E2SM-RC, however, currently does not support the reporting of serving decisions to specific O-DUs. In particular, instantiating inter-O-DU interfaces would require significant changes to the current specification.
\subsection{Opportunistic Clusters} The signalling flow for the opportunistic approach is the same up until the point where the participating O-DUs are notified that they are in the measurement cluster. \begin{enumerate} \setcounter{enumi}{1} \item {\em Measurement Cluster Allocation}; The measurement cluster for opportunistic clusters should be assigned via the E2SM-RC, as the O-DU is given (partial) autonomy on the decision of the serving cluster. \item {\em Cluster Formation}; The participating O-DUs can decide locally on opportunistic serving. They only notify the Near-RT RIC, via the E2SM-KPM, if they decide to serve a certain UE on an O-RU. This way, the Near-RT RIC can notify O-DUs, via the E2SM-RC, whether they should instantiate or tear down inter-O-DU interfaces and to which O-DU they should forward which signals in case there is a handover of the primary O-RU. \item {\em Primary O-RU Handover}; A UE measures a significantly better channel to an O-RU that is not its primary. It then signals its primary O-DU that it will hand over to a different O-RU. The primary O-DU uses the E2SM-KPM to notify the Near-RT RIC. The Near-RT RIC then notifies every O-DU that participates in the serving for that UE that it will get a different primary O-RU and O-DU. This notification can be sent over the E2SM-RC. Thus, the secondary O-DUs know where to forward the partially decoded signal. \item {\em Opportunistic O-RU Reload}; The O-DU measures a significantly stronger UE than the weakest of the UEs it is currently serving on one of its O-RUs. It then triggers a new measurement on that O-RU to determine the strongest UEs. The O-DU signals to the Near-RT RIC, via the E2SM-KPM, which UEs it serves on that O-RU. \end{enumerate} This method of operation requires very little input from the Near-RT RIC, which serves mainly as a bookkeeper keeping track of which UEs are served by which O-RUs.
The E2SM-RC should be extended to notify an O-DU that it should involve a certain UE in its cluster formation. The E2SM-KPM should be extended to allow an O-DU to notify the Near-RT RIC that it is now serving a UE on certain O-RUs, one of them possibly being the primary one. The method would thus require minor changes to the E2SM-KPM and E2SM-RC. We argue that the potential benefits from the inter-O-DU cooperation outweigh the limited changes to the interface. \subsection{Numerical Results} In this section, we quantify the performance of the different clustering methods. For the figures that relate to the clustering, we provide the canonical, ubiquitous cell-free case as an upper bound on the performance of our clustering/handover algorithms, since all O-RUs cooperate in this scenario. Additionally, we provide the cellular case as a lower bound on the performance; for this case, we use a handover threshold of 2 dB. For the figures that are specific to the clustering methods, we use a decorrelation distance of 20 m. \subsection{Decorrelation Distance} \input{plots/systemmodelplots.tex} Figure~\ref{fig:resdecorr} shows the effect of the chosen decorrelation distance in the proposed temporal channel model. When the decorrelation distance increases, handover occurs significantly less frequently. This is because the shadow fading remains similar over longer distances. Note that doubling the speed and halving the decorrelation distance have almost the same effect. There is no effect on the SE because we estimate the SE at the same times the cluster is checked. \subsection{Fixed Clustering and Handover} \input{plots/fixedClusterSEs.tex} Figure~\ref{fig:results1} shows the effect of the handover threshold on the distribution of the spectral efficiency for the fixed clustering and handover strategy. A lower handover threshold increases the handover frequency; thus, on average, the serving O-RUs are of higher quality.
However, this induces a considerable signalling cost because of the increased number of handovers. For the fixed clustering, the Near-RT RIC calculates the cluster. Hence, we must be careful not to overload this component with too many cluster calculations. Figure~\ref{fig:results1} also displays the same CDF for a higher speed of the UE. We notice that the threshold becomes more critical when the speed is high. Interestingly, for the high-speed case, the performance is worse than cellular if the threshold is too high (4 dB). Also note that if we use ubiquitous CF mMIMO, the CDFs for low and high speed are almost the same. This is further supported by the findings in Figure~\ref{fig:comparison}. \subsection{Opportunistic Clustering} \input{plots/opportunisticClusterSEs.tex} In this section, we provide numerical results for the opportunistic clustering approach. First, we look at the CDF of the SE for different handover thresholds for UEs travelling at both 30 km/h and 120 km/h. Figure~\ref{fig:results2} shows that for a low speed, the threshold does not significantly impact the performance. For high speed, however, the threshold substantially impacts the SE. Even for high handover thresholds, the opportunistic clustering strategy performs better than the cellular case. As the opportunistic tracking is decided locally, without involvement of the Near-RT RIC, this method finds a good balance between performance and cost. \subsection{Handover Frequency} \input{plots/handovers.tex} Figure~\ref{fig:results3} quantifies the handover frequency for different speeds and different threshold values. Fixed clustering, which is expensive in terms of E2 signalling, has a lower handover frequency than opportunistic clustering, even for a low threshold. This is because, in the fixed case, the cluster-wide channel gain is tracked, a metric that is less sensitive to the UE's mobility than the channel gain per O-RU.
For both methods, the number of handovers increases with speed and decreases with a rising threshold. This highlights a vital trade-off: a high frequency of handovers improves the SE significantly, especially for high speeds. For the fixed clusters, this becomes expensive as the E2 interface and the Near-RT RIC become more loaded. For opportunistic clustering, handover is relatively cheap as it does not require much signalling to the Near-RT RIC. \subsection{Comparison of Clustering Methods} \input{plots/comparison.tex} Figure~\ref{fig:comparison} shows the impact of the threshold on the SE as a function of speed for both the fixed and the opportunistic clustering strategies. For the fixed clustering, the mean SE drops with increasing UE speed. This effect is significant if the HO threshold is chosen too large; in that case, performance even degrades beyond that of the cellular case. The performance of the opportunistic clustering is less sensitive to the threshold value than that of the fixed clustering. This is because, in the fixed clustering, the UE only triggers the handover if the total received power decreases too much, whereas the opportunistic clustering allows a new O-RU to easily start serving a UE when that O-RU detects it as sufficiently strong. Hence, the opportunistic strategy provides more fine-grained control. This does, however, come at the price of not having a deterministic number of O-RUs that serve a particular UE, and as such it is impossible for our algorithm to guarantee any quality of service beyond that provided by the primary O-RU. Furthermore, the ubiquitous case is barely affected by the increasing speed of the UE.
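The two trigger rules compared above can be summarized as predicates over dB-domain gains. This is a simplified sketch: the function names and exact threshold semantics are assumptions for illustration, not the normative trigger definitions.

```python
def fixed_trigger(cluster_gain_db, gain_at_formation_db, threshold_db):
    """Fixed clustering: the Near-RT RIC recomputes the whole cluster only
    when the cluster-wide gain has dropped by more than the threshold since
    the cluster was formed (assumed semantics)."""
    return cluster_gain_db < gain_at_formation_db - threshold_db

def opportunistic_trigger(oru_gain_db, primary_gain_db, threshold_db):
    """Opportunistic clustering: an O-RU locally starts (or stops) serving a
    UE when its own gain is within the threshold of the primary O-RU's gain
    (assumed semantics)."""
    return oru_gain_db > primary_gain_db - threshold_db
```

The fixed rule reacts to an aggregate quantity and therefore fires rarely, while the opportunistic rule is evaluated per O-RU and reacts to local gain changes, matching the handover-frequency behaviour discussed above.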
\section{Simulation and Numerical Results} \label{sec:results} \begin{table}[htb] \centering \begin{tabularx}{1\textwidth}{X|c||X|c||X|c} \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value}\\ \hline K & 40 & L & 36 & C & 9 \\ \hline N & 4 & $\sigma^2_{\text{ul}}$ & -94 dBm & Grid Size & $1\times1$ km \\ \hline Number of Setups & 25 & $\tau_p$ & 100 & $T_s$ & 0.5 s \\ \hline Simulation Time & 10 s & $\xi$ & 10° & $\mathcal{M}^{\text{s}}_k$ & 16 \\ \end{tabularx} \caption{Simulation Parameters } \label{table:sim} \end{table}% We generate a scenario with $K$ UEs, $C$ O-DUs, $L$ O-RUs and a single Near-RT RIC. The UEs are located in a square grid. This grid is divided into uniformly sized squares, one per O-DU, and the O-RUs of a specific O-DU are placed uniformly at random in the subsquare of that O-DU. We duplicate the initial setup eight times to create a wrap-around scenario, so we have a configuration with $3\times 3$ tiles and avoid performance differences between UEs in the centre and at the edges of the centre tile. \section{System Model} \label{section:system_model} This section describes how to tackle UL detection, channel estimation, and combiner calculation during a specific coherence block. We use a method from the current state of the art and map it to the O-RAN architecture. Most of the material in Subsections \ref{subsection:system}, \ref{subsection:chest}, and \ref{subsection:combining} is adapted from \cite{SIG-109}. In Section~\ref{section:mobility}, we extend our system to allow for mobility of the UEs and describe how to temporally evolve our channel model. Section~\ref{sec:handover} then discusses how the clusters should be reformed based on the mobility of the UEs. \subsection{Channel Model} \label{subsection:system} In this work, we consider UL detection.
We assume each O-RU to have $N$ antennas; the channel vector between a single-antenna UE $k$ and an O-RU $l$ is thus defined as $\mathbf{h}_{l,k} \in \mathbb{C} ^{N}$. We assume $\mathbf{h}_{l,k}$ to follow a complex normal distribution, \begin{equation} \mathbf{h}_{l,k} \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{l,k}), \end{equation} where $\mathbf{R}_{l,k} \in \mathbb{C}^{N \times N}$ is the covariance matrix of the channel. This model implies that we assume the channels between one UE and different O-RUs to be mutually uncorrelated, which is a valid assumption if the O-RUs are placed many wavelengths apart. We assume these $\mathbf{R}_{l,k}$ to be known at their respective O-RU $l$ and the respective O-DU; some methods exist to estimate them from a relatively limited number of samples \cite{Ranjbar_Moonen_Pollin_2021}. We will later link these $\mathbf{R}_{l,k}$ to our mobility model in Section~\ref{section:mobility}. We simulate a system with $K$ UEs, $L$ O-RUs, $C$ O-DUs and a single Near-RT RIC. The O-DUs are placed in a uniform grid; each square in this grid contains $L/C$ O-RUs, which are placed uniformly at random. \subsection{Channel Estimation} \label{subsection:chest} To estimate the channel, the UEs transmit mutually orthogonal pilot sequences $\boldsymbol{\phi}_i \in \mathbb{C}^{\tau_p}$. The O-DU can exploit this orthogonality to separate the UEs while estimating the channel. \begin{comment} \begin{equation} \label{eq:ortho} \boldsymbol{\phi}_i^H\boldsymbol{\phi}_j = \begin{cases} \tau_p & \text{if } j = i \\ 0 & \text{if } j \neq i \end{cases}. \end{equation} \end{comment} In the UL, UE $k$ transmits pilot $\boldsymbol{\phi}_{t_k}$ in the designated time slot, where $t_k$ is the index of the pilot allocated to UE $k$.
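A channel realization with the covariance assumed in the model above, $\mathbf{h}_{l,k} \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}_{l,k})$, can be drawn via a Cholesky factor of $\mathbf{R}_{l,k}$. The exponential antenna correlation used for the covariance here is an illustrative assumption, not the covariance model of our simulations.

```python
import numpy as np

def draw_channel(R, rng=None):
    """Draw h ~ CN(0, R) for an N-antenna O-RU via the Cholesky factor of R."""
    rng = np.random.default_rng(rng)
    N = R.shape[0]
    # circularly-symmetric complex Gaussian with identity covariance
    w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    L = np.linalg.cholesky(R)
    return L @ w

def exp_corr(N, r=0.5):
    """Illustrative exponential correlation across the N antennas (assumed)."""
    idx = np.arange(N)
    return r ** np.abs(idx[:, None] - idx[None, :]) + 0j
```

Since $\mathbf{L}\mathbf{L}^H = \mathbf{R}$ and $\mathbb{E}[\mathbf{w}\mathbf{w}^H] = \mathbf{I}$, the drawn vectors have the requested covariance.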
O-RU $l$ receives the superposition of all of the UEs' pilots as $\mathbf{Y}_{l} \in \mathbb{C}^{N \times \tau_p}$: \begin{equation} \label{eq:decorr} \mathbf{Y}_{l} = \sum_{k=1}^{K} \sqrt{p_k} \mathbf{h}_{l,k} \boldsymbol{\phi}_{t_k}^T + \mathbf{N}_{l} . \end{equation} $\mathbf{N}_l$ is a matrix with the measured noise realizations $\mathbf{n}_{t,l}$ in its columns. Each of its elements is i.i.d. white noise drawn from the distribution $\mathcal{CN}(0, \sigma^2_{\text{ul}})$. The transmit power of UE $k$ is $p_k$. The O-DU can estimate the channels per O-RU separately, as the O-RUs are assumed to be many wavelengths apart and thus the channels between a specific UE and multiple O-RUs are mutually uncorrelated. The O-DU decorrelates the received pilots for UE $k$ in O-RU $l$ as $\mathbf{y}_{l,k}^{(\text{p})} = \mathbf{Y}_{l}\frac{\boldsymbol{\phi}_{t_k}^{\ast}}{\sqrt{\tau_p}} $. \begin{comment} \begin{equation} \mathbf{y}_{l,k}^{(\text{p})} = \sum_{i=1}^{K} \sqrt{\frac{p_i}{\tau_p}} \mathbf{h}_{l,i} \boldsymbol{\phi}_{t_i}^T \boldsymbol{\phi}_{t_k}^{\ast} + \frac{1}{\sqrt{\tau_p}}\mathbf{N}_{l} \boldsymbol{\phi}_{t_k}^{\ast}. \end{equation} \end{comment} Afterwards, only interference from UEs that share the same pilot remains, which is known as pilot contamination \cite{Jose_Ashikhmin_Marzetta_Vishwanath_2011}. UEs that share their pilot with UE $k$ are denoted via the set $\mathcal{P}_k$. The decorrelated pilot simplifies to \begin{equation} \mathbf{y}_{l,k}^{(\text{p})} = \sum_{i \in \mathcal{P}_k} \sqrt{p_i \tau_p} \mathbf{h}_{l,i} + \frac{1}{\sqrt{\tau_p}}\mathbf{N}_{l} \boldsymbol{\phi}_{t_k}^{\ast}. \end{equation} The O-DU then calculates the MMSE channel estimate between UE $k$ and O-RU $l$, $\hat{\mathbf{h}}_{l,k}$, as \begin{equation} \hat{\mathbf{h}}_{l,k} = \sqrt{\tau_p p_k} \mathbf{R}_{l,k} \left( \sum_{i \in \mathcal{P}_k} \tau_p p_i \mathbf{R}_{l,i} + \sigma_{\text{ul}}^2 \mathbf{I}_N \right)^{-1} \mathbf{y}_{l,k}^{(\text{p})}.
\end{equation} We denote the error of the channel estimate as \begin{equation} \tilde{\mathbf{h}}_{l,k} = \hat{\mathbf{h}}_{l,k} - \mathbf{h}_{l,k} . \end{equation} The covariance of this channel estimation error is $\mathbf{C}_{l,k}$. \subsection{Receive Combining} \label{subsection:combining} Nearly-Optimal Large-Scale Fading Decoding (n-opt LSFD) fits nicely into the O-RAN architecture. We employ this decoding scheme by having the primary O-DU calculate the n-opt LSFD weights \cite{Nayebi_Ashikhmin_Marzetta_Rao_2016} based on the effective gains in all of the O-RUs of the serving clusters. UL detection has three stages in our system. First, each O-RU $l$ estimates the signal $\hat{s}_{l,k}$ of its served UEs, $k \in \mathcal{D}_l$, locally and sends these local estimates to its respective O-DU. Second, each O-DU $c$ computes a more accurate estimate of the UE's signal, $\hat{s}_{k}^{c}$, by combining the local estimates of its own O-RUs with the LSFD weights. Third, the secondary O-DUs forward their estimates $\hat{s}_{k}^{c}$ to the respective primary O-DU, which combines them into the final estimate of the UE's signal, $\hat{s}_k$. \begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{figures/Signal_Processing.pdf} \caption{Overview of the O-DU/O-RU and the inter-O-DU signalling required for the cooperative detection. We show both the signalling between O-RU and O-DU and the inter-O-DU signalling. At the left (primary) O-DU, both O-RUs are used in decoding; at the right (secondary) O-DU, only the top O-RU is used. } \label{fig:signalling_interdu} \end{figure}% This combination at the secondary and primary O-DUs is organized such that we reach the n-opt LSFD solution in the primary O-DU for the final decoding step. This mapping of n-opt LSFD to the O-RAN architecture and the division of computations between what we call primary and secondary O-DUs was proposed in \cite{selfCite}.
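The pilot decorrelation and MMSE estimation steps above can be condensed into a short sketch. The argument layout (lists over the pilot-sharing set $\mathcal{P}_k$) is an illustrative assumption.

```python
import numpy as np

def mmse_estimate(y_p, R_list, p_list, tau_p, sigma2, k):
    """MMSE estimate of h_{l,k} from the decorrelated pilot y_p.

    R_list / p_list hold the covariances and powers of all UEs in the
    pilot-sharing set P_k (UE k included); k indexes UE k within them.
    """
    N = y_p.shape[0]
    # Psi = sum_i tau_p * p_i * R_{l,i} + sigma^2 I (covariance of y_p)
    Psi = sigma2 * np.eye(N, dtype=complex)
    for R, p in zip(R_list, p_list):
        Psi += tau_p * p * R
    return np.sqrt(tau_p * p_list[k]) * R_list[k] @ np.linalg.solve(Psi, y_p)
```

Without pilot contamination ($\mathcal{P}_k = \{k\}$), the estimate reduces to a regularized scaling of the decorrelated pilot, as expected.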
The interface between two cooperating O-DUs was explored in \cite{selfCite, Ranjbar_Girycki_Rahman_Pollin_Moonen_Vinogradov_2022}. The received signal at each O-RU is \begin{equation} \mathbf{y}_l = \sum_{k=1}^{K} \mathbf{h}_{l,k} s_k + \mathbf{n}_l . \end{equation} In our analysis, we use the Local Partial MMSE (LP-MMSE) combiner \cite{Bjornson_Sanguinetti_2020} as it is computed separately for each O-RU. The fronthaul specification in O-RAN \cite{ORAN-FH} supports both the transfer of the receive combining vector from the O-DU to the O-RU over the fronthaul and the transfer of CSI to the O-RU such that the O-RU can calculate the combining vector itself. Here, we assume the first option. The O-DU computes an LP-MMSE receive combiner $ \mathbf{v}_{l, k}$ for each of its O-RU--UE associations, \begin{equation} \label{eq:lpmmse} \mathbf{v}_{l, k} = p_k \left(\sum_{i \in \mathcal{D}_l} p_i \left(\hat{\mathbf{h}}_{l,i}\hat{\mathbf{h}}_{l,i}^{H} + \mathbf{C}_{l,i}\right) + \sigma_{\text{ul}}^2\mathbf{I}_N \right)^{-1} \hat{\mathbf{h}}_{l,k} . \end{equation} This is where the signal processing is linked to the serving clusters $\mathcal{M}_k^{\text{s}}$ from Section~\ref{sec:cluster}: the O-DU computes this combiner only if $k \in \mathcal{D}_l$. The O-RU then locally combines the received signal via $\mathbf{v}_{l, k}$ for all the UEs that it serves as \begin{equation} \hat{s}_{l,k} = \mathbf{v}_{l,k}^H \mathbf{y}_{l}, \quad \forall l \in \mathcal{M}^{\text{s}}_k . \end{equation} We compute cluster-wide LSFD combining weights for the local O-RU estimates. $\mathbf{g}_{ki}$ is the effective gain of UE $i$ in the receive combiner for UE $k$, and $\delta_{l,k}$ indicates whether UE $k$ is served by O-RU $l$, \begin{equation} \mathbf{g}_{ki} = \begin{bmatrix} \delta_{1,k}\mathbf{v}_{1, k}^H\mathbf{h}_{1,i} & \delta_{2,k}\mathbf{v}_{2, k}^H\mathbf{h}_{2,i} & \dots & \delta_{L,k} \mathbf{v}_{L, k}^H\mathbf{h}_{L,i} \end{bmatrix}^T .
\end{equation} We denote the set of UEs that share at least one O-RU with UE $k$ as $\mathcal{S}_k$. In n-opt LSFD, it is assumed that only the UEs in $\mathcal{S}_k$ induce significant interference for UE $k$, and thus we only cancel interference from those UEs. The primary O-DU can calculate the n-opt LSFD weights $\mathbf{a}_k \in \mathbb{C}^{L}$ as \cite{Demir_Bjornson_Sanguinetti_2021} \begin{equation} \label{eq:lsfd} \mathbf{a}_{k} = p_k \left( \sum_{i \in \mathcal{S}_k} p_i \E{\mathbf{g}_{ki}\mathbf{g}_{ki}^H} + \mathbf{F}_k \right)^{-1} \E{\mathbf{g}_{kk}}, \end{equation} where $\mathbf{F}_k$ is defined as \begin{equation} \mathbf{F}_k = \sigma_{\text{ul}}^2 \text{diag}\left(\E{\| \delta_{1, k} \mathbf{v}_{1,k}\|^2}, \cdots, \E{\| \delta_{L, k}\mathbf{v}_{L,k}\|^2}\right). \end{equation} The weight vector for UE $k$, $\mathbf{a}_k$, has $|\mathcal{M}_k^{\text{s}}|$ non-zero elements, one for each O-RU that is serving UE $k$. The LSFD weights are only a function of the statistics of the channels, and thus the secondary O-DUs do not need to pass any channel estimates to the primary O-DU. The statistics of different O-RUs are uncorrelated, and thus the primary O-DU can combine the statistics of its own O-RUs and the O-RUs of secondary O-DUs without introducing any error \cite{SIG-109}. The secondary O-DUs only need to give the primary O-DU timely updates of these channel statistics. We consider the relevant statistics to be known at the O-DU for each UE that is served by its O-RUs. In this work, we do not consider the updates of channel statistics to the primary O-DU and assume them perfectly known at the primary O-DU as well. We calculate the LSFD combining weights and possibly new clusters on the same timescale: at every timestep, we first check if a new cluster is needed, possibly update the cluster, and then always update the LSFD weights.
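Given (e.g. Monte-Carlo) estimates of the statistics entering \eqref{eq:lsfd}, the weight computation is a single linear solve. The argument layout below is a hypothetical convention for illustration.

```python
import numpy as np

def lsfd_weights(G_second_moments, g_kk_mean, F_k, p_list, k):
    """n-opt LSFD weights a_k from channel statistics.

    G_second_moments[i] approximates E[g_ki g_ki^H] for each UE i in S_k,
    g_kk_mean approximates E[g_kk]; p_list holds the transmit powers and
    k indexes UE k within these lists (assumed layout).
    """
    A = F_k.astype(complex).copy()
    for Gi, p_i in zip(G_second_moments, p_list):
        A += p_i * Gi
    return p_list[k] * np.linalg.solve(A, g_kk_mean)
```

Because only channel statistics enter, this computation needs no instantaneous channel estimates from the secondary O-DUs, as noted above.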
Future work should also consider updates of the LSFD weights based on a metric for the ageing of these weights, without necessarily updating the cluster. From the primary O-DU to a secondary O-DU, the only signalling required is the transfer of the relevant elements of $\mathbf{a}_k$ for the O-RUs of this secondary O-DU which are in the serving cluster of UE $k$. We denote the set of O-RUs that are connected to O-DU $c$ as $\mathcal{L}_c$. O-DU $c$ can combine the signals locally for each UE served by O-RU $l \in \mathcal{L}_c$ as \begin{equation} \hat{s}_k^{c} = \sum_{l \in \mathcal{L}_c} [\mathbf{a}^{\ast}_{k}]_{l} \hat{s}_{l,k} . \end{equation} The communication of these $\mathbf{a}_k$ between O-DUs is an extension of \cite{selfCite}; we see it as an important part of this inter-O-DU interface. The primary O-DU then computes the final estimate for the UE's signal by combining the estimates of each of the serving O-DUs, $\hat{s}_k^c$: \begin{equation} \label{eq:9} \hat{s}_k = \sum_{c} \hat{s}_k^{c} = \sum_{l \in \mathcal{M}_k^{\text{s}}} [\mathbf{a}^{\ast}_{k}]_{l} \hat{s}_{l,k} . \end{equation} The result from \eqref{eq:9} is the same solution as if a centralized CPU computed the result via LSFD. Due to how the calculations are organised, the result is transparent to how the O-RUs are distributed amongst the different O-DUs; after all, the n-opt LSFD weights are calculated per O-RU. In our proposed architecture, computations are divided amongst the primary O-DUs, thus enabling us to scale the algorithm across a large number of O-RUs and O-DUs.
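The transparency of \eqref{eq:9} to the O-RU-to-O-DU assignment can be checked directly: partial per-O-DU sums reproduce the centralized LSFD sum regardless of the mapping. Names below are illustrative.

```python
import numpy as np

def distributed_estimate(a_k, s_local, oru_to_odu):
    """Combine local O-RU estimates s_local with weights a_k, first per
    O-DU and then across O-DUs, mirroring the two-stage combination."""
    partial = {}
    for l, (w, s) in enumerate(zip(a_k, s_local)):
        c = oru_to_odu[l]
        # each O-DU accumulates [a_k^*]_l * s_{l,k} over its own O-RUs
        partial[c] = partial.get(c, 0) + np.conj(w) * s
    # the primary O-DU sums the per-O-DU partial estimates
    return sum(partial.values())
```

Since addition is associative, any grouping of the O-RUs into O-DUs yields the same $\hat{s}_k$ as a centralized CPU computing $\mathbf{a}_k^H$ applied to the stacked local estimates.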
\subsection{Spectral Efficiency} \begin{equation} \label{eq:sinr} \begin{aligned} \text{SINR}_k^{\text{ul}} = \frac{p_k |\mathbf{a}_k^H \E{\mathbf{g}_{kk}}|^2} {\mathbf{a}_k^H \left( \sum_{i \in \mathcal{S}_k}p_i \E{\mathbf{g}_{ki}\mathbf{g}_{ki}^H} - p_k \E{\mathbf{g}_{kk}}\E{\mathbf{g}_{kk}^H} + \mathbf{F}_k\right)\mathbf{a}_k} \end{aligned} \end{equation} From the previous equations, we can estimate the SINR as in \eqref{eq:sinr} \cite{Demir_Bjornson_Sanguinetti_2021}. Via \eqref{eq:se}, we can then estimate the spectral efficiency (SE) from the SINR. This result is used in Section \ref{sec:results} to show the performance of the clustering: \begin{equation} \label{eq:se} \text{SE}^{\text{ul}}_k = \log_2(1 + \text{SINR}_k^{\text{ul}}). \end{equation}
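Evaluating \eqref{eq:sinr} and \eqref{eq:se} from estimated statistics is a short computation; the sketch below uses the same hypothetical argument layout as before.

```python
import numpy as np

def uplink_se(a_k, g_kk_mean, G_second_moments, p_list, F_k, k):
    """Evaluate the uplink SINR and the SE = log2(1 + SINR) for UE k."""
    # numerator: p_k |a_k^H E[g_kk]|^2
    signal = p_list[k] * np.abs(np.vdot(a_k, g_kk_mean)) ** 2
    # denominator matrix: sum_i p_i E[g_ki g_ki^H] - p_k E[g_kk] E[g_kk]^H + F_k
    B = F_k.astype(complex) - p_list[k] * np.outer(g_kk_mean, g_kk_mean.conj())
    for Gi, p_i in zip(G_second_moments, p_list):
        B += p_i * Gi
    interference = np.real(np.vdot(a_k, B @ a_k))
    return np.log2(1 + signal / interference)
```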
\section{Introduction} This paper studies finite difference schemes for nonlinear fractional order Klein-Gordon type equations. Fractional differential equations have applications in physics, biology and the petroleum industry; interested readers can refer to \cite{Kilbas2006,EBarkai_Phy,Meerschaert2001_Phy,Meerschaert2002_Phy,West2007_Bio,MFitt2009_Oill} for more details. One of the key features of fractional derivatives is their nonlocal dependence, which makes fractional differential equations suitable to model certain phenomena. However, the nonlocal dependence causes difficulty in the study of these equations. In the past decade, many works have been done on the study of effective numerical methods for time fractional differential equations; the most popular approaches are finite difference, spectral and finite element methods, see \cite{YusteSIAM2005, VongAML, SunJCP2014,LiuSIAM2008,LiuFCAA2013,XuJCP2007,JinSIAM2013, JinSIAM2016,DengJCP2007,DengSIAM2008,CuiJCP2009,VongCMA2014,Zhao,Lei} and the references therein. The Klein-Gordon equation is a basic equation describing many phenomena, and solving it numerically is an interesting topic. Many efficient methods have been employed to successfully solve the linear and nonlinear Klein-Gordon or sine-Gordon equations, such as the Adomian decomposition method \cite{Kaya2004,El-Sayed2003}, the variational iteration method (VIM) \cite{Batiha2007}, He's variational iteration method \cite{Yusufo2008}, the homotopy analysis method (HAM) \cite{Jafari2009}, and so on. Studying this kind of equation with fractional order derivatives is more challenging.
In this paper, we consider finite difference schemes for nonlinear time fractional Klein-Gordon type equations of the following form: \begin{align}\label{eq1} &^C_0D_t^{\alpha} u(x,t)=\frac{\partial^2 u(x,t)}{\partial x^2}-f(u)+p(x,t), \quad x\in (a,b),\quad t\in(0,T],\\ \label{eq2} & u(a,t)=u(b,t)=0, \quad t\in(0,T],\\ \label{eq3} &u(x,0)=\varphi(x), \quad u_t(x,0)=\psi(x), \quad x\in [a,b], \end{align} where $1<\alpha<2$, $^C_0D_t^{\alpha}$ denotes the Caputo derivative defined by $$^C_0D_t^{\alpha} u(x,t)=\frac1{\Gamma(2-\alpha)}\int_0^t\frac{\partial^2 u(x,s)}{\partial s^2}\frac{ds}{{(t-s)}^{\alpha-1}},$$ and $f$ is a continuous function satisfying the Lipschitz condition: \begin{align}\label{Lip} |f(\phi_1)-f(\phi_2)|\leq L|\phi_1-\phi_2|, \quad \forall \phi_1,\phi_2 \in \Omega. \end{align} Here $\Omega$ is a suitable domain, and $L$ is a positive constant that only depends on the domain $\Omega$. A large body of literature has been devoted to the study of time fractional Klein-Gordon (or sine-Gordon) type equations, see also \cite{Alireza2011,CuiNPDE2009,Vong2014,Vong2015,ChenH-Taiwan,Dehghan2015,Jafari2013} and references therein. The authors in \cite{Vong2014,Vong2015} proposed spatially compact schemes to solve the one- and two-dimensional time fractional Klein-Gordon type equations, respectively, and stability and convergence were analyzed by the energy method. In \cite{ChenH-Taiwan}, a fully spectral scheme with finite difference discretization in time and Legendre spectral approximation in space was derived. Moreover, the meshless method based on radial basis functions was used in \cite{Dehghan2015} to obtain an unconditionally stable discrete scheme for this kind of equation. We note that the finite difference schemes (or finite difference discretizations in time) proposed in \cite{Vong2014,Vong2015,ChenH-Taiwan,Dehghan2015} are nonlinear with temporal convergence order ${\cal O}(\tau^{3-\alpha})~(1<\alpha<2)$, or linear with convergence order ${\cal O}(\tau)$.
These observations motivate us to investigate a linearized scheme with higher temporal convergence order to solve the nonlinear equations. We remark that linearized schemes were shown to be very efficient for dealing with nonlinear problems \cite{zhao_siamJSC,SunNPDE2016,WangD2014,Ran2016}. The main advantage is that the nonlinear term is evaluated at the previous time level, so that an iterative method is not needed to obtain the solution at the current time level, which is more convenient and saves much computational cost. However, to our knowledge, this idea has not been applied to construct second temporal convergence order schemes for time-fractional differential equations. Our scheme in this paper is second-order in time. The proposed method is based on the discretization given in \cite{Alikhanov} and the idea of linearized schemes. We further note that the discretization formula for fractional derivatives developed in \cite{Alikhanov} is not given at grid points. This induces some technical difficulties for shifting the evaluation of the nonlinear term to the previous time level. Inspired by some estimates in the recent works \cite{Liao0,Liao2}, we show that our proposed scheme converges with second order in time. The rest of the paper is organized as follows. In section \ref{derivation}, we first give some estimates of the discretization coefficients of the fractional derivative, and using the weighted approach we derive a linearized implicit scheme for the problem \eqref{eq1}--\eqref{eq3}. The scheme is shown to be convergent with ${\cal O}(\tau^2+h^2)$ and stable by the discrete energy method in section \ref{Analysis}. A spatially fourth-order compact scheme is proposed in section \ref{compactscheme}. In section \ref{Numericalexperiments}, we test some numerical examples to confirm the theoretical results. A brief conclusion is given in the last section.
\section{Derivation of the difference scheme}\label{derivation} \subsection{Preliminary notations and lemmas} The following notations are needed to present our scheme. Let $\tau=\frac{T}{N}$ and $h=\frac{b-a}{M}$ be the temporal and spatial step sizes respectively, where $N$ and $M$ are some given integers. For $n=0,1,...,N$, and $i=0,1,...,M$, denote $t_n=n\tau$, $x_i=ih$, $t_{n+\theta}=(n+\theta)\tau$ for a constant $\theta\in [0,1]$, $\varphi_i=\varphi(x_i)$ and $\psi_i=\psi(x_i)$. We next introduce the grid function spaces $\mathcal{V}_h=\big\{u|u=\{u_i|0\leq i\leq M\}\mbox{ and }u_0=u_M=0\big\}$ and $\mathcal{W}_\tau=\{w^n|0\leq n\leq N\}$. For any $u,v\in \mathcal{V}_h$, we denote $$\delta_x u_{i-\frac12}=\frac{u_i-u_{i-1}}{h}, \quad \delta_x^2 u_i=\frac{\delta_x u_{i+\frac12}-\delta_x u_{i-\frac12}}{h}=\frac{u_{i+1}-2u_i+u_{i-1}}{h^2},$$ and the inner product and norms $$\langle u,v \rangle=h\sum_{i=1}^{M-1}u_iv_i,\quad \|u\|=\sqrt{\langle u,u\rangle},\quad |u|_1=\sqrt{h\sum_{i=1}^M\mid\delta_x u_{i-\frac12}\mid^2},\quad \|u\|_\infty=\max_{1\leq i\leq M-1}|u_i|.$$ For any $u^n\in\mathcal{W}_\tau$, we further consider $$ \delta_t u^{n+\frac12}=\frac{u^{n+1}-u^n}{\tau},\quad \delta_{\hat t} u^{n}=\frac{u^{n+1}-u^{n-1}}{2\tau}.$$ The discretization of the fractional derivative in our scheme is based on the following lemma, which is obtained directly by replacing the parameter $\sigma=\frac{2-\alpha}{2}$ $(0<\alpha<1)$ in Lemma 2.3 of \cite{Liao0} with $\theta=\frac{3-\alpha}{2}$ $(1<\alpha<2)$; in fact, it can also be found in Lemma 2.3 of \cite{Liao2}. \begin{lemma}\label{lemma_time} Suppose $1<\alpha<2,~\theta=\frac{3-\alpha}{2},~v(t)\in{\cal C}^2[0,T]\cap{\cal C}^3(0,T]$, and there exists a positive constant $C$ such that $|v'''(t)|\leq C t^{\alpha-2}$ in $(0,T]$.
Then \begin{align*} &\frac1{\Gamma(2-\alpha)}\int_0^{t_{n+\theta}}\frac{v'(s)}{(t_{n+\theta}-s)^{\alpha-1}}ds -\Delta_t^\alpha v(t_{n+\theta})={\cal O}(\tau^2), \end{align*} where $\Delta_t^\alpha v(t_{n+\theta})=\frac{\tau^{1-\alpha}}{\Gamma(3-\alpha)}\sum_{s=0}^{n}c_{n-s}^{(n+1)}[v(t_{s+1})-v(t_s)]$, and $c_0^{(1)}=a_{0}$ for $n=0$, $$c_m^{(n+1)}=\left\{\begin{array}{ll} a_{0}+b_{1}, &m=0,\\ a_m+b_{m+1}-b_m, &1\leq m\leq n-1,\\ a_{n}-b_{n}, &m=n;\end{array}\right.$$ for $n\geq1$, in which $ a_{0}=\theta^{2-\alpha}$, $ a_{l}=(l+\theta)^{2-\alpha}-(l-1+\theta)^{2-\alpha}$, for $l\geq1$; and $b_{l}=\frac1{3-\alpha}[(l+\theta)^{3-\alpha}-(l-1+\theta)^{3-\alpha}]-\frac12[(l+\theta)^{2-\alpha}+(l-1+\theta)^{2-\alpha}].$ \end{lemma} \begin{lemma}\label{akbk} The coefficients $a_k$, $b_k$, and $c_k^{(n+1)}$ defined in Lemma \ref{lemma_time} satisfy \begin{align*} & (a)\quad 0<b_k<\frac{\alpha-1}{2(3-\alpha)}a_k<\frac12 a_k,~k\geq 1, \quad \quad (b)\quad \sum_{k=1}^{n}b_k<\frac12\sum_{k=0}^na_k=\frac12(n+\theta)^{2-\alpha},~n\geq 1,\\ & (c)\quad c_n^{(n+1)}>\frac{2-\alpha}{2(n+\theta)^{\alpha-1}}, \quad\quad (d)\quad c_0^{(n+1)}>c_1^{(n+1)}>\cdots>c_{n-1}^{(n+1)}>c_{n}^{(n+1)},\\ &(e)\quad (2\theta-1)c_0^{(n+1)}-\theta c_1^{(n+1)}>0,\quad \quad (f)\quad\sum_{k=0}^{n}c_k^{(n+1)}=\sum_{k=0}^{n}c_k^{(k+1)}+\sum_{k=1}^{n}b_k=(n+\theta)^{2-\alpha},\\ &(g)\quad c_{n-1}^{(n+1)}=c_{n-1}^{(n)}+b_n,~n\geq 1;\quad\quad c_k^{(n+1)}=c_k^{(n)},~0\leq k\leq n-2,~n\geq 2,\\ &(h)\quad \sum_{k=0}^n\frac1{c_k^{(n+1)}}<\frac{2{(n+1)}^\alpha}{2-\alpha},~n\geq 1. \end{align*} \end{lemma} \begin{proof} The inequalities $(a)$--$(g)$ are obtained directly by replacing $\sigma=\frac{2-\alpha}{2}$ $(0<\alpha<1)$ in Lemma 2.1 and Lemma 2.2 of \cite{Liao0} with $\theta=\frac{3-\alpha}{2}$ $(1<\alpha<2)$, which also can be found in Lemma 2.1 and Lemma 2.2 of \cite{Liao2}. 
Using $(c)$--$(d)$, we have $$\sum_{k=0}^n\frac1{c_k^{(n+1)}}<\sum_{k=0}^n\frac1{c_n^{(n+1)}}<\frac{2(n+1){(n+\theta)}^{\alpha-1}}{2-\alpha} <\frac{2{(n+1)}^\alpha}{2-\alpha},$$ thus $(h)$ is verified. \end{proof} Denote $\mu=\tau^{\alpha-1} \Gamma(3-\alpha)$ and \begin{equation}\label{dkn} d_0^{(1)}=\frac{c_0^{(1)}}{\mu}=\frac{\theta^{2-\alpha}\tau^{1-\alpha}}{\Gamma(3-\alpha)}; \quad\quad d_k^{(n+1)}=\left\{\begin{array}{ll} \frac{c_k^{(n+1)}}{\mu}, \quad &0\leq k\leq n-1,\\ \frac{c_n^{(n+1)}-\frac{\theta}{1-\theta}b_n}{\mu}, &k=n, \end{array}\right. \quad n\geq 1. \end{equation} We further have the following lemma. \begin{lemma}\label{dk} The above coefficients $d_{k}^{(n+1)}$ $(0\leq k\leq n\leq N-1,~n\geq 1)$ satisfy \begin{align*} &(a)\quad \frac{(2-\alpha)^2t_{n+\theta}^{1-\alpha}}{\Gamma(4-\alpha)}<d_n^{(n+1)}<\frac{t_{n-1+\theta}^{1-\alpha}}{\Gamma(2-\alpha)}, \quad \quad (b) \quad d_0^{(n+1)}>d_1^{(n+1)}>\cdots>d_{n-1}^{(n+1)}>d_{n}^{(n+1)}, \\ &(c) \quad(2\theta-1)d_0^{(n+1)}-\theta d_1^{(n+1)}>0,\quad \quad (d) \quad\tau \sum_{k=0}^{n}d_k^{(n+1)}<\frac{t_{n+\theta}^{2-\alpha}}{\Gamma(3-\alpha)},\\ &(e) \quad \tau \sum_{k=0}^{n}d_k^{(k+1)}<\frac{t_{n+\theta}^{2-\alpha}}{\Gamma(3-\alpha)}, \quad\quad (f) \quad \tau\sum_{k=0}^n \frac1{d_k^{(k+1)}}< \frac{\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2} . \end{align*} \end{lemma} \begin{proof} Applying Lemma \ref{akbk}(a), we get \begin{align*} &d_n^{(n+1)}=\frac{a_n-\frac1{1-\theta}b_n}{\mu}< \frac{a_n}{\mu}=\frac{2-\alpha}{\mu}\int_0^1\frac{ds}{(n+\theta-s)^{\alpha-1}}<\frac{t_{n-1+\theta}^{1-\alpha}}{\Gamma(2-\alpha)},\\ &d_n^{(n+1)}=\frac{a_n-\frac1{1-\theta}b_n}{\mu}>\frac{2-\alpha}{(3-\alpha)\mu}a_n> \frac{(2-\alpha)^2}{(3-\alpha)\mu}\int_0^1\frac{ds}{(n+\theta-s)^{\alpha-1}}>\frac{(2-\alpha)^2t_{n+\theta}^{1-\alpha}}{\Gamma(4-\alpha)}. \end{align*} So (a) is verified. 
Since $b_n>0$, it follows from Lemma \ref{akbk}(d) that $c_0^{(n+1)}>c_1^{(n+1)}>\cdots>c_{n-1}^{(n+1)}>c_{n}^{(n+1)}-b_n$, then the definition \eqref{dkn} yields the inequality (b). Similarly, Lemma \ref{akbk}(e) implies that $$(2\theta-1)d_0^{(n+1)}-\theta d_1^{(n+1)}=\big[(2\theta-1)c_0^{(n+1)}-\theta c_1^{(n+1)} \big]/\mu>0,~n\geq 2;$$ and $(2\theta-1)d_0^{(2)}-\theta d_1^{(2)}=\big[(2\theta-1)c_0^{(2)}-\theta c_1^{(2)} \big]/\mu>0$. Thus (c) is verified. The definition \eqref{dkn} also implies $\tau\sum_{k=0}^n d_k^{(n+1)}<\frac{\tau}{\mu}\sum_{k=0}^n c_k^{(n+1)}$ such that the inequality (d) is obtained by using Lemma \ref{akbk}(f). The proof of (a) shows that $d_k^{(k+1)}<\frac{a_k}{\mu}$ for $k\geq 1$. Then Lemma \ref{akbk}(b) yields the inequality (e). It is easy to check that $\frac{\theta}{\Gamma(3-\alpha)}>\frac{(2-\alpha)^2}{\Gamma(4-\alpha)}$ for $1<\alpha<2$, then $d_0^{(1)}=\frac{\theta t_\theta^{1-\alpha}}{\Gamma(3-\alpha)}>\frac{(2-\alpha)^2t_\theta^{1-\alpha}}{\Gamma(4-\alpha)}$, and it follows by combining (a) that $$\tau\sum_{k=0}^n \frac1{d_k^{(k+1)}}<\tau\sum_{k=0}^n\frac{\Gamma(4-\alpha)t_{n+\theta}^{\alpha-1}}{(2-\alpha)^2}< \frac{\Gamma(4-\alpha)t_{n+1}^{\alpha}}{(2-\alpha)^2}\leq \frac{\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2} ,$$ so inequality (f) is verified. \end{proof} We have the following lemma relating solution values at different points. \begin{lemma}\label{time} For any $g(t)\in{\cal C}^2[t_{n-1+\theta},t_{n+\theta}]$, it holds that $$g(t_n)=(1-\theta)g(t_{n+\theta})+\theta g(t_{n-1+\theta})+{\cal O}(\tau^2).$$ \end{lemma} \begin{proof} Using Taylor formula with integral remainder, we have \begin{align*} &g(t_{n+\theta})=g(t_n)+\theta\tau g'(t_n)+\theta^2\tau^2 \int_0^1 g''(t_n+\rho\theta\tau)(1-\rho)d\rho,\\ &g(t_{n-1+\theta})=g(t_n)-(1-\theta)\tau g'(t_n)+(1-\theta)^2\tau^2 \int_0^1 g''\big(t_n-\rho(1-\theta)\tau\big)(1-\rho)d\rho. \end{align*} Then the desired result can be obtained by a direct calculation. 
\end{proof} \subsection{Weighted approximation to time fractional derivative} Denote $v=u_t$, then $^C_0D_t^{\alpha} u=\frac1{\Gamma(2-\alpha)}\int_0^{t}\frac{v_t(s)}{(t-s)^{\alpha-1}}ds$. We define the grid functions $$U_i^n=u(x_i,t_n),\mbox{ and } ~ V_i^{n+\sigma}=v(x_i,t_{n+\sigma})~ \mbox{ for }~ 0\leq\sigma\leq 1 ,~ 0\leq i\leq M, ~ 0\leq n\leq N.$$ Considering equation \eqref{eq1} at the point $(x_i,t_n)$, we have \begin{align}\label{eeq1} ^C_0D_t^{\alpha} u(x_i,t_n)=\frac{\partial^2 u(x_i,t_n)}{\partial x^2}-f\big(u(x_i,t_n)\big)+p(x_i,t_n), \quad 1\leq i\leq M-1, ~ 1\leq n\leq N. \end{align} Utilizing Lemma \ref{time} and Lemma \ref{lemma_time}, we introduce a weighted approximation for the time fractional derivative. Assuming $^C_0D_t^{\alpha} u(x_i,t)\in{\cal C}^2[t_{n-1+\theta},t_{n+\theta}]$ and $v(x_i,t)\in{\cal C}^2[0,T]\cap{\cal C}^3(0,T]$, $1\leq n\leq N-1$, it follows that \begin{align}\nonumber ^C_0D_t^{\alpha} u(x_i,t_n)=&(1-\theta) ^C_0D_t^{\alpha} u(x_i,t_{n+\theta})+\theta\, ^C_0D_t^{\alpha} u(x_i,t_{n-1+\theta})+{\cal O}(\tau^2),\\\label{Rhat} =&(1-\theta) \Delta_t^\alpha V_i^{n+\theta}+\theta \Delta_t^\alpha V_i^{n-1+\theta}+({\hat R}_t)_i^n, \end{align} where $({\hat R}_t)_i^n={\cal O}(\tau^2)$. Denote $V_i^{k+1-\theta}=(1-\theta)V_i^{k+1}+\theta V_i^k$ ($k\geq0$).
Then, for $n\geq1$, using Lemma \ref{akbk}(g) and \eqref{dkn}, we have \begin{align}\nonumber (1-\theta) \Delta_t^\alpha V_i^{n+\theta}+\theta \Delta_t^\alpha V_i^{n-1+\theta} =& \frac{(1-\theta)}{\mu} \sum_{k=0}^n c_{n-k}^{(n+1)}(V_i^{k+1}-V_i^k) + \frac{\theta}{\mu} \sum_{k=0}^{n-1} c_{n-k-1}^{(n)}(V_i^{k+1}-V_i^k) \\\nonumber =& \frac{(1-\theta)}{\mu} \sum_{k=0}^n c_{n-k}^{(n+1)}(V_i^{k+1}-V_i^k) + \frac{\theta}{\mu} \sum_{k=0}^{n-1} c_{n-k-1}^{(n+1)}(V_i^{k+1}-V_i^k)\\\nonumber &- \frac{\theta}{\mu} b_n (V_i^1-V_i^0) \\\nonumber =& \frac{(1-\theta)}{\mu} \sum_{k=1}^n c_{n-k}^{(n+1)}(V_i^{k+1}-V_i^k) + \frac{\theta}{\mu} \sum_{k=1}^{n} c_{n-k}^{(n+1)}(V_i^k-V_i^{k-1})\\\nonumber &+\frac{(1-\theta)}{\mu}c_n^{(n+1)}(V_i^1-V_i^0)- \frac{\theta}{\mu} b_n (V_i^1-V_i^0) \\\nonumber =& \frac1{\mu} \sum_{k=1}^n c_{n-k}^{(n+1)} (V_i^{k+1-\theta}-V_i^{k-\theta})\\\nonumber &+\frac1{\mu}\big[(1-\theta)c_n^{(n+1)}-\theta b_n\big](V_i^1-V_i^0)\\\label{timedisc} =& \sum_{k=1}^n d_{n-k}^{(n+1)}(V_i^{k+1-\theta}-V_i^{k-\theta})+ d_n^{(n+1)}(V_i^{1-\theta}-V_i^0). \end{align} If $n=0$, \begin{align}\label{timedisc0} \Delta_t^\alpha V_i^\theta=\frac{d_0^{(1)}}{1-\theta}(V_i^{1-\theta}-V_i^0). 
\end{align} For $k\geq 1$, Taylor expansion gives \begin{align}\nonumber V_i^{k+1-\theta}=&(2-2\theta)V_i^{k+\frac12}+(2\theta-1)V_i^k+(2-2\theta)(R_{v})_i^{k+\frac12}\\\nonumber =&(2-2\theta)\delta_tU_i^{k+\frac12}+(2\theta-1)\delta_{\hat t}U_i^{k}-(2-2\theta)(R_t)_i^{k+\frac12}-(2\theta-1)(R_{\hat t})_i^{k}\\\label{vktheta} &+(2-2\theta)(R_{v})_i^{k+\frac12}, \end{align} and \begin{align}\nonumber V_i^{1-\theta}&=(2-2\theta)V_i^{\frac12}+(2\theta-1)V_i^0+(2-2\theta)(R_{v})_i^{\frac12}\\\label{v01theta} &=(2-2\theta)\delta_tU_i^{\frac12}+(2\theta-1)\psi_i-(2-2\theta)(R_t)_i^{\frac12}+(2-2\theta)(R_{v})_i^{\frac12}, \end{align} where $$(R_{v})_i^{k+\frac12}=\frac{\tau^2}{8}\int_0^1\Big[\frac{\partial^3u(x_i,t_{k+\frac12}-\frac{\rho\tau}{2})}{\partial t^3}+\frac{\partial^3u(x_i,t_{k+\frac12}+\frac{\rho\tau}{2})}{\partial t^3} \Big](1-\rho) d\rho \mbox{ for } k\geq 0,$$ $$(R_t)_i^{k+\frac12}=\frac{\tau^2}{16}\int_0^1\Big[\frac{\partial^3u(x_i,t_{k+\frac12}-\frac{\rho\tau}{2})}{\partial t^3}+\frac{\partial^3u(x_i,t_{k+\frac12}+\frac{\rho\tau}{2})}{\partial t^3} \Big](1-\rho)^2 d\rho \mbox{ for } k\geq 0,$$ and $$(R_{\hat t})_i^{k}=\frac{\tau^2}{4}\int_0^1\Big[\frac{\partial^3 u(x_i,t_{k}-\rho\tau)}{\partial t^3}+\frac{\partial^3 u(x_i,t_k+\rho\tau)}{\partial t^3} \Big](1-\rho)^2 d\rho \mbox{ for } k\geq 1.$$ So, by inserting \eqref{vktheta} and \eqref{v01theta} into \eqref{timedisc}, we can get the following approximation for the fractional derivative on grid function $U_i^n$ \begin{align}\nonumber &(1-\theta) \Delta_t^\alpha V_i^{n+\theta}+\theta \Delta_t^\alpha V_i^{n-1+\theta}\\\nonumber =&\sum_{k=1}^n d_{n-k}^{(n+1)}(V_i^{k+1-\theta}-V_i^{k-\theta})+ d_n^{(n+1)}(V_i^{1-\theta}-V_i^0)\\\nonumber =&\sum_{k=1}^n d_{n-k}^{(n+1)}\big[(2-2\theta)\big(\delta_tU_i^{k+\frac12}-\delta_tU_i^{k-\frac12}\big)+(2\theta-1)\big(\delta_{\hat t}U_i^{k}-\delta_{\hat t}U_i^{k-1} \big)\big]\\\label{Rtilde} &+d_n^{(n+1)}\big[(2-2\theta)\delta_tU_i^{\frac12}+(2\theta-1)\psi_i-V_i^0 
\big]-({\tilde R}_t)_i^{n+1},\quad 1\leq n\leq N-1,~1\leq i\leq M-1, \end{align} where \begin{align*} ({\tilde R}_t)_i^{n+1} =&\sum_{k=1}^n d_{n-k}^{(n+1)}\big[(2-2\theta)\big((R_t)_i^{k+\frac12}-(R_t)_i^{k-\frac12}\big)+(2\theta-1)\big((R_{\hat t})_i^{k}-(R_{\hat t})_i^{k-1}\big)\\ &-(2-2\theta)\big((R_{v})_i^{k+\frac12}-(R_{v})_i^{k-\frac12}\big)\big]+(2-2\theta)d_n^{(n+1)}\big[(R_t)_i^{\frac12}-(R_{v})_i^{\frac12}\big]. \end{align*} Utilizing the Taylor expansion, we observe that \begin{align*} (R_t)_i^{k+\frac12}-(R_t)_i^{k-\frac12}=&\frac{\tau^3}{16}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k+\frac12}+(\varrho-\frac{\rho}{2})\tau)}{\partial t^4}(1-\rho)^2d\varrho d\rho\\ &+\frac{\tau^3}{16}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k+\frac12}+(\varrho+\frac{\rho}{2})\tau)}{\partial t^4}(1-\rho)^2d\varrho d\rho, \\ (R_{\hat t})_i^{k}-(R_{\hat t})_i^{k-1}=&\frac{\tau^3}{4}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k-1}+(\varrho-\rho)\tau)}{\partial t^4}(1-\rho)^2d\varrho d\rho\\ &+\frac{\tau^3}{4}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k-1}+(\varrho+\rho)\tau)}{\partial t^4}(1-\rho)^2d\varrho d\rho, \\ (R_{v})_i^{k+\frac12}-(R_{v})_i^{k-\frac12}=&\frac{\tau^3}{8}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k+\frac12}+(\varrho-\frac{\rho}{2})\tau)}{\partial t^4}(1-\rho)d\varrho d\rho\\ &+\frac{\tau^3}{8}\int_0^1\int_0^1\frac{\partial^4 u(x_i,t_{k+\frac12}+(\varrho+\frac{\rho}{2})\tau)}{\partial t^4}(1-\rho)d\varrho d\rho, \quad 1\leq k\leq n. \end{align*} The three identities above yield \begin{align*} |(R_t)_i^{k+\frac12}-(R_t)_i^{k-\frac12}|\leq C_1 \tau^3,~|(R_{\hat t})_i^{k}-(R_{\hat t})_i^{k-1}|\leq C_1 \tau^3,~ |(R_{v})_i^{k+\frac12}-(R_{v})_i^{k-\frac12}|\leq C_1\tau^3, \end{align*} where $C_1$ is a positive constant.
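These third-order cancellation bounds are easy to probe numerically. Below is a minimal Python sketch (not part of the scheme itself) that uses the characterization, implicit in \eqref{vktheta}, of $(R_t)_i^{k+\frac12}$ as the centered-difference remainder $\delta_t u^{k+\frac12}-u_t(t_{k+\frac12})$ at a frozen spatial point; the test function $u(t)=e^t$, the base time $t_k=1$ and the step sizes are illustrative choices.

```python
import numpy as np

# (R_t)^{k+1/2} as the centered-difference remainder at a frozen x:
# delta_t u^{k+1/2} - u'(t_{k+1/2}); this identification follows from (vktheta).
def Rt(u, du, t_half, tau):
    return (u(t_half + tau / 2) - u(t_half - tau / 2)) / tau - du(t_half)

u, du = np.exp, np.exp        # illustrative smooth test function
tk = 1.0                      # a fixed interior time t_k
single, diff = [], []
for tau in (0.1, 0.05):
    r_plus = Rt(u, du, tk + tau / 2, tau)    # (R_t)^{k+1/2}
    r_minus = Rt(u, du, tk - tau / 2, tau)   # (R_t)^{k-1/2}
    single.append(abs(r_plus))               # O(tau^2) individually
    diff.append(abs(r_plus - r_minus))       # O(tau^3) after cancellation

order_single = np.log2(single[0] / single[1])
order_diff = np.log2(diff[0] / diff[1])
print(order_single, order_diff)   # approximately 2 and 3
```

Each remainder alone is only ${\cal O}(\tau^2)$; it is the difference of two consecutive remainders that gains one extra order, which is exactly what the observed rates confirm.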
Now, assuming that $\frac{\partial^4 u}{\partial t^4}$ is continuous on $[a,b]\times[0,T]$, we obtain \begin{align*} |({\tilde R}_t)_i^{n+1}| \leq & (2-2\theta)\sum_{k=1}^n d_{n-k}^{(n+1)}|(R_t)_i^{k+\frac12}-(R_t)_i^{k-\frac12}|+(2\theta-1)\sum_{k=1}^n d_{n-k}^{(n+1)}|(R_{\hat t})_i^{k}-(R_{\hat t})_i^{k-1}|\\ & +(2-2\theta)\sum_{k=1}^n d_{n-k}^{(n+1)}|(R_{v})_i^{k+\frac12}-(R_{v})_i^{k-\frac12}|+(2-2\theta)d_n^{(n+1)}\big(|(R_t)_i^{\frac12}|+|(R_{v})_i^{\frac12}|\big)\\ \leq & (3-2\theta)C_1\sum_{k=1}^n d_{n-k}^{(n+1)}\tau^3+ C_2d_n^{(n+1)}\tau^2 \\ \leq&(3-2\theta)\frac{C_1t_{n+\theta}^{2-\alpha}}{\Gamma(3-\alpha)}\tau^2+ \frac{C_2t_{n-1+\theta}^{1-\alpha}}{\Gamma(2-\alpha)}\tau^2 \\ \leq & C_3 \tau^2, \end{align*} where Lemma \ref{dk}(a),(d) have been used, the factor $3-2\theta=2(2-2\theta)+(2\theta-1)$ collects the three sums, and $C_2,C_3$ are positive constants. \subsection{Approximation on first time level and space discretization} Note that the discretization \eqref{Rtilde} serves to compute the numerical solution $U^{n+1}$ for $n\geq 1$. For the approximation on the first time level, a further construction is required. Inserting \eqref{v01theta} into \eqref{timedisc0}, we have \begin{align}\label{Rttheta} \Delta_t^\alpha V_i^\theta=\frac{d_0^{(1)}}{1-\theta}(2-2\theta)\big(\delta_t U_i^{\frac12}-\psi_i \big)-({\tilde R}_t)_i^\theta, \end{align} in which \begin{align*} ({\tilde R}_t)_i^\theta=\frac{d_0^{(1)}}{1-\theta}(2-2\theta)\big[(R_t)_i^{\frac12}-(R_{v})_i^{\frac12}\big] =\frac{2\theta^{2-\alpha}\tau^{1-\alpha}}{\Gamma(3-\alpha)}\big[(R_t)_i^{\frac12}-(R_{v})_i^{\frac12}\big], \end{align*} hence we have $|({\tilde R}_t)_i^\theta|\leq \frac{2\theta^{2-\alpha}\tau^{1-\alpha}}{\Gamma(3-\alpha)}\big(|(R_t)_i^{\frac12}|+|(R_{v})_i^{\frac12}|\big)\leq C_3\tau^{3-\alpha}$.
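The ${\cal O}(\tau^{3-\alpha})$ behavior of $({\tilde R}_t)_i^\theta$ can likewise be observed numerically. The sketch below evaluates $(R_t)_i^{\frac12}$ and $(R_{v})_i^{\frac12}$ through the closed forms implicit in \eqref{v01theta} (the centered-difference and linear-interpolation remainders of $u_t$) at a frozen spatial point; the values $\alpha=0.5$, $\theta=0.7$ and $u(t)=e^t$ are illustrative assumptions, not choices dictated by the scheme.

```python
import numpy as np
from math import gamma

# Numerical check that |tilde R_t^theta| = O(tau^{3-alpha}) for smooth u.
# From (v01theta): (R_t)^{1/2} = delta_t u^{1/2} - u_t(t_{1/2}) and
# (R_v)^{1/2} = [u_t(t_{1-theta}) - (2-2 theta) u_t(t_{1/2})
#               - (2 theta - 1) u_t(t_0)] / (2 - 2 theta).
alpha, theta = 0.5, 0.7       # illustrative parameter values
u, du = np.exp, np.exp        # u(t) and u_t(t), frozen at a fixed x

def Rtilde_theta(tau):
    Rt = (u(tau) - u(0.0)) / tau - du(tau / 2)                    # (R_t)^{1/2}
    Rv = (du((1 - theta) * tau) - (2 - 2 * theta) * du(tau / 2)
          - (2 * theta - 1) * du(0.0)) / (2 - 2 * theta)          # (R_v)^{1/2}
    return 2 * theta**(2 - alpha) * tau**(1 - alpha) / gamma(3 - alpha) * (Rt - Rv)

e1, e2 = abs(Rtilde_theta(0.1)), abs(Rtilde_theta(0.05))
order = np.log2(e1 / e2)
print(order)   # close to 3 - alpha = 2.5
```

The $\tau^{1-\alpha}$ prefactor multiplied by the two ${\cal O}(\tau^2)$ remainders produces the observed rate $3-\alpha$.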
We can use the Taylor formula to get \begin{align}\label{rr1} u(x_i,t_\theta)=u(x_i,t_0)+\theta\tau u_t(x_i,t_0)+{\cal O}(\tau^2)=\varphi_i+\theta\tau\psi_i+{\cal O}(\tau^2), \quad 1\leq i\leq M-1. \end{align} Considering equation \eqref{eq1} at the point $(x_i,t_\theta)$, we have \begin{align}\label{rr2} ^C_0D_t^{\alpha} u(x_i,t_\theta)=\frac{\partial^2 u(x_i,t_\theta)}{\partial x^2}-f\big(u(x_i,t_\theta)\big)+p(x_i,t_\theta), \quad 1\leq i\leq M-1. \end{align} Then, combining Lemma \ref{lemma_time}, \eqref{rr1} and \eqref{rr2}, we obtain the approximation of the first time level \begin{align}\label{utheta} \Delta_t^\alpha V_i^{\theta}=(\varphi_{xx}+\theta\tau\psi_{xx})_i- f(\varphi_i+\theta\tau\psi_i) +p_i^\theta+({\hat R}_t)_i^\theta,\quad 1\leq i\leq M-1, \end{align} where $({\hat R}_t)_i^\theta={\cal O}(\tau^2)$. For the space discretization at each grid point, suppose that $\frac{\partial^4 u}{\partial x^4}$ is continuous on $[a,b]\times[0,T]$. The Taylor expansion gives \begin{align} \label{rx} ^C_0D_t^{\alpha} u(x_i,t_n)=\delta_x^2U_i^n-f\big(u(x_i,t_n)\big)+p(x_i,t_n)+(R_x)_i^n, \quad 1\leq i\leq M-1, ~ 1\leq n\leq N, \end{align} where $(R_x)_i^n={\cal O}(h^2)$. \subsection{The linearized scheme} To construct a stable implicit difference scheme, we need one more lemma on the approximation of $U_i^n$, obtained by Taylor expansion. This lemma plays an important role in the analysis of the next section; see the equality \eqref{ww}. It reads as follows: \begin{lemma}\label{w} For $n\geq 1$, it holds that $$U_i^n=\frac{W_i^{n+1}+W_i^n}{2}+(R_{w})_i^n,$$ where \begin{align*} W_i^{n}=&(\frac32-\theta)\big[\theta U_i^{n}+(1-\theta)U_i^{n-1}\big]+(\theta-\frac12)\big[\theta U_i^{n-1}+(1-\theta)U_i^{n-2}\big], \quad n\geq 2, \\ W_i^{1}=&(\frac32-\theta)\big[\theta U_i^{1}+(1-\theta)U_i^{0}\big]+(\theta-\frac12)\big[\theta U_i^{0}+(1-\theta)(U_i^1-2\tau\psi_i)\big], \end{align*} and $(R_{w})_i^n={\cal O}(\tau^2)$.
\end{lemma} Therefore, by \eqref{Rhat}, \eqref{Rtilde}, \eqref{Rttheta}, \eqref{utheta}, \eqref{rx} and Lemma \ref{w}, one can approximate the problem \eqref{eq1}--\eqref{eq3} as follows: \begin{align}\nonumber & (1-\theta) \Delta_t^\alpha V_i^{n+\theta}+\theta \Delta_t^\alpha V_i^{n-1+\theta}=\delta_x^2 \Big(\frac{W_i^{n+1}+W_i^n}{2}\Big)-f(U_i^n)+p_i^n+(R_x)_i^n-({\hat R}_t)_i^n+(R_{w})_i^n, \\\label{sch1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{sch2} &\Delta_t^\alpha V_i^\theta=(\varphi_{xx}+\theta\tau\psi_{xx})_i-f(\varphi_i+\theta\tau\psi_i) +p_i^\theta+({\hat R}_t)_i^\theta,~~~~~~~~~~ 1\leq i\leq M-1, \\\label{sch3} & U_0^n=U_M^n=0, \quad 1\leq n\leq N,\\\label{sch4} & U_i^0=\varphi_i,\quad V_i^0=\psi_i,\quad 0\leq i\leq M, \end{align} in which \begin{align}\nonumber (1-\theta) \Delta_t^\alpha V_i^{n+\theta}&+\theta \Delta_t^\alpha V_i^{n-1+\theta}=\sum_{k=1}^n d_{n-k}^{(n+1)}\big[(2-2\theta)\big(\delta_tU_i^{k+\frac12}-\delta_tU_i^{k-\frac12}\big)\\\label{sch5} &+(2\theta-1)\big(\delta_{\hat t}U_i^{k}-\delta_{\hat t}U_i^{k-1}\big)\big] +d_n^{(n+1)}(2-2\theta)(\delta_tU_i^{\frac12}-\psi_i)-({\tilde R}_t)_i^{n+1}, \end{align} and \begin{align}\label{sch6} \Delta_t^\alpha V_i^\theta=2d_0^{(1)}\big(\delta_t U_i^{\frac12}-\psi_i \big)-({\tilde R}_t)_i^\theta. \end{align} Denote by $u_i^n$, $v_i^n$ and $w_i^n$ the numerical approximations of $U_i^n$, $V_i^n$ and $W_i^n$, respectively.
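To see how the first-level relation is used in practice, note that once its remainder is dropped, \eqref{sch6} combined with \eqref{sch2} determines $u_i^1$ explicitly, since $\tau/(2d_0^{(1)})=\Gamma(3-\alpha)\tau^{\alpha}/(2\theta^{2-\alpha})$. The Python sketch below performs this single explicit step; the values of $\alpha$, $\theta$ and the data $\varphi$, $\psi$, $f$, $p$ are illustrative assumptions, and $(\varphi_{xx}+\theta\tau\psi_{xx})_i$ is replaced by its $\delta_x^2$ approximation for simplicity.

```python
import numpy as np
from math import gamma

# One explicit first step: combining (sch2) and (sch6) and dropping the
# remainders gives  u^1 = phi + tau*psi + tau/(2 d_0^(1)) * RHS,  with
# tau/(2 d_0^(1)) = Gamma(3-alpha) tau^alpha / (2 theta^(2-alpha)).
alpha, theta = 0.5, 0.7              # illustrative parameter values
a, b, M, tau = 0.0, 1.0, 64, 1e-3
h = (b - a) / M
x = np.linspace(a, b, M + 1)

phi = np.sin(np.pi * x)              # sample u(x,0)
psi = np.zeros_like(x)               # sample u_t(x,0)
f = lambda u: u ** 3                 # sample nonlinearity
p = lambda xx, t: np.zeros_like(xx)  # sample source term

def dxx(u, h):
    """Central second difference delta_x^2 on interior nodes."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h ** 2

u_theta0 = phi + theta * tau * psi
rhs = dxx(u_theta0, h) - f(u_theta0)[1:-1] + p(x[1:-1], theta * tau)
coef = gamma(3 - alpha) * tau ** alpha / (2.0 * theta ** (2 - alpha))

u1 = np.zeros_like(phi)              # homogeneous Dirichlet boundary (sch3)
u1[1:-1] = phi[1:-1] + tau * psi[1:-1] + coef * rhs
```

The later levels, by contrast, are implicit: \eqref{sch1} couples $u^{n+1}$ through $\delta_x^2$ acting on the averaged $W$-terms, so a linear system must be solved at each step.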
Omitting the small terms $(R_x)_i^n$, $({\hat R}_t)_i^n$, $(R_{w})_i^n$ in \eqref{sch1}, $({\hat R}_t)_i^\theta$ in \eqref{sch2}, $({\tilde R}_t)_i^{n+1}$ in \eqref{sch5} and $({\tilde R}_t)_i^\theta$ in \eqref{sch6}, we obtain the following linearized difference scheme: \begin{align}\nonumber & (1-\theta) \Delta_t^\alpha v_i^{n+\theta}+\theta \Delta_t^\alpha v_i^{n-1+\theta}=\delta_x^2 \Big(\frac{w_i^{n+1}+w_i^n}{2}\Big)-f(u_i^n)+p_i^n, \\\label{sc1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{sc2} & \Delta_t^\alpha v_i^\theta=(\varphi_{xx}+\theta\tau\psi_{xx})_i-f(\varphi_i+\theta\tau\psi_i) +p_i^\theta, \quad 1\leq i\leq M-1,\\\label{sc3} & u_0^n=u_M^n=0, \quad 1\leq n\leq N,\\\label{sc4} & u_i^0=\varphi_i,\quad v_i^0=\psi_i,\quad 0\leq i\leq M. \end{align} \section{Analysis of the proposed scheme}\label{Analysis} Before carrying out the convergence and stability analysis of the difference scheme \eqref{sc1}--\eqref{sc4}, we first list some preliminary lemmas. \begin{lemma}\label{GRW} (Gronwall's inequality \cite{Gronwall}) Let $\{G_n\}$ and $\{k_n\}$ be nonnegative sequences satisfying $$G_0\leq K,\qquad G_n \leq K+\sum_{l=0}^{n-1}k_l G_l,\quad n\geq 1, $$ where $K\geq 0$. Then $$G_n\leq K\exp\Big(\sum_{l=0}^{n-1}k_l \Big),\quad n\geq 1. $$ \end{lemma} \begin{lemma}\label{1norm} (\cite{Sun1}) For any $u\in \mathcal{V}_h$, it holds that $$ \|u\|^2\leq \frac{(b-a)^2}{6}|u|_1^2.$$ \end{lemma} \begin{lemma}\label{alik} (\cite{Alikhanov}) Suppose that the positive sequence $\{g_k^{(n+1)}|0\leq k\leq n,n\geq 1\}$ is strictly decreasing and satisfies $(2\sigma-1)g_0^{(n+1)}-\sigma g_1^{(n+1)}>0$ for a constant $\sigma\in(0,1)$. Then $$ 2[\sigma y^{n+1}+(1-\sigma)y^n]\sum_{k=0}^{n}g_{n-k}^{(n+1)}(y^{k+1}-y^k) \geq \sum_{k=0}^{n}g_{n-k}^{(n+1)}\big[(y^{k+1})^2-(y^k)^2 \big],\quad n\geq 1.
$$ \end{lemma} We need a special form of Lemma \ref{alik}, stated as follows: \begin{lemma}\label{stproof1} For any real sequence $F^n$, the following inequality holds: \begin{align*} &2[\theta v^{n+1-\theta}+(1-\theta)v^{n-\theta}][(1-\theta) \Delta_t^\alpha v^{n+\theta}+\theta \Delta_t^\alpha v^{n-1+\theta}-F^n]\\ \geq &\sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} {(v^{k+1-\theta})}^2- \sum_{k=0}^{n-1} \frac{c_{n-k-1}^{(n)}}{\mu} {(v^{k+1-\theta})}^2-\frac{ b_n}{(1-\theta)\mu}{(v^{1-\theta})}^2-d_n^{(n+1)}\Big(v^0+\frac1{d_n^{(n+1)}}F^n \Big)^2. \end{align*} \end{lemma} \begin{proof} Taking $g_k^{(n+1)}=d_k^{(n+1)}$, $y^n=v^{n-\theta}$ $(n\geq 1)$ and $y^0=v^0+\frac1{d_n^{(n+1)}}F^n$ in Lemma \ref{alik}, and using Lemma \ref{dk}(b),(c), we get \begin{align*} &2\big[\theta v^{n+1-\theta} +(1-\theta)v^{n-\theta} \big]\big[(1-\theta) \Delta_t^\alpha v^{n+\theta}+\theta \Delta_t^\alpha v^{n-1+\theta}- F^n \big]\\ \geq& \sum_{k=1}^n d_{n-k}^{(n+1)} \big[ {(v^{k+1-\theta})}^2-{(v^{k-\theta})}^2\big]+d_n^{(n+1)} \Big[{(v^{1-\theta})}^2-\Big(v^0+\frac1{d_n^{(n+1)}}F^n \Big)^2\Big] \\ =& \sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} {(v^{k+1-\theta})}^2- \sum_{k=1}^n \frac{c_{n-k}^{(n+1)}}{\mu} {(v^{k-\theta})}^2 -\frac{\theta b_n}{(1-\theta)\mu}{(v^{1-\theta})}^2-d_n^{(n+1)}\Big(v^0+\frac1{d_n^{(n+1)}}F^n \Big)^2\\ =& \sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} {(v^{k+1-\theta})}^2- \sum_{k=1}^n \frac{c_{n-k}^{(n)}}{\mu} {(v^{k-\theta})}^2 -\frac{ b_n}{(1-\theta)\mu}{(v^{1-\theta})}^2-d_n^{(n+1)}\Big(v^0+\frac1{d_n^{(n+1)}}F^n \Big)^2\\ =& \sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} {(v^{k+1-\theta})}^2- \sum_{k=0}^{n-1} \frac{c_{n-k-1}^{(n)}}{\mu} {(v^{k+1-\theta})}^2 -\frac{ b_n}{(1-\theta)\mu}{(v^{1-\theta})}^2-d_n^{(n+1)}\Big(v^0+\frac1{d_n^{(n+1)}}F^n \Big)^2.
\end{align*} \end{proof} \subsection{Convergence} Now we denote the error $e_i^n=U_i^n-u_i^n$, $0\leq i\leq M$, $0\leq n\leq N$, and $$(1-\theta) \Delta_t^\alpha {\hat v}_i^{n+\theta}+\theta \Delta_t^\alpha {\hat v}_i^{n-1+\theta}=\sum_{k=1}^n d_{n-k}^{(n+1)}({\hat v}_i^{k+1-\theta}-{\hat v}_i^{k-\theta})+ d_n^{(n+1)}{\hat v}_i^{1-\theta},$$ in which \begin{align}\nonumber {\hat v}_i^{k+1-\theta} =&(2-2\theta)\frac{e_i^{k+1}-e_i^{k}}{\tau}+(2\theta-1)\frac{e_i^{k+1}-e_i^{k-1}}{2\tau},\quad k\geq 1, \\\label{v1theta} {\hat v}_i^{1-\theta}=&(2-2\theta)\frac{e_i^{1}}{\tau}. \end{align} Taking \begin{align*} {\hat w}_i^k=&(\frac32-\theta)\big[\theta e_i^{k}+(1-\theta)e_i^{k-1}\big]+(\theta-\frac12)\big[\theta e_i^{k-1}+(1-\theta)e_i^{k-2}\big],\quad k\geq 2,\\ {\hat w}_i^1=&[(\frac32-\theta)\theta+(\theta-\frac12)(1-\theta)] e_i^{1}, \end{align*} and subtracting \eqref{sc1}--\eqref{sc4} from \eqref{sch1}--\eqref{sch4}, we obtain the following error system: \begin{align}\nonumber &(1-\theta) \Delta_t^\alpha {\hat v}_i^{n+\theta}+\theta \Delta_t^\alpha {\hat v}_i^{n-1+\theta}=\delta_x^2\Big(\frac{{\hat w}_i^{n+1}+{\hat w}_i^n}{2}\Big) -\big[f(U_i^n)-f(u_i^n)\big] +R_i^{n+1}, \\\label{err1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\qquad 1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{err2} &\frac{d_0^{(1)}}{1-\theta} {\hat v}_i^{1-\theta}=R_i^1,\quad 1\leq i\leq M-1,\\\label{err3} &e_0^n=e_M^n=0, \quad 1\leq n\leq N,\\\label{err4} & e^0_i=0,\quad 0\leq i\leq M, \end{align} where $R_i^1=({\hat R}_t)_i^\theta+({\tilde R}_t)_i^\theta$ and $R_i^{n+1}=(R_x)_i^n-({\hat R}_t)_i^n+(R_{w})_i^n+({\tilde R}_t)_i^{n+1}$, $ 1\leq n\leq N-1$. Then there exists a positive constant $C_4$ such that \begin{align}\label{Rn} \|R^1\|\leq C_4\tau^{3-\alpha}, \quad |R^1|_1\leq C_4\tau^{3-\alpha}\mbox{ and }\|R^{n+1}\|\leq C_4(\tau^2+h^2), ~ 1\leq n\leq N-1. \end{align} We now state the convergence of the proposed scheme \eqref{sc1}--\eqref{sc4} as follows.
\begin{theorem}\label{convergence} Let $u(x,t)$ be the solution of the problem \eqref{eq1}--\eqref{eq3}, assumed to be sufficiently smooth, and let $\{u_i^n,0\leq i\leq M,0\leq n\leq N\}$ be the solution of the scheme \eqref{sc1}--\eqref{sc4}. It holds that \begin{align}\label{converesult} \|e^n\|\leq {\bar C}(\tau^2+h^2),\quad 0\leq n\leq N, \end{align} where ${\bar C}=\frac{\exp(C_7)}{2(1-\theta)}\Big[ \frac12 \Big(\frac{(\theta-\frac12)C_4\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2+4\Gamma(2-\alpha)T^\alpha\Big(C_5+C_6 +\frac{2\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2}C_4^2\Big) \Big]^{\frac12}$, with \\ $C_5=\Big[\frac{(3\theta-2\theta^2-\frac12)C_3\Gamma(3-\alpha)}{2\theta^{2-\alpha}}\Big]^2$, $C_6=\frac{(1-\theta)\Gamma(3-\alpha)(3-2\theta)T^{2-\alpha}}{2\theta^{4-2\alpha}}$, and $C_7=\frac{\Gamma(2-\alpha)\Gamma(4-\alpha)T^{2\alpha} L^2 }{\left[(2-\alpha)(1-\theta)\right]^2}$. \end{theorem} \begin{proof} We have $\|e^0\|=0$ from \eqref{err4}. Now we use mathematical induction to prove \begin{align}\nonumber \|e^{n}\|^2\leq & \frac{1}{4(1-\theta)^2}\left[ \frac12 \Big(\frac{(\theta-\frac12)C_4\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2\right.\\\nonumber &\left.+4\Gamma(2-\alpha)T^\alpha\Big(C_5+C_6+\frac{2\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2}C_4^2\Big) \right](\tau^2+h^2)^2\\\label{induction} &+ \frac{2 \Gamma(2-\alpha)T^\alpha L^2}{(1-\theta)^2}\tau\sum_{k=0}^{n-1} \frac1{d_k^{(k+1)}}\|e^{k}\|^2, \quad 1\leq n\leq N. \end{align} It follows from \eqref{v1theta}, \eqref{err2} and \eqref{Rn} that \begin{align}\label{e1} \|e^1\|=\frac{\tau}{2d_0^{(1)}}\|R^1\|=\frac{\Gamma(3-\alpha)\tau^\alpha}{2\theta^{2-\alpha}}\|R^1\|\leq \frac{C_4\Gamma(3-\alpha)}{2\theta^{2-\alpha}}\tau^3. \end{align} Hence \eqref{induction} holds for $n=1$. Suppose that \eqref{induction} is valid for $1\leq n\leq m$ ($1\leq m\leq N-1$); we then prove that it is also valid for $n=m+1$.
Taking the inner product of \eqref{err1} with \begin{align}\label{ww} 2\big[\theta {\hat v}_i^{n+1-\theta} +(1-\theta){\hat v}_i^{n-\theta}\big]=2\Big(\frac{{\hat w}_i^{n+1}-{\hat w}_i^n}{\tau}\Big), \quad 1\leq n\leq m, \end{align} we have \begin{align}\nonumber 2\big\langle (1-\theta) \Delta_t^\alpha {\hat v}^{n+\theta}+\theta \Delta_t^\alpha {\hat v}^{n-1+\theta}-R_f^{n+1},\theta {\hat v}^{n+1-\theta} +(1-\theta){\hat v}^{n-\theta}\big\rangle\\\label{error1_1} =2\big\langle \delta_x^2 \big( \frac{{\hat w}^{n+1}+{\hat w}^n}{2} \big),\frac{{\hat w}^{n+1}-{\hat w}^n}{\tau} \big\rangle , \end{align} where $(R_f)_i^{n+1}=-\big[f(U_i^n)-f(u_i^n)\big]+R_i^{n+1}$. It is easy to verify that \begin{align}\label{wn1norm} -2\Big\langle \delta_x^2 \Big(\frac{{\hat w}^{n+1}+{\hat w}^n}{2}\Big),\frac{{\hat w}^{n+1}-{\hat w}^n}{\tau} \Big\rangle=\frac{|{\hat w}^{n+1}|_1^2-|{\hat w}^{n}|_1^2}{\tau}. \end{align} Noticing ${\hat v}^0=0$ and utilizing Lemma \ref{stproof1}, we get \begin{align}\nonumber &2\big\langle (1-\theta) \Delta_t^\alpha {\hat v}^{n+\theta}+\theta \Delta_t^\alpha {\hat v}^{n-1+\theta}-(R_f)^{n+1},\theta {\hat v}^{n+1-\theta} +(1-\theta){\hat v}^{n-\theta}\big\rangle\\\label{wn1norm2} \geq & \sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} \|{\hat v}^{k+1-\theta}\|^2- \sum_{k=0}^{n-1} \frac{c_{n-k-1}^{(n)}}{\mu} \|{\hat v}^{k+1-\theta}\|^2 -\frac{ b_n}{(1-\theta)\mu}\|{\hat v}^{1-\theta}\|^2-d_n^{(n+1)}\|\frac1{d_n^{(n+1)}}(R_f)^{n+1}\|^2. 
\end{align} Substituting \eqref{wn1norm} and \eqref{wn1norm2} into \eqref{error1_1}, we obtain \begin{align}\label{error2} E^{n+1}-E^n \leq \frac{ \tau b_n}{(1-\theta)\mu}\|{\hat v}^{1-\theta}\|^2+\tau \frac1{d_n^{(n+1)}}\|(R_f)^{n+1}\|^2,\quad 1\leq n\leq m, \end{align} where $$E^n=\tau \sum_{k=0}^{n-1} \frac{c_{n-k-1}^{(n)}}{\mu} \|{\hat v}^{k+1-\theta}\|^2+|{\hat w}^{n}|_1^2.$$ Summing up \eqref{error2} for $n$ from $1$ to $m$ yields \begin{align* E^{m+1} \leq E^{1}+\frac{\tau}{(1-\theta)\mu} \sum_{n=1}^m b_n \|{\hat v}^{1-\theta}\|^2+ \tau\sum_{n=1}^m \frac1{d_n^{(n+1)}} \|(R_f)^{n+1}\|^2, \end{align*} from which we can deduce the following inequality \begin{align}\label{sumck1} \frac{\tau}{\mu}\sum_{k=0}^{m}c_{m-k}^{(m+1)}\|{\hat v}^{k+1-\theta}\|^2 \leq |{\hat w}^{1}|_1^2+ \frac{\tau}{\mu}\Big[c_0^{(1)}+\frac1{1-\theta} \sum_{n=1}^m b_n\Big]\|{\hat v}^{1-\theta}\|^2+ \tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\|(R_f)^{n+1}\|^2. \end{align} It can be verified by using the Cauchy-Schwarz inequality and Lemma \ref{akbk}(h) that \begin{align}\label{sumck2} \|\tau\sum_{k=0}^{m} {\hat v}^{k+1-\theta}\|^2\leq\left(\tau\sum_{k=0}^{m}\frac{\mu}{c_{m-k}^{(m+1)}} \right)\frac{\tau}{\mu}\sum_{k=0}^{m}c_{m-k}^{(m+1)}\|{\hat v}^{k+1-\theta}\|^2\leq 2\Gamma(2-\alpha)t_{m+1}^\alpha\left(\frac{\tau}{\mu}\sum_{k=0}^{m}c_{m-k}^{(m+1)}{\|{\hat v}^{k+1-\theta}\|}^2\right). \end{align} Furthermore, the inequality $(y+z)^2\leq 2(y^2+z^2)$ gives \begin{align}\label{sumck3} \|(\frac{3}{2}-\theta)e^{m+1}+(\theta-\frac12)e^m\|^2 = \|(\theta-\frac12)e^1+\tau\sum_{k=0}^{m} {\hat v}^{k+1-\theta}\|^2 \leq 2\left(\|(\theta-\frac12)e^1\|^2+ \|\tau\sum_{k=0}^{m} {\hat v}^{k+1-\theta}\|^2\right).
\end{align} Consequently, it follows from \eqref{sumck1}--\eqref{sumck3} that \begin{align}\label{errorB} \|(\frac{3}{2}-\theta)e^{m+1}+(\theta-\frac12)e^m\|^2\leq B^m, \end{align} where $t_{m+1}^\alpha\leq T^\alpha$ has been used, and \begin{align*} B^m=&2\|(\theta-\frac12)e^1\|^2\\ &+4\Gamma(2-\alpha)T^\alpha\left\{|{\hat w}^1|_1^2+\frac{\tau}{\mu}\Big[c_0^{(1)}+\frac1{1-\theta} \sum_{n=1}^m b_n\Big]\|{\hat v}^{1-\theta}\|^2+ \tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\|(R_f)^{n+1}\|^2\right\}. \end{align*} We note that if $\|e^{m+1}\|\leq \|e^{m}\|$, \eqref{induction} follows directly. Therefore, we only consider the situation that $$\|e^{m+1}\|\geq \|e^{m}\|.$$ Then the triangle inequality of the $L_2$ norm yields \begin{align*} \|(\frac{3}{2}-\theta)e^{m+1}+(\theta-\frac12)e^m\| \ge\big(\frac{3}{2}-\theta\big)\|e^{m+1}\|-\big(\theta-\frac12\big)\|e^m\|\geq 2(1-\theta)\|e^{m+1}\|, \end{align*} which implies that \begin{align}\label{error4} \|\big(\frac{3}{2}-\theta\big)e^{m+1}+\big(\theta-\frac12\big)e^m\|^2\ge 4(1-\theta)^2\|e^{m+1}\|^2. \end{align} Combining \eqref{errorB} and \eqref{error4}, we get \begin{align}\label{error5} \|e^{m+1}\|^2\leq \frac{B^m}{4(1-\theta)^2}. \end{align} Then we estimate $B^m$ term by term. Recalling the definition of ${\hat w}_i^1$, a straightforward calculation shows \begin{align}\label{errorw1} |{\hat w}^1|_1^2&=(3\theta-2\theta^2-\frac12)^2|e^1|_1^2= \Big[\frac{(3\theta-2\theta^2-\frac12)\Gamma(3-\alpha)}{2\theta^{2-\alpha}}\Big]^2 \tau^{2\alpha}|R^1|_1^2 \leq C_5\tau^4.
\end{align} By using Lemma \ref{lemma_time}, Lemma \ref{akbk}(b), \eqref{err2} and \eqref{Rn}, we have \begin{align}\nonumber \frac{\tau}{\mu}\Big[c_0^{(1)}+\frac1{1-\theta} \sum_{n=1}^m b_n\Big]\|{\hat v}^{1-\theta}\|^2 \leq & \Big[\frac{{t_\theta}^{2-\alpha}}{\Gamma(3-\alpha)}+\frac{t_{n+\theta}^{2-\alpha}}{2(1-\theta)\Gamma(3-\alpha)} \Big]\|{\hat v}^{1-\theta}\|^2\\\nonumber \leq & \frac{2(1-\theta){t_\theta}^{2-\alpha}+t_{n+\theta}^{2-\alpha}}{2(1-\theta)\Gamma(3-\alpha)} \Big(\frac{1-\theta}{d_0^{(1)}}\Big)^2\|R^1\|^2\\\nonumber \leq & \frac{(1-\theta)\Gamma(3-\alpha)(3-2\theta)T^{2-\alpha}}{2\theta^{4-2\alpha}}\tau^{2\alpha-2}\|R^1\|^2\\\label{vhattheta} \leq & C_6 \tau^4. \end{align} For the nonlinear term, assuming that the global Lipschitz condition \eqref{Lip} holds, we have \begin{align}\label{nonlinearterm} \|f(U^n)-f(u^n)\|\leq L\|U^n-u^n\|=L\|e^n\|. \end{align} Then utilizing Lemma \ref{dk}(f), \eqref{Rn} and \eqref{nonlinearterm}, we can conclude that \begin{align}\nonumber \tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\|(R_f)^{n+1}\|^2\leq& 2\tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\left( \|f(U^n)-f(u^n)\|^2+\|R^{n+1}\|^2\right) \\\label{Rm} \leq &2L^2\tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\|e^n\|^2+ \frac{2\Gamma(4-\alpha)T^{\alpha}C_4^2}{(2-\alpha)^2}(\tau^2+h^2)^2. \end{align} Thus, \eqref{Rn}, \eqref{e1}, \eqref{error5}--\eqref{vhattheta} and \eqref{Rm} yield \begin{align*} \|e^{m+1}\|^2\leq & \frac{1}{4(1-\theta)^2}\left[ \frac12 \Big(\frac{(\theta-\frac12)C_4\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2\right.\\ &\left.+4\Gamma(2-\alpha)T^\alpha\Big(C_5+C_6+\frac{2\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2}C_4^2\Big) \right](\tau^2+h^2)^2\\ &+ \frac{2 \Gamma(2-\alpha)T^\alpha L^2}{(1-\theta)^2}\tau\sum_{n=1}^m \frac1{d_n^{(n+1)}}\|e^n\|^2, \end{align*} which shows that \eqref{induction} is proved.
Consequently, we can apply Lemma \ref{GRW} and Lemma \ref{dk}(f) on \eqref{induction} to conclude $$\|e^{n}\|^2\leq \big[{\bar C}(\tau^2+h^2)\big]^2, \quad 1\leq n\leq N.$$ \end{proof} \begin{remark}\label{condition} Consider functions which are not globally Lipschitz continuous, for example $f(u)=[u(x,t)]^r$, where $r$ is a positive integer with $r\geq 3$. Inspired by the approach for dealing with the nonlinear term in Theorem 4.2 of \cite{zhao_siamJSC}, one can also obtain convergence of the scheme by assuming $\tau=\nu h^{\frac12+\epsilon}$, where $\nu$, $\epsilon$ are positive numbers. In fact, based on the smoothness assumption of the exact solution, there exists a positive constant $C_0$ such that \begin{align* \|U^n\|_\infty\leq C_0, \quad 0\leq n\leq N. \end{align*} Note that \eqref{converesult} is valid for $1\leq n\leq m$ by applying Lemma \ref{GRW} and Lemma \ref{dk}(f) on the inductive assumption of Theorem \ref{convergence}. If ${\bar C}(\nu^2h^{2\epsilon}+h)\leq {\tilde C}_0$, in which ${\tilde C}_0$ is a positive constant independent of $\tau$ and $h$, then it follows that $$\|e^n\|_\infty\leq h^{-1}\|e^n\|\leq {\bar C}(\nu^2h^{2\epsilon}+h)\leq {\tilde C}_0, \quad 1\leq n\leq m,$$ and we get \begin{equation}\label{infinity-bound} \|u^n\|_\infty=\|u^n-U^n+U^n\|_\infty\leq \|e^n\|_\infty+\|U^n\|_\infty\leq {\tilde C}_0+C_0,\quad 1\leq n\leq m, \end{equation} which implies that $u^n$ ($1\leq n\leq m$) is uniformly bounded. The bound \eqref{nonlinearterm} can still be obtained so long as $f$ is Lipschitz continuous on $[-{\tilde C}_0-C_0,{\tilde C}_0+C_0]$. One can then follow the other parts of the proof to conclude convergence of the scheme. \end{remark} \subsection{Stability} Now we show the stability of the proposed scheme \eqref{sc1}--\eqref{sc4}.
Suppose that $\{{\tilde u}_i^n,0\leq i\leq M,0\leq n\leq N\}$ is the solution of the following difference scheme: \begin{align}\nonumber & (1-\theta) \Delta_t^\alpha {\tilde v}_i^{n+\theta}+\theta \Delta_t^\alpha {\tilde v}_i^{n-1+\theta}=\delta_x^2 \Big(\frac{{\tilde w}_i^{n+1}+{\tilde w}_i^n}{2}\Big)-f({\tilde u}_i^n) +p_i^n, \\\label{stb1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{stb2} & \Delta_t^\alpha {\tilde v}_i^\theta=({\tilde \varphi}_{xx}+\theta\tau{\tilde \psi}_{xx})_i-f({\tilde \varphi}_i+\theta\tau{\tilde \psi}_i) +p_i^\theta, \quad 1\leq i\leq M-1,\\\label{stb3} & {\tilde u}_i^0={\tilde \varphi}_i, ~ ({\tilde u}_t)_i^0={\tilde \psi}_i, \quad 0\leq i\leq M,\\\label{stb4} & {\tilde u}_0^n={\tilde u}_M^n=0, \quad 1\leq n\leq N, \end{align} where $$(1-\theta) \Delta_t^\alpha {\tilde v}_i^{n+\theta}+\theta \Delta_t^\alpha {\tilde v}_i^{n-1+\theta}=\sum_{k=1}^n d_{n-k}^{(n+1)}({\tilde v}_i^{k+1-\theta}-{\tilde v}_i^{k-\theta})+ d_n^{(n+1)}({\tilde v}_i^{1-\theta}-{\tilde \psi}_i),$$ with \begin{align*} {\tilde v}_i^{k+1-\theta}=&(2-2\theta)\delta_t {\tilde u}_i^{k+\frac12}+(2\theta-1)\delta_{\hat t} {\tilde u}_i^{ k},\quad k\geq 1,\\ {\tilde v}_i^{1-\theta}=&(2-2\theta)\delta_t {\tilde u}_i^{\frac12}+(2\theta-1){\tilde \psi}_i; \end{align*} and \begin{align*} {\tilde w}_i^{k}=&(\frac32-\theta)\big[\theta {\tilde u}_i^{k}+(1-\theta){\tilde u}_i^{k-1}\big]+(\theta-\frac12)\big[\theta {\tilde u}_i^{k-1}+(1-\theta){\tilde u}_i^{k-2}\big], \quad k\geq 2,\\ {\tilde w}_i^{1}=&(\frac32-\theta)\big[\theta {\tilde u}_i^{1}+(1-\theta){\tilde u}_i^{0}\big]+(\theta-\frac12)\big[\theta {\tilde u}_i^{0}+(1-\theta)({\tilde u}_i^1-2\tau{\tilde \psi}_i)\big]. 
\end{align*} Denote the perturbation term $$\eta_i^n={\tilde u}_i^n-u_i^n, \quad 1\leq n\leq N,~ 1\leq i\leq M-1,$$ and take $\xi_i=\big[({\tilde \varphi}_{xx}-\varphi_{xx})_i+\theta\tau({\tilde \psi}_{xx}-\psi_{xx})_i\big]-\big[f({\tilde \varphi}_i+\theta\tau{\tilde \psi}_i)-f(\varphi_i+\theta\tau\psi_i)\big]$. Then we can get the following perturbation system: \begin{align}\nonumber &(1-\theta) \Delta_t^\alpha {\hat \eta}_i^{n+\theta}+\theta \Delta_t^\alpha {\hat \eta}_i^{n-1+\theta}=\delta_x^2\Big(\frac{{\hat \zeta}_i^{n+1}+{\hat \zeta}_i^n}{2}\Big) -\big[f({\tilde u}_i^n)-f(u_i^n)\big], \\\label{per1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\quad 1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{per2} &\frac{d_0^{(1)}}{1-\theta} {\hat \eta}_i^{1-\theta}=\xi_i,\quad 1\leq i\leq M-1,\\\label{per3} &\eta_0^n=\eta_M^n=0, \quad 1\leq n\leq N,\\\label{per4} & \eta^0_i={\tilde \varphi}_i-\varphi_i,\quad 0\leq i\leq M, \end{align} where $$(1-\theta) \Delta_t^\alpha {\hat \eta}_i^{n+\theta}+\theta \Delta_t^\alpha {\hat \eta}_i^{n-1+\theta}=\sum_{k=1}^n d_{n-k}^{(n+1)}({\hat \eta}_i^{k+1-\theta}-{\hat \eta}_i^{k-\theta})+ d_n^{(n+1)}{\hat \eta}_i^{1-\theta},$$ with \begin{align}\nonumber {\hat \eta}_i^{k+1-\theta} =&(2-2\theta)\frac{\eta_i^{k+1}-\eta_i^{k}}{\tau}+(2\theta-1)\frac{\eta_i^{k+1}-\eta_i^{k-1}}{2\tau},\quad k\geq 1,\\\label{eta1theta} {\hat \eta}_i^{1-\theta}=&(2-2\theta)\frac{\eta_i^{1}-\eta_i^0}{\tau}+(2\theta-1)({\tilde \psi}_i-\psi_i), \end{align} and \begin{align}\nonumber {\hat \zeta}_i^k=&(\frac32-\theta)\big[\theta \eta_i^{k}+(1-\theta)\eta_i^{k-1}\big]+(\theta-\frac12)\big[\theta \eta_i^{k-1}+(1-\theta)\eta_i^{k-2}\big],\quad k\geq 2,\\\label{stable1_1} {\hat \zeta}_i^1=&(\frac32-\theta)\big[\theta \eta_i^1+(1-\theta)\eta_i^0\big]+(\theta-\frac12)\big[\theta \eta_i^0+(1-\theta)\big(\eta_i^1-2\tau({\tilde \psi}_i-\psi_i)\big)\big]. \end{align} We have the following theorem to describe the stability of the proposed scheme.
\begin{theorem}\label{perstable} Let $\{\eta_i^n,0\leq i\leq M,0\leq n\leq N\}$ be the solution of the perturbation system \eqref{per1}--\eqref{per4}. It holds that \begin{align}\label{perstable1} \|\eta^n\|^2\leq {\tilde C}^2\Big(C_8(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2)+C_9|\eta^0|_1^2+C_{10}|{\tilde \psi}-\psi|_1^2 \Big), \quad 0\leq n\leq N, \end{align} where ${\tilde C}=\exp(C_7)$, $C_8=\frac1{4(1-\theta)^2}\Big\{\Big(\frac{(\theta-\frac12)(b-a)\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2+\\ 12\Gamma(2-\alpha)T^\alpha\Big[\Big(\frac{(3\theta-2\theta^2-\frac12)\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2 +\frac{2(b-a)^2C_5}{9}\Big]\Big\}$, \\ $C_9=\frac1{4(1-\theta)^2}\Big\{{[(2\theta-1)^2(L+1)^2+(\frac32-\theta)^2](b-a)^2}+ \\ 12\Gamma(2-\alpha)T^\alpha\Big[{(3\theta-2\theta^2-\frac12)^2(L^2+1) +(\frac32-3\theta+2\theta^2)^2}+\frac{2(b-a)^2 C_5L^2}{9}\Big]\Big\}$ and $C_{10}=\frac{1}{4(1-\theta)^2}\Big\{(2\theta-1)^2\Big[6\Big(\frac{(2\theta-1)(b-a)}{2\sqrt{6}(1-\theta)}+L \Big)^2+(b-a)^2 \Big] +12\Gamma(2-\alpha)T^\alpha\Big[\big(\frac{(2\theta-1)(3\theta-2\theta^2-\frac12)}{2-2\theta}\big)^2(L^2+1)+(3\theta-2\theta^2-1)^2+\frac{2(b-a)^2 C_5L^2}{9}+\frac{(b-a)^2 T^{2-\alpha}}{9\Gamma(3-\alpha)}\Big]\Big\}$. \end{theorem} \begin{proof} Obviously, \eqref{perstable1} is valid for $n=0$. We use mathematical induction once again to prove \begin{align}\nonumber \|\eta^{n}\|^2\leq &C_8(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2)+C_9|\eta^0|_1^2+C_{10}|{\tilde \psi}-\psi|_1^2\\\label{induc_stable} &+\frac{2\Gamma(2-\alpha)T^{\alpha}L^2}{(1-\theta)^2}\tau\sum_{k=0}^{n-1}\frac1{d_k^{(k+1)}} \|\eta^k\|^2,\quad 1\leq n\leq N. 
\end{align} It follows from \eqref{per2}, \eqref{eta1theta}, and Lemma \ref{1norm} that \begin{align}\nonumber \|\eta^1\|&=\|\frac{\tau}{2d_0^{(1)}}\xi-\frac{(2\theta-1)\tau}{2-2\theta}({\tilde \psi-\psi})+\eta^0\|\leq \frac{\tau}{2d_0^{(1)}}\|\xi\|+\frac{(2\theta-1)\tau}{2-2\theta}\|{\tilde \psi}-\psi\|+\|\eta^0\|\\\label{stable1_21} &\leq \frac{(b-a)\Gamma(3-\alpha)\tau^\alpha}{2\sqrt{6}\theta^{2-\alpha}}|\xi|_1+\frac{(2\theta-1)(b-a)\tau}{2\sqrt{6}(1-\theta)}|{\tilde \psi}-\psi|_1+\frac{b-a}{\sqrt{6}}|\eta^0|_1 . \end{align} Note that $$|f({\tilde \varphi}_i+\theta\tau{\tilde \psi}_i)-f(\varphi_i+\theta\tau\psi_i)|\leq L|{\tilde \varphi}_i-\varphi_i+\theta\tau({\tilde \psi}_i-\psi_i)|\leq L(|\eta_i^0|+\theta\tau|{\tilde \psi}_i-\psi_i|),$$ and then (for sufficiently small $\tau$) \begin{align}\label{xi} |\xi|_1\leq |{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1+L(|\eta^0|_1+\tau|{\tilde \psi}-\psi|_1). \end{align} Combining \eqref{stable1_21} and \eqref{xi}, we get \begin{align}\nonumber \|\eta^1\|\leq& \frac{(b-a)\Gamma(3-\alpha)\tau^\alpha}{2\sqrt{6}\theta^{2-\alpha}}(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1)+\Big(\frac{(2\theta-1)(b-a)}{2\sqrt{6}(1-\theta)}+L\Big)\tau|{\tilde \psi}-\psi|_1\\\label{stable1_2} &+\frac{b-a}{\sqrt{6}}(L+1)|\eta^0|_1 . \end{align} Since $\sqrt{C_9}>\frac{\sqrt{6}(b-a)}{3}(L+1)$, \eqref{induc_stable} is valid for $n=1$. Suppose that \eqref{induc_stable} holds for $1\leq n\leq m$ ($1\leq m\leq N-1$). Now we show that \eqref{induc_stable} also holds for $n=m+1$.
Taking the inner product of \eqref{per1} by $$ 2\big[\theta {\hat \eta}_i^{n+1-\theta} +(1-\theta){\hat \eta}_i^{n-\theta}\big]=2\Big(\frac{{\hat \zeta}_i^{n+1}-{\hat \zeta}_i^n}{\tau}\Big), \quad 1\leq n\leq m,$$ we get $$2\big\langle (1-\theta) \Delta_t^\alpha {\hat \eta}^{n+\theta}+\theta \Delta_t^\alpha {\hat \eta}^{n-1+\theta}-{\tilde R}_f^{n+1},\theta {\hat \eta}^{n+1-\theta} +(1-\theta){\hat \eta}^{n-\theta}\big\rangle=2\big\langle \delta_x^2 \big( \frac{{\hat \zeta}^{n+1}+{\hat \zeta}^n}{2} \big),\frac{{\hat \zeta}^{n+1}-{\hat \zeta}^n}{\tau} \big\rangle ,$$ where $({\tilde R}_f)_i^{n+1}=-\big[f({\tilde u}_i^n)-f(u_i^n)\big]$. Then, following a methodology similar to that in the proof of Theorem \ref{convergence}, we can obtain \begin{align}\label{etan} \|\eta^{m+1}\|^2\leq \frac{{\tilde B}^m}{4(1-\theta)^2}, \end{align} where \begin{align}\nonumber {\tilde B}^m =&6\big(\|(\theta-\frac12)\eta^1\|^2+\|(\frac32-\theta)\eta^0\|^2+\|\tau(2\theta-1)({\tilde\psi}-\psi)\|^2\big)+4\Gamma(2-\alpha)T^\alpha\Big\{|{\hat \zeta}^1|_1^2\\\label{Btilde} &+\frac{\tau}{\mu}\Big[c_0^{(1)}+\frac1{(1-\theta)} \sum_{n=1}^m b_n\Big]{\|{\hat \eta}^{1-\theta}\|}^2+ \tau\sum_{n=1}^m d_n^{(n+1)}\big\|({\tilde\psi}-\psi)+\frac1{d_n^{(n+1)}}({\tilde R}_f)^{n+1}\big\|^2\Big\}.
\end{align} Applying \eqref{stable1_1}, \eqref{eta1theta}, \eqref{xi} and the Cauchy-Schwarz inequality, we have \begin{align}\nonumber |{\hat \zeta}^1|_1^2\leq & \Big[(3\theta-2\theta^2-\frac12)|\eta^1|_1+(\frac32-3\theta+2\theta^2)|\eta^0|_1+(3\theta-2\theta^2-1)\tau|{\tilde \psi}-\psi|_1 \Big]^2\\\nonumber \leq & 3\Big\{\Big[\frac{(3\theta-2\theta^2-\frac12)\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big]^2{\tau^{2\alpha}}(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2) \\\nonumber &+\Big[(3\theta-2\theta^2-\frac12)^2(L^2+1)+(\frac32-3\theta+2\theta^2)^2\Big]|\eta^0|_1^2\\\label{zeta1} &+\Big[\Big(\frac{(2\theta-1)(3\theta-2\theta^2-\frac12)}{2-2\theta}\Big)^2(L^2+1)+(3\theta-2\theta^2-1)^2\Big]\tau^2|{\tilde \psi}-\psi|_1^2\Big\}. \end{align} Note that \begin{align}\label{etahat1} \frac{\tau}{\mu}\Big[c_0^{(1)}+\frac1{(1-\theta)} \sum_{n=1}^m b_n\Big]{\|{\hat \eta}^{1-\theta}\|}^2\leq C_5\|\xi\|^2\leq \frac{C_5(b-a)^2}{6}|\xi|_1^2, \end{align} and the global Lipschitz continuity \eqref{Lip} yields $$\|f({\tilde u}^n)-f(u^n)\|\leq L \|{\tilde u}^n-u^n\|=L\|\eta^n\|,$$ and then \begin{align}\label{etahat} \tau\sum_{n=1}^m d_n^{(n+1)}\big\|({\tilde\psi}-\psi)+\frac1{d_n^{(n+1)}}({\tilde R}_f)^{n+1}\big\|^2\leq & \frac{(b-a)^2 T^{2-\alpha}}{3\Gamma(3-\alpha)}|{\tilde\psi}-\psi|_1^2+2L^2\tau\sum_{n=1}^m\frac1{d_n^{(n+1)}} \|\eta^n\|^2.
\end{align} So, combining \eqref{xi}--\eqref{etahat} and Lemma \ref{1norm}, we can derive that {\small{\begin{align*} \|\eta^{m+1}\|^2&\leq C_8(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2)+C_9|\eta^0|_1^2+C_{10}|{\tilde \psi}-\psi|_1^2+\frac{2\Gamma(2-\alpha)T^{\alpha}L^2}{(1-\theta)^2}\tau\sum_{n=1}^m\frac1{d_n^{(n+1)}} \|\eta^n\|^2\\ &\leq C_8(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2)+C_9|\eta^0|_1^2+C_{10}|{\tilde \psi}-\psi|_1^2+\frac{2\Gamma(2-\alpha)T^{\alpha}L^2}{(1-\theta)^2}\tau\sum_{n=0}^m\frac1{d_n^{(n+1)}} \|\eta^n\|^2, \end{align*}}} which shows that \eqref{induc_stable} is proved. Therefore, applying Lemma \ref{GRW} and Lemma \ref{dk}(f) on \eqref{induc_stable}, we finally get {\small{\begin{align*} \|\eta^{n}\|^2\leq \exp(2C_7)\Big(C_8(|{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1^2+|{\tilde \psi}_{xx}-{\psi}_{xx}|_1^2)+C_9|\eta^0|_1^2+C_{10}|{\tilde \psi}-\psi|_1^2 \Big),~ 1\leq n\leq N. \end{align*}}} \end{proof} \begin{remark} For nonlinear terms which are locally Lipschitz continuous, as discussed in Remark \ref{condition}, we assume that $$|\eta^0|_1\leq C_{11} h^{1+\delta},\quad |{\tilde \psi}-\psi|_1\leq C_{11}h^{1+\delta},\quad |{\tilde \varphi}_{xx}- {\varphi}_{xx}|_1\leq C_{11} h^{1+\delta},\quad|{\tilde \psi}_{xx}-{\psi}_{xx}|_1\leq C_{11} h^{1+\delta},$$ where $C_{11}$ and $\delta$ are positive constants. Note that \eqref{perstable1} is valid for $1\leq n\leq m$ by applying Lemma \ref{GRW} and Lemma \ref{dk}(f) on \eqref{induc_stable}.
If ${\tilde C}C_{11}\sqrt{C_8+C_9+C_{10}}h^\delta\leq {\tilde C}_0$, it follows that $$\|\eta^n\|_\infty\leq h^{-1}\|\eta^n\|\leq {\tilde C}C_{11}\sqrt{C_8+C_9+C_{10}}h^\delta\leq {\tilde C}_0,\quad 1\leq n\leq m.$$ Noticing \eqref{infinity-bound}, we then have $$\|{\tilde u}^n\|_\infty=\|{\tilde u}^n-u^n+u^n\|_\infty\leq \|\eta^n\|_\infty+\|u^n\|_\infty\leq 2{\tilde C}_0+C_0,\quad 1\leq n\leq m,$$ which implies that ${\tilde u}^n$ is uniformly bounded. If $f$ is Lipschitz continuous on an interval containing $[-2{\tilde C}_0-C_0,2{\tilde C}_0+C_0]$, one can still obtain the desired conclusion. \end{remark} \section{Compact scheme}\label{compactscheme} In this section, we propose a spatially fourth-order scheme for the problem \eqref{eq1}--\eqref{eq3}. For any $u\in \mathcal{V}_h$, we define the spatial high-order operator ${\cal A}$ as follows: $${\cal A}u_i=\left\{\begin{array}{ll} \frac1{12}(u_{i-1}+10u_i+u_{i+1}), &1\leq i\leq M-1,\\ u_i, &i=0,~M,\end{array}\right.$$ and we define the corresponding norm: $$\|u\|_A=\sqrt{\langle {\cal A}u,u\rangle}.$$ Then it is easy to check that \begin{align}\label{uAnorm} \frac23\|u\|\leq \|u\|_A\leq \|u\|. \end{align} By Taylor expansion, if $\frac{\partial^6 u}{\partial x^6}$ is continuous on $[x_{i-1},x_{i+1}]$, we have \begin{align}\label{compactA} {\cal A}\frac{\partial^2 u}{\partial x^2}(x_i)=\delta_x^2 u(x_i)+O(h^4).
\end{align} Applying the compact operator ${\cal A}$ to both sides of \eqref{eeq1} and following the derivation of the difference scheme \eqref{sc1}--\eqref{sc4} in section \ref{derivation}, we obtain the compact linearized difference scheme for \eqref{eq1}--\eqref{eq3}: \begin{align}\nonumber & (1-\theta) \Delta_t^\alpha {\cal A} v_i^{n+\theta}+\theta \Delta_t^\alpha {\cal A} v_i^{n-1+\theta}=\delta_x^2 \big(\frac{w_i^{n+1}+w_i^n}{2}\big)-{\cal A}f(u_i^n) +{\cal A}p_i^n, \\\label{Csc1} &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1\leq n\leq N-1,~ 1\leq i\leq M-1,\\\label{Csc2} & \Delta_t^\alpha v_i^\theta=(\varphi_{xx}+\theta\tau\psi_{xx})_i-f(\varphi_i+\theta\tau\psi_i) +p_i^\theta, \quad 1\leq i\leq M-1,\\\label{Csc3} & u_0^n=u_M^n=0, \quad 1\leq n\leq N,\\\label{Csc4} & u_i^0=\varphi_i,\quad v_i^0=\psi_i,\quad 0\leq i\leq M. \end{align} Using an approach similar to the proofs of Theorem 3.7 in \cite{Vong_numer} and Lemma 4.2 in \cite{Liao2}, we have the following lemma. \begin{lemma}\label{compactvtheta} For any real sequence $F^n$, the following estimate holds: \begin{align*} &2\big\langle\theta v^{n+1-\theta} +(1-\theta)v^{n-\theta} ,(1-\theta) \Delta_t^\alpha {\cal A}v^{n+\theta}+\theta \Delta_t^\alpha {\cal A} v^{n-1+\theta}- {\cal A}F^n \big\rangle\\ \geq& \sum_{k=1}^n d_{n-k}^{(n+1)} \big( \|v^{k+1-\theta}\|_A^2-\|v^{k-\theta}\|_A^2\big)+d_n^{(n+1)}\big( \|v^{1-\theta}\|_A^2-\|v^0+\frac1{d_n^{(n+1)}}F^n\|_A^2\big)\\ =& \sum_{k=0}^n \frac{c_{n-k}^{(n+1)}}{\mu} \|v^{k+1-\theta}\|_A^2- \sum_{k=0}^{n-1} \frac{c_{n-k-1}^{(n)}}{\mu} \|v^{k+1-\theta}\|_A^2 -\frac{ b_n}{(1-\theta)\mu}\|v^{1-\theta}\|_A^2-d_n^{(n+1)}\|v^0+\frac1{d_n^{(n+1)}}F^n \|_A^2.
\end{align*} \end{lemma} Then, with the help of \eqref{uAnorm}, \eqref{compactA} and Lemma \ref{compactvtheta}, and under the same assumptions as in Theorem \ref{convergence} and Theorem \ref{perstable}, the theoretical results can be obtained by arguments similar to those in the proofs of Theorem \ref{convergence} and Theorem \ref{perstable}. We present the convergence result in the following. \begin{theorem}\label{compconverge} Let $u(x,t)$ be the sufficiently smooth solution of the problem \eqref{eq1}--\eqref{eq3}, and let $\{u_i^n,0\leq i\leq M,0\leq n\leq N\}$ be the solution of the scheme \eqref{Csc1}--\eqref{Csc4}. If \eqref{Lip} holds globally, we then have \begin{align}\label{compconveresult} \|e^n\|\leq {\hat C}(\tau^2+h^4),\quad 1\leq n\leq N, \end{align} where ${\hat C}=\frac{3\exp(\frac94C_7)}{4(1-\theta)}\Big[ \frac12 \Big(\frac{(\theta-\frac12)C_4\Gamma(3-\alpha)}{\theta^{2-\alpha}}\Big)^2+4\Gamma(2-\alpha)T^\alpha\Big(C_5+C_6 +\frac{2\Gamma(4-\alpha)T^{\alpha}}{(2-\alpha)^2}C_4^2\Big) \Big]^{\frac12}$. \end{theorem} \begin{remark} As in Remark \ref{condition}, we can show that \eqref{compconveresult} holds for a locally Lipschitz continuous nonlinear term if we assume ${\hat C}(\nu^2h^{2\epsilon}+h^3)\leq {\tilde C}_0$. \end{remark} \section{Numerical experiments}\label{Numericalexperiments} In this section, we carry out numerical experiments for the proposed finite difference schemes \eqref{sc1}--\eqref{sc4} and \eqref{Csc1}--\eqref{Csc4} to illustrate our theoretical statements. All our tests were done in MATLAB R2014a on a desktop computer (Dell OptiPlex 7020, configuration: Intel(R) Core(TM) i7-4790 CPU 3.60GHz and 16.00G RAM). The $L_2$ norm errors between the exact and the numerical solutions $$E_{2}(\tau,h)=\max_{0\leq n\leq N}\|e^n\|$$ are shown in the following tables.
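For reference, the error functional above can be evaluated as in the following minimal sketch (the array layout, with the time index first, and the uniform-grid weighting of the discrete norm are our assumptions, not part of the paper's code):

```python
import numpy as np

def discrete_l2(v, h):
    """Discrete L2 norm ||v|| = sqrt(h * sum_i v_i^2) on a uniform grid."""
    return np.sqrt(h * np.sum(v ** 2))

def E2(U_num, U_exact, h):
    """E2(tau, h) = max_{0 <= n <= N} ||e^n||, with e^n the error at level n.

    U_num and U_exact are arrays of shape (N + 1, M + 1): one row per
    time level, one column per spatial grid point."""
    return max(discrete_l2(U_num[n] - U_exact[n], h)
               for n in range(U_num.shape[0]))
```

Any solver producing the numerical solution level by level can feed its output into `E2` together with the exact solution sampled on the same grid.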
In the tables, $$\mbox{Rate1}=\log_2\bigg(\displaystyle\frac{E_{2}(2\tau,h)}{E_{2}(\tau,h)}\bigg)$$ is used to denote the temporal convergence order for sufficiently small $h$, and $$\mbox{Rate2}=\log_2\bigg(\displaystyle\frac{E_{2}(\tau,2h)}{E_{2}(\tau,h)}\bigg)$$ is the spatial convergence order for sufficiently small $\tau$. \subsection{Accuracy verification}\label{Accuracy} We consider the problem \eqref{eq1}--\eqref{eq3} for $x\in[0,1]$, $T=1$, and the forcing term $$p(x,t)=\left[\frac{24}{\Gamma(5-\alpha)}t^{4-\alpha}+\pi^2(t^4+1) \right]\sin(\pi x)+f\big(u(x,t)\big)$$ is chosen such that the exact solution is $u(x,t)=\sin(\pi x)(t^4+1)$, where \begin{align*} &\mbox{\bf Case 1}\quad f\big(u(x,t)\big)=2\left(u(x,t)\right)^3 ,\\ &\mbox{\bf Case 2}\quad f\big(u(x,t)\big)=\sin\big(u(x,t)\big), \quad\mbox{(sine-Gordon)} ,\\ &\mbox{\bf Case 3}\quad f\big(u(x,t)\big)=\left[\big(u(x,t)\big)^2+5\right]^{\frac12}. \end{align*} The numerical results for the above three cases obtained by the difference scheme \eqref{sc1}--\eqref{sc4} are recorded in Table \ref{table1} and Table \ref{table2}, while those obtained by the compact scheme \eqref{Csc1}--\eqref{Csc4} are presented in Table \ref{table3} and Table \ref{table4}. \begin{table}[hbt!] \begin{center} \caption{Numerical accuracy in temporal direction of scheme \eqref{sc1}--\eqref{sc4} with $h=\frac1{1000}$.
}\label{table1} \renewcommand{\arraystretch}{1} \def0.9\textwidth{1\textwidth} {\rule{0.9\textwidth}{0.9pt}} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}cccccccc} &$\tau$ &\multicolumn{2}{c}{$\alpha=1.2$}&\multicolumn{2}{c}{$\alpha=1.5$} &\multicolumn{2}{c}{$\alpha=1.8$}\\ \cline{3-4} \cline{5-6} \cline{7-8}\\ &&$E_2(\tau,h)$& Rate1 &$E_2(\tau,h)$& Rate1 &$E_2(\tau,h)$& Rate1 \\\hline &$1/20$ & 2.5994e-03 & $\ast$ & 3.0095e-03 & $\ast$ & 3.0680e-03 & $\ast$ \\ {\bf Case 1}&$1/40$ & 6.5070e-04 & 1.9981 & 7.5142e-04 & 2.0018 & 7.6428e-04 & 2.0051 \\ &$1/80$ & 1.6242e-04 & 2.0023 & 1.8761e-04 & 2.0019 & 1.9053e-04 & 2.0041 \\ &$1/160$ & 4.0295e-05 & 2.0110 & 4.6593e-05 & 2.0095 & 4.7242e-05 & 2.0119 \\ \hline &$1/20$ & 5.4877e-03 & $\ast$ & 5.5724e-03 & $\ast$ & 4.7836e-03 & $\ast$ \\ {\bf Case 2}&$1/40$ & 1.3773e-03 & 1.9944 & 1.3994e-03 & 1.9935 & 1.1950e-03 & 2.0011 \\ &$1/80$ & 3.4422e-04 & 2.0004 & 3.4977e-04 & 2.0003 & 2.9756e-04 & 2.0058 \\ &$1/160$ & 8.5409e-05 & 2.0109 & 8.6803e-05 & 2.0106 & 7.3449e-05 & 2.0184 \\ \hline &$1/20$ & 5.0042e-03 & $\ast$ & 5.2116e-03 & $\ast$ & 4.5829e-03 & $\ast$ \\ {\bf Case 3}&$1/40$ & 1.2539e-03 & 1.9967 & 1.3072e-03 & 1.9953 & 1.1442e-03 & 2.0019 \\ &$1/80$ & 3.1327e-04 & 2.0010 & 3.2666e-04 & 2.0006 & 2.8493e-04 & 2.0057 \\ &$1/160$ & 7.7724e-05 & 2.0109 & 8.1076e-05 & 2.0104 & 7.0359e-05 & 2.0178 \end{tabular*} {\rule{0.9\textwidth}{0.9pt}} \end{center} \end{table} \begin{table}[hbt!] \begin{center} \caption{Numerical accuracy in spatial direction of scheme \eqref{sc1}--\eqref{sc4} with $\tau=\frac1{1000}$. 
}\label{table2} \renewcommand{\arraystretch}{1} \def0.9\textwidth{1\textwidth} {\rule{0.9\textwidth}{0.9pt}} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}cccccccc} &$h$ &\multicolumn{2}{c}{$\alpha=1.2$}&\multicolumn{2}{c}{$\alpha=1.5$} &\multicolumn{2}{c}{$\alpha=1.8$}\\ \cline{3-4} \cline{5-6} \cline{7-8}\\ &&$E_2(\tau,h)$& Rate2 &$E_2(\tau,h)$& Rate2 &$E_2(\tau,h)$& Rate2 \\\hline &$1/20$ & 1.0942e-03 & $\ast$ & 1.3052e-03 & $\ast$ & 1.6671e-03 & $\ast$ \\ {\bf Case 1}&$1/40$ & 2.7284e-04 & 2.0038 & 3.2598e-04 & 2.0014 & 4.1632e-04 & 2.0016 \\ &$1/80$ & 6.7455e-05 & 2.0160 & 8.1203e-05 & 2.0052 & 1.0363e-04 & 2.0062 \\ &$1/160$ & 1.6611e-05 & 2.0218 & 2.0018e-05 & 2.0202 & 2.5468e-05 & 2.0247 \\ \hline &$1/20$ & 2.3688e-03 & $\ast$ & 2.2720e-03 & $\ast$ & 2.6671e-03 & $\ast$ \\ {\bf Case 2}&$1/40$ & 5.9008e-04 & 2.0051 & 5.6571e-04 & 2.0058 & 6.6450e-04 & 2.0049 \\ &$1/80$ & 1.4583e-04 & 2.0166 & 1.3971e-04 & 2.0177 & 1.6465e-04 & 2.0129 \\ &$1/160$ & 3.4798e-05 & 2.0673 & 3.3241e-05 & 2.0714 & 3.9735e-05 & 2.0509 \\ \hline &$1/20$ & 2.1390e-03 & $\ast$ & 2.0746e-03 & $\ast$ & 2.4736e-03 & $\ast$ \\ {\bf Case 3}&$1/40$ & 5.3295e-04 & 2.0048 & 5.1668e-04 & 2.0055 & 6.1637e-04 & 2.0047 \\ &$1/80$ & 1.3171e-04 & 2.0166 & 1.2757e-04 & 2.0180 & 1.5269e-04 & 2.0132 \\ &$1/160$ & 3.1417e-05 & 2.0677 & 3.0319e-05 & 2.0730 & 3.6806e-05 & 2.0526 \end{tabular*} {\rule{0.9\textwidth}{0.9pt}} \end{center} \end{table} Table \ref{table1} reports the numerical results in the time direction of the proposed scheme \eqref{sc1}--\eqref{sc4} with fixed $h=\frac1{1000}$ and different choices of $\alpha$. Meanwhile, the numerical results in the space direction with fixed $\tau=\frac1{1000}$ and different $\alpha$ are listed in Table \ref{table2}. From these two tables, one can see that the convergence rates for the three cases are 2 in both time and space, which is in accordance with the theoretical statements. \begin{table}[hbt!]
\begin{center} \caption{Numerical accuracy in temporal direction of scheme \eqref{Csc1}--\eqref{Csc4} with $h=\frac1{100}$. }\label{table3} \renewcommand{\arraystretch}{1} \def0.9\textwidth{1\textwidth} {\rule{0.9\textwidth}{0.9pt}} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}cccccccc} &$\tau$ &\multicolumn{2}{c}{$\alpha=1.2$}&\multicolumn{2}{c}{$\alpha=1.5$} &\multicolumn{2}{c}{$\alpha=1.8$}\\ \cline{3-4} \cline{5-6} \cline{7-8}\\ &&$E_2(\tau,h)$& Rate1 &$E_2(\tau,h)$& Rate1 &$E_2(\tau,h)$& Rate1 \\\hline &$1/20$ & 2.5998e-03 & $\ast$ & 3.0099e-03 & $\ast$ & 3.0685e-03 & $\ast$ \\ {\bf Case 1}&$1/40$ & 6.5113e-04 & 1.9974 & 7.5184e-04 & 2.0012 & 7.6475e-04 & 2.0045 \\ &$1/80$ & 1.6285e-04 & 1.9994 & 1.8803e-04 & 1.9995 & 1.9099e-04 & 2.0015 \\ &$1/160$ & 4.0729e-05 & 1.9995 & 4.7017e-05 & 1.9997 & 4.7701e-05 & 2.0014 \\ \hline &$1/20$ & 5.4886e-03 & $\ast$ & 5.5733e-03 & $\ast$ & 4.7847e-03 & $\ast$ \\ {\bf Case 2}&$1/40$ & 1.3782e-03 & 1.9936 & 1.4003e-03 & 1.9928 & 1.1961e-03 & 2.0001 \\ &$1/80$ & 3.4517e-04 & 1.9975 & 3.5068e-04 & 1.9975 & 2.9863e-04 & 2.0019 \\ &$1/160$ & 8.6352e-05 & 1.9990 & 8.7708e-05 & 1.9994 & 7.4512e-05 & 2.0028 \\ \hline &$1/20$ & 5.0051e-03 & $\ast$ & 5.2124e-03 & $\ast$ & 4.5839e-03 & $\ast$ \\ {\bf Case 3}&$1/40$ & 1.2548e-03 & 1.9960 & 1.3080e-03 & 1.9946 & 1.1452e-03 & 2.0010 \\ &$1/80$ & 3.1412e-04 & 1.9981 & 3.2749e-04 & 1.9979 & 2.8591e-04 & 2.0020 \\ &$1/160$ & 7.8576e-05 & 1.9991 & 8.1903e-05 & 1.9995 & 7.1346e-05 & 2.0027 \end{tabular*} {\rule{0.9\textwidth}{0.9pt}} \end{center} \end{table} \begin{table}[hbt!] \begin{center} \caption{Numerical accuracy in spatial direction of scheme \eqref{Csc1}--\eqref{Csc4} with $\tau=\frac1{5000}$. 
}\label{table4} \renewcommand{\arraystretch}{1} \def0.9\textwidth{1\textwidth} {\rule{0.9\textwidth}{0.9pt}} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}cccccccc} &$h$ &\multicolumn{2}{c}{$\alpha=1.2$}&\multicolumn{2}{c}{$\alpha=1.5$} &\multicolumn{2}{c}{$\alpha=1.8$}\\ \cline{3-4} \cline{5-6} \cline{7-8}\\ &&$E_2(\tau,h)$& Rate2 &$E_2(\tau,h)$& Rate2 &$E_2(\tau,h)$& Rate2 \\\hline &$1/4$ & 8.6534e-04 & $\ast$ & 1.0312e-03 & $\ast$ & 1.3172e-03 & $\ast$ \\ {\bf Case 1}&$1/8$ & 5.3074e-05 & 4.0272 & 6.3283e-05 & 4.0263 & 8.0835e-05 & 4.0263 \\ &$1/16$ & 3.2641e-06 & 4.0232 & 3.9227e-06 & 4.0119 & 5.0070e-06 & 4.0130 \\ &$1/32$ & 1.9472e-07 & 4.0672 & 2.3157e-07 & 4.0823 & 2.9156e-07 & 4.1021 \\ \hline &$1/4$ & 1.8717e-03 & $\ast$ & 1.7946e-03 & $\ast$ & 2.1058e-03 & $\ast$ \\ {\bf Case 2}&$1/8$ & 1.1476e-04 & 4.0277 & 1.1000e-04 & 4.0281 & 1.2907e-04 & 4.0281 \\ &$1/16$ & 7.0565e-06 & 4.0235 & 6.7590e-06 & 4.0245 & 7.9587e-06 & 4.0195 \\ &$1/32$ & 3.5757e-07 & 4.3026 & 3.3780e-07 & 4.3226 & 4.2591e-07 & 4.2239 \\ \hline &$1/4$ & 1.6891e-03 & $\ast$ & 1.6377e-03 & $\ast$ & 1.9516e-03 & $\ast$ \\ {\bf Case 3}&$1/8$ & 1.0365e-04 & 4.0264 & 1.0047e-04 & 4.0268 & 1.1973e-04 & 4.0268 \\ &$1/16$ & 6.3734e-06 & 4.0235 & 6.1724e-06 & 4.0248 & 7.3808e-06 & 4.0198 \\ &$1/32$ & 3.2240e-07 & 4.3051 & 3.1045e-07 & 4.3134 & 4.0458e-07 & 4.1893 \end{tabular*} {\rule{0.9\textwidth}{0.9pt}} \end{center} \end{table} Table \ref{table3} lists the numerical results of the compact scheme \eqref{Csc1}--\eqref{Csc4} in time direction with fixed $h=\frac1{100}$, and Table \ref{table4} shows the numerical results in space direction with fixed $\tau=\frac1{5000}$ and different $\alpha$. The second-order accuracy in time and fourth-order accuracy in space are apparent in these two tables, respectively. 
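The reported rates are base-2 logarithms of successive error ratios. As a minimal sanity check (assuming nothing beyond the tabulated values), the following sketch reproduces the Rate1 column of {\bf Case 1} with $\alpha=1.2$ in Table \ref{table1}:

```python
import numpy as np

def observed_rate(err_coarse, err_fine):
    """Rate = log2(E(2*step) / E(step)) when the step size is halved."""
    return np.log2(err_coarse / err_fine)

# temporal errors of Case 1 with alpha = 1.2, taken from Table 1
errs = [2.5994e-03, 6.5070e-04, 1.6242e-04, 4.0295e-05]
rates = [observed_rate(a, b) for a, b in zip(errs, errs[1:])]
print(rates)   # approximately [1.998, 2.002, 2.011], matching the table
```

The same two-line helper applies verbatim to the spatial rates (Rate2) by feeding in errors obtained with halved $h$.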
We note that, in our implementation for the non-globally Lipschitz continuous term $f(u)=u^3$, the condition $\tau=\nu h^{\frac12+\epsilon}$ theoretically imposed in Remark \ref{condition} is not necessary. \subsection{Comparison with other numerical schemes} In this subsection, we give some comparisons between our proposed method and some standard methods for solving the equation \eqref{eq1}--\eqref{eq3}. It will be shown that our linearized schemes have advantages in both theoretical analysis and numerical computation. One may construct numerical schemes for this kind of equations in which the time fractional derivative is approximated by the widely used classical $L_1$ formula \cite{SunANM2006}. Such a scheme takes the form \begin{align}\label{sch-compare} \frac1{\mu}\left[a_0\delta_t u_i^{n+\frac12}-\sum_{k=1}^{n-1}(a_{n-k-1}-a_{n-k})\delta_t u_i^{k+\frac12}-a_{n-1}\psi_i \right]=\delta_x^2 u_i^{n+\frac12}-f(u_i^{n+\frac12})+p_i^{n+\frac12}, \end{align} where $a_k=(k+1)^{2-\alpha}-k^{2-\alpha}$. There are two commonly used approaches to approximate the nonlinear term $f(u_i^{n+\frac12})$. The first is the central approximation, i.e. $f(u_i^{n+\frac12})=\big[f(u_i^{n+1})+f(u_i^n)\big]/2+{\cal O}(\tau^2)$, and the resulting scheme is accurate of temporal order ${\cal O}(\tau^{3-\alpha})$, e.g. \cite{Vong2014,Vong2015,ChenH-Taiwan}; however, the scheme then becomes nonlinear, so iterative methods are required, which causes additional computational costs. Here we give a comparison by a numerical example: we apply our proposed scheme \eqref{sc1}--\eqref{sc4} and the scheme \eqref{sch-compare} with the above nonlinear approximation to solve the example in subsection \ref{Accuracy}, in which the nonlinear scheme is solved by a fixed-point method. For simplicity of presentation, we only list the results of {\bf Case 2} in Table \ref{table5}, remarking that similar results can be obtained for other cases and different parameters.
From this table, one can clearly see that our scheme works more efficiently and spends less CPU time (in seconds) than the iterative method. Moreover, when applying iterative methods for solving nonlinear schemes, the convergence of the iterative methods is usually not easy to establish. The second way to approximate the nonlinear term is a linear approach, e.g. $f(u_i^{n+\frac12})=f(u_i^n)+{\cal O}(\tau)$. The stability and convergence can be verified as in \cite{Dehghan2015}, but the convergence order is only ${\cal O}(\tau)$ or no more than ${\cal O}(\tau^{3-\alpha})$ in time. \begin{table}[hbt!] \begin{center} \caption{Numerical results of {\bf Case 2} by applying our scheme \eqref{sc1}--\eqref{sc4} and the nonlinear scheme \eqref{sch-compare} with the fixed-point iteration method, respectively, for $\alpha=1.8$ and $h=\frac1{1000}$.}\label{table5} \renewcommand{\arraystretch}{1} \def0.9\textwidth{0.9\textwidth} {\rule{0.9\textwidth}{0.9pt}} \begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}ccccccc} $\tau$ &\multicolumn{3}{c}{scheme \eqref{sc1}--\eqref{sc4}}&\multicolumn{3}{c}{scheme \eqref{sch-compare} with iteration}\\ \cline{2-4} \cline{5-7} &$E_2(\tau,h)$& Rate1 &CPU(s) &$E_2(\tau,h)$& Rate1 &CPU(s)\\\hline $1/20$ & 4.7836e-03 & $\ast$ &0.35 & 1.3562e-02 & $\ast$ & 1.06 \\ $1/40$ & 1.1950e-03 & 2.0011 &0.95 & 6.3733e-03 & 1.0895 & 3.14 \\ $1/80$ & 2.9756e-04 & 2.0058 &2.24 & 2.8693e-03 & 1.1513 & 6.49 \\ $1/160$ & 7.3449e-05 & 2.0184 &5.31 & 1.2624e-03 & 1.1845 & 13.38 \\ $1/320$ & 1.7524e-05 & 2.0674 &11.29 & 5.4810e-04 & 1.2037 & 27.55 \end{tabular*} {\rule{0.9\textwidth}{0.9pt}} \end{center} \end{table} One may also use the formula in Lemma \ref{lemma_time} to replace the $L_1$ formula and then achieve ${\cal O}(\tau^2)$ accuracy in the fractional approximation, but this gives rise to some difficulties in the theoretical analysis and, to our knowledge, the linear method like that in \cite{Dehghan2015} may not be applicable. Consequently, iterative methods are required.
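For concreteness, the fixed-point treatment of the central nonlinear approximation used in the comparison above can be sketched as follows. The splitting into a linear solve `solve_linear` and a nonlinear evaluation `f` is a hypothetical interface for illustration, not the notation of the paper:

```python
import numpy as np

def fixed_point_step(solve_linear, f, u_old, tol=1e-12, max_iter=100):
    """One time step of a nonlinear scheme treated by fixed-point iteration:
    guess u^{n+1}, evaluate f((u^{n+1} + u^n)/2), re-solve the linear part
    with the nonlinear term frozen, and repeat until the update stalls."""
    u_new = u_old.copy()                    # initial guess: previous level
    for _ in range(max_iter):
        rhs_f = f(0.5 * (u_new + u_old))    # central nonlinear approximation
        u_next = solve_linear(rhs_f)        # linear solve with frozen f
        if np.max(np.abs(u_next - u_new)) < tol:
            return u_next
        u_new = u_next
    return u_new
```

Each accepted time level thus costs several linear solves, which is the source of the extra CPU time recorded in Table \ref{table5}; the linearized schemes \eqref{sc1}--\eqref{sc4} need only one.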
All the discussions above illustrate that our proposed linearized schemes compare favorably with some usual numerical schemes. \section{Concluding remarks}\label{Concluding} In this paper, we considered a nonlinear fractional-order Klein-Gordon type equation and proposed a linearized finite difference scheme to solve the problem numerically. The main advantage is that the nonlinear term is evaluated at the previous time level; as a result, no iterative method is needed for the implementation. However, shifting the evaluation to the previous level causes difficulties in the theoretical analysis. Inspired by some recent studies, we showed that the scheme converges with second-order accuracy in time. The results are justified by numerical tests. \section*{Acknowledgment} The authors would like to thank the referees for their comments, which improved the paper significantly. We also want to thank Prof. Honglin Liao for helpful discussions on the estimates in Lemmas \ref{akbk} and \ref{dk}.
\section{\label{Sec.intro}Introduction} Precise estimation of parameters is desired for realizing upcoming quantum technologies such as quantum information processing. Quantum metrology is a promising method that offers higher precision sensing of target parameters than classical counterparts by exploiting entanglement~\cite{Toth2014,Degen2017,Pezze2018}. Appropriate entanglement among probe qubits enhances sensitivity, surpassing the standard quantum limit (SQL)~\cite{Caves1981,Giovannetti2004,Giovannetti2006}, which is known as the limit of classical sensors composed of separable states. In particular, the Greenberger-Horne-Zeilinger (GHZ) state~\cite{Greenberger1990,Mermin1990} achieves the ultimate precision, called the Heisenberg limit, in the absence of noise~\cite{Bollinger1996,Leibfried2004}. Even under specific noise, the GHZ state can still beat the SQL~\cite{Matsuzaki2011,Chin2012,Chaves2013,Dur2014,Macieszczak2015,Zhou2018,Matsuzaki2018}. Considerable effort has been devoted to the development of entanglement generation and interferometry for practical use. However, applications of entanglement-enhanced sensing are still limited for the following reasons. One of the major challenges in entanglement-enhanced sensing is to develop schemes that are robust against experimental imperfections. Typically, entanglement is created by gate operations~\cite{Leibfried2005,Neumann2008,Jones2009,DiCarlo2010,Neeley2010,Barends2014,Wei2020} or nonlinear interactions~\cite{Kitagawa1993,Agarwal1997,Molmer1999,Chumakov1999,Micheli2003,Pezze2009,Song2017,Song2019}. Controlled pulse sequences are required for adequately turning on/off gates or interactions to complete entanglement generation and to proceed to interferometry. This implies that complicated and precise setups are necessary in experiments. Desirable schemes should not require accurate control of (individual) qubits. Recently, such schemes using interacting systems have been proposed~\cite{Ivanov2013,Yoshinaga2021}.
In these schemes, interactions are not necessarily turned off. In a ferromagnetic Ising model with a transverse field, macroscopic entanglement can be created in the ground state by adiabatically decreasing the transverse field~\cite{Cirac1998,Lee2006,Yukawa2018}. This process does not require accurate control of qubits. Moreover, this process is protected by symmetry, i.e., nonadiabatic transitions from even-parity energy eigenstates to odd-parity energy eigenstates do not take place because of parity conservation based on spin-flip symmetry~\cite{Hatomura2019,Hatomura2019a,Zhuang2020}. This suppression of nonadiabatic transitions protects the macroscopic entanglement from spontaneous symmetry breaking. To use the macroscopic entanglement in the ferromagnetic Ising model for quantum metrology, parity measurement is required to extract information of a target parameter. Several ways to perform parity measurement exist. For example, we can obtain information of parity by post-processing data of single-qubit measurement on each qubit. However, operators to be measured in single-qubit measurement do not commute with the Hamiltonian (interaction term). In general, measurement of operators that do not commute with a given Hamiltonian is experimentally hard~\cite{Endo2020}, and thus we cannot perform single-qubit measurement unless interactions are turned off. In this paper, we propose a scheme for quantum metrology, in which we use the macroscopic entanglement in the ferromagnetic Ising model. In our scheme, after exposing the macroscopically entangled state to a target longitudinal field, we adiabatically induce the transverse field again. This process is also protected by the symmetry, conserving the parity. Consequently, we can extract information of the parity by global magnetization measurement. 
For the strong transverse field, an operator to be measured commutes with the dominant part of the Hamiltonian (the transverse field term), and thus our scheme is feasible in experiments. Some realistic situations are also discussed. \section{\label{Sec.theory}Theory} \subsection{Parameter estimation} We express a probe state exposed to a target parameter $\theta$ with an unknown value during a time interval $T_\mathrm{int}$ as $|\Psi_{T_\mathrm{int}}(\theta)\rangle$. When a readout process corresponds to projection measurement, the uncertainty of the estimation is given by \begin{equation} \delta\theta=\frac{\sqrt{P(1-P)}}{\left|\frac{\partial P}{\partial\theta}\right|\sqrt{M}}, \label{Eq.projection.estimate} \end{equation} where \begin{equation} P=\langle\Psi_{T_\mathrm{int}}(\theta)|\hat{P}|\Psi_{T_\mathrm{int}}(\theta)\rangle \label{Eq.projection.ex} \end{equation} is the survival probability with a projection operator $\hat{P}$, and $M$ is the number of measurements~\cite{Degen2017}. Since we are interested in the scaling behavior of the uncertainty against the number of qubits, we set $M=1$ throughout the paper for simplicity. \subsection{Model} Our theory can be applied to any ferromagnetic Ising model with a transverse field, but we focus on the following infinite-range Ising model with a transverse field \begin{equation} \hat{\mathcal{H}}=-\frac{1}{2}J\sum_{i,j=1}^N\hat{Z}_i\hat{Z}_j-h^x\sum_{i=1}^N\hat{X}_i, \label{Eq.ham} \end{equation} where we express the Pauli matrices as $\{\hat{X},\hat{Y},\hat{Z}\}$, and $J$ and $h^x$ are the interaction strength and the amplitude of the transverse field, respectively. We assume that $h^x$ is tunable, while $J$ is fixed. This is a reasonable assumption for many physical systems. In addition, we assume $N$ to be even for simplicity. Our purpose is to estimate a target longitudinal field $h^z$. In a sensing process, \begin{equation} \hat{V}=-h^z\sum_{i=1}^N\hat{Z}_i \end{equation} is added to the Hamiltonian (\ref{Eq.ham}).
For convenience, we use eigenvectors \begin{equation} \hat{S}_W|N/2,m\rangle_W=m|N/2,m\rangle_W\quad(W=X,Y,Z) \end{equation} of collective spin operators \begin{equation} \hat{S}_W=\frac{1}{2}\sum_{i=1}^N\hat{W}_i\quad(W=X,Y,Z) \label{Eq.collective} \end{equation} to express energy eigenstates of the Hamiltonian (\ref{Eq.ham}). Here we suppose that the system is confined in the maximum spin subspace satisfying $\sum_{W=X,Y,Z}\hat{S}_W^2=N/2\times(N/2+1)$, i.e., $m=-N/2,-N/2+1,\dots,N/2$. This system (\ref{Eq.ham}) conserves the parity \begin{equation} \hat{\Pi}=\prod_{i=1}^N\hat{X}_i, \label{Eq.parity} \end{equation} i.e., the commutation relation between the Hamiltonian (\ref{Eq.ham}) and the parity operator (\ref{Eq.parity}) becomes zero. That is, \begin{equation} [\hat{\mathcal{H}},\hat{\Pi}]=0 \label{Eq.parity.conserve} \end{equation} for any $h^x$ (see, e.g., Ref.~\cite{Hatomura2019,Hatomura2019a,Zhuang2020}). Therefore, $(N+1)$ energy eigenstates of the Hamiltonian (\ref{Eq.ham}) in the maximum spin subspace are classified into two sets, $\{|\psi_n(h^x)\rangle\}_{n=0}^{N/2}$ with the parity $\hat{\Pi}=+1$ and $\{|\phi_n(h^x)\rangle\}_{n=0}^{N/2-1}$ with the parity $\hat{\Pi}=-1$, in the ascending order of energy, respectively. These energy eigenstates are given by \begin{equation} \left\{ \begin{aligned} &|\psi_n(\infty)\rangle=|N/2,N/2-2n\rangle_X, \\ &|\phi_n(\infty)\rangle=|N/2,N/2-(2n+1)\rangle_X \end{aligned} \right. \label{Eq.eigen.infty} \end{equation} in the $h^x\to\infty$ limit and \begin{equation} \left\{ \begin{aligned} &|\psi_n(0)\rangle=\frac{1}{\sqrt{2}}(|N/2,N/2-n\rangle_Z+|N/2,n-N/2\rangle_Z), \\ &|\phi_n(0)\rangle=\frac{1}{\sqrt{2}}(|N/2,N/2-n\rangle_Z-|N/2,n-N/2\rangle_Z), \end{aligned} \right. \label{Eq.eigen.zero} \end{equation} for $n=0,1,\dots,N/2-1$ and $|\psi_{N/2}(0)\rangle=|N/2,0\rangle_Z$ in the $h^x\to0$ limit. 
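The parity conservation (\ref{Eq.parity.conserve}) can be checked directly for small $N$ by building the operators in the full $2^N$-dimensional space; a minimal numerical sketch (the values of $N$, $J$, and $h^x$ are arbitrary and only for illustration):

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(op, site, N):
    """Single-qubit operator `op` acting on `site` of an N-qubit register."""
    return reduce(np.kron, [op if k == site else I2 for k in range(N)])

N, J, hx = 4, 1.0, 0.7
Zs = [embed(Z, i, N) for i in range(N)]
Xs = [embed(X, i, N) for i in range(N)]

# H = -(J/2) sum_{i,j} Z_i Z_j - h^x sum_i X_i
H = -0.5 * J * sum(Zi @ Zj for Zi in Zs for Zj in Zs) - hx * sum(Xs)
Pi = reduce(np.matmul, Xs)               # parity operator, prod_i X_i

print(np.max(np.abs(H @ Pi - Pi @ H)))   # ~ 0 for any h^x
```

The commutator vanishes because each $\hat{Z}_i\hat{Z}_j$ term contains an even number of $\hat{Z}$ factors, each of which anticommutes with the corresponding $\hat{X}_i$ in the parity operator.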
Notably, the degenerate ground state $|\psi_0(0)\rangle$, which is known as the GHZ state, as well as $|\phi_0(0)\rangle$, can achieve the Heisenberg limit via parity measurement~\cite{Bollinger1996}. For example, we can obtain the expectation value of the parity (\ref{Eq.parity}) by implementing single-qubit measurement of $\hat{X}$ on each qubit, multiplying the measurement outcomes, and averaging over many independent and identically distributed samples. However, single-qubit measurement of $\hat{X}$ is nontrivial for the present model because each $\hat{X}$ does not commute with the interaction term of the Hamiltonian. If the interaction term is much smaller than the resonant frequency of the qubits, we can perform a single-qubit rotation about the $y$-axis by $\pi/2$ and a subsequent single-qubit measurement of $\hat{Z}$, which commutes with the interaction term of the Hamiltonian, for each qubit. The measurement outcome is then equivalent to that of $\hat{X}$ on the original state. However, when the interaction term is as large as or larger than the resonant frequency of the qubits, we cannot use this method. Other approaches are necessary to measure the parity. \subsection{\label{Sec.method.ideal}Symmetry-protected adiabatic quantum metrology} In this section, we explain our scheme with a reasonable readout protocol extracting information of the parity. First, we generate $|\psi_0(0)\rangle$ by adiabatic transformation, i.e., we prepare the trivial ground state $|\psi_0(\infty)\rangle$ as the initial state and adiabatically change the transverse field $h^x$ from infinity to zero~\cite{Cirac1998,Lee2006,Yukawa2018}. Notably, this process is protected by the parity conservation based on the spin-flip symmetry, i.e., nonadiabatic transitions from the ground state $|\psi_0(h^x)\rangle$ to the degenerate ground state $|\phi_0(h^x)\rangle$ do not take place even if the transverse field becomes small~\cite{Hatomura2019,Hatomura2019a,Zhuang2020}.
We then expose the system to the target longitudinal field $h^z$ during a time interval $T_\mathrm{int}$. Consequently, the state changes as \begin{equation} e^{-i(\hat{\mathcal{H}}+\hat{V})T_\mathrm{int}}|\psi_0(0)\rangle=e^{iJN^2T_\mathrm{int}/2}[\cos(h^zNT_\mathrm{int})|\psi_0(0)\rangle+i\sin(h^zNT_\mathrm{int})|\phi_0(0)\rangle]. \label{Eq.state.sensing} \end{equation} In our scheme, we adiabatically change the transverse field $h^x$ again to infinity. The state (\ref{Eq.state.sensing}) becomes \begin{equation} |\Psi_{T_\mathrm{int}}(h^z)\rangle=\cos(h^zNT_\mathrm{int})|\psi_0(\infty)\rangle+\sin(h^zNT_\mathrm{int})e^{i\alpha}|\phi_0(\infty)\rangle \end{equation} except for a global phase factor because of the parity conservation (\ref{Eq.parity.conserve}). Here, $\alpha$ is a relative phase accompanying the adiabatic transformation of the transverse field $h^x$. To extract the information of the target longitudinal field $h^z$ from this state, we perform projection measurement of the eigenstate $|\psi_0(\infty)\rangle=|N/2,N/2\rangle_X$. This measurement can simply be implemented by measuring the global magnetization $\hat{S}_X$, which commutes with the dominant part of the Hamiltonian (transverse field term). The expectation value of the projection measurement (\ref{Eq.projection.ex}) is given by $P=\cos^2(h^zNT_\mathrm{int})$, and thus the uncertainty of the estimation (\ref{Eq.projection.estimate}) achieves the Heisenberg limit \begin{equation} \delta h^z=\frac{1}{2NT_\mathrm{int}}. \end{equation} In conclusion, experimentally difficult parity measurement is replaced with simple global magnetization measurement, keeping the Heisenberg limit. 
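As a numerical sanity check of this expression, one can insert $P=\cos^2(h^zNT_\mathrm{int})$ into Eq.~(\ref{Eq.projection.estimate}) and confirm that the uncertainty equals $1/(2NT_\mathrm{int})$ independently of $h^z$, away from the isolated points where $\partial P/\partial h^z$ vanishes. A minimal sketch with arbitrary parameter values:

```python
import numpy as np

N, T_int = 100, 1.0

def survival(hz):
    """P = cos^2(h^z N T_int) for the ideal protocol."""
    return np.cos(hz * N * T_int) ** 2

hz = 0.003                      # away from the zeros of dP/dh^z
eps = 1e-7
dP = (survival(hz + eps) - survival(hz - eps)) / (2 * eps)
P = survival(hz)
delta_hz = np.sqrt(P * (1 - P)) / abs(dP)

print(delta_hz, 1 / (2 * N * T_int))   # both ~ 5.0e-3 (Heisenberg limit)
```

Analytically, $\sqrt{P(1-P)}=|{\sin(2h^zNT_\mathrm{int})}|/2$ and $|\partial P/\partial h^z|=NT_\mathrm{int}|\sin(2h^zNT_\mathrm{int})|$, so the oscillating factors cancel exactly.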
\subsection{\label{Sec.noise}Phase shift} While the uncertainty achieves the Heisenberg limit even for vanishingly small $h^z$, both the denominator and the numerator in Eq.~(\ref{Eq.projection.estimate}) vanish for $h^z\ll1$ because the expectation value of the projection measurement (\ref{Eq.projection.ex}) is the cosine-squared function. However, in noisy situations, the numerator typically has a finite value, while the denominator is infinitesimal for small $h^z$, resulting in divergence of the uncertainty. For example, the numerator becomes large when readout measurement becomes noisy~\cite{Taylor2008,Kitazawa2017}. To avoid such a problem, we introduce a phase shift. The target parameter can be divided into two parts, $h^z=h^z_k+h^z_u$, where $h^z_k$ is a known part and $h^z_u$ is an unknown part. We assume that, by pre-estimation, an approximate value of $h^z$ is known, i.e., $h^z_k\approx h^z$ and $h^z_u\ll 1$. We try to estimate $h^z_u$ by entanglement-enhanced sensing for further improvement of the precision. For a phase shift, we add an offset $h^z_0$ so that $2(h^z_k+h_0^z)NT_\mathrm{int}=(2n+1)\pi/2$ with an integer $n$. Then, the expectation value of the projection measurement (\ref{Eq.projection.ex}) turns into $P=[1\pm\sin(2h^z_uNT_\mathrm{int})]/2$, and the denominator of Eq.~(\ref{Eq.projection.estimate}) becomes $|\partial P/\partial h^z|=NT_\mathrm{int}|\cos(2h^z_uNT_\mathrm{int})|$, which does not vanish for small $h^z_u$. Due to this phase shift, the uncertainty of the estimation (\ref{Eq.projection.estimate}) becomes robust against noise. \subsection{\label{Sec.real}Realistic situation} We now consider a case where the transverse field $h^x$ and its transformation speed are finite, in order to discuss a realistic situation in experiments. We set the transverse field $h^x$ at the initial and final time as $h^x=h^x_0$, and assume that the transverse field is changed from $h^x_0$ (0) to $0$ ($h^x_0$) within a finite operation time $T_a$.
For a finite operation time, some nonadiabatic transitions may happen, but the parity conservation (\ref{Eq.parity.conserve}) guarantees that, just as in the ideal situation, transitions between the different-parity eigenstates $\{|\psi_n(h^x)\rangle\}_{n=0}^{N/2}$ and $\{|\phi_n(h^x)\rangle\}_{n=0}^{N/2-1}$ do not take place~\cite{Hatomura2019,Hatomura2019a,Zhuang2020}. Let us discuss two approaches to prepare the initial state. The first approach is as follows: for $h^x_0/JN\gg1$, we prepare the ground state $|\psi_0(h^x_0)\rangle$ as the initial state, which can be done by cooling the system because of the large energy gap. However, in this case, a long operation time is required to adiabatically change the transverse field from large $h^x_0$ to 0 and from 0 to large $h^x_0$. The other approach is as follows: we apply a strong magnetic field $h^x/JN\gg1$, perform a projection measurement onto $|\psi_0(\infty)\rangle=|N/2,N/2\rangle_X$, and implement a sudden quench to $h^x_0$ satisfying $h^x_0/JN\approx1$. In this case, the operation time needed to satisfy the adiabatic condition can be shorter than in the first approach, while the initial state becomes $|\psi_0(\infty)\rangle$, which is not the ground state but is close to it, as discussed later. In this paper, we consider the latter case. Note that $h^x/JN=1$ is the critical point in the thermodynamic limit, and thus we cannot prepare the ground state by cooling because of the small energy gap. The initial state $|\psi_0(\infty)\rangle$ can be expanded in the basis $\{|\psi_m(h_0^x)\rangle\}_{m=0}^{N/2}$ as \begin{equation} |\psi_0(\infty)\rangle=\sum_{m=0}^{N/2}g_m|\psi_m(h_0^x)\rangle, \label{Eq.state.finite.ini} \end{equation} where $g_m=\langle\psi_m(h_0^x)|\psi_0(\infty)\rangle$. The transverse field $h^x$ is changed from $h^x_0$ to $0$ within a finite operation time $T_a$. We express the time evolution operator of this process as $\hat{U}_1$.
The state after this process is \begin{equation} \hat{U}_1|\psi_0(\infty)\rangle=\sum_{m,n=0}^{N/2}g_mg_{m\to n}|\psi_n(0)\rangle, \end{equation} where $g_{m\to n}=\langle\psi_n(0)|\hat{U}_1|\psi_m(h_0^x)\rangle$. In the sensing process, the system is exposed to the target longitudinal field for a time interval $T_\mathrm{int}$ and each level rotates at a different speed, i.e., \begin{equation} e^{-i(\hat{\mathcal{H}}+\hat{V})T_\mathrm{int}}\hat{U}_1|\psi_0(\infty)\rangle=\sum_{m,n=0}^{N/2}g_mg_{m\to n}e^{iJ(N-2n)^2T_\mathrm{int}/2}\{\cos[h^z(N-2n)T_\mathrm{int}]|\psi_n(0)\rangle+i\sin[h^z(N-2n)T_\mathrm{int}]|\phi_n(0)\rangle\}. \end{equation} After the sensing process, we again change the transverse field $h^x$ from $0$ to $h^x_0$ within the time $T_a$. We express the time evolution operator of this process as $\hat{U}_2$. We then obtain the probe state \begin{equation} \begin{aligned} |\Psi_{T_\mathrm{int}}(h^z)\rangle=&\hat{U}_2e^{-i(\hat{\mathcal{H}}+\hat{V})T_\mathrm{int}}\hat{U}_1|\psi_0(\infty)\rangle \\ =&\sum_{m,n,l=0}^{N/2}g_mg_{m\to n}e^{iJ(N-2n)^2T_\mathrm{int}/2}\{g_{n\to l}^\psi\cos[h^z(N-2n)T_\mathrm{int}]|\psi_l(h_0^x)\rangle+ig_{n\to l}^\phi\sin[h^z(N-2n)T_\mathrm{int}]|\phi_l(h_0^x)\rangle\}, \end{aligned} \end{equation} where $g_{n\to l}^{\psi(\phi)}=\langle\psi_l(\phi_l)(h_0^x)|\hat{U}_2|\psi_n(\phi_n)(0)\rangle$. The total operation time of the entire protocol is $T=2T_a+T_\mathrm{int}$. Finally, we perform the projection measurement onto $|\psi_0(\infty)\rangle=|N/2,N/2\rangle_X$ and obtain \begin{equation} P=\left|\sum_{m,n,l=0}^{N/2}g_mg_{m\to n}g_{n\to l}^\psi g_l^\ast e^{iJ(N-2n)^2T_\mathrm{int}/2}\cos[h^z(N-2n)T_\mathrm{int}]\right|^2 \label{Eq.projection.realistic} \end{equation} as the survival probability of this measurement.
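The structure of the expansion (\ref{Eq.state.finite.ini}) can be checked with a small exact diagonalization. The sketch below assumes the infinite-range Ising Hamiltonian $\hat{\mathcal{H}}=-J\sum_{i<j}\sigma^z_i\sigma^z_j-h^x\sum_i\sigma^x_i$ restricted to the maximum-spin (Dicke) sector, which is our reading of the model (it reproduces the level phases $J(N-2n)^2T_\mathrm{int}/2$ and the critical point $h^x=JN$). It verifies that the spin-flip parity commutes with the Hamiltonian and that $|\psi_0(\infty)\rangle$ is parity-even, so only the $\{|\psi_m(h_0^x)\rangle\}$ enter the expansion:

```python
import numpy as np

N = 10
J = 1.0 / N                    # units with JN = 1
hx0 = J * N                    # quench target h_0^x = JN

# Collective spin operators in the Dicke basis, S = N/2
S = N / 2
m = np.arange(S, -S - 1, -1.0)
Sz = np.diag(m)
Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
Sx = (Sp + Sp.T) / 2

# H = -J sum_{i<j} sigma^z_i sigma^z_j - hx0 sum_i sigma^x_i (constant dropped)
H = -2 * J * Sz @ Sz - 2 * hx0 * Sx
vals, vecs = np.linalg.eigh(H)

# Initial state |psi_0(inf)> = |N/2, N/2>_X, the maximal-S_x Dicke state
psi_inf = np.linalg.eigh(-Sx)[1][:, 0]

# Spin-flip parity: m -> -m, i.e., an antidiagonal flip in the z basis
F = np.fliplr(np.eye(N + 1))

g = vecs.T @ psi_inf           # expansion coefficients g_m (real here)

# Parity commutes with H and the initial state is parity-even, so the
# expansion is complete within the even-parity sector
assert np.allclose(F @ H, H @ F)
assert np.allclose(F @ psi_inf, psi_inf)
assert np.isclose(np.sum(g ** 2), 1.0)
```

Because the numerically obtained eigenvectors of $H$ are parity eigenstates, the coefficients $g$ have support only on the even sector, consistent with Eq.~(\ref{Eq.state.finite.ini}).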
When the adiabatic condition is satisfied, i.e., when $g_{m\to n}^{(\psi)}=\delta_{mn}e^{i\alpha_m^{(\psi)}}$ holds with a phase factor $e^{i\alpha_m^{(\psi)}}$, we can provide a sufficient condition for achieving the Heisenberg limit scaling. The uncertainty of the estimation (\ref{Eq.projection.estimate}) is bounded as \begin{equation} \delta h^z\le\frac{1}{2NT_\mathrm{int}(2|g_0|^4-1)\sin(2h^zNT_\mathrm{int})} \label{Eq.uncertainty.bound} \end{equation} for $|g_0|^4>1/2$ when the condition $0\le 2h^zNT_\mathrm{int}\le\pi/2$ is satisfied (see Appendix~\ref{Sec.bound.derivation} for the derivation). Notably, the factor $\sin(2h^zNT_\mathrm{int})$ becomes unity when we consider the phase shift discussed in Sec.~\ref{Sec.noise}, and the right-hand side of Eq.~(\ref{Eq.uncertainty.bound}) exactly coincides with the Heisenberg limit when $|g_0|^2=1$. This bound guarantees the Heisenberg limit scaling when the overlap between the initial state and the ground state, $|g_0|^2=|\langle\psi_0(h_0^x)|\psi_0(\infty)\rangle|^2$, satisfies $|g_0|^4>1/2$, or more rigorously $2|g_0|^4-1=\Theta(N^0)$. We plot the overlap $|g_0|^2$ and the threshold $|g_0|^4=1/2$ in Fig.~\ref{Fig.overlap}. \begin{figure} \includegraphics[width=8cm]{overlap.eps} \caption{\label{Fig.overlap}Overlap between the initial state and the ground state $|g_0|^2=|\langle\psi_0(h^x)|\psi_0(\infty)\rangle|^2$. The horizontal axis is the transverse field $h^x$ in units of $JN$. Here, (red circles) $N=10$, (green triangles) $N=50$, and (blue squares) $N=100$. The solid horizontal line represents the threshold $|g_0|^4=1/2$ and the dashed vertical line represents the critical point. } \end{figure} We find that the initial condition $h^x_0=JN$ is sufficient to achieve the Heisenberg limit scaling when $N\le100$ and provides a shorter operation time $T_a$.
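The threshold criterion can be evaluated directly by exact diagonalization, again assuming the infinite-range Ising Hamiltonian restricted to the maximum-spin sector (our reading of the model). The sketch computes $|g_0|^2$ at $h^x_0=JN$ and the bound factor $2|g_0|^4-1$ for the system sizes shown in Fig.~\ref{Fig.overlap}:

```python
import numpy as np

def overlap_g0_sq(N, hx_over_JN=1.0):
    """|<psi_0(h0^x)|psi_0(inf)>|^2 in units with JN = 1."""
    S, J = N / 2, 1.0 / N
    m = np.arange(S, -S - 1, -1.0)
    Sz = np.diag(m)
    Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
    Sx = (Sp + Sp.T) / 2
    H = -2 * J * Sz @ Sz - 2 * (hx_over_JN * J * N) * Sx
    gs = np.linalg.eigh(H)[1][:, 0]          # ground state at h0^x
    psi_inf = np.linalg.eigh(-Sx)[1][:, 0]   # |N/2, N/2>_X
    return float(np.dot(gs, psi_inf) ** 2)

for N in (10, 50, 100):
    g0_sq = overlap_g0_sq(N)
    print(N, g0_sq, 2 * g0_sq ** 2 - 1)      # bound factor should stay positive
```

A positive bound factor means the sufficient condition behind Eq.~(\ref{Eq.uncertainty.bound}) applies at this quench field.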
\section{\label{Sec.numsim}Numerical simulation} In this paper, we set $h_0^x=JN$ and change the transverse field $h^x$ as $h^x=h_0^x\cos(\pi t/2T_a)$ for $0\le t\le T_a$, which was introduced as coherent driving in Ref.~\cite{Yukawa2018} and is similar to a geometrically optimal schedule~\cite{Hatomura2019a}. Under this transverse field, we can shorten the operation time $T_a$ because nonadiabatic transitions and interference result in high fidelity to the GHZ state even on a nonadiabatic time scale~\cite{Yukawa2018}. We also change the transverse field $h^x$ as $h^x=h_0^x\sin\{\pi[t-(T_a+T_\mathrm{int})]/2T_a\}$ for $T_a+T_\mathrm{int}\le t\le2T_a+T_\mathrm{int}$. In the following numerical simulations, we set $JN=1$. First, we optimize the operation time $T_a$ for the sensing time $T_\mathrm{int}=0$. We plot (red circles) the fidelity of the probe state to the GHZ state $|\psi_0(0)\rangle$ at the time $t=T_a$, i.e., $|\langle\psi_0(0)|\hat U_1|\psi_0(\infty)\rangle|^2$, and (green triangles) that to the initial state $|\psi_0(\infty)\rangle$ at the time $t=2T_a+T_\mathrm{int}=2T_a$, i.e., $|\langle\psi_0(\infty)|\hat U_2\hat U_1|\psi_0(\infty)\rangle|^2$, for $N=10$ in Fig.~\ref{Fig.fidN10}. \begin{figure} \includegraphics[width=8cm]{fidN10.eps} \caption{\label{Fig.fidN10}Fidelity to the GHZ state at time $t=T_a$ (red circles) and to the initial state at time $t=2T_a+T_\mathrm{int}=2T_a$ (green triangles) for $N=10$. The horizontal axis is the operation time $T_a$ in units of $(2JN^2)^{-1}$. } \end{figure} Here, interference appears when nonadiabatic transitions take place, and thus these quantities show oscillating behavior. We find a locally optimal operation time $T_a\approx150(2JN^2)^{-1}$ showing high fidelity to the GHZ state ($\sim0.97$) and to the initial state ($\sim0.91$). Notably, it satisfies $g_{m\to n}^{(\psi)}\approx\delta_{mn}e^{i\alpha_m^{(\psi)}}$ because of interference, although this locally optimal operation time is on a nonadiabatic time scale.
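The ramp itself can be reproduced with a direct time integration. The sketch below assumes the same maximum-spin-sector Hamiltonian as above (our reading of the model) and evolves $|\psi_0(\infty)\rangle$ under the schedule $h^x(t)=h_0^x\cos(\pi t/2T_a)$ with $T_a=150(2JN^2)^{-1}$. It checks that the norm and the spin-flip parity are conserved and prints the fidelity to the GHZ state; the printed value depends on the discretization and on our reading of the conventions:

```python
import numpy as np

N, J = 10, 0.1                     # JN = 1
hx0 = J * N
Ta = 150 / (2 * J * N ** 2)        # locally optimal operation time
steps = 3000
dt = Ta / steps

S = N / 2
m = np.arange(S, -S - 1, -1.0)
Sz = np.diag(m)
Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
Sx = (Sp + Sp.T) / 2
H0 = -2 * J * Sz @ Sz

def step(psi, hx, dt):
    # exact propagator exp(-i H dt) via eigendecomposition
    w, v = np.linalg.eigh(H0 - 2 * hx * Sx)
    return v @ (np.exp(-1j * w * dt) * (v.T @ psi))

psi = np.linalg.eigh(-Sx)[1][:, 0].astype(complex)   # |psi_0(inf)>
ghz = np.zeros(N + 1, complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)                    # (|S>_z + |-S>_z)/sqrt(2)

for k in range(steps):                               # midpoint rule for h^x(t)
    t = (k + 0.5) * dt
    psi = step(psi, hx0 * np.cos(np.pi * t / (2 * Ta)), dt)

F = np.fliplr(np.eye(N + 1))                         # spin-flip parity m -> -m
fidelity = abs(ghz.conj() @ psi) ** 2
parity = (psi.conj() @ F @ psi).real
print(fidelity, parity)
```

Since every instantaneous propagator commutes with the flip operator, the parity expectation value stays at its initial value of one to machine precision, whatever the degree of nonadiabaticity.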
Now, we set $T_a=150(2JN^2)^{-1}$ and study the uncertainty of the estimation (\ref{Eq.projection.estimate}) for an infinitesimally small target parameter $h^z_u$ with the phase shift discussed in Sec.~\ref{Sec.noise}. We calculate the denominator of Eq.~(\ref{Eq.projection.estimate}) by finite difference, $\partial P/\partial h^z_u\approx(P|_{h^z_u=10^{-10}}-P|_{h^z_u=0})/10^{-10}$. The sensing time $T_\mathrm{int}$ contributes to relative phases between different levels, and it affects the uncertainty of the estimation (\ref{Eq.projection.estimate}) [see Eq.~(\ref{Eq.projection.realistic})]. Therefore, we plot the uncertainty of the estimation (\ref{Eq.projection.estimate}) with respect to $T_\mathrm{int}$ in Fig.~\ref{Fig.uncertaintyN10}. \begin{figure} \includegraphics[width=8cm]{accuracyN10.eps} \caption{\label{Fig.uncertaintyN10} Uncertainty of the estimation for $N=10$. The horizontal axis is the sensing time $T_\mathrm{int}$ in units of $(2JN^2)^{-1}$ and the vertical axis is the uncertainty of estimation $\delta h^z$ in units of $JN$. The solid and dashed curves represent the Heisenberg limit, $\delta h^z=1/2NT_{\mathrm{int}}$, and the SQL, $\delta h^z=1/2N^{1/2}T_{\mathrm{int}}$, respectively.} \end{figure} We find that the uncertainty is very close to the Heisenberg limit. Indeed, the uncertainty of the estimation (\ref{Eq.projection.estimate}) achieves $\delta h^z\approx1.07/2NT_\mathrm{int}$ on average for $(2JN^2)T_\mathrm{int}=1,3,5,\dots,199$. Here, $(h_k^z+h_0^z)/JN=\pi/2$. Note that $1.07\approx(0.93)^{-1}$; this effective fidelity of $0.93$ is smaller than the fidelity to the GHZ state ($\sim0.97$) and slightly larger than the fidelity to the initial state ($\sim0.91$) for $T_\mathrm{int}=0$. We also discuss these quantities for other system sizes, $N=20,30,40,\dots,100$. Some examples of locally optimal operation times are plotted with respect to the system size $N$ in Fig.~\ref{Fig.optime}.
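The full protocol (ramp down, sensing, ramp up, projection) can likewise be simulated end to end. The sketch below uses the same model assumptions and illustrative choices as above; we pick $(2JN^2)T_\mathrm{int}=5$ and apply the offset of Sec.~\ref{Sec.noise}, then estimate the uncertainty by finite difference and compare it with the Heisenberg limit:

```python
import numpy as np

N, J = 10, 0.1                        # JN = 1
hx0, S = J * N, N / 2
Ta = 150 / (2 * J * N ** 2)           # locally optimal operation time
T_int = 5 / (2 * J * N ** 2)          # sensing time, (2JN^2) T_int = 5
steps = 2000

m = np.arange(S, -S - 1, -1.0)
Sz = np.diag(m)
Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), 1)
Sx = (Sp + Sp.T) / 2
H0 = -2 * J * Sz @ Sz
psi0 = np.linalg.eigh(-Sx)[1][:, 0].astype(complex)  # |psi_0(inf)>

def step(psi, hx, dt):
    w, v = np.linalg.eigh(H0 - 2 * hx * Sx)          # exact exp(-i H dt)
    return v @ (np.exp(-1j * w * dt) * (v.T @ psi))

def ramp(psi, down):
    dt = Ta / steps
    for k in range(steps):
        t = (k + 0.5) * dt
        s = np.cos(np.pi * t / (2 * Ta)) if down else np.sin(np.pi * t / (2 * Ta))
        psi = step(psi, hx0 * s, dt)
    return psi

def survival(hz):
    psi = ramp(psi0, down=True)
    # sensing: H0 + V is diagonal in the z basis, with V = -hz sum_i sigma^z_i
    psi = np.exp(-1j * np.diag(H0 - 2 * hz * Sz) * T_int) * psi
    return abs(psi0 @ ramp(psi, down=False)) ** 2

h_base = np.pi / (4 * N * T_int)      # offset: 2 (hk + h0) N T_int = pi/2
eps = 1e-6
P = survival(h_base)
dP = (survival(h_base + eps) - P) / eps
delta_hz = np.sqrt(P * (1 - P)) / abs(dP)
print(delta_hz * 2 * N * T_int)       # ratio to the Heisenberg limit
```

If the simulation tracks the behavior reported above, this ratio stays of order one, i.e., well below the SQL ratio $\sqrt{N}$.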
\begin{figure} \includegraphics[width=8cm]{optime.eps} \caption{\label{Fig.optime}Some examples of locally optimal operation times $T_a$ with respect to the system size $N$. The vertical axis is in units of $(2JN^2)^{-1}$. } \end{figure} By using these locally optimal operation times, we calculate the uncertainty for several $N$ against $T_\mathrm{int}$ (see Appendix~\ref{Sec.othersize}). We find that the uncertainty has some dependence on $T_\mathrm{int}$ and slightly deviates from the Heisenberg limit. We express the average uncertainty of estimation as $\delta h^z=1/2pNT_\mathrm{int}$, where $p$ ($0\le p\le1$) is an index denoting how close the uncertainty is to the Heisenberg limit. Here, the uncertainty is averaged over $(2JN^2)T_\mathrm{int}=1,3,5,\dots,199$. For the locally optimal times $T_a$ in Fig.~\ref{Fig.optime}, the fidelity to the GHZ state (red circles), that to the initial state (green triangles), and the index $p$ (blue squares) are calculated and plotted in Fig.~\ref{Fig.syssize}. \begin{figure} \includegraphics[width=8cm]{syssize.eps} \caption{\label{Fig.syssize} System size dependence of (red circles) the fidelity of the probe state at time $t=T_a$ to the GHZ state, (green triangles) that of the probe state at time $t=2T_a$ with $T_\mathrm{int}=0$ to the initial state, and (blue squares) the index $p$, which shows how close the uncertainty is to the Heisenberg limit on average. Here we use the locally optimal times $T_a$ plotted in Fig.~\ref{Fig.optime}. The error bars on the index $p$ represent the standard deviation and the dotted curve represents the SQL. } \end{figure} These quantities show complicated behavior as a function of the number of qubits because of nonadiabatic transitions and interference. Remarkably, the uncertainty surpasses the SQL. Note that the performance shown here is not the best; there exist other, longer operation times showing better performance. If the coherence time is long enough, we can choose those operation times.
\section{Discussion} First, we summarize the present paper. We proposed symmetry-protected adiabatic quantum metrology. In this protocol, parity measurement, which is difficult to implement in experiments, is replaced with a simple global magnetization measurement by adiabatic transformation of the transverse field. Here, we exploited the fact that the parity is a conserved quantity because of the spin-flip symmetry. We also discussed the performance of our method in realistic situations. Next, we compare our method with other schemes. Replacing parity measurement with global magnetization measurement was also discussed in Ref.~\cite{Leibfried2004}, where the one-axis twisting operation $U_\mathrm{OAT}=\exp(i\pi \hat{S}_X^2/2)$ was used. In this case, the operation time is $(\pi/2)(2J)^{-1}=\mathcal{O}(J^{-1})$. As found in Fig.~\ref{Fig.optime}, the operation time of our scheme can be $\sim(11.6N+60.0)(2JN^2)^{-1}=\mathcal{O}(J^{-1}N^{-1})$. Therefore, our scheme is faster than that in Ref.~\cite{Leibfried2004}. Our scheme also compares favorably with the quantum-domino-dynamics-based scheme discussed in Ref.~\cite{Yoshinaga2021}. Although interactions need not be turned off in their scheme, as is also the case for ours, the operation time is $\mathcal{O}(N)$ in their scheme. Typically, the interaction strength $J$ can be $\mathcal{O}(N^{-1})$~\cite{Dooley2016}, i.e., the operation time of our scheme can be $\mathcal{O}(J^{-1}N^{-1})=\mathcal{O}(N^0)$, and thus our scheme is also faster than that in Ref.~\cite{Yoshinaga2021}. Finally, we remark on the robustness of our scheme. In this paper, we considered the finite transverse field and nonadiabatic transitions as possible errors in realistic situations. We leave the effects of other errors and noise sources for future work, but we mention some evidence of robustness against them.
Our protocol utilizes the ground state, and thus decay from excited states to lower energy states during entanglement generation is less problematic than in conventional dynamical approaches. In addition, the offset discussed in Sec.~\ref{Sec.noise} makes our protocol robust against measurement imperfection. Robustness of the dynamics against a bias that breaks symmetry-protected conservation laws during symmetry-protected adiabatic transformation was discussed in Ref.~\cite{Zhuang2020}. Robustness of entanglement generation against a loss process, which breaks a symmetry-protected conservation law and confinement in a subspace of the Hilbert space, during (super)adiabatic transformation was discussed in Ref.~\cite{Hatomura2019}. Symmetry-protected superadiabatic transformation~\cite{Hatomura2018a,Hatomura2019} based on shortcuts to adiabaticity~\cite{Guery-Odelin2019} can also speed up the present protocol and reduce negative effects. \begin{acknowledgments} This work was supported by JST PRESTO Grant No.~JPMJPR1919, JST CREST Grant No.~JPMJCR1774, and Leading Initiative for Excellent Young Researchers, MEXT, Japan. MT is supported by a JSPS fellowship (JSPS KAKENHI Grant No. 20J01757). \end{acknowledgments}
\section{INTRODUCTION} Solid particles of refractory substances are formed at high temperature in the outer shells and in the cooling outflows of evolved stars \citep[e.g.,][]{Ferrarotti06}, and in supernova ejecta \citep[e.g.,][]{Sugerman06,Matsuura11}. They are eventually injected into the interstellar medium (ISM) to evolve as interstellar dust grains. Various mechanisms alter and eventually destroy them over a time-scale that estimates find to be shorter than the time-scale of their injection by stars \citep{Draine79,Jones96,Jones05,Draine09}. Astronomical observations, however, show that the mass of interstellar dust is steady. A mechanism that creates dust mass in the ISM would reconcile the two results \citep{Jones96,Jones05,Draine09,Jones11,Jones14}. The mass of interstellar dust can increase locally through the accretion of atoms and molecules present in the interstellar gas phase, which necessitates the adsorption of these gas-phase precursors by existing particles. Moreover, the adsorption lifetime must make possible the chemical reactions that produce the solid material of interstellar dust. This suggests that the local creation of interstellar dust mass must take place in cold regions. Such regions include the cold neutral ISM, or cold H~{\small I} medium, where the temperature is $\sim$100~K, dense molecular clouds where the temperatures range from 10 to 50~K, and all regions with intermediate density and temperature conditions \citep{Draine11}. It is, however, not understood how the refractory components of the interstellar dust can be produced at these cryogenic temperatures. Additionally, the role of the interstellar radiation field in the diffuse regions and that of ice mantles in dense ones have to be clarified. Interstellar dust comprises essentially silicate grains and carbonaceous particles. Experimental studies reporting the formation of these refractory substances at low temperatures are scarce.
Silicate-related solids \citep{Donn81,Khanna81} and carbonaceous particles \citep{Wakabayashi04,Wakabayashi05} were obtained following the annealing of ice matrices in which specific atomic and molecular species were initially isolated. In another study, a silicate residue was formed by proton irradiation and warming of H$_2$O ices containing SiH$_4$ molecules \citep{Nuth89}. More recently, experiments by our group have demonstrated the formation of complex silicates \citep{Rouille14} and of carbonaceous solid matter \citep{Fulvio17} at temperatures not higher than 13 and 15~K, respectively. In liquid helium, at 1.7~K, carbon atoms and molecules were found to condense into a partially graphitized solid \citep{Krasnokutski17}. Under such conditions, a chemistry that requires very little activation energy, if any, is involved \citep{Krasnokutski14c}. These observations made in the laboratory support the hypothesis that low temperature is not an obstacle for the formation and the growth of refractory grains in the cold regions of the ISM. Another issue is the separation of interstellar dust into carbonaceous particles and silicate grains suggested by some studies \citep{Adamson99,Dwek04,Chiar06,Smith06,Mason07,Li14,Valencic15}. The formation and the growth of mostly pure grains in a medium populated with the atomic and molecular precursors of both silicate and carbonaceous materials, not to mention other substances, would necessitate selective adsorption and desorption mechanisms. Studying the co-accretion of cold precursors of silicates and carbonaceous solids, together with other species relevant to the ISM, would contribute to explaining the separation of the two refractory materials in the interstellar dust. It would also give us the opportunity to demonstrate that silicates and carbonaceous solids can be formed and can grow even though other species are present.
Finally, we expect to show that solid SiC is not formed in interstellar conditions, as can be inferred from its low abundance in the diffuse ISM \citep{Whittet90,Chiar06a}. Accordingly, we present an experimental study examining the simultaneous accretion of potential precursors of silicate grains and solid carbon at cryogenic temperatures, in the presence of species relevant to the ISM. \section{EXPERIMENTAL}\label{sec:exp} The principle of the experiments is to make cryogenically cold atomic and molecular species interact with each other under conditions that enable their accretion into a solid, refractory material. Figure~\ref{fig:process} illustrates the procedure we have applied and Figure~\ref{fig:setup} shows the core part of the apparatus we have used. \begin{figure*} \epsscale{1.1} \plotone{fig1} \caption{Principle of the experimental procedure used to study the simultaneous condensation of silicates and carbonaceous grains. (a) Deposition of atomic and molecular precursors produced by laser vaporization, with Ne atoms, on a cold KBr substrate. (b) Cold precursors isolated in Ne ice and identified with absorption spectroscopy. (c) Annealing of the Ne ice causing the diffusion and accretion of the cold precursors, monitored with FTIR spectroscopy. (d) At 13~K, atomic and molecular precursors have disappeared and a refractory condensate is observed. Impurities (e.g., H$_2$O, CO, CO$_2$) that form ices are not included.\label{fig:process}} \end{figure*} \begin{figure} \epsscale{1.1} \plotone{fig2} \caption{Core of the apparatus used for the experiments. (1) Ne gas inlet; (2) vacuum chamber extension with targets; (3) vacuum chamber, with evacuation (not shown) toward the foreground; (4) radiation shield fixed to the first stage of the rotatable cryocooler; (5) substrate holder fixed to the second stage; (6) semicircular substrate; (7) port for IR spectroscopy; (8) port for UV/vis spectroscopy; (9) extension with port for laser vaporization; (10) laser beams.
The front of parts (2), (3), (4), and (9) has been cut away.\label{fig:setup}} \end{figure} As the first step of several experiments, the potential atomic and molecular precursors of silicates and of carbonaceous matter were produced by the simultaneous laser vaporization of a silicate-related target and a graphite target (Plano GmbH). They were placed side by side in a vacuum chamber with a base pressure of $\sim$1$\times$10$^{-6}$~mbar at room temperature and $\sim$1$\times$10$^{-7}$~mbar at 6.5~K. Four experiments denoted E1, E2, E3, and E4 were carried out, each time with a different silicate-related target, so as to observe the influence of various metal atom concentrations. To date, interstellar silicates are found to be Mg-rich \citep{Min07,Fogerty16}, a large fraction of the depleted interstellar Fe atoms being possibly found as metallic Fe and FeS nanometer-sized inclusions \citep{Koehler14,Westphal14}. In experiments E1 to E4, the targets were respectively pressed SiO powder (Sigma Aldrich) and slabs of quenched melts with the formulas Mg$_2$SiO$_4$, Mg$_{0.4}$Fe$_{0.6}$SiO$_3$, and Mg$_{0.6}$Fe$_{0.4}$SiO$_3$. The three compounds with complex silicate compositions were amorphous and glassy. They were synthesized in-house following the melting procedure described by \cite{Jaeger94} and \cite{Dorschner95}. Cooling of the atomic and molecular products of laser vaporization down to $\sim$7~K was achieved by isolating them in Ne ice. For this purpose, the targets to be vaporized were placed 5.5~cm away from a KBr substrate (Korth Kristalle GmbH) kept at $\sim$6.5~K by the action of a compressed-He, closed-cycle cryocooler (Advanced Research Systems, Inc. ARS-4H and DE-204SL). Laser vaporization was performed for 60 to 65~min with synchronized pulsed Nd:YAG sources (Continuum Minilite I and Minilite II) emitting at 532-nm wavelength and operated at a rate of 10 shots per second. The laser spots on the targets were shifted every minute.
At the same time, a continuous flow of Ne gas (Air Liquide, purity 99.999{\%}) was directed towards the cold substrate at a rate of 5.00~sccm, bringing the pressure to $\sim$2$\times$10$^{-5}$~mbar in the chamber. Under such conditions, the substrate became covered with Ne ice in which the products of laser vaporization were embedded. The second step was the annealing of the Ne ice to cause the diffusion and the interaction of the cold atoms and molecules it contained. Any energy produced by chemical reactions would be dissipated into the ice, preventing the dissociation of the products and enabling accretion. The annealing consisted of increasing the temperature of the KBr substrate by electrical heating (temperature controller LakeShore 330 Autotuning). The increase was carried out at an overall rate of $\sim$0.02~K~min$^{-1}$, starting from 8.5--9.0~K (with oscillations of $\pm$0.5~K). The Ne ice disappeared completely near 13~K, as Ne ice evaporates at an extremely fast rate at this temperature \citep[see, e.g.,][]{Hama17}. In situ optical spectroscopy at ultraviolet and visible (UV/vis) wavelengths and also in the infrared (IR) domain allowed us to verify the presence of the precursors isolated in the Ne ice prior to annealing. It also allowed us to monitor the disappearance of these species during the annealing procedure, and, possibly, the associated formation of a solid material at low temperature. Finally, the solids, or condensates in the broad sense, produced in the experiments were also studied ex situ by high-resolution transmission electron microscopy (HRTEM) and energy-dispersive X-ray (EDX) spectroscopy. These techniques provided us with information on the morphology and the composition of the condensates, respectively. In order to perform their HRTEM analysis, the solids produced in the experiments were transferred from the KBr substrate to a TEM copper grid covered with a lacey carbon film.
The HRTEM analysis was performed with a transmission electron microscope (JEOL GmbH JEM-3010) equipped with a LaB$_6$ cathode and operated with an acceleration voltage of 300~kV. The HRTEM micrographs were Fourier-transformed (FT) to reveal periodic structures such as lattice fringes or structural patterns related to a medium range order of the amorphous material. Periodicities without a physical origin were removed from the inverse-FT images, and subsequent reconversion into bright-field images allowed us to produce high-quality HRTEM micrographs. Some were skeletonized to illustrate differences in the medium-range order, which indicates the degree of structural ordering in amorphous materials. For instance, in carbonaceous grains, the presence of planar or bent aromatic structural units is described by the medium-range order. In contrast, a crystalline structure displays long-range order. \section{RESULTS} \subsection{Identification of the Dopants} Figure~\ref{fig:UVvis} shows UV/vis spectra of Ne matrices doped with atoms and molecules produced in the laser vaporization of the silicate-related and graphite targets. The species identified by examining the UV/vis spectra are Mg, Fe, C$_2$, C$_3$, C$_6$, C$_8$, SiO, and CNN. Table~\ref{tbl:UVvisspecies} contains information on the wavelength positions of the features used for identification. Not included in Table~\ref{tbl:UVvisspecies} are complex absorption features attributed to Si$_x$O$_x$ ($x$ $>$ 1) oligomers \citep{Rouille13,Krasnokutski14c}. \begin{figure} \epsscale{1.1} \plotone{fig3} \caption{Absorption spectra of doped Ne matrices in the UV/vis wavelength domain after subtraction of the baselines. Matrices obtained after (a) 64, (b) 10, (c) 12, and (d) 11~min deposition of Ne atoms while carrying out laser vaporization of a graphite target paired with a target of (a) SiO, (b) Mg$_2$SiO$_4$, (c) Mg$_{0.4}$Fe$_{0.6}$SiO$_3$, and (d) Mg$_{0.6}$Fe$_{0.4}$SiO$_3$.
As to peak labels, the numbers $n$ refer to C$_n$ molecules, the black diamonds to atomic Fe, the asterisks to atomic Mg, and the black bullets to CNN.\label{fig:UVvis}} \end{figure} When present, Mg and Fe atoms were identified by using reference spectra of these atoms isolated in Ne ice. For these reference measurements, the atoms were produced by laser vaporization of pure metal targets. There are no clear signs of the presence of Mg or Fe dimers in the present spectra. Interestingly, Si atoms do not appear in any of the matrices. We infer that the Si atoms are all contained in the various silicon oxide molecules observed in our experiments, because there are no features that could be attributed to other silicon-containing molecules, e.g., silicon carbides. The $A ^1\Pi \leftarrow X ^1\Sigma^+$ transition of SiO gives lines in the UV/vis wavelength range. It is visible in the spectrum obtained with the SiO target because it was measured after a long deposition time (64~min). The spectra obtained with the silicate targets were measured after depositing materials for 10--12~min only, so as to avoid saturation by the strongly absorbing atomic lines. The detection of CNN in the matrices indicates the production of C atoms via the laser vaporization of graphite. The observed CNN molecules are the product of a barrierless reaction between C atoms and N$_2$ molecules of the background gas \citep{Hickson16}. The dissociation of the product was prevented by the dissipation of the reaction energy by the rare-gas ice. The direct detection of C atoms in the Ne matrices was not achieved due to the lack of practical features in the wavelength domains scanned in the experiments. As noted above, the detection of CNN molecules indicates the presence of a significant amount of background gas in the vacuum chamber. The molecules N$_2$, O$_2$, CO, CO$_2$, and H$_2$O were likely embedded in the rare-gas ice. Accordingly, the features of CO, CO$_2$, and H$_2$O appeared in the IR spectra.
Figures~\ref{fig:MIR1} to~\ref{fig:MIR3} present Fourier-transform IR (FTIR) spectra measured during the experiments. The list of the matrix-isolated species detected and identified by analyzing these spectra, the frequencies of the absorption bands attributed to the species, and literature references are given in Table~\ref{tbl:obsvib}. \begin{figure*} \epsscale{1.1} \plotone{fig4} \caption{Infrared spectra of Ne matrices doped with atoms and molecules produced in the laser vaporization of two targets. (a) Experiment E1, graphite and SiO. (b) E4, graphite and Mg$_{0.6}$Fe$_{0.4}$SiO$_3$. In both panels, the interference pattern generated by the matrix and the general baseline have been subtracted. The numbers $n$ refer to C$_n$ molecules, the letters m, d, t, and tt stand for water monomer, dimer, trimer, and tetramer. In panel (b), arrows indicate bands identified in panel (a) and five-times magnified sections of the spectrum are vertically offset for clarity.\label{fig:MIR1}} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{fig5} \caption{Infrared spectra measured before and during the annealing of Ne matrices doped with atoms and molecules produced in the laser vaporization of two targets. (a) Experiment E1: graphite and SiO. (b) E2: graphite and Mg$_2$SiO$_4$. (c) E3: graphite and Mg$_{0.4}$Fe$_{0.6}$SiO$_3$. (d) E4: graphite and Mg$_{0.6}$Fe$_{0.4}$SiO$_3$. The spectra marked with (*) are those measured at 6.3--6.5~K after subtracting the interference pattern generated by the matrix, and also the general baseline. Labels 1$^{\mathrm{st}}$ and 2$^{\mathrm{nd}}$ indicate measurements made at close times. The spectra are offset vertically for clarity.\label{fig:MIR2}} \end{figure*} \begin{figure} \epsscale{1.1} \plotone{fig6} \caption{Infrared spectra of the materials formed in the experiments E1 to E4, measured at room temperature. Lines of water vapor are not fully corrected because of varying vacuum conditions in the spectrometer during the experiment.
The spectra are offset vertically for clarity.\label{fig:MIR3}} \end{figure} The assignment of a band to Ne-matrix-isolated SiO$_2$ was tentative in our earlier work on the formation of silicates \citep{Rouille14}. It had been suggested by studies reporting the band of Ar-matrix-isolated SiO$_2$ at 1416.4~cm$^{-1}$ \citep{Andrews92,Tremblay96}. The assignment was confirmed when we performed the laser vaporization of an amorphous Mg$_{0.4}$Fe$_{0.6}$SiO$_3$ target to dope an Ar matrix and found at 1417.0~cm$^{-1}$ the band seen at 1423.8~cm$^{-1}$ in Ne matrices. The absorption bands of the carbon oxide molecules CO, CO$_2$, and C$_3$O are also detected. In the case of C$_3$O, the identification relies on spectra of the molecule in the gas phase, isolated in Ar ice, and isolated in Ne matrix (D. Strelnikov, private communication). The amount of carbon oxide molecules isolated in the matrices is much larger when working with a silicate target than with an SiO target. In the former case, the bands of CO, CO$_2$, and C$_3$O (at 2141.2, 2347.5, and 2253.5~cm$^{-1}$, respectively) dominate the FTIR spectrum and show a different pattern of relative intensities. This suggests that the laser vaporization of a silicate target, as performed here, produces oxygen species that react with carbon species generated in the simultaneous laser vaporization of the graphite target. The reaction may take place in the gas phase or at and under the surface of the growing Ne matrix where the atoms and/or molecules may diffuse. By contrast, the vaporization of SiO essentially produces SiO molecules and their oligomers. In that case, the carbon oxide molecules are either background molecules or the products of reactions between the carbon species produced by laser vaporization and the oxygen molecules of the background gas phase. Similarly, the oxygen molecules O$_3$ and O$_4^+$ were detected only in the experiments with silicate targets.
It can be assumed that they are not produced when shooting at an SiO target because this material is vaporized essentially in the form of SiO molecules. Water molecules were initially present in the Ne matrices. They came from the background gas in the vacuum chamber and likely from the targets as well, which were not treated before the experiments to remove adsorbed water. The amount of H$_2$O molecules in the doped Ne matrices was large, causing the formation of dimers, trimers, tetramers, and some pentamers. This is indicated by the broad features arising between 3300 and 3800~cm$^{-1}$. An unknown species that gives bands at 913 and 2192~cm$^{-1}$ was detected in the experiments with laser vaporization of silicate targets. It was already spotted in our previous study on silicates \citep{Rouille14}. This species was reported by \cite{Jacox13}, who proposed a complex involving H$_2$ and H$_3$O$^+$ or H$_2$O$_5^+$ as a possible carrier. Since it is not identified, we neglect its possible role in the formation of refractory materials. Unassigned weak absorption bands indicate the presence of other species besides the main substances listed above. They likely comprise van der Waals complexes formed with the most abundant dopants, and various oxides. For instance, a band is tentatively attributed to CO$\sbond$H$_2$O, and one of two features between 2050 and 2057~cm$^{-1}$ may be caused by C$_3$$\sbond$H$_2$O \citep[][in Ar matrix]{Szczepanski95,Dibben00}. A band seen at $\sim$1934~cm$^{-1}$ when using the iron-containing targets may signal the formation of FeCO \citep{Zhou99} already during the growth of the matrices. We did not find any absorption that could be attributed to molecular MgO, which has a vibrational frequency of $\sim$775~cm$^{-1}$ in the gas phase \citep{Kagi06}. 
\subsection{Amounts of Dopants} The amount of a dopant in a matrix can be evaluated from its absorbance spectrum provided that the thickness $d$ of the matrix and the molecular absorption cross-section of the dopant are known. We have neglected the effect of the matrix on the absorption cross-section of the free species. Because the refractive index $n$ is not known, only the optical equivalent thickness $nd$ was determined and used in place of $d$. It was estimated by analyzing the interference patterns observed in the MIR spectra. Thus $nd$ was determined to be 28.6, 31.3, 35.3, and 33.3~$\mu$m in experiments E1, E2, E3, and E4, respectively. The areas of the atomic lines identified in the UV/vis spectra were compared with the cross-sections of the corresponding transitions. The cross-sections were calculated by using the vacuum wavelengths, Einstein coefficients, and state degeneracies \citep{NIST_ASD}. Table~\ref{tbl:concentrations} gives the amount of the identified and most significant dopants of the Ne matrices in the various experiments. 
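In outline, these determinations rest on a few standard relations, sketched below. This is a hedged summary of common matrix-isolation practice (stimulated emission and matrix effects neglected), not necessarily the exact expressions used in the analysis. Here $m$ is the number of fringe spacings between the wavenumbers $\tilde{\nu}_1$ and $\tilde{\nu}_2$, $A_{ul}$ the Einstein coefficient of a line, $g_u$ and $g_l$ the upper- and lower-state degeneracies, $f$ the oscillator strength, and $A(\nu)$ the decadic absorbance:

```latex
\begin{align*}
  nd &= \frac{m}{2\,(\tilde{\nu}_2 - \tilde{\nu}_1)}
     && \text{(fringe maxima, near-normal incidence),} \\
  \int \sigma(\nu)\,\mathrm{d}\nu
     &= \frac{g_u}{g_l}\,\frac{\lambda^2 A_{ul}}{8\pi}
      = \frac{\pi e^2}{m_\mathrm{e} c}\, f
     && \text{(integrated cross section, CGS),} \\
  N  &= \ln 10 \,
        \frac{\int A(\nu)\,\mathrm{d}\nu}{\int \sigma(\nu)\,\mathrm{d}\nu}
     && \text{(column density of the absorber).}
\end{align*}
```

The average concentration then follows as $N/d$, approximated by $N/(nd)$ when, as above, the refractive index is taken as unity.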
\begin{deluxetable}{lllll} \tablecaption{Average Concentrations\tablenotemark{a} of Matrix-isolated Species\label{tbl:concentrations}} \tablehead{\colhead{Species} & \colhead{E1\tablenotemark{b}} & \colhead{E2} & \colhead{E3} & \colhead{E4} } \startdata Mg & & 10.06 & 1.55 & 1.64 \\ Fe & & & 19.2 & 16.2 \\ SiO & 26.0 & 19.6 & 59.3 & 41.1 \\ SiO$_2$ & & 6.63 & 17.9 & 11.3 \\ Si$_2$O$_2$ & 16.4 & & 8.81 & 15.9 \\ Si$_3$O$_3$ & 6.76 & & & 1.14 \\ C$_2$ & & 1.53 & 2.83 & 4.99 \\ C$_3$ & 10.0 & 4.90 & 11.7 & 35.2 \\ C$_4$ & 0.538 & 2.41 & 2.17 & 4.58 \\ C$_5$ & 2.02 & 3.41 & 3.53 & 8.49 \\ C$_6$ & 2.55 & 1.15 & 5.15 & 9.91 \\ C$_7$ & 0.297 & 1.72 & 0.551 & 1.00 \\ C$_8$ & 0.243 & 0.0972 & 0.390 & \\ C$_9$ & 0.498 & 0.156 & 0.554 & 1.55 \\ CO & 8.92 & 610 & 477 & 458 \\ CO$_2$ & 5.85 & 70.4 & 65.8 & 51.5 \\ C$_3$O & 0.330 & 14.5 & 13.2 & 19.0 \\ H$_2$O\tablenotemark{c} & 425 & 213 & 483 & 180 \\ O$_3$ & & 24.9 & 30.2 & 13.9 \\ \enddata \tablenotetext{a}{In units of 10$^5$~$\mu$m$^{-3}$.} \tablenotetext{b}{Experiments E1, E2, E3, and E4 used targets of SiO, Mg$_2$SiO$_4$, Mg$_{0.4}$Fe$_{0.6}$SiO$_3$, and Mg$_{0.6}$Fe$_{0.4}$SiO$_3$, respectively, along with graphite.} \tablenotetext{c}{The contributions of water clusters and water-containing van der Waals complexes are not included.} \end{deluxetable} When Fe and Mg atoms are simultaneously present in the Ne matrices, seven Lorentz profiles were fitted to the features observed in the 32\,701--38\,941~cm$^{-1}$ energy interval (256.8--305.8~nm in wavelength). The two profiles centered between 35\,000 and 38\,000~cm$^{-1}$ were assigned together to the single Mg~{\footnotesize I} line at 35\,051.253~cm$^{-1}$ in vacuum \citep{NIST_ASD}. The other features were assigned to Fe~{\footnotesize I} lines given at 33\,095.9408, 33\,507.1232, 33\,695.3972, 34\,039.5154, and 36\,766.9660~cm$^{-1}$ in vacuum \citep{NIST_ASD}. 
When only Mg atoms were present, four Lorentz profiles were fitted to the double-peaked feature and its base between 33\,112 and 41\,667~cm$^{-1}$. Only the two main Lorentz profiles were taken into account to determine the amount of Mg atoms in the matrix. Regarding C$_2$, the origin band of the Mulliken $D ^1\Sigma_u^+$--$X ^1\Sigma_g^+$ system was found at 232.2~nm. It was used to determine the column density of C$_2$, adopting an $f$-value, or oscillator strength, of 0.0545 for a central wavelength of 231.3~nm in air \citep{Lambert95}. The integrated molecular absorption cross section for this transition was determined using these values of strength and wavelength \citep{Mulliken39}. Comparing it with the integrated absorbance of the band observed in our spectra allowed us to derive the density of C$_2$ molecules in the Ne matrices. In the case of infrared-active molecules, bands observed in the FTIR spectra were exploited to determine the amount of the molecular dopants in the Ne matrices. The amount was derived from the comparison between measured band areas and vibrational intensities computed at the density functional theory (DFT) level with the Gaussian 03 program \citep{Gaussian03}. We used the B3LYP functional \citep{Becke88,Lee88,Becke93} in combination with the 6-311+G(d,p) basis set \citep{Krishnan80,McLean80,Frisch84}. The data in Table~\ref{tbl:concentrations} can be used to evaluate the number of atoms that can contribute to forming the solid condensates, in particular Mg and Fe atoms, Si atoms from oxides, and C atoms from carbon molecules. Most C atoms in carbon oxide molecules would likely escape as the molecules get into the gas phase. Thus, Si and C atoms are present in numbers of the same order. The number of Fe atoms is at least five times smaller in comparison, though those deposited as oxides are not counted. They are, however, potential precursors of silicates. 
The number of Mg atoms is an order of magnitude smaller than that of Fe atoms in the experiments E3 and E4. Magnesium oxides are not taken into account because they do not appear in the FTIR spectra. The contents of Table~\ref{tbl:concentrations} show that the Ne matrices in experiments E2, E3, and E4 with complex silicate targets were rich in CO and H$_2$O molecules in comparison with silicon oxide molecules and carbon molecules. This did not prevent the formation of silicate and carbonaceous condensates. \subsection{Condensation Processes} As the Ne ice was annealed, the various species it contained diffused and interacted, thus giving barrierless chemical reactions the opportunity to take place. Reactions were possible between atoms and molecules and between an atom or molecule and a site at the surface of a cluster or a particle. During the processing of the rare-gas matrix, the number of isolated water molecules decreases as they form dimers and larger clusters, and water ice is formed as seen in Figure~\ref{fig:MIR2}. The features of water ice are visible after the Ne atoms have disappeared. They are broad and found in the IR spectra at 770, 1650, and 3280~cm$^{-1}$, or 13.0, 6.06, and 3.05~$\mu$m, respectively \citep[see, for instance,][]{Oeberg07}. Water ice sublimates when the substrate reaches a temperature of $\sim$160~K under vacuum \citep[value for pure H$_2$O ice,][]{Collings04} and its features are no longer visible in the spectra measured at room temperature. Some of the water molecules can react chemically with other species or with surface groups of particles such as Si$\sbond$O$\sbond$Si or aromatic C$\sbond$H. These reactions, however, have not been studied at low temperatures yet. On the other hand, reactions of H$_2$O with C and with Fe were studied at low temperature in He droplets and in Ar ice. They were found to give the weakly bound compounds H$_2$O$\sbond$C \citep{Krasnokutski14a} and H$_2$O$\sbond$Fe \citep{Krasnokutski14b,Deguin18}. 
The oxygen molecules O$_2$ and O$_3$ can contribute to forming ice and can also react chemically with other species to produce potential silicate precursors. For instance, Mg atoms can react with O$_2$ \citep{Krasnokutski10} and with O$_3$ \citep{Andrews78} to give magnesium oxides. Additionally, Fe atoms can react with O$_2$ to form the weakly bound FeOO molecule \citep{Krasnokutski14b}. In Ne matrices, CO molecules can react with Fe atoms to produce Fe$_x$(CO)$_y$ compounds \citep{Zhou99}. The most abundant product would be FeCO with a band at 1933.7~cm$^{-1}$. A weak band visible in the spectra obtained when using the Fe-containing targets coincides with this absorption. Its assignment is not certain. Nitrogen molecules contribute to ice formation during the annealing of the Ne matrix. They would start to sublimate around 20~K \citep{Collings04}. The complex composition of the ice (H$_2$O, N$_2$, carbon oxides, O$_2$, etc.) may make the sublimation irregular. During the annealing of the Ne ice, SiO and its oligomers react with one another despite the low temperature, as the reactions do not require activation energy \citep[][and references therein]{Krasnokutski14c}. These reactions alone produce silicon oxide solids that give rise to the 10- and 20-$\mu$m bands in Figure~\ref{fig:MIR2}(a) \citep{Rouille13,Krasnokutski14c}. In addition, the silicon-bearing molecules react with Mg, Fe, and their oxides to form complex silicates such as MgSiO$_3$ and Mg$_{0.24}$Fe$_{0.76}$SiO$_3$ \citep{Rouille14}, and the broad band that arises at 10~$\mu$m in Figures~\ref{fig:MIR2}(b) to \ref{fig:MIR2}(d) is attributed to such solid compounds. It is assumed that the SiO$_2$ molecules contribute to forming these condensates as well. The bands of the matrix-isolated iron oxide molecules, which are detected in experiments with Fe-containing silicate targets, disappear during the heating and sublimation of the Ne ice. 
It is assumed that they are incorporated in the forming refractory condensates as mentioned above. Chemical reactions at cryogenic temperatures between carbon molecules were proposed to explain the evolution of IR bands during the annealing of rare-gas matrices doped with such species \citep{Thompson71}. Experiments \citep{Wakabayashi04} and molecular dynamics modeling \citep{Yamaguchi04} suggested that condensates of amorphous carbon can be formed in a cold environment \citep{Wakabayashi05}. The phenomenon was demonstrated experimentally by \cite{Fulvio17}. Presently, we observe the attenuation and disappearance of the bands caused by C$_n$ molecules as the Ne ice is warmed up and evaporated. While the formation of a refractory carbonaceous material cannot be monitored by the appearance of a strong, distinct IR band, it is nonetheless related to a general increase of the absorption due to the formation of free charge carriers. An increase of the absorption was observed during annealing even though it had to be distinguished from occasional variations of the baseline vertical position. Electron microscopy shows that solid amorphous carbon was formed (see Section~\ref{sec:TEM}). Two broad bands are found in the IR spectra of the condensates produced in the experiments with the silicate and graphite targets. Their wavelength positions change slightly with the Fe content of the silicate: 6.2 and 7.1~$\mu$m (1610 and 1410~cm$^{-1}$, respectively) with the Fe-free silicate target, 6.3 and 7.2~$\mu$m (1583 and 1392~cm$^{-1}$, respectively) with the two Fe-containing silicate targets. Because the bands do not appear in experiments without carbon \citep{Rouille14} or without magnesium, they are attributed to magnesium and magnesium-iron carbonates. The degenerate asymmetric stretching mode $\nu_3$ of the CO$_3^{2-}$ ion, which exhibits the symmetry elements of the $D_{3h}$ point group, gives rise to a band near 7~$\mu$m. 
The band is split into the two components presently observed when the ion is distorted by its environment \citep{Brooker01}. Our assignment is consistent with the high abundance of carbon oxide and water molecules in the Ne matrices. Note that the spectra of the freshly prepared matrices did not show the features of isolated CO$_3^{2-}$ \citep[][in Ar matrix]{Jacox74}. Two features that resemble the 6.3- and 7.2-$\mu$m bands discussed above appear in the IR spectrum of the condensate produced with the SiO and graphite targets. They differ in position, however, as they peak at 5.85 and 6.25~$\mu$m (1709 and 1600~cm$^{-1}$, respectively). In this condensate, the bands likely arise from water molecules bound in two ways, with and without hydrogen bonding \citep[e.g.,][]{Frost09a}. This would be consistent with the broad band at $\sim$3400~cm$^{-1}$. Finally, the IR spectra show the formation of a broad band at 10~$\mu$m (1000~cm$^{-1}$) after annealing of the Ne matrix up to 12~K. This band is a clear signature of the low-temperature condensation of refractory silicate materials characterized by the typical Si$\sbond$O stretching band. In the presence of water, this band is partly obscured by the water libration band at 12.8~$\mu$m (781~cm$^{-1}$). The strength of the absorption at 10~$\mu$m increases as the Ne ice is annealed and eventually evaporated at $\sim$13~K, and the band appears clearly after water ice is removed by the final warming to room temperature. \subsection{Composition and Structure of the Condensates}\label{sec:TEM} Electron microscopy studies of the condensed refractory material verified the formation of porous aggregates of nanometer-sized particles as illustrated in Figure~\ref{fig:HR1}. The HRTEM analysis of the grains revealed the presence of chemically separated silicate and carbonaceous phases. 
Both materials are present as individual grains forming either pure or mixed aggregates, as shown in Figures~\ref{fig:HR1} and \ref{fig:HR2}, respectively. In Figure~\ref{fig:HR2}(c), a silicate particle is covered with carbonaceous material. The formation of SiC was observed neither with IR spectroscopy nor with HRTEM/EDX analysis. Both silicate and carbonaceous grains are characterized by an amorphous structure, which is clearly visible in Figures~\ref{fig:HR1}(b) and \ref{fig:HR1}(c). This common property makes it difficult to distinguish between the two materials with HRTEM. Note that amorphous carbonates cannot be distinguished from amorphous silicates. For a clear discrimination of the two condensates, we used EDX spectroscopy in combination with an image analysis that demonstrates the differences in the medium-range order of the amorphous materials. The medium-range order images were derived by processing the HRTEM images as described in Section~\ref{sec:exp}. \begin{figure*} \epsscale{1.1} \plotone{fig7} \caption{(a) Representative TEM image showing the porous aggregate structure of mixed refractory condensates (E2). (b) and (c) are high-resolution micrographs of typical silicate and carbonaceous grains, respectively, observed in the condensate of experiment E2. Circles outline a few individual grains to demonstrate their sizes. The insets show noise-filtered images of the areas delimited with squares, illustrating differences in the medium-range order of both materials. The carbon grains are slightly smaller than the silicate particles and show a more distinct medium-range order characterized by small, bent graphene layers visible in edge-on view as black, curvy stripes of various lengths. They can be described as fullerene-like carbon grains. 
Particle sizes, medium-range order, and EDX analysis were used to distinguish between carbonaceous and siliceous materials in the mixed condensates.\label{fig:HR1}} \end{figure*} \begin{figure*} \epsscale{1.1} \plotone{fig8} \caption{HRTEM micrographs of two sample areas in the condensate of experiment E2, where silicate and carbonaceous grains are in contact. The medium-range order of the carbonaceous material characterized by small, bent graphene layers is illustrated in both images. In panel (b), a 25-nm-large agglomerate of three silicate particles, outlined with a circle, is surrounded and covered (on top) by carbon material, which consists of particles with sizes between 2 and 6~nm. Two carbon particles, marked with circles, can be more easily distinguished at the edge of the condensate. Two fullerene molecules at the periphery of a denser carbon cluster are framed in a square and magnified 2.5 times.\label{fig:HR2}} \end{figure*} The micrographs of separated silicate and carbonaceous grains illustrate the smaller size of the primary carbon grains (2 to 6~nm) compared with the silicate particles (5 to 10~nm). As an additional difference, carbon grains show a distinct medium-range order characterized by strongly bent graphene layers, which can be directly identified as a substructure or can be derived from an image analysis. The slight dissimilarity between the carbonaceous and siliceous components is visible in the insets of Figures~\ref{fig:HR1}(b) and \ref{fig:HR1}(c). The structure of the carbonaceous material can be described as fullerene-like, and it is identical to that of the product in experiments on the condensation of pure carbon \citep{Fulvio17}. As in the pure carbonaceous condensate, the formation of fullerene cages can be observed. Two cage molecules at the periphery of a denser cluster can be clearly identified in Figure~\ref{fig:HR2}. 
Table~\ref{tbl:edx} gives the mean elemental composition of the silicate component of the condensates determined by EDX spectroscopy. The compositions of the silicate grains differ from those of the corresponding targets. An increase of iron and a depletion of magnesium were found for iron-containing silicates, which was caused by an incongruent evaporation of the silicate constituents during laser ablation. Such discrepancies between the composition of the target and the condensate were already observed in previous experiments. The power density of the laser used for the ablation process of silicate targets was too small to instantly sublimate the atomic and molecular species from the ablation volume. Consequently, a liquid silicate phase was formed that segregated into different phases, including a less volatile magnesium-rich silicate and a more volatile iron-rich silicate component. The latter was preferentially evaporated during the laser ablation process. \begin{deluxetable}{lllll} \tablecaption{Measured Mean Elemental Composition of the Silicate Component of the Condensates\tablenotemark{a}\label{tbl:edx}} \tablehead{\colhead{Experiment\tablenotemark{b}} & \colhead{Mg} & \colhead{Fe} & \colhead{Si} & \colhead{O} } \startdata E1 & & & 47.3 & 52.7 \\ E2 & 27.9 & & 15.5 & 56.6 \\ E3 & 4.1 & 20.3 & 21.7 & 53.9 \\ E4 & 8.8 & 14.0 & 22.7 & 54.5 \\ \enddata \tablenotetext{a}{In atomic percent.} \tablenotetext{b}{Experiments E1, E2, E3, and E4 used targets of SiO, Mg$_2$SiO$_4$, Mg$_{0.4}$Fe$_{0.6}$SiO$_3$, and Mg$_{0.6}$Fe$_{0.4}$SiO$_3$, respectively, along with graphite.} \end{deluxetable} In addition, the composition of the condensed silicates is not stoichiometric for most of the samples and depends on the composition of molecular species in the ice layer. The formation routes of silicates and carbonaceous solids are highly complex. It is impossible to provide a complete set of reactions leading to refractory siliceous and carbonaceous material simultaneously. 
For these formation routes, more than a few hundred individual reactions have to be considered. Moreover, the co-condensation of silicates and carbonaceous molecules can result in redox reactions taking place either between oxidizing and reducing molecular species or between reducing and oxidizing solid components. For example, C atoms or small carbon clusters can be oxidized by oxygen-bearing molecules such as SiO, SiO$_2$, Si$_2$O$_3$, FeO, OH, and H$_2$O to CO and CO$_2$. Redox reactions additionally complicate the reaction schemes responsible for the formation of both silicates and carbonaceous material. Solid carbonaceous material can also react as a reducing agent in contact with silicates. The oxidation of carbonaceous material requires the reduction of metallic silicate components. The solid carbon can be oxidized either completely, leading to the formation of CO or CO$_2$, or partially, through the formation of oxygen-bearing functional groups such as C$\sbond$OH or C$\dbond$O on the surface. Simultaneously, other components such as Fe cations must be reduced to metallic iron, which is able to form Fe nanometer-sized particles within the silicate matrix. Although redox reactions are well known in high-temperature processes, they have not been investigated at cryogenic temperatures yet. The co-condensation of SiO$_x$ and carbonaceous material was found to occasionally produce nanometer-sized silicon crystals clearly visible in Figure~\ref{fig:HR3}. Here, the carbon was oxidized, whereas Si$^{4+}$ was reduced to metallic silicon and could form small crystalline Si inclusions. The simultaneous condensation of Mg-Fe-silicates with carbonaceous species led to the reduction of ferric or ferrous ions and the formation of metallic Fe particles. The formation of nanometer-sized silicon crystals and metallic iron inclusions in the carbon-silicate condensates is well documented in Figure~\ref{fig:HR3}. 
Since Fe$^{2+}$ and Fe$^{3+}$ ions are easier to reduce than Si$^{4+}$ ions, metallic iron particles were frequently produced in these condensation processes. \begin{figure*} \epsscale{1.1} \plotone{fig9} \caption{(a) HRTEM micrograph of condensed SiO$_x$ grains in experiment E1 with amorphous structure and a small Si inclusion. (b) Metallic iron inclusion formed during the condensation of iron-containing silicate and carbonaceous material in experiment E3.\label{fig:HR3}} \end{figure*} The HRTEM/EDX analysis does not permit an exact quantification of the silicate/carbon ratio. Nevertheless, a slightly reduced carbon content in the condensate can be derived from the HRTEM images. One important question is the efficiency of the low-temperature condensation of a refractory material from molecules. It can be roughly determined by comparing the total mass of all atoms and molecules observed in the Ne ice matrix with the mass of the final condensate. Based on a ratio of 0.6~mol magnesium silicate and 0.4~mol carbon grains, an average molecular weight and density were calculated. For the experiment E2 (Mg$_2$SiO$_4$ and graphite targets), a total mass of 15.3~$\mu$g of refractory atoms and molecules available for the condensation process was calculated using the average concentration of molecules provided in Table~\ref{tbl:concentrations}, the corresponding ice thickness, and the area of the substrate. The amount of solid condensed material was determined using an average thickness of the condensate layers of about 100~$\mu$m determined from scanning electron microscope images, assuming a 90{\%} porosity of the condensate from the electron microscope images and a coverage of the substrate of less than 90{\%} due to the mesh-like topology of the condensate film \citep{Fulvio17}. A dust mass of 14.4~$\mu$g was calculated, resulting in a condensation efficiency of 94{\%}. 
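The mass budget just described can be summarized compactly. The symbols below are our shorthand for the quantities quoted in the text ($c_i$ and $m_i$ for the average concentrations and molecular masses, $V_\mathrm{ice}$ for the probed ice volume, $p$ for the porosity, $d_\mathrm{layer}$ and $A_\mathrm{cov}$ for the condensate thickness and covered substrate area, and $\rho$ for the average density of the 0.6:0.4 silicate--carbon mixture):

```latex
\begin{align*}
  m_\mathrm{avail} &= V_\mathrm{ice} \sum_i c_i\, m_i \approx 15.3~\mu\mathrm{g}, \\
  m_\mathrm{dust}  &\approx (1 - p)\,\rho\, d_\mathrm{layer}\, A_\mathrm{cov}
                    \approx 14.4~\mu\mathrm{g}, \\
  \eta &= \frac{m_\mathrm{dust}}{m_\mathrm{avail}}
        = \frac{14.4~\mu\mathrm{g}}{15.3~\mu\mathrm{g}} \approx 0.94,
\end{align*}
```

with $p \approx 0.9$ and $d_\mathrm{layer} \approx 100~\mu$m taken from the electron microscope images as stated above.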
\section{DISCUSSION} The debate on the possible growth of dust grains in the ISM has arisen from the discrepancy between the time-scales estimated for injecting the grains of stellar origin into the ISM and for destroying them there. There are actually indications that the dominant source of dust in the ISM is the accretion of gas-phase species onto existing dust grains \citep[][and references therein]{Ginolfi18}. For instance, the formation and growth of refractory dust grains in the ISM is in agreement with the spatial variations observed in elemental depletion \citep{Draine79,Tielens98,Turner00,Draine09,Jenkins09,Whittet10}, and also with those observed in the Galactic extinction curve \citep{Hirashita12,Hirashita14}. The depletion of iron, which is thought to be the consequence of accretion onto grains in the ISM \citep{Dwek16}, is another indication of interstellar grain growth. Observations of galaxies in the early Universe have revealed dust masses that could not be explained when considering dust production by asymptotic giant branch (AGB) stars and supernovae (SNe) only, given the destruction caused by the shocks induced by the very same SNe \citep{Michalowski15}. Whether or not refractory dust mass is created by accretion of gas-phase species in the ISM, carbonaceous matter and silicates are found as mostly pure grains. Spectropolarimetric measurements of the mid-IR absorption bands that characterize the two materials \citep{Adamson99,Chiar06,Mason07,Li14} suggest their separation. The separation is also supported by the analysis of X-ray halos generated by the scattering of these energetic photons by the dust grains \citep{Dwek04,Smith06,Valencic15}. Analyses of the latter type, however, also suggest grain populations that include, in addition to pure grains, a fraction ($\sim$36\% in mass) of composite grains containing together silicate, refractory organic matter, and water ice \citep{Jin17,Jin19}. 
The relevance and results of our experiments with Ne ice matrices are examined with regard to cold interstellar conditions, in which dust formation may proceed according to the following scenarios. In the conditions of the cold H~{\small I} ($T$ $\sim$100~K, $n_\mathrm{H}$ = 30~cm$^{-3}$) and diffuse H$_2$ ($T$ $\sim$50~K, $n_\mathrm{H}$ $\sim$100~cm$^{-3}$) media, existing grains are bare, hence free atoms and molecules impinge on refractory surfaces. The surfaces consist of silicate, carbonaceous, or refractory organic matter. In a hypothetical grain growth scenario, atoms or molecules that stick on such surfaces can diffuse, react at a suitable site, and become a component of the refractory matter. The interstellar UV radiation field might maintain the reactivity of the surface of the grains. The accretion process can be facilitated by Coulomb interaction as small grains tend to be neutral or negatively charged whereas gas-phase atoms are more likely neutral or positively charged, including refractory Si, Fe, Mg, C, and O \citep[][and references therein]{Zhukovska16}. On the other hand, electron tunneling from negatively charged grains to incident cations can reduce the sticking probability \citep{Turner91}. While the chemically compatible species or precursors are incorporated, any other adsorbed substance is removed by sputtering, photodesorption, photolysis, or chemical reaction \citep{Barlow78,Draine09,Jenkins09}. For instance, the growth of amorphous silicate by accretion requires a mechanism that removes C atoms efficiently because they are much more abundant than Si atoms and SiO molecules. It could consist of the formation and desorption of small C-containing molecules, e.g., CH, since atomic hydrogen is orders of magnitude more abundant than any other species. Such a mechanism is expected in the case of oxide grains \citep{Duley79,Denison81}. 
The existence and efficiency of the processes depend on the regional conditions, the formation of a carbon mantle becoming possible as the density conditions shift progressively from diffuse to dense \citep[][and references therein]{Jones17}. In the dense ISM ($T$ = 10--50~K, $n_\mathrm{H}$ = 10$^3$--10$^6$~cm$^{-3}$), dust grains are covered in ices, composed essentially of H$_2$O, CO, and CO$_2$ molecules \citep[e.g.,][]{Gibb04,Boogert15}, in which the precursors of silicates and carbonaceous matter are embedded. The observation of ions in interstellar ices indicates the existence of bulk diffusion despite the low temperature \citep[][and references therein]{Cuppen17}. Species diffusing in the ice mantle can either reach the surface of the refractory core and become incorporated in case of chemical compatibility, or they can start to nucleate and form new refractory grains in the ice. The latter is the most probable scenario, but it depends on the morphology of the interstellar dust grains and the thickness as well as the structure of the ice. Ice chemistry is complex, however. Irradiation causes the chemical erosion of carbon covered with H$_2$O ice, producing CO and CO$_2$ molecules \citep{Fulvio12,Sabri15}. Thus it hinders the growth and the formation of carbon grains. Irradiation also induces the formation of complex organic molecules like aldehydes, sugars, carboxylic acids, and amino acids \citep[][and references therein]{Oeberg16}. When grains enter intercloud regions, ices are sublimated yet complex molecules remain as an organic refractory residue. The composite grains possibly revealed by some analyses of X-ray halos \citep{Jin17,Jin19} could emerge at that stage. While further irradiation converts the residue into carbonaceous material \citep{Jenniskens93}, grain-grain shocks separate the initial refractory core, which has possibly grown, and the solid nuclei formed in the ice. 
In our experiments, the Ne ice matrix can be seen either as the surface of a virtual refractory grain onto which the matrix-isolated atoms and molecules diffuse, or as the ice layer that covers grains in dense interstellar regions. In both scenarios, it acts as the energy sink that a bare grain or an ice mantle constitutes for an exothermic reaction. Only in this sense does the Ne matrix simulate the surface of a grain or bulk ice since it does not possess the catalytic or morphological properties of organic ices and refractory surfaces. With regard to bare refractory surfaces as found in the cold neutral ISM and diffuse ISM, our experiments show that chemical reactions proceed at cryogenic temperatures between SiO molecules, Mg and Fe atoms, or their oxides, and possibly SiO$_2$, to produce complex silicates. Simultaneously, carbon molecules assemble and combine into a solid carbonaceous matter. Still, the precursors of silicates do not react with those of carbonaceous matter. In the absence of selection mechanisms such as photodesorption or sputtering, silicate and carbon materials can stick to each other without combining chemically, as observed in our experiments. Assuming the Ne matrix can be likened to a refractory grain surface and its dopants to reactive sites, our experiments might support the hypothesis of grain growth in the cold neutral ISM and diffuse ISM, silicates and carbonaceous matter growing separately. While it would be favored by Coulomb interaction \citep{Zhukovska16}, the efficiency of the mechanism in its competition against destruction processes must still be evaluated. Comparing the Ne ice matrix with the ice layer that covers grains in dense interstellar regions, the experiments show that diffusing precursors eventually nucleate to form solid grains of complex silicates and carbonaceous matter at cryogenic temperature despite the presence of H$_2$O and CO molecules, and also CO$_2$. 
While the formation of carbonates is observed in our experiments, in interstellar ices they would be destroyed by UV irradiation \citep{Ciaravella18}. Since the condensates show silicate and carbon materials stuck to each other, one material may facilitate the condensation of the other, suggesting a possible interfacing of carbon and silicate materials. This is supported by the finding that reduced silicon and iron particles were detected in the complex silicates. The reduction process was triggered by reactions of silicate intermediates, such as (SiO)$_x$ oligomers or small grains, with carbon or CO, which led to the reduction of Si$^{4+}$, Fe$^{2+}$, and Fe$^{3+}$ and the oxidation of carbon (see Section~\ref{sec:TEM}). The results of our experiments with Ne ice support the notion of grain formation in interstellar ices. Nucleation of precursors is possible since diffusion can be observed in water ice \citep[e.g.,][]{Mispelaer13} and is expected in interstellar ices \citep{Cuppen17}. \section{Conclusions} Matrices of Ne ice have been doped, simultaneously, with atoms and molecules that are potential precursors of complex silicates (Mg, Fe, SiO, SiO$_2$) and of carbonaceous materials (C$_n$, $n$ = 2--10). They also contained ice-forming species (CO, CO$_2$, C$_3$O, and H$_2$O) as well as O$_3$. The annealing of the matrices showed the disappearance of the molecular bands and the rise of a broad feature at $\sim$10~$\mu$m in IR spectra measured at temperatures lower than 13~K, thus indicating the formation of amorphous silicate condensates at cryogenic temperatures. This was confirmed by ex-situ HRTEM and EDX spectroscopy analysis. Electron microscopy also revealed the condensation of amorphous carbon in parallel with that of the silicate material. Amorphous carbon was not detected with IR spectroscopy because it does not give rise to well-defined bands and the amount of material was too small for a noticeable effect on the baseline of the spectra. 
As a secondary result, the formation of carbonates was also observed, which we attribute to ices rich in carbon oxide molecules in addition to silicate precursors. Thus both silicate and carbonaceous materials can condense from cold precursors in the absence of radiation similar to interstellar UV photons and cosmic rays. This supports the hypothesis that dust grains are re-formed or grow in the ISM. Moreover, the present observations suggest that species from one of two groups that consist respectively of silicate precursors and carbonaceous matter precursors do not react at cryogenic temperatures with those belonging to the other group. Such a finding provides a clue to the separation between silicate and carbonaceous materials observed by astronomers. It is also consistent with the low abundance of SiC grains in the diffuse ISM. On the other hand, silicate precursors may react with carbon oxides in interstellar ices to produce carbonates. Although crystalline carbonates were found in dust shells of evolved stars \citep{Kemper02}, interstellar amorphous carbonates have not been identified. The results do not allow us to determine whether the formation and/or the growth of dust grains take place exclusively or even preferentially in the cold neutral ISM or in the diffuse ISM through an ice-free process, or in dense clouds where ices would play a role. Experiments with bare and ice-covered refractory surfaces exposed to UV photons are required for this purpose. \acknowledgments The authors acknowledge the support of the Deutsche Forschungsgemeinschaft through project No. JA 2107/2-2 within the framework of the Priority Program 1573 ``Physics of the Interstellar Medium''. They are most grateful to the anonymous reviewers for comments and suggestions that helped to considerably improve the manuscript. \bibliographystyle{aasjournal}
\section{Introduction} \label{sec:Intro} It is possible for new phenomena (NP) beyond the standard model (SM) of particle physics to be observed either directly or indirectly, \ie, through their influence on other physics processes. Indirect searches for NP generally proceed by comparing experimental results with theoretical predictions in the production or decay of known particles. The study of flavor-changing neutral-current decays of b hadrons such as \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace ($\cPKstz$ indicates the $\PKst{}^0$ and charge conjugate states are implied in what follows, unless explicitly stated otherwise) is particularly fertile for new phenomena searches, given the modest theoretical uncertainties in the predictions and the low rate as the decay is forbidden at tree level in the SM\@. On the theoretical side, great progress has been made since the first calculations of the branching fraction~\cite{Deshpande:1988mg,Deshpande:1988bd,Lim:1988yu,Grinstein:1988me}, the forward-backward asymmetry of the muons, $A_\mathrm{FB}$~\cite{Ali:1991is}, and the longitudinal polarization fraction of the $\cPKstz$, $F_L$~\cite{Kruger:1999xa,Kim:2000dq,Yan:2000dc,Aliev:2001fc,Beneke:2001at,Chen:2002zk}. Robust calculations of these variables~\cite{Bobeth:2010wg, Bobeth:2011nj, Bobeth:2012vn, Ali:2006ew, Altmannshofer:2008dz, Altmannshofer:2011gn, Jager:2012uw, Descotes-Genon:2013vna} are now available for much of the phase space of this decay, and it is clear that new physics could give rise to readily observable effects~\cite{Altmannshofer:2008dz, Melikhov:1998cd, Ali:1999mm, Yan:2000dc, Buchalla:2000sk, Feldmann:2002iw, Hiller:2003js, Kruger:2005ep, Hovhannisyan:2007pb, Egede:2008uy, Hurth:2008jc, Alok:2009tz, Alok:2010zd, Chang:2010zy, DescotesGenon:2011yn, Matias:2012xw,DescotesGenon:2012zf}. Finally, this decay mode is relatively easy to select and reconstruct at hadron colliders. 
The quantities $A_\mathrm{FB}$ and $F_L$ can be measured as a function of the dimuon invariant mass squared $(q^2)$ and compared to SM predictions~\cite{Bobeth:2012vn}. Deviations from the SM predictions can indicate new physics. For example, in the minimal supersymmetric standard model (MSSM) modified with minimal flavor violation, called flavor blind MSSM (FBMSSM), effects can arise through NP contributions to the Wilson coefficient $C_7$~\cite{Altmannshofer:2008dz}. Another NP example is the MSSM with generic flavor-violating and CP-violating soft SUSY-breaking terms (GMSSM), in which the Wilson coefficients $C_7$, $C^\prime_7$, and $C_{10}$ can receive contributions~\cite{Altmannshofer:2008dz}. As shown there, these NP contributions can dramatically affect the $A_\mathrm{FB}$ distribution (note that the variable $S^s_6$ defined in Ref.~\cite{Altmannshofer:2008dz} is related to $A_\mathrm{FB}$ measured in this paper by $S^s_6 = -{\frac{4}{3}}A_\mathrm{FB}$), indicating that precision measurements of $A_\mathrm{FB}$ can be used to identify or constrain new physics. While previous measurements by BaBar, Belle, CDF, and LHCb are consistent with the SM~\cite{BaBar, Belle, CDF, LHCb}, these measurements are still statistically limited, and more precise measurements offer the possibility to uncover physics beyond the SM\@. In this Letter, we present measurements of $A_\mathrm{FB}$, $F_L$, and the differential branching fraction $\rd{}\mathcal{B}/\rd{}q^2$ from \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace decays, using data collected from pp collisions at the Large Hadron Collider (LHC) with the Compact Muon Solenoid (CMS) experiment in 2011 at a center-of-mass energy of 7\TeV. The analyzed data correspond to an integrated luminosity of $5.2\pm0.1\fbinv$~\cite{LUMI}. The $\cPKstz$ is reconstructed through its decay to $\PKp\Pgpm$ and the $\PBz$ is reconstructed by fitting the two identified muon tracks and the two hadron tracks to a common vertex. 
The values of $A_\mathrm{FB}$ and $F_L$ are measured by fitting the distribution of events as a function of two angular variables: the angle between the positively charged muon and the $\PBz$ in the dimuon rest frame, and the angle between the kaon and the $\PBz$ in the $\cPKstz$ rest frame. All measurements are performed in $q^2$ bins from 1 to $19\GeV^2$. The $q^2$ bins $8.68<q^2<10.09\GeV^2$ and $12.90<q^2<14.18\GeV^2$, corresponding to the \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace decays ($\psi'$ indicates the \Pgy\ in what follows), respectively, are both used to validate the analysis, and the former is used to normalize the branching fraction measurement. \section{CMS detector} \label{sec:Detector} A detailed description of the CMS detector can be found elsewhere~\cite{CMS}. The main detector components used in this analysis are the silicon tracker and the muon detection systems. The silicon tracker measures charged particles within the pseudorapidity range $\abs{\eta}<2.4$, where $\eta = -\ln[\tan(\theta/2)]$ and $\theta$ is the polar angle of the track relative to the beam direction. It consists of 1440 silicon pixel and 15\,148 silicon strip detector modules and is located in the 3.8\unit{T} field of the superconducting solenoid. The reconstructed tracks have a transverse impact parameter resolution ranging from ${\approx} 100\micron$ to ${\approx} 20\micron$ as the transverse momentum of the track (\pt) increases from 1\GeV to 10\GeV. In the same \pt regime, the momentum resolution is better than 1\% in the central region, increasing to 2\% at $\eta \approx 2$, while the track reconstruction efficiency is nearly 100\% for muons with $\abs{\eta}<2.4$ and varies from ${\approx} 95\%$ at $\eta=0$ to ${\approx} 85\%$ at $\abs{\eta}=2.4$ for hadrons. 
Muons are measured in the pseudorapidity range $\abs{\eta}<2.4$, with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive-plate chambers, all of which are sandwiched between the solenoid flux return steel plates. Events are selected with a two-level trigger system. The first level is composed of custom hardware processors and uses information from the calorimeters and muon systems to select the most interesting events. The high-level trigger processor farm further decreases the event rate from nearly 100\unit{kHz} to around 350\unit{Hz} before data storage. \section{Reconstruction, event selection, and efficiency} \label{sec:Selection} The signal (\ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace) and normalization/control samples (\ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace) were recorded with the same trigger, requiring two identified muons of opposite charge to form a vertex that is displaced from the pp collision region (beamspot). The beamspot position and size were continuously measured from Gaussian fits to reconstructed vertices as part of the online data quality monitoring. Five dimuon trigger configurations were used during 2011 data taking with increasingly stringent requirements to maintain an acceptable trigger rate as the instantaneous luminosity increased. For all triggers, the separation between the beamspot and the dimuon vertex in the transverse plane was required to be larger than three times the sum in quadrature of the distance uncertainty and the beamspot size. In addition, the cosine of the angle between the dimuon momentum vector and the vector from the beamspot to the dimuon vertex in the transverse plane was required to be greater than 0.9. 
More than 95\% of the data were collected with triggers that required single-muon pseudorapidity of $\abs{\eta(\mu)}<2.2$ for both muons, dimuon transverse momentum of $\pt(\mu\mu)>6.9\GeV$, single-muon transverse momentum for both muons of $\pt(\mu)>3.0,$ 4.0, 4.5, 5.0\GeV (depending on the trigger), and the corresponding vertex fit probability of $\chi^2_\text{prob} > 5\%$, 15\%, 15\%, 15\%. The remaining data were obtained from a trigger with requirements of $|\eta(\mu)|<2.5$, $\chi^2_\text{prob}>0.16\%$, and $\pt(\mu\mu)>6.5\GeV$. The events used in this analysis passed at least one of the five triggers. The decay modes used in this analysis require two reconstructed muons and two charged hadrons, obtained from offline reconstruction. The reconstructed muons are required to match the muons that triggered the event readout and to pass several muon identification requirements, namely a track matched with at least one muon segment, a track fit $\chi^2$ per degree of freedom less than 1.8, at least 11 hits in the tracker with at least 2 from the pixel detector, and a transverse (longitudinal) impact parameter less than 3\cm (30\cm). The reconstructed dimuon system is further required to satisfy the same requirements as were used in the trigger. In events where multiple trigger configurations are satisfied, the requirements associated with the loosest trigger are used. While the muon requirements are based on the trigger and a CMS standard selection, most of the remaining selection criteria are optimized by maximizing $S/\sqrt{S+B}$, where $S$ is the expected signal yield from Monte Carlo (MC) simulations and $B$ is the background estimated from invariant-mass sidebands in data, defined as ${>}3\sigma_{m(\PBz)}$ and ${<}5.5\sigma_{m(\PBz)}$ from the $\PBz$ mass~\cite{PDG}, where $\sigma_{m(\PBz)}$ is the average $\PBz$ mass resolution of $44\MeV$. 
The optimization is performed on one trigger sample, corresponding to an integrated luminosity of 2.7\fbinv, requiring $1.0<q^2<7.3\GeV^2$ or $16<q^2<19\GeV^2$ to avoid $\cPJgy$ and $\psi'$ contributions. The hadron tracks are required to fail the muon identification criteria, and have $\pt(\text{h})>0.75\GeV$ and an extrapolated distance of closest approach to the beamspot in the transverse plane greater than 1.3 times the sum in quadrature of the distance uncertainty and the beamspot transverse size. The two hadrons must have an invariant mass within 80\MeV of the nominal $\cPKstz$ mass for either the $\PKp\Pgpm$ or $\PKm\Pgpp$ hypothesis. To remove contamination from $\phi$ decays, the hadron-pair invariant mass must be greater than 1.035\GeV when the charged \PK\ mass is assigned to both hadron tracks. The $\PBz$ candidates are obtained by fitting the four charged tracks to a common vertex and applying a vertex constraint to improve the resolution of the track parameters. The $\PBz$ candidates must have $\pt(\PBz)>8\GeV$, $\abs{\eta(\PBz)}<2.2$, vertex fit probability $\chi^2_\text{prob} > 9\%$, vertex transverse separation from the beamspot greater than 12 times the sum in quadrature of the separation uncertainty and the beamspot transverse size, and $\cos{\alpha_{xy}}>0.9994$, where $\alpha_{xy}$ is the angle, in the transverse plane, between the $\PBz$ momentum vector and the line-of-flight between the beamspot and the $\PBz$ vertex. The invariant mass of the four-track vertex must also be within 280\MeV of the world-average $\PBz$ mass for either the $\PKm\Pgpp\Pgmp\Pgmm$ or $\PKp\Pgpm\Pgmp\Pgmm$ hypothesis. This selection results in an average of 1.06 candidates per event in which at least one candidate is found. A single candidate is chosen from each event based on the best $\PBz$ vertex fit $\chi^2$.
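The cut optimization described above maximizes $S/\sqrt{S+B}$ over candidate cut values. A minimal illustrative sketch follows (the helper names and the toy yield model are hypothetical, not the analysis code; in the analysis, $S$ comes from signal MC and $B$ from data mass sidebands):

```python
import math

def significance(s, b):
    """Figure of merit S/sqrt(S+B) used to rank candidate cut values."""
    return s / math.sqrt(s + b)

def optimize_cut(cut_values, yields):
    """Return the cut value that maximizes S/sqrt(S+B).

    `yields` maps a candidate cut value to the expected (S, B) surviving
    that cut; here it is a stand-in for the MC/sideband yield estimates."""
    return max(cut_values, key=lambda c: significance(*yields(c)))

# Toy model: background falls faster than signal as the cut tightens,
# so the figure of merit peaks at an intermediate cut value.
toy = lambda c: (100 * math.exp(-0.2 * c), 500 * math.exp(-c))
best = optimize_cut(range(6), toy)
```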
The four-track vertex candidate is identified as a $\PBz \big(\PaBz\big)$ if the $\PKp\Pgpm \big(\PKm\Pgpp\big)$ invariant mass is closest to the nominal $\cPKstz$ mass. In cases where both $\PK\Pgp$ combinations are within 50\MeV of the nominal $\cPKstz$ mass, the event is rejected since no clear identification is possible owing to the 50\MeV natural width of the $\cPKstz$. The fraction of candidates assigned the incorrect state is estimated from simulations to be 8\%. From the retained events, the dimuon invariant mass $q$ and its corresponding calculated uncertainty $\sigma_{q}$ are used to distinguish between the signal and normalization/control samples. The \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace samples are defined as $m_{\cPJgy}-5\sigma_{q} < q < m_{\cPJgy}+3\sigma_{q}$ and $\abs{q - m_{\psi'}} < 3\sigma_{q}$, respectively, where $m_{\cPJgy}$ and $m_{\psi'}$ are the world-average mass values. The asymmetric selection of the $\cPJgy$ sample is due to the radiative tail in the dimuon spectrum, while the smaller signal in the $\psi'$ mode made an asymmetric selection unnecessary. The signal sample is the complement of the $\cPJgy$ and $\psi'$ samples. The global efficiency, $\epsilon$, is the product of the acceptance and the trigger, reconstruction, and selection efficiencies, all of which are obtained from MC simulations. The pp collisions are simulated using \PYTHIA~\cite{Pythia} version 6.424, the unstable particles are decayed by \EVTGEN~\cite{EvtGen} version 9.1 (using the default matrix element for the signal), and the particles are traced through a detailed model of the detector with \GEANTfour~\cite{Geant4}. The reconstruction and event selection for the generated samples proceed as for the data events. 
Three simulation samples were created in which the $\PBz$ was forced to decay to \ensuremath{\PBz\to\cPKstz(\PKp \Pgpm) \Pgmp \Pgmm}\xspace, \ensuremath{\PBz\to\cPKstz(\PKp \Pgpm) \cPJgy(\Pgmp \Pgmm)}\xspace, or \ensuremath{\PBz\to\cPKstz(\PKp \Pgpm) \psi'(\Pgmp \Pgmm)}\xspace. The acceptance is calculated as the fraction of events passing the single-muon cuts of $\pt(\mu)>2.8\GeV$ and $\abs{\eta(\mu)}<2.3$ relative to all events with a $\PBz$ in the event with $\pt(\PBz)>8\GeV$ and $\abs{\eta(\PBz)}<2.2$. The acceptance is obtained from the generated events before the particle tracing with \GEANTfour. To obtain the reconstruction and selection efficiency, the MC simulation events are divided into five samples, appropriately sized to match the amount of data taken with each of the five triggers. In each of the five samples, the appropriate trigger and matching offline event selection is applied. Furthermore, each of the five samples is reweighted to obtain the correct distribution of pileup events (additional pp collisions in the same bunch crossing as the collision that produced the $\PBz$ candidate), corresponding to the data period during which the trigger was active. The reconstruction and selection efficiency is the ratio of the number of events that pass all the selections and have a reconstructed $\PBz$ compatible with the generated $\PBz$ in the event relative to the number of events that pass the acceptance criteria. The compatibility of generated and reconstructed particles is enforced by requiring the reconstructed $\PKp$, $\Pgpm$, $\Pgmp$, and $\Pgmm$ to have $\sqrt{(\Delta \eta)^2 + (\Delta \varphi)^2} < 0.3$ for hadrons and 0.004 for muons, where $\Delta \eta$ and $\Delta \varphi$ are the differences in $\eta$ and $\varphi$ between the reconstructed and generated particles, and $\varphi$ is the azimuthal angle in the plane perpendicular to the beam direction. The efficiency and purity of this compatibility requirement are greater than 99\%.
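The generator-level matching requirement can be sketched as follows (illustrative only; the helper names are hypothetical, and the $\varphi$ difference must be wrapped into $(-\pi,\pi]$ before the quadrature sum):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """sqrt((d eta)^2 + (d phi)^2), with d phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_matched(rec, gen, is_muon):
    """Reconstructed-to-generated compatibility: dR < 0.004 for muons,
    dR < 0.3 for hadrons, as in the selection described above."""
    (eta_r, phi_r), (eta_g, phi_g) = rec, gen
    return delta_r(eta_r, phi_r, eta_g, phi_g) < (0.004 if is_muon else 0.3)
```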
\section{Analysis method} \label{sec:Analysis} The analysis measures $A_\mathrm{FB}$, $F_L$, and $\rd{}\mathcal{B}/\rd{}q^2$ of the decay \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace as a function of $q^2$. Figure~\ref{fig:ske} shows the relevant angular observables needed to define the decay: $\theta_\PK$ is the angle between the kaon momentum and the direction opposite to the $\PBz$ $\big(\PaBz\big)$ in the $\cPKstz$ $\big(\cPAKstz\big)$ rest frame, $\theta_l$ is the angle between the positive (negative) muon momentum and the direction opposite to the $\PBz$ $\big(\PaBz\big)$ in the dimuon rest frame, and $\phi$ is the angle between the plane containing the two muons and the plane containing the kaon and pion. Since the extracted angular parameters $A_\mathrm{FB}$ and $F_L$ and the acceptance times efficiency do not depend on $\phi$, $\phi$ is integrated out. Although the $\PKp\Pgpm$ invariant mass must be consistent with a $\cPKstz$, there can be contributions from a spinless (S-wave) $\PKp\Pgpm$ combination~\cite{Becirevic:2012dp,Matias:2012qz,Blake:Swave}. This is parametrized with two terms related to the S-wave fraction, $F_S$, and the interference amplitude between the S-wave and P-wave decays, $A_S$. Including this component, the angular distribution of \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace can be written as~\cite{Blake:Swave}: \ifthenelse{\boolean{cms@external}}{ \begin{multline}\label{eq:angALL} \frac{1}{\Gamma}\frac{\rd{}^3\Gamma}{\rd{}\cos\theta_\PK\, \rd{}\cos\theta_l\, \rd{}q^2} = \\ \begin{aligned} \qquad&\frac{9}{16} \left\lbrace \left[ \frac{2}{3} F_S + \frac{4}{3} A_S \cos\theta_\PK \right] \left(1 - \cos^2\theta_l\right) \right. \\ & \left. +\; \left(1 - F_S\right) \Bigl[2 F_L \cos^2\theta_\PK \left(1 - \cos^2\theta_l\right) \right. \\ & \left. +\; \frac{1}{2} \left(1 - F_L\right) \left(1 - \cos^2\theta_\PK\right) \left(1 + \cos^2\theta_l\right) \right. \\ & \left. 
+\; \frac{4}{3} A_\mathrm{FB} \left(1 - \cos^2\theta_\PK\right) \cos\theta_l\Bigr] \right\rbrace. \end{aligned} \end{multline} }{ \begin{equation}\label{eq:angALL} \begin{split} \frac{1}{\Gamma}\frac{\rd{}^3\Gamma}{\rd{}\!\cos\theta_\PK\, \rd{}\!\cos\theta_l\, \rd{}\!q^2} &= \frac{9}{16} \left\lbrace \left[ \frac{2}{3} F_S + \frac{4}{3} A_S \cos\theta_\PK \right] \left(1 - \cos^2\theta_l\right) \right. \\ & \left. + \left(1 - F_S\right) \Bigl[2 F_L \cos^2\theta_\PK \left(1 - \cos^2\theta_l\right) \right. \\ & \left. + \frac{1}{2} \left(1 - F_L\right) \left(1 - \cos^2\theta_\PK\right) \left(1 + \cos^2\theta_l\right) \right. \\ & \left. + \frac{4}{3} A_\mathrm{FB} \left(1 - \cos^2\theta_\PK\right) \cos\theta_l\Bigr] \right\rbrace. \end{split} \end{equation} } \begin{figure}[bht] \begin{center} \includegraphics[width=\cmsFigWidth]{SketchDecay.pdf} \caption{Sketch showing the definition of the angular observables for the decay \ensuremath{\PBz\to\cPKstz(\PKp \Pgpm) \Pgmp \Pgmm}\xspace.} \label{fig:ske} \end{center} \end{figure} The main results of the analysis are extracted from unbinned extended maximum-likelihood fits in bins of $q^2$ to three variables: the $\PKp\Pgpm\Pgmp\Pgmm$ invariant mass and the two angular variables ${\theta_\PK}$ and ${\theta_l}$. For each $q^2$ bin, the probability density function (PDF) has the following expression: \ifthenelse{\boolean{cms@external}}{ \begin{multline} \label{eq:PDF} \text{PDF}(m,\cos\theta_\PK,\cos\theta_l) = \\ \begin{aligned} \qquad&Y_{S} \cdot S(m) \cdot S(\cos\theta_\PK,\cos\theta_l) \cdot \epsilon(\cos\theta_\PK,\cos\theta_l) \\ & + Y_{Bc} \cdot B_{c}(m) \cdot B_{c}(\cos\theta_\PK) \cdot B_{c}(\cos\theta_l) \\ & + Y_{Bp} \cdot B_{p}(m) \cdot B_{p}(\cos\theta_\PK) \cdot B_{p}(\cos\theta_l). 
\end{aligned} \end{multline} }{ \begin{equation} \label{eq:PDF} \begin{split} \text{PDF}(m,\cos\theta_\PK,\cos\theta_l) &= Y_{S} \cdot S(m) \cdot S(\cos\theta_\PK,\cos\theta_l) \cdot \epsilon(\cos\theta_\PK,\cos\theta_l) \\ & + Y_{Bc} \cdot B_{c}(m) \cdot B_{c}(\cos\theta_\PK) \cdot B_{c}(\cos\theta_l) \\ & + Y_{Bp} \cdot B_{p}(m) \cdot B_{p}(\cos\theta_\PK) \cdot B_{p}(\cos\theta_l). \end{split} \end{equation} } The signal yield is given by the free parameter $Y_S$. The signal shape is described by the product of a function $S(m)$ of the invariant mass variable, the theoretical signal shape as a function of two angular variables, $S(\cos\theta_\PK,\cos\theta_l)$, and the efficiency as a function of the same two variables, $\epsilon(\cos\theta_\PK,\cos\theta_l)$. The signal mass shape $S(m)$ is the sum of two Gaussian functions with a common mean. While the mean is free to float, the two resolution parameters and the relative fraction are fixed to the result from a fit to the simulated events. The signal angular function $S(\cos\theta_\PK,\cos\theta_l)$ is given by Eq.~(\ref{eq:angALL}). The efficiency function $\epsilon(\cos\theta_\PK,\cos\theta_l)$, which also accounts for mistagging of a $\PBz$ as a $\PaBz$ (and vice versa), is obtained by fitting the two-dimensional efficiency histograms (6 $\cos\theta_\PK$ bins and 5 $\cos\theta_l$ bins) to polynomials in $\cos\theta_\PK$ and $\cos\theta_l$. The $\cos\theta_\PK$ polynomial is degree 3, while the $\cos\theta_l$ polynomial is degree 6, with the 1st and 5th orders removed, as these were the simplest polynomials that adequately described the efficiency in all bins. For some $q^2$ bins, simpler polynomials are used as they are sufficient to describe the data. There are two contributions to the background, with yields given by $Y_{Bp}$ for the ``peaking'' background and $Y_{Bc}$ for the ``combinatorial'' background. 
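As a numerical cross-check of Eq.~(\ref{eq:angALL}), the angular distribution integrates to unity over $(\cos\theta_\PK,\cos\theta_l)\in[-1,1]^2$ for any parameter values, since the $A_S$ and $A_\mathrm{FB}$ terms are odd in $\cos\theta_\PK$ and $\cos\theta_l$, respectively. A minimal sketch (illustrative, not the analysis code):

```python
def angular_rate(ck, cl, fl, afb, fs=0.0, a_s=0.0):
    """Angular distribution of Eq. (1): ck = cos(theta_K), cl = cos(theta_l)."""
    return (9.0 / 16.0) * (
        (2.0 / 3.0 * fs + 4.0 / 3.0 * a_s * ck) * (1.0 - cl**2)
        + (1.0 - fs) * (
            2.0 * fl * ck**2 * (1.0 - cl**2)
            + 0.5 * (1.0 - fl) * (1.0 - ck**2) * (1.0 + cl**2)
            + 4.0 / 3.0 * afb * (1.0 - ck**2) * cl))

def integrate(fl, afb, fs=0.0, a_s=0.0, n=200):
    """Midpoint-rule integral over [-1, 1]^2; should be ~1 for any parameters."""
    h = 2.0 / n
    grid = [-1.0 + (i + 0.5) * h for i in range(n)]
    return sum(angular_rate(ck, cl, fl, afb, fs, a_s) * h * h
               for ck in grid for cl in grid)
```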
The peaking background is due to the remaining \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace decays, not removed by the dimuon mass or $q^2$ requirements. For these events, the dimuon mass is reconstructed far from the true $\cPJgy$ or $\psi'$ mass, which results in a reconstructed $\PBz$ mass similarly displaced from the true $\PBz$ mass. The shapes of this background in the mass, $B_{p}(m)$, and angular variables, $B_{p}(\cos\theta_\PK)$ and $B_{p}(\cos\theta_l)$, are obtained from simulation of \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace events, fit to the sum of two Gaussian functions in mass and polynomials in $\cos{\theta_\PK}$ and $\cos{\theta_l}$. The background yield is also obtained from simulation, properly normalized by comparing the reconstructed \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace yields in data and MC simulation. The remaining background, combinatorial in nature, is described by a single exponential in mass, $B_{c}(m)$, and a polynomial in each angular variable, $B_{c}(\cos\theta_\PK)$ and $B_{c}(\cos\theta_l)$, varying between degree 0 and 4, as needed to describe the data. The results of the fit in each $q^2$ bin (including the $\cPJgy$ and $\psi'$ bins) are $A_\mathrm{FB}$ and $F_L$. In the fits to the data, the yield $Y_{Bp}$ and all but one of the parameters that define the shapes of $S(m)$, $B_{p}(m)$, $B_{p}(\cos\theta_\PK)$, and $B_{p}(\cos\theta_l)$ are initially set to the values obtained from simulation, with a Gaussian constraint defined by the uncertainty found in the fit to the simulated events. The $S(m)$ mass parameter is not constrained. The first fit to the data is to the control samples: \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace. 
The values for $F_S$ and $A_S$ from the \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace fit are used in the signal $q^2$ bins, with Gaussian constraints defined by the uncertainties from the fit. The longitudinal polarization fraction $F_L$ and the scalar fraction $F_S$ are constrained to lie in the physical region of 0 to 1. In addition, penalty terms are added to ensure that $\left| A_\mathrm{FB} \right| < \frac{3}{4}\left(1-F_L\right)$ and $\left| A_S \right| < \frac{1}{2}\left[F_S + 3F_L\left(1-F_S\right)\right]$, which are necessary to avoid a negative decay rate. The differential branching fraction, $\rd{}\mathcal{B}/\rd{}q^2$, is measured relative to the normalization channel \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace using \begin{equation} \label{eq:BF} \frac{\rd{}\mathcal{B}\left(\ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace\right)}{\rd{}q^2} = \frac{Y_{S}}{Y_N} \frac{\epsilon_N}{\epsilon_{S}} \frac{\rd{}\mathcal{B}\left(\ensuremath{\PBz\to\cPKstz \cPJgy}\xspace\right)}{\rd{}q^2}, \end{equation} where $Y_{S}$ and $Y_N$ are the yields of the signal and normalization channels, respectively, $\epsilon_{S}$ and $\epsilon_N$ are the efficiencies of the signal and normalization channels, respectively, and $\mathcal{B}\left(\ensuremath{\PBz\to\cPKstz \cPJgy}\xspace \right)$ is the world-average branching fraction for the normalization channel~\cite{PDG}. The yields are obtained with fits to the invariant-mass distributions and the efficiencies are obtained by integrating over the angular variables using the values obtained from the previously described fits. Three methods are used to validate the fit formalism and results. First, 1000 pseudo-experiment samples are generated in each $q^2$ bin using the PDF in Eq.~(\ref{eq:PDF}). The log-likelihood values obtained from the fits to the data are consistent with the distributions from the pseudo-experiments, indicating an acceptable goodness of fit. 
The pull distributions obtained from the pseudo-experiments indicate the uncertainties returned by the fit are generally overestimated by 0--10\%. No attempt is made to correct the experimental uncertainties for this effect. Second, a fit is performed to a sample of MC simulation events that approximates the data sample in size and composition. The MC simulation sample contains a data-like mixture of four types of events. Three of the event types are generated and simulated \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace, \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace, and \ensuremath{\PBz\to\cPKstz \psi'}\xspace decays. The last event type is the combinatorial background, which is generated based on the PDF in Eq.~(\ref{eq:PDF}). Third, the fit is performed on the normalization/control samples and the results compared to the known values. Biases observed from these three checks are treated as systematic uncertainties, as described in Section~\ref{sec:Systematics}. \section{Systematic uncertainties} \label{sec:Systematics} A variety of systematic effects are investigated and the impacts on the measurements of $F_L$, $A_\mathrm{FB}$, and $\rd{}\mathcal{B}/\rd{}q^2$ are evaluated. The finite sizes of the MC simulation samples used to measure the efficiency introduce a systematic uncertainty of a statistical nature. Alternative efficiency functions are created by randomly varying the parameters of the efficiency polynomials within the fitted uncertainties for the MC samples. The alternative efficiency functions are applied to the data and the root-mean-squares of the returned values taken as the systematic uncertainty. The fit algorithm is validated by performing 1000 pseudo-experiments, generated and fit with the PDF of Eq.~(\ref{eq:PDF}). The average deviation of the 1000 pseudo-experiments from the expected mean is taken as the systematic uncertainty associated with possible bias from the fit algorithm.
This bias is less than half of the statistical uncertainty for all measurements. Discrepancies between the functions used in the PDF and the true distribution can also give rise to biases. To evaluate this effect, a MC simulation sample similar in size and composition to the analyzed data set is fit using the PDF of Eq.~(\ref{eq:PDF}). The differences between the fitted values and the true values are taken as the systematic uncertainties associated with the fit ingredients. Mistagging a $\PBz$ as a $\PaBz$ (and vice versa) worsens the measured $\PBz$ mass resolution. A comparison of resolutions for data and MC simulations (varying the mistag rates in the simulation) indicates the mistag rate may be as high as 12\%, compared to the value of 8\% determined from simulation. The systematic uncertainty in the mistag rate is obtained from the difference in the final measurements when these two values are used. The systematic uncertainty related to the contribution from the $\PK\Pgp$ S-wave (and interference with the P-wave) is evaluated by taking the difference between the default results, obtained by fitting with a function accounting for the S-wave (Eq.~(\ref{eq:angALL})), with the results from a fit performed with no S-wave or interference terms ($F_S=A_S=0$ in Eq.~(\ref{eq:angALL})). Variations of the background PDF shapes, versus mass and angles, are used to estimate the effect from the choice of PDF shapes. The mass-shape parameters of the peaking background, normally taken from a fit to the simulation, are left free in the data fit and the difference adopted as a systematic uncertainty. The degree of the polynomials used to fit the angular shapes of the combinatorial background are increased by one and the difference taken as a systematic uncertainty. In addition, the difference in results obtained by fitting the mass-shape parameters using the data, rather than using the result from simulations, is taken as the signal mass-shape systematic uncertainty. 
The effect of the experimental resolution of $\cos\theta_\PK$ and $\cos\theta_l$ is estimated as the difference, when significant, of the returned values for $A_\mathrm{FB}$ and $F_L$ when the reconstructed or generated values of $\cos\theta_\PK$ and $\cos\theta_l$ are used. The effect of the dimuon mass resolution is found to be negligible. A possible difference between the efficiency computed with the simulation and the true efficiency in data is tested by comparing the measurements of known observables between data and simulation using the control channels. The differences in the measurements of $F_L$ and $A_\mathrm{FB}$ are computed using the \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace decay. For the differential branching fraction measurement, the systematic uncertainty is estimated using the ratio of branching fractions $\mathcal{B}(\ensuremath{\PBz\to\cPKstz \cPJgy(\Pgmp \Pgmm)}\xspace)/\mathcal{B}(\ensuremath{\PBz\to\cPKstz \psi'(\Pgmp \Pgmm)}\xspace)$, where our measured value of $15.5 \pm 0.4$ (statistical uncertainty only) is in agreement with the most-precise previously published value of $16.2 \pm 0.5 \pm 0.3$~\cite{Aaij:2012dda}. We use the difference of 4.3\% between these two measurements as an estimate of the systematic uncertainty from possible $q^2$-dependent efficiency mismodeling. For the branching fraction measurement, a common normalization systematic uncertainty of 4.6\% arises from the branching fractions of the normalization mode (\ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and $\cPJgy\to\Pgmp\Pgmm$)~\cite{PDG}. Finally, variation of the number of pileup collisions is found to have no effect on the results. The systematic uncertainties are measured and applied in each $q^2$ bin, with the total systematic uncertainty obtained by adding in quadrature the individual contributions. A summary of the systematic uncertainties is given in Table~\ref{tab:sys}; the ranges give the variation over the $q^2$ bins.
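The quoted 4.3\% follows directly from the two ratio measurements; as a worked arithmetic check (hypothetical helper name):

```python
def relative_difference(measured, reference):
    """|measured - reference| / reference, expressed as a fraction."""
    return abs(measured - reference) / reference

# B(B0 -> K*0 J/psi) / B(B0 -> K*0 psi'): 15.5 (this analysis) vs 16.2 (LHCb).
efficiency_syst = relative_difference(15.5, 16.2)  # ~0.043, i.e. the 4.3% quoted
```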
\begin{table*}[bth] \centering \topcaption{Systematic uncertainty contributions for the measurements of $F_L$, $A_\mathrm{FB}$, and $\rd{}\mathcal{B}/\rd{}q^2$. The $F_L$ and $A_\mathrm{FB}$ uncertainties are absolute values, while the $\rd{}\mathcal{B}/\rd{}q^2$ uncertainties are relative to the measured value. The ranges given refer to the variations over the $q^2$ bins.\label{tab:sys}} \begin{tabular}{lD{,}{\text{--}}{2.3}D{,}{\text{--}}{2.3}D{,}{\text{--}}{2.3}} Systematic uncertainty & \multicolumn{1}{c}{$F_L \left(10^{-3}\right)$} & \multicolumn{1}{c}{$A_\mathrm{FB} \left(10^{-3}\right)$} & \multicolumn{1}{c}{$\rd{}\mathcal{B}/\rd{}q^2 (\%)$} \\ \hline Efficiency statistical uncertainty & 5 , 7 & 3 , 5 & \multicolumn{1}{c}{1} \\ Potential bias from fit algorithm & 3 , 40 & 12 , 77 & 0 , 2.7 \\ Potential bias from fit ingredients & \multicolumn{1}{c}{0} & 0 , 17 & 0 , 7.1 \\ Incorrect CP assignment of decay & 2 , 6 & 2 , 6 & \multicolumn{1}{c}{0} \\ Effect of $\PK\Pgp$ S-wave contribution & 5 , 23 & 6 , 14 & \multicolumn{1}{c}{5} \\ Peaking background mass shape & 0 , 26 & 0 , 8 & 0 , 15 \\ Background shapes vs.\ $\cos\theta_{L,K}$ & 3 , 180 & 4 , 160 & 0 , 3.3 \\ Signal mass shape & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{0.9} \\ Angular resolution & 0 , 19 & \multicolumn{1}{c}{0} & \multicolumn{1}{c}{0} \\ Efficiency shape & \multicolumn{1}{c}{16} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{4.3} \\ Normalization to \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{4.6} \\ \hline Total systematic uncertainty & 31 , 190 & 18 , 180 & 8.6 , 17 \\ \end{tabular} \end{table*} \section{Results} \label{sec:Results} The $\PKp\Pgpm\Pgmp\Pgmm$ invariant-mass, $\cos\theta_\PK$, and $\cos\theta_l$ distributions for the $q^2$ bin corresponding to the \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace decay are shown in Fig.~\ref{fig:resPsiFlAfb}, along with the projection of the 
maximum-likelihood fit described in Section~\ref{sec:Analysis}. The results are used to validate the fitting procedure and obtain the values for $F_S$ and $A_S$ used in the fits to the signal $q^2$ bins. From 47\,000 signal events, the fitted values are $F_L = 0.554 \pm 0.004$, $A_\mathrm{FB} = -0.004 \pm 0.004$, $F_S = 0.01 \pm 0.01$, and $A_S = -0.10 \pm 0.01$, where the uncertainties are statistical. Considering also the typical systematic uncertainties (Table~\ref{tab:sys}), the result for $F_L$ is compatible with the world-average value of $0.570 \pm 0.008$~\cite{PDG}, while the value for $A_\mathrm{FB}$ is consistent with the expected result of no asymmetry. The same fit is performed for the \ensuremath{\PBz\to\cPKstz \psi'}\xspace $q^2$ bin, where 3200 signal events yield results of $F_L = 0.509 \pm 0.016\stat$, which is consistent with the world-average value of $0.46 \pm 0.04$~\cite{PDG}, and $A_\mathrm{FB} = 0.013 \pm 0.014\stat$, compatible with no asymmetry, as expected in the SM\@. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_3_0.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_3_1.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_3_2.pdf} \caption{The $\PKp\Pgpm\Pgmp\Pgmm$ invariant-mass (\cmsTop), $\cos\theta_l$ (\cmsMiddle), and $\cos\theta_\PK$ (\cmsBottom) distributions for the $q^2$ bin associated with the \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace decay, along with results from the projections of the overall unbinned maximum-likelihood fit (solid line), the signal contribution (dashed line), and the background contribution (dot-dashed line).} \label{fig:resPsiFlAfb} \end{center} \end{figure} The $\PKp\Pgpm\Pgmp\Pgmm$ invariant mass distributions for each $q^2$ bin of the signal sample $\PBz\to\cPKstz$ $\Pgmp \Pgmm$ are shown in Fig.~\ref{fig:invMassq2}, along with the projection of the unbinned maximum-likelihood fit described in Section~\ref{sec:Analysis}. 
Clear signals are seen in each bin, with yields ranging from $23\pm 6$ to $103\pm 12$ events. The fitted results for $F_L$ and $A_\mathrm{FB}$ are shown in Fig.~\ref{fig:resultFLAFB}, along with the SM predictions. The values of $A_\mathrm{FB}$ and $F_L$ obtained for the first $q^2$ bin are at the physical boundary, which is enforced by a penalty term. This leads to statistical uncertainties, obtained from \textsc{minos}~\cite{Minuit}, of zero for the positive (negative) uncertainty for $F_L$ $(A_\mathrm{FB})$. \begin{figure*}[htbp] \begin{center} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_0.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_1.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_2.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_4.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_6.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_7.pdf} \caption{The $\PKp\Pgpm\Pgmp\Pgmm$ invariant-mass distributions for each of the signal $q^2$ bins. Overlaid on each mass distribution is the projection of the unbinned maximum-likelihood fit results for the overall fit (solid line), the signal contribution (dashed line), the combinatorial background contribution (dot-dashed line), and the peaking background contribution (dotted line).} \label{fig:invMassq2} \end{center} \end{figure*} The SM predictions are taken from Ref.~\cite{Bobeth:2012vn}, which combines two calculational techniques. In the low-$q^2$ region, a QCD factorization approach~\cite{Beneke:2001at} is used, which is applicable for $q^2<4m_c^2$, where $m_c$ is the charm quark mass. In the high-$q^2$ region, an operator product expansion in the inverse \cPqb-quark mass and $1/\!\sqrt{q^2}$~\cite{Grinstein:2004vb,Beylich:2011aq} is combined with heavy quark form factor relations~\cite{Grinstein:2002cz}. This is valid above the open-charm threshold.
In both regions, the form factor calculations are taken from Ref.~\cite{Ball:2004rg}, and a dimensional estimate is made of the uncertainty from the expansion corrections~\cite{Egede:2008uy}. Other recent SM calculations~\cite{Ali:2006ew,Altmannshofer:2011gn,Jager:2012uw,Descotes-Genon:2013vna} give similar results, with the largest variations found in the uncertainty estimates and the differential branching fraction value. Between the $\cPJgy$ and $\psi'$ resonances, reliable theoretical predictions are not available. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.48\textwidth]{FL_Data.pdf} \includegraphics[width=0.48\textwidth]{AFB_Data.pdf} \caption{Results of the measurement of $F_L$ (\cmsLeft) and $A_\mathrm{FB}$ (\cmsRight) versus $q^2$. The statistical uncertainty is shown by inner error bars, while the outer error bars give the total uncertainty. The vertical shaded regions correspond to the $\cPJgy$ and $\psi'$ resonances. The other shaded regions show the SM prediction as a continuous distribution and after rate-averaging across the $q^2$ bins $(\langle \text{SM} \rangle)$ to allow direct comparison to the data points. Reliable theoretical predictions between the $\cPJgy$ and $\psi'$ resonances $(10.09 < q^2 < 12.86 \GeV^2)$ are not available.} \label{fig:resultFLAFB} \end{center} \end{figure} Using the efficiency corrected yields for the signal and normalization modes (\ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace and \ensuremath{\PBz\to\cPKstz \cPJgy}\xspace) and the world-average branching fraction for the normalization mode~\cite{PDG}, the branching fraction for \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace is obtained as a function of $q^2$, as shown in Fig.~\ref{fig:resultBF}, together with the SM predictions. The results for $A_\mathrm{FB}$, $F_L$, and $\rd{}\mathcal{B}/\rd{}q^2$ are also reported in Table~\ref{tab:results}. 
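The normalization procedure just described reduces, in each $q^2$ bin, to simple arithmetic on efficiency-corrected yields. A minimal sketch follows; all numerical inputs below are hypothetical placeholders, not the values used in the analysis.

```python
def diff_branching_fraction(n_sig, eff_sig, n_norm, eff_norm, bf_norm, dq2):
    """Differential branching fraction from the normalization-mode method:
    the efficiency-corrected signal yield over the efficiency-corrected
    normalization yield, times the known branching fraction of the
    normalization mode, divided by the q^2 bin width."""
    return (n_sig / eff_sig) / (n_norm / eff_norm) * bf_norm / dq2

# Hypothetical inputs: 100 signal events at 0.2% efficiency, 47000
# normalization events at 1% efficiency, bf_norm standing in for
# B(B0 -> K*0 J/psi) x B(J/psi -> mu mu), in a 3 GeV^2 wide bin.
dbf = diff_branching_fraction(100, 0.002, 47000, 0.010, 8e-5, 3.0)
```

Because the measurement is a ratio to the normalization channel, efficiencies that are common to both modes cancel, which is the motivation for this method.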
\begin{figure}[btph] \begin{center} \includegraphics[width=0.48\textwidth]{dBFvsq2_Data.pdf} \caption{Results of the measurement of $\rd{}\mathcal{B}/\rd{}q^2$ versus $q^2$. The statistical uncertainty is shown by inner error bars, while the outer error bars give the total uncertainty. The vertical shaded regions correspond to the $\cPJgy$ and $\psi'$ resonances. The other shaded regions show the SM prediction as a continuous distribution and after rate-averaging across the $q^2$ bins $(\langle \text{SM} \rangle)$ to allow direct comparison to the data points. Reliable theoretical predictions between the $\cPJgy$ and $\psi'$ resonances $(10.09 < q^2 < 12.86 \GeV^2)$ are not available.} \label{fig:resultBF} \end{center} \end{figure} \begin{table*}[htbp] \centering \topcaption{\label{tab:results}The yields and the measurements of $F_L$, $A_\mathrm{FB}$, and the branching fraction for the decay \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace in bins of $q^2$. The first uncertainty is statistical and the second is systematic.} \begin{tabular}{c|cccc} $q^2$ & Yield & $F_L$ & $A_\mathrm{FB}$ & $\rd{}\mathcal{B}/\rd{}q^2$\\ $(\GeVns^2)$ & & & & $(10^{-8}\GeV^{-2})$ \\[1pt] \hline\\[-2ex] 1--2 & $23.0 \pm 6.3$ & $0.60^{\:+\:0.00}_{\:-\:0.28} \pm 0.19$ & $-0.29^{\:+\:0.37}_{\:-\:0.00} \pm 0.18$ & $4.8^{\:+\:1.4}_{\:-\:1.2} \pm 0.4$ \\[1pt] 2--4.3 & $45.0 \pm 8.8$ & $0.65 \pm 0.17 \pm 0.03$ & $-0.07\pm 0.20 \pm 0.02$ & $3.8\pm 0.7 \pm 0.3$ \\[1pt] 4.3--8.68 & $90 \pm 17$ & $0.81^{\:+\:0.13}_{\:-\:0.12} \pm 0.05$ & $-0.01\pm 0.11 \pm 0.03$ & $3.7\pm 0.7 \pm 0.4$ \\[1pt] 10.09--12.86 & $96 \pm 16$ & $0.45^{\:+\:0.10}_{\:-\:0.11} \pm 0.04$ & $0.40\pm 0.08 \pm 0.05$ & $5.4\pm 0.9 \pm 0.9$ \\[1pt] 14.18--16 & $58 \pm 10$ & $0.53 \pm 0.12 \pm 0.03$ & $0.29\pm 0.09 \pm 0.05$ & $4.6^{\:+\:0.9}_{\:-\:0.8} \pm 0.5$ \\[1pt] 16--19 & $103 \pm 12$ & $0.44 \pm 0.07 \pm 0.03$ & $0.41\pm 0.05 \pm 0.03$ & $5.2\pm 0.6 \pm 0.5$ \\[1pt] \hline 1--6 & $107 \pm 14$ & $0.68 \pm 0.10 \pm 0.02$ & 
$-0.07\pm 0.12 \pm 0.01$ & $4.4\pm 0.6 \pm 0.4$ \\ \end{tabular} \end{table*} The angular observables can be theoretically predicted with good control of the relevant form-factor uncertainties in the low dimuon invariant-mass region. It is therefore interesting to perform the measurements of the relevant observables in the $1 < q^2 < 6\GeV^2$ region. The experimental results in this region, along with the fit projections, are shown in Fig.~\ref{fig:resSpec}. The values obtained from this fit for $F_L$, $A_\mathrm{FB}$, and $\rd{}\mathcal{B}/\rd{}q^2$ are shown in the bottom row of Table~\ref{tab:results}. These results are consistent with the SM predictions of $F_L = 0.74^{\:+\:0.06}_{\:-\:0.07}$, $A_\mathrm{FB} = -0.05 \pm 0.03$, and $\rd{}\mathcal{B}/\rd{}q^2 = (4.9^{\:+\:1.0}_{\:-\:1.1})\times 10^{-8}\GeV^{-2}$~\cite{Bobeth:2011gi}. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_Special_0.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_Special_1.pdf} \includegraphics[width=0.48\textwidth]{TotalPDFq2Bin_Ang_Special_2.pdf} \caption{The $\PKp\Pgpm\Pgmp\Pgmm$ invariant-mass (\cmsTop), $\cos\theta_l$ (\cmsMiddle), and $\cos\theta_\PK$ (\cmsBottom) distributions for $1<q^2<6\GeV^2$, along with results from the projections of the overall unbinned maximum-likelihood fit (solid line), the signal contribution (dashed line), and the background contribution (dot-dashed line).} \label{fig:resSpec} \end{center} \end{figure} The results of $A_\mathrm{FB}$, $F_L$, and the branching fraction versus $q^2$ are compared to previous measurements that use the same $q^2$ binning~\cite{Belle,CDF,CDF_BR,BaBar_BR,LHCb} in Fig.~\ref{fig:comp}. The CMS measurements are more precise than all but the LHCb values, and in the highest-$q^2$ bin, the CMS measurements have the smallest uncertainty in $A_\mathrm{FB}$ and $F_L$. 
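The consistency statements above can be quantified with a simple pull: the difference between two results divided by the quadrature sum of their uncertainties. This compatibility metric is a generic illustration of ours, not a procedure defined in the paper; the asymmetric uncertainties are symmetrized for simplicity.

```python
import math

def pull(value, err_value, reference, err_reference):
    """Compatibility of two results: difference over the combined
    (quadrature) uncertainty. |pull| below about 2 indicates agreement."""
    return (value - reference) / math.sqrt(err_value**2 + err_reference**2)

# F_L in 1 < q^2 < 6 GeV^2: measured value with stat and syst uncertainties
# added in quadrature, compared to the SM prediction (symmetrized).
fl_err = math.sqrt(0.10**2 + 0.02**2)
p = pull(0.68, fl_err, 0.74, 0.07)  # about -0.5: well within agreement
```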
Table~\ref{tab:comp} provides a comparison of the same quantities in the low dimuon invariant-mass region: $1 < q^2 < 6\GeV^2$. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.48\textwidth]{FL_comp.pdf} \includegraphics[width=0.48\textwidth]{AFB_comp.pdf} \includegraphics[width=0.48\textwidth]{BF_comp.pdf} \caption{Measurements versus $q^2$ of $F_L$ (\cmsTop), $A_\mathrm{FB}$ (\cmsMiddle), and the branching fraction (\cmsBottom) for $\PB \to \PK^{\ast} \ell^+ \ell^-$ from CMS (this paper), Belle~\cite{Belle}, CDF~\cite{CDF,CDF_BR}, BaBar~\cite{BaBar_BR}, and LHCb~\cite{LHCb}. The error bars give the total uncertainty. The vertical shaded regions correspond to the $\cPJgy$ and $\psi'$ resonances. The other shaded regions are the result of rate-averaging the SM prediction across the $q^2$ bins to allow direct comparison to the data points. Reliable theoretical predictions between the $\cPJgy$ and $\psi'$ resonances $(10.09 < q^2 < 12.86\GeV^2)$ are not available.} \label{fig:comp} \end{center} \end{figure} \begin{table*}[htbp] \centering \topcaption{\label{tab:comp}Measurements from CMS (this paper), LHCb~\cite{LHCb}, BaBar~\cite{BaBar_BR}, CDF~\cite{CDF,CDF_BR}, and Belle~\cite{Belle} of $F_L$, $A_\mathrm{FB}$, and $\rd{}\mathcal{B}/\rd{}q^2$ in the region $1 < q^2 < 6\GeV^2$ for the decay $\PB \to \cPKst \ell^+ \ell^-$. The first uncertainty is statistical and the second is systematic. 
The SM predictions are also given~\cite{Bobeth:2012vn}.} \begin{tabular}{c|ccc} Experiment & $F_L$ & $A_\mathrm{FB}$ & $\rd{}\mathcal{B}/\rd{}q^2\;(10^{-8}\GeV^{-2})$ \\[1pt] \hline CMS & $0.68 \pm 0.10 \pm 0.02$ & $-0.07\pm 0.12 \pm 0.01$ & $4.4\pm 0.6 \pm 0.4$ \\[1pt] LHCb & $0.65^{\:+\:0.08}_{\:-\:0.07}\pm 0.03$ & $-0.17 \pm 0.06 \pm 0.01$ & $3.4\pm 0.3^{\:+\:0.4}_{\:-\:0.5}$ \\[1pt] BaBar & --- & --- & $4.1^{\:+\:1.1}_{\:-\:1.0}\pm 0.1$ \\[1pt] CDF & $0.69^{\:+\:0.19}_{\:-\:0.21} \pm 0.08$ & $0.29^{\:+\:0.20}_{\:-\:0.23} \pm 0.07$ & $3.2\pm 1.1 \pm 0.3$ \\[1pt] Belle & $0.67 \pm 0.23 \pm 0.05$ & $0.26^{\:+\:0.27}_{\:-\:0.32} \pm 0.07$ & $3.0^{\:+\:0.9}_{\:-\:0.8} \pm 0.2$ \\[1pt] \hline SM & $0.74^{\:+\:0.06}_{\:-\:0.07}$ & $-0.05 \pm 0.03$ & $4.9^{\:+\:1.0}_{\:-\:1.1}$ \\[1pt] \end{tabular} \end{table*} \section{Summary} \label{sec:End} Using a data sample recorded with the CMS detector during 2011 and corresponding to an integrated luminosity of 5.2\fbinv, an angular analysis of the decay \ensuremath{\PBz\to\cPKstz \Pgmp \Pgmm}\xspace has been carried out. The data used for this analysis include more than 400 signal decays and 50\,000 normalization/control mode decays (\ensuremath{\PBz\to\cPKstz \cPJgy}\xspace and \ensuremath{\PBz\to\cPKstz \psi'}\xspace). Unbinned maximum-likelihood fits have been performed in bins of the square of the dimuon invariant mass $(q^2)$ with three independent variables, the $\PKp\Pgpm\Pgmp\Pgmm$ invariant mass and two decay angles, to obtain values of the forward-backward asymmetry of the muons, $A_\mathrm{FB}$, and the fraction of longitudinal polarization of the $\cPKstz$, $F_L$. Using these results, unbinned maximum-likelihood fits to the $\PKp\Pgpm\Pgmp\Pgmm$ invariant mass in $q^2$ bins have been used to extract the differential branching fraction $\rd{}\mathcal{B}/\rd{}q^2$. The results are consistent with the SM predictions and previous measurements.
Combined with other measurements, these results can be used to rule out or constrain new physics. \section*{Acknowledgements} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); MoER, SF0690030s09 and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF and WCU (Republic of Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); ThEPCenter, IPST, STAR and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P. 
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of Czech Republic; the Council of Science and Industrial Research, India; the Compagnia di San Paolo (Torino); the HOMING PLUS programme of Foundation for Polish Science, cofinanced by EU, Regional Development Fund; and the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF.
\section{Introduction} Markov-modulated Brownian motion (MMBM)~\cite{asmussen95,kk95} is a popular tool to model continuous-time phenomena in a stochastic context. An MMBM can be described as the pair $\{Y(t), \phi(t)\}_{t \geq 0}$, where $\phi(t)$ is a continuous-time Markov chain on a state space $\mathcal{S} = \{1,2,\dots,n\}$ with rate matrix $Q\in\mathbb{R}^{n\times n}$ ($Q\bs{1} = \bs{0}$, where $\bs{1}$ and $\bs{0}$ are the vectors of all ones and zeros, respectively). Whenever $\phi(t)=i \in \mathcal{S}$, $Y(t)$ evolves according to a Brownian motion process with drift~$d_i$ and variance~$2v_i \geq 0$. The main quantity of interest to determine its steady-state behaviour is the invariant density $\bs{p}(x): \mathbb{R}_{\geq 0}\to \mathbb{R}_{\geq 0}^{1\times n}$, which satisfies, with suitable boundary conditions, the differential equation \begin{align} \label{ODE} \bs{p}''(x) V - \bs{p}'(x) D + \bs{p}(x) Q= \bs{0}, \end{align} where $V=\diag(v_i)_{i \in \mathcal{S}}$ and $D=\diag(d_i)_{i \in \mathcal{S}}$. The solutions of this ODE are related to the eigenvalues and left eigenvectors of the matrix polynomial \begin{align} \label{eq:matpol} P(z) := Vz^2-Dz +Q. \end{align} The solution of probabilistic interest is asymptotically stable, that is, $\bs{p}(x)\to \bs{0}$ when $x\to\infty$. Hence, we are interested in particular in the eigenvalues $\lambda_i$ with $\Re\lambda_i<0$. Several methods have been suggested in the literature to compute this solution. Some are iterative, based on Cyclic Reduction~\cite{LatN}; some depend on the eigendecomposition of a linearization~\cite{kk95}, or more generally a block diagonal decomposition~\cite{AgaS}. A few other algorithms rely on finding a special \emph{invariant pair} $(X,U)$, that is, a pair of matrices $X \in \mathbb{R}^{\ell \times \ell}, U \in \mathbb{R}^{\ell \times n}$ satisfying \begin{align} \label{invpair2} X^2UV - XUD + UQ = 0.
\end{align} For instance, Ivanovs \cite[Sect.~3]{Iva10} considers a related problem---determining the steady-state behaviour of a two-boundary Markov-modulated Brownian motion process---which can be solved with the same techniques. The author constructs the solution starting from an invariant pair in which \begin{align} \label{ivaU} U = \begin{bmatrix} I & \Psi \end{bmatrix}, \end{align} where the identity block corresponds to the indices $i$ for which $v_{ii}>0$ or $d_{ii}>0$. This invariant pair $(X, U)$ has a probabilistic meaning: $\Psi \geq 0$ is the matrix recording first-return probabilities of the time-reversed process, and $X$ is a subgenerator matrix ($X_{ij}\geq 0$ if $i\neq j$, and $X\bs{1} \leq \bs{0}$) for the downward-record process. A special case often considered in the literature is when $v_{ii}>0$ for all $i \in \mathcal{S}$. In this case, $U=I$, and \eqref{invpair2} reduces to \begin{align} \label{eq:eqRhat} X^2V-XD+Q=0. \end{align} This matrix equation has been studied extensively, especially because of its connection to quasi-birth-death processes~\cite{blm05,ram99}. Another special case, interesting in its own right, is when $V=0$, that is, when the Markov-modulated Brownian motion $\{Y(t),\phi(t)\}$ is a \emph{stochastic fluid model}, also known as a \emph{fluid queue}. The papers~\cite{xxl12} and~\cite{NguP15} deal with this special case, and provide quadratically convergent algorithms for the invariant density, which are \emph{componentwise accurate}. That is, the algorithms can deliver an approximate solution $\widetilde{\bs{p}}$ such that the quantity $\max_{i \in \mathcal{S}} {\abs{\widetilde{\bs{p}}_i-\bs{p}_i}}/{\bs{p}_i}$ is bounded, rather than ${\norm{\bs{p}-\widetilde{\bs{p}}}}/{\norm{\bs{p}}}$.
In this informal introduction, $\bs{p}$ refers to the exact value of a vector quantity related to the solution, for instance, the value of $\bs{p}(x)$ at a determined level $x$, and $\widetilde{\bs{p}}$ represents its computed version in machine arithmetic. Informally, this means that all entries of $\bs{p}$ have the same number of correct significant digits, irrespective of their magnitude; thus, all components can be computed to a high accuracy. For example, suppose \begin{align*} \bs{p} = \begin{bmatrix} 1-10^{-15} & 10^{-15} \end{bmatrix}. \end{align*} In this case, traditional linear algebra algorithms would instead ensure a high accuracy on the large component $\widetilde{\bs{p}}_1$ only, while $\widetilde{\bs{p}}_2$ could vary wildly with few theoretical guarantees: for instance, it could become negative. Componentwise error bounds are particularly meaningful for probability applications, since small components may represent probabilities of catastrophic failure, which have to be assessed carefully. In this paper, we focus on computing, in a componentwise accurate fashion, invariant pairs that solve~\eqref{invpair2} (with the additional property~\eqref{ivaU}) and solutions of the matrix equation~\eqref{eq:eqRhat}, which can then be used to compute a solution $\bs{p}(x)$ of~\eqref{ODE}. This problem contains the linear models treated in~\cite{NguP15,xxl12} as a special case ($V=0$); we extend the techniques introduced there and generalize them to the more challenging second-order case. In particular, the case in which some of the $v_{ii}$ are zero and some are not requires special attention. To treat it, we use a method related to both the \emph{shift technique}~\cite{HeMeiniRhee,blm05} and the \emph{theory of index reduction} of differential-algebraic equations~\cite{MehrmannKunkelBook}. We use these techniques in a novel way that combines the themes of these two approaches and adds componentwise accuracy and positivity preservation into the picture.
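A minimal numerical sketch of this distinction, using the two-component vector from the example above (the perturbation is ours, chosen for illustration):

```python
# p from the example above; p_tilde computes the tiny component with the
# wrong sign, which a normwise bound would not detect.
p       = [1.0 - 1e-15, 1e-15]
p_tilde = [1.0 - 1e-15, -1e-15]

# normwise relative error (infinity norm): looks like a perfect result
normwise = max(abs(a - b) for a, b in zip(p, p_tilde)) / max(abs(a) for a in p)

# componentwise relative error: reveals that the second entry is
# completely wrong (its sign is flipped)
compwise = max(abs(a - b) / abs(a) for a, b in zip(p, p_tilde))
```

Here `normwise` is of order $10^{-15}$ while `compwise` equals $2$, so only the componentwise measure exposes the useless small component.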
The paper is structured as follows. In Section~\ref{sec:preliminaries}, we introduce most of the concepts needed in the development of the algorithm, including invariant pairs, componentwise accurate algorithms, and Cyclic Reduction. In Section~\ref{sec:discretization}, we present our solution strategy and formulate a solution algorithm, first for the case $\diag(V)>0$ and then in general. In Section~\ref{sec:stability}, we prove the numerical stability of our algorithm in a componentwise sense. Numerical experiments in Section~\ref{sec:numerical} confirm the effectiveness of this approach, and some brief conclusions follow. To the best of our knowledge, Lemma~\ref{lem:secondcrtriplet} and the fully subtraction-free version of Cyclic Reduction presented in Algorithm~\ref{algo:crt}, together with the proof of its componentwise stability, are new also in the context of discrete-time quasi-birth-death models~\cite{blm05}. \section{Assumptions and preliminaries} \label{sec:preliminaries} \subsection{Eigenvalues and invariant pairs of matrix polynomials} For ease of analysis, we make several assumptions to ensure that the problem cannot be simplified further: \begin{description} \item[A1] The matrix $Q$ is irreducible and aperiodic. \item[A2] $V\neq 0$. \item[A3] There is no index $i \in \mathcal{S}$ for which $v_{ii}=d_{ii}=0$. \end{description} Assumption A1 is made to exclude the cases where our problem can be reduced to smaller disjoint problems. If Assumption A2 is not satisfied, methods for the fluid queue case like the one in~\cite{NguP15} can be used. Finally, if Assumption A3 does not hold, we can replace the problem with another one, where any such index~$i$ is censored out. Let $\mathbb{C}[z]$ denote the set of polynomials in the variable $z$.
We encountered in the introduction the notion of \emph{eigenvalues} and \emph{left eigenvectors} of a degree-$g$ matrix polynomial \begin{align*} P(z) = P_0 + P_1 z + \dots + P_g z^g\in\mathbb{C}[z]^{n\times n}, \end{align*} % that is, scalars $\lambda\in\mathbb{C}$ and row vectors $\bs{u}\in\mathbb{C}^{1\times n}$ such that $\bs{u}P(\lambda)=\bs{0}$. Under Assumptions A1--A3, $P(z)$ is a \emph{regular} matrix polynomial, that is, the scalar polynomial $\det P(z)\in \mathbb{C}[z]$ is not identically zero, as one can see by considering its highest-degree term; hence its finite eigenvalues are a well-defined set of $\deg \det P(z)$ complex numbers counted with multiplicity. When the leading term $P_g$ is singular, $\det P(z)$ has degree strictly lower than $gn$. In this case we say that $\infty$ is an eigenvalue of $P(z)$ with algebraic multiplicity $gn-\deg \det P(z)$\footnote{Eigenvalues at infinity are a useful algebraic abstraction that makes several counting and transformation arguments work with little or no modification in a more general setting. We do not venture here into the theory of \emph{geometric multiplicity} and \emph{Jordan structure} for matrix polynomials, which is not a trivial task~\cite{GohLR}.}; with this convention, the total number of eigenvalues, counted with multiplicity, is always $gn$. An extremely useful tool to deal with multiple eigenvalues simultaneously, both in theory and in numerical practice, is \emph{invariant pairs} \cite{BetK,GohLR,HigK}. For any $\ell\leq gn$, a pair $(X,U)\in\mathbb{C}^{\ell\times \ell} \times \mathbb{C}^{\ell\times n}$ is called a \emph{left invariant pair} for the matrix polynomial $P(z) = P_0 + P_1 z + \dots + P_g z^g\in\mathbb{C}[z]^{n\times n}$ if \begin{align} \label{geninvpair} UP_0 + XUP_1 + X^2UP_2 + \dots + X^g U P_g = 0 \end{align} and the matrix $\m{U & XU & \cdots & X^{g-1}U}$ has full row rank. It follows from this definition that if $(X,U)$ is a left invariant pair, then so is \begin{align} \label{eq:qtrans} (MXM^{-1},MU)\quad \text{for any nonsingular $M \in \mathbb{C}^{\ell\times \ell}$}.
\end{align} The reader not acquainted with this concept can consider a simpler case in which $P(z)$ has $gn$ distinct finite eigenvalues; in this case, the invariant pairs for a matrix polynomial are given by \begin{align*} X=\diag(\lambda_1,\lambda_2,\dots,\lambda_\ell), \quad U = \m{\bs{u}_1\\ \bs{u}_2\\ \vdots\\ \bs{u}_\ell}, \end{align*} where $\ell \leq gn$ is arbitrary and for each $i=1,2,\dots,\ell$ the row vector $\bs{u}_i$ is a left eigenvector of $P(z)$ with eigenvalue $\lambda_i$, as well as all the pairs obtained from them through the change of basis transformations in~\eqref{eq:qtrans}. Informally speaking, invariant pairs are a tool to deal with several eigenvalues and eigenvectors at the same time without resorting to a Jordan form. Invariant pairs generalize the concept of solution of polynomial matrix equations: indeed, if $X$ satisfies~\eqref{eq:eqRhat}, then $(X,I)$ is a left invariant pair for $P(z)$. Moreover, the following properties hold~\cite{GohLR}. \begin{lemma} Let $(X,U)$ be a left invariant pair for a regular matrix polynomial $P(z)$. Then, the following properties hold. % \begin{enumerate} \label{invpairprops} \item The eigenvalues of $X$ are a subset of the finite eigenvalues of $P(z)$ (both counted with their multiplicity); \item \label{deflateinvpair} if \[ X = \m{X_{11} & 0\\ X_{21} & X_{22}}, \quad U = \m{U_1 \\ U_2}, \] with $X_{11}$ and $X_{22}$ square and $U$ partitioned conformably, then $(X_{11},U_1)$ is another invariant pair for $P(z)$; \item $(X,U)$ is an invariant pair also for $P(z)S(z)$, where $S(z)\in\mathbb{C}[z]^{n\times n}$ is any other regular matrix polynomial. \end{enumerate} \end{lemma} \begin{remark} A feature that distinguishes linear eigenvalue problems (i.e., the case in which $g=1$, or equivalently $V=0$ in the case of~\eqref{eq:matpol}) from polynomial ones is the fact that in a non-linear eigenproblem \emph{eigenvectors do not uniquely determine their associated eigenvalues}. 
For instance, both $(1,\bs{e}_1)$ and $(2,\bs{e}_1)$, with $\bs{e}_1=\m{1 & 0}$, are left eigenpairs of the matrix polynomial \[ P(z) = z^2 I - z \m{3 & 0\\ 0 & 7} + \m{2 & 0\\ 0 & 12}. \] Hence, we have to deal explicitly with both elements $U$ and $X$ of the invariant pair, and compute them both at the same time. Instead, in the algorithms for the case $V=0$~\cite{NguP15,xxl12}, it is common to deal with the matrix $\Psi$ in~\eqref{ivaU} as the only unknown, and then compute $X$ from it afterwards. This is possible in the first-order case, but not in the second-order one. This point will prove crucial in Section~\ref{SectionWhereWeNeedToRecoverX}, where the need to compute $X$ as well will impose a nontrivial restriction not present in the linear case. \end{remark} \subsection{Triplet representations and accurate matrix exponentials} As stated earlier, we are interested in performing numerical computations in a way that guarantees the componentwise accuracy of the computed quantities. For this purpose, the main tools are so-called \emph{subtraction-free} algorithms in linear algebra: when the matrices and vectors involved have a prescribed sign structure, it is possible to carry out linear algebraic operations on a computer in terms of sums only, without ever subtracting two floating-point numbers with the same sign. In this case, there is no cancellation, and the results are provably accurate. The most famous algorithm in this class is the \emph{GTH algorithm}~\cite{GraTH85,oci93,AlfXY02} and its generalizations. To introduce it, we need a few preliminary concepts. Here and in the following, inequalities between matrices and vectors are used in the componentwise sense: for instance, $A\leq B$ means $A_{ij}\leq B_{ij}$ for each $i,j$. For a matrix $M\in\mathbb{R}^{n\times n}$, we use the notation $\offdiag(M)$ to denote a vector $\bs{m}\in\mathbb{R}^{n^2-n}$ which contains the elements $\{M_{ij} : i\neq j\}$, i.e., those which do not belong to the main diagonal.
The exact ordering of these elements in $\bs{m}$ is not important here. A matrix $M$ is called an \emph{M-matrix} if it can be expressed as \begin{equation} \label{expdecomposition} M=sI-P, \quad P\in\mathbb{R}_{\geq 0}^{n\times n}, s\geq 0, \end{equation} where $s\in\mathbb{R}$ is greater than or equal to the spectral radius $\rho(P)$. It is well known that if an M-matrix $M$ is invertible, then $M^{-1}\geq 0$~\cite{bp94}. A \emph{triplet representation} for an M-matrix $M$ is a triple $(\bs{m},\bs{v},\bs{w}) \in \mathbb{R}_{\leq 0}^{n^2-n} \times \mathbb{R}_{>0}^{n} \times \mathbb{R}_{\geq 0}^{n}$ such that $\bs{m}=\offdiag(M)$ and $M\bs{v}=\bs{w}$. The diagonal elements of $M$ do not appear explicitly in the triplet representation, but they are determined uniquely from the relation $M\bs{v}=\bs{w}$. Not all M-matrices admit triplet representations~\cite[Section~1]{guonew}; a counterexample is $ M = \begin{bsmallmatrix} \phantom{-}0 & 0\\ -1 & 0 \end{bsmallmatrix}. $ M-matrices that admit a triplet representation are called \emph{regular M-matrices}. Non-regular M-matrices must necessarily be singular and reducible~\cite{guonew}, so most M-matrices appearing in applications (and, in particular, all those appearing in the rest of this paper) are indeed regular. The following result shows that one can solve linear systems with a regular M-matrix with almost perfect componentwise accuracy, given a triplet representation as input. \begin{theorem}[\cite{oci93,AlfXY02}] \label{thm:gth} Let $(\bs{m},\bs{v},\bs{w})$ be a triplet representation for an invertible regular M-matrix $M\in\mathbb{R}^{n\times n}$, and $\bs{u}\in\mathbb{R}_{\geq 0}^n$.
Then, there is an algorithm to compute in $O(n^3)$ floating-point arithmetic operations (starting from the floating-point numbers $\bs{m},\bs{v},\bs{w},\bs{u}$) an approximation $\widetilde{\bs{x}}$ of $\bs{x}=M^{-1}\bs{u} \geq \bs{0}$ such that \begin{align} \label{gthbound} \abs{\widetilde{\bs{x}}-\bs{x}} \leq \left(\psi(n)\varepsilon+\mathcal{O}(\varepsilon^2)\right) \bs{x}, \end{align} with $\psi(n)=\frac{2}{3}(2n+5)(n+2)(n+3)$ and $\varepsilon$ being the machine precision. \end{theorem} Notice the remarkable absence of the condition number of $M$, which would be necessary in an error bound for an algorithm that uses the matrix entries rather than a triplet representation. The use of a triplet representation as an input makes it possible to solve a linear system with perfect accuracy (up to a factor polynomial in the dimension), regardless of its condition number. The algorithm basically works by computing an LU decomposition of $M$, in which both $L$ and $U$ are M-matrices. Using variants of the same algorithm, one can also perform other related tasks, again starting from a triplet representation $(\bs{m},\bs{v},\bs{w})$ of a regular M-matrix $M$: \begin{itemize} \item computing $M^{-1}$; \item solving linear systems of the form $M^\top \bs{x} = \bs{b}$, with $\bs{b}\geq \bs{0}$; \item finding the left and right kernel of a singular irreducible $M$. \end{itemize} We shall also need the following result. \begin{lemma} \label{subtriplets} Let \[ \left(\offdiag(M), \begin{bmatrix} \bs{v}_1\\ \bs{v}_2 \end{bmatrix}, \begin{bmatrix} \bs{w}_1\\ \bs{w}_2 \end{bmatrix}\right), \quad M = \begin{bmatrix} M_{11} & M_{12}\\ M_{21} & M_{22} \end{bmatrix} \] (where all the matrices are partitioned conformably) be a triplet representation for the regular M-matrix $M$.
Then, \begin{align} &(\offdiag(M_{22}), \bs{v}_2, \bs{w}_2 - M_{21}\bs{v}_1), \label{tripletsub}\\ &(\offdiag(S),\bs{v}_1, \bs{w}_1-M_{12}M_{22}^{-1}\bs{w}_2) \label{tripletschur} \end{align} are subtraction-free expressions for triplet representations of a principal submatrix $M_{22}$ and its Schur complement $S = M_{11}- M_{12}M_{22}^{-1}M_{21}$. \end{lemma} \begin{proof} The relation $M_{22}\bs{v}_2 = \bs{w}_2 - M_{21}\bs{v}_1$ which defines the first triplet representation comes from expanding the second block row of $M\bs{v}=\bs{w}$. The second relation comes from premultiplying both sides of $M\bs{v}=\bs{w}$ by \[ \begin{bmatrix} I & -M_{12}M_{22}^{-1}\\ 0 & I \end{bmatrix}. \] Additionally, note that $M_{12},M_{21}\leq 0$ and $M_{22}^{-1}\geq 0$ (which can be obtained in a subtraction-free way using the GTH algorithm and the triplet representation~\eqref{tripletsub}), so computing the last terms in~\eqref{tripletsub} and~\eqref{tripletschur} does not involve subtractions. The computation of $S$ via the formula $S=M_{11}- M_{12}M_{22}^{-1}M_{21}$ involves subtractions only for the diagonal entries, but conveniently in the triplet representation we only need $\offdiag(S)$. \end{proof} \begin{corollary} \label{diagcorollary} Given a triplet representation for $M$, $\diag(M)$ can be reconstructed using subtraction-free formulas only. \end{corollary} \begin{proof} It is sufficient to consider~\eqref{tripletsub} in the case in which $M_{22}$ is $1\times 1$. Then, $M_{22} = (\bs{w}_2 - M_{21}\bs{v}_1) / \bs{v}_2$, where the numerator and denominator are scalar quantities, too. \end{proof} We comment briefly also on the computation of the matrix exponential, which we shall need only in the final step of our algorithm. For an M-matrix $M$, it holds that $\exp(-M) \geq 0$. 
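To make the sign structure concrete, here is a naive NumPy sketch (the function name and tolerances are ours) that exploits the decomposition~\eqref{expdecomposition}: all Taylor terms of $\exp(P)$ are nonnegative, so the series itself is summed without cancellation, even though this alone does not make the overall computation unconditionally accurate:

```python
import numpy as np

def expm_mmatrix(M, terms=80):
    """Evaluate exp(-M) for an M-matrix M via exp(-M) = e^{-s} exp(P),
    where M = s*I - P with P >= 0; the Taylor terms of exp(P) are all
    nonnegative, so the series involves no subtractions (naive sketch)."""
    n = M.shape[0]
    s = max(float(np.max(np.diag(M))), 0.0)
    P = s * np.eye(n) - M          # P >= 0 by construction
    term = np.eye(n)
    total = np.eye(n)
    for k in range(1, terms):
        term = term @ P / k
        total = total + term
    return np.exp(-s) * total

M = np.array([[1.0, -0.5],
              [-0.3, 0.8]])        # a (nonsingular) M-matrix
E = expm_mmatrix(M)                # E = exp(-M) >= 0
```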
As studied in~\cite{xy08}, it is impossible to find an unconditionally accurate algorithm for computing $\exp(-M)$ in the style of the GTH algorithm; we can only compute approximations $\widetilde{E}$ of $E=\exp(-M)$ satisfying a bound of the form \[ \abs{\widetilde{E} - E} = c(M)\mathcal{O}(\varepsilon)E, \] where $c(M)$ is a condition number which depends explicitly on $M$, and $\varepsilon$ is the machine precision. Algorithms for the componentwise accurate computation of matrix exponentials were discussed in~\cite{sgx12}; one of the first steps in these methods is decomposing $M=sI-P$ as in~\eqref{expdecomposition} and using the identity $\exp(-M)=e^{-s}\exp(P)$. Hence it is appealing to look for explicit accurate decompositions of the form~\eqref{expdecomposition} for the matrices whose exponentials we are going to compute. \subsection{Quadratic matrix equations and their properties} We now discuss the properties of the solutions of matrix equations of the form $A-BX+CX^2=0$. We focus here on the most common setting in its probabilistic applications; namely, we assume that \begin{description} \item[A4] $A\geq 0$, $C\geq 0$ and $B$ is an M-matrix; \item[A5] $(A-B+C)\bs{1}=\bs{0}$; \item[A6] the bi-infinite matrix \begin{equation} \label{biinf} \begin{bmatrix} \ddots & \ddots\\ \ddots & -B & C\\ & A & -B & C\\ & & A & -B & \ddots\\ & & & \ddots & \ddots & \end{bmatrix} \end{equation} is irreducible and aperiodic. \end{description} The case most frequently appearing in the probability applications~\cite{blm05} is the one in which $I-B\geq 0$ and $A,I-B,C$ are the transition matrices of a quasi-birth-death (QBD) process, with $A$ being the transition to a lower level. Under Assumptions A4, A5, A6, one can prove that $-(A-B+C)$ is an irreducible singular M-matrix; we denote by $\bs{u}\in\mathbb{R}^{1\times n}$ the left Perron vector of $A-B+C$; then we have $\bs{u}>\bs{0}$, $\bs{u}(A-B+C)=\bs{0}$. Moreover, one can prove the following results \cite{blm05}.
\begin{theorem} \label{discreteproperties} Let $A,B,C\in\mathbb{R}^{n\times n}$ satisfy Assumptions A4, A5, A6. Then, the following matrices exist: \begin{subequations} \label{eq:foursolutions} \begin{align} G\in\mathbb{R}^{n\times n} &\text{such that $A-BG+CG^2=0$, $G\geq 0$ and $G\bs{1}\leq \bs{1}$};\\ R\in\mathbb{R}^{n\times n} &\text{such that $R^2A-RB+C=0$, $R\geq 0$ and $\bs{u}R\leq \bs{u}$}. \end{align} \end{subequations} Moreover, the location in the complex plane of the eigenvalues of $F(y)=Ay^2-By+C$, and of $G$ and $R$, is related to the sign of the \emph{mean drift} $d=\bs{u} (C-A) \bs{1}$ as described in Table~\ref{tab:discretecase}. \begin{table}[h] \centering \small \begin{tabular}{lllllll} \toprule Case & Name & $\abs{\mathcal{S}_d}$ & $\abs{\mathcal{U}_d}$ & Other eigvls. & Eigvls. of $G$ & Eigvls. of $R$\\ \midrule $d<0$ & Positive recurrent & $n$ & $n-1$ & 1 (mult. 1) & $\{\lambda^{-1} : \lambda \in\mathcal{U}_d\} \cup \{1\}$ & $\mathcal{S}_d$ \\ $d=0$ & Null recurrent & $n-1$ & $n-1$ & 1 (mult. 2) & $\{\lambda^{-1} : \lambda \in\mathcal{U}_d\} \cup \{1\}$ & $\mathcal{S}_d \cup \{1\}$ \\ $d>0$ & Transient & $n-1$ & $n$ & 1 (mult. 1) & $\{\lambda^{-1} : \lambda \in\mathcal{U}_d\}$ & $\mathcal{S}_d \cup \{1\}$ \\ \bottomrule \end{tabular} \caption{Cardinality of the multisets $\mathcal{S}_d = \{\lambda : \text{$\lambda$ is an eigenvalue of $F(y)$ and $\abs{\lambda}<1$}\}$ and $\mathcal{U}_d = \{\lambda : \text{$\lambda$ is an eigenvalue of $F(y)$ and $\abs{\lambda}>1$}\}$, and eigenvalues of the solution matrices $G$ and $R$ in the three possible cases for a triple $A,B,C$ satisfying A4, A5, A6. } \label{tab:discretecase} \end{table} \end{theorem} The letters $\mathcal{S}$ and $\mathcal{U}$ in the table stand for `stable' and `unstable', respectively, while $d$ stands for `discrete time' and $c$ in the following will stand for `continuous time'.
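A small numerical illustration of Theorem~\ref{discreteproperties} and Table~\ref{tab:discretecase} on a made-up transient triple (all entries below are ours, chosen only so that A4, A5, A6 hold):

```python
import numpy as np

A = np.array([[0.2, 0.0], [0.0, 0.1]])
C = np.array([[0.3, 0.1], [0.1, 0.3]])
B = np.array([[0.7, -0.1], [-0.2, 0.7]])   # chosen so that (A - B + C) @ 1 = 0

# left Perron vector u of the generator A - B + C: u (A - B + C) = 0, u > 0
Q = A - B + C
vals, vecs = np.linalg.eig(Q.T)
u = np.real(vecs[:, np.argmin(np.abs(vals))])
u = u / u.sum()

d = u @ (C - A) @ np.ones(2)               # mean drift; here d > 0 (transient)

# eigenvalues of F(y) = A y^2 - B y + C via a block companion linearization
n = 2
L = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(A, C), np.linalg.solve(A, B)]])
lams = np.linalg.eigvals(L)
inside = int(np.sum(np.abs(lams) < 1 - 1e-8))    # |S_d| = n - 1
outside = int(np.sum(np.abs(lams) > 1 + 1e-8))   # |U_d| = n
```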
Assumption A6 can be relaxed to the less stringent one where $\tridiag(A,-B,C)$ has only one final class \cite[Section~4.7]{blm05}, with only some minor technical complications. \subsection{Cyclic Reduction} Cyclic Reduction (CR) is the following matrix iteration. Set \begin{align*} A_0 = A, \quad B_0 = \sad{B}_0 = B, \quad C_0 = C, \end{align*} and compute for each $k=0,1,2,\dots$ \begin{subequations} \label{eq:cr} \begin{align} A_{k+1} &= A_kB_k^{-1}A_k,\\ B_{k+1} & = B_k - A_k B_k^{-1} C_k - C_k B_k^{-1} A_k,\\ C_{k+1} &= C_k B_k^{-1} C_k,\\ \sad{B}_{k+1} &= \sad{B}_k - C_k B_k^{-1}A_k. \end{align} \end{subequations} The following applicability and convergence results hold for Cyclic Reduction. \begin{theorem} \label{thm:crconv} Let $A,B,C\in\mathbb{R}^{n\times n}$ satisfy Assumptions A4, A5, A6. Then, \begin{enumerate} \item $B_k$ is nonsingular for $k\geq 0$; hence, CR can be applied with no breakdown. \item $A_k, C_k$ are nonnegative, and $B_k$ and $\sad{B}_k$ are M-matrices for $k\geq 0$. % \item $B_k$ and $\sad{B}_k$ converge monotonically to matrices that we shall call $B_\infty$ and $\sad{B}_\infty$, respectively. The matrix $\sad{B}_\infty$ is invertible. % \item We have % \begin{subequations} \label{eq:crsol} \begin{align} G&=\sad{B}_{\infty}^{-1}A_0,\\ R&=C_0\sad{B}_{\infty}^{-1}. \end{align} \end{subequations} \item \label{quadconv} The convergence speed is linear with factor ${1}/{2}$ in the null recurrent case, quadratic with factor $\rho(R)<1$ in the positive recurrent case, and quadratic with factor $\rho(G)<1$ in the transient case. \item $(A_k-B_k+C_k)\bs{1} = \bs{0}$ for each $k\geq 0$, hence $(\offdiag(B_k), \bs{1}, (A_k+C_k)\bs{1})$ is a triplet representation for $B_k$. \label{crfirsttriplet} \end{enumerate} \end{theorem} The last item in particular is useful because it allows one to perform the iteration using the GTH algorithm for the inversions required in~\eqref{eq:cr}.
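For illustration, the iteration~\eqref{eq:cr} in plain NumPy on a made-up transient triple (this sketch inverts $B_k$ directly; the point of the algorithms in this paper is precisely to replace these inversions with GTH solves based on triplet representations):

```python
import numpy as np

A = np.array([[0.2, 0.0], [0.0, 0.1]])
C = np.array([[0.3, 0.1], [0.1, 0.3]])
B = np.array([[0.7, -0.1], [-0.2, 0.7]])   # (A - B + C) @ 1 = 0

Ak, Bk, Ck, Bh = A.copy(), B.copy(), C.copy(), B.copy()
for _ in range(12):                        # quadratic convergence: few steps
    Binv = np.linalg.inv(Bk)
    Ak, Bk, Ck, Bh = (Ak @ Binv @ Ak,
                      Bk - Ak @ Binv @ Ck - Ck @ Binv @ Ak,
                      Ck @ Binv @ Ck,
                      Bh - Ck @ Binv @ Ak)

G = np.linalg.solve(Bh, A)                 # G = Bhat_infty^{-1} A_0
R = C @ np.linalg.inv(Bh)                  # R = C_0 Bhat_infty^{-1}
```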
Hence, $A_k,\offdiag(B_k),C_k,\offdiag(\sad{B}_k)$ can be computed in a subtraction-free fashion. This is how Cyclic Reduction is currently implemented in software packages such as SMCSolver~\cite{SMCSolver}. However, to implement the final step, \eqref{eq:crsol}, in a subtraction-free way, we need to find a triplet representation for $\sad{B}_\infty$. To this purpose, we give the following result. \begin{lemma} \label{lem:secondcrtriplet} Under Assumptions A4, A5, A6, the following results hold for the iterates of Cyclic Reduction. \begin{itemize} % \item $(A_0 - \sad{B}_k + C_k)\bs{1} = \bs{0}$ for each $k\geq 0$, hence % \begin{align*} \lim_{k\to\infty} C_k\bs{1}=:\sad{\bs{w}} \quad \mbox{ exists, } \end{align*} % and $(\offdiag(\sad{B}_\infty),\bs{1}, A_0\bs{1} + \sad{\bs{w}})$ is a triplet representation for $\sad{B}_\infty$. \item $\bs{u}(A_k - B_k + C_k)=\bs{0}$ for each $k\geq 0$. \item $\bs{u}(A_k - \sad{B}_k + C_0) = \bs{0}$ for each $k\geq 0$, hence \begin{align*} \lim_{k\to\infty} \bs{u}A_k =: \sad{\bs{v}} \mbox{ exists,} \end{align*} % and $(\offdiag(\sad{B}_\infty^{\top}),\bs{u}^{\top},(\bs{u}C_0+\sad{\bs{v}})^{\top})$ is a triplet representation for $\sad{B}_\infty^{\top}$. \end{itemize} \end{lemma} \begin{proof} We prove only the first equality; the others are analogous. The proof is by induction and similar to the one of item~\ref{crfirsttriplet} of Theorem~\ref{thm:crconv}. For $k=0$, the result holds by Assumption~A5. The inductive step is \begin{align*} (A_0 - \sad{B}_{k+1} + C_{k+1})\bs{1} &= (A_0 - \sad{B}_{k} + C_kB_k^{-1}A_k + C_kB_k^{-1}C_k)\bs{1}\\ &=(A_0 - \sad{B}_{k})\bs{1} + C_kB_k^{-1}(A_k+C_k)\bs{1}\\ &=(A_0 - \sad{B}_{k})\bs{1} + C_kB_k^{-1}B_k\bs{1}\\ &=(A_0 - \sad{B}_{k}+C_k)\bs{1}=\bs{0}, \end{align*} % where we have used the fact that $(A_k+C_k)\bs{1} = B_k\bs{1}$ (item~\ref{crfirsttriplet} of Theorem~\ref{thm:crconv}).
\end{proof} Armed with these triplet representations, we can formulate a fully subtraction-free version of Cyclic Reduction, Algorithm~\ref{algo:crt}. \begin{algorithm} \KwIn{$A,B,C\in\mathbb{R}^{n\times n}$ satisfying A4, A5, A6} \KwOut{The matrices $G$, $R$ defined in~\eqref{eq:foursolutions}.} Set $A_0=A$, $B_0=\sad{B}_0=B$, $C_0=C$, and $k=0$\; \Repeat{$\offdiag(\sad{B}_k)$ has converged}{ Compute $A_{k+1},\offdiag(B_{k+1}), C_{k+1}, \offdiag(\sad{B}_{k+1})$ using \eqref{eq:cr}, performing inversions using the triplet representation in Item~\ref{crfirsttriplet} of Theorem~\ref{thm:crconv}\; $k\to k+1$\; } Compute $G,R$ using~\eqref{eq:crsol}, performing inversions using one of the triplet representations in Lemma~\ref{lem:secondcrtriplet}\; \caption{A subtraction-free version of Cyclic Reduction using triplet representations.} \label{algo:crt} \end{algorithm} \section{Derivation of the algorithm} \label{sec:discretization} In this section, we focus on the problem of finding a left invariant pair $(X,U)$ for the matrix polynomial $P(z)$ in~\eqref{eq:matpol} associated to its eigenvalues in the left half-plane. We shall see in Section~\ref{sec:solvingODE} that a solution to~\eqref{ODE} can be constructed from this pair. \subsection{The spectrum of $P(z)$} We start with a theoretical result on the location of the eigenvalues of $P(z)$. To formulate it, we subdivide the indices $i\in\{1,2,\dots,n\}$ into three disjoint subsets, according to the values of $v_{ii}$ and $d_{ii}$, as shown in Table~\ref{tab:states}. Moreover, we set $n_i=\abs{E_i}$ for $i=1,2,3$, so that $n=n_1+n_2+n_3$, and we call $\bs{u}> \bs{0}$ the left Perron vector of $Q$. \begin{table}[h] \centering \begin{tabular}{lll} \toprule Name & $v_{ii}$ & $d_{ii}$\\ \midrule $E_1$ & $>0$ & any\\ $E_2$ & $=0$ & $>0$\\ $E_3$ & $=0$ & $<0$\\ \bottomrule \end{tabular} \caption{Subdivision of each state $i \in \{1,2,\dots,n\}$ into three different sets.
Recall that we assume that there is no state with $v_{ii}=d_{ii}=0$.} \label{tab:states} \end{table} \begin{theorem} \label{contlocation} The location in the complex plane of the eigenvalues of $P(z)$ is related to the sign of the \emph{mean drift} $d=\bs{u}D\bs{1}$ as described in Table~\ref{tab:continuouscase}. \end{theorem} \begin{proof} The case $d<0$ appears in~\cite{kk95}; the case $d>0$ can be proved by replacing $D$ with $-D$ (which has the effect of changing the sign of all eigenvalues). For the case $d=0$, the proof is not immediate; we give only a sketch, since the result is not necessary for the rest of the paper. A limit argument from both sides shows that $\abs{\mathcal{S}_c}\leq n_1+n_2-1$ and $\abs{\mathcal{U}_c}\leq n_1+n_3-1$. Since we assume that $v_{ii}$ and $d_{ii}$ are not both zero, $D_{E_2 \cup E_3,E_2 \cup E_3}$ is nonsingular, and hence the $n_2+n_3$ Jordan chains for $\lambda=\infty$ (as defined in~\cite[Section~1.4]{GohLR}) have length $1$. Hence the only thing left to prove is that there are no more than $2$ eigenvalues on the imaginary axis. The Gerschgorin argument in~\cite{kk95} shows that the only possible eigenvalue on the imaginary axis is zero. The multiplicity of the eigenvalue $0$ is at most $2$, because for $h$ small enough the matrix polynomial $I+hP(z)$ satisfies the hypotheses of Theorem~\ref{discreteproperties}, as we show in more detail in the following. \end{proof} \begin{table}[h] \centering \begin{tabular}{lllll} \toprule Case & Name & $\abs{\mathcal{S}_c}$ & $\abs{\mathcal{U}_c}$ & Other eigenvalues\\ \midrule $d<0$ & Pos. rec. & $n_1+n_2$ & $n_1+n_3-1$ & 0 (mult. 1), $\infty$ (mult. $n_2+n_3$)\\ $d=0$ & Null rec. & $n_1+n_2-1$ & $n_1+n_3-1$ & 0 (mult. 2), $\infty$ (mult. $n_2+n_3$)\\ $d>0$ & Transient & $n_1+n_2-1$ & $n_1+n_3$ & 0 (mult. 1), $\infty$ (mult.
$n_2+n_3$)\\ \bottomrule \end{tabular} \caption{Cardinality of $\mathcal{S}_c = \{\lambda : \text{$\lambda$ is an eigenvalue of $P(z)$ and $\Re{\lambda}<0$}\}$ and $\mathcal{U}_c = \{\lambda : \text{$\lambda$ is an eigenvalue of $P(z)$ and $\Re{\lambda}>0$}\}$. } \label{tab:continuouscase} \end{table} We now have all we need to define precisely which invariant pair we are looking for. We call a left invariant pair of $P(z)$ \emph{c-stable}, if its associated eigenvalues are: \begin{itemize} \item the $n_1+n_2$ eigenvalues in $\mathcal{S}_c$, if $P(z)$ is positive recurrent; or \item the $n_1+n_2-1$ eigenvalues in $\mathcal{S}_c$ and the eigenvalue $0$ with multiplicity $1$, if $P(z)$ is null recurrent or transient. \end{itemize} Similarly, with respect to Table~\ref{tab:discretecase}, we call a left invariant pair of $F(y)$ \emph{d-stable}, if its associated eigenvalues are: \begin{itemize} \item the $n$ eigenvalues in $\mathcal{S}_d$, if $F(y)$ is positive recurrent; or \item the $n-1$ eigenvalues in $\mathcal{S}_d$ and the eigenvalue $1$ with multiplicity $1$, if $F(y)$ is null recurrent or transient. \end{itemize} Notice that $(R,I)$, where $R$ is the matrix in~\eqref{eq:foursolutions}, is a d-stable invariant pair for $F(y)$. \subsection{The case $\diag(V)>0$} \label{SectionWhereWeNeedToRecoverX} We start by treating the simpler case in which $\diag(V)>0$ (or, in probabilistic terms, the dynamics in all states have a Brownian motion component). In this case $E_2=E_3=\varnothing$ and $n_2=n_3=0$; all $2n$ eigenvalues of $P(z)$ are finite, and we are looking for exactly $n$ of them. The formulation in~\eqref{ivaU} has $U=I$, hence the task of finding an invariant pair becomes the one of finding a solution of the matrix equation~\eqref{eq:eqRhat}. We have seen that Cyclic Reduction can be applied to matrix polynomials $F(y)$ with a specific sign structure, which is associated with a specific spectral structure as shown in Table~\ref{tab:discretecase}.
The sign structure and spectral structure of $P(z)$ in Table~\ref{tab:continuouscase} do not match these requirements, so we need some preprocessing to convert one case into the other. Although the sign structure is the stricter requirement, it is useful for our analysis to focus first on the spectral structure, and describe methods of altering the position of the eigenvalues. We start from a general lemma on rational transformations of matrix polynomials. \begin{lemma}[\cite{M4poly,Nofpoly}] \label{rattrans} Let \begin{align*} y = f(z)=\frac{\alpha z+\beta}{\gamma z+\delta} \end{align*} be a degree-1 (scalar) rational function, with $\alpha,\beta,\gamma,\delta\in\mathbb{C}$ and $\alpha\delta\neq \beta\gamma$, $z=f^{-1}(y)={(\delta y-\beta)}/{(\alpha-\gamma y)}$ its inverse, and $P(z)\in\mathbb{R}[z]^{n\times n}$ be a degree-$g$ regular matrix polynomial with eigenvalues $\lambda_1,\dots,\lambda_{gn}$ (counted with multiplicity, and possibly including $\infty$). Then, the following properties hold. \begin{enumerate} \item The matrix polynomial \begin{align*} F(y)=(\alpha-\gamma y)^g P\bigl((\delta y-\beta)/(\alpha -\gamma y)\bigr) \end{align*} % is regular and has eigenvalues $f(\lambda_i)$, for each $i=1,2,\dots,gn$. \item If $(Z,U)$ is a left invariant pair for $P(z)$ and $\gamma Z+\delta I$ is nonsingular, then $(f(Z),U)$ is an invariant pair for $F(y)$. Conversely, if $(Y,U)$ is an invariant pair for $F(y)$ and $\alpha I-\gamma Y$ is nonsingular, then $(f^{-1}(Y),U)$ is an invariant pair for $P(z)$. \end{enumerate} \end{lemma} Note that $f(Z) = (\gamma Z+ \delta I)^{-1}(\alpha Z+\beta I)=(\alpha Z+\beta I)( \gamma Z+\delta I)^{-1}$ is well-defined for a matrix argument $Z$ since the two factors commute, and similarly for $f^{-1}(Y)$. If $U=I$, the last item gives a relation between the solutions to the unilateral matrix equations associated to $P(z)$ and $F(y)$.
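A scalar sanity check of item 1 of Lemma~\ref{rattrans} (the polynomial and the value of $h$ are arbitrary):

```python
import numpy as np

h = 0.1
# P(z) = z^2 - 3z + 2 has eigenvalues 1 and 2.
# With f(z) = 1 + h z  (alpha = h, beta = 1, gamma = 0, delta = 1), the lemma
# gives F(y) = h^2 P((y - 1)/h) = y^2 - (2 + 3h) y + (1 + 3h + 2h^2).
roots_P = np.roots([1.0, -3.0, 2.0])
roots_F = np.roots([1.0, -(2 + 3 * h), 1 + 3 * h + 2 * h ** 2])
mapped = 1 + h * roots_P                    # f(lambda_i)
```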
Lemma~\ref{rattrans} suggests a general strategy to approach the problem: \begin{enumerate} \item Choose a function $f$ such that $f(0)=1$ and the images of $\mathcal{S}_c,\mathcal{U}_c$ lie inside and outside the unit circle, respectively. % \item Construct $F(y)=Ay^2-By+C$ as in Lemma~\ref{rattrans}. % \item Apply Cyclic Reduction to find the solution $R$ to % \begin{align*} R^2A-RB+C=0; \end{align*} % then, $(R,I)$ is a d-stable invariant pair of $F(y)$. \item Compute $X=f^{-1}(R)$. \label{laststep} % \end{enumerate} This general framework of relocating eigenvalues via rational transformations is quite common in the literature; see for instance~\cite{bmp} for a discussion of it in the case of fluid queues ($V=0$). Frequent choices for $f$ are \begin{align*} y=1+hz, \quad \mbox{and} \quad y=\frac{1+hz}{1-hz}, \end{align*} % where $h>0$ is a parameter. However, some care is needed here to allow for componentwise accurate computations within the framework. The first important restriction comes from the last step: once we have obtained $R\geq 0$, we need to be able to compute $f^{-1}(R)$. If one chooses $y=(1+hz)/(1-hz)$, the computation becomes $X= h^{-1}(R-I)(I+R)^{-1}$. This is problematic, because the matrix $I+R$, which we need to invert, is a nonnegative matrix; hence Theorem~\ref{thm:gth} does not apply, and we do not know of another componentwise algorithm to invert matrices with this sign pattern, even if triplet representations are available. Things are easier if one chooses the function $y=1+hz$. In this case, the last step becomes $X= h^{-1}(R- I)$; subtractions are needed only to compute its diagonal, and we can avoid them completely using Corollary~\ref{diagcorollary} if we manage to obtain a triplet representation for $-X$. Moreover, the matrix is now naturally expressed in the form~\eqref{expdecomposition}. For this reason, we set $y=f(z)=1+hz$ in the following.
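A quick numerical check of why this choice is convenient (the sample eigenvalues below are invented): for $\Re{\lambda}<0$ and $0<h<-2\Re{\lambda}/\abs{\lambda}^2$ we have $\abs{1+h\lambda}<1$, and the inverse map requires no matrix inversion at all:

```python
import numpy as np

h = 0.1
lams = np.array([-2 + 3j, -0.5 + 0j, -4 - 1j])   # sample c-stable eigenvalues
ys = 1 + h * lams                                # images under f(z) = 1 + h z
# |1 + h*lam|^2 = 1 + 2h Re(lam) + h^2 |lam|^2 < 1 for small enough h > 0

R = np.diag(ys)                                  # a stand-in for the CR output
X = (R - np.eye(3)) / h                          # f^{-1}(R): no inversion needed
```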
With this choice, we get $z={(y-1)}/{h}$, and \begin{align*} P(z) & = V\frac{(y-1)^2}{h^2} -D\frac{y-1}{h} + Q \\ & = \frac{1}{h^2}Vy^2 - \left(2\frac{1}{h^2}V + \frac{1}{h}D\right)y + \left(\frac{1}{h^2}V+\frac{1}{h}D+Q\right), \end{align*} hence \begin{align} \label{ABC} A := \frac{1}{h^2}V, \quad B:= 2\frac{1}{h^2}V + \frac{1}{h}D, \quad C:= \frac{1}{h^2}V+\frac{1}{h}D+Q. \end{align} Once we have decided to use~\eqref{ABC} to convert the setting into that of a discrete-time quadratic matrix equation, we have to choose the value of the parameter~$h$. A first requirement is that Assumption A4 is satisfied; it is easy to see that it holds provided that $v_{ii}+d_{ii}h+q_{ii}h^2 \geq 0$ for each $i$. Since we assume $v_{ii}>0$ for each $i \in \mathcal{S}$ for now, this holds for sufficiently small values of $h$. Moreover, we have to ensure that the computed diagonal of $C$ is componentwise accurate. For this, we follow the strategy used in~\cite{xxl12}: we choose $h$ small enough so that all the required subtractions are of the form $b-a$ with $b\geq 2a\geq 0$. In this case, there cannot be catastrophic cancellation in the subtraction in machine arithmetic. This requirement translates to the following constraints on $h$: \begin{subequations} \label{eq:hconstraints} \begin{align} v_{ii} &\geq -2(d_{ii}h + q_{ii}h^2), \text{ for each $i$ with $v_{ii}>0$ and $d_{ii}<0$};\\ v_{ii}+hd_{ii} &\geq -2 q_{ii}h^2, \text{ for each $i$ with $d_{ii}>0$}. \end{align} \end{subequations} All these inequalities are satisfied for a sufficiently small value of $h$, which is easy to compute explicitly. Another possibility is performing these subtractions using machine arithmetic with a higher precision; since there are only $O(n)$ of them, this safeguard will not impact the final cost of the algorithm. It is easy to check that Assumption A5 is always satisfied. We prove below that A6 is satisfied as well.
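The construction~\eqref{ABC} and the constraints~\eqref{eq:hconstraints} on a made-up two-state example ($V$, $D$, $Q$ and the value of $h$ are ours):

```python
import numpy as np

V = np.diag([1.0, 0.5])
D = np.diag([-0.3, 0.2])
Q = np.array([[-1.0, 1.0],
              [1.0, -1.0]])          # generator: Q @ 1 = 0
h = 0.5

# cancellation-safety checks (b >= 2a form); both hold for h = 0.5
assert V[0, 0] >= -2 * (D[0, 0] * h + Q[0, 0] * h**2)   # v_11 > 0, d_11 < 0
assert V[1, 1] + h * D[1, 1] >= -2 * Q[1, 1] * h**2     # d_22 > 0

A = V / h**2
B = 2 * V / h**2 + D / h
C = V / h**2 + D / h + Q
```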
\begin{theorem} Suppose the matrix $Q$ is irreducible and aperiodic, and $V\neq 0$. Then, Assumption A6 holds for the matrices $A,B,$ and $C$ defined in~\eqref{ABC}. \end{theorem} \begin{proof} We identify each element in the index set of the infinite matrix~\eqref{biinf} with a pair $(i,\ell)\in \{1,2,\dots,n\} \times \mathbb{Z}$, where the second entry denotes the block (level) and the first denotes the position in the block. We shall prove that there is a walk in the graph associated to~\eqref{biinf} between any two states $(i,\ell),(j,m) \in \{1,2,\dots,n\} \times \mathbb{Z}$. Let $k\in\{1,2,\dots,n\}$ be such that $v_{kk}>0$. By the irreducibility assumption, we can find in the graph associated to $Q$ a walk from $i$ to $j$ which passes through $k$ and has length at least $m-\ell$. Since $C$ has the same offdiagonal nonzero structure as $Q$, the same walk can be used in the matrix~\eqref{biinf}, and after each step the second element of the pair goes up by one. Hence the walk goes from $(i,\ell)$ to $(j,m+p)$, for some $p\geq 0$. We modify this walk by inserting $p$ transitions using the nonzero entry $A_{kk}$ when we first reach $k$ as the first element of the pair. The resulting walk goes from $(i,\ell)$ to $(j,m)$, as requested. Since $Q$ is aperiodic, the same construction can be made with different values of $p$ which are coprime; hence~\eqref{biinf} is aperiodic, too. \end{proof} Moreover, we can prove that our transformation~\eqref{ABC} preserves the sign of the mean drift. \begin{lemma} Let $d_c$ be the mean drift of the Markov-modulated Brownian motion process with parameters $V,D,Q$, and $d_d$ be the mean drift of the QBD process associated to $A,B,C$ as in~\eqref{ABC}. Then, $d_d = h^{-1}d_c$. \end{lemma} \begin{proof} First note that $A-B+C=Q$, so the left Perron vector $\bs{u}$ of $A-B+C$ coincides with the one of $Q$.
Then it is easy to compute \[ d_d = \bs{u}(C-A)\bs{1} = \bs{u}\left(\frac{1}{h^2}V+\frac{1}{h}D+Q-\frac{1}{h^2}V\right)\bs{1} = \frac{1}{h}\bs{u}D\bs{1} = \frac{1}{h}d_c. \] \end{proof} Finally, we note that as a byproduct of Cyclic Reduction (Algorithm~\ref{algo:crt}) we can obtain explicitly a triplet representation for $-X^{\top}$. \begin{theorem} The triplet \begin{equation} \label{tripletX} \left(\offdiag(-X^{\top}),\bs{u}^{\top},\left(h^{-1}\sad{\bs{v}}\sad{B}^{-1}_\infty\right)^{\top}\right) \end{equation} is a triplet representation for the matrix $-X^{\top}$, where $X=h^{-1}(R-I)$. \end{theorem} \begin{proof} By Lemma~\ref{lem:secondcrtriplet}, we have $\sad{\bs{v}} - \bs{u}\sad{B}_\infty + \bs{u}C_0 = \bs{0}$. Hence, \begin{align*} -\bs{u}X = \bs{u}\frac{1}{h}(I-C_0\sad{B}_{\infty}^{-1}) = \frac{1}{h}\sad{\bs{v}}\sad{B}^{-1}_\infty. \end{align*} \end{proof} Summing up, our algorithm for the case $\diag(V)>0$ is described in Algorithm~\ref{algo:vpos}. \begin{algorithm} \KwIn{$V,D\in\mathbb{R}^{n\times n}$ diagonal matrices with $\diag(V)>\bs{0}$, $Q\in\mathbb{R}^{n\times n}$ a generator matrix ($Q\bs{1} =\bs{0}$, $\offdiag(Q)\geq \bs{0}$), satisfying A1, A2, A3.} \KwOut{the matrix $X$ (or a decomposition~\eqref{expdecomposition} for it), and the triplet representation~\eqref{tripletX} for $-X^{\top}$.} compute the left Perron vector $\bs{u}$ of $Q$ using the triplet representation $(\offdiag(-Q),\bs{1},\bs{0})$\; choose $h$ small enough so that $h^{-2}v_{ii}+h^{-1}d_{ii}+q_{ii}>0$, and it can be computed without catastrophic cancellation\; compute $A,B,C$ as in~\eqref{ABC}\; compute $R \geq 0$ via Algorithm~\ref{algo:crt}\; using the last iterate $A_k$ computed by Algorithm~\ref{algo:crt} and the triplet representation for $\sad{B}_{\infty}$, compute $\sad{\bs{v}}=\bs{u}A_k$ and $h^{-1}\sad{\bs{v}}\sad{B}_{\infty}^{-1}$\; compute $X=h^{-1}(R-I)$ (or $P= h^{-1}R$ and $s = h^{-1}$)\; \caption{Computing a c-stable invariant pair $(X,I)$ of $P(z)$, in
the case $\diag(V)>0$} \label{algo:vpos} \end{algorithm} \subsection{Shifting infinite eigenvalues in $P(z)$} The method outlined in the previous section uses the assumption that $v_{ii}>0$ for each $i$. When this is not the case, it is not true in general that we can choose~$h$ small enough to have $h^{-2}v_{ii}+h^{-1}d_{ii}+q_{ii}\geq 0$. This is possible for $i\in E_2$, since $d_{ii}>0$, but if $E_3$ is not empty the algorithm cannot be applied. Moreover, if $V$ is singular, then the matrix polynomial $P(z)$ has infinite eigenvalues, and a c-stable invariant pair $(X,I)$, with $X$ of size $n\times n$, cannot be constructed since even in the positive recurrent case $P(z)$ does not have $n$ eigenvalues in the left half-plane. Finally, the discretization methods outlined in the previous section all break down in some way: if we use the map $y=1+hz$, then we cannot enforce the requirement that the eigenvalue $z=\infty$ is mapped inside the unit circle by choosing a small enough $h$; if we use a variant of the Cayley transform, then $f(\infty)=-1$, and we are left with an eigenvalue of $F(y)$ at $-1$, possibly with high multiplicity; this eigenvalue often prevents the convergence of Cyclic Reduction (note indeed that we are not in the hypotheses of Theorem~\ref{thm:crconv}). All these issues are related, and indeed we can solve all of them with the same modification to the algorithm. We subdivide the parameter matrices into blocks corresponding to $E_1,E_2,E_3$ as \begin{equation} \label{VDQ3blocks} V = \m{V_1 & 0 & 0\\ 0 & 0 & 0\\0&0&0}, \quad D = \m{D_1 & 0 & 0\\ 0 & D_2 & 0\\0&0& D_3}, \quad Q = \m{Q_{11} & Q_{12} & Q_{13}\\Q_{21} & Q_{22} & Q_{23}\\Q_{31} & Q_{32} & Q_{33}}, \end{equation} where $V_1>0$, $D_2 > 0$ and $D_3 < 0$ are diagonal matrices. We define \[ \widetilde{P}(z) := P(z)S(z), \quad S(z) := \m{I & 0 & 0\\ 0 & I & 0\\ 0 & 0 & (1+hz)I}.
\] The resulting matrix polynomial $\widetilde{P}(z)=\widetilde{V}z^2-\widetilde{D}z+\widetilde{Q}$ has coefficients \[ \widetilde{V} := \m{V_1 & 0 & 0\\ 0 & 0 & 0\\0&0&-hD_{3}}, \quad \widetilde{D} := \m{D_1 & 0 & -hQ_{13}\\ 0 & D_2 & -hQ_{23}\\0&0& D_3-hQ_{33}}, \quad \widetilde{Q} := Q. \] Every finite eigenvalue $\lambda$ of $P(z)$ is also an eigenvalue of $\widetilde{P}(z)$ (with the same left eigenvector), while the $n_3$ infinite eigenvalues are replaced by eigenvalues equal to $-h^{-1}$. This can be readily proved by considering the determinants $\det \widetilde{P}(z) = \det P(z) \det S(z)$ and their degrees. This formulation of shifting as multiplication by a suitable matrix polynomial has been suggested recently in~\cite{BinM_brauer}. \begin{remark} We can interpret this transformation as a manipulation of the differential equation~\eqref{ODE}. Indeed, if we subdivide $\bs{p}(x) = \begin{bmatrix} \bs{p}_1(x) & \bs{p}_2(x) & \bs{p}_3(x) \end{bmatrix}$ conformably, then the third block equation reads \begin{align} \label{ODE3} - \bs{p}_3'(x)D_3 + \bs{p}_1(x)Q_{13}+\bs{p}_2(x)Q_{23}+\bs{p}_3(x)Q_{33} = \bs{0}; \end{align} differentiating this equation gives \begin{align} \label{ODE3d} - \bs{p}_3''(x)D_3 + \bs{p}_1'(x)Q_{13}+\bs{p}_2'(x)Q_{23}+\bs{p}_3'(x)Q_{33} = \bs{0}; \end{align} then the equation $\bs{p}''(x)\widetilde{V} - \bs{p}'(x)\widetilde{D}+\bs{p}(x)Q = \bs{0}$ is obtained from~\eqref{ODE} by replacing the third block equation~\eqref{ODE3} with $\eqref{ODE3} + h\eqref{ODE3d}$. This kind of manipulation is commonly used in the context of index reduction techniques~\cite{MehrmannKunkelBook}. \end{remark} We can set up the discretization scheme described in Section~\ref{sec:discretization} starting from $\widetilde{P}(z)$ rather than $P(z)$.
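A numerical check of the identity $\widetilde{P}(z)=P(z)S(z)$ and of the coefficient formulas above (sizes $n_1=n_2=n_3=1$ and all entries are chosen arbitrarily):

```python
import numpy as np

h = 0.25
V = np.diag([1.0, 0.0, 0.0])            # E1 = {1}, E2 = {2}, E3 = {3}
D = np.diag([0.2, 1.0, -1.0])
Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])        # an irreducible generator

# coefficients of P~(z) = P(z) S(z), following the formulas above
Vt = V.copy(); Vt[2, 2] = -h * D[2, 2]
Dt = D.copy()
Dt[0, 2] = -h * Q[0, 2]
Dt[1, 2] = -h * Q[1, 2]
Dt[2, 2] = D[2, 2] - h * Q[2, 2]
Qt = Q.copy()

def P(z):  return V * z**2 - D * z + Q
def Pt(z): return Vt * z**2 - Dt * z + Qt
def S(z):
    s = np.eye(3)
    s[2, 2] = 1 + h * z
    return s
```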
The resulting polynomial $\widetilde{F}(y)$ has coefficients \begin{subequations} \label{eq:tildeABC} \begin{align} \widetilde{A} &:= \m{h^{-2}V_1 & 0 & 0\\ 0 & 0 & 0\\ 0&0&- h^{-1}D_{3}},\\ \widetilde{B}&:=\m{ 2h^{-2}V_1 + h^{-1}D_1 & 0 & -Q_{13}\\ 0 & h^{-1}D_2 & -Q_{23}\\0&0& -h^{-1}D_3-Q_{33}},\\ \widetilde{C}&:= \m{h^{-2}V_1+ h^{-1}D_1+Q_{11} & Q_{12} & 0\\ Q_{21} & h^{-1}D_2 + Q_{22} & 0\\Q_{31}&Q_{32}&0}. \end{align} \end{subequations} Notice the nontrivial simplification that zeroes out the last block column of $\widetilde{C}$. Its appearance is due to the fact that the eigenvalues $-h^{-1}$ introduced in the previous step get mapped to $f(-h^{-1})=0$, hence $\widetilde{F}(y)$ has $n_3$ zero eigenvalues. If one chooses a sufficiently small $h$, the diagonals of $h^{-2}V_1+ h^{-1} D_1+Q_{11}$ and $h^{-1}D_2 + Q_{22}$ are nonnegative (and can be computed accurately): it is sufficient to impose~\eqref{eq:hconstraints} for $i\in E_1 \cup E_2$. Hence, the matrices $\widetilde{A}$ and $\widetilde{C}$ are nonnegative, and $\widetilde{B}$ is an M-matrix, which is the correct sign structure to implement subtraction-free Cyclic Reduction (Algorithm~\ref{algo:crt}). \subsection{Deflating zero eigenvalues in $R$} \label{sec:deflatingzeros} In the case where some of the $v_{ii}$ are zero, the solution $R$ produced by Cyclic Reduction does not immediately give the invariant pair we need. Indeed, in view of our previous analysis of the eigenvalues of $F(y)$ and $\widetilde{F}(y)$, in the positive recurrent case the eigenvalues of $R$ consist of \begin{itemize} \item $n_3$ zero eigenvalues; \item $f(\lambda)$, for each eigenvalue $\lambda$ of $P(z)$ with $\Re \lambda<0$, counted with multiplicity. \end{itemize} The eigenvalues of the form $f(\lambda)$ are precisely the ones we need in our invariant pair, but there are spurious zero eigenvalues.
If $R$ were in the form \begin{equation} \label{trailingzerocolumns} \begin{bmatrix} * & 0\\ * & 0 \end{bmatrix}, \end{equation} with the bottom-right block $n_3\times n_3$, we could remove them by applying the result in point~\ref{deflateinvpair} of Theorem~\ref{invpairprops} to the invariant pair $(R,I)$. Unfortunately, we have $R=C_0 \sad{B}_\infty^{-1}$, where $C_0$ is in the form~\eqref{trailingzerocolumns} and $\sad{B}_\infty$ is a regular M-matrix (for which we know a triplet representation). When one carries out the product, the zero block is lost. To recover it, we have to switch to a different invariant pair. \begin{theorem} Let the matrix $C_0=\widetilde{C}$ from~\eqref{eq:tildeABC} and the matrix $\sad{B}_\infty$ produced by Cyclic Reduction on~\eqref{eq:tildeABC} be partitioned as \begin{align} \label{BCpartitioning} C_0 = \begin{bmatrix} C_{11} & 0\\ C_{21} & 0 \end{bmatrix}, \quad \sad{B}_\infty =\begin{bmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{bmatrix}, \end{align} where the bottom-right block of dimension $n_3\times n_3$ corresponds to the indices in $E_3$. Then, $(Y,\begin{bmatrix} I & \Psi \end{bmatrix})$, with \begin{subequations} \label{dstableinvpairformula} \begin{align} \Psi & = -B_{12}B_{22}^{-1} \geq 0,\\ Y & = (C_{11}+\Psi C_{21})S^{-1}\geq 0, \quad \text{with } S := B_{11}+\Psi B_{21} \end{align} \end{subequations} is a subtraction-free expression for a d-stable left invariant pair of $\widetilde{F}(y)$, and \begin{equation} \label{cstableinvpairformula} (X,\begin{bmatrix} I & \Psi \end{bmatrix}), \quad \text{with } X= h^{-1}(Y-I) \end{equation} is a subtraction-free expression for a c-stable left invariant pair of $P(z)$.
\end{theorem} \begin{proof} We apply to the d-stable left invariant pair $(R,I)$ of $\widetilde{F}(y)$ a transformation of the form~\eqref{eq:qtrans} with \[ M = \begin{bmatrix} I & \Psi\\ 0 & I \end{bmatrix}, \] obtaining \begin{align*} (MRM^{-1},M) &= (MC_0(M\sad{B}_{\infty})^{-1},M)\\ & = \left( \begin{bmatrix} C_{11}+\Psi C_{21} & 0\\ C_{21} & 0 \end{bmatrix} \begin{bmatrix} B_{11}+ \Psi B_{21} & 0\\ B_{21} & B_{22} \end{bmatrix}^{-1}, \begin{bmatrix} I & \Psi\\ 0 & I \end{bmatrix} \right)\\ & = \left( \begin{bmatrix} (C_{11}+\Psi C_{21})S^{-1} & 0\\ C_{21}S^{-1} & 0 \end{bmatrix}, \begin{bmatrix} I & \Psi\\ 0 & I \end{bmatrix} \right). \end{align*} Notice that $B_{22}$ and $S$ are respectively a submatrix and a Schur complement of the regular M-matrix $\sad{B}_\infty$, so triplet representations to invert both are available by Lemma~\ref{subtriplets}. We can now apply point~\ref{deflateinvpair} of Theorem~\ref{invpairprops} to obtain that~\eqref{dstableinvpairformula} is an invariant pair associated with the d-stable eigenvalues of $\widetilde{F}(y)$. Transforming this invariant pair with Lemma~\ref{rattrans}, we obtain~\eqref{cstableinvpairformula}. \end{proof} \subsection{A triplet representation for $-X^{\top}$} \label{sec:tripletX2} In this section, we obtain a triplet representation for the M-matrix $-X^{\top}$ using subtraction-free expressions only. \begin{theorem} The triplet \begin{equation} \label{tripletX2} \left(\offdiag(-X^{\top}),\bs{u}_1^{\top}, \frac{1}{h}\left(\left(\sad{\bs{v}}_1 + \sad{\bs{v}}_2B_{22}^{-1}(C_{21}-B_{21})\right)S^{-1}\right)^{\top}\right) \end{equation} is a subtraction-free expression for a triplet representation of $-X^{\top}$, where $X$ is defined by~\eqref{cstableinvpairformula}. \end{theorem} \begin{proof} By introducing the partitioning~\eqref{BCpartitioning} in Lemma~\ref{lem:secondcrtriplet}, we get $\bs{u}_1B_{12} + \bs{u}_2 B_{22} = \sad{\bs{v}}_2$.
Hence, $\bs{u}_2 = \sad{\bs{v}}_2 B_{22}^{-1} + \bs{u}_1\Psi $, and \begin{equation} \label{vBeq} \bs{u}M^{-1} = \begin{bmatrix} \bs{u}_1 & \bs{u}_2 \end{bmatrix} \begin{bmatrix} I & -\Psi\\ 0 & I \end{bmatrix} = \begin{bmatrix} \bs{u}_1 & \sad{\bs{v}}_2B_{22}^{-1} \end{bmatrix}. \end{equation} Again, from Lemma~\ref{lem:secondcrtriplet}, we get \begin{align*} \sad{\bs{v}} &= \bs{u}(\sad{B}_\infty -C_0) = \bs{u}M^{-1}(M\sad{B}_\infty -MC_0)\\ &=\begin{bmatrix} \bs{u}_1 & \sad{\bs{v}}_2B_{22}^{-1} \end{bmatrix} \left( \begin{bmatrix} S & 0\\ B_{21} & B_{22} \end{bmatrix} - \begin{bmatrix} C_{11} + \Psi C_{21} & 0\\ C_{21} & 0 \end{bmatrix} \right), \end{align*} and the first block column of this expression gives \[ \sad{\bs{v}}_1 + \sad{\bs{v}}_2B_{22}^{-1}(C_{21}-B_{21}) = \bs{u}_1 (S - (C_{11}+\Psi C_{21})) \] or \[ \left(\sad{\bs{v}}_1 + \sad{\bs{v}}_2B_{22}^{-1}(C_{21}-B_{21})\right)S^{-1} = \bs{u}_1 (I - (C_{11}+\Psi C_{21})S^{-1}) = -h \bs{u}_1 X, \] from which~\eqref{tripletX2} follows. Note that $B_{21}\leq 0$ and $C_{21}, B_{22}^{-1},S^{-1}\geq 0$, so no subtractions are needed in~\eqref{tripletX2}. \end{proof} \subsection{The algorithm} Putting everything together, we obtain Algorithm~\ref{algo:npt} for the computation of the c-stable invariant pair of a matrix polynomial $P(z)$, which generalizes Algorithm~\ref{algo:vpos} by removing the assumption that $\operatorname{diag}(V)>0$. 
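A practical payoff of a triplet representation such as~\eqref{tripletX2} is that the diagonal of the M-matrix can be reconstructed without cancellation: assuming the usual convention that $(\offdiag(N),\bs{v},\bs{w})$ encodes an M-matrix $N$ via $\bs{w}=N\bs{v}$, one has $N_{ii}=(w_i+\sum_{j\neq i}(-N_{ij})v_j)/v_i$, a quotient of sums of nonnegative quantities. A minimal Python sketch on a toy M-matrix of our own choosing:

```python
# Subtraction-free diagonal recovery from a triplet (offdiag(N), v, w),
# with w = N v: every term below is nonnegative since offdiag(N) <= 0, v > 0.
N = [[3.0, -1.0, -0.5],
     [-2.0, 4.0, -1.0],
     [-0.5, -0.5, 2.0]]                                       # toy M-matrix
v = [1.0, 1.0, 1.0]
w = [sum(N[i][j] * v[j] for j in range(3)) for i in range(3)]  # w = N v

offdiag = [[N[i][j] if i != j else 0.0 for j in range(3)] for i in range(3)]
diag_rec = [(w[i] + sum(-offdiag[i][j] * v[j] for j in range(3) if j != i)) / v[i]
            for i in range(3)]                                 # recovers diag(N)
```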
\begin{algorithm} \KwIn{$V,D\in\mathbb{R}^{n\times n}$ diagonal matrices with $\diag(V) \geq \bs{0}$, $Q\in\mathbb{R}^{n\times n}$ a generator matrix ($Q\bs{1} =\bs{0}$, $\offdiag(Q)\geq \bs{0}$), satisfying A1, A2, A3.} \KwOut{a c-stable invariant pair $(X,\begin{bmatrix} I & \Psi \end{bmatrix})$ of $P(z) = Vz^2-Dz+Q$ (or a decomposition~\eqref{expdecomposition} for it) and a triplet representation for $-X^{\top}$.} compute the left Perron vector $\bs{u}$ of $Q$ using the triplet representation $(\offdiag(-Q),\bs{1},\bs{0})$\; choose $h > 0$ small enough to satisfy~\eqref{eq:hconstraints}\; compute $\widetilde{A},\widetilde{B},\widetilde{C}$ using the formulas~\eqref{eq:tildeABC}\; apply Algorithm~\ref{algo:crt} to $\widetilde{A},\widetilde{B},\widetilde{C}$ (only the last iterate $\offdiag(\sad{B}_k)$ and $\sad{\bs{v}}= \bs{u} A_k$ are needed)\; compute $X$ (or $P=h^{-1}Y$, $s=h^{-1}$) and $\Psi$ from~\eqref{cstableinvpairformula}, using the triplet representations derived from Lemma~\ref{subtriplets} to invert $B_{22}$ and $S$\; compute the triplet representation~\eqref{tripletX2}\; \caption{Computing a c-stable invariant pair of $P(z)$.} \label{algo:npt} \end{algorithm} \begin{remark} In the case $V=0$, our construction reduces to the method to transform a fluid queue into a QBD introduced by Ramaswami~\cite{ram99}, up to a diagonal scaling. Indeed, the transition matrices $A_0,A_1,A_2$ appearing in~\cite[Equation~4.5]{ram99} satisfy $A_0=K^{-1}\widetilde{C}$, $A_1=I-K^{-1}\widetilde{B}$, $A_2=K^{-1}\widetilde{A}$, where $K$ is the diagonal matrix with entries \[ K_{ii} = \begin{cases} d_{ii} & d_{ii}>0,\\ 2\abs{d_{ii}} & d_{ii}<0. \end{cases} \] (the case $d_{ii}=0$ is not treated in~\cite{ram99}). \end{remark} \subsection{An SDA-like variant} In the linear case, a popular algorithm for this problem is the~\emph{structured doubling algorithm}~\cite{glx05} (SDA) and its variants~\cite{bmp,wwl}. 
It is a slightly different iteration, which has a lower computational cost because it uses the block structure in a more effective way. Merging the derivation in~\cite{bmp} with ours, we can obtain an SDA-like variant for second-order problems. The following algorithm indeed reduces to SDA-ss~\cite{bmp} if $n_1=0$. We start from the matrix polynomial $P(z)$ in the three-block form~\eqref{VDQ3blocks}, but this time we apply the discretization map $y=1+hz$ first, and then we modify the location of the infinite eigenvalues. We have $F(y) = Ay^2 + By + C$, with coefficients as in~\eqref{ABC}, that is, \begin{align*} A &= \m{h^{-2} V_1\\ & 0 \\ & & 0}, \quad B = \m{h^{-1}D_1+ 2h^{-2}V_1 \\ & h^{-1}D_2\\&& h^{-1}D_3},\\ C &= \m{Q_{11}+ h^{-1}D_1+h^{-2}V_1 & Q_{12} & Q_{13}\\ Q_{21} & Q_{22}+ h^{-1}D_2 & Q_{23}\\ Q_{31} & Q_{32} & Q_{33}+ h^{-1}D_3}. \end{align*} We postmultiply these coefficients by the inverse of the M-matrix \begin{equation} \label{sdalikemmatrix} M = \m{h^{-2}V_1 &0 &0 \\ 0 & h^{-1}D_2 & 0\\ -Q_{31} & -Q_{32} & -Q_{33}- h^{-1}D_3}, \end{equation} an operation which does not change eigenvalues and left invariant pairs, obtaining $\widehat{F}(y) = \widehat{A}y^2 +\widehat{B}y+\widehat{C}$, with \[ \widehat{A} = \m{I\\ & 0 \\ & & 0}, \quad \widehat{B} = \m{B_{11} \\ & I \\-B_{31} & -B_{32} & -B_{33}}, \quad \widehat{C} = \m{C_{11} & C_{12} & C_{13}\\ C_{21} & C_{22} & C_{23}\\ 0 & 0 & -I}, \] where the block coefficients are given by \begin{align*} \m{C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23}\\ } & = \m{Q_{11}+ h^{-1}D_1+ h^{-2}V_1 & Q_{12} & Q_{13}\\ Q_{21} & Q_{22}+ h^{-1}D_2 & Q_{23} } \times \\ & \;\;\;\; \m{h^{-2}V_1 &0 &0 \\ 0 & h^{-1}D_2 & 0\\ -Q_{31} & -Q_{32} & -Q_{33}- h^{-1}D_3}^{-1} \end{align*} and \begin{align*} \m{B_{11} & 0 & 0\\ B_{31} & B_{32} & B_{33} } & = \m{h^{-1}D_1+ 2h^{-2}V_1 & 0 & 0\\ 0&0&- h^{-1}D_3 } \times \\ & \;\;\;\; \m{h^{-2}V_1 &0 &0 \\ 0 & h^{-1}D_2 & 0\\ -Q_{31} & -Q_{32} & -Q_{33}- h^{-1}D_3}^{-1}.
\end{align*} To obtain these blocks with a subtraction-free expression, we can make use of the triplet representation $\left(\offdiag(M),\bs{1}, \begin{bmatrix} h^{-2}\diag(V_1)^{\top} & h^{-1}\diag(D_2)^{\top} & -h^{-1}\diag(D_3)^{\top} \end{bmatrix}^{\top}\right)$ for~$M$. Finally, we postmultiply by \[ \widehat{S}(y) = \m{I\\&I\\&&yI}, \] which has the effect of shuffling around some blocks and moving $n_3$ of the infinite eigenvalues to zero; the final result is $\check{F}(y) = \check{A}y^2 + \check{B}y + \check{C}$, with \begin{equation} \label{SDAlikeABC} \check{A}=\m{I & 0 & 0\\0 & 0 & 0 \\ 0 & 0 & B_{33}}, \check{B}=\m{B_{11} & 0 & -C_{13}\\0 & I & -C_{23}\\ -B_{31} & -B_{32} & I}, \check{C} =\m{C_{11} &C_{12} & 0\\ C_{21} & C_{22} & 0\\0 & 0 & 0}. \end{equation} The triple $\check{A}, \check{B}, \check{C}$ has the right signs for us to apply Cyclic Reduction, producing the same solution matrix $R$ as the above approach, since the final location of the eigenvalues is the same. Moreover, some of the pattern in the matrices $\check{A}, \check{B}, \check{C}$ is preserved under CR iterations; namely, at each step $k$, the pattern is \[ A_k = \m{\ast & \ast & 0\\ \ast & \ast & 0\\ 0 & 0 & 0}, \quad B_k = \m{\ast & \ast & \ast\\ \ast & I & \ast \\ \ast & \ast & I}, \quad \sad{B}_k = \m{\ast & \ast & \ast\\ \ast & I & \ast \\ \ast & \ast & I}, \quad C_k = \m{\ast & 0 & \ast\\ 0 & 0 & 0\\ \ast & 0 & \ast}. \] A slightly more efficient version of Cyclic Reduction can be obtained by exploiting the knowledge of these zero and identity blocks. While in the linear case the formulas simplify notably, in our quadratic case it is doubtful whether dealing with the additional complication of these formulas in the implementation is worthwhile, despite the slight computational advantage. \subsection{Solving the ODE} \label{sec:solvingODE} The reference~\cite[Sections~1.4, 2.4 and~2.5]{GohLR} contains a complete theory of the relations between invariant pairs and solutions of linear matrix differential equations.
Let $(X,U)$, with $X\in\mathbb{R}^{\ell\times \ell}$ and $U \in \mathbb{R}^{\ell\times n}$ in the form~\eqref{ivaU}, be a c-stable invariant pair of~\eqref{eq:matpol}. We assume positive recurrence, since otherwise there is no invariant density to compute. Then, the eigenvalues of $X$ coincide with the eigenvalues of $P(z)$ in the open left half-plane, and any solution $\bs{p}(x)$ of~\eqref{ODE} such that $\lim\limits_{x\to\infty} \bs{p}(x)=\bs{0}$ can be written as \begin{align*} \bs{p}(x)=\bs{v}\exp(Xx)U, \quad \mbox{for some $\bs{v}\in\mathbb{R}^{1\times \ell}$}. \end{align*} Simple probability considerations show that the invariant measure of the Markov-modulated Brownian motion with coefficients $V,D,Q$ is the sum of a mass $ \begin{bmatrix} \bs{0} & \bs{p}_0 \end{bmatrix}$ at $x=0$ (where the matrix partitioning is consistent with~\eqref{ivaU}), and the density $\bs{p}(x)=\bs{v}\exp(Xx)U$. If the computed invariant pair satisfies~\eqref{ivaU}, the unknown coefficients $\bs{p}_0$ and $\bs{v}$ can be determined from the condition \[ \bs{u} = \begin{bmatrix} \bs{u}_{1} & \bs{u}_2 \end{bmatrix} = \begin{bmatrix} 0 & \bs{p}_0 \end{bmatrix} + \int_{0}^\infty \bs{p}(x) \mathrm{d}x = \begin{bmatrix} -\bs{v}X^{-1} & \bs{p}_0 - \bs{v}X^{-1}\Psi \end{bmatrix}. \] Using the relation already derived in~\eqref{vBeq}, we get \begin{align*} \bs{p}_0 = \bs{u}_2 + \bs{v}X^{-1}\Psi = \bs{u}_2 - \bs{u}_1\Psi = \sad{\bs{v}}_2 B_{22}^{-1}. \end{align*} Moreover, $\bs{v}$ satisfies $-\bs{u}_1X=\bs{v}$, hence it follows from~\eqref{tripletX2} that \[ \bs{v} = \frac{1}{h}\left(\sad{\bs{v}}_1 + \sad{\bs{v}}_2B_{22}^{-1}(C_{21}-B_{21})\right)S^{-1}, \] which is the vector we have already computed when obtaining a triplet representation for $-X^{\top}$. Hence we have all the quantities that are needed to compute the invariant density $\bs{p}(x)=\bs{v}\exp(Xx)U$.
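To make the last formula concrete, here is a self-contained Python sketch evaluating $\bs{p}(x)=\bs{v}\exp(Xx)U$ on toy $2\times 2$ data of our own choosing (with a naive truncated-series matrix exponential, used purely for illustration): since $\offdiag(X)\geq 0$, the matrix $\exp(Xx)$ is entrywise nonnegative, and c-stability of $X$ makes the density decay.

```python
# Toy evaluation of the invariant density p(x) = v * expm(X x) * U.
def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=60):
    """Truncated Taylor series for exp(A); adequate here for small norms."""
    n = len(A)
    S = [[float(i == j) for j in range(n)] for i in range(n)]  # partial sum, starts at I
    T = [row[:] for row in S]                                  # current term A^k / k!
    for k in range(1, terms):
        T = [[t / k for t in row] for row in mat_mult(T, A)]
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return S

X = [[-1.0, 0.5], [0.2, -2.0]]   # c-stable: trace < 0, det > 0, offdiag >= 0
v = [[1.0, 2.0]]                 # row vector of coefficients (toy values)
U = [[1.0, 0.0], [0.0, 1.0]]     # second component of the pair (identity here)

def p(x):
    Xx = [[e * x for e in row] for row in X]
    return mat_mult(mat_mult(v, expm(Xx)), U)[0]

densities = [p(x) for x in (0.0, 1.0, 4.0)]
nonneg = all(c >= 0.0 for d in densities for c in d)
decays = sum(densities[2]) < sum(densities[1]) < sum(densities[0])
```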
\section{Componentwise stability} \label{sec:stability} In this section, we adapt the theory in \cite{NguP15} to prove that the computation of invariant pairs and matrix equation solutions with Cyclic Reduction (in the non-null-recurrent case) is componentwise stable, provided that one uses triplet representations and the GTH trick as described in Algorithm~\ref{algo:crt}. We define for each $k=0,1,2,\dots$ the 4-tuple of nonnegative matrices and vectors \begin{equation} \label{Qtuple} S_k := (A_k,-\offdiag(B_k),-\offdiag(\sad{B}_k), C_k), \end{equation} and we call $\mathcal{F}$ the map such that $S_{k+1}=\mathcal{F}(S_k)$, corresponding to one step of Cyclic Reduction computed with~\eqref{eq:cr}. When $S_k$ and $\widetilde{S}_k$ are two different 4-tuples in the form~\eqref{Qtuple} and $\alpha$ is a real number, we write for short $\abs{\widetilde{S}_k-S_k} \leq \alpha S_k$ to mean that the relation holds when we replace $S_k$ with each of the matrices and vectors in the 4-tuple, i.e., \begin{align*} \abs{\widetilde{A}_k-A_k} &\leq \alpha A_k,\\ \abs{\offdiag(\widetilde{B}_k)-\offdiag(B_k)} &\leq \alpha \abs{\offdiag(B_k)}, \\ \abs{\offdiag(\widetilde{\sad{B}}_k)-\offdiag(\sad{B}_k)} &\leq \alpha \abs{\offdiag(\sad{B}_k)}, \\ \abs{\widetilde{C}_k-C_k} &\leq \alpha C_k. \end{align*} \subsection{Componentwise perturbation bounds} We start by assessing the error incurred when starting from inaccurate initial values. We focus on first-order results, and adopt the notation $M \mathrel{\dot{\leq}} N$ to mean $M \leq N+O(\varepsilon^2)$. The key to this result is interpreting the iterates of Cyclic Reduction as the result of a censoring operation. The connection between Cyclic Reduction and censoring is a well-established result (see, for example, \cite[Section~7.3]{blm05}). The following lemma is one of the possible ways to formalize this connection.
\begin{lemma} \label{censorlemma} Consider the sequences obtained by Cyclic Reduction~\eqref{eq:cr}, and in addition the sequence $\happy{B}$ defined by $\happy{B}_0 = B_0$ and $\happy{B}_{k+1} =\happy{B}_k - A_k B_k^{-1}C_k$. The matrix \begin{equation} \label{smallmatrix} \begin{bmatrix} A_0+I-\sad{B}_k & C_k\\ A_k & I-\happy{B}_k+C_0 \end{bmatrix} \end{equation} is the result of censoring all blocks apart from the first and last from the $n(2^k+1)\times n(2^k+1)$ matrix \begin{equation} \label{bigmatrix} \begin{bmatrix} A_0 + I-B_0 & C_0\\ A_0 & I-B_0 & C_0\\ & \ddots & \ddots & \ddots\\ && A_0 & I-B_0 & C_0\\ &&& A_0 & I-B_0+C_0 \end{bmatrix}. \end{equation} \end{lemma} \begin{proof} We first censor the even-numbered blocks, obtaining \begin{multline*} \begin{bmatrix} A_0+I-B_0 \\ & I-B_0\\ & & \ddots\\ & & & I-B_0\\ & & & & I-B_0 + C_0\\ \end{bmatrix} \\ +\begin{bmatrix} C_0\\ A_0 & C_0\\ & A_0 & \ddots\\ & & \ddots & C_0\\ & & & A_0 \end{bmatrix} \begin{bmatrix} B_0\\ & B_0\\ &&\ddots \\ &&& B_0 \end{bmatrix}^{-1} \begin{bmatrix} A_0 & C_0\\ & A_0 & C_0\\ && \ddots & \ddots\\ &&&A_0 & C_0 \end{bmatrix}\\ = \begin{bmatrix} A_0 + I-\sad{B}_1 & C_1\\ A_1 & I-B_1 & C_1\\ &\ddots & \ddots & \ddots\\ & & A_1& I-B_1 & C_1\\ & & & A_1 & I-\happy{B}_1 + C_0\\ \end{bmatrix}. \end{multline*} We repeat the same process $k$ times in total, each time censoring the even-numbered blocks in the new matrix; after each step, we obtain a matrix with the same structure, smaller size, and the indices increased by 1. \end{proof} \begin{remark} If the elements in $\diag(B_0)$ are small enough that $I-B_0 \geq 0$, then the matrix in~\eqref{bigmatrix} is stochastic, and so is its censoring~\eqref{smallmatrix}. This gives an alternative proof of the relations \begin{align*} (A_k-B_k+C_k)\bs{1}=(A_0-\sad{B}_k+C_k)\bs{1}=(A_k-\happy{B}_k+C_0)\bs{1} = \bs{0}, \end{align*} which appeared in Theorem~\ref{thm:crconv} and Lemma~\ref{lem:secondcrtriplet}.
\end{remark} Once this lemma is set up, it is simple to prove the following perturbation bound. \begin{lemma} \label{pertboundlemma} Let $A,B,C\in \mathbb{R}^{n\times n}$ and $\widetilde{A},\widetilde{B},\widetilde{C}\in \mathbb{R}^{n\times n}$ be two different triples of matrices satisfying A4, A5, A6, such that \begin{subequations} \begin{align} \abs{\widetilde{A}-A} &\mathrel{\dot{\leq}} \varepsilon A, \\ \abs{\offdiag(\widetilde{B})-\offdiag(B)} &\mathrel{\dot{\leq}} \varepsilon \abs{\offdiag(B)}, \\ \abs{\widetilde{C}-C} &\mathrel{\dot{\leq}} \varepsilon C. \end{align} \end{subequations} Let $S_k$ and $\widetilde{S}_k$ be the 4-tuples resulting from applying $k$ steps of Cyclic Reduction~\eqref{eq:cr} starting from $A,B,C$ and $\widetilde{A},\widetilde{B},\widetilde{C}$, respectively. Then, \[ \abs{\widetilde{S}_k-S_k}\mathrel{\dot{\leq}} n2^k \varepsilon S_k. \] \end{lemma} \begin{proof} Up to a common scaling factor (which does not alter the statement of the theorem), we can assume that $I-B\geq 0$. Then, the matrices in~\eqref{smallmatrix} and~\eqref{bigmatrix} are stochastic, and we can apply~\cite[Lemma~7.3]{NguP15} to this censoring operation. In detail, we call $P$ the matrix in~\eqref{bigmatrix}, and $\widetilde{P}$ its equivalent built starting with the initial values with a tilde. We have for $i\neq j$ \[ \abs{(\widetilde{A}_0+I-\widetilde{B}_0)_{ij} - (A_0+I-B_0)_{ij}} \mathrel{\dot{\leq}} \abs{(\widetilde{A}_0-A_0)_{ij}} + \abs{(\widetilde{B}_0-B_0)_{ij}} \mathrel{\dot{\leq}} \varepsilon (A_0+I-B_0)_{ij}, \] and similarly for all other entries, so $\abs{\offdiag(\widetilde{P})-\offdiag(P)}\leq \varepsilon\offdiag(P)$. Thus, the first part of~\cite[Lemma~7.3]{NguP15} holds with $m = n(2^k-1)\leq n2^k$. 
This proves that \begin{align*} \abs{\widetilde{A}_k-A_k} &\mathrel{\dot{\leq}} n2^k \varepsilon A_k,\\ \abs{\widetilde{C}_k-C_k} &\mathrel{\dot{\leq}} n2^k \varepsilon C_k,\\ \intertext{and} \abs{\widetilde{D}_k-D_k} &\mathrel{\dot{\leq}} n2^k\varepsilon D_k,\\ \abs{\widetilde{E}_k-E_k} &\mathrel{\dot{\leq}} n2^k\varepsilon E_k, \end{align*} where \begin{align*} D_k & = B_0-\sad{B}_k = \sum_{j=0}^{k-1} C_jB_j^{-1}A_j, \quad E_k = B_0-\happy{B}_k = \sum_{j=0}^{k-1} A_jB_j^{-1}C_j, \end{align*} and equivalent definitions with the tilde symbols. The bounds \begin{align*} \abs{\offdiag(\widetilde{B}_k)-\offdiag(B_k)} &\mathrel{\dot{\leq}} n2^k\varepsilon \abs{\offdiag(B_k)},\\ \abs{\offdiag(\widetilde{\sad{B}}_k)-\offdiag(\sad{B}_k)} &\mathrel{\dot{\leq}} n2^k\varepsilon \abs{\offdiag(\sad{B}_k)} \end{align*} follow by noting that $B_k=B_0-D_k-E_k$, $\sad{B}_k=B_0-D_k$ and using \cite[Lemma~7.2 (i)]{NguP15}. \end{proof} \subsection{Stability of a CR step} We next investigate the stability of a step of Cyclic Reduction when performed in machine arithmetic. We rely once again on the lemmas on basic operations in~\cite[Section~7]{NguP15}, and we hide in $M\mathrel{\dot{\leq}} N$ terms which are second-order in $\mp$. \begin{lemma} \label{onecrstep} Let the $4$-tuple $S_k = (A_k,-\offdiag(B_k),-\offdiag(\sad{B}_k),C_k)$ consist of exactly-represented machine numbers. We denote by $S_{k+1} =\mathcal{F}(S_k)$ the result of performing one step of Cyclic Reduction on them, and by $\widetilde{S}_{k+1} = \widetilde{\mathcal{F}}(S_k)$ the result of performing one step of Cyclic Reduction computed in inexact machine arithmetic, starting from the same matrices. Then, \[ \abs{\widetilde{S}_{k+1}-S_{k+1}}\mathrel{\dot{\leq}} (\psi(n)+n+2)\mp S_{k+1}, \] where $n$ is the size of the involved matrices and $\psi(n)=\frac{2}{3}(2n+5)(n+2)(n+3)$ is the accuracy bound for the solution of a linear system with the GTH algorithm (as in~\cite[Theorem~4.1]{NguP15}).
\end{lemma} \begin{proof} We use, with a slight abuse of notation, the notation $c(X)$ to denote the computed approximation of a quantity $X$ along one step of the algorithm (even though it is not, strictly speaking, a function of $X$ only). Using \cite[Lemma~7.9]{NguP15} with $a=b=0$, we obtain that the computed value $c(B_k^{-1}A_k)$ of $B_k^{-1}A_k$ satisfies \[ \abs{c(B_k^{-1}A_k)-B_k^{-1}A_k} \mathrel{\dot{\leq}} \psi(n)\mp B_k^{-1}A_k. \] Hence the computed values of $A_{k+1}=A_kB_k^{-1}A_k$ and $C_{k}B_k^{-1}A_k$ satisfy (by \cite[Lemma~7.8]{NguP15}) \begin{align*} \abs{c(A_{k+1})-A_{k+1}} &\mathrel{\dot{\leq}} (\psi(n)+n)\mp A_{k+1},\\ \abs{c(C_{k}B_k^{-1}A_k)-C_{k}B_k^{-1}A_k} &\mathrel{\dot{\leq}} (\psi(n)+n)\mp C_{k}B_k^{-1}A_k. \end{align*} Analogously, we have \begin{align*} \abs{c(C_{k+1})-C_{k+1}} &\mathrel{\dot{\leq}} (\psi(n)+n)\mp C_{k+1},\\ \abs{c(A_{k}B_k^{-1}C_k)-A_{k}B_k^{-1}C_k} &\mathrel{\dot{\leq}} (\psi(n)+n)\mp A_{k}B_k^{-1}C_k. \end{align*} Using again \cite[Lemma~7.8]{NguP15} for the additions, we have then \begin{align*} \abs{c(\offdiag(B_{k+1}))-\offdiag(B_{k+1})} &\mathrel{\dot{\leq}} (\psi(n)+n+2)\mp \abs{\offdiag(B_{k+1})},\\ \abs{c(\offdiag(\sad{B}_{k+1}))-\offdiag(\sad{B}_{k+1})} &\mathrel{\dot{\leq}} (\psi(n)+n+1)\mp \abs{\offdiag(\sad{B}_{k+1})}. \end{align*} \end{proof} \subsection{Stability of multiple CR steps} We can now address multiple steps of Cyclic Reduction. The proof here follows~\cite[Theorem~7.12]{NguP15}. \begin{lemma} Let $A,B,C\in\mathbb{R}^{n\times n}$ be three matrices satisfying Assumptions A4, A5, A6, and such that $A,C$ and $\offdiag(B)$ are exactly-represented machine numbers. Denote by $S_k=\mathcal{F}^k(S_0)$ the result of performing $k$ steps of Cyclic Reduction starting from $S_0=(A,-\offdiag(B),-\offdiag(B),C)$, and by $\widetilde{S}_k=\widetilde{\mathcal{F}}^k(S_0)$ the result of $k$ steps of Cyclic Reduction performed in inexact machine arithmetic.
Then, \begin{equation} \label{crstabequation} \abs{\widetilde{S}_k-S_k} \leq n2^k(\psi(n)+n+2)\mp S_k. \end{equation} \end{lemma} \begin{proof} We prove the result by induction on $k$; the base case ($k=1$) is Lemma~\ref{onecrstep}. The following manipulation is a formal version of the statement that when considering first-order error bounds we can add up the local errors at the different steps of the algorithm. Consider the telescopic sum \begin{equation} \label{telesum} \abs{\widetilde{S}_k-S_k} \leq \sum_{h=1}^k \abs{\mathcal{F}^{h-1}\widetilde{\mathcal{F}}(\widetilde{S}_{k-h}) - \mathcal{F}^{h-1}\mathcal{F}(\widetilde{S}_{k-h})}. \end{equation} By Lemma~\ref{onecrstep}, we have \[ \abs{\widetilde{\mathcal{F}}(\widetilde{S}_{k-h})-\mathcal{F}(\widetilde{S}_{k-h})} \mathrel{\dot{\leq}} (\psi(n)+n+2)\mp \mathcal{F}(\widetilde{S}_{k-h}). \] Then by Lemma~\ref{pertboundlemma} used with $\varepsilon=(\psi(n)+n+2)\mp$, we get \begin{align*} \abs{\mathcal{F}^{h-1}\widetilde{\mathcal{F}}(\widetilde{S}_{k-h}) - \mathcal{F}^{h-1}\mathcal{F}(\widetilde{S}_{k-h})} &\mathrel{\dot{\leq}} n2^{h-1}(\psi(n)+n+2)\mp \mathcal{F}^{h-1}\mathcal{F}(\widetilde{S}_{k-h}) \\ &\mathrel{\dot{\leq}} n2^{h-1}(\psi(n)+n+2)\mp \mathcal{F}^h(S_{k-h}) \\ &= n2^{h-1}(\psi(n)+n+2)\mp S_{k}. \end{align*} Passing from the first to the second row, we have replaced $\widetilde{S}_{k-h}$ with $S_{k-h}$; this is possible because they differ by a term of order $O(\mp)$ by inductive hypothesis. Inserting this inequality into~\eqref{telesum}, we get \[ \abs{\widetilde{S}_k-S_k} \mathrel{\dot{\leq}} \sum_{h=1}^k n2^{h-1}(\psi(n)+n+2)\mp S_{k} < n2^k(\psi(n)+n+2)\mp S_{k}. \] \end{proof} \subsection{Putting everything together} The previous sections show that the CR iteration (Algorithm~\ref{algo:crt}) is componentwise stable.
The computation of its initial values starting from $V,D,Q$ can be performed with~\eqref{ABC}, \eqref{eq:tildeABC}, or~\eqref{SDAlikeABC}; in all three cases, if~\eqref{eq:hconstraints} holds for each $i\in E_1\cup E_2$, then we obtain an approximation $\widetilde{S}_0$ of the CR initial values satisfying $\abs{\widetilde{S}_0-S_0}\leq \alpha \mp S_0$ for a moderate constant $\alpha$, and thus by Lemma~\ref{pertboundlemma} the computed iterates are also componentwise accurate. Once a sufficient number $k$ of steps is performed to achieve convergence, we compute the invariant pair $(X,U)$ as described in Section~\ref{sec:deflatingzeros}. The computed iterates satisfy~\eqref{crstabequation}, and similarly the computed approximation $\widetilde{\sad{\bs{v}}}_k$ of $\sad{\bs{v}}=\bs{u}A_k$ satisfies \[ \abs{\widetilde{\sad{\bs{v}}}_k-\sad{\bs{v}}_k} = \abs{\bs{u}\widetilde{A}_k- \bs{u}A_k} \mathrel{\dot{\leq}} n2^k(\psi(n)+n+2)\mp \sad{\bs{v}}_k. \] The rest of the computation only involves subtraction-free formulas: we have described in Sections~\ref{sec:deflatingzeros}, \ref{sec:tripletX2}, and~\ref{sec:solvingODE} how to get from the matrix $R$ computed by CR (and $\sad{B}_{\infty}$ and $\sad{\bs{v}}$) to the stable invariant pair of $P(z)$, its triplet representation, and the quantities needed to compute $\bs{p}(x)$. \section{Numerical experiments} \label{sec:numerical} We compare the following methods. \begin{description} \item[KK] The algorithm in~\cite{kk95}, based on explicit computation of eigenvalues and eigenvectors of a linearizing matrix which is obtained (essentially) by deflating the infinite eigenvalues from the linearizing matrix polynomial \begin{equation} \label{linearization} \mathcal{A} - z\mathcal{E} = \begin{bmatrix} D & -T^\top\\ I_n & 0 \end{bmatrix} - z \begin{bmatrix} V & 0\\ 0 & I_n \end{bmatrix}.
\end{equation} The main drawback of this method is that, by explicitly computing an eigenvalue decomposition, we expect error amplification by the condition number of the eigenvector matrix. \item[AS] The algorithm in~\cite{AgaS}, based on computing the sign function of the pencil~\eqref{linearization} using the Newton-like iteration $\mathcal{A}_{k+1} = \frac{1}{2}\left( \mathcal{A}_k + \mathcal{E}\mathcal{A}_k^{-1}\mathcal{E}\right)$, and using it to separate the infinite, stable and unstable eigenvalues into different blocks. In principle, this algorithm goes in the right direction to get better numerical properties; in practice, unfortunately, our implementation of this algorithm was negatively affected by convergence issues in this iteration. In particular, it seems that the pencil $\mathcal{A}_k-z\mathcal{E}$ converges to a singular pencil whenever $\diag(V)$ has zero entries, so the inversion $\mathcal{A}_k^{-1}$ becomes increasingly ill-conditioned. This complicates the choice of a stopping criterion. \item[QZ] An algorithm similar to AS, but in which the stable subspace is computed using a permuted QZ decomposition~\cite{Kag92} of~\eqref{linearization} (MATLAB's \texttt{ordqz}). While we could not find an explicit reference in the applied probability literature for the use of this method in the context of Markov-modulated Brownian motion, it is the method of choice for problems of this kind in the numerical linear algebra community \cite{BetK}. The QZ decomposition is normwise backward stable, so we expect excellent normwise stability properties. \item[LN] The algorithm in~\cite{LatN}, which is based on Cyclic Reduction without the use of triplet representations, or of any particular method to preserve positivity. The discretizing transformation is the Cayley transform with $h=1$, $y = (z+1)/(z-1)$. This algorithm can solve only problems with $\diag(V)>0$.
\item[NP] Algorithm~\ref{algo:npt} as described here, from which we expect componentwise accuracy. \end{description} We apply these algorithms to several test problems. \begin{description} \item[NP15] A modification of \cite[Example~5.1]{NguP15}, a problem in which there is an imbalance of several orders of magnitude between the components of the solution. We take $T$ and $D$ as in that problem, and add a Brownian motion component with $V=I$. \item[NP15s] The same problem as NP15, but with $V(n,n)=0$, to obtain a problem with singular $V$. \item[rand($n$)] Randomly generated problems of different sizes $n = 8,20,50$. The matrices are generated with the MATLAB commands \begin{verbatim} V = diag(abs(randn(n, 1))); D = diag(randn(n, 1)); T = abs(randn(n)); T = T - diag(T * ones(n, 1)); \end{verbatim} \item[rand($n$)s] The same as rand($n$), but with a matrix $V$ containing four zero diagonal entries: \texttt{V = blkdiag(diag(abs(randn(n-4, 1))), zeros(4))}. \item[imb($n$),imb($n$)s] Defined as rand($n$) and rand($n$)s, but all the calls of the form \texttt{randn(h,k)} are replaced by a different procedure that generates numbers spanning different orders of magnitude: \texttt{randn(h, k) .* exp(5 * randn(h, k))}. \end{description} To improve reproducibility without generating the same numbers repeatedly, we have reset the random number seed once before the complete set of experiments. As a first error measure, we have considered the residual in the Euclidean norm \begin{equation} \label{relresidual} \frac{\norm{X^2UV - XUD + UQ}}{\norm{U}(\norm{V}+\norm{D}+\norm{Q})} \end{equation} of the left stable invariant pair $(X,U)$ as produced by the algorithms. The values of this residual are in Table~\ref{table:residuals}.
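For readers without MATLAB, the rand($n$) construction can be mirrored as follows (our own Python sketch; the seed and names are ours, not those used in the experiments). The final checks verify the generator property $T\bs{1}=\bs{0}$ with nonnegative off-diagonal entries.

```python
# Python port of the MATLAB generator of the rand(n) test problems.
import random

random.seed(0)  # reset the seed once, as done before the experiments

def rand_problem(n):
    V = [abs(random.gauss(0, 1)) for _ in range(n)]             # diag(V) >= 0
    D = [random.gauss(0, 1) for _ in range(n)]                  # diag(D), any sign
    T = [[abs(random.gauss(0, 1)) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        T[i][i] -= sum(T[i])                                    # T = T - diag(T * ones(n, 1))
    return V, D, T

V, D, T = rand_problem(8)
row_sums = [sum(row) for row in T]
offdiag_ok = all(T[i][j] >= 0 for i in range(8) for j in range(8) if i != j)
```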
Moreover, we have normalized each invariant pair to be in the form~\eqref{ivaU} with a similarity transformation~\eqref{eq:qtrans}, and checked the forward errors \begin{equation} \label{forwarderror} \frac{\norm{X-X_{\mathrm{exact}}}}{\norm{X_{\mathrm{exact}}}}, \quad \frac{\norm{\Psi-\Psi_{\mathrm{exact}}}}{\norm{\Psi_{\mathrm{exact}}}}, \end{equation} where the reference values $X_\mathrm{exact}$ and $\Psi_\mathrm{exact}$ are computed by applying method KK with higher precision arithmetic (32 digits, using MATLAB's \texttt{vpa} command). The results are in Tables~\ref{table:Xrel} and~\ref{table:Psirel}. Note that in the problems with $V>0$, we have $E_3=\varnothing$, and hence the matrix $\Psi$ is empty and computing the error does not make sense. For this reason, Table~\ref{table:Psirel} does not contain all the experiments. \begin{table} \centering \begin{tabular}{rccccc} \toprule Problem & KK & AS & LN & QZ & NP \\ \midrule NP15 & 1.5e-15 & 5.7e-08 & 3.7e-15 & 9.9e-16 & 3.8e-16 \\ NP15s & 5.0e-16 & 5.0e-08 & - & 7.7e-16 & 2.3e-16 \\ rand8 & 1.5e-15 & 9.4e-16 & 3.6e-15 & 9.9e-16 & 1.1e-15 \\ rand8s & 6.6e-15 & 2.8e-11 & - & 5.5e-15 & 2.6e-15 \\ rand20 & 1.9e-15 & 5.6e-14 & 1.8e-14 & 1.8e-15 & 7.3e-16 \\ rand20s & 2.8e-15 & 5.8e-12 & - & 2.0e-14 & 1.3e-14 \\ rand50 & 2.3e-15 & 3.0e-14 & 3.4e-13 & 1.5e-14 & 5.9e-15 \\ rand50s & 7.1e-14 & 1.2e-08 & - & 1.3e-14 & 1.7e-14 \\ imb8 & 1.2e-08 & 8.6e-05 & 4.5e+04 & 4.2e-11 & 7.4e-09 \\ imb8s & 1.5e-13 & 4.2e-10 & - & 2.3e-14 & 2.3e-13 \\ imb20 & 2.2e-15 & 4.8e-06 & 9.7e-01 & 2.7e-14 & 4.9e-13 \\ imb20s & 1.2e-09 & 8.0e+05 & - & 2.6e-13 & 1.9e-13 \\ imb50 & 8.7e-14 & 7.5e-06 & 3.3e+01 & 3.3e-13 & 1.3e-10 \\ imb50s & 3.1e-04 & 3.2e+11 & - & 2.5e-05 & 2.0e-08 \\ \bottomrule \end{tabular} \caption{Relative residual~\eqref{relresidual}.} \label{table:residuals} \end{table} \begin{table} \centering \begin{tabular}{rccccc} \toprule Problem & KK & AS & LN & QZ & NP \\ \midrule NP15 & 2.7e-12 & 2.5e-07 & 2.9e-13 & 1.8e-12 &
1.7e-16 \\ NP15s & 1.3e-12 & 2.2e-07 & - & 6.2e-13 & 1.8e-16 \\ rand8 & 2.8e-15 & 1.5e-15 & 1.6e-15 & 2.4e-15 & 2.7e-16 \\ rand8s & 2.9e-15 & 1.8e-13 & - & 2.3e-15 & 3.1e-16 \\ rand20 & 4.4e-15 & 9.6e-14 & 5.6e-15 & 4.8e-15 & 3.0e-16 \\ rand20s & 3.2e-15 & 3.0e-12 & - & 4.1e-14 & 1.1e-15 \\ rand50 & 5.9e-15 & 4.0e-14 & 4.0e-14 & 5.6e-14 & 6.9e-16 \\ rand50s & 5.6e-14 & 1.2e-10 & - & 3.5e-14 & 5.2e-16 \\ imb8 & 9.7e-12 & 1.9e-09 & 1.1e+00 & 7.1e-13 & 9.0e-13 \\ imb8s & 2.6e-14 & 1.3e-08 & - & 1.3e-12 & 1.1e-15 \\ imb20 & 4.6e-11 & 2.1e-07 & 3.2e-04 & 1.1e-09 & 9.1e-12 \\ imb20s & 4.4e-12 & 6.9e-06 & - & 5.9e-12 & 4.0e-13 \\ imb50 & 2.0e-10 & 9.8e-06 & 7.2e-01 & 1.0e-08 & 8.3e-10 \\ imb50s & 2.0e-10 & 3.3e-05 & - & 1.0e+00 & 2.6e-13 \\ \bottomrule \end{tabular} \caption{Forward error~\eqref{forwarderror} on $X$.} \label{table:Xrel} \end{table} \begin{table} \centering \begin{tabular}{rccccc} \toprule Problem & KK & AS & LN & QZ & NP \\ \midrule NP15s & 2.3e-15 & 1.8e-11 & - & 2.8e-15 & 1.3e-16 \\ rand8s & 1.2e-14 & 3.7e-13 & - & 2.4e-15 & 2.5e-15 \\ rand20s & 7.1e-15 & 7.7e-11 & - & 6.7e-14 & 2.1e-15 \\ rand50s & 3.4e-14 & 3.5e-09 & - & 5.3e-14 & 4.7e-16 \\ imb8s & 8.3e-15 & 5.2e-09 & - & 1.1e-11 & 5.2e-15 \\ imb20s & 1.4e-10 & 1.9e-08 & - & 2.8e-11 & 4.0e-11 \\ imb50s & 6.9e-11 & 9.0e-09 & - & 1.0e-04 & 6.1e-08 \\ \bottomrule \end{tabular} \caption{Forward error~\eqref{forwarderror} on $\Psi$.} \label{table:Psirel} \end{table} As one can see, the results obtained by the new algorithm are very satisfactory, especially in terms of accuracy of the computed $X$ and $\Psi$ (which are often the quantities of interest in view of their physical interpretation). The relative residual of the computed invariant pair, however, is sometimes slightly higher than the one obtained with the QZ method.
\section{Conclusions} \label{sec:conclusions} We have described a subtraction-free algorithm to compute the quantities needed to determine the steady-state behavior of Markov-modulated Brownian motion models in a componentwise accurate fashion. The algorithm extends the one described in \cite{NguP15} for the linear case $V=0$, and is based on a componentwise accurate variant of Cyclic Reduction. A componentwise error analysis of this CR algorithm is provided. Our analysis highlights the role of the spectral transformation which converts continuous-time to discrete-time stability. Another interesting result is the use of a transformation related to index reduction for differential-algebraic equations and to the shift technique in this novel context. \bibliographystyle{alpha}
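For orientation, the basic Cyclic Reduction recursion underlying the method can be sketched as follows for the quadratic equation $a x^2 + b x + c = 0$ (scalar case for brevity). This is the textbook recursion, not the subtraction-free, componentwise-accurate variant developed above:

```python
def cr_minimal_root(a, b, c, iters=10):
    """Cyclic Reduction for the quadratic equation a*x**2 + b*x + c = 0.

    Scalar sketch of the textbook recursion (in the matrix case the
    divisions become linear solves, and x = -Bhat^{-1} C0).  Converges
    quadratically to the root of smallest modulus; a production code
    would rescale the coefficients to avoid over/underflow.
    """
    c0 = c                      # constant coefficient of the original equation
    bh = b                      # "hat" sequence accumulating the solution
    for _ in range(iters):
        binv = 1.0 / b
        b_next = b - a * binv * c - c * binv * a
        bh -= a * binv * c
        a, c = -a * binv * a, -c * binv * c
        b = b_next
        if a == 0.0:            # corrections have vanished: converged
            break
    return -c0 / bh

# x^2 - 3x + 2 = (x - 1)(x - 2): CR picks out the minimal root x = 1
x = cr_minimal_root(1.0, -3.0, 2.0)
```

The subtractions in the update of `b` and `bh` are precisely the operations that the componentwise-accurate variant reformulates.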
\section{The Model} \label{sect_model} The model \cite{Knochel:2008ks} is based on the Higgsless models proposed in \cite{Csaki:2003dt,Cacciapaglia:2004rb}. They are constructed in 5D using a warped background inspired by the RSI scenario \cite{Randall:1999ee}, $ g_{\mu\nu}=\eta_{\mu\nu}e^{-2 R k y},\quad g_{55}=-R^2,\quad y \in [0,\pi]$. The 5D gauge group $G_{bulk}=SU(3)_C\times SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ is broken by boundary conditions to $G_{SM}=SU(3)_C\times SU(2)_L\times U(1)_Y$ on the UV brane and to $SU(3)_C\times SU(2)_D\times U(1)_{B-L}$ on the IR brane, so that only $SU(3)_C\times U(1)_{EM}$ survives as a conserved subgroup. There are at least eight real supercharges in 5D, corresponding to 4D $\mathcal{N}=2$ SUSY, but half of these symmetries are already broken by the background, leaving us with the usual $\mathcal{N}=1$ SUSY after Kaluza-Klein expansion. The action of 5D SYM theory broken by a warped background can be written down using the $\mathcal{N}=1$ superfields $A_\mu, \lambda_1,D \in V$ (vector) and $A_5,\Sigma,\lambda_2,F_V \in \chi$ (chiral), while the hypermultiplet is expressed in terms of one chiral and one antichiral superfield $H, \overline{H}^c$. The complete bulk superfield content is given in \tabref{fig_sfields}. \begin{table}[b] \caption{The superfield content of the model and the corresponding representations and quantum numbers with respect to the bulk gauge group $G_{bulk}=SU(3)_C\times SU(2)_L \times SU(2)_R\times U(1)_{B-L}$. \label{fig_sfields}}\vspace{0.4cm} \begin{center} \begin{tabular}{|l|l|l|l|} \hline Superfield& Rep. $G_{bulk}$& Superfield & Rep. 
$G_{bulk}$\\ \hline\hline $V^{Ca}$, $\chi^{Ca}$ &${\bf 8}$ of $SU(3)_C$ & $V^{Li}$, $\chi^{Li}$ &$\bf 3$ of $SU(2)_L$ \\ $V^{Ri}$, $\chi^{Ri}$ & $\bf 3$ of $SU(2)_R$ & $V^X$, $\chi^X$ & $U(1)_{B-L}$ \\[0ex] $H^{L}_{l,g}$&$(\mathbf{1},\mathbf{2},\mathbf{1},-1)$ & $H^{R}_{l,g}$&$(\mathbf{1},\mathbf{1},\mathbf{2},-1)$ \\ $H^{Lc}_{l,g}$&$(\mathbf{1},\mathbf{\overline{2}},\mathbf{1},1)$ & $H^{Rc}_{l,g}$&$(\mathbf{1},\mathbf{1},\mathbf{\overline{2}},1)$\\[0ex] $H^{L}_{q,g}$&$(\mathbf{3},\mathbf{2},\mathbf{1},1/3)$ & $H^{R}_{q,g}$&$(\mathbf{3},\mathbf{1},\mathbf{2},1/3)$ \\ $H^{Lc}_{q,g}$&$(\mathbf{\overline{3}},\mathbf{\overline{2}},\mathbf{1},-1/3)$ & $H^{Rc}_{q,g}$&$(\mathbf{\overline{3}},\mathbf{1},\mathbf{\overline{2}},-1/3)$\\ \hline \end{tabular} \end{center} \end{table} To obtain the spectrum of the model, we still have to assign boundary conditions. The IR brane (i.\,e.~$y=\pi$) boundary conditions are a straightforward generalization of the nonsupersymmetric boundary conditions \cite{Knochel:2008ks}, \begin{subequations} \label{superbcs} \begin{align} \begin{bmatrix} 1 & -1 \\ \partial_y & \partial_y \end{bmatrix} \left.\begin{bmatrix} V^L \\ V^R \end{bmatrix}\right|_{y=\pi} = \left.\begin{bmatrix} \partial_y & -\partial_y \\ 1 & 1 \end{bmatrix} e^{-2 R k y} \begin{bmatrix} \chi^L \\ \chi^R \end{bmatrix}\right|_{y=\pi} &= 0\,,\\ \partial_y V^X(\pi) = \chi^X(\pi) = \partial_y V^C(\pi) = \chi^C(\pi) &= 0,\\ H^{Lc}_{g}(\pi)+\mu_g H^{Rc}_{g}(\pi)=H^R_{g}(\pi)-\mu_g H^L_{g}(\pi)&=0, \end{align} \end{subequations} where $g=l_1\dots l_3, q_1\dots q_3$ runs over all lepton and quark generations, and $\mu_g \Lambda_{IR}$ is the IR Dirac boundary mass parameter of the $g$th doublet. In contrast, the UV brane does not carry SUSY, which means that boundary conditions can differ within 4D multiplets. 
The physical scalars $h_f$, $h^c_f$, $\Sigma^i$ thus get universal Dirichlet conditions, while the gauginos receive twisted and mixing boundary conditions, \begin{subequations} \begin{align} h_f(0)=h^c_f(0)=\Sigma^i(0)=\lambda_1^C(0)=\lambda^{L}_1(0) = \lambda^{R12}_2(0) =0,\\ \cos(\theta_N) \lambda^X_1+\sin(\theta_N) \lambda^{R3}_1 = \cos(\theta_N) \lambda^{R3}_2-\sin(\theta_N) \lambda^X_2 = 0. \end{align} \end{subequations} While the parameter space of the model is still rather large, there are several reasonable assumptions that one can make. First of all, we impose (tree-level) degeneracy of the pairs of electroweak gaugino modes, which is lifted only at the loop level (no Majorana masses on the UV brane). Furthermore, the splitting of the $W$ and $\chi^+$ raises the KK scale and is therefore assumed to be small (with the lightest charginos just above the experimental lower bounds). The neutralino mixing angle $\theta_N$ is then fixed by the relic density \cite{Knochel:2008ks}. The localization of the matter hypermultiplets is controlled by the multiplet bulk mass $c=M_5/k$. The localization of the light quarks is largely constrained by the S parameter to be around $c_L \approx 0.5$, which also suppresses the coupling to the heavy resonances. The third generation is naturally IR localized to generate the heavy top. While this basically fixes the properties which are relevant to the LSP production processes discussed later, there remains some freedom in these localization parameters, which strongly impacts LHC phenomenology. Exactly delocalized light quarks ($c_L=-c_R=1/2$) have vanishing couplings to KK gluons, while a small deviation from delocalization introduces nonzero couplings. At the same time, localized quark kinetic terms, as used to split the doublets, introduce a localization effect on the UV brane which also shifts these effective couplings to a nonzero value. 
Depending on the exact choices, the production of KK gluons is irrelevant or observable in our study of LSP production at the LHC. Our minimal implementation of the third generation, though not addressing the $Zbb$ problem, provides a simple way to study the phenomenology of the $t$ and $b$ in LSP production for different scenarios, from strongly IR localized fields to the almost delocalized case. The introduction of a UV localized kinetic term for the quarks shifts the effective localization. While the localization of the third generation lets the mass of the first quark KK modes vary between extremely light (for almost delocalized third generation fields requiring large IR Dirac masses) and heavy ($\approx 3 k e^{- R k \pi}$), the masses of the lightest $\tilde{t}$ and $\tilde{b}$ modes stay below $2 k e^{- R k \pi}\approx 1100$ GeV and can thus be pair produced at LHC energies with appreciable cross sections. \section{Production of Missing Energy and Heavy Quarks} We concentrate on a set of final states which is particularly favored in the model considered in this work: the production of third generation quarks in association with missing energy. In our scenario, the first stop mode $\tilde{t}$ is in a convenient mass range: it is still light enough to be pair produced copiously at the LHC at 14 TeV, and at the same time heavy enough ($m_{\tilde{t}}-m_{\chi}-m_t\gtrsim 400$ GeV in all but the extreme cases) to produce a strong missing energy signal from the decay. Such a situation has been discussed in a generic way in \cite{Han:2008gy}. The analysis carried out by those authors is valid for our $\tilde{t}$ pair production contributions, but this is only one of the contributions to this class of final states in our model, where the production of heavy quark and gluon resonances proves to be important as well. 
Due to the size of the model, we rely on simulations with four-particle final states, which means that we do not consider the possible decay modes of the $t$ (hadronic, semileptonic, leptonic). We consider a set of points in parameter space representing different localizations of light and heavy quarks (\tabref{tab:Pn}), while the IR scale is assumed to be $\Lambda_{IR}\approx 620$ GeV and $m_{\chi0}=88$ GeV, $m_{\chi+}=103$ GeV. To proceed, we define a number of cuts \begin{table} \caption{ Points in bulk mass parameter space of the first and second ($c_{1,2}$) and third ($c_3$) generation of quarks.\label{tab:Pn}\label{tabcuts}}\vspace{0.4cm} \begin{center} \begin{tabular}{|c|c|c|c|c|}\hline Bulk Mass&P1{}&P2{}&P3{}&P4{}\\ \hline\hline $c_{L1,2}$ &0.48&0.48&1/2&0.48 \\ \hline $c_{R1,2}$ &-0.48&-0.48&-1/2&-0.48 \\ \hline $c_{L3}$ &1/3&0.4&0.4& 0.2\\ \hline $c_{R3}$ &-0.4&-1/3&-1/3& -0.2\\ \hline \end{tabular} \end{center} \end{table} in addition to the standard cuts $M(q,\overline{q})\in[10,\infty]$; $M(\mbox{parton},q),M(\mbox{parton},\overline{q})\in[-\infty,-10]$; $E(\mbox{parton})>20$; $\eta(q),\eta(\overline{q}) \in [-5,5]$ to further suppress backgrounds. The ones used here are $P_T{(q)},P_T{(\overline q)}>100 \mbox{ GeV}$ (II.1) and $P_T{(q)},P_T{(\overline q)}>300 \mbox{ GeV}$ (II.2). When judging the results for the missing energy signal with $t$ and $b$ quarks in the final state, one therefore has to keep the following points in mind: $t$ pair production itself does not introduce $\mslash{P}_T$, but the leptonic and semileptonic decay modes contain neutrinos, and considering the relative strength of $t$ pair production, this can constitute an important background. In addition, there are SM processes which have final states distinguishable from our signal only by their kinematics, for example $pp\rightarrow b\overline{b}\nu l jj$ \cite{Han:2008gy}. 
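As a toy illustration of how the transverse-momentum and pseudorapidity requirements above act on parton-level momenta, one may sketch the acceptance as follows (function names and momenta are invented for illustration and are not part of the actual simulation chain):

```python
import math

def pt(px, py):
    """Transverse momentum of a parton from its momentum components."""
    return math.hypot(px, py)

def eta(px, py, pz):
    """Pseudorapidity of a massless parton."""
    p = math.sqrt(px * px + py * py + pz * pz)
    return 0.5 * math.log((p + pz) / (p - pz))

def passes_cut_II2(quark, antiquark, pt_min=300.0, eta_max=5.0):
    """Cut II.2: P_T(q), P_T(qbar) > 300 GeV, with |eta| < 5 for both."""
    for px, py, pz in (quark, antiquark):
        if pt(px, py) <= pt_min or abs(eta(px, py, pz)) >= eta_max:
            return False
    return True

# toy partons (px, py, pz) in GeV: a hard pair passing II.2, a soft pair failing it
hard = ((350.0, 10.0, 40.0), (-340.0, -20.0, -60.0))
soft = ((120.0, 5.0, 30.0), (-110.0, -8.0, -20.0))
```

Cut II.1 corresponds to the same function with `pt_min=100.0`.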
While the analysis of these contributions is beyond the scope of this work, there is a generic way to suppress these backgrounds, since we can fortunately afford to place rather strong $P_T$ and $\Delta \phi$ cuts without losing too much of our signal. Some results for the production of missing energy in association with top quarks are shown in \figref{missingptsummary}. At the $2\rightarrow 4$ particle level we compare it to the production of missing energy via neutrinos ($pp\rightarrow \nu\overline{\nu}t\overline{t}$), which is the main SM background assuming perfect top quark reconstruction. The chargino NLSP in the model discussed here has a mass barely above the current lower bound. As a consequence, it is very narrow ($\Gamma\approx 10^{-7}$ GeV), decaying through an off-shell $W$ as $\chi^+ \rightarrow f \overline{f} \chi^0$. Since $\Delta m \approx 15$ GeV, NLSP production should be visible as missing energy in association with leptons or rather soft jets. As a first approximation, the total transverse momentum of the NLSP pair is shown for cuts II.2 (\figref{fig_chapt}). The production of $b$ pairs in association with missing energy turns out to be less well suited for this analysis. The SM background in this case is very large, and can only be reduced by severe cuts on transverse momenta, effectively drowning the $pp\rightarrow \chi^0\chi^0 b \overline{b}$ signal. To conclude, models such as the Higgsless supersymmetric scenario investigated in this work provide an interesting alternative way in which electroweak symmetry breaking and dark matter phenomenology can be linked. The discovery of such LSP dark matter candidates at the LHC via production of missing energy in association with top quarks seems promising in large parts of the parameter space. However, it is important to go beyond four-particle final states to make more precise statements about the observability of LSP production in this context at the LHC. 
\begin{figure} \begin{center} \hspace{-3ex}\includegraphics[width=7cm]{logo_cutII1neutstack}\hspace{-1ex}\includegraphics[width=7cm]{logo_cutII2neutstack}\\ \hspace{-3ex}\includegraphics[width=7cm]{logo_cutII2chatstack}\hspace{-1ex}\includegraphics[width=7cm]{logo_cutII2chatstackE1}\\ \hspace{-3ex}\includegraphics[width=7cm]{logo_cutII2chabstack}\hspace{-1ex}\includegraphics[width=7cm]{logo_cutII2chabstackE1} \end{center} \caption{ Missing energy from LSP and neutrino production in association with top pairs for different quark localizations and cuts on invariant masses and azimuthal angle. The line marked SM shows the missing energy in $pp\rightarrow \nu \overline{\nu} t \overline{t}$. Below are total transverse momenta and boosts of charginos produced in association with top and bottom pairs for different quark localizations after cuts II.2. The total $P_T$ is shown as an approximation to $\mslash{P}_T$ which will have further contributions from the decay products (all MSTW08).\label{fig_chapt}\label{missingptsummary}} \end{figure} \section*{Acknowledgments} This research was supported by Deutsche Forschungsgemeinschaft through the Research Training Group 1147 \textit{Theoretical Astrophysics and Particle Physics}, and by Bundesministerium f\"ur Bildung und Forschung Germany, grant 05H4\-WWA/2. \section*{References}
\section{1. Introduction} It is well known that the Maxwell equations can be generalized in a non-linear way, by adding to the Lagrangian higher powers of the invariants constructed from the electromagnetic field. Well-known examples are the corrections due to quantum electrodynamics that were proposed by Heisenberg and Euler [1], the highly non-linear Born-Infeld Lagrangian [2], and their generalizations by Pleba\'nski [3]. These generalizations still yield second order field equations, but can give rise to solutions with regular electric or magnetic field [3]. Nonminimal coupling of the Maxwell equations to the gravitational field is instead more difficult, if one requires that the field equations remain second order and linear in the second derivatives of the electromagnetic potential and of the metric tensor. This problem has been studied in general in [4]. It is notable that a simple example of a model obeying this property can be obtained by dimensional reduction of a Kaluza-Klein (KK) theory containing a Gauss-Bonnet (GB) contribution [5-8]. We recall that KK theories [9,10] provide a unification of general relativity with electromagnetism based on the assumption that spacetime is five-dimensional and the fifth dimension is not observable because it is curled up into an extremely small circle. However, in higher dimensions the Einstein-Hilbert Lagrangian is not unique, and one may add to it a GB term, which would not be effective in four dimensions, since in that case it reduces to a total derivative. GB terms were shown in [11] to give the most general corrections to higher-dimensional gravity leading to second order field equations and compatible with some natural assumptions. 
The introduction of this term in the five-dimensional Lagrangian makes it possible to obtain, by dimensional reduction to four dimensions, a model whose predictions differ from those of the Einstein-Maxwell (EM) theory, giving rise to the possibility of indirect evidence for the existence of a fifth dimension. In fact, the dimensionally reduced theory contains corrections to the EM coupling that are of the kind discussed in [4]. Moreover, they provide nonlinear modifications of the pure electromagnetic Lagrangian, which give rise to corrections to standard electrodynamics [6]. Although these corrections can be considered as a special case of Pleba\'nski's nonlinear electrodynamics [3], and in particular of its simplified version proposed in [12], their properties are rather peculiar, due to the particular combination of coefficients in the Lagrangian. For example, the purely electric or magnetic solutions of the Maxwell equations are not modified. It follows that, although regular solutions can be obtained for more general quadratic electrodynamics coupled to gravity [13-15], this is not the case for this model. These facts are particularly relevant in relation with uniqueness and no-hair theorems for black holes. These theorems state that the only spherically symmetric asymptotically flat solution of the EM theory is the RN metric [16]. However, if nonlinear electromagnetic terms, like those of Born-Infeld [17] or Pleba\'nski [13-14], or extra fields with nonminimal coupling, like the dilaton [18-19], are added to the standard EM action, the theory will exhibit different solutions. Also the generalization to Yang-Mills fields can give rise to nontrivial solutions [20]. In fact, the solutions of the five-dimensional Einstein-GB theory have been studied from a higher-dimensional point of view in ref.~[8], where it was shown that the effect of the GB term is only detectable through the coupling of electrodynamics to the gravitational field. 
However, the case of dyonic solutions was disregarded in that paper. As we shall see, dyonic solutions of the standard Maxwell equations are modified, even in the absence of the gravitational field, due to the nonlinear terms present in the field equations. Dyons were introduced in ref.~[21] and have found many applications in grand unified theories. Especially interesting are their implications for the properties of charged black holes, in particular in relation with uniqueness and no-hair theorems. The coupling of our dyonic solution with gravity of course modifies the standard RN black holes, giving another example of the failure of the uniqueness theorems in the case of nontrivial couplings. In this paper, we describe exact flat-space dyonic solutions of the nonlinear Maxwell equations derived from GB-KK theory and show that solutions with everywhere regular electric field are possible. We then investigate the effect of these configurations on the black hole solutions of general relativity if the electromagnetic field is minimally coupled, obtaining an exact solution. We briefly discuss its properties and thermodynamical parameters. However, we shall not consider the nonminimal couplings with the gravitational field arising from the dimensional reduction of the GB Lagrangian, since this problem is more involved and will therefore be studied separately [22]. \goodbreak \section{2. The dyonic solution in flat space} We consider a five-dimensional Einstein-Gauss-Bonnet theory, with action $$I=\int\sqrt{-g}\ d^5x(R+\alpha S),\eqno(1)$$ where $\alpha$ is a coupling constant of dimension (length)$^2$, $R$ is the Ricci scalar and $S$ the Gauss-Bonnet term, $S=R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}-4R^{\mu\nu}R_{\mu\nu}+R^2$. 
We use the simple ansatz\footnote{$^\dagger$}{Greek indices run from 0 to 4, Latin indices from 0 to 3.} [5] $$g_{\mu\nu}=\(\matrix{g_{ij}+g^2A_iA_j&gA_i\cr gA_j&1}\),\eqno(2)$$ where $A_i$ is the Maxwell potential and $g$ a coupling constant. Discarding total derivatives, the action (1) reduces to [1-3] $$I=\int\sqrt{-g}\ d^4x\[R-{g^2\over4} F^{ij} F_{ij}+{3\alpha g^4\over16}\Big[(F^{ij} F_{ij})^2-2F^{ij}F_{jk}F^{kl}F_{li}\Big]- {\alpha g^2\over2}L_{int}\],\eqno(3)$$ where $$L_{int}=F^{ij} F^{kl}(R_{ijkl}-4R_{ik}\delta_{jl}+R\,\delta_{ik}\delta_{jl}),\eqno(4)$$ and $F_{ij}=\partial_iA_j-\partial_jA_i$. This model of electromagnetism modified with nonlinear terms has been previously considered in ref.~[4]. The Einstein-Maxwell coupling (4) has also been investigated in ref.~[8]. In this section we shall consider only the electromagnetic field in flat spacetime, neglecting gravity, since we are mainly interested in the nonlinear modifications of the Maxwell theory. The solutions of the EM theory (3) will be discussed in the following section. The electromagnetic sector of the action (3) then reduces to [2] $$I_{em}=\int d^4x\ \(-{g^2\over4}F^{ij} F_{ij}+{3\alpha g^4\over16}\Big[(F^{ij} F_{ij})^2-2F^{ij}F_{jk}F^{kl}F_{li}\Big]\).\eqno(5)$$ The field equations derived from (5) read $$\(1-{3\alpha g^2\over2}F^2\)\partial_jF_{ji}+3\alpha g^2\,\partial_j(F_{jk}F_{kl}F_{li})=0,\eqno(6)$$ and contain derivatives of the potential $A_i$ not higher than second order. 
Of course, the field $F_{ij}$ also satisfies the Bianchi identities, $\partial_{(i}F_{jk)}=0$. It is easy to see that for purely electric or magnetic solutions the terms coming from the GB correction give no contribution [2,4], in contrast with most nonlinear models of electrodynamics [3]. However, let us consider a spherically symmetric dyonic solution, whose potential is given in spherical coordinates by $$A=a(r)\,dt+ P\cos\theta\,d\phi.\eqno(7)$$ In an orthogonal frame one has $$F_{01}=a'(r),\quad F_{23}={P\over r^2},\eqno(8)$$ where $'=d/dr$. Clearly, $F_{23}$ satisfies (6). To find the solution for the electric potential $a(r)$, it is convenient to write the action in terms of it and perform the variation. After integration over the angular variables, the action is proportional to $$I_{em}=\int r^2dr\[a'^2-{P^2\over r^4}+3\alpha g^2{P^2a'^2\over r^4}\],\eqno(9)$$ and its variation gives $$\[r^2\(1+3\alpha g^2{P^2\over r^4}\)a'\]'=0.\eqno(10)$$ Therefore, $$a'=F_{01}={Q\over r^2\(1+3\alpha g^2{P^2\over r^4}\)}={r^4\over r^4+3\alpha g^2P^2}\ {Q\over r^2},\eqno(11)$$ with $Q$ an integration constant that can be identified with the electric charge. The potential can be obtained by integration. It follows that in this model the electric field of a point charge is distorted in the presence of a magnetic monopole; in particular, for $\alpha<0$ it diverges at $r=(3|\alpha|g^2P^2)^{1/4}$, while for $\alpha>0$ it is regular everywhere. However, the magnetic field (8) is still singular at the origin. 
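The absence of GB corrections for purely electric or magnetic configurations, mentioned above, can be checked directly; we record the short verification here for completeness. With $\eta_{ij}={\rm diag}(-1,1,1,1)$ and $F_{01}=E$ the only nonvanishing component, one finds $$F^{ij}F_{ij}=-2E^2,\qquad F^{ij}F_{jk}F^{kl}F_{li}=2E^4,$$ so that $(F^{ij}F_{ij})^2-2F^{ij}F_{jk}F^{kl}F_{li}=4E^4-4E^4=0$. Likewise, for a purely magnetic field $F_{23}=B$ one has $F^{ij}F_{ij}=2B^2$ and $F^{ij}F_{jk}F^{kl}F_{li}=2B^4$, and the quartic combination in (5) again vanishes identically. Only when electric and magnetic components are both present does the GB correction survive.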
In the limits $P=0$ or $Q=0$ one recovers the standard solutions. \section{3. The coupling to gravity} We now introduce gravity, to see the effects of nonlinear electromagnetism on spherically symmetric black hole solutions. However, we neglect the nonminimal EM coupling (4), since in this way we can obtain exact solutions. The inclusion of this term will be investigated elsewhere [22]. Moreover, contrary to ref.~[4], we consider the solutions of the four-dimensional effective theory, rather than those of the five-dimensional one, since they admit a more transparent interpretation. In the following, in order to obtain the standard normalization, we shall set $g^2=4$ and define $\bar\alpha=4\alpha$, so that $\alpha g^2=\bar\alpha$. \goodbreak We seek spherically symmetric solutions of the form $$ds^2=-e^{2\nu}dt^2+e^{2\mu}dr^2+e^{2\rho}d\Omega^2,\eqno(12)$$ $$A=a(r)\,dt+ P\cos\theta\,d\phi.\eqno(13)$$ As in the flat case, we calculate the field equations by substituting this ansatz into the action and performing the variation. 
We obtain $$I=2\int dr\[(2\nu'\rho'+\rho'^2)e^{\nu-\mu+2\rho}+e^{\nu+\mu}+a'^2e^{-\nu-\mu+2\rho}-P^2e^{\nu+\mu-2\rho}-3\bar\alpha P^2a'^2e^{-\mu-\nu-2\rho}\].\eqno(14)$$ In the gauge $e^\rho=r$ the field equations stemming from (14) read $$2{\nu'\over r}+{1\over r^2}-{e^{2\mu}\over r^2}+a'^2e^{-2\nu}+{P^2\over r^4}e^{2\mu}+3\bar\alpha\,a'^2{P^2\over r^4} e^{-2\nu}=0,\eqno(15)$$ $$-2{\mu'\over r}+{1\over r^2}-{e^{2\mu}\over r^2}+a'^2e^{-2\nu}+{P^2\over r^4}e^{2\mu}+3\bar\alpha\,a'^2{P^2\over r^4} e^{-2\nu}=0,\eqno(16)$$ $$\[r^2e^{-\nu-\mu}\(1+3\bar\alpha\,{P^2\over r^4}\)a'\]'=0.\eqno(17)$$ Combining (15) and (16), we get $$\nu'+\mu'=0.\eqno(18)$$ Hence, for asymptotically flat solutions, $\mu=-\nu$ and, integrating (17), $$a'={Qr^2\over r^4+3\bar\alpha P^2},\eqno(19)$$ with $Q$ an integration constant. Substituting in (15), one can rearrange as $$(re^{2\nu})'=1-{P^2\over r^2}-{Q^2r^2\over r^4+3\bar\alpha P^2}\approx1-{P^2+Q^2\over r^2}+{3\bar\alpha P^2Q^2\over r^6}+\dots\eqno(20)$$ which displays order-$\bar\alpha$ corrections to the corresponding equation for the RN metric function. \goodbreak \bigskip \centerline{\epsfysize=4truecm\epsfbox{metric1a.eps}\qquad\epsfysize=4truecm\epsfbox{metric0a.eps}} \medskip \baselineskip10pt{\noindent{\small Fig.\ 1: The metric function $\scriptstyle{e^{2\nu}}$ for generic (left panel) and near-extremal black holes (right panel). In the right panel, we have chosen values of the parameters that are extremal for the RN black hole. The continuous lines show the RN solution, the dashed line the $\scriptstyle{\alpha>0}$ solution (21) and the dotted line the $\scriptstyle{\alpha<0}$ solution (31). The singularity at $\scriptstyle{r=r_0}$ occurring when $\scriptstyle{\alpha<0}$ is clearly visible.}}\par \bigskip \baselineskip12pt Eq.~(20) can be solved exactly. 
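In fact, the integration of (20) reduces to the standard antiderivative (here $\gamma^2$ denotes the combination $3\bar\alpha P^2$ appearing in (20)) $$\int{r^2\,dr\over r^4+\gamma^2}={1\over2\sqrt{2\gamma}}\[{1\over2}\log{r^2-\sqrt{2\gamma}\,r+\gamma\over r^2+\sqrt{2\gamma}\,r+\gamma}+\arctan\({\sqrt2\,r\over\sqrt\gamma}-1\)+\arctan\({\sqrt2\,r\over\sqrt\gamma}+1\)\]+{\rm const},$$ which can be verified by differentiation, using the factorization $r^4+\gamma^2=(r^2-\sqrt{2\gamma}\,r+\gamma)(r^2+\sqrt{2\gamma}\,r+\gamma)$; this is the origin of the arctangent and logarithm structure of the solutions below.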
Let us first consider the case $\alpha>0$. Setting $\gamma=\sqrt{3\bar\alpha P^2}$ and choosing suitable boundary conditions, the solution is $$e^{2\nu}=1-{2M\over r}+{P^2\over r^2}+{Q^2\over2\sqrt{2\gamma}\,r}\[\pi+\arctan\(1-{\sqrt2\,r\over\sqrt\gamma}\)-\arctan\(1+{\sqrt2\,r\over\sqrt\gamma}\) +{1\over2}\log{r^2-\sqrt{2\gamma}\,r+\gamma\over r^2+\sqrt{2\gamma}\,r+\gamma}\].\eqno(21)$$ This metric exhibits some similarity with the so-called geon solution of Born-Infeld nonlinear electromagnetism coupled to gravity [17].\footnote{$^\ddagger$} {Curiously, a similar solution has also been obtained in a rather different EM model with nonlinear terms [23].} For small $\gamma$, it gives a slight correction to the RN solution, which is however relevant for the uniqueness theorems. In Fig.~1 some solutions are depicted together with the corresponding RN metric function. The asymptotic behavior of (21) reproduces that of the RN metric, $$e^{2\nu}=1-{2M\over r}+{P^2+Q^2\over r^2}+o\({1\over r^3}\),\eqno(22)$$ and one can identify $M$ with the mass, and $Q$ and $P$ with the electric and magnetic charge, respectively. Instead, for $r\to0$ the behavior is different from that of RN, $$e^{2\nu}\sim{P^2\over r^2}-\(2M-{\pi Q^2\over2\sqrt{2\gamma}}\){1\over r}+o(1).\eqno(23)$$ In particular, the ${1\over r}$ term becomes repulsive near the origin for $M<{\pi Q^2\over4\sqrt{2\gamma}}$. The departures from the RN behavior are therefore greater at small $r$. The term proportional to ${1\over\sqrt\gamma}$ arises because we have fixed the boundary conditions so that $M$ is the mass of the solution. It can be useful to define an effective mass near the singularity as $m=M-{\pi Q^2\over4\sqrt{2\gamma}}$. 
The curvature scalar is given by $R={4\gamma^2Q^2\over(r^4+\gamma^2)^2}$ and is regular everywhere but, in contrast with the RN solution, does not vanish. However, also in our case a curvature singularity occurs at the origin, since $$R_{ijkl}R^{ijkl}\sim{56P^4\over r^8}-{96mP^2\over r^7}+{48m^2\over r^6}+o\({1\over r^5}\).\eqno(24)$$ The leading order term in (24) depends only on $P$ and not on $Q$. The causal structure depends on the values of the parameters $M$, $P$ and $Q$ that characterize the solution. Due to the nontrivial form of the metric, a general discussion can be made only numerically. However, if $\gamma\ll1$, as is plausible on physical grounds, the solution should not differ much from the RN metric and one can resort to a perturbative expansion in $\gamma$. One must be careful, however, because this expansion fails at small $r$, since, as follows from (23), in this regime $\sqrt\gamma$ appears in the denominator. The RN metric is known to exhibit a singularity at the origin and two horizons at $$\bar r_\pm=M\pm\sqrt{M^2-P^2-Q^2}.\eqno(25)$$ Unfortunately, it is not possible to obtain an exact expression for the location of the horizons of the metric (21). We can obtain an approximation by expanding in the small parameter $\gamma$, $$e^{2\nu}=1-{2M\over r}+{P^2+Q^2\over r^2}-{\gamma^2Q^2\over5r^6}+o(\gamma^4).\eqno(26)$$ The leading-order corrections are proportional to $\gamma^2$, and we can compute the zeroes of the metric as $r_\pm=\bar r_\pm+\gamma^2\Delta r_\pm+o(\gamma^2)$, where $$\Delta r_\pm=\pm{\,Q^2\over5\,\bar r_\pm^4(\bar r_+-\bar r_-)}.\eqno(27)$$ Hence, the two horizons are farther apart than in the RN case. 
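The shift (27) can be checked numerically against the expanded metric function (26). The following sketch (with illustrative, non-extremal parameter values) locates the outer zero by bisection and compares it with the perturbative prediction:

```python
import math

# illustrative, non-extremal parameters: M^2 > P^2 + Q^2
M, P, Q, gam = 2.0, 1.0, 1.0, 0.1            # gam plays the role of gamma
s = math.sqrt(M * M - P * P - Q * Q)
rp, rm = M + s, M - s                        # unperturbed RN horizons

def f(r):
    """Metric function e^{2 nu} expanded to O(gamma^2), eq. (26)."""
    return 1 - 2 * M / r + (P * P + Q * Q) / r ** 2 - gam ** 2 * Q * Q / (5 * r ** 6)

# outer zero of (26), located by bisection around the RN value
lo, hi = rp - 0.01, rp + 0.01
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
r_plus = 0.5 * (lo + hi)

# eq. (27), upper sign: perturbative shift of the outer horizon
dr_plus = gam ** 2 * Q * Q / (5 * rp ** 4 * (rp - rm))
```

For these values the bisection root and the perturbative prediction agree to better than $10^{-8}$, consistent with the neglected higher-order terms in $\gamma$.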
From (27) it follows that at leading order the condition of extremality $r_+=r_-$ is, recalling that $\gamma^2=3\bar\alpha P^2$, $$M^2\approx Q^2+P^2-{3\bar\alpha P^2Q^2\over5(P^2+Q^2)^2}.\eqno(28)$$ The causal structure is analogous to that of RN: for $M$ greater than its extremal value, one has two horizons, while for $M$ smaller than the extremal value a naked singularity occurs. Although these results are obtained for small $\gamma$, the qualitative behavior of the solution remains the same also for generic values of the coupling constant. The thermodynamical quantities can be computed in the standard way: the area of the external horizon is usually identified with the entropy, therefore $$S=4\pi\(\bar r_+^2+{6\bar\alpha P^2Q^2\over5\bar r_+^3(\bar r_+-\bar r_-)}\)+o(\bar\alpha^2),\eqno(29)$$ while the temperature can be calculated as $$T={1\over4\pi}{de^{2\nu}\over dr}\Big|_{r=r_+}={1\over4\pi}\({\bar r_+-\bar r_-\over\bar r_+^2}+{6\bar\alpha P^2Q^2\over5\bar r_+^7}\,{2\bar r_+-\bar r_-\over \bar r_+-\bar r_-}\)+o(\bar\alpha^2).\eqno(30)$$ For extremal black holes the temperature vanishes. Both temperature and entropy are increased with respect to the Reissner-Nordstr\"om black hole. Let us briefly comment on the case $\alpha<0$. The metric function has a simpler form, $$e^{2\nu}=1-{2M\over r}+{P^2\over r^2}+{Q^2\over2\sqrt\gamma\,r}\[{\pi\over2}-\arctan{r\over\sqrt\gamma}-{1\over2}\log{r-\sqrt\gamma\over r+\sqrt\gamma}\],\eqno(31)$$ where now $\gamma=\sqrt{3|\bar\alpha|P^2}$, but it has the same asymptotic behavior (22) as the previous solution. Also the expansion for small $\gamma$ has the same form as (26), except for the sign of the term proportional to $\gamma^2$. 
The curvature scalar is $R={4\gamma^2Q^2\over(r^4-\gamma^2)^2}$. Now a curvature singularity occurs at the surface $r_0=\sqrt\gamma$, while the horizons are located at $$r_\pm\approx\bar r_\pm\pm{3\bar\alpha P^2Q^2\over5\,\bar r_\pm^4(\bar r_+-\bar r_-)}.\eqno(32)$$ If $r_0>r_-$, a single horizon is present and the causal structure is similar to that of the Schwarzschild solution. Otherwise, the properties are analogous to those of the solution with positive $\alpha$, and all the previous formulas still hold, taking into account that $\bar\alpha$ has the opposite sign. This is true in particular for the thermodynamical quantities. \goodbreak \section{4. Conclusion} We have considered the effect of the nonlinearity of the electrodynamics induced by a five-dimensional KK model with Einstein-GB Lagrangian on the dyonic solutions with a pointlike source. While it is well known that purely electric or magnetic solutions are not modified in this model, we have shown that the dyonic solutions differ from those of the Maxwell theory, and that the electric field can be regular everywhere. In our model, the field equations contain at most cubic terms in $A_i$, but the model can be generalized to higher powers by increasing the number of internal dimensions and adding higher-order GB terms. Also in this case the pure electric or magnetic fields of pointlike sources maintain the standard form, but the dyonic solutions are modified, and for suitable ranges of values of the coupling constants the singularity of the electric field is suppressed. We have also examined the coupling with gravity and have found a new class of solutions, with a Maxwell field identical to the flat-space solution and a metric that deforms the RN solution. 
The solutions still depend on the three parameters $M$, $Q$ and $P$, but are no longer dual under the interchange of $Q$ and $P$. For positive $\alpha$ they exhibit a pointlike singularity, while for negative $\alpha$ the singularity is spherical. The horizon structure is similar to that of RN, with two horizons, but for some values of the parameters it can present one or no horizons. Our result is notable since it shows that the introduction of nonlinear equations for the electromagnetic fields affects the results of the black hole uniqueness theorems also in the case of minimal coupling to gravity, analogously to what happens in more general models [13-15]. Going to higher dimensions also allows the introduction of Yang-Mills fields through the Kaluza-Klein mechanism. Of course, in this case more complicated solutions are expected. In this paper we have not considered the nonminimal coupling between gravity and electromagnetism induced by the KK-GB model. Such a coupling spoils the possibility of solving the field equations analytically, but the main properties of the solutions should be preserved. We plan to investigate this topic in a future publication [22]. \bigbreak \beginref \ref [1] W. Heisenberg and H. Euler, Z. Phys. {\bf 98}, 714 (1936). \ref [2] M. Born and L. Infeld, \PRS{A144}, 435 (1934). \ref [3] J. Pleba\'nski, {\it Lectures on non-linear electrodynamics}, NORDITA, Copenhagen, 1968. \ref [4] G.W. Horndeski, \JMP{17}, 1980 (1976). \ref [5] H.A. Buchdahl, \JoP{A12}, 1037 (1979). \ref [6] R. Kerner, C.\ R.\ Acad.\ Sc.\ Paris {\bf 304}, 621 (1987). \ref [7] F. M\"uller-Hoissen, \PL{B201}, 325 (1988). \ref [8] H.H. Soleng and \O. Gr\o n, \AoP{240}, 432 (1995). \ref [9] T. Kaluza, Sitz. Preuss. Akad. Wiss., Math. Phys. {\bf 1}, 966 (1921). \ref [10] O. Klein, Z. Phys.\ {\bf 37}, 895 (1926). \ref [11] D.
Lovelock, \JMP{12}, 498 (1971). \ref [12] S.I. Kruglov, \PR{D75}, 117301 (2007). \ref [13] R. Pellicer and R. J. Torrence, \JMP{19}, 1718 (1969). \ref [14] H.P. de Oliveira, \CQG{11}, 1469 (1994). \ref [15] A. Garcia, E. Hackmann, C. L\"ammerzahl and A. Mac\'{\i}as, \PR{D86}, 024037 (2012). \ref [16] W. Israel, \CMP{8}, 245 (1968). \ref [17] M. Demia\'nski, Found.\ Phys.\ {\bf 16}, 187 (1986). \ref [18] G. Gibbons and K. Maeda, \NP{B298}, 741 (1988). \ref [19] S. Mignemi and N.R. Stewart, \PR{D47}, 5259 (1993). \ref [20] M.S. Volkov and D.V. Gal'tsov, JETP Lett.\ {\bf 50}, 346 (1989). \ref [21] J. Schwinger, Science {\bf 165}, 757 (1969). \ref [22] S. Mignemi, in preparation. \ref [23] S.I. Kruglov, Grav.\ and Cosm.\ {\bf 27}, 78 (2021). \end
\section{Algorithm} In this section, we describe a decision procedure for a local theory extension, say $({\Theory_0}, \axioms_e, {\Theory_1})$, which can be easily implemented in most SMT solvers with quantifier instantiation support. We describe our procedure $\mathfrak{D}_{{\Theory_1}}$ as a theory module in a typical SMT solver architecture. For simplicity, we separate out the interaction between the theory solver and the core SMT solver. We describe the procedure abstractly as taking as input: \begin{itemize} \item the original formula $\phi$, \item a set of extension axioms $\axioms_e$, \item a set of instantiations of axioms that have already been made, $Z$, \item a set of ${\Theory_0}$-satisfiable ground literals $G$ such that $G \models \phi \wedge (\bigwedge_{\psi \in Z} \psi)$, and \item a set of equalities $E \subseteq G$ between terms. \end{itemize} It either returns \begin{itemize} \item $\textsf{sat}$, denoting that $G$ is ${\Theory_1}$-satisfiable; or \item a new set of instantiations of the axioms, $Z'$. \end{itemize} For completeness, we briefly describe the way we envisage the interaction mechanism of this module in a DPLL(T) SMT solver. Let the input problem be $\phi$. The SAT solver, along with the theory solvers for ${\Theory_0}$, will find a subset of literals $G$ from $\phi \wedge (\bigwedge_{\psi \in Z} \psi)$ such that its conjunction is satisfiable modulo ${\Theory_0}$. If no such satisfying assignment exists, the SMT solver stops with $\textsf{unsat}$. One can think of $G$ as being simply the literals of $\phi$ on the SAT solver trail. $G$ is sent to $\mathfrak{D}_{{\Theory_1}}$ along with the information known about equalities between terms. The set $Z$ can also be thought of as internal state maintained by the ${\Theory_1}$-theory solver module, with new instances $Z'$ sent out as theory lemmas and $Z$ updated to $Z \cup Z'$ after each call to $\mathfrak{D}_{{\Theory_1}}$.
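The interaction just described can be sketched in a few lines of Python. This is a schematic illustration only: `d_theory1` is a hypothetical stand-in for the module $\mathfrak{D}_{{\Theory_1}}$, specialized here to a single axiom whose instances are indexed by ordered pairs of ground terms (as for the monotonicity axiom with patterns $f(x)$, $f(y)$), and the ground-theory and SAT components are elided.

```python
# Schematic DPLL(T)-style loop: the core solver repeatedly calls the
# theory module, which returns new axiom instances until a fixpoint.
# `d_theory1` is a hypothetical stand-in for D_T1, specialized to an
# axiom whose instances are indexed by ordered pairs of ground terms.

def d_theory1(g_terms, z):
    """Return the instances over g_terms not already in z."""
    return {(s, t) for s in g_terms for t in g_terms} - z

z = set()                     # instantiations made so far (the set Z)
g_terms = {"a", "b"}          # ground terms occurring under f in phi
rounds = 0
while True:
    new = d_theory1(g_terms, z)
    rounds += 1
    if not new:               # no new instances: the module answers "sat"
        break
    z |= new                  # instances are sent back as theory lemmas

# Instances never introduce new ground terms, so the loop saturates after
# one productive round with the |{a,b}|^2 = 4 local instances.
print(len(z), rounds)
```

Because the instances never add ground terms, the loop is guaranteed to reach a fixpoint, mirroring the termination argument given later in this section.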
If $\mathfrak{D}_{{\Theory_1}}$ returns $\textsf{sat}$, the SMT solver also returns $\textsf{sat}$ and stops. On the other hand, if it returns a new set of instances, the SMT solver continues the search to additionally satisfy these. \medskip \noindent \emph{E-matching.} In order to describe our procedure, we introduce the well-studied E-match\-ing problem. Given a universally quantified $\Sigma$-sentence $K$, let $X(K)$ denote its quantified variables. Define a $\Sigma$-substitution $\sub$ for $K$ to be a mapping from the variables $X(K)$ to $\Sigma$-terms of corresponding sort. Given a $\Sigma$-term $p$, let $p\sub$ denote the term obtained by substituting the variables in $p$ as prescribed by $\sub$. Two substitutions $\sub_1$, $\sub_2$ with the same domain $X$ are equivalent modulo a set of equalities $E$ if $\forall x \in X.\, E \models \sub_1(x) \approx \sub_2(x)$. We denote this as $\sub_1 \sim_E \sub_2$. \begin{problem}[E-matching] \begin{description} \item[input:] A set of ground equalities $E$, a set of $\Sigma$-terms $G$, and a set of patterns $P$. \item[output:] The set of substitutions $\sub$ over the variables occurring in $P$, modulo $E$, such that for all $p \in P$ there exists a $t \in G$ with $E \models t \approx p\sub$. \end{description} \end{problem} E-matching is a well-studied problem, specifically in the context of SMT. An algorithm for E-matching that is efficient and backtrackable is described in \cite{MB07}. We denote this procedure by $\mathfrak{E}$. \begin{figure}[t] $\mathfrak{D}_{{\Theory_1}}(\phi, \axioms_e, Z, G, E)$ Local variable: $Z'$, initially an empty set. \begin{enumerate} \item For each $K \in \axioms_e$: \begin{enumerate} \item Define the set of patterns $P$ to be the function applications in $K$ containing variables. We observe that because the axioms are linear and flat, these patterns are always of the form $f(x_1,\dots,x_n)$, where $f$ is an extension symbol and the $x_i$ are quantified variables.
\item Run $\mathfrak{E}(E, G, P)$, obtaining substitutions $\subs$. Without loss of generality, assume that each $\sub \in \subs$ returned by the algorithm is such that $\m{st}(K\sub) \subseteq \m{st}(G \cup \axioms_e)$. For the special case of the patterns in (a), for any $\sub$ not respecting this condition there exists one in its equivalence class that does: formally, $\forall \sub.\, \exists \sub'.\, \sub' \sim_E \sub \wedge \m{st}(K\sub') \subseteq \m{st}(G \cup \axioms_e)$. We make this assumption only for simplicity of the arguments later in the paper. If one uses an E-matching procedure not respecting this constraint, our procedure is still terminating and correct (albeit with a suboptimal total number of instantiations). \item For each $\sub \in \subs$, if there exists no $K\sub'$ in $Z$ such that $\sub \sim_E \sub'$, then add $K\sub$ to $Z'$ as a new instantiation to be made. \end{enumerate} \item If $Z'$ is empty, return $\textsf{sat}$; otherwise return $Z'$. \end{enumerate} \caption{Procedure $\mathfrak{D}_{{\Theory_1}}$} \label{fig:algo} \end{figure} The procedure $\mathfrak{D}_{{\Theory_1}}(\phi, \axioms_e, Z, G, E)$ is given in Fig.~\ref{fig:algo}. Intuitively, it adds all the new instances along the current search path that are required for local theory reasoning as given in Definition \ref{def:lte}, but modulo equality. For each axiom $K$ in $\axioms_e$, the algorithm looks for function applications containing variables. For example, for the monotonicity axiom of Sect.~\ref{sec:example}, these would be the terms $f(x)$ and $f(y)$. These terms serve as patterns for the E-matching procedure.
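For flat patterns of the form $f(x_1,\dots,x_n)$, the E-matching step admits a particularly simple reading, which the following sketch illustrates. This is a naive quadratic version for illustration only (the indexed, backtrackable algorithm of \cite{MB07} is far more efficient); ground terms are modeled as strings (constants) or tuples, and term names are ours.

```python
# A naive E-matching sketch for flat patterns f(x_1,...,x_n): ground terms
# are either constants (strings) or tuples ('f', t_1, ..., t_n), and E is
# a list of ground equalities between constants.  Illustrative only.

def rep(x, parent):
    """Representative of x in a union-find forest."""
    while parent[x] != x:
        x = parent[x]
    return x

def e_match(E, G, f, arity):
    parent = {}
    for a, b in E:                       # build the E-classes of constants
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        parent[rep(a, parent)] = rep(b, parent)
    canon = lambda t: rep(t, parent) if t in parent else t
    subs = set()
    for t in G:                          # one substitution per E-class
        if isinstance(t, tuple) and t[0] == f and len(t) == arity + 1:
            subs.add(tuple(canon(arg) for arg in t[1:]))
    return subs

G = [("f", "a"), ("f", "b"), ("f", "c"), "a", "b", "c"]
subs = e_match([("a", "c")], G, "f", 1)
# f(a) and f(c) yield E-equivalent substitutions, so two classes remain
print(sorted(subs))
```

Returning one substitution per E-class is exactly the "modulo $E$" requirement in the E-matching problem statement.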
Next, with the help of the E-matching algorithm, all \emph{new} instances are computed (more precisely, all instances for the axiom $K$ that are equivalent modulo $\sim_E$ to one already in $Z$ are skipped). If there are no new instances for any axiom in $\axioms_e$, and the set $G$ of literals implies $\phi$, we stop with \textsf{sat}, as effectively we have that $G \cup \axioms_e[G]$ is satisfiable modulo ${\Theory_0}$. Otherwise, we return the set of new instances. The algorithm $\mathfrak{D}_{{\Theory_1}}$ may \emph{look} inefficient because of the nested loops and the bookkeeping of which substitutions have already been made and which are new. However, in actual implementations all of this is taken care of by the E-matching algorithm. There has been significant research on fast, incremental algorithms for E-matching in the context of SMT, and one advantage of our approach is that it can leverage this work. \medskip \noindent \emph{Correctness.} The correctness argument relies on two aspects: first, that if the SMT solver returns $\textsf{sat}$ (resp. $\textsf{unsat}$) then $\phi$ is satisfiable (resp. unsatisfiable) modulo ${\Theory_1}$, and second, that it terminates. For the case where the output is $\textsf{unsat}$, correctness follows from the fact that $Z$ only contains instances of $\axioms_e$. The $\textsf{sat}$ case is more tricky, but the main idea is that the set of instances made by $\mathfrak{D}_{{\Theory_1}}(\phi, \axioms_e, Z, G, E)$ is logically equivalent to $\axioms_e[G]$. Thus, when the solver stops, $G \cup \axioms_e[G]$ is satisfiable modulo ${\Theory_0}$. As a consequence, $G$ is satisfiable modulo ${\Theory_1}$. Since $G \models \phi$, we have that $\phi$ is satisfiable modulo ${\Theory_1}$. Termination relies on the fact that the instantiations returned by the procedure $\mathfrak{D}_{{\Theory_1}}(\phi, \axioms_e, Z, G, E)$ do not add new terms, and are always a subset of $\axioms_e[\phi]$.
Since $\axioms_e[\phi]$ is finite, eventually $\mathfrak{D}_{{\Theory_1}}$ will stop making new instantiations. Assuming that we have a terminating decision procedure for the ground SMT problem of ${\Theory_0}$, we get a terminating decision procedure for ${\Theory_1}$. \begin{theorem} An SMT solver with theory module $\mathfrak{D}_{{\Theory_1}}$ is a decision procedure for the satisfiability problem modulo ${\Theory_1}$. \end{theorem} \paragraph{Psi-local theories.} We briefly explain how our approach can be extended to the more general notion of Psi-local theory extensions~\cite{IhlemannETAL08LocalReasoninginVerification}. Sometimes, it is not sufficient to consider only local instances of extension axioms to decide satisfiability modulo a theory extension. For example, consider the following set of ground literals: \[G = \{f(a) = f(b), a \neq b\} \] Suppose we interpret $G$ in a theory of an injective function $f: S \to S$ with a partial inverse $g: S \to S$ for some set $S$. We can axiomatize this theory as a theory extension of the theory of uninterpreted functions using the axiom \[K = \forall x, y.\, f(x) = y \rightarrow g(y) = x \enspace. \] $G$ is unsatisfiable in the theory extension, but the local instances of $K$ with respect to the ground terms $\m{st}(G)=\{a,b,f(a),f(b)\}$ are insufficient to yield a contradiction in the base theory. However, if we consider the local instances with respect to the larger set of ground terms \[\Psi(\m{st}(G)) = \{a,b,f(a),f(b),g(f(a)),g(f(b))\}, \] then we obtain, among others, the instances \[f(a) = f(b) \rightarrow g(f(b)) = a \quad \text{ and } \quad f(b) = f(a) \rightarrow g(f(a)) = b \enspace. \] Together with $G$, these instances are unsatisfiable in the base theory.
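The reasoning in this example is mechanical enough to replay in a short script. Terms are nested tuples, the small union-find stands in for the base theory's congruence reasoning, and the closure computed by `psi` matches the set $\Psi(\m{st}(G))$ displayed above (an illustrative sketch, not solver code).

```python
# Replaying the inverse-function example: from st(G) = {a, b, f(a), f(b)}
# the Psi-closure adds g(f(a)) and g(f(b)); the two local instances of K
# over the closure then force a = b, contradicting a != b in G.

def subterms(terms):
    out = set()
    def walk(t):
        out.add(t)
        if isinstance(t, tuple):
            for s in t[1:]:
                walk(s)
    for t in terms:
        walk(t)
    return out

def psi(terms):
    # add g(f(t)) for every f-term already present (matches the set above)
    return terms | {("g", t) for t in terms
                    if isinstance(t, tuple) and t[0] == "f"}

ground = subterms({("f", "a"), ("f", "b"), "a", "b"})
closure = psi(ground)

# tiny union-find standing in for congruence reasoning in the base theory
parent = {}
def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent.setdefault(parent[x], parent[x])
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

union(("f", "a"), ("f", "b"))                # from G: f(a) = f(b)
union(("g", ("f", "b")), "a")                # instance: f(a)=f(b) -> g(f(b))=a
union(("g", ("f", "a")), "b")                # instance: f(b)=f(a) -> g(f(a))=b
union(("g", ("f", "a")), ("g", ("f", "b")))  # congruence from f(a)=f(b)
contradiction = find("a") == find("b")       # a = b, against a != b in G
print(len(closure), contradiction)
```

The final equality of the classes of $a$ and $b$ is exactly the base-theory contradiction described above.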
The set $\Psi(\m{st}(G))$ is computed as follows: \[ \Psi(\m{st}(G)) = \m{st}(G) \cup \pset{g(f(t))}{t \in \m{st}(G)}\] It turns out that considering local instances with respect to $\Psi(\m{st}(G))$ is sufficient to check satisfiability modulo the theory extension $K$ for arbitrary sets of ground clauses $G$. Moreover, $\Psi(\m{st}(G))$ is always finite. Thus, we still obtain a decision procedure for the theory extension via finite instantiation of extension axioms. Psi-local theory extensions formalize this idea. In particular, if $\Psi$ satisfies certain properties including monotonicity and idempotence, one can again provide a model-theoretic characterization of completeness in terms of embeddings of partial models. We refer the reader to~\cite{IhlemannETAL08LocalReasoninginVerification} for the technical details. To use our algorithm for deciding satisfiability of a set of ground literals $G$ modulo a Psi-local theory extension $({\Theory_0},\axioms_e,{\Theory_1})$, we simply need to add an additional preprocessing step in which we compute $\Psi(\m{st}(G))$ and define $G' = G \cup \pset{\mathtt{instclosure}(t)}{t \in \Psi(\m{st}(G))}$ where $\mathtt{instclosure}$ is a fresh predicate symbol. Then calling our procedure for ${\Theory_1}$ with $G'$ decides satisfiability of $G$ modulo ${\Theory_1}$. \section{Conclusion} We have presented a new algorithm for deciding local theory extensions, a class of theories that plays an important role in verification applications.
Our algorithm relies on existing SMT solver technology so that it can be easily implemented in today's solvers. In its simplest form, the algorithm does not require any modifications to the solver itself but only trivial syntactic modifications to its input. These are: (1) flattening and linearizing the extension axioms; and (2) adding trigger annotations to encode locality constraints for E-matching. In our evaluation we have experimented with different configurations of two SMT solvers, implementing a number of optimizations of our baseline algorithm. Our results suggest interesting directions to further improve the quantifier modules of current SMT solvers, promising better performance and usability for applications in automated verification. \section{Example} \label{sec:example} We start our discussion with a simple example that illustrates the basic idea behind local theory extensions. Consider the following set of ground literals \[ G = \{ a + b = 1\text{, }f(a) + f(b) = 0 \}. \] We interpret $G$ in the theory of linear integer arithmetic and a monotonically increasing function $f:\mathbb{Z} \to \mathbb{Z}$. One satisfying assignment for $G$ is: \begin{equation}\label{eqn:example:model:1} a=0\text{, }b=1\text{, }f(x) = \{ -1\text{ if }x \leq 0, 1 \text{ if }x > 0 \}. \end{equation} We now explain how we can use an SMT solver to conclude that $G$ is indeed satisfiable in the above theory. SMT solvers commonly provide built-in decision procedures for common theories such as the theory of linear integer arithmetic (\texttt{LIA}) and the theory of equality over uninterpreted functions (\texttt{UF}). However, they do not natively support the theory of monotone functions.
The standard way to enforce $f$ to be monotonic is to axiomatize this property, \begin{equation}\label{eqn:example:axiom} K = \forall x, y.\ x \leq y \rightarrow f(x) \leq f(y), \end{equation} and then let the SMT solver check whether $G \cup \set{K}$ is satisfiable via a reduction to its natively supported theories. In our example, the reduction target is the combination of \texttt{LIA} and \texttt{UF}, which we refer to as the \emph{base theory}, denoted by ${\Theory_0}$. We refer to the axiom $K$ as a \emph{theory extension} of the base theory and to the function symbol $f$ as an \emph{extension symbol}. Most SMT solvers divide the work of deciding ground formulas $G$ in a base theory ${\Theory_0}$ and axioms $\mathcal{K}$ of theory extensions between different modules. A quantifier module looks for substitutions of the variables within an axiom $K$, here $x$ and $y$, by ground terms $t_1$ and $t_2$. We denote such a substitution as $\sigma = \{ x \mapsto t_1, y \mapsto t_2\}$ and the instance of the axiom $K$ with respect to this substitution as $K \sigma$. The quantifier module iteratively adds the generated ground instances $K \sigma$ as lemmas to $G$ until the base theory solver derives a contradiction. However, if $G$ is satisfiable, as in our case, then the quantifier module does not know when to stop generating instances of $K$, and the solver may diverge, effectively enumerating an infinite model of $G$. For a local theory extension, we can syntactically restrict the instances $K \sigma$ that need to be considered before concluding that $G$ is satisfiable to a finite set of candidates. More precisely, a theory extension is called \emph{local} if, in order to decide satisfiability of $G \cup \set{K}$, it is sufficient to consider only those instances $K \sigma$ in which all ground terms already occur in $G$ and $K$. The monotonicity axiom $K$ is a local theory extension of ${\Theory_0}$.
The local instances of $K$ with respect to $G$ are: \begin{align*} K \sigma_1 = a \leq b \rightarrow f(a) \leq f(b) & \; \text{ where } \; \sigma_1 = \{x \mapsto a, y \mapsto b\}, \\ K \sigma_2 = b \leq a \rightarrow f(b) \leq f(a) & \; \text{ where } \; \sigma_2 = \{x \mapsto b, y \mapsto a\}, \\ K \sigma_3 = a \leq a \rightarrow f(a) \leq f(a) & \; \text{ where } \; \sigma_3 = \{x \mapsto a, y \mapsto a\}, \text{ and}\\ K \sigma_4 = b \leq b \rightarrow f(b) \leq f(b) & \; \text{ where } \; \sigma_4 = \{x \mapsto b, y \mapsto b\}. \end{align*} Note that we do not need to instantiate $x$ and $y$ with other ground terms in $G$, such as $0$ and $1$. Adding the above instances to $G$ yields \[ G' = G \cup \{ K \sigma_1, K\sigma_2, K \sigma_3, K \sigma_4 \}, \] which is satisfiable in the base theory. Since $K$ is a local theory extension, we can immediately conclude that $G \cup \{K\}$ is also satisfiable. \paragraph{Recognizing Local Theory Extensions.} There are two useful characterizations of local theory extensions that can help users of SMT solvers in designing axiomatizations that are local. The first one is model-theoretic~\cite{G01,SS05}. Consider again the set of ground clauses $G'$. When checking satisfiability of $G'$ in the base theory, the SMT solver may produce the following model: \begin{equation}\label{eqn:example:model:2} a=0\text{, }b=1\text{, }f(x) = \{ -1 \text{ if } x=0\text{, }1\text{ if }x=1 \text{, -1 otherwise} \}. \end{equation} This is not a model of the original $G \cup \{K\}$. However, if we restrict the interpretation of the extension symbol $f$ in this model to the ground terms in $G \cup \{K\}$, we obtain the \emph{partial model} \begin{equation}\label{eqn:example:model:3} a=0\text{, }b=1\text{, }f(x) = \{ -1 \text{ if } x=0\text{, }1\text{ if }x=1 \text{, undefined otherwise} \}. \end{equation} This partial model can now be embedded into the model~(\ref{eqn:example:model:1}) of the theory extension.
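For this concrete example, the claims above can be checked directly with a small script (illustrative only, not part of any solver): the base-theory model (2) satisfies $G$ and all four local instances, yet violates $K$ outside the ground terms, while the total model (1) also satisfies $K$ on a sample of the integers.

```python
# Checking the example's claims: model (2) satisfies G and the four local
# instances K sigma_1..4 but not K globally; model (1) satisfies all three.

a, b = 0, 1
f1 = lambda x: -1 if x <= 0 else 1            # model (1), total
f2 = lambda x: {0: -1, 1: 1}.get(x, -1)       # model (2), from the solver

def sat_G(f):
    return a + b == 1 and f(a) + f(b) == 0

def sat_local_instances(f):
    # the four substitutions sigma_1..sigma_4 over {a, b}
    return all((not x <= y) or f(x) <= f(y) for x in (a, b) for y in (a, b))

def sat_K_on(f, domain):
    return all((not x <= y) or f(x) <= f(y) for x in domain for y in domain)

sample = range(-3, 4)
print(sat_G(f1), sat_local_instances(f1), sat_K_on(f1, sample))
print(sat_G(f2), sat_local_instances(f2), sat_K_on(f2, sample))
```

Model (2) fails $K$ at, e.g., $x=1$, $y=2$, which is precisely why only its restriction to the ground terms, the partial model (3), embeds into the total model (1).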
If such embeddings of partial models of $G'$ to total models of $G \cup \{K\}$ always exist for all sets of ground literals $G$, then $K$ is a local theory extension of ${\Theory_0}$. The second characterization of local theory extensions is proof-theoretic and states that a set of axioms is a local theory extension if it is saturated under (ordered) resolution~\cite{DBLP:conf/lics/BasinG96}. This characterization can be used to automatically compute local theory extensions from non-local ones~\cite{DBLP:conf/frocos/HorbachS13}. Note that the locality property depends both on the base theory as well as the specific axiomatization of the theory extension. For example, the following axiomatization of a monotone function $f$ over the integers, which is logically equivalent to equation~(\ref{eqn:example:axiom}) in ${\Theory_0}$, is not local: \[K = \forall x.\, f(x) \leq f(x + 1) \enspace. \] Similarly, if we replace all inequalities in equation~(\ref{eqn:example:axiom}) by strict inequalities, then the extension is no longer local for the base theory ${\Theory_0}$. However, if we replace ${\Theory_0}$ by a theory in which $\leq$ is a dense order (such as in linear real arithmetic), then the strict version of the monotonicity axiom is again a local theory extension. In the next two sections, we show how we can use the existing technology implemented in quantifier modules of SMT solvers to decide local theory extensions. In particular, we show how E-matching can be used to further reduce the number of axiom instances that need to be considered before we can conclude that a given set of ground literals $G$ is satisfiable.
\section{Implementation and Experimental Results} \smartparagraph{Benchmarks.} We evaluated our techniques on a set of benchmarks generated by the deductive verification tool \tool{GRASShopper}~\cite{grasshopper-tool}. The benchmarks encode memory safety and functional correctness properties of programs that manipulate complex heap-allo\-cated data structures. The programs are written in a type-safe imperative language without garbage collection. The tool makes no simplifying assumptions about these programs like acyclicity of heap structures. \tool{GRASShopper} supports mixed specifications in (classical) first-order logic and separation logic (SL)~\cite{Reynolds02SeparationLogic}. The tool reduces the program and specification to verification conditions that are encoded in a hierarchical combination of (Psi-)local theory extensions. This hierarchy of extensions is organized as follows: \begin{enumerate} \item \textit{Base theory:} at the lowest level we have \texttt{UFLIA}, the theory of uninterpreted functions and linear integer arithmetic, which is directly supported by SMT solvers. \item \textit{GRASS:} the first extension layer consists of the theory of graph reachability and stratified sets. This theory is a disjoint combination of two local theory extensions: the theory of linked lists with reachability~\cite{DBLP:conf/popl/LahiriQ08} and the theory of sets over interpreted elements~\cite{DBLP:conf/birthday/Zarba03}.
\item \textit{Frame axioms:} the second extension layer consists of axioms that encode the frame rule of separation logic. This theory extension includes arrays as a subtheory. \item \textit{Program-specific extensions:} the final extension layer consists of a combination of local extensions that encode properties specific to the program and data structures under consideration. These include: \begin{itemize} \item axioms defining memory footprints of SL specifications, \item axioms defining structural constraints on the shape of data structures, \item sortedness constraints, and \item axioms defining partial inverses of certain functions, e.g., to express injectivity of functions and to specify the content of data structures. \end{itemize} \end{enumerate} We refer the interested reader to~\cite{DBLP:conf/cav/PiskacWZ13, grasshopper, DBLP:conf/cav/PiskacWZ14} for further details about the encoding. The programs considered include sorting algorithms, common data structure operations, such as inserting and removing elements, as well as complex operations on abstract data types. Our selection of data structures consists of singly- and doubly-linked lists, sorted lists, nested linked lists with head pointers, binary search trees, skew heaps, and a union-find data structure. The input programs comprise 108 procedures with a total of 2000 lines of code, 260 lines of procedure contracts and loop invariants, and 250 lines of data structure specifications (including some duplicate specifications that could be shared across data structures). The verification of these specifications is reduced by \tool{GRASShopper} to 816 SMT queries, each of which serves as one benchmark in our experiments. 802 benchmarks are unsatisfiable. The remaining 14 satisfiable benchmarks stem from programs that have bugs in their implementation or specification.
All of these are genuine bugs that users of \tool{GRASShopper} made while writing the programs.\footnote{See~\url{www.cs.nyu.edu/~kshitij/localtheories/} for the programs and benchmarks used.} We considered several versions of each benchmark, which we describe in more detail below. Each of these versions is encoded as an SMT-LIB 2 input file. \smartparagraph{Experimental setup.} All experiments were conducted on the StarExec platform~\cite{StumpST14} with a CPU time limit of one hour and a memory limit of 100 GB. We focus on the SMT solvers \tool{CVC4}\cite{conf/cav/BarrettCDHJKRT11} and \tool{Z3}\cite{MouraBjoerner08Z3}\footnote{We used the version of \tool{Z3} downloaded from the git master branch at \url{http://z3.codeplex.com} on Jan 17, 2015.} as both support \texttt{UFLIA} and quantifiers via E-matching. This version of \tool{CVC4} is a fork of v1.4 with special support for quantifiers.\footnote{This version is available at \url{www.github.com/kbansal/CVC4/tree/cav14-lte-draft}. } We instrumented \tool{GRASShopper} to eagerly instantiate all of the (Psi-)local theory axioms (modulo top-level equalities). The resulting ground SMT query is satisfiable if and only if the original query is satisfiable in the local theory extension. We also instrumented \tool{GRASShopper} to generate benchmarks where (Psi-)local theory axioms are provided as quantified formulas. We call these \emph{uninstantiated} benchmarks. In order to be able to test our approach with both \tool{CVC4} and \tool{Z3}, wherever possible we transformed the benchmarks to simulate our algorithm. We describe these transformations in this paragraph. First, the quantified formulas in the benchmarks were linearized and flattened, and annotated with patterns to simulate Step 1(a) of our algorithm (this was done by \tool{GRASShopper} in our experiments, but may also be handled by an SMT solver aware of local theories).
Both \tool{CVC4} and \tool{Z3} support using these annotations for controlling instantiations in their E-matching procedures. In order to handle Psi-local theories, the additional terms required for completeness were provided as dummy assertions, so that they appear as ground terms to the solver. In \tool{CVC4}, we also made some changes internally so as to treat these assertions specially and apply certain additional optimizations, which we describe later in this section. \smartparagraph{Experiment 1.} \input{figure-scatterplots} Our first experiment aims at comparing the effectiveness of eager instantiation versus incremental instantiation up to congruence (as done by E-matching). We instrumented \tool{GRASShopper} to eagerly instantiate all axioms. Figure~\ref{fig:instantiations} charts the number of eager instantiations versus the number of E-matching instantiations for each query in a logarithmic plot.\footnote{Figure~\ref{fig:instantiations} does not include timeouts for \tool{CVC4}.} Points lying on the central line have an equal number of instantiations in both series, while points lying on the lower line have 10 times as many eager instantiations as E-matching instantiations. (The upper line corresponds to a ratio of $\frac{1}{10}$.) Most benchmarks require substantially more eager instantiations. Subfigure (a) compares upfront instantiation with a baseline implementation of our E-matching algorithm. Points along the $x$-axis required no instantiations in \tool{CVC4} to conclude unsat. We have plotted the above charts up to $10^{10}$ instantiations. There were four outlying benchmarks where upfront instantiation produced between $10^{10}$ and $10^{14}$ instances. E-matching had zero instantiations for all four. Subfigure (b) compares against an optimized version of our algorithm implemented in \tool{CVC4}. It shows that incremental solving reduces the number of instantiations significantly, often by several orders of magnitude.
The details of these optimizations are given later in the section. \smartparagraph{Experiment 2.} \input{table-uninst} Next, we performed a more thorough comparison of running times and the number of benchmarks solved for the \emph{uninstantiated benchmarks}. These results are in Table~\ref{table:cvc4:comp:noinst}. The benchmarks are partitioned according to the types of data structures occurring in the programs from which the benchmarks have been generated. Here, ``sl'' stands for singly-linked, ``dl'' for doubly-linked, and ``sls'' for sorted singly-linked. The binary search tree, skew heap, and union-find benchmarks have all been summarized in the ``trees'' row. The row ``soundness'' contains unsatisfiable benchmarks that come from programs with incorrect code or specifications. These programs manipulate various types of data structures. The actual satisfiable queries that reveal the bugs in these programs are summarized in the ``sat'' row. We simulated our algorithm and ran these experiments on both \tool{CVC4} (C) and \tool{Z3}, obtaining similar improvements with both. We ran each with three configurations: \begin{description} \item[UD] Default. For comparison purposes, we ran the solvers with default options. \tool{CVC4}'s default solver uses an E-matching-based heuristic instantiation procedure, whereas \tool{Z3}'s uses both E-matching and model-based quantifier instantiation (MBQI). For both solvers, the default procedures are incomplete for our benchmarks. \item[UL] These columns refer to the E-matching-based complete procedure for local theory extensions (algorithm in Fig.~\ref{fig:algo}).\footnote{ The configuration C\xspace UL\xspace had one memory out on a benchmark in the tree family. } \item[ULO] Doing instantiations inside the solver instead of upfront opens up room for optimizations wherein one tries some instantiations before others, or reduces the number of instantiations using other heuristics that do not affect completeness.
The results in these columns show the effect of all such optimizations. \end{description} As noted above, the UL\xspace and ULO\xspace procedures are both complete, whereas UD\xspace is not. This is also reflected in the ``sat'' row in Table~\ref{table:cvc4:comp:noinst}. Incomplete instantiation-based procedures cannot hope to answer ``sat''. A significant improvement can be seen between the UL\xspace and ULO\xspace columns. The general thrust of the optimizations was to avoid a blowup in the number of instantiations by doing ground theory checks on a subset of the instantiations. Our intuition is that the theory lemmas learned from these checks eliminate large parts of the search space before we do further instantiations. For example, we used a heuristic for Psi-local theories inspired by the observation that the axioms involving Psi-terms are needed mostly for completeness, and that most of the time we can prove unsatisfiability without instantiating axioms with these terms. We therefore tried an approach where the instantiations were staged. First, the instantiations were done according to the algorithm in Fig.~\ref{fig:algo} for locality with respect to ground terms from the original query. Only when those were saturated were the instantiations for the auxiliary Psi-terms generated. We found this to be very helpful. Since this required non-trivial changes inside the solver, we only implemented this optimization in \tool{CVC4}; but we think that staging instantiations for Psi-local theories is a good strategy in general. A second optimization, again with the idea of cutting down instantiations, was adding assertions of the form $(a = b) \vee (a \neq b)$ to the benchmarks, where $a$, $b$ are ground terms.
This forces an arbitrary arrangement over the ground terms before the instantiation procedure kicks in. Intuitively, the solver first does checks with many terms equal to each other (and hence fewer instantiations), eliminating as much of the search space as possible. Only when equality or disequality is relevant to the reasoning is the solver forced to instantiate with terms disequal to each other. One may contrast this with ideas used successfully in the care-graph-based theory combination framework in SMT, where one needs to try all possible arrangements of equalities over terms. It has been observed that equality or disequality is sometimes relevant only for a subset of pairs of terms. Whereas in theory combination this idea is used to cut down the number of arrangements that need to be considered, we use it to reduce the number of unnecessary instantiations. We found that this helped \tool{CVC4} considerably on many benchmarks. Another optimization was instantiating special cases of the axioms first by enforcing equalities between variables of the same sort, before doing a full instantiation. We did this for axioms that yield a particularly large number of instances (instantiations growing with the fourth power of the number of ground terms). Again, we believe this could be a good heuristic in general. \smartparagraph{Experiment 3.} \input{table-partinst} Effectively Propositional Logic (EPR) is the fragment of first-order logic consisting of formulas of the shape $\exists\vec{x}\forall\vec{y}.G$ with $G$ quantifier-free and where none of the universally quantified variables $\vec{y}$ appears below a function symbol in $G$. Theory extensions that fall into EPR are always local. Our third exploration is to see whether we can exploit dedicated procedures for this fragment when such fragments occur in the benchmarks. While experimenting, we noticed that most of the instantiations came from axioms that fall into the EPR fragment, or that can be partially instantiated to fall into EPR. For the EPR fragment, \tool{Z3} has a complete decision procedure that uses model-based quantifier instantiation. We therefore implemented a hybrid approach wherein we did upfront partial instantiation to the EPR fragment using E-matching with respect to top-level equalities (as described in our algorithm). The resulting EPR benchmark is then decided using \tool{Z3}'s MBQI mode. This approach can only be expected to help where there are EPR-like axioms in the benchmarks, and some of our benchmarks make heavy use of such axioms. We found that on singly-linked list and tree benchmarks this hybrid algorithm significantly outperforms all other solver configurations that we have tried in our experiments. On the other hand, on nested list benchmarks, which make heavier use of purely equational axioms, this technique does not help compared to only using E-matching, because the partial instantiation already yields ground formulas. The results with our hybrid algorithm are summarized in Column Z3 PM\xspace of Table~\ref{table:cvc4:comp:inst}.
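The EPR membership condition can be made concrete with a small sketch. The term representation below (variables as strings, applications as tuples) is our own illustrative encoding, not taken from any tool; the check verifies that no universally quantified variable occurs below a function symbol in a clause.

```python
# Illustrative EPR membership check: a universally quantified clause is in
# EPR iff no universal variable occurs strictly below a function symbol.
def occurs_below_function(term, uvars):
    """term is a variable name (str) or a tuple (fun, arg1, ..., argn)."""
    if isinstance(term, str):
        return False                      # a bare variable at this level is fine
    _, *args = term
    for a in args:
        if isinstance(a, str):
            if a in uvars:
                return True               # universal variable below a function
        elif occurs_below_function(a, uvars):
            return True
    return False

def clause_in_epr(uvars, atoms):
    """atoms: list of (pred, arg1, ..., argn); variables directly under a
    predicate symbol are allowed, anything under a function symbol is not."""
    for atom in atoms:
        _, *args = atom
        for a in args:
            if not isinstance(a, str) and occurs_below_function(a, uvars):
                return False
    return True

assert clause_in_epr({"y"}, [("P", "y", "c")])        # forall y. P(y, c)
assert not clause_in_epr({"y"}, [("P", ("f", "y"))])  # forall y. P(f(y))
```

Under this condition only finitely many ground instances exist (variables can only be replaced by the existing constants), which is the intuition behind locality of EPR extensions.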
Since EPR is a special case of local theories, we also tried our E-matching-based algorithm on these benchmarks. We found that the staged instantiation improves performance on these as well. The optimizations that helped on the uninstantiated benchmarks also work here. These results are summarized in the same table. Overall, our experiments indicate that there is a lot of potential in the design of quantifier modules to further improve the performance of SMT solvers, and at the same time make them complete on more expressive decidable fragments. \section{Introduction} Satisfiability Modulo Theories (SMT) solvers are a cornerstone of today's verification technology. Common applications of SMT include checking verification conditions in deductive verification~\cite{DBLP:conf/lpar/Leino10, DBLP:conf/esop/FilliatreP13}, computing program abstractions in software model checking~\cite{DBLP:conf/fmcad/McMillan11, DBLP:journals/jar/BrilloutKRW11, DBLP:conf/cav/AlbarghouthiLGC12}, and synthesizing code fragments in software synthesis~\cite{DBLP:conf/cav/BodikT12, DBLP:conf/popl/BeyeneCPR14}. Ultimately, all these tasks can be reduced to satisfiability of formulas in certain first-order theories that model the semantics of prevalent data types and software constructs, such as integers, bitvectors, and arrays. The appeal of SMT solvers is that they implement decision procedures for efficiently reasoning about formulas in these theories. Thus, they can often be used off the shelf as automated back-end solvers in verification tools. Some verification tasks involve reasoning about universally quantified formulas, which goes beyond the capabilities of the solvers' core decision procedures. Typical examples include verification of programs with complex data structures and concurrency, yielding formulas that quantify over unbounded sets of memory locations or thread identifiers.
From a logical perspective, these quantified formulas can be thought of as axioms of application-specific theories. In practice, such theories often remain within decidable fragments of first-order logic~\cite{DBLP:journals/jar/BrilloutKRW11, DBLP:conf/atva/BouajjaniDES12, DBLP:conf/tacas/AlbertiGS14, DBLP:conf/popl/ItzhakyBILNS14}. However, their narrow scope (which is typically restricted to a specific program) does not justify the implementation of a dedicated decision procedure inside the SMT solver. Instead, many solvers allow theory axioms to be specified directly in the input constraints. The solver then provides a quantifier module that is designed to heuristically instantiate these axioms. These heuristics are in general incomplete and the user is given little control over the instance generation. Thus, even if there exists a finite instantiation strategy that yields a decision procedure for a specific set of axioms, the communication of strategies and tactics to SMT solvers is a challenge~\cite{DBLP:conf/birthday/MouraP13}. Further, the user cannot communicate the completeness of such a strategy. In this situation, the user is left with two alternatives: either she gives up on completeness, which may lead to usability issues in the verification tool, or she implements her own instantiation engine as a preprocessor to the SMT solver, leading to duplication of effort and reduced solver performance. The contributions of this paper are two-fold. First, we provide a better understanding of how complete decision procedures for application-specific theories can be realized with the quantifier modules that are implemented in SMT solvers. Second, we explore several extensions of the capabilities of these modules to better serve the needs of verification tool developers. The focus of our exploration is on \emph{local theory extensions}~\cite{SS05, IhlemannETAL08LocalReasoninginVerification}. 
A theory extension extends a given base theory with additional symbols and axioms. Local theory extensions are a class of such extensions that can be decided using finite quantifier instantiation of the extension axioms. This class is attractive because it is characterized by proof-theoretic and model-theoretic properties that abstract from the intricacies of specific quantifier instantiation techniques~\cite{G01,SS05, DBLP:conf/frocos/HorbachS13}. Also, many well-known theories that are important in verification but not commonly supported by SMT solvers are in fact local theory extensions, even if they have not been presented as such in the literature. Examples include the array property fragment~\cite{DBLP:conf/vmcai/BradleyMS06}, the theory of reachability in linked lists~\cite{DBLP:conf/vmcai/RakamaricBH07, DBLP:conf/popl/LahiriQ08}, and the theories of finite sets~\cite{DBLP:conf/birthday/Zarba03} and multisets~\cite{Zarba02CombiningMultisetsIntegers}. We present a general decision procedure for local theory extensions that relies on E-matching, one of the core components of the quantifier modules in SMT solvers. We have implemented our decision procedure using the SMT solvers \tool{CVC4}~\cite{conf/cav/BarrettCDHJKRT11} and \tool{Z3}~\cite{MouraBjoerner08Z3} and applied it to a large set of SMT benchmarks coming from the deductive software verification tool \tool{GRASShopper}~\cite{DBLP:conf/cav/PiskacWZ13, grasshopper}. These benchmarks use a hierarchical combination of local theory extensions to encode verification conditions that express correctness properties of programs manipulating complex heap-allocated data structures. Guided by our experiments, we developed generic optimizations in \tool{CVC4} that improve the performance of our baseline decision procedure. Some of these optimizations required us to implement extensions in the solver's quantifier module. We believe that our results are of interest to both the users of SMT solvers and their developers.
For users we provide simple ways of realizing complete decision procedures for application-specific theories with today's SMT solvers. For developers we provide interesting insights that can help them further improve the completeness and performance of today's quantifier instantiation modules. \paragraph{Related work.} Sofronie-Stokkermans~\cite{SS05} introduced local theory extensions as a generalization of locality in equational theories~\cite{DBLP:conf/kr/GivanM92, G01}. Further generalizations include Psi-local theories~\cite{IhlemannETAL08LocalReasoninginVerification}, which can describe arbitrary theory extensions that admit finite quantifier instantiation. The formalization of our algorithm targets local theory extensions, but we briefly describe how it can be generalized to handle Psi-locality. The original decision procedure for local theory extensions presented in~\cite{SS05}, which is implemented in \tool{H-Pilot}~\cite{DBLP:conf/cade/IhlemannS09}, eagerly generates all instances of extension axioms upfront, before the base theory solver is called. As we show in our experiments, eager instantiation is prohibitively expensive for many local theory extensions that are of interest in verification because it results in a high-degree polynomial blowup in the problem size. In~\cite{Jacobs09}, Swen Jacobs proposed an incremental instantiation algorithm for local theory extensions. The algorithm is a variant of model-based quantifier instantiation (MBQI). It uses the base theory solver to incrementally generate partial models from which relevant axiom instances are extracted. The algorithm was implemented as a plug-in to \tool{Z3} and experiments showed that it helps to reduce the overall number of axiom instances that need to be considered. However, the benchmarks were artificially generated.
Jacobs's algorithm is orthogonal to ours, as the focus of this paper is on how to use SMT solvers for deciding local theory extensions without adding substantial new functionality to the solvers. A combination with this approach is feasible, as we discuss in more detail below. Other variants of MBQI include its use in the context of finite model finding~\cite{ReyEtAl-CADE-13}, and the algorithm described in \cite{GM09}, which is implemented in \tool{Z3}. This algorithm is complete for the so-called almost uninterpreted fragment of first-order logic. While this fragment is not sufficiently expressive for the local theory extensions that appear in our benchmarks, it includes important fragments such as Effectively Propositional Logic (EPR). In fact, we have also experimented with a hybrid approach that uses our E-matching-based algorithm to reduce the benchmarks first to EPR and then solves them with \tool{Z3}'s MBQI algorithm. E-matching was first described in~\cite{Nelson:1980:TPV:909447}, and has since been implemented in various SMT solvers~\cite{MB07, GBT09}. In practice, user-provided \emph{triggers} can be given as hints for finer-grained control over quantifier instantiations in these implementations. More recent work~\cite{Dross2012} has made progress towards formalizing the semantics of triggers for the purposes of specifying decision procedures for a number of theories. A more general but incomplete technique~\cite{reynolds14quant_fmcad} addresses the prohibitively large number of instantiations produced by E-matching by prioritizing instantiations that lead to ground conflicts. \section{Preliminaries} \label{sec:prelim} In the following, we define the syntax and semantics of formulas. \paragraph{Sorted first-order logic.} We present our problem in sorted first-order logic with equality. A \emph{signature} $\Sigma$ is a tuple $(\sorts, \Omega, \Pi)$, where $\sorts$ is a countable set of sorts and $\Omega$ and $\Pi$ are countable sets of function and predicate symbols, respectively. Each function symbol $f \in \Omega$ has an associated arity $n \geq 0$ and an associated sort $s_1 \times \dots \times s_n \rightarrow s_0$ with $s_i \in \sorts$ for all $i \leq n$. Function symbols of arity 0 are called \emph{constant symbols}. Similarly, predicate symbols $P \in \Pi$ have an arity $n \geq 0$ and a sort $s_1 \times \dots \times s_n$. We assume dedicated equality symbols $\approx_s \, \in \Pi$ with the sort $s \times s$ for all sorts $s \in \sorts$, though we typically drop the explicit subscript. Terms are built from the function symbols in $\Omega$ and (sorted) variables taken from a countably infinite set $X$ that is disjoint from $\Omega$. We denote by $t:s$ that term $t$ has sort $s$. A $\Sigma$-atom $A$ is of the form $P(t_1,\dots,t_n)$ where $P \in \Pi$ is a predicate symbol of sort $s_1 \times \dots \times s_n$ and the $t_i$ are terms with $t_i: s_i$.
A $\Sigma$-\emph{formula} $F$ is either a $\Sigma$-atom $A$, $\lnot F_1$, $F_1 \land F_2$, $F_1 \lor F_2$, or $\forall x : s. F_1$ where $F_1$ and $F_2$ are $\Sigma$-formulas. A $\Sigma$-\emph{literal} $L$ is either $A$ or $\lnot A$ for a $\Sigma$-atom $A$. A $\Sigma$-\emph{clause} $C$ is a disjunction of $\Sigma$-literals. A $\Sigma$-term, atom, or formula is said to be \emph{ground} if no variable appears in it. For a set of formulas $\mathcal{K}$, we denote by $\m{st}(\mathcal{K})$ the set of all ground subterms that appear in $\mathcal{K}$. A $\Sigma$-sentence is a $\Sigma$-formula with no free variables, where the free variables of a formula are defined in the standard fashion. We typically omit $\Sigma$ if it is clear from the context. \paragraph{Structures.} Given a signature $\Sigma=(\sorts,\Omega, \Pi)$, a \emph{$\Sigma$-structure} $M$ is a function that maps each sort $s \in \sorts$ to a non-empty set $M(s)$, each function symbol $f \in \Omega$ of sort $s_1 \times \dots \times s_n \rightarrow s_0$ to a function $M(f): M(s_1) \times \dots \times M(s_n) \to M(s_0)$, and each predicate symbol $P \in \Pi$ of sort $s_1 \times \dots \times s_n$ to a relation $M(P) \subseteq M(s_1) \times \dots \times M(s_n)$. We assume that all structures $M$ interpret each symbol $\approx_s$ by the equality relation on $M(s)$. For a $\Sigma$-structure $M$ where $\Sigma$ extends a signature $\Sigma_0$ with additional sorts and function symbols, we write $\restrict{M}{\Sigma_0}$ for the $\Sigma_0$-structure obtained by restricting $M$ to $\Sigma_0$. Given a structure $M$ and a \emph{variable assignment} $\nu : X \rightarrow M$, the evaluation $t^{M,\nu}$ of a term $t$ in $M,\nu$ is defined as usual.
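For concreteness, the usual inductive definition of $t^{M,\nu}$ can be sketched in code. The encoding below (variables as strings, applications as tuples, a structure as a map from function symbols to functions) is our own illustrative choice and elides sorts.

```python
# Illustrative evaluation of a term t in a structure M under assignment nu,
# mirroring the inductive definition of t^{M,nu}.  A term is a variable
# name (str) or a tuple (f, t1, ..., tn); M maps each function symbol to a
# Python callable of matching arity.
def evaluate(term, M, nu):
    if isinstance(term, str):
        return nu[term]                     # variables via the assignment
    f, *args = term
    return M[f](*(evaluate(a, M, nu) for a in args))

# A toy one-sorted structure over the integers.
M = {"zero": lambda: 0, "succ": lambda x: x + 1, "plus": lambda x, y: x + y}
nu = {"x": 3}
assert evaluate(("plus", "x", ("succ", ("zero",))), M, nu) == 4
```

Satisfaction of an atom then just checks membership of the evaluated argument tuple in the interpreted relation, exactly as in the definition that follows.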
For a structure $M$ and an atom $A$ of the form $P(t_1,\dots,t_n)$, $(M, \nu)$ satisfies $A$ iff $(t_1^{M,\nu}, \dots, t_n^{M,\nu}) \in M(P)$. This is written as $(M, \nu) \models A$. From this satisfaction relation of atoms and $\Sigma$-structures, we can derive the standard notions of the satisfiability of a formula, satisfaction of a set of formulas $(M, \nu) \models \{F_i\}$, validity $\models F$, and entailment $F_1 \models F_2$. If a $\Sigma$-structure $M$ satisfies a $\Sigma$-sentence $F$, we call $M$ a model of $F$. \paragraph{Theories and theory extensions.} \label{sec:prelim:theory} A \emph{theory} $\mathcal{T}$ over signature $\Sigma$ is a set of $\Sigma$-structures. We call a $\Sigma$-sentence $K$ an \emph{axiom} if it is the universal closure of a $\Sigma$-clause, and we denote a set of $\Sigma$-axioms as $\mathcal{K}$. We consider theories $\mathcal{T}$ defined as the class of $\Sigma$-structures that are models of a given set of $\Sigma$-sentences $\mathcal{K}$. Let $\Sigma_0=(\sorts_0,\Omega_0,\Pi)$ be a signature and assume that the signature $\Sigma_1=(\sorts_0 \cup \sorts_e,\Omega_0 \cup \Omega_e, \Pi)$ extends $\Sigma_0$ by new sorts $\sorts_e$ and function symbols $\Omega_e$. We call the elements of $\Omega_e$ \emph{extension symbols} and terms starting with extension symbols \emph{extension terms}. Given a $\Sigma_0$-theory $\mathcal{T}_0$ and $\Sigma_1$-axioms $\axioms_e$, we call $(\mathcal{T}_0,\axioms_e, \mathcal{T}_1)$ the \emph{theory extension} of $\mathcal{T}_0$ with $\axioms_e$, where $\mathcal{T}_1$ is the set of all $\Sigma_1$-structures $M$ that are models of $\axioms_e$ and whose reducts $\restrict{M}{\Sigma_0}$ are in $\mathcal{T}_0$. We often identify the theory extension with the theory $\mathcal{T}_1$. \section{Problem} In this section, we formally define the satisfiability modulo theory problem and the notion of local theory extensions. Let $\mathcal{T}$ be a theory over signature $\Sigma$. Given a $\Sigma$-formula $\phi$, we say $\phi$ is satisfiable modulo $\mathcal{T}$ if there exists a structure $M$ in $\mathcal{T}$ and an assignment $\nu$ of the variables in $\phi$ such that $(M, \nu) \models \phi$. We define the ground satisfiability modulo theory problem as the corresponding decision problem for quantifier-free formulas.
\begin{problem}[Ground satisfiability problem for $\Sigma$-theory $\mathcal{T}$] \label{prob:smt} \begin{description} \item[input:] A quantifier-free $\Sigma$-formula $\phi$. \item[output:] $\textsf{sat}$ if $\phi$ is satisfiable modulo $\mathcal{T}$, $\textsf{unsat}$ otherwise. \end{description} \end{problem} We say the satisfiability problem for $\mathcal{T}$ is \emph{decidable} if there exists a procedure for the above problem which always terminates with $\textsf{sat}$ or $\textsf{unsat}$. We write entailment modulo a theory as $\phi \models_\mathcal{T} \psi$. We say an axiom of a theory extension is \emph{linear} if all the variables occur under at most one extension term. We say it is \emph{flat} if there is no nesting of terms containing variables. It is easy to linearize and flatten the axioms by using additional variables and equalities. As an example, $\forall x. F$ with $f(x)$ and $f(g(x))$ as terms in $F$ may be written as \[\forall x y z.\; x \approx y \wedge z \approx g(y) \rightarrow F' \] where $F'$ is obtained from $F$ by replacing $f(g(x))$ with $f(z)$. For the remainder of the paper, we assume that all extension axioms $\axioms_e$ are flat and linear. For simplicity of presentation, we assume that if a variable appears below a function symbol, then that symbol must be an extension symbol. \begin{definition}[Local theory extensions] \label{def:lte} A theory extension $({\Theory_0}, \axioms_e, {\Theory_1})$ is \emph{local} if for any set of ground $\Sigma_1$-literals $G$: $G$ is satisfiable modulo ${\Theory_1}$ if and only if $G \cup \axioms_e[G]$ is satisfiable modulo ${\Theory_0}$ extended with free function symbols. Here $\axioms_e[G]$ is the set of instances of $\axioms_e$ in which all subterms of the instantiation are subterms of $G$ or of $\axioms_e$ (in other words, the instances do not introduce new terms).
\end{definition} For simplicity, in the rest of this paper we work with theories ${\Theory_0}$ that have decision procedures not just for ${\Theory_0}$ but also for ${\Theory_0}$ extended with free function symbols. Thus, we sometimes speak of satisfiability of a $\Sigma_1$-formula with respect to a $\Sigma_0$-theory ${\Theory_0}$, meaning satisfiability in ${\Theory_0}$ with the extension symbols in $\Sigma_1$ treated as free function symbols. In terms of SMT, we only consider extensions of theories containing uninterpreted functions (\texttt{UF}). A naive decision procedure for ground SMT of a local theory extension ${\Theory_1}$ is thus to generate all possible instances of the axioms $\axioms_e$ that do not introduce new ground terms, thereby reducing to the ground SMT problem of ${\Theory_0}$ extended with free functions. \medskip \noindent \emph{Hierarchical extensions.} Note that local theory extensions can be stacked to form hierarchies \[((\dots((\mathcal{T}_0, \mathcal{K}_1, \mathcal{T}_1), \mathcal{K}_2, \mathcal{T}_2),\dots), \mathcal{K}_n, \mathcal{T}_n).\] Such a hierarchical arrangement of extension axioms is often useful to modularize locality proofs. In such cases, the condition that variables are only allowed to occur below extension symbols (of the current extension) can be relaxed to any extension symbol of the current level or below. The resulting theory extension can be decided by composing procedures for the individual extensions. Alternatively, one can use a monolithic decision procedure for the resulting theory $\mathcal{T}_n$, which can also be viewed as a single local theory extension $(\mathcal{T}_0, \mathcal{K}_1 \cup \dots \cup \mathcal{K}_n, \mathcal{T}_n)$. In our experimental evaluation, which involved hierarchical extensions, we followed the latter approach.
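The naive reduction for a local extension can be sketched as follows. The representation of axioms as (variables, body) pairs is purely illustrative, and the sketch only enumerates the candidate ground substitutions rather than invoking an actual ${\Theory_0}$ solver; a real implementation would also filter out instances whose subterms are not subterms of $G$ or of the axioms.

```python
from itertools import product

# Sketch of the naive reduction: instantiate every flat, linear extension
# axiom with tuples of ground subterms of G (so no new terms are created)
# and hand G together with these ground instances to a solver for T0 plus
# free function symbols.  An axiom is a (variables, body) pair; the body
# stays opaque here.
def local_instances(axioms, ground_subterms):
    instances = []
    for variables, body in axioms:
        for tup in product(ground_subterms, repeat=len(variables)):
            instances.append((dict(zip(variables, tup)), body))
    return instances

# One 2-variable axiom over 3 ground subterms yields 3**2 = 9 instances.
axioms = [(("x", "y"), "x ≈ y → f(x) ≈ f(y)")]
insts = local_instances(axioms, ["a", "b", "g(a)"])
assert len(insts) == 9
```

The quadratic count in this toy run is exactly the polynomial blowup that the incremental E-matching-based procedure of this paper is designed to avoid.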
\section{Introduction} This paper was motivated by problems posed by Ie.~Lutsenko and I.V.~Protasov in a preliminary version of the paper \cite{LP2} devoted to relatively thin sets in groups. Following \cite{LP1}, we say that a subset $A$ of a group $G$ is {\em thin} if for any distinct points $x,y\in G$ the intersection $xA\cap yA$ is finite. In \cite{LP2} (following the approach of \cite{BL}) Lutsenko and Protasov generalized the notion of a thin set to that of $\mathcal F$-thin set where $\mathcal F$ is a family of subsets of $G$. By $\mathcal P_G$ we shall denote the Boolean algebra of all subsets of the group $G$. We shall say that a family $\mathcal F\subset\mathcal P_G$ is \begin{itemize} \item {\em left-invariant} if $xF\in\mathcal F$ for all $F\in\mathcal F$ and $x\in G$, and \item {\em additive} if $A\cup B\in\mathcal F$ for all $A,B\in\mathcal F$; \item {\em lower} if $A\in\mathcal F$ for any $A\subset B\in\mathcal F$; \item {\em an ideal} if $\mathcal F$ is lower and additive. \end{itemize} Let $\mathcal F\subset\mathcal P_G$ be a left-invariant lower family of subsets of a group $G$. A subset $A\subset G$ is defined to be {\em $\mathcal F$-thin} if for any distinct points $x,y\in G$ we get $xA\cap yA\in\mathcal F$. The family of all $\mathcal F$-thin subsets of $G$ will be denoted by $\tau(\mathcal F)$. It is clear that $\tau(\mathcal F)$ is a left-invariant lower family of subsets of $G$ and $\mathcal F\subset\tau(\mathcal F)$. If $\tau(\mathcal F)=\mathcal F$, then the family $\mathcal F$ will be called {\em thin-complete}. \smallskip Let $\tau^*(\mathcal F)$ be the intersection of all thin-complete families $\tilde \mathcal F\subset\mathcal P_G$ that contain $\mathcal F$. It is clear that $\tau^*(\mathcal F)$ is the smallest thin-complete family containing $\mathcal F$. This family is called the {\em thin-completion} of $\mathcal F$. The family $\tau^*(\mathcal F)$ has an interesting hierarchic structure that can be described as follows. 
Let $\tau^0(\mathcal F)=\mathcal F$ and for each ordinal $\alpha$ let $\tau^\alpha(\mathcal F)$ be the family of all sets $A\subset G$ such that for any distinct points $x,y\in G$ we get $xA\cap yA\in \bigcup_{\beta<\alpha}\tau^{\beta}(\mathcal F)$. So,
$$\tau^\alpha(\mathcal F)=\tau(\tau^{<\alpha}(\mathcal F))\mbox{ \ where \ }\tau^{<\alpha}(\mathcal F)=\bigcup_{\beta<\alpha}\tau^\beta(\mathcal F).$$
By Proposition 3 of \cite{LP2}, $\tau^*(\mathcal F)=\bigcup\limits_{\alpha<|G|^+}\tau^\alpha(\mathcal F)$.

The following theorem (which will be proved in Section~\ref{s2}) answers the problem of a combinatorial characterization of the family $\tau^*(\mathcal F)$ posed by Ie.~Lutsenko and I.V.~Protasov. Below by $e$ we denote the neutral element of the group $G$.

\begin{theorem}\label{char-t} Let $\mathcal F\subset\mathcal P_G$ be a left-invariant lower family of subsets of a group $G$. A subset $A\subset G$ belongs to the family $\tau^*(\mathcal F)$ if and only if for any sequence $(g_n)_{n\in\omega}\in (G\setminus\{e\})^\omega$ there is a number $n\in\omega$ such that
$$\bigcap_{k_0,\dots,k_{n}\in\{0,1\}}g_0^{k_0}\cdots g_n^{k_n}A\in\mathcal F.$$
\end{theorem}

We recall that a family $\mathcal F\subset\mathcal P_G$ is called {\em additive} if $\{A\cup B:A,B\in\mathcal F\}\subset\mathcal F$. It is clear that the family $\mathcal F_G$ of finite subsets of a group $G$ is additive. If $G$ is an infinite Boolean group, then the family $\tau^*(\mathcal F_G)=\tau(\mathcal F_G)$ is not additive, see Remark 3 in \cite{LP2}. For torsion-free groups the situation is totally different. Let us recall that a group $G$ is {\em torsion-free} if each non-identity element of $G$ has infinite order.

\begin{theorem}\label{ideal} For a torsion-free group $G$ and a left-invariant ideal $\mathcal F\subset\mathcal P_G$ the family $\tau^{<\alpha}(\mathcal F)$ is additive for any limit ordinal $\alpha$.
In particular, the thin-completion $\tau^*(\mathcal F)$ of $\mathcal F$ is an ideal in $\mathcal P_G$.
\end{theorem}

We define a subset of a group $G$ to be {\em $*$-thin} if it belongs to the thin-completion $\tau^*(\mathcal F_G)$ of the family $\mathcal F_G$ of all finite subsets of the group $G$. By Proposition 3 of \cite{LP2}, for each countable group $G$ we get $\tau^*(\mathcal F_G)=\tau^{<\omega_1}(\mathcal F_G)$. It is natural to ask if the equality $\tau^*(\mathcal F_G)=\tau^{<\alpha}(\mathcal F_G)$ can happen for some ordinal $\alpha<\omega_1$. If the group $G$ is Boolean, then the answer is affirmative: $\tau^*(\mathcal F_G)=\tau^1(\mathcal F_G)$ according to Theorem 1 of \cite{LP2}. The situation is different for non-torsion groups:

\begin{theorem}\label{m2} If an infinite group $G$ contains an abelian torsion-free subgroup $H$ of cardinality $|H|=|G|$, then $\tau^*(\mathcal F_G)\ne\tau^{\alpha}(\mathcal F_G)\ne\tau^{<\alpha}(\mathcal F_G)$ for each ordinal $\alpha<|G|^+$.
\end{theorem}

Theorems~\ref{ideal} and \ref{m2} will be proved in Sections~\ref{s:ideal} and \ref{s5}, respectively.

In Section~\ref{s6} we shall study the Borel complexity of the family $\tau^*(\mathcal F_G)$ for a countable group $G$. In this case the power-set $\mathcal P_G$ carries a natural compact metrizable topology, so we can talk about topological properties of subsets of $\mathcal P_G$.

\begin{theorem} For a countable group $G$ and a countable ordinal $\alpha$ the subset $\tau^\alpha(\mathcal F_G)$ of $\mathcal P_G$ is Borel, while $\tau^*(\mathcal F_G)=\tau^{<\omega_1}(\mathcal F_G)$ is coanalytic. If $G$ contains an element of infinite order, then the space $\tau^*(\mathcal F_G)$ is coanalytic but not analytic.
\end{theorem}

\section{Preliminaries on well-founded posets and trees}

In this section we collect the necessary information on well-founded posets and trees. A {\em poset} is an abbreviation for a {\em partially ordered set}.
A poset $(X,\le)$ is {\em well-founded} if each non-empty subset $A\subset X$ has a maximal element $a\in A$ (this means that each element $x\in A$ with $x\ge a$ is equal to $a$). In a well-founded poset $(X,\le)$ to each point $x\in X$ we can assign the ordinal $\mathrm{rank}_X(x)$ defined by the recursive formula
$$\mathrm{rank}_X(x)=\sup\{\mathrm{rank}_X(y)+1:y>x\},$$
where $\sup\emptyset=0$. Thus maximal elements of $X$ have rank $0$, their immediate predecessors have rank $1$, and so on. If $X$ is not empty, then the ordinal $\mathrm{rank}(X)=\sup\{\mathrm{rank}_X(x)+1:x\in X\}$ is called the {\em rank} of the poset $X$. In particular, a one-element poset has rank $1$. If $X$ is empty, then we put $\mathrm{rank}(X)=0$.
\smallskip

A {\em tree} is a poset $(T,\le)$ with the smallest element $\emptyset_T$ such that for each $t\in T$ the lower set ${\downarrow}t=\{s\in T:s\le t\}$ is well-ordered in the sense that each non-empty subset $A\subset{\downarrow}t$ has the smallest element. A {\em branch} of a tree $T$ is any maximal linearly ordered subset of $T$. If a tree is well-founded, then all its branches are finite. A subset $S\subset T$ of a tree is called a {\em subtree} if it is a tree with respect to the induced partial order. A subtree $S\subset T$ is {\em lower} if $S={\downarrow}S=\{t\in T:\exists s\in S\;\;t\le s\}$.

All trees that appear in this paper are (lower) subtrees of the tree $X^{<\omega}=\bigcup_{n\in\omega}X^n$ of finite sequences of elements of a set $X$. The tree $X^{<\omega}$ carries the following partial order:
$$(x_0,\dots,x_n)\le (y_0,\dots,y_m) \mbox{ iff $n\le m$ and $x_i=y_i$ for all $i\le n$.}$$
The empty sequence $s_\emptyset\in X^0$ is the smallest element (the root) of the tree $X^{<\omega}$. For a finite sequence $s=(x_0,\dots,x_n)\in X^{<\omega}$ and an element $x\in X$ by $s\hat{\;}x=(x_0,\dots,x_n,x)$ we denote the concatenation of $s$ and $x$. So, $s\hat{\;}x$ is one of $|X|$ many immediate successors of $s$.
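For finite trees the recursive formula for the rank can be evaluated directly. The following Python sketch (purely illustrative, not part of the paper; the names \verb|node_rank| and \verb|tree_rank| are ours) computes $\mathrm{rank}_T(s)$ and $\mathrm{rank}(T)$ for a finite lower subtree of $X^{<\omega}$ represented as a set of tuples.

```python
# rank_T(x) = sup{ rank_T(y) + 1 : y > x }, with sup(empty set) = 0.
# In the tree X^{<omega}, y > x means that y properly extends x.
def node_rank(tree, node):
    successors = [t for t in tree
                  if len(t) > len(node) and t[:len(node)] == node]
    return max((node_rank(tree, t) + 1 for t in successors), default=0)

def tree_rank(tree):
    # rank(T) = sup{ rank_T(x) + 1 : x in T }; rank of the empty tree is 0.
    return max((node_rank(tree, t) + 1 for t in tree), default=0)

# A lower subtree of {0,1}^{<omega} with a single branch of length 2:
T = {(), (0,), (0, 1)}
print(tree_rank(T))  # the root has rank 2, so rank(T) = 3
```

In accordance with the conventions above, a one-element tree $\{s_\emptyset\}$ gets rank $1$ and the empty tree gets rank $0$.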
The set of all branches of $X^{<\omega}$ can be naturally identified with the countable power $X^\omega$. For each branch $s=(s_n)_{n\in\omega}\in X^\omega$ and $n\in\omega$ by $s|n=(s_0,\dots,s_{n-1})$ we denote the initial interval of length $n$.

Let $\mathsf{Tr}\subset \mathcal P_{X^{<\omega}}$ denote the family of all lower subtrees of the tree $X^{<\omega}$ and let $\mathsf{WF}\subset\mathsf{Tr}$ be the subset consisting of all well-founded lower subtrees of $X^{<\omega}$. In Section~\ref{s6} we shall exploit some deep facts about the descriptive properties of the sets $\mathsf{WF}\subset\mathsf{Tr}\subset\mathcal P_{X^{<\omega}}$ for a countable set $X$. In this case the tree $X^{<\omega}$ is countable and the power-set $\mathcal P_{X^{<\omega}}$ carries a natural compact metrizable topology of the Tychonoff power $2^{X^{<\omega}}$. So, we can speak about topological properties of the subsets $\mathsf{WF}$ and $\mathsf{Tr}$ of the compact metrizable space $\mathcal P_{X^{<\omega}}$.

We recall that a topological space $X$ is {\em Polish} if $X$ is homeomorphic to a separable complete metric space. A subset $A$ of a Polish space $X$ is called
\begin{itemize}
\item {\em Borel} if $A$ belongs to the smallest $\sigma$-algebra that contains all open subsets of $X$;
\item {\em analytic} if $A$ is the image of a Polish space $P$ under a continuous map $f:P\to A$;
\item {\em coanalytic} if $X\setminus A$ is analytic.
\end{itemize}
By Souslin's Theorem (see Theorem 14.11 of \cite{Ke}), a subset of a Polish space is Borel if and only if it is both analytic and coanalytic. By $\Sigma^1_1$ and $\Pi^1_1$ we denote the classes of spaces homeomorphic to analytic and coanalytic subsets of Polish spaces, respectively. A coanalytic subset $X$ of a compact metric space $K$ is called {\em $\Pi^1_1$-complete} if for each coanalytic subset $C$ of the Cantor cube $2^\omega$ there is a continuous map $f:2^\omega\to K$ such that $f^{-1}(X)=C$.
It follows from the existence of a coanalytic non-Borel set in $2^\omega$ that each $\Pi^1_1$-complete set $X\subset K$ is non-Borel. The following deep theorem is classical and belongs to Lusin, see \cite[32.B and 35.23]{Ke}.

\begin{theorem}\label{WF} Let $X$ be a countable set.
\begin{enumerate}
\item The subspace $\mathsf{Tr}$ is closed (and hence compact) in $\mathcal P_{X^{<\omega}}$.
\item The set of well-founded trees $\mathsf{WF}$ is $\Pi^1_1$-complete in $\mathsf{Tr}$. In particular, $\mathsf{WF}$ is coanalytic but not analytic (and hence not Borel).
\item For each ordinal $\alpha<\omega_1$ the subset $\mathsf{WF}_\alpha=\{T\in\mathsf{WF}:\mathrm{rank}(T)\le\alpha\}$ is Borel in $\mathsf{Tr}$.
\item Each analytic subspace of $\mathsf{WF}$ lies in $\mathsf{WF}_\alpha$ for some ordinal $\alpha<\omega_1$.
\end{enumerate}
\end{theorem}

\section{Combinatorial characterization of $*$-thin subsets}\label{s2}

In this section we prove Theorem~\ref{char-t}. Let $\mathcal F\subset\mathcal P_G$ be a left-invariant lower family of subsets of a group $G$. Theorem~\ref{char-t} trivially holds if $\mathcal F=\mathcal P_G$ (which happens if and only if $G\in\mathcal F$). So, it remains to consider the case $G\notin\mathcal F$.

Let $e$ be the neutral element of $G$ and $G_\circ=G\setminus\{e\}$. We shall work with the tree $G_\circ^{<\omega}$ discussed in the preceding section. Let $A$ be any subset of $G$. To each finite sequence $s\in G_\circ^{<\omega}$ assign the set $A_s\subset G$ defined by induction: $A_\emptyset=A$ and $A_{s\hat{\,}x}=A_s\cap xA_s$ for $s\in G_\circ^{<\omega}$ and $x\in G_\circ$.
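To see the recursion at work on a concrete example, consider the additive group $\mathbb Z$, where the translate $xA$ is the set $x+A$. The following Python sketch (ours, not part of the paper; the choice of $A$ and the helper name \verb|A_sub| are purely illustrative) computes the sets $A_s$ on a finite window of the thin set $\{2^n:n\in\omega\}$.

```python
# A_emptyset = A and A_{s^x} = A_s  intersected with  x A_s,
# written additively for the group G = Z.
def A_sub(A, s):
    """Compute A_s for a finite sequence s of non-zero integers."""
    result = set(A)
    for g in s:
        result &= {g + a for a in result}   # A_s intersect (g + A_s)
    return result

A = {2 ** n for n in range(12)}  # finite window of {2^n : n >= 0}, thin in Z
print(A_sub(A, (1,)))    # {2}: a = 2 is the only element of A with a - 1 in A
print(A_sub(A, (1, 2)))  # set(): the second step already empties the set
```

Unfolding the loop for a sequence $(g_0,g_1)$ gives $A\cap g_0A\cap g_1A\cap g_1g_0A$, a fourfold intersection of translates of $A$.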
Repeating the inductive argument of the proof of Proposition 2 of \cite{LP2}, we can obtain the following direct description of the sets $A_s$:

\begin{claim} For every sequence $s=(g_0,\dots,g_n)\in G_\circ^{<\omega}$
$$A_s=\bigcap_{k_0,\dots,k_n\in\{0,1\}}g_0^{k_0}\cdots g_n^{k_n}A.$$
\end{claim}

The set $$T_A=\{s\in G_\circ^{<\omega}:A_s\notin\mathcal F\}$$ is a subtree of $G_\circ^{<\omega}$ called the {\em $\tau$-tree} of the set $A$.

For a non-zero ordinal $\alpha$ let $-1+\alpha$ be the unique ordinal $\beta$ such that $1+\beta=\alpha$. For $\alpha=0$ we put $-1+\alpha=-1$. It follows that $-1+\alpha=\alpha$ for each infinite ordinal $\alpha$.

\begin{theorem}\label{t1.2} A set $A\subset G$ belongs to the family $\tau^{\alpha}(\mathcal F)$ for some ordinal $\alpha$ if and only if its $\tau$-tree $T_A$ is well-founded and has $\mathrm{rank}(T_A)\le-1+\alpha+1$.
\end{theorem}

\begin{proof} By induction on $\alpha$. Observe that $A\in\tau^{0}(\mathcal F)=\mathcal F$ if and only if $T_A=\emptyset$ if and only if $\mathrm{rank}(T_A)=0=-1+0+1$. So, the theorem holds for $\alpha=0$.

Assume that for some ordinal $\alpha>0$ and any ordinal $\beta<\alpha$ we know that a set $A\subset G$ belongs to $\tau^{\beta}(\mathcal F)$ if and only if $T_A$ is a well-founded tree with $\mathrm{rank}(T_A)\le-1+\beta+1$. Given a subset $A\subset G$, we should check that $A\in\tau^{\alpha}(\mathcal F)$ if and only if its $\tau$-tree $T_A$ is well-founded and has $\mathrm{rank}(T_A)\le-1+\alpha+1$.

First assume that $A\in\tau^{\alpha}(\mathcal F)$. Then for every $x\in G_\circ$ the set $A\cap xA$ belongs to $\tau^{\beta_x}(\mathcal F)\subset\tau^{<\alpha}(\mathcal F)$ for some ordinal $\beta_x<\alpha$. By the inductive assumption, the $\tau$-tree $T_{A\cap xA}$ is well-founded and has $\mathrm{rank}(T_{A\cap xA})\le-1+\beta_x+1$. If $A\in\tau(\mathcal F)$, then $T_A\subset\{s_\emptyset\}$ and $\mathrm{rank}(T_A)\le 1\le-1+\alpha+1$. So, we can assume that $A\notin\tau(\mathcal F)$.
In this case each point $x\in G_\circ$, considered as a sequence $(x)\in G_\circ^1$ of length 1, belongs to the $\tau$-tree $T_A$ of the set $A$. So we can consider the upper set $T_A(x)=\{s\in T_A:s\ge x\}$ and observe that the subtree $T_A(x)$ of $T_A$ is isomorphic to the $\tau$-tree $T_{A\cap xA}$ of the set $A\cap xA$ and hence $\mathrm{rank}(T_A(x))=\mathrm{rank}(T_{A\cap xA})\le-1+\beta_x+1$. It follows that
$$
\begin{aligned}
\mathrm{rank}(T_A)&=\mathrm{rank}_{T_A}(s_\emptyset)+1=\big(\sup_{x\in G_\circ}(\mathrm{rank}_{T_A}(x)+1)\big)+1=\\
&=\big(\sup_{x\in G_\circ}\mathrm{rank}\,T_A(x)\big)+1\le\big(\sup_{x\in G_\circ}(-1+\beta_x+1)\big)+1\le -1+\alpha+1.
\end{aligned}
$$
\smallskip

Now assume conversely that the $\tau$-tree $T_A$ of $A$ is well-founded and has $\mathrm{rank}(T_A)\le-1+\alpha+1$. For each $x\in G_\circ$, find the unique ordinal $\beta_x$ such that $-1+\beta_x=\mathrm{rank}_{T_A}(x)$. It follows from
$$-1+\beta_x+2=\mathrm{rank}_{T_A}(x)+2\le\mathrm{rank}_{T_A}(s_\emptyset)+1=\mathrm{rank}(T_A)\le-1+\alpha+1$$
that $\beta_x<\alpha$. Since the subtree $T_A(x)=T_A\cap{\uparrow}x$ is isomorphic to the $\tau$-tree $T_{A\cap xA}$ of the set $A\cap xA$, we conclude that $T_{A\cap xA}$ is well-founded and has $\mathrm{rank}(T_{A\cap xA})=\mathrm{rank}(T_A(x))=\mathrm{rank}_{T_A}(x)+1=-1+\beta_x+1$. Then the inductive assumption guarantees that $A\cap xA\in\tau^{\beta_x}(\mathcal F)\subset\tau^{<\alpha}(\mathcal F)$ and hence $A\in\tau^{\alpha}(\mathcal F)$ by the definition of the family $\tau^\alpha(\mathcal F)$.
\end{proof}

As a corollary of Theorem~\ref{t1.2}, we obtain the following characterization proved in \cite{LP2}:

\begin{corollary}\label{tau-n} For every $n\in\omega$, a subset $A\subset G$ belongs to the family $\tau^n(\mathcal F)$ if and only if for each sequence $(g_i)_{i=0}^n\in G_\circ^{n+1}$ we get
$$\bigcap\limits_{k_0,\dots,k_n\in\{0,1\}}g_0^{k_0}\cdots g_n^{k_n}A\in\mathcal F.$$
\end{corollary}

Theorem~\ref{t1.2} also implies the following explicit description of the family $\tau^*(\mathcal F)$, which was announced in Theorem~\ref{char-t}:

\begin{corollary}\label{c3.4} For a subset $A\subset G$ the following conditions are equivalent:
\begin{enumerate}
\item $A\in\tau^*(\mathcal F)$;
\item the $\tau$-tree $T_A$ of $A$ is well-founded;
\item for each sequence $(g_n)_{n\in\omega}\in G_\circ^\omega$ there is $n\in\omega$ such that $(g_0,\dots,g_n)\notin T_A$;
\item for each sequence $(g_n)_{n\in\omega}\in G_\circ^\omega$ there is $n\in\omega$ such that
$$\bigcap\limits_{k_0,\dots,k_n\in\{0,1\}}g_0^{k_0}\cdots g_n^{k_n}A\in\mathcal F.$$
\end{enumerate}
\end{corollary}

\section{The additivity of the families $\tau^{<\alpha}(\mathcal F)$}\label{s:ideal}

In this section we shall prove Theorem~\ref{ideal}. Let $G$ be an infinite group and $e$ be the neutral element of $G$. For a natural number $m$ let $2^m$ denote the finite cube $\{0,1\}^m$. For vectors $\mathbf g=(g_1,\dots,g_m)\in (G\setminus\{e\})^m$ and $\mathbf x=(x_1,\dots,x_m)\in 2^m$ let
$$\mathbf g^{\mathbf x}=g_1^{x_1}\cdots g_m^{x_m}\in G.$$
A function $f:2^m\to G$ will be called {\em cubic} if there is a vector $\mathbf g=(g_1,\dots,g_m)\in (G\setminus\{e\})^m$ such that $f(x)=\mathbf g^x$ for all $x\in 2^m$.

\begin{lemma}\label{alpha} If the group $G$ is torsion-free, then for every $n\in\mathbb N$, every $m>(n-1)^2$, and every cubic function $f:2^{m}\to G$ we get $|f(2^{m})|>n$.
\end{lemma}

\begin{proof} Assume conversely that $|f(2^m)|\le n$.
Consider the set $B=\{(k_1,\dots,k_m)\in 2^m:\sum_{i=1}^mk_i=1\}$ having cardinality $|B|=m>(n-1)^2$. Since $e\notin f(B)$, we conclude that $|f(B)|\le |f(2^m)|-1\le n-1$ and hence $|f^{-1}(y)\cap B|\ge n$ for some $y\in f(B)$. Let $B_y=f^{-1}(y)\cap B$ and observe that $f(2^m)\supset \{e,y,y^2,\dots,y^{|B_y|}\}$ and thus $|f(2^m)|\ge |B_y|+1\ge n+1$, which contradicts our assumption.
\end{proof}

For every $n\in\mathbb N$ let $c(n)$ be the smallest number $m\in\mathbb N$ such that for each cubic function $f:2^m\to G$ we get $|f(2^m)|>n$. It is easy to see that $c(n)\ge n$. On the other hand, Lemma~\ref{alpha} implies that $c(n)\le (n-1)^2+1$ if $G$ is torsion-free.

For a family $\mathcal F$ and a natural number $n\in\mathbb N$, let
$$\bigvee_n\mathcal F=\{\bigcup\mathcal A:\mathcal A\subset\mathcal F,\;|\mathcal A|\le n\}.$$

\begin{lemma}\label{sum} Let $\mathcal F\subset\mathcal P_G$ be a left-invariant lower family of subsets in a torsion-free group $G$. For every $n\in\mathbb N$ we get
$$\bigvee_n\tau(\mathcal F)\subset \tau^{c(n)-1}(\bigvee_{m}\mathcal F),$$
where $m=n^{2^{c(n)}}$.
\end{lemma}

\begin{proof} Fix any $A\in\bigvee\limits_n\tau(\mathcal F)$ and write it as the union $A=A_1\cup\dots\cup A_n$ of sets $A_1,\dots,A_n\in\tau(\mathcal F)$. The inclusion $A\in\tau^{c(n)-1}(\bigvee\limits_{m}\mathcal F)$ will follow from Corollary~\ref{tau-n} as soon as we check that
$$\bigcap_{x\in 2^{c(n)}}\mathbf g^xA\in\bigvee_m\mathcal F$$
for each vector $\mathbf g\in (G\setminus\{e\})^{c(n)}$. The distributivity of intersection over union guarantees that
$$ \bigcap_{x\in 2^{c(n)}}\mathbf g^x\cdot \Big(\bigcup_{i=1}^nA_i\Big)= \bigcup_{f\in n^{2^{c(n)}}}\bigcap_{x\in 2^{c(n)}}\mathbf g^xA_{f(x)}.$$
So, the proof will be complete as soon as we check that for every function $f:2^{c(n)}\to n$ the set $\bigcap\limits_{x\in 2^{c(n)}}\mathbf g^xA_{f(x)}$ belongs to $\mathcal F$. The vector $\mathbf g\in (G\setminus\{e\})^{c(n)}$ induces the cubic function $g:2^{c(n)}\to G$, $g:x\mapsto\mathbf g^x$.
The definition of the number $c(n)$ guarantees that $|g(2^{c(n)})|>n$. The function $f:2^{c(n)}\to n$ can be thought of as a coloring of the cube $2^{c(n)}$ into $n$ colors. Since $|g(2^{c(n)})|>n$, there are two points $y,z\in 2^{c(n)}$ colored by the same color such that $g(y)\ne g(z)$. Then $\mathbf g^y=g(y)\ne g(z)=\mathbf g^z$ but $f(y)=f(z)=k$ for some $k\in n$. Consequently,
$$\bigcap_{x\in 2^{c(n)}}\mathbf g^xA_{f(x)}\subset \mathbf g^y A_k\cap \mathbf g^z A_k\in\mathcal F$$
because the set $A_k$ belongs to $\tau(\mathcal F)$.
\end{proof}

Now consider the function $c:\mathbb N\times\omega\to\omega$ defined recursively as $c(n,0)=0$ for all $n\in\mathbb N$ and $c(n,k+1)=c(n)-1+c(n^{2^{c(n)}},k)$ for $(n,k)\in\mathbb N\times\omega$. Observe that $c(n,1)=c(n)-1$ for all $n\in\mathbb N$.

\begin{lemma}\label{l4.3} If the group $G$ is torsion-free and $\mathcal F\subset\mathcal P_G$ is a left-invariant ideal, then
$$\bigvee_n\tau^k(\mathcal F)\subset\tau^{c(n,k)}(\mathcal F)$$
for all pairs $(n,k)\in\mathbb N\times\omega$.
\end{lemma}

\begin{proof} By induction on $k$. For $k=0$ the equality $\bigvee_n\tau^0(\mathcal F)=\mathcal F=\tau^{c(n,0)}(\mathcal F)$ holds because $\mathcal F$ is additive. Assume that the lemma is true for some $k\in\omega$. By Lemma~\ref{sum} and by the inductive assumption, for every $n\in\mathbb N$ we get
$$
\begin{aligned}
\bigvee_n\tau^{k+1}(\mathcal F)&=\bigvee_n\tau(\tau^k(\mathcal F))\subset\tau^{c(n)-1}\big(\bigvee_{n^{2^{c(n)}}}\tau^k(\mathcal F)\big)\subset \\
&\tau^{c(n)-1}(\tau^{c(n^{2^{c(n)}},k)}(\mathcal F))= \tau^{c(n)-1+c(n^{2^{c(n)}},k)}(\mathcal F)= \tau^{c(n,k+1)}(\mathcal F).
\end{aligned}$$
\end{proof}

Now we are able to present:

\begin{proof}[Proof of Theorem~\ref{ideal}] Assume that $G$ is a torsion-free group and $\mathcal F\subset\mathcal P_G$ is a left-invariant ideal. By transfinite induction we shall prove that for each limit ordinal $\alpha$ the family $\tau^{<\alpha}(\mathcal F)$ is additive.
For the smallest limit ordinal $\alpha=0$ the additivity of the family $\tau^0(\mathcal F)=\mathcal F$ is included in the hypothesis. Assume that for some non-zero limit ordinal $\alpha$ we have proved that the families $\tau^{<\beta}(\mathcal F)$ are additive for all limit ordinals $\beta<\alpha$. Two cases are possible:

1) $\alpha=\beta+\omega$ for some limit ordinal $\beta$. By the inductive assumption, the family $\tau^{<\beta}(\mathcal F)$ is additive. Then Lemma~\ref{l4.3} implies that the family $\tau^{<\alpha}(\mathcal F)=\tau^{<\omega}(\tau^{<\beta}(\mathcal F))$ is additive.

2) $\alpha=\sup B$ for some family $B\not\ni \alpha$ of limit ordinals. By the inductive assumption, for each limit ordinal $\beta\in B$ the family $\tau^{<\beta}(\mathcal F)$ is additive and then the union
$$\tau^{<\alpha}(\mathcal F)=\bigcup_{\beta\in B}\tau^{<\beta}(\mathcal F)$$
is additive too.

This completes the proof of the additivity of the families $\tau^{<\alpha}(\mathcal F)$ for all limit ordinals $\alpha$. Since the torsion-free group $G$ is infinite, the ordinal $\alpha=|G|^+$ is a limit ordinal and hence the family $\tau^*(\mathcal F)=\tau^{<\alpha}(\mathcal F)$ is additive. Being left-invariant and lower, the family $\tau^*(\mathcal F)$ is a left-invariant ideal in $\mathcal P_G$.
\end{proof}

\begin{remark} Theorem~\ref{ideal} is not true for an infinite Boolean group $G$. In this case Theorem 1(2) of \cite{LP2} implies that $\tau^*(\mathcal F_G)=\tau(\mathcal F_G)$. Then for any infinite thin subset $A\subset G$ and any $x\in G\setminus\{e\}$ the union $A\cup xA$ is not thin, since $(A\cup xA)\cap x(A\cup xA)=A\cup xA$ is infinite. Consequently, the family $\tau^*(\mathcal F_G)=\tau(\mathcal F_G)$ is not additive.
\end{remark}

\section{$h$-Invariant families of subsets in groups}\label{s:hinv}

Let $G$ be a group and $h:H\to K$ be an isomorphism between subgroups of $G$.
A family $\mathcal F$ of subsets of $G$ is called {\em $h$-invariant} if a subset $A\subset H$ belongs to $\mathcal F$ if and only if $h(A)\in\mathcal F$.

\begin{example} The ideal $\mathcal F_\mathbb Z$ of finite subsets of the group $\mathbb Z$ is $h_k$-invariant for each isomorphism $h_k:\mathbb Z\to k\mathbb Z$, $h_k:x\mapsto kx$, where $k\in\mathbb N$.
\end{example}

\begin{proposition}\label{hinv} Let $h:H\to K$ be an isomorphism between subgroups of a group $G$. For any $h$-invariant family $\mathcal F\subset\mathcal P_G$ and any ordinal $\alpha$ the family $\tau^\alpha(\mathcal F)$ is $h$-invariant.
\end{proposition}

\begin{proof} For $\alpha=0$ the $h$-invariance of $\tau^0(\mathcal F)=\mathcal F$ follows from our assumption. Assume that for some ordinal $\alpha$ we have established that the families $\tau^\beta(\mathcal F)$ are $h$-invariant for all ordinals $\beta<\alpha$. Then the union $\tau^{<\alpha}(\mathcal F)=\bigcup_{\beta<\alpha}\tau^\beta(\mathcal F)$ is also $h$-invariant. We shall prove that the family $\tau^\alpha(\mathcal F)$ is $h$-invariant. Given a set $A\subset H$, we need to prove that $A\in\tau^\alpha(\mathcal F)$ if and only if $h(A)\in\tau^\alpha(\mathcal F)$.

Assume first that $A\in\tau^\alpha(\mathcal F)$. To show that $h(A)\in\tau^\alpha(\mathcal F)$, take any element $y\in G\setminus\{e\}$. If $y\notin K$, then $h(A)\cap yh(A)=\emptyset\in \tau^{<\alpha}(\mathcal F)$. If $y\in K$, then $y=h(x)$ for some $x\in H$ and then $h(A)\cap yh(A)=h(A\cap xA)\in\tau^{<\alpha}(\mathcal F)$, since $A\cap xA\in\tau^{<\alpha}(\mathcal F)$ and the family $\tau^{<\alpha}(\mathcal F)$ is $h$-invariant.

Now assume that $A\notin\tau^\alpha(\mathcal F)$. Then there is an element $x\in G\setminus\{e\}$ such that $A\cap xA\notin\tau^{<\alpha}(\mathcal F)$. Since $A\subset H$, the element $x$ must belong to $H$ (otherwise $A\cap xA=\emptyset\in\tau^{<\alpha}(\mathcal F)$).
Then for the element $y=h(x)$ we get $h(A)\cap yh(A)\notin\tau^{<\alpha}(\mathcal F)$ by the $h$-invariance of the family $\tau^{<\alpha}(\mathcal F)$. Consequently, $h(A)\notin\tau^\alpha(\mathcal F)$.
\end{proof}

\begin{corollary} Let $h:H\to K$ be an isomorphism between subgroups of a group $G$. For any $h$-invariant family $\mathcal F\subset\mathcal P_G$ the family $\tau^*(\mathcal F)$ is $h$-invariant.
\end{corollary}

\begin{definition} A {\em left-invariant} family $\mathcal F\subset\mathcal P_G$ of subsets of a group $G$ is called
\begin{itemize}
\item {\em auto-invariant} if $\mathcal F$ is $h$-invariant for each injective homomorphism $h:G\to G$;
\item {\em sub-invariant} if $\mathcal F$ is $h$-invariant for each isomorphism $h:H\to K$ between subgroups $K\subset H$ of $G$;
\item {\em strongly invariant} if $\mathcal F$ is $h$-invariant for each isomorphism $h:H\to K$ between subgroups of $G$.
\end{itemize}
\end{definition}

It is clear that
$$\mbox{strongly invariant } \Rightarrow \mbox{ sub-invariant } \Rightarrow \mbox{ auto-invariant}. $$

\begin{remark} Each auto-invariant family $\mathcal F\subset\mathcal P_G$, being left-invariant, is also right-invariant: if $A\in\mathcal F$ and $g\in G$, then $g^{-1}Ag\in\mathcal F$ by the auto-invariance and hence $Ag=g(g^{-1}Ag)\in\mathcal F$ by the left-invariance.
\end{remark}

Proposition~\ref{hinv} implies:

\begin{corollary} If $\mathcal F\subset \mathcal P_G$ is an auto-invariant (sub-invariant, strongly invariant) family of subsets of a group $G$, then so are the families $\tau^*(\mathcal F)$ and $\tau^\alpha(\mathcal F)$ for all ordinals $\alpha$.
\end{corollary}

It is clear that the family $\mathcal F_G$ of finite subsets of a group $G$ is strongly invariant. Now we present some natural examples of families which are not strongly invariant. Following \cite{BM}, we call a subset $A$ of a group $G$
\begin{itemize}
\item {\em large} if there is a finite subset $F\subset G$ with $G=FA$;
\item {\em small} if for any large set $L\subset G$ the set $L\setminus A$ remains large.
\end{itemize}
It follows that the family $\mathcal S_G$ of small subsets of $G$ is a left-invariant ideal in $\mathcal P_G$. According to \cite{BM}, a subset $A\subset G$ is small if and only if for every finite subset $F\subset G$ the complement $G\setminus FA$ is large. We shall need the following (probably known) fact.

\begin{lemma}\label{finind} Let $H$ be a subgroup of finite index in a group $G$. A subset $A\subset H$ is small in $H$ if and only if $A$ is small in $G$.
\end{lemma}

\begin{proof} First assume that $A$ is small in $G$. To show that $A$ is small in $H$, take any large subset $L\subset H$. Since $H$ has finite index in $G$, the set $L$ is large in $G$. Since $A$ is small in $G$, the complement $L\setminus A$ is large in $G$. Consequently, there is a finite subset $F\subset G$ such that $F(L\setminus A)=G$. Then for the finite set $F_H=F\cap H$ we get $F_H(L\setminus A)=H$, which means that $L\setminus A$ is large in $H$.

Now assume that $A$ is small in $H$. To show that $A$ is small in $G$, it suffices to show that for every finite subset $F\subset G$ the complement $G\setminus FA$ is large in $G$. Observe that $(G\setminus FA)\cap H=H\setminus F_HA$, where $F_H=F\cap H$. Since $A$ is small in $H$, the set $H\setminus F_HA$ is large in $H$ and hence large in $G$ (as $H$ has finite index in $G$). Then the set $G\setminus FA\supset H\setminus F_HA$ is large in $G$ too.
\end{proof}

\begin{proposition} Let $G$ be an infinite abelian group.
\begin{enumerate}
\item If $G$ is finitely generated, then the ideal $\mathcal S_G$ is strongly invariant.
\item If $G$ is an infinitely generated free abelian group, then the ideal $\mathcal S_G$ is not auto-invariant.
\end{enumerate}
\end{proposition}

\begin{proof} 1. Assume that $G$ is a finitely generated abelian group. To show that $\mathcal S_G$ is strongly invariant, fix any isomorphism $h:H\to K$ between subgroups of $G$ and let $A\subset H$ be any subset.
The groups $H,K$ are isomorphic and hence have the same free rank $r_0(H)=r_0(K)$. If $r_0(H)=r_0(K)<r_0(G)$, then the subgroups $H,K$ have infinite index in $G$ and hence are small. In this case the inclusions $A\in \mathcal S_G$ and $h(A)\in\mathcal S_G$ hold and so are equivalent. If the free ranks $r_0(H)=r_0(K)$ and $r_0(G)$ coincide, then $H$ and $K$ are subgroups of finite index in the finitely generated group $G$. By Lemma~\ref{finind}, a subset $A\subset H$ is small in $G$ if and only if $A$ is small in $H$ if and only if $h(A)$ is small in the group $h(H)=K$ if and only if $h(A)$ is small in $G$. \smallskip 2. Now assume that $G$ is an infinitely generated free abelian group. Then $G$ is isomorphic to the direct sum $\oplus^\kappa\mathbb Z$ of $\kappa=|G|\ge\aleph_0$ many copies of the infinite cyclic group $\mathbb Z$. Take any subset $\lambda\subset\kappa$ with infinite complement $\kappa\setminus\lambda$ and cardinality $|\lambda|=|\kappa|$ and fix an isomorphism $h:G\to H$ of the group $G=\oplus^\kappa\mathbb Z$ onto its subgroup $H=\oplus^\lambda\mathbb Z$. The subgroup $H$ has infinite index in $G$ and hence is small in $G$. Yet $h^{-1}(H)=G$ is not small in $G$, witnessing that the ideal $\mathcal S_G$ of small subsets of $G$ is not auto-invariant. \end{proof} \section{Thin-completeness of the families $\tau^\alpha(\mathcal F)$}\label{s5} In this section we shall prove that in general the families $\tau^\alpha(\mathcal F)$ are not thin-complete. Our principal result is the following theorem that implies Theorem~\ref{m2} announced in the Introduction. \begin{theorem}\label{t5.1} Let $G$ be a group containing a free abelian subgroup $H$ of cardinality $|H|=|G|$. If $\mathcal F$ is a sub-invariant ideal of subsets of $G$ such that $\tau(\mathcal F)\cap\mathcal P_H\not\subset\mathcal F$, then $\tau^*(\mathcal F)\ne\tau^\alpha(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all ordinals $\alpha<|G|^+$. 
\end{theorem}

We divide the proof of this theorem into a series of lemmas.

\begin{lemma}\label{l6.2} Let $h:H\to K$ be an isomorphism between subgroups of a group $G$ and let $\mathcal F$ be an $h$-invariant left-invariant lower family of subsets of $G$. If a subset $A\subset H$ does not belong to $\tau^\alpha(\mathcal F)$ for some ordinal $\alpha$, then for every point $z\in G\setminus \{e\}$ the set $h(A)\cup zh(A)$ does not belong to $\tau^{\alpha+1}(\mathcal F)$.
\end{lemma}

\begin{proof} Proposition~\ref{hinv} implies that $h(A)\notin \tau^\alpha(\mathcal F)$. Since
$$(h(A)\cup zh(A))\cap z^{-1}(h(A)\cup zh(A))\supset h(A)\notin\tau^\alpha(\mathcal F),$$
the set $h(A)\cup zh(A)$ does not belong to $\tau^{\alpha+1}(\mathcal F)$ by the definition of $\tau^{\alpha+1}(\mathcal F)$.
\end{proof}

In the following lemma, for a subgroup $K$ of a group $H$, by
$$Z_H(K)=\{z\in H:\forall x\in K\;\;zx=xz\}$$
we denote the centralizer of $K$ in $H$.

\begin{lemma}\label{union2} Let $h:H\to K$ be an isomorphism between subgroups $K\subset H$ of a group $G$ such that there is a point $z\in Z_H(K)$ with $z^2\notin K$. Let $\mathcal F\subset\mathcal P_G$ be an $h$-invariant left-invariant ideal. If a subset $A\subset H$ belongs to the family $\tau^\alpha(\mathcal F)$ for some ordinal $\alpha$, then $h(A)\cup zh(A)\in\tau^{\alpha+1}(\mathcal F)$.
\end{lemma}

\begin{proof} By induction on $\alpha$. For $\alpha=0$ and $A\in\mathcal F$ the inclusion $h(A)\cup zh(A)\in\mathcal F\subset\tau(\mathcal F)$ follows from the $h$-invariance, the left-invariance and the additivity of $\mathcal F$. Now assume that for some ordinal $\alpha$ we have proved that for every $\beta<\alpha$ and $A\in\mathcal P_H\cap\tau^\beta(\mathcal F)$ the set $h(A)\cup zh(A)$ belongs to $\tau^{\beta+1}(\mathcal F)$. Given any set $A\in\mathcal P_H\cap \tau^{\alpha}(\mathcal F)$, we need to prove that $h(A)\cup zh(A)\in\tau^{\alpha+1}(\mathcal F)$.
This will follow as soon as we check that $(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))\in\tau^{\alpha}(\mathcal F)$ for every $y\in G\setminus\{e\}$.

If $y\notin K\cup zK\cup z^{-1}K$, then
$$(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))\subset (K\cup zK)\cap y(K\cup zK)=\emptyset\in\tau^{\alpha}(\mathcal F).$$
So, it remains to consider the case $y\in K\cup zK\cup z^{-1}K\subset H$.

If $y\in K$, then
$$(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))=(h(A)\cap yh(A))\cup z(h(A)\cap y\,h(A)).$$
Since $y\in K$, there is an element $x\in H$ with $y=h(x)$. Since $A\in\tau^{\alpha}(\mathcal F)$, we get $A\cap xA\in\tau^\beta(\mathcal F)$ for some $\beta<\alpha$ and then
$$(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))=h(A\cap xA)\cup zh(A\cap xA)\in\tau^{\beta+1}(\mathcal F)\subset\tau^\alpha(\mathcal F)$$
by the inductive assumption.

If $y\in zK$, then $z^2\notin K$ implies that
$$(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))=zh(A)\cap yh(A)\subset zh(A)\in\tau^\alpha(\mathcal F)$$
by the $h$-invariance and the left-invariance of the family $\tau^\alpha(\mathcal F)$, see Proposition~\ref{hinv}.

If $y\in z^{-1}K$, then by the same reason,
$$(h(A)\cup zh(A))\cap y(h(A)\cup zh(A))=h(A)\cap yzh(A)\subset h(A)\in\tau^\alpha(\mathcal F).$$
\end{proof}

Given an isomorphism $h:H\to K$ between subgroups $K\subset H$ of a group $G$, for every $n\in\mathbb N$ define the iteration $h^n:H\to K$ of the isomorphism $h$ letting $h^1=h:H\to K$ and $h^{n+1}=h\circ h^n$ for $n\ge 1$. The isomorphism $h:H\to K$ will be called {\em expanding} if $\bigcap_{n\in\mathbb N}h^n(H)=\{e\}$.

\begin{example} For every integer $k\ge 2$ the isomorphism
$$h_k:\mathbb Z\to k\mathbb Z,\;\;h_k:x\mapsto kx,$$
is expanding.
\end{example}

\begin{lemma}\label{union} Let $h:H\to K$ be an expanding isomorphism between torsion-free subgroups $K\subset H$ of a group $G$ and $\mathcal F\subset\mathcal P_G$ be an $h$-invariant left-invariant ideal of subsets of $G$.
For any limit ordinal $\alpha$ and family $\{A_n\}_{n\in\omega}\subset\tau^{<\alpha}(\mathcal F)$ of subsets of the group $H$, the union $A=\bigcup_{n\in\omega}h^n(A_n)$ belongs to the family $\tau^{\alpha}(\mathcal F)$. \end{lemma} \begin{proof} First observe that $\{h^n(A_n)\}_{n\in\omega}\subset\tau^{<\alpha}(\mathcal F)$ by Proposition~\ref{hinv}. To show that $A=\bigcup_{n\in\omega}h^n(A_n)\in\tau^{\alpha}(\mathcal F)$ we need to check that $A\cap xA\in\tau^{<\alpha}(\mathcal F)$ for all $x\in G\setminus\{e\}$. This is trivially true if $x\notin H$ as $A\subset H$. So, we assume that $x\in H$. By the expanding property of the isomorphism $h$, there is a number $m\in\omega$ such that $x\notin h^m(H)$. Put $B=\bigcup_{n=0}^{m-1}h^n(A_n)$ and observe that $A\cap xA\subset B\cup xB\in\tau^{<\alpha}(\mathcal F)$ as $\tau^{<\alpha}(\mathcal F)$ is additive according to Theorem~\ref{ideal}. \end{proof} \begin{lemma}\label{l6.6} Assume that a left-invariant ideal $\mathcal F$ on a group $G$ is $h$-invariant for some expanding isomorphism $h:H\to K$ between torsion-free subgroups $K\subset H$ of $G$ such that $Z_H(K)\not\subset K$. If $\tau(\mathcal F)\cap\mathcal P_H\not\subset\mathcal F$, then $\tau^{\alpha}(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all ordinals $\alpha<\omega_1$. \end{lemma} \begin{proof} Fix any point $z\in Z_H(K)\setminus K$. Since $H$ is torsion-free, $z^2\ne e$. Since the isomorphism $h$ is expanding, $z^2\notin h^m(H)$ for some $m\in\mathbb N$. Replacing the isomorphism $h$ by its iterate $h^m$, we lose no generality assuming that $z^2\notin h(H)=K$. By induction on $\alpha<\omega_1$ we shall prove that $ \tau^{\alpha}(\mathcal F)\cap\mathcal P_H\ne\tau^{<\alpha}(\mathcal F)\cap\mathcal P_H. $ For $\alpha=1$ the non-equality $\tau(\mathcal F)\cap\mathcal P_H\ne\tau^0(\mathcal F)\cap\mathcal P_H$ is included in the hypothesis.
Assume that for some ordinal $\alpha<\omega_1$ we proved that $\tau^{\beta}(\mathcal F)\cap\mathcal P_H\ne\tau^{<\beta}(\mathcal F)\cap\mathcal P_H$ for all ordinals $\beta<\alpha$. If $\alpha=\beta+1$ is a successor ordinal, then by the inductive assumption we can find a set $A\in\tau^{\beta}(\mathcal F)\setminus\tau^{<\beta}(\mathcal F)$ in the subgroup $H$. By Lemmas~\ref{l6.2} and \ref{union2}, $h(A)\cup zh(A)\in\tau^{\beta+1}(\mathcal F)\setminus\tau^{\beta}(\mathcal F)=\tau^\alpha(\mathcal F)\setminus\tau^{<\alpha}(\mathcal F)$ and we are done. If $\alpha$ is a limit ordinal, then we can find an increasing sequence of ordinals $(\alpha_n)_{n\in\omega}$ with $\alpha=\sup_{n\in\omega}\alpha_n$. By the inductive assumption, for every $n\in\omega$ there is a subset $A_n\subset H$ with $A_n\in\tau^{\alpha_n+1}(\mathcal F)\setminus\tau^{\alpha_n}(\mathcal F)$. Then we can put $A=\bigcup_{n\in\omega}h^n(A_n)$. By Proposition~\ref{hinv}, for every $n\in\omega$, we get $$h^n(A_n)\in\tau^{\alpha_n+1}(\mathcal F)\setminus\tau^{\alpha_n}(\mathcal F)$$ and thus $A\notin\tau^{\alpha_n}(\mathcal F)$ for all $n\in\omega$, which implies that $A\notin \tau^{<\alpha}(\mathcal F)$. On the other hand, Lemma~\ref{union} guarantees that $A\in\tau^{\alpha}(\mathcal F)$. \end{proof} \begin{lemma}\label{l6.7} Assume that a left-invariant ideal $\mathcal F$ on a group $G$ is $h$-invariant for some isomorphism $h:H\to K$ between torsion-free subgroups $K\subset H$ of $G$ such that $z^2\notin K$ for some $z\in Z_H(K)$. Assume that for an infinite cardinal $\kappa$ there are isomorphisms $h_n:H\to H_n$, $n\in\kappa$, onto subgroups $H_n\subset H$ such that $\mathcal F$ is $h_n$-invariant and $H_n\cdot H_m\cap H_k\cdot H_l=\{e\}$ for all indices $n,m,k,l\in\kappa$ with $\{n,m\}\cap\{k,l\}=\emptyset$. If $\tau(\mathcal F)\cap\mathcal P_H\not\subset\mathcal F$, then $\tau^{\alpha}(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all ordinals $\alpha<\kappa^+$.
\end{lemma} \begin{proof} By induction on $\alpha<\kappa^+$ we shall prove that $ \tau^{\alpha}(\mathcal F)\cap\mathcal P_H\ne\tau^{<\alpha}(\mathcal F)\cap\mathcal P_H. $ For $\alpha=1$ the non-equality $\tau^1(\mathcal F)\cap\mathcal P_H\ne\tau^0(\mathcal F)\cap\mathcal P_H$ is included in the hypothesis. Assume that for some ordinal $\alpha<\kappa^+$ we proved that $\tau^{\beta}(\mathcal F)\cap\mathcal P_H\ne\tau^{<\beta}(\mathcal F)\cap\mathcal P_H$ for all ordinals $\beta<\alpha$. If $\alpha=\beta+1$ is a successor ordinal, then by the inductive assumption we can find a set $A\in\tau^{\beta}(\mathcal F)\setminus\tau^{<\beta}(\mathcal F)$ in the subgroup $H$. By Lemmas~\ref{l6.2} and \ref{union2}, $h(A)\cup zh(A)\in\tau^{\beta+1}(\mathcal F)\setminus\tau^{\beta}(\mathcal F)$ and we are done. If $\alpha$ is a limit ordinal, then we can fix a family of ordinals $(\alpha_n)_{n\in\kappa}$ with $\alpha=\sup_{n\in\kappa}(\alpha_{n}+1)$. By the inductive assumption, for every $n\in\kappa$ there is a subset $A_n\subset H$ such that $A_n\in\tau^{\alpha_n+1}(\mathcal F)\setminus\tau^{\alpha_n}(\mathcal F)$. After a suitable shift, we can assume that $e\notin A_n$. Since the ideal $\mathcal F$ is $h_n$-invariant, $h_n(A_n)\in\tau^{\alpha_n+1}(\mathcal F)\setminus\tau^{\alpha_n}(\mathcal F)$ according to Proposition~\ref{hinv}. Then the set $A=\bigcup_{n\in\kappa}h_n(A_n)$ does not belong to $\tau^{<\alpha}(\mathcal F)$. The inclusion $A\in\tau^{\alpha}(\mathcal F)$ will follow as soon as we check that $A\cap xA\in\tau^{<\alpha}(\mathcal F)$ for all $x\in G\setminus \{e\}$. This is clear if $A\cap xA$ is empty. If $A\cap xA$ is not empty, then $x\in h_n(A_n)h_m(A_m)^{-1}\subset H_nH_m$ for some $n,m\in\kappa$.
Taking into account that $H_nH_m\cap H_kH_l=\{e\}$ for all $k,l\in\kappa\setminus\{n,m\}$ and $e\notin A$, we conclude that $$A\cap xA\subset h_n(A_n)\cup h_m(A_m)\cup xh_n(A_n)\cup xh_m(A_m)\in\tau^{<\alpha}(\mathcal F)$$ as $\tau^{<\alpha}(\mathcal F)$ is additive according to Theorem~\ref{ideal}. \end{proof} Let us recall that a family $\mathcal F$ of subsets of a group $G$ is called {\em auto-invariant} if for any injective homomorphism $h:G\to G$ a subset $A\subset G$ belongs to $\mathcal F$ if and only if $h(A)\in\mathcal F$. \begin{lemma}\label{l5.8} Let $G$ be a free abelian group and $\mathcal F$ be an auto-invariant ideal of subsets of $G$. If $\mathcal F$ is not thin-complete, then for each ordinal $\alpha<|G|^+$ the family $\tau^\alpha(\mathcal F)$ is not thin-complete. \end{lemma} \begin{proof} Being free abelian, the group $G$ is generated by some linearly independent subset $B\subset G$. Consider the isomorphism $h:G\to 3G$ of $G$ onto the subgroup $3G=\{g^3:g\in G\}$ and observe that $h$ is expanding and for each $z\in B$ we get $z^2\notin 3G$. The ideal $\mathcal F$, being auto-invariant, is $h$-invariant. Applying Lemma~\ref{l6.6}, we conclude that $\tau^{\alpha}(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all ordinals $\alpha<\omega_1$. If the group $G$ is countable, then this is exactly what we need. Now consider the case of uncountable $\kappa=|G|$. Being free abelian, the group $G$ is isomorphic to the direct sum $\oplus^\kappa\mathbb Z$ of $\kappa$-many copies of the infinite cyclic group $\mathbb Z$. Write the cardinal $\kappa$ as the disjoint union $\kappa=\bigcup_{\alpha\in\kappa}\kappa_\alpha$ of $\kappa$ many subsets $\kappa_\alpha\subset\kappa$ of cardinality $|\kappa_\alpha|=\kappa$. For every $\alpha\in\kappa$ consider the free abelian subgroup $G_\alpha=\oplus^{\kappa_\alpha}\mathbb Z$ of $G$ and fix any isomorphism $h_\alpha:G\to G_\alpha$.
It is clear that $(G_\alpha\oplus G_\beta)\cap(G_\gamma\oplus G_\delta)=\{0\}$ for all ordinals $\alpha,\beta,\gamma,\delta\in\kappa$ with $\{\alpha,\beta\}\cap\{\gamma,\delta\}=\emptyset$. Being auto-invariant, the ideal $\mathcal F$ is $h_\alpha$-invariant for every $\alpha\in \kappa$. Now we can apply Lemma~\ref{l6.7} to conclude that $\tau^{\alpha}(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all ordinals $\alpha<\kappa^+$. \end{proof} \begin{proof}[Proof of Theorem~\ref{t5.1}] Let $\mathcal F$ be a sub-invariant ideal of subsets of a group $G$ and let $H\subset G$ be a free abelian subgroup of cardinality $|H|=|G|$. Assume that $\tau(\mathcal F)\cap\mathcal P_H\not\subset\mathcal F$. Consider the ideal $\mathcal F'=\mathcal P_H\cap\mathcal F$ of subsets of the group $H$. By transfinite induction it can be shown that $\tau^{\alpha}(\mathcal F')=\mathcal P_H\cap\tau^\alpha(\mathcal F)$ for all ordinals $\alpha$. The sub-invariance of $\mathcal F$ implies the sub-invariance (and hence auto-invariance) of $\mathcal F'$. By Lemma~\ref{l5.8}, we get $\tau^{\alpha}(\mathcal F')\ne \tau^{<\alpha}(\mathcal F')$ for each $\alpha<|H|^+=|G|^+$. Then also $\tau^*(\mathcal F)\ne\tau^{\alpha}(\mathcal F)\ne\tau^{<\alpha}(\mathcal F)$ for all $\alpha<|G|^+$. \end{proof} \section{The descriptive complexity of the family $\tau^*(\mathcal F)$}\label{s6} In this section, given a countable group $G$ and a left-invariant monotone subfamily $\mathcal F\subset\mathcal P_G$, we study the descriptive complexity of the family $\tau^*(\mathcal F)$, considered as a subspace of the power-set $\mathcal P_G$ endowed with the compact metrizable topology of the Tychonoff product $2^G$ (we identify $\mathcal P_G$ with $2^G$ by identifying each subset $A\subset G$ with its characteristic function $\chi_A:G\to 2=\{0,1\}$). \begin{theorem}\label{t7.1} Let $G$ be a countable group and $\mathcal F\subset\mathcal P_G$ be a Borel left-invariant lower family of subsets of $G$.
\begin{enumerate} \item For every ordinal $\alpha<\omega_1$ the family $\tau^\alpha(\mathcal F)$ is Borel in $\mathcal P_G$. \item The family $\tau^*(\mathcal F)=\tau^{<\omega_1}(\mathcal F)$ is coanalytic. \item If $\tau^*(\mathcal F)\ne\tau^\alpha(\mathcal F)$ for all $\alpha<\omega_1$, then $\tau^*(\mathcal F)$ is not Borel in $\mathcal P_G$. \end{enumerate} \end{theorem} \begin{proof} Let us recall that $G_\circ=G\setminus\{e\}$. In Section~\ref{s2} to each subset $A\subset G$ we assigned the $\tau$-tree $$T_A=\{s\in G_\circ^{<\omega}:A_s\notin\mathcal F\},$$ where for a finite sequence $s=(g_0,\dots,g_{n-1})\in G_\circ^n\subset G_\circ^{<\omega}$ we put $$A_s=\bigcap_{(x_0,\dots,x_{n-1})\in 2^{n}}g_0^{x_0}\cdots g_{n-1}^{x_{n-1}}A.$$ Consider the subspaces $\mathsf{WF}\subset \mathsf{Tr}$ of $\mathcal P_{G_\circ^{<\omega}}$, where $\mathsf{Tr}$ consists of all lower subtrees of the tree $G_\circ^{<\omega}$ and $\mathsf{WF}$ of the well-founded lower subtrees. \begin{claim} The function $$T_*:\mathcal P_G\to \mathsf{Tr},\;\;T_{*}:A\mapsto T_A$$ is Borel measurable. \end{claim} \begin{proof} The Borel measurability of $T_*$ means that for each open subset $\mathcal{U}\subset\mathsf{Tr}$ the preimage $T_*^{-1}(\mathcal{U})$ is a Borel subset of $\mathcal P_G$. Let us observe that the topology of the space $\mathsf{Tr}$ is generated by the sub-base consisting of the sets $$\mbox{$\langle s\rangle^+=\{T\in\mathsf{Tr}:s\in T\}$ \ and \ $\langle s\rangle^-=\{T\in\mathsf{Tr}:s\notin T\}$ where $s\in G_\circ^{<\omega}$}.$$ Since $\langle s\rangle^-=\mathsf{Tr}\setminus\langle s\rangle^+$, the Borel measurability of $T_*$ will follow as soon as we check that for every $s\in G_\circ^{<\omega}$ the preimage $T_*^{-1}(\langle s\rangle^+)=\{A\in\mathcal P_G:s\in T_A\}$ is Borel. For this observe that the function $$f:\mathcal P_G\times G_\circ^{<\omega}\to \mathcal P_G,\;f:(A,s)\mapsto A_s,$$ is continuous. Here the tree $G_\circ^{<\omega}$ is endowed with the discrete topology.
Since $\mathcal F$ is Borel in $\mathcal P_G$, the preimage $\mathcal E=f^{-1}(\mathcal P_G\setminus \mathcal F)$ is Borel in $\mathcal P_G\times G_\circ^{<\omega}$. Now observe that for every $s\in G_\circ^{<\omega}$ the set $$T_*^{-1}(\langle s\rangle^+)=\{A\in\mathcal P_G:s\in T_A\}=\{A\in\mathcal P_G:(A,s)\in \mathcal E\}$$ is Borel. \end{proof} By Theorem~\ref{t1.2}, $\tau^*(\mathcal F)=T_*^{-1}(\mathsf{WF})$ and $\tau^\alpha(\mathcal F)=T_*^{-1}(\mathsf{WF}_{{-}1{+}\alpha{+}1})$ for $\alpha<\omega_1$. Now Theorem~\ref{WF} and the Borel measurability of the function $T_*$ imply that the preimage $\tau^*(\mathcal F)=T_*^{-1}(\mathsf{WF})$ is coanalytic while $\tau^\alpha(\mathcal F)=T_*^{-1}(\mathsf{WF}_{{-}1{+}\alpha{+}1})$ is Borel for every $\alpha<\omega_1$, see \cite[14.4]{Ke}. \smallskip Now assuming that $\tau^{\alpha+1}(\mathcal F)\ne\tau^\alpha(\mathcal F)$ for all $\alpha<\omega_1$, we shall show that $\tau^*(\mathcal F)$ is not Borel. In the opposite case, $\tau^*(\mathcal F)$ is analytic and then its image $T_*(\tau^*(\mathcal F))\subset\mathsf{WF}$ under the Borel function $T_*$ is an analytic subspace of $\mathsf{WF}$, see \cite[14.4]{Ke}. By Theorem~\ref{WF}(4), $T_*(\tau^*(\mathcal F))\subset\mathsf{WF}_{\alpha{+}1}$ for some infinite ordinal $\alpha<\omega_1$ and thus $\tau^*(\mathcal F)=T_*^{-1}(\mathsf{WF}_{\alpha{+}1})=\tau^{\alpha}(\mathcal F)$, which is a contradiction. \end{proof} Theorems~\ref{t5.1} and \ref{t7.1} imply: \begin{corollary} For any countable non-torsion group $G$ the ideal $\tau^*(\mathcal F_G)\subset\mathcal P_G$ is coanalytic but not analytic. \end{corollary} By \cite[26.4]{Ke}, the $\Sigma_1^1$-Determinacy (i.e., the assumption of the determinacy of all analytic games) implies that each coanalytic non-analytic space is $\Pi^1_1$-complete. By \cite{Ma}, the $\Sigma_1^1$-Determinacy follows from the existence of a measurable cardinal.
So, the existence of a measurable cardinal implies that for each countable non-torsion group $G$ the subspace $\tau^*(\mathcal F_G)\subset\mathcal P_G$, being coanalytic and non-analytic, is $\Pi^1_1$-complete. \begin{question} Is the space $\tau^*(\mathcal F_\mathbb Z)$ \ $\Pi^1_1$-complete in ZFC? \end{question}
\section{Introduction} The Mars Odyssey Neutron Spectrometer (MONS), designed and built by Los Alamos National Laboratory (LANL), has been continuously measuring the leakage flux of neutrons from a Mars polar orbit since February 2002. These data have been used to map the hydrogen content of Mars up through July 2009 \cite{Maurice2011,Feldman2011}. However, both the Odyssey spacecraft and the MONS instrument have been operational to the present time; here we present a new and extended analysis of the MONS data set through December 2017 that can be used to search for long-term climate variations, particularly in the polar regions of Mars. Processing of the integrated total data set is an important step before such interpretations can be drawn. Throughout this process, many choices and alternatives must be clearly documented to establish the accuracy, precision, and robustness of these data. The MONS instrument collects neutron fluxes continuously from the Mars surface in three energy bands: thermal (0--0.4 eV), epithermal (0.4 eV--700 keV), and fast (0.7--5 MeV). The purpose of our updated data processing code is to transform time-tagged measurements through the present time into relevant neutron maps, some of which are time dependent because of seasonally changing CO$_2$- and water-ice precipitation at high latitudes. Since Mars orbit insertion, MONS has been in an excellent state of health over 17 years of operation. However, over that time period, there have been several unresolved issues regarding our understanding of the MONS systematic biases. At present, most, if not all, of these biases have been removed. Each generation of MONS data processing was built independently of its predecessors, often by different people, to limit the propagation of erroneous assumptions. Efforts were also devoted to comparing the results of each approach. The initial processing of MONS data was performed by Tokar \textit{et al.} \cite{Tokar2002} and was used for early discovery results.
Subsequently, Prettyman \textit{et al.} developed an independent approach \cite{Prettyman2004} that has been the reference for publications from 2004 to the present. This code is currently used to deliver level-1 derived neutron data (DND) to the Planetary Data System (PDS). These products are time series of corrected neutron counting rate data that can be used for scientific investigations. The level-1 data set includes averaged neutron data (AND), which consists of neutron maps built from the neutron time series data. The most recent processing and analysis of MONS data was performed by Maurice \textit{et al.} \cite{Maurice2011} and covered data through July 2009. This led to work studying the depth-distribution of water on Mars \cite{Feldman2011,Pathare2018}. Before any science interpretation of inter-annual variability in the MONS data set can be made, this paper intends to document and provide the necessary elements for understanding the new data processing method and the resulting data set. We then build on this by presenting averaged counting rate maps and a preliminary comparison of the inter-annual variability in the Mars polar regions over 8 Mars Years by means of the neutron counting rates. These can be compared to previous results presented in \cite{Prettyman2004,Prettyman2009,Maurice2011}. \section{MONS Instrument} The MONS instrument consists of an 11$\times$11$\times$10~cm$^3$ BC454 plastic scintillator separated into four optically isolated segments, or ``prisms." This plastic is loaded with 5\% natural boron by weight, which provides sensitivity to thermal and epithermal neutrons through neutron capture on $^{10}$B. The predominant interaction that occurs is $^{10}$B(n,$\alpha$)$^{7}$Li$^{*}$ with a Q value of 2.8 MeV. Due to inefficiency in light production from the heavy isotopes produced in this reaction, this energy is quenched and detected at 98~keV electron equivalent (keVee). A schematic of the MONS instrument is shown in Fig.~\ref{fig:mons}.
\begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{mons.pdf} \caption{Schematic of the MONS instrument and cartoon of scintillator orientation.} \label{fig:mons} \end{figure} There are two types of primary data products produced by the MONS instrument that define what type of neutron was detected \cite{Maurice2011}. Category 1 events (thermal, epithermal) are defined by a prompt interaction with an energy between 40~keVee and 630~keVee that is not followed by a delayed interaction within 25.6~$\mu$s. Category 2 events (fast) are defined as a similar prompt pulse with an expanded energy range of 40~keVee to 2.55~MeVee followed by a delayed pulse with an energy between 40~keVee and 630~keVee within a 25.6~$\mu$s window. For both Category 1 and Category 2 events, the event must be detected by only one or two prisms; otherwise, the event is discarded. Prompt events with an energy greater than 2.55~MeVee are categorized as GCR events. Fast neutrons (Category 2 events) are defined as neutrons with an energy $>$0.7~MeV \cite{Maurice2011} and can be detected by all four prisms. Category 1 events can be split into thermal and epithermal neutrons based on prism. Prism 1 faces the nadir direction and is covered with a 0.69~mm thick cadmium sheet, which absorbs neutrons below $\sim$0.4 eV. Therefore, Category 1 events from this prism are epithermal neutrons (0.4~eV - 0.7~MeV). As noted in \cite{Maurice2011,Prettyman2009}, due to the geometry of the prisms there are small gaps in the cadmium coverage allowing Prism 1 some thermal neutron sensitivity. Thermal neutrons are detected by exploiting the Doppler filter technique \cite{Feldman1986}, which uses the fact that the spacecraft velocity (3.4~km/s) is faster than the velocity of thermal neutrons (1.9~km/s in the Mars atmosphere, corresponding to a thermal neutron energy of 0.019~eV). Prism 2 is forward facing along the direction of spacecraft motion, and therefore detects both thermal and epithermal neutrons.
Prism 4 faces backwards along the direction of spacecraft motion, and therefore only neutrons that have a velocity higher than the spacecraft velocity can be detected. This corresponds to neutrons in the epithermal range with an energy greater than 0.06 eV. Thermal neutron ($<$0.06~eV) rates are determined by subtracting the Prism 4 counting rate from the Prism 2 counting rate. Similarly, an alternate definition of epithermal neutrons (0.06~eV - 0.7 MeV) can be obtained from the Prism 4 counting rates. Finally, Prism 3 is shielded from Mars and therefore should be a good proxy for the spacecraft background. The sides of the prisms are also covered in cadmium. The mapping phase of the MONS instrument began February 22, 2002 and has been operating nearly continuously since then, leading to 17 Earth-years of data. The most recent processing and analysis of MONS data \cite{Maurice2011} covered data through July 2009, corresponding to nearly 4 Mars-years of data ($L_s$ = 330 in Mars Year (MY) 25 to $L_s$ = 313 in MY 29, using the Mars calendar defined by \cite{Piqueux2015}). While \cite{Maurice2011} showed some inter-annual comparisons of counting rates in the polar regions, their work focused primarily on creating an averaged CO$_2$ frost-free map of two-layer water-equivalent hydrogen (WEH) based on the MONS data that subsequently was used in the most definitive MONS mapping of WEH and its depth distribution to date \cite{Pathare2018}. Another paper including inter-annual comparisons of the CO$_2$ frost cap thickness for two Mars years towards the beginning of the MONS mission can be found in \cite{Prettyman2009}. Here we present new data processing of the Category 1 MONS data that includes all data through the end of 2017 ($L_s$ = 108.3 in MY 34). This doubles the amount of data processed by \cite{Maurice2011} and quadruples the number of MY in a detailed comparison of inter-annual variability of the seasonal CO$_2$ frost deposits in the polar regions. 
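As a hedged illustration of the prism logic described above (this sketch is ours, not the actual MONS pipeline; the function name and array handling are illustrative), the Category 1 counting rates and the Doppler-filter thermal difference can be computed from per-window counts as:

```python
import numpy as np

ACCUM_WINDOW_S = 19.75  # length of one MONS accumulation window, in seconds

def category1_rates(counts_p1, counts_p2, counts_p4):
    """Convert per-window Category 1 prism counts into counting rates.

    counts_p1 : nadir prism under the Cd sheet -> epithermal (0.4 eV - 0.7 MeV)
    counts_p2 : forward prism -> thermal + epithermal
    counts_p4 : backward prism -> alternate epithermal (> ~0.06 eV)
    The thermal rate is the Doppler-filter difference Prism 2 - Prism 4.
    """
    p1 = np.asarray(counts_p1, dtype=float)
    p2 = np.asarray(counts_p2, dtype=float)
    p4 = np.asarray(counts_p4, dtype=float)
    return {
        "epithermal_cd": p1 / ACCUM_WINDOW_S,   # Cd-sheet definition (Prism 1)
        "epithermal_alt": p4 / ACCUM_WINDOW_S,  # alternate definition (Prism 4)
        "thermal": (p2 - p4) / ACCUM_WINDOW_S,  # Doppler filter (Prism 2 - 4)
    }
```

Normalizing by the 19.75 s accumulation window converts the raw counter totals into counts per second, the quantity mapped in the sections that follow.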
\section{New Data Processing} The data processing includes many steps to take the MONS data from raw binary data to prism counting rates registered with latitude and longitude. Much of the data processing follows and draws upon the work described in \cite{Maurice2011}, but performed independently. Raw data were acquired from the Planetary Data System (PDS) Geosciences Node (\url{pds-geosciences.pds.wustl.edu}), which releases data quarterly for the Mars Odyssey mission and GRS instrument suite. The raw data, or experimental data records (EDR), are organized into folders by calendar year and subsequently by day. Raw data for the MONS instrument are contained within the neutron\_spectra files. Relevant engineering data for MONS are contained within the eng subdirectory. Information on the format of each binary EDR file is contained within the main label directory. The MONS data are pre-packaged to contain ephemeris data in addition to the instrument data. Each data point is registered with a UTC time stamp and an ``SLCK" clock value that is unique for each data point. The neutron data include 64-channel histograms for Category 1 events and 32-channel histograms for the prompt (early) and delayed (late) Category 2 events. Counter data, which store the total number of counts over threshold in 19.75~second accumulation windows, include GCR, deadtime, and the number of, and which, prisms fired. There is additional information on the first 84 Category 2 events within each accumulation window, including time between the prompt and delayed pulses, and pulse heights. The data also contain sub-satellite latitude and longitude at the middle of each integration window and position and velocity of the spacecraft in different reference frames. The raw data conversion was done using Python 3.5 and the unpacked data stored in a MySQL database. Following conversion, data reduction takes place to remove bad data from the dataset, as described in Section~\ref{sec:data_reduction}.
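To illustrate how per-window counts and the mid-window sub-satellite coordinates described above can be turned into averaged counting rate maps, the following sketch bins rates on an equal-angle grid. It is a hedged illustration only: the 2-degree resolution and the function name are our assumptions, not the parameters of the actual MONS mapping.

```python
import numpy as np

def bin_counting_rates(lats, lons, counts, window_s=19.75, res_deg=2.0):
    """Average counting rates (counts/s) on an equal-angle lat/lon grid.

    lats, lons : sub-satellite coordinates at the middle of each window
    counts     : total counts in each 19.75 s accumulation window
    """
    n_lat = int(round(180.0 / res_deg))
    n_lon = int(round(360.0 / res_deg))
    total = np.zeros((n_lat, n_lon))  # accumulated counts per cell
    time = np.zeros((n_lat, n_lon))   # accumulated observing time per cell

    i = np.clip(((np.asarray(lats, dtype=float) + 90.0) // res_deg).astype(int),
                0, n_lat - 1)
    j = np.clip((np.asarray(lons, dtype=float) // res_deg).astype(int),
                0, n_lon - 1)
    np.add.at(total, (i, j), np.asarray(counts, dtype=float))
    np.add.at(time, (i, j), window_s)

    # Mean rate per cell; cells with no coverage are NaN.
    return np.where(time > 0.0, total / np.where(time > 0.0, time, 1.0), np.nan)
```

Using `np.add.at` rather than fancy-indexed `+=` ensures that repeated visits to the same cell accumulate correctly.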
After all bad data are removed, data corrections that result in the final dataset are applied, as described in Section~\ref{sec:data_correction}. \subsection{Data Reduction}\label{sec:data_reduction} There are several categories of ``bad" data that must be removed before further processing can take place. The first and largest data cut is from solar energetic particle (SEP) events, which produce a large background in the prism counting rates. Stability cuts are also applied to the counter data, which remove outliers and transients in these data sets. Cuts on spacecraft orbit parameters are applied to remove outliers or transients, as well as data acquired during clock resets, which corrupt our ability to normalize to counting rates. Finally, some additional data cuts related to various anomalous readings are applied. The final data set contains only good data that pass all four of the cuts described below. A summary of how much data is removed by each cut is provided at the end of this sub-section. \subsubsection{SEP Event Cuts} Removal of SEP events is done manually by looking at the counting rate recorded by a dedicated GCR counter and removing periods of rate excursions. An example of the base procedure is described below for a SEP event in September 2004, shown in Fig.~\ref{fig:sep1}. The mean and standard deviation of the GCR counter (total counts in each 19.75 s accumulation window) are determined for 8 days before and 8 days after the event. Figure~\ref{fig:sep1} shows black bands representing $\pm$3 standard deviations ($\sigma$) from the mean. An excursion is flagged when the GCR counter extends beyond $\pm$3$\sigma$ from these means. To safely remove the full extent of each event, the SEP event cut range starts 4 hours before the start of the excursion and ends 4 hours after the end of the excursion. The final cut range is shown as the gray shaded region.
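The base SEP-cut procedure can be sketched as follows. This is an illustrative reimplementation, not the code used here; in particular, the 1-day gap used to keep the excursion itself out of the baseline windows is our assumption (the text determines baselines from 8 days before and 8 days after the event).

```python
import numpy as np

def sep_cut_range(times_s, gcr_counts, event_time_s,
                  baseline_days=8, gap_days=1, n_sigma=3.0, pad_hours=4.0):
    """Determine a SEP event cut range from the GCR counter.

    times_s      : timestamps of the accumulation windows, in seconds
    gcr_counts   : GCR counter totals per 19.75 s accumulation window
    event_time_s : approximate event time used to place the baselines
    Returns (start, end) of the cut range, or None if no excursion is found.
    """
    t = np.asarray(times_s, dtype=float)
    g = np.asarray(gcr_counts, dtype=float)
    day, hour = 86400.0, 3600.0

    # Baseline mean/sigma from ~8 days before and after the event,
    # excluding a gap around the nominal event time (our assumption).
    before = (t >= event_time_s - baseline_days * day) & (t < event_time_s - gap_days * day)
    after = (t > event_time_s + gap_days * day) & (t <= event_time_s + baseline_days * day)
    base = g[before | after]
    mu, sigma = base.mean(), base.std()

    # Flag the excursion beyond +/- n_sigma and pad by 4 hours per side.
    excursion = np.abs(g - mu) > n_sigma * sigma
    if not excursion.any():
        return None
    return (t[excursion].min() - pad_hours * hour,
            t[excursion].max() + pad_hours * hour)
```

The adapted cuts described next (dips before or after the peak) would override these automatic bounds by eye, as in the text.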
\begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{gcr_2004_step1_20040907_wmean.png} \caption{Example of default SEP event cut definition for September 2004 events. See text for details.} \label{fig:sep1} \end{figure} There are several SEP events where the event cut method was adapted or event cut ranges were manually updated from the base method. These included SEP events that were low in strength or short in duration; most frequently, however, a decrease in the GCR counter was observed surrounding the peak of the SEP event (likely due to changes in the interplanetary magnetic field). This was observed in $\sim$20\% of SEP events. In these cases, if the dip was before the main SEP excursion, the start of the event cut was determined by eye. If the dip was after the main SEP excursion, the mean and standard deviation of the counter from before the event were used to judge when the counter had returned to nominal. An example of this type of event (July 2004) is shown in Fig.~\ref{fig:sep2}. This event exhibited the decrease in rates both before and after the event. The mean was determined from data between 6/26/2004 -- 7/3/2004. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{gcr_2004_step1_20040706_wmean.png} \caption{Example of modified SEP event cut definition for July 2004 event. See text for details.} \label{fig:sep2} \end{figure} Ranges defining the removal of SEP events from the data set are summarized in Table~\ref{table:sep_cut}. \begin{table}[h!]
\caption{Event cut ranges for removing SEP events, in UTC range.} \label{table:sep_cut} \centering \begin{adjustbox}{width=0.95\textwidth} \begin{tabular}{|l|c|c||l|c|c|} \hline Event & Start Time (UTC) & End Time (UTC) & Event & Start Time (UTC) & End Time (UTC) \\ \hline 1 & 2002-03-11 00:00 & 2002-03-20 10:00 & 36 & 2012-05-16 22:00 & 2012-05-21 14:00 \\ 2 & 2002-04-21 23:00 & 2002-05-05 10:00 & 37 & 2012-07-06 21:00 & 2012-08-04 10:00 \\ 3 & 2002-05-21 18:00 & 2002-05-29 00:00 & 38 & 2012-08-31 21:00& 2012-09-04 14:00 \\ 4 & 2002-07-15 20:00 & 2002-08-06 12:00 & 39 & 2012-09-20 00:00& 2012-10-01 18:00 \\ 5 & 2002-08-14 00:00 & 2002-09-18 06:00 & 40 & 2013-03-05 04:00& 2013-03-09 18:00 \\ 6 & 2002-10-14 11:00 & 2002-10-20 00:00 & 41 & 2013-05-01 04:00& 2013-05-02 14:00 \\ 7 & 2002-10-24 12:00 & 2002-11-14 00:00 & 42 & 2013-05-12 23:00& 2013-05-16 22:00 \\ 8 & 2002-12-02 12:00 & 2002-12-05 00:00 & 43 & 2013-05-23 12:00& 2013-05-29 00:00 \\ 9 & 2003-03-18 13:00 & 2003-03-28 16:00 & 44 & 2013-08-19 22:00& 2013-08-25 14:00 \\ 10 & 2003-05-28 13:00 & 2003-06-02 16:00 & 45 & 2013-10-05 05:00& 2013-10-09 14:00 \\ 11 & 2003-10-25 06:00 & 2003-11-25 04:00 & 46 & 2013-10-11 03:00& 2013-10-18 10:00 \\ 12 & 2003-12-02 13:00 & 2003-12-05 12:00 & 47 & 2013-11-02 03:00& 2013-11-15 14:00 \\ 13 & 2004-07-03 16:00 & 2004-07-20 00:00 & 48 & 2013-12-26 05:00& 2014-01-01 12:00 \\ 14 & 2004-09-06 08:00 & 2004-09-15 22:00 & 49 & 2014-01-06 03:00& 2014-01-14 06:00 \\ 15 & 2004-11-10 23:00 & 2004-11-19 20:00 & 50 & 2014-02-14 12:00& 2014-03-18 12:00 \\ 16 & 2005-01-11 12:00 & 2005-02-04 14:00 & 51 & 2014-03-29 18:00& 2014-03-31 00:00 \\ 17 & 2005-05-14 00:00 & 2005-05-20 00:00 & 52 & 2014-04-18 15:00& 2014-04-22 18:00 \\ 18 & 2005-06-16 17:00 & 2005-06-24 10:00 & 53 & 2014-05-09 03:00& 2014-05-12 00:00 \\ 19 & 2005-07-14 05:00 & 2005-08-09 22:00 & 54 & 2014-09-01 10:00& 2014-09-15 14:00 \\ 20 & 2005-08-22 14:00 & 2005-09-23 13:00 & 55 & 2014-09-22 00:00& 2014-10-01 00:00 \\ 21 & 
2006-11-03 18:00 & 2006-11-10 03:00 & 56 & 2014-10-15 00:00& 2014-10-20 00:00 \\ 22 & 2006-12-05 07:00 & 2006-12-20 18:00 & 57 & 2014-11-01 00:00& 2014-11-03 00:00 \\ 23 & 2007-01-25 05:00 & 2007-01-27 05:00 & 58 & 2014-11-07 00:00& 2014-11-11 00:00 \\ 24 & 2010-06-11 22:00& 2010-06-13 04:00& 59 & 2014-12-13 00:00& 2014-12-28 13:00 \\ 25 & 2010-08-05 06:00& 2010-08-09 10:00& 60 & 2015-03-03 12:00& 2015-03-12 00:00 \\ 26 & 2011-02-11 12:00& 2011-02-12 12:00& 61 & 2015-03-23 22:00& 2015-04-01 20:00 \\ 27 & 2011-03-08 00:00& 2011-04-11 18:00& 62 & 2015-04-21 11:00& 2015-04-24 09:00 \\ 28 & 2011-05-09 20:00& 2011-05-11 22:00& 63 & 2015-05-02 10:00& 2015-05-08 12:00 \\ 29 & 2011-06-04 19:00& 2011-06-12 13:00& 64 & 2015-06-18 00:00& 2015-06-24 00:00 \\ 30 & 2011-07-26 00:00& 2011-07-29 00:00& 65 & 2015-10-28 13:00& 2015-11-05 00:00 \\ 31 & 2011-09-04 00:00& 2011-10-08 14:00& 66 & 2016-01-06 00:00& 2016-01-07 15:00 \\ 32 & 2011-11-03 19:00& 2011-11-08 23:00& 67 & 2016-02-21 12:00& 2016-02-23 12:00 \\ 33 & 2011-11-29 12:00& 2011-11-30 12:00& 68 & 2016-03-16 12:00& 2016-03-17 18:00 \\ 34 & 2012-01-23 06:00& 2012-02-05 04:00& 69 & 2017-04-14 18:00& 2017-04-21 00:00 \\ 35 & 2012-03-06 23:00 & 2012-03-17 10:00 & 70 & 2017-09-10 15:00 & 2017-09-21 12:00 \\ \hline \end{tabular} \end{adjustbox} \end{table} \subsubsection{Stability Cuts} Stability cuts were applied to the GCR counters and the total counts in each of the four prism Category 1 histograms. The stability cuts are applied based on the deviation of each data point from a boxcar rolling median value. A rolling window is specified as the number of data entries to sum over, and the result is centered within the time range of the window. Based on the time scales over which observed rates can change, we chose to apply a ``daily" rolling median window. Resampling the data from 2003 through 2007 to a frequency of one day, we found that a typical day contains 4185 entries.
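A minimal sketch of this stability cut, assuming the ``ten times the median of the deviations" threshold used in the 2004 example below (the function name and the simple loop-based rolling median are ours), might look like:

```python
import numpy as np

def stability_mask(counts, window=4185, mad_factor=10.0):
    """Return a boolean keep-mask implementing the stability cut.

    Entries whose absolute deviation from a centered boxcar rolling median
    exceeds mad_factor times the median of the deviations are flagged.

    counts : per-window totals (GCR counter or Category 1 prism sums)
    window : number of entries in the boxcar, roughly one day of data
    """
    c = np.asarray(counts, dtype=float)
    half = window // 2
    rolling_med = np.empty_like(c)
    for i in range(c.size):
        lo, hi = max(0, i - half), min(c.size, i + half + 1)
        rolling_med[i] = np.median(c[lo:hi])  # centered rolling median

    dev = np.abs(c - rolling_med)
    threshold = mad_factor * np.median(dev)   # 10 x median of the deviations
    return dev <= threshold                   # True = keep, False = cut
```

Because the threshold is set from the median of the deviations, isolated spikes are rejected while the bulk of the data is untouched.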
An example of this technique is described using data from 2004. Figure~\ref{fig:stab1} shows a histogram of the deviations from the rolling median value for the GCR counter (left) and the Category 1 Prism 4 total histogram counts (right). The line indicates the stability threshold, which was chosen as ten times the median of the deviations (sometimes called the ``MAD"). Data deviating by more than the stability threshold are cut, indicated by the + markers in Fig.~\ref{fig:stab2}. The stability cut most obviously affects what seem to be spurious readings in the GCR counter. For the Prism total histogram counts, the stability cuts remove most spikes observed in the rates. \begin{figure}[h] \centering \includegraphics[width=0.42\textwidth]{diffdist_gcr_ex.png} \hspace{0.2 in} \includegraphics[width=0.42\textwidth]{diffdist_cat1p4_ex.png} \caption{Histogram of the deviations from the rolling median with the determined stability threshold for the GCR counter (left) and Category 1 Prism 4 counts (right).} \label{fig:stab1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{stability_gcr_showcut.png} \includegraphics[width=0.75\textwidth]{stability_cat1p4_showcut.png} \caption{Example of the stability cut applied. See text for details.} \label{fig:stab2} \end{figure} \subsubsection{Spacecraft Cuts} Cuts were applied based on the spacecraft orbit data to flag when a variable goes out of a range or experiences an excursion from nominal. First, pointing and intersecting data were required to be available (=1) in the raw data. The parameters subject to spacecraft cuts that must be between certain values are the latitude ($-90^{\circ}$ to $+90^{\circ}$), longitude ($0^{\circ}$ to $360^{\circ}$), and altitude (380~km to 460~km). In addition, issues related to the northward equatorial crossing are flagged under this cut.
As described in \cite{Maurice2011} and illustrated in Fig.~\ref{fig:eqcross}, when the spacecraft is moving Northward and crosses the equator, the internal clock is reset and the measurement time interval is lost. These events cannot be properly normalized into counting rates and are removed. The most common bounding cuts come from pointing and intersecting data not being available. \begin{figure}[h] \centering \includegraphics[width=0.65\textwidth]{plot_cnts_eqcrossing_april2004_p2.png} \caption{(Color online) Prism 4 count rates assuming the nominal 19.75~s window in April 2004 when crossing the equator Northward (black circles) and Southward (red triangles).} \label{fig:eqcross} \end{figure} Finally, some instances of transient deviations in the Mars position and velocity as recorded in the instrument frame were observed and removed. \subsubsection{Other Cuts} A few data reduction cuts do not fall under the above categories. One is an issue identified in \cite{Maurice2011} as erroneous latitude registration. This can be seen in Fig.~\ref{fig:anom2}, which shows the latitude (for any longitude) registered near the equator. The black points indicate when the spacecraft is moving Northward (``up'') and the red points when it is moving Southward (``down''). Data in this plot have all other cuts already applied, so gaps in time mostly represent SEP event cuts; the effect of removing the Northward equatorial crossing can also be observed. The two regions of time cut under this error code are both in 2002 and are shown in Fig.~\ref{fig:anom2} with the cut regions shaded gray. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{latitude_registration_equator_2002_zoom_cut.png} \includegraphics[width=0.45\textwidth]{latitude_registration_equator_2002_zoom2_cut.png} \caption{(Color online) Latitude of data near the equator, when the spacecraft is moving Northward (black) and Southward (red).
Two regions affected by this cut are indicated by the shaded bands.} \label{fig:anom2} \end{figure} There was also, very rarely, an issue with the UTC timestamp appearing at a seemingly incorrect time, affecting 63 data points overall. For example, several data entries with UTC timestamps indicating 2007 appeared at SLCK values near November 25, 2006. Two other data points have a similar issue, one in 2009 and one in 2010. It is likely that the data affected by this anomaly are fine and only the UTC timestamp is erroneous; however, to be safe these data were removed. Finally, cuts based on the MONS sensor temperature and high voltage power supply (HVPS) were applied. Note that there appears to be a mismatch in the mapping of engineering data to files starting April 1, 2004. Before this date, the engineering data are in the appropriately labeled files, which for the HVPS are the hvps\_mntr\_[1,2].dat files. From April 1, 2004 through the end of the dataset that we considered, the data corresponding to HVPS1 and HVPS2 were empirically found to be in the files plus\_5v\_crnt\_dig.dat and plus\_5v\_anlg.dat, respectively. The other files in the eng directory are similarly mis-mapped. We did not determine the mapping for the other engineering data; however, we note that none of the files provided negative values after April 1, 2004. The temperature and HVPS voltage are shown for the beginning of data collection through the end of 2017 in Fig.~\ref{fig:engdata}. These data are registered at different SLCK clock values than the rest of the MONS data; therefore, no event cuts are applied in these plots. The temperature fluctuates seasonally with some excursions to lower temperature. The origin of the double-banded structure in the temperature data is not known at this time. The two HVPS channels track together with some excursions to high voltages. All of the HVPS excursions occur during SEP events and are therefore excluded based on that cut.
Some of the temperature excursions partially overlap an SEP event; those that do not are removed. \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{plot_mons_nstemp_2002-2017.png} \includegraphics[width=0.48\textwidth]{plot_mons_hvps_2002-2017.png} \caption{(Left) MONS sensor temperature. (Right) HVPS voltage for 2002 -- 2017.} \label{fig:engdata} \end{figure} \subsubsection{Summary of data cuts} From February 22, 2002 through the end of 2017, just over 23 million data points were processed. The total percentage of data removed by all cuts is 14.1\%. The majority of this, 12.1\% of the data or 85.6\% of the total cut, comes from the removal of SEP events. Stability cuts remove only an additional 0.2\% of the data. Spacecraft cuts affect 1.8\% of the data, but some of this overlaps with SEP event ranges. Within the spacecraft cuts, 0.56\% of the data are cut due to bad pointing/intersecting flags, 0.4\% of the data exhibit transients in the position or velocity parameters, and 0.28\% of the data are removed from equatorial crossings. Temperature cuts remove 0.65\% of the data, and the erroneous latitude registration early in the dataset affects 0.26\% of the data. \subsection{Data Corrections}\label{sec:data_correction} Several data corrections are applied after bad data are removed. These corrections are necessary to extract the correct prism counting rates and the appropriate latitude and longitude registration. \subsubsection{ADC Non-Linearity} The analog-to-digital conversion of the prism spectra introduces differential nonlinearities into the recorded histogram data. To best observe this and to determine the correction, Prism 3 histograms were averaged poleward of 85$^{\circ}$N during the Northern summer. Kilometer-thick perennial water-ice deposits cover most of the region poleward of 80$^{\circ}$N in the Northern summer \cite{Clifford2000}.
A cut selecting the period between solar longitude $L_s = 110^{\circ} - 140^{\circ}$ was used to be safely within Northern summer, based on the observations of Northern seasonal cap growth and retreat in \cite{Piqueux2015b}. Prism 3 is oriented away from the planet and is therefore expected to show only a smooth, continuous background. Deviations from this smooth function allow the nonlinearity correction to be determined. The data set analyzed for determining the nonlinearity correction (2002--2007) covers three Northern summers within this $L_s$ range that span $\sim$63 days each: 12/18/2002 -- 2/19/2003, 11/4/2004 -- 1/6/2005, and 9/22/2006 -- 11/23/2006. The raw Prism 3 spectra for these three summers are compared in Fig.~\ref{fig:adcnonlin1}. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{adcnonlin_prism3_raw_all_v2.pdf} \caption{Raw ADC counts from Prism 3 averaged over three Northern summer periods.} \label{fig:adcnonlin1} \end{figure} There are some year-to-year differences, likely due to slight differences in sensor temperature and GCR flux. However, overall a repeating 16-channel pattern can be observed. The data were smoothed by applying a centered 7-channel boxcar filter. This means that the first three channels and last three channels are not included in the smoothed data, which are shown in Fig.~\ref{fig:adcnonlin2}. \begin{figure}[h!] \centering \includegraphics[width=0.98\textwidth]{adcnonlin_prism3_raw_wsmoothed_all_v2.pdf} \caption{Raw ADC counts with smoothed curves from a 7-channel boxcar average.} \label{fig:adcnonlin2} \end{figure} The correction factor was determined by calculating $1 + (\mathrm{Smoothed}[i] - \mathrm{Raw}[i])/\mathrm{Raw}[i]$ for these 58 ADC channels. The extracted correction factors are shown in the left panel of Fig.~\ref{fig:adcnonlin3}. To determine the final correction factor, data from the appropriate channels above channel 16 are averaged (\textit{e.g.} channels 16, 32, and 48).
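The smoothing and per-channel factor extraction just described can be sketched as follows (a minimal numpy illustration assuming a 64-channel raw spectrum; the function name is ours). Note that $1 + (\mathrm{Smoothed}[i] - \mathrm{Raw}[i])/\mathrm{Raw}[i]$ simplifies to $\mathrm{Smoothed}[i]/\mathrm{Raw}[i]$:

```python
import numpy as np

def adc_correction_factors(raw):
    """Per-channel correction factors from a smooth reference spectrum:
    smooth with a centered 7-channel boxcar, then take Smoothed/Raw for
    the interior channels."""
    raw = np.asarray(raw, dtype=float)
    kernel = np.ones(7) / 7.0
    # 'valid' convolution drops 3 channels at each end of the spectrum.
    smoothed = np.convolve(raw, kernel, mode="valid")
    return smoothed / raw[3:-3]  # length len(raw) - 6, i.e. 58 for 64 channels
```

A perfectly smooth spectrum yields factors near unity; the repeating 16-channel ADC pattern shows up as systematic departures from unity, which are then averaged over the corresponding channels of each 16-channel group.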
Channels below 16 were excluded to avoid any artifacts from the boxcar smoothing near the peak. The final correction factor for each set of 16 channels comes from averaging the results of the three summers; it is shown in the right panel of Fig.~\ref{fig:adcnonlin3} and given in Table~\ref{table:adcnonlin}. \begin{figure}[h!] \centering \includegraphics[width=0.98\textwidth]{adcnonlin_prism3_correctionfactor_v2.pdf} \caption{Correction factor for ADC nonlinearity. See text for details.} \label{fig:adcnonlin3} \end{figure} \begin{table}[h] \caption{ADC nonlinearity correction factor, repeated for each 16-channel group.} \label{table:adcnonlin} \centering \begin{tabular}{|c|c|} \hline Channel & Correction Factor \\ \hline 0x & 1.00864 \\ 1x & 0.97306 \\ 2x & 1.04867 \\ 3x & 1.03395 \\ 4x & 0.95991 \\ 5x & 1.05862 \\ 6x & 0.92326 \\ 7x & 1.05427 \\ 8x & 0.97688 \\ 9x & 1.02267 \\ 10x & 0.99594 \\ 11x & 0.99265 \\ 12x & 0.89037 \\ 13x & 1.14551 \\ 14x & 1.00161 \\ 15x & 0.91398 \\ \hline \end{tabular} \end{table} Figure~\ref{fig:adcnonlin5} shows an example of the ADC nonlinearity correction applied to Prism 1 counts. The data come from $\sim$12 days during the peak of Northern winter ($L_s = 266^{\circ}-274^{\circ}$) and are averaged over this time period for latitudes poleward of 85$^{\circ}$N. The total number of counts is increased by 0.7\%, consistent with the average correction factor. We expect other systematic errors to be much larger than this residual, so it is of no concern. \begin{figure}[h!] \centering \includegraphics[width=0.75\textwidth]{adcnonlin_prism1_corrected_v2.pdf} \caption{Prism 1 ADC counts during a Northern winter, uncorrected and corrected.} \label{fig:adcnonlin5} \end{figure} \subsubsection{Gain Correction} The gain of each prism drifts over the course of the mission, due to degradation over time, high-voltage variations, or temperature variations. To determine the gain correction, the position of the $^{10}$B neutron capture peak must be identified.
Unless there are abrupt changes in the prism high voltage, the peak position is very stable and changes smoothly with time. To acquire good statistics for the determination of the peak location, and for the shape of the background that is used later, entries within a 10$^{\circ}$ $L_s$ bin were subdivided into 40 subbins. The peak location and background parameters were determined on the histograms summed within these subbins, which typically contained approximately 1500 entries. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{plot_fit_example_prism4_step1.pdf} \includegraphics[width=0.49\textwidth]{plot_fit_example_prism4_step2.pdf} \caption{Example illustrating Steps 1 and 2 of the fitting procedure to determine peak position.} \label{fig:gain2} \end{figure} The fitting procedure used in this analysis comprises five steps and is illustrated using Prism 4. Steps 1 and 2 (Fig.~\ref{fig:gain2}) follow the approach of \cite{Maurice2011}; a linear fit in log-log space is performed and then a Gaussian is fit to the rotated data. In these steps only part of the spectrum is fit, from a minimum ADC value determined by the maximum value in log-log space plus an offset (this safely places the fit minimum after the low-channel roll-over) to a maximum ADC value of channel 28. The initial guess for the mean value of the Gaussian fit in Step 2 is determined by finding all local maxima in the rotated array, excluding candidates within a certain number of channels of the minimum fit location, and selecting from the remaining candidates the one with the highest count value. This results in a single, and most often correct, guess for the mean value. In Step 3, the Gaussian fit results are used to create an excluded region around the peak of $\pm$3$\sigma$ from the mean, and the linear fit is repeated over the same constrained minimum and maximum ADC fit values to improve the linear fit parameters.
Step 4 repeats the rotation based on the tuned linear fit and refits a Gaussian to extract tuned Gaussian parameters. The tuned Gaussian mean is the peak location and is used to apply the gain correction that aligns all data to have a peak in channel 10. The peak positions for the four prisms from the start of the mission through the end of 2017 are shown in Fig.~\ref{fig:gain4}. There is a period of approximately 3.5 months during MY 31 (8/4/2012 -- 11/16/2012) during which the prism gains are too low and peak values cannot be reliably extracted; these data have been removed. \begin{figure}[h!] \centering \includegraphics[width=0.98\textwidth]{plot_gaincorr_peak_pos.pdf} \caption{Channel location of the $^{10}$B capture peak for each prism over all processed data from 2002 -- 2017.} \label{fig:gain4} \end{figure} \subsubsection{Peak Integration} Once the histograms have been gain-corrected, background parameters can be fit and the histograms integrated to determine the signal counts. The background fit parameters were constrained using the same summed histograms as used to determine the gain correction. The tuned log-log linear fit parameters and the gain-corrected Gaussian fit parameters from Steps 3 and 4 above are used to guide initial guesses of a Gaussian-plus-background fit of the spectrum in log-log space. The results are compared for a linear background and a quadratic background in Fig.~\ref{fig:gain3}. Both background fits yield essentially the same mean peak position; however, the quadratic background fits the curve better and yields a better background subtraction, in particular when the peak location is high within the ADC range. When fitting each individual data point, the Gaussian mean, Gaussian sigma, background slope, and background squared term are constrained by knowledge of the higher-statistics fits, while the Gaussian height and background offset are allowed to float.
For each data point, this new fit results in parameterizations of the signal and background functions, which are used to integrate and determine the number of signal counts. No deadtime correction is made, as \cite{Maurice2011} showed it to be quite small; all counts are therefore simply divided by 19.75~s to determine the count rate per second. \begin{figure}[h!] \centering \includegraphics[width=0.65\textwidth]{plot_fit_example_prism4_step4.pdf} \caption{Example illustrating background fits.} \label{fig:gain3} \end{figure} \subsubsection{Altitude Correction} The orbit of MONS is slightly elliptical, with an altitude that varies from $\sim$380--460~km. Figure~\ref{fig:altitude} shows a histogram of the spacecraft altitude for all of the processed data from the start of the mission in 2002 through 2017. The counting rates are normalized to an altitude of 400~km using the following equation, which corrects for the solid angle observed at a given altitude $h$ \cite{PrettymanBook}: \begin{equation} \Omega(h) = 2\pi \left[1 - \sqrt{1 - R^2/\left(R+h\right)^2}\right], \end{equation} where $R = 3389.5$~km is the mean radius of Mars. The scale factor is calculated as $\Omega$(400~km)/$\Omega(h)$ and varies from $\sim$0.99--1.05. We do not make any corrections for the local elevation of the surface. \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{sc_altitude_histo_all.pdf} \caption{Altitude of MONS with respect to the mean Mars surface sphere.} \label{fig:altitude} \end{figure} \subsubsection{Ground-Track Correction} The MONS detector is positioned such that Prism 1 faces nadir and Prisms 2 and 4 are oriented along and opposite to the direction of spacecraft motion, respectively.
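Before turning to the ground-track correction in detail, the solid-angle altitude normalization above can be sketched as follows (a minimal illustration using the constants quoted in the text; function names are ours):

```python
import math

R_MARS = 3389.5  # mean radius of Mars in km, as given in the text

def solid_angle(h):
    """Solid angle (sr) subtended by Mars at spacecraft altitude h (km)."""
    return 2.0 * math.pi * (1.0 - math.sqrt(1.0 - R_MARS**2 / (R_MARS + h) ** 2))

def altitude_scale(h, h_ref=400.0):
    """Multiplicative factor normalizing a count rate at altitude h
    to the reference altitude h_ref."""
    return solid_angle(h_ref) / solid_angle(h)
```

Rates measured below the 400~km reference (larger solid angle) are scaled down, and rates measured above it are scaled up.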
Because of the motion of the spacecraft and the fact that thermal and epithermal neutrons have velocities similar to that of the spacecraft, the latitude and longitude that each prism ``sees'' (\textit{i.e.} its ground track) is not the same as the sub-spacecraft location registered with each data point. To determine the angle offset between the registered location and the observed location, the counting rates summed over longitude as a function of latitude can be compared for Northward- and Southward-traveling segments. An observable shift is seen in the Northward versus Southward data, as illustrated in the left panel of Fig.~\ref{fig:latoffset_1} using data from the first quarter of 2004. The angle that resolves this discrepancy is the angle offset, which depends on the prism. As the spacecraft orbit has an inclination of 93.2$^{\circ}$, both the latitude and the longitude have to be corrected. \begin{figure}[h!] \centering \includegraphics[width=0.49\textwidth]{plot_latoffset_2004_firstquarter_prism2.pdf} \includegraphics[width=0.49\textwidth]{plot_latoffset_2004_firstquarter_prism2_newcoords.pdf} \caption{Prism 2 Northward and Southward counts averaged over all longitudes for the first quarter of 2004, showing an angle offset originating from spacecraft motion (left panel) and its correction (right panel).} \label{fig:latoffset_1} \end{figure} To determine the angle offset that best resolves the discrepancy for each prism, a $\chi^2$ minimization was performed, which is shown in Fig.~\ref{fig:latoffset_2}. The resulting angles are 0.4289$^{\circ}$ for Prism 1, 2.5297$^{\circ}$ for Prism 2, 0.8757$^{\circ}$ for Prism 3, and -1.5667$^{\circ}$ for Prism 4. A positive number indicates that the prism sees ahead of the spacecraft motion, while a negative number indicates behind the spacecraft. These numbers are within 10--20\% of those published in \cite{Maurice2011}. \begin{figure}[h!]
\centering \includegraphics[width=0.65\textwidth]{get_latitude_offset_method1_fit_root.pdf} \caption{$\chi^2$ optimization to determine prism angle offsets.} \label{fig:latoffset_2} \end{figure} This correction is applied to each data point: we first calculate the bearing (direction) of the ground track using the latitude and longitude of the current and subsequent data points. Given the latitudes $\theta_{1,2}$ and longitudes $\varphi_{1,2}$ of the initial point (1) and final point (2), the bearing $\lambda$ is given by: \begin{equation} \lambda = \textrm{atan2}\left(\sin\left(\varphi_2-\varphi_1\right)\cos\theta_2,\cos\theta_1\sin\theta_2 - \sin\theta_1\cos\theta_2\cos\left(\varphi_2-\varphi_1\right)\right)~. \end{equation} This is combined with the angle offset determined from the $\chi^2$ optimization ($\delta_i$ for Prism $i$) to calculate the new latitude and longitude to register with each prism: \begin{eqnarray} \theta^i_{\textrm{new}} &=& \textrm{asin}\left(\sin\theta_1\cos\delta_i + \cos\theta_1\sin\delta_i\cos\lambda\right)~, \\ \varphi^i_{\textrm{new}} &=& \varphi_1 + \textrm{atan2}\left(\sin\lambda\sin\delta_i\cos\theta_1,\cos\delta_i-\sin\theta_1\sin\theta^i_{\textrm{new}}\right)~. \end{eqnarray} Applying these corrections to the first quarter of 2004 resolves the offset in the Northward and Southward counting rates versus latitude, as shown in the right panel of Fig.~\ref{fig:latoffset_1}. Rotations of the spacecraft about the velocity direction were performed during several periods in 2009 -- 2011. These do not impact Prisms 2 and 4, which are oriented along the direction of motion; however, data from Prism 1 have been removed during these periods since it does not face nadir. The rotations occur between MY 29, $L_s\approx265$ and MY 30, $L_s\approx34$ and between MY 30, $L_s\approx316$ and MY 31, $L_s\approx61$.
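The bearing and registration formulas above can be sketched as follows (a minimal Python illustration working in radians; function names are ours):

```python
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial bearing lambda (rad) from point 1 to point 2 on a sphere."""
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.atan2(y, x)

def ground_track_point(lat1, lon1, lam, delta):
    """New (lat, lon) in radians, a great-circle angle `delta` (the prism
    angle offset) along bearing `lam` from the registered location."""
    lat_new = math.asin(math.sin(lat1) * math.cos(delta)
                        + math.cos(lat1) * math.sin(delta) * math.cos(lam))
    lon_new = lon1 + math.atan2(
        math.sin(lam) * math.sin(delta) * math.cos(lat1),
        math.cos(delta) - math.sin(lat1) * math.sin(lat_new))
    return lat_new, lon_new
```

As a sanity check, a spacecraft at the equator heading due North with the Prism 2 offset of 2.5297$^{\circ}$ registers a point 2.5297$^{\circ}$ ahead in latitude, with the longitude unchanged.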
\subsubsection{GCR Correction} The GCR flux, which is the source of the measured neutron signals, varies over time, mostly with the solar cycle. To remove this effect, the data must be normalized to a particular date with a known GCR flux. To determine the GCR correction factor, the belly-band procedure of \cite{Maurice2011} is adopted, which assumes that near the equator the ground-surface processes are in equilibrium and the neutron counting rates should therefore be stable over time. In addition to the GCR flux changing over time, seasonal changes in the density of Mars' atmosphere can lead to seasonal changes in the neutron counting rates. These must be accounted for through simulations before the GCR correction can be determined. A radiation transport tool to simulate the neutron leakage flux from Mars and a tool to transport this flux to the MONS spacecraft and predict the prism counting rates were developed; the details of these tools are discussed in the Appendix. We divided the MONS data into $2^{\circ} \times 2^{\circ}$ latitude and longitude bins within $\pm$20$^{\circ}$ of the equator. The Mars Climate Database (MCD) v5.3 global circulation model (GCM) \cite{Forget1999} was used to determine the atmospheric density on the latitude and longitude grid as a function of seasonal $L_s$. The neutron counting rates were normalized to an average atmospheric density of 16~g/cm$^2$, using the simulation described in the Appendix. With seasonal effects from the atmosphere removed from the data, the GCR correction was then determined. We chose to normalize the GCR proxy to be unity in June 2008. This corresponds to a solar modulation of $\phi = 463$~MV according to the latest Usoskin model \cite{Usoskin2017} and was chosen because it lies within the time period with the lowest uncertainty in the determination of the solar modulation. This differs from \cite{Maurice2011}, where the data were normalized to the period October -- November 2002.
Based on the difference in solar modulation, and therefore in integrated GCR flux, we expect the counting rates presented in this work to be up to a factor of 2 higher than the counting rates determined in \cite{Maurice2011}. The multiplicative GCR correction is shown in Fig.~\ref{fig:gcrnorm}. Since the chosen GCR normalization date is close to solar minimum, when the GCR flux is largest, the GCR correction factor is generally greater than 1. Between solar minimum and solar maximum, the GCR flux can change by a factor of 2.5. \begin{figure}[h!] \centering \includegraphics[width=0.98\textwidth]{plot_gcr_correction_all.pdf} \caption{GCR normalization relative to June 2008.} \label{fig:gcrnorm} \end{figure} \section{Preliminary Results}\label{sec:results} As an example of the temporal coverage available in the new data set, the prism counting rates in the polar regions are shown in Fig.~\ref{fig:result_fig14compare} for the entire dataset in 10$^{\circ}$ $L_s$ bins. Data from the North pole (latitude $>80^{\circ}$N) are overlaid with data from the South pole (latitude poleward of $80^{\circ}$S). These plots can be compared with the results from \cite{Maurice2011} (Figs.~14 and 23), which show similar plots extending partway through MY 29. As CO$_2$ frost is deposited seasonally, the neutron counting rates increase, because CO$_2$ has a low cross section for absorbing the GCR-induced neutrons, until the seasonal CO$_2$-ice cap reaches peak mass. The counting rates then decrease as the CO$_2$ frost sublimes away. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{plot_compare_maurice_fig14_p1.pdf} \includegraphics[width=1\textwidth]{plot_compare_maurice_fig14_p2.pdf} \includegraphics[width=1\textwidth]{plot_compare_maurice_fig14_p4.pdf} \caption{Counting rates in 10$^{\circ}$ $L_s$ bins (starting from $L_s$ = 0 in MY 25) for Prism 1 (top), Prism 2 (middle), and Prism 4 (bottom) from the new dataset.
The colors separate the Mars Years; the circles correspond to data poleward of $80^{\circ}$N and the stars to data poleward of $80^{\circ}$S.} \label{fig:result_fig14compare} \end{figure} Qualitatively, the same trends are observed in this data set as in \cite{Maurice2011}. The baseline count rate in the Northern summer is lower than the Southern baseline count rate, which relates to differences in the regolith and perennial ice caps at each pole. The counts at the peak of seasonal CO$_2$-frost deposition in the South are higher than the peak counts in the North. There is also a slight reduction in the Northern peak counts in MY 28--29 relative to previous years observed in Prism 2 that is not seen in Prism 1, similar to observations in \cite{Maurice2011}. Quantitatively, as mentioned previously, differences in the GCR normalization result in the present counting rates being larger than those in \cite{Maurice2011}; however, ratios of the counting rates between prisms and ratios of peak counting rates in the South to those in the North are generally consistent with \cite{Maurice2011}. The new dataset can be averaged over all years to produce averaged neutron counting rate maps for each of the prisms. These averaged maps are shown in Fig.~\ref{fig:result_fig12compare} assuming 1$^{\circ}$ binning in latitude and longitude. Only frost-free data are plotted, assuming a cutoff of 0.2~g/cm$^2$, similar to \cite{Maurice2011}. The CO$_2$ frost thickness was predicted for each latitude and longitude bin as a function of $L_s$ using the MCD GCM model \cite{Forget1999}. These maps are very similar to those in Fig.~12 of \cite{Maurice2011}. The counting rates in these frost-free maps are inversely proportional to the water content of the near-surface.
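The construction of such a frost-free map can be sketched as follows (a minimal numpy illustration, assuming per-point arrays of latitude, longitude, count rate, and predicted frost thickness; function and variable names are ours):

```python
import numpy as np

def frost_free_map(lat, lon, rate, frost, bin_deg=1.0, frost_cut=0.2):
    """Average counting rates into lat/lon bins, keeping only frost-free
    points (predicted CO2 frost thickness below frost_cut, in g/cm^2).
    Bins with no surviving data are NaN."""
    keep = np.asarray(frost) < frost_cut
    lat, lon, rate = (np.asarray(a)[keep] for a in (lat, lon, rate))
    nlat, nlon = int(180 / bin_deg), int(360 / bin_deg)
    ilat = np.clip(((lat + 90.0) / bin_deg).astype(int), 0, nlat - 1)
    ilon = np.clip((lon / bin_deg).astype(int), 0, nlon - 1)
    total = np.zeros((nlat, nlon))
    count = np.zeros((nlat, nlon))
    np.add.at(total, (ilat, ilon), rate)  # unbuffered accumulation
    np.add.at(count, (ilat, ilon), 1)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```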
\begin{figure} \centering \includegraphics[width=1\textwidth]{plot_compare_maurice_fig12_1ll_p1.pdf} \includegraphics[width=1\textwidth]{plot_compare_maurice_fig12_1ll_p2.pdf} \includegraphics[width=1\textwidth]{plot_compare_maurice_fig12_1ll_p4.pdf} \caption{Frost-free averaged neutron counting rate maps for Prism 1 (top), Prism 2 (middle), and Prism 4 (bottom) using 1$^{\circ}$ latitude and longitude bins.} \label{fig:result_fig12compare} \end{figure} Since Prism 1 and Prism 4 provide different measures of epithermal neutrons (with Prism 1 having a small contamination from thermal neutrons), we plot the correlation of the frost-free counting rates in Prism 4 against those in Prism 1 in Fig.~\ref{fig:result_p1p4corr_ff}. There is a strong correlation that is well fit by a straight line, with a slope of 0.54 in this data set and a slope of 0.48 in the Maurice \textit{et al.} data set \cite{Maurice2011}, which is shown for comparison. The effect of differences in the normalization of the data in this analysis is evident. Since Prism 1 has a larger dynamic range and smaller errors (see the discussion of errors and Fig.~\ref{fig:result_fig19compare} below), it is the better choice as the epithermal detector when frost-free data are considered. However, as discussed in \cite{Prettyman2009}, when studying the polar regions the thermal neutron counting rate is extremely sensitive to changes in the atmospheric abundance of N$_2$ and Ar, which can vary seasonally. We therefore consider both Prism 1 and Prism 4 as epithermal neutron detectors when studying the polar regions, and explore whether differences between them can be used to study how the atmosphere changes seasonally. \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{plot_compare_ff_corr_maurice.pdf} \caption{Correlation of Prism 4 counting rates with Prism 1 counting rates for frost-free data.} \label{fig:result_p1p4corr_ff} \end{figure} Stereographic polar projections of the averaged thermal neutron counting rate (Prism 2 - Prism 4) are shown in Figs.~\ref{fig:result_fig2compare} and \ref{fig:result_fig2compare_north} for the South and North poles, respectively. \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{plot_compare_to_prettyman_fig2.png} \caption{South polar stereographic projection of thermal neutron counting rates in the summer (left, $L_s = 330^{\circ} - 360^{\circ}$) and in the winter (right, $L_s = 140^{\circ} - 180^{\circ}$).} \label{fig:result_fig2compare} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\textwidth]{plot_compare_to_prettyman_fig2_north.png} \caption{North polar stereographic projection of thermal neutron counting rates in the summer (left, $L_s = 110^{\circ} - 150^{\circ}$) and in the winter (right, $L_s = 330^{\circ} - 360^{\circ}$ and $0^{\circ} - 20^{\circ}$).} \label{fig:result_fig2compare_north} \end{figure} Each plot shows the thermal neutron counting rate during local summer (left) and local winter (right). The maps extend to a latitude of 45$^{\circ}$ in the respective hemisphere and are smoothed with a Gaussian filter of 5$^{\circ}$ FWHM to remove random variations between pixels. The South pole plots in Fig.~\ref{fig:result_fig2compare} use cuts of $L_s = 330^{\circ} - 360^{\circ}$ to define Southern summer and $L_s = 140^{\circ} - 180^{\circ}$ to define Southern winter, similar to the plots in Fig.~2 of \cite{Prettyman2004}. The North pole plots in Fig.~\ref{fig:result_fig2compare_north} use cuts of $L_s = 110^{\circ} - 150^{\circ}$ to define Northern summer and $L_s = 330^{\circ} - 360^{\circ}, 0^{\circ} - 20^{\circ}$ to define Northern winter.
The summer plots for both poles can also be compared to Fig.~25 in \cite{Maurice2011}, which shows frost-free polar maps of thermal neutron counting rates down to $\pm60^{\circ}$. In the winter, the seasonal caps are clearly identified; in the Southern hemisphere, the peak in the neutron counting rate is slightly offset from center towards the Northwest in the plot. In the summer, the South pole exhibits the perennial CO$_2$ cap, which is offset to the West-Northwest in the plot. Outside of this cap, the counting rates poleward of $-60^{\circ}$ are generally much lower than over the lower-latitude terrain. The situation is similar in the summer at the North pole, and small enhancements in the counting rate above $60^{\circ}$N are similar to observations in \cite{Maurice2011}. This dataset is intended to be used for studying seasonal effects and comparing inter-annual variability; therefore, the typical data product will be a count rate binned not only in latitude and longitude, but also in year and seasonal $L_s$. The average uncertainties in each data point will therefore be larger than in \cite{Maurice2011}, which focused on removing frost effects to produce global time-averaged count rate maps. To keep the uncertainty in a given data point below 10\%, binning of 10$^{\circ}$ in $L_s$ and 4$^{\circ}$ in latitude and longitude is required. The uncertainty over all years for this binning is shown in Fig.~\ref{fig:result_fig19compare}. The average uncertainty (solid lines) varies slightly with latitude, because frost effects introduce more spread in the data, and turns up at the highest latitudes because no data exist poleward of $\pm$87$^{\circ}$. The shaded region represents the extent of the inner 80\% of the data.
\begin{figure} \centering \includegraphics[width=1\columnwidth]{compare_binned_rootfile_errors_10degLs_4degLL_manual.pdf} \caption{Uncertainty in the count rate distribution as a function of latitude in 4$^{\circ}$ bins for Prism 1 (left), Prism 2 (middle), and Prism 4 (right). The average uncertainty (solid lines) is the average over uncertainties in all bins at that latitude (10$^{\circ}$ in $L_s$ and 4$^{\circ}$ in longitude). The shaded regions correspond to the inner 80\% of all uncertainty values.} \label{fig:result_fig19compare} \end{figure} Similar plots can be made for frost-free data only (like Figs.~19 and 26 in \cite{Maurice2011}); the errors in the polar regions will increase slightly due to a reduction in the number of data entries available, but the spread in the uncertainty will decrease. Assuming the data are evenly spread over the 8 Martian Years, the errors shown in Fig.~\ref{fig:result_fig19compare} will increase by a factor of $\sqrt{8}\approx2.8$ when considering a single year, leading to errors of $\sim$4\%--7\% for Prism 1, $\sim$3.5\%--5\% for Prism 2, and $\sim$5.5\%--10\% for Prism 4. We generated a map with 10$^{\circ}$ binning in $L_s$ and 4$^{\circ}$ binning in latitude and longitude and plotted counting rate trends as a function of $L_s$ to perform a preliminary comparison of inter-annual variability. The counting rates averaged over all Mars years versus $L_s$ for latitude bins in the polar regions are shown in Figs.~\ref{fig:result_p1allyears}, \ref{fig:result_p4allyears}, and \ref{fig:result_p2m4allyears}, for Prism 1 (epithermal neutrons), Prism 4 (alternate epithermal neutrons), and Prism 2 - Prism 4 (thermal neutrons), respectively. The counts are integrated over all longitudes, and the latitude bin noted in the legend is the center point of the 4$^{\circ}$ bin.
A rough summer-time background subtraction is applied to the counting rates, using the average counting rate during summer as a baseline, so that the counts above this baseline can be compared across latitudes. Both epithermal analogs show similar trends, with the Prism 4 counting rates about a factor of 2 lower than the Prism 1 counting rates. The thermal counting rates follow a much different trend and are much higher. At this stage, atmospheric effects have not been removed from the data. \begin{figure}[h] \centering \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p1_combinedyears.pdf} \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p1_north_combinedyears.pdf} \caption{(Color online) Example of background-subtracted counting rates for Prism 1 (epithermal) as a function of $L_s$ at the South pole (left) and the North pole (right), averaged over all MY.} \label{fig:result_p1allyears} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p4_combinedyears.pdf} \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p4_north_combinedyears.pdf} \caption{(Color online) Same as Fig.~\ref{fig:result_p1allyears} but for Prism 4 (alternate epithermal).} \label{fig:result_p4allyears} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p2m4_south_combinedyears.pdf} \includegraphics[width=0.48\columnwidth]{plot_ls_variations_by_lat_p2m4_north_combinedyears.pdf} \caption{(Color online) Same as Fig.~\ref{fig:result_p1allyears} but for Prism 2 - Prism 4 (thermal).} \label{fig:result_p2m4allyears} \end{figure} The counting rates as a function of $L_s$ are separated by MY for the 86$^{\circ}$ latitude band in Figs.~\ref{fig:result_p1_sepyears}, \ref{fig:result_p4_sepyears}, and \ref{fig:result_p2m4_sepyears} for Prism 1 (epithermal neutrons), Prism 4 (alternate epithermal neutrons), and Prism 2 - Prism 4 (thermal neutrons), respectively. \begin{figure}[h!] \centering \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat-86_p1_south_combinedyears.pdf} \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat86_p1_north_combinedyears.pdf} \caption{(Color online) Prism 1 (epithermal) counting rates for 86$^{\circ}$S (top) and 86$^{\circ}$N (bottom) as a function of $L_s$ separated by MY.} \label{fig:result_p1_sepyears} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat-86_p4_south_combinedyears.pdf} \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat86_p4_north_combinedyears.pdf} \caption{(Color online) Same as Fig.~\ref{fig:result_p1_sepyears} but for Prism 4 (alternate epithermal).} \label{fig:result_p4_sepyears} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat-86_p2m4_south_combinedyears.pdf} \includegraphics[width=1\columnwidth]{plot_ls_variations_by_year_lat86_p2m4_north_combinedyears.pdf} \caption{(Color online) Same as Fig.~\ref{fig:result_p1_sepyears} but for Prism 2 - Prism 4 (thermal).} \label{fig:result_p2m4_sepyears} \end{figure} These plots also show the counting rate averaged over all MY for reference as a solid line. 
In general, the counting rate trends from year to year are very reproducible and agree within the uncertainties of the data. The notable exception to the reproducibility, which was previously mentioned, is the thermal neutron counting rate from $L_s\sim315^{\circ}$ in MY 28 through $L_s\sim45^{\circ}$ in MY 29. At the peak of Northern winter, the MY 29 counting rates are about 14\% lower than the average of the other Mars years. This drop in counting rate, which is suggestive of less CO$_2$ deposition, occurs after the planet-wide global dust storm that emerged around $L_s \sim 265^{\circ}$ in MY 28 \cite{Smith2009,Wang2015}. Observations in the Mars Climate Sounder thermal infrared data \cite{Piqueux2015b} show that the Northern seasonal cap is $\sim$10\% smaller in spatial extent following the MY 28 global dust storm. In future work, it will be interesting to process and analyze the MONS data through 2018, as another global dust storm with an onset at $L_s = 185^{\circ}$ occurred in MY 34. \section{Summary \& Future Work} In summary, we have performed a full re-analysis of Mars Odyssey Neutron Spectrometer data, extending the coverage of MONS data through the end of 2017 to cover 8 Martian Years. This paper summarizes the data analysis procedure, including data reduction and data corrections, to document and provide the necessary understanding to process raw MONS data from the PDS and utilize this new data set. Example results based on this new data set, including frost-free global counting rate maps and maps of the polar regions, were presented in Section~\ref{sec:results}. These maps were qualitatively compared to previous analyses of the MONS data performed in \cite{Prettyman2004,Maurice2011} and found to show similar trends. Due to choices made in the processing of this data set, the overall normalization of the data is different from that of the previous analyses.
We showed the averaged counting rates for different latitude bands in the polar regions, which show the typical latitude dependence of the counting rates as a function of $L_s$. Preliminary results on inter-annual variability of the seasonal CO$_2$ caps were also presented in Section~\ref{sec:results}, which show reproducibility in the Southern seasonal cap based on both thermal and epithermal neutron counting rates, and reproducibility in the Northern seasonal cap based on the epithermal neutron counting rates and in thermal counting rates in all years except MY 28 - MY 29, following a global dust event. The variation in the thermal neutron counting rates in MY 28 - MY 29 may be due to a combination of cap properties and atmospheric properties, and will be explored in future work that will include an inter-annual comparison across all the latitude bins. Work by this team utilizing the new data set is ongoing. We are currently performing the necessary simulation and modeling efforts to normalize the data and convert counting rates to CO$_2$ frost thickness in both the North and South polar regions and better understand atmospheric effects. With these efforts, we will be better able to interpret the overall properties of the seasonal caps and how the MY 28 global dust storm impacted the overall mass of CO$_2$ deposited and any changes in the extent of the Northern seasonal cap following this event. This work will be the subject of future papers. \section*{Acknowledgements} Research presented in this paper was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20160672PRD3 and the NASA Mars Data Analysis Program under project number 80HQTR170004. This research used resources provided under the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 8923328CNA000001.
\newpage \section*{Appendix A: Simulation Tools} \subsection*{Neutron Flux Signal} The radiation transport simulation package Geant4 \cite{Agostinelli2003} was utilized to develop a tool to simulate the expected neutron leakage from Mars at the top of the atmosphere. Benchmarking of this simulation package \cite{Mesick2018} has shown that with the appropriate choice of physics model, Geant4 and MCNP6.2 \cite{MCNP6.2} produce similar results, but both slightly over-predict neutron density profiles measured in the Lunar Neutron Probe Experiment \cite{Woolum1975}. The shapes of the curves generally match the data well, and therefore with proper normalization these types of simulation tools are helpful in the interpretation of planetary neutron data. To determine the change in neutron flux with atmospheric density, simulations of a single-layer soil model with varying amounts of water-equivalent hydrogen (WEH) and varying atmospheric thicknesses were performed. The soil composition was based on the S21 average composition from \cite{Diez2008}, with elemental compositions given in Table~\ref{table:s21}. A fixed Cl abundance of 0.517\% was added to this composition, close to the average measured by the Mars Odyssey gamma-ray instrument. The size of the atmosphere normalization correction to the counting rate depends on the average WEH of the soil, and published maps of average WEH from the Mars Odyssey gamma-ray detector, available in the PDS, were used to estimate the average WEH within the belly band region. After adding in the Cl abundance and the appropriate amount of H$_2$O, the S21 elemental abundances were scaled uniformly so that the total elemental abundance summed to unity. The density of the soil was assumed to be 1.8~g/cm$^3$.
\begin{table}[h] \centering \caption{Elemental weight fractions in the base soil composition based on S21 from \cite{Diez2008}} \label{table:s21} \begin{tabular}{|c|c|c|c|c|} \hline O & 0.4223 & & K & 0.0022 \\ \hline Na & 0.0147 & & Ca & 0.0419 \\ \hline Mg & 0.0478 & & Ti & 0.0052 \\ \hline Al & 0.0464 & & Cr & 0.0016 \\ \hline Si & 0.2075 & & Mn & 0.0033 \\ \hline S & 0.0278 & & Fe & 0.1466 \\ \hline \end{tabular} \end{table} The composition of the atmosphere in the simulation was based primarily on values used in \cite{Prettyman2004}, which come from the Viking data \cite{Lewis1984} with minor modifications. The concentrations of CO$_2$, H$_2$O, N$_2$, and Ar from \cite{Prettyman2004} are 96.93\%, 0.054\%, 2.7\%, and 1.6\%, respectively. In addition, an O$_2$ concentration of 0.13\% was assumed, taken from \cite{Lewis1984}. The atmosphere was simulated out to 40~km above the surface, with 20 layers of exponentially decreasing density (exp($-h/H$), where $H$ is the scale height) and a scale height of 10.8~km, similar to the layering used in simulations by \cite{Jun2013}. As described in \cite{Feldman1989}, gravitational binding of neutrons can affect the measured flux spectra. On Mars, the gravitational binding energy is 0.132~eV. Neutrons below this energy can return to the surface and re-interact. Gravitational binding of neutrons was implemented in our simulation by a reflecting boundary at the top of the atmosphere that reflected neutrons below the binding energy back to the surface. Since the surface return time $\Delta t$ (derived in \cite{Feldman1989}) can be on the order of the neutron decay lifetime, a weighting factor exp($-\Delta t/\tau$) was applied based on the probability of neutron decay assuming the most recent value of the free neutron lifetime, $\tau$ = 880.2~s \cite{PDG2018}.
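The layered atmosphere just described is straightforward to set up. Below is a minimal sketch (not the actual simulation input deck) that splits a truncated exponential atmosphere into 20 constant-density slabs, using the 40~km ceiling and 10.8~km scale height from the text and normalizing the total column to a target areal density:

```python
import math

def layer_columns(column_gcm2, n_layers=20, top_km=40.0, scale_height_km=10.8):
    """Areal density (g/cm^2) of each constant-density slab in an
    exponentially stratified atmosphere, rho(h) ~ exp(-h/H), truncated
    at top_km. The surface density rho0 is normalized so that the
    slabs sum to the target column."""
    H = scale_height_km * 1.0e5   # km -> cm
    top = top_km * 1.0e5
    rho0 = column_gcm2 / (H * (1.0 - math.exp(-top / H)))  # g/cm^3 at the surface
    dz = top / n_layers
    cols = []
    for i in range(n_layers):
        z_lo, z_hi = i * dz, (i + 1) * dz
        # exact column of the exponential between the slab boundaries
        cols.append(rho0 * H * (math.exp(-z_lo / H) - math.exp(-z_hi / H)))
    return cols

cols = layer_columns(16.0)   # a mid-range atmospheric thickness from the text
print(sum(cols))             # telescoping sum recovers the target column (16.0)
```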
The epithermal and fast neutron fluxes are not affected by gravitational binding; however, the thermal neutron flux is almost 50\% higher at the top of the atmosphere when gravitational binding is included. Simulations were run for WEH values of 1\%, 3\%, 4\%, 5\%, 6\%, and 8\% (spanning the range of measured values in the belly band region) with total atmospheric thicknesses ($\rho_a$) of 4, 8, 12, 16, 20, and 24~g/cm$^2$ (spanning the range of values predicted by the GCM model \cite{Forget1999} in the belly band region) at each WEH point. In these simulations, the GCR flux was modeled following the Castagnoli \& Lal model \cite{Munoz1975} with a solar modulation of $\phi = 900$~MV. More details about this model and other GCR models can be found in \cite{Mesick2018}. At this stage, the neutron energy and angle at the top of the atmosphere are recorded. Since only ratios are compared in the atmospheric correction, the absolute GCR flux is not important. \subsection*{Detector Response} The MONS instrument response was modeled in a separate simulation utilizing Geant4; the geometry is shown in Fig.~\ref{fig:mons_sim}. The geometry of the four boron-loaded plastic prisms was modeled, including cadmium covering where appropriate. The gaps between the Prism 1 face and the cadmium sheet, where thermal neutrons can leak in, were also modeled. The external spacecraft was not included in the model based on results from \cite{Prettyman2004}, which showed that Prism 1 is well shielded from spacecraft background, and that Prism 2 and Prism 4 have the same response, so that the subtraction Prism 2 - Prism 4 used to determine the thermal neutron counting rate will effectively remove the spacecraft background. \begin{figure}[h!] \centering \includegraphics[width=0.6\textwidth]{mons_sim.pdf} \caption{Geometry modeled in the detector response simulations. The gaps in the cadmium covering are indicated.
In this view, the cadmium covering the front (Prism side) face has been removed for clarity to show the Prism geometry.} \label{fig:mons_sim} \end{figure} The efficiency for neutron capture of Prism 1 (Cd covered), Prism 3 (not Cd covered), and the side of Prism 1 facing away from the spacecraft was simulated. A rectangular planar source, the same size as the prism face, was placed directly on top of the prism to fully cover the prism face in each scenario. Neutrons were uniformly generated from random points within the rectangular dimensions. Neutron energies from 1$\times$10$^{-10}$ MeV to 1 MeV, increasing in increments of one decade, were simulated; at each energy the neutrons originated at angles from 0$^{\circ}$ to 85$^{\circ}$ in increments of 5$^{\circ}$. The Prism 1 simulation provides the efficiency for epithermal neutron detection, and the Prism 3 simulation provides the efficiency for thermal neutron detection. Figure~\ref{fig:mons_ea} shows the efficiency for neutron capture in Prism 1 (epithermal neutrons), Prism 3 (thermal neutrons), and the side of Prism 1. The Prism 1 side efficiency is lower than the Prism 1 efficiency since the area of the side is $1/4$ that of the prism face. In addition to the efficiency being tabulated for events hitting the primary prism in each configuration, the efficiency of events detected in, for example, Prism 2 when the source was on Prism 1 was also tabulated. From these simulations the effective area was calculated by multiplying the source area (110~cm$^2$) by the efficiency at each neutron energy and angle. \begin{figure}[h!]
\centering \includegraphics[width=0.8\textwidth]{Prism_Efficiencies_Report.pdf} \caption{The neutron capture efficiency as a function of energy, for an incident angle of 0$^{\circ}$.} \label{fig:mons_ea} \end{figure} The simulated MONS counting rate was calculated by taking the simulated neutron current at the top of the Mars atmosphere (40~km) and combining it with the appropriate effective area tables from the Geant4 detector response simulations. However, several intermediate steps are needed to account for the hyperbolic trajectory of the particles, the gravitational binding of neutrons, and the spacecraft velocity. The procedure to calculate the count rate for each prism includes equations derived in \cite{Feldman1989} and follows the steps below. \begin{enumerate} \item Transform the energy and angle to account for the hyperbolic trajectory due to the gravity of Mars, \item Transform the energy and angle into the spacecraft motion frame, \item Interpolate the effective area of the incident prism from the final energy and angle of the neutron, and \item Account for the neutron lifetime. \end{enumerate} The energy and angle of the simulated neutrons were transported to the spacecraft orbit assuming hyperbolic trajectories. The energy was adjusted to account for the effect of the binding energy of Mars (0.132 eV). The final energy of the neutron is \cite{Feldman1989} $K_r = K-V\frac{(R-R_M)}{R}$, where $K$ is the energy of the neutron leaving the surface of Mars, $V$ is the binding energy of Mars, $R$ is the spacecraft orbital radius (distance from the center of Mars), and $R_M$ is the radius of Mars. The neutron incident angle was adjusted for the hyperbolic trajectory caused by the binding energy of Mars \cite{Feldman1989}: $\mu_r^2=1-\left(\frac{R_M}{R}\right)^2\frac{K}{K_r}(1-\mu^2)$, where $\mu$ is the cosine of $\theta$, the angle at which the neutron leaves the surface, and $\mu_r$ is the cosine of $\theta_R$, the angle of the neutron at orbit.
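The two orbit transformations above can be sketched directly in code. The binding energy (0.132 eV) and the formulas are from the text; the Mars radius and the $\sim$400~km mapping-orbit altitude used in the example are assumed illustrative values:

```python
import math

V_BIND = 0.132          # gravitational binding energy of Mars for neutrons, eV
R_MARS = 3389.5e3       # radius of Mars, m (assumed value)

def orbit_energy(K_surface_eV, R_orbit_m):
    """Neutron energy at orbit: K_r = K - V*(R - R_M)/R (Feldman 1989)."""
    return K_surface_eV - V_BIND * (R_orbit_m - R_MARS) / R_orbit_m

def orbit_angle_cosine(mu_surface, K_surface_eV, R_orbit_m):
    """Cosine of the neutron's angle at orbit, from the hyperbolic-trajectory
    relation mu_r^2 = 1 - (R_M/R)^2 * (K/K_r) * (1 - mu^2)."""
    K_r = orbit_energy(K_surface_eV, R_orbit_m)
    mu_r_sq = (1.0 - (R_MARS / R_orbit_m) ** 2
               * (K_surface_eV / K_r) * (1.0 - mu_surface ** 2))
    return math.sqrt(mu_r_sq)

R = R_MARS + 400e3               # assumed ~400 km mapping orbit
print(orbit_energy(1.0, R))      # a 1 eV neutron loses a small fraction of its energy
print(orbit_angle_cosine(0.5, 1.0, R))  # the trajectory bends toward radial
```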
The new energy was transformed into the reference frame of the spacecraft depending on which prism the neutron was determined to hit. Using Galilean velocity transformations, the constant speed of the spacecraft (3380~m/s) is added to (subtracted from) the component of the neutron velocity in the direction of the spacecraft motion if the neutron hits Prism 2 (4). For Prism 1 and the prism side, a random sampling of azimuthal $\phi$ values was assigned to each neutron angle and energy, and the incident velocity was calculated with the $\phi$ value included. This accounts for all possible points from which neutrons may hit the detector, and the results were averaged. This correction affects what energy each prism ``sees'' based on the motion of the spacecraft, which impacts the effective area. Prism 2 moves ``toward'' the neutrons, adding to their incident energy, while Prism 4 travels ``away'' from the neutrons, reducing their energy upon impact. In addition to this effect, the neutron flux itself must also be corrected for the shift in detected neutron energy. The original neutron flux and velocity at the top of the atmosphere were used to calculate the neutron density. This neutron density was then multiplied by the velocity of the neutron at the spacecraft to get the neutron flux in the reference frame of the moving spacecraft. Finally, given the definition of angles in the detector response simulations, an angle transformation of 90$^{\circ}$ is applied to events hitting Prism 2, Prism 4, and the prism side. The effective area tables were interpolated in energy and angle using SciPy's two-dimensional interpolation routines in Python. For events hitting Prism 1, the Prism 1 effective area tables were used. For events hitting Prism 2 or Prism 4, the Prism 3 effective area tables were used. For events hitting the side of the prism cube, the Prism 1 side effective area tables were used.
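The Galilean frame shift for the ram- and wake-facing prisms can be illustrated with a short sketch. The 3380~m/s spacecraft speed is from the text; the sign convention and the neutron-mass kinematics below are an illustrative simplification, not the actual analysis code:

```python
import math

M_N = 1.674927e-27      # neutron mass, kg
EV = 1.602177e-19       # J per eV
V_SC = 3380.0           # spacecraft speed, m/s (from the text)

def shifted_energy(K_eV, mu, toward=True):
    """Neutron energy in the spacecraft frame. K_eV and mu are the energy (eV)
    and direction cosine of the neutron velocity along the ram direction.
    toward=True mimics the ram-facing prism (Prism 2), toward=False the
    wake-facing prism (Prism 4)."""
    v = math.sqrt(2.0 * K_eV * EV / M_N)
    # Galilean shift of the velocity component along the spacecraft motion
    v_par = v * mu + (V_SC if toward else -V_SC)
    v_perp = v * math.sqrt(max(0.0, 1.0 - mu * mu))
    return 0.5 * M_N * (v_par ** 2 + v_perp ** 2) / EV

# A thermal neutron (0.025 eV, v ~ 2200 m/s) looks very different to the two prisms:
print(shifted_energy(0.025, 1.0, toward=True))   # boosted (ram-facing)
print(shifted_energy(0.025, 1.0, toward=False))  # reduced (wake-facing)
```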
The primary signals (events hitting only the primary prism, and ignoring contributions from the side) account for 65\% of Prism 1 and Prism 4 events and 85\% of Prism 2 events. Finally, a weighting factor is applied to account for the probability of neutron decay during the transport of neutrons from 40~km to the spacecraft orbit, using a neutron lifetime of $\tau$ = 880.2~s \cite{PDG2018} and the neutron transit time \cite{Feldman1989} (note correction of sign error): \begin{eqnarray} &&\Delta t_r = \frac{R_M(m/2V)^{1/2}}{2\left[1-K/V\right]^{3/2}}\times \\ \nonumber &&\left\{2\mu\left(1-\frac{K}{V}\right)^{1/2}\left(\frac{K}{V}\right)^{1/2}\left[1-\left(\frac{\tan^2\theta}{\tan^2\theta_R}\right)^{1/2}\right] + \sin^{-1}\left(\frac{B}{[A^2+B^2]^{1/2}}\right) + \sin^{-1}\left(\frac{1-2K_R/V_R}{[A^2+B^2]^{1/2}}\right)\right\}~, \end{eqnarray} where $A = \left[4(K/V)(1-K/V)\mu^2\right]^{1/2}$ and $B = 2K/V-1$ for $K/V<1$, and the other constants have already been defined. \subsection*{GCR Correction Details} The resulting ratio of count rates for Prism 1, Prism 2, Prism 3, and Prism 4 as a function of atmospheric density ($\rho_a$) for the different WEH values is shown in Fig.~\ref{fig:atmosim}. For low WEH content, the correction factor is as much as 19\% for the lowest atmospheric density; however, the typical correction is much smaller. These curves were fit to second-order polynomials, leading to the fit parameters given in Table~\ref{table:fit_params}. \begin{figure}[h!]
\centering \includegraphics[width=0.48\textwidth]{plot_counts_p1_norm_to_16_vs_weh.pdf} \includegraphics[width=0.48\textwidth]{plot_counts_p2_norm_to_16_vs_weh.pdf} \includegraphics[width=0.48\textwidth]{plot_counts_p3_norm_to_16_vs_weh.pdf} \includegraphics[width=0.48\textwidth]{plot_counts_p4_norm_to_16_vs_weh.pdf} \caption{Simulated Prism counting rates normalized to an atmospheric density of 16~g/cm$^2$ for different WEH abundances.} \label{fig:atmosim} \end{figure} \begin{table}[h] \centering \caption{Fit parameters (squared term, linear term, and constant term) for determining atmospheric scaling factor relative to 16~g/cm$^2$.} \label{table:fit_params} \begin{tabular}{lcccccc} Prism/Term & 1\% WEH & 3\% WEH & 4\% WEH & 5\% WEH & 6\% WEH & 8\% WEH \\ \hline Prism 1 Sqr & 3.169e-4 & 3.133e-4 & 3.752e-4 & 4.160e-4 & 4.265e-4 & 4.237e-4 \\ Prism 1 Lin & -2.227e-2 & -1.990e-2 & -2.077e-2 & -2.143e-2 & -2.060e-2 & -1.880e-2\\ Prism 1 Cns & 1.275 & 1.241 & 1.239 & 1.235 & 1.221 & 1.192 \\ \hline Prism 2 Sqr &2.831e-4 & 1.827e-4 & 2.405e-4 & 2.956e-4 & 2.545e-4 & 2.088e-4 \\ Prism 2 Lin &-1.145e-2 & -1.114e-2 & -1.393e-2 & -1.608e-2 & -1.485e-2 & -1.418e-2 \\ Prism 2 Cns &1.110 & 1.137 & 1.165 & 1.179 & 1.174 & 1.177 \\ \hline Prism 3 Sqr &3.148e-4 & 3.132e-4 & 3.751e-4 & 4.158e-4 & 4.289e-4 & 4.258e-4 \\ Prism 3 Lin &-2.216e-2 & -1.981e-2 & -2.067e-2 & -2.132e-2 & -2.056e-2 & -1.873e-2 \\ Prism 3 Cns &1.274 & 1.240 & 1.238 & 1.234 & 1.219 & 1.190 \\ \hline Prism 4 Sqr &2.753e-4 & 3.189e-4 & 3.701e-4 & 4.173e-4 & 4.571e-4 & 4.629e-4 \\ Prism 4 Lin &-2.073e-2 & -2.097e-2 & -2.183e-2 & -2.298e-2 & -2.321e-2 & -2.157e-2 \\ Prism 4 Cns &1.261 & 1.257 & 1.257 & 1.259 & 1.255 & 1.226 \\ \hline \end{tabular} \end{table} To correct for the changes in neutron counting rates due to seasonal variations in the atmosphere, the data were binned for each year in $L_s$.
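As an illustration of applying the fits in Table~\ref{table:fit_params}, the sketch below evaluates the Prism 1 quadratic at a given $\rho_a$ and interpolates between the simulated WEH grid points (linear interpolation is assumed here for illustration; the coefficients are copied from the table):

```python
import numpy as np

# Quadratic fit coefficients (squared, linear, constant) for Prism 1,
# copied from Table (fit_params), indexed by WEH percentage.
P1_COEFFS = {
    1: (3.169e-4, -2.227e-2, 1.275),
    3: (3.133e-4, -1.990e-2, 1.241),
    4: (3.752e-4, -2.077e-2, 1.239),
    5: (4.160e-4, -2.143e-2, 1.235),
    6: (4.265e-4, -2.060e-2, 1.221),
    8: (4.237e-4, -1.880e-2, 1.192),
}

def scale_factor(rho_a, weh_pct):
    """Counting-rate scale factor relative to 16 g/cm^2, interpolated
    linearly in WEH between the simulated grid points."""
    wehs = sorted(P1_COEFFS)
    vals = [np.polyval(P1_COEFFS[w], rho_a) for w in wehs]
    return float(np.interp(weh_pct, wehs, vals))

# By construction, the curves are normalized at 16 g/cm^2:
print(scale_factor(16.0, 4.0))   # ~1
print(scale_factor(4.0, 4.0))    # thin atmosphere -> larger factor
```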
At each $L_s$ point, the MCD GCM \cite{Forget1999} was used to determine $\rho_a$ for each 2$^{\circ}$$\times$2$^{\circ}$ latitude and longitude bin within the belly band region. The correction factor based on this $\rho_a$ was then calculated for each of the six simulated WEH values using the fitted parameters. The Mars Odyssey gamma-ray map of WEH was then used to determine the WEH of the soil within each bin, and the final correction factor was determined by interpolating the correction factors over the range of simulated WEH values. Once the neutron counting rates were normalized to 16~g/cm$^2$, the GCR correction factor was determined. \newpage \bibliographystyle{elsarticle-num}
\section{Introduction} In hierarchical clustering theories of structure formation, like the cold dark matter (CDM) models, small mass clumps of dark matter form first and subsequently gather into larger and larger masses. The structure of these dark matter ``halos'' is likely to be related to how the halos formed, the initial spectrum of the density fluctuations, and the underlying cosmology. Several properties of galactic and cluster halos can be well constrained by observations. So, if the matter distributions in dark halos are fossils that depend on some of the properties of structure formation models, like their initial power spectrum, one would have a useful observational handle on these properties. It is therefore necessary to understand what determines the matter distribution (or density profiles) of dark matter halos {\it ab initio}. This forms the motivation of the present paper and the companion paper by Subramanian, Cen and Ostriker (SCO99, 1999). Further, Navarro, Frenk and White (NFW) (1995, 1996, 1997) have proposed, from their N-body simulations, that dark matter halos in hierarchical clustering scenarios develop a universal density profile, regardless of the scenario for structure formation or cosmology. The NFW profile has an inner cuspy form with the density $\rho \propto r^{-1}$ and an outer envelope of the form $\rho \propto r^{-3}$. Several investigators have found that the NFW profile provides a moderately good fit to numerical simulations (Cole and Lacey 1996, Tormen, Bouchet $\&$ White 1997, Huss, Jain $\&$ Steinmetz 1997, 1999, Thomas {\it et al.}, 1998). Recently, though, high resolution simulations of cluster formation in a CDM model by Moore {\it et al.} (1998) yielded a core density profile $\rho(r) \propto r^{-1.4}$, shallower than $r^{-2}$ but steeper than the $r^{-1}$ form preferred by NFW, consistent with the earlier high resolution work of Xu (1995).
(For smaller mass halos, Kravtsov {\it et al.} (1998) find the core density profile to be shallower than the NFW form.) It is important to understand these results on general theoretical grounds as well. In the companion paper SCO99, we explore the possibility that a nested sequence of undigested cores in the center of a halo, which have survived the inhomogeneous collapse to form larger and larger objects, determines halo structure in the inner regions. For a flat universe with a power spectrum of density fluctuations $P(k) \propto k^n$, scaling arguments then suggest that the core density profile scales as $\rho \propto r^{-\alpha}$, with $\alpha = \alpha_n = (9+3n)/(5+n)$. But whether such a scaling law indeed obtains depends on the detailed dynamics. Similarity solutions often provide a tractable, semi-analytic route to study time dependent dynamics in complicated physical systems. Fillmore and Goldreich (FG, 1984) and Bertschinger (B85, 1985) derived such solutions for describing the purely radial collapse of cold, collisionless matter in a perturbed Einstein-de Sitter universe. These solutions need to be generalised to incorporate tangential velocity dispersions, which, as we see below, turn out to be crucial for understanding density profiles shallower than $1/r^2$. Some general, analytical aspects of the similarity solutions incorporating tangential velocity dispersions are outlined in the companion paper SCO99. In the present paper, we consider these self-similar collapse solutions in greater detail, by deriving and numerically solving the scaled moment equations for such a collapse, including the effect of velocity dispersions. In the next section we formulate the self-similar collapse problem and introduce a fluid approach for its solution, recapitulating the corresponding discussion in SCO99. In Section 3 we derive scaled moment equations describing the collapse and discuss their numerical solution.
Specific numerical examples of self-similar collapse are given in Section 4. These solutions include the effect of halo velocity dispersions (both radial and tangential), and consider a range of spherically averaged initial density profiles. The final section discusses the results and presents our conclusions. \section{ The self-similar solution } We summarize in this section some of the properties of the similarity solution that can be derived by analytic arguments. Although this section is largely a recapitulation of Section 3 of SCO99, we include it here to make the present paper as self-contained as possible, and also to set the framework for the detailed numerical work which follows. Consider the collapse of a single spherically symmetric density perturbation in a flat background universe. Assume the initial density to be a power law in radius. We expect to describe the dynamics through a self-similar solution. FG and B85 looked at the self-similar evolution by following the self-similar particle trajectory. We adopt a different approach, by examining directly the evolution of the phase space density. During the course of this work we have learned that several authors (Padmanabhan 1994, unpublished notes; Padmanabhan 1996a; Chieze, Teyssier and Alimi 1997; Henriksen and Widrow 1997) have also adopted this approach to the purely radial self-similar collapse of FG and B85. We will also emphasise and incorporate here an additional aspect: the distinctive role of non-radial motions (velocity dispersions) in self-similar collapse. The evolution of the dark matter phase space density $f({\bf r}, {\bf v}, t)$ is governed by the Vlasov equation, \begin{equation} {\partial f \over \partial t} + {\bf v}. {\partial f \over \partial{\bf r}} + {\bf a}. {\partial f \over \partial{\bf v}} = 0 \label{Vlasov} \end{equation} where ${\bf r}$ and ${\bf v} = \dot {\bf r}$ are the proper co-ordinate and velocity of the particles respectively.
Also the acceleration ${\bf a} = \dot {\bf v} = - {\bf \nabla }\Phi$, with \begin{equation} {\bf \nabla}^2\Phi = 4 \pi G \rho = 4 \pi G \int f d^3 {\bf v} \label{pois} \end{equation} By direct substitution, it is easy to verify that these equations admit self similar solutions of the form \begin{equation} f({\bf r}, {\bf v}, t) = k_2 k_1^{-3} t^{-q -2p} F( {{\bf r}\over k_1 t^p}, {{\bf v}\over k_1 t^q}) ; \qquad p = q + 1 \label{scalf} \end{equation} where $k_1,k_2$ are constants which we will fix to convenient values below. We have used proper co-ordinates here since the final equilibrium halo is most simply described in these co-ordinates. (The same solution in co-moving co-ordinates is given in Padmanabhan (1996a)). Defining a new set of co-ordinates ${\bf y} = {\bf r}/(k_1t^p)$, ${\bf w} = {\bf v}/(k_1t^q)$ and a scaled potential $\chi =k_1^{-2} t^{-2q}\Phi$, the scaled phase space density $F$ satisfies \begin{equation} -(q + 2p) F - p {\bf y}. {\partial F \over \partial{\bf y}} -q {\bf w}. {\partial F \over \partial{\bf w}} + {\bf w}. {\partial F \over \partial{\bf y}} -{\bf \nabla}_{\bf y}\chi . {\partial F \over \partial{\bf w}} = 0 ; \label{valsc} \end{equation} \begin{equation} {\bf \nabla}_{\bf y}^2\chi = 4 \pi G k_2 \int F d^3 {\bf w} . \label{potsc} \end{equation} Consider the evolution of a spherically symmetric density perturbation, in a flat universe whose scale factor $a(t) \propto t^{2/3}$. For self similar evolution, the density is given by \begin{equation} \rho(r,t) = \int f d^3{\bf v} = k_2 t^{-2}\int F(y, {\bf w}) d^3{\bf w} \equiv k_2 t^{-2} \psi(y) \label{densc} \end{equation} where we have defined $r = \vert {\bf r} \vert$, $y = \vert {\bf y} \vert$ and used the relation $p = q+1$. For the flat universe, the background matter density evolves as $\rho_b(t) = 1/(6 \pi G t^2)$. So the density contrast $\rho(r,t)/\rho_b(t) = \psi(y)$, where we take $k_2 = 1/(6\pi G)$.
\subsection{ Linear and non-linear limits} Let the initial excess density contrast averaged over a sphere of co-moving radius $x= r/a(t) \propto rt^{-2/3}$ be a power law $\bar\delta(x,t_i) \propto x^{-3\epsilon}$. Since $\rho/\rho_b$ is a function of $y$ alone, $\bar\delta(x,t)$ will also be a function only of $y$. Note that, in the linear regime, the excess density contrast averaged over a {\it co-moving} sphere grows as the scale factor $a(t)$. So one can write for the linear evolution of the spherical perturbation \begin{equation} \bar\delta(r,t)= \bar\delta_0 x^{-3\epsilon}t^{2/3} \propto \bar\delta_0 r^{-3\epsilon}t^{2/3 + 2\epsilon} \propto \bar\delta_0 y^{-3\epsilon}t^{- 3\epsilon p + 2/3 + 2\epsilon} , \label{lincon} \end{equation} where we have substituted $r \propto y t^p$. This can be a function of $y$ alone for a range of $t$ in the linear regime iff $- 3\epsilon p + 2/3 + 2\epsilon = 0$, which gives \begin{equation} p = {2 + 6\epsilon \over 9\epsilon} \label{adet} \end{equation} We see that once the initial density profile is specified, the exponents $p,q$ of the self similar solution are completely determined. Consider now what happens in the non-linear limit. The zeroth moment of the Vlasov equation gives \begin{equation} {\partial \rho \over \partial t} + {\bf \nabla}_{\bf r}.(\rho \bar{\bf v}) = 0 \label{contm} \end{equation} Here $\bar{\bf v} = <{\bf v}>$ is the mean velocity. (Henceforth both $<>$ and a bar over a variable denote a normalised moment over $f$.) In regions which have had a large amount of shell crossings, it seems plausible to demand that the halo particles have settled to nearly zero average infall velocity, that is, $ \bar{v_r} \equiv 0$. From (\ref{contm}), we then have $(\partial \rho / \partial t) = 0$, and therefore, in the non-linear regime, \begin{equation} \rho(r,t) = Q(r) = Q(yt^{p}) = {1 \over 6 \pi G t^{2}} \psi(y).
\label{nonc} \end{equation} This functional equation has only a power-law solution, because of the power-law dependences on $t$. Substituting $Q(r) = q_0 r^{-\alpha}$ into Eq. (\ref{nonc}), and using $r \propto yt^p$, we obtain $y^{-\alpha} t^{-p \alpha} \propto t^{-2} \psi(y)$. This can only be satisfied for a range of $t$ in the non-linear regime provided $p\alpha = 2$. So, for an initial density profile with a power law slope $3\epsilon$, the power law slope of the density in the non-linear regime is given by \begin{equation} \alpha = {2 \over p} = {9\epsilon \over 3\epsilon + 1} . \label{nonpow} \end{equation} This result has been obtained by following the self-similar particle trajectory by B85 (for $\epsilon =1$) and by FG (for $2/3 \leq \epsilon < 1$). We see that it can be simply obtained by just combining the self-similar solution $f$ and the static core condition. (Obtaining the B85/FG result in this way has been independently noted by Padmanabhan (private communication, unpublished notes 1994)). What should we choose for the value of $\epsilon$? For a power law $P(k) \propto k^n$, the fractional density contrast averaged over a co-moving sphere of radius $x$ is distributed as a Gaussian, with an rms value $\propto x^{-(3+n)/2}$. This suggests a ``typical'' spherically averaged initial density law for a halo collapsing around a randomly placed point of the form $\bar\delta(x,t_i) \propto x^{-(3+n)/2}$, or $3\epsilon = (3 + n)/2$. Suppose we use this value of $\epsilon$ for the initial density profile of a halo. Then the halo density in the static core regions will be $\rho(r,t) \propto r^{-\alpha}$, where, substituting $3\epsilon = (3 + n)/2$ in Eq.~(\ref{nonpow}), \begin{equation} \alpha = \alpha_n = { 9 + 3n \over 5 + n} \label{aln} \end{equation} Remarkably, this is the same form that scaling laws suggest for the core of a collapsed halo, assuming that the cores of a sequence of sub-halos are left undigested during the formation of the bigger halo (see SCO99).
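The chain of exponent relations above is easy to verify mechanically. The following sketch, using exact rational arithmetic, checks that $p = (2 + 6\epsilon)/9\epsilon$ and $\alpha = 2/p$ reproduce Eq.~(\ref{nonpow}), and that setting $3\epsilon = (3+n)/2$ recovers Eq.~(\ref{aln}):

```python
from fractions import Fraction as F

def p_exponent(eps):
    """Self-similar exponent p from the linear-regime matching condition."""
    return (2 + 6 * eps) / (9 * eps)

def alpha(eps):
    """Non-linear density slope: alpha = 2/p = 9*eps/(3*eps + 1)."""
    return 2 / p_exponent(eps)

def alpha_n(n):
    """Slope for a power-law spectrum P(k) ~ k^n, via 3*eps = (3+n)/2."""
    return alpha(F(3 + n, 6))

for n in (-2, -1, 0, 1):
    assert alpha_n(n) == F(9 + 3 * n, 5 + n)   # matches Eq. (aln) exactly
    print(n, alpha_n(n))   # n = 1 gives alpha = 2; n = -2 gives alpha = 1
```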
(In a paper which appeared during the course of this work, Syer $\&$ White (1998) motivate the same form, in the case when bigger halos form purely by mergers of smaller halos). Note that for $n < 1$ the density law given by (\ref{aln}) is shallower than $1/r^2$. FG also showed that a power-law slope shallower than $1/r^2$ cannot obtain for purely radial collapse, and that while the above form for $\alpha$ should obtain for $2/3 \leq \epsilon < 1$, for $\epsilon < 2/3$ one goes to the limiting value $\alpha = 2$. However, this is only true for purely radial trajectories (cf. White and Zaritsky 1992; Sikivie, Tkachev and Wang 1997). We see below, by considering the higher moments of the Vlasov equation, that $\alpha < 2$ can only obtain if the system has non-radial velocity dispersions. \subsection{Jeans and Energy equations} Suppose we multiply the Vlasov equation by the components of ${\bf v}$ and integrate over all ${\bf v}$. Assume there is no mean rotation of the halo, that is $\bar v_{\theta} = 0$ and $\bar v_{\phi} = 0$. Then we get \begin{equation} {\partial(\rho \bar v_r) \over \partial t} +{\partial(\rho \bar{v_r^2}) \over \partial r} +{\rho \over r} (2\bar{v_r^2} - \bar{v_{\theta}^2} - \bar{v_{\phi}^2}) + {GM(r)\rho \over r^2} = 0 , \label{radm} \end{equation} \begin{equation} \bar{v_{\theta}^2} = \bar{v_{\phi}^2} . \label{thetm} \end{equation} Here $M(r)$ is the mass contained in a sphere of radius $r$. Let us consider again a static core with $\bar v_r \equiv 0$. The Jeans equations give two equations for the three unknown velocity dispersions, even for a static core. To see if one can close the system, SCO99 considered the second moments of the Vlasov equation (the energy equations). However, these will involve the third moments, or the peculiar velocity skewness. Some form of closure hypothesis is needed in a fluid treatment of the Vlasov equation.
For this we proceed as follows: One can first assume that initially the tangential velocities have zero skewness. Then, in purely spherically symmetric evolution, they would not develop any skewness, that is $\bar{v_{\theta}^3} = \bar{v_{\phi}^3} = < v_{\theta}v_{\phi}^2 > = 0$ for all times. Also, if the initial velocity ellipsoid had one of its principal axes pointing radially, we do not expect this axis to become misaligned in purely spherical evolution. This means we can assume $< v_r v_{\theta}^2 > = \bar{v_r} \bar{v_{\theta}^2 } $. Under these assumptions, and taking the static core condition $\bar v_r = 0$, we get $(\partial(\rho \bar{v_{\theta}^2})/\partial t) = 0$, or $\rho \bar{v_{\theta}^2} = K(r)$ independent of $t$. For the self-similar solution we then have \begin{equation} \rho \bar{v_{\theta}^2} = K(r) = K(yt^p) = k_2k_1^2 t^{4q -2p}\int w_{\theta}^2 F(y,{\bf w})d^3{\bf w} . \label{tan} \end{equation} Once again substituting a power-law solution $K(r) = K_0 r^s$ into this functional equation, matching the powers of $t$ on both sides gives the constraint $ps =4q - 2p$. Using $p = q +1$, we then get $s = 2 - 4/p = 2 - 2\alpha$, and so \begin{equation} \rho \bar{v_{\theta}^2} = K_0 r^{2 - 2\alpha} . \label{tanvel} \end{equation} Integrating the radial momentum equation (\ref{radm}), using (\ref{thetm}), (\ref{tanvel}) and $\rho = q_0 r^{-\alpha}$, we have \begin{eqnarray} \bar{v_r^2} &=& r^{2 - \alpha} \left [ {K_0 \over (2 - \alpha) q_0} - {4\pi G q_0 \over 2(2-\alpha)(3-\alpha)} \right ] \nonumber\\ &\equiv& {1 \over (2 - \alpha)} \left [ \bar{v_{\theta}^2}(r) - {GM(r)\over 2r} \right ] . \label{consist} \end{eqnarray} Several important points are to be noted from the above equation\altaffilmark{1}.
\altaffiltext{1}{A constant of integration could have arisen in integrating the Jeans equation over radius; however, this is excluded since, for a static core, arguments similar to those leading to (\ref{tan}) and (\ref{tanvel}) give $\rho {\bar v}_r^2 \propto r^{2 - 2\alpha}$ for the self-similar solution.} A crucial one is that, when $ \alpha < 2$, the RHS of Eq. (\ref{consist}) can remain positive, provided one has non-zero tangential velocity dispersions. If one has a purely spherically symmetric collapse and zero tangential velocities, then the density law cannot become shallower than $\alpha=2$ and maintain a static core with $\bar{v_r}=0$. This agrees with FG. In fact, for any $\alpha < 2$, one needs tangential velocity dispersions at least as large as $GM/2r$, comparable to the gravitational potential energy per unit mass. Further, one can see that to obtain static cores with $\alpha < 1$, the required tangential dispersions have to be necessarily larger than the radial velocity dispersions. Also note that for $\alpha < 2$, all the components of the velocity dispersion decrease with decreasing radius, as suggested by the simple scaling arguments of SCO99. For a static core $\bar{v_r^2}$ should be independent of $t$. However, suppose we look at the energy equation for the radial velocity dispersion, \begin{equation} {\partial(\rho \bar{v_r^2}) \over \partial t} +{1 \over r^2}{\partial(\rho r^2\bar{v_r^3}) \over \partial r} - {2\rho < v_r(v_{\theta}^2+v_{\phi}^2) > \over r} +2 \bar{v_r} \rho {GM \over r^2} = 0 . \label{radsqm} \end{equation} This shows that, even when ${\bar v}_r = 0$, a time-independent radial velocity dispersion can only obtain if the radial velocity skewness $<(v_r -\bar{v_r})^3>$ is also zero. In the core regions where large amounts of shell crossing have occurred, one can assume that a quasi "equilibrium" state obtains, whereby all odd moments of the distribution function, over $({\bf v} - \bar{\bf v})$, may be neglected.
Such a treatment will correspond to considering a fluid-like limit to the Vlasov equation. However, the radial skewness will become important near the radius where infalling matter meets the outermost re-expanding shell of matter. This region will appear like a shock front in the fluid limit. A possible treatment of the full problem in the fluid approach to the Vlasov equation then suggests itself. This is to take the radial skewness to be zero both inside and outside a "shock or caustic" radius, whose location is to be determined as an eigenvalue, so as to match the inner core solution that we determine in this section with an outer spherical infall solution. One has to also match various quantities across this "shock", using jump conditions derived from the equations themselves. To do this requires numerical solution of the self-consistent set of moment equations derived from the scaled Vlasov equation (the main focus of this paper), to which we now turn. \section{ Numerical solution of moment equations for self similar collapse} \subsection{ Moment equations} We write the scaled Vlasov equation (\ref{valsc}) in spherical co-ordinates and take moments. Let us define $V=\bar{w_r}$, $\Pi= < (w_r -\bar{w_r})^2>$ and $\Sigma=\bar{w_{\theta}^2} = \bar{w_{\phi}^2}$. We also set the tangential velocity skewness to zero. As explained above, we take the radial skewness to be zero both inside and outside a "shock or caustic" radius. The shock location, say $y=y_s$ in scaled co-ordinates, will be determined as an eigenvalue of the complete problem. So we set $< (w_r - \bar{w_r})^3> = 0$ in the regions of interest, $ y< y_s$ and $y > y_s$.
The resulting moment equations can be further simplified with a little algebra and can then be written in the following more transparent form: \begin{equation} {1 \over y^2}{d\over dy}\left[y^2 \psi V\right] -2\psi -py {d\psi\over dy} =0 \label{cont} \end{equation} \begin{equation} (p-1)\psi V + \psi (V -py){dV\over dy} = -{1\over y^2}{d \over dy} \left[y^2\psi \Pi\right] + {2\psi \Sigma \over y} - {\bar{M} \psi \over 6\pi y^2} \label{euler} \end{equation} \begin{equation} (V-py){d \over dy}\left[{\rm ln}\left({ \psi \Pi y^2 \over (\psi y^2)^3} \right)\right] = 2p -2 \label{energy} \end{equation} \begin{equation} (V-py){d \over dy}\left[\Sigma y^2\right] + (4p-2)\Sigma y^2 = 0 \label{tandisp} \end{equation} \begin{equation} {d \bar{M} \over dy} = 4\pi y^2 \psi \label{mass} \end{equation} These equations have an obvious meaning: Eq. (\ref{cont}) is the continuity equation for the scaled density, (\ref{euler}) the scaled Euler equation and (\ref{energy}) the scaled energy equation. Angular momentum conservation is reflected in Eq. (\ref{tandisp}) for $\Sigma$. It should be noted that the energy equation reflects the more general conservation of $P/\lambda^3$ along fluid trajectories, where $\lambda = \rho r^2$ is an effective linear radial density and $P = \lambda < (v_r -\bar{v_r})^2>$ is the effective radial pressure. In fact, our system behaves like a one-dimensional gas with an effective adiabatic index $\gamma=3$, provided one takes the density to be the linear radial density $\lambda$, and defines the pressure $P$ as above. Both radial and tangential velocity dispersions are likely to be generated during the inhomogeneous collapse to form the halo. So it is natural to take the initial, pre-collapse, velocity dispersions to be small. In the problem we are treating, of purely spherical collapse, the radial velocity dispersions will automatically be generated when spherically collapsing shells start to cross, that is, where the radial skewness is important.
In our fluid approach we will be replacing this region where radial skewness is important by a shock front. On the other hand, note that there are no source terms in the moment equations for the tangential velocity dispersions. Indeed, in a purely spherically symmetric problem tangential velocities have to be necessarily introduced in an ad-hoc fashion. They are only non-zero if present in the initial conditions. White and Zaritsky (1992) introduced tangential velocities into their solutions by invoking a fictitious tangential force which acts on particles in each spherical shell, until the shell turns around. Since in a general inhomogeneous collapse one expects all the components of the velocity dispersion to be generated together, the shock gives an alternative natural location to introduce a tangential velocity dispersion as well. We will do this here, for most of the numerical examples. Further, in a non-spherical, asymmetric collapse, the random velocities induced at the shock would in general be non-radial (cf. Ryden 1993), and the spherical models with both tangential and radial velocity dispersions introduced at the shock may represent this in a rough way. (For comparison, we will also present a few examples in the next section, with the tangential velocity dispersion introduced at the turn-around radius.) Studies of halo formation using cosmological N-body simulations, which are discussed in SCO99, can redress this deficiency of the spherical treatment. The evolution of the region before shell crossings is determined by the spherical infall solution. At some initial time $t_i$, let the excess density contrast averaged over a sphere of proper initial radius $r_i$ be $\bar\delta_i(r_i)= \delta_0(r_i/r_0)^{-3\epsilon}$. Then the shell initially at $r_i$ will turn around and collapse when it has expanded to a radius $r_i/\bar\delta_i(r_i)$, at a time $t=(3\pi/4) t_i/\bar\delta_i^{3/2}(r_i)$.
The radius of a shell turning around at any time $t$ is given by $r_t(t) = r_{0t}(t/t_{0t})^p$, where $p = (2 +6\epsilon)/9\epsilon$ is as in Eq. (\ref{adet}). Also $r_{0t} = (r_0/\delta_0)$ and $t_{0t}=(3\pi/4)t_i/\delta_0^{3/2}$ are the turn-around radius and time of the shell initially at $r_0$. Since $y=r/(k_1 t^p)$, a natural way of fixing the constant $k_1$ is by taking $k_1 t^p = r_t(t)$. We will do this in what follows. Then turn around occurs at the scaled co-ordinate $y=1$. A straightforward application of the spherical model (cf. Peebles 1980, Padmanabhan and Subramanian 1992, Kumar, Padmanabhan, Subramanian 1995) then gives the solution of the moment equations, when $\Pi = \Sigma=0$, in the region $y > y_s$. Expressed in a parametric form we have for $y > y_s$, \begin{eqnarray} y &=& {r\over r_t(t)} = {(1-\cos\theta) \pi^p \over 2(\theta -\sin\theta)^p} ; \qquad V(y) = y {\sin\theta (\theta - \sin\theta) \over (1 - \cos\theta)^2} \nonumber\\ \psi(y) &=& {9(\theta - \sin\theta)^2 \over 2(1-\cos\theta)^3} \left[ 1 + 3\epsilon - {9\epsilon \sin\theta(\theta - \sin\theta) \over 2 ( 1 - \cos\theta)^2} \right]^{-1} ; \qquad \bar M(y) = {4\pi y^3 \over 3} {9(\theta - \sin\theta)^2 \over 2(1-\cos\theta)^3} \label{outsol} \end{eqnarray} This goes over to the standard growing mode solution in the linear limit as $y \to \infty$. \subsection{ Matching and boundary conditions} The equations (\ref{outsol}) evaluated at $y=y_s$ give the pre-shock boundary conditions for the moment equations. To match the spherical infall solution to the core solution for $y<y_s$ determined in the previous section, we have to specify the jump conditions across the shock at $y=y_s$. These conditions can again be derived in a straightforward manner from the moment equations. Suppose we denote the pre-shock values of all quantities with a subscript $1$ and the post-shock values with a subscript $2$. Also, we wish to consider the case when the pre-shock $\Pi_1=0$.
Then the scaled jump conditions are given by \begin{equation} \psi_2 = 2 \psi_1 , \qquad V_2 = py_s + {1\over 2} (V_1 - py_s), \qquad \Pi_2 = {(V_1 - py_s)^2 \over 4}, \qquad \bar{M}_2 = \bar{M}_1 . \label{jump} \end{equation} In fact, the jump conditions correspond to taking $\gamma=3$ in the usual fluid Rankine-Hugoniot jump relations. These, together with a non-zero, {\it arbitrary} $\Sigma_2$, give the starting values for the numerical integration of the scaled moment equations (\ref{cont}) - (\ref{tandisp}) inward from the shock location $y=y_s$. The eigenvalue $y_s$ is determined by requiring the solutions to satisfy the inner boundary conditions \begin{equation} V = M = 0, \qquad y=0 . \label{bcs} \end{equation} To ensure the vanishing of the mass at $y=0$, we have in fact integrated the scaled continuity equation and expressed the scaled mass in terms of the density and velocities. Using (\ref{cont}) and (\ref{mass}) we have \begin{equation} \bar{M}(y) ={ 4\pi y^2 \psi (V-py)\over 2 - 3p} . \label{melim} \end{equation} The scaled density for all $\alpha$, and the scaled dispersions for $\alpha > 2$, are expected to be singular at the origin for the shocked infall solutions. So we scale out the expected asymptotic behaviour at $y\to 0$ before numerical integration. If we are to obtain a nearly static core, we expect $V \to 0$ and $dV/dy \to 0$ as $y \to 0$. In this case, an analysis of the moment equations shows (see also section 2), \begin{equation} \psi(y) = y^{-\alpha} \tilde{\psi}(y), \qquad \Sigma(y)=y^{2-\alpha}\tilde{\Sigma}(y), \qquad \Pi(y)=y^{2-\alpha} \tilde{\Pi}(y) , \label{redefn} \end{equation} where $\tilde{\psi}(y)$, $\tilde{\Sigma}(y)$ and $\tilde{\Pi}(y)$ are expected to tend towards a constant value as $y \to 0$. The exact asymptotic dependence of $V(y)$ of course has to be determined by the numerical solution.
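The correspondence between the jump conditions (\ref{jump}) and the $\gamma=3$ Rankine-Hugoniot relations for a cold ($\Pi_1 = 0$) pre-shock flow can be checked directly by comparing the mass, momentum and energy fluxes on the two sides of the shock, in the shock frame. A minimal numerical sketch (the variable names and the sample pre-shock numbers are ours, chosen only for illustration):

```python
# Verify that the jump conditions of Eq. (jump) conserve the mass,
# momentum and energy fluxes of a gamma = 3 gas across the shock,
# for a cold upstream state (Pi_1 = 0).
GAMMA = 3.0

def jump(psi1, V1, p, ys):
    """Post-shock state (psi_2, V_2, Pi_2) from Eq. (jump)."""
    psi2 = 2.0 * psi1
    V2 = p * ys + 0.5 * (V1 - p * ys)
    Pi2 = (V1 - p * ys) ** 2 / 4.0
    return psi2, V2, Pi2

def fluxes(psi, u, P):
    """Mass, momentum and energy fluxes in the shock frame;
    u is the velocity relative to the shock and P = psi * Pi."""
    mass = psi * u
    mom = psi * u ** 2 + P
    energy = u * (0.5 * psi * u ** 2 + GAMMA * P / (GAMMA - 1.0))
    return mass, mom, energy

# Illustrative pre-shock state; p = 8/9 is the eps = 1 value and
# ys = 0.4628 the corresponding eigenvalue found in section 4.
psi1, V1, p, ys = 7.0, -1.3, 8.0 / 9.0, 0.4628
psi2, V2, Pi2 = jump(psi1, V1, p, ys)
u1, u2 = V1 - p * ys, V2 - p * ys
F1 = fluxes(psi1, u1, 0.0)
F2 = fluxes(psi2, u2, psi2 * Pi2)
# F1 and F2 agree component by component.
```

Note that the post-shock velocity relative to the shock is exactly half the pre-shock value, the strong-shock compression for $\gamma = 3$ being a factor of $2$.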
The moment equations (\ref{cont}) - (\ref{tandisp}) are numerically integrated, after eliminating the scaled mass using (\ref{melim}) and transforming to the dependent variables defined in Eq. (\ref{redefn}). We adapted a NAG library routine which integrates the differential equations using a Runge-Kutta-Merson method, and solves the boundary value problem with Newton iteration in a shooting and matching technique. For a given $\epsilon$, and a sufficiently large $\Sigma_2$ (when $\alpha < 2$), a unique value of $y_s$ is found to satisfy the inner boundary conditions (\ref{bcs}). The moment equations lead to two conservation laws which can be used to provide a check on the numerical integration. These can be derived by using Eq. (\ref{melim}) and the moment equations. We have \begin{equation} { \tilde{\Sigma}(y) \over \tilde{M}^{\kappa}(y) } = {\rm const}, \qquad \kappa = { 4 -\alpha \over 3 - \alpha } , \label{angcon} \end{equation} representing angular momentum conservation, where $\bar{M}(y)= y^{3-\alpha} \tilde{M}(y)$, and \begin{equation} { \tilde{\Pi}(y) \tilde{M}^{\mu}(y) \over \tilde{\psi}^2(y) } = {\rm const}, \qquad \mu= {2 - \alpha \over 3 - \alpha} , \label{encon} \end{equation} representing energy conservation. At each point these two integrals of motion were checked and had relative errors less than $ 10^{-8} - 10^{-9}$, as in B85. A possible additional constraint on a solution is the asymptotic condition given by Eq. (\ref{consist}), for an almost static core. In terms of the scaled variables, static core solutions satisfy the constraint \begin{equation} \tilde{\Pi}(0) = {1 \over (2 - \alpha)} \left [ \tilde{\Sigma}(0) - {\tilde{\psi}(0) \over 3(3-\alpha)} \right ] . \label{saconsist} \end{equation} The equations of course cannot be integrated in practice up to $y \equiv 0$, but only up to some small $y=y_m$, which is generally $\sim 10^{-2.5} - 10^{-4}$. So this constraint can be checked at this minimum $y$.
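As an independent check of the constraint (\ref{saconsist}), one can verify numerically that the strictly static power-law core, i.e. $V = 0$ with the asymptotic forms of Eq. (\ref{redefn}) taken as exact power laws, satisfies the scaled moment equations (\ref{cont}) - (\ref{tandisp}), with $\bar M$ eliminated via (\ref{melim}). The sketch below does this by evaluating the residuals of the equations with finite-difference derivatives; the parameter values are illustrative, and this is not the NAG-based code used for the solutions of this paper:

```python
# Check that the static power-law core
#   V = 0, psi = psit*y^-alpha, Pi = Pit*y^(2-alpha), Sigma = Sigt*y^(2-alpha),
# with Pit fixed by the constraint (saconsist), solves the scaled moment
# equations (cont), (euler), (energy), (tandisp), using Mbar from (melim).
import math

eps = 0.4                          # illustrative; alpha = 18/11 here
p = (2 + 6 * eps) / (9 * eps)      # Eq. (adet)
alpha = 2.0 / p                    # Eq. (nonpow)
psit, Sigt = 3.0, 1.5              # illustrative amplitudes
Pit = (Sigt - psit / (3.0 * (3.0 - alpha))) / (2.0 - alpha)  # Eq. (saconsist)

psi = lambda y: psit * y ** (-alpha)
Pi = lambda y: Pit * y ** (2.0 - alpha)
Sig = lambda y: Sigt * y ** (2.0 - alpha)
Mbar = lambda y: 4.0 * math.pi * y ** 2 * psi(y) * (-p * y) / (2.0 - 3.0 * p)

def d(f, y, h=1e-6):
    """Central finite-difference derivative."""
    return (f(y + h) - f(y - h)) / (2.0 * h)

def residuals(y):
    # Eq. (cont) with V = 0:
    cont = -2.0 * psi(y) - p * y * d(psi, y)
    # Eq. (euler) with V = 0 (all terms moved to one side):
    euler = (d(lambda x: x ** 2 * psi(x) * Pi(x), y) / y ** 2
             - 2.0 * psi(y) * Sig(y) / y
             + Mbar(y) * psi(y) / (6.0 * math.pi * y ** 2))
    # Eq. (energy) with V = 0:
    energy = (-p * y * d(lambda x: math.log(psi(x) * Pi(x) * x ** 2
                                            / (psi(x) * x ** 2) ** 3), y)
              - (2.0 * p - 2.0))
    # Eq. (tandisp) with V = 0:
    tan = (-p * y * d(lambda x: Sig(x) * x ** 2, y)
           + (4.0 * p - 2.0) * Sig(y) * y ** 2)
    return cont, euler, energy, tan

# residuals(y) vanishes (to finite-difference accuracy) for any y,
# precisely because p*alpha = 2 and Pit obeys (saconsist).
```

The continuity, energy and angular momentum residuals vanish because $p\alpha = 2$, while the Euler residual vanishes only when $\tilde\Pi$ obeys (\ref{saconsist}); this is the scaled counterpart of Eq. (\ref{consist}).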
Also, in practice, $V(y_m)$ is not expected to be identically zero (unlike $V(0)$), and so one has to set it to a very small but non-zero value to obtain converged solutions. In general, for the solutions obtained here, $V(y_m)= V_m \sim -1.0 \times 10^{-6}$ to $-1.0 \times 10^{-3}$ and $V_m/y_m << 1$. The constraint (\ref{saconsist}) is then satisfied at the few percent level. Let us now discuss particular numerical examples. \section{ Numerical examples} \subsection{ Collapse onto a localised, overdense perturbation, $\epsilon = 1$ case} First we look at self-similar spherical secondary infall onto an initially localised, overdense perturbation, by adopting $\epsilon=1$ and $\Sigma_2=0$. This problem was solved by B85 and FG by examining the self-similar particle trajectory. The parameters for the numerical solution obtained here with the fluid approach are summarized in Table 1. We give there the assumed parameters, the derived eigenvalue $y_s$, and the values of the scaled dependent variables at $y_s$ and at the minimum $y=y_m$. We find the eigenvalue $y_s = 0.4628$ for the above parameters. B85, solving the problem by looking at particle trajectories, got the location of the outermost caustic as $y_s=0.364$. The difference between the location of the shock as determined by this work and by B85 could be because we have replaced a smooth transition region for the collisionless fluid, where velocity skewness is important, by a discontinuous shock. B85 found that the scaled density could be fitted asymptotically by a form $\psi(y) \approx 2.79 y^{-9/4}$ when they adopted a minimum $y = y_m \sim 0.02$ for the particle trajectory. We can integrate our equations and get converged solutions satisfying the boundary conditions up to $y_m \sim 2 \times 10^{-4}$. We find that $\psi(y) =\tilde{\psi} y^{-9/4} \approx 3.1 y^{-9/4}$ at $y_m$, while at $y \sim 2 \times 10^{-2}$ we find $\tilde{\psi} \approx 2.5$.
These numbers bracket the asymptotic value of $\tilde{\psi} \sim 2.79 $ obtained in B85. So there is reasonable agreement between our work and B85, given the differences in the value of $y_m$ and the very different approaches. In Figure 1 we plot $V(y)$, $log(\psi(y))$ and $log(\Pi(y))$ against $log(y)$ for this solution with $\epsilon=1$, $\Sigma_2=0$. We can define the scaled rotational velocity $U(y) = [ (GM(r)/r)/(r_t/t)^2 ]^{1/2}$. In Figure 1 we also show a plot of $log(U^2)$ versus $log(y)$. We see that the velocity, $V(y)$, smoothly tends to zero as $y \to 0$. $V_m$ for this case was $-4.0 \times 10^{-5}$. To compare the asymptotic dependence of the scaled density with that predicted above for a static core, we also show in the $log(\psi)$ vs $log(y)$ plot the density laws $\psi(y) \propto y^{-\alpha}$, with $\alpha=9\epsilon/(1+3\epsilon)$ as predicted in section 2 (dashed line), and $\psi(y) \propto y^{-2}$ (dot-dashed-dot line). These are normalized to agree with $\psi(y)$ at the minimum $y$ shown in the figure. We see from Figure 1 that as $y\to 0$, the density does go over to $\psi(y) \propto y^{-\alpha} \propto y^{-9/4}$, as expected for a static core with $\epsilon =1$. Overall, we recover the results of B85 and FG reasonably well with our fluid approach to the problem. \subsection{ $\epsilon < 2/3$ cases and the importance of tangential dispersions} We then considered solutions for initial density profiles shallower than $r^{-2}$, or $\epsilon < 2/3$. For such shallow density profiles, if the collapse were purely radial, FG showed that the final density profile approaches a $1/r^2$ form. We find, as expected, that the nature of the solutions in this case depends on the ratio of tangential to radial velocity dispersions. We illustrate this by considering two values of this ratio, which bracket the expected behaviour. In Figure 2 we show the solution for the case $\epsilon=0.4$, $\tilde{\Sigma}_2 = \Sigma_2 y_s^{2 - \alpha} = 0.94$.
The detailed solution for this case is given in Table 2. For this solution, the value of $y_s=0.4955$. We show both $log(\Pi(y))$ (solid line) and $log(\Sigma(y))$ (dashed line) in the same plot, so that they can be easily compared. From the figure or the table one sees that the tangential velocity dispersions for this solution are everywhere larger than the radial dispersions, by a factor $\sim 1.3$. (Or $(\Sigma/\Pi)^{1/2} \sim 1.3$). For $\epsilon =0.4$ and a static core, we expect the scaled variables to have the asymptotic behaviour given in Eq. (\ref{redefn}), with $\alpha = 18/11$. We see from comparing the solid and dashed lines in the $\psi(y)- y$ plot of Figure 2 that the density rises as $ \psi \propto y^{-\alpha} \propto y^{-18/11}$ to a very good approximation, throughout the core. Also, the velocity dispersions and rotation velocities decrease with decreasing radius, as the analytic theory of Section 2 (or Eq. (\ref{redefn})) predicts. Indeed, from Table 2, we see that all the variables $\tilde{\psi}(y)$, $\tilde{\Sigma}(y)$ and $\tilde{\Pi}(y)$ tend to constant values as $y \to 0$ to an excellent approximation. This case illustrates that it is possible to obtain solutions for $\epsilon < 2/3$, which have $\alpha = 9\epsilon/(1+3\epsilon) < 2$, provided the tangential velocity dispersions are large enough. To illustrate the effect of decreasing tangential velocity dispersions, we show in Figure 3 the properties of a solution with $\epsilon=0.4$, $\tilde{\Sigma}_2 =0.65$. The parameters for this solution are given in Table 1. The location of the shock is at $y_s=0.3797$. We could get converged solutions with $y_m=4.6 \times 10^{-3}$, and with the velocity $V_m =-1.75 \times 10^{-3}$. The core regions are nearly static but not completely so. But the constraint given by Eq. (\ref{saconsist}) is satisfied at the $2\%$ level.
For this case the radial velocity dispersions are everywhere larger than the tangential dispersions, by a factor $\sim 1.15$ as $y \to 0$. One sees a large difference between this solution (Figure 3) and the one obtained for larger tangential velocity dispersion (Figure 2). First, we see that when radial dispersion dominates, the density profile is closer to the $\psi \propto y^{-2}$ form than the $\psi \propto y^{-\alpha}$ form, although neither provides a good fit. Second, the velocity dispersions are reasonably constant with radius in the $y \to 0$ limit, instead of decreasing with decreasing radius. The rotation velocity is also flatter. We also considered smaller values of $\epsilon$. Figure 4 gives a solution with $\epsilon=1/6$ (corresponding to $\alpha=1$, or $n=-2$ in $\alpha_n$) and $\tilde{\Sigma}_2 = 2.2$; some parameters for this solution are given in Table 1. The eigenvalue $y_s=0.2584$ and $(\Sigma/\Pi)^{1/2} \sim 1.33$ in the core. Note that for $\alpha=1$, the constraint equation (\ref{saconsist}) for a static core implies that $\tilde{\Sigma}(0) = \tilde{\Pi}(0) + \tilde{\psi}(0)/6 $. So $\tilde{\Sigma}(0)/\tilde{\Pi}(0) > 1$ for any solution with a static core. The density profile shows a reasonable correspondence with the asymptotic behaviour expected taking $\alpha =1$; $\psi(y) \propto y^{-1}$. The velocity dispersions and the rotation velocity also decrease with decreasing $y$, but do so a little less rapidly compared to the predicted $\Sigma \propto \Pi \propto U^2 \propto y$ form. In general, we find that, for smaller values of $\epsilon \le 1/6$ (or $\alpha \le 1$), while it is possible to find static core solutions for a sufficiently large $\Sigma/\Pi$ ratio, it becomes increasingly difficult to do so (obtaining a small enough $V_m/y_m$ ratio) as one lowers the ratio $\Sigma/\Pi$ to even slightly smaller values. We considered for example a case with $\epsilon=1/6$, $\tilde{\Sigma}_2 = 2$.
This turns out to have $\Sigma/\Pi \sim 1$, but we found that we could only decrease $V_m/y_m$ to a value of order unity and get a solution. We get $V_m \sim 1.5 \times 10^{-2}$ and $dV/dy \sim 0.06$ at $y_m$; so even though the core is not strictly static, the LHS of the scaled Euler equation (\ref{euler}) is much smaller than each of the individual force terms. These nearly cancel each other, making the core quasi-static. We give a plot of all the variables for this solution in Figure 5, and a summary of some parameters in Table 1. We see that the shape of the density profile for this solution is mid-way between the $r^{-\alpha}\propto r^{-1}$ form and the $r^{-2}$ form. The velocity dispersions decrease with radius, but not as rapidly as predicted for a truly static core. At this stage it is worthwhile to note the following: Recall that the static core condition used to derive the asymptotic scaling properties of the density and velocity dispersions involves assuming not only ${\bar v}_r(0)=0$ (the boundary condition adopted above), but also that the radial velocity vanishes for a range of radii near the origin. This situation can strictly obtain only if particles with a given turn around radius have a {\it minimum} radius of approach to the centre, so that the core at any radius $r$ is evacuated of particles having turn around radii larger than, say, $R_t(r)$. Such an "evacuated" core will in turn obtain only if the distribution of angular momentum has a "hole" near the origin of the $(v_{\theta}, v_{\phi})$ plane. Such an angular momentum distribution is indeed assumed (and relevant) in the work of White and Zaritsky (1992). However, in the present work we are making the statistical assumption that the distribution of tangential velocities is well described by its second moment (viz. the tangential dispersion), thereby excluding distribution functions which have a hole.
This assumption is quite reasonable for halo cores which are forming by a general inhomogeneous collapse. However, in this case, for any shell of particles which pass the caustic at some epoch, there are always some particles with sufficiently small angular momentum that can approach close to the halo core. So the halo core will not be strictly static, a feature which will be more and more noticeable as one decreases the tangential velocity dispersions relative to the radial dispersions. This may account for our result that (for $\epsilon < 2/3$), as one decreases $\Sigma/\Pi$, the density profile is steeper than the $\psi \propto y^{-\alpha}$ form expected for a strictly static core. Finally, for the sake of comparison, we have also looked at numerical examples where the tangential velocity dispersions are introduced at the 'turn around' radius (taken to be approximately at $y=1$) rather than at the shock. In this case one has to solve the moment equations numerically, both outside and inside the shock radius, match the solutions across the shock using the shock jump conditions (cf. Eq. (\ref{jump}) when $\Pi_1 = 0$), and find the shock location as an eigenvalue so as to satisfy the boundary conditions in Eq. (\ref{bcs}). Figures 6 and 7 give two examples with $\epsilon=0.4$, adopting $\tilde{\Pi}(1) =0$, and $\tilde{\Sigma}(1) =0.25$ and $\tilde{\Sigma}(1) =0.30$, respectively. The parameters of these solutions are given in Table 1. When $\tilde{\Sigma}(1) =0.25$, the force due to the tangential velocity dispersion at turn around is $\sim 13.5 \%$ of the radial gravitational force. These examples show very similar behaviour to the $\epsilon = 0.4$ solutions discussed above (Figures 2 and 3), where the tangential velocities are introduced at the shock. For example, the solution shown in Figure 6 has $\Sigma/\Pi \sim 1$ in the core.
The corresponding density profile is shallower than $y^{-2}$ but steeper than the $y^{-\alpha}$ form, reflecting the fact that only a partial memory of the initial profile is retained by self-similar evolution in this case. The numerical results of this section show the importance of tangential velocity dispersions in deciding whether the self-similar solution with an initial density profile shallower than $1/r^2$ ($\epsilon < 2/3$) retains a memory of this initial profile, or whether the density profile tends to a universal $1/r^2$ form. The set of solutions we have given shows that for a large enough $\Sigma/\Pi > 1$, the core density profile is indeed close to the form $\rho \propto r^{-\alpha}$, with $\alpha = 9\epsilon/(1+3\epsilon)$. For $\Sigma/\Pi \sim 1$, some memory of the initial density profile is always retained; the density profile has an asymptotic form $\rho \propto r^{-\bar\alpha}$, with $ \alpha < \bar\alpha < 2$. When $\Sigma/\Pi << 1$, the density profile goes over to the $1/r^2$ form derived by FG. Also, for shallow initial density profiles with $\alpha \leq 1$, one must necessarily have a tangential dispersion larger than the radial dispersion to get a static core region retaining the memory of the initial density profile. \section{Discussion and Conclusions} We have explored here the dynamical restrictions on the structure of dark matter halos through a study of cosmological self-similar gravitational collapse solutions, adopting a fluid approach to the collisionless dynamics of dark matter. In a companion paper (SCO99) we consider the possibility that a nested sequence of undigested cores in the center of a halo, which have survived the inhomogeneous collapse to form larger and larger objects, determines halo structure in the inner regions. For a flat universe with $P(k) \propto k^n$, scaling arguments then suggest that the core density profile scales as $\rho \propto r^{-\alpha}$ with $\alpha = \alpha_n = (9+3n)/(5+n)$.
However, such arguments do not tell us how, and in fact whether, this form will be realized dynamically. The similarity solutions worked out in some detail here allow us to examine this dynamical issue in a simple, tractable manner. The problem of spherical self-similar collapse has often been solved by following particle trajectories. We adopted here and in SCO99 another approach, examining directly the evolution of the moments of the phase space density. For a purely radial collapse, with an initial density profile $\propto r^{-3\epsilon}$ steeper than $r^{-2}$, we recover, by demanding that the core be static, the asymptotic form of the non-linear density profile: $\rho \propto r^{-\alpha} \propto r^{-9\epsilon/(1 + 3\epsilon)}$ (see also Padmanabhan 1996b). For initial density profiles shallower than $1/r^2$, with $\epsilon < 2/3$, we showed that a static core with a non-linear density profile with $\alpha= 9\epsilon/(1 + 3\epsilon)$ is possible only if the core has sufficiently large tangential velocity dispersions. In fact, one needs $\bar{v_{\theta}^2} > GM/2r$. Also, if a static core is to have a cuspy density profile shallower than $1/r$ (with $\alpha < 1$), one requires $\bar{v_{\theta}^2} > \bar{v_r^2}$. Note that when $3\epsilon = (3 +n)/2$ (as would be relevant for collapse around a typical point in the universe), $\alpha = \alpha_n = (9 + 3n)/(5+n)$. The consequences of introducing non-radial velocity dispersions, in this approach, can only be examined in detail by adopting a closure approximation. In spherical collapse, the skewness of the tangential velocities can be assumed to be zero in the core regions. In fact, in regions where large amounts of shell crossing have occurred, one can assume that a quasi "equilibrium" state obtains, whereby all odd moments of the distribution function, over $({\bf v} - \bar{\bf v})$, may be neglected. The radial peculiar velocity is then also expected to have negligible skewness in the core regions.
However, the radial peculiar velocity will necessarily have a non-zero skewness (non-zero third moment) near a caustic radius, where collapsing dark matter particles meet the outermost shell of re-expanding matter. To take this into account we introduce a fluid approach. In this approach, the effect of peculiar velocity skewness is neglected in all regions except at the location of the caustic, which we call the shock. In the particle picture the shock is where a single-stream flow becomes a multi-stream flow. In the fluid picture it is where some of the average infall velocity is converted into velocity dispersion. The location of the caustic, $y_s$, in scaled coordinates, is found as an eigenvalue of the boundary value problem of matching the single-stream collapse solution with a core solution, adopting $V=M=0$ as the boundary condition at $y=0$. In spherical collapse tangential velocities are only non-zero if they are present in the initial conditions. The shock or the turnaround radius provide a natural location for introducing tangential dispersions into the initial conditions. Our treatment here assumes that the distribution of tangential velocities is well described by just its second moment, consistent with the statistical assumptions of a quasi-relaxed core. The results of the numerical integration of the moment equations are summarized in Table 1 and are graphically displayed in Figures 1-7. The details of one particular solution are also given in Table 2. These examples largely bear out the expectations of section 2. First, we recover quite well, using the fluid approach, the asymptotic form of the non-linear density profile for the $\epsilon = 1$ case, which B85/FG got by solving for the self-similar particle trajectory. Second, our solutions show the importance of tangential velocity dispersions in deciding the nature of the core density profile when $\epsilon < 2/3$.
In the spherical self-similar collapse solutions with $\epsilon < 2/3$, for a large enough $\Sigma/\Pi > 1$, one gets $\rho \propto r^{-\alpha}$, with $\alpha = 9\epsilon/(1+3\epsilon)$. For $\Sigma/\Pi \sim 1$, some memory of the initial density profile is always retained; one gets $\rho \propto r^{-\bar\alpha}$, with $\alpha < \bar\alpha < 2$. When $\Sigma/\Pi \ll 1$, the density profile goes over to the $1/r^2$ form derived by FG for radial collapse. Also, $\alpha < 1$ requires $\Sigma/\Pi \gg 1$ to get a static core region. So if in halo cores tangential velocities are constrained to be smaller than radial velocity dispersions, then a cuspy core density profile shallower than $1/r$ cannot obtain purely by self-similar evolution. The results of this work and SCO99 illustrate the importance of dynamical considerations and hint at features which are likely to obtain in more realistic collapse. If newly collapsing material is constrained to mostly contribute to the density at larger and larger radii, then memory of initial conditions can be retained. The solutions with $\alpha > 2$ (Figure 1), or with $\alpha < 2$ but a large enough tangential dispersion (Figures 2 and 4), illustrate this possibility. However, when newly collapsing material is able to occupy similar regions as the matter which collapsed earlier, the core density profile will only partially reflect a memory of the initial conditions. The solutions in Section 4 with $\alpha < 2$ and $\Sigma/\Pi \sim 1$ (Figures 3, 5 and 6) illustrate this feature. In SCO99 we have also adopted a complementary approach of looking at halo properties in numerical simulations of structure formation models having $n=-2,-1$ and $0$. We find that the core density profiles of dark matter halos show a large scatter in their properties, but do nevertheless appear to reflect a memory of the initial power spectrum (please see SCO99 for details).
The fluid approach adopted here and in SCO99 suggests new ways of exploring non-linear dynamics. Perhaps one can extend analytic approximations like the Zeldovich approximation, valid in a single-stream flow, to the multi-streaming regime, by replacing multi-streaming regions by regions with velocity dispersions generated by the Zeldovich-type caustics. The fluid approach could also be useful to study possible closures of the BBGKY hierarchy. Further, one needs to extend the self-similar solutions to incorporate a baryonic component; the gas necessarily has an isotropic velocity dispersion, and so will have a different dynamical evolution compared to the dark matter. We hope to study some of these issues in the future. \acknowledgments KS thanks Jerry Ostriker and Renyue Cen for an enjoyable collaboration which led to this paper. This work was begun when KS visited the Princeton University Observatory, during Sept-Nov 1996. Partial travel support to Princeton came from IAU Commission 38. Some of the work was done at the University of Sussex where KS was supported by a PPARC Visiting Fellowship. He thanks John Barrow, Jerry Ostriker, Ed Turner, and the other Princeton and Sussex astronomers for warm hospitality. T. Padmanabhan is thanked for critical comments on an earlier version of this work. KS also thanks Ben Moore, Bepi Tormen, Ravi Sheth, Dave Syer and Simon White for several helpful discussions. \clearpage
\chapter{Generalized event structures and probabilities} \begin{center} {Karl Svozil}\\ {\it Institute for Theoretical Physics, Vienna University of Technology}\\ {\it Wiedner Hauptstra\ss e 8-10/136, A-1040 Vienna, Austria}\\ {\it email: {[email protected]} homepage: {http://tph.tuwien.ac.at/\char`\~svozil} } \end{center} \else \documentclass[12pt, reprint, twocolumn, showpacs, showkeys, preprintnumbers, amsmath,amssymb, aps, pra, longbibliography, ]{revtex4-1} \usepackage[breaklinks=true,colorlinks=true,anchorcolor=blue,citecolor=blue,filecolor=blue,menucolor=blue,pagecolor=blue,urlcolor=blue,linkcolor=blue]{hyperref} \usepackage{url} \usepackage{eepic} \RequirePackage{times} \RequirePackage{mathptm} \usepackage{xcolor} \begin{document} \title{Generalized event structures and probabilities} \author{Karl Svozil} \affiliation{Institute for Theoretical Physics, Vienna University of Technology, Wiedner Hauptstra\ss e 8-10/136, A-1040 Vienna, Austria} \email{[email protected]} \homepage[]{http://tph.tuwien.ac.at/~svozil} \pacs{03.65.Ca, 02.50.-r, 02.10.-v, 03.65.Aa, 03.67.Ac, 03.65.Ud} \keywords{quantum theory, probability theory, quantum logic} \begin{abstract} For the classical mind, quantum mechanics is boggling enough; nevertheless even more bizarre behavior can be imagined by concentrating on propositional structures (empirical logics) that transcend the quantum domain. One can also consistently suppose predictions and probabilities which are neither classical nor quantum, but which are subject to subclassicality; that is, the additivity of probabilities for mutually exclusive, co-measurable observables, as formalized by admissibility rules and frame functions.
\end{abstract} \maketitle \fi \section{Specker's oracle} In his first, programmatic, article on quantum logic Ernst Specker -- one of his sermons is preserved in his {\it Selecta}~\cite[pp.~321-323]{specker-ges} -- considered a parable~\cite{specker-60} which can be easily translated into the following oracle: imagine that there are three boxes on a table, each of which either contains a gem or does not. Your task is to correctly choose two of the boxes that will either both be empty or both contain a gem when opened. Note that, according to combinatorics (or, more generally, Ramsey theory), for all classical states there always exist two such boxes among the three satisfying the above property of being ``both empty or both filled.'' After you place your guess the two boxes whose content you have attempted to predict are opened; the third box remains closed. In Specker's malign oracle scenario it turns out that you always fail: no matter how often you try and what you choose to forecast, the boxes you have predicted as both being empty or both being full always have mixed content -- one box is always filled and the other one always empty. That is, phenomenologically, or, if you like, epistemically, Specker's oracle is defined by the following behavior: if $e$ and $f$ denote the empty and the filled state, respectively, and $\ast$ stands for the third (unopened) box, then one of the following six configurations is rendered: $ef\ast$, $fe\ast$, $e\ast f$, $f\ast e$, $\ast ef$, or $\ast fe$. Is such a Specker oracle realizable in Nature? Intuition tends to negate this.
Because, more formally, per box there are two classical states $e$ and $f$, and thus $2^3$ such classical ``ontological'' configurations or classical three-box states, namely $eee$, $eef$, $efe$, $fee$, $eff$, $fef$, $ffe$, and $fff$, which can be grouped into four classes: the two extreme cases with all the boxes filled or empty, those with two empty boxes and one filled box, and those with two filled boxes and one empty box. These can be represented by the four-partitioning (into equivalence classes with respect to the number of filled and empty boxes) of the set of all states $ \{ \{eee \}, \{fff \}, \{eef,efe,fee \}, \{eff,fef,ffe \} \} $. Now, on closer inspection, in any unbiased prediction (or unbiased preparation) scenario there is an ever-decreasing chance that you will not hit the right prediction eventually, because for all eight possible configurations there always is at least one right prediction (either two empty or two full boxes). Of course, if I am in command of the preparation process, and if you and I choose to conspire in such a way that I always choose to prepare, say, either $eee$ or $eef$ or $efe$ or $fee$, and you always choose to predict $ff$, then you will never win. But such a scenario is hilariously biased. Also with adaptive, that is, {\it a posteriori,} preparation {\em after} the prediction, the Specker parable is realizable -- in hindsight I can always ruin your prediction. But if you allow no restrictions on predictions (or preparations), and no {\it a posteriori} manipulation, there are no classical means to realize Specker's oracle. Can this system be realized quantum mechanically? That is, can one find a quantum state and projection measurements rendering that kind of performance?
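The purely classical part of this argument can be checked by brute force. The following sketch (an illustration added here, not code from the original parable) enumerates all $2^3$ ``ontological'' configurations:

```python
from itertools import product

# All 2^3 classical ("ontological") three-box configurations.
states = list(product("ef", repeat=3))
assert len(states) == 8

pairs = [(0, 1), (0, 2), (1, 2)]

# Pigeonhole: with two symbols and three boxes, every classical state
# contains at least one pair that is "both empty or both filled".
assert all(any(s[i] == s[j] for i, j in pairs) for s in states)

# Specker's oracle insists that every opened pair has mixed content;
# no classical assignment can make all three pairs mixed simultaneously.
assert not any(all(s[i] != s[j] for i, j in pairs) for s in states)
```

This confirms the classical impossibility; whether quantum means fare any better is the question taken up next.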
I guess (but have no proof of it) not, because in any finite-dimensional Hilbert space the associated empirical logic~\cite{v-neumann-49,birkhoff-36} is a merging through identifying common elements, called a {\em pasting}~\cite{nav:91}, of (possibly a continuum of) Boolean subalgebras with a finite number of atoms or, used synonymously, contexts~\cite{svozil-2008-ql,2014-nobit}. And any subalgebra, according to the premises of Gleason's theorem~\cite{Gleason,r:dvur-93,pitowsky:218,Peres-expTest-Glea}, in terms of probability theory, is classically Boolean. As has already been pointed out by Specker, the phenomenology of the oracle suggests that $e_i \rightarrow f_j$, and, conversely, $f_i \rightarrow e_j$ for different boxes $i,j \in \{1,2,3\}$; that is, ``the first opened box always contains the complement of the second opened box''; and otherwise -- that is, by disregarding the third (unopened) box -- they are classical. Thus one could say that the contents of the two opened boxes represent the two atoms of a Boolean subalgebra $2^2$. There are three such subalgebras associated with opening two of three boxes, namely $(1,2)$, $(1,3)$, and $(2,3)$, which need to be pasted into the propositional structure at hand; in the quantum case this is quantum logic. This can be imagined in two ways, by interpreting the situation as follows: (i) The first option would be to attempt to paste or ``isomorphically bundle'' the three subalgebras $2^2$ into a three-atomic subalgebra $2^3$. Clearly this attempt is futile, since this would imply transitivity, and thus yield a complete contradiction, by, say, $e_1 \rightarrow f_2 \rightarrow e_3 \rightarrow f_1$. (ii) The second option would circumvent transitivity by means of complementarity (as argued originally by Specker), through a {\em horizontal pasting} of the three Boolean algebras, amounting to a logic of the Chinese lantern form ${\rm MO}_3$.
This is a common quantum logic rendered, for instance, by spin-$\frac{1}{2}$ measurements along different spatial directions; as well as by the quasi-classical partition logics~\cite{svozil-2001-eua} of automata and generalized urn models~\cite{wright}. But clearly, such a logic does not deal with the three boxes of Specker's oracle equally; rather the third, unopened box could be considered as a ``space holder'' or ``indicator'' labeling the associated context. Within such a context one could, for example, attempt to consider a general wave function in eight-dimensional Hilbert space $\vert \Psi \rangle =\sum_{i,j,k \in \{e,f\}}\alpha_{ijk} \vert ijk \rangle$, geometrically representable by $\vert e\rangle \equiv (1,0)$ and $\vert f \rangle \equiv (0,1)$, and thus $\vert \Psi \rangle \equiv \left(\alpha_{eee},\alpha_{eef},\ldots ,\alpha_{fff} \right)$. All three measurements (i.e. projections onto $\vert ijk \rangle$) commute; so one can open the boxes ``independently.'' A listing of all the associated ``unbiased'' measurement scenarios (including partial traces over the third box) shows that there is no quantum way one could end up with the type of behavior one expects from Specker's oracle. Ultimately, because a general quantum state is a coherent superposition of classical states, one cannot ``break outside'' this extended classical domain. So, I guess, if one insists on treating all the three boxes involved in Specker's oracle equally, this device requires supernatural means. And yet it is imaginable; and that is the beauty of it. \section{Observables unrealizable by quantum means} In what follows we shall enumerate, as a kind of continuation of Specker's oracle, hypothetical ``weird'' propositional structures; in particular, a certain anecdotal ``zoo of collections of observables'' constructed by pastings of contexts (or, used synonymously, blocks, subalgebras) containing ``very few'' atoms.
We shall compare them to logical structures associated with very low-dimensional quantum Hilbert spaces. (Actually, the dimensions dealt with will never exceed the number of fingers on one hand.) \begin{figure} \begin{center} \begin{tabular}{ccccc} \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{blue}\circle{1.2}} \put(10,0){\color{blue}\circle{1.2}} \put(20,0){\color{blue}\circle{1.2}} \put(0,0){\color{blue}\line(1,0){20}} \put(5,10){\color{red}\circle{1.2}} \put(15,10){\color{red}\circle{1.2}} \put(5,10){\color{red}\line(1,0){10}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{red}\line(0,1){10}} \put(0,10){\color{green}\line(1,0){20}} \put(0,10){\color{magenta}\line(0,-1){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{magenta}\circle{3}} \put(0,10){\color{green}\circle{1.2}} \put(0,10){\color{magenta}\circle{3}} \put(10,0){\color{blue}\circle{1.2}} \put(10,10){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{red}\circle{3}} \put(20,10){\color{green}\circle{1.2}} \put(20,10){\color{red}\circle{3}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{red}\line(0,1){10}} \put(0,10){\color{green}\line(1,0){20}} \put(0,10){\color{magenta}\line(0,-1){10}} \put(10,0){\color{cyan}\line(0,1){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{magenta}\circle{3}} \put(0,10){\color{green}\circle{1.2}} \put(0,10){\color{magenta}\circle{3}} \put(10,0){\color{blue}\circle{1.2}} \put(10,0){\color{cyan}\circle{3}} \put(10,10){\color{green}\circle{1.2}} \put(10,10){\color{cyan}\circle{3}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{red}\circle{3}} 
\put(20,10){\color{green}\circle{1.2}} \put(20,10){\color{red}\circle{3}} \end{picture} \\ $\;$\\ (i)&$\qquad$&(ii)&$\qquad$&(iii) \end{tabular} \end{center} \caption{(Color online) Orthogonality diagrams with mixed two- and three-atomic contexts, drawn in different colors. \label{2015-s-f1}} \end{figure} It is not too difficult to sketch propositional structures which are not realizable by any known physical device. Take, for instance, the collection of observables whose Greechie or, by another wording, orthogonality diagram~\cite{greechie:71} is sketched in Fig.~\ref{2015-s-f1}. In Hilbert space realizations, the straight lines or smooth curves depicting contexts represent orthogonal bases, and points on these straight lines or smooth curves represent elements of these bases; that is, two points are orthogonal if and only if they are on the same straight line or smooth curve. From dimension three onwards, bases can intertwine~\cite{Gleason} by possessing common elements. The propositional structure depicted in Fig.~\ref{2015-s-f1} consists of four contexts of mixed type; that is, the contexts involved have two and three atoms. No such mixed-type phenomenology occurs in Nature; on the contrary, regardless of the quantized system, the number of (mutually exclusive) physical outcomes, reflected by the dimension of the associated Hilbert space, always remains the same. You may now say that this was an easy and almost trivial cheat; but what about the triangular shaped propositional structures depicted in Fig.~\ref{2015-s-f2}?
They surely look inconspicuous, yet none of them has a representation as a quantum logic, simply because they have no realization in two- and three-dimensional Hilbert space: The propositional structure depicted in Fig.~\ref{2015-s-f2}(i) has too tightly intertwining contexts, which would mean that two different orthogonal bases in two-dimensional Hilbert space can have an element in common (which they cannot have, except when the bases are identical). By a similar argument, the propositional structure depicted in Fig.~\ref{2015-s-f2}(ii) has ``too tightly intertwined'' contexts to be representable in three-dimensional Hilbert space: in dimension three, for two non-identical but intertwined orthogonal bases with one common vector (if they had two common elements they would have to be identical) it is impossible to ``shuffle'' the remaining vectors around such that at least one remaining vector from one basis is orthogonal to at least one remaining vector from the other basis. From an algebraic point of view all these propositional structures are not realizable quantum mechanically, because they contain loops of order three~\cite{kalmbach-83,beran,pulmannova-91}.
\begin{figure} \begin{center} \begin{tabular}{ccccc} \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,25)(0,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,25)(0,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \put(5,8.25){\color{red}\circle{1.2}} \put(15,8.25){\color{green}\circle{1.2}} \put(10,0){\color{blue}\circle{1.2}} \end{picture} && \unitlength 0.8mm \allinethickness{2.2pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,25)(0,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(10,0){\color{cyan}\line(0,1){16.5}} \put(5.3,8.75){\color{orange}\line(5,-3){15}} \put(14.7,8.75){\color{magenta}\line(-5,-3){15}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(0,0){\color{magenta}\circle{4}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(20,0){\color{orange}\circle{4}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \put(10,16.5){\color{cyan}\circle{4}} \put(5,8.75){\color{red}\circle{1.2}} \put(5,8.75){\color{orange}\circle{3}} \put(15,8.75){\color{green}\circle{1.2}} \put(15,8.75){\color{magenta}\circle{3}}
\put(10,0){\color{blue}\circle{1.2}} \put(10,0){\color{cyan}\circle{3}} \put(10,5.9){\color{cyan}\circle{1.2}} \put(10,5.9){\color{magenta}\circle{3}} \put(10,5.9){\color{orange}\circle{4}} \end{picture} \\ $\;$\\ (i)&$\qquad$&(ii)&$\qquad$&(iii) \end{tabular} \end{center} \caption{(Color online) Orthogonality diagrams representing tight triangular pastings of two- and three-atomic contexts. \label{2015-s-f2}} \end{figure} Indeed, for reasons that will be explicated later, the propositional structure depicted in Fig.~\ref{2015-s-f2}(i) has no two-valued (admissible~\cite{2012-incomput-proofsCJ,PhysRevA.89.032109,2015-AnalyticKS}) state equivalent to a frame function~\cite{Gleason}; a fact that can be seen by ascribing one element a ``1,'' forcing the remaining two to be ``0.'' (There cannot be only zeroes in a context.) This means that it is not representable as a quasi-classical partition logic. The logic depicted in Fig.~\ref{2015-s-f2}(ii) has sufficiently many (indeed four) two-valued measures to be representable by a partition logic~\cite{2010-qchocolate}. The propositional structure depicted in Fig.~\ref{2015-s-f2}(iii) is too tightly interlinked to be representable by a partition logic -- it allows only one two-valued state. In a similar manner one could go on and consider orthogonality diagrams of the ``square'' type, such as the ones depicted in Fig.~\ref{2015-s-f3}. All these propositional structures are not realizable quantum mechanically, because they contain loops of order four~\cite{kalmbach-83,beran,pulmannova-91}. The propositional structure in Fig.~\ref{2015-s-f3}(i) has two two-valued measures, but their union is not ``full'' because it cannot separate opposite atoms. Figs.~\ref{2015-s-f3}(ii) as well as (iii) represent propositional structures with ``sufficiently many'' two-valued measures (e.g. separating two arbitrary atoms by different values), which are representable as partition (and, in particular, as generalized urn and automaton) logics.
Actually, the number of two-valued measures for the propositional structures in Figs.~\ref{2015-s-f3}(i) as well as (iii) can be found by counting the number of permutations, or permutation matrices: these are $2!$ and $3!$, respectively. Because of the too tightly intertwined contexts, the propositional structure in Fig.~\ref{2015-s-f3}(iv) has no two-valued state. \begin{figure} \begin{center} \begin{tabular}{ccccccc} \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{green}\line(0,1){20}} \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{magenta}\line(0,1){20}} \put(0,20){\color{red}\line(1,0){20}} \put(0,0){\color{green}\circle{1.2}} \put(0,0){\color{blue}\circle{3}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{magenta}\circle{3}} \put(0,20){\color{red}\circle{1.2}} \put(0,20){\color{green}\circle{3}} \put(20,20){\color{magenta}\circle{1.2}} \put(20,20){\color{red}\circle{3}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{green}\line(0,1){20}} \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{magenta}\line(0,1){20}} \put(0,20){\color{red}\line(1,0){20}} \put(0,0){\color{green}\circle{1.2}} \put(0,0){\color{blue}\circle{3}} \put(10,0){\color{blue}\circle{1.2}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{magenta}\circle{3}} \put(20,10){\color{magenta}\circle{1.2}} \put(0,20){\color{red}\circle{1.2}} \put(0,20){\color{green}\circle{3}} \put(0,10){\color{green}\circle{1.2}} \put(20,20){\color{magenta}\circle{1.2}} \put(20,20){\color{red}\circle{3}} \put(10,20){\color{red}\circle{1.2}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{green}\line(0,1){20}} \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{magenta}\line(0,1){20}}
\put(0,20){\color{red}\line(1,0){20}} \put(0,10){\color{orange}\line(1,0){20}} \put(10,0){\color{gray}\line(0,1){20}} \put(0,0){\color{green}\circle{1.2}} \put(0,0){\color{blue}\circle{3}} \put(10,0){\color{blue}\circle{1.2}} \put(10,0){\color{gray}\circle{3}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{magenta}\circle{3}} \put(20,10){\color{magenta}\circle{1.2}} \put(20,10){\color{orange}\circle{3}} \put(0,20){\color{red}\circle{1.2}} \put(0,20){\color{green}\circle{3}} \put(0,10){\color{green}\circle{1.2}} \put(0,10){\color{orange}\circle{3}} \put(20,20){\color{magenta}\circle{1.2}} \put(20,20){\color{red}\circle{3}} \put(10,20){\color{red}\circle{1.2}} \put(10,20){\color{gray}\circle{3}} \put(10,10){\color{gray}\circle{1.2}} \put(10,10){\color{orange}\circle{3}} \end{picture} && \unitlength 0.8mm \allinethickness{2.1pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(21,21)(0,0) \put(0,0){\color{green}\line(0,1){20}} \put(0,0){\color{blue}\line(1,0){20}} \put(20,0){\color{magenta}\line(0,1){20}} \put(0,20){\color{red}\line(1,0){20}} \put(0,10){\color{orange}\line(1,0){20}} \put(0,0){\color{cyan}\line(1,1){20}} \put(20,0){\color{brown}\line(-1,1){20}} \put(10,0){\color{gray}\line(0,1){20}} \put(0,0){\color{green}\circle{1.2}} \put(0,0){\color{blue}\circle{3}} \put(0,0){\color{cyan}\circle{4}} \put(10,0){\color{blue}\circle{1.2}} \put(10,0){\color{gray}\circle{3}} \put(20,0){\color{blue}\circle{1.2}} \put(20,0){\color{magenta}\circle{3}} \put(20,0){\color{brown}\circle{4}} \put(20,10){\color{magenta}\circle{1.2}} \put(20,10){\color{orange}\circle{3}} \put(0,20){\color{red}\circle{1.2}} \put(0,20){\color{green}\circle{3}} \put(0,20){\color{brown}\circle{4}} \put(0,10){\color{green}\circle{1.2}} \put(0,10){\color{orange}\circle{3}} \put(20,20){\color{magenta}\circle{1.2}} \put(20,20){\color{red}\circle{3}} \put(20,20){\color{cyan}\circle{4}} \put(10,20){\color{red}\circle{1.2}} \put(10,20){\color{gray}\circle{3}}
\put(10,10){\color{gray}\circle{1.2}} \put(10,10){\color{orange}\circle{3}} \put(10,10){\color{cyan}\circle{4}} \put(10,10){\color{brown}\circle{5}} \end{picture} \\ $\qquad$\\ (i)&$\;$&(ii)&$\;$&(iii)&$\quad$&(iv) \end{tabular} \end{center} \caption{(Color online) Orthogonality diagrams representing tight square type pastings of two- and three-atomic contexts. \label{2015-s-f3}} \end{figure} Let us now come back to the collection of observables represented in Fig.~\ref{2015-s-f1}. Are they in some form realizable, maybe even in ways ``beyond'' quantum realizability? Again, as long as there are ``sufficiently many'' two-valued measures~\cite{wright:pent}, partition logics as well as their generalized urn and automaton models~\cite{svozil-2001-eua} are capable of reproducing these phenomenological schemes. One construction yielding the pasting described in Fig.~\ref{2015-s-f1}(ii) would involve a four-color scheme (the colors being associated with the four contexts); with three symbols ``$+$,'' ``$-$,'' and ``$0$'' in two colors representing the Boolean algebra $2^3$ of two contexts, and with two symbols ``$+$'' and ``$-$'' in two colors representing the Boolean algebra $2^2$ of two contexts. (I leave it to the Reader to find a concrete realization; one systematic way would be the enumeration of all two-valued measures.) Fig.~\ref{2015-s-f1}(iii) does not possess a quasi-classical {\it simulacrum} in terms of a partition logic. For the sake of a proof by contradiction~\cite{greechie:71}, suppose there exists a two-valued state. Any such two-valued state needs to have exactly two 1s on the horizontal contexts, whereas it needs to have exactly three 1s on the vertical contexts; but both sets of contexts yield (two- and three-atomic) partitions of the entire set of atoms, thus implying $2=3$, which is clearly wrong.
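These counting arguments lend themselves to exhaustive verification. The sketch below (an illustration with a hypothetical helper function, not code from the original) counts the two-valued states -- exactly one atom per context assigned the value 1 -- of a pasting given as a list of contexts over integer-labeled atoms:

```python
from itertools import product

def two_valued_states(contexts):
    """Enumerate assignments v: atoms -> {0,1} with exactly one 1 per context."""
    atoms = sorted({a for c in contexts for a in c})
    states = []
    for bits in product((0, 1), repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if all(sum(v[a] for a in c) == 1 for c in contexts):
            states.append(v)
    return states

# Triangle of Fig. 2(i): three two-atomic contexts, tightly intertwined.
assert len(two_valued_states([(0, 1), (1, 2), (2, 0)])) == 0

# Triangle of Fig. 2(ii): corners 0,1,2 shared by three three-atomic contexts.
assert len(two_valued_states([(0, 3, 1), (0, 4, 2), (1, 5, 2)])) == 4

# Square of Fig. 3(i): four two-atomic contexts; 2! permutation measures.
assert len(two_valued_states([(0, 1), (1, 2), (2, 3), (3, 0)])) == 2

# Fig. 1(iii): two three-atomic and three two-atomic contexts on six atoms;
# the "2 = 3" contradiction leaves no two-valued state.
assert len(two_valued_states([(0, 1, 2), (3, 4, 5), (0, 3), (1, 4), (2, 5)])) == 0
```

The enumeration reproduces the counts quoted in the text: none for the tight triangle, four for the triangle with midpoints, two for the square, and none for the mixed-type pasting of Fig. 1(iii).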
So, in a sense, one could say that the collection of observables represented in Fig.~\ref{2015-s-f1}(iii) is ``weirder'' than the ones represented in Figs.~\ref{2015-s-f1}(i) and (ii). \section{Generalized probabilities beyond the quantum predictions} When it comes to observables and probabilities there are two fundamental questions: (i) Given a particular collection of observables, what sort of probability measures can this propositional structure support or entail~\cite{Pitowsky2003395,pitowsky-06}? (ii) Conversely, given a particular probability measure, which observables and what propositional structure can be associated with this probability~\cite{Hardy:2001jk,Hardy2003381}? We shall mainly concentrate on the first question. A {\it caveat} is in order: it might as well be that, from a certain perspective, we are not forced to ``leave'' or modify classical probability theory: for example, quantum probabilities could be interpreted as classical {\em conditional} probabilities~\cite{Khrennikov-15}, where conditioning is with respect to fixed experimental settings; in particular, with respect to the context measured. \subsection{Subclassicality and frame functions} In order to construct probability measures on non-Boolean propositional structures which can be obtained by pasting together contexts we shall adhere to the following assumption, which we would like to call {\em subclassicality}: {\em every context (i.e., Boolean subalgebra, block) is endowed with classical probabilities.} In particular, any probability measure on its atoms is additive. This is quite reasonable, because it is prudent to maintain the validity of classical probability theory on those classical substructures of the propositional calculus that contain observables which are mutually co-measurable.
Subclassicality can be formalized by frame functions in the context of Hilbert spaces~\cite{Gleason,r:dvur-93,pitowsky:218,Peres-expTest-Glea} as follows: A frame function of unit weight for a separable Hilbert space $H$ is a real-valued function $f$ defined on the (surface of the) unit sphere of $H$ such that if $\{ e_i \}$ is an orthonormal basis of $H$ then $\sum_i f(e_i) = 1$. This can be translated to pastings of contexts by identifying the set of atoms $\{a_{i}\}$ in a particular context $C$ with the set of vectors in one basis, and by requiring that $\sum_i f(a_i) = 1$ for all contexts $C$ involved. For pastings of contexts on value definite systems of observables, admissibility, which originally has been conceived as a formalization of ``partial value definiteness'' and value indefiniteness~\cite{2012-incomput-proofsCJ,PhysRevA.89.032109,2015-AnalyticKS}, is essentially equivalent to the requirements imposed upon frame functions; that is, subclassicality. Nevertheless, one could also request generalized admissibility rules as follows. Let $O$ be a set of atoms in a propositional structure, and let $f: O \to [0,1]$ be a probability measure. Then $f$ is {\em admissible} if the following condition holds for every context $C$ of $ O $: for any $a\in C$ with $0\le f(a)\le 1$, the values on the remaining atoms add up accordingly; that is, $\sum_{b_i\in C\setminus\{a\}} f(b_i)=1-f(a)$. Likewise, for two-valued measures $v$ on value definite systems of observables, admissibility~\cite{2012-incomput-proofsCJ,PhysRevA.89.032109,2015-AnalyticKS} can be defined in analogy to frame functions: for any context $C=\{a_1,\ldots,a_n\}$ of $ O $, the values of the two-valued measure on the atoms $a_1,\ldots,a_n$ have to add up to one; that is, $\sum_i v(a_i) = 1$.
For the sake of a (quasi-) classical formalization, define a {\em two-valued measure} (or, used synonymously, {\em valuation,} or {\em truth assignment}) $v$ on a single context $C=\{a_1,\ldots,a_n\}$ to acquire the value $v (a_i)=1$ on exactly one $a_i$, $1\le i\le n$, of the atoms of the context, and the value zero on the remaining atoms; that is, $v (a_{j})=0$ for all $j\neq i$, $1\le j\le n$. Any (quasi-) classical probability measure, or, used synonymously, {\em state,} or {\em non-negative frame function} $f$ (of weight one), on this context can then be obtained by a convex combination of all $m$ two-valued measures; that is, \begin{equation} \begin{split} f =\sum_{1\le k\le m} \lambda_k v_k \text{, with } \\ 1 =\sum_{1\le k\le m} \lambda_k \text{, and } \lambda_k \ge 0. \label{2015-s-e1} \end{split} \end{equation} As far as classical physics is concerned, that is all there is -- the classical probabilities are just the convex combinations of the $m$ two-valued measures on the Boolean algebra $2^m$. This convex combination can be given a geometrical interpretation: first encode every two-valued measure on $C$ as some $m$-tuple, whereby the $i$'th component of the $m$-tuple is identified with the value $v(a_i)$ of that valuation on the $i$'th atom of the context $C$; and then interpret the resulting set of $m$-tuples as the set of the vertices of a convex polytope. By the Minkowski-Weyl representation theorem~\cite[p.29]{ziegler}, every convex polytope has a dual (equivalent) description: either as the convex hull of its extreme points (vertices); or as the intersection of a finite number of half-spaces. More generally, one can do this not only on the atoms of one context, but also on a selection of atoms and joint probabilities of two or more contexts~\cite{pitowsky-89a,Pit-91,Pit-94,2000-poly}.
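For a single context, Eq.~(\ref{2015-s-e1}) can be spelled out directly: the two-valued measures are just the unit tuples, and every probability measure on the context arises as a convex combination thereof. A small Python sketch (the number of atoms and the convex weights are arbitrary):

```python
import random

random.seed(1)
n = 4  # number of atoms in the single context; arbitrary choice

# The n two-valued measures: exactly one atom receives the value 1.
two_valued = [[1 if j == i else 0 for j in range(n)] for i in range(n)]

# Arbitrary convex weights lambda_k >= 0 with sum 1.
lam = [random.random() for _ in range(n)]
total = sum(lam)
lam = [x / total for x in lam]

# The convex combination f = sum_k lambda_k v_k of Eq. (1).
f = [sum(lam[k] * two_valued[k][i] for k in range(n)) for i in range(n)]

assert abs(sum(f) - 1) < 1e-12          # a frame function of unit weight
assert all(abs(f[i] - lam[i]) < 1e-12 for i in range(n))
```

Here the vertices of the polytope are the unit tuples, and the classical probability measures fill out the simplex they span.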
This results in what Boole~\cite{Boole,Boole-62} called {\em ``conditions of possible experience''} for the {\em ``concurrence of events.''} In an Einstein-Podolsky-Rosen setup one ends up with Bell-type inequalities, which are prominently violated by quantum probabilities and correlations. Alas, the quantum correlations do not violate the inequalities maximally, which has led to the introduction of so-called ``nonlocal boxes''~\cite{popescu-2014}, which may be obtained by ``sharpening'' the two-partite quantum correlations to a Heaviside function~\cite{svozil-krenn}. As long as there are ``sufficiently many'' two-valued measures (e.g. capable of separating two arbitrary atoms) one might generalize this strategy to non-Boolean propositional structures. In particular, one could obtain quasi-classical probability measures by enumerating all two-valued measures, and by then taking the convex combination (\ref{2015-s-e1}) thereof~\cite{svozil-2008-ql}. One can do this because a two-valued measure has to ``cover'' all involved contexts simultaneously: if subclassicality is assumed, then the same two-valued measure defined on one context contributes to all the other contexts in such a way that the sums of that measure, taken along any such context, have to be additive and yield one.
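That a single two-valued measure has to ``cover'' all intertwined contexts simultaneously can be made explicit by brute force. The following sketch enumerates all admissible 0/1 assignments on a hypothetical minimal pasting of two contexts sharing one atom (an illustrative toy example, not one of the logics of the figures):

```python
from itertools import product

# Two three-atom contexts pasted at the common atom 'c'.
contexts = [('a', 'b', 'c'), ('c', 'd', 'e')]
atoms = sorted({x for C in contexts for x in C})

measures = []
for bits in product((0, 1), repeat=len(atoms)):
    v = dict(zip(atoms, bits))
    # Subclassicality: exactly one atom per context takes the value 1.
    if all(sum(v[a] for a in C) == 1 for C in contexts):
        measures.append(v)

# One measure with v(c)=1, plus 2*2 measures with v(c)=0: five in total.
assert len(measures) == 5
# The shared atom receives ONE value consistent with both contexts at once.
assert all(sum(v[a] for a in C) == 1 for v in measures for C in contexts)
```

Convex combinations of these five measures then yield all quasi-classical probability measures on this toy pasting, in the sense of Eq.~(\ref{2015-s-e1}).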
\subsection{Cat's cradle configurations} \begin{figure} \begin{center} \unitlength 0.6mm \allinethickness{2.1pt} \begin{picture}(108.00,55.00) \put(25.00,7.33){\color{gray}\line(1,0){60.00}} \put(25.00,47.33){\color{red}\line(1,0){60.00}} \put(55.00,7.33){\color{cyan}\line(0,1){40.00}} \put(25.00,7.33){\color{blue}\line(-1,1){20.00}} \put(5.00,27.33){\color{green}\line(1,1){20.00}} \put(85.00,7.33){\color{magenta}\line(1,1){20.00}} \put(105.00,27.33){\color{yellow}\line(-1,1){20.00}} \put(24.67,55.00){\makebox(0,0)[rc]{$a_3$}} \put(55.33,55.00){\makebox(0,0)[cc]{$a_4$}} \put(85.33,55.00){\makebox(0,0)[lc]{$a_5$}} \put(9.00,40.00){\makebox(0,0)[rc]{$a_2$}} \put(99.33,40.00){\makebox(0,0)[lc]{$a_6$}} \put(0.00,26.33){\makebox(0,0)[rc]{$a_1$}} \put(108.00,26.33){\makebox(0,0)[lc]{$a_7$}} \put(60.33,26.33){\makebox(0,0)[lc]{$a_{13}$}} \put(9.00,13.33){\makebox(0,0)[rc]{$a_{12}$}} \put(99.67,13.33){\makebox(0,0)[lc]{$a_8$}} \put(24.67,-0.05){\makebox(0,0)[rc]{$a_{11}$}} \put(55.33,-0.05){\makebox(0,0)[cc]{$a_{10}$}} \put(85.33,-0.05){\makebox(0,0)[lc]{$a_9$}} \put(15.00,17.09){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{gray}\circle{3.00}} \put(55.00,27.33){\color{cyan}\circle{1.5}} \put(85.00,7.33){\color{gray}\circle{1.5}} \put(85.00,7.33){\color{magenta}\circle{3.00}} \put(95.00,17.33){\color{magenta}\circle{1.5}} \put(5.00,27.33){\color{green}\circle{1.5}} \put(5.00,27.33){\color{blue}\circle{3.0}} \put(15.00,37.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{red}\circle{3.00}} \put(55.00,47.33){\color{red}\circle{1.5}} \put(55.00,47.33){\color{cyan}\circle{3.00}} \put(85.00,47.33){\color{red}\circle{1.5}} \put(85.00,47.33){\color{yellow}\circle{3.00}} \put(55.00,7.33){\color{gray}\circle{1.5}} \put(55.00,7.33){\color{cyan}\circle{3.00}} \put(104.76,27.33){\color{yellow}\circle{1.5}} \put(104.76,27.33){\color{magenta}\circle{3.00}} 
\put(95.00,37.33){\color{yellow}\circle{1.5}} \end{picture} \end{center} \caption{\label{2015-cesena-f2} (Color online) Orthogonality diagram of a cat's cradle logic which requires that, for two-valued measures, if $v(a_1)=1$, then $v(a_7)=0$. For a partition logic as well as for a Hilbert space realization see Refs.~\cite{svozil-tkadlec,svozil-2008-ql}.} \end{figure} Consider the propositional structure depicted in Fig.~\ref{2015-cesena-f2}. As Pitowsky~\cite{Pitowsky2003395,pitowsky-06} has pointed out, the reduction of some probabilities of atoms at intertwined contexts yields \begin{equation} p_1+p_7=\frac{3}{2}- \frac{1}{2}\left(p_{12}+p_{13}+p_2+p_6+p_8\right)\le \frac{3}{2}, \label{2015-s-e2} \end{equation} because all probabilities $p_i$ are non-negative. Indeed, if one applies the standard quantum mechanical Born (trace) rule to a particular realization enumerated in Fig.~4 of Ref.~\cite{svozil-tkadlec}, then, as $a_1\equiv \frac{1}{\sqrt{3}}\left( \sqrt{2},-1,0 \right)$ and $a_7\equiv \frac{1}{\sqrt{3}}\left( \sqrt{2},1,0 \right)$, the quantum probability of finding the quantum in a state spanned by $a_7$ if it has been prepared in a state spanned by $a_1$ is $p_7(a_1)= \langle a_7 \vert a_1 \rangle^2 = \frac{1}{9}$. Together with $p_1(a_1)= \langle a_1 \vert a_1 \rangle^2 = 1$ we obtain $p_1(a_1) + p_7(a_1) = \frac{10}{9}$, which satisfies the classical bound $\frac{3}{2}$. Indeed, a closer look at the quantum probabilities reveals that, with $a_{13}\equiv \left( 0, 1,0 \right)$, $a_{6,8}\equiv \frac{1}{2\sqrt{3}}\left( -1,\sqrt{2},\pm 3 \right)$, $p_{12}(a_1) =p_2(a_1)=0$, $p_{13}(a_1)= \frac{1}{3}$, and $p_6(a_1)=p_8(a_1) = \frac{2}{9}$, the classical bounds of probability~(\ref{2015-s-e2}) -- Boole's conditions of possible experience -- are perfectly satisfied by the quantum predictions, since $ 1+\frac{1}{9} = \frac{3}{2} - \frac{1}{2}\left(0+\frac{1}{3}+0+\frac{2}{9}+\frac{2}{9}\right)$.
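These quantum values can be reproduced directly from the Born rule. The following Python sketch evaluates $\langle a_i \vert a_1 \rangle^2$ for the vector coordinates quoted above (the values $p_2=p_{12}=0$ are taken over from the text, since those two vectors are not spelled out here) and confirms that the bound~(\ref{2015-s-e2}) is obeyed:

```python
import math

def born(x, y):
    """Born probability <x|y>^2 for real unit vectors."""
    return sum(a * b for a, b in zip(x, y)) ** 2

s2, s3 = math.sqrt(2), math.sqrt(3)
a1  = (s2 / s3, -1 / s3, 0.0)            # preparation
a7  = (s2 / s3,  1 / s3, 0.0)
a13 = (0.0, 1.0, 0.0)
a6  = (-1 / (2 * s3), s2 / (2 * s3),  3 / (2 * s3))
a8  = (-1 / (2 * s3), s2 / (2 * s3), -3 / (2 * s3))

p1, p7 = born(a1, a1), born(a7, a1)
p13, p6, p8 = born(a13, a1), born(a6, a1), born(a8, a1)
p2 = p12 = 0.0                            # as stated in the text

assert math.isclose(p7, 1 / 9) and math.isclose(p13, 1 / 3)
assert math.isclose(p6, 2 / 9) and math.isclose(p8, 2 / 9)

lhs = p1 + p7                             # = 10/9
rhs = 3 / 2 - (p12 + p13 + p2 + p6 + p8) / 2
assert math.isclose(lhs, rhs) and lhs <= 3 / 2   # Eq. (2) holds
```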
This was to be expected, as Eq.~(\ref{2015-s-e2}) has been derived by supposing subclassicality, which is satisfied by quasi-classical (e.g., generalized urn as well as automata) models as well as by quantum mechanics. But does that mean that the classical and quantum predictions coincide? The quantum predictions, computed under the assumption that the system is prepared in state $a_1$ and thus $p_1(a_1)=1$, are enumerated in Fig.~\ref{2015-cesena-f3}(i). Note that the probabilities within each context have to add up to unity. In contrast to the quantum predictions, with the same preparation, the classical predictions cannot yield any $p_7(a_1)$ other than zero, because, by the way the logic is constructed, there does not exist any two-valued measure satisfying $v(a_1)=v(a_7)=1$. (This is readily established by proving the impossibility of any such measure~\cite{svozil-2006-omni}.) They are enumerated in Fig.~\ref{2015-cesena-f3}(ii). The full parametrization of all conceivable classical probabilities is depicted in Fig.~\ref{2015-cesena-f3}(iii). So, if one interprets this argument in terms of a (state dependent) Boole-Bell type inequality, all that is needed is to prepare a three-state quantum system in a state along $a_1\equiv \frac{1}{\sqrt{3}}\left( \sqrt{2},-1,0 \right)$ and measure the projection observable along $a_7\equiv \frac{1}{\sqrt{3}}\left( \sqrt{2},1,0 \right)$. In a generalized beam splitter setup~\cite{rzbb}, once the detector associated with $a_7$ clicks on the input associated with port $a_1$ one knows that the underlying physical realization is ``quantum-like'' and not classical. This represents another type of violation of Boole's conditions of possible experience by quantized systems.
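The impossibility of a two-valued measure with $v(a_1)=v(a_7)=1$ can also be verified by exhaustively enumerating all $2^{13}$ candidate value assignments on the seven contexts read off the orthogonality diagram of Fig.~\ref{2015-cesena-f2} (a brute-force sketch):

```python
from itertools import product

# The seven contexts of the cat's cradle logic, atoms a1..a13,
# read off the orthogonality diagram of Fig. 2.
contexts = [(1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9),
            (9, 10, 11), (11, 12, 1), (4, 13, 10)]

# Keep only assignments with exactly one 1 per context (subclassicality).
measures = [bits for bits in product((0, 1), repeat=13)
            if all(sum(bits[a - 1] for a in C) == 1 for C in contexts)]

assert len(measures) == 14        # the vertices lambda_1, ..., lambda_14
# No two-valued measure assigns the value 1 to both a1 and a7.
assert not any(b[0] == b[6] == 1 for b in measures)
```

The fourteen surviving assignments are exactly the two-valued measures parameterized by $\lambda_1,\ldots,\lambda_{14}$ in Fig.~\ref{2015-cesena-f3}(iii).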
There exist more quantum predictions contradicting (quasi-) classical predictions based on additivity: suppose a tandem cat's cradle logic, which is just two cat's cradle logics intertwined at three contexts per copy, with a non-separating set of two-valued states, already discussed by Kochen and Specker~\cite[$\Gamma_3$, p.~70]{kochen1}, and explicitly parameterized in three-dimensional real Hilbert space by Tkadlec~\cite[Fig.~1]{tkadlec-96}, thereby reusing the observables and preparations already introduced earlier. Classical predictions based on this set of observables would require that, if one prepares a quantized system in $a_1\equiv \frac{1}{\sqrt{3}}\left( \sqrt{2},-1,0 \right)$ and measures it along $b\equiv \frac{1}{\sqrt{3}}\left( -1,\sqrt{2},0 \right)$, the measurement would always yield a positive result, because every two-valued measure $v$ on that logic must satisfy $v(a_1)=v(b)=1$. However, the quantum prediction, also satisfying subclassicality, is $\langle b \vert a_1 \rangle^2 =\frac{8}{9}$.
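The quantum value is again straightforward to verify numerically (classically, certainty, i.e. probability one, would be required):

```python
import math

s2, s3 = math.sqrt(2), math.sqrt(3)
a1 = (s2 / s3, -1 / s3, 0.0)
b = (-1 / s3, s2 / s3, 0.0)

# Born probability <b|a1>^2 for the preparation a1.
p = sum(x * y for x, y in zip(b, a1)) ** 2
assert math.isclose(p, 8 / 9)   # strictly less than the classical value 1
```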
\begin{figure} \begin{center} \begin{tabular}{ccc} \unitlength 0.3mm \allinethickness{1.5pt} \begin{picture}(108.00,55.00) \put(25.00,7.33){\color{gray}\line(1,0){60.00}} \put(25.00,47.33){\color{red}\line(1,0){60.00}} \put(55.00,7.33){\color{cyan}\line(0,1){40.00}} \put(25.00,7.33){\color{blue}\line(-1,1){20.00}} \put(5.00,27.33){\color{green}\line(1,1){20.00}} \put(85.00,7.33){\color{magenta}\line(1,1){20.00}} \put(105.00,27.33){\color{yellow}\line(-1,1){20.00}} \put(24.67,55.00){\makebox(0,0)[rc]{$0$}} \put(55.33,58.00){\makebox(0,0)[cc]{$\frac{1}{3}$}} \put(87,58.00){\makebox(0,0)[lc]{$\frac{2}{3}$}} \put(9.00,40.00){\makebox(0,0)[rc]{$0$}} \put(99.33,40.00){\makebox(0,0)[lc]{$\frac{2}{9}$}} \put(0.00,26.33){\makebox(0,0)[rc]{$1$}} \put(110.00,26.33){\makebox(0,0)[lc]{$\frac{1}{9}$}} \put(60.33,26.33){\makebox(0,0)[lc]{$\frac{1}{3}$}} \put(9.00,13.33){\makebox(0,0)[rc]{$0$}} \put(99.67,13.33){\makebox(0,0)[lc]{$\frac{2}{9}$}} \put(24.67,-2){\makebox(0,0)[rc]{$0$}} \put(55.33,-3){\makebox(0,0)[cc]{$\frac{1}{3}$}} \put(87,-2){\makebox(0,0)[lc]{$\frac{2}{3}$}} \put(15.00,17.09){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{gray}\circle{3.00}} \put(55.00,27.33){\color{cyan}\circle{1.5}} \put(85.00,7.33){\color{gray}\circle{1.5}} \put(85.00,7.33){\color{magenta}\circle{3.00}} \put(95.00,17.33){\color{magenta}\circle{1.5}} \put(5.00,27.33){\color{green}\circle{1.5}} \put(5.00,27.33){\color{blue}\circle{3.0}} \put(15.00,37.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{red}\circle{3.00}} \put(55.00,47.33){\color{red}\circle{1.5}} \put(55.00,47.33){\color{cyan}\circle{3.00}} \put(85.00,47.33){\color{red}\circle{1.5}} \put(85.00,47.33){\color{yellow}\circle{3.00}} \put(55.00,7.33){\color{gray}\circle{1.5}} \put(55.00,7.33){\color{cyan}\circle{3.00}} \put(104.76,27.33){\color{yellow}\circle{1.5}} \put(104.76,27.33){\color{magenta}\circle{3.00}} 
\put(95.00,37.33){\color{yellow}\circle{1.5}} \end{picture} && \unitlength 0.3mm \allinethickness{1.5pt} \begin{picture}(108.00,55.00) \put(25.00,7.33){\color{gray}\line(1,0){60.00}} \put(25.00,47.33){\color{red}\line(1,0){60.00}} \put(55.00,7.33){\color{cyan}\line(0,1){40.00}} \put(25.00,7.33){\color{blue}\line(-1,1){20.00}} \put(5.00,27.33){\color{green}\line(1,1){20.00}} \put(85.00,7.33){\color{magenta}\line(1,1){20.00}} \put(105.00,27.33){\color{yellow}\line(-1,1){20.00}} \put(24.67,55.00){\makebox(0,0)[rc]{$0$}} \put(55.33,55.00){\makebox(0,0)[cc]{$y$}} \put(85.33,55.00){\makebox(0,0)[lc]{$x+z$}} \put(9.00,40.00){\makebox(0,0)[rc]{$0$}} \put(99.33,40.00){\makebox(0,0)[lc]{$y$}} \put(0.00,26.33){\makebox(0,0)[rc]{$1$}} \put(110.00,26.33){\makebox(0,0)[lc]{$0$}} \put(60.33,26.33){\makebox(0,0)[lc]{$x$}} \put(9.00,13.33){\makebox(0,0)[rc]{$0$}} \put(99.67,13.33){\makebox(0,0)[lc]{$z$}} \put(24.67,-0.05){\makebox(0,0)[rc]{$0$}} \put(55.33,-0.05){\makebox(0,0)[cc]{$z$}} \put(85.33,-0.05){\makebox(0,0)[lc]{$x+y$}} \put(15.00,17.09){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{gray}\circle{3.00}} \put(55.00,27.33){\color{cyan}\circle{1.5}} \put(85.00,7.33){\color{gray}\circle{1.5}} \put(85.00,7.33){\color{magenta}\circle{3.00}} \put(95.00,17.33){\color{magenta}\circle{1.5}} \put(5.00,27.33){\color{green}\circle{1.5}} \put(5.00,27.33){\color{blue}\circle{3.0}} \put(15.00,37.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{red}\circle{3.00}} \put(55.00,47.33){\color{red}\circle{1.5}} \put(55.00,47.33){\color{cyan}\circle{3.00}} \put(85.00,47.33){\color{red}\circle{1.5}} \put(85.00,47.33){\color{yellow}\circle{3.00}} \put(55.00,7.33){\color{gray}\circle{1.5}} \put(55.00,7.33){\color{cyan}\circle{3.00}} \put(104.76,27.33){\color{yellow}\circle{1.5}} \put(104.76,27.33){\color{magenta}\circle{3.00}} \put(95.00,37.33){\color{yellow}\circle{1.5}} \end{picture} \\ $\;$ \\ 
(i)&$\qquad$&(ii) \\ \multicolumn{3}{c}{ \unitlength 0.4mm \allinethickness{1.8pt} \begin{picture}(108.00,80.00)(0,-10) \put(25.00,7.33){\color{gray}\line(1,0){60.00}} \put(25.00,47.33){\color{red}\line(1,0){60.00}} \put(55.00,7.33){\color{cyan}\line(0,1){40.00}} \put(25.00,7.33){\color{blue}\line(-1,1){20.00}} \put(5.00,27.33){\color{green}\line(1,1){20.00}} \put(85.00,7.33){\color{magenta}\line(1,1){20.00}} \put(105.00,27.33){\color{yellow}\line(-1,1){20.00}} \put(24.67,61.00){\makebox(0,0)[rc]{\scriptsize $\lambda_{10} +\lambda_{11}+$}} \put(24.67,55.00){\makebox(0,0)[rc]{\scriptsize $+ \lambda_{12} + \lambda_{13} + \lambda_{14}$}} \put(55.33,61.00){\makebox(0,0)[cc]{\scriptsize $\lambda_2 + \lambda_6 + $}} \put(55.33,55.00){\makebox(0,0)[cc]{\scriptsize $+ \lambda_7 + \lambda_8$}} \put(85.33,61.00){\makebox(0,0)[lc]{\scriptsize $\lambda_1 + \lambda_3 + \lambda_4 +$}} \put(85.33,55.00){\makebox(0,0)[lc]{\scriptsize $+ \lambda_{12} + \lambda_{13} + \lambda_{14}$}} \put(9.00,43.00){\makebox(0,0)[rc] {\scriptsize $\lambda_4 + \lambda_5 + \lambda_6 + $}} \put(9.00,37.00){\makebox(0,0)[rc] {\scriptsize $ + \lambda_7 + \lambda_8 + \lambda_9$}} \put(99.33,43.00){\makebox(0,0)[lc]{\scriptsize $\lambda_2 + \lambda_6 + \lambda_8 +$}} \put(99.33,37.00){\makebox(0,0)[lc]{\scriptsize $ + \lambda_{11} + \lambda_{12} + \lambda_{14}$}} \put(0.00,26.33){\makebox(0,0)[rc] {\scriptsize $\lambda_1 + \lambda_2 + \lambda_3$}} \put(108.00,26.33){\makebox(0,0)[lc]{\scriptsize $\lambda_7 + \lambda_{10} + \lambda_{13}$}} \put(60.33,32.33){\makebox(0,0)[lc]{\scriptsize $\lambda_1 + \lambda_4 + \lambda_5 + $}} \put(60.33,26.33){\makebox(0,0)[lc]{\scriptsize $+ \lambda_{10} + \lambda_{11} + $}} \put(60.33,20.33){\makebox(0,0)[lc]{\scriptsize $+ \lambda_{12}$}} \put(9.00,15.33){\makebox(0,0)[rc] {\scriptsize $\lambda_4 + \lambda_6 + \lambda_9 + $}} \put(9.00,9.33){\makebox(0,0)[rc] {\scriptsize $+ \lambda_{12} + \lambda_{13} + \lambda_{14}$}} \put(99.67,15.33){\makebox(0,0)[lc]{\scriptsize 
$\lambda_3 + \lambda_5 + \lambda_8 + $}} \put(99.67,9.33){\makebox(0,0)[lc] {\scriptsize $+ \lambda_9 + \lambda_{11} + \lambda_{14}$}} \put(24.67,-0.05){\makebox(0,0)[rc]{\scriptsize $\lambda_5 + \lambda_7 + \lambda_8 +$}} \put(24.67,-6.05){\makebox(0,0)[rc]{\scriptsize $+ \lambda_{10} + \lambda_{11}$}} \put(55.33,-0.05){\makebox(0,0)[cc]{\scriptsize $\lambda_3 + \lambda_9 + $}} \put(55.33,-6.05){\makebox(0,0)[cc]{\scriptsize $ + \lambda_{13} + \lambda_{14}$}} \put(85.33,-0.05){\makebox(0,0)[lc]{\scriptsize $\lambda_1 + \lambda_2 + \lambda_4 + $}} \put(85.33,-6.05){\makebox(0,0)[lc]{\scriptsize $ + \lambda_6 + \lambda_{12}$}} \put(15.00,17.09){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{blue}\circle{1.5}} \put(25.00,7.33){\color{gray}\circle{3.00}} \put(55.00,27.33){\color{cyan}\circle{1.5}} \put(85.00,7.33){\color{gray}\circle{1.5}} \put(85.00,7.33){\color{magenta}\circle{3.00}} \put(95.00,17.33){\color{magenta}\circle{1.5}} \put(5.00,27.33){\color{green}\circle{1.5}} \put(5.00,27.33){\color{blue}\circle{3.0}} \put(15.00,37.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{green}\circle{1.5}} \put(25.00,47.33){\color{red}\circle{3.00}} \put(55.00,47.33){\color{red}\circle{1.5}} \put(55.00,47.33){\color{cyan}\circle{3.00}} \put(85.00,47.33){\color{red}\circle{1.5}} \put(85.00,47.33){\color{yellow}\circle{3.00}} \put(55.00,7.33){\color{gray}\circle{1.5}} \put(55.00,7.33){\color{cyan}\circle{3.00}} \put(104.76,27.33){\color{yellow}\circle{1.5}} \put(104.76,27.33){\color{magenta}\circle{3.00}} \put(95.00,37.33){\color{yellow}\circle{1.5}} \end{picture} } \\ \multicolumn{3}{c}{(iii)} \end{tabular} \end{center} \caption{\label{2015-cesena-f3} (Color online) Orthogonality diagram of the logic depicted in Fig.~\ref{2015-cesena-f2} with overlaid (i) quantum and (ii) classical prediction probabilities for a state prepared along $a_1$. The classical predictions require that $x$, $y$ and $z$ are non-negative and $x+y+z = 1$. 
(iii) The full parametrization of classical probabilities, with non-negative $\lambda_1,\ldots ,\lambda_{14}\ge 0$ and $\lambda_1+\cdots +\lambda_{14} = 1$. Note that the special case (ii) is obtained by identifying $\lambda_1=x$, $\lambda_2=y$, $\lambda_3=z$, and $\lambda_4=\cdots =\lambda_{14}= 0$. } \end{figure}

The full hull computation~\cite{cdd-pck} reveals the Boole-Bell type conditions of possible experience \begin{equation} \begin{split} p_1+p_2+p_6\geq p_4+p_8, \\ p_1+p_2\geq p_4, \\ p_1+2 p_2+p_6\geq 2 p_4+p_8, \\ p_2+p_6\geq p_4, \ldots \\ p_{10}+p_2+p_6\geq p_4+p_8, \\ p_4+p_8+1\geq p_1+p_{10}+p_2+p_6, \\ p_8+1\geq p_1+p_{10}+p_2, \\ p_4+1\geq p_1+p_2+p_6, \\ p_4+p_5\geq p_1+p_2, \\ p_1+p_2+p_6+p_7\geq p_4+1, \\ p_4+p_8+p_9\geq p_1+p_2+p_6, \\ p_1+p_{10}+p_{11}+p_2+p_6\geq p_4+p_8+1, \\ p_{12}+p_4+p_8\geq p_{10}+p_2+p_6, \\ p_{10}+p_{13}+p_4\geq 1 \end{split} \label{2015-s-e-ccbb} \end{equation} as bounds of the polytope spanned by the two-valued measures interpreted as vertices; only some of these classical bounds are enumerated in Eq.~(\ref{2015-s-e-ccbb}). One of them, namely $p_2+p_6\geq p_4$, is violated by the quantum probabilities mentioned earlier, as $p_2=0$, $p_6=\frac{2}{9}$, and $p_4=\frac{1}{3}$.

\subsection{Pentagon configuration}

There exist, however, probabilities that are neither quasi-classical nor quantum-like, although they satisfy subclassicality, and although the underlying logic can be realized both quasi-classically by partition logics and quantum mechanically. As an example, we shall discuss Wright's dispersionless state~\cite{wright:pent} on the logic whose orthogonality diagram is a pentagon, as depicted in Fig.~\ref{2015-s-f6}(ii).
\begin{figure} \begin{center} \begin{tabular}{ccc} \unitlength 0.16mm \allinethickness{2.1pt} \begin{picture}(230,200)(-110,-100) \multiput(31,-95.25)(.033724340176,.046554252199){2046}{\color{cyan}\line(0,1){.046554252199}} \multiput(100,0)(-.033724340176,.046432062561){2046}{\color{magenta}\line(0,1){.046432062561}} \multiput(31,95)(-.10418604651,-.03372093023){1075}{\color{blue}\line(-1,0){.10418604651}} \put(-81,58.75){\color{red}\line(0,-1){117.75}} \multiput(-81,-59)(.10328096118,-.03373382625){1082}{\color{green}\line(1,0){.10328096118}} \put( 30.9017 , 95.1057){\color{blue}\circle{1.20}} \put( 30.9017 , 95.1057){\color{magenta}\circle{9.00}} \put( 55.9017 , 95.1057){\makebox(0,0)[cc]{$a_1$}} \put(100,0){\color{magenta}\circle{1.20}} \put(100,0){\color{cyan}\circle{9.00}} \put(120,0){\makebox(0,0)[cc]{$a_3$}} \put( 30.9017 , -95.1057){\color{cyan}\circle{1.20}} \put( 30.9017 , -95.1057){\color{green}\circle{9.00}} \put( 55.9017 , -95.1057){\makebox(0,0)[cc]{$a_5$}} \put( -80.9017 , -58.7785){\color{green}\circle{1.20}} \put( -80.9017 , -58.7785){\color{red}\circle{9.00}} \put( -105.9017 , -58.7785){\makebox(0,0)[cc]{$a_7$}} \put(-80.9017 , 58.7785){\color{red}\circle{1.20}} \put(-80.9017 , 58.7785){\color{blue}\circle{9.00}} \put(-105.9017 , 58.7785){\makebox(0,0)[cc]{$a_9$}} \end{picture} && \unitlength 0.16mm \allinethickness{2.1pt} \begin{picture}(230,200)(-110,-100) \multiput(31,-95.25)(.033724340176,.046554252199){2046}{\color{cyan}\line(0,1){.046554252199}} \multiput(100,0)(-.033724340176,.046432062561){2046}{\color{magenta}\line(0,1){.046432062561}} \multiput(31,95)(-.10418604651,-.03372093023){1075}{\color{blue}\line(-1,0){.10418604651}} \put(-81,58.75){\color{red}\line(0,-1){117.75}} \multiput(-81,-59)(.10328096118,-.03373382625){1082}{\color{green}\line(1,0){.10328096118}} \put( 30.9017 , 95.1057){\color{blue}\circle{1.20}} \put( 30.9017 , 95.1057){\color{magenta}\circle{9.00}} \put( 55.9017 , 95.1057){\makebox(0,0)[cc]{$a_1$}} \put( 
65.4509,47.5529){\color{magenta}\circle{9}} \put( 90.4509,47.5529){\makebox(0,0)[cc]{$a_2$}} \put(100,0){\color{magenta}\circle{1.20}} \put(100,0){\color{cyan}\circle{9.00}} \put(120,0){\makebox(0,0)[cc]{$a_3$}} \put( 65.4509,-47.5529){\color{cyan}\circle{9}} \put( 90.4509,-47.5529){\makebox(0,0)[cc]{$a_4$}} \put( 30.9017 , -95.1057){\color{cyan}\circle{1.20}} \put( 30.9017 , -95.1057){\color{green}\circle{9.00}} \put( 55.9017 , -95.1057){\makebox(0,0)[cc]{$a_5$}} \put( -25,-76.9421){\color{green}\circle{9}} \put( -40,-90.9421){\makebox(0,0)[cc]{$a_6$}} \put( -80.9017 , -58.7785){\color{green}\circle{1.20}} \put( -80.9017 , -58.7785){\color{red}\circle{9.00}} \put( -105.9017 , -58.7785){\makebox(0,0)[cc]{$a_7$}} \put(-80.9017,0){\color{red}\circle{9}} \put(-105.9017,0){\makebox(0,0)[cc]{$a_8$}} \put(-80.9017 , 58.7785){\color{red}\circle{1.20}} \put(-80.9017 , 58.7785){\color{blue}\circle{9.00}} \put(-105.9017 , 58.7785){\makebox(0,0)[cc]{$a_9$}} \put( -25,76.9421){\color{blue}\circle{9}} \put( -40,90.9421){\makebox(0,0)[cc]{$a_{10}$}} \end{picture} \\ $\;$ (i)&$\quad$&(ii) \end{tabular} \end{center} \caption{\label{2015-s-f6} (Color online) Orthogonality diagram of the reduced pentagon (i), and of the pentagon logic (ii). A realization of (ii) in terms of partition logic is enumerated in Eq.~(\ref{2015-s-e6}); an explicit quantum realization can be found in Ref.~\cite{svozil-tkadlec}.} \end{figure} Which predictive probabilities are associated with such structures? The propositional structure depicted in Fig.~\ref{2015-s-f6}(i) has no two-valued state, and just allows a single probability measure which is constant on all atoms; that is, $p_1=p_3=p_5=p_7=p_9=\frac{1}{2}$. This prediction or oracle is still allowed by the subclassicality rule even if one adds one atom per block. But, as has been pointed out by Wright~\cite{wright:pent}, it can be operationally realized neither by any quasi-classical nor by any quantum oracle.
For quasi-classical systems, this can explicitly be demonstrated by enumerating all two-valued measures on this ``pentagon logic'' of Fig.~\ref{2015-s-f6}(ii), as depicted in Fig.~\ref{2015-s-f7}. Note that no measure exists which is non-zero only on the atoms located at intertwining contexts; that is, which does not vanish at one (or more) atoms at intertwining contexts and at the same time vanishes at all the ``middle'' atoms belonging to only one context. Because the quasi-classical probabilities are just the convex sums of Eq.~(\ref{2015-s-e1}) over all the two-valued measures, it is clear that no classical probability vanishes at all non-intertwining atoms; in particular, there is none which is $\frac{1}{2}$ on all intertwining atoms. A straightforward extraction~\cite{svozil-2001-eua,svozil-2008-ql} based on the two-valued measures in Fig.~\ref{2015-s-f7} yields the partition logic enumerated in Eq.~(\ref{2015-s-e6}) -- that is, the pasting of subalgebras specified by partitions of the set $\{1,2, \ldots , 11\}$ in such a way that any atom is represented by the set of indices (subscripts $v_i \rightarrow i$) of the two-valued measures acquiring the value one on that atom: \begin{equation} \begin{split} \{ \{ \{ 1,2,3 \}, \{ 7,8,9,10,11 \}, \{ 4,5,6 \} \}, \\ \{ \{ 4,5,6 \}, \{ 1,3,9,10,11 \}, \{ 2,7,8 \} \}, \\ \{ \{ 2,7,8 \}, \{ 1,4,6,10,11 \}, \{ 3,5,9 \} \},\\ \{ \{ 3,5,9 \}, \{ 1,2,4,7,11 \}, \{ 6,8,10 \} \},\\ \{ \{ 6,8,10 \}, \{ 4,5,7,9,11 \}, \{ 1,2,3 \} \} \} \end{split} \label{2015-s-e6} \end{equation} \begin{figure} \begin{center} \begin{tabular}{ccc} \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_1$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}}
\multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 30.9017 , 95.1057){\circle{8}} \put( 65.4509,-47.5529){\circle{8}} \put( -25,-76.9421){\circle{8}} \put(-80.9017,0){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_2$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 30.9017 , 95.1057){\circle{8}} \put( 30.9017 , -95.1057){\circle{8}} \put(-80.9017,0){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_3$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 30.9017 , 95.1057){\circle{8}} \put( 65.4509,-47.5529){\circle{8}} \put( -80.9017 , -58.7785){\circle{8}} \end{picture} \\ \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_4$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} 
\multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put(100,0){\circle{8}} \put( -25,-76.9421){\circle{8}} \put(-80.9017,0){\circle{8}} \put( -25,76.9421){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_5$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put(100,0){\circle{8}} \put( -80.9017 , -58.7785){\circle{8}} \put( -25,76.9421){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_6$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put(100,0){\circle{8}} \put( -25,-76.9421){\circle{8}} \put(-80.9017 , 58.7785){\circle{8}} \end{picture} \\ \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_7$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 65.4509,47.5529){\circle{8}} \put( 30.9017 , -95.1057){\circle{8}} \put(-80.9017,0){\circle{8}} \put( 
-25,76.9421){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_8$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 65.4509,47.5529){\circle{8}} \put( 30.9017 , -95.1057){\circle{8}} \put(-80.9017 , 58.7785){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_9$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 65.4509,47.5529){\circle{8}} \put( 65.4509,-47.5529){\circle{8}} \put( -80.9017 , -58.7785){\circle{8}} \put( -25,76.9421){\circle{8}} \end{picture} \\ \unitlength 0.1mm \allinethickness{1.5pt} \begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_{10}$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 65.4509,47.5529){\circle{8}} \put( 65.4509,-47.5529){\circle{8}} \put( -25,-76.9421){\circle{8}} \put(-80.9017 , 58.7785){\circle{8}} \end{picture} & \unitlength 0.1mm \allinethickness{1.5pt} 
\begin{picture}(230,200)(-110,-100) \put(0,0){\makebox(0,0)[cc]{\large $v_{11}$}} \multiput(31,-95.25)(1.2,1.6565){58}{\color{cyan}\line(0,1){.1656521739}} \multiput(100,0)(-1.2,1.6522){58}{\color{magenta}\line(0,1){.1652173913}} \multiput(31,95)(-3.69637,-1.19637){30}{\color{blue}\line(-1,0){.3696369637}} \multiput(-81,59)(0,-2){60}{\color{red}\line(0,-1){0.33}} \multiput(-81,-59)(3.664,-1.1967){31}{\color{green}\line(1,0){.3663934426}} \put( 65.4509,47.5529){\circle{8}} \put( 65.4509,-47.5529){\circle{8}} \put( -25,-76.9421){\circle{8}} \put(-80.9017,0){\circle{8}} \put( -25,76.9421){\circle{8}} \end{picture} \end{tabular} \end{center} \caption{\label{2015-s-f7} Two-valued measures on the pentagon logic of Fig.~\ref{2015-s-f6}.} \end{figure} These partitions directly translate into the classical probabilities which are, for instance, realizable by generalized urn or automaton models. Fig.~\ref{2015-s-f8} parameterizes all classical probabilities through non-negative $\lambda_1,\ldots ,\lambda_{11}\ge 0$ with $\lambda_1+\cdots +\lambda_{11}=1$, subject to subclassicality. 
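The eleven two-valued measures of Fig.~\ref{2015-s-f7}, and the fact that each of them is one on at least one ``middle'' atom, can be recovered by brute force (atoms numbered $a_1,\ldots,a_{10}$ as in Fig.~\ref{2015-s-f6}(ii)):

```python
from itertools import product

# The five contexts of the pentagon logic; a1, a3, a5, a7, a9 intertwine,
# a2, a4, a6, a8, a10 are the "middle" atoms of a single context each.
contexts = [(1, 2, 3), (3, 4, 5), (5, 6, 7), (7, 8, 9), (9, 10, 1)]
middles = (2, 4, 6, 8, 10)

measures = [bits for bits in product((0, 1), repeat=10)
            if all(sum(bits[a - 1] for a in C) == 1 for C in contexts)]

assert len(measures) == 11   # the two-valued measures v_1, ..., v_11
# Every measure is 1 on some middle atom, so no convex combination --
# in particular none with value 1/2 on all intertwining atoms and zero
# elsewhere, such as Wright's dispersionless state -- can vanish on
# all of the middle atoms simultaneously.
assert all(any(b[m - 1] for m in middles) for b in measures)
```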
\begin{figure} \begin{center} \begin{tabular}{c} \unitlength 0.12mm \allinethickness{2.1pt} \begin{picture}(230,250)(-110,-115) \multiput(31,-95.25)(.033724340176,.046554252199){2046}{\color{cyan}\line(0,1){.046554252199}} \multiput(100,0)(-.033724340176,.046432062561){2046}{\color{magenta}\line(0,1){.046432062561}} \multiput(31,95)(-.10418604651,-.03372093023){1075}{\color{blue}\line(-1,0){.10418604651}} \put(-81,58.75){\color{red}\line(0,-1){117.75}} \multiput(-81,-59)(.10328096118,-.03373382625){1082}{\color{green}\line(1,0){.10328096118}} \put( 30.9017 , 95.1057){\color{blue}\circle{1.20}} \put( 30.9017 , 95.1057){\color{magenta}\circle{15.00}} \put( 55.9017 , 95.1057){\makebox(0,0)[lc]{$\lambda_1 + \lambda_2 + \lambda_3$}} \put( 65.4509,47.5529){\color{magenta}\circle{15.00}} \put( 90.4509,47.5529){\makebox(0,0)[lc]{$\lambda_7 + \lambda_8 + \lambda_9 + \lambda_{10} + \lambda_{11}$}} \put(100,0){\color{magenta}\circle{1.20}} \put(100,0){\color{cyan}\circle{15.00}} \put(120,0){\makebox(0,0)[lc]{$\lambda_4 + \lambda_5 + \lambda_6$}} \put( 65.4509,-47.5529){\color{cyan}\circle{15.00}} \put( 90.4509,-47.5529){\makebox(0,0)[lc]{$\lambda_1 + \lambda_3 + \lambda_9 + \lambda_{10} + \lambda_{11}$}} \put( 30.9017 , -95.1057){\color{cyan}\circle{1.20}} \put( 30.9017 , -95.1057){\color{green}\circle{15.00}} \put(55.9017 , -95.1057){\makebox(0,0)[lc]{$\lambda_2 + \lambda_7 + \lambda_8$}} \put( -25,-76.9421){\color{green}\circle{15.00}} \put( -40,-90.9421){\makebox(0,0)[rc]{$\lambda_1 + \lambda_4 + \lambda_6 + \lambda_{10} + \lambda_{11}$}} \put( -80.9017 , -58.7785){\color{green}\circle{1.20}} \put( -80.9017 , -58.7785){\color{red}\circle{15.00}} \put( -105.9017 , -58.7785){\makebox(0,0)[rc]{$\lambda_3 + \lambda_5 + \lambda_9$}} \put(-80.9017,0){\color{red}\circle{15.00}} \put(-105.9017,0){\makebox(0,0)[rc]{$\lambda_1 + \lambda_2 + \lambda_4 + \lambda_7 + \lambda_{11}$}} \put(-80.9017 , 58.7785){\color{red}\circle{1.20}} \put(-80.9017 ,
58.7785){\color{blue}\circle{15.00}} \put(-105.9017 , 58.7785){\makebox(0,0)[rc]{$\lambda_6 + \lambda_8 + \lambda_{10}$}} \put( -25,76.9421){\color{blue}\circle{15.00}} \put( -40,90.9421){\makebox(0,0)[rc]{$\lambda_4 + \lambda_5 + \lambda_7 + \lambda_9 + \lambda_{11}$}} \end{picture} \end{tabular} \end{center} \caption{\label{2015-s-f8} (Color online) Classical probabilities on the pentagon logic.} \end{figure} The hull computation~\cite{cdd-pck} reveals the Boole-Bell type conditions of possible experience \begin{equation} \begin{split} p_4+p_8\geq p_1, \ldots \\ p_4+1\geq p_1+p_2+p_6, \\ p_4+p_8+1\geq 2p_1+p_2+p_6, \\ p_1+p_2\geq p_4, \\ p_1+p_2+p_6\geq p_4+p_8, \\ 2p_1+p_{10}+p_2+p_6\geq p_4+p_8+1 \end{split} \label{2015-s-e8} \end{equation} as bounds of the polytope spanned by the two-valued measures interpreted as vertices. Some of these classical bounds are enumerated in Eq.~(\ref{2015-s-e8}). Wright's measure, with $p_1=\frac{1}{2}$ and $p_4=p_8=0$, violates the first inequality. \subsection{Triangle configurations} Very similar arguments hold also for the propositional structures depicted in Figs.~\ref{2015-s-f2}(i),(ii): Fig.~\ref{2015-s-f9}(i) represents a trivial classical prediction with equal probabilities. Fig.~\ref{2015-s-f9}(ii) represents all classical predictions; the probability measures being read off from the partition logic $ \{ \{ \{1 \}, \{3 \}, \{2 \} \}, \{ \{2 \}, \{1 \}, \{3 \} \}, \{ \{3 \}, \{2 \}, \{1 \} \} \} $ obtained from the three two-valued states on the logic in Fig.~\ref{2015-s-f2}(ii). Figs.~\ref{2015-s-f9}(i),(iii) represent predictions $\frac{1}{2}$ for all atoms at which the three contexts intertwine. Fig.~\ref{2015-s-f9}(iii) represents a Wright prediction. None of the propositional structures depicted in Figs.~\ref{2015-s-f9}(i)--(iii) allows a quantum realization. 
\begin{figure} \begin{center} \begin{tabular}{ccccccc} \unitlength 0.6mm \allinethickness{2.1pt \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(26,25)(-3,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(-2.5,0){\makebox(0,0)[rc]{$\frac{1}{2}$}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(22.5,0){\makebox(0,0)[lc]{$\frac{1}{2}$}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \put(13,16.5){\makebox(0,0)[lc]{$\frac{1}{2}$}} \end{picture} && \unitlength 0.6mm \allinethickness{2.1pt \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(26,25)(-3,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(-3,0){\makebox(0,0)[rc]{$z$}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(23,0){\makebox(0,0)[lc]{$y$}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \put(13,16.5){\makebox(0,0)[lc]{$x$}} \put(5,8.25){\color{red}\circle{1.2}} \put(3,9.25){\makebox(0,0)[rc]{$y$}} \put(15,8.25){\color{green}\circle{1.2}} \put(17,9.25){\makebox(0,0)[lc]{$z$}} \put(10,0){\color{blue}\circle{1.2}} \put(10,3){\makebox(0,0)[cc]{$x$}} \end{picture} && \unitlength 0.6mm \allinethickness{2.1pt \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(26,25)(-3,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(-2.5,0){\makebox(0,0)[rc]{$\frac{1}{2}$}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(22.5,0){\makebox(0,0)[lc]{$\frac{1}{2}$}} \put(10,16.5){\color{red}\circle{1.2}} 
\put(10,16.5){\color{green}\circle{3}} \put(13,16.5){\makebox(0,0)[lc]{$\frac{1}{2}$}} \put(5,8.25){\color{red}\circle{1.2}} \put(3,9.25){\makebox(0,0)[rc]{$0$}} \put(15,8.25){\color{green}\circle{1.2}} \put(17,9.25){\makebox(0,0)[lc]{$0$}} \put(10,0){\color{blue}\circle{1.2}} \put(10,-5){\makebox(0,0)[cc]{$0$}} \end{picture} && \unitlength 0.6mm \allinethickness{2.1pt \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(26,25)(-3,0) \put(0,0){\color{blue}\line(1,0){20}} \put(0,0){\color{red}\line(3,5){10}} \put(20,0){\color{green}\line(-3,5){10}} \put(0,0){\color{blue}\circle{1.2}} \put(0,0){\color{red}\circle{3}} \put(-2.5,0){\makebox(0,0)[rc]{$\frac{1}{2}$}} \put(20,0){\color{green}\circle{1.2}} \put(20,0){\color{blue}\circle{3}} \put(22.5,0){\makebox(0,0)[lc]{$\frac{1}{2}$}} \put(10,16.5){\color{red}\circle{1.2}} \put(10,16.5){\color{green}\circle{3}} \put(13,16.5){\makebox(0,0)[lc]{$\frac{1}{2}$}} \put(3.33,5.5){\color{red}\circle{1.2}} \put(6.67,11){\color{red}\circle{1.2}} \put(1,5.5){\makebox(0,0)[rc]{$0$}} \put(4,11){\makebox(0,0)[rc]{$0$}} \put(13.33,11){\color{green}\circle{1.2}} \put(16.67,5.5){\color{green}\circle{1.2}} \put(18.67,5.5){\makebox(0,0)[lc]{$0$}} \put(15.33,11){\makebox(0,0)[lc]{$0$}} \put(6.67,0){\color{blue}\circle{1.2}} \put(13.33,0){\color{blue}\circle{1.2}} \put(6.67,-5){\makebox(0,0)[cc]{$0$}} \put(13.33,-5){\makebox(0,0)[cc]{$0$}} \end{picture} \\ $\;$\\ (i)&$\quad$&(ii)&$\quad$&(iii)&$\quad$&(iv) \\ \multicolumn{7}{c}{ \unitlength 0.4mm \allinethickness{2.1pt \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(140,125)(0,0) \put(20,20){\color{blue}\line(1,0){110}} \multiput(20,20)(.03372164316,.05518087063){1631}{\color{red}\line(0,1){.05518087063}} \multiput(75,110)(.03372164316,-.05518087063){1631}{\color{green}\line(0,-1){.05518087063}} \put(20,20){\color{red}\circle{5}} \put(20,20){\color{blue}\circle{2}} \put(56.25,20){\color{blue}\circle{2}} \put(56.25,20){\color{blue}\circle{5}} 
\put(92.5,20){\color{blue}\circle{2}} \put(92.5,20){\color{blue}\circle{5}} \put(129.75,20){\color{green}\circle{2}} \put(129.75,20){\color{blue}\circle{5}} \put(56.25,79.75){\color{red}\circle{5}} \put(56.25,79.75){\color{red}\circle{2}} \put(38.75,51.25){\color{red}\circle{5}} \put(38.75,51.25){\color{red}\circle{2}} \put(74.75,109.75){\color{red}\circle{2}} \put(74.75,109.75){\color{green}\circle{5}} \put(93.75,79.75){\color{green}\circle{5}} \put(93.75,79.75){\color{green}\circle{2}} \put(111.25,51.25){\color{green}\circle{5}} \put(111.25,51.25){\color{green}\circle{2}} \put(15,11){\makebox(0,0)[rc] {\footnotesize $p_1=\lambda_1 + \lambda_2$}} \put(100,11){\makebox(0,0)[cc] {\footnotesize $p_3=\lambda_8 + \lambda_9 + $}} \put(100,3){\makebox(0,0)[cc] {\footnotesize $ + \lambda_{10} + \lambda_{11} + \lambda_{12}$}} \put(45,11){\makebox(0,0)[cc] {\footnotesize $p_2=\lambda_3 + \lambda_4 +$}} \put(45,3){\makebox(0,0)[cc] {\footnotesize $ + \lambda_5+ \lambda_6 + \lambda_7$}} \put(138,11){\makebox(0,0)[lc] {\footnotesize $p_4=\lambda_{13} +\lambda_{14}$}} \put(74.75,119){\makebox(0,0)[cc] {\footnotesize $p_7=\lambda_3 + \lambda_8$}} \put(108,80.25){\makebox(0,0)[lc] {\footnotesize $p_6=\lambda_2 + \lambda_6 + \lambda_7 + $}} \put(108,72.25){\makebox(0,0)[lc] {\footnotesize $+ \lambda_{11} + \lambda_{12}$}} \put(125,52.25){\makebox(0,0)[lc] {\footnotesize $p_5=\lambda_1 + \lambda_4 + \lambda_5+$}} \put(125,44.25){\makebox(0,0)[lc] {\footnotesize $+ \lambda_9 + \lambda_{10}$}} \put(42.5,80.25){\makebox(0,0)[rc] {\footnotesize $p_8=\lambda_4 + \lambda_6 + \lambda_9 +$}} \put(42.5,72.25){\makebox(0,0)[rc] {\footnotesize $+ \lambda_{11} + \lambda_{13}$}} \put(25.5,52.25){\makebox(0,0)[rc] {\footnotesize $p_9=\lambda_5 + \lambda_7 + \lambda_{10} +$}} \put(25.5,44.25){\makebox(0,0)[rc] {\footnotesize $+ \lambda_{12} + \lambda_{14}$}} \end{picture} } \\ \multicolumn{7}{c}{(v)} \end{tabular} \end{center} \caption{(Color online) Classical probabilities (i) and (ii) of the 
tight triangular pastings of two- and three-atomic contexts introduced in Figs.~\ref{2015-s-f2}(i),(ii); with $x,y,z\ge 0$, and $x+y+z=1$. The prediction probabilities represented by (iii) as well as (iv) are neither classical nor quantum mechanical. The classical probabilities on the triangle logic with four atoms per context are enumerated in (v); again $\lambda_1 , \ldots , \lambda_{14} \ge 0$ and $\lambda_1 + \cdots + \lambda_{14} = 1$. \label{2015-s-f9}} \end{figure} Nevertheless, in four-dimensional Hilbert space, the propositional structure with a triangular shaped orthogonality diagram allows a geometric representation; a particular one is explicitly enumerated in Fig.~4 of Ref.~\cite{2010-qchocolate}, whose classical probabilities are exhausted by the parameterization in Fig.~\ref{2015-s-f9}(v), read off from the complete set of 14 two-valued measures enumerated in Fig.~5 of Ref.~\cite{2010-qchocolate}. Fig.~\ref{2015-s-f9}(iv) represents a Wright prediction, which can be realized neither classically nor quantum mechanically, for the same reasons as mentioned earlier. In the quantum case, the proof of Theorem 2.2 of Ref.~\cite{wright:pent} can be directly transferred to the four-dimensional configuration. The hull computation~\cite{cdd-pck} reveals the Boole-Bell type conditions of possible experience \begin{equation} \begin{split} p_5+p_6\geq p_1, \ldots \\ p_5+p_6+1\geq 2 p_1+p_2+p_3+p_8, \\ p_1+p_2+p_3\geq p_5+p_6, \\ p_5+p_6+p_7\geq p_1+p_2+p_3, \\ 2p_1+p_2+p_3+p_8+p_9\geq p_5+p_6+1 \end{split} \label{2015-s-e10} \end{equation} as bounds of the polytope spanned by the two-valued measures interpreted as vertices. Some of these classical bounds are enumerated in Eq.~(\ref{2015-s-e10}). Wright's measure, with $p_1=\frac{1}{2}$ and $p_5=p_6=0$, violates the first inequality.
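The fourteen two-valued measures can likewise be enumerated by brute force. The Python sketch below assumes a labeling of the triangle logic with atoms $1,\ldots,9$ and contexts $\{1,2,3,4\}$, $\{4,5,6,7\}$, $\{7,8,9,1\}$ intertwining at the corner atoms $1$, $4$, $7$; this labeling is an illustrative choice and may differ from the figure's $p_i$ numbering in detail:

```python
from itertools import product

# Triangle logic with four atoms per context, intertwined at atoms 1, 4, 7.
contexts = [(1, 2, 3, 4), (4, 5, 6, 7), (7, 8, 9, 1)]

# Two-valued measures obeying subclassicality: exactly one atom of value 1
# in each of the three contexts.
states = [
    v for v in product((0, 1), repeat=9)
    if all(sum(v[a - 1] for a in c) == 1 for c in contexts)
]
print(len(states))  # 14, matching lambda_1, ..., lambda_14

# Every vertex obeys the facet inequality p_5 + p_6 >= p_1, while the
# Wright prediction p_1 = 1/2, p_5 = p_6 = 0 violates it.
assert all(v[4] + v[5] >= v[0] for v in states)
```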
\subsection{Gleason theorem and Kochen-Specker configurations} The strategy to obtain predictions and probabilities by taking the convex sum of (sufficiently many) two-valued measures satisfying subclassicality fails completely for quantum systems with three or more mutually exclusive outcomes -- that is, for quantum Hilbert spaces of dimension greater than two: in this case, two-valued measures do not exist even on certain finite substructures thereof~\cite{kochen1,2015-AnalyticKS}. However, if one still clings to the subclassicality assumption -- essentially requiring that every context of maximally co-measurable observables behaves classically and should thus be endowed with classical probabilities -- then Gleason's theorem~\cite{Gleason,r:dvur-93,pitowsky:218,Peres-expTest-Glea} derives the Born (trace) rule for quantum probabilities from subclassicality. Indeed, as already observed by Gleason, it is easy to see that, in the simplest case, such a subclassical (admissible) probability measure can be obtained in the form of a frame function $f_\rho$ by selecting some unit vector $\vert \rho \rangle$ corresponding to a pure quantum state (preparation) and, for each closed subspace corresponding to a one-dimensional projection observable (i.e. an elementary yes-no proposition) $E=\vert e\rangle \langle e \vert$ along the unit vector $\vert e\rangle$, taking $f_\rho(\vert e\rangle ) = \langle \rho \vert e\rangle \langle e \vert \rho \rangle = \vert \langle e \vert \rho \rangle \vert^2$ as the square of the norm of the projection of $\vert \rho \rangle$ onto the subspace spanned by $\vert e\rangle$.
The reason for this is that, because an arbitrary context can be represented as an orthonormal basis $\{ \vert e_i \rangle \}$, an {\it ad hoc} frame function $f_\rho $ on any such context (and thus basis) can be obtained by taking the length of the orthogonal (with respect to the basis vectors) projections of $\vert \rho \rangle$ onto all the basis vectors $\vert e_i \rangle$, that is, the norm of the resulting vector projections of $\vert \rho \rangle$ onto the basis vectors, respectively. This amounts to computing the absolute value of the Euclidean scalar products $\langle e_i \vert \rho \rangle$ of the state vector with all the basis vectors. In order that all such absolute values of the scalar products (or the associated norms) sum up to one and yield a frame function of weight one, recall that $\vert \rho \rangle$ is a unit vector and note that, by the Pythagorean theorem, these absolute values of the individual scalar products -- or the associated norms of the vector projections of $\vert \rho \rangle$ onto the basis vectors -- must be squared. Thus the value $f_\rho(\vert e_i\rangle )$ of the frame function on the argument $\vert e_i\rangle $ must be the square of the scalar product of $\vert \rho \rangle$ with $\vert e_i \rangle$, corresponding to the square of the length (or norm) of the respective projection vector of $\vert \rho \rangle$ onto $\vert e_i \rangle$. For complex vector spaces one has to take the absolute square of the scalar product; that is, $f_\rho ( \vert e_i \rangle ) = \vert \langle e_i \vert \rho \rangle \vert ^2$. 
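This normalization argument is easy to verify numerically. The following Python sketch (an illustration, not part of the argument) draws a random pure state and a random orthonormal basis of ${\mathbb C}^3$ and confirms that the frame function $f_\rho(\vert e_i\rangle)=\vert\langle e_i\vert\rho\rangle\vert^2$ is non-negative and of weight one on every such context:

```python
import random

random.seed(0)

def dot(u, v):
    # Hermitian inner product <u|v> on C^n
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def normalize(v):
    n = abs(dot(v, v)) ** 0.5
    return [vi / n for vi in v]

def random_vec(n):
    return [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# A random pure state |rho> in C^3.
rho = normalize(random_vec(3))

# A random orthonormal basis {e_1, e_2, e_3} via Gram-Schmidt: one
# "context" of three mutually exclusive elementary propositions.
basis = []
for _ in range(3):
    v = random_vec(3)
    for e in basis:
        c = dot(e, v)
        v = [vi - c * ei for vi, ei in zip(v, e)]
    basis.append(normalize(v))

# Frame function f_rho(e_i) = |<e_i|rho>|^2: non-negative, weight one.
probs = [abs(dot(e, rho)) ** 2 for e in basis]
assert all(p >= 0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-9
```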
Pointedly stated, from this point of view the probabilities $f_\rho ( \vert e_i \rangle )$ are just the (absolute) squares of the coordinates of a unit vector $\vert \rho \rangle$ with respect to some orthonormal basis $\{ \vert e_i \rangle \}$, representable by the square $\vert \langle e_i \vert \rho \rangle \vert ^2$ of the length of the vector projections of $\vert \rho \rangle$ onto the basis vectors $\vert e_i \rangle$. The squares come in because the absolute values of the individual components do not add up to one, but their squares do. These considerations apply to Hilbert spaces of any finite dimension, including dimension two. In this non-general, {\it ad hoc} sense the Born rule for a system in a pure state and an elementary proposition observable (encoded in quantum mechanics by a one-dimensional projection operator) can be motivated by the requirement of subclassicality for arbitrary finite-dimensional Hilbert spaces. Note that it is possible to generate ``Boole-Bell type inequalities (sort of)'' if one is willing to {\em abandon subclassicality}. That is, suppose one is willing to accept that, within any particular context, mutually exclusive observables are no longer mutually exclusive. In particular, one could consider two-valued measures in which all, some or none of the atoms acquire the value zero or one (with subclassicality, the two-valued measure is one at only a single atom; all other atoms have measure zero). With these assumptions one can, for every context, define a ``correlation observable'' as the product of the (non-subclassical) measures of all the atoms in this context.
For instance, consider any particular $i$th context $C_i$ with atoms $a_{i,1}, \ldots , a_{i,n}$; then the ``joint probabilities'' $P_i$ or ``joint expectations'' $E_i$ of a single context $C_i$ take on the values \begin{equation} \begin{split} P_i= \prod_{j=1}^n v(a_{i,j})= v(a_{i,1})\cdots v(a_{i,n}),\\ E_i= \prod_{j=1}^n \left[1-2v(a_{i,j})\right]= \left[1-2v(a_{i,1})\right]\cdots \left[1-2v(a_{i,n})\right]. \end{split} \end{equation} A geometric interpretation in terms of convex correlation polytopes is then straightforward -- the tuples representing the vertices of the polytopes are obtained by the enumeration of the ``joint probabilities'' $P_i$ or the ``joint expectations'' $E_i$ for all the involved contexts $C_i$. For example, solving the hull problem for the ``correlation polytope'' of a system of observables introduced in Ref.~\cite{cabello-96} and depicted in Fig.~\ref{2007-miracles-ksc} yields, among 274 facet inequalities, \begin{equation} \begin{split} 0\leq P_1\leq 1, \\ P_1 + 3\geq P_2 + P_6 + P_7 + P_8, \\ P_1 + P_3 + P_5 + 4\geq P_2 + P_4 + P_6 + P_7 + P_8 + P_9, \\ \ldots \\ -1 \leq E_1 \leq 1, \\ E_1+7\geq E_2+E_3+E_4+E_5+E_6+E_7+E_8+E_9, \\ E_1+E_8+E_9+7\geq E_2+E_3+E_4+E_5+E_6+E_7, \\ E_1+E_6+E_7+E_8+E_9+7\geq E_2+E_3+E_4+E_5, \\ E_1+E_4+E_5+E_6+E_7+E_8+E_9+7\geq E_2+E_3, \\ E_1+E_2+E_3+E_4+E_5+E_6+E_7+E_8+E_9+7\geq 0 . \end{split} \label{2015-s-e11} \end{equation} The last bound has been introduced in Ref.~\cite{cabello:210401}. It is violated both by classical models (satisfying subclassicality) and by quantum mechanics, because both cases obey subclassicality, thereby rendering the value ``$-1$'' for any ``correlation observable'' $E_1, \ldots , E_9$ of all nine tightly intertwined contexts $C_1,\ldots , C_9$: in each context, there is an odd number of ``$-1$''-factors.
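The parity argument in the last step is elementary and can be spelled out in a few lines of Python; the snippet below only restates the arithmetic of the text and presupposes nothing about the particular geometry of the contexts:

```python
# With subclassicality, a context of four mutually exclusive atoms has
# exactly one atom of value 1, so in the +/-1 encoding e_j = 1 - 2 v(a_j)
# its "correlation observable" E = e_1 e_2 e_3 e_4 always equals -1.
for one_at in range(4):
    values = [1 if j == one_at else 0 for j in range(4)]
    E = 1
    for v in values:
        E *= 1 - 2 * v
    assert E == -1  # one factor -1, three factors +1

# Hence every subclassical assignment on the nine contexts C_1, ..., C_9
# yields E_1 + ... + E_9 = -9 < -7, violating the last bound above.
total = sum(-1 for _ in range(9))
assert total == -9
```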
For the sake of demonstration, Fig.~\ref{2007-miracles-ksc} also explicitly enumerates one (of 1152 non-admissible, non-subclassical) value assignments yielding the bound seven. However, note that the associated observables, and also the two-valued measures and frame functions, have been allowed to disrespect subclassicality; because otherwise no two-valued measure exists. Note also that similar calculations~\cite{pitowsky-89a,Pit-91,Pit-94,2000-poly} for two- and three-partite correlations do not suffer from a lack of subclassicality, since in an Einstein-Podolsky-Rosen setup, the observables entering as factors in the product -- coming from different particles -- are independent (therefore justifying multiplication of single-particle probabilities and expectations), and not part of a one and the same single-particle context. \begin{figure} \begin{center} \begin{tabular}{c} \unitlength .4mm \allinethickness{2pt} \ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi \begin{picture}(134.09,130)(0,-2) \multiput(86.39,101.96)(.119617225,-.208133971){209}{{\color{green}\line(0,-1){0.208133971}}} \multiput(86.39,14.96)(.119617225,.208133971){209}{{\color{red}\line(0,1){0.208133971}}} \multiput(36.47,101.96)(-.119617225,-.208133971){209}{{\color{gray}\line(0,-1){0.208133971}}} \multiput(36.47,14.96)(-.119617225,.208133971){209}{{\color{magenta}\line(0,1){0.208133971}}} \color{blue}\put(86.39,15.21){\color{blue}\line(-1,0){50}} \put(86.39,101.71){\color{violet}\line(-1,0){50}} \color{cyan} \qbezier(29.2,27.73)(23.55,-5.86)(52.99,15.24) \qbezier(29.2,27.88)(36.93,75)(69.63,101.91) \qbezier(52.69,15.24)(87.47,40.96)(93.72,89.27) \qbezier(93.72,89.27)(98.4,125.99)(69.49,102.06) \color{orange} \qbezier(93.57,27.73)(99.22,-5.86)(69.78,15.24) \qbezier(93.57,27.88)(85.84,75)(53.13,101.91) \qbezier(70.08,15.24)(35.3,40.96)(29.05,89.27) \qbezier(29.05,89.27)(24.37,125.99)(53.28,102.06) \color{olive} \qbezier(20.15,73.72)(-11.67,58.52)(20.15,43.31) 
\qbezier(20.33,73.72)(61.34,93.16)(102.36,73.72) \qbezier(102.36,73.72)(134.09,58.52)(102.53,43.31) \qbezier(102.53,43.31)(60.99,23.43)(20.15,43.49) \put(36.34,15.16){\color{magenta}\circle{5}} \put(36.34,15.16){\color{blue}\circle{2}} \put(52.99,15.16){\color{blue}\circle{2}} \put(52.99,15.16){\color{cyan}\circle{5}} \put(69.68,15.16){\color{blue}\circle{2}} \put(69.68,15.16){\color{orange}\circle{5}} \put(86.28,15.16){\color{blue}\circle{2}} \put(86.28,15.16){\color{red}\circle{5}} \put(93.53,27.71){\color{red}\circle{2}} \put(93.53,27.71){\color{orange}\circle{5}} \put(102.37,43.44){\color{red}\circle{2}} \put(102.37,43.44){\color{olive}\circle{5}} \put(111.21,58.45){\color{red}\circle{2}} \color{green}\put(111.21,58.45){\circle{5}} \put(102.37,73.47){\color{green}\circle{2}} \put(102.37,73.47){\color{olive}\circle{5}} \put(93.53,89.21){\color{green}\circle{2}} \put(93.53,89.21){\color{cyan}\circle{5}} \put(86.28,101.76){\color{green}\circle{2}} \put(86.28,101.76){\color{violet}\circle{5}} \put(69.68,101.76){\color{violet}\circle{2}} \put(69.68,101.76){\color{cyan}\circle{5}} \put(52.99,101.76){\color{violet}\circle{2}} \put(52.99,101.76){\color{orange}\circle{5}} \put(36.34,101.76){\color{violet}\circle{2}} \put(36.34,101.76){\color{gray}\circle{5}} \put(29.24,89.21){\color{gray}\circle{2}} \put(29.24,89.21){\color{orange}\circle{5}} \put(20.4,73.47){\color{gray}\circle{2}} \put(20.4,73.47){\color{olive}\circle{5}} \put(11.56,58.45){\color{gray}\circle{2}} \put(11.56,58.45){\color{magenta}\circle{5}} \put(20.4,43.44){\color{magenta}\circle{2}} \put(20.4,43.44){\color{olive}\circle{5}} \put(29.24,27.71){\color{magenta}\circle{2}} \put(29.24,27.71){\color{cyan}\circle{5}} {\color{black} \put(30.41,116) {\makebox(0,0)[cc]{$-1$}} \put(30.41,2) {\makebox(0,0)[cc] {$+1$}} \put(52.68,116) {\makebox(0,0)[cc]{$-1$}} \put(52.68,2) {\makebox(0,0)[cc] {$+1$}} \put(91.93,116) {\makebox(0,0)[cc] {$+1$}} \put(91.93,2) {\makebox(0,0)[cc] {$+1$}} \put(69.65,116) 
{\makebox(0,0)[cc]{$-1$}} \put(73.65,2) {\makebox(0,0)[cc] {$+1$}} \put(103.24,94.22){\makebox(0,0)[cc]{$+1$}} \put(17.45,94.22) {\makebox(0,0)[cc] {$+1$}} \put(106.24,22.45){\makebox(0,0)[cc]{$+1$}} \put(17.45,22.45) {\makebox(0,0)[cc] {$+1$}} \put(115.13,77.96){\makebox(0,0)[cc]{$+1$}} \put(8.55,77.96) {\makebox(0,0)[cc] {$+1$}} \put(115.13,38.72){\makebox(0,0)[cc]{$+1$}} \put(10.55,38.72) {\makebox(0,0)[cc] {$-1$}} \put(120.92,57.98){\makebox(0,0)[l] {$-1$}} \put(1.77,57.98) {\makebox(0,0)[rc] {$+1$}} } \put(61.341,9.192){\color{blue}\makebox(0,0)[cc] {$C_1$}} \put(102.53,31.355){\color{red}\makebox(0,0)[cc] {$C_2$}} \put(102.53,84.322){\color{green}\makebox(0,0)[cc] {$C_3$}} \put(60.457,108.01){\color{violet}\makebox(0,0)[cc] {$C_4$}} \put(18.031,84.145){\color{gray}\makebox(0,0)[cc] {$C_5$}} \put(18.561,33.057){\color{magenta}\makebox(0,0)[cc]{$C_6$}} \put(61.341,39.774){\color{olive}\makebox(0,0)[cc] {$C_7$}} \put(72.124,67.882){\color{orange}\makebox(0,0)[cc] {$C_8$}} \put(48.79,67.705){\color{cyan}\makebox(0,0)[cc] {$C_9$}} \end{picture} \end{tabular} \end{center} \caption{(Color online) Orthogonality diagram of a finite subset $C_1,\ldots ,C_9$ of the continuum of blocks or contexts embeddable in four-dimensional real Hilbert space without a two-valued probability measure~\cite{cabello-96}; with one of the 1152 non-admissible value assignments yielding the bound seven, as derived in Ref.~\cite{cabello:210401}. In contrast, subclassicality would require that, within each one of the nine contexts, exactly one observable would have value ``$-1$,'' and the other three observables would have the value ``$+1$.'' \label{2007-miracles-ksc}} \end{figure} \section{Discussion} We have discussed ``bizarre'' structures of observables and have considered classical, quantum and other, more ``bizarre'' probability measures on them.
Thereby we have mostly assumed subclassicality, which stands for additivity within contexts, formalized by frame functions as well as admissibility~\cite{Gleason,r:dvur-93,pitowsky:218,Peres-expTest-Glea}. From all of this one might conclude a simple lesson: in non-Boolean empirical structures which allow both a quantum and a quasi-classical representation (rendering a homeomorphic embedding into some larger Boolean algebra), the predictions from quantum and classical probabilities (rendered by the convex combination of two-valued measures) may be different. Which ones are realized depends on the nature of the system involved (e.g. quasi-classical generalized urn models or finite automata, or quantum states of orthohelium~\cite{kochen1}). Such structures may also allow (non-dispersive) probabilities and predictions which can be realized neither by (quasi-)classical nor by quantized systems. Stated pointedly: even if one assumes subclassicality -- that is, the validity of classical predictions within contexts in the form of maximal subsets of observables which are mutually co-measurable -- in general (i.e. in non-Boolean cases) the structure of observables does not completely determine the probabilities. Finally, let us speculate that if we were living in a computable universe capable of universal computation, then universality would imply that we could see the types of collections of observables sketched in Figs.~\ref{2015-s-f1}--\ref{2015-s-f2}; at least if some (superselection) rule did not prohibit the occurrence of such propositional structures. Why do we not observe them? Maybe we have not looked closely enough, or maybe the Universe is not entirely ``universal'' in terms of fundamental phenomenology.
I personally have a rather simple stance towards these issues, which comes out of my inclinations~\cite{svozil-2013-omelette} towards {\em ``The Church of the larger Hilbert space.''} I believe that Dirac~\cite{dirac} and von Neumann~\cite{v-neumann-49} had it all right -- alas in a surprising, literal way. The quantum universe appears to be the geometry of linear vector space equipped with a scalar product (projections). From this point of view, all those bizarre structures of observables and prediction probabilities do not show up just because, after all, our operationally accessible universe, at least on the most fundamental level, has to be understood in purely geometric terms, thereby disallowing some algebraic possibilities. This may be similar to the non-maximal violation of certain Boole-Bell type conditions of possible experience. \begin{acknowledgments} This research has been partly supported by FP7-PEOPLE-2010-IRSES-269151-RANPHYS. I gratefully acknowledge advice from Komei Fukuda with the {\tt cddlib} package, as well as Alexander Svozil for his help installing it. I am also indebted to Alastair A. Abbot for a critical reading of the manuscript, as well as for suggesting improvements. \end{acknowledgments} \ifws
\section{Preliminaries}\label{appen:pre} We use ${\mathcal{F}}_r$ to denote the filtration generated by \begin{align*} \{\xi_t^i:t\in {\mathcal{I}}_l,\ i=1,\ldots,N\}_{l=1}^{r-1} \cup \{\widetilde{\xi}_l^i:i=1,\ldots,N\}_{l=1}^{r-1}. \end{align*} This means that, given ${\mathcal{F}}_r$, the global solution $\bar{\vx}_r$ is fixed, but the randomness of ${\mathcal{A}}_r$, ${\bm{G}}_r^i$ and ${\bm{G}}_r$ still remains. In addition, for $t\in {\mathcal{I}}_r$, we use ${\mathcal{H}}_t$ to denote the filtration generated by \begin{align*} {\mathcal{F}}_r \cup \{\xi_s^i: t_r\leq s\leq t\}_{i=1}^N\cup \{\widetilde{\xi}_r^i\}_{i=1}^N. \end{align*} Recall the definitions of ${\bm{G}}_r^i$ and ${\bm{G}}_r$, \begin{align*} {\bm{G}}_r^i = \nabla F_i(\bar{\vx}_r;\widetilde{\xi}_r^i)\quad\text{and}\quad {\bm{G}}_r = \frac{1}{N}\sum_{i=1}^N {\bm{G}}_r^i. \end{align*} Hence the bounds \begin{align*} \twonorm{{\bm{G}}_r^i - \nabla f_i(\bar{\vx}_r)} \leq \sigma\quad\text{and}\quad \twonorm{{\bm{G}}_r - \nabla f(\bar{\vx}_r)} \leq \sigma \end{align*} hold almost surely due to Assumption \ref{assume:object}(iii). Also, the local update rule of EPISODE is \begin{align*} {\bm{x}}_{t+1}^i = {\bm{x}}_t^i - \eta {\bm{g}}_t^i \indicator{{\mathcal{A}}_r} - \gamma \frac{{\bm{g}}_t^i}{\twonorm{{\bm{g}}_t^i}}\indicator{\bar{{\mathcal{A}}}_r}\quad \text{for}\quad t \in {\mathcal{I}}_r, \end{align*} where ${\bm{g}}_t^i = \nabla F_i({\bm{x}}_t^i;\xi_t^i) - {\bm{G}}_r^i + {\bm{G}}_r$, ${\mathcal{A}}_r = \{\twonorm{{\bm{G}}_r}\leq \gamma/\eta\}$ and $\bar{{\mathcal{A}}}_r = \{\twonorm{{\bm{G}}_r}> \gamma/\eta\}$. \subsection{Auxiliary Lemmas} \begin{lemma}[Lemma A.2 in \cite{zhang2020improved}]\label{lemma:smooth_obj_descent} Let $f$ be $(L_0,L_1)$-smooth, and $C > 0$ be a constant.
For any ${\bm{x}}, {\bm{x}}^{\prime} \in {\mathbb{R}}^d$ such that $\twonorm{{\bm{x}} - {\bm{x}}^{\prime}} \leq C/L_1$, we have \begin{align*} f({\bm{x}}^{\prime}) - f({\bm{x}}) \leq \inprod{\nabla f({\bm{x}})}{{\bm{x}}^{\prime} - {\bm{x}}} + \frac{AL_0 + BL_1\twonorm{\nabla f({\bm{x}})}}{2}\twonorm{{\bm{x}}^{\prime} - {\bm{x}}}^2, \end{align*} where $A=1+e^{C}-\frac{e^{C}-1}{C}$ and $B = \frac{e^{C}-1}{C}$. \end{lemma} \begin{lemma}[Lemma A.3 in \cite{zhang2020improved}]\label{lemma:smooth_grad_diff} Let $f$ be $(L_0,L_1)$-smooth, and $C > 0$ be a constant. For any ${\bm{x}}, {\bm{x}}^{\prime} \in {\mathbb{R}}^d$ such that $\twonorm{{\bm{x}} - {\bm{x}}^{\prime}} \leq C/L_1$, we have \begin{align*} \twonorm{\nabla f({\bm{x}}^{\prime}) - \nabla f({\bm{x}})} \leq (AL_0 + BL_1\twonorm{\nabla f({\bm{x}})})\twonorm{{\bm{x}}^{\prime} - {\bm{x}}}, \end{align*} where $A=1+e^{C}-\frac{e^{C}-1}{C}$ and $B = \frac{e^{C}-1}{C}$. \end{lemma} Here we choose $C \geq 1$ such that $A\geq 1$ and $B \geq 1$. \begin{lemma}[Lemma B.1 in \cite{zhang2020improved}] \label{lemma:clip_inprod} Let $\mu > 0$ and ${\bm{u}}, {\bm{v}} \in \mathbb{R}^d$. Then \begin{equation*} -\frac{\langle {\bm{u}}, {\bm{v}} \rangle}{\|{\bm{v}}\|} \leq -\mu \|{\bm{u}}\| - (1-\mu) \|{\bm{v}}\| + (1+\mu) \|{\bm{v}}-{\bm{u}}\|. \end{equation*} \end{lemma} \section{Proof of Lemmas in Section \ref{sec:proof_sketch:thm:main}} \subsection{Proof of Lemma~\ref{lemma:individual_dis}}\label{proof:lemma:individual_dis} \textbf{Lemma \ref{lemma:individual_dis} restated.} Suppose $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$ and $\max\LRl{2\eta I (2\sigma + \frac{\gamma}{\eta}),\ \gamma I}\leq \frac{C}{L_1}$, where the relation between $A$, $B$ and $C$ is stated in Lemma \ref{lemma:smooth_obj_descent} and \ref{lemma:smooth_grad_diff}. 
Then for any $i \in [N]$ and $t-1 \in {\mathcal{I}}_r$, it almost surely holds that \begin{equation} \label{eq:lem_disc_1} \indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}\leq 2\eta I\LRs{2\sigma + \frac{\gamma}{\eta}}, \end{equation} and \begin{equation} \label{eq:lem_disc_2} \indicator{\bar{{\mathcal{A}}}_r} \twonorm{{\bm{x}}_t^i - \bar{\vx}_r} \leq \gamma I. \end{equation} \begin{proof}[Proof of Lemma \ref{lemma:individual_dis}] To show that \eqref{eq:lem_disc_1} holds, it suffices to show that under the event ${\mathcal{A}}_r$, \begin{equation*} \|{\bm{x}}_t^i - \bar{\vx}_r\| \leq 2 \eta (t-t_r) \left( 2\sigma + \frac{\gamma}{\eta} \right) \end{equation*} holds for any $t_r+1 \leq t \leq t_{r+1}$ and $i \in [N]$. We show this by induction on $t$. For the base case $t = t_r+1$, notice that \begin{equation*} \|{\bm{x}}_{t_r+1}^i - \bar{\vx}_r\| = \eta \|{\bm{g}}_{t_r}^i\| \leq \eta \|\nabla F_i(\bar{\vx}_r; \xi_{t_r}^i) - {\bm{G}}_r^i\| + \eta \|{\bm{G}}_r\| \leq 2 \eta \sigma + \gamma \leq 2 \eta \left(\sigma + \frac{\gamma}{\eta} \right), \end{equation*} where we used the fact that $\|{\bm{G}}_r\| \leq \frac{\gamma}{\eta}$ under ${\mathcal{A}}_r$, and that $\|\nabla F_i(\bar{\vx}_r;\xi_{t_r}^i)-\nabla f_i(\bar{\vx}_r)\|\leq \sigma$ and $\|{\bm{G}}_r^i-\nabla f_i(\bar{\vx}_r)\|\leq\sigma$ hold almost surely. Now, denote $\Lambda = 2\left(2\sigma + \frac{\gamma}{\eta} \right)$ and suppose that \begin{equation} \label{eq:disc_induct_hyp} \|{\bm{x}}_t^i - \bar{\vx}_r\| \leq \Lambda \eta (t-t_r). \end{equation} Then we have \begin{align} \|{\bm{x}}_{t+1}^i - \bar{\vx}_r\| &= \|{\bm{x}}_t^i - \bar{\vx}_r - \eta {\bm{g}}_t^i\| \nonumber \\ &\leq \Lambda \eta (t-t_r) + \eta \|\nabla F_i({\bm{x}}_t^i; \xi_t^i) - {\bm{G}}_r^i\| + \eta \|{\bm{G}}_r\| \nonumber \\ &\leq \Lambda \eta (t-t_r) + \eta \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\| + 2 \eta \sigma + \gamma.
\label{eq:disc_inter_1} \end{align} Using our assumption $\eta \Lambda I \leq C/L_1$ together with the inductive assumption \eqref{eq:disc_induct_hyp}, we can apply Lemma \ref{lemma:smooth_grad_diff} to obtain \begin{align} \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\| &\leq (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|) \|{\bm{x}}_t^i - \bar{\vx}_r\| \nonumber\\ &\leq \Lambda \eta (t-t_r) (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|) \nonumber\\ &\Eqmark{i}{\leq} \Lambda \eta (t-t_r) (AL_0 + BL_1 (\kappa + \rho \|\nabla f(\bar{\vx}_r)\|)) \nonumber\\ &\leq \Lambda \eta (t-t_r) (AL_0 + BL_1 \kappa) + \eta \Lambda BL_1 \rho (t-t_r) (\|\nabla f(\bar{\vx}_r) - {\bm{G}}_r\| + \|{\bm{G}}_r\|) \nonumber\\ &\leq \Lambda \eta (t-t_r) \left( AL_0 + BL_1 \kappa + BL_1 \rho \left(\sigma + \frac{\gamma}{\eta} \right) \right) \nonumber\\ &\Eqmark{ii}{\leq} \frac{\Lambda (t-t_r)}{2I} \leq \frac{\Lambda}{2}, \label{eq:grad_diff_xti} \end{align} where $(i)$ comes from the heterogeneity assumption $\|\nabla f_i({\bm{x}})\| \leq \kappa + \rho \|\nabla f({\bm{x}})\|$ for all ${\bm{x}}$ and $(ii)$ from the assumption $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$. Substituting this into Equation \eqref{eq:disc_inter_1} yields \begin{align*} \|{\bm{x}}_{t+1}^i - \bar{\vx}_r\| &\leq \Lambda \eta (t-t_r) + \eta \frac{\Lambda}{2} + 2 \eta \sigma + \gamma \\ &\leq \eta \left( \Lambda (t-t_r) + \frac{\Lambda}{2} + 2 \sigma + \frac{\gamma}{\eta} \right) \\ &\leq \Lambda \eta (t-t_r+1). \end{align*} This completes the induction and the proof of Equation \eqref{eq:lem_disc_1}. Next, to show Equation \eqref{eq:lem_disc_2}, notice that under the event $\bar{{\mathcal{A}}}_r$ we have \begin{equation*} \|\bar{\vx}_r - {\bm{x}}_t^i\| = \left\| \gamma \sum_{s = t_r}^{t-1} \frac{{\bm{g}}_s^i}{\twonorm{{\bm{g}}_s^i}} \right\| \leq \gamma \sum_{s = t_r}^{t-1} \left\| \frac{{\bm{g}}_s^i}{\twonorm{{\bm{g}}_s^i}} \right\| = \gamma (t - t_r) \leq \gamma I.
\end{equation*} \end{proof} \subsection{Proof of Lemma \ref{lemma:non_clipping_hessian}}\label{proof:lemma:non_clipping_hessian} \textbf{Lemma \ref{lemma:non_clipping_hessian} restated.} Suppose $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$ and $\max\LRl{2\eta I \left(2\sigma + \frac{\gamma}{\eta}\right),\ \gamma I}\leq \frac{C}{L_1}$. Then for all ${\bm{x}}\in {\mathbb{R}}^d$ such that $\twonorm{{\bm{x}} - \bar{\vx}_r}\leq 2 \eta I \left(2\sigma + \frac{\gamma}{\eta}\right)$, the following inequality holds almost surely: \begin{align*} \indicator{{\mathcal{A}}_r} \twonorm{\nabla^2 f_i({\bm{x}})}\leq L_0 + L_1\LRs{\kappa + (\rho+1)\left(\frac{\gamma}{\eta} + 2\sigma\right)}. \end{align*} \begin{proof}[Proof of Lemma \ref{lemma:non_clipping_hessian}] We work under the event ${\mathcal{A}}_r = \{\|{\bm{G}}_r\| \leq \gamma/\eta\}$. From the definition of $(L_0, L_1)$-smoothness, we have \begin{align} \|\nabla^2 f_i({\bm{x}})\| &\leq L_0 + L_1 \|\nabla f_i({\bm{x}})\| \nonumber \\ &\leq L_0 + L_1 \left( \|\nabla f_i({\bm{x}}) - \nabla f_i(\bar{\vx}_r)\| + \|\nabla f_i(\bar{\vx}_r)\| \right) \nonumber \\ &\Eqmark{i}{\leq} L_0 + L_1 \left( \|\nabla f_i({\bm{x}}) - \nabla f_i(\bar{\vx}_r)\| + \kappa + \rho \|\nabla f(\bar{\vx}_r)\| \right) \nonumber \\ &\Eqmark{ii}{\leq} L_0 + L_1 \left( \|\nabla f_i({\bm{x}}) - \nabla f_i(\bar{\vx}_r)\| + \kappa + \rho \left( \sigma + \frac{\gamma}{\eta} \right) \right), \label{eq:hess_inter_1} \end{align} where we used the heterogeneity assumption $\|\nabla f_i({\bm{x}})\| \leq \kappa + \rho \|\nabla f({\bm{x}})\|$ (applied at ${\bm{x}} = \bar{\vx}_r$) to obtain $(i)$, and the fact $\|\nabla f(\bar{\vx}_r)\| \leq \|\nabla f(\bar{\vx}_r) - {\bm{G}}_r\| + \|{\bm{G}}_r\| \leq \sigma + \frac{\gamma}{\eta}$ under ${\mathcal{A}}_r$ to obtain $(ii)$. Now, for all ${\bm{x}}$ such that $\|{\bm{x}}-\bar{\vx}_r\|\leq 2\eta I(2\sigma+\frac{\gamma}{\eta})$, according to our assumptions, we have $\twonorm{{\bm{x}} - \bar{\vx}_r}\leq 2 \eta I(2\sigma + \frac{\gamma}{\eta}) \leq \frac{C}{L_1}$.
Hence we can apply Lemma \ref{lemma:smooth_grad_diff} to ${\bm{x}}$ and $\bar{\vx}_r$, which yields \begin{align*} \|\nabla f_i({\bm{x}}) - \nabla f_i(\bar{\vx}_r)\| &\leq (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|) \|{\bm{x}}-\bar{\vx}_r\| \\ &\leq 2 \eta I \left(2\sigma + \frac{\gamma}{\eta}\right) (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|) \\ &\leq 2 \eta I \left(2\sigma + \frac{\gamma}{\eta}\right) (AL_0 + BL_1 (\kappa + \rho \|\nabla f(\bar{\vx}_r)\|)) \\ &\leq 2 \eta I \left(2\sigma + \frac{\gamma}{\eta}\right) \left(AL_0 + BL_1 \kappa + BL_1 \rho \left( \frac{\gamma}{\eta} + \sigma \right) \right) \\ &\Eqmark{i}{\leq} 2\sigma + \frac{\gamma}{\eta}, \end{align*} where $(i)$ comes from the assumption $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$. Substituting this result into Equation \eqref{eq:hess_inter_1} yields \begin{align*} \|\nabla^2 f_i({\bm{x}})\| &\leq L_0 + L_1 \left( 2\sigma + \frac{\gamma}{\eta} + \kappa + \rho \left( \sigma + \frac{\gamma}{\eta} \right) \right) \\ &\leq L_0 + L_1 \left( \kappa + (\rho + 1) \left( 2\sigma + \frac{\gamma}{\eta} \right) \right). \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lemma:non_clipping_discre_expec}} \textbf{Lemma \ref{lemma:non_clipping_discre_expec} restated.} Suppose $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$ and $\max\LRl{2\eta I (2\sigma + \frac{\gamma}{\eta}),\ \gamma I}\leq \frac{C}{L_1}$. Then both \begin{align} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2}&\leq 36p_r I^2\eta^2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 126p_r I^2 \eta^2 \sigma^2,\label{eq:drift_expectation_bound_quadratic}\\ \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2}&\leq 18p_rI^2\eta \gamma \twonorm{\nabla f(\bar{\vx}_r)} + 18p_r I^2\eta^2 \LRs{\frac{\gamma}{\eta}\sigma + 5\sigma^2}, \label{eq:drift_expectation_bound_linear} \end{align} hold for any $t-1 \in {\mathcal{I}}_r$.
\begin{proof}[Proof of Lemma \ref{lemma:non_clipping_discre_expec}] Under the event ${\mathcal{A}}_r$, the local update rule is given by \begin{align*} {\bm{x}}_{t+1}^i = {\bm{x}}_t^i - \eta {\bm{g}}_t^i,\quad \text{where}\quad {\bm{g}}_t^i = \nabla F_i({\bm{x}}_t^i;\xi_t^i) - {\bm{G}}_r^i + {\bm{G}}_r. \end{align*} Using the basic inequality $(a + b)^2 \leq (1+ 1/\lambda)a^2 + (\lambda+1) b^2$ for any $\lambda >0$, we have \begin{align} &\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_{t+1}^i - \bar{\vx}_r}^2}\nonumber\\ & = \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r - \eta {\bm{g}}_t^i}^2}\nonumber\\ & \Eqmark{i}{\leq} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r - \eta (\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r)}^2} + \eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)}^2}\nonumber\\ & \Eqmark{ii}{\leq} \LRs{\frac{1}{I} + 1}\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2} + (I+1)\eta^2 \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r}^2} + p_r\eta^2 \sigma^2. 
\label{eq:dis_recursion} \end{align} Inequalities $(i)$ and $(ii)$ hold since ${\mathcal{F}}_r \subseteq {\mathcal{H}}_t$ for $t\geq t_r$, so that \begin{align*} &\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\LRinprod{{\bm{x}}_t^i - \bar{\vx}_r - \eta (\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r)}{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)}}\\ =& \mathbb{E}_r\LRm{\mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\LRinprod{{\bm{x}}_t^i - \bar{\vx}_r - \eta (\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r)}{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)}\big| {\mathcal{H}}_t}}\\ =& \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\LRinprod{{\bm{x}}_t^i - \bar{\vx}_r - \eta (\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r)}{\mathbb{E}\LRm{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)\big| {\mathcal{H}}_t}}}= 0, \end{align*} and \begin{align*} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)}^2} &= \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\mathbb{E}\LRm{\twonorm{\nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i)}^2\big| {\mathcal{H}}_t}}\\ &\leq \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\sigma^2} = p_r\sigma^2. \end{align*} Let $L = L_0 + L_1(\kappa + (\rho+1)(\frac{\gamma}{\eta} + 2\sigma))$.
Applying the Hessian upper bound from Lemma \ref{lemma:non_clipping_hessian} together with the premises of Lemma~\ref{lemma:individual_dis}, we have \begin{align} &\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{\nabla f_i({\bm{x}}_t^i) - {\bm{G}}_r^i + {\bm{G}}_r}^2}\nonumber\\ &= \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\LRtwonorm{(\nabla f_i({\bm{x}}_t^i)-\nabla f_i(\bar{\vx}_r))+ (\nabla f_i(\bar{\vx}_r) - {\bm{G}}_r^i) + {\bm{G}}_r}^2}\nonumber\\ &\leq 2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\LRtwonorm{(\nabla f_i({\bm{x}}_t^i)-\nabla f_i(\bar{\vx}_r))+ (\nabla f_i(\bar{\vx}_r) - {\bm{G}}_r^i)}^2} + 2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2}\nonumber\\ &\leq 4\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{\nabla f_i({\bm{x}}_t^i)-\nabla f_i(\bar{\vx}_r)}^2} + 4p_r\sigma^2 + 2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2}\nonumber\\ &\leq 4\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\LRtwonorm{\int_{0}^1\nabla^2 f_i(\alpha {\bm{x}}_t^i + (1-\alpha)\bar{\vx}_r) ({\bm{x}}_t^i - \bar{\vx}_r)d\alpha}^2} + 4p_r\sigma^2 +2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2}\nonumber\\ &\leq 4L^2 \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2} + 4p_r\sigma^2 +2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2}, \label{eq:gti_bound_expec} \end{align} where the second inequality follows from $\twonorm{{\bm{G}}_r^i - \nabla f_i(\bar{\vx}_r)} \leq \sigma$ almost surely. Plugging the final bound of \eqref{eq:gti_bound_expec} into \eqref{eq:dis_recursion} yields \begin{align} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_{t+1}^i - \bar{\vx}_r}^2}&\leq \LRs{\frac{1}{I} + 1 + 4L^2I\eta^2 }\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2}\nonumber\\ &\qquad + 2(I+1)\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 10p_r(I+1) \eta^2 \sigma^2.
\label{eq:dis_recursion_1} \end{align} By recursively invoking \eqref{eq:dis_recursion_1}, we are guaranteed that \begin{align*} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_{t+1}^i - \bar{\vx}_r}^2}&\leq \sum_{s = 0}^{I-1}\LRs{\frac{1}{I} + 1 + 4L^2I\eta^2 }^s(I+1)\LRs{2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 10p_r\eta^2 \sigma^2}\\ & = \frac{\LRs{\frac{1}{I} + 1 + 4L^2I\eta^2 }^I - 1}{\frac{1}{I} + 4L^2I\eta^2}(I+1)\LRs{2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 10p_r \eta^2 \sigma^2}\\ &\Eqmark{i}{\leq} \frac{\LRs{\frac{2}{I} + 1}^I}{\frac{1}{I}}(I+1)\LRs{2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 10p_r \eta^2 \sigma^2}\\ &\Eqmark{ii}{\leq} 9\LRs{2I^2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 10p_r I^2 \eta^2 \sigma^2}\\ &\leq 36 I^2\eta^2 \LRs{\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r - \nabla f(\bar{\vx}_r)}^2} + p_r\twonorm{\nabla f(\bar{\vx}_r)}^2} + 90p_r I^2 \eta^2 \sigma^2\\ &\Eqmark{iii}{\leq} 36p_r I^2\eta^2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 126p_r I^2 \eta^2 \sigma^2. \end{align*} The inequality $(i)$ comes from \begin{equation*} 4L^2I \eta^2 = \frac{1}{I} (2I \eta L)^2 \leq \frac{1}{I} \left(2I \eta \left(L_0 + L_1\kappa + L_1 (\rho+1) (2\sigma + \frac{\gamma}{\eta}) \right) \right)^2 \leq \frac{1}{I}, \end{equation*} which is true because $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \frac{\gamma}{\eta})) \leq 1$ and $A, B \geq 1$. The inequality $(ii)$ comes from $(\frac{2}{I} + 1)^I(I+1)\leq e^2 I$ for any $I\geq 1$. The inequality $(iii)$ holds since $\twonorm{{\bm{G}}_r - \nabla f(\bar{\vx}_r)}\leq \sigma$ almost surely. Therefore, we have proved \eqref{eq:drift_expectation_bound_quadratic}.
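As a quick numerical sanity check (an illustration, not part of the proof), the elementary bound $(\frac{2}{I}+1)^I(I+1)\leq e^2 I$ invoked in inequality $(ii)$ can be verified over a wide range of round lengths $I$:

```python
import math

# Sanity check of the elementary inequality (2/I + 1)^I * (I + 1) <= e^2 * I,
# used in step (ii) above, for I = 1, ..., 10000. Since (1 + 2/I)^I increases
# to e^2, the extra factor (I + 1)/I is absorbed by the gap e^{-2/I}.
for I in range(1, 10001):
    assert (2.0 / I + 1.0) ** I * (I + 1) <= math.e ** 2 * I, I
print("verified for I = 1, ..., 10000")
```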
In addition, for \eqref{eq:drift_expectation_bound_linear}, we notice that \begin{align*} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_{t+1}^i - \bar{\vx}_r}^2}&\leq 18I^2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}^2} + 90p_rI^2 \eta^2 \sigma^2\\ &\leq 18I^2\eta^2\mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{G}}_r}\LRs{\twonorm{{\bm{G}}_r - \nabla f(\bar{\vx}_r)} + \twonorm{\nabla f(\bar{\vx}_r)}}} + 90p_r I^2 \eta^2\sigma^2\\ &\Eqmark{iv}{\leq} 18p_rI^2\eta^2\frac{\gamma}{\eta}\LRs{\sigma + \twonorm{\nabla f(\bar{\vx}_r)}} + 90p_r I^2 \eta^2\sigma^2\\ &= 18p_r I^2\eta \gamma \twonorm{\nabla f(\bar{\vx}_r)} + 18p_rI^2\eta^2 \LRs{\frac{\gamma}{\eta}\sigma + 5\sigma^2}. \end{align*} The inequality $(iv)$ holds since $\twonorm{{\bm{G}}_r} \leq \gamma/\eta$ holds under the event ${\mathcal{A}}_r$ and $\twonorm{{\bm{G}}_r - \nabla f(\bar{\vx}_r)}\leq \sigma$ almost surely. \end{proof} \section{Proof of Main Results}\label{sec:main_proof} \subsection{Proof of Lemma \ref{lemma:descent}}\label{proof:lemma:descent} \paragraph{Lemma \ref{lemma:descent} restated.} Under the conditions of Lemma \ref{lemma:individual_dis}, let $p_r = {\mathbb{P}}({\mathcal{A}}_r|{\mathcal{F}}_r)$ and $\Gamma = AL_0 + BL_1(\kappa+ \rho (\frac{\gamma}{\eta}+\sigma))$.
Then for each $0\leq r\leq R$ it holds that \begin{align*} &\mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] \leq \mathbb{E}_{r}\LRm{\indicator{{\mathcal{A}}_r} V(\bar{\vx}_r)} + \mathbb{E}_{r}\LRm{\indicator{\bar{{\mathcal{A}}}_r} U(\bar{\vx}_r)}, \end{align*} where \begin{align*} V(\bar{\vx}_r) &= \left( -\frac{\eta I}{2} + 36 \Gamma^2 I^3 \eta^3 + 9\frac{\gamma}{\eta}BL_1 I^2 \eta^2 \right) \|\nabla f(\bar{\vx}_r)\|^2 + 9 BL_1 I^2 \eta^2 \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f(\bar{\vx}_r)\| \\ &\quad\quad\quad\quad +126 \Gamma^2 I^3 \eta^3 \sigma^2 + \frac{2AL_0 I \eta^2 \sigma^2}{N}, \end{align*} and \begin{align*} U(\bar{\vx}_r) = \left(-\frac{2}{5} \gamma I + \frac{BL_1 (4\rho + 1) \gamma^2 I^2}{2} \right) \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + \gamma^2 I^2 (3AL_0 + 2BL_1 \kappa) + 6 \gamma I \sigma. \end{align*} \begin{proof} We begin by applying Lemma \ref{lemma:smooth_obj_descent} to obtain a bound on $f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r)$, but first we must show that the conditions of Lemma \ref{lemma:smooth_obj_descent} hold here. Note that \begin{align*} \|\bar{\vx}_{r+1} - \bar{\vx}_r\| &= \left\| \frac{1}{N} \sum_{i=1}^N {\bm{x}}_{t_{r+1}}^i - \bar{\vx}_r \right\| \\ &\leq \frac{1}{N} \sum_{i=1}^N \indicator{{\mathcal{A}}_r} \|{\bm{x}}_{t_{r+1}}^i - \bar{\vx}_r\| + \frac{1}{N} \sum_{i=1}^N \indicator{\bar{{\mathcal{A}}}_r} \|{\bm{x}}_{t_{r+1}}^i - \bar{\vx}_r\| \\ &\leq \max\left\{ 2 \eta I \left( 2\sigma + \frac{\gamma}{\eta} \right), \gamma I\right\} \leq \frac{C}{L_1}, \end{align*} where the second inequality uses Lemma \ref{lemma:individual_dis} and the last step is the assumed bound $\max\LRl{2\eta I (2\sigma + \frac{\gamma}{\eta}),\ \gamma I}\leq \frac{C}{L_1}$.
This shows that we can apply Lemma \ref{lemma:smooth_obj_descent} to obtain \begin{align} \mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] &\leq \mathbb{E}_r \left[ \langle \nabla f(\bar{\vx}_r), \bar{\vx}_{r+1} - \bar{\vx}_r \rangle \right] + \mathbb{E}_r \left[ \frac{AL_0 + BL_1 \|\nabla f(\bar{\vx}_r)\|}{2} \|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \right] \nonumber \\ &\leq - \eta \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{{\mathcal{A}}_r} \langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle \right] \nonumber \\ &\quad -\gamma \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{\bar{{\mathcal{A}}}_r} \langle \nabla f(\bar{\vx}_r), \frac{{\bm{g}}_t^i}{\|{\bm{g}}_t^i\|} \rangle \right] \nonumber \\ &\quad + \frac{AL_0}{2} \mathbb{E}_r \left[ \|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \right] + \frac{BL_1}{2} \|\nabla f(\bar{\vx}_r)\|\mathbb{E}_r \left[\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \right]. \label{eq:descent_orig_bound} \end{align} Let $p_r = {\mathbb{P}}({\mathcal{A}}_r|{\mathcal{F}}_r)$, then $1 - p_r = {\mathbb{P}}(\bar{{\mathcal{A}}}_r|{\mathcal{F}}_r)$. Notice that $p_r$ is a function of $\bar{\vx}_r$. 
The last term in Equation \eqref{eq:descent_orig_bound} can be bounded as follows: \begin{align} &\|\nabla f(\bar{\vx}_r)\|\mathbb{E}_r \left[\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \right]\nonumber \\ &= \|\nabla f(\bar{\vx}_r)\|\mathbb{E} \left[\indicator{{\mathcal{A}}_r}\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \big\vert {\mathcal{F}}_r\right] + \|\nabla f(\bar{\vx}_r)\| \mathbb{E} \left[ \indicator{\bar{{\mathcal{A}}}_r}\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \big\vert {\mathcal{F}}_r\right]\nonumber \\ &\Eqmark{i}{\leq} \|\nabla f(\bar{\vx}_r)\| \mathbb{E} \left[\indicator{{\mathcal{A}}_r} \|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \big\vert {\mathcal{F}}_r\right] + (1-p_r) \gamma^2 I^2 \|\nabla f(\bar{\vx}_r)\| \nonumber\\ &\Eqmark{ii}{\leq} 18 p_r I^2 \eta^2 \|\nabla f(\bar{\vx}_r)\| \LRs{\frac{\gamma}{\eta}\twonorm{\nabla f(\bar{\vx}_r)} + 5 \sigma^2 + \frac{\gamma}{\eta}\sigma} + (1-p_r) \gamma^2 I^2 \|\nabla f(\bar{\vx}_r)\| \nonumber\\ &\leq 18 p_r I^2 \eta \gamma \|\nabla f(\bar{\vx}_r)\|^2 + 18 p_r I^2 \eta^2 \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f(\bar{\vx}_r)\| + (1-p_r) \gamma^2 I^2 \|\nabla f(\bar{\vx}_r)\|, \label{eq:expansion_L1} \end{align} where $(i)$ comes from an application of Lemma \ref{lemma:individual_dis} with $t = t_{r+1}$, and $(ii)$ comes from an application of \eqref{eq:drift_expectation_bound_linear} in Lemma \ref{lemma:non_clipping_discre_expec}. 
Substituting \eqref{eq:expansion_L1} into \eqref{eq:descent_orig_bound} gives \begin{align} &\mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right]\nonumber\\ &\quad \leq - \eta \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{{\mathcal{A}}_r} \langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle \right] -\gamma \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{\bar{{\mathcal{A}}}_r} \langle \nabla f(\bar{\vx}_r), \frac{{\bm{g}}_t^i}{\|{\bm{g}}_t^i\|} \rangle \right] \nonumber \\ &\qquad + \frac{AL_0}{2} \mathbb{E}_r \left[ \|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2 \right] + 9 p_r BL_1 I^2 \eta^2 \LRs{\frac{\gamma}{\eta} \|\nabla f(\bar{\vx}_r)\|^2 + \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f(\bar{\vx}_r)\|} \nonumber\\ &\qquad + (1-p_r) \frac{BL_1\gamma^2 I^2}{2} \|\nabla f(\bar{\vx}_r)\| \label{eq:descent_middle_bound} \end{align} We introduce three claims to bound the first three terms in \eqref{eq:descent_middle_bound}, whose proofs are deferred to Section \ref{sec:claim_proofs}. \begin{claim} \label{claim:inner_prod_clip} Under the conditions of Lemma \ref{lemma:descent}, we have \begin{align*} &-\gamma \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{\bar{{\mathcal{A}}}_r} \langle \nabla f(\bar{\vx}_r), \frac{{\bm{g}}_t^i}{\|{\bm{g}}_t^i\|} \rangle \right] \\ &\quad\quad \leq (1-p_r) \LRm{\left(-\frac{2}{5} \gamma I + 2 BL_1 \rho \gamma^2 I^2 \right) \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + 2 \gamma^2 I^2 (AL_0 + BL_1 \kappa) + 6 \gamma I \sigma}. 
\end{align*} \end{claim} \begin{claim} \label{claim:inner_prod_no_clip} Under the conditions of Lemma \ref{lemma:descent}, we have \begin{align*} &-\eta \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{{\mathcal{A}}_r} \langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle \right] \\ & \leq p_r \LRm{\left( -\frac{\eta I}{2} + 36 I^3 \eta^3 \Gamma^2 \right) \|\nabla f(\bar{\vx}_r)\|^2+ 126 I^3 \eta^3 \sigma^2 \Gamma^2} - \frac{\eta}{2I} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right], \end{align*} where $\Gamma = AL_0 + BL_1 \left( \kappa + \rho \left(\sigma + \frac{\gamma}{\eta} \right) \right)$. \end{claim} \begin{claim} \label{claim:quadratic} Under the conditions of Lemma \ref{lemma:descent}, we have \begin{equation*} \mathbb{E}_r\left[\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2\right] \leq 2 (1-p_r) \gamma^2 I^2 + \frac{4p_r I\sigma^2 \eta^2}{N} + 4\eta^2 \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2\right].
\end{equation*} \end{claim} Combining Claims \ref{claim:inner_prod_clip}, \ref{claim:inner_prod_no_clip}, and \ref{claim:quadratic} with \eqref{eq:descent_middle_bound} yields \begin{align*} &\mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] \\ &\quad \leq p_r \bigg[ \left( -\frac{\eta I}{2} + 36 \Gamma^2 I^3 \eta^3 + 9\frac{\gamma}{\eta}BL_1 I^2 \eta^2 \right) \|\nabla f(\bar{\vx}_r)\|^2 + 9 BL_1 I^2 \eta^2 \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f(\bar{\vx}_r)\| + \\ &\quad\quad\quad\quad 126 \Gamma^2 I^3 \eta^3 \sigma^2 + \frac{2AL_0 I \eta^2 \sigma^2}{N} \bigg] \\ &\quad\quad + (1-p_r) \left[ \left(-\frac{2}{5} \gamma I + \frac{BL_1 (4\rho + 1) \gamma^2 I^2}{2} \right) \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + \gamma^2 I^2 (3AL_0 + 2BL_1 \kappa) + 6 \gamma I \sigma \right] \\ &\quad\quad + \left( 2AL_0 \eta^2 - \frac{\eta}{2I} \right) \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}\left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] \\ &\quad \leq p_r \bigg[ \left( -\frac{\eta I}{2} + 36 \Gamma^2 I^3 \eta^3 + 9\frac{\gamma}{\eta}BL_1 I^2 \eta^2 \right) \|\nabla f(\bar{\vx}_r)\|^2 + 9 BL_1 I^2 \eta^2 \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f(\bar{\vx}_r)\| + \\ &\quad\quad\quad\quad 126 \Gamma^2 I^3 \eta^3 \sigma^2 + \frac{2AL_0 I \eta^2 \sigma^2}{N} \bigg] \\ &\quad\quad + (1-p_r) \left[ \left(-\frac{2}{5} \gamma I + \frac{BL_1 (4\rho + 1) \gamma^2 I^2}{2} \right) \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + \gamma^2 I^2 (3AL_0 + 2BL_1 \kappa) + 6 \gamma I \sigma \right], \end{align*} where the last inequality holds since $2AL_0 \eta^2 \leq \frac{\eta}{2I}$ due to the assumption $4AL_0 \eta I \leq 1$. Then we can finish the proof of Lemma \ref{lemma:descent} by noticing that $p_r = \mathbb{E}_r[\indicator{{\mathcal{A}}_r}]$ and $1 - p_r = \mathbb{E}_r[\indicator{\bar{{\mathcal{A}}}_r}]$.
\end{proof} \subsection{Proof of Theorem \ref{thm:main}}\label{proof:thm:main} \paragraph{Theorem \ref{thm:main} restated.} Suppose Assumption \ref{assume:object} holds. For any $\epsilon \leq \frac{3AL_0}{5 BL_1\rho}$, we choose \begin{align}\label{eq:eta_gamma_choices} \eta \leq \min\LRl{\frac{1}{856 \Gamma I},\frac{\epsilon}{180 \Gamma I \sigma},\frac{N \epsilon^2}{8 AL_0 \sigma^2}} \quad\text{and}\quad \gamma = \LRs{11\sigma + \frac{AL_0}{BL_1 \rho}}\eta, \end{align} where $\Gamma = AL_0 + BL_1 \kappa + BL_1 \rho \left(\sigma + \frac{\gamma}{\eta} \right)$. The output of EPISODE satisfies \begin{equation*} \frac{1}{R+1} \sum_{r=0}^{R} \mathbb{E}\left[\|\nabla f(\bar{\vx}_r)\|\right] \leq 3\epsilon \end{equation*} as long as $R \geq \frac{4 \Delta}{\epsilon^2 \eta I}$. \begin{proof} In order to apply Lemma \ref{lemma:descent}, we must verify the conditions of Lemma \ref{lemma:individual_dis} under our choice of hyperparameters. From our choices of $\eta$ and $\gamma$, we have \begin{equation*} 2 \Gamma \eta I \leq \frac{1}{428} < 1. \end{equation*} Also \begin{align*} 2 \eta I \left(2\sigma + \frac{\gamma}{\eta} \right) \Eqmark{i}{\leq} \frac{2\sigma + \frac{\gamma}{\eta}}{428 \LRs{AL_0 + BL_1 \kappa + BL_1 \rho \left(\sigma + \frac{\gamma}{\eta} \right)}}\Eqmark{ii}{\leq} \frac{C}{L_1}, \end{align*} where $(i)$ comes from the condition $\eta \leq 1/(856 \Gamma I)$ in \eqref{eq:eta_gamma_choices}, and $(ii)$ is true due to the fact that $B, C \geq 1$ and $\rho\geq 1$. Lastly, it also holds that \begin{equation*} \gamma I \leq 4 \eta I \sigma + 2 \gamma I = 2 \eta I \left( 2\sigma + \frac{\gamma}{\eta} \right) \leq \frac{C}{L_1}. \end{equation*} Therefore the conditions of Lemma \ref{lemma:individual_dis} are satisfied, and we can apply Lemma \ref{lemma:descent}.
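As an illustrative numerical check with hypothetical constants (not part of the proof), one can confirm that the step sizes in \eqref{eq:eta_gamma_choices} do satisfy the premises of Lemma \ref{lemma:individual_dis}:

```python
import math

# Hypothetical problem constants (any values with rho >= 1 serve as an example).
L0, L1, kappa, rho, sigma = 1.0, 0.5, 1.0, 2.0, 1.0
N, I, eps = 8, 10, 0.05
C = 1.0  # gives A = 2 and B = e - 1, both >= 1
A = 1 + math.e ** C - (math.e ** C - 1) / C
B = (math.e ** C - 1) / C
assert eps <= 3 * A * L0 / (5 * B * L1 * rho)

# Step sizes from the theorem: gamma/eta is fixed first, then eta is the minimum.
g_over_e = 11 * sigma + A * L0 / (B * L1 * rho)
Gamma = A * L0 + B * L1 * kappa + B * L1 * rho * (sigma + g_over_e)
eta = min(1 / (856 * Gamma * I),
          eps / (180 * Gamma * I * sigma),
          N * eps ** 2 / (8 * A * L0 * sigma ** 2))
gamma = g_over_e * eta

# Premises of the discrepancy lemma:
assert 2 * eta * I * (A * L0 + B * L1 * kappa
                      + B * L1 * rho * (sigma + gamma / eta)) <= 1
assert max(2 * eta * I * (2 * sigma + gamma / eta), gamma * I) <= C / L1
print("step-size conditions verified")
```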
Denoting \begin{equation} \label{eq:thm_u_def} U({\bm{x}}) = \left(-\frac{2}{5} \gamma I + \frac{BL_1 (4\rho + 1) \gamma^2 I^2}{2} \right) \|\nabla f({\bm{x}})\| - \frac{3\gamma^2 I}{5\eta} + \gamma^2 I^2 (3AL_0 + 2BL_1 \kappa) + 6 \gamma I \sigma, \end{equation} and \begin{align} V({\bm{x}}) &= \left( -\frac{\eta I}{2} + 36 \Gamma^2 I^3 \eta^3 + 9\frac{\gamma}{\eta}BL_1 I^2 \eta^2 \right) \|\nabla f({\bm{x}})\|^2 + 9 BL_1 I^2 \eta^2 \LRs{5\sigma^2+\frac{\gamma}{\eta}\sigma} \|\nabla f({\bm{x}})\| \nonumber \\ &\quad + 126 \Gamma^2 I^3 \eta^3 \sigma^2 + \frac{2AL_0 I \eta^2 \sigma^2}{N}. \label{eq:thm_v_def} \end{align} Lemma \ref{lemma:descent} tells us that \begin{equation} \label{eq:thm_descent_u_v} \mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] \leq \mathbb{E}_r \left[ \indicator{\bar{{\mathcal{A}}}_r} U(\bar{\vx}_r) + \indicator{{\mathcal{A}}_r} V(\bar{\vx}_r) \right]. \end{equation} We will proceed by bounding each of $U({\bm{x}})$ and $V({\bm{x}})$ by the same linear function of $\|\nabla f({\bm{x}})\|$.
To bound $U({\bm{x}})$, notice \begin{align} -\frac{2}{5} \gamma I + &\frac{BL_1 (4\rho + 1) \gamma^2 I^2}{2}\nonumber\\ &= -\frac{2}{5} \gamma I + 2 BL_1 \rho \gamma^2 I^2 + \frac{1}{2} BL_1 \gamma^2 I^2 \nonumber \\ &\leq \gamma I \left( -\frac{2}{5} + 2BL_1 \rho \gamma I + \frac{1}{2} BL_1 \gamma I \right) \nonumber \\ &\leq \gamma I \left( -\frac{2}{5} + 2BL_1 \rho \left( 11 \sigma + \frac{AL_0}{BL_1 \rho} \right) \eta I + \frac{1}{2} BL_1 \left( 11 \sigma + \frac{AL_0}{BL_1 \rho} \right) \eta I \right) \nonumber \\ &\Eqmark{i}{\leq} \gamma I \left( -\frac{2}{5} + 3 \left( 11 BL_1 \rho \sigma + AL_0 \right) \eta I \right) \nonumber \\ &\Eqmark{ii}{\leq} \gamma I \left( -\frac{2}{5} + \frac{18}{856} \right) \leq -\frac{3}{10} \gamma I\nonumber\\ &\Eqmark{iii}{\leq} -\frac{3}{10}\frac{AL_0}{BL_1 \rho} \eta I \Eqmark{iv}{\leq} -\frac{1}{2} \epsilon \eta I, \label{eq:thm_u1_bound} \end{align} where $(i)$ comes from $\rho \geq 1$ and $(ii)$ comes from $856 \Gamma \eta I \leq 1$ and $(iii)$ holds since $\gamma/\eta = 11\sigma + \frac{AL_0}{BL_1 \rho}$ and $(iv)$ comes from $\epsilon \leq \frac{3AL_0}{5 BL_1\rho}$. Also, we have \begin{align} - \frac{3\gamma^2 I}{5\eta} + \gamma^2 I^2 (3AL_0 + 2BL_1 \kappa) + 6 \gamma I \sigma &\leq \frac{\gamma^2 I}{\eta} \left( -\frac{3}{5} + 3 \Gamma \eta I + 6 \sigma \frac{\eta}{\gamma} \right) \nonumber \\ &\leq \frac{\gamma^2 I}{\eta} \left( -\frac{3}{5} + \frac{3}{856} + \frac{6 \sigma}{11 \sigma + \frac{AL_0}{BL_1 \rho}} \right) \nonumber \\ &\leq \frac{\gamma^2 I}{\eta} \left( -\frac{3}{5} + \frac{3}{856} + \frac{6}{11} \right) \leq 0. \label{eq:thm_u2_bound} \end{align} Plugging Equations \eqref{eq:thm_u1_bound} and \eqref{eq:thm_u2_bound} into Equation \eqref{eq:thm_u_def} yields \begin{equation} \label{eq:thm_u_bound} U({\bm{x}}) \leq -\frac{1}{2} \epsilon \eta I \|\nabla f({\bm{x}})\|. 
\end{equation} Now to bound $V({\bm{x}})$, we have \begin{align} -\frac{\eta I}{2} + 36 \Gamma^2 I^3 \eta^3 + 9\frac{\gamma}{\eta}BL_1 I^2 \eta^2 &\Eqmark{i}{\leq} -\frac{1}{2} \eta I + \frac{36}{856^2} \eta I + \frac{9(11BL_1\sigma + AL_0/\rho)}{856\Gamma} \eta I \nonumber \\ &\leq -\frac{1}{4} \eta I, \label{eq:thm_v1_bound} \end{align} where $(i)$ comes from $\eta \leq \frac{1}{856 \Gamma I}$, and the last step uses $11BL_1\sigma + AL_0/\rho \leq \Gamma$ for $\rho \geq 1$. Using the assumption $\eta \leq \frac{\epsilon}{180 I\Gamma \sigma}$, it holds that \begin{align} 9 BL_1 I^2 \eta^2 \LRs{5\sigma^2 + \frac{\gamma}{\eta}\sigma} &= 9 BL_1 I^2 \eta^2\LRs{16\sigma^2 + \frac{AL_0\sigma}{BL_1 \rho}}\nonumber\\ &\leq \eta I \epsilon \frac{16 BL_1 \sigma + AL_0}{20\Gamma} \Eqmark{ii}{\leq} \frac{1}{4} \epsilon \eta I \label{eq:thm_v2_bound} \end{align} where $(ii)$ comes from $16BL_1 \sigma + AL_0 < 5\Gamma$. Lastly, we have \begin{align} 126 \Gamma^2 I^3 \eta^3 \sigma^2 + \frac{2AL_0 I \eta^2 \sigma^2}{N} &= \eta I \LRs{126 \Gamma^2 I^2 \eta^2 \sigma^2 + \frac{2AL_0 \eta \sigma^2}{N}} \nonumber \\ &\Eqmark{iii}{\leq}\eta I \LRs{126\Gamma^2\sigma^2 \cdot \frac{\epsilon^2}{180^2 \Gamma^2 \sigma^2} + \frac{2AL_0 \sigma^2}{N} \frac{N \epsilon^2}{8 AL_0 \sigma^2}} \nonumber\\ &\leq \frac{1}{4} \epsilon^2 \eta I, \label{eq:thm_v3_bound} \end{align} where $(iii)$ comes from $\eta \leq \min\LRl{\frac{\epsilon}{180 I\Gamma \sigma},\frac{N \epsilon^2}{8 AL_0 \sigma^2}}$.
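The comparisons between $\Gamma$ and the constants used in $(i)$ and $(ii)$ above can be spot-checked numerically (an illustration only, over a hypothetical grid of problem constants):

```python
import math

# Hypothetical grid check (illustration only) that Gamma dominates the
# constants used above: 11*B*L1*sigma + A*L0/rho <= Gamma and
# 16*B*L1*sigma + A*L0 < 5*Gamma, where Gamma = A*L0 + B*L1*kappa
# + B*L1*rho*(sigma + gamma/eta) and gamma/eta = 11*sigma + A*L0/(B*L1*rho).
C = 1.0
A = 1 + math.e ** C - (math.e ** C - 1) / C
B = (math.e ** C - 1) / C

for L0 in (0.1, 1.0, 10.0):
    for L1 in (0.1, 1.0, 10.0):
        for kappa in (0.0, 1.0):
            for rho in (1.0, 2.0, 5.0):
                for sigma in (0.0, 0.5, 3.0):
                    g_over_e = 11 * sigma + A * L0 / (B * L1 * rho)
                    Gamma = (A * L0 + B * L1 * kappa
                             + B * L1 * rho * (sigma + g_over_e))
                    assert 11 * B * L1 * sigma + A * L0 / rho <= Gamma
                    assert 16 * B * L1 * sigma + A * L0 < 5 * Gamma
print("constant comparisons verified on the grid")
```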
Plugging Equations \eqref{eq:thm_v1_bound}, \eqref{eq:thm_v2_bound}, and \eqref{eq:thm_v3_bound} into \eqref{eq:thm_v_def} then yields \begin{equation*} V({\bm{x}}) \leq -\frac{1}{4} \eta I \|\nabla f({\bm{x}})\|^2 + \frac{1}{4} \epsilon \eta I \|\nabla f({\bm{x}})\| + \frac{1}{4} \epsilon^2 \eta I. \end{equation*} We can then use the inequality $z^2 \geq 2az - a^2$ with $z = \|\nabla f({\bm{x}})\|$ and $a = \epsilon$ to obtain \begin{equation} \label{eq:thm_v_bound} V({\bm{x}}) \leq -\frac{1}{4} \epsilon \eta I \|\nabla f({\bm{x}})\| + \frac{1}{2} \epsilon^2 \eta I. \end{equation} Having bounded $U({\bm{x}})$ and $V({\bm{x}})$, we can return to \eqref{eq:thm_descent_u_v}. Using \eqref{eq:thm_u_bound}, we can see \begin{equation*} U({\bm{x}}) \leq -\frac{1}{2} \epsilon \eta I \|\nabla f({\bm{x}})\| \leq -\frac{1}{4} \epsilon \eta I \|\nabla f({\bm{x}})\| + \frac{1}{2} \epsilon^2 \eta I, \end{equation*} so the right-hand side of \eqref{eq:thm_v_bound} is an upper bound on both $U({\bm{x}})$ and $V({\bm{x}})$. Plugging this bound into \eqref{eq:thm_descent_u_v} and taking total expectation then gives \begin{equation*} \mathbb{E} \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] \leq -\frac{1}{4} \epsilon \eta I \mathbb{E} \left[ \|\nabla f(\bar{\vx}_r)\| \right] + \frac{1}{2} \epsilon^2 \eta I.
\end{equation*} Finally, denoting $\Delta = f(\bar{\vx}_0) - f^*$, we can unroll the above recurrence to obtain \begin{align*} \mathbb{E} \left[ f(\bar{\vx}_{R+1}) - f(\bar{\vx}_0) \right] &\leq -\frac{1}{4} \epsilon \eta I \sum_{r=0}^{R} \mathbb{E} \left[ \|\nabla f(\bar{\vx}_r)\| \right] + \frac{1}{2} (R+1) \epsilon^2 \eta I, \\ \frac{1}{R+1} \sum_{r=0}^{R} \mathbb{E} \left[ \|\nabla f(\bar{\vx}_r)\| \right] &\leq \frac{4 \Delta}{\epsilon \eta I (R+1)} + 2 \epsilon, \\ \frac{1}{R+1} \sum_{r=0}^{R} \mathbb{E} \left[ \|\nabla f(\bar{\vx}_r)\| \right] &\leq 3 \epsilon, \end{align*} where the last inequality comes from our choice of $R \geq \frac{4 \Delta}{\epsilon^2 \eta I}$. \end{proof} \section{Deferred Proofs of Section \ref{sec:main_proof}}\label{sec:claim_proofs} \subsection{Proof of Claim \ref{claim:inner_prod_clip}} \begin{proof} Starting from Lemma \ref{lemma:clip_inprod} with ${\bm{u}} = \nabla f(\bar{\vx}_r)$ and ${\bm{v}} = {\bm{g}}_t^i$, we have \begin{equation} \label{eq:clip_inprod_1} -\frac{\langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle}{\|{\bm{g}}_t^i\|} \leq -\mu \|\nabla f(\bar{\vx}_r)\| - (1-\mu) \|{\bm{g}}_t^i\| + (1+\mu) \|{\bm{g}}_t^i - \nabla f(\bar{\vx}_r)\|. 
\end{equation} Under $\bar{{\mathcal{A}}}_r = \{\|{\bm{G}}_r\| > \frac{\gamma}{\eta}\}$, note that ${\bm{g}}_t^i=\nabla F_i({\bm{x}}_t^i;\xi_t^i)-{\bm{G}}_r^i+{\bm{G}}_r$, and we have \begin{align*} \|{\bm{g}}_t^i\| &\geq \|{\bm{G}}_r\| - \|\nabla F_i({\bm{x}}_t^i; \xi_t^i) - {\bm{G}}_r^i\| \\ &\geq \frac{\gamma}{\eta} - \|\nabla F_i({\bm{x}}_t^i; \xi_t^i) - \nabla f_i({\bm{x}}_t^i)\| - \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\| - \|\nabla f_i(\bar{\vx}_r) - {\bm{G}}_r^i\| \\ &\geq \frac{\gamma}{\eta} - 2\sigma - \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\| \end{align*} and \begin{align*} \|{\bm{g}}_t^i - \nabla f(\bar{\vx}_r)\| &\leq \|\nabla F_i({\bm{x}}_t^i; \xi_t^i) - \nabla f_i({\bm{x}}_t^i)\| + \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\|\\ &\quad + \|\nabla f_i(\bar{\vx}_r) - {\bm{G}}_r^i\| + \|{\bm{G}}_r - \nabla f(\bar{\vx}_r)\| \\ &\leq 3\sigma + \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\|. \end{align*} Plugging these two inequalities into \eqref{eq:clip_inprod_1} yields \begin{equation*} -\frac{\langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle}{\|{\bm{g}}_t^i\|} \leq -\mu \|\nabla f(\bar{\vx}_r)\| - (1-\mu) \frac{\gamma}{\eta} + (5+\mu) \sigma + 2 \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\|. \end{equation*} Under $\bar{{\mathcal{A}}}_r$, we know $\|{\bm{x}}_t^i - \bar{\vx}_r\| \leq \gamma I$, and $\gamma I \leq \frac{C}{L_1}$ by assumption. Therefore we can apply Lemma \ref{lemma:smooth_grad_diff} to obtain \begin{equation*} \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\| \leq (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|) \|{\bm{x}}_t^i - \bar{\vx}_r\| \leq \gamma I (AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|). \end{equation*} This implies that \begin{equation*} -\frac{\langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle}{\|{\bm{g}}_t^i\|} \leq -\mu \|\nabla f(\bar{\vx}_r)\| - (1-\mu) \frac{\gamma}{\eta} + (5+\mu) \sigma + 2 AL_0 \gamma I + 2 BL_1 \gamma I \|\nabla f_i(\bar{\vx}_r)\|.
\end{equation*} Combining this with the choice $\mu = 2/5$, we have the final bound: \begin{align*} &-\gamma \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{\bar{{\mathcal{A}}}_r} \langle \nabla f(\bar{\vx}_r), \frac{{\bm{g}}_t^i}{\|{\bm{g}}_t^i\|} \rangle \right] \\ &\quad\quad \leq \frac{1}{N} \sum_{i=1}^N (1-p_r) \left( -\frac{2}{5} \gamma I \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + 6 \gamma I \sigma + 2 AL_0 \gamma^2 I^2 + 2 BL_1 \gamma^2 I^2 \|\nabla f_i(\bar{\vx}_r)\| \right) \\ &\quad\quad \leq (1-p_r) \left( \left(-\frac{2}{5} \gamma I + 2 BL_1 \rho \gamma^2 I^2 \right) \|\nabla f(\bar{\vx}_r)\| - \frac{3\gamma^2 I}{5\eta} + 2 \gamma^2 I^2 (AL_0 + BL_1 \kappa) + 6 \gamma I \sigma \right) \end{align*} where we used the heterogeneity assumption $\|\nabla f_i(\bar{\vx}_r)\| \leq \kappa + \rho \|\nabla f(\bar{\vx}_r)\|$. \end{proof} \subsection{Proof of Claim \ref{claim:inner_prod_no_clip}} \begin{proof} Recall the event ${\mathcal{A}}_r = \{\|{\bm{G}}_r\| \leq \gamma / \eta\}$; we have \begin{align} &I \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{{\mathcal{A}}_r} \langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle \right] = \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\left\langle I \nabla f(\bar{\vx}_r), \sum_{t \in {\mathcal{I}}_r} \frac{1}{N} \sum_{i=1}^N{\bm{g}}_t^i \right\rangle} \nonumber\\ &\quad\Eqmark{i}{=} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\left\langle I \nabla f(\bar{\vx}_r), \sum_{t \in {\mathcal{I}}_r} \frac{1}{N} \sum_{i=1}^N \nabla F_i({\bm{x}}_t^i;\xi_t^i) \right\rangle} \nonumber \\ &\quad\Eqmark{ii}{=} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\left\langle I \nabla f(\bar{\vx}_r), \sum_{t \in {\mathcal{I}}_r} \frac{1}{N} \sum_{i=1}^N \nabla f_i({\bm{x}}_t^i) \right\rangle} \nonumber \\ &\quad\Eqmark{iii}{=} \frac{p_r I^2}{2} \|\nabla f(\bar{\vx}_r)\|^2 + \frac{1}{2} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N
\sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right]\nonumber\\ &\quad\quad - \frac{1}{2} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}\left\| \sum_{t \in {\mathcal{I}}_r} \left( \frac{1}{N} \sum_{i=1}^N \nabla f_i({\bm{x}}_t^i) - \nabla f(\bar{\vx}_r) \right) \right\|^2\right]. \label{eq:claim_inner2_1} \end{align} The equality $(i)$ is obtained from the fact that $\frac{1}{N} \sum_{i=1}^N {\bm{g}}_t^i = \frac{1}{N} \sum_{i=1}^N \nabla F_i({\bm{x}}_t^i, \xi_t^i) - {\bm{G}}_r^i + {\bm{G}}_r = \frac{1}{N} \sum_{i=1}^N \nabla F_i({\bm{x}}_t^i, \xi_t^i)$. The equality $(ii)$ holds by the tower property: for $t > t_r$, \begin{align*} \mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\nabla F_i({\bm{x}}_t^i, \xi_t^i) \big| {\mathcal{F}}_r} = \mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\mathbb{E}\LRm{\nabla F_i({\bm{x}}_t^i, \xi_t^i) \big| {\mathcal{H}}_t} \big| {\mathcal{F}}_r} =\mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\nabla f_i({\bm{x}}_t^i) \big| {\mathcal{F}}_r}; \end{align*} while for $t = t_r$, \begin{align*} \mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\nabla F_i(\bar{\vx}_r, \xi_{t_r}^i) \big| {\mathcal{F}}_r} = \mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}|{\mathcal{F}}_r}\mathbb{E}\LRm{\nabla F_i(\bar{\vx}_r, \xi_{t_r}^i) \big| {\mathcal{F}}_r} = \mathbb{E}\LRm{\indicator{{\mathcal{A}}_r}\nabla f_i(\bar{\vx}_r) \big| {\mathcal{F}}_r}, \end{align*} which is true since ${\bm{G}}_r = \frac{1}{N}\sum_{i=1}^N \nabla F_i(\bar{\vx}_r;\widetilde{\xi}_{r}^i)$ is independent of $\nabla F_i(\bar{\vx}_r, \xi_{t_r}^i)$ given ${\mathcal{F}}_r$, and $(iii)$ holds because $2 \langle a, b \rangle = \|a\|^2 + \|b\|^2 - \|a-b\|^2$. Let $\Gamma = AL_0 + BL_1 \left(\kappa + \rho \left(\sigma + \frac{\gamma}{\eta} \right)\right)$.
Notice that we can apply the relaxed smoothness in Lemma \ref{lemma:smooth_grad_diff} to obtain \begin{align*} &\mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \|\nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r)\|^2\right]\\ &\qquad\leq \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}(AL_0 + BL_1 \|\nabla f_i(\bar{\vx}_r)\|)^2 \|{\bm{x}}_t^i - \bar{\vx}_r\|^2 \right] \\ &\qquad\leq \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}(AL_0 + BL_1 (\kappa + \rho \|\nabla f(\bar{\vx}_r)\|))^2 \|{\bm{x}}_t^i - \bar{\vx}_r\|^2 \right] \\ &\qquad\Eqmark{i}{\leq} \Gamma^2 \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \|{\bm{x}}_t^i - \bar{\vx}_r\|^2 \right] \\ &\qquad\Eqmark{ii}{\leq} 18p_r I^2 \eta^2 \Gamma^2 \LRs{2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 7\sigma^2}. \end{align*} The inequality $(i)$ holds since $\twonorm{\nabla f(\bar{\vx}_r)} \leq \twonorm{\nabla f(\bar{\vx}_r) - {\bm{G}}_r} + \twonorm{{\bm{G}}_r} \leq \sigma + \gamma/\eta$ almost surely under the event ${\mathcal{A}}_r$. The inequality $(ii)$ follows from the bound \eqref{eq:drift_expectation_bound_quadratic} in Lemma \ref{lemma:non_clipping_discre_expec}. 
Therefore, we are guaranteed that \begin{align} &\mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \sum_{t \in {\mathcal{I}}_r} \left( \frac{1}{N} \sum_{i=1}^N \nabla f_i({\bm{x}}_t^i) - \nabla f(\bar{\vx}_r) \right) \right\|^2 \right] \nonumber \\ &\quad\quad \leq I \sum_{t \in {\mathcal{I}}_r} \frac{1}{N} \sum_{i=1}^N \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \nabla f_i({\bm{x}}_t^i) - \nabla f_i(\bar{\vx}_r) \right\|^2 \right] \nonumber \\ &\quad\quad \leq I \sum_{t \in {\mathcal{I}}_r} \frac{1}{N} \sum_{i=1}^N 18p_r I^2 \eta^2 \Gamma^2 \LRs{2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 7 \sigma^2} \nonumber \\ &\quad\quad \leq 18 p_r I^4 \eta^2 \Gamma^2 \LRs{2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 7 \sigma^2},\label{eq:claim_inner2_2} \end{align} where the first inequality uses $\frac{1}{N}\sum_{i=1}^N \nabla f_i(\bar{\vx}_r) = \nabla f(\bar{\vx}_r)$ together with Jensen's inequality. Multiplying both sides of \eqref{eq:claim_inner2_1} by $-\eta/I$ and substituting \eqref{eq:claim_inner2_2} then yields \begin{align*} &-\eta \mathbb{E}_r \left[ \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \indicator{{\mathcal{A}}_r} \langle \nabla f(\bar{\vx}_r), {\bm{g}}_t^i \rangle \right] \\ &\leq -\frac{p_r \eta I}{2} \|\nabla f(\bar{\vx}_r)\|^2 - \frac{\eta}{2I} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] \\ &\quad\quad+ \frac{p_r \eta}{2I} \mathbb{E}_r \left[ \left\| \sum_{t \in {\mathcal{I}}_r} \left( \frac{1}{N} \sum_{i=1}^N \nabla f_i({\bm{x}}_t^i) - \nabla f(\bar{\vx}_r) \right) \right\|^2 \right] \\ &\leq p_r \LRm{\left( -\frac{\eta I}{2} + 36 I^3 \eta^3 \Gamma^2 \right) \|\nabla f(\bar{\vx}_r)\|^2+ 126 I^3 \eta^3 \sigma^2 \Gamma^2} - \frac{\eta}{2I} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right].
\end{align*} \end{proof} \subsection{Proof of Claim \ref{claim:quadratic}} \begin{proof} From the definition of $\bar{\vx}_{r+1}$, we have \begin{align} &\mathbb{E}_r\left[\|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2\right]\nonumber\\ &\qquad\leq 2 \eta^2 \mathbb{E}_r \left[ \indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} {\bm{g}}_t^i \right\|^2 \right] + 2 \gamma^2 \mathbb{E}_r \left[ \indicator{\bar{{\mathcal{A}}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \frac{{\bm{g}}_t^i}{\|{\bm{g}}_t^i\|} \right\|^2 \right] \nonumber \\ &\qquad\Eqmark{i}{\leq} 2\eta^2 \mathbb{E}_r \left[ \indicator{{\mathcal{A}}_r}\left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla F_i({\bm{x}}_t^i, \xi_t^i) \right\|^2 \right] + 2(1-p_r) \gamma^2 I^2 \nonumber \\ &\qquad\leq 4\eta^2 \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}\left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] \nonumber \\ &\qquad\quad + 4p_r \eta^2 \mathbb{E}_r \left[ \indicator{{\mathcal{A}}_r}\left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i) \right\|^2 \right]+ 2 (1-p_r) \gamma^2 I^2 \nonumber \\ &\qquad\Eqmark{ii}{\leq} 4 \eta^2 \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \frac{1}{N} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] \nonumber \\ &\qquad\quad + 4 \eta^2 \frac{1}{N^2} \sum_{i=1}^N \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}\left\| \sum_{t \in {\mathcal{I}}_r} \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] + 2 (1-p_r) \gamma^2 I^2, \label{eq:claim_quad_1} \end{align} where $(i)$ is obtained by noticing that $\frac{1}{N} \sum_{i=1}^N {\bm{g}}_t^i = \frac{1}{N} \sum_{i=1}^N \nabla F_i({\bm{x}}_t^i, \xi_t^i)$, and $(ii)$ holds by the fact that each client's stochastic gradients $\nabla F_i({\bm{x}}_t^i, \xi_t^i)$ are sampled 
independently from one another. Similarly, for $s \in {\mathcal{I}}_r$ with $s > t$, we can see that \begin{align*} &\mathbb{E}_r \left[\indicator{{\mathcal{A}}_r}\langle \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i), \nabla F_i({\bm{x}}_s^i;\xi_s^i) - \nabla f_i({\bm{x}}_s^i) \rangle \right] \\ &\quad = \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \mathbb{E}_r \left[ \langle \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i), \nabla F_i({\bm{x}}_s^i;\xi_s^i) - \nabla f_i({\bm{x}}_s^i) \rangle \bigg| {\mathcal{H}}_s \right] \right] \\ &\quad = \mathbb{E}_r \left[ \indicator{{\mathcal{A}}_r}\langle \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i), \mathbb{E}_r \left[ \nabla F_i({\bm{x}}_s^i;\xi_s^i) \bigg| {\mathcal{H}}_s \right] - \nabla f_i({\bm{x}}_s^i) \rangle \right] \\ &\quad = 0. \end{align*} Therefore, we have \begin{align} &\frac{1}{N^2} \sum_{i=1}^N \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \sum_{t \in {\mathcal{I}}_r} \left( \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i) \right) \right\|^2 \right]\nonumber\\ &\quad = \frac{1}{N^2} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \left\| \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i) \right\|^2 \right] \nonumber \\ &\quad = \frac{1}{N^2} \sum_{i=1}^N \sum_{t \in {\mathcal{I}}_r} \mathbb{E}_r \left[\indicator{{\mathcal{A}}_r} \mathbb{E}_r\LRm{\left\| \nabla F_i({\bm{x}}_t^i;\xi_t^i) - \nabla f_i({\bm{x}}_t^i) \right\|^2\big| {\mathcal{H}}_t} \right]\nonumber\\ &\quad\leq \frac{p_r I\sigma^2}{N}. \label{eq:claim_quad_2} \end{align} The desired result is then obtained by plugging \eqref{eq:claim_quad_2} into \eqref{eq:claim_quad_1}.
\end{proof} \section{Additional Experimental Results} \label{appen:experiments} \subsection{Proof of Proposition \ref{proposition:H_kappa}}\label{proof:proposition:H_kappa} \begin{proof} Recall the definition of $f_1(x)$ and $f_2(x)$, \begin{align*} f_1(x) = x^4 - 3x^3 + Hx^2 + x, \quad f_2(x) = x^4 - 3x^3 - 2Hx^2 + x, \end{align*} which means \begin{align*} \nabla f(x) = 4x^3 - 9x^2 - H x+1 \end{align*} and \begin{align*} \nabla f_1(x) - \nabla f(x) = 3H x,\quad \nabla f_2(x) - \nabla f(x) = -3H x. \end{align*} It follows that \begin{align} \twonorm{\nabla f_i(x)} &\leq \twonorm{\nabla f_i(x) - \nabla f(x)} + \twonorm{\nabla f(x)}\nonumber\\ &\leq 3H |x| + \twonorm{\nabla f(x)}\nonumber\\ &\leq 3H |x| - \LRabs{4x^3 - 9x^2 - H x+1} + 2\twonorm{\nabla f(x)}\nonumber\\ &\leq 4H|x| - \LRabs{4x^3 - 9x^2 + 1} + 2\twonorm{\nabla f(x)}\nonumber\\ &\leq 10H|x| - \LRabs{4x^3 - 9x^2} + 1 + 2\twonorm{\nabla f(x)}. \label{eq:synthetic_H_bound} \end{align} Let $g(x) = 10H|x| - \LRabs{4x^3 - 9x^2}$; next we characterize $g(x)$ in different regions. (i) When $x \in (-\infty, 0)$, $g(x) = 4x^3 - 9x^2 - 10Hx$. The root of the derivative of $g(x)$ in this region is \begin{align*} 12x^2 - 18x - 10H = 0 \Longrightarrow x = x_1:= \frac{18-\sqrt{18^2 + 480H}}{24}. \end{align*} It follows that \begin{align}\label{eq:gx_maximum_1} g(x) &\leq 4x_1^3 - 9x_1^2-10H x_1\nonumber\\ &\leq 10H \LRs{\frac{\sqrt{18^2 + 480H}-18}{24}}\nonumber\\ &\leq 10H \LRs{\frac{20H}{24}} \leq \frac{25H^2}{3}, \end{align} where the second inequality follows from $x_1 \leq 0$ and the third from $\sqrt{18^2 + 480H} \leq 18 + 20H$. (ii) When $x \in (0,\frac{9}{4})$, $g(x) = 4x^3 - 9x^2 + 10Hx$. The derivative of $g(x)$ is greater than 0 in this case since $18^2 - 480 H \leq 0$ for $H \geq 1$. Then we have \begin{align}\label{eq:gx_maximum_2} g(x) \leq 10H \cdot \frac{9}{4} = \frac{45H}{2}. \end{align} (iii) When $x \in (\frac{9}{4}, +\infty)$, $g(x) = -4x^3 + 9x^2 + 10H x$.
The root of the derivative of $g(x)$ in this region is \begin{align*} -12x^2 + 18x + 10H = 0 \Longrightarrow x = x_2 := \frac{18+\sqrt{18^2 + 480H}}{24}. \end{align*} Then we have \begin{align}\label{eq:gx_maximum_3} g(x) &\leq \max\LRl{-4x_2^3 + 9x_2^2 + 10 H x_2, -4\LRs{\frac{9}{4}}^3 + 9\LRs{\frac{9}{4}}^2 + \frac{45H}{2}}\nonumber\\ &\leq 9x_2^2 + 10 H x_2 + 9\LRs{\frac{9}{4}}^2 + \frac{45H}{2}. \end{align} Combining \eqref{eq:gx_maximum_1}, \eqref{eq:gx_maximum_2} and \eqref{eq:gx_maximum_3}, we are guaranteed that \begin{align*} g(x)+1 &\leq 9\LRs{\frac{18+\sqrt{18^2 + 480H}}{24}}^2 + 10 H\LRs{\frac{18+\sqrt{18^2 + 480H}}{24}}\\ &\qquad + \frac{25H^2}{3} + 45H + 100\\ &:= \kappa(H). \end{align*} Substituting this bound into \eqref{eq:synthetic_H_bound}, we get \begin{align*} \twonorm{\nabla f_i(x)} &\leq 2\twonorm{\nabla f(x)} + g(x) + 1\\ &\leq 2\twonorm{\nabla f(x)} + \kappa(H). \end{align*} Moreover, $\kappa(H) < \infty$ is an increasing function of $H$. \end{proof} \subsection{Synthetic task}\label{appen:synthetic} For both algorithms, we inject uniform noise over $[-1, 1]$ into the gradient at each step, and tune $\gamma / \eta \in \{5, 10, 15\}$ and $\eta \in \{0.1, 0.01, 0.001\}$. We run each algorithm for 500 communication rounds, and the length of each communication round is $I = 8$. The results are shown in Figure~\ref{fig:synthetic_trajectory}.
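The synthetic setup above can be reproduced in a few lines. Below is a minimal sketch (in Python; the function names are ours, not from any released code) of the CELGC-style baseline on the two clients $f_1$ and $f_2$: each client runs $I$ clipped, noisy local gradient steps, and the iterates are averaged at every communication round. The configuration shown ($\eta = 0.01$, $\gamma/\eta = 10$, $H = 4$) is one point of the tuning grid described above.

```python
import random

H = 4.0                    # heterogeneity parameter of f_1, f_2
ETA, THRESH = 0.01, 10.0   # learning rate eta and clipping threshold gamma/eta
I, ROUNDS = 8, 500         # local steps per round, communication rounds

def grad_f1(x):  # f_1(x) = x^4 - 3x^3 + Hx^2 + x
    return 4 * x**3 - 9 * x**2 + 2 * H * x + 1

def grad_f2(x):  # f_2(x) = x^4 - 3x^3 - 2Hx^2 + x
    return 4 * x**3 - 9 * x**2 - 4 * H * x + 1

def clip(g):
    # rescale g to norm THRESH whenever its norm exceeds the threshold
    return g if abs(g) <= THRESH else THRESH * g / abs(g)

def run_celgc(x0=0.0, seed=0):
    rng = random.Random(seed)
    x1 = x2 = x0
    for _ in range(ROUNDS):
        for _ in range(I):  # local clipped SGD with uniform noise on [-1, 1]
            x1 -= ETA * clip(grad_f1(x1) + rng.uniform(-1.0, 1.0))
            x2 -= ETA * clip(grad_f2(x2) + rng.uniform(-1.0, 1.0))
        x1 = x2 = (x1 + x2) / 2  # communication: average the two iterates
    return x1
```

EPISODE would differ in the local step: each stochastic gradient is corrected as $\nabla F_i({\bm{x}}_t^i;\xi_t^i) - {\bm{G}}_r^i + {\bm{G}}_r$, and the clip/no-clip decision is made once per round from $\|{\bm{G}}_r\|$ (cf.\ the events ${\mathcal{A}}_r$ and $\bar{{\mathcal{A}}}_r$ in the proofs above).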
\begin{figure}[h] \centering \subfigure[$H=1$]{\includegraphics[width=0.3\linewidth]{plots/traj_h1.pdf} \includegraphics[width=0.3\linewidth]{plots/obj_h1.pdf} } \subfigure[$H=2$]{\includegraphics[width=0.3\linewidth]{plots/traj_h2.pdf} \includegraphics[width=0.3\linewidth]{plots/obj_h2.pdf} } \subfigure[$H=4$]{\includegraphics[width=0.3\linewidth]{plots/traj_h4.pdf} \includegraphics[width=0.3\linewidth]{plots/obj_h4.pdf} } \subfigure[$H=8$]{\includegraphics[width=0.3\linewidth]{plots/traj_h8.pdf} \includegraphics[width=0.3\linewidth]{plots/obj_h8.pdf} } \caption{The loss trajectories and converged solutions of CELGC and EPISODE on the synthetic task.} \label{fig:synthetic_trajectory} \end{figure} \subsection{SNLI} \label{appen:SNLI} The learning rate $\eta$ and the clipping parameter $\gamma$ are tuned with grid search in the following way: we vary $\gamma \in \{0.01, 0.03, 0.1\}$ and for each $\gamma$ we vary $\eta$ so that the clipping threshold $\gamma / \eta$ varies over $\{0.1, 0.333, 1.0, 3.333, 10.0\}$, leading to 15 pairs $(\eta, \gamma)$. We decay both $\eta$ and $\gamma$ by a factor of $0.5$ at epochs $15$ and $20$. We choose the best pair $(\eta, \gamma)$ according to the performance on a validation set, and the corresponding model is evaluated on a held-out test set. Note that we do not tune $(\gamma, \eta)$ separately for each algorithm. Instead, due to computational constraints, we tune the hyperparameters for the baseline CELGC under the setting $I = 4$, $s = 50\%$ and re-use the tuned values for the rest of the settings.
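The grid construction and the joint decay of $(\eta, \gamma)$ described above can be sketched as follows (a hedged illustration; the helper names are ours). A useful property of decaying $\eta$ and $\gamma$ by the same factor is that the clipping threshold $\gamma/\eta$ is left unchanged.

```python
# Build the 15 (eta, gamma) pairs: gamma in {0.01, 0.03, 0.1}, with eta
# chosen so that the clipping threshold gamma/eta sweeps the five values below.
GAMMAS = [0.01, 0.03, 0.1]
THRESHOLDS = [0.1, 0.333, 1.0, 3.333, 10.0]
GRID = [(g / thr, g) for g in GAMMAS for thr in THRESHOLDS]  # (eta, gamma)

def decayed(eta, gamma, epoch, milestones=(15, 20), factor=0.5):
    # decay both eta and gamma at each milestone epoch;
    # the ratio gamma/eta (the clipping threshold) is invariant
    scale = factor ** sum(epoch >= m for m in milestones)
    return eta * scale, gamma * scale
```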
\begin{figure}[tb] \centering \subfigure[$I=8$, $s=50\%$]{ \includegraphics[width=0.38\linewidth]{plots/cifar_50_train_losses.pdf} \includegraphics[width=0.38\linewidth]{plots/cifar_50_test_accuracies.pdf} } \subfigure[$I=8$, $s=70\%$]{ \includegraphics[width=0.38\linewidth]{plots/cifar_70_train_losses.pdf} \includegraphics[width=0.38\linewidth]{plots/cifar_70_test_accuracies.pdf} } \caption{Training curves for CIFAR-10 experiments.} \label{fig:cifar_learning_curves} \end{figure} \subsection{CIFAR-10} \label{appen:cifar} \subsubsection{Setup} We train a ResNet-50 \citep{he2016deep} for 150 epochs using the cross-entropy loss and a batch size of 64 for each worker. Starting from an initial learning rate $\eta_0 = 1.0$ and clipping parameter $\gamma = 0.5$, we decay the learning rate by a factor of 0.5 at epochs 80 and 120. In this setting, we decay the clipping parameter $\gamma$ with the learning rate $\eta$, so that the clipping threshold $\frac{\gamma}{\eta}$ remains constant during training. We present results for $I = 8$ and $s \in \{50\%, 70\%\}$. We include the same baselines as the experiments of the main text, comparing EPISODE to FedAvg, SCAFFOLD, and CELGC. \subsubsection{Results} Training loss and testing accuracy during training are shown below in Figure \ref{fig:cifar_learning_curves}. In both settings, EPISODE is superior in terms of testing accuracy and nearly the best in terms of training loss. \subsection{ImageNet} \label{appen:imagenet} The training curves (training and testing loss) for each ImageNet setting are shown below in Figure \ref{fig:imagenet_learning_curves}. 
\begin{figure} \centering \subfigure[$I=64$, $s=70\%$]{\includegraphics[width=0.45\linewidth]{plots/imagenet_64_70.pdf}} \subfigure[$I=64$, $s=60\%$]{\includegraphics[width=0.45\linewidth]{plots/imagenet_64_60.pdf}} \subfigure[$I=64$, $s=50\%$]{\includegraphics[width=0.45\linewidth]{plots/imagenet_64_50.pdf}} \subfigure[$I=128$, $s=60\%$]{\includegraphics[width=0.45\linewidth]{plots/imagenet_128_60.pdf}} \caption{Training curves for all ImageNet experiments.} \label{fig:imagenet_learning_curves} \end{figure} \begin{table}[tb] \centering {\begin{tabular}{@{}llllll@{}} \toprule Interval & Similarity & Algorithm & 70\% & 75\% & 80\% \\ \midrule 1 & 100\% & NaiveParallelClip & 37.30 & 59.69 & 118.45 \\ \midrule 2 & 30\% & CELGC & 33.57 & 63.98 & N/A \\ & & EPISODE & \textbf{27.20} & \textbf{38.07} & \textbf{70.60} \\ \midrule 4 & 30\% & CELGC & 23.84 & 42.51 & N/A \\ & & EPISODE & \textbf{18.34} & \textbf{25.73} & \textbf{55.15} \\ \midrule 8 & 30\% & CELGC & 20.37 & 34.06 & N/A \\ & & EPISODE & \textbf{13.98} & \textbf{22.43} & \textbf{53.43} \\ \midrule 16 & 30\% & CELGC & \textbf{16.57} & \textbf{27.00} & N/A \\ & & EPISODE & 21.26 & 28.39 & N/A \\ \midrule 4 & 50\% & CELGC & 18.52 & 31.86 & N/A \\ & & EPISODE & \textbf{18.37} & \textbf{25.71} & \textbf{47.76} \\ \midrule 4 & 10\% & CELGC & 39.75 & N/A & N/A \\ & & EPISODE & \textbf{18.46} & \textbf{29.71} & \textbf{55.92} \\ \bottomrule \end{tabular}} \caption{Running time (in minutes) for each algorithm to reach test accuracy of $70\%$, $75\%$, and $80\%$ on SNLI dataset. 
We use N/A to denote when an algorithm did not reach the corresponding level of accuracy over the course of training.} \label{tab:snli_runtime} \end{table} \begin{figure}[tb] \centering \subfigure[Effect of $I$]{ \includegraphics[width=0.45\linewidth]{plots/effect_of_I_time.pdf} } \subfigure[Effect of $\kappa$]{ \includegraphics[width=0.45\linewidth]{plots/effect_of_kappa_time.pdf} } \caption{ Training loss and testing accuracy on SNLI against running time. \textbf{(a)} Various values of communication intervals $I \in \{2, 4, 8, 16\}$ with fixed data similarity $s = 30\%$. \textbf{(b)} Various values of data similarity $s \in \{10\%, 30\%, 50\%\}$ with fixed $I=4$. }\label{fig:snli_running_time} \end{figure} \section{Running Time Results} \label{appen:running_time} To demonstrate the utility of EPISODE for federated learning in practical settings, we also provide a comparison of the running time of each algorithm on the SNLI dataset. Our experiments were run on eight NVIDIA Tesla V100 GPUs distributed across two machines. The training loss and testing accuracy of each algorithm (under the settings described above) are plotted against running time below. Note that these are the same results as shown in Figure \ref{fig:snli}, plotted against time instead of epochs or communication rounds. On the SNLI dataset, EPISODE reaches a lower training loss and higher testing accuracy with respect to time, compared with CELGC and NaiveParallelClip. Table \ref{tab:snli_runtime} shows that, when $I \leq 8$, EPISODE requires significantly less running time to reach high testing accuracy compared with both CELGC and NaiveParallelClip. When $I = 16$, CELGC and EPISODE nearly match, indicating that $I = 16$ may be close to the theoretical upper bound on $I$ for which fast convergence can be guaranteed.
Also, as the client data similarity decreases, the running time requirement of EPISODE to reach high test accuracy stays nearly constant (e.g., when $I=4$), while the running time required by CELGC steadily increases. This demonstrates the resilience of EPISODE's convergence speed to heterogeneity. Training curves for the same experiment are shown in Figure \ref{fig:snli_running_time}. \section{Ablation Study} \label{append:ablation} In this section, we introduce an ablation study which disentangles the roles of the two components of EPISODE's algorithm design: periodically resampled corrections and episodic clipping. Using the SNLI dataset, we have evaluated several variants of the EPISODE algorithm constructed by removing one algorithmic component at a time, and we compare the performance against EPISODE along with variants of the baselines mentioned in the paper. Our ablation study shows that both components of EPISODE's algorithm design (periodically resampled corrections and episodic clipping) contribute to the improved performance over previous work. Our ablation experiments follow the same setting as the SNLI experiments in the main text; the network architecture, hyperparameters, and dataset are all identical to those experiments. In this ablation study, we additionally evaluate multiple variants of EPISODE and baselines, which are described below: \begin{itemize} \item SCAFFOLD (clipped): The SCAFFOLD algorithm~\citep{karimireddy2020scaffold} with gradient clipping applied at each iteration. This algorithm, as a variant of CELGC, determines the gradient clipping operation based on the corrected gradient at every iteration on each machine. \item EPISODE (unclipped): The EPISODE algorithm with the clipping operation removed. \item FedAvg: The FedAvg algorithm~\citep{mcmahan2016communication}. We include this to show that clipping in some form is crucial for optimization in the relaxed smoothness setting.
\item SCAFFOLD: The SCAFFOLD algorithm \citep{karimireddy2020scaffold}. We include this to show that SCAFFOLD-style corrections are not sufficient for optimization in the relaxed smoothness setting. \end{itemize} We compare these four algorithm variations against the algorithms discussed in the main text, which include EPISODE, CELGC, and NaiveParallelClip. Following the protocol outlined in the main text, we train each one of these algorithms while varying the communication interval $I$ and the client data similarity parameter $s$. Specifically, we evaluate six settings formed by first fixing $s = 30\%$ and varying $I \in \{2, 4, 8, 16\}$, then fixing $I = 4$ and varying $s \in \{10\%, 30\%, 50\%\}$. Note that the results of NaiveParallelClip are unaffected by $I$ and $s$, since NaiveParallelClip communicates at every iteration. For each of these six settings, we provide the training loss and testing accuracy reached by each algorithm at the end of training. Final results for all settings are given in Table \ref{tab:snli_ablation}, and training curves for the setting $I = 4, s = 30\%$ are shown in Figure \ref{fig:snli_ablation_curves}. \begin{table}[tb] \centering {\begin{tabular}{@{}lllll@{}} \toprule Interval & Similarity & Algorithm & Train Loss & Test Acc. 
\\ \midrule 1 & 100\% & NaiveParallelClip & 0.357 & 82.4\% \\ \midrule 2 & 30\% & CELGC & 0.579 & 75.9\% \\ & & EPISODE & \textbf{0.361} & \textbf{82.3\%} \\ & & SCAFFOLD (clipped) & 0.445 & 80.5\% \\ & & EPISODE (unclipped) & 4.51 & 33.3\% \\ & & FedAvg & 1.56 & 32.8\% \\ & & SCAFFOLD & 1.23 & 34.1\% \\ \midrule 4 & 30\% & CELGC & 0.564 & 77.2\% \\ & & EPISODE & \textbf{0.399} & \textbf{81.7\%} \\ & & SCAFFOLD (clipped) & 0.440 & 80.7\% \\ & & EPISODE (unclipped) & 9.82 & 33.0\% \\ & & FedAvg & 1.14 & 32.8\% \\ & & SCAFFOLD & 4.39 & 32.8\% \\ \midrule 8 & 30\% & CELGC & 0.539 & 78.0\% \\ & & EPISODE & \textbf{0.431} & \textbf{81.1\%} \\ & & SCAFFOLD (clipped) & 0.512 & 77.1\% \\ & & EPISODE (unclipped) & 8.02 & 34.3\% \\ & & FedAvg & 1.25 & 32.7\% \\ & & SCAFFOLD & 10.86 & 32.8\% \\ \midrule 16 & 30\% & CELGC & 0.525 & 78.3\% \\ & & EPISODE & \textbf{0.534} & \textbf{77.8\%} \\ & & SCAFFOLD (clipped) & 0.597 & 75.7\% \\ & & EPISODE (unclipped) & 4.71 & 33.0\% \\ & & FedAvg & 3.45 & 32.7\% \\ & & SCAFFOLD & 4.87 & 32.7\% \\ \midrule 4 & 50\% & CELGC & 0.490 & 79.1\% \\ & & EPISODE & \textbf{0.385} & \textbf{82.1\%} \\ & & SCAFFOLD (clipped) & 0.436 & 80.7\% \\ & & EPISODE (unclipped) & 9.08 & 34.3\% \\ & & FedAvg & 4.81 & 32.8\% \\ & & SCAFFOLD & 2.40 & 32.9\% \\ \midrule 4 & 10\% & CELGC & 0.667 & 73.3\% \\ & & EPISODE & \textbf{0.404} & \textbf{81.5\%} \\ & & SCAFFOLD (clipped) & 0.438 & 80.7\% \\ & & EPISODE (unclipped) & 8.54 & 33.0\% \\ & & FedAvg & 1.89 & 34.3\% \\ & & SCAFFOLD & 5.61 & 34.3\% \\ \bottomrule \end{tabular}} \caption{Results for the ablation study of EPISODE on the SNLI dataset.} \label{tab:snli_ablation} \end{table} \begin{figure}[tb] \centering \subfigure{\includegraphics[width=0.45\linewidth]{plots/snli_4_0.7_train_losses.pdf}} \subfigure{\includegraphics[width=0.45\linewidth]{plots/snli_4_0.7_test_accuracies.pdf}} \caption{Training curves for the SNLI ablation study under the setting $I = 4$ and $s = 30\%$.
Note that the training losses of EPISODE (unclipped), FedAvg, and SCAFFOLD are not visible, since they are orders of magnitude larger than those of the other algorithms.} \label{fig:snli_ablation_curves} \end{figure} From these results, we can conclude that both components of EPISODE (periodically resampled corrections and episodic clipping) contribute to EPISODE's improved performance. \begin{itemize} \item Replacing periodically resampled corrections with SCAFFOLD-style corrections yields the variant SCAFFOLD (clipped). In every setting, SCAFFOLD (clipped) performs slightly better than CELGC, but still worse than EPISODE. This corroborates the intuition that SCAFFOLD-style corrections use slightly outdated information compared to that of EPISODE, and this information lag caused worse performance in this ablation study. \item On the other hand, clipping is essential for EPISODE to avoid divergence. By removing clipping from EPISODE, we obtain the variant EPISODE (unclipped), which fails to learn entirely. EPISODE (unclipped) never reached a test accuracy higher than $35\%$, which is barely higher than random guessing, since SNLI is a 3-way classification problem. In summary, both periodically resampled corrections and episodic clipping contribute to the improved performance of EPISODE over baselines. \end{itemize} In addition, FedAvg and SCAFFOLD show similar \emph{divergence} behavior to EPISODE (unclipped). None of these three algorithms employ any clipping or normalization in their updates, and consequently none of these algorithms are able to surpass random performance on SNLI. Finally, although NaiveParallelClip appears to be the best performing algorithm from this table, it requires more wall-clock time than any other algorithm due to its frequent communication. For a comparison of the running time results, see Table \ref{tab:snli_runtime} in Appendix \ref{appen:running_time}.
\section{New Experiments on Federated Learning Benchmark: Sentiment140 Dataset} \label{app:LEAF} To evaluate EPISODE on a real-world federated dataset, we provide additional experiments on the Sentiment140 dataset from the LEAF benchmark \citep{caldas2018leaf}. Sentiment140 is a sentiment classification problem on a dataset of tweets, where each tweet is labeled as positive or negative. For this setting, we follow the experimental setup of \cite{li2020federatedprox}: training a 2-layer LSTM network with 256 hidden units on the cross-entropy classification loss. We also follow their data preprocessing steps to eliminate users with a small number of data points and split into training and testing sets. We perform an additional step to simulate the cross-silo federated environment \citep{kairouz2019advances} by partitioning the original Sentiment140 users into eight groups (i.e., eight machines). To simulate heterogeneity between silos, we partition the users based on a non-i.i.d. sampling scheme similar to that of our SNLI experiments. Specifically, given a silo similarity parameter $s$, each silo is allocated $s\%$ of its users by uniform sampling, and $(100-s)\%$ of its users from a pool of users sorted by the proportion of positive tweets in their local dataset. This way, when $s$ is small, different silos will have very different proportions of positive/negative samples in their respective datasets. We evaluate NaiveParallelClip, CELGC, and EPISODE in this cross-silo environment with $I=4$ and $s \in \{0, 10, 20\}$. We tuned the learning rate $\eta$ and the clipping parameter $\gamma$ with grid search over the values $\eta \in \{0.01, 0.03, 0.1, 0.3, 1.0\}$ and $\gamma \in \{0.01, 0.03, 0.1, 0.3, 1.0\}$. Results are plotted in Figures \ref{fig:sent140_learning_curves_steps} and \ref{fig:sent140_learning_curves_time}.
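The silo partitioning scheme described above can be sketched as follows. This is a hedged illustration rather than the exact preprocessing script: `pos_frac` maps each Sentiment140 user to the proportion of positive tweets in their local data, `s` is given here as a fraction rather than a percentage, and the function name is ours.

```python
import random

def partition_users(pos_frac, num_silos=8, s=0.2, seed=0):
    """Assign each user to one of num_silos silos: a fraction s of users is
    dealt out uniformly at random, and the remaining (1 - s) fraction is
    split into contiguous chunks after sorting users by their proportion of
    positive tweets, so small s yields strongly label-skewed silos."""
    rng = random.Random(seed)
    users = sorted(pos_frac)  # fix a deterministic order, then shuffle
    rng.shuffle(users)
    n_iid = int(len(users) * s)
    iid_pool = users[:n_iid]
    skew_pool = sorted(users[n_iid:], key=lambda u: pos_frac[u])
    silos = [[] for _ in range(num_silos)]
    for k, u in enumerate(iid_pool):          # uniform part: round-robin
        silos[k % num_silos].append(u)
    n = len(skew_pool)
    for k in range(num_silos):                # skewed part: sorted chunks
        silos[k].extend(skew_pool[k * n // num_silos:(k + 1) * n // num_silos])
    return silos
```

With $s = 0$ every silo receives a contiguous slice of the sorted pool, so the positive/negative proportions differ maximally across silos.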
\begin{figure}[tb] \centering \subfigure[$I=4$, $s=20\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_0.8_steps.pdf}} \subfigure[$I=4$, $s=10\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_0.9_steps.pdf}} \subfigure[$I=4$, $s=0\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_1.0_steps.pdf}} \caption{Training curves for all Sentiment140 experiments over training steps.} \label{fig:sent140_learning_curves_steps} \end{figure} \begin{figure}[tb] \centering \subfigure[$I=4$, $s=20\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_0.8_time.pdf}} \subfigure[$I=4$, $s=10\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_0.9_time.pdf}} \subfigure[$I=4$, $s=0\%$]{\includegraphics[width=0.32\linewidth]{plots/sent140_4_1.0_time.pdf}} \caption{Training curves for all Sentiment140 experiments over running time.} \label{fig:sent140_learning_curves_time} \end{figure} Overall, EPISODE is able to nearly match the training loss and testing accuracy of NaiveParallelClip while requiring significantly \emph{less running time}, and the performance of EPISODE does not degrade as the client data similarity $s$ decreases. Figure \ref{fig:sent140_learning_curves_steps} shows that, with respect to the number of training steps, EPISODE remains competitive with NaiveParallelClip and outperforms CELGC. In particular, the gap between EPISODE and CELGC grows as the client data similarity decreases, showing that EPISODE can adapt to data heterogeneity. On the other hand, Figure \ref{fig:sent140_learning_curves_time} shows that, with a fixed time budget, EPISODE is able to reach lower training loss and higher testing accuracy than both CELGC and NaiveParallelClip in all settings. This demonstrates the superior performance of EPISODE in practical scenarios. 
\section{Introduction} \vspace*{-0.1in} Gradient clipping~\citep{pascanu2012understanding,pascanu2013difficulty} is a well-known strategy to improve the training of deep neural networks with the exploding gradient issue, such as Recurrent Neural Networks (RNN)~\citep{rumelhart1986learning,elman1990finding,werbos1988generalization} and Long Short-Term Memory (LSTM)~\citep{hochreiter1997long}. Although it is a widely used strategy, the formal analysis of gradient clipping in deep neural networks under the framework of nonconvex optimization began only recently~\citep{zhang2019gradient,zhang2020improved,cutkosky2021high,liu2022communication}. In particular,~\citet{zhang2019gradient} showed empirically that the gradient Lipschitz constant scales linearly in terms of the gradient norm when training certain neural networks such as AWD-LSTM~\citep{merity2018regularizing}, introduced the relaxed smoothness condition (i.e., $(L_0,L_1)$-smoothness\footnote{The formal definition of $(L_0,L_1)$-smoothness is illustrated in Definition~\ref{def:L0L1_smooth}.}), and proved that clipped gradient descent converges faster than gradient descent with any fixed step size. Later on,~\citet{zhang2020improved} provided tighter complexity bounds for the gradient clipping algorithm. Federated Learning (FL)~\citep{mcmahan2016communication} is an important distributed learning paradigm in which a single model is trained collaboratively under the coordination of a central server without revealing client data\footnote{In this paper, we use the terms ``client'' and ``machine'' interchangeably.}. FL has two critical features: heterogeneous data and limited communication. Although there is a vast literature on FL (see~\citep{kairouz2019advances} and references therein), the theoretical and algorithmic understanding of gradient clipping algorithms for training deep neural networks in the FL setting remains nascent.
To the best of our knowledge, \citet{liu2022communication} is the only work that has considered a communication-efficient distributed gradient clipping algorithm under the nonconvex and relaxed smoothness conditions in the FL setting. In particular,~\citet{liu2022communication} proved that their algorithm achieves linear speedup in terms of the number of clients and reduced communication rounds. Nevertheless, their algorithm and analysis are only applicable to the case of homogeneous data. In addition, the analyses of the stochastic gradient clipping algorithms in both the single-machine~\citep{zhang2020improved} and multiple-machine~\citep{liu2022communication} settings require strong distributional assumptions on the stochastic gradient noise\footnote{\citet{zhang2020improved} requires an explicit lower bound for the stochastic gradient noise, and~\citet{liu2022communication} requires that the distribution of the stochastic gradient noise be unimodal and symmetric around its mean.}, which may not hold in practice. \begin{table}[t] \caption{Communication complexity ($R$) and largest number of skipped communications ($I_{\max}$) that guarantees linear speedup for different methods to find an $\epsilon$-stationary point (defined in Definition \ref{def:eps_stationary}). ``Single'' means single machine, $N$ is the number of clients, $I$ is the number of skipped communications, $\kappa$ is the quantity representing the heterogeneity, $\Delta= f({\bm{x}}_0) -\min_{{\bm{x}}}f({\bm{x}})$, and $\sigma^2$ is the variance of the stochastic gradients. Iteration complexity ($T$) is the product of communication complexity and the number of skipped communications (i.e., $T=RI$). Best iteration complexity $T_{\min}$ denotes the minimum value of $T$ the algorithm can achieve by adjusting $I$.
Linear speedup means the iteration complexity is divided by $N$ compared with the single-machine baseline: in our case it means $T={\mathcal{O}}(\frac{\Delta L_0\sigma^2}{N\epsilon^4})$ iteration complexity. } \label{table:comparison} \resizebox{\textwidth}{!}{\begin{tabular}{@{}ccccc@{}} \toprule Method & Setting & Communication Complexity ($R$) & Best Iteration Complexity ($T_{\min}$) & \makecell{ Largest $I$ to guarantee\\ linear speedup ($I_{\max}$)} \\ \midrule \makecell{Local SGD \\~\citep{yu2019parallel}} & \makecell{Heterogeneous, \\ $L$-smooth} & ${\mathcal{O}}\LRs{\frac{\Delta L\sigma^{2}}{N I \epsilon^{4}}+\frac{\Delta L\kappa^{2} N I}{\sigma^2\epsilon^2}+\frac{\Delta L N}{\epsilon^2}}$ & ${\mathcal{O}}(\frac{\Delta L\sigma^2}{N\epsilon^4})$ & ${\mathcal{O}}\LRs{\frac{\sigma^2}{\kappa N \epsilon}}$ \\\hline \makecell{SCAFFOLD\\~\citep{karimireddy2020scaffold}}& \makecell{Heterogeneous,\\ $L$-smooth} & ${\mathcal{O}}\LRs{\frac{\Delta L\sigma^{2}}{N I \epsilon^{4}}+\frac{\Delta L}{\epsilon^2}}$ & ${\mathcal{O}}(\frac{\Delta L\sigma^2}{N\epsilon^4})$ &${\mathcal{O}}\LRs{\frac{\sigma^2}{N\epsilon^2}}$ \\\hline \makecell{Clipped SGD\\~\citep{zhang2019adaptive}} & \makecell{Single,\\ $(L_0,L_1)$-smooth} & ${\mathcal{O}}\LRs{\frac{\left(\Delta+\left(L_{0}+L_{1} \sigma\right) \sigma^{2}+\sigma L_{0}^{2} / L_{1}\right)^{2}}{\epsilon^4}}$ & ${\mathcal{O}}\LRs{\frac{\left(\Delta+\left(L_{0}+L_{1} \sigma\right) \sigma^{2}+\sigma L_{0}^{2} / L_{1}\right)^{2}}{\epsilon^4}}$ & N/A \\\hline \makecell{Clipping Framework\\~\citep{zhang2020improved}} & \makecell{Single, \\ $(L_0,L_1)$-smooth} & ${\mathcal{O}}\left(\frac{\Delta L_{0} \sigma^{2}}{\epsilon^{4}}\right)$& ${\mathcal{O}}\left(\frac{\Delta L_{0} \sigma^{2}}{\epsilon^{4}}\right)$ & N/A \\\hline \makecell{CELGC \\~\citep{liu2022communication}} & \makecell{Homogeneous, \\ $(L_0,L_1)$-smooth} & ${\mathcal{O}}\left(\frac{\Delta L_{0} \sigma^2}{ N I\epsilon^{4}}\right)$ & ${\mathcal{O}}(\frac{\Delta
L_0\sigma^2}{N\epsilon^4})$ & ${\mathcal{O}}\left(\frac{\sigma}{N\epsilon}\right)$ \\\hline \makecell{EPISODE\\(this work)} & \makecell{Heterogeneous,\\ $(L_0,L_1)$-smooth} & ${\mathcal{O}}\LRs{\frac{\Delta L_0 \sigma^2}{N I\epsilon^4}+\frac{\Delta (L_0 + L_1(\kappa + \sigma))}{\epsilon^2}\LRs{1 + \frac{\sigma}{\epsilon}}}$ & ${\mathcal{O}}(\frac{\Delta L_0\sigma^2}{N\epsilon^4})$ & ${\mathcal{O}}\LRs{\frac{L_0 \sigma^2}{(L_0 + L_1(\kappa + \sigma))(1 + \frac{\sigma}{\epsilon})N \epsilon^2}}$ \\ \bottomrule \end{tabular}} \vspace*{-0.2in} \end{table} In this work, we introduce a provably computation- and communication-efficient gradient clipping algorithm for nonconvex and relaxed-smooth functions in \textbf{the general FL setting (i.e., heterogeneous data, limited communication)} and \textbf{without any distributional assumptions on the stochastic gradient noise}. Compared with previous work on gradient clipping~\citep{zhang2019gradient,zhang2020improved,cutkosky2020momentum,liu2022communication} and FL with heterogeneous data~\citep{li2020federated,karimireddy2020scaffold}, our algorithm design relies on two novel techniques: \textit{episodic gradient clipping} and \textit{periodic resampled corrections}. In a nutshell, at the beginning of each communication round, the algorithm resamples each client's stochastic gradient; this information is used to decide whether to apply clipping in the current round (i.e., \textit{episodic gradient clipping}), and to perform local corrections to each client's update (i.e., \textit{periodic resampled corrections}). These techniques are very different from those in previous work on gradient clipping. Specifically, (1) in traditional gradient clipping~\citep{pascanu2012understanding,zhang2019gradient,zhang2020improved,liu2022communication}, whether or not to apply the clipping operation is determined only by the norm of the client's current stochastic gradient.
Instead, we use the norm of the global objective's stochastic gradient (resampled at the beginning of the round) to determine whether or not clipping will be applied throughout the entire communication round. (2) Different from~\citet{karimireddy2020scaffold}, which uses historical gradient information from the previous round to perform corrections, our algorithm utilizes the resampled gradient to correct each client's local update towards the global gradient, which mitigates the effect of data heterogeneity. Notice that, under the relaxed smoothness setting, the gradient may change quickly around a point at which the gradient norm is large. Therefore, our algorithm treats a small gradient as more ``reliable'' and confidently applies the unclipped corrected local updates; on the contrary, the algorithm regards a large gradient as less ``reliable'' and in this case clips the corrected local updates. Our major contributions are summarized as follows. \begin{itemize} \item We introduce EPISODE, the very first algorithm for optimizing nonconvex and $(L_0,L_1)$-smooth functions in the general FL setting with heterogeneous data and limited communication. The algorithm design introduces novel techniques, including episodic gradient clipping and periodic resampled corrections. To the best of our knowledge, these techniques are introduced here for the first time and are crucial to the algorithm design. \item Under the nonconvex and relaxed smoothness condition, we prove that the EPISODE algorithm achieves linear speedup in the number of clients and reduced communication rounds in the heterogeneous data setting, without any distributional assumptions on the stochastic gradient noise.
In addition, we show that the degenerate case of EPISODE matches state-of-the-art complexity results under weaker assumptions\footnote{We prove that the degenerate case of our analysis (e.g., homogeneous data) achieves the same iteration and communication complexity, but without the requirement of unimodal and symmetric stochastic gradient noise as in \citet{liu2022communication}. Also, our analysis is unified over any noise level of the stochastic gradient, and does not require an explicit lower bound for the stochastic gradient noise as in the analysis of \cite{zhang2020improved}.}. Detailed complexity results and a comparison with other relevant algorithms are shown in Table~\ref{table:comparison}. \item We conduct experiments on several heterogeneous medium- and large-scale datasets with different deep neural network architectures, covering a synthetic objective, text classification, and image classification. We show that the performance of the EPISODE algorithm is consistent with our theory, and that it consistently outperforms several strong baselines in FL. \end{itemize} \vspace*{-0.1in} \section{Related Work} \vspace*{-0.1in} \paragraph{Gradient Clipping} Gradient clipping is a standard technique in the optimization literature for solving convex/quasiconvex problems~\citep{ermoliev1988stochastic,nesterov1984minimization,shor2012minimization,hazan2015beyond,mai2021stability,gorbunov2020stochastic}, nonconvex smooth problems~\citep{levy2016power,cutkosky2021high}, and nonconvex distributionally robust optimization~\citep{jin2021non}. \citet{menon2019can} showed that gradient clipping can help mitigate label noise. Gradient clipping is also a well-known strategy for achieving differential privacy~\citep{abadi2016deep,mcmahan2017learning,andrew2021differentially,zhang2021understanding}.
In the deep learning literature, gradient clipping is employed to avoid the exploding gradient problem when training certain deep neural networks such as recurrent neural networks or long short-term memory networks~\citep{pascanu2012understanding,pascanu2013difficulty} and language models~\citep{gehring2017convolutional,peters2018deep,merity2018regularizing}. \citet{zhang2019gradient} initiated the study of gradient clipping for nonconvex and relaxed smooth functions, and \citet{zhang2020improved} provided an improved analysis over~\citet{zhang2019gradient}. However, none of these works apply to the general FL setting with nonconvex and relaxed smooth functions. \paragraph{Federated Learning} FL was proposed by~\citet{mcmahan2016communication} to enable large-scale distributed learning while keeping client data decentralized to protect user privacy. \citet{mcmahan2016communication} designed the FedAvg algorithm, which allows multiple steps of gradient updates before communication. This algorithm is also known as local SGD~\citep{stich2018local,lin2018don,wang2018cooperative,yu2019parallel}. The local SGD algorithm and its variants have been analyzed in the convex setting~\citep{stich2018local,stich2018sparsified,dieuleveut2019communication,khaled2020tighter,li2020federated,karimireddy2020scaffold,woodworth2020minibatch,woodworth2020local,koloskova2020unified,yuan2021federated} and the nonconvex smooth setting~\citep{jiang2018linear,wang2018cooperative,lin2018don,basu2019qsparse,haddadpour2019local,yu2019parallel,yu_linear,li2020federated,karimireddy2020scaffold,reddi2020adaptive,zhang2020fedpd,koloskova2020unified}. Recently, in the stochastic convex optimization setting, several works compared local SGD and minibatch SGD in the homogeneous~\citep{woodworth2020local} and heterogeneous~\citep{woodworth2020minibatch} data settings, as well as their fundamental limits~\citep{woodworth2021min}.
For a comprehensive survey, we refer the readers to~\cite{kairouz2019advances,li2020federated} and references therein. The most relevant work to ours is~\cite{liu2022communication}, which introduced a communication-efficient distributed gradient clipping algorithm for nonconvex and relaxed smooth functions. However, their algorithm and analysis do not apply to the case of heterogeneous data considered in this paper. \vspace*{-0.1in} \section{Problem Setup and Preliminaries} \paragraph{Notations} In this paper, we use $\inprod{\cdot}{\cdot}$ and $\twonorm{\cdot}$ to denote the inner product and the Euclidean norm in the space ${\mathbb{R}}^d$. We use $\indicator{\cdot}$ to denote the indicator function. We let ${\mathcal{I}}_r$ be the set of iterations in the $r$-th round, that is, ${\mathcal{I}}_r = \{t_r,...,t_{r+1}-1\}$. The filtration generated by the random variables before step $t_r$ is denoted by ${\mathcal{F}}_r$. We also use $\mathbb{E}_r[\cdot]$ to denote the conditional expectation $\mathbb{E}[\cdot|{\mathcal{F}}_r]$. The number of clients is denoted by $N$ and the length of the communication interval is denoted by $I$, i.e., $|{\mathcal{I}}_r| = I$ for $r = 0,1,...,R$. Let $f_i({\bm{x}}) := \mathbb{E}_{\xi_i \sim \mathcal{D}_i}[F_i({\bm{x}}; \xi_i)]$ be the loss function on the $i$-th client for $i \in [N]$, where the local distribution $\mathcal{D}_i$ is unknown and may differ across $i \in [N]$. In the FL setting, we aim to minimize the following overall averaged loss function: \begin{equation} \label{eq:min_problem} f({\bm{x}}) := \frac{1}{N} \sum_{i=1}^N f_i({\bm{x}}). \end{equation} We focus on the case where each $f_i$ is non-convex, in which case it is NP-hard to find the global minimum of $f$. Instead, we consider finding an $\epsilon$-stationary point~\citep{ghadimi2013stochastic,zhang2020improved}.
\begin{definition}\label{def:eps_stationary} For a function $h:{\mathbb{R}}^d \to {\mathbb{R}}$, a point ${\bm{x}}\in {\mathbb{R}}^d$ is called $\epsilon$-stationary if $\twonorm{\nabla h({\bm{x}})}\leq \epsilon$. \end{definition} Most existing works in the non-convex FL literature \citep{yu2019linear,karimireddy2020scaffold} assume that each $f_i$ is $L$-smooth, i.e., $\twonorm{\nabla f_i({\bm{x}}) - \nabla f_i({\bm{y}})}\leq L\twonorm{{\bm{x}} - {\bm{y}}}$ for any ${\bm{x}}, {\bm{y}} \in {\mathbb{R}}^d$. However, it is shown in~\cite{zhang2019gradient} that $L$-smoothness may not hold for certain neural networks such as RNNs and LSTMs. $(L_0,L_1)$-smoothness in Definition \ref{def:L0L1_smooth} was proposed by \cite{zhang2019adaptive} and is strictly weaker than $L$-smoothness. Under this condition, the local smoothness of the objective can grow with the gradient scale. For AWD-LSTM \citep{merity2018regularizing}, empirical evidence of $(L_0, L_1)$-smoothness was observed in \cite{zhang2019adaptive}. \begin{definition}\label{def:L0L1_smooth} A second-order differentiable function $h:{\mathbb{R}}^d \to {\mathbb{R}}$ is $(L_0,L_1)$-smooth if $\twonorm{\nabla^2 h({\bm{x}})} \leq L_0 + L_1\twonorm{\nabla h({\bm{x}})}$ holds for any ${\bm{x}} \in {\mathbb{R}}^d$. \end{definition} Suppose we only have access to the stochastic gradient $\nabla F_i({\bm{x}};\xi)$ for $\xi\sim {\mathcal{D}}_i$ on each client. Next, we make the following assumptions on the objectives and stochastic gradients. \begin{assumption}\label{assume:object} Assume $f_i$ for $i \in [N]$ and $f$ defined in \eqref{eq:min_problem} satisfy: \begin{enumerate}[label=(\roman*)] \item $f_i$ is second-order differentiable and $(L_0,L_1)$-smooth. \item Let ${\bm{x}}^{*}$ be the global minimum of $f$ and ${\bm{x}}_0$ be the initial point. There exists some $\Delta > 0$ such that $f({\bm{x}}_0) - f({\bm{x}}^{*})\leq \Delta$.
\item For all ${\bm{x}} \in \mathbb{R}^d$, there exists some $\sigma \geq 0$ such that $\mathbb{E}_{\xi_i \sim \mathcal{D}_i}[\nabla F_i({\bm{x}}; \xi_i)] = \nabla f_i({\bm{x}})$ and $\|\nabla F_i({\bm{x}}; \xi_i) - \nabla f_i({\bm{x}})\| \leq \sigma$ almost surely. \item There exist some $\kappa \geq 0$ and $\rho \geq 1$ such that $\twonorm{\nabla f_i({\bm{x}})} \leq \kappa + \rho \twonorm{\nabla f({\bm{x}})}$ for any ${\bm{x}} \in {\mathbb{R}}^d$. \end{enumerate} \end{assumption} \textbf{Remark}: (i) and (ii) are standard in the non-convex optimization literature \citep{ghadimi2013stochastic}, and (iii) is a standard assumption in the $(L_0,L_1)$-smoothness setting \citep{zhang2019adaptive,zhang2020improved,liu2022communication}. (iv) is used to bound the difference between the gradient of each client's local loss and the gradient of the overall loss, which is commonly assumed in the FL literature with heterogeneous data~\citep{karimireddy2020scaffold}. When $\kappa = 0$ and $\rho = 1$, (iv) corresponds to the homogeneous setting. \vspace*{-0.1in} \section{Algorithm and Analysis} \vspace*{-0.1in} \subsection{Main Challenges and Algorithm Design} \paragraph{Main Challenges} We first illustrate why the prior local gradient clipping algorithm~\citep{liu2022communication} would not work in the heterogeneous data setting. \citet{liu2022communication} proposed the first communication-efficient local gradient clipping algorithm (CELGC) in the homogeneous setting for relaxed smooth functions, which can be interpreted as the clipping version of FedAvg. Let us consider a simple heterogeneous example with two clients in which CELGC fails. Denote $f_1(x)=\frac{1}{2}x^2+a_1x$ and $f_2(x)=\frac{1}{2}x^2+a_2x$ with $a_1=-\gamma-1$, $a_2=\gamma+2$, and $\gamma>1$. We know that the optimal solution for $f=\frac{f_1+f_2}{2}$ is $x_*=-\frac{a_1+a_2}{2}=-\frac{1}{2}$, and both $f_1$ and $f_2$ are $(L_0,L_1)$-smooth with $L_0=1$ and $L_1=0$.
Consider CELGC with communication interval $I=1$ (i.e., communication happens at every iteration), starting point $x_0=0$, $\eta = 1/L_0 = 1$, clipping threshold $\gamma$, and $\sigma = 0$. In this setting, after the first iteration, the model parameters on the two clients become $\gamma$ and $-\gamma$ respectively, so the averaged model parameter after communication returns to $0$. This means that the model parameter of CELGC remains stuck at $0$ indefinitely, demonstrating that CELGC cannot handle data heterogeneity. We then explain why the stochastic controlled averaging method (SCAFFOLD)~\citep{karimireddy2020scaffold} for heterogeneous data does not work in the relaxed smoothness setting. SCAFFOLD utilizes the client gradients from the previous round to construct correction terms which are added to each client's local update. Crucially, SCAFFOLD requires that the gradient be Lipschitz so that gradients from the previous round are good approximations of gradients in the current round with controllable errors. This technique is not applicable in the relaxed smoothness setting: the gradient may change dramatically, so historical gradients from the previous round are no longer good approximations of the current gradients due to potentially unbounded errors. \vspace*{-0.1in} \paragraph{Algorithm Design} To address the challenges brought by heterogeneity and relaxed smoothness, our idea is to clip the local updates similarly to how we would clip the global gradient (if we could access it). The detailed description of EPISODE is given in Algorithm \ref{alg:episode}. Specifically, we introduce two novel techniques: (1) Episodic gradient clipping. At the $r$-th round, EPISODE constructs a global indicator $\indicator{\|{\bm{G}}_r\|\leq\gamma/\eta}$ to determine whether to perform clipping in every local update during the round for all clients (line 6). (2) Periodic resampled corrections.
EPISODE resamples fresh gradients with \emph{constant batch size} at the beginning of each round (lines 3--5). In particular, at the beginning of the $r$-th round, EPISODE samples stochastic gradients evaluated at the current averaged global weight $\bar{\vx}_r$ on all clients to construct the control variate ${\bm{G}}_r$, which has two roles. The first is to construct the global clipping indicator according to $\|{\bm{G}}_r\|$ (line 10). The second is to correct the bias between the local and global gradients through the variate ${\bm{g}}_t^i$ in the local updates (line 10). \begin{algorithm}[tb] \caption{Episodic Gradient Clipping with Periodic Resampled Corrections (EPISODE)} \label{alg:episode} \begin{algorithmic}[1] \STATE Initialize ${\bm{x}}_0^i \gets {\bm{x}}_0$, $\bar{\vx}_0 \gets {\bm{x}}_0$. \FOR{$r = 0,1,...,R$} \FOR{$i \in [N]$} \STATE Sample $\nabla F_i(\bar{\vx}_{r};\widetilde{\xi}_{r}^i)$ where $\widetilde{\xi}_{r}^i \sim {\mathcal{D}}_i$, and update ${\bm{G}}_r^i \gets \nabla F_i(\bar{\vx}_{r};\widetilde{\xi}_{r}^i)$. \ENDFOR \STATE Update ${\bm{G}}_r = \frac{1}{N}\sum_{i=1}^N {\bm{G}}_r^i$. \FOR{$i\in [N]$} \FOR {$t = t_{r}, \ldots, t_{r+1}-1$} \STATE Sample $\nabla F_i({\bm{x}}_t^i; \xi_t^i)$, where $\xi_t^i \sim \mathcal{D}_i$, and compute ${\bm{g}}_t^i \gets \nabla F_i({\bm{x}}_t^i;\xi_t^i) - {\bm{G}}_r^i + {\bm{G}}_r$. \STATE ${\bm{x}}_{t+1}^i\gets {\bm{x}}_t^i - \eta {\bm{g}}_t^i \indicator{\twonorm{{\bm{G}}_r} \leq \gamma/\eta}- \gamma \frac{{\bm{g}}_t^i}{\twonorm{{\bm{g}}_t^i}} \indicator{\twonorm{{\bm{G}}_r} > \gamma/\eta}. $ \ENDFOR \ENDFOR \STATE Update $\bar{\vx}_{r+1} \gets \frac{1}{N}\sum_{i=1}^N {\bm{x}}_{t_{r+1}}^i$. \ENDFOR \end{algorithmic} \end{algorithm} \vspace*{-0.2in} \subsection{Main Results} \vspace*{-0.05in} \begin{theorem} \label{thm:main} Suppose Assumption \ref{assume:object} holds.
For any tolerance $\epsilon \leq \frac{3AL_0}{5 BL_1\rho}$, we choose the hyperparameters as $\eta \leq \min\LRl{\frac{1}{216 \Gamma I},\frac{\epsilon}{180 \Gamma I \sigma},\frac{N \epsilon^2}{16 AL_0 \sigma^2}}$ and $\gamma = \LRs{11\sigma + \frac{AL_0}{BL_1 \rho}}\eta$, where $\Gamma = AL_0 + BL_1 \kappa + BL_1 \rho \left(\sigma + \frac{\gamma}{\eta} \right)$. Then EPISODE satisfies $\frac{1}{R+1} \sum_{r=0}^R \mathbb{E}\left[\|\nabla f(\bar{\vx}_r)\|\right] \leq 3\epsilon$ as long as the number of communication rounds satisfies $R \geq \frac{4 \Delta}{\epsilon^2 \eta I}$. \end{theorem} \textbf{Remark 1:} The result in Theorem \ref{thm:main} holds for an arbitrary noise level, while the complexity bounds in the stochastic case of \citet{zhang2020improved,liu2022communication} both require $\sigma \geq 1$. In addition, this theorem automatically recovers the complexity results of~\cite{liu2022communication}, but does not require their symmetric and unimodal noise assumption. The improvement upon previous work comes from a better algorithm design, as well as a more careful analysis of the smoothness and the individual discrepancy in the non-clipped case (see Lemmas \ref{lemma:non_clipping_hessian} and \ref{lemma:non_clipping_discre_expec}). \textbf{Remark 2:} In Theorem \ref{thm:main}, when we choose $\eta = \min\LRl{\frac{1}{216 \Gamma I},\frac{\epsilon}{180 \Gamma I \sigma},\frac{N \epsilon^2}{16 AL_0 \sigma^2}}$, the total communication complexity to find an $\epsilon$-stationary point is no more than $ R = {\mathcal{O}}\LRs{\frac{\Delta}{\epsilon^2 \eta I}} = {\mathcal{O}}\LRs{\frac{\Delta (L_0 + L_1(\kappa + \sigma))}{\epsilon^2}\LRs{1 + \frac{\sigma}{\epsilon}} + \frac{\Delta L_0 \sigma^2}{N I \epsilon^4}}$. Next, we present some implications of this communication complexity.
\begin{enumerate} \item When $I \lesssim \frac{L_0 \sigma}{(L_0 + L_1(\kappa + \sigma))N \epsilon}$ and $\sigma \gtrsim \epsilon$, EPISODE has communication complexity ${\mathcal{O}}(\frac{\Delta L_0 \sigma^2}{N I \epsilon^4})$. In this case, EPISODE enjoys a better communication complexity than the naive parallel version of the algorithm in \citet{zhang2020improved}, namely ${\mathcal{O}}(\frac{\Delta L_0 \sigma^2}{N \epsilon^4})$. Moreover, the iteration complexity of EPISODE is $T = RI = {\mathcal{O}}(\frac{\Delta L_0 \sigma^2}{N \epsilon^4})$, which achieves \emph{linear speedup} w.r.t. the number of clients $N$. This matches the result of \citet{liu2022communication} in the homogeneous data setting. \item When $I \gtrsim \frac{L_0 \sigma}{(L_0 + L_1(\kappa + \sigma))N \epsilon}$ and $\sigma \gtrsim \epsilon$, the communication complexity of EPISODE is ${\mathcal{O}}(\frac{\Delta (L_0 + L_1(\kappa + \sigma)) \sigma}{\epsilon^3})$. This term does not appear in Theorem III of \citet{karimireddy2020scaffold}, but it appears here due to the difference in the construction of the control variates. In fact, the communication complexity of EPISODE is still lower than the naive parallel version of \citet{zhang2020improved} if the number of clients satisfies $N \lesssim \frac{L_0\sigma}{(L_0 + L_1(\kappa + \sigma))\epsilon}$. \item When $0<\sigma \lesssim \epsilon$, EPISODE has communication complexity ${\mathcal{O}}(\frac{\Delta (L_0 + L_1(\kappa + \sigma))}{\epsilon^2})$. Under this particular noise level, the algorithms in~\citet{zhang2020improved,liu2022communication} do not guarantee convergence because their analyses crucially rely on the fact that $\sigma\gtrsim \epsilon$. \item When $\sigma=0$, EPISODE has communication complexity ${\mathcal{O}}(\frac{\Delta(L_0+L_1\kappa)}{\epsilon^2})$.
This bound includes an additional constant $L_1 \kappa$ compared with the complexity results in the deterministic case~\citep{zhang2020improved}, which comes from data heterogeneity and infrequent communication. \end{enumerate} \vspace*{-0.1in} \subsection{Proof Sketch of Theorem \ref{thm:main}}\label{sec:proof_sketch:thm:main} Despite the recent work on gradient clipping in the homogeneous setting \citep{liu2022communication}, the analysis of Theorem \ref{thm:main} is highly nontrivial, since we need to cope with $(L_0, L_1)$-smoothness and heterogeneity simultaneously. In addition, we do not require a lower bound on $\sigma$ and allow for arbitrary $\sigma \geq 0$. The first step is to establish the descent inequality for the global loss function. According to the $(L_0, L_1)$-smoothness condition, if $\twonorm{\bar{\vx}_{r+1} - \bar{\vx}_r} \leq C/L_1$, then \begin{align} &\mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r) \right] \leq \mathbb{E}_r \left[(\indicator{{\mathcal{A}}_r} + \indicator{\bar{{\mathcal{A}}}_r}) \langle \nabla f(\bar{\vx}_r), \bar{\vx}_{r+1} - \bar{\vx}_r \rangle\right]\nonumber\\ &\qquad\qquad + \mathbb{E}_r \left[(\indicator{{\mathcal{A}}_r} + \indicator{\bar{{\mathcal{A}}}_r})\frac{AL_0 + BL_1 \|\nabla f(\bar{\vx}_r)\|}{2} \|\bar{\vx}_{r+1} - \bar{\vx}_r\|^2\right],\label{eq:obj_diff} \end{align} where ${\mathcal{A}}_r := \{\twonorm{{\bm{G}}_r} \leq \gamma/\eta\}$, $\bar{{\mathcal{A}}}_r$ is the complement of ${\mathcal{A}}_r$, and $A, B, C$ are constants defined in Lemma \ref{lemma:smooth_obj_descent}. To utilize the inequality \eqref{eq:obj_diff}, we need to verify that the distance between $\bar{\vx}_{r+1}$ and $\bar{\vx}_{r}$ is small almost surely. In the algorithm of \cite{liu2022communication}, clipping is performed in each iteration based on the magnitude of the current stochastic gradient, and hence the increment of each local weight is bounded by the clipping threshold $\gamma$.
For each client in EPISODE, whether to perform clipping is decided by the magnitude of ${\bm{G}}_r$ at the beginning of each round. Therefore, the techniques in \cite{liu2022communication} to bound the individual discrepancy cannot be applied to EPISODE. To address this issue, we introduce Lemma \ref{lemma:individual_dis}, which guarantees that we can apply the properties of relaxed smoothness (Lemma \ref{lemma:smooth_obj_descent} and \ref{lemma:smooth_grad_diff}) to all iterations in one round, in either case of clipping or non-clipping. \begin{lemma} \label{lemma:individual_dis} Suppose $2\eta I (AL_0 + BL_1\kappa + BL_1\rho(\sigma + \gamma/\eta)) \leq 1$ and $\max\LRl{2\eta I (2\sigma + \gamma/\eta),\ \gamma I}\leq \frac{C}{L_1}$. Then for any $i \in [N]$ and $t-1 \in {\mathcal{I}}_r$, it almost surely holds that \begin{equation}\label{eq:non-clipping_dis_as_bound} \indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}\leq 2\eta I\LRs{2\sigma + \gamma/\eta}\quad \text{and}\quad \indicator{\bar{{\mathcal{A}}}_r} \twonorm{{\bm{x}}_t^i - \bar{\vx}_r} \leq \gamma I. \end{equation} \end{lemma} Equipped with Lemma \ref{lemma:individual_dis}, the condition $\twonorm{\bar{\vx}_{r+1} - \bar{\vx}_r} \leq \frac{1}{N}\sum_{i=1}^N \twonorm{{\bm{x}}_{t_{r+1}}^i - \bar{\vx}_r} \leq C/L_1$ can hold almost surely with a proper choice of $\eta$. Then it suffices to bound the terms from \eqref{eq:obj_diff} in expectation under the events ${\mathcal{A}}_r$ and $\bar{{\mathcal{A}}}_r$ respectively. To deal with the discrepancy term $\mathbb{E}[\twonorm{{\bm{x}}_{t}^i - \bar{\vx}_r}^2]$ for $t-1 \in {\mathcal{I}}_r$, \cite{liu2022communication} directly uses the almost sure bound for both cases of clipping and non-clipping. Here we aim to obtain a more delicate bound in expectation for the non-clipping case. 
The following lemma, which is critical to obtain the unified bound from Theorem \ref{thm:main} under any noise level, gives an upper bound for the local smoothness of $f_i$ at ${\bm{x}}$. \begin{lemma}\label{lemma:non_clipping_hessian} Under the conditions of Lemma \ref{lemma:individual_dis}, for all ${\bm{x}}\in {\mathbb{R}}^d$ such that $\twonorm{{\bm{x}} - \bar{\vx}_r}\leq 2 \eta I \left(2\sigma + \gamma/\eta\right)$, the following inequality almost surely holds: \begin{align*} \indicator{{\mathcal{A}}_r} \twonorm{\nabla^2 f_i({\bm{x}})}\leq L_0 + L_1\LRs{\kappa + (\rho+1)\left(\gamma/\eta + 2\sigma\right)}. \end{align*} \end{lemma} From \eqref{eq:non-clipping_dis_as_bound}, we can see that all iterations in the $r$-th round satisfy the condition of Lemma \ref{lemma:non_clipping_hessian} almost surely. Hence we are guaranteed that each local loss $f_i$ is $L$-smooth over the iterations in this round under the event ${\mathcal{A}}_r$, where $L = L_0 + L_1(\kappa + (\rho+1)(\gamma/\eta + 2\sigma))$. In light of this, the following lemma gives a bound in expectation of the individual discrepancy. We denote $p_r = \mathbb{E}_r[\indicator{{\mathcal{A}}_r}]$. \begin{lemma}\label{lemma:non_clipping_discre_expec} Under the conditions of Lemma \ref{lemma:individual_dis}, for any $i \in [N]$ and $t-1 \in {\mathcal{I}}_r$, we have \begin{align} \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2}&\leq 36p_r I^2\eta^2\twonorm{\nabla f(\bar{\vx}_r)}^2 + 126p_r I^2 \eta^2 \sigma^2,\label{eq:drift_expectation_bound_quadratic_1}\\ \mathbb{E}_r\LRm{\indicator{{\mathcal{A}}_r}\twonorm{{\bm{x}}_t^i - \bar{\vx}_r}^2}&\leq 18p_rI^2\eta \gamma \twonorm{\nabla f(\bar{\vx}_r)} + 18p_r I^2\eta^2 \LRs{\gamma\sigma/\eta + 5\sigma^2}. 
\label{eq:drift_expectation_bound_linear_1} \end{align} \end{lemma} It is worth noting that the bound in \eqref{eq:drift_expectation_bound_quadratic_1} involves a quadratic term of $\twonorm{\nabla f(\bar{\vx}_r)}$, whereas it is linear in \eqref{eq:drift_expectation_bound_linear_1}. The role of the linear bound is to deal with $\indicator{{\mathcal{A}}_r}\twonorm{\nabla f(\bar{\vx}_r)}\twonorm{\bar{\vx}_{r+1} - \bar{\vx}_r}^2$ from the descent inequality \eqref{eq:obj_diff}, since directly substituting \eqref{eq:drift_expectation_bound_quadratic_1} would result in a cubic term that is hard to analyze. With Lemmas \ref{lemma:individual_dis}, \ref{lemma:non_clipping_hessian} and \ref{lemma:non_clipping_discre_expec}, we obtain the following descent inequality. \begin{lemma} \label{lemma:descent} Under the conditions of Lemma \ref{lemma:individual_dis}, let $\Gamma = AL_0 + BL_1(\kappa+ \rho (\gamma/\eta+\sigma))$. Then it holds for each $0\leq r\leq R-1$ that \begin{align}\label{eq:final_descent_ineuqality} &\mathbb{E}_r \left[ f(\bar{\vx}_{r+1}) - f(\bar{\vx}_r)\right] \leq \mathbb{E}_r \LRm{\indicator{{\mathcal{A}}_r} V(\bar{\vx}_r)} + \mathbb{E}_r \LRm{\indicator{\bar{{\mathcal{A}}}_r} U(\bar{\vx}_r)}, \end{align} where the definitions of $V(\bar{\vx}_r)$ and $U(\bar{\vx}_r)$ are given in Appendix \ref{proof:lemma:descent}. \end{lemma} The detailed proof of Lemma \ref{lemma:descent} is deferred to Appendix \ref{proof:lemma:descent}. With this lemma, the descent inequality is divided into $V(\bar{\vx}_r)$ (the objective value decrease during the non-clipping rounds) and $U(\bar{\vx}_r)$ (the objective value decrease during the clipping rounds). Plugging in the choices of $\eta$ and $\gamma$ yields \begin{align}\label{eq:UV_bound} \max \left\{ U(\bar{\vx}_r), V(\bar{\vx}_r) \right\} \leq -\frac{1}{4}\epsilon \eta I \twonorm{\nabla f(\bar{\vx}_r)} + \frac{1}{2}\epsilon^2 \eta I.
\end{align} The conclusion of Theorem \ref{thm:main} can then be obtained by substituting \eqref{eq:UV_bound} into \eqref{eq:final_descent_ineuqality} and summing over $r$. \vspace*{-0.15in} \section{Experiments} \label{sec:experiments} \vspace*{-0.1in} In this section, we present an empirical evaluation of EPISODE to validate our theory. We present results in the heterogeneous FL setting on three diverse tasks: a synthetic optimization problem satisfying $(L_0, L_1)$-smoothness, natural language inference on the SNLI dataset \citep{snli:emnlp2015}, and ImageNet classification \citep{deng2009imagenet}. We compare EPISODE against FedAvg \citep{mcmahan2016communication}, SCAFFOLD \citep{karimireddy2020scaffold}, CELGC \citep{liu2022communication}, and a naive distributed algorithm which we refer to as Naive Parallel Clip\footnote{Naive Parallel Clip uses the globally averaged stochastic gradient obtained from synchronization at every iteration to run SGD with gradient clipping on the global objective.}. We include additional experiments on the CIFAR-10 dataset~\citep{krizhevsky2009learning} in Appendix \ref{appen:cifar}, running time results in Appendix~\ref{appen:running_time}, an ablation study in Appendix~\ref{append:ablation}, and new experiments on federated learning benchmark datasets in Appendix~\ref{app:LEAF}. \vspace*{-0.05in} \subsection{Setup} \vspace*{-0.05in} All non-synthetic experiments were implemented with PyTorch \citep{paszke2019pytorch} and run on a cluster with eight NVIDIA Tesla V100 GPUs. Since SNLI, CIFAR-10, and ImageNet are centralized datasets, we follow the non-i.i.d. partitioning protocol in \citep{karimireddy2020scaffold} to split each dataset into heterogeneous client datasets with varying label distributions. Specifically, for a similarity parameter $s \in [0, 100]$, each client's local dataset is composed of two parts. The first $s\%$ consists of i.i.d.
samples from the complete dataset, and the remaining $(100-s)\%$ of the data is sorted by label. \paragraph{Synthetic} To demonstrate the behavior of EPISODE and baselines under $(L_0, L_1)$-smoothness, we consider a simple minimization problem in a single variable. Here we have $N=2$ clients with: \begin{align*} f_1(x) = x^4 - 3x^3 + Hx^2 + x, \quad f_2(x) = x^4 - 3x^3 - 2Hx^2 + x, \end{align*} where the parameter $H$ controls the heterogeneity between the two clients. Notice that $f_1$ and $f_2$ satisfy $(L_0, L_1)$-smoothness but not traditional $L$-smoothness. \begin{proposition}\label{proposition:H_kappa} For any $x \in {\mathbb{R}}$ and $i=1,2$, it holds that $\twonorm{\nabla f_i(x)} \leq 2\twonorm{\nabla f(x)} + \kappa(H)$, where $\kappa(H) < \infty$ and is a positive increasing function of $H$ for $H \geq 1$. \end{proposition} According to Proposition \ref{proposition:H_kappa}, Assumption \ref{assume:object}(iv) will be satisfied with $\rho = 2$ and $\kappa = \kappa(H)$, where $\kappa(H)$ is an increasing function of $H$. The proof of this proposition is deferred to Appendix \ref{proof:proposition:H_kappa}. \paragraph{SNLI} Following \cite{conneau2017supervised}, we train a BiRNN network for 25 epochs using the multi-class hinge loss and a batch size of 64 on each worker. The network is composed of a one-layer BiRNN encoder with hidden size 2048 and max pooling, and a three-layer fully connected classifier with hidden size 512. The BiRNN encodes a sentence (represented as a sequence of GloVe vectors \citep{pennington2014glove}), and the classifier predicts the relationship of two encoded sentences as either entailment, neutral, or contradiction. For more hyperparameter information, see Appendix \ref{appen:SNLI}. To determine the effects of infrequent communication and data heterogeneity on the performance of each algorithm, we vary $I \in \{2, 4, 8, 16\}$ and $s \in \{10\%, 30\%, 50\%\}$. We compare EPISODE, CELGC, and the Naive Parallel Clip.
Note that the training process diverged when using SCAFFOLD, likely due to a gradient explosion issue, since SCAFFOLD does not use gradient clipping. \paragraph{ImageNet} Following \cite{goyal2017accurate}, we train a ResNet-50~\citep{he2016deep} for 90 epochs using the cross-entropy loss, a batch size of 32 for each worker, clipping parameter $\gamma = 1.0$, momentum with coefficient $0.9$, and weight decay with coefficient $5 \times 10^{-5}$. We initially set the learning rate $\eta = 0.1$ and decay it by a factor of 0.1 at epochs $30$, $60$, and $80$. To analyze the effect of data heterogeneity in this setting, we fix $I = 64$ and vary $s \in \{50\%, 60\%, 70\%\}$. Similarly, to analyze the effect of infrequent communication, we fix $s = 60\%$ and vary $I \in \{64, 128\}$. We compare the performance of FedAvg, CELGC, EPISODE, and SCAFFOLD. \vspace*{-0.1in} \subsection{Results} \vspace*{-0.05in} \paragraph{Synthetic} Figure \ref{fig:synthetic_trajectory} in Appendix \ref{appen:synthetic} shows the objective value throughout training, where the heterogeneity parameter $H$ varies over $\{1, 2, 4, 8\}$. CELGC exhibits very slow optimization due to the heterogeneity across clients: as $H$ increases, the optimization progress becomes progressively slower. In contrast, EPISODE maintains fast convergence as $H$ varies. We can also see that EPISODE converges to the minimum of the global loss, while CELGC fails to do so for larger $H$. \begin{figure}[tb] \begin{center} \subfigure[Effect of $I$ (Epochs)]{ \includegraphics[width=0.31\linewidth]{plots/effect_of_I_epochs.pdf} \label{fig:snli_I_epochs} } \subfigure[Effect of $I$ (Rounds)]{ \includegraphics[width=0.31\linewidth]{plots/effect_of_I_rounds.pdf} \label{fig:snli_I_rounds} } \subfigure[Effect of $s$ (Epochs)]{ \includegraphics[width=0.31\linewidth]{plots/effect_of_kappa.pdf} \label{fig:snli_kappa} } \end{center} \caption{ Training loss and testing accuracy on SNLI.
The style of each curve (solid, dashed, dotted) corresponds to the algorithm, while the color corresponds to either the communication interval $I$ (for (a) and (b)) or the client data similarity $s$ (for (c)). \textbf{(a), (b)} Effect of varying $I$ with $s = 30\%$, plotted against (a) epochs and (b) communication rounds. \textbf{(c)} Effect of varying $s$ with $I = 4$. } \label{fig:snli} \end{figure} \paragraph{SNLI} Results for the SNLI dataset are shown in Figure \ref{fig:snli}. To demonstrate the effect of infrequent communication, Figures \ref{fig:snli_I_epochs} and \ref{fig:snli_I_rounds} show results for EPISODE, CELGC, and Naive Parallel Clip as the communication interval $I$ varies (with fixed $s = 30\%$). After 25 epochs, the test accuracy of EPISODE nearly matches that of Naive Parallel Clip for all $I \leq 8$, while CELGC lags 2--3\% behind Naive Parallel Clip for all values of $I$. Also, EPISODE nearly matches the test accuracy of Naive Parallel Clip while communicating up to $8$ times less frequently. Lastly, EPISODE requires significantly fewer communication rounds to reach the same training loss as CELGC. For example, EPISODE with $I = 4$, $s = 30\%$ takes less than $5000$ rounds to reach a training loss of $0.4$, while CELGC does not reach $0.4$ during the entirety of training with any $I$. To demonstrate the effect of client data heterogeneity, Figure \ref{fig:snli_kappa} shows results for varying values of $s$ (with fixed $I = 4$). Here we can see that EPISODE is resilient against data heterogeneity: even with client similarity as low as $s = 10\%$, the performance of EPISODE is the same as for $s = 50\%$. Also, the testing accuracy of EPISODE with $s = 10\%$ is nearly identical to that of the Naive Parallel Clip. On the other hand, the performance of CELGC drastically worsens with more heterogeneity: even with $s = 50\%$, the training loss of CELGC is significantly worse than that of EPISODE with $s = 10\%$.
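The Naive Parallel Clip reference point used above is simple enough to sketch in a few lines. The toy below runs it on the synthetic two-client objective from the setup; the clipping rule $\min(\eta, \gamma/\|g\|)\,g$, the deterministic gradients, and taking the global objective as $f = (f_1+f_2)/2$ are assumptions of this sketch, not necessarily the exact update analyzed in the paper.

```python
# Toy reproduction of the synthetic two-client problem with a
# Naive-Parallel-Clip-style update: average the client gradients at
# every iteration, then take one clipped gradient step.

def grad_f1(x, H):
    # f1(x) = x^4 - 3x^3 + H x^2 + x
    return 4 * x**3 - 9 * x**2 + 2 * H * x + 1

def grad_f2(x, H):
    # f2(x) = x^4 - 3x^3 - 2H x^2 + x
    return 4 * x**3 - 9 * x**2 - 4 * H * x + 1

def naive_parallel_clip(x0, H, eta=0.01, gamma=1.0, iters=2000):
    x = x0
    for _ in range(iters):
        g = 0.5 * (grad_f1(x, H) + grad_f2(x, H))  # synchronize and average
        step = min(eta, gamma / (abs(g) + 1e-12))  # clipped effective stepsize
        x -= step * g
    return x

x_final = naive_parallel_clip(x0=0.0, H=8.0)
g_final = 0.5 * (grad_f1(x_final, 8.0) + grad_f2(x_final, 8.0))
print(x_final, g_final)  # settles at a stationary point of f
```

Because the gradients here are deterministic, this baseline converges to a nearby stationary point of the global objective; the heterogeneity effects discussed above only appear once clients take multiple local steps between synchronizations.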
\begin{figure} \begin{minipage}{0.65\linewidth} \centering \begin{tabular}{@{}lllll@{}} \toprule Interval & Similarity & Algorithm & Train loss & Test acc. \\ \midrule 64 & 70\% & FedAvg & 1.010 & 74.89\% \\ & & CELGC & 1.016 & 74.89\% \\ & & SCAFFOLD & 1.024 & 74.92\% \\ & & EPISODE & \textbf{0.964} & \textbf{75.20\%} \\ \midrule 64 & 60\% & FedAvg & 0.990 & 74.73\% \\ & & CELGC & 0.979 & 74.51\% \\ & & SCAFFOLD & 0.983 & 74.68\% \\ & & EPISODE & \textbf{0.945} & \textbf{74.95\%} \\ \midrule 64 & 50\% & FedAvg & 0.955 & 74.53\% \\ & & CELGC & 0.951 & 74.12\% \\ & & SCAFFOLD & 0.959 & 74.19\% \\ & & EPISODE & \textbf{0.916} & \textbf{74.81\%} \\ \midrule 128 & 60\% & FedAvg & 1.071 & 74.15\% \\ & & CELGC & 1.034 & 74.24\% \\ & & SCAFFOLD & 1.071 & 74.03\% \\ & & EPISODE & \textbf{1.016} & \textbf{74.36\%} \\ \bottomrule \end{tabular} \end{minipage} \begin{minipage}{0.35\linewidth} \centering \includegraphics[width=\linewidth]{plots/imagenet_64_50.pdf} \end{minipage} \caption{ImageNet results. \textbf{Left:} Training loss and testing accuracy at the end of training for various settings of $I$ and $s$. EPISODE consistently reaches better final metrics in all settings. \textbf{Right:} Training loss and testing accuracy during training for $I = 64$ and $s = 50\%$.} \vspace*{-0.2in} \label{fig:imagenet} \end{figure} \vspace*{-0.1in} \paragraph{ImageNet} Figure \ref{fig:imagenet} shows the performance of each algorithm at the end of training for all settings (left) and during training for the setting $I = 64$ and $s = 50\%$ (right). Training curves for the rest of the settings are given in Appendix \ref{appen:imagenet}. EPISODE outperforms all baselines in every experimental setting, especially in the case of high data heterogeneity. EPISODE is particularly dominant over other methods in terms of the training loss during the whole training process, which is consistent with our theory. 
Also, EPISODE exhibits more resilience to data heterogeneity than CELGC and SCAFFOLD: as the client data similarity decreases from $70\%$ to $50\%$, the test accuracies of CELGC and SCAFFOLD decrease by 0.8\% and 0.7\%, respectively, while the test accuracy of EPISODE decreases by 0.4\%. Lastly, as communication becomes more infrequent (i.e., the communication interval $I$ increases from $64$ to $128$), the performance of EPISODE remains superior to the baselines. % \vspace*{-0.1in} \section{Conclusion} \vspace*{-0.1in} We have presented EPISODE, a new communication-efficient distributed gradient clipping algorithm for federated learning with heterogeneous data in the nonconvex and relaxed smoothness setting. We have proved convergence results under any noise level of the stochastic gradient. In particular, we have established linear speedup results as well as reduced communication complexity. Further, our experiments on both synthetic and real-world data demonstrate the superior performance of EPISODE compared to competitive baselines in FL. Our algorithm is suitable for the cross-silo federated learning setting such as in healthcare and financial domains~\citep{kairouz2019advances}, and we plan to consider the cross-device setting in the future. \section*{Acknowledgements} We would like to thank the anonymous reviewers for their helpful comments. Michael Crawshaw is supported by the Institute for Digital Innovation fellowship from George Mason University. Michael Crawshaw and Mingrui Liu are both supported by a grant from George Mason University. The work of Yajie Bao was done when he was virtually visiting Mingrui Liu’s research group in the Department of Computer Science at George Mason University. \bibliographystyle{iclr2023_conference}
\section{Introduction} Supersymmetry (SUSY) \cite{SUSY_refs} is a promising extension of the Standard Model of particle physics (SM) \cite{SM_refs}; the simplest form is denoted the supersymmetric SM (SSM). It should be imminently testable at the LHC \cite{LHC_refs}. Supersymmetric particles, if they exist, typically decay instantaneously on collider time scales down to the lightest supersymmetric particle (LSP). The nature and possible decay properties of the LSP are thus an essential ingredient for all SUSY signatures. In the minimal SSM, with conserved proton-hexality, $\Psix$, \cite{Dreiner:2005rd} (or equivalently conserved R-parity \cite{Farrar:1978xj}), the LSP is stable. Cosmological constraints as well as LEP searches then restrict the LSP to be the lightest neutralino \cite{Ellis:1983ew,Hebbeker:1999pi}. \mymed If we allow for violation of proton-hexality, the LSP is no longer stable and in general any supersymmetric particle can be the LSP \cite{Dreiner:1997uz}. However, it is impossible to perform a detailed phenomenological study of the corresponding wide variety of mass orderings of the sparticle spectrum. We must thus restrict ourselves to well-motivated models. In this paper, we focus on the B$_3$ minimal supergravity (mSUGRA) model \cite{Allanach:2003eb}. In Ref.~\cite{Allanach:2003eb,Allanach:2006st} it was shown that in such models there are three different LSP candidates: the lightest neutralino, $\neut_1$, the lightest scalar tau (stau), $\tilde{\tau}_1$ and the sneutrino, $\tilde{\nu}_i$. The lightest neutralino LSP has been studied extensively in the literature, see for example \cite{Dreiner:1991pe,Godbole:1992fb,Dreiner:2000vf,Bartl:2000yh}. More recently the stau LSP has also been investigated \cite{Allanach:2003eb,Allanach:2006st,Allanach:2007vi,Dreiner:2007uj, Bernhardt:2008mz,Dreiner:2008rv,nonmSUGRA_stauLSP}. \mymed In this paper we consider in detail the case of a $\tilde{\nu}_i$ LSP.
The $\tilde{\nu}_i$ is special because, unlike for the $\neut_1$ and $\tilde{\tau}_1$, the $\text{B}_3$ contributions to the renormalization group equations (RGEs) are essential for it to become the LSP. In Ref.~\cite{Allanach:2006st} only one example $\text{B}_3$ mSUGRA scenario with a $\tilde{\nu}_\tau$ LSP was presented. We go beyond this work and systematically investigate the $\text{B}_3$ mSUGRA parameter space with a $\tilde{\nu}_i$ LSP. In the first part of our paper we analyse which conditions at the grand unification (GUT) scale lead to a $\tilde{\nu}_i$ LSP. In the second part, we point out striking collider signatures at the LHC which can lead to a SUSY discovery and which can distinguish a $\tilde{\nu}_i$ LSP scenario from a ``standard'' mSUGRA scenario with a stable $\neut_1$ LSP. \mymed The outline of our paper is as follows. In Sect.~\ref{the_model}, we briefly review the $\Psix$ and $\text{B}_3$ mSUGRA models and discuss the RGEs which lead to a $\tilde{\nu}_i$ LSP. We then analyse in Sect.~\ref{bounds_on_snu_LSP} the experimental bounds, especially on the $L_i Q_j \bar D_k$ operator, which restrict the $\tilde{\nu}_i$ LSP parameter space. In Sect.~\ref{parameter_space} we investigate in detail the conditions at the GUT scale leading to a $\tilde{\nu}_i$ LSP. This is the central part of our work. Finally, in Sect.~\ref{pheno}, we simulate SUSY events at the LHC within one $\tilde{\nu}_\mu$ LSP scenario. We focus on signatures that are specific to $\tilde{\nu}_i$ LSP scenarios. We conclude in Sect.~\ref{conclusion}. \section{The model} \label{the_model} The most general gauge invariant and renormalizable superpotential of the SSM is \cite{superpot_refs} \bea W_{\mathrm{SSM}}&=& W_{\text{P}_6}+W_{\not \text{P}_6}\,, \label{superpot} \\[1.5mm] W_{\text{P}_6}&=&\eps_{ab}\left[(\mathbf{Y}_E)_{ij}L_i^aH_d^b \bar{E}_j + (\mathbf{Y}_D)_{ij}Q_i^{ax}H_d^b\bar{D}_{jx} \right.
\notag \\ & & \left.+(\mathbf{Y}_U)_{ij}Q_i^{ax}H_u^b\bar{U}_{jx} + \mu H_d^aH_u^b\right], \label{P6-superpot} \\[1.5mm] W_{\not \text{P}_6} & = & \eps_{ab}\left[\frac{1}{2} \lam_{ijk} L_i^aL_j^b \bar{E}_k + \lam'_{ijk}L_i^aQ_j^{bx}\bar{D}_{kx}\right]\notag \\&& +\epsilon_{ab}\kappa^i L_i^aH_u^b +\frac{1}{2}\eps_{xyz} \lam''_{ijk} \bar{U}_i^{\,x} \bar{D}_j^{\,y} \bar{D}_k^{\,z} \,. \label{notP6-superpot} \eea where $i,j,k=1,2,3$ are generation indices. We have employed the standard notation of Ref.~\cite{Allanach:1999ic}. \medskip The superpotential, Eq.~(\ref{superpot}), consists of two different parts. The second part, $W_{\not \text{P}_6}$, contains lepton and baryon number violating operators. If simultaneously present, they lead to rapid proton decay, in disagreement with experimental observations \cite{proton_decay,Dreiner:1997uz,Barbier:2004ez,Shiozawa:1998si}. An additional discrete symmetry is therefore required to keep the proton stable \cite{discrete_symmetries,Dreiner:2005rd}. The SSM with R-parity, which prohibits $W_{\not\text{P}_6}$, is conventionally denoted the MSSM. In a more general approach, proton-hexality, $\text{P}_6 $, in addition prohibits dangerous dimension-five proton decay operators \cite{Dreiner:2005rd}. Here, we consider a third possibility, baryon-triality, $\text{B}_3$, which violates R-parity and $\text{P}_6$ by prohibiting only the $\bar{U}\bar{D}\bar{D}$ operators in Eq.~(\ref{notP6-superpot}). R-parity, proton-hexality and baryon-triality are the only discrete gauge anomaly-free symmetries of the SSM \cite{discrete_symmetries,Dreiner:2005rd}. $\text{B}_3$ models including also a dark matter candidate have been, for example, proposed in Refs.~\cite{UMSSM}. \subsection{$\Psix$ mSUGRA Model} \label{sec:mSUGRA_param} The MSSM with conserved $\Psix$ has 124 free parameters \cite{Haber:1997if}. 
In the mSUGRA model with conserved $\Psix$ and radiative electroweak symmetry breaking (REWSB) \cite{msugramodel,Ibanez:1982fr} this is reduced to five parameters, which is more manageable for phenomenological studies, \begin{equation} M_0,\, M_{1/2},\, A_0,\, \tan \beta,\, \text{sgn}(\mu) \, . \label{mSUGRA_param} \end{equation} $M_0$, $M_{1/2}$ and $A_0$ are the universal scalar mass, the universal gaugino mass and the universal trilinear scalar interaction at the GUT scale ($M_{\rm GUT}$), respectively. $\tan\beta$ is the ratio of the vacuum expectation values of the two Higgs doublets; see Eq.~(\ref{P6-superpot}). Finally, $\text{sgn}(\mu)$ unambiguously selects one solution of the electroweak symmetry breaking scalar potential ($|\mu|$ is determined by REWSB). \mymed Using the five parameters at $M_{\rm GUT}$, Eq.~(\ref{mSUGRA_param}), and the RGEs to evolve the parameters down to the electroweak scale ($M_Z$), the mass spectrum of the sparticles and their interactions are completely determined. The left- (right-)handed charged slepton, $\tilde{\ell}_{L(R)}$, and $\tilde{\nu}$ masses of the first two generations can be approximated at $M_Z$ by \cite{Drees:1995hj}: \bea m_{\tilde{\ell}_R}^2 &=& M_0^2 + 0.15 M_{1/2}^2 - \sin^2 \theta_W M_Z^2 \cos 2\beta, \nonumber \\ m_{\tilde{\ell}_L}^2 &=& M_0^2 + 0.52 M_{1/2}^2 - (0.5 - \sin^2 \theta_W) M_Z^2 \cos 2\beta, \nonumber \\ m_{\tilde{\nu}}^2 &=& M_0^2 + 0.52 M_{1/2}^2 + 0.5 M_Z^2 \cos 2\beta, \, \label{sfermion_masses} \eea where $M_Z$ is the $Z$-boson mass and $\theta_W$ is the weak-mixing angle; see also the original work of Ref.~\cite{Ibanez:1984vq}. The third terms in Eq.~(\ref{sfermion_masses}) originate from the D-term quartic interactions. \mymed For the sleptons of the third generation, the mixing between left and right chiral states is non-negligible.
The stau mass matrix $\mathcal {M}_{\tilde{\tau}}$ is given by \cite{Gunion:1984yn} \begin{align} \mathcal{M}^2_{\tilde\tau} &= \begin{pmatrix} m_{\tau}^2 + A_{LL} & m_{\tau} B_{LR} \\ m_{\tau} B_{LR} & m_{\tau}^2 + C_{RR} \end{pmatrix} \, , \label{eq_staumassmatrix} \end{align} with \begin{align} \begin{split} A_{LL} &= M^2_{\tilde{\tau}_L} - (0.5 - \sin^2\theta_W) M_Z^2 \cos 2\beta\,, \\ B_{LR} &= A_{\tau} - \mu \tan{\beta}\,, \\ C_{RR} &= M^2_{\tilde{\tau}_R} - \sin^2\theta_W M_Z^2 \cos 2\beta \, . \end{split} \end{align} We denote by $m_{\tau}$ the tau lepton mass and by $M_{\tilde{\tau}_L}$ and $M_{\tilde{\tau}_R}$ the left- and right-handed third generation soft-breaking stau mass parameters, respectively. At $M_Z$, they can be approximated by \cite{Drees:1995hj}, \begin{align} M_{\tilde{\tau}_R}^2 &= M_0^2 + 0.15 M_{1/2}^2 - \frac{2}{3} X_\tau, \nonumber \\ M_{\tilde{\tau}_L}^2 &= M_0^2 + 0.52 M_{1/2}^2 - \frac{1}{3} X_\tau, \label{stau_parameter}\\ X_\tau &\equiv 10^{-4}(1+\tan^2 \beta) \left( M_0^2 + 0.15 M_{1/2}^2 + 0.33A_0^2 \right). \nonumber \end{align} Here, $X_\tau$ parametrizes the influence of the tau Yukawa coupling on the running of the stau masses. \mymed An interesting property of REWSB is that over most of the parameter space one finds that $\mu^2\gg M_Z^2$. This leads to approximate relations between neutralino and gaugino masses at $M_Z$. The $\tilde{\chi}_1^0$ is dominantly bino-like in mSUGRA models and its mass can be approximately written as \cite{Drees:1995hj} \bea m_{\tilde{\chi}_1^0} &\simeq & M_1 = 0.41 M_{1/2}\,. \label{neutralino_masses} \eea In most of the $\Psix$ mSUGRA parameter space the $\tilde{\chi}_1^0$ or the $\tilde{\tau}_1$ is the LSP \cite{Ibanez:1984vq,Allanach:2003eb,Allanach:2006st}. \subsection{$\text{B}_3$ mSUGRA Model} The SSM also allows for lepton number violating interactions, {\it cf.} Eq.~(\ref{notP6-superpot}). These additional interactions increase the number of free parameters from 124 to more than 200.
For detailed phenomenological studies, the ${\text{B}_3}$ mSUGRA model was proposed in Ref.~\cite{Allanach:2003eb}. Beyond the five mSUGRA parameters, Eq.~(\ref{mSUGRA_param}), we assume one additional positive coupling $\BLam' \in\{\lam'_{ijk}\}$ at $M_{\rm GUT}$ \footnote{In general, also $\BLam \in\{\lam_{ijk}\}$ at $M_{\rm GUT}$ is allowed in B$_3$ mSUGRA models. $\kappa_i|_{\rm GUT}$ is rotated away; see Ref.~\cite{Allanach:2003eb} for details.}. We thus have the six free parameters: \bea &&\mzero\,,\, \mhalf\,,\, \azero\,,\,\tanb\,,\, \sgnmu\,,\,\BLam'\,. \label{P6V-param} \eea Due to the presence of one $\lam'_{ijk}$ coupling at $M_{\rm GUT}$, the following changes in collider phenomenology take place compared to $\Psix$ conserving mSUGRA: \begin{itemize} \item The RGEs receive additional contributions and consequently the sparticle mass spectrum and the couplings at $M_Z$ are altered \cite{Allanach:2003eb,Allanach:2006st,Jack:2005id}. \item The LSP can decay into SM particles via the $\lam'_{ijk}$ coupling. In principle any sparticle can now be the LSP, because the cosmological bound on stable LSPs no longer holds \cite{Ellis:1983ew}. \item Sparticles may be produced singly, possibly on resonance \cite{Allanach:1997sa,Bernhardt:2008mz,Barbier:2004ez}, \textit{e.g.} single slepton production at ha\-dron colliders \cite{single_slep,Dreiner:2006sv,Dreiner:2008rv}. \item The decay patterns of the sparticles can change due to changes in the mass spectrum and the additional $\text{B}_3$ interactions; see Refs.~\cite{Allanach:2006st,Dreiner:2008rv} for explicit examples. \end{itemize} \mymed In this paper we mainly focus on the first aspect and investigate the effect of a non-vanishing $\lam'_{ijk}|_{\text{GUT}}$ on the running of the sparticle masses. We show that a large region of the $\text{B}_3$ mSUGRA parameter space exists in which a $\tilde{\nu}_i$ is the LSP. This is consistent with all present experimental constraints.
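The approximate tree-level relations quoted above, Eqs.~(\ref{sfermion_masses}) and (\ref{neutralino_masses}), are easy to evaluate numerically. The sketch below does so for illustrative SPS1a-like inputs ($M_0 = 100$~GeV, $M_{1/2} = 250$~GeV, $\tan\beta = 10$); the values $\sin^2\theta_W \approx 0.23$ and $M_Z \approx 91.19$~GeV are assumed, and the output is only as accurate as the approximations themselves.

```python
import math

# Numerical illustration of the approximate first/second-generation
# slepton and sneutrino masses, Eq. (sfermion_masses), and of the
# bino mass relation m_chi ~ 0.41 * M_1/2, Eq. (neutralino_masses).

MZ2 = 91.19**2  # Z mass squared in GeV^2
S2W = 0.23      # sin^2(theta_W), assumed value

def slepton_masses(M0, M12, tan_beta):
    c2b = (1.0 - tan_beta**2) / (1.0 + tan_beta**2)  # cos(2 beta)
    m2_lR = M0**2 + 0.15 * M12**2 - S2W * MZ2 * c2b
    m2_lL = M0**2 + 0.52 * M12**2 - (0.5 - S2W) * MZ2 * c2b
    m2_nu = M0**2 + 0.52 * M12**2 + 0.5 * MZ2 * c2b
    return tuple(math.sqrt(m2) for m2 in (m2_lR, m2_lL, m2_nu))

m_lR, m_lL, m_nu = slepton_masses(M0=100.0, M12=250.0, tan_beta=10.0)
m_chi = 0.41 * 250.0  # lightest neutralino, Eq. (neutralino_masses)
print(m_lR, m_lL, m_nu, m_chi)
```

For $\tan\beta > 1$ one has $\cos 2\beta < 0$, so the D-terms push the sneutrino below the left-handed charged slepton; without the additional $\lam'$ contributions discussed next, the neutralino nevertheless remains the lightest of these states at this point.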
\subsection{Sneutrino LSPs in $\text{B}_3$ mSUGRA} \label{snu_LSP_in_mSUGRA} In order to understand the dependence of the $\tilde{\nu}_i$ mass at $M_Z$ on the parameters of Eq.~(\ref{P6V-param}) at the GUT scale, we must take a closer look at the relevant RGEs. According to Ref.~\cite{Allanach:2003eb}, the dominant contributions are \footnote{For $i=3$, we also have terms proportional to $(\mathbf{Y}_E)_{33}^2$, {\it i.e.} proportional to the tau Yukawa coupling squared.}: \bea 16\pi^2 \frac{d(m^2_{\tilde{\nu}_i})}{dt} &=& - \frac{6}{5} g_1^2 M_1^2 - 6 g_2^2 M_2^2 - \frac{3}{5} g_1^2 {\cal S} \nonumber \\ & & + \, 6\lam'^2_{ijk}\left[ (\mathbf{m_{\tilde{L}}})^2_{ii} +(\mathbf{m_{\tilde{Q}}})^2_{jj} +(\mathbf{m_{\tilde{D}}})^2_{kk} \right] \nonumber \\ & & + \, 6 (\mathbf{h_{D^k}})_{ij}^2 \label{sneu_RGE} \eea with \bea (\mathbf{h_{D^k}})_{ij} \equiv \lam'_{ijk} \times A_0 \qquad \text{at} \,\,\, M_{\rm GUT}\,, \label{hdk_RGE} \eea and \bea {\cal S} &\equiv& \Tr[{\bf m_{\tilde{Q}}}^2- {\bf m_{\tilde{L}}}^2-2{\bf m_{\tilde{U}}}^2 + {\bf m_{\tilde{D}}}^2 + {\bf m_{\tilde{E}}}^2] \nonumber \\ & & + m_{H_u}^2-m_{H_d}^2 \,. \label{trace_s} \eea Here $g_1$, $g_2$ are the U(1) and SU(2) gauge couplings, respectively. $t=\ln Q$ with $Q$ the renormalization scale. $({\bf h}_{D^k})_{ij}$ is the soft breaking coupling corresponding to $\lam'_{ijk}$. The bold-faced soft mass parameters in Eqs.~(\ref{sneu_RGE}) and (\ref{trace_s}) are $3\times 3$ matrices in flavor space: $\mathbf{m_{\tilde{Q}}}$ and $\mathbf{m_{\tilde{L}}}$ for the left-handed doublet squarks and sleptons; $\mathbf{m_{\tilde{U}}}$, $\mathbf{m_{\tilde{D}}}$ and $\mathbf{m_{\tilde{E}}}$ for the singlet up-squarks, down-squarks and sleptons, respectively. There is no summation over repeated indices in Eq.~(\ref{sneu_RGE}). \mymed The running of $m_{\tilde{\nu}_i}$ is governed by two different sets of terms. 
The first three terms in Eq.~(\ref{sneu_RGE}) are proportional to the gauge couplings squared, $g_1^2$ and $g_2^2$. We find that the sum of these three terms is negative at every scale. They therefore lead to an increase in $m_{\tilde{\nu}_i}$, going from $M_{\rm GUT}$ to $M_Z$. This effect leads to the contribution proportional to $M_{1/2}^2$ in the approximate formula, Eq.~(\ref{sfermion_masses}), for $m_{\tilde{\nu}_i}^2$. Note that the main contributions come from the terms proportional to the gaugino masses squared, $M_{1}^2$ and $M_{2}^2$, because ${\cal S}$ in Eq.~(\ref{sneu_RGE}), which can be negative, is identically zero at $M_{\rm GUT}$ for universal scalar masses. In addition, the coefficients of the $M_{1}^2$ and $M_{2}^2$ terms are larger than that of the ${\cal S}$ term. \mymed The remaining contributions are proportional to $\lam'^2_{ijk}$ and $({\bf h}_{D^k})^2_{ij}$; the latter is also proportional to $\lam'^2_{ijk}$ at $M_{\rm GUT}$, {\it cf.} Eq.~(\ref{hdk_RGE}). These terms are positive and will therefore reduce $m_{\tilde{\nu}_i}$, going from $M_{\rm GUT}$ to $M_Z$. They are also new to the B$_3$ mSUGRA model compared to minimal mSUGRA. The influence of these new contributions on $m_{\tilde{\nu}_i}$ depends on the magnitude of $\lam'_{ijk}$ and also on the other mSUGRA parameters, Eq.~(\ref{P6V-param}), especially on $A_0$, as we will show in Sect.~\ref{parameter_space}. \begin{figure}[ht!] \centering \setlength{\unitlength}{1cm} \includegraphics[scale=0.43, bb = 50 110 500 530, clip=true]{SPS1a_masses.ps} \put(-4.3,-0.3){$\lam'_{231}$ at $M_{\rm GUT}$} \put(-7.3,2.5){\rotatebox{90}{Mass [GeV]}} \put(-5.2,4.3){$\tilde{\nu}_\mu$} \put(-5.2,5.4){$\tilde{\mu}_L$} \put(-5.2,3.2){$\sstau_1$} \put(-5.2,2.3){$\neutralino_1$} \caption{\label{lambdap231} Masses of $\neutralino_1$, $\sstau_1$, $\ssnumu$ and $\tilde{\muon}_L$ at $M_Z$ as a function of $\lam'_{231}|_{\text{GUT}}$. The other mSUGRA parameters are those of SPS1a \cite{Allanach:2002nj}.
We assume up-mixing, {\it cf.} Sect.~\ref{quark_mixing}.} \end{figure} \mymed In Fig.~\ref{lambdap231}, we demonstrate the impact of a non-vanishing $\lam'_{231}|_{\rm GUT}$ on the running of $m_{\tilde{\nu}_i}$. We have chosen the mSUGRA point SPS1a \cite{Allanach:2002nj}, where in the $\Psix$ conserving case, the $\neutralino_1$ is the LSP and the $\sstau_1$ is the next-to-lightest supersymmetric particle (NLSP). See also Ref.~\cite{Allanach:2006st} for the case of $\lam'_{331}|_{\rm GUT}$. The mass of the muon sneutrino, $\tilde{\nu}_{\mu}$, decreases for increasing $\lam'_{231}|_{\rm GUT}$, as described by Eq.~(\ref{sneu_RGE}). Furthermore, the mass of the left-handed smuon, $\tilde{\mu}_L$, decreases, as it belongs to the same SU(2) doublet. The running of the $\tilde{\mu}_L$ mass squared is also described by Eq.~(\ref{sneu_RGE}). Note, however, that the mass difference between $\ssnumu$ and $\tilde{\mu}_L$ changes with varying $\lam'_{231}|_{\rm GUT}$, as can be seen in Fig.~\ref{lambdap231}. This is due to the different D-term contributions to $m_{\tilde{\nu}_\mu}$ and $m_{\tilde{\mu}_{L}}$, {\it cf.} Eq.~(\ref{sfermion_masses}), for different $\lam'_{231}|_{\rm GUT}$. The mass difference is approximately 20 GeV (50 GeV) for $\lam'_{231}|_{\rm GUT} = 0.0\, (0.14)$. The $\tilde{\mu}_L$ is also always heavier than the $\tilde{\nu}_{\mu}$, as long as $\tan \beta>1$. We calculated the sparticle masses in Fig.~\ref{lambdap231} with an unpublished $\text{B}_3$ version of {\tt SOFTSUSY} \cite{rpv_softsusy,footnote_SOFTSUSY}. \mymed At one-loop order, the masses of the $\neutralino_1$ and the $\sstau_1$ are not changed, as can be seen in Fig.~\ref{lambdap231}. They do not directly couple to the $L_2 Q_3 \bar D_1$ operator, in contrast to $\tilde{\nu}_{\mu},\,\tilde{\mu}_L$. We therefore obtain for the parameter set SPS1a with $\lam'_{231}|_{\rm GUT} > 0.12$ a new candidate for the LSP, namely the sneutrino!
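The size of this effect can be anticipated with a crude leading-log estimate of the $\lam'$-dependent terms in Eq.~(\ref{sneu_RGE}). The toy below keeps only the $6\lam'^2(\ldots) + 6({\bf h}_{D^k})^2$ contributions with $({\bf h}_{D^k})_{ij} = \lam' A_0$, freezes all soft masses at a common illustrative value, and integrates in a single step; it is a rough sketch of the scaling, not a substitute for solving the coupled RGEs as done with {\tt SOFTSUSY} above.

```python
import math

# Crude leading-log estimate of the lambda'-induced reduction of the
# sneutrino mass squared: keep only the terms
#   6 lam'^2 (m_L^2 + m_Q^2 + m_D^2) + 6 (lam' * A0)^2
# of Eq. (sneu_RGE), with all soft masses frozen at m_soft.

M_GUT, M_Z = 2.0e16, 91.19  # GeV; M_GUT value is illustrative
LOG = math.log(M_GUT / M_Z)

def delta_m2(lam_gut, m_soft, A0):
    """Approximate downward shift of m^2_sneutrino in GeV^2."""
    return 6.0 * lam_gut**2 * (3.0 * m_soft**2 + A0**2) * LOG / (16.0 * math.pi**2)

# Illustration: common soft mass ~500 GeV, A0 = -100 GeV.
shift_small = delta_m2(0.05, 500.0, -100.0)
shift_large = delta_m2(0.13, 500.0, -100.0)
print(shift_small, shift_large)
```

In this one-step approximation the shift grows exactly quadratically with $\lam'|_{\rm GUT}$, which is why couplings below roughly $0.05$ cannot pull the sneutrino mass down far enough to make it the LSP.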
In the following, we systematically investigate the conditions which lead to a $\tilde{\nu}_i$ LSP in $\text{B}_3$ mSUGRA models. From Eq.~(\ref{sneu_RGE}) it is clear that we need a coupling $\lam'_{ijk}|_{\rm GUT}\not=0$. The smallest $\lam'_{ijk}|_{\rm GUT}$ coupling that we found to lead to a $\tilde{\nu}_i$ LSP is $\lam'_{ijk}|_{\rm GUT}= 0.054$. Otherwise, the new contributions in the RGE, Eq.~(\ref{sneu_RGE}), are not large enough to reduce $m_{\tilde{\nu}_i}$ significantly. \mymed A non-vanishing $\lam'_{ijk}|_{\rm GUT}$ also reduces the left-handed squark masses of generation $j$ and the right-handed down-squark masses of generation $k$, because these squarks couple directly to the $L_i Q_j \bar D_k$ operator \cite{Allanach:2006st}. One might worry that this effect leads to unwanted flavour changing neutral currents (FCNCs) when we rotate the quarks and squarks from the flavour-basis to their mass-basis. But, for example for SPS1a with $\lam'_{231}|_{\text{GUT}} = 0.13$, the respective squark masses are reduced by less than $4\%$, thus avoiding FCNCs which are in contradiction with experiment \cite{FCNCs_refs}. \mymed We investigate in Sect.~\ref{bounds_on_snu_LSP} the experimental bounds on $\text{B}_3$ mSUGRA models with a $\tilde{\nu}_i$ LSP. For this purpose, we need to take a closer look at quark-flavour mixing. \subsection{Quark Mixing} \label{quark_mixing} The RGEs of the different $\text{B}_3$ couplings are coupled via the matrix elements of the lepton- and quark-Yukawa matrices, Eq.~(\ref{P6-superpot}). Assuming a diagonal lepton Yukawa matrix, $\mathbf{Y}_E$, a non-vanishing $\lam'_{ijk}$ coupling at $M_{\rm GUT}$ will generate at $M_Z$ all other $\text{B}_3$ couplings which violate the same lepton number; see also Ref.~\cite{Dreiner:2008rv}. We thus need to know the up- and down-quark Yukawa matrices, $\mathbf{Y}_U$ and $\mathbf{Y}_D$, respectively.
\mymed From experiment, we only know the Cabibbo-Kobayashi-Maskawa (CKM) matrix \begin{equation} \mathbf{V}_{\text{CKM}} = \mathbf{V_{uL}} \mathbf{V_{dL}^\dagger}. \end{equation} Here, $\mathbf{V_{uL}}$ ($\mathbf{V_{dL}}$) rotates the left-handed up- (down-) type quarks from the electroweak basis to the mass basis. For simplicity, we assume that the Yukawa matrices $\mathbf{Y}_U$ and $\mathbf{Y}_D$ are real and symmetric, thus $\mathbf{V_{uL}}= \mathbf{V_{uR}}$ and $\mathbf{V_{dL}}=\mathbf{V_{dR}}$. We can imagine two extreme cases. We refer to ``up-mixing'' if \begin{align} {\bf V_{u\,L,R}^{}} = {\bf V_{\rm CKM}}, \quad {\bf V_{d\,L,R}^{}} = {\bf 1}_{3\times3}, \label{up_mixing} \end{align} at $M_Z$, {\it i.e.} the up-type Yukawa matrix $\mathbf{Y}_U$ is non-diagonal and $\mathbf{Y}_D$ is diagonal. In the case of ``down-mixing'', we have \begin{align} {\bf V_{u\,L,R}^{}}={\bf 1}_{3\times3}, \quad {\bf V_{d\,L,R}^{}}= {\bf V^\dagger_{\rm CKM}},\label{down_mix} \end{align} at $M_Z$. Now, the down-type Yukawa matrix $\mathbf{Y}_D$ is non-diagonal and $\mathbf{Y}_U$ is diagonal. For a more detailed discussion see for example Refs.~\cite{Agashe:1995qm, Allanach:2003eb,Dreiner:2008rv}. \section{Experimental Bounds on $\tilde{\nu}$ LSP Models} \label{bounds_on_snu_LSP} We have shown above that a non-vanishing coupling $\lam'_{ijk}$ at $M_{\rm GUT}$ can affect the spectrum at $M_Z$ such that a $\tilde{\nu}_i$ is the LSP. This requires $\lam'_{ijk}|_{\rm GUT}\gsim 0.05$, corresponding to $\lam'_{ijk}\gsim0.15$ at $M_Z$. In this section, we investigate for which couplings $\lam'_{ijk}|_{\rm GUT}$ the upper bounds are sufficiently weak such that a $\tilde{\nu}_i$ LSP can be generated. For the bounds, we first take into account the generation of tree-level neutrino masses. Then we review other indirect bounds on these couplings. Finally, we discuss the restrictions from direct searches for supersymmetric particles at LEP, at the Tevatron and the CERN $p \bar p$ collider.
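The distinction between the two extreme mixing conventions can be made concrete with a small numerical sketch. For real, symmetric Yukawa matrices and orthogonal rotations, ${\bf Y} = {\bf V}^T \hat{y}\, {\bf V}$; the Cabibbo-angle-only orthogonal approximation to ${\bf V}_{\rm CKM}$ and the down-type Yukawa eigenvalues used below are illustrative assumptions, not fitted values.

```python
import math

# Sketch of the two extreme quark-mixing conventions.
#   up-mixing  : V_dL = 1,       so Y_D stays diagonal;
#   down-mixing: V_dL = V_CKM^T (real), so Y_D = V_CKM yhat_D V_CKM^T.
# Plain 3x3 list arithmetic keeps the example dependency-free.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

theta_c = 0.227  # Cabibbo angle in radians, leading-order CKM
c, s = math.cos(theta_c), math.sin(theta_c)
V_CKM = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

yhat_D = [[3e-4, 0.0, 0.0], [0.0, 6e-3, 0.0], [0.0, 0.0, 0.13]]  # illustrative

Y_D_up = yhat_D  # up-mixing: diagonal by construction
Y_D_down = matmul(matmul(V_CKM, yhat_D), transpose(V_CKM))  # down-mixing

print(Y_D_up[1][0], Y_D_down[1][0])
```

The off-diagonal entries $({\bf Y}_D)_{jk}$, $j \neq k$, vanish for up-mixing but not for down-mixing; this is precisely the property that controls the strength of the tree-level neutrino-mass bounds discussed in the next subsection.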
\subsection{Bounds from Tree Level Neutrino Masses} \label{tree_neut_bounds} If $\lam'_{ijk}|_{\rm GUT}\not=0$ and the bilinear coupling $\kappa_i|_{\rm GUT}=0$, \textit{cf.} Eq.~(\ref{notP6-superpot}), $\kappa_i|_{\rm M_Z}\not=0$ will be generated via the RGEs \cite{Allanach:2003eb,Carlos:1996du,Dreiner:1995hu,Barger:1995qe,Nardi:1996iy} \begin{equation} 16\pi^2\frac{d\kappa_i}{dt} = - 3 \mu \lam'_{ijk} ({\bf Y}_D)_{jk} + \dots \, . \label{gen_kappa} \end{equation} Furthermore, $\lam'_{ijk}|_{\rm GUT}$ will generate the corresponding soft-breaking term of $\kappa_i$, namely $\tilde{D}_i$, via \cite{Allanach:2003eb,Carlos:1996du,Dreiner:1995hu,Barger:1995qe,Nardi:1996iy} \begin{equation} 16\pi^2\frac{d\tilde{D}_i}{dt} = -3 \left[2 \mu ({\bf h}_{D^k})_{ij} + \tilde{B} \lam'_{ijk} \right] ({\bf Y}_D)_{jk} + \dots \, . \label{gen_D} \end{equation} Here, $\tilde{B}$ is the soft-breaking coupling corresponding to $\mu$ and is determined by REWSB \cite{Ibanez:1982fr,Allanach:2003eb}. Since the RGEs are different for $\kap_i$ and $\tilde D_i$, they are not aligned at the weak scale and cannot be rotated away through a field redefinition. \mymed The sneutrino of generation $i$ will develop a vacuum expectation value $v_i$ due to the non-vanishing couplings $\kappa_i$ and $\tilde{D}_i$. The vacuum expectation value $v_i$ and the $\kappa_i$ operator mix the neutralino fields with the neutrino fields, which generates one massive neutrino, $m_{\nu_i}$, for non-vanishing $\lam'_{ijk}|_{\rm GUT}$ at tree-level \cite{Nardi:1996iy,Hall:1983id,Ellis:1984gi,Banks:1995by,Allanach:2003eb}.
\mymed Demanding that this neutrino mass is smaller than the cosmological bound on the sum of neutrino masses, determined by the combination of the WMAP data \cite{Spergel:2003cb} and the 2dFGRS data \cite{Colless:2003wz}, \begin{equation} \sum_i m_{\nu_i} < 0.71 \, \text{eV} \, , \label{WMAP_bound} \end{equation} results in upper bounds on $\lam'_{ijk}|_{\rm GUT}$, which were calculated in Ref.~\cite{Allanach:2003eb} for the parameter point SPS1a \cite{Allanach:2002nj}. \mymed It was found in Ref.~\cite{Allanach:2003eb}, assuming quark mixing solely in the down-sector (\ref{down_mix}) and assuming no accidental cancellations, that the bounds on $\lam'_{ijk}|_{\rm GUT}$ are of the order of $\mathcal{O}(10^{-3}-10^{-6})$. However, if quark mixing is solely in the up-sector (\ref{up_mixing}), then $({\bf Y}_D)_{jk}$ vanishes at $M_Z$ for $j\not=k$. This suppresses the right-hand side of Eq.~(\ref{gen_kappa}) and Eq.~(\ref{gen_D}). The neutrino masses, and therefore the bounds on $\lam'_{ijk}|_{\rm GUT}$, are significantly weakened. Taking also two-loop effects into account, we summarize in Table~\ref{RPV_couplings} the $\lam'_{ijk}$ couplings which are unrestricted by the neutrino mass bound, Eq.~(\ref{WMAP_bound}), as long as quark mixing is dominantly in the up-sector, \textit{cf.} Eq.~(\ref{up_mixing}). We also include the strictest experimental bound, which we discuss in the following subsection. \subsection{Indirect Bounds on $\lam'_{ijk}$} \label{indirect_bounds} \begin{table}[t!]
\begin{ruledtabular} \begin{tabular}{ccc} coupling & upper bound at $M_Z$ & LSP \\ \hline $\lam'_{121}$ & $0.03\times (m_{\tilde{c}_L}/100 \, \text{GeV})$ & $\tilde{\nu}_e$ \\ $\lam'_{131}$ & $0.02\times (m_{\tilde{t}_L}/100 \, \text{GeV})$ & $\tilde{\nu}_e$ \\ $\lam'_{112}$ & $0.02\times (m_{\tilde{s}_R}/100 \, \text{GeV})$ & $\tilde{\nu}_e$ \\ $\lam'_{221}$ & $0.18 \times (m_{\tilde{s}_L}/100 \, \text{GeV})$ & $\tilde{\nu}_\mu$ \\ $\lam'_{231}$ & $0.18\times (m_{\tilde{b}_L}/100 \, \text{GeV})$ & $\tilde{\nu}_\mu$ \\ $\lam'_{212}$ & $0.06\times (m_{\tilde{s}_R}/100 \, \text{GeV})$ & $\tilde{\nu}_\mu$ \\ $\lam'_{321}$ & $0.52 \times (m_{\tilde{d}_R}/100 \, \text{GeV})$ & $\tilde{\nu}_\tau$ \\ $\lam'_{331}$ & $0.32 \times (m_{\tilde{d}_R}/100 \, \text{GeV})$ & $\tilde{\nu}_\tau$ \\ $\lam'_{312}$ & $0.11\times (m_{\tilde{s}_R}/100 \, \text{GeV})$ & $\tilde{\nu}_\tau$ \end{tabular} \caption{\label{RPV_couplings} Upper bounds on single couplings $\lam'_{ijk}$ from electroweak precision measurements. Only those couplings which are consistent with the cosmological bound on neutrino masses, Eq.~(\ref{WMAP_bound}), are shown; see also Ref.~\cite{Allanach:2003eb}. The bounds depend linearly on the masses of the relevant squarks, $m_{\tilde{q}}$. The third column shows the $\tilde{\nu}_i$ LSP which can be generated via the respective $\lam'_{ijk}|_{\rm GUT}$ coupling.} \end{ruledtabular} \end{table} In this section, we review the relevant indirect bounds on the couplings $\lam'_{ijk}$ from electroweak precision measurements. In Table~\ref{RPV_couplings}, we present the strongest bounds on the single $\lam'_{ijk}$ couplings at the $2\sigma$ level \cite{Barbier:2004ez,Allanach:1999ic,bounds_rpv,Dreiner:1997uz}. The bounds apply to the couplings at $M_Z$. To obtain the respective bound at $M_{\rm GUT}$, one has to divide the corresponding bound in Table~\ref{RPV_couplings} by roughly a factor of three.
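Since the bounds of Table~\ref{RPV_couplings} scale linearly with the exchanged sfermion mass, they are easy to evaluate for any given spectrum. The helper below is our own illustration: it encodes the coefficients of the table together with the rough factor-of-three running between $M_Z$ and $M_{\rm GUT}$ quoted above.

```python
# Coefficients of the table above: bound at M_Z = coeff * (m_sfermion / 100 GeV).
BOUND_COEFF = {
    "121": 0.03, "131": 0.02, "112": 0.02,   # snu_e couplings
    "221": 0.18, "231": 0.18, "212": 0.06,   # snu_mu couplings
    "321": 0.52, "331": 0.32, "312": 0.11,   # snu_tau couplings
}

def bound_at_mz(ijk, m_sfermion_gev):
    """Upper bound on lam'_ijk at M_Z for the given virtual-sfermion mass."""
    return BOUND_COEFF[ijk] * (m_sfermion_gev / 100.0)

def bound_at_gut(ijk, m_sfermion_gev, running_factor=3.0):
    """Rough bound at M_GUT: divide the M_Z bound by the running factor ~3."""
    return bound_at_mz(ijk, m_sfermion_gev) / running_factor
```

For example, for $\lam'_{231}$ with a 500 GeV left-handed sbottom, the $M_Z$ bound is $0.18 \times 5 = 0.9$, i.e. roughly $0.3$ at $M_{\rm GUT}$, comfortably above the value $\lam'_{231}|_{\rm GUT} \approx 0.1$ needed for a $\tilde{\nu}_\mu$ LSP.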
For each coupling, the bound depends linearly on the mass of the sfermion exchanged in the relevant process. In the right column, we show which sneutrino can become the LSP. We see that an electron sneutrino LSP, $\tilde{\nu}_e$, is disfavoured due to the strong bounds on the couplings $\lam'_{1jk}$. We find a $\tilde{\nu}_e$ LSP only in a small region of mSUGRA parameter space, even though large squark masses weaken the bounds. In the following, we will thus concentrate on muon sneutrinos, $\tilde{\nu}_\mu$, and tau sneutrinos, $\tilde{\nu}_\tau$, as LSP candidates. \mymed One non-vanishing $\lam'_{ijk}|_{\rm GUT}$ will also generate additional ($LQ\bar D$ \textit{and} $LL\bar E$) $\text{B}_3$ operators at $M_Z$, which violate the same lepton number \cite{Dreiner:2008rv}. For example, for one $\lam'_{2jk}|_{\rm GUT} \not= 0$, we will generate all other muon number violating operators at $M_Z$ via one- and two-loop effects. Since bounds on products of two different $\text{B}_3$ couplings are often much stronger than those on only one $\text{B}_3$ coupling \cite{Barbier:2004ez,Allanach:1999ic,bounds_rpv,Dreiner:1997uz}, we have also checked that all generated products of the dominant $\lam'_{ijk}$ coupling with a generated coupling satisfy the bounds. All products lie at least one order of magnitude below the strongest upper bounds if $\lambda'_{ijk}|_{\rm GUT}=0.1$. \mymed After REWSB, the single coupling scheme, which was assumed in deriving the bounds in Table~\ref{RPV_couplings}, cannot be realized in the quark mass eigenbasis \cite{Agashe:1995qm}. In Sect.~\ref{tree_neut_bounds}, we stated that quark mixing must be dominantly in the up-sector, Eq.~(\ref{up_mixing}), to fulfill the cosmological bound on the sum of neutrino masses, Eq.~(\ref{WMAP_bound}). Therefore, in the quark mass basis we will generate the following $\text{B}_3$ couplings \begin{equation} \tilde{\lam}'_{imk}=(\mathbf{V}^*_{CKM})_{mj} \lam'_{ijk} \, .
\label{lamp_tilde} \end{equation} $\tilde{\lam}'_{imk}$ with $m=1,2,3$ couples an up-quark superfield of generation $m$ (in the mass basis) to a lepton and down-quark superfield of generation $i$ and $k$, respectively. These effective couplings can give rise to $D_0$--$\bar D_0$ mixing if $m=1,2$ \cite{Agashe:1995qm,Petrov:2007gp,Golowich:2007ka}. $D_0$ oscillations were investigated by the BABAR \cite{Aubert:2007aa,Aubert:2007wf}, Belle \cite{Abe:2007rd,Staric:2007dt} and CDF \cite{:2007uc} collaborations. The Heavy Flavor Averaging Group combined all experimental results and obtained windows for the allowed mass difference and the allowed lifetime difference of the $D_0$--$\bar D_0$ system \cite{Schwartz:2008wa}. \mymed Ref.~\cite{Golowich:2007ka} employed the experimental $2\sigma$ errors on the $D_0$--$\bar D_0$ mass difference to obtain the following bounds on $\lam'_{ijk}$ \begin{eqnarray} & &|\tilde{\lam'}_{i21} \, \tilde{\lam'}_{i11}| = |\lam_W \, \lam^{\prime 2}_{i21}| \nonumber \\ & & \leq 0.0029 \left [ \left(\frac{100 \text{GeV} }{m_ {\tilde{\ell}_{Li}}}\right )^2 + \left( \frac{100 \text{GeV} } {m_{\tilde{d}_R}} \right)^2 \right ]^{-1/2} \, , \nonumber \\ \label{bound1_D0D0bar} \end{eqnarray} where $\lam_W=0.23$ is the Wolfenstein parameter \cite{wolf_param} and $i=1,2,3$. For the evaluation of Eq.~(\ref{bound1_D0D0bar}), Ref.~\cite{Golowich:2007ka} assumed that the mass splitting arises solely from $\text{B}_3$ contributions. Note that the first equality of Eq.~(\ref{bound1_D0D0bar}) only holds if quark mixing is solely in the up-sector, Eq.~(\ref{up_mixing}). The corresponding upper bound on $|\lam_W \, \lam^{\prime 2}_{i12}|$ can be obtained from Eq.~(\ref{bound1_D0D0bar}) by replacing $m_{\tilde{d}_R}$ with $m_{\tilde{s}_R}$. 
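Eq.~(\ref{bound1_D0D0bar}) is straightforward to evaluate for a given spectrum. The sketch below does so for illustrative masses of our own choosing (a 200 GeV slepton and a 500 GeV squark); it reproduces the bound of about 0.15 quoted below.

```python
import math

LAM_W = 0.23  # Wolfenstein parameter

def lamp_i21_bound(m_slepton_gev, m_squark_gev):
    """Upper bound on lam'_i21 at M_Z from the D0-D0bar mass difference,
    Eq. (bound1_D0D0bar), assuming the B3 contribution saturates the mixing:
        |lam_W * lam'^2| <= 0.0029 * [(100/m_slep)^2 + (100/m_sq)^2]^(-1/2).
    """
    bracket = (100.0 / m_slepton_gev) ** 2 + (100.0 / m_squark_gev) ** 2
    max_lamw_lamp2 = 0.0029 / math.sqrt(bracket)
    return math.sqrt(max_lamw_lamp2 / LAM_W)

# Illustrative spectrum: m_slepton = 200 GeV, m_squark = 500 GeV.
print(round(lamp_i21_bound(200.0, 500.0), 3))
```

The bound on $\lam'_{i12}$ follows from the same function by inserting $m_{\tilde{s}_R}$ for the squark mass.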
\mymed The experimentally allowed range for the lifetime difference of the $D_0$--$\bar D_0$ system was used in Ref.~\cite{Petrov:2007gp} to obtain the bounds \begin{eqnarray} |\tilde{\lam'}_{i21} \, \tilde{\lam'}_{i11}| = |\lam_W \, \lam^{\prime 2}_{i21}| \leq 0.082 \left(\frac{m_{\tilde{\ell}_ {Li}}}{100 \text{GeV} }\right )^2. \label{bound22_D0D0bar} \end{eqnarray} These are valid for $i=1,2$. Unlike Ref.~\cite{Golowich:2007ka}, Ref.~\cite{Petrov:2007gp} also took (destructive) interference between the $\text{B}_3$ and SM contributions into account. The bound on $|\lam_W \, \lam^{\prime 2}_{i12}|$ is the same. \mymed If we assume a $\tilde{\ell}_{Li}$ with a mass of 200 GeV and squarks with a mass of 500 GeV, we obtain the upper bounds $\lam'_{i21}, \lam'_{i12} \leq 0.15$ at $M_Z$ from the $D_0$--$\bar D_0$ mass difference, Eq.~(\ref{bound1_D0D0bar}), and $\lam'_{i21}, \lam'_{i12}\leq 1.2$ at $M_Z$ from the $D_0$--$\bar D_0$ lifetime difference, Eq.~(\ref{bound22_D0D0bar}). Thus the $\tilde{\nu}_i$ LSP parameter space is strongly restricted by the $D_0$--$\bar D_0$ mass difference. However, it was pointed out in Ref.~\cite{Petrov:2007gp} that destructive interference, for example between $\Psix$ violating and $\Psix$ conserving contributions, may significantly weaken the bounds of Eq.~(\ref{bound1_D0D0bar}), as in the case of the $D_0$--$\bar D_0$ lifetime difference. \mymed In the following, we mainly focus on the couplings $\lam'_{231}$ and $\lam'_{331}$, leading to a $\tilde{\nu}_\mu$ and a $\tilde{\nu}_\tau$ LSP, respectively. These couplings are not restricted by $D_0$--$\bar D_0$ mixing, because the relevant CKM matrix elements needed to generate $\tilde{\lam}'_{i21}$ and $\tilde{\lam}'_{i11}$ out of $\lam'_{i31}$ are too small, {\it cf.} Eq.~(\ref{lamp_tilde}). \subsection{Collider Constraints} \subsubsection{Constraints from LEP} \label{LEP_constraints} We now determine bounds on the $\tilde{\nu}_i$ LSP mass from LEP.
For the case of a non-vanishing $\lam'_{ijk}$ coupling, the $\tilde{\nu}_i$ LSP will dominantly decay into two jets: \bea \tilde{\nu}_i &\rightarrow& \bar d_j d_k. \label{sneut_decay} \eea Here, $d_k$ ($\bar d_j$) is a down (anti-down) quark of generation $k$ ($j$). This decay will occur instantaneously in the detector, \textit{i.e.} with no detached vertex, since in our model $\lam'_{ijk}$ is bounded from below by the requirement of a $\tilde{\nu}_i$ LSP. $\tilde{\nu}_i$ pair production followed by the decay, Eq.~(\ref{sneut_decay}), would lead to four-jet events at LEP. \mymed Bounds on the total $\tilde{\nu}_i$ pair production cross section, with the $\tilde{\nu}_i$ decaying via $\lam'_{ijk}$, were obtained by the OPAL collaboration \cite{Abbiendi:2003rn} and also by the ALEPH collaboration \cite{Heister:2002jc}. \begin{table}[t] \begin{ruledtabular} \begin{tabular}{c|ccc} & $m_{\tilde{\nu}_e}$ & $m_{\tilde{\nu}_\mu}$ & $m_{\tilde{\nu}_\tau}$ \\ \hline OPAL & $>68-89$ GeV & $>74$ GeV & $>74$ GeV \\ ALEPH & $>75-95$ GeV & $>79$ GeV & $>79$ GeV \end{tabular} \caption{\label{bounds_LEP} Lower bounds on the $\tilde{\nu}_i$ LSP masses from direct $\tilde{\nu}_i$ decay via $\lam'_{ijk}$. The bounds were obtained from the OPAL \cite{Abbiendi:2003rn} and ALEPH \cite{Heister:2002jc} analyses, respectively. The $\tilde{\nu}_\mu$ and $\tilde{\nu}_\tau$ mass bounds are universal. The $\tilde{\nu}_e$ mass bound depends on the chargino parameters due to potential interference effects.} \end{ruledtabular} \end{table} {}From these, we can obtain lower bounds on the mass of the $\tilde{\nu}_i$ LSP. We calculated the pair production cross section using the formulas given in Ref.~\cite{Wendel:1990yc}, with the fine structure constant equal to its value at $M_Z$, {\it i.e.} $\alpha = 1/128$. We show in Table~\ref{bounds_LEP} the strongest lower bounds on the $\tilde{\nu}_i$ LSP masses for different lepton flavours $i$.
\mymed The $\tilde{\nu}_i$ LSP mass bounds for the second and third generation ($i=2,3$) are universal. The $\tilde{\nu}_e$ mass bound, in contrast, also depends on the chargino parameters, which enter the sneutrino pair production cross section through $t$-channel chargino exchange. We calculated the different bounds on the electron sneutrino mass by assuming that the lightest chargino is wino-like. This is the case for most mSUGRA scenarios. We then varied its mass between 120 GeV and 1000 GeV to obtain the numbers in Table~\ref{bounds_LEP}. \mymed In the following, we investigate the $\tilde{\nu}_\mu$ LSP and $\tilde{\nu}_\tau$ LSP parameter space in detail. A $\tilde{\nu}_e$ LSP is less favoured due to the stronger bounds on the $\lam'_{1jk}$ couplings, {\it cf.} Table~\ref{RPV_couplings}. We employ a lower mass bound of 78 GeV. This corresponds to the bound obtained by the ALEPH collaboration, see Table~\ref{bounds_LEP}, reduced by 1 GeV to account for numerical uncertainties in {\tt SOFTSUSY} \cite{Allanach:2003jw}. \mymed Only the mass bounds on the directly decaying $\tilde{\nu}_i$ LSP need to be considered, because all the other bounds from LEP on direct and indirect decays of heavier sparticles (compared to the $\tilde{\nu}_i$ LSP) are automatically fulfilled. In addition, the LEP exclusion bound on the light Higgs, $h$, is $m_h > 114.4$ GeV at $95 \%$ confidence level \cite{Barate:2003sz}. Anticipating a numerical error of 3 GeV in the {\tt SOFTSUSY} prediction of $m_h$ \cite{Allanach:2006st,Allanach:2003jw,Degrassi:2002fi,Allanach:2004rh}, we have imposed a lower bound of 111.4 GeV.
\subsubsection{Constraints from the Tevatron} \label{Tevatron_constraints} At the Tevatron, a non-vanishing $\lam'_{ijk}$ coupling allows for resonant single $\tilde{\ell}^-_{Li}$ and $\tilde{\nu}_i$ production leading to dijet events \bea \bar u_j d_k & \rightarrow \tilde{\ell}_{Li}^- & \rightarrow \bar u_j d_k \, , \label{res_slep}\\ \bar d_j d_k & \rightarrow \tilde{\nu}_i & \rightarrow \bar d_j d_k\, . \label{res_sneu} \eea The expected reach for the slepton resonance search at the Tevatron in the dijet channel is estimated in Ref.~\cite{Hewett:1998fu} as a function of the hadronic cross section for the processes in Eqs.~(\ref{res_slep}),\,(\ref{res_sneu}) and the slepton mass. In Ref.~\cite{Hewett:1998fu}, the discovery potential for slepton masses between 200 GeV and 1200 GeV is given assuming an integrated luminosity of 2 $\text{fb}^{-1}$ and 30 $\text{fb}^{-1}$. We have checked that all the couplings shown in Table~\ref{RPV_couplings}, assuming $\lam'_{ijk}|_{\rm GUT}=0.1$, lead to production cross sections which lie at least one order of magnitude below the expected discovery region for 2 $\text{fb}^{-1}$ given in Ref.~\cite{Hewett:1998fu}. We have employed the QCD and SUSY-QCD next-to-leading order (NLO) cross section \cite{Dreiner:2006sv}. \mymed Tevatron searches for new resonances in the dijet channel have indeed been performed by the D0 collaboration \cite{Abazov:2003tj} and the CDF collaboration \cite{Abe:1995jz,Abe:1997hm,CDFnote}. Although $\text{B}_3$ models were not considered, bounds on the production cross section of additional vector bosons, $W^\prime$ and $Z^\prime$, which decay into two jets, were obtained. These processes are very similar to the $\text{B}_3$ processes, Eqs.~(\ref{res_slep}) and (\ref{res_sneu}). $W^\prime$ and $Z^\prime$ masses between 180 GeV and 1400 GeV were probed. 
In this mass region, the production cross section for a single $\tilde{\ell}_{Li}^-$ or $\tilde{\nu}_i$ with subsequent decay into two jets lies at least one order of magnitude below the experimental limits on $W^\prime$ and $Z^\prime$ production. We assumed $\lamp_{ijk}|_{\rm GUT}=0.1$ and one coupling of Table~\ref{RPV_couplings}. \mymed \begin{table} \begin{ruledtabular} \begin{tabular}{cc|cc} process & & \multicolumn{2}{c}{cross section [pb]} \\ \hline $P \bar P \rightarrow W (Z) \rightarrow q \bar q$ & & $2.7 \times 10^4$ & ($7.9 \times 10^3$) \\ $P \bar P \rightarrow \tilde{\mu}_L \rightarrow q \bar q$ & & $9.2 \times 10^2$ & ($5.7 \times 10^2$) \\ $P \bar P \rightarrow \tilde{\nu}_\mu \rightarrow q \bar q$ & & $1.3 \times 10^3$ & ($8.0 \times 10^2$) \end{tabular} \caption{\label{WZ_xsection} Hadronic cross sections for dijet production via an on-shell $W$ ($Z$) boson in comparison to $\text{B}_3$ violating dijet production via a $\tilde{\mu}_L$, Eq.~(\ref{res_slep}), and a $\tilde{\nu}_\mu$, Eq.~(\ref{res_sneu}), with a mass equal to the $W$ ($Z$) mass. We assumed $\lam'_{221}|_{\rm GUT}=0.1$. The charge conjugated processes are also taken into account.} \end{ruledtabular} \end{table} We now estimate whether the Tevatron has a chance to observe dijet production via $\tilde{\ell}_{Li}^-$ and $\tilde{\nu}_i$ resonances with masses {\it below} 180 GeV. We show in Table~\ref{WZ_xsection} the hadronic cross sections for dijet production via an on-shell $W$ ($Z$) boson \cite{D0note,Yao:2006px}. We also give the NLO production cross section for a $\tilde{\ell}_{Li}^-$ and a $\tilde{\nu}_i$ with a mass equal to the $W$ ($Z$) mass \cite{Dreiner:2006sv}, assuming $\lamp_{221}|_{\rm GUT}=0.1$. We see that the $\text{B}_3$ cross sections are roughly one order of magnitude smaller than the SM cross sections.
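The suppression can be read off directly from Table~\ref{WZ_xsection}; the few lines below simply take the ratios, with the numbers copied from the table.

```python
# Cross sections in pb from Table (WZ_xsection); "W" entries are at the W mass,
# "Z" entries at the Z mass.
sigma_sm = {"W": 2.7e4, "Z": 7.9e3}
sigma_b3 = {
    "W": {"smuon_L": 9.2e2, "snu_mu": 1.3e3},
    "Z": {"smuon_L": 5.7e2, "snu_mu": 8.0e2},
}

# Suppression of each B3 signal relative to the SM dijet rate at the same mass.
suppression = {
    (boson, sp): sigma_sm[boson] / xs
    for boson, channels in sigma_b3.items()
    for sp, xs in channels.items()
}
for key, ratio in sorted(suppression.items()):
    print(key, round(ratio, 1))
```

All four ratios come out between roughly 10 and 30, confirming the order-of-magnitude suppression quoted above.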
We conclude that the processes, Eqs.~(\ref{res_slep}) and (\ref{res_sneu}), cannot be seen at the Tevatron for slepton masses below 180 GeV, because even the $Z$ and the $W$ have so far not been observed at the Tevatron in the dijet channel. \mymed Singly produced charged sleptons, Eq.~(\ref{res_slep}), may also cascade decay into a lepton $\ell_i$, two jets and missing energy: \begin{align} \tilde{\ell}_{Li}^- \rightarrow & \tilde{\chi}_1^0 \ell_i^- \nonumber \\ & \hookrightarrow \tilde{\nu}_i \bar \nu_i \nonumber \\ & \qquad \hookrightarrow \bar d_j d_k \, . \label{slep_cascade} \end{align} In principle, this signature could be more easily distinguished from the (QCD) background than pure dijet events, due to the additional isolated lepton in the final state. However, the cascade decay, Eq.~(\ref{slep_cascade}), is kinematically forbidden in most regions of the $\tilde{\nu}_i$ LSP parameter space, as we show in Sect.~\ref{parameter_space}. In that case, one might consider the 3-body decay, $\tilde{\ell}_{Li}^- \rightarrow \ell_i^- \bar \nu_i \tilde{\nu}_i$, via a virtual neutralino. However, this process can only occur at a significant rate if the 2-body decay mode into two jets, Eq.~(\ref{res_slep}), is forbidden or kinematically suppressed. This is the case for $j=3$, {\it i.e.} a top quark in the final state. But then the $\tilde{\ell}_{Li}^-$ cannot be produced as a single resonance, because we would also need a top quark in the initial state, see Eq.~(\ref{res_slep}). Furthermore, the 3-body decay $\tilde{\ell}_{Li}^- \rightarrow \ell_i^- \bar \nu_i \tilde{\nu}_i$ is heavily suppressed compared to the 3-body decay via a virtual top quark, as we will see in Sect.~\ref{example_spectrum}. \mymed A non-vanishing $\lam'_{i31}$ coupling can lead to $\text{B}_3$ top-quark decay at the Tevatron \cite{Dreiner:1991dt,Agashe:1995qm,Belyaev:2004qp,Eilam:2001dh,Abraham:2000kx,Ghosh:1996bm}. For example, $t \rightarrow d\,\tilde{\ell}_{Li}$ if $m_{\tilde{\ell}_i} < m_t$.
However, the Tevatron can only test couplings $\lam'_{i31}$ via top decay if they lie at their upper bounds \cite{Ghosh:1996bm}, see Table~\ref{RPV_couplings}. We use smaller $\lam'_{i31}$ couplings in the following. \mymed A non-vanishing $\lam'_{i31}$ coupling also contributes to top-pair production, see Refs.~\cite{Ghosh:1996bm,Hikasa:1999wy,Li:2006he}. The top quarks in the $t \bar t$ events are polarized, since the $\text{B}_3$ operator couples only to left-handed top quarks. It is shown in Refs.~\cite{Ghosh:1996bm,Hikasa:1999wy,Li:2006he} that the Tevatron at the end of Run~II can only test couplings $\lam'_{i31}$ which lie near their current upper bounds, {\it cf.} Table~\ref{RPV_couplings}. The LHC will be able to probe couplings $\lam'_{i31}$ down to $\lam'_{i31}=0.2$ via top polarization \cite{Li:2006he}. \subsubsection{Constraints from the CERN $p \bar p$ Collider} Unlike D0 and CDF, the UA2 collaboration at the CERN $p \bar p$ collider was able to measure the hadronic decay mode of the $Z$ and $W$ \cite{Alitti:1990kw}. They also searched for a $W^\prime$ and a $Z^\prime$ decaying into two jets. They found no excess over the SM background and therefore set exclusion limits on $W^\prime$ and $Z^\prime$ production with masses between 80 GeV and 320 GeV \cite{Alitti:1990kw,Alitti:1993pn}. \mymed We compared the exclusion limits with our NLO cross section predictions for single slepton, Eq.~(\ref{res_slep}), and sneutrino, Eq.~(\ref{res_sneu}), production, assuming again $\lamp_{ijk}|_{\rm GUT}=0.1$ and one of the couplings shown in Table~\ref{RPV_couplings} \cite{Dreiner:2006sv}. Our cross section prediction is at least one order of magnitude smaller than the exclusion limits in the relevant mass range.
\section{Sneutrino LSP parameter space} \label{parameter_space} We have shown in Sect.~\ref{snu_LSP_in_mSUGRA} that one non-vanishing coupling $\lam'_{ijk}|_{\rm GUT}=\mathcal{O}(10^{-1})$ may lead to a $\tilde{\nu}_i$ LSP in $\text{B}_3$ mSUGRA models, {\it cf.} Fig.~\ref{lambdap231}. We also presented the $\lam'_{ijk}$ couplings which have sufficiently weak upper bounds to allow for a $\tilde{\nu}_i$ LSP, see Table~\ref{RPV_couplings}. All lepton flavours are possible, although a $\tilde{\nu}_e$ LSP is disfavoured due to the stronger bounds on the $\lam'_{1jk}$. Thus we concentrate on $\tilde{\nu}_\mu$ and $\tilde{\nu}_\tau$ LSPs in the following. \mymed In this section, we investigate in detail the dependence of the $\tilde{\nu}_i$ LSP parameter space on the mSUGRA parameters $M_0$, $M_{1/2}$, $A_0$ and $\tan \beta$. This is the central part of our paper. We explore 2-dimensional parameter spaces, where our scans are centered around the following points: \begin{align} \begin{split} \textnormal{\bf Point I:\,\,}& M_0 = 50 \textnormal{\,GeV},\, M_{1/2}=500 \textnormal{\,GeV}, \\& A_0=-600 \textnormal{\,GeV},\, \tan\beta=10, \\& \textnormal{sgn}(\mu) = +1, \, \lamp_{231}\lvert_{\rm GUT} =0.11, \\[2ex] \textnormal{\bf Point II:\,\,}& M_0 = 200 \textnormal{\,GeV},\, M_{1/2}=290 \textnormal{\,GeV}, \\& A_0=-550 \textnormal{\,GeV},\, \tan\beta=12, \\& \textnormal{sgn}(\mu) = +1, \, \lamp_{331}\lvert_{\rm GUT} =0.12. \label{scanpoints} \end{split} \end{align} We perform our parameter scans with an unpublished $\text{B}_3$ version of {\tt SOFTSUSY} \cite{rpv_softsusy}. \mymed Point I results in a $\tilde{\nu}_\mu$ LSP with a mass of 130 GeV. The NLSP is the left-handed smuon, $\tilde{\muon}_L$, with a mass of 159 GeV. Note that the $\tilde{\muon}_L$ mass is also reduced due to $\lam'_{231}|_{\rm GUT}\not=0$, and that the $\tilde{\muon}_L$ is always heavier than the $\tilde{\nu}_\mu$ for $\tan \beta > 1$, see Eq.~(\ref{sfermion_masses}) and Fig.~\ref{lambdap231}.
The masses of the other LSP candidates, namely the $\tilde{\tau}_1$ and the $\tilde{\chi}_1^0$, are 186 GeV and 205 GeV, respectively. Due to the rather large mass difference between the $\tilde{\nu}_\mu$ LSP on the one side, and the $\tilde{\tau}_1$ and $\tilde{\chi}_1^0$ on the other, we expect an extended $\tilde{\nu}_\mu$ LSP parameter space. This is indeed the case, as shown in the following. \mymed Point II results in a $\tilde{\nu}_\tau$ LSP with a mass of 107 GeV. The NLSP is the $\tilde{\chi}_1^0$ with a mass of 116 GeV. The NNLSP is the $\tilde{\tau}_1$, which has a large left-handed component here, because the soft breaking mass $M_{\tilde{\tau}_L}$, Eq.~(\ref{stau_parameter}), is also reduced via the non-vanishing $\lam'_{331}|_{\rm GUT}$ coupling. In contrast, $M_{\tilde{\tau}_R}$ is not affected. The $\tilde{\tau}_1$ mass is 120 GeV. \mymed The mass difference between the $\tilde{\nu}_\tau$ LSP and the $\tilde{\tau}_1$ is smaller for Point II than for Point I, because $\lam'_{331}|_{\rm GUT}$ also reduces the mass of the $\tilde{\tau}_1$, which is an admixture of $\tilde\tau_L$ and $\tilde\tau_R$. The $\tilde{\tau}_1$ thus competes with the $\tilde{\nu}_\tau$ for being the LSP; \textit{cf.} Ref.~\cite{Allanach:2006st}. In contrast, $\lam'_{231}|_{\rm GUT}\not=0$ reduces the mass of the $\tilde{\muon}_L$, but the $\tilde{\muon}_L$ is always heavier than the $\tilde{\nu}_\mu$. We therefore expect a smaller $\tilde{\nu}_\tau$ LSP parameter space around Point II than the $\tilde{\nu}_\mu$ LSP parameter space around Point I. \mymed It is worth mentioning that Point I leads to a heavier sparticle mass spectrum than Point II.
This stems from the fact that we have chosen our central scan points such that the SUSY contributions to the anomalous magnetic moment of the muon, $\delta a_\mu^{\rm SUSY}$, can explain the observed discrepancy, $\delta a_\mu$, between experiment, $a_\mu^{\rm exp}$, and the SM prediction, $a_\mu^{\rm SM}$, \begin{equation} \delta a_\mu= a_\mu^{\rm exp} - a_\mu^{\rm SM} = (29.5 \pm 8.8) \times 10^{-10} \, , \label{amu_exp} \end{equation} which corresponds to a $3.4 \sigma$ deviation \cite{Bennett:2006fi,Miller:2007kk,Stockinger:2007pe}. In the following, we show in our parameter scans in Figs.~\ref{M12-A0:deltaM}--\ref{fig:M0_M12} contour lines where the SUSY contributions, $\delta a_\mu^{\rm SUSY}$, correspond to the \begin{align} \begin{split} \text{central value}: & \,\, \delta a_\mu^{\rm SUSY}=29.5 \times 10^{-10} \\ \Leftrightarrow & \, \text{yellow line}, \, \text{labelled with} ``\,0\,'' \, , \\[2ex] \text{central value} \pm 1 \sigma : & \,\, \delta a_\mu^{\rm SUSY}=(29.5 \pm 8.8) \times 10^{-10} \\ \Leftrightarrow & \, \text{blue line}, \, \text{labelled with} ``\pm 1'' \, , \\[2ex] \text{central value} \pm 2 \sigma : & \,\, \delta a_\mu^{\rm SUSY}=(29.5 \pm 17.6) \times 10^{-10} \\ \Leftrightarrow &\, \text{green line} , \, \text{labelled with} ``\pm 2'' \, , \\[2ex] \text{central value} \pm 3 \sigma : & \,\, \delta a_\mu^{\rm SUSY}=(29.5 \pm 26.4) \times 10^{-10} \\ \Leftrightarrow &\, \text{magenta line} , \, \text{labelled with} ``\pm 3'' \, . \label{mu_magnetic_moment} \end{split} \end{align} Yellow (labelled with ``$\,0\,$''), blue (labelled with ``$\pm 1$''), green (labelled with ``$\pm 2$'') and magenta (labelled with ``$\pm 3$'') are the colours of the contour lines in the plots which we show in the following sections.
\mymed The SUSY contributions to the anomalous magnetic moment of the muon, $\delta a_\mu^{\rm SUSY}$, enter at the one-loop level, see for example Refs.~\cite{Grifols:1982vx,Stockinger:2006zn}, and involve the $\tilde{\muon}_L$ and the $\tilde{\nu}_\mu$. Thus, they are enhanced if the $\tilde{\muon}_L$ and $\tilde{\nu}_\mu$ are light. As a consequence, $\delta a_\mu^{\rm SUSY}$ increases if we switch on $\lam'_{231}|_{\rm GUT}$, because the masses of the $\tilde{\muon}_L$ and $\tilde{\nu}_\mu$ decrease. In contrast, $\lam'_{331}|_{\rm GUT}$ does not affect $\delta a_\mu^{\rm SUSY}$. Note that we have not included $\text{B}_3$ contributions to $\delta a_\mu^{\rm SUSY}$, because they are at most at the percent level and can therefore be neglected \cite{Kim:2001se}. \mymed We also consider the constraints from BR($b \rightarrow s \gamma$). The current experimental value is \cite{Barberio:2008fa} \begin{equation} \text{BR}(b \rightarrow s \gamma) = (3.52 \pm 0.25) \times 10^{-4}\, . \label{bsg_exp} \end{equation} Here, we have added the statistical and systematic errors in quadrature \cite{Barberio:2008fa}. If we also include the combined theoretical error of $0.3 \times 10^{-4}$ \cite{bsg_theo_error}, we obtain the $2 \sigma$ window \begin{equation} 2.74 \times 10^{-4}< \text{BR}(b \rightarrow s \gamma) < 4.30 \times 10^{-4}\, , \label{bsg_2sigma} \end{equation} where we have now added theoretical and experimental errors in quadrature. \mymed The complete $\tilde{\nu}_\mu$ LSP parameter space which we will show in the following, {\it i.e.} Figs.~\ref{M12-A0:deltaM}, \ref{fig:DeltaM_lamp221_a0tanb} and \ref{fig:DeltaM_lamp221_m0m12}, is consistent with BR($b \rightarrow s \gamma$) at the $2\sigma$ level, Eq.~(\ref{bsg_2sigma}).
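The arithmetic behind the $2\sigma$ window of Eq.~(\ref{bsg_2sigma}) is just error combination in quadrature:

```python
import math

br_central = 3.52e-4       # HFAG average, Eq. (bsg_exp)
err_exp = 0.25e-4          # statistical and systematic errors, already combined
err_th = 0.30e-4           # combined theoretical error
err_tot = math.hypot(err_exp, err_th)   # quadrature sum of both errors

br_low = br_central - 2.0 * err_tot     # lower edge of the 2-sigma window
br_high = br_central + 2.0 * err_tot    # upper edge of the 2-sigma window
```

This reproduces the quoted window $2.74 \times 10^{-4} < \text{BR}(b \rightarrow s \gamma) < 4.30 \times 10^{-4}$.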
The $\tilde{\nu}_\tau$ LSP parameter space in the $A_0$--$\tan \beta$ [$M_{1/2}$--$M_{0}$] plane, Fig.~\ref{fig:DeltaM_lamp331_a0tanb} [Fig.~\ref{fig:DeltaM_lamp331_m0m12}], is consistent with BR($b \rightarrow s \gamma$) at $2\sigma$, Eq.~(\ref{bsg_2sigma}), for $\tan \beta\lsim 11$ [$M_{1/2} \gsim 290$ GeV], corresponding to the dashed black line in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} [Fig.~\ref{fig:DeltaM_lamp331_m0m12}]. We will mainly show contour lines for $\delta a_\mu^{\rm SUSY}$ in the following, {\it cf.} Eq.~(\ref{mu_magnetic_moment}), because the experimental value of $a_\mu$ is in general more restrictive on the $\tilde{\nu}_i$ LSP parameter space than BR($b \rightarrow s \gamma$). \mymed We finally want to point out that the complete $\tilde{\nu}_\mu$ and $\tilde{\nu}_\tau$ LSP parameter space, which we will show in the next three sections, possesses a branching ratio for $B_s \rightarrow \mu^+ \mu^-$ which lies at least one order of magnitude below the current experimental upper bound \cite{Barberio:2008fa}, \begin{equation} \text{BR}(B_s \rightarrow \mu^+ \mu^-) < 4.7 \times 10^{-8}\, . \end{equation} \mymed We have employed {\tt micrOMEGAs1.3.7} \cite{Belanger:2001fz} to calculate $\delta a_\mu^{\rm SUSY}$, BR($b \rightarrow s \gamma$) and BR($B_s \rightarrow \mu^+ \mu^-$). According to Ref.~\cite{Allanach:2006st}, $\text{B}_3$ contributions to BR($b \rightarrow s \gamma$) and BR($B_s \rightarrow \mu^+ \mu^-$) can also be neglected for only one dominant $\lambda'_{ijk}|_{\rm GUT}$. \subsection{$A_0$ Dependence} \label{A0_dependence} \begin{figure} \setlength{\unitlength}{1in} \includegraphics[scale=0.44, bb = 30 40 570 530, clip=true]{hdk_M12p500_lp01.eps} \put(-2.0,0.0){Q [GeV]} \put(-3.38,1.4){\rotatebox{90}{$(\mathbf{h_{D^k}})_{ij}$ [GeV]}} \caption{\label{fig:hDk_M12_500}Running of $(\mathbf{h_{D^k}})_{ij}$ from $M_{\rm GUT}$ to $M_Z$ for different values of $A_0$.
At $M_{\rm GUT}$, we choose $M_{1/2}=500$ GeV and $\lam'_{ijk}=0.1$.} \end{figure} \begin{figure} \setlength{\unitlength}{1in} \includegraphics[scale=0.44, bb = 30 40 570 530, clip=true]{hdk2_M12p500_lp01.eps} \put(-2.0,0.0){Q [GeV]} \put(-3.38,1.4){\rotatebox{90}{$(\mathbf{h_{D^k}})^2_{ij}$ [GeV$^2$]}} \caption{\label{fig:hDk2_M12_500}Running of $(\mathbf{h_{D^k}})^2_{ij}$ from $M_{\rm GUT}$ to $M_Z$ for different values of $A_0$. At $M_{\rm GUT}$, we choose $M_{1/2}=500$ GeV and $\lam'_{ijk}=0.1$.} \end{figure} We have chosen two scenarios, Point I and Point II, Eq.~(\ref{scanpoints}), which we use as central values for 2-dimensional mSUGRA parameter scans. For both points $A_0<0$, with a magnitude of a few hundred GeV. We now show that this choice of $A_0$ enhances the negative contribution to the $\tilde{\nu}_i$ mass which originates from a non-vanishing $\lam'_{ijk}|_{\rm GUT}$ coupling, {\it cf.} Eq.~(\ref{sneu_RGE}). \mymed According to Eqs.~(\ref{sneu_RGE}) and (\ref{hdk_RGE}), $A_0$ enters the running of $m_{\tilde{\nu}_i}$ via the $\text{B}_3$ soft-breaking trilinear scalar coupling $(\mathbf{h_{D^k}})_{ij}$ \cite{Allanach:2003eb}. As $t$ is decreased, $(\mathbf{h_{D^k}})_{ij}$ gives a negative contribution to $m^2_{\tilde{\nu}_i}$, which is proportional to the integral of $(\mathbf{h_{D^k}})_{ij}^2$ over $t$, from $t_{\rm min}=\ln(M_{Z})$ to $t_{\rm max}=\ln(M_{\rm GUT})$. \mymed We show in Fig.~\ref{fig:hDk_M12_500} the running of the trilinear scalar coupling $(\mathbf{h_{D^k}})_{ij}$. We assume one non-vanishing coupling $\lam'_{ijk}\lvert_{\rm GUT}=0.1$ and a universal gaugino mass $M_{1/2}=500$ GeV. Different lines correspond to different values of $A_0$.
We have employed the one-loop contributions from gauge interactions \cite{Allanach:2003eb}, as well as the leading $\text{B}_3$ interactions \bea 16\pi^2 \frac{d(\mathbf{h_{D^k}})_{ij}}{dt} &=& -(\mathbf{h_{D^k}})_{ij} \left( \frac{7}{15}g_1^2 + 3g_2^2 + \frac{16}{3}g_3^2 \right) \nonumber \\ &&\hspace{-2cm}+ \lam'_{ijk} \left( \frac{14}{15}g_1^2 M_1 + 6 g_2^2 M_2 + \frac{32}{3} g_3^2 M_3 \right) \,. \nonumber \\ \label{running_hDK} \eea Here, $M_1$, $M_2$ and $M_3$ are the U(1), SU(2) and SU(3) gaugino masses. The running of $(\mathbf{h_{D^k}})_{ij}$ is dominated by the strong interaction, {\it i.e.} by the strong coupling $g_3$ and the gluino mass $M_3$. It is governed by two terms with opposite sign in Eq.~(\ref{running_hDK}), one proportional to $\lam'_{ijk}$ and one proportional to $(\mathbf{h_{D^k}})_{ij}$. \mymed The term proportional to $\lam'_{ijk}$ is always positive and thus decreases $(\mathbf{h_{D^k}})_{ij}$ when we go from $M_{\rm GUT}$ to $M_{Z}$; note that we take $\lam'_{ijk}$ to be positive. Furthermore, the gluino mass $M_3$ will increase by a factor of roughly 2.5 and $\lam'_{ijk}$ will increase by roughly a factor of 3 when we run from $M_{\rm GUT}$ to $M_Z$. Therefore, this term becomes relatively more important towards lower scales. \mymed The sign of the term proportional to $(\mathbf{h_{D^k}})_{ij}$ depends on the sign of $A_0$, according to Eq.~(\ref{hdk_RGE}). At $M_{\rm GUT}$, this term is positive (negative) for negative (positive) $A_0$. Therefore, for positive $A_0$, the term proportional to $(\mathbf{h_{D^k}})_{ij}$ increases $(\mathbf{h_{D^k}})_{ij}$ when we run from $M_{\rm GUT}$ to $M_{Z}$. \mymed We can now understand the running of $(\mathbf{h_{D^k}})_{ij}$ in Fig.~\ref{fig:hDk_M12_500}. Looking at the solid red line, $A_0=2500$ GeV, we see that $(\mathbf{h_{D^k}})_{ij}$ first increases when we go from $M_{\rm GUT}$ to smaller scales.
Due to the large $A_0$ at $M_{\rm GUT}$, the negative term proportional to $(\mathbf{h_{D^k}})_{ij}$ dominates and increases $(\mathbf{h_{D^k}})_{ij}$. Going to lower scales, the positive term proportional to $\lam'_{ijk}$ grows faster and starts to dominate at $Q\approx 10^{6}$ GeV. From this scale on, $(\mathbf{h_{D^k}})_{ij}$ decreases. In contrast, if we start with negative $A_0$ (solid black line), both terms give negative contributions to the running of $(\mathbf{h_{D^k}})_{ij}$. Then, $(\mathbf{h_{D^k}})_{ij}$ decreases with a large slope. \mymed The resulting running of $(\mathbf{h_{D^k}})^2_{ij}$ is shown in Fig.~\ref{fig:hDk2_M12_500}. Recall from Eq.~(\ref{sneu_RGE}) that $m^2_{\tilde{\nu}_i}$ is reduced in proportion to the integral of $(\mathbf{h_{D^k}})^2_{ij}$ over $t$. A negative value of $A_0$ therefore leads to a smaller $m_{\tilde{\nu}_i}$ than a positive value of $A_0$ with the same magnitude. We expect from Fig.~\ref{fig:hDk2_M12_500} that a $\tilde{\nu}_i$ LSP in $\text{B}_3$ mSUGRA is preferred for negative values of $A_0$ with a large magnitude. We also expect that $m_{\tilde{\nu}_i}$ has a maximum in the $A_0$ direction at $A_0=1000$ GeV if $M_{1/2}=500$ GeV. In general, there should be a line in the $M_{1/2}$--$A_{0}$ plane where $m_{\tilde{\nu}_i}$ is ``maximal'', falling off to either side.
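This behaviour can be reproduced qualitatively with a few lines of numerical integration. The sketch below keeps only the dominant SU(3) terms of Eq.~(\ref{running_hDK}), freezes $g_3$, $M_3$ and $\lam'$ at illustrative values, and assumes the common boundary condition $(\mathbf{h_{D^k}})_{ij}(M_{\rm GUT}) = A_0\,\lam'_{ijk}$; none of the numbers are taken from the full numerical analysis.

```python
import numpy as np

def run_h(h_gut, lam_p, g3=0.7, M3=1000.0, n_steps=2000):
    """Toy downward running of the trilinear coupling h_Dk from M_GUT
    to M_Z, keeping only the SU(3) pieces of the one-loop RGE:
        16 pi^2 dh/dt = -(16/3) g3^2 h + (32/3) g3^2 M3 lam'.
    Returns h(M_Z) and the integral of h^2 over t, which controls
    the negative shift of the sneutrino mass squared."""
    t_gut, t_z = np.log(2.0e16), np.log(91.0)
    dt = (t_gut - t_z) / n_steps
    h, h2_integral = h_gut, 0.0
    for _ in range(n_steps):
        beta = (-(16.0 / 3.0) * g3**2 * h
                + (32.0 / 3.0) * g3**2 * M3 * lam_p) / (16.0 * np.pi**2)
        h -= beta * dt              # Euler step downwards in t
        h2_integral += h**2 * dt
    return h, h2_integral

lam_p = 0.1
h_neg, I_neg = run_h(-600.0 * lam_p, lam_p)   # A0 = -600 GeV
h_pos, I_pos = run_h(+600.0 * lam_p, lam_p)   # A0 = +600 GeV
# negative A0: |h| grows monotonically, so the integral of h^2 is
# larger and the sneutrino mass is reduced more strongly
```

For $A_0<0$ both terms drive $h$ to larger negative values, so the integral of $h^2$ over $t$, and hence the downward shift of $m^2_{\tilde{\nu}_i}$, exceeds that of a positive $A_0$ of the same magnitude, in line with the figures.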
\mymed \begin{figure} \setlength{\unitlength}{1in} \includegraphics[scale=0.84, bb = 110 70 570 260,clip=true]{lamp231_largeA0M12_M00_tanb10_delta.eps} \put(-5.39,0.31){\epsfig{file=lamp231_largeA0M12_M00_tanb10_contour.eps,scale=0.5707}} \put(-2.4,0.5){\rotatebox{90}{$\Delta_M = M_{\rm NLSP} - M_{\rm LSP}$ [GeV]}} \put(-3.9,0.07){\makebox(0,0){$\mhalf$ [GeV]}} \put(-5.45,1.1){\rotatebox{90}{$\azero$[GeV]}} \put(-3.4,2.1){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \put(-3.9,2.02){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \put(-4.33,1.88){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{0}$}}} \put(-4.67,1.7){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+1}$}}} \put(-3.4,1.5){\makebox(0,0){\color[rgb]{1,1,1} {$\boldsymbol{\stau}_1$ \bf{LSP}}}} \put(-3.9,1.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\sneu_\mu}$ \bf{LSP}}}} \caption{\label{M12-A0:deltaM} Mass difference in GeV between the NLSP and the LSP as a function of $M_{1/2}$ and $A_0$. The other mSUGRA parameters are $M_0=0$ GeV, $\tan \beta=10$, $\text{sgn}(\mu)=+1$ and $\lam'_{231}\lvert_{\rm GUT}=0.16$. We observe a $\tilde{\nu}_\mu$ LSP and a $\tilde{\tau}_1$ LSP region. The contour lines correspond to different SUSY contributions to the anomalous magnetic moment of the muon, {\it cf.} Eq.~(\ref{mu_magnetic_moment}). 
The blackened out region is excluded due to tachyons or the LEP $\tilde{\nu}_\mu$, $h$ mass bounds, see Sect.~\ref{LEP_constraints}.} \end{figure} \begin{figure} \setlength{\unitlength}{1in} \includegraphics[scale=0.84, bb = 110 70 570 260, clip=true]{lamp231_largeA0M12_M00_tanb10_mLSP.eps} \put(-5.39,0.31){\epsfig{file=lamp231_largeA0M12_M00_tanb10_contour.eps,scale=0.5707}} \put(-2.4,0.9){\rotatebox{90}{$m_{\sneu_{\mu}}$ [GeV]}} \put(-3.9,0.07){\makebox(0,0){$\mhalf$ [GeV]}} \put(-5.45,1.1){\rotatebox{90}{$\azero$[GeV]}} \put(-3.4,2.1){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \put(-3.9,2.02){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \put(-4.33,1.88){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{0}$}}} \put(-4.67,1.7){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+1}$}}} \caption{\label{M12-A0:Mnu} Mass of the $\tilde{\nu}_\mu$ in GeV for the $\tilde{\nu}_\mu$ LSP region shown in Fig.~\ref{M12-A0:deltaM}.} \end{figure} We show in Fig.~\ref{M12-A0:deltaM} the mass difference in GeV between the NLSP and the LSP as a function of $M_{1/2}$ and $A_0$. The other mSUGRA parameters are $M_0=0$ GeV, $\tan \beta=10$, $\text{sgn}(\mu)=+1$ and $\lam'_{231}\lvert_{\rm GUT}=0.16$. The yellow (labelled with $``\, 0 \,"$), blue (labelled with $`` \pm 1 "$) and green (labelled with $`` \pm 2 "$) lines indicate the SUSY contributions to the anomalous magnetic moment of the muon as described in Eq.~(\ref{mu_magnetic_moment}). The blackened out region corresponds to mSUGRA points that lead to tachyons or where $m_{\tilde{\nu}_\mu}$ or $m_h$ lies below the LEP bound, see Sect.~\ref{LEP_constraints}. In Fig.~\ref{M12-A0:Mnu}, we give the mass of the $\tilde{\nu}_\mu$ in GeV for the $\tilde{\nu}_\mu$ LSP region shown in Fig.~\ref{M12-A0:deltaM}. \mymed We see in Fig.~\ref{M12-A0:deltaM} a region with a $\tilde{\nu}_\mu$ LSP and a region with a $\tilde{\tau}_1$ LSP. The crossover region is marked in black.
We get a $\tilde{\nu}_\mu$ LSP for small and very large values of $A_0$, as expected from Fig.~\ref{fig:hDk2_M12_500}. We also see in Fig.~\ref{M12-A0:Mnu} that $m_{\tilde{\nu}_\mu}$ is maximal for $M_{1/2}=500$ GeV and $A_{0} \approx 1000$ GeV in the $A_0$ direction. The region of negative $A_0$ is not shown in Figs.~\ref{M12-A0:deltaM}, \ref{M12-A0:Mnu}, because the influence of $\lam'_{ijk}|_{\rm GUT}$ on $m_{\tilde{\nu}_\mu}$ is so enhanced that we violate the mass bound of 78 GeV or even obtain a tachyonic $\tilde{\nu}_\mu$ in large regions of $A_0<0$ GeV. In the following, we choose smaller values of $\lam'_{ijk}|_{\rm GUT}$. \subsection{$A_0$--$\tan \beta$ Plane} \label{A0tanbplane} \begin{figure*}[h!] \setlength{\unitlength}{1in} \subfigure[Mass difference, $\Delta M$, between the NLSP and LSP. The LSP candidates in different regions are explicitly mentioned. The blackened out region corresponds to parameter points which possess a tachyon or where the $\tilde{\nu}_\mu$ or $h$ mass violates the LEP bounds, Sect.~\ref{LEP_constraints}. \label{fig:DeltaM_lamp221_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_A0tanb_M12500_M050_delta.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_A0tanb_M12500_M050_contour.eps,width=2.509in}} \put(2.7,0.5){\rotatebox{90}{$\Delta M = M_{\rm NLSP} - M_{\rm LSP}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \put(1.12,1.2){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\sneu_\mu}$ \bf{LSP}}}} \put(1.6,1.85){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau_1}$ \bf{LSP}}}} \end{picture} }\hfill \subfigure[ Mass difference, $\Delta M$, between the NLSP and LSP. The LSP candidates in different regions are explicitly mentioned. The blackened out region corresponds to parameter points which possess a tachyon or where the $\tilde{\nu}_\tau$ or $h$ mass violates the LEP bounds, {\it cf.} Sect.~\ref{LEP_constraints}.
\label{fig:DeltaM_lamp331_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_A0tanb_M0200_M12290_delta.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_contor.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_bsgcontor.eps,width=2.509in}} \put(2.7,0.5){\rotatebox{90}{$\Delta M = M_{\rm NLSP} - M_{\rm LSP}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \put(1.2,1.15){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\sneu_\tau}$ \bf{LSP}}}} \put(1.8,1.15){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\neutralino_1}$}}} \put(1.55,2.04){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau}_1$ \bf{ LSP}}}} \end{picture} } \subfigure[$\,\tilde{\nu}_\mu$ mass, $m_{\sneu_{\mu}}$, for the $\tilde{\nu}_\mu$ LSP region of Fig.~\ref{fig:DeltaM_lamp221_a0tanb}. \label{fig:SnuMass_Lamp221_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_A0tanb_M12500_M050_mLSP.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_A0tanb_M12500_M050_contour.eps,width=2.509in}} \put(2.7,1.1){\rotatebox{90}{$m_{\sneu_{\mu}}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \put(1.92,0.74){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \put(1.92,1.2){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \put(1.96,1.72){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{0}$}}} \put(1.69,2.05){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+1}$}}} \put(1.12,2.05){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+2}$}}} \put(0.8,2.05){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+3}$}}} \end{picture} }\hfill \subfigure[$\,\tilde{\nu}_\tau$ mass, $m_{\sneu_{\tau}}$, for the $\tilde{\nu}_\tau$ LSP region of Fig.~\ref{fig:DeltaM_lamp331_a0tanb}. 
\label{fig:SnuMass_Lamp331_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_A0tanb_M0200_M12290_mLSP.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_contor.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_bsgcontor.eps,width=2.509in}} \put(2.7,1.1){\rotatebox{90}{$m_{\sneu_{\tau}}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \put(1.92,0.75){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \put(1.92,1.41){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \end{picture} } \subfigure[Mass difference of the $\tilde{\mu}_L$ and $\tilde{\chi}_1^0$ for the $\tilde{\nu}_\mu$ LSP region of Fig.~\ref{fig:DeltaM_lamp221_a0tanb}. We have $m_{\tilde{\mu}_L} > m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\mu}_L>\neutralino_1$) and $m_{\tilde{\mu}_L} < m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\mu}_L<\neutralino_1$). \label{fig:MOrder_Lamp221_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_A0tanb_M12500_M050_mSmuL_m_mMneut1.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_A0tanb_M12500_M050_contour.eps,width=2.509in}} \put(2.7,0.8){\rotatebox{90}{$|m_{\tilde{\mu}_L} - m_{\tilde{\chi}_1^0}|$ [GeV]}} \put(1.1,1.21){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\tilde{\mu}}_L<\boldsymbol{\neutralino_1}$}}} \put(1.78,1.04){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\tilde{\mu}}_L>\boldsymbol{\neutralino_1}$}}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \end{picture} }\hfill \subfigure[ Mass difference of the $\tilde{\tau}_1$ and $\tilde{\chi}_1^0$ for the $\tilde{\nu}_\tau$ LSP region of Fig.~\ref{fig:DeltaM_lamp331_a0tanb}. We have $m_{\tilde{\tau}_1} > m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\tau}_1>\neutralino_1$) and $m_{\tilde{\tau}_1} < m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\tau}_1<\neutralino_1$). 
\label{fig:MOrder_Lamp331_a0tanb}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_A0tanb_M0200_M12290_Mstau1_m_Mneut1.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_contor.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_A0tanb_M0200_M12290_bsgcontor.eps,width=2.509in}} \put(1.3,1.05){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau}_1>\boldsymbol{\neutralino_1}$}}} \put(1.3,1.75){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau}_1<\boldsymbol{\neutralino_1}$}}} \put(2.7,0.8){\rotatebox{90}{$|m_{\tilde{\tau}_1} - m_{\tilde{\chi}_1^0}|$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\azero$ [GeV]}} \put(0.05,1.2){\rotatebox{90}{$\tanb$}} \end{picture} } \caption{ Sneutrino LSP parameter space in the $\azero$--$\tanb$ plane. The left panel (right panel) shows the $\tilde{\nu}_\mu$ LSP ($\tilde{\nu}_\tau$ LSP) region obtained via $\lam'_{231}\lvert_{\rm GUT}=0.11$, $M_0=50$ GeV, $M_{1/2}=500$ GeV and $\text{sgn}(\mu)=+1$ ($\lam'_{331}\lvert_{\rm GUT}=0.12$, $M_0=200$ GeV, $M_{1/2}=290$ GeV and $\text{sgn}(\mu)=+1$). The plots show from top to bottom the mass difference between the NLSP and LSP, $\Delta M$, the mass of the sneutrino LSP, $m_{\sneu}$, and the mass difference between the $\tilde{\chi}_1^0$ and the $\tilde{\mu}_L$ (left panel) or between the $\tilde{\chi}_1^0$ and $\tilde{\tau}_1$ (right panel). The yellow (labelled with $``\, 0 \,"$), blue (labelled with $`` \pm 1 "$), green (labelled with $`` \pm 2 "$) and magenta (labelled with $`` \pm 3 "$) contours correspond to different SUSY contributions to the anomalous magnetic moment of the muon, $\delta a_\mu^{\rm SUSY}$, as described in Eq.~(\ref{mu_magnetic_moment}). The dashed black line (right panel) corresponds to BR($b \rightarrow s \gamma)=2.74 \times 10^{-4}$, Eq.~(\ref{bsg_2sigma}). \label{fig:a0_tanb}} \end{figure*} We investigate in this section the sneutrino LSP parameter space in the $A_0$--$\tan \beta$ plane. 
As central values for our 2-dimensional scans, we choose the points given in Eq.~(\ref{scanpoints}). \mymed We show in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} [Fig.~\ref{fig:DeltaM_lamp331_a0tanb}] the $\tilde{\nu}_\mu$ LSP [$\tilde{\nu}_\tau$ LSP] parameter space in the $A_0$--$\tan \beta$ plane. We have chosen $\lam'_{231}\lvert_{\rm GUT}=0.11$ [$\lam'_{331}\lvert_{\rm GUT}=0.12$]. Both figures show the mass difference between the NLSP and the LSP in GeV. The solid contour lines correspond to different SUSY contributions to the anomalous magnetic moment of the muon, $\delta a_\mu^{\rm SUSY}$, as described in Eq.~(\ref{mu_magnetic_moment}). The dashed black line in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} corresponds to BR($b\rightarrow s\gamma)=2.74 \times 10^{-4}$, Eq.~(\ref{bsg_2sigma}), {\it i.e.} the parameter space below that line is consistent with $b \rightarrow s \gamma$ at $2\sigma$. The blackened out region is excluded due to the presence of tachyons or by the LEP $\tilde{\nu}_{\mu/\tau}$ and Higgs mass bounds, see Sect.~\ref{LEP_constraints}. \mymed We observe that the $\tilde{\nu}_\mu$ LSP lives in an extended region of $\text{B}_3$ mSUGRA parameter space. For $\tan \beta = 6$, we find a $\tilde{\nu}_\mu$ LSP between $A_0=-750$ GeV and $A_0=-300$ GeV. For $A_0 = -700$ GeV, we find a $\tilde{\nu}_\mu$ LSP between $\tan \beta = 4$ and $\tan \beta =21$. We also observe that most of the $\tilde{\nu}_\mu$ LSP region is consistent with the observed anomalous magnetic moment of the muon at the $1 \sigma$ (blue lines) and $2 \sigma$ (green lines) level, {\it cf.} Eq.~(\ref{mu_magnetic_moment}). Recall that the complete $\tilde{\nu}_\mu$ LSP region in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} is also consistent with BR($b \rightarrow s \gamma$) at $2 \sigma$, Eq.~(\ref{bsg_2sigma}). The large region of $\tilde{\nu}_\mu$ LSP parameter space is a consequence of the choice of our central scan point, {\it i.e.} Point I of Eq.~(\ref{scanpoints}).
Here, the mass difference between the $\tilde{\nu}_\mu$ LSP and the $\tilde{\tau}_1$ ($\tilde{\chi}_1^0$), {\it i.e.} the other LSP candidates, is rather large, namely 56 GeV (75 GeV). \mymed We see in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} that we obtain a $\tilde{\tau}_1$ LSP if we increase $A_0$. As explained in the last section, a negative $A_0$ of large magnitude enhances the (negative) effect of $\lam'_{231}|_{\rm GUT}$ on the $\tilde{\nu}_\mu$ mass via the soft-breaking trilinear coupling $(\mathbf{h_{D^1}})_{23}$. The $\stau_1$ mass, on the other hand, depends only weakly on $A_0$. The dependence is via the tau Yukawa coupling, Eq.~(\ref{stau_parameter}), and due to left-right mixing, Eq.~(\ref{eq_staumassmatrix}). According to the last section, there should also be a $\tilde{\nu}_\mu$ LSP for large values of $A_0$. But in this case the Higgs mass lies below the LEP bound. \mymed We also obtain a $\tilde{\tau}_1$ LSP when we increase $\tan\beta$. $\tan \beta$ hardly affects the mass of the $\tilde{\nu}_\mu$ but affects the $\tilde{\tau}_1$ mass in two ways. First, increasing $\tan \beta$ increases the tau Yukawa coupling, which reduces the $\tilde{\tau}_1$ mass going from $M_{\rm GUT}$ to $M_Z$. This is parametrized by Eq.~(\ref{stau_parameter}). Second, increasing $\tan \beta$ increases the absolute value of the off-diagonal elements of the stau mass matrix, Eq.~(\ref{eq_staumassmatrix}). This leads to larger left-right mixing and thus also reduces the $\tilde{\tau}_1$ mass. \mymed Fig.~\ref{fig:DeltaM_lamp221_a0tanb} shows no region with a $\tilde{\chi}_1^0$ LSP. The entire allowed $A_0$--$\tan\beta$ plane in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} has a $\tilde{\tau}_1$ LSP for vanishing $\lam'_{231}$ because $M_{1/2}\gg M_0$. \mymed We show in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} the $\tilde{\nu}_\tau$ LSP parameter space.
We observe a ``smaller'' $\tilde{\nu}_\tau$ LSP region compared to the $\tilde{\nu}_\mu$ LSP region, Fig.~\ref{fig:DeltaM_lamp221_a0tanb}. We only find a $\tilde{\nu}_\tau$ LSP between $A_0=-630$ GeV and $A_0=-540$ GeV for $\tan \beta=8$. In addition, the experimental $2\sigma$ windows for $\delta a_\mu^{\rm SUSY}$, Eq.~(\ref{mu_magnetic_moment}), and BR($b \rightarrow s \gamma$), Eq.~(\ref{bsg_2sigma}), restrict the allowed $\tilde{\nu}_\tau$ LSP region in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} to lie between $\tan\beta=7$ and $\tan\beta=11$. \mymed We again obtain in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} the $\tilde{\tau}_1$ as the LSP when we go to larger values of $\tan \beta$ ($\tan \beta \approx 17$). Although the $\tilde{\nu}_\tau$ mass will also be reduced by a larger tau Yukawa coupling, {\it cf.} Eq.~(\ref{stau_parameter}), the squared mass of the right-handed stau is reduced twice as much as that of the $\tilde{\nu}_\tau$. In addition, $\tan \beta$ increases mixing between the $\tilde{\tau}_R$ and $\tilde{\tau}_L$, Eq.~(\ref{eq_staumassmatrix}). But it is not possible to find a B$_3$ mSUGRA point where the mass difference between the $\tilde{\nu}_\tau$ LSP and the $\tilde{\tau}_1$ is large, because $\lam'_{331}|_{\rm GUT}$ also reduces the mass of the $\tilde{\tau}_1$. \mymed We also obtain in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} a $\tilde{\chi}_1^0$ LSP instead of a $\tilde{\nu}_\tau$ or $\tilde{\tau}_1$ LSP if we increase $A_0$ beyond a certain value. The parameter space shown in Fig.~\ref{fig:DeltaM_lamp331_a0tanb} possesses a $\tilde{\chi}_1^0$ LSP for vanishing $\lam'_{331}|_{\rm GUT}$. Increasing $A_0$ reduces the effect of $\lam'_{331}|_{\rm GUT}$ on the $\tilde{\nu}_\tau$ and $\tilde{\tau}_1$ mass, but leaves the (bino-like) $\tilde{\chi}_1^0$ mass unaffected. Thus, if the influence of $\lam'_{331}|_{\rm GUT}$ on the $\tilde{\nu}_\tau$ and $\tilde{\tau}_1$ mass becomes smaller, we re-obtain the $\tilde{\chi}_1^0$ as the LSP.
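The left-right mixing effect invoked above can be made explicit by diagonalizing a generic $2\times2$ stau mass-squared matrix whose off-diagonal entry is $m_\tau(A_\tau-\mu\tan\beta)$. The soft masses, $\mu$ and $A_\tau$ in the sketch below are illustrative placeholders (D-term and $m_\tau^2$ diagonal contributions are dropped), not the spectrum of this analysis.

```python
import numpy as np

def stau1_mass(mLL2, mRR2, mu, tan_beta, A_tau=-500.0, m_tau=1.777):
    """Lighter mass eigenvalue (in GeV) of a simplified 2x2 stau
    mass-squared matrix with left-right mixing entry
    m_tau * (A_tau - mu * tan_beta)."""
    off = m_tau * (A_tau - mu * tan_beta)
    M2 = np.array([[mLL2, off],
                   [off, mRR2]])
    # eigvalsh returns eigenvalues in ascending order
    return np.sqrt(np.linalg.eigvalsh(M2)[0])

m_low_tb  = stau1_mass(250.0**2, 180.0**2, mu=600.0, tan_beta=10)
m_high_tb = stau1_mass(250.0**2, 180.0**2, mu=600.0, tan_beta=25)
# larger tan(beta) -> larger |off-diagonal| entry -> lighter stau_1
```

The larger the off-diagonal entry, the further level repulsion pushes the lighter eigenvalue below the smaller diagonal entry, which is the mechanism by which increasing $\tan\beta$ turns the $\tilde{\tau}_1$ into the LSP.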
\mymed Finally, we mention in our discussion of Fig.~\ref{fig:DeltaM_lamp331_a0tanb} that we have a ``triple-point'', where the $\tilde{\nu}_\tau$, the $\tilde{\tau}_1$ and the $\tilde{\chi}_1^0$ are degenerate in mass. The existence of this ``triple-point'' is a general feature of the sneutrino LSP parameter space. This has important consequences for the LHC phenomenology, because close to a ``triple-point'', we effectively have three nearly mass-degenerate LSP candidates at the same time. There are also large regions in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} and Fig.~\ref{fig:DeltaM_lamp331_a0tanb}, where two of the three LSP candidates are nearly degenerate in mass, {\it i.e.} $\Delta M \leq 5$ GeV. \mymed We present in Fig.~\ref{fig:SnuMass_Lamp221_a0tanb} [Fig.~\ref{fig:SnuMass_Lamp331_a0tanb}] the mass of the $\tilde{\nu}_\mu$ [$\tilde{\nu}_\tau$] for the corresponding sneutrino LSP regions of Fig.~\ref{fig:DeltaM_lamp221_a0tanb} [Fig.~\ref{fig:DeltaM_lamp331_a0tanb}]. The lightest sneutrino LSPs have a mass of 78 GeV stemming from LEP bounds, {\it cf.} Sect.~\ref{LEP_constraints}. The heaviest sneutrino LSPs, consistent with $a_\mu^{\rm exp}$, Eq.~(\ref{amu_exp}), and BR($b \rightarrow s \gamma$), Eq.~(\ref{bsg_2sigma}), are found in Fig.~\ref{fig:SnuMass_Lamp221_a0tanb} and possess a mass of roughly 200 GeV. If one wants to have a sneutrino LSP scenario consistent with the anomalous magnetic moment of the muon, then the sneutrino mass is not allowed to be much larger than 200 GeV (see also the next section). \mymed We show in Fig.~\ref{fig:MOrder_Lamp221_a0tanb} [Fig.~\ref{fig:MOrder_Lamp331_a0tanb}] the mass difference in GeV between the $\tilde{\chi}_1^0$ and the $\tilde{\mu}_L$ [mainly left-handed $\tilde{\tau}_1$]. Whether $m_{\tilde{\chi}_1^0} > m_{\tilde{\mu}_L}$ [$m_{\tilde{\tau}_1}$] or $m_{\tilde{\chi}_1^0} < m_{\tilde{\mu}_L}$ [$m_{\tilde{\tau}_1}$] has important consequences for collider phenomenology.
For example, the $\tilde{\mu}_L$ cannot decay into a $\mu$ and $\tilde{\chi}_1^0$ if $m_{\tilde{\chi}_1^0} > m_{\tilde{\mu}_L}$. This is the case in most of the $\tilde{\nu}_\mu$ LSP parameter space. The cascade decay, Eq.~(\ref{slep_cascade}), is then forbidden and cannot be explored at the Tevatron or LHC, as stated in Sect.~\ref{Tevatron_constraints}. We discuss further phenomenological implications in Sect.~\ref{pheno}. \subsection{$M_{1/2}$--$M_{0}$ Plane} \begin{figure*}[h!] \setlength{\unitlength}{1in} \subfigure[Mass difference $\Delta M$ between the NLSP and LSP. The LSP candidates in different regions are explicitly mentioned. The blackened out region corresponds to parameter points which possess a tachyon or where the $\tilde{\nu}_\mu$ or $h$ mass violates the LEP bounds, {\it cf.} Sect.~\ref{LEP_constraints}. \label{fig:DeltaM_lamp221_m0m12}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_M0M12_A0m600_tanb10_delta.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_M0M12_A0m600_tanb10_contour.eps,width=2.509in}} \put(2.7,0.5){\rotatebox{90}{$\Delta M = M_{\rm NLSP} - M_{\rm LSP}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \put(1.95,1.1){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau_1}$}}} \put(1.455,1.1){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\sneu_\mu}$ \bf{LSP}}}} \put(1.7,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\neutralino_1}$ \bf{LSP}}}} \end{picture} }\hfill \subfigure[ Mass difference $\Delta M$ between the NLSP and LSP. The LSP candidates in different regions are explicitly mentioned. The blackened out region corresponds to parameter points which possess a tachyon or where the $\tilde{\nu}_\tau$ or $h$ mass violates the LEP bounds, Sect.~\ref{LEP_constraints}.
\label{fig:DeltaM_lamp331_m0m12}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_M0M12_A0m550_tanb12_delta.eps, width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_contour.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_bsgcontour.eps,width=2.509in}} \put(2.7,0.5){\rotatebox{90}{$\Delta M = M_{\rm NLSP} - M_{\rm LSP}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \put(1.95,1.1){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau_1}$}}} \put(1.25,1.55){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\sneu_\tau}$ \bf{LSP}}}} \put(1.2,2.){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\neutralino_1}$ \bf{LSP}}}} \end{picture} } \subfigure[ $\,\tilde{\nu}_\mu$ mass, $m_{\sneu_{\mu}}$, for the $\tilde{\nu}_\mu$ LSP region of Fig.~\ref{fig:DeltaM_lamp221_m0m12}. \label{fig:SnuMass_Lamp221_m12m0}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_M0M12_A0m600_tanb10_mLSP.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_M0M12_A0m600_tanb10_contour.eps,width=2.509in}} \put(0.8,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{0}$}}} \put(1.08,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \put(1.5,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \put(0.58,1.6){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+3}$}}} \put(0.8,1.4){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+2}$}}} \put(0.95,1.2){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{+1}$}}} \put(2.7,1.1){\rotatebox{90}{$m_{\sneu_{\mu}}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \end{picture} }\hfill \subfigure[ $\,\tilde{\nu}_\tau$ mass, $m_{\sneu_{\tau}}$, for the $\tilde{\nu}_\tau$ LSP region of Fig.~\ref{fig:DeltaM_lamp331_m0m12}. 
\label{fig:SnuMass_Lamp331_m12m0}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_M0M12_A0m550_tanb12_mLSP.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_contour.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_bsgcontour.eps,width=2.509in}} \put(2.7,1.1){\rotatebox{90}{$m_{\sneu_{\tau}}$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \put(0.84,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-1}$}}} \put(1.52,2.0){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{-2}$}}} \end{picture} } \subfigure[Mass difference of the $\tilde{\mu}_L$ and $\tilde{\chi}_1^0$ for the $\tilde{\nu}_\mu$ LSP region of Fig.~\ref{fig:DeltaM_lamp221_m0m12}. We have $m_{\tilde{\mu}_L} > m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\mu}_L>\neutralino_1$) and $m_{\tilde{\mu}_L} < m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\mu}_L<\neutralino_1$). \label{fig:MOrder_Lamp221_m12m0}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp231_M0M12_A0m600_tanb10_mSmuL_m_Mneut1.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp231_M0M12_A0m600_tanb10_contour.eps,width=2.509in}} \put(2.7,0.8){\rotatebox{90}{$|m_{\tilde{\mu}_L} - m_{\tilde{\chi}_1^0}|$ [GeV]}} \put(1.43,1.2){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\tilde{\mu}}_L<\boldsymbol{\neutralino_1}$}}} \put(1.58,1.73){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\tilde{\mu}}_L>\boldsymbol{\neutralino_1}$}}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \end{picture} }\hfill \subfigure[ Mass difference of the $\tilde{\tau}_1$ and $\tilde{\chi}_1^0$ for the $\tilde{\nu}_\tau$ LSP region of Fig.~\ref{fig:DeltaM_lamp331_m0m12}. We have $m_{\tilde{\tau}_1} > m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\tau}_1>\neutralino_1$) and $m_{\tilde{\tau}_1} < m_{\tilde{\chi}_1^0}$ (denoted by $\tilde{\tau}_1<\neutralino_1$). 
\label{fig:MOrder_Lamp331_m12m0}]{ \begin{picture}(3,2.3) \put(-0.6,0){\epsfig{file=lamp331_M0M12_A0m550_tanb12_Mstau1_m_Mneut1.eps,width=3.7in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_contour.eps,width=2.509in}} \put(0.0,0.48){\epsfig{file=lamp331_M0M12_A0m550_tanb12_bsgcontour.eps,width=2.509in}} \put(1.68,1.4){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau}_1<\boldsymbol{\neutralino_1}$}}} \put(0.86,1.8){\makebox(0,0){\color[rgb]{1,1,1}{$\boldsymbol{\stau}_1>\boldsymbol{\neutralino_1}$}}} \put(2.7,0.8){\rotatebox{90}{$|m_{\tilde{\tau}_1} - m_{\tilde{\chi}_1^0}|$ [GeV]}} \put(1.3,0.2){\makebox(0,0){$\mhalf$ [GeV]}} \put(0.03,1.2){\rotatebox{90}{$\mzero$ [GeV] }} \end{picture} } \caption{ Sneutrino LSP parameter space in the $M_{1/2}$--$M_{0}$ plane. The left panel (right panel) shows the $\tilde{\nu}_\mu$ LSP ($\tilde{\nu}_\tau$ LSP) region obtained via $\lam'_{231}\lvert_{\rm GUT}=0.11$, $A_0=-600$ GeV, $\tanb=10$ and $\text{sgn}(\mu)=+1$ ($\lam'_{331}\lvert_{\rm GUT}=0.12$, $A_0=-550$ GeV, $\tanb=12$ and $\text{sgn}(\mu)=+1$). The plots show from top to bottom the mass difference between the NLSP and LSP, $\Delta M$, the mass of the sneutrino LSP, $m_{\sneu}$, and the mass difference between the $\tilde{\chi}_1^0$ and the $\tilde{\mu}_L$ (left panel) or between the $\tilde{\chi}_1^0$ and $\tilde{\tau}_1$ (right panel). The yellow (labelled with $``\, 0 \,"$), blue (labelled with $`` \pm 1 "$), green (labelled with $`` \pm 2 "$) and magenta (labelled with $`` \pm 3 "$) contours correspond to different SUSY contributions to the anomalous magnetic moment of the muon, $\delta a_\mu^{\rm SUSY}$, as described in Eq.~(\ref{mu_magnetic_moment}). The dashed black line in Fig.~\ref{fig:DeltaM_lamp331_m0m12} corresponds to BR($b \rightarrow s \gamma)=2.74 \times 10^{-4}$, Eq.~(\ref{bsg_2sigma}). 
\label{fig:M0_M12}} \end{figure*} We present in Fig.~\ref{fig:DeltaM_lamp221_m0m12} [Fig.~\ref{fig:DeltaM_lamp331_m0m12}] the $\tilde{\nu}_\mu$ LSP [$\tilde{\nu}_\tau$ LSP] region in the $M_{1/2}$--$M_{0}$ plane. We have chosen $\lam'_{231}\lvert_{\rm GUT}=0.11$ [$\lam'_{331}\lvert_{\rm GUT}=0.12$]. The figures show the mass difference in GeV between the NLSP and the LSP. The solid contour lines correspond again to the SUSY contributions to $a_{\mu}$ described in Eq.~(\ref{mu_magnetic_moment}), and the dashed black line in Fig.~\ref{fig:DeltaM_lamp331_m0m12} corresponds to BR($b \rightarrow s \gamma)=2.74 \times 10^{-4}$, Eq.~(\ref{bsg_2sigma}). \mymed The $\tilde{\nu}_\mu$ LSP lives in an extended region of $\text{B}_3$ mSUGRA parameter space. This stems from the fact that we were able to choose a central scan point, Point I of Eq.~(\ref{scanpoints}), where the mass difference between the $\tilde{\nu}_\mu$ LSP and the other LSP candidates, $\tilde{\tau}_1$ and $\tilde{\chi}_1^0$, is large, namely 56 GeV and 75 GeV, respectively. We find a $\tilde{\nu}_\mu$ LSP between $M_{1/2}=350$ GeV and $M_{1/2}=600$ GeV for $M_0=140$ GeV, which is consistent with $a_\mu^{\rm exp}$, Eq.~(\ref{amu_exp}), and BR($b \rightarrow s \gamma$), Eq.~(\ref{bsg_2sigma}), at $2 \sigma$. For $M_{1/2}=500$ GeV, we obtain a consistent $\tilde{\nu}_\mu$ LSP for $M_0<170$ GeV. \mymed Nearly the entire $\tilde{\nu}_\mu$ LSP region of Fig.~\ref{fig:DeltaM_lamp221_m0m12} is consistent with the observed value of $a_\mu$ at the $1\sigma$ (blue lines) and $2\sigma$ (green lines) level, {\it cf.} Eq.~(\ref{mu_magnetic_moment}). It is also consistent with BR($b \rightarrow s \gamma$) at $2\sigma$, Eq.~(\ref{bsg_2sigma}). \mymed We see in Fig.~\ref{fig:DeltaM_lamp221_m0m12} all three LSP candidates: the $\tilde{\nu}_\mu$, the $\tilde{\tau}_1$ and the $\tilde{\chi}_1^0$.
If we increase $M_0$, we re-obtain at $M_0\approx 150$ GeV the $\tilde{\chi}_1^0$ LSP instead of the $\tilde{\nu}_\mu$ or the $\tilde{\tau}_1$ LSP. This is easy to understand. $M_0$ increases the mass of all the sfermions, see Eq.~(\ref{sfermion_masses}), but leaves the mass of the (bino-like) $\tilde{\chi}_1^0$ unaffected, {\it cf.} Eq.~(\ref{neutralino_masses}). \mymed We get a $\tilde{\tau}_1$ LSP instead of a $\tilde{\nu}_\mu$ LSP for $M_{1/2} > 650$ GeV and $M_0<140$ GeV. Remember that the $\tilde{\tau}_1$ is mainly right-handed for non-vanishing $\lam'_{231}|_{\rm GUT}$ (not for large $\lam'_{331}|_{\rm GUT}$). According to Eq.~(\ref{stau_parameter}), the right-handed stau mass increases more slowly with $M_{1/2}$ than the left-handed $\tilde{\nu}_\mu$ mass, Eq.~(\ref{sfermion_masses}), because the right-handed sfermions couple only to the U(1) gaugino, whereas the left-handed sfermions couple also to the SU(2) gauginos. \mymed For $M_0$ between 140 GeV and 180 GeV, we obtain a $\tilde{\chi}_1^0$ LSP instead of a $\tilde{\nu}_\mu$ LSP if we increase $M_{1/2}$. In this region of parameter space, {\it i.e.} $M_0$ between 140 GeV and 180 GeV and $M_{1/2}<700$ GeV, we have a $\tilde{\chi}_1^0$ LSP for vanishing $\lam'_{231}|_{\rm GUT}$. With $\lam'_{231}\lvert_{\rm GUT}=0.11$, we necessarily recover the $\tilde{\chi}_1^0$ LSP for increasing $M_{1/2}$, because the (left-handed) $\tilde{\nu}_\mu$ couples more strongly via the gauge interactions than the (bino-like) $\tilde{\chi}_1^0$; see Eq.~(\ref{sfermion_masses}) and Eq.~(\ref{neutralino_masses}), respectively. \mymed The $M_{1/2}$--$M_{0}$ plane showing the $\tilde{\nu}_\tau$ LSP region, Fig.~\ref{fig:DeltaM_lamp331_m0m12}, looks similar to the $\tilde{\nu}_\mu$ LSP region, Fig.~\ref{fig:DeltaM_lamp221_m0m12}: We again get a $\tilde{\chi}_1^0$ LSP when we increase $M_0$, and a $\tilde{\tau}_1$ LSP for larger values of $M_{1/2}$.
Most of the $\tilde{\nu}_\tau$ LSP region is also consistent with the observed value of $a_\mu$ at the $1\sigma$ (blue line) or $2\sigma$ (green line) level, Eq.~(\ref{mu_magnetic_moment}). But we must have $M_{1/2} \gsim 290$ GeV [dashed black line in Fig.~\ref{fig:DeltaM_lamp331_m0m12}] to be consistent with BR($b \rightarrow s \gamma$) at $2 \sigma$, {\it cf.} Eq.~(\ref{bsg_2sigma}). The allowed $\tilde{\nu}_\tau$ LSP region in the $M_{1/2}$--$M_{0}$ plane is therefore ``smaller'' compared to the $\tilde{\nu}_\mu$ LSP region. It is worth mentioning that one can also obtain a $\tilde{\nu}_\tau$ LSP via $\lambda'_{331}|_{\rm GUT}$ consistent with $a_\mu^{\rm exp}$, Eq.~(\ref{amu_exp}), and BR($b \rightarrow s \gamma$), Eq.~(\ref{bsg_2sigma}), within $1 \sigma$; see an example in Ref.~\cite{Allanach:2006st}. However, the allowed $\tilde{\nu}_\tau$ LSP region in the $M_{1/2}$--$M_{0}$ [$A_0$--$\tan \beta$] plane is smaller in that case compared to Fig.~\ref{fig:DeltaM_lamp331_m0m12} [Fig.~\ref{fig:DeltaM_lamp331_a0tanb}]. \mymed As explained before, $\lam'_{331}|_{\rm GUT}$ also reduces the mass of the $\tilde{\tau}_1$, which is also a candidate for the LSP. We can see this in Fig.~\ref{fig:DeltaM_lamp331_m0m12} by noting that the mass difference between the $\tilde{\nu}_\tau$ LSP and the $\tilde{\tau}_1$ NLSP is rather small, {\it i.e.} $\Delta M \lsim 15$ GeV. A way to increase this mass difference is to decrease $\tan \beta$; see the discussion in Sect.~\ref{A0tanbplane}. \mymed Another difference between the $\tilde{\nu}_\tau$ LSP region, Fig.~\ref{fig:DeltaM_lamp331_m0m12}, and the $\tilde{\nu}_\mu$ LSP region, Fig.~\ref{fig:DeltaM_lamp221_m0m12}, is that the corresponding SUSY mass spectra for a $\tilde{\nu}_\mu$ LSP scenario are on average heavier than the SUSY mass spectra for a $\tilde{\nu}_\tau$ LSP scenario.
For example, $M_0=100$ GeV (200 GeV) and $M_{1/2}=500$ GeV (320 GeV) lead to squark masses of roughly 1000 GeV (700 GeV) in the $\tilde{\nu}_\mu$ LSP ($\tilde{\nu}_\tau$ LSP) parameter space. The reason is that we have chosen our scenarios consistent with the measured value of $a_\mu$; see the discussion after Eq.~(\ref{mu_magnetic_moment}). \mymed We again have in Fig.~\ref{fig:DeltaM_lamp221_m0m12} as well as in Fig.~\ref{fig:DeltaM_lamp331_m0m12} a ``triple point'', where the three LSP candidates are degenerate in mass. \mymed We give in Fig.~\ref{fig:SnuMass_Lamp221_m12m0} [Fig.~\ref{fig:SnuMass_Lamp331_m12m0}] the mass of the $\tilde{\nu}_\mu$ LSP [$\tilde{\nu}_\tau$ LSP] for the sneutrino LSP region of Fig.~\ref{fig:DeltaM_lamp221_m0m12} [Fig.~\ref{fig:DeltaM_lamp331_m0m12}]. The sneutrino LSP masses, which lead to SUSY scenarios in agreement with $a_\mu^{\rm exp}$ (and $b \rightarrow s \gamma$), range from 78 GeV (LEP bound, Sect.~\ref{LEP_constraints}) up to roughly 250 GeV. Relaxing this bound, we claim that $a_\mu^{\rm exp}$ puts an upper bound of roughly 300 GeV at the $2\sigma$ level on the mass of a sneutrino LSP within $\text{B}_3$ mSUGRA. Note that BR($b \rightarrow s \gamma$) increases if we increase $M_{1/2}$, whereas $\delta a_\mu^{\rm SUSY}$ decreases, {\it cf.} for example Fig.~4 and Fig.~5 in Ref.~\cite{Allanach:2006st}. The upper bound on the sneutrino LSP mass is thus due to $a_\mu^{\rm exp}$. \mymed We finally show in Fig.~\ref{fig:MOrder_Lamp221_m12m0} [Fig.~\ref{fig:MOrder_Lamp331_m12m0}] the mass difference in GeV between the $\tilde{\chi}_1^0$ and the $\tilde{\mu}_L$ [mainly left-handed $\tilde{\tau}_1$]. We again observe that the $\tilde{\chi}_1^0$ is heavier than the $\tilde{\mu}_L$ in most regions of the $\tilde{\nu}_\mu$ LSP parameter space. The cascade decay, Eq.~(\ref{slep_cascade}), is therefore not observable at the Tevatron. Further phenomenological consequences at hadron colliders will be discussed in Sect.~\ref{pheno}.
\subsection{Sneutrino LSPs with $\lam'_{ijk}|_{\rm GUT} \not = \lam'_{231}$ or $\lam'_{331}$} In the last three sections, we investigated in detail the $\tilde{\nu}_\mu$ LSP ($\tilde{\nu}_\tau$ LSP) parameter space with $\lam'_{231}\lvert_{\rm GUT}=0.11$ ($\lam'_{331}\lvert_{\rm GUT}=0.12$). We now briefly consider the other couplings of Table~\ref{RPV_couplings}. \mymed For $\lam'_{131}|_{\rm GUT}$, we obtain nearly the same parameter space as in Fig.~\ref{fig:DeltaM_lamp221_a0tanb} and Fig.~\ref{fig:DeltaM_lamp221_m0m12}, where $\lam'_{231}\lvert_{\rm GUT}=0.11$. We now have a $\tilde{\nu}_e$ LSP instead of a $\tilde{\nu}_\mu$ LSP. Also, the mass of the left-handed selectron, $\tilde{e}_L$ (for $\lam'_{131}\lvert_{\rm GUT}=0.11$), equals the mass of the $\tilde{\mu}_L$ (for $\lam'_{231}\lvert_{\rm GUT}=0.11$) and vice versa. But note that the $\tilde{\nu}_e$ LSP parameter space is much more restricted than the $\tilde{\nu}_\mu$ LSP parameter space due to the stronger bounds on $\lam'_{131}$, {\it cf.} Table~\ref{RPV_couplings}. Also, the LEP bound on $m_{\tilde{\nu}_e}$ is more model-dependent, see Table~\ref{bounds_LEP}. \mymed We also obtain a $\tilde{\nu}_\mu$ LSP scenario via $\lam'_{221}|_{\rm GUT}$ and $\lam'_{212}|_{\rm GUT}$. If we choose $\lam'_{221}\lvert_{\rm GUT}$ or $\lam'_{212}\lvert_{\rm GUT}= 0.097$, we find regions similar to those of Fig.~\ref{fig:DeltaM_lamp221_a0tanb} and Fig.~\ref{fig:DeltaM_lamp221_m0m12}, where the $\tilde{\nu}_\mu$ is the LSP. The effect of $\lam'_{221}|_{\rm GUT}$ and $\lam'_{212}|_{\rm GUT}$ on $m_{\tilde{\nu}_\mu}$ is stronger, because the running of both couplings involves no loops containing the large top Yukawa coupling. In contrast, the top Yukawa coupling weakens the running of $\lam'_{231}$ ($j$=3!) when we go from $M_{\rm GUT}$ to $M_Z$ \cite{Allanach:2003eb,Dreiner:2008rv}.
\mymed Analogously to Fig.~\ref{fig:DeltaM_lamp331_a0tanb} and Fig.~\ref{fig:DeltaM_lamp331_m0m12}, we find parameter regions where the $\tilde{\nu}_\tau$ is the LSP. We now have to choose $\lam'_{321}|_{\rm GUT}$ or $\lam'_{312}\lvert_{\rm GUT}=0.104$ instead of $\lam'_{331}\lvert_{\rm GUT}=0.12$. \mymed Note, however, that different couplings $\lam'_{ijk}$ lead to a different collider phenomenology, because the $L_i Q_j \bar D_k$ operator couples to different generations of lepton and quark superfields. We discuss this topic in the next section. \section{Hadron Collider Phenomenology} \label{pheno} We have shown in the last section that a sneutrino LSP exists in an extended region of $\text{B}_3$ mSUGRA parameter space. We now investigate the corresponding phenomenology at hadron colliders, especially at the LHC. The main phenomenological differences between a $\Psix$ mSUGRA scenario with a stable $\tilde{\chi}_1^0$ LSP and a $\text{B}_3$ mSUGRA scenario with an unstable sneutrino LSP are: \begin{itemize} \item The mass spectrum is changed. We now have a sneutrino LSP. Also, some of the sleptons might be lighter than the $\tilde{\chi}_1^0$, for example the $\tilde{\mu}_L$ in the presence of $\lam'_{231}|_{\rm GUT}$; see Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0}. Thus the decay chains and final state topologies are different. \item The LSP is no longer stable and decays directly to SM particles via the $\text{B}_3$ coupling. In the following analysis, with $\lam'_{231}|_{\rm GUT}\not=0$, we have two extra jets from each $\tilde{\nu}_{\mu}$ LSP decay. This also results in less missing transverse momentum, $\met$. \item We have shown that $\lam'_{ijk}|_{\rm GUT}=\mathcal{O}(10^{-1})$ is needed to obtain a $\tilde{\nu}_i$ LSP.
This large coupling can lead to direct and dominating $\text{B}_3$ decays of heavy sparticles; namely of left-handed charged sleptons of generation $i$, of left-handed squarks of generation $j$ and of right-handed down-type squarks of generation $k$. The SM decay products naturally have large momenta. \end{itemize} In the following, we investigate these aspects in detail. We perform a Monte Carlo simulation at the parton level using the {\tt HERWIG} event generator \cite{Herwig,4bodyHERWIG}. \subsection{Example Spectrum and Branching Ratios} \label{example_spectrum} To investigate the sneutrino LSP phenomenology at the LHC, we choose as an example a scenario with a $\tilde{\nu}_\mu$ LSP: \begin{eqnarray} && \lamp_{231}\lvert_{\text{\rm GUT}} = 0.11, \, M_0 = 100 \textnormal{\,GeV},\, M_{1/2}=450 \textnormal{\,GeV}, \nonumber \\ && A_0 = -600 \textnormal{\,GeV},\, \tan\beta=10, \, \textnormal{sgn}(\mu) = +1 \, . \label{example_point} \end{eqnarray} This benchmark point can be found in Fig.~\ref{fig:DeltaM_lamp221_m0m12} and is consistent with $a_{\mu}^{\rm exp}$, Eq.~(\ref{amu_exp}), and BR($b \rightarrow s \gamma$), Eq.~(\ref{bsg_2sigma}), at 1$\sigma$. See also Ref.~\cite{Allanach:2006st} for a benchmark scenario with a $\tilde{\nu}_\tau$ LSP. \mymed The resulting sparticle masses and branching ratios (BRs) are given in Table~\ref{BRs_point_lp231a}. The $\text{B}_3$ decays are shown in bold-face. Sparticle masses which are significantly affected by $\lam'_{231}|_{\rm GUT}$ are also bold-face. We calculate the decay rates by piping the output of {\tt SOFTSUSY} through {\tt ISAWIG1.200}. This is linked to {\tt ISAJET7.75} \cite{Paige:2003mg} in order to calculate the decay widths of the SUSY particles. This output is later fed into {\tt HERWIG} to simulate events at the LHC. \mymed \begin{table*}[ht!] 
\centering \begin{tabular}{cc} \begin{tabular}{|lc|ll|ll|} \hline & mass [GeV] &channel &BR &channel &BR \\ \hline $\ssnumu$ & {\bf 124} &$\bar b d$ & {\bf 100}$\%$ &&\\ \hline $\ssmu^-_L$& {\bf 147} &$ W^- \bar b d$ &{\bf 79.0}$\%$ &$\bar c d$&{\bf 21.0}$\%$\\ \hline $\neut_1$ & 184 &$\ssnumu^* \nu_\mu$ & $36.0 \%$ &$\ssnumu \bar \nu_\mu$ & $36.0 \%$ \\ & & $\ssmu^+_L \mu^-$ & $14.0 \%$ &$\ssmu^-_L \mu^+$ & $14.0 \%$ \\ \hline $\sstau_1^-$ & 188 &$\neut_1 \tau^-$ &$100 \%$&& \\ \hline $\sse^-_R$ ($\ssmu^-_R$)& 206 &$\neut_1 e^- (\mu^-)$ &$100\%$ &&\\ \hline $\ssnutau$ & 316 & $\neut_1\nutau$ & $67.3\%$ &$W^+\sstau_1^-$ & $32.7\%$ \\ \hline $\ssnue$ & 319 & $\neut_1 \nue$ & $100 \%$ &&\\ \hline $\sse^-_L$& 329 &$\neut_1 e^-$ &$100\%$ &&\\ \hline $\sstau_2^-$& 329 &$\neut_1 \tau^-$ & $65.1\%$ &$h^0\sstau_1^-$ & $18.2\%$ \\ & & $Z^0\sstau_1^-$ & $16.7\%$ & & \\ \hline $\neut_2$& 350 &$\ssnumu \bnumu$ & $23.7\%$ &$\ssnumu^* \numu$ & $23.7\%$ \\ & &$\ssmu_L^- \mu^+$ & $22.4\%$ &$\ssmu_L^+ \mu^-$ & $22.4\%$ \\ & &$\ssnutau \bnutau$ & $1.1\%$ &$\ssbnutau \nutau$ & $1.1\%$ \\\hline $\charge_1^-$& 350 &$\ssnumu^* \mu^-$ & $49.7\%$ &$\ssmu_L^- \bnumu$ & $42.6\%$ \\ & &$\ssnutau^* \tau^-$ & $2.3\%$ &$\ssnue^* e^-$ & $1.8\%$ \\ & & $\sstau_1^- \bnutau$ & $1.6\%$ & & \\ \hline $\neut_3$ & 691 &$\charge_1^- W^+$ & $29.7\%$ &$\charge_1^+ W^-$ & $29.7\%$ \\ & &$\neut_2 Z^0$ & $26.1\%$ &$\neut_1 Z^0$ & $8.3\%$ \\ & &$\neut_1 \higgs$ & $1.7\%$ &$\neut_2 \higgs$ & $1.7\%$ \\ \hline $\sstop_1$& 650 &$\charge^+_1 b$ & $42.1\%$ &$\neut_1 t$ & $33.5\%$ \\ & &$\neut_2 t$ & $13.8\%$ & $\mu^+ d$ & {\bf 10.6}$\%$ \\ \hline $\charge_2^-$& 702 &$\neut_2 W^-$ & $28.0\%$ &$\charge_1^- Z^0$ & $26.6\%$ \\ & &$\charge_1^- \higgs$ & $23.8\%$ &$\neut_1 W^-$ & $7.9\%$ \\ & &$\sstop_1^* b$ & $4.1\%$ &$\ssmu_L^- \bnumu$ & $2.5\%$ \\ & &$\sstau_2^- \bnutau$ & $2.0\%$ &$\sse_L^- \bnue$ & $1.7\%$ \\ & &$\ssbnutau \tau^-$ & $1.3\%$ & & \\ \hline $\neut_4$& 702 &$\charge_1^- W^+$ & $28.3\%$ &$\charge_1^+ 
W^-$ & $28.3\%$ \\ & &$\neut_2 \higgs$ & $22.3\%$ &$\neut_1 \higgs$ & $7.0\%$ \\ & &$\neut_2 Z^0$ & $2.0\%$ &$\neut_1 Z^0$ & $1.8\%$ \\ & &$\ssnumu \bnumu$ &$1.2\%$ &$\ssbnumu \numu$ &$1.2\%$ \\ \hline \end{tabular} & \hspace{1cm} \begin{tabular}{|lc|ll|ll|} \hline &mass [GeV] &channel &BR &channel &BR \\ \hline $\ssbottom_1$& {\bf 842} &$W^- \sstop_1$ & $35.8\%$ & $\charge^-_1 t$ & $31.3\%$ \\ & &$\neut_2 b$ & $18.8\%$ & $\bar \nu_\mu d$ & {\bf 12.4}$\%$\\ & &$\neut_1 b$ & $1.2\%$ & & \\ \hline $\tilde d_R$& {\bf 897} &$\nu_\mu b$ & {\bf 45.3}$\%$ & $\mu^- t$ & {\bf 42.1}$\%$ \\ & &$\neut_1 d$ & $12.6\%$ & & \\ \hline $\sstop_2$& {\bf 906} &$Z^0 \sstop_1$ & $28.2\%$ &$\charge^+_1 b$ & $23.7\%$ \\ & &$\higgs \sstop_1$ & $11.7\%$ & $\neut_2 t$ & $10.2\%$\\ & &$\mu^+ d$ & {\bf 9.0}$\%$ &$\neut_4 t$ & $7.5\%$ \\ & &$\charge^+_2 b$ & $5.4\%$ &$\neut_1 t$ & $2.6\%$ \\ & & $\neut_3 t$ & $1.7\%$ & & \\ \hline $\ssbottom_2$& 919 &$\neut_1 b$ & $41.3\%$ &$W^- \sstop_1$ & $25.3\%$ \\ & &$\charge^-_2 t$ & $14.4\%$ &$\neut_4 b$ & $5.3\%$ \\ & &$\neut_3 b$ & $5.0\%$ & $\bar \nu_\mu d$ & {\bf 3.4}$\%$\\ & & $\charge^-_1 t$ & $3.2\%$ & $\neut_2 b$ & $1.9\%$\\ \hline $\tilde s_R$& 928 &$\neut_1 s$ & $99.8\%$ & & \\ \hline $\tilde u_R$ ($\tilde c_R$)& 932 &$\neut_1 u (c)$ & $99.8\%$ & & \\ \hline $\tilde u_L$ ($\tilde c_L$)& 963 &$\charge^+_1 d (s)$ & $65.6\%$ & $\neut_2 u (c)$ & $32.6\%$ \\ & &$\neut_1 u (c)$ & $1.2\%$ & & \\ \hline $\tilde d_L$ ($\tilde s_L$)& 966 &$\charge^-_1 u (c)$ & $64.5\%$ & $\neut_2 d (s)$ & $32.5\%$ \\ & &$\neut_1 d (s) $ & $1.6\%$ & $\charge^-_2 u (c)$ & $1.0\%$ \\ \hline $\glu$ & 1046 &$\sstop_1 \bar{t}$ & $15.0\%$ &$\sstop^*_1 t$ & $15.0\%$ \\ & &$\ssbottom_1 \bar{b}$ & $9.2\%$ &$\ssbottom_1^* b$ & $9.2\%$ \\ & &$\tilde d_R \bar d$ & $5.2\%$ &$\tilde d_R^* d$ & $5.2\%$ \\ & &$\ssbottom_2 \bar{b}$ & $3.9\%$ &$\ssbottom_2^* b$ & $3.9\%$ \\ & &$\tilde s_R \bar s$ & $3.4\%$ &$\tilde s_R^* s$ & $3.4\%$ \\ & &$\tilde u_R \bar u$ ($\tilde c_R \bar c$) & 
$3.2\%$ &$\tilde u_R^* u$ ($\tilde c_R^* c$) & $3.2\%$ \\ & &$\tilde u_L \bar u$ ($\tilde c_L \bar c$)& $1.7\%$ &$\tilde u_L^* u$ ($\tilde c_L^* c$) & $1.7\%$ \\ & &$\tilde d_L \bar d$ ($\tilde s_L \bar s$)& $1.6\%$ &$\tilde d_L^* d$ ($\tilde s_L^* s$)& $1.6\%$ \\ \hline \end{tabular} \\ \end{tabular} \caption{\label{BRs_point_lp231a} Branching ratios (BRs) and sparticle masses for the example scenario defined in Eq.~(\ref{example_point}). BRs smaller than $1\%$ are neglected. $\text{B}_3$ decays are shown in bold-face. Masses which are reduced by more than 5 GeV (compared to the $\Psix$ spectrum) due to $\lam'_{231}|_{\rm GUT}=0.11$ are also shown in bold-face.} \end{table*} We find that the decay of the $\tilde{\nu}_\mu$ LSP with a mass of 124 GeV is completely dominated by the $\lam'_{231}$ coupling. Each LSP decay leads to a bottom and a down quark and no $\met$ \cite{footnote_4body,footnote_2body}. However, $\met$ can be obtained from cascade decays of heavy sparticles. In principle, reconstruction of the $\tilde{\nu}_\mu$ mass should be possible, although combinatorial backgrounds might complicate this task. \mymed The $\ssmu_L$ with a mass of 147 GeV is the NLSP. This is the case in most of the $\tilde{\nu}_\mu$ LSP parameter space, {\it cf.} Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0}. The $\ssmu_L$ decays mainly via the $L_2 Q_3 \bar D_1$ operator into SM fermions, in principle to $\bar t d$. If this decay mode is not kinematically allowed, like for the benchmark point under study, we obtain a dominant 3-body decay into $W^-\bar bd$ \cite{Dreiner:2008rv}. We thus have at least two jets, where one of the jets is a $b$-jet. As mentioned in Sect.~\ref{Tevatron_constraints}, another possible 3-body decay is $\tilde{\mu}^-_L \ra \mu^- \bar\nu_\mu \tilde{\nu}_\mu$ via a virtual neutralino. But this decay is suppressed by four orders of magnitude compared to the 3-body decay via a virtual top quark. 
The reasons are: small couplings (left-handed sleptons couple to a bino-like $\neut_1$), less phase space ($m_{\tilde{\mu}_L} - m_{\tilde{\nu}_\mu}=23$ GeV), destructive interference between diagrams with a virtual $\neut_1$ and $\neut_2$, and a colour-factor enhancement of 3 for the decay via the virtual top \cite{Dreiner:2008rv}. However, there is an additional 2-body decay mode, $\tilde{\mu}_L\ra\bar c d$, in Table~\ref{BRs_point_lp231a}. This decay proceeds via a non-vanishing $\lam'_{221}$ coupling, which is generated from $\lam'_{231}|_{\rm GUT}$ via RGE running \cite{Carlos:1996du,Allanach:2003eb,Dreiner:2008rv}. \mymed The electroweak gauginos decay dominantly via $\Psix$ conserving gauge interactions to 2-body final states. The lightest gaugino is the $\neut_1$, which is only the NNLSP within our benchmark scenario; $m_{\tilde{\chi}_1^0}=184$ GeV. It decays into either the LSP or the NLSP. These then undergo direct $\text{B}_3$ decays, as discussed before. So, the $\neut_1$ decays lead to dijet events with $\met$ or a muon. Due to the Majorana nature of the $\neut_1$, negatively and positively charged muons are possible. Cascade decays of pair-produced sparticles can therefore lead to like-sign muon events via $\neut_1$ decays; see Sect.~\ref{spart_pair_prod}. Note that $\tilde{\nu}_\mu$ LSP scenarios exist where the $\neut_1$ is also heavier than the $\tilde{\tau}_1$ or even the right-handed smuon, $\tilde{\mu}_R$, and selectron, $\tilde{e}_R$. These scenarios can lead to multi-lepton final states. We will not consider these scenarios here, because the relevant $\tilde{\mu}_R$ and $\tilde{e}_R$ decays into the $\tilde{\nu}_\mu$ LSP and the $\tilde{\mu}_L$ NLSP are not implemented in {\tt HERWIG}. \mymed The $\neut_2$ also has a significant BR to $\ssmu_L^\pm \mu^\mp$ and $\ssnumu \nu_\mu$.
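\mymed The like-sign rate from the Majorana $\neut_1$ can be illustrated with elementary charge combinatorics. A minimal sketch (independent of the {\tt HERWIG} simulation; it only uses the equality of the two charge-conjugate BRs in Table~\ref{BRs_point_lp231a}), restricted to events where both neutralinos decay via $\neut_1 \ra \tilde{\mu}_L^\pm \mu^\mp$:

```python
from itertools import product

# Each Majorana neutralino decay yields a mu+ or a mu- with equal probability,
# since BR(chi -> smuon^+ mu^-) = BR(chi -> smuon^- mu^+).
charges = (+1, -1)
pairs = list(product(charges, repeat=2))          # four equally likely combinations
like_sign = [p for p in pairs if p[0] == p[1]]    # (+,+) and (-,-)
frac_like_sign = len(like_sign) / len(pairs)
print(frac_like_sign)  # 0.5
```

Half of such dimuon events are therefore like-sign, a signature with very little SM background.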
Similarly, the lightest chargino, $\charge_1^-$, decays predominantly into either $\ssnumu^* \mu^-$ or $\ssmu^-_L\bar \nu_{\mu}$, leading to either a muon or missing energy in the final state. The $\neut_2$ and $\charge^-_1$ are wino-like in mSUGRA models. They thus decay predominantly to the left-handed $\ssmu_L$ and $\tilde{\nu}_\mu$. The decays of the heavier chargino, $\charge_2^-$, and neutralinos, $\neut_{3/4}$, are similar to those in $\Psix$ mSUGRA scenarios. \mymed The $\sstau_1$ in Table~\ref{BRs_point_lp231a} is the next-to-NNLSP (NNNLSP) with a mass of 188 GeV and is almost degenerate with the $\neut_1$. The $\sstau_1$ can in general be the NLSP, the NNLSP or the NNNLSP in $\text{B}_3$ mSUGRA scenarios with a sneutrino LSP. Here we have $\sstau_1^-$ $\ra$ $\neut_1$ $\tau^-$. \mymed The $\tilde{\mu}_R$, $\tilde{e}_R$, $\tilde{e}_L$, $\tilde{\nu}_e$, $\tilde{\nu}_\tau$ and $\sstau_2$ in Table~\ref{BRs_point_lp231a} decay into the $\neut_1$ or, in the case of the $\sstau_2$ and $\tilde{\nu}_\tau$, also into the $\sstau_1$, as in $\Psix$ mSUGRA scenarios. But as mentioned above, the $\sstau_1$, the $\tilde{\mu}_R$ and the $\tilde{e}_R$ can in general be lighter than the $\neut_1$ in $\tilde{\nu}_\mu$ LSP scenarios. These particles then decay preferentially into the $\tilde{\nu}_\mu$ LSP via a 3-body decay. \mymed The masses of the top squarks, $\tilde{t}_{1,2}$, and the bottom squarks, $\tilde{b}_{1,2}$, are slightly reduced due to the presence of $\lam'_{231}$ in the corresponding RGEs. The $\tilde{t}_{1}$ is the lightest squark with a mass of 650 GeV and has four 2-body decay modes with appreciable BRs. Three decays proceed via gauge interactions and one via $\lam'_{231}$. Since the electroweak gauge couplings and $\lam'_{231}$ have the same order of magnitude, we also expect $\Psix$ conserving and violating decays at similar rates. The situation for the $\tilde{t}_{2}$, $\tilde{b}_{1}$ and $\tilde{b}_{2}$ is similar to that of the $\tilde{t}_{1}$.
All of these particles couple via their left-handed component to the $L_2 Q_3 \bar D_1$ operator and can therefore decay into two SM particles. \mymed The masses of the left-handed and right-handed squarks of the 1st and 2nd generation are around 900 GeV. The right-handed down-squark ($m_{\tilde d_R} = 897$ GeV) is lighter than the right-handed strange-squark ($m_{\tilde s_R} = 928$ GeV). In contrast, both squarks are degenerate in mass in $\Psix$ mSUGRA. However, they are so heavy that no problems with flavour-changing neutral currents should occur. $\lam'_{231}|_{\rm GUT}$ couples only to the right-handed down squarks and not to the right-handed strange squarks. So, $m_{\tilde d_R}$ is reduced, while $m_{\tilde s_R}$ remains unchanged. For the same reason, there exist no $\text{B}_3$ decays of $\tilde s_R$ via $\lambda'_{231}$ at tree level. In contrast, $\tilde d_R$ has dominant direct $\text{B}_3$ decays to SM particles, which then have large momenta, see Sect.~\ref{spart_pair_prod}. \mymed The heaviest sparticle is the gluino, $\tilde{g}$, with a mass of 1046 GeV. It decays only via the strong interaction. The allowed decay modes and their relative BRs depend upon the sum of the final state masses. For example, $\tilde{g} \ra \tilde{t}_{1} t$ has the largest BR, since the $\tilde{t}_{1}$ is the lightest squark. \mymed We conclude that the heavy part of the mass spectrum looks very similar to $\Psix$ mSUGRA scenarios with a stable $\tilde{\chi}_1^0$ LSP. However, a non-vanishing $\lam'_{ijk}$ coupling, which has the same order of magnitude as the gauge couplings, allows for additional 2-body $\text{B}_3$ decays of some of the squarks. Which squarks are allowed to decay via $\lam'_{ijk}$ depends on the indices $j$, $k$. The masses and compositions of the electroweak gauginos are also very similar to $\Psix$ mSUGRA. However, the $\tilde{\chi}_1^0$ is no longer the LSP.
Depending on the specific $\tilde {\nu}_i$ LSP scenario, the $\tilde{\chi}_1^0$ can decay into charged sleptons and sneutrinos of different generations. Therefore, the main difference can be found in the light part of the mass spectrum where we have the $\tilde{\nu}_i$ LSP. The $\tilde{\nu}_i$ LSP decays preferentially into two jets via $\lam'_{ijk}$. \subsection{Sparticle Pair Production} \label{spart_pair_prod} We have investigated in the last section the mass spectrum and the BRs of SUSY particles for one representative $\text{B}_3$ mSUGRA scenario with a $\tilde{\nu}_\mu$ LSP, described by Eq.~(\ref{example_point}). We have pointed out the general differences compared to mSUGRA scenarios with a stable $\neut_1$ LSP. We now explore signatures at the LHC which arise from pair production of sparticles via the gauge interactions, {\it i.e.} mainly squark and gluino production via the strong interaction. For this purpose we use the {\tt HERWIG} event generator. We investigate single sparticle production in Sect.~\ref{single_spart_prod}. \mymed The masses of the strongly interacting sparticles are roughly 1 TeV. We therefore obtain from {\tt HERWIG} a total sparticle pair production (leading order) cross section at the LHC of \begin{equation} \sigma_{\text{total}} = 3.0 \, \text{pb} \, . \label{total_xsection} \end{equation} So, one can expect approximately $300\,000$ SUSY pair production events for an integrated luminosity of 100 $\text{fb}^{-1}$. The sparticle decays follow those in Table~\ref{BRs_point_lp231a}. The different decay chains lead to different final states. Moreover, the $p_T$ distributions of the final state particles and the $\met$ can be very distinctive compared to $\Psix$ mSUGRA with a stable $\neut_1$ LSP. 
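\mymed The quoted event yield is a simple unit conversion, $N = \sigma_{\text{total}} \times \mathcal{L}$; a one-line numerical check (cross section and luminosity as quoted above):

```python
# Expected event yield N = sigma * integrated luminosity.
PB_TO_FB = 1000.0  # 1 pb = 1000 fb

def expected_events(sigma_pb, lumi_fb_inv):
    """Number of events for a cross section in pb and a luminosity in fb^-1."""
    return sigma_pb * PB_TO_FB * lumi_fb_inv

n_susy = expected_events(3.0, 100.0)
print(n_susy)  # 300000.0
```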
\mymed \begin{figure} \includegraphics[scale=0.40, bb = 30 70 530 530, clip=true]{point_lp231a_missing_pT_from_neutrinos.ps} \put(-107.0,-13.0){$\met$ [GeV]} \caption{\label{missing_pT_from_neutrinos} $\met$ distribution due to neutrinos in the final state for the example scenario Eq.~(\ref{example_point}). The distribution is normalized to one. Note that events with no $\met$ in the final state are not shown.} \end{figure} We show in Fig.~\ref{missing_pT_from_neutrinos} the $\met$ distribution due to neutrinos in the final state. Note that here roughly $20\%$ of all SUSY events possess no $\met$, in contrast to $\Psix$ mSUGRA scenarios. For example, if the decay chains of the pair-produced sparticles into the $\tilde{\nu}_\mu$ LSP contain no neutrino, then there is no $\met$. The $\met$ distribution in Fig.~\ref{missing_pT_from_neutrinos} peaks at roughly 90 GeV. Thus, $\met$ might still be used to distinguish the SUSY signal from its SM background. Large amounts of $\met$, {\it i.e.} $\met$ of a few hundred GeV, can arise if a squark decays directly via $\lam'_{231}$ into a quark and a neutrino, for example $\tilde{d}_R\ra\nu_\mu b$, {\it cf.} Table~\ref{BRs_point_lp231a}. This decay also leads to a high-$p_T$ $b$-jet, {\it i.e.} $p_T$ of $\mathcal{O}(100\,\text{GeV})$. \mymed \begin{figure} \includegraphics[scale=0.40, bb = 30 70 530 530, clip=true]{point_lp231a_mu_pT_from_sdR_or_stop.ps} \put(-107.0,-13.0){$p_T$ [GeV]} \caption{\label{mu_pT_from_sdR_or_stop} $p_T$ distribution of the muon from the decays $\tilde{d}_R \rightarrow \mu t $ and $\tilde{t}_{1/2} \rightarrow \mu d$ ({\it cf}. Table \ref{BRs_point_lp231a}) at the LHC. The distribution is normalized to one.} \end{figure} \begin{figure} \includegraphics[scale=0.40, bb = 30 70 530 530, clip=true]{point_lp231a_t_pT_from_sdR.ps} \put(-107.0,-13.0){$p_T$ [GeV]} \caption{\label{top_pT_from_sdR} $p_T$ distribution of the top quark from the decay $\tilde{d}_R \rightarrow \mu t $ ({\it cf}. 
Table \ref{BRs_point_lp231a}) at the LHC. The distribution is normalized to one.} \end{figure} Instead of high-$p_T$ neutrinos, we can also have high-$p_T$ muons from the direct decays of $\tilde{d}_R$ and $\tilde{t}_{1/2}$ via $\lam'_{231}$, see Table~\ref{BRs_point_lp231a}. We show in Fig.~\ref{mu_pT_from_sdR_or_stop} the $p_T$ distribution of these muons. The distribution peaks at 340 GeV. The large momenta are a consequence of the large squark masses: nearly the entire mass of the squarks is converted into the momenta of two SM particles. These high-$p_T$ SM particles might also be used to reconstruct the squark mass. The muon $p_T$-distribution will peak at smaller values if the squarks are lighter than in our benchmark scenario. But at the same time, the production cross section, and thus the number of squarks and muons, will be larger than in Eq.~(\ref{total_xsection}). If the mass spectrum is heavier than at our example point, the cross section will be smaller. But the muon $p_T$-distribution will now peak at larger values. Thus stronger cuts on the muon $p_T$ can be applied. We conclude that the high-$p_T$ muons might be used on the one hand to distinguish the SUSY signal from the SM background and on the other hand to distinguish the $\text{B}_3$ mSUGRA model with a $\tilde{\nu}_\mu$ LSP from mSUGRA with a stable $\neut_1$ LSP. For our benchmark scenario, Eq.~(\ref{example_point}), we find that $11\%$ of all sparticle pair production events lead to at least one high-$p_T$ muon from a squark decay. A fraction of roughly $10\%$ is a general feature of our $\tilde{\nu}_\mu$ LSP scenarios. \mymed The neutrino or muon from the squark decay will be accompanied by a quark with roughly the opposite $p_T$. These quarks lead to high-$p_T$ jets, which might be $b$-jets depending on the flavour indices of $\lam'$. For our benchmark point, we obtain high-$p_T$ $b$-jets from the $\text{B}_3$ decay $\tilde{d}_R \ra \nu_\mu b$.
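\mymed The scale of these momenta follows from two-body kinematics: in the rest frame of a parent of mass $M$ decaying into daughters of masses $m_1$ and $m_2$, each daughter carries the momentum $p^* = \lambda^{1/2}(M^2,m_1^2,m_2^2)/(2M)$. A short numerical sketch (squark and neutralino masses from Table~\ref{BRs_point_lp231a}; $m_t \approx 173$ GeV assumed, muon and light-quark masses neglected):

```python
from math import sqrt

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z)."""
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def p_star(M, m1, m2):
    """Momentum (GeV) of each daughter in the rest frame of a parent of mass M (GeV)."""
    return sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2.0 * M)

p_mu_sdR = p_star(897.0, 0.0, 173.0)  # dR-squark -> mu t:    ~432 GeV
p_nu_sdR = p_star(897.0, 0.0, 0.0)    # dR-squark -> nu_mu b: ~449 GeV
p_mu_chi = p_star(184.0, 0.0, 147.0)  # chi_1^0 -> smuon mu:  ~33 GeV
```

These rest-frame values set the kinematic scale; the simulated lab-frame $p_T$ distributions peak somewhat lower, since only the transverse projection is histogrammed and the parent sparticles are not at rest.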
We can also get a top quark, $t$, from the decay $\tilde{d}_R \ra \mu^- t$. We show in Fig.~\ref{top_pT_from_sdR} the $p_T$-distribution of this top quark. The distribution peaks at 360 GeV. The top decay will also produce a $b$-jet and a $W$. The $W$ might produce additional jets or leptons with $\met$. These decay products will be boosted due to the large top momentum. Thus, isolated leptons most likely cannot be used to reconstruct the top quark. \mymed \begin{figure} \includegraphics[scale=0.40, bb = 30 70 530 530, clip=true]{point_lp231a_mu_pT_from_neut1.ps} \put(-107.0,-13.0){$p_T$ [GeV]} \caption{\label{mu_pT_from_chi} $p_T$ distribution of the muon from the decay $\tilde{\chi}_1^0 \rightarrow \tilde{\mu}_L \mu$ ({\it cf}. Table \ref{BRs_point_lp231a}) at the LHC. The distribution is normalized to one.} \end{figure} Finally, we want to mention an effect arising from the mass ordering in the light part of the spectrum. We have shown in Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0} that the $\tilde{\mu}_L$ is lighter than the $\neut_1$ in most regions of $\tilde{\nu}_\mu$ LSP parameter space, allowing for the decay $\neut_1 \ra \tilde{\mu}_L^\pm \mu^\mp$. Since many decay chains in Table~\ref{BRs_point_lp231a} involve the $\neut_1$, we expect more muons in the final state than in mSUGRA with a stable $\neut_1$ LSP \footnote{Note that the $\neut_2$ and $\tilde{\chi}_1^-$ also decay to a muon with a BR of roughly $50\%$; see Table~\ref{BRs_point_lp231a}.}. For example, all right-handed squarks which do not directly couple to the $L_2Q_3\bar D_1$ operator will predominantly decay into the $\neut_1$. Thus, a large fraction of the events from pair production of right-handed squarks, $\tilde{q}_R$, has the signature \begin{equation} \tilde{q}_R \tilde{q}_R \ra \mu^\pm \mu^\pm \, jjjjjj \,(W W) \,. 
\label{squark_pair_mu} \end{equation} We have six jets, $j$, where two jets arise from the $\tilde{q}_R$ decays and four jets from the decays of the two $\tilde{\mu}_L$. If the $\tilde{\mu}_L$ decay via the 3-body mode (see Table \ref{BRs_point_lp231a}), two jets will be $b$-jets and we will also have two $W$s in the final state. We also find two muons from the $\neut_1$ decays, where all charge combinations of the muons are possible due to the Majorana nature of the $\neut_1$. We therefore have a new source for like-sign dimuon events, which does not exist in $\Psix$ mSUGRA scenarios with a stable $\neut_1$ LSP. In principle, it should be possible to reconstruct the full event, Eq.~(\ref{squark_pair_mu}), although we have large combinatorial backgrounds due to the many jets in the final state. \mymed We show in Fig.~\ref{mu_pT_from_chi} the $p_T$-distribution of the muons arising from $\neut_1$ decay within our example scenario, Eq.~(\ref{example_point}). The distribution peaks at 20 GeV, and we therefore expect that most of the muons will pass standard experimental cuts. However, the position of the peak is restricted by the mass difference between the $\tilde{\mu}_L$ and the $\neut_1$. In our example the mass difference is 37 GeV. In general, we find in Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0} mass differences of up to 90 GeV. \mymed In a $\tilde{\nu}_i$ LSP scenario with $\lam'_{ijk}|_{\rm GUT}\not= \lam'_{231}|_{\rm GUT}$ we get the following differences. Now left-handed (right-handed down-type) squarks of generation $j$ ($k$) will couple to the $L_i Q_j \bar D_k$ operator. These squarks can now decay into a quark of generation $k$ ($j$) and into a lepton of generation $i$. In addition, the masses of these squarks will be reduced via the $\text{B}_3$ interaction. For $i=1$, we have to replace the muons in the discussion above by electrons. For $i=3$, we have taus instead of muons.
We will get taus with large momenta, {\it i.e.} $p_\tau=\mathcal{O}(100 \, \text{GeV})$, from the decays of the squarks via the $\text{B}_3$ interaction. These taus have a boost factor of $\gamma = \mathcal{O}(100)$ and thus appear long-lived, leading to detached vertices of $\mathcal{O}(1 \, \text{cm})$. We finally see in Figs.~\ref{fig:MOrder_Lamp331_m12m0}, \ref{fig:MOrder_Lamp331_a0tanb} that the $\tilde{\tau}_1$ is also lighter than the $\neut_1$ in large regions of the $\tilde{\nu}_\tau$ LSP parameter space. This might lead to like-sign tau events from two decay chains involving a $\neut_1$. \subsection{Single Sparticle Production} \label{single_spart_prod} Here we explore single sparticle production, which is not possible if $\Psix$ is conserved. We expect high rates due to the large $\lam'_{ijk}$ coupling in $\tilde{\nu}_i$ LSP scenarios. \mymed \begin{table} \begin{ruledtabular} \begin{tabular}{clcc} & process & cross section & \\ \hline & $PP \ra \tilde{\nu}_\mu + X$ & $2.2 \times 10^{6}$ fb & \\ & $PP \ra \neut_1 \nu_\mu + X$ & $ 4.2 \times 10^{1}$ fb & \\ & $PP \ra \neut_2 \nu_\mu + X$ & $ 6.2 \times 10^{0}$ fb & \\ & $PP \ra \tilde{\chi}^-_1 \mu^+ + X$ & $ 1.3 \times 10^{1}$ fb & \\ & $PP \ra \tilde{\mu}_L^- t + X$ & $1.3 \times 10^{4}$ fb & \end{tabular} \caption{\label{single_prod_xsect} Total hadronic cross sections for single sparticle production at the LHC within the $\tilde{\nu}_\mu$ LSP scenario, Eq.~(\ref{example_point}), with $\lambda'_{231}|_{\rm GUT}= 0.11$. The cross sections also include the charge-conjugated processes.} \end{ruledtabular} \end{table} We show in Table~\ref{single_prod_xsect} the hadronic cross sections for different single sparticle production processes. We again consider the example scenario, Eq.~(\ref{example_point}), with $\lambda'_{231}|_{\rm GUT}= 0.11$. The first four cross sections are calculated with {\tt HERWIG} and the last cross section is taken from Ref.~\cite{Bernhardt:2008mz}.
The first four processes involve a real or virtual $\tilde{\nu}_\mu$, which is the LSP. The corresponding processes with the $\tilde{\mu}_L$ are not possible, because one parton in the initial state would have to be a top quark. A single $\tilde{\mu}_L$ can therefore be produced only in association with a SM particle, for example with a top quark \cite{Bernhardt:2008mz,Slepitop2}; see also Table~\ref{single_prod_xsect}. \mymed We indeed observe in Table~\ref{single_prod_xsect} a large cross section for the resonant production of single $\tilde{\nu}_\mu$s due to the large $\lam'_{231}$ coupling, the high parton luminosity (due to small Bjorken $x$) and the large phase space. For 10 $\text{fb}^{-1}$ integrated luminosity we will produce more than two million $\tilde{\nu}_\mu$ LSPs. However, the $\tilde{\nu}_\mu$ can only decay into two jets, {\it cf.} Table~\ref{BRs_point_lp231a}, where one jet is a $b$-jet \cite{footnote_2body,footnote_4body}. This process thus suffers from a large QCD background, and it will be very hard to observe an excess over the SM background at the LHC \cite{Hewett:1998fu}. \mymed The process in Table~\ref{single_prod_xsect} with the second largest cross section is single $\tilde{\mu}_L$ production in association with a top quark. This process suffers in general from the large SM $t\bar t+ \text{jet}$ background \cite{Bernhardt:2008mz}. However, it might be possible to see an excess over the SM in small regions of $\tilde{\nu}_\mu$ LSP parameter space, where the $\neut_1$ is lighter than the $\tilde{\mu}_L$, {\it cf.} Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0}. In this case, the $\tilde{\mu}_L$ can decay to $\neut_1 \mu$, and we might employ the charge asymmetry of the muons to distinguish the signal from the background \cite{Bernhardt:2008mz}.
\mymed The production of a $\neut_{1}$ [$\neut_2$] in association with a neutrino, Table~\ref{single_prod_xsect}, can lead to a muon with jets and $\met$ in the final state, because $28\%$ [$44.8\%$] of the $\neut_1$s [$\neut_2$s] decay into a $\tilde{\mu}_L \mu$ pair. However, the respective production cross sections are rather small, namely 42 $\text{fb}$ [6.2 $\text{fb}$]. \mymed The production of charginos and muons, $\tilde{\chi}_1^- \mu^+$, seems more promising. Roughly $50\%$ of the produced $\tilde{\chi}_1^-$ will decay into $\tilde{\nu}_\mu^* \mu^-$, leading to a final state with a pair of muons and two jets, one of which is a $b$-jet. But again the cross section is small, 13 $\text{fb}$. \mymed In $\tilde{\nu}_i$ LSP scenarios, where $\lam'_{ijk}|_{\rm GUT}\not= \lam'_{231}|_{\rm GUT}$, the main difference arises if $j \not = 3$. In this case, resonant production of a single charged slepton, $\tilde{\ell}_{Li}$, Eq.~(\ref{res_slep}), is also possible via an up-type quark of generation $j$. Therefore, if the $\neut_1$ is lighter than the $\tilde{\ell}_{Li}$, we expect a high rate of leptons from the $\tilde{\ell}_{Li}$ decay to $\neut_1 \ell_i$. But this is only possible in small regions of $\tilde{\nu}_i$ LSP parameter space, see Figs.~\ref{fig:MOrder_Lamp221_a0tanb}, \ref{fig:MOrder_Lamp331_a0tanb}, \ref{fig:MOrder_Lamp221_m12m0} and \ref{fig:MOrder_Lamp331_m12m0}. A further bottleneck for the observation of these leptons is the small mass difference between the $\neut_1$ and the $\tilde{\ell}_{Li}$, leading to small lepton momenta. The mass difference will not exceed roughly 30 GeV. Large $\lam'_{ijk}$ couplings with $j\not = 3$ are also disfavoured by $D^0$--$\bar D^0$ mixing, {\it cf.} Sect.~\ref{indirect_bounds}. \mymed We conclude that pair production of SUSY particles and their subsequent decays leads to much more promising signatures than single sparticle production.
On the one hand, resonant single sneutrino production, which occurs at a high rate, leads mainly to jets in the final state and thus suffers from the large QCD background. On the other hand, processes with one or two leptons in the final state have small cross sections, {\it i.e.} $\lsim \mathcal{O}(10\,\text{fb})$. \section{Conclusion} \label{conclusion} In supersymmetric models it is essential to know the nature of the LSP, since it is involved in practically all collider signals. In the MSSM the LSP is necessarily the lightest neutralino. However, in B$_3$ mSUGRA models this is not the case: It had been shown previously that one can obtain a stau LSP and even a sneutrino LSP. In this paper we have analysed in detail which B$_3$ mSUGRA parameter region leads to a sneutrino LSP. In particular, we have found that a coupling $\lam'_{ijk}=\mathcal{O}(10^{-1})$ at the GUT scale will lead to a sneutrino LSP due to additional $\text{B}_3$ terms in the RGEs. We have shown that such a large coupling can still be consistent with experiment for a $\tilde\nu_{\mu,\tau}$ LSP. A $\tilde{\nu} _e$ LSP is disfavoured due to the strong bounds on the couplings $\lambda'_{1jk}$, see Table~\ref{RPV_couplings}. \mymed We have explored which conditions at the GUT scale lead to a sneutrino LSP. We have shown that a negative trilinear scalar coupling $A_0$ with a large magnitude enhances the negative $\text{B}_3$ contribution to the sneutrino mass. We have found large regions in the $\text{B}_3$ mSUGRA parameter space where the sneutrino is the LSP and which are consistent with the observed anomalous magnetic moment of the muon, $a_\mu^{\rm exp}$, as well as with BR($b\ra s\gamma$), see Figs.~\ref{fig:a0_tanb} and \ref{fig:M0_M12}. The allowed $\tilde{\nu}_\mu$ LSP parameter space is larger than the $\tilde{\nu}_\tau$ LSP parameter space. We have also shown that $a_\mu^{\rm exp}$ puts an upper bound of roughly 300 GeV on the sneutrino LSP mass.
\mymed We have next investigated the phenomenology of sneutrino LSP models at the LHC. We have considered one benchmark scenario with a $\tilde{\nu} _\mu$ LSP, which is obtained via $\lambda'_{231}|_{\rm GUT}=0.11$. Within this scenario, we have found that direct decays of light as well as heavy SUSY particles lead to an excess of muons in the final state, {\it cf.} Table~\ref{BRs_point_lp231a}. We have also found that signatures from pair production of SUSY particles are more promising than from single sparticle production, since the latter mainly involves hadronic final states. Promising pair production signatures are high-$p_T$ muons of a few hundred GeV, \textit{cf.} Fig.~\ref{mu_pT_from_sdR_or_stop}, high-$p_T$ jets, like-sign muon events and long-lived taus with a detached vertex of $\mathcal{O}(1\,\text{cm})$. \mymed These signatures should be investigated by the experimental groups in order to find supersymmetry as well as to distinguish $\text{B}_3$ mSUGRA with a sneutrino LSP from ``normal'' mSUGRA with a stable $\neut_1$. \begin{acknowledgments} We thank Benjamin Allanach for help with the as-yet unpublished B$_3$ version of {\tt SOFTSUSY}. We also thank Volker B\"uscher for helpful discussions. SG thanks the theory groups of Fermilab, Argonne National Laboratory and UC Santa Cruz for helpful discussions and warm hospitality. SG also thanks the `Deutsche Telekom Stiftung' and the `Bonn-Cologne Graduate School of Physics and Astronomy' for financial support. This work was partially supported by BMBF grant 05 HT6PDA, by the Helmholtz Allianz HA-101 `Physics at the Terascale' and by the SFB Transregio 33 `The Dark Universe'. \end{acknowledgments} \bibliographystyle{h-physrev}
\section{Introduction} \label{Sec:Introduction} The nature of matter has long been a source of philosophical debate: Around 400 BC, Democritus and Leucippus postulated an indivisible unit which they called the atom \citep{Coveney_highfield}. With no evidence for the existence of atoms and the apparent continuous nature of matter, the discrete paradigm largely fell from favor over the ensuing millennia. This view was so ingrained by the late 19$^{\rm th}$ century that, despite being central to Ludwig Boltzmann's development of statistical mechanics, discrete atoms were regarded as little more than a tool. As a result of this dichotomy, the fields of continuum mechanics and discrete particle dynamics evolved separately, well into the age of computers. Even today, the two descriptions are often studied independently by separate research communities. \begin{figure} \includegraphics[width=0.7\textwidth]{fig1.pdf} \caption{Schematic representation of a contact line for a sessile drop on a solid substrate. } \label{contact_angle_schematic} \end{figure} The {\it continuum hypothesis} is one of the primary cornerstones of predictive models for many diverse engineering applications. There are instances, however, when it becomes necessary to explore motion at the meso- and molecular-scales, for which this hypothesis requires refinement.
A prime example of this is the modelling of moving contact lines \citep{Snoeijer_Andreotti}, a feature of many flows in which an interface separating two phases intersects a solid boundary and moves, despite the classical requirement of imposing the no-slip condition.\cite{deGennes1985} Understanding the behaviour of the contact line is very important for many applications, including the recently emerging area of spontaneous dewetting of ultra-thin films.\cite{Mukherjee2015} To model such flows, one actually needs to combine three separate problems: $1)$ capture the behaviour of the system at the interfaces between the solid substrate and the liquid, $2)$ model the interaction of the solid substrate and the air, as well as $3)$ reproduce the dynamic interface between the liquid surface and the air (see Fig. \ref{contact_angle_schematic}). The meeting point of all three problems is at the so-called {\it contact line}. Continuum approaches at the contact line are unable to elucidate the underlying microscopic mechanisms, which involve the diffuse and complex nature of the contact line. In contrast to molecular-level models \citep{Razavi2014}, in a continuum-based model the behaviour of the contact line is an assumption and not a result of the numerical simulation.\cite{Karapetas_et_al,Beacham2009,Karapetsas2011b} As a result, the dynamics of the contact line should be addressed at the molecular level. Molecular dynamics (MD) has demonstrated that although a microscopic resolution at the contact line is indispensable for understanding wetting-type phenomena, it is often also important to take into account areas of the liquid far from the interfaces or the contact line in order to provide a complete description of the system in tandem with the contact line.
For example, in the presence of additional complexities associated, for instance, with surfactant-laden systems \cite{Theodorakis2014,Venzmer2011,Nikolov2011}, the bulk of the droplet acts as a source of surfactants for the interfaces, which subsequently supplies the contact line with surfactants.\cite{Theodorakis2015,Theodorakis2015b} Although a molecular-level description of the surfactants in the bulk of the droplet may be unnecessary, the dynamics of the system is determined by the interplay between the surfactant in the bulk and the interfaces/contact line.\cite{Theodorakis2015,Theodorakis2015b} In this case, the dynamic behaviour of the contact line often requires a microscopic resolution, whereas the bulk can be described by continuum-scale models. It is also possible to use simple molecular models to predict the contact angle. For example, in the case of a liquid bridge sandwiched between parallel plates, which will be discussed later, one can demonstrate good agreement in terms of the meniscus shape and the static angles, \cite{Thompson_et_al89,Thompson_et_al93} and contact line dynamics can be explored \citep{Smith2016}. In this Feature Article, we give our perspective on the contact line problem, in which we classify and consider the various approaches for linking molecular- and continuum-level models. We start with direct coupling of an MD and a Computational Fluid Dynamics (CFD) solver to model near-wall dynamics, before moving on to parameterization of quantities such as pressure, surface tension, static and dynamic contact angles. Direct coupling allows the molecular and continuum regions to evolve together as parts of one simulation, while parameterizations allow molecular details to be abstracted and included in a CFD solver which will model the whole space.
We present the role of near-wall interactions, and discuss the dynamic and static angle behaviour in the context of two flow configurations: $1)$ a liquid bridge as mentioned above, undergoing shear flow; and $2)$ a droplet spreading in the presence and absence of surfactants. The results from a sheared liquid bridge, a commonly used approach to study the contact line in the MD literature, are discussed critically, motivating the final simulation of a full MD droplet. The work is organized as follows: In the Methodology Section~\ref{Sec:Methodology}, we give the outline of our continuum and coarse-grained (CG) MD models. In the Results and Discussion Section~\ref{sec:results}, we discuss results for the contact line dynamics; in particular, the near-wall interaction, surface tension, static and dynamic contact angles, electrowetting, droplet modelling, and the link between molecular models and continuum-based approaches. We conclude our article by highlighting limitations of our methods, the importance of accessing larger systems with microscopic resolution, and future directions. \section{Methodology} \label{Sec:Methodology} In this section, we outline the continuum and molecular modelling methodologies used in this work, and highlight the links between them. \subsection{Continuum dynamics} Continuum-scale theories rely on conservation principles to provide tools for predicting the spatio-temporal evolution of `fields', such as the fluid velocity, pressure, and, for non-isothermal flows, temperature. An equation for the velocity ${\mathbf{u}}$ is provided by the equation of force-momentum balance as described in, say, Batchelor\cite{batchelor67a}. Here, the rate of change of momentum is determined by momentum advection, and the balance of forces over an arbitrary volume, $V$, enclosed by a surface with area $S$: \begin{align} \!\! \frac{d }{d t} \int_V \rho \mathbf{u} \, dV \!
= - \oint_S \left[ \rho \mathbf{u}\otimes\mathbf{u} - \boldsymbol{\Pi} \right] \cdot d\textbf{S} + \textbf{F}_{\textnormal{body}}, \label{BofMEqn2} \end{align}where $\rho$ is the density of the fluid, $\mathbf{u}$ is the fluid velocity, $\boldsymbol{\Pi}$ the stress tensor, $\textbf{F}_{\textnormal{body}}$ denotes external body forces acting on the fluid, and $t$ represents time. By assuming that the continuum hypothesis is valid, we take the zero volume limit in the momentum balance, \eq{BofMEqn2}, to arrive at the celebrated Navier--Stokes equations: \begin{align} \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \left( \mathbf{u} \cdot \mathbf{\nabla} \right) {\mathbf u}\right) = - \mathbf{\nabla} P -\rho {\mathbf{g}}+ \mu\nabla^2 \mathbf{u}. \label{ANDNSEqn} \end{align}Here, we have assumed that $\boldsymbol{\Pi}=-P{\mathbf{I}} + \mu\left(\nabla {\mathbf{u}}+\nabla{\mathbf{u}}^{T}\right)$ in which $\mu$ is the fluid (constant) viscosity, $P$ is the fluid pressure and $\mathbf{I}$ is the identity tensor. \citep{Gad-El-Hak_06} We have also set ${\mathbf{F}}_{\textnormal{body}}=-\rho {\mathbf{g}}$ reflecting the presence of gravitational forces in many applications, though these will be neglected in the systems which are considered in the present work. In addition to the momentum balance, an equation of mass conservation must also be solved, given by \begin{align} \boldsymbol{\nabla}\cdot \mathbf{u} = 0, \label{incompressible_cont} \end{align} which is consistent with the assumption of an incompressible fluid. Computational Fluid Dynamics aims to solve \eq{ANDNSEqn} and \eq{incompressible_cont} using numerical approximations for the derivatives in $\mathbf{u}$ and $P$. Numerical solutions of these equations are obtained starting from appropriate initial conditions, which reflect the particular physical situation under consideration.
These solutions are subject to boundary conditions on $\mathbf{u}$, which correspond to the so-called `no-slip' and `no-penetration' conditions on the tangential and wall-normal components of the velocity, respectively. The no-penetration condition is appropriate for situations in which the solid substrate underlying the fluid is impermeable. The no-slip condition, in turn, is commonly deployed in, for instance, single-phase, Newtonian flows in pipes and channels. This condition, however, must be modified when modelling systems involving, for instance, the spreading of a droplet on a solid substrate; otherwise, the no-slip condition leads naturally to a stress singularity at the moving contact line \citep{Bonn2009}. Instead, a Navier-slip model can be used \citep{Navier,Bonn2009}, given by the following expression \begin{align} u = \ell_{sl} \frac{\partial u}{\partial z}, \label{Navier_slip} \end{align} where $\ell_{sl}$ is the slip length, $u$ denotes the streamwise component of $\mathbf{u}$, and $z$ the wall-normal coordinate; setting $\ell_{sl}=0$ recovers the no-slip condition. The lubrication approximation is often used to model the spreading of slender droplets via solution of a reduced form of Eqs. (\ref{ANDNSEqn}) and (\ref{incompressible_cont}) \citep{Karapetas_et_al}: \begin{align} \frac{\partial u}{\partial x} + \frac{\partial w}{\partial z} = 0 ; \;\;\; \frac{\partial P}{\partial x} = \mu\frac{\partial^2 u}{\partial z^2} ; \;\;\; \frac{\partial P}{\partial z} = 0, \label{thin_film} \end{align} where $x$ represents the streamwise direction, and $w$ denotes the velocity component in the wall-normal direction; here, gravitational forces have been neglected.
Solutions of these reduced equations are subject to the kinematic, normal, and tangential stress conditions at the interface, $z=h$, respectively given by \begin{align} \frac{\partial h}{\partial t} + u \frac{\partial h}{\partial x} = w ; \;\;\; P = P_0- \gamma \frac{\partial^2 h}{\partial x^2} ; \;\;\; \mu\frac{\partial u}{\partial z}=\frac{\partial \gamma}{\partial x}, & &\text{ at } z = h, \end{align} and the Navier-slip condition, \eq{Navier_slip}, for the wall boundary at $z=0$. In the normal-stress condition, $P_0$ denotes the ambient pressure (which can be set to zero without loss of generality), and the second term on the right-hand side represents capillary effects: surface tension, $\gamma$, multiplying the interfacial curvature (in its simplified form in the lubrication approximation). In the tangential-stress condition, we have included on the right-hand side the possibility of surface tension gradients, which arise, for instance, in the presence of surfactant where $\gamma$ varies with surfactant concentration. The thin film equation is introduced here as a simple model for spreading droplets; however, in practice a more comprehensive fluid solver would be required for the general coupling presented in this work. A closure is required for the contact line speed $u_{cl}$, the simplest and most common form of which is represented by Tanner's law \citep{Tanners}: \begin{align} u_{cl} = A_{cl} \left( \langle \theta \rangle - \theta_e \right)^n, \label{Tanners} \end{align} where $\langle \theta \rangle$ is the mean local angle, $\theta_e$ is the equilibrium contact angle, and $A_{cl}$ and $n$ are constants. It is worthwhile noting that the continuum-scale modelling is well established, with review articles \cite{Blake_2006,Bonn2009,Craster2009,Sui_2014} covering many of the aspects briefly discussed above.
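As an illustration, the Tanner-type closure of Eq.~(\ref{Tanners}) can be sketched as a short function; the default values of $A_{cl}$ and $n$ below are placeholders for the fitting constants, not values taken from the text:

```python
def contact_line_speed(theta_mean, theta_e, A_cl=1.0, n=3):
    """Tanner-type closure: u_cl = A_cl * (<theta> - theta_e)**n.

    Angles in radians. A_cl and n are fitting constants; the
    defaults here are illustrative placeholders only.
    """
    return A_cl * (theta_mean - theta_e) ** n
```

With an odd integer exponent, the sign of the angle mismatch sets the direction of motion: the contact line advances when $\langle \theta \rangle > \theta_e$ and recedes otherwise.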
We note here that Tanner's law involves the apparent contact angle, which may be different from the microscopic contact angle, defined at the solid surface.\cite{Bonn2009} The latter requires averaging over microscopic data\cite{Thompson_Robbins} or the application of well-defined theoretical models that take into account the diffuse nature of the interface.\cite{Qian2003} In what follows, we describe methodologies used to provide estimates for $\langle\theta\rangle$ from averaged values of the contact angle obtained via molecular dynamics simulations. \subsection{Molecular Dynamics} Molecular modelling is a direct solution of Newton's Laws for individual molecules and shows excellent agreement with experiments, reproducing the underlying liquid structure (matched to x-ray scattering \citep{Rapaport}); equilibrium thermodynamic properties (phase diagrams, triple point \citep{sadus_13, sadus_16}); flow dynamics and diffusion \citep{Alder_Wainwright70}; \textit{a priori} prediction of dynamic coefficients such as viscosity, surface tension \citep{Smith_et_al2016}, heat flux \citep{Todd_Evans_95} and slip-lengths \citep{Thompson1997, Hansen_et_al2011}; as well as fluid dynamics for canonical flows like Couette \citep{Smith_et_al}, Poiseuille \citep{Travis_Gubbins_00}, non-linear flow cases like the Rayleigh--B\'{e}nard instability \citep{Rapaport_88}, complex chemical superspreading \citep{Theodorakis2015b}, shock-wave dynamics \citep{Hoover_1979, Root_et_al} and even turbulence \citep{Smith_2015}. Naturally, high-resolution all-atom simulation is therefore an ideal tool to study the dynamics of the contact line. Yet, phenomena in this region of a fluid can involve physics at larger length- and time-scales than is practicable to model atomistically. \cite{Halverson2009} In addition, all-atom force-fields require a parametrization, which, in turn, involves targeting the reproducibility of certain properties of a system (e.g.
thermophysical ones)\cite{Vega2007} or, in the case of complex molecules (e.g. proteins), their native structures\cite{Poma2017}. To overcome these constraints, coarse-grained (CG) force-fields have been developed that provide adequate resolution for the phenomena involved at the contact line, while at the same time allowing access to large time and length scales.\cite{Marrink2007,Sergi2012} A popular class of models, based on the Statistical Associating Fluid Theory (SAFT) initiated in the late 1980s \cite{Chapman1989}, with progress at the turn of the century reviewed in Ref.~\citen{Muller2001}, has been refined to a level where it provides a realistic methodology sophisticated enough to capture the essential thermophysical behaviours observed. The SAFT-$\gamma$ approach we use here is based on a molecular equation of state \cite{Lafitte2013,Avendano2011,Avendano2013,Muller2014,Papaioannou2014}, which offers an accurate fit for the force-field parameters that can be optimized to reproduce the macroscopically-observed thermophysical properties.\cite{Herdes2015,Lobanova_PhD,Lobanova2015,Muller2014,Avendano2011,Avendano2013} Hence, SAFT-$\gamma$ ensures that the potential parameters are consistent with experimental thermodynamic data (first and second derivatives of the free energy) while retaining much of the simplicity of Lennard--Jones (LJ) systems, as interactions between groups of atoms are modelled with the Mie potential, a generalized LJ potential that offers larger flexibility in the fitting of the equation of state. As a result, problem complexity does not obscure the relevant physical mechanisms, which are a central motivation for molecular simulation.
SAFT-$\gamma$ has provided the potential parameters for different components of the CG system including those for water.\cite{Lobanova2015,Theodorakis2015,Theodorakis2015b} In the case of the SAFT-$\gamma$ water model we use here, one spherical bead in the simulations represents two water molecules, allowing for the simulation of even larger systems. The model has been tested to reproduce thermodynamic data and other properties, such as the surface tension of water.\cite{Lobanova_PhD,Lobanova2015} Based on this CG model, the range of problems that can be addressed also includes systems involving water molecules and surfactants.\cite{Theodorakis2015,Theodorakis2015b,Lobanova_PhD,Lobanova2015} In the molecular model, the forces exerted on groups of atoms represented by effective beads are $\boldsymbol{f}_{\rm ij} = -\boldsymbol{\nabla} \phi_{\rm ij}$, where $\phi_{\rm ij}$ is the Mie potential between beads of type $\rm i$ and $\rm j$ being a distance $r_{\rm ij}$ apart: \begin{align} \phi_{\rm ij} = C \epsilon_{\rm ij} \left[ \left( \frac{\sigma_{\rm ij}}{r_{\rm ij}} \right)^{\lambda_{\rm ij}^r} - \left( \frac{\sigma_{\rm ij}}{r_{\rm ij}} \right)^{\lambda_{\rm ij}^a} \right], \label{Mie_potential} \end{align} and $C$ is given by, \begin{align} C = \left( \frac{\lambda_{\rm ij}^r}{\lambda_{\rm ij}^r-\lambda_{\rm ij}^a}\right) \left( \frac{\lambda_{\rm ij}^r}{\lambda_{\rm ij}^a}\right)^{\left( \frac{\lambda_{\rm ij}^a}{\lambda_{\rm ij}^r-\lambda_{\rm ij}^a}\right)}. \end{align} The parameters $\lambda_{\rm ij}^a$ and $\lambda_{\rm ij}^r$ are defined for different pairs of interacting beads $\rm i$ and $\rm j$. The exponent $\lambda_{\rm ij}^a$ has a direct physical meaning, expressing the dispersion interaction between different beads, whereas $\lambda_{\rm ij}^r$ acts as a fitting parameter and affects the core interactions between beads (the effective bead size).
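As a check on the prefactor $C$, the Mie potential of \eq{Mie_potential} can be sketched as a short function. For $\lambda^r_{\rm ij}=12$ and $\lambda^a_{\rm ij}=6$, $C=4$ and the standard LJ form is recovered, with the well depth $-\epsilon_{\rm ij}$ attained at $r_{\rm ij}=2^{1/6}\sigma_{\rm ij}$:

```python
def mie_potential(r, eps, sigma, lam_r=12.0, lam_a=6.0):
    """Mie pair potential: phi(r) = C*eps*[(sigma/r)**lam_r - (sigma/r)**lam_a],
    with the prefactor C chosen so that the minimum of phi equals -eps."""
    C = (lam_r / (lam_r - lam_a)) * (lam_r / lam_a) ** (lam_a / (lam_r - lam_a))
    return C * eps * ((sigma / r) ** lam_r - (sigma / r) ** lam_a)
```

The steeper repulsive exponents of Table \ref{SAFT_table} (e.g. $\lambda^r = 26$ for the M bead) simply make the core harder while keeping the well depth equal to $\epsilon_{\rm ij}$.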
In the CG model, the unit of length is $\sigma$ and that of energy $\epsilon$, which correspond to $\sigma = 0.436$ nm and $\epsilon/k_B = 492$ K, respectively, where $k_B$ is the Boltzmann constant. The cross interaction parameters are predicted by the following combination rules \cite{Lafitte_et_al} \begin{align} \sigma_{\rm ij} = 0.5\left[\sigma_{\rm ii} + \sigma_{\rm jj}\right], \;\;\;\;\;\; \epsilon_{\rm ij} = \frac{\sqrt{\sigma_{\rm ii}^3 \sigma_{\rm jj}^3}}{\sigma_{\rm ij}^3} \sqrt{\epsilon_{\rm ii}\epsilon_{\rm jj}} \end{align} with \begin{align} \lambda_{\rm ij}^k - 3 = \sqrt{(\lambda_{\rm ii}^k - 3)(\lambda_{\rm jj}^k - 3)}, \;\;\; k = a,r. \end{align} Unless stated otherwise, the model assumes a universal cutoff, which is $r_c=4.5834\sigma$. The values of the potential parameters for the different groups of atoms represented by the effective beads Ar, W, M, D, EO, and CM are given in Table \ref{SAFT_table}; $\lambda_{\rm ij}^a = 6$ for all effective beads. Here, one Ar atom is represented by a single bead. Each of the following groups is likewise represented by a single bead: W for two water molecules [$\chem{H_2O}$], M and D for $\left[\chem{(CH_3)_3-Si-O_{\frac{1}{2}}}\right]$ and $\left[\chem{O_{\frac{1}{2}}-(CH_3)_2-Si-O_{\frac{1}{2}}}\right]$, respectively, EO for $\left[\chem{-CH_2-O-CH_2-}\right]$ as well as CM for $\left[\chem{-CH_2-CH_2-CH_2-}\right]$. \begin{center} \captionof{table}{Potential parameter values for different effective beads representing different groups of atoms as discussed in the text.
In parentheses, we provide the units for each parameter.} \begin{tabular}{|c|c|c|c|c|}\hline Mol & Mass (m) & $\sigma_{\rm ii} (\sigma)$ & $\epsilon_{\rm ii} (\epsilon)$ & $\lambda_{\rm ii}^r$\\\hline Ar & 0.90645 & 0.782 & 0.2429 & 12.0 \\\hline W & 0.8179 & 0.8584 & 0.8129 & 8.0 \\\hline M & 1.8588 & 1.2398 & 0.8998 & 26.0 \\\hline D & 1.6833 & 1.6833 & 0.5081 & 13.9 \\\hline EO & 1.0000 & 0.9307 & 0.8067 & 19.0 \\\hline CM & 0.9552 & 1.0000 & 0.7000 & 15.0 \\\hline \end{tabular} \label{SAFT_table} \end{center} If one is interested in accounting for the presence of surfactant molecules, then with the EO, D, and CM beads one can build CG models for both so-called ``superspreading'' and non-superspreading surfactants.\cite{Theodorakis2015} In the case of surfactants, chains are built by binding effective beads with a harmonic potential \cite{Lobanova2015,Lobanova_PhD}: \begin{equation} \label{harmonicbond} \phi_B(r_{\rm ij}) = 0.5 k (r_{\rm ij}-\sigma_{\rm ij})^2, \end{equation} where the values of $\sigma_{\rm ij}$ are given in Table \ref{SAFT_table}, and $k=295.33 \epsilon/\sigma^2$. Additionally, any three consecutive beads in a surfactant molecule of type EO interact via a harmonic angle potential \begin{equation} \label{harmonicangle} \phi_{\theta}(\theta_{\rm ijk}) = 0.5 k_{\theta} (\theta_{\rm ijk}-\theta_0)^2, \end{equation} where $\theta_{\rm ijk}$ is the angle defined by three consecutive beads along the surfactant chain, $k_{\theta}=4.32 \epsilon/{\rm rad}^2$ is a constant, and $\theta_0=2.75$ rad is the equilibrium angle of the harmonic potential. In an MD simulation, one typically integrates Newton's equations of motion with periodic boundaries. To model walls, the molecules are tethered to their equilibrium lattice sites by harmonic interactions governed by relations such as that expressed by Eq. (\ref{harmonicbond}).
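The intramolecular terms of Eqs.~(\ref{harmonicbond}) and (\ref{harmonicangle}) are simple to evaluate; a minimal sketch in reduced units, using the constants quoted above:

```python
def harmonic_bond(r_ij, sigma_ij, k=295.33):
    """Bond potential of Eq. (harmonicbond): 0.5*k*(r - sigma_ij)**2,
    with k in units of epsilon/sigma^2 as quoted in the text."""
    return 0.5 * k * (r_ij - sigma_ij) ** 2


def harmonic_angle(theta_ijk, theta0=2.75, k_theta=4.32):
    """Angle potential of Eq. (harmonicangle) for three consecutive
    EO beads; angles in radians, k_theta in epsilon/rad^2."""
    return 0.5 * k_theta * (theta_ijk - theta0) ** 2
```

Both potentials vanish at their equilibrium values ($r_{\rm ij}=\sigma_{\rm ij}$ and $\theta_{\rm ijk}=\theta_0$) and grow quadratically with displacement, as expected for harmonic terms.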
In this case, a Nos\'{e}--Hoover thermostat is applied to the tethered molecules only, and the equations of motion for the wall atoms are given by, \begin{subequations} \begin{eqnarray} \boldsymbol{v}_{\rm i} &=& \frac{{\boldsymbol{p}}_{\rm i}}{m_{\rm i}} + U_\text{w} \textbf{n}_x, \\ \dot{{\boldsymbol{p}}}_{\rm i} &=& \boldsymbol{F}_{\rm i} + \boldsymbol{F}_{\rm i_{\textnormal{teth}}} - \xi {\boldsymbol{p}}_{\rm i}, \\ \dot{\xi} &=& \frac{1}{Q_{\rm \xi}} \left[ \displaystyle\sum_{n=1}^{N} \frac{{\boldsymbol{p}}_{\rm n} \cdot {\boldsymbol{p}}_{\rm n}}{m_{\rm n}} -3NT_{\rm 0} \right], \\ \boldsymbol{F}_{\rm i_{\textnormal{teth}}} &=& -\boldsymbol{r}_{\rm i_0} \left( 4 k_{4} r_{\rm i_0}^{\rm 2}+6 k_{6} r_{\rm i_0}^{4} \right). \label{NH_verify} \end{eqnarray} \end{subequations} Use of a wall-only thermostat allows a temperature profile to develop in the domain, prevents the thermostat from impacting the dynamics of the fluid \citep{Smith2016}, and represents more closely experimental setups. Here $\boldsymbol{p}_{\rm i} \define m_{\rm i}\left(\boldsymbol{v}_{\rm i} - U_{\rm w} \textbf{n}_x\right)$ is the peculiar momentum, defined as the particle mass $m_{\rm i}$ times the difference between the particle velocity $\boldsymbol{v}_{\rm i}$ and the average streaming velocity, which is the wall velocity $U_{\rm w}$ in the $x$ direction denoted by the vector $\textbf{n}_x$. The molecules are tethered to the equilibrium location $\boldsymbol{r}_0$ and experience a restoring force $\boldsymbol{F}_{\rm i_{\textnormal{teth}}}$ proportional to the displacement from this site, $\boldsymbol{r}_{\rm i0} \define \boldsymbol{r}_{\rm i} - \boldsymbol{r}_{\rm 0}$, with spring coefficients $k_4=5\times10^3$ and $k_6=5\times10^6$ from \citet{Petravic_Harrowell}. Both the molecule and its tethering site slide with speed $U_{\rm w}$. The arbitrary wall thermostatting coefficient $Q_{\rm \xi}$ is chosen to be equal to $0.1 N_{_{\rm thermo}} \Delta t$, so that the heat bath is proportional to the system size and timestep in guiding the system to the thermostat setpoint $T_{\rm 0}$.
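A deliberately simplified, single-atom sketch of one update of the wall-atom equations above (reduced units; explicit Euler for brevity, whereas a production code would use a time-reversible integrator; the tether force is written as a restoring force directed back toward the tether site):

```python
def nh_wall_step(p, r, r0, xi, dt, T0, Q, k4=5e3, k6=5e6, mass=1.0):
    """One explicit-Euler update of a single tethered, Nose-Hoover
    thermostatted wall atom. p, r, r0 are 3-component lists in the
    frame co-moving with the wall; illustrative sketch only."""
    dr = [ri - r0i for ri, r0i in zip(r, r0)]
    r_sq = sum(c * c for c in dr)
    # Restoring tether force: -r_i0 * (4*k4*r^2 + 6*k6*r^4)
    f_teth = [-c * (4.0 * k4 * r_sq + 6.0 * k6 * r_sq ** 2) for c in dr]
    # Momentum update with Nose-Hoover friction term -xi*p
    p_new = [pi + dt * (fi - xi * pi) for pi, fi in zip(p, f_teth)]
    ke = sum(pi * pi for pi in p_new) / mass
    # N = 1 atom here, so the kinetic target is 3*T0 in reduced units
    xi_new = xi + dt / Q * (ke - 3.0 * T0)
    r_new = [ri + dt * pi / mass for ri, pi in zip(r, p_new)]
    return p_new, r_new, xi_new
```

The friction variable $\xi$ grows when the kinetic energy exceeds the target and shrinks otherwise, steering the wall atoms toward the setpoint $T_0$ without thermostatting the fluid.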
Our simulations have been performed using an in-house MD code, called {\it flowmol} \citep{Smith_Thesis}. A simpler way of considering solid substrates in MD simulations is an unstructured smooth wall,\cite{Theodorakis2015,Theodorakis2015b} which is realized by using an interaction potential that depends on the distance between the effective beads and the wall.\cite{Israelachvili,Forte2014} In this case, the fluid--substrate interactions can be modelled by an unbiased integration (i.e. density inhomogeneities and structural characteristics of the substrate at the microscopic level are neglected) of the solid potential, considering a wall composed of spherical Mie beads, where the width of the substrate exceeds the cut-off of the potential.\cite{Forte2014} The form of the potential reads \begin{equation} \label{eq:sub} \phi_{\rm sub}(D) = 2 \pi\rho_{\rm MD} C \epsilon_{\rm ij}\sigma_{\rm ij}^3 \left[ A \left( \frac{\sigma_{\rm ij}}{D} \right)^{\lambda_{\rm ij}^r-3} - B \left( \frac{\sigma_{\rm ij}}{D} \right)^{\lambda_{\rm ij}^a-3} \right], \end{equation} where $A=1/[(\lambda_{\rm ij}^r-2)(\lambda_{\rm ij}^r-3)]$ and $B=1/[(\lambda_{\rm ij}^a-2)(\lambda_{\rm ij}^a-3)]$. $C$, $\sigma_{\rm ij}$, $\epsilon_{\rm ij}$, $\lambda_{\rm ij}^r$, and $\lambda_{\rm ij}^a$ have been defined in \eq{Mie_potential}, and $\rho_{\rm MD}$ is the number density, which for a paraffinic substrate is typically $\rho_{\rm MD}\approx 1\sigma^{-3}$. $D$ is the vertical distance between the beads and the substrate (wall). The cut-off of the fluid--substrate interaction is the same as the cut-off used for the fluid--fluid interactions. In this model, the substrate--fluid interaction is tuned against the contact angle of water.\cite{Theodorakis2015b} For example, a contact angle of approximately $60^\circ$ is obtained by setting the value of $\epsilon_{ \rm SW}=1.4 \epsilon$, where $\epsilon_{ \rm SW}$ is the strength of interaction between the substrate and water effective beads.
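The integrated wall potential of Eq.~(\ref{eq:sub}) is straightforward to evaluate; a sketch in reduced units, with the number density defaulting to the paraffinic value $\rho_{\rm MD}=1\sigma^{-3}$ quoted above:

```python
import math


def substrate_potential(D, eps_ij, sig_ij, lam_r, lam_a=6.0, rho=1.0):
    """Smooth-wall potential phi_sub(D) for an unstructured substrate
    of Mie beads at number density rho, per Eq. (eq:sub) of the text."""
    C = (lam_r / (lam_r - lam_a)) * (lam_r / lam_a) ** (lam_a / (lam_r - lam_a))
    A = 1.0 / ((lam_r - 2.0) * (lam_r - 3.0))
    B = 1.0 / ((lam_a - 2.0) * (lam_a - 3.0))
    return (2.0 * math.pi * rho * C * eps_ij * sig_ij ** 3
            * (A * (sig_ij / D) ** (lam_r - 3.0)
               - B * (sig_ij / D) ** (lam_a - 3.0)))
```

As expected for an integrated wall, the potential is attractive at large separations $D$ and strongly repulsive close to the wall, with the crossover controlled by $\sigma_{\rm ij}$ and the exponents.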
Then, by knowing the interaction parameter between water molecules (fluid--fluid interactions from Table \ref{SAFT_table}), we can obtain an estimate of an effective interaction parameter $\epsilon_{\rm SS}$ for the substrate beads by using the above combination rules. All other fluid--solid interactions arise from the use of these combination rules. Patterned substrates of different geometry can also be simulated using MD methods, for example in the context of realizing equal-sized mesoscale polymer droplets of two constituent polymers by sequential spin dewetting.\cite{Bhandaru2014} For this type of system, MD is particularly suitable, because one can accurately design any substrate geometry. Hence, the substrate pattern can be constructed according to the application under consideration.\cite{Bhandaru2014} We turn our attention now to the strategies employed for coupling molecular and continuum-scale models. \subsection{Coupling of molecular and continuum models} \label{sec:coupling} There are, broadly speaking, three techniques, which we term `Types 1, 2 and 3', for linking the continuum and molecular models described in the previous sections: \begin{enumerate} \item Type 1: run an MD simulation to get tables of data \citep{sadus_13, sadus_16}, or a reduced model to include in the continuum simulation, such as the surface tension in surfactant-laden flows or the contact line dynamics. This type of simulation includes the wider class of approaches that parameterize continuum constants using molecular dynamics \citep{Evans_Morris}, define non-local viscosity kernels \citep{Hansen_et_al}, define slip boundary conditions \citep{Hansen_et_al2011, Qian2003,Qian2004,Qian2006} or provide any parameterization which allows the continuum model to be run for the whole space with no further MD simulation. This type of coupling is the focus in this work, and is used to obtain surface tension values, define contact angles and create a model for the moving contact line which includes fluctuations.
\citep{Smith_et_al2016} \item Type 2: dynamically call or spawn new representative MD simulations during a continuum run to obtain (or check) parameters which are transferred to the continuum run, including the effect of complex molecules on viscosity \citep{Yamamoto, E_et_al}, or slip length. \citep{Asproulis2013} \item Type 3: directly link molecular and continuum solvers, with each solving a portion of the same domain, with mass, momentum, and energy exchange at the interface and both models evolving together \citep{OConnell_Thompson, Li_et_al, Hadjiconstantinou_thesis, Flekkoy_et_al, Wagner_et_al, Delgado-Buscalioni_Coveney_03, Nie_et_al, Werder_et_al, Borg_et_al, Smith_Thesis}. This type of coupling is discussed in detail in the next section \ref{sec:near_wall} on near-wall interactions. \end{enumerate} In this work, we present examples of recent studies using coupling Types 1 and 3 and discuss the use of MD simulations to obtain surface tension, slip length, and contact line dynamics. Our approach is mechanical and hydrodynamical, and we do not consider other possible approaches, such as analytic methods based on solving the Ornstein--Zernike equations with an approximate closure (e.g. the Percus--Yevick or hypernetted-chain closures). \section{Results and discussion} \label{sec:results} In this section, we will describe developments in the use of molecular-based models and their links to continuum-scale counterparts for a range of problems, starting from relatively simple shear flows, where attention is focused on the near-wall region. Complexity is then ramped up gradually, leading up to the consideration of contact line problems involving the presence of surfactants. In each case, the type of coupling between the molecular and continuum scales is discussed, and its successes and shortcomings highlighted.
\subsection{Near-wall interactions} \label{sec:near_wall} It has been shown that analytical solutions for continuum flows give very good agreement with MD results even at the small scale \citep{Travis_et_al97, Todd_Daivis_book, Smith_Thesis}. To demonstrate this, a Couette flow simulation is presented in this section, modelled by confining a molecular liquid between two solid walls. The molecules interact via the Mie potential, which is described by \eq{Mie_potential} using the values for Argon in Table \ref{SAFT_table} and a cutoff of $r_c = 2^{1/6}\sigma_{ArAr}$ for efficiency. The top wall is set in translational motion and the evolution of the velocity profile towards the steady-state Couette flow limit is monitored. Four layers of tethered molecules were used to model each wall, with the top wall given a sliding velocity of $U_0 = 1.0\sigma/\tau$ at the start of the simulation, corresponding to time $t = 0$ ($\tau$ is the MD time unit). The temperature of both walls was controlled by applying the Nos\'{e}-Hoover (NH) thermostat to the wall atoms \citep{Hoover_NoseHooverthermostat}. The MD simulation consists of $93,393$ molecules, with a liquid density of $\rho = 0.4$ and solid density $1.0$. The domain is of size $63.5$ by $46.0$ by $63.5$ in reduced LJ units, split into $2$ cells in each of $x$ and $z$ but $512$ in $y$ to resolve the detailed near-wall flow. The average density and velocity can be obtained by taking averages of the molecular values in the cells spaced over the molecular channel, in the form, \begin{align} \boldsymbol{u} \define \frac{1}{M_{\rm I}} \displaystyle\sum_{i=1}^N m_{\rm i} \boldsymbol{v}_{\textrm{i}} \vartheta_{\textrm{i}}, \label{U_def} \end{align} where $M_{\rm I}$ is the mass of the molecules in the volume $\rm I$, given by $M_{\rm I} \define \sum m_{\rm i} \vartheta_{\rm i}$, and $\vartheta_{\rm i}$ is a functional which is $1$ when the position, $r_i$, of molecule $\rm i$ is inside the volume $\rm I$ and zero otherwise.
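As a minimal sketch of this cell averaging (assuming unwrapped positions and a rectangular volume; the array names are illustrative), the functional $\vartheta_{\rm i}$ becomes a boolean mask:

```python
import numpy as np

def cell_velocity(r, v, m, lo, hi):
    """Mass-averaged velocity of Eq. (U_def) for one control volume.

    r, v: (N,3) positions and velocities; m: (N,) masses;
    lo, hi: corners of the volume. The boolean mask plays the role
    of the functional theta_i.
    """
    inside = np.all((r >= lo) & (r < hi), axis=1)   # theta_i = 1 inside
    M = np.sum(m[inside])                           # M_I = sum m_i theta_i
    if M == 0.0:
        return np.zeros(3)                          # empty cell
    return np.sum(m[inside, None] * v[inside], axis=0) / M

# Illustrative use: molecules all drifting at u = (1, 0, 0)
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 10.0, size=(1000, 3))
v = np.tile([1.0, 0.0, 0.0], (1000, 1))
m = np.ones(1000)
u = cell_velocity(r, v, m, np.array([0., 0., 0.]), np.array([10., 10., 5.]))
```

In practice this operation is repeated for each of the cells across the channel and the result is additionally averaged in time.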
This functional $\vartheta_i$ is formally the product of three boxcar functionals, one in each direction, $\vartheta_i\define[H(x^+ - x_i) - H(x^- - x_i)][H(y^+ - y_i) - H(y^- - y_i)][H(z^+ - z_i) - H(z^- - z_i)]$, with the plus and minus superscripts denoting the top and bottom surfaces of the volume.\citep{Smith_et_al2015} This velocity profile is compared with the analytical solution of the unsteady diffusion equation in Fig.~\ref{velocity_density_couette}(a),\citep{Smith_et_al} showing close agreement in both space and time. However, some tuning of the start/end location of the analytical solution, based on the location of the walls and on near-wall partial slip in the molecular system, is required to obtain the good agreement in Fig.~\ref{velocity_density_couette}(a). This slip arises from the molecular `layering/stacking' effect observed in the case of hard walls (strong tethering) and stick-slip behaviour near the walls in the molecular system, physical phenomena not captured by the continuum solution. The density and momentum in the near-wall region are shown in Fig.~\ref{velocity_density_couette}(b), with molecular stacking apparent in both. This stacking effect has been observed in experiments \citep{Butt_2005} and will be important in defining the near-wall dynamics of the fluid, as required in defining the contact line. \begin{figure}[H] \centering (a) \hspace{3in} (b) \\ \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig2a.pdf} \end{subfigure} \;\;\;\;\; \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig2b.pdf} \end{subfigure} \;\;\; \caption{Couette flow in a single-phase MD system showing excellent agreement with the continuum solution despite the small size of the system: (a) MD velocity (green points) matched to analytical Couette flow (black line) at $t=\{ 30 , 75 , 130 , 275, 530.5\}$ in reduced units; (b) density (black line) and momentum (dotted line) near the wall in the same channel, where the average liquid density is $\rho=0.4$.
} \label{velocity_density_couette} \end{figure} Many previous studies have tried to establish a way of coarse-graining this behaviour into a simple Navier slip length of the form of \eq{Navier_slip}. \citep{Hansen_et_al2011,Thompson1997} For two-phase flows, one can extract hydrodynamic boundary conditions from MD to construct a continuum model that holds in the whole space, including the contact line, as has been shown by Qian \textit{et al.}\cite{Qian2003,Qian2004,Qian2006} In this case, the model involves a total of nine material parameters, with two of them (a mobility coefficient and a positive phenomenological parameter) optimized by matching the hydrodynamic model calculation to the MD results.\cite{Qian2006} In this approach, the continuum predictions are not sensitive to these parameters within a certain range, in which the hydrodynamic model approaches the sharp-interface limit.\cite{Qian2006} It is clear that this is possible, and indeed reasonable, for some simple fluids and surfaces, but for complex surfaces, predictions can vary by orders of magnitude \citep{Kannam_et_al2013}. One way to include the details of the molecular surface directly is to couple the MD and CFD descriptions (Type 3), with MD simulations near the walls and the average of these providing the CFD boundary conditions. This type of direct coupling was originally proposed by \citet{OConnell_Thompson} and has since received considerable attention in the literature \citep{Li_et_al, Flekkoy_et_al, Wagner_et_al, Delgado-Buscalioni_Coveney_03, Nie_et_al, Werder_et_al, Borg_et_al}. \begin{figure}[H] (a) \hspace{3in} (b) \\ \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{fig3a.pdf} \end{subfigure} \begin{subfigure}{0.44\textwidth} \includegraphics[width=\textwidth]{fig3b.pdf} \end{subfigure} \caption{Coupled MD--CFD model for a simple LJ fluid near a wall.
The different regions are MD (green $\bullet$) and CFD (red $\times$), with boundary conditions averaged from the MD (red $\otimes$) and the cell constrained to agree with the CFD (blue $\bullet$), all compared to the analytical solution shown by black lines at successive times (reduced LJ units), $t=\{ 30 , 75 , 130 , 275, 530.5\}$, for (a) the molecularly flat wall case and (b) a wall with roughness, which shifts the effective no-slip location up to about half way across the wall; the two horizontal red lines denote the start of the matched analytical solution in both cases to allow comparison between (a) and (b). A snapshot of the molecules in a $10 \sigma$ wide slice in the span-wise direction is overlaid. } \label{CPL_velocity_couette} \end{figure} The bottom CFD boundary condition is obtained by averaging the velocity in the MD region as outlined in \eq{U_def} to obtain the mean velocity fields. The constraint applied to the MD region \citep{Smith_et_al2015} is of the form, \begin{align} m_{\rm i} \ddot{\boldsymbol{r}}_{\rm i} = \boldsymbol{F}_{\rm i}- \frac{m_{\rm i} \vartheta_{\rm i}}{M_{\rm I}} \left[ \frac{d}{dt} \displaystyle\sum_{{\rm i}=1}^N m_{\rm i} \boldsymbol{v}_{\rm i} \vartheta_{\rm i} - \frac{d}{dt}\int_V \rho \boldsymbol{u} dV \right], \label{GLC_EOM} \end{align} which is Newton's law with a differential constraint: a force based on the difference between the time evolution of momentum in the overlapping molecular, $\sum m_{\rm i} \boldsymbol{v}_{\rm i} \vartheta_{\rm i}$, and continuum, $\int_V \rho \boldsymbol{u} dV$, control volumes. This form is derived using Gauss' principle of least constraint with explicit localization through the $\vartheta_{\rm i}$ functional, which results in an extra term that was not present in previous formulations \citep{OConnell_Thompson, Nie_et_al}.
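A sketch of how this constraint force can be assembled at each timestep is given below, using finite-difference momentum rates; the array names and the continuum momentum-rate input are placeholders for data supplied by the coupled solver:

```python
import numpy as np

def constraint_force(m, v_new, v_old, inside, dPcfd_dt, dt):
    """Per-molecule constraint force of Eq. (GLC_EOM), sketched with
    finite differences.

    m: (N,) masses; v_old/v_new: (N,3) velocities at successive steps;
    inside: (N,) bool mask (theta_i) for the constrained cell;
    dPcfd_dt: (3,) rate of change of the continuum momentum in that cell.
    """
    # d/dt of the molecular momentum inside the cell, by finite difference
    dPmd_dt = np.sum(m[inside, None] * (v_new[inside] - v_old[inside]),
                     axis=0) / dt
    M = np.sum(m[inside])
    F = np.zeros((m.size, 3))
    # each molecule takes a share m_i/M_I of the momentum-rate mismatch
    F[inside] = -(m[inside, None] / M) * (dPmd_dt - dPcfd_dt)
    return F
```

Summing the returned forces over the constrained cell gives exactly minus the momentum-rate mismatch, which is what drives the MD cell average onto the CFD value.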
The MD simulation consists of $122,980$ molecules for the flat wall case and $166,838$ molecules for the rough wall, with a liquid density of $\rho = 0.4$ and solid density $1.0$. The domain is of size $130.2$ by $25.4$ by $108.0$ in reduced LJ units, split into $4$ cells in $x$ each of size $32.5$, $16$ in $y$ of size $1.59$ and $4$ in $z$ of size $27.0$. It has the same height as the pure MD simulation shown in Fig.~\ref{velocity_density_couette} but split between the CFD and MD solvers. The CFD has the same number of cells, $4 \times 16 \times 4$, and the same domain size, with the two domains overlapping by $8$ cells. In Fig.~\ref{CPL_velocity_couette} the grid highlights the cells used to solve the CFD and average the MD, as shown in red and green, respectively. The symbols at corresponding positions show the velocity in these cells. The constraint \eq{GLC_EOM} is applied to cell $14$ of the MD domain and the top two cells are left as a buffer, with the top cell thermostatted (a separate Nos\'{e}-Hoover thermostat from the one applied to the bottom cell of the wall). This maintains the system temperature at the setpoint $T_0=0.4$. The choice of density and temperature is based on a previous simulation \citep{Smith_2015}, for which the viscosity is known to be $\nu = 0.7$; this is the value set for the CFD solver, so the simulations evolve together. The simulation is run in parallel using four processes for both the CFD code (in this case OpenFOAM \citep{openfoam} is used to solve \eq{ANDNSEqn}, although any unsteady diffusion solver would suffice) and the MD code {\it flowmol} \citep{Smith_Thesis}. The parallel data exchange between these two codes is managed using the CPL library, which ensures communications are local between overlapping processors to optimise parallel scaling. \citep{CPL-library} The agreement between the coupled model and the time-evolving Couette flow analytical solution is excellent across the two domains in Fig.~\ref{CPL_velocity_couette}(a).
The constraint of \eq{GLC_EOM} is applied iteratively to ensure that local control of momentum in the MD simulation is exact, so the agreement between MD and CFD holds to machine precision; this is only possible because localisation is explicitly included in the mathematical formulation. The impact of including three-dimensional wall roughness (specified using random components in Fourier space) is also shown in Fig.~\ref{CPL_velocity_couette}(b). Note that as only a slice of molecules is shown, a single peak is apparent, but the roughness varies in both $x$ and $z$. This roughness has the effect of shifting the effective location of the zero-velocity point upwards into the flow. This demonstrates the potential of this type of approach compared to parameterizing a simple Navier slip model: the actual molecular detail of wall roughness, material crystal structure, complex chemical coatings or even biological membranes can be designed and tested as part of a fluid simulation. The Type 3 coupling described above has been applied to two-phase flows \citep{Hadjiconstantinou_thesis, Hadjiconstantinou_99} in simple channels. There is also work using this coupling to model droplet spreading and moving contact lines \citep{Wu_et_al14} (more on this below). However, this required an inflow in the molecular system, which necessitated the creation and insertion of molecules. Insertion of single atoms as well as more complicated molecules \citep{Praprotnik_2005, Borg_et_al_FADE} has been considered in the coupled simulation literature (see, for instance, Ref.~\citen{Delgado-Buscalioni_Coveney_03_USHER} and references therein). Despite this, most MD simulations use periodic boundaries to avoid the need to specify an inflow condition \citep{Rapaport,Todd_Daivis_book,Evans_Morris}, as non-periodic boundaries require the creation of information, which increases the potential for non-physical artifacts to be introduced.
As a result, Type 3 coupling simulations have typically focused on flows parallel to the interface, which minimize the need for insertion of molecules and the modelling of a net inlet. This form of coupled simulation is still in its infancy and, despite great progress, a clear mathematical and theoretical framework for coupling is still some way off \citep{Smith_Thesis, E_book}. Direct coupling of this type links the dynamics of the two domains and so is limited to molecular time and length scales. This is a very important observation, as it means Type 3 coupling can only be used as a method of extending molecular simulations and not as a way of adding molecular detail to a continuum approach. As a result of these limitations, in the remaining work we move away from Type 3 coupling, considering molecular modelling and the development of Type 1-style couplings. Despite these limitations, it is worth noting that the potential applications of Type 3 coupling are vast, allowing modelling of the molecular details of surfaces, nucleation of bubbles, complex chemical coatings and the flow interaction with biological membranes. \subsection{Surface tension effects} \label{sec:interface} We now consider situations in which the dynamics of a liquid--vapour interface are simulated, including the effect of surface tension. The transition from liquid to vapour often occurs over as few as three atomic layers,\cite{Delgado-Buscalioni2008} and it seems reasonable to assume that a surface exists. To obtain this surface, a cluster analysis is used to identify molecules in the liquid phase, with a `linked-list' built based on the criterion that molecules in a cluster are within a length $r_{\rm s} \approx 1.5 \sigma_{\rm ij}$ of each other.\cite{Stillinger1963,Theodorakis2010} In order to define the liquid--vapour interface, the location of the edge of the liquid cluster is determined. All molecules within a distance $r_{\rm s}$ of this surface are counted as `surface molecules'.
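A brute-force sketch of this cluster criterion is given below (assuming free-space distances with no periodic images, and an $O(N^2)$ distance matrix standing in for the linked-list used in practice):

```python
import numpy as np
from collections import deque

def largest_cluster(r, rs=1.5):
    """Identify the liquid phase as the largest connected component of
    molecules linked whenever their separation is below rs.

    A neighbour list would replace the O(N^2) distance matrix in practice.
    """
    n = len(r)
    d = np.linalg.norm(r[:, None, :] - r[None, :, :], axis=-1)
    adj = (d < rs) & ~np.eye(n, dtype=bool)   # linked pairs
    seen, best = np.zeros(n, dtype=bool), []
    for start in range(n):
        if seen[start]:
            continue
        comp, queue = [], deque([start])      # breadth-first search
        seen[start] = True
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in np.flatnonzero(adj[i] & ~seen):
                seen[j] = True
                queue.append(j)
        if len(comp) > len(best):
            best = comp
    return np.array(best)
```

Molecules within $r_{\rm s}$ of the edge of the returned component would then be flagged as the surface molecules described above.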
The surface is then defined by a function fitted to these surface molecules, treated as points. For capillary wave theory, a surface is commonly defined through Fourier components \citep{Chacon2003aa}, although a similar approach using polynomial functions can remove the requirement of periodicity; it is worth noting that the MD liquid--vapour interface is actually diffuse, and any choice of surface is arbitrary, made purely as a convenient way to match to continuum concepts. In addition to providing insight into the dynamics of the liquid--vapour surface, the surface tension can also be determined from MD simulation. Most surface tension definitions use a thermodynamic approach \citep{Rowlinson2002aa,Waals1979aa} based on free-energy quantities, which are not clearly defined at the molecular scale. From a thermodynamic viewpoint, the surface tension can be interpreted as the work needed to increase the area of the dividing surface at a given curvature \citep{Ono_Kondo}. However, as noted by Ono and Kondo\cite{Ono_Kondo}, the thermodynamic approach to obtaining surface tension is convenient but restricted to thermodynamic equilibrium. Although it is possible to extend the Gibbsian approach to define curvature and contact angles \citep{Neumann_et_al}, the hydrostatic approach has the advantage that it is applicable even out of equilibrium \citep{Ono_Kondo}. Fluid dynamics describes the evolution of a mechanical fluid system and MD a mechanical molecular system, so the mechanical approach is used in this work. This mechanical form of surface tension was introduced by Bakker\cite{Bakker1928} and is often referred to as the \citet{Kirkwood_Buff} method, expressed by the following formula: \begin{equation} \gamma = \int_{-\infty}^{\infty} \left[\Pi_{\rm N} - \Pi_{\rm T} \right] dz, \end{equation} where the subscript ``N'' indicates the direction normal to the surface and ``T'' the tangential direction in the plane.
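Given binned stress profiles, this integral reduces to a one-line quadrature; the Gaussian tangential-stress deficit below is synthetic, purely to illustrate that only the interfacial region contributes:

```python
import numpy as np

def surface_tension(z, Pi_N, Pi_T):
    """Kirkwood--Buff surface tension, gamma = integral of (Pi_N - Pi_T) dz,
    evaluated with the trapezoidal rule on binned stress profiles."""
    f = Pi_N - Pi_T
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

# Synthetic profiles: Pi_T dips below Pi_N only near the interface at z = 0,
# so only that region contributes to gamma (values are made up).
z = np.linspace(-10.0, 10.0, 2001)
Pi_N = np.full_like(z, 0.1)                 # constant normal pressure
Pi_T = 0.1 - 1.0 * np.exp(-z**2 / 2.0)      # tangential deficit at z = 0
gamma = surface_tension(z, Pi_N, Pi_T)      # ~ sqrt(2*pi) for this profile
```

In an MD calculation the profiles $\Pi_{\rm N}(z)$ and $\Pi_{\rm T}(z)$ would come from the volume-averaged stress described next.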
This approach, suitable for concentrations below the Critical Aggregation Concentration (CAC), requires a stress tensor $\boldsymbol{\Pi}$ to be defined in the molecular system. For an entire system at equilibrium, the virial approach can be used to obtain the pressure \citep{Rapaport}. However, the virial expression is not valid for obtaining the local stress \citep{Evans_Morris, Heyes_et_al}. Instead, the \citet{Irving_Kirkwood} (IK) approach is formally consistent with the continuum definition of stress and provides the standard microscopic formulae for computing the local expressions. This stress is expressed here as the integral of the IK form over a volume, the so-called volume average (VA) stress \citep{Lutsko, Hardy, Cormier_et_al}: \begin{equation} \boldsymbol{\Pi} = \frac{1}{V} \left[ \sum_{\rm i=1}^{N} m_{\rm i}\boldsymbol{v}_{\rm i}\boldsymbol{v}_{\rm i} \vartheta_{\rm i} +\frac{1}{2}\sum_{\rm i,j}^N \boldsymbol{r}_{\rm ij}\boldsymbol{F}_{\rm ij}\,\int_{0}^{1} \vartheta_s ds \right], \end{equation} where $\boldsymbol{F}_{\rm ij}$ is the intermolecular force between molecules $\rm i$ and $\rm j$, and $\vartheta_s$ denotes a functional which is one when a point on the intermolecular interaction line is inside the volume and zero otherwise, so that the integral over $s$ yields the fraction of the line inside the volume; it is defined formally in Ref.~\citen{Smith_et_al}. The $\vartheta_s$ term is an exact analogy to the $\vartheta_i$ introduced previously, with $r_{\rm i} \to r_s$; the location of a point on the line of interaction between molecules, $r_s = r_{\rm i} + sr_{\rm ij}$, is used instead of the particle position. The dummy variable $s$ is integrated between $0$ and $1$, corresponding to tracing the line between molecules $\rm i$ and $\rm j$ to obtain the fraction of the intermolecular interaction inside the volume.
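The only nontrivial ingredient is the $\int_0^1 \vartheta_s ds$ factor; for a slab-shaped volume it can be computed by clipping the interaction line, as sketched below (free-space geometry assumed, and function names are illustrative):

```python
import numpy as np

def line_fraction(zi, zj, zlo, zhi):
    """The integral of theta_s over s for a slab zlo <= z < zhi:
    the fraction of the i-j interaction line lying inside the slab."""
    if zi == zj:                      # segment parallel to the slab
        return 1.0 if zlo <= zi < zhi else 0.0
    s_at = lambda z: (z - zi) / (zj - zi)
    s_lo, s_hi = sorted((s_at(zlo), s_at(zhi)))
    return max(0.0, min(1.0, s_hi) - max(0.0, s_lo))

def pair_stress(r_i, r_j, F_ij, zlo, zhi, V):
    """Configurational contribution of one i-j pair to the VA stress."""
    frac = line_fraction(r_i[2], r_j[2], zlo, zhi)
    return 0.5 * np.outer(r_i - r_j, F_ij) * frac / V
```

Summing `pair_stress` over all interacting pairs, plus the kinetic $m_{\rm i}\boldsymbol{v}_{\rm i}\boldsymbol{v}_{\rm i}$ term for molecules in the slab, reproduces the structure of the VA expression above.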
Considering stress over an integrated control volume removes some of the ambiguity in its definition (see \textit{e.g., } \citet{Zhou, Admal_Tadmor}) by allowing direct comparison to the control volume form of the equations (\ref{BofMEqn2}) \citep{Smith_et_al}. The non-uniqueness of stress is important because the actual value of the surface tension depends on the stress in the vicinity of the interface. Recent work has shown that the distributions of measured stresses in volumes $V$ smaller than $V^{1/3} = 3$ are non-Gaussian, suggesting that simply taking mean values of stress may not be sufficient.\citep{Smith_et_al2017} This is a concern, as the stress within only a few intermolecular diameters of the surface can be shown to almost entirely determine the surface tension \citep{Delgado-Buscalioni2008}. In addition, three-body interactions, or even more complex multi-body potentials, are needed even for simple LJ fluids, as shown by a recent study \citep{Ghoufi_Patrice_Tildesley}. It is clear that care is required in defining both a vapour--liquid surface and its tension.
\begin{figure} \centering (a) \hspace{3in} (b) \\ \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig4a.pdf} \end{subfigure} \;\;\;\;\; \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig4b.pdf} \end{subfigure} \;\;\; (c) \hspace{3in} (d) \\ \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{fig4c.pdf} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{fig4d.pdf} \end{subfigure} \caption{Molecular dynamics simulations of interfaces with surfactants: (a) (top) concentration of surfactant, $C = \rho_{poly}/\rho_{Tot}$, where $\rho_{Tot} = \rho_{poly} + \rho_{solv}$, with $\rho_{poly}$ the density of surfactants and $\rho_{solv}$ the density of the solvent (water), and (bottom) contribution to the surface tension integral $\gamma$ from differences in pressure as a function of domain position; (b) two-phase water (transparent) and surfactant (blue hydrophilic heads and teal hydrophobic tails) in a simulation used to calculate the surface tension; (c) surface excess concentration $\Gamma \define C_{Surface} - C_{Bulk}$ as a function of concentration, which suggests that the critical aggregation concentration (CAC) is around $8\%$; (d) surface tension $\gamma$ in reduced units as a function of concentration; here, the dotted line is at the CAC of $8\%$, with points above this value omitted from the figure. The surface tension in the limit of zero concentration, $\gamma \approx 1.64$, corresponds to the standard temperature and pressure surface tension of water, $71\,$mN/m, as the SAFT model is designed to reproduce this value \cite{Lobanova2015,Theodorakis2015,Theodorakis2015b}. } \label{vmd_Surface_concentration} \end{figure} Despite the aforementioned challenges, molecular simulation is uniquely placed to provide \textit{a priori} estimates of surface tension for complex multi-component systems such as surfactant-laden fluids.
Liquid--vapour interfaces can be designed based on the required molecular structures, downloaded from online databases \citep{PDB} which are tuned from quantum potentials or experiments, and verified against X-ray scattering data \citep{Hansen_Mcdonald, Yamell_et_al}. The bulk behaviour of systems combining different numbers of these molecules allows the exploration of the effect of their concentration on the surface tension. To demonstrate this, the surface tension contribution as a function of spatial location is shown for a range of surfactant concentrations in Fig.~\ref{vmd_Surface_concentration}(a). The modelled system is two-phase water with varying concentrations of a SAFT-based model for an organic poly-alkyl-ether molecule ($CM-CM-CM-EO-EO-EO-EO-EO-EO-EO-EO$, with $-$ denoting harmonic bonds; see Table \ref{SAFT_table}). The concentration of surfactant at the surface can be seen to increase for greater overall concentrations in Fig.~\ref{vmd_Surface_concentration}(a) (top), while the contribution to surface tension at the surface, the difference in normal and tangential stress $\Pi_{\rm N} - \Pi_{\rm T}$, is reduced by the increased surfactant concentration (bottom). The results of increasing surfactant concentration are also shown in Fig.~\ref{vmd_Surface_concentration}(c), where the surface excess is plotted against concentration. It is clear that beyond the critical aggregation concentration (CAC), approximately $8\%$ in this case, no more surfactants can be accommodated at the interface. The measured surface tension is shown as a function of this concentration in Fig.~\ref{vmd_Surface_concentration}(d). For concentrations above the CAC, the surface tension continues to drop, which highlights a feature of the \citet{Kirkwood_Buff} formulation: the integral runs over the whole domain, including the increasingly surfactant-laden bulk.
This bulk is increasingly inhomogeneous as micelles form, creating new surfactant--water interfaces inside the liquid region, and the $\Pi_{\rm N} - \Pi_{\rm T}$ term will no longer be zero on average inside the liquid. \subsection{Static contact angles} \label{sec:wall_interaction} Having considered details of both the wall--fluid interaction and the liquid--vapour interface, we move on to consider the point where the two meet: the contact line. In a recent experimental study, \citet{Nelson_et_al_2011} explored the impact of electrowetting and wall sliding speed on the dynamic contact angle. Despite the small system size in the case of molecular modelling, good agreement between molecular-level simulation and experiment has been observed.\citep{Cheng_et_al2016} The static contact line is analyzed first, to parameterize this behaviour before the added complexity of moving the contact line is considered. In a molecular model, wall--fluid interactions can be varied by changing the interaction, $\epsilon_{\rm wall}$, between the wall and the fluid molecules. There is a large number of factors that determine the surface--fluid interplay in experiments, including surface roughness and material, complex chemical coatings as well as electrowetting. We consider the flows observed in experiments \citep{Nelson_et_al_2011} in which a liquid bridge is sheared between a gold surface on the top and a sliding Electro-Wetting on Dielectric (EWOD) surface on the bottom; here, electrowetting effects are used as a means of effectively varying the wall--fluid potential, which, in turn, alters the wall wetting properties \cite{Theodorakis2015b}.
\begin{figure} \centering (a) \hspace{2.8in} (b) \\ \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{fig5a.pdf} \end{subfigure} \begin{subfigure}[t]{0.48\textwidth} \includegraphics[width=\textwidth]{fig5b.pdf} \end{subfigure} \caption{Static contact angles as a function of the wall wetting value $\epsilon_{\textrm{wall}}$: (a) the same $\epsilon_{\textrm{wall}}$ for top and bottom walls, obtained from a cubic line fitted to the surface, with advancing angle (blue), receding angle (green) and slant angle from a linear fit (red). The three cases are shown above the figure with the fitted lines used to determine the angles (a slightly unusual choice of measuring anti-clockwise for the bridge, with the angle determined from the right at the bottom and from the left at the top) and the measured angles, while the horizontal black line indicates the contact angle for the water--gold interaction \citep{Garcia_et_al}. (b) Using $\epsilon_{\textrm{wall}}^T=0.7$ for the top wall (based on the gold--water angle), the slant angle from the linear fit for varying bottom wall interaction $\epsilon_{\textrm{wall}}^B$ (red), measured with the same convention (anti-clockwise) as the advancing and receding angles, is compared to experimental data with a second $x$-axis for voltage (black) \citep{Nelson_private_communication, Nelson_et_al_2011}. Adapted with permission from Ref. 31. Copyright 2016 Royal Society of Chemistry.} \label{fig:wetting_study} \end{figure} First, the results of a parametric study of wetting using MD simulations with a symmetric wall for a range of different wall interactions are presented in Fig.~\ref{fig:wetting_study}(a); here, the SAFT form of water from Table \ref{SAFT_table} is used for the liquid and vapour molecules. It is worth noting that the extreme values are not strictly correct due to the use of a cubic polynomial to define the liquid--vapour surface.
The angles are, therefore, over-predicted or under-predicted for the hydrophilic and hydrophobic cases, respectively. These extreme values are not relevant here, so this discrepancy does not affect the results presented. From the work of \citet{Garcia_et_al}, the contact angle of water on a gold substrate is $56.6^\circ$ at room temperature (shown as a horizontal line in Fig.~\ref{fig:wetting_study}(a)). In order to reproduce this gold--water contact angle, a wall interaction of $\epsilon_{\rm wall}^T = 0.7$ is chosen and the top wall set to this value, while the bottom wall interaction is varied separately to match the static contact angle from \citet{Nelson_et_al_2011}. These static contact angles in the molecular systems are parameterized and the overall trends compared to the experimental results for different electrowetting cases in Fig.~\ref{fig:wetting_study}(b). From this, four values, $\epsilon_{\rm wall}^B = \{ 0.38, 0.46, 0.52, 0.69\}$, are chosen to match $10V$ to $50V$ from the electrowetting study. We note that, in principle, voltage controls the macroscopic contact angle as expressed by the Young--Lippmann equation (YLE), while the interaction energy parameter is related to the microscopic contact angle. Although this mapping may be misleading when matching dynamic contact angles, \citet{Liu2012} present results which ``provide strong evidence that the YLE remains valid down to nanometer scales''. In order to obtain a simple model of contact line motion in this work, we tune the interaction strength, as an analogy to voltage in the YLE, to manipulate the microscopic contact angle. The range of wall interactions covers the same angles observed in the experiments of \citet{Nelson_et_al_2011}, except at large voltage values, which exhibit unstable behaviour. Having matched the static starting angles, a parametric study of wall sliding speeds is performed in the next section.
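To make the voltage analogy concrete, a hedged sketch of the YLE mapping from voltage to macroscopic contact angle is given below; the lumped capacitance-to-surface-tension constant and the zero-voltage angle are illustrative values, not fitted to the EWOD device of \citet{Nelson_et_al_2011}:

```python
import numpy as np

def young_lippmann(V, theta0_deg, c_over_gamma):
    """Young--Lippmann estimate of the macroscopic contact angle:
    cos(theta) = cos(theta0) + (c / (2 * gamma)) * V**2, with c the
    dielectric capacitance per unit area. c_over_gamma lumps c/gamma
    and is an assumed, illustrative value here.
    """
    cos_t = np.cos(np.radians(theta0_deg)) + 0.5 * c_over_gamma * V**2
    cos_t = np.clip(cos_t, -1.0, 1.0)   # saturates outside YLE validity
    return np.degrees(np.arccos(cos_t))

# Illustrative voltages only; angles decrease monotonically with V
angles = [young_lippmann(V, theta0_deg=110.0, c_over_gamma=2.0e-4)
          for V in (0, 20, 40)]
```

In the MD model, the wall interaction $\epsilon_{\rm wall}^B$ plays the role of $V$ in this mapping, with each chosen value matched to one static angle.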
\subsection{Dynamic contact angles} \label{sec:dynamic_contact} Molecular dynamics studies of the moving contact angle using sheared liquid bridges have shown great promise in the literature, matching Cox--Voinov theory \citep{Cox_1986a} and molecular kinetic theory,\citep{Blake_2006} but have not been directly compared to experimental results. \citep{Thompson_et_al89, Thompson_et_al93, Gentner_et_al_2003, Qian_et_al_2003, Ren_E_2007} In this section, we are interested in the similarities and differences between these molecular simulations and results from similar experimental geometries. The setup in this section is the same as in the previous section, represented by Fig.~\ref{fig:wetting_study}(b), except that now the bottom wall is allowed to slide. \begin{figure} \centering (a) \\ \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{fig6a.pdf} \end{subfigure} \\ (b) \hspace{3in} (c) \\ \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{fig6b.pdf} \end{subfigure} \caption{(a) Qualitative comparison of contact angle sliding for different voltage and capillary number as indicated, for analytical and molecular modelling results (using the same surface tension $\gamma=71.97\times 10^{-3}N/m$ and viscosity $\mu = 1.002\times10^{-3}\,N\,s/m^2$ for both MD and experiments). Results for the contact angle $\theta$ vs. capillary number: (b) from experiments of \citet{Nelson_et_al_2011} with EWOD voltage $0V$ ($\square$), $20V$ ($\circ$) and $40V$ ($\triangle$) and (c) wall interaction $\epsilon_{\textrm{wall}}$ of 0.38 ($\square$), 0.46 ($\circ$), 0.52 ($\triangle$) and 0.69 ($\times$). Adapted from Ref. 114. Copyright 2011 American Chemical Society.} \label{Nelson_et_al_2011_vs_MD} \end{figure} The static contact angles were matched in the previous section for a range of electrowetting strengths. The dynamic behaviour of these matched static contact angles is compared between MD and experiments in Fig.~\ref{Nelson_et_al_2011_vs_MD}(b).
The experimental result for the fastest sliding speed from \citet{Nelson_private_communication} overlaps with the slowest MD result. The hysteresis seen in the experimental results is significant, at almost $20^\circ$ in the slowest sliding case, while almost no hysteresis is observed in the MD simulation. This difference in the contact angle hysteresis may be attributed to the small system size in the case of the simulation \cite{Extrand1995,Whyman2008,Park2015,Eral2013}, an effect predicted to be more pronounced for smaller systems.\cite{Whyman2008} At sufficiently large speeds, there is a minor hysteresis in the advancing and receding angles, as shown in Fig.~\ref{Nelson_et_al_2011_vs_MD}(a). The molecular liquid bridge does not remain stationary and travels along the domain, which most likely reduces hysteresis, as the contact line is not pinned. The movement of the liquid bridge does not occur with counter-sliding walls, and appears to result from the asymmetry of a stationary top wall and moving bottom wall. Furthermore, quantitative agreement is not observed between MD and experiments. Possible reasons for this discrepancy include: $i)$ the simplicity of the MD model for water (SAFT here; SPC or TIP4P may be better), $ii)$ inadequacies of using the interaction strength to model electrowetting, $iii)$ the perfect atomic lattice used for the walls, and $iv)$ system size effects. Considering point $i)$, the force-field based on the SAFT-$\gamma$ equation of state does not have electrostatic terms and has not been fitted to hydrodynamic properties such as viscosity, but is instead a result of fitting the equation of state to experimental thermodynamic data. Hence, a possible refinement of the fitting parameters could also lead to a more accurate model. For point $ii)$, an actual model of the electrowetting force-field could be used instead of simply tuning the fluid--wall interaction, as in \citet{Zhao_Quanzi}, or a more sophisticated force could be applied to all molecules.
\citep{C4NR06759B} For $iii)$, the lattice is very different from the chemical and physical roughness observed in the experiments which would pin the contact line to a particular location. In fact, even the actual tethering of the wall molecules can be an important factor in the dynamics of a contact line. In our case, the tethering of molecules leads to stiff substrates, but one can vary the strength of the tethering interactions creating soft and hard substrate areas as appropriate. This property has been recently used in simulation `experiments' to guide droplets along stiffness gradients on solid substrates; a phenomenon known as {\it durotaxis}. \cite{Chang2015,Theodorakis2017} By varying the stiffness of the substrate, and up to a threshold value for the stiffness, the contact angle of a droplet can change significantly (\textit{e.g., } by as much as 20$^{\circ}$).\cite{Theodorakis2017} However, the limited system size may be the main source of error, as it is insufficient to accommodate the scales of motion necessary to provide a faithful representation of the dominant physics.\cite{Whyman2008,Extrand1995} Eral \textit{et al.}\cite{Eral2013} notes that MD simulations have tried to model hysteresis but size and length scales are too limited by computer costs. As will be shown in the next section, MD system sizes must exceed a certain size before agreement with continuum-scale models is observed.\cite{Theodorakis2015b} This observation is supported by the experiments of Park \textit{et al.}\cite{Park2015} where below 5 μm the spreading behavior of the contact line is shown to be very different, with nano-scale wall roughness being an important factor. Despite the differences between experiments and MD, the observed shapes and dynamic trends of the liquid bridge are still seen to be broadly similar to experimental observations in Fig \ref{Nelson_et_al_2011_vs_MD} (b). 
In particular, inspection of this figure reveals that the advancing contact angles converge at high speeds, and the receding ones diverge, with a larger degree of wetting effectively shifting the curve down in both the numerical and experimental results. The sheared liquid bridge has the advantage that it is a steady state, allowing us to collect detailed statistics on the behaviour of the contact angle. As a result, the molecular dynamics simulation of a sheared bridge can provide insight into the fluctuations due to molecular motions \citep{Smith2016, Smith_et_al2017}. This provides an opportunity to develop a reduced model for these molecular fluctuations in the spirit of Type 1 coupling. As shown in recent work \citep{Smith2016}, these fluctuations are largely Gaussian and the autocorrelation is well described by exponential decay. Consequently, a Langevin equation can be used to reproduce the molecular behaviour at the contact line by tuning it with the MD mean, standard deviation, and autocorrelation, \begin{align} \theta^{t+1} = \theta^t - \frac{k \Delta t}{\Gamma} \left[\theta^t - \langle\theta\rangle \right] + \xi \frac{\sqrt{C \Delta t}}{\Gamma}. \label{Langevin_equations_numerical} \end{align} Using \eq{Langevin_equations_numerical}, molecular fluctuations can be incorporated into a CFD model as a form of Type 1 coupling. This was demonstrated in \citet{Smith2016} with the evolution of the mean angle, $\langle\theta\rangle$, governed by a form of Tanner's law and the Langevin equation used to advance the continuum angle while including molecular noise. In principle, this approach can be generalized to include the effect of wall interaction using the data from Fig. \ref{Nelson_et_al_2011_vs_MD} (c).
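As an illustration, \eq{Langevin_equations_numerical} can be integrated with a simple explicit update. The following Python sketch is ours: the function names are invented and the parameter values are placeholders, not the fitted MD values.

```python
import random

def langevin_step(theta, theta_mean, k, Gamma, C, dt, rng=random):
    """One explicit step of the contact-angle Langevin model:
    theta^{t+1} = theta^t - (k*dt/Gamma)*(theta^t - <theta>) + xi*sqrt(C*dt)/Gamma,
    with xi a standard normal variate."""
    xi = rng.gauss(0.0, 1.0)
    return theta - (k * dt / Gamma) * (theta - theta_mean) + xi * (C * dt) ** 0.5 / Gamma

def evolve(theta0, theta_mean, k, Gamma, C, dt, nsteps, seed=0):
    """Integrate the model for nsteps; with C = 0 this reduces to a
    deterministic exponential relaxation of theta towards <theta>."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(nsteps):
        theta = langevin_step(theta, theta_mean, k, Gamma, C, dt, rng)
    return theta
```

In use, $k$, $\Gamma$ and $C$ would be tuned so that the stationary mean, variance and autocorrelation time of the generated series match those measured in the MD simulation.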
As each electrowetting case has a different equilibrium angle $\theta_e$, we consider the relative change in angle $\langle\theta\rangle - \theta_e$ and, for the four different electrowetting numbers of Fig \ref{Nelson_et_al_2011_vs_MD} (c), we can collapse these onto a single curve by multiplying the velocity by an arbitrary function of the wall interaction $\epsilon_{\textrm{wall}}$, here $\epsilon^{5/2}_{\textrm{wall}}$. The data with this scaling are shown in Fig \ref{Collapse_based_on_electrowet} (a). By taking the coefficient of the line of best fit, $1377.7$, we can define a relationship between angle, sliding velocity and electrowetting, \begin{align} u = \frac{ \left(\langle\theta\rangle - \theta_e \right)}{ 1377.7 \epsilon_{\textrm{wall}}^{5/2}} \end{align} which is simply a form of Tanner's law \eq{Tanners} with $n=1$ and $A$ a function of the electrowetting number. \begin{figure} (a) \\ \includegraphics[width=0.7\textwidth]{fig7a.pdf} \\ (b) \\ \includegraphics[width=0.7\textwidth]{fig7b.pdf} \caption{(a) Wall sliding speeds for four wetting numbers collapsed onto a single line by scaling with $\epsilon^{5/2}_{\textrm{wall}}$. (b) CFD solver using the thin-film form of the equations (\ref{thin_film}) for four values of the electrowetting number $\epsilon_{\textrm{wall}} = \{ 0.38, 0.46, 0.52, 0.69\}$ (red, blue, yellow, and green lines, respectively) showing the evolution of the angle $\theta$ and the contact angle velocity $dX_c/dt$ (inset) as a function of time $t$.} \label{Collapse_based_on_electrowet} \end{figure} Using the form of Tanner's law from \eq{Tanners} with the Langevin \eq{Langevin_equations_numerical}, wetting is incorporated into a solver for the continuum thin-film equations~(\ref{thin_film}) as shown in Fig \ref{Collapse_based_on_electrowet} (b) (see \citet{Karapetas_et_al} for implementation details of the thin-film solver and \citet{Smith_et_al2016} for the tuning of the contact line model).
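The collapsed relation above can likewise be packaged as a closure function for a continuum solver. This is a hypothetical sketch of ours, assuming the $5/2$ exponent and the fitted coefficient $1377.7$ quoted in the text:

```python
def contact_line_speed(theta_mean, theta_e, eps_wall, A=1377.7, p=2.5):
    """Collapsed mobility law u = (<theta> - theta_e) / (A * eps_wall**p):
    Tanner's law with n = 1 and an electrowetting-dependent prefactor."""
    return (theta_mean - theta_e) / (A * eps_wall ** p)

def dynamic_angle(u, theta_e, eps_wall, A=1377.7, p=2.5):
    """Inverse relation: the dynamic angle implied by a wall speed u."""
    return theta_e + A * eps_wall ** p * u
```

At equal sliding speed, a larger $\epsilon_{\textrm{wall}}$ gives a larger deviation from $\theta_e$, consistent with the collapse in Fig \ref{Collapse_based_on_electrowet} (a).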
The equilibrium angles are taken from the receding angles in Fig \ref{Nelson_et_al_2011_vs_MD} (a), with $\theta_e = \{112, 98.7, 88.7, 50.5 \}$ for $\epsilon_{\rm wall} = \{ 0.38, 0.46, 0.52, 0.69 \}$, respectively. The molecular fluctuations are seen in the contact line velocity of Fig \ref{Collapse_based_on_electrowet} (b, inset), with the difference in speed of evolution and final equilibrium angle shown in the main part of Fig \ref{Collapse_based_on_electrowet} (b). In this way, the molecular detail has been parameterized and included in a form that is directly useful for CFD applications (\textit{i.e.,} a closure model for contact line motion). The use of a thin-film solver is potentially invalidated by the large angles present in the MD simulation used to design the model; in practical application, the approach used here should be included as part of a more complex CFD solver modelling larger angles. We apply this technique to the thin-film equations as an example of how molecular contact line motion can be parameterized and incorporated into a continuum solver, through Tanner's law using a simple droplet model. In addition, the use of a reduced model for the contact line represents a massive simplification of very complex molecular detail. While this \textit{may} be acceptable for flat walls and simple fluids, it would not be expected to work for more complex examples. These include common challenges in industrial fluid mechanics such as surfactant-laden flow, rough or textured walls, build-up of surface fouling, large heat gradients and phase change. To address this complexity, one possible future extension could be to use Type 2 coupling, where surfactants are included or fractal wall roughness is modelled explicitly, with the contact line dynamics fed back into the continuum model. Examples of possible embedded MD models are shown in Fig \ref{fig:rough_wall}(a) for fractal roughness and Fig \ref{fig:rough_wall} (b) for inclusion of surfactants.
\begin{figure} \centering (a) \hspace{2.8in} (b) \\ \begin{subfigure}{0.42\textwidth} \includegraphics[width=\textwidth]{fig8a.pdf} \end{subfigure} \;\; \begin{subfigure}[c]{0.44\textwidth} \includegraphics[width=\textwidth]{fig8b.pdf} \end{subfigure} \caption{(a) Including roughness at the molecular scale using an arbitrary superposition of random cosines to remove molecules from an FCC lattice of tethered molecules. (b) Building surfactant molecules into the SAFT-$\gamma$ water sheared liquid model. } \label{fig:rough_wall} \end{figure} As the parameter space is too large to model and potentially too complex to define a reduced order model, we could consider a Type 2 embedded scheme, where MD is run as needed to obtain contact line data at a given state point, or based on the observed roughness of the wall at that point in a CFD model. However, there are potential issues with this modelling methodology: the flow field may not be representative of an actual droplet and, at higher shear rates, the liquid bridge can be seen to pinch off. To highlight both the different physics and pinch off, the streamlines for a liquid bridge and a droplet are compared in Fig \ref{Streamlines_droplet_vs_bridge}. The streamlines observed here are consistent with the variation of flow regimes observed in a continuum study of a similar liquid bridge by \citet{Ren_E_2011}. The liquid bridge deforms linearly (elastically) for smaller shear rates, returning to a stationary bridge if the shear is removed. Beyond a certain yield strain, the liquid bridge is pulled apart and fails like a solid in the non-linear (plastic) region. The streamlines in the liquid bridge of Fig \ref{Streamlines_droplet_vs_bridge} (a) look similar to a Kirchhoff elliptic vortex, but the vortex pair gradually moves apart until it becomes sufficiently separated that the middle region (a region of low pressure) is no longer surrounded by the flow and surface tension cannot hold the bridge together.
The pinch-off mechanism seen in the molecular system also bears a striking qualitative resemblance to that observed experimentally \citep{Smith2016, Wang_McCarthy}. \begin{figure} \centering (a) \hspace{3in} (b) \\ \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{fig9a.pdf} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=\textwidth]{fig9b.pdf} \end{subfigure} \caption{A comparison of the streamlines for (a) a liquid bridge at low sliding rates and at the point of pinch off with contact angle highlighted and (b) a droplet before and during spreading with streamlines colored by mean particle density from blue (low) to red (high).} \label{Streamlines_droplet_vs_bridge} \end{figure} This limitation on the range of stability of a sheared liquid bridge places a constraint on the range of contact line dynamics that can be explored. This is important for modelling surfactants, as reduced surface tension would further promote pinch-off. In addition, when compared to the droplet flow field shown by the streamlines of Fig \ref{Streamlines_droplet_vs_bridge}(b), it is clear that the fluid dynamics in the liquid bridge of Fig \ref{Streamlines_droplet_vs_bridge}(a) is not the same. All of this suggests that the liquid bridge may not be the appropriate setup from which to obtain dynamic contact line behaviour for Type 2 embedded coupling. As the flow of surfactant determines contact line motion for superspreading, the next section outlines a model of the entire MD droplet to understand the correct dynamics in the presence of surfactants. The system size limitations are discussed directly, and it is found that large droplets are required to recover macroscopic behaviour.
\subsection{Droplet modelling} \label{sec:droplet} The molecular-level modelling of surfactant-laden droplets requires the simulation of large systems.\cite{Theodorakis2015b,Santiso2013} Even in the case of a pure SAFT-$\gamma$ water droplet without surfactants, for example, the size of the droplet should exceed about $65,000$ effective beads in order to render the contact angle independent of the droplet size,\cite{Theodorakis2015b} minimizing line tension effects, which are present in nanoscale droplets.\cite{Weijs2011} The concept of line tension was first introduced by Gibbs as `linear tension', suggesting that it might be considered in a manner entirely analogous to that in which surfaces of discontinuity are treated; it may take negative values and is particularly relevant for small systems.\cite{Gibbs1961} An accurate way to measure contact angles while avoiding a fitting procedure is to use the curvature of the droplet through the following relation\cite{Theodorakis2015b}, \begin{equation} \label{eq:phi} \theta = \arcsin(1/\mu_{\rm D}), \end{equation} where $\mu_{\rm D} = (1+\lambda^2)/(2\lambda)$ and $\lambda=h/\alpha$, with $h$ being the distance from the solid--liquid interface to the apex of the droplet and $\alpha$ the radius of the solid--liquid interface. Calculating the contact angle through the ratio $\lambda$ also results in smaller statistical errors, as one is only required to measure the ensemble average values of $h$ and $\alpha$. After the contact angle has reached a constant value, independent of the droplet size (Fig.~\ref{droplets_panos}), the strength of interaction between the water effective beads and the unstructured smooth substrate can be tuned against experimental data and continuum simulations.
\cite{Theodorakis2017,Theodorakis2015b} However, even above this threshold value, which is also system dependent, the contact line will still play a significant role when the interaction strength between the droplet and the substrate exceeds a certain value $\varepsilon_0$, which is reached sooner on stiff substrates (Fig.~\ref{droplets_panos}) \cite{Theodorakis2017}. \begin{figure}[H] \includegraphics[width=0.7\textwidth]{fig10.pdf} \caption{(a) The dependence of the contact angle on the potential interaction between the water molecules and an unstructured smooth wall. Contact line phenomena appear for small contact angles, beyond the threshold interaction strength $\varepsilon_0$; in the case of the SAFT-$\gamma$ water model this corresponds to angles below about $45^{\circ}$. The difference in the contact angle of a droplet ``sitting'' on a hard (b) and a soft (c) substrate is illustrated. Panel (d) illustrates the dependence on the size of the droplet, expressed through the number of effective beads $N$. The contact angle increases with the size of the droplet up to a threshold value; in the case of the SAFT-$\gamma$ water model this is estimated at around $N=65\,000$ effective beads. However, the contact angle is also model dependent. Panels (e) and (f) show examples of a small and a large droplet, respectively. Panels (g) and (h) illustrate two spherical droplets of different sizes at the CAC. The droplets have different concentrations despite both being at the CAC. The CAC scales as $1/R$.\cite{Theodorakis2015} Therefore, the CAC (which is different for droplets of different size) provides a unit of concentration that allows comparison between droplets of different size. Panels (i)-(n) illustrate characteristic snapshots of a superspreading droplet with the Silwet-L77 superspreading surfactant. For each snapshot we present a schematic of the main adsorption processes taking place at each stage of the superspreading.
A schematic of the Silwet-L77 superspreading surfactant in our molecular model is also illustrated; the EO (hydrophilic) beads are blue, the M and Q (hydrophobic) beads are red, and the SAFT water beads are cyan. Adapted from Ref. 12. Copyright 2015 American Chemical Society. } \label{droplets_panos} \end{figure} While we can overcome the dependence of the contact angle on the droplet size by reaching system sizes of the order of $10^5$ effective beads, systems with surfactants pose a much greater challenge for simulations. In this case, one needs to approach the macroscopic limit to capture the various processes that take place within the droplet, for example, the diffusion of surfactant monomers/aggregates within the droplet or the formation/dissociation of aggregates. In fact, the spreading mechanism also depends on the molecular shape of the liquid molecules \cite{Shanahan1998,Wu2017} and of the surfactant, which affects the contact line motion;\cite{Song2017} the latter has been investigated by MD simulations in the context of superspreading.\cite{Theodorakis2015} We should note here that the critical aggregation concentration (CAC) for nanoscale droplets also depends strongly on the size of the droplet. The surfactant concentration, $w$, is usually expressed in wt\%, and the CAC scales as $1/R$ for spherical droplets (prefactors have been omitted here), where $R$ is the radius of the droplet. This simply means that absolute values of concentration do not provide a measure for direct comparison between droplets of different sizes, and concentrations may be better expressed as a multiple of the CAC for each droplet.
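The curvature-based angle measurement of Eq.~(\ref{eq:phi}) and the CAC-based unit of concentration are both simple to evaluate; a minimal Python sketch of ours (function names and signatures are illustrative, not from the cited works):

```python
from math import asin, degrees

def contact_angle_from_cap(h, a):
    """Contact angle from spherical-cap geometry via theta = arcsin(1/mu_D),
    with lam = h/a and mu_D = (1 + lam**2) / (2*lam). Since sin(theta) does
    not distinguish theta from 180 - theta, the supplementary branch is
    taken for h > a (obtuse angles)."""
    lam = h / a
    mu_d = (1.0 + lam * lam) / (2.0 * lam)
    theta = degrees(asin(1.0 / mu_d))
    return theta if lam <= 1.0 else 180.0 - theta

def concentration_in_cac_units(w, R, w_cac_ref, R_ref):
    """Express a concentration w as a multiple of the CAC of a droplet of
    radius R, using the 1/R scaling of the CAC: w_cac(R) = w_cac_ref * R_ref / R
    (prefactors absorbed into the reference value)."""
    return w / (w_cac_ref * R_ref / R)
```

For example, $h=\alpha$ gives $\lambda=1$, $\mu_{\rm D}=1$ and hence $\theta=90^{\circ}$, while halving the droplet radius doubles the CAC against which $w$ is measured.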
Although, from the point of view of simulation experiments, the ideal situation would be to simulate macroscopic droplets, for which the above problems gradually disappear, one can still identify certain adsorption mechanisms for nanoscale droplets, as illustrated in Fig.~\ref{mechanisms_panos} for the superspreading of the Silwet-L77 surfactant.\cite{Theodorakis2015,Theodorakis2015b} In the superspreading example (Fig.~\ref{mechanisms_panos}), MD simulations can provide information on the dynamics of the contact line for Type 1 and 2 coupling. However, in many cases there is still the need to model even larger systems. This is very important in the context of Fig. \ref{mechanisms_panos}, where different processes take place within the droplet. Although adsorption processes characteristic of superspreading, such as the adsorption of surfactant onto the substrate through the contact line or the replenishment of surfactant at the liquid--vapour interface, can be described by MD simulations and provide information for Type 1 and 2 coupling, diffusion processes and the dynamic formation of aggregates cannot be handled by MD without excessively large system sizes (Fig.~\ref{mechanisms_panos}). All of this emphasizes the challenges MD simulations face in tackling some of these issues, which renders Type 3 coupling an obvious solution for realizing the full simulation of the droplet. As Type 3 coupling is limited to molecular time and length scales, the future of coupling will likely use the output from these simulations to either inform Type 1 models or even provide results for Type 2 coupling. As the field advances, solutions mixing the advantages of the various approaches will likely become more routine.
\begin{figure}[H] \includegraphics[width=8.9cm]{fig11.pdf} \caption{Different stages during the spreading of a surfactant-laden droplet (the superspreader Silwet-L77 surfactant has been used here and the cross-section of the droplet is illustrated): an initial (a) and an intermediate (b) non-equilibrium state, and a final equilibrium state (c). The main adsorption mechanisms at each stage of the spreading process are indicated, with differences between the arrow heads indicating the dominant direction of the adsorption process during spreading. Red indicates the M and Q hydrophobic beads, whereas blue indicates the EO hydrophilic groups. Water molecules are in cyan.} \label{mechanisms_panos} \end{figure} \section{Conclusions} Molecular simulation has shown great promise in providing \textit{a priori} results for near-wall behaviour, liquid--vapour interfaces and the dynamics of complex molecules such as surfactants. In the latter case, the molecular architecture of the liquid molecules and the surfactants is of great importance, as it significantly affects the spreading mechanisms of a droplet.\cite{Shanahan1998,Wu2017,Song2017} Moreover, for submicron/nano-sized droplets, the surface tension and three-phase contact angle are functions of the drop size.\cite{Ono_Kondo} A model of the moving contact line has to incorporate all of this complexity and, furthermore, couple it with slip, surface tension and complex bulk--fluid flow. In this invited article, the anatomy of the moving contact line is explored by analyzing the effect of these various contributions, gradually building up to the full complexity of the dynamic contact line on a superspreading droplet. In doing this, we explore techniques which can be used to couple the molecular model to continuum simulation, both directly as part of the same simulation and indirectly by parameterizing equations.
To understand the key quantities MD could give the continuum, we study each of the contact-line problem's constituent parts: slip length, vapour--liquid surface tension, static contact angle behaviour and contact line dynamics. Starting with slip, the near-wall effects are explored for a single phase. A direct coupling approach is presented here, retaining a region of molecular detail and linking the two systems on an interface. Next, the liquid--vapour interface is analyzed through the surface tension and the effect of surfactants is explored. The static angle in a liquid--vapour system is then parameterized for a range of wall interaction potentials by comparison to experiments. This is then extended to include sliding of the walls, to understand the behaviour when the contact angle is moving. The molecular sliding is compared to experiments, noting broadly similar behaviour despite the difference in scale between the two systems but ultimately poor agreement, attributed to the limited simulation sizes possible with MD. In order to couple to CFD, a simplified contact line model parameterized using MD is presented. The limitations of these reduced models for the contact line are discussed in detail, and the work finishes by presenting a full large-scale simulation of a molecular droplet with surfactants. We conclude by noting that, although clearly promising, the methodology of coupling MD to continuum models to capture the precise dynamics of the contact line for a range of non-trivial situations, for instance, the presence of surfactants, surface wettability and chemical reactions, is at an early stage. The use of coupling relies on the validity of molecular simulation, which is difficult to compare with experiments given the scale separation. However, as computers get bigger and experiments reach higher resolution, direct comparisons of the two approaches are increasingly possible.
Coupled simulation provides an opportunity to accelerate this comparison by allowing simulations of larger scales. We have outlined current strategies, noting that the future of MD/continuum coupling will likely combine a range of these approaches, and hope that this article encourages and motivates others to engage in further research in this area. \subsubsection{Acknowledgements} This work is supported by the EPSRC Platform Grant MACIPh (Grant number EP/L020564/1). This research has been supported by the National Science Centre, Poland, under grant No.~2015/19/P/ST3/03541. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska--Curie grant agreement No. 665778. This research was supported in part by PLGrid Infrastructure.
\section{Introduction} In \cite{pseudo-split}, Loughran-Skorobogatov-Smeets develop, building upon work of Denef \cite{denef}, a necessary and sufficient criterion for when a morphism of varieties over a number field $k$ is surjective on $k_v$-points for almost all finite places $v$. This property is called \emph{arithmetic surjectivity} by Colliot-Thélène \cite[\S13]{cime}. More precisely, Loughran et al.\ define a variety $X$ over a perfect field to be \emph{pseudo-split} if every Galois automorphism over the ground field fixes some geometric irreducible component of $X$ of multiplicity $1$. They then prove: \begin{theorem}\cite[Theorem 1.4]{pseudo-split}\label{thm:lss} Let $f:X\to Y$ be a dominant morphism between proper, smooth, geometrically integral varieties over a number field $k$ with geometrically integral generic fibre. Then $f$ is arithmetically surjective if and only if for each modification $f':X'\to Y'$ of $f$ and for each codimension $1$ point $\vartheta'$ in $Y'$, the fibre $f'^{-1}(\vartheta')$ is pseudo-split. \end{theorem} By a \emph{modification} of $f$, we mean a morphism $f':X'\to Y'$ of proper, smooth, geometrically integral varieties over $k$ such that there exist proper, birational morphisms $\alpha_X:X'\to X$ and $\alpha_Y:Y'\to Y$ with $f'\circ\alpha_X=\alpha_Y\circ f$. In this paper, we closely follow and extend the methods from \cite{pseudo-split} to deal with the analogous question for zero-cycles. We introduce the notion of \emph{combinatorial cycle-splitness} and prove: \begin{theorem}\label{thm:main} Let $f:X\to Y$ be a dominant morphism between proper, smooth, geometrically integral varieties over a number field $k$ with geometrically integral generic fibre. The following statements are equivalent: \begin{enumerate}[(i)] \item For almost all places $v$, $f$ has a $v$-adic zero-cycle of degree $1$ in all fibres over $k_v$-points.
\item For each modification $f':X'\to Y'$ and for each codimension $1$ point $\vartheta'$ in $Y'$, the fibre $f'^{-1}(\vartheta')$ is combinatorially cycle-split. \end{enumerate} \end{theorem} A situation where Theorem~\ref{thm:main} applies but not Theorem~\ref{thm:lss} is given at the end of this article in Example~\ref{ex:cycle-surjective}. Note that we do not naively ask for surjectivity on zero-cycles but only for zero-cycles that are each entirely contained in a fibre. There are three reasons for this. First, if we allowed zero-cycles whose summands lie in several distinct fibres, the question would no longer be fibre-wise and our tools would not suffice to provide an answer for $\dim Y>1$. Second, the naive version is not very well-behaved even in dimensions $0$ and $1$, which we can handle, where it already leads to rather complicated criteria. Third, it can be argued that the problem as posed above arises more naturally, for example when considering Artin's conjecture on $p$-adic forms in its variant for zero-cycles of degree $1$. \begin{conjecture}[e.g.\ {\cite[Problem 3]{kato-kuzumaki}}]\label{conj:artin} If $p$ is an arbitrary prime and if $n$ and $d$ are positive integers such that $n\geq d^2$, then a degree $d$ hypersurface in $\BP^n_{\BQ_p}$ has a zero-cycle of degree $1$. \end{conjecture} In other words, this open conjecture posits that the famous Ax-Kochen theorem, a special application of Theorem~\ref{thm:lss}, holds without the need to exclude any primes when restated for zero-cycles of degree $1$. In moduli terms, this asks for fibre-wise $p$-adic zero-cycles of degree $1$ in the universal family of such hypersurfaces for every prime $p$. \subsection{Notation and conventions} By a variety, we mean a separated scheme of finite type over a field $K$. We denote by $\ov X$ the base change of a variety $X$ along an algebraic closure $\ov K$ of $K$.
For a field $K'\supset K$, we write $X_{K'}$ for $X\times_K K'$. If $k$ is a number field and $S$ a finite set of finite places in $k$, we write $\CO_k$ for the ring of integers of $k$ and $\CO_{k,S}$ for the $S$-integers of $k$. Furthermore, for a finite place $v$ of $k$, $k_v$ shall denote the completion at $v$ with ring of integers $\CO_{k_v}$ and residue field $k(v)$ of size $N(v)$. By a model of a variety $X$ over $k$ (respectively $k_v$), we mean a scheme $\CX$ which is flat and of finite type over $\CO_{k,S}$ for some finite set of places $S$ (respectively $\CO_{k_v}$) together with an isomorphism of its generic fibre to $X$. If $X$ is proper, $\CX$ a fixed model of $X$ and $x\in X$ is a closed point, we write $\wt x$ for the closure of $x$ in $\CX$. By a model of a morphism of varieties $f:X\to Y$ over $k$ (respectively $k_v$), we mean a morphism $\CF:\CX\to\CY$ over $\CO_{k,S}$ (respectively $\CO_{k_v}$) such that $\CX$ and $\CY$ are models of $X$ and $Y$ compatible with $f$ in the obvious way. \section{Preliminary definitions}\label{sec:term} To start, we introduce some terminology related to zero-cycles and our question. \begin{definition} A variety over a field $K$ is \emph{$r$-cycle-split}, if it contains a zero-cycle of degree $r$ which is the sum of smooth points. A variety over a number field $k$ is \emph{locally $r$-cycle-split} outside a finite set of places $S$, if for all finite places $v\notin S$ of $k$, the base change $X_{k_v}$ is $r$-cycle-split. \end{definition} \begin{definition}\label{def:cycle-surjective} A morphism of varieties over a field $K$ is \emph{$r$-cycle-surjective}, if the fibre over any rational point contains a zero-cycle of degree $r$. A morphism of varieties over a number field $k$ is \emph{arithmetically $r$-cycle-surjective} outside a finite set of places $S$, if for all finite places $v\notin S$ of $k$, the base change $f\times_k{k_v}$ is $r$-cycle-surjective. 
\end{definition} In the case $r=1$, we propose the simpler terminology \emph{cycle-split}, \emph{locally cycle-split}, \emph{cycle-surjective} and \emph{arithmetically cycle-surjective}. Although Theorem~\ref{thm:main} is only concerned with arithmetic cycle-surjectivity, dealing with the case of general $r$ does not add further complications. The terms are chosen in relation to \cite{pseudo-split}. It turns out to be important to bound the degree of points appearing in zero-cycles. \begin{definition} For a zero-cycle $Z=\sum n_i x_i$ on a variety over a field $K$, define the \emph{maximum degree} of $Z$ \[\maxdeg Z=\max_i\, [K(x_i):K],\] where $K(x_i)$ is the residue field of the point $x_i$. \end{definition} We will make crucial use of a uniform version of the Lang-Weil estimates \cite{lang-weil}. \begin{lemma}\label{lem:lang-weil} There exists a function $\ov \Phi:\BN^3\to\BN$ with the following property. Let $U\subset \BP^\nu$ be a geometrically irreducible, quasi-projective variety over a finite field with $N$ elements, $\ov U$ its closure in $\BP^\nu$ and $\partial U=\ov U\setminus U$. Then there exists a zero-cycle $Z$ of degree $1$ on $U$ with \[\maxdeg Z\leq \ov\Phi(N,\deg \ov U, \deg\partial U).\] \end{lemma} If $X$ is proper and $\iota:X\dashrightarrow\BP^\nu$ a rational embedding defined on an open $U\subset X$, then we will write $\ov\Phi(\iota)$ for $\ov\Phi(N,\deg \ov{\iota(U)}, \deg\partial(\iota(U)))$. \section{Combinatorial cycle-splitness} We define the notion of combinatorial cycle-splitness, first for algebras and then for varieties. \subsection{In dimension \texorpdfstring{$0$}{0}}\label{sub:base0} Let $X$ be a finite étale scheme over a field $K$. It can be written as $X=\Spec(A)$ for some finite $K$-algebra $A=\oplus_{i=1}^n K_i$ (where $K_i/K$ are finite field extensions but not necessarily normal). Let the Galois extension $L/K$ be the compositum of the Galois closures of the $K_i$ and denote $G:=\Gal(L/K)$.
Let $H_i:=\Gal(L/K_i)$, i.e.\ $L^{H_i}=K_i$. We note that $X$ has a global zero-cycle of degree $r$, if and only if $\gcd_i(\#G/\#H_i)|r$. An element $g\in\Gal(L/K)$ acts on the set $G/H_i$ of right cosets from the right and partitions it into $r_i$ orbits of sizes which we denote by $m_{i1}^g,\dots,m_{ir_i}^g$. \begin{definition} Define the \emph{combinatorial index of $X$ at $g\in G$} as \[I_X(g):=\gcd_{i,j}(m_{ij}^g).\] We call $X$ \emph{combinatorially $r$-cycle-split} if and only if $I_X(g)|r$ for all $g\in G$. If $r=1$, we say $X$ is \emph{combinatorially cycle-split}. \end{definition} For the rest of this section, we take $K$ to be a number field $k$. With notation as above, the extension $L/k$ is unramified outside a finite set of places $S$. A finite place of $k$ that is unramified in all $K_i$ is also unramified in $L$. For a finite place $v\notin S$, let $\Frob_v\in G$ be the Frobenius automorphism at $v$. \begin{lemma} Let $v\notin S$ be a finite place of $k$. Then \[A\otimes k_v=\bigoplus_{i=1}^n\bigoplus_{j=1}^{r_i}k_{ij}\] where $k_{ij}/k_v$ is a finite extension of degree $m_{ij}^{\Frob_v}$. \end{lemma} \begin{proof} This is \cite[Theorem 33]{marcus}. \end{proof} Note that the list of orbit sizes really only depends on the conjugacy class of $g$: the size of the orbit of $H_it$ under $g$ is the smallest integer $j$ such that $tg^j\in H_it$, or equivalently $g^j\in t^{-1}H_it$. \begin{corollary} Let $X$ be a finite étale scheme over a number field $k$ and $S$ a finite set of places such that $L/k$ is unramified outside $S$. For a finite place $v\notin S$, $X_{k_v}$ is $r$-cycle-split, if and only if $\gcd_{i,j}(m_{ij}^{\Frob_v})|r$. \end{corollary} \begin{corollary} Let $X$ be a finite étale scheme over a number field $k$ and $S$ a finite set of places such that $L/k$ is unramified outside $S$. Then $X$ is locally $r$-cycle-split outside $S$, if and only if $X$ is combinatorially $r$-cycle-split.
\end{corollary} \begin{proof} One direction follows directly from the previous corollary. The other direction follows because, by Cebotarev density, for every conjugacy class $C\subseteq G$ there exist infinitely many places $v$ with $\Frob_v\in C$. Hence, if $X$ is not combinatorially $r$-cycle-split, there exists a $v\notin S$ such that $\gcd_{i,j}(m_{ij}^{\Frob_v})\nmid r$. \end{proof} \begin{example}\label{ex:upgrade} The preceding corollary gives an explicit condition that can be checked for a finite group $G$. One example of an everywhere locally cycle-split scheme that is not cycle-split is \[\Spec(k[t]/(t^2-a)(t^2-b)(t^6-ab))\] with $a,b,a/b\notin k^2$. In the case where $a$ or $b$ is a square in $k_v$, the scheme has a rational point. If $v$ does not lie over $2$, $a,b\in \CO_{k_v}^\times$ and neither $a$ nor $b$ is a square in $k_v$, then $ab$ is a square in $k_v$ and we get $k_v$-points of degrees $2$ and $3$, hence a zero-cycle of degree $1$. In fact, this is a ``modification'' of an example by Colliot-Thélène of a non-split pseudo-split scheme, in which the exponent $6$ is replaced by $2$ \cite[4.1]{squaretrick}. \end{example} \begin{example}\label{ex:non-upgrade} Take \[X=\Spec(\BQ[t]/(t^2+1)(t^6-3t^2-1)),\] which has a local zero-cycle of degree $1$ everywhere. The second factor $(t^6-3t^2-1)$ is an irreducible polynomial that is everywhere locally reducible. This is because its Galois group $A_4$ contains no element of order $6$, so no Frobenius can act transitively on the six roots; here $\BQ[t]/(t^6-3t^2-1)$ is the fixed field of a subgroup of order $2$. Moreover, due to the absence of subgroups of order $6$ in $A_4$, locally there is always a factor of degree dividing $3$, which together with $(t^2+1)$ yields a zero-cycle of degree $1$. Moreover, $X$ is not a finite cover of a non-split pseudo-split scheme $X'$ over $\BQ$ as in Example~\ref{ex:upgrade}.
This is because $A_4$ is the smallest counterexample to the converse of Lagrange's theorem, and thus one sees that any proper quotient of $\BZ/2\times A_4$ fails to satisfy even the group-theoretic condition for combinatorial cycle-splitness. \end{example} It is a curious fact that there is no connected example ($n=1$), as the following theorem shows. \begin{theorem} The Hasse principle for zero-cycles of degree $1$ holds for connected, reduced zero-dimensional schemes over a number field $k$. \end{theorem} \begin{proof} As before, let $L/k$ be a finite non-trivial Galois extension with Galois group $G$ and $H\subsetneq G$ a proper subgroup. We want to show that $\Spec L^H$ is not locally $r$-cycle-split at infinitely many places. Equivalently, we want to find an element $g$ such that \[\gcd_{t\in G}\min\{j|g^j\in t^{-1}Ht\}>1.\] To do this we use the following fact, proven ``outrageous[ly]'' in \cite[Theorem 1]{fein} via the classification of finite simple groups: for a finite group $G$ with a proper subgroup $H$, there exists a prime number $p$ and an element $g\notin \bigcup_{t\in G}t^{-1}Ht$ of order a power of $p$. This is sufficient since then $p|\min\{j|g^j\in t^{-1}Ht\}$ for all $t\in G$. \end{proof} In more down-to-earth language, there is no irreducible polynomial over $k$ that factors into coprime degrees modulo almost all primes. \subsection{In higher dimensions}\label{sub:dim0} For the beginning of this section, let us again allow $K$ to be any field. Let $X$ be a proper variety over $K$. For $X'$ a reduced, irreducible component of $X$, we define the \emph{(apparent) multiplicity} of $X'$ in $X$ as the length of the local ring $\CO_{X,\eta'}$, where $\eta'$ is the generic point of $X'$. We define the \emph{geometric multiplicity} of $X'$ in $X$ as the length of the local ring $\CO_{\ov X, \ov{\eta'}}$, where $\ov{\eta'}$ is a point of $\ov X$ lying over $\eta'$. If $X'$ is geometrically reduced, for example when $K$ is perfect, then multiplicity and geometric multiplicity coincide.
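Before continuing, we note that combinatorial cycle-splitness in dimension $0$ (subsection \ref{sub:base0}) is a finite, purely group-theoretic condition and can be checked mechanically. The following Python sketch (purely illustrative, not part of the formal development; permutations are stored as tuples of images of $\{0,\dots,n-1\}$) computes the orbit sizes $m_{ij}^g$ of $g$ on right cosets and the combinatorial index $I_X(g)$. It confirms both behaviours seen above: the Klein four-group configuration underlying Colliot-Thélène's example $(t^2-a)(t^2-b)(t^2-ab)$ is combinatorially cycle-split, while a connected configuration such as $S_3$ with a subgroup of order $2$ (an irreducible cubic) is not.

```python
from itertools import permutations
from functools import reduce
from math import gcd

def mul(p, q):
    """Compose permutation tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def right_cosets(G, H):
    """Partition G into right cosets Ht of the subgroup H."""
    seen, cosets = set(), []
    for t in G:
        c = frozenset(mul(h, t) for h in H)
        if c not in seen:
            seen.add(c)
            cosets.append(c)
    return cosets

def orbit_size(coset, g):
    """Smallest j >= 1 with (Ht) g^j = Ht, i.e. the orbit size of the
    right coset under right multiplication by g."""
    cur, j = frozenset(mul(x, g) for x in coset), 1
    while cur != coset:
        cur = frozenset(mul(x, g) for x in cur)
        j += 1
    return j

def combinatorial_index(G, subgroups, g):
    """I_X(g) = gcd of the orbit sizes m_{ij}^g over all cosets of all H_i."""
    sizes = [orbit_size(c, g) for H in subgroups for c in right_cosets(G, H)]
    return reduce(gcd, sizes)

def combinatorially_cycle_split(G, subgroups, r=1):
    """Check that I_X(g) divides r for every g in G."""
    return all(r % combinatorial_index(G, subgroups, g) == 0 for g in G)

# Klein four-group acting regularly on 4 points, with its three subgroups
# of order 2: the Galois data of (t^2-a)(t^2-b)(t^2-ab).
e, a, b, ab = (0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)
V4 = [e, a, b, ab]
subgroups = [[e, a], [e, b], [e, ab]]
print(combinatorially_cycle_split(V4, subgroups))   # -> True

# Connected case: S_3 with a subgroup of order 2; a 3-cycle has a single
# orbit of size 3 on the three cosets, so the index at it is 3.
S3 = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]
print(combinatorial_index(S3, [H], (1, 2, 0)))      # -> 3
```

Note that every element of the Klein four-group lies in one of the three subgroups and hence fixes a coset, so all indices are $1$; nevertheless all three field degrees equal $2$, so this scheme is combinatorially cycle-split without being cycle-split.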
Let $X_1^m,\dots,X_n^m$ be the reduced, irreducible components of geometric multiplicity $m$ in $X$. Let $K_i$ be the separable closure of $K$ in the function field of $X_i^m$. \begin{definition} Define the \emph{algebra of irreducible components of geometric multiplicity $m$} as $Z_X^m:=\Spec(\oplus_{i=1}^n K_i)$. (If there are no such components, then $Z_X^m$ is empty.) \end{definition} The reason for this definition is of course that the embedding of the ground field into the function field of a scheme controls, to some extent, its geometric properties, and thus we can reduce to the previous section. A scheme $T$ of finite type over $K$ is geometrically irreducible if and only if $T$ is irreducible and $K$ is separably closed in the function field of $T$ (see \cite[4.5.9]{ega4-2}). However, using the functor of open irreducible components defined by Romagny, we can obtain finer results. For a finite type morphism of schemes $T\to R$ with $R$ integral, let $\Irr^m_{T/R}$ be the subfunctor of $\Irr_{T/R}$ defined in \cite[Def. 2.1.1]{romagny} of open irreducible components of geometric multiplicity $m$. We recall that $\Irr_{T/R}$ parametrises, for each $R$-scheme $R'$, the open subschemes $U$ of $T\times_R R'$ such that the geometric fibres of $U\to R'$ are interiors of irreducible components in the geometric fibres of $T\times_R R'\to R'$. Note that this is stable under base change and thus functorial because we use the geometric instead of the apparent multiplicity. \begin{lemma}\label{lem:irr-functor} The functor $\Irr^m_{T/R}$ is representable over a dense open of $R$ by a finite étale cover. \end{lemma} \begin{proof} Let $\eta$ be the generic point of $R$ and $T'\hookrightarrow T\to R$ be the reduced closure of the irreducible components of geometric multiplicity $m$ in the fibre over $\eta$.
Then after replacing $R$ with a dense open subscheme, we have that $\Irr^m_{T/R}=\Irr_{T'/R}$ because the geometric multiplicity of the fibre over $\eta$ spreads out to a dense open neighbourhood by \cite[Proposition 9.8.6]{ega4-3}. After further replacing $R$ with a dense open subscheme, the functor $\Irr_{T'/R}$ is representable by a separated algebraic space which is finite étale over $R$ by \cite[2.1.2,2.1.3]{romagny}. However, by Knutson's representability criterion, this algebraic space over $R$ must in fact be a scheme (cf.\ \cite[Proof of Proposition 3.7]{smeets-loughran} for this last step). \end{proof} \begin{lemma}\label{lem:irr-rep} The functor $\Irr^m_{X/K}$ is represented by $Z_X^m$. \end{lemma} \begin{proof} This follows from \cite[2.1.4]{romagny}. \end{proof} \begin{definition}\label{def:general-index} Let $G$ be the Galois group defined in subsection \ref{sub:base0} for the finite étale $K$-scheme $\bigsqcup_m Z_X^m$. Define the \emph{combinatorial index of $X$ at $g\in G$} as \[I_X(g):=\gcd_m(mI_{Z_X^m}(g)).\] We call $X$ \emph{combinatorially $r$-cycle-split} if and only if $I_X(g)|r$ for all $g\in G$. If $r=1$, we say $X$ is \emph{combinatorially cycle-split}. \end{definition} This is compatible with the previous definition of the combinatorial index in dimension $0$ and only depends on the conjugacy class of $g$ in $G$. Let us return to the case of $k$ a number field and assume $X$ is smooth and proper over $k$. Let $v$ be a finite place of $k$. To tackle the question of zero-cycles on $X_{k_v}$, we need to relate closed points in the special and generic fibres of a model. This seems to be folklore, partly written down in \cite[\S9, Cor. 9.1]{blr}, but the author could not find a complete reference before \cite{bosch-liu} (see also \cite[4.6]{wittenberg-c11} and \cite[2.4]{kesteloot}). \begin{lemma}\label{lem:generalised-hensel} Let $\CX$ be a proper, flat model over $\CO_{k_v}$ of $X_{k_v}$.
Let $\ov x\in \CX(k(v))$ be a point which is regular in $\CX$ and regular in the reduction $\CX_{k(v)}$ and lies on a geometrically irreducible component of $\CX_{k(v)}$ of multiplicity $m$. Then there exists a closed point $x\in X_{k_v}$ of degree $m$ with reduction $\ov x$. \end{lemma} \begin{proof} See \cite[2.3]{ct-saito}. \end{proof} Conversely, the following result applies. \begin{lemma}\label{lem:converse-hensel} Let $\CX$ be a proper, regular, flat model over $\CO_{k_v}$ of $X_{k_v}$ and $x$ a closed point in $X_{k_v}$ of degree $d$ with reduction $\tilde x$. Let $D_j$, $j\in J$, be the irreducible components of $\CX_{k(v)}$ on which $\tilde x$ lies. Denote by $m_j$ the multiplicity of $D_j$ and by $d_j$ the minimal degree of an extension of $k(v)$ over which $D_j$ splits into geometrically irreducible components. Then $\gcd_{j\in J} m_j d_j$ divides $d$. \end{lemma} \begin{proof} See \cite[1.6]{bosch-liu}. \end{proof} \begin{lemma}\label{lem:special-generic} Let $\CX$ be a proper, normal, flat model over $\CO_{k_v}$ of $X_{k_v}$. Let $\Frob_v$ be the Frobenius element in the absolute Galois group of $k(v)$. If $I_{\CX_{k(v)}}(\Frob_v)|r$, then $X_{k_v}$ is $r$-cycle-split. If $\CX$ is regular, then the converse holds. There exists a function $\Phi:\BN^3\to\BN$ not depending on $X$ with the following property. Let $\iota:\CX_{k(v)}\dashrightarrow\BP^\nu_{k(v)}$ be a rational embedding. If \[I_{\CX_{k(v)}}(\Frob_v)|r,\] then there exists a zero-cycle $Z$ of degree $r$ on $X_{k_v}$ with $\maxdeg Z\leq \Phi(\iota)$ (where $\Phi(\iota)$ is defined as after Lemma~\ref{lem:lang-weil}). \end{lemma} \begin{proof} Assume $I_{\CX_{k(v)}}(\Frob_v)|r$. Then there exist geometrically irreducible components $D_j$, $j\in J$, of the special fibre, of multiplicities $m_j$, defined over extensions of $k(v)$ of degrees $d_j$, such that
\[\gcd_{j\in J}d_jm_j=I_{\CX_{k(v)}}(\Frob_v).\] By the Lang-Weil estimates as formulated in Section~\ref{sec:term}, each $D_j$ carries a zero-cycle of degree $d_j$. Let $W_j$ be the union of the non-regular locus of $\CX$ and the non-regular locus of the reduction of $D_j$. Because $\CX$ is assumed normal, hence regular in codimension $1$, $W_j$ does not contain all of $D_j$. Then $\deg W_j$ has an upper bound only depending on $\nu$ and the degree of the image of $\iota$. By the Lang-Weil estimates as described in Section~\ref{sec:term}, one can arrange for the summands of these zero-cycles to avoid all $W_j$ and to have degrees bounded in terms of $\Phi(\iota)$ for a suitable function $\Phi$. Applying Lemma~\ref{lem:generalised-hensel} to each of the summands, the existence of points of degrees $m_jd_j$, and thus of a zero-cycle $Z$ of degree $r$ on $X_{k_v}$ with $\maxdeg Z\leq\Phi(\iota)$, follows. The converse in the case of regular $\CX$ follows from Lemma~\ref{lem:converse-hensel}. \end{proof} \begin{remark} We remark that to examine $r$-cycle-splitness of the special fibre itself, all components of multiplicity greater than $1$ would have to be discarded. Thus, there are two notions, $r$-cycle-split and combinatorially $r$-cycle-split. This differs from the case of rational points, where there is only one notion, pseudo-split. \end{remark} \begin{lemma}\label{lem:variety-cycle-split} Let $X$ be a smooth, proper variety over a number field $k$. Let $\iota: X\dashrightarrow\BP^\nu$ be a rational embedding of $X$. Then $X$ is almost everywhere locally $r$-cycle-split if and only if $X$ is combinatorially $r$-cycle-split. In this case, there is a finite set of places $S$ such that $X_{k_v}$ has a zero-cycle $Z$ of degree $r$ with $\maxdeg Z\leq \Phi(\iota)$ for all $v\notin S$. \end{lemma} \begin{proof} Let $U\subseteq X$ be a dense open subvariety on which $\iota$ is defined.
We can find a finite set $S$ of places such that $U\hookrightarrow X$ and $\iota:U\hookrightarrow\BP^\nu$ spread out to models $\CU\hookrightarrow\CX$ and $\iota_S:\CU\hookrightarrow\BP^\nu_{\CO_{k,S}}$ over $\CO_{k,S}$, where $\CU$ and $\CX$ are smooth over $\CO_{k,S}$. By Lemma~\ref{lem:irr-rep} and Lemma~\ref{lem:irr-functor}, after possibly enlarging $S$, $\Irr_{\CX/\CO_{k,S}}^m$ is represented by $\Spec(\oplus_{i=1}^n \CO_{K_i,S})$. The result now follows from Lemma~\ref{lem:special-generic}. \end{proof} \section{\texorpdfstring{$s^{0}$}{s0}-invariants} In analogy to the $s$-invariants in \cite{pseudo-split}, we construct \emph{$s^{0}$-invariants} that measure the failure of combinatorial cycle-splitness in families. Let $f:X\to Y$ be a morphism of varieties over a number field $k$. For any (possibly non-closed) point $y\in Y$, set $K:=k(y)$. We get finite étale (possibly empty) $K$-schemes $Z^m_{f^{-1}(y)}=\Irr^m_{f^{-1}(y)/K}$ for all multiplicities $m$. We pick a minimal Galois extension $L/K$ which splits all $Z^m_{f^{-1}(y)}$, with Galois group $G$. Denote by $k_K$ and $k_L$ the algebraic closures of $k$ in $K$ and $L$. By replacing $k_L$ with its Galois closure and extending $L$, we can assume that $k_L/k$ is Galois. Let $N$ be the subgroup of $G$ acting trivially on $k_L$. Denote by $\Omega_{k_K}$ the set of finite places of $k_K$.
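Before giving the formal definition, a toy computation may help to see what the invariant measures (an illustration of ours, not taken from the formal development): take for $f$ the constant family with fibre $\Spec(\BQ[t]/(t^3-t-1))$, an irreducible cubic with Galois group $G=S_3$, so that $K=k_K=\BQ$, $k_L$ is the quadratic resolvent field and $N=A_3$. The Python sketch below computes, for each coset of $N$ in $G$, the proportion of elements $g$ with $I(g)|1$, using the characterisation of orbit sizes as the least $j$ with $g^j\in t^{-1}Ht$; it finds the proportions $1/3$ and $1$, which are exactly the two values the following definition assigns to $s^{0,1}_{f,y}(v)$ according to whether $v$ splits in $k_L$ or not.

```python
from itertools import permutations
from functools import reduce
from math import gcd

def mul(p, q):
    """Compose permutation tuples: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    q = [0] * len(p)
    for i, image in enumerate(p):
        q[image] = i
    return tuple(q)

def sgn(p):
    """Sign of a permutation via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def index_at(G, H, g):
    """gcd over t in G of the least j >= 1 with g^j in t^{-1} H t,
    i.e. the combinatorial index of Spec L^H at g."""
    sizes = []
    for t in G:                       # every t represents its coset H t
        conj = {mul(mul(inv(t), h), t) for h in H}
        p, j = g, 1
        while p not in conj:
            p, j = mul(p, g), j + 1
        sizes.append(j)
    return reduce(gcd, sizes)

S3 = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]            # order-2 subgroup fixing the cubic field
A3 = [p for p in S3 if sgn(p) == 1]   # N: elements acting trivially on k_L
transpositions = [p for p in S3 if sgn(p) == -1]

def proportion(coset, r=1):
    """Fraction of g in the given coset of N with I(g) | r."""
    good = sum(1 for g in coset if r % index_at(S3, H, g) == 0)
    return good, len(coset)

print(proportion(A3))                 # (1, 3): only the identity contributes
print(proportion(transpositions))     # (3, 3): a transposition fixes a coset
```

Concretely: at a place split in the resolvent, Frobenius is trivial or a $3$-cycle, and a $3$-cycle leaves the cubic inert, so only a third of the candidate Frobenii yield a local zero-cycle of degree $1$; at a non-split place every candidate does.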
\begin{definition} For a finite place $v$ of $k$, define $s^{0,r}_{f,y}(v)$ in the following way: \begin{enumerate}[(i)] \item as $1$, if $v$ ramifies in $k_L$ or there is no place in $k_K$ of degree $1$ over $v$; \item otherwise, as \[\frac{\sum_{\substack{w\in\Omega_{k_K}\\ \N(w)=\N(v)\\w|v}}\#\{g\in G:\Frob_w\equiv g\bmod N, I_{f^{-1}(y)}(g)|r\}}{\#N\#\{w\in\Omega_{k_K}|\N(w)=\N(v),w|v\}}.\] \end{enumerate} \end{definition} One can see that $s^{0,r}_{f,y}(v)$ only depends on the conjugacy class of $\Frob_v$, i.e.\ that this function is Frobenian in the sense of Serre \cite[\S3.3.3.5]{serre}, but this fact will not be directly needed. Over finite fields, the $s^0$-invariants asymptotically quantify the failure of combinatorial $r$-cycle-splitness. \begin{proposition}\label{prop:asymptotics} Assume $Y$ is integral of dimension $n$ with generic point $\eta$. Let $\CF:\CX\to\CY$ be a model of $f$ over $\CO_{k}$. Then \begin{align*} &\#\{y\in\CY(k(v))|\CF^{-1}(y)\text{ is combinatorially $r$-cycle-split}\}\\=&s^{0,r}_{f,\eta}(v)\#\CY(k(v))+O(\N(v)^{n-1/2}) \end{align*} as $\N(v)\to\infty$, where the implied constant in the $O$-notation only depends on the chosen model. \end{proposition} \begin{proof} The main idea, following \cite[Proposition 3.13]{pseudo-split}, is to count and then compare both sides using the Lang-Weil estimates and the Cebotarev density theorem for schemes. We divide the proof into several parts. \textbf{Set-up} By the Lang-Weil estimates we can remove proper closed subsets of $\CY$ since, for dimension reasons, their rational points only contribute to the error term. Hence, with the help of Lemma~\ref{lem:irr-functor}, we may assume that $\Irr^m_{\CX/\CY}\to\CY$ is finite étale. In the same way, we ensure that $\CY$ and its special fibres $\CY_{k(v)}$ are normal for all $v$ not contained in some finite set $S$. Set $y:=\eta$ and from there on $K$, $L$, $k_K$ and $k_L$, $G$ and $N$ as before.
Enlarging $S$ further, we may spread out and assume that $L$ is the generic fibre of a Galois closure $\CL$ of $\Irr^m_{\CX/\CY}\to\CY$. From now on, let $v\notin S$. \textbf{Counting points of $\CY_{k(v)}$ with Lang-Weil} The functor $\Irr_{\CY_{k(v)}/k(v)}$ is represented by \[\Spec\bigl(\CO_{k_K}\otimes_{\CO_k} k(v)\bigr)=\Spec\Bigl(\bigoplus_{\substack{w\in\Omega_{k_K}\\w|v}} k(w)\Bigr).\] Therefore, geometrically irreducible components of $\CY_{k(v)}$ correspond to places $w$ with $\N(w)=\N(v)$. We write $\CY_w$ for such a component. By the normality assumption, the irreducible components of $\CY_{k(v)}$ are all disjoint, so if none of them is geometrically irreducible, $\CY_{k(v)}$ has no rational point. This is the trivial case of the proposition. In the non-trivial case, we can count points by Lang-Weil: \begin{eqnarray*}\#\CY(k(v))&=&\sum_{\substack{w\in\Omega_{k_K}\\ \N(w)=\N(v)\\w|v}}\#\CY_w(k(w))=\sum_{\substack{w\in\Omega_{k_K}\\ \N(w)=\N(v)\\w|v}}\N(v)^n+O(\N(v)^{n-1/2})\\&=&{\#\{w\in\Omega_{k_K}|\N(w)=\N(v),w|v\}}\N(v)^n+O(\N(v)^{n-1/2}).\end{eqnarray*} \textbf{Counting combinatorially $r$-cycle-split fibres with Cebotarev} For a rational point $y\in\CY(k(v))$, we can view the Frobenius $\Frob_y$ as an element of $G$ up to conjugacy. The fibre $\CF^{-1}(y)$ is combinatorially $r$-cycle-split if and only if $I_{\CF^{-1}(\eta)}(\Frob_y)=I_{\CF^{-1}(y)}(\Frob_y)|r$. Let $\delta_\CF(g)\in\{0,1\}$ be the indicator function of the set of elements $g\in G$ for which $I_{\CF^{-1}(\eta)}(g)|r$. This function only depends on the conjugacy class of $g$.
Applying the Cebotarev density theorem for étale morphisms, as in \cite[9.15]{serre}, to $\delta_\CF$, one gets: \begin{align*} \#\{y\in\CY(k(v))|\CF^{-1}(y)\text{ is combinatorially $r$-cycle-split}\}\\=\frac{\N(v)^n}{\#N}\sum_{\substack{w\in\Omega_{k_K}\\ \N(w)=\N(v)\\w|v}}\#\{g\in G:\Frob_w\equiv g\bmod N, I_{f^{-1}(\eta)}(g)|r\}+O(\N(v)^{n-1/2}). \end{align*} Comparing both counts with the definition of $s^{0,r}_{f,\eta}(v)$, the result follows. \end{proof} The asymptotic formula gives a necessary condition for combinatorial cycle-splitness of all fibres. \begin{corollary}\label{cor:combinatorial-fail} With the same notation, if $s^{0,r}_{f,\eta}(v)<1$ for some $v\notin S$, then there exists $y\in\CY(k(v))$ such that $\CF^{-1}(y)$ is not combinatorially $r$-cycle-split. \end{corollary} \begin{proof} For $v$ large enough, there will be rational points on $\CY_{k(v)}$, but by Proposition~\ref{prop:asymptotics} not all fibres over them can be combinatorially $r$-cycle-split. \end{proof} The asymptotics also give the other direction. \begin{corollary}\label{cor:cycle-split} With the same notation, assume $\CY$ is integral and normal and $\Irr_\CF^m$ is finite étale over $\CY$ for all $m$. Then there exists a finite set of places $S$ such that for all $v\notin S$, $s^{0,r}_{f,\eta}(v)=1$ if and only if the fibre of $\CF$ over every $y\in\CY(k(v))$ is combinatorially $r$-cycle-split. \end{corollary} \begin{proof} One direction has just been proven. For the other direction, we use the same notation as in Proposition~\ref{prop:asymptotics}. A point $y\in\CY(k(v))$ must lie on a geometrically irreducible component, corresponding to a place $w$ of $k_K$ of degree $1$ over $v$. Let $l\in\CL$ be a closed point over $y$ and $u$ be the corresponding place of its irreducible component.
Then \[k(y)=k(w)\subset k(u)\subset k(l)\] and there exist natural embeddings \[\Gal(k(l)/k(y))\hookrightarrow G\] and \[\Gal(k(u)/k(y))\hookrightarrow G/N.\] By functoriality of Frobenius, we have \[\Frob_{l/y}\bmod N=\Frob_{u/w}.\] Because of the assumption that $s^{0,r}_{f,\eta}(v)=1$, we deduce that $\Frob_{l/y}$ acts on $\Irr^m_{\CF^{-1}(y)/y}$ such that $I_{\CF^{-1}(y)}(\Frob_{l/y})|r$. Hence $\CF^{-1}(y)$ is combinatorially $r$-cycle-split. \end{proof} \begin{corollary} The fibre $f^{-1}(y)$ is combinatorially $r$-cycle-split if and only if $s^{0,r}_{f,y}(v)=1$ for almost all $v$. \end{corollary} \begin{proof} This is Corollary~\ref{cor:cycle-split} in the case of a zero-dimensional base. \end{proof} \section{Arithmetic cycle-surjectivity} Let $f:X\to Y$ be a dominant morphism between proper, smooth, geometrically integral varieties with geometrically integral generic fibre over a number field $k$. \subsection{Birational invariance} We want to prove that arithmetic $r$-cycle-surjectivity is a property invariant under modifications. The argument here is more subtle than in the case of rational points. \begin{definition} Let $v$ be a place of $k$. If a fibre over a $k_v$-point $y$ of $Y$ contains a zero-cycle of degree $r$, we call this cycle a \emph{witness for $r$-cycle-surjectivity over $y$ at $v$}. \end{definition} \begin{lemma}\label{lem:birational} Let $v$ be a place of $k$. Let $V$ be a dense open subset of $Y$. Assume that there exists $B\in\BN$ such that $f^{-1}(V)\to V$ is $r$-cycle-surjective at $v$ and there exist witnesses $Z_v$ for $r$-cycle-surjectivity over $y$ at $v$ for all $y\in V(k_v)$ with $\maxdeg Z_v\leq B$. Then $f$ is $r$-cycle-surjective at $v$. \end{lemma} \begin{proof} Assume $r$-cycle-surjectivity over an open $V$ with a uniform bound $B$ as described above. Let $k_v(i)$ denote the compositum of all extensions of $k_v$ of degree $i$.
Then $X(k_v(i))\subseteq X(\overline{k_v})$ is the set of $\overline{k_v}$-points fixed by all elements in $\Gal(\overline{k_v}/k_v(i))$, and this is a closed subset. Hence \[Y_B(f):=\bigcup_{\substack{I\subset \{1,\dots, B\}\\ \gcd(I)|r}}\bigcap_{i\in I}f(X(k_v(i)))\] is a finite union of closed subsets of $Y(\overline{k_v})$. Let $y$ be a $k_v$-rational point in $V$ for which the fibre $f^{-1}(y)$ contains a zero-cycle $Z$ of degree $r$ with $\maxdeg Z\leq B$. There exists $I\subset \{1,\dots, B\}$ such that \[y\in\bigcap_{i\in I}f(X(k_v(i)))\subset Y_B(f).\] On the other hand, a point $y\in Y_B(f)$ lies in $\bigcap_{i\in I}f(X(k_v(i)))$ for some $I\subset \{1,\dots, B\}$ with $\gcd(I)|r$, so its fibre has a closed point defined over $k_v(i)$ for all $i\in I$. Let $j_i$ be the degree of this point. It follows that the prime factors of $j_i$ are contained in the prime factors of $i$. In particular, the fibre has a zero-cycle of degree $\gcd_{i\in I}j_i=\gcd I|r$. Now $Y_B(f)$ is closed and contains $V(k_v)$, which is dense and open in $Y(k_v)$, hence $Y(k_v)\subseteq Y_B(f)$. \end{proof} \begin{remark} The above proof generalises to the case where $k_v$ is any Henselian (non-trivially) valued field. \end{remark} \begin{lemma} To show arithmetic $r$-cycle-surjectivity of $f$, it is enough to show arithmetic $r$-cycle-surjectivity of $f^{-1}(V)\to V$ for a dense open $V$ in $Y$. \end{lemma} \begin{proof} By Lemma~\ref{lem:birational}, all we have to show is that for $v$ large enough, if $f$ is arithmetically $r$-cycle-surjective over $V$, then there is a uniform bound on the maximum degree of witnesses. By generic smoothness \cite[Corollary 10.7]{hartshorne}, after shrinking $V$, we may assume that all fibres over $V$ are smooth. Let $\iota: X\dashrightarrow\BP^\nu_Y$ be a rational embedding.
Now, by Lemma~\ref{lem:variety-cycle-split} (whose smoothness assumption is satisfied), a fibre over a point of $V$ is almost everywhere locally $r$-cycle-split if and only if it is almost everywhere locally $r$-cycle-split with witnesses of maximum degree at most $\Phi(\iota)$. \end{proof} \subsection{Necessary condition} From the results over finite fields, we can deduce a necessary condition for arithmetic $r$-cycle-surjectivity. \begin{proposition}\label{prop:transversal} Let $\CF:\CX\to\CY$ be a proper model of $f$ over $\CO_{k,S}$ for a finite set of places $S$ of $k$ with regular source and target. Let $\CT\subset\CY$ be a reduced divisor such that $\CF$ is smooth away from $\CT$. Then after possibly enlarging $S$, we can find a subset $\CR\subset\CT$ of codimension at least $2$ in $\CY$ such that for all $v\notin S$ the following holds. Choose $\wt y\in \CY(\CO_{k_v})$ and denote by $y\in Y(k_v)$ the induced point of the generic fibre. If $\wt y$ intersects $\CT$ transversally outside $\CR$ and the fibre at $(\wt y\bmod \pi_v)$ is not combinatorially $r$-cycle-split, then $f^{-1}(y)$ is not $r$-cycle-split. \end{proposition} \begin{proof} This is a variant of \cite[Theorem 2.8]{smeets-loughran}: After possibly enlarging $S$, $\CR$ can be chosen of codimension $2$ in a way such that \begin{enumerate} \item by generic flatness for regular schemes, $\CF$ is flat on the complement $\CY\setminus\CR$, and \item by generic submersivity \cite[Theorem 2.4]{smeets-loughran} in characteristic $0$, $\CF$ is submersive (i.e.\ surjective on tangent spaces) over $\CT\setminus\CR$. \end{enumerate} Then $\CX\times_\CY \wt y$ is regular and its special fibre is not combinatorially $r$-cycle-split. The rest follows by Lemma~\ref{lem:special-generic}. \end{proof} \begin{proposition}\label{prop:non-surjectivity} Let $\vartheta\in Y^{(1)}$ be a codimension $1$ point of $Y$.
There exists a finite set of places $S$ such that for all $v\notin S$ the following holds: if $s^{0,r}_{f,\vartheta}(v)<1$, then $f$ is not arithmetically $r$-cycle-surjective. \end{proposition} \begin{proof} If $s_{f,\vartheta}^{0,r}(v)<1$, let $\CW$ be the closure of $\vartheta$ in $\CY$. By Corollary~\ref{cor:combinatorial-fail}, for suitable $S$ we can find a point $y$ in the special fibre of $\CW$ above which the fibre is not combinatorially $r$-cycle-split. By Proposition~\ref{prop:transversal}, it therefore suffices to lift $y$ to an integral point intersecting $\CW$ transversally. The argument for this is well-known and literally the same as in \cite[Theorem 4.2]{pseudo-split}: blow up $\CY$ in $y$ and choose a point on the exceptional divisor. \end{proof} \subsection{Sufficient condition and proof of main theorem}\label{sec:sufficient} Finally, using tools from logarithmic geometry, we can give a necessary and sufficient criterion for arithmetic $r$-cycle-surjectivity. We refrain from giving yet another exposition of logarithmic geometry and refer the reader to \cite{handbook}. All log schemes in this section will be fs Zariski log schemes. For this section, assume that we have a log smooth, proper model $\CF:(\CX,\CD)\to(\CY,\CE)$ of $f$, where $(\CX,\CD)$ and $(\CY,\CE)$ are Zariski log regular schemes (with divisorial log structure induced by $\CD$ and $\CE$) that are log smooth and proper over $\CO_{k,S}$ (equipped with the trivial log structure) for some finite set of places $S$. This can be achieved after a modification of $f$ by using the toroidalisation theorem of Abramovich-Denef-Karu \cite{karu} and spreading out. Denote by $D$ and $E$ the generic fibres of $\CD$ and $\CE$. Set $\CU:=\CX\setminus\CD$, $U:=X\setminus D$, $\CV:=\CY\setminus\CE$, and $V:=Y\setminus E$. On these open sets, the log structures are trivial.
By possibly enlarging $S$ in the spreading-out procedure above, we may assume that all irreducible components $\CE'$ of $\CE$ intersect the generic fibre non-trivially, i.e.\ their generic points lie in $Y$. This property of our chosen model is absolutely crucial for the method presented here. Namely, one can control the splitting behaviour of the fibre of $\CF$ over a point in the interior of $\CE'$ by the behaviour of the fibre of $f$ over the generic (characteristic $0$) point $\vartheta'$ of $\CE'$ (see Lemma~\ref{lem:irr-rep-strata}). Let $v$ be a finite place of $k$. Let $k'/k$ be a finite extension and $w$ an extension of $v$ to $k'$. By the valuative criterion of properness, any point \[y:\Spec k'_w\to Y\] extends to a morphism \[\wt y:(\Spec\CO_{k'_w})^\dagger\to (\CY,\CE),\] where $(\Spec\CO_{k'_w})^\dagger$ is the log scheme equipped with the standard divisorial log structure defined by a uniformiser $\pi_w$ (i.e.\ with monoid given by $\CO_{k'_w}\setminus 0$). In \cite{kato}, Kato defines the fan $(F_T,\CM_{F_T})$, a locally monoidal space associated to a log regular scheme $T$, and a morphism $c_T:T\to F_T$. The preimage $U(t)$ of a point $t\in F_T$ under $c_T$ is called a \emph{logarithmic stratum} and is a locally closed subset of $T$. The points of $F_T$ can then be identified with the generic points of the logarithmic strata. (In the older language of toroidal embeddings, these strata are connected components of repeated intersections of the boundary divisor.) To each $t\in F_T$, there corresponds a Kato subcone $F_T^t$ of $F_T$, which is the unique subcone with closed point $t$. There is an attached \emph{logarithmic height} function $h_T:F_T(\Spec \BN)\to\BN$, defined in \cite[\S5]{pseudo-split} as follows. Under an $\BN$-valued point $t\in F_T(\Spec \BN)$, the closed point of $\Spec\BN$ is sent to the closed point of a Kato subcone isomorphic to $\Spec \BN^j$.
The height $h_T(t)$ is defined as the sum of the images of the generators of $\BN^j$ under the map $\BN^j\to\BN$ induced by $t$. Furthermore, a morphism $g$ of log regular schemes induces a morphism $F(g)$ of Kato fans. Because $F_{(\Spec\CO_{k'_w})^\dagger}\cong\Spec\BN$, this defines a logarithmic height $h_\CY(y)$ for any $y\in Y(k_w')$. Morally, the height of $y$ quantifies how often $\wt y$ intersects the special fibre. \begin{lemma}\label{lem:irr-rep-strata} For any $t\in F_\CY$ and $m\in\BN$, the functor $\Irr^m_{\CF^{-1}(U(t))/U(t)}$ is representable by a finite étale scheme over $U(t)$. \end{lemma} \begin{proof} It is shown in \cite[Proposition 5.18]{pseudo-split} that $\Irr_{\CF^{-1}(U(t))/U(t)}$ is representable by a finite étale scheme over $U(t)$. By \cite[Proposition 5.16]{pseudo-split}, apparent multiplicity is constant along logarithmic strata for proper, log smooth morphisms of log regular schemes, and because log smoothness is stable under base change, the same is true for geometric multiplicity. Thus the subfunctor $\Irr_{\CF^{-1}(U(t))/U(t)}^m$ is represented by the closure of $\Irr_{\CF^{-1}(t)/t}^m$ in $\Irr_{\CF^{-1}(U(t))/U(t)}$. \end{proof} The following two propositions bound the intersection behaviour of those points of $Y$ whose fibres we have to consider. \begin{proposition}\label{prop:intersection-bound} There is a positive integer $N$ with the following property. Let $B\in\BN$ be arbitrary and $v\notin S$ a place of $k$. If the fibre over each point $y\in V(k_v)$ with $h_\CY(y)\leq N$ has a zero-cycle $Z$ of degree $r$ with $\maxdeg Z\leq B$, then $f\times_k{k_v}$ is $r$-cycle-surjective. \end{proposition} \begin{proof} The proof is very similar to the one of \cite[Proposition 6.1]{pseudo-split}, which itself is an adaptation of \cite[4.2]{denef}, and we only sketch the steps and highlight the necessary changes. Let $F(f)_*:F_X(\Spec \BN)\to F_Y(\Spec \BN)$ be the morphism induced by $f$.
Then define for all $s\in F_X$ and $t=F(f)_*(s)\in F_Y$: \[N_t=\min\{h_Y(t')|t'\in F_Y^t(\Spec\BN), t'\notin F(f)_*(F_X(\Spec \BN))\},\] \[N_{s,t}=\min\{h_Y(t')|t'\in F(f)_*(F_X^s(\Spec \BN))\subset F_Y^t(\Spec\BN)\},\] and set $N=\max_{s,t}\{N_t,N_{s,t}\}$. We have thus a finite partition \begin{align*} F_Y(\Spec\BN)=&\bigsqcup_{t\in F_Y} F_Y^t(\Spec\BN)\setminus F(f)_*(F_X(\Spec \BN)) \\ &\sqcup\bigsqcup_{s\in F_X} F(f)_*(F_X^s(\Spec \BN)), \end{align*} where each partition subset contains at least one element of height at most $N$. Given some arbitrary $y\in V(k_v)$, we have to show that its fibre is $r$-cycle-split with a uniform bound $B$ on the maximum degree of witnesses, so that we can conclude by Lemma~\ref{lem:birational}. The proof works by twice applying the logarithmic analogue of Hensel's lemma for log smooth morphisms from \cite{hensel} (see also \cite[\S5.2]{pseudo-split}). By the above, we may find $b\in F_Y(\Spec\BN)$ in the same partition subset as $F(\wt y)$ with $h_Y(b)\leq N$. Write $b=\sum_{i\in I}b_i v_i$, where $(v_i)_{i\in I}$ are the cones in $F_Y$ corresponding to the irreducible components $(\CE_i)_{i\in I}$ of $\CE$. Let $(\pi_i)_{i\in I}$ be local equations for $(\CE_i)_{i\in I}$ in an affine neighbourhood $\Spec A$ of $(\wt y \bmod \pi_v)$ in $\CY$. Let $\ov\varphi$ be the canonical morphism \[\ov \varphi:\CO_{k_v}\setminus 0\to (\CO_{k_v}\setminus 0)/(1+\pi_v\CO_{k_v})\cong k(v)^*\oplus \BN\to k(v)^*.\] The first application of logarithmic Hensel's lemma is to the diagram \[\begin{tikzcd} \Spec(k(v))^\dagger \arrow[d]\arrow[r] & (\CY,\CE)\arrow[d]\\ \Spec(\CO_{k_v})^\dagger \arrow[r] & \Spec(\CO_{k_v})^\mathrm{tr} \end{tikzcd}.\] Here, $\Spec(\CO_{k_v})^\mathrm{tr}$ denotes $\Spec(\CO_{k_v})$ with the trivial log structure, with monoid $\CO_{k_v}^*$, and $\Spec(k(v))^\dagger$ denotes the standard log point with log structure $k(v)^*\oplus\BN$, the restriction of $\Spec(\CO_{k_v})^\dagger$.
On the level of monoids, the upper horizontal arrow is defined by \begin{eqnarray*} A^*\times \BN^I &\to& k(v)^*\oplus \BN,\\ \alpha \in A^* &\mapsto& (\alpha(\tilde y \bmod \pi_v),0),\\ 1_i\in\BN^I &\mapsto& (\ov\varphi(\pi_i(\wt y)),b_i), \end{eqnarray*} where $1_i$ is the generator of the $i$-th factor. All other morphisms are the obvious ones. The point $y'\in Y(k_v)$ yielded by logarithmic Hensel's lemma has the same reduction as $y$ but satisfies \[F(\wt{y'})=b\] and \[\ov \varphi(\pi_i(\wt y))=\ov \varphi(\pi_i(\wt y')).\] This is the first half of the proof and works verbatim as in \cite[Proposition 6.1]{pseudo-split}. For the second half, the assumption of our proposition now states that $f^{-1}(y')$ contains a zero-cycle of degree $r$ which we write as $\sum_h n_h x_h'$. Here, $x_h'$ is a closed point defined over a finite extension $l_{w_h}/k_v$ with $[l_{w_h}:k_v]\leq B$. We are done with the proof, if we can lift each $(\wt{x_h'} \bmod \pi_{w_h})$ to an $l_{w_h}$-point $x_h\in f^{-1}(y)$. To do so, we only have to slightly alter diagram $(6.3)$ from the original proof in \cite{pseudo-split} and apply (for the second time) logarithmic Hensel, namely to \[\begin{tikzcd} \Spec(k(w_h))^\dagger \arrow[d]\arrow[r] & (\CX,\CD)\arrow[d]\\ \Spec(\CO_{l_{w_h}})^\dagger \arrow[r] & (\CY,\CE) \end{tikzcd}.\] On schemes, the upper horizontal morphism is given by $(\wt{x_h'} \bmod \pi_{w_h})$ and the lower horizontal morphism is defined by $\wt y$ composed with $\Spec(\CO_{l_{w_h}})^\dagger \to \Spec(\CO_{k_v})^\dagger$. Let $e_h$ be the ramification index of $l_{w_h}/k_v$. 
Then on fans \[\Spec(\CO_{l_{w_h}})^\dagger \to \Spec(\CO_{k_v})^\dagger\] is just $\Spec\BN\to\Spec\BN$ induced by multiplication by $e_h$, and hence \[F(\Spec(\CO_{l_{w_h}})^\dagger \to (\CY,\CE))=e_hF(\wt y).\] In an affine neighbourhood $\Spec(B)$ of $(\wt{x_h'} \bmod \pi_{w_h})$ in $\CX$, $(\CX,\CD)$ has a chart $\BN^J\to B$ given by sending the generator $1_j$ to a local equation $\omega_j$ of the irreducible component $\CD_j$. Let $u_j$ be the Kato subcone corresponding to $\CD_j$. Since $F(\wt y)$ and $b$ were chosen in the same partition subset and \[F(f)_*(\wt x_h')=F(\wt{y'})=b,\] there exists $a=\sum_j a_j u_j\in F_X^s(\Spec\BN)$ such that $F(\wt x_h')\in F_X^s(\Spec \BN)$ and $F(\CF)_*(a)=F(\wt y)$, so \[F(\CF)_*(e_ha)=F(\Spec(\CO_{l_{w_h}})^\dagger \to (\CY,\CE)).\] The log structure of $\Spec(k(w_h))^\dagger \to (\CX,\CD)$ is then defined by the morphism of monoids \[\BN^J\to k(w_h)^*\oplus \BN, 1_j\mapsto (\ov\varphi(\omega_j(\wt x_h')),e_ha_j).\] The proof that this defines a commuting diagram of log schemes works as in \cite[Proposition 6.1]{pseudo-split}. \end{proof} The next proposition \cite[Proposition 5.10 and Proposition 6.2]{pseudo-split} gives us a modification of $\CF$ (obtained by pulling back $N-1$ barycentric log blow-ups of the target) which will turn out to be optimal in the sense that it is all we need to check arithmetic $r$-cycle-surjectivity. \begin{proposition}\label{prop:ultimate-mod} Let $N$ be a positive integer. There is a log smooth modification $\CF':(\CX',\CD')\to(\CY',\CE')$ of $\CF$ with $\CX'$ and $\CY'$ smooth, proper over $\CO_{k,S}$ and geometrically integral with the following property: Let $Y'$ be the generic fibre of $\CY'$ and $E'$ be the generic fibre of $\CE'$. For any $v\notin S$ and each point $y\in (Y\setminus E)(k_v)=(Y'\setminus E')(k_v)$ with $1\leq h_{\CY}(y)\leq N$, we have $h_{\CY'}(y)=1$ and its reduction in $\CY'$ is a smooth point of the reduction of $\CE'$.
\end{proposition} Now we can prove a sufficient criterion: \begin{proposition}\label{prop:sufficient} Let $v\notin S$ and let $\CF':(\CX',\CD')\to(\CY',\CE')$ be a log smooth modification of $\CF$ as in Proposition~\ref{prop:ultimate-mod}. If $s^{0,r}_{f,\vartheta'}(v)=1$ for each generic point $\vartheta'$ of $D'$ (the generic fibre of $\CD'$), then $f\times_k{k_v}$ is $r$-cycle-surjective. \end{proposition} \begin{proof} By Chow's lemma, pick a rational embedding $\iota:\CX'_{k(v)}\dashrightarrow\BP^\nu_{\CY'_{k(v)}}$ and let $B=\Phi(\iota)$. Let $\CV':=\CY'\setminus\CE'$. It is enough to prove that the fibre over a point $y\in V'(k_v)=V(k_v)$ has a zero-cycle $Z$ of degree $r$ with $\maxdeg Z\leq B$. If the reduction of $y$ in $\CY$ is in $\CV$, we know that $\CF'^{-1}(\tilde y \bmod \pi_v)\cap\CU$ is non-empty, smooth and geometrically integral (by assumption on the generic fibre), so $f^{-1}(y)$ has a zero-cycle of degree $1$ with maximum degree at most $B$ by the Lang-Weil estimates. Otherwise, assume that $\wt y$ intersects $\CE'$. By Proposition~\ref{prop:intersection-bound}, we can restrict ourselves to $y$ with $h_\CY(y)\leq N$. Because of Proposition~\ref{prop:ultimate-mod}, $\wt y$ intersects transversally a codimension $1$ logarithmic stratum $\CZ$ of $(\CY',\CE')$. By Lemma~\ref{lem:irr-rep-strata}, $\Irr^m_\CF$ is representable by a finite étale cover over logarithmic strata. Hence, by the assumption $s^{0,r}_{f,\eta_\CZ}(v)=1$ and Corollary~\ref{cor:cycle-split}, the fibre $\CF'^{-1}(\tilde y \bmod \pi_v)$ is combinatorially $r$-cycle-split. The closure $\wt y$ of $y$ in $\CY'$ lies outside the Zariski closure of $E'_{\mathrm{sing}}$ (the singular locus of $E'$). Therefore $\CF'$ is integral outside the closure of $E'_\mathrm{sing}$ by \cite[Cor. 4.4(ii)]{fontaine-illusie}.
Hence, the fibre product $\CX'_y:=(\CX',\CD')\times_{\CF',(\CY',\CE'),\wt y}(\Spec\CO_{k_v})^\dagger$, taken in the category of Zariski log schemes, is fine. Its underlying scheme agrees with the fibre product in schemes \cite[(1.6)]{fontaine-illusie}. Since $\wt y$ intersects $\CE'$ transversally, it follows that $\wt y:(\Spec\CO_{k_v})^\dagger\to (\CY',\CE')$ is a saturated morphism as in \cite{tsuji}. Hence by \cite[I.3.14]{tsuji}, $\CX'_y\to(\CX',\CD')$ is saturated and so is $\CX'_y$ \cite[II.2.12]{tsuji}. Thus $\CX'_y$ coincides with the fibre product taken in the category of fs log schemes. Log smoothness is stable under fs base change \cite[Proposition 12.3.24]{gabber-ramero}, so $\CX'_y$ is log regular, being log smooth over the log regular base $(\Spec\CO_{k_v})^\dagger$ \cite[Theorem 8.2]{kato}. It follows that $\CX'_y$ is Cohen-Macaulay and in particular normal \cite[Theorem 4.1]{kato}. That $\CF'^{-1}(y)=f^{-1}(y)$ is $r$-cycle-split with a witness $Z$ of $\maxdeg Z\leq B$ now follows from its reduction being combinatorially $r$-cycle-split and Lemma~\ref{lem:special-generic}. \end{proof} The main result, Theorem~\ref{thm:main}, reformulated for any $r\in\BN$, is now an easy corollary of Proposition~\ref{prop:non-surjectivity} and Proposition~\ref{prop:sufficient}. \begin{theorem}\label{thm:mainr} Let $f:X\to Y$ be a dominant morphism between proper, smooth, geometrically integral varieties over a number field $k$ with geometrically integral generic fibre. Then $f$ is arithmetically $r$-cycle-surjective outside a finite set $S$ if and only if for each modification $f':X'\to Y'$ and for each codimension $1$ point $\vartheta'$ in $Y'$, the fibre $f'^{-1}(\vartheta')$ is combinatorially $r$-cycle-split.
\end{theorem} \begin{remark} The above result cannot be applied directly to Conjecture~\ref{conj:artin}, which requires proving that the exceptional set $S$ in Theorem~\ref{thm:main} is empty. We can nevertheless say the following. In contrast to the case of Theorem~\ref{thm:lss}, the set $S$ for which we prove Theorem~\ref{thm:main} does not depend on Lang-Weil estimates but only on the existence of a sufficiently nice log smooth model of $f$ as stated in Section~\ref{sec:sufficient}. However, the existence of such models remains open. As far as zero-cycles are concerned, one may try to construct log smooth models by allowing alterations of $f$ instead of modifications, and \cite{temkin} contains strong results in this direction. Unfortunately, even those models do not suffice since the creation of codimension $1$ logarithmic strata is not controlled. \end{remark} \begin{remark} Because the criterion of the preceding main theorem is stable under extensions of the ground field $k$, we could also have defined $r$-cycle-surjective to mean the existence of a zero-cycle of degree $r$ on each fibre over \emph{closed} points of $Y_{k_v}$ (instead of fibres over \emph{$k_v$-rational} points as in Definition~\ref{def:cycle-surjective}). The criterion of Theorem~\ref{thm:mainr} then shows that either definition leads to equivalent notions of arithmetic $r$-cycle-surjectivity (see the related observation by Liang \cite[Remark 6.5]{pseudo-split}). While using closed points is arguably the more natural definition, we prefer to keep Definition~\ref{def:cycle-surjective} in analogy with \cite{pseudo-split}. \end{remark} \begin{example}\label{ex:cycle-surjective} We give an example of a morphism for which one can show that it is arithmetically cycle-surjective but not arithmetically surjective.
Let $A=\oplus_{i=1}^n k_i$ be a finite étale algebra over a number field $k$. Assume that $A$ is almost everywhere locally cycle-split but not pseudo-split (e.g.\ one of the algebras in Examples~\ref{ex:upgrade} and~\ref{ex:non-upgrade}). Then one can define the multinorm torus $\mathrm{R}^1_{A/k}\BG_m$ through \[0\to\mathrm{R}^1_{A/k}\BG_m\to \mathrm{R}_{A/k}\BG_m\xrightarrow{N_{A/k}} \BG_m\to 0\] where the middle term maps to $\BG_m$ via the norm maps. The $1$-parameter family of torsors for $\mathrm{R}^1_{A/k}\BG_m$ given by \[\mathrm{N}_{A/k}(x)=t\neq 0\] can be compactified to a proper, smooth, geometrically integral variety $X$ with a morphism $f$ to $\BP^1_k$. It is easy to see that for all $v\notin S$, all smooth fibres over $k_v$-points have a zero-cycle of degree $1$. Hence, $f$ is arithmetically cycle-surjective. On the other hand, since $A\otimes_k k_v$ is non-split for infinitely many $v$, it follows from \cite[Lemma 5.4]{smeets-loughran} that $f$ is not arithmetically surjective. \end{example} \subsection*{Acknowledgements} The author thanks M.~Bright, J.-L.~Colliot-Thélène, J.~Nicaise, A.~Skorobogatov, O.~Wittenberg and the anonymous referee for comments. This work was supported by the Engineering and Physical Sciences Research Council [EP/ L015234/1], the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London. \bibliographystyle{alpha} \newcommand{\etalchar}[1]{$^{#1}$}
\section{Introduction}\label{section1} Throughout this text all functions we consider are real-valued. Let $\Omega$ be a Lipschitz bounded domain of $\mathbb{R}^n$ ($n\ge 3$), with boundary $\Gamma$, and set \[ C_+(\overline{\Omega})=\{ \sigma \in C(\overline{\Omega});\; \sigma >0\; \mbox{in}\; \overline{\Omega}\}. \] We consider, for $\sigma \in C_+(\overline{\Omega})$, the symmetric bounded and coercive bilinear form \[ \mathfrak{a}_\sigma (u,v)=(\sigma\nabla u|\nabla v)_2,\quad u,v\in H_0^1(\Omega), \] where $(\cdot |\cdot)_2$ is the usual scalar product of $L^2(\Omega )$. Let $F\in H^{-1}(\Omega)$. There exists, according to Lax-Milgram's lemma, a unique $v_\sigma (F)\in H_0^1(\Omega)$ so that \[ \mathfrak{a}_\sigma (v_\sigma (F),w)=\langle F,w\rangle_1,\quad w\in H_0^1(\Omega), \] where $\langle \cdot ,\cdot \rangle_1$ is the duality pairing between $H_0^1(\Omega)$ and its dual $H^{-1}(\Omega)$. Denote by $\gamma_0$ the bounded trace operator from $H^1(\Omega )$ onto $H^{1/2}(\Gamma )$ defined by \[ \gamma_0w =w_{|\Gamma},\quad w\in C^\infty (\overline{\Omega }). \] For convenience, $\gamma_0w$ is denoted in the sequel by $w_{|\Gamma}$. For $h\in H^{1/2}(\Gamma)$, let $\mathcal{E}h$ denote the unique element of $H^1(\Omega)$ satisfying $\mathcal{E}h_{|\Gamma}=h$ and \[ \|\mathcal{E}h\|_{H^1(\Omega)}=\|h\|_{H^{1/2}(\Gamma)}. \] Pick $g\in H^{1/2}(\Gamma)$. It is then not difficult to check that \[ u_\sigma(g)=\mathcal{E}g+v_\sigma (\mbox{div}(\sigma \nabla \mathcal{E}g))\in H^1(\Omega) \] is the unique solution of the BVP \[ \left\{ \begin{array}{l} \mbox{div}(\sigma \nabla u)=0\quad \mbox{in}\; \Omega, \\ u_{|\Gamma}=g. \end{array} \right. \] Indeed, for every $w\in H_0^1(\Omega)$, \[ \mathfrak{a}_\sigma (u_\sigma(g),w)=\mathfrak{a}_\sigma (\mathcal{E}g,w)+\langle \mbox{div}(\sigma \nabla \mathcal{E}g),w\rangle_1=\mathfrak{a}_\sigma (\mathcal{E}g,w)-(\sigma\nabla \mathcal{E}g|\nabla w)_2=0. \] Furthermore, \begin{equation}\label{i1} \|u_\sigma(g)\|_{H^1(\Omega )}\le \varkappa \|g\|_{H^{1/2}(\Gamma)}, \end{equation} where $\varkappa =\varkappa (n,\Omega ,\min \sigma) >0$ is a constant.
We define the Dirichlet-to-Neumann map, associated to $\sigma \in C_+(\overline{\Omega})$, by \begin{align*} \Lambda_\sigma :g\in H^{1/2}(\Gamma )\mapsto &\Lambda_\sigma (g)\in H^{-1/2}(\Gamma): \\ &\langle\Lambda_\sigma (g),h\rangle_{1/2}=\mathfrak{a}_\sigma(u_\sigma(g),\mathcal{E}h),\quad h\in H^{1/2}(\Gamma), \end{align*} where $\langle \cdot ,\cdot\rangle_{1/2}$ is the duality pairing between $H^{1/2}(\Gamma)$ and its dual $H^{-1/2}(\Gamma)$. We remark that we have, according to \cite[Lemma 2.2 in page 131]{Ka}, \[ \Lambda_\sigma(g)=\sigma\partial_\nu u_\sigma(g), \] where $\partial_\nu$ denotes the derivative along the unit normal exterior vector field to $\Gamma$. In light of \eqref{i1}, we obtain \[ |\langle\Lambda_\sigma (g),h\rangle_{1/2}|\le \varkappa \|\sigma\|_{C(\overline{\Omega})}\|g\|_{H^{1/2}(\Gamma)}\|h\|_{H^{1/2}(\Gamma)},\quad h\in H^{1/2}(\Gamma). \] Whence $\Lambda_\sigma \in \mathscr{B}(H^{1/2}(\Gamma ),H^{-1/2}(\Gamma))$. For notational convenience the natural norm of $\mathscr{B}(H^{1/2}(\Gamma ),H^{-1/2}(\Gamma))$ will simply be denoted in the rest of this text by $\|\cdot \|$. Fix $0<\alpha\le1$, $\kappa >1$ and let \[ \Sigma = \left\{ \sigma \in C^{1,\alpha}(\overline{\Omega});\; \kappa^{-1}\le \sigma, \; \|\sigma\|_{C^{1,\alpha}(\overline{\Omega})}\le \kappa\right\}. \] \begin{theorem}\label{theorem-i1} If $\Omega$ is of class $C^{1,1}$ then, for all $\sigma_1,\sigma_2\in \Sigma$, we have \begin{align} & \|\sigma_1-\sigma_2 \|_{C(\Gamma)}\le C\|\Lambda_{\sigma_1}- \Lambda_{\sigma_2}\|,\label{thm-i1.1} \\ & \|\partial_\nu (\sigma_1-\sigma_2) \|_{C(\Gamma)}\le C\|\Lambda_{\sigma_1}- \Lambda_{\sigma_2}\|^{\alpha/(\alpha+1)}, \label{thm-i1.2} \end{align} where $C=C(n,\Omega ,\kappa,\alpha )>0$ is a constant. \end{theorem} As $\alpha\in ]0,1] \mapsto \alpha/(1+\alpha)$ is strictly increasing, the best possible exponent in \eqref{thm-i1.2} is obtained when $\alpha=1$ and it is equal to $1/2$.
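The following elementary computation illustrates \eqref{thm-i1.1}; it is only meant as an illustration, in the special case of the unit ball and constant conductivities, and is not used in the sequel. Let $\Omega =B(0,1)$, let $\sigma$ be a positive constant and let $Y_\ell$ be a spherical harmonic of degree $\ell \ge 1$. As $u(x)=|x|^\ell Y_\ell (x/|x|)$ is harmonic in $\Omega$, we have $u_\sigma (Y_\ell)=u$ and therefore \[ \Lambda_\sigma (Y_\ell)=\sigma \partial_\nu u_{|\Gamma}=\sigma \ell Y_\ell . \] In particular, for two positive constants $\sigma_1$ and $\sigma_2$, $(\Lambda_{\sigma_1}-\Lambda_{\sigma_2})(Y_1)=(\sigma_1-\sigma_2)Y_1$, from which we get \[ \|\sigma_1-\sigma_2\|_{C(\Gamma)}=|\sigma_1-\sigma_2|\le \frac{\|Y_1\|^2_{H^{1/2}(\Gamma)}}{\|Y_1\|^2_{L^2(\Gamma)}}\, \|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|, \] in accordance with \eqref{thm-i1.1}. The identity $\Lambda_\sigma (Y_\ell)=\sigma \ell Y_\ell$ also reflects the fact, exploited in \cite{SU88}, that $\Lambda_\sigma$ is a pseudo-differential operator of order one.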
In light of the interpolation inequality in \cite[Lemma 3.2 in page 264]{Al90} (in which we substitute $\tilde{\nu}$ by $\nu$), we deduce from Theorem \ref{theorem-i1} the following corollary. \begin{corollary}\label{corollary-i1} Assume that $\Omega$ is of class $C^{1,1}$. Then, for all $\sigma_1,\sigma_2\in \Sigma$, we have \begin{align*} & \|\sigma_1-\sigma_2 \|_{C(\Gamma)}\le C\|\Lambda_{\sigma_1}- \Lambda_{\sigma_2}\|, \\ & \|\nabla (\sigma_1-\sigma_2) \|_{C(\Gamma)}\le C\|\Lambda_{\sigma_1}- \Lambda_{\sigma_2}\|^{\alpha/(\alpha+1)}, \end{align*} where $C=C(n,\Omega ,\kappa,\alpha )>0$ is a constant. \end{corollary} Theorem \ref{theorem-i1} was established by Alessandrini \cite{Al90} using singular solutions. An earlier result by Kohn and Vogelius \cite{KV84} gives uniqueness of the conductivity and its normal derivative at the boundary. We also mention the works of Brown \cite{Br} and Nakamura and Tanuma \cite{NT} for reconstruction formulas (for derivatives of arbitrary order in \cite{NT}). The main idea introduced in \cite{KV84} consists in constructing oscillating solutions localized at a boundary point. We revisit this construction in the last section to show that it also yields stability inequalities similar to those in Theorem \ref{theorem-i1}. The only difference is that we obtain, instead of $\alpha/(\alpha+1)$ in \eqref{thm-i1.2}, the exponent $\alpha/[2(\alpha+1)]$. We initially expected to obtain by this method the same exponent as in \eqref{thm-i1.2}, but we did not succeed in modifying our analysis to prove it. A variant of Theorem \ref{theorem-i1} was proven by Sylvester and Uhlmann \cite{SU88} by using tools from microlocal analysis. They showed that the information on the conductivity and its normal derivative at the boundary is contained in the two principal terms of the full symbol of $\Lambda_\sigma$, considered as a pseudo-differential operator of order one.
This information is extracted by again using oscillating solutions localized at a boundary point. The results in \cite{SU88} yield H\"older stability for the normal derivative at the boundary, with an exponent $\gamma$ satisfying $0<\gamma< 1/(n+1)$ (see for instance \cite[Theorem 4.2 in page 6]{Uh09}). We define, for fixed $\dot{\kappa}>1$ and $p>n$, \[ \dot{\Sigma}=\left\{\sigma \in W^{2,p}(\Omega);\; \dot{\kappa}^{-1} \le \sigma ,\; \|\sigma\|_{W^{2,p}(\Omega)}\le \dot{\kappa}\right\}. \] In light of \cite[Corollaire 1.2 in page 30]{Ch13}, we can assert that $W^{2,p}(\Omega)$ is continuously embedded in $C^{1,\alpha}(\overline{\Omega})$, when $\alpha=1-n/p$. Therefore we have obviously $\dot{\Sigma}\subset \Sigma$, with $\alpha=1-n/p$ and some constant $\kappa =\kappa (n,\Omega ,p, \dot{\kappa})>1$. Define $\Phi$ by \[ \Phi(0)=0\quad \mbox{and}\quad \Phi(\varrho)=|\ln \varrho|^{-2/(n+2)}+\varrho,\quad \varrho>0. \] \begin{theorem}\label{theorem-i2} Suppose that $\Omega$ is of class $C^{1,1}$. Then, for all $\sigma_1,\sigma_2\in \dot{\Sigma}$, we have \begin{equation}\label{thm-i2.1} \|\sigma_1-\sigma_2\|_{H^1(\Omega)}\le C\Phi \left(\|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|\right), \end{equation} where $C=C(n,\Omega ,p,\dot{\kappa})>0$ is a constant. \end{theorem} Alessandrini \cite{Al88} established an inequality similar to that in Theorem \ref{theorem-i2} for $H^{s+2}$ conductivities, $s>n/2$, with a logarithmic modulus of continuity of exponent $-\gamma$, $0<\gamma=\gamma (n,s)<1$. The interior stability inequality is achieved by performing the usual Liouville transform, which reduces the inverse conductivity problem to the problem of recovering the potential $q$ in $-\Delta +q$ from the corresponding Dirichlet-to-Neumann map. When $\sigma\in \dot{\Sigma}$, the associated potential $q_\sigma$ belongs to $L^p(\Omega)$. It is not necessarily bounded, but it is still a function.
We point out that the regularity on the conductivity can be relaxed. The case of $C^{1,\alpha}$ conductivities was considered by Caro, Garcia and Reyes \cite{CGR}, in which $q_\sigma$ is only a distribution (see the exact statement in \cite[Theorem 1.1, page 470]{CGR}). In that case the proof is more intricate. The analysis in \cite{CGR} follows the one introduced by Haberman and Tataru in \cite{HT} to prove uniqueness for $C^1$ (or Lipschitz and close to a constant) conductivities. Uniqueness for Lipschitz conductivities was conjectured by Uhlmann and proved by Caro and Rogers in \cite{CR}. We point out that there are only a few results in the case of less regular conductivities and partial boundary data. We refer to the recent paper by Krupchyk and Uhlmann \cite{KU} where uniqueness results are established for conductivities with only $3/2$ derivatives. The first uniqueness result of determining piecewise real-analytic conductivities from the corresponding Dirichlet-to-Neumann map was obtained in the earlier work by Kohn and Vogelius \cite{KV85}. We define the map $\Lambda$ by \[ \Lambda :\sigma \in C_+(\overline{\Omega})\mapsto \Lambda_\sigma \in \mathscr{B}(H^{1/2}(\Gamma ),H^{-1/2}(\Gamma)). \] We leave it to the reader to check that $\Lambda$ is continuous. \begin{theorem}\label{theorem-i3} Assume that $\Omega$ is of class $C^{1,1}$. Then there exists a dense subset $\mathscr{D}$ of $C_+(\overline{\Omega})$, endowed with the topology of $C(\overline{\Omega})$, so that $\Lambda_{|\mathscr{D}}$ is injective. \end{theorem} It is worth noticing that $\Lambda$ can be extended to a continuous map from $L^\infty_+(\Omega)$ into $\mathscr{B}(H^{1/2}(\Gamma ),H^{-1/2}(\Gamma))$, where \[ L^\infty_+(\Omega)=\{ \sigma\in L^\infty(\Omega);\; \mathrm{essinf}\sigma >0\}. \] We know from Lusin's theorem that every function of $L^\infty(\Omega)$ can be approximated pointwise (in the almost everywhere sense) by a sequence of $C(\overline{\Omega})$ functions.
Unfortunately, this is not sufficient to extend Theorem \ref{theorem-i3} to bounded conductivities. There is a tremendous amount of literature devoted to the inverse conductivity problem. We will not discuss this literature in detail. We refer to the nice review paper by Uhlmann \cite{Uh09} on the inverse conductivity problem and the related topics, starting from the pioneering paper by Calder\'on \cite{Ca}. This review paper also contains the most significant results for the two-dimensional case, whose treatment uses tools from complex analysis. For the sake of clarity, we do not comment on the two-dimensional case in the present work. We close this introduction by remarking that it appears from our analysis that $C^{1,1}$ regularity of the domain seems to be the best possible. \section{Stability at the boundary using singular solutions}\label{section2} We shall use in the sequel the following extension theorem. \begin{theorem}\label{exth} Assume that $\Omega$ is of class $C^{1,1}$ and $0\le \beta \le 1$. Then there exists $\mathscr{E}_\beta\in \mathscr{B}(C^{1,\beta}(\overline{\Omega}),C^{1,\beta}(\mathbb{R}^n))$, preserving positivity, so that $\mathscr{E}_\beta f_{|\Omega}=f$, for any $f\in C^{1,\beta}(\overline{\Omega})$. Furthermore, \[ \|\mathscr{E}_\beta\|_{ \mathscr{B}(C^{1,\beta}(\overline{\Omega}),C^{1,\beta}(\mathbb{R}^n))}\le C, \] for some constant $C=C(n,\Omega,\beta)>0$. \end{theorem} \begin{proof} This theorem is more or less known. One can for instance recover the proof by slightly modifying that of \cite[Theorem 1.16, page 23]{Ch13}. \end{proof} Define $\Sigma^e = \mathscr{E}_\alpha (\Sigma)$. In light of Theorem \ref{exth} we can assert that \[ \Sigma^e\subset\left\{ \sigma\in C^{1,\alpha}(\mathbb{R}^n);\; \kappa^{-1}\le \sigma, \; \|\sigma\|_{C^{1,\alpha}(\mathbb{R}^n)}\le \kappa^e\right\}, \] where $\kappa^e=C\kappa$, with $C$ as in the preceding theorem in which $\beta$ is substituted by $\alpha$.
Consider, for each $\sigma \in \Sigma$, the operator $L_\sigma$ acting as follows \[ L_\sigma u =\mbox{div}(\sigma \nabla u),\quad u\in C^2(\Omega), \] and set \[ \mathscr{S}_\sigma =\left\{u\in C^2(\overline{\Omega});\; L_\sigma u=0\right\},\quad \sigma \in \Sigma. \] \subsection{Proof of \eqref{thm-i1.1} of Theorem \ref{theorem-i1}}\label{subsection2.1} Denote by $\mathcal{L}_\sigma$, $\sigma \in \Sigma$, the operator that acts as follows \[ \mathcal{L}_\sigma u=\Delta u+\nabla \ln \sigma \cdot \nabla u, \quad u\in C^2(\Omega). \] Observe that $L_\sigma u=\sigma \mathcal{L}_\sigma u$, $u\in C^2(\Omega)$, so that $\mathscr{S}_\sigma$ coincides with the set of functions $u\in C^2(\overline{\Omega})$ satisfying $\mathcal{L}_\sigma u=0$. We present a proof based on the singularities of the fundamental solution of the operator $\mathcal{L}_\sigma$, obtained by Levi's parametrix method. Let $\Omega_0 \Supset \Omega$ be a Lipschitz domain and set $\Omega_+=\Omega$ and $\Omega_-=\Omega_0 \setminus \overline{\Omega}$. Since $\Omega_-$ and $\Omega_+$ are both Lipschitz domains, they possess the uniform interior cone property. Therefore, there exist $R>0$ and $\theta\in ]0,\pi/2[$ so that, for each $x_0\in \Gamma$, we find $\xi_\pm =\xi_\pm (x_0)\in \mathbb{S}^{n-1}$ with the property that \[ \mathscr{C}_\pm(x_0)=\{x\in \mathbb{R}^n;\; 0<|x-x_0|<R,\; (x-x_0)\cdot \xi_\pm >|x-x_0|\cos \theta\}\subset \Omega_\pm. \] The following fact will be useful in the sequel: if $x_\rho=x_0+\rho \xi_\pm$, with $0<\rho < R/2$, then \[ \mbox{dist}(x_\rho,\partial \mathscr{C}_\pm(x_0))=\rho\sin \theta . \] Pick $x_0\in \Gamma $. Let $R>0$ and $\xi_\pm =\xi_\pm(x_0)$ be as in the definition of the interior cone property. We use in what follows the notations \[ x_\delta =x_0+(\delta/2) \xi_+,\; y_\delta =x_0+(\delta/2) \xi_-,\quad 0<\delta <R/2. \] Of course $x_\delta$ and $y_\delta$ depend on $x_0$. We denote by $H$ the usual fundamental solution of the Laplace operator: \[ H(x,y)= \left[(n-2)|\mathbb{S}^{n-1}|\right]^{-1}|x-y|^{2-n}, \quad x,y\in \mathbb{R}^n,\; x\ne y.
\] Then, since \[ \nabla H(x,y)=-|\mathbb{S}^{n-1}|^{-1}|x-y|^{-n}(x-y), \quad x,y\in \mathbb{R}^n,\; x\ne y, \] where the gradient is taken with respect to $x$, we have \[ |\nabla H(x,y)|^2= |\mathbb{S}^{n-1}|^{-2}|x-y|^{2-2n}, \quad x,y\in \mathbb{R}^n,\; x\ne y. \] The following result follows readily from \cite[Theorem 5, page 282]{Ka} (see also \cite[Theorem A.7, page 265]{Chou}) applied to the operator $\mathcal{L}_\sigma$. \begin{theorem}\label{theorem1} For any $\sigma\in \Sigma$ and $y\in \Omega_0\setminus \overline{\Omega}$, there exists $u_{\sigma} ^y\in \mathscr{S}_\sigma$ so that \begin{align} &|u_{\sigma} ^y(x)-H(x,y)|\le C|x-y|^{2-n+\alpha},\quad x\in \overline{\Omega},\label{thm1.1} \\ &|\nabla u_{\sigma} ^y(x)-\nabla H(x,y)|\le C|x-y|^{1-n+\alpha},\quad x\in \overline{\Omega},\label{thm1.2} \end{align} where $C=C(n,\Omega ,\alpha ,\kappa)>0$ is a constant. \end{theorem} The preceding result is obtained from \cite[Theorem 5, page 282]{Ka} with a $C^2$-smooth domain $\Omega_1$ satisfying $\Omega_1 \Supset \Omega_0\Supset \Omega$. In that case $\mathscr{E}_\alpha\sigma$ gives an extension of $\sigma$ in $\Omega_1$ with the properties required in \cite[Theorem 5, page 282]{Ka}. The following lemma can be deduced easily from the preceding theorem. \begin{lemma}\label{lemma3.1} There exist three constants $C=C(n,\Omega ,\alpha ,\kappa)>0$, $0<\mathfrak{r}=\mathfrak{r}(n,\Omega ,\alpha ,\kappa)\le R$ and $0<\delta_0=\delta_0(n,\Omega ,\alpha ,\kappa)\le \mathfrak{r}/2$ so that \begin{equation}\label{lem3.1} \nabla u_1(x)\cdot \nabla u_2(x)\ge C|x-y_\delta |^{2-2n},\quad x\in B(x_0,\mathfrak{r})\cap \Omega,\; 0<\delta \le \delta_0, \end{equation} where $u_j=u_{\sigma_j}^{y_\delta}$ with $\sigma_j\in \Sigma$, $j=1,2$, is as in Theorem \ref{theorem1}. \end{lemma} It is worth noticing that this lemma says that $\nabla u_1(x)\cdot \nabla u_2(x)$ behaves like $|\nabla H(x,y_\delta)|^2$, locally near $x_0$. In the rest of this subsection, $C=C(n,\Omega ,\alpha ,\kappa)>0$ will denote a generic constant.
Also, the constants $\mathfrak{r}$ and $\delta_0$ are the same as in Lemma \ref{lemma3.1}. Pick $\sigma_j\in \Sigma$, $j=1,2$, and set $\sigma =\sigma_1-\sigma_2$. Fix $x_0\in \Gamma $ so that $|\sigma(x_0)|=\|\sigma\|_{C(\Gamma)}$ and, without loss of generality, we may assume that $|\sigma(x_0)|=\sigma(x_0)$. Let $u_j=u_{\sigma_j}^{y_\delta}\in \mathscr{S}_{\sigma_j}$, $j=1,2$, be given by Theorem \ref{theorem1} and set \[ v_j=u_j-\frac{1}{|\Omega|}\int_\Omega u_jdx \; (\in \mathscr{S}_{\sigma_j}). \] Since \[ \|\sigma\|_{C(\Gamma)}= \sigma(x_0)\le \sigma (x)+2\kappa |x-x_0|^\alpha,\quad x\in \Omega, \] we get \begin{align} \|\sigma\|_{C(\Gamma)}\int_{B(x_0,\mathfrak{r})\cap \Omega} \nabla u_1\cdot \nabla u_2dx\le &\int_{B(x_0,\mathfrak{r})\cap \Omega} \sigma \nabla u_1\cdot \nabla u_2dx \label{a1} \\ &+2\kappa\int_{B(x_0,\mathfrak{r})\cap \Omega} |x-x_0|^\alpha\nabla u_1\cdot \nabla u_2dx.\nonumber \end{align} Hereafter, $0<\delta \le \delta_0$. Using that \[ B(x_\delta ,\delta \sin\theta /2)\subset B(x_0,\delta)\cap \mathscr{C}_+(x_0)\subset B(x_0,\mathfrak{r})\cap \Omega, \] we get from \eqref{lem3.1} \begin{equation}\label{a2} \int_{B(x_0,\mathfrak{r})\cap \Omega} \nabla u_1\cdot \nabla u_2dx\ge C|B(x_\delta ,\delta \sin\theta /2)| (3\delta/2)^{2-2n}\ge C\delta^{2-n}. \end{equation} Taking into account that \[ |x-y_\delta|\ge \mathfrak{r}/2,\quad x\in \Omega \setminus B(x_0,\mathfrak{r}), \] and \[ |\nabla u_j(x)|\le C|x-y_\delta|^{1-n},\quad j=1,2, \] we obtain \[ \int_{\Omega \setminus B(x_0,\mathfrak{r})}\sigma \nabla u_1\cdot\nabla u_2dx\le C. \] Therefore \begin{equation}\label{a3} \int_{B(x_0,\mathfrak{r})\cap \Omega} \sigma \nabla u_1\cdot \nabla u_2dx\le \int_\Omega \sigma \nabla u_1\cdot \nabla u_2dx +C. \end{equation} Let $\mathrm{R}$ (independent of $x_0$) be sufficiently large so that \[ \Omega \subset B(y_\delta ,\mathrm{R})\setminus B(y_\delta, \delta\sin \theta /2).
\] In that case, we have \[ \int_{B(x_0,\mathfrak{r})\cap \Omega} |x-x_0|^\alpha\nabla u_1\cdot \nabla u_2dx\le C\int_{\delta\sin \theta /2}^{\mathrm{R}}(\delta/2+r)^\alpha r^{1-n}dr. \] In consequence, \begin{equation}\label{a4} \int_{B(x_0,\mathfrak{r})\cap \Omega} |x-x_0|^\alpha\nabla u_1\cdot \nabla u_2dx\le C\delta^{2-n+\alpha}. \end{equation} Inequalities \eqref{a2}, \eqref{a3} and \eqref{a4} in \eqref{a1} give \begin{equation}\label{3.3} C\|\sigma\|_{C(\Gamma)}\le \delta^{n-2}\int_\Omega \sigma \nabla v_1\cdot \nabla v_2dx+\delta^\alpha . \end{equation} Let \[ V=\left\{u\in H^1(\Omega);\; \int_\Omega u(x)dx=0\right\}. \] Then Poincar\'e's inequality shows that the map $w\mapsto \|\nabla w\|_{L^2(\Omega)}$ defines a norm on $V$ equivalent to the usual $H^1$ norm. We get in a straightforward manner, by using inequality \eqref{thm1.2}, \[ \|\nabla v_j\|_{L^2(\Omega )}\le C\delta ^{(2-n)/2},\quad j=1,2. \] Combined with the continuity of the trace operator $w\in H^1(\Omega )\mapsto w_{|\Gamma}\in H^{1/2}(\Gamma)$, these inequalities imply \begin{equation}\label{3.4} \|v_j\|_{H^{1/2}(\Gamma)}\le C\delta ^{(2-n)/2},\quad j=1,2. \end{equation} Now according to a well-known identity (e.g. \cite[formula (3.4), page 265]{Al90}), we have \[ \int_\Omega \sigma \nabla v_1\cdot \nabla v_2dx=\langle (\Lambda_1-\Lambda_2)v_1,v_2\rangle_{1/2}. \] Here and henceforward, $\Lambda_j=\Lambda_{\sigma_j}$, $j=1,2$. Inequality \eqref{3.4} then yields \begin{equation}\label{3.5} \int_\Omega \sigma \nabla v_1\cdot \nabla v_2dx\le C\delta ^{2-n}\|\Lambda_1-\Lambda_2\|. \end{equation} Putting together \eqref{3.3} and \eqref{3.5}, we get \[ C\|\sigma\|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|+\delta^{\alpha}. \] Passing to the limit as $\delta$ tends to $0$, we obtain \[ \|\sigma\|_{C(\Gamma)}\le C \|\Lambda_1-\Lambda_2\|. \] That is, we have proved \eqref{thm-i1.1}.
\subsection{Proof of \eqref{thm-i1.2} of Theorem \ref{theorem-i1}}\label{subsection2.2} We shall need the following proposition. We provide its proof in Appendix \ref{appendixA}. We use hereafter the notation \[ \Omega_r=\{x\in \Omega;\; \mathrm{dist}(x,\Gamma)\le r\},\quad r>0. \] \begin{proposition}\label{gproposition} Suppose that $\Omega$ is of class $C^{1,1}$. There exists $\dot{\varrho}=\dot{\varrho}(n,\Omega)$ so that we have: \\ $\mathrm{(i)}$ For any $x\in \Omega_{\dot{\varrho}}$, there exists a unique $\mathfrak{p}(x)\in \Gamma$ such that \[ |x-\mathfrak{p}(x)|=\mathrm{dist}(x,\Gamma),\; x= \mathfrak{p}(x)-|x-\mathfrak{p}(x)|\nu(\mathfrak{p}(x)). \] $\mathrm{(ii)}$ If $x\in \Omega_{\dot{\varrho}}$ then $x_t=\mathfrak{p}(x)-t|x-\mathfrak{p}(x)|\nu(\mathfrak{p}(x))\in \Omega_{\dot{\varrho}}$, $t\in ]0,1]$, and \[ \mathfrak{p}(x_t)=\mathfrak{p}(x),\; |x_t-\mathfrak{p}(x)|=t\mathrm{dist}(x,\Gamma). \] \end{proposition} In the rest of this text, we keep the notations $\dot{\varrho}$, $\Omega_{\dot{\varrho}}$ and $\mathfrak{p}(x)$, $x\in \Omega_{\dot{\varrho}}$, as they are defined in Proposition \ref{gproposition}. We will also use the following semi-norm \[ [\nabla f]_\alpha=\sup\left\{\frac{|\nabla f(x)-\nabla f(y)|}{|x-y|^\alpha};\; x,y\in \overline{\Omega},\; x\ne y\right\},\quad f\in C^{1,\alpha}(\overline{\Omega}). \] \begin{lemma}\label{lemma2} Assume that $\Omega$ is of class $C^{1,1}$ and fix $\varkappa>0$. Let $f\in C^{1,\alpha}(\overline{\Omega})$ satisfy, for some $x_0\in \Gamma$, \[ [\nabla f]_\alpha \le \varkappa\quad \mathrm{and}\quad -\partial_\nu f(x_0)=|\partial_\nu f(x_0)|>0. \] Then, we have \begin{equation}\label{lem2.1} |\partial_\nu f(x_0)|\mathrm{dist}(x,\Gamma)\le f(x)-f(\mathfrak{p}(x))+3\varkappa |x-x_0|^{1+\alpha},\quad x\in \Omega_{\dot{\varrho}}. \end{equation} \end{lemma} \begin{proof} Let $x\in \Omega_{\dot{\varrho}}$ and set $\tilde{x}=\mathfrak{p}(x)$.
Then \[ -\partial_\nu f(\tilde{x})\ge -\partial_\nu f(x_0)-|\partial_\nu f(\tilde{x})-\partial_\nu f(x_0)|, \] and hence \[ -\partial_\nu f(\tilde{x})\ge -\partial_\nu f(x_0)-\varkappa |\tilde{x}-x_0|^\alpha . \] But \[ |x_0-\tilde{x}|\le |x_0-x|+|x-\tilde{x}|\le 2|x-x_0|. \] Therefore \begin{equation}\label{lem2.2} -\partial_\nu f(\tilde{x})\ge -\partial_\nu f(x_0)-2\varkappa |x-x_0|^\alpha . \end{equation} Since \begin{align*} f(x)-f(\tilde{x})&=f(\tilde{x}-|x-\tilde{x}|\nu (\tilde{x}))-f(\tilde{x}) \\ &=-|x-\tilde{x}|\int_0^1\nabla f(\tilde{x}-t|x-\tilde{x}|\nu (\tilde{x}))\cdot\nu (\tilde{x})dt, \end{align*} we obtain \begin{align*} f(x)-f(\tilde{x})&=-\partial_\nu f(\tilde{x})|x-\tilde{x}| \\ &-|x-\tilde{x}|\int_0^1[\nabla f(\tilde{x}-t|x-\tilde{x}|\nu (\tilde{x}))-\nabla f(\tilde{x})]\cdot\nu (\tilde{x})dt. \end{align*} In light of \eqref{lem2.2}, this identity yields \begin{align*} f(x)-f(\tilde{x})+2\varkappa &|x-x_0|^{1+\alpha} \ge |\partial_\nu f(x_0)||x-\tilde{x}| \\ &-|x-\tilde{x}|\int_0^1[\nabla f(\tilde{x}-t|x-\tilde{x}|\nu (\tilde{x}))-\nabla f(\tilde{x})]\cdot\nu (\tilde{x})dt. \end{align*} Whence \[ f(x)-f(\tilde{x})+3\varkappa |x-x_0|^{1+\alpha}\ge |\partial_\nu f(x_0)||x-\tilde{x}|. \] This is the expected inequality because $|x-\tilde{x}|=\mathrm{dist}(x,\Gamma)$. \end{proof} We observe that the singularities of the solutions we used in the preceding subsection depend on the dimension. Therefore they are not sufficient to establish \eqref{thm-i1.2} of Theorem \ref{theorem-i1} in dimension three, as we now explain. We then fix $x_0\in \Gamma$ so that $|\partial_\nu \sigma (x_0)|=\|\partial_\nu \sigma\|_{C(\Gamma)}$. Without loss of generality, we may assume that $|\partial_\nu \sigma (x_0)|=-\partial_\nu \sigma (x_0)$.
We then apply inequality \eqref{lem2.1} in Lemma \ref{lemma2} in order to get \begin{equation}\label{3.6} \|\partial_\nu \sigma\|_{C(\Gamma)}\mathrm{dist}(x,\Gamma )\le \|\sigma\|_{C(\Gamma)}+\sigma(x)+6\kappa |x-x_0|^{1+\alpha},\quad x\in \Omega \cap B(x_0,\rho), \end{equation} where $0< \rho\le \min(\delta_0,\dot{\varrho})$. Let $R$ be as it appears in the definition of the uniform interior cone property. Then, reducing $R$ if necessary, we may assume that $R\le \dot{\varrho}$. In the sequel the notations are those of the preceding subsection. Recall that \[ B(x_\delta ,\delta \sin\theta /2)\subset \mathscr{C}_+(x_0)\cap B(x_0,\delta) \subset B(x_0,\delta)\cap \Omega. \] Therefore, noting that $\mathrm{dist}(x,\Gamma)\ge \delta \sin \theta /4$ when $x\in B(x_\delta ,\delta \sin\theta /4)$, we get \[ \int_{B(x_0,\delta)\cap \Omega}\mathrm{dist}(x,\Gamma)\nabla v_1\cdot \nabla v_2dx\ge C\delta^{n-3}. \] This together with \eqref{3.6}, \eqref{thm-i1.1} of Theorem \ref{theorem-i1}, \eqref{a3} and \eqref{3.5} yields \begin{equation}\label{3.7} C\|\partial_\nu \sigma\|_{C(\Gamma)}\le \delta^{-1}\|\Lambda_1-\Lambda_2\|+\delta^{n-3}(1+\delta ^{-(n-3-\alpha)_+}), \end{equation} where $t_+=\max(t,0)$, $t\in \mathbb{R}$. This inequality allows us to prove \eqref{thm-i1.2} of Theorem \ref{theorem-i1}, but only when $n\ge 4$. To overcome this restriction we need singular solutions with singularities of arbitrary order. The construction of such singular solutions is due to Alessandrini \cite[Lemma 3.1 in page 264]{Al90} in the case of $W^{1,p}(\Omega)$, $p>n$, conductivities (note that $C^{1,\alpha}(\overline{\Omega})$ is continuously embedded in $W^{1,p}(\Omega)$, for any $p>n$). \begin{theorem}\label{thm-Al90} Let $\sigma_j\in \Sigma$, $j=1,2$, and let $\ell \ge 1$ be an integer.
Then there exists $u_j\in W^{2,p}(\Omega )$ satisfying $L_{\sigma_j}u_j=0$ in $\Omega$, $j=1,2$, and \begin{align*} &|\nabla u_j(x)|\le C|x-y_\delta|^{1-(n+\ell)},\quad x\in \overline{\Omega},\; j=1,2, \\ &\nabla u_1(x)\cdot \nabla u_2(x)\ge C|x-y_\delta|^{2-2(n+\ell)},\quad x\in \overline{\Omega}, \end{align*} where $C=C(n,\Omega ,\alpha ,\kappa,\ell)$ is a generic constant. \end{theorem} By taking, instead of the solutions given by Lemma \ref{lemma3.1}, those in Theorem \ref{thm-Al90}, we can proceed as above to get, in place of \eqref{3.7}, the following inequality \[ C\|\partial_\nu \sigma\|_{C(\Gamma)}\le \delta^{-1}\|\Lambda_1-\Lambda_2\|+\delta^{n-3+2\ell }(1+\delta ^{-(n+2\ell -3-\alpha)_+}). \] We then fix $\ell$ sufficiently large so that $n-3+2\ell\ge 1$ in order to get \[ C\|\partial_\nu \sigma\|_{C(\Gamma)}\le \delta^{-1}\|\Lambda_1-\Lambda_2\|+\delta ^\alpha, \] from which we derive \eqref{thm-i1.2} of Theorem \ref{theorem-i1} in a straightforward manner (choose for instance $\delta=\|\Lambda_1-\Lambda_2\|^{1/(1+\alpha)}$, when this quantity is sufficiently small). \section{Proof of Theorems \ref{theorem-i2} and \ref{theorem-i3}}\label{section3} \subsection{Proof of Theorem \ref{theorem-i2}}\label{subsection3.1} Let $\sigma \in \dot{\Sigma}$. Then the multiplication by $\sigma^{\pm 1/2}$ as an operator, denoted again by $\sigma^{\pm 1/2}$, acting on $H^{1/2}(\Gamma)$ is bounded with \begin{equation}\label{4.1} \|\sigma^{\pm 1/2}\|_{\mathscr{B}(H^{1/2}(\Gamma))}\le C\|\sigma^{\pm 1/2}\|_{C^1(\Gamma)}, \end{equation} where $C=C(n,\Omega)$ is a constant. Recall that if $(\sigma^{\pm 1/2})^\ast$ is the adjoint of $\sigma^{\pm 1/2}$, acting as a bounded operator on $H^{-1/2}(\Gamma)$, then \[ \|(\sigma^{\pm 1/2})^\ast\|_{\mathscr{B}(H^{-1/2}(\Gamma))}=\|\sigma^{\pm 1/2}\|_{\mathscr{B}(H^{1/2}(\Gamma))}. \] This and \eqref{4.1} yield \begin{equation}\label{4.2} \|(\sigma^{\pm 1/2})^\ast\|_{\mathscr{B}(H^{-1/2}(\Gamma))}\le C\|\sigma^{\pm 1/2}\|_{C^1(\Gamma)}, \end{equation} with $C$ as in \eqref{4.1}.
If $\sigma_1,\sigma_2\in \dot{\Sigma}$, then similarly to \eqref{4.1} and \eqref{4.2}, we have \begin{align*} &\|\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2}\|_{\mathscr{B}(H^{1/2}(\Gamma))}\le C\|\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2}\|_{C^1(\Gamma)}, \\ & \|(\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2})^\ast\|_{\mathscr{B}(H^{-1/2}(\Gamma))}\le C\|\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2}\|_{C^1(\Gamma)}, \end{align*} where $C=C(n,\Omega)>0$ is a constant. Whence \begin{align*} &\|\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2}\|_{\mathscr{B}(H^{1/2}(\Gamma))}\le C\|\sigma_1-\sigma_2\|_{C^1(\Gamma)}, \\ & \|(\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2})^\ast\|_{\mathscr{B}(H^{-1/2}(\Gamma))}\le C\|\sigma_1-\sigma_2\|_{C^1(\Gamma)}, \end{align*} where $C=C(n,\Omega,\dot{\kappa})>0$ is a constant. These inequalities together with the interpolation inequality in \cite[Lemma 3.2 in page 264]{Al90} (in which we substitute $\tilde{\nu}$ by $\nu$ and we take $\alpha=1-n/p$) imply \begin{align} &C\|\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2}\|_{\mathscr{B}(H^{1/2}(\Gamma))}\label{4.3} \\ &\hskip 3cm \le \|\sigma_1-\sigma_2\|_{C(\Gamma)}^{(p-n)/(2p-n)}+\| \partial_\nu(\sigma_1-\sigma_2)\|_{C(\Gamma)},\nonumber \\ &C \|(\sigma_1^{\pm 1/2}-\sigma_2^{\pm 1/2})^\ast\|_{\mathscr{B}(H^{-1/2}(\Gamma))}\label{4.4} \\ &\hskip 3cm\le \|\sigma_1-\sigma_2\|_{C(\Gamma)}^{(p-n)/(2p-n)}+\| \partial_\nu(\sigma_1-\sigma_2)\|_{C(\Gamma)},\nonumber \end{align} where $C=C(n,\Omega,p,\dot{\kappa})>0$ is a constant. We have \[ \left|\int_\Gamma \sigma^{-1}\partial_\nu \sigma gdS(x)\right|\le \|\sigma^{-1}\partial_\nu\sigma \|_{L^2(\Gamma)}\|g\|_{L^2(\Gamma)}\le C \|\sigma^{-1}\partial_\nu\sigma \|_{L^2(\Gamma)}\|g\|_{H^{1/2}(\Gamma)}, \] with a constant $C=C(n,\Omega)>0$. 
Hence the multiplication by $\sigma^{-1}\partial_\nu \sigma$ defines an operator, denoted again by $\sigma^{-1}\partial_\nu \sigma$, acting continuously between $H^{1/2}(\Gamma)$ and $H^{-1/2}(\Gamma)$, and \[ \|\sigma^{-1}\partial_\nu \sigma \|_{\mathscr{B}(H^{1/2}(\Gamma),H^{-1/2}(\Gamma))}\le C\|\sigma^{-1}\partial_\nu\sigma \|_{L^2(\Gamma)}, \] where $C$ is as in the inequality above. We have similarly \begin{equation}\label{4.5} \|\sigma_1^{-1}\partial_\nu \sigma_1-\sigma_2^{-1}\partial_\nu \sigma_2\|_{\mathscr{B}(H^{1/2}(\Gamma),H^{-1/2}(\Gamma))}\le C\|\sigma_1^{-1}\partial_\nu \sigma_1-\sigma_2^{-1}\partial_\nu \sigma_2 \|_{L^2(\Gamma)}. \end{equation} Here the constant $C$ is the same as in the preceding inequality. We associate to $\sigma\in \dot{\Sigma}$ the function $q_\sigma =\sigma^{-1/2}\Delta \sigma^{1/2}\in L^p(\Omega)$. The usual Liouville transform shows that $v_\sigma (g)=\sigma^{1/2}u_\sigma (\sigma^{-1/2}g)$, $g\in H^{1/2}(\Gamma)$, is the unique solution of the BVP \[ \left\{ \begin{array}{ll} -\Delta v+q_\sigma v=0\quad \mbox{in}\; \Omega , \\ v_{|\Gamma}=g. \end{array} \right. \] It is worth noticing that the preceding transform guarantees that $0$ is not an eigenvalue of the operator $A_\sigma= -\Delta +q_\sigma$, with domain $D(A_\sigma )=D=\{w\in H_0^1(\Omega);\; \Delta w\in L^2(\Omega)\}$, that we consider as an unbounded operator on $L^2(\Omega)$. In this definition we used the fact that $H_0^1(\Omega)$ is continuously embedded in $L^{2n/(n-2)}(\Omega)$, which, combined with H\"older's inequality, yields \[ \|q_\sigma w\|_{L^2(\Omega)}\le \|q_\sigma\|_{L^n(\Omega)}\|w\|_{L^{2n/(n-2)}(\Omega)}\le C\|q_\sigma\|_{L^p(\Omega)}\|w\|_{H_0^1(\Omega)}, \] for some constant $C=C(n,\Omega,p)>0$.
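For the reader's convenience, here is a sketch of the elementary computation behind the Liouville transform (for smooth $\sigma>0$ and $v=\sigma^{1/2}u$, using $2\nabla \sigma^{1/2}=\sigma^{-1/2}\nabla\sigma$):

```latex
\begin{align*}
\Delta v &= \sigma^{1/2}\Delta u + 2\nabla\sigma^{1/2}\cdot\nabla u + u\,\Delta\sigma^{1/2}\\
&= \sigma^{-1/2}\big(\sigma\Delta u + \nabla\sigma\cdot\nabla u\big)
+ \big(\sigma^{-1/2}\Delta\sigma^{1/2}\big)\sigma^{1/2}u
= \sigma^{-1/2}\,\mathrm{div}(\sigma\nabla u) + q_\sigma v.
\end{align*}
```

Hence $\mathrm{div}(\sigma\nabla u)=0$ in $\Omega$ if and only if $-\Delta v+q_\sigma v=0$ in $\Omega$, while $v_{|\Gamma}=\sigma^{1/2}u_{|\Gamma}$ accounts for the boundary datum $\sigma^{-1/2}g$; the same identity holds in the weak sense for $\sigma\in\dot{\Sigma}$.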
We also recall that the trace operator $w\in D\rightarrow \partial_\nu w\in H^{-1/2}(\Gamma)$ defines a bounded operator, with \[ \|\partial_\nu w\|_{H^{-1/2}(\Gamma)}\le c_\Omega\left(\|w\|_{H_0^1(\Omega)}+\|\Delta w\|_{L^2(\Omega)}\right),\quad w\in D \] (e.g. \cite[Lemma 2.2, page 131]{Ka}). Let $\dot{\Lambda}_\sigma\in \mathscr{B}(H^{1/2}(\Gamma ),H^{-1/2}(\Gamma))$ be the operator acting as follows \[ \dot{\Lambda}_\sigma (g)=\partial_\nu v_\sigma (g),\quad g\in H^{1/2}(\Gamma). \] We have the following known formula, which one can also establish in a straightforward manner, \begin{equation}\label{4.6} \dot{\Lambda}_\sigma = (\sigma^{-1/2})^\ast \circ \Lambda_\sigma \circ \sigma ^{1/2}+\sigma^{-1}\partial_\nu \sigma /2. \end{equation} By putting together inequalities \eqref{4.3}-\eqref{4.5} and identity \eqref{4.6}, we get \[ C\|\dot{\Lambda}_{\sigma_1}-\dot{\Lambda}_{\sigma_2}\|\le \|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|+\|\sigma_1-\sigma_2\|_{C(\Gamma)}^{(p-n)/(2p-n)}+\| \partial_\nu(\sigma_1-\sigma_2)\|_{C(\Gamma)} \] which, in light of Theorem \ref{theorem-i1}, gives \begin{equation}\label{4.7} C\|\dot{\Lambda}_{\sigma_1}-\dot{\Lambda}_{\sigma_2}\|\le \|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|^{(p-n)/(2p-n)}. \end{equation} Noting that \[ \|q_{\sigma_1}-q_{\sigma_2}\|_{H^{-1}(\Omega)}\le \|(q_{\sigma_1}-q_{\sigma_2})\chi_\Omega\|_{H^{-1}(\mathbb{R}^n)}, \] we modify slightly the proof of \cite[Theorem 3.2 in page 14]{Ch19} in order to obtain \[ C\|q_{\sigma_1}-q_{\sigma_2}\|_{H^{-1}(\Omega)}\le \rho^{-2/(2+n)}+e^{c\rho}\|\dot{\Lambda}_{\sigma_1}-\dot{\Lambda}_{\sigma_2}\|,\quad \rho \ge \rho_0, \] where $C=C(n,\Omega ,\kappa ,p)>0$, $c=c(n,\Omega ,\kappa ,p)>0$ and $\rho_0=\rho_0(n,\Omega ,\kappa ,p)>0$ are constants.
This and \eqref{4.7} give \begin{equation}\label{4.8} C\|q_{\sigma_1}-q_{\sigma_2}\|_{H^{-1}(\Omega)}\le \rho^{-2/(2+n)}+e^{c\rho}\|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|^{(p-n)/(2p-n)},\quad \rho \ge \rho_0, \end{equation} where the constants $C$, $c$ and $\rho_0$ are the same as above. \begin{lemma}\label{lemma4.1} Let $a\in L^\infty(\Omega)$ satisfy $\lambda^{-1}\le a\le \lambda$, for some constant $\lambda\ge 1$. We have, for any $w\in H^2(\Omega)$, \[ C\|w\|_{H^1(\Omega)}\le \|\mathrm{div}(a\nabla w)\|_{H^{-1}(\Omega)}+\|w\|_{L^2(\Gamma)}+\|\nabla w\|_{L^2(\Gamma)}, \] where $C=C(n,\Omega ,\lambda)>0$ is a constant. \end{lemma} \begin{proof} Let $w\in H^2(\Omega)$ and $\tilde{w}=\mathcal{E}(w_{|\Gamma})\in H^1(\Omega)$. Since \[ C\|w\|_{H^{1/2}(\Gamma)}\le \|w\|_{H^1(\Gamma)}\le \|w\|_{L^2(\Gamma)}+\|\nabla w\|_{L^2(\Gamma)} \] and \[ C \|\mathrm{div}(a\nabla \tilde{w})\|_{H^{-1}(\Omega)}\le\|\tilde{w}\|_{H^1(\Omega)}=\|w\|_{H^{1/2}(\Gamma)}, \] we derive that \[ C \|\mathrm{div}(a\nabla \tilde{w})\|_{H^{-1}(\Omega)}\le \|w\|_{L^2(\Gamma)}+\|\nabla w\|_{L^2(\Gamma)}. \] In the sequel, we endow $H_0^1(\Omega)$ with the norm $\psi \mapsto \|\nabla \psi\|_{L^2(\Omega)}$. As $w-\tilde{w}\in H_0^1(\Omega)$, we get \[ \int_\Omega a|\nabla (w-\tilde{w})(x)|^2dx =\langle\mathrm{div}(a\nabla w)-\mathrm{div}(a\nabla \tilde{w})|w-\tilde{w}\rangle_1. \] Hence \[ \lambda^{-1}\|w-\tilde{w}\|_{H^1(\Omega)}\le \|\mathrm{div}(a\nabla w)\|_{H^{-1}(\Omega)}+\|\mathrm{div}(a\nabla \tilde{w})\|_{H^{-1}(\Omega)}, \] from which we deduce in a straightforward manner \[ C\|w-\tilde{w}\|_{H^1(\Omega)}\le \|\mathrm{div}(a\nabla w)\|_{H^{-1}(\Omega)}+\|w\|_{L^2(\Gamma)}+\|\nabla w\|_{L^2(\Gamma)}, \] where $C=C(n,\Omega ,\lambda)>0$ is a constant. The last inequality, together with the following one \[ \|w\|_{H^1(\Omega)}\le \|\tilde{w}\|_{H^1(\Omega)}+\|w-\tilde{w}\|_{H^1(\Omega)}, \] then gives the expected inequality.
\end{proof} \begin{proposition}\label{proposition4.1} For each $\sigma_1,\sigma_2\in \dot{\Sigma}$, we have \begin{equation}\label{4.9} C\|\sigma_1-\sigma_2\|_{H^1(\Omega )}\le \|q_{\sigma_1}-q_{\sigma_2}\|_{H^{-1}(\Omega)}+\|\sigma_1-\sigma_2\|_{L^2(\Gamma)}+\|\nabla (\sigma_1-\sigma_2)\|_{L^2(\Gamma)}, \end{equation} where $C=C(n,\Omega ,p,\dot{\kappa})>0$ is a constant. \end{proposition} \begin{proof} In this proof $C=C(n,\Omega ,p,\dot{\kappa})>0$ denotes a generic constant. Let $\sigma_1,\sigma_2\in \dot{\Sigma}$ and set $a=\sqrt{\sigma_1\sigma_2}$, $f=2a(q_{\sigma_1}-q_{\sigma_2})$, and $w=\ln (\sigma_1/\sigma_2)$. From the calculations in \cite{Al88} or \cite{SU86}, we derive \[ \mathrm{div}(a\nabla w)=f. \] Alternatively, the calculations leading to this equation can easily be carried out directly. We apply Lemma \ref{lemma4.1} in order to get \[ C\|w\|_{H^1(\Omega)}\le \|q_{\sigma_1}-q_{\sigma_2}\|_{H^{-1}(\Omega)}+\|w\|_{L^2(\Gamma)}+\|\nabla w\|_{L^2(\Gamma)}. \] The following identities \begin{align*} &w(x)=(\sigma_1(x)-\sigma_2(x))\int_0^1\frac{dt}{\sigma_2(x)+t(\sigma_1(x)-\sigma_2(x))},\quad x\in \overline{\Omega}, \\ &\nabla (\sigma_1(x)-\sigma_2(x))=\sigma_1(x)[\nabla w(x)+(1/\sigma_2(x)-1/\sigma_1(x))\nabla \sigma_2(x)] ,\quad x\in \overline{\Omega}, \end{align*} easily yield the expected inequality. \end{proof} By putting together \eqref{thm-i1.1}, \eqref{thm-i1.2}, \eqref{4.8} and \eqref{4.9}, we end up with \begin{equation}\label{4.10} C\|\sigma_1-\sigma_2\|_{H^1(\Omega )}\le \rho^{-2/(2+n)}+e^{c\rho}\|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|^{(p-n)/(2p-n)},\quad \rho \ge \rho_0, \end{equation} where $C$, $c$ and $\rho_0$ are the same as in \eqref{4.8}. The proof of Theorem \ref{theorem-i2} follows by using a usual minimizing argument, in the right hand side of \eqref{4.10}, with respect to $\rho$ (one can take, for instance, $\rho$ proportional to $\left|\ln \|\Lambda_{\sigma_1}-\Lambda_{\sigma_2}\|\right|$). \subsection{Proof of Theorem \ref{theorem-i3}}\label{subsection3.2} We first proceed to the construction of $\mathscr{D}$.
The unit cube $]0,1[^n$ is denoted by $\mathscr{Q}_0$. Recall that the Bernstein polynomials $p_{k,j}$ are given by \[ p_{k,j}(t)=C_k^jt^j(1-t)^{k-j},\quad 0\le j\le k, \] with \[ C_k^j=\frac{k!}{j!(k-j)!}. \] To $f\in C(\overline{\mathscr{Q}_0})$, we associate the Bernstein polynomial \[ B_k^0(f)(t_1,\ldots ,t_n)=\sum_{j_1=0}^k\ldots \sum_{j_n=0}^k f\left( \frac{j_1}{k},\ldots ,\frac{j_n}{k}\right)p_{k,j_1}(t_1)\ldots p_{k,j_n}(t_n). \] \begin{theorem}\label{theorem2} $($\cite[Theorem 1.2.9, page 18]{BP}$)$ For any $f\in C(\overline{\mathscr{Q}_0})$, we have \[ \lim_{k\rightarrow \infty}\|f-B_k^0(f)\|_{C(\overline{\mathscr{Q}_0})}=0. \] \end{theorem} Fix $a<b$ and denote by $\mathscr{Q}$ the cube $]a,b[^n$. We associate to each $f\in C(\overline{\mathscr{Q}})$ the polynomial \begin{align*} &B_k(f)(x_1,\ldots ,x_n)=\sum_{j_1=0}^k\ldots \sum_{j_n=0}^k f\left( a+\frac{j_1}{k}(b-a),\ldots ,a+\frac{j_n}{k}(b-a)\right) \\ &\hskip 6.5cm \times p_{k,j_1}\left(\frac{x_1-a}{b-a}\right)\ldots p_{k,j_n}\left(\frac{x_n-a}{b-a}\right). \end{align*} The following result is a straightforward consequence of Theorem \ref{theorem2}. \begin{corollary}\label{corollary1} For any $f\in C(\overline{\mathscr{Q}})$, we have \[ \lim_{k\rightarrow \infty}\|f-B_k(f)\|_{C(\overline{\mathscr{Q}})}=0. \] \end{corollary} If $O$ is an open bounded subset of $\mathbb{R}^n$, we set \[ C_+(\overline{O})=\{ \sigma \in C(\overline{O});\; \sigma >0\; \mbox{in}\; \overline{O}\}. \] Let the cube $\mathscr{Q}$ be chosen so that $\Omega \Subset \mathscr{Q}$. Then, according to the Tietze extension theorem (e.g. \cite[Theorem 9.35, page 256]{BrP}), for each $\sigma \in C_+(\overline{\Omega})$ there exists $\sigma_e\in C_+(\overline{\mathscr{Q}})$ so that $\sigma_e=\sigma$ in $\overline{\Omega}$ and \[ \|\sigma_e\|_{C(\overline{\mathscr{Q}})}=\|\sigma\|_{C(\overline{\Omega})}. \] In the sequel we shall use the fact that $B_k(\sigma_e)\in C_+(\overline{\mathscr{Q}})$, whenever $\sigma \in C_+(\overline{\Omega})$.
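This positivity is a direct consequence of the nonnegativity of the polynomials $p_{k,j}$ on $[0,1]$ together with the binomial identity

```latex
\sum_{j=0}^k p_{k,j}(t)=\sum_{j=0}^k C_k^j\,t^j(1-t)^{k-j}=\big(t+(1-t)\big)^k=1,\quad 0\le t\le 1.
```

Indeed, these two facts yield $B_k(\sigma_e)\ge \min_{\overline{\mathscr{Q}}}\sigma_e>0$ in $\overline{\mathscr{Q}}$ (and, similarly, $B_k(\sigma_e)\le \max_{\overline{\mathscr{Q}}}\sigma_e$).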
Define \[ \mathscr{D}=\{\chi=B_k(\sigma_e)_{|\overline{\Omega}};\; k\in \mathbb{N},\; \sigma \in C_+(\overline{\Omega})\}. \] Pick $\epsilon >0$ and $\sigma \in C_+(\overline{\Omega})$. In light of Corollary \ref{corollary1}, we can choose $k\in \mathbb{N}$ sufficiently large so that \[ \|\sigma_e-B_k(\sigma_e)\|_{C(\overline{\mathscr{Q}})}\le \epsilon . \] In consequence, we have, where $\chi =B_k(\sigma_e)_{|\overline{\Omega}}\in \mathscr{D}$, \[ \|\sigma-\chi\|_{C(\overline{\Omega})}\le \|\sigma_e-B_k(\sigma_e)\|_{C(\overline{\mathscr{Q}})}\le \epsilon . \] In other words, we proved that $\mathscr{D}$ is dense in $C_+(\overline{\Omega} )$ with respect to the topology of $C(\overline{\Omega})$. We now complete the proof of Theorem \ref{theorem-i3}. Let $\chi_j\in \mathscr{D}$, $j=1,2$, be so that $\Lambda_{\chi_1}=\Lambda_{\chi_2}$. Then it is straightforward to check that $\chi_1,\chi_2$ belong to $\Sigma$, for some $\kappa =\kappa (\chi_1,\chi_2)>1$. We end up getting $\chi_1=\chi_2$ by applying Theorem \ref{theorem-i2}. \begin{remark} {\rm There is another possibility to construct $\mathscr{D}$, by using a sequence of mollifiers and the convolution. Let $\psi\in C_0^\infty(\mathbb{R}^n)$ satisfy $0\le \psi $, $\mbox{supp}(\psi)\subset B(0,1)$ and $\int_{\mathbb{R}^n}\psi (x)dx=1$. For each integer $k\ge 1$, we define $\psi_k$ by $\psi_k(x)=k^n\psi (kx)$, $x\in \mathbb{R}^n$. If $f\in C(\overline{\mathscr{Q}})$ then $f_k=\psi_k\ast f$ is well defined on $\mathscr{Q}_k=\{x\in \mathscr{Q};\; \mbox{dist}(x,\partial \mathscr{Q})>1/k\}$. We derive from \cite[Theorem 1.6, page 5]{Ch13} that $\|f_k-f\|_{C(\overline{\Omega})}$ converges to zero as $k$ goes to $\infty$. We can therefore proceed as above to prove that \[ \mathscr{D}=\{\chi=(\psi_k\ast \sigma_e)_{|\overline{\Omega}};\; k\in \mathbb{N},\; \sigma \in C_+(\overline{\Omega})\} \] is dense in $C_+(\overline{\Omega})$, when this latter is equipped with the norm of $C(\overline{\Omega})$.
} \end{remark} \section{Additional results}\label{section4} \subsection{Anisotropic case: determination of the conformal factor}\label{subsection4.1} We describe the main ideas needed to extend some results from the isotropic case to the anisotropic case. We are mainly concerned with the determination of the conformal factor. To this end we fix $A=(a^{ij})$ a matrix-valued function whose coefficients belong to $C^{1,\alpha}(\overline{\Omega})$. We suppose that $A$ is symmetric and satisfies, for some $\mu > 1$, \[ \mu^{-1}|\xi|^2\le A(x)\xi \cdot \xi\le \mu|\xi|^2, \quad x\in \Omega, \; \xi \in \mathbb{R}^n, \] and \[ \max_{1\le i,j\le n}\|a^{ij}\|_{C^{1,\alpha}(\overline{\Omega})}\le \mu. \] Consider the BVP \begin{equation}\label{6.1} \left\{ \begin{array}{l} \mbox{div}(\sigma A\nabla u)=0\quad \mbox{in}\; \Omega, \\ u_{|\Gamma}=g. \end{array} \right. \end{equation} We can proceed similarly to the isotropic case to show that, for any $\sigma \in \Sigma$ and $g\in H^{1/2}(\Gamma)$, the BVP \eqref{6.1} possesses a unique solution $\tilde{u}_\sigma (g)\in H^1(\Omega )$. Furthermore, we can define the Dirichlet-to-Neumann map, associated to $\sigma$, as the bounded operator given by \begin{align*} \tilde{\Lambda}_\sigma :g\in H^{1/2}(\Gamma )&\rightarrow H^{-1/2}(\Gamma) : \\ &\langle\tilde{\Lambda}_\sigma (g),h\rangle_{1/2}=\int_\Omega \sigma A\nabla \tilde{u}_\sigma(g)\cdot \nabla\mathcal{E}hdx,\quad h\in H^{1/2}(\Gamma), \end{align*} which satisfies \[ \|\tilde{\Lambda} _\sigma\|\le C, \] for some constant $C=C(n,\Omega ,\kappa ,\mu)$. The canonical parametrix associated to the operator $\mbox{div}(\sigma A\nabla \cdot)$, with $\sigma\in \Sigma$, is given by \[ H_\sigma (x,y)=\frac{[A^{-1}(y)(x-y)\cdot (x-y)]^{(2-n)/2}}{(n-2)|\mathbb{S}^{n-1}|\sigma(y)[\mbox{det}A(y)]^{1/2}},\quad x,y\in \mathbb{R}^n,\; x\ne y. \] Here $\sigma$ and $A$ are extended according to Theorem \ref{exth} (\cite[Formula (2.4) in page 258]{Ka}).
Elementary computations show that, for all $\sigma_1,\sigma_2\in \Sigma$ and $x,y\in \mathbb{R}^n$ with $ x\ne y$, we have \begin{equation}\label{6.2} \mathfrak{c}^{-1} |x-y|^{2-2n}\le A(x)\nabla_xH_{\sigma_1}(x,y)\cdot\nabla_xH_{\sigma_2}(x,y)\le \mathfrak{c}|x-y|^{2-2n}, \end{equation} where $\mathfrak{c}=\mathfrak{c}(n,\Omega ,\kappa,\mu )>1$ is a constant. Set \[ \tilde{\mathscr{S}}_\sigma=\{ u\in C^2(\overline{\Omega});\; \mbox{div}(\sigma A\nabla u)=0\},\quad \sigma \in \Sigma. \] As for Theorem \ref{theorem1}, we have as a consequence of \cite[Theorem 5, page 282]{Ka} the following result. \begin{theorem}\label{theorem6.1} For any $\sigma\in \Sigma_\kappa$ and $y\in \Omega_0\setminus \overline{\Omega}$, there exists $\tilde{u}_{\sigma} ^y\in \tilde{\mathscr{S}}_\sigma$ so that \begin{align*} &|\tilde{u}_{\sigma} ^y(x)-H_\sigma(x ,y)|\le C|x-y|^{2-n+\alpha},\quad x\in \overline{\Omega}, \\ &|\nabla \tilde{u}_{\sigma} ^y(x)-\nabla H_\sigma (x,y)|\le C|x-y|^{1-n+\alpha},\quad x\in \overline{\Omega}, \end{align*} where $C=C(n,\Omega ,\kappa,\mu,\alpha)>0$ is a constant. \end{theorem} On the other hand, as for the isotropic case, for all $\sigma_j\in \Sigma$ and $u_j\in \tilde{\mathscr{S}}_{\sigma_j}$, $j=1,2$, the following identity holds \begin{equation}\label{6.3} \int_\Omega (\sigma_1-\sigma_2) A\nabla u_1\cdot \nabla u_2dx=\langle (\tilde{\Lambda}_{\sigma_1}-\tilde{\Lambda}_{\sigma_2})v_1,v_2\rangle_{1/2}, \end{equation} where we set $v_j=u_j-\int_\Omega u_jdx$, $j=1,2$. In light of \eqref{6.2}, \eqref{6.3} and Theorem \ref{theorem6.1}, we can mimic the proof of Theorem \ref{theorem-i1} and Corollary \ref{corollary-i1} in order to obtain the following theorem (we observe that Theorem \ref{thm-Al90} still holds if $L_{\sigma_j}$ is substituted by the operator $\mathrm{div}(\sigma_jA\nabla \cdot)$).
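The elementary computations behind \eqref{6.2} can be sketched as follows, where $\mathfrak{q}(x,y)=A^{-1}(y)(x-y)\cdot (x-y)$ is a shorthand we introduce only for this verification:

```latex
\begin{align*}
\nabla_x H_\sigma (x,y)&=-\frac{\mathfrak{q}(x,y)^{-n/2}}{|\mathbb{S}^{n-1}|\sigma(y)[\mbox{det}A(y)]^{1/2}}\,A^{-1}(y)(x-y),\\
A(x)\nabla_xH_{\sigma_1}(x,y)\cdot\nabla_xH_{\sigma_2}(x,y)
&=\frac{A(x)A^{-1}(y)(x-y)\cdot A^{-1}(y)(x-y)}
{|\mathbb{S}^{n-1}|^2\sigma_1(y)\sigma_2(y)\,\mbox{det}A(y)\,\mathfrak{q}(x,y)^{n}}.
\end{align*}
```

Since the ellipticity condition on $A$ also gives $\mu^{-1}|\xi|^2\le A^{-1}(y)\xi\cdot\xi\le \mu|\xi|^2$ and $\mu^{-n}\le \mbox{det}A(y)\le \mu^n$, and since $\kappa^{-1}\le \sigma_j\le \kappa$, the two-sided bound \eqref{6.2} follows, with $\mathfrak{c}$ depending only on $n$, $\kappa$ and $\mu$.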
\begin{theorem}\label{theorem-6.2} If $\Omega$ is of class $C^{1,1}$ then, for all $\sigma_1,\sigma_2\in \Sigma$, we have \begin{align*} & \|\sigma_1-\sigma_2 \|_{C(\Gamma)}\le C\|\tilde{\Lambda}_{\sigma_1}- \tilde{\Lambda}_{\sigma_2}\|, \\ & \|\nabla (\sigma_1-\sigma_2) \|_{C(\Gamma)}\le C\|\tilde{\Lambda}_{\sigma_1}- \tilde{\Lambda}_{\sigma_2}\|^{\alpha/(\alpha+1)}, \end{align*} where $C=C(n,\Omega ,\kappa, \mu,\alpha )>0$ is a constant. \end{theorem} \begin{lemma}\label{lemma6.1} Let $\ell \ge 2$ be an integer and $f\in C^{\ell,\alpha}(\overline{\Omega_{\varrho_0}})$, for some $\varrho_0>0$, satisfying, for some $\varkappa >0$, \[ \|f\|_{C^{\ell,\alpha}(\overline{\Omega_{\varrho_0}})}\le \varkappa. \] Let $x_0\in \Gamma$ be so that \[ (-1)^\ell\partial_\nu ^\ell f(x_0)=|\partial_\nu ^\ell f(x_0)|. \] Then the following inequality holds \begin{align*} |\partial_\nu ^\ell f(x_0)|\mathrm{dist}(x,\Gamma)^\ell \le f(x)&-f(\mathfrak{p}(x)) \\ &+\sum_{j=1}^{\ell-1}(-1)^{j+1}\partial_\nu^jf(\mathfrak{p}(x))\mathrm{dist}(x,\Gamma)^j+\varkappa '|x-x_0|^{\ell+\alpha}, \end{align*} where $\varkappa'=\varkappa'(n,\Omega ,\varkappa,\ell)>0$ is a constant. \end{lemma} \begin{proof} We use Taylor's formula and we proceed as in the proof of Lemma \ref{lemma2}. \end{proof} We fix $0<\varrho_0<\varrho$. We then set $\Sigma^0=\Sigma^1=\Sigma$ and \[ \Sigma^\ell=\{ \sigma \in \Sigma \cap C^{\ell,\alpha}(\overline{\Omega_{\varrho_0}}),\; \|\sigma\|_{C^{\ell,\alpha}(\overline{\Omega_{\varrho_0}})}\le \kappa\},\quad \ell \ge 2. \] We also introduce the notations \[ \gamma_0=1\quad \mbox{and}\quad \gamma_\ell=\prod_{j=1}^\ell \frac{\alpha}{\alpha +j},\; \ell \ge 1. \] An extension of the proof of \eqref{thm-i1.2} of Theorem \ref{theorem-i1}, together with an induction argument with respect to $\ell$, yields the following result. \begin{theorem}\label{theorem6.3} Suppose that $\Omega$ is of class $C^{1,1}$ and $\ell$ is a nonnegative integer.
We have, for all $\sigma_1,\sigma_2\in \Sigma^\ell$, \[ \|\partial_\nu ^\ell (\sigma_1-\sigma_2) \|_{C(\Gamma)}\le C\|\tilde{\Lambda}_{\sigma_1}- \tilde{\Lambda}_{\sigma_2}\|^{\gamma_\ell}, \] where $C=C(n,\Omega ,\kappa,\alpha ,\ell)>0$ is a constant. \end{theorem} The following lemma is obtained by iterating \cite[Lemma 3.2 in page 264]{Al90}. \begin{lemma}\label{lemma6.2} Let $\ell \ge 2$ be an integer and $f\in C^{\ell,\alpha}(\overline{\Omega_{\varrho_0}})$. Then \[ C\max_{|\beta|=\ell} \|\partial^\beta f\|_{C(\Gamma)}\le \|\partial_\nu ^{\ell-1}f\|_\ast^{\gamma_1}+\|\partial_\nu ^{\ell-2}f\|_\ast^{\gamma_1^2}+\ldots +\|f\|_\ast^{\gamma_1^\ell}, \] where $C=C(n,\Omega ,\varrho_0,\ell )>0$ is a constant and \[ \|f\|_\ast =\|\partial_\nu f\|_{C(\Gamma)}+\|f\|_{C(\Gamma)}. \] \end{lemma} In light of this lemma, Theorem \ref{theorem6.3} implies in a straightforward manner the following corollary. \begin{corollary}\label{corollary6.3} Assume that $\Omega$ is of class $C^{1,1}$ and let $\ell\ge 2$ be an integer. We have, for all $\sigma_1,\sigma_2\in \Sigma^\ell$, \[ \max_{|\beta|=\ell}\|\partial ^\beta (\sigma_1-\sigma_2) \|_{C(\Gamma)}\le C\|\tilde{\Lambda}_{\sigma_1}- \tilde{\Lambda}_{\sigma_2}\|^{\gamma_1\gamma_\ell}, \] where $C=C(n,\Omega ,\kappa,\alpha ,\ell)>0$ is a constant. \end{corollary} We mention that the case of general anisotropic conductivities can be reformulated as a geometric inverse problem. Precisely, the problem is whether it is possible to recover the metric of a compact Riemannian manifold with boundary from the corresponding Dirichlet-to-Neumann map. This problem was solved by Guillarmou and Tzou in dimension two \cite{GT}. In dimensions greater than or equal to three, the answer is positive for conformally transversally anisotropic manifolds, under the assumption that the geodesic X-ray transform on the transversal manifold is injective \cite{DKLS,DKLLS}. Recent progress toward solving the general case can be found in \cite{KLS}.
Concerning stability inequalities, we refer to the earlier work by Kang and Yun \cite{KY}, in which the authors provide a H\"older stability inequality at the boundary for anisotropic conductivities from the local Dirichlet-to-Neumann map, while Caro and Salo \cite{CS} obtained a stability inequality of logarithmic type for the conformal factor in admissible geometries. For non-uniqueness results on the determination of anisotropic conductivities from partial boundary data we refer to the recent paper by Daud\'e, Kamran and Nicoleau \cite{DKN} and the references therein. \subsection{Isotropic case with partial data}\label{subsection4.2} Throughout this section we use the same notations as in Sections \ref{section2} and \ref{section3}. Fix $\hat{x}\in \mathbb{R}^n$ outside the closure of the convex hull of $\Omega$ and denote by $\Gamma_0$ an open neighborhood of the set \[ F=\{ x\in \Gamma ;\; (x-\hat{x})\cdot \nu (x)\le 0\}. \] Pick $\chi \in C_0^\infty (\Gamma_0)$ so that $0\le \chi \le 1$ and $\chi=1$ in a neighborhood of $F$. We then introduce the following partial Dirichlet-to-Neumann map \[ \hat{\Lambda}_\sigma =\chi \Lambda_\sigma ,\quad \sigma \in \Sigma . \] We consider the following subset of $\dot{\Sigma}$, where $0<s<1/2$, \[ \hat{\Sigma}=\left\{ \sigma \in W^{2,\infty}(\mathbb{R}^n)\cap H^{2+s}(\mathbb{R}^n);\; \mbox{supp}(\sigma)\subset \overline{\Omega}, \; \|\sigma \|_{W^{2,\infty}(\mathbb{R}^n)\cap H^{2+s}(\mathbb{R}^n)}\le \kappa\right\}. \] In the sequel we use the fact that $\hat{\Sigma}$ is continuously embedded in $C^{1,1/2}(\overline{\Omega})$. Let $\sigma_1,\sigma_2\in \hat{\Sigma}$, $a=\sqrt{\sigma_1\sigma_2}$, $f=2a(q_{\sigma_1}-q_{\sigma_2})$ and $w=\ln (\sigma_1/\sigma_2)$. As we have seen in the proof of Proposition \ref{proposition4.1}, $w$ is a solution of the equation \[ \mathrm{div}(a\nabla w)=f.
\] From the results in \cite[Section 4.5 in page 168]{Chou}, there exist three constants $C=C(n,\Omega ,\kappa ,\Gamma_0,s)$, $c=c(n,\Omega ,\kappa ,\Gamma_0,s)$ and $\beta=\beta(n,\Omega ,\kappa ,\Gamma_0,s)$ so that, for any $0<\epsilon<1$, we have \begin{align*} &C\left(\|w\|_{C(\Gamma)}+\|\nabla w\|_{C(\Gamma)}\right)\le \epsilon ^\beta\|w\|_{C^{1,\alpha}(\overline{\Omega})} \\ &\hskip 3cm + e^{c/\epsilon}\left( \|w\|_{L^2(\Gamma_0)}+\|\nabla w\|_{L^2(\Gamma_0)}+\|q_{\sigma_1}-q_{\sigma_2}\|_{L^2(\Omega)}\right), \end{align*} from which we derive, similarly as in the proof of Proposition \ref{proposition4.1}, \begin{align*} &C\left(\|\sigma_1-\sigma_2\|_{C(\Gamma)}+\|\nabla (\sigma_1-\sigma_2)\|_{C(\Gamma)}\right)\le \epsilon ^\beta \\ &\hskip 1cm + e^{c/\epsilon}\left( \|\sigma_1-\sigma_2\|_{L^2(\Gamma_0)}+\|\nabla (\sigma_1-\sigma_2)\|_{L^2(\Gamma_0)}+\|q_{\sigma_1}-q_{\sigma_2}\|_{L^2(\Omega)}\right). \end{align*} Once more, Proposition \ref{proposition4.1} yields \begin{align} &C\|\sigma_1-\sigma_2\|_{H^1(\Omega)}\le \epsilon ^\beta \label{7.1} \\ &\hskip 1cm + e^{c/\epsilon}\left( \|\sigma_1-\sigma_2\|_{L^2(\Gamma_0)}+\|\nabla (\sigma_1-\sigma_2)\|_{L^2(\Gamma_0)}+\|q_{\sigma_1}-q_{\sigma_2}\|_{L^2(\Omega)}\right).\nonumber \end{align} Denote by $\ell \ge 2 $ the smallest integer satisfying \[ \frac{\ell+1}{\ell-1}\le 1+\alpha. \] According to \cite[Theorem 1.4 in page 724]{KY}, we get \begin{equation}\label{7.2} \|\sigma_1-\sigma_2\|_{C(\Gamma_0)}+\|\nabla (\sigma_1-\sigma_2)\|_{C(\Gamma_0)}\le C\|\hat{\Lambda}_{\sigma_1}-\hat{\Lambda}_{\sigma_2}\|^{2^{-\ell}}. \end{equation} Here and henceforward $C=C(n,\Omega ,\kappa ,\Gamma_0,s)>0$ is a constant. On the other hand, we have from \cite[Theorem 1.1 in page 2461]{CDR} \begin{equation}\label{7.3} \|q_{\sigma_1}-q_{\sigma_2}\|_{L^2(\Omega)}\le C\left|\ln \left|\ln \|\hat{\Lambda}_{\sigma_1}-\hat{\Lambda}_{\sigma_2}\|\right|\right|^{-2s/(3+3s)}.
\end{equation} Using inequalities \eqref{7.2} and \eqref{7.3} in \eqref{7.1}, we get, for $0<\epsilon <1$, \begin{equation}\label{7.4} C\|\sigma_1-\sigma_2\|_{H^1(\Omega)}\le \epsilon ^\beta+e^{c/\epsilon}\left|\ln \left|\ln \|\hat{\Lambda}_{\sigma_1}-\hat{\Lambda}_{\sigma_2}\|\right|\right|^{-2s/(3+3s)}. \end{equation} Define $\Psi_\beta :[0,\infty )\rightarrow [0,\infty)$, $\beta >0$, as follows \[ \Psi_\beta (0)=0\quad \mbox{and}\quad \Psi_\beta (\rho)=|\ln |\ln |\ln \rho|||^{-\beta}+\rho,\quad \rho >0. \] We obtain, by minimizing the right hand side of \eqref{7.4} with respect to $\epsilon$, the following result. \begin{theorem}\label{theorem7.1} Suppose that $\Omega$ is of class $C^{1,1}$. We have, for any $\sigma_1,\sigma_2 \in \hat{\Sigma}$, \[ \|\sigma_1-\sigma_2\|_{H^1(\Omega)}\le C\Psi_\beta \left(\|\hat{\Lambda}_{\sigma_1}-\hat{\Lambda}_{\sigma_2}\|\right), \] where $C=C(n,\Omega ,\kappa ,\Gamma_0,s)>0$ and $\beta=\beta(n,\Omega ,\kappa ,\Gamma_0,s)>0$ are constants. \end{theorem} \section{Stability at the boundary using oscillating solutions}\label{section5} The following lemma is essentially due to Kohn and Vogelius \cite{KV84}. The version stated here is borrowed from \cite{Ka} (see Lemma 4.1 in page 142 and its proof). We suppose in this section that $\Omega$ is of class $C^{1,1}$. \begin{lemma}\label{lemma-os.1} Pick $x_0\in \Gamma$. Then there exists a sequence $(\psi_k)$ in $H^{3/2}(\Gamma)\cap C^{1,1}(\Gamma)$ satisfying, for each $k\ge 1$, the following properties: \\ $\mathrm{(i)}\; \mathrm{supp}(\psi_k)\subset B(x_0,\mathfrak{c}k^{-1})$, \\ $\mathrm{(ii)}\;\|\psi_k\|_{H^{1/2}(\Gamma)}=1$, \\ $\mathrm{(iii)}\;C^{-1}k^{-(1/2+s)}\le \|\psi_k\|_{H^{-s}(\Gamma)}\le Ck^{-(1/2+s)}$, $-1\le s\le 1$, \\ where $\mathfrak{c}=\mathfrak{c}(n,\Omega)$ and $C=C(n,\Omega ,s)\ge 1$ are constants.
\end{lemma} For notational convenience, for $\sigma \in \Sigma$ and $k\ge 1$, we set $u_\sigma ^k=u_\sigma (\psi_k)\in H^2(\Omega)$, with $\psi_k$ as in Lemma \ref{lemma-os.1}. \begin{proposition}\label{proposition-os.1} Let $0<\rho <\mathrm{diam}(\Omega)$. There exists a constant $C=C(n,\Omega ,\kappa)>0$ so that, for any $x_0\in \Gamma$, we have \begin{equation}\label{prop-os.1} \|u_\sigma^k\|_{H^1(\Omega\setminus \overline{B}_\rho)}\le C\rho^{-1}k^{-1},\quad k\ge 2\mathfrak{c}/\rho, \end{equation} where $B_\rho=B(x_0,\rho)$ and $\mathfrak{c}$ is as in Lemma \ref{lemma-os.1}. \end{proposition} \begin{proof} Pick $0<\rho <\mathrm{diam}(\Omega)$ and $x_0\in \Gamma$. Fix $\phi \in C_0^\infty (B_\rho)$ so that $0\le \phi \le 1$, $\phi =1$ in a neighborhood of $B_{\rho/2}$ and $|\nabla \phi|\le c\rho^{-1}$, where $c$ is a universal constant. Let then $\varphi=1-\phi$ and $v^k=\varphi u_\sigma^k$. Furthermore, according to Lemma \ref{lemma-os.1}, we have $\mathrm{supp}(\psi_k)\subset B_{\rho/2}$, for each $k\ge k_\rho=2\mathfrak{c}/\rho$. In consequence, $v^k\in H_0^1(\Omega)$, for each $k\ge k_\rho$. We assume in the rest of this proof that $k\ge k_\rho$. We have \begin{align*} \mathrm{div}(\sigma \nabla v^k)&=\mathrm{div}(\sigma \varphi \nabla u_\sigma^k)+\mathrm{div}(\sigma u_\sigma^k\nabla \varphi) \\ & =\sigma \nabla \varphi\cdot \nabla u_\sigma^k+\mathrm{div}(\sigma u_\sigma^k\nabla \varphi). \end{align*} Green's formula then yields \begin{align*} \int_\Omega \sigma |\nabla v^k|^2dx&= -\int_\Omega \sigma \varphi u_\sigma^k \nabla \varphi\cdot \nabla u_\sigma^kdx +\int_\Omega \sigma u_\sigma^k\nabla \varphi \cdot \nabla (\varphi u_\sigma^k)dx \\ &=\int_\Omega \sigma (u_\sigma^k)^2|\nabla \varphi|^2dx=\int_\Omega \sigma (u_\sigma^k)^2|\nabla \phi|^2dx. \end{align*} Whence \begin{equation}\label{os1} \int_{\Omega\setminus \overline{B}_\rho} \sigma |\nabla u_\sigma^k|^2dx\le \kappa ^2\int_\Omega u_\sigma^k (u_\sigma^k |\nabla \phi|^2)dx.
\end{equation} We write \[ \int_\Omega u_\sigma^k (u_\sigma^k |\nabla \phi|^2)dx=-\int_\Omega u_\sigma^k \mathrm{div}(\sigma \nabla u) dx, \] where $u\in H_0^1(\Omega )\cap H^2(\Omega)$ is the unique solution of the equation \[ -\mathrm{div}(\sigma \nabla u)= u_\sigma^k |\nabla \phi|^2 \quad \mathrm{in}\quad \Omega . \] Noting that $\mathrm{div}(\sigma \nabla u_\sigma^k)=0$ in $\Omega$, we obtain by applying Green's formula \[ -\int_\Omega u_\sigma^k \mathrm{div}(\sigma \nabla u) dx=-\int_\Gamma \psi_k\sigma \partial_\nu udS(x), \] from which we derive \begin{equation}\label{os2} \int_\Omega u_\sigma^k (u_\sigma^k |\nabla \phi|^2)dx\le \kappa \|\partial_\nu u\|_{H^{1/2}(\Gamma)}\|\psi_k\|_{H^{-1/2}(\Gamma)}. \end{equation} Now, the usual $H^2$ a priori estimate (e.g. \cite[Theorem 8.53 in page 326]{RR} and its proof) together with the continuity of the trace operator $w\in H^2(\Omega)\mapsto \partial_\nu w\in H^{1/2}(\Gamma)$ give \[ \|\partial_\nu u\|_{H^{1/2}(\Gamma)} \le C\|u_\sigma^k |\nabla \phi|^2\|_{L^2(\Omega)}. \] Here and until the end of the proof $C=C(n,\Omega,\kappa)>0$ denotes a generic constant. Thus \begin{equation}\label{os3} \|\partial_\nu u\|_{H^{1/2}(\Gamma)} \le C\rho^{-1}\|u_\sigma^k |\nabla \phi|\|_{L^2(\Omega)}. \end{equation} In light of the two-sided inequality in Lemma \ref{lemma-os.1}, we get by combining \eqref{os2} and \eqref{os3} \[ \|u_\sigma^k |\nabla \phi|\|_{L^2(\Omega)}\le C\rho^{-1}k^{-1}. \] This in \eqref{os1} gives \begin{equation}\label{os4} \|\nabla u_\sigma^k\|_{L^2(\Omega\setminus \overline{B}_\rho)}\le C\rho^{-1}k^{-1}. \end{equation} As $v^k\in H_0^1(\Omega)$, we have, according to Poincar\'e's inequality, \[ \int_\Omega |v^k|^2dx\le c_\Omega \int_\Omega |\nabla v^k|^2dx. \] This and the preceding calculations yield \begin{equation}\label{os5} \| u_\sigma^k\|_{L^2(\Omega\setminus \overline{B}_\rho)}\le C\rho^{-1}k^{-1},\quad k\ge k_\rho.
\end{equation} We obtain the expected inequality by putting together \eqref{os4} and \eqref{os5}. \end{proof} \begin{lemma}\label{lemma-os.2} Let $0<\rho <\mathrm{diam}(\Omega)$. There exists a constant $C=C(n,\Omega ,\kappa)>0$ so that, for any $x_0\in \Gamma$, we have \begin{equation}\label{lem-os.2} C\le \|\nabla u_\sigma ^k\|_{L^2(B_\rho\cap \Omega)}+\rho^{-1}k^{-1}+k^{-1}, \quad k\ge 2\mathfrak{c}/\rho, \end{equation} where $B_\rho=B(x_0,\rho)$ and $\mathfrak{c}$ is as in Lemma \ref{lemma-os.1}. \end{lemma} \begin{proof} In this proof $C=C(n,\Omega ,\kappa)>0$ denotes a generic constant. According to Lemma \ref{lemma-a1} in Appendix \ref{appendixB}, the map \[ w\in H^1(\Omega) \mapsto \|\nabla w\|_{L^2(\Omega)}+\|w\|_{H^{-1/2}(\Gamma)} \] defines a norm, which is equivalent to the usual norm of $H^{1}(\Omega)$. Hence \[ C\|u_\sigma ^k\|_{H^{1/2}(\Gamma)}\le \|\nabla u_\sigma ^k\|_{L^2(\Omega)}+\|u_\sigma^k\|_{H^{-1/2}(\Gamma)}= \|\nabla u_\sigma ^k\|_{L^2(\Omega)}+\|\psi_k\|_{H^{-1/2}(\Gamma)}. \] Using again the two-sided inequality in Lemma \ref{lemma-os.1}, both for $s=-1/2$ and $s=1/2$, we get \[ C\le \|\nabla u_\sigma ^k\|_{L^2(\Omega)}+k^{-1}. \] This and \eqref{prop-os.1} imply \[ C\le \|\nabla u_\sigma ^k\|_{L^2(B_\rho\cap \Omega)}+\rho^{-1}k^{-1}+k^{-1}, \quad k\ge 2\mathfrak{c}/\rho, \] as expected. \end{proof} Define, where $\sigma \in \Sigma$, \[ Q_\sigma (u)=\int_\Omega \sigma |\nabla u|^2dx,\quad u\in H^1(\Omega ), \] and \[ K(f)=\{u\in H^1(\Omega );\; u_{|\Gamma}=f\},\quad f\in H^{1/2}(\Gamma). \] We recall that \[ Q_\sigma( u_\sigma (f))=\min\{Q_\sigma(u);\; u\in K(f)\} \] (e.g. \cite[Section 3 in page 135]{Ka}). Set, for $\sigma_j\in \Sigma$, $u_j^k=u_{\sigma_j}^k$ and $Q_j=Q_{\sigma_j}$, $j=1,2$. Let $x_0\in \Gamma$ so that $|\sigma (x_0)|=\|\sigma\|_{C(\Gamma)}$, where $\sigma=\sigma_1-\sigma_2$. Without loss of generality we may assume that $|\sigma (x_0)|=\sigma (x_0)$.
As we have seen above \[ \|\sigma\|_{C(\Gamma)}\le \sigma (x)+2\kappa |x-x_0|^\alpha,\quad x\in \Omega, \] which we rewrite in the following form \[ \|\sigma\|_{C(\Gamma)}+\sigma_2 (x)\le \sigma_1 (x)+2\kappa |x-x_0|^\alpha ,\quad x\in \Omega. \] Hence, where $0<\rho<\mathrm{diam}(\Omega)$, \begin{align*} \|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}\|\sigma\|_{C(\Gamma)}&+\int_{B(x_0,\rho)\cap \Omega}\sigma_2|\nabla u_1^k|^2dx \\ &\le \int_{B(x_0,\rho)\cap \Omega}\sigma_1|\nabla u_1^k|^2dx+\rho ^\alpha \|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}. \end{align*} That is, we have \begin{align*} \|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}\|\sigma\|_{C(\Gamma)}&+Q_2 (u_1^k)-\int_{\Omega\setminus B(x_0,\rho)}\sigma_2|\nabla u_1^k|^2dx \\ &\le Q_1(u_1^k)-\int_{\Omega\setminus B(x_0,\rho)}\sigma_1|\nabla u_1^k|^2dx+\rho ^\alpha \|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}. \end{align*} But $Q_2 (u_2^k)\le Q_2 (u_1^k)$. Whence \begin{align*} \|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}\|\sigma\|_{C(\Gamma)}&+Q_2 (u_2^k)-\int_{\Omega\setminus B(x_0,\rho)}\sigma_2|\nabla u_1^k|^2dx \\ &\le Q_1(u_1^k)-\int_{\Omega\setminus B(x_0,\rho)}\sigma_1|\nabla u_1^k|^2dx+\rho ^\alpha\|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}. \end{align*} On the other hand we know that \[ Q_j (u_j^k)=\langle \Lambda_j(\psi_k),\psi_k\rangle_{1/2},\quad j=1,2. \] Note that this identity yields \[ \kappa^{-1}\|\nabla u_j^k\|_{L^2(\Omega)}^2 \le \|\Lambda_j\|\le C,\quad j=1,2. \] In consequence, \begin{align*} \|\nabla u_1^k&\|^2_{B(x_0,\rho)\cap \Omega}\|\sigma\|_{C(\Gamma)}-\int_{\Omega\setminus B(x_0,\rho)}\sigma_2|\nabla u_1^k|^2dx \\ &\le \langle (\Lambda_1-\Lambda_2)(\psi_k),\psi_k\rangle_{1/2} -\int_{\Omega\setminus B(x_0,\rho)}\sigma_1|\nabla u_1^k|^2dx+\rho ^\alpha\|\nabla u_1^k\|^2_{B(x_0,\rho)\cap \Omega}.
\end{align*} In light of Proposition \ref{proposition-os.1} and Lemma \ref{lemma-os.2}, we find \[ C\|\sigma\|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|+\rho^{-2}k^{-2}+k^{-2}+\rho^\alpha, \quad k\ge 2\mathfrak{c}/\rho. \] Letting first $k$ tend to $\infty$ and then $\rho$ tend to $0$, we obtain \[ C\|\sigma\|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|. \] In other words we proved \eqref{thm-i1.1} of Theorem \ref{theorem-i1}. Next, we prove \eqref{thm-i1.2} of Theorem \ref{theorem-i1} in which the exponent $\alpha/(\alpha+1)$ is replaced by $\alpha/[2(1+\alpha)]$. To this end, we denote, where $\sigma \in \Sigma$, by $\lambda_\sigma^1=\lambda_\sigma^1(n,\Omega,\sigma)$ the first eigenvalue of the unbounded operator $-\mathrm{div}(\sigma \nabla \cdot)$ with domain $H_0^1(\Omega)\cap H^2(\Omega)$. We can associate to this eigenvalue a unique eigenfunction $\varphi_\sigma^1$ that satisfies \[ 0<\varphi_\sigma ^1\quad \mathrm{in}\; \Omega \quad \mathrm{and}\quad \|\varphi_\sigma^1\|_{L^2(\Omega)}=1. \] \begin{lemma}\label{lemma-os.3} There exist $C=C(n,\Omega ,\kappa )>1$, $\varrho_0=\varrho_0(n,\Omega ,\kappa )\le \dot{\varrho}$ and $0<\beta =\beta (n)<1$ so that, for any $\sigma \in \Sigma$, we have $\varphi_\sigma^1\in C^2(\Omega)\cap C^{1,\beta}(\overline{\Omega})$ and \[ C^{-1}\mathrm{dist}(x,\Gamma)\le \varphi_\sigma^1(x)\le C\mathrm{dist}(x,\Gamma),\quad x\in \Omega_{\varrho_0} . \] \end{lemma} \begin{proof} In this proof $C=C(n,\Omega ,\kappa )>1$ denotes a generic constant. By modifying slightly the proof of \cite[Theorem 2.2]{CTX}, we find $0<\beta =\beta (n)<1$ so that $\varphi_\sigma^1\in H^2(\Omega)\cap C^{1,\beta}(\overline{\Omega})$ and \begin{equation}\label{os6} \|\varphi_\sigma^1\|_{H^2(\Omega)\cap C^{1,\beta}(\overline{\Omega})}\le C. \end{equation} Fix $y\in \Omega$ and pick $r>0$ so that $B(y,2r)\Subset \Omega$. Let $\chi \in C_0^\infty(B(y,2r))$ satisfying $0\le \chi \le 1$ and $\chi=1$ in a neighborhood of $B(y,r)$.
Then a straightforward computation shows that \[ \mathrm{div}(\sigma \nabla (\chi \varphi_\sigma ^1))=-\lambda_\sigma ^1\chi \varphi_\sigma ^1+\sigma \nabla \chi \cdot \nabla \varphi_\sigma^1+\mathrm{div}(\sigma \varphi_\sigma^1\nabla \chi)\in C^\beta (\overline{B}(y,2r)). \] We get, by applying the usual H\"older regularity theory, that $\chi \varphi_\sigma ^1\in C^{2,\min(\alpha ,\beta)}(\overline{B}(y,2r))$ (e.g. \cite[Theorem 6.8 in page 100]{GiT}). We deduce that we have in particular $\varphi_\sigma^1\in C^2(B(y,r))$. Since $y\in \Omega$ was fixed arbitrarily, we conclude that $\varphi_\sigma^1\in C^2(\Omega)$. Now as $\varphi_\sigma^1\in C^2(\Omega)\cap C^{1,\beta}(\overline{\Omega})$ we can apply \cite[Theorem 1.1]{CTX} in order to get \begin{equation}\label{os7} \|\varphi_\sigma^1\|_{L^1(\Omega)}\le -C\partial_\nu \varphi_\sigma^1(x),\quad x\in \Gamma. \end{equation} On the other hand, we have, in light of \eqref{os6}, \[ 1=\|\varphi_\sigma ^1\|_{L^2(\Omega)}\le C\|\varphi_\sigma ^1\|_{L^1(\Omega )}. \] This, together with \eqref{os6} and \eqref{os7}, implies \begin{equation}\label{os8} C^{-1}\le -\partial_\nu \varphi_\sigma^1(x)\le C,\quad x\in \Gamma . \end{equation} We have, for any $x\in \Omega_{\dot{\varrho}}$, \begin{align*} \varphi_\sigma^1(x)&=\varphi_\sigma^1(x)-\varphi_\sigma^1(\tilde{x})=-|x-\tilde{x}|\int_0^1\nabla \varphi_\sigma^1(\tilde{x}-t|x-\tilde{x}|\nu (\tilde{x}))\cdot \nu(\tilde{x})dt \\ &=-|x-\tilde{x}|\partial_\nu \varphi_\sigma^1(\tilde{x}) \\ &\qquad +|x-\tilde{x}|\int_0^1[\nabla \varphi_\sigma^1(\tilde{x})- \nabla \varphi_\sigma^1(\tilde{x}-t|x-\tilde{x}|\nu (\tilde{x}))]\cdot \nu(\tilde{x})dt, \end{align*} where $\tilde{x}=\mathfrak{p}(x)$. In light of \eqref{os6}, we obtain \[ |x-\tilde{x}|(-\partial_\nu \varphi_\sigma^1(\tilde{x})-C|x-\tilde{x}|^\beta)\le \varphi_\sigma^1(x)\le |x-\tilde{x}|(-\partial_\nu \varphi_\sigma^1(\tilde{x})+C|x-\tilde{x}|^\beta).
\] We derive, by using \eqref{os8}, that there exists $\varrho_0=\varrho_0(n,\Omega ,\kappa )\le \dot{\varrho}$ so that \[ C^{-1}|x-\tilde{x}|\le \varphi_\sigma^1(x)\le C|x-\tilde{x}|,\quad x\in \Omega_{\varrho_0}. \] In other words, we proved that \[ C^{-1}\mathrm{dist}(x,\Gamma)\le \varphi_\sigma^1(x)\le C\mathrm{dist}(x,\Gamma),\quad x\in \Omega_{\varrho_0}, \] as expected. \end{proof} In the sequel, $\varrho_0$ is as in Lemma \ref{lemma-os.3}. \begin{lemma}\label{lemma-os.4} Let $x_0\in \Gamma$ and $0<\rho <\varrho_0$. There exist two constants $C_j=C_j(n,\Omega ,\kappa)>0$, $j=0,1$, so that we have \begin{equation}\label{lem-os.4} \int_{\Omega\cap B_\rho} \mathrm{dist}(x,\Gamma)|\nabla u_\sigma^k|^2dx\ge C_0k^{-1} -C_1(k^{-2}+\rho^{-2}k^{-2}),\quad k\ge 2\mathfrak{c}/\rho, \end{equation} where $B_\rho=B(x_0,\rho)$ and $\mathfrak{c}$ is as in Lemma \ref{lemma-os.1}. \end{lemma} \begin{proof} Let $x_0\in \Gamma$ and $0<\rho <\varrho_0$ and set $k_\rho=2\mathfrak{c}/\rho$, where $\mathfrak{c}$ is as in Lemma \ref{lemma-os.1}. We have seen above that $\mathrm{supp}(\psi_k)\subset B(x_0,\rho)$ for each $k\ge k_\rho$. Pick $\sigma \in \Sigma$ and set $u^k=u_\sigma ^k$, $\lambda^1=\lambda_\sigma^1$ and $\varphi^1=\varphi_\sigma^1$. In the rest of this proof we assume that $k\ge k_\rho$. Also, $C=C(n,\Omega ,\kappa)>0$ and $C_j=C_j(n,\Omega ,\kappa)>0$, $j=0,1$, denote generic constants. Taking into account that $\varphi^1u^k\in H_0^1(\Omega)$, we obtain \[ 0=-\int_\Omega \mathrm{div}(\sigma \nabla u^k)\varphi^1u^kdx=\int_\Omega \sigma\varphi^1 |\nabla u^k|^2dx+\int_\Omega \sigma u^k\nabla u^k\cdot \nabla \varphi^1dx. \] But \begin{align*} \int_\Omega \sigma u^k\nabla u^k\cdot \nabla \varphi^1dx&=\frac{1}{2}\int_\Omega \sigma \nabla (u^k)^2\cdot \nabla \varphi^1dx \\ &=\frac{\lambda^1}{2}\int_\Omega \varphi^1(u^k)^2dx+\frac{1}{2}\int_\Gamma \sigma(\psi_k)^2\partial_\nu \varphi^1dS(x).
\end{align*} Hence \begin{equation}\label{os9} \int_\Omega \sigma\varphi^1 |\nabla u^k|^2dx =-\frac{\lambda^1}{2}\int_\Omega \varphi^1(u^k)^2dx-\frac{1}{2}\int_\Gamma\sigma (\psi_k)^2\partial_\nu \varphi^1dS(x). \end{equation} Using that $\|\psi_k\|_{L^2(\Gamma)}\ge Ck^{-1/2}$ and $-\partial_\nu \varphi^1\ge C$, we get \[ -\int_\Gamma\sigma (\psi_k)^2\partial_\nu \varphi^1dS(x)\ge Ck^{-1}. \] Thus we have, in light of \eqref{os9}, \begin{equation}\label{os10} \int_\Omega \sigma\varphi^1 |\nabla u^k|^2dx \ge -\frac{\lambda^1}{2}\int_\Omega (\varphi^1u^k)u^kdx+Ck^{-1}. \end{equation} Denote by $u\in H_0^1(\Omega)\cap H^2(\Omega)$ the solution of the equation \[ -\mathrm{div}(\sigma \nabla u)=\varphi^1u^k\quad \mathrm{in}\; \Omega. \] Then \begin{align*} \|(\varphi ^1)^{1/2}u^k\|_{L^2(\Omega)}^2=\int_\Omega (\varphi^1u^k)u^kdx&=-\int_\Omega \mathrm{div}(\sigma \nabla u)u^kdx \\ &=-\int_\Gamma \sigma \partial_\nu u\, u^kdS(x) \\ &\le C\|\partial_\nu u\|_{H^{1/2}(\Gamma)}\|u^k\|_{H^{-1/2}(\Gamma)} \\ &\le Ck^{-1}\|\partial_\nu u \|_{H^{1/2}(\Gamma)}. \end{align*} The usual $H^2$ a priori estimate yields $\|\partial_\nu u \|_{H^{1/2}(\Gamma)}\le C\|\varphi ^1u^k\|_{L^2(\Omega)}$ (e.g. \cite[Theorem 8.53 in page 326]{RR} and its proof). Whence \[ \|\partial_\nu u \|_{H^{1/2}(\Gamma)}\le C\|(\varphi ^1)^{1/2}u^k\|_{L^2(\Omega)}. \] Therefore \[ \int_\Omega (\varphi^1u^k)u^kdx\le Ck^{-2}. \] This in \eqref{os10} gives \[ \int_\Omega \varphi^1 |\nabla u^k|^2dx\ge C_0k^{-1} -C_1k^{-2}. \] We also have from \eqref{prop-os.1} \[ \|u_\sigma^k\|_{H^1(\Omega\setminus \overline{B}_\rho)}\le C\rho^{-1}k^{-1}. \] In consequence, \[ \int_{B_\rho\cap \Omega} \varphi^1 |\nabla u^k|^2dx\ge C_0k^{-1} -C_1(k^{-2}+\rho^{-2}k^{-2}). \] We end up getting the expected inequality by applying Lemma \ref{lemma-os.3}. \end{proof} Assume that \[ \|\partial_\nu \sigma\|_{C(\Gamma)}:=\epsilon >0. \] Let $x_0\in \Gamma$ so that $|\partial_\nu\sigma (x_0)|= \|\partial_\nu \sigma\|_{C(\Gamma)}$.
We have, for $x\in \Omega\cap B(x_0,\rho)$, \[ |\nabla \sigma (x)\cdot\nu (\tilde{x})|\ge |\nabla \sigma (x_0)\cdot \nu(\tilde{x})|-2\kappa |x-x_0|^\alpha, \] where we set $\tilde{x}=\mathfrak{p}(x)$. But \[ |\nabla \sigma (x_0)\cdot \nu(\tilde{x})| \ge |\partial_\nu \sigma (x_0)|-2\kappa |\nu(\tilde{x})-\nu(x_0)|. \] On the other hand, as $\Omega$ is $C^{1,1}$, there exists $c=c(n,\Omega,\alpha)>0$ so that \[ |\nu(\tilde{x})-\nu(x_0)|\le c|x_0-\tilde{x}|^\alpha\le 2c|x-x_0|^\alpha. \] Whence \[ |\nabla \sigma (x)\cdot\nu (\tilde{x})|\ge |\partial_\nu \sigma (x_0)|-2\kappa(1+2c) |x-x_0|^\alpha. \] If $\rho_0=\min ([\epsilon/(4\kappa(1+2c) )]^{1/\alpha},\varrho_0)$, we obtain, for each $0<\rho\le \rho_0$, \[ |\nabla \sigma (x)\cdot\nu (\tilde{x})|\ge |\partial_\nu \sigma (x_0)|/2,\quad x\in B(x_0,\rho). \] Let $x\in B(x_0,\rho/2)\cap \Omega$. Then, by Proposition \ref{gproposition}, we have \[ x_t=\tilde{x}-t|x-\tilde{x}|\nu(\tilde{x})\in \Omega_{\dot{\varrho}}\cap B(x_0,\rho),\quad 0< t\le 1, \] and \[ \tilde{x}_t=\mathfrak{p}(x_t)=\tilde{x}. \] In consequence, \[ |\nabla \sigma (x_t)\cdot\nu (\tilde{x})|\ge |\partial_\nu \sigma (x_0)|/2. \] In light of the mean value theorem, there exists $0<t_0<1$ so that \[ |\sigma (x)-\sigma (\tilde{x})|=|\nabla \sigma (x_{t_0})\cdot \nu(\tilde{x})||x-\tilde{x}|. \] Whence \[ |\sigma (x)-\sigma (\tilde{x})|=|\nabla \sigma (x_{t_0})\cdot \nu(\tilde{x})||x-\tilde{x}|\ge \mathrm{dist}(x,\Gamma)\|\partial_\nu \sigma\|_{C(\Gamma)}/2. \] Without loss of generality, we assume \[ \mathrm{dist}(x,\Gamma)\|\partial_\nu \sigma\|_{C(\Gamma)}/2\le \sigma (x)-\sigma (\tilde{x}), \quad x\in B(x_0,\rho/2). \] We can proceed similarly as above in order to get \[ Ck^{-1}\|\partial_\nu \sigma \|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|+k^{-2}+\rho^{-2}k^{-2},\quad k\ge 2\mathfrak{c}/\rho,\; 0<\rho \le \rho_0, \] where we used that $\|\sigma\|_{C(\Gamma)}\le C\|\Lambda_1-\Lambda_2\|$.
That is, we have \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le k\|\Lambda_1-\Lambda_2\|+(1+\rho^{-2})k^{-1},\quad k\ge 2\mathfrak{c}/\rho,\; 0<\rho \le \rho_0. \] In this inequality we take $k$ of the form $k=[t+1]$ (the integer part of $t+1$), with $t\in \mathbb{R}$ satisfying $t\ge 2$. We find, by taking into account that $t\le k\le 2t$, \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le t\|\Lambda_1-\Lambda_2\|+(1+\rho^{-2})t^{-1},\quad t\ge 2\mathfrak{c}/\rho,\; 0<\rho \le \rho_0. \] Note that if $\|\Lambda_1-\Lambda_2\|=0$ then, passing to the limit as $t\rightarrow \infty$ in \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le (1+\rho^{-2})t^{-1},\quad t\ge 2\mathfrak{c}/\rho,\; 0<\rho \le \rho_0, \] we would get $\|\partial_\nu \sigma \|_{C(\Gamma)}=0$, which is impossible since we assumed that $\|\partial_\nu \sigma \|_{C(\Gamma)}=\epsilon>0$. We then choose $t$ in such a way that \[ t\|\Lambda_1-\Lambda_2\|=(1+\rho^{-2})t^{-1}. \] That is \[ t^2=(1+\rho^{-2})\|\Lambda_1-\Lambda_2\|^{-1}. \] This choice is possible whenever \[ (1+\rho^{-2})\|\Lambda_1-\Lambda_2\|^{-1}\ge 4\mathfrak{c}^2\rho^{-2}, \] which is equivalent to the following inequality \[ (1+\rho^2)\ge 4\mathfrak{c}^2\|\Lambda_1-\Lambda_2\|. \] This condition is satisfied for instance if \[ 4\mathfrak{c}^2\|\Lambda_1-\Lambda_2\|\le 1. \] Under this condition we obtain \begin{equation}\label{nd1} C\|\partial_\nu \sigma \|_{C(\Gamma)}\le (1+\rho^{-1})\|\Lambda_1-\Lambda_2\|^{1/2}. \end{equation} When $\rho_0=\varrho_0$, the last inequality yields in a straightforward manner, by taking $\rho=\varrho_0$, that \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|^{1/2}. \] Otherwise, we have $\rho_0=[\epsilon/(4\kappa(1+2c) )]^{1/\alpha}=\tilde{c}\epsilon^{1/\alpha}$. We get, by taking $\rho=\rho_0$ in \eqref{nd1}, \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le (1+\|\partial_\nu \sigma \|_{C(\Gamma)}^{-1/\alpha})\|\Lambda_1-\Lambda_2\|^{1/2}.
\] Hence \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}^{1+1/\alpha}\le \|\Lambda_1-\Lambda_2\|^{1/2}. \] In other words, we have \[ C\|\partial_\nu \sigma \|_{C(\Gamma)}\le \|\Lambda_1-\Lambda_2\|^{\alpha/[2(1+\alpha)]}. \] This estimate is obviously satisfied if $4\mathfrak{c}^2\|\Lambda_1-\Lambda_2\|\ge 1$. Hence the expected inequality follows. This section is largely inspired by \cite{Ka}.
\section{Introduction} Structurally or magnetically disordered glassy systems show special characteristics like memory effects or replica symmetry breaking as a result of a complex energy landscape. A large number of local minima or metastable states exist and can trap the system at low temperatures. Therefore, these local energy minima have been of interest for some time. For mean-field spin glasses their distribution as a function of energy has been determined as early as 1981.\cite{Bray_Moore} The one-dimensional system with short-range interactions can also be treated analytically\cite{Li} and some general properties are known for higher-dimensional spin glasses.\cite{Newman_Stein} A completely analytical treatment for the cubic lattice is, however, not available and the numerical investigation of metastable states is even more demanding than standard spin-glass simulations. Previous studies were restricted to exact enumeration of small systems \cite{Burda_1,Burda_2,Waclaw_Burda} or quenches from random configurations.\cite{BaityJesi_Parisi} In the latter work as well as in studies on structural glasses\cite{Heuer} the local minima are seen in relation to the equilibrium configurations from which they are derived by steepest descent (greedy algorithm) and are referred to as inherent structures. Recently we introduced a technique \cite{hoga} for the Edwards-Anderson model that efficiently derives inherent structures from a sequence of spin configurations by means of a dynamic greedy algorithm. In this study we extend this approach and propose a method that samples all metastable states with equal probability. We use this and a more traditional method to measure the distributions of metastable states as a function of energy for the Sherrington-Kirkpatrick (SK)\cite{SK} and the three-dimensional Edwards-Anderson (EA)\cite{EA} model.
The rest of the paper is organized as follows: We discuss the models in section 2 and briefly review the analytical solution of Bray and Moore in section 3. In section 4 we introduce our methods and section 5 contains the results. \section{Model} We consider the Ising Hamiltonian \begin{equation} \mathcal{H}=-\sum\limits_{ \langle ij \rangle } J_{ij}s_is_j,\qquad s_i\in\{-1,1\}, \end{equation} where the sum runs over all pairs of spins $s_i$ interacting via bonds $J_{ij}$. The latter are randomly chosen according to a Gaussian distribution: \begin{equation} P(J_{ij})=\frac1{\sqrt{2\pi J^2}}e^{-J_{ij}^2/2J^2}. \end{equation} In the case of the SK model every spin interacts with every other, while for the 3d EA model spins are placed on the sites of a cubic lattice and only adjacent ones contribute to the energy. If we consider single-site energies, i.e., the sum of all terms to which an individual spin $s_k$ contributes, \begin{equation} e_k=-\sum\limits_{\langle ij\rangle} J_{ij}s_is_j(\delta_{ik}+\delta_{jk}) \end{equation} one can express the Hamiltonian as \begin{equation} \mathcal{H}=\frac12\sum_ke_k=E \end{equation} and the energy change associated with a single spin flip \begin{equation} \mathbf{S}=(s_1,\dots,s_N)\rightarrow \mathbf{S'}=(s_1,\dots,s_{k-1},-s_k,s_{k+1},\dots), \end{equation} \begin{equation} e_k\rightarrow e'_k=-e_k \end{equation} as a function of it, \begin{equation} \mathcal{H}(\mathbf{S'})-\mathcal{H}(\mathbf{S})=-2e_k. \end{equation} Hence, a metastable state, or more precisely a single-flip stable state, i.e., a spin configuration for which every single spin flip causes an increase in energy, is characterized by the condition $e_k<0$ for all $k$. It is the distribution $\Omega(E)$ of these metastable states that we are interested in. \section{Bray and Moore's solution} In 1981 Bray and Moore \cite{Bray_Moore} derived an analytic expression for the distribution of metastable states for the SK model.
They used the dimensionless normalized energy \begin{equation} \varepsilon=\frac{E}{NJz^\frac12} \end{equation} where $N$ is the total number of spins and $z$ the coordination number of the lattice, i.e., $z=N-1$ for the SK model. For the limit \begin{equation} g_0(\varepsilon) \coloneqq \lim\limits_{N \rightarrow \infty}N^{-1}\langle\ln\Omega(\varepsilon)\rangle_J, \end{equation} where $\langle\dots\rangle_J$ denotes the disorder average, they obtained \begin{equation} g_0(\varepsilon)=\varepsilon^2+2\varepsilon\tau-\ln\left[\sqrt{\pi/2}(-2\varepsilon-\tau)\right], \end{equation} where the function $\tau=\tau(\varepsilon)$ is implicitly defined by \begin{equation} 0=2\varepsilon+\tau+\Phi'(\tau)/\Phi(\tau) \end{equation} with \begin{equation} \Phi(x)=\frac1{\sqrt{2\pi}}\int\limits_{-\infty}^x e^{-\frac{y^2}2}dy. \end{equation} However, they state that this solution is only valid for $\varepsilon > \varepsilon_c \approx -0.672$. Nonetheless, it follows from the position of the maximum of $g_0(\varepsilon)$ shown in Fig.~\ref{fig:BMg} that for large systems the number of metastable states is given by \begin{equation} \langle\ln N_S\rangle_J/N=0.199228 \end{equation} and that the average energy of a local minimum becomes \begin{equation} \varepsilon_{\rm av}=-0.5061. \end{equation} Bray and Moore then proceeded with an expansion in $1/z$ and obtained an approximation for non-mean-field spin glasses: \begin{equation} N^{-1}\langle\ln\Omega(\varepsilon)\rangle=g_0(\varepsilon)+z^{-1}g_1(\varepsilon)+O(z^{-2}) \end{equation} with \begin{equation} g_1(\varepsilon)=-\varepsilon^2(\tau^2-2\varepsilon^2), \end{equation} also displayed in Fig.~\ref{fig:BMg}. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./n_min_analytic.eps} \caption{\small{\label{fig:BMg} \emph{ The functions $g_0$ and $g_1$.}}} \end{center} \end{figure} \section{Methods} We apply two methods in order to sample local minima.
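The Bray--Moore values quoted in the preceding section are easy to cross-check numerically. The following sketch (plain Python, standard library only; the bisection bracket $[-2,3]$ and the scanned energy window are our own assumptions, chosen to cover the region around the maximum) solves the implicit equation for $\tau(\varepsilon)$ and locates the maximum of $g_0$:

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density, i.e., Phi'(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tau(eps):
    """Solve 0 = 2*eps + tau + phi(tau)/Phi(tau) by bisection.

    The bracket [-2, 3] is an assumption that holds for the
    energies scanned below."""
    f = lambda t: 2.0 * eps + t + phi(t) / Phi(t)
    lo, hi = -2.0, 3.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def g0(eps):
    t = tau(eps)
    # at the solution of the implicit equation, -2*eps - tau = phi(tau)/Phi(tau) > 0
    return eps * eps + 2.0 * eps * t - math.log(math.sqrt(math.pi / 2.0) * (phi(t) / Phi(t)))

# locate the maximum of g0 on a fine grid inside (eps_c, 0)
eps_max = max((-0.66 + 1e-4 * i for i in range(4000)), key=g0)
```

This reproduces $g_0(\varepsilon_{\max})\approx 0.1992$ at $\varepsilon_{\max}\approx -0.506$, in agreement with the values above.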
While the first is a more traditional approach using standard Monte Carlo techniques, the second method is a novel algorithm that has been derived from the dynamic greedy algorithm.\cite{hoga} Its efficiency relies on the specifically local nature of single spin-flips, i.e., low connectivity, and in this study we only apply it to the Edwards-Anderson model. \subsection{Method I} This method is a standard Monte Carlo technique which employs flips of single spins and samples in principle all possible states of the spin glass. The ensemble is designed to include all local minima with a sufficiently high probability, such that their distribution can be inferred. Since the goal is to find local minima, i.e., states where all spins have negative energy and which, therefore, are stable against single-spin flips, it is intuitive to use the number of spins with positive energy as a control parameter: \begin{equation} n^{\rm p}(\mathbf{S})=\sum_{i=0}^N\Theta(e_i), \end{equation} where $\Theta$ is the Heaviside step function. However, simply minimizing this parameter would yield only local minima around the maximum of the minima distribution, but not in its tails. In order to sample the rare minima at high and at low energy, we also incorporate the Boltzmann weight. In the ensemble $k$ of our simulation a state $\mathbf{S}$ is occupied with a probability \begin{equation} P_k(\mathbf{S}) \propto \omega_k\left( n^{\rm p}(\mathbf{S} ) \right)e^{-\beta_k\mathcal{H}(\mathbf{S})}, \end{equation} where $\beta_k$ drives the energy of the system similar to the inverse temperature in a canonical ensemble and $\omega_k(m)$ is a weight function that for a given number $m$ equals the inverse sum of the Boltzmann weights of all spin configurations that have $m$ spins with positive energy: \begin{equation} \omega_k(m) = \left( \sum_\mathbf{S} \delta_{n^{\rm p}(\mathbf{S}),m}\, e^{-\beta_k\mathcal{H}(\mathbf{S})} \right)^{-1}, \end{equation} where the sum goes over all possible states. 
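For concreteness, the basic ingredients of this ensemble can be sketched in a few lines (Python with numpy; the $4\times4\times4$ lattice, the particular couplings, and the flat trial weights are illustrative assumptions, since in practice $\omega_k$ has to be determined iteratively):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4                                        # illustrative 4x4x4 lattice
J = rng.normal(size=(3, L, L, L))            # Gaussian bonds along the 3 axes (periodic)
s = rng.choice([-1, 1], size=(L, L, L))

def site_energies(s):
    """Single-site energies e_k: sum of all bond terms involving spin k."""
    e = np.zeros(s.shape)
    for d in range(3):
        b = -J[d] * s * np.roll(s, -1, axis=d)   # bond term between site k and k+e_d
        e += b + np.roll(b, 1, axis=d)           # every spin belongs to two bonds per axis
    return e

def n_positive(s):
    """Control parameter n^p(S): number of spins with positive energy."""
    return int((site_energies(s) > 0).sum())

def log_P(s, beta, log_omega):
    """Logarithm of the unnormalized ensemble weight omega_k(n^p) exp(-beta H)."""
    e = site_energies(s)
    return log_omega[n_positive(s)] - beta * 0.5 * e.sum()   # H = (1/2) sum_k e_k

def p_flip(s_old, s_new, beta, log_omega):
    """Metropolis acceptance probability for a proposed move."""
    return min(1.0, np.exp(log_P(s_new, beta, log_omega) - log_P(s_old, beta, log_omega)))
```

Flipping a single spin $k$ changes the energy by $-2e_k$, which the functions above reproduce, and single-flip stability corresponds to `n_positive(s) == 0`.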
The weights $\omega_k$ cause a flat distribution over $n^{\rm p}(\mathbf{S})$ similar to the weights of a multicanonical simulation\cite{muca1,muca2} or the inverse density of states from the Wang-Landau method\cite{Wang_Landau} leading to a flat histogram over energy. Since $\omega_k$ is a priori not known, we determine it before the actual simulation using an iterative procedure.\cite{muca_wght_det} During the simulation a proposed step $\mathbf{S}_{\rm old}\rightarrow\mathbf{S}_{\rm new}$ is accepted with a probability according to the well-known Metropolis criterion: \begin{equation} p^k_{\rm flip}(\mathbf{S}_{\rm old},\mathbf{S}_{\rm new}) = \min\left(1,\frac{P_k(\mathbf{S}_{\rm new})}{P_k(\mathbf{S}_{\rm old})}\right). \end{equation} Multiple such ensembles with different $\beta$ are combined via the replica exchange method \cite{parallel_temp1} and two ensembles $k$ and $l$ exchange configurations with the probability \begin{equation} p^{kl}_{\rm exch}(\mathbf{S}_k,\mathbf{S}_l) = \min\left(1,\frac{P_k(\mathbf{S}_l)P_l(\mathbf{S}_k)}{P_k(\mathbf{S}_k)P_l(\mathbf{S}_l)}\right), \end{equation} where $\mathbf{S}_k$ is the configuration belonging to ensemble $k$ before the attempted exchange. In order to estimate the distribution of local minima we apply the weighted histogram analysis method (WHAM).\cite{wham1,wham2} It is possible to apply this algorithm directly to the various distributions of local minima measured at different $\beta$; however, since the data obtained at low $n^{\rm p}$ carries a large statistical error, we decided to determine the reweighting factors using all available data.
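The WHAM part of this analysis amounts to a fixed-point iteration for the per-ensemble partition sums. A minimal numpy sketch of the standard self-consistency iteration (bin bookkeeping and statistical-error weighting are omitted, and the synthetic test data are an assumption):

```python
import numpy as np

def wham(Pi, betas, E, n_iter=5000):
    """Standard WHAM self-consistency iteration.

    Pi[k, i]  : normalized histogram of ensemble k in energy bin E[i]
    betas[k]  : inverse temperature beta_k of ensemble k
    Returns the partition sums z_k (normalized to z_0 = 1) and an
    estimate of the density of states g(E_i), up to one overall scale.
    """
    boltz = np.exp(-np.outer(betas, E))          # boltz[k, i] = exp(-beta_k E_i)
    num = Pi.sum(axis=0)                         # sum_l Pi_l(E_i)
    z = np.ones(len(betas))
    for _ in range(n_iter):
        denom = (boltz / z[:, None]).sum(axis=0) # sum_l z_l^{-1} exp(-beta_l E_i)
        g = num / denom                          # density-of-states estimate
        z = (boltz * g).sum(axis=1)              # z_k = sum_i g(E_i) exp(-beta_k E_i)
        z /= z[0]                                # fix the overall normalization
    return z, num / (boltz / z[:, None]).sum(axis=0)
```

The converged $z_k$ can then be reused in the same way to reweight the histograms restricted to local minima.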
We first reweight in order to obtain the unnormalized canonical distributions \begin{equation} \tilde{\Pi}_k(E_i) = \sum_{ \substack{t ,\\ E_i-\epsilon<E(\mathbf{S}_{k,t})<E_i+\epsilon } } \omega_k\left( n^{\rm p}(\mathbf{S}_{k,t} ) \right)^{-1} , \end{equation} where $\mathbf{S}_{k,t}$ are the configurations generated by the simulation in ensemble $k$ and $2\epsilon$ is the bin width, $E_{i+1}=E_i+2\epsilon$, and normalize \begin{equation} \Pi_k(E_i) = \frac{ \tilde{\Pi}_k(E_i) }{ \sum\limits_{j} \tilde{\Pi}_k(E_j) }\approx\frac{g(E_i)e^{-\beta_kE_i}}{z_k}. \end{equation} Here, $g(E)$ denotes the density of states and $z_k$ are the partition sums \begin{equation} z_k=\sum_i g(E_i)e^{-\beta_kE_i}, \end{equation} which can be self-consistently determined by iterating \begin{equation} z_k = \sum\limits_i e^{ -\beta_k E_i } \frac{ \sum\limits_l \Pi_l(E_i) }{ \sum\limits_l z_l^{-1} e^{ -\beta_l E_i }}. \end{equation} One could now calculate the density of states \begin{equation} g(E_i)=\frac{ \sum\limits_l \Pi_l(E_i) }{ \sum\limits_l z_l^{-1} e^{ -\beta_l E_i }} \end{equation} or, using the $\beta$-dependent distributions of the local minima, \begin{equation} \Pi^0_k(E_i) = \frac{\sum\limits_{ \substack{t ,\\ E_i-\epsilon<E(\mathbf{S}_{k,t})<E_i+\epsilon } } \delta_{n^{\rm p}(\mathbf{S}_{k,t}),0}\, \omega_k\left( 0 \right)^{-1} }{ \sum\limits_{j} \tilde{\Pi}_k(E_j) }, \end{equation} derive the overall distribution of local minima \begin{equation} \Omega(E_i)=\frac{ \sum\limits_l \Pi^0_l(E_i) }{ \sum\limits_l z_l^{-1} e^{ -\beta_l E_i }}. \end{equation} \subsection{Method II} \paragraph{Basic concept} The aim of this method is to create an ensemble that contains all metastable states -- and only those -- with equal probability and to enable transitions between them, such that in a second step standard Monte Carlo techniques can be employed in order to investigate their properties.
As a first step, we set up a composite state that contains an unspecified spin configuration $\mathbf S$ and $jN$ random numbers $\{\xi\}\in(0,1]^{jN}$. Here $j$ specifies how many random numbers per lattice site are used. If basic Monte Carlo steps like spin flips and randomizations of elements of $\{\xi\}$ are applied to this state in an unbiased fashion, the system will perform simple sampling of the state space of spin configurations and simultaneously of $(0,1]^{jN}\subset\mathbb{R}^{jN}$. Now we interpret the composite state as a random quench, i.e., the random numbers $\{\xi\}$ are used to create a sequence of spin configurations that starts at $\mathbf S$ and is guaranteed to end in a metastable state $\rho$. Applying the same Monte Carlo steps as before, changes in $\mathbf S$ and $\{\xi\}$ will, therefore, often cause changes of $\rho$ such that a random walk in the space of local minima is performed. However, it cannot be expected that all metastable states are visited with equal probability. We bias the ensemble such that a composite state is represented with a probability proportional to a weight $P_{\rm goal}({\mathbf S},\{\xi\})$. The function $P_{\rm goal}$ is chosen such that the resulting random walk in the space of local minima performs simple sampling, i.e., all metastable states $\rho$ are occupied with uniform probability. \paragraph{The principal ensemble} The initial spin configuration $\mathbf S$ and the set of random numbers $\{\xi\}$ are mapped onto a sequence of spin configurations by the primitive Monte Carlo method $\mathcal{M}$: \begin{equation} \mathcal{M}:(\mathbf{S},\{\xi\}) \mapsto ( \sigma_0,\sigma_1,\sigma_2,\dots,\sigma_f ), \end{equation} where $\sigma_i$ are spin configurations of the spin glass with $\sigma_0 \equiv \mathbf{S}$. In our case $\mathcal{M}$ stands for an energy minimization.
The random numbers $\{\xi\}$ are used to randomly pick a spin with positive energy in $\sigma_i$, which is then flipped, thus creating $\sigma_{i+1}$. This is repeated until all spins have negative energy, such that the final state $\sigma_f\equiv\rho$ is a local energy minimum. As usual, the primitive methods that are used to modify $\mathbf{S}$ and $\{\xi\}$ are unbiased, i.e., if simple sampling (ss) were used all $\mathbf{S}$ would be visited with equal probability and the random numbers $\{\xi\}$ would be uniformly distributed. In such a process the probability to obtain a given sequence $( \sigma_0,\dots,\sigma_f )$ can easily be determined. Since $\mathcal{M}$ does an unbiased selection from all spins with positive energy, the probability of each individual draw equals the inverse number of spins with positive energy and the total probability is proportional to their product: \begin{equation} P_{\rm ss}\left(( \sigma_0,\dots,\sigma_f )\right) \equiv P_{\rm ss}\left( \mathcal{M}(\mathbf{S},\{\xi\}) \right) = P(\mathbf{S}) \prod\limits_{i=0}^{f-1}\frac1{n_i^{\rm p}}, \end{equation} where $n_i^{\rm p}$ is the number of spins with positive energy in the configuration $\sigma_i$. If the inverse of this probability, \begin{equation} P_{\rm is}( \mathbf{S},\{\xi\} ) \propto P_{\rm ss}\left( \mathcal{M}(\mathbf{S},\{\xi\}) \right)^{-1}, \end{equation} is used as the statistical weight of a state in a biased ensemble, an importance-sampling (is) simulation will create all sequences $( \sigma_0,\dots,\sigma_f )$ with equal probability. Of course, our goal is not to sample all sequences with equal probability, but all final states $\sigma_f$, which are the local energy minima. We have to assign an additional weight $W$ to each sequence such that for all local minima $\rho$ \begin{equation} p(\rho)=\sum\limits_{\substack{( \sigma_0,\dots,\sigma_f ) ,\\ \sigma_f=\rho }}W\left(( \sigma_0,\dots,\sigma_f )\right)={\rm const}.
\end{equation} This is in a sense the inverse process to the first reweighting, where we introduced a weight function in order to move from a uniform distribution over the starting configurations $\mathbf{S}$ or $\sigma_0$ to a uniform distribution over all sequences. Now we wish to abandon the latter in favor of an ensemble with a uniform distribution over the final states $\sigma_f$. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./up_tree.eps} \caption{\small{\label{fig:up_tree} \emph{Branches of the tree of sequences for one particular final state $\sigma_f$. Circles represent spin configurations and STOP-nodes indicate that the initial configuration $\sigma_0$ has been identified with the node below. See text for a detailed description.}}} \end{center} \end{figure} Consider the partial tree depicted in Fig.~\ref{fig:up_tree}. The full tree contains all those sequences that end in one particular final state, the local minimum $\sigma_f$. Each circle represents a spin configuration and any two connected states differ by exactly one spin value with the energy decreasing towards the root. The same configuration can appear multiple times in the tree. One possibility is that spins can be flipped in a different order, which will cause states to appear more than once at the same level. We reconstruct the sequences in reverse order, i.e., starting with the final state $\sigma_f$ and proceeding upwards to previous states. The length of the sequences is variable. Therefore, if all possible paths from the root $\sigma_f$ are to be considered, we have to allow for the possibility of a premature stop while a continuation towards configurations of higher energy is still possible. This is symbolized by the STOP-nodes. They are not configurations themselves, but identify their parent node with $\sigma_0$.
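For concreteness, the bookkeeping behind $P_{\rm ss}$ and $P_{\rm is}$ amounts to accumulating the counts $n_i^{\rm p}$ along the quench; a minimal Python sketch (the interface is ours, not the production code used in the simulations):

```python
from math import prod

def sequence_weights(n_pos):
    """n_pos[i] is the number of spins with positive energy in sigma_i,
    for i = 0..f; the last entry belongs to the local minimum.
    Returns P_ss (up to the constant factor P(S)) and the importance
    weight P_is proportional to its inverse."""
    p_ss = prod(1.0 / n for n in n_pos[:-1])  # one factor per draw i = 0..f-1
    return p_ss, 1.0 / p_ss
```

The omitted factor $P(\mathbf{S})$ is the same constant for every sequence and cancels in acceptance ratios.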
If we assign weights $w$ to all branches such that the sum over the weights of a node's outgoing (upward) branches equals unity \begin{equation} w_i^{\rm stop}+\sum\limits_{j=1}^{n_i^{\rm n}}w_i^j=1, \end{equation} then the product of these weights along each path has the desired property of $W$. Here, $n_i^{\rm n}=N-n_i^{\rm p}$ is the number of spins with negative energy in the configuration $\sigma_i$ and, therefore, the number of spin configurations with higher energy to which $\sigma_i$ can be connected. The most intuitive solution is to assign equal weights to all true continuations: \begin{equation} w_i^j=\frac{1-w_i^{\rm stop}}{n_i^{\rm n}},\qquad j=1,2,\dots,n_i^{\rm n}. \label{eq:tree_weight_1} \end{equation} The remaining weights $w^{\rm stop}$ determine the length $f+1$ of the sequence. We chose \begin{equation} w_i^{\rm stop}= \begin{cases} 0 & \text{if $f+1-i<l_{\min}$} \\ \frac1{l_{\max}-f+i} & \text{else} \end{cases} \label{eq:tree_weight_2} \end{equation} in order to obtain sequences of any length between (and including) $l_{\rm min}$ and $l_{\rm max}$ with equal frequency. If, for instance, $l_{\rm min}=3$ and $l_{\rm max}=7$ the STOP-weights $w_i^{\rm stop}$ for the levels in Fig.~\ref{fig:up_tree} from the bottom to the top would read $0,0,\frac15,\frac14,\frac13,\frac12,1$. We find \begin{equation} W\left(( \sigma_0,\dots,\sigma_f )\right)=w_0^{\rm stop} \prod\limits_{i=1}^{f}\frac{1-w_i^{\rm stop}}{n_i^{\rm n}}, \end{equation} which with our choice of $w_i^{\rm stop}$ simplifies to \begin{eqnarray} W\left(( \sigma_0,\dots,\sigma_f )\right) &=& \frac1{l_{\rm max}-l_{\rm min}+1} \prod\limits_{i=1}^{f}\frac1{n_i^{\rm n}},\\ &\propto& \prod\limits_{i=1}^{f}\frac1{n_i^{\rm n}} \end{eqnarray} if $f+1\in\{l_{\rm min},\dots,l_{\rm max}\}$, otherwise $W\left(( \sigma_0,\dots,\sigma_f )\right)=0$. 
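The choice of stop weights can be checked numerically: with Eqs.~(\ref{eq:tree_weight_1}) and (\ref{eq:tree_weight_2}) every allowed length is generated with probability $1/(l_{\rm max}-l_{\rm min}+1)$. A small sketch (the function names are ours):

```python
def stop_weight(depth, l_min, l_max):
    """w^stop at a node `depth` steps above the root; stopping there
    yields a sequence of length depth + 1."""
    length = depth + 1
    if length < l_min:
        return 0.0
    return 1.0 / (l_max - length + 1)

def length_distribution(l_min, l_max):
    """Probability of each sequence length implied by the stop weights."""
    probs, survive = {}, 1.0
    for depth in range(l_max):
        w = stop_weight(depth, l_min, l_max)
        probs[depth + 1] = survive * w   # stop exactly here
        survive *= 1.0 - w               # continue upwards
    return probs
```

For $l_{\rm min}=3$ and $l_{\rm max}=7$ this reproduces the stop weights $0,0,\frac15,\frac14,\frac13,\frac12,1$ quoted above and yields probability $\frac15$ for each allowed length.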
Finally, multiplying $P_{\rm is}( \mathbf{S},\{\xi\} )$ and $W( \mathbf{S},\{\xi\} )$, we are left with an ensemble which includes all final states $\sigma_f$ with equal probability \begin{equation} P_{\rm goal}( \mathbf{S},\{\xi\} ) \coloneqq \frac{ \prod\limits_{i=0}^{f-1}n_i^{\rm p} } { \prod\limits_{i=1}^{f}n_i^{\rm n} }. \end{equation} The freedom to restrict the length of the sequence from above is important since during the construction of $W$ we implicitly assumed that we can always move to states with higher energy and could, therefore, always choose $w^{\rm stop}\ne 1$. To ensure that this assumption is justified during a simulation the sequences must not be too long. In our simulations we chose $l_{\rm max}=N/3$ and $l_{\rm min}=1$. \paragraph{The minimization} As stated above, the minimization method $\mathcal{M}$ randomly chooses spins with positive energy and flips these until a stable configuration is reached. Normally, in such a procedure a spin would be selected by considering all candidates and using a single random number uniformly distributed between zero and the number of spins with positive energy, which, rounded up, determines which spin to flip. However, this kind of global selection is unsuitable in our case. It would cause a modification of the initial state $\sigma_0$ to potentially affect every single selection, which would, therefore, require a complete reconstruction of the $\sigma_i$. The changes to $\sigma_f$ would be considerable, which is undesirable in a Monte Carlo simulation, since the resulting acceptance rates would be very small. Instead, we implement strictly local conditions which collectively effect a global selection. In the following, as we construct the sequence $\sigma_0,\sigma_1,\dots,\sigma_f$, we will record the evolution of single spins by means of `spin states' $\zeta$ which describe a spin's value and its energy as determined by the environment, i.e., the adjacent spins.
The initial state $\zeta_{0,k}$ of spin $s_k$ as given by the spin configuration $\sigma_0$ will change to a new state $\zeta_{1,k}$ as soon as its energy changes, which happens if either spin $s_k$ or one of its neighboring spins is flipped. Since consecutive $\sigma_i$ only differ in exactly one spin, the first index of the $\zeta$ will in general not agree with the spin configurations to which they belong. For instance, most states $\zeta_{0,k}$ will be shared by $\sigma_0$ and $\sigma_1$. If we assign a uniformly distributed random number $\eta_{0,i}\in(0,1]$ to each initial spin state $\zeta_{0,i}$ with positive energy and sort the spins based on the magnitude of $\eta_{0,i}$, it is clear that we will obtain a completely random sequence. We identify the largest $\eta_0$ and flip the associated spin, thus creating $\sigma_1$. We then proceed with the new largest $\eta$. The following rules apply: \begin{itemize} \item{If by the flip of a spin with random number $\eta_{\mu,k}$ an adjacent previously stable spin acquires positive energy in its new state $\zeta_{\nu,l}$, it can easily be inserted into the ordered set of spins with positive energy by choosing the random number $\eta_{\nu,l}\in(0,\eta_{\mu,k}]$.} \item{Albeit not strictly necessary, a new random number is also assigned in the same way if the energy of an unstable spin changes but remains positive.} \item{If a spin with positive energy changes to a stable state, i.e., with negative energy, its $\eta$ is removed from the ordered set.} \item{The non-adjacent spins that retain their energy during a spin flip and whose state therefore does not change keep their random number.} \end{itemize} For reasons of efficiency, in our simulation we reserve one random number $\xi\in(0,1]$ for each spin state regardless of the sign of its energy and calculate $\eta$ from it when required. Random numbers of spins with negative energy have no impact and can be considered as temporarily decoupled degrees of freedom.
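These rules can be collected into a toy version of $\mathcal{M}$ for a one-dimensional $\pm J$ chain. The sketch below uses our own conventions (periodic boundary; the spin `energy' is positive when a flip lowers the total energy) and a plain dictionary in place of the ordered set of the actual implementation:

```python
import random

def spin_energy(s, J, k):
    """Local energy of spin k in a periodic chain with
    H = -sum_k J[k] * s[k] * s[k+1]; positive means unstable."""
    n = len(s)
    h = J[k - 1] * s[k - 1] + J[k] * s[(k + 1) % n]
    return -s[k] * h

def quench(s, J, rng):
    """Flip unstable spins in order of decreasing eta until all spins
    are stable; returns the visited configurations sigma_0..sigma_f."""
    s = list(s)
    n = len(s)
    eta = {k: 1.0 - rng.random()                  # uniform on (0, 1]
           for k in range(n) if spin_energy(s, J, k) > 0}
    path = [tuple(s)]
    while eta:
        k = max(eta, key=eta.get)                 # largest eta flips next
        cap = eta.pop(k)                          # flipped spin becomes stable
        s[k] = -s[k]
        for j in ((k - 1) % n, (k + 1) % n):      # only neighbours change state
            if spin_energy(s, J, j) > 0:
                eta[j] = cap * (1.0 - rng.random())  # new eta in (0, cap]
            else:
                eta.pop(j, None)                  # became stable: drop its eta
        path.append(tuple(s))
    return path
```

Every flip strictly lowers the energy, so the loop terminates in a single-flip stable configuration.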
We can reformulate the algorithm by introducing a flipping `time' \begin{equation} t_{\mu,k}\coloneqq-\ln\eta_{\mu,k} \label{eq:def_time} \end{equation} and consequently \begin{equation} t_{\mu,k}=t^{\rm orig}_{\mu,k}-\ln\xi_{\mu,k}, \label{eq:def_time2} \end{equation} where $\xi_{\mu,k}$ is a uniformly distributed random number $\in(0,1]$ and $t^{\rm orig}_{\mu,k}$ is the time at which the particular state of spin $s_k$ was created, i.e., the time when a flip of one of its neighbors last changed its energy. A flip of a spin will not lead to a new flipping time for itself since after a flip its energy is by definition negative. In the beginning, i.e., for the state $\sigma_0$, all $t^{\rm orig}_{0,k}=0$. Although not used in our work, it is instructive to consider a biased selection. If the spin $s_k$ in state $\zeta_{\mu,k}$ is to be selected with the relative weight $v_{\mu,k}$, it is relatively easy to show that \begin{equation} t_{\mu,k}=t^{\rm orig}_{\mu,k}-\frac{\ln\xi_{\mu,k}}{v_{\mu,k}} \end{equation} generates the desired behavior. If weights proportional to the Boltzmann weight of the respective energy change were chosen and if energy increases were allowed, $\mathcal{M}$ would become the waiting-time Monte Carlo method.\cite{WTM} We will now briefly discuss the different methods that we use in order to modify the sequence $\{\sigma\}$ during our Monte Carlo simulation. \paragraph{Top-down update} If instead of the spin with the largest random number $\eta$ (or the smallest flipping time $t$) the spin with the highest energy were flipped, the thus derived deterministic method would constitute a so-called greedy algorithm. Both methods are structurally very similar: each flip only affects adjacent spins and each spin state is characterized by a quantity (energy or $\eta$) whose maximum determines the next step. There is one difference that makes the random minimization easier to handle.
During the greedy algorithm the energy sometimes increases within the sequence of flipped spins, while in the method used here $\eta$ will always decrease ($t$ will always increase). In a recent article \cite{hoga} about `dynamical greedy algorithms' for the Edwards-Anderson model we have discussed algorithms that will propagate changes in the initial configuration $\sigma_0$ and determine the new $\sigma_f$ with little computational effort. Since the same ideas are used here with very little modification to implement a method that allows for changes to $\mathbf S$, i.e., to $\sigma_0$, we refer to this publication for details. Since random changes to the initial configuration will in general change the weight $W_{\rm goal}$, the acceptance probability must contain the ratio \begin{equation} R = \frac{ P_{\rm goal}( \mathbf{S}_{\rm new},\{\xi_{\rm new}\} ) }{ P_{\rm goal}( \mathbf{S}_{\rm old},\{\xi_{\rm old}\} ) } \end{equation} as a bias correction. \paragraph{Single random number update} Besides the starting configuration, we can and should also modify the random numbers $\xi$ during the simulation. All random numbers that belong to spin states with negative energy can be updated at leisure, and those which belong to spin states $\zeta_{\mu,k}$ with positive energy, but which do not lead to a spin flip since the state is replaced before the flipping time, \begin{equation} t_{\mu,k}>t^{\rm orig}_{\mu+1,k}, \end{equation} can be assigned a new uniformly distributed random number \begin{equation} \xi_{\mu,k}\in\left[0, \exp({-t^{\rm orig}_{\mu+1,k}})\right). \label{eq:upd_xi} \end{equation} In both cases the sequence of spin flips and therefore the weight $W_{\rm goal}$ remains unchanged. Hence, these updates can always be accepted with probability one. It is, of course, possible to modify any $\xi$ without constraint and proceed to determine the resulting possibly altered sequence $\sigma_0,\dots,\sigma_j,\sigma_{j+1}',\dots,\sigma_f'$.
Then a non-trivial acceptance probability would ensue. In our simulations, however, we do not use such a method. \paragraph{Bottom-up update} The distribution of sequences as defined by (\ref{eq:tree_weight_1}) and (\ref{eq:tree_weight_2}) enables us to introduce another update. Exploiting the fact that all sequence lengths between $l_{\rm min}$ and $l_{\rm max}$ are equally likely, just as any upward continuation in Fig.~\ref{fig:up_tree}, it is possible to create a new sequence that ends with the old local minimum $\sigma_f$: \begin{itemize} \item{ Choose a new length $f'+1$ from the allowed values randomly. } \item{ Starting from $\sigma'_{f'}=\sigma_f$ create $\sigma'_{f'-1},\sigma'_{f'-2},\dots,\sigma'_0$ by randomly flipping spins with negative energy. } \item{ Assign new random numbers $\{\xi\}$ that are consistent with this sequence. } \end{itemize} The third point warrants a more thorough discussion. Naturally, we start with $\sigma'_0$ since the random numbers $\eta'_0$ do not depend on the later ones. If $\sigma'_0$ possesses $n^{\rm p}_0$ spins with positive energy and if $\sigma'_1$ is reached by flipping $s_k$, we have to assign $\eta_{0,k}$ according to a distribution that equals the distribution of $\max(\chi_1,\chi_2,\dots,\chi_{n^{\rm p}_0})$, where the random numbers $\chi_i\in(0,1]$ are uniformly distributed. We find the distribution \begin{equation} p\left( \eta_{0,k} \right)= n^{\rm p}_0 (\eta_{0,k})^{n^{\rm p}_0-1},\quad \eta_{0,k}\in(0,1] \end{equation} which means that we can set \begin{equation} \eta_{0,k}=\chi^\frac1{n^{\rm p}_0}, \end{equation} where $\chi\in(0,1]$ is uniformly distributed. Similarly, if the spin flip from $\sigma'_{i-1}$ to $\sigma'_i$ occurred at time $\tau_i$ and if the spin $s_l$ has to be flipped in order to reach the new state $\zeta_{\mu+1,l}$ and $\sigma'_{i+1}$, $\eta_{\mu,l}$ is distributed the same way as $\max(\chi'_1,\chi'_2,\dots,\chi'_{n^{\rm p}_i})$, with $\chi'_i\in(0,e^{-\tau_i})$ uniformly distributed.
Hence \begin{equation} p\left( \eta_{\mu,l} \right)= n^{\rm p}_i \left( \eta_{\mu,l} \right)^{n^{\rm p}_i-1}\left(e^{\tau_i}\right)^{n^{\rm p}_i},\quad \eta_{\mu,l}\in(0,e^{-\tau_i}) \end{equation} and therefore we can calculate $\eta_{\mu,l}$ from a uniformly distributed random number: \begin{equation} \eta_{\mu,l}=\chi^\frac1{n^{\rm p}_i}e^{-\tau_i}. \end{equation} Once all times and respective random numbers $\eta$ of the performed flips are defined, their basic random numbers $\xi$ can be calculated for the known times by inverting (\ref{eq:def_time2}) and for the remaining spin states with positive energy according to (\ref{eq:upd_xi}). All other $\xi$ are decoupled and may be kept or chosen at random. Since the update is designed to create sequences with the desired distribution, there is no bias to correct by the acceptance probability: \begin{equation} R = 1. \end{equation} It is worth noting that in principle this update allows for a true Markov chain in the space of local minima. Without it, the selection of a new state $\rho'$ does not exclusively depend on the current state $\rho$, but on the hidden degrees of freedom in $\mathbf S$ and $\{\xi\}$: a Markov process is performed in their state space. Now, we can completely randomize these hidden degrees of freedom after each step using the bottom-up update, thus removing the surplus `memory' from the system. However, since the procedure involves the entire system and is, therefore, computationally expensive, it is not advisable to apply it too often. In our simulations we randomly select a spin in $\mathbf S$ and attempt a spin flip in a top-down update. Then, the random variables at this lattice site are updated if possible. After $N$ such combinations we perform a single bottom-up update.
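The inverse-transform step $\eta=\chi^{1/n^{\rm p}_i}e^{-\tau_i}$ can be validated against the defining property of the maximum of $n$ uniform numbers on $(0,e^{-\tau_i})$; for instance, its mean must equal $n/(n+1)$ times the upper bound. A quick numerical check (names are ours):

```python
import random

def sample_max_uniform(n, upper, rng):
    """Draw max(chi_1, ..., chi_n) with chi_i uniform on (0, upper)
    from a single uniform number, via inverse transform."""
    return upper * rng.random() ** (1.0 / n)
```

Here `upper` plays the role of $e^{-\tau_i}$ and `n` that of $n^{\rm p}_i$.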
\paragraph{Simulation} Once the framework introduced above is in place, we can ignore all its inner workings and treat it as a normal system which ergodically changes from one single-flip stable configuration $\rho\equiv\sigma_f$ to another. Of course, the configurations of this particular system are a subset of the states of another system, but this no longer concerns us. In order to obtain statistics for a large energy range we apply a flat-histogram method. We introduce another weight function \begin{equation} W_{\rm flat}(E) \approx \Omega(E)^{-1} \end{equation} and require that in our simulation the probability to visit a certain metastable state $\rho$ is \begin{equation} P_{\rm flat}(\rho) \propto W_{\rm flat}\left( E(\rho)\right), \end{equation} which means that new states are accepted with the probability \begin{equation} P_{\rm flat}^{\rm acc}(\rho_{\rm old}\rightarrow \rho_{\rm new}) = \min\left( 1 , \frac{ W_{\rm flat}\left( E(\rho_{\rm new})\right) }{ W_{\rm flat}\left( E(\rho_{\rm old})\right) } R \right). \end{equation} We initially approximate $W_{\rm flat}$ using a variant of the well-known Wang-Landau algorithm \cite{Wang_Landau} with an additional restriction. The algorithm has difficulty converging and sampling the distribution in the extreme tails, i.e., at low and at high energy, because only very few states exist there. The high-energy minima are much harder to find than the low-energy ones. We suspect the reason is that the latter are embedded in large basins and large metabasins which help to guide the simulation. The problematic regions can be excluded from the simulation by restricting $W_{\rm flat}$: \begin{equation} W_{\rm flat}(E)< \begin{cases} \min(W_{\rm flat}(E))+\Delta_{\rm L} & \text{if $E<E^*$} \\ \min(W_{\rm flat}(E))+\Delta_{\rm R} & \text{if $E>E^*$}, \end{cases} \end{equation} where $E^*$ is the position of the minimum of $W_{\rm flat}$ \begin{equation} W_{\rm flat}(E^*) = \min(W_{\rm flat}(E)).
\end{equation} This is more convenient than restricting the energy range directly because it can be applied in the same way to all samples. The values that we use are listed in Table~\ref{tab:w_restr}. Choosing $\Delta_{\rm L}=0.205N$ still allows for the sampling of the ground state, but prevents the algorithm from spending too much time at low energies during the weight determination. Once $W_{\rm flat}(E)$ is known with sufficiently high precision, we perform the main simulation and record a histogram $H(E)$ of the local minima from which their distribution can be calculated: \begin{equation} \Omega(E)=\frac{H(E)}{W_{\rm flat}(E)} \, . \end{equation} \begin{table} \caption{\small{\label{tab:w_restr} \emph{Upper bounds for the weight function $W_{\rm flat}(E)$.}}} \begin{center} \begin{tabular}{c c c} \toprule $L$ & $\Delta_{\rm L}$ & $\Delta_{\rm R}$\\ \hline 4 & $\infty$ & $\infty$\\ 6 & $\infty$ & $0.18L^3$\\ 8 & $0.205L^3$ & $0.17L^3$\\ 10 & $0.205L^3$ & $0.13L^3$\\ \botrule \end{tabular} \end{center} \end{table} \section{Results} As a first goal we test the validity of our methods. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./dos_integer.eps} \caption{\small{\label{fig:dos_int} \emph{ The tails of the distribution of local minima for a single $4\times4\times4$ sample of the Edwards-Anderson model as measured with method II. The complete distribution is shown in the inset. The $\Omega$-values are integer multiples of the lowest non-zero value. The appropriate normalization $\Omega_0$ is not obtained from the simulation, but is deliberately chosen such that the lowest occupied level equals $2$, indicating that the respective energy intervals each contain a single twofold degenerate metastable state. }}} \end{center} \end{figure} In Fig.~\ref{fig:dos_int} we show $\Omega(E)$ for an $L=4$ sample of the Edwards-Anderson model measured with method II.
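Before discussing the results in detail, note that the flat-histogram bookkeeping of the previous section reduces to two small pieces: the acceptance test including the bias correction $R$, and the final reweighting $\Omega(E)=H(E)/W_{\rm flat}(E)$. In logarithmic form (a sketch with hypothetical names, not our production code):

```python
import math, random

def accept(lnW, E_new, E_old, ln_R, rng):
    """Accept a proposed minimum with probability
    min(1, W(E_new)/W(E_old) * R); weights are stored as logarithms."""
    d = lnW[E_new] - lnW[E_old] + ln_R
    return rng.random() < math.exp(min(0.0, d))

def ln_density(H, lnW):
    """ln Omega(E) = ln H(E) - ln W(E) for every visited energy bin."""
    return {E: math.log(c) - lnW[E] for E, c in H.items() if c > 0}
```

A histogram that is flat up to the weights $W_{\rm flat}$ then yields a constant-free estimate of $\ln\Omega(E)$.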
We use a binning method, i.e., every data point represents the aggregated statistics from a small energy interval. Each interval contains an integer number of local minima, hence a measurement of $\Omega$ should produce values that are integer multiples of the lowest non-zero value. This becomes clearly apparent in the tails of the distribution. We interpret the larger statistical fluctuations at high energy as evidence that sampling the high-energy minima is more demanding. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./method_comp.eps} \caption{\small{\label{fig:method_comp} \emph{The distribution of local minima for a single $10\times10\times10$ sample of the Edwards-Anderson model measured with both methods. In the main plot only every tenth data point for method I is displayed in order to ensure visibility. Notwithstanding, both data sets have a similar resolution. While the results are consistent, which suggests that both methods are accurate, the statistical error of method II is substantially lower (see inset). Besides, at high energy method I failed to find minima in a number of energy intervals, leading to an apparent thinning-out of data points.}}} \end{center} \end{figure} In order to compare both methods we show results for a $10\times10\times10$ system in Fig.~\ref{fig:method_comp}. We find that the results are in agreement. However, method I suffers from a much larger statistical error, and for larger systems method II is able to cover a much larger energy range. Consequently, we proceed to apply method II if possible, i.e., for the Edwards-Anderson model. \subsection{Total number of minima} We note that the raw data from the simulations are a priori not normalized. Only for $L=4$ and method II is it in principle possible to obtain a normalization constant, since very precise measurements in the tails of the distribution are required. This was done `manually' for the distribution in Fig.~\ref{fig:dos_int}.
To automate the normalization for small systems we use the lower values of the distribution of minima $\Omega$ to define a fitness function \begin{equation} \mathcal{F}(s)=\sum\limits_{\substack{i ,\\ 0<\Omega(E_i)<S}}\frac{\cos\left( 2\pi\Omega(E_i)/s \right)}{\Omega(E_i)}\quad, \end{equation} which is intended to quantify how well the data matches a supposed level distance $s$. Here, the parameter $S$ is a suitably chosen upper threshold that limits the computational effort and increases precision. The division by $\Omega(E_i)$ is introduced since we presume that $\ln\Omega(E)$ has a constant statistical error, which implies that $\Omega(E)$ has a statistical error proportional to its absolute value and that larger values should, therefore, contribute less. The true level distance, i.e., the statistical contribution of a single (twofold degenerate) local minimum, is then estimated by the position of the maximum of $\mathcal{F}(s)$, and dividing by 2 accounts for the degeneracy: \begin{equation} \Omega_0=\argmax_{s>s_{\rm min}}\mathcal{F}(s)/2, \label{eq:single_min_contr} \end{equation} where we considered \begin{equation} s_{\rm min}=\frac{\min(\Omega(E_i))}{10} \end{equation} sufficient. It is highly unlikely that for $L=4$ and the interval width we used ($\Delta E=E_{i+1}-E_i=0.3J$) the interval with the lowest population already contains more than 10 local minima. We can now properly normalize the distributions of local minima and determine the total number of minima \begin{equation} N_S=\sum_{i}\Omega(E_i)/\Omega_0, \end{equation} take the disorder average, and obtain an estimate for the number\footnote{Here, we have used the logarithm of the average of $N_S$ in order to be able to compare our result.
However, since both $N_S$ and $\Omega(E)$ are expected to be log-normal distributed, in the rest of the paper we consider averages of logarithmic quantities.} of metastable states for the three-dimensional EA model with $L=4$: \begin{equation} \ln\langle N_S\rangle_J/N=0.2111(3). \end{equation} This result matches the value $0.21125(1)$ that was calculated in Ref.~\onlinecite{Waclaw_Burda} using an analytic approximation. \subsection{Averaging} Unfortunately, for larger systems this normalization method cannot be applied, and all other distributions presented henceforth are only determined up to unknown factors. Since we calculate the average of the logarithm of the distributions, these become unknown additive constants which do not have a direct effect besides creating an unknown additive constant for the average as well. However, the second moments $\langle \left(\ln\Omega(E)\right)^2 \rangle_J$ and hence the estimators of the statistical errors of $\langle \ln\Omega(E) \rangle_J$ depend on these constants. In our analysis we chose them such that the maximum of the canonical distributions $\Omega(E)e^{-\beta_{\rm sync}E}$ is identical for all samples. This is equivalent to the (not entirely valid) assumption that all samples have about the same number of local minima and leads to an underestimation of the statistical error. With increasing system size this effect will vanish. Since we obtain very precise data for the Edwards-Anderson model, we can use the natural choice $\beta_{\rm sync}=0$. For the SK model, however, the data is very noisy around the maximum of $\Omega(E)$ and we use $\beta_{\rm sync}=0.05$ for $N=96$ and $\beta_{\rm sync}=0.2$ for $N=128$. The first averaging procedure is given by \begin{equation} [\ln\Omega(E)]_1 \coloneqq \langle\, \ln\Omega(E)-\ln\Omega(E^*)\,\rangle_J, \end{equation} where $E^*$ is the position of the maximum of $\Omega(E)e^{-\beta_{\rm sync}E}$.
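The level-spacing estimate used for the $L=4$ normalization can be sketched as a direct scan over candidate spacings $s$; the grid resolution and the threshold $S$ in this sketch are illustrative choices, not the values used in our analysis:

```python
import math

def fitness(omega, s, S):
    """F(s): how well the low Omega values match multiples of spacing s."""
    return sum(math.cos(2.0 * math.pi * w / s) / w
               for w in omega if 0.0 < w < S)

def level_spacing(omega, S, steps=2000):
    """Scan s in (s_min, S] and return the argmax of F(s), i.e. the
    estimated statistical contribution of one level."""
    s_min = min(w for w in omega if w > 0.0) / 10.0
    grid = [s_min + (S - s_min) * i / steps for i in range(1, steps + 1)]
    return max(grid, key=lambda s: fitness(omega, s, S))
```

$\Omega_0$ then follows from Eq.~(\ref{eq:single_min_contr}) by halving the returned spacing to account for the twofold degeneracy.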
Due to the variability of the interactions $J_{ij}$, the energy interval in which local minima exist shifts, especially for small systems. This means that we can obtain data from all samples only in a relatively small energy region. Outside this interval no average can be computed since the logarithm of the missing distributions is not defined. To obtain an averaged function over a larger interval, we introduce a second averaging procedure for which we shift all distributions along the energy axis such that their maxima coincide with the average maximum position: \begin{equation} [\ln\Omega(E)]_2 \coloneqq \left\langle\, \ln\Omega(E+E^*-\langle E^*\rangle_J)-\ln\Omega(E^*)\,\right\rangle_J. \end{equation} \subsection{Sherrington-Kirkpatrick model} We simulate systems of size $N=48,64,96,128$ with method I using parameters according to Table~\ref{tab:sk_param}. For each size we investigated 200 samples. \begin{table} \caption{\small{\label{tab:sk_param} \emph{Parameters used for the simulation of the SK model.}}} \begin{center} \begin{tabular}{c c c} \toprule $N$ & $\beta_{\rm min}$ & $\beta_{\rm max}$ \\ \hline $48$ & $-1.44$ & $1.4$ \\ $64$ & $-1.4$ & $1.4$ \\ $96$ & $-0.6$ & $1.4$ \\ $128$ & $-0.4$ & $1.4$ \\ \botrule \end{tabular} \end{center} \end{table} Figure~\ref{fig:SK_dos_av_1} shows the average logarithmic distribution of local minima for the first averaging method and indicates that our results are essentially in agreement with the analytical prediction. Details are discernible in Fig.~\ref{fig:SK_dos_av_1_diff_g}, where we show the deviation of the averages from the finite-system approximation $g_0(\varepsilon)+\frac1zg_1(\varepsilon)$ with $z=N-1$. Since we do not have a valid normalization and cannot determine the correct vertical position of the curves in Fig.~\ref{fig:SK_dos_av_1}, the absolute differences $\frac1N \left[ \ln\Omega(\varepsilon NJ\sqrt{N-1}) \right]-(g_0(\varepsilon)+\frac1{N-1} g_1(\varepsilon))$ have no meaning.
Only the relation between the differences is relevant, i.e., a horizontal curve means agreement with the analytical prediction while a large slope indicates deviation. Consequently, the vertical positions of the curves have been adjusted for convenience and are not the result of a physically motivated normalization. Error bars result from the disorder average and represent two standard deviations. With increasing system size the range in energy $\varepsilon$ where local minima exist expands. Regardless of the averaging technique used, the distribution $\Omega(\varepsilon NJ\sqrt{N-1})$ reaches smaller and greater $\varepsilon$ for $N=64$ than for $N=48$. However, for even larger systems the shortcomings of the Monte Carlo method and the increasing complexity of the energy landscape make it impossible to find minima of high energy. In fact, the downward curves at high $\varepsilon$ for $N=96$ and $N=128$ suggest that even for the energies where we can find minima, large populations are not accessed. The alternative explanation, that the analytical prediction is not accurate, seems less likely, especially since we obtain nice horizontal curves for $N=48$ and $N=64$. At low energies the measured distributions deviate from the analytical solution as predicted by Bray and Moore. The deviation is clearly visible for $\varepsilon < -0.6$. However, the errors are relatively large and it is difficult to judge whether the calculated $\varepsilon_c \approx -0.672$ will be realized for larger systems. \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./SK_dos_av_1.eps} \caption{\small{\label{fig:SK_dos_av_1} \emph{The logarithmic density of local minima for the SK model, averaged with method 1.
The vertical position of the curves, i.e., the normalization of $\Omega$, has been chosen such that the maxima of the curves coincide.}}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.9\columnwidth]{./SK_dos_av_1_diff_g01.eps} \includegraphics[width=.9\columnwidth]{./SK_dos_av_2_diff_g01.eps} \caption{\small{\label{fig:SK_dos_av_1_diff_g} \emph{Deviation of the averaged distribution from the analytical prediction for the SK model using a) averaging method 1 and b) method 2. The vertical positions $y(N)$ correspond to the unknown normalization constants and have here been chosen to avoid overlapping curves.}}} \end{center} \end{figure} \subsection{Edwards-Anderson model} For the investigation of the Edwards-Anderson model we are able to use method II and obtain very precise results. We consider lattices of linear extension $L=4,6,8,10$ and investigated 1000 samples for each size. In the insets of Fig.~\ref{fig:EA_dos_av} the distributions calculated with both averaging methods are plotted together with the approximation $g_0(\varepsilon)+\frac16g_1(\varepsilon)$. We observe almost no dependence on the system size, except for the fact that the support becomes broader. Again, for large systems the sampling of local minima with high energies becomes very difficult, which led us to use the restrictions listed in Table~\ref{tab:w_restr}. Consequently, the rescaled energies reached for $L=10$ are not as high as for $L=8$. While the data agrees reasonably well with the analytical solution on the right flank of the distribution and the maximum is in a similar position, we see considerable deviations for energies below the peak.
We find that the data is much better described by polynomials \begin{equation} p_1( \tilde{\varepsilon} )= -13.39\,\tilde{\varepsilon}^4-1.10\,\tilde{\varepsilon}^3-4.143\,\tilde{\varepsilon}^2+{\rm const} \end{equation} and \begin{equation} p_2( \tilde{\varepsilon} )= -13.73\,\tilde{\varepsilon}^4-1.07\,\tilde{\varepsilon}^3-4.142\,\tilde{\varepsilon}^2+{\rm const}, \end{equation} where $\tilde{\varepsilon}$ is a shifted energy such that $\tilde{\varepsilon}=0$ at the maximum position for $L=10$: \begin{equation} \tilde{\varepsilon} = \varepsilon - (-0.4978). \end{equation} Both polynomials were obtained by fitting to the $L=10$ averages for $\varepsilon\in[-0.55,\infty)$. Note that the contributions from the third- and fourth-order terms are small and for $\varepsilon\in[-0.6,-0.4]$ the quadratic term alone provides a very good approximation. The deviations from these polynomials depicted in the main plots in Fig.~\ref{fig:EA_dos_av} are very small, and while with the first averaging method a clear dependence on the system size emerges, the curves for $L>4$ for the second averaging method are much closer together. From the statistical errors, which are shown for the $L=10$ curves, it also becomes clear that the second averaging method produces more precise results. We expect that both techniques would deliver the same curves for very large systems, hence we conclude that $p_2(\varepsilon)$ provides a better approximation of the true distribution of minima for large systems. \begin{figure}[t] \begin{center} \includegraphics[width=.95\columnwidth]{./EA_dos_av_1.eps} \includegraphics[width=.95\columnwidth]{./EA_dos_av_2.eps} \caption{\small{\label{fig:EA_dos_av} \emph{The logarithmic density of local minima for the EA model with the sample average taken with a) method 1 and b) method 2. The main plots show the deviations from the polynomial approximations $p_1$ and $p_2$ (see text) in order to highlight size-dependent behavior.
In the insets the distributions are plotted together with the analytic approximation by Bray and Moore.}}} \end{center} \end{figure} \section{Conclusion} In order to measure the distribution of metastable states of spin glasses we have introduced two different algorithms. The first method employs traditional Monte Carlo techniques like flat-histogram, replica-exchange, and weighted histogram analysis. This method is able to find local minima over a wide range in energy; however, it has great difficulties at high energies and the statistical errors are comparatively large. For the second approach we designed an ensemble that allows a direct uniform sampling of metastable states. This method can be applied efficiently to the Edwards-Anderson model and yields very precise results. We find that for the Sherrington-Kirkpatrick model our results are consistent with the analytical predictions. Unfortunately, statistical errors are substantial and our simulations were only able to access the whole energy range for small system sizes. Investigating the Edwards-Anderson model by means of our novel method we were able to measure the distribution of metastable states with great precision. We found that the results -- suitably rescaled -- show very little dependence on the system size and we are, therefore, confident that much larger systems would yield distributions very similar to the ones described here.
\section{Introduction} The properties of physical, biological and many other systems can be described by differential and recursive equations; the latter are also called discrete dynamical systems (see e.g. \cite{De}, \cite{GR}, \cite{Sh}). Systems of non-linear, higher-dimensional recursive equations also arise in solving many different problems (see e.g. \cite{D'}, \cite{Kr}, \cite{Mu}, \cite{Roz}, \cite{SR}, \cite{Ta}). But the theory of systems of recursive equations is not developed enough, so for each such system one has to use a specific argument suitable for solving it. In this paper we consider the Hamiltonian (energy) \begin{equation} H(\sigma)= \sum_{l=(x-1, x): x\in \mathbf{Z}} I_x{\mathbf{1}}_{\sigma(x-1)\ne\sigma(x)}, \label{1} \end{equation} where $\mathbf{Z}=\{...,-2,-1, 0, 1,2,...\}$,\ $\sigma$ is a function (configuration), $\sigma:\mathbf{Z}\to \{-1,1\}$ (the set of all configurations $\sigma$ is denoted by $\Omega=\{-1,1\}^{\mathbf {Z}}$), and $ I_x\in R$ for any $x\in \mathbf{Z}$. In statistical physics the Hamiltonian (\ref{1}) is called a one-dimensional (1D) model. Let us consider the sequence $\Lambda_n=[-n,n]$, $n=0,1,...$ and denote $\Lambda_n^c=\mathbf{Z}\setminus \Lambda_n$. Consider the boundary condition $\sigma^{(+)}_n=\sigma_{\Lambda_n^c}=\{\sigma(x)=+ 1: x\in \Lambda_n^c\}.$ The energy $H_n^+(\sigma)$ of a configuration $\sigma$ in the presence of the boundary condition $\sigma^{(+)}_n$ is given by \begin{equation} H^+_n(\sigma)= \sum_{l=(x-1, x): x\in \Lambda_n} I_x{\mathbf{1}}_{\sigma(x-1)\ne\sigma(x)}+ I_{-n}{\mathbf{1}}_{\sigma(-n)\ne 1}+I_{n+1}{\mathbf{1}}_{\sigma(n)\ne 1}.
\label{2} \end{equation} The Gibbs measure on $\Omega_n=\{-1, 1\}^{\Lambda_n}$ with respect to the boundary condition $\sigma_n^{(+)}$ is defined by \begin{equation} \mu^+_{n,\beta}(\sigma)=Z^{-1}(n,\beta,+) \exp(-\beta H_n^+(\sigma)), \label{3} \end{equation} where $\beta=T^{-1}$, $T>0$ is the temperature, and $ Z(n, \beta, +)$ is the normalizing factor (statistical sum or partition function): \begin{equation} Z(n, \beta, +)=\sum_{\varphi\in \Omega_n}\exp(-\beta H_n^+(\varphi)).\label{4} \end{equation} Note that the probability (with respect to the measure $\mu^+_{n,\beta}$) of a subset $\Omega'_n$ of $\Omega_n$ is defined by \begin{equation} \mu^+_{n,\beta}(\Omega'_n)=Z^{-1}(n,\beta,+) \sum_{\psi\in \Omega'_n}\exp(-\beta H_n^+(\psi))={Z'(n,\beta,+)\over Z(n,\beta,+)}, \label{5} \end{equation} where $Z'(n,\beta,+)$ is called a "crystal" partition function: \begin{equation} Z'(n,\beta,+)=\sum_{\psi\in \Omega'_n}\exp(-\beta H_n^+(\psi)).\label{6} \end{equation} So to define the Gibbs measure and the probability of an event of the system one has to compute the partition functions. If $\mu^+_\beta=\lim_{n\to \infty}\mu^+_{n,\beta}$ exists then it is called a limit Gibbs measure. A limit Gibbs measure for a given type of interaction (energy) may fail to be unique; this means that the physical system with this interaction can occupy several distinct equilibria, i.e., there is a phase transition. Note that (see \cite{Ge}, p.95) for the model (\ref{1}) on $N=\{1,2,...\}$ it was shown that a phase transition occurs iff $\sum_{n\geq 1}e^{-2I_n}<\infty.$ In \cite{Ro}, using a contour argument, it has been proven that for the model (\ref{1}) a phase transition occurs if $I_n+I_{n+k}>k$ for any $n\in \mathbf{Z}, \ k\in N.$ In this paper we consider some (crystal) partition functions of the model and give systems of recursive equations for these functions. Under some conditions on the parameters of the model we describe their solutions.
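As a concrete illustration of definitions (\ref{2})--(\ref{5}), the partition function and Gibbs weights can be computed by direct enumeration for a small volume. The following Python sketch uses hypothetical values of $\beta$ and of the couplings $I_x$ (they are illustrative, not from the text) and reads the sum in (\ref{2}) as running over the links internal to $\Lambda_n$, with the two explicit boundary terms accounting for the links to $\Lambda_n^c$; it checks that the weights (\ref{3}) sum to 1.

```python
from itertools import product
from math import exp

# Hypothetical illustrative parameters (not from the paper):
beta = 0.7
n = 2                                  # Lambda_n = [-2, 2], i.e. 5 spins
I = {x: 1.0 + 0.1 * abs(x) for x in range(-n, n + 2)}  # couplings I_{-n}, ..., I_{n+1}

def H_plus(spins):
    """Energy (2): links internal to Lambda_n plus the two '+' boundary terms."""
    s = dict(zip(range(-n, n + 1), spins))
    e = sum(I[x] for x in range(-n + 1, n + 1) if s[x - 1] != s[x])
    e += I[-n] * (s[-n] != 1) + I[n + 1] * (s[n] != 1)
    return e

configs = list(product((-1, 1), repeat=2 * n + 1))
Z = sum(exp(-beta * H_plus(c)) for c in configs)        # partition function (4)
mu = {c: exp(-beta * H_plus(c)) / Z for c in configs}   # Gibbs measure (3)
assert abs(sum(mu.values()) - 1.0) < 1e-12              # normalization
```

The probability (\ref{5}) of any event $\Omega'_n$ is then obtained by summing the corresponding weights `mu[c]`.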
\section{Partition function of "+" and "$\pm$" -boundary conditions} Consider two types of partition functions: \begin{equation} Z^+_n=\sum_{\sigma_n\in \Omega_n}\exp\{-\beta H^+_n(\sigma_n)\}, \label{7} \end{equation} \begin{equation} Z^{\pm}_n=\sum_{\sigma_n\in \Omega_n}\exp\{-\beta H^{\pm}_n(\sigma_n)\}, \label{8} \end{equation} where $H^+_n$ is defined by (\ref{2}) and \begin{equation} H^{\pm}_n(\sigma_n)=H^+_n(\sigma_n)+I_{-n}\sigma(-n). \label{9} \end{equation} In this paper, for simplicity, we assume \begin{equation} I_n=I_{-n+1},\ \ \mbox{for any} \ \ n\in \mathbf{Z}. \label{10} \end{equation} \begin{proposition}\label{p1} If condition (\ref{10}) is satisfied then the partition functions (\ref{7}) and (\ref{8}) have the form \begin{equation} \begin{array}{llll} Z_n^+={1\over 2}\bigg(\prod_{i=0}^n(1+e^{-\beta I_{i+1}})^2+\prod_{i=0}^n(1-e^{-\beta I_{i+1}})^2\bigg),\\ Z_n^{\pm}={1\over 2}\bigg(\prod_{i=0}^n(1+e^{-\beta I_{i+1}})^2-\prod_{i=0}^n(1-e^{-\beta I_{i+1}})^2\bigg).\\ \end{array}\label{11} \end{equation} \end{proposition} \begin{proof} Under the condition (\ref{10}) we get $Z^-_n=Z^+_n$ and $Z^{\pm}_n=Z^{\mp}_n$. Now from (\ref{7}), (\ref{8}) we obtain the following system of recursive equations \begin{equation} \begin{array}{llll} Z^+_n=(1+e^{-2\beta I_{n+1}})Z^+_{n-1}+2e^{-\beta I_{n+1}}Z^{\pm}_{n-1},\\ Z^{\pm}_n=(1+e^{-2\beta I_{n+1}})Z^{\pm}_{n-1}+2e^{-\beta I_{n+1}}Z^+_{n-1}. \end{array}\label{12} \end{equation} Putting $X_n=Z^+_n-Z^{\pm}_n$ and $Y_n=Z^+_n+Z^{\pm}_n$, from (\ref{12}) we get \begin{equation} \begin{array}{llll} X_n=(1-e^{-\beta I_{n+1}})^2X_{n-1},\\ Y_n=(1+e^{-\beta I_{n+1}})^2Y_{n-1}.\\ \end{array}\label{13} \end{equation} The equalities $X_0=Z^+_0-Z^{\pm}_0=(1-e^{-\beta I_1})^2, \ \ Y_0=(1+e^{-\beta I_1})^2$ with (\ref{13}) imply $$ X_n=\prod_{i=0}^n(1-e^{-\beta I_{i+1}})^2, \ \ Y_n=\prod_{i=0}^n(1+e^{-\beta I_{i+1}})^2.$$ Hence we get (\ref{11}). \end{proof} For example, in the case of the usual Ising model, i.e.,
$I_m=I,$ $\forall m\in \mathbf{Z}$, from (\ref{11}), denoting $\tau=\exp(-\beta I)$, we get $$\begin{array}{llll} Z_n^+={1\over 2}\bigg((1+\tau)^{2(n+1)}+(1-\tau)^{2(n+1)}\bigg),\\ Z_n^{\pm}={1\over 2}\bigg((1+\tau)^{2(n+1)}-(1-\tau)^{2(n+1)}\bigg).\\ \end{array}$$ Using these equalities (for the usual Ising model) it is easy to see that $$ {Z_n^+\over Z_n^{\pm}}\to 1, \ \ {\rm if} \ \ n\to\infty .$$ This means that for the Ising model the partition functions $Z_n^+$ and $Z^{\pm}_n$ are asymptotically equal. In fact, this gives the uniqueness of the limit Gibbs measure for the 1D Ising model. Such an asymptotic equality also holds if $I_m$ is a periodic function of $m$, i.e., $I_{m+p}=I_m$ for some $p\geq 1$ and all $m\in N$. \section{Crystal partition functions} In this section we are going to describe the crystal partition functions. Denote $\Omega_{m,n}=\{-1,1\}^{[m,n]},$ where $[m,n]=\{m, m+1,..., n\},\ m,n \in \mathbf{Z}, \ n\geq m.$ Put $$N_\varepsilon(\sigma)=|\{x\in [m,n]: \sigma(x)=\varepsilon\}|, \ \varepsilon=\pm 1,$$ where $|S|$ is the cardinality of the set $S$. For $r=0,1,..., n-m+1$ consider the following crystal partition functions: \begin{equation} Z^{\varepsilon, r}_{m,n}=\sum_{\sigma\in\Omega_{m,n}:N_{-\varepsilon}(\sigma)=r}e^{-\beta H^\varepsilon(\sigma)}, \ \ \varepsilon =-,+ \label{14} \end{equation} \begin{equation} Z^{\pm, r}_{m,n}=\sum_{\sigma\in\Omega_{m,n}:N_+(\sigma)=r}e^{-\beta H^\pm(\sigma)}. \label{15} \end{equation} Note that $Z^{-,r}_{m,n}=Z^{+,r}_{m,n}$. Denoting $X^r_{m,n}=Z^{-,r}_{m,n}$ and $Y^r_{m,n}=Z^{\pm, r}_{m,n}$, from (\ref{14}) and (\ref{15}) one easily gets the following system of (multi-variable) recursive equations \begin{equation} \begin{array}{llll} X^r_{m,n}=X^{r-1}_{m,n-1}+e^{-\beta I_{n+1}}Y^r_{m, n-1},\\ Y^r_{m,n}=Y^r_{m,n-1}+e^{-\beta I_{n+1}}X^{r-1}_{m, n-1}, \end{array}\label{16} \end{equation} where $r=0,1,...,n-m+1$, $m,n\in \mathbf{Z}$, $n\geq m$.
The system (\ref{16}) can be reduced to a recursive equation with respect to $X^r_{m,n}$. Indeed, from the first equation of (\ref{16}) we get \begin{equation} Y^r_{m, n-1}= e^{\beta I_{n+1}}\left(X^r_{m,n}-X^{r-1}_{m,n-1}\right).\label{17} \end{equation} Now from the second equation of (\ref{16}), using (\ref{17}), we get \begin{equation} X^r_{m,n}=X^{r-1}_{m,n-1}+e^{-\beta (I_{n+1}-I_n)}(X^r_{m,n-1}-X^{r-1}_{m,n-2})+e^{-\beta(I_n+I_{n+1})}X^{r-1}_{m,n-2}, \label{18} \end{equation} where $r=1,2,...,n-m+1,\ n\geq m$ and $$ X^0_{m,m}=1, \ \ X^1_{m,m}=e^{-\beta (I_m+I_{m+1})}, \ \ X^0_{m,m+1}=1,$$ $$X^1_{m,m+1}=e^{-\beta (I_m+I_{m+1})}+e^{-\beta(I_{m+1}+I_{m+2})}, \ \ X^2_{m,m+1}=e^{-\beta (I_m+I_{m+2})},\ m\in \mathbf{Z}.$$ Iterating (\ref{18}) one can obtain an expression for $X^r_{m,n}.$ Then using (\ref{17}) one can find $Y^r_{m,n}.$ But these expressions would be very bulky. We shall now illustrate such an expression for the Ising model, i.e. $I_m\equiv I,\ m\in \mathbf{Z}$. In this case the recurrence equation (\ref{18}) becomes simpler: \begin{equation} X^r_n=X^{r-1}_{n-1}+X^r_{n-1}+(\chi-1)X^{r-1}_{n-2},\label{19} \end{equation} where $X^r_n=X^r_{m,n_1}$ with $n_1-m=n$, $\chi=e^{-2\beta I}.$ It is easy to see that for $r=0,1,2,3,4$ the solutions are $$X^0_n=1, \ \ X^1_n=n\chi, \ \ X^2_n=(n-1)\chi+{(n-2)(n-1)\over 2!}\chi^2,$$ $$X^3_n=(n-2)\chi+(n-2)(n-3)\chi^2+{(n-2)(n-3)(n-4)\over 3!}\chi^3,$$ $$X^4_n=(n-3)\chi+{3(n-3)(n-4)\over 2}\chi^2+{(n-3)(n-4)(n-5)\over 2}\chi^3+$$ $${(n-3)(n-4)(n-5)(n-6)\over 4!}\chi^4,$$ where we used the following formulas: $$\sum_{j=1}^nj^2={1\over 6}(2n^3+3n^2+n),\ \ \sum_{j=1}^nj^3={1\over 4}(n^4+2n^3+n^2).$$ Note that $X^r_n$ has the form $$X^r_n=a_{1,n}^r\chi+a_{2,n}^r\chi^2+...+a_{r,n}^r\chi^r.$$ For the coefficients $a_{k,n}^r$, $0\leq k\leq r\leq n$, using (\ref{19}) we obtain the following system of recursive equations \begin{equation} \begin{array}{llll} a^r_{1,n}=a^r_{1,n-1}+a^{r-1}_{1,n-1}-a^{r-1}_{1,n-2},\\[2mm]
a^r_{k,n}=a^r_{k,n-1}+a^{r-1}_{k,n-1}+a^{r-1}_{k-1,n-2}-a^{r-1}_{k,n-2},\ \ k=2,3,...,r-1,\\[2mm] a^r_{r,n}=a^r_{r,n-1}+a^{r-1}_{r-1,n-2}.\\ \end{array}\label{20} \end{equation} The following lemma gives the solution of (\ref{20}). \begin{lemma} The solution of the system of recursive equations (\ref{20}) is \begin{equation} a^r_{k,n}={n-r+1 \choose k}{r-1\choose k-1}, \ \ 0\leq k\leq r\leq n. \label{21} \end{equation} \end{lemma} \begin{proof} We shall use mathematical induction (cf. \cite{Ni}, pages 148-150). Let $A_m$ denote all cases of (\ref{21}) with $n+k+r=m.$ The formulas given above for $X^r_n,\ \ r=0,1,2,3,4$ show that the formula (\ref{21}) is true for small values of $m$. Assuming that $A_m$ holds, we are to prove $A_{m+1}$, that is, formula (\ref{21}) for any integers $n, r$ and $k$ whose sum is $m+1.$ Since the RHS of equation (\ref{20}) contains only terms with $n+r+k\leq m$, applying the induction hypothesis to each term of the RHS of (\ref{20}) we get (\ref{21}). \end{proof} Thus the solution of (\ref{19}) is given by \begin{equation} X^r_n=\sum_{k=1}^r{n-r+1 \choose k} {r-1\choose k-1}\chi^k. \label{22} \end{equation} \begin{remark} Note that for $\chi=1$ (i.e. there is no interaction) the solution of (\ref{19}) is $X^r_n={n\choose r}.$ Using (\ref{22}) for $\chi=1$ we obtain the following property of binomial coefficients: \begin{equation} {n\choose r}=\sum_{k=1}^r{n-r+1 \choose k}{r-1\choose k-1}. \label{23} \end{equation} This identity is known as Vandermonde's convolution identity. \end{remark} Since the interaction (parameter $I$) of the 1D Ising model is translation-invariant (it does not depend on the points of $\mathbf{Z}$), the unknown functions $X^r_{m,n}, \ Y^r_{m,n}$ of the system (\ref{16}) depend on $n-m$ and $r$ only (see (\ref{19})); consequently, instead of $n-m$ we simply write $n$.
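The closed form (\ref{22}) can also be checked numerically. A minimal Python sketch (the value of $\chi$ below is an arbitrary test value) verifies that (\ref{22}) satisfies the recursion (\ref{19}) and reduces to Vandermonde's identity (\ref{23}) at $\chi=1$:

```python
from math import comb

def X(r, n, chi):
    """Closed form (22); X^0_n = 1 by convention."""
    if r == 0:
        return 1.0
    return sum(comb(n - r + 1, k) * comb(r - 1, k - 1) * chi**k
               for k in range(1, r + 1))

chi = 0.37          # an arbitrary test value of chi = exp(-2*beta*I)
for n in range(2, 12):
    for r in range(1, n + 1):
        lhs = X(r, n, chi)
        rhs = X(r - 1, n - 1, chi) + X(r, n - 1, chi) + (chi - 1) * X(r - 1, n - 2, chi)
        assert abs(lhs - rhs) < 1e-9                  # recursion (19)

# chi = 1 (no interaction): Vandermonde's convolution identity (23)
for n in range(1, 12):
    for r in range(1, n + 1):
        assert X(r, n, 1.0) == comb(n, r)
```

Note that `math.comb(a, b)` returns 0 for `b > a`, so the vanishing terms in (\ref{22}) are handled automatically.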
Summarizing the results for the Ising model, we have \begin{theorem} For the Ising model the solution of the system of recursive equations (\ref{16}) is $$X^r_n=\sum_{k=1}^r{n-r+1 \choose k} {r-1\choose k-1}\chi^k,$$ $$Y^r_n={1\over \sqrt{\chi}}\left(X^r_{n+1}-X^{r-1}_n\right),$$ where $n$ stands for $n-m$. \end{theorem} {\bf Acknowledgments.} I thank the Abdus Salam International Center for Theoretical Physics (ICTP), Trieste, Italy for providing financial support for my visit to the ICTP (February-April 2009).
\section{Introduction} Homological quantum codes \cite{Martin,PHD} are an important class of stabilizer codes; each of them is constructed by selecting a special basis for each vector space in a short chain complex. The most common types of homological quantum codes are surface codes and color codes. In the surface code case \cite{Martin,PHD,Projective}, the chain complexes are those that come naturally from cell embeddings of graphs on surfaces, with bases the cells of their embeddings. The color codes \cite{Bombin}, however, cannot be constructed directly from chain complexes of graphs, but \cite{Nicolas} shows that they can be seen as homological quantum codes coming from \emph{hypergraphs} on surfaces. \par In \cite{Martin}, a new kind of code based on the \(\mathbb{Z}_2\)-homology of \emph{hypermaps} is considered; these are another kind of homological codes related to hypergraphs, since hypermaps themselves can be seen as 2-cell embeddings of the Walsh representations of hypergraphs. The author called them hypermap-homology quantum codes and gave theorems to calculate their distance; he then showed by an example that the new kind of code can have better parameters than the toric code \cite{Kitaev}---the most famous kind of surface code. However, in \cite{Pradeep}, Pradeep Sarvepalli showed by a constructive method that canonical hypermap-homology codes are just another kind of surface codes, which indicates that their parameters cannot be superior to those of a type of homological code already known. \par Fortunately, we can say that Pradeep's work gave us a new method of constructing surface codes, and by taking his method one step further, we can explore some relations between surface codes that people may not have been familiar with before. The most obvious relationship between surface codes is that between a code and its dual code. The dual code means the code coming from the dual complex of the original cell embedding, which is a cell complex of dimension 2.
There are also dual codes in the sense of hypermap homology, which are formed by considering the dual hypermaps mentioned in \cite{Martin}. There, Martin did not discuss the code of the dual hypermap homology directly, but only used the dual hypermap itself in his Proposition 4.16 to calculate the \( X\)-check distance of the original hypermap-homology code. A natural question is then: what is the relationship between the quantum code constructed from a hypermap homology by Martin's method and the one obtained from the dual hypermap by the same method? \par \section{Background} \subsection{Homological quantum codes} In this section, we briefly review the part of the construction of homological quantum codes that we need, assuming the reader is at least familiar with the basics of CSS stabilizer codes and surface codes. For an easy introduction, see \cite{PHD}. A chain complex is a sequence of vector spaces \(V_i\) with linear morphisms \(d_i\) in between satisfying \(d_{i}\circ d_{i+1}=0\). In the context of a homological quantum code, we consider only a smallest block of it, which consists of three vector spaces and two morphisms as follows: \begin{figure}[h] \centering \begin{tikzpicture}[codi] \obj{V_{i+1} & V_i & V_{i-1} \\}; \mor V_{i+1} {d_{i+1}}:-> V_i ; \mor V_i {d_i}:-> {V_{i-1}} ; \end{tikzpicture} \end{figure} \noindent and all vector spaces are limited to \(\mathbb{Z}_2\)-vector spaces. To construct an error-correcting code, we choose a basis for each vector space. Denoting by \([d_i]\) the matrix of \(d_i\) for these bases, we have \([d_{i}\circ d_{i+1}]=[d_i][d_{i+1}]=0\), which means that we can use \(A=\) $\big(\begin{smallmatrix} H_X & 0\\ 0 & H_Z \end{smallmatrix}\big)$ as the binary check matrix for a Calderbank, Shor, and Steane (CSS) code with \(H_X=[d_i]\) and \(H_Z=[d_{i+1}]^T\).
If we denote by \(C_X\) and \(C_Z\) the kernels of the matrices \(H_X\) and \(H_Z\), then by the standard theory of CSS codes, the number of logical qubits is \begin{equation} k=n-\dim(C_X^\perp)-\dim(C_Z^\perp) \end{equation} where \(n\) denotes the dimension of the central vector space \(V_i\). On the other hand, by the definition of homology groups, that is, the quotient spaces \(H_i=\ker d_{i}/\im d_{i+1}\), we also have \(\dim H_i=\dim(\ker{d_i})-\dim(\im{d_{i+1}})=n-\dim{C_X^\perp}-\dim{C_Z^\perp}=k\), which indicates that the number of logical qubits of such a code depends only on the dimension of the central homology group of its chain complex. We call these codes homological quantum codes. \subsection{Hypermaps and their homological codes} We are going to review some basic concepts of hypermaps, their \(\mathbb{Z}_2\)-homology and hypermap-homology quantum codes. There are two kinds of hypermaps, combinatorial hypermaps and topological hypermaps, which are mutually transformable. A \emph{combinatorial hypermap} is easy to define: it is a pair of elements \((\alpha, \sigma)\) of the group \(S_n\) of all permutations on \(B=\{1,2,...,n\}\) under composition, with \(<\sigma, \alpha>\) transitive on \(B\). Here, `transitive' means that every two elements of \(B\) can be transformed into each other by an element of the subgroup \(<\sigma, \alpha>\). To define topological hypermaps, we first need the concept of a hypergraph. A (connected) \emph{hypergraph} is simply a (connected) bipartite graph. However, the edges of the original bipartite graph are renamed the darts of the hypergraph, while we call the vertices of the bipartite graph, which are naturally divided into two separate subsets \(V\) and \(E\), the `vertices' and `edges' of the corresponding hypergraph. Then, following \cite{Martin}, a \emph{topological (oriented) hypermap} is a 2-cell embedding of a connected hypergraph in a compact, connected, oriented surface (2-manifold).
Here, a 2-cell embedding means a CW complex whose underlying topological space is the 2-manifold and whose 1-skeleton is the hypergraph. Readers who do not know about CW complexes can check the textbook \cite{Hatcher} for the definition. For us, the most important feature of a CW complex is that its 2-cells must be homeomorphic to the interior of a disk, say \(x^2+y^2<1\), while there are no such limitations on the closures of these 2-cells. To transform a topological hypermap into a combinatorial one, note first that at every site (we use the word `sites' for the vertices of the original bipartite graph, which comprise both vertices and edges of the hypergraph) the hypergraph is locally star-shaped---there is one site and the darts incident to it. Also note that because the surface is oriented, there exists a global normal vector field with respect to the orientation. Seeing from the top of the normal vector at each site, we can orient the darts clockwise or counterclockwise. (If you do not want to immerse the surface into \(\mathbf{R}^3\), which is needed to find such a normal field, you can directly use the right coordinate charts to assign the orientation of the darts at each site. We do not take this more abstract approach here.) We denote by B the set of all darts of the hypergraph, which is assumed to be labeled by the number set \(\{1,2,...,n\}\) with \(n\) the cardinality of B; we then define a permutation \(\alpha\in{S_B}\) which takes a dart to the next one clockwise (with respect to the normal field) around the \emph{edge} it is incident to, and another permutation \(\sigma\in{S_B}\) which takes a dart to the next one counterclockwise around the \emph{vertex} it is incident to. Now we have the pair \((\alpha, \sigma)\), and the transitivity of \(<\sigma, \alpha>\) is easily seen from the connectivity of the hypergraph.
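The transitivity condition can be checked mechanically once a combinatorial hypermap is given as a pair of permutations. A small Python sketch, using a hypothetical four-dart hypermap (the permutations below are illustrative, not taken from the figures):

```python
def orbit_of(dart, perms):
    """Orbit of a dart under repeated application of the given permutations
    (on a finite set this equals the group orbit, since inverses are powers)."""
    seen, stack = {dart}, [dart]
    while stack:
        d = stack.pop()
        for p in perms:
            if p[d] not in seen:
                seen.add(p[d])
                stack.append(p[d])
    return seen

# A hypothetical combinatorial hypermap on B = {1, 2, 3, 4}:
sigma = {1: 2, 2: 3, 3: 1, 4: 4}   # darts counterclockwise around vertices
alpha = {1: 1, 2: 2, 3: 4, 4: 3}   # darts clockwise around edges
B = set(sigma)

transitive = all(orbit_of(d, [sigma, alpha]) == B for d in B)
print(transitive)   # True: (alpha, sigma) is a valid combinatorial hypermap
```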
When drawing pictures, we usually use a small circle to represent a vertex, a small square to represent an edge, and let the normal vectors point out from the paper. (Of course, we can only draw part of the whole hypermap this way.) Also, we write the label (a number in \(\{1,2,...,n\}\)) of a dart inside the face (2-cell) that is counterclockwise incident to the dart with respect to its edge (the square node of the dart), and say that the dart belongs to this face; in this sense, one dart can only belong to one face. Figure \ref{fig:1} shows an example of a picture of a hypermap. As we can see, the darts labeled 3, 11 and 7 belong to face \(f_2\), while the darts labeled 5 and 6 belong to the face \(f_1\). The dart labeled 4 does not belong to either of these two faces but to another face which is not explicitly labeled here. Also, we have \(\sigma(4)=5\), \(\alpha(6)=7\) and \(\alpha^{-1}(4)=7\).\par \begin{figure} \centering \begin{tikzpicture}[ roundnode/.style={circle, draw=green!60, fill=green!5, very thick, minimum size=2mm}, squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, very thick, minimum size=2mm}, ] \node[squarednode] (edge 1){} ; \node[fakenode] (site 1) [below=0.8cm of edge 1]{}; \node[roundnode] (vertex 3) [below=2.8cm of site 1]{}; \node[roundnode] (vertex 2) [left=1.532cm of site 1] {}; \node[roundnode] (vertex 1) [right=1.532cm of site 1] {}; \node[squarednode] (edge 3) [below=1.8cm of vertex 2] {}; \node[squarednode] (edge 2) [below=1.8cm of vertex 1] {}; \node[roundnode] (vertex 4) [left=1.8cm of edge 3] {}; \node[squarednode] (edge 4) [left=1.8cm of vertex 2] {}; \node[fakenode] (site 3) [above=0.507cm of edge 4]{}; \node[fakenode] (site 4) [left=0.507cm of site 3 ]{}; \node[fakenode] (site 6) [below=0.507cm of vertex 4]{}; \node[fakenode] (site 5) [left=0.507cm of site 6]{}; \node[fakenode] (site 2) [above=0.8cm of edge 1]{}; \node[fakenode] (site 7) [below=0.8cm
of vertex 3]{}; \node[fakenode] (site 11) [right=0.3cm of vertex 1]{}; \node[fakenode] (site 12) [right=0.8cm of vertex 1]{}; \node[fakenode] (site 13) [above=0.666cm of site 11]{}; \node[fakenode] (site 9) [right=0.666cm of edge 2]{}; \node[fakenode] (site 8) [below=0.3cm of site 9]{}; \node[fakenode] (site 10) [above=0.1cm of site 1]{}; \node[fakenode] (site 14) [left=0.366cm of site 10]{3}; \node[fakenode] (site 15) [below=2.27cm of site 14]{7}; \node[fakenode] (site 16) [below=0.8cm of vertex 2]{}; \node[fakenode] (site 17) [right=2.9cm of site 16]{11}; \node[fakenode] (site 18) [left=3.1cm of site 17]{6}; \node[fakenode] (site 19) [left=1.15cm of site 18]{5}; \node[fakenode] (site 20) [left=1.7cm of site 15]{4}; \node[fakenode] (site 21) [right=0.35cm of site 19]{\textcolor{blue!50}{\( f_1\)}}; \node[fakenode] (site 22) [left=0.8cm of site 17]{\textcolor{blue!50}{\(f_2\)}}; \draw[] (edge 1) -- (site 2); \draw[] (edge 2) -- (site 8); \draw[] (edge 4) -- (site 4); \draw[] (vertex 1) -- (site 13); \draw[] (vertex 1) -- (site 12); \draw[] (vertex 3) -- (site 7); \draw[] (vertex 4) -- (site 5); \draw[] (vertex 1) -- (edge 2); \draw[] (vertex 1) -- (edge 1); \draw[] (vertex 2) -- (edge 1); \draw[] (vertex 2) -- (edge 3); \draw[] (vertex 3) -- (edge 3); \draw[] (vertex 3) -- (edge 2); \draw[] (vertex 2) -- (edge 4); \draw[] (vertex 4) -- (edge 4); \draw[] (vertex 4) -- (edge 3); \end{tikzpicture} \caption{Part of a hypermap} \label{fig:1} \end{figure} Notice that when the permutation \(\alpha^{-1}\sigma\) (we take the convention of \cite{Martin}, that is, acting from left to right) acts repeatedly on a dart, the dart circles around its face; that is to say, the orbit of the dart under the action of the subgroup \(<\alpha^{-1}\sigma>\) is the face to which the dart belongs.
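The faces of a combinatorial hypermap can thus be computed as the cycles of \(\alpha^{-1}\sigma\). The Python sketch below does this for a hypothetical four-dart hypermap (the permutations are illustrative), and also reads off the genus of the underlying surface from the standard Euler-type formula for hypermaps, \(z(\sigma)+z(\alpha)+z(\alpha^{-1}\sigma)=n+2-2g\), where \(z(\cdot)\) counts cycles:

```python
def cycles(f, B):
    """Cycle decomposition of a permutation f (a dict) of the finite set B."""
    left, out = set(B), []
    while left:
        start = left.pop()
        cyc, d = [start], f[start]
        while d != start:
            left.discard(d)
            cyc.append(d)
            d = f[d]
        out.append(cyc)
    return out

# A hypothetical four-dart hypermap:
sigma = {1: 2, 2: 3, 3: 1, 4: 4}
alpha = {1: 1, 2: 2, 3: 4, 4: 3}
B = set(sigma)
alpha_inv = {v: k for k, v in alpha.items()}

# Left-to-right convention: (alpha^{-1} sigma)(i) = sigma(alpha^{-1}(i)).
phi = {i: sigma[alpha_inv[i]] for i in B}
faces = cycles(phi, B)            # each cycle of alpha^{-1} sigma is one face

# Euler-type formula: z(sigma) + z(alpha) + z(phi) = |B| + 2 - 2g.
two_minus_2g = len(cycles(sigma, B)) + len(cycles(alpha, B)) + len(faces) - len(B)
print(len(faces), two_minus_2g)   # one square face; 2 - 2g = 2, so genus 0
```

For this toy hypermap the single 4-cycle of \(\alpha^{-1}\sigma\) is the only face, and the surface is a sphere.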
Can we construct an oriented topological hypermap by finding the orbits of the subgroup \(<\alpha^{-1}\sigma>\) of a combinatorial hypermap \((\sigma, \alpha)\) and then gluing these `faces' together? The answer is `Yes'. Suppose that we have found all orbits of \(<\alpha^{-1}\sigma>\); they form a partition \(I\) of \(B=\{1,2,...,n\}\). When all subsets of the partition are acted on by \(\alpha^{-1}\), they form another partition \(O\), which, in the topological hypermap case, represents the `outer' darts of the faces; recall Figure \ref{fig:1}. Now, every element of \(B\) belongs exactly to one subset of \(I\) and one subset of \(O\). For each orbit, which consists of elements \(\{i_1, i_2,...,i_s\}\) with \(i_{k+1}=\alpha^{-1}\sigma(i_k)\) for any \(i_k\) with \(k<s\) and \(i_{1}=\alpha^{-1}\sigma(i_s)\), we draw a face (a polygon) on the paper in the way Figure \ref{fig:2} shows, which has \(s\) inner edges labeled by \(i_k\) and \(s\) outer edges labeled by \(\alpha^{-1}(i_k)\). Each element of \(B\) labels an inner edge of precisely one of these faces (because for any of these faces, its inner edges are exactly one subset of the partition \(I\)) and a \begin{wrapfigure}{r}{0.5\textwidth} \begin{tikzpicture}[ roundnode/.style={circle, draw=black!100, fill=green!0, minimum size=2mm}, squarednode/.style={rectangle, draw=black!100, fill=red!0, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, very thick, minimum size=2mm}, ] \node[fakenode] (site 1) {}; \node[squarednode] (edge 1) [left=1.8 of site 1] {}; \node[squarednode] (edge 5) [right=1.8 of site 1] {}; \node[fakenode] (site 2) [left=1.214 of site 1] {}; \node[roundnode] (vertex 2) [above=1.214cm of site 2] {}; \node[squarednode] (edge 3) [above=1.8 of site 1] {}; \node[roundnode] (vertex 4) [right=2.628 of vertex 2] {}; \node[roundnode] (vertex 8) [below=2.628 of vertex 2] {}; \node[squarednode] (edge 7) [below=1.8 of site 1] {}; \node[roundnode] (vertex 6) [below=2.628 of vertex 4] {};
\node[fakenode] (site i_1) [right=0.45 of vertex 2] {\(i_1\)}; \draw[] (vertex 2) -- (edge 3); \node[fakenode] (site i_2) [below=0.45 of vertex 4]{\(i_2\)}; \draw[] (vertex 4) -- (edge 5); \node[fakenode] (site i_3) [left=0.45 of vertex 6]{\(i_3\)}; \draw[] (vertex 6) -- (edge 7); \node[fakenode] (site i_4) [above=0.45 of vertex 8]{\(i_4\)}; \node[fakenode] (site a^1) [right=0.45 of edge 3 ]{\(\alpha^{-1}(i_1)\)}; \draw[] (vertex 4) -- (edge 3); \node[fakenode] (site a^3) [left=0.45 of edge 7 ]{\(\alpha^{-1}(i_3)\)}; \draw[] (vertex 8) -- (edge 7); \node[fakenode] (site op) [below=0.05 of site i_4 ]{}; \node[fakenode] (site a^2) [right=3.2 of site op ]{\(\alpha^{-1}(i_2)\)}; \draw[] (vertex 6) -- (edge 5); \node[fakenode] (site a_4) [left=3.55 of site i_2 ]{}; \node[fakenode] (site a^4) [above=0.01 of site a_4 ]{\(\alpha^{-1}(i_4)\)}; \draw[] (vertex 2) -- (edge 1); \draw[] (vertex 8) -- (edge 1); \end{tikzpicture} \caption{A face for \(s=4\)} \label{fig:2} \end{wrapfigure} outer edge of precisely one of these faces (because for any of these faces, its outer edges are exactly one subset of the partition \(O\)), and we can glue these two faces along the two edges labeled by this element. Now, for all elements of \(B\), we glue faces this way. (It is possible for a face to be glued with itself, which happens when \(i_k=\alpha^{-1}(i_j)\) for some \(k\), \(j\in \{1,2,...,s\}\).) All faces have their naturally pointed normal fields, i.e., normal vectors pointing out from the paper; this helps us match the orientations when we glue an inner edge with an outer edge. All we need to do is to glue squared node with squared node and round node with round node; then their naturally pointed normal fields will be matched together and form a global normal field.
To see why, notice that when walking along an edge from a round node to a squared node, we are actually circling the normal vectors clockwise when it is an inner edge; otherwise, we would circle the normal vectors counterclockwise. When edges are glued the right way, i.e., round node to round node and squared node to squared node, we will see the local picture right before gluing, viewed from some suitable direction, as Figure \ref{fig:3} shows, in which the left-side edge is an outer edge, while the right-side \begin{wrapfigure}{l}{0.35\textwidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!40, fill=green!20, very thick, minimum size=2mm}, roundnode/.style={circle, draw=black!40, fill=green!20, very thick, minimum size=2mm}, fakenode/.style={rectangle, very thick, minimum size=2mm}, capnode1/.style={rectangle, draw=blue!20, fill=blue!20, very thick, minimum size=2mm}, capnode2/.style={rectangle, draw=orange!20, fill=orange!20, very thick, minimum size=2mm}, capnode3/.style={rectangle, draw=blue!0, fill=blue!0, very thick, minimum size=2mm}, ] \fill[orange!20] (-0.5,0) rectangle (-2,3); \fill[blue!20] (0.5,0) rectangle (2,3); \node[fakenode] (site 1) {}; \draw[very thick, black!40] (0.5,0) -- (0.5,3); \node[roundnode] (vertex 1) [right=0.18cm of site 1] {}; \node[squarednode] (edge 1) [above=2.65 of vertex 1] {}; \draw[very thick, black!40] (-0.5,0) -- (-0.5,3); \node[roundnode] (vertex 2) [left=0.18cm of site 1] {}; \node[squarednode] (edge 2) [above=2.65cm of vertex 2] {}; \node[fakenode] (site 2) [right=1cm of site 1] {}; \node[fakenode] (site 3) [left=1cm of site 1] {}; \node[capnode1] (site 4) [above=1.15cm of site 2]{\textcolor{black!40}{\(f_2\)}}; \node[capnode2] (site 5) [above=1.15cm of site 3]{\textcolor{black!40}{\(f_1\)}}; \end{tikzpicture} \caption{A local picture right before gluing} \label{fig:3} \end{wrapfigure} \noindent one, an inner edge.
Now, according to the previous observation, both of the normal fields of the faces \(f_1\) and \(f_2\) should be pointing out from the paper toward us (at least within the local picture), which means that their natural normal vector fields match perfectly along the edge we glued. Denote by \(X\) the space after gluing. The neighborhood of every point of \(X\) is now surface-like, i.e., homeomorphic to \(\mathbf{R}^2\), except for those points that come from the nodes (squared or round) of the original faces, which are called sites. In the vicinity of these sites, each dart (a pair of glued edges) has exactly two faces incident on it; therefore, the neighborhood of such a site must be a single cone (it is impossible to have two or more cones with only their apexes glued into a single site) without self-intersection, which is topologically a disk. So \(X\) is surface-like near every point---we have got an oriented surface! The edges and nodes of the original faces become the darts and sites of a hypermap on \(X\), and the interiors of the faces themselves become the 2-cells. We have transformed a combinatorial hypermap into a topological one, and it is a simple task to check that this topological hypermap, with normal field the one we glued, is exactly the original hypermap \((\alpha, \sigma)\) in the sense of permutations on \(B\), and that the transformations between these two kinds of hypermaps are mutually reversible. Furthermore, every two darts of the newly constructed hypermap can be transformed into each other by circling around the sites finitely many times, due to the transitivity of \(<\sigma, \alpha>\), which indicates that the surface is connected. Also, the orbits of \(<\alpha^{-1}\sigma>\) are finite, which indicates that the surface is compact.\par Given a topological hypermap, we have four \(\mathbb{Z}_2\)-vector spaces \(\mathcal{V}\), \(\mathcal{E}\), \(\mathcal{F}\), \(\mathcal{W}\) with bases the vertices, edges, faces (2-cells) and darts of the hypermap.
Define \(d_2(f)=\sum_{i\in{f}}^{} \omega_i\), which maps a face to the sum of the darts that belong to it. Here, we use \(\omega_i\) to represent the dart labeled \(i\), as in \cite{Martin,Pradeep}. We also define \(d_1(\omega_i)= v_{\owns{i}} + v_{\owns{\alpha^{-1}(i)}}\), where \(v_{\owns{i}}\) denotes the vertex that dart \(\omega_i\) is incident on (and similarly for \(e_{\owns{i}}\)), and \(\iota(e)=\sum_{i\in{e}}\omega_i\), which maps an edge to the sum of all darts \(\omega_i\) incident on it. (We already know that a face is an orbit of \(<\alpha^{-1}\sigma>\); here, we notice that a site of the hypergraph can also be regarded as an orbit---an edge \(e\) represents an orbit of \(<\alpha>\) or \(<\alpha^{-1}>\) whose elements are the darts incident on \(e\), and likewise a vertex represents an orbit of \(<\sigma>\). This is why we use `\(\owns\)' in the previous definitions.) Based on these definitions, we define \(d_2: \mathcal{F} \rightarrow \mathcal{W}\), \(d_1: \mathcal{W} \rightarrow \mathcal{V}\), and \(\iota : \mathcal{E} \rightarrow \mathcal{W}\) by extending linearly to the whole vector spaces. Now, it is straightforward to check that \(d_1\circ{d_2}=0\) and \(d_1\circ\iota=0\). The second identity tells us that we can define a map \(\partial_1\) from the quotient space \(\mathcal{W}/\iota(\mathcal{E})\) to \(\mathcal{V}\) by \(\partial_1(\omega_i + \iota(\mathcal{E}))=d_1(\omega_i)\) without ambiguity. We also define \(p : \mathcal{W} \rightarrow \mathcal{W}/\iota(\mathcal{E})\) as the natural projection and \(\partial_2=p\circ{d_2}\). All these maps are given in the commutative diagram shown in Figure \ref{jiaohuantu}.
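These maps can be realized concretely as matrices over \(\mathbb{Z}_2\). The following sketch (helper names are ours; a hypermap is passed as the two permutations \(\alpha,\sigma\) on the darts \(1,\dots,n\), encoded as dicts) builds \(d_2\), \(d_1\) and \(\iota\) and checks both identities on a sample pair:

```python
# Sketch (our helper names): the chain maps d2, d1, iota of a combinatorial
# hypermap (alpha, sigma) as matrices over Z_2, plus the two identities
# d1 . d2 = 0 and d1 . iota = 0.

def orbits(p):
    """Cycles of a permutation given as a dict on the darts."""
    seen, out = set(), []
    for i in sorted(p):
        if i not in seen:
            orb, j = [], i
            while j not in seen:
                seen.add(j)
                orb.append(j)
                j = p[j]
            out.append(set(orb))
    return out

def chain_maps(alpha, sigma):
    n = len(alpha)
    a_inv = {v: k for k, v in alpha.items()}
    faces = orbits({i: a_inv[sigma[i]] for i in alpha})  # orbits of alpha^{-1} sigma
    verts = orbits(sigma)
    edges = orbits(alpha)
    # d2 : F -> W, each face column is the sum of the darts belonging to it
    d2 = [[int(i + 1 in f) for f in faces] for i in range(n)]
    # iota : E -> W, each edge column is the sum of its incident darts
    iota = [[int(i + 1 in e) for e in edges] for i in range(n)]
    # d1 : W -> V, dart i maps to v_{(i)} + v_{(alpha^{-1}(i))}, mod 2
    d1 = [[int(i + 1 in v) ^ int(a_inv[i + 1] in v) for i in range(n)]
          for v in verts]
    return d1, d2, iota

def matmul2(A, B):
    """Matrix product over Z_2."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

# a sample pair: alpha = (4 3 2 1)(5 7 8 6), sigma = (7 1 6 3)(5 2 8 4)
alpha = {4: 3, 3: 2, 2: 1, 1: 4, 5: 7, 7: 8, 8: 6, 6: 5}
sigma = {7: 1, 1: 6, 6: 3, 3: 7, 5: 2, 2: 8, 8: 4, 4: 5}
d1, d2, iota = chain_maps(alpha, sigma)
assert all(x == 0 for row in matmul2(d1, d2) for x in row)    # d1 . d2 = 0
assert all(x == 0 for row in matmul2(d1, iota) for x in row)  # d1 . iota = 0
```

The second assertion is the identity that makes \(\partial_1\) well defined on the quotient \(\mathcal{W}/\iota(\mathcal{E})\).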
\begin{wrapfigure}{r}{0.4\textwidth} \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!0, fill=green!0, very thick, minimum size=2mm}, ] \node[squarednode] (site 1) {\(\mathcal{W}\)}; \node[squarednode] (site 2) [right=1.6 of site 1] {\(\mathcal{V}\)}; \node[squarednode] (site 3) [left=1.6 of site 1] {\(\mathcal{F}\)}; \node[squarednode] (site 4) [below=1 of site 1] {\(\mathcal{W}/\iota(\mathcal{E})\)}; \node[squarednode] (fake l) [left=0.3 of site 4]{}; \node[squarednode] (fake r) [right=0.3 of site 4]{}; \node[squarednode] (p1) [above=0.1 of fake l]{\(\partial_2\)}; \draw[->] (site 3) -- (site 4); \node[squarednode] (p2) [above=0.1 of fake r]{\(\partial_1\)}; \draw[->] (site 4) -- (site 2); \node[squarednode] (fake l+) [left=0.4 of site 1]{}; \node[squarednode] (fake r+) [right=0.4 of site 1]{}; \node[squarednode] (d1) [above=-0.2 of fake l+]{\(d_2\)}; \node[squarednode] (d2) [above=-0.2 of fake r+]{\(d_1\)}; \draw[->] (site 1) -- (site 2); \draw[->] (site 3) -- (site 1); \node[squarednode] (center) [below=0.3 of site 1]{}; \node[squarednode] (p) [right=-0.1 of center]{\(p\)}; \draw[->] (site 1) -- (site 4); \end{tikzpicture} \caption{Definition of \(\partial_1\), \(\partial_2\)} \label{jiaohuantu} \end{wrapfigure} Now we have constructed a short chain complex, \(\partial_1\circ\partial_2=0\), from which we obtain what we call hypermap homology. To construct a hypermap-homology quantum code, we need to choose bases for \(\mathcal{F}\), \(\mathcal{W}/\iota(\mathcal{E})\) and \(\mathcal{V}\). For \(\mathcal{F}\) and \(\mathcal{V}\), the faces (2-cells) and vertices themselves are what we want, while there is no such naturally existing basis for \(\mathcal{W}/\iota(\mathcal{E})\).
To choose a basis for \(\mathcal{W}/\iota(\mathcal{E})\), we first choose one \emph{special dart} for each edge \(e\), that is, a dart \(i\) incident on \(e\), and denote by \(S\) the set of all these special darts; it is then straightforward to prove that the elements \(\omega_i+\iota(\mathcal{E})\) with \(i\in{B\setminus{S}}\) form a basis for \(\mathcal{W}/\iota(\mathcal{E})\). Notice that there is also the chain complex \(d_1\circ{d_2}=0\). If we had used it to construct a homological code, we would not have needed to choose special darts, and we would have obtained a code with \(\dim{\mathcal{E}}-1\) more logical qubits than the hypermap-homology code. Why did we not do that? In fact, by copying Pradeep's procedure (we will discuss it later) in this situation, one can show that this code equals a \emph{planar code} \cite{Martin,Projective} formed by puncturing many holes in a closed orientable surface, which would catastrophically reduce its distance; it is therefore of less practical interest. \par As an example, let us construct a hypermap-homology quantum code for the pair \((\alpha, \sigma)\) with \(\alpha=\)(4 3 2 1)(5 7 8 6), \(\sigma=\)(7 1 6 3)(5 2 8 4) in the permutation group \(S_8\). The hypermap has 2 edges, \(e_1=\)(4 3 2 1), \(e_2=\)(5 7 8 6), and 2 vertices, \(v_1=\)(7 1 6 3), \(v_2=\)(5 2 8 4).
The faces can be calculated as follows: \begin{equation}\label{faces} \begin{aligned} \alpha^{-1}\sigma & = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ 8 & 7 & 5 & 6 & 3 & 4 & 2 & 1 \end{pmatrix}\\ & = (1\quad8)(2\quad7)(3\quad5)(4\quad6) \end{aligned} \end{equation} From equation \ref{faces}, we draw the 4 faces as Figure \ref{fig:5} shows: \begin{figure}[h] \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!40, fill=green!20, very thick, minimum size=2mm}, roundnode/.style={circle, draw=black!40, fill=green!20, very thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, minimum size=1mm}, ] \node[roundnode] (vertex 1) [] {}; \node[squarednode] (edge 2) [right=1cm of vertex 1] {}; \node[roundnode] (vertex 2) [right=1.6cm of edge 2] {}; \node[squarednode] (edge 4) [right=1cm of vertex 2] {}; \node[roundnode] (vertex 3) [right=1.6cm of edge 4] {}; \node[squarednode] (edge 6) [right=1cm of vertex 3] {}; \node[roundnode] (vertex 4) [right=1.6cm of edge 6] {}; \node[squarednode] (edge 8) [right=1cm of vertex 4] {}; \node[squarednode] (edge 1) [above=1cm of vertex 1] {}; \node[squarednode] (edge 3) [above=1cm of vertex 2] {}; \node[squarednode] (edge 5) [above=1cm of vertex 3] {}; \node[squarednode] (edge 7) [above=1cm of vertex 4] {}; \node[roundnode] (vertex 8) [above=1cm of edge 2] {}; \node[roundnode] (vertex 7) [above=1cm of edge 4] {}; \node[roundnode] (vertex 6) [above=1cm of edge 6] {}; \node[roundnode] (vertex 5) [above=1cm of edge 8] {}; \node[fakenode] (site 1) [right=0.375cm of vertex 1] {}; \node[fakenode] (site 2) [below=-0.1cm of site 1] {2}; \node[fakenode] (site 3) [right=0.375cm of edge 1] {}; \node[fakenode] (site 4) [above=-0.1cm of site 3] {7}; \draw [very thick, black!40](vertex 8) -- (edge 1); \draw [very thick, black!40](vertex 1) -- (edge 2); \node[fakenode] (site 5) [below=0.375cm of edge 1] {}; \node[fakenode] (site 6) [right=-0.1cm of site 5] {8}; \draw [very thick, black!40](vertex 1) -- (edge 1); \node[fakenode] (site 7) [below=0.375cm of vertex 8] {}; \node[fakenode] (site 8) [left=-0.1cm of site 7] {1}; \draw [very thick, black!40](vertex 8) -- (edge 2); \node[fakenode] (site 1') [right=0.375cm of vertex 2] {}; \node[fakenode] (site 2') [below=-0.1cm of site 1'] {3}; \node[fakenode] (site 3') [right=0.375cm of edge 3] {}; \node[fakenode] (site 4') [above=-0.1cm of site 3'] {5}; \draw [very thick, black!40](vertex 7) -- (edge 3); \draw [very thick, black!40](vertex 2) -- (edge 4); \node[fakenode] (site 5') [below=0.375cm of edge 3] {}; \node[fakenode] (site 6') [right=-0.1cm of site 5'] {7}; \draw [very thick, black!40](vertex 2) -- (edge 3); \node[fakenode] (site 7') [below=0.375cm of vertex 7] {}; \node[fakenode] (site 8') [left=-0.1cm of site 7'] {2}; \draw [very thick, black!40](vertex 7) -- (edge 4); \node[fakenode] (site 1'') [right=0.375cm of vertex 3] {}; \node[fakenode] (site 2'') [below=-0.1cm of site 1''] {4}; \node[fakenode] (site 3'') [right=0.375cm of edge 5] {}; \node[fakenode] (site 4'') [above=-0.1cm of site 3''] {6}; \draw [very thick, black!40](vertex 6) -- (edge 5); \draw [very thick, black!40](vertex 3) -- (edge 6); \node[fakenode] (site 5'') [below=0.375cm of edge 5] {}; \node[fakenode] (site 6'') [right=-0.1cm of site 5''] {5}; \draw [very thick, black!40](vertex 3) -- (edge 5);
\node[fakenode] (site 7'') [below=0.375cm of vertex 6] {}; \node[fakenode] (site 8'') [left=-0.1cm of site 7''] {3}; \draw [very thick, black!40](vertex 6) -- (edge 6); \node[fakenode] (site 1''') [right=0.375cm of vertex 4] {}; \node[fakenode] (site 2''') [below=-0.1cm of site 1'''] {1}; \node[fakenode] (site 3''') [right=0.375cm of edge 7] {}; \node[fakenode] (site 4''') [above=-0.1cm of site 3'''] {8}; \draw [very thick, black!40](vertex 5) -- (edge 7); \draw [very thick, black!40](vertex 4) -- (edge 8); \node[fakenode] (site 5''') [below=0.375cm of edge 7] {}; \node[fakenode] (site 6''') [right=-0.1cm of site 5'''] {6}; \draw [very thick, black!40](vertex 4) -- (edge 7); \node[fakenode] (site 7''') [below=0.375cm of vertex 5] {}; \node[fakenode] (site 8''') [left=-0.1cm of site 7'''] {4}; \draw [very thick, black!40](vertex 5) -- (edge 8); \end{tikzpicture} \caption{Faces of \((\alpha, \sigma)\) } \label{fig:5} \end{figure} \noindent After labeling their inner edges, the outer edges can also be labeled using \(\alpha^{-1}\), as was shown in Figure \ref{fig:2}. Now we can glue the four faces by gluing each inner edge to the outer edge that has the same label (of course, squared node to squared node and round node to round node), and we obtain a topological hypermap. To construct a quantum code, let us choose darts 2 and 5 as special darts; then a basis for \(\mathcal{W}/\iota(\mathcal{E})\) is (1, 3, 4, 6, 7, 8), with \(i\) representing the equivalence class \(\omega_{i}+\iota(\mathcal{E})\). Also, a basis for \(\mathcal{F}\) is (\(f_1\), \(f_2\), \(f_3\), \(f_4\)), representing the faces from left to right in Figure \ref{fig:5}, and a basis for \(\mathcal{V}\) is (\(v_1\), \(v_2\)).
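This bookkeeping can be cross-checked mechanically. A minimal Python sketch (helper names are ours) recomputes the reduction of the special darts in the quotient and the two check matrices for this example:

```python
# Sketch (our helper names): recompute the check matrices of the example
# hypermap alpha = (4 3 2 1)(5 7 8 6), sigma = (7 1 6 3)(5 2 8 4) over Z_2,
# with special darts S = {2, 5}.

def orbits(p):
    seen, out = set(), []
    for i in sorted(p):
        if i not in seen:
            orb, j = [], i
            while j not in seen:
                seen.add(j)
                orb.append(j)
                j = p[j]
            out.append(orb)
    return out

alpha = {4: 3, 3: 2, 2: 1, 1: 4, 5: 7, 7: 8, 8: 6, 6: 5}
sigma = {7: 1, 1: 6, 6: 3, 3: 7, 5: 2, 2: 8, 8: 4, 4: 5}
a_inv = {v: k for k, v in alpha.items()}

faces = orbits({i: a_inv[sigma[i]] for i in alpha})   # (1 8)(2 7)(3 5)(4 6)
vertices = orbits(sigma)                              # v1, v2
edges = orbits(alpha)                                 # e1, e2

S = {2, 5}                                            # one special dart per edge
basis = [i for i in sorted(alpha) if i not in S]      # (1, 3, 4, 6, 7, 8)

def quotient_vec(dart):
    """Coordinates of omega_dart + iota(E) in the chosen basis: a special
    dart equals the sum of the other darts of its edge (mod 2)."""
    v = [0] * len(basis)
    if dart in S:
        e = next(e for e in edges if dart in e)
        darts = [d for d in e if d != dart]
    else:
        darts = [dart]
    for d in darts:
        v[basis.index(d)] ^= 1
    return v

H_Z = []                                              # rows: partial_2 of each face
for f in faces:
    row = [0] * len(basis)
    for d in f:
        row = [a ^ b for a, b in zip(row, quotient_vec(d))]
    H_Z.append(row)

vert_of = {d: k for k, v in enumerate(vertices) for d in v}
H_X = [[int(vert_of[i] == k) ^ int(vert_of[a_inv[i]] == k) for i in basis]
       for k in range(len(vertices))]                 # rows: partial_1 per vertex

assert H_X == [[1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]
assert H_Z == [[1, 0, 0, 0, 0, 1], [1, 1, 1, 0, 1, 0],
               [0, 1, 0, 1, 1, 1], [0, 0, 1, 1, 0, 0]]
```

The second row of \(H_Z\), for instance, comes from rewriting the special dart 2 as \(\omega_1+\omega_3+\omega_4\) and adding \(\omega_7\).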
Under these bases, the binary check matrix \(A\) (see 2.1) is calculated as: \begin{equation}\label{b mat} \begin{aligned} H_X & = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} \qquad H_Z & = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 \end{pmatrix} \end{aligned} \end{equation} \noindent From matrix \(A\), we can directly write out the generators of the hypermap-homology code: \begin{equation} \begin{aligned} X_{v_1}=X_{v_2} & =X_1X_3X_4X_6X_7X_8\\ Z_{f_1} & =Z_1Z_8\\ Z_{f_2} & =Z_1Z_3Z_4Z_7\\ Z_{f_3} & =Z_3Z_6Z_7Z_8\\ Z_{f_4} & =Z_4Z_6 \end{aligned} \end{equation} Only 4 of the above 6 operators are independent, say \(X_{v_1}\), \(Z_{f_1}\), \(Z_{f_2}\), \(Z_{f_3}\), as can be seen from equation \ref{b mat}. This tells us that we have \(6-4=2\) logical qubits---\(k=\dim{H_1}=2\). To physically realize a logical qubit, we can first attach a qubit \(|\phi_i\rangle\) to each non-special dart \(i\in{B\setminus{S}}\), which makes the above operators local, and then project the whole system \(|\psi\rangle=\bigotimes_{i\in{B\setminus{S}}}|\phi_i\rangle\) onto the stabilizer code space \(\mathcal{C}\) by measuring the four operators and applying some extra gates according to the results. \subsection{Relation between hypermap codes and surface codes} We have reviewed Martin's work on the construction of hypermap quantum codes, with more detail on the gluing of faces. In this section, we review Pradeep's construction of equivalent surface codes from hypermap-homology quantum codes; here, we also pay some attention to singular situations. \par To reduce a hypermap-homology quantum code to a surface code, we first draw a curve (a smooth map from \([0,1]\) to the surface) connecting the vertices \(v_{\owns{i}}\) and \(v_{\owns{\alpha^{-1}(i)}}\) for each dart \(\omega_i\in{B}\), and label the curve `\(i\)' again!
These curves should satisfy the following conditions: \begin{itemize} \item None of them can degenerate to a single point. \item Curve \(i\) must lie entirely within the 2-cell to which dart \(\omega_i\) belongs, except for its end points \(v_{\owns{i}}\) and \(v_{\owns{\alpha^{-1}(i)}}\). \item They cannot have any intersections or self-intersections within a 2-cell. \end{itemize} Then, for each special dart \(\omega_i\), we erase the curve \(i\). Now the remaining curves, together with the vertices of the original hypergraph, form the 1-skeleton of a new cell structure (CW-complex) on the surface \(X\). From this new cell structure, we can make a surface code whose underlying chain complex is given by the ordinary boundary maps \(\partial^*_2\) between the 2-cells and 1-cells, and \(\partial^*_1\) between the 1-cells and 0-cells. Algebraic topology shows that its homology group \(H^*_1=\ker \partial^*_{1}/\im \partial^*_{2}\) is just the singular homology group \(H_1(X)\) \cite{Hatcher}.\par Our description of the construction of the new surface code is slightly different from that in \cite{Pradeep}, which makes it more straightforward to deal with singular cases like the two shown below (a dashed line represents a deleted curve that we are interested in): \begin{figure}[h] \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!80, fill=green!0, thick, minimum size=2mm}, roundnode/.style={circle, draw=black!80, fill=green!0, thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, minimum size=1mm}, ] \draw[red!80,thick] (1.3,1.3) .. controls (0.8,1.4) and (0.6,1.3) .. (0.5,1.2); \draw[red!80,thick] (1.38,1.38) .. controls (1.4,0.8) and (1.3,0.6) ..
(1.2,0.5); \draw[red!80,thick] (1.3,1.3) -- (1.0,1.75); \draw[red!80,thick] (1.3,1.3) -- (1.7,0.9); \draw[red!80,thick] (1.3,1.3) -- (1.6,1.75); \node[squarednode] (edge 1) {}; \node[fakenode] (site 2) [right=0.5cm of edge 1] {}; \node[fakenode] (site 3) [above=0.1cm of site 2] {\(i\)}; \node[fakenode] (site 1) [above=1cm of edge 1] {}; \node[roundnode] (vertex 1) [right=1cm of site 1 ] {}; \draw[black!40,thick] (vertex 1.south) .. controls +(down:5mm) and +(right:5mm) .. (edge 1.east); \draw [black!40,thick](vertex 1.west) .. controls +(left:5mm) and +(up:5mm) .. (edge 1.north); \draw[red!80,thick] (vertex 1) .. controls (0.4,1) and (1,0.4) .. (vertex 1); \draw[black!40,thick] (vertex 1) -- (1.3,1.8); \draw[black!40,thick] (vertex 1) -- (1.85,1.28); \draw [black!40,thick](edge 1) -- (-0.5,0); \draw [black!40,thick](edge 1) -- (-0.4,-0.4); \draw[black!40,thick] (edge 1) -- (0,-0.5); \node[fakenode] (site 14) [right=7.25cm of edge 1] {}; \node[fakenode] (site 15) [above=0.4cm of site 14] {\(i\)}; \node[roundnode] (vertex 2) [right=6cm of vertex 1 ] {}; \node[squarednode] (edge 2) [below=0.5cm of vertex 2] {}; \node[squarednode] (edge 3) [left=1cm of edge 2] {}; \node[squarednode] (edge 4) [right=1cm of edge 2] {}; \node[fakenode] (site 4) [below=1.3cm of vertex 2] {}; \node[fakenode] (site 5) [left=1.3cm of site 4] {}; \node[fakenode] (site 6) [right=1.3cm of site 4] {}; \draw [black!40,thick](edge 2) -- (vertex 2); \draw [black!40,thick](edge 3) -- (vertex 2); \draw [black!40,thick](edge 4) -- (vertex 2); \draw [black!40,thick](edge 3) -- (site 5); \draw [black!40,thick](edge 4) -- (site 6); \draw[red!80, thick, dashed] (vertex 2) .. controls (site 5) and (site 6) .. 
(vertex 2); \node[fakenode] (site 7) [above=1cm of site 5] {}; \node[fakenode] (site 8) [above=1cm of site 6] {}; \draw [black!40,thick](edge 3) -- (site 7); \draw [black!40,thick](edge 4) -- (site 8); \node[fakenode] (site 9) [above=0.2cm of vertex 2] {}; \draw [black!40,thick](vertex 2) -- (site 9); \node[fakenode] (site 10) [left=0.1cm of site 9] {}; \node[fakenode] (site 11) [right=0.1cm of site 9] {}; \draw[red!80,thick] (vertex 2) -- (site 10); \draw[red!80,thick] (vertex 2) -- (site 11); \node[fakenode] (site 12) [above=0.1cm of edge 3] {}; \node[fakenode] (site 13) [above=0.1cm of edge 4] {}; \draw[red!80,thick] (vertex 2) -- (site 12); \end{tikzpicture} \caption{Cases when \(v_{\owns{i}}=v_{\owns{\alpha^{-1}(i)}}\). } \label{fig:sing} \end{figure} \noindent The left part is a local picture of the singular case \(v_{\owns{i}}=v_{\owns{\alpha^{-1}(i)}}\) but \(i\neq{\alpha^{-1}(i)}\). Here, the red lines represent the newly added curves, and the explicitly labeled dart is non-special. This dart passes its label \(i\) to the red self-circle inside the 2-cell to which it belongs. For cases where \(i={\alpha^{-1}(i)}\), \(\omega_i\) is the only dart incident on the edge \(e_{\owns{i}}\), and is therefore a special dart. Thus, no matter how we draw the curve \(i\), it is to be erased, as shown in the right part of Figure \ref{fig:sing}.\par We can now compare the two quantum codes step by step. For each face \(f_i\) (2-cell) of the original hypermap, we denote by \(\omega_1^i\), \(\omega_2^i\), \(\omega_3^i\), ... , \(\omega_s^i\) the darts that belong to it, and assume that \(\omega_{k_1}^i\), \(\omega_{k_2}^i\), \(\omega_{k_3}^i\), ... \(\omega_{k_r}^i\) (\(k_j\leqslant{s}\)) are the special darts among them.
Recall that the equivalence classes \(\omega_i+\iota({\mathcal{E}})\) of the non-special darts \(\omega_i\in{B\setminus{S}}\) form a basis for \(\mathcal{W}/\iota(\mathcal{E})\). Under this basis, we can write out the boundary of \(f_i\) as \begin{equation} \begin{aligned} \partial_2{f_i} & = \sum_{n=1}^{s}(\omega_{n}^{i}+\iota({\mathcal{E}}))\\ & = \sum_{n\neq{k_j}, n\in\{1,\cdots,s\}}(\omega_{n}^{i}+\iota({\mathcal{E}}))+\sum_{j=1}^{r}\sum_{ \omega\in<\alpha^{-1}>\cdot{\omega_{k_j}^{i}} ,\omega\neq{\omega_{k_j}^{i}} } (\omega +\iota({\mathcal{E}})) \end{aligned} \end{equation} where \(<\alpha^{-1}>\cdot{\omega_{k_j}^{i}}\) means the orbit of \(\omega_{k_j}^{i}\) under the action of \(<\alpha^{-1}>\). All darts in the second row of equation (5) are non-special and therefore pass their labels to some of the newly added curves, whose union forms the geometrical boundary of a 2-cell \({f}^*_i\) in the new cell structure on \(X\); the boundary matrix \([\partial_2^*f_i^*]\) under the natural basis of all newly added curves is the same as the matrix \([\partial_2f_i]\) of the hypermap boundary under the basis of non-special darts. Next, for each non-special dart \(\omega_i\), the boundary \(\partial_1(\omega_i+\iota(\mathcal{E}))\) is \(v_{\owns{i}} + v_{\owns{\alpha^{-1}(i)}}\), which is the same as the \(\partial^*_1\) boundary of its corresponding curve \(i\)---again, the same matrix. By the definition of homological quantum codes, these two codes are essentially the same code. Thus, we can reduce every hypermap homology code to an equivalent surface code.\par For a specific example of Pradeep's construction, see Figures 8 and 9 of his paper \cite{Pradeep}.
To us, his example contains a subtle situation in which our slightly modified procedure may admit some ambiguity, i.e., there are two ways of drawing curves in the situation shown below:\par \begin{figure}[h] \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!80, fill=green!0, thick, minimum size=2mm}, roundnode/.style={circle, draw=black!80, fill=green!0, thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, minimum size=1mm}, ] \node[squarednode] (edge 1) {}; \node[fakenode] (site 1) [below=0.5cm of edge 1 ] {}; \node[fakenode] (site 2) [right=-0.2cm of site 1 ] {\(i\)}; \node[fakenode] (site 3) [right=0.2cm of site 2 ] {\textcolor{red!80}{\(i\)}}; \node[roundnode] (vertex 1) [right=1.5cm of edge 1 ] {}; \node[roundnode] (vertex 2) [below=1.5cm of edge 1 ] {}; \node[squarednode] (edge 2) [below=1.5cm of vertex 1 ] {}; \node[fakenode] (site 1') [above=0.5cm of edge 2 ] {}; \node[fakenode] (site 2') [left=-0.2cm of site 1' ] {\(j\)}; \node[fakenode] (site 3') [left=0.2cm of site 2' ] {\textcolor{red!80}{\(j\)}}; \draw[red!80,thick] (vertex 1.south) .. controls +(down:7mm) and +(right:7mm) .. (vertex 2.east); \draw[red!80,thick] (vertex 1.west) .. controls +(left:7mm) and +(up:7mm) ..
(vertex 2.north); \draw [black!40,thick](edge 2) -- (vertex 2); \draw [black!40,thick](edge 2) -- (vertex 1); \draw [black!40,thick](vertex 1) -- (edge 1); \draw [black!40,thick](vertex 2) -- (edge 1); \node[fakenode] (site or) [right=2cm of site 2' ] {\emph{or}}; \node[squarednode] (edge 1) [right=6cm of edge 1 ] {}; \node[fakenode] (site 1) [below=0.5cm of edge 1 ] {}; \node[fakenode] (site 2) [right=-0.2cm of site 1 ] {\(i\)}; \node[fakenode] (site 3) [right=0.2cm of site 2 ] {\textcolor{red!80}{\(j\)}}; \node[roundnode] (vertex 1) [right=1.5cm of edge 1 ] {}; \node[roundnode] (vertex 2) [below=1.5cm of edge 1 ] {}; \node[squarednode] (edge 2) [below=1.5cm of vertex 1 ] {}; \node[fakenode] (site 1') [above=0.5cm of edge 2 ] {}; \node[fakenode] (site 2') [left=-0.2cm of site 1' ] {\(j\)}; \node[fakenode] (site 3') [left=0.2cm of site 2' ] {\textcolor{red!80}{\(i\)}}; \draw[red!80,thick] (vertex 1.south) .. controls +(down:7mm) and +(right:7mm) .. (vertex 2.east); \draw[red!80,thick] (vertex 1.west) .. controls +(left:7mm) and +(up:7mm) .. (vertex 2.north); \draw [black!40,thick](edge 2) -- (vertex 2); \draw [black!40,thick](edge 2) -- (vertex 1); \draw [black!40,thick](vertex 1) -- (edge 1); \draw [black!40,thick](vertex 2) -- (edge 1); \end{tikzpicture} \caption{ Ambiguity occurs when \(d_1\omega_i=d_1\omega_j\), but \(i\neq{j}\).} \label{fig:7} \end{figure} \noindent In the left part of Figure \ref{fig:7}, curve \(i\) is closer to dart \(i\), while in the right part, curve \(j\) is closer to dart \(i\). Although both the left and right parts of Figure \ref{fig:7} are correct ways of adding curves according to the three conditions proposed before, the right-hand one is different from Pradeep's original example. Does this cause any problem?
Fortunately, `no'---the domain enclosed by curves \(i\) and \(j\) is a 2-cell, so the overall topological structure is not affected no matter which of the two curves is erased when necessary; furthermore, the existence of the 2-cell \(f^*_i\) in the previous paragraph is also not affected (\(f^*_i\) can be seen as a flower formed by attaching, for each special dart \(\omega_{k_j}^{i}\), the petal-cells enclosed by the curves of the darts \(\omega\in<\alpha^{-1}>\cdot{\omega_{k_j}^{i}} \) to the main disc enclosed by the curves of all the inner darts \(\omega_{n}^{i}\). These petal-cells must have no intersection, which is not affected by the two different choices of Figure \ref{fig:7}.). \section{Quantum codes of dual hypermaps } \subsection{Dual hypermap and Pradeep's surface code} There are also notions of dual hypermaps in both the topological and the combinatorial sense \cite{Martin}. In this section, we show that the structure of Pradeep's surface code of the original hypermap becomes clearer from the perspective of the dual hypermap.\par We adopt the notation used in \cite{Martin}. A topological hypermap can be regarded as a pair \(H=(\Sigma,\Gamma)\) with \(\Sigma\) the underlying oriented 2-manifold and \(\Gamma\) the hypergraph of its 1-skeleton. From \(H\), we can construct the \emph{topological dual} \(H^*=(\Sigma_{op},\Gamma^*)\) as a hypermap satisfying the following conditions: \begin{itemize} \item \(\Sigma_{op}\) is the surface \(\Sigma\) with the opposite orientation. \item The edges of \(H^*\) are the edges of \(H\). \item There is precisely one vertex of \(H^*\) for each face of \(H\), inside that face. \item For each dart labeled \(i\) of \(H\), there is precisely one dart of \(H^*\) that goes from \(e_{\owns{i}}\) to the vertex of \(H^*\) which lies inside the face \emph{\(f_{\owns{i}}\)} that \(i\) belongs to. We label this dart \(i\) again, and all darts of \(H^*\) are thereby labeled.
\item The tangent vector of dart \(i\) of \(H^*\), which starts at \(e_{\owns{i}}\), must lie between the tangent vectors of darts \(i\) and \(\alpha^{-1}(i)\) of \(H\) that start at the same edge. \item The darts \(i\) of \(H^*\) must lie within the faces \(f_{\owns{i}}\), except for the points \(e_{\owns{i}}\), and have no intersections or self-intersections inside the faces. \end{itemize} \begin{figure}[ht] \centering \begin{tikzpicture}[ roundnode/.style={circle, draw=green!60, fill=green!5, very thick, minimum size=2mm}, squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, very thick, minimum size=2mm}, Roundnode/.style={circle, draw=purple!30, fill=purple!10, very thick, minimum size=1mm}, ] \node[squarednode] (edge 1){} ; \node[fakenode] (site 1) [below=0.8cm of edge 1]{}; \node[roundnode] (vertex 3) [below=2.8cm of site 1]{}; \node[roundnode] (vertex 2) [left=1.532cm of site 1] {}; \node[roundnode] (vertex 1) [right=1.532cm of site 1] {}; \node[squarednode] (edge 3) [below=1.8cm of vertex 2] {}; \node[squarednode] (edge 2) [below=1.8cm of vertex 1] {}; \node[roundnode] (vertex 4) [left=1.8cm of edge 3] {}; \node[squarednode] (edge 4) [left=1.8cm of vertex 2] {}; \node[fakenode] (site 3) [above=0.507cm of edge 4]{}; \node[fakenode] (site 4) [left=0.507cm of site 3 ]{}; \node[fakenode] (site 6) [below=0.507cm of vertex 4]{}; \node[fakenode] (site 5) [left=0.507cm of site 6]{}; \node[fakenode] (site 2) [above=0.8cm of edge 1]{}; \node[fakenode] (site 7) [below=0.8cm of vertex 3]{}; \node[fakenode] (site 11) [right=0.3cm of vertex 1]{}; \node[fakenode] (site 12) [right=0.8cm of vertex 1]{}; \node[fakenode] (site 13) [above=0.666cm of site 11]{}; \node[fakenode] (site 9) [right=0.666cm of edge 2]{}; \node[fakenode] (site 8) [below=0.3cm of site 9]{}; \node[fakenode] (site 10) [above=0.1cm of site 1]{}; \node[fakenode] (site 14) [left=0.366cm of site 10]{3}; \node[fakenode] (site 15)
[below=2.27cm of site 14]{7}; \node[fakenode] (site 16) [below=0.8cm of vertex 2]{}; \node[fakenode] (site 17) [right=2.9cm of site 16]{11}; \node[fakenode] (site 18) [left=3.1cm of site 17]{6}; \node[fakenode] (site 19) [left=1.15cm of site 18]{5}; \node[fakenode] (site 20) [left=1.7cm of site 15]{4}; \node[Roundnode] (Vertex 1) [right=0.4cm of site 19]{}; \node[Roundnode] (Vertex 2) [left=1cm of site 17]{}; \draw[] (edge 1) -- (site 2); \draw[] (edge 2) -- (site 8); \draw[] (edge 4) -- (site 4); \draw[] (vertex 1) -- (site 13); \draw[] (vertex 1) -- (site 12); \draw[] (vertex 3) -- (site 7); \draw[] (vertex 4) -- (site 5); \draw[] (vertex 1) -- (edge 2); \draw[] (vertex 1) -- (edge 1); \draw[] (vertex 2) -- (edge 1); \draw[] (vertex 2) -- (edge 3); \draw[] (vertex 3) -- (edge 3); \draw[] (vertex 3) -- (edge 2); \draw[] (vertex 2) -- (edge 4); \draw[] (vertex 4) -- (edge 4); \draw[] (vertex 4) -- (edge 3); \draw[red!100,dashed] (vertex 1) .. controls (0,-0.3) .. (vertex 2); \draw[red!100,dashed] (vertex 1) .. controls (1.7,-2.9) .. (vertex 3); \draw[red!100] (vertex 3) .. controls (-1.7,-2.9) .. (vertex 2); \draw[red!100,dashed] (vertex 4.east) .. controls +(right:12mm) and +(down:12mm) .. (vertex 2.south); \draw[red!100] (vertex 4.north) .. controls +(up:12mm) and +(left:12mm) .. (vertex 2.west); \draw[red!100] (vertex 2) .. controls (-0.2,0) .. (-0.2,0.9); \draw[red!100] (vertex 2) .. controls (-3.8,-0.8) .. (-4.5,-0.3); \draw[red!100] (vertex 4) .. controls (-1.7,-3.5) .. (vertex 3); \draw[red!100,dashed] (vertex 4) -- (-4.5,-4); \draw[red!100,dashed] (vertex 3) -- (-0.3,-5.2); \draw[red!100,dashed] (vertex 4) .. controls (-4.3,-1.3) .. (-4.8,-0.6); \draw[red!100] (vertex 1) .. controls (0.2,0) .. (0.2,0.9); \draw[red!100] (vertex 1) -- (2.2,-0.3); \draw[red!100,dashed] (vertex 1) -- (2.55,-0.4); \draw[red!100,dashed] (vertex 1) -- (2.85,-0.8); \draw[red!100,dashed] (vertex 1) -- (2.85,-1.3); \draw[red!100] (vertex 1) .. controls (2.2,-2.8) .. 
(2.85,-3.4); \draw[red!100] (vertex 3) .. controls (1.8,-3.6) .. (2.65,-4); \draw[red!100] (vertex 3)-- (0.3,-5.2); \draw[red!100] (vertex 4) -- (-4.8,-3.8); \draw[cyan!100] (Vertex 2) -- (edge 1); \draw[cyan!100] (Vertex 2) -- (edge 3); \draw[cyan!100] (Vertex 2) -- (edge 2); \draw[cyan!100] (Vertex 1) -- (edge 4); \draw[cyan!100] (Vertex 1) -- (edge 3); \draw[cyan!100] (-2,-4) -- (edge 3); \draw[cyan!100] (-3.5,0) -- (edge 4); \draw[cyan!100] (-5,-1.5) -- (edge 4); \draw[cyan!100] (-0.7,0.4) -- (edge 1); \draw[cyan!100] (0.7,0.4) -- (edge 1); \node[fakenode] (site 22) [below=2.8cm of vertex 1]{}; \draw[cyan!100] (site 22) -- (edge 2); \draw[cyan!100] (2.8,-2.6) -- (edge 2); \end{tikzpicture} \caption{The dual hypermap and Pradeep's curves} \label{fig:8} \end{figure} Again, there is a small difference in the above conditions from those in \cite{Martin}, which clarifies singular situations where \(i=\sigma(i)\). Figure \ref{fig:8} shows the picture of both the dual hypermap (with darts drawn in sky-blue and vertices in purple) and the curves constructed by Pradeep (here, darts 3, 11 and 6 are in the set \(S\) of special darts; we draw their corresponding curves as dashed lines, which means that they will finally be erased), based on the original hypermap of Figure \ref{fig:1}. We can immediately see from Figure \ref{fig:8} that the hypergraph of the dual hypermap and the graph of Pradeep's surface code are dual to each other in the sense of cell complexes; simply speaking, Pradeep's surface codes are just dual graphs of the dual hypermap with some curves erased! Is that always the case? No: notice that when drawing Pradeep's curves, there is still some freedom under the constraints of section 2.3, which makes it possible to draw them so that curve \(i\) does not cross dart \(i\) in the dual hypermap \(H^*\), or so that there is more than one intersection of curve \(i\) with the hypergraph \(\Gamma^*\).
On the contrary, the same freedom also allows what we want, i.e., we have \begin{theorem} \label{Pro 1} We can construct Pradeep's surface code of the hypermap \(H=(\Sigma,\Gamma)\) by dualizing the graph \(\Gamma^*\) of the dual hypermap \(H^*=(\Sigma_{op},\Gamma^*)\) and erasing the curves that intersect the darts of \(H^*\) whose labels \(i\) belong to \(S\). \end{theorem} \noindent where we keep the vertices of \(\Gamma\) as vertices of Pradeep's curves during dualization; if not, we would obtain a surface code that is equivalent to Pradeep's surface code in a sense similar to the one we discuss in the next paragraph.\par The combinatorial hypermap corresponding to \(H^*\) is easily seen to be \((\alpha',\sigma')=(\alpha^{-1},\alpha^{-1}\sigma)\), and we have \(((\alpha')',(\sigma')')=(\alpha,\sigma)\), which suggests that at the level of topological hypermaps we may also have: \begin{equation} (H^*)^*=H \end{equation} Actually, the equality in equation (6) is correct in the sense of the strong isomorphism mentioned in \cite{Martin}, i.e., we say that topological hypermaps \(H=(\Sigma,\Gamma)\) and \(H'=(\Sigma',\Gamma')\) are strongly isomorphic, and write \(H=H'\), if there exists an orientation-preserving homeomorphism \(u:\Sigma\rightarrow\Sigma'\) with \(u|_\Gamma\) giving an equality of hypergraphs.\par Furthermore, \(H\) itself is one of the hypermap duals of \(H^*\) that satisfy the conditions of the dual hypermap. (For a rigorous treatment, see the proof of proposition 3.) Therefore, Pradeep's surface code of the dual hypermap \(H^*\) with the same set of special darts \(S\) can also be constructed easily by dualizing the original graph \(\Gamma\) and erasing those curves which cross the special darts.
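The combinatorial statement \(((\alpha')',(\sigma')')=(\alpha,\sigma)\) is easy to verify by direct computation; a small Python sketch (permutations as dicts; the example pair from section 2.3 is used for concreteness):

```python
# Check that the combinatorial dual (alpha', sigma') = (alpha^{-1}, alpha^{-1} sigma)
# is an involution: taking the dual twice returns the original pair.

def compose(p, q):            # (p o q)(i) = p(q(i))
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def dual(alpha, sigma):
    a_inv = inverse(alpha)
    return a_inv, compose(a_inv, sigma)

# alpha = (4 3 2 1)(5 7 8 6), sigma = (7 1 6 3)(5 2 8 4)
alpha = {4: 3, 3: 2, 2: 1, 1: 4, 5: 7, 7: 8, 8: 6, 6: 5}
sigma = {7: 1, 1: 6, 6: 3, 3: 7, 5: 2, 2: 8, 8: 4, 4: 5}

assert dual(*dual(alpha, sigma)) == (alpha, sigma)
```

The identity holds abstractly as well: \((\alpha')' = (\alpha^{-1})^{-1} = \alpha\) and \((\sigma')' = (\alpha')^{-1}\sigma' = \alpha\,\alpha^{-1}\sigma = \sigma\).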
\subsection{Edge hypermap codes, $\triangle$-duals and contrary maps} Equation (6) convinces us that the dual hypermap \(H^*\) is really a \emph{dual} of the original hypermap \(H\), and proposition \ref{Pro 1} relates quantum codes of \(H^*\) to the hypergraph \(\Gamma\), and quantum codes of \(H\) to the hypergraph \(\Gamma^*\), which could be regarded as an indirect relation between the two codes because \(\Gamma\) and \(\Gamma^*\) are related by (6). The relation is, however, not as simple as proposition \ref{Pro 1} itself, which is about a graph and its dual graph. To give a more direct relation, we will introduce another kind of dual, \(H^{\triangle}\), which again satisfies \((H^\triangle)^\triangle=H\), and its \emph{contrary map} \(H^\nabla\). But before that, let us first introduce a second way to construct homological quantum codes from hypermaps.\par Recall that for a topological hypermap, we have four vector spaces \(\mathcal{V}\), \(\mathcal{E}\), \(\mathcal{F}\), \(\mathcal{W}\), from which we obtain the chain complex \(\mathcal{F}\xrightarrow{d_2}\mathcal{W}\xrightarrow{d_1}\mathcal{V}\); furthermore, we constructed the subspace \(\iota(\mathcal{E})\), with which we obtain the chain complex \(\mathcal{F}\xrightarrow{\partial_2}\mathcal{W}/\iota(\mathcal{E})\xrightarrow{\partial_1}\mathcal{V}\) of Figure \ref{jiaohuantu}. Then, by choosing a special dart for each edge to form a basis \(\omega_i+\iota(\mathcal{E})\), \(i\in{B}\setminus{S}\), for \(\mathcal{W}/\iota(\mathcal{E})\), the hypermap homology code can be constructed. Now, in the first chain complex, we replace \(\mathcal{F}\) by \(\mathcal{E}\) and \(d_2\) by \(\iota:e\mapsto\sum_{i\in{e}} \omega_i\), but leave \(d_1\) unchanged; then we still have a chain complex, \(d_1\circ{\iota}=0\). Conversely, we replace \(\mathcal{E}\) by \(\mathcal{F}\) and \(\iota\) by \(d_2\), which gives us the subspace \(d_2(\mathcal{F})\) and the quotient \(\mathcal{W}/d_2(\mathcal{F})\).
Because \(d_1\circ{d_2}=0\), again the map \(\hat{\partial_1}:\omega_i+d_2(\mathcal{F})\mapsto{d_1\omega_i}\) is well-defined and we have the diagram shown in \begin{wrapfigure}{r}{0.4\textwidth} \centering \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!0, fill=green!0, very thick, minimum size=2mm}, ] \node[squarednode] (site 1) {\(\mathcal{W}\)}; \node[squarednode] (site 2) [right=1.6 of site 1] {\(\mathcal{V}\)}; \node[squarednode] (site 3) [left=1.6 of site 1] {\(\mathcal{E}\)}; \node[squarednode] (site 4) [below=1 of site 1] {\(\mathcal{W}/d_2(\mathcal{F})\)}; \node[squarednode] (fake l) [left=0.3 of site 4]{}; \node[squarednode] (fake r) [right=0.3 of site 4]{}; \node[squarednode] (p1) [above=0.1 of fake l]{\(\hat{\partial_2}\)}; \draw[->] (site 3) -- (site 4); \node[squarednode] (p2) [above=0.1 of fake r]{\(\hat{\partial_1}\)}; \draw[->] (site 4) -- (site 2); \node[squarednode] (fake l+) [left=0.4 of site 1]{}; \node[squarednode] (fake r+) [right=0.4 of site 1]{}; \node[squarednode] (d1) [above=-0.2 of fake l+]{\(\iota\)}; \node[squarednode] (d2) [above=-0.2 of fake r+]{\(d_1\)}; \draw[->] (site 1) -- (site 2); \draw[->] (site 3) -- (site 1); \node[squarednode] (center) [below=0.3 of site 1]{}; \node[squarednode] (p) [right=-0.1 of center]{\(p\)}; \draw[->] (site 1) -- (site 4); \end{tikzpicture} \caption{Definition of \(\hat{\partial_1}\), \(\hat{\partial_2}\)} \label{fig:9} \end{wrapfigure} Figure \ref{fig:9}. Simply speaking, we interchange \(\mathcal{E}\) and \(\mathcal{F}\), and \(\iota\) and \(d_2\), in Figure \ref{jiaohuantu} to obtain a new chain complex with \(\hat{\partial_1}\circ\hat{\partial_2}=0\). To construct the stabilizer code, we choose one special dart (an inner edge) for each face to obtain \(\hat{S}\); then a basis for \(\mathcal{W}/d_2(\mathcal{F})\) is \(\omega_i+d_2(\mathcal{F})\) with \(i\in{B\setminus{\hat{S}}}\).
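The only property the stabilizer construction needs from either chain complex is that the composite of the two boundary maps vanishes over GF(2); in the standard CSS picture this is exactly the condition for the X- and Z-type checks to commute. A minimal sketch with assumed toy boundary matrices (a 4-cycle with one face, not the paper's hypermap):

```python
# Sketch: in the standard CSS construction, any GF(2) chain complex
# F --d2--> W --d1--> V with d1.d2 = 0 gives commuting X/Z checks.
# The matrices below are an assumed toy example (one face, four darts,
# four vertices on a cycle), not the paper's hypermap.
import numpy as np

d1 = np.array([[1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1],
               [1, 0, 0, 1]])        # W -> V
d2 = np.array([[1], [1], [1], [1]])  # F -> W

assert not ((d1 @ d2) % 2).any()     # chain-complex condition d1.d2 = 0
H_Z = d1                             # Z-type checks
H_X = d2.T                           # X-type checks
assert not ((H_X @ H_Z.T) % 2).any() # X and Z stabilizers commute
```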
In this article we call a homological code obtained this way an \emph{edge hypermap-homology quantum code}, to distinguish it from the original \emph{face hypermap-homology quantum code} proposed in \cite{Martin}. For edge codes, Pradeep's curves can be drawn in the same way as in section 2.3, with one more condition added: \begin{itemize} \item At the beginning of curve \(i\), i.e., vertex \(v_{\owns{i}}\), the tangent line must lie between dart \(i\) and dart \(\sigma^{-1}(i)\), while at the terminal \(v_{\owns{\alpha^{-1}(i)}}\), the tangent line must lie between dart \(\alpha^{-1}(i)\) and dart \(\alpha^{-1}\sigma(i)\). \end{itemize} This condition deals with the singular situation shown in the right part of Figure \ref{fig:sing}. In the edge code's case, the dashed curve may not need to be erased (while the self-circle in the left part of Figure \ref{fig:sing} must be erased), so to make things beautiful (and out of necessity in a later construction), we want an edge to always lie inside the 2-cell of Pradeep's surface code that the edge generates (the center of a `flower', or the main disc in the edge code's case).\par From a given topological hypermap \(H=(\Sigma,\Gamma)\), we can construct another hypermap \(H^{\triangle}=(\Sigma_{op},\Gamma^\triangle)\) satisfying the following constraints: \begin{itemize} \item \(\Sigma_{op}\) is the surface \(\Sigma\) with the opposite orientation. \item The vertices of \(H^\triangle\) are the vertices of \(H\). \item There is precisely one edge of \(H^\triangle\) for each face of \(H\), inside that face. \item For each dart labeled \(i\) of \(H\), there is precisely one dart of \(H^\triangle\) that goes from \(v_{\owns{i}}\) to the edge of \(H^\triangle\) which lies inside the face \(f_{\owns{i}}\) that \(i\) belongs to. We label this dart \(i\) again, and all darts of \(H^\triangle\) are thereby labeled.
\item The tangent vector of dart \(i\) of \(H^\triangle\) that starts at \(v_{\owns{i}}\) must lie between the tangent vectors of darts \(i\) and \(\sigma^{-1}(i)\) of \(H\) that start at the same vertex. \item The darts \(i\) of \(H^\triangle\) must lie within the faces \(f_{\owns{i}}\) except at the points \(v_{\owns{i}}\), and have no intersections or self-intersections inside the faces. \end{itemize} As with the dual hypermap \(H^*\), in the sense of strong isomorphism, \(H^\triangle\) is also unique and satisfies \((H^\triangle)^\triangle=H\), and therefore can be seen as another kind of dual hypermap. Furthermore, by the definition of \(H^\triangle\), its combinatorial hypermap is \((\hat{\alpha},\hat{\sigma})=(\sigma^{-1}\alpha,\sigma^{-1})\), which means that the faces of \(H^\triangle\) are the orbits of \begin{equation} <\hat{\alpha}^{-1}\hat{\sigma}>=<(\sigma^{-1}\alpha)^{-1}\sigma^{-1}>=<\alpha^{-1}>, \end{equation} i.e., the edges of \(H\), while the edges of \(H^\triangle\) are the orbits of \begin{equation} <\hat{\alpha}^{-1}>=<\alpha^{-1}\sigma>, \end{equation} i.e., the faces of \(H\). Thus, when choosing a set of special darts for a face hypermap code of \(H\), their corresponding darts in \(\Gamma^\triangle\) form exactly a set of special darts for an edge hypermap code of \(H^\triangle\), and vice versa. With this set of common special darts, the face hypermap code of \(H\) equals the edge hypermap code of \(H^\triangle\) due to equations (7) and (8). Actually, there exists a common set of Pradeep's curves for these two codes: \begin{theorem} \label{Pro 2} With a common set of special darts \(S=\hat{S}\), the topological hypermaps \(H\) and \(H^\triangle\) can have a common set of Pradeep's curves. As a consequence, the face hypermap-homology quantum code of \(H\) equals the edge hypermap-homology quantum code of \(H^\triangle\) under \(S\).
\end{theorem} \begin{proof} For a dart \(\omega_i\) of \(H\), there is an edge $E_{\owns{i}}$ of $H^\triangle$ that lies inside the face $f_{\owns{i}}$ to which $\omega_i$ belongs; also, there is a dart $\Omega_i$ of $H^\triangle$ which corresponds to $i$ and connects $E_{\owns{i}}$ and $v_{\owns{i}}$. By the conditions on \(H^\triangle\), the darts $\omega_i$, $\alpha^{-1}\omega_i$, $\Omega_i$, $\hat{\alpha}^{-1}\Omega_i$ enclose a tetragon (possibly a degenerate one) whose interior is contained in $f_{\owns{i}}$, and any curve on $\Sigma$ which connects the points \(v_{\owns{i}}\) and $v_{\owns{\alpha^{-1}(i)}}=v_{\owns{\hat{\alpha}^{-1}\hat{\sigma}(i)}}=v_{\owns{\hat{\alpha}^{-1}(i)}}$ and which lies totally in the interior of the tetragon except for its ends obviously satisfies the constraints of a Pradeep's curve for both \(H\) and $H^\triangle$. Furthermore, when \(i\) is in $S$, we simply do not draw any curve inside the tetragon. \end{proof} Next, we will construct both $H^*$ and $H^\triangle$ simultaneously by forcing the edges of $H^\triangle$ to coincide with the vertices of $H^*$ at all faces of \(H\), and avoiding any intersection of the darts of $H^*$ and $H^\triangle$ aside from their common ends. Remember that for \(H\), we write the label of a dart inside the face that is counterclockwise incident to the dart with respect to its edge; in this article, this rule is also applied to $H^*$ and $H^\triangle$. However, because the orientations of both $H^*$ and $H^\triangle$ are opposite to that of \(H\), we can meet this rule for all three hypermaps by writing a single \(i\) inside the triangle enclosed by the darts $\omega_i$, $\omega_i^*$ (the dart of \(H^*\) that corresponds to $i$) and $\Omega_i$, as shown in Figure \ref{fig:10}. This triangle has been mentioned in the literature (see \cite{Triangle} or Definition 4.7 in \cite{Martin}) without explicitly defining \(H^\triangle\).
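Equations (7) and (8), together with the involution \((H^\triangle)^\triangle=H\), can be verified at the level of combinatorial hypermaps. A sketch with an assumed toy pair \((\alpha,\sigma)\), not an example from the paper:

```python
# Sketch: check equations (7), (8) and (H^triangle)^triangle = H for
# (ahat, shat) = (s^-1 a, s^-1), on an assumed toy dart set.

def compose(p, q):
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def triangle(a, s):
    s_inv = inverse(s)
    return compose(s_inv, a), s_inv

alpha = {0: 1, 1: 0, 2: 3, 3: 2}
sigma = {0: 2, 2: 1, 1: 3, 3: 0}

ahat, shat = triangle(alpha, sigma)
# eq (7): ahat^-1 shat = alpha^-1 (faces of H^triangle = edges of H)
assert compose(inverse(ahat), shat) == inverse(alpha)
# eq (8): ahat^-1 = alpha^-1 sigma (edges of H^triangle = faces of H)
assert inverse(ahat) == compose(inverse(alpha), sigma)
# the triangle dual is an involution
assert triangle(ahat, shat) == (alpha, sigma)
```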
\begin{wrapfigure}{l}{0.35\textwidth} \begin{tikzpicture}[ squarednode/.style={rectangle, draw=black!60, fill=cyan!0, very thick, minimum size=2mm}, roundnode/.style={circle, draw=yellow!80, fill=green!0, very thick, minimum size=2mm}, fakenode/.style={rectangle, very thick, minimum size=2mm}, capnode1/.style={rectangle, draw=orange!0, fill=green!40, very thick, minimum size=2mm}, capnode2/.style={circle, draw=black!60, fill=purple!0, very thick, minimum size=2mm}, capnode3/.style={rectangle, draw=blue!0, fill=blue!0, very thick, minimum size=2mm}, ] \node[squarednode] (edge 1) [] {}; \node[fakenode] (site 1) [below=1.532 of edge 1] {}; \node[roundnode] (vertex 2) [right=0.8 of site 1] {}; \node[capnode2] (vertex 1) [left=0.8 of site 1] {}; \node[capnode1] (edge 2) [right=0.845 of site 1] {}; \draw[thick, black!60] (edge 1) -- (vertex 1); \draw[thick, yellow!80] (edge 1) -- (vertex 2); \draw[thick, green!40] (vertex 2) -- (vertex 1); \node[fakenode] (site 2) [above=0.3 of site 1] {\textcolor{red!60}{\( i\)}}; \node[fakenode] (site 3) [left=0.5 of vertex 1] {}; \end{tikzpicture} \caption{Darts $\omega_i$, $\omega^*_i$, $\Omega_i$ with their colors black, orange and green.} \label{fig:10} \end{wrapfigure} Finally, let's define the \emph{contrary map} \(C\) of a topological hypermap $H=(\Sigma,\Gamma)$ as \(C=(\Sigma_{op}, \overline{\Gamma})\), where $\overline{\Gamma}$ represents the hypergraph obtained by interchanging the edges and vertices of \(\Gamma\). Thus, if we interchange the vertices and edges of the darts of \(H^\triangle\), and turn the normal field back, we get the contrary map \(H^\nabla\). Fortunately, this time, Figure \ref{fig:10} is still the correct way to set the label. 
The combinatorial hypermap of $H^\nabla$ is simply \begin{equation} (\overline{\alpha},\overline{\sigma})=(\hat{\sigma},\hat{\alpha}) \end{equation} by definition; thus we have \begin{equation} <\overline{\alpha}^{-1}>=<\hat{\sigma}^{-1}>=<\sigma>=<\alpha'^{-1}\sigma'>, \end{equation} and \begin{equation} <\overline{\alpha}^{-1}\overline{\sigma}>=<\hat{\sigma}^{-1}\hat{\alpha}>=<\alpha>=<\alpha'^{-1}>. \end{equation} Equations (10) and (11) show that the faces and edges of \(H^*\) and \(H^\nabla\) are interchanged, which we are already familiar with from the discussion of \(H\) and \(H^\triangle\). Actually, we have \begin{theorem} The contrary map \(H^\nabla\) is a \(\triangle\)-dual of the dual hypermap $H^*$, i.e., $H^\nabla=(H^*)^\triangle$. \end{theorem} \begin{proof} The darts of all four hypermaps $(H,H^*,H^\triangle,H^\nabla)$ are naturally related by the same label $i\in{B}$. We denote the four darts labeled $i$ by $(\omega_i,\omega_i^*,\Omega_i,\Omega^*_i)$, with $\Omega_i=\Omega^*_i$ by the definition of the contrary map, while \(\omega_i^*\) and \(\Omega_i\) are defined directly from $\omega_i$. Now, we fix a dart \(\omega^*_i\). The 2-cell of \(H^*\) to which \(\omega^*_i\) belongs is the orbit $<\alpha'^{-1}\sigma'>\cdot\omega^*_i$; we denote it $c^*_i$. The dart of $H^*$ which also belongs to the closure of $c^*_i$ and shares the same edge with $\omega_i^*$ is $\alpha'^{-1}\omega_i^*=\omega^*_{\alpha'^{-1}(i)}=\omega^*_{\alpha(i)}$.
By the definition of the dual hypermap, the tangent vector of $\omega_i^*$ at its edge $e_{\owns{i}}$ must lie between darts $\omega_i$ and $\omega_{\alpha^{-1}(i)}$ of $H$, and the tangent vector of $\omega^*_{\alpha(i)}$ at its edge $e_{\owns{\alpha(i)}}=e_{\owns{i}}$ must lie between darts $\omega_{\alpha(i)}$ and $\omega_{\alpha\alpha^{-1}(i)}=\omega_i$ of $H$. Geometrically, this indicates that the tangent vector of $\omega_i$ at its edge $e_{\owns{i}}$ lies between darts $\omega^*_i$ and $\omega^*_{\alpha(i)}=\omega^*_{\alpha'^{-1}(i)}$ of $H^*$, and furthermore that the vertex $v_{\owns{i}}$ of \(\omega_i\) lies inside $c^*_i$, because $\Gamma$ and $\Gamma^*$ do not intersect beyond their common edges. Since the edges of $H$ must lie on $\Gamma^*$ and therefore cannot be inside $c^*_i$, for each vertex $v$ of $H$ inside $c^*_i$ there is a dart $\omega$ of $H$ that connects $v$ and an edge $e$ of $H$ on the boundary of $c^*_i$. Notice that for each dart \(\omega_\epsilon\) of $H$, its corresponding dart \(\omega^*_\epsilon\) in $H^*$ is the first dart of $H^*$ that \(\omega_\epsilon\) will meet when it is turned counterclockwise around \(e_{\owns{\epsilon}}\) (at least in the vicinity of \(e_{\owns{\epsilon}}\)); hence there must be an inner dart of $c^*_i$ that corresponds to $\omega$, say $(\alpha'^{-1}\sigma')^k(\omega^*_i)$. We then have \(\omega=\omega_{(\alpha'^{-1}\sigma')^k(i)}=\omega_{\sigma^k(i)}\), and therefore \(v=v_{\owns{i}}\), which tells us that $v_{\owns{i}}$ is the only vertex of $H$ that lies inside $c^*_i$ (here, we have actually given a more rigorous proof of equation (6)). Now, because the vertices of $H$ are exactly the edges of $H^\nabla$, we have that for each 2-cell of $H^*$ there is only one edge of $H^\nabla$ inside. By the definition of $H^\triangle$, there is a dart \(\Omega_i=\Omega^*_i\) that connects $v_{\owns{i}}$ and the vertex $v^*_{\owns{i}}$ of \(\omega^*_i\) (see Figure \ref{fig:10}).
Moreover, the tangent vector of \(\Omega^*_i\) at \(v^*_{\owns{i}}\) lies between $\omega_i^*$ and $\omega^*_{\sigma'^{-1}(i)}$. To see this, notice that the dart $\omega^*_{\sigma'^{-1}(i)}$ corresponds to the dart $\omega_{(\alpha^{-1}\sigma)^{-1}(i)}$ of \(H\), whose edge lies counterclockwise from $v_{\owns{i}}$ on the boundary of the 2-cell $c_i$ that \(\omega_i\) belongs to, while the edge of the dart $\omega^*_i$ is the edge of dart \(\omega_i\) and lies clockwise from $v_{\owns{i}}$ on the same boundary. Finally, with the fact that $H^\nabla$ has the opposite orientation to $H^*$, we have proved our result. \end{proof} When choosing a set of special darts $\omega^*_i$ with \(i\in S'\subset{B}\) for \(H^*\), the darts $\Omega^*_i$ ($i\in S'$) are those which correspond to $\omega^*_i$ in the sense of the $\triangle$-dual, by the proof of proposition 3. Now, by propositions 2 and 3, the edge hypermap code of $H^\nabla$ under \(S'\) equals the face hypermap code of \(H^*\) under \(S'\) (this can also be proved directly using equations (10) and (11)), and furthermore, they have a common Pradeep's surface code.\par For reference, in Figure \ref{fig:11} below, we put all four hypermaps \(H,H^*,H^\triangle,H^\nabla\) with their common Pradeep's surface code into a single local picture of Figure \ref{fig:1}. Here, we do not explicitly draw the edges or vertices of $H^\triangle$ and $H^\nabla$.
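The generator identities behind equations (9)--(11) are again mechanical to check at the permutation level; a sketch on an assumed toy dart set (the identities hold for any choice of \((\alpha,\sigma)\)):

```python
# Sketch: check the generator identities behind equations (9)-(11),
# relating H^nabla and H^*, on an assumed toy dart set.

def compose(p, q):
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

alpha = {0: 1, 1: 0, 2: 3, 3: 2}
sigma = {0: 2, 2: 1, 1: 3, 3: 0}

a_star = inverse(alpha)                   # H^*: (a', s')
s_star = compose(inverse(alpha), sigma)
ahat = compose(inverse(sigma), alpha)     # H^triangle: (ahat, shat)
shat = inverse(sigma)
abar, sbar = shat, ahat                   # H^nabla, eq (9)

# eq (10): abar^-1 = sigma = a'^-1 s'
assert inverse(abar) == sigma == compose(inverse(a_star), s_star)
# eq (11): abar^-1 sbar = alpha = a'^-1
assert compose(inverse(abar), sbar) == alpha == inverse(a_star)
```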
\begin{figure}[ht] \centering \begin{tikzpicture}[ roundnode/.style={circle, draw=green!60, fill=green!5, very thick, minimum size=2mm}, squarednode/.style={rectangle, draw=red!60, fill=red!5, very thick, minimum size=2mm}, fakenode/.style={rectangle, draw=orange!0, fill=blue!0, very thick, minimum size=2mm}, Roundnode/.style={circle, draw=purple!30, fill=purple!10, very thick, minimum size=1mm}, Fakenode/.style={circle, draw=orange!0, fill=orange!0, very thick, minimum size=2mm}, ] \node[squarednode] (edge 1){} ; \node[fakenode] (site 1) [below=0.8cm of edge 1]{}; \node[roundnode] (vertex 3) [below=2.8cm of site 1]{}; \node[roundnode] (vertex 2) [left=1.532cm of site 1] {}; \node[roundnode] (vertex 1) [right=1.532cm of site 1] {}; \node[squarednode] (edge 3) [below=1.8cm of vertex 2] {}; \node[squarednode] (edge 2) [below=1.8cm of vertex 1] {}; \node[roundnode] (vertex 4) [left=1.8cm of edge 3] {}; \node[squarednode] (edge 4) [left=1.8cm of vertex 2] {}; \node[fakenode] (site 3) [above=0.507cm of edge 4]{}; \node[fakenode] (site 4) [left=0.507cm of site 3 ]{}; \node[fakenode] (site 6) [below=0.507cm of vertex 4]{}; \node[fakenode] (site 5) [left=0.507cm of site 6]{}; \node[fakenode] (site 2) [above=0.8cm of edge 1]{}; \node[fakenode] (site 7) [below=0.8cm of vertex 3]{}; \node[fakenode] (site 11) [right=0.3cm of vertex 1]{}; \node[fakenode] (site 12) [right=0.8cm of vertex 1]{}; \node[fakenode] (site 13) [above=0.666cm of site 11]{}; \node[fakenode] (site 9) [right=0.666cm of edge 2]{}; \node[fakenode] (site 8) [below=0.3cm of site 9]{}; \node[fakenode] (site 10) [above=0.1cm of site 1]{}; \node[fakenode] (site 14) [right=0.8cm of vertex 2]{3}; \node[fakenode] (site 15) [below=2.27cm of site 14]{}; \node[fakenode] (site 16) [below=0.8cm of vertex 2]{}; \node[fakenode] (site 17) [right=2.9cm of site 16]{}; \node[fakenode] (site 18) [left=3.1cm of site 17]{6}; \node[fakenode] (site 19) [left=1.15cm of site 18]{5}; \node[fakenode] (site 20) [left=1.7cm of site 
15]{4}; \node[Roundnode] (Vertex 1) [right=0.4cm of site 19]{}; \node[Roundnode] (Vertex 2) [left=1cm of site 17]{}; \draw[] (edge 1) -- (site 2); \draw[] (edge 2) -- (site 8); \draw[] (edge 4) -- (site 4); \draw[] (vertex 1) -- (site 13); \draw[] (vertex 1) -- (site 12); \draw[] (vertex 3) -- (site 7); \draw[] (vertex 4) -- (site 5); \draw[] (vertex 1) -- (edge 2); \draw[] (vertex 1) -- (edge 1); \draw[] (vertex 2) -- (edge 1); \draw[] (vertex 2) -- (edge 3); \draw[] (vertex 3) -- (edge 3); \draw[] (vertex 3) -- (edge 2); \draw[] (vertex 2) -- (edge 4); \draw[] (vertex 4) -- (edge 4); \draw[] (vertex 4) -- (edge 3); \draw[red!100,dashed] (vertex 1) .. controls (0,-0.3) .. (vertex 2); \draw[red!100,dashed] (vertex 1) .. controls (1.7,-2.9) .. (vertex 3); \draw[red!100,dashed] (vertex 4.east) .. controls +(right:12mm) and +(down:12mm) .. (vertex 2.south); \draw[red!100] (vertex 4.north) .. controls +(up:12mm) and +(left:12mm) .. (vertex 2.west); \draw[red!100] (vertex 2) .. controls (-0.2,0) .. (-0.2,0.9); \draw[red!100] (vertex 2) .. controls (-3.8,-0.8) .. (-4.5,-0.3); \draw[red!100] (vertex 4) .. controls (-1.7,-3.5) .. (vertex 3); \draw[red!100,dashed] (vertex 4) -- (-4.5,-4); \draw[red!100,dashed] (vertex 3) -- (-0.3,-5.2); \draw[red!100,dashed] (vertex 4) .. controls (-4.3,-1.3) .. (-4.8,-0.6); \draw[red!100] (vertex 1) .. controls (0.2,0) .. (0.2,0.9); \draw[red!100] (vertex 1) -- (2.2,-0.3); \draw[red!100,dashed] (vertex 1) -- (2.55,-0.4); \draw[red!100,dashed] (vertex 1) -- (2.85,-0.8); \draw[red!100,dashed] (vertex 1) -- (2.85,-1.3); \draw[red!100] (vertex 1) .. controls (2.2,-2.8) .. (2.85,-3.4); \draw[red!100] (vertex 3) .. controls (1.8,-3.6) .. 
(2.65,-4); \draw[red!100] (vertex 3)-- (0.3,-5.2); \draw[red!100] (vertex 4) -- (-4.8,-3.8); \draw[cyan!100] (Vertex 2) -- (edge 1); \draw[cyan!100] (Vertex 2) -- (edge 3); \draw[cyan!100] (Vertex 2) -- (edge 2); \draw[cyan!100] (Vertex 1) -- (edge 4); \draw[cyan!100] (Vertex 1) -- (edge 3); \draw[cyan!100] (-2,-4) -- (edge 3); \draw[cyan!100] (-3.5,0) -- (edge 4); \draw[cyan!100] (-5,-1.5) -- (edge 4); \draw[cyan!100] (-0.7,0.4) -- (edge 1); \draw[cyan!100] (0.7,0.4) -- (edge 1); \node[fakenode] (site 22) [below=2.8cm of vertex 1]{}; \draw[cyan!100] (site 22) -- (edge 2); \draw[cyan!100] (2.8,-2.6) -- (edge 2); \draw[green!30,thick] (Vertex 1) -- (vertex 4); \draw[green!30,thick] (Vertex 1) -- (vertex 2); \draw[green!30,thick] (Vertex 2) -- (vertex 1); \draw[green!30,thick] (Vertex 2) -- (vertex 2); \draw[green!30,thick] (Vertex 2) -- (vertex 3); \node[Fakenode] (dot 1) [above=0.8cm of vertex 2]{}; \node[Fakenode] (dot 2) [below right=0.5cm of vertex 3]{}; \node[Fakenode] (dot 3) [below left=0.5cm of vertex 3]{}; \node[Fakenode] (dot 4) [below right=0.5cm and 0.2cm of vertex 4]{}; \node[Fakenode] (dot 5) [above left=0.05cm and 0.5cm of vertex 4]{}; \node[Fakenode] (dot 6) [above left=0.5cm and -0.2cm of vertex 1]{}; \node[Fakenode] (dot 7) [below right=0.2cm and 0.5cm of vertex 1]{}; \node[Fakenode] (dot 8) [right=0.6cm of Vertex 2]{11}; \node[Fakenode] (dot 9) [right=0.7cm of edge 3]{7}; \draw[green!30,thick] (dot 1) -- (vertex 2); \draw[green!30,thick] (dot 6) -- (vertex 1); \draw[green!30,thick] (dot 7) -- (vertex 1); \draw[green!30,thick] (dot 3) -- (vertex 3); \draw[green!30,thick] (dot 2) -- (vertex 3); \draw[green!30,thick] (dot 5) -- (vertex 4); \draw[green!30,thick] (dot 4) -- (vertex 4); \draw[red!100] (vertex 3) .. controls (-1.7,-2.9) .. (vertex 2); \draw[green!30,thick] (3,-0.5) -- (vertex 1); \draw[blue!100] (Vertex 1) .. controls (-3.8,-1) .. (-3.4,0); \draw[blue!100,dashed] (-3.8,0) .. controls (-4.1,-0.9) .. 
(-5,-1.3); \draw[blue!100] (Vertex 1) .. controls (-4,-1.3) .. (-5,-1.7); \draw[blue!100,dashed] (Vertex 2) .. controls (-1.8,-3) .. (Vertex 1); \draw[blue!100] (Vertex 1) .. controls (-2.2,-3.2) .. (-2.2,-3.9); \draw[blue!100] (Vertex 2) .. controls (-1.7,-3.3) .. (-1.8,-4); \draw[blue!100,dashed] (Vertex 2) .. controls (-0.2,-0.2) .. (-0.8,0.2); \draw[blue!100] (Vertex 2) .. controls (0.2,-0.2) .. (0.8,0.2); \draw[blue!100] (-0.5,0.7) .. controls (0,0.5) .. (0.5,0.7); \draw[blue!100,dashed] (Vertex 2) .. controls (1.9,-2.7) .. (2.7,-2.3); \draw[blue!100] (Vertex 2) .. controls (1.6,-3.4) .. (1.7,-4); \draw[blue!100] (2.9,-2.9) .. controls (2.3,-3.4) .. (2.2,-4); \end{tikzpicture} \caption{\(H,H^*,H^\triangle,H^\nabla\) and their surface codes} \label{fig:11} \end{figure} \section{Conclusion} Now, we can go back to the question---what is the relationship between the homological quantum code of a hypermap and that of its dual? More specifically, what is the relation between the face hypermap codes of $H$ and $H^*$, with a set of naturally corresponding special darts $\omega_i$ and $\omega^*_i$, where $i\in{S}$? By Figure \ref{fig:11}, there does seem to be some geometrical duality between the Pradeep's surface codes of the two, but it is not as simple as the ordinary duality of graphs (CW complexes). Even worse, when the curves corresponding to special darts are erased, the symmetry seems to be completely broken.
Fortunately, after connecting the vertices of $H$ and the vertices of $H^*$ by the green lines of Figure \ref{fig:11}, and then properly setting the orientation, the red curves appear to form a Pradeep's surface code of $H^\triangle$, while the blue curves appear to form a Pradeep's surface code of $H^\nabla$. Since both the geometrical relation between $H^\triangle$ and $H^\nabla$ and the algebraic relation between their combinatorial hypermaps (see equation (9)) are much simpler than those between $H$ and $H^*$, we can say that we have found a simple relation between the two quantum codes, which is the primary observation of this article. However, as we can see in Figure \ref{fig:11}, both $\Omega_3$ and $\Omega_{11}$ are special darts of \(H^\triangle\), and they share the same edge, which contradicts the definition of the (face) hypermap code; this is why we needed to introduce the edge codes. Now, by the theory developed in this article, the answer to the question is: these two codes are simply the edge hypermap-homology quantum codes of $H^\triangle$ and \(H^\nabla\), with special darts $\Omega_i=\Omega^*_i$ ($i\in{S}$). \medskip \bibliographystyle{unsrt}
\section{Introduction} There is evidence for solar or super-solar metallicities in the circumnuclear environments of quasars out to redshifts $z$$>$4 (e.g., Hamann \& Ferland 1999; Kurk et al. 2007; Jiang et al. 2007; Juarez et al. 2009; Matsuoka et al. 2009). This evidence, mainly from optical lines, is supported by millimeter detections of CO and dust in high-redshift sources, indicating rapid metal enrichment due to starbursts in the circumnuclear regions of at least some galaxies in the early universe (e.g., Solomon \& Vanden Bout 2005). This enrichment, however, might apply mainly to atomic nuclei that are synthesized in short-lived massive stars, and not so much to ``secondary'' nuclei like $^{13}$C that are thought to be mainly synthesized in longer-lived, less-massive stars (but see, e.g., Hamann et al. 2002 for the mainly secondary element nitrogen). In the local universe, $^{12}$C/$^{13}$C abundance ratios are sometimes considered to be a diagnostic of deep stellar mixing and a measure of ``primary'' vs.\ ``secondary'' nuclear processing (e.g., Wilson \& Rood 1994). While $^{12}$C is produced by He burning on rapid time scales in massive stars, $^{13}$C is mainly synthesized by CNO processing of $^{12}$C seed nuclei from earlier stellar generations. This processing occurs more slowly, during the red giant phase in low- and intermediate-mass stars or novae. The $^{12}$C/$^{13}$C ratio may therefore depend on the nucleosynthesis history. It could be much higher in high-$z$ galaxies that are too young to have synthesized large amounts of secondary nuclei like $^{13}$C. At optical, near-IR, and UV wavelengths it is difficult to discriminate between an element's isotopes because their atomic lines are blended (e.g., Levshakov et al. 2006). The prospects are better with radio lines from isotopic substitutions in molecules, which are well separated by a few percent of their rest frequency from the main species. 
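The "few percent" separation and the resulting sky frequencies can be made concrete; a sketch assuming the standard laboratory rest frequencies of the \(J\)=3--2 lines (values not stated in this paper):

```python
# Sketch: the "few percent" isotopic separation and the redshifted sky
# frequencies relevant to this paper, assuming the standard laboratory
# rest frequencies of the J=3-2 lines.
nu_12co = 345.7959899        # GHz, 12CO(3-2) rest frequency
nu_13co = 330.5879653        # GHz, 13CO(3-2) rest frequency
z = 2.55784

shift = (nu_12co - nu_13co) / nu_12co
print(f"isotopic separation: {100 * shift:.1f}%")          # ~4.4%
print(f"12CO(3-2) observed:  {nu_12co / (1 + z):.4f} GHz") # ~97.193
print(f"13CO(3-2) observed:  {nu_13co / (1 + z):.4f} GHz") # ~92.918
```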
This separation allows both the main and rare species to be easily identified, and to be observable with the same radio receivers and spectrometers. The Cloverleaf Quasar (H1413+117), partly because of amplification by gravitational lensing, is a high-$z$ source with exceptional peak flux densities in $^{12}$C$^{16}$O (hereafter $^{12}$CO; see Appendix~2 of Solomon \& Vanden Bout 2005). This source is therefore one of the best candidates to search for $^{13}$C$^{16}$O (hereafter $^{13}$CO) to try to test models of ``chemical'' evolution over a Hubble time. In this paper we report on a search for $^{13}$CO(3--2) emission in the Cloverleaf at $z$=2.5579, when the universe was 2.5\,Gyr old. \section{Observations} The measurements were made with the IRAM Interferometer on Plateau de Bure, France, in July, August, and September 2008, with 5 antennas in the compact D-configuration (maximum baseline 97\,m) and the new dual-polarization receivers. The receiver and system single-sideband temperatures were 40 and 100\,K, respectively. The spectrometers covered 1\,GHz in each polarization, and the raw spectral resolution was 2.5\,MHz, or 8.1\,km\,s$^{-1}$. The data were binned to various spectral resolutions; in this paper we present data binned in 19$\times$160\,km\,s$^{-1}$ channels, covering a range of 3040\,km\,s$^{-1}$, with a noise of 0.22\,mJy\,beam$^{-1}$ (1$\sigma$) in each channel. The naturally-weighted synthesized beam was 5\ffas6$\times$4\ffas8 at p.a. 62$^{\circ}$. Because the four CO spots of the lensed Cloverleaf image are spread over 1\ffas7, we included more of the total flux by applying to the $u,v$ data a Gaussian taper that fell to $1/e$ at a radius of 100\,m. The slightly broadened beam then became 6\ffas1$\times$5\ffas4, and the noise in the individual channels is 0.23\,mJy\,beam$^{-1}$. 
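The quoted channel widths follow directly from the setup; a short consistency sketch using only numbers from this section:

```python
# Sketch: consistency of the quoted spectral setup: 2.5 MHz raw
# channels at the 92.918 GHz sky frequency, rebinned into 19 channels
# of 160 km/s each.
c = 299792.458               # km/s, speed of light
nu_obs = 92.91816            # GHz, observed 13CO(3-2) frequency
d_nu = 2.5e-3                # GHz, raw channel width

print(f"raw channel width: {c * d_nu / nu_obs:.1f} km/s")  # ~8.1
print(f"total coverage:    {19 * 160} km/s")               # 3040
```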
\begin{figure}[t] \vspace{-0.0cm} \centering \includegraphics[angle=0,width=8.5cm]{f113co.eps} \vspace{-0.0cm} \caption{Contour map of continuum plus $^{13}$CO $J$=3$\rightarrow$2 emission, covering the central 960\,km\,s$^{-1}$ toward the Cloverleaf QSO. The beam is 6\ffas1$\times$5\ffas4 (lower left) and the contour step is 0.09\,mJy (1\,$\sigma$). The peak value and the spatially-integrated intensity in the central source are both 0.6\,mJy. \label{fig1}} \end{figure} \begin{figure}[t] \vspace{-0.0cm} \centering \includegraphics[angle=0,width=8.5cm]{f213co.eps} \caption{Contour map of the 3.2\,mm continuum emission, covering 2080\,km\,s$^{-1}$ in the off-line channels. The beam is 6\ffas1$\times$5\ffas4 (lower left) and the contour step is 0.09\,mJy (1.5\,$\sigma$). The peak and the spatially-integrated flux density of the central source are both 0.3\,mJy. Combining this figure with the previous one, we conclude that in the 960\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi \ band centered on the $^{13}$CO $J$=3$\rightarrow$2 line, line and continuum each contribute half of the total flux density. \label{fig2}} \end{figure} \section{Results} Figures~1 through 3 show the data, and Table~1 summarizes the results. In the integrated line + continuum map (Fig.~1), the peak position (Table~1) agrees well with the centroid of previous high-resolution interferometer maps of the source (e.g., Alloin et al. 1997; Yun et al. 1997; Kneib et al. 1998). At 93\,GHz, the expected continuum is 0.30--0.35\,mJy (from Fig.~3 of Wei{\ss} et al. 2003 and the power-law given in Bradford et al. 2009) and a map in the 13 off-line channels at the positive and negative velocity ends of our spectra indeed yields a continuum flux of 0.3$\pm$0.1\,mJy (Fig.~2). 
This continuum adds to the line signal, and for this reason, the line appears much broader than the $\sim 430$ \,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ widths of the $^{12}$CO and [C{\sc i}] lines (Wei{\ss} et al. 2003). The observed line has a low signal-to-noise ratio, which prevents a clear distinction between line and continuum, and does not allow us to constrain the line shape. Above the 0.3\,mJy continuum, a Gaussian fit yields an integrated line flux of (0.3$\pm$0.1)\,Jy\,km\,s$^{-1}$ (Fig.~3, see also the much higher upper limit given by Barvainis et al. 1997, their Table~1). An alternative Gaussian fit, with the line width fixed to the width of the $^{12}$CO line, yields a peak line flux density of (0.44$\pm$0.12)\,mJy\,beam$^{-1}$, and the same integrated line flux as the fit shown in Fig.~3. This integrated flux, corrected for frequency squared, leads to a $^{12}$CO/$^{13}$CO $J$=3$\rightarrow$2 line luminosity ratio ($=$ brightness temperature ratio) of 40$^{+25}_{-8}$ (Table~1). This value is conservative. With the line width fixed to the width of the $^{12}$CO line and the actual peak flux density of order 0.35\,mJy, the ratio would become $\sim$75. \begin{table} \label{tab1} \begin{threeparttable} \caption{$^{13}$CO(3--2) Observations and results.} \begin{tabular}{lc} \hline {Parameter}&{$^{13}$CO(3--2)}\\ \hline \multicolumn{2}{l}{{\it Observed CO(3--2) quantities:}} \\ R.A. (J2000) &14$^{\rm h}$ 15$^{\rm m}$ 46\ffs28 $\pm$ 0\ffs03\\ Dec. 
(J2000) &+11$^{\circ}$ 29$'$ 44\ffas0 $\pm$ 0\ffas4 \\ Center frequency (GHz) &92.91816 \\ Redshift (LSR) $^{\rm a)}$ &$2.55784\pm 0.00003$ \\ Continuum flux density (mJy) $^{\rm b)}$ &$0.3\pm 0.1$ \\ Integrated $^{13}$CO flux (Jy\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi ) &$0.3\pm 0.1$ \\ \\ \multicolumn{2}{l}{{\it Derived CO(3--2) quantities:}} \\ $L^\prime$($^{13}$CO) (K\,\kms\,pc$^2$)$^{\rm c)}$ &($1.1\pm 0.3$)$\times 10^{10}$ \\ $L^\prime$($^{12}$CO) (K\,\kms\,pc$^2$)$^{\rm c)}$ &($45.9\pm 3$)$\times 10^{10}$ \\ $L^\prime$ ratio $^{12}$CO/$^{13}$CO(3--2) &40$^{+25}_{-8}$ \\ \hline \end{tabular} \begin{tablenotes} \item[a)] adopted from $^{12}$CO (Wei{\ss} et al. 2003). \item[b)] in a beam of 6\ffas1$\times$5\ffas4. \item[c)] This is the lens-amplified value for a luminosity distance of $D_L$ = 21.28\,Gpc ($H_0 =$71\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\,Mpc$^{-1}$, $\Omega_m =$ 0.27, $\Omega_{\rm vac} =$ 0.73) and an angular diameter distance of $D_A$ = 1.682\,Gpc; linear scale: 1$'' \leftrightarrow 8152$\,pc (Wright 2006). \end{tablenotes} \end{threeparttable} \end{table} \section{Large velocity gradient model calculations} $^{12}$CO lines have higher optical depths than those of $^{13}$CO. Therefore, the measured $^{12}$CO/$^{13}$CO line intensity ratio (Sect.\,3) is a lower limit to the $^{12}$CO/$^{13}$CO abundance ratio. To further constrain the $^{12}$CO/$^{13}$CO abundance ratio of the Cloverleaf QSO, Table~2 provides flux densities and brightness temperatures of seven $^{12}$CO transitions. To simulate these values, a large velocity gradient (LVG) model was used with collision rates from Flower (2001), a cosmic microwave background of 9.7\,K, and an ortho-to-para H$_2$ abundance ratio of three (e.g., Wei{\ss} et al. 2005, 2007; Riechers et al. 2006b). The latter is, however, not critical for this study.
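The derived quantities of Table~1 can be reproduced with the standard line-luminosity relation of Solomon \& Vanden Bout (2005); the numerical prefactor below is the standard one and is not stated explicitly in this paper, and the fluxes are the lens-amplified values quoted above:

```python
# Sketch: reproduce the derived quantities of Table 1 with the standard
# line-luminosity relation (Solomon & Vanden Bout 2005),
#   L' = 3.25e7 * S_dv * nu_obs^-2 * D_L^2 * (1+z)^-3  [K km/s pc^2],
# with S_dv in Jy km/s, nu_obs in GHz, D_L in Mpc.
z, D_L = 2.55784, 21280.0    # redshift and luminosity distance (Mpc)

def L_prime(S_dv, nu_obs):
    return 3.25e7 * S_dv * nu_obs**-2 * D_L**2 * (1 + z)**-3

L12 = L_prime(13.2, 97.1928)     # 12CO(3-2), lens-amplified
L13 = L_prime(0.3, 92.91816)     # 13CO(3-2), lens-amplified
print(f"L'(12CO) = {L12:.2e} K km/s pc^2")  # ~4.6e11
print(f"L'(13CO) = {L13:.2e} K km/s pc^2")  # ~1.1e10
print(f"ratio    = {L12 / L13:.0f}")        # ~40
```

Note that the luminosity ratio depends only on the flux ratio and the squared frequency ratio, so the lensing magnification and cosmology cancel.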
\begin{figure*}[t] \vspace{-0.0cm} \centering \resizebox{17.8cm}{!}{\rotatebox[origin=br]{-90.00}{\includegraphics{f313co.eps}}} \vspace{-0.0cm} \caption{CO $J$=3$\rightarrow$2 from the Cloverleaf QSO, measured with the IRAM interferometer. {\it Left:} $^{12}$CO(3--2) profile in 10\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ channels from Wei{\ss} et al. (2003). Velocity offsets are relative to 97.1928\,GHz. {\it Right:} $^{13}$CO(3--2) profile in 160\,\ifmmode{{\rm \ts km\ts s}^{-1}}\else{\ts km\ts s$^{-1}$}\fi\ channels, from this paper. Velocity offsets are relative to 92.91816\,GHz ($z$ = 2.55784). The red curves show Gaussian fits above a continuum of 0.3\,mJy. \label{fig3}} \end{figure*} We calculated a grid for $^{12}$CO/$^{13}$CO with kinetic temperatures of 30--100\,K and $^{12}$CO fractional abundances per velocity interval of [$^{12}$CO]/([H$_2$](d$v$/d$r$)) = 10$^{-4...-7}$\,pc\,(km\,s$^{-1}$)$^{-1}$. Accounting for possible effects of cloud structure, not only a spherical but also a plane-parallel cloud morphology was considered, with escape probabilities $\beta_{\rm spherical}$ = (1--e$^{-\tau}$)/$\tau$ and $\beta_{\rm plane-parallel}$ = (1--e$^{-3\tau}$)/(3$\tau$), respectively ($\tau$: optical depth). Resulting $^{12}$CO/$^{13}$CO abundance ratios reproducing the six measured $^{12}$CO line intensity ratios (Table~2) are given in Figs.~4 and 5 together with reduced $\chi^2$ ($\chi^2_{\rm red}$) values of the best fit. We adopted a 1$\sigma$ error of 15\% for each fitted brightness temperature ratio. The dependence of the resulting $^{12}$CO/$^{13}$CO ratios on cloud morphology is caused by the different escape probabilities, related to $\tau$ in the case of a spherical and to 3$\tau$ in the case of a plane-parallel cloud geometry.
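A short numerical sketch of the two LVG escape probabilities, written here in the standard forms $\beta_{\rm sphere}=(1-{\rm e}^{-\tau})/\tau$ and $\beta_{\rm slab}=(1-{\rm e}^{-3\tau})/(3\tau)$:

```python
# Sketch: the two LVG escape probabilities. At fixed tau the
# plane-parallel (slab) beta is smaller, i.e. photon trapping is
# stronger, so a given excitation is reached at lower optical depth.
import math

def beta_sphere(tau):
    return (1 - math.exp(-tau)) / tau

def beta_slab(tau):
    return (1 - math.exp(-3 * tau)) / (3 * tau)

for tau in (0.1, 1.0, 10.0):
    print(f"tau={tau:5.1f}  sphere={beta_sphere(tau):.3f}"
          f"  slab={beta_slab(tau):.3f}")
```

Both expressions tend to 1 for optically thin gas and fall off as $1/\tau$ (respectively $1/3\tau$) in the optically thick limit.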
Therefore, a required amount of excitation through photon trapping is reached at lower $^{12}$CO optical depths in the case of a plan-parallel morphology, resulting in smaller $^{12}$CO/$^{13}$CO abundance ratios. \begin{table} \label{tab2} \begin{threeparttable} \caption[]{CO line ratios in the Cloverleaf.} \begin{flushleft} \begin{tabular}{cccc} \hline Line & Integrated & $T_b$ ratio$^{\rm a}$ & Reference$^{\rm b}$ \\ & line flux & to & \\ & (Jy\,km\,s$^{-1}$) & $^{12}$CO(3--2) & \\ \hline & & & \\ CO(3--2) & 13.2$\pm$2.0 & 1.00$\pm$0.15 & 1 \\ CO(4--3) & 21.1$\pm$3.2 & 0.90$\pm$0.13 & 2 \\ CO(5--4) & 24.0$\pm$3.6 & 0.65$\pm$0.09 & 2 \\ CO(6--5) & 37.0$\pm$5.6 & 0.70$\pm$0.10 & 3 \\ CO(7--6) & 45.3$\pm$6.8 & 0.63$\pm$0.09 & 3 \\ CO(8--7) & 51.4$\pm$7.7 & 0.55$\pm$0.08 & 3 \\ CO(9--8) & 41.8$\pm$6.3 & 0.35$\pm$0.05 & 3 \\ & & \\ $^{13}$CO(3--2) &0.3$\pm$0.1 & 0.025$^{+0.006}_{-0.009}$ & 4 \\ & & \\ \hline \end{tabular} \begin{tablenotes} \item[a)] If all lines have the same area filling factor. Adopted 1$\sigma$ errors are $\pm$10\% for the flux densities and $\pm$15\% for the brightness temperature ratios. \item[b)] (1) Wei{\ss} et al. (2003); (2) Barvainis et al. (1997); (3) Bradford et al. (2009); (4) this paper. \end{tablenotes} \end{flushleft} \end{threeparttable} \end{table} The $\chi^2_{\rm red}$ values displayed in Figs.~4 and 5 indicate that the CO data can be fitted by a single molecular gas component (cf. Bradford et al. 2009). All calculations are also consistent with the (not very stringent) upper limits for the $^{13}$CO $J$ = 7$\rightarrow$6 and 8$\rightarrow$7 flux densities from Bradford et al. (2009). At first sight, the figures do not strongly reduce the permitted parameter space, providing $\chi^2_{\rm red}$ values of order 1.25--2. In the upper left corners of each figure, however, the $\chi^2_{\rm red}$ values rise significantly, becoming too large to provide credible solutions. 
As a consequence, the overall $^{12}$CO/$^{13}$CO abundance ratio appears to be $>$100 in the Cloverleaf QSO. There exist further constraints: (1) $T_{\rm kin}$ $<$ 30\,K is prohibitive because of the temperatures determined from C{\sc i} ($\sim$30\,K) and the dust ($\sim$50\,K, Wei{\ss} et al. 2003). Furthermore, such low temperatures would require extreme CO column densities to raise photon trapping to such levels that the emission from the higher $J$ transitions could be reproduced. (2) $T_{\rm kin}$ $>$ 50\,K is also not likely because of the temperature deduced from [C{\sc i}] and the close association of CO and C{\sc i}, which appears to be independent of the environment (e.g., Ikeda et al. 2002; Zhang et al. 2007). For 30\,K $\leq T_{\rm kin} \leq$ 50\,K and [$^{12}$CO]/([H$_2$](d$v$/d$r$)) = 10$^{-7}$\,pc\,(km\,s$^{-1}$)$^{-1}$, we obtain $^{12}$CO/$^{13}$CO abundance ratios in the range 200--3000 (Figs.~4 and 5). However, such a low fractional abundance per velocity interval can be firmly excluded. With [C{\sc i}]/[H$_2$] reaching values in agreement with those of the local galactic disk (Wei{\ss} et al. 2005), the [$^{12}$CO]/[H$_2$] abundance ratio should be of order 10$^{-4}$ (e.g., Frerking et al. 1982). The resulting velocity gradient of d$v$/d$r$ = 10$^3$\,km\,s$^{-1}$\,pc$^{-1}$ would be far too large in view of the measured line width (e.g., Wei{\ss} et al. 2003) and the kinetic temperature such extreme conditions would induce (e.g., Wiklind \& Henkel 2001). A velocity gradient of a few km\,s$^{-1}$\,pc$^{-1}$ is more realistic as, e.g., obtained from clouds in virial equilibrium for densities of order 10$^4$\,cm$^{-3}$ (Goldsmith 2001; his equation 2). Such densities are commonly derived for high $z$ sources (e.g., Wei{\ss} et al. 2005, 2007). For diffuse clouds, velocity gradients should be larger (e.g., Papadopoulos et al. 2010). Bradford et al.
(2009) suggest that in the Cloverleaf the velocity dispersion may exceed the virial requirement by at least an order of magnitude. Therefore the best choice may be [$^{12}$CO]/([H$_2$](d$v$/d$r$)) = 10$^{-5...-6}$\,pc\,(km\,s$^{-1}$)$^{-1}$ (for the higher value see, e.g., Riechers et al. 2006b; Wei{\ss} et al. 2007) to simultaneously fit the observed CO transitions from $J$=1$\rightarrow$0 up to 11$\rightarrow$10. Depending on the adopted kinetic temperature (30--50\,K) and cloud morphology, and irrespective of the optimal [$^{12}$CO]/([H$_2$](d$v$/d$r$)) value (as long as it is in the wide range displayed by Figs.\,\ref{fig4} and \ref{fig5}) we then find a $^{12}$CO/$^{13}$CO abundance ratio in the range 300--10000. In the following we will discuss whether this estimate can be realistic. \begin{figure}[t] \vspace{-4.0cm} \centering \resizebox{17.8cm}{!}{\rotatebox[origin=br]{-90.00}{\includegraphics{f413co.ps}}} \vspace{-0.0cm} \caption{Results from large velocity gradient (LVG) radiative transfer calculations using a spherical cloud model to simulate the line intensity ratios given in Table~2. The common logarithm of the $^{12}$CO/$^{13}$CO abundance ratio is shown as a function of kinetic temperature in units of Kelvin and of fractional abundance in units of pc\,(km\,s$^{-1}$)$^{-1}$ for a $^{12}$CO/$^{13}$CO $J$=3$\rightarrow$2 line intensity ratio of 40. Resulting reduced $\chi^2$ values ($\chi^2_{\rm red}$) for the simulation of the four $^{12}$CO lines given in Table~2 are shaded. Lightest grey: 1.00 $\leq$ $\chi^2_{\rm red}$ $<$ 1.25, darker shades of grey at 1.25, 1.50, 1.75 to 4.00 with an increment of 0.25. The maximum value in the upper left corner is $\chi^2_{\rm red}$ = 4.07. \label{fig4}} \end{figure} \begin{figure}[t] \vspace{-4.0cm} \centering \resizebox{17.8cm}{!}{\rotatebox[origin=br]{-90.00}{\includegraphics{f513co.ps}}} \vspace{-0.0cm} \caption{Same as Fig.~4, but for a plan-parallel cloud geometry. 
Resulting $\chi^2_{\rm red}$ values for the simulation of the line intensity ratios given in Table~2 are shaded. Lightest grey: 1.00 $\leq$ $\chi^2_{\rm red}$ $<$ 1.25, darker shades of grey at 1.25, 1.50, 1.75 to 4.00 with an increment of 0.25. The maximum value in the upper left corner is $\chi^2_{\rm red}$ = 6.31. \label{fig5}} \end{figure} \section{Discussion} In order to further evaluate our observational result, we have to discuss the correlation between molecular $^{12}$CO/$^{13}$CO and atomic $^{12}$C/$^{13}$C abundance ratios and to summarize relevant observational data from low-redshift galaxies which are, like the Cloverleaf, ultraluminous in the infrared. Finally, we will address some fundamental problems, which are related to the still poorly known morphology of the gas surrounding the Cloverleaf QSO. \subsection{Chemical fractionation and isotope selective photodissociation} Observed isotope ratios may be affected by fractionation. The $^{12}$CO/$^{13}$CO abundance ratio is likely influenced by the reaction $$ ^{13}{\rm C}^+ +\ ^{12}{\rm CO} \rightarrow\ ^{12}{\rm C}^+ +\ ^{13}{\rm CO} + \Delta E \quad (\Delta E/k = 35\,{\rm K}) $$ (Watson et al. 1976). The process enhances $^{13}$CO relative to $^{12}$CO in the more diffuse C$^+$-rich parts of molecular clouds. This may be compensated by isotope selective photodissociation. Both $^{12}$CO and $^{13}$CO need self-shielding to survive in a hostile interstellar environment, which favors the more abundant isotopologue (e.g., Sheffer et al. 2007). For the Galaxy, such effects can be quantified. Milam et al. (2005) summarized $^{12}$C/$^{13}$C ratios from the galactic disk, obtained with the three molecules CO, CN, and H$_2$CO. These molecular species are synthesized by quite different chemical reactions.
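Since the exchange reaction releases $\Delta E/k = 35$\,K, its equilibrium constant carries a Boltzmann factor $e^{35\,{\rm K}/T_{\rm kin}}$ that enhances $^{13}$CO only in cold gas. A minimal sketch of this temperature dependence (equilibrium chemistry only, ignoring selective photodissociation and reaction kinetics):

```python
import math

DELTA_E_K = 35.0  # energy released by the exchange reaction, in Kelvin

def fractionation_enhancement(T_kin):
    """Equilibrium enhancement of [13CO]/[12CO] over the intrinsic
    13C/12C ratio (Boltzmann factor of the exchange reaction)."""
    return math.exp(DELTA_E_K / T_kin)

# fractionation is strong only well below Delta E / k
for T in (10.0, 35.0, 100.0):
    print(T, round(fractionation_enhancement(T), 2))
```

At 10\,K the enhancement exceeds a factor of 30, while above $\sim$100\,K it is negligible, consistent with fractionation mattering mainly in diffuse, cold, C$^+$-rich gas.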
The good agreement between their $^{12}$C/$^{13}$C ratios and a lack of correlation with kinetic temperature suggests that chemical fractionation as well as isotope selective photodissociation do not greatly affect the determined isotope ratios. Whether this result is also valid in the case of the Cloverleaf QSO may not be obvious at first sight. The ultraviolet radiation field in the vicinity of the quasar might be exceptionally strong, favoring $^{12}$CO over $^{13}$CO and thus leading to an enhanced molecular abundance ratio with respect to $^{12}$C/$^{13}$C. However, such a scenario is not likely. Firstly, most of the galactic data were obtained toward prominent sites of massive star formation, where the UV radiation field is also exceptionally intense. Secondly, judging from C{\sc i}, in the Cloverleaf the excitation of the molecular gas is intermediate between conditions found for the starburst galaxy M\,82 ($T_{\rm ex,CI}$ $\sim$50\,K) and the central region of the Milky Way ($T_{\rm ex,CI}$ $\sim$ 22\,K) (Stutzki et al. 1997; Wei{\ss} et al. 2003). Thirdly, polycyclic aromatic hydrocarbon (PAH) features are as strong as expected with respect to the far infrared luminosity when compared with more nearby ultraluminous star-forming galaxies, favoring ``normal'' conditions and a predominantly starburst nature of the Cloverleaf's huge FIR emission (Lutz et al. 2007). Finally, the CO emission from the Cloverleaf appears to be more extended than the effective radius out to which the quasar could dominate the UV field. Modeling both the source and the lens of the Cloverleaf QSO, Venturini \& Solomon (2003) find a characteristic radius of $r$ $\sim$ 800\,pc for the CO $J$=7--6 line, which is more highly excited and thus possibly less widespread than the $J$=3$\rightarrow$2 transition considered here. If the Cloverleaf's intrinsic far infrared luminosity ($L_{\rm FIR}$ $\sim$ 5$\times$10$^{12}$\,L$_{\odot}$, Lutz et al.
2007) originated entirely from 6.2--13.6\,eV photons emitted by the active nucleus, we would obtain, at a radius of 800\,pc, a UV photon illumination of $\chi$ $\sim$ 10$^5$\,$\chi_0$ with respect to the local galactic radiation field, $\chi_0$ = 2$\times$10$^{-4}$\,erg\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$ (see Draine 1978). The Cloverleaf QSO is a Broad Absorption Line (BAL) quasar, which permits at least a partial view onto its nuclear engine. Therefore, taking the Cloverleaf's UV luminosity from Fig.\,1 of Barvainis et al. (1995) and accounting for a gravitational amplification by a factor of 11 (Solomon \& Vanden Bout 2005), we obtain accordingly $\chi$ $\sim$ 2.5$\times$10$^4$\,$\chi_0$. Both $\chi$ values are consistent with those encountered in prominent galactic sites of massive star formation and may be upper limits if the Cloverleaf possesses a self-shielding rotating disk. To summarize, physical conditions in the Cloverleaf host galaxy appear to be sufficiently normal so that the $^{12}$C/$^{13}$C isotope ratio should not strongly deviate from the $^{12}$CO/$^{13}$CO molecular abundance ratio. \subsection{$^{12}$CO/$^{13}$CO ratios in $z$$<$1 galaxies} {\it In our Galaxy}, the $^{12}$CO/$^{13}$CO line intensity ratios from molecular clouds are typically about 5, probably corresponding to true $^{12}$C/$^{13}$C abundance ratios of $\sim$25 in the galactic Center, $\sim$50 in the inner galactic disk and the LMC, $\sim$70 at the Sun's galactocentric radius, and $\ga$100 in the outer Galaxy. The solar system ratio of 89 may have been typical of the galactic disk at the Sun's galactocentric radius 4.6\,Gyr ago (e.g., Wilson \& Rood 1994; Wouterloot \& Brand 1996; Wang et al. 2009). Within the framework of ``biased infall'', where the galactic disk developed from inside out (Chiappini et al. 2001), there {\it might} be a future chance to use $^{12}$C/$^{13}$C ratios as a chronometer for nucleosynthesis.
{\it In nearby galaxies}, the $^{12}$CO/$^{13}$CO line intensity ratios are usually measured in the $J$=1--0 line and have typical values of $\sim$10. They are higher than the values for individual molecular clouds in the Galaxy because they are mostly observed with larger beams. These include not only the dense clouds, where both species are (almost) optically thick, but also the molecular intercloud medium, where $^{13}$CO is optically thin. Like the better-resolved CO line ratios in our Galaxy, the ratios in nearby galaxies probably correspond to true $^{12}$C/$^{13}$C abundance ratios between 40 and 90 (e.g., Henkel et al. 1993). In a presumably ``normal'' spiral {\it galaxy at redshift 0.89}, in the lens of the background source PKS\,1830$-$211, Wiklind \& Combes (1998), Menten et al. (1999), and Muller et al. (2006) derive, from the optically thin wings of the absorption lines of HCO$^+$, HCN, and HNC, a $^{12}$C/$^{13}$C abundance ratio of 27$\pm$2. Apparently, even at an age of the universe of $\sim$6.5\,Gyr, $^{13}$C is as abundant with respect to $^{12}$C as in the center of our Galaxy at the present epoch. {\it Some low-redshift (ultra)luminous infrared galaxies} ((U)LIRGs), however, show peculiarities, which may be relevant to the Cloverleaf. Local (U)LIRGs are known to reveal $^{12}$CO/$^{13}$CO $J$=1$\rightarrow$0 line intensity ratios which tend to be higher than the canonical value of 10 for ``normal'' galaxies (see, e.g., Aalto et al. 1991; Casoli et al. 1992; Henkel \& Mauersberger 1993). According to Taniguchi \& Ohyama (1998), there is a tight correlation between $L$($^{12}$CO $J$=1$\rightarrow$0) and $L_{\rm FIR}$.
However, when comparing ``normal'' galaxies with those with a high $^{12}$CO/$^{13}$CO $J$=1$\rightarrow$0 ratio, the $^{13}$CO luminosities show a deficiency by an average factor of $\sim$3. This $^{13}$CO deficiency is readily explained by metallicity gradients in the progenitor galaxies and strong interaction- or merger-induced inflow of gas into the luminous cores (e.g., Rupke et al. 2008). Apparently, for ultraluminous galaxies the common luminosity--metallicity correlation is not valid. Ultraluminous galaxies are characterized by a lower metallicity, likely yielding higher $^{12}$C/$^{13}$C isotope ratios. In the early universe, gas from outside the cores of the merging progenitors may have been particularly metal poor, leading to extreme carbon isotope ratios. For $T_{\rm kin}$ $\ga$ 20\,K, the $^{12}$CO $J$=3$\rightarrow$2 line is more opaque, typically by a factor of 3, than the corresponding 1$\rightarrow$0 line. Thus our conservatively estimated $J$=3--2 $^{12}$CO/$^{13}$CO line intensity ratio of $\ga$40$^{+25}_{-8}$ corresponds to a 1$\rightarrow$0 ratio well in excess of 40. So far, only a few $^{12}$CO/$^{13}$CO $J$=3$\rightarrow$2 line ratios have been measured in luminous mergers of low redshift. Greve et al. (2009) find 8$\pm$2 for the ULIRG Arp~220 and $\ga$30 for the LIRG NGC~6240. The latter value {\it might} be consistent with that of the Cloverleaf. \subsection{Are there alternatives to a $^{13}$C deficiency in the Cloverleaf?} Sects.\,5.1 and 5.2 suggest that our measured $^{12}$CO/$^{13}$CO line intensity ratio (or its lower limit) requires a significant $^{13}$C deficiency in the Cloverleaf. Are there caveats we may have overlooked when reaching this conclusion? If the bulk of the CO emission did not arise from a molecular disk, as suggested by Venturini \& Solomon (2003), but from a large-scale outflow, such gas would not be in virial equilibrium and could arise predominantly from a diffuse gas phase.
While this would yield (within the LVG approach) a higher velocity gradient and a lower [$^{12}$CO]/([H$_2$](d$v$/d$r$)) value than what is needed for virialized clouds, required densities would then be well in excess of 10$^4$\,cm$^{-3}$, in contradiction with our assumption of predominantly diffuse gas. Furthermore, as long as $T_{\rm kin}$ remains moderate ($\la$50\,K; see Figs.\,\ref{fig4} and \ref{fig5}), $^{12}$C/$^{13}$C ratios remain larger than those encountered in the galactic disk (Sect.\,5.1). Following White (1977), radiative transfer models with simple geometry, either based on microturbulence or on systematic motions, lead to peak and integrated intensities which agree within the differences (up to a factor of three) caused by an uncertain cloud geometry. A full 3-D model of a rotating circumnuclear disk, computing the radiative transfer through many lines of sight, calculating the LVG level populations within each pixel of the simulated source, and also including continuum radiation from dust (e.g., Downes \& Solomon 1998) may be worth doing. In the Cloverleaf, however, the distribution of the molecular gas is still poorly known. A large $^{12}$C/$^{13}$C ratio, implying an underabundance of $^{13}$C, appears to be in direct conflict with optical data. As already mentioned in Sect.\,1, solar or super-solar metallicities are common in quasars up to high redshifts. This does not only refer to so-called ``$\alpha$-elements'' being rapidly synthesized in short-lived massive stars but also to iron (e.g., Iwamuro et al. 2004; Kurk et al. 2007; Sameshima et al. 2009), carbon (e.g., Jiang et al. 2007; Juarez et al. 2009), and, even more importantly, nitrogen (Hamann \& Ferland 1999; De Breuck et al. 2000; Vernet et al. 2001; Hamann et al. 2002; Nagao et al. 2006; Matsuoka et al. 2009), with $^{14}$N being mainly a secondary nucleus produced by CNO burning just like $^{13}$C. 
A possible explanation for the contradictory results obtained at or near optical wavelengths and the microwave data presented here may be that they probe different locations. It is quite possible that mainly secondary nuclei like $^{13}$C and $^{14}$N are enriched close to the quasar, in the Broad and Narrow Line Regions and in outflows originating from the active galactic nucleus (AGN). However, CO $J$=7$\rightarrow$6 may arise hundreds of pc away from the AGN (Venturini \& Solomon 2003) and some of the $J$=3$\rightarrow$2 photons may be emitted from locations even farther away. There exists, however, also the possibility that our measured high $^{12}$CO/$^{13}$CO luminosity ratio is misleading and does {\it not} imply a large $^{12}$C/$^{13}$C ratio. As a consequence of different optical depths, $^{12}$CO lines are almost thermalized and are characterized by excitation temperatures well above the level of the cosmic microwave background even at $z$=2.5. $^{13}$CO is less thermalized. In our best fitting models, its $J$=3$\rightarrow$2 excitation temperature lies in the range 20--30\,K. This is significantly above the 9.7\,K of the CMB. However, an extreme (and therefore unlikely) enhancement of the background level by dust radiation could reduce the contrast between line and background for $^{13}$CO far more efficiently than for $^{12}$CO (see Papadopoulos et al. 2010 for the case of Arp\,220), thus establishing an apparent $^{13}$CO deficiency. \section{Outlook} Molecular lines from galaxies in the distant universe have the potential to reveal the contribution of early stellar generations to the enrichment of the interstellar medium. Our data from the $z$ = 2.5 Cloverleaf QSO are a first step toward studying the isotopic composition of such gas in the distant past. Our data indicate, not unexpectedly, a strong deficiency of $^{13}$C with respect to $^{12}$C in the host galaxy.
However, the weakness of the tentatively detected line, the limited number of observed transitions, the poorly constrained source morphology, and the potential influence of an enhanced submillimeter radiation background do not yet allow us to derive a definite $^{12}$C/$^{13}$C isotope ratio. Significant progress in this field requires either the detection of stronger sources or the higher instrumental sensitivity of the Atacama Large Millimeter Array (ALMA), which will allow us to study the isotopes of C, N, and O in a number of highly redshifted targets. Toward the Cloverleaf, the main isotopologues of HCN, HCO$^+$, and CN (Solomon et al. 2003; Riechers et al. 2006a, 2007) have already been detected. \begin{acknowledgements} We wish to thank P.P. Papadopoulos, D. Riquelme, S. Veilleux, and an anonymous referee for helpful discussions on ULIRGs and chemical evolution and/or a critical reading of the manuscript. This paper is based on observations taken with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), the MPG (Germany), and the IGN (Spain). DR acknowledges support from NASA through Hubble Fellowship grant HST-HF-01212.01-A, awarded by the Space Telescope Science Institute, which is operated by AURA for NASA under contract NAS5-26555. This research has made use of NASA's Astrophysical Data System (ADS). \end{acknowledgements}
\section{Introduction} Event cameras, also known as neuromorphic cameras, have successfully made their way into the computer vision and robotics communities owing to their low cost and high dynamic sensing range with low latency and low power consumption. An event camera represents the change of intensity at a pixel location $(x,y)$ as a plus or minus sign ($\sigma$), reported asynchronously by comparing the amount of intensity change with a predefined threshold. This stream-like representation, depending on the scene and camera movement, can achieve latency on the order of $\mu$s through accurate timestamps $(t)$, and is expressed per fired event in the form of $(x,y,t,\sigma)$. The device has garnered a lot of attention due to its high applicability in systems requiring high dynamic range outputs with low latency, and low power and low memory consumption constraints \cite{mueggler2017event,mostafavi2019event,rebecq2019high,zhu2018multivehicle,Gehrig_2019_ICCV}. New applications for event cameras have emerged, such as intensity image reconstruction or recovering geometric features such as optical flow or depth from the event stream \cite{bardow2016simultaneous,reinbacher2016real,kim2008simultaneous,cook2011interacting,scheerlinck2018continuous,tulyakov2019learning}. Unfortunately, most commercially available event cameras produce relatively low resolution event streams for the sake of efficiency. While there are a number of proposals for many applications, estimating super-resolved intensity images from events has barely been explored in the literature. To generate high resolution images from events, one can combine a method that transfers events to intensity images with a super resolution algorithm for intensity images \cite{dai2019second,sajjadi2018frame,haris2019recurrent, lim2017enhanced}. But such pipelined approaches are sub-optimal in generating high resolution images from events and may fail to reconstruct fine details of the scene.
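The event generation principle described above can be sketched with an idealized frame-based simulator (a simplification: one event per threshold crossing, timestamps quantized to the sampling times, and the contrast threshold value is illustrative):

```python
import numpy as np

def events_from_frames(frames, times, C=0.2, eps=1e-6):
    """Idealized event camera: a pixel fires an event (x, y, t, sigma)
    whenever its log intensity has changed by more than the contrast
    threshold C since the last event at that pixel."""
    ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for img, t in zip(frames[1:], times[1:]):
        log_i = np.log(img + eps)
        for sigma in (+1, -1):
            mask = sigma * (log_i - ref) >= C
            ys, xs = np.nonzero(mask)
            events += [(int(x), int(y), t, sigma) for x, y in zip(xs, ys)]
            ref[mask] = log_i[mask]  # reset reference at fired pixels
    return events

# one brightening and one darkening pixel produce one event each
f0 = np.ones((4, 4))
f1 = f0.copy(); f1[1, 2] = 2.0; f1[3, 0] = 0.5
print(events_from_frames([f0, f1], [0.0, 0.1]))
```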
For producing high-fidelity, high-resolution images, we aim to directly learn to estimate pixel-wise super-resolved intensity from events in an end-to-end manner, and demonstrate that our method is able to super resolve images with rich details and fewer artifacts, better than pipelined state-of-the-art methods in both qualitative and quantitative analyses. To the best of our knowledge, we are the first to model super-resolving event data to higher-resolution intensity images in an end-to-end learning framework. We further extend our method to reconstruct more details by considering APS frames as inputs or learning the network iteratively to add details to an initial image. \section{Related Work} \paragraph{Event to intensity images.} Early attempts in the applications of event cameras consider relatively short periods of the event stream data and direct accumulation of the plus or minus events in two colors as a gradient-like output \cite{brandli2014240}. Synthesising intensity images instead of the gradient representation originated from the task of simultaneously estimating the camera movement and mosaicing the views into a panoramic gradient image \cite{kim2008simultaneous}. In their approach the scene is static and the camera only has rotational movements. By Poisson integration, they transfer the gradient image to an intensity image. In \cite{cook2011interacting}, a bio-inspired network structure of recurrently interconnected maps is proposed to predict different visual aspects of a scene such as intensity image, optical flow, and angular velocity from small rotation movements. In \cite{bardow2016simultaneous}, a joint estimation of optical flow and intensity in a variational energy minimization scheme under a challenging dynamic movement setting is proposed. However, their method propagates errors as shadow-like artifacts in the generated intensity images.
A variational framework based on a denoising scheme that filters incoming events iteratively is introduced in \cite{reinbacher2016real}. They utilized manifold regularization on the relative timestamp of events to reconstruct the image with more grayscale variations in untextured areas. In \cite{scheerlinck2018continuous}, an asynchronous high-pass filter is proposed to reconstruct videos in a computationally efficient manner. This framework was originally designed for complementing intensity frames with the event information but is also capable of reconstructing images from events without the help of APS frames. Recent approaches use deep convolutional networks to create photo-realistic images directly from the event stream \cite{mostafavi2019event,rebecq2019high}. Both approaches employ a $U{\text -}net$ \cite{ronneberger2015u} as their base architecture with modifications such as using conditional generative adversarial neural networks \cite{mostafavi2019event} or using a deep recurrent structure (up to 40 steps) together with stacked ConvLSTM gates \cite{rebecq2019high}. They further investigated the possibility of reaching very high frame rates and using the output intensity images for downstream applications. \vspace{-1em}\paragraph{Image super resolution (SR).} Intensity image SR algorithms can be broadly categorized into single image SR (SISR) \cite{dai2019second, lim2017enhanced} or multiple image SR (MISR), also known as video SR \cite{sajjadi2018frame,haris2019recurrent}. SISR methods add details inferred from the context of the given single low resolution (LR) image, while MISR further uses a sequence of images over time. Since MISR uses more LR images to reconstruct the high resolution image, it is generally more successful in recovering missing details and higher frequency information. Since we have a sequence of stacks, MISR is more similar to our approach, although we aim to reconstruct a single image at a time.
Learning-based SR methods outperform previous methods by using deeper and wider networks while utilizing the power of residual connections to prevent vanishing gradients \cite{lim2017enhanced,haris2019recurrent}. Many MISR methods use optical flow representations among the input images as a supplementary source of input to reach higher quality SR outputs \cite{sajjadi2018frame,haris2019recurrent}. Inspired by these methods, we design our SR sub-network as described in Sec. \ref{SR_network}. \section{Approach} We propose a fully convolutional network that takes a sequence of event stacks near the timestamp of interest as input, relates them in pairs with their optical flow obtained by $FNet$, rectifies the combination of the paired stacks and the flow by $EFR$, and then feeds them to the recurrent-neural-network-based super-resolution network ($SRNet$) that outputs hidden states and intermediate intensity outputs per stack. Finally, we mix the intermediate outputs from multiple timestamps by $Mix$ to construct a super-resolved intensity image. We briefly illustrate the structure in Fig. \ref{fig:flow} and with the detailed data flow in Fig. \ref{interm} in Sec. \ref{sec:overall_structure}. Beginning with the event stacking strategy, we describe the details of our network architecture. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{images/Flow4.png} \caption{Overview of our end-to-end event to super-resolved intensity image framework. The input stacks $SBN_{n+m}$ and the central stack $SBN_n$ are given to the FNet to create the optical flow ($F_{n+m}$). The flow and stacks are concatenated and given to the EFR to rectify the event features. Its output $RE_{n+m}$ is given to SRNet together with the previous state ($State_n$) to create intermediate intensity outputs $I_{n+m}$ and the next state ($State_{n+m}$). All intermediate intensity outputs are concatenated and given to the mixer ($Mix$) network, which creates the final output ($O_n$).
Finally, the output is compared to the training groundtruth (GT) using the similarity loss (Sim), including a Learned Perceptual Image Patch Similarity (LPIPS) term and an $\ell_1$ term, to compute the error ($Err$).} \label{fig:flow} \vspace{-1em} \end{figure} \subsection{Event Stacking Method} \label{event_prepare} The stream-like representation of events is sparse in the spatial domain and needs preparation to capture scene details to be reconstructed by a convolutional neural network. Despite recent advances in stacking methods~\cite{Gehrig_2019_ICCV,tulyakov2019learning}, our network performs well with a simple stacking method such as \emph{stacking based on the number of events} (SBN) \cite{mostafavi2019event}. Employing the advanced stacking methods is straightforward by minor modifications to the input blocks of our network. With the SBN, starting from any timestamp in the event stream, we count the number of events until we reach a predefined number ($N_{e}$) and accumulate the events to form one \emph{channel} in the stack. We repeat this process $c$ times for one \emph{stack}. Thus, each stack contains $M = c \times N_e$ events in total and has the dimension of $h{\times}w{\times}c$, where $h$ and $w$ are the height and width of the APS images, respectively. This $c$-channel stack is fed into the network as an input. The corresponding APS frame is sampled at the timestamp of the last event in the stack for the ground truth (GT). In each channel, all pixel values are initially set to $128$. If an event is triggered at location $(x,y)$, we replace the pixel value at $(x,y)$ in the same channel with $256$ (positive event) or $0$ (negative event). Since newly coming events can override older events, $M$ needs to be carefully chosen to better preserve spatio-temporal visual information. The frame rate can be determined by both the $N_e$ and the number of overlapping events between each stack over time.
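The SBN stacking just described can be sketched as follows (the event-tuple layout follows $(x,y,t,\sigma)$ from the introduction; $N_e$ and the channel count $c$ are parameters):

```python
import numpy as np

def sbn_stack(events, h, w, n_e, c=3):
    """Stacking Based on the Number of events (SBN): each group of n_e
    consecutive events fills one of c channels.  Pixels start at 128;
    an event at (x, y) overrides the pixel in its channel with 256
    (positive) or 0 (negative), so newer events can overwrite older ones."""
    stack = np.full((h, w, c), 128, dtype=np.int32)
    for i, (x, y, t, sigma) in enumerate(events[: c * n_e]):
        stack[y, x, i // n_e] = 256 if sigma > 0 else 0
    return stack

events = [(0, 0, 0.00, +1), (1, 0, 0.01, -1), (1, 1, 0.02, +1)]
stack = sbn_stack(events, h=2, w=2, n_e=1)
print(stack[0, 0], stack[0, 1], stack[1, 1])
```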
We empirically choose to use $3,000$ events per stack, in which case each stack has 3 channels. This number can be modified for the experiments with larger resolution event inputs to ensure that the average number of events per stack yields visually plausible outputs with fine details. However, since the network is trained on diverse scenes which contain different numbers of local events, the network is not very sensitive to the chosen number of events per stack at inference. \begin{figure*}[t] \centering \vspace{5 pt} \includegraphics[width=1\linewidth]{images/Interm11.png} \caption{Detailed data flow in the proposed method. This example is based on the third stack ($SBN_{n+m}$); therefore, the previous inputs, optical flow, and intermediate intensity outputs are faded. The APS frame is resized to the size of the output ($O_n$) for comparison.} \vspace{-1em} \label{interm} \end{figure*} \vspace{-0.2em} \subsection{Network Architecture} \label{net_arch} We design the network architecture according to three principles. First, we take into account the characteristics of the input and target (Sec.~\ref{sec:overall_structure}). Second, we have a sufficiently large hypothesis space for the super-resolution network ($SRNet$) to address various levels of complexity of movement in a scene (Sec.~\ref{SR_network}). Finally, we propose a novel objective function that can add structural details while avoiding blur and artifacts (Sec.~\ref{criterion}). We describe the details of each component of our proposed network. \vspace{-6 pt} \subsubsection{Overview} \label{sec:overall_structure} We consider a stream of events stacked for the input to our network. In particular, for the input sequence of three stacks ($3S$), the stacks are the one containing the $n^\text{th}$ APS timestamp ($SBN_n$), the stack before it ($SBN_{n-m}$), and the stack after it ($SBN_{n+m}$). We illustrate the network with these inputs in Fig. \ref{interm} with detailed data flow through the sub-networks.
Note that the network can be used with the input of any number of stacks in a sequence (\eg, 3 or 7). Each stack has $M$ (\eg, $3,000$) events and its end location $m$ will vary on the timeline of events based on the time required to fire $M$ events. $SBN_n$ is the \emph{central stack} among the three stacks. It is fed to the network after $SBN_{n-m}$, and the predicted intensity output corresponds to this stack. The $SBN_{n+m}$ and $SBN_{n-m}$ stacks are $M$ events away from the end or beginning of the central stack, respectively, if there is no overlap ($L=0$) among the stacks (`Non-overlapped' input in Fig. \ref{interm}). We can also have overlapping stacks for creating higher frame rates; the end of the next stack will be $M$ events after the center minus the amount of overlap ($M {\scriptstyle-} L$) (`Overlapped' input in Fig. \ref{interm}). More details on the overlapped stacking are provided in the supplement. $SBN_{n+m}$ and $SBN_{n-m}$ are fed separately with the central stack to the optical flow estimation network ($FNet$) to predict the optical flow ($F_{n+m}$ or $F_{n-m}$) between the stacks. These stacks of events are concatenated with the optical flow obtained by the $FNet$ and then rectified by an event feature rectification network ($EFR$). The rectified event stack ($RE_{n+m}$) is then given to the super-resolution network ($SRNet$). The $SRNet$ takes the previous state ($State_n$) with the rectified event stack ($RE_{n+m}$) and creates the next state ($State_{n+m}$) of the sequential model and a super-resolved intensity-like output ($I_{n+m}$). Since the stacks quantize the continuous event stream into separate inputs, each stack may not contain all necessary details for reconstructing images. Thus, the intermediate intensity outputs from all the stacks are then mixed by a Mixer network ($Mix$) to reconstruct an intensity image $O_n$ with rich details.
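The data flow just described can be summarized in a short framework-agnostic sketch. The function signatures below are hypothetical placeholders for the sub-networks named in the text (FNet, EFR, SRNet, Mix), not the authors' code:

```python
def reconstruct(stacks, fnet, efr, srnet, mix, state=None):
    """Sketch of the described pipeline (hypothetical callables).

    stacks : sequence of SBN event stacks, the central one at the middle index.
    fnet(a, b)       -> optical flow between stacks a and b
    efr(a, b, flow)  -> rectified event features (flow is None for the center)
    srnet(re, state) -> (next_state, intermediate_intensity)
    mix(intensities) -> final reconstruction O_n
    """
    central = stacks[len(stacks) // 2]
    intensities = []
    for s in stacks:
        # The central stack is rectified without estimated flow.
        flow = None if s is central else fnet(s, central)
        re = efr(s, central, flow)
        state, inten = srnet(re, state)  # recurrent SR step
        intensities.append(inten)
    return mix(intensities)
```

The same loop covers both the $3S$ and $7S$ settings; only the length of `stacks` changes.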
For the initial stack, only the first stack is fed to the $EFR$ sub-network to create an initial $State_n$. The output of $Mix$ is given to the similarity network ($Sim$) to optimize the parameters based on the error ($Err$). \subsubsection{Flow Network (FNet)} An unwanted downside of stacking the event stream is losing the temporal relation between the stacks. The lost temporal relation can be partially recovered by using a sequence of the stacks and the optical flow between each pair of stacks, as the optical flow reports how the triggered events in the scene have moved and in which locations the changes have happened. The SBN stacking includes sufficient edge information and can be used as an image-like input to well-known learning-based optical flow estimation algorithms. Thus, we do not finetune it but use a pretrained $FNet$ for computational efficiency\footnote{Finetuning $FNet$ may further improve the output quality as the stacked image has a different visual signature from natural images.}. We use \cite{ilg2017flownet} as our flow estimation network and call it $FNet$. \vspace{-6 pt} \subsubsection{Event Feature Rectification Network (EFR)} Another downside of stacking events is overwriting previous event information in fast-triggering locations. The overwritten events result in a blurry stack of events and eventually lower-quality reconstructions. To prevent overwriting events, we concatenate two stacks of events with the optical flow and provide them to two convolutional layers called the event feature rectification ($EFR$) network. By the $EFR$, we progressively fuse the stacks over the event stream to preserve details from each event. The $EFR$ helps to reconstruct images when two stacks have events at a location visible to only one stack, which the optical flow cannot relate; since the $EFR$ uses all three inputs, such events are more likely to be maintained for the intensity reconstruction.
Note that the central stack is provided to this network without estimated flow since there is no flow for it. \vspace{-6 pt} \subsubsection{Super Resolution Network (SRNet)} \label{SR_network} The rectified events are now super-resolved by our main network, called $SRNet$. We use a recurrent neural network for the $SRNet$ because each stacked part of the event stream captures details of the output image, and the stream is originally continuous but quantized by the stacking method. To alleviate the discontinuity, we utilize the internal memory state of the recurrent neural network to reconstruct different regions with rich details in a continuous manner, as the state is updated internally by each incoming stack. Specifically, a single event stack might partially miss important details from previously fired events which are not in its stacking range but have been captured by the previous stacks. It has been shown that stacked events are capable of synthesizing intensity images by deep neural networks \cite{mostafavi2019event,rebecq2019high} such as $U{\text -}net$ \cite{ronneberger2015u}. Architecturally, we further extend the idea by using $ResNet$ \cite{he2016deep} with 15 blocks in depth, more filters and a larger kernel size. In particular, following the well-designed networks in MISR \cite{lim2017enhanced,sajjadi2018frame,haris2018deep,dai2019second}, we utilize the power of residual learning for super-resolving intensity. We use large fields of view inspired by the SISR network \cite{haris2018deep} to transfer the rectified event features to the SR intensity generator ($RNet{\text -}C$). Its main task is to create an initial SR intensity image state by the combination of transposed convolutional operations. The $SRNet$ is designed to upscale the input $RE$ while adding intensity information. The overall structure of the $SRNet$ is illustrated in Fig.~\ref{fig:SRNet}.
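The interplay of the four $RNet$ components described next can be sketched as a single recurrent update step. The `rnet_*` callables are placeholders for the sub-networks, and initializing the state from $RNet{\text -}C$ on the first stack is a simplifying assumption of this sketch, not the authors' exact scheme:

```python
def srnet_step(re_next, state, rnet_a, rnet_b, rnet_c, rnet_d):
    """One recurrent SRNet update; rnet_{a,b,c,d} are placeholder callables.

    re_next : rectified event stack RE_{n+m}
    state   : previous hidden state State_n (None for the first stack)
    returns : (State_{n+m}, I_{n+m})
    """
    c = rnet_c(re_next)                    # upsampled features of the current stack
    if state is None:
        state_next = c                     # simplifying assumption for the first stack
    else:
        e = c - rnet_a(state)              # internal error e_n
        state_next = rnet_b(e) + c         # next state State_{n+m}
    return state_next, rnet_d(state_next)  # decode intermediate output I_{n+m}
```

Iterating `srnet_step` over the rectified stacks yields the sequence of intermediate intensities that the Mixer later combines.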
We use the combination of three residual networks ($RNet{-}\{A,B,D\}$) that are composed of five ResNet blocks containing two convolutional layers. These networks are shallower than $RNet{\text -}C$ because they encode feature-like representations from previous states and not directly from the rectified events. The output of $RNet{\text -}A$, which performs as an upsampling encoder, is subtracted from the output of $RNet{\text -}C$ to create an internal error ($e_n$), which measures how much the current rectified event stack $RE_{n+m}$ contributes in comparison to the previous state $State_n$ as \begin{equation} e_n = RNet{\text -}C(RE_{n+m}) - RNet{\text -}A(State_n). \label{eq1} \end{equation} This error is given as an input to $RNet{\text -}B$, which performs as a general encoder. We define the next state ($State_{n+m}$) as the output of $RNet{\text -}B$ summed with that of $RNet{\text -}C$, thus emphasizing the current input ($RE_{n+m}$), as \begin{equation} State_{n+m} = RNet{\text -}B(e_n) + RNet{\text -}C(RE_{n+m}). \end{equation} The $State_{n+m}$ is given to a final decoder ($RNet{\text -}D$) to make the intermediate intensity output ($I_{n+m}$) as \begin{equation} I_{n+m} = RNet{\text -}D(State_{n+m}). \end{equation} In general, the $RNet{\text -}C$ adds new information from the current stack to the previous state by adding details of the scene missed by the previous stack. Even when there are no events in some regions captured by the current stack but there are scene details in those regions captured by the previous stack, the previous state ($State_n$) holds that information through $RNet{\text -}A$ as its hidden state, so the scene details in those regions are reconstructed rather than missed. We detail other design parameters such as layer types and the number of filters in the supplement. \begin{figure}[t!]
\centering \includegraphics[width=1\linewidth]{images/SRNet6.png} \caption{Detailed architecture of the proposed super resolving network ($SRNet$) (green block in Fig.~\ref{fig:flow}). Four main residual networks are designed to perform as a large encoder-decoder scheme. $RNet{\text -}A$ is used to update the hidden state, while $RNet{\text -}B$ and $RNet{\text -}D$ act as an encoder and decoder, respectively, to map the hidden state to a super-resolved intensity output ($I_{n+m}$). } \label{fig:SRNet} \vspace{-10pt} \end{figure} \vspace{-8 pt} \subsubsection{Mixer Network (Mix)} The Mixer network is designed to augment the outputs ($I_i$) of the SRNet at different time locations ($i{\scriptstyle=}\{n{\scriptstyle-}m, n ,n{\scriptstyle+}m\}$) to reconstruct a detail-rich intensity image ($O_n$) at the central stack's timestamp ($n$). This network employs convolutional layers to reconstruct the intensity image with fine details. \vspace{-8 pt} \subsubsection{Similarity Loss (Sim)} \label{criterion} Given a reconstructed image ($O$) and its GT ($G$), we define a loss function with two terms. First, we use an unstructured loss such as the $\ell_1$ norm, $\mathcal{L}_{\ell_1}(O,G) = \| O-G \|_1$, to reconstruct overall sharper images, rather than $\ell_2$, which results in smoothed edges with low-frequency texture in output images. As the $\ell_1$ may lose the structural information of a scene, we further leverage a criterion capable of compensating for the lack of structure, the Learned Perceptual Image Patch Similarity (LPIPS) or perceptual similarity \cite{zhang2018unreasonable}, as the second term of our objective function. Specifically, given a pair of images ($O,G$) encoded by a pretrained network (\eg, AlexNet \cite{krizhevsky2012imagenet}), the features ($\hat{O}^l_{hw}, \hat{G}^l_{hw}$) of the $l^\text{th}$ layer, with spatial size ($H_l,W_l$), are extracted and their activations are unit-normalized along the channel dimension.
Then, each channel is scaled by a vector $w_l$ \cite{zhang2018unreasonable}, and the $\ell_2$ distance is computed. Finally, a spatial mean is computed over the image axes ($h,w$) through all layers ($l$) for the LPIPS loss as \begin{equation} \resizebox{.9\hsize}{!}{ $\mathcal{L}_{LPIPS}(O,G) = \sum_l \frac {1}{H_lW_l}\sum_{h,w}\| w_l \odot(\hat{O}^l_{hw} - \hat{G}^l_{hw}) \|_2^2.$ } \label{eq:loss_PS} \end{equation} The final objective function, $\mathcal{L}_{sim}$, is the combination of both terms with a balancing parameter $\lambda$ as \begin{equation} \mathcal{L}_{sim}(O,G) = \mathcal{L}_{\ell_1}(O,G) + \lambda\mathcal{L}_{LPIPS}(O,G), \label{eq:loss_l1+PS} \end{equation} which we minimize to learn the parameters. \begin{table*}[t!] \caption{Comparison to state-of-the-art intensity synthesis methods on real-world sequences \cite{mueggler2017event}. Our method outperforms the previous methods in all sequences in LPIPS, and on average in SSIM. The runner-up method is underlined.
We used the reported numbers in \cite{rebecq2019high} for HF \cite{scheerlinck2018continuous}, MR \cite{reinbacher2016real} and EV \cite{rebecq2019high}, while we evaluated the authors' reconstructed images for EG \cite{mostafavi2019event}.} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{cccccc>{\columncolor[gray]{0.9}}cccccc>{\columncolor[gray]{0.9}}cccccc>{\columncolor[gray]{0.9}}c} \toprule \rowcolor{white} && \multicolumn{5}{c}{SSIM ($\uparrow$)} & & \multicolumn{5}{c}{MSE ($\downarrow$) } & & \multicolumn{5}{c}{LPIPS ($\downarrow$)}\\ \cmidrule{3-7} \cmidrule{9-13} \cmidrule{15-19} Sequence && HF \cite{scheerlinck2018continuous} & MR\cite{reinbacher2016real} & EV\cite{rebecq2019high} & EG\cite{mostafavi2019event} & Ours && HF \cite{scheerlinck2018continuous} & MR\cite{reinbacher2016real} & EV\cite{rebecq2019high} & EG\cite{mostafavi2019event} & Ours && HF \cite{scheerlinck2018continuous} & MR\cite{reinbacher2016real} & EV\cite{rebecq2019high} & EG\cite{mostafavi2019event} & Ours \\ \midrule \lmss{dynamic\underline{ }6dof} && 0.39 & \bf{0.52} & 0.46 & \underline{0.48} & 0.44 && 0.10 & \underline{0.05} & 0.14 & \bf{0.03} & \underline{0.05} && 0.54 & 0.50 & 0.46 & \underline{0.45} & \bf{0.42} \\ \lmss{boxes\underline{ }6dof} && 0.49 & 0.45 & \bf{0.62} & 0.45 & \underline{0.61} && 0.08 & 0.10 & 0.04 & \underline{0.03} & \bf{0.02} && 0.50 & 0.53 & \underline{0.38} & 0.48 & \bf{0.32} \\ \lmss{poster\underline{ }6dof} && 0.49 & 0.54 & \underline{0.62} & 0.61 & \bf{0.63} && 0.07 & 0.05 & 0.06 & \bf{0.01} & \underline{0.02} && 0.45 & 0.52 & \underline{0.35} & 0.42 & \bf{0.29}\\ \lmss{shapes\underline{ }6dof} && 0.50 & 0.51 & \bf{0.80} & 0.56 & \underline{0.79} && 0.09 & 0.19 & 0.04 & \underline{0.03} & \bf{0.01} && 0.61 & 0.64 & \underline{0.47} & 0.51 & \bf{0.38} \\ \lmss{office\underline{ }zigzag} && 0.38 & 0.45 & 0.54 & \underline{0.67} & \bf{0.68} && 0.09 & 0.09 & 0.03 & \underline{0.01} & \underline{0.01} && 0.54 & 0.50 & \underline{0.41} & 0.36 & \bf{0.29}
\\ \lmss{slider\underline{ }depth} && 0.50 & 0.50 & \underline{0.58} & 0.54 & \underline{0.59} && 0.06 & 0.07 & 0.05 & \underline{0.02} & \underline{0.02} && 0.50 & 0.55 & 0.44 & \underline{0.42} & \bf{0.34} \\ \lmss{calibration} && 0.48 & 0.54 & \underline{0.70} & 0.67 & \bf{0.71} && 0.09 & 0.07 & \underline{0.02} & \underline{0.01} & \underline{0.01} && 0.48 & 0.47 & \underline{0.36} & 0.42 & \bf{0.24} \\ \midrule Average && 0.46 & 0.50 & \underline{0.62} & 0.57 & \bf{0.64} && 0.08 & 0.09 & 0.05 & \underline{0.02} & \underline{0.02} && 0.52 & 0.53 & \underline{0.41} & 0.43 & \bf{0.33} \\ \bottomrule \end{tabular} } \label{tab:realworld} \vspace{-10 pt} \end{table*} \begin{table}[t!] \caption{\protect \centering Quantitative comparison of super-resolved intensity images from events directly (Ours) to event-to-intensity image synthesis (EV) combined with SISR \cite{dai2019second} and MISR \cite{haris2019recurrent} methods.} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{ccccc} \toprule Method & PSNR ($\uparrow$) & SSIM ($\uparrow$) & MSE ($\downarrow$) & LPIPS ($\downarrow$) \\\midrule EV + SISR $2\times$ & 11.292 & 0.384 & 0.348 & 0.394 \\ EV + MISR $2\times$ & \underline{11.309} & \underline{0.385} & \underline{0.347} & \underline{0.392}\\ \rowcolor{Gray} Ours $2\times$ & \textbf{16.420} & \textbf{0.600} & \textbf{0.108} & \textbf{0.172}\\ \midrule EV + SISR $4\times$ & 11.168 & \underline{0.396} & 0.089 & 0.543\\ EV + MISR $4\times$ & \underline{11.293} & 0.384 & \underline{0.087} & \underline{0.396}\\ \rowcolor{Gray} Ours $4\times$ & \textbf{16.068} & \textbf{0.560} & \textbf{0.028} & \textbf{0.253}\\ \bottomrule \end{tabular} } \vspace{-1em} \label{tab:E2I2SR} \end{table} \section{Experiments and Analyses} For the empirical validation, we use sequences generated with the event camera simulator (ESIM) \cite{rebecq2018esim} and four challenging and diverse real-world public datasets
\cite{bardow2016simultaneous,mueggler2017event,scheerlinck2018continuous, zhu2018multivehicle}. We describe the details of our dataset in the supplement. For the quantitative analyses, we use PSNR in dB (logarithmic scale), the structural similarity \cite{wang2003multiscale} (SSIM) as a fraction between zero (less similar) and one (fully similar), the mean squared error (MSE), and the perceptual similarity (LPIPS) as a metric to evaluate the similarity of the high-level features in two images (the lower the value, the more similar). For each experiment, we train our network on a cluster of $8$ Titan-Xp GPUs. The batch size is $8$ and the initial learning rate is $0.01$, which is decayed by a factor of $10$ at every half of the remaining epochs of the given maximum number of epochs (\eg, 50 in our experiments). We use $\lambda=0.01$ for all our experiments, unless otherwise mentioned. \subsection{Comparison with the State of the Art} We are the first to propose the task of directly reconstructing SR intensity images from events; thus there are no directly comparable methods. So, we first down-sample our outputs and compare to same-size intensity reconstruction methods to evaluate the quality of our reconstruction. Then we compare our method to the state-of-the-art intensity reconstruction methods combined with the state-of-the-art super-resolution (SR) methods. \vspace{-1em}\paragraph{Image reconstruction without super-resolution.} We compare down-sampled outputs of our method to the state-of-the-art event-to-intensity image methods on seven challenging real-world sequences from the Event Camera dataset \cite{mueggler2017event}. For notation brevity, we abbreviate the high pass filter method \cite{scheerlinck2018continuous} as HF, manifold regularization \cite{reinbacher2016real} as MR, event-to-video generation \cite{rebecq2019high} as EV and event-to-intensity by conditional GANs as EG \cite{mostafavi2019event}.
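For reference, the two pixel-level metrics above can be computed as in the following minimal NumPy sketch (SSIM and LPIPS are instead computed with standard library implementations, e.g.\ scikit-image and the official \texttt{lpips} package):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images with values in [0, 1]."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)
```

For 8-bit images, the same formulas apply with `max_val=255`.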
Following the evaluation protocols in many real-world event datasets \cite{mueggler2017event,scheerlinck2018continuous, zhu2018multivehicle}, we consider the APS frame as the GT. We follow the sequence split of \cite{rebecq2019high} and use the reported performance measures of HF, MR and EV. For EG, we used the authors' reconstructed images to evaluate the performance. As shown in Table \ref{tab:realworld}, our proposed method outperforms all other methods in LPIPS. It implies that the reconstructed intensity image is perceptually better than those of the previous methods. Our method also exhibits higher SSIM scores on multiple sequences and MSE errors comparable to EG. Similar to EV, we train the model only with the synthetic sequences and apply it to real-world sequences. In this challenging zero-shot data transfer setting without fine-tuning, our method outperforms other methods on real-world events. Note that the two runner-up methods in LPIPS (EV and EG) also use learning-based frameworks. \vspace{-1em}\paragraph{Super-resolved image reconstruction.} We now combine state-of-the-art event-to-intensity reconstruction algorithms with state-of-the-art SR methods and compare our method to them. For the state-of-the-art event-to-intensity algorithm, we use EV\footnote{Publicly available at {\scriptsize \url{https://github.com/uzh-rpg/rpg\_e2vid}}.} since it is the runner-up method that outperforms EG in SSIM and LPIPS in most of the sequences and on average (Table~\ref{tab:realworld}). For super-resolution, we use two recent algorithms: one for SISR \cite{dai2019second} and another for MISR \cite{haris2019recurrent}. As shown in Table \ref{tab:E2I2SR}, our method outperforms the state-of-the-art intensity reconstruction algorithms combined with the state-of-the-art SR algorithms in all metrics by large margins. We use 30 sequences from our dataset generated by ESIM. \begin{figure}[t!]
\centering \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Events & \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} \vspace{-6 pt} \includegraphics[width=1\linewidth]{images/Text_c1.png} \vspace{-3 pt} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} \vspace{-6 pt} \includegraphics[width=1\linewidth]{images/Text_c2.png} \vspace{-3 pt} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Events & \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} \vspace{-6 pt} \includegraphics[width=1\linewidth]{images/Dynamic_c1.png} \vspace{-3 pt} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} \vspace{-6 pt} \includegraphics[width=1\linewidth]{images/Dynamic_c2.png} \vspace{-3 pt} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Events & \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} \vspace{-6 pt} \includegraphics[width=1\linewidth]{images/Calib_c1.png} \vspace{-3 pt} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize EV & \scriptsize EV+SR $2\scriptstyle\times$ & \scriptsize Ours $2\scriptstyle\times$ & \scriptsize APS\\ \end{tabularx} 
\includegraphics[width=1\linewidth]{images/Calib_c2.png} \vspace{-10 pt} \caption{Qualitative comparison between synthesizing SR intensity images directly (ours) and super-resolving as a downstream application of intensity image estimation (EV+MISR). Highlighted boxes are zoomed for better comparison.}% \label{E2SR} \vspace{-1em} \end{figure} For qualitative analyses, we demonstrate intensity reconstruction by EV, the combination of EV+MISR and our method on real-world and simulated sequences in Fig. \ref{E2SR} and Fig. \ref{teaser}. Note that our method reconstructs fine details from events. In Fig. \ref{teaser}, EG does not always reconstruct scene details from the events and sometimes hallucinates jittery artifacts. While EV reconstructs scene details from the events relatively better than EG, it creates a shadow-like artifact and darkens some areas of the scene. Furthermore, in the presence of hot pixels in the data, EV does not filter them; white or black dots appear in the results by EV, while our method mostly filters them out without explicit removal operations. We present more results in the supplementary material. We further conduct experiments on the sequences from another popular dataset \cite{bardow2016simultaneous} and qualitatively compare our method to EG and EV in Fig. \ref{Bardow}. Our method can reveal details that are not visible when constructing same-sized images, such as fingertips or texture. \begin{figure*}[t!] \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Events & \scriptsize EG & \scriptsize EV & \scriptsize Ours & \scriptsize Events & \scriptsize EG & \scriptsize EV & \scriptsize Ours\\ \end{tabularx} \includegraphics[width=1\linewidth]{images/Bardow_c4.png} \caption{Qualitative comparison of our downscaled outputs to EV and EG on sequences from \cite{bardow2016simultaneous} (without APS).
Our method is able to reconstruct structural details from inputs as small as $128{\scriptstyle\times}128$ pixels. More results are provided in the supplementary material. } \label{Bardow} \vspace{-1 em} \end{figure*} \subsection{Analysis on Loss Terms ($\mathcal{L}_{sim}$)} We ablate the loss function to investigate the effect of each term on image reconstruction quantitatively in Table \ref{tab:ablate} and qualitatively in Fig. \ref{l1_ps}. All analyses and ablation studies were performed with the simulated data for reliable quantitative analyses with high-quality GT. Using only the $\mathcal{L}_{\ell_1}$ term, we observe better performance in PSNR, but it leads to visually less sharp images and thus low performance in all other metrics. Using only the $\mathcal{L}_{LPIPS}$ term, we observe that images look visually acceptable but with the downside of lower PSNR, with dot-like artifacts on regions with fewer events and on the edges. The final proposed loss function $\mathcal{L}_{sim}$ performs the best in SSIM and MSE with a slight decrease in PSNR and LPIPS, but creates visually the most plausible images. \begin{table}[t!]
\centering \caption{\protect \centering Ablation study of the loss function.} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccc} \toprule Loss & PSNR ($\uparrow$) & SSIM ($\uparrow$) & MSE ($\downarrow$) & LPIPS ($\downarrow$) \\ \midrule $\mathcal{L}_{\ell_1}$ & \textbf{15.33} & \underline{0.517} & \underline{0.034} & 0.485 \\ $\mathcal{L}_{LPIPS}$ & 10.06 & 0.388 & 0.454 & \textbf{0.232} \\ \midrule $\mathcal{L}_{sim}$ (Full) & \underline{15.03} & \textbf{0.528} & \textbf{0.032} & \underline{0.258} \\ \bottomrule \end{tabular} } \label{tab:ablate} \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } $\scriptscriptstyle \ell_1$ & \tiny LPIPS & $\scriptscriptstyle\ell_1{+}$\tiny LPIPS & $\scriptscriptstyle\ell_1$ & \tiny LPIPS & $\scriptscriptstyle\ell_1{+}$\tiny LPIPS\\ \end{tabularx} \centering \includegraphics[width=1\linewidth]{images/l1+ps1_.png} \captionof{figure}{Effect of the loss function on reconstruction quality. The $\ell_1$ norm smooths edges; perceptual similarity (LPIPS) adds structural details but also creates artifacts.
The combination of $\ell_1{\scriptstyle+}LPIPS$ ($\mathcal{L}_{sim}$) shows fewer artifacts while adding structural details.} \label{l1_ps} \vspace{0.5em} \caption{Effect of the number of stacks and the scale factor.} \centering \resizebox{0.99\linewidth}{!}{ \begin{tabular}{cccccc} \toprule Scale & \# Stacks & PSNR ($\uparrow$) & SSIM ($\uparrow$) & MSE ($\downarrow$) & LPIPS ($\downarrow$) \\\midrule \multirowcell{2}{$2\times$} & 3S & 15.46 & 0.554 & 0.323 & 0.191\\ & 7S & \textbf{16.42} & \textbf{0.600} & \textbf{0.108} & \textbf{0.172}\\ \midrule \multirowcell{2}{$4\times$} & 3S & 15.03 & 0.528 & 0.032 & 0.258\\ & 7S & \textbf{16.06} & \textbf{0.560} & \textbf{0.028} & \textbf{0.253}\\ \bottomrule \end{tabular}} \label{tab:scale_frame} \vspace{-1em} \end{table} \subsection{Analysis on Super Resolution Parameters} We evaluate the effect of two SR parameters, the upscale factor ($2\times, 4\times$) and the size of the sequence of stacks ($3S,7S$), on the output quality. We summarize the results in Table \ref{tab:scale_frame}. Comparing $3S$ and $7S$, we observe that $7S$ results in better performance in all metrics. It implies that a longer recursion on the sequences may produce more reliable hidden states and results in better-quality output. Also, when using longer sequences, it is more likely to capture events that happen only for a short period of time, since unrolling over a larger recursion helps to keep the information of short events. It is more challenging to super-resolve events to larger images, as it is not trivial for an algorithm to handle large spatial regions where no events exist. Although the MSE decreases compared to $2\times$, this is because the denominator grows with the image size, and it is not strongly related to the output quality. \subsection{Qualitative Analysis on HDR Sequences} One challenging scenario using the event camera is to capture events under extreme dynamic range.
We qualitatively analyze outputs under such extreme conditions and compare them to EV in Fig. \ref{fig:boxes_sun}. Conventional cameras, including the APS, have a much lower dynamic range and either create black regions (when the camera fails to sense intensity details below its sensing range, as shown in the top row) or white regions (when light floods the camera and intensities exceed its sensing range, as shown in the bottom row). We observe that our method can cover a higher dynamic range and reveal more structural details that EV and the APS frame fail to capture. \begin{figure}[t!] \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Events \hspace{20 pt} & \scriptsize EV \hspace{20 pt} & \scriptsize Ours $2\scriptstyle\times$ \hspace{20 pt} & \scriptsize APS \\ \end{tabularx} \includegraphics[width=1\linewidth]{images/hdr_boxes_sun.png} \captionof{figure}{Image reconstruction comparison in extreme HDR scenarios \cite{mueggler2017event,scheerlinck2018continuous}. Our method synthesizes more details while producing fewer artifacts compared to EV and the APS. Please zoom in and compare the suggested red boxes.} \label{fig:boxes_sun} \vspace{-1em} \end{figure} \subsection{Analysis on the Failure Modes} Failure cases are mostly related to missing background details over long trajectories when the foreground objects have rapid movements. In such sequences, our method only recovers parts of the scene that are within a limited temporal distance of our central stack. We showcase and further analyze a number of failure modes in the supplementary material. \section{Extensions} \paragraph{Video reconstruction.} We aim to reconstruct a single image, not a video. Thus, temporal consistency between frames is outside our interest and not always maintained.
To extend our method to video reconstruction, we utilize a blind post-processing method \cite{Lai-ECCV-2018} to encode temporal consistency among the intensity images and demonstrate the qualitative results in a supplementary video. To quantitatively evaluate the temporal consistency, we follow the temporal stability metric from \cite{lai2018learning}, which is based on the flow warping error between two consecutive synthesized frames $(F_t,F_{t+1})$: \begin{equation} \resizebox{.9\hsize}{!}{ $E_{warp}(F_t,F_{t+1})= \frac{1}{\sum_{i=1}^{N}M_t ^{(i)}}\sum_{i=1}^{N}M_t ^{(i)} || F_t ^{(i)} - \hat{F}_{t+1} ^{(i)} ||_2^2, $ } \label{eq:warp} \vspace{-5pt} \end{equation} where $\hat{F}_{t+1}$ is the warped frame of $F_{t+1}$ and $M_t \in \{0,1\}$ is the non-occlusion mask based on \cite{ruder2016artistic} to ensure that the calculations are applied only in the non-occluded regions. We compute the optical flow used for warping and the non-occlusion map from the APS frames, as they are the GT, to evaluate the warping errors of all compared methods and of the APS itself. We summarize the results with different sizes of sequences ($3S$ and $7S$) in comparison to EV in Table \ref{tab:temp_consist}. While our methods ($3S$ and $7S$) are worse than EV due to the lack of temporal consistency, a simple post-processing ($3S+$ and $7S+$) significantly improves the performance, outperforming both EV \cite{rebecq2019high} and its post-processed version ($EV+$) by large margins. \begin{table}[t!] \centering \captionof{table}{Temporal stability error evaluation (Eq. \ref{eq:warp}). The plus sign indicates blind post-processing \cite{lai2018learning}. Our method ($3S$, $7S$) does not directly consider temporal consistency; however, longer sequences of stacks ($7S$) are more consistent. EV \cite{rebecq2019high} uses up to $L{=}40$ input stacks and is initially more consistent.
However, we get lower errors even on our smallest sequence after post-processing.} \resizebox{0.99\linewidth}{!}{% \begin{tabular}{cccccccc} \toprule $E_{warp}(\downarrow)$ & APS & $3S$ & $7S$ & EV \cite{rebecq2019high} & $3S+$ & $7S+$ & EV \cite{rebecq2019high}$+$\\ \cmidrule(lr){1-1} \cmidrule(lr){2-5} \cmidrule(lr){6-8} \lmss{dynamic\underline{ }6dof} & 0.61 & 20.35 & \underline{16.54} & \textbf{8.78} & \textbf{3.42} & \underline{3.71} & 5.56\\ \lmss{boxes\underline{ }6dof} & 1.81 & \underline{16.69} & 17.51 & \textbf{15.69} & \textbf{3.58} & \underline{3.95} & 9.36 \\ \lmss{poster\underline{ }6dof} & 1.10 & \underline{18.80} & 22.66 & \textbf{17.74} & \textbf{4.41} & 5.91 & \underline{5.56} \\ \lmss{shapes\underline{ }6dof} & 0.44 & 24.00 & \underline{21.23} & \textbf{16.66} & \underline{2.80} & \textbf{2.63} & 8.33 \\ \lmss{office\underline{ }zigzag} & 0.08 & 3.62 & \underline{2.19} & \textbf{0.72} & \underline{0.36} & \textbf{0.34} & 0.44\\ \lmss{slider\underline{ }depth} & 0.02 & 0.57 & \underline{0.34} & \textbf{0.19} & \underline{0.06} & \textbf{0.04} & 0.12\\ \lmss{calibration} & 0.36 & 15.46 & \underline{9.72} & \textbf{2.99} & \textbf{1.31} & \underline{1.24} & 1.62\\ \cmidrule(lr){1-1} \cmidrule(lr){2-5} \cmidrule(lr){6-8} Average & 0.63 & 14.21 & \underline{12.89} & \textbf{8.97} & \textbf{2.28} & \underline{2.55} & 5.20\\ \bottomrule \end{tabular} } \label{tab:temp_consist} \vspace{-1em} \end{table} \vspace{-1em}\paragraph{Complementary and Duo-Pass.} To evaluate our method in a challenging set-up, we do not use the APS frame to super-resolve images. Using the APS frame, we can further improve the quality of the output. We name the extension using the APS frame as \emph{Complementary} \cite{scheerlinck2018continuous} or \emph{Comp.} We train the initial state of the network with the low-resolution (LR) APS frame as a central stack (Sec. \ref{sec:overall_structure}) and provide events as its nearby stacks.
We observe that the network learns to add higher resolution details from the LR input. However, the Complementary method is sensitive to the quality of the central stack; if it is blurry or noisy, its artifacts propagate to the final reconstruction. To avoid this shortcoming, we propose another extension that does not use APS frames but instead performs two iterations, or passes, over events only, called \emph{Duo-Pass}. In the first pass, we use the main scheme to create intensity images from events only. In the second pass, we use the synthesized intensity image from the first pass as the central stack, similar to how the APS frame is used in the Complementary method. With Duo-Pass, we are able to recover HR details that the first pass misses without the help of the APS frame. We qualitatively compare the results of our method (main), Duo-Pass, and Comp. in Fig. \ref{fig:duo_comp}. We provide more results in the supplementary material. \begin{figure} \centering \begin{tabularx}{\linewidth}{ >{\centering}X >{\centering}X >{\centering}X >{\centering\arraybackslash}X } \scriptsize Ours (main) & \scriptsize Ours (Duo-Pass) & \scriptsize APS & \scriptsize Ours (Comp.) \\ \end{tabularx} \includegraphics[width=1\linewidth]{images/duo_comp3_.png} \captionof{figure}{Extensions: Duo-Pass, which iterates the SR twice, and Complementary (Comp.), which uses events together with APS frames.} \label{fig:duo_comp} \vspace{-1em} \end{figure} \section{Conclusion} We propose to directly reconstruct higher resolution intensity images from events with an end-to-end neural network. We demonstrate that our method reconstructs high quality images with fine details in comparison to the state of the art in both same-size image reconstruction and super-resolution. We further extend our method to \emph{Duo-Pass}, which performs an extra pass to add missing details, and \emph{Complementary}, which utilizes APS frames in addition to events.
We also reconstruct videos with our method, applying a simple post-processing step to ensure temporal consistency. \vspace{1em} \begingroup { \small \noindent \textbf{Acknowledgement.} This work was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.2019R1C1C1009283) and (NRF-2018R1A2B3008640), the Next-Generation Information Computing Development Program through the NRF funded by MSIT, ICT (NRF-2017M3C4A7069369), Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-01842, Artificial Intelligence Graduate School Program (GIST)) and (No.2019-0-01351, Development of Ultra Low-Power Mobile Deep Learning Semiconductor With Compression/Decompression of Activation/Kernel Data).\par } \endgroup \newpage {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Modern optical time-domain surveys, unbiased with respect to host galaxy environment, have discovered superluminous supernovae (SLSNe) with luminosities exceeding those of normal supernovae (SNe) by at least an order of magnitude \citep{Quimby2011,Chomiuk2011,Gal-Yam2012}. This has dramatically increased the known diversity of SNe, and fueled theoretical and observational efforts to understand the most extreme ways that massive stars end their lives. Similar to their normal luminosity counterparts, SLSNe can be divided into two classes based on the presence or absence of hydrogen emission lines in their spectra. The majority of hydrogen-rich Type II SLSNe show narrow and intermediate width Balmer emission lines and are thus the most luminous examples of Type IIn SNe \citep[but see][]{Inserra2018}. Their luminosities can be explained by interaction with a slow-moving circumstellar medium \citep[CSM;][]{Smith2007,ChevalierIrwin2011}. Hydrogen-poor Type I SLSNe (hereafter SLSNe-I) are characterized at early times by blue spectra indicating temperatures of $\gtrsim\!10^{4}$ K with few features other than distinctive \ion{O}{2} absorption features at wavelengths of $\sim 3600-4600$ \AA\ \citep{Gal-Yam2012}. As the temperature decreases, their spectra begin to resemble normal luminosity Type Ic SNe suggesting that their ejecta have similar compositions, but with an additional, persistent heating source in SLSNe-I. The proposed models for the power sources of SLSNe-I are a central engine \citep{KasenBildsten2010}, hydrogen-free CSM interaction \citep{ChevalierIrwin2011}, or an over-abundant production of radioactive $^{56}$Ni \citep{HegerWoosley2002,Gal-Yam2009}. 
While CSM interaction can explain the lightcurves of SLSNe-I \citep{Chatzopoulos2013,Nicholl2014} and the emergence of late-time H$\alpha$ emission in some events suggests eventual interaction with material at $\sim\!10^{16}$ cm from the progenitor \citep{Yan2017}, there is no spectroscopic evidence that CSM interaction is the dominant power source near peak. Central engine models, the most popular being the spin-down of a rapidly rotating magnetar \citep{KasenBildsten2010}, are also able to reproduce the lightcurves of SLSNe-I \citep{Inserra2013,Nicholl2014,Nicholl2017}. In addition, the early phase spectra of SLSNe-I have generally favored spectroscopic models produced by a central, illuminating source rather than pair-instability models in which a significant amount of $^{56}$Ni is produced \citep{Dessart2012,Mazzali2016}. This appears to also hold true with the few SLSNe-I that have nebular phase spectra, which have indicated similar ejecta compositions and velocity structures with SNe associated with long gamma-ray bursts \citep[GRBs;][]{Milisavljevic2013,Nicholl2016b,Jerkstrand2016,Jerkstrand2017}. Furthermore, host galaxy studies of SLSNe-I have shown that they occur in metal-poor dwarf galaxies \citep{Chen2013,Lunnan2014,Leloudas2015,Perley2016}, similar to long GRB hosts, and radio and X-ray observations of SLSNe-I indicate low-density circumstellar environments \citep{Nicholl2016,Margutti2017,Coppejans2018}, lower than expected if CSM interaction is the dominant power source. Given the lines of evidence favoring the magnetar central engine model for SLSNe-I, it is important to study whether this model can explain the full range of SLSN-I properties, given that this class exhibits a wide range of photometric behavior. 
This observed diversity, notably the order of magnitude spread in rise and decline timescales \citep{Nicholl2015} and peak bolometric luminosity, has led to debate in the literature regarding whether SLSNe-I constitute a single class resulting from a single physical mechanism with varying parameters or if sub-classes exist which reflect the presence of multiple power sources and/or explosion mechanisms. Modeling of large samples of SLSN-I lightcurves has suggested that a continuum of ejecta and engine properties can account for the range of known SLSNe-I \citep{Nicholl2017}. However, \citet{Inserra2018a} found that slower SLSNe-I have a shallower velocity gradient, keeping open the possibility that some significant physical differences may exist among SLSNe-I. One important diagnostic is the presence of short timescale lightcurve variability, often referred to as undulations. This was first noted for the slowly evolving SN\,2015bn \citep{Nicholl2016}. \citet{Inserra2017} found similar undulations in other slow SLSNe-I, but such behavior was difficult to detect in faster evolving events, at least in part due to the steeper overall lightcurves and shorter sampling baseline. \citet{Nicholl2014} identified one fast SLSN-I, SSS120810, that did show significant variability. High-amplitude lightcurve undulations have also been found in iPTF13dcc \citep{Vreeswijk2017} and iPTF15esb \citep{Yan2017}. Recently, we began a targeted search for low-$z$ SLSNe which can be studied in detail near peak and to late times. Here we present observations of PS16aqv, a SLSN-I at $z = 0.2025$ discovered as part of this search. Through our extensive follow-up campaign we were able to obtain well-sampled lightcurves and spectra and we find that PS16aqv is overall most similar to the fast evolving SLSNe-I. However, PS16aqv stands out as a fast declining event with clear evidence for undulations in its lightcurve, remarkably similar to SN\,2015bn \citep{Nicholl2016}. 
Furthermore, the lightcurve exhibited a transition to a very slow decline phase followed by rapid fading, indicating complex behavior at late times. The paper is structured as follows. In Section 2 we present our photometric and spectroscopic data of PS16aqv. In Section 3 we present the observational characteristics of PS16aqv with comparisons to other SLSNe-I. In Section 4 we discuss our MCMC modeling of the lightcurve with a magnetar model. In Section 5 we analyze the properties of the host galaxy and environment in which PS16aqv occurred. In Section 6 we discuss the implications of the lightcurve undulations in PS16aqv and the possible physical mechanisms responsible for them, their similarity to those in other events, the late-time deviations from a smooth decline, and limits on the nickel mass. Finally, we conclude in Section 7. In this paper we use $H_{0} = 67$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.32$, and $\Omega_{\Lambda} = 0.68$ \citep{Planck2013}, resulting in a luminosity distance of 1035 Mpc to PS16aqv. The Galactic extinction along the line of sight to PS16aqv is $E(B-V) = 0.0433 \pm 0.0011$ mag \citep{SF2011}. \begin{figure*}[t!] \begin{center} \includegraphics[scale=0.47]{f1.pdf} \end{center} \caption{Left: UV and optical lightcurves of PS16aqv corrected for Galactic extinction with band offsets for clarity. The \textit{Swift}/UVOT filters $U$, $B$, $V$, $UVW1$, $UVM2$, and $UVW2$ and CSS data are in Vega magnitudes and all others are in AB magnitudes. Vertical lines at the bottom indicate the epochs of our spectra. The good time sampling clearly reveals interesting behavior such as lightcurve undulations. Top Right: The $r$-band and $i$-band lightcurves of PS16aqv including deep upper limits at $\approx 280$ rest-frame days after peak. While the decline rate closely matches that for fully-trapped $^{56}$Co decay at $\sim 80 - 130$ rest-frame days after peak, the decline rate must dramatically increase at later times to account for the upper limits.
Bottom Right: Rest-frame absolute magnitude lightcurves (no offsets), which take into account K-corrections (measured from our spectra), Galactic extinction, and internal host galaxy extinction inferred from our lightcurve modeling (Section \ref{sec:mag}). PS16aqv exhibited a peak absolute $r$-band magnitude of $M_{r} = -22.10\pm0.12$.} \label{obsLC} \end{figure*} \capstartfalse \begin{deluxetable*}{ccccccc}[!htb] \tablecolumns{7} \tabcolsep0.1in\footnotesize \tablewidth{7in} \tablecaption{Spectroscopic Observations of PS16aqv \label{tab:spec}} \tablehead { \colhead {Date} & \colhead {MJD} & \colhead {Phase\tablenotemark{a}} & \colhead {Telescope} & \colhead{Instrument} & \colhead {Airmass} & \colhead {Resolution (\AA)} } \startdata 2 March 2016 & 57450.4 & $-$2.5 & MDM/Hiltner & OSMOS & 1.53 & 5 \\ 15 March 2016 & 57463.3 & +8.2 & FLWO 60-inch & FAST & 1.49 & 5.7 \\ 5 April 2016 & 57484.3 & +25.7 & Magellan/Baade & IMACS & 1.17 & 5.4 \\ 14 April 2016 & 57493.3 & +33.2 & MMT & Blue Channel & 1.45 & 4 \\ 10 June 2016 & 57550.3 & +80.6 & Magellan/Clay & LDSS3c & 1.06 & 7.5 \\ 29 July 2016 & 57599.3 & +121.3 & Magellan/Clay & LDSS3c & 1.28 & 7.5 \\ 26 January 2017 & 57780.3 & +271.9 & Gemini-N & GMOS & 1.26 & 11 \enddata \tablenotetext{a}{Rest-frame days since peak bolometric brightness} \end{deluxetable*} \capstarttrue \begin{figure*}[t!] \begin{center} \includegraphics[scale=0.46]{f2.pdf} \end{center} \caption{Left: Residuals from low-order polynomial fits to the post-peak rest-frame $gri$ lightcurves of PS16aqv. At $\approx30$ days after peak the lightcurves show a ``knee'', or undulation, lasting for about 10 days which appears to have a slightly higher amplitude in $g$-band. Such a feature has been seen in SN\,2015bn at $\approx50$ days after peak \citep{Nicholl2016}. Right: Rest-frame $ugri$ lightcurves of PS16aqv compared to SN\,2015bn after compressing the SN\,2015bn lightcurves in time by 40\%. 
This time compression highlights the similar amplitudes of the lightcurve undulations in the two events. While the undulations occur on different timescales, as might be expected due to the overall lightcurve timescale difference, the behavior is similar.} \label{LCpoly} \end{figure*} \section{Observations of PS16aqv} \subsection{Discovery} PS16aqv, also known as SN\,2016ard, was classified as part of our program to identify SLSNe from the Pan-STARRS Search for Transients \citep[PSST;][]{Huber2015}, which publicly reports stationary transients from the on-going Pan-STARRS near-Earth object survey. PS16aqv was first detected by PSST on 10 February 2016 but due to being partially located in a detector chip gap it was not flagged by the PSST detection software until 20 February 2016 when it reached a magnitude of $i \approx 18.7$. The SN was independently discovered by the Catalina Real-Time Transient Survey \citep[CRTS;][]{Drake2009} on 16 February 2016 and was designated CSS160216:141045-100935. Examining the associated Pan-STARRS 3$\pi$ deep stack, we found a marginal detection of a host galaxy with $r \approx 22.6$ mag. The large brightness contrast between the transient and host galaxy motivated us to initiate follow-up observations. A spectrum obtained on 2 March 2016 using the Ohio State Multiple Object Spectrograph \citep[OSMOS;][]{Martini2011} on the 2.4-m Hiltner telescope at MDM Observatory exhibited a blue continuum with weak spectral features consistent with the \ion{O}{2} lines commonly seen in the pre- and near-maximum light spectra of SLSNe-I. The redshift of $z\approx0.20$ implied by this identification (later confirmed from host galaxy emission lines to be $z = 0.2025 \pm 0.0003$) yielded an absolute magnitude for the PSST detection of $M_{i}\approx -21.3$, confirming the superluminous nature of the event. Following classification we obtained additional photometric and spectroscopic observations. 
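As a sanity check on the quoted numbers, the absolute magnitude of the PSST detection follows from the distance modulus under the adopted cosmology ($H_{0} = 67$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m} = 0.32$, flat). A minimal numerical sketch (pure Python; the trapezoidal integration and step count are our own illustrative choices):

```python
import math

# adopted cosmology from the text: H0 = 67 km/s/Mpc, Om = 0.32, flat
H0, OM = 67.0, 0.32
C = 299792.458  # speed of light, km/s

def luminosity_distance(z, n=10000):
    """Flat-LambdaCDM luminosity distance in Mpc via trapezoidal
    integration of dz / E(z)."""
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    invE = [1.0 / math.sqrt(OM * (1 + zz) ** 3 + (1 - OM)) for zz in zs]
    comoving = (C / H0) * dz * (sum(invE) - 0.5 * (invE[0] + invE[-1]))
    return (1 + z) * comoving

z = 0.2025
d_l = luminosity_distance(z)          # ~1035 Mpc, as quoted in the text
mu = 5 * math.log10(d_l * 1e6 / 10)   # distance modulus (d_l in pc)
M_i = 18.7 - mu                       # ~ -21.4 before K-correction
```

The result is consistent with the quoted $M_{i}\approx -21.3$ once an approximate K-correction is applied.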
\subsection{UV and Optical Photometry} \label{sec:optobs} We obtained images of PS16aqv in the $BVR$ filters on 1 March 2016 using the 1.3-m telescope at MDM Observatory and in the $gri$ filters using the 48-inch telescope at the Fred Lawrence Whipple Observatory (FLWO) from 11 March to 3 July 2016. We also obtained images using IMACS \citep{Dressler2011} and LDSS3c \citep{Stevenson2016} on the Magellan 6.5-m telescopes at Las Campanas Observatory in the $griz$ filters extending to 19 March 2018. We reduced the images using standard techniques and performed photometry using point-spread function (PSF) fitting implemented with the {\tt daophot} IRAF package. Instrumental magnitudes in the $griz$ filters were calibrated to the Pan-STARRS 3$\pi$ photometric system in AB magnitudes using zeropoints calculated from field comparison stars. The $BVR$ instrumental magnitudes were calibrated to Vega magnitudes using Landolt fields observed on the same night. The uncertainties on the calibrated magnitudes include the uncertainty resulting from the PSF fit and the uncertainty on the nightly zeropoints. We also obtained observations of PS16aqv using the UV/Optical Telescope \citep[UVOT;][]{Roming05} onboard the \textit{Swift} satellite in the $U$, $B$, $V$, $UVW1$, $UVM2$, and $UVW2$ filters. We analyzed the data following the prescription of \citet{Brown09} using the updated calibration files and zeropoints from \citet{Breeveld11}. PS16aqv was detected in 11 epochs from 9 March to 18 April 2016. From discovery to about two months after peak brightness, the flux in our images is dominated by that of the SN in all filters and therefore host subtraction is not necessary. However, as the SN faded, the host contribution became significant and required careful host subtraction to isolate the SN flux. We performed image subtraction using {\tt HOTPANTS} \citep{Becker2015} on our $griz$ images obtained after the gap in observations around 15 May 2016 (MJD 57523). 
For $g$, $r$, and $z$ observations after this date, we use deep templates obtained on 17 July 2017 with IMACS and for $i$-band observations we use a deep template obtained on 19 March 2018 with LDSS3c. Subtracting these templates from similarly deep $i$- and $r$-band images taken on 31 January and 2 February 2017 ($\sim\!$10 months after peak brightness), respectively, we find no detectable SN flux indicating PS16aqv had already faded significantly by early 2017. We measure upper limits on the brightness of PS16aqv in the 31 January and 2 February 2017 images using the following procedure. We inject a fake point source at the position of PS16aqv (measured using relative astronomy with images containing SN flux) and then we perform image subtraction using the templates. We repeat this for a range of magnitudes and consider the 3$\sigma$ upper limit to correspond to a source detected at 3$\sigma$ in the subtracted image. We find an upper limit of $r>25.6$ mag from the 2 February 2017 image and $i>25.3$ mag from the 31 January 2017 image. PS16aqv was also detected in several epochs by PSST in the $w$, $r$, $i$, and $z$ filters, as well as by the unfiltered CRTS. The earliest two PSST detections were not recorded by the PSST pipeline due to proximity to a chip gap, but by examining the 2D frames and performing PSF fitting photometry we were able to recover the flux. For the purpose of calculating the rest-frame lightcurves we converted the $w$-band magnitudes to $r$-band using a shift of $-0.13$ mag empirically determined from the lightcurves. No correction was applied to the CRTS data as they are already well-matched to our $r$-band measurements. Our ground-based photometry, in addition to the PSST and CRTS data, is listed in Table \ref{tab:ground} and the \textit{Swift}/UVOT photometry is listed in Table \ref{tab:swift}. In Figure \ref{obsLC} we show the corresponding lightcurves. 
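The spirit of the injection procedure used for the late-time upper limits can be illustrated with a simplified, sky-noise-limited estimate (a deterministic stand-in for the inject-and-subtract loop described above; the zeropoint, noise level, and PSF width are illustrative values, not the actual IMACS/LDSS3c calibration):

```python
import math

def limiting_magnitude(zp, sky_sigma, fwhm, snr_min=3.0):
    """Expected point-source limiting magnitude in an aperture of radius
    2*FWHM, assuming sky-dominated noise. The real procedure injects fake
    sources of decreasing brightness into the images, runs the image
    subtraction, and records the faintest source detected at snr_min sigma;
    this closed form gives the same answer in the sky-limited regime."""
    npix = math.pi * (2 * fwhm) ** 2        # aperture area in pixels
    noise = sky_sigma * math.sqrt(npix)     # background noise in aperture
    min_flux = snr_min * noise              # faintest detectable flux
    return zp - 2.5 * math.log10(min_flux)

# illustrative numbers only
print(limiting_magnitude(zp=27.0, sky_sigma=1.0, fwhm=3.0))  # ~23.2
```

Deeper sky background (larger `sky_sigma`) or a broader PSF pushes the limit brighter, as expected.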
\subsubsection{\textit{Hubble Space Telescope (HST)} Observations} We obtained \textit{HST} observations of PS16aqv on 27 December 2017 using the Advanced Camera for Surveys (ACS) Wide Field Camera (WFC) with the F775W filter (PID: 15162; PI: Blanchard). Four dithered images were corrected for optical distortion and drizzle-combined to a finer grid (0.035'' per pixel) using the {\tt astrodrizzle} task in the {\tt drizzlepac}\footnote{\url{http://drizzlepac.stsci.edu/}} software package provided by STScI. We examine the location of PS16aqv, which we determined by performing relative astrometry with an LDSS3c image containing the transient, and find no point source is detected at the measured position. However, the precision on the position is sufficient to yield information on the environment of PS16aqv (Section \ref{sec:host}). \subsection{Optical Spectroscopy} We obtained 7 epochs of spectroscopy of PS16aqv spanning $-2.5$ to $+272$ rest-frame days since maximum brightness using OSMOS on the 2.4-m Hiltner telescope, the FAST Spectrograph \citep{Fabricant1998} on the 60-inch telescope at FLWO, IMACS and LDSS3c on the Magellan 6.5-m telescopes, the Blue Channel Spectrograph \citep{Schmidt1989} on the 6.5-m MMT telescope, and GMOS-N on the 8-m Gemini-North telescope. The observation epochs, airmasses, and spectral resolutions are given in Table \ref{tab:spec}. The 2D spectra were reduced using standard techniques in IRAF to extract 1D wavelength-calibrated spectra. Relative flux calibration was achieved using standard stars observed on the same nights. If needed, the spectra were scaled to match contemporaneous photometry to achieve an absolute flux calibration. The spectra were corrected for Galactic extinction and transformed to the rest-frame of PS16aqv for analysis. \subsection{X-ray Observations} We obtained X-ray observations of PS16aqv using the X-ray Telescope \citep[XRT;][]{Burrows2005} onboard \textit{Swift} from 9 March to 10 June 2016. 
The data analysis and results are provided in \citet{Margutti2017}. We find no detection of an X-ray source at the position of PS16aqv in any epoch, resulting in a combined unabsorbed flux upper limit of $F_{X} < 1.5 \times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ ($0.3 - 10$ keV). \begin{figure*}[t!] \begin{center} \includegraphics[scale=0.397]{f3a.pdf} \includegraphics[scale=0.397]{f3b.pdf} \end{center} \caption{Left: Rest-frame color evolution of PS16aqv in the $u-g$, $g-r$, and $r-i$ colors. Right: Rest-frame $g-r$ (or $B-R$) color evolution of PS16aqv compared to SN\,2015bn, SN\,2013dg, and LSQ12dlf \citep{Nicholl2014,Nicholl2016}. While all of these events have blue $g-r$ colors near peak brightness, they evolve at different rates. Commensurate with the overall lightcurve timescale differences, PS16aqv exhibits a faster color evolution than SN\,2015bn and is slower than SN\,2013dg and LSQ12dlf. The colors of PS16aqv and SN\,2015bn appear to reach a plateau value at about $+80$ days whereas LSQ12dlf continues a consistent reddening with time.} \label{colors} \end{figure*} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.35]{f4.pdf} \end{center} \caption{Top: Bolometric lightcurve of PS16aqv. Middle: Temperature evolution inferred from the blackbody fits to each epoch. We show both fits to the entire SED and the optical data only. The earliest temperature points rely on extrapolation due to the lack of good data on the rise and are therefore very uncertain. Bottom: Photospheric radius inferred from the blackbody fits. A single blackbody yields a poor fit to the entire SED due to UV absorption and so we consider the temperatures and radii inferred from the optical-only fits to be a better representation of the photosphere. 
As seen in other SLSNe-I, the temperature reaches a constant value and the photosphere begins receding into the ejecta.} \label{bolLC} \end{figure} \section{Observational Characteristics of PS16aqv} \subsection{Multi-Band Observed and Rest-Frame Light Curves} \label{sec:LC} We present the observed UV/optical lightcurves of PS16aqv in Figure \ref{obsLC}. Following the earliest observation by PSST, PS16aqv brightened by about 1.4 magnitudes in 25 days to maximum brightness, a longer rise than most normal Type Ic SNe and consistent with SLSNe-I \citep{Nicholl2015}. Upon reaching maximum brightness PS16aqv mirrored its slow rise with a slow decline in $r$- and $i$-band, and with a faster decline rate in $g$ and bluer filters. About 30 days after maximum brightness, the decline rate of PS16aqv slows considerably in $g$-, $r$-, and $i$-band, forming a prominent ``knee'' \citep[using the terminology of][]{Nicholl2016} in the lightcurves. There is also evidence of this knee in $u$ and perhaps bluer bands, though the large error bars on the latest UV points makes this unclear. Following the knee, the decline rate approximately resumes the same rate in $g$-band and a slightly higher rate in $r$- and $i$-band, until about 100 observer-frame days after peak where the $griz$ lightcurves begin to show a clear flattening. The observed decline rate in $r$-band at this phase is about 0.008 mag/day, roughly matching the decline rate due to fully-trapped $^{56}$Co decay powering. Extrapolating this slow decline to the epoch of our $r$- and $i$-band upper limits at about 330 observer-frame days after peak, we find this slow phase is not sustained and that PS16aqv must have resumed a faster decline to account for the upper limits. We calculate the rest-frame absolute magnitudes in each filter using the precise redshift of PS16aqv with a correction for Galactic extinction and K-corrections. 
We assume an internal host extinction of $A_{V} = 0.55$ mag based on our lightcurve modeling in Section \ref{sec:mag}. The K-corrections were determined from our observed spectra by convolving each filter bandpass with the observer- and rest-frame spectra using the K-correction code {\tt SNAKE} \citep{Inserra2018}. We then fit a polynomial to the set of K-corrections as a function of time in each filter, allowing us to estimate the K-correction at each photometric epoch. Due to the lack of NUV spectroscopic coverage, the NUV K-corrections rely on blackbody fits to the optical spectra and are thus only approximate. In Figure \ref{obsLC} we show the resulting rest-frame lightcurves of PS16aqv spanning a total range of about $-20$ to $+130$ days relative to peak brightness, showing that PS16aqv reached a maximum luminosity of $M_{r} = -22.10\pm0.12$. While occurring on a different timescale, the knee observed in PS16aqv at 30 days after peak is similar to that seen in SN\,2015bn at 50 days after peak \citep{Nicholl2016}. To help visualize the knee in PS16aqv, also termed an undulation, we fit low-order polynomials to the post-peak $gri$ lightcurves to remove the overall decline trend. In Figure \ref{LCpoly} we show the residuals of these fits, which show the undulations are coherent in time across multiple filters and lasted for about 10 days. The amplitudes of the undulations in each filter are the same as those in SN\,2015bn and there is a slightly higher amplitude in $g$-band. To test the significance of the undulations in PS16aqv, we perform a runs test on the residuals in each filter. We find that the number of runs in each filter shows a statistically significant deviation from the expected number, indicating the residuals are not completely random. 
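The runs test applied to the fit residuals can be sketched as follows (a standard Wald-Wolfowitz runs test on residual signs; this is our own illustrative implementation, not the analysis code used for the paper):

```python
import math

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of fit residuals.
    Returns (observed runs, expected runs, z-score). A strongly negative
    z-score means fewer runs than expected by chance, i.e. coherent
    same-sign structure such as an undulation rather than random scatter."""
    signs = [r > 0 for r in residuals if r != 0]
    n_pos = sum(signs)
    n_neg = len(signs) - n_pos
    n = n_pos + n_neg
    runs = 1 + sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
    mu = 2 * n_pos * n_neg / n + 1                # expected number of runs
    var = (mu - 1) * (mu - 2) / (n - 1)           # its variance
    return runs, mu, (runs - mu) / math.sqrt(var)

# coherent bump: long same-sign stretches -> few runs, z << 0
resid = [-1, -1, -1, 1, 1, 1, 1, 1, -1, -1, -1, -1]
print(runs_test(resid))
```

Applied per filter, a significant deficit of runs is exactly the signature of the coherent knee seen in Figure \ref{LCpoly}.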
In Figure \ref{LCpoly} we also show the $ugri$ rest-frame lightcurves of PS16aqv compared to the rest-frame lightcurves of SN\,2015bn compressed in time by 40\% to match the observed lightcurve knees in the two events. The resulting lightcurves show a striking similarity from maximum brightness to about 50 days after. In Figure \ref{colors} we show PS16aqv's rest-frame color evolution in the $u-g$, $g-r$, and $r-i$ color indices as well as a comparison of the $g-r$ color evolution with that of several other SLSNe-I. The color evolution of PS16aqv is slowest in the $r-i$ color, taking several months to redden by half a magnitude, with progressively faster reddening in $g-r$ and $u-g$. This is because $g-r$ and $u-g$ probe the peak of the thermal continuum, whereas $r-i$ is on the Rayleigh-Jeans tail. After evolving steadily for about 80 days, the $r-i$ and $g-r$ colors appear to show little evolution between 80 and 120 days after maximum brightness. Extrapolating the $g-r$ evolution to peak we find $g-r \approx -0.45$, bluer than SN\,2015bn at peak but similar to the extrapolation of SN\,2013dg. From maximum brightness to about 80 days later, the $g-r$ color of PS16aqv clearly reddens at a faster rate than that of SN\,2015bn, as expected from the overall faster lightcurve evolution of PS16aqv. However, they both show a flattening in the $g-r$ color evolution after about 80 days, which is not seen in LSQ12dlf. In addition, the $g-r$ color evolution of PS16aqv is slower than that of LSQ12dlf and SN\,2013dg and therefore shows an intermediate color evolution. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.37]{f5.pdf} \end{center} \caption{Pseudo-bolometric lightcurves calculated from $griz$ (or $BVRI$) observations for PS16aqv, LSQ12dlf, SN\,2015bn, Gaia16apd, and SN\,2013dg \citep{Nicholl2014,Nicholl2016,Nicholl2017a}. 
PS16aqv shows a significant flattening in its decline rate around $+80$ days, which is not seen in LSQ12dlf or SN\,2013dg.} \label{bolLCcomp} \end{figure} \begin{figure*}[t!] \begin{center} \includegraphics[scale=0.5]{f6.pdf} \end{center} \caption{Spectra of PS16aqv from $-2.5$ to $+121$ rest-frame days after peak bolometric brightness (colored spectra; phases marked) and comparisons with Gaia16apd, LSQ12dlf, SN\,2012il, and SN\,2015bn \citep[grey spectra;][]{Inserra2013,Nicholl2014,Nicholl2016,Nicholl2017a}. PS16aqv exhibits the typical early blue continuum and \ion{O}{2} absorption lines seen in SLSNe-I and subsequent development of lower ionization lines as the ejecta cool. At $+81$ and $+121$ days, PS16aqv does not show definitive nebular emission lines like SN\,2015bn at $+106$ days after peak, indicating a slow spectroscopic evolution. Host emission lines are detected in the $+81$ day spectrum from which we measured a redshift of $z = 0.2025 \pm 0.0003$.} \label{spec} \end{figure*} \subsection{Bolometric Lightcurve} To understand the total energy output of PS16aqv, we calculate its bolometric lightcurve. This is accomplished by integrating the rest-frame spectral energy distribution (SED) at each epoch with an $r$-band measurement, since $r$-band is the best-sampled filter. To calculate the SED at each epoch, we interpolate the lightcurves of the other filters and if necessary, extrapolate assuming constant colors. Most $gri$ measurements were taken on the same night. While extrapolation is the only way to estimate the UV portion of the SED beyond $\sim40$ days, by this phase most of the flux is captured by $gri$ and so the method of extrapolation has a negligible effect on the bolometric luminosity. To estimate the flux contribution coming from wavelengths blueward and redward of $uvw2$ and $z$, respectively, we fit separate blackbodies to the UV and optical measurements. 
Due to metal line blanketing in the UV, a single blackbody does not accurately capture the full UV/optical SED. As the SED peaks near $U$-band, the flux contribution from regions outside the observed wavelength range is a small correction. The final bolometric luminosity estimate at each epoch therefore comes from a sum of the measured rest-frame fluxes and the estimated flux contribution from outside our observed wavelength range. We show the resulting bolometric lightcurve of PS16aqv in Figure \ref{bolLC}, showing that at maximum brightness it reached a bolometric luminosity of $\approx\!3.1 \times 10^{44}$ erg s$^{-1}$. Integrating the bolometric lightcurve we find that PS16aqv radiated a total of $\approx\!1.3 \times 10^{51} $ erg over $\sim\!150$ days. This is comparable to the total kinetic energy of typical core-collapse SNe. We also show the blackbody temperature and photospheric radius inferred from blackbody fits to all bands and fits to the optical bands only. Due to line blanketing in the UV we consider the temperature inferred from the fits to the optical data only to be the most reliable estimate of the photospheric temperature. We find that near peak light $T_{\rm BB} \approx 20,000$ K and then begins a rapid decline, taking about $\sim$20 days to reach $\sim10,000$ K. The rate of temperature decline subsequently slows down until leveling off at $\sim$5000 K at about $+80$ days. The photospheric radius, as inferred from the optical fits, starts near $2 \times 10^{15}$ cm, reaches a maximum of about $6 \times 10^{15}$ cm (consistent with an expansion velocity of $\sim\!10^{4}$ km s$^{-1}$), and then slowly declines as the photosphere begins to recede. To facilitate a comparison of the bolometric lightcurve of PS16aqv with other SLSNe-I with varying levels of photometric coverage, we also calculate a pseudo $griz$ bolometric lightcurve resulting from a sum of only the $griz$ measurements. 
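The blackbody fitting described above can be sketched with a simple grid search over temperature (NumPy; the synthetic fluxes, approximate $griz$ effective wavelengths, and grid spacing are our own illustrative choices):

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck(wav_cm, T):
    """Blackbody spectral radiance B_lambda in cgs units."""
    x = H * C / (wav_cm * KB * T)
    return 2 * H * C**2 / wav_cm**5 / np.expm1(x)

# approximate effective wavelengths of griz, in cm
wav = np.array([4770e-8, 6231e-8, 7625e-8, 9134e-8])

# synthetic "observed" fluxes from a 12,000 K blackbody; the arbitrary
# scale stands in for the (R/d)^2 dilution factor
T_true, scale_true = 12000.0, 3.0e-10
obs = scale_true * planck(wav, T_true)

best_T, best_chi2 = None, np.inf
for T in np.arange(4000.0, 25000.0, 100.0):
    model = planck(wav, T)
    scale = (obs * model).sum() / (model * model).sum()  # analytic best scale
    chi2 = ((obs - scale * model) ** 2).sum()
    if chi2 < best_chi2:
        best_T, best_chi2 = T, chi2
print(best_T)  # recovers ~12,000 K
```

In the actual fits the best-fit scale maps to $(R/d)^{2}$, which is how the photospheric radii in Figure \ref{bolLC} are obtained.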
In Figure \ref{bolLCcomp} we show a comparison of PS16aqv's pseudo-bolometric lightcurve with that of SN\,2015bn, Gaia16apd, LSQ12dlf, and SN\,2013dg \citep{Nicholl2014,Nicholl2016,Nicholl2017a}. The timescale of the bolometric evolution of PS16aqv is generally similar to LSQ12dlf and SN\,2013dg, but the better time sampling of PS16aqv reveals a complex behavior with several changes in the decline rate. There is a hint that LSQ12dlf may also show a lightcurve undulation about 15 days earlier than PS16aqv, further highlighting the importance of good time sampling. Notably, PS16aqv shows an abrupt transition to a slow decline phase at $+80$ rest-frame days after peak. This flattening corresponds to when the $g-r$ color evolution reaches a plateau (see Figure \ref{colors}) and when the temperature inferred from the blackbody fits to the optical data reaches a constant value (see Figure \ref{bolLC}). While transitions to slow decline phases have been seen in some other fast evolving SLSNe-I \citep[e.g.~SN\,2011ke;][]{Inserra2013}, the deep late-time upper limits shown in Figure \ref{obsLC} indicate this flattening is not sustained in PS16aqv and that a second transition must have occurred. \begin{figure*} \begin{center} \includegraphics[scale=0.4]{f7a.pdf} \includegraphics[scale=0.50]{f7b.pdf} \end{center} \caption{Top: Gemini spectrum obtained at $+272$ rest-frame days after maximum brightness. The spectrum lacks supernova features and is dominated by host galaxy emission lines. Middle: Spectrum of SN\,2015bn at $+392$ rest-frame days (lower spectrum) scaled to the upper limit on the brightness of PS16aqv at the epoch of the Gemini spectrum and the resulting spectrum after summing the Gemini and scaled SN\,2015bn spectra. At the flux level of the upper limit, the nebular features present in SN\,2015bn are not easily discernible from the host galaxy light. PS16aqv may have had weak nebular lines or was much fainter than the upper limit. 
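The decline-rate comparison above can be made quantitative with a small worked example: fully-trapped $^{56}$Co decay (e-folding time 111.3 days) fixes the rest-frame decline rate, which redshift time dilation slows in the observer frame (the decay constant is a standard value; the comparison to 0.008 mag/day is from the text):

```python
import math

TAU_CO56 = 111.3  # 56Co e-folding decay time, rest-frame days

# L ~ exp(-t/tau)  =>  decline rate in mag/day is 2.5 / (ln 10 * tau)
rest_rate = 2.5 / (math.log(10) * TAU_CO56)      # ~0.0098 mag/day

# observer-frame rate is slowed by (1+z) time dilation
z = 0.2025
obs_rate = rest_rate / (1 + z)                    # ~0.0081 mag/day
print(obs_rate)  # close to the observed ~0.008 mag/day
```

This is why the $\sim\!0.008$ mag/day observed slope is described as roughly matching fully-trapped $^{56}$Co decay.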
Bottom: 2D spectrum from which the 1D spectrum shown in the top panel was extracted. The slit was oriented along the major-axis of the galaxy yielding spatially resolved emission line information. The gradient of the H$\alpha$ emission line flux indicates a gradient in SFR along the galaxy. PS16aqv occurred in a region with a relatively low SFR compared to the bright central region.} \label{gemspec} \end{figure*} \subsection{Spectroscopic Evolution} \label{sec:spec} In Figure \ref{spec} we show the spectroscopic sequence of PS16aqv from $-2.5$ to $+121$ rest-frame days relative to peak. For comparison, we also show spectra of Gaia16apd (\citealt{Nicholl2017a}; see also \citealt{Yan2017a}, \citealt{Kangas2017}), LSQ12dlf \citep{Nicholl2014}, SN\,2012il \citep{Inserra2013}, and SN\,2015bn \citep{Nicholl2016} at various phases. We find that PS16aqv exhibits a similar spectroscopic evolution as previous fast evolving SLSNe-I. The characteristic \ion{O}{2} lines are clearly detected in the $-2.5$ day spectrum. About 10 days later the spectrum already shows signs of evolution, with a cooler continuum and weakening \ion{O}{2} lines. By about 3 weeks after maximum brightness, the spectrum has cooled significantly and the spectral features resulting from highly ionized species such as \ion{O}{2} have given way to low ionization species such as \ion{Fe}{2}, \ion{Mg}{2}, and \ion{Si}{2}. The spectrum of PS16aqv maintains a similar shape and shows the same spectral features for at least 10 days. The transition from high to low ionization spectral features is typical of SLSNe-I. Over the next 50 days the spectrum continues to cool and shows the development of \ion{Ca}{2} absorption and possibly a hint of the emergence of [\ion{Ca}{2}] $\lambda$7300 emission. In addition, \ion{Mg}{1}] $\lambda$4571 may also be present in PS16aqv, though we note that its coincidence with a gap in the iron opacity complicates its identification. 
Moreover, the lack of other strong nebular features at this phase indicates that \ion{Mg}{1}] is unlikely the dominant source of the spectral peak near 4500 \AA, though it is clearly present in the nebular spectra of other events \citep{Nicholl2016b,Inserra2017}. The spectrum shows little change from $+81$ to $+121$ days after maximum brightness. In addition, these two later epochs show narrow host emission lines indicating some host contamination. The $+81$ and $+121$ day spectra are dominated by photospheric features and do not show strong nebular lines, surprising given PS16aqv's fast lightcurve evolution. In contrast, SN\,2015bn already shows a strong [\ion{Ca}{2}] $\lambda$7300 emission line at $+106$ days. This has also been seen in other slowly evolving SLSNe-I (appearing as early as $+50$ days) and may be due to different emitting zones, a scenario which may allow for the presence of both photospheric and nebular spectral features \citep{Inserra2017, Leloudas2017}. It is unclear why the appearance of particular nebular features during the photospheric phase seems to occur only in the slowly evolving SLSNe-I. We also obtained a spectrum of PS16aqv at $+272$ rest-frame days using GMOS-N with the goal of detecting nebular emission lines. The spectrum, shown in Figure \ref{gemspec}, is clearly dominated by host galaxy light. A week after obtaining this spectrum, we obtained deep imaging of PS16aqv in which the SN was not detected to 3$\sigma$ limits of $r>25.6$ and $i>25.3$ mag. In Figure \ref{gemspec} we also show the nebular spectrum of SN\,2015bn normalized to the $r$-band upper limit. Assuming the intrinsic spectrum of PS16aqv is well represented by SN\,2015bn, we can clearly see that even the strong nebular emission lines are well below the host galaxy continuum. To test this further, we also plot the spectrum resulting from adding the scaled SN\,2015bn spectrum to the GMOS-N spectrum of PS16aqv. 
A few prominent nebular emission lines seen in SN\,2015bn add flux slightly above the level of the noise in the GMOS-N spectrum which may indicate that at $+272$ days PS16aqv has not developed lines as strong as those in SN\,2015bn or that PS16aqv was simply much fainter than the upper limit. The strongest emission line seen in SN\,2015bn, [\ion{O}{1}]$\lambda6300,6364$, coincides with a strong telluric absorption feature at the redshift of PS16aqv, complicating the identification of a weak emission line. We consider the GMOS-N spectrum to be representative of the host galaxy spectrum of PS16aqv. \begin{figure*} \begin{center} \includegraphics[scale=0.59]{f8.pdf} \end{center} \caption{Ensemble of magnetar model realizations from our MCMC modeling of PS16aqv with {\tt MOSFiT} compared to the observed data. The model provides a good overall fit to the trends in the data and the magnetar engine parameters we find (Table \ref{tab:param}) are reasonable compared to the SLSNe-I sample parameters found by \citet{Nicholl2017}, though we find a notably short spin period. The model favors an internal host extinction value of $A_{V} = 0.55$, relatively high compared to those inferred by \citet{Nicholl2017} and measured from host galaxy observations by \citet{Lunnan2014}. } \label{magfit} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.32]{f9.pdf} \end{center} \caption{Corner plot of the parameter posterior distributions corresponding to the model realizations shown in Figure \ref{magfit}. The median values and $+/- 1\sigma$ ranges are given in Table \ref{tab:param}.} \label{magfitcorner} \end{figure*} \section{Magnetar Model Fits to PS16aqv} \label{sec:mag} Due to its success at explaining the observed lightcurve properties of SLSNe-I \citep{Inserra2013,Nicholl2017}, we use the magnetar central engine model to fit the lightcurves of PS16aqv, and to compare the results with the broad sample. 
As with other SLSNe-I, the spectrum of PS16aqv shows no evidence of significant low-velocity CSM. In addition, the blue spectra and overall lightcurve timescale are inconsistent with $^{56}$Ni decay in a pair-instability SN \citep{Dessart2012,Mazzali2016}. Here we use {\tt MOSFiT}, an MCMC code developed specifically for modeling transients \citep{Guillochon2017}. In {\tt MOSFiT} the model luminosity is calculated using the magnetar engine model as the input power source \citep{KasenBildsten2010}. The energy input from the spin-down luminosity is fed through an Arnett diffusion model to determine the model bolometric luminosity. A model for the photosphere is then used to calculate multi-band model magnitudes that are fit to the observational data. Following \citet{Nicholl2017} we use a photosphere model that initially expands and cools until reaching a constant temperature, employed to help match the observed temperature evolution of PS16aqv (see Figure \ref{bolLC}). The SED used to calculate the model magnitudes is a blackbody with a cutoff at 3000 \AA\ to account for the observed UV absorption in SLSNe-I. In addition, we constrain the kinetic energy to be less than the total energy available and penalize models which become optically thin in less than 100 days. We include the same priors as \citet{Nicholl2017} on the resulting 11 free parameters (defined in Table \ref{tab:param}), with the exception of a broader prior on host extinction. In Figure \ref{magfit} we show an ensemble of multi-band light curve fits to the observations of PS16aqv, and in Figure \ref{magfitcorner} we show the resulting parameter posterior distributions. The median values of the key engine and ejecta parameters are given in Table \ref{tab:param}. All of the parameter values fall within the ranges inferred for the sample studied by \citet{Nicholl2017}.
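The spin-down power at the core of this model can be sketched with the standard Kasen \& Bildsten (2010) vacuum-dipole scalings; the normalizations below are the usual order-of-magnitude values (moment of inertia $10^{45}$ g cm$^2$), not the internal {\tt MOSFiT} implementation:

```python
import numpy as np

YEAR_S = 3.156e7   # seconds per year
DAY_S = 86400.0

def magnetar_input_luminosity(t_sec, p_ms, b14, moment_of_inertia=1e45):
    """Spin-down power L(t) = (E_p / t_p) / (1 + t / t_p)^2 for a
    magnetar braking as a vacuum dipole (Kasen & Bildsten 2010 scalings).

    p_ms : initial spin period in ms; b14 : dipole field in units of
    1e14 G; E_p is the initial rotational energy (1/2) I Omega^2.
    """
    e_p = 2.0 * np.pi**2 * moment_of_inertia / (p_ms * 1e-3)**2   # erg
    t_p = 1.3 * YEAR_S * (p_ms / 10.0)**2 / b14**2                # s
    return (e_p / t_p) / (1.0 + t_sec / t_p)**2

# With the median fit values below (P ~ 0.93 ms, log10 B14 ~ 0.19) the
# spin-down time works out to ~1.7 days, as quoted in the text.
t_p_days = 1.3 * YEAR_S * (0.93 / 10.0)**2 / (10**0.19)**2 / DAY_S
```

This input power is then passed through the Arnett diffusion integral, so the observed lightcurve is a smoothed version of $L(t)$.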
The magnetar model provides a good fit to the overall trend in the data and partially reproduces the flattening of the decline rate at $\sim$80 rest-frame days after peak, where the temperature plateaus at around 5000 K and the photosphere begins to recede (see Figure \ref{bolLC}), although the data suggest a more abrupt change in decline rate. While the bulk lightcurve behavior is well represented by the model, as expected the model cannot account for the undulation (e.g. $g$ and $u$ bands). Extrapolating the model fits to infer the expected brightness at the epoch of our late limits shown in Figure \ref{obsLC}, we find that the model over-predicts the flux at this time. To investigate this further we performed another fit including the $r$- and $i$-band upper limits. The resulting fits near peak are similar but the late-time decline is slightly steeper, and so the only differences in the resulting parameters are a slightly weaker magnetic field and a lower gamma-ray opacity. However, the model still over-predicts the flux at the upper limits because this simple model is unable to simultaneously account for the flattening in the lightcurve decline and the late-time limits. The inferred $B$-field is moderately strong compared to other SLSNe-I and the spin period is one of the shortest inferred values compared to the full sample distribution \citep{Nicholl2017}, indicating a large reservoir of rotational energy. In addition, the model prefers a fairly large ejecta mass. The fast spin and relatively strong $B$-field indicate a fast spin-down time of the magnetar of about 1.7 days. As \citet{Nicholl2017} point out, the problems associated with powering the observed lightcurves with magnetars that spin down rapidly may be overcome by the fact that the available rotational energy is larger for short spin periods. The fast spin-down time may explain the overall fast lightcurve decline and temperature evolution despite a relatively high ejecta mass.
Furthermore, the high ejecta mass would delay the onset of the nebular phase which is supported by the slow spectroscopic evolution (see Figure \ref{spec}). The diverse lightcurve timescale and spectroscopic properties of SLSNe-I may be explained by events with properties located in different regions of ejecta-magnetar parameter space. Finally, from the model fitting we infer an internal host extinction of $A_{V} = 0.55^{+0.13}_{-0.11}$ mag, a fairly large value compared to measured extinction values from SLSN-I host galaxy studies \citep{Lunnan2014} and the inferred extinction values from the model fitting of the sample in \citet{Nicholl2017}. \capstartfalse \begin{deluxetable}{cc}[h!] \tablecolumns{2} \tabcolsep0.1in\footnotesize \tablewidth{3in} \tablecaption{Model parameter medians and 1$\sigma$ ranges corresponding to the posteriors in Figure \ref{magfitcorner} associated with the fits shown in Figure \ref{magfit} \label{tab:param}} \tablehead { \colhead {Parameter} & \colhead {Value} } \startdata $P_{\rm spin}$ (ms) & 0.93$^{+0.17}_{-0.18}$ \\[5pt] log($B$/10$^{14}$ G) & $0.19^{+0.10}_{-0.11}$ \\[5pt] log($M_{\rm ej}$/M$_{\odot}$) & 1.22$^{+0.09}_{-0.06}$ \\[5pt] $v_{\rm ej}$ (km s$^{-1}$) & 14200$^{+700}_{-1400}$ \\[5pt] $E_{\rm k}$ (10$^{51}$ erg) & 33.00$^{+10.94}_{-6.18}$ \\[5pt] $\kappa$ (cm$^{2}$ g$^{-1}$) & 0.16$^{+0.02}_{-0.03}$ \\[5pt] log $\!\kappa_{\gamma}$ & 0.76$^{+0.80}_{-1.05}$ \\[5pt] $M_{\rm NS}$ (M$_{\odot}$) & 1.81$^{+0.26}_{-0.31}$ \\[5pt] $T_{\rm min}$ (K) & 6064$^{+245}_{-948}$ \\[5pt] $A_{\rm V}^{\rm host}$ & 0.55$^{+0.13}_{-0.11}$ \\[5pt] $t_{\rm exp}$ (days) & -16.94$^{+2.35}_{-4.71}$ \\[5pt] log $\!\sigma$ & -0.83$^{+0.04}_{-0.05}$ \enddata \tablecomments{$P_{\rm spin}$ is the initial spin period of the magnetar, $B$ is the component of the magnetar magnetic field perpendicular to the spin axis, $M_{\rm ej}$ is the ejecta mass, $v_{\rm ej}$ is the ejecta velocity, $E_{\rm k}$ is the kinetic energy, $\kappa$ is the opacity, 
$\!\kappa_{\gamma}$ is the gamma-ray opacity, $M_{\rm NS}$ is the neutron star mass, $T_{\rm min}$ is the photosphere temperature floor (described in the text), $A_{\rm V}^{\rm host}$ is the internal host galaxy extinction, $t_{\rm exp}$ is the explosion time relative to the first observation, and $\!\sigma$ is the uncertainty required to yield a reduced chi-squared of 1. For more details on the model and these parameters see \citet{Nicholl2017}.} \end{deluxetable} \capstarttrue \begin{figure} \begin{center} \includegraphics[scale=0.5]{f10.pdf} \end{center} \caption{\textit{HST} ACS/F775W image of the host galaxy of PS16aqv with the transient location marked (circle; 3$\sigma$). North is up and East is to the left. PS16aqv exploded in the outskirts of its host galaxy, offset from the brightest star-forming regions.} \label{HST} \end{figure} \section{Host Galaxy and Environment of PS16aqv} \label{sec:host} We measure the $griz$ magnitudes of the host galaxy of PS16aqv from our template images obtained in July 2017 and March 2018. Using Kron apertures implemented by the {\tt MAG\_AUTO} parameter in {\tt SExtractor} \citep{Bertin1996} we find the following values (corrected for Galactic extinction): $g = 22.70 \pm 0.07$, $r = 22.59 \pm 0.04$, $i = 22.14 \pm 0.05$, and $z = 22.55 \pm 0.15$. Using the color transformations of \citet{Jordi2006}, we find an absolute $B$-band magnitude of $M_{B} \approx -16.9$, similar to the median value found for the $z \lesssim 0.5$ host sample presented in \citet{Lunnan2014}. In Figure \ref{HST} we show our \textit{HST} ACS/F775W image of the host galaxy of PS16aqv showing the location of PS16aqv. Using the transient and host galaxy centroids, measured with {\tt SExtractor}, we calculate an offset of $R = 0.71 \pm 0.06$ arcseconds, or $2.46 \pm 0.21$ kpc, where the uncertainty is dominated by the astrometric tie uncertainty. 
As can be seen in Figure \ref{HST}, PS16aqv occurred in the outskirts of its host galaxy far from the central bright star-forming regions. Using {\tt SExtractor} we measure the half-light radius of the host galaxy in the \textit{HST} image and find $R_{50} \approx 0.43$ arcseconds, or 1.49 kpc, indicating a compact galaxy similar to other SLSN-I hosts \citep{Lunnan2015}. This yields a host normalized offset of $R/R_{50} = 1.65$, a larger offset than 87\% of SLSNe-I in the sample studied by \citet{Lunnan2015}. Following the methodology of \citet{Blanchard2016} we also measure the fractional flux \citep{Fruchter2006}, the fraction of the total galaxy flux coming from pixels fainter than the brightness at the location of PS16aqv, and find a value of $\approx 30\%$, indicating PS16aqv occurred on a relatively faint region of its host galaxy. This is lower than the values for 75\% of the sample in \citet{Lunnan2015} and is consistent with the large measured offset. We use emission lines present in the Gemini spectrum obtained at $+272$ days, which contains negligible SN light, to measure the star formation rate and internal extinction of the host galaxy. We do not make corrections for underlying stellar absorption. By measuring the Balmer decrement and assuming an intrinsic value of 2.86 for the ratio of H$\alpha$ to H$\beta$ emission line flux \citep[Case B recombination;][]{Osterbrock1989}, we find a relatively large extinction of $A_{V} = 1.5$ mag. We note that there is considerable variation in the emission line fluxes along the spatial direction of the 2D spectrum (the slit was aligned along the major-axis of the galaxy; see Figure \ref{gemspec}). We therefore also measure the Balmer decrement along the precise line of sight to PS16aqv using the $+81$ day LDSS3c spectrum, which contains detections of host lines, and find H$\alpha$/H$\beta$ $= 2.3 \pm 0.7$ which is consistent with no or at most modest extinction. 
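Two standard conversions underpin these host measurements: the Balmer decrement with a Case B intrinsic ratio of 2.86, and the Kennicutt (1998) H$\alpha$ calibration. A sketch, with illustrative CCM $R_V = 3.1$ extinction-curve coefficients at the Balmer line wavelengths:

```python
import math

def balmer_extinction_av(ha_over_hb, k_ha=2.53, k_hb=3.61,
                         r_v=3.1, intrinsic=2.86):
    """A_V from an observed Halpha/Hbeta flux ratio.

    Assumes Case B recombination (intrinsic ratio 2.86) and a CCM-like
    extinction curve; k_ha, k_hb are illustrative R_V = 3.1 curve values.
    """
    e_bv = 2.5 / (k_hb - k_ha) * math.log10(ha_over_hb / intrinsic)
    return r_v * e_bv

def kennicutt_sfr(l_halpha):
    """SFR in Msun/yr from an extinction-corrected Halpha luminosity
    in erg/s (Kennicutt 1998 calibration)."""
    return 7.9e-42 * l_halpha

# An observed decrement of ~4.6 corresponds to A_V ~ 1.5 mag, while the
# intrinsic value 2.86 gives zero extinction by construction.
```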
Several other lines of evidence suggest non-negligible extinction along the line of sight to PS16aqv. In addition to the inferred extinction of $A_{V} = 0.55$ mag from the model fitting in Section \ref{sec:mag}, a comparison of the spectral shape of PS16aqv with LSQ12dlf at the same phase also indicates extinction. At $\sim$1 month after peak brightness, PS16aqv exhibits a redder spectrum than LSQ12dlf, but applying $A_{V} = 0.5$ mag to LSQ12dlf yields a good match to the spectral shape of PS16aqv. Given the difference between the global extinction inferred from the Gemini spectrum and the extinction along the line of sight to PS16aqv, we conclude there must be variation in the spatial dust distribution in the galaxy. We measure the global star formation rate from the Gemini spectrum using the reddening-corrected flux of H$\alpha$ (using $A_{V} = 1.5$ mag) and the SFR calibration of \citet{Kennicutt1998}, yielding SFR = 0.85 M$_{\Sun}$ yr$^{-1}$. While the global SFR is consistent with that observed for other SLSN-I host galaxies, the variation of the H$\alpha$ flux along the galaxy as apparent in the 2D Gemini spectrum indicates a gradient in the SFR. At the location of PS16aqv we find SFR = 0.16 M$_{\Sun}$ yr$^{-1}$, lower than in the central star-forming regions. Finally, we measure the metallicity using the double-valued $R_{23}$ diagnostic. We use the $+81$ day LDSS3c spectrum which extends to sufficiently blue wavelengths to measure [\ion{O}{2}] $\lambda3727$, allowing the calculation of $R_{23}$ to determine the metallicity along the line of sight to PS16aqv. For the lower and upper metallicity branch we find 12 + log(O/H) = 8.1 and 8.5, respectively, using emission line fluxes corrected for a host extinction of $A_{V} = 0.55$ mag. While we do not detect [\ion{O}{3}] $\lambda4363$ or [\ion{N}{2}] $\lambda6584$, the measured limit of [\ion{N}{2}]/H$\alpha < 0.05$ is sufficiently constraining to rule out the high metallicity branch.
A metallicity of 12 + log(O/H) = 8.1 is consistent with the range found for the SLSN-I host galaxy population \citep{Lunnan2014}. \section{Discussion} \subsection{Lightcurve Undulations} The physical mechanism responsible for lightcurve undulations remains unknown. They could be the result of variable engine activity or related to the structure of the ejecta or environment. Both PS16aqv and SN\,2015bn demonstrate the importance of dense lightcurve time sampling to capture undulations. As in SN\,2015bn, the lightcurve undulation in PS16aqv corresponds to when the temperature decline abruptly slows and when the photospheric radius begins to decrease ($\approx$30 rest-frame days after peak), implying the beginning of the recession of the photosphere into the ejecta. \citet{Nicholl2016} suggested that the temperature change and lightcurve undulation observed in SN\,2015bn may be a signature of the influence of the magnetar wind on the structure of the ejecta. In particular, \citet{KasenBildsten2010} predicted that the ejecta is swept up into a dense shell with a sharp increase in temperature interior to the shell. The lightcurve undulation may then be the result of the photosphere reaching the hotter region. The magnetar may also influence the ejecta by driving ionization fronts \citep{Metzger2014}, which could cause changes in the continuum opacity. The increased opacity due to the increased ionization would then cause a delay in the escape of radiation, resulting in a change in the lightcurve decline rate. Finally, the magnetar engine may exhibit flare activity, resulting in intermittent energy injection \citep{YuLi2017}. Rather than originating in the power source, the lightcurve undulation may instead result from interaction with low-mass CSM ejected by the progenitor star before the explosion. This could in principle occur even if CSM interaction is not the dominant power source of the lightcurve.
The CSM mass required to power the undulation in PS16aqv is $M_{\rm CSM} \lesssim 0.01$ M$_{\odot}$, similar to the masses inferred for undulations in other events \citep{Nicholl2016,Yan2017,Inserra2017}. However, the spectrum shows little change during the undulations, making it difficult to disentangle the above scenarios. In addition to PS16aqv and SN\,2015bn (and possibly LSQ12dlf), there are several other SLSNe-I in the literature which show undulations: SSS120810 \citep{Nicholl2014}, iPTF15esb \citep{Yan2017}, LSQ14an \citep{Inserra2017}, and SN\,2007bi \citep{Gal-Yam2009,Inserra2017}. While LSQ14an lacks data earlier than $\sim60$ days preventing a comparison with the strongest undulation in SN\,2015bn, \citet{Inserra2017} show that the two events exhibit similar lower amplitude undulations around $+75$ days. The undulations in iPTF15esb show a complex morphology with multiple distinct peaks and the event also shows the emergence of late-time H$\alpha$ emission indicating interaction with neutral H shells \citep{Yan2017}. The spectroscopic evidence for late-time interaction lends plausibility to the idea that the lightcurve undulations are also caused by interaction. As in PS16aqv and SN\,2015bn, the undulations in iPTF15esb are stronger in bluer bands. Though the undulations in iPTF15esb are more significant, SN\,2015bn also shows multiple undulations, in particular a ``shoulder'' feature before peak and the two ``knees'' during the decline \citep{Nicholl2016}. Unfortunately, the sparse time sampling before peak in PS16aqv prevents a comparison. In addition to the lightcurve undulation at 30 days post-peak, PS16aqv shows a significant flattening in its decline rate about 80 days after peak. The shallower decline is consistent with the decay of fully trapped $^{56}$Co over the 50 days for which PS16aqv remained observable.
However, as shown in Figure \ref{obsLC}, our deep upper limits at $\approx280$ rest-frame days after peak show that this slow decline is clearly not sustained. At some point during the gap in observations, PS16aqv must have resumed a much faster decline. This indicates that the flattening at 80 days is more likely related to the ejecta structure or the explosion environment than to $^{56}$Co decay. Moreover, the flattening corresponds to the time at which the inferred blackbody temperature reaches a plateau. We therefore speculate that it could perhaps be related to abrupt changes in opacity, either due to recombination or the breakout of ionization fronts powered by a magnetar. Depending on when the lightcurve resumed a faster decline, the dramatic transition at 80 days may be a more pronounced undulation similar to that observed at 30 days or it may be a longer lived ``plateau'' followed by a rapid drop-off. PS16aqv stands out as a well-observed fast declining SLSN-I with clear evidence for lightcurve undulations similar to those observed in the slow events. \citet{Inserra2017} investigated three fast declining SLSNe-I and found no clear evidence for undulations (though pointed out a possible undulation in LSQ12dlf), suggesting that lightcurve undulations only occur in slowly evolving SLSNe-I such as SN\,2015bn. PS16aqv is a clear counterexample and lends additional support to the idea that there is a single class of SLSNe-I with a consistent explosion mechanism but with varying ejecta/engine properties. Early samples indicated a possible bimodality in timescales \citep{Nicholl2015} but recent larger sample studies suggest that SLSNe-I form a continuum of timescales rather than two distinct fast and slow groups \citep{Nicholl2017, DeCia2017, Lunnan2018}. 
In addition, \citet{Nicholl2017} show that the engine parameter distributions of fast and slow SLSNe-I overlap with no clear offset; the slow events simply prefer somewhat lower magnetic fields and higher ejecta masses. Observing undulations across the range of lightcurve timescales supports a uniform origin. \subsection{Limits on Radioactive Ejecta} Finally, we use the late-time observations to place a limit on the cobalt mass, $M_{\rm Co}$, since any luminosity from $^{56}$Co decay must be lower than the measured upper limits. Using the standard equation for energy injection by radioactive decay of $^{56}$Co assuming full gamma-ray trapping, we find a limit of $M_{\rm Co} \lesssim 0.35$ M$_{\odot}$. As inferred for previous SLSNe \citep{Pastorello2010,Inserra2013}, this implies a $^{56}$Ni mass far below that required to explain the peak lightcurve luminosity with radioactive decay alone ($\approx\!$ 28 M$_{\odot}$). Our limit on the $^{56}$Ni mass is lower than masses inferred from the late-time decline phase of other SLSNe-I \citep[$\approx1-4$ M$_{\odot}$;][]{Inserra2013} under the same assumption of full gamma-ray trapping, making it the most stringent constraint on radioactive decay in SLSNe-I. In fact, our deep limit indicates a synthesized $^{56}$Ni mass lower than that inferred for some energetic Type Ic SNe \citep[e.g.~SN\,1998bw;][]{Sollerman2002}, and therefore suggests that SLSNe-I do not produce larger $^{56}$Ni masses than energetic Type Ic SNe. An important caveat is that gamma-rays are expected to leak out of the ejecta as the optical depth decreases \citep{Sollerman2004}. Over time the energy deposition provided by the kinetic energy of positrons, which is about 3.4\% of the total released energy, becomes the dominant source of energy from radioactive decay. 
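The fully trapped mass limit can be checked with the standard radioactive decay power formula. A sketch; the luminosity limit used below is an illustrative placeholder chosen to be consistent with the quoted $M_{\rm Co}$ limit, not a value derived from the photometry here:

```python
import math

def radioactive_luminosity(m_ni_msun, t_days):
    """Fully trapped 56Ni -> 56Co -> 56Fe decay power in erg/s for an
    initial nickel mass in Msun (standard Nadyozhin 1994 normalization);
    past ~2 months the 111.3 d 56Co term dominates."""
    return 1e43 * m_ni_msun * (6.45 * math.exp(-t_days / 8.8)
                               + 1.45 * math.exp(-t_days / 111.3))

def ni_mass_limit(l_limit, t_days):
    """Upper limit on the nickel mass implied by a late-time luminosity
    upper limit, inverting the 56Co term alone (full gamma-ray trapping)."""
    return l_limit / (1.45e43 * math.exp(-t_days / 111.3))

# An illustrative late-time limit of ~4.1e41 erg/s at +280 days
# translates to M_Co ~ 0.35 Msun under full trapping.
m_limit = ni_mass_limit(4.1e41, 280.0)
```

Relaxing the full-trapping assumption weakens the limit, as discussed in the text.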
Under a somewhat pessimistic assumption that the optical depth to gamma-rays reaches unity by about 50 days after peak, the limit on the cobalt mass implied by the upper limit becomes $M_{\rm Co} \lesssim 5$ M$_{\odot}$, still much lower than the $^{56}$Ni mass required to power the peak luminosity. As this would roughly affect all SLSNe-I equally, this does not change the observation that \textit{compared to other SLSNe-I}, the deep limit on PS16aqv's late-time luminosity implies a low $^{56}$Ni mass. This observation supports a picture in which at least some SLSNe-I do not produce significantly more $^{56}$Ni than typical core-collapse SNe, the primary difference being the presence of a central engine in SLSNe-I. \section{Conclusions} We present an extensive photometric and spectroscopic dataset from ground- and space-based telescopes for the SLSN-I PS16aqv. While the photospheric spectra and overall lightcurve evolution timescale are most similar to fast-declining SLSNe-I, PS16aqv shows a remarkably similar lightcurve undulation at $30$ days after peak as the well-studied slowly evolving SLSN-I SN\,2015bn. Well-observed undulations have previously only been seen in the slower evolving SLSNe-I. While the physical mechanism of lightcurve undulations in SLSNe-I remains unknown, it is likely related to either engine activity or the structure of the ejecta or environment. The presence of undulations in SLSNe-I with a range of decline rates lends support to the notion that fast and slow SLSNe-I share the same explosion mechanism and that they are linked by a continuum of engine/ejecta properties. The distributions of these properties may naturally explain other unusual SLSN-I properties, such as fast lightcurve evolution coupled with slow spectroscopic evolution (as observed in PS16aqv) which may be due to a fast magnetar spin-down time coupled with high ejecta mass. 
In addition, deep late-time limits after PS16aqv settled onto a very slow decline phase suggest that it may have exhibited another more pronounced undulation starting at $+80$ days, or it may have exhibited a long-lived plateau before rapidly fading. The growing number of SLSNe-I like PS16aqv with lightcurve complexity highlights the importance of obtaining well-sampled lightcurves of future events. Identifying the origin of lightcurve undulations requires large samples of well-observed events in order to search for potential correlations between undulation characteristics and ejecta/engine properties. The late-time limits also yielded a tight constraint on the synthesized nickel mass ($M_{\rm Ni} \lesssim 0.35$ M$_{\odot}$), lower than estimates from other SLSNe-I. Using deep \textit{HST} imaging and late-time Gemini spectroscopy we also studied the host galaxy of PS16aqv. A spatially resolved host spectrum indicates a spatially varying extinction and star formation rate, with the explosion site located in a faint region 2.46 kpc from the central bright region, which corresponds to a large host-normalized offset. While the global host extinction is large ($A_{V} \approx 1.5$ mag), the value inferred along the line of sight to PS16aqv from our lightcurve modeling is more modest ($A_{V} \approx 0.55$ mag), though both results suggest the host galaxy of PS16aqv has high extinction compared to other SLSN-I hosts. The rather unremarkable host location of PS16aqv motivates further study into the question of whether the sub-galactic locations of SLSNe-I show a strong preference for bright regions of their hosts, like long GRBs. Increasing the sample size of SLSNe-I with high-resolution host galaxy observations is key to making progress in our understanding of their environments. \acknowledgments The Berger Time-Domain Group at Harvard is supported in part by the NSF under grant AST-1714498 and by NASA under grant NNX15AE50G.
This paper is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1144152. We thank Pete Challis and Allyson Bieryla for assistance with some of the FLWO 48-inch observations. We thank Stephen Smartt and Ken Smith for providing access to the early PSST images. This work is based in part on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan. This work is partially based on data acquired with the Swift GO program 1114109 (PI Margutti). R. M. acknowledges partial support from program No. NNX16AT51G provided by NASA through the Swift Guest Investigator Program. Based on observations (Proposal ID GN-2016B-FT-28) obtained at the Gemini Observatory acquired through the Gemini Observatory Archive and processed using the Gemini IRAF package, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). This paper uses data products produced by the OIR Telescope Data Center, supported by the Smithsonian Astrophysical Observatory. Some observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute.
Support for program GO-15162 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \bibliographystyle{apj}
\section{Introduction} The origin of associated absorption systems (AAS) in the spectra of QSOs with \ensuremath{\beta}~$<$ 0.01\footnote{\ensuremath{\beta}~ is the relative velocity of the absorption systems with respect to the systemic redshift of the QSO in units of the speed of light} (the relative velocity of the AAS with respect to the QSO, hereafter V, $<$ 3000 km s$^{-1}$) is not well understood. Possibilities for their origin include (i) interstellar/halo clouds in the host galaxy (e.g. \citealt{Ch07}), (ii) material in the core of the active galactic nucleus (AGN), within 10 pc of the black hole \citep{BS97}, (iii) material within 30 kpc of the AGN, accelerated by starburst shocks from the inner galaxy \citep[e.g.][]{FS07}, and (iv) clouds in galaxies clustered around the QSO \citep[e.g.][]{W08}. The study of the properties of a large sample of such systems, and in particular of their correlation with radio and other properties of the parent QSO, should provide clues for discerning between these scenarios. Several studies of the dependence of the properties of AAS on QSO radio properties have been undertaken in the past. Some studies found the frequency of occurrence of AAS to depend on the radio properties of the QSO \citep[e.g.][]{An87,Fo88,Gan01,Ba02}. Other studies failed to find such a dependence \citep[e.g.][]{Ve03} but concluded that differences in the results could probably be attributed to various differences in the selection of the relatively small samples (50--100 systems). An excess of AAS in radio-loud QSOs was also found by \citet{W08} in a large sample of SDSS QSOs. In a previous study \citep[][hereafter V08]{V08}, based on Sloan Digital Sky Survey (SDSS) data release 3 (DR3), we studied the average dust extinction and average abundances of a homogeneous sample of 407 AAS (with \ensuremath{\beta}~ $<$ 0.01; V $<$ 3000 km s$^{-1}$) using the method of composite spectra (York et al. 2006; hereafter Y06).
Definite evidence of dust in the AAS was obtained by comparing the composite spectrum of the absorber sample with that of a non-absorber (control) sample matching in \ensuremath{z_{em}}~ and \ensuremath{i\;{\rm magnitude}}~ on a one-to-one basis. The dust was found to be of SMC type, with no evidence for the presence of a 2175 {\it {\AA}}~ absorption feature. The amount of dust extinction and the frequency of occurrence of AAS in the sample were found to depend on the radio properties of the QSOs. A much larger (by a factor of $>$ 4) sample of {Mg~II}~ systems is now available from the SDSS DR7 (Shen \& Menard 2012; hereafter SM12). With this increase in size, it should be possible to gain further understanding of the associated absorbers. In particular, we can study the dependence of their dust extinction and star formation rate (SFR) (as measured from the [O~II]$\lambda$3727 emission line flux) on radio and other properties, e.g. the black hole mass (\ensuremath{M_{\rm BH}}, determined from the widths of various emission lines) and the Eddington ratio (\ensuremath{R_{\rm Edd}}, the ratio of the bolometric luminosity of the object to its Eddington luminosity) of the QSOs. The black hole mass may be indicative of the age of the QSO. In a merger-driven model of AGNs, supermassive black holes evolve through major mergers, which give rise to starbursts and also to accretion onto the nuclear black hole (Sanders et al. 1988; Hopkins et al. 2005, 2006). In such models, large amounts of gas and dust are funneled inward, fueling the black hole and also obscuring the young QSO. The dust might be cleared during a transitional phase, resulting in the emergence of a luminous blue QSO. In this picture, reddening is correlated with the evolutionary stage of the QSO. Recently, Shen \& Menard (2012) have shown that QSOs with associated absorbers with \ensuremath{\beta}~$<$ 0.005 (V $<$ 1500 km s$^{-1}$) exhibit enhanced star formation.
They suggest that these absorbers could be large-scale outflows indicative of the transitional phase in a merger-driven evolutionary scenario for QSOs. Based on the smaller dust extinction and SFR, they conclude that the systems with \ensuremath{\beta}~$>$ 0.005 (V $>$ 1500 km s$^{-1}$) originate in intervening absorbers. In this paper we present the results of our study of the SM12 sample of AAS, using the method of composite spectra. We particularly focus on the dependence of the AAS properties on the radio and other properties of the QSOs. The details of the sample and its sub-samples, as well as the method of analysis, are presented in section 2; the results are presented in section 3, the discussion in section 4, and the conclusions in section 5. \section{Sample selection and analysis} \subsection{Main sample and sub-samples} As mentioned above, we used the sample compiled by SM12 (their Table 1) from the SDSS DR7. Their sample contains 1937 systems in non-BAL QSOs with \ensuremath{\beta}~$<0.01$ (V $<$ 3000 km s$^{-1}$), spanning the redshift range 0.4-2 and with rest equivalent width \ensuremath{{\rm W}_{\rm Mg\;II}}~ ranging from 0.22 to 6.8 {\it {\AA}}. Five of these have \ensuremath{{\rm W}_{\rm Mg\;II}}~ $<$ 0.3 {\it {\AA}}. There are 92 sight-lines having two AAS each and six sight-lines having three AAS each. For this study, we focused primarily on QSOs having only one AAS along their lines of sight, although some statistics for QSOs with multiple AAS are also presented. Also, we restricted our sample to \ensuremath{{\rm W}_{\rm Mg\;II}} $\ge$ 0.3 {\it {\AA}}~ so that our results can be compared with those of V08 and Y06. Our full sample, S1, thus consists of 1730 AAS having \ensuremath{{\rm W}_{\rm Mg\;II}} $\ge$ 0.3 {\it {\AA}}.
We compiled several sub-samples from S1 by dividing it roughly in half, based on various QSO and absorber properties: \ensuremath{{\rm W}_{\rm Mg\;II}}, \ensuremath{i\;{\rm magnitude}}, radio properties, \ensuremath{M_{\rm BH}}, and \ensuremath{R_{\rm Edd}}. Among the radio properties, we consider whether the QSOs have been radio detected (RD) or undetected (RUD) in the FIRST survey, as well as whether the sources are core dominated (CD) or lobe dominated (LD). Among our sample of 1730 AAS, 263 are towards QSOs detected in the FIRST survey while 1341 are towards QSOs undetected by FIRST (the remaining QSOs have not been observed by FIRST). Of the 263 RD QSOs, 67 are lobe dominated while 196 are core dominated. The division of S1 based on \ensuremath{\beta}~ was done using the criteria of SM12 (\ensuremath{\beta}~$<$ 0 (V $<$ 0 km s$^{-1}$), 0 $\le$ \ensuremath{\beta}~$<$ 0.005 (0 $\le$ V $<$ 1500 km s$^{-1}$), and \ensuremath{\beta}~$\ge$ 0.005 (V $\ge$ 1500 km s$^{-1}$)) so that our results can be compared with their conclusions. The radio and other properties (e.g. \ensuremath{M_{\rm BH}}~ and \ensuremath{R_{\rm Edd}}) of the QSOs used for defining the sub-samples were taken from Shen et al. (2011). Throughout this study, we use the emission redshifts given by Hewett \& Wild (2010) and the relative velocities of the AAS with respect to these as given by SM12. Details of the sub-samples are given in Table 1, which lists the number of systems and the average values of \ensuremath{{\rm W}_{\rm Mg\;II}}, \ensuremath{z_{em}}, \ensuremath{\beta}, \ensuremath{i\;{\rm magnitude}}, \ensuremath{M_{\rm BH}}~ (given in units of M$_\odot$ throughout the paper), and \ensuremath{R_{\rm Edd}}. \subsection{Composite spectrum} The method of forming composite spectra is described in detail by Y06 and V08; we describe it briefly here. First, the spectra of individual QSOs, corrected for Galactic reddening, were shifted to the absorber/QSO rest-frame and resampled onto a common pixel-to-wavelength scale.
Pixels flagged by the spectroscopic pipeline as possibly bad in some way (Stoughton et al. 2002) were masked and not used in constructing the composites. Also masked were the pixels within 5 {\it {\AA}}~ of the expected line positions of detected intervening absorption systems unrelated to the target system. The geometric mean of all contributing spectra was then calculated for each pixel. The median/mean composite for [O~II] emission line studies was obtained by first fitting the continuum to $\sim$ 30 {\it {\AA}}~ wide regions around 3727 {\it {\AA}}~ in the QSO/absorber rest-frame and then resampling the continuum-subtracted spectra to a common wavelength scale. We calculate the \ensuremath{E(B-V)}~ values for the various samples by comparing the composite spectrum of each sample with that of the corresponding control sample (a sample of QSOs not having AAS, matching one to one in \ensuremath{z_{em}}~ and \ensuremath{i\;{\rm magnitude}}~ with the QSOs in the absorber sample), and fitting an SMC curve (Pei 1992) to the extinction curve so obtained. In deriving this extinction curve, we have normalized the two composites at 3000 {\it {\AA}}, the normalization wavelength used by SM12. However, we have verified that normalizing at longer wavelengths does not affect the \ensuremath{E(B-V)}~ values. The values are also independent of whether QSO rest-frame or absorber rest-frame composites are used. To construct the absorber rest-frame composite of the control sample, the spectrum of each QSO in the control sample was shifted to the rest-frame of the absorber towards the corresponding QSO (matching in \ensuremath{i\;{\rm magnitude}}~ and \ensuremath{z_{em}}) in the absorber sample. The typical 1 \ensuremath{\sigma}~ errors in the derived \ensuremath{E(B-V)}~ values are generally smaller than 0.003 (the errors of the relative flux density values are calculated using the variance formula for the propagation of errors of a geometric mean; see Y06 and V08 for a detailed analysis of errors).
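For illustration, the composite-building procedure just described (shift to the rest-frame, resampling onto a common wavelength grid, masking of bad pixels, and a pixel-by-pixel geometric mean) can be sketched as follows. This is a minimal reimplementation, not the actual Y06/V08 pipeline; the grid choice and mask handling are assumptions of this sketch.

```python
import numpy as np

def geometric_mean_composite(wavelengths, fluxes, redshifts, masks, grid):
    """Shift each spectrum to its rest frame, resample onto a common
    wavelength grid, and take the pixel-by-pixel geometric mean of the
    unmasked contributions (cf. the procedure described in the text)."""
    log_sum = np.zeros_like(grid)
    n_used = np.zeros_like(grid)
    for wave, flux, z, mask in zip(wavelengths, fluxes, redshifts, masks):
        rest = wave / (1.0 + z)               # observed -> rest frame
        good = mask & (flux > 0)              # drop flagged/bad pixels
        f = np.interp(grid, rest[good], flux[good],
                      left=np.nan, right=np.nan)
        ok = np.isfinite(f)
        log_sum[ok] += np.log(f[ok])
        n_used[ok] += 1
    out = np.full_like(grid, np.nan)
    covered = n_used > 0
    out[covered] = np.exp(log_sum[covered] / n_used[covered])  # geometric mean
    return out
```

A geometric mean is used (rather than an arithmetic one) so that the logarithm of the composite is the mean of the logarithms of the input spectra, which is the natural quantity when multiplicative attenuation is later fitted in magnitudes.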
The Milky Way extinction curve does not fit well for any of the samples; the dust appears to be of SMC type, with no evidence of the 2175 {\it {\AA}}~ bump. The \ensuremath{E(B-V)}~ values so determined are given in the third column of Table 2. We note that any difference between the \ensuremath{E(B-V)}~ values of different sub-samples indicates (i) a difference in the dust column density, caused either by a different gas column density or by a different dust-to-gas ratio, or (ii) different properties of the dust. In QSO absorbers, the extinction curve is found to be the same (SMC type) for all the samples, so the dust properties appear to be similar. The dust content is known to be correlated with \ensuremath{{\rm W}_{\rm Mg\;II}}~ (Y06, V08, Wild et al. 2006, Menard et al. 2011), so the dust column may be correlated with \ensuremath{{\rm W}_{\rm Mg\;II}}; if we find the \ensuremath{E(B-V)}~ values to differ for sub-samples having similar \ensuremath{{\rm W}_{\rm Mg\;II}}, this may indicate a difference in the dust-to-gas ratio of the two sub-samples. \subsection{Control samples} For sub-samples S1-S8 and S13-S16, we used the control samples which were used by SM12 (and kindly provided by them). In the construction of these samples the radio properties of the QSOs were not taken into account, which is also the case for these sub-samples. However, it is known that RD QSOs are intrinsically redder than RUD QSOs (e.g.\ see Figure 6 of Kimball et al. 2011; hereafter K11), irrespective of the presence/absence of AAS. The geometric mean composites for the samples of 4714 RD and 65253 RUD DR7 QSOs in K11 were kindly provided to us by the authors. Fitting an SMC extinction curve to the ratio of the two composites, we estimate the relative \ensuremath{E(B-V)}~ between the two samples to be 0.042. Another selection effect could be important: the SDSS QSO target selection is made on the basis of (blue) color.
However, a number of QSOs which may not satisfy the color selection criteria are also observed based on other criteria, mainly their luminosity in other bands such as the radio; these QSOs need not have typical QSO colors and could be reddened by selection. About 70 QSOs in each of S9 and S10 are not color selected, i.e. a much higher fraction of the QSOs in the RD sample are not color selected and hence could be reddened. In addition, the intrinsic reddening in RD QSOs could also depend on their radio morphology. On the basis of a complete sample of 4714 SDSS QSOs in DR7 with determined radio properties and with FIRST flux S20 $>$ 2 mJy, K11 found that radio sources with unresolved cores have higher reddening. It is therefore necessary to construct control samples based also on the radio properties for sub-samples S9-S12. We constructed such control samples for sub-samples S9 and S10 (matching one to one in \ensuremath{z_{em}}~ and \ensuremath{i\;{\rm magnitude}}) from the RD and RUD QSOs (respectively) in SDSS DR7 which do not have AAS in their spectra. The \ensuremath{E(B-V)}~ values obtained using these control samples should be independent of any intrinsic reddening in the QSOs caused by their radio properties and should reflect the reddening caused by the presence of AAS. As the control sample now has matching radio properties, non-color selection should be equally probable in the sub-sample and its control sample. We also constructed control samples of lobe-dominated and core-dominated QSOs from such QSOs in DR7 without AAS for S11 and S12 respectively. The values of \ensuremath{E(B-V)}~ for all these sub-samples (S9-S12), given in Table 2, are calculated using these control samples with matching radio properties in addition to matching \ensuremath{z_{em}}~ and \ensuremath{i\;{\rm magnitude}}.
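The determination of a relative \ensuremath{E(B-V)}~ from the ratio of two composites, as used above and throughout section 3, can be sketched as a one-parameter least-squares fit. The extinction curve is supplied as a callable: the actual analysis uses the Pei (1992) SMC curve, which is not reproduced here; the normalization at 3000 {\it {\AA}}~ follows the text.

```python
import numpy as np

def fit_relative_ebv(wave, ratio, klambda, wave_norm=3000.0):
    """Estimate the relative E(B-V) from the ratio of an absorber
    composite to its control composite, normalized at `wave_norm`
    (3000 A in the text).  `klambda` is a callable giving the assumed
    extinction curve k(lambda) = A(lambda)/E(B-V); the paper uses the
    Pei (1992) SMC curve."""
    # attenuation in magnitudes, relative to the normalization point
    a_rel = -2.5 * np.log10(ratio / np.interp(wave_norm, wave, ratio))
    k_rel = klambda(wave) - klambda(wave_norm)
    # one-parameter least squares: a_rel ~= E(B-V) * k_rel
    return np.sum(k_rel * a_rel) / np.sum(k_rel ** 2)
```

Because the composites are normalized at a common wavelength, only the wavelength dependence of the attenuation constrains the fit, so the result is a relative color excess rather than an absolute extinction.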
\section{Results} \subsection{Dependence of the frequency of occurrence of AAS on the radio and other properties of QSOs} Among DR7 non-BAL QSOs with redshifts between 0.4 and 2.0, 68755 QSOs have been observed by the FIRST radio survey (Becker, White, \& Helfand 1995). Out of these, 6366 are radio detected, 4668 are core-dominated, and 1698 are lobe-dominated. Thus, the fraction of RD QSOs among all QSOs in DR7 in the relevant redshift range is 0.093$\pm$0.001. Out of the 1730 QSOs in our sample (having a single AAS each), 1604 have been observed by FIRST: 263 are RD, 196 are core-dominated, and 67 are lobe-dominated. Additionally, out of the 98 QSOs in the SM12 sample having multiple AAS, 90 have been observed by FIRST: 30 are RD (26 being CD) while 60 are RUD. The fraction of RD QSOs among the QSOs having AAS (single or multiple) is 0.17$\pm$0.01. The occurrence of AAS is higher by a factor of 2.1$\pm$0.1 in RD QSOs compared to RUD QSOs. Similarly, while a fraction $\sim$0.11$\pm$0.02 of the RD QSOs in our sample have multiple AAS, the fraction is $\sim$0.045$\pm$0.006 for the RUD QSOs: the incidence of multiple AAS is $\sim$2.5$\pm$0.6 times as likely in RD QSOs as in RUD ones. These values remain unchanged (to within 1 \ensuremath{\sigma}) if the sample is restricted to the systems with \ensuremath{\beta}~$<$ 0.005 (V $<$ 1500 km s$^{-1}$), which are intrinsic systems according to the results of SM12. Thus, the dependence of the incidence of AAS on radio properties is the same for all systems with \ensuremath{\beta}~$<$ 0.01. The occurrence of AAS is significantly related to the radio properties of the QSOs, which is consistent with the results of most of the previous studies mentioned in section 1. The fraction of core-dominated QSOs among the RD DR7 QSOs is $\sim$0.73$\pm$0.01, while in our sample, selected by the presence of (single or multiple) AAS, it is $\sim$0.76$\pm$0.07.
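The fractional incidences and their uncertainties quoted above follow from simple counting statistics; a minimal sketch, assuming binomial errors on each fraction and Gaussian propagation for their ratio, is:

```python
import numpy as np

def fraction(k, n):
    """Binomial fraction k/n and its 1-sigma counting error."""
    p = k / n
    return p, np.sqrt(p * (1.0 - p) / n)

def ratio_with_error(a, sa, b, sb):
    """Ratio a/b with errors propagated in quadrature."""
    r = a / b
    return r, r * np.sqrt((sa / a) ** 2 + (sb / b) ** 2)

# Counts quoted in the text: 293 of the 6366 FIRST-detected (RD) QSOs
# host single or multiple AAS (263 + 30), vs 1401 of the 62389 RUD
# QSOs observed by FIRST (1341 + 60).
p_rd, e_rd = fraction(293, 6366)
p_rud, e_rud = fraction(1401, 62389)
excess, excess_err = ratio_with_error(p_rd, e_rd, p_rud, e_rud)
# excess ~ 2.05 +/- 0.13, consistent with the 2.1 +/- 0.1 quoted above
```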
The presence of AAS thus seems to be independent of the morphology of the radio source, though we caution that the numbers here are small and the statistics may not be very meaningful. These results are unchanged if we restrict the analysis to systems with \ensuremath{\beta}~$<$ 0.005. Among the 73181 non-BAL QSOs in the redshift range 0.4-2.0 in DR7, 47521 have log(\ensuremath{M_{\rm BH}}) $\le$ 9.1 while 25660 have log(\ensuremath{M_{\rm BH}}) $>$ 9.1. AAS are present in a fraction 0.028$\pm$0.001 of the QSOs having smaller black hole masses, while for the more massive black holes the fraction is 0.033$\pm$0.001; the values are thus significantly different. The corresponding values for QSOs with log(\ensuremath{R_{\rm Edd}}) $\le$ $-$0.81 and $>$ $-$0.81 are 0.024$\pm$0.001 and 0.022$\pm$0.001 respectively. Thus, considering the black hole mass, QSOs with older black holes have a higher frequency of occurrence of AAS, while no such difference exists between the samples of lower and higher Eddington ratios. \subsection{Reddening} The reddening for S1 can be directly compared with the reddening obtained for the intervening systems by Y06, as both samples are drawn from the SDSS data and use the same method of composite spectra, and as the selection criteria for the two samples are identical except for the range of relative velocities with respect to the QSOs. The \ensuremath{E(B-V)}~ for S1 is 3.2$\pm$0.8 times higher than that for the intervening systems, for which the \ensuremath{E(B-V)}~ is 0.013. A comparison with the results of V08 shows that the \ensuremath{E(B-V)}~ values obtained here are larger by factors of 1.5-2, while the dependence of \ensuremath{E(B-V)}~ on \ensuremath{{\rm W}_{\rm Mg\;II}}, \ensuremath{i\;{\rm magnitude}}~ and radio properties is the same. This could partly be because of the higher fraction (15.2\%) of RD QSOs in our sample as compared to that (9.9\%) in the sample of V08, and partly a small sample effect.
As seen from Table 2, the reddening is higher in the fainter QSOs (S3) than in the brighter ones (S2). This could be due to the higher average \ensuremath{{\rm W}_{\rm Mg\;II}}~ (1.61 {\it {\AA}}) in S3 compared to that (1.32 {\it {\AA}}) in S2, as the reddening is sensitive to the \ensuremath{{\rm W}_{\rm Mg\;II}}~ values (see the \ensuremath{E(B-V)}~ values for S4 and S5 in Table 2). The reddening in AAS with \ensuremath{\beta}~$>$ 0.005 (S8) is significantly smaller than the values for the samples with smaller \ensuremath{\beta}~ (S6 and S7), as found by SM12, but is still higher by a factor of 2.1$\pm$0.5 than that in the intervening systems. The sub-sample with higher \ensuremath{M_{\rm BH}}~ (S14) is brighter (see Table 1) than that with lower \ensuremath{M_{\rm BH}}~ (S13), while the \ensuremath{E(B-V)}~ values are equal for the two sub-samples. The sub-sample with lower \ensuremath{R_{\rm Edd}}~ (S15) is more reddened than the sub-sample with higher Eddington ratios (S16). \subsubsection{Dependence of reddening on radio properties of the QSOs} It is clear from the \ensuremath{E(B-V)}~ values that the AAS in RD QSOs are indeed dustier (by a factor of 2.6$\pm$0.2) than those in RUD QSOs. Among the average properties (see Table 1) of the RD and RUD QSOs (sub-samples S9 and S10), the RD QSOs have only marginally higher \ensuremath{{\rm W}_{\rm Mg\;II}}, $i$ band brightness, and \ensuremath{R_{\rm Edd}}. The excess extinction in the RD sub-sample over that in the RUD QSOs thus cannot possibly be accounted for by the differences in these average properties. In particular, as the difference in reddening cannot be accounted for by the difference in \ensuremath{{\rm W}_{\rm Mg\;II}}, the AAS towards RD QSOs may, as noted in section 2.2, have a higher dust-to-gas ratio. Differences can also be seen between the core- and lobe-dominated sub-samples, in that the AAS towards core-dominated QSOs are 2.0$\pm$0.1 times more reddened.
Both classes of RD QSOs (S11 and S12) are significantly more reddened than the RUD QSOs. This is consistent with the results of V08 but is in contrast to the results of K11, who found that only radio sources with unresolved cores have higher reddening, the other classes of RD QSOs having reddening comparable to that of the RUD QSOs. Also, the reddening in AAS in RUD QSOs is 2.9$\pm$0.7 times higher than that in the intervening systems (Y06). To check if the reddening in RD QSOs with AAS is correlated with \ensuremath{{\rm W}_{\rm Mg\;II}}, we divided sub-sample S9 into two roughly equal halves (S9a and S9b) depending on \ensuremath{{\rm W}_{\rm Mg\;II}}. The results for these are given in Table 2. These confirm that the RD QSOs with stronger {Mg~II}~ absorption lines are considerably more reddened than those having weaker {Mg~II}~ lines, the difference between the \ensuremath{E(B-V)}~ values of the two samples being 0.093. The average \ensuremath{{\rm W}_{\rm Mg\;II}}~ values for the two sub-samples S9a and S9b are 2.5 and 0.9 {\it {\AA}}, very similar to those of sub-samples S5 and S4 (2.1 and 0.83 {\it {\AA}}) respectively, for which the \ensuremath{E(B-V)}~ values differ by only 0.029. Thus the RD QSOs show a stronger dependence of reddening on \ensuremath{{\rm W}_{\rm Mg\;II}}. We also divided the RUD sub-sample (S10) into two halves (S10a and S10b) depending on \ensuremath{{\rm W}_{\rm Mg\;II}}. The results for these are given in the last rows of Table 2. The average \ensuremath{{\rm W}_{\rm Mg\;II}}~ values for sub-samples S10a and S10b and the difference in the \ensuremath{E(B-V)}~ values for these samples are similar to the corresponding values for sub-samples S5 and S4. The main conclusion from this study is that the AAS in RD QSOs are significantly more reddened than those in RUD QSOs which, in turn, are significantly more reddened than the intervening systems.
\begin{deluxetable*}{lcrllrlll}
\tablecaption{Definitions and properties of sub-samples of AAS.}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{\bf Sample}&\colhead{\bf Selection}&\colhead{\bf No. of}&\colhead{\bf $<$\ensuremath{{\rm W}_{\rm Mg\;II}}$>$}&\colhead{$\bf <$z$_{em}>$}&\colhead{$\bf <\beta>$}&\colhead{$\bf <$m$_i^a>$}&\colhead{\bf Log($<$M$^b_{\rm BH}>$)}&\colhead{\bf Log($<$R$_{\rm Edd}>$)}\\
{\bf number}&{\bf criterion}&\colhead{\bf systems}&\colhead{\bf in {\it {\AA}}}&&$\times$10$^{3}$&&&}
\startdata
S0&Sample of SM12&1937&1.43&1.27&1.3&18.56&9.04&-0.64\\
S1&Full sample&1730&1.46&1.28&2.16&18.59&9.04&-0.63\\
S2&m$_{i\le}$18.68&862&1.32&1.26&2.46&18.11&9.15&-0.58\\
S3&m$_{i>}$18.68&868&1.61&1.30&1.85&19.08&8.93&-0.69\\
S4&W$_{\rm Mg\;II}\le$1.24 {\it {\AA}}&866&0.83&1.26&2.27&18.46&9.06&-0.64\\
S5&W$_{\rm Mg\;II}>$1.24 {\it {\AA}}&864&2.10&1.30&2.04&18.73&9.01&-0.62\\
S6&\ensuremath{\beta}~$<$ 0.0&510&1.49&1.29&-1.39&18.62&9.05&-0.67\\
S7&0.0 $\le$ \ensuremath{\beta}~$<$ 0.005&841&1.47&1.22&1.83&18.64&9.04&-0.70\\
S8&\ensuremath{\beta}~$\ge$ 0.005&379&1.42&1.40&7.66&18.46&9.01&-0.47\\
S9&Radio-detected (RD)&263&1.67&1.18&1.79&18.45&9.03&-0.54\\
S10&Radio-undetected (RUD)&1341&1.43&1.30&2.25&18.62&9.04&-0.65\\
S11&Lobe-dominated (LD)&67&1.46&1.18&1.64&18.45&9.11&-0.42\\
S12&Core-dominated (CD)&196&1.74&1.18&1.85&18.45&9.01&-0.58\\
S13&Log(M$_{\rm BH}$)$\le$9.09&874&1.47&1.12&2.23&18.72&8.66&-0.49\\
S14&Log(M$_{\rm BH}$)$>$9.09&856&1.46&1.44&2.08&18.47&9.42&-0.85\\
S15&Log(R$_{\rm Edd}$)$\le$-0.81&870&1.55&1.20&1.59&18.78&9.19&-1.09\\
S16&Log(R$_{\rm Edd}$)$>$-0.81&860&1.38&1.36&2.73&18.41&8.88&-0.411\\
\enddata
\tablenotetext{a}{SDSS $i$ magnitude, corrected for Galactic extinction.}
\tablenotetext{b}{In units of M$_\odot$.}
\end{deluxetable*}
\begin{deluxetable*}{cccc}
\tabletypesize{\scriptsize}
\tablecaption{$E(B-V)$ and [O~II]$\lambda$3727 flux excess for sub-samples of AAS.}
\vspace*{0.1in}
\tablehead{
\colhead{\bf Sample}&\colhead{\bf
Selection}&\colhead{\bf \ensuremath{E(B-V)}$^a$ w.r.t.}&\colhead{\bf [O~II] flux w.r.t.}\\
{\bf number}&{\bf criterion}&{\bf control samples}&{\bf control samples$^b$}}
\startdata
S1&Full sample&0.041&1.92$\pm$0.04\\
S2&m$_{i\le}$18.68&0.031&1.84$\pm$0.04\\
S3&m$_{i>}$18.68&0.049&2.07$\pm$0.05\\
S4&W$_{\rm Mg\;II}\le$1.24 {\it {\AA}}&0.026&1.90$\pm$0.05\\
S5&W$_{\rm Mg\;II}>$1.24 {\it {\AA}}&0.055&1.94$\pm$0.04\\
S6&\ensuremath{\beta}~$<$ 0.0&0.047&1.32$\pm$0.05\\
S7&0.0 $\le$ \ensuremath{\beta}~$<$ 0.005&0.055&2.32$\pm$0.06\\
S8&\ensuremath{\beta}~$\ge$ 0.005&0.027&1.52$\pm$0.07\\
S9&Radio-detected (RD)&0.097&1.73$\pm$0.04\\
S10&Radio-undetected (RUD)&0.038&1.87$\pm$0.06\\
S11&Lobe-dominated (LD)&0.056&2.08$\pm$0.13\\
S12&Core-dominated (CD)&0.109&1.44$\pm$0.04\\
S13&Log(M$_{\rm BH}$)$\le$9.09&0.038&1.76$\pm$0.04\\
S14&Log(M$_{\rm BH}$)$>$9.09&0.041&2.40$\pm$0.08\\
S15&Log(R$_{\rm Edd}$)$\le$-0.81&0.058&2.19$\pm$0.05\\
S16&Log(R$_{\rm Edd}$)$>$-0.81&0.025&1.55$\pm$0.04\\
S9a$^c$&RD, W$_{\rm Mg\;II}>$1.4&0.149&2.10$\pm$0.08\\
S9b$^d$&RD, W$_{\rm Mg\;II}\le$1.4&0.056&1.44$\pm$0.04\\
S10a$^e$&RUD, W$_{\rm Mg\;II}>$1.23&0.057&2.38$\pm$0.13\\
S10b$^f$&RUD, W$_{\rm Mg\;II}\le$1.23&0.018&1.48$\pm$0.06\\
\enddata
\tablenotetext{a}{One \ensuremath{\sigma}~ errors in \ensuremath{E(B-V)}~ are typically smaller than 0.003.}
\tablenotetext{b}{Ratio of the flux in the [O~II] emission line in the sample composite to that in the composite of the corresponding control sample. Note that the control samples used for S9, S10, S11, S12 and for S9a, S9b, S10a and S10b have matching radio properties.}
\tablenotetext{c}{S9a: 129 systems belonging to sub-sample S9 and having \ensuremath{{\rm W}_{\rm Mg\;II}} $>$ 1.4 {\it {\AA}}.}
\tablenotetext{d}{S9b: 134 systems belonging to sub-sample S9 and having \ensuremath{{\rm W}_{\rm Mg\;II}} $\le$ 1.4 {\it {\AA}}.}
\tablenotetext{e}{S10a: 666 systems belonging to sub-sample S10 and having \ensuremath{{\rm W}_{\rm Mg\;II}} $>$ 1.23 {\it {\AA}}.}
\tablenotetext{f}{S10b: 675 systems belonging to sub-sample S10 and having \ensuremath{{\rm W}_{\rm Mg\;II}} $\le$ 1.23 {\it {\AA}}.}
\end{deluxetable*}
\subsection{Emission lines from host galaxies/absorbers} \subsubsection{Excess emission flux in QSOs with AAS} SM12 found an excess emission flux in the [O~II]$\lambda$3727 line (hereafter EEFOII) in the composite spectrum of QSOs having AAS with \ensuremath{\beta}~$<$\,0.005 over that in the composite spectrum of the control sample of QSOs without AAS, both made in the emission rest frame. These results were interpreted as indicating that the EEFOII originates in the host galaxies of QSOs having AAS with \ensuremath{\beta}~$<$\,0.005, and that it is a measure of the SFR in those galaxies. We discuss the [O~II] emission further below. In order to study the emission lines, we constructed continuum-subtracted composites (median and mean) of the various sub-samples, as well as those of the corresponding control samples, in the QSO rest-frame. We also constructed the corresponding composites in the absorber rest-frames. For this, the spectrum of each QSO in the control sample was shifted to the rest-frame of the absorber towards the corresponding QSO (matching in \ensuremath{i\;{\rm magnitude}}~ and \ensuremath{z_{em}}) in the absorber sample.
We observed that the median composite is not very meaningful when constructed in the absorber rest-frame, as the peak of the [O~II]$\lambda$3727 emission line (which gets contributions from the AGN and its host galaxy at the emission redshift) is shifted by different amounts (depending on the value of \ensuremath{\beta}), and as a result the total flux in the line in the median composite is not the same as that in the median composites in the emission rest-frame. This problem does not appear if we construct the mean composites (without clipping in the emission line region). We have, therefore, used the mean composites for measuring the EEFOII. The values of the ratio of the [O~II] flux in the composite spectrum of each sample to that in the composite of the corresponding control sample, using the control samples described in section 2, are listed in column 4 of Table 2. In most cases these are independent of whether the mean composites are made in the emission or absorption rest-frames; we mention the exception to this below. It can be seen that a significant EEFOII is present for all sub-samples. The magnitude of the EEFOII depends on the QSO brightness, \ensuremath{\beta}, black hole mass, Eddington ratio, and radio morphology. The EEFOII does not depend on the strength of the absorption lines (S4 and S5). This is different from what is seen for the intervening systems, where the [O~II] emission flux is proportional to \ensuremath{{\rm W}_{\rm Mg\;II}}~ (Noterdaeme et al. 2010; Menard et al. 2011). As will be seen below, this is due to the different fractions of RD and RUD QSOs in these samples. The sub-samples with lower \ensuremath{M_{\rm BH}}~ and higher \ensuremath{R_{\rm Edd}}~ have lower EEFOII. Finally, the EEFOII in the S7 sample (0 $\le$ \ensuremath{\beta}~$<$ 0.005) is much higher than that in S6 (\ensuremath{\beta}~$<$ 0) and S8 (\ensuremath{\beta}~$\ge$ 0.005).
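The [O~II] flux-ratio measurement described above amounts to integrating the continuum-subtracted line flux in the sample and control mean composites and taking the ratio. A simplified sketch follows; the window widths and the linear side-band continuum fit are assumptions of this illustration rather than the exact procedure of the text.

```python
import numpy as np

def oii_flux_ratio(wave, comp_sample, comp_control,
                   line=3727.0, half_window=15.0, cont_width=30.0):
    """Ratio of the [O II] 3727 line flux in a sample mean composite
    to that in the corresponding control composite.  The continuum is
    a linear fit to side-bands around the line; the window sizes here
    are illustrative (the text fits ~30 A wide regions around 3727 A
    in the rest-frame)."""
    def line_flux(flux):
        inline = np.abs(wave - line) <= half_window
        sideband = (~inline) & (np.abs(wave - line) <= cont_width)
        coeffs = np.polyfit(wave[sideband], flux[sideband], 1)
        cont = np.polyval(coeffs, wave)
        dx = wave[1] - wave[0]                # assumes a uniform grid
        return np.sum((flux - cont)[inline]) * dx
    return line_flux(comp_sample) / line_flux(comp_control)
```

Because the same continuum model and windows are applied to both composites, small imperfections in the continuum fit largely cancel in the ratio.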
It thus appears that the hosts of AAS with \ensuremath{\beta}~ between 0 and 0.005 are different from those with \ensuremath{\beta}~$<$ 0 and $>$ 0.005. \begin{figure} \epsscale{1.0} \plotone{F1n.eps} \caption{The left panel shows the absorber frame residual spectra (difference between the absorber rest-frame composites of the absorber sample and those of the corresponding control sample) for various \ensuremath{\beta}~ dependent samples. The samples plotted are: S6 (solid line), $0 < v < 200 \,\ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0-0.0007)$ (short dashed line), $200$\,\ensuremath{{\rm km\;s}^{-1}}\,$< v < 400$\,\ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.0007-0.0013) (long dashed line), $400$\,\ensuremath{{\rm km\;s}^{-1}}\,$< v < 600$\,\ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.0013-0.002) (dotted line) and $600$\,\ensuremath{{\rm km\;s}^{-1}}\,$< v < 800$\,\ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.002-0.0027) (dash-dotted line). The vertical lines indicate the effective wavelengths of the [O~II] doublet (3728.6 {\it {\AA}}, the value used by Hewett \& Wild) at the mean values of the relative velocities for each sample. The right panel shows the same but for the emission rest-frame composites.\label{fig1}} \end{figure} In order to confirm that the EEFOII arises in the emission rest-frame, we have plotted in the left panel of Figure 1 the excess flux in the [O~II] line in the absorption rest-frame composite spectra of the various \ensuremath{\beta}~ dependent sub-samples over that in the composites of the corresponding control samples.
For this figure we constructed additional sub-samples having relative velocities between (i) 0 and 200 \ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0-0.0007), (ii) 200 and 400 \ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.0007-0.0013), (iii) 400 and 600 \ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.0013-0.002), and (iv) 600 and 800 \ensuremath{{\rm km\;s}^{-1}} (\ensuremath{\beta}~=0.002-0.0027). The numbers of systems in these sub-samples are 214, 185, 110, and 106 respectively. It is clear that the line centers in the absorber rest-frame composites are progressively shifted towards longer wavelengths with increasing \ensuremath{\beta}~ values of the sub-samples. We have plotted the positions of the [O~II] lines at the mean velocities of the samples as vertical lines. For all samples with \ensuremath{\beta}~$>$ 0, these are roughly consistent with the centers of the corresponding emission excess, confirming that the [O~II] emission is in the QSO rest-frame. For the \ensuremath{\beta}~$<$ 0 sample, the line shows up at $\sim$3728 {\it {\AA}}~ in the absorber rest-frame composite. The [O~II] emission for these systems thus occurs in the absorber rest-frame and does not show up in the emission rest-frame composite. It thus seems possible that these absorbers are galaxies which are falling towards the QSOs. In the right panel of Figure 1, we have plotted the difference between the composites of the same samples and the composites of the corresponding control samples in the emission rest-frame. The excess is centered around $\sim$3728\,{\it {\AA}}~ and clearly shows that the excess emission occurs at the QSO redshift, as was observed by SM12, and may originate in the host galaxies of the individual QSOs. Note that the excess emission is not seen here for the \ensuremath{\beta}~$<$ 0 sample.
We have measured the FWHMs of the excess [O~II] emission in the composites (like those plotted in the right panel of Figure 1) of the various sub-samples, and these values all fall between 8 and 10 {\it {\AA}}. To estimate the errors in the FWHM caused by the errors in \ensuremath{z_{em}}, we generated 100 composite spectra for each of the samples plotted in Figure 1, each time choosing a random \ensuremath{z_{em}}~ within the one sigma values given by Hewett \& Wild (2010). These errors in \ensuremath{z_{em}}~ are smaller than 0.006 for the QSOs in our sample and in most cases are in fact smaller than 0.0025. The one \ensuremath{\sigma}~ errors in the FWHMs, as measured from the FWHMs of the 100 composites for each of the samples, are all smaller than 0.5 {\it {\AA}}. The FWHMs, thus, are large and are very similar to the FWHMs of the [O~II] emission lines themselves in the composites of the control samples. It therefore appears that the excess flux originates in the same regions as the [O~II] emission lines of the QSOs. \subsubsection{Dependence of the EEFOII on radio properties} Figure 6 of K11 shows that the flux of the [O~II]$\lambda$3727 line is significantly higher in the geometric mean composite of all radio-detected QSOs than in that of all RUD QSOs in DR7. K11 have also studied the dependence of the emission line fluxes in the median composite spectra on the radio properties of the QSOs for several emission lines, and find that the fluxes depend on the type of radio source (see their Figure 7). In our samples, the values of the EEFOII in RD QSOs are similar to those in RUD QSOs (with AAS). The EEFOII for the absorbers in the lobe-dominated QSOs (S11), however, is higher than that for the core-dominated QSOs (S12). Thus, on the whole the EEFOII in RD and RUD QSOs is the same, but within the RD QSOs the EEFOII depends on the radio morphology. It is interesting to note that in CD QSOs it is even lower than in the RUD QSOs.
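The Monte Carlo estimate of the FWHM uncertainty described earlier in this section (re-making each composite 100 times with the \ensuremath{z_{em}}~ values perturbed within their quoted errors) can be sketched as follows. The Gaussian perturbation, the moment-based FWHM estimator and the rest-frame grid are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_fwhm(wave, flux):
    """FWHM of an emission feature from its second moment
    (assumes an approximately Gaussian profile)."""
    w = np.clip(flux, 0.0, None)
    mu = np.sum(wave * w) / np.sum(w)
    sigma = np.sqrt(np.sum(w * (wave - mu) ** 2) / np.sum(w))
    return 2.3548 * sigma  # FWHM = 2 sqrt(2 ln 2) sigma

def fwhm_scatter(wave_obs, fluxes, z_em, z_err, n_trials=100):
    """Scatter of the composite [O II] FWHM when each z_em is
    perturbed within its 1-sigma error, as in the 100-trial test
    described in the text."""
    grid = np.linspace(3700.0, 3760.0, 121)   # rest-frame grid (assumed)
    fwhms = []
    for _ in range(n_trials):
        z = z_em + rng.normal(0.0, z_err, size=np.shape(z_em))
        comp = np.mean([np.interp(grid, wave_obs / (1.0 + zi), f)
                        for zi, f in zip(z, fluxes)], axis=0)
        fwhms.append(moment_fwhm(grid, comp))
    return np.std(fwhms)
```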
A comparison of the EEFOII for the RD sub-samples S9a and S9b, defined on the basis of \ensuremath{{\rm W}_{\rm Mg\;II}}, shows that among the RD QSOs the EEFOII is much stronger in QSOs having AAS with larger \ensuremath{{\rm W}_{\rm Mg\;II}}. The same also holds if we divide the RUD sub-sample S10 into two halves (S10a and S10b) depending on \ensuremath{{\rm W}_{\rm Mg\;II}}. Thus, the EEFOII depends on the strength of the {Mg~II}~ lines in the AAS in both the RD and RUD sub-samples. This dependence on \ensuremath{{\rm W}_{\rm Mg\;II}}~ is qualitatively similar to what has been found for the intervening systems (Noterdaeme et al. 2010; Menard et al. 2011). \section{Discussion} Using the largest sample of AAS compiled so far, we have found that the incidence of {Mg~II}~ AAS depends significantly on the radio properties of the QSOs. Radio-detected QSOs are 2.1$\pm$0.2 times more likely to have AAS than RUD QSOs. Among the RD QSOs, the incidence of AAS does not seem to depend on the morphology of the radio source. These results are inconsistent with earlier studies (mentioned in section 1) that found the incidence of AAS to be either independent of the radio properties of the QSOs (Vestergaard 2003) or higher in lobe-dominated QSOs (e.g. Aldcroft et al. 1994). We also find a significantly higher frequency of occurrence of AAS in QSOs with higher black hole masses than in QSOs with smaller black hole masses. It is clear from our results that the radio properties of the QSOs play an important role in influencing the reddening properties of the AAS. The dust extinction is higher in the AAS in RD QSOs than in those in RUD QSOs, by a factor of 2.6$\pm$0.2. Among the RD QSOs, the AAS in CD QSOs are more reddened, by a factor of 2.0$\pm$0.1, than those in LD QSOs. The reddening in the AAS in RUD QSOs is also higher than that in the intervening systems (by a factor of 2.9$\pm$0.7).
For both types of QSOs (RD and RUD), the reddening in the AAS depends on the strength of absorption lines; AAS with stronger lines show higher reddening. Based on the dust extinction alone, the AAS in both RD and RUD QSOs appear to be intrinsic to the QSO. The reddening is insensitive to the black hole mass but depends on the Eddington ratio in the sense that AAS in QSOs having lower Eddington ratios have higher dust extinction. The dust extinction is found to be similar in AAS with \ensuremath{\beta}~ $<$0 and with \ensuremath{\beta}~ between 0 and 0.005. The AAS with \ensuremath{\beta}~ $>$ 0.005 are found to have smaller dust extinction by a factor of 1.7--2.0; however, the extinction is still higher than that in the intervening systems (Y06) by a factor of 2.1$\pm0.5$, and therefore those AAS may not be intervening as suggested by SM12. We note that there is always a selection effect acting against the very dusty systems, which will not be observable in a flux limited survey. This, however, applies to the AAS as well as to the intervening systems. Thus, the difference in the dust content of these two types of systems, as obtained by comparing our results with those of Y06, is real as both samples are taken from the SDSS. We have earlier shown (Khare et al. 2007), from a study of abundances in samples of DLAs and sub-DLAs observed at high resolution and also the average abundances in SDSS Mg~II systems at redshifts similar to what we are studying here, that the obscuration bias is not likely to be important. Either very dusty systems do not exist, or there may be a bi-modal distribution of dust, in which case there may exist a population of completely dust obscured QSOs. Our results do not apply to such a population. We have clearly demonstrated that the excess emission in the [O~II] line originates at the emission redshift of the QSOs (except for the \ensuremath{\beta}~$<$0 sample for which it occurs close to the absorption redshift).
This could be due to star formation in the host galaxy. However, for all sub-samples, the FWHM of the excess flux distribution is large ($\sim$ 8-10 {\it {\AA}}) and is similar to the FWHM of the total [O~II] emission. Thus, the EEFOII may also originate in the same regions in the parent AGN and the host galaxy which emit the [O~II] line, and the presence of AAS enhances the [O~II] emission from these regions. In the whole sample, the EEFOII does not depend on the Mg~II equivalent width because of the different fractions of RD and RUD QSOs in the \ensuremath{{\rm W}_{\rm Mg\;II}}~ dependent sub-samples. This is clear from the fact that when we consider the RD and RUD samples separately, the EEFOII is significantly higher for QSOs having AAS with larger \ensuremath{{\rm W}_{\rm Mg\;II}}, similar to what is observed for the intervening absorbers. The values of the EEFOII are within the range of fluxes determined for intervening Mg~II systems (Noterdaeme et al. 2010). The EEFOII is similar for the RD and RUD QSOs; however, among the RD QSOs, the EEFOII is higher for the LD QSOs as compared to the CD QSOs. Thus, the dust extinction and EEFOII seem to be anti-correlated among the CD and LD QSOs, which is expected, except that the dust extinction seems to be too small to explain the difference in EEFOII. The EEFOII depends on \ensuremath{M_{\rm BH}}~ and \ensuremath{R_{\rm Edd}}: it is higher in AAS in QSOs having higher \ensuremath{M_{\rm BH}}~ and lower \ensuremath{R_{\rm Edd}}. \section{Conclusions} We have studied the properties of associated absorption systems in the redshift range of 0.4 to 2.0, in the spectra of 1730 SDSS QSOs. The main conclusions are as follows: \begin{enumerate} \item The average dust extinction is found to be of SMC type with no evidence for the 2175 {\it {\AA}}~ bump. \item The dust extinction is 3.2$\pm$0.8 times greater than in the intervening {Mg~II}~ absorbers with similar selection criteria.
\item By using the control samples comprising only RD, RUD, lobe-dominated and core-dominated QSOs, we find that (a) the AAS in RD QSOs are 2.6$\pm$0.2 times more dusty compared to the AAS in RUD QSOs; (b) the reddening due to AAS in RD QSOs has a stronger dependence on \ensuremath{{\rm W}_{\rm Mg\;II}}~ compared to that in RUD QSOs; (c) among the RD QSOs, the AAS in the core-dominated QSOs have 2.0$\pm$0.1 times higher dust extinction compared to those in the lobe-dominated QSOs; (d) the reddening in the AAS in RUD QSOs is 2.9$\pm$0.7 times higher than that in intervening absorbers. \item The reddening does not depend on the black hole mass, and thus on the age of the black hole. \item The reddening does depend on the Eddington ratio: systems with smaller \ensuremath{R_{\rm Edd}}~ have higher reddening. \item The occurrence of AAS is 2.1$\pm$0.5 times more likely in RD QSOs compared to RUD QSOs. \item The occurrence of multiple AAS is 2.5$\pm$0.6 times more likely in RD QSOs than in RUD QSOs. \item Among the RD QSOs, the frequency of occurrence of AAS appears to be independent of the radio morphology. \item The frequency of occurrence of AAS depends on black hole mass: QSOs with larger \ensuremath{M_{\rm BH}}~ have a higher incidence rate. Thus, the incidence rate is higher for older black holes. \item The EEFOII in the AAS samples over that in the control samples originates at the QSO redshift, and is consistent with its origin in the QSO. The width of the excess emission is large (FWHM $\sim$ 8-10 {\it {\AA}}) and is similar to the width of the [O~II] line itself. This indicates its origin in the [O~II] emitting regions in the AGN and its host galaxy. The presence of AAS enhances the [O~II] emission from the AGN and/or the host galaxy. \item The EEFOII is similar for RD and RUD QSOs.
\item For the RD as well as for RUD QSOs, the EEFOII depends on \ensuremath{{\rm W}_{\rm Mg\;II}}, such that the sub-sample with higher \ensuremath{{\rm W}_{\rm Mg\;II}}~ has higher EEFOII in both cases. \item Among the RD QSOs, the EEFOII is higher for the LD QSOs by a factor of 2.5$\pm$0.4 compared to the CD QSOs. \item The EEFOII depends on the mass of the black hole and the Eddington ratio such that QSOs with higher \ensuremath{M_{\rm BH}}~ and lower \ensuremath{R_{\rm Edd}}~ have higher EEFOII. \item The EEFOII and dust extinction in CD and LD QSOs are anticorrelated. \item The EEFOII is similar in magnitude to that found in the intervening absorbers. \end{enumerate} Based on these results the AAS seem to have very different amounts of dust and dust-to-gas ratios as compared to the intervening systems, which seem to depend on the radio properties of the QSOs and also on the masses of the central black hole. The excess [O~II] emission occurs at the QSO redshift. The width of the excess emission is similar to that of the emission lines in the control samples. The AAS could therefore be intrinsic to the QSOs. Even with this large sample of AAS, we are possibly able to argue against only two of the possibilities, (i) and (iv), mentioned in section 1 for the origin of these systems. Their higher dust content compared to the intervening systems (even for \ensuremath{\beta} $>$ 0.005 systems among the AAS) and its dependence on QSO properties argue against their origin in the ISM of galaxies clustering around the QSO, or in the ISM of the host galaxies themselves. One can argue that the jets from the AGN can influence the ISM in the host as well as the surrounding galaxies. However, the higher extinction is seen in both radio loud and radio quiet QSOs. Further studies are needed to distinguish between the other two scenarios. \section*{Acknowledgments} PK thanks CSIR, India for the grant of an Emeritus Scientist fellowship.
We would like to thank Shen and Menard for making the absorber sample as well as the control sample available. We are grateful to the referee for giving detailed comments and suggestions which helped improve the presentation in a significant way.
\section{Introduction}\label{sec:intro} ESA's {\it Gaia}\ mission \citep{GaiaMission} is an enormous project that is revolutionising Milky Way astronomy. {\it Gaia}\ will provide a wide range of data about the stars of the Milky Way, including photometry and spectroscopy. However it is the astrometry -- and in particular the parallaxes -- from {\it Gaia}\ that is the cause of the most excitement. It is very difficult to determine the distances to stars, and not knowing the distance to a star means that one knows neither where it is nor how fast it is moving, even if the proper motion of the star is known. The RAVE survey \citep[Radial Velocity Experiment:][]{RAVE1} is a spectroscopic survey that took spectra for $\sim$$500\,000$ stars. From these one could determine for each star its line-of-sight velocity and the stellar parameters, such as its effective temperature ($T_\mathrm{eff}$), surface gravity ($\log g$) and metallicity ($[\mathrm{M}/\mathrm{H}]$). These can be used to derive the distances to stars, and since RAVE's fourth data release \citep{RAVEDR4} these have been provided by the Bayesian method that was introduced by \cite{BuJJB10}, and extended by \cite{JJBea14}. { Bayesian methods had previously been used for distance estimation in astrophysics for small numbers of stars of specific classes \citep{Th03,Baea03}, and the \citeauthor{BuJJB10} method is similar to an approach that } had previously been used to determine the ages of stars \citep{PoEy04,JoLi05}. Closely related approaches have since been used by numerous studies \citep[e.g.,][]{Seea13,ScBe14,Waea16,Saea16,Scea17,MiHe17,Quea17}. The method produces a probability density function (pdf) for the distance, and these pdfs were tested by, amongst other things, comparison of some of the corresponding parallax estimates to the parallaxes found by {\it Gaia}'s predecessor {\it Hipparcos}\ \citep{HipparcosCatalogue,vL07}.
RAVE's most recent data release was the fifth in the series (henceforth DR5), and included distance estimates found using this method \citep{RAVEDR5}. The RAVE sample appears to be kinematically and chemically unbiased \citep{Woea17}. {\it Gaia}'s first data release \citep[{\it Gaia}\ DR1, ][]{GaiaDR1, GaiaDR1:TGAS} includes parallaxes and proper motions for $\sim$$2\,000\,000$ sources. These were available earlier than full astrometry for the other $\sim$$1$ billion sources observed by {\it Gaia}, because the sources were observed more than twenty years earlier by the {\it Hipparcos}\ mission, and their positions at that epoch (and proper motions) appear in either the {\it Hipparcos}\ catalogue or the less precise Tycho-2 catalogue \citep{Tycho2}, which used data from the {\it Hipparcos}\ satellite's star mapper. This means that the proper motions of the stars can be derived using this very long time baseline, which breaks degeneracies between proper motion and parallax that made the determination of these parameters for the other sources impossible. The resulting catalogue is known as the Tycho-{\it Gaia}\ Astrometric solution \citep*[TGAS: ][]{TGAS15}. Since RAVE and TGAS use fundamentally different methods for deriving the distances to stars, it is inevitable that they have different precisions for different types of stars. The \cite{BuJJB10} method relies, fundamentally, on comparing the observed magnitude to the expected luminosity. The uncertainty in distance modulus, which is roughly equivalent to a relative distance uncertainty, is therefore approximately independent of the distance to the star. The parallax uncertainty from TGAS, on the other hand, is independent of the parallax value, so the relative precision declines with distance -- large distances correspond to small parallaxes, and therefore large relative uncertainties.
In Figure~\ref{fig:DR5uncerts} we show the quoted parallax uncertainty from both TGAS and DR5 for the sources common to both catalogues. In the case of TGAS we use the quoted statistical uncertainties (see Section~\ref{sec:TGAS} for further discussion). We also divide this into the uncertainty for giant stars (DR5 $\log g<3.5$) and dwarfs (DR5 $\log g\geq 3.5$). We see that for TGAS this distinction is immaterial, while it makes an enormous difference for DR5. The DR5 parallax estimates tend to be less precise than the TGAS ones for dwarfs (which tend to be nearby because the survey is magnitude limited), but as precise, or more, for the more luminous giants, especially the more distant ones. It is worth noting that TGAS provides only parallax measurements, not \emph{distance} estimates and, { as discussed by numerous authors at various points over the last century}, the relationship between one and the other is non-trivial when one takes the uncertainties into account \citep[e.g.][]{St27,LuKe73,LuAr97,CBJ15}. \cite{AsCBJ16} looked at how the distances derived from TGAS parallaxes depend on the prior probability distribution used for the density of stars, but did not use any information about a star other than its parallax. For this reason, and because TGAS parallaxes have large relative errors for distant stars, when studying the dynamics of the Milky Way using stars common to RAVE and TGAS, it has been seen as advantageous to use distances from DR5 rather than those from TGAS parallaxes \citep[e.g.,][]{Heea17,Huea16}. It is therefore important to improve these distance estimates and to check whether there are any systematic errors associated with the DR5 distance estimates. \cite{RAVEDR5} discusses the new efforts in RAVE DR5 to reconsider the parameters of the observed stars. They provided new $T_\mathrm{eff}$ values { derived from the Infrared Flux Method \citep[IRFM:][]{Blea79} using an updated version of the implementation described by \cite{Caea10:IRFM}. 
} Also provided in a separate data-table were new values of $\log g$ following a re-calibration for red giants from the \cite{Vaea17} study of 72 stars with $\log g$ values derived from asteroseismology of stars by the K2 mission \citep{K2}. These were not used to derive distances in the main DR5 catalogue, and we now explore how using these new data products can improve our distance estimates. \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/DR5TGAS_uncerts.eps}}} \caption{ Histograms of the quoted random parallax uncertainties ($\sigma_{\varpi}$) from TGAS and those from RAVE DR5 for stars common to the two catalogues. We show histograms of the uncertainties for all stars (solid), and separately for giants ($\log g_{\rm DR5}<3.5$) and dwarfs ($\log g_{\rm DR5}\geq 3.5$). The $y$-axis gives the number of stars per bin, and there are 40 bins in total in both cases. The cut-off at $1\,\mathrm{mas}$ for the TGAS parallaxes is due to a filter applied by the {\it Gaia}\ consortium to their DR1. For RAVE sources we make the standard cuts to the catalogue described in \protect\cite{RAVEDR5}. TGAS parallaxes are more precise than RAVE's for dwarfs, but not necessarily for giants. \label{fig:DR5uncerts} } \end{figure} In this study, we compare parallax estimates from TGAS and RAVE to learn about the flaws in both catalogues. We then include the TGAS parallaxes in the RAVE distance estimation, to derive more precise distance estimates than are possible with either set of data in isolation. It is also possible to derive ages for stars from the same efforts, indeed the use of Bayesian methods to derive distances was preceded by studies using them to determine ages \citep{PoEy04,JoLi05}. RAVE DR4 included the age estimates derived alongside the distances, but these were recognised as only being indicative \citep{RAVEDR4}. In this study we show the substantial improvement that is possible using TGAS parallaxes and a more relaxed prior. 
In Section~\ref{sec:Bayes} we describe the method used to derive distances. In Section \ref{sec:DR5} we compare results from DR5 to those from TGAS, which motivates us to look at improving our parallax estimates using other RAVE data products in Section~\ref{sec:improved}. In Section~\ref{sec:altprior} we explore the effect of varying our prior. In Section~\ref{sec:TGAS} we look at what we can learn about TGAS by comparison with these new parallax estimates. Finally, Sections \ref{sec:Combined}, \ref{sec:ages} and \ref{sec:reverse} demonstrate the improvements made possible by using the TGAS parallaxes as input to the Bayesian scheme. \section{Bayesian estimation}\label{sec:Bayes} Since RAVE DR4, distances to the stars in the RAVE survey have been determined using the Bayesian method developed by \cite{BuJJB10}. This takes as its input the stellar parameters $T_\mathrm{eff}$, $\log g$ and $[\mathrm{M}/\mathrm{H}]$ determined from the RAVE spectra, and $J$, $H$ and $K_{\rm s}$ magnitudes from 2MASS \citep{2MASS}. This method was extended by \cite{JJBea14} to include dust extinction in the modelling, and introduce an improvement in the description of the distance to the stars by providing multi-Gaussian fits to the full probability density function (pdf) in distance modulus.\footnote{While the distance estimates always use 2MASS (and, in this study, AllWISE) photometry, we will refer to them as `RAVE-only' at various points in this paper, to distinguish them from those found using TGAS parallaxes as input too.} In this paper we extend this method, principally by including the parallaxes found by TGAS as input, but also by adding AllWISE W1 and W2 mid-infrared photometry \citep{AllWISE}. We will explore improvements made possible by using IRFM $T_\mathrm{eff}$ values given in RAVE DR5, rather than $T_\mathrm{eff}$ derived from the spectra. 
We expect that the IRFM values can be more precise than those from the RAVE spectra, which only span a narrow range in wavelength (8410--8795\,\AA). Because the original intention of this pipeline was to estimate distances, we often refer to it as the `distance pipeline'. In practice we are now often as interested in its other outputs as we are in the distance estimates. The pipeline applies the simple Bayesian statement \[ \label{eq:bayes} P(\hbox{model}|\hbox{data})=\frac{P(\hbox{data}|\hbox{model})P(\hbox{model})}{P(\hbox{data})}, \] where in our case ``data'' refers to the inputs described above (and shown in Table~\ref{tab:data}) for a single star, and ``model'' comprises a star of specified initial mass ${\cal M}$, age $\tau$, metallicity $[\mathrm{M}/\mathrm{H}]$, and location relative to the Sun (where Galactic coordinates $l$ and $b$ are treated as known and distance $s$ is unknown), observed through a specified line-of-sight extinction, which we parametrise by the extinction in the $V$-band, $A_V$. The likelihood $P(\hbox{data}|\hbox{model})$ is determined assuming uncorrelated Gaussian uncertainties on all inputs, and using isochrones to find the values of the stellar parameters and absolute magnitudes of the model star. The isochrones that we use are from the PARSEC v1.1 set \citep{Brea12}, and the metallicities of the isochrones used are given in Table~\ref{tab:isochrones}. $P(\hbox{model})$ is our prior which we discuss below, and $P(\hbox{data})$ is a normalisation constant which we can ignore. The assumption of uncorrelated Gaussian errors on the stellar parameters is imperfect \citep[see e.g.][]{ScBe14,Scea17}, but it is the best approximation that we have available for RAVE.
Putting this in a more mathematical form and defining the notation for a single Gaussian distribution \begin{equation} G(x,\mu,\sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \end{equation} we have \begin{multline}\label{eq:maths} P({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V|\,\hbox{data}) \propto \; P({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V | l,b) \\ \times \prod_i G(O^T_i ({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V),O_i, \sigma_i) \end{multline} where the prior $ P({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V | l,b)$ is described in Section~\ref{sec:prior}, and the inputs $O_i$, $\sigma_i$ are those given in Table~\ref{tab:data} (the cases where any of these inputs are unavailable or not used can be treated as the limit $\sigma_i\rightarrow\infty$). The theoretical values of these quantities -- $O^T_i ({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V)$ -- are found using the isochrones and the relations between extinctions in different bands given in Section~\ref{sec:prior}. Once we have calculated the probability density functions $P(\hbox{model}|\hbox{data})$ for the stars we can characterise them however we wish. In practice, we characterise them by the expectation values and standard deviations (i.e., estimates and their uncertainties) for all parameters, found by marginalising over all other parameters. For distance we find several characterisations of the pdf: expectation values and standard deviations for the distance itself ($s$), for the distance modulus ($\mu$) and for the parallax $\varpi$. The characterisation in terms of parallax is vital for comparison with TGAS parallaxes. In addition we provide multi-Gaussian fits to the pdfs in distance modulus because a number of the pdfs are multi-modal, typically because it is unclear from the data whether a star is a main sequence star or a (sub-)giant. Therefore a single expectation value and standard deviation is a poor description of the pdf.
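As a deliberately simplified illustration of Eq.~(\ref{eq:maths}), the sketch below sums log-Gaussian likelihood terms over a set of observables and extracts a marginal expectation value and standard deviation on a one-dimensional grid; the single `parallax' observable, the flat grid prior and all numbers are illustrative assumptions, not the pipeline's actual inputs or grids.

```python
import numpy as np

def log_gaussian(x, mu, sigma):
    """log of the normalised Gaussian G(x, mu, sigma)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - 0.5 * np.log(2.0 * np.pi * sigma ** 2)

def log_likelihood(model, data):
    """Sum of log-Gaussian terms as in Eq. (maths).  `model` maps observable
    names to predicted values O^T_i; `data` maps them to (measured value,
    uncertainty) pairs.  An unavailable observable (sigma -> infinity)
    is simply left out of `data`."""
    return sum(log_gaussian(model[k], v, s) for k, (v, s) in data.items())

def posterior_moments(grid, log_post):
    """Expectation value and standard deviation of one model parameter,
    marginalised over a grid of equally weighted model points."""
    w = np.exp(log_post - np.max(log_post))   # guard against underflow
    w /= w.sum()
    mean = float(np.sum(w * grid))
    sd = float(np.sqrt(np.sum(w * (grid - mean) ** 2)))
    return mean, sd
```

For instance, a toy star with measured parallax $0.5\pm0.1$ (in arbitrary units) yields an expected distance somewhat beyond $1/0.5=2$, illustrating why the characterisations in terms of parallax, distance and distance modulus can differ.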
The multi-Gaussian fits to the pdfs in $\mu$ provide a compact representation of the pdf, and { following \cite{JJBea14} we write them as} \[\label{eq:defsfk} P(\mu) = \sum_{k=1}^{N_{\rm Gau}} {f_k} G(\mu,\widehat{\mu_k},\sigma_k), \] where the number of components $N_{\rm Gau}$, the means $\widehat{\mu_k}$, weights $f_k$, and dispersions $\sigma_k$ are determined by the pipeline. To determine whether a distance pdf is well represented by a given multi-Gaussian representation in $\mu$ we take bins in distance modulus of width $w_i = 0.2\,\mathrm{mag}$, which contain a fraction $p_i$ of the total probability taken from the computed pdf and a fraction $P_i$ from the Gaussian representation, and compute the goodness-of-fit statistic \[\label{eq:defsF} F = \sum_i \left(\frac{p_i}{w_i}-\frac{P_i}{w_i}\right)^2\tilde{\sigma} w_i \] where the weighted dispersion \[ \tilde{\sigma}^2 \equiv \sum_{k=1,N_{\rm Gau}} f_k \sigma_k^2 \] is a measure of the overall width of the pdf. Our strategy is to represent the pdf with as few Gaussian components as possible, but if the value of $F$ is greater than a threshold value ($F_t=0.04$), or the standard deviation associated with the model differs by more than 20 percent from that of the complete pdf, then we conclude that the representation is not adequate, and add another Gaussian component to the representation (to a maximum of 3 components, which we have found is almost always enough). { We fit the multi-Gaussian representation to the histogram using the Levenberg-Marquardt algorithm \citep[e.g.][]{NumRec}, which we apply multiple times with different starting points estimated from the modes of the distribution. In this way we can take the best result and therefore avoid getting caught in local minima.
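The statistic $F$ of Eq.~(\ref{eq:defsF}) is straightforward to compute. The sketch below evaluates it for a histogrammed pdf and a candidate multi-Gaussian representation; the midpoint approximation used for the per-bin model fractions $P_i$ is an illustrative simplification of our own, not necessarily what the pipeline does.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normalised Gaussian G(x, mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / np.sqrt(2.0 * np.pi * sigma ** 2)

def fit_quality_F(mu_edges, p_bins, components):
    """Goodness-of-fit F of Eq. (defsF).

    mu_edges   : bin edges in distance modulus (w_i = 0.2 mag in the text)
    p_bins     : fraction p_i of the full pdf's probability in each bin
    components : list of (f_k, mu_k, sigma_k) tuples of the representation
    """
    w = np.diff(mu_edges)
    centres = 0.5 * (mu_edges[:-1] + mu_edges[1:])
    # fraction P_i of the model probability per bin (midpoint approximation)
    P_bins = np.zeros_like(w)
    for f_k, mu_k, s_k in components:
        P_bins += f_k * gaussian(centres, mu_k, s_k) * w
    # weighted dispersion sigma-tilde: the overall width of the representation
    sigma_tilde = np.sqrt(sum(f_k * s_k ** 2 for f_k, _, s_k in components))
    return float(np.sum((p_bins / w - P_bins / w) ** 2 * sigma_tilde * w))
```

A representation that matches the pdf gives $F$ well below the threshold $F_t=0.04$, while a misplaced component inflates it, triggering the addition of another Gaussian.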
The relatively broad bins mean that we only use more than one Gaussian component if the pdf is significantly multi-modal, though this comes at the cost of reducing the accuracy of the fit when a peak is narrow.} These multi-Gaussian fits were particularly important in previous RAVE data releases. In DR5 we found that a single Gaussian component proved adequate for only 45 percent of the stars, while around 51 percent are fit with two Gaussians, and only 4 percent require a third component. In Section~\ref{sec:Combined} we show that the addition of TGAS parallaxes substantially reduces the number of stars for which more than one Gaussian is required. The value of $F$ is provided in the database as FitQuality\_Gauss, and we also include a flag (denoted Fit\_Flag\_Gauss) which is non-zero if the standard deviation of the final fitted model differs by more than 20 percent from that of the computed pdf. Typically the problems flagged are rather minor \citep[as shown in fig.~3 of][]{JJBea14}. \begin{table*} \begin{center} \caption{Data used to derive the distances to our stars, and their source.\label{tab:data}} \begin{tabular}{ccc} \hline Data & Symbol & Notes \\ \hline Effective temperature & $T_{\rm eff}$ & RAVE DR5 -- either from spectrum (DR5) or IRFM \\ Surface gravity & $\log{g}$ & RAVE DR5 \\ Metallicity & $[\mathrm{M}/\mathrm{H}]$ & RAVE DR5 \\ $J$-band magnitude & $J$ & 2MASS \\ $H$-band magnitude & $H$ & 2MASS \\ $K_s$-band magnitude & $K_s$ & 2MASS \\ $W_1$-band magnitude & $W_1$ & AllWISE -- not used for DR5 distances \\ $W_2$-band magnitude & $W_2$ & AllWISE -- not used for DR5 distances\\ Parallax & $\varpi_{\rm TGAS}$ & {\it Gaia}\ DR1 -- not used for DR5 distances or in comparisons \\ \hline \end{tabular} \end{center} \end{table*} The uncertainties of the RAVE stellar parameters are assumed to be the quadratic sum of the quoted internal uncertainties and the external uncertainties (Table 4 of DR5). 
The external uncertainties are those calculated from stars with SNR$>40$, except in the case of the IRFM temperatures, for which a single uncertainty serves for stars of every SNR since the IRFM temperatures are not extracted from the spectra. We discard all observations with a signal-to-noise ratio less than 10, or where the RAVE spectral pipeline returns a quality flag (AlgoConv) of `1', because the quoted parameters for these observations are regarded as unreliable. For the 2MASS and AllWISE photometry we use the quoted uncertainties. We discard the AllWISE magnitudes if they are brighter than the expected saturation limit in each band, which we take to be $W_{1,{\rm sat}}=8.1\,\mathrm{mag}$, $W_{2,{\rm sat}}=6.7\,\mathrm{mag}$ \citep[following][]{WISETech}. When using the TGAS parallaxes, we consider only the quoted statistical uncertainties. We will show that these appear to be, if anything, slight overestimates of the uncertainty. { The posterior pdf (eq.~\ref{eq:maths}) is calculated on a grid of isochrones at the metallicities given in Table~\ref{tab:isochrones} and ages spaced by $\delta\log_{10}(\tau/{\mathrm yr})=0.04$ for $\tau<1 \,\mathrm{Gyr}$ and $\delta\log_{10}(\tau/{\mathrm yr})=0.01$ for $\tau>1 \,\mathrm{Gyr}$. For each of these isochrones we take grid points in initial mass ${\cal M}$ such that there is no band in which any magnitude changes by more than 0.005 mag. We then evaluate the posterior on an informed grid in $\log A_V$ and distance, which is centred on the expected $\log A_V$ from the prior at an estimated distance (given the observed and model $J$-band magnitude) and then the estimated distance (given each $\log A_V$ value evaluated). } Where stars have been observed more than once by RAVE, we provide distance estimates for the quoted values from each spectrum. We provide a flag `flag\_dup' which is 0 if the spectrum is the best (or only) one for a given star, as measured by the signal-to-noise ratio, and 1 otherwise.
Where one wishes to avoid double counting stars one should only use rows where this flag is 0.\footnote{We have based this on the RAVEID number for each source. It is worth noting that the cross-matching of stars is not perfect, and so despite our best attempts to clean duplicate entries, there may be a few percent of stars that are in fact listed twice.} \subsection{Standard prior} \label{sec:prior} For our standard results, we use the prior that was used for DR4 and DR5. We do this for consistency, and because we find that this provides good results. The prior reflects some elements of our existing understanding of the Galaxy, at the cost of possibly biasing us against some results that run counter to our expectations (for example, metal rich or young stars far from the plane). In Section~\ref{sec:altprior} we consider alternative priors. Although the prior is described in \cite{JJBea14}, we describe it here for completeness, and to enable comparisons with alternative priors considered. The prior considers all properties in our model, and can be written as \[ \label{eq:priorAll} \begin{split} P(\hbox{model}) & = P({\cal M},\tau,[\mathrm{M}/\mathrm{H}],s,A_V | l,b) \\ & = P({\cal M}) \times P(A_V |\, s, l ,b) \times P(s,[\mathrm{M}/\mathrm{H}],\tau |\, l,b) \end{split} \] with the prior on initial mass being a \cite{Kr01} initial mass function (IMF), as modified by \cite{AuJJB09} \begin{equation} \label{eq:priorMass} P({\cal M}) \propto \begin{cases} 0 & {\rm if}\; {\cal M}<0.1\,M_\odot \\ {\;\cal M}^{-1.3} & {\rm if}\; 0.1\,M_\odot \le {\cal M}<0.5\,M_\odot, \\ \;0.536 \, {\cal M}^{-2.2} & {\rm if}\; 0.5\,M_\odot \le {\cal M}<1\, M_\odot,\\ \;0.536 \, {\cal M}^{-2.519} & {\rm otherwise}. 
\\ \end{cases} \end{equation} We describe extinction in terms of the value $A_V$ for the Johnson $V$ band, and, since extinction is necessarily non-negative, we take our prior to be Gaussian in $\ln A_V$ around an expected value which varies with the model star's position in the Galaxy, $\ln A_V^{\rm pr}(s,l,b)$. To find the expected value $A_V^{\rm pr}(s,l,b)$ we start from an expected value at infinity, $A_V^{\rm pr}(\infty,l,b)$, which we take from the \cite*{ScFiDa98} values of $E(B-V)$, with a correction for high extinction sightlines following \cite{ArGo99} and \cite{Shea11}, leaving us with \[ \label{eq:priorExtinction} \begin{split} A_V^{\rm pr}(\infty,l,b) = & \;3.1 \times E(B-V)_{\rm SFD}\; \times \\ & \left\{0.6+0.2\left[1-\tanh\left(\frac{E(B-V)_{\rm SFD}-0.15}{0.3}\right)\right]\right\}. \end{split} \] We then determine the expected extinction at a given distance $s$ in the direction $l,b$, which is some fraction of the total extinction along that line of sight. We take this to be the fraction of the total extinguishing material along that line of sight that lies closer than $s$ in a 3D dust model of the Milky Way taken from \cite{Shea11}. For details of the model see \cite{JJBea14}. As in \cite{JJBea14} we take the uncertainty in $\ln A_V$ to be $\sqrt{2}$. We can then write the prior on $A_V$ as \begin{equation} P(A_V | s,l,b) = G(\ln A_V,\; \ln(A_V^{\rm pr}(s,l,b)),\;\sqrt{2}). \end{equation} Extinction varies between different photometric bands. For a given extinction value $A_V$, from \cite{RiLe85} we take the extinctions to be \begin{equation} \begin{aligned} A_J&=0.282A_V\\ A_H&=0.175A_V\\ A_{K_s} &= 0.112A_V, \end{aligned} \end{equation} and, following from this, and using the results of \cite*{YuLiXi13}, we have extinction in the WISE photometric bands of \begin{equation} \begin{aligned} A_{W1}&=0.0695A_V \\ A_{W2}&=0.0549A_V.
\end{aligned} \end{equation} The other term in the prior is related to the probability of there being a star of a given $\tau$, $[\mathrm{M}/\mathrm{H}]$ and position. It also contains a factor of $s^2$, to reflect the conical shape of the surveyed volume.\footnote{This factor was stated by \cite{BuJJB10}, but not directly noted by either \cite{Buea11} or \cite{JJBea14}, who simply stated the density profile associated with the prior on position. This oversight meant that \cite{Saea16} noted the absence of this factor as a difference between the \cite{JJBea14} values and their own, closely related, results. The factor of $s^2$ was, however, used in all of these studies.} The prior on distance, $[\mathrm{M}/\mathrm{H}]$ and age can then be written as: \begin{equation}\label{eq:priorofx} P(s,[\mathrm{M}/\mathrm{H}],\tau |\, l,b) \propto s^2 \sum_{i=1}^3 N_i P_i([\mathrm{M}/\mathrm{H}]) \, P_i(\tau) \, P_i(\mathbf{r}), \end{equation} where $i=1,2,3$ correspond to a thin disc, thick disc and stellar halo, respectively and where $\mathbf{r}$ is the Galactocentric position of the star. 
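The extinction part of the prior described above lends itself to a compact numerical sketch. The coefficients below are those quoted in the text; the function names and the implementation itself are our own illustrative choices, not the pipeline's code.

```python
import numpy as np

# extinction coefficients relative to A_V, as quoted in the text
# (Rieke & Lebofsky 1985; Yuan, Liu & Xiang 2013)
EXT_COEFF = {"J": 0.282, "H": 0.175, "Ks": 0.112, "W1": 0.0695, "W2": 0.0549}

def A_V_at_infinity(ebv_sfd):
    """Expected total extinction along a sightline from the SFD E(B-V),
    with the high-extinction correction of Eq. (priorExtinction)."""
    corr = 0.6 + 0.2 * (1.0 - np.tanh((ebv_sfd - 0.15) / 0.3))
    return 3.1 * ebv_sfd * corr

def ln_prior_extinction(A_V, A_V_expected, sigma_lnA=np.sqrt(2.0)):
    """log of the Gaussian-in-ln(A_V) prior P(A_V | s, l, b)."""
    u = (np.log(A_V) - np.log(A_V_expected)) / sigma_lnA
    return -0.5 * u ** 2 - 0.5 * np.log(2.0 * np.pi * sigma_lnA ** 2)

def band_extinctions(A_V):
    """Extinction in each photometric band for a given A_V."""
    return {band: c * A_V for band, c in EXT_COEFF.items()}
```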
We then have \paragraph*{Thin disc ($i=1$):} \begin{eqnarray} \label{eq:thindisc} P_1([\mathrm{M}/\mathrm{H}]) &=& G([\mathrm{M}/\mathrm{H}], 0, 0.2), \nonumber \\ P_1(\tau) &\propto& \exp(0.119 \,\tau/\mbox{Gyr}) \quad \mbox{for $\tau \le 10$\,Gyr,} \\ P_1(\mathbf{r}) &\propto& \exp\left(-\frac{R}{R_d^{\rm{thin}}} - \frac{|z|}{z_d^{\rm{thin}}} \right); \nonumber \end{eqnarray} \paragraph*{Thick disc ($i=2$):} \begin{eqnarray}\label{eq:thickdisc} P_2([\mathrm{M}/\mathrm{H}]) &=& G([\mathrm{M}/\mathrm{H}], -0.6, 0.5), \nonumber \\ P_2(\tau) &\propto& \mbox{uniform in range $8 \le \tau \le 12$\,Gyr,} \\ P_2(\mathbf{r}) &\propto& \exp\left(-\frac{R}{R_d^{\rm{thick}}} - \frac{|z|}{z_d^{\rm{thick}}} \right); \nonumber \end{eqnarray} \paragraph*{Halo ($i=3$):} \begin{eqnarray} \label{eq:halo} P_3([\mathrm{M}/\mathrm{H}]) &=& G([\mathrm{M}/\mathrm{H}], -1.6, 0.5), \nonumber \\ P_3(\tau) &\propto& \mbox{uniform in range $10 \le \tau \le 13.7$\,Gyr,} \\ P_3(\mathbf{r}) &\propto& r^{-3.39}; \nonumber \end{eqnarray} where $R$ signifies Galactocentric cylindrical radius, $z$ cylindrical height and $r$ spherical radius. We take $R_d^{\rm thin}=2\,600\,\mathrm{pc}$, $z_d^{\rm thin}=300\,\mathrm{pc}$, $R_d^{\rm thick}=3\,600\,\mathrm{pc}$, $z_d^{\rm thick}=900\,\mathrm{pc}$. These values are taken from the analysis of SDSS data in \cite{Juea08}. The metallicity and age distributions for the thin disc come from \mbox{\cite{Ha01}} and \cite{AuJJB09}, while the radial density of the halo comes from the `inner halo' detected in \cite{Caea10}. The metallicity and age distributions of the thick disc and halo are influenced by \cite{Re10} and \cite{Caea10}.
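A minimal numerical sketch of the three-component spatial prior (Eqs.~\ref{eq:thindisc}--\ref{eq:halo}), using the scale parameters, solar position, central halo cut-off and solar-neighbourhood normalisation ratios quoted in the text; the function names and implementation details are our own illustrative choices.

```python
import numpy as np

# scale parameters (pc) and local normalisation ratios quoted in the text
R_D_THIN, Z_D_THIN = 2600.0, 300.0
R_D_THICK, Z_D_THICK = 3600.0, 900.0
N2_OVER_N1, N3_OVER_N1 = 0.15, 0.005
R0, Z0 = 8330.0, 15.0          # solar position (pc)
R_HALO_MIN = 1000.0            # cut-off of the halo's central divergence

def density_thin(R, z):
    return np.exp(-R / R_D_THIN - np.abs(z) / Z_D_THIN)

def density_thick(R, z):
    return np.exp(-R / R_D_THICK - np.abs(z) / Z_D_THICK)

def density_halo(R, z):
    r = np.maximum(np.hypot(R, z), R_HALO_MIN)
    return r ** -3.39

def prior_density(R, z):
    """Sum_i N_i P_i(r), with the N_i fixed so that the solar-neighbourhood
    number-density ratios are n2/n1 = 0.15 and n3/n1 = 0.005."""
    n2_norm = N2_OVER_N1 * density_thin(R0, Z0) / density_thick(R0, Z0)
    n3_norm = N3_OVER_N1 * density_thin(R0, Z0) / density_halo(R0, Z0)
    return (density_thin(R, z)
            + n2_norm * density_thick(R, z)
            + n3_norm * density_halo(R, z))
```

Evaluated far from the Galactic plane, this prior is dominated by the thick disc and halo terms, which is what biases metal-rich, young model stars away from high $|z|$.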
{ The halo component tends towards infinite density as $r\rightarrow 0$, so we apply an arbitrary cut-off for $r<1\,\mathrm{kpc}$ -- a region which the RAVE sample does not, in any case, probe.} The normalizations $N_i$ were then adjusted so that at the Solar position, taken as $R_0=$~8.33\,kpc (\citealt{Giea09}), $z_0=$~15\,pc \citep*{JJBGeSp97}, we have number density ratios $n_2 /n_1 = 0.15$ \citep{Juea08}, $n_3 /n_1 = 0.005$ \citep{Caea10}. \begin{table} \begin{center} \caption{Metallicities of isochrones used, taking $Z_\odot = 0.0152$ and applying scaled solar composition, with $Y=0.2485+1.78Z$. Note that the minimum metallicity is $[\mathrm{M}/\mathrm{H}] =-2.2$, significantly lower than for the \protect\cite{JJBea14} distance estimates where the minimum metallicity used was $-0.9$, which caused a distance underestimation for the more metal poor stars \protect\citep{Anea15}. \label{tab:isochrones}} \begin{tabular}{rrr} \hline $Z$ & $Y$ & $[\mathrm{M}/\mathrm{H}]$ \\ \hline 0.00010 & 0.249 & -2.207 \\ 0.00020 & 0.249 & -1.906 \\ 0.00040 & 0.249 & -1.604 \\ 0.00071 & 0.250 & -1.355 \\ 0.00112 & 0.250 & -1.156 \\ 0.00200 & 0.252 & -0.903 \\ 0.00320 & 0.254 & -0.697 \\ 0.00400 & 0.256 & -0.598 \\ 0.00562 & 0.259 & -0.448 \\ 0.00800 & 0.263 & -0.291 \\ 0.01000 & 0.266 & -0.191 \\ 0.01120 & 0.268 & -0.139 \\ 0.01300 & 0.272 & -0.072 \\ 0.01600 & 0.277 & 0.024 \\ 0.02000 & 0.284 & 0.127 \\ 0.02500 & 0.293 & 0.233 \\ 0.03550 & 0.312 & 0.404 \\ 0.04000 & 0.320 & 0.465 \\ 0.04470 & 0.328 & 0.522 \\ 0.05000 & 0.338 & 0.581 \\ 0.06000 & 0.355 & 0.680 \\ \hline \end{tabular} \end{center} \end{table} \section{Comparison of DR5 and TGAS parallaxes} \label{sec:DR5} For RAVE DR5 the distance estimation used the 2MASS $J$, $H$, $K_s$ values, and the $T_\mathrm{eff}$, $\log g$ and $[\mathrm{M}/\mathrm{H}]$ values calculated from RAVE spectra. 
The parallaxes computed were compared with the parallaxes obtained by the {\it Hipparcos} mission \citep{HipparcosCatalogue}, specifically those found by the new reduction of \cite{vL07} for the $\sim$$5000$ stars common to both catalogues. The parallaxes were compared by looking at the statistic \begin{equation} \label{eq:Delta} \Delta = \frac{\left\langle \varpi_{\rm sp} \right\rangle - \varpi_{\rm ref}} { \sqrt{\sigma_{\varpi,{\rm sp}}^2+\sigma_{\varpi,{\rm ref}}^2} }, \end{equation} where $\varpi_{\rm sp}$ and $\sigma_{\varpi,{\rm sp}}$ are the spectrophotometric parallax estimates and their uncertainties. In \cite{RAVEDR5} the reference parallax $\varpi_{\rm ref}$ and its uncertainty $\sigma_{\varpi,{\rm ref}}$ were from {\it Hipparcos}, but henceforth in this paper they will be from TGAS. A negative value of $\Delta$, therefore, corresponds to an overestimate of distance from RAVE (compared to the reference parallaxes), and a positive value corresponds to an underestimate of distance. We would hope that the mean value of $\Delta$ is zero and the standard deviation is unity (consistent with the uncertainties being correctly estimated). \begin{figure*} \centerline{ \resizebox{0.33\hsize}{!}{\includegraphics{plots/DR5TGASGiants.eps}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/DR5TGASCoolDwarfs.eps}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/DR5TGASHotDwarfs.eps}}} \caption{ Comparison of parallax estimates from RAVE DR5 and those from TGAS. We divide the stars into giants ($\log g<3.5$), cool dwarfs ($\log g\ge3.5$ and $T_\mathrm{eff}\le5500$$\,\mathrm{K}$) and hot dwarfs ($\log g\ge3.5$ and $T_\mathrm{eff}>5500$$\,\mathrm{K}$) and provide pdfs of $\Delta$ (i.e. difference between spectrophotometric parallax and TGAS parallax, normalised by the combined uncertainty, see Eq.~\ref{eq:Delta}) in each case.
The \emph{red} lines show the kernel density estimate of this pdf in each case, with the finely-binned grey histogram shown to give an indication of the variation around this smooth estimate. The \emph{black dashed line} is a Gaussian with mean 0 and standard deviation of unity. The means and standard deviations shown in the top right are for stars with $-4<\Delta_{\rm DR5}<4$, to avoid high weight being given to outliers. Positive values of $\Delta$ correspond to parallax overestimates (i.e. distance or luminosity underestimates). \label{fig:DR5} } \end{figure*} \begin{figure*} \centerline{ \resizebox{0.5\hsize}{!}{\includegraphics{plots/DR5fTeff.eps}} \resizebox{0.5\hsize}{!}{\includegraphics{plots/DR5flogg.eps}}} \caption{ Running average of $\Delta$ (i.e. difference between spectrophotometric parallax and TGAS parallax, normalised by the combined uncertainty, see Eq.~\ref{eq:Delta}) as a function of $T_\mathrm{eff}$ for dwarfs (left lower) and $\log g$ for giants (right lower), comparing DR5 values to those from TGAS. The running averages are computed for widths of $200$$\,\mathrm{K}$ and 0.3 dex respectively. The plot also shows the number density as a function of $T_\mathrm{eff}$ and $\log g$ respectively for reference. Means are only calculated for stars with $-4<\Delta_{\rm DR5-TGAS}<4$. Note that positive values of $\Delta$ correspond to parallax overestimates (i.e. distance or luminosity underestimates). \label{fig:DR5fTg} } \end{figure*} Here, as in \cite{RAVEDR5}, we divide the stars into dwarfs ($\log g\ge3.5$) and giants ($\log g < 3.5$), and further subdivide dwarfs into hot ($T_\mathrm{eff} >5500$$\,\mathrm{K}$) and cool ($T_\mathrm{eff} \le 5500$$\,\mathrm{K}$). It is worth noting that this means that main-sequence turn-off stars are likely to be put in the `dwarf' category. In Figure~\ref{fig:DR5} we show a comparison between the DR5 parallaxes and the TGAS parallaxes described by this statistic (which we call $\Delta_{\rm DR5-TGAS}$ in this case).
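The normalised parallax difference $\Delta$, and the clipped means and dispersions quoted in the figures, are straightforward to compute. A minimal sketch (the function names are ours, not from any released code):

```python
import numpy as np

def parallax_discrepancy(varpi_sp, sigma_sp, varpi_ref, sigma_ref):
    """Delta = (varpi_sp - varpi_ref) / sqrt(sigma_sp^2 + sigma_ref^2).

    Positive Delta: the spectrophotometric parallax exceeds the reference
    one, i.e. the spectrophotometric distance is underestimated.
    """
    varpi_sp, varpi_ref = np.asarray(varpi_sp), np.asarray(varpi_ref)
    sigma_sp, sigma_ref = np.asarray(sigma_sp), np.asarray(sigma_ref)
    return (varpi_sp - varpi_ref) / np.hypot(sigma_sp, sigma_ref)

def clipped_stats(delta, clip=4.0):
    """Mean and standard deviation of Delta restricted to |Delta| < clip,
    as used for the numbers quoted in the figure panels."""
    d = np.asarray(delta, dtype=float)
    d = d[np.abs(d) < clip]
    return d.mean(), d.std()
```

If both sets of uncertainties are realistic and unbiased, the clipped mean should be near zero and the clipped dispersion near unity.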
The figures show kernel density estimates \citep[KDEs:][]{KDE}, which provide an estimate of the pdf of $\Delta_{\rm DR5}$ for each group, along with finely binned histograms (which are used to give a sense of the variation around the smooth KDE). These are generally encouraging for both cool dwarfs and giants, with a mean value that is close to zero (meaning that any parallax, and therefore distance, bias is a small fraction of the uncertainty), and a dispersion that is slightly smaller than unity (implying that the uncertainties of one or both measurements are overestimated). For hot dwarfs there is a clear difference between the DR5 parallaxes and the TGAS parallaxes. The mean value of $\Delta$ is $0.301$, meaning that the systematic error in parallax is a significant fraction of the uncertainty, with the DR5 parallaxes being systematically larger than the TGAS parallaxes (corresponding to systematically smaller distance estimates from DR5). The typical combined quoted uncertainty on the parallaxes for hot dwarfs is $\sim1\,\mathrm{mas}$, so this systematic difference is $\sim0.3\,\mathrm{mas}$, which is comparable to the size of the colour-dependent and spatially correlated uncertainties identified by \cite{GaiaDR1:TGAS}. It was therefore not immediately obvious whether the difference seen here is due to a systematic error with the DR5 parallaxes, or with the TGAS parallaxes. However, we have indications from \cite{RAVEDR5} that the effective temperatures found by the RAVE pipeline tend to be underestimates for $T_\mathrm{eff}\gtrsim5300$$\,\mathrm{K}$. The effective temperatures determined using the IRFM are systematically \emph{higher} than those found from the RAVE pipeline \citep[fig. 26,][]{RAVEDR5}. If the effective temperature used in the distance estimation is systematically lower than the true value, then this will cause us to systematically underestimate the luminosity of the star, and thus underestimate its distance (overestimate its parallax).
Therefore a systematic underestimate of $T_\mathrm{eff}$ by the RAVE pipeline can explain the difference with the IRFM $T_\mathrm{eff}$ values \emph{and} the systematic difference with the TGAS parallaxes. This motivates us to investigate the IRFM temperatures in Section~\ref{sec:improved} for an improved estimate of $T_\mathrm{eff}$, and thus more accurate distance estimates. We can investigate this more closely by looking at how an average value of $\Delta_{\rm DR5}$ (which we write as $\langle\Delta_{\rm DR5}\rangle$) varies with $T_\mathrm{eff}$ for dwarfs or with $\log g$ for giants. In Figure~\ref{fig:DR5fTg} we show the running average of this quantity in windows of width $200$$\,\mathrm{K}$ in $T_\mathrm{eff}$ for dwarfs and 0.3 dex in $\log g$ for giants. For reference we also include the number density as a function of these parameters in each case. The left panel of Figure~\ref{fig:DR5fTg} shows the value of $\langle\Delta_{\rm DR5-TGAS}\rangle(T_\mathrm{eff})$ for dwarfs. As we expect, we see that for $T_\mathrm{eff}\gtrsim5500$$\,\mathrm{K}$ we have a parallax offset of $\sim$0.3 times the combined uncertainty, which has a small dip around $7400$$\,\mathrm{K}$\footnote{The sharp edges are due to the fact that a relatively large number of sources are assigned temperatures very near to $7410$$\,\mathrm{K}$, due to the pixelisation produced by the fitting algorithm -- see \cite{Koea11}.} The vast majority of what we termed `cool dwarfs' are in the temperature range $4600\lesssim T_\mathrm{eff}<5500$$\,\mathrm{K}$, where TGAS and RAVE clearly agree nicely. Below $\sim4600$$\,\mathrm{K}$ the value of $\langle\Delta\rangle(T_\mathrm{eff})$ goes to very large values, corresponding to a substantial underestimate of distance by RAVE DR5. This was not clearly seen in Figure~\ref{fig:DR5} because there are very few dwarfs in this temperature range.
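A sliding-window mean of this kind can be sketched as follows (a naive $O(N^2)$ illustration with the same outlier clipping as the figures, not the code actually used to make them):

```python
import numpy as np

def running_mean(x, delta, width, clip=4.0):
    """Mean of delta in a sliding window of the given width in x
    (e.g. 200 K in Teff for dwarfs, 0.3 dex in log g for giants),
    evaluated at each star's x; stars with |delta| >= clip are dropped."""
    x = np.asarray(x, dtype=float)
    delta = np.asarray(delta, dtype=float)
    keep = np.abs(delta) < clip
    x, delta = x[keep], delta[keep]
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        in_window = np.abs(x - xi) <= width / 2.0  # always includes xi itself
        out[i] = delta[in_window].mean()
    return out
```

For large catalogues one would sort in `x` and use a two-pointer or binned scheme instead, but the windowed-mean definition is the same.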
It is not clear what causes this, though it could occur if 1) there is a tendency to underestimate the $T_\mathrm{eff}$ for these stars, which has not been noted before; 2) stars with quoted $\log g$ values between the dwarf and giant branches have been given too high a probability of being dwarfs by the pipeline; and/or 3) the pipeline assigns too low a luminosity to stars near this part of the main sequence -- possibly because many of them are still young and perhaps still settling onto the main sequence \citep[see][]{Zeea17}. The right panel of Figure~\ref{fig:DR5fTg} shows the value of $\langle\Delta_{\rm DR5}\rangle(\log g)$ for giants. In the range $2.2\lesssim\log g\lesssim3.0$ (which is a region with a high number of stars) we can see that the DR5 parallaxes more-or-less agree with those from TGAS. However, at high $\log g$ RAVE parallaxes are on average larger than those from TGAS (corresponding to an underestimate of the luminosity), whereas at low $\log g$ RAVE parallaxes are on average smaller than those from TGAS (i.e. the luminosity is overestimated). We will discuss this difference in Section~\ref{sec:Giants}. It is worth emphasising that the effects we see here for low $T_\mathrm{eff}$ or low $\log g$ are not ones that we would simply expect to be caused by the statistical uncertainties in the RAVE parameters (e.g., the stars with the lowest quoted $\log g$ values being only the ones scattered there by measurement error). The Bayesian framework compensates for exactly this effect, so the problem we are seeing is real.
\section{Using other RAVE data products for distance estimation} \label{sec:improved} \begin{figure*} \centerline{ \resizebox{0.33\hsize}{!}{\includegraphics{plots/IRFMTGASGiants.eps}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/IRFMTGASCoolDwarfs.eps}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/IRFMTGASHotDwarfs.eps}}} \caption{ Comparison of parallax estimates from RAVE with temperatures taken from the IRFM and parallax measurements from TGAS. This plot shows the same statistics as in Figure~\ref{fig:DR5}, and again we divide the stars into giants ($\log g<3.5$), cool dwarfs ($\log g\ge3.5$ and $T_\mathrm{eff,IRFM}\le5500$$\,\mathrm{K}$) and hot dwarfs ($\log g\ge3.5$ and $T_\mathrm{eff,IRFM}>5500$$\,\mathrm{K}$) and provide pdfs of $\Delta$ (Eq.~\ref{eq:Delta}) in each case -- positive values of $\Delta$ correspond to parallax overestimates (i.e. distance or luminosity underestimates). The main difference we can see is that the parallax estimates for hot dwarfs are substantially improved. \label{fig:IRFM} } \end{figure*} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/NewComp_Teff.eps}}} \caption{ As Figure~\ref{fig:DR5fTg} (left panel), this is a running average of $\Delta$ as a function of $T_\mathrm{eff}$ for dwarfs ($\log g\ge3.5$), but here we are using $T_\mathrm{eff}$ values determined by the IRFM (blue) or from the RAVE spectra (green). Again we also show the number density of dwarfs as a function of $T_\mathrm{eff}$ for reference. Use of the IRFM temperatures reduces the bias seen for hot dwarfs. \label{fig:NewfT} } \end{figure} We now look at how the difference between parallaxes derived from RAVE and those from TGAS compares if we use $T_\mathrm{eff}$ values derived from the IRFM, rather than those derived from the spectrum directly. We also include WISE photometry in the W1 and W2 bands in both cases (as discussed in Section~\ref{sec:Bayes}).
Figure~\ref{fig:IRFM} again shows the difference between the parallaxes we derive and those found by TGAS, divided into the same three categories. We can see that the disagreement for hot dwarfs is significantly reduced from that found for DR5, with a systematic offset that is half that seen when using the spectroscopic $T_\mathrm{eff}$ values. However, we can also see that the agreement between the two values is now slightly less good than before for cool dwarfs and for giants. We can explore this in more detail by, again, looking at how the average value of $\Delta$ varies as we look at different $T_\mathrm{eff}$ for all dwarfs. In Figure~\ref{fig:NewfT} we show how a running average, $\langle\Delta\rangle(T_\mathrm{eff})$, varies for dwarfs when we use the IRFM or the spectroscopic $T_\mathrm{eff}$ values.\footnote{Note that the $\langle\Delta\rangle$ values using the spectroscopic $T_\mathrm{eff}$ values are now not those given in DR5, but new ones, found when we include the WISE photometry. These prove to be very similar to those found by DR5.} It is clear that whatever we choose as a $T_\mathrm{eff}$ value, our parallax estimates differ dramatically from those from TGAS for dwarfs with $T_\mathrm{eff}\lesssim4600$$\,\mathrm{K}$, but there are very few dwarfs with these temperatures. For $4600$$\,\mathrm{K}$$\lesssim T_\mathrm{eff}\lesssim5500$$\,\mathrm{K}$ the values found using the spectroscopically determined $T_\mathrm{eff}$ values are better than those found using the IRFM values, while for $T_\mathrm{eff}\gtrsim5500$$\,\mathrm{K}$ the IRFM values are better. Even using the IRFM temperatures, the parallaxes found at $T_\mathrm{eff}\sim6400$$\,\mathrm{K}$ are still somewhat larger than those found by TGAS.
\subsection{Giants} \label{sec:Giants} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/NewCompNoAS_Logg.eps}}} \caption{ As Figure~\ref{fig:DR5fTg} (right panel), this is a running average of $\Delta$ as a function of $\log g$ for giants ($\log g<3.5$), but here we are using $T_\mathrm{eff}$ values determined by the IRFM (blue) or from the RAVE spectra (green). Again, the plot also shows the number density as a function of $\log g$ for reference. Means are calculated for stars with $-4<\Delta<4$. \label{fig:Newfg} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/RespectiveUncertsLogg.pdf}}} \caption{ Median parallax (solid line) and median parallax uncertainty (shaded region) for the RAVE pipeline using IRFM $T_\mathrm{eff}$ values (blue) and TGAS (red) as a function of $\log g$. The quoted parallax uncertainty from RAVE becomes much smaller than that from TGAS as $\log g$ becomes small. This means that when we use the TGAS parallaxes to improve the distance estimates, they will have little influence at the low $\log g$ end. \label{fig:Respective} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/NewComp_logg_NoTGASUncert.eps}}} \caption{ Running average of $(\varpi_{\rm IRFM}-\varpi_{\rm TGAS})/\sigma_{\varpi, \rm IRFM}$ as a function of $\log g$ for giants ($\log g<3.5$) -- this statistic is similar to $\Delta$ used elsewhere, but does not include the TGAS uncertainty. It therefore shows the typical systematic offset of the RAVE parallax estimates as a function of the quoted uncertainty. For the lowest $\log g$, the two values are comparable. \label{fig:offset} } \end{figure} We can now turn our attention to the giant stars.
When we simply divide the stars into dwarfs and giants -- as was done with {\it Hipparcos}\ parallaxes by \cite{JJBea14} and \cite{RAVEDR5}, and with TGAS parallaxes in Figures \ref{fig:DR5} and \ref{fig:IRFM} of this study -- any biases appear small. However, when we study the trend with $\log g$, as in Figures \ref{fig:DR5fTg} and \ref{fig:Newfg}, we see that while the stars with $\log g\gtrsim2.2$ have RAVE parallaxes that are very similar to those from TGAS (with a moderate overestimate for $\log g>3$), the stars with lower $\log g$ values have RAVE parallaxes which seem to be systematically underestimated (corresponding to distance overestimates). We can understand how this may have come about if we look at the comparison of the RAVE $\log g$ values with those found by GALAH \citep{GALAH17} or APOGEE \citep{APOGEE} for the same stars -- as presented in figs. 17 \& 19 of \cite{RAVEDR5}. In both cases there appears to be a trend that the other surveys find larger $\log g$ values for stars assigned RAVE $\log g \lesssim 2$. A systematic underestimate of the $\log g$ values of these stars would lead to exactly this effect. In Section~\ref{sec:ASCal} we will look at the asteroseismic re-calibration of RAVE $\log g$ found by \cite{Vaea17}, which also suggests that these $\log g$ values may be underestimated. It is important to note that these low $\log g$ stars are intrinsically luminous, and therefore those observed by RAVE tend to be distant. This means they have relatively small parallaxes, and so the quoted TGAS uncertainties are a large fraction of the true parallax, while those from RAVE are relatively small. Figure~\ref{fig:Respective} illustrates this point by showing the median parallax and uncertainty for each method as a function of $\log g$.
{ A consequence of this is that the combined parallax uncertainty used to calculate $\Delta$ is dominated by that from TGAS.} { We illustrate this in Figure~\ref{fig:offset}, which shows the median value of the alternative statistic $(\varpi_{\rm IRFM}-\varpi_{\rm TGAS})/\sigma_{\varpi, \rm IRFM}$, where $\varpi_{\rm IRFM}$ is the parallax estimate using the IRFM $T_\mathrm{eff}$ value, and $\sigma_{\varpi, \rm IRFM}$ is the corresponding uncertainty.\footnote{Because the TGAS uncertainty is far smaller than the RAVE uncertainty for dwarfs, the equivalent plot for them is very similar to that in Figure~\ref{fig:NewfT}.} This shows that the systematic error for the lowest $\log g$ stars is comparable to the quoted statistical uncertainty. } This also means that when we include the TGAS parallaxes in the distance pipeline for these objects, it will typically have a rather limited effect, and so the bias that we see here will persist. \subsubsection{Asteroseismic calibration} \label{sec:ASCal} The $\log g$ values given in the main table of RAVE DR5 have a global calibration applied, which uses both the asteroseismic $\log g$ values of 72 giants from \cite{Vaea17} and those of the {\it Gaia}\ benchmark dwarfs and giants \citep{Heea15}. This leads to an adjustment to the raw pipeline values (which were used in RAVE DR4, so we will refer to them as $\log g_{\rm DR4}$) such that \begin{equation} \label{eq:DR5Cal} \log g_{\rm DR5} = \log g_{\rm DR4}+0.515-0.026\times\log g_{\rm DR4}-0.023\times\log g_{\rm DR4}^2. \end{equation} A separate analysis by \cite{Vaea17} focussed only on the 72 giants with asteroseismic $\log g$ values; these are only used to recalibrate stars with dereddened colours $0.50 <(J-K_s)_0<0.85\,\mathrm{mag}$. For these stars a much more drastic recalibration was preferred, with the recalibrated $\log g$ value being \begin{equation} \label{eq:ASCal} \begin{split} \log g_{\rm AS} & = \log g_{\rm DR4} - 0.78 \log g_{\rm DR4} + 2.04 \\ & \approx 2.61 + 0.22\times ( \log g_{\rm DR4} - 2.61). \end{split} \end{equation} This has the effect of increasing the $\log g$ values for stars in the red clump and at lower $\log g$ -- thus decreasing their expected luminosity and distance, and increasing their expected parallax. It has the opposite effect on stars at higher $\log g$. It is clear, therefore, that this recalibration is in a direction required to eliminate the trend in $\Delta$ with $\log g$ for giants seen in the right panel of Figure~\ref{fig:DR5fTg}. It is also worth noting that \cite{RAVEDR5} compared $\log g_{\rm AS}$ to literature values and found a clear trend in the sense that $\log g_{\rm AS}$ was an overestimate for stars with literature $\log g < 2.3$, and an underestimate for literature $\log g > 2.8$. In Figure~\ref{fig:NewfgAS} we show $\Delta$ as a function of $\log g_{\rm DR5}$ for stars using the recalibrated $\log g_{\rm AS}$ values given by \cite{Vaea17} (along with those when using the DR5 $\log g$ values for reference). We use the DR5 $\log g$ value on the x-axis to provide a like-for-like comparison, and the grey region in Figure~\ref{fig:NewfgAS} is equivalent to the range $2.3<\log g_{\rm AS}<2.8$.
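The two recalibration formulae just given can be compared directly; the sketch below simply transcribes them (coefficients as quoted above):

```python
def logg_dr5(logg_dr4):
    """Global DR5 calibration of the raw (DR4) pipeline log g."""
    return logg_dr4 + 0.515 - 0.026 * logg_dr4 - 0.023 * logg_dr4 ** 2

def logg_asteroseismic(logg_dr4):
    """Asteroseismic recalibration: equivalent to a strong shrinkage
    of the raw value towards log g = 2.61, with slope 0.22."""
    return 2.61 + 0.22 * (logg_dr4 - 2.61)
```

The asteroseismic relation has a fixed point at $\log g_{\rm DR4}=2.61$: values below it are raised and values above it lowered, which is exactly the shrinkage towards the red clump described in the text.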
It is clear that the asteroseismically calibrated $\log g$ values improve the distance estimation for stars with low $\log g$ values -- even beyond the range of $\log g$ values where these $\log g$ values disagree with other external catalogues \citep[as found by][]{RAVEDR5} -- though it should be noted that these stars (with $0.50 <(J-K_s)_0<0.85\,\mathrm{mag}$) represent a small fraction of the stars with these low $\log g$ values. However, for gravities greater than $\log g_{\rm DR5}\simeq2.5$ (which is the point where $\log g_{\rm AS}=\log g_{\rm DR5}$), the asteroseismic calibration makes the $\log g$ values significantly worse in the sense that the spectrophotometric parallaxes are underestimates (i.e. the distances are typically overestimated). Inspection of the comparison of RAVE DR5 $\log g$ values to those from GALAH or APOGEE in \cite{RAVEDR5} appears to indicate that those with $\log g_{\rm DR5}\approx3$ are split into two groups (one with higher $\log g$ found by the other surveys, one with lower) -- i.e. these are a mixture of misidentified dwarfs/sub-giants and giants. The asteroseismic calibration is blind to this difference, and it seems likely that it does a reasonable job of correcting the $\log g$ values for the giants, at the cost of dramatically underestimating the $\log g$ values for the dwarfs/sub-giants at the same $\log g_{\rm DR5}$. The \cite{Vaea17} catalogue comes with an entry `flag\_050' which is true if the difference between $\log g_{\rm DR5}$ and $\log g_{\rm AS}$ is less than 0.5, and it is recommended that only stars with this flag are used. This sets an upper limit of $\log g_{\rm DR5}\simeq3.5$ for sources where the asteroseismic calibration can be applied. Our work here implies that the asteroseismic calibration should not be used for sources with $\log g_{\rm DR5}\gtrsim2.7$.
\begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/NewCompwAS_Logg.eps}}} \caption{ As Figure~\ref{fig:Newfg}, this is a running average of $\Delta$ as a function of $\log g_{\rm DR5}$ for giants where the $\log g$ values used come from the main DR5 calibration (blue; Eq.~\ref{eq:DR5Cal}) or the asteroseismic calibration (red; Eq.~\ref{eq:ASCal}). Note that the $x$-axis gives the DR5 $\log g$ value in each case -- this is to enable a side-by-side comparison. In both cases we have used $T_\mathrm{eff}$ values determined by the IRFM. The grey region indicates the range in $\log g$ over which the asteroseismic calibration appears to work reasonably well for the reference stars considered by \protect\cite{RAVEDR5}. The running averages are computed over a width of 0.3 in $\log g$. The plot also shows the number density as a function of $\log g$ for reference. Means are calculated for stars with $-4<\Delta<4$. Using the asteroseismically calibrated $\log g$ values for stars clearly improves the distance estimates for $\log g_{\rm DR5}\lesssim2.5$, which is the point where the two values are equal, but makes them worse for $\log g_{\rm DR5}\gtrsim2.5$. \label{fig:NewfgAS} } \end{figure} \subsection{Outliers} \label{sec:Outliers} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Outliers.eps}}} \caption{ Distributions of the quoted parameters of the $\sim$$1000$ stars that are outliers in the sense that they have $|\Delta|>4$ (\emph{blue} lines), and, for reference, of all stars in the study (\emph{green} lines). The plots are pdfs (so the area is normalised to 1 in all cases) produced using a kernel density estimate. The distributions shown are in $T_\mathrm{eff,IRFM}$ (top left), $\log g_{\rm DR5}$ (top right), $[\mathrm{M}/\mathrm{H}]$ (bottom left) and $S/N$ (bottom right). The outliers cover a wide range of these parameter spaces, and do not come from any clearly distinct population.
\label{fig:outliers} } \end{figure} We have $\sim$$1000$ stars for which the quoted parallaxes from RAVE and TGAS differ by more than 4$\sigma$. We will refer to these as `outliers'. We would only expect $\sim$$12$ such objects if the errors were Gaussian with the quoted uncertainties. In Figure~\ref{fig:outliers} we show pdfs indicating how these stars are distributed in quoted $T_\mathrm{eff,IRFM}$, $\log g$, and $[\mathrm{M}/\mathrm{H}]$. They cover a wide range of these parameters, and no clear problematic area is evident. They do tend to have relatively low $T_\mathrm{eff}$ values, and constitute a relatively large fraction of stars with quoted $[\mathrm{M}/\mathrm{H}]$ values towards either end of the full range. We also show the distribution of these stars in terms of $S/N$, and we can see that while they tend to have relatively low $S/N$ values, they are certainly not limited to such stars. We have also looked at the values of the AlgoConv quality flag, which is provided with RAVE parameters, and find that the outliers are indicated as unreliable at around the same rate as the rest of the sources. Around $26$ percent of the outliers have flags $2$ or $3$, which indicate that the stellar parameters should be used with caution, as compared to $\sim23$ percent of all other sources, which suggests that this is not the problem. There is also no indication that they are particularly clustered on the sky. There is some indication that the outliers tend to be problematic sources as labelled by the flags from \cite{Maea12}, which are provided with DR5. These flags are based on a morphological classification of the spectra, and can indicate that stars are peculiar (e.g., have chromospheric emission or are carbon stars) or that the spectra have systematic errors (e.g., poor continuum normalisation).
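The expectation of only $\sim$12 such objects follows directly from the Gaussian tail probability. A back-of-envelope check, assuming a sample of roughly $2\times10^5$ stars (a round number we adopt here for illustration, not a figure quoted in the text):

```python
import math

def expected_outliers(n_stars, threshold=4.0):
    """Expected number of |Delta| > threshold objects if Delta were a
    standard Gaussian: n * P(|Z| > t), with P(|Z| > t) = erfc(t / sqrt(2))."""
    return n_stars * math.erfc(threshold / math.sqrt(2.0))
```

With $P(|Z|>4)\approx6.3\times10^{-5}$, a sample of $2\times10^5$ stars gives an expectation of $\approx13$ outliers, so the $\sim$1000 observed are far in excess of Gaussian expectations.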
Around $20$ percent of the outliers are flagged as binary stars, and $\sim$$35$ percent are flagged as having chromospheric emission (compared to $\sim$$2$ percent and $\sim$$6$ percent of all sources, respectively). Similarly, $\sim$$40$ percent of the outliers are in the catalogue of stars with chromospheric emission from \cite{Zeea13,Zeea17}. The chromospheric emission can only have affected the RAVE distance estimates. However, binarity can affect either the RAVE distance (by affecting the parameter estimates and/or observed magnitudes) or the TGAS parallaxes (by altering the star's path across the sky, thus changing the apparent parallax). \subsection{Metallicity} \label{sec:HRetc} \begin{figure*} \centerline{ \resizebox{0.30769\hsize}{!}{\includegraphics{plots/IRFMTefflogg.eps}} \resizebox{0.30769\hsize}{!}{\includegraphics{plots/IRFMTeffMH.eps}} \resizebox{0.30769\hsize}{!}{\includegraphics{plots/IRFMloggMH.eps}} \resizebox{0.07692\hsize}{!}{\includegraphics{plots/colourbar.eps}} } \caption{ { Median values of $\Delta$, using IRFM temperatures, as a function of the stellar parameters $T_\mathrm{eff}$, $\log g$ and $[\mathrm{M}/\mathrm{H}]$. Pixel sizes are adapted such that there are never fewer than ten stars in a pixel for which we show the median. For the variation with metallicity we have, as before, divided the stars into dwarfs and giants, to show the more relevant parameter in each case. The grey areas contain very few stars. Density contours are shown as a guide to the location of the majority of the sources in these plots (this shows signs of the pixelisation of these parameters produced by the fitting algorithm used in the RAVE spectroscopic pipeline).} \label{fig:2d} } \end{figure*} { Finally we can look at the variation of $\Delta$ with more than one stellar parameter. In Figure~\ref{fig:2d} we show the variation of $\Delta$ in the Hertzsprung-Russell (HR) diagram ($T_\mathrm{eff}$ against $\log g$) for all stars.
We also show the variation of $\Delta$ in the $[\mathrm{M}/\mathrm{H}]$-$T_\mathrm{eff}$ plane for dwarfs and the $[\mathrm{M}/\mathrm{H}]$-$\log g$ plane for giants. In all cases we just show the statistics when we use the IRFM temperatures.} { The HR diagram shows some areas where RAVE parallaxes appear to be particularly discrepant. We had already seen that low-temperature dwarfs ($T_\mathrm{eff,IRFM}\lesssim4500$$\,\mathrm{K}$) have overestimated parallaxes. The sources with $T_\mathrm{eff,IRFM}\sim5000\,$$\,\mathrm{K}$ and $\log g_{\rm DR5}\sim4.2$ have underestimated parallaxes. These sources are between the dwarf and subgiant branches, and it appears that they are typically assigned too high a probability of belonging to the subgiant branch. These will be greatly improved when we include the TGAS parallax in our estimates. Sources at the upper edge of the giant branch (high quoted $T_\mathrm{eff}$ for their quoted $\log g$) also have very small RAVE parallaxes compared to those from TGAS, but these are a small fraction of giant stars. } There are no clear trends with metallicity for giants. For the dwarfs it is perhaps notable that there are significant parallax underestimates for metal-poor stars at $T_\mathrm{eff}\sim5200$$\,\mathrm{K}$ and parallax overestimates for both unusually metal-poor and metal-rich stars at $T_\mathrm{eff}\sim6200$$\,\mathrm{K}$. Again these do not comprise a particularly large fraction of all sources, and will be corrected when we include the TGAS parallax in our estimates. It is worth noting that selection effects mean that the more metal-poor stars (which tend to be further from the Sun in the RAVE sample) are likely to be higher temperature dwarfs, and (particularly) lower $\log g$ giants, and this affects any attempts to look at variation of $\Delta$ with metallicity independent of the other stellar parameters.
{ Since the most metal-poor stars tend to be cool giants which, as we have noted, are assigned distances in our output that are systematically too large, a sample of our stars which focusses on the metal-poor ones will suffer from particularly serious distance overestimates. Any prior which (like our standard one) assumes that metal-poor stars are the oldest will have a similar overestimate for the stars that are assigned the oldest ages in the sample. Note, however, that the age estimates we provide are found using a prior which assumes no such age-metallicity relation (see Section~\ref{sec:choice}), so the most metal-poor stars are not necessarily assigned the oldest ages in our catalogue.} \subsection{Which to use?} \label{sec:which} It is clear that adopting the IRFM temperature estimates improves the distance estimates for stars that have $T_{\rm eff,Spec}>5500$$\,\mathrm{K}$. Use of the IRFM temperatures does make the problems at low $\log g$ somewhat worse than they already were, but this is a smaller effect. We feel that switching from one temperature estimate to another at different points in the HR diagram would be a mistake, so we use the IRFM temperature in all cases. For $\sim 5000$ sources there is no IRFM $T_\mathrm{eff}$ available, so we do not provide distance estimates. For sources recognised as outliers ($|\Delta |>4$) we assume that the RAVE parameters are unreliable; in the published catalogue these are flagged, and we provide distances estimated using only the TGAS parallaxes and the 2MASS and WISE photometry. Similarly, we recognise that there is a systematic problem with dwarfs at $T_\mathrm{eff}<4600\,\mathrm{K}$, so for these stars we exclude the RAVE $T_\mathrm{eff}$ and $\log g$ from the distance estimation, and add an (arbitrary) $0.5$$\,\mathrm{dex}$ uncertainty in quadrature with the quoted RAVE uncertainty on metallicity.
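The input-selection rules just described can be summarised in a short sketch. The dictionary keys and the returned structure below are hypothetical illustrations of the logic, not the published catalogue's actual interface:

```python
import math

def distance_inputs(star):
    """Sketch of the input-selection rules described above.

    `star` is a dict with (hypothetical) keys 'teff_irfm', 'logg',
    'e_mh' and 'is_outlier'. Returns the inputs to pass to the
    distance pipeline, or None if no estimate is provided.
    """
    if star["teff_irfm"] is None:
        return None  # no IRFM Teff available: no distance estimate
    if star["is_outlier"]:
        # |Delta| > 4: treat the RAVE parameters as unreliable and fall
        # back on the TGAS parallax plus 2MASS and WISE photometry only.
        return {"use_rave_params": False}
    inputs = {"use_rave_params": True, "teff": star["teff_irfm"],
              "logg": star["logg"], "e_mh": star["e_mh"]}
    if star["logg"] >= 3.5 and star["teff_irfm"] < 4600.0:
        # Cool dwarfs: drop Teff and log g from the estimation, and
        # inflate the metallicity uncertainty by 0.5 dex in quadrature.
        inputs.pop("teff")
        inputs.pop("logg")
        inputs["e_mh"] = math.hypot(star["e_mh"], 0.5)
    return inputs
```

This is only a compact restatement of the prose rules; the real pipeline operates on full pdfs rather than point estimates.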
We have seen that sources with $\log g_{\rm DR5}<2.0$ show a systematic difference between our parallax estimates and those found by TGAS. This is probably due to a systematic underestimate of $\log g$ for these stars by RAVE. We will determine distances to these stars in the same way as to the others, but they will be flagged as probably unreliable. While the asteroseismic recalibration clearly helps for these stars, it is not helpful at high $\log g$, and is applicable to a dwindling fraction of sources as we go to lower $\log g$. We therefore do not attempt to use this recalibration in our distance estimates, though it certainly indicates the direction we must go to improve the RAVE $\log g$ estimates. \section{Alternative priors} \label{sec:altprior} \begin{figure*} \centerline{ \resizebox{0.5\hsize}{!}{\includegraphics{plots/Prior_Comp_Teff.eps}} \resizebox{0.5\hsize}{!}{\includegraphics{plots/Prior_Comp_logg.eps}}} \caption{ As in Figure~\ref{fig:DR5fTg}, this is the running average of $\Delta$ as a function of $T_\mathrm{eff}$ for dwarfs (left lower) and $\log g$ for giants (right lower) when using the alternative priors described in Section~\ref{sec:altprior}. In general, the RAVE distance estimates are reasonably robust to a change of prior. \label{fig:ComparePriors} } \end{figure*} It would be very troubling if our results were strongly dependent on our choice of prior. We therefore explore the effect of our prior by considering alternative forms. We will call our standard prior `Standard', and describe the differences from this prior. We consider four main alternative forms: \begin{enumerate} \item `Density' prior. As Standard except that we set the prior on $[\mathrm{M}/\mathrm{H}]$ and $\tau$ to be uniform, with a maximum age of $13.8\,\mathrm{Gyr}$.
The minimum and maximum metallicities are effectively set by the isochrone set used (Table~\ref{tab:isochrones}).\footnote{{ It is possible to remove this limitation, under the assumption that the stellar models do not change much at lower or higher metallicities, but the effect is limited, and it is not implemented here.}} This leaves the density profile, initial mass function (IMF) and dust model unchanged. \item `Age' prior. As Standard except that the age prior is the same for all components and simply reflects the assumption that the star formation rate has declined over time, following the same functional form as for the thin disc in the Standard prior, i.e., \[ P(\tau) \propto \exp(0.119 \,\tau/\mbox{Gyr}) \quad \mbox{for $\tau \le 13.8$\,Gyr.} \] \item `SB14' prior. As Standard, except that we set the prior on $[\mathrm{M}/\mathrm{H}]$ and $\tau$ identically for all components, following \cite{ScBe14}. This is uniform in $[\mathrm{M}/\mathrm{H}]$ over the metallicity range set by the isochrones, and such that \[ P(\tau\,|\,[\mathrm{M}/\mathrm{H}]) \propto \begin{cases} 0 &{\rm if}\; \tau>13.8\,\mathrm{Gyr} \\ 1 & {\rm if}\; 11\,\mathrm{Gyr} \leq \tau \leq 13.8\,\mathrm{Gyr} \\ \exp\left[\frac{(\tau-11\,\mathrm{Gyr})}{\sigma_\tau([\mathrm{M}/\mathrm{H}])} \right] & {\rm if}\; \tau \leq 11\,\mathrm{Gyr}, \\ \end{cases} \] where \[ \sigma_\tau = \begin{cases} 1.5\,\mathrm{Gyr} & {\rm if}\; [\mathrm{M}/\mathrm{H}] < -0.9 \\ \left(1.5 + 7.5 \times \frac{0.9+[\mathrm{M}/\mathrm{H}]}{0.4} \right)\,\mathrm{Gyr} & {\rm if}\; -0.9 \leq [\mathrm{M}/\mathrm{H}] \leq -0.5 \\ 9 \,\mathrm{Gyr} & {\rm otherwise}. \\ \end{cases} \] \item `Chabrier' prior.
As Standard, except that we use a \cite{ChabrierIMF} IMF rather than a \cite{Kr01} IMF, where, following \cite{Roea05}, we take \begin{equation} \label{eq:priorMassChab} P({\cal M}) \propto \begin{cases} 0 &{\rm if}\; {\cal M}<0.1\,M_\odot \\ \frac{A_{\rm c}}{\cal M} \exp\left[-\frac{\left(\log_{10} {\cal M} - \log_{10} {\cal M}_{\rm c}\right)^2}{2\sigma_{\rm c}^2}\right] & {\rm if }\; 0.1\,M_\odot \le {\cal M}<1\,M_\odot, \\ \;B_{\rm c}\, {\cal M}^{-2.3} & {\rm otherwise}. \\ \end{cases} \end{equation}\end{enumerate} In Figure~\ref{fig:ComparePriors} we compare the values of $\Delta$ that we derive under all of these priors, in each case using the sets of input parameters described in Section~\ref{sec:which}, and excluding sources where we ignore the RAVE parameters. It is clear from the left-hand panel of Figure~\ref{fig:ComparePriors} that the priors make a very limited difference for the dwarfs, except at the low $T_\mathrm{eff}$ end, where contamination by giants is becoming more important. The right-hand panel of Figure~\ref{fig:ComparePriors} shows that for giants, a prior that is uniform in both $[\mathrm{M}/\mathrm{H}]$ and stellar age -- i.e., the Density prior -- provides even worse results for the low $\log g$ giants than the Standard prior. The other priors provide very similar results to one another at low $\log g$, but differ somewhat at the higher $\log g$ end -- the two priors where $P([\mathrm{M}/\mathrm{H}])$ is a function of position (Standard and Chabrier) tend to have lower $\Delta$ values, i.e., greater distances to these stars derived from RAVE. We have also explored the effect of changing the power-law slope of the halo within our Standard prior (Eq.~\ref{eq:halo}) to either $P_3(\mathbf{r}) \propto r^{-3.9}$ or $P_3(\mathbf{r}) \propto r^{-2.5}$ (compared to the usual $r^{-3.39}$). The results were essentially indistinguishable from those using the Standard prior, even if we isolate the metal-poor stars.
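As an illustrative sketch, the piecewise Chabrier mass prior of Eq.~\ref{eq:priorMassChab} can be coded as follows. The lognormal parameters (${\cal M}_{\rm c}=0.079\,M_\odot$, $\sigma_{\rm c}=0.69$) are Chabrier's single-star values and are our assumption here, and the constants $A_{\rm c}$, $B_{\rm c}$ are fixed by continuity at $1\,M_\odot$:

```python
import math

M_C, SIGMA_C = 0.079, 0.69  # assumed lognormal parameters (solar masses)

def lognormal_part(m):
    # (1/M) * exp[-(log10 M - log10 M_c)^2 / (2 sigma_c^2)]
    return (1.0 / m) * math.exp(-(math.log10(m) - math.log10(M_C)) ** 2
                                / (2.0 * SIGMA_C ** 2))

A_C = 1.0                            # overall normalisation is arbitrary
B_C = A_C * lognormal_part(1.0)      # continuity with M^-2.3 at 1 M_sun

def chabrier_prior(m):
    """Unnormalised Chabrier IMF prior P(M), with M in solar masses."""
    if m < 0.1:
        return 0.0
    if m < 1.0:
        return A_C * lognormal_part(m)
    return B_C * m ** -2.3
```

Because only the shape of the prior matters, the normalisation constant is left arbitrary; the pipeline's own normalisation is absorbed into the posterior.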
Similarly, a decrease of 50 percent for the thin and thick disc scale heights has almost no effect -- the mean and standard deviations of the $\Delta$ values for a given population of stars (as shown in, e.g., Figure~\ref{fig:IRFM}) change by $\sim$$0.001$ at most. \subsection{Choice of prior} \label{sec:choice} In the interests of consistency with past studies, we use the Standard prior when producing our distance estimates. However, it is clear that this choice of prior imposes a strong relationship between age and metallicity. Therefore we also provide age estimates (Section~\ref{sec:ages}) using our `Age' prior. The results presented in this section make it clear that distances derived using this prior are roughly as reliable as those from our Standard prior, at least in terms of typical parallax error. \section{Using RAVE parallaxes to learn about TGAS} \label{sec:TGAS} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/OnSkyDifflb.eps}}} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/OnSkyDiffLL.eps}}} \caption{ Absolute difference between TGAS parallaxes and the new RAVE-only parallax estimates averaged (median) in bins on the sky in an Aitoff projection, shown in Galactic coordinates ($l,b$, upper) and ecliptic coordinates ($\lambda,\beta$, lower -- note that we have placed $\lambda=180^\circ$ at the centre of this plot to clearly show the feature). In each plot the grey area is where there are few or no stars. The clearest feature is the patch near $l\sim 280^\circ$, $b\sim 0^\circ$ where TGAS parallaxes appear to be systematically larger than those from RAVE. When looked at in ecliptic coordinates this area can be seen to run from ecliptic pole to ecliptic pole, and is therefore likely to be related to {\it Gaia}'s scanning law \protect\citep{GaiaDR1:Validation}.
\label{fig:ComboDiffSky} } \end{figure} \begin{figure} \centerline{ \resizebox{0.7\hsize}{!}{\includegraphics{plots/ComboAll.eps}}} \caption{ Distribution of $\Delta$ for all stars using the new RAVE-only parallax estimates compared to TGAS. The standard deviation is less than unity, implying that the uncertainties of at least one of the parallax estimates have been overestimated. \label{fig:ComboDiff} } \end{figure} \begin{figure*} \centerline{ \resizebox{0.2\hsize}{!}{\includegraphics{plots/Comboparbin1.eps}} \resizebox{0.2\hsize}{!}{\includegraphics{plots/Comboparbin2.eps}} \resizebox{0.2\hsize}{!}{\includegraphics{plots/Comboparbin3.eps}} \resizebox{0.2\hsize}{!}{\includegraphics{plots/Comboparbin4.eps}} \resizebox{0.2\hsize}{!}{\includegraphics{plots/Comboparbin5.eps}}} \caption{ Distribution of $\Delta$ using the new RAVE-only parallax estimates, separated into bins by TGAS parallaxes $\varpi_T$. The standard deviation is less than unity in each case, but increases as $\varpi_T$ increases. This could be because the RAVE uncertainties are consistently overestimated, or because the TGAS uncertainties are particularly overestimated for the smallest uncertainties. \label{fig:ComboDiffPar} } \end{figure*} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/DeflateErrs.eps}}} \caption{ The cumulative reduced $\chi^2$ of the TGAS parallax measurements with the new RAVE-only parallax estimates (Eq.~\ref{eq:redchi2}) when the uncertainties of one set or the other have been revised downwards.
The different lines correspond to the original values (dark blue), the correction from \protect\citet[green; $\alpha=0.79$ and $\beta = 0.10$ in Eq.~\ref{eq:deflateTGAS}]{Goea16}, the best correction available when only `deflating' RAVE uncertainties (red, $\gamma=0.86$ in Eq.~\ref{eq:deflateRAVE}), and the best available only adjusting TGAS uncertainties (light blue; $\alpha=0.95$ and $\beta = 0.20$) \label{fig:deflate} } \end{figure} In Section~\ref{sec:improved} we used the TGAS parallaxes to investigate the RAVE distance estimation, but we can turn this around and use the RAVE distance estimation to learn about TGAS. TGAS is an early release of {\it Gaia}\ data and is therefore expected to contain strong systematic errors \citep{GaiaDR1,GaiaDR1:TGAS}. Various studies have looked at these systematic errors \citep[including the {\it Gaia}\ consortium itself:][]{GaiaDR1:Validation}, by comparison to distances derived for RR Lyrae stars \citep*{Goea16}, red clump stars \citep{Daea17,GoMo17} or eclipsing binaries \citep{StTo16} or, in the case of \cite{ScAu17}, using a statistical approach based on the correlations between velocity components produced by distance errors. Our approach allows us to study a large area in the southern sky using many sources, spanning a wide range in colour, without any assumptions about kinematics. In Figure~\ref{fig:ComboDiffSky} we plot the average difference between the TGAS parallax and that from this study, binned on the sky. Zonal differences are unlikely to be produced by any particular issues with the RAVE distance estimation, but may be related to the way in which the sky has been scanned by {\it Gaia}. We can clearly see a stripe showing a substantial difference at $l\sim280^\circ$, which corresponds to a stripe near the ecliptic pole, as can be seen when this diagram is shown in ecliptic coordinates. A similar figure was shown in \citet[fig.
28]{GaiaDR1:Validation}, using the RAVE DR4 parallax estimates, where this feature was attributed to the ``ecliptic scanning law followed early in the mission'', and it was noted that a corresponding feature can be found in the median parallaxes of quasar sources. This is also likely to be related to the anomaly reported by \cite{ScAu17}. We can also look again at the width of the distribution of $\Delta$. As we have seen already, the width of the distribution of $\Delta$, when comparing TGAS and DR5, is less than unity. In Figure~\ref{fig:ComboDiff} we show this width for all stars in our new RAVE-only parallax estimates, and it is again less than unity. This indicates that the uncertainties of one or other set of measurements have been overestimated. When we subdivide the distribution by quoted TGAS parallax uncertainty (Figure~\ref{fig:ComboDiffPar}) we can see that the problem is particularly acute for sources with small quoted TGAS uncertainties. As discussed in \cite{GaiaDR1:TGAS}, uncertainties in the final TGAS catalogue are designed to be conservative, and have been `inflated' from the formal uncertainties derived internally. This was to take account of uncertainties that are not allowed for in the formal calculation (such as contributions from uncertainties in {\it Gaia}'s calibration and attitude). The scheme used was derived from a comparison to the (independent) {\it Hipparcos}\ parallaxes, and the quoted uncertainties were determined from the formal uncertainties using the formula \[ \sigma_{\varpi, {\rm TGAS}}^2 = a^2 \varsigma_{\varpi, {\rm TGAS}}^2 + b^2 \] where $\varsigma_{\varpi, {\rm TGAS}}$ is the formal parallax error derived internally, with $a=1.4$ and $b=0.2\,\mathrm{mas}$. \cite{Goea16} looked at the reported parallaxes of RR Lyrae stars in TGAS, and used the known period-luminosity relationship for these stars to provide an independent estimate of the uncertainties in parallax.
They found that for these sources $a=1.1$, $b=0.12\,\mathrm{mas}$ provides a better description of the true TGAS uncertainties, and therefore recommended that the TGAS parallax uncertainties should be reduced to a value $\sigma_{\varpi, {\rm TGAS, sc}}$ given by the formula \[ \label{eq:deflateTGAS} \sigma_{\varpi, {\rm TGAS, sc}}^2 = \alpha^2 \sigma_{\varpi, {\rm TGAS}}^2 - \beta^2 \] with $\alpha=0.79$ and $\beta = 0.10$. They investigated this by looking at how the sum of values of (their equivalent to) $\Delta^2$ varied as they increased the number of values that they summed over (ordered by nominal parallax uncertainty). This was done in the expectation that it should increase linearly with slope unity. For ease of plotting we consider the closely related statistic \[ \label{eq:redchi2} \chi^2_{\rm red,n} = \frac{1}{n} \sum_{i=1}^{n} \Delta_i^2 \] where the sum is over the $n$ sources with the lowest quoted TGAS uncertainty. This should have a constant value of unity as we sum over increasing numbers of sources. \begin{figure*} \centerline{ \resizebox{0.5\hsize}{!}{\includegraphics{plots/FinalDelta_Teff.eps}} \resizebox{0.5\hsize}{!}{\includegraphics{plots/FinalDelta_logg.eps}}} \caption{ The variation of the average value of $\Delta$ (Eq.~\ref{eq:Delta}) as a function of $T_\mathrm{eff}$ for dwarfs ($\log g\ge3.5$, left) and as a function of $\log g$ for giants ($\log g<3.5$, right). The $T_\mathrm{eff}$ values come from the IRFM. In blue and labelled RAVE+TGAS we show our combined parallax estimates -- we also show the RAVE only estimates (using IRFM $T_\mathrm{eff}$ values, in green and labelled RAVE, and as shown in Figures~\ref{fig:NewfT}~and~\ref{fig:Newfg}) to guide the eye. For low $\log g$ giants, TGAS parallax uncertainties are too large to have a significant effect on the bias seen in RAVE.
\label{fig:FinalDelta} } \end{figure*} In Figure~\ref{fig:deflate} we show that, if we use the quoted uncertainties for both RAVE and TGAS, $\chi^2_{\rm red}$ remains smaller than unity for all stars. If we use the prescription from \cite{Goea16} then we come closer to unity when we consider all stars, but $\chi^2_{\rm red}$ is clearly less than unity where the sum is over the stars with lower $\sigma_{\varpi, {\rm TGAS}}$. This suggests that the \citeauthor{Goea16} prescription gives uncertainties that are still overestimated for stars with small $\sigma_{\varpi, {\rm TGAS}}$ and underestimated for those with large $\sigma_{\varpi, {\rm TGAS}}$. Figure~\ref{fig:deflate} also shows two alternative scenarios. We show $\chi^2_{\rm red}$ corresponding to the best values of $\alpha$ and $\beta$ (assuming that the RAVE uncertainties are correct), namely $\alpha=0.95$ and $\beta = 0.20$, which corresponds to $b=0$. Even when we do this (i.e., set the minimum uncertainty from TGAS to zero), the combined uncertainty for the stars with the lowest TGAS uncertainties is clearly too large. We therefore also consider the effect of arbitrarily reducing the RAVE uncertainties according to the formula \[ \label{eq:deflateRAVE} \sigma_{\varpi,{\rm RAVE, sc}} = \gamma\; \sigma_{\varpi, {\rm RAVE}}, \] and find that a value of $\gamma=0.86$ (while keeping the quoted TGAS uncertainties) produces results that are roughly as good as those we find when deflating the TGAS uncertainties. It is worth noting that, like the TGAS uncertainties, the RAVE stellar parameter uncertainties were designed to be conservative \citep{RAVEDR5}. It is possible that the RAVE uncertainties tend to be overstated, particularly if the quoted external uncertainty estimates are overstated for most stars. While this would certainly not produce a systematic overestimate that was well described by Eq.~\ref{eq:deflateRAVE}, it could affect our estimates in a more complicated and subtle way.
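A minimal sketch of the two rescalings (Eq.~\ref{eq:deflateTGAS} and Eq.~\ref{eq:deflateRAVE}) and the running statistic of Eq.~\ref{eq:redchi2}, under the assumption that $\Delta_i$ is the parallax difference normalised by the combined uncertainty:

```python
import numpy as np

def deflate_tgas(sigma_tgas, alpha=0.79, beta=0.10):
    # Eq. (deflateTGAS): sigma_sc^2 = alpha^2 sigma^2 - beta^2
    return np.sqrt(alpha ** 2 * np.asarray(sigma_tgas) ** 2 - beta ** 2)

def deflate_rave(sigma_rave, gamma=0.86):
    # Eq. (deflateRAVE): a simple multiplicative rescaling
    return gamma * np.asarray(sigma_rave)

def cumulative_reduced_chi2(dpar, sigma_tgas, sigma_rave):
    """chi^2_{red,n} = (1/n) sum_{i=1}^{n} Delta_i^2, summed over the n
    sources with the smallest quoted TGAS uncertainty.  We assume here
    that Delta_i is the RAVE-TGAS parallax difference normalised by
    the combined uncertainty."""
    order = np.argsort(sigma_tgas)
    delta2 = (np.asarray(dpar) / np.hypot(sigma_tgas, sigma_rave)) ** 2
    return np.cumsum(delta2[order]) / np.arange(1, len(delta2) + 1)
```

If the quoted uncertainties are correct, the returned array should scatter around unity for all $n$; values persistently below unity indicate overestimated uncertainties, as in Figure~\ref{fig:deflate}.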
We are therefore not in a position to determine for sure whether it is the RAVE uncertainties or TGAS uncertainties (or both) that are overestimated. We would note that a comparison of DR5's parallax estimates to those from {\it Hipparcos}\ did not suggest underestimated uncertainties in either instance \citep[][fig. 25]{RAVEDR5}. We add that the dispersion in $\Delta$ is smaller than unity for both giants and dwarfs, considered independently. We conclude that our results are consistent with the TGAS uncertainties being overestimated, though probably not in quite the same way as the prescription of \cite{Goea16}. We will not attempt to correct for any overestimates of uncertainty when calculating the combined RAVE+TGAS estimates below. \section{Combined distance estimates} \label{sec:Combined} A fundamental element of Bayesian analysis is the updating of the probability of a hypothesis (for example, the hypothesised distance to a star) as more evidence becomes available. TGAS parallaxes provide new evidence regarding these distances, so we are required to take it into account when determining the distances. We can think of this as either an additional piece of input data, or as a prior on parallax for each star (in addition to the prior on distance implied by Equation~\ref{eq:priorofx}) -- the two statements are equivalent. In previous sections we have investigated the properties of the RAVE distance pipeline in the absence of TGAS parallaxes, and developed an understanding of the problems with each dataset. We now incorporate the new evidence from these parallax measurements to obtain more accurate distance estimates than either can provide in isolation. We do this by including them in the set of inputs $(O_i,\sigma_i)$ in Equation~\ref{eq:maths}.
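Schematically, treating the TGAS parallax as one further input amounts to multiplying the spectrophotometric pdf by a Gaussian parallax likelihood and renormalising. A toy grid version (not the pipeline implementation; the numbers are invented for illustration):

```python
import numpy as np

def update_with_parallax(s_grid, pdf_s, varpi_obs, sigma_varpi):
    """Multiply a pdf over distance s [kpc] by a Gaussian likelihood for
    an observed parallax varpi_obs +/- sigma_varpi [mas], then
    renormalise on the grid.  A toy version of adding the TGAS
    parallax to the inputs (O_i, sigma_i); not the pipeline code."""
    varpi_model = 1.0 / s_grid  # parallax in mas for s in kpc
    likelihood = np.exp(-0.5 * ((varpi_obs - varpi_model) / sigma_varpi) ** 2)
    posterior = pdf_s * likelihood
    return posterior / posterior.sum()

# invented example: a bimodal (dwarf/giant-ambiguous) distance pdf is
# collapsed onto the near peak by a parallax of 3.3 +/- 0.3 mas
s = np.linspace(0.05, 3.0, 2000)
prior = np.exp(-0.5 * ((s - 0.3) / 0.05) ** 2) + np.exp(-0.5 * ((s - 1.5) / 0.2) ** 2)
post = update_with_parallax(s, prior, varpi_obs=3.3, sigma_varpi=0.3)
```

This illustrates why the parallax helps most where the spectrophotometric pdf is bimodal: the likelihood suppresses one of the two peaks almost entirely.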
It can be expected that the impact of the TGAS parallaxes will be greatest at $T_\mathrm{eff}$ values below the turnoff where there is serious ambiguity whether a star is on the main sequence or ascending the giant branch -- an uncertainty which is reflected in the bimodal pdfs which we are forced to represent using multi-Gaussian fits (Eq.~\ref{eq:defsfk}). We have seen that the parallax estimates (from RAVE alone) for stars with $\log g$ values less than $\sim$$2.0$ appear to be particularly biassed in the sense that they are systematically lower than those found by TGAS. It is very likely that this is due to the RAVE $\log g$ values in this range being systematically underestimated, as is also suggested by a comparison to the $\log g$ values found by GALAH or APOGEE surveys for the same stars. We noted in Section~\ref{sec:Giants} that the TGAS parallax uncertainties for these stars are significantly larger than those found from the RAVE distance pipeline. Therefore we can not expect that our distance estimates for these stars are significantly de-biassed by including the TGAS parallax in the estimate. In Figure~\ref{fig:FinalDelta} we show the average value of $\Delta$ (again as a function of $T_\mathrm{eff,IRFM}$ for dwarfs and $\log g_{\rm DR5}$ for giants) for the combined parallax estimates, with the RAVE-only distance estimates (also using the IRFM temperatures) shown for comparison. One must be very careful not to over-interpret these plots for several reasons (e.g., the RAVE+TGAS parallaxes are obviously not independent of the TGAS ones; the uncertainties for RAVE+TGAS, which enter the calculation of $\Delta$, are generally much smaller than those of RAVE alone), but they clearly indicate that the difference at low $\log g$ values is not removed when we include the TGAS parallax information. 
\subsection{Improvement} \label{sec:Improve} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/UnexpectedImprovement.eps}}} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/FewerGaussians.eps}}} \caption{ The top panel shows the variation over the HR diagram of the ratio of the actual quoted parallax uncertainty when combining TGAS and RAVE data to the expected parallax uncertainty (Eq.~\ref{eq:naive}) assuming Gaussian uncertainties. The $T_\mathrm{eff}$ values come from the IRFM. In the region between the dwarf and giant branches and in the red clump the improvement on naive expectations is particularly clear. The lower panels provide an explanation: they show the number of Gaussian components required to represent the pdf in distance modulus (Eq.~\ref{eq:defsfk}) without TGAS parallaxes (left) and with them (right). Without the TGAS parallaxes we require a multi-Gaussian representation in $\sim$$45$ percent of cases, whereas with TGAS we only need it in $\sim$$23$ percent of cases. \label{fig:Improvement} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/FinalDistModErrHR.eps}}} \caption{ Average fractional distance uncertainties across the HR diagram when we ignore TGAS (left) and when we use the TGAS parallax information (right). The improvement is particularly dramatic for cooler dwarfs and stars with $T_\mathrm{eff}\sim6000\,\mathrm{K}$, $\log g\sim2.5$. The $T_\mathrm{eff}$ values come from the IRFM. For low $\log g$ giants, the inclusion of TGAS parallaxes has little effect. \label{fig:ComboUncertHR} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/FinalDistanceUncerts.eps}}} \caption{ Fractional distance uncertainties for sources when we ignore TGAS parallaxes (upper panel) and when we use TGAS parallaxes (lower panel).
In each case we show the pdfs for all sources (black), and separate ones for giants ($2.0<\log g<3.5$, red) and dwarfs ($\log g\ge3.5$, blue). The dashed lines show the median values in each case: 0.33 and 0.16 (without TGAS and with TGAS, respectively) for all stars (i.e. 51 percent smaller with TGAS), 0.36 and 0.20 for giants (44 percent smaller), and 0.31 and 0.10 for dwarfs (66 percent smaller). \label{fig:ComboUncertDist} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/FinalParallaxUncerts.eps}}} \caption{ Parallax uncertainties when using the RAVE pipeline with TGAS parallaxes. The dotted curve is the pdf for all stars using just TGAS. The solid lines show the pdfs for all sources (black), and separate ones for giants ($2.0<\log g<3.5$, red) and dwarfs ($\log g\ge3.5$, blue). The dashed lines show the median values in each case which can be compared to the median TGAS uncertainty for these stars, which is $0.32\,\mathrm{mas}$ (essentially independently of whether stars are dwarfs or giants). This median is $0.25\,\mathrm{mas}$ for all stars (24 percent smaller than TGAS), $0.15\,\mathrm{mas}$ for giants (54 percent smaller) and $0.29\,\mathrm{mas}$ for dwarfs (9 percent smaller). \label{fig:ComboUncertPar} } \end{figure} Including the TGAS parallaxes in our distance estimation inevitably leads to an improvement in the formal uncertainties. From the discussion of the previous sections, we can claim with some confidence that, outside a few regions of parameter space (e.g., low $\log g$, the stripe near the ecliptic pole), the combination does not introduce significant systematic errors into one dataset or the other.
We can make a na\"ive estimate of how the uncertainties will decrease when we combine the two datasets by approximating that the uncertainties from the RAVE-only distance pipeline ($\sigma_{\varpi,{\rm Sp}}$) are Gaussian, in which case we have a new expected uncertainty in parallax $\sigma_{\varpi,{\rm exp}}$ given by \[ \label{eq:naive} 1/{\sigma^2_{\varpi,{\rm exp}}} = 1/{\sigma^2_{\varpi,{\rm Sp}}} + 1/{\sigma^2_{\varpi,{\rm TGAS}}}. \] Because the RAVE uncertainties are significantly non-Gaussian, we do significantly better than this in some regions of the HR diagram. This can be seen in Figure~\ref{fig:Improvement}, which shows the parallax uncertainty we find divided by that which we would naively expect. This is also reflected in the reduced number of stars for which the multi-Gaussian representations are required to describe the distance pdf (lower panel of Figure~\ref{fig:Improvement}). In Figure~\ref{fig:ComboUncertHR} we show how the fractional distance uncertainty varies over the HR diagram, both with and without TGAS parallaxes. It is clear that the main improvement is for dwarfs, and for stars in the regions of the HR diagram where parallax information can resolve the ambiguity as to whether a star is a giant or a dwarf. When we include TGAS parallaxes, the median fractional distance uncertainty (excluding stars with $\log g_{\rm DR5}<2.0$) falls to 15 percent, from 31 percent using spectrophotometric information alone. For dwarfs the median uncertainty is just 10 percent, while for giants it is 19 percent. The full pdfs of fractional distance uncertainty are shown in Figure~\ref{fig:ComboUncertDist}. The improvement over TGAS alone is shown in terms of parallax uncertainty in Figure~\ref{fig:ComboUncertPar}. In this case it is the giants for which the greatest improvement is found (again excluding stars with $\log g_{\rm DR5}<2.0$).
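Eq.~\ref{eq:naive} is simply inverse-variance addition of two independent Gaussian measurements; as a one-line sketch:

```python
def naive_combined_sigma(sigma_sp, sigma_tgas):
    """Expected parallax uncertainty from inverse-variance addition of
    two independent Gaussian measurements (Eq. naive).  The real gain
    is often larger, because the RAVE pdfs are non-Gaussian."""
    return (1.0 / sigma_sp ** 2 + 1.0 / sigma_tgas ** 2) ** -0.5
```

For two equal uncertainties the combination gains a factor $\sqrt{2}$: two measurements of $0.32\,\mathrm{mas}$ each combine to $\approx0.23\,\mathrm{mas}$.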
The median TGAS uncertainty is $0.32\,\mathrm{mas}$ for either giants or dwarfs, while the median uncertainty for RAVE+TGAS is $0.20\,\mathrm{mas}$ for giants, and $0.24\,\mathrm{mas}$ for dwarfs. Using our combined estimates and the TGAS proper motions, we can convert this distance uncertainty into a velocity uncertainty. We take a simple Monte-Carlo approach to do this -- for each star we sample from the multi-Gaussian pdf in distance modulus, and from Gaussians in proper motion and radial velocity with the quoted uncertainties. We again assume that the Sun is $8.33\,\mathrm{kpc}$ from the Galactic centre and $15\,\mathrm{pc}$ from the Galactic plane. If we characterise the resulting pdf in terms of a median value and a standard deviation (i.e. uncertainty) in each Galactocentric velocity component, we get the distribution of uncertainties shown in Figure~\ref{fig:VelocityUncert}. The introduction of TGAS parallaxes to our distance estimates reduces the velocity uncertainty by, on average, $\sim40$ percent in each direction. \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/VelocityUncerts.eps}}} \caption{ Velocity uncertainties for sources when we ignore TGAS parallaxes (upper panel) and when we use TGAS parallaxes (lower panel). In each case we show the pdfs in $v_R$ (black), $v_z$ (red) and $v_\phi$ (blue). The dashed lines show the median values in each case, which are $6.6$ and $3.8\,\mathrm{km\,s}^{-1}$ (without TGAS and with TGAS, respectively) for $v_R$, $4.7$ and $2.9\,\mathrm{km\,s}^{-1}$ for $v_z$ and $5.1$ and $3.0\,\mathrm{km\,s}^{-1}$ for $v_\phi$, i.e. the velocity uncertainty in each direction is reduced by $\sim 40$ percent. \label{fig:VelocityUncert} } \end{figure} { Finally, we would like to estimate how we could correct our distance estimates to be unbiassed. Since we do not know the true values, we will do this under the assumption that the TGAS values are unbiassed.
We make the further approximation that -- at a given $T_\mathrm{eff}$ value for dwarfs or $\log g$ value for giants -- we can simply multiply all our RAVE+TGAS parallaxes by a correction factor ${\rm corr}_\varpi$ such that they are unbiassed. For values of ${\rm corr}_\varpi\approx1$ it follows that the equivalent factor for distances is ${\rm corr}_s\approx2-{\rm corr}_\varpi$. We find the value of ${\rm corr}_\varpi$ by requiring that our statistic $\langle\Delta\rangle$ is zero if we compare ${\rm corr}_\varpi\varpi_{\rm RAVE+TGAS}$ and $\varpi_{\rm TGAS}$.} { Figure~\ref{fig:Corr} shows the value of ${\rm corr}_\varpi$ we find as a function of $T_\mathrm{eff}$ for dwarfs and $\log g$ for giants. The dwarfs require systematic changes of less than 1 percent in parallax (or distance) for all but the hottest stars. The giants seem to require systematic changes of more than 10 percent in parallax at $\log g<2.0$, up to around 35 percent at the lowest $\log g$ values. For these low $\log g$ stars, the approximation ${\rm corr}_s\approx2-{\rm corr}_\varpi$ becomes poor. } \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Correction_Teff.eps}}} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Correction_logg.eps}}} \caption{ { Estimated parallax correction factors (${\rm corr}_\varpi$) for the RAVE+TGAS combined parallax estimates as a function of $T_\mathrm{eff}$ for dwarfs ($\log g\geq3.5$, upper) and $\log g$ for giants ($\log g<3.5$, lower). Values are calculated as a running average over a window of width $200\,{\rm K}$ or $0.3\,\mathrm{dex}$. If we multiply all the RAVE+TGAS $\varpi$ values in this window by ${\rm corr}_\varpi$, then $\Delta$ is, on average, zero.} \label{fig:Corr} } \end{figure} \section{Age estimates} \label{sec:ages} The classical method for determining the age of an F or G star is to compare its luminosity to that expected for stars of its colour on the main sequence or turning off it.
This is only possible if an independent estimate of its distance (e.g., a parallax) is available. By including TGAS parallaxes in the RAVE pipeline we are making precisely this comparison, with additional information and a sophisticated statistical treatment. We can therefore expect that the ages we derive are as reliable as any currently available for main-sequence stars. While the original aim of this pipeline was to determine the distances to stars, an inevitable byproduct is that we also constrain the other `model' properties described in Section~\ref{sec:Bayes}, i.e., initial mass ${\cal M}$, age $\tau$, metallicity $[\mathrm{M}/\mathrm{H}]$ and line-of-sight extinction $A_V$. We can also produce new estimates of the other properties of the stars, such as $T_\mathrm{eff}$ and $\log g$, which we discuss below. Here we look at the improved estimates of $\tau$ that are made possible by including TGAS parallaxes. Age estimates from this pipeline were included in RAVE DR4 (in terms of $\log \tau$), but came with the strong caveat that the prior used (the Standard prior -- see our Section~\ref{sec:prior}) included a fixed relationship between metallicity and age (metal-poor stars are assumed to be old, metal-rich stars younger). In our case, we have now seen that we can use a prior without any explicit age-metallicity relationship and still produce reasonable results (at least in terms of parallaxes -- Figure \ref{fig:ComparePriors}). This gives us some confidence that we will not go too badly wrong using this prior when deriving ages. We would expect that the addition of the TGAS parallax measurements provides us with substantial leverage when determining the ages of stars, and in Figure~\ref{fig:AgeUncertDist} we quantify this. It is clear that, particularly at the low-uncertainty end, we do have a substantial improvement in precision. 
Without TGAS, only 1.5 percent of stars have fractional age uncertainties lower than 0.3, while with TGAS this increases to over 25 percent. In Figure~\ref{fig:AgeUncertHR} we show where in the HR diagram the stars with the smallest age uncertainties are found. As one would expect, they are primarily found near the main-sequence turnoff -- it is in this region that stars evolve quickly with age, and it is therefore possible to get an age estimate with small uncertainties even with imperfect observations. { It is clear from Figure~\ref{fig:FinalDelta} that there are still some biasses in the distance estimates for dwarfs with $6000\lesssim T_\mathrm{eff}\lesssim7000\,\mathrm{K}$ (i.e.~in the main-sequence turnoff region), though Figure~\ref{fig:Corr} suggests that these are only at the 1 percent level. It is reasonable to ask whether this implies a bias in the age estimates. We can not know for sure, because we do not know what causes the bias. We have investigated the possible biasses by running the pipeline having either artificially decreased the input $\log g$ values by $0.4\,\mathrm{dex}$ or artificially decreased the input parallaxes by $50\,\mu\mathrm{as}$. Either change results in parallax estimates that are biassed in the opposite sense to that seen with the real data for these stars. In both cases the changes in stellar ages are small compared to the uncertainties. The change in input $\log g$ produces a typical change of $\sim0.5\,\mathrm{Gyr}$ (or 5 to 10 percent) but with no trend to higher or lower ages (i.e. no clear bias). The change in input $\varpi$ produces a smaller typical change of $\sim0.2\,\mathrm{Gyr}$ (or $\sim$4 percent) with a bias in the sense that cooler stars (with $T_\mathrm{eff}\sim6000\,\mathrm{K}$) have slightly lower ages than those originally quoted, by $\sim0.1\,\mathrm{Gyr}$ (or $\sim$2 percent).
These are negligible for most purposes, but it is entirely possible that other biasses in the analysis (for example in the metallicities or the stellar isochrones) have larger impacts on the age estimates. The study of the complex interplay of these different factors is beyond the scope of this paper. } We must caution that these age estimates are extremely hard to verify from external sources. A relatively small number of sources have age estimates from asteroseismology studies or because they are part of clusters with known ages, and these are sources for which we have large age uncertainties. We can gain confidence from the facts that 1) the method we are using to determine distances and ages has been carefully tested with pseudo-data for accuracy by, amongst others, \cite{BuJJB10}; and 2) the application of this method to these data to find distances has been rigorously tested against TGAS parallaxes in this study, and we have found that it is generally successful (except for $\log g<2.0$ stars, where we believe that the problem lies in the quoted $\log g$ values). In a forthcoming paper (Wojno et al., MNRAS submitted) we will use these age estimates to isolate young and old populations in the RAVE catalogue and study their properties. This will demonstrate the improvement over past studies \citep{Woea16} in understanding the relationship between age, metallicity and velocity of stars in the Solar neighbourhood which is made possible by the TGAS parallaxes. \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/AgeUncertKDECumFinal.eps}}} \caption{ The fraction of sources with a given fractional age uncertainty ($\sigma_\tau/\tau$) displayed as a pdf found using a kernel density estimation (lower panel), and as a cumulative distribution (upper panel). The median values are plotted as dashed lines. The plot shows the distribution of age uncertainties with and without TGAS parallaxes (blue and green curves, respectively).
It is particularly clear that the inclusion of TGAS parallaxes allows us to derive age uncertainties of less than $30$ percent for a significant fraction of sources. \label{fig:AgeUncertDist} } \end{figure} \begin{figure*} \centerline{ \resizebox{0.33\hsize}{!}{\includegraphics{plots/HRJK_ageerr02.pdf}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/HRJK_ageerr03.pdf}} \resizebox{0.33\hsize}{!}{\includegraphics{plots/HRJK_ageerrall.pdf}}} \caption{ The location of stars with small fractional age uncertainties in the HR diagram (with colour and absolute magnitude on the two axes in this case). Both the $(J-K_s)$ colour and the absolute magnitude in the $J$ band, $M_J$, have been corrected for extinction using the most likely $\log A_V$ value found by the distance pipeline. The left figure shows those with age uncertainties less than 20 percent, the central figure those with age uncertainties less than 30 percent, and the right figure shows all stars (for comparison). The number density indicated by the colour bar corresponds to the numbers of stars in a pixel of height 0.1 magnitudes in $M_J$ and width 0.01 magnitudes in $(J-K_s)_0$. Unsurprisingly, the smallest fractional age uncertainties are for stars near the main-sequence turnoff. \label{fig:AgeUncertHR} } \end{figure*} \section{Stellar parameters} \label{sec:reverse} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/HRTg_reverse.pdf}}} \caption{ Output from the `reverse pipeline', which finds the $T_\mathrm{eff}$ and $\log g$ values of the stars using the Bayesian method described in this paper. Pixels are coloured by median metallicity, and overlaid contours show the density (with a logarithmic scaling in density between contours). \label{fig:HRReverse} } \end{figure} The Bayesian pipeline takes $\log g$ and $T_\mathrm{eff}$ as inputs to the likelihood calculation, taken either directly from the spectroscopic pipeline or from the IRFM. 
It also, inevitably (if usually implicitly) determines a posterior probability distribution for these parameters. { The increased information that we now have about the stars (primarily from TGAS) means that these posterior probability distributions are significantly better estimates of the stellar parameters than those we input. In future these may be used to provide estimates for use in the pipeline that determines the chemical abundances, and may be used without giving any input from the spectroscopic pipeline other than metallicity. Because the intention is to provide estimates of these stellar parameters, rather than take them as input, we refer to this as the {\it reverse pipeline}, though it is fundamentally the same machinery}. The use of parallaxes to improve estimates of $\log g$ is far from new \cite*[e.g.][]{BeFeLu03}, and here we simply extend it in much the same way as is being planned within the {\it Gaia}\ consortium \citep[e.g.][]{CBJea13}. Figure~\ref{fig:HRReverse} shows the HR diagram using the best estimates of $T_\mathrm{eff}$ and $\log g$ from the Bayesian pipeline (referred to as $T_\mathrm{eff, PJM}$, $\log g_{\rm PJM}$). We show the density of stars in this plane using a contour (showing a strong red clump) { and colour regions of the diagram by the median metallicity of stars in that region. This can cause artifacts in regions with few stars, such as above the main sequence}. It is worth noting that the sources with $\log g_{DR5}<2$ do not have their $\log g$ values significantly shifted. This is because the TGAS parallaxes are too uncertain to have much of an effect (see Figure~\ref{fig:Respective}). Future {\it Gaia}\ data releases will have smaller parallax uncertainties, so this approach is a viable one to improve the $\log g$ values for these stars after {\it Gaia}\ DR2, on 25th April 2018. We caution that the stars found in regions of the HR diagram away from typical isochrones (e.g. 
above the main sequence) are likely to have rather untrustworthy parameters from our pipeline. This is because our framework is not designed to deal with unusual objects such as binaries (which naturally lie above the main sequence in the colour-magnitude version of the HR diagram). The large majority of these stars are flagged in DR5 using the \cite{Maea12} approach { (for example, $\sim$900 of the $\sim$1000 found above the main sequence with $T_\mathrm{eff, PJM}<5500\,\mathrm{K}$ are flagged)}, and therefore flagged in our catalogue. { The clean appearance of this HR diagram is to a large extent by construction, because stellar models are used to determine a star's place on the diagram. For a true test of the reliability of this method we must look at comparisons to external catalogues.} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Andrea/external1_PJM.png}}} \caption{ { Comparison of parameters from RAVE-on, DR5, and the reverse pipeline to those from high-resolution field star studies (as described in the text). Note that the y-axis labels are placed at the top of the figure (i.e.,~$\Delta T_{\rm eff}$, $\Delta \log g$). The differences are given in the sense, e.g., $\log g_{\rm DR5}-\log g_{\rm ext}$. The solid red lines indicate the mean values, the dashed red lines are placed one standard deviation either side. In each case the reverse pipeline parameters show less bias and less spread. } \label{fig:external1} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Andrea/external2_PJM.png}}} \caption{ { As Figure~\ref{fig:external1}, except it uses parameters from GALAH for the comparison to parameters from RAVE-on, DR5, and the reverse pipeline.
Again the spread of values from the reverse pipeline is far smaller than in either other case.} \label{fig:external2} } \end{figure} \begin{figure} \centerline{ \resizebox{\hsize}{!}{\includegraphics{plots/Andrea/external3_PJM.png}}} \caption{ { As Figures~\ref{fig:external1}, and \ref{fig:external2} except it uses parameters from APOGEE for the comparison to parameters from DR5, and the reverse pipeline. We do not compare RAVE-on with APOGEE, as RAVE-on uses the RAVE-APOGEE overlap stars as part of their training sample. Again, we find that the scatter and bias are substantially reduced.} \label{fig:external3} } \end{figure} { \cite{KuNice17} were the first to highlight the decrease in scatter in the $T_\mathrm{eff}$ and $\log g$ values from the reverse pipeline as compared to the RAVE pipeline alone. Figures~\ref{fig:external1}, \ref{fig:external2} \& \ref{fig:external3} show a more detailed comparison in the $T_\mathrm{eff}$ and $\log g$ parameters from RAVE DR5, RAVE-on \citep[found from the RAVE spectra using a data driven approach by][]{RAVEon} and those presented here compared to high-resolution spectroscopy parameters. First, Figure~\ref{fig:external1} shows the 67 stars presented here that could be matched with high-resolution field star studies from \cite{2014ApJ...797...13S, 2013ApJ...771...67I,2014AJ....147..136R}, from open and globular clusters Blanco 1 \citep{2005MNRAS.364..272F}, 47 Tuc \citep{2014ApJ...780...94C, 2008AJ....135.1551K, 2009A&A...505..139C}, Pleiades \citep{2009PASJ...61..931F}, NGC 2632 \citep{2015AJ....150..158Y} and IC 4651 \citep{2004A&A...422..951C, 2004A&A...424..951P}, as well as from the Gaia-ESO survey \citep{GaiaESO}.} { The difference in $\log g$ between these studies and those presented here is $0.0 \pm 0.42\,\mathrm{dex}$, as compared to $-0.08 \pm 0.83\,\mathrm{dex}$ and $-0.11 \pm 0.92\,\mathrm{dex}$ from RAVE-on and RAVE DR5, respectively. 
Upon selecting the 53 stars with SNR $> 40$, the difference in $\log g$ is reduced to $0.03 \pm 0.38\,\mathrm{dex}$, as compared to $-0.06\pm0.72\,\mathrm{dex}$ and $0.00\pm0.83\,\mathrm{dex}$ from RAVE-on and RAVE DR5, respectively. } { The scatter in $T_\mathrm{eff}$ is also improved when adopting the temperatures presented here. The difference in $T_\mathrm{eff}$ between the high-resolution studies and those presented here is $75 \pm 282\,\mathrm{K}$ as compared to $51\pm420\,\mathrm{K}$ and $86\pm410\,\mathrm{K}$ from RAVE-on and RAVE DR5, respectively. Using the stars with SNR $> 40$, the difference in $T_\mathrm{eff}$ is $81\pm262\,\mathrm{K}$ as compared to $45\pm372\,\mathrm{K}$ and $87\pm390\,\mathrm{K}$ from RAVE-on and RAVE DR5, respectively. } { Figure~\ref{fig:external2} shows how the $T_\mathrm{eff}$ and $\log g$ parameters from RAVE DR5, RAVE-on and those presented here compare to those from GALAH DR1, a high-resolution (R$\sim$$28\,000$) spectroscopic survey. } { From 1379 overlap stars, the difference between GALAH $\log g$ and that presented here is $0.12\pm0.26\,\mathrm{dex}$, compared to $0.0\pm0.40\,\mathrm{dex}$ from RAVE-on and $0.3\pm0.54\,\mathrm{dex}$ from RAVE DR5. For the 753 overlap stars with the best RAVE stellar parameters (i.e., AlgoConv $= 0$ and SNR $> 40$), the difference between GALAH $\log g$ and that presented here is $0.11 \pm0.22\,\mathrm{dex}$, compared to $-0.03\pm0.33\,\mathrm{dex}$ from RAVE-on and $0.37\pm0.42\,\mathrm{dex}$ from RAVE DR5. } { Lastly, we present a comparison of the stellar parameters presented here to APOGEE (R$\sim$$22\,500$, Figure~\ref{fig:external3}). Note that we do not compare RAVE-on with APOGEE, as RAVE-on uses the RAVE-APOGEE overlap stars as part of their training sample. From 183 overlap stars, we find the difference between APOGEE $\log g$ and that presented here is $0.07\pm0.20\,\mathrm{dex}$, compared to $-0.11\pm0.49\,\mathrm{dex}$ from RAVE DR5.
The difference between APOGEE $T_\mathrm{eff}$ and that presented here is $-24 \pm124\,\mathrm{K}$, compared to $-58 \pm 210\,\mathrm{K}$ from RAVE DR5. The 146 overlap stars with AlgoConv $= 0$ and SNR $> 40$ have a difference in $\log g$ between APOGEE and that presented here of $0.08 \pm 0.19\,\mathrm{dex}$, compared to $-0.06\pm 0.39\,\mathrm{dex}$ from RAVE DR5. The difference between APOGEE $T_\mathrm{eff}$ and that presented here is $-23\pm101\,\mathrm{K}$, compared to $-69\pm 116\,\mathrm{K}$ from RAVE DR5. } { Therefore, from a variety of different high-resolution studies, we conclude that the scatter in $\log g$ is a factor of 2 smaller when using surface gravities from the reverse pipeline as compared to both RAVE-on and RAVE DR5 parameters. Even for stars with low SNR ($< 40$), the gravities and temperatures from the reverse pipeline are reliable, matching or exceeding the quality of those determined from the high-SNR stars in RAVE DR5 and RAVE-on. } It is our plan to use this method in an iterative fashion with the RAVE spectroscopic pipeline to improve the accuracy of our stellar parameters and therefore the RAVE abundance estimates. We anticipate that the results will be released as part of RAVE DR6. \section{Conclusions} \label{sec:conclusions} We have produced new distance, age and stellar parameter estimates for stars common to RAVE and TGAS which reflect new measurements of parallax and $T_\mathrm{eff}$ (from TGAS and the infra-red flux method, respectively). This allows us to produce distance estimates that are better than those that either RAVE or TGAS can achieve in isolation.
{ It also allows us to make age estimates which have better than 30 percent precision for 25 percent of the stars in our sample, and estimates of the stellar parameters which are roughly twice as accurate as from RAVE spectra alone (when compared to external catalogues).} RAVE is the spectroscopic survey with the largest number of sources in common with TGAS, and therefore this dataset has the largest number of sources with both radial velocities from spectroscopy and proper motions from space astrometry. The improvement in distance uncertainty due to this study provides a substantial decrease in the uncertainty on the 3D velocities of these stars. When combined with our age estimates, this gives new insight into the history of our Galaxy. We have carefully tested the RAVE distance pipeline and the TGAS parallaxes against one another. From this comparison we can draw several conclusions: \begin{enumerate} \item The RAVE DR5 parallaxes were overestimated for dwarfs with $T_\mathrm{eff}\gtrsim5500\,\mathrm{K}$ and underestimated for giants with $\log g\lesssim2.0\,\mathrm{dex}$. This corresponds to a $T_\mathrm{eff}$ underestimate in the former case, and a $\log g$ underestimate in the latter. We can (mostly) correct for the former by using the Infrared Flux Method (IRFM) temperatures provided with DR5, but correcting for the latter is beyond the scope of this study. \item When we use the IRFM $T_\mathrm{eff}$ values to find spectrophotometric parallaxes, the two parallax estimates agree well in the vast majority of cases, with systematic differences that are substantially smaller than the statistical ones. \item A comparison as a function of position on the sky indicates that the TGAS parallaxes appear to be overestimated by $\sim0.3\,\mathrm{mas}$ in a region of the sky near Galactic coordinates $(l,b)=(100^\circ,0^\circ)$ which is also near the ecliptic pole \citep[see also][]{GaiaDR1:Validation,ScAu17}.
\item The small random differences between the RAVE-only parallax estimates and the TGAS parallaxes, and the fact that this is found for many stellar types, suggest that the TGAS random uncertainties are overestimated by $\sim0.2\,\mathrm{mas}$. \end{enumerate} We provide flags with our distance estimates, as indicated in Table~\ref{tab:flags}. To use a `clean' set of stars we recommend that users take only stars with the flag `flag\_any=0'. This yields a set of $137\,699$ stars. As with previous distance estimates from RAVE, we characterise the output pdf from our distance pipeline by the expectation value and uncertainty in distance, distance modulus, and parallax, and by a multi-Gaussian pdf in distance modulus. This last option provides the most complete description of what the distance pipeline finds, though it is clearly less important here than it was before TGAS parallaxes became available (Figure~\ref{fig:Improvement}). The apparatus we have used for this study is applicable to data from any spectroscopic survey. It is our intention to apply it to data from the APOGEE survey in the near future. We will also produce distance estimates for RAVE stars that do not have TGAS parallaxes, using the AllWISE photometry and IRFM temperatures. These will have smaller systematic errors than the DR5 distances, particularly for hot dwarfs, because of the use of IRFM $T_\mathrm{eff}$ values. All of these distance estimates will be made available through the RAVE website (\url{http://dx.doi.org/10.17876/rave/dr.5/033} and \url{http://dx.doi.org/10.17876/rave/dr.5/034} for the distance estimates for sources with and without TGAS parallaxes, respectively). For TGAS sources they constitute a substantial improvement in distance and, therefore, velocity uncertainty over previous data releases. It is our hope that the new, more precise age and distance estimates are of great value in characterising the dynamics and history of our Galaxy.
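As an illustration of the multi-Gaussian characterisation of the distance pipeline's output, the pdf in distance modulus can be evaluated from a star's component weights, means and dispersions as in the following sketch. The component values here are made up for illustration; the catalogue's actual column layout is not assumed:

```python
import numpy as np

def multigauss_pdf(mu, weights, means, sigmas):
    """Evaluate a multi-Gaussian pdf in distance modulus mu.

    weights should sum to 1; means and sigmas are the per-component
    distance-modulus means and dispersions (all illustrative here).
    """
    w = np.asarray(weights)[:, None]
    m = np.asarray(means)[:, None]
    s = np.asarray(sigmas)[:, None]
    mu = np.atleast_1d(np.asarray(mu, dtype=float))[None, :]
    comp = w * np.exp(-0.5 * ((mu - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return comp.sum(axis=0)

# A two-component example pdf and its normalisation check.
mu_grid = np.linspace(5.0, 15.0, 2001)
p = multigauss_pdf(mu_grid, [0.7, 0.3], [9.0, 11.0], [0.3, 0.5])
norm = np.sum(p) * (mu_grid[1] - mu_grid[0])  # ~1 if components fit in the grid
# Distance in parsec follows from mu = 5 log10(d / 10 pc):
dist_pc = 10.0 ** (mu_grid / 5.0 + 1.0)
```

Summing the components in distance modulus rather than distance keeps each component close to Gaussian, which is why the catalogue stores the fit in this variable.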
\begin{table*} \caption{Data flags specific to this study. In all cases 1 indicates a potential problem with the distance estimate.\label{tab:flags}} \begin{center} \begin{tabular}{ll} \hline Name & Explanation \\ \hline flag\_low\_logg & $\log g_{\rm DR5}<2.0$ (see Section~\ref{sec:Giants}) \\ flag\_outlier & TGAS \& RAVE-only parallaxes differ by more than $4\sigma$ -- RAVE not used for final distances (see Section~\ref{sec:Outliers}) \\ flag\_N & Spectrum flagged as not normal by \cite{Maea12} \\ flag\_pole & Source lies in the problematic region near the ecliptic pole ($165^\circ<\lambda<195^\circ,\beta<-30^\circ$, see Section~\ref{sec:TGAS}). \\ flag\_dup & A spectrum of the same star with a higher SNR is in RAVE \\ flag\_any & True if any of the above are true, otherwise false \\ \hline \end{tabular} \end{center} \end{table*} \section*{Acknowledgements} The authors are grateful to the referee for numerous suggestions which improved the paper. Paul McMillan is grateful to Lennart Lindegren for suggesting looking at the variation on-sky, and to Louise Howes for a careful reading of the draft. Funding for the research in this study came from the Swedish National Space Board, the Royal Physiographic Society in Lund, and some of the computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at Lunarc under project SNIC 2016/4-17. Funding for RAVE has been provided by: the Australian Astronomical Observatory; the Leibniz-Institut fuer Astrophysik Potsdam (AIP); the Australian National University; the Australian Research Council; the French National Research Agency; the German Research Foundation (SPP 1177 and SFB 881); the European Research Council (ERC-StG 240271 Galactica); the Istituto Nazionale di Astrofisica at Padova; The Johns Hopkins University; the National Science Foundation of the USA (AST-0908326); the W. M. 
Keck foundation; the Macquarie University; the Netherlands Research School for Astronomy; the Natural Sciences and Engineering Research Council of Canada; the Slovenian Research Agency (research core funding No. P1-0188); the Swiss National Science Foundation; the Science \& Technology Facilities Council of the UK; Opticon; Strasbourg Observatory; and the Universities of Groningen, Heidelberg and Sydney. The RAVE web site is at https://www.rave-survey.org. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \bibliographystyle{mnras}
\section{Introduction} In \cite{Dvlag} we found differential equations of spectral type satisfied by the generalized Laguerre polynomials $\left\{L_n^{\alpha,M}(x)\right\}_{n=0}^{\infty}$ which are orthogonal on the interval $[0,\infty)$ with respect to the weight function $$\frac{1}{\Gamma(\alpha+1)}x^{\alpha}e^{-x}+M\delta(x),\;\alpha>-1,\;M\ge 0.$$ These orthogonal polynomials were introduced by T.H.~Koornwinder in \cite{Koorn}. In order to find the coefficients of these differential equations we had to solve systems of equations of the form \begin{equation} \label{sysLag} \sum_{i=1}^{\infty}a_i(x)D^iL_n^{(\alpha)}(x)=F_n(x),\;n=1,2,3,\ldots, \end{equation} where $\displaystyle D=\frac{d}{dx}$ denotes the differentiation operator. In \cite{Bavinck} H.~Bavinck showed that the coefficients $\{a_i(x)\}_{i=1}^{\infty}$ are uniquely determined and can be written in the form \begin{equation} \label{oplLag} a_i(x)=(-1)^i\sum_{j=1}^iL_{i-j}^{(-\alpha-i-1)}(-x)F_j(x),\;i=1,2,3\ldots. \end{equation} This result is based on the inversion formula \begin{equation} \label{invLag} \sum_{k=j}^iL_{i-k}^{(-\alpha-i-1)}(-x)L_{k-j}^{(\alpha+j)}(x)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots. \end{equation} This inversion formula was derived in a similar way to the inversion formula involving Charlier polynomials found in \cite{Charlier}. See also \cite{Invjac} and section~3 of this paper. For inversion formulas involving Meixner polynomials the reader is referred to \cite{Meixner}. See also \cite{Meixner2}. In \cite{Soblag} we used the inversion formula (\ref{invLag}) to find differential equations of spectral type satisfied by the Sobolev-type Laguerre polynomials $\left\{L_n^{\alpha,M,N}(x)\right\}_{n=0}^{\infty}$ which are orthogonal with respect to the Sobolev-type inner product $$<f,g>\,=\frac{1}{\Gamma(\alpha+1)}\int_{0}^{\infty}x^{\alpha}e^{-x}f(x)g(x)dx +Mf(0)g(0)+Nf'(0)g'(0),$$ where $\alpha>-1$, $M\ge 0$ and $N\ge 0$.
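The inversion formula (\ref{invLag}) is easy to check for small indices with a computer algebra system. The following sketch, using SymPy's \texttt{assoc\_laguerre} for $L_n^{(\alpha)}(x)$, verifies it symbolically in $\alpha$ and $x$:

```python
from sympy import symbols, assoc_laguerre, expand

alpha, x = symbols('alpha x')

def lhs(i, j):
    # Left-hand side of the inversion formula:
    # sum_{k=j}^{i} L_{i-k}^{(-alpha-i-1)}(-x) * L_{k-j}^{(alpha+j)}(x)
    return expand(sum(
        assoc_laguerre(i - k, -alpha - i - 1, -x)
        * assoc_laguerre(k - j, alpha + j, x)
        for k in range(j, i + 1)
    ))

# The sum collapses to the Kronecker delta delta_{ij}.
for i in range(5):
    for j in range(i + 1):
        assert lhs(i, j) == (1 if i == j else 0)
```

Since both sides are polynomials in $\alpha$ and $x$, \texttt{expand} alone suffices to reduce the sum to $0$ or $1$.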
In \cite{Koorn} T.H.~Koornwinder also introduced the generalized Jacobi polynomials $\left\{P_n^{\alpha,\beta,M,N}(x)\right\}_{n=0}^{\infty}$ which are orthogonal on the interval $[-1,1]$ with respect to the weight function $$\frac{\Gamma(\alpha+\beta+2)}{2^{\alpha+\beta+1}\Gamma(\alpha+1)\Gamma(\beta+1)} (1-x)^{\alpha}(1+x)^{\beta}+M\delta(x+1)+N\delta(x-1),$$ where $\alpha>-1$, $\beta>-1$, $M\ge 0$ and $N\ge 0$. In \cite{Search} we were looking for differential equations of spectral type satisfied by these generalized Jacobi polynomials. The general case turned out to be very difficult, but in \cite{Symjac} we were able to solve this problem in the special case that $\beta=\alpha$ and $N=M$. In order to find the coefficients of these differential equations we had to solve systems of equations of the form $$\sum_{i=1}^{\infty}c_i(x)D^iP_n^{(\alpha,\beta)}(x)=F_n(x), \;n=1,2,3,\ldots.$$ In \cite{Invjac} we showed that the coefficients $\{c_i(x)\}_{i=1}^{\infty}$ are unique and that they can be written in the form $$c_i(x)=2^i\sum_{j=1}^i\frac{\alpha+\beta+2j+1}{(\alpha+\beta+j+1)_{i+1}} P_{i-j}^{(-\alpha-i-1,-\beta-i-1)}(x)F_j(x),\;i=1,2,3\ldots.$$ This result is based on the inversion formula \begin{eqnarray} \label{invJac} & &\sum_{k=j}^i\frac{\alpha+\beta+2k+1}{(\alpha+\beta+k+j+1)_{i-j+1}}\nonumber\\ & &{}\times P_{i-k}^{(-\alpha-i-1,-\beta-i-1)}(x) P_{k-j}^{(\alpha+j,\beta+j)}(x)=\delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots, \end{eqnarray} which is proved in \cite{Invjac}. This inversion formula was derived in a completely different way than the inversion formulas mentioned before. In \cite{Madrid} it is shown that this inversion formula (with $\beta=\alpha$) can be used to derive the results obtained in \cite{Symjac} in an easier way. Finally in \cite{Dvjac} this inversion formula is used to solve the problem for all $\alpha>-1$, $\beta>-1$, $M\ge 0$ and $N\ge 0$. 
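As with the Laguerre case, the inversion formula (\ref{invJac}) can be checked symbolically for small indices. A sketch with SymPy's \texttt{jacobi} and the rising factorial \texttt{rf} for the Pochhammer symbol:

```python
from sympy import symbols, jacobi, rf, simplify

a, b, x = symbols('alpha beta x')

def lhs(i, j):
    # Left-hand side of the Jacobi inversion formula for j <= i.
    s = sum(
        (a + b + 2*k + 1) / rf(a + b + k + j + 1, i - j + 1)
        * jacobi(i - k, -a - i - 1, -b - i - 1, x)
        * jacobi(k - j, a + j, b + j, x)
        for k in range(j, i + 1)
    )
    return simplify(s)

# The rational coefficients cancel and the sum reduces to delta_{ij}.
for i in range(4):
    for j in range(i + 1):
        assert lhs(i, j) == (1 if i == j else 0)
```

Here \texttt{simplify} is needed (rather than plain expansion) because the Pochhammer denominators must cancel before the sum collapses.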
In this paper we will derive several kinds of inversion formulas and we will show how they can be applied to find coefficients of differential equations for generalizations of some classical orthogonal polynomials. \section{Some classical orthogonal polynomials} In this section we will recall some formulas involving classical orthogonal polynomials which we will use in this paper. For details the reader is referred to \cite{Askey}. The Meixner-Pollaczek polynomials $\left\{P_n^{(\lambda)}(x;\phi)\right\}_{n=0}^{\infty}$ can be defined by their generating function \begin{equation} \label{genMP} \left(1-e^{i\phi}t\right)^{-\lambda+ix}\left(1-e^{-i\phi}t\right)^{-\lambda-ix} =\sum_{n=0}^{\infty}P_n^{(\lambda)}(x;\phi)t^n. \end{equation} The classical Jacobi polynomials $\left\{P_n^{(\alpha,\beta)}(x)\right\}_{n=0}^{\infty}$ can be defined for all $\alpha$ and $\beta$ and $n\in\{0,1,2,\ldots\}$ by \begin{equation} \label{defJac} P_n^{(\alpha,\beta)}(x)=\sum_{k=0}^n\frac{(n+\alpha+\beta+1)_k}{k!} \frac{(\alpha+k+1)_{n-k}}{(n-k)!}\left(\frac{x-1}{2}\right)^k. \end{equation} They satisfy the orthogonality relation \begin{eqnarray*} & &\frac{\Gamma(\alpha+\beta+2)}{2^{\alpha+\beta+1}\Gamma(\alpha+1)\Gamma(\beta+1)} \int_{-1}^1(1-x)^{\alpha}(1+x)^{\beta}P_m^{(\alpha,\beta)}(x)P_n^{(\alpha,\beta)}(x)dx\\ & &{}\hspace{1cm}=\frac{\alpha+\beta+1}{2n+\alpha+\beta+1}\, \frac{(\alpha+1)_n(\beta+1)_n}{(\alpha+\beta+1)_n\,n!}\,\delta_{mn},\;m,n=0,1,2,\ldots. \end{eqnarray*} The Gegenbauer or ultraspherical polynomials $\left\{G_n^{(\lambda)}(x)\right\}_{n=0}^{\infty}$ form a special case of the classical Jacobi polynomials. In fact we have \begin{equation} \label{rel1} G_n^{(\lambda)}(x)=\frac{(2\lambda)_n}{(\lambda+\frac{1}{2})_n} P_n^{(\lambda-\frac{1}{2},\lambda-\frac{1}{2})}(x), \;\lambda>-\frac{1}{2},\;\lambda\ne 0. 
\end{equation} These ultraspherical polynomials can also be defined by their generating function \begin{equation} \label{genultra} (1-2xt+t^2)^{-\lambda}=\sum_{n=0}^{\infty}G_n^{(\lambda)}(x)t^n. \end{equation} The special case $\lambda=0$ needs another normalization. In that case we have the Chebyshev polynomials of the first kind $\left\{T_n(x)\right\}_{n=0}^{\infty}$ given by $$T_n(x)=\frac{P_n^{(-\frac{1}{2},-\frac{1}{2})}(x)}{P_n^{(-\frac{1}{2},-\frac{1}{2})}(1)}= {}_2F_1\left(\left.{{-n, n} \atop \frac{1}{2}}\right|\frac{1-x}{2}\right),\;n=0,1,2,\ldots.$$ Their generating function equals \begin{equation} \label{genCheb1} \frac{1-xt}{1-2xt+t^2}=\sum_{n=0}^{\infty}T_n(x)t^n. \end{equation} The Chebyshev polynomials of the second kind $\left\{U_n(x)\right\}_{n=0}^{\infty}$ are given by $$U_n(x)=(n+1)\,\frac{P_n^{(\frac{1}{2},\frac{1}{2})}(x)}{P_n^{(\frac{1}{2},\frac{1}{2})}(1)}= (n+1)\,{}_2F_1\left(\left.{{-n, n+2} \atop \frac{3}{2}}\right|\frac{1-x}{2}\right),\;n=0,1,2,\ldots.$$ These polynomials can also be defined by their generating function \begin{equation} \label{genCheb2} \frac{1}{1-2xt+t^2}=\sum_{n=0}^{\infty}U_n(x)t^n. \end{equation} Finally the classical Legendre (or spherical) polynomials $\left\{P_n(x)\right\}_{n=0}^{\infty}$ form another special case of the classical Jacobi polynomials. In fact we have $$P_n(x)=P_n^{(0,0)}(x)=\sum_{k=0}^n\frac{(n+k)!}{(n-k)!\,(k!)^2} \left(\frac{x-1}{2}\right)^k,\;n=0,1,2,\ldots.$$ These Legendre polynomials can also be defined by their generating function \begin{equation} \label{genLeg} \frac{1}{\sqrt{1-2xt+t^2}}=\sum_{n=0}^{\infty}P_n(x)t^n. \end{equation} Note that the Legendre polynomials also form a special case of the ultraspherical polynomials, since we have \begin{equation} \label{rel2} P_n(x)=G_n^{(\frac{1}{2})}(x),\;n=0,1,2,\ldots. 
\end{equation} The classical Laguerre polynomials $\left\{L_n^{(\alpha)}(x)\right\}_{n=0}^{\infty}$ can be defined for all $\alpha$ and $n\in\{0,1,2,\ldots\}$ as $$L_n^{(\alpha)}(x)=\sum_{k=0}^n(-1)^k\left({n+\alpha \atop n-k}\right) \frac{x^k}{k!}=\sum_{k=0}^n(-1)^k\frac{(\alpha+k+1)_{n-k}}{(n-k)!}\, \frac{x^k}{k!}.$$ The generating function for the classical Laguerre polynomials is given by \begin{equation} \label{genLag} (1-t)^{-\alpha-1}\exp\left(\frac{xt}{t-1}\right)= \sum_{n=0}^{\infty}L_n^{(\alpha)}(x)t^n. \end{equation} Further we have for $n=0,1,2,\ldots$ \begin{equation} \label{diffLag} D^iL_n^{(\alpha)}(x)=(-1)^iL_{n-i}^{(\alpha+i)}(x),\;i=0,1,2,\ldots,n. \end{equation} Another family of continuous orthogonal polynomials is the one named after Hermite. The classical Hermite polynomials $\left\{H_n(x)\right\}_{n=0}^{\infty}$ can be defined by their generating function \begin{equation} \label{genHer} \exp\left(xt-\frac{1}{4}t^2\right)=\sum_{n=0}^{\infty}H_n(x)t^n. \end{equation} Here we used another normalization than in \cite{Askey}. This one turns out to be more convenient in this paper. These classical Hermite polynomials satisfy the orthogonality relation $$\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-x^2}H_m(x)H_n(x)dx= \frac{\delta_{mn}}{2^n\,n!},\;m,n=0,1,2,\ldots.$$ Further we have for $n=0,1,2,\ldots$ \begin{equation} \label{diffHer} D^iH_n(x)=H_{n-i}(x),\;i=0,1,2,\ldots,n. \end{equation} From the generating function it follows that \begin{equation} \label{nulHer} H_{2n+1}(0)=0\;\textrm{ and }\; H_{2n}(0)=\frac{(-1)^n}{2^{2n}\,n!},\;n=0,1,2,\ldots. \end{equation} Further we will use the kernels \begin{equation} \label{kernHer} K_n(x,y)=\sum_{k=0}^n2^k\,k!\,H_k(x)H_k(y),\;n=0,1,2,\ldots. 
\end{equation} By using (\ref{nulHer}) we easily find that for $n=0,1,2,\ldots$ \begin{equation} \label{kernHer1} K_{2n+1}(x,0)=K_{2n}(x,0)=\sum_{k=0}^n(-1)^k\frac{(2k)!}{k!}H_{2k}(x) \end{equation} and \begin{equation} \label{kernHer2} K_{2n+1}(0,0)=K_{2n}(0,0)=\sum_{k=0}^n\frac{(2k)!}{2^{2k}\,(k!)^2} =\sum_{k=0}^n\frac{(\frac{1}{2})_k}{k!}=\frac{(\frac{3}{2})_n}{n!}. \end{equation} Finally we will consider the discrete orthogonal polynomials named after Meixner and Charlier. We choose normalizations different from those in \cite{Askey}. The classical Meixner polynomials $\left\{M_n^{(\beta)}(x;c)\right\}_{n=0}^{\infty}$ can be defined by their generating function \begin{equation} \label{genMeixner} \left(1-\frac{t}{c}\right)^x\left(1-t\right)^{-x-\beta}= \sum_{n=0}^{\infty}M_n^{(\beta)}(x;c)t^n. \end{equation} The Meixner polynomials are connected to the classical Jacobi polynomials in the following way \begin{equation} \label{rel3} M_n^{(\beta)}(x;c)=P_n^{(\beta-1,-n-\beta-x)}\left(\frac{2-c}{c}\right), \;n=0,1,2,\ldots. \end{equation} The classical Charlier polynomials $\left\{C_n^{(a)}(x)\right\}_{n=0}^{\infty}$ can also be defined by their generating function \begin{equation} \label{genChar} e^{-at}\left(1+t\right)^x=\sum_{n=0}^{\infty}C_n^{(a)}(x)t^n. \end{equation} \section{Some inversion formulas} In \cite{Charlier} we observed that the generating function (\ref{genChar}) implies that $$1=e^{-at}(1+t)^xe^{at}(1+t)^{-x}=\sum_{n=0}^{\infty} \left(\sum_{k=0}^nC_k^{(a)}(x)C_{n-k}^{(-a)}(-x)\right)t^n,$$ which implies that $$\sum_{k=0}^nC_k^{(a)}(x)C_{n-k}^{(-a)}(-x)= \left\{\begin{array}{ll} 1, & n=0\\ 0, & n=1,2,3\ldots \end{array}\right.$$ or \begin{equation} \label{invChar} \sum_{k=j}^iC_{i-k}^{(-a)}(-x)C_{k-j}^{(a)}(x)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots. \end{equation} As already indicated in \cite{Invjac} this formula (\ref{invChar}) can be interpreted as follows. 
If we define the matrix $T=(t_{ij})_{i,j=0}^n$ with entries $$t_{ij}=\left\{\begin{array}{ll} C_{i-j}^{(a)}(x), & j\le i\\ 0, & j>i, \end{array}\right.$$ then this matrix $T$ is a triangular matrix with determinant $1$ and the inverse $U$ of $T$ is given by $T^{-1}=U=(u_{ij})_{i,j=0}^n$ with entries $$u_{ij}=\left\{\begin{array}{ll} C_{i-j}^{(-a)}(-x), & j\le i\\ 0, & j>i. \end{array}\right.$$ Therefore we call (\ref{invChar}) an inversion formula. In the same way we find from the generating function (\ref{genLag}) for the classical Laguerre polynomials $$\sum_{k=j}^iL_{i-k}^{(-\alpha-2)}(-x)L_{k-j}^{(\alpha)}(x)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots.$$ However, in view of (\ref{diffLag}) this inversion formula cannot be used to solve systems of equations of the form (\ref{sysLag}). In \cite{Bavinck} H.~Bavinck observed that it also follows from the generating function (\ref{genLag}) that \begin{eqnarray*} (1-t)^{i-j-1}&=&(1-t)^{-\alpha-j-1}\exp\left(\frac{xt}{t-1}\right) (1-t)^{\alpha+i}\exp\left(\frac{-xt}{t-1}\right)\\ &=&\sum_{n=0}^{\infty}\left(\sum_{k=0}^nL_k^{(\alpha+j)}(x) L_{n-k}^{(-\alpha-i-1)}(-x)\right)t^n \end{eqnarray*} which implies, by comparing the coefficients of $t^{i-j}$ on both sides, that $$\sum_{k=0}^{i-j}L_k^{(\alpha+j)}(x)L_{i-j-k}^{(-\alpha-i-1)}(-x)=\delta_{ij}, \;j\le i,\;i,j=0,1,2,\ldots,$$ which is equivalent to (\ref{invLag}). This inversion formula implies that the system of equations (\ref{sysLag}) has the unique solution given by (\ref{oplLag}).
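The derivation via generating functions can be mirrored directly in a computer algebra system: expand the generating function (\ref{genChar}), read off the Charlier coefficients, and check the convolution (\ref{invChar}). A sketch with SymPy:

```python
from sympy import symbols, exp, series, expand

a, x, t = symbols('a x t')
N = 5
# C_n^{(a)}(x) is the coefficient of t^n in e^{-at}(1+t)^x.
gen = series(exp(-a*t) * (1 + t)**x, t, 0, N).removeO()

def charlier(n, aa, xx):
    return gen.coeff(t, n).subs({a: aa, x: xx})

def lhs(i, j):
    # sum_{k=j}^{i} C_{i-k}^{(-a)}(-x) * C_{k-j}^{(a)}(x)
    return expand(sum(
        charlier(i - k, -a, -x) * charlier(k - j, a, x)
        for k in range(j, i + 1)
    ))

# The convolution reduces to the Kronecker delta delta_{ij}.
for i in range(N):
    for j in range(i + 1):
        assert lhs(i, j) == (1 if i == j else 0)
```

This is exactly the observation that the two generating functions multiply to $1$, restated coefficient by coefficient.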
\section{More (inversion) formulas} Applying the method described in the preceding section to the generating function (\ref{genCheb2}) for the Chebyshev polynomials of the second kind and the generating function (\ref{genLeg}) for the Legendre polynomials we obtain \begin{eqnarray*} \sum_{n=0}^{\infty}U_n(x)t^n&=&\frac{1}{1-2xt+t^2}= \frac{1}{\sqrt{1-2xt+t^2}}\frac{1}{\sqrt{1-2xt+t^2}}\\ &=&\sum_{k=0}^{\infty}P_k(x)t^k\sum_{m=0}^{\infty}P_m(x)t^m= \sum_{n=0}^{\infty}\left(\sum_{k=0}^nP_k(x)P_{n-k}(x)\right)t^n, \end{eqnarray*} which implies that $$\sum_{k=0}^nP_k(x)P_{n-k}(x)=U_n(x),\;n=0,1,2,\ldots.$$ Another interesting formula of this kind can be found by using the generating function (\ref{genCheb2}) for the Chebyshev polynomials of the second kind and the generating function (\ref{genCheb1}) for the Chebyshev polynomials of the first kind. In fact, we have \begin{eqnarray*} \sum_{n=0}^{\infty}U_n(x)t^n&=&\frac{1}{1-2xt+t^2}= \frac{1}{1-xt}\frac{1-xt}{\sqrt{1-2xt+t^2}}\\ &=&\sum_{k=0}^{\infty}x^kt^k\sum_{m=0}^{\infty}T_m(x)t^m= \sum_{n=0}^{\infty}\left(\sum_{k=0}^nx^kT_{n-k}(x)\right)t^n, \end{eqnarray*} which implies that \begin{equation} \label{relCheb} \sum_{k=0}^nx^kT_{n-k}(x)=U_n(x),\;n=0,1,2,\ldots. \end{equation} As before we can use the generating function (\ref{genultra}) for the ultraspherical polynomials to obtain the inversion formula \begin{equation} \label{invultra} \sum_{k=j}^iG_{i-k}^{(-\lambda)}(x)G_{k-j}^{(\lambda)}(x)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots. \end{equation} In view of (\ref{rel2}) the special (limit) case $\lambda=\frac{1}{2}$ should lead to an inversion formula for the Legendre polynomials. If we define for every positive integer $N$ the matrix $A=(a_{ij})_{i,j=1}^N$ with entries $$a_{ij}=\left\{\begin{array}{ll} P_{i-j}(x), & j\le i\\ 0, & j>i, \end{array}\right.$$ then this matrix is a triangular matrix with determinant $1$ and hence invertible. 
Now we have $G_0^{(\lambda)}(x)=1$, $G_1^{(\lambda)}(x)=2\lambda x\;\rightarrow\;-x$ for $\lambda\rightarrow -\frac{1}{2}$ and for $n=2,3,4,\ldots$ $$G_n^{(\lambda)}(x)=\frac{(2\lambda)_n}{(\lambda+\frac{1}{2})_n} P_n^{(\lambda-\frac{1}{2},\lambda-\frac{1}{2})}(x) \;\rightarrow\;\frac{-2}{n-1}P_n^{(-1,-1)}(x)\;\textrm{ for }\; \lambda\rightarrow -\frac{1}{2}.$$ By using (\ref{defJac}) we then have for $n=2,3,4,\ldots$ \begin{equation} B_n(x):=\frac{-2}{n-1}P_n^{(-1,-1)}(x)= \frac{1}{n}(1-x)P_{n-1}^{(1,-1)}(x). \end{equation} Hence, the inverse $A^{-1}=B=(b_{ij})_{i,j=1}^N$ is given by $$b_{ij}=\left\{\begin{array}{ll} 0, & i<j\\ 1, & i=j\\ -x, & i=j+1\\ B_{i-j}(x), & i\ge j+2. \end{array}\right.$$ In case of the Chebyshev polynomials of the second kind we can obtain an inversion formula as follows. If we define for every positive integer $N$ the matrix $A=(a_{ij})_{i,j=1}^N$ with entries $$a_{ij}=\left\{\begin{array}{ll} U_{i-j}(x), & j\le i\\ 0, & j>i, \end{array}\right.$$ then this matrix is a triangular matrix with determinant $1$ and hence invertible. It is not difficult to show that its inverse $A^{-1}=B=(b_{ij})_{i,j=1}^N$ is given by $$b_{ij}=\left\{\begin{array}{ll} 1, & i=j\\ -2x, & i=j+1\\ 1, & i=j+2\\ 0, & \textrm{otherwise.} \end{array}\right.$$ This can be shown by writing $$BA=C=(c_{ij})_{i,j=1}^N\;\textrm{ with }\;c_{ij}=\sum_{k=1}^Nb_{ik}a_{kj}$$ and showing that $C=I$, the identity matrix. This is done by using the well-known relation $$U_n(x)-2xU_{n+1}(x)+U_{n+2}(x)=0,\;n=0,1,2,\ldots.$$ In case of the Chebyshev polynomials of the first kind we consider the matrix $A=(a_{ij})_{i,j=1}^N$ for every positive integer $N$ with entries $$a_{ij}=\left\{\begin{array}{ll} T_{i-j}(x), & j\le i\\ 0, & j>i. \end{array}\right.$$ Then this matrix is also a triangular matrix with determinant $1$ and hence invertible. 
The inverse $A^{-1}=B=(b_{ij})_{i,j=1}^N$ is given by $$b_{ij}=\left\{\begin{array}{ll} 0, & i<j\\ 1, & i=j\\ -x, & i=j+1\\ x^{i-j-2}(1-x^2), & i\ge j+2. \end{array}\right.$$ This can also be shown by writing $$BA=C=(c_{ij})_{i,j=1}^N\;\textrm{ with }\;c_{ij}=\sum_{k=1}^Nb_{ik}a_{kj}$$ and showing that $C=I$, the identity matrix. This is done by using the formula (\ref{relCheb}) and the well-known relation $$(1-x^2)U_n(x)-xT_{n+1}(x)+T_{n+2}(x)=0,\;n=0,1,2,\ldots.$$ The generating function (\ref{genMP}) can also be used to find inversion formulas involving Meixner-Pollaczek polynomials. In fact, we have $$\sum_{k=j}^iP_{i-k}^{(-\lambda)}(-x;\phi)P_{k-j}^{(\lambda)}(x;\phi) =\delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots$$ or $$\sum_{k=j}^iP_{i-k}^{(-\lambda)}(x;-\phi)P_{k-j}^{(\lambda)}(x;\phi) =\delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots.$$ By using the generating function (\ref{genMeixner}) for the Meixner polynomials we find the inversion formula \begin{equation} \label{invMeixner} \sum_{k=j}^iM_{i-k}^{(-\beta)}(-x;c)M_{k-j}^{(\beta)}(x;c)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots. \end{equation} We remark that this inversion formula is different from the one obtained in \cite{Meixner}. See also \cite{Meixner2} for an application of that inversion formula. Note that the generating function (\ref{genHer}) for the classical Hermite polynomials implies that $$1=\exp\left(xt-\frac{1}{4}t^2\right)\exp\left(-xt+\frac{1}{4}t^2\right) =\sum_{n=0}^{\infty}\left(\sum_{k=0}^nH_k(x)H_{n-k}(ix)i^{n-k}\right)t^n,$$ which implies that $$\sum_{k=0}^nH_k(x)H_{n-k}(ix)i^{n-k}= \left\{\begin{array}{ll} 1, & n=0\\ 0, & n=1,2,3,\ldots. \end{array}\right.$$ In view of (\ref{diffHer}) this formula can be used as follows. 
A system of equations of the form \begin{equation} \label{sysHer} F_n(x)=\sum_{k=1}^{\infty}a_k(x)D^kH_n(x),\;n=1,2,3,\ldots, \end{equation} where the coefficients $\left\{a_k(x)\right\}_{k=1}^{\infty}$ are polynomials which are independent of $n$, has the unique solution \begin{equation} \label{oplHer} a_k(x)=\sum_{j=1}^ki^{k-j}H_{k-j}(ix)F_j(x),\;k=1,2,3,\ldots. \end{equation} \section{Inversion formulas involving Jacobi polynomials} In \cite{Invjac} we have found the inversion formula (\ref{invJac}) involving Jacobi polynomials. As mentioned before, this formula was found in a completely different way. The well-known generating function for the classical Jacobi polynomials has a different structure, so the method used before cannot be applied in that case. In \cite{Invjac} we proved that for $n=0,1,2,\ldots$ we have $$\sum_{k=0}^n\frac{\alpha+\beta+2k+1}{(\alpha+\beta+k+1)_{n+1}} P_k^{(\alpha,\beta)}(x)P_{n-k}^{(-n-\alpha-1,-n-\beta-1)}(y)= \frac{1}{n!}\left(\frac{x-y}{2}\right)^n.$$ Setting $y=x$ leads to the inversion formula (\ref{invJac}). The choice $y=-x$ leads to a formula which was used in \cite{Madrid} in the case $\beta=\alpha$. By using the relation (\ref{rel3}) between the Meixner and the Jacobi polynomials we find from the inversion formula (\ref{invMeixner}) for the Meixner polynomials that $$\sum_{k=j}^iP_{i-k}^{(-\alpha-2,-\beta-i+k)}(x)P_{k-j}^{(\alpha,\beta-k+j)}(x) =\delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots.$$ Another inversion formula involving Jacobi polynomials can be obtained from the inversion formula (\ref{invultra}) for the ultraspherical polynomials. By using (\ref{rel1}) and after setting $\lambda=\alpha+\frac{1}{2}$ this leads to \begin{eqnarray*} & &\sum_{k=j}^i\frac{(-2\alpha-1)_{i-k}}{(-\alpha)_{i-k}} \frac{(2\alpha+1)_{k-j}}{(\alpha+1)_{k-j}}\\ & &{}\hspace{1cm}\times P_{i-k}^{(-\alpha-1,-\alpha-1)}(x)P_{k-j}^{(\alpha,\alpha)}(x)= \delta_{ij},\;j\le i,\;i,j=0,1,2,\ldots. 
\end{eqnarray*} \section{Applications to differential equations} In this section we will investigate the generalized Hermite polynomials $\left\{H_n^M(x)\right\}_{n=0}^{\infty}$ which are orthogonal on the real line with respect to the weight function $$w(x)=\frac{1}{\sqrt{\pi}}e^{-x^2}+M\delta(x),\;M\ge 0.$$ In \cite{Opsym} these generalized Hermite polynomials are called special (linear) perturbations of the classical Hermite polynomials. They can be represented in terms of the kernels (\ref{kernHer}) as (see \cite{Opsym}) $$H_n^M(x)=H_n(x)+MQ_n(x),\;n=0,1,2,\ldots,$$ where $Q_0(x)=0$ and $$Q_n(x)=\left|\begin{array}{cc} H_n(x) & K_{n-1}(x,0)\\ H_n(0) & K_{n-1}(0,0) \end{array}\right|=\sum_{k=0}^nq_{n,k}H_k(x),\;n=1,2,3,\ldots$$ with, by using (\ref{kernHer1}), for $n=1,2,3,\ldots$ $$q_{n,n}=K_{n-1}(0,0)\;\textrm{ and }\;q_{n,k}=-2^k\,k!\,H_k(0)H_n(0), \;k=0,1,2,\ldots,n-1.$$ In \cite{Opsym} it is shown that these generalized Hermite polynomials satisfy a differential equation of the form \begin{equation} \label{dvHer} M\sum_{k=1}^{\infty}a_k(x)y^{(k)}(x)+y''(x)-2xy'(x)+(2n+M\alpha_n)y(x)=0, \end{equation} where the coefficients $\left\{a_k(x)\right\}_{k=1}^{\infty}$ are polynomials with degree$[a_k(x)]\le k$, $k=1,2,3,\ldots$, which are independent of $n$. Moreover, it is shown that the ``eigenvalue'' parameters $\left\{\alpha_{2n+1}\right\}_{n=0}^{\infty}$ can be chosen arbitrarily, $$\alpha_0=0\;\textrm{ and }\;\alpha_{2n}= \sum_{j=1}^n(\lambda_{2j}-\lambda_{2j-2})q_{2j,2j},\;n=1,2,3,\ldots,$$ where $\lambda_n=2n,\;n=0,1,2,\ldots$. 
Hence, $\lambda_{2j}-\lambda_{2j-2}=4,\;j=1,2,3,\ldots$ and by using (\ref{kernHer2}) $$q_{2j,2j}=K_{2j-1}(0,0)=\frac{(\frac{3}{2})_{j-1}}{(j-1)!},\;j=1,2,3,\ldots.$$ This implies that $$\alpha_{2n}=4\sum_{j=1}^n\frac{(\frac{3}{2})_{j-1}}{(j-1)!}= 4\sum_{k=0}^{n-1}\frac{(\frac{3}{2})_k}{k!}=4\frac{(\frac{5}{2})_{n-1}}{(n-1)!}, \;n=1,2,3,\ldots.$$ In order to find the coefficients $\left\{a_k(x)\right\}_{k=1}^{\infty}$ we set $y(x)=H_n^M(x)=H_n(x)+MQ_n(x)$ in the differential equation (\ref{dvHer}) and view the left-hand side as a polynomial in $M$. Then the coefficients of this polynomial must vanish, hence $$\sum_{k=1}^{\infty}a_k(x)D^kH_n(x)=-\alpha_nH_n(x)-Q_n''(x)+2xQ_n'(x)-2nQ_n(x), \;n=0,1,2,\ldots$$ and $$\sum_{k=1}^{\infty}a_k(x)D^kQ_n(x)=-\alpha_nQ_n(x),\;n=0,1,2,\ldots.$$ Since we have, by using (\ref{nulHer}) and (\ref{kernHer2}), $$Q_{2n+1}(x)=K_{2n}(0,0)H_{2n+1}(x)=\frac{(\frac{3}{2})_n}{n!}H_{2n+1}(x), \;n=0,1,2,\ldots$$ and $$\frac{(\frac{3}{2})_n}{n!}\ne 0,\;n=0,1,2,\ldots$$ both systems of equations lead to \begin{equation} \label{Her1} \sum_{k=1}^{2n+1}a_k(x)H_{2n+1-k}(x)=-\alpha_{2n+1}H_{2n+1}(x),\;n=0,1,2,\ldots. \end{equation} Further we have $$Q_{2n}(x)=\sum_{k=0}^nq_{2n,2k}H_{2k}(x),\;n=1,2,3,\ldots,$$ where, by using (\ref{kernHer2}), $$q_{2n,2n}=K_{2n-1}(0,0)=\frac{(\frac{3}{2})_{n-1}}{(n-1)!},\;n=1,2,3,\ldots$$ and, by using (\ref{nulHer}), for $k=0,1,2,\ldots,n-1$ and $n=1,2,3,\ldots$ $$q_{2n,2k}=-2^{2k}(2k)!\,H_{2k}(0)H_{2n}(0) =\frac{(-1)^{n+k+1}(2k)!}{2^{2n}\,n!\,k!}.$$ Now we have for $n=1,2,3,\ldots$ \begin{eqnarray*} Q_{2n}''(x)-2xQ_{2n}'(x)&=& \sum_{k=0}^nq_{2n,2k}\left[H_{2k}''(x)-2xH_{2k}'(x)\right]\\ &=&\sum_{k=0}^nq_{2n,2k}\left[-4kH_{2k}(x)\right]. 
\end{eqnarray*} Hence, for $n=1,2,3,\ldots$ we obtain \begin{eqnarray*} -\alpha_{2n}H_{2n}(x)-Q_{2n}''(x)+2xQ_{2n}'(x)-4nQ_{2n}(x)\\ =-\alpha_{2n}H_{2n}(x)-4\sum_{k=0}^n(n-k)q_{2n,2k}H_{2k}(x), \end{eqnarray*} which leads for $n=1,2,3,\ldots$ to \begin{equation} \label{Her2} \sum_{k=1}^{2n}a_k(x)H_{2n-k}(x)= -\alpha_{2n}H_{2n}(x)-4\sum_{k=0}^n(n-k)q_{2n,2k}H_{2k}(x). \end{equation} Hence, with (\ref{Her1}) and (\ref{Her2}) we have found that $$\sum_{k=1}^{\infty}a_k(x)D^kH_n(x)=\sum_{k=1}^na_k(x)H_{n-k}(x)=F_n(x),\;n=1,2,3,\ldots,$$ where $$\left\{\begin{array}{l} F_{2n+1}(x)=-\alpha_{2n+1}H_{2n+1}(x),\;n=0,1,2,\ldots\\ F_{2n}(x)=-\alpha_{2n}H_{2n}(x)-4\displaystyle\sum_{k=0}^n(n-k)q_{2n,2k}H_{2k}(x),\;n=1,2,3,\ldots. \end{array}\right.$$ This system of equations has the form (\ref{sysHer}). So we can use (\ref{oplHer}) to conclude that $$a_k(x)=\sum_{j=1}^ki^{k-j}H_{k-j}(ix)F_j(x),\;k=1,2,3,\ldots.$$ We emphasize that these generalized Hermite polynomials are orthogonal with respect to a weight function consisting of the classical Hermite weight function and a Dirac delta distribution at the origin. Therefore these generalized Hermite polynomials could be considered as Krall-Hermite polynomials, but these are quite different from the Krall-Hermite polynomials considered in \cite{Haine} which are not orthogonal. Finally, in \cite{Kwon} it is shown that these generalized Hermite polynomials cannot satisfy a finite order differential equation of the form (\ref{dvHer}).
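The inversion identity underlying (\ref{oplHer}) is also easy to verify by machine. The sketch below (plain Python with exact rational arithmetic; the recurrence $(n+1)H_{n+1}(x)=xH_n(x)-\tfrac{1}{2}H_{n-1}(x)$ follows by differentiating the generating function (\ref{genHer}) with respect to $t$, so the normalization matches the one used here, and $G_m(x):=i^mH_m(ix)$ is generated by $\exp(-xt+\tfrac{1}{4}t^2)$, hence satisfies $(m+1)G_{m+1}(x)=-xG_m(x)+\tfrac{1}{2}G_{m-1}(x)$) checks that $\sum_{k=0}^nH_k(x)\,i^{n-k}H_{n-k}(ix)$ vanishes identically for $n\ge 1$:

```python
from fractions import Fraction

def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(p, c):
    return [c * a for a in p]

N = 8
x = [Fraction(0), Fraction(1)]  # the polynomial "x"

# H_n from (n+1) H_{n+1} = x H_n - (1/2) H_{n-1}, i.e. sum_n H_n t^n = exp(xt - t^2/4)
H = [[Fraction(1)], x]
# G_m = i^m H_m(ix) from (m+1) G_{m+1} = -x G_m + (1/2) G_{m-1},
# i.e. sum_m G_m t^m = exp(-xt + t^2/4)
G = [[Fraction(1)], scale(x, Fraction(-1))]
for n in range(1, N):
    H.append(scale(polyadd(polymul(x, H[n]), scale(H[n - 1], Fraction(-1, 2))),
                   Fraction(1, n + 1)))
    G.append(scale(polyadd(scale(polymul(x, G[n]), Fraction(-1)),
                           scale(G[n - 1], Fraction(1, 2))),
                   Fraction(1, n + 1)))

# The convolution of the two families is delta_{n0}; this is exactly the
# inversion identity behind the solution formula for the coefficients a_k(x).
for n in range(N + 1):
    conv = [Fraction(0)]
    for k in range(n + 1):
        conv = polyadd(conv, polymul(H[k], G[n - k]))
    assert all(c == 0 for c in conv[1:])
    assert conv[0] == (1 if n == 0 else 0)
print("Hermite inversion identity verified up to n =", N)
```

Since the check is done on full coefficient lists rather than at sample points, a passing run proves the identity as an equality of polynomials up to degree $N$.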
\section*{Acknowledgments} \noindent We thank the ICHEP2022 organizers and convenors for the conference and the invitation. The work of J.~Fiaschi has been supported by STFC under the Consolidated Grant ST/T000988/1. S.~Moretti is supported in part through the NExT Institute and acknowledges funding from the STFC Consolidated Grant ST/L000296/1.
\section{Introduction} One of the main problems in understanding galaxy formation is that star formation is extremely complex. The gas out of which stars form is a turbulent mix in which gravitational collapse, thermal pressure, magnetic fields, cosmic ray pressure and energy from supernovae all fight for dominance \citep{MacLow2004}. This leaves galaxy simulators with a choice: either to model the interstellar gas in detail but restrict their study to a small patch of the galaxy \citep[e.g.][]{Slyz2005} or to simulate the entire galaxy but use a greatly simplified model for the ISM \citep[e.g.][]{Li2005}. The former approach allows the inclusion of many more of the important physical processes but is unable to tell us anything about the global evolution of the disc. The latter, meanwhile, reveals properties of the whole galaxy including star formation histories and global structures, but it is impossible to judge the effect the simple ISM model is having on these results. However, recent numerical simulations are now able to include a more complex multiphase ISM in these global models \citep{Tasker2007, Tasker2006, Wada2007}. This allows us to test the impact of modelling the ISM in a more realistic way on the evolution of the galaxy. This has particular bearing on cosmological simulations, where the resolution of the ISM of individual galaxies is still beyond our reach. \section{Numerical method} Using the AMR code, {\it Enzo} \citep{Bryan1997}, we compared three models of isolated galaxy discs where we varied the properties of the interstellar gas. In all cases, the simulations started with a rotating Milky Way-sized exponential disc of gas sitting in a static NFW dark matter potential. The set-up is described in more detail in \citet{Tasker2006, Tasker2007}. In our first galaxy model (ISM~\#1), the gas was allowed to radiatively cool to 300\,K. 
In our second model (ISM~\#2), an additional background photoelectric heating term was included, while in our last disc (ISM~\#3), the ISM was kept at a fixed temperature of 10,000\,K, in keeping with previous simulations. All simulations included star formation, and ISM models 1 and 2 also included feedback from Type II SNe (this was impossible to include for the third, isothermal, disc). \section{The structure of the ISM} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{tasker1.eps} \caption{Gas density projections at 377\,Myrs for (left to right) ISM~\#1, ISM~\#2 and ISM~\#3, all without feedback. Scale is to the base-10 logarithm with units M$_\odot$Mpc$^{-2}$ and each image is 60\,kpc across.}\label{fig:gasproj} \end{center} \end{figure} Figure~\ref{fig:gasproj} shows projections of the gas density of each galaxy disc 377\,Myrs after the start of the simulation. At this time, the disc has undergone its initial fragmentation (and subsequent star burst) and the evolution is now slow. In Figure~\ref{fig:gasproj}, all images are for runs without stellar feedback. With our first ISM model, shown on the far left, we see that the gas has fragmented out to a well-defined radius. Beyond this point, there is still a significant amount of gas, but it is stable to gravitational fragmentation and does not collapse to form stars. When we include photoelectric heating in ISM~\#2 (middle image) we see a notably different gas structure. In particular, there exist large voids of hot, low density gas. The porous nature of this ISM has been observed both in our own galaxy and (perhaps most dramatically) in the HI map of the LMC. This result agrees with recent simulations of \citet{Wada2000}, who suggest these holes are not the result of SNe remnants, but rather the product of gravitational and thermal instabilities. Our final ISM model, where the temperature is fixed, shows another distinct structure. 
Due to the fixed high temperature of $10^4$\,K, the Jeans length is larger than in the other discs, causing the star-forming knots of gas to be much more massive. This has the effect of producing very large star clusters that are confined to the densest, inner regions of the disc. Their formation also produces voids in the gas distribution, but this is due to a gas deficit, not to instabilities. The pressure distribution of these discs is also interesting. Without feedback, discs with ISM~\#1 and ISM~\#2 are largely in pressure equilibrium, as predicted by the analytical models of \citet{McKee1977}. The isothermal disc cannot be, since fixing the temperature forces the pressure to be proportional to the gas density. When feedback is introduced, this situation changes. SNe energy causes streams of gas to be blown both in the disc's plane and off its surface, creating a galactic fountain. \begin{figure} \includegraphics[angle=270, width=\columnwidth]{tasker2.ps} \caption{PDF of the volume weighted gas density at 377\,Myrs for the three ISM types (running left to right). A log normal fit is shown as a solid line in the first panel.} \label{fig:pdfs} \end{figure} The structure of these ISMs can be examined more quantitatively by looking at their 1D PDFs, as shown in Figure~\ref{fig:pdfs}. Comparing all three of the ISMs across the panels, we see a significant amount of substructure in the low density gas, but the high density end of the PDFs is surprisingly uniform. At these densities, all profiles are well fitted by a lognormal distribution. This remains true even when feedback is included. \section{Observational comparison} The left-hand plots in Figure~\ref{fig:obs} show the star formation history for each galaxy disc. Without feedback, our ISM~\#1 disc converts all the available gas into stars, halting further star formation. 
By contrast, the addition of background heating quenches star formation by raising the temperature of the coldest gas, allowing the disc to show the beginning of self-regulation. The addition of feedback, however, is a much stronger effect, with the added energy destroying the star forming knots and thereby increasing the available gas at later times. The isothermal disc also shows a flattening in the star formation rate, but this is likely due to the confinement of the star formation to the denser parts of the disc, which slows down the rate of gas consumption. We can also compare our results with the widely observed Kennicutt-Schmidt law \citep{Kennicutt1989}, as shown in the right-hand plot of Figure~\ref{fig:obs}. Here, we see that the observed gradient is reproduced well in the first two ISM types, but less well in our isothermal disc. We do, however, persistently overestimate the rate of star formation, even when feedback is included. This is likely due to our star formation recipe and is something to address in later work. \begin{figure} \includegraphics[width=6.7cm]{tasker3.ps} \includegraphics[width=6.7cm]{tasker4.ps} \caption{The star formation history (left) and global Kennicutt-Schmidt law (right) for ISM models 1, 2 and 3 (top to bottom).} \label{fig:obs} \end{figure} \section{Conclusions} One of the most surprising results in this research is that despite significant structural differences in the ISM, the star formation properties remain largely unaffected. The reason for this can be seen in Figure~\ref{fig:pdfs}, where we see that the biggest differences between our three ISM types occur in the low to medium density gas. The high density, star-forming gas, meanwhile, forms a consistent lognormal profile in all cases. This result is largely positive; it suggests that a decent subgrid model for star formation can be used where a detailed ISM model is not possible, such as large-scale cosmological simulations. 
However, it also implies that a thorough understanding of star formation cannot be found from the Kennicutt-Schmidt law, which is largely insensitive to the input physics. This work is presented in greater detail in \citet{Tasker2007}.
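For reference, the lognormal description of the high-density end of the PDF is straightforward to reproduce in code. The sketch below (plain Python; the density samples are synthetic stand-ins drawn from a known lognormal rather than actual simulation data, and the simple moment-matching fit, i.e. the mean and standard deviation of $\ln\rho$, is just one way to estimate the parameters, not necessarily the fitting procedure used for Figure~\ref{fig:pdfs}) illustrates the idea:

```python
import math
import random
import statistics

random.seed(42)

# Synthetic stand-in for the cell gas densities of a simulated disc (arbitrary
# units): a lognormal sample, as expected for the high-density end of the PDF.
densities = [random.lognormvariate(0.5, 1.2) for _ in range(100000)]

# Moment-matching fit: for a lognormal distribution, ln(rho) is Gaussian, so
# the parameters are the mean and standard deviation of the log-densities.
logs = [math.log(rho) for rho in densities]
mu_hat = statistics.mean(logs)
sigma_hat = statistics.stdev(logs)

def lognormal_pdf(rho, mu, sigma):
    """Probability density of a lognormal distribution evaluated at rho."""
    return math.exp(-((math.log(rho) - mu) ** 2) / (2 * sigma ** 2)) / (
        rho * sigma * math.sqrt(2.0 * math.pi))

print(f"fitted mu = {mu_hat:.3f} (true 0.5), sigma = {sigma_hat:.3f} (true 1.2)")
```

Overplotting \texttt{lognormal\_pdf} with the fitted parameters on the histogrammed densities gives the kind of solid-line fit shown in the first panel of Figure~\ref{fig:pdfs}.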
\subsection{Paper length} Papers must be 9~pages in length, {\em excluding} the bibliography. {\bf Papers which are clearly overlength will not be reviewed}. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in nine pages if it is reviewed in twelve. The bibliography should begin immediately after the paper text. It may be of any length, within reason. It should {\em not} include annotations, figures, or any other paraphernalia intended to subvert the paper length requirement. \begin{figure} \begin{tabular}{ccc} \bmvaHangBox{\fbox{\parbox{2.7cm}{~\\[2.8mm] \rule{0pt}{1ex}\hspace{2.24mm}\includegraphics[width=2.33cm]{images/eg1_largeprint.png}\\[-0.1pt]}}}& \bmvaHangBox{\fbox{\includegraphics[width=2.8cm]{images/eg1_largeprint.png}}}& \bmvaHangBox{\fbox{\includegraphics[width=5.6cm]{images/eg1_2up.png}}}\\ (a)&(b)&(c) \end{tabular} \caption{It is often a good idea for the first figure to attempt to encapsulate the article, complementing the abstract. This figure illustrates the various print and on-screen layouts for which this paper format has been optimized: (a) traditional BMVC print format; (b) on-screen single-column format, or large-print paper; (c) full-screen two column, or 2-up printing. } \label{fig:teaser} \end{figure} \subsection{Dual submission} By submitting this manuscript to BMVC, the authors assert that it has not been previously published in substantially similar form, and no paper currently under submission to a conference contains significant overlap with this one. If you are in doubt about the amount of overlap, cite the dual submission (as described below), and argue in the body of your paper why this is a nontrivial advance. 
A simultaneous journal submission would be expected to have significant additional material not in the conference paper, and should not be previously published, nor in the final acceptance stages. The conference chairs reserve the right to cancel submission of any paper which is found to violate these conditions. In particular, {\em uncited} dual submissions will be summarily dealt with. \subsection{Anonymity and blind review} BMVC operates a double-blind review process. Your review submission {\bf must not identify you as the author}. This means, in particular, that the author list should be replaced by the words ``BMVC {\em YYYY} Submission \# {\em NNN}'', where the italics are to indicate the year and the submission number. The provided \LaTeX\ command \verb'\bmvcreviewcopy' does this automatically. In addition, acknowledgements should not be included in the review copy. Many authors misunderstand the concept of anonymizing for blind review. Blind review {\bf does not mean that one must remove citations to one's own work}---in fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for techreports) Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith, it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. 
[1] Removed for blind review \end{quote} An example of an excellent paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \emph{et al}\bmvaOneDot [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you will almost certainly need to refer to that submission in order to explain the differences, just as you would if you or others had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors06} as additional material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', ECCV 2006 Submission ID 324, Supplied as additional material {\tt eccv06.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors06b}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the ICLL'70 audience would like to hear about your solution. 
The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \emph{et al}\bmvaOneDot. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \emph{et al}\bmvaOneDot, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. FAQ: Are acknowledgements OK? No. Leave them for the final copy. \subsection{Citations} When citing a multi-author paper, you may save space by using ``{\em et alia}'', shortened to ``\emph{et al}\bmvaOneDot'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.) The provided \verb'\emph{et al}\bmvaOneDot' macro is a useful {\em aide memoire} in this regard. However, use it only when there are three or more authors. Thus, the following is correct: `` Frobnication has been trendy lately. 
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \emph{et al}\bmvaOneDot~\cite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \emph{et al}\bmvaOneDot~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. If you use the \verb'\emph{et al}\bmvaOneDot' macro, then you need not worry about double periods when used at the end of a sentence as in Alpher \emph{et al}\bmvaOneDot. We use {\tt natbib}, so citations in random order are nicely sorted: \cite{Alpher03,Alpher02,Authors06b,Authors06}. However, we don't use the compress option, as we want each reference to have its own hyperlink and popup window. \subsection{Footnotes} Please use footnotes\footnote {This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \begin{figure*} \begin{center} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \end{center} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. 
(\LaTeX\ users may remove the \verb'[review]' option from the \verb'\documentclass' statement.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g.\ this line is $210.5$), although in most cases one would expect that the approximate location ($210$ in the previous example) will be adequate. \begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Method & Frobnability \\ \hline\hline Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \hline \end{tabular} \end{center} \caption{Results. Ours is better.} \end{table} \subsection{Mathematics} Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description~\cite{Mermin89} of how to write mathematics. \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors06}. Where appropriate, include the name(s) of editors of referenced books. \subsection{Color} Color is valuable, and will be visible to readers of the electronic copy. However ensure that, when printed on a monochrome printer, no important information is lost by the conversion to grayscale. 
\section{Introduction} \label{sec:intro} \vspace{-.05cm} \input{sections/introduction.tex} \vspace{-.4cm} \section{Method} \label{sec:method} \vspace{-.2cm} \input{sections/method.tex} \vspace{-.3cm} \section{Experiments} \label{sec:experiments} \vspace{-.15cm} \input{sections/experiments.tex} \section{Conclusion} \label{sec:conclusion} \input{sections/conclusion.tex} \subsection{Experimental Settings} \label{subsec:exp_setup} For our experiments, we employ the R2R (\textit{Room-to-Room}) dataset~\cite{anderson2018vision}. This challenging benchmark builds upon the Matterport3D dataset of spaces~\cite{Matterport3D} and contains $7,189$ different navigation paths in $90$ different scenes. For each route, the dataset provides 3 natural language instructions, for a total of 21,567 instructions with an average length of 29 words. The R2R dataset is split into 4 partitions: training, validation on seen environments, validation on unseen scenes, and test on unseen environments. \textbf{Evaluation Metrics.} We adopt the same evaluation metrics employed by previous work on the R2R dataset: navigation error (NE), oracle success rate (OSR), success rate (SR), and success rate weighted by path length (SPL). NE is the mean distance in meters between the final position and the goal. SR is the fraction of episodes terminated within 3 meters of the goal position. OSR is the success rate that the agent would have achieved if it had received an oracle stop signal at the closest point to the goal along its navigation. SPL is the success rate weighted by normalized inverse path length, which penalizes overlong navigations. \textbf{Implementation Details.} For each LSTM, we set the hidden size to 512. Word embeddings are obtained with GloVe~\cite{pennington2014glove}. In our visual encoder, we apply a bottleneck layer to reduce the dimension of the image feature map to 512.
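For concreteness, the success criteria above can be computed per episode as in the following sketch (function and argument names are ours, not from the official evaluation code; `success_radius` encodes the 3-meter threshold):

```python
import math

def navigation_metrics(final_pos, goal_pos, path_length, shortest_length,
                       success_radius=3.0):
    """Compute NE, SR, and SPL for a single episode.

    final_pos / goal_pos are (x, y, z) coordinates in meters; path_length is
    the length of the trajectory the agent actually took, shortest_length the
    geodesic distance from start to goal. Names are illustrative.
    """
    ne = math.dist(final_pos, goal_pos)          # navigation error (meters)
    sr = 1.0 if ne <= success_radius else 0.0    # success within 3 m of goal
    # SPL: success weighted by normalized inverse path length,
    # so overlong (but successful) navigations score below 1.
    spl = sr * shortest_length / max(path_length, shortest_length)
    return ne, sr, spl
```

Dataset-level NE, SR, and SPL are then simple means of these per-episode values.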
We generate dynamic filters with 512 channels using a linear layer with dropout~\cite{srivastava2014dropout} ($p=0.5$). In our attention module, $q$ and $K$ have 128 channels and we apply a ReLU non-linearity after the linear transformation. For our action selection, we apply dropout with $p=0.5$ to the policy hidden state before feeding it to the linear layer. \vspace{-.3cm} \subsection{Ablation Study} \label{subsec:ablation} \input{sections/tables/ablation_2.tex} \begin{figure}[t!] \begin{minipage}[t!]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth]{images/foo.pdf} \end{minipage} \begin{minipage}[t!]{0.45\textwidth} \resizebox{!}{0.23\textwidth}{ \input{sections/tables/filters.tex} } \vspace{0.35cm} \end{minipage} \caption{Comparison with different numbers of dynamic filters on the validation-unseen set of R2R. The best results for all the metrics are obtained using four different dynamic filters.} \label{fig:exp} \vspace{-.4cm} \end{figure} In our ablation study, we test the influence of our implementation choices on VLN. As a first step, we discuss the impact of dynamic convolution by comparing our model with a similar \textit{seq2seq} architecture that employs fixed convolutional filters. We then detail the importance of using an attention mechanism to extract the current piece of instruction to be fulfilled. Finally, we compare the results obtained using a pre-trained word embedding instead of learning the word representation from scratch. Results are reported in Table~\ref{table:ablation}. \textbf{Static Filters Vs. Dynamic Convolution.} As the results show, dynamic convolutional filters surpass traditional fixed filters for VLN. This is because they can easily adapt to new instructions and reflect the variability of the task. When compared to a baseline model that employs traditional convolution~\cite{anderson2018vision}, our method performs $14.5\%$ and $9.8\%$ better, in terms of success rate, on the val-seen and val-unseen splits respectively.
\textbf{Fixed Instruction Representation Vs. Attention.} The navigation instructions are very complex and rich. When removing the attention module from our architecture, we keep the last hidden state $h_N$ as the instruction representation for the whole episode. Even with this limitation, dynamic filters achieve better results than static convolution, as the success rate is higher for both validation splits. However, our attention module further increases the success rate by $11.8\%$ and $9.6\%$. \textbf{Word Embedding from Scratch Vs. Pre-trained Embedding.} Learning a meaningful word embedding is nontrivial and requires a large corpus of natural language descriptions. For this reason, we adopt a pre-trained word embedding to encode single words in our instructions. We then run the same model while trying to learn the word embedding from scratch. We discover that a pre-trained word embedding significantly eases VLN. Our model with GloVe~\cite{pennington2014glove} obtains $11.1\%$ and $5.8\%$ more on the val-seen and val-unseen splits respectively, in terms of success rate. \subsection{Multi-headed Dynamic Convolution} \label{subsec:exp3} \vspace{-.05cm}In this experiment, we test the impact of using a different number of dynamically-generated filters. We test our architecture when using 1, 2, 4, 8, and 16 dynamic filters. We find that the best setup corresponds to the use of 4 different convolutional filters. Results in Fig.~\ref{fig:exp} show that the success rate and the SPL increase linearly with the number of dynamic kernels for a small number of filters, reaching a maximum at 4. The metrics then decrease when adding new parameters to the network. This suggests that a low number of dynamic filters can represent a wide variety of natural language specifications. However, as the number of dynamic filters increases, the representation provided by the convolution becomes less efficient.
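The multi-headed filter generation can be sketched in NumPy as follows. Shapes and the weight names `W_f`, `b_f` are illustrative stand-ins for the learned filter-generation layer: we produce `num_filters` $1\times1$ kernels from the instruction context, L2-normalize each, and apply them as per-location dot products over the image features, scaled as in dot-product attention:

```python
import numpy as np

def dynamic_response_maps(s_t, W_f, b_f, feats, num_filters=4):
    """Generate `num_filters` 1x1 dynamic kernels from the instruction
    context s_t and convolve them with the image feature map.

    s_t: (d,) instruction context; W_f: (num_filters*C, d); b_f: (num_filters*C,)
    feats: (C, H, W) image features. Shapes and names are illustrative.
    """
    C = feats.shape[0]
    f = np.tanh(W_f @ s_t + b_f).reshape(num_filters, C)
    f /= np.linalg.norm(f, axis=1, keepdims=True)   # L2-normalize each kernel
    # A 1x1 convolution is a per-location dot product between each kernel
    # and the feature vector at that location.
    resp = np.einsum('kc,chw->khw', f, feats)
    return resp / np.sqrt(C)                        # scale by sqrt(filter size)
```

With `num_filters=4` (the best setting in our ablation), the output stacks four response maps that the policy network consumes jointly.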
\vspace{-.25cm} \subsection{Comparison with the State-of-the-art} \label{subsec:comparison} \input{sections/tables/comparison_sota} \vspace{-.05cm}Finally, we compare our architecture with the state-of-the-art methods for VLN. Results are reported in Table~\ref{table:sota_comparison}. We distinguish two main categories of models, depending on their output space: the first, to which our approach belongs, predicts the next atomic action (\emph{e.g}\bmvaOneDot \textit{turn right}, \textit{go ahead}). We call architectures in this category \textit{low-level actions methods}. The second, instead, searches in the visual space to match the current instruction with the most suitable navigable viewpoint. In these models, atomic actions are not considered, as the agent displacements are done with a teleport system, using the next viewpoint identifier as target destination. Hence, we refer to these works as \textit{high-level actions methods}. While the latter achieve better results, they make strong assumptions on the underlying simulating platform and on the navigation graph. Our method, exploiting dynamic convolutional filters and predicting atomic actions, outperforms comparable architectures and achieves state-of-the-art results for \textit{low-level actions} VLN. Our final implementation takes advantage of the synthetic data provided by Fried \emph{et al}\bmvaOneDot~\cite{fried2018speaker} and surpasses comparable methods~\cite{anderson2018vision, wang2018look} by $15\%$ and $10\%$ success rate points on the R2R test set. Additionally, we note that our method is competitive with some \textit{high-level actions} models, especially in terms of SPL. When considering the test set, we notice that our model outperforms Speaker-Follower~\cite{fried2018speaker} by $3\%$, while performing only $1\%$ worse than~\cite{ma2019self}.
\textbf{Low-level Action Space or High-level Navigation Space?} While previous work on VLN never considered this important difference, we claim that it is imperative to categorize navigation architectures depending on their output space. In our opinion, ignoring this aspect would lead to inappropriate comparisons and wrong conclusions. Considering the results in Table~\ref{table:sota_comparison}, we separate the two classes of work and highlight the best results for each category. Please note that the random baseline was initially provided by \cite{anderson2018vision} and belongs to \textit{low-level actions} architectures (a random \textit{high-level actions} agent was never provided by previous work). We immediately notice that, with this new categorization, intra-class results have less variance and are much more aligned with each other. We believe that future work on VLN should consider this new taxonomy in order to provide meaningful and fair comparisons. \subsection{Qualitative Results} \label{subsec:qualitative} Fig.~\ref{fig:qualitative_1} shows two navigation episodes from the R2R validation set. We display the predicted action in a green box on the bottom-right corner of each image. Both examples are successful. \vspace{-.3cm} \begin{figure}[h!]
\centering \begin{tabular}{ccccc} \small{\textbf{Legend: }} & \includegraphics[width=0.025\textwidth]{images/qualitative/marks/L.png} \small{\textit{left}} & \includegraphics[width=0.025\textwidth]{images/qualitative/marks/R.png} \small{\textit{right}} & \includegraphics[width=0.025\textwidth]{images/qualitative/marks/F.png} \small{\textit{forward}} & \includegraphics[width=0.025\textwidth]{images/qualitative/marks/E.png} \small{\textit{end episode}} \\\vspace{-.25cm}\\ \end{tabular} \begin{tabular}{cccc} \includegraphics[width=0.2\textwidth]{images/qualitative/q1/01_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/03_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/05_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/06_4599_1.jpg} \\ \includegraphics[width=0.2\textwidth]{images/qualitative/q1/09_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/10_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/11_4599_1.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q1/13_4599_1.jpg} \\ \multicolumn{4}{c}{\small{\textbf{Instruction:} \textit{From bathroom, enter bedroom and walk straight}}}\\ \multicolumn{4}{c}{\small{\textit{across down two steps, wait at loungers.}}} \\\vspace{-.25cm}\\ \includegraphics[width=0.2\textwidth]{images/qualitative/q4/00_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/02_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/03_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/05_5647_2.jpg} \\ \includegraphics[width=0.2\textwidth]{images/qualitative/q4/06_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/07_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/08_5647_2.jpg} & \includegraphics[width=0.2\textwidth]{images/qualitative/q4/09_5647_2.jpg} \\ 
\multicolumn{4}{c}{\small{\textbf{Instruction:} \textit{Walk past the fireplace and to the left.}}}\\ \multicolumn{4}{c}{\small{\textit{Stop in the entryway of the kitchen.}}} \\ \end{tabular} \vspace{0.2cm} \caption{Qualitative results from the R2R validation set. Each episode is detailed by eight pictures, representing the current position of the agent and containing the next predicted action (from left to right, top to bottom). To make the visualization more readable, we do not display the 360\textdegree~panoramic images.} \label{fig:qualitative_1} \vspace{-.4cm} \end{figure} \subsection{Encoder} \label{subsec:encoder} To represent the two inputs of the architecture, \emph{i.e}\bmvaOneDot the instruction and the visual input at time $t$, we devise an instruction and a visual encoder. The instruction encoder provides a representation of the navigation instructions that is employed to guide the whole navigation episode. On the other hand, the visual encoding module operates at every single step, building a representation of the current observation which depends on the agent position. \textbf{Instruction Encoding.} The given natural language instruction is split into single words via tokenization, and stop words are filtered out to obtain a shorter description. Unlike previous works that train word embeddings from scratch, we rely on word embeddings obtained from a large corpus of documents. Besides providing semantic information that could not be learned purely from VLN instructions, this also lets us handle words that are not present in the training set (see Sec. \ref{subsec:ablation} for a discussion). Given an instruction with length $N$, we denote its embedding sequence as $L = \left(l_1, l_2, ..., l_N\right)$, where $l_i$ indicates the embedding for the $i$-th word.
Then, we adopt a Long Short-Term Memory (LSTM) network to provide a timewise contextual representation of the instruction: \begin{equation} X = \left(x_1, x_2, ..., x_N\right) = \text{LSTM}(L), \end{equation} where each $x_i$ denotes the hidden state of the LSTM at time $i$, thus leading to a final representation with shape $(N, d)$, where $d$ is the size of the LSTM hidden state. \textbf{Visual Features Encoding.} As visual input, we employ the panoramic 360\textdegree~view of the agent, and discretize the resulting equirectangular image into a $12 \times 3$ grid, consisting of three elevation levels with a 30\textdegree~heading shift between adjacent views. Each location of the grid is then encoded via the 2048-dimensional features extracted from a ResNet-152~\cite{he2016deep} pre-trained on ImageNet~\cite{deng2009imagenet}. We also append to each cell vector a set of coordinates relative to the current agent heading and elevation: \begin{equation} coord_t = \left(\sin{\phi_t}, \cos{\phi_t}, \sin{\theta_t} \right), \end{equation} where $\phi_t \in (-\pi, \pi]$ and $\theta_t \in [-\frac{\pi}{2}, \frac{\pi}{2}]$ are the heading and elevation angles \textit{w.r.t.} the agent position. By adding $coord_t$ to the image feature map, we encode information related to concepts such as \textit{right}, \textit{left}, \textit{above}, and \textit{below} into the agent observation. \vspace{-.13cm} \subsection{Decoder} \label{subsec:decoder} Given the instruction embedding $X$ for the whole episode, we use an attention mechanism to select the next part of the sentence that the agent has to fulfill. We denote this encoded piece of instruction as $s_t$. We detail our attentive module in the next section. \textbf{Dynamic Convolutional Filters.} Dynamic filters are different from traditional, fixed filters typically used in CNNs, as they depend on an input rather than being purely learnable parameters.
In our case, we can think of them as specialized feature extractors reflecting the semantics of the natural language specification. For example, starting from an instruction like ``head towards the red chair'', our model can learn specific filters to focus on concepts such as \textit{red} and \textit{chair}. In this way, our model can rely on a large ensemble of specialized kernels and apply only the most suitable ones, depending on the current goal. Naturally, this approach is more efficient and flexible than learning a fixed set of filters for all the navigation steps. We use the representation of the current piece of instruction $s_t$ to generate multiple $1\times 1$ dynamic convolutional kernels, according to the following equation: \begin{equation} f_t = \ell_2[\text{tanh}(W_f s_t + b_f)], \end{equation} where $\ell_2[\cdot]$ indicates L2 normalization, and $f_t$ is a tensor of filters reshaped to have the same number of channels as the image feature map. We then perform the dynamic convolution over the image features $I_t$, thus obtaining a response map for the current timestep as follows: \begin{equation} D_t = f_t * I_t. \end{equation} As the aforementioned operation is equivalent to a dot product, we can conceive the dynamic convolution as a specialized form of dot-product attention, in which $I_t$ acts as key and the filters in $f_t$ act as time-varying queries. Following this interpretation, we divide $D_t$ by $\sqrt{d_{f}}$, where ${d_{f}}$ is the dynamic filter size~\cite{vaswani2017attention}, to keep the dot products small in magnitude. \textbf{Action Selection.} We use the dynamically generated response maps as input for the policy network. We implement it with an LSTM whose hidden state at time step $t$ is employed to obtain the action scores.
Formally, \begin{equation} h_t = \text{LSTM}([\Tilde{D}_t, a_{t-1}], h_{t-1}), \;\;\; p_t = \text{softmax}(W_a h_t + b_a), \end{equation} where $[\cdot, \cdot]$ indicates concatenation, $a_{t-1}$ is the one-hot encoding of the action performed at the previous timestep, and $\Tilde{D}_t$ is the flattened tensor obtained from $D_t$. To select the next action $a_t$, we sample from a multinomial distribution parametrized by the output probability distribution during training, and select $a_t = \argmax p_t$ at test time. In line with previous work, we find that sampling during the training phase encourages exploration and improves overall performance. Note that, as previously stated, we do not employ a high-level action space, where the agent selects the next viewpoint in the image feature map, but instead make the agent responsible for learning the sequence of low-level actions needed to perform the navigation. The agent can additionally send a specific stop signal when it considers the goal reached, as suggested by recent standardization attempts~\cite{anderson2018evaluation}. \vspace{-.13cm} \subsection{Encoder-Decoder Attention} \label{subsec:enc-dec_attention} The navigation instructions are very complex, as they involve not only different actions but also temporal dependencies between them. Moreover, their high average length represents an additional challenge for traditional embedding methods. For these reasons, we enrich our architecture with a mechanism to attend different locations of the sentence representation as the navigation moves towards the goal. In line with previous work on VLN~\cite{anderson2018vision, fried2018speaker}, we employ an attention mechanism to identify the most relevant parts of the navigation instruction. We employ the hidden state of our policy LSTM to get information about our progress in the navigation episode and extract a time-varying query $q_t = W_q h_{t-1} + b_q$.
We then project our sentence embedding into a lower-dimensional space to obtain key vectors, and perform a scaled dot-product attention~\cite{vaswani2017attention} among them. \begin{equation} \alpha_t = \frac{q_t K^T}{\sqrt{d_{att}}}, \;\text{where} \; K = W_k X + b_k \end{equation} After a softmax layer, we obtain the current instruction embedding $s_t$ by matrix multiplication between the initial sentence embedding and the softmax scores. \begin{equation} s_t = \text{softmax}(\alpha_{t}) X \end{equation} At each timestep of the navigation process, $s_t$ is obtained by attending to the instruction embedding at different locations. The same vector is in turn used to obtain a time-varying query for attending spatial locations in the visual input. \vspace{-.13cm} \subsection{Training} \label{subsec:training} Our training sample consists of a batch of navigation instructions and the corresponding ground-truth paths coming from the R2R (\textit{Room-to-Room}) dataset~\cite{anderson2018vision} (described in Sec.~\ref{sec:experiments}). The path denotes a list of discretized viewpoints that the agent has to traverse to progress towards the goal. The agent spawns at the first viewpoint, and its goal is to reach the last viewpoint in the ground-truth list. At each step, the simulator is responsible for providing the next ground-truth action in the low-level action space that enables the agent to progress. Specifically, the ground-truth action is computed by comparing the coordinates of the next target node in the navigation graph with the agent position and orientation. At each time step $t$, we minimize the following objective function: \begin{equation} L = -\sum_{t}{y_t \log{p_t}} \end{equation} where $p_t$ is the output of our network, and $y_t$ is the ground-truth low-level action provided by the simulator at time step $t$. We train our network with a batch size of 128 and use the Adam optimizer~\cite{kingma2015adam} with a learning rate of $10^{-3}$.
We adopt early stopping to terminate the training if the mean success rate does not improve for 10 epochs.
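The per-step objective and action-selection rule above can be sketched as follows (NumPy pseudocode with illustrative names; the actual model backpropagates this loss through the policy LSTM):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of logits."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step_loss_and_action(logits, gt_action, train=True, rng=None):
    """One timestep: cross-entropy against the simulator's ground-truth
    low-level action, multinomial sampling during training, argmax at test
    time. Function and argument names are illustrative."""
    p = softmax(logits)
    loss = -np.log(p[gt_action])            # one term of L = -sum_t y_t log p_t
    if train:
        rng = rng or np.random.default_rng()
        action = int(rng.choice(len(p), p=p))   # sampling encourages exploration
    else:
        action = int(np.argmax(p))
    return loss, action
```

The episode loss is the sum of these per-step terms, minimized with Adam as described above.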
\section{Cubic B\'{e}zier Splines\label{sec:bezier}} SVG can produce graceful curves with its quadratic and cubic B\'{e}zier path commands. As mentioned before, in jQuery.Feyn we use cubic B\'{e}zier segments to approximate the photon and gluon propagators (for the line type, gluon propagators are drawn as elliptical arc paths supported by SVG directly). Now we come to the problem of how to approximate a sine curve and an ellipse by cubic B\'{e}zier splines. By symmetry, we only need to consider one-fourth of the period. A cubic B\'{e}zier spline needs four control points $P_0$, $P_1$, $P_2$ and $P_3$. Obviously, the endpoints and their tangents should be exact, so we need one more constraint to determine all the control points. For approximating the sine curve, we require the curvature of the B\'{e}zier spline at the endpoints to be exact as well. According to the derivation in \url{http://mathb.in/1447}, we can obtain the following control points \[ P_0=(0,0),\quad P_1=(2\lambda p/\pi, \lambda a),\quad P_2=(2p/\pi,a),\quad P_3=(p,a),\quad \lambda=0.51128733 \] where $p$ is one-fourth of the period of the sine curve and $a$ is the amplitude. For approximating an ellipse with semi-major axis $a$ and semi-minor axis $b$, we can approximate a circle first and then apply scaling transformations. The control points for the ellipse in the first quadrant are given by \[ P_0=(0,b),\quad P_1=(\kappa a, b),\quad P_2=(a,\kappa b),\quad P_3=(a,0),\quad \kappa=0.55191502 \] Here we have used the result in Spencer Mortensen's article: \begin{center} \url{http://spencermortensen.com/articles/bezier-circle/} \end{center} where the constant $\kappa$ is determined from the constraint that the maximum radial distance from the circle to the B\'{e}zier curve must be as small as possible. \section{Design} In this section, we will discuss the design (and implementation) of jQuery.Feyn from a user's perspective.
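These control points can be verified numerically. The sketch below evaluates both quarter-curves densely and measures their deviation; note that with the stated $\lambda$, the inner control point must be $P_1 = (2\lambda p/\pi, \lambda a)$ for both the endpoint slope and the endpoint curvature to match the sine curve:

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u**2*t*b + 3*u*t**2*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def max_sine_error(p=1.0, a=1.0, lam=0.51128733, steps=2000):
    """Max vertical deviation |B_y - a*sin(pi*B_x/(2p))| over a quarter period."""
    P0, P1 = (0.0, 0.0), (2*lam*p/math.pi, lam*a)
    P2, P3 = (2*p/math.pi, a), (p, a)
    return max(abs(y - a*math.sin(math.pi*x/(2*p)))
               for x, y in (cubic_bezier(P0, P1, P2, P3, i/steps)
                            for i in range(steps + 1)))

def max_circle_error(r=1.0, kappa=0.55191502, steps=2000):
    """Max radial deviation of the quarter-circle Bezier from radius r."""
    P0, P1 = (0.0, r), (kappa*r, r)
    P2, P3 = (r, kappa*r), (r, 0.0)
    return max(abs(math.hypot(*cubic_bezier(P0, P1, P2, P3, i/steps)) - r)
               for i in range(steps + 1))
```

Both errors come out far below a pixel at typical drawing sizes, which is why a single cubic segment per quarter period suffices for propagators.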
Before drawing a Feynman diagram, you are encouraged to draft it on graph paper first. \begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=0.8]{line.pdf} \hspace{20pt} \includegraphics[scale=0.8]{loop.pdf} \hspace{20pt} \includegraphics[scale=0.8]{arc.pdf} \else \includegraphics[scale=0.8]{line.eps} \hspace{20pt} \includegraphics[scale=0.8]{loop.eps} \hspace{20pt} \includegraphics[scale=0.8]{arc.eps} \fi \caption{Illustrations of jQuery.Feyn's building blocks\label{fig:blocks}} \end{figure} Roughly speaking, a Feynman diagram can be treated as a directed or undirected graph consisting of a set of nodes and edges. As illustrated in Figure~\ref{fig:blocks}, jQuery.Feyn provides five types of basic building blocks that can be used to construct Feynman diagrams. Each type may contain one or more graphics primitives (\emph{i.e.} the options): \begin{itemize} \item \textbf{Graph node}: sets the coordinates of nodes with the \texttt{incoming}, \texttt{outgoing}, \texttt{vertices} and \texttt{auxiliary} primitives, or draws the node marks with the \texttt{node} primitive \item \textbf{Propagator}: draws the propagators with the \texttt{fermion}, \texttt{photon}, \texttt{scalar}, \texttt{ghost}, and \texttt{gluon} primitives \item \textbf{Symbol}: draws symbols such as arrows, blobs, and so on with the \texttt{symbol} primitive \item \textbf{Label}: typesets labels with the \texttt{label} primitive \item \textbf{Image}: includes external graphics with the \texttt{image} primitive \end{itemize} For the design of line styles and symbols, we have also taken PSTricks\cite{TZ03}, MetaPost\cite{JH13}, and Asymptote\cite{BHP04} as references. Beyond Feynman diagrams, jQuery.Feyn also excels at drawing some mathematical diagrams with dense connections between nodes (see Figure~\ref{fig:misc}).
\begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=0.6]{cube.pdf} \hspace{20pt} \includegraphics[scale=0.6]{graph.pdf} \else \includegraphics[scale=0.6]{cube.eps} \hspace{20pt} \includegraphics[scale=0.6]{graph.eps} \fi \caption{The cube with an octahedron inside and the Goldner-Harary graph drawn by jQuery.Feyn\label{fig:misc}} \end{figure} \subsection{Graph Nodes} A graph node is a coordinate pair extracted from a position string such as \texttt{"20,180"}, together with a name carrying the prefix \texttt{i}, \texttt{o}, \texttt{v}, or \texttt{a}. These are, respectively, the first letters of \texttt{incoming}, \texttt{outgoing}, \texttt{vertices}, and \texttt{auxiliary}. These terminologies should be self-explanatory and are not discussed further. In fact, all graph nodes will be merged into one object in the base implementation. We retain this separation in the user interface just for semantic clarity. To draw node marks, you should use the \texttt{node} primitive. Five types of node marks are provided. Of course, you can also specify different fill colors to denote different vertices (see Figure~\ref{fig:nodes}). \begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=1.0]{nodes.pdf} \else \includegraphics[scale=1.0]{nodes.eps} \fi \caption{Examples of node marks provided by jQuery.Feyn\label{fig:nodes}} \end{figure} \subsection{Propagators} We support five types of propagators: \texttt{fermion}, \texttt{photon}, \texttt{scalar}, \texttt{ghost}, and \texttt{gluon}. According to the conventions by Peskin and Schroeder\cite{PS95}, only fermion and ghost propagators show arrows by default. In practice, boson propagators are represented by sine curves and gluon propagators by elliptical arcs. They can be approximated by cubic B\'{e}zier paths in SVG, which is discussed in appendix~\ref{sec:bezier}. Each type of propagator has three shapes: \texttt{line}, \texttt{arc}, and \texttt{loop}.
The \texttt{tension} parameter controls the arc radius for arc propagators, whereas the \texttt{ratio} parameter controls the shape of the elliptical arc for fermion, scalar, and ghost loop propagators, \emph{i.e.} the ratio of the y-radius to the x-radius. Their geometrical meanings are shown in Figure~\ref{fig:geometry}. \begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=1.2]{geometry.pdf} \else \includegraphics[scale=1.2]{geometry.eps} \fi \caption{The geometrical layout of the arc and loop propagator ($k$ is a constant)\label{fig:geometry}} \end{figure} \subsection{Symbols} For the convenience of drawing complex diagrams, a small set of predefined symbols is provided: \texttt{arrow}, \texttt{blob}, \texttt{bubble}, \texttt{condensate}, \texttt{hadron}, and \texttt{zigzag}. Some of them also support variants. Examples are illustrated in Figure~\ref{fig:blocks} and Figure~\ref{fig:dvcs}. \begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=0.8]{dvcs.pdf} \else \includegraphics[scale=0.8]{dvcs.eps} \fi \caption{Feynman diagram for deeply virtual Compton scattering drawn by jQuery.Feyn\label{fig:dvcs}} \end{figure} \subsection{Labels} Labels are somewhat annoying. The \texttt{label} primitive has nice support for subscripts, superscripts, and accents. For simple text or annotations such as particle names and momenta, this is enough. But it is not sufficient for typesetting math formulae. Two solutions are available: to include them as external graphics (see the first diagram in Figure~\ref{fig:blocks}), or to use the MathJax library (see Figure~\ref{fig:gluon}). The first method makes use of the \texttt{image} primitive and always works, while the other depends on browsers' support for the \texttt{foreignObject} element.
\begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=0.8]{gluon.pdf} \else \includegraphics[scale=0.8]{gluon.eps} \fi \caption{Feynman diagram for the gluon propagator drawn by jQuery.Feyn with MathJax support\label{fig:gluon}} \end{figure} \subsection{Images} The \texttt{image} primitive is provided to enhance jQuery.Feyn's extensibility. You can use it to include any external graphics you like. As seen in Figure~\ref{fig:blocks}, we have embedded the math formula as an SVG file in the first diagram. It is your own duty to ensure that proper width and height values are assigned to the image object. \section{Introduction} In the field of high energy physics, Feynman diagrams are widely used as pictorial representations of the interactions of sub-atomic particles\cite{DK05}. However, it is not easy to draw such a complicated object in publications (interestingly, Feynman slash notation is also not easy to typeset). This has led to the development of several computer programs. In the framework of LaTeX, there are four approaches: Michael Levine's \texttt{feynman}\cite{ML90} bundle, Jos Vermaseren's \texttt{axodraw}\cite{JV94} package, Thorsten Ohl's \texttt{feynmf}\cite{TO95} package, and Norman Gray's \texttt{feyn}\cite{NG09} font. Besides, Binosi \emph{et al.}'s \texttt{JaxoDraw}\cite{BT04, BCKT08} and Hahn and Lang's \texttt{FeynEdit}\cite{HL08} are both written in Java and provide graphical user interfaces. The reason why we designed another program is that publishing with HTML5\cite{SK11} has grown into a very promising technology and requires a tool running in browsers to produce visually pleasing Feynman diagrams for the Web, such as for Wikipedia or online scientific articles. jQuery.Feyn is an attempt to fill that gap. Compared with the programs mentioned above, our program has great advantages in flexibility and portability.
\subsection{Overview} jQuery.Feyn is a \href{http://jquery.com/}{jQuery} plugin to facilitate drawing Feynman diagrams with SVG\cite{JD02}. It makes full use of jQuery's succinctness and extensibility to embrace the tagline: \emph{write less, do more}. The following provides a summary of jQuery.Feyn's main features: \begin{itemize} \item Automatic generation of clean SVG source code \item Easy to use, easy to make fine adjustments \item Predefined propagator styles, vertex types, and symbols \item Support for typesetting labels and including external graphics \item Lightweight, cross-browser, and fully documented \end{itemize} The home of the jQuery.Feyn project is \url{http://photino.github.io/jquery-feyn/}. Please refer to this link for up-to-date documentation and practical examples. \subsection{Supported Browsers} Any modern desktop or mobile browser with basic support for inline SVG in HTML5 in standards mode should be able to run jQuery.Feyn. However, we do not guarantee that all of them will display the same SVG exactly on your screen, due to their disparities in SVG rendering and support level. Also note that mobile browsers often show a lot of quirks that are hard to work around because of their limitations and different UI assumptions. The following provides an incomplete list of supported browsers: \begin{itemize} \item Firefox 4+ \item Chrome 7+ \item Opera 11.6+ \item Safari 5.1+ \item IE 9+ \end{itemize} Personally, I recommend Firefox 24+ and Chrome 28+, on which my testing will be conducted continually. There is no doubt that newer browsers always have better support for SVG and hence better support for jQuery.Feyn. \subsection{Bug Reports and Comments} The preferred way to report bugs is to use the GitHub issue tracker: \begin{center} \url{https://github.com/photino/jquery-feyn/issues} \end{center} Of course, you can also email me at \href{mailto:[email protected]}{[email protected]} or \href{mailto:[email protected]}{[email protected]}.
When reporting bugs, you should be as specific as possible about the problem so that we can easily reproduce it. If possible, please test them on Firefox 24+ and Chrome 28+ to get rid of your browser's quirks. Comments, questions, and requests for adding more features are also welcome. \subsection{License} \begin{verbatim} Copyright (C) 2013 by Zan Pan <[email protected]> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \end{verbatim} \section{Complete Reference of Options\label{sec:options}} In the following, we provide a complete list of jQuery.Feyn's options. \texttt{String}, \texttt{Number}, \texttt{Boolean}, \texttt{Array}, and \texttt{Object} are JavaScript's data types. Also note that the bold text in the parentheses is the corresponding default value. 
\begin{description} \item[xmlns] : \texttt{String} ( \textbf{"http://www.w3.org/2000/svg"} ) \\ Sets the \texttt{xmlns} attribute of \texttt{<svg>} which binds the SVG namespace \item[xlink] : \texttt{String} ( \textbf{"http://www.w3.org/1999/xlink"} ) \\ Sets the \texttt{xlink:href} attribute of \texttt{<svg>} which defines an IRI reference type as a URI \item[version] : \texttt{String} ( \textbf{"1.1"} ) \\ Sets the \texttt{version} attribute of \texttt{<svg>} which indicates the SVG language version \item[x] : \texttt{Number} ( length | \textbf{0} ) \\ Sets the \texttt{x} attribute of \texttt{<svg>} which indicates an x-axis coordinate in the user coordinate system \item[y] : \texttt{Number} ( length | \textbf{0} ) \\ Sets the \texttt{y} attribute of \texttt{<svg>} which indicates a y-axis coordinate in the user coordinate system \item[width] : \texttt{Number} ( length | \textbf{200} ) \\ Sets the \texttt{width} attribute of \texttt{<svg>} which indicates a horizontal length in the user coordinate system \item[height] : \texttt{Number} ( length | \textbf{200} ) \\ Sets the \texttt{height} attribute of \texttt{<svg>} which indicates a vertical length in the user coordinate system \item[title] : \texttt{String} ( text ) \\ Sets the content for the \texttt{<title>} element which displays a title for the Feynman diagram \item[description] : \texttt{String} ( text | \textbf{"Feynman diagram generated by jQuery.Feyn"} ) \\ Sets the content for the \texttt{<desc>} element which describes the Feynman diagram \item[standalone] : \texttt{Boolean} ( true | \textbf{false} ) \\ Enables or disables the SVG code editor to make fine adjustments and save as a standalone SVG file \item[selector] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether or not to set \texttt{id} and \texttt{class} attributes for SVG elements \item[grid] : \texttt{Object} \begin{description} \item[show] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether or not to display a
grid system to facilitate your drawing \item[unit] : \texttt{Number} ( length | \textbf{20} ) \\ Sets the length of subdivision for the grid system \end{description} \item[color] : \texttt{String} ( paint | \textbf{black} ) \\ Sets the global \texttt{stroke} attribute for SVG elements which defines the color of the outline \item[thickness] : \texttt{Number} ( length | \textbf{1.6} ) \\ Sets the global \texttt{stroke-width} attribute for SVG elements which specifies the width of the outline \item[tension] : \texttt{Number} ( parameter | \textbf{1} ) \\ Sets the global parameter of arc radius and zigzag amplitude \item[ratio] : \texttt{Number} ( parameter | \textbf{1} ) \\ Sets the global parameter of elliptical arcs for fermion, scalar, and ghost loop propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Sets the global \texttt{clockwise} parameter for propagators \item[incoming] : \texttt{Object} \begin{description} \item[i1, i2, i3, \ldots] : \texttt{String} ( position ) \\ Sets the coordinate pairs of graph nodes for incoming particles \end{description} \item[outgoing] : \texttt{Object} \begin{description} \item[o1, o2, o3, \ldots] : \texttt{String} ( position ) \\ Sets the coordinate pairs of graph nodes for outgoing particles \end{description} \item[vertex] : \texttt{Object} \begin{description} \item[v1, v2, v3, \ldots] : \texttt{String} ( position ) \\ Sets the coordinate pairs of graph nodes for vertices \end{description} \item[auxiliary] : \texttt{Object} \begin{description} \item[a1, a2, a3, \ldots] : \texttt{String} ( position ) \\ Sets the coordinate pairs of graph nodes for miscellaneous symbols \end{description} \item[fermion] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which fermion propagators are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>}
into which fermion propagators are grouped \item[tension] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of arc radius for fermion propagators \item[ratio] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of elliptical arcs for fermion propagators \item[arrow] : \texttt{Boolean} ( \textbf{true} | false ) \\ Determines whether or not to show arrows for fermion propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Sets the direction of arrows for arc and loop fermion propagators \item[line] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for fermion lines \item[arc] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for fermion arcs \item[loop] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for fermion loops \end{description} \item[photon] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which photon propagators are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which photon propagators are grouped \item[tension] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of arc radius for photon propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether the first wiggle starts up or down for photon propagators \item[period] : \texttt{Number} ( parameter | \textbf{5} ) \\ Sets the period parameter for photon propagators \item[amplitude] : \texttt{Number} ( parameter | \textbf{5} ) \\ Sets the amplitude parameter for photon propagators \item[line] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for photon lines \item[arc] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for photon
arcs \item[loop] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for photon loops \end{description} \item[scalar] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which scalar propagators are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which scalar propagators are grouped \item[tension] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of arc radius for scalar propagators \item[ratio] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of elliptical arcs for scalar propagators \item[arrow] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether or not to show arrows for scalar propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Sets the direction of arrows for arc and loop scalar propagators \item[dash] : \texttt{String} ( dasharray | \textbf{"5 5"} ) \\ Sets the \texttt{stroke-dasharray} attribute for \texttt{<g>} into which scalar propagators are grouped \item[offset] : \texttt{Number} ( length | \textbf{2} ) \\ Sets the \texttt{stroke-offset} attribute for \texttt{<g>} into which scalar propagators are grouped \item[line] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for scalar lines \item[arc] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for scalar arcs \item[loop] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for scalar loops \end{description} \item[ghost] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which ghost propagators are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the
\texttt{stroke-width} attribute for \texttt{<g>} into which ghost propagators are grouped \item[tension] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of arc radius for ghost propagators \item[ratio] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of elliptical arcs for ghost propagators \item[arrow] : \texttt{Boolean} ( \textbf{true} | false ) \\ Determines whether or not to show arrows for ghost propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Sets the direction of arrows for arc and loop ghost propagators \item[dotsep] : \texttt{Number} ( length | \textbf{8} ) \\ Sets the \texttt{stroke-dasharray} attribute for \texttt{<g>} into which ghost propagators are grouped \item[offset] : \texttt{Number} ( length | \textbf{5} ) \\ Sets the \texttt{stroke-offset} attribute for \texttt{<g>} into which ghost propagators are grouped \item[line] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for ghost lines \item[arc] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for ghost arcs \item[loop] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for ghost loops \end{description} \item[gluon] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which gluon propagators are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which gluon propagators are grouped \item[tension] : \texttt{Number} ( parameter | \textbf{inherit} ) \\ Sets the parameter of arc radius for gluon propagators \item[clockwise] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether the first wiggle starts up or down for gluon propagators \item[width] : \texttt{Number} ( length | \textbf{15} ) \\ Sets the coil width of gluon
propagators \item[height] : \texttt{Number} ( length | \textbf{15} ) \\ Sets the coil height of gluon propagators \item[factor] : \texttt{Number} ( parameter | \textbf{0.75} ) \\ Sets the factor parameter for gluon propagators \item[percent] : \texttt{Number} ( parameter | \textbf{0.6} ) \\ Sets the percent parameter for gluon propagators \item[scale] : \texttt{Number} ( parameter | \textbf{1.15} ) \\ Sets the scale parameter for gluon arcs and loops \item[line] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for gluon lines \item[arc] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for gluon arcs \item[loop] : \texttt{String} ( connections ) \\ Sets the directed edges between graph nodes for gluon loops \end{description} \item[symbol] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which symbols are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which symbols are grouped \item[s1, s2, s3, \ldots] : \texttt{Array} \begin{description} \item[{sn[0]}] : \texttt{String} ( position ) \\ Sets the coordinates of graph nodes for the symbol \item[{sn[1]}] : \texttt{Number} ( angle ) \\ Sets the x-axis-rotation angle for the symbol \item[{sn[2]}] : \texttt{String} ( "arrow" | "blob" | "bubble" | "condensate" | "hadron" | "zigzag" ) \\ Sets the symbol type \item[{sn[3]}] : \texttt{Number} ( parameter | \textbf{20} ) \\ Sets the distance parameter for the symbol \item[{sn[4]}] : \texttt{Number} ( parameter | \textbf{4} ) \\ Sets the height parameter for the symbol \item[{sn[5]}] : \texttt{Boolean} ( true | \textbf{false} ) \\ Enables or disables a variant for the symbol \end{description} \end{description} \item[node] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\
Sets the \texttt{stroke} attribute for \texttt{<g>} into which nodes are grouped \item[thickness] : \texttt{Number} ( length | \textbf{inherit} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which nodes are grouped \item[show] : \texttt{Boolean | String} ( \textbf{false} | "i" | "o" | "v" | "a" | ... | "iova" ) \\ Determines whether or not to show nodes \item[type] : \texttt{String} ( "box" | "boxtimes" | "cross" | \textbf{"dot"} | "otimes" ) \\ Sets the node type \item[radius] : \texttt{Number} ( length | \textbf{3} ) \\ Sets the radius parameter of nodes \item[fill] : \texttt{String} ( paint | \textbf{"white"} ) \\ Sets the \texttt{fill} attribute for \texttt{<g>} into which nodes are grouped \end{description} \item[label] : \texttt{Object} \begin{description} \item[color] : \texttt{String} ( paint | \textbf{inherit} ) \\ Sets the \texttt{stroke} attribute for \texttt{<g>} into which labels are grouped \item[thickness] : \texttt{Number} ( length | \textbf{0} ) \\ Sets the \texttt{stroke-width} attribute for \texttt{<g>} into which labels are grouped \item[fill] : \texttt{String} ( paint | \textbf{"white"} ) \\ Sets the \texttt{fill} attribute for \texttt{<g>} into which labels are grouped \item[family] : \texttt{String} ( family-name | \textbf{"Georgia"} ) \\ Sets the \texttt{font-family} attribute for \texttt{<g>} into which labels are grouped \item[size] : \texttt{Number} ( length | \textbf{15} ) \\ Sets the \texttt{font-size} attribute for \texttt{<g>} into which labels are grouped \item[weight] : \texttt{String} ( "normal" | "bold" | "bolder" | "lighter" ) \\ Sets the \texttt{font-weight} attribute for \texttt{<g>} into which labels are grouped \item[face] : \texttt{String} ( "normal" | \textbf{"italic"} | "oblique" ) \\ Sets the \texttt{font-style} attribute for \texttt{<g>} into which labels are grouped \item[align] : \texttt{String} ( "start" | \textbf{"middle"} | "end" ) \\ Sets the \texttt{text-anchor} attribute for \texttt{<g>}
into which labels are grouped \item[t1, t2, t3, \ldots] : \texttt{Array} \begin{description} \item[{tn[0]}] : \texttt{String} ( position ) \\ Sets the coordinates of graph nodes for the label \item[{tn[1]}] : \texttt{String} ( text ) \\ Sets the text of the label as the content of \texttt{<tspan>} \item[{tn[2]}] : \texttt{Number} ( length | \textbf{18} ) \\ Sets the \texttt{width} attribute for \texttt{<foreignObject>} \item[{tn[3]}] : \texttt{Number} ( length | \textbf{30} ) \\ Sets the \texttt{height} attribute for \texttt{<foreignObject>} \end{description} \end{description} \item[image] : \texttt{Object} \begin{description} \item[m1, m2, m3, \ldots] : \texttt{Array} \begin{description} \item[{mn[0]}] : \texttt{String} ( position ) \\ Sets the coordinates of the position for including an external image \item[{mn[1]}] : \texttt{String} ( file ) \\ Sets the path for the external image file \item[{mn[2]}] : \texttt{Number} ( length | \textbf{32} ) \\ Sets the \texttt{width} attribute for the image \item[{mn[3]}] : \texttt{Number} ( length | \textbf{32} ) \\ Sets the \texttt{height} attribute for the image \end{description} \end{description} \item[mathjax] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether or not to use MathJax to typeset mathematics in labels \item[ajax] : \texttt{Boolean} ( true | \textbf{false} ) \\ Determines whether or not to merge the code of external SVG images directly \end{description} \section{Summary} jQuery.Feyn is a tool to draw Feynman diagrams in browsers, which utilizes the power of SVG. We have explained how to use it and presented some examples of its building blocks. It is always hard to reconcile ease of use with expressiveness. We encourage users to modify the generated code manually to make fine adjustments. As an open standard with a history of over a decade, SVG offers rich graphic features and effects, which makes it rewarding to learn in depth.
This is also part of the reason why we introduce jQuery.Feyn to physicists for preparing their publications. \section*{Acknowledgements} We are grateful to GitHub for hosting our project. We also acknowledge all the people who sent us feedback during the testing phase. \section{Usage} \subsection{Getting Started} You can start your tour with jQuery.Feyn's online demo: \begin{center} \url{http://photino.github.io/jquery-feyn/demo.html} \end{center} Settings for Feynman diagrams are in the form of JavaScript's liberal object notation, whose simplicity and extensibility have made it possible for jQuery.Feyn to be both lightweight and powerful. We hope you will enjoy this feature if it is still unfamiliar to you. Knowledge of jQuery is not necessary to use jQuery.Feyn, but getting a feel for JavaScript syntax will make your exploration easier. More importantly, you should be familiar with the SVG markup language\cite{JD02}. Unlike PostScript, SVG code is highly readable, so grasping it in a short time is not a problem. To use jQuery.Feyn, the first thing you should do is load the scripts found in the distribution: \begin{Verbatim} <script src="js/jquery-2.0.2.min.js"></script> <script src="js/jquery.feyn-1.0.0.min.js"></script> \end{Verbatim} Please note that the jQuery library comes first. After this, you can proceed to configure your desired Feynman diagram like \begin{Verbatim}[frame=single,rulecolor=\color{brown},% xleftmargin=3mm,numbers=left,numbersep=4pt] <script> $(document).ready(function() { $("#container").feyn({ incoming: {i1: "20,180", i2: "180,180"}, outgoing: {o1: "20,20", o2: "180,20"}, vertex: {v1: "100,140", v2: "100,60"}, fermion: {line: "i1-v1-i2,o2-v2-o1"}, photon: {line: "v1-v2"} }); }); </script> \end{Verbatim} The jQuery ID selector \verb|$("#container")| can also be replaced by any other selector that selects a unique block-level element in the document, which serves as the container of jQuery.Feyn's SVG output.
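Note that every graph node above is given as an \texttt{"x,y"} position string in the SVG user coordinate system. The following standalone sketch shows how such strings map to numeric coordinates; the helper \texttt{parsePosition} is purely illustrative and not part of jQuery.Feyn's API:

```javascript
// Graph nodes such as "100,140" are "x,y" strings in the SVG user
// coordinate system. This illustrative parser (not jQuery.Feyn's
// actual internals) turns one into numeric coordinates.
function parsePosition(str) {
  const [x, y] = str.split(",").map(Number);
  return {x, y};
}

// The vertex v1 from the example above:
const v1 = parsePosition("100,140");
```

Remember that in SVG the y axis points downward, so \texttt{"20,180"} is near the bottom-left corner of a $200\times200$ canvas.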
The minimal example illustrated above represents a QED process (see Figure~\ref{fig:minimal}). \begin{figure}[!ht] \centering \ifpdf \includegraphics[scale=0.8]{minimal.pdf} \else \includegraphics[scale=0.8]{minimal.eps} \fi \caption{The minimal example of jQuery.Feyn's output\label{fig:minimal}} \end{figure} As can be seen, jQuery.Feyn has done most of the work automatically. If you are unsatisfied with the output of jQuery.Feyn, or if you would like to add a graphics element that is not provided by jQuery.Feyn, you can always modify the SVG code manually. By setting the \texttt{standalone} option to \texttt{true}, you can edit the source code for your diagram in the textarea. \subsection{Default Options} The default options of jQuery.Feyn's \texttt{Feyn} constructor are listed as follows: \begin{Verbatim} { xmlns: "http://www.w3.org/2000/svg", xlink: "http://www.w3.org/1999/xlink", version: "1.1", x: 0, y: 0, width: 200, height: 200, title: "", description: "Feynman diagram generated by jQuery.Feyn", standalone: false, selector: false, grid: {show: false, unit: 20}, color: "black", thickness: 1.6, tension: 1, ratio: 1, clockwise: false, incoming: {}, outgoing: {}, vertex: {}, auxiliary: {}, fermion: {arrow: true}, photon: {period: 5, amplitude: 5}, scalar: {arrow: false, dash: "5 5", offset: 2}, ghost: {arrow: true, thickness: 3, dotsep: 8, offset: 5}, gluon: {width: 15, height: 15, factor: 0.75, percent: 0.6, scale: 1.15}, symbol: {}, node: {show: false, thickness: 1, type: "dot", radius: 3, fill: "white"}, label: {family: "Georgia", size: 15, face: "italic"}, image: {}, mathjax: false, ajax: false } \end{Verbatim} As you have already seen, they can be overridden by passing an options object to the \texttt{feyn} method. However, options are not checked in any way, so setting bogus option values may lead to odd errors.
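Because only the options you pass need to differ from the defaults, a sparse options object is enough; the plugin presumably merges it into the defaults in the usual jQuery fashion (e.g., via \verb|$.extend|). The following standalone sketch mimics such a merge in plain JavaScript; the helper name \texttt{mergeOptions} and the trimmed-down \texttt{defaults} object are illustrative, not jQuery.Feyn's actual internals:

```javascript
// A trimmed-down copy of the defaults listed above (for illustration).
const defaults = {
  width: 200, height: 200,
  color: "black", thickness: 1.6,
  grid: {show: false, unit: 20},
  fermion: {arrow: true}
};

// Recursively merge `user` into a copy of `base`, so options the user
// omits keep their default values (mimicking jQuery's $.extend(true, ...)).
function mergeOptions(base, user) {
  const out = {...base};
  for (const key of Object.keys(user)) {
    const val = user[key];
    out[key] = (val && typeof val === "object" && !Array.isArray(val))
      ? mergeOptions(base[key] || {}, val)
      : val;
  }
  return out;
}

// Overriding only color and grid.show leaves all other defaults intact.
const opts = mergeOptions(defaults, {color: "blue", grid: {show: true}});
```

Here \texttt{opts.grid.unit} keeps its default of \texttt{20} even though \texttt{grid} was partially overridden, which is exactly why a minimal options object suffices in practice.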
For JavaScript internal errors other than syntax errors, jQuery.Feyn will write the error information to the container of your Feynman diagram to remind you of the problem. A complete list of available options is given in Appendix~\ref{sec:options}. \subsection{Tips, Tricks, and Troubleshooting} \begin{itemize} \item You can edit the generated SVG code directly to make fine adjustments. The corresponding SVG output will be updated immediately when you trigger a text change event by clicking outside of the textarea. It is your own responsibility to ensure the validity of your SVG code. Note that reloading the page will discard your edits, so please save the changes manually in an external SVG file. You can use the two textareas in this way: one for testing, and the other for producing. \item Simple labels such as particle names, momenta, data, and comments can be typeset with jQuery.Feyn's \texttt{label} option. It supports subscripts, superscripts, and bar and tilde accents by using the \texttt{dx} and \texttt{dy} attributes of \texttt{<tspan>}. Complicated mathematical expressions should be included as external SVG images with the \texttt{image} option. Troy Henderson's \href{http://www.tlhiv.org/ltxpreview/}{LaTeX Previewer} provides a user-friendly utility for generating LaTeX output\cite{TH12}. \item For special characters such as Greek letters and mathematical operators, it is recommended to input the Unicode entity by citing its decimal number; for example, $\alpha$ can be accessed by \verb|&#945;|. A list of frequently used characters can be found at \href{http://www.ascii-code.com/html-symbol.php}{ascii-code.com/html-symbol}. \item If you are familiar with the Mathematical Markup Language (MathML), you can also include mathematical expressions by adding the \texttt{<foreignObject>} element manually. Please check \href{http://caniuse.com/mathml}{caniuse.com/mathml} to see whether or not your browser has good support for MathML.
\item When \href{http://www.mathjax.org/}{MathJax} is available, you can set jQuery.Feyn's \texttt{mathjax} option to \texttt{true} to typeset mathematics in TeX or LaTeX. This functionality also relies on the browser's support for the \texttt{<foreignObject>} element. \item SVG files can be converted to \href{http://image.online-convert.com/convert-to-eps}{EPS} and \href{http://document.online-convert.com/convert-to-pdf}{PDF} online. If your SVG code includes some external SVG files, please set jQuery.Feyn's \texttt{ajax} option to \texttt{true} to merge their code directly, or copy, paste, and modify them manually. Before conversion, you should \href{http://validator.w3.org/}{check} the markup validity of your SVG code. \item Chrome does not support loading local files via Ajax by default. You should start Google Chrome with the \texttt{--disable-web-security} or \texttt{--allow-file-access-from-files} option, otherwise you will get a network error. \item Firefox 23 or below has a \href{https://bugzilla.mozilla.org/show_bug.cgi?id=600207}{bug} in rendering the \texttt{<image>} element. Please update your browser to version 24 or later. \end{itemize}
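The decimal number in a character reference such as \verb|&#945;| is simply the character's Unicode code point, so you can look up or verify these values with a line of JavaScript (a standalone check, independent of jQuery.Feyn itself):

```javascript
// "&#945;" in SVG/HTML markup denotes the character with decimal
// code point 945, i.e., the Greek small letter alpha.
const alpha = String.fromCodePoint(945);

// Going the other way recovers the decimal number to cite in markup.
const code = "α".codePointAt(0);
```

The same works for any symbol; for instance, \verb|&#8594;| (decimal 8594, hexadecimal 2192) yields a rightwards arrow.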
\section{Introduction} Machine learning methods \cite{haykin:08,goodfellow:16,bishop:06} have found applications in condensed matter physics detecting phases of matter and transitions between these in both quantum and classical systems (see, for example, Refs.~\cite{ronhovde:11,nussinov:16,carrasquilla:17,chng:17,tanaka:17,kashiwa:18x}). Different approaches exist, such as lasso \cite{santosa:86,tibshirani:94}, sparse regression \cite{mateos:10,candela:05}, classification and regression trees \cite{rokach:14,shalev:14,mehta:02}, as well as boosting and support vector machines \cite{james:13,hsu:10,platt:99,widodo:07,joachims:98}. Neural networks \cite{lecun:98,zhang:90} are the most versatile and powerful tools, which is why they are commonly used in scientific applications. Convolutional neural networks (CNNs), in particular, are specialized neural networks for processing data with a grid-like topology. Familiar examples include time-series data, where samples are taken at regular intervals, and images (two-dimensional data sets). The primary difference between neural networks and convolutional neural networks lies in how hidden layers are managed. In CNNs, a {\em convolution} is applied to divide the feature space into smaller sections emphasizing local trends. Because of this, CNNs are ideally suited to study physical models on hypercubic lattices. Recently, it was demonstrated that CNNs can be applied to the detection of phase transitions in Edwards-Anderson Ising spin glasses on cubic lattices \cite{munoz:19x}. It was shown that the critical behavior of a spin glass with bimodal disorder can be inferred by training the model using data with Gaussian interactions between the spins. The use of CNNs also results in a reduced numerical effort, which means one could potentially access the larger system sizes often needed to overcome corrections to scaling in numerical studies.
As such, pairing specialized hardware to simulate Ising systems \cite{alvarez:10a,banos:12,baity:14} with machine learning techniques might one day elucidate properties of spin glasses and related systems. However, as we show in this work, the use of poor input data can result in erroneous or even unphysical results. This (here inadvertent) {\em poisoning} of the training set is well known in computer science, where small amounts of bad data can strongly affect the accuracy of neural network systems. For example, Steinhardt {\em et al.}~\cite{steinhardt:17x} demonstrated that already small amounts of bad data can result in a sizable drop in the classification accuracy. References \cite{jagielsky:18,alfeld:16,shi:19} furthermore demonstrate that data poisoning can have a strong effect in machine learning. Reference \cite{jiang:19} focuses on adversarial manipulations \cite{nelson:08,newell:14} of simulational and experimental data in condensed matter physics applications. In particular, it shows that changing individual variables (e.g., a pixel in a data set) can generate misleading predictions. This suggests that results from machine learning algorithms sensitively rely on the quality of the training input. In this work, we demonstrate that the use of poorly-thermalized Monte Carlo data or simply mislabeled data can result in erroneous estimates of the critical temperatures of Ising spin-glass systems. As such, we focus less on adversarial cases, and more on accidental cases of poor data preparation. We train a CNN with data from a Gaussian Ising spin glass in three space dimensions and then use data generated for a bimodal Ising spin glass to predict the transition temperature of the same model system, albeit with different disorder. In addition, going beyond the work presented in Ref.~\cite{jiang:19}, we introduce an analysis pipeline that allows for the precise determination of the critical temperature.
While good data results in a relatively accurate prediction, the use of poorly-thermalized or mislabeled data produces misleading results. This should serve as a cautionary tale when using machine learning techniques for physics applications. The paper is structured as follows. In Sec.~\ref{sec:mod} we introduce the model used in the study, as well as simulation parameters for both training and prediction data. In addition, we outline the implementation of the CNN as well as the approach used to extract the thermodynamic critical temperature, followed by results and concluding remarks. \section{Model and numerical details} \label{sec:mod} To illustrate the effects of poisoned training sets we study the three-dimensional Edwards-Anderson Ising spin glass \cite{edwards:75,binder:86,mezard:87,young:98,stein:13} with a neural network implemented in TensorFlow \cite{albadi:16}. The model is described by the Hamiltonian \begin{equation} {\mathcal H}=-\sum_{\langle i,j \rangle} J_{ij} s_i s_j , \end{equation} where each $J_{ij}$ is a random variable drawn from a given symmetric probability distribution, either bimodal, i.e., $\pm 1$ with equal probability, or Gaussian with zero mean and unit variance. In addition, $s_i = \pm 1$ represent Ising spins, and the sum is over nearest neighbors on a cubic lattice with $N$ sites. Because spin glasses do not exhibit spatial order below the spin-glass transition, we measure the site-dependent spin overlap \cite{sherrington:75,parisi:80,parisi:83} \begin{equation} \label{overlaps} q_i = s^{\alpha}_{i} s^{\beta}_{i}, \end{equation} between replicas $\alpha$ and $\beta$. In the overlap space, the system is reminiscent of an Ising ferromagnet, i.e., approaches for ferromagnetic systems introduced in Refs.~\cite{carrasquilla:17,chng:17} can be used. For low temperatures, $q = (1/N)\sum_i q_i \to 1$, whereas for $T \to \infty$, $q \to 0$. For an infinite system, $q$ abruptly drops to zero at the critical temperature $T_c$.
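In practice, Eq.~\eqref{overlaps} and the average $q$ are computed directly from pairs of replica spin configurations. A short standalone sketch (with made-up four-site configurations, purely for illustration) shows the two limiting cases:

```javascript
// Site overlap q_i = s_i^alpha * s_i^beta between two replicas,
// and the average q = (1/N) * sum_i q_i.
function overlap(replicaA, replicaB) {
  const qi = replicaA.map((s, i) => s * replicaB[i]);
  const q = qi.reduce((sum, v) => sum + v, 0) / qi.length;
  return {qi, q};
}

// Identical replicas (deep in the ordered phase): q = 1.
const frozen = overlap([1, -1, 1, 1], [1, -1, 1, 1]);

// Replicas that disagree on half the sites (high temperature): q = 0.
const hot = overlap([1, -1, 1, -1], [1, 1, -1, -1]);
```

The site-resolved array \texttt{qi} carries the spatial pattern fed to the CNN, while the scalar \texttt{q} is the order-parameter-like average discussed in the text.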
Therefore, the overlap space is well suited to detect the existence of a phase transition in a disordered system, even beyond spin glasses. In the overlap space, the spin-glass phase transition can be visually seen as the formation of disjoint islands with identical spin configurations. As such, the problem of phase identification in physical systems is reminiscent of an image classification problem, where CNNs have been shown to be highly efficient compared to fully-connected neural networks (FCNs). \subsection{Data generation} We use parallel tempering Monte Carlo \cite{hukushima:96} to generate configurational overlaps. Details about the parameters used in the Monte Carlo simulations are listed in Tab.~\ref{tab:train} for the training data with Gaussian disorder. The parameters for the prediction data with bimodal disorder are listed in Tab.~\ref{tab:test}. \begin{table}[h] \caption{ Parameters for the training samples with Gaussian disorder. $L$ is the linear size of a system with $N = L^3$ spins, $N_{\rm sa}$ is the number of samples, $N_{\rm sw}$ is the number of Monte Carlo sweeps for each of the replicas for a single sample, $T_{\rm min}$ and $T_{\rm max}$ are the lowest and highest temperatures simulated, $N_{T}$ is the number of temperatures used in the parallel tempering Monte Carlo method for each system size $L$, and $N_{\rm con}$ is the number of configurational overlaps for a given temperature in each instance.
\label{tab:train} } \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}c r r r r r l } \hline \hline $L$ & $N_{\rm sa}$ & $N_{\rm sw}$ & $T_{\rm min}$ & $T_{\rm max}$ & $N_{T}$ & $N_{\rm con}$ \\ \hline $8$ & $20000$ & $50000$ & $0.80$ &$1.21$ &$20$ & $100$\\ \hline $10$ & $10000$ & $40000$ & $0.80$ &$1.21$ &$20$ & $100$\\ \hline $12$ & $20000$ & $655360$ & $0.80$ &$1.21$ &$20$ & $100$\\ \hline $14$ & $10000$ & $1050000$ & $0.80$ &$1.21$ &$20$ & $100$\\ \hline $16$ & $5000$ & $1050000$ & $0.80$ &$1.21$ &$20$ & $100$\\ \hline \hline \end{tabular*} \end{table} \begin{table}[h] \caption{ Parameters for the prediction samples with bimodal disorder. $L$ is the linear size of the system, $N_{\rm sa}$ is the number of samples, $N_{\rm sw}$ is the number of Monte Carlo sweeps for each of the replicas of a single sample, $T_{\rm min}$ and $T_{\rm max}$ are the lowest and highest temperatures simulated, $N_{T}$ is the number of temperatures used in the parallel tempering method for each linear system size $L$, and $N_{\rm con}$ is the number of configurational overlaps for a given temperature in each instance. \label{tab:test} } \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}c r r r r r l } \hline \hline $L$ & $N_{\rm sa}$ & $N_{\rm sw}$ & $T_{\rm min}$ & $T_{\rm max}$ & $N_{T}$ & $N_{\rm con}$ \\ \hline $8$ & $15000$ & $80000$ & $1.05$ &$1.25$ &$12$ & $500$\\ \hline $10$ & $10000$ & $300000$ & $1.05$ &$1.25$ &$12$ & $500$\\ \hline $12$ & $4000$ & $300000$ & $1.05$ &$1.25$ &$12$ & $500$\\ \hline $14$ & $4000$ & $1280000$ & $1.05$ &$1.25$ &$12$ & $500$\\ \hline $16$ & $4000$ & $1280000$ & $1.05$ &$1.25$ &$12$ & $500$\\ \hline \hline \end{tabular*} \end{table} \subsection{CNN implementation} \label{cnn:imp} We use the same number of instances as in Ref.~\cite{katzgraber:06}, with $100$ configurational overlaps at each temperature for each instance.
Because the transition temperature with Gaussian disorder is $T_{\rm c} \approx 0.95$ \cite{marinari:98,katzgraber:05,katzgraber:06}, following Refs.~\cite{carrasquilla:17,carrasquilla:17b,tanaka:17}, for the training data we label the configurational overlaps from temperatures above $0.95$ as ``1'' and those from temperatures below $0.95$ as ``0.'' The parameters for the architecture of the convolutional neural network are listed in Tab.~\ref{tab:arch}. We adopt the single-layer structure of Ref.~\cite{tanaka:17}. All parameters were determined using separate validation sets, also generated from Monte Carlo simulations. \begin{table}[h] \caption{CNN architecture, parameters, and hardware details. \label{tab:arch} } \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}l l l } \hline \hline Number of layers & $1$ \\ \hline Channels in each layer& $5$\\ \hline Filter size & $3\times3\times3$\\ \hline Stride & $2$\\ \hline Activation function & ReLU\\ \hline Optimizer & AdamOptimizer($10^{-4}$)\\ \hline Batch size & $10^3$\\ \hline Iterations & $10^4$\\ \hline Software & TensorFlow (Python)\\ \hline Hardware & Lenovo x86 HPC cluster with a dual-GPU \\ & NVIDIA Tesla K80 GPU and 128 GB RAM\\ \hline \hline \end{tabular*} \end{table} Note that we use between $4000$ and $10000$ disorder instances for the bimodal prediction data, which is approximately $1/3$ of the numerical effort needed when estimating the phase transition directly via a finite-size scaling analysis of Monte Carlo data, as done, for example, in Ref.~\cite{katzgraber:06}. As such, {\em pairing high-quality Monte Carlo simulations with machine learning techniques can result in large computational cost savings}. \subsection{Data analysis} \label{sec:analysis} Because the configurational overlaps [Eq.~\eqref{overlaps}] encode information about the phases, we expect different phases to display different grid-like overlap patterns. 
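Treating each overlap configuration as an $L\times L\times L$ grid, the single-layer network of Tab.~\ref{tab:arch} can be sketched as a plain-numpy forward pass (illustrative only: random weights, no training loop; the actual implementation uses TensorFlow):

```python
import numpy as np

def conv3d_relu(x, w, b, stride=2):
    """Valid 3D convolution with stride, followed by ReLU.

    x: (L, L, L) input, w: (C, k, k, k) filters, b: (C,) biases.
    """
    C, k = w.shape[0], w.shape[1]
    n_out = (x.shape[0] - k) // stride + 1
    out = np.zeros((C, n_out, n_out, n_out))
    for c in range(C):
        for i in range(n_out):
            for j in range(n_out):
                for l in range(n_out):
                    patch = x[i*stride:i*stride + k,
                              j*stride:j*stride + k,
                              l*stride:l*stride + k]
                    out[c, i, j, l] = np.sum(patch * w[c]) + b[c]
    return np.maximum(out, 0.0)  # ReLU activation

rng = np.random.default_rng(1)
L, C, k = 8, 5, 3                            # input size, channels, filter size
q = rng.choice([-1.0, 1.0], size=(L, L, L))  # one overlap configuration

w = rng.normal(scale=0.1, size=(C, k, k, k))
b = np.zeros(C)
features = conv3d_relu(q, w, b, stride=2).ravel()

# Dense read-out to two classes ("0" below T_c, "1" above) with softmax.
W_out = rng.normal(scale=0.1, size=(2, features.size))
logits = W_out @ features
p = np.exp(logits - logits.max())
p /= p.sum()

assert features.size == C * 3 ** 3           # (8 - 3)//2 + 1 = 3 per axis
assert abs(p.sum() - 1.0) < 1e-12
```

The softmax output $p$ plays the role of the classification probability analyzed below; in production the weights are of course trained with the Adam optimizer rather than drawn at random.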
Therefore, in the region of a specific phase, it is reasonable to expect that the classification probability for the CNN to identify the phase correctly should be larger than $50\%$. As such, it can be expected that when the classification probability is $0.5$, the system is at the system-size-dependent critical temperature. A thermodynamic estimate can then be obtained via the finite-size scaling method presented below. \begin{figure}[h] \includegraphics[width =\columnwidth]{./scaling_good.pdf} \caption{ Classification probabilities for different linear system sizes $L$ as a function of temperature $T$ for the prediction of the critical temperature of the bimodal Ising spin glass via a CNN trained with data from a Gaussian distribution. (a) Prediction probability for different system sizes $L$ near the phase transition temperature. The different data sets cross at $T_{\rm c} \sim 1.122$. (b) Measurement of $\nu_{\rm ml}$ by performing a linear fit in a double-logarithmic scale using the extremum points of the derivative of the prediction error with respect to the temperature. (c) Estimate of the critical temperature $T_{\rm c}$ using the coefficient of the linear term in Eq.~\eqref{leading_order} (normalized to $1$) with $L^{1/\nu_{\rm ml}}$ as the independent variable. The vertical dashed line shows the temperature where the slope vanishes, which corresponds to $T_{\rm c}$. (d) Finite-size scaling of the data using the previously estimated values of $\nu_{\rm ml}$ and $T_{\rm c}$. The data collapse onto a universal curve, indicating that the estimates are accurate.} \label{fig:good_data} \end{figure} Let us define the classification probability as a function of temperature and system size, $p(T,L)$, which can be used as a dimensionless quantity to describe the critical behavior. 
From the scaling hypothesis, we expect $p(T,L)$ to have the following behavior in the vicinity of the critical temperature $T_{\rm c}$: \begin{equation} \label{pt} \langle p(T,L)\rangle = \tilde{F}\left[L^{1/\nu_{\rm ml}}\left(T-T_{\rm c}\right)\right], \end{equation} where the average is over disorder realizations. Note that the critical exponent $\nu_{\rm ml}$ is different from the one calculated using physical quantities. Due to the limited system sizes that we have studied, finite-size scaling must be used to reliably calculate the critical parameters in the thermodynamic limit. Assuming that we are close enough to the critical temperature $T_{\rm c}$, the scaling function $\tilde{F}$ in Eq.~\eqref{pt} can be expanded as a third-order polynomial in $x=L^{1/\nu_{\rm ml}}\left(T-T_{\rm c}\right)$: \begin{equation} \label{leading_order} \langle p(T,L)\rangle \sim p_0 + p_1x+p_2x^2+p_3x^3. \end{equation} First, we evaluate $\nu_{\rm ml}$ by noting that, to leading order in $x$, the derivative of $\langle p(T,L)\rangle$ in Eq.~\eqref{leading_order} with respect to temperature has the following form: \begin{align} \label{nu-scaling} \frac{d\langle p(T,L)\rangle}{dT} \sim L^{1/\nu_{\rm ml}}\left[p_1+2p_2L^{1/\nu_{\rm ml}}\left(T-T_{\rm c}\right)+\right.\nonumber\\ \left.3p_3L^{2/\nu_{\rm ml}}\left(T-T_{\rm c}\right)^2\right]. \end{align} Therefore, the extremum of $\frac{d\langle p(T,L)\rangle}{dT}$ scales as \begin{equation} \label{derivative-scaling} \frac{d\langle p(T,L)\rangle}{dT}\Big|_{T=T^*}\sim L^{1/\nu_{\rm ml}}. \end{equation} A linear fit in a double-logarithmic scale then produces the value of $\nu_{\rm ml}$ (slope of the straight line), which is subsequently used to estimate $T_{\rm c}$. To do so, we turn back to Eq.~\eqref{leading_order} and note that, with $L^{1/\nu_{\rm ml}}$ as the independent variable, the coefficient of the linear term is proportional to $(T-T_{\rm c})$ and therefore changes sign at $T=T_{\rm c}$. 
Alternatively, we can vary $T_{\rm c}$ until the data for all system sizes collapse onto a common third-order polynomial curve. This is true because the scaling function $\tilde{F}$, as a function of $L^{1/\nu_{\rm ml}}\left(T-T_{\rm c}\right)$, is universal. The error bars can be computed using the bootstrap method. \begin{figure}[t!] \includegraphics[width = 0.5\textwidth]{./crossing_poisoned_1.pdf} \caption{ Classification probabilities for different system sizes $L$ for an Ising spin glass with bimodal disorder. 1\% of the labels have been mixed on average. There is no clear sign of the transition. } \label{fig:mixed_label} \end{figure} \section{Results using data without poisoning} Figure \ref{fig:good_data} shows results from the CNN trained with well-prepared (thermalized) data from a Gaussian distribution, predicting the phase transition of data from a bimodal disorder distribution. Figure \ref{fig:good_data}(a) shows the prediction probabilities for different linear system sizes $L$ as a function of temperature $T$. The curves cross the $p = 0.5$ line in the region of the transition temperature for the bimodal Ising spin glass. Figures \ref{fig:good_data}(b) and \ref{fig:good_data}(c) show the estimates of the exponent $\nu_{\rm ml}$ and the critical temperature $T_{\rm c}$, respectively, using the methods developed in Sec.~\ref{sec:analysis}. The critical temperature $T_{\rm c} = 1.122(6)$ is in good agreement with previous estimates (see, for example, Ref.~\cite{katzgraber:06}). Finally, in Fig.~\ref{fig:good_data}(d), the data points are plotted as a function of the reduced variable $x=L^{1/\nu_{\rm ml}}\left(T-T_{\rm c}\right)$ using the estimated values of the critical parameters. The universality of the scaling curve underlines the accuracy of the estimates. 
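The $\nu_{\rm ml}$ and $T_{\rm c}$ extraction of Sec.~\ref{sec:analysis} can be illustrated on synthetic classification probabilities that obey the scaling form of Eq.~\eqref{pt} exactly (a logistic scaling function with $\nu_{\rm ml}=1.5$ and $T_{\rm c}=1.12$, both chosen arbitrarily for this demonstration; the real analysis uses the measured CNN probabilities):

```python
import numpy as np

nu_true, Tc_true = 1.5, 1.12
Ls = np.array([8, 10, 12, 14, 16])
T = np.arange(0.90, 1.35, 0.005)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic p(T, L) obeying <p> = F[L^{1/nu} (T - Tc)].
p = np.array([sigmoid(L ** (1 / nu_true) * (T - Tc_true)) for L in Ls])

# Step 1: nu from the scaling of the maximal slope,
# max_T dp/dT ~ L^{1/nu}, so a log-log linear fit yields 1/nu.
max_slope = [np.gradient(p[i], T).max() for i in range(len(Ls))]
inv_nu = np.polyfit(np.log(Ls), np.log(max_slope), 1)[0]
nu_est = 1.0 / inv_nu

# Step 2: Tc from the sign change of the linear coefficient of p
# versus L^{1/nu} at fixed temperature.
lin_coeff = [np.polyfit(Ls ** (1 / nu_est), p[:, j], 1)[0]
             for j in range(len(T))]
Tc_est = T[np.argmin(np.abs(lin_coeff))]

assert abs(nu_est - nu_true) < 0.1
assert abs(Tc_est - Tc_true) < 0.01
```

With real data the same two fits are repeated over bootstrap resamples of the disorder instances to obtain the quoted error bars.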
\section{Results using poisoned training sets} Although we have shown that the predictions from a convolutional neural network can be precise, we still need to test how poisoned data sets impact the final prediction. First, we randomly mix the classification labels of the training samples with a probability of $1\%$, i.e., for a training set of $100$ samples, this means only one mislabeled sample on average. Then we train the network and use the same samples in the prediction stage. In contrast to Fig.~\ref{fig:good_data}, Fig.~\ref{fig:mixed_label} shows no clear sign of a phase transition. This means that mislabeling even a very small portion of the training data can strongly affect the outcome. Given the hierarchical structure of CNNs, errors can easily be amplified during propagation \cite{Rumelhart:88,hecht-nielsen:92}, which is a possible explanation of the observed behavior. \begin{figure}[h] \includegraphics[width = 0.46\textwidth]{./crossing_non_equilibrium.pdf} \caption{ Classification probabilities for different system sizes $L$ for an Ising spin glass with bimodal disorder. The Gaussian training data are not thermalized. There is no clear sign of a phase transition. } \label{fig:non-equ} \end{figure} Finally, we test the effects of poorly prepared training data, in this case training data that are not properly thermalized. Figure \ref{fig:non-equ} shows the prediction results using data with only $50\%$ of the Monte Carlo sweeps needed for thermalization of the Gaussian training samples. Although 50\% might seem extreme at first sight, it is important to emphasize that thermalization times (as well as time-to-solution) are typically distributed according to fat-tail distributions \cite{steiger:15}. In general, users perform at least a factor of $2$ of additional thermalization to ensure that most instances are in thermal equilibrium. As in the case where the labels were mixed, a transition cannot be clearly identified. 
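The label-poisoning step itself is simple to reproduce; a minimal sketch (each binary phase label is flipped independently with probability $1\%$; array sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
labels = rng.integers(0, 2, size=n)          # "0" below T_c, "1" above

flip = rng.random(n) < 0.01                  # poison 1% of labels on average
poisoned = np.where(flip, 1 - labels, labels)

frac = np.mean(poisoned != labels)
assert 0.008 < frac < 0.012                  # close to the nominal 1%
```

The CNN is then trained on `poisoned` instead of `labels`, with everything else unchanged.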
This is a strong indication that the training data need to be carefully prepared. We have also studied the effects of poorly-thermalized prediction data paired with well-thermalized training data (not shown). In this case, the impact on the prediction probabilities is small but not negligible. \section{Discussion} We have studied the effects of poisoned data sets when training CNNs to detect phase transitions in physical systems. Our results show that good training sets are a necessary requirement for good predictions. Small perturbations in the training set can lead to misleading results. We do note, however, that we might not have selected the best parameters for the CNN. Using cross-validation or bootstrapping might allow for a better tuning of the parameters and thus improve the quality of the predictions. Furthermore, due to the large number of predictors, overfitting is possible. This, however, can be alleviated by the introduction of penalty terms. Finally, the use of other activation functions and optimizers can also impact the results. This, together with the sensitivity towards the quality of the training data that we find in this work, suggests that machine learning techniques should be used with caution in physics applications. Garbage in, garbage out \ldots \begin{acknowledgments} We would like to thank Humberto Munoz Bauza and Wenlong Wang for fruitful discussions. This work is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No.~FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the U.S.~Government. The U.S.~Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. 
We thank Texas A\&M University for access to their Terra cluster. \end{acknowledgments}
\section{Hyperbolic anti-concentration bound}\label{sec:anticon} \subsection{Our result} In this section, we will prove an anti-concentration bound for random vectors with respect to the hyperbolic norm, which generalizes the result for PSD matrices in \cite{ay21}. In particular, an important tool we use is the hyperbolic Chernoff bound for random vectors in the hyperbolic cone (Theorem~\ref{thm:chernoff_cone}), together with a robust Littlewood-Offord theorem for hyperbolic cones (Theorem~\ref{thm:anticon_good}). \begin{theorem}[Hyperbolic anti-concentration theorem]\label{thm:anti-concen} Let $h_1, h_2$ be $m$-variate degree-$d$ hyperbolic polynomials with hyperbolic directions $e_1, e_2\in \mathbb{R}^m$, respectively. Let $y_1, y_2 \in \mathbb{R}^m$ be two vectors. Let $\{x^1_i\}_{i\in [n]}$ and $\{x^2_i\}_{i\in [n]}$ be two sequences of vectors such that $x_i^1\in \Lambda_{+,h_1}$ and $x_i^2\in (-\Lambda_{+,h_2})$, i.e., $\lambda_{\min,h_1}(x_i^{1})\geq 0$, $\lambda_{\max,h_2}(x_i^{2})\leq 0$ for all $i\in [n]$. Let $\tau \geq \frac{1}{\sqrt{\log d}}$. We further assume that $\lambda_{\max, h_1}(x^1_i)\leq \tau$ and $\lambda_{\min,h_2}(x^{2}_i)\geq -\tau$ for all $i\in [n]$. Moreover, for $j\in [2]$, we have $\sum_{i=1}^n \lambda_{\min,h_j}(x_i^j)^2\geq 1$. Then, for $\Delta \geq 20\tau \log d$, we have \begin{align*} \Pr_{\epsilon\sim \{-1,1\}^n}\left[\exists j\in [2]: ~\left\|\sum_{i=1}^n \epsilon_i x^j_i-y_j\right\|_{h_j} \leq \Delta \right]\leq O(\Delta). \end{align*} \end{theorem} \begin{proof} We follow the proof in \cite{ay21}, adapting it to hyperbolic polynomials. Let $f_j(\epsilon):=\sum_{i=1}^n \epsilon_ix_i^j$ for $j\in [2]$. We will first show that \begin{align*} \Pr_{\epsilon\sim \{\pm 1\}^n}[\exists j\in [2]: \lambda_{\max,h_j}(f_j(\epsilon)-y_j)\in [-\Delta, \Delta]]\leq O(\Delta), \end{align*} which implies the anti-concentration bound for the hyperbolic spectral norm. 
Let $p:=\frac{1}{20\tau^2\log d}$ and let $\pi: [n]\rightarrow [2p]$ be a random hash function that independently assigns each $i \in [n]$ to a uniformly random bucket in $[2p]$. For $i\in [2p]$, let $C_i:=\{j\in [n]: \pi[j]=i\}$ be the set of elements in the $i$-th bucket. Let $\gamma\sim \{\pm 1\}^{2p}$. For $j\in [2]$, define a new function $g_j(\gamma):\{\pm 1\}^{2p}\rightarrow \mathbb{R}^m$ as follows: \begin{align*} g_j(\gamma) := \sum_{i=1}^{2p} \gamma_i \cdot \sum_{k\in C_i} x_k^j. \end{align*} That is, we assign the same sign to all vectors hashed into the same bucket. \cite{ay21} proved that $f_j(\epsilon)$ and $g_j(\gamma)$ have the same distribution using a direct argument about the random hash function. Thus, it is also true in our case, and we just need to prove \begin{align*} \Pr_{\gamma\sim \{\pm 1\}^{2p}}[\exists j\in [2]: \lambda_{\max,h_j}(g_j(\gamma)-y_j)\in [-\Delta, \Delta]]\leq O(\Delta). \end{align*} For $j=1$, define the good bucket set \begin{align*} {\cal B}_{\mathrm{good}}^1:=\left\{c\in [2p]: \lambda_{\min,h_1}\left(\sum_{i\in \pi^{-1}(c)} x_i^1\right)\geq \frac{1}{4\tau p}\right\}. \end{align*} By Lemma~\ref{lem:bound_good_buckets}, with probability at least $1-e^{-p/2}$, we have $|{\cal B}_{\mathrm{good}}^1|\geq \frac{8}{5}p$. For $j=2$, define the good bucket set \begin{align*} {\cal B}_{\mathrm{good}}^2:=\left\{c\in [2p]: \lambda_{\max,h_2}\left(\sum_{i\in \pi^{-1}(c)} x_i^2\right)\leq -\frac{1}{4\tau p}\right\}. \end{align*} By considering $-x_i^2$ and applying Lemma~\ref{lem:bound_good_buckets}, we get that with probability at least $1-e^{-p/2}$, we have $|{\cal B}_{\mathrm{good}}^2|\geq \frac{8}{5}p$. By the pigeonhole principle and a union bound, with probability $1-2e^{-p/2}$, $|{\cal B}_{\mathrm{good}}^1 \cap {\cal B}_{\mathrm{good}}^2|\geq \frac{6}{5}p$. 
That is, for at least a $\frac{3}{5}$-fraction of the buckets $c\in [2p]$, \begin{align*} \lambda_{\min,h_1}\left(\sum_{k\in \pi^{-1}(c)}x_k^1\right)\geq \frac{1}{4\tau p}, ~~\text{and}~~ \lambda_{\max,h_2}\left(\sum_{k\in \pi^{-1}(c)}x_k^2\right)\leq -\frac{1}{4\tau p}. \end{align*} Thus, we can apply Theorem~\ref{thm:anticon_good} with $\alpha=\frac{3}{5}, \rho = \frac{1}{4\tau p}$ and get that \begin{align*} \Pr_{\gamma\sim \{-1,1\}^{2p}}\left[ \exists j\in [2]:~\lambda_{\max,h_j}\left(\sum_{i=1}^{2p} \gamma_i \cdot \sum_{k\in \pi^{-1}(i)}x_k^j-y_j\right)\in \left(-\frac{1}{2\tau p},0\right] \right]\leq O\left(\frac{1}{\sqrt{p}}\right)+2e^{-p/2}. \end{align*} Now, we transform back to the distribution of $f_j(\epsilon)$ and obtain the following bound: \begin{align}\label{eq:bound_small_interval} \Pr_{\epsilon\sim \{\pm 1\}^n}\left[\exists j\in [2]: \lambda_{\max,h_j}(f_j(\epsilon)-y_j)\in \left(-\frac{1}{2\tau p},0\right]\right]\leq O\left(\frac{1}{\sqrt{p}}\right)+2e^{-p/2}. \end{align} Note that, by our choice of parameters, $\Delta \geq \frac{1}{2\tau p}$. We can partition the interval $[-\Delta, 0]$ into $\lceil 2\tau p\Delta\rceil$ sub-intervals, each of length $\frac{1}{2\tau p}$. Since Eq.~\eqref{eq:bound_small_interval} holds for any $y_j\in \mathbb{R}^m$, we can use it to bound the probability of the event $\lambda_{\max,h_j}(f_j(\epsilon)-y_j)\in (-\frac{k}{2\tau p}, -\frac{k-1}{2\tau p}]$ by shifting $y_j':=y_j - \frac{k-1}{2\tau p}\cdot e_j$. Therefore, by the union bound, we have \begin{align*} \Pr_{\epsilon\sim \{\pm 1\}^n}\left[\exists j\in [2]: \lambda_{\max,h_j}(f_j(\epsilon)-y_j)\in [-\Delta,0]\right]\leq &~ \lceil 2\tau p\Delta\rceil\cdot \left(O\left(\frac{1}{\sqrt{p}}\right)+2e^{-p/2}\right)\\ = &~ O(\Delta \cdot \tau \sqrt{p})\\ = &~ O(\Delta), \end{align*} where the last step follows from $\tau \sqrt{p} = O(1)$ by our choice of parameters. We note that the same upper bound also holds for the interval $[0, \Delta]$. 
Hence, we complete the proof of the theorem. \end{proof} \subsection{Technical lemmas} To prove Theorem~\ref{thm:anti-concen}, we need a robust Littlewood–Offord theorem for hyperbolic cones. Theorems of this kind were previously proved by \cite{ost19} for polytopes and by \cite{ay21} for positive spectrahedrons. We first recall some definitions from \cite{ost19} about functions on the hypercube. \begin{definition}[Unateness] A function $F : \{-1, 1\}^n\rightarrow \{0, 1\}$ is unate if for all $i\in [n]$, $F$ is either increasing or decreasing with respect to the $i$th coordinate, i.e., \begin{align*} F(x_1,\dots,x_{i-1}, -1, x_{i+1},\dots,x_n) \leq&~ F(x_1,\dots,x_{i-1}, 1, x_{i+1},\dots,x_n)~~~\forall x\in \{\pm 1\}^n, ~~\text{or}\\ F(x_1,\dots,x_{i-1}, -1, x_{i+1},\dots,x_n) \geq&~ F(x_1,\dots,x_{i-1}, 1, x_{i+1},\dots,x_n)~~~\forall x\in \{\pm 1\}^n. \end{align*} \end{definition} Let $H,\overline{H}$ be the indicator sets of two unate functions with $H\subset \overline{H}$. The boundary of $H$ is denoted by $\partial H:=\overline{H}\backslash H$. \begin{definition}[Semi-thin] For $\alpha\in [0, 1]$, we say $\partial H$ is $\alpha$-semi thin if for all $x\in H$, at least an $\alpha$-fraction of its hypercube neighbors (differing in one coordinate) are not in $\partial {H}$. \end{definition} Now, we state the main theorem of this section: \begin{theorem}[Robust Littlewood-Offord theorem for hyperbolic cones]\label{thm:anticon_good} Let $\alpha\in [0, 1], \rho >0$, and let $y_1, y_2\in \mathbb{R}^m$. Let $\{x^j_i\}_{i\in [n], j\in [2]}$ be $2n$ vectors in $\mathbb{R}^m$ such that $x_i^1\in \Lambda_{+,h_1}, x_i^2\in (-\Lambda_{+,h_2})$ for all $i\in [n]$. If at least an $\alpha$-fraction of the indices $i\in [n]$ satisfy $\lambda_{\min,h_1}(x_i^1) \geq \rho$ and $\lambda_{\max,h_2}(x_i^2)\leq -\rho$, then we have \begin{align*} \Pr_{\epsilon\sim \{-1,1\}^n}\left[ \exists j\in [2]:~\lambda_{\max,h_j}\left(\sum_{i=1}^n \epsilon_i x_i^j-y_j\right)\in (-2\rho,0] \right]\leq O\left(\frac{1}{\alpha\sqrt{n}}\right). 
\end{align*} \end{theorem} \begin{proof} For each $j\in [2]$, define two sets: \begin{align*} H_j := &~ \left\{\epsilon\in \{-1,1\}^n: \lambda_{\max,h_j}\left(\sum_{i=1}^n \epsilon_i x_i^j\right)\leq -2\rho\right\},\\ \overline{H_j}:= &~ \left\{\epsilon\in \{-1,1\}^n: \lambda_{\max,h_j}\left(\sum_{i=1}^n \epsilon_i x_i^j\right)\leq 0\right\}. \end{align*} Then, we have \begin{align*} \partial H_j := \overline{H_j}\backslash H_j = \left\{\epsilon\in \{-1,1\}^n: \lambda_{\max,h_j}\left(\sum_{i=1}^n \epsilon_i x_i^j\right)\in (-2\rho, 0]\right\}. \end{align*} Define $F := H_1 \cap H_2$ and $\partial F := (\overline{H_1} \cap \overline{H_2})\backslash F$. Hence, \begin{align*} \partial F = \left\{\epsilon\in \{-1,1\}^n: \exists j\in [2]~\text{s.t.}~ \lambda_{\max,h_j}\left(\sum_{i=1}^n \epsilon_i x_i^j\right)\in (-2\rho, 0]\right\}. \end{align*} For any $\epsilon\in H_1$, consider its hypercube neighbor $\epsilon'$ which flips the $k$-th coordinate of $\epsilon$. If $\epsilon'\in \partial H_1$, then we have \begin{align*} \lambda_{\max,h_1}\left(\sum_{i=1}^n \epsilon_i x_i^1 - 2\epsilon_k x_k^1\right)\in (-2\rho, 0], \quad \lambda_{\max,h_1}\left(\sum_{i=1}^n \epsilon_i x_i^1\right)\leq -2\rho. \end{align*} This implies that $\epsilon_k = -1$. By the fact that $\lambda_{\max}(x+y)\geq \lambda_{\max}(x)+\lambda_{\min}(y)$, we have \begin{align*} \lambda_{\max,h_1}\left(\sum_{i=1}^n \epsilon_i x_i^1\right) + 2\lambda_{\min,h_1}(x_k^1)\leq \lambda_{\max,h_1}\left(\sum_{i=1}^n \epsilon_i x_i^1 + 2x_k^1\right)\leq 0, \end{align*} which means $\lambda_{\min,h_1}(x_k^1)\leq \rho$. However, by assumption, at least an $\alpha$-fraction of the indices $k\in [n]$ satisfy $\lambda_{\min,h_1}(x_k^1)\geq \rho$. Hence, $H_1$ is $\alpha$-semi thin. 
Similarly, for $\epsilon\in H_2$ and its hypercube neighbor $\epsilon'$ with the $k$-th coordinate flipped, if $\epsilon'\in \partial H_2$, we have \begin{align*} \lambda_{\max,h_2}\left(\sum_{i=1}^n \epsilon_i x_i^2 -2 x_k^2\right)\in (-2\rho, 0], \quad \lambda_{\max,h_2}\left(\sum_{i=1}^n \epsilon_i x_i^2\right)\leq -2\rho. \end{align*} Hence, \begin{align*} \lambda_{\max,h_2}\left(\sum_{i=1}^n \epsilon_i x_i^2\right) + 2\lambda_{\min,h_2}(-x_k^2) = \lambda_{\max,h_2}\left(\sum_{i=1}^n \epsilon_i x_i^2\right) - 2\lambda_{\max,h_2}(x_k^2)\leq 0, \end{align*} which implies $\lambda_{\max,h_2}(x_k^2)\geq -\rho$. Then, by our assumption, $H_2$ is also $\alpha$-semi thin. Thus, by Theorem 7.18 in \cite{ost19}, we have \begin{align*} \mathrm{vol}(\partial F)\leq O(1/(\alpha\sqrt{n})), \end{align*} which implies the probability upper bound in the theorem. \end{proof} In order to satisfy the $\alpha$-semi-thin condition in Theorem~\ref{thm:anticon_good}, we use the following lemma, which uses a random hash function to bucket the vectors so that the resulting distribution satisfies the condition. \begin{lemma}[Lemma 46 in \cite{ay21}]\label{lem:bound_good_buckets} Let $\tau \in (0, \frac{1}{100\sqrt{\log d}}]$. Let $\{x_i\}_{i\in [n]}\subset \mathbb{R}^m$ be a sequence of vectors in the hyperbolic cone $\Lambda_{+}$ of $h$ such that \begin{align*} \lambda_{\max}(x_i)\leq \tau~~~\forall i\in [n], ~~\text{and}~~ \sum_{i=1}^n \lambda_{\min}(x_i)^2 \geq 1. \end{align*} Let $p\geq \frac{1}{10\tau^2\log d}$ and $\pi:[n]\rightarrow [p]$ be a random hash function that independently assigns each $i\in [n]$ to a uniformly random bucket in $[p]$. For each $c\in [p]$, define $\sigma_c := \sum_{i\in \pi^{-1}(c)}x_i$. We say that $c\in [p]$ is good if $\lambda_{\min}(\sigma_c)\geq \frac{1}{2\tau p}$. Then, we have \begin{align*} \Pr\left[|\{c\in [p]: c~\text{is good}\}|\leq \frac{4}{5}p\right]\leq \exp(-p/4). \end{align*} \end{lemma} \begin{proof} Fix $c\in [p]$. 
Define indicator random variables $z_i\in \{0,1\}$ for $i\in [n]$ such that $z_i=1$ if $\pi(i)=c$. Since $\pi$ is a random hash function, we have $\Pr[z_i = 1] = \frac{1}{p}$. Then, consider the random vectors $\{z_i x_i\}_{i\in [n]}$. For each $i$, $\supp(z_i x_i)\subseteq \Lambda_+$ and $\lambda_{\max}(z_i x_i)\leq \tau$. We note that $\sigma_c = \sum_{i=1}^n z_i x_i$, and \begin{align*} \mu_{\min} = &~ \sum_{i=1}^n \E[\lambda_{\min}(z_ix_i)]= \frac{1}{p}\sum_{i=1}^n \lambda_{\min}(x_i)\\ \geq &~ \frac{1}{p}\sum_{i=1}^n \lambda_{\min}(x_i)^2\cdot \frac{1}{\lambda_{\max}(x_i)}\\ \geq &~ \frac{1}{\tau p}. \end{align*} Then, by Theorem~\ref{thm:chernoff_cone} with $\delta = 1/2, \mu_{\min}=1/(\tau p), R = \tau$, we have \begin{align*} \Pr\left[\lambda_{\min}\left(\sum_{i=1}^n z_ix_i\right) \leq \frac{1}{2\tau p} \right]\leq d \cdot (2/e)^{\frac{1}{2\tau^2 p}} \leq \frac{1}{10}, \end{align*} where the last step follows from $p\geq \frac{1}{10\tau^2\log d}$. That is, \begin{align}\label{eq:bound_sigma_c} \Pr\left[\lambda_{\min}(\sigma_c)\geq \frac{1}{2\tau p}\right] \geq 1 - \frac{1}{10} = \frac{9}{10}. \end{align} Then, for all $c\in [p]$ and $i\in [n]$, define the indicator variables $B_{c,i}:=\mathbf{1}[\pi(i) = c]$. The variables $\{B_{c,i}\}_{c\in [p],i\in [n]}$ are negatively associated by a balls-and-bins argument (see \cite{mr95} for details). Now, the event that $c$ is good can be represented by the indicator variable $G_c:=\mathbf{1}[\lambda_{\min}(\sum_{i=1}^n B_{c,i}x_i)\geq \frac{1}{2\tau p}]$ for $c\in [p]$, which is obtained by applying a monotone non-decreasing function to $\{B_{c,i}\}_{i\in[n]}$. Hence, we know that $\{G_c\}_{c\in [p]}$ are also negatively associated. By Eq.~\eqref{eq:bound_sigma_c}, we have $\E[G_c]\geq \frac{9}{10}$. Thus, by the Chernoff bound for negatively associated random variables, we have \begin{align*} \Pr\left[\sum_{c=1}^p G_c\leq \frac{4}{5}p\right]\leq \exp(-p/4), \end{align*} which completes the proof of the lemma. 
\end{proof} \section{Hyperbolic Chernoff bound for hyperbolic cone vectors}\label{sec:hyper_proof2} The goal of this section is to prove the following theorem, which generalizes the matrix Chernoff bound for positive semi-definite matrices to the hyperbolic setting, with respect to random vectors in the hyperbolic cone. \begin{theorem}\label{thm:chernoff_cone} Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial with hyperbolic direction $e\in \mathbb{R}^m$. Let $\Lambda_+$ denote the hyperbolic cone of $h$ with respect to $e$. Suppose $\mathsf{x}_1,\dots,\mathsf{x}_n$ are $n$ independent random vectors with supports in $\Lambda_+$ such that $\lambda_{\max}(\mathsf{x}_i)\leq R$ for all $i\in [n]$. Define the expected sums of the minimum and maximum eigenvalues as follows: \begin{align*} \mu_{\min}:=\sum_{i=1}^n \E[\lambda_{\min}(\mathsf{x}_i)], ~~\text{and}~~\mu_{\max}:=\sum_{i=1}^n \E[\lambda_{\max}(\mathsf{x}_i)]. \end{align*} Then, we have \begin{align*} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq (1+\delta)\mu_{\max}\right] \leq&~ d\cdot \left(\frac{(1+\delta)^{1+\delta}}{e^{\delta}}\right)^{-\mu_{\max}/R}~~\forall \delta\geq 0,\\ \Pr\left[\lambda_{\min}\left(\sum_{i=1}^n \mathsf{x}_i\right)\leq (1-\delta)\mu_{\min}\right] \leq&~ d\cdot \left(\frac{(1-\delta)^{1-\delta}}{e^{-\delta}}\right)^{-\mu_{\min}/R}~~\forall \delta\in [0,1]. \end{align*} \end{theorem} \begin{proof} Without loss of generality, we may assume that $R = 1$, i.e., $\lambda_{\max}(\mathsf{x}_i)\leq 1$ for all $i\in [n]$; the general case follows by scaling. 
\paragraph{Maximum eigenvalue: } By the Laplace transform method, we have \begin{align}\label{eq:exp_sum} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq &~ \inf_{\theta >0}~e^{-\theta t}\cdot \E\left[\exp\left(\theta\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\right)\right]\notag\\ = &~ \inf_{\theta >0}~e^{-\theta t}\cdot \E\left[\sum_{q\geq 0}\frac{\theta^q}{q!}\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)^q\right]\notag\\ \leq &~ \inf_{\theta >0}~e^{-\theta t}\cdot\sum_{q\geq 0}\frac{\theta^q}{q!}\E\left[\sum_{j=1}^d\lambda_{j}\left(\sum_{i=1}^n \mathsf{x}_i\right)^q\right], \end{align} where the second step follows from the Taylor expansion of the exponential, and the third step follows since $\mathsf{x}_i\in \Lambda_+$ and hence each term in the summation is non-negative. Then, the remaining task is very similar to the proof of Lemma~\ref{lem:p_norm_upper_bound}. We will upper bound the expectation of the trace moments as follows: let $\E_i$ denote the expectation over $\mathsf{x}_i$, and $\E_{\geq i}$ denote the expectation over $\mathsf{x}_{i},\dots,\mathsf{x}_{n}$. Then, we have \begin{align*} \mathbb{E}_{\geq 1}\left[\sum_{j=1}^d\lambda_{j}\left(\sum_{i=1}^n \mathsf{x}_i\right)^q\right]= &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\sum_{j=1}^d\lambda_{j}\left(\mathsf{x}_1 + \sum_{i=2}^n \mathsf{x}_i\right)^q\right]\\ = &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\tr\left[(A(x_1)+B)^q\right]\right], \end{align*} where $A(x_1), B\in \mathbb{R}^{d\times d}$ are two symmetric matrices given by Corollary~\ref{cor:hv_2_vars} for each choice of $\mathsf{x}_1,\dots,\mathsf{x}_n\in \Lambda_+$. Since all eigenvalues of $\mathsf{x}_1,\dots, \mathsf{x}_n$ are non-negative, $A$ and $B$ are positive semi-definite matrices. 
Then, we have \begin{align*} \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\tr\left[(A(x_1)+B)^q\right]\right] = &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\sum_{\beta\in \{0,1\}^{q}}\tr\left[\prod_{k=1}^q A(x_1)^{\beta_k} B^{1-\beta_k}\right]\right]\\ \leq &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\sum_{\beta\in \{0,1\}^{q}}\sum_{j=1}^d \lambda_j(A(x_1))^{\sum_{k=1}^q \beta_k}\cdot \lambda_j(B)^{q - \sum_{k=1}^q \beta_k}\right]\\ = &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\sum_{k_1=0}^q\binom{q}{k_1}\sum_{j=1}^d \lambda_j(A(x_1))^{k_1}\cdot \lambda_j(B)^{q - k_1}\right]\\ \leq &~ \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\sum_{k_1=0}^q\binom{q}{k_1}\lambda_{\max}(A(x_1))^{k_1}\cdot \sum_{j=1}^d \lambda_j(B)^{q - k_1}\right]\\ = &~ \mathbb{E}_{1} \left[\sum_{k_1=0}^q\binom{q}{k_1}\lambda_{\max}(\mathsf{x}_1)^{k_1}\cdot \mathbb{E}_{\geq 2} \left[\sum_{j=1}^d \lambda_j\left(\sum_{i=2}^n \mathsf{x}_i\right)^{q - k_1}\right]\right], \end{align*} where the second step follows from the repeated application of Horn's inequality (Lemma~\ref{lem:horn}) and the fact that $A(x_1)$ and $B$ are positive semi-definite matrices, and the last step follows since $\mathsf{x}_1$ is independent of $\mathsf{x}_2,\dots,\mathsf{x}_n$. Then, by repeating this process, we finally get that \begin{align*} \mathbb{E}\left[\sum_{j=1}^d\lambda_{j}\left(\sum_{i=1}^n \mathsf{x}_i\right)^q\right]\leq &~ \E\left[ \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}} \binom{q}{k_1,\dots,k_n} \prod_{i=1}^n \lambda_{\max}(\mathsf{x}_i)^{k_i}\cdot d \right]\\ = &~ d\cdot \E\left[\left(\sum_{i=1}^n \lambda_{\max}(\mathsf{x}_i)\right)^q\right]. 
\end{align*} Putting this into Eq.~\eqref{eq:exp_sum}, we have \begin{align*} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq &~ \inf_{\theta >0}~e^{-\theta t}\cdot\sum_{q\geq 0}\frac{\theta^q}{q!}\E\left[\sum_{j=1}^d\lambda_{j}\left(\sum_{i=1}^n \mathsf{x}_i\right)^q\right]\\ \leq &~ \inf_{\theta >0}~e^{-\theta t}\cdot\sum_{q\geq 0}\frac{\theta^q}{q!}\cdot d\cdot \E\left[\left(\sum_{i=1}^n \lambda_{\max}(\mathsf{x}_i)\right)^q\right]\\ = &~ \inf_{\theta >0}~e^{-\theta t}\cdot d\cdot \E\left[\exp\left(\theta \cdot \sum_{i=1}^n \lambda_{\max}(\mathsf{x}_i)\right)\right]\\ = &~ \inf_{\theta >0}~e^{-\theta t}\cdot d\cdot \prod_{i=1}^n \E\left[e^{\theta\lambda_{\max}(\mathsf{x}_i)}\right], \end{align*} where the third step follows from the linearity of expectation, and the last step follows from the independence of $\mathsf{x}_1,\dots,\mathsf{x}_n$. For $x\in [0, 1]$, we know that $e^{\theta x}\leq 1 + (e^\theta-1)x$ holds for all $\theta \in \mathbb{R}$. Thus, \begin{align*} e^{-\theta t}\cdot d\cdot \prod_{i=1}^n \E\left[e^{\theta\lambda_{\max}(\mathsf{x}_i)}\right] \leq &~ e^{-\theta t}\cdot d\cdot \prod_{i=1}^n (1 + (e^\theta - 1)\E[\lambda_{\max}(\mathsf{x}_i)])\\ = &~ d\cdot \exp\left(-\theta t + \sum_{i=1}^n\log\left(1 + (e^\theta - 1)\E[\lambda_{\max}(\mathsf{x}_i)]\right)\right)\\ \leq &~ d\cdot \exp\left(-\theta t + \sum_{i=1}^n(e^\theta - 1)\E[\lambda_{\max}(\mathsf{x}_i)]\right)\\ = &~ d\cdot \exp\left(-\theta t + (e^\theta - 1)\mu_{\max}\right), \end{align*} where the first step follows from our assumption that $\lambda_{\max}(\mathsf{x}_i)\in [0, 1]$ for all $i\in [n]$, and the third step follows from $\log(1+x)\leq x$ for $x>-1$. Therefore, by taking $\theta := \log(t/\mu_{\max})$, we have \begin{align}\label{eq:concen_geq_t} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq d\cdot \left(\frac{t}{\mu_{\max}}\right)^{-t}\cdot e^{t-\mu_{\max}}. 
\end{align} If we choose $t:=(1+\delta)\mu_{\max}$, we get \begin{align*} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq (1+\delta)\mu_{\max}\right] \leq&~ d\cdot \left(\frac{(1+\delta)^{1+\delta}}{e^{\delta}}\right)^{-\mu_{\max}}, \end{align*} which completes the proof of the maximum eigenvalue case. \paragraph{Minimum eigenvalue: } We reduce this case to the maximum eigenvalue case by defining $\mathsf{x}_i':= e-\mathsf{x}_i$ for $i\in [n]$. Then, by Fact~\ref{fac:eigen_linear_trans}, \begin{align*} \lambda_{\max}(\mathsf{x}_i')= 1-\lambda_{\min}(\mathsf{x}_i)\leq 1, ~~\text{and}~~\lambda_{\min}(\mathsf{x}_i')= 1-\lambda_{\max}(\mathsf{x}_i)\geq 0. \end{align*} Thus, \begin{align*} \Pr\left[\lambda_{\min}\left(\sum_{i=1}^n \mathsf{x}_i\right)\leq (1-\delta)\mu_{\min}\right] = &~ \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i'\right)\geq n - (1-\delta)\mu_{\min}\right]\\ \leq &~ d\cdot \left(\frac{n - (1-\delta)\mu_{\min}}{n - \mu_{\min}}\right)^{-(n - (1-\delta)\mu_{\min})}\cdot e^{\delta \mu_{\min}}\\ = &~ d\cdot \left(1 + \frac{\delta}{n/\mu_{\min} - 1}\right)^{-\left(\frac{n}{(1-\delta)\mu_{\min}} - 1\right) \cdot (1-\delta)\mu_{\min}}\cdot e^{\delta \mu_{\min}}\\ \leq &~ d\cdot \left(\frac{(1-\delta)^{1-\delta}}{e^{-\delta}}\right)^{-\mu_{\min}}, \end{align*} where the second step follows from taking $t:=n-(1-\delta)\mu_{\min}$ and $\mu_{\max}' =n-\mu_{\min}$ in Eq.~\eqref{eq:concen_geq_t}, and the last step follows from elementary manipulations using $n \geq \mu_{\min}$. Hence, the proof of the theorem is completed. \end{proof} \section{Hyperbolic Chernoff bound for Rademacher sums} Recall that the hyperbolic spectral norm $\|\cdot\|_h$ is defined as \begin{align*} \|x\|_h := \|\lambda(x)\|_\infty. \end{align*} We assume that the hyperbolic cone $\Lambda_{h,+}$ is regular. By Theorem~\ref{thm:hyperbolic_norm}, we know that $\|\cdot\|_h$ is a norm and $(\mathbb{R}^m, \|\cdot\|_h)$ is a normed linear space. 
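As a quick sanity check of the definitions, consider the elementary hyperbolic polynomial $h(x)=\prod_{i=1}^m x_i$ with direction $e=(1,\dots,1)$: the roots of $t\mapsto h(te-x)$ are just the coordinates of $x$, so $\lambda(x)$ is the coordinate vector and $\|x\|_h$ reduces to $\|x\|_\infty$ (here $d=m$). A minimal numeric check:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 6
x = rng.normal(size=m)

# For h(x) = prod_i x_i with e = (1, ..., 1), h(t e - x) = prod_i (t - x_i),
# so the hyperbolic eigenvalues lambda(x) are simply the coordinates of x.
lam = np.sort(x)[::-1]

h_norm = np.max(np.abs(lam))            # ||x||_h = ||lambda(x)||_inf
assert np.isclose(h_norm, np.linalg.norm(x, np.inf))

# Norm comparisons: ||x||_h <= ||x||_{h,q} <= d^{1/q} ||x||_h, with d = m.
for q in [1, 2, 4]:
    hq = np.linalg.norm(lam, q)         # ||x||_{h,q} = ||lambda(x)||_q
    assert h_norm <= hq + 1e-12
    assert hq <= m ** (1 / q) * h_norm + 1e-12
```

This degenerate case is only meant as intuition; the interesting content of the results below lies in non-product hyperbolic polynomials.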
\subsection{Preliminaries}
Applying the general concentration result for normed linear spaces (Theorem~\ref{thm:banach_concentration}) to the $\|\cdot\|_h$ norm, we get the following result:
\begin{corollary}[Concentration of hyperbolic norm]\label{cor:hyperbolic_prob}
Let $X=\sum_{i=1}^n r_i x_i$, where $r_1, r_2, \cdots, r_n$ are independent Rademacher variables and $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$. Then, for every $t>0$,
\begin{align*}
\Pr_{ r \sim \{ \pm 1 \}^n }[ \| X \|_h > t ] \leq 2\exp \Big(-t^2 / \Big( 32\E_{ r \sim \{\pm 1\}^n }[\|X\|_h^2] \Big) \Big).
\end{align*}
\end{corollary}
By Theorem~\ref{thm:khinchin_kahane}, we know that all moments of $\|X\|_{h}$ are equivalent up to constant factors. In particular,
\begin{claim}[Equivalence between first- and second-moment]\label{clm:hyperbolic_2_1_norm}
Given $n$ vectors $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$. Let $r_1, r_2, \cdots, r_n$ denote a sequence of independent Rademacher variables. Let $X = \sum_{i=1}^n r_i x_i$. Then,
\begin{align*}
( \E[ \| X \|_h^2 ] )^{1/2}\leq \sqrt{2} \cdot \E[\|X\|_h].
\end{align*}
\end{claim}
We state two useful facts (Fact~\ref{fac:norm_h_q_bound_by_norm_h} and Fact~\ref{fac:norm_h_bound_by_norm_h_q}) that upper and lower bound the hyperbolic-$q$ norm by the hyperbolic spectral norm.
\begin{fact}\label{fac:norm_h_q_bound_by_norm_h}
Let $h$ denote an $m$-variate degree-$d$ hyperbolic polynomial. For any vector $x$ and any $q>1$, we have
\begin{align*}
\| x \|_{h,q} \leq d^{1/q} \cdot \| x \|_h.
\end{align*}
\end{fact}
\begin{proof}
We have
\begin{align*}
\| x \|_{h,q} = \| \lambda(x) \|_{q} \leq d^{1/q} \cdot \| \lambda(x) \|_\infty = d^{1/q} \cdot \| x \|_h.
\end{align*}
Thus, we complete the proof.
\end{proof}
\begin{fact}\label{fac:norm_h_bound_by_norm_h_q}
Let $h$ denote an $m$-variate degree-$d$ hyperbolic polynomial. For any vector $x$ and for any $q \geq 1$, we have
\begin{align*}
\| x \|_{h} \leq \| x \|_{h,q}.
\end{align*}
\end{fact}
\begin{proof}
We have
\begin{align*}
\| x \|_h = \| \lambda(x) \|_{\infty} \leq \| \lambda(x) \|_q = \| x \|_{h,q}.
\end{align*}
Thus, we complete the proof.
\end{proof}
\begin{fact}\label{fac:norm_h_matrix}
Let $h$ denote an $m$-variate degree-$d$ hyperbolic polynomial. For any vector $x$, if there exists a matrix $A\in \mathbb{R}^{d\times d}$ such that $\lambda(x) = \lambda(A)$, then we have
\begin{align*}
\|x\|_h = \sigma_1(A).
\end{align*}
\end{fact}
\begin{proof}
We have
\begin{align*}
\|x\|_h=\|\lambda(x)\|_\infty=\|\lambda(A)\|_\infty=\sigma_1(A).
\end{align*}
\end{proof}
We state a useful tool from previous work \cite{tj74,zyg02}.
\begin{lemma}[\cite{tj74,zyg02}]\label{lem:zyg02_tj74}
For $q\geq 2$, we have
\begin{align*}
\binom{2q}{2k_1,\dots,2k_n} \leq M_{2q}^{2q} \cdot \binom{q}{k_1,\dots,k_n},
\end{align*}
where $M_{2q}= ( \frac{(2q)!}{2^q q!} )^{1/(2q)}$.
\end{lemma}
Using elementary calculations, we can upper bound $M_{2q}$.
\begin{fact}\label{fac:m_2q}
For any $q \geq 1$, we have
\begin{align*}
\Big( \frac{ (2q)! }{2^q q! } \Big)^{1/(2q)} \leq \sqrt{2q - 1}.
\end{align*}
\end{fact}
\begin{proof}
For $q=1$, the claim holds directly, since $M_2 = 1 = \sqrt{2q-1}$. For $q\geq 2$, we have
\begin{align*}
\left( \frac{ (2q)! }{2^q q! } \right)^{1/(2q)} \leq &~ \left(\frac{e\cdot (2q)^{2q}\cdot \sqrt{2q}\cdot e^{-2q}}{2^q\cdot \sqrt{2\pi} \cdot q^{q} \cdot \sqrt{q}\cdot e^{-q}}\right)^{1/(2q)}\\
= &~ \left(\frac{e^{1-q}}{\sqrt{\pi}}\cdot 2^q\cdot q^{q}\right)^{1/(2q)}\\
\leq &~ \sqrt{2q-1},
\end{align*}
where the first step follows from Stirling's formula, and the last step follows from $q\geq 2$.
\end{proof}
\subsection{Special case: determinant polynomial}
We first consider a special case: $h(X) = \det(X)$ for $X\in \mathbb{R}^{d^2}$. (Note that we can think of $m = d^2$.) In this case, $\| X \|_h = \| X \|$, the matrix spectral norm of $X$. The goal of this section is to prove Theorem~\ref{thm:special}.
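Before stating the theorem, here is a small illustrative sanity check (not used in any proof; the $2\times 2$ matrix entries are arbitrary sample values) that for $h=\det$ the facts above reduce to the standard relations between the spectral norm and the Schatten-$q$ norms:

```python
import math

# 2x2 symmetric matrix A = [[a, b], [b, c]]; for h = det, the hyperbolic
# eigenvalues of A are its ordinary eigenvalues.
a, b, c = 1.5, -2.0, 0.25              # arbitrary sample entries
disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
lam = [((a + c) + disc) / 2, ((a + c) - disc) / 2]

d = 2
h_norm = max(abs(l) for l in lam)      # ||A||_h = spectral norm of A

for q in [1, 2, 5, 20]:
    schatten_q = sum(abs(l) ** q for l in lam) ** (1.0 / q)   # ||A||_{h,q}
    assert h_norm <= schatten_q + 1e-12                       # ||x||_h <= ||x||_{h,q}
    assert schatten_q <= d ** (1.0 / q) * h_norm + 1e-12      # ||x||_{h,q} <= d^{1/q} ||x||_h
```

As $q$ grows, the Schatten-$q$ norm converges to the spectral norm, which is exactly why choosing $q \approx \log d$ in the arguments below loses only a constant factor.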
\begin{theorem}[Near-optimal matrix Chernoff bound]\label{thm:special}
Given $n$ matrices $X_1, X_2, \cdots, X_n \in \mathbb{R}^{d \times d}$. We define $\sigma = (\sum_{i=1}^n \|X_i\|^2)^{1/2}$. Let $C > 0$ denote some fixed constant. Then, for every $t > 0$,
\begin{align*}
\Pr_{r \sim \{ \pm 1\}^n } \left[ \left\| \sum_{i=1}^n r_i X_i \right\| > t \right] \leq 2\exp\left(-\frac{C t^2}{\sigma^2\log d}\right).
\end{align*}
\end{theorem}
\begin{proof}
Let $X:=\sum_{i=1}^n r_i X_i$. By Corollary~\ref{cor:hyperbolic_prob}, we have
\begin{align*}
\Pr_{ r \sim \{ \pm 1 \}^n }[ \| X \|_h > t ] \leq 2\exp \left(-\frac{t^2} { 32\E_{ r \sim \{\pm 1\}^n }[\|X\|_h^2] } \right).
\end{align*}
We know that
\begin{align*}
\E_ {r \sim \{\pm 1\}^n }\left[\|X\|^2\right] \leq 2\cdot \E_ {r \sim \{\pm 1\}^n }[\|X\|]^2 \leq 2C^2(\log d) \cdot \sigma^2,
\end{align*}
where the first step follows from Claim~\ref{clm:hyperbolic_2_1_norm}, and the second step follows from Theorem~\ref{thm:matrix_upper_bound}. Therefore,
\begin{align*}
\Pr_{ r \sim \{ \pm 1 \}^n }[ \| X \|_h > t ] \leq 2\exp \left(-\frac{t^2} {64C^2(\log d) \cdot \sigma^2} \right),
\end{align*}
and the theorem follows.
\end{proof}
Next, we need to prove Theorem~\ref{thm:matrix_upper_bound}.
\begin{theorem}[Expected spectral norm of Rademacher matrix sequence]\label{thm:matrix_upper_bound}
There exists a sufficiently large constant $C>0$ such that for any fixed $X_1, X_2, \cdots, X_n\in \mathbb{R}^{d \times d}$, we have
\begin{align*}
\E_{r \sim \{ \pm 1 \}^n } \left[ \left\|\sum_{i=1}^n r_i X_i \right\| \right]\leq C\sqrt{\log d} \cdot \left( \sum_{i=1}^n \| X_i \|^2 \right)^{1/2} .
\end{align*}
\end{theorem}
\begin{proof}
We can upper bound $\E[ \| \sum_{i=1}^n r_i X_i \| ]$ as follows:
\begin{align*}
\E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i X_i \right\|\right]
\leq & ~ \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i X_i \right\|_{2p}\right]\\
\leq & ~ \left( \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i X_i \right\|_{2p}^{2p}\right] \right)^{1/(2p)}\\
\leq & ~ \sqrt{2p-1}\left( \sum_{i=1}^n\| X_i \|_{2p}^2 \right)^{1/2} \\
\leq & ~ \sqrt{2p-1} \cdot d^{1/(2p)}\cdot \left( \sum_{i=1}^n\| X_i \|^2 \right)^{1/2},
\end{align*}
where the first step follows from $\|A\|\leq \|A\|_{2p}$, the second step follows from the Lyapunov inequality (Lemma~\ref{lem:lyapunov}), the third step follows from Theorem~\ref{thm:matrix_p_norm}, and the fourth step follows from $\|A\|_{2p}\leq d^{1/(2p)}\cdot \|A\|$. By taking $p=\Theta(\log d)$, we have
\begin{align*}
\E_{ r \sim \{ \pm 1 \}^n } \left[ \left\| \sum_{i=1}^n r_i X_i \right\| \right] \leq C \sqrt{\log d} \cdot \left( \sum_{i=1}^n \| X_i \|^2 \right)^{1/2},
\end{align*}
which completes the proof of Theorem~\ref{thm:matrix_upper_bound}.
\end{proof}
\subsection{General case: hyperbolic polynomial}\label{sec:hyper_proof1}
The goal of this section is to prove Theorem~\ref{thm:main}.
\begin{theorem}[Chernoff bound for hyperbolic polynomial]\label{thm:main}
Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial with respect to $e$. Given $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$ such that $\rank(x_i)\leq s$ for all $i\in [n]$ and for some $0<s\leq d$. Let $\sigma = ( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2}$.
Then,
\begin{align*}
\E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq 2\sqrt{\log s }\cdot \sigma.
\end{align*}
Furthermore, there exist two constants $C_1,C_2>0$ such that for every $t>0$,
\begin{align*}
\Pr_{ r \sim \{\pm 1\}^n } \left[ \left\| \sum_{i=1}^n r_i x_i \right\|_h > t \right]\leq C_1\exp\left(-\frac{C_2t^2}{\sigma^2 \log (s + 1)}\right).
\end{align*}
\end{theorem}
\begin{proof}
We first upper bound $\E_{ r \sim \{ \pm 1 \}^n }[\|\sum_{i=1}^n r_i x_i\|_h]$ by
\begin{align}\label{eq:bound_expect_h_norm}
\E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right]
\leq &~ \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}\right]\notag\\
\leq & ~ \left( \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \right)^{1/(2q)}\\
\leq & ~ \sqrt{2q-1} \cdot s^{1/(2q)} \cdot \left( \sum_{i=1}^n\|x_i\|_{h}^2 \right)^{1/2},\notag
\end{align}
where the first step follows from $\|x\|_{h}\leq \|x\|_{h,2q}$ when $q\geq 1$ (Fact~\ref{fac:norm_h_bound_by_norm_h_q}), the second step follows from the Lyapunov inequality (Lemma~\ref{lem:lyapunov}), and the third step follows from Lemma~\ref{lem:p_norm_upper_bound}. By taking $q=\log s$ (with $\log$ denoting the base-$2$ logarithm, so that $s^{1/(2q)}=\sqrt{2}$), we have
\begin{align*}
\E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right]
\leq & ~ \sqrt{ 4 (\log s ) - 2 }\cdot \left( \sum_{i=1}^n \|x_i\|_h^2 \right)^{1/2} \\
= & ~ \sqrt{ 4 (\log s ) - 2 }\cdot \sigma \\
\leq & ~ 2 \sqrt{\log s} \cdot \sigma \\
= & ~ C\sqrt{\log s} \cdot \sigma,
\end{align*}
where the second step follows from $ \sigma := \left( \sum_{i=1}^n \|x_i\|_h^2 \right)^{1/2}$, and the last step follows from setting $C:=2$. By Claim~\ref{clm:hyperbolic_2_1_norm},
\begin{align*}
\E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h^2\right] \leq 2\left( \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \right)^2 \leq 2C^2 ( \log s ) \cdot \sigma^2.
\end{align*} Then, by Corollary~\ref{cor:hyperbolic_prob}, \begin{align*} \Pr_{ r \sim \{ \pm 1 \}^n } \left[ \left\| \sum_{i=1}^n r_i x_i\right\|_h > t \right] \leq &~ 2 \exp \left(-\frac{t^2}{32 \E_{ r \sim \{ \pm 1 \}^n }[ \| \sum_{i=1}^n r_i x_i \|_h^2 ] }\right)\\ \leq &~ 2\exp\left(-\frac{t^2}{64C^2 (\log (s + 1)) \cdot \sigma^2}\right). \end{align*} Thus, we complete the proof. \end{proof} \subsection{Expected hyperbolic-\texorpdfstring{$2q$}{2q} norm bound}\label{sec:exp_hyper_2p_norm} The goal of this section is to prove Lemma~\ref{lem:p_norm_upper_bound}. \begin{lemma}[Expected hyperbolic-$2q$ norm of Rademacher sum]\label{lem:p_norm_upper_bound} Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial. Given $n$ vectors $x_1, \cdots, x_n \in \mathbb{R}^m$ such that $\rank(x_i)\leq s$ for all $i\in [n]$ and for some $0<s\leq d$. For any $q\geq 1$, we have \begin{align*} \left( \E_{r \sim \{\pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \right)^{1/(2q)}\leq \sqrt{2q-1} \cdot s^{1/(2q)}\cdot \left( \sum_{i=1}^n \|x_i\|_{h}^2 \right)^{1/2}. \end{align*} \end{lemma} \begin{proof The main idea is to consider the random variables $r_1, r_2, \cdots, r_n$ one at a time. By the conditional expectation, we have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] = & ~ \E_{ r_2, \cdots, r_n \sim \{ \pm 1 \} } \left[ \E_{ r_1 \sim \{ \pm 1\} } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \right] \\ = & ~ \E_{r_2,\dots,r_n \sim \{\pm 1\} } \left[ \E_{r_1 \sim \{ \pm 1\} } \left[ \sum_{j=1}^d \lambda_j\left(r_1 x_1 + \sum_{i=2}^n r_ix_i\right)^{2q} \right] \right]. \end{align*} where the last step follows from the definition of $\|\cdot\|_{h,2q}$ norm. To apply Corollary~\ref{cor:hv_2_vars}, let $x=x_1,y=\sum_{i=2}^n r_i x_i$. 
Then, there exist two symmetric matrices $A_1,B_1\in \mathbb{R}^{d\times d}$ such that
\begin{align}\label{eq:hyperbolic_matrix_eigen}
\lambda\left( r_1 x_1 + \sum_{i=2}^n r_ix_i \right) = \lambda(r_1 A_1+B_1),
\end{align}
where $\lambda$ is the vector of eigenvalues ordered from largest to smallest. In particular, we have
\begin{align*}
\lambda(x_1) = \lambda(A_1),\quad \lambda\left(\sum_{i=2}^n r_i x_i\right) = \lambda(B_1).
\end{align*}
Hence, by the definition of the Schatten-$p$ norm,
\begin{align}\label{eq:eigen_trace}
\sum_{j=1}^d \left( \lambda_j \Big(r_1 x_1 + \sum_{i=2}^n r_ix_i \Big) \right)^{2q}
=&~ \|r_1 A_1 +B_1\|_{2q}^{2q}\notag\\
= &~ \tr\left[(r_1A_1+B_1)^{2q}\right]\notag\\
= &~ \sum_{ \beta \in \{0,1\}^{2q} } \tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right]\cdot r_1^{\sum_{i=1}^{2q} \beta_i},
\end{align}
where the first step follows from Eq.~\eqref{eq:hyperbolic_matrix_eigen} and the definition of the matrix Schatten $p$-norm, the second step follows from $\|A\|_{2q}^{2q}=\tr[A^{2q}]$ for symmetric matrix $A$ and $q\geq 1$, and the last step follows from expanding the $2q$-th power and the linearity of trace. We define a set which will be used later:
\begin{align*}
\mathcal{B}_{\mathrm{even}} :=\left\{\beta\in \{0,1\}^{2q}: \sum_{i=1}^{2q} \beta_i \text{~is even}\right\}.
\end{align*}
Taking the expectation over $r_1$, we have
\begin{align*}
\E_{ r_1 \sim \{ \pm 1 \} }\left[ \sum_{j=1}^d \lambda_j\left(r_1 x_1 + \sum_{i=2}^n r_ix_i\right)^{2q} \right]
= &~ \sum_{\beta\in \{0,1\}^{2q}} \tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right]\cdot \E_{r_1 \sim \{\pm 1\} }\left[r_1^{\sum_{i=1}^{2q} \beta_i}\right]\\
= &~ \sum_{\beta\in \mathcal{B}_{\mathrm{even}}} \tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right],
\end{align*}
where the first step follows from Eq.~\eqref{eq:eigen_trace} and the linearity of expectation, and the last step follows from
\begin{align*}
\E_{r_1 \sim \{\pm 1\} }\big[r_1^{k}\big] = \begin{cases} 0 & \text{if }k\text{~is odd},\\ 1 & \text{if }k\text{~is even}. \end{cases}
\end{align*}
For each $\beta\in \mathcal{B}_{\mathrm{even}}$, we have
\begin{align*}
\tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right] \leq \sum_{j=1}^s \sigma_j\left(\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right)\leq \sum_{j=1}^s \prod_{i=1}^{2q}\sigma_j\left(A_1^{\beta_i}B_1^{1-\beta_i}\right),
\end{align*}
where $\sigma_j(A)$ denotes the $j$-th largest singular value of $A$. Here, the first step follows from $\tr[A]\leq \sum_{j=1}^{\rank(A)}\sigma_j(A)$ for any real square matrix $A$, and the second step follows from the general Horn inequality (Lemma~\ref{lem:horn}). Then, it follows that
\begin{align}\label{eq:trace_to_singular}
\sum_{\beta\in \mathcal{B}_{\mathrm{even}}} \tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right]
\leq &~ \sum_{\beta\in \mathcal{B}_{\mathrm{even}}}\sum_{j=1}^s \sigma_j(A_1)^{\sum_{i=1}^{2q}\beta_i} \sigma_j(B_1)^{2q-\sum_{i=1}^{2q}\beta_i}\notag\\
= &~ \sum_{k=0}^q \binom{2q}{2k} \sum_{j=1}^s \sigma_j(A_1)^{2k}\sigma_j(B_1)^{2q-2k},
\end{align}
where the first step follows from $\rank(A_1)=\rank(x_1)\leq s$.
Hence,
\begin{align}\label{eq:iter}
\E_{r_1,\dots,r_n \sim \{ \pm 1\}}\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]
\leq &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\E_{r_2,\dots,r_n}\left[ \sum_{j=1}^s \sigma_j(A_1)^{2k_1}\sigma_j(B_1)^{2q-2k_1} \right]\notag\\
\leq &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\E_{r_2,\dots,r_n}\left[ \sigma_1(A_1)^{2k_1} \sum_{j=1}^s \sigma_j(B_1)^{2q-2k_1} \right]\notag\\
= &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\sigma_1(A_1)^{2k_1} \E_{r_2,\dots,r_n}\left[\sum_{j=1}^s \lambda_j(B_1)^{2q-2k_1} \right]\notag\\
= &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\|x_1\|_h^{2k_1} \E_{r_2,\dots,r_n}\left[\sum_{j=1}^s \lambda_j(B_1)^{2q-2k_1} \right]\notag \\
= &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\|x_1\|_h^{2k_1} \E_{r_2,\dots,r_n}\left[\left\|\sum_{i=2}^n r_i x_i \right\|_{h,2q-2k_1}^{2q-2k_1} \right],
\end{align}
where the second step follows from $\sigma_1(A_1)\geq \cdots \geq \sigma_s(A_1)$, the third step follows from $\sum_{j=1}^s \sigma_j(B_1)^k=\sum_{j=1}^s\lambda_j(B_1)^k$ for even $k$, the fourth step follows from Fact~\ref{fac:norm_h_matrix}, and the last step follows from the definition of $\| \cdot \|_{h,q}$.
Now, we can iterate this process for $\E_{r_2,\dots,r_n}\left[\left\|\sum_{i=2}^n r_i x_i \right\|_{h,2q-2k_1}^{2q-2k_1} \right]$. Consider $r_2 x_2 + \sum_{i=3}^{n} r_i x_i$. By Corollary~\ref{cor:hv_2_vars}, there exist two symmetric matrices $A_2,B_2\in \mathbb{R}^{d\times d}$ such that
\begin{align*}
\lambda\left(r_2 x_2 + \sum_{i=3}^{n} r_i x_i\right) = \lambda(r_2 A_2 + B_2)
\end{align*}
for all $r_2\in \{-1, 1\}$.
By the conditional expectation again, we can get that
\begin{align*}
\E_{r_1,\dots,r_n}\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]
\leq &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\|x_1\|_h^{2k_1} \E_{r_2,\dots,r_n}\left[\left\|\sum_{i=2}^n r_i x_i \right\|_{h,2q-2k_1}^{2q-2k_1} \right] \\
\leq &~ \sum_{k_1=0}^q \binom{2q}{2k_1}\|x_1\|_h^{2k_1} \sum_{k_2=0}^{q-k_1} \binom{2q-2k_1}{2k_2} \|x_2\|_h^{2k_2}\E_{r_3,\dots,r_n}\left[\left\|\sum_{i=3}^n r_i x_i \right\|_{h,2k_3}^{2k_3} \right],
\end{align*}
where $k_3:=q-k_1-k_2$ and the second step follows from applying Eq.~\eqref{eq:iter} to $\E_{r_2,\dots,r_n}\left[\left\|\sum_{i=2}^n r_i x_i \right\|_{h,2q-2k_1}^{2q-2k_1} \right]$. If we iterate $n-1$ times, we finally get
\begin{align}\label{eq:expansion_2q_norm}
\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]
\leq & ~ \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}} \prod_{i=1}^{n-1} \binom{2q - \sum_{j=1}^{i-1} 2k_j}{2k_{i}} \|x_i\|_h^{2k_i} \cdot \E_{r_n}\left[ \|r_n x_n\|_{h,2k_n}^{2k_n} \right] \notag \\
= &~ \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{2q}{2k_1,\dots,2k_n} \prod_{i=1}^{n-1} \|x_i\|_h^{2k_i} \cdot \E_{r_n}\left[ \|r_n x_n\|_{h,2k_n}^{2k_n} \right]\notag\\
= &~ \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{2q}{2k_1,\dots,2k_n} \prod_{i=1}^{n-1} \|x_i\|_h^{2k_i} \cdot \|x_n\|_{h,2k_n}^{2k_n},
\end{align}
where the first step follows from iterating the same argument $n-1$ times, and the second step follows from
\begin{align*}
\prod_{i=1}^{n-1} \binom{2q - \sum_{j=1}^{i-1}2k_{j}}{2k_{i}}
= &~ \binom{2q}{2k_1}\cdot \binom{2q-2k_1}{2k_2}\cdot \binom{2q-2k_1-2k_2}{2k_3}\cdots \binom{2k_{n-1} + 2k_n}{2k_{n-1}}\\
= &~ \binom{2q}{2k_1}\cdot \binom{2q-2k_1}{2k_2}\cdot \binom{2q-2k_1-2k_2}{2k_3}\cdots \binom{2k_{n-1} + 2k_n}{2k_{n-1}} \cdot \binom{ 2k_n}{2k_{n}}\\
= &~ \binom{2q}{2k_1,\dots,2k_n}.
\end{align*}
By Lemma~\ref{lem:zyg02_tj74}, we have that
\begin{align*}
\binom{2q}{2k_1,\dots,2k_n}\leq M_{2q}^{2q} \cdot \binom{q}{k_1,\dots,k_n},
\end{align*}
where $M_{2q}=\left(\frac{(2q)!}{2^q q!}\right)^{1/(2q)}\leq \sqrt{2q-1}$ by Fact~\ref{fac:m_2q}. Hence,
\begin{align*}
\E_{ r \sim \{\pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]
\leq &~ M_{2q}^{2q}\sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{q}{k_1,\dots,k_n} \prod_{i=1}^{n-1} \|x_i\|_h^{2k_i} \cdot \|x_n\|_{h,2k_n}^{2k_n}\\
\leq &~ M_{2q}^{2q}\sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{q}{k_1,\dots,k_n} \prod_{i=1}^{n-1} \|x_i\|_h^{2k_i} \cdot s\cdot \|x_n\|_{h}^{2k_n}\\
= & ~ M_{2q}^{2q} \cdot s \cdot \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{q}{k_1,\dots,k_n} \prod_{i=1}^{n} \|x_i\|_h^{2k_i} \\
= &~ M_{2q}^{2q}\cdot s \cdot \left(\sum_{i=1}^{n} \|x_i\|_h^2 \right)^q,
\end{align*}
where the second step follows from $\|x_n\|_{h,2k_n} \leq s^{1/(2k_n)}\cdot \|x_n\|_h$ for a rank-$s$ vector (see Fact~\ref{fac:norm_h_q_bound_by_norm_h}), the third step follows from re-organizing the terms, and the last step follows from expanding $(\sum_{i=1}^{n} \|x_i\|_h^2 )^q$. Therefore,
\begin{align*}
\left(\E_{r \sim \{\pm 1\}^n}\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]\right)^{1/(2q)} \leq \sqrt{2q-1}\cdot s^{1/(2q)}\cdot \left(\sum_{i=1}^{n} \|x_i\|_h^2\right)^{1/2},
\end{align*}
which completes the proof of Lemma~\ref{lem:p_norm_upper_bound}.
\begin{remark}
This upper bound depends essentially on the maximum hyperbolic rank of all the vectors $x_1,\dots,x_n$, instead of just the last one. This is because Eq.~\eqref{eq:iter} can be expanded as:
\begin{align*}
\E_{r_1,\dots,r_n \sim \{ \pm 1\}}\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]
\leq &~ \E_{r_2,\dots,r_n\sim \{\pm 1\}}\left[\left\|\sum_{i=2}^n r_i x_i\right\|_{h,2q}^{2q}\right] + \|x_1\|_{h,2q}^{2q}\\
&+\sum_{k_1=1}^{q-1} \binom{2q}{2k_1} \|x_1\|_{h}^{2k_1}\cdot \E_{r_2,\dots,r_n\sim \{\pm 1\}}\left[\left\|\sum_{i=2}^n r_i x_i\right\|_{h,2q-2k_1}^{2q-2k_1}\right].
\end{align*}
We can see that the first term depends on the rank of $x_2,\dots,x_n$, while the second term depends on $\rank(x_1)$. Hence, the whole summation cannot be uniformly bounded by the rank of the last vector $x_n$.
Hence, adding a rank-1 dummy vector cannot improve the bound.
\end{remark}
\end{proof}
\subsection{Discussion and Open problems}
\paragraph{Tighter hyperbolic Chernoff bound?}
Our current result has a worse dependence on the variance $\sigma^2$ than the matrix Chernoff bound \cite{tro15}. Can we match these results when $h=\det(X)$? We note that there is a limitation in using techniques such as the Golden--Thompson inequality and Lieb's theorem, which were used in \cite{o09, t12} to improve the original matrix Chernoff bound \cite{aw02}, to tighten our result. The reason is that for any symmetric positive semidefinite matrix $X$ and any $p>0$, the mapping $\phi(X):=X^p$ raises each eigenvalue of $X$ to its $p$-th power; however, we cannot find such a mapping for vectors with respect to the hyperbolic eigenvalues. Some new techniques may be required to get a hyperbolic Chernoff bound matching the matrix results.
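The eigenvalue-mapping property of $X\mapsto X^p$ mentioned above can be illustrated numerically; the sketch below (with an arbitrary $2\times 2$ positive semidefinite example, not tied to any result in this paper) constructs $X$ and $X^p$ from the same eigenbasis and checks that the eigenvalues of $X^p$ are exactly the $p$-th powers of those of $X$:

```python
import math

def sym2x2_eigs(a, b, c):
    """Eigenvalues of [[a, b], [b, c]], largest first."""
    disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
    return [((a + c) + disc) / 2, ((a + c) - disc) / 2]

def rot_conj(theta, mu1, mu2):
    """Entries (a, b, c) of U(theta) diag(mu1, mu2) U(theta)^T."""
    co, si = math.cos(theta), math.sin(theta)
    return (mu1 * co * co + mu2 * si * si,
            (mu1 - mu2) * co * si,
            mu1 * si * si + mu2 * co * co)

theta, mu1, mu2, p = 0.6, 3.0, 0.4, 0.5   # X is PSD since mu1, mu2 >= 0
X = rot_conj(theta, mu1, mu2)             # X = U diag(mu) U^T
Xp = rot_conj(theta, mu1 ** p, mu2 ** p)  # phi(X) = X^p = U diag(mu^p) U^T

# eigenvalues of X^p are the p-th powers of the eigenvalues of X
eig_X = sym2x2_eigs(*X)
eig_Xp = sym2x2_eigs(*Xp)
assert all(abs(ep - e ** p) < 1e-12 for ep, e in zip(eig_Xp, eig_X))
```

No analogous "functional calculus" is available for hyperbolic eigenvalues of vectors, which is exactly the obstruction described in the paragraph above.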
\paragraph{Resolving the hyperbolic Spencer conjecture?}
Inspired by the matrix Spencer conjecture (due to Meka \cite{m14}), we came up with a more general conjecture for hyperbolic discrepancy. Can we prove or disprove this conjecture? In recent work, Reis and Rothvoss \cite{rr20} conjectured a weaker version of matrix Spencer by considering the Schatten-$p$ norm of matrices. We can also define such an $\ell_p$ version of the hyperbolic Spencer conjecture by looking at the $\ell_p$-norm of hyperbolic eigenvalues (the hyperbolic-$p$ norm). Any progress towards the $\ell_p$-hyperbolic Spencer conjecture will provide more insights into matrix and hyperbolic discrepancy theory.
\paragraph{Fooling hyperbolic cone?}
One of the results in this paper is an anti-concentration inequality with respect to the hyperbolic spectral norm, which generalizes the results in \cite{ost19,ay21}. Those works combined the anti-concentration results with the Meka--Zuckerman \cite{mz13} framework to construct PRGs fooling polytopes/positive spectrahedra. Hence, an open question in complexity theory and pseudorandomness is: can we apply the hyperbolic anti-concentration inequality to construct a PRG fooling positive hyperbolic spectrahedra, or even hyperbolic cones?
\section{Discrepancy result}
In this section, we show how to relax the isotropic condition in the hyperbolic Kadison--Singer theorem~\cite{b18}. We then apply our hyperbolic concentration result to prove a discrepancy upper bound that works for general vectors.
\subsection{Preliminaries}\label{sec:disp_prelim}
In this section, we formally state some matrix discrepancy results. We first state the discrepancy theorem implied by the Kadison--Singer theorem.
\begin{theorem}[\cite{mss15}]
Let $x_1, \dots, x_n \in \C^m$ and suppose $\|x_ix_i^*\|\leq \epsilon$ for all $i\in [n]$ and $\sum_{i=1}^n x_ix_i^* = I$.
Then, there exist signs $r\in \{-1, 1\}^n$ such that
\begin{align*}
\left\|\sum_{i=1}^n r_i x_ix_i^*\right\|\leq O(\sqrt{\epsilon}).
\end{align*}
\end{theorem}
This theorem also holds for high-rank matrices as long as the isotropic condition holds:
\begin{theorem}[High-rank Kadison--Singer \cite{c16,b18}]
Let $X_1, \dots, X_n \in \C^{d\times d}$ be positive semi-definite symmetric matrices such that $\tr[X_i]\leq \epsilon$ for all $i\in [n]$ and $\sum_{i=1}^n X_i = I$. Then, there exist signs $r\in \{-1, 1\}^n$ such that
\begin{align*}
\left\|\sum_{i=1}^n r_i X_i\right\|\leq O( \sqrt{ \epsilon } ).
\end{align*}
\end{theorem}
\cite{kls19} showed that the isotropic condition is not necessary for rank-1 matrices:
\begin{theorem}[Rank-1 matrix Spencer, \cite{kls19}]\label{thm:kls19}
Given $n$ vectors $x_1,\dots,x_n\in \C^m$. Let $\sigma^2 = \|\sum_{i=1}^n (x_ix_i^*)^2\|$. Then, there exists a choice of signs $r\in \{-1,1\}^n$ such that
\begin{align*}
\left\|\sum_{i=1}^n r_i x_ix_i^*\right\|\leq 4\sigma.
\end{align*}
\end{theorem}
\subsection{Hyperbolic Kadison--Singer with relaxed condition}\label{sec:hy_ks}
The goal of this section is to prove Theorem~\ref{thm:branden_subisotropic}, which relaxes the isotropic condition in Corollary~\ref{cor:branden_discrepancy} to a bounded hyperbolic norm. We first formally state the upper bound in \cite{b18}:
\begin{definition}
For $r\in \mathbb{N}_+$, let $U_r$ be the set of all pairs $(\delta, \mu)\in \mathbb{R}_+\times \mathbb{R}_+$ such that
\begin{align*}
\delta-1\geq \frac{\delta}{\mu}\cdot \frac{\left(1+\frac{\delta}{r\mu}\right)^{r-1}- \left(\frac{\delta}{r\mu}\right)^{r-1}}{\left(1+\frac{\delta}{r\mu}\right)^{r}- \left(\frac{\delta}{r\mu}\right)^{r}},
\end{align*}
and either $\mu>1$, or $\delta\in [1,2],\mu>1-\delta/r$. Then, the upper bound in \cite{b18} is:
\begin{align*}
\delta(\epsilon, n, r):=\inf_{(\delta, \mu)\in U_r}~~\frac{\epsilon\mu + (1-\frac{1}{n})\delta}{1+\frac{\mu-1}{n}}.
\end{align*} In particular, $\delta(\epsilon, \infty, r):= \inf_{(\delta, \mu)\in U_r} \epsilon\mu + \delta$. \end{definition} Then, we state the main theorem of this section: \begin{theorem}\label{thm:branden_subisotropic} Let $k \geq 2$ be an integer and $\epsilon,\sigma>0$. Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e \in \mathbb{R}^m$, and let $x_1, \dots , x_n$ be $n$ vectors in the hyperbolic cone $\Lambda_+(h, e)$ (see Definition~\ref{def:hyper_cone}) such that \begin{align*} \tr[x_i] \leq \epsilon, ~~\rank(x_i)\leq r~~\forall i\in [n], ~\text{and}\quad \left\|\sum_{i=1}^n x_i\right\|_h \leq \sigma. \end{align*} Then, there exists a partition $S_1 \cup S_2 \cup \cdots \cup S_k = [n]$ such that for all $j\in [k]$, \begin{align*} \left\| \sum_{i\in S_j} x_i\right\|_h \leq \frac{\sigma}{k}\cdot \delta\left(\frac{k\epsilon}{\sigma}, n, rk\right). \end{align*} \end{theorem} \begin{remark}\label{rmk:bound_inf} By Eq.~(1.7) in \cite{b18}, the above bound is at most \begin{align*} \frac{\sigma}{k}\cdot \delta(k\epsilon/\sigma, \infty, \infty) = \frac{\sigma}{k}\cdot (1+\sqrt{k\epsilon/\sigma})^2 = \left(\sqrt{\epsilon} + \sqrt{\sigma/k}\right)^2, \end{align*} which also generalizes the result of \cite{mss15} (Theorem~\ref{thm:kadison_singer}) to hyperbolic polynomials with sub-isotropic condition. \end{remark} The proof of Theorem~\ref{thm:branden_subisotropic} is exactly the same as the proof of Theorem 1.3 in \cite{b18}, but relies on the sub-isotropic version of the following theorem. Therefore, we will only prove Theorem~\ref{thm:branden_6.1}. \begin{theorem}[Sub-isotropic version of Theorem 6.1 in \cite{b18}]\label{thm:branden_6.1} Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is a hyperbolic polynomial with respect to $e\in \mathbb{R}^m$. 
Let $\mathsf{x}_1,\dots,\mathsf{x}_n$ be independent random vectors in $\Lambda_+(e)$ with finite supports such that
\begin{align*}
\tr[\E[\mathsf{x}_i]] \leq \epsilon, ~~\rank(\E[\mathsf{x}_i])\leq r~~\forall i\in [n], ~\text{and}\quad \left\|\sum_{i=1}^n \E[\mathsf{x}_i]\right\|_h \leq \sigma.
\end{align*}
Then, we have
\begin{align*}
\Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\leq \sigma\cdot \delta(\epsilon/\sigma, n, r)\right]>0.
\end{align*}
\end{theorem}
\begin{proof}
Let $V_i$ be the support of $\mathsf{x}_i$ for $i\in [n]$. By Theorem~\ref{thm:compatible}, the family $\{h[v_1,\dots,v_n](t\overline{e}+\underline{\mathbf{1}})\}_{v_i\in V_i}$ is compatible, where $t\overline{e}+\underline{\mathbf{1}} = \begin{bmatrix}te \\ \mathbf{1}\end{bmatrix}\in \mathbb{R}^{n+m}$. By Theorem~\ref{thm:compatible_root_bound}, there exists $(v_1^*,\dots,v_n^*)\in V_1\times \cdots \times V_n$ with nonzero probability, such that the largest root of $h[v_1^*,\dots,v_n^*](t\overline{e}+\underline{\mathbf{1}})$ is at most the largest root of $\E[h[\mathsf{x}_1,\dots,\mathsf{x}_n]]$. By Fact~\ref{fac:affine_linear}, $\E[h[\mathsf{x}_1,\dots,\mathsf{x}_n]] = h[\E[\mathsf{x}_1],\dots,\E[\mathsf{x}_n]]$. Let $\lambda_{\max}(v_1,\dots,v_n)$ denote the largest root of $h[v_1,\dots,v_n](t\overline{e}+\underline{\mathbf{1}})$. Then, we have
\begin{align*}
\lambda_{\max}(\E[\mathsf{x}_1],\dots,\E[\mathsf{x}_n])\geq ~ \lambda_{\max}(v_1^*,\dots,v_n^*)\geq~ \lambda_{\max}(v_1^*+\cdots +v_n^*),
\end{align*}
where the second step follows from Theorem~\ref{thm:sum_root_bound_mix}. It is easy to verify that $\E[\mathsf{x}_1],\dots, \E[\mathsf{x}_n]$ satisfy the conditions in Theorem~\ref{thm:mix_hyper_root}. Thus, by Theorem~\ref{thm:mix_hyper_root}, we get that
\begin{align*}
\lambda_{\max}(v_1^*+\cdots +v_n^*) \leq ~ \lambda_{\max}(\E[\mathsf{x}_1],\dots,\E[\mathsf{x}_n]) \leq ~ \sigma\cdot \delta(\epsilon/\sigma, n, r),
\end{align*}
which completes the proof.
\end{proof} Similar to Corollary~\ref{cor:branden_discrepancy}, Theorem~\ref{thm:branden_subisotropic} also implies the following discrepancy result for vectors in sub-isotropic position. \begin{corollary} Let $0<\epsilon\leq \frac{1}{2}$ and $\sigma>0$. Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e\in \mathbb{R}^m$, and let $x_1, \dots , x_n\in \Lambda_+(h,e)$ satisfy \begin{align*} \tr[x_i] \leq \epsilon, ~\text{and}\quad \left\|\sum_{i=1}^n x_i\right\|_h \leq \sigma. \end{align*} Then, there exist signs $r\in \{-1,1\}^n$ such that \begin{align*} \left\|\sum_{i=1}^n r_i x_i\right\|_h\leq 2\sqrt{\epsilon(2\sigma-\epsilon)}. \end{align*} \end{corollary} \begin{proof} By Theorem~\ref{thm:branden_subisotropic} with $k=2$ and the upper bound in Remark~\ref{rmk:bound_inf}, there exists a set $S\subseteq [n]$ such that \begin{align*} \Big\|\sum_{i\in S} x_i\Big\|_h \leq (\sqrt{\epsilon} + \sqrt{\sigma/2})^2, ~~\text{and}~~\Big\|\sum_{i\not\in S} x_i\Big\|_h \leq (\sqrt{\epsilon} + \sqrt{\sigma/2})^2. \end{align*} Since we know that $\|\sum_{i=1}^n x_i\|_h \leq \sigma$, we get that \begin{align*} \Big\|\sum_{i\in S} x_i - \sum_{i\not\in S} x_i\Big\|_h\leq 2\sqrt{\epsilon(2\sigma-\epsilon)}. \end{align*} By assigning $r_i=1$ for $i\in S$ and $r_i=-1$ for $i\notin S$, we complete the proof of the corollary. \end{proof} \subsubsection{Technical tools in previous work} In this section, we provide some necessary definitions and technical tools from \cite{b18}. \begin{definition}[Directional derivative] Let $h\in \mathbb{R}[x_1,\dots,x_m]$. The directional derivative of $h(x)$ with respect to $v\in \mathbb{R}^m$ is defined as \begin{align*} D_vh(x) := \sum_{i=1}^m v_i \cdot \frac{\partial h}{\partial x_i} (x). \end{align*} \end{definition} The following fact shows the relation between the directional derivative and the usual derivative.
\begin{fact}\label{fac:dir_derivate} For any polynomial $h(x)$ and any vector $v\in \mathbb{R}^m$, we have \begin{align*} D_vh(x+tv)= \frac{\d}{\d t} h(x + tv). \end{align*} \end{fact} If $h$ is a hyperbolic polynomial, then the directional derivative is related to the hyperbolic trace: \begin{fact}\label{fac:d_trace} If $h$ is hyperbolic with respect to $e\in \mathbb{R}^m$, then for any $v\in \mathbb{R}^m$, we have \begin{align*} \tr[v] = \frac{D_v h(e)}{h(e)}. \end{align*} \end{fact} \begin{definition}[Mixed hyperbolic polynomial] If $h(x)\in \mathbb{R}[x_1,\dots,x_m]$ is a hyperbolic polynomial with respect to $e\in \mathbb{R}^m$, and $v_1,\dots,v_n\in \Lambda_+$, then the mixed hyperbolic polynomial $h[v_1,\dots,v_n]\in \mathbb{R}[x_1,\dots,x_m,y_1,\dots,y_n]$ is defined as \begin{align*} h[v_1,\dots,v_n]:=\prod_{i=1}^n (1-y_i D_{v_i}) h(x). \end{align*} \end{definition} Br{\"a}nd{\'e}n \cite{b18} proved that $h[v_1,\dots,v_n]$ is also hyperbolic with the hyperbolic cone containing $\Lambda_{++}\times \mathbb{R}^n_{\leq 0}$. In our proof, we will also use the following fact, which can be easily proved by showing that $h[v_1,\dots,v_n]$ is affine linear in each coordinate. \begin{fact}\label{fac:affine_linear} Let $\mathsf{x}_1,\dots,\mathsf{x}_n$ be independent random variables in $\mathbb{R}^m$. Then, \begin{align*} \E[h[\mathsf{x}_1,\dots,\mathsf{x}_n]] = h[\E[\mathsf{x}_1],\dots,\E[\mathsf{x}_n]]. \end{align*} \end{fact} Br{\"a}nd{\'e}n \cite{b18} also defined the compatible family of polynomials, which is a sub-class of the interlacing families of polynomials in \cite{mss15,mss18}. \begin{definition}[Compatible family of polynomials]\label{def:compatible} Let $S_1,\dots,S_n$ be finite sets.
A family of polynomials \begin{align*} {\cal F}=\{f(S;t)\}_{S\in S_1\times \cdots \times S_n}\subset \mathbb{R}[t] \end{align*} is called compatible if the following properties hold: \begin{itemize} \item all the nonzero members of ${\cal F}$ have the same degree and the same signs of their leading coefficients, and \item for all choices of independent random variables $\mathsf{x}_1\in S_1,\dots,\mathsf{x}_n\in S_n$, the polynomial \begin{align*} \E[f(\mathsf{x}_1,\dots,\mathsf{x}_n; t)] \end{align*} is real-rooted. \end{itemize} \end{definition} The following theorem characterizes the largest root of the expectation polynomial in a compatible family, which is very similar to the corresponding result for interlacing families \cite{mss15}. \begin{theorem}[Theorem 2.3 in \cite{b18}]\label{thm:compatible_root_bound} Let $\{f(S;t)\}_{S\in S_1\times \cdots \times S_n}$ be a compatible family, and let $\mathsf{x}_1\in S_1,\dots,\mathsf{x}_n\in S_n$ be independent random variables such that $\E[f(\mathsf{x}_1,\dots,\mathsf{x}_n;t)]\not \equiv 0$. Then there is a tuple $S = (s_1,\dots, s_n) \in S_1\times \cdots \times S_n$, with $\Pr[\mathsf{x}_i = s_i] > 0$ for all $i\in [n]$, such that the largest root of $f(s_1,\cdots, s_n;t)$ is smaller than or equal to the largest root of $\E[f(\mathsf{x}_1,\cdots,\mathsf{x}_n;t)]$. \end{theorem} The theorem below shows that mixed hyperbolic polynomials form a compatible family. \begin{theorem}[Theorem 3.5 in \cite{b18}]\label{thm:compatible} Let $h(x)$ be hyperbolic with respect to $e\in \mathbb{R}^m$, and let $V_1,\dots,V_n$ be finite sets of vectors in $\Lambda_+$. Let $w\in \mathbb{R}^{m+n}$. For $V=(v_1,\dots,v_n)\in V_1\times \cdots \times V_n$, define \begin{align*} f(V;t):=h[v_1,\dots,v_n](t\overline{e}+w), \end{align*} where $\overline{e}:=\begin{bmatrix}e \\ 0\end{bmatrix}\in \mathbb{R}^{n+m}$. Then, $\{f(V;t)\}_{V\in V_1\times \cdots \times V_n}$ is a compatible family.
\end{theorem} Let $\lambda_{\max}(v_1,\dots,v_n)$ denote the largest root of the mixed hyperbolic polynomial $h[v_1,\dots,v_n](t\overline{e} + \underline{\mathbf{1}})\in \mathbb{R}[t]$. The following theorem shows that the largest hyperbolic eigenvalue of the vector $v_1+\cdots +v_n$ can be upper-bounded by $\lambda_{\max}(v_1,\dots,v_n)$. \begin{theorem}[Theorem 5.2 in \cite{b18}]\label{thm:sum_root_bound_mix} If $h$ is hyperbolic with respect to $e$ and $v_1,\dots,v_n\in \Lambda_+(e)$, then \begin{align*} \lambda_{\max}(v_1+\cdots+v_n)\leq \lambda_{\max}(v_1,\dots,v_n). \end{align*} \end{theorem} The following theorem shows a connection between the hyperbolic cone of $h$ and the hyperbolic cone of the mixed hyperbolic polynomial $h[v_1,\dots,v_n]$. \begin{theorem}[Corollary 5.5 in \cite{b18}]\label{cor:branden_5.5} Suppose $h$ is hyperbolic with respect to $e\in \mathbb{R}^m$, and let $\Gamma_+$ be the hyperbolic cone of $h[v_1,\dots,v_n]$, where $v_i\in \Lambda_+(e)$ and $1\leq \rank(v_i)\leq r_i$ for $i\in [n]$. Suppose $x\in \Lambda_{++}(e)$ is such that for $i\in [n]$, $\overline{x}+\mu_i\underline{e_i}\in \Gamma_+$ for any $\mu_i>0$. Then, for any $(\delta_i, \mu_i)\in U_{r_i}$ for $i\in [n]$, \begin{align*} \overline{x}+\left(1-\frac{1}{n}\right)\sum_{i=1}^n\delta_i \overline{v_i} + \left(1-\frac{1}{n}\right)\underline{\mathbf{1}} + \frac{1}{n}\sum_{i=1}^n \mu_i \underline{e_i} \in \Gamma_+. \end{align*} \end{theorem} \subsubsection{Upper bound for the largest root of the mixed hyperbolic polynomial} The goal of this section is to prove Theorem~\ref{thm:mix_hyper_root}, which gives an upper bound on the largest root of the mixed hyperbolic polynomial for vectors in sub-isotropic position.
\begin{theorem}[Sub-isotropic version of Theorem 5.6 in \cite{b18}]\label{thm:mix_hyper_root} Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e\in \mathbb{R}^m$, and let $v_1, \dots , v_n\in \Lambda_+(h,e)$ satisfy \begin{align*} \tr[v_i] \leq \epsilon, ~~\rank(v_i)\leq r~~\forall i\in [n], ~\text{and}\quad \left\|\sum_{i=1}^n v_i\right\|_h \leq \sigma. \end{align*} Then, \begin{align*} \lambda_{\max}(v_1,\dots,v_n)\leq \sigma \cdot \delta(\epsilon/\sigma, n, r). \end{align*} \end{theorem} \begin{proof} For $\mu>0$, let $x:=\epsilon \mu \cdot e$ and $\mu_i := \mu$ for $i\in [n]$. Let $e_i\in \mathbb{R}^{n}$ be the $i$-th standard basis vector. Then, we have \begin{align*} h[v_1,\dots,v_n](\overline{x}+\mu_i\underline{e_i}) = &~ (1-\mu D_{v_i})h(\epsilon\mu e)\\ = &~ (\epsilon\mu)^d h(e) - \mu^d\epsilon^{d-1}D_{v_i} h(e)\\ = &~ \mu^d\epsilon^{d-1}h(e)(\epsilon - \tr[v_i])\\ \geq &~ 0, \end{align*} where the first step follows from Fact~\ref{fac:dir_derivate}, the second step follows from the homogeneity of hyperbolic polynomials, the third step follows from Fact~\ref{fac:d_trace}, and the last step follows from $\tr[v_i]\leq \epsilon$. By part (2) of Theorem~\ref{thm:hyperbolic_cone_gar}, we get that $\overline{x}+\mu_i \underline{e_i}\in \Gamma_+$, the hyperbolic cone of $h[v_1,\dots,v_n]$, for all $i\in [n]$. Then, by Theorem~\ref{cor:branden_5.5}, for any $(\delta, \mu)\in U_r$, \begin{align*} \epsilon \mu \overline{e} + \left(1-\frac{1}{n}\right)\delta \sum_{i=1}^n \overline{v_i} + (1+\frac{\mu-1}{n})\underline{\mathbf{1}}\in \Gamma_+, \end{align*} which implies \begin{align*} \frac{\epsilon \mu \overline{e} + \left(1-\frac{1}{n}\right)\delta \sum_{i=1}^n \overline{v_i} }{1+\frac{\mu-1}{n}}+\underline{\mathbf{1}}\in \Gamma_+, \end{align*} by the homogeneity of $\Gamma_+$.
Since $\overline{e}\in \Gamma_{++}$, $\lambda_{\max}(\sum_{i=1}^n v_i)\leq \sigma$, and $\Gamma_+$ is a convex cone, we have \begin{align*} \frac{\left(\epsilon \mu + \left(1-\frac{1}{n}\right)\delta\sigma\right) }{1+\frac{\mu-1}{n}}\overline{e} +\underline{\mathbf{1}}\in \Gamma_+. \end{align*} Moreover, by Remark 5.1 in \cite{b18}, \begin{align*} \lambda_{\max}(v_1,\dots,v_n) = \inf\left\{\rho>0:~ \rho \overline{e} + \underline{\mathbf{1}}\in \Gamma_+\right\}. \end{align*} Hence, we conclude that \begin{align*} \lambda_{\max}(v_1,\dots,v_n)\leq &~ \inf_{(\delta, \mu)\in U_r}~~\frac{\left(\epsilon \mu + \left(1-\frac{1}{n}\right)\delta\sigma\right) }{1+\frac{\mu-1}{n}}\\ = &~ \sigma \cdot \delta(\epsilon/\sigma, n, r). \end{align*} \end{proof} \subsection{Discrepancy result with high probability} The goal of this section is to prove Theorem~\ref{thm:eight_deviations}, which proves the rank-1 case of the hyperbolic Spencer conjecture (Conjecture~\ref{conj:hyperbolic_spencer}). \begin{theorem}[Eight deviations suffice]\label{thm:eight_deviations} Let $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$ be such that $\rank(x_i)\leq 1$ for all $i\in [n]$. Let $h$ be an $m$-variable, degree-$d$ hyperbolic polynomial with respect to $e$. Let $\sigma = ( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2}$. Then, there exists a sign vector $r \in \{-1, 1\}^n$ such that \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq 8\sigma \end{align*} holds. \end{theorem} \begin{proof} Similar to the proof of Theorem~\ref{thm:main}, we first have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq & ~ \left( \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \right)^{1/(2q)}\\ \leq & ~ \sqrt{2q-1} \cdot \left( \sum_{i=1}^n\|x_i\|_{h}^2 \right)^{1/2}\\ = &~ \sqrt{2q-1} \cdot \sigma, \end{align*} where the first step follows from Eq.~\eqref{eq:bound_expect_h_norm}.
By setting $q=1$, we have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq \sigma. \end{align*} By Claim~\ref{clm:hyperbolic_2_1_norm}, \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h^2\right] \leq 2\left( \E_{ r \sim \{ \pm 1 \}^n } \left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \right)^2 \leq 2\sigma^2. \end{align*} Then, by Corollary~\ref{cor:hyperbolic_prob}, \begin{align*} \Pr_{ r \sim \{ \pm 1 \}^n } \left[ \left\| \sum_{i=1}^n r_i x_i\right\|_h > t \right] \leq &~ 2 \exp \left(-\frac{t^2}{32 \E_{ r \sim \{ \pm 1 \}^n }[ \| \sum_{i=1}^n r_i x_i \|_h^2 ] }\right)\\ \leq &~ 2\exp\left(-\frac{t^2}{64 \sigma^2}\right). \end{align*} By choosing $t=8\sigma$, we have \begin{align*} \Pr_{ r \sim \{ \pm 1 \}^n } \left[ \left\| \sum_{i=1}^n r_i x_i\right\|_h > 8\sigma \right]\leq 2/e. \end{align*} Therefore, with probability $1-2/e$, we have \begin{align*} \left\| \sum_{i=1}^n r_i x_i\right\|_h \leq 8\sigma, \end{align*} which proves the theorem. \end{proof} \begin{remark} It is interesting to apply Theorem~\ref{thm:eight_deviations} to the determinant polynomial $h(x)=\det(X)$. It implies that for rank-1 matrices $X_1,\dots,X_n\in \mathbb{R}^{d\times d}$, \begin{align*} \Pr_{r\sim \{\pm 1\}^n}\left[\left\|\sum_{i=1}^n r_i X_i\right\|>t\right]\leq 2\exp\left(-\frac{t^2}{64 \sigma^2}\right), \end{align*} for $\sigma^2=\sum_{i=1}^n \|X_i\|^2$. This result is in fact incomparable to the matrix Chernoff bound \cite{tro15}, which shows that \begin{align*} \Pr_{r\sim \{\pm 1\}^n}\left[\left\|\sum_{i=1}^n r_i X_i\right\|>t\right]\leq 2d\cdot \exp\left(-\frac{t^2}{2\widetilde{\sigma}^2}\right), \end{align*} where $\widetilde{\sigma}^2=\|\sum_{i=1}^n X_i^2\|$, because in general we only know the following relation between $\sigma$ and $\widetilde{\sigma}$ \cite{tro15}: \begin{align*} \widetilde{\sigma}^2 \leq \sigma^2 \leq d\cdot \widetilde{\sigma}^2.
\end{align*} \end{remark} \iffalse Note that Theorem~\ref{thm:eight_deviations} can also be extended to constant rank case: \begin{corollary}[Constant-rank hyperbolic Spencer theorem]\label{cor:constant_rank_discrepancy} Let $h$ be an $m$-variable, degree-$d$ hyperbolic polynomial. Given $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$ such that $\rank(x_i) \leq s$ for all $i\in [n]$. Let $\sigma = ( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2}$. Then, there exists a sign vector $r \sim \{-1, 1\}^n$ such that \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq \Theta(\sqrt{\log s})\cdot \sigma \end{align*} holds. \end{corollary} \begin{proof} By Lemma~\ref{lem:rank_1_2q_norm}, we have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq & ~ \sqrt{2q-1} \cdot s^{1/(2q)}\cdot \sigma. \end{align*} By choosing $q=(\log s)/2$, we have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq 2\sqrt{(\log s) - 1}\cdot \sigma. \end{align*} Then, it follows from Claim~\ref{clm:hyperbolic_2_1_norm} and Corollary~\ref{cor:hyperbolic_prob} that \begin{align*} \Pr_{ r \sim \{ \pm 1 \}^n } \left[ \left\| \sum_{i=1}^n r_i x_i\right\|_h > t \right] \leq &~ 2 \exp \left(-\frac{t^2}{64\cdot (4 (\log s) -4)\cdot \sigma^2 }\right)\\ \leq &~ 2 / e, \end{align*} when $t = 16\sqrt{\log s} \cdot \sigma$, and hence the corollary is proved. \end{proof} \subsection{Expected hyperbolic-\texorpdfstring{$2q$}{2q} norm in constant rank case} The goal of this section is to prove Lemma~\ref{lem:rank_1_2q_norm}. \begin{lemma}\label{lem:rank_1_2q_norm} Given $n$ vectors $x_1, \cdots, x_n \in \mathbb{R}^m$ such that $\rank(x_i) \leq s$ for all $i\in [n]$. Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial. 
For any $q\geq 1$, we have \begin{align*} \left( \E_{r \sim \{\pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \right)^{1/(2q)}\leq \sqrt{2q-1} \cdot s^{1/(2q)} \cdot \left( \sum_{i=1}^n \|x_i\|_{h}^2 \right)^{1/2}. \end{align*} \end{lemma} \begin{proof} Following the proof of Lemma~\ref{lem:p_norm_upper_bound}, by Eq.~\eqref{eq:trace_to_singular}, we have \begin{align*} \sum_{\beta\in \mathcal{B}_{\mathrm{even}}} \tr\left[\prod_{i=1}^{2q}A_1^{\beta_i}B_1^{1-\beta_i}\right] \leq &~ \sum_{k_1=0}^q \binom{2q}{2k_1} \sum_{j=1}^d \sigma_j(A_1)^{2k_1}\sigma_j(B_1)^{2q-2k_1}\\ = &~ \|B_1\|_{2q}^{2q} + \sum_{k_1=1}^{q-1} \binom{2q}{2k_1} \sum_{j=1}^{s} \sigma_j(A_1)^{2k_1}\sigma_j(B_1)^{2q-2k_1} + \|A_1\|_{2q}^{2q}\\ \leq &~ \|B_1\|_{2q}^{2q} + \sum_{k_1=1}^{q-1} \binom{2q}{2k_1} \sigma_1(A_1)^{2k_1}\cdot \sum_{j=1}^{s} \sigma_j(B_1)^{2q-2k_1} + s\cdot \sigma_1(A_1)^{2q}\\ = &~ \left\|\sum_{i=2}^n r_i x_i\right\|_{h,2q}^{2q} + \sum_{k_1=1}^{q-1} \binom{2q}{2k_1} \|x_1\|_{h}^{2k_1}\cdot \left\|\sum_{i=2}^n r_i x_i\right\|_{h,2q-k_1}^{2q-k_1} + s\cdot \|x_1\|_h^{2q} \end{align*} where the second step separates the $k_1=0$ and $k_1=2q$ terms, the third step follows from $\rank(x_1) = \rank(A)\leq s$, the last step follows uses Corollary~\ref{cor:hv_2_vars} to transform back to the hyperbolic norms. Hence, we can repeat this procedure for the first and second terms in the above equation and finally have: \begin{align*} \E_{ r \sim \{\pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right] \leq &~ s\cdot \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{2q}{2k_1,\dots,2k_n} \prod_{i=1}^{n} \|x_i\|_h^{2k_i}\\ \leq &~ M_{2q}^{2q}\cdot s \cdot \sum_{\substack{k_1,\dots,k_n\geq 0\\k_1+\cdots+k_n=q}}\binom{q}{k_1,\dots,k_n} \prod_{i=1}^{n} \|x_i\|_h^{2k_i}\\ = &~ M_{2q}^{2q} \cdot s \cdot \left(\sum_{i=1}^{n} \|x_i\|_h^2 \right)^q, \end{align*} where the second step follows from Lemma~\ref{lem:zyg02_tj74}. 
The lemma then follows from $M_{2q}\leq \sqrt{2q-1}$. \end{proof} \fi \iffalse \subsection{Hyperbolic Kadison-Singer implies hyperbolic discrepancy}\label{sec:hyper_ks_imp_disc} The goal of this section is to prove Corollary~\ref{cor:branden_discrepancy}. We first recall the statement: \begin{corollary} Let $0<\epsilon\leq \frac{1}{2}$. Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e\in \mathbb{R}^m$, and let $x_1, \dots , x_n\in \Lambda_+(h,e)$ that satisfy \begin{align*} \tr[x_i] \leq \epsilon, ~~\rank(x_i)\leq r~~\forall i\in [n], ~\text{and}\quad \sum_{i=1}^n x_i = e. \end{align*} Then, there exist signs $r\in \{-1,1\}^n$ such that \begin{align*} \left\|\sum_{i=1}^n r_i x_i\right\|_h\leq O(\sqrt{\epsilon(1-\epsilon)}). \end{align*} \end{corollary} \begin{proof} By Theorem~\ref{thm:hyperbolic_ks} with $k=2$ and $r=1$, we know that there exists a subset $S_1\subseteq [n]$ such that for $w:=\sum_{i\in S_1} x_i$, \begin{align}\label{eq:w_eigen_cond} \|w\|_h \leq \frac{1}{2} + \sqrt{\epsilon(1-\epsilon)}, ~\text{and}\quad \|e-w\|_h \leq \frac{1}{2} + \sqrt{\epsilon(1-\epsilon)}, \end{align} where the second inequality follows from $\sum_{i=1}^n x_i = e$. Eq.~\eqref{eq:w_eigen_cond} together with Fact~\ref{fac:eigen_linear_trans} gives that \begin{align*} \frac{1}{2}-\sqrt{\epsilon(1-\epsilon)} \leq \lambda_i(w) \leq \frac{1}{2}+\sqrt{\epsilon(1-\epsilon)} ~~~\forall i\in [d]. \end{align*} Then, we have \begin{align*} \|w-2e\|_h = \Big\|\sum_{i\in S_1} x_i - \sum_{i\notin S_1} x_i\Big\|_h \leq O(\sqrt{\epsilon(1-\epsilon)}), \end{align*} which means that if we choose $r_i = 1$ for $i\in S_1$ and $r_i=-1$ for $i\notin S_1$, the discrepancy will be $O(\sqrt{\epsilon(1-\epsilon)})$. The corollary is then proved. 
\end{proof} \fi \section{Introduction} The study of concentration of sums of independent random variables dates back to Central Limit Theorems, and hence to de Moivre and Laplace, while modern concentration bounds for sums of random variables were probably first established by Bernstein \cite{b24} in 1924. An extremely popular variant now known as Chernoff bounds was introduced by Rubin and published by Chernoff \cite{c52} in 1952. Hyperbolic polynomials are real, multivariate homogeneous polynomials $p(x)\in \mathbb{R}[x_1,\dots,x_n]$; we say that $p(x)$ is hyperbolic in direction $e\in \mathbb{R}^n$ if, for every $x\in \mathbb{R}^n$, the univariate polynomial $t\mapsto p(te-x)$ has only real roots. The study of hyperbolic polynomials was initiated by G{\aa}rding in \cite{g51}, and they have since been extensively studied in the mathematics community \cite{g59,g97,bgls01,r06}. Some examples of hyperbolic polynomials are as follows: \begin{itemize} \item Let $h(x)=x_1x_2\cdots x_n$. It is easy to see that $h(x)$ is hyperbolic with respect to any vector $e\in \mathbb{R}^n$ with strictly positive entries. \item Let $X = (x_{i,j})_{i,j=1}^n$ be a symmetric matrix where $x_{i,j} = x_{j,i}$ for all $1\leq i,j\leq n$. The determinant polynomial $h(x) = \det(X)$ is hyperbolic with respect to $\widetilde{I}$, the identity matrix $I$ packed into a vector. Indeed, $h(t \widetilde{I}-x)=\det(tI-X)$, the characteristic polynomial of the symmetric matrix $X$, has only real roots by the spectral theorem. \item Let $h(x) = x_1^2 - x_2^2-\cdots - x_n^2$. Then, $h(x)$ is hyperbolic with respect to $e=\begin{bmatrix}1&0&\cdots & 0\end{bmatrix}^\top$. \end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.70 \textwidth]{hyper_two.pdf}% \caption{\small The function on the left is $h(x,y,z)=z^2-x^2-y^2$, which is hyperbolic with respect to $e=\begin{bmatrix}0&0& 1\end{bmatrix}^\top$, since any line in this direction always has two intersections with the surface, corresponding to the two real roots of $h(-x,-y,t-z)=0$.
The function on the right is $g(x,y,z) =z^4-x^4-y^4$, which is \emph{not} hyperbolic with respect to $e$, since a line in this direction only has 2 intersections while the degree is 4.} \label{fig:hyper_poly} \end{figure} Inspired by the eigenvalues of a matrix, we can define the hyperbolic eigenvalues of a vector $x$ as the real roots of $t\mapsto h(te-x)$, that is, $\lambda_{h,e}(x)=(\lambda_1(x),\dots,\lambda_d(x))$ such that $h(te-x)= h(e)\prod_{i=1}^d (t-\lambda_i(x))$ (see Fact~\ref{fac:hyper_factor}). In other words, the hyperbolic eigenvalues of $x$ are the zeros of the hyperbolic polynomial restricted to the real line through $x$ in direction $e$. In this paper, we assume that $h$ and $e$ are fixed and we just write $\lambda(x)$, omitting the subscripts. Furthermore, similar to the spectral norm of a matrix, the \emph{hyperbolic spectral norm} of a vector $x$ can be defined as \begin{align} \|x\|_h = \max_{i\in [d]} |\lambda_i(x)|. \end{align} In this work, we study the concentration phenomenon of the roots of hyperbolic polynomials. More specifically, we consider the hyperbolic spectral norm of the sum of randomly signed vectors, i.e., $\|\sum_{i=1}^n r_i x_i\|_h$, where $r\in \{-1,1\}^n$ are uniformly random signs and $\{ x_1, x_2, \cdots, x_n \}$ are any fixed vectors in $\mathbb{R}^m$. This kind of summation has been studied in the following cases: \renewcommand\labelenumi{(\theenumi)} \begin{enumerate} \item \textbf{scalar case:} when $x_{i}\in \{-1, 1\}$ and the norm is just the absolute value, i.e., $|\sum_{i=1}^n r_i x_i|$, the scalar Chernoff bound \cite{c52} shows that $\Pr_{r\sim \{-1, 1\}^n}\left[\left|\sum_{i=1}^n r_i x_i\right|>t\right]\leq 2\exp\left(-t^2/(2n)\right)$, corresponding to the case when $h(x)=x$ for $x\in \mathbb{R}$ and the hyperbolic direction $e=1$.
\item \textbf{matrix case:} when $x_i$ are $d$-by-$d$ symmetric matrices and the norm is the spectral norm, i.e., $\|\sum_{i=1}^n r_i x_i\|$, the matrix Chernoff bound \cite{tro15} shows that $\Pr_{r\sim \{-1, 1\}^n}\left[\left\|\sum_{i=1}^n r_i x_i\right\|>t\right]\leq 2d\cdot \exp\left(-\frac{t^2}{2\left\|\sum_{i=1}^n x_i^2\right\|}\right)$, corresponding to $h(x) = \det(X)$ and $e= I$. \end{enumerate} We generalize these results to the hyperbolic spectral norm for any hyperbolic polynomial $h$, a problem recognized as interesting in this field by James Renegar \cite{r19}. \subsection{Our results} In this paper, we prove the following ``Chernoff-type'' concentration result for the hyperbolic spectral norm. We show that, when adding uniformly random signs to $n$ vectors, the hyperbolic spectral norm of their summation concentrates with an exponential tail. \begin{theorem}[Nearly optimal hyperbolic Chernoff bound for Rademacher sum]\label{thm:intro_main} Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial with respect to a direction $e\in \mathbb{R}^m$. Let $1\leq s \leq d$, $\sigma>0$. Let $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$ be such that $\rank(x_i)\leq s$ for all $i\in [n]$ and $\sum_{i=1}^n \|x_i\|_h^2 \leq \sigma^2$, where $\rank(x)$ is the number of nonzero hyperbolic eigenvalues of $x$. Then, we have \begin{align*} \E_{ r \sim \{ \pm 1 \}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h\right] \leq 2\sqrt{\log(s)}\cdot \sigma. \end{align*} Furthermore, for every $t>0$, and for some fixed constant $c>0$, \begin{align*} \Pr_{ r \sim \{\pm 1\}^n } \left[ \left\| \sum_{i=1}^n r_i x_i \right\|_h > t \right]\leq \exp\left(- \frac{c t^2}{\sigma^2 \log (s+1)}\right). \end{align*} \end{theorem} We discuss the optimality of Theorem~\ref{thm:intro_main} in different cases: \begin{itemize} \item \textbf{Degree-1 case: } When the degree of the hyperbolic polynomial is $d=1$, the hyperbolic polynomial is $h(x)=x$.
Then, we have $\|x\|_h = |x|$ and we get the Bernstein inequality \cite{b24}: $\Pr_{r\sim \{\pm 1\}^n}\left[ \left|\sum_{i=1}^n r_i x_i\right|>t \right]\leq \exp\left(- \Omega\left(t^2/(\sum_{i=1}^n x_i^2)\right)\right)$. It implies that our result is optimal in this case. \item \textbf{Constant degree case: } When $d>1$ is a constant, consider $h$ being the determinant polynomial of a $d$-by-$d$ matrix. Since $d=O(1)$, we can show that $\sigma=( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2} = \Theta(\|\sum_{i=1}^n x_i^2\|^{1/2})$, and Theorem~\ref{thm:intro_main} exactly recovers the matrix Chernoff bound \cite{tro15}, which implies that our result is also optimal in this case. \iffalse \begin{align*} \Pr_{r\sim \{\pm 1\}^n}\left[ \left\|\sum_{i=1}^n r_i x_i\right\|>t \right]\leq \exp\left(- \frac{c' t^2}{\|\sum_{i=1}^n x_i^2\|}\right), \end{align*} where $c'>0$ is a fixed constant and it is optimal proved by Tropp \cite{tro15}. \fi \item \textbf{Constant rank case: } When all the vectors have constant hyperbolic rank, we still take $h=\det(X)$, but $X_1,\dots,X_n$ are constant-rank matrices of arbitrary dimension. In this case, we can obtain a dimension-free matrix concentration inequality: $\Pr_{r\sim \{\pm 1\}^n}\left[\left\|\sum_{i=1}^n r_i X_i\right\|>t\right]\leq 2\exp\left(-\Omega(t^2/\sigma^2)\right)$. It beats the general matrix Chernoff bound \cite{tro15} when $\sigma$ is not essentially larger than $\|\sum_{i=1}^n X_i^2\|^{1/2}$. Thus, Theorem~\ref{thm:intro_main} is nearly optimal in this case. \end{itemize} Theorem~\ref{thm:intro_main} works for arbitrary vectors in $\mathbb{R}^m$. We also consider the maximum and minimum hyperbolic eigenvalues of the sum of random vectors in the hyperbolic cone, which is a generalization of the positive semi-definite cone for matrices.
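To make the hyperbolic eigenvalues and the hyperbolic spectral norm used above concrete, the following numerical sketch (not part of the paper; the function names are ours) computes them for the two running examples: the determinant polynomial, where they coincide with the ordinary matrix eigenvalues, and the quadratic $h(x)=x_1^2-x_2^2-\cdots-x_n^2$ with $e=(1,0,\dots,0)$, where $h(te-x)=(t-x_1)^2-(x_2^2+\cdots+x_n^2)$ has the two roots $x_1\pm\|(x_2,\dots,x_n)\|_2$.

```python
import numpy as np

def eigs_determinant(X):
    # h = det, e = (vectorized) identity: h(te - x) = det(tI - X), so the
    # hyperbolic eigenvalues are the ordinary eigenvalues of the symmetric X.
    return np.sort(np.linalg.eigvalsh(X))[::-1]

def eigs_quadratic(x):
    # h(x) = x_1^2 - x_2^2 - ... - x_n^2 with e = (1, 0, ..., 0):
    # h(te - x) = (t - x_1)^2 - (x_2^2 + ... + x_n^2).
    r = np.linalg.norm(x[1:])
    return np.array([x[0] + r, x[0] - r])

def h_norm(eigs):
    # Hyperbolic spectral norm: the eigenvalue largest in absolute value.
    return float(np.max(np.abs(eigs)))
```

In particular, for $h=\det$ the hyperbolic spectral norm is exactly the matrix spectral norm, which is the sense in which Theorem~\ref{thm:intro_main} specializes to the matrix Chernoff setting discussed above.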
\begin{theorem}[Hyperbolic Chernoff bound for random vectors in hyperbolic cone]\label{thm:hyper_chernoff_positive_intro} Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial with hyperbolic direction $e\in \mathbb{R}^m$. Let $\Lambda_+$ denote the hyperbolic cone\footnote{The hyperbolic cone is a set containing all vectors with non-negative hyperbolic eigenvalues. See Definition~\ref{def:hyper_cone} for the formal definition.} of $h$ with respect to $e$. Suppose $\mathsf{x}_1,\dots,\mathsf{x}_n$ are $n$ independent random vectors with supports in $\Lambda_+$ such that $\lambda_{\max}(\mathsf{x}_i)\leq R$ for all $i\in [n]$. Define the means of the minimum and maximum eigenvalues as $\mu_{\min}:=\sum_{i=1}^n \E[\lambda_{\min}(\mathsf{x}_i)]$ and $\mu_{\max}:=\sum_{i=1}^n \E[\lambda_{\max}(\mathsf{x}_i)]$. Then, we have \begin{align*} &\Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq (1+\delta)\mu_{\max}\right] \leq~ d\cdot \left(\frac{(1+\delta)^{1+\delta}}{e^{\delta}}\right)^{-\mu_{\max}/R}~~\forall \delta\geq 0,\\ &\Pr\left[\lambda_{\min}\left(\sum_{i=1}^n \mathsf{x}_i\right)\leq (1-\delta)\mu_{\min}\right] \leq~ d\cdot \left(\frac{(1-\delta)^{1-\delta}}{e^{-\delta}}\right)^{-\mu_{\min}/R}~~\forall \delta\in [0,1]. \end{align*} \end{theorem} \subsection{Hyperbolic anti-concentration} Anti-concentration is an interesting phenomenon in probability theory, which studies the opposite perspective of concentration inequalities. A simple example is the standard Gaussian random variable, which lands in any interval of length $\Delta$ with probability at most $O(\Delta)$. For Rademacher random variables $x\sim \{\pm 1\}^d$, the celebrated Littlewood-Offord theorem \cite{lo43} states that for any degree-$1$ polynomial $p(x) = \sum_{i=1}^d a_i x_i$ with $|a_i|\geq 1$, the probability that $p(x)$ lies in any length-1 interval is at most $O(\frac{\log d}{\sqrt{d}})$.
Later, the theorem was improved by Erd\H{o}s \cite{e45}, and generalized to higher degree polynomials by \cite{ctv06,rv13,mnv17}. From another perspective, the classic Littlewood-Offord theorem can be interpreted as follows: a half-space cannot concentrate in a small region with respect to the Rademacher measure. \cite{ost19} showed that it also holds for the intersection of half-spaces (polytopes). Very recently, \cite{ay21} further generalized it to positive spectrahedrons. Following this line of research, we prove the following hyperbolic anti-concentration theorem, which shows that the largest hyperbolic eigenvalue of a Rademacher sum of vectors in the hyperbolic cone cannot concentrate within a small interval. \begin{theorem}[Hyperbolic anti-concentration theorem, informal]\label{thm:anti-concen_intro} Let $h$ be an $m$-variate degree-$d$ hyperbolic polynomial with hyperbolic direction $e\in \mathbb{R}^m$. Let $\{x_i\}_{i\in [n]}\subset \Lambda_+$ be a sequence of vectors in the hyperbolic cone such that $\lambda_{\max}(x_i)\leq \tau$ for all $i\in [n]$ and $\sum_{i=1}^n \lambda_{\min}(x_i)^2\geq 1$. Then, for any $y\in \mathbb{R}^m$ and any $\Delta \geq 20\tau \log d$, we have \begin{align*} \Pr_{\epsilon\sim \{-1,1\}^n}\left[\lambda_{\max}\left(\sum_{i=1}^n \epsilon_i x_i-y\right) \in [-\Delta, \Delta] \right]\leq O(\Delta). \end{align*} \end{theorem} From the geometric viewpoint, we can define a ``positive hyperbolic-spectrahedron'' as the space $\{\alpha \in \mathbb{R}^n: \lambda_{\max}(\alpha_1x_1+\cdots +\alpha_nx_n-y)\leq 0\}$, where $x_1,\dots,x_n$ are in the hyperbolic cone. Then, Theorem~\ref{thm:anti-concen_intro} states that a uniformly random sign vector is unlikely to fall close to the boundary of a positive hyperbolic-spectrahedron.
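As a sanity check on the scalar Littlewood--Offord phenomenon recalled above (this is the classical degree-1, all-coefficients-equal case, not a result of this paper): for $p(x)=\sum_{i=1}^n x_i$ with Rademacher $x_i$ and $n$ even, the most likely value is $0$, attained with probability $\binom{n}{n/2}2^{-n}\approx\sqrt{2/(\pi n)}$, matching the $O(1/\sqrt{n})$ point-probability bound.

```python
from math import comb, pi, sqrt

def zero_prob(n):
    # Pr[x_1 + ... + x_n = 0] for i.i.d. Rademacher signs and n even:
    # exactly n/2 of the signs must be +1.
    assert n % 2 == 0
    return comb(n, n // 2) / 2 ** n

def stirling_estimate(n):
    # Central-binomial asymptotics: zero_prob(n) ~ sqrt(2 / (pi * n)).
    return sqrt(2 / (pi * n))
```

Since the attainable values of the sum are spaced two apart, any length-$1$ interval captures at most one of them, so its probability is $O(1/\sqrt{n})$ in this special case, consistent with the improved Littlewood--Offord bound.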
\subsection{Hyperbolic discrepancy theory}\label{sec:app_discrepancy} Hyperbolic polynomials are an important tool in discrepancy theory, a subfield of combinatorics with many applications in theoretical computer science. Following Meka's blog post \cite{m14}, by combining the scalar Chernoff bound and the union bound, we can easily prove that, for any $n$ vectors $x_1,\dots,x_n\in \{-1,1\}^n$, there exists $r\in \{-1,1\}^n$ such that $|\langle r, x_i\rangle|\leq O(\sqrt{n\log n})$ for every $i\in [n]$. In a celebrated result ``Six Standard Deviations Suffice'', Spencer showed that it can be improved to $|\langle r, x_i\rangle|\leq 6\sqrt{n}$ \cite{spe85}. For the matrix case, by the matrix Chernoff bound, it follows that for any symmetric matrices $X_1,\dots,X_n\in \mathbb{R}^{d\times d}$ with $\|X_i\|\leq 1$, for uniformly random signs $r\in \{-1,1\}^n$, with high probability, $\left\|\sum_{i=1}^n r_i X_i\right\|\leq O(\sqrt{\log (d)n})$. \iffalse When $X_i$ are diagonal matrices with $\{-1, 1\}$ entries, the matrix case is equivalent to the scalar case and Spencer's result indicates that the $O(\sqrt{\log d})$ factor may be improved by picking the signs carefully, which leads to the follow conjecture by Meka \cite{m14}: \fi An important open question is: can we shave the $\sqrt{\log d}$ factor for \emph{some} choice of the signs? \begin{conjecture}[Matrix Spencer Conjecture]\label{conj:matrix_spencer} For any symmetric matrices $X_1,\dots,X_n\in \mathbb{R}^{d\times d}$ with $\|X_i\|\leq 1$, there exist signs $r\in \{-1, 1\}^n$ such that $\|\sum_{i=1}^n r_i X_i\|=O(\sqrt{n})$. \end{conjecture} The breakthrough paper by Marcus, Spielman and Srivastava \cite{mss15} proved the famous Kadison-Singer conjecture \cite{ks59}, which had been open for more than half a century. \begin{theorem}[Kadison-Singer, \cite{ks59, mss15}]\label{thm:kadison_singer} Let $k \geq 2$ be an integer and $\epsilon$ a positive real number.
Let $x_1, \dots , x_n \in \C^m$ be such that $\|x_ix_i^*\| \leq \epsilon ~~\forall i\in [n]$, and $\sum_{i=1}^n x_ix_i^* = I$. Then, there exists a partition $S_1\cup S_2\cup \cdots \cup S_k=[n]$ such that $\|\sum_{i\in S_j} x_ix_i^* \| \leq (\frac{1}{\sqrt{k}}+\sqrt{\epsilon} )^2\quad \forall j\in [k]$. \end{theorem} The Kadison-Singer theorem implies that for rank-1 matrices $X_1,\dots,X_n$ with $\|X_i\|\leq \epsilon$ in isotropic position\footnote{Isotropic means $X_1+\cdots + X_n = I$.}, there exists a choice of $r\in \{-1, 1\}^n$ such that $\|\sum_{i=1}^n r_i X_i\|\leq O(\sqrt{\epsilon})$.\footnote{For more details and consequences of the Kadison-Singer theorem, we refer the readers to \cite{ct16,ms16}.} \iffalse rank-1 matrix discrepancy in the following sense. Taking $k=2$ in the above theorem gives a set $S_1\subset [n]$ such that for $A:=\sum_{i\in S_1}x_ix_i^*$, we have $\|A\|\leq \frac{1}{2}+O(\sqrt{\epsilon})$ and $\|I-A\|\leq \frac{1}{2}+O(\sqrt{\epsilon})$. In other words, \begin{align*} (\frac{1}{2}-O(\sqrt{\epsilon}))I\preceq A \preceq (\frac{1}{2}+O(\sqrt{\epsilon}))I, \end{align*} where $A\preceq B$ means $B-A$ is a positive semi-definite matrix. Then, we have \begin{align*} \|2A-I\|=\|\sum_{i\in S_1}x_ix_i^* - \sum_{i\notin S_1} x_ix_i^*\|\leq O(\sqrt{\epsilon}), \end{align*} which proved Conjecture~\ref{conj:matrix_spencer} in a special case. \fi \iffalse Note that this bound is much better than the conjectured bound $O(\sqrt{n})$ since we require the rank-1 condition and the ``isotropic'' condition, i.e., the summation of the matrices is a multiple of the identity matrix. \fi Theorem~\ref{thm:kadison_singer} was generalized to higher-rank matrices by Cohen \cite{c16} and Br{\"a}nd{\'e}n \cite{b18} independently. However, their results still need the isotropic condition. On the other hand, Kyng, Luh, and Song \cite{kls19} got rid of the isotropic condition and proved Conjecture~\ref{conj:matrix_spencer} for general rank-1 matrices.
They actually proved that the rank-1 matrix discrepancy upper bound is at most $4\sqrt{n}$. Formal theorem statements will be presented in Appendix~\ref{sec:disp_prelim}. Similar to the scalar and matrix cases, discrepancy theory can be further generalized to the hyperbolic spectral norm. Br{\"a}nd{\'e}n \cite{b18} proved a hyperbolic Kadison-Singer theorem, which generalizes Theorem~\ref{thm:kadison_singer} to the hyperbolic spectral norm and to vectors of arbitrary rank in isotropic position. Our first result relaxes the isotropic condition to sub-isotropic: \begin{theorem}[Hyperbolic Kadison-Singer with sub-isotropic condition, informal]\label{thm:hyperbolic_ks_sub_intro} Let $k \geq 2$ be an integer and $\epsilon,\sigma>0$. Suppose $h$ is hyperbolic with respect to $e \in \mathbb{R}^m$, and let $x_1, \dots , x_n$ be $n$ vectors in the hyperbolic cone such that \begin{align}\label{eq:subisotropic_intro} \tr[x_i] \leq \epsilon~~\forall i\in [n], ~\text{and}\quad \Big\|\sum_{i=1}^n x_i\Big\|_h \leq \sigma, \end{align} where $\tr[x]:=\sum_{i=1}^d \lambda_i(x)$. Then, there exists a partition $S_1 \cup S_2 \cup \cdots \cup S_k = [n]$ such that for all $j\in [k]$, \begin{align*} \Bigg\| \sum_{i\in S_j} x_i\Bigg\|_h \leq \left(\sqrt{\epsilon}+\sqrt{\sigma/k}\right)^2. \end{align*} \end{theorem} \iffalse \begin{theorem}[Hyperbolic Kadison-Singer, \cite{b18}]\label{thm:hyperbolic_ks} Let $k \geq 2$ be an integer and $\epsilon>0$. Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e \in \mathbb{R}^m$, and let $x_1, \dots , x_n$ be $n$ vectors in the hyperbolic cone $\Lambda_+(h, e)$ (see Definition~\ref{def:hyper_cone}) such that \begin{align}\label{eq:hyper_ks_condition} \tr[x_i] \leq \epsilon, ~~\rank(x_i)\leq r~~\forall i\in [n], ~\text{and}\quad \sum_{i=1}^n x_i = e, \end{align} where $\tr[x]:=\sum_{i=1}^d \lambda_i(x)$ and $\rank(x):=|\{i\in [d]: \lambda_i(x)\ne 0\}|$, where $\lambda_i(x)$ is the hyperbolic eigenvalue of $x$.
Then, there exists a partition $S_1 \cup S_2 \cup \cdots \cup S_k = [n]$ such that $\| \sum_{i\in S_j} x_i\|_h \leq O_{\epsilon} (1 / k)$\footnote{The actual bound is slightly tighter but very complicated. For simplicity, we put a simple bound here, where $O_\epsilon$ hides the dependence on $\epsilon$.}. In particular, for $k=2$ and $r=1$, we have $O_{\epsilon} (1 / k)= \frac{1}{2}+\sqrt{\epsilon(1-\epsilon)}$. \end{theorem} \fi Theorem~\ref{thm:hyperbolic_ks_sub_intro} implies the high-rank case of the result of \cite{mss15} (Theorem~\ref{thm:kadison_singer}) without the isotropic condition. We note that there is a naive approach to relax the isotropic condition in the results of \cite{mss15,b18} by adding several small dummy vectors to make the whole set isotropic (see \cite{ove15} for more details). However, Theorem~\ref{thm:hyperbolic_ks_sub_intro} is significantly better than this approach, since the upper bound depends on the hyperbolic spectral norm of the sum of the vectors, and it is possible that $\sigma \ll 1$. Theorem~\ref{thm:hyperbolic_ks_sub_intro} also implies the following hyperbolic discrepancy result: \begin{corollary}[Hyperbolic discrepancy for sub-isotropic vectors]\label{cor:branden_discrepancy} Let $0<\epsilon\leq \frac{1}{2}$. Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e\in \mathbb{R}^m$, and let $x_1, \dots , x_n\in \Lambda_+(h,e)$ be vectors that satisfy Eq.~\eqref{eq:subisotropic_intro}. Then, there exist signs $r\in \{-1,1\}^n$ such that \begin{align*} \left\|\sum_{i=1}^n r_i x_i\right\|_h\leq 2\sqrt{\epsilon(2\sigma-\epsilon)}.
\end{align*} \end{corollary} We note that this result is incomparable with \cite{kls19} for the following reasons: 1) \cite{kls19} works for arbitrary rank-1 matrices, while our result holds for arbitrary-rank vectors in the hyperbolic cone; 2) the upper bound of \cite{kls19} depends on $\|\sum_{i=1}^n X_i^2\|^{1/2}$, while our result depends on the hyperbolic trace and spectral norm of the sum of the vectors. \iffalse Corollary~\ref{cor:branden_discrepancy} is a non-constructive result in discrepancy theory, like Spencer's six deviations theorem and the Kadison-Singer theorem. We currently don't know any efficient algorithm to find such a good solution. \fi To obtain a hyperbolic discrepancy upper bound for arbitrary vectors (as in the case of Conjecture~\ref{conj:matrix_spencer}), we can apply our hyperbolic Chernoff bound (Theorem~\ref{thm:intro_main}) to get the following discrepancy result, which holds with high probability: \begin{corollary}\label{Cor:hyper_discrep_high_prob} Let $h$ be a degree-$d$ hyperbolic polynomial with respect to $e\in \mathbb{R}^m$. We are given vectors $x_1,x_2,\cdots, x_n \in \mathbb{R}^m$ such that $\|x_i\|_h\leq 1$ and $\rank(x_i)\leq s$ for all $i \in [n]$ and some $s\in \mathbb{N}_+$. Then for uniformly random signs $r\sim \{-1, 1\}^n$, \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq O(\sqrt{n \log (s+1)}) \end{align*} holds with probability at least $0.99$. \end{corollary} This result may not be tight when the ranks of the input vectors are large. It is thus interesting to study whether the $\sqrt{\log d}$ factor can be improved in the non-constructive case.
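For intuition in the determinant special case $h=\det$, where the hyperbolic spectral norm coincides with the matrix spectral norm, the $O(\sqrt{n\log(s+1)})$ behavior of uniformly random signs can be observed numerically. The following sketch is ours; the instance sizes and constants are illustrative and not from the paper:

```python
import numpy as np

# Determinant special case h = det: ||.||_h is the matrix spectral norm.
# Sample symmetric X_1,...,X_n with ||X_i|| = 1, sign them uniformly at
# random, and compare the resulting discrepancy with sqrt(n log d).
rng = np.random.default_rng(0)

def random_signing_norm(n, d):
    total = np.zeros((d, d))
    for _ in range(n):
        G = rng.standard_normal((d, d))
        X = (G + G.T) / 2
        X /= np.linalg.norm(X, 2)      # normalize the spectral norm to 1
        total += rng.choice([-1.0, 1.0]) * X
    return np.linalg.norm(total, 2)

n, d = 200, 20
disc = np.mean([random_signing_norm(n, d) for _ in range(5)])
print(disc, np.sqrt(n * np.log(d)), n)
```

On typical draws the discrepancy is within a small constant of $\sqrt{n}$, far below the trivial bound $n$, matching the matrix-Chernoff scaling.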
We thus conjecture the following hyperbolic discrepancy bound: \begin{conjecture}[Hyperbolic Spencer Conjecture]\label{conj:hyperbolic_spencer} We are given vectors $x_1,x_2,\cdots, x_n \in \mathbb{R}^m$ and a degree-$d$ hyperbolic polynomial $h\in\mathbb{R}[z_1,\dots,z_m]$ with respect to $e\in \mathbb{R}^m$, where $\|x_i\|_h\leq 1$ for all $i \in [n]$. Then, there exist signs $r\in \{-1, 1\}^n$ such that \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq O(\sqrt{n}). \end{align*} \end{conjecture} Note that Conjecture~\ref{conj:hyperbolic_spencer} is more general than the matrix Spencer conjecture (Conjecture~\ref{conj:matrix_spencer}). For constant degree $d$ or constant maximum rank $s$, the conjecture holds by Corollary~\ref{Cor:hyper_discrep_high_prob}. \iffalse Moreover, we can improve Corollary~\ref{Cor:hyper_discrep_high_prob} for hyperbolic rank-1 vectors and get a degree-independent upper bound. Recall that the hyperbolic rank of a vector $x$ is the number of non-zero roots, i.e., $|\{i\in [d]:\lambda_i(x)\ne 0\}|$. \begin{theorem}[Eight deviations suffice, our result]\label{thm:intro_main2} We are given $x_1, x_2, \cdots, x_n \in \mathbb{R}^m$ such that $\rank(x_i)\leq 1$ for all $i\in [n]$. Let $h$ be an $m$-variate, degree-$d$ hyperbolic polynomial with respect to $e$. Let $\sigma = ( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2}$. Then, for uniformly random signs $r\sim \{-1, 1\}^n$, \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq 8\sigma \end{align*} holds with constant probability. \end{theorem} Theorem~\ref{thm:intro_main2} is a generalization of the rank-1 matrix Spencer theorem \cite{kls19}, and it immediately implies the rank-1 case of the hyperbolic Spencer conjecture. We note that Theorem~\ref{thm:intro_main2} can be further generalized to the constant-rank case with a worse constant in the upper bound.
\fi \begin{comment} Following Raghu Meka's post, Eq.~\eqref{eq:hyper_discrep} can be derived from a Chernoff-type concentration bound for hyperbolic polynomials: \begin{conjecture}[Hyperbolic Chernoff bound]\label{conj:hyper_chernoff} Given $x_1,\dots, x_n\in \mathbb{R}^m$ and a degree-$d$ hyperbolic polynomial $h$ with respect to a direction $e\in \mathbb{R}^m$. Let $\sigma = ( \sum_{i=1}^n \|x_i\|_h^2 )^{1/2}$. Then for every $t$, we have \begin{align*} \Pr_{r \sim \{-1, 1\}^n}\left[ \left\| \sum_{i=1}^n r_i x_i \right\|_h \geq t \right]\leq d \cdot \exp \Big(- \Theta(1) \frac{ t^2}{ \sigma^2 } \Big). \end{align*} \end{conjecture} In this paper, we can prove a weaker version of the above conjecture for general $d$. Our result is essentially optimal when $d = O(1)$. \end{comment} \subsection{Related work} \paragraph{Chernoff-type bounds} There is a long line of work generalizing the classical scalar Chernoff-type bounds to matrix Chernoff-type bounds \cite{r99,aw02,rv07, t12,mjc14,glss18,ks18,nrr20,aby20,jll20}. \cite{r99,rv07} showed a Chernoff-type concentration of the spectral norm of matrices that are the outer product of two random vectors. \cite{aw02} first used the Laplace transform and the Golden-Thompson inequality \cite{g65,t65} to prove a Chernoff bound for general random matrices. It was improved by \cite{t12} and \cite{o09} independently. \cite{mjc14} proved a series of matrix concentration results via Stein's method of exchangeable pairs. Our work further extends this line of research from matrices to hyperbolic polynomials and can fully recover the result of \cite{aw02}. On the other hand, \cite{glss18} showed an expander matrix Chernoff bound. \cite{ks18} proved a new matrix Chernoff bound for Strongly Rayleigh distributions. \paragraph{Hyperbolic polynomials} The concept of hyperbolic polynomials was originally studied in the field of partial differential equations \cite{g51, h83, k95}.
G{\"u}ler \cite{g97} first studied hyperbolic optimization (hyperbolic programming), a generalization of LP and SDP. Later, several algorithms \cite{r06,mt14, rs14, ren16,np18, ren19} were designed for hyperbolic programming. On the other hand, much recent research has focused on the equivalence between hyperbolic programming and SDP, which is closely related to the ``Generalized Lax Conjecture'' and its variants \cite{hv07,lpr05, b14,kpv15,sau18,ami19,rrsw19}. In addition to hyperbolic programming, hyperbolic polynomials are a key component in resolving the Kadison-Singer problem \cite{mss15, b18} and in constructing bipartite Ramanujan graphs \cite{mss18}. Gurvits \cite{g06,g07} proved some Van der Waerden/Schrijver-Valiant-like conjectures for hyperbolic polynomials, giving sharp bounds for the capacity of polynomials. \cite{s19} gave an approach to certify the non-negativity of polynomials via hyperbolic programming, generalizing the Sum-of-Squares method. \paragraph{Discrepancy theory} For discrepancy theory, we surveyed part of the literature in Section~\ref{sec:app_discrepancy}; here we provide more related work. For the Kadison-Singer problem, after the breakthrough result \cite{mss15}, Anari and Oveis Gharan \cite{ag14} generalized it to Strongly Rayleigh distributions. Alishahi and Barzegar \cite{ab20} extended the ``paving conjecture'' to real stable polynomials\footnote{A polynomial is real stable if it is hyperbolic with respect to every $e\in \mathbb{R}^n_{>0}$.}. Zhang and Zhang \cite{zz21} further relaxed the determinant polynomial in \cite{ag14} and \cite{kls19} to homogeneous real-stable polynomials. For algorithmic results, Bansal \cite{b10} proposed the first constructive version of partial coloring for discrepancy minimization. Based on this work, more approaches \cite{lm15, r17,lrr17,es18, bdgl18,dntt18} were discovered in recent years.
For applications of discrepancy theory, \cite{ag14,ag15} used the Strongly Rayleigh version of the Kadison-Singer theorem to improve the integrality gap of the Asymmetric Traveling Salesman Problem. \cite{lz20} used the rank-1 matrix Spencer theorem in \cite{kls19} to obtain a two-sided spectral rounding result. For more applications, we refer to the excellent book by Matousek \cite{m09}. \section*{Appendix} \input{prelim} \input{concentration} \input{chernoff_psd} \input{anticoncentration} \input{discrepancy} \section*{Acknowledgements} The authors would like to thank Petter Br{\"a}nd{\'e}n and James Renegar for many useful discussions about the literature on hyperbolic polynomials. The authors would like to thank Yin Tat Lee, James Renegar, and Scott Aaronson for encouraging us to work on this topic. The authors would like to thank Dana Moshkovitz for giving comments on the draft. Ruizhe Zhang was supported by NSF Grant CCF-1648712. \addcontentsline{toc}{section}{References} \bibliographystyle{alpha} \section{Preliminaries} \subsection{Notations} For a vector $x$, we use $\| x \|_0$ to denote the number of nonzero entries, use $\| x \|_1$ to denote its $\ell_1$ norm, and use $\| x \|_p$ to denote its $\ell_p$ norm for $0<p\leq \infty$. We use $r \in \{\pm 1 \}^n$ to denote $n$ i.i.d. random variables where each $r_i$ is $1$ with probability $1/2$ and $-1$ otherwise. The general definition of semi-norm and norm is as follows: \begin{definition}[Semi-norm and norm] Let $\|\cdot \|:V\rightarrow \mathbb{R}$ be a nonnegative function on a vector space $V$. We say $\|\cdot \|$ is a semi-norm if it satisfies the following properties: for all $a\in \mathbb{R}$ and $x,y\in V$, \begin{itemize} \item $\|x+y\|\leq \|x\|+\|y\|$; \item $\|ax\|=|a|\cdot \|x\|$. \end{itemize} If furthermore $\|x\|=0$ implies $x=0$, the zero vector of $V$, then we say $\|\cdot \|$ is a norm.
\end{definition} \begin{definition}[Normed linear space] A normed linear space is a vector space over $\mathbb{R}$ or $\C$, on which a norm is defined. \end{definition} \subsection{Basic definitions of hyperbolic polynomials} We provide the definition of a hyperbolic polynomial. \begin{definition}[Hyperbolic polynomial] A homogeneous polynomial $h:\mathbb{R}^m\rightarrow \mathbb{R}$ is hyperbolic with respect to a vector $e\in \mathbb{R}^m$ if $h(e) \ne 0$, and for all $x \in \mathbb{R}^m$, the univariate polynomial $t \mapsto h(te - x)$ has only real zeros. \end{definition} The following fact shows how to factorize a hyperbolic polynomial, which easily follows from the homogeneity of the polynomial: \begin{fact}[Hyperbolic polynomial factorization]\label{fac:hyper_factor} For a degree-$d$ polynomial $h\in \mathbb{R}[z_1,\dots,z_m]$ hyperbolic with respect to $e\in \mathbb{R}^m$, we have \begin{align*} h(te-x)=h(e)\prod_{i=1}^d (t-\lambda_i(x)) \end{align*} where $\lambda_1(x) \geq \lambda_2(x) \geq \cdots \geq \lambda_d(x)$ are the real roots of $h(te-x)$. \end{fact} All vectors with nonnegative hyperbolic eigenvalues form a cone, as proved by G{\aa}rding \cite{g59}. It is an important object related to the geometry of hyperbolic polynomials. The formal definition is as follows: \begin{definition}[Hyperbolic cone]\label{def:hyper_cone} For a degree-$d$ hyperbolic polynomial $h$ with respect to $e\in \mathbb{R}^m$, its hyperbolic cone is \begin{align*} \Lambda_+(e) := \{x: \lambda_d(x) \geq 0\}. \end{align*} The interior of $\Lambda_+(e)$ is \begin{align*} \Lambda_{++}(e) := \{x: \lambda_d(x) > 0\}. \end{align*} \end{definition} G{\aa}rding \cite{g59} showed the following fundamental properties of the hyperbolic cone: \begin{theorem}[\cite{g59}]\label{thm:hyperbolic_cone_gar} Suppose $h\in \mathbb{R}[z_1,\dots,z_m]$ is hyperbolic with respect to $e\in \mathbb{R}^m$. Then, \begin{enumerate} \item $\Lambda_+(e),\Lambda_{++}(e)$ are convex cones.
\item $\Lambda_{++}(e)$ is the connected component of $\{x\in \mathbb{R}^m: h(x)\ne 0\}$ that contains $e$. \item $\lambda_{\min}:\mathbb{R}^m\rightarrow \mathbb{R}$ is a concave function, and $\lambda_{\max}:\mathbb{R}^m\rightarrow \mathbb{R}$ is convex. \item If $e'\in \Lambda_{++}(e)$, then $h$ is also hyperbolic with respect to $e'$ and $\Lambda_{++}(e')=\Lambda_{++}(e)$. \end{enumerate} \end{theorem} For simplicity, we may use $\Lambda_+$ and $\Lambda_{++}$ to denote $\Lambda_+(e),\Lambda_{++}(e)$ when $e$ is clear from context. In this paper, we always assume that $e$ is any fixed vector in the hyperbolic cone of $h$. We define the trace, rank, and spectral norm with respect to the hyperbolic polynomial $h$. \begin{definition}[Hyperbolic trace, rank, spectral norm] Let $h$ be a degree-$d$ hyperbolic polynomial with respect to $e\in \mathbb{R}^m$. For any $x\in \mathbb{R}^m$, \begin{align*} \tr [ x ] := \sum_{i=1}^d \lambda_i(x), \quad \rank(x) := \#\{i: \lambda_i(x)\ne 0\}, \quad \|x\|_h := \max_{i\in [d]}|\lambda_i(x)|=\max \{\lambda_1(x), -\lambda_d(x)\}. \end{align*} \end{definition} We define the $p$-norm with respect to the hyperbolic polynomial $h$. \begin{definition}[$\| \cdot\|_{h,p}$ norm] For any $p \geq 1$, the hyperbolic $p$-norm $\|\cdot \|_{h,p}$ is defined as: \begin{align*} \|x\|_{h,p} := \|\lambda(x)\|_{p} = \Big( \sum_{i=1}^d |\lambda_i(x)|^p \Big)^{1/p} \quad \forall x\in \mathbb{R}^m. \end{align*} \end{definition} It has been shown that $\|\cdot\|_h$ and $\|\cdot \|_{h,p}$ are indeed norms when the hyperbolic cone is regular: \begin{theorem}[\cite{g59, b18, ren19}]\label{thm:hyperbolic_norm} $\|\cdot\|_h$ is a semi-norm. Furthermore, if $\Lambda_+$ is regular, i.e., $(\Lambda_+ \cap -\Lambda_+)=\{0\}$, then $\|\cdot\|_h$ is a norm on $\mathbb{R}^m$. \end{theorem} \begin{theorem}[\cite{bgls01}] For any $p\geq 1$, $\|\cdot\|_{h,p}$ is a semi-norm. Moreover, if the hyperbolic cone $\Lambda_+$ is regular, then $\|\cdot\|_{h,p}$ is a norm.
\end{theorem} \subsection{Basic properties of hyperbolic polynomials} We state a fact for the eigenvalues $\lambda(\cdot)$ of a degree-$d$ hyperbolic polynomial $h$. \begin{fact}[\cite{bgls01}]\label{fac:eigen_linear_trans} For all $i\in [d]$, \begin{align*} \lambda_i( s \cdot x + t \cdot e )=\begin{cases} s \cdot \lambda_i(x)+t, & \mathrm{~if~}s\ge 0;\\ s \cdot \lambda_{d+1-i}(x)+t, & \mathrm{~if~}s<0. \end{cases} \end{align*} \end{fact} \begin{comment} \begin{fact}[\cite{g59}] $\lambda_{\min}(x)$ is concave, i.e., for any $\alpha\in [0,1]$, any $x,y\in \mathbb{R}^m$, \begin{align*} \lambda_{\min}(\alpha \cdot x + (1-\alpha) \cdot y)\geq \alpha \cdot \lambda_{\min}(x) + (1-\alpha) \cdot \lambda_{\min}(y). \end{align*} \end{fact} \begin{fact}[\cite{ren19}] \begin{align*} |\lambda_{\min}(x)-\lambda_{\min}(y)|\leq \|x-y\|_h~~~\forall x,y\in \mathbb{R}^n. \end{align*} \end{fact} \begin{theorem}[\cite{g59}]\label{thm:gar_convex} $\lambda_1(x)$ is a sublinear function, that is, positively homogeneous and convex. \end{theorem} \end{comment} Then, we show that the elementary symmetric sum-products of eigenvalues can be computed from the directional derivatives of the polynomial. \begin{observation}[\cite{bgls01}]\label{obs:symmetric} For a degree-$d$ hyperbolic polynomial $h$ with respect to $e$, we have \begin{align*} h(te+x) = h(e) \cdot \prod_{i=1}^d (t+\lambda_i(x)) = h(e) \cdot \sum_{i=0}^d s_i(\lambda (x)) \cdot t^{d-i}, \end{align*} where $\lambda(x)=(\lambda_1(x), \cdots, \lambda_d(x))$ are the hyperbolic eigenvalues of $x$ and $s_i : \mathbb{R}^d \rightarrow \mathbb{R}$ is the $i$-th elementary symmetric polynomial: \begin{align*} s_i(y) := \begin{cases} \sum_{S\in \binom{[d]}{i}} \prod_{j\in S} y_{j}, & \forall i \in [d];\\ 1 & \text{if}~ i = 0. \end{cases} \end{align*} Furthermore, for each $i \in \{0,1,\cdots, d\}$, \begin{align*} h(e) \cdot s_i(\lambda(x))=\frac{1}{(d-i)!} \cdot \nabla^{d-i} h(x) \underbrace{ [e,e,\dots, e] }_{(d-i) \mathrm{~terms}}.
\end{align*} If $i \in [ d ]$, then $s_i \circ \lambda$ is hyperbolic with respect to $e$ of degree $i$. \end{observation} \begin{corollary} $\tr[x]$ is a linear function. \end{corollary} \begin{proof} By Observation~\ref{obs:symmetric}, we have \begin{align*} \tr[x]=s_1(\lambda(x)) = \frac{1}{h(e)\cdot (d-1)!}\cdot \nabla^{d-1} h(x) [e,e,\dots, e]. \end{align*} Since $h$ is of degree $d$, $\nabla^{d-1}h$ is a degree-1 polynomial. Hence, $\tr[x]$ is a linear function. \end{proof} \subsection{Concentration inequalities} \begin{comment} \begin{theorem}[Theorem 2.1 in \cite{rv13}]\label{thm:Hanson_wright} Let $A$ be a fixed $m \times n$ matrix. Let $x = (X_1, \dots , X_n) \in \mathbb{R}^n$ be a random vector with independent components $X_i$ which satisfy $\E [ X_i ] = 0$, $\E[ X_i^2 ]=1$ and $\| X_i \|_{\psi_2} \leq K$, $\forall i \in [n]$. Then, for every $t \geq 0$, \begin{align*} \Pr \Big[ |\|Ax\|_2-\|A\|_F|>t \Big]\leq 2\exp\left( -\frac{ct^2}{K^4\|A\|^2} \right), \end{align*} where \begin{align*} \|X\|_{\psi_2}:=\sup_{p\geq 1} p^{-1/2} \cdot ( \E[|X|^{p}] ) ^{1/p}. \end{align*} \end{theorem} \end{comment} In general, for any normed linear space, as mentioned in \cite{lt13}, we have the following concentration result: \begin{theorem}[Theorem 4.7 in \cite{lt13}]\label{thm:banach_concentration} Let $x_1,\dots,x_n\in \mathcal{B}$ be a fixed finite sequence in a normed linear space $\mathcal{B}$. Let $X=\sum_{i=1}^n r_i x_i$, where $r_1,\dots,r_n$ are independent Rademacher random variables. Then, for every $t>0$, \begin{align*} \Pr_{ r \sim \{ \pm 1 \}^n }[\|X\|_{\mathcal{B}}>t]\leq 2\exp(-t^2/(32\E[\|X\|_{\mathcal{B}}^2])). \end{align*} \end{theorem} For matrices with the Schatten-$p$ norm, the expectation of the Schatten-$2p$ norm of a Rademacher sum can be upper-bounded as follows. \begin{theorem}[Theorem 3.1 in \cite{tj74}]\label{thm:matrix_p_norm} Let $p\geq 1$. For a matrix $A$, we use $\| A \|_p$ to denote the Schatten-$p$ norm.
For any fixed $X_1, X_2, \dots, X_n \in \mathbb{R}^{d \times d}$, and for independent Rademacher random variables $r_1, r_2, \dots, r_n$, we have \begin{align*} \left( \E_{r \sim \{ \pm 1 \}^n } \left[ \left\|\sum_{i=1}^n r_i X_i \right\|_{2p}^{2p} \right] \right)^{1/(2p)}\leq \sqrt{2p-1} \cdot \left( \sum_{i=1}^n \| X_i \|_{2p}^2 \right)^{1/2}. \end{align*} \end{theorem} \subsection{Khinchin-Kahane inequality} In any normed linear space, for any $p,q\geq 1$, the $p$-th moment and $q$-th moment of the norm of a Rademacher sum are equivalent up to a constant, as shown in \cite{kah64}, which generalized the Khinchin inequality \cite{khi23}. \begin{theorem}[\cite{kah64}; also in \cite{lo94,lt13, kr16}]\label{thm:khinchin_kahane} For all $p, q \in [1, \infty)$, there exists a universal constant $C_{p,q} > 0$ depending only on $p, q$, such that for all choices of a normed linear space $\mathcal{B}$, finite sets of vectors $x_1, x_2, \cdots , x_n \in \mathcal{B}$, and independent Rademacher variables $r_1, r_2, \cdots , r_n$, \begin{align*} \left( \E_{ r \sim \{\pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|^q\right] \right)^{1/q}\leq C_{p,q}\cdot \left( \E_{ r \sim \{ \pm 1\}^n }\left[\left\|\sum_{i=1}^n r_i x_i\right\|^p\right] \right)^{1/p}. \end{align*} If moreover $1=p\leq q\leq 2$, then $C_{1,q}=2^{1-1/q}$ is optimal. If $q\in [1,\infty)$, then $C_{1,q}\leq \sqrt{q}$. \end{theorem} \subsection{Matrix analysis tools} We state a lemma on the singular values of a product of matrices. \begin{lemma}[General Horn inequality, Lemma 1.2 in \cite{tj74}]\label{lem:horn} Let $A_1, \cdots, A_n \in \mathbb{R}^{d \times d}$ be symmetric matrices. Let $\sigma_1(A),\dots, \sigma_d(A)$ denote the singular values of $A$. Then, for each $k \in [d]$, \begin{align*} \sum_{j=1}^k \sigma_j\left(\prod_{i=1}^n A_i\right) \leq \sum_{j=1}^k \prod_{i=1}^n \sigma_j(A_i). \end{align*} \end{lemma} We state a lemma implied by H\"{o}lder's inequality.
\begin{lemma}[Lyapunov's inequality]\label{lem:lyapunov} Let $0<r<s<\infty$ and $X$ be a random variable. Then, \begin{align*} \E\left[|X|^r\right]\leq \left(\E\left[|X|^s\right]\right)^{r/s}. \end{align*} \end{lemma} \subsection{Helton-Vinnikov Theorem} We state a corollary of the Helton-Vinnikov Theorem (Theorem~\ref{thm:hv07}), proved by Gurvits \cite{gur04}: \begin{corollary}[Proposition 1.2 in \cite{gur04}]\label{cor:hv_2_vars} Let $h$ be an $m$-variate degree-$d$ hyperbolic polynomial. Then, for $x,y\in \mathbb{R}^m$, there exist two symmetric real matrices $A,B\in \mathbb{R}^{d\times d}$ such that for any $a,b\in \mathbb{R}$, the ordered eigenvalues satisfy $\lambda(ax+by)=\lambda(aA+bB)$. \end{corollary} This corollary relates the hyperbolic eigenvalues of a vector $ax+by$ to the eigenvalues of the matrix $aA+bB$, which allows us to study some properties of hyperbolic eigenvalues using results in matrix theory. \subsection{Technique overview} In this section, we provide a proof overview of our results. We first show how to prove hyperbolic Chernoff bounds by upper-bounding each polynomial moment. After that, we show how to apply our new concentration inequality to prove hyperbolic anti-concentration. Finally, we show how to relax the isotropic condition in \cite{b18}, and also how to get a more general discrepancy result via hyperbolic concentration. \iffalse \subsubsection{General approach for proving Chernoff-type concentration}\label{sec:general_frame} A common way to prove the Chernoff bound is via the moment generating function and Markov's inequality. Take the scalar version of the Chernoff bound as an example. For i.i.d. random variables $x_1,\dots,x_n$ with $\E[x_i]=0$, consider the moment generating function $\E[e^{aX}]$ where $X:=\sum_{i=1}^n x_i$ and $a$ is a parameter. Then, by Markov's inequality, \begin{align}\label{eq:markov} \Pr[X>t] \leq \inf_{a>0}~e^{-at}\cdot \E[e^{aX}].
\end{align} For scalar random variables, the moment generating function (mgf) satisfies $\E\left[e^{aX}\right]=\prod_{i=1}^n \E\left[e^{ax_i}\right]$. For general random variables that take values in a normed linear space, it becomes more complicated to prove concentration. Consider a special case that we studied in this paper---Rademacher sum, i.e., $X:=\sum_{i=1}^n r_i x_i$ where $x_1,\dots,x_n\in \mathbb{R}^m$ are any fixed $m$ vectors and $r_1,\dots,r_n\in \{-1,1\}$ are independent Rademacher random variables, that is, $r_i=\pm 1$ with probability $1/2$. And we want to upper bound $\Pr_{r\sim \{\pm 1\}^n}[\|X\|_\mathcal{X}>\epsilon]$, where $\|\cdot\|_\mathcal{X}$ is some norm on $\mathbb{R}^m$. However, the mgf cannot be separated as a product of expectations and some new technique is required to upper bound $\E[e^{a\|X\|_\mathcal{X}}]$. One way is to consider the Taylor expansion of the mgf: $\E\left[e^{a\|X\|_\mathcal{X}}\right] = \sum_{p=0}^\infty \frac{a^p}{p!}\cdot \E\left[\|X\|_\mathcal{X}^p\right]$. For Rademacher sum $X$, by Khinchin-Kahane inequality (Theorem~\ref{thm:khinchin_kahane}), the $p$-th moment of $\|X\|_\mathcal{X}$ can be upper-bounded by the 1-st moment, $\E[\|X\|_\mathcal{X}]$, up to a constant factor that only depends on $p$. Therefore, if we can upper bound the expectation $\E[\|X\|_\mathcal{X}]$, then we will get an upper bound for $\Pr_{r\sim \{\pm 1\}^n}[\|X\|_\mathcal{X}>t]$. Ledoux and Talagrand \cite{lt13} used this approach to derive a general formula (Theorem~\ref{thm:banach_concentration}) for concentration of Rademacher sums in a normed linear space (or Banach space for infinite Rademacher sum): \begin{align}\label{eq:intro_concentration} \Pr_{ r \sim \{ \pm 1 \}^n }[ \| X \|_\mathcal{X} > t ] \leq 2\exp \Big(-t^2 \Big/ \Big( 32\E_{ r \sim \{\pm 1\}^n }[\|X\|_\mathcal{X}^2] \Big) \Big). 
\end{align} \fi \subsubsection{Our technique for hyperbolic Chernoff bound for Rademacher sum} The main idea of our proof of the hyperbolic Chernoff bound is to upper-bound the polynomial moments. By definition, the hyperbolic spectral norm of $X$ is the $\ell_\infty$ norm of the eigenvalues $\lambda(X)$. Inspired by the proof of the matrix Chernoff bound by Tropp \cite{tro18}, we can consider the $\ell_{2q}$ norm of $\lambda(X)$, for $q\geq 1$. When the hyperbolic polynomial $h$ is the determinant polynomial, this norm is just the Schatten-$2q$ norm of matrices. For general hyperbolic polynomials, we define the hyperbolic-$2q$ norm as $\|x\|_{h,2q}:=\|\lambda(x)\|_{2q}$. By the result of \cite{bgls01}, the hyperbolic-$2q$ norm is indeed a (semi-)norm on $\mathbb{R}^m$. The following inequality (by Fact~\ref{fac:norm_h_bound_by_norm_h_q} and Lemma~\ref{lem:lyapunov}) shows the connection between the hyperbolic spectral norm and the hyperbolic-$2q$ norm: \begin{align*} \E_{ r \sim \{\pm 1\}^n }[\|X\|_h] \leq \Big(\E_{ r \sim \{\pm 1\}^n }\big[\|X\|_{h,2q}^{2q}\big]\Big)^{1/(2q)}. \end{align*} In order to compute $\|X\|_{h,2q}^{2q}=\sum_{i=1}^{\rank(X)} \lambda_i(X)^{2q}$, we use a deep result about hyperbolic polynomials: the Helton-Vinnikov Theorem \cite{hv07}, which proved a famous conjecture by Lax \cite{lax57}, to translate between hyperbolic polynomials and matrices. The theorem is stated as follows. \begin{theorem}[\cite{hv07}]\label{thm:hv07} Let $f \in \mathbb{R}[x, y, z]$ be hyperbolic with respect to $e = (e_1, e_2, e_3)\in \mathbb{R}^3$. Then there exist symmetric real matrices $A,B,C\in \mathbb{R}^{d\times d}$ such that $f = \det(xA + yB + zC)$ and $e_1A + e_2B + e_3C \succ 0$.
\end{theorem} Gurvits \cite{gur04} proved a corollary (Corollary~\ref{cor:hv_2_vars}) that for any $m$-variate hyperbolic polynomial $h$, and $x,y\in \mathbb{R}^m$, there exist two symmetric matrices $A,B\in \mathbb{R}^{d\times d}$ such that for any $a,b\in \mathbb{R}$, $\lambda(ax+by)=\lambda(aA+bB)$, where the left-hand side means the hyperbolic eigenvalues of the vector $ax+by$ and the right-hand side means the eigenvalues of the matrix $aA+bB$. Therefore, we try to separate and consider one random variable $r_i$ at a time. We first consider the expectation over $r_1$. By conditional expectation, let $X_2:=\sum_{i=2}^n r_i x_i$ and we have \begin{align*} \E_{ r \sim \{\pm 1\}^n }\big[\|X\|_{h,2q}^{2q}\big] = \E_{r_2,\dots,r_n\sim \{\pm 1\}}\left[\E_{r_1\sim \{\pm 1\}}\left[\|r_1x_1 + X_2\|_{h,2q}^{2q}\right]\right]. \end{align*} By Corollary~\ref{cor:hv_2_vars}, there exist two matrices $A_1,B_1$ such that $\lambda(r_1x_1 + X_2)=\lambda(r_1 A_1 + B_1)$ holds for any $r_1$. It follows that \begin{align*} \E_{r_1\sim \{\pm 1\}}\left[\|r_1x_1 + X_2\|_{h,2q}^{2q}\right] = \E_{r_1\sim \{\pm 1\}}\left[\|r_1 A_1 + B_1\|_{2q}^{2q}\right]. \end{align*} It becomes much easier to compute the expected Schatten-$2q$ norm of matrices. We can prove that \begin{align*} \E_{ r \sim \{\pm 1\}^n }\big[\|X\|_{h,2q}^{2q}\big] \leq \sum_{k_1=0}^q {\binom{2q}{2k_1}\|x_1\|_h^{2k_1}\cdot \E_{r_2,\dots,r_n}\left[\|X_2 \|_{h,2q-2k_1}^{2q-2k_1} \right]}. \end{align*} Now, we can iterate this process for the remaining expectation $\E_{r_2,\dots,r_n}\left[\|X_2 \|_{h,2q-2k_1}^{2q-2k_1} \right]$. After $n-1$ iterations, we get that \begin{align}\label{eq:intro_final_exp} \left(\E_{r \sim \{\pm 1\}^n}\left[\left\|X\right\|_{h,2q}^{2q}\right]\right)^{1/(2q)} \leq \sqrt{2q-1}\cdot s^{1/(2q)}\cdot \sigma, \end{align} where $\sigma^2 =\sum_{i=1}^n \|x_i\|_h^2$ and $s$ is the maximum rank of $x_1,\dots,x_n$.
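In the determinant case, the Schatten-moment bound behind Eq.~\eqref{eq:intro_final_exp} can be checked exactly for small $n$, since the expectation over signs is a finite average over $2^n$ patterns. This is a toy verification of ours; the instance sizes are arbitrary:

```python
import itertools
import numpy as np

# Exact check of the cited Schatten-2p moment bound (Theorem 3.1 of
# Tomczak-Jaegermann) in the determinant case, for small n so the
# expectation over all 2^n sign patterns can be enumerated exactly.
rng = np.random.default_rng(2)
n, d, p = 6, 4, 2          # Schatten exponent 2p = 4
Xs = []
for _ in range(n):
    G = rng.standard_normal((d, d))
    Xs.append((G + G.T) / 2)

def schatten(A, q):
    # Schatten-q norm of a symmetric matrix via its eigenvalues
    return np.sum(np.abs(np.linalg.eigvalsh(A)) ** q) ** (1.0 / q)

# LHS: ( E_r || sum_i r_i X_i ||_{2p}^{2p} )^{1/(2p)}, exact expectation
moments = [schatten(sum(r * X for r, X in zip(rs, Xs)), 2 * p) ** (2 * p)
           for rs in itertools.product([-1.0, 1.0], repeat=n)]
lhs = np.mean(moments) ** (1.0 / (2 * p))
# RHS: sqrt(2p-1) * ( sum_i ||X_i||_{2p}^2 )^{1/2}
rhs = np.sqrt(2 * p - 1) * np.sqrt(sum(schatten(X, 2 * p) ** 2 for X in Xs))
print(lhs, rhs)
```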
Then, by taking $q:=\log(d)$ and using $\|X\|_h \leq \|X\|_{h,2q}$, we get the desired upper bound for the expectation $\E_{r\sim \{\pm 1\}^n}[\|\sum_{i=1}^n r_i x_i\|_h]$ in Theorem~\ref{thm:intro_main}. To obtain the concentration probability inequality, we can apply the result of Ledoux and Talagrand \cite{lt13} for the concentration of Rademacher sums in a normed linear space, which will imply \begin{align}\label{eq:intro_hyper_concentrate} \Pr_{ r \sim \{ \pm 1 \}^n }[ \| X \|_h > t ] \leq 2\exp \Big(-t^2 \Big/ \Big( 32\E_{ r \sim \{\pm 1\}^n }[\|X\|_h^2] \Big) \Big). \end{align} However, we need to verify that the hyperbolic spectral norm $\|\cdot\|_h$ is indeed a norm, which follows from the result of G{\aa}rding \cite{g59}. By the Khinchin-Kahane inequality (Theorem~\ref{thm:khinchin_kahane}), the second moment of $\|X\|_h$ can be upper-bounded via the first moment. Hence, we can plug our expectation upper bound into Eq.~\eqref{eq:intro_hyper_concentrate} and obtain \begin{align*} \Pr_{ r \sim \{\pm 1\}^n } \left[ \left\| X \right\|_h > t \right]\leq C_1\exp\left(-\frac{C_2t^2}{\sigma^2 \log (s+1)}\right), \end{align*} for constants $C_1,C_2>0$, and hence Theorem~\ref{thm:intro_main} is proved. We defer the formal proof to Section~\ref{sec:hyper_proof1}. \subsubsection{Our technique for hyperbolic Chernoff bound for positive vectors} We can use techniques similar to those in the previous section to prove Theorem~\ref{thm:hyper_chernoff_positive_intro}. For any random vectors $\mathsf{x}_1,\dots,\mathsf{x}_n\in \Lambda_+$, we may assume $\|\mathsf{x}_i\|_h\leq 1$. Using the Taylor expansion of the mgf, we can show that: \begin{align}\label{eq:pos_mgf} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq \inf_{\theta >0}~e^{-\theta t}\cdot\sum_{q\geq 0}\frac{\theta^q}{q!}\E\left[\left\|\sum_{i=1}^n \mathsf{x}_i\right\|_{h,q}^q\right].
\end{align} Then, for the $q$-th moment, we separate $\mathsf{x}_1$ and $\sum_{i=2}^n \mathsf{x}_i$ and have \begin{align*} \mathbb{E}_{\geq 1}\left[\left\|\sum_{i=1}^n \mathsf{x}_i\right\|_{h,q}^q\right] = \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\tr\left[(A(\mathsf{x}_1)+B)^q\right]\right], \end{align*} where $A(\mathsf{x}_1)$ and $B$ are two PSD matrices obtained via Gurvits's result (Corollary~\ref{cor:hv_2_vars}). The next step is different from the Rademacher-sum case, since we cannot drop half of the terms using the distribution of $\mathsf{x}_1$. Instead, we can fully expand the matrix products in the trace and use Horn's inequality to upper-bound the eigenvalue products. We have \begin{align*} \mathbb{E}_{\geq 2} \mathbb{E}_1\left[\tr\left[(A(\mathsf{x}_1)+B)^q\right]\right]\leq \mathbb{E}_{1} \left[\sum_{k_1=0}^q\binom{q}{k_1}\lambda_{\max}(\mathsf{x}_1)^{k_1}\cdot \mathbb{E}_{\geq 2} \left[\left\|\sum_{i=2}^n \mathsf{x}_i\right\|_{h,q-k_1}^{q - k_1}\right]\right]. \end{align*} By repeating this process, we finally have \begin{align*} \mathbb{E}\left[\left\|\sum_{i=1}^n \mathsf{x}_i\right\|_{h,q}^q\right]\leq d\cdot \E\left[\left(\sum_{i=1}^n \|\mathsf{x}_i\|_h\right)^q\right]. \end{align*} Then, we put the above upper bound into Eq.~\eqref{eq:pos_mgf}, which gives: \begin{align*} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq \inf_{\theta >0}~e^{-\theta t}\cdot d\cdot \prod_{i=1}^n \E\left[e^{\theta\|\mathsf{x}_i\|_h}\right]. \end{align*} Now, we use calculations similar to the matrix case \cite{t12} to prove that \begin{align*} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq t\right] \leq \inf_{\theta>0}~d\cdot \exp\left(-\theta t + (e^\theta - 1)\mu_{\max}\right).
\end{align*} By taking $\theta := \log(t/\mu_{\max})$ and $t:=(1+\delta)\mu_{\max}$, we get that \begin{align}\label{eq:tech_max_eig} \Pr\left[\lambda_{\max}\left(\sum_{i=1}^n \mathsf{x}_i\right)\geq (1+\delta)\mu_{\max}\right] \leq&~ d\cdot \left(\frac{(1+\delta)^{1+\delta}}{e^{\delta}}\right)^{-\mu_{\max}}. \end{align} For the minimum eigenvalue case, we can define $\mathsf{x}'_i:=e-\mathsf{x}_i$ for $i\in [n]$. Then, by the property of hyperbolic eigenvalues (Fact~\ref{fac:eigen_linear_trans}), we know that $\mathsf{x}'_i$ are also in the hyperbolic cone and $\lambda_{\max}(\mathsf{x}'_i)=1-\lambda_{\min}(\mathsf{x}_i)$. Therefore, we can obtain the Chernoff bound for the minimum eigenvalue of $\mathsf{x}$ by applying Eq.~\eqref{eq:tech_max_eig} with $\mathsf{x}_i'$. We defer the formal proof to Section~\ref{sec:hyper_proof2}. \subsubsection{Our technique for hyperbolic anti-concentration} In this part, we will show how to prove the hyperbolic anti-concentration result (Theorem~\ref{thm:anti-concen_intro}) via the hyperbolic Chernoff bound for vectors in the hyperbolic cone (Theorem~\ref{thm:hyper_chernoff_positive_intro}). In \cite{ost19}, the authors studied unate functions on the hypercube $\{-1, 1\}^n$, i.e., functions that are increasing or decreasing with respect to each coordinate. Then, they showed that the Rademacher measure of a unate function is determined by the expansion of its indicator set in the hypercube. In particular, for the maximum hyperbolic eigenvalue, it is easy to see that the indicator function $\left[\lambda_{\max}\left(\sum_{i=1}^n \epsilon_i x_i^j-y_j\right)\in [-\Delta, \Delta]\right]$ is unate when $x_i\in \Lambda_+$. Hence, we can show the anti-concentration inequality by studying the expansion in the hypercube, which, by \cite{ay21}, is equivalent to lower-bounding the minimum eigenvalue of each vector.
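Returning briefly to the Chernoff bound in Eq.~\eqref{eq:tech_max_eig}, it can be sanity-checked numerically in the simplest special case $h(x)=\prod_{j=1}^d x_j$, where the hyperbolic eigenvalues of a vector are just its coordinates, $\Lambda_+$ is the nonnegative orthant, and $\lambda_{\max}(\sum_i \mathsf{x}_i)$ is the maximum coordinate of the sum. The following Python sketch is our own illustration (not part of the formal proof); the distribution of the $\mathsf{x}_i$ and the parameters $n$, $d$, $\delta$ are choices made only for this check.

```python
import numpy as np

# Special case h(x) = x_1 * ... * x_d: the hyperbolic eigenvalues of x are its
# coordinates, so lambda_max(sum_i x_i) = max_j sum_i x_i[j], and ||x_i||_h <= 1
# holds for vectors with coordinates in [0, 1].
rng = np.random.default_rng(0)
n, d, trials = 50, 3, 20000

# mu_max = lambda_max(sum_i E[x_i]) = n * 1/2 for i.i.d. Uniform[0,1] coordinates
mu_max = n * 0.5
delta = 0.5
t = (1 + delta) * mu_max

# Chernoff-style bound from Eq. (tech_max_eig): d * ((1+delta)^(1+delta)/e^delta)^(-mu_max)
bound = d * ((1 + delta) ** (1 + delta) / np.exp(delta)) ** (-mu_max)

# Empirical upper tail of the maximum "hyperbolic eigenvalue" of the sum
samples = rng.random((trials, n, d))       # trials x n vectors in [0,1]^d
lam_max = samples.sum(axis=1).max(axis=1)  # max coordinate of each sum
empirical = (lam_max >= t).mean()

print(f"empirical tail = {empirical:.4f}, Chernoff bound = {bound:.4f}")
assert empirical <= bound
```

As expected, the empirical tail probability sits far below the bound in this diagonal case.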
However, for the initial input $x_i$, we only assume that $\sum_{i=1}^n\lambda_{\min}(x_i)^2\geq 1$, but we need an $\Omega(\frac{1}{\sqrt{\log d}})$ lower bound for each $x_i$ to prove the theorem. To amplify the minimum eigenvalue, we follow the proof in \cite{ay21} that uses a random hash function to randomly assign the input vectors into some buckets and considers the sum of the vectors in each bucket as the new input. They proved that the ``bucketing'' will not change the distribution. Then, we can use Theorem~\ref{thm:hyper_chernoff_positive_intro} to lower bound the minimum hyperbolic eigenvalue of each bucket, which is a sum of independent random vectors in the hyperbolic cone. Hence, we get that \begin{align*} \Pr\left[\lambda_{\min}\left(\sum_{i=1}^n z_{i,j}x_i\right) \leq \Omega(\frac{1}{\sqrt{\log d}}) \right]\leq \frac{1}{10}, \end{align*} where $z_{i,j}\in \{0, 1\}$ is a random variable indicating whether $x_i$ is hashed to the $j$-th bucket. Then, by the standard Chernoff bound for negatively associated random variables, we can prove that most of the buckets have large minimum eigenvalues, which concludes the proof of the hyperbolic anti-concentration theorem. We defer the formal proof to Section~\ref{sec:anticon}. \subsubsection{Our technique for hyperbolic discrepancy} To relax the isotropic condition in \cite{b18}, we basically follow their proof. The high-level idea is to construct a compatible family of polynomials\footnote{The compatible family of polynomials is closely related to the interlacing family in \cite{mss15,mss18}.
See Definition~\ref{def:compatible}.} such that the probability in the hyperbolic Kadison-Singer problem (Theorem~\ref{thm:hyperbolic_ks_sub_intro}) can be upper-bounded by the largest root of the expected polynomial of the family, which can be further upper-bounded by the largest root of the mixed hyperbolic polynomial $h[v_1,\dots,v_n]\in \mathbb{R}[x_1,\dots,x_m,y_1,\dots,y_n]$, defined as $h[v_1,\dots,v_n]:=\prod_{i=1}^n (1-y_i D_{v_i}) h(x)$, where $D_{v_i}$ is the directional derivative with respect to $v_i$. In particular, we can consider the roots of the linear restriction $h[v_1,\dots,v_n](te+\mathbf{1})\in \mathbb{R}[t]$. Then, using G{\aa}rding's result \cite{g59} on the hyperbolic cone, we know that the largest root equals the minimum $\rho>0$ such that the vector $\rho e + \mathbf{1}$ is in the hyperbolic cone $\Gamma_+$ of $h[v_1,\dots,v_n]$, which can be upper-bounded via techniques similar to those in \cite{mss15,kls19}: we iteratively add each vector $v_i$ while keeping the sum in the hyperbolic cone. Our key observation is that the proof in \cite{b18} essentially shows that \begin{align*} \frac{\epsilon \mu e + \left(1-\frac{1}{n}\right)\delta \sum_{i=1}^n v_i }{1+\frac{\mu-1}{n}}+\mathbf{1}\in \Gamma_+ \end{align*} holds for any vectors $v_i\in \Lambda_+$. Hence, once we assume that $\|\sum_{i=1}^n v_i\|_h\leq \sigma$, then by the convexity of the hyperbolic cone, we get that $\rho\leq \frac{\left(\epsilon \mu + \left(1-\frac{1}{n}\right)\delta\sigma\right) }{1+\frac{\mu-1}{n}}$, which will imply the upper bound in Theorem~\ref{thm:hyperbolic_ks_sub_intro}. We defer the formal proof to Section~\ref{sec:hy_ks}. To obtain the discrepancy result for arbitrary vectors (Corollary~\ref{Cor:hyper_discrep_high_prob}), we can use the hyperbolic Chernoff bound for Rademacher sums (Theorem~\ref{thm:intro_main}) to derive the discrepancy upper bound.
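The discrepancy behavior promised by Theorem~\ref{thm:intro_main} can also be observed numerically in the diagonal specialization $h(x)=\prod_{j=1}^d x_j$, where $\|\cdot\|_h$ is the $\ell_\infty$ norm and $\sigma^2=\sum_i\|x_i\|_h^2$. The sketch below is our own illustration under this specialization (the inputs, the constant $4$, and all parameters are choices of the sketch, not quantities from the theorem):

```python
import numpy as np

# Diagonal specialization h(x) = x_1 * ... * x_d: ||x||_h = max_j |x_j|,
# so ||sum_i r_i x_i||_h is the l_infinity norm of a signed sum, and
# sigma^2 = sum_i ||x_i||_h^2.
rng = np.random.default_rng(1)
n, d, trials = 200, 64, 2000

x = rng.random((n, d))                     # fixed input vectors in [0,1]^d
sigma = np.sqrt((x.max(axis=1) ** 2).sum())

r = rng.choice([-1.0, 1.0], size=(trials, n))   # uniformly random signs
norms = np.abs(r @ x).max(axis=1)               # ||sum_i r_i x_i||_h per trial

threshold = 4.0 * sigma * np.sqrt(np.log(d + 1))
frac_exceeding = (norms > threshold).mean()
print(f"sigma = {sigma:.2f}, fraction exceeding 4*sigma*sqrt(log d) = {frac_exceeding:.4f}")
assert frac_exceeding <= 0.01
```

In this toy case every vector has rank at most $d$, so the $O(\sigma\sqrt{\log s})$ bound specializes to $O(\sigma\sqrt{\log d})$, and essentially all random sign patterns fall below the threshold.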
For any vectors $x_1,\dots,x_n$ with maximum rank $s$, by setting $t=O(\sigma\sqrt{\log s})$ in Theorem~\ref{thm:intro_main}, we get that $\left\| \sum_{i=1}^n r_i x_i \right\|_h \leq O(\sigma \sqrt{\log s})$ holds with high probability for uniformly random signs $r\sim \{\pm 1\}^n$. \iffalse We can do better for rank-1 vectors. By examining the proof of Theorem~\ref{thm:intro_main}, we find that the factor $d$ in Eq.~\eqref{eq:intro_final_exp} comes from the last iteration: after we separate $r_{n-1}$ from $r_n$, there remains the following term \begin{align*} \E_{r_n\sim \{\pm 1\}}\left[ \|r_n x_n\|_{h,2k_n}^{2k_n} \right]. \end{align*} We can upper bound it by \begin{align*} \E_{r_n\sim \{\pm 1\}}\left[ \|r_n x_n\|_{h,2k_n}^{2k_n} \right] \leq \rank(x_n)\cdot \|x_n\|_h^{2k_n} \leq d\cdot \|x_n\|_h^{2k_n}, \end{align*} which then implies Eq.~\eqref{eq:intro_final_exp}. For rank-1 vectors $x_1,\dots,x_n$, the above upper bound can be improved to \begin{align*} \E_{r_n\sim \{\pm 1\}}\left[ \|r_n x_n\|_{h,2k_n}^{2k_n} \right] \leq \rank(x_n)\cdot \|x_n\|_h^{2k_n} \leq \|x_n\|_h^{2k_n}. \end{align*} It follows that \begin{align*} \left(\E_{r \sim \{\pm 1\}^n}\left[\left\|X\right\|_{h,2q}^{2q}\right]\right)^{1/(2q)} \leq \sqrt{2q-1}\cdot \sigma, \end{align*} which then implies Theorem~\ref{thm:intro_main2} that with constant probability, \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h \leq 8\sigma. \end{align*} \begin{remark} We note that this technique cannot derive the classical Spencer theorem, which corresponds to the case when $h = \prod_{i=1}^n x_i$ and $x\in \{-1, 1\}^n$. Indeed, we have \begin{align*} \left\| \sum_{i=1}^n r_i x_i \right\|_h = \left\| \sum_{i=1}^n r_i x_i \right\|_\infty, \quad \text{and~~}\sigma = \left(\sum_{i=1}^n \|x_i\|_h^2\right)^{1/2}=\sqrt{n}. \end{align*} However, for each vector $x_i\in \{-1, 1\}^n$, the hyperbolic rank $\rank(x_i) = n$. 
Therefore, our result only implies an $O(\sqrt{n\log n})$ upper-bound, instead of $6\sqrt{n}$ by Spencer's theorem. \end{remark} \fi \subsection{Approach 0 for proving \cref{lem:p_norm_upper_bound}} In this subsection, we try to prove the lemma by reducing the $\|\cdot\|_{h,p}$ norm to the $\|\cdot\|_{h,2}$ norm. By~\cref{thm:khinchin_kahane}, we immediately have the following inequality for the $\|\cdot\|_{h,p}$ norm: \begin{corollary}[Khinchin-Kahane inequality for hyperbolic $p$-norm]\label{cor:kk_ineq_p_norm} For any $k\geq 2$, \begin{align*} C\frac{1}{\sqrt{k}}\left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^k\right]\right)^{1/k}\leq \E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}\right]\leq \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^k\right]\right)^{1/k} \end{align*} \end{corollary} \begin{comment} For matrices, according to the noncommutative Khinchine inequality, \begin{align*} \mathrm{RHS}\leq \left\|\left(\sum_{i=1}^n X_i^2\right)^{1/2}\right\|_p. \end{align*} \Ruizhe{I'm not sure whether it holds for hyperbolic spectral norm.} \end{comment} Hence, if we can upper-bound the $k$-th moment $\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^k\right]$, then by the above corollary we will prove \cref{lem:p_norm_upper_bound}.
We know that the following relations hold for different hyperbolic norms: for $p\geq 2$, \begin{align}\label{eq:norm_relation} \|x\|_h \leq \|x\|_{h,p} \leq \|x\|_{h,2}\leq d^{1/2-1/p}\|x\|_{h,p}. \end{align} It follows that \begin{align*} \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^2\right]\right)^{1/2}\leq &~ \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2}^2\right]\right)^{1/2}\\ = &~ \left(\sum_{i=1}^n \|x_i\|_{h,2}^2\right)^{1/2}\\ \leq &~ \left(\sum_{i=1}^n d^{1-2/p}\|x_i\|_{h,p}^2\right)^{1/2}\\ = &~ d^{1/2-1/p}\left(\sum_{i=1}^n \|x_i\|_{h,p}^2\right)^{1/2}, \end{align*} where the first step follows from \cref{eq:norm_relation}, the second step follows from \cref{clm:type_2}, and the third step follows from \cref{eq:norm_relation} again. In other words, the type-2 constant is at most $d^{1/2-1/p}$ for the $\|\cdot\|_{h,p}$ norm. By \cref{cor:kk_ineq_p_norm}, we get \begin{align*} \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2p}^{2p}\right]\right)^{1/(2p)}\leq C\sqrt{2p} \cdot d^{1/2-1/(2p)} \cdot \left(\sum_{i=1}^n \|x_i\|_{h,2p}^2\right)^{1/2}. \end{align*} The $d^{1/2-1/(2p)}$ factor is not allowed to appear in \cref{lem:p_norm_upper_bound}; otherwise, $\log d$ will become $d$ in the main theorem (\cref{thm:main}), i.e., \begin{align*} \Pr\left[\left\|\sum_{i=1}^n r_i x_i\right\|_h>t\right]\leq &~ 2\exp\left(-\frac{t^2}{32C^2\cdot d\cdot \sigma^2}\right). \end{align*} \subsection{Approach 1 for proving \cref{lem:p_norm_upper_bound}} In this subsection, we try to prove \cref{lem:p_norm_upper_bound} by directly computing the expectation. We first prove the following property of $\|\cdot\|_{h,p}$: \begin{claim}\label{clm:p_norm_poly} For $x\in \mathbb{R}^m$, $p\leq d$, \begin{align*} \|x\|_{h,p}^p =\sum_{i=1}^d \lambda_i(x)^p \end{align*} is a degree-$p$ homogeneous polynomial. \end{claim} \begin{proof} For $1\leq k\leq d$, let \begin{align*} E_k(x)=\sum_{i_1<\cdots < i_k} \prod_{j=1}^k x_{i_j}.
\end{align*} Then, by Fact 2.10 in \cite{bgls01}, $E_k(\lambda(x))$ is a degree-$k$ homogeneous polynomial. By Newton's identity, \begin{align*} \|x\|_{h,p}^p = (-1)^{p-1}\, p\, E_p(\lambda(x)) + \sum_{i=1}^{p-1}(-1)^{p-1+i}E_{p-i}(\lambda(x))\|x\|_{h,i}^i. \end{align*} By induction on $p$, we prove that $\|x\|_{h,p}^p$ is a degree-$p$ homogeneous polynomial. \end{proof} By \cref{clm:p_norm_poly}, we can assume that for $x\in \mathbb{R}^m$, \begin{align*} \|x\|_{h,p}^p = \sum_{i_1,\dots, i_p\in [m]} T(i_1,\dots, i_p) \prod_{j=1}^p x_{i_j}, \end{align*} where $T$ is a symmetric tensor. By \cite{cglm08}, it is always possible to write a homogeneous polynomial as a symmetric tensor, which satisfies $T_{i_1,\dots,i_p}=T_{\sigma(i_1),\dots, \sigma(i_p)}$ for all permutations $\sigma\in \mathcal{S}_p$. Then, we have \begin{align*} \E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^p\right] = \sum_{i_1,\dots, i_p\in [m]} T(i_1,\dots, i_p) \E\left[ \prod_{j=1}^p \sum_{k=1}^n r_k x_{k,i_j} \right]. \end{align*} Now, we can upper bound each term for fixed $i_1,\dots, i_p$. Without loss of generality, assume $i_j=j$ for all $j\in [p]$. Then, it becomes \begin{align*} \E\left[ \prod_{j=1}^p \sum_{k=1}^n r_k x_{k,j} \right] = &~ \sum_{c\in [n]^p} \prod_{j=1}^p x_{c(j),j}\cdot \E\left[ \prod_{i=1}^n r_i^{b_i(c)} \right], \end{align*} where \begin{align*} c=(c_1,\dots,c_p),\quad b_i(c):=|\{k\in [p]: c_k=i\}|\quad \forall i\in [n]. \end{align*} Note that \begin{align*} \E\left[ \prod_{i=1}^n r_i^{b_i(c)} \right]=\begin{cases} 1 & \text{if all $b_i(c)$ are even,}\\ 0 & \text{otherwise.} \end{cases} \end{align*} Hence, \begin{align*} \E\left[ \prod_{j=1}^p \sum_{k=1}^n r_k x_{k,j} \right] = \sum_{c\in C_{even}} \prod_{j=1}^p x_{c(j),j}, \end{align*} where \begin{align*} C_{even}:=\left\{(c_1,\dots,c_p)\in [n]^p\text{~such that~}b_i(c) \text{~is even~}\forall i\in [n]\right\}.
\end{align*} It follows that \begin{align}\label{eq:p_norm_tensor_form} \E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^p\right] = &~ \sum_{c\in C_{even}}\sum_{i_1,\dots, i_p\in [m]} T(i_1,\dots, i_p)\prod_{j=1}^p x_{c(j),i_j}\notag\\ = &~ \sum_{c\in C_{even}} \left\langle T, \bigotimes_{i=1}^p x_{c(i)}\right\rangle\notag\\ = &~ \sum_{\substack{\beta_1,\dots, \beta_n\geq 0\\\beta_1+\cdots+\beta_n = q}} \binom{2q}{2\beta_1,\dots,2\beta_n}\left\langle T, \bigotimes_{i=1}^n x_i^{\otimes 2\beta_i}\right\rangle, \end{align} where the first step exchanges the sum over $i_1,\dots,i_p$ and the sum over $c$, the second step follows from writing the polynomial evaluation as an inner product, and the third step follows from $q=p/2$ and the fact that the tensor $T$ is symmetric, that is, the value of $\left\langle T, \bigotimes_{i=1}^p x_{c(i)}\right\rangle$ only depends on the number of appearances of $i$ in $c$ for each $i\in [n]$. And since $c\in C_{even}$, each $i$ must appear an even number of times. By~\cite{zyg02,tj74}, we have \begin{align*} \binom{2q}{2\beta_1,\dots, 2\beta_n}\leq M_{2q}^{2q}\binom{q}{\beta_1,\dots,\beta_n}, \end{align*} where \begin{align*} M_{2q}=\left(\frac{(2q)!}{2^qq!}\right)^{1/2q}=\Theta(\sqrt{p}). \end{align*} Hence, \begin{align*} \sum_{\substack{\beta_1,\dots, \beta_n\geq 0\\\beta_1+\cdots+\beta_n = q}} \binom{2q}{2\beta_1,\dots,2\beta_n}\left\langle T, \bigotimes_{i=1}^n x_i^{\otimes 2\beta_i}\right\rangle \leq &~ M_{2q}^{2q} \sum_{\substack{\beta_1,\dots, \beta_n\geq 0\\\beta_1+\cdots+\beta_n = q}} \binom{q}{\beta_1,\dots,\beta_n}\left\langle T, \bigotimes_{i=1}^n x_i^{\otimes 2\beta_i}\right\rangle\\ = &~ M_{2q}^{2q}\left\langle T, \left(\sum_{i=1}^n x_i\otimes x_i\right)^{\otimes q} \right\rangle, \end{align*} where the last step follows from the multinomial theorem and the symmetry of $T$. Now, consider the sum $\sum_{i=1}^n x_i\otimes x_i$.
Suppose it can be decomposed as the tensor square of some vector: \begin{align*} \sum_{i=1}^n x_i\otimes x_i = v\otimes v, \end{align*} then we write $v:=(\sum_{i=1}^n x_i\otimes x_i)^{1/2}\in \mathbb{R}^m$. It follows that \begin{align*} M_{2q}^{2q}\left\langle T, \left(\sum_{i=1}^n x_i\otimes x_i\right)^{\otimes q} \right\rangle = M_{2q}^{2q} \left\langle T, v^{\otimes 2q}\right\rangle. \end{align*} Therefore, \begin{align*} \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,p}^p\right]\right)^{1/p}\leq M_{p}\left\| \left(\sum_{i=1}^n x_i\otimes x_i\right)^{1/2} \right\|_{h,p}. \end{align*} \Ruizhe{However, this square root doesn't always exist.} If the square root does not exist, we need to look more closely at the polynomial $\|x\|_{h,p}^{p}$. For the homogeneous polynomial $\langle T, x^{\otimes 2q}\rangle$, it is positive semi-definite since \begin{align*} \left\langle T, x^{\otimes 2q}\right\rangle = \sum_{i=1}^d \lambda(x)_i^{2q} \geq 0. \end{align*} By the Waring decomposition \cite{ah95, cglm08}, we can write it as a sum of $2q$-th powers of linear forms: \begin{align*} \left\langle T, x^{\otimes 2q}\right\rangle = \sum_{t=1}^w s_t \langle v_t, x\rangle^{2q}, \end{align*} where $s_t\in \mathbb{R}, v_t \in \mathbb{R}^m$ and $w$ is the Waring rank of the tensor $T$. By \cite{ah95}, we know that $w=\poly(m,d)$. Then, we have \begin{align*} \E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]= &~ \sum_{t=1}^w s_t \E\left[\left\langle v_t, \sum_{j=1}^n r_j x_j\right\rangle^{2q}\right]\\ = &~ \sum_{t=1}^w s_t \sum_{\substack{\beta_1,\dots, \beta_n\geq 0\\\beta_1+\cdots+\beta_n = q}} \binom{2q}{2\beta_1,\dots,2\beta_n}\prod_{j=1}^n \langle v_t, x_j\rangle^{2\beta_j}\\ \leq &~ M_{2q}^{2q}\sum_{t=1}^w s_t \left(\sum_{j=1}^n \langle v_t, x_j\rangle^2\right)^{q}, \end{align*} where the second step follows from the previous discussion (\cref{eq:p_norm_tensor_form}).
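The combinatorial inequality $\binom{2q}{2\beta_1,\dots,2\beta_n}\leq M_{2q}^{2q}\binom{q}{\beta_1,\dots,\beta_n}$ used in the last step (with $M_{2q}^{2q}=(2q)!/(2^q q!)$) is easy to verify directly. The following sketch is our own sanity check on random compositions of $q$; it is an illustration, not a substitute for the references \cite{zyg02,tj74}.

```python
from math import comb, factorial
import random

def multinomial(total, parts):
    """Multinomial coefficient total! / (parts[0]! * ... * parts[-1]!)."""
    out, remaining = 1, total
    for p in parts:
        out *= comb(remaining, p)
        remaining -= p
    return out

random.seed(0)
for _ in range(200):
    q = random.randint(1, 8)
    n = random.randint(1, 5)
    # random composition beta_1 + ... + beta_n = q with beta_i >= 0
    cuts = sorted(random.randint(0, q) for _ in range(n - 1))
    beta = [b - a for a, b in zip([0] + cuts, cuts + [q])]

    m_2q = factorial(2 * q) // (2 ** q * factorial(q))   # M_{2q}^{2q} = (2q)!/(2^q q!)
    lhs = multinomial(2 * q, [2 * b for b in beta])
    rhs = m_2q * multinomial(q, beta)
    assert lhs <= rhs, (q, beta)
print("inequality verified on 200 random compositions")
```

The inequality reduces to $2^q\prod_i \beta_i!\leq\prod_i(2\beta_i)!$, which holds term by term since $(2\beta)!\geq 2^{\beta}\beta!^2$ for every $\beta\geq 0$; equality occurs, e.g., at $q=2$, $\beta=(1,1)$.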
If $s_t>0$ for all $t\in [w]$, then by the triangle inequality for $\ell_q$ sequences, \begin{align*} \left(\sum_{t=1}^w s_t\left(\sum_{j=1}^n \langle v_t,x_j\rangle^2\right)^q\right)^{1/q} \leq &~ \sum_{j=1}^n \left(\sum_{t=1}^w s_t \langle v_t, x_j\rangle^{2q}\right)^{1/q}. \end{align*} Hence, \begin{align*} \left(\E\left[\left\|\sum_{i=1}^n r_i x_i\right\|_{h,2q}^{2q}\right]\right)^{1/2q}\leq &~ M_{2q}\left ( \sum_{j=1}^n \left(\sum_{t=1}^w s_t \langle v_t, x_j\rangle^{2q}\right)^{1/q} \right)^{1/2}\\ = &~ M_{2q}\left ( \sum_{j=1}^n \|x_j\|_{h,2q}^2 \right)^{1/2}. \end{align*} The above upper bound matches the result of the matrix Khinchine inequality in \cite{tj74}. However, in general, we do not have such a guarantee that $s_t\geq 0$ for all $t\in [w]$. Therefore, we state the following conjecture. Once it is proved, we will get \cref{lem:p_norm_upper_bound}. \begin{conjecture} Let $q\geq 1$. Let $s_t\in \mathbb{R},v_t\in \mathbb{R}^m$ for $t\in [w]$ be such that for all $x\in \mathbb{R}^m$, \begin{align*} \sum_{t=1}^w s_t\langle v_t,x\rangle^{2q}\geq 0. \end{align*} Then, for $x,y\in \mathbb{R}^m$, we have \begin{align*} \left(\sum_{t=1}^w s_t\left(\langle v_t,x\rangle^2 + \langle v_t, y\rangle^2\right)^q\right)^{1/q}\leq \left(\sum_{t=1}^w s_t\langle v_t,x\rangle^{2q}\right)^{1/q} + \left(\sum_{t=1}^w s_t\langle v_t,y\rangle^{2q}\right)^{1/q}. \end{align*} \end{conjecture} \begin{comment} Let $F(x)=-\log h(x)$ and $e\in \mathbb{R}^m$ be the hyperbolic direction. Then we have \begin{align*} \|x\|_{h,2q}^{2q}=\sum_{i=1}^d \lambda(x)_i^{2q}=\frac{1}{(2q-1)!}\nabla^{2q}F(e)[x,\dots, x]. \end{align*} \end{comment} \begin{comment} \begin{conjecture} \begin{align*} \left(\sum_{i_1,\dots,i_q\in [n]}\nabla^{2q}F(e)[x_{i_1},x_{i_1},x_{i_2},x_{i_2},\dots, x_{i_q},x_{i_q}]\right)^{1/q}\leq \sum_{i=1}^n \left(\nabla^{2q}F(e)[x_i,\dots,x_i]\right)^{1/q} \end{align*} \end{conjecture} \end{comment}
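For nonnegative coefficients the conjectured inequality is just Minkowski's inequality in a weighted $\ell_q$ space, which the sketch below checks numerically on random instances (our own sanity check, with $n=2$ vectors; the open case of the conjecture is precisely when some $s_t$ are negative):

```python
import numpy as np

# Weighted Minkowski: for s_t >= 0 and q >= 1,
#   (sum_t s_t (a_t + b_t)^q)^(1/q) <= (sum_t s_t a_t^q)^(1/q) + (sum_t s_t b_t^q)^(1/q),
# with a_t = <v_t, x>^2 and b_t = <v_t, y>^2 as in the positive case above.
rng = np.random.default_rng(2)
for _ in range(500):
    w, m = 10, 4
    q = rng.integers(1, 6)                # exponent q in {1, ..., 5}
    s = rng.random(w)                     # nonnegative weights
    v = rng.standard_normal((w, m))
    x, y = rng.standard_normal(m), rng.standard_normal(m)

    a, b = (v @ x) ** 2, (v @ y) ** 2
    lhs = (s * (a + b) ** q).sum() ** (1.0 / q)
    rhs = (s * a ** q).sum() ** (1.0 / q) + (s * b ** q).sum() ** (1.0 / q)
    assert lhs <= rhs + 1e-9
print("weighted Minkowski verified on 500 random instances")
```

Nothing in this check bears on the signed case: once negative $s_t$ are allowed (subject only to the global nonnegativity of $\sum_t s_t\langle v_t,x\rangle^{2q}$), the quantity $(\sum_t s_t\,c_t)^{1/q}$ is no longer a norm of the sequence $(c_t)$, which is exactly why the conjecture is open.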
\section{Introduction} The combinatorial definition of a \emph{Magidor cardinal} $\lambda$ is given by $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\lambda$. Recall that $[\lambda]^{\aleph_0\text{-bd}}$ is the family of all countable bounded subsets of $\lambda$. It contains subsets of any countable order type. The relation $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\lambda$ means that for every $c:[\lambda]^{\aleph_0\text{-bd}} \rightarrow\lambda$ there exists $A\in[\lambda]^\lambda$ for which $c\image[A]^{\aleph_0\text{-bd}}\neq\lambda$. The reason for concentrating on bounded subsets of $\lambda$ is that $\lambda\nrightarrow[\lambda]^{\aleph_0}_\lambda$ holds for every infinite cardinal $\lambda$ (by a theorem of Erd\H{o}s and Hajnal). It follows that if $\lambda$ is Magidor then $\lambda$ is a limit cardinal of countable cofinality. Magidor cardinals were defined in \cite{MR3750266} through this combinatorial property. A model-theoretic characterization of these cardinals via elementary embeddings appears in \cite{MR3666820}. It is easy to see that if $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\lambda$ then $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\alpha$ for some $\alpha<\lambda$ (\cite{MR3750266}, Lemma 1.2). Given a Magidor cardinal $\lambda$, the first such $\alpha$ is denoted by $\alpha_M(\lambda)$ or just $\alpha_M$ if $\lambda$ is clear from the context. Notice that if $\beta\geq\alpha_M$ then $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\beta$ as well. We shall say that $\alpha_M$ is \emph{the first omitting cardinal} for $\lambda$. A parallel notion arises for J\'onsson cardinals. Recall that $\lambda$ is J\'onsson iff $\lambda\rightarrow[\lambda]^{<\omega}_\lambda$, in which case there exists a first ordinal $\alpha$ for which $\lambda\rightarrow[\lambda]^{<\omega}_\alpha$. The first such ordinal is denoted by $\alpha_J$ (or $\alpha_J(\lambda)$), and it is a regular cardinal strictly below $\lambda$.
Several properties of $\alpha_M$ are phrased in \cite{MR3750266}, among them the fact that it is always a regular cardinal. Some open problems concerning $\alpha_M$ appear in \cite{MR3750266}. A pair of related problems is labeled there as Question 3.12 and Question 3.13. The first one is whether $\alpha_M$ can be a successor of a singular cardinal. In the second one we ask about the possibility that $\alpha_M$ is a large cardinal (e.g., measurable) or the successor of a large cardinal. We shall prove that $\alpha_M$ is not a large cardinal but it can be a successor of a large cardinal. Thence, one can singularize this large cardinal and force $\alpha_M$ to be a successor of a singular cardinal. Although we can force $\alpha_M$ to be a successor of a singular cardinal, the cofinality of this cardinal is uncountable in our model. The last stage of singularizing our large cardinal is done with Magidor forcing and not with Prikry forcing. It is an amusing historical coincidence (Magidor cardinals were defined with no connection to Magidor forcing), but it seems that Prikry forcing fails to place $\alpha_M$ at the first point above a singular cardinal. The reason will be explicated in the sequel, and it points to a substantial property of $\alpha_M$. Our notation is mostly standard. We suggest \cite{MR1994835} for a comprehensive treatment to large cardinals. We shall use the Jerusalem notation in forcing, i.e. $p\leq q$ means that $q$ is stronger than $p$. If $\mathbb{P}$ is a forcing notion and $p,q\in\mathbb{P}$ then $p\parallel q$ means that $p$ and $q$ are compatible. The pure order in Prikry type forcing notions will be denoted by $\leq^*$. We call $\kappa$ supercompact iff for every ordinal $\gamma$ there exists an elementary embedding $\jmath:{\rm V}\rightarrow M$ for which $\kappa=\crit(\jmath)$ and ${}^\gamma M\subseteq M$. 
A forcing notion $\mathbb{P}$ is $\kappa$-directed-closed iff whenever $A\subseteq\mathbb{P}$ is a directed set of conditions and $|A|<\kappa$, there exists some $q\in\mathbb{P}$ so that $p\in A\Rightarrow p\leq q$. We shall use Prikry and Magidor forcing, and we follow the conventions of \cite{MR2768695}. This is of particular importance with respect to Magidor forcing, as this type of forcing notion can be written in several ways. Laver proved in \cite{MR0472529} that if $\kappa$ is supercompact then one can define a forcing notion $\mathbb{P}$ which makes $\kappa$ indestructible upon any further extension by $\kappa$-directed-closed forcing notions. The forcing $\mathbb{P}$ is a set forcing, based on the so-called Laver's diamond. Arrows notation in this paper is consistent with the common literature. The notation $[\lambda]^{\aleph_0\text{-bd}}$ refers to all bounded subsets of $\lambda$ whose cardinality is $\aleph_0$, regardless of order type. We shall use the notation $[\lambda]^{\omega\text{-bd}}$ if we restrict ourselves to bounded sets of order type $\omega$. We mention here only the less frequent relation $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu,<\nu}$ which means that for every $f:[\lambda]^{\aleph_0\text{-bd}}\rightarrow\nu$ there exists $A\in[\lambda]^\lambda$ for which $|f\image[A]^{\aleph_0\text{-bd}}|<\nu$. In general, the arrows notation is designed in order to keep monotonicity, but this need not hold for $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu,<\nu}$ with respect to the subscript. We shall mention cardinals in the family of rank-into-rank, so let us recall the definitions of I0 and I1. A cardinal $\lambda$ is I1 iff there exists a non-trivial elementary embedding $\jmath:V_{\lambda+1}\rightarrow V_{\lambda+1}$. If $\lambda$ is I1 then there are many elementary embeddings of the form $k\colon V_{\lambda+1}\to V_{\lambda+1}$ with different critical points.
When it is important to specify the critical point, we write $\mathrm{I1}(\kappa,\lambda)$ meaning that there exists an elementary embedding $\jmath\colon V_{\lambda+1}\to V_{\lambda+1}$ such that $\crit(\jmath) = \kappa$. A cardinal $\lambda$ is I0 iff there is $\jmath:L(V_{\lambda+1}) \rightarrow L(V_{\lambda+1})$ so that $\crit(\jmath)<\lambda$. Every I1 cardinal $\lambda$ is Magidor, by a simple observation of Menachem Magidor (\cite{MR1994835}, Question 24.1). Finally, we shall have to know that under some circumstances the assertion I1$(\kappa,\lambda)$ is preserved by forcing. We shall use, to this end, Silver's criterion. We phrase the pertinent theorem in a bit more generality than we actually need, see \cite{MR2768691}, Proposition 9.1: \begin{theorem} \label{ssilver} Silver's criterion. \newline Let $\jmath:M\rightarrow N$ be an elementary embedding, where $M,N$ are transitive models of ZFC. Let $\mathbb{P}\in M$ be a forcing notion, and $G\subseteq\mathbb{P}$ a generic subset over $M$. Assume $H$ is a generic subset of $\jmath(\mathbb{P})$ over $N$. Then the following two statements are equivalent: \begin{enumerate} \item [$(a)$] $\jmath(p)\in H$ for every $p\in G$. \item [$(b)$] There exists an elementary embedding $\jmath^+:M[G]\rightarrow N[H]$ so that $\jmath^+(G)=H$ and $\jmath^+\upharpoonright M=\jmath$. \end{enumerate} \end{theorem} \hfill \qedref{ssilver} \newpage \section{Large cardinals and their successors} In this section we focus on Question 3.13 from \cite{MR3750266}. Our first theorem says that under some mild assumptions, one can show that $\alpha_M$ must be a successor cardinal. The metamathematical idea is that $\alpha_M$ (and similarly, $\alpha_J$ and $\alpha_R$ which are the parallel notions for J\'onsson and Rowbottom cardinals respectively) is not a large cardinal in the philosophical sense. Ahead of the proof we need a lemma, which is the parallel to a lemma of Tryba, \cite{MR826499}.
The lemma of Tryba refers to J\'onsson cardinals, and here we translate it to Magidor cardinals. \begin{lemma} \label{ttryba} Assume that: \begin{enumerate} \item [$(a)$] $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu,<\nu}$ and $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu^{{\rm cf}(\nu)},<\nu^{{\rm cf}(\nu)}}$. \item [$(b)$] $\nu<\lambda$ is a limit cardinal. \item [$(c)$] There are no Magidor cardinals in the interval $[\nu,\nu^{{\rm cf}(\nu)}]$. \end{enumerate} Then there exists some $\rho<\nu$ for which $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu,<\rho}$, and hence $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\rho,<\rho}$. \end{lemma} \par\noindent\emph{Proof}. \newline First we show that if $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\gamma, \gamma\leq\delta\leq\varepsilon$ and there is no Magidor cardinal in the interval $[\delta,\varepsilon]$ then $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\varepsilon,<\delta}$. Toward showing this, assume that $\lambda\nrightarrow[\lambda]^{\aleph_0\text{-bd}}_{\varepsilon,<\delta}$ and choose a function $f:[\lambda]^{\aleph_0\text{-bd}}\rightarrow\varepsilon$ so that $|f\image[A]^{\aleph_0\text{-bd}}|\geq\delta$ whenever $A\in[\lambda]^\lambda$. We may assume that $\varepsilon$ is the first counterexample. By our assumption, $\varepsilon$ is not a Magidor cardinal. Hence $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\varepsilon,<\varepsilon}$ (see Proposition 3.18 in \cite{MR3750266}). Let us choose $B\in[\lambda]^\lambda$ so that $\eta=|f\image[B]^{\aleph_0\text{-bd}}|<\varepsilon$. Since $\varepsilon$ is the first counterexample and $\delta\leq\eta<\varepsilon$, we conclude that $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\eta,<\delta}$. In particular, one can choose a subset $C\in[B]^\lambda$ for which $|f\image[C]^{\aleph_0\text{-bd}}|<\delta$, a contradiction to the choice of $f$. We proceed to the assertion of the lemma.
Let $A$ be the set $\{\sigma<\nu: \lambda\nrightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu,<\sigma}\}$. Denote $\sup(A)$ by $\eta$. If $\eta<\nu$ then $\eta^+<\nu$ as well (recall that $\nu$ is a limit cardinal) so $\eta^+$ can serve as the alleged $\rho$ in the lemma. Assume towards contradiction that $\sup(A)=\nu$, and choose a sequence of members of $A$ of the form $\langle\sigma_\alpha:\alpha<{\rm cf}(\nu)\rangle$, cofinal in $\nu$. For every $\alpha<{\rm cf}(\nu)$ choose a function $f_\alpha:[\lambda]^{\aleph_0\text{-bd}}\rightarrow\nu$ such that $|f_\alpha\image[x]^{\aleph_0\text{-bd}}|\geq\sigma_\alpha$ whenever $x\in[\lambda]^\lambda$. Let $B=\prod\limits_{\alpha<{\rm cf}(\nu)}\sigma_\alpha$. We define a function $g:[\lambda]^{\aleph_0\text{-bd}}\rightarrow B$ as follows: $$ g(s)=(f_\alpha(s):\alpha<{\rm cf}(\nu)). $$ Notice that $|B|=\nu^{{\rm cf}(\nu)}$. By assumption $(a)$, $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu^{{\rm cf}(\nu)},<\nu^{{\rm cf}(\nu)}}$. Hence there exists a set $x\in[\lambda]^\lambda$ for which $|g\image[x]^{\aleph_0\text{-bd}}|<\nu$. This follows from the beginning of the proof, by letting $\gamma=\delta=\nu, \varepsilon=\nu^{{\rm cf}(\nu)}$, upon noticing that there are no Magidor cardinals in the interval $[\delta,\varepsilon]$ by assumption $(c)$. Consequently, $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\nu^{{\rm cf}(\nu)},<\nu}$, which amounts to the existence of $x\in[\lambda]^\lambda$ so that $|g\image[x]^{\aleph_0\text{-bd}}|<\nu$. On the other hand, every value of $f_\alpha$, for each $\alpha<{\rm cf}(\nu)$, gives rise to a distinct element of $B$. Hence $|g\image[x]^{\aleph_0\text{-bd}}|\geq \bigcup\limits_{\alpha<{\rm cf}(\nu)}|f_\alpha\image[x]^{\aleph_0\text{-bd}}|=\nu$, and this contradiction gives the desired conclusion. \hfill \qedref{ttryba} Based on this lemma, we can prove the following: \begin{theorem} \label{t3} Let $\lambda$ be a Magidor cardinal.
\begin{enumerate} \item [$(a)$] If there is no Magidor cardinal in the interval $[\alpha_M,2^{\alpha_M}]$ then $\alpha_M$ is a successor cardinal. \item [$(b)$] If every limit cardinal is a strong limit cardinal then $\alpha_M(\lambda)$ is a successor cardinal for every Magidor cardinal $\lambda$. \end{enumerate} \end{theorem} \par\noindent\emph{Proof}. \newline As mentioned in the introduction, if $\lambda$ is Magidor then $\lambda$ is a limit cardinal. Part $(b)$ follows from part $(a)$ by noticing that if every limit cardinal is strong limit then there are no limit cardinals in the interval $[\alpha_M,2^{\alpha_M}]$ and hence no Magidor cardinals in this interval. We prove, therefore, part $(a)$. Assume towards contradiction that $\alpha_M$ is a limit cardinal. All the requirements of Lemma \ref{ttryba} hold, bearing in mind that $\alpha_M$ here stands for $\nu$ there. Requirement $(a)$ there is a simple property of $\alpha_M$ as proved in \cite[Theorem 1.8]{MR3750266}, and $(b)$ is our assumption towards contradiction. Requirement $(c)$ is the assumption of the theorem. It follows from the conclusion of Lemma \ref{ttryba} that $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_{\rho,<\rho}$ for some $\rho<\alpha_M$, but this is absurd in light of the definition of $\alpha_M$, so we are done. \hfill \qedref{t3} Our next goal is to show that $\alpha_M$ can be essentially the successor of any regular cardinal. This is possible even if one wishes to force $\alpha_M$ to be the successor of a large cardinal. The following preservation theorem is in the spirit of the celebrated L\'{e}vy\xspace-Solovay preservation theorem from \cite{MR0224458} for measurable cardinals. \begin{claim} \label{llevysolovay} Let $\lambda$ be Magidor and $\alpha<\lambda$. Let $\mathbb{P}$ be an $\alpha$-cc $\aleph_1$-complete forcing notion. \newline Then $\lambda$ remains Magidor in the generic extension by $\mathbb{P}$. \end{claim} \par\noindent\emph{Proof}.
\newline Without loss of generality, $\alpha$ is regular and hence not Magidor (we can always work with $\alpha^+$ in lieu of $\alpha$). We may also assume that $\alpha_M \leq \alpha$, by taking a larger $\alpha$ if needed. Let $\mathunderaccent\tilde-3 {f}$ be a name of a function from $[\lambda]^{\aleph_0\text{-bd}}$ into $\alpha$, and let $p$ be a condition which forces this fact. We define $g\in{\rm V}$, which is also a function from $[\lambda]^{\aleph_0\text{-bd}}$ into $\alpha$. Given any $t\in[\lambda]^{\aleph_0\text{-bd}}$ let $g(t)=\sup \{\eta<\alpha: \exists q, p\leq q, q\Vdash\mathunderaccent\tilde-3 {f}(t)=\eta\}$. By the chain condition and the regularity of $\alpha$, $g(t)<\alpha$ for every $t\in[\lambda]^{\aleph_0\text{-bd}}$. By the assumption of $\aleph_1$-completeness, $[\lambda]^{\aleph_0\text{-bd}}$ is the same mathematical object both in ${\rm V}$ and ${\rm V}[G]$. Choose $A\in[\lambda]^\lambda$ for which $|g\image[A]^{\aleph_0\text{-bd}}|<\alpha$. This can be done since $\alpha$ is not a Magidor cardinal. But now $\sup (\mathunderaccent\tilde-3 {f}\image[A]^{\aleph_0\text{-bd}})\leq \sup (g\image[A]^{\aleph_0\text{-bd}}) <\alpha$. In particular, $\mathunderaccent\tilde-3 {f}$ omits colors on a full-size subset of $\lambda$, so $\lambda$ is Magidor. \hfill \qedref{llevysolovay} We shall see, below, that if $\lambda$ is I1 and $\kappa<\lambda$ where $\kappa$ is measurable then $\lambda$ remains Magidor after adding a Prikry sequence to $\kappa$. Despite the possible preservation of Magidority by Prikry forcing, it turns out that a ``small'' Prikry forcing may change the value of $\alpha_M$, in an interesting way. If $\lambda$ is Magidor then one can force $\alpha_M(\lambda)=\aleph_2$ while preserving the Magidority of $\lambda$ (Proposition 1.10 of \cite{MR3750266}). This is done, essentially, by collapsing the predecessor (or predecessors) of $\alpha_M$, and it gives a simple way to decrease $\alpha_M$.
Using Prikry forcing, one can \emph{increase} $\alpha_M$. \begin{theorem} \label{increasealpham} Let $\lambda$ be Magidor, and let $\kappa<\lambda$ be a measurable cardinal so that $2^\kappa<\lambda$. Let $\mathbb{P}$ be Prikry forcing through some normal ultrafilter $\mathcal{U}$ over $\kappa$. Let $G\subseteq\mathbb{P}$ be generic. \newline If $\lambda$ is still Magidor in ${\rm V}[G]$ then $\alpha_M>\kappa$. Moreover, $\alpha_M>(\kappa^\omega)^{V[G]}$, so $\alpha_M>\kappa^+$ in ${\rm V}[G]$. \end{theorem} \par\noindent\emph{Proof}. \newline First we prove that if $\mu={\rm cf}(\mu)>2^\kappa$ and $T\in[\mu]^\mu\cap{\rm V}[G]$ then there exists $S\subseteq T$ so that $S\in[\mu]^\mu\cap{\rm V}$. To this end, assume that $\mathunderaccent\tilde-3 {y}$ is a name of a subset of $\mu$ of size $\mu$, and recall that a generic subset $G$ has been chosen. The interpretation of $\mathunderaccent\tilde-3 {y}$ according to $G$ can be written as ${\mathunderaccent\tilde-3 {y}}_G = \bigcup_{p\in G}y_p$ where $y_p = \{\alpha\in\mu: p\Vdash\check{\alpha}\in\mathunderaccent\tilde-3 {y}\}$. Observe that each $y_p$ belongs to the ground model, as the forcing relation is definable in ${\rm V}$. Since $\mu={\rm cf}(\mu)>2^\kappa\geq|G|$ we see that there exists a single condition $p\in G$ for which $y_p\in[\mu]^\mu\cap{\rm V}$, and clearly $y_p\subseteq {\mathunderaccent\tilde-3 {y}}_G$, as desired. Our objective is to define, in ${\rm V}[G]$, a function $f$ from $[\lambda]^{\aleph_0\text{-bd}}$ into $\kappa$ which omits no color on full size subsets of $\lambda$. The main point is to take care of new sets of size $\lambda$, added by the forcing poset. As a preliminary, for every $\alpha<\lambda$ such that ${\rm cf}^{\rm V}(\alpha)=\kappa$ we choose in ${\rm V}$ a cofinal sequence $\langle\beta^\alpha_i:i<\kappa\rangle$.
We also fix a function $g:[\kappa]^\omega\rightarrow\kappa$, now in ${\rm V}[G]$, such that $g\image[H]^\omega=\kappa$ whenever $H\in[\kappa]^\kappa$. We may assume that $g$ is defined only over unbounded subsets of $\kappa$ (recall that ${\rm cf}^{\rm V[G]}(\kappa)=\omega$). The existence of $g$ can be proved as in the existence proof of the usual $\omega$-J\'onsson functions. Assume now that $t=\{t_n:n\in\omega\}$ belongs to $[\lambda]^{\aleph_0\text{-bd}}$. Let $\gamma_t$ be $\sup(\{t_n + 1 \mid n < \omega\})$. If ${\rm cf}^{\rm V}(\gamma_t)\neq\kappa$ then we define $f(t)=0$. Assume that ${\rm cf}^{\rm V}(\gamma_t)=\kappa$. For every $n\in\omega$ let $\rho_n$ be the first ordinal $i<\kappa$ so that $t_n\leq\beta^{\gamma_t}_i$. We define $f(t)=g(\{\rho_n:n\in\omega\})$. Assume that $T\in[\lambda]^\lambda\cap{\rm V}[G]$. Choose any regular cardinal $\theta<\lambda$ such that $\theta>2^\kappa$. By the assertion from the beginning of the proof we choose $S\in[\lambda]^\theta\cap{\rm V}$ such that $S\subseteq T$. Let $\gamma$ be the supremum of the first $\kappa$ elements of $S$. We shall prove that $f\image[S\cap\gamma]^\omega=\kappa$, thus accomplishing the proof (notice that all the members of $[S\cap\gamma]^\omega$ are bounded in $\lambda$). Suppose $\eta<\kappa$ is any color. Since $\langle\beta^\gamma_i:i<\kappa\rangle$ is cofinal in $\gamma$ and since $\kappa$ is regular in ${\rm V}$, the set $W=\{\rho<\kappa:\exists\delta\in S, \rho=\min\{j<\kappa:\delta\leq\beta^\gamma_j\}\}$ is of size $\kappa$. By the nature of $g$, we can choose $\{\rho_n:n\in\omega\}\subseteq W$ for which $g(\{\rho_n:n\in\omega\})=\eta$. Notice that $\sup \{\rho_n \mid n < \omega\} = \kappa$. For each $n\in\omega$ choose $t_n\in S$ such that $\rho_n=\min\{j<\kappa: t_n\leq\beta^\gamma_j\}$, and let $t=\{t_n:n\in\omega\}$. Now $f(t)=g(\{\rho_n:n\in\omega\})=\eta$, so we are done. We show now how to modify the proof in order to get $\Vdash_{\mathbb{P}} \alpha_M>(\kappa^\omega)^{V[G]}$.
We fix the sequences $\langle\beta^\alpha_i:i<\kappa\rangle$ as before, and the $\omega$-J\'onsson function $g$ in $V[G]$. Our goal is to define $f:[\lambda]^{\aleph_0\text{-bd}} \rightarrow \kappa^\omega$ which omits no sequence in $\kappa^\omega$ over any full size subset of $\lambda$. Suppose that $t\in[\lambda]^{\aleph_0\text{-bd}}$. If $\otp(t)\neq\omega\cdot\omega$ then let $f(t)$ be $\vec{0}$, the fixed sequence of zeros. Likewise, if $\gamma_t=\sup(t)$ is not an ordinal of cofinality $\kappa$ in the ground model then we let $f(t) = \vec{0}$. If $\otp(t)=\omega\cdot\omega$ and ${\rm cf}^{\rm V}(\gamma_t)=\kappa$, enumerate the ordinals of $t$ by $\{\langle t_{\omega\cdot m+n}:n\in\omega\rangle :m\in\omega\}$ in increasing order. For each $m\in\omega$ denote the slice $\langle t_{\omega\cdot m+n}:n\in\omega\rangle$ by $t^m$. For every $m,n\in\omega$ let $\rho^m_n$ be the first ordinal $i<\kappa$ such that $t^m_n\leq\beta_i^{\gamma_t}$. Finally, define $f(t)=\langle g(\{\rho^m_n:n\in\omega\}):m\in\omega\rangle$. Assume now that $T\in[\lambda]^\lambda\cap{\rm V}[G]$, and choose a sufficiently large regular $\theta < \lambda$ such that for some $S \in [\lambda]^\theta \cap V$ and some $p \in G$, we have $p \Vdash \check{S} \subseteq \mathunderaccent\tilde-3 {T}$. We step up a bit further, and concentrate on the supremum of the first $\kappa \cdot \omega$ elements of $S$, say $\gamma$. We claim that $f\image[S\cap\gamma]^{\aleph_0}=\kappa^\omega$. Assume that $\langle\eta_m:m\in\omega\rangle\in\kappa^\omega$. Let $\gamma_{-1}$ be $0$ and for every $m\in\omega$ let $\gamma_m$ be the supremum of the first $\kappa$ elements of $S$ above $\gamma_{m-1}$. For every $m\in\omega$ let \[W_m=\{\rho<\kappa: \exists\delta\in S,\gamma_{m-1}\leq\delta<\gamma_{m},\ \rho=\min\{j<\kappa: \delta\leq\beta^{\gamma_m}_j\}\}.\] By the choice of $g$ we choose, for each $m\in\omega$, a sequence $\{\rho^m_n:n\in\omega\}\subseteq W_m$ for which $g(\{\rho^m_n:n\in\omega\})=\eta_m$.
Now we can finish the proof as follows. For every $m\in\omega$ let $t^m=\{t^m_n:n\in\omega\}$ where $t^m_n\in S$ is chosen so that $\rho^m_n=\min\{j<\kappa: t^m_n\leq\beta^{\gamma_m}_j\}$. Define $t=\bigcup\limits_{m\in\omega}t^m$ and observe that $f(t)=\langle g(\{\rho^m_n:n\in\omega\}):m\in\omega\rangle = \langle \eta_m:m\in\omega\rangle$, thus accomplishing the proof. \hfill \qedref{increasealpham} The fact that Prikry forcing through $\kappa$ results in $\alpha_M\geq\kappa^{++}$ is the strongest reason behind Conjecture \ref{conj0} below. Namely, we suspect that $\alpha_M$ cannot be a successor of a singular cardinal with countable cofinality. The most natural way to force it is Prikry forcing, and this provably fails. An analysis of the proof shows that specific properties of Prikry forcing are not essential for the validity of the basic argument. The main point in the above proof can be abstracted as follows. \begin{corollary} \label{corcov} Let $V,W$ be models of ZFC. \newline Assume that $V\subseteq W$ and $\lambda$ is Magidor in both of them. Assume further that $\mu<\lambda, \mu>{\rm cf}(\mu)=\omega$ in $W$ and $\mu$ is regular in $V$. \newline If every $S\in[\lambda]^\lambda\cap W$ contains a set $T\in V$ of order type $\mu \cdot \omega$, then $\alpha_M>\mu^+$ in $W$. \end{corollary} \hfill \qedref{corcov} If $\lambda$ is Magidor and $\kappa<\lambda$ then $\kappa^+<\lambda$, so the chain condition of Prikry forcing through $\kappa$ is promising. However, Prikry forcing is not $\aleph_1$-complete, so it may ruin the Magidority of $\lambda$ or change the value of $\alpha_M$. In order to employ Prikry forcing we need stronger assumptions. By I1$(\kappa,\lambda)$ we mean that $\lambda$ is I1 as witnessed by $\jmath: {\rm V}_{\lambda+1}\rightarrow {\rm V}_{\lambda+1}$ and $\kappa=\crit(\jmath)$. If $\mu$ is a measurable cardinal below $\kappa$ then Prikry forcing for $\mu$ preserves I1$(\kappa,\lambda)$.
More generally, any small forcing notion keeps I1. The following lemma is probably known, but we elaborate: \begin{lemma} \label{ppp} L\'{e}vy\xspace-Solovay for I1. \newline Assume $I1(\kappa,\lambda)$ and $\mathbb{P}\in V_\kappa$ is a forcing notion. Then $I1(\kappa,\lambda)$ holds in ${\rm V}^{\mathbb{P}}$. \end{lemma} \par\noindent\emph{Proof}. \newline Assume $\jmath:V_{\lambda+1}\rightarrow V_{\lambda+1}$ witnesses $I1(\kappa,\lambda)$. Since $\mathbb{P}\in V_\kappa$ we know that $\jmath(p)=p$ for every $p\in\mathbb{P}$, and likewise $\jmath(\mathbb{P})=\mathbb{P}$. Fix a generic subset $G\subseteq\mathbb{P}$. We will use Silver's criterion with $M=N=V_{\lambda+1}$ and $H=G$. Since our formulation of Theorem \ref{ssilver} does not immediately apply to this case, we will continue and give a detailed proof. We claim that there is an elementary embedding $\jmath^+$ from $V_{\lambda+1}^{V[G]}$ into $V_{\lambda+1}^{V[G]}$ which extends $\jmath$. This implies, in particular, that $\kappa=\crit(\jmath^+)$ and hence $V[G]\models I1(\kappa,\lambda)$. For proving this claim notice that if $\mathunderaccent\tilde-3 {x}$ is a name of an element in $V_{\lambda+1}\cap V[G]$ then for some name $\mathunderaccent\tilde-3 {y}\in V_{\lambda+1}$ we have $\Vdash_{\mathbb{P}} \mathunderaccent\tilde-3 {x}=\mathunderaccent\tilde-3 {y}$, so we can focus only on names which belong to $V_{\lambda+1}$. Given a $\mathbb{P}$-name $\mathunderaccent\tilde-3 {y}$ which belongs to $V_{\lambda+1}$, let $\jmath^+({\mathunderaccent\tilde-3 {y}}_G)=(\jmath(\mathunderaccent\tilde-3 {y}))_G$. If $\check{y}$ is a canonical name of an element $y$ in $V_{\lambda+1}^{\rm V}$ then $\jmath^+(y)= \jmath^+(\check{y}_G)=(\jmath(\check{y}))_G=\jmath(y)$, where the last equality follows from the elementarity of $\jmath$. We conclude that $\jmath^+$ extends $\jmath$. Similarly, we argue that $\jmath^+$ is elementary. For this, let $\varphi$ be any first order formula and $\mathunderaccent\tilde-3 {y}$ a $\mathbb{P}$-name.
We see that: $$ V_{\lambda+1}^{V[G]}\models \varphi[{\mathunderaccent\tilde-3 {y}}_G] \Leftrightarrow \exists p\in G, p \Vdash \varphi[\mathunderaccent\tilde-3 {y}] \Leftrightarrow $$ $$ \exists p\in G, p \Vdash \varphi[\jmath(\mathunderaccent\tilde-3 {y})] \Leftrightarrow V_{\lambda+1}^{V[G]}\models \varphi[(\jmath(\mathunderaccent\tilde-3 {y}))_G]. $$ By this, $\jmath^+$ is elementary in the generic extension, so we are done. \hfill \qedref{ppp} \begin{corollary} \label{cori1} Assume $I1(\mu_\alpha,\lambda_\alpha)$ where $\langle\mu_\alpha: \alpha\in{\rm Ord}\rangle$ is a proper class. \newline Then one can force the existence of a Magidor cardinal with $\alpha_M$ arbitrarily large. Likewise, it is consistent that $\lambda$ is Magidor and the distance between $\alpha_J$ and $\alpha_M$ is arbitrarily large. \end{corollary} \par\noindent\emph{Proof}. \newline For the first assertion choose any measurable cardinal $\kappa$ and add a Prikry sequence to it. Then use Theorem \ref{increasealpham} in order to conclude that all instances of I1 with critical point above $\kappa$ are still I1 and hence Magidor, with $\alpha_M$ above $\kappa$. For the second assertion notice that each I1 cardinal $\lambda$ is an $\omega$-limit of measurable cardinals, hence Rowbottom. It follows that for colorings of finite sets of $\lambda$ we can find a full size subset which assumes only countably many colors, i.e. $\alpha_J=\aleph_1$. Now use the previous paragraph to increase $\alpha_M$ while keeping $\lambda$ as I1, so its $\alpha_J$ is still $\aleph_1$. \hfill \qedref{cori1} \begin{remark} \label{r1} The general fact proved above, that if $\mu={\rm cf}(\mu)>2^\kappa$ and $T\in[\mu]^\mu\cap{\rm V}[G]$ then there exists $S\subseteq T$ so that $S\in[\mu]^\mu\cap{\rm V}$, shows that a small Prikry forcing cannot \emph{create} a new Magidor cardinal.
Namely, if $\lambda>{\rm cf}(\lambda)=\omega$ is not Magidor, and $\kappa$ is a measurable cardinal for which $2^\kappa<\lambda$ then Prikry forcing through $\kappa$ keeps the non-Magidority of $\lambda$. This should be compared with a theorem of Woodin \cite{MR2914848}, who proved that if I0$(\kappa,\lambda)$ then Prikry forcing through $\kappa$ makes $\kappa$ I1, and hence Magidor. \end{remark} \hfill \qedref{r1} Merging the above method with L\'{e}vy\xspace collapses, we can show that essentially $\alpha_M$ can be any prescribed successor of a regular cardinal. Moreover, it is consistent that $\alpha_M$ is a successor of a strongly inaccessible cardinal or a strongly Mahlo cardinal. For proving this, we need another lemma about the impact of the L\'{e}vy\xspace collapse on $\alpha_M$. \begin{lemma} \label{qqq} Collapsing $\alpha_M$. \newline Assume $\aleph_0 < \mu={\rm cf}(\mu)<\lambda, \lambda$ is Magidor and $\mu^+<\alpha_M(\lambda)\leq\alpha = {\rm cf}(\alpha)<\lambda$. Assume further that $\alpha$ is $\mu$-closed (i.e.\ for all $\beta < \alpha$, $\beta^\mu < \alpha$). Let $\mathbb{P}=\text{L\'{e}vy\xspace}(\mu,<\alpha)$ and let $G\subseteq\mathbb{P}$ be generic. Then $V[G]\models\alpha_M(\lambda)=\mu^+$. \end{lemma} \par\noindent\emph{Proof}. \newline We have to prove the following two statements: \begin{enumerate} \item [$(\aleph)$] $\Vdash_{\mathbb{P}} \alpha_M\leq\mu^+$. \item [$(\beth)$] $\Vdash_{\mathbb{P}} \alpha_M\geq\mu^+$. \end{enumerate} The first assertion follows from the chain condition. By the assumption of the lemma, $\mathbb{P}$ is $\alpha$-cc. Now let $\mathunderaccent\tilde-3 {f}$ be a name and let us fix a condition $p\in\mathbb{P}$ that forces that $\mathunderaccent\tilde-3 {f}$ is a function from $[\lambda]^{\aleph_0\text{-bd}}$ into $\alpha$. We define another function $g:[\lambda]^{\aleph_0\text{-bd}}\rightarrow\alpha, g\in{\rm V}$, as follows.
Given $s\in[\lambda]^{\aleph_0\text{-bd}}$ let $g(s)=\sup\{\beta<\alpha: \exists q\geq p, q\Vdash\mathunderaccent\tilde-3 {f}(s)=\beta\}$. Notice that $g(s)\in\alpha$ since $\alpha$ is regular and by the chain condition. In the ground model we have $\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\alpha$, so we choose a set $A\in[\lambda]^\lambda$ for which $|g\image[A]^{\aleph_0\text{-bd}}|<\alpha$. This can be done since $\alpha$ is regular and hence not Magidor. Fix an ordinal $\gamma < \alpha$ so that $g\image[A]^{\aleph_0\text{-bd}}\subseteq\gamma$, and notice that $p\Vdash_{\mathbb{P}}\sup\{g(s):s\in [A]^{\aleph_0\text{-bd}}\}\geq \sup\{\mathunderaccent\tilde-3 {f}(s):s\in [A]^{\aleph_0\text{-bd}}\}$, since $\mathbb{P}$ is $\aleph_1$-complete so no new countable sets are forced into the universe. We conclude that $p\Vdash_{\mathbb{P}} \mathunderaccent\tilde-3 {f}\image[A]^{\aleph_0\text{-bd}}\subseteq\gamma<\alpha$, which means that $V[G]\models\lambda\rightarrow[\lambda]^{\aleph_0\text{-bd}}_\alpha$. By the collapse, $V[G]\models\alpha=\mu^+$, so part $(\aleph)$ is accomplished. The second assertion follows from the size of $\mathbb{P}$. We have to find a coloring which exemplifies $\lambda\nrightarrow[\lambda]^{\aleph_0\text{-bd}}_\mu$ in the generic extension. Choose a coloring $c:[\lambda]^{\aleph_0\text{-bd}}\rightarrow\mu$ which shows that $\lambda\nrightarrow[\lambda]^{\aleph_0\text{-bd}}_\mu$ in ${\rm V}$. We claim that $\check{c}$ gives the same relation in $V[G]$. For this recall that $[\lambda]^{\aleph_0\text{-bd}}$ is the same object in ${\rm V}$ and in $V[G]$ by $\aleph_1$-completeness. The only possible problem would be a new $\mathunderaccent\tilde-3 {A}\in[\lambda]^\lambda$ which might omit colors. In order to cope with this problem we shall prove that for some $B\in[\lambda]^\lambda\cap{\rm V}$ we have $\Vdash_{\mathbb{P}} \check{B}\subseteq\mathunderaccent\tilde-3 {A}$.
Indeed, for each ordinal $\beta<\lambda$ let $\sigma_\beta$ be the statement $\check{\beta}\in\mathunderaccent\tilde-3 {A}$. Let $\langle\lambda_n:n\in\omega\rangle$ be an increasing sequence of regular cardinals such that $\alpha<\lambda_0$ and $\lambda=\bigcup\limits_{n\in\omega}\lambda_n$. By induction on $n\in\omega$ we shall define a set $B_n\in[\lambda_n]^{\lambda_n}$ and a condition $q_n$ such that $q_n\Vdash \check{B}_n\subseteq\mathunderaccent\tilde-3 {A}$. Moreover, the sequence $\langle q_n:n\in\omega\rangle$ will be increasing. Suppose $q_m,B_m$ were constructed for every $m<n$. For $\lambda_n$-many ordinals $\beta$ there is a condition which forces $\sigma_\beta$. Since $\lambda_n={\rm cf}(\lambda_n)>|\mathbb{P}|$ we can pick a single condition $q_n$ (above $q_{n-1}$ in case $n>0$) such that $B_n=\{\beta<\lambda_n:q_n\Vdash\sigma_\beta\}$ is of size $\lambda_n$. By the completeness of $\mathbb{P}$ we choose a condition $q$ so that $q\geq q_n$ for every $n\in\omega$. Let $B=\bigcup\limits_{n\in\omega}B_n$. Notice that $q\Vdash\check{B}\subseteq\mathunderaccent\tilde-3 {A}$ and $|B|=\lambda$, so $c\image [B]^{\aleph_0\text{-bd}}=\mu$. Hence $\Vdash_{\mathbb{P}} \check{c}\image [\mathunderaccent\tilde-3 {A}]^{\aleph_0\text{-bd}}=\mu$, as desired. \hfill \qedref{qqq} For the purpose of forcing $\alpha_M$ to be a successor of a singular cardinal we shall need to force that $\alpha_M$ is a successor of a measurable cardinal, and this will be done later. But the above claims enable us to show that $\alpha_M$ can be a successor of small large cardinals. \begin{claim} \label{c1} Making $\alpha_M$ a successor of small large cardinals. \begin{enumerate} \item [$(a)$] For every successor ordinal $\beta$, it is consistent (assuming the existence of large cardinals) that $\alpha_M(\lambda)=\aleph_{\beta+1}$ for some Magidor cardinal $\lambda$.
\item [$(b)$] It is consistent that $\alpha_M(\lambda)$ is a successor of a strongly inaccessible cardinal (and even strongly Mahlo). \end{enumerate} \end{claim} \par\noindent\emph{Proof}. \newline We need the assumption ${\rm I1}(\kappa,\lambda)$ for some $\kappa$ above $\aleph_{\beta}$, so that there exists a measurable cardinal $\mu<\kappa$ with $\aleph_{\beta}<\mu$. Now we force with Prikry forcing through $\mu$, so if $G$ is a generic set for Prikry forcing then $\lambda$ is still Magidor in ${\rm V}[G]$ and $\alpha_M(\lambda)>\mu$. The next stage is to collapse the predecessors of $\alpha_M$ to $\aleph_{\beta}$. In $V[G]$ choose a regular $\alpha$ such that $\alpha_M \leq \alpha < \lambda$ and $\alpha$ is $\aleph_\beta$-closed (such $\alpha$ exists, since $\lambda$ is still a limit of inaccessible cardinals). Let $H$ be a generic set in ${\rm V}[G]$ for the collapse $\text{L\'{e}vy\xspace}(\aleph_\beta, <\alpha)$. It follows from Lemma \ref{qqq} that $\alpha_M(\lambda)=\aleph_{\beta}^+$ in ${\rm V}[G][H]$, so we are done with part $(a)$. For part $(b)$ notice that the collapse (being complete enough) adds no new bounded subsets of the predecessor of $\alpha_M$. Hence if this predecessor is an inaccessible cardinal in ${\rm V}$ then it is still inaccessible in ${\rm V}[G][H]$. A similar argument shows that $\alpha_M$ can be forced to be a successor of a strongly Mahlo cardinal. \hfill \qedref{c1} Question 3.13 from \cite{MR3750266} asks for a stronger statement: Is it consistent that $\alpha_M(\lambda)$ is a successor of a measurable cardinal? We shall prove that a positive answer is consistent, even if one replaces measurability by supercompactness. Basically, we would like to lift $\alpha_M$ above some supercompact cardinal and then to collapse its predecessors to this supercompact. Prikry forcing is a useful way to achieve the first mission, but it ruins supercompactness below it since it adds a weak square (and even stronger forms of squares, see \cite{MR1360144}).
Fortunately, we also have a delicate way to increase $\alpha_M$, based on the quilshon principle from \cite{MR3750266}: \begin{definition} \label{ssss} Quilshon. \newline Assume $\lambda>\delta={\rm cf}(\delta)$. \newline We say that $\pitchfork_{\lambda,\delta}$ holds iff there is a collection $\{S_\gamma:\gamma<\delta\}$ of disjoint subsets of $\lambda$ so that $S_\gamma\cap\eta$ is a stationary subset of $\eta$ for every ordinal $\eta<\lambda$ with ${\rm cf}(\eta)=\delta$ and every $\gamma<\delta$. \end{definition} It has been proved in \cite{MR3750266}, Theorem 2.2, that $\pitchfork_{\lambda,\delta}$ implies $\alpha_M(\lambda)>\delta$. Likewise, adding $\pitchfork_{\lambda,\delta}$ by the partial square forcing of Jensen preserves supercompactness below $\delta$ (Theorems 2.6 and 2.8 of \cite{MR3750266}). This yields the consistency of $\alpha_M$ being a successor of a supercompact cardinal: \begin{theorem} \label{mt} It is consistent that $\lambda$ is Magidor, $\alpha_M(\lambda)=\mu^+$ and $\mu$ is supercompact. \end{theorem} \par\noindent\emph{Proof}. \newline We begin with $I1(\kappa,\lambda)$ and we fix a supercompact cardinal $\mu$ below $\kappa$. This can be arranged if we choose $\lambda$ to be a limit of supercompact cardinals, since $\kappa=\crit(\jmath)$ can be taken arbitrarily large below $\lambda$, as the $n$-th iterate $\jmath^n$ satisfies $\crit(\jmath^n)=\jmath^n(\kappa)$ and the values of $\jmath^n(\kappa)$ are unbounded in $\lambda$. Choose a regular cardinal $\delta\in(\mu,\kappa)$ and force $\pitchfork_{\lambda,\delta}$ while preserving the supercompactness of $\mu$ on the one hand and I1$(\kappa,\lambda)$ on the other hand. The canonical way to force $\pitchfork_{\lambda,\delta}$ gives these properties (see Theorems 2.6 and 2.8 of \cite{MR3750266}). We force now with Laver's forcing, making $\mu$ indestructible under $\mu$-directed-closed forcing notions.
By virtue of Lemma \ref{ppp}, $I1(\kappa,\lambda)$ holds in the generic extension and $\mu<\alpha_M(\lambda)$. We may assume that $\alpha_M(\lambda) < \kappa$: since $I1(\kappa,\lambda)$ implies that there are arbitrarily large cardinals $\kappa' < \lambda$ such that $I1(\kappa',\lambda)$ holds, we may replace $\kappa$ by a larger $\kappa'$ if needed. If $\alpha_M(\lambda)=\mu^+$ we are done. If not, let $\alpha=((\alpha_M)^\mu)^+$ and notice that $\alpha<\kappa$ since $\kappa$, being measurable, is strongly inaccessible. Notice also that $\alpha$ is $\mu$-closed, as required in Lemma \ref{qqq}: for every $\beta<\alpha$ we have $\beta^\mu\leq((\alpha_M)^\mu)^\mu=(\alpha_M)^\mu<\alpha$. Let $\mathbb{P}=\text{L\'{e}vy\xspace}(\mu,<\alpha)$ and choose a generic subset $G\subseteq\mathbb{P}$. Since $\mathbb{P}$ is $\mu$-directed closed, forcing with $\mathbb{P}$ preserves the supercompactness of $\mu$. By Lemma \ref{ppp} we still have I1$(\kappa,\lambda)$, so in particular $\lambda$ remains a Magidor cardinal. By Lemma \ref{qqq}, $V[G]\models\alpha_M(\lambda)=\mu^+$, so the proof is accomplished. \hfill \qedref{mt} We conclude this section with two open problems: \begin{question} \label{q1} Is it consistent, under any assumption, that $\alpha_M$ is a limit cardinal? \end{question} The second question is about $\alpha_J$. Our knowledge about $\alpha_J$ is relatively poor (see \cite{MR826499}). We know how to obtain a J\'onsson cardinal with large $\alpha_J$ but we do not know how to change $\alpha_J$ for a given cardinal. The following is typical: \begin{question} \label{q0} Can we increase $\alpha_J$ to an arbitrarily large value? \end{question} \newpage \section{Successors of singular cardinals} In this section we focus on Question 3.12 from \cite{MR3750266}, namely: is it consistent that $\alpha_M(\lambda)=\mu^+$ when $\mu$ is a singular cardinal? Our approach depends on the tentative answer to the problem.
If we speculate that the answer is no, then the most natural thing would be to express $\mu^+$ as ${\rm tcf}(\prod_{\alpha < {\rm cf}(\mu)}\mu_\alpha,J)$ where $\langle \mu_\alpha \mid \alpha < {\rm cf} (\mu)\rangle$ is an increasing sequence of regular cardinals that is cofinal in $\mu$, and $J$ is an ideal over ${\rm cf}(\mu)$. Now one can fix $f_\alpha:[\lambda]^{\aleph_0\text{-bd}} \rightarrow \mu_\alpha$ which omits no colors, for every $\alpha<{\rm cf}(\mu)$. The hope is to define from these functions a coloring $c:[\lambda]^{\aleph_0\text{-bd}} \rightarrow \mu^+$ which omits no color over full size subsets of $\lambda$. If one wishes to try a positive answer then the natural attempt would be to force $\alpha_M=\mu^+$ where $\mu$ is measurable (or even more) and then to singularize $\mu$. If this process keeps $\alpha_M=\mu^+$ then a positive answer to the above question has been given. Practically, there are obstacles in both directions. It seems that there is no simple way to combine the functions $f_\alpha$ into a single coloring $c:[\lambda]^{\aleph_0\text{-bd}} \rightarrow \mu^+$ which omits no color. Actually, the main theorem of this section shows that $\alpha_M(\lambda)$ can be $\mu^+$ where $\mu>{\rm cf}(\mu)>\omega$, thus proving that this approach fails in general, though it may be helpful in the case of singular cardinals with countable cofinality. The other approach is problematic as well. The simplest attempt, to force $\alpha_M=\kappa^+$ where $\kappa$ is measurable and then to add a Prikry sequence to $\kappa$, fails. By Theorem \ref{increasealpham}, $\alpha_M>\kappa^\omega$ in the generic extension. Since $\kappa>{\rm cf}(\kappa)=\omega$ after Prikry forcing, $\alpha_M>\kappa^+$ in the generic extension. Actually, we believe that this is a ZFC limitation: \begin{conjecture} \label{conj0} Let $\lambda$ be Magidor and $\alpha=\alpha_M(\lambda)$. \newline If $\theta<\alpha$ then $\theta^\omega<\alpha$.
In particular, $\alpha_M$ cannot be $\mu^+$ when $\mu>{\rm cf}(\mu)=\omega$. \end{conjecture} \hfill \qedref{conj0} There is, however, an alternative to Prikry forcing. We shall use Magidor forcing in order to force $\alpha_M$ to be a successor of a singular cardinal. As a warm-up we show that under some assumptions on the Magidor cardinal $\lambda$ one can force $\alpha_M(\lambda)=\kappa^{++}$ by singularizing a measurable cardinal $\kappa$. This will be done with the usual Prikry forcing through $\kappa$, so ${\rm cf}(\kappa)^{V[G]}=\omega$ and yet Prikry forcing does not increase $\alpha_M$ too much. Recall: \begin{definition} \label{defstrong} Strong Magidority. \newline Assume that $\beta<\lambda$. \begin{enumerate} \item [$(\aleph)$] $\lambda$ is $\beta$-Magidor iff $\lambda\rightarrow [\lambda]^{<\beta\text{-bd}}_\lambda$. \item [$(\beth)$] $\lambda$ is strongly Magidor iff $\lambda$ is $\beta$-Magidor for every $\beta<\lambda$. \end{enumerate} \end{definition} It has been shown in \cite{MR3750266} that if $\lambda$ is I1 then $\lambda$ is strongly Magidor. If $\lambda$ is strongly Magidor and $\kappa<\lambda$ (or even $\beta$-Magidor and $\kappa\leq\beta<\lambda$) then we can define $\alpha_M^{<\kappa}(\lambda)$ as the first $\alpha<\lambda$ such that $\lambda\rightarrow[\lambda]_\alpha^{<\kappa\text{-bd}}$. The usual $\alpha_M(\lambda)$ is then $\alpha_M^{<\omega_1}(\lambda)$. \begin{claim} \label{clmprikry} Let $\lambda$ be $\kappa^+$-Magidor where $\kappa<\lambda$ is a measurable cardinal. Assume that $\alpha_M^{<\kappa^+}(\lambda)=\kappa^{++}$, $\mathbb{P}$ is Prikry forcing through $\kappa$ and $G\subseteq\mathbb{P}$ is generic. \newline Then $V[G]\models\alpha_M(\lambda)=\kappa^{++}$, and in particular $\lambda$ is still Magidor in $V[G]$. \end{claim} \par\noindent\emph{Proof}. \newline Let $\mathunderaccent\tilde-3 {f}:[\lambda]^{\aleph_0\text{-bd}}\rightarrow \kappa^{++}$ be a name of a coloring. 
We have to find $A\in[\lambda]^\lambda$ such that the interpretation of $\mathunderaccent\tilde-3 {f}$ restricted to countable subsets of $A$ omits colors from $\kappa^{++}$. It follows from the arguments of \cite{MR3750266} that $2^\kappa<\alpha_M^{<\kappa^+}(\lambda)$, so under our assumption that $\alpha_M^{<\kappa^+}(\lambda)=\kappa^{++}$ we see that $2^\kappa=\kappa^+$. The proofs in \cite{MR3750266} deal with $\aleph_0$-Magidor cardinals, but the same proofs work for the more general case as well. For every $x\in[\lambda]^{<\kappa^+\text{-bd}}\cap V$ we define $g(x)$ to be the supremum over all ordinals $\gamma < \kappa^{++}$ such that there is a $\mathbb{P}$-name $\mathunderaccent\tilde-3 {\tau}$ and a condition $q\in\mathbb{P}$ such that $q$ forces that $\mathunderaccent\tilde-3 {\tau}$ is a countable subset of $x$ and that $\mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau}) = \gamma$. Notice that $g\in V$ as the forcing relation is definable in $V$. Observe also that $g(x)\in\kappa^{++}$ for every $x\in[\lambda]^{<\kappa^+\text{-bd}}\cap V$. This is true since the number of names $\mathunderaccent\tilde-3 {\tau}$ (up to equivalence) which appear in the definition of $g(x)$ is at most $(\kappa \cdot 2^\kappa)^{\kappa \cdot \aleph_0} =\kappa^+$. Indeed, we may assume that the name $\mathunderaccent\tilde-3 {\tau}$ is determined by countably many maximal antichains, each of size at most $\kappa$, and there are at most $\kappa$ many ordinals in $x$. By the chain condition of $\mathbb{P}$ the value of $\mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau})$ can be determined in at most $\kappa$ many different ways. As each value of $\mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau})$ is an ordinal in $\kappa^{++}$ (recall that the range of $\mathunderaccent\tilde-3 {f}$ is $\kappa^{++}$) we see that $g(x)\in\kappa^{++}$.
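To make the counting explicit, recall that $2^\kappa=\kappa^+$ here, so
\[
(\kappa\cdot 2^\kappa)^{\kappa\cdot\aleph_0} = (2^\kappa)^{\kappa} = 2^{\kappa\cdot\kappa} = 2^\kappa = \kappa^+.
\]
Consequently, $g(x)$ is the supremum of at most $\kappa^+\cdot\kappa=\kappa^+$ many ordinals below $\kappa^{++}$, and such a supremum belongs to $\kappa^{++}$ by the regularity of $\kappa^{++}$.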
By the assumption that $\alpha_M^{<\kappa^+}(\lambda)=\kappa^{++}$ in the ground model, there is a set $A\in[\lambda]^\lambda$ and an ordinal $\delta<\kappa^{++}$ such that $g\image[A]^{<\kappa^+\text{-bd}}\subseteq\delta$. Since every countable bounded subset of $A$ in the generic extension is covered by some $x\in[A]^{<\kappa^+\text{-bd}}\cap V$, it follows that $\mathunderaccent\tilde-3 {f}\image[A]^{\aleph_0\text{-bd}}\subseteq\delta$ in the generic extension, so we are done. \hfill \qedref{clmprikry} Our next goal is to show that the assumptions of the above claim are consistent. \begin{claim} \label{clmprepar} Assume that $\lambda$ is I1. \begin{enumerate} \item [$(\aleph)$] It is consistent that $\mu<\lambda$, $\mu$ is supercompact and $\alpha_M^{<\mu^+}(\lambda)=\mu^{++}$. \item [$(\beth)$] It is consistent that $\mu<\lambda$, $\mu$ is supercompact and $\alpha_M^{<\mu}(\lambda)=\mu^+$. \end{enumerate} \end{claim} \par\noindent\emph{Proof}. \newline Choose $\mu<\kappa<\lambda$ so that I1$(\kappa,\lambda)$ and $\mu$ is supercompact. Notice that $\alpha_M^{<\mu^+}(\lambda)>\mu^+$ and $\alpha_M^{<\mu}(\lambda)>\mu$. Let $\alpha = ((\alpha_M^{<\mu^+}(\lambda))^\mu)^+$. Since $\lambda$ is a strong limit cardinal we see that $\alpha<\lambda$. Now we force with $\mathbb{Q} = \text{L\'{e}vy\xspace}(\mu^{+},<\alpha)$ and one can verify that $\alpha_M^{<\mu^+}(\lambda)=\mu^{++}$ in the generic extension by $\mathbb{Q}$, as done in the proof of Theorem \ref{mt}. A similar argument proves part $(\beth)$, but here it is possible to force $\alpha_M^{<\mu}(\lambda)=\mu^+$. Indeed, the pertinent chain condition will be just $\mu^+$-cc, as the elements that we color are of size strictly less than $\mu$. Consequently, $\text{L\'{e}vy\xspace}(\mu,<\alpha)$ is sufficient. \hfill \qedref{clmprepar} The fact that Prikry forcing at $\mu$ might increase $\alpha_M$ only to $\mu^{++}$ is suggestive. A more illuminating formulation of this fact is that basically (under some convenient assumptions that we made) Prikry forcing at $\mu$ makes $\alpha_M=(\mu^\omega)^+$.
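To spell out the last computation: in the setting of Claim \ref{clmprikry} (with $\mu$ playing the role of $\kappa$ there) we have $2^\mu=\mu^+$ in the ground model, and this value is preserved by the $\mu^+$-cc Prikry forcing (by counting nice names), while ${\rm cf}(\mu)=\omega$ in the generic extension. Hence, computed in the generic extension,
\[
\mu^+ \leq \mu^\omega \leq (2^\mu)^{\omega} = 2^\mu = \mu^+,
\]
so that $(\mu^\omega)^+ = \mu^{++}$, in accordance with the conclusion $\alpha_M(\lambda)=\mu^{++}$ of the claim.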
Philosophically this is the correct point for $\alpha_M$, since one can code $\omega$-sequences of $\mu$ in the generic extension by old $\mu$-sets, thus covering all the colors of $\mu^\omega$, while colorings into $(\mu^\omega)^+$ may still omit colors. Mathematically it suggests that if we singularize $\mu$ in such a way that $\mu^\omega=\mu$ in the generic extension then we can force $\alpha_M = (\mu^\omega)^+=\mu^+$ while $\mu>{\rm cf}(\mu)$. This is hopeless with Prikry forcing but it can be done by another Prikry-type forcing notion which makes $\mu>{\rm cf}(\mu)>\omega$. At this point we bring Magidor forcing into the discussion. A natural question arose in the wake of Prikry's work: is it possible to change the cofinality of a measurable cardinal $\kappa$ to an uncountable cofinality without collapsing cardinals? A positive answer was given by Magidor in \cite{MR0465868}, and the forcing notion is nowadays known as Magidor forcing. The definition of the forcing is more involved than that of the classical Prikry forcing, and in particular the required largeness of the cardinal which changes its cofinality is much more than just measurability. There are other differences between Prikry and Magidor forcing; the most important for us is mirrored in the covering properties of countable sets. If $\mathbb{P}$ is Prikry forcing for $\kappa$ and $x = \{x_n:n\in\omega\}$ is a cofinal Prikry sequence, then $x$ cannot be covered by a set of size less than $\kappa$ from the ground model. But if $\mathbb{M}$ is Magidor forcing then the situation changes a bit. The following claim shows that Magidor forcing has a better covering property when new countable sets are considered. \begin{claim} \label{clmcovering} Assume that $\kappa\leq\lambda$ and $\kappa$ is sufficiently large (e.g. $\kappa$ is supercompact). Let $\mathbb{M}$ be Magidor forcing which makes $\kappa>{\rm cf}(\kappa)>\omega$.
\newline If $\mathunderaccent\tilde-3 {\tau}$ is any $\mathbb{M}$-name of an element in $[\lambda]^{\aleph_0\text{-bd}}$ then there are $p\in\mathbb{M}, \theta<\kappa$ and $x\in V$ such that $|x|=\theta$ and $p\Vdash\mathunderaccent\tilde-3 {\tau}\subseteq\check{x}$. \end{claim} \par\noindent\emph{Proof}. \newline For precise definitions and explanation of facts about the Magidor forcing we refer the reader to \cite[Section 5]{MR2768695}, in which the forcing is defined using measure sequences, and in particular to \cite[Subsection 5.2]{MR2768695} for details about the version which is used here. A condition $p\in\mathbb{M}$ is a finite sequence $\langle d_1,\ldots,d_n,(\kappa,A)\rangle$ where each $d_i$ is either an ordinal or a pair $(\nu,A_\nu)$. The part $\langle d_1,\ldots,d_n\rangle$ is the stem of $p$, and if $p,q$ share the same stem then $p\parallel q$. Inasmuch as the number of possible stems is $\kappa$, $\mathbb{M}$ is $\kappa^+$-cc. Fix a generic subset $G\subseteq\mathbb{M}$. Let $\mathunderaccent\tilde-3 {\tau}$ be a name of a countable set of ordinals. Each element in ${\mathunderaccent\tilde-3 {\tau}}_G$ is determined by an antichain of size at most $\kappa$, hence by collecting all the possibilities we construct (in $V$) a set $B$ of ordinals with $|B|\leq\kappa$ such that $\Vdash_{\mathbb{M}}\mathunderaccent\tilde-3 {\tau} \subseteq \check{B}$. Since $B\in V$ and $|B|\leq\kappa$, we may assume without loss of generality that $|B|=\kappa$, and fix a bijection $h:\kappa\rightarrow B$. Let $\mathunderaccent\tilde-3 {\sigma}$ be a name for a countable set of ordinals below $\kappa$, such that $\Vdash_{\mathbb{M}}\mathunderaccent\tilde-3 {\sigma} = h^{-1}(\mathunderaccent\tilde-3 {\tau})$. The interpretation of $\mathunderaccent\tilde-3 {\sigma}$ in the generic extension is a bounded subset of $\kappa$, since $V[G]\models {\rm cf}(\kappa)>\omega$. Hence for some $\beta\in\kappa$ there is a condition $p\in\mathbb{M}$ such that $p\Vdash\mathunderaccent\tilde-3 {\sigma}\subseteq\check{\beta}$.
It follows that $p\Vdash\mathunderaccent\tilde-3 {\tau}\subseteq h\image\beta$. Denote $h\image\beta$ by $x$ and $|\beta|=|x|$ by $\theta$. Observe that $p$ forces that $\mathunderaccent\tilde-3 {\tau}$ is contained in the set $x$ and $x\in V$. Since $\theta<\kappa$, we are done. \hfill \qedref{clmcovering} One can modify the above claim to elements in $[\lambda]^{\eta\text{-bd}}$ where $\eta>\aleph_0$, provided that $\mathbb{M}$ forces ${\rm cf}(\kappa)>\eta$. In this way one can obtain $\alpha_M^{<\eta}(\lambda)=\mu^+$ where $\mu>{\rm cf}(\mu)>\eta$ as we shall prove below. We focus on the usual $\alpha_M$ (that is, with respect to countable sets). Let us start with a technical lemma. \begin{lemma}\label{lemma:notJonsson} Let $\mu < \lambda$ be cardinals and let us assume that $\mu$ is regular, uncountable and $2^{<\mu} = \mu$. If $\lambda\rightarrow[\lambda]^{{<}\mu\text{-bd}}_{\theta}$ and $\theta$ is not J\'onsson then $\lambda\rightarrow[\lambda]^{{<}\mu\text{-bd}}_{\theta, <\theta}$. \end{lemma} \par\noindent\emph{Proof}. \newline Let us fix a function $h \colon \mu \to (P_\mu\mu)^{<\omega}$ such that for all $x \in (P_\mu\mu)^{<\omega}$ there are unboundedly many $\zeta < \mu$ such that $h(\zeta) = x$ and if $h(\alpha) = \langle x_0, \dots, x_{n-1}\rangle$ then $x_0, \dots, x_{n-1} \subseteq \alpha$. Let us assume that $f\colon [\lambda]^{{<}\mu\text{-bd}}\to \theta$ is a function such that for all $A \in [\lambda]^\lambda$, $|f\image [A]^{{<}\mu\text{-bd}}| = \theta$. Let $g\colon \theta^{{<}\omega}\to \theta$ witness the negative partition relation $\theta\nrightarrow [\theta]^{{<}\omega}_\theta$. Let us define a function $F\colon [\lambda]^{{<}\mu\text{-bd}}\to \theta$, that contradicts the assumption $\lambda\rightarrow[\lambda]^{{<}\mu\text{-bd}}_{\theta}$. For $x\in [\lambda]^{{<\mu}\text{-bd}}$, let $\{\xi_i \mid i < \otp(x)\}$ be the increasing enumeration of $x$. 
Define $F(x)$ to be $g(\delta_0, \dots, \delta_{n-1})$ where $\delta_i = f(\{\xi_j \mid j \in a_i\})$ and $h(\otp x) = \langle a_i \mid i < n\rangle$. We claim that for every $A \in [\lambda]^\lambda$, $F\image [A]^{{<}\mu\text{-bd}} = \theta$. Let $B = f \image [A]^{{<}\mu\text{-bd}}$. By the assumption, $|B| = \theta$. By the choice of $g$, $g\image [B]^{{<}\omega} = \theta$. Let $\gamma \in \theta$. Let us pick $\langle \delta_0, \dots, \delta_{n-1}\rangle \in B^{<\omega}$ such that $g(\delta_0, \dots, \delta_{n-1}) = \gamma$. Let $t_i \in [A]^{{<}\mu\text{-bd}}$ be a set of ordinals such that $f(t_i) = \delta_i$. Let $y = t_0 \cup \dots \cup t_{n-1}$. Note that $y$ is still a bounded set of size $<\mu$. Let $\langle \zeta_i \mid i < \otp(y)\rangle$ be the increasing enumeration of the elements of $y$ and let $a_i = \{j < \otp(y) \mid \zeta_j \in t_i\}$. By the assumption on $h$ there is an ordinal $\rho \in [\otp(y), \mu)$ such that $h(\rho) = \langle a_0, \dots, a_{n-1}\rangle$. Let $x$ be $y \cup y'$ where $\otp(x) = \rho$, $\min y' > \sup y$, and $y' \subseteq A$. Clearly, $F(x) = \gamma$. \hfill \qedref{lemma:notJonsson} \begin{theorem} \label{thmmt2} Let $\lambda$ be I1. \newline Then one can force $\alpha_M(\lambda)=\mu^+$ where $\mu$ is a singular cardinal with uncountable cofinality. \end{theorem} \par\noindent\emph{Proof}. \newline Our starting point is a strongly Magidor cardinal $\lambda$ with a supercompact cardinal $\mu$ such that $\mu<\lambda$ and $\alpha_M^{<\mu}(\lambda)=\mu^+$. This can be arranged by part $(\beth)$ of Claim \ref{clmprepar}. By Lemma \ref{lemma:notJonsson}, since $\mu^+$ is not J\'onsson, $\lambda \rightarrow [\lambda]^{<\mu\text{-bd}}_{\mu^+,<\mu^+}$. We shall force with Magidor forcing $\mathbb{M}$ to make $\mu>{\rm cf}(\mu)>\omega$ and our task is to show that $\alpha_M(\lambda)=\mu^+$ in the generic extension by $\mathbb{M}$.
Before proving this statement we need a preliminary assertion which reduces the number of the pertinent names for the coloring that we wish to define. Assume that $\kappa$ is supercompact and $\mathbb{M}$ is Magidor forcing at $\kappa$. Let $A$ be a set of ordinals such that $|A|<\kappa$. We claim that there exists a set $T$ of names, $|T|\leq\kappa$ such that for any name $\mathunderaccent\tilde-3 {\tau}$ and any condition $p\in\mathbb{M}$ for which $p\Vdash \mathunderaccent\tilde-3 {\tau}\subseteq\check{A} \wedge |\mathunderaccent\tilde-3 {\tau}|=\aleph_0$ there exists a name $\mathunderaccent\tilde-3 {\sigma}\in T$ and a condition $q\geq p$ so that $q\Vdash \mathunderaccent\tilde-3 {\sigma} = \mathunderaccent\tilde-3 {\tau}$. For proving this assertion suppose that $|A|=\theta<\kappa$. Fix a bijection $h:A\rightarrow\theta$. Let $\mathcal{A} = \{p_i:i<\kappa\}$ be a maximal antichain of conditions which force an element into the Magidor sequence above $\theta$. For every $i<\kappa$ fix an ordinal $\alpha_i>\theta$ which is forced to be in the Magidor sequence by $p_i$. Let $\langle d^i_1,\ldots,d^i_{n(i)}\rangle$ be the stem of $p_i$ for every $i<\kappa$. There exists $m = m(i)\in[1,n(i)]$ such that $\alpha_i$ appears in $d^i_m$ (either $d^i_m=\alpha_i$ or $d^i_m=(\alpha_i,A_i)$). A fundamental property of Magidor forcing is that $\mathbb{M}/p_i \cong \mathbb{M}_{\alpha_i}/p_i^{\leq m}\times \mathbb{M}/p_i^{>m}$ and new subsets of $\alpha_i$ are forced only by the lower part $\mathbb{M}_{\alpha_i}/p_i^{\leq m}$. The notation $\mathbb{M}_{\alpha_i}/p_i^{\leq m}$ should be understood as all conditions in the Magidor forcing below the cardinal $\alpha_i$, which are stronger than the condition $p_i^{\leq m}$. Notice that the number of names in $\mathbb{M}_{\alpha_i}/p_i^{\leq m}$ for subsets of $\alpha_i$ is at most $2^{\alpha_i}<\kappa$. 
Let $T$ be the set of all names of the form $h^{-1}(y)$ where $y$ is an $\mathbb{M}_{\alpha_i}/p_i^{\leq m}$-name for a subset of $\theta$, for every $i<\kappa$, so $|T|\leq\kappa\cdot\kappa=\kappa$. We claim that $T$ is as required. Indeed, assume that $\mathunderaccent\tilde-3 {\tau}$ is an $\mathbb{M}$-name, $p\in\mathbb{M}$ and $p\Vdash \mathunderaccent\tilde-3 {\tau}\subseteq\check{A} \wedge |\mathunderaccent\tilde-3 {\tau}|=\aleph_0$. By the maximality of $\mathcal{A}$ choose $p_i\in\mathcal{A}$ so that $p \parallel p_i$, and let $q\in\mathbb{M}$ be a condition which satisfies $q\geq p,p_i$. Now $q \Vdash \mathunderaccent\tilde-3 {\tau}\subseteq\check{A} \wedge |\mathunderaccent\tilde-3 {\tau}|=\aleph_0$ as $q\geq p$, and $q\Vdash \mathunderaccent\tilde-3 {\sigma}=\mathunderaccent\tilde-3 {\tau}$ for some $\mathunderaccent\tilde-3 {\sigma}\in T$ since $q\geq p_i$. This completes the proof of the assertion. Let $G\subseteq\mathbb{M}$ be generic. We shall show that $V[G]\models \alpha_M(\lambda)=\mu^+$. By the fact that $\lambda$ is I1 in $V$ we can see that $\lambda$ is I1 (and hence Magidor) in $V[G]$. Assume that $\mathunderaccent\tilde-3 {f}:[\lambda]^{\aleph_0\text{-bd}} \rightarrow \mu^+$. Let $A$ be a bounded subset of $\lambda$ of size less than $\mu$ which belongs to $V$. Define $g(A)$ to be the supremum of all ordinals $\alpha < \mu^+$ for which there are an $\mathbb{M}$-name $\mathunderaccent\tilde-3 {\tau}$ such that the weakest condition forces that $\mathunderaccent\tilde-3 {\tau} \subseteq \check{A} \wedge |\mathunderaccent\tilde-3 {\tau}|=\aleph_0$, and a condition $q$ such that $q\Vdash \mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau}) = \check{\alpha}$. By the preliminary assertion and the chain condition of $\mathbb{M}$ we see that $g(A)\in\mu^+$ for every $A$ as above. Since $g\in V$ and $\alpha_M^{<\mu}(\lambda)=\mu^+$ there are $H\in[\lambda]^\lambda$ and $\beta\in\mu^+$ such that $g\image [H]^{<\mu\text{-bd}}\subseteq\beta$.
We can finish the proof by showing that $\mathunderaccent\tilde-3 {f}\image[H]^{\aleph_0\text{-bd}}$ is forced to be a subset of $\beta$ as well. Suppose not. Choose $\gamma, \mathunderaccent\tilde-3 {\tau}$ and $q$ such that $\gamma>\beta, q\in\mathbb{M}, \mathunderaccent\tilde-3 {\tau}$ is an $\mathbb{M}$-name of a countable bounded subset of $H$ and $q\Vdash\mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau})=\gamma$. Fix $A\in V,\, A\subseteq H,\,|A|=\theta<\mu$ and $r\geq q$ such that $r\Vdash\mathunderaccent\tilde-3 {\tau} \subseteq A$. Notice that $r\Vdash\mathunderaccent\tilde-3 {\tau}\subseteq H$ as well (since $r\geq q$) and hence for some condition $s\geq r$ we have $s\Vdash \mathunderaccent\tilde-3 {f}(\mathunderaccent\tilde-3 {\tau})\leq g(A)<\beta<\gamma$, a contradiction. We have shown that, in the generic extension, $\alpha_M(\lambda) \leq \mu^+$. Note that $\alpha_M(\lambda) \neq \mu$ since $\mu$ is singular. The set of all $\zeta < \mu$ at which $\mathbb{M}$ adds a Prikry sequence is unbounded in $\mu$. Let $\zeta$ be a measurable cardinal of Mitchell order $1$ in the generic Magidor club. Let $p\in\mathbb{M}$ be a condition that forces $\zeta$ to be in the Magidor club. The forcing $\mathbb{M} / p$ decomposes into a product of two forcing notions, $\mathbb{M}' \times \mathbb{P}$ where $\mathbb{P}$ is the Prikry forcing at $\zeta$. By standard arguments, one can verify that $\mathbb{P}$ is equivalent to the Prikry forcing at $\zeta$ in the generic extension by $\mathbb{M}'$. Thus, $\mathbb{P}$ forces that $\alpha_M(\lambda) > \zeta$. Since this holds for unboundedly many $\zeta < \mu$, and $\alpha_M(\lambda)\neq\mu$, we conclude that $\alpha_M(\lambda) \geq \mu^+$ and thus $\alpha_M(\lambda) = \mu^+$, as wanted. \hfill \qedref{thmmt2} As in the previous section, the above proof can be abstracted: basically, all we need is the special covering property of new countable sets by relatively small ground model sets. This is the main distinction between Prikry and Magidor forcing used above.
We conclude the paper with an open problem which goes back to singular cardinals with countable cofinality. The main theorem of this section states that $\alpha_M$ can realize the true cofinality of a product of cardinals below some $\mu>{\rm cf}(\mu)>\omega$. The following is natural: \begin{question} \label{qtcf} Assume that $\mu<\lambda$, $\lambda$ is Magidor and $\mu>{\rm cf}(\mu)=\omega$. \newline Is it possible that $\alpha_M = {\rm tcf}(\prod_{n\in\omega}\mu_n,J^{\rm bd}_\omega)$ for some increasing sequence of regular cardinals $\langle \mu_n:n\in\omega\rangle$? \end{question} \section{Acknowledgments} First author's research is supported by Shelah's ERC grant 338821. We would like to thank the anonymous referee for many helpful suggestions that improved this paper's readability and correctness.
\section{Motivation} F. B. Jones \cite{jones} proved in 1942, in a famous paper, the existence of additive discontinuous functions $f:\mathbb{R}\to\mathbb{R}$ whose graph $G(f)=\{(x,f(x)):x\in\mathbb{R}\}$ is connected, and characterized them. These functions are extraordinary since their graphs are dense connected subsets of the plane, containing exactly one point in each vertical line $\{x\}\times\mathbb{R}$ \cite{sanjuan}. In his paper the author also stated, without proof, that the graph of a discontinuous additive function must be connected or totally disconnected. For this result he just referenced another famous paper, by Hamel \cite{hamel}, but the proof is not there. Indeed, to the best of our knowledge, a proof of this dichotomy result has never appeared in the literature. In this note we prove that the graph of a discontinuous monomial is either connected or totally disconnected, and we characterize the discontinuous monomial functions $f:\mathbb{R}\to\mathbb{R}$ with connected graph by means of a big graph property. We also study the connected components of the graphs of additive functions $f:\mathbb{R}^d\to\mathbb{R}$ for $d\geq 1$. These results should be a good starting point to prove that, for larger classes of functions, such as the generalized polynomials or exponential polynomials over the real line, the graphs of the elements of these sets are either connected or totally disconnected. Finding examples of both situations is an easy corollary of the structure of these functions and Jones's existence result of additive discontinuous functions with connected graph. \section{Dichotomy property for monomials} Recall that $f:\mathbb{R}\to\mathbb{R}$ is an $n$-monomial function if it is a solution of the so-called monomial functional equation \begin{equation} \label{monomials} \frac{1}{n!}\Delta^{n}_hf(x)=f(h) \ \ (x,h\in\mathbb{R}).
\end{equation} It is known that $f$ satisfies $(\ref{monomials})$ if and only if $f(x)=F(x,\cdots,x)$ for a certain multi-additive and symmetric function $F:\mathbb{R}^n\to\mathbb{R}$, and that $f$ is a polynomial function of degree at most $n$ (i.e., $f$ solves Fr\'{e}chet's functional equation $\Delta^{n+1}_hf(x)=0$) if and only if $f(x)=\sum_{k=0}^nf_k(x)$, where $f_k(x)$ is a $k$-monomial function for $k=0,1,\cdots, n$. (See, for example, \cite{czerwik}, \cite{kuczma}, for the proofs of these claims). \begin{theorem} [Dichotomy, for monomial functions $f:\mathbb{R}\to\mathbb{R}$] \label{main1} Let $f:\mathbb{R}\to\mathbb{R}$ be an $n$-monomial function. Then $G(f)$ is either connected or totally disconnected. Furthermore, both cases are attained by concrete examples of discontinuous $n$-monomials $f:\mathbb{R}\to\mathbb{R}$, for every $ n \in \mathbb{N} $. \end{theorem} \begin{proof} Let $f$ be an $n$-monomial. Suppose that $G(f)$ is not totally disconnected. Then there exists a connected component $H\subset G(f)$ containing at least two different points $(x_1,y_1)$ and $(x_2,y_2)$. Clearly, $x_1\neq x_2$. Hence, if $\pi_1:\mathbb{R}^2\to\mathbb{R}$ denotes the horizontal projection of the plane, $\pi_1(x,y)=x$, then $I=\pi_1(H)=\{x\in\mathbb{R}: (x,f(x))\in H\}$ is a connected subset of $\mathbb{R}$ which contains two distinct points. Hence $I$ is an interval with non-empty interior and $H=\{(x,f(x)):x\in I\}$. Set $\alpha=\inf I$ and $\beta=\sup I$. Obviously, $\alpha,\beta\in \mathbb{R}\cup\{-\infty,+\infty\}$ and $\alpha<\beta$. In particular either $\beta>0$ or $\alpha<0$. Assume, with no loss of generality, that $\beta>0$ (the other case can be treated with similar arguments, or reduced to this one by using that every monomial is either even or odd, since $f(rx)=r^nf(x)$ for every rational number $r$ and all $x\in\mathbb{R}$, which implies that $f(-x)=(-1)^n f(x)$ for all $x\in\mathbb{R}$).
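The rational homogeneity just used can be read off directly from the representation $f(x)=F(x,\cdots,x)$: a multi-additive function is $\mathbb{Q}$-homogeneous in each of its $n$ variables, so for every rational number $r$ and every $x\in\mathbb{R}$ we have

```latex
f(rx)\;=\;F(rx,\cdots,rx)\;=\;r^{n}\,F(x,\cdots,x)\;=\;r^{n}f(x),
```

and the particular choice $r=-1$ yields the parity relation $f(-x)=(-1)^{n}f(x)$ invoked above.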
Given $q\in\mathbb{Q}\setminus\{0\}$ we consider the maps \[ \phi_{q,k}(x,y)=(q^kx,q^{kn}y); \ \text{ whenever } (x,y)\in\mathbb{R}^2 \text{ and } k\in\mathbb{Z}. \] Then $\phi_{q,k}:\mathbb{R}^2\to\mathbb{R}^2$ is continuous. Hence $H_{q,k}=\phi_{q,k}(H)$ is a connected subset of the plane for every $k\in\mathbb{Z}$. Given $k\in\mathbb{Z}$, we have that $H_{q,k}\cap H_{q,k+1}\neq \emptyset$ if and only if for some $(x,f(x)), (x^*,f(x^*))\in H$ the equality $\phi_{q,k}(x,f(x))=\phi_{q,k+1}(x^*,f(x^*))$ holds. In other words, this intersection is nonempty if and only if \[ (q^kx,q^{kn}f(x))=(q^{k+1}x^*,q^{(k+1)n}f(x^*)) \text{ for certain } x,x^*\in I. \] Forcing equality between the first components of these vectors we get $q^{k}x=q^{k+1}x^*$, which means that $x^*=\frac{1}{q}x$. Furthermore, under this restriction, we get the equality between the second components of the vectors for free, since \[ q^{(k+1)n}f(x^*)= q^{(k+1)n}f \left( \frac{1}{q}x \right) = q^{(k+1)n} \left(\frac{1}{q}\right)^nf(x) = q^{kn}f(x). \] Thus, we have demonstrated that $H_{q,k}\cap H_{q,k+1}\neq \emptyset$ if and only if there exists $x\in \mathbb{R}$ such that $\{x,\frac{1}{q} x\}\subset I$. In particular, when this holds true, the property is satisfied for all $k\in\mathbb{Z}$ simultaneously and $\widetilde{H}_q=\bigcup_{k\in\mathbb{Z}}H_{q,k}$ is connected. Furthermore, $(t,f(t))\in G(f)$ implies that $\phi_{q,k}(t,f(t))=(q^kt,(q^{k})^nf(t))= (q^kt,f(q^kt))\in G(f)$, so that $H_{q,k}=\phi_{q,k}(H)\subseteq G(f)$ for all $k\in\mathbb{Z}$. Hence $\widetilde{H}_q$ is always a subset of $G(f)$. Assume, for the moment, that $0<\alpha$ and $\beta<\infty$, and take $q\in\mathbb{Q}$ such that $1<q<\frac{\beta}{\alpha}$. Then $0<\alpha$ and $1<q<\frac{\beta}{\alpha}$ imply that $\alpha<q\alpha <\beta$. Take $x$ such that $q\alpha < x<\beta$. Then $x\in I$ and $\alpha <\frac{1}{q}x<\frac{1}{q}\beta<\beta$, so that $\frac{1}{q}x\in I$ too.
Thus, in this case, $\widetilde{H}_q$ is a connected subset of $G(f)$. But we also have, in this case, that $\widetilde{H}_q=G_{+}^*(f):= \{(x,f(x)): x>0\}$. This implies that $(0,\infty)\subseteq I$, which contradicts both $\alpha >0$ and $\beta <+\infty$. Hence either $ \alpha \leq 0 $ or $\beta=+\infty$. If $0<\alpha <\beta=+\infty$ and $0<x^*\in I$, then for any $q>1$, $x=qx^*$ and $\frac{1}{q}x=x^*$ both belong to $I$, so that $\widetilde{H}_q= G_{+}^*(f)$ is connected, which contradicts $0<\alpha$. If $ \alpha \leq 0 $ and $ 0 < x \in I $, then for any $q>1$, $x^*=\frac{1}{q}x$ and $x=qx^*$ both belong to $I$, so that $\widetilde{H}_q$ is connected and contains $G_{+}^*(f)$. This forces $\beta=+\infty$ again. If, in particular, $ \alpha < 0 $, we have $ 0 \in I $, and we obtain, analogously to the previous arguments, that $(-\infty,0)\subseteq I$. In this case we thus have $ I = \mathbb{R} $ and $ H = G(f) $. Finally, let us consider the case $ \alpha = 0 $. Then either $ I = [0,\infty) $ or $ I = (0,\infty) $. In the former case $ H = G_{+}(f) := \{(x,f(x)): x\geq 0\} $ is connected. Furthermore, if we define $\varphi(x,y)=(-x,(-1)^n y)$, it is clear that $ G_{-}(f) := \{(x,f(x)): x\leq 0\} = \varphi(G_{+}(f)) $ is also a connected subset of $G(f)$. Furthermore, $ (0,0) = (0,f(0)) \in G_{+}(f) \cap G_{-}(f) $, so that $ G(f) = G_{+}(f) \cup G_{-}(f) $ is connected. In the latter case, when $ I = (0,\infty) $, we have that $ G_{+}^*(f) $ is connected and $ G_{+}(f) = G_{+}^*(f) \cup \{(0,0)\} $ is disconnected. Hence there exist open sets $ U \subset \mathbb{R}^2 $ and $ V \subset \mathbb{R}^2 $ such that $ U \cap V = \emptyset $, $ U \cap G_{+}(f) \neq \emptyset $, $ V \cap G_{+}(f) \neq \emptyset $, and $ G_{+}(f) \subseteq U \cup V $. We may assume, with no loss of generality, that $ (0,0) \in U $. Then $ V \cap G_{+}^*(f) \neq \emptyset $. 
Since \[ (0,0) = \lim_{m \to \infty} \left( \frac{1}{m} \,,\, \frac{1}{m^n} f(1) \right) = \lim_{m \to \infty} \left( \frac{1}{m} \,,\, f \left( \frac{1}{m} \right) \right) \] is an accumulation point of $ G_{+}^*(f) \,$, we obtain $ U \cap G_{+}^*(f) \neq \emptyset $ as well. This yields that $ G_{+}^*(f) $ is disconnected, which is a contradiction. So far, we have demonstrated that, if $f:\mathbb{R}\to\mathbb{R}$ is a monomial function, then $G(f)$ is either connected or totally disconnected. Let us now show that both cases are attained by concrete examples. For totally disconnected graphs the example can be constructed easily. Indeed, let $\gamma$ be any Hamel basis of $\mathbb{R}$ satisfying $1\in\gamma$ and let $n\in\mathbb{N}$ be a positive integer. We consider $A_{\gamma}:\mathbb{R}\to\mathbb{R}$, the unique $\mathbb{Q}$-linear map which satisfies $A_{\gamma}(1)=1$ and $A_{\gamma}(b)=0$ for every $b\in\gamma\setminus\{1\}$. Obviously $f_n(x)=A_{\gamma}(x)^n$ is an $n$-monomial and $f_{n}(\mathbb{R})\subseteq \mathbb{Q}$. Hence the graph of $f_{n}$ is totally disconnected. The existence of discontinuous $n$-monomials with connected graph follows from the existence of discontinuous additive functions $f:\mathbb{R}\to\mathbb{R}$ with connected graph $G(f)$, a fact that was demonstrated by Jones by using a nontrivial set theoretical argument on ordinals \cite[Theorems 4 and 5]{jones}. Indeed, assume that $f:\mathbb{R}\to\mathbb{R}$ is additive, discontinuous, and $G(f)$ is connected. Then $F(x)=x^{n-1}f(x)$ is a discontinuous $n$-monomial function with connected graph, since the function $\phi:\mathbb{R}^2\to\mathbb{R}^2$ given by $\phi(x,y)=(x,x^{n-1}y)$ is continuous and transforms the graph of $f$ onto the graph of $F$. \end{proof} \begin{corollary} [Dichotomy, for additive functions $f:\mathbb{R}\to\mathbb{R}$] \label{dichotomy} Let $f:\mathbb{R}\to\mathbb{R}$ be an additive function. Then $G(f)$ is connected or totally disconnected.
Furthermore, there exist discontinuous additive functions $f:\mathbb{R}\to\mathbb{R}$ with connected graph $G(f)$. \end{corollary} \noindent \textbf{Proof. } Any additive function is a $1$-monomial function. {\hfill $\Box$} \begin{remark} If $G(f)$ is connected, we have two cases: either $f$ is continuous and $G(f)=V$ is a one-dimensional vector space, or $G(f)$ is a connected dense additive subgroup of $\mathbb{R}^2$. \end{remark} The following theorem may also be of interest: \begin{theorem} [$(d+2)$-chotomy property of additive functions] \label{d} If $f:\mathbb{R}^d\to\mathbb{R}$ is an additive function, then: \begin{itemize} \item[$(a)$] There exists $s\in\{0,1,\cdots,d+1\}$ such that the connected component $G$ of $G(f)$ which contains the point $(0,0)$ is a dense subgroup of an $s$-dimensional vector subspace of $\mathbb{R}^{d+1}$. Furthermore, every connected component of $G(f)$ results from $G$ by a translation. \item[$(b)$] All cases described in $(a)$ are attained by concrete examples. \end{itemize} \end{theorem} \noindent \textbf{Proof. } $(a)$ Before introducing the main argument, it is necessary to recall two basic facts about topological groups. Concretely, if $G$ is a topological group, and $G_0$ denotes the identity component of $G$ (i.e., the biggest connected subset of $G$ which contains the identity $e\in G$), then $G_0$ is a closed normal subgroup of $G$. Furthermore, the elements of the quotient group $G/G_0$ are just the connected components of $G$ \cite{P}. We consider an additive function $f:\mathbb{R}^d\to\mathbb{R}$ and we set $G(f)=\{(x,f(x)):x\in \mathbb{R}^d\}$. Obviously, the additivity of $f$ implies that $G(f)$ is an additive subgroup of $\mathbb{R}^{d+1}$. Let $G$ be the connected component of $G(f)$ which contains the zero element. Then every connected component of $G(f)$ results from $G$ by a translation, and $G$ is a connected additive subgroup of $\mathbb{R}^{d+1}$.
Hence, its topological closure $\overline{G}$ is also a connected subgroup of $\mathbb{R}^{d+1}$. It is known that the topological closure of any additive subgroup $H$ of $\mathbb{R}^{d+1}$ satisfies $\overline{H}=V\oplus \Lambda$ for a certain vector subspace $V$ of $\mathbb{R}^{d+1}$ and a discrete additive subgroup $\Lambda$ of $\mathbb{R}^{d+1}$ (see \cite[Theorem 3.1]{Wald} for a proof of this fact). It follows that $\overline{G}=V$ for a certain vector subspace $V$ of $\mathbb{R}^{d+1}$. Hence every connected component of $G(f)$ is the translation $\tau+G$ of a dense connected additive subgroup $G$ of the vector space $V$ for some $\tau\in \mathbb{R}^{d+1}$. Note that, if $V=\{0\}$ then $G(f)$ is totally disconnected and, if $V=\mathbb{R}^{d+1}$, then $G(f)$ is a connected dense additive subgroup of $\mathbb{R}^{d+1}$. All the other cases represent an intermediate situation. For example, if $f$ is continuous, then $G(f)=V$ is a $d$-dimensional vector subspace of $\mathbb{R}^{d+1}$. \noindent $(b)$ All cases described by Theorem \ref{d} can be constructed easily, since all functions $f:\mathbb{R}^d\to\mathbb{R}$ of the form $f(x_1,\cdots,x_d)=A_1(x_1)+A_2(x_2)+\cdots +A_d(x_d)$, with $A_k:\mathbb{R}\to\mathbb{R}$ additive for each $k$, are additive, and we can use the dichotomy result for each one of these functions $A_k$, $k=1,\cdots,d$. {\hfill $\Box$} \begin{remark} While searching in the literature for a demonstration of Corollary \ref{dichotomy}, the first author mentioned this question to Professor L\'{a}szl\'{o} Sz\'{e}kelyhidi, who was also unable to find a proof anywhere. He then obtained a very nice independent proof of the result \cite{laszlo}. Indeed, for $d=1$ we get the dichotomy result as follows (this is Sz\'{e}kelyhidi's idea): Let $\pi_1:\mathbb{R}\times \mathbb{R}\to \mathbb{R}$ denote the horizontal projection $\pi_1(x,y)=x$, and let $W=\pi_1(G)$ be the projection of the connected component $G$ of $G(f)$ which contains the zero element.
Then $\pi_1(G)=\{0\}$ or $\pi_1(G)=\mathbb{R}$, since the only connected subgroups of the real line are $\{0\}$ and $\mathbb{R}$. Thus, if $G(f)$ is not totally disconnected, then $\pi_1(G)=\mathbb{R}$, which implies $G=G(f)$ and hence, $G(f)$ is connected. Unfortunately, this simple proof seems to be very difficult to generalize to the case of monomial functions $f:\mathbb{R}\to\mathbb{R}$, since the graph of an $n$-monomial function is in general not an additive subgroup of $\mathbb{R}^2$. We hope this justifies introducing the proof of Theorem \ref{main1}. \end{remark} \section{A big-graph property} Recently, Almira and Abu-Helaiel characterized the topological closures of the graphs of monomial functions as follows \cite[Theorem 2.7]{AK_monomials}: \begin{theorem}[Almira, Abu-Helaiel] \label{Graph_Monomials} Assume that $f:\mathbb{R}\to\mathbb{R}$ is a discontinuous $n$-monomial function, let $\Gamma_f=\overline{G(f)}^{\mathbb{R}^2}$, and let us consider the function $A_n(h)=f(h)/h^n$, for $h\neq 0$. Let $\alpha=\sup_{h\in\mathbb{R}\setminus\{0\}}A_n(h)$ and $\beta=\inf_{h\in\mathbb{R}\setminus\{0\}}A_n(h)$. Then: \begin{itemize} \item[$(a)$] If $\alpha=+\infty$ and $\beta=-\infty$, then $\Gamma_f =\mathbb{R}^2$. \item[$(b)$] If $\alpha=+\infty$ and $\beta\in\mathbb{R}$, then $\Gamma_f =\{(x,y):y\geq \beta x^{n}\}$ if $n=2k$ is an even number, and $\Gamma_f =\{(x,y):x\leq 0 \text{ and } y\leq \beta x^{n}\} \cup \{(x,y):x\geq 0\text{ and } y\geq \beta x^{n}\}$ if $n=2k+1$ is an odd number. In particular, if $\beta=0$, we get the half space $\Gamma_f =\{(x,y):y\geq 0\}$ for $n=2k$ and the union of the first and third quadrants $\Gamma_f =\{(x,y):xy\geq 0\}$, for $n=2k+1$. \item[$(c)$] If $\alpha\in\mathbb{R}$ and $\beta=-\infty$, then $\Gamma_f =\{(x,y):y\leq \alpha x^{n}\}$ if $n=2k$ is an even number, and $\Gamma_f =\{(x,y):x\leq 0 \text{ and } y \geq \alpha x^{n}\} \cup \{(x,y):x\geq 0\text{ and } y\leq \alpha x^{n}\}$ if $n=2k+1$ is an odd number.
In particular, if $\alpha=0$, we get the half space $\Gamma_f =\{(x,y):y\leq 0\}$ for $n=2k$ and the union of the second and fourth quadrants $\Gamma_f =\{(x,y):xy\leq 0\}$, for $n=2k+1$. \end{itemize} Furthermore, for all $n\geq 2$ there are examples of discontinuous $n$-monomial functions $f$ verifying each one of the claims $(a),(b),(c)$ above. \end{theorem} We use this result to prove the following big graph property: \begin{theorem}[Big graph property] \label{main} Let $f:\mathbb{R}\to\mathbb{R}$ be a discontinuous $n$-monomial function and let $\Gamma_f=\overline{G(f)}^{\mathbb{R}^2}$ and $\Omega_f = \mathrm{Int} (\Gamma_f)$. Then $G(f)$ is connected if and only if $G(f)$ intersects every continuum $K\subseteq \Omega_f$ which touches two distinct vertical lines. \end{theorem} \begin{remark} Recall that a continuum is a compact connected set with more than one point. \end{remark} \begin{proof} The proof follows the very same arguments used by Jones \cite{jones} in his original proof for the case of additive functions. The main difference is that, for additive functions, the closure of the graph of a discontinuous additive function is the whole plane while, for monomials, the corresponding sets are those shown in Theorem \ref{Graph_Monomials}. Assume that $G(f)$ is not connected. Then $G(f)\subseteq U\cup V$ with $U,V$ open subsets of the real plane, $U\cap V=\emptyset$, $G(f)\cap U\neq \emptyset$ and $G(f)\cap V\neq \emptyset$. We can assume that $U$ is connected, since connected components of open subsets of $\mathbb{R}^2$ are open sets. Indeed, by enlarging $U$ or $V$, deleting suitable parts of the borders $\partial U$ or $\partial V$, we can assume that both $U$ and $V$ are connected and share a common border $\partial U=\partial V$.
Furthermore, this common frontier is necessarily connected, since the connectedness of the boundary of an open domain in $\mathbb{R}^2$ is equivalent to the connectedness of its complement (indeed, this result holds true for domains in $\mathbb{R}^n$ for all $n>1$ \cite{CKL}). Now, the density of $G(f)$ in $\Gamma_f=\overline{\Omega_f}^{\mathbb{R}^2}$ implies that $V\cap \Gamma_f=\text{Ext}_{\Gamma_f}(U\cap \Gamma_f)$. To prove this, we first observe that $V\cap \Gamma_f\subseteq \text{Ext}_{\Gamma_f}(U\cap \Gamma_f)$, since $V\cap \Gamma_f$ is an open set in the relative topology of $\Gamma_f$ which has empty intersection with $U\cap \Gamma_f$. Thus, if $V\cap \Gamma_f \neq \text{Ext}_{\Gamma_f}(U\cap \Gamma_f)$, then there exist $\varepsilon>0$ and $(x_0,y_0)\in \Gamma_f$ such that $B((x_0,y_0),\varepsilon)\cap \Gamma_f \subseteq \text{Ext}_{ \Gamma_f }(U\cap \Gamma_f)\setminus V$, which contradicts that $G(f)\subseteq U\cup V$, since $G(f)$ has at least one point in $B((x_0,y_0),\varepsilon)\cap \Gamma_f$. Now we can use the characterization of the sets $\Gamma_f$ given in Theorem \ref{Graph_Monomials} to claim that $\partial U \cap \Omega_f$ contains a continuum which intersects two distinct vertical lines, since otherwise $\partial U$ would contain the intersection of a vertical line with $\Gamma_f$, a fact which leads to a contradiction, since $G(f)$ is a graph and hence intersects all vertical lines. This proves that, if $G(f)$ intersects every continuum $K\subseteq \Omega_f$ which touches two distinct vertical lines, then $G(f)$ is connected. Let us now assume that $G(f)$ is connected and let $K\subseteq \Omega_f$ be a continuum which touches two distinct vertical lines. If $K$ has non-empty interior then $G(f)\cap K\neq \emptyset$, since $G(f)$ is dense in $\Gamma_f$.
If $\text{Int}(K)=\emptyset$ and $(x_0,y_0),(x_1,y_1)\in K$ with $x_0<x_1$, then, $K\cap ([x_0,x_1]\times\mathbb{R})$ separates $([x_0,x_1]\times\mathbb{R})\cap \Gamma_f$ in two (or more) components, since $K$ does not intersect the frontier of $\Gamma_f$. Now, $G(f)$ contains at least a point of each one of these components, since $G(f)$ is dense in $\Gamma_f$. It follows that $K\cap G(f)\neq \emptyset$, since $G(f)$ is connected, by hypothesis. Hence, if $G(f)$ is connected, then $G(f)$ intersects every continuum $K\subseteq \Omega_f$ which touches two distinct vertical lines. \end{proof} \begin{remark} We can use the characterization above for another proof of the dichotomy property for monomials as follows: If $f$ is continuous then $G(f)$ is connected. Hence we assume that $f$ is a discontinuous $n$-monomial function. As a first step, we reduce our study to the case of monomial functions with even degree, by demonstrating that $G(f)$ is connected if and only if $G(g)$ is connected, where $g(x)=xf(x)$. The implication $G(f)$ connected implies $G(g)$ connected is trivial. Let us prove the other implication. Indeed, assume that $G(g)$ is connected with $g(x)=xf(x)$, $f(x)$ a $(2k+1)$-monomial function. Let $K$ be a continuum included into $\Omega_f$ which touches two distinct vertical lines. Then $F=\{(x,xy):(x,y)\in K\}$ is a continuum, $F\subseteq \Omega_g$, and $F$ touches two distinct vertical lines. Hence Theorem \ref{main} and the connectedness of $G(g)$ imply that there exists $x_0\neq 0$ such that $(x_0,g(x_0))=(x_0,x_0f(x_0))\in F$. Thus $(x_0,f(x_0))\in K$ and $G(f)$ contains a point of $K$. It follows, again from Theorem \ref{main}, that $G(f)$ is connected. Let us thus assume (with no loss of generality) that $n=2k$ is even. 
Thanks to Theorem \ref{Graph_Monomials} we can also assume with no loss of generality that $\Gamma_f =\mathbb{R}^2$ or $\Gamma_f =\{(x,y):y\geq \beta x^{2k}\}$ for a certain $\beta\in\mathbb{R}$, since the other cases have analogous proofs. If $G(f)$ is not connected, there exist a continuum $K\subseteq \Omega_f$ with empty interior and two points $(x_0,y_0),(x_1,y_1)\in K$ with $x_0<x_1$ such that $G(f)\cap K=\emptyset$. Obviously, the continuum $K$ separates $(]x_0,x_1[\times \mathbb{R}) \cap \Gamma_f$ into several disjoint open subsets of $\Gamma_f$ (with the relative topology). Hence we can assume that $$(]x_0,x_1[\times \mathbb{R})\cap\Gamma_f \setminus K=U_K\cup V_K,$$ with $U_K,V_K$ disjoint open subsets of $\Gamma_f$, $U_K$ connected, and $\{\alpha\}\times [\beta,+\infty)\subseteq U_K$ for certain $\alpha\in ]x_0,x_1[$ and $\beta>0$. Let us set $U_K^*=U_K\cup \{(x,y)\in \partial U_K: (x,y)\not\in\partial V_K\}$, $V_K^*=V_K\cup \{(x,y)\in \partial V_K: (x,y)\not\in\partial U_K\}$. Then $U_K^*,V_K^*$ are open connected subsets of $\Gamma_f$, $U_K^*\cap V_K^*=\emptyset$, $G(f)\cap (]x_0,x_1[\times \mathbb{R}) \subseteq U_K^*\cup V_K^*$, and $K^*=\partial U_K^*=\partial V_K^*$ is a continuum which separates $(]x_0,x_1[\times \mathbb{R}) \cap \Gamma_f$ into exactly two disjoint open connected subsets of $\Gamma_f$, namely $U_{K^*}=U_K^*$ and $V_{K^*}=V_K^*$, with $G(f)\cap U_{K^*}\neq \emptyset$ and $G(f)\cap V_{K^*}\neq \emptyset$. Furthermore, the relation $f(\lambda x)=\lambda^nf(x)$ for all $x\in\mathbb{R}$ and all $\lambda \in\mathbb{Q}$ implies that $G(f)\cap \varphi_{\lambda}(K^*)=\emptyset$ for every rational number $\lambda\neq 0$, where $\varphi_{\lambda}(x,y)=(\lambda x,\lambda^ny)$. Let us prove that the connected component of $G(f)$ which contains the point $(x,f(x))$ with $x\in ]x_0,x_1[$ is the set $\{(x,f(x))\}$. To prove this, we note that the sets $U_K^*,V_K^*$ separate any of these points from the points $(y,f(y))$ of the graph satisfying $y\not\in ]x_0,x_1[$. 
Thus it is only necessary to consider, to prove our claim, the following two cases: \noindent \textbf{Case 1: $(x,f(x)) \in U_K^*$.} The density of $G(f)$ in $\Gamma_f$ implies that there exists a sequence of open intervals $]a_n,b_n[ \subset ]x_0,x_1[$ such that $a_n<x<b_n$, $\lim_{n\to\infty}|a_n-b_n|=0$, and $(a_n,f(a_n)),(b_n,f(b_n)) \in V_{K^*}$. \begin{figure}[htb] \centering \includegraphics[scale=0.3]{figure_conectedness.jpg} \caption[]{A visualization of the sets $C_n$} \end{figure} Hence $$ C_n=((\{a_n,b_n\}\times\mathbb{R})\cap U_{K^*})\cup (K^* \cap\overline{U_{K^*}})$$ defines a sequence of connected subsets of the plane, each of which separates the point $(x,f(x))$ from any other point $(y,f(y))$ with $y\neq x$, $y\in ]x_0,x_1[$, and $G(f)\cap C_n=\emptyset$ for all $n$ (see the Figure). It follows that $\{(x,f(x))\}$ is the connected component which contains the point $(x,f(x))$. \noindent \textbf{Case 2: $(x,f(x)) \in V_K^*$.} This case has a proof analogous to that of Case 1. The proof now ends easily. Indeed, if $(x,f(x))$ is any point of $G(f)$, there exists $\lambda\in\mathbb{Q}$ such that $x\in ]\lambda x_0,\lambda x_1[$ (since $\mathbb{Q}$ is a dense subset of $\mathbb{R}$), and we can use the arguments above with $\varphi_{\lambda}(K^*)$ instead of $K^*$. \end{remark} \bibliographystyle{amsplain}
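The scaling relation $f(\lambda x)=\lambda^n f(x)$ for rational $\lambda$, which underlies the map $\varphi_\lambda$ used in the remark, can be checked exactly in rational arithmetic. The following Python sketch does so for the continuous $2$-monomial $f(x)=3x^2$ (an illustrative choice of ours, not taken from the text); it verifies that $\varphi_\lambda$ sends graph points to graph points:

```python
# Illustration of the scaling used above: for an n-monomial f one has
# f(lam*x) = lam**n * f(x) for rational lam, so the map
# phi_lam(x, y) = (lam*x, lam**n * y) sends the graph G(f) into itself.
# Checked here for the continuous 2-monomial f(x) = 3*x**2 (our example).
from fractions import Fraction

n = 2
def f(x):
    return 3 * x * x

for lam in (Fraction(1, 3), Fraction(5, 2), Fraction(-7, 4)):
    for x in (Fraction(1), Fraction(-2, 5), Fraction(9, 7)):
        # image of the graph point (x, f(x)) under phi_lam
        xi, yi = lam * x, lam**n * f(x)
        assert yi == f(xi)   # the image is again a graph point
```

Exact `Fraction` arithmetic is used so the identity holds with equality rather than up to floating-point error.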
\section{Introduction} The search for Higgs bosons is among the prime tasks of the CERN Large Hadron Collider (LHC) \cite{Kunszt:1991qe}. While the standard model (SM) of elementary-particle physics contains one complex Higgs doublet, from which one neutral $CP$-even Higgs boson $H$ emerges in the physical particle spectrum after the spontaneous breakdown of the electroweak symmetry, the Higgs sector of the minimal supersymmetric extension of the SM (MSSM) consists of a two-Higgs-doublet model (2HDM) and accommodates a quintet of physical Higgs bosons: the neutral $CP$-even $h^0$ and $H^0$ bosons, the neutral $CP$-odd $A^0$ boson, and the charged $H^\pm$-boson pair. At the tree level, the MSSM Higgs sector has two free parameters, which are usually taken to be the mass $m_{A^0}$ of the $A^0$ boson and the ratio $\tan\beta=v_2/v_1$ of the vacuum expectation values of the two Higgs doublets. In the following, we focus our attention on the $h^0$, $H^0$, and $A^0$ bosons, which we collectively denote by $\phi$. A recent discussion of $H^\pm$-boson production at the LHC may be found in Refs.~\cite{BarrientosBendezu:1998gd,BarrientosBendezu:1999vd,BarrientosBendezu:2000tu} and the references cited therein. The dominant source of $\phi$ bosons is their single production by $gg$ fusion, $gg\to\phi$, which is mediated by heavy-quark \cite{Georgi:1977gs} and squark \cite{Dawson:1996xz} loops. Another, less important mechanism of single $\phi$-boson production is $b\bar b\to\phi$ \cite{Dicus:1988cx}. The $\phi$ bosons thus produced have essentially zero transverse momentum ($p_T$). In order for the $\phi$ bosons to obtain finite $p_T$, they need to be produced in association with one or more other particles or jets ($j$). 
In leading order (LO), $j\phi$ associated production proceeds through the partonic subprocesses $gg\to g\phi$, $gq\to q\phi$, and $q\bar q\to g\phi$, which again involve heavy-quark \cite{Ellis:1987xu,Brein:2003df} and squark \cite{Brein:2003df,Muhlleitner:2006wx} loops. Alternatively, the $\phi$ bosons can be produced, with interesting rates, in association with (i) a dijet via intermediate-boson fusion, $qq^\prime\to qq^\prime V^*V^*\to qq^\prime\phi$, where $q$ and $q^\prime$ stand for any light flavor of quark or antiquark, $V=W^\pm,Z$, and virtual particles are marked by an asterisk \cite{Cahn:1983ip}; (ii) a quark-antiquark pair of heavy flavor $Q=t,b$ via $gg,q\bar q\to Q\bar Q\phi$ \cite{Kunszt:1984ri}; (iii) an intermediate boson via $q\bar q^\prime\to W^\pm\phi$ \cite{Glashow:1978ab,Han:1991ia,Ohnemus:1992bd,Djouadi:1999ht}, $q\bar q\to Z\phi$ \cite{Glashow:1978ab,Han:1991ia,Djouadi:1999ht,Kniehl:1990iva,Yin:2002sq,Yang:2003kr,Kao:2004vp,Li:2005qna}, and $gg\to Z\phi$ \cite{Kniehl:1990iva,Yin:2002sq,Yang:2003kr,Dicus:1988yh,Kao:1991xg,Kao:2003jw,Brein:2003wg}; or (iv) another, possibly different $\phi$ boson via $q\bar q\to\phi_1\phi_2$ \cite{Djouadi:1999ht,Dawson:1998py,BarrientosBendezu:2001di} and $gg\to\phi_1\phi_2$ \cite{BarrientosBendezu:2001di,Plehn:1996wb,Belyaev:1999mx}. Note that, due to the absence of $A^0VV$ couplings at the tree level, $qq^\prime\to qq^\prime V^*V^*\to qq^\prime\phi$ and the Drell-Yan processes $q\bar q^\prime\to W^{\pm*}\to W^\pm\phi$ and $q\bar q\to Z^*\to Z\phi$ are not possible for $\phi=A^0$. The partonic subprocesses $gg\to Z\phi$ and $gg\to\phi_1\phi_2$ are mediated by heavy-quark \cite{Kniehl:1990iva,Yin:2002sq,Yang:2003kr,Dicus:1988yh,Kao:1991xg,Kao:2003jw,Brein:2003wg,BarrientosBendezu:2001di,Plehn:1996wb,Belyaev:1999mx} and squark loops \cite{Yin:2002sq,BarrientosBendezu:2001di,Belyaev:1999mx}. 
Comprehensive reviews of quantum corrections to Higgs-boson production within the SM and MSSM may be found in Refs.~\cite{Kniehl:1993ay,Spira:1997dg}, respectively. In this paper, we revisit $Z\phi$ associated hadroproduction via gluon fusion in the MSSM. In the SM case, mutual agreement between three independent calculations \cite{Kniehl:1990iva,Dicus:1988yh,Brein:2003wg} has been established, and compact formulae for the partonic cross section are available \cite{Kniehl:1990iva}. In the MSSM, the status is much less advanced and, perhaps, somewhat unsatisfactory. As for $gg\to Z\phi$ with $\phi=h^0,H^0$, there exists only one analysis so far \cite{Yang:2003kr}, which has not yet been verified by other authors. The analytic expressions presented in Ref.~\cite{Yang:2003kr} are rather complicated; they involve 14 form factors. In order to translate the SM results \cite{Kniehl:1990iva,Dicus:1988yh,Brein:2003wg} to the case of $gg\to Z\phi$ with $\phi=h^0,H^0$ in the MSSM, it is not sufficient to adjust the $HZZ$ and $Hqq$ couplings; it is also necessary to include certain quark triangle diagrams with an $A^0$ boson in the $s$ channel [see Fig.~\ref{fig:tree}(a)]. On the other hand, the squark loop contributions to these two MSSM processes vanish \cite{Yang:2003kr} for reasons explained below. As for $gg\to ZA^0$, the quark loop contributions were first studied on the basis of a numerical evaluation \cite{Kao:1991xg}, which was recently employed for phenomenological signal-versus-background analyses taking into account the subsequent $Z\to l^+l^-$ and $A^0\to b\bar b$ decays \cite{Kao:2004vp,Kao:2003jw}. An independent analysis, including also the squark loops, was reported in Ref.~\cite{Yin:2002sq}, which does not contain an analytic expression for the partonic cross section either. Unfortunately, comparisons between Refs.~\cite{Yin:2002sq,Kao:2004vp,Kao:1991xg,Kao:2003jw} are not discussed in these papers. 
Recently, the helicity amplitudes of $gg\to Z\phi$ with $\phi=h^0,H^0,A^0$ were analyzed in Ref.~\cite{Gounaris:2009cc} with regard to the asymptotic helicity conservation property of supersymmetry, and their real and imaginary parts were graphically presented at a specific scattering angle as functions of the center-of-mass (c.m.) energy. With the Higgs hunt at the LHC being in full swing, it is an urgent matter to consolidate our knowledge of $Z\phi$ associated hadroproduction via gluon fusion in the MSSM, which is the motivation of this paper. Specifically, we present compact analytic expressions, also for the partonic cross sections of $q\bar q,b\bar b\to Z\phi$ \cite{Yin:2002sq,Yang:2003kr,Kao:2004vp,Li:2005qna}, and perform a detailed numerical analysis using up-to-date input. The importance of $b\bar b$-initiated subprocesses for Higgs-boson production has been variously emphasized in the literature, in particular, in connection with the final states $\phi$ \cite{Dicus:1988cx}, $\phi_1\phi_2$ \cite{BarrientosBendezu:2001di}, $H^+H^-$ \cite{BarrientosBendezu:1999gp}, and $W^\pm H^\mp$ \cite{BarrientosBendezu:1998gd,Dicus:1989vf}. These subprocesses receive contributions from Feynman diagrams involving $b$-quark Yukawa couplings, which are generally strong for large values of $\tan\beta$. (The $\bar btH^-$ and $\bar tbH^+$ couplings are also strong for small values of $\tan\beta$.) If the two final-state particles couple to a $Z$ boson (or photon), as is the case for the final states $h^0A^0$, $H^0A^0$, $Zh^0$, $ZH^0$, and $H^+H^-$, then there are additional contributions from Drell-Yan-type diagrams, which are already present for the light flavors $q=u,d,s,c$. However, diagrams of the latter type are absent for the final states $h^0h^0$, $h^0H^0$, $H^0H^0$, $A^0A^0$, $ZA^0$, and $W^\pm H^\mp$, which can still be produced through $b\bar b$ annihilation. 
As for $b\bar b$ annihilation, it should be noted that the treatment of bottom as an active flavor inside the colliding hadrons leads to an effective description, which comprises contributions from the higher-order subprocesses $gb\to Z\phi b$, $g\bar b\to Z\phi\bar b$, and $gg\to Z\phi b\bar b$. If all these subprocesses are to be explicitly included along with $b\bar b\to Z\phi$, then it is necessary to employ a judiciously subtracted $b$-quark parton distribution function (PDF) in order to avoid double counting \cite{Dicus:1988cx,Gunion:1986pe}. The evaluation of $b\bar b\to Z\phi$ with an unsubtracted $b$-quark PDF is expected to slightly overestimate the true cross section \cite{Dicus:1988cx,Gunion:1986pe}. For simplicity, we shall nevertheless adopt this effective approach in our analysis, keeping in mind that a QCD-correction factor below unity is to be applied. In fact, such a behavior has recently been observed for $b\bar b\to ZA^0$ \cite{Li:2005qna}. In order to reduce the number of unknown supersymmetric input parameters, we adopt a scenario where the MSSM is embedded in a grand unified theory (GUT) involving supergravity (SUGRA) \cite{Djouadi:1996pj}. The MSSM thus constrained is characterized by the following parameters at the GUT scale, which come in addition to $\tan\beta$ and $m_{A^0}$: the universal scalar mass $m_0$, the universal gaugino mass $m_{1/2}$, the trilinear Higgs-sfermion coupling $A$, the bilinear Higgs coupling $B$, and the Higgs-higgsino mass parameter $\mu$. Notice that $m_{A^0}$ is then no longer an independent parameter, but is fixed through the renormalization group equation. The number of parameters can be further reduced by making additional assumptions. Unification of the $\tau$-lepton and $b$-quark Yukawa couplings at the GUT scale leads to a correlation between $m_t$ and $\tan\beta$. Furthermore, if the electroweak symmetry is broken radiatively, then $B$ and $\mu$ are determined up to the sign of $\mu$. 
Finally, it turns out that the MSSM parameters are nearly independent of the value of $A$, as long as $|A|\alt500$~GeV at the GUT scale. This paper is organized as follows. In Sec.~\ref{sec:two}, we list the LO cross sections of $q\bar q\to Z\phi$, including the Yukawa-enhanced contributions for $q=b$, and those of $gg\to Z\phi$, including both quark and squark loop contributions, in the MSSM. The relevant quark and squark loop form factors are relegated to Appendix~\ref{sec:b}. In Sec.~\ref{sec:three}, we present quantitative predictions for the inclusive cross sections of $pp\to Z\phi+X$ at the LHC adopting a favorable SUGRA-inspired MSSM scenario. Sec.~\ref{sec:four} contains our conclusions. For the reader's convenience, the relevant Feynman rules are summarized in Appendix~\ref{sec:a}. \section{\label{sec:two}Analytic Results} In this section, we present the LO cross sections of the partonic subprocesses $q\bar q\to Z\phi$ and $gg\to Z\phi$, where $\phi=h^0,H^0,A^0$, in the MSSM. We work in the parton model of QCD with $n_f=5$ active quark flavors $q=u,d,s,c,b$, which we take to be massless. However, we retain the $b$-quark Yukawa couplings at their finite values, in order not to suppress possibly sizable contributions. We adopt the MSSM Feynman rules from Ref.~\cite{Haber:1984rc}. The couplings of the $Z$ and $\phi$ bosons to quarks, $v_{Zqq}$, $a_{Zqq}$, and $g_{\phi qq}$, are given in Eq.~(5) of Ref.~\cite{BarrientosBendezu:1999gp} and Eq.~(A3) of Ref.~\cite{BarrientosBendezu:2001di}, respectively. As for the $\phi ZZ$ couplings, $g_{h^0ZZ}$ and $g_{H^0ZZ}$ are given by Eq.~(\ref{eq:hzz}) in Appendix~\ref{sec:a}, while the $A^0ZZ$ coupling vanishes at tree level. The $h^0A^0Z$ and $H^0A^0Z$ couplings, $g_{h^0A^0Z}$ and $g_{H^0A^0Z}$, may be found in Eq.~(A2) of Ref.~\cite{BarrientosBendezu:2001di}. For each quark flavor $q$ there is a corresponding squark flavor $\tilde q$, which comes in two mass eigenstates $i=1,2$. 
The masses $m_{\tilde q_i}$ of the squarks and their trilinear couplings to the $\phi$ bosons, $g_{\phi\tilde q_i\tilde q_j}$, are listed in Eqs.~(A.5), (A.7), and (A.8) and in Table~1 of Ref.~\cite{Hempfling:1993ru}\footnote{In Ref.~\cite{Hempfling:1993ru}, $m_{\tilde q_i}$ and $g_{\phi\tilde q_i\tilde q_j}$ are called $M_{\tilde Qa}$ and $\tilde V_{Qab}^\phi/g$, respectively.}, Eq.~(A.2) of Ref.~\cite{BarrientosBendezu:1999gp}, and Eq.~(A4) of Ref.~\cite{BarrientosBendezu:2001di}, respectively. Considering the generic partonic subprocess $ab\to Z\phi$, we denote the four-momenta of the incoming partons, $a$ and $b$, and the outgoing $Z$ and $\phi$ bosons by $p_a$, $p_b$, $p_Z$, and $p_\phi$, respectively, and define the partonic Mandelstam variables as $s=(p_a+p_b)^2$, $t=(p_a-p_Z)^2$, and $u=(p_b-p_Z)^2$. The on-shell conditions read $p_a^2=p_b^2=0$, $p_Z^2=m_Z^2=z$, and $p_\phi^2=m_\phi^2=h$. Four-momentum conservation implies that $s+t+u=z+h$. Furthermore, we have $sp_T^2=tu-zh=N$, where $p_T$ is the absolute value of the transverse momentum common to the $Z$ and $\phi$ bosons in the c.m.\ frame. The tree-level diagrams for $b\bar b\to Z\phi$ with $\phi=h^0,H^0$ and $\phi=A^0$ are depicted in Fig.~\ref{fig:tree}(a) and (b), respectively. As already mentioned above, there is no Drell-Yan diagram in Fig.~\ref{fig:tree}(b) because of the absence of an $A^0ZZ$ coupling at the tree level. The differential cross sections for the first class of partonic subprocesses may be generically written as \begin{eqnarray} \frac{d\sigma}{dt}\left(b\bar b\to Z\phi\right)&=&\frac{G_F^2c_w^4z}{3\pi s} \left[\left(2z+p_T^2\right)g_{\phi ZZ}^2\left(v_{Zbb}^2+a_{Zbb}^2\right) |{\cal P}_Z(s)|^2+\lambda|P|^2 \vphantom{\frac{1}{t}}\right. 
\nonumber\\ &&{}-\left.4sp_T^2\left(\frac{1}{t}+\frac{1}{u}\right)g_{\phi bb}a_{Zbb} \mathop{\mathrm{Re}}\nolimits P +g_{\phi bb}^2\left(v_{Zbb}^2T_++a_{Zbb}^2T_-\right)\right], \label{eq:bbzh} \end{eqnarray} where $G_F$ is Fermi's constant, $c_w=m_W/m_Z$ is the cosine of the weak mixing angle, $\lambda=s^2+z^2+h^2-2(sz+zh+hs)$, and \begin{eqnarray} P&=&g_{\phi A^0Z}g_{A^0bb}{\cal P}_{A^0}(s), \nonumber\\ T_\pm&=&2\pm2+2p_T^2\left[z\left(\frac{1}{t}\pm\frac{1}{u}\right) \mp\frac{2s}{tu}\right]. \end{eqnarray} Here, \begin{equation} {\cal P}_{X}(s)=\frac{1}{s-m_X^2+im_X\Gamma_X} \end{equation} is the propagator function of particle $X$, with mass $m_X$ and total decay width $\Gamma_X$. For the second class of partonic subprocesses, we have \begin{eqnarray} \frac{d\sigma}{dt}\left(b\bar b\to ZA^0\right)&=&\frac{G_F^2c_w^4z}{3\pi s} \left[\lambda|S|^2 -4sp_T^2\left(\frac{1}{t}+\frac{1}{u}\right)g_{A^0bb}a_{Zbb}\mathop{\mathrm{Re}}\nolimits S \right. \nonumber\\ &&{}+\left.\vphantom{\frac{1}{t}} g_{A^0bb}^2\left(v_{Zbb}^2T_++a_{Zbb}^2T_-\right)\right], \label{eq:bbza} \end{eqnarray} where \begin{equation} S=g_{h^0A^0Z}g_{h^0bb}{\cal P}_{h^0}(s)+g_{H^0A^0Z}g_{H^0bb}{\cal P}_{H^0}(s). \end{equation} As for $Zh^0$ and $ZH^0$ production, there are also sizable contributions from $q\bar q$ annihilation via a virtual $Z$ boson for the quarks of the first and second generations, $q=u,d,s,c$, whose Yukawa couplings are negligibly small. The corresponding Drell-Yan cross sections are obtained from Eq.~(\ref{eq:bbzh}) by putting $P=T_\pm=0$ and substituting $b\to q$. The resulting expression agrees with Eq.~(2.8) of Ref.~\cite{Kniehl:1990iva}, appropriate for $q\bar q\to ZH$ in the SM, after adjusting the $HZZ$ coupling. The full tree-level cross sections are then obtained by complementing the $b\bar b$-initiated cross sections of Eq.~(\ref{eq:bbzh}) with the Drell-Yan cross sections for $q=u,d,s,c$. 
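The kinematic relations quoted in the text, $s+t+u=z+h$ and $sp_T^2=tu-zh=N$, can be verified numerically at an arbitrary phase-space point. The following Python sketch (the function name and the sample masses are ours) parametrizes $t$ and $u$ by the $Z$-boson scattering angle $\theta$ in the partonic c.m.\ frame:

```python
import math

def two_body_point(s, z, h, cos_theta):
    """Mandelstam variables t, u and squared transverse momentum pT^2
    for a 2 -> 2 process a b -> Z phi with massless initial partons,
    z = m_Z^2, h = m_phi^2, and Z scattering angle theta in the c.m. frame."""
    lam = s**2 + z**2 + h**2 - 2*(s*z + z*h + h*s)   # Kallen function lambda
    t = (z + h - s)/2 + math.sqrt(lam)/2 * cos_theta
    u = z + h - s - t                                 # enforces s + t + u = z + h
    pT2 = lam/(4*s) * (1 - cos_theta**2)              # transverse momentum squared
    return t, u, pT2

# sample point: sqrt(s) = 500 GeV, m_Z = 91.1876 GeV, m_phi = 300 GeV
s, z, h = 500.0**2, 91.1876**2, 300.0**2
t, u, pT2 = two_body_point(s, z, h, 0.3)
assert abs(s*pT2 - (t*u - z*h)) < 1e-6 * s**2   # checks s pT^2 = tu - zh = N
```

Analytically, both sides of the last assertion equal $(\lambda/4)\sin^2\theta$, so the identity holds at every angle above the $Z\phi$ threshold.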
The non-vanishing one-loop diagrams pertinent to $gg\to Z\phi$ with $\phi=h^0,H^0$ and $\phi=A^0$ are depicted in Figs.~\ref{fig:loop}(a) and (b), respectively. As already mentioned in the Introduction, the presence of the quark triangle diagrams involving an $s$-channel $A^0$-boson exchange in Fig.~\ref{fig:loop}(a) represents a qualitatively new feature of the MSSM as compared to the SM. Furthermore, similarly to Fig.~\ref{fig:tree}(b), quark triangle diagrams with an $s$-channel $Z$-boson exchange do not appear in Fig.~\ref{fig:loop}(b). In the following, we refer to a squark loop diagram involving an $s$-channel propagator as a triangle diagram. The residual squark loop diagrams are regarded as being of box type. The squark triangle and box diagrams for $gg\to Z\phi$ with $\phi=h^0,H^0$ vanish, and so do the squark box diagrams for $gg\to ZA^0$. This may be understood as follows. (i) The $g_{g\tilde q_i\tilde q_j}$, $g_{gg\tilde q_i\tilde q_j}$, $g_{gZ\tilde q_i\tilde q_j}$, and $g_{Z\tilde q_i\tilde q_j}$ couplings are symmetric in $i$ and $j$, while the $g_{A^0\tilde q_i\tilde q_j}$ coupling is antisymmetric \cite{Rosiek:1989rs}. Thus, squark loops connecting gluons and $Z$ bosons with an odd number of $A^0$ bosons vanish upon summation over $i$ and $j$. (ii) The $g_{g\tilde q_i\tilde q_j}$ and $g_{Z\tilde q_i\tilde q_j}$ couplings are linear in the squark four-momenta, while the $g_{gg\tilde q_i\tilde q_j}$, $g_{gZ\tilde q_i\tilde q_j}$, and $g_{\phi\tilde q_i\tilde q_j}$ couplings are momentum independent \cite{Rosiek:1989rs}. Thus, a squark loop connecting gluons, $Z$ bosons, and $\phi$ bosons vanishes upon adding its counterpart with the loop-momentum flows reversed if the total number of gluons and $Z$ bosons is odd. As in Refs.~\cite{BarrientosBendezu:1999vd,BarrientosBendezu:2000tu}, we express the quark and squark loop contributions in terms of helicity amplitudes. 
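Argument~(i) rests on the elementary fact that contracting a coupling symmetric in the squark mass-eigenstate indices $i,j$ with one antisymmetric in them gives zero upon summation. A minimal illustration in Python, with generic numbers standing in for the couplings (not actual MSSM values):

```python
# Generic 2x2 couplings: S symmetric in (i, j), A antisymmetric in (i, j),
# mimicking e.g. g_{gZ qtilde_i qtilde_j} and g_{A^0 qtilde_i qtilde_j}.
S = [[1.7, -0.4],
     [-0.4, 2.2]]
A = [[0.0, 0.9],
     [-0.9, 0.0]]

# Summing over the squark mass eigenstates i, j = 1, 2 gives exactly zero,
# which is why squark loops with an odd number of A^0 couplings drop out.
total = sum(S[i][j] * A[i][j] for i in range(2) for j in range(2))
assert abs(total) < 1e-12
```

The same cancellation holds for any symmetric $S$ and antisymmetric $A$, since $\sum_{i,j}S_{ij}A_{ij}$ changes sign under relabeling $i\leftrightarrow j$.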
We label the helicity states of the two gluons and the $Z$ boson in the partonic c.m.\ frame by $\lambda_a=\pm1$, $\lambda_b=\pm1$, and $\lambda_Z=0,\pm1$. We first consider $gg\to Z\phi$ with $\phi=h^0,H^0$. The helicity amplitudes of the quark triangle contribution read \begin{eqnarray} {\mathcal M}_{\lambda_a\lambda_b0}^\triangle&=& -2\sqrt{\frac{\lambda}{z}}(\lambda_a+\lambda_b)\sum_q \left[\frac{z-s}{z}\ a_{Zqq}g_{\phi ZZ}{\mathcal P}_Z(s) \left(F_\triangle\left(s,m_q^2\right)+2\right) \right.\nonumber\\ &&{}-\left.\frac{s}{m_q}g_{A^0qq}g_{\phi A^0Z}{\mathcal P}_{A^0}(s) F_\triangle\left(s,m_q^2\right)\right]. \label{eq:trih} \end{eqnarray} The quark triangle form factor, $F_\triangle$, is given in Eq.~(\ref{eq:fht}). As for the quark box contribution, all twelve helicity combinations contribute. Due to Bose symmetry, they are related by \begin{eqnarray} {\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\Box(t,u) &=&(-1)^{\lambda_Z}{\mathcal M}_{\lambda_b\lambda_a\lambda_Z}^\Box(u,t), \nonumber\\ {\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\Box(t,u) &=&{\mathcal M}_{-\lambda_a-\lambda_b-\lambda_Z}^\Box(t,u). \label{eq:bos} \end{eqnarray} Keeping $\lambda_Z=\pm1$ generic, we thus only need to specify four expressions. These read \begin{eqnarray} {\mathcal M}_{++0}^\Box&=& \frac{8}{\sqrt{z\lambda}}\sum_qg_{\phi qq}a_{Zqq}m_q \left[F_{++}^0+(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{+-0}^\Box&=& \frac{8}{\sqrt{z\lambda}}\sum_qg_{\phi qq}a_{Zqq}m_q \left[F_{+-}^0-(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{++\lambda_Z}^\Box&=& -4\sqrt{\frac{2N}{s}}\sum_qg_{\phi qq}a_{Zqq}m_q \left[F_{++}^1-(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{+-\lambda_Z}^\Box&=& -4\sqrt{\frac{2N}{s}}\sum_qg_{\phi qq}a_{Zqq}m_q \left[F_{+-}^1+(t\leftrightarrow u,\lambda_Z\leftrightarrow -\lambda_Z)\right]. \label{eq:boxh} \end{eqnarray} The quark box form factors, $F_{\lambda_a\lambda_b}^{|\lambda_Z|}$, are listed in Eq.~(\ref{eq:fhb}). 
For the reasons explained above, we have $\tilde{\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\triangle =\tilde{\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\Box=0$ for the squark-induced helicity amplitudes. We now turn to $gg\to ZA^0$. The helicity amplitudes of the quark and squark triangle contributions read \begin{eqnarray} {\mathcal M}_{\lambda_a\lambda_b0}^\triangle&=& -8\sqrt{\frac{\lambda}{z}}(1+\lambda_a\lambda_b)\sum_qm_q\left( g_{h^0A^0Z}g_{h^0qq}{\mathcal P}_{h^0}(s)+g_{H^0A^0Z}g_{H^0qq} {\mathcal P}_{H^0}(s) \right)F_\triangle\left(s,m_q^2\right), \nonumber\\ \label{eq:qtria}\\ \tilde{\mathcal M}_{\lambda_a\lambda_b0}^\triangle&=& 2\sqrt{\frac{\lambda}{z}}(1+\lambda_a\lambda_b)\sum_{\tilde q_i} \left(g_{h^0A^0Z}g_{h^0\tilde q_i\tilde q_i}{\mathcal P}_{h^0}(s) +g_{H^0A^0Z}g_{H^0\tilde q_i\tilde q_i}{\mathcal P}_{H^0}(s)\right) \tilde F_\triangle\left(s,m_{\tilde q_i}^2\right), \nonumber\\ \label{eq:stria} \end{eqnarray} respectively. The quark and squark triangle form factors, $F_\triangle$ and $\tilde F_\triangle$, may be found in Eq.~(\ref{eq:fat}). Again, the helicity amplitudes of the quark box contribution satisfy the Bose symmetry relations of Eq.~(\ref{eq:bos}). We find \begin{eqnarray} {\mathcal M}_{++0}^\Box&=& -\frac{8}{\sqrt{z\lambda}}\sum_qg_{A^0qq}a_{Zqq}m_q \left[F_{++}^0+(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{+-0}^\Box&=& -\frac{8}{\sqrt{z\lambda}}\sum_qg_{A^0qq}a_{Zqq}m_q \left[F_{+-}^0+(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{++\lambda_Z}^\Box&=& -4\sqrt{\frac{2N}{s}}\sum_qg_{A^0qq}a_{Zqq}m_q \left[F_{++}^1-(t\leftrightarrow u)\right], \nonumber\\ {\mathcal M}_{+-\lambda_Z}^\Box&=& -4\sqrt{\frac{2N}{s}}\sum_qg_{A^0qq}a_{Zqq}m_q \left[F_{+-}^1-(t\leftrightarrow u,\lambda_Z\to-\lambda_Z)\right]. \end{eqnarray} The quark box form factors, $F_{\lambda_a\lambda_b}^{|\lambda_Z|}$, are presented in Eq.~(\ref{eq:fab}). We recall that $\tilde{\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\Box=0$. 
The differential cross section of $gg\to Z\phi$ is then given by \begin{equation} \frac{d\sigma}{dt}(gg\to Z\phi)=\frac{\alpha_s^2(\mu_r)G_F^2m_W^4} {256(4\pi)^3s^2}\sum_{\lambda_a,\lambda_b,\lambda_Z}\left| {\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\triangle +{\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\Box +\tilde{\mathcal M}_{\lambda_a\lambda_b\lambda_Z}^\triangle\right|^2, \label{eq:xs} \end{equation} where $\alpha_s(\mu_r)$ is the strong-coupling constant at the renormalization scale $\mu_r$. Due to Bose symmetry, the right-hand side of Eq.~(\ref{eq:xs}) is symmetric in $t$ and $u$. The differential cross section of $gg\to ZH$ in the SM is obtained from Eqs.~(\ref{eq:trih})--(\ref{eq:boxh}) and (\ref{eq:xs}), with $\phi=h^0$, by replacing $h^0\to H$, adjusting the $h^0ZZ$ and $h^0qq$ couplings, and discarding the contribution due to $A^0$-boson exchange. In this way, we recover the result of Ref.~\cite{Kniehl:1990iva}, which is expressed in terms of Lorentz-invariant form factors rather than helicity amplitudes. The kinematics of the inclusive reaction $AB\to Z\phi+X$, where $A$ and $B$ are colliding hadrons, is described in Sec.~II of Ref.~\cite{BarrientosBendezu:1998gd}. Its double-differential cross section $d^2\sigma/dy\,dp_T$, where $y$ and $p_T$ are the rapidity and transverse momentum of the $Z$ boson in the c.m.\ system of the hadronic collision, may be evaluated from Eq.~(2.1) of Ref.~\cite{BarrientosBendezu:1998gd}. \section{\label{sec:three}Phenomenological Implications} We are now in a position to explore the phenomenological implications of our results. The SM input parameters for our numerical analysis are taken to be $G_F=1.16637\times10^{-5}$~GeV$^{-2}$, $m_W=80.399$~GeV, $m_Z=91.1876$~GeV, $m_t=172.0$~GeV, and $\overline{m}_b(\overline{m}_b)=4.19$~GeV \cite{Nakamura:2010zzi}. We adopt the LO proton PDF set CTEQ6L1 \cite{Pumplin:2002vw}. 
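For orientation, the LO (one-loop) running of $\alpha_s$ with $n_f=5$ active flavors can be sketched in a few lines of Python (an illustration of the standard one-loop formula, not the code used for this analysis); with $\Lambda_{\mathrm{QCD}}^{(5)}=165$~MeV it reproduces the CTEQ6L1 value $\alpha_s(m_Z)\approx0.130$:

```python
import math

def alpha_s_lo(mu, lam=0.165, nf=5):
    """One-loop running strong coupling alpha_s(mu); mu and lam in GeV."""
    b0 = 11.0 - 2.0*nf/3.0          # leading beta-function coefficient
    return 4.0*math.pi / (b0 * math.log(mu**2 / lam**2))

assert abs(alpha_s_lo(91.1876) - 0.130) < 0.002   # alpha_s(m_Z) for CTEQ6L1
```

The coupling decreases logarithmically with the scale, so evaluating it at the $Z\phi$ invariant mass, as done below, reduces $\alpha_s$ relative to its value at $m_Z$.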
We evaluate $\alpha_s(\mu_r)$ and $m_b(\mu_r)$ from the LO formulas, which may be found, {\it e.g.}, in Eqs.~(23) and (24) of Ref.~\cite{Kniehl:1994dz}, respectively, with $n_f=5$ quark flavors and asymptotic scale parameter $\Lambda_{\mathrm QCD}^{(5)}=165$~MeV \cite{Pumplin:2002vw}. We identify the renormalization and factorization scales with the $Z\phi$ invariant mass $\sqrt s$. We vary $\tan\beta$ and $m_{A^0}$ in the ranges $3<\tan\beta<32\approx m_t/m_b$ and 180~GeV${}<m_{A^0}<1$~TeV, respectively. As for the GUT parameters, we choose $m_{1/2}=150$~GeV, $A=0$, and $\mu<0$, and tune $m_0$ so as to be consistent with the desired value of $m_{A^0}$. All other MSSM parameters are then determined according to the SUGRA-inspired scenario as implemented in the program package SUSPECT \cite{Djouadi:2002ze}. We do not impose the unification of the $\tau$-lepton and $b$-quark Yukawa couplings at the GUT scale, which would just constrain the allowed $\tan\beta$ range without any visible effect on the results for these values of $\tan\beta$. We exclude solutions which do not comply with the present experimental lower mass bounds of the sfermions, charginos, neutralinos, and Higgs bosons \cite{Nakamura:2010zzi}. We now study the fully integrated cross sections of $pp\to Z\phi+X$ at the LHC, with c.m.\ energy $\sqrt S=14$~TeV. Figures~\ref{fig:Zh}--\ref{fig:ZA} refer to the cases $\phi=h^0,H^0,A^0$, respectively. In part (a) of each figure, the $m_\phi$ dependence is studied for $\tan\beta=3$ and 30 while, in part~(b), the $\tan\beta$ dependence is studied for $m_{A^0}=300$ and 600~GeV. We note that the SUGRA-inspired MSSM with our choice of input parameters does not permit $\tan\beta$ and $m_{A^0}$ to be simultaneously small, due to the experimental lower bound on $m_{h^0}$ \cite{Nakamura:2010zzi}. 
This explains why the curves for $\tan\beta=3$ in Figs.~\ref{fig:Zh}--\ref{fig:ZA}(a) only start at $m_{A^0}\approx280$~GeV, while those for $\tan\beta=30$ already start at $m_{A^0}\approx180$~GeV. In Figs.~\ref{fig:Zh} and \ref{fig:ZH}, which refer to $\phi=h^0,H^0$, respectively, the total $q\bar q$-annihilation contributions (dashed lines), corresponding to the coherent superposition of Drell-Yan and Yukawa-enhanced amplitudes, and the $gg$-fusion contributions (solid lines), which arise only from quark loops, are presented separately. For a comparison with future experimental data, they should be added. For comparison, also the pure Drell-Yan contributions (dotted lines) are shown. As for $\phi=h^0$, we observe from Fig.~\ref{fig:Zh} that the contribution due to $q\bar q$ annihilation is almost exhausted by the Drell-Yan process and greatly exceeds the one due to $gg$ fusion, by a factor of 3--5. The $q\bar q$-annihilation contribution falls off by a factor of two as $m_{h^0}$ runs from 82~GeV to 115~GeV and depends only feebly on $\tan\beta$, except for the appreciable rise towards the lower edge of the considered $\tan\beta$ range. The $gg$-fusion contribution depends only feebly on $m_{h^0}$, $m_{A^0}$, and $\tan\beta$. The situation is very different for $\phi=H^0$, as is obvious from Fig.~\ref{fig:ZH}. Here, $b\bar b$ annihilation is generally far more important than the Drell-Yan process, except for $m_{A^0}=300$~GeV and $\tan\beta=3$, where the latter comes close. The contribution due to $b\bar b$ annihilation monotonically increases with $\tan\beta$, while the one due to the Drell-Yan process decreases. Furthermore, $gg$ fusion competes with $q\bar q$ annihilation and even dominates for $\tan\beta\alt7$. 
As for $\phi=A^0$, the $b\bar b$-annihilation contribution (dashed lines) and the total $gg$-fusion contribution (solid lines), corresponding to the coherent superposition of quark and squark loop amplitudes, are presented separately in Fig.~\ref{fig:ZA}. For comparison, also the $gg$-fusion contribution due to quark loops only (dotted lines) is shown. As in the case of $\phi=H^0$, $gg$ fusion competes with $b\bar b$ annihilation and even dominates for $\tan\beta\alt7$. Again, the $b\bar b$-annihilation contribution monotonically increases with $\tan\beta$. The bulk of the $gg$-fusion contribution is due to the quark loops, especially at low values of $m_{A^0}$. Finally, we compare our results with the literature. As already mentioned in Sec.~\ref{sec:two}, we recover the well-known SM result \cite{Kniehl:1990iva}, for $\phi=H$, by taking the SM limit of our results for $\phi=h^0$ in Eqs.~(\ref{eq:trih}) and (\ref{eq:boxh}). The contribution due to $A^0$-boson exchange in Eq.~(\ref{eq:trih}), which is not probed in the SM limit, agrees with the analogous contribution to $gg\to W^-H^+$ given in Eq.~(1) of Ref.~\cite{BarrientosBendezu:1999vd} after appropriately adjusting the masses and couplings. On the other hand, the residual terms in the latter equation, which arise from the exchanges of $h^0$ and $H^0$ bosons, coincide with Eq.~(\ref{eq:qtria}) after substituting the appropriate masses and couplings. Similarly, by adjusting masses and couplings in Eq.~(\ref{eq:stria}), we reproduce Eq.~(2.3) in Ref.~\cite{BarrientosBendezu:2000tu}, which gives the squark triangle contribution to $gg\to W^-H^+$. In Ref.~\cite{Kao:1991xg}, numerical results for the cross section of $pp\to ZA^0$ via quark-loop-mediated $gg$ fusion were presented. Adopting the input parameters and proton PDF set specified in that reference, we nicely reproduce the separate contributions due to triangle and box diagrams shown in Fig.~4 therein, while we fail to agree with their superposition. 
Furthermore, we find reasonable agreement with the cross section of $pp\to ZA^0+X$ via $gg$ fusion represented graphically for different scenarios in Figs.~6 and 7 of Ref.~\cite{Li:2005qna} adopting the respective inputs from there. \section{\label{sec:four}Conclusions} We analytically calculated the cross sections of the partonic subprocesses $q\bar q\to Z\phi$ and $gg\to Z\phi$, where $\phi=h^0,H^0,A^0$, to LO in the MSSM. We included the Drell-Yan and Yukawa-enhanced contributions to $q\bar q$ annihilation (see Fig.~\ref{fig:tree}) and the quark and squark loop contributions to $gg$ fusion (see Fig.~\ref{fig:loop}). We presented these results as helicity amplitudes expressed in terms of standard scalar one-loop integrals. We then quantitatively investigated the inclusive cross sections of $pp\to Z\phi+X$ at the LHC with $\sqrt{S}=14$~TeV adopting a favorable SUGRA-inspired MSSM scenario, varying the input parameters $m_{A^0}$ and $\tan\beta$. Our results are presented in Figs.~\ref{fig:Zh}--\ref{fig:ZA}. The total cross section for $\phi=h^0$ is typically of order 1~pb, while those for $\phi=H^0,A^0$ are of order 100~fb (10~fb) for $m_{A^0}=300$~GeV (600~GeV). Assuming design luminosity, $L=10^{34}$~cm${}^{-2}$s${}^{-1}$, a cross section of 1~pb corresponds to $10^5$ events per year and experiment at the LHC (see Table~I of Ref.~\cite{Kniehl:2002wd}). \section*{Acknowledgments} We thank A.~A.~Barrientos Bendezu and R.~Ziegler for their collaboration at the initial stage of this work. The work of B.A.K. was supported in part by the German Federal Ministry for Education and Research BMBF through Grant No.\ 05~HT6GUA, by the German Research Foundation DFG through the Collaborative Research Centre No.~676 {\it Particles, Strings and the Early Universe---The Structure of Matter and Space Time}, and by the Helmholtz Association HGF through the Helmholtz Alliance Ha~101 {\it Physics at the Terascale}. The work of C.P.P. 
was supported in part by the German Academic Exchange Service (DAAD) Reinvitation Programme under Reference Code A/07/02820 and by the Office of the Vice President for Academic Affairs of the University of the Philippines. \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \begin{appendix} \setcounter{equation}{0} \section{\label{sec:a}Feynman rules} In this appendix, we collect the Feynman rules used in this paper. The Feynman rules for the $Zq\overline{q}$ vertices are $ig\gamma^\mu(v_{Zqq}-a_{Zqq}\gamma_5)$, with $g=e/s_w$, $e$ being the proton charge, $s_w^2=1-c_w^2$, and \begin{equation} v_{Zqq}=-\frac{I_q-2s_w^2Q_q}{2c_w},\qquad a_{Zqq}=-\frac{I_q}{2c_w}, \end{equation} where $I_q=\pm1/2$ and $Q_q=2/3,-1/3$ are the third component of the weak isospin and the electric charge of quark $q$, respectively. The Feynman rules for the $\phi q\overline{q}$ ($\phi=h^0,H^0$) and $A^0q\overline{q}$ vertices are $igg_{\phi qq}$ and $gg_{A^0qq}\gamma_5$, respectively, with \begin{eqnarray} g_{h^0tt}=-\frac{m_t\cos\alpha}{2m_W\sin\beta},\qquad g_{H^0tt}=-\frac{m_t\sin\alpha}{2m_W\sin\beta},\qquad g_{A^0tt}=-\frac{m_t\cot\beta}{2m_W}, \nonumber\\ g_{h^0bb}=\frac{m_b\sin\alpha}{2m_W\cos\beta},\qquad g_{H^0bb}=-\frac{m_b\cos\alpha}{2m_W\cos\beta},\qquad g_{A^0bb}=-\frac{m_b\tan\beta}{2m_W}, \end{eqnarray} where $\alpha$ is the mixing angle that rotates the weak $CP$-even Higgs eigenstates into the mass eigenstates $h^0$ and $H^0$. The Feynman rules for the $\phi ZZ$ vertices are $igg_{\phi ZZ}g^{\mu\nu}$, with \begin{equation} g_{h^0ZZ}=-\frac{m_Z}{c_w}\sin(\alpha-\beta),\qquad g_{H^0ZZ}=\frac{m_Z}{c_w}\cos(\alpha-\beta). \label{eq:hzz} \end{equation} The Feynman rules for the $\phi A^0Z$ vertices are $gg_{\phi A^0Z}(p+p^\prime)^\mu$, where $p$ is the incoming four-momentum of the $\phi$ boson, $p^\prime$ is the outgoing four-momentum of the $A^0$ boson, and \begin{equation} g_{h^0A^0Z}=\frac{\cos(\alpha-\beta)}{2c_w},\qquad g_{H^0A^0Z}=\frac{\sin(\alpha-\beta)}{2c_w}.
\end{equation} The Feynman rules for the $\phi\tilde{q}_i\tilde{q}_j$ vertices are $igg_{\phi\tilde{q}_i\tilde{q}_j}$, with \begin{eqnarray} \lefteqn{\left(\begin{array}{cc} g_{h^0\tilde{t}_1\tilde{t}_1} & g_{h^0\tilde{t}_1\tilde{t}_2} \\ g_{h^0\tilde{t}_2\tilde{t}_1} & g_{h^0\tilde{t}_2\tilde{t}_2} \\ \end{array}\right)} \nonumber\\ &=&{\cal M}^{\tilde{t}}\left(\begin{array}{cc} \frac{m_Z\sin(\alpha+\beta)\left(I^3_t-s_w^2Q_t\right)}{c_w} -\frac{m_t^2\cos\alpha}{m_W\sin\beta} & -\frac{m_t\left(\mu\sin\alpha+A_t\cos\alpha\right)}{2m_W\sin\beta} \\ -\frac{m_t\left(\mu\sin\alpha+A_t\cos\alpha\right)}{2m_W\sin\beta} & \frac{m_Z\sin(\alpha+\beta)s_w^2Q_t}{c_w}-\frac{m_t^2\cos\alpha} {m_W\sin\beta} \\ \end{array}\right)\left({\cal M}^{\tilde{t}}\right)^T, \nonumber\\ \lefteqn{\left(\begin{array}{cc} g_{h^0\tilde{b}_1\tilde{b}_1} & g_{h^0\tilde{b}_1\tilde{b}_2} \\ g_{h^0\tilde{b}_2\tilde{b}_1} & g_{h^0\tilde{b}_2\tilde{b}_2} \\ \end{array}\right)} \nonumber\\ &=& {\cal M}^{\tilde{b}}\left(\begin{array}{cc} \frac{m_Z\sin(\alpha+\beta)\left(I^3_b-s_w^2Q_b\right)}{c_w} +\frac{m_b^2\sin\alpha}{m_W\cos\beta} & \frac{m_b\left(\mu\cos\alpha+A_b\sin\alpha\right)}{2m_W\cos\beta} \\ \frac{m_b\left(\mu\cos\alpha+A_b\sin\alpha\right)}{2m_W\cos\beta} & \frac{m_Z\sin(\alpha+\beta)s_w^2Q_b}{c_w} +\frac{m_b^2\sin\alpha}{m_W\cos\beta} \\ \end{array}\right)\left({\cal M}^{\tilde{b}}\right)^T, \nonumber\\ \lefteqn{\left(\begin{array}{cc} g_{H^0\tilde{t}_1\tilde{t}_1} & g_{H^0\tilde{t}_1\tilde{t}_2} \\ g_{H^0\tilde{t}_2\tilde{t}_1} & g_{H^0\tilde{t}_2\tilde{t}_2} \\ \end{array}\right)} \nonumber\\ &=& {\cal M}^{\tilde{t}}\left(\begin{array}{cc} -\frac{m_Z\cos(\alpha+\beta)\left(I^3_t-s_w^2Q_t\right)}{c_w} -\frac{m_t^2\sin\alpha}{m_W\sin\beta} & \frac{m_t\left(\mu\cos\alpha-A_t\sin\alpha\right)}{2m_W\sin\beta} \\ \frac{m_t\left(\mu\cos\alpha-A_t\sin\alpha\right)}{2m_W\sin\beta} & -\frac{m_Z\cos(\alpha+\beta)s_w^2Q_t}{c_w} -\frac{m_t^2\sin\alpha}{m_W\sin\beta} \\ \end{array}\right)\left({\cal 
M}^{\tilde{t}}\right)^T, \nonumber\\ \lefteqn{\left(\begin{array}{cc} g_{H^0\tilde{b}_1\tilde{b}_1} & g_{H^0\tilde{b}_1\tilde{b}_2} \\ g_{H^0\tilde{b}_2\tilde{b}_1} & g_{H^0\tilde{b}_2\tilde{b}_2} \\ \end{array}\right)} \nonumber\\ &=& {\cal M}^{\tilde{b}}\left(\begin{array}{cc} -\frac{m_Z\cos(\alpha+\beta)\left(I^3_b-s_w^2Q_b\right)}{c_w} -\frac{m_b^2\cos\alpha}{m_W\cos\beta} & \frac{m_b\left(\mu\sin\alpha-A_b\cos\alpha\right)}{2m_W\cos\beta} \\ \frac{m_b\left(\mu\sin\alpha-A_b\cos\alpha\right)}{2m_W\cos\beta} & -\frac{m_Z\cos(\alpha+\beta)s_w^2Q_b}{c_w} -\frac{m_b^2\cos\alpha}{m_W\cos\beta} \\ \end{array}\right)\left({\cal M}^{\tilde{b}}\right)^T, \end{eqnarray} where \begin{equation} {\cal M}^{\tilde{q}}=\left(\begin{array}{cc} \cos\theta_{\tilde{q}}\ & \sin\theta_{\tilde{q}} \\ -\sin\theta_{\tilde{q}}\ & \cos\theta_{\tilde{q}} \\ \end{array}\right) \end{equation} are the squark mixing matrices, with $\theta_{\tilde{q}}$ being the squark mixing angles. \section{\label{sec:b}Quark and squark loop form factors} In this appendix, we express the quark and squark triangle and box form factors, $F_\triangle$, $\tilde F_\triangle$, and $F_{\lambda_a\lambda_b}^{|\lambda_Z|}$, for $\phi=h^0,H^0$ and $\phi=A^0$, in terms of the standard scalar three- and four-point functions, which we abbreviate as $C_{ijk}^{ab}(c)=C_0\left(a,b,c,m_i^2,m_j^2,m_k^2\right)$ and $D_{ijkl}^{abcd}(e,f)=D_0\left(a,b,c,d,e,f,m_i^2,m_j^2,m_k^2,m_l^2\right)$, respectively. The definitions of the latter may be found in Eq.~(5) of Ref.~\cite{BarrientosBendezu:1999vd}. The quark triangle form factor for $\phi=h^0,H^0$ reads \begin{equation} F_\triangle\left(s,m_q^2\right)=4m_q^2C_{qqq}^{00}(s). 
\label{eq:fht} \end{equation} The quark box form factors for $\phi=h^0,H^0$ read \begin{eqnarray} F_{++}^0&=&2s(t+u)C^{00}_{qqq}(s) +2\left(t+u+\frac{\lambda}{s}\right) \left[(t-z)C^{z0}_{qqq}(t)+(t-h)C^{h0}_{qqq}(t)\right] \nonumber\\ &&{}-\left[N\left(t+u+\frac{\lambda}{s}\right)+2m_q^2\lambda\right] D^{h0z0}_{qqqq}(t,u)-4\left(szh+m_q^2\lambda\right)D^{hz00}_{qqqq}(s,t), \nonumber\\ F_{+-}^0&=&\frac{(h-z-s)}{N}(t-u)\left[s(t+u)C^{00}_{qqq}(s) -\lambda C^{hz}_{qqq}(s)-2m_q^2ND^{h0z0}_{qqqq}(t,u)\right] \nonumber\\ &&{}+2(t+u)(t-z)\left[1+\frac{t(t-u)(h-z-s)}{N(t+u)}\right]C^{z0}_{qqq}(t) \nonumber\\ &&{}+\frac{2(t-h)}{N}\left[z(u^2-t^2-\lambda)+(t+u)(t^2-zh)\right]C^{h0}_{qqq}(t) \nonumber\\ &&{}-(h-z-s)\left[2st\frac{t^2-zh}{N}+4m_q^2(t-u)\right]D^{hz00}_{qqqq}(s,t), \nonumber\\ F_{++}^1&=&(t-u)\left[\frac{z-h-s}{\sqrt\lambda}-\lambda_Z\right] \left[\frac{s}{N}C^{00}_{qqq}(s)-\frac{1}{2}D^{h0z0}_{qqqq}(t,u)- \frac{s}{N}\left(t+\frac{2N}{t-u}\right)D^{hz00}_{qqqq}(s,t)\right] \nonumber\\ &&{}+\frac{2(h-u)}{\sqrt\lambda N}\left(\lambda_Z\sqrt\lambda+t-u+\frac{2N}{h-u}\right) \left[(h-t)C^{h0}_{qqq}(t)+(z-u)C^{z0}_{qqq}(u)\right], \nonumber\\ F_{+-}^1&=&\frac{s}{N}\left(\frac{4s(t+u)}{\sqrt\lambda}+\sqrt\lambda -\lambda_Z(t-u) \right)C^{00}_{qqq}(s)-\frac{2s}{N}\left(\sqrt\lambda +\lambda_Z(t-u)\right)C^{hz}_{qqq}(s) \nonumber\\ &&{}-\frac{2(t-h)}{\sqrt\lambda N}\left(-s(u+3t)-2N+(u-t)(t-z) +\lambda_Z(t-s-z)\sqrt\lambda\right)C^{h0}_{qqq}(t) \nonumber\\ &&{}+\frac{2(u-z)}{\sqrt\lambda N}\left(3u(s-z)+th-2z(h-2u)-\lambda_Z(h-u) \sqrt\lambda\right)C^{z0}_{qqq}(u) \nonumber\\ &&{}+\frac{s}{\sqrt\lambda N}\left[t\left(\lambda+8zh-4ts -2(t+u)(z+h)\right.\right. \nonumber\\ &&{}+\left.\left.\lambda_Z(-2h+3t+u-2z)\sqrt\lambda \right) -16m_q^2N\right]D^{hz00}_{qqqq}(s,t) \nonumber\\ &&{}+\frac{1}{2}\left(-\sqrt\lambda-\frac{16m_q^2s}{\sqrt\lambda}+ \lambda_Z(t-u)\right)D^{h0z0}_{qqqq}(t,u). 
\label{eq:fhb} \end{eqnarray} The quark and squark triangle form factors for $\phi=A^0$ read \begin{eqnarray} F_\triangle\left(s,m_q^2\right)&=&2+\left(4m_q^2-s\right)C^{00}_{qqq}(s), \nonumber\\ \tilde F_\triangle\left(s,m_{\tilde q}^2\right)&=& 2+4m_{\tilde q_i}^2C^{00}_{\tilde q_i\tilde q_i\tilde q_i}(s). \label{eq:fat} \end{eqnarray} The quark box form factors for $\phi=A^0$ read \begin{eqnarray} F^0_{++}&=&2s(t+u)C^{00}_{qqq}(s) +2\left(t+u+\frac{\lambda}{s}\right) \left[(t-z)C^{z0}_{qqq}(t)+(t-h)C^{h0}_{qqq}(t)\right] \nonumber\\ &&{}-\left[N\left(t+u+\frac{\lambda}{s}\right)+2m_q^2\lambda\right] D^{h0z0}_{qqqq}(t,u)-4\left(szh+m_q^2\lambda\right)D^{hz00}_{qqqq}(s,t), \nonumber\\ F^0_{+-}&=& -\left[2+\frac{(t-u)^2}{N}\right]\left[s(t+u)C^{00}_{qqq}(s) -\lambda C^{hz}_{qqq}(s)\right] \nonumber\\ &&{}-2\left[3t-u+\frac{t}{N}(t-u)^2\right] \left[(t-z)C^{z0}_{qqq}(t)+(t-h)C^{h0}_{qqq}(t)\right] \nonumber \\ &&{}-2\left(zN-m_q^2\lambda\right)D^{h0z0}_{qqqq}(t,u) \nonumber\\ &&{}+2\left\{st\left[3t-u+\frac{t}{N}(t-u)^2\right]+2m_q^2\lambda\right\} D^{hz00}_{qqqq}(s,t), \nonumber\\ F^1_{++}&=&\left(\frac{z-h-s}{\sqrt\lambda}-\lambda_Z\right) \left\{(t-u)\left(\frac{s}{N}C^{00}_{qqq}(s)-\frac{1}{2}D^{h0z0}_{qqqq}(t,u) \right)\right. 
\nonumber\\ &&{}-\left.s\left[2+\frac{t}{N}(t-u)\right]D^{hz00}_{qqqq}(s,t)\right\} \nonumber\\ &&{}+2(t-z)\left\{\lambda_Z\frac{h-t}{N}+\frac{1}{\sqrt\lambda} \left[2+\frac{(t-u)(t-h)}{N}\right]\right\}C^{z0}_{qqq}(t) \nonumber\\ &&{}-2(t-h)\left\{\lambda_Z\frac{h-u}{N}+\frac{1}{\sqrt\lambda} \left[2-\frac{(t-u)(u-h)}{N}\right]\right\}C^{h0}_{qqq}(t), \nonumber\\ F^1_{+-}&=&\left(\lambda_Z-\frac{t-u}{\sqrt\lambda}\right) \left[\frac{s}{N}(s-z+h)\left(C^{00}_{qqq}(s)-tD^{hz00}_{qqqq}(s,t)\right) +\frac{s+z-h}{2}D^{h0z0}_{qqqq}(t,u)\right] \nonumber\\ &&{}-2(t-z)\left\{\lambda_Z\frac{t-h}{N}-\frac{1}{\sqrt\lambda} \left[2+\frac{(t-u)(t-h)}{N}\right]\right\}C^{z0}_{qqq}(t) \nonumber\\ &&{}-2(t-h)\left\{\lambda_Z\frac{u-h}{N}+\frac{1}{\sqrt\lambda} \left[2-\frac{(t-u)(u-h)}{N}\right]\right\}C^{h0}_{qqq}(t). \label{eq:fab} \end{eqnarray} \end{appendix}
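For reference, the scalar three-point function entering the form factors above, $C^{00}_{qqq}(s)=C_0\left(0,0,s,m_q^2,m_q^2,m_q^2\right)$, admits the well-known closed form $-(2/s)\arcsin^2\!\left(\sqrt{s}/(2m_q)\right)$ below threshold, $0<s<4m_q^2$. The following numerical sketch (not part of the paper; a consistency check only, with $s$ and $m_q$ set to arbitrary illustrative values) compares this closed form with its Feynman-parameter representation and evaluates the quark triangle form factor of Eq.~(\ref{eq:fat}):

```python
import math

def C0_closed(s, m):
    """Closed form of C0(0, 0, s, m^2, m^2, m^2) for 0 < s < 4 m^2."""
    return -(2.0 / s) * math.asin(math.sqrt(s) / (2.0 * m)) ** 2

def C0_feynman(s, m, n=400):
    """Midpoint rule for the Feynman-parameter representation
       C0 = -int_0^1 dx int_0^{1-x} dz 1/(m^2 - s*x*z)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            z = (j + 0.5) * h
            if x + z < 1.0:
                total += 1.0 / (m * m - s * x * z)
    return -total * h * h

s, m = 1.0, 1.0                      # safely below the threshold s = 4 m^2
c_closed = C0_closed(s, m)
c_num = C0_feynman(s, m)
print(c_closed, c_num)

# quark triangle form factor for phi = A^0, Eq. (eq:fat)
F_tri = 2.0 + (4.0 * m * m - s) * c_closed
print(F_tri)
```

In the limit $s\to0$ both representations reduce to $-1/(2m^2)$, which provides a further sanity check on the overall sign and normalization.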
\section{Introduction} In this paper, we consider the following one-dimensional nonlocal boundary value problem, \begin{equation}\label{1} \left\{ \begin{array}{ll} \epsilon v''+(a_2-\frac{b_2 \lambda}{1+v}-c_2v)v=0,&x \in (0,L), \\ v'(0)=v'(L)=0,\\ \int_0^L \frac{a_1-c_1v}{1+v}-\frac{b_1 \lambda}{(1+v)^2} dx=0, \end{array} \right. \end{equation} where $v=v_\epsilon(x)$ is a positive function and $\lambda=\lambda_\epsilon$ is a positive constant to be determined, while $ a_i, b_i, c_i$, $i=1,2$, and $\epsilon$ are some nonnegative constants. The motivation for studying model (\ref{1}) is that it is a limiting system, or the so-called shadow system, of the following Lotka-Volterra competition model with $\Omega=(0,L)$, \begin{equation}\label{2} \left\{ \begin{array}{ll} u_t=\Delta[(d_1+\rho_{12}v)u]+(a_1-b_1u-c_1v)u, &x \in \Omega,~t>0, \\ v_t=\Delta[(d_2+\rho_{21}u)v]+(a_2-b_2u-c_2v)v,& x \in \Omega,~t>0, \\ u(x,0)=u_0(x) \geq 0,~ v(x,0)=v_0(x) \geq 0,& x\in \Omega,\\ \frac{\partial u}{\partial \textbf{n}}=\frac{\partial v}{\partial \textbf{n}}=0,& x \in \partial \Omega,~t>0, \end{array} \right. \end{equation} where $d_1$, $d_2$ and $\rho_{12},\rho_{21}$ are positive constants; $d_i$ is referred to as the diffusion rate and $\rho_{ij}$ as the cross-diffusion rate. System (\ref{2}) was proposed by Shigesada et al. \cite{SKT} in 1979 to study the phenomenon of species segregation, where $u$ and $v$ represent the population densities of two competing species. A tremendous amount of work has been done on the dynamics of its positive solutions since the proposal of system (\ref{2}). There are also various interesting results on its stationary problem, which admits non-constant positive solutions, in particular over a one-dimensional domain. See \cite{E}, \cite{IMNY}, \cite{KY}, \cite{LN}, \cite{LN2}, \cite{LNW}, \cite{MEF}, \cite{MK}, \cite{MM}, \cite{MNTT}, and the references therein.
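For spatially constant pairs $(v,\lambda)$, the differential term in (\ref{1}) vanishes and the problem reduces to the algebraic system $b_2\lambda=(1+v)(a_2-c_2v)$, $b_1\lambda=(1+v)(a_1-c_1v)$. A short numerical sketch (the sample coefficients are our own illustrative choice, taken so that $b_1/b_2>a_1/a_2>c_1/c_2$) solving this system by a Newton iteration with a finite-difference Jacobian and recovering the closed-form constant solution written out in Section 2:

```python
# Coefficients chosen (our choice) so that B = b1/b2 > A = a1/a2 > C = c1/c2
a1, a2 = 3.0, 2.0     # A = 1.5
b1, b2 = 4.0, 2.0     # B = 2.0
c1, c2 = 1.0, 1.0     # C = 1.0

def residual(v, lam):
    """Algebraic system satisfied by spatially constant solutions of (1)."""
    r1 = a2 - b2 * lam / (1.0 + v) - c2 * v                       # reaction term
    r2 = (a1 - c1 * v) / (1.0 + v) - b1 * lam / (1.0 + v) ** 2    # constraint
    return r1, r2

def newton(v, lam, steps=50, h=1e-7):
    """Newton iteration with a finite-difference 2x2 Jacobian."""
    for _ in range(steps):
        f1, f2 = residual(v, lam)
        j11 = (residual(v + h, lam)[0] - f1) / h
        j12 = (residual(v, lam + h)[0] - f1) / h
        j21 = (residual(v + h, lam)[1] - f2) / h
        j22 = (residual(v, lam + h)[1] - f2) / h
        det = j11 * j22 - j12 * j21
        v -= (f1 * j22 - f2 * j12) / det
        lam -= (j11 * f2 - j21 * f1) / det
    return v, lam

v_bar, lam_bar = newton(0.8, 1.3)
print(v_bar, lam_bar)

# closed form from Section 2
A, B, C = a1 / a2, b1 / b2, c1 / c2
v_cf = (a2 / c2) * (B - A) / (B - C)
lam_cf = (a2 / b2) * (A - C) / (B - C) * (1.0 + v_cf)
print(v_cf, lam_cf)
```

With these coefficients, both the iteration and the closed form give $(\bar v,\bar\lambda)=(1,1)$, and the residuals of (\ref{1}) vanish there.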
Great progress was made by Lou and Ni in \cite{LN, LN2} in the existence and quantitative analysis of the steady states of (\ref{2}) for $\Omega$ being a bounded domain in $\Bbb{R}^N$, $1\leq N\leq 3$. Roughly speaking, they showed that (\ref{2}) admits only trivial steady states if one of the diffusion rates is large with the corresponding cross-diffusion fixed, and (\ref{2}) allows nonconstant positive steady states if one of the cross-diffusion pressures is large with the corresponding diffusion rate being appropriately given. Moreover, they established the limiting profiles of non-constant positive solutions of (\ref{2}) as $\rho_{12} \rightarrow \infty$ (and similarly as $\rho_{21} \rightarrow \infty$). For the sake of simplicity, we only state their results for $\rho_{21}=0$, while the same analysis can be carried out for the case when $\rho_{21}\neq0$. Moreover, we refer our readers to \cite{WX} and the references therein for recent developments in the analysis of the shadow systems of (\ref{2}). Suppose that $\frac{a_1}{a_2}\neq \frac{b_1}{b_2} \neq \frac{c_1}{c_2}$ and $d_2\neq a_2/\mu_j$ for any $j\geq1$, where $\mu_j$ is the $j$-th eigenvalue of $-\Delta$ subject to the homogeneous Neumann boundary condition. Let $(u_i,v_i)$ be positive nonconstant steady states of (\ref{2}) with $(d_1,\rho_{12})=(d_{1,i},\rho_{12,i})$. Suppose that $\rho_{12,i}/d_{1,i} \rightarrow r\in(0,\infty)$ as $\rho_{12,i} \rightarrow \infty$; then $(u_i,\rho_{12,i}v_i/d_{1,i}) \rightarrow (\lambda/(1+v),v)$ uniformly on $[0,L]$ for some constant $\lambda>0$, where $v$ is a positive solution to the following problem, \begin{equation}\label{3} \left\{ \begin{array}{ll} d_2 \Delta v+(a_2-b_2\lambda/(1+v)-(c_2/r)v)v=0,& x \in \Omega,\\ \frac{\partial u}{\partial \textbf{n}}=\frac{\partial v}{\partial \textbf{n}}=0,& x \in \partial \Omega,\\ \int_\Omega (a_1-c_1 v)/(1+v)dx =b_1 \lambda \int_\Omega 1/(1+v)^2dx. \end{array} \right.
\end{equation} We now denote $d_2=\epsilon$, since a small diffusion rate tends to create nonconstant solutions of (\ref{3}). Putting $c_2/r=\tilde{c}_2$ and assuming $\Omega=(0,L)$, we arrive at (\ref{1}), where we have dropped the tilde over $c_2$ in (\ref{1}) without causing any confusion. For $\frac{a_1}{a_2}>\frac{b_1}{b_2}$ and $\epsilon>0$ small, Lou and Ni \cite{LN2} established the existence of positive solutions $v_\epsilon(\lambda_\epsilon,x)$ to (\ref{1}) by degree theory. Moreover, $v_\epsilon(x)$ has a single boundary spike at $x=0$ when $\epsilon$ is sufficiently small. This paper is devoted to studying solutions of (\ref{1}) with a different structure, i.e., an interior transition layer. The remaining part of this paper is organized as follows. In Section 2, we carry out bifurcation analysis to establish nonconstant positive solutions to (\ref{1}) for all $\epsilon$ small. The stability of these small-amplitude solutions is then determined in Section 3 for $b_1=0$ in (\ref{1}). In Section 4, we show that for any $x_0$ in a pre-determined subinterval of $(0,L)$, there exist positive solutions to (\ref{1}) that have a single interior transition layer at $x_0$. Finally, we include discussions and propose some interesting questions in Section 5. \section{Nonconstant positive solutions to the shadow system} In this section, we establish the existence of nonconstant positive solutions to (\ref{1}). First of all, we adopt the following conventional notations as in \cite{LN,LN2}: \[A=\frac{a_1}{a_2},\quad B=\frac{b_1}{b_2},\quad C=\frac{c_1}{c_2};\] then we see that (\ref{1}) has a constant solution \[(\bar{v},\bar{\lambda})=\left(\frac{a_2}{c_2} \frac{B-A}{B-C}, \frac{a_2}{b_2}\frac{A-C}{B-C}\Big(1+\frac{a_2}{c_2} \frac{B-A}{B-C}\Big)\right),\] and $\bar v, \bar \lambda>0$ if and only if \begin{equation}\label{4} B>A>C,~\text{or}~B<A<C.
\end{equation} \subsection{Existence of positive bifurcating solutions} To obtain non-constant positive solutions of (\ref{1}), we are going to apply the local bifurcation theory due to Crandall and Rabinowitz \cite{CR}; therefore, we shall assume (\ref{4}) from now on. Taking $\epsilon$ as the bifurcation parameter, we rewrite (\ref{1}) in the abstract form \[\mathcal{F}(v,\lambda,\epsilon)=0,~(v,\lambda,\epsilon) \in \mathcal{X} \times \Bbb{R}^+\times \Bbb{R}^+, \] where \begin{equation}\label{5} \mathcal{F}(v,\lambda,\epsilon) =\left( \begin{array}{c} \epsilon v''+(a_2-\frac{b_2 \lambda}{1+v}-c_2v)v\\ \int_0^L \frac{a_1-c_1v}{1+v} dx- \int_0^L \frac{b_1 \lambda}{(1+v)^2} dx \end{array} \right), \end{equation} and $\mathcal{X}$ is a Hilbert space defined by \[\mathcal{X}=\{w \in H^2(0,L) ~\vert~~ w'(0)=w'(L)=0 \}.\] We first collect the following facts about the operator $\mathcal{F}$ before using the bifurcation theory. \begin{lemma} The operator $\mathcal{F}(v,\lambda,\epsilon)$ defined in (\ref{5}) satisfies the following properties: (1)~$\mathcal{F}(\bar{v},\bar{\lambda},\epsilon)=0$ for any $\epsilon \in \Bbb{R}^+$; (2)~$\mathcal{F}: \mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+ \rightarrow \mathcal{Y} \times \Bbb{R}$ is analytic, where $\mathcal{Y}=L^2(0,L)$; (3)~for any fixed $(v_0,\lambda_0) \in \mathcal{X} \times \Bbb{R}^+$, the Fr\'echet derivative of $\mathcal{F}$ is given by \begin{equation}\label{6} D_{(v,\lambda)}\mathcal{F}(v_0,\lambda_0,\epsilon)(v,\lambda) =\left( \begin{array}{c} \epsilon v''+\left(a_2-\frac{b_2\lambda_0}{(1+v_0)^2}-2c_2v_0 \right)v- \frac{b_2v_0}{1+v_0}\lambda\\ \int_0^L\left(\frac{2b_1\lambda_0}{(1+v_0)^3}-\frac{a_1+c_1}{(1+v_0)^2} \right)v- \frac{b_1 \lambda}{(1+v_0)^2} dx \end{array} \right); \end{equation} (4)~$D_{(v,\lambda)}\mathcal{F}(v_0,\lambda_0,\epsilon): \mathcal{X} \times \Bbb{R}^+ \rightarrow \mathcal{Y} \times \Bbb{R}$ is a Fredholm operator with zero index.
\end{lemma} \begin{proof} Parts (1)--(3) can be verified through direct calculations, and we leave them to the reader. To prove part (4), we formally decompose the derivative in (\ref{6}) as \[ D_{(v,\lambda)}\mathcal{F}(v_0,\lambda_0,\epsilon)(v,\lambda)=D\mathcal{F}_1(v,\lambda)+D\mathcal{F}_2(v,\lambda),\] where \[D\mathcal{F}_1(v,\lambda)=\left( \begin{array}{c} \epsilon v''+\left( a_2-\frac{b_2\lambda_0}{(1+v_0)^2}-2c_2v_0 \right)v\\ 0 \end{array} \right),\] and \[D\mathcal{F}_2(v,\lambda)=\left( \begin{array}{c} - \frac{b_2v_0}{1+v_0}\lambda\\ \int_0^L\left(\frac{2b_1\lambda_0}{(1+v_0)^3}-\frac{a_1+c_1}{(1+v_0)^2} \right)v- \frac{b_1 \lambda}{(1+v_0)^2} dx \end{array} \right).\] Obviously $D\mathcal{F}_2: \mathcal{X} \times \Bbb{R}^+ \rightarrow \mathcal{Y} \times \Bbb{R}$ is linear and compact. On the other hand, $D\mathcal{F}_1$ is elliptic, and according to Remark 2.5 of case 2, i.e., $N=1$, in Shi and Wang \cite{SW}, it is strongly elliptic and satisfies Agmon's condition. Furthermore, by Theorem 3.3 and Remark 3.4 of \cite{SW}, $D\mathcal{F}_1$ is a Fredholm operator with zero index. Thus $D_{(v,\lambda)}\mathcal{F}(v_0,\lambda_0,\epsilon)$ is of the form \emph{Fredholm operator + compact operator}, and it follows from a well-known result, e.g., \cite{Ka}, that $D_{(v,\lambda)}\mathcal{F}(v_0,\lambda_0,\epsilon)$ is also a Fredholm operator with zero index. This concludes the proof of the lemma.
\end{proof} Putting $(v_0,\lambda_0)=(\bar{v},\bar{\lambda})$ in (\ref{6}), we have that \begin{equation}\label{7} D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon)(v,\lambda) =\left( \begin{array}{c} \epsilon v''+\left( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \right)\bar{v}v- \frac{b_2\bar{v}}{1+\bar{v}}\lambda\\ \int_0^L\left( \frac{a_1-c_1-2c_1 \bar{v}}{(1+\bar{v})^2} \right)v- \frac{b_1 \lambda}{(1+\bar{v})^2} dx \end{array} \right). \end{equation} To obtain candidates for bifurcation values, we need to check the following necessary condition on the null space of the operator (\ref{7}), \begin{equation}\label{8} \mathcal{N}(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon)) \neq \{ 0 \}. \end{equation} Let $(v,\lambda)$ be an element in this null space; then $(v,\lambda)$ satisfies the following system \begin{equation}\label{9} \left\{ \begin{array}{ll} \epsilon v''+\left( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \right)\bar{v}v- \frac{b_2\bar{v}}{1+\bar{v}}\lambda=0,~x \in(0,L),\\ \int_0^L\left( \frac{a_1-c_1-2c_1 \bar{v}}{(1+\bar{v})^2} \right)v- \frac{b_1 \lambda}{(1+\bar{v})^2} dx=0,\\ v'(0)=v'(L)=0. \end{array} \right. \end{equation} First of all, we claim that $\lambda=0$. To this end, we integrate the first equation in (\ref{9}) over $(0,L)$ and obtain \[ (a_2-c_2-2c_2 \bar{v} )\int_0^L v dx=b_2 \lambda L;\] on the other hand, the second equation in (\ref{9}) is equivalent to \[(a_1-c_1-2c_1 \bar{v}) \int_0^L v dx=b_1 \lambda L.\] If $\lambda \neq 0$, we must have by equating the coefficients of the two identities above that \[\bar{v}=\frac{B(a_2-c_2)-a_1+c_1}{2(B-C)c_2},\] then by comparing this with the formula \[\bar{v}=\frac{a_2}{c_2} \frac{B-A}{B-C},\] we conclude from a straightforward calculation that $a_2(A-B)=c_2(B-C)$, and this implies that $\bar{v}=-1$, which is a contradiction. Therefore $\lambda$ must be zero as claimed.
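The contradiction step above, namely that the identity $a_2(A-B)=c_2(B-C)$ forces $\bar v=-1$, can be spot-checked numerically; in the following throwaway sketch (random positive coefficients of our own choosing, not from the paper) we solve the identity for $A$ and evaluate $\bar v$:

```python
import random

random.seed(0)
for _ in range(100):
    a2 = random.uniform(0.1, 10.0)
    c2 = random.uniform(0.1, 10.0)
    B = random.uniform(0.1, 10.0)
    C = random.uniform(0.1, 10.0)
    if abs(B - C) < 1e-3:
        continue                      # bar v is undefined when B = C
    # impose the necessary condition a2*(A - B) = c2*(B - C) by solving for A
    A = B + c2 * (B - C) / a2
    v_bar = (a2 / c2) * (B - A) / (B - C)
    assert abs(v_bar + 1.0) < 1e-9    # bar v = -1 in every trial
print("bar v = -1 in every trial")
```

Every trial returns $\bar v=-1$, confirming that a nonzero $\lambda$ in the null space is incompatible with the positivity of $\bar v$.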
Now put $\lambda=0$ in (\ref{9}) and we arrive at \begin{equation}\label{10} \left\{ \begin{array}{ll} \epsilon v''+\left( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \right)\bar{v}v=0,~x \in (0,L),\\ \left( \frac{a_1-c_1-2c_1 \bar{v}}{(1+\bar{v})^2} \right)\int_0^L v dx=0,~v'(0)=v'(L)=0. \end{array} \right. \end{equation} It is easy to see that (\ref{10}) has nonzero solutions if and only if $\frac{a_2-c_2-2c_2 \bar{v}}{(1+\bar{v})\epsilon}\bar{v}$ is one of the Neumann eigenvalues on $(0,L)$, which gives \begin{equation}\label{11} \frac{a_2-c_2-2c_2 \bar{v}}{(1+\bar{v})\epsilon}\bar{v}=(k\pi/L)^2,~ k \in N^+, \end{equation} with corresponding eigenfunction $v_k(x)=\cos(k\pi x/L)$. Moreover, the integral condition in (\ref{10}) is automatically satisfied, since $\int_0^L \cos(k\pi x/L) dx=0$. Then bifurcation might occur at $(\bar{v},\bar{\lambda},\epsilon_k)$ with \begin{equation}\label{12} \epsilon_k=\frac{a_2-c_2-2c_2 \bar{v}}{(1+\bar{v})(k\pi/L)^2}\bar{v},~ k \in N^+, \end{equation} provided that $\epsilon_k$ is positive, or equivalently \begin{equation}\label{13} \bar{v}<\frac{a_2-c_2}{2c_2},~a_2>c_2. \end{equation} We have shown that the null space in (\ref{8}) is not trivial, and in particular \[\mathcal{N}\big(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon_k)\big)=\text{span} \big\{(\cos(k\pi x/L),0)\big\},~ k\in N^+.\] \begin{remark}\label{rk2} $(0,\frac{a_1}{b_1})$ is another trivial solution to (\ref{1}), and local bifurcation does not occur at $(0,\frac{a_1}{b_1})$. Actually, putting $(v_0,\lambda_0)=(0,\frac{a_1}{b_1})$ in (\ref{6}), we see that the $v$-equation in (\ref{9}) becomes \[\epsilon v''+(a_2-\frac{a_1b_2}{b_1})v=0,~v'(0)=v'(L)=0,\] and the null-space $\mathcal{N}(D_{(v,\lambda)}\mathcal{F}(0,\frac{a_1}{b_1},\epsilon))$ must be trivial.
\end{remark} Having the potential bifurcation values $\epsilon_k$ in (\ref{12}), we can now proceed to establish non-constant positive solutions for (\ref{1}) in the following theorem, which guarantees that the local bifurcation occurs at $(\bar{v},\bar{\lambda},\epsilon_k)$. \begin{theorem}\label{thm22} Assume that (\ref{4}) and (\ref{13}) hold. Then for each $k\in N^+$, there exist $\delta>0$ and continuous functions $s\in(-\delta, \delta)\mapsto \epsilon_k(s) \in \Bbb{R}^+$ and $s\in(-\delta, \delta)\mapsto (v_k(s,x),\lambda_k(s)) \in \mathcal{X} \times \Bbb{R}^+$ such that \begin{equation}\label{14} \epsilon_k(0)=\epsilon_k,~(v_k(s,x),\lambda_k(s))=(\bar{v},\bar{\lambda})+s(\cos (k\pi x/L),0) +o(s), \end{equation} where $\epsilon_k$ is defined in (\ref{12}). Moreover, $(v_k(s,x),\lambda_k(s))$ solves system (\ref{1}), and all nontrivial solutions of (\ref{1}) near $(\bar{v},\bar{\lambda},\epsilon_k)$ are on the curve $\Gamma_k=\{(v_k(s,x),\lambda_k(s),\epsilon_k(s))\}$. \end{theorem} \begin{proof} To make use of the local bifurcation theory of Crandall and Rabinowitz \cite{CR}, we have verified all but the following \textit{transversality~condition}: \begin{equation}\label{15} \frac{d}{d \epsilon} \left(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon)\right)(v_k,\lambda_k)\Big\vert_{\epsilon=\epsilon_k} \notin \mathcal{R}(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon_k)), \end{equation} where \[\frac{d}{d \epsilon} \left(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon)\right)(v_k,\lambda_k)\Big\vert_{\epsilon=\epsilon_k}=\left( \begin{array}{c} (\cos\frac{k\pi x}{L})''\\ 0 \end{array} \right).
\] Suppose that (\ref{15}) fails; then there exists a nontrivial solution $(v,\lambda) \in \mathcal{X} \times \Bbb{R}^+$ that satisfies the following problem \begin{equation}\label{16} \left\{ \begin{array}{ll} \epsilon_k v''+\left( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \right)\bar{v}v- \frac{b_2\bar{v}}{1+\bar{v}}\lambda=(\cos\frac{k\pi x}{L})'',~x\in (0,L),\\ \int_0^L\left( \frac{a_1-c_1-2c_1 \bar{v}}{(1+\bar{v})^2} \right)v- \frac{b_1 \lambda}{(1+\bar{v})^2} dx=0,\\ v'(0)=v'(L)=0. \end{array} \right. \end{equation} By the same analysis that leads to the claim below (\ref{9}), we can show that $\lambda=0$ in (\ref{16}), which then becomes \begin{equation}\label{17} \left\{ \begin{array}{ll} \epsilon_k v''+\left( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \right)\bar{v}v=(\cos\frac{k\pi x}{L})'',~ x\in (0,L),\\ v'(0)=v'(L)=0. \end{array} \right. \end{equation} However, this contradicts the Fredholm alternative, since $\cos\frac{k\pi x}{L}$ is in the kernel of the operator on the left-hand side of (\ref{17}). Hence we have proved the transversality condition, and this concludes the proof of Theorem \ref{thm22}. \end{proof} \subsection{Global bifurcation analysis} We now proceed to extend the local bifurcation curves obtained in Theorem \ref{thm22} by the global bifurcation theory of Rabinowitz, in the version developed by Shi and Wang in \cite{SW}. In particular, we shall study the first bifurcation branch $\Gamma_1$. \begin{theorem}\label{thm31} Assume that (\ref{4}) and (\ref{13}) hold.
Then there exists a component $\mathcal{S} \subset \mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$ that satisfies: (i) $\mathcal{S}$ contains $(v_1(s,x),\lambda_1(s),\epsilon_1(s)), s \in (-\delta, \delta)$; (ii) $\forall (v,\lambda,\epsilon) \in \mathcal{S}$, $v(x)>0$ on $[0,L]$, $\lambda>0$ and $(v,\lambda)$ is a solution of (\ref{1}); (iii) $\mathcal{S}=\mathcal{S}_u \cup\mathcal{S}_l$ with $\mathcal{S}_u \cap \mathcal{S}_l=(\bar{v},\bar{\lambda},\epsilon_1)$ such that for all small $\epsilon >0$, $\mathcal{S}_u\backslash(\bar{v},\bar{\lambda},\epsilon_1)$ consists of $(v,\lambda,\epsilon)$ with $v'(x)>0$ on $(0,L)$ and $\mathcal{S}_l \backslash(\bar{v},\bar{\lambda},\epsilon_1)$ consists of $(v,\lambda,\epsilon)$ with $v'(x)<0$ on $(0,L)$, where \[\epsilon_1=\frac{(a_2-c_2-2c_2\bar v)\bar v}{(1+\bar v)(\pi/L)^2};\] (iv) $\forall \epsilon \in (0,\epsilon_1)$, there exists $(v,\lambda,\epsilon) \in \mathcal{S}_u$, and the same holds for $\mathcal{S}_l$. \end{theorem} \begin{proof} Denote the solution set of (\ref{1}) by \[\mathcal{D}=\{(v,\lambda,\epsilon) \in \mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+ ~\vert~ \mathcal{F}(v,\lambda,\epsilon)=0, (v,\lambda) \neq (\bar{v},\bar{\lambda})\}\] and choose $\mathcal{S}$ to be the maximal connected subset of $\bar{\mathcal{D}}$ that contains $(\bar{v},\bar{\lambda},\epsilon_1)$. Then $\mathcal{S}$ is the desired closed set, and $(i)$ follows directly from (\ref{14}) in Theorem \ref{thm22}. To prove that $v(x)$ is positive on $[0,L]$ and $\lambda$ is positive for all $(v,\lambda,\epsilon)\in\mathcal{S}$ with $\epsilon>0$, we introduce the following two connected sets: \[\mathcal{S}^+=\mathcal{S}\cap \{\epsilon>0\},\] and \[\mathcal{P}_0=\{(v,\lambda) \in \mathcal{S}^+ ~\vert~ v(x)>0,~x\in[0,L],~\lambda>0\},\] then we want to show that $\mathcal{P}_0=\mathcal{S}^+$.
First of all, we observe that $\mathcal{P}_0$ is a subset of $\mathcal{S}^+$ and $\mathcal{P}_0$ is nonempty, since at least the part of $\mathcal{S}$ near $(\bar{v},\bar{\lambda},\epsilon_1)$ is contained in $\mathcal{P}_0$. Now we prove that $\mathcal{P}_0$ is both open and closed in $\mathcal{S}^+$. The openness is trivial, since for any $(v,\lambda,\epsilon) \in \mathcal{P}_0$ and any sequence $(v_k,\lambda_k,\epsilon_k)$ that converges to $(v,\lambda,\epsilon)$ in $\mathcal{X}\times \Bbb{R}^+ \times\Bbb{R}^+$, we must have that $v_k$ converges to $v$ in $C^2([0,L])$, therefore $v_k>0$ on $[0,L]$ since $v>0$ on $[0,L]$. Furthermore, the fact that $\lambda_k>0$ and $\epsilon_k>0$ follows readily from $\lambda>0$ and $\epsilon>0$. Now we show that $\mathcal{P}_0$ is closed in $\mathcal{S}^+$. Take a sequence $\{(v_k,\lambda_k,\epsilon_k)\} \subset \mathcal{P}_0$ such that $(v_k,\lambda_k,\epsilon_k) \rightarrow (v,\lambda,\epsilon) \text{ in the norm of }\mathcal{X}\times \Bbb{R}^+ \times\Bbb{R}^+$ for some $(v,\lambda,\epsilon) \in \mathcal{S}^+.$ We want to show that $(v,\lambda,\epsilon) \in \mathcal{P}_0$, i.e., $v(x)>0$ on $[0,L]$ and $\lambda>0$. Obviously we have $v(x)\geq0$ and $\lambda\geq0$. Now we show that $\lambda\neq 0$ and $v(x)>0$ for all $x\in [0,L]$. We argue by contradiction. If $\lambda=0$, the $v$-equation in (\ref{1}) becomes \begin{equation}\label{s12} \left\{ \begin{array}{ll} \epsilon v''+(a_2-c_2v)v=0, x\in (0,L),\\ v'(0)=v'(L)=0. \end{array} \right. \end{equation} It is well-known that (\ref{s12}) has only the constant nonnegative solutions $v\equiv0$ and $v\equiv \frac{a_2}{c_2}$, hence $v_k$ converges to either 0 or $\frac{a_2}{c_2}$ uniformly on $[0,L]$. The case where $v_k$ converges to $0$ can be excluded by the same analysis that shows $v(x)>0$ below.
If $v_k$ converges to $\frac{a_2}{c_2}$, we apply Lebesgue's dominated convergence theorem to the integral constraint in (\ref{1}) and send $\lambda_k \rightarrow 0$; then we have that \[0=\lim_{k\rightarrow \infty} \int_0^L \frac{1}{1+v_k}\left(a_1-\frac{b_1\lambda_k}{1+v_k}-c_1v_k\right) dx=\frac{a_2c_2L}{a_2+c_2}(A-C),\] which implies that $A=C$, contradicting (\ref{4}). Therefore $\lambda$ must be positive as desired. On the other hand, $v(x)\geq0$ on $[0,L]$, and if $v(x_0)=0$ for some $x_0\in[0,L]$, then applying the strong maximum principle and Hopf's lemma to (\ref{1}) yields $v\equiv 0$ for all $x\in [0,L]$ and $\lambda=\frac{a_1}{b_1}$. However, we have from Remark \ref{rk2} that bifurcation does not occur at $(0,\frac{a_1}{b_1})$. This is a contradiction, and we must have that $v(x)>0$ on $[0,L]$. To prove $(iii)$, we choose $\mathcal{S}_u$ to be the component of $\mathcal{S} \backslash \{(v_1(s,x),\lambda_1(s),\epsilon_1(s)) \vert s \in(-\delta,0) \}$ containing $ \{(v_1(s,x),\lambda_1(s),\epsilon_1(s)) \vert s\in [0 ,\delta) \}$, and correspondingly $\mathcal{S}_l$ to be the component of $\mathcal{S} \backslash \{(v_1(s,x),\lambda_1(s),\epsilon_1(s)) \vert s \in (0,\delta)\} $ containing $\{(v_1(s,x),\lambda_1(s),\epsilon_1(s)) \vert s\in (-\delta,0]\}$; then we can readily see that $\mathcal{S}=\mathcal{S}_u \cup \mathcal{S}_l$, $\mathcal{S}_u \cap \mathcal{S}_l=\{(\bar{v},\bar{\lambda},\epsilon_1)\}$.
Moreover, we introduce the following four subsets: \[\mathcal{S}_u^0=(\mathcal{S}_u \backslash \{(\bar{v},\bar{\lambda},\epsilon_1)\}) \cap \{\epsilon >0\} ,~~\mathcal{P}_1^+=\{(v,\lambda) \in \mathcal{X} \times \Bbb{R}^+~\vert~v(x)>0,~v'(x)>0,~x\in(0,L)\},\] \[\mathcal{S}_l^0=(\mathcal{S}_l \backslash \{(\bar{v},\bar{\lambda},\epsilon_1)\})\cap \{\epsilon >0\},~~\mathcal{P}_1^-=\{(v,\lambda)\in \mathcal{X} \times \Bbb{R}^+~\vert~ v(x)>0,~v'(x)<0,~x\in(0,L)\},\] and we want to show that \[\mathcal{S}_u^0 \subset \mathcal{P}_1^+ \times \Bbb{R}^+,~\mathcal{S}_l^0 \subset \mathcal{P}_1^- \times \Bbb{R}^+. \] We shall only prove the first part, while the latter one can be treated in the same way. We first note that $\mathcal{S}_u^0 \neq \emptyset$, since any solution $(v,\lambda,\epsilon)$ of (\ref{1}) near $(\bar{v},\bar{\lambda},\epsilon_1)$ is in the set $\mathcal{S}_u^0 $ thanks to (\ref{14}). Since $\mathcal{S}_u^0$ is a connected subset of $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$, we only need to show that $\mathcal{S}_u^0 \cap (\mathcal{P}_1^+ \times \Bbb{R}^+)$ is both open and closed with respect to the topology of $\mathcal{S}_u^0 $, and we divide our proof into two parts. \textbf{\emph{Openness}}: Assume that $(\tilde{v}, \tilde{\lambda}, \tilde{\epsilon}) \in \mathcal{S}_u^0 \cap (\mathcal{P}_1^+ \times \Bbb{R}^+)$ and there exists a sequence $\{(\tilde{v}_k, \tilde{\lambda}_k, \tilde{\epsilon}_k) \}_{k=1}^\infty$ in $\mathcal{S}_u^0$ that converges to $(\tilde{v}, \tilde{\lambda}, \tilde{\epsilon})$ in the norm of $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$. We want to show that, for all large $k$, $(\tilde{v}_k, \tilde{\lambda}_k, \tilde{\epsilon}_k) \in \mathcal{P}_1^+ \times \Bbb{R}^+$, i.e., \[\tilde {v}_k'>0~\text{on}~(0,L),~\tilde{\lambda}_k>0,~\tilde{\epsilon}_k>0.\] First of all, it is easy to see that $\tilde{\lambda}_k>0$ and $\tilde{\epsilon}_k>0$ for all large $k$, since their limits $\tilde{\lambda}$ and $\tilde{\epsilon}$ are positive.
On the other hand, we conclude from $\tilde{v}_k \rightarrow \tilde{v}$ in $\mathcal{X}$ and the elliptic regularity theory that $ \tilde{v}_k \rightarrow \tilde{v} \text{~in~} C^2([0,L])$. Differentiating the first equation in (\ref{1}), we have \begin{equation}\label{s13} \left\{ \begin{array}{ll} -\tilde\epsilon \tilde{v}'_{xx}+(2c_2\tilde{v}+\frac{b_2\tilde\lambda}{(1+\tilde v)^2}-a_2)\tilde{v}'=0, x \in (0,L),\\ \tilde{v}'(0)=\tilde{v}'(L)=0. \end{array} \right. \end{equation} We have from Hopf's lemma and the fact that $\tilde{v}'(x)>0$ on $(0,L)$ that $\tilde{v}''(0)>0>\tilde{v}''(L)$; then this second-order non-degeneracy implies that $\tilde{v}'_k(x)>0$ on $(0,L)$ for all large $k$, which is desired. \textbf{\emph{Closedness}}: To show that $\mathcal{S}_u^0 \cap (\mathcal{P}_1^+ \times \Bbb{R}^+)$ is closed in $\mathcal{S}_u^0$, we take a sequence $(\tilde{v}_k, \tilde{\lambda}_k, \tilde{\epsilon}_k) \in \mathcal{S}_u^0 \cap (\mathcal{P}_1^+ \times \Bbb{R}^+)$ and assume that there exists $(\tilde{v}, \tilde{\lambda}, \tilde{\epsilon})$ in $\mathcal{S}_u^0$ such that $(\tilde{v}_k, \tilde{\lambda}_k, \tilde{\epsilon}_k) \rightarrow (\tilde{v}, \tilde{\lambda}, \tilde{\epsilon})$ in the topology of $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+.$ We want to show that \[\tilde{v}'(x)>0, x\in(0,L),\text{ and }\tilde{\lambda}>0.\] The assertion $\tilde{\lambda}>0$ can be proved by the same argument as above, so we only need to show that $\tilde{v}'(x)>0$. Again we have from the elliptic regularity that $\tilde{v}_k \rightarrow \tilde{v}$ in $C^2([0,L])$, therefore $\tilde{v}'(x)\geq 0, x\in(0,L)$. Applying the Strong Maximum Principle and Hopf's Lemma to (\ref{s13}), we have that either $\tilde{v}'>0$ or $\tilde{v}'\equiv0$ on $(0,L)$. In the latter case, we must have $(\tilde{v},\tilde{\lambda}) \equiv (\bar{v},\bar{\lambda})$, and this contradicts the definition of $\mathcal{S}_u^0$. Thus we have shown that $\tilde v'>0$ on $(0,L)$ and this finishes the proof of (iii).
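Equation (\ref{s13}) is obtained by differentiating $\epsilon v''+\big(a_2-\frac{b_2\lambda}{1+v}-c_2v\big)v=0$ in $x$, so its zeroth-order coefficient is the negative of $\partial_v\big[\big(a_2-\frac{b_2\lambda}{1+v}-c_2v\big)v\big]$. A quick symbolic sanity check of this coefficient (illustrative only; the script is not part of the argument):

```python
import sympy as sp

v, lam, a2, b2, c2 = sp.symbols('v lam a2 b2 c2')

# reaction term of the first equation in (1)
f = (a2 - b2*lam/(1 + v) - c2*v)*v

# zeroth-order coefficient of v' in (s13); it should equal -f_v
coeff_s13 = 2*c2*v + b2*lam/(1 + v)**2 - a2

assert sp.simplify(sp.diff(f, v) + coeff_s13) == 0
```

Thus $\tilde v'$ satisfies the linear elliptic equation with coefficient $-\bar f_v$ evaluated along $(\tilde v,\tilde\lambda)$, to which the maximum principle and Hopf's lemma apply.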
According to Theorem 4.4 in Shi and Wang \cite{SW}, $\mathcal{S}_{u}$ satisfies one of the following alternatives: $(a1)$ it is not compact in $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$; $(a2)$ it contains a point $(\bar{v},\bar{\lambda},\epsilon_*)$ with $\epsilon_* \neq \epsilon_1$; $(a3)$ it contains a point $(\bar{v}+\hat{v},\bar{\lambda}+\hat{\lambda},\epsilon)$ where $0\neq (\hat{v},\hat{\lambda}) \in \mathcal{Z}$, and $\mathcal{Z}$ is a closed complement of $\mathcal{N}\left(D_{(v,\lambda)}\mathcal{F}(\bar{v},\bar{\lambda},\epsilon_1) \right)=\text{span}\{ (\cos \frac{\pi x}{L},0)\}$. If $(a2)$ occurs, then $\epsilon_*$ must be one of the bifurcation values $\epsilon_k$, $k\geq2$, and in a neighborhood of $(\bar{v},\bar{\lambda})$, $v(x)$ takes the form $v(x)=\bar{v}+s \cos\frac{k\pi x}{L}+o(s)$ according to (\ref{14}). This contradicts the monotonicity of $v(x)$, and thus $(a2)$ cannot happen. If $(a3)$ occurs, we can choose the complement to be \[\mathcal{Z}=\Big\{(v,\lambda)\in \mathcal{X} \times \Bbb{R}^+ ~\Big \vert \int_0^L v(x)\cos \frac{\pi x}{L}dx =0 \Big\}. \] However, for any such $(v,\lambda) \in \mathcal{Z}$ with $v'>0$ on $(0,L)$, we have from integration by parts that \[0=\int_0^L v(x)\cos \frac{\pi x}{L} dx =-\frac{L}{\pi} \int_0^L v'(x) \sin \frac{\pi x}{L} dx<0,\] which is also a contradiction. Therefore we have shown that only alternative $(a1)$ occurs, and $\mathcal{S}_{u}$ is not compact in $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$. Now we will study the behavior of $\mathcal{S}_u$; that of $\mathcal{S}_l$ can be obtained in the exact same way. First of all, we claim that the projection of $\mathcal{S}_u$ onto the $\epsilon$-axis does not contain an interval of the form $(\epsilon_0,\infty)$ for any $\epsilon_0>0$, and it is sufficient to show that there exists a positive constant $\bar \epsilon_0$ such that (\ref{1}) has only the constant positive solution $(\bar{v},\bar{\lambda})$ if $\epsilon \in (\bar \epsilon_0,\infty)$.
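The integration by parts used above to rule out $(a3)$ produces no boundary terms because $\sin\frac{\pi x}{L}$ vanishes at both $x=0$ and $x=L$, so the identity $\int_0^L v\cos\frac{\pi x}{L}dx=-\frac{L}{\pi}\int_0^L v'\sin\frac{\pi x}{L}dx$ holds for any smooth $v$. A quick symbolic check with an arbitrary sample function (chosen purely for illustration):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

# any smooth sample function on [0, L]; no boundary conditions are needed
v = x**2 * (3*L - 2*x)

lhs = sp.integrate(v * sp.cos(sp.pi*x/L), (x, 0, L))
rhs = -L/sp.pi * sp.integrate(sp.diff(v, x) * sp.sin(sp.pi*x/L), (x, 0, L))

assert sp.simplify(lhs - rhs) == 0
```

For $v$ with $v'>0$ on $(0,L)$ the right-hand integrand is positive, which gives the strict inequality in the text.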
To prove the claim, we decompose a solution $v(x)$ of (\ref{1}) as \[v=\bar{v}_{\text{ave}}+w,\] where $\bar{v}_{\text{ave}}=\frac{1}{L} \int_0^L vdx$ and $\int_0^L w dx=0$. Then we readily see that $w$ satisfies \begin{equation}\label{s14a} \left\{ \begin{array}{ll} \epsilon w''+(a_2-\frac{b_2 \lambda}{1+\bar{v}_{\text{ave}}+w}-c_2 \bar{v}_{\text{ave}}-c_2 w )(\bar{v}_{\text{ave}}+w)=0,~x \in (0,L),\\ \int_0^L w dx =0,~ w'(0)=w'(L)=0. \end{array} \right. \end{equation} Multiplying both sides of (\ref{s14a}) by $w$ and then integrating over $(0,L)$, we have that \[\epsilon \int_0^L (w')^2 dx=(a_2-2c_2\bar{v}_{\text{ave}})\int_0^L w^2 dx-b_2\lambda \int_0^L \frac{(\bar{v}_{\text{ave}}+w)w}{1+\bar{v}_{\text{ave}}+w}dx-c_2\int_0^L w^3dx.\] We can easily show from the Maximum Principle that both $v(x)$ and $\lambda$ are uniformly bounded in $\epsilon$, and then we have from the identity above that \[\epsilon \int_0^L (w')^2 dx \leq C_0(a_2,c_2,\bar{v}_{\text{ave}})\int_0^L w^2 dx.\] Since $\int_0^L w dx=0$ implies $\int_0^L (w')^2 dx\geq (\pi/L)^2\int_0^L w^2 dx$, where $(\pi/L)^2$ is the first positive eigenvalue of $-\frac{d^2}{dx^2}$ subject to Neumann boundary conditions, we reach a contradiction for all $\epsilon>\frac{C_0}{(\pi/L)^2}$ unless $w\equiv0$. If $w\equiv0$, we have that $v\equiv \bar{v}$ and this is a contradiction as we have shown in the case $(a2)$. Therefore the claim is proved. Now we proceed to show that the projection of $\mathcal{S}_u$ onto the $\epsilon$-axis is of the form $(0,\bar{\epsilon}]$ for some $\bar{\epsilon} \geq \epsilon_1$. We argue by contradiction and assume that there exists $\underline \epsilon >0$ such that $(\underline \epsilon ,\bar{\epsilon})$ is contained in this projection, but this projection does not contain any $\epsilon<\underline \epsilon $. Then, since $\epsilon\geq\underline\epsilon$ on $\mathcal{S}_u$, we have from the uniform boundedness of $\Vert v_\epsilon(x)\Vert_\infty$ and Sobolev embedding that $\Vert v_\epsilon \Vert_{C^3([0,L])} \leq C$, $\forall(v_\epsilon,\lambda_\epsilon,\epsilon) \in \mathcal{S}_u$, and this implies that $\mathcal{S}_u$ is compact in $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$.
We reach a contradiction to alternative $(a1)$. Therefore $\mathcal{S}_u$ extends to infinity vertically in $\mathcal{X} \times \Bbb{R}^+ \times \Bbb{R}^+$. This finishes the proof of (iii) and Theorem \ref{thm31}. \end{proof} We have from Theorem \ref{thm31} that there exist positive and monotone solutions $v_\epsilon(\lambda_\epsilon,x)$ to (\ref{1}) for all $\epsilon\in(0,\epsilon_1)$. If $v(x)$ is a monotone increasing solution to (\ref{1}), then $v(L-x)$ is a monotone decreasing solution. Hence we can construct infinitely many non-monotone solutions of (\ref{1}) by reflecting and periodically extending $v(x)$ and $v(L-x)$ at $x=...,-L,0,L,...$ \section{Stability of bifurcating solutions from $(\bar v,\bar \lambda, \epsilon_k)$} In this section, we proceed to investigate the stability or instability of the spatially inhomogeneous solution $(v_k(s,x),\lambda_k(s))$ that bifurcates from $(\bar v,\bar \lambda)$ at $\epsilon=\epsilon_k$. Here the stability refers to that of the inhomogeneous pattern taken as an equilibrium of the time-dependent system corresponding to (\ref{1}). To this end, we apply the classical results of Crandall and Rabinowitz \cite{CR2} on linearized stability, together with an analysis of the spectrum of system (\ref{1}). First of all, we determine the direction in which the bifurcation curve $\Gamma_k(s)$ turns around $(\bar v,\bar \lambda,\epsilon_k)$. According to Theorem 1.7 from \cite{CR}, the bifurcating solutions $(v_k(s,x),\lambda_k(s),\epsilon_k(s))$ are smooth functions of $s$ and they admit the following expansions \begin{equation}\label{18} \left \{ \begin{array}{ll} v_k(s,x)=\bar v + s \cos\frac{k\pi x}{L}+s^2\varphi_2+s^3\varphi_3 +o(s^3),\\ \lambda_k(s)=\bar \lambda + s^2 \bar \lambda_2 +s^3 \bar \lambda_3+o(s^3),\\ \epsilon_k(s)= \epsilon_k+ s \mathcal{K}_1+s^2\mathcal{K}_2+o(s^2),\\ \end{array} \right.
\end{equation} where $\varphi_i \in H^2(0,L)$ satisfies $\int_0^L \varphi_i \cos \frac{k\pi x}{L}dx=0$ for $i=2,3$, and $\bar \lambda_2$, $\bar \lambda_3$, $\mathcal{K}_1$, $\mathcal{K}_2$ are constants to be determined. We note that $o(s^3)$ in the $v$--equation of (\ref{18}) is taken in the norm of $H^2(0,L)$. For notational simplicity, we denote in (\ref{1}) \begin{equation}\label{19} f(v,\lambda)=\big(a_2-\frac{b_2 \lambda}{1+v}-c_2v\big)v,~g(v,\lambda)=\frac{a_1-c_1v}{1+v}-\frac{b_1 \lambda}{(1+v)^2}. \end{equation} Moreover, we introduce the notations \begin{equation}\label{20} \bar f_v=\frac{\partial f(v,\lambda)}{\partial v}\vert_{(v,\lambda)=(\bar v,\bar\lambda)},~~\bar f_\lambda=\frac{\partial f(v,\lambda)}{\partial \lambda}\vert_{(v,\lambda)=(\bar v,\bar\lambda)} \end{equation} and we can define $\bar f_{v\lambda}, \bar f_{vvv}, \bar g_{v}, \bar g_{\lambda}, \bar g_{v\lambda}, \bar g_{vv}$, etc., in the same manner. Our analysis and calculations involve these quantities heavily, and we also remind the reader that $f(\bar v,\bar \lambda)=g(\bar v,\bar \lambda)=0$. By substituting (\ref{18}) into (\ref{1}) and collecting the $s^2$-terms, we obtain that \begin{equation}\label{21} \epsilon_k \varphi''_2-\mathcal{K}_1 \Big(\frac{k\pi}{L}\Big)^2\cos\frac{k\pi x}{L} +\bar f_v \varphi_2 +\bar f_\lambda \bar \lambda_2+\frac{1}{2}\bar f_{vv}\cos^2\frac{k\pi x}{L}=0. \end{equation} Multiplying (\ref{21}) by $\cos\frac{k\pi x}{L}$ and integrating it over $(0,L)$ by parts, we see that \[\frac{k^2\pi^2}{2L}\mathcal{K}_1=\Big(-\epsilon_k\Big(\frac{k\pi}{L}\Big)^2+\bar f_v\Big) \int_0^L \varphi_2 \cos \frac{k\pi x}{L}dx =0,\] therefore $\mathcal{K}_1=0$ and the bifurcation at $(\bar v,\bar \lambda, \epsilon_k)$ is of pitchfork type for all $\epsilon_k$, $k\in \Bbb{N}^+$.
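The computation of $\mathcal{K}_1$ above, and that of $\mathcal{K}_2$ below, rely only on elementary trigonometric integrals over $(0,L)$, namely $\int_0^L\cos\frac{k\pi x}{L}dx=\int_0^L\cos^3\frac{k\pi x}{L}dx=0$, $\int_0^L\cos^2\frac{k\pi x}{L}dx=\frac{L}{2}$, $\int_0^L\cos^4\frac{k\pi x}{L}dx=\frac{3L}{8}$ and $\int_0^L\cos^2\frac{k\pi x}{L}\cos\frac{2k\pi x}{L}dx=\frac{L}{4}$. A quick symbolic check for the first few modes (illustrative only):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)

# verify the orthogonality relations for the first few integer modes k
for k in range(1, 4):
    c = sp.cos(k*sp.pi*x/L)
    assert sp.simplify(sp.integrate(c, (x, 0, L))) == 0
    assert sp.simplify(sp.integrate(c**2, (x, 0, L)) - L/2) == 0
    assert sp.simplify(sp.integrate(c**3, (x, 0, L))) == 0
    assert sp.simplify(sp.integrate(c**4, (x, 0, L)) - 3*L/8) == 0
    assert sp.simplify(sp.integrate(c**2*sp.cos(2*k*sp.pi*x/L), (x, 0, L)) - L/4) == 0
```

The last relation is the one used in deriving (\ref{24}).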
By collecting the $s^3$-terms from (\ref{1}), we have \begin{equation}\label{22} \begin{array}{ll} &\epsilon_k \varphi''_3+\bar f_v \varphi_3+\bar f_\lambda \bar \lambda_3-\mathcal{K}_2 \Big(\frac{k\pi}{L}\Big)^2\cos\frac{k\pi x}{L}\\ +&\Big(\bar f_{vv}\varphi_2+\bar f_{v\lambda}\bar\lambda_2 \Big)\cos\frac{k\pi x}{L}+\frac{1}{6} \bar f_{vvv}\cos^3\frac{k\pi x}{L}=0. \end{array} \end{equation} Testing (\ref{22}) with $\cos\frac{k\pi x}{L}$, we conclude through straightforward calculations that \begin{equation}\label{23} \frac{k^2\pi^2}{2L}\mathcal{K}_2=\frac{1}{2}\bar f_{vv} \Big(\int_0^L \varphi_2 \cos \frac{2k\pi x}{L}dx+\int_0^L \varphi_2 dx\Big)+\frac{L}{2} \bar f_{v\lambda} \bar \lambda_2+\frac{L}{16}\bar f_{vvv}. \end{equation} Therefore, we need to evaluate the integrals $\int_0^L \varphi_2 \cos \frac{2k\pi x}{L}dx $ and $\int_0^L \varphi_2 dx $ as well as $\bar \lambda_2$ to find the value of $\mathcal{K}_2$. Multiplying (\ref{21}) by $\cos\frac{2k\pi x}{L}$ and then integrating it over $(0,L)$ by parts, since $\mathcal{K}_1=0$, we have from straightforward calculations that \begin{equation}\label{24} \int_0^L \varphi_2 \cos \frac{2k\pi x}{L}dx=\frac{L}{24\epsilon_k}(\frac{L}{k\pi})^2 \bar f_{vv}=\frac{L^3}{12\epsilon_k (k\pi)^2}\Big(\frac{b_2\bar \lambda}{(1+\bar v)^3}-c_2\Big). \end{equation} Integrating (\ref{21}) over $(0,L)$ by parts, we have that \begin{equation}\label{25} \bar f_v \int_0^L \varphi_2 dx+\bar f_\lambda \bar \lambda_2 L+\frac{L}{4}\bar f_{vv}=0, \end{equation} and we also record for the coming calculations the fact that $\bar f_v=\epsilon_k \big(\frac{k\pi}{L}\big)^2$, which keeps the solutions below in a neat form. Furthermore, we collect the $s^2$-terms from the integral constraint in (\ref{1}) and have that \begin{equation}\label{26} \bar g_v \int_0^L \varphi_2 dx+\bar g_\lambda \bar \lambda_2 L+\frac{L}{4}\bar g_{vv}=0.
\end{equation} Solving (\ref{25}) and (\ref{26}) leads us to \begin{equation}\label{27} \int_0^L \varphi_2 dx =\frac{(\bar f_\lambda\bar g_{vv}-\bar f_{vv} \bar g_\lambda)L }{4(\bar f_v\bar g_\lambda -\bar f_\lambda\bar g_v)},~\bar \lambda_2 =\frac{ \bar f_{vv} \bar g_{v}-\bar f_{v} \bar g_{vv} }{4(\bar f_v\bar g_\lambda -\bar f_\lambda\bar g_v)}. \end{equation} By substituting (\ref{24}) and (\ref{27}) into (\ref{23}), we obtain that \begin{equation}\label{28} \frac{(k\pi)^2}{2L^2} \mathcal{K}_2=\frac{\bar f_{vv}(\bar f_\lambda\bar g_{vv}-\bar f_{vv}\bar g_\lambda)-\bar f_{v\lambda}(\bar f_v\bar g_{vv}-\bar g_v\bar f_{vv})}{8(\bar f_v \bar g_\lambda -\bar g_v\bar f_\lambda)}+\frac{\bar f^2_{vv}}{48 \bar f_v}+\frac{\bar f_{vvv}}{16}. \end{equation} On the other hand, we have from straightforward calculations that \begin{equation}\label{29} \bar f_v=\frac{b_2\bar \lambda \bar v}{(1+\bar v)^2}-c_2\bar v,~\bar f_\lambda=-\frac{b_2 \bar v}{1+\bar v},~\bar f_{v\lambda}=-\frac{b_2}{(1+\bar v)^2}, \end{equation} \begin{equation}\label{30} \bar f_{vv}=\frac{2b_2\bar \lambda}{(1+\bar v)^3}-2c_2,~\bar f_{vvv}=-\frac{6b_2 \bar \lambda}{(1+\bar v)^4}, \end{equation} and \begin{equation}\label{31} \bar g_v=-\frac{a_1+c_1}{(1+\bar v)^2}+\frac{2b_1 \bar \lambda}{(1+\bar v)^3},~\bar g_\lambda=-\frac{b_1 }{(1+\bar v)^2},~\bar g_{vv}=\frac{2(a_1+c_1)}{(1+\bar v)^3}-\frac{6b_1 \bar \lambda}{(1+\bar v)^4}. \end{equation} Moreover, we can also have that \begin{equation}\label{32} \bar f_v\bar g_\lambda -\bar f_\lambda\bar g_v=\frac{(b_1c_2-b_2c_1)\bar v}{(1+\bar v)^2}. \end{equation} For the simplicity of calculations, we assume that $b_1=0$ from now on. Substituting (\ref{29})--(\ref{32}) into (\ref{28}), we have that \begin{eqnarray}\label{33} \frac{(k\pi)^2}{2L^2} \mathcal{K}_2 \!\!\!\!\!\! &= \!\!\!\!\!\!
& \frac{\frac{2(a_1+c_1)}{(1+\bar v)^3}\big(\frac{2b_2\bar \lambda}{(1+\bar v)^3}\!-\!2c_2 \big)\big(\frac{-b_2\bar v}{1+\bar v} \big)\!+\!\frac{b_2 }{(1+\bar v)^2}\big(\frac{b_2 \bar \lambda}{1+\bar v}\!-\!c_2(1+2\bar v) \big) }{-8b_2c_1\bar v/(1+\bar v)^2 }\!+\!\frac{\bar f^2_{vv}}{48\bar f_v}\!+\!\frac{\bar f_{vvv}}{16} \nonumber \\ \!\!\!\!\!\! &=\!\!\!\!\!\! & -\frac{(a_1+c_1)\frac{b_2\bar \lambda(1-2\bar v) }{(1+\bar v)^3}+\frac{2c_2\bar v^2-c_2}{1+\bar v} }{4c_1\bar v(1+\bar v)^2}+\frac{\big(\frac{2b_2\bar \lambda}{(1+\bar v)^3}-2c_2\big)^2}{48(\frac{b_2\bar \lambda \bar v}{(1+\bar v)^2}-c_2\bar v)}-\frac{3 b_2 \bar \lambda}{8(1+\bar v)^4}. \end{eqnarray} For notational simplicity, we introduce \[t=\frac{b_2\bar \lambda}{(1+\bar v)^2}=\frac{a_2-c_2(1+\bar v)}{1+\bar v},\] then one can easily see that (\ref{13}) implies that bifurcation occurs at $(\bar v,\bar \lambda,\epsilon_k)$ if and only if $t>c_2$, and we shall assume that $t>c_2$ from now on. In terms of the new variable $t$, we observe that (\ref{33}) becomes \begin{eqnarray}\label{34} \frac{(k\pi)^2}{2L^2} \mathcal{K}_2 \!\!\!&=\!\!\!& -\frac{\frac{ 1-2\bar v }{ 1+\bar v }t+\frac{2c_2\bar v^2-c_2}{1+\bar v} }{4c_1\bar v(1+\bar v) }+\frac{\big(\frac{t}{1+\bar v} -c_2\big)^2}{12\bar v (t-c_2)}-\frac{3t}{8(1+\bar v)^2} \nonumber \\ \!\!\!&=\!\!\!& \frac{-\frac{3(t-c_2)}{1+\bar v} \Big(\frac{ 1-2\bar v }{ 1+\bar v }t+\frac{2c_2\bar v^2-c_2}{1+\bar v} \Big)+\big(\frac{t}{1+\bar v} -c_2\big)^2-\frac{9\bar v t(t-c_2)}{2(1+\bar v)^2} }{12\bar v (t-c_2)} \\ \!\!\!&=\!\!\!&\frac{F(t)}{24 \bar v(1+\bar v)^2 (t-c_2)}=\frac{\alpha t^2+\beta t+\gamma}{24 \bar v(1+\bar v)^2 (t-c_2)},~t>c_2, \nonumber \end{eqnarray} where in the last line of (\ref{34}) we have used the notations \begin{equation}\label{35} \alpha=3\bar v-4,~\beta=-(12\bar v^2+7\bar v-8)c_2,~\gamma=(14\bar v^2+4\bar v-4)c^2_2.
\end{equation} Now we are ready to determine the sign of $\mathcal{K}_2$, which is crucial in the stability analysis of $(v_k(s,x),\lambda_k(s),\epsilon_k(s))$ as we shall see later. To this end, we first have from straightforward calculations that \[F(c_2)=2 c_2^2\bar v^2>0;\] moreover, if $\bar v\neq \frac{4}{3}$, the quadratic function $F(t)$ has discriminant $(144\bar v^4+33\bar v^2)c_2^2$, hence $F(t)=0$ always has two real roots, which are \begin{eqnarray}\label{36} t_1^*=\frac{\big((12 \bar v^2+7\bar v-8)-\sqrt{144\bar v^4+33\bar v^2}\big)c_2}{2(3\bar v -4)},\nonumber \\ t_2^*=\frac{\big((12 \bar v^2+7\bar v-8)+\sqrt{144\bar v^4+33\bar v^2}\big)c_2}{2(3\bar v -4)}; \end{eqnarray} furthermore, we readily see that $-\frac{\beta}{2\alpha}-c_2=\frac{(12\bar v^2+\bar v)c_2}{2\alpha}$, and it implies that $t_1^*<c_2<t^*_2$ if $\bar v \in (0,\frac{4}{3})$ and $c_2<t_1^*<t^*_2$ if $\bar v \in (\frac{4}{3},\infty)$. In particular, if $\bar v =\frac{4}{3}$, we have that $F(t)=\beta t+\gamma=-\frac{68c_2}{3}t+\frac{236c_2^2}{9}$ and it has a unique positive root $\frac{59c_2}{51}$. Then we have the following results on the signs of $\mathcal{K}_1$ and $\mathcal{K}_2$. \begin{proposition} Suppose that (\ref{13}) holds and the bifurcating solutions take the form (\ref{18}). Then $\mathcal{K}_1=0$ and the bifurcation branch is of pitchfork type at $(\bar v,\bar \lambda,\epsilon_k)$ for each $k\in \Bbb{N}^+$. Moreover, we assume that $b_1=0$ and denote $t=\frac{a_2-c_2(1+\bar v)}{1+\bar v}$; then we have that, \\ if $\bar v \in (0,\frac{4}{3})$, $\mathcal{K}_2>0$ for $t\in(c_2,t^*_2) $ and $\mathcal{K}_2<0$ for $t\in(t^*_2,\infty)$; \\ if $\bar v =\frac{4}{3}$, $\mathcal{K}_2>0$ for $t\in(c_2,\frac{59 c_2}{51}) $ and $\mathcal{K}_2<0$ for $t\in(\frac{59 c_2}{51},\infty)$;\\ if $\bar v \in ( \frac{4}{3},\infty)$, $\mathcal{K}_2>0$ for $t\in(c_2,t^*_1) \cup (t^*_2,\infty)$ and $\mathcal{K}_2<0$ for $t\in(t^*_1,t^*_2)$.
\end{proposition} The graphs of $\mathcal{K}_2$ as a function of $t$ are illustrated in Figure 1. It should be observed that $\mathcal{K}_2>0$ for $t$ slightly larger than $c_2$, regardless of $\bar v$; note that the bifurcation value satisfies $\epsilon_k\ll 1$ in this situation. \begin{remark} We want to note that the assumption $b_1=0$ is made only for the sake of mathematical simplicity, since $\mathcal{K}_2$ becomes extremely complicated to calculate if $b_1>0$ in (\ref{1}). On the other hand, we shall see in Section 4 that, even when $b_1=0$, system (\ref{1}) admits solutions with a single transition layer for $\epsilon$ being sufficiently small. Moreover, this limiting condition is also necessary in our analysis of the transition-layer solutions in Section 4. \end{remark} \begin{figure}[!htb] \minipage{.4\textwidth} \includegraphics[width=1.5in]{a.eps} \caption*{$\bar v\in(0,\frac{4}{3})$}\label{fig:awesome_image1} \endminipage \minipage{.4\textwidth} \includegraphics[width=1.5in]{c.eps} \caption*{$\bar v=\frac{4}{3}$}\label{fig:awesome_image2} \endminipage \minipage{.4\textwidth}% \includegraphics[width=1.5in]{b.eps} \caption*{$\bar v\in(\frac{4}{3},\infty)$}\label{fig:awesome_image3} \endminipage \caption{Graphs of $\mathcal{K}_2$ as a function of $t=\frac{a_2-c_2(1+\bar v)}{1+\bar v}$. The assumption that $t>c_2$ is required since bifurcation occurs at $(\bar v,\bar \lambda,\epsilon_k)$ only when (\ref{13}) holds. } \end{figure} We are ready to present the following result on the stability of the bifurcating solution $(v_k(s,x),\lambda_k(s))$ established in Theorem 2.2. Here the stability refers to the stability of the inhomogeneous solutions taken as an equilibrium of the time-dependent counterpart of (\ref{1}). \begin{theorem}\label{thm31} Assume that (\ref{13}) is satisfied.
Then for each $k\in \Bbb{N}^+$: if $\mathcal{K}_2>0$, the bifurcation curve $\Gamma_k(s)$ at $(\bar v,\bar \lambda,\epsilon_k)$ turns to the right and the bifurcating solution $(v_k(s,x),\lambda_k(s))$ is unstable; if $\mathcal{K}_2<0$, $\Gamma_k(s)$ turns to the left and $(v_k(s,x),\lambda_k(s))$ is asymptotically stable. \end{theorem} The bifurcation branches described in Theorem \ref{thm31} are formally presented in Figure 2, where the solid curve represents stable bifurcating solutions and the dashed curve represents unstable ones. To study the stability of the bifurcating solution from $(\bar v,\bar \lambda,\epsilon_k)$, we linearize (\ref{1}) at $(v_k(s,x),\lambda_k(s),\epsilon_k(s))$. By the principle of linearized stability in Theorem 8.6 of \cite{CR}, to show that they are asymptotically stable, we need to prove that each eigenvalue $\eta$ of the following elliptic problem has negative real part: \[D_{(v,\lambda)}\mathcal{F}(v_k(s,x),\lambda_k(s),\epsilon_k(s))(v,\lambda)=\eta (v,\lambda),~(v,\lambda)\in\mathcal{X} \times \Bbb{R}.\] We readily see that this eigenvalue problem is equivalent to \begin{equation}\label{37} \left\{ \begin{array}{ll} \epsilon_k(s) v''+\big(a_2-\frac{b_2\lambda_k(s)}{(1+v_k(s,x))^2}-2c_2v_k(s,x) \big)v- \frac{b_2v_k(s,x)}{1+v_k(s,x)}\lambda=\eta v,&x\in(0,L), \\ \int_0^L \big(\frac{2b_1\lambda_k(s)}{(1+v_k(s,x))^3}-\frac{a_1+c_1}{(1+v_k(s,x))^2} \big)v- \frac{b_1 \lambda}{(1+v_k(s,x))^2} dx=\eta \lambda,\\ v'(0)=v'(L)=0, \end{array} \right. \end{equation} where $v_k(s,x)$, $\lambda_k(s)$ and $\epsilon_k(s)$ are as established in Theorem 2.2. On the other hand, we observe that $0$ is a simple eigenvalue of $D_{(v,\lambda)}\mathcal{F}(\bar v,\bar \lambda,\epsilon_k)$ with an eigenspace $\text{span}\{(\cos \frac{k\pi x}{L},0)\}$.
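At $s=0$ the zeroth-order coefficient of $v$ in the first equation of (\ref{37}) reduces to $\bar f_v=\frac{(a_2-c_2-2c_2\bar v)\bar v}{1+\bar v}$, using the equilibrium relation $b_2\bar\lambda=(a_2-c_2\bar v)(1+\bar v)$, i.e. $f(\bar v,\bar\lambda)=0$ with $\bar v>0$; this is the form appearing in (\ref{39}) below. A quick symbolic check of the simplification (illustrative only):

```python
import sympy as sp

vb, a2, b2, c2 = sp.symbols('vb a2 b2 c2', positive=True)

# equilibrium relation f(vbar, lambdabar) = 0 with vbar > 0
lam_bar = (a2 - c2*vb)*(1 + vb)/b2

coeff = a2 - b2*lam_bar/(1 + vb)**2 - 2*c2*vb   # coefficient of v in (37) at s = 0
fv_bar = (a2 - c2 - 2*c2*vb)*vb/(1 + vb)        # form appearing in (39)

assert sp.simplify(coeff - fv_bar) == 0
```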
It follows from Corollary 1.13 in \cite{CR} that there exist an interval $I$ with $\epsilon_k\in I$ and continuously differentiable functions $\epsilon\in I\rightarrow \mu(\epsilon),~s\in(-\delta,\delta) \rightarrow \eta(s)$ with $\eta(0)=0$ and $\mu(\epsilon_k)=0$ such that $\eta(s)$ is an eigenvalue of (\ref{37}) and $\mu(\epsilon)$ is an eigenvalue of the following eigenvalue problem \begin{equation}\label{38} D_{(v,\lambda)}\mathcal{F}(\bar v,\bar \lambda,\epsilon)(v,\lambda)=\mu(v,\lambda),~(v,\lambda)\in \mathcal{X} \times \Bbb{R}; \end{equation} moreover, $\eta(s)$ is the only eigenvalue of (\ref{37}) in any fixed neighbourhood of the origin of the complex plane (the same assertion can be made on $\mu(\epsilon)$). We also know from \cite{CR} that the eigenfunctions of (\ref{38}) can be represented by $(v(\epsilon,x),\lambda(\epsilon))$, which depend on $\epsilon$ smoothly and are uniquely determined through $\big(v(\epsilon_k,x),\lambda(\epsilon_k)\big)=\big(\cos \frac{k\pi x}{L},0 \big)$, together with $\big(v(\epsilon,x)- \cos \frac{k\pi x}{L}, \lambda(\epsilon) \big) \in \mathcal{Z}$. \begin{proof} [Proof\nopunct] \emph{of Theorem} \ref{thm31}. Differentiating (\ref{38}) with respect to $\epsilon$ and setting $\epsilon=\epsilon_k$, we arrive at the following system since $\mu(\epsilon_k)=0$ \begin{equation}\label{39} \left\{ \begin{array}{ll} -\big(\frac{k\pi}{L}\big)^2\cos \frac{k\pi x}{L}+\epsilon_k \dot{v}''+ \big( \frac{a_2-c_2-2c_2 \bar{v}}{1+\bar{v}} \big)\bar{v}\dot{v}- \frac{b_2\bar{v}}{1+\bar{v}}\dot{\lambda}=\dot{\mu}(\epsilon_k)\cos \frac{k\pi x}{L},&x\in(0,L), \\ \int_0^L\big( \frac{a_1-c_1-2c_1 \bar{v}}{(1+\bar{v})^2} \big)\dot{v}- \frac{b_1 \dot{\lambda}}{(1+\bar{v})^2} dx=0,\\ \dot{v}'(0)=\dot{v}'(L)=0, \end{array} \right.
\end{equation} where the dot denotes differentiation with respect to $\epsilon$ evaluated at $\epsilon=\epsilon_k$; in particular $\dot{v}=\frac{\partial v(\epsilon,x)}{\partial \epsilon}\big\vert_{\epsilon=\epsilon_k}$, $\dot{\lambda}=\frac{\partial \lambda(\epsilon)}{\partial \epsilon}\big\vert_{\epsilon=\epsilon_k}$. Multiplying the first equation of (\ref{39}) by $\cos\frac{k\pi x}{L}$ and integrating it over $(0,L)$ by parts, we obtain that \[\dot{\mu}(\epsilon_k)=-\big(\frac{k\pi}{L}\big)^2.\] According to Theorem 1.16 in \cite{CR}, the functions $\eta(s)$ and $-s\epsilon'_k(s)\dot{\mu}(\epsilon_k)$ have the same zeros and the same signs for $s\in(-\delta,\delta)$. Moreover \[\lim_{s\rightarrow 0,~\eta(s)\neq0}\frac{-s\epsilon'_k(s)\dot{\mu}(\epsilon_k)}{\eta(s)}=1.\] Now, since $\mathcal{K}_1=0$, it follows that $\lim_{s\rightarrow 0} \frac{s^2\mathcal{K}_2 }{\eta(s)}=\frac{1}{2}\big( \frac{L}{k\pi}\big)^2$ and we readily see that $\text{sgn}(\eta(s))=\text{sgn}(\mathcal{K}_2)$ for $s\in(-\delta,\delta)$, $s\neq0$. Therefore, we have proved Theorem \ref{thm31} according to the discussion above. \begin{figure}[!htb] \minipage{.6\textwidth} \includegraphics[width=1.85in]{1.eps} \caption*{An illustration of the bifurcation branch for $\mathcal{K}_2<0$}\label{fig:awesome_image4} \endminipage \minipage{.6\textwidth} \includegraphics[width=1.8in]{2.eps} \caption*{An illustration of the bifurcation branch for $\mathcal{K}_2>0$}\label{fig:awesome_image5} \endminipage \caption{Pitchfork-type bifurcation branches.
The solid line represents the stable bifurcating solution $(v_k(s,x),\lambda_k(s),\epsilon_k(s))$ and the dashed line represents the unstable one.} \end{figure} \end{proof} Thanks to (\ref{12}), there always exist nonconstant positive solutions to (\ref{1}) for each $\epsilon$ being small. However, according to Proposition 1 and Theorem \ref{thm31}, the small-amplitude bifurcating solutions $(v_k(s,x),\lambda_k(s),\epsilon_k(s))$ are unstable for $\epsilon_k$ being sufficiently small. Therefore, we are motivated to find positive solutions to (\ref{1}) that have large amplitude. \section{Existence of transition-layer solutions} In this section, we show that, for $\epsilon$ being sufficiently small, system (\ref{1}) always admits solutions with a single transition layer, which approximate a step function over $[0,L]$. For the simplicity of calculations, we assume that $b_1=0$ and consider the following system throughout the section: \begin{equation}\label{40} \left\{ \begin{array}{ll} \epsilon v''+(a_2-\frac{b_2 \lambda}{1+v}-c_2v)v=0,&x \in (0,L), \\ v'(0)=v'(L)=0,\\ \int_0^L \frac{a_1-c_1v}{1+v}dx= 0. \end{array} \right. \end{equation} We first construct the transition-layer solution $v_{\epsilon}(\lambda,x)$ of (\ref{40}) without the integral constraint, with $\lambda$ being fixed and $\epsilon$ being sufficiently small. We then proceed to find $\lambda=\lambda_\epsilon$ and $v_\epsilon(\lambda_\epsilon,x)$ such that the integral condition is satisfied. In particular, we are concerned with $v_\epsilon(x)$ that has a single transition layer over $(0,L)$, and we can construct solutions with multiple layers by reflection and periodic extensions of $v_\epsilon(x)$ at $x=...,-2L,-L,0,L,2L,...$ To this end, we first study the following equation \begin{equation}\label{41} \left\{ \begin{array}{ll} \epsilon v''+f(\lambda, v)=0,&x \in (0,L), \\ v(x)>0,& x\in(0,L),\\ v'(0)=v'(L)=0, \end{array} \right.
\end{equation} where \[f(\lambda,v)=\Big(a_2-\frac{b_2 \lambda}{1+v}-c_2v\Big)v,\] and $\lambda$ is a positive constant independent of $\epsilon$. It is easy to see that (\ref{41}) has three constant solutions $\bar v_0(\lambda)=0$ and $\bar v_1(\lambda)\leq \bar v_2(\lambda)$ for each $\lambda \leq \frac{(a_2+c_2)^2}{4b_2c_2}$, where \begin{equation}\label{42} \bar v_1(\lambda)=\frac{a_2\!\!-\!\!c_2\!-\!\sqrt{(a_2\!\!+\!\!c_2)^2\!\!-\!\!4b_2c_2\lambda}}{2c_2},~\bar v_2(\lambda)=\frac{a_2\!\!-\!\!c_2\!+\!\sqrt{(a_2\!\!+\!\!c_2)^2\!\!-\!\!4b_2c_2\lambda}}{2c_2}, \end{equation} and $0<\bar v_1(\lambda)<\bar v_2(\lambda)$ if and only if \begin{equation}\label{43} \lambda \in \Big(\frac{a_2}{b_2}, \frac{(a_2+c_2)^2}{4b_2c_2} \Big), \end{equation} hence we shall assume (\ref{43}) for our analysis from now on. We also want to note that $\bar v_1(\lambda)=0$ and $\bar v_2(\lambda)=\frac{a_2-c_2}{c_2}$ if $\lambda=\frac{a_2}{b_2}$, and $\bar v_1(\lambda)=\bar v_2(\lambda)=\frac{a_2-c_2}{2c_2}$ if $\lambda=\frac{(a_2+c_2)^2}{4b_2c_2}$. The constant solutions $0$ and $\bar v_2$ are stable and $\bar v_1$ is unstable in the corresponding time-dependent system of (\ref{41}). Moreover, for each $\lambda \in \Big(\frac{a_2}{b_2}, \frac{(a_2+c_2)^2}{4b_2c_2} \Big)$, we know that $f(\lambda,v)$ is of Allen--Cahn type and $f_v(\lambda,0)<0$, $f_v(\lambda,\bar v_2(\lambda))<0$. It is well known from phase plane analysis (see for example \cite{F} or \cite{Ka}) that the following system has a unique smooth solution $V_0(z)$, \begin{equation}\label{44} \left\{ \begin{array}{ll} V_0''+f(\lambda,V_0)=0,~z \in \Bbb{R}, \\ V_0(z) \in (0,\bar v_2(\lambda)),~z\in \Bbb{R},\\ V_0(-\infty)=0,~V_0(\infty)=\bar v_2(\lambda),~V_0(0)=\bar v_2(\lambda)/2; \end{array} \right. \end{equation} moreover, there exist some positive constants $C,\kappa$ dependent on $\lambda$ such that \begin{equation}\label{45} \Big\vert \frac{dV_0(z)}{dz} \Big\vert \leq Ce^{-\kappa \vert z \vert }, ~z\in \Bbb{R}.
\end{equation} Now we construct an approximation of the solution $v_\epsilon(\lambda,x)$ to (\ref{41}) by using the unique solution to (\ref{44}), following \cite{HS}. For each fixed $x_0\in(0,L)$, we denote \[L^*=\min\{x_0,L-x_0\}\] and choose the cut-off functions $\chi_0(y)$ and $\chi_1(y)$ of class $C^\infty([-L,L])$ as \begin{equation}\label{46} \chi_0(y)= \left\{ \begin{array}{ll} 1,&\vert y \vert \leq L^*/4, \\ 0,&\vert y \vert \geq L^*/2,\\ \in (0,1),&\text{otherwise}, \end{array} \right. \end{equation} with $\chi_1(y)=0$ if $y\in[-L,0]$ and $\chi_1(y)=\bar v_2(\lambda)(1-\chi_0(y))$ if $y\in[0,L]$. Set \begin{equation}\label{46a} V_\epsilon(\lambda,x)=\chi_0(x-x_0)V_0(\lambda,\frac{x-x_0}{\sqrt{\epsilon}})+\chi_1(x-x_0). \end{equation} We shall show that, for each $\lambda \in \Big(\frac{a_2}{b_2}, \frac{(a_2+c_2)^2}{4b_2c_2} \Big)$, (\ref{41}) has a solution $v_\epsilon$ of the form \[v_\epsilon(\lambda,x)=V_\epsilon(\lambda,x)+\sqrt \epsilon \Psi(\lambda,x),\] where $\Psi$ is a smooth function on $[0,L]$. Then $\Psi$ satisfies \begin{equation}\label{47} \mathcal{L}_\epsilon \Psi+\mathcal{G}_\epsilon+\mathcal{H}_\epsilon=0, \end{equation} with \begin{equation}\label{48} \mathcal{L}_\epsilon=\epsilon\frac{d^2}{dx^2}+f_v(\lambda,V_\epsilon(\lambda,x)), \end{equation} \begin{equation}\label{49} \mathcal{G}_\epsilon=\epsilon^{-\frac{1}{2}} \Big(\epsilon \frac{d^2 V_\epsilon(\lambda,x)}{dx^2}+f(\lambda,V_\epsilon(\lambda,x)) \Big) \end{equation} and \begin{equation}\label{50} \begin{array}{ll} \mathcal{H}_\epsilon&=\epsilon^{-\frac{1}{2}} \Big(f(\lambda,V_\epsilon(\lambda,x)+\sqrt \epsilon\Psi)-f(\lambda,V_\epsilon(\lambda,x))-\sqrt\epsilon \Psi f_v(\lambda,V_\epsilon(\lambda,x))\Big)\\ &=\Big(\frac{b_2\lambda}{(1+V_\epsilon(\lambda,x))^2(1+V_\epsilon(\lambda,x)+\sqrt \epsilon \Psi)}-c_2 \Big)\sqrt{\epsilon}\Psi^2.
\end{array} \end{equation} As we can see from the above, $\mathcal{G}_\epsilon$ and $\mathcal{H}_\epsilon$ measure the accuracy with which $V_\epsilon(\lambda,x)$ approximates the solution $v_\epsilon(\lambda,x)$. We have the following lemmas on these estimates. \begin{lemma}\label{lem41} Suppose that $\lambda \in \Big(\frac{a_2}{b_2}+\delta, \frac{(a_2+c_2)^2}{4b_2c_2}-\delta \Big)$ for $\delta>0$ small. There exist positive constants $C_1=C_1(\delta)>0$ and $\epsilon_1=\epsilon_1(\delta)>0$ small such that, for all $\epsilon\in(0,\epsilon_1(\delta))$, \[\sup_{x\in(0,L)} \vert\mathcal{G}_\epsilon (x)\vert \leq C_1. \] \end{lemma} \begin{proof} By substituting $V_\epsilon(\lambda,x)=\chi_0(x-x_0)V_0(\lambda,\frac{x-x_0}{\sqrt{\epsilon}})+\chi_1(x-x_0)$ into $\mathcal{G}_\epsilon (x)$, we have from (\ref{44}) that \begin{equation}\label{51} \begin{array}{ll} &\epsilon\frac{d^2V_\epsilon(\lambda,x)}{dx^2}+f(\lambda,V_\epsilon(\lambda,x))\\ &=f\big(\lambda,V_\epsilon\big)-\chi_0f\big(\lambda,V_0\big)+\epsilon\chi''_0V_0+2\sqrt{\epsilon}\chi'_0V'_0+\epsilon \chi''_1\\ &=f\big(\lambda,V_\epsilon\big)-\chi_0f\big(\lambda,V_0\big)+O(\sqrt{\epsilon})\\ &=f\big(\lambda,\chi_0(x-x_0)V_0(\lambda,(x-x_0)/\sqrt{\epsilon})+\chi_1(x-x_0)\big)\\ &\hspace{0.2in}-\chi_0f\big(\lambda,V_0(\lambda,(x-x_0)/\sqrt{\epsilon}) \big)+O(\sqrt{\epsilon}). \end{array} \end{equation} We claim that $\vert f\big(\lambda,V_\epsilon\big)-\chi_0f\big(\lambda,V_0\big) \vert=O(\sqrt{\epsilon})$, and we divide our discussion into several cases. For $\vert x-x_0 \vert \leq L^*/4$, it follows easily from the definitions of $\chi_0$ and $\chi_1$ that $f\big(\lambda,V_\epsilon\big)-\chi_0 f\big(\lambda,V_0\big)=0$. For $x-x_0\geq L^*/2$ and $x-x_0\leq -L^*/2$, we have that $\chi_0=0,~\chi_1=\bar v_2(\lambda)$ and $\chi_0=\chi_1=0$ respectively. Hence $f\big(\lambda,V_\epsilon\big)-\chi_0f\big(\lambda,V_0\big)=0$ in both cases, since $f(\lambda,\bar v_2(\lambda))=f(\lambda,0)=0$.
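The two outer cases rely on $f(\lambda,0)=f(\lambda,\bar v_2(\lambda))=0$, which follows from the factorization $f(\lambda,v)=-\frac{v\,\big(c_2v^2-(a_2-c_2)v+b_2\lambda-a_2\big)}{1+v}$ and the fact that $\bar v_2(\lambda)$ in (\ref{42}) is a root of the quadratic factor. A quick symbolic check (illustrative only):

```python
import sympy as sp

v, lam, a2, b2, c2 = sp.symbols('v lam a2 b2 c2', positive=True)

# reaction term; note f(v) = -v*(c2*v**2 - (a2 - c2)*v + b2*lam - a2)/(1 + v)
f = (a2 - b2*lam/(1 + v) - c2*v)*v
assert sp.simplify(f + v*(c2*v**2 - (a2 - c2)*v + b2*lam - a2)/(1 + v)) == 0

# vbar_2(lambda) from (42) is a root of the quadratic factor, hence of f
v2 = (a2 - c2 + sp.sqrt((a2 + c2)**2 - 4*b2*c2*lam))/(2*c2)
assert sp.expand(c2*v2**2 - (a2 - c2)*v2 + b2*lam - a2) == 0
```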
For $\vert x-x_0 \vert \in (L^*/4, L^*/2)$, since $V_0$ decays exponentially to $\bar v_2(\lambda)$ at $\infty$ and to $0$ at $-\infty$, there must exist a positive constant $C$ which is uniform in $\epsilon$ such that \[\vert f\big(\lambda,V_\epsilon\big)-\chi_0f\big(\lambda,V_0\big) \vert \leq C\sqrt{\epsilon}.\] This proves our claim, and Lemma \ref{lem41} follows from (\ref{51}). \end{proof} The following properties of $\mathcal{H}_\epsilon$ also follow from straightforward calculations. \begin{lemma}\label{lem42} Suppose that $\lambda \in \Big(\frac{a_2}{b_2}+\delta, \frac{(a_2+c_2)^2}{4b_2c_2}-\delta \Big)$ for $\delta>0$ small. For any $R>0$, there exist $C_2=C_2(\delta,R)>0$ and $\epsilon_2=\epsilon_2(\delta,R)>0$ small such that, if $\epsilon\in (0,\epsilon_2)$ and $\Vert \Psi_i \Vert_\infty\leq R$, $i=1,2$, we have that \begin{equation}\label{52} \Vert\mathcal{H}_\epsilon [\Psi_i]\Vert_\infty \leq C_2\sqrt \epsilon \Vert \Psi_i^2 \Vert_\infty, \end{equation} \begin{equation}\label{53} \Vert\mathcal{H}_\epsilon [\Psi_1]-\mathcal{H}_\epsilon [\Psi_2]\Vert_\infty \leq C_2 \sqrt{\epsilon} \Vert \Psi_1^2-\Psi_2 ^2\Vert_\infty. \end{equation} \end{lemma} \begin{proof} Taking $C_2=b_2\lambda+2c_2 $, we can easily show that (\ref{52}) and (\ref{53}) follow from the definition of $\mathcal{H}_\epsilon$ in (\ref{50}). \end{proof} We also need the following properties of the linear operator $\mathcal{L}_\epsilon$ defined in (\ref{48}). \begin{lemma}\label{lem43} Suppose that $\lambda \in \Big(\frac{a_2}{b_2}+\delta, \frac{(a_2+c_2)^2}{4b_2c_2}-\delta \Big)$ for $\delta>0$ small.
For any $p\in[1,\infty]$, there exist $C_3=C_3(\delta,p)>0$ and $\epsilon_3=\epsilon_3(\delta,p)>0$ small such that, for $\epsilon \in (0,\epsilon_3(\delta,p))$, $\mathcal{L}_\epsilon$ with domain $W^{2,p}(0,L)$ has a bounded inverse $\mathcal{L}^{-1}_\epsilon$ and \[ \Vert \mathcal{L}^{-1}_\epsilon g \Vert_p \leq C_3 \Vert g \Vert_p ,~\forall g\in L^p(0,L).\] \end{lemma} \begin{proof} To show that $\mathcal{L}_\epsilon$ is invertible, it is sufficient to show that $\mathcal{L}_\epsilon$ defined on $L^p(0,L)$ with domain $W^{2,p}(0,L)$ has only the trivial kernel. Our proof is quite similar to that of Lemma 5.4 of Lou and Ni in \cite{LN2}. We argue by contradiction. Take a sequence $\{(\epsilon_i,\lambda_i)\}_{i=1}^\infty$ such that $\epsilon_i \rightarrow 0$ and $\lambda_i \rightarrow \lambda \in \Big(\frac{a_2}{b_2}+\delta, \frac{(a_2+c_2)^2}{4b_2c_2}-\delta \Big)$ as $i\rightarrow \infty$. Without loss of generality, we may assume that there exists $\Phi_i\in W^{2,p}(0,L)$ satisfying \begin{equation}\label{54} \left\{ \begin{array}{ll} \epsilon_i \frac{d^2\Phi_{i}}{dx^2}+f_v(\lambda_i,V_{\epsilon_i}(\lambda_i,x))\Phi_i=0,x\in(0,L),\\ \Phi_i'(0)=\Phi_i'(L)=0,\\ \sup_{x\in(0,L)} \Phi_i(x) =1. \end{array} \right.
\end{equation} Let \[\tilde{\Phi}_i(z)=\Phi_i(x_0+\sqrt{\epsilon_i}z),~\tilde{V}_{\epsilon_i}(\lambda_i,z)=V_{\epsilon_i}(\lambda_i,x_0+\sqrt{\epsilon_i}z),\] for all $z\in\big(-\frac{x_0}{\sqrt{\epsilon_i}},\frac{L-x_0}{\sqrt{\epsilon_i}}\big)$, $i=1,2,$..., then \[\frac{d^2\tilde\Phi_{i}}{dz^2}+f_v(\lambda_{i},\tilde V_{\epsilon_i}(\lambda_{i},z))\tilde\Phi_{i}=0,~z\in \Big(-\frac{x_0}{\sqrt{\epsilon_i}},\frac{L-x_0}{\sqrt{\epsilon_i}}\Big).\] It is easy to see that both $f_v(\lambda_i, \tilde V_{\epsilon_i})$ and $\tilde\Phi_i$ are bounded in $\big(-\frac{x_0}{\sqrt{\epsilon_i}},\frac{L-x_0}{\sqrt{\epsilon_i}}\big)$; therefore, by elliptic regularity and a diagonal argument, after passing to a subsequence if necessary as $i \rightarrow \infty$, $\tilde \Phi_i$ converges to some $\tilde \Phi_0$ in $C^1(\Bbb{R}_c)$ for any compact subset $\Bbb{R}_c$ of $\Bbb{R}$; moreover, $\tilde\Phi_0$ is a $C^\infty$--smooth and bounded function and it satisfies \begin{equation}\label{55} \frac{d^2\tilde\Phi_0 }{dz^2}+f_v(\lambda,V_0(\lambda,z))\tilde\Phi_0=0,z\in \Bbb{R}, \end{equation} where $V_0(\lambda,z)$ is the unique solution of (\ref{44}). Assume that $\Phi_i(x_i)=1$ for some $x_i\in[0,L]$; then $f_v(\lambda_i,V_{\epsilon_i}(\lambda_i,x_i))\geq 0$ according to the Maximum Principle. It follows from the decay properties of $V_\epsilon$ that $z_i=\frac{x_i-x_0}{\sqrt{\epsilon_i}}$ satisfies $\vert z_i\vert \leq C_0$ for some $C_0$ independent of $\epsilon_i$. Indeed, if not, then along a subsequence $z_i \rightarrow +\infty$ or $z_i \rightarrow -\infty$ as $\epsilon_i \rightarrow 0$, which implies that $\tilde V_{\epsilon_i}\big(\lambda_i, z_i \big) \rightarrow \bar v_2$ or $0$ respectively. On the other hand, $f_v(\lambda, \bar v_2)<0$ and $f_v(\lambda, 0)<0$, hence $f_v(\lambda_i,V_{\epsilon_i} (\lambda_i, x_i ))=f_v(\lambda_i,\tilde V_{\epsilon_i} (\lambda_i, z_i ))<0$ for all $\epsilon_i$ small, and we reach a contradiction.
Therefore $z_i$ is bounded for all $\epsilon_i$ small, as claimed, and we can find some $z_0\in \Bbb{R}$ such that, along a subsequence, $z_i\rightarrow z_0$ and \[\tilde\Phi_0(z_0)=\sup_{z\in\Bbb{R}}\tilde\Phi_0(z)=1,~\tilde\Phi_0'(z_0)=0.\] On the other hand, we differentiate equation (\ref{44}) with respect to $z$ and obtain \begin{equation}\label{56} \frac{d^2 V'_0}{dz^2}+f_v(\lambda,V_0(\lambda,z))V'_0=0, \end{equation} where $V'_0=\frac{dV_0}{dz}$. Multiplying (\ref{56}) by $\tilde \Phi_0$ and (\ref{55}) by $V'_0$ and then integrating by parts over $(-\infty,z_0)$, we obtain \[0=\int_{-\infty}^{z_0} \tilde\Phi_0''V'_0- (V''_0)'\tilde\Phi_0 dz=\tilde\Phi_0'(z)V'_0(z)\big \vert_{-\infty}^{z_0}-\tilde\Phi_0(z)V''_0(z)\big \vert_{-\infty}^{z_0},\] and then we can easily show that $\tilde\Phi_0(z_0)=0$, which is a contradiction. Therefore, we have proved the invertibility of $\mathcal{L}_\epsilon$, and we denote its inverse by $\mathcal{L}^{-1}_\epsilon$. To show that $\mathcal{L}^{-1}_\epsilon$ is uniformly bounded for all $p\in[1,\infty]$, it suffices to prove it for $p=2$, thanks to the Marcinkiewicz interpolation theorem. We consider the following eigenvalue problem \begin{equation}\label{57} \left\{ \begin{array}{ll} \mathcal{L}_\epsilon \varphi_{i,\epsilon}=\mu_{i,\epsilon} \varphi_{i,\epsilon},~x\in(0,L),\\ \varphi_{i,\epsilon}'(0)=\varphi_{i,\epsilon}'(L)=0,\\ \sup_{x\in(0,L)} \varphi_{i,\epsilon}(x) =1. \end{array} \right. \end{equation} By applying the same analysis as above, we can show that for each $\lambda\in\Big(\frac{a_2}{b_2}, \frac{(a_2+c_2)^2}{4b_2c_2} \Big)$, there exists a constant $C(\lambda)>0$ independent of $\epsilon$ such that $\mu_{i,\epsilon}\geq C(\lambda)$ for all $\epsilon$ sufficiently small.
Therefore \[\Vert \mathcal{L}_\epsilon^{-1} g\Vert_{2} =\big\Vert \sum_{j=1}^\infty \frac{\langle g,\varphi_{j,\epsilon}\rangle}{\mu_{j,\epsilon}} \varphi_{j,\epsilon} \big\Vert_{2}\leq \frac{1}{C(\lambda)} \Vert g\Vert_{2},\] where $\langle\cdot,\cdot\rangle$ denotes the inner product in $L^2(0,L)$. This finishes the proof of Lemma \ref{lem43}. \end{proof} \begin{proposition} Let $x_0 \in (0,L)$ be arbitrary. Suppose that $\lambda \in \Big(\frac{a_2}{b_2}+\delta, \frac{(a_2+c_2)^2}{4b_2c_2}-\delta \Big)$ for $\delta>0$ small. Then there exists a small $\epsilon_4=\epsilon_4(\delta)>0$ such that for all $\epsilon\in (0,\epsilon_4)$, (\ref{41}) has a family of solutions $v_\epsilon(\lambda,x)$ such that \[\sup_{x\in(0,L)} \vert v_\epsilon(\lambda,x)-V_\epsilon(\lambda,x) \vert \leq C_4\sqrt \epsilon,\] where $C_4$ is a positive constant independent of $\epsilon$ and $V_\epsilon(\lambda,x)$ is given by (\ref{46a}). In particular, \begin{equation}\label{58} \lim_{\epsilon \rightarrow 0^+} v_\epsilon(\lambda,x)= \left\{ \begin{array}{ll} 0, &\text{ uniformly on compact subsets of } [0,x_0), \\ \bar v_2(\lambda)/2,& x=x_0,\\ \bar v_2(\lambda),&\text{ uniformly on compact subsets of } (x_0,L]. \end{array} \right. \end{equation} \end{proposition} \begin{proof} We shall establish the existence of $v_\epsilon$ in the form $v_\epsilon=V_\epsilon+\sqrt \epsilon \Psi$, where $\Psi$ satisfies (\ref{47}). It is then equivalent to show the existence of a smooth function $\Psi$. To this end, we apply the contraction mapping theorem in the Banach space $C([0,L])$. For any $\Psi\in C([0,L])$, we define \begin{equation}\label{59} \mathcal{T}_\epsilon[\Psi]=-\mathcal{L}^{-1}_\epsilon(\mathcal{G}_\epsilon+\mathcal{H}_\epsilon[\Psi]). \end{equation} Then $\mathcal{T}_\epsilon$ is a mapping from $C([0,L])$ to $C([0,L])$ by elliptic regularity. Moreover, we set \[\mathcal{B}=\{\Psi\in C([0,L]) \vert \Vert \Psi \Vert_\infty \leq R_0 \},\] where $R_0\geq 2C_1C_3$.
By Lemmas \ref{lem41} and \ref{lem43}, we have $\Vert \mathcal{L}^{-1}_\epsilon \mathcal{G}_\epsilon \Vert_\infty\leq C_1C_3$. Therefore, it follows from (\ref{52}) that, for any $\Psi\in \mathcal{B}$, \[\Vert \mathcal{T}_\epsilon[\Psi]\Vert_\infty \leq C_1C_3+C_2C_3\sqrt{\epsilon} R_0^2\leq 2C_1C_3\leq R_0,\] provided that $\epsilon$ is small. Moreover, it follows from (\ref{53}) that, for any $\Psi_1$ and $\Psi_2$ in $\mathcal{B}$, \[\Vert \mathcal{T}_\epsilon[\Psi_1]-\mathcal{T}_\epsilon[\Psi_2]\Vert_\infty \leq C_2C_3\sqrt{\epsilon}\,\Vert \Psi_1^2-\Psi_2^2 \Vert_\infty \leq 2R_0C_2C_3\sqrt{\epsilon}\,\Vert \Psi_1-\Psi_2 \Vert_\infty \leq \frac{1}{2} \Vert \Psi_1-\Psi_2 \Vert_\infty\] if $\epsilon$ is sufficiently small. Hence $\mathcal{T}_\epsilon$ is a contraction mapping from $\mathcal{B}$ to $\mathcal{B}$, and it follows from the contraction mapping theorem that $\mathcal{T}_\epsilon$ has a fixed point $\Psi_\epsilon$ in $\mathcal{B}$ if $\epsilon$ is sufficiently small. Therefore, $v_\epsilon=V_\epsilon+\sqrt{\epsilon}\Psi_\epsilon$ constructed above is a smooth solution of (\ref{41}). Finally, it is easy to verify that $v_\epsilon$ satisfies (\ref{58}), and this completes the proof of Proposition 2. \end{proof} We proceed to employ the solution $v_\epsilon(\lambda,x)$ of (\ref{41}) obtained in Proposition 2 to construct solutions of (\ref{40}). To this end, we show that there exists $\lambda=\lambda_\epsilon$ such that $(v_\epsilon(\lambda_\epsilon,x), \lambda_\epsilon)$ satisfies the integral condition in (\ref{40}). Now we are ready to present another main result of this paper. \begin{theorem}\label{thm44} Assume that $a_2-c_2>\frac{2a_1c_2}{c_1}$ and $b_1 \rightarrow 0 \text{ as }\epsilon \rightarrow 0$.
Denote \[x_1 = \max \Big\{0,\frac{(a_2-c_2)c_1-2a_1c_2}{(a_1+c_1)(a_2-c_2)}L\Big\},~x_2 =\frac{(a_2-c_2)c_1-a_1c_2}{(a_1+c_1)(a_2-c_2)}L.\] Then there exists $\epsilon_0>0$ small such that for each $x_0\in(x_1,x_2)$ and $\epsilon \in (0,\epsilon_0)$, system (\ref{1}) admits positive solutions $(v_\epsilon(\lambda_\epsilon,x),\lambda_\epsilon)$ such that \begin{equation}\label{60} \lim_{\epsilon \rightarrow 0^+} v_\epsilon(\lambda_\epsilon,x)= \left\{ \begin{array}{ll} 0,&\text{ uniformly on compact subsets of } [0,x_0),\\ \bar v_2(\bar\lambda_0)/2,& x=x_0,\\ \bar v_2(\bar\lambda_0), &\text{ uniformly on compact subsets of } (x_0,L], \end{array} \right. \end{equation} where $\bar v_2(\bar\lambda_0)=\frac{a_1L}{c_1L-(a_1+c_1)x_0}\in (\frac{a_2-c_2}{2c_2},\frac{a_2-c_2}{c_2})$, and \begin{equation}\label{61} \lim_{\epsilon \rightarrow 0^+} \lambda_\epsilon = \bar \lambda_0=\frac{(a_2-c_2\bar v_2(\bar\lambda_0))(1+\bar v_2(\bar\lambda_0))}{b_2} \in \Big(\frac{a_2}{b_2},\frac{(a_2+c_2)^2}{4b_2c_2} \Big). \end{equation} \end{theorem} \begin{remark} We note that the assumption $a_2-c_2>\frac{2a_1c_2}{c_1}$ is exactly the same as (\ref{12}) when $b_1=0$, and this condition is required to guarantee the existence of small-amplitude bifurcating solutions. In particular, we have that $x_1=0$ if $c_2< a_2\leq \frac{2a_1c_2}{c_1}+c_2$ and $x_1=\frac{(a_2-c_2)c_1-2a_1c_2}{(a_1+c_1)(a_2-c_2)}L$ if $a_2 > \frac{2a_1c_2}{c_1}+c_2$; moreover, $x_2<L$ for all $a_2>c_2$. As in the stability analysis in Section 3, the limiting assumption on $b_1$ is made only for the sake of mathematical simplicity. \end{remark} \begin{proof} We shall apply the Implicit Function Theorem for the proof.
To this end, we define, for all $\epsilon\in(-\delta,\delta)$ with $\delta$ sufficiently small, \begin{equation}\label{62} \mathcal{I}(\epsilon,\lambda)=\int_0^L \frac{a_1-c_1v_\epsilon(\lambda,x)}{1+v_\epsilon(\lambda,x)}dx- \int_0^L \frac{b_1 \lambda}{(1+v_\epsilon(\lambda,x))^2} dx, \end{equation} where $\lambda \in \big(\bar \lambda_0-\delta , \bar \lambda_0+\delta \big)$ and $\bar \lambda_0$ is a positive constant to be determined. For $\epsilon\leq 0$, we set $v_\epsilon(\lambda,x)=0$ if $x\in [0,x_0)$ and $v_\epsilon(\lambda,x)=\bar v_2(\lambda)$ if $x\in(x_0,L]$. Then we have that \[\mathcal{I}(\epsilon,\lambda)\equiv a_1 x_0+\frac{L-x_0}{1+\bar v_2(\lambda)}\big(a_1-c_1\bar v_2(\lambda)\big),~\forall \epsilon \leq 0.\] On the other hand, for $\epsilon>0$, we have from (\ref{62}) that $\mathcal{I}(\epsilon,\lambda)$ is a smooth function of $\lambda$ and \begin{equation}\label{63} \frac{\partial \mathcal{I}(\epsilon,\lambda)}{\partial \lambda }=\int_0^L \frac{2b_1 \lambda-(a_1+c_1)(1+v_\epsilon(\lambda,x)) }{(1+v_\epsilon(\lambda,x))^3} \frac{\partial v_\epsilon}{\partial \lambda}dx-\int_0^L \frac{b_1 }{(1+v_\epsilon(\lambda,x))^2} dx; \end{equation} moreover, we have from Proposition 2 that $\lim_{\epsilon \rightarrow 0^+} \frac{\partial v_\epsilon}{\partial \lambda} = 0$ for $x\in[0,x_0)$ and $\lim_{\epsilon \rightarrow 0^+} \frac{\partial v_\epsilon}{\partial \lambda} = -\frac{b_2}{\sqrt{(a_2+c_2)^2-4b_2c_2\lambda}}$ for $x\in(x_0,L]$, where the convergence is pointwise in both cases. By the Lebesgue Dominated Convergence Theorem, we see that $\lim_{\epsilon \rightarrow 0^+} \frac{\partial \mathcal{I}(\epsilon,\lambda)}{\partial \lambda }=\frac{(a_1+c_1)b_2(L-x_0)}{(1+\bar v_2(\lambda))^2\sqrt{(a_2+c_2)^2-4b_2c_2\lambda}} \neq0$ for all $\lambda \neq \frac{(a_2+c_2)^2}{4b_2c_2}$.
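For reference, the square roots appearing in the two limits above come from the explicit form of $\bar v_2(\lambda)$; assuming, as the formulas in the text indicate, that $\bar v_2$ denotes the larger root of the algebraic condition $a_2-\frac{b_2\lambda}{1+\bar v_2}-c_2\bar v_2=0$ (the first equation of the system at the end of this proof), one has:

```latex
% Multiplying the condition by (1+v) gives the quadratic
c_2 v^2-(a_2-c_2)v+(b_2\lambda-a_2)=0,
% whose larger root and its derivative in \lambda are
\bar v_2(\lambda)=\frac{(a_2-c_2)+\sqrt{(a_2+c_2)^2-4b_2c_2\lambda}}{2c_2},
\qquad
\frac{d\bar v_2}{d\lambda}=-\frac{b_2}{\sqrt{(a_2+c_2)^2-4b_2c_2\lambda}}.
```

The derivative matches the pointwise limit of $\partial v_\epsilon/\partial\lambda$ on $(x_0,L]$ stated above.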
Hence $\frac{\partial \mathcal{I}(\epsilon,\lambda)}{\partial \lambda }$ is continuous in a neighborhood of $(0, \bar \lambda_0)$ for all $\bar \lambda_0 \in \big(\frac{a_2}{b_2},\frac{(a_2+c_2)^2}{4b_2c_2} \big)$. Therefore, according to the Implicit Function Theorem, in a small neighbourhood of $(\epsilon,\lambda)=(0,\bar\lambda_0)$ there exists $\lambda=\lambda_\epsilon$ such that $(v_\epsilon(\lambda_\epsilon,x),\lambda_\epsilon)$ is a solution to system (\ref{1}) and $\lambda_\epsilon \rightarrow \bar \lambda_0$ as $\epsilon \rightarrow 0^+$. To determine the values of $\bar v_2(\bar\lambda_0)$ and $\bar \lambda_0$, we send $\epsilon$ to zero and conclude from (\ref{1}) and the Lebesgue Dominated Convergence Theorem that \begin{equation} \left\{ \begin{array}{ll} a_2-\frac{b_2 \bar \lambda_0}{1+\bar v_2(\bar \lambda_0)}-c_2\bar v_2(\bar \lambda_0)=0,\\ a_1x_0+\big(\frac{a_1-c_1\bar v_2(\bar\lambda_0)}{1+\bar v_2(\bar\lambda_0)} \big)(L-x_0)=0, \end{array} \right. \end{equation} and it follows from straightforward calculations that $\bar v_2(\bar \lambda_0)=\frac{a_1 L}{c_1L -(a_1+c_1)x_0}$ and $\bar \lambda_0=\frac{(a_2-c_2\bar v_2(\bar \lambda_0))(1+\bar v_2(\bar \lambda_0))}{b_2}$. Moreover, since $\bar\lambda_0 \in \big(\frac{a_2}{b_2},\frac{(a_2+c_2)^2}{4b_2c_2} \big)$, it is equivalent to have $\bar v_2(\bar \lambda_0) \in (\frac{a_2-c_2}{2c_2},\frac{a_2-c_2}{c_2})$, which implies that $x_0\in(x_1,x_2)$ as in Theorem \ref{thm44} through straightforward calculations. This verifies (\ref{60}) and (\ref{61}) and completes the proof of Theorem \ref{thm44}. \end{proof} \section{Conclusion and Discussion} In this paper, we carry out local and global bifurcation analysis of (\ref{1}) and establish nonconstant positive solutions $v_\epsilon(\lambda_\epsilon,x)$ to this nonlinear problem. It is shown that the bifurcating solutions exist for all sufficiently small $\epsilon>0$--see (\ref{12}).
Though it may be well known and may even hold for general reaction-diffusion systems, we show that all the local branches must be of pitchfork type. For simplicity of calculations, we assume that $b_1=0$, and the stability of these bifurcating solutions is then determined. In particular, the bifurcating solutions are always unstable as long as $\epsilon$ is sufficiently small. Finally, we construct positive solutions to (\ref{1}) that have a single transition layer, where again we have assumed that $b_1=0$ for the sake of mathematical simplicity. Our results complement \cite{LN2} on the structure of the nonconstant positive steady states of (\ref{1}) and help to improve our understanding of the original SKT competition system (\ref{2}). We note that, though the assumption $b_1=0$ in Sections 3 and 4 is made for the sake of mathematical simplicity, it is an interesting question whether or not (\ref{1}) admits solutions for all $B<A<C$ or $C<A<B$. It is also an interesting and important question to probe the global structure of all the bifurcation branches. It is proved in \cite{SW} that the continuum of each bifurcation branch must satisfy one of three alternatives, and new techniques need to be developed in order to rule out or establish the compact global branches. Moreover, more information is needed on the limiting behavior of $v_\epsilon$, not only as $\epsilon$ approaches zero, but also as it approaches certain positive critical values, which may also generate nontrivial patterns. See \cite{LNY} for work on a similar system. The stability of the transition-layer solutions is yet another important and mathematically challenging problem that warrants attention. To this end, one needs to construct approximating solutions to (\ref{1}) of at least $\epsilon$-order accuracy. Therefore, more information is required on the operator $\mathcal{L}_\epsilon$, for example, the limiting behavior of its second eigenvalue.
Our mathematical results are consistent with the phenomenon of competition-induced species segregation. We see from the limiting profile analysis of (\ref{2}) in \cite{LN2} that $u(1+v)$ converges to the positive constant $\lambda$ as $\rho_{12}\rightarrow \infty$, provided that $\rho_{12}$ and $d_1$ are comparable. The existence of a transition layer in $v$ then implies that $u=\frac{\lambda}{1+v}$ must take the form of an inverted transition layer for $\epsilon$ small. These transition-layer solutions can be useful in the mathematical modeling of species segregation. The segregation is thus formed through a mechanism mediated jointly by the diffusion rates $d_1$, $d_2$ and the cross-diffusion pressure $\rho_{12}$. Ultimately, the structure of $v_\epsilon(x)$ in (\ref{1}) provides essential understanding of the original system (\ref{2}).
\section{Introduction} Dust scattering of light from UV bright stars in our own Milky Way is now known to explain the majority of the diffuse far ultraviolet (FUV, $\sim$1300-1800 \AA) background. However, interest in the diffuse FUV background began not with a focus on Galactic dust, but with a search for a cosmologically significant signal from the intergalactic medium (IGM) in order to quantify the total amount of mass and energy in the universe. \citet{1970Kurt} theorized that high energy photons emitted from dense hydrogen and helium in the IGM could be redshifted into the FUV, detectable as a diffuse isotropic continuum background. A hot Galactic halo had also been proposed by \citet{1956Spitzer} as a source of diffuse FUV line emission. It is now well understood that these components do exist, although with lower surface brightness than initially theorized. Among the early observations of the diffuse background were measurements by \citet{1976Morgan}, \citet{1980Paresce}, \citet{1984Jakobsen}, \citet{1989Fix}, \citet{1991Hurwitz}, \citet{1991Perault}, and \citet{1999Murthy}. While still incomplete in sky coverage, these observations hinted at a correlation between diffuse FUV intensity and Galactic neutral hydrogen column density. This pointed to a Galactic source for the FUV, specifically scattering of UV starlight by dust grains. The Galactic origin of the diffuse FUV was more clearly determined with observations showing the correlation between diffuse FUV intensity and the infrared background at 100 $\mu$m as measured by the Infrared Astronomical Satellite (IRAS) \citep{1987Jakobsen,1991Perault,1996Sasseen}. These results were further confirmed and expanded upon by more recent missions with better sky coverage and angular resolution.
\citet{2001Schiminovich} observed one quarter of the sky in FUV with the Narrowband ultraviolet imaging experiment for wide-field surveys (NUVIEWS) rocket, comparing it to N$_{\rm HI}$ and 100 $\mu$m column, and finding a linear relationship at high latitudes. Spectroscopic UV observations of parts of the diffuse sky have also been made by the Far-ultraviolet IMaging Spectrograph (FIMS/SPEAR) \citep{2006Edelstein} and the Far Ultraviolet Spectroscopic Explorer (FUSE) \citep{2000Moos}. \citet{2011Seonb} use FIMS/SPEAR data and find correlations between FUV and 100 $\mu$m, N$_{\rm HI}$, and H$\alpha$. \citet{2010Murthy} use low resolution Galaxy Evolution Explorer (GALEX) all sky data, with bright objects removed, and also find a strong correlation between FUV and 100 $\mu$m emission. While the correlation between diffuse FUV and dust column is now broadly accepted, deviations from this correlation are significant \citep{2011Seonb,2010Murthy,2001Schiminovich}. \citet{2004Murthy} found a weak correlation between FUV intensity and 100 $\mu$m using FUSE data, potentially due to differences in the local radiation field at low latitudes. On physical scales corresponding to molecular clouds, there can be significant deviations in the relationship between FUV and dust. Observations in Aquila with FIMS/SPEAR \citep{2012Park} found FUV intensity correlates well with dust column for low extinction sightlines, while there is no correlation in regions with higher dust column. Similarly, in the Draco Cloud, \citet{2010Sujatha} found substantial variations in the relationship between diffuse FUV intensity and 100 $\mu$m intensity using GALEX data. The UV/IR ratio varied by a factor of 10 across the cloud. Such divergent behaviors indicate that dust column is not the sole predictor of diffuse FUV intensity. 
\citet{2013Seon} and \citet{2011Seonb} explain large variations in the UV/IR ratio as a result of a turbulent interstellar medium (ISM) which is represented as a lognormal function, where the standard deviation of a quantity increases with the mean value. In the low density ISM, light from UV-bright stars (mostly near the plane of the Galaxy) is scattered off of dust grains, resulting in a low level diffuse FUV brightness which is correlated with dust content. Above a certain threshold density, regions of the ISM may not reflect as much, as thicker clumps of dust attenuate FUV radiation. \citet{2008Witt} and \citet{2011Seonb} find that this shielding begins at 100 $\mu$m $> 8$ MJy/sr, but at a range of FUV values. Additionally, deviations from the FUV-dust correlation may indicate regions of especially high FUV radiation from nearby stars, a region of dust with unusual scattering properties, or even regions where molecular hydrogen is able to form and fluoresce. Here we present a nearly all sky survey of the diffuse Galactic FUV background and compare the FUV intensity to 100 $\mu$m emission, N$_{\rm HI}$ observations, and H$\alpha$ intensity maps. We employ a masking and mosaicing technique to remove FUV bright sources from all-sky survey images and create a composite map of the GALEX diffuse FUV sky. This map provides unprecedented, wide and deep coverage compared to results from previous missions. We use this all sky data to investigate the precise nature of the relationship between FUV and tracers of cold Galactic dust and gas across the sky, focusing on how the relationship changes with both Galactic latitude and proximity to various Galactic plane associations. The minimum FUV in these relations is also examined to determine if it reveals a significant isotropic extragalactic component, an un-modelled Galactic component, or another source. 
The scatter in the relationships, both on large scales and within a single cloud, provides insight into the physical properties of the dust, including scattering asymmetry and albedo. A clear picture of the FUV behavior and what drives it can also allow for the modeling and removal of the Galactic UV foreground. In Section \ref{sec:data} we describe the data products used and any further analysis. In Section \ref{subsec:mosaic} we describe the image mosaicing procedure and initial analysis of the GALEX data set in detail. In Section \ref{sec:skydiscussion} we describe all sky trends and spatial distributions. We discuss in particular the relationship between diffuse FUV and 100 $\mu$m emission (Section \ref{sec:slopes}) and H$\alpha$ intensity (Section \ref{sec:fuvhalpha}). In Section \ref{sec:conclusion} we discuss the implications of our results. \section{Data} \label{sec:data} GALEX, a 0.5 meter modified Ritchey-Chr\'etien telescope, operated for 10 years after its launch in 2003 \citep{2003M,2005M}. GALEX observes in two UV channels (FUV (1344-1786 \AA) and NUV (1771-2831 \AA)) and has an angular resolution of 4.2 arcseconds (FWHM) in the FUV and 5.3 arcseconds (FWHM) in the NUV. In this paper, we use data from the All-sky Imaging Survey (AIS), covering more than 25,000 square degrees on the sky with a typical exposure time of 100 seconds, reaching a limiting point-source magnitude (m$_{AB}$) of 19.9 \citep[5 $\sigma$ AB;][]{2007Morrissey}. Each pointing center was chosen to minimize the gaps between adjacent fields. While GALEX avoided bright stars in the Galactic plane and other regions, there is good coverage at higher latitudes. We use images from the 6th GALEX data release (GR6), which contains a total of 34,551 individual AIS pointings. GR6 contains nearly all AIS FUV data taken during the GALEX mission. Maps of the whole sky at 100 $\mu$m were taken from \citet{1998Schlegel}.
This map of the sky and the corresponding E(B-V) dust extinction maps were made by combining Cosmic Background Explorer/Diffuse Infrared Background Experiment (COBE/DIRBE) data with IRAS sky survey atlas (ISSA) maps in such a way as to accurately measure 100 $\mu$m emission (without a zero-point offset), which was then also used to derive a column density of dust. This technique is able to estimate the dust at all but the lowest Galactic latitudes and densest clouds to 10\% precision. N$_{\rm HI}$ data of the whole sky was taken from NASA's Legacy Archive for Microwave Background Data (LAMBDA) data service. The all sky neutral hydrogen column density information is an interpolation of two maps, \citet{1997Hartmann} and \citet{1990Dickey}. The \citet{1997Hartmann} map is a velocity integrated (-450 km/s $<$ V$_{lsr}$ $<$ +400 km/s) N$_{\rm HI}$ brightness temperature map sampled every 0.5$^\circ$ and converted to N$_{\rm HI}$. The \citet{1990Dickey} map is a composite of several surveys averaged into 1 degree bins in Galactic coordinates with emission from -250 km/s $<$ V$_{lsr}$ $<$ +250 km/s. H$\alpha$ data is taken from \citet{2003Finkbeiner} and has a 6' (FWHM) resolution. It is a composite of the Virginia Tech Spectral line Survey (VTSS) in the northern hemisphere \citep{1998Dennison} and the Southern H$\alpha$ Sky Survey Atlas (SHASSA) in the southern hemisphere \citep{1998Dennison}. The Wisconsin H$\alpha$ Mapper (WHAM) \citep{2002Reynolds} provides a stable zero point at a 1 degree scale. \subsection{Image Mosaicing} \label{subsec:mosaic} In order to observe the diffuse background intensity, we create high resolution FUV images with known point and resolved sources removed. To create these mosaics, we use the GALEX data products described in \citet{2007Morrissey} along with the Montage software package \citep{2003Berriman,2005Laity}.
In our analysis we use four main maps to generate FUV background images: \textit{cnt} (counts per pixel), \textit{rrhr} (relative response or effective exposure time per pixel), \textit{sky} (estimated sky background), and \textit{mask} (detected objects). The sky background file is created by the GALEX pipeline and is an estimate of the smoothed background after resolved and point sources are removed from each image \citep{2007Morrissey}. The \textit{mask} file provides the locations of pixels that contain UV-detected objects, which are removed for background estimation. The flagged pixels in this pipeline mask file---also called a segmentation file by the SExtractor object-detection software used to perform photometry on GALEX images---contain only contiguous pixels from an object that are well detected above background, and may not include extended faint light (or optical ghosts, etc.) associated with an object. Our mosaicing procedure involves several steps. First we remove resolved sources from each file to be mosaiced, using the \textit{mask} and \textit{cnt} files. The \textit{mask} file is smoothed using a boxcar of width $10 \times 10$ pixels to place an extra 15\arcsec\ border around the objects being masked. This is done to more effectively block light from bright stars and galaxies, which can extend beyond the unsmoothed masked area. Even with this extra border a fraction of the light from an object will remain unmasked. Encircled energy curves for the GALEX FUV PSF indicate that 5\% of the light extends beyond a 20 arcsecond radius \citep{2007Morrissey}, our typical minimum masked radius. Bright objects will usually have an even larger extent in the object mask, thereby reducing this fraction.
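A minimal sketch of this mask-growing step (our own illustrative NumPy code, not the GALEX pipeline implementation; the $\sim$1.5\arcsec\ pixel scale assumed in the comment is ours):

```python
import numpy as np

def grow_mask(mask, box=10):
    """Grow a binary object mask with a box x box boxcar, flagging every
    pixel the kernel touches. Assuming ~1.5 arcsec GALEX pixels, a
    10 x 10 box adds roughly a 15 arcsec buffer around each detected
    object, as described in the text."""
    half = box // 2
    padded = np.pad(mask.astype(bool), half, mode="constant")
    h, w = mask.shape
    grown = np.zeros((h, w), dtype=bool)
    # OR together all shifted copies of the mask within the boxcar window
    for dy in range(box):
        for dx in range(box):
            grown |= padded[dy:dy + h, dx:dx + w]
    return grown.astype(mask.dtype)
```

The grown mask then plays the role of $mask_{i,j}$ in the source-replacement step described next.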
For display purposes, the flagged areas on the smoothed \textit{mask} are then excised from the \textit{cnt} files and are replaced with the corresponding section of the \textit{sky} file, which was generated by the pipeline as an estimation of the background in that region. Given the AIS source density, this background replacement has only a small impact on the overall noise in the images. The \textit{sky} file is in units of counts per second, so we multiply the \textit{sky} file by the corresponding regions of the \textit{rrhr} file to maintain the correct units of counts. This can be described by: \begin{equation} \begin{aligned} cnt_{masked,i,j} =& mask_{i,j}*sky_{i,j}*rrhr_{i,j} \\ &+ (1-mask_{i,j})*cnt_{i,j} \end{aligned} \end{equation} \noindent where mask$_{i,j}=1$ for detected objects. Figure \ref{fig:mask} shows two different GALEX images before and after this procedure. The next step was to create a set of GALEX FUV mosaics centered on 12,288 equally spaced points covering the whole sky. Each point was taken from a nested Hierarchical Equal Area isoLatitude Pixelization (HEALPix) ordering to evenly cover the sky \citep{2005Gorski}, with Nside=32. Each mosaiced image contains all GALEX AIS fields within a 3 degree radius from the center of the pointing, with some overlap between neighboring images. Using Montage, the \textit{rrhr} and masked \textit{cnt} files are reprojected so all images to be mosaiced lie in the same plane. The overall size of the \textit{cnt} file is also trimmed to remove the edges. Reprojected files are then mosaiced into large \textit{rrhr} and \textit{cnt} files. The final step is to divide the mosaiced \textit{cnt} file by the mosaiced \textit{rrhr} file, creating a finished mosaic with units of counts per second. 
\begin{equation} I_{cnts/sec} = \frac{cnt_{masked,mosaiced}}{rrhr_{mosaiced}} \end{equation} The image units of cnts/sec are then converted to photons cm$^{-2}$ sec$^{-1}$ sr$^{-1}$ \AA$^{-1}$, hereafter referred to as continuum units (CU). The conversion from counts/sec to flux is 1.40 $\times$ 10$^{-15}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ per count s$^{-1}$ for FUV, with appropriate conversions from ergs cm$^{-2}$ to photons sr$^{-1}$. The unit-converted mosaic is then compared to the all sky maps described above. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{galex_small.pdf} \caption{Examples of GALEX image masking. \textbf{Top Left:} Image of an unmasked GALEX field with a particularly bright A0 star (V=9.48, FUV=12.75, saturated) on the right of the field. \textbf{Top Right:} Image of same field after masking. Regions covered by the mask have been replaced by the estimated sky background from the \textit{sky} images \citep{2007Morrissey}. Some reflections from bright stars remain after masking. \textbf{Bottom Left:} Close up image of an unmasked GALEX field with dimmer point sources. \textbf{Bottom Right:} Image of same field after masking, leaving behind a smooth background.} \label{fig:mask} \end{figure} GALEX FUV data was not available for a fraction of these points, due to the avoidance of the Galactic plane and other bright objects. The sensitivity of the GALEX detector limited the maximum count rate for FUV AIS observations to 5000 cnts/s \citep{2007Morrissey}. Of the 12,288 points, 10,019 had GALEX AIS fields within the 3 degree radius, using a total of 28,938 individual GALEX fields. Figure \ref{fig:percover} shows the percentage of the sky covered by our maps for a given Galactic latitude. The lowest latitude regions ($|$b$| <$ 25$^\circ$) have coverage below 75\%. There is a slight asymmetry between north and south hemispheres in this plot, due to the location of the Orion OB association below the Galactic plane.
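The masking and coaddition equations above can be condensed into a short NumPy sketch (illustrative only; the function and variable names are ours, and all frames are assumed to have already been reprojected onto a common pixel grid):

```python
import numpy as np

def coadd(cnts, rrhrs, skies, masks):
    """Combine reprojected GALEX frames into an intensity mosaic (counts/s).
    Per frame, source pixels (mask == 1) are replaced by the pipeline sky
    estimate times the effective exposure; counts and exposure are then
    summed over frames before the final division."""
    cnt_sum = np.zeros_like(cnts[0], dtype=float)
    rr_sum = np.zeros_like(cnts[0], dtype=float)
    for cnt, rr, sky, m in zip(cnts, rrhrs, skies, masks):
        cnt_sum += m * sky * rr + (1 - m) * cnt  # masked-count replacement
        rr_sum += rr                             # total effective exposure
    # Pixels never observed (zero exposure) are set to NaN
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr_sum > 0, cnt_sum / rr_sum, np.nan)
```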
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{percentcover.pdf} \caption{Percentage of coverage of GALEX FUV AIS data for a given Galactic latitude in bins of 1$^\circ$. Light blue line indicates 75\% coverage to guide the eye. Regions near the Galactic plane have much lower coverage than higher latitudes. The average coverage above $|$b$|$=25$^\circ$ is 81\%.} \label{fig:percover} \end{figure} Our final step was to create an all sky map, with each image described above sampled onto lower resolution pixels. A total of 12,582,912 pixels cover the whole sky in a nested HEALPix ordering \citep{2005Gorski} with each pixel covering 11.79 arcmin$^2$ and Nside=1024. Assuming Poisson errors, with signal to noise equal to the square root of the signal, we find a typical AIS image (with 100 second exposure) will yield a signal to noise of $\sim$ 16 per HEALPix pixel. Regions of the sky with more than one GALEX AIS pointing, overlapping edges of pointings, and averaging over larger areas will yield greater signal to noise values. The all sky map is shown in Figure \ref{fig:fuvallsky}. The map contains 65\% of the sky, as compared to 25\% from \citet{2001Schiminovich}, 75\% from \citet{2010Murthy}, and 80\% from \citet{2011Seonb}. \section{The Diffuse FUV Sky} \label{sec:skydiscussion} Here we present the GALEX diffuse FUV all sky map. Figure \ref{fig:fuvallsky} shows the composite map in log scale. At high latitudes, FUV intensity is low, reaching a lower limit of a few hundred CU. At lower latitudes closer to the Galactic plane, the intensity increases to several thousand CU. The growing intensity towards the plane follows a rough cosecant trend with latitude as discussed further below. 
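The per-pixel signal-to-noise estimate quoted in Section \ref{subsec:mosaic} can be reproduced with a back-of-the-envelope Poisson calculation. The background level ($\sim$300 CU, near the high-latitude minimum), the 1.5\arcsec\ GALEX plate scale, and the FUV effective wavelength used below are our assumptions, not values stated in the text:

```python
import math

# Rough Poisson estimate of the S/N per Nside=1024 HEALPix pixel for a
# 100 s AIS exposure. Assumed inputs: ~300 CU background, 1.5" GALEX
# pixels, 1539 A FUV effective wavelength.
CU_BG = 300.0                    # photons cm^-2 s^-1 sr^-1 A^-1
T_EXP = 100.0                    # seconds per AIS pointing
FLUX_PER_CPS = 1.40e-15          # erg cm^-2 s^-1 A^-1 per count/s (FUV)
E_PHOT = 6.626e-27 * 2.998e10 / 1539e-8  # erg per 1539 A photon
PIX_SR = (1.5 / 206265.0) ** 2   # solid angle of one 1.5" GALEX pixel, sr
HPX_PIX = 11.79 * 3600 / 1.5**2  # GALEX pixels per 11.79 arcmin^2 HEALPix pixel

cps_per_cu = PIX_SR * E_PHOT / FLUX_PER_CPS  # count rate per pixel per CU
counts = CU_BG * cps_per_cu * T_EXP * HPX_PIX
snr = math.sqrt(counts)          # Poisson: S/N = sqrt(total counts)
print(f"S/N per HEALPix pixel ~ {snr:.1f}")
```

With these assumptions the estimate lands close to the quoted value of $\sim$16; brighter regions, overlapping pointings, or averaging over larger areas raise it accordingly.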
The highest intensities are found at the edges of known OB associations near dense molecular clouds: Ophiuchus ($l$=355$^\circ$, $b$=18$^\circ$), the Tau-Per-Aur Complex ($l$=170$^\circ$, $b$=-15$^\circ$), and the Orion A \& B complex ($l$=200-220$^\circ$, $b$=-17$^\circ$) \citep{2001Dame}. \citet{2011Seonb} and \citet{2001Schiminovich} both note a ``significant depression'' in the FUV maps at latitudes above b$>$20$^\circ$ between $l$=20$^\circ$ to $l$=60$^\circ$. Overall, we find the intensity at mid to high latitudes here is mostly consistent with intensity at similar latitude regions in other parts of the sky. Regions of the sky with unusually high intensity ($>$ 5000 CU) can be linked to the OB associations mentioned above and are related to the uneven distribution of FUV bright stars. Figure \ref{fig:starlocations} shows the positions of bright stars (Flux$_{1565 \text{\AA}} >$ 1 $\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$) from the TD-1 survey of nearby stars \citep{1978Thompson}. Ophiuchus, the Orion complex, and other areas of high FUV emission have an excess of bright stars. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FUV_small.pdf} \caption{Log of diffuse FUV intensity (CU) across the sky. The lowest FUV intensity (a few hundred CU) is at the highest latitudes, while the highest FUV intensity (a few thousand CU) is found nearest to the Galactic plane. The highest intensity observed is at the edges of known OB associations, near dense molecular clouds. Overall intensity is best fit as a modified cosecant with latitude, as is discussed in Section \ref{sec:galactictrends}.} \label{fig:fuvallsky} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FUV_TD_small.pdf} \caption{Log of diffuse FUV intensity (CU) across the sky, with locations of TD-1 bright stars overplotted. The diameter of the points is proportional to the log of the FUV flux. 
The coordinates are as in Figure \ref{fig:fuvallsky}.} \label{fig:starlocations} \end{figure} \subsection{Galactic Trends} \label{sec:galactictrends} FUV intensity vs. Galactic latitude and longitude are shown in Figure \ref{fig:fuvlat}. The left panel shows diffuse FUV intensity vs. Galactic longitude. Overall, we find this matches well with the diffuse FUV intensity from SPEAR/FIMS described in \citet{2011Seonb}. The right panel shows FUV intensity vs. Galactic latitude. FUV intensity increases with decreasing absolute value of the latitude, and appears to be relatively symmetric between the northern and southern Galactic hemispheres. The avoidance by GALEX of UV bright regions will bias the latitude-averaged intensity at the lowest latitudes ($|$b$| <$ 20$^\circ$), as is evident when compared to \citet{2011Seonb}, which reaches values of 10,000 CU in the plane. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fuvlat.pdf} \caption{\textbf{Left:} Plot of FUV vs. Galactic longitude. Blue dots indicate median FUV intensity in 5$^\circ$ bins. Median FUV intensity (800-1000 CU) is relatively constant across longitudes. \textbf{Right:} Plot of FUV vs. Galactic latitude. Blue dots indicate median FUV intensity in 3$^\circ$ bins. The lowest latitudes have fewer points since GALEX has not observed the entire Galactic plane. }\label{fig:fuvlat} \end{figure} Certain Galactic quantities, including column densities and absorption, have long been known to follow a cosecant shape with latitude, derived by \citet{1940Parenago} to model Galactic reddening. This model has been expanded and refined, but the basic principle remains the same.
Following \citet{1980Milne} and \citet{1966Sturch}, Galactic extinction can be modeled as: \begin{equation} C = \int_{0}^{d} k_o\, \xi(z)\, dr = k_o \csc |b| \int_{0}^{d \sin|b|} \xi(z)\, dz \label{eq:csc} \end{equation} \noindent where k$_o$ is the reddening in the plane, $z$ is the height above the plane, and $\xi(z)$ is a function that describes how reddening changes with $z$. Using simple trigonometry along a sightline at latitude $b$, we substitute $z = r \sin|b|$, so that $dr = \csc|b|\, dz$; the resulting cosecant dependence appears on the right hand side of Eq.~\ref{eq:csc}. Several functions have been suggested for $\xi(z)$, including an exponential with a scale height \citep{1980Milne}, although the exact form is not relevant here. If $C$ traces the amount of obscuring dust and the scattered FUV is proportional to the dust column, then one can relate the two by a scale factor, k$_{scatter}$. \begin{equation} I_{FUV}= k_{scatter} \times C \label{eq:final} \end{equation} With a latitude dependent extinction model, we can reasonably expect a latitude dependent FUV intensity. This model has been fit by \citet{1991Perault} and others. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{sinbfuv.pdf} \caption{Plot of FUV sin$|$b$|$ vs. sin$|$b$|$. \textbf{Left:} 2-d histogram of FUV sin$|$b$|$ as a function of Galactic latitude. The black dashed line is the single parameter fit to FUV sin$|$b$|$ at all points, 495 CU. The black dotted line is the single parameter fit from \citet{2011Seonb}, 412 CU. The green and red lines are best fits for a function of the form I=A/sin$|$b$|$ + D, with green fitting points above $|$b$|$=25$^\circ$ and red for all points. The cyan line is the two parameter fit from \citet{2011Seonb}. \textbf{Right:} 2-d histogram of the same data as in the left panel, but with a 300 CU offset removed.
The black dashed line is the median for all points, while the green line is a two parameter fit for $|$b$| >$ 25$^\circ$.}\label{fig:fuvsin} \end{figure} The cosecant dependence of FUV is shown in the left panel of Figure \ref{fig:fuvsin}, which plots FUV sin$|$b$|$ vs. sin$|$b$|$. Including all points, the median FUV sin$|$b$|$ is 451 CU, slightly lower than the value of 525.4 CU from \citet{2011Seonb}. We ascribe this difference to unobserved high intensity regions as discussed previously. While the unobserved regions are concentrated in the plane, high intensity areas at all latitudes are excluded from our data, so an overall lower median for our all-sky data is to be expected. The deviation from a constant value with sin$|$b$|$ at the lowest latitudes is probably due to the fact that the line of sight is no longer optically thin. At higher latitudes, the behavior is roughly flat, consistent with a cosecant form, with a slight decrease above sin$|$b$|$=0.8. Fitting a function of the form of Equation \ref{eq:final} corresponds to a horizontal line on this plot. If we assume FUV=A/sin$|$b$|$, we find A=495. This is the dashed line in the plot, compared to 412.3$\pm$10.3 (dotted line) from \citet{2011Seonb}. This is not a good fit to our data at any latitude. Adding an additional constant term and fitting a function of the form I = A/$\sin|b|$ + D, we find that the values for A and D vary significantly depending on which latitudes are included. The red line shows the fit for all points with $|$b$| >$ 5$^\circ$ [A=352, D=32]. This fit fails to adequately capture the behavior at all latitudes. Fitting instead only the points with $|$b$| >$ 25$^\circ$ yields [A=543, D=-204], shown as the green line in the left panel of Figure \ref{fig:fuvsin}. \citet{2011Seonb} find values of 847$\pm$96 and -457$\pm$100 for A and D respectively, shown in cyan. These differences are again likely due to the GALEX avoidance of bright objects.
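The two-parameter cosecant fit described above reduces to ordinary least squares in $x = 1/\sin|b|$. A minimal sketch on synthetic, noiseless data; the fitting routine and the data are illustrative only, not the analysis used here:

```python
import math

# Sketch of the two-parameter cosecant fit, I_FUV = A / sin|b| + D,
# via ordinary least squares in x = csc|b|. Synthetic data only.
def fit_cosecant(b_deg, intensity):
    """Return (A, D) for the model I = A*csc|b| + D."""
    xs = [1.0 / math.sin(math.radians(abs(b))) for b in b_deg]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(intensity) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, intensity))
    a = sxy / sxx
    return a, my - a * mx

# Noiseless data built from the |b| > 25 deg parameters quoted above:
lats = [25.0 + 0.5 * i for i in range(130)]          # 25 .. 89.5 deg
cscs = [1.0 / math.sin(math.radians(b)) for b in lats]
model = [543.0 * x - 204.0 for x in cscs]
A, D = fit_cosecant(lats, model)                     # recovers (543, -204)
```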
At $\sin|b| = $1.0, the value of the FUV intensity will reduce to A+D, yielding a value for the minimum FUV intensity. For both fits, this number is a few hundred CU. The two parameter fit, while relatively good at higher latitudes, raises the question of the physical interpretation of D. The value of A should indicate the scale factor from extinction or dust column to scattering, while a positive value for D could be interpreted as an isotropic component which is unrelated to the scattering traced by the cosecant fit. If the isotropic component is already removed, or if D is negative, then there is no physical motivation for including this term. Further discussion of and modification to the cosecant fit is found in Section \ref{sec:conclusion}. We show the result of removing an offset in the right panel of Figure \ref{fig:fuvsin}. We use 300 CU as a low level intensity that is not related to Galactic components. Further discussion of this component is found in Sections \ref{sec:fuvvsall} and \ref{sec:conclusion}. As noted above, a single parameter cosecant distribution will appear horizontal on this plot. Instead, the actual data decrease with increasing sin$|$b$|$, indicating that the cosecant model does not provide a reasonable fit when a physically motivated isotropic offset is removed. The straight dashed line fit of the median, (I$_{FUV}$-300) sin$|$b$|$ = 283, does not match the data at any latitude range. A single parameter fit yields A=312, similar to the median. Forcing a two parameter fit for $|$b$| >$ 25$^\circ$ yields values of [A=648, D=-592], which provides a better match, but has no straightforward physical interpretation. \subsection{Relationship with other Galactic properties} \label{sec:fuvvsall} The diffuse FUV intensity at high latitudes is determined by the distributions and intensities of both FUV emission from bright stars throughout the disk and the dust which scatters that emission.
Here we investigate the relationship between FUV intensity and other Galactic quantities that trace dust and gas. In this section, all sky maps of 100 $\mu$m emission, H$\alpha$ intensity, and N$_{\rm HI}$ column density are each compared to the diffuse FUV data. While there are overall correlations, we also explore how the scatter may provide information about the distribution and properties of the dust and illuminating sources. We note here that the resolution of the N$_{\rm HI}$ column density map is significantly lower than that of the other maps used. We expect this will increase the scatter in our correlation between FUV and N$_{\rm HI}$ column, but will not change the overall result. Two dimensional histograms are plotted for FUV emission vs. each Galactic quantity in log-log space in Figure \ref{fig:fuvvsglobal}. All three graphs show strong correlation between the FUV intensity and the other measured quantities (correlation values calculated using the linear Pearson method are shown). As found by \citet{2011Seonb}, all three quantities are well correlated with FUV emission, but include a large amount of scatter. The strongest correlation is between FUV emission and 100 $\mu$m intensity, with r=0.80. The correlation between FUV emission and N$_{\rm HI}$ is also quite high, with r=0.78. The weakest correlation is between FUV emission and H$\alpha$ intensity, with r=0.73. In all three plots of Figure \ref{fig:fuvvsglobal}, we find a low level minimum FUV. The FUV intensity has a minimum at around a few hundred CU, flattening below 2 $\times$10$^{20}$ cm$^{-2}$ for N$_{\rm HI}$, 1 MJy/sr for 100 $\mu$m, and 0.5 Rayleighs (R) for H$\alpha$. The plots of N$_{\rm HI}$ and 100 $\mu$m both also have a significant flattening of FUV intensity at large values. The FUV median remains constant above 10 $\times$10$^{20}$ cm$^{-2}$ for N$_{\rm HI}$ and 8 MJy/sr for 100 $\mu$m. The plot of FUV vs.
H$\alpha$, however, continues to be linear at high values of both, as also seen in \citet{2011Seona}. Some flattening at large values of 100 $\mu$m and N$_{\rm HI}$ was also observed by \citet{2011Seonb} with SPEAR/FIMS data. The GALEX avoidance of bright regions of the sky could make this more pronounced in our data. \begin{figure} \centering \includegraphics[width=.45\textwidth]{logfuvvsall.pdf} \caption{2-D histograms of FUV intensity vs. Galactic quantities in log-log space. Blue dots indicate the median value in abscissa bins of 0.05 dex. The linear Pearson correlation coefficient is shown in the upper left of each panel. \textbf{Top Left:} FUV intensity vs. N$_{\rm HI}$. At low ($<$ 10$^{20}$ cm$^{-2}$) and high values ($>$ 10$^{21}$ cm$^{-2}$) of N$_{\rm HI}$, the FUV intensity levels off. \textbf{Top Right:} FUV intensity vs. 100 $\mu$m emission. As with N$_{\rm HI}$, the relationship is flat at low ($<$ 1 MJy/sr) and high ($>$ 8 MJy/sr) 100 $\mu$m intensity, with a linear relationship in between. \textbf{Bottom Left:} FUV intensity vs. H$\alpha$ intensity. Unlike the previous two plots, the FUV intensity does not level off at high values of H$\alpha$ intensity.}\label{fig:fuvvsglobal} \end{figure} As noted above, any quantity which has a plane parallel distribution with respect to the Galactic plane will vary roughly as the cosecant of latitude. By removing the cosecant dependence, we can verify that deviations from a plane parallel distribution are also correlated between two different Galactic quantities. As such, we re-plot Figure \ref{fig:fuvvsglobal} with a factor of sin$|$b$|$, shown in Figure \ref{fig:fuvvsglobalsinb}. The correlation coefficients, again calculated using the linear Pearson method, are weaker after the cosecant correction. The scatter in all plots is increased compared to Figure \ref{fig:fuvvsglobal}.
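The linear Pearson coefficients quoted throughout this section can be computed directly; a minimal sketch, applied in log-log space as in Figure \ref{fig:fuvvsglobal} (the function names and synthetic data are ours):

```python
import math

# Sketch of the linear Pearson correlation used in this section,
# applied to log10 of both quantities (e.g. log FUV vs. log N_HI).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def pearson_loglog(x, y):
    """Correlation of log10(x) against log10(y)."""
    return pearson_r([math.log10(v) for v in x],
                     [math.log10(v) for v in y])
```

A pure power law gives $r = 1$ in log-log space, so departures from unity here measure scatter about a power-law relation rather than about a straight line in linear space.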
The correlation between 100 $\mu$m emission and diffuse FUV remains the strongest and is discussed further in Section \ref{sec:slopes}. The correlation between H$\alpha$ and diffuse FUV is still the weakest of the three, and we examine it in more detail in Section \ref{sec:fuvhalpha}. \citet{2011Seonb} noted that 100 $\mu$m emission and N$_{\rm HI}$ corresponded well to a plane parallel model, while H$\alpha$ and FUV emission did not. As a simple plane parallel model only crudely represents the true 3-D distribution of any Galactic component (\citealt{1994Witt} and discussed above), it is not surprising that we find these weak correlations. A more detailed model is required to fully interpret this result. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{sinfuvvsall.pdf} \caption{2-d histograms of FUV sin$|$b$|$ vs. Galactic quantities times sin$|$b$|$. Blue dots indicate the median value for abscissa bins of 0.25, while blue lines indicate one standard deviation from the median. Linear Pearson correlation coefficients are shown in the upper left of each panel. \textbf{Top Left:} FUV sin$|$b$|$ vs. N$_{\rm HI}$ sin$|$b$|$. \textbf{Top Right:} FUV sin$|$b$|$ vs. 100 $\mu$m sin$|$b$|$. \textbf{Bottom Left:} FUV sin$|$b$|$ vs. H$\alpha$ sin$|$b$|$.}\label{fig:fuvvsglobalsinb} \end{figure} A comparison of the quantities is shown in Figure \ref{fig:fuvvsglobal_smooth}, to highlight behavior at low intensities where the relationship is primarily linear. \citet{2011Seonb} calculated best fit lines (in red) for b$>$25$^\circ$, while our lines include data from all latitudes. The fits are similar for all but 100 $\mu$m. Restricting our data to b$>$25$^\circ$ yields a closer match in fit for FUV vs. 100 $\mu$m. Table \ref{table:bestfits} shows slopes and intercepts for the calculated best fit lines. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fuvvsall_smooth.pdf} \caption{2-D histograms of FUV vs. Galactic quantities with a linear scale. 
Blue points indicate the median for abscissa bins of 1.0, while the blue lines indicate one standard deviation from the median. Green lines are best fit lines for the median. Red lines are best fit lines from \citet{2011Seonb}. \textbf{Top Left:} FUV vs. N$_{\rm HI}$. The line fit is restricted to 0-1.2 $\times$ 10$^{21}$ cm$^{-2}$, above which there appears to be a flattening. \textbf{Top Right:} FUV vs. 100 $\mu$m. The line fit is restricted to 0-8 MJy/sr. \textbf{Bottom Left:} FUV vs. sin $|$b$|$. The fit to the median is a two parameter cosecant function, I=A/sin$|$b$|$+D for b $>$ 25$^\circ$. Here A=457.9$\pm$248.7, D=-88.5$\pm$283.0. \textbf{Bottom Right:} FUV vs. H$\alpha$.}\label{fig:fuvvsglobal_smooth} \end{figure} Because the correlation with dust (and other properties) is nearly linear at low intensities, it is conventional to use the fit to this relation to determine the value of the constant FUV offset, which presumably includes components that are not associated with dust-scattered light. Most analyses have assumed that this component is nearly isotropic, and we do the same here. FUV vs. 100 $\mu$m emission shows a pronounced flattening at low 100 $\mu$m values ($<$ 1 MJy/sr, as seen in the log-log plot of Figure \ref{fig:fuvvsglobal}). Furthermore, there is an FUV offset in the plots in Figure \ref{fig:fuvvsglobal_smooth}, at zero values of the abscissa. This minimum appears to be 200-300 CU. A positive offset of 200-300 CU at N$_{\rm HI}$=0 cm$^{-2}$ was also noted by \citet{1991Martin}, who suggest that it may be partially due to an undetected dust component. These offsets have been discussed in other works as a combination of a low level extragalactic FUV background (a few tens of CU, \citealt{1990Paresce,2001Schiminovich}), incomplete bright object masking, and airglow contamination. In this work, we have used 300 CU as the offset and revisit this component in Section \ref{sec:conclusion}.
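The restricted linear fits just described amount to ordinary least squares below a saturation cutoff, with the intercept estimating the dust-free FUV offset. A sketch on synthetic data; the helper name and the synthetic 2400 CU saturation are ours, illustrative only:

```python
# Sketch of a restricted linear fit: least squares of FUV on 100 um
# intensity using only sightlines below a saturation cutoff (8 MJy/sr),
# with the intercept approximating the dust-free offset. Synthetic data.
def fit_line_below(x, y, xmax):
    """Least-squares (slope, intercept) of y on x, for x < xmax only."""
    pairs = [(a, b) for a, b in zip(x, y) if a < xmax]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    sxx = sum((a - mx) ** 2 for a, _ in pairs)
    sxy = sum((a - mx) * (b - my) for a, b in pairs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Linear below the cutoff, saturated above it (values illustrative):
i100 = [0.5 * k for k in range(1, 40)]                 # 0.5 .. 19.5 MJy/sr
fuv = [275.0 * v + 300.0 if v < 8.0 else 2400.0 for v in i100]
slope, intercept = fit_line_below(i100, fuv, 8.0)
```

Because the saturated points above 8 MJy/sr are excluded, the fit recovers the underlying linear slope and offset despite the turnover.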
\begin{table*} \normalsize \centering \caption{Best fit lines: slopes and intercepts} \label{table:bestfits} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & & all points & 0-15$^\circ$ & 15-30$^\circ$ & 30-45$^\circ$ & 45-60$^\circ$ & 60-75$^\circ$ & 75-90$^\circ$ \\ \hline 100 $\mu$m$^a$ & slope & 275$\pm$95 & 253$\pm$68 & 250$\pm$53 & 168$\pm$43.9 & 100$\pm$33 & 62$\pm$18 & 82$\pm$67 \\ & intercept & 241$\pm$273 & 601$\pm$201 & 411$\pm$139 & 480$\pm$109 & 432$\pm$95 & 386$\pm$73 & 338$\pm$98 \\ \hline N$_{\rm HI}$$^b$ & slope & 184$\pm$50 & 223$\pm$50 & 123$\pm$39 & 121$\pm$30 & 88$\pm$18 & 68$\pm$21 & 6.3$\pm$33 \\ & intercept & 188$\pm$180 & 123$\pm$240 & 636$\pm$169 & 403$\pm$117 & 347$\pm$76 & 305$\pm$86 & 383$\pm$99 \\ \hline H$\alpha$$^c$ & slope & 383$\pm$478 & 214$\pm$61 & 161$\pm$58 & 128$\pm$44 & 83$\pm$19 & 7$\pm$42 & 39$\pm$105 \\ & intercept & 519$\pm$859 & 1342$\pm$281 & 938$\pm$269 & 680$\pm$189 & 547$\pm$112 & 434$\pm$96 & 379$\pm$108 \\ \hline \multicolumn{9}{l}{\small ~$^a$~ $<$ 8 MJy/sr for 100 $\mu$m}\\ \multicolumn{9}{l}{\small ~$^b$~ $<$ 10$^{21}$ cm$^{-2}$ for N$_{\rm HI}$}\\ \multicolumn{9}{l}{\small ~$^c$~ $<$ 4 R for all points only, $<$ 10 R for all others, for H$\alpha$}\\ \end{tabular} \end{table*} In the top right panel of Figure \ref{fig:fuvvsglobal_smooth}, we note the break in the FUV intensity at 100 $\mu$m $>$ 8 MJy/sr, with a median FUV intensity of 2400 CU. This saturation in the FUV intensity has been noted previously \citep{2011Seonb,2008Witt}, and appears to occur for lines of sight having an optical depth high enough to both self-shield emission from within the cloud and block scattered FUV intensity from behind the cloud, decreasing the overall FUV intensity from that region. At high 100 $\mu$m intensities we also observe a large scatter in FUV intensity. Along some sightlines, the presence of nearby FUV bright stars can enhance the overall FUV intensity above that predicted under the assumption of a uniform radiation field.
A more detailed analysis of UV self-shielding and illumination of these sightlines is discussed in a forthcoming paper. We find a similar break in the correlation of FUV vs. N$_{\rm HI}$ at N$_{\rm HI} \sim$12 $\times$ 10$^{20}$ cm$^{-2}$. There is a break in the plot of FUV intensity vs. H$\alpha$ intensity at 4 R, as found by \citet{2011Seonb}, although this is a more gradual transition than for the other two quantities. In all plots, there is increased scatter as the abscissa values increase. If the FUV intensity is log normal, as described in \citet{2013Seon}, this is a natural property of log normal distributions. However, the plot of FUV vs. Galactic latitude shows this occurs primarily at the lowest latitudes. Cutting out points with $|$b$|$ $<$ 25$^\circ$ decreases this scatter, and it is likely that the strong ISRF at low latitudes contributes significantly to the scatter. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{ir_abslat.pdf} \caption{2-D histograms of FUV intensity vs. 100 $\mu$m emission for latitude cuts of 15$^\circ$, N and S combined. Blue dots are the median for bins of 0.5 MJy/sr, with blue lines indicating one standard deviation. The red line is the best fit line to the median below 8 MJy/sr. At low latitudes, there is significant scatter in the relationship, due to both FUV bright stars and obscuring dust. There is a turnover in FUV intensity at $\sim$8 MJy/sr, above which the median FUV value remains constant. The slope of the linear relationship decreases systematically with increasing latitude, becoming smaller at the highest latitude cut.}\label{fig:irabslat} \end{figure} \subsection{Correlations vs. Galactic latitude} We have already noted above that a difficulty in comparing our results to those of previous work is that derived correlations will depend on the Galactic footprint of the data used, and in particular the range in latitude.
In order to understand the magnitude and physical origin of these effects, it is useful to divide our large data set into Galactic latitude cuts. In doing so, we note that regions of high scatter are generally confined to the lowest latitudes, while the FUV emission adheres to a linear fit at higher latitudes. Figure \ref{fig:irabslat} shows contour plots of FUV vs. 100 $\mu$m for latitude bins of 15$^\circ$, combining the northern and southern hemispheres. Above 8 MJy/sr in latitude bins from 0-15$^\circ$ and from 30-45$^\circ$, there is a flattening in the FUV profile, with the median constant at around 2500 CU. This flattening is not as clear in the latitude cut from 15-30$^\circ$, but this is likely due to very high FUV emission around the Ophiuchus and Orion OB associations. At high latitudes $|$b$| >$ 45$^\circ$, few points have 100 $\mu$m values above 8 MJy/sr, leaving the low intensity linear relationship. The slope of the linear portion does appear to change with latitude, with the low latitude cuts having a larger slope than at higher latitudes. Line fits below 8 MJy/sr are plotted in red, with fits as in Table \ref{table:bestfits}. Some of the scatter from the linear relation can be directly traced to specific objects or regions in the Galaxy. For example, in the latitude cut 45-60$^\circ$ there is a region of low 100 $\mu$m, but very high FUV ($>$ 2500 CU), which stands out compared to the rest of the high latitude region. This appears to come primarily from the region directly around Spica ($\alpha$ Vir, at $l$ = 316$^\circ$, $b$ = 51$^\circ$, \citealt{2012Park}), which is a spectroscopic binary with two B type stars. The FUV intensity here is high, while the dust emission is more consistent with the rest of the latitude. This area also appears in Figures \ref{fig:h1abslat} and \ref{fig:halphaabslat}, at similarly low values for the abscissa. 
At very high latitudes, where there appears to be only a weak relationship between dust and FUV intensity, increased relative scatter may be masking any correlation. This is reflected in the Pearson r value for the fits, which is generally observed to decrease with latitude. For example, while the overall correlation between FUV and 100 $\mu$m is quite high (r = 0.80 for all points in log-log space, r = 0.64 after removing the cosecant dependence), at high latitudes, the linear Pearson r value drops to 0.14 (for $|$b$| >$ 75$^\circ$, with or without the cosecant dependence). The FUV intensity in these regions is quite low, and the relative scatter is large enough to give the appearance of high latitude FUV intensity that is only weakly sensitive to the 100 $\mu$m emission. Figure \ref{fig:h1abslat} shows contour plots of FUV vs. N$_{\rm HI}$ column for $|$b$|$ cuts of 15$^\circ$. As with Figure \ref{fig:irabslat}, there is a flattening of the FUV emission at high values of N$_{\rm HI}$. This appears to occur at 12 $\times$ 10$^{20}$ cm$^{-2}$, which is consistent with the behavior found by \citet{1991Hurwitz} and others more recently, including \citet{2011Seonb}. Low column densities at high latitudes mean that this flattening column density is not reached above $|$b$|$ = 45$^\circ$. As with Figure \ref{fig:irabslat}, the slope of the linear portion changes between latitude cuts. Line fits below 10 $\times$ 10$^{20}$ cm$^{-2}$ are plotted in red, with fits as in Table \ref{table:bestfits}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{h1_abslat.pdf} \caption{2-D histogram of FUV intensity vs. N$_{\rm HI}$ column. Blue and red lines as in Figure \ref{fig:irabslat}. There is a turnover in FUV intensity at $\sim$10$^{21}$ cm$^{-2}$.}\label{fig:h1abslat} \end{figure} Figure \ref{fig:halphaabslat} shows contour plots of FUV vs. H$\alpha$ intensity for $|$b$|$ cuts of 15$^\circ$. Here, there is much more scatter than with 100 $\mu$m or N$_{\rm HI}$.
We also observe variation in the H$\alpha$ intensity where the FUV plateaus. In the low latitude cuts, the turnover appears at 8-10 R, while at mid latitudes (30$^\circ < |$b$| <$ 60$^\circ$), there is little evidence for a turnover even above these H$\alpha$ values. At the highest latitudes ($|$b$| >$ 60$^\circ$) the relationship is nearly flat. Line fits below 10 R are plotted in red, with fits as in Table \ref{table:bestfits}. Along with the high FUV intensity region around Spica mentioned above, there is a region of high FUV intensity at latitudes between 30-60$^\circ$. This can be traced to the region directly above the Ophiuchus association ($l$=355$^\circ$, $b$=18$^\circ$). This region has significantly more scattered FUV than H$\alpha$. The Spica and Ophiuchus regions cause a bump in the median FUV value between 1-4 R. A more detailed discussion of FUV vs. H$\alpha$ is found in Section \ref{sec:fuvhalpha}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{halpha_abslat.pdf} \caption{FUV intensity vs. H$\alpha$ intensity. Blue and red lines as in Figure \ref{fig:irabslat}.}\label{fig:halphaabslat} \end{figure} \subsection{FUV vs. 100 $\mu$m} \label{sec:slopes} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{FUV_IR_small.pdf} \caption{FUV/100 $\mu$m ratio (CU/(MJy/sr)) for the whole sky. The FUV offset of 300 CU has been removed for these plots. Notable high slope regions are around OB associations, e.g. Ophiuchus, and bright stars, e.g. the spectroscopic binary Spica ($l$=316$^\circ$, $b$=51$^\circ$; \citealt{2012Park}).}\label{fig:allskyfuvir} \end{figure} The relationship between diffuse FUV intensity and 100 $\mu$m emission is often expressed as a slope with units of CU/(MJy/sr). Previous work shows a wide range of slopes. \citet{1991Perault} measured 244 CU/(MJy/sr) in the northern hemisphere and 214 CU/(MJy/sr) in the southern hemisphere, using data from the ELZ spectrophotometer on D2B-AURA.
\citet{1991Hurwitz} found $\sim$294 CU/(MJy/sr) using data from the Berkeley spectrometer on UVX. \citet{1992Wright} obtained 203 CU/(MJy/sr), using data from \citet{1989Fix}. \citet{1995Haikala} used FAUST data to observe Galactic cirrus near the north Galactic pole, finding 128 CU/(MJy/sr). \citet{1996Sasseen} found a range of slopes from -49 to 255 CU/(MJy/sr) in 13 regions using data from FAUST. \citet{2010Sujatha} find slopes between 50 and 480 CU/(MJy/sr) using GALEX data covering part of the Draco Nebula. \citet{2010Murthy} find an average slope of 302 CU/(MJy/sr) using smoothed GALEX data from the whole sky. \citet{2011Seonb} find a slope of 158 CU/(MJy/sr) from SPEAR data. In our all sky GALEX data, we find an average slope of 280 CU/(MJy/sr). Clearly, the behavior of FUV intensity and 100 $\mu$m emission can vary significantly from region to region, and even within the same cloud complex. We show a contour plot of ratios (FUV/IR) for the whole sky in Figure \ref{fig:allskyfuvir}. The isotropic diffuse FUV offset of 300 CU is removed. There are two large regions with high ratios, one in the northern hemisphere above $b$=30$^\circ$ between $l$=60$^\circ$ and $l$=180$^\circ$, and one in the southern hemisphere below $b$=-45$^\circ$ between $l$=240$^\circ$ and $l$=0$^\circ$. These features correspond to regions of particularly low IR emission, with 100 $\mu$m intensity of less than 1 MJy/sr, and often lower than 0.5 MJy/sr. In their original dust map, \citet{1998Schlegel} note these extremely low emission windows as good regions for observations requiring minimum dust contamination. Very low ratios ($<$ 100 CU/(MJy/sr)) are found near the Galactic plane, reflecting the high dust content in these regions. There are some low latitude areas with high ratios from excess FUV intensity due to the proximity of nearby OB associations, particularly Orion ($l$=200$^\circ$), the Gum nebula ($l$=260$^\circ$, $b$=-2$^\circ$), and Ophiuchus ($l$=355$^\circ$, $b$=18$^\circ$).
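The ratio map in Figure \ref{fig:allskyfuvir} amounts to a simple per-pixel operation once the offset is removed; a minimal sketch (the function name and the handling of empty IR pixels are our choices, and the inputs are illustrative lists rather than the GALEX maps):

```python
# Sketch of the per-pixel FUV/IR ratio map: subtract the assumed
# 300 CU isotropic offset from the FUV map, then divide by the
# 100 um intensity. Pixels with no IR signal are returned as None.
OFFSET_CU = 300.0

def fuv_ir_ratio(fuv_cu, i100_mjysr):
    """Return per-pixel ratios in CU/(MJy/sr)."""
    return [(f - OFFSET_CU) / i if i > 0 else None
            for f, i in zip(fuv_cu, i100_mjysr)]
```

For example, a pixel with 580 CU of FUV and 1 MJy/sr of 100 $\mu$m emission yields the all-sky average slope of 280 CU/(MJy/sr) quoted above.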
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{slopesinb.pdf} \caption{2-D histograms of FUV/IR (CU/(MJy/sr)) vs. Galactic coordinates. Blue dots indicate the median for bins of 5$^\circ$, with blue lines indicating the standard deviation from the median. \textbf{Top Left:} FUV/IR ratio vs. Galactic latitude. \textbf{Top Right:} The same data as in the top left plot, but plotted vs. sin$|$b$|$. \textbf{Bottom Left:} As the top left, but with the 300 CU offset removed from the FUV data. \textbf{Bottom Right:} As the top right, but with the 300 CU offset removed from the FUV data. The declining slope above sin$|$b$|$ = 0.4 and the decrease in FUV/IR ratio with increasing latitude both suggest a deviation from a simple plane parallel distribution. }\label{fig:slope_lat} \end{figure} In Figure \ref{fig:slope_lat}, showing the FUV/IR ratio vs. Galactic latitude, the ratio is nearly constant. With the 300 CU offset removed, as in the bottom panels, the ratio begins to decline at latitudes above $|$b$|$=30$^\circ$, the same behavior as the slopes in Figure \ref{fig:irabslat}. This is likely driven by decreasing FUV intensity at high latitudes, rather than increasing IR intensity. In both panels, there is significant scatter at all latitudes. The origin of this scatter becomes clearer in Figure \ref{fig:slope_long}, which shows the ratio of FUV/IR vs. Galactic longitude for different latitude cuts. The high ratio regions centered around $l$=90$^\circ$ in the northern hemisphere and at $b <$ -45$^\circ$, 350 $< l <$ 250$^\circ$ in the southern hemisphere, are the same regions that have been noted previously. Otherwise, elevated ratios at low latitudes indicate higher than expected FUV intensity. In particular, the Orion OB complex, the Gum nebula, and regions in between have excess FUV intensity, concentrated in the Galactic plane.
There is little leakage of this to higher latitudes, as evidenced by the generally flat profile in the top two panels of Figure \ref{fig:slope_long}. High slopes in the southern hemisphere below this region could point to leakage of FUV photons, but they may also be the result of the low IR emission region discussed above. The region of high slopes at $b >$ 45$^\circ$, $l$=310$^\circ$ is again due to excess FUV intensity from Spica. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{slope_ir_300offset.pdf} \caption{All plots show 2-D histograms of FUV/IR (CU/(MJy/sr)) vs. longitude for latitude cuts as indicated, with the 300 CU offset removed from the FUV data. Blue dots indicate the median for bins of 5$^\circ$ with blue lines indicating one standard deviation. These plots show both the median slope, 280 CU/(MJy/sr), and the large regional variations. Some variations are spatially coherent, as evidenced by high slopes at northern latitudes around $l$=60$^\circ$ and southern latitudes around $l$=260$^\circ$.}\label{fig:slope_long} \end{figure} \subsection{FUV vs. H$\alpha$} \label{sec:fuvhalpha} The relationship between diffuse FUV intensity and diffuse H$\alpha$ intensity has not been as well studied as those with other Galactic quantities. For high latitude H$\alpha$ intensity in particular (tracing the diffuse warm ionized medium, WIM), the common assumption was that most Galactic H$\alpha$ originated in HII regions, with significant leakage of Lyman continuum photons responsible for the H$\alpha$ intensity observed elsewhere \citep{1995Reynolds,2009Haffner}, although the mechanism to provide the necessary leakage has not been convincingly explained \citep{2009Seon,2010Wood,2012Seon}. Recent work by \citet{2010Witt}, \citet{2011Seona}, and \citet{2011Dong} has argued that a significant fraction of the H$\alpha$ intensity observed outside of HII regions is in fact dust scattered, and can be shown to correlate with the diffuse FUV intensity.
Scattering percentages for H$\alpha$ in the WIM have been calculated to be as low as 5-20\% \citep{1999Wood}, 20\% \citep{2011Dong}, and as high as 37\% \citep{2011Seona}, using varied techniques. \citet{2012Brandt} have recently measured the visible spectrum of diffuse Galactic light, and in so doing find that scattering accounts for around 19\% $\pm$ 4\% of H$\alpha$ intensity, for $|$b$| >$ 60$^\circ$. As shown in Figures \ref{fig:fuvvsglobalsinb} and \ref{fig:fuvvsglobal}, there is a correlation between the diffuse FUV and H$\alpha$ intensity, although it is not as tightly correlated as 100 $\mu$m and N$_{\rm HI}$, with r=0.73 (log-log) overall and r=0.45 after the latitude dependence is removed. Still, this indicates that there is some shared dependence between FUV and H$\alpha$ as discussed above. Our data set mainly encompasses the diffuse WIM due to the avoidance of the Galactic plane and bright regions. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fuv_halpha_small.pdf} \caption{All-sky map of H$\alpha$/FUV, in units of R/10$^3$ CU. A 300 CU offset for the FUV map has been removed.}\label{fig:halphafuvallsky300} \end{figure} Figure \ref{fig:halphafuvallsky300} shows an all-sky map of H$\alpha$/FUV in units of R/10$^3$ CU, following \citet{2011Seona}. An offset of 300 CU has been subtracted from the FUV data. There is clear structure, including especially high ratios around the Gum Nebula and Orion complex. These are all likely due to high H$\alpha$ intensity from HII regions. Other OB associations, including Ophiuchus and structures from $l$=0 to 180$^\circ$, do not have the same high ratios, despite similar FUV emission values. Finally, there are high ratios at both poles, primarily driven by low FUV intensity rather than by high H$\alpha$ intensity. Outside of these regions, at mid-latitudes, there is a relatively stable ratio of 2-3 R/10$^3$ CU.
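The binned medians and standard deviations plotted as blue points in the 2-D histogram figures can be reproduced with a short numpy routine. The sketch below is an illustration on synthetic data, not the actual reduction code; the array names and the assumed constant ratio are placeholders.

```python
import numpy as np

def binned_median_profile(x, y, bin_width=5.0, x_range=(-90.0, 90.0)):
    """Median and standard deviation of y in fixed-width bins of x,
    mimicking the blue points and error bars in the 2-D histogram figures."""
    edges = np.arange(x_range[0], x_range[1] + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(x, edges) - 1
    med = np.full(centers.size, np.nan)
    std = np.full(centers.size, np.nan)
    for i in range(centers.size):
        sel = y[idx == i]
        if sel.size:
            med[i] = np.median(sel)
            std[i] = np.std(sel)
    return centers, med, std

# Synthetic stand-ins for the masked all-sky pixel lists (illustration only):
# a roughly constant ratio with Gaussian scatter.
rng = np.random.default_rng(0)
glat = rng.uniform(-90.0, 90.0, 20000)
ratio = 280.0 + rng.normal(0.0, 40.0, glat.size)
centers, med, std = binned_median_profile(glat, ratio)
```

Replacing the synthetic arrays with the masked all-sky maps, binned in latitude or longitude, gives profiles of the kind shown in the figures.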
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{fuvhalphaslopesinb.pdf} \caption{2-D histograms of H$\alpha$/FUV (R/10$^3$ CU) vs. Galactic coordinates. Blue dots indicate the median for latitude bins of 5$^\circ$ with blue lines indicating one standard deviation. \textbf{Top Left:} H$\alpha$/FUV ratio vs. Galactic latitude. \textbf{Top Right:} The same data plotted vs. sin$|$b$|$. \textbf{Bottom Left:} Same as top left, but with a 300 CU offset removed from the FUV data. \textbf{Bottom Right:} Same as top right, but with a 300 CU offset removed from the FUV data.}\label{fig:slopelathalpha} \end{figure} Figure \ref{fig:slopelathalpha} shows the ratio of H$\alpha$/FUV as a function of latitude. The higher ratios at low latitudes are the result of bright HII regions, but in general the ratio is nearly constant. Unlike for FUV/100 $\mu$m (Figure \ref{fig:slope_lat}), the plot of H$\alpha$/FUV vs. Galactic latitude does not appear significantly changed by the removal of the FUV offset. Potentially this is because the H$\alpha$ intensity is the result of a wide range of processes, not just scattering, so the correlation is low to begin with (as noted in Figure \ref{fig:fuvvsglobalsinb}). The range of ratios becomes larger at high latitudes, but the median remains roughly constant below sin$|$b$|$=0.8, and rises slightly after. Figure \ref{fig:fuvHalpha_latcuts} shows the ratio of H$\alpha$/FUV as a function of Galactic longitude for different latitude cuts, with the 300 CU FUV offset removed. Like its counterpart for 100 $\mu$m emission (Figure \ref{fig:slope_long}), the ratio varies by an order of magnitude across the sky. At the highest latitude cuts, the standard deviation is 2 R/10$^3$ CU, but the mean is relatively stable with longitude. At latitudes closer to the Galactic plane, the standard deviation decreases, but the variation in ratio can be more than a factor of 2 between different longitudes.
Some of this variation is seen in multiple latitude cuts, with high H$\alpha$/FUV ratios appearing in the same longitude range. Of particular note is the peak at $l$=200$^\circ$, potentially associated with the Orion OB association, which appears at all latitude cuts. This peak is the result of high H$\alpha$ intensity and recalls a similarly placed peak in Figure \ref{fig:slope_long}. In some cases, excess H$\alpha$ intensity may be caused by significant Lyman continuum photon leakage into high latitudes. This may be related to the broad features of H$\alpha$ excess found near the Gum nebula and Orion. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{slope_ha_300offset.pdf} \caption{All plots show 2-D histograms of H$\alpha$/FUV (R/10$^3$ CU) vs. longitude for latitude cuts as indicated, with the 300 CU offset removed from the FUV data. Blue dots indicate the median for longitude bins of 5$^\circ$ while blue lines indicate one standard deviation. These plots show both the low level average slope (2-4 R/10$^3$ CU), but also the large regional variations. Some variations are spatially coherent, as evidenced by high slopes at southern latitudes around $l$=260$^\circ$ and 200$^\circ$.}\label{fig:fuvHalpha_latcuts} \end{figure} \section{Discussion} \label{sec:conclusion} The dust content of the Galaxy provides a common origin for both the diffuse FUV and 100 $\mu$m emission. Cold dust emits at IR wavelengths and efficiently scatters FUV starlight. In general, these two quantities vary proportionately. Here we consider two different simplified models of the observed FUV intensity. The first assumes that it can be modeled as a linear function of 100 $\mu$m emission. A second refined model fits FUV as a function of Galactic latitude, albedo, and scattering asymmetry, using a modified cosecant fit to overcome the deficiencies described in Section \ref{sec:galactictrends}. 
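The first of these models amounts to a one-line straight-line fit. As a minimal numerical sketch (illustration only, on synthetic data; the slope of $\sim$280 CU/(MJy/sr) and offset of $\sim$300 CU are the values quoted in the text, and this is not the fitting pipeline itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic optically thin sightlines (100 um emission below 8 MJy/sr),
# built with the slope (~280 CU/(MJy/sr)) and offset (~300 CU) from the text.
i100 = rng.uniform(0.5, 8.0, 5000)                             # MJy/sr
fuv = 300.0 + 280.0 * i100 + rng.normal(0.0, 50.0, i100.size)  # CU

# Model 1: FUV = slope * I_100 + offset
slope, offset = np.polyfit(i100, fuv, 1)
```

Restricting the fit to the optically thin regime is essential; above the $\sim$8 MJy/sr plateau the linear model no longer applies.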
\subsection{Linear fit between FUV and 100 $\mu$m} Linearity between FUV intensity and 100 $\mu$m emission is found for points with 100 $\mu$m emission less than $\sim$8 MJy/sr. Line fits and other evidence discussed in Section \ref{sec:fuvvsall} also indicate a FUV offset at the 100 $\mu$m zero-point. These two values, the slope and offset, of the linear fit between FUV intensity and 100 $\mu$m emission are discussed in further detail. \subsubsection{FUV offset} An FUV offset in the linear relationship with other tracers of the cold Galactic ISM has been observed previously, and in our work appears to be $\sim$300 CU. This offset is assumed to result from a local source of diffuse isotropic background, an extragalactic background or an isotropic Galactic source not yet considered. As discussed above in Section 2, some contribution may also come from incomplete masking of known resolved objects. As the FUV background shows low-level variability over the course of an orbital night, some contribution is likely to originate from \textsc{OI} (1356\AA, 1304\AA) airglow and/or geocoronal Ly $\alpha$ (1216\AA) lines, which have night-sky intensities of 1.0, 10 and 3000 R respectively. \textsc{OI} 1356\AA\ falls within the FUV bandpass (at 35\% peak efficiency) resulting in a count rate at the detector corresponding to a $\sim$150 CU FUV continuum background. GALEX included a blue edge filter which is expected to attenuate the contribution from \textsc{OI} 1304\AA\ and Ly$\alpha$ below these levels. During orbital night we observe a variation in the background intensity of $\pm$50 CU. Observations of an identical target throughout the year show a similar 50 CU scatter, presumably due to seasonal variation in orbit geometry and airglow intensity. \citet{2013Murthy} also calculated the expected contribution to the GALEX FUV channel from airglow as a function of both time from local midnight and angle between the Sun and the observed target. 
From this work, airglow was estimated to be 200 $\pm$ 100 CU at local midnight, comparable to our assessment, with a similar variation vs. local midnight. As the low level of variation is not likely to impact our analysis, we have left more detailed modelling (and subtraction) of the variable airglow component to a subsequent paper. It is worth noting that \citet{2011Seona} found a similar offset using FIMS/SPEAR data while excluding the \textsc{OI} airglow line. The contribution due to Zodiacal light is sufficiently low for the FUV band that we do not attempt to remove it \citep{1998Leinert}. Inspection of FUV intensity vs. ecliptic latitude shows no evidence for zodiacal contamination. Another possible source of low-level intensity is unresolved or incompletely masked FUV objects. There are at least three potential contributors here: 1) unmasked scattered light or ghosts from bright stars; 2) any unresolved and/or undetected light from faint stars or extragalactic objects that have not been masked; and 3) unmasked light from the other masked objects in the field. The GALEX pipeline mask, which is used to remove bright objects (see Section \ref{subsec:mosaic}), could potentially have missed faint stars. Furthermore, unmasked reflections and ghosts around bright stars are visible in the data. These can contribute to the overall scatter, aside from any contribution to the offset. In general, GALEX avoided observing stars brighter than m$_{FUV}\sim$9.5, whose flux would produce a local count rate exceeding 5,000 counts per second. If we conservatively assume that 1\% of the light from the brightest observable star filled an 11 square arcminute pixel, then we would anticipate a diffuse contribution of $<$6000 CU at that location on the sky.
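The bright-star bound quoted above, and the conversion between nW m$^{-2}$ sr$^{-1}$ and CU used for the extragalactic background estimates in this subsection, can be checked with a few lines of arithmetic. In this sketch, a reference wavelength of 1540 \AA\ for the GALEX FUV band and the AB magnitude zero point are assumed.

```python
import math

H_PLANCK = 6.626e-27   # erg s
C_ANG = 2.998e18       # speed of light, Angstrom/s
LAM = 1540.0           # assumed FUV reference wavelength, Angstrom
E_PH = H_PLANCK * C_ANG / LAM   # erg per FUV photon

# 1% of an m_FUV = 9.5 (AB) star spread over an 11 square-arcminute pixel
m_ab = 9.5
fnu = 10.0 ** (-(m_ab + 48.6) / 2.5)        # erg s^-1 cm^-2 Hz^-1
flam = fnu * C_ANG / LAM**2                 # erg s^-1 cm^-2 A^-1
photon_flux = flam / E_PH                   # ph  s^-1 cm^-2 A^-1
pixel_sr = 11.0 * (math.pi / (180.0 * 60.0)) ** 2   # 11 arcmin^2 in sr
cu_star = 0.01 * photon_flux / pixel_sr     # CU = ph s^-1 cm^-2 sr^-1 A^-1
# cu_star comes out at ~6e3 CU, matching the "< 6000 CU" estimate

# 1 nW m^-2 sr^-1 (as lambda * I_lambda) expressed in CU
one_nw_cgs = 1e-9 * 1e7 / 1e4   # erg s^-1 cm^-2 sr^-1
cu_per_nw = (one_nw_cgs / LAM) / E_PH
# ~50 CU per nW m^-2 sr^-1, so 1.03 nW m^-2 sr^-1 is ~52 CU
```

The second number is consistent with the $\sim$51.5 CU extragalactic estimate discussed in this subsection.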
The source density of stars just below the avoidance limit (e.g., with 9.5$<m_{FUV}<$12) is low, much less than 1 per square degree over the AIS region, suggesting that fewer than 0.1\% of all pixels may be contaminated by unmasked bright starlight. Additionally, our object detection software treats most bright stars as extended sources, creating a larger masked area than for fainter unresolved objects. Extragalactic diffuse FUV intensity is believed to contribute only a few tens to 100 CU. \citet{2005Xu} calculated the contribution to the GALEX data from both resolved and unresolved galaxies to be 1.03 $\pm$ 0.15 nW m$^{-2}$ sr$^{-1}$, or about 51.5 $\pm$ 7.5 CU. \citet{2011Voyer} find that the integrated light from field galaxies contributes flux at the level of 65-82 CU to the extragalactic background. \citet{2011Seonb} also calculated that the cumulative effect of unmasked unresolved FUV stars and galaxies is probably not significant. These same results suggest that unmasked light from the objects below the AIS detection limit will be negligible. Additional components could come from other sources of FUV intensity, including molecular hydrogen fluorescence and line emission such as C IV. It seems unlikely that there is enough evenly distributed molecular hydrogen at these high latitudes to contribute significantly to the continuum offset. \citet{2006Ryu} and \citet{2008Ryu} report band-averaged I(H$_2$)/I$_{\rm cont}$ ratios of $\sim$0.15 in molecular-rich star-forming regions; the ratios in diffuse gas are likely to be lower (e.g. \citealt{1990Martin,2006Lee,2008Lee}). \citet{2008Ryu} also suggest that the band-averaged contribution from C IV is even lower. As significant concentrations of molecular gas are present closer to the disk, H$_2$ fluorescence may contribute to the large scatter for $|$b$|<$25$^\circ$. A last concern is whether a possible systematic zero-point offset exists in the comparison data sets.
We can investigate this possibility by comparing the relationship between different tracers of the Galactic ISM, provided that their systematic uncertainties are uncorrelated. More recent data sets, such as the Planck map of cold Galactic dust, could provide new measures of the lowest dust column densities. However, a preliminary inspection of the 2013 Planck data indicates the low dust regions are still present at the levels observed previously \citep{2013Planck}. The offset calculated using these data did not change. H$\alpha$ provides a different view of the ISM, as is indeed suggested by the correlations we observe. The presence of HII regions will introduce additional scatter, and the intercepts of the fits in these regions are typically a few hundred CU above the value used here. However, at high latitudes where star-forming regions are not present, the offset decreases to $\sim$400 CU, similar to the offset obtained using other Galactic quantities. \subsubsection{FUV-IR Slope} With the offset removed, the slope of the FUV vs. 100 $\mu$m relation is variable across the sky, as seen in Figures \ref{fig:allskyfuvir} and \ref{fig:slope_lat}. There are two regimes in the behavior of the FUV. In the optically thin regime, typically where 100 $\mu$m is less than 8 MJy/sr, the FUV and 100 $\mu$m are correlated. In the optically thick regime, FUV saturates and the correlation disappears. In the discussion below we only refer to the optically thin regime. At mid and high latitudes there are very few regions that deviate from a linear relationship, due to an absence of optically thick dust. A very simple model, assuming isotropic scattering, an average cosecant dust column relation, and a constant scale height, predicts a uniform relation between 100 $\mu$m and FUV across the sky. Instead, the slope declines with increasing latitude. This change in slope for optically thin clouds between mid and high latitudes indicates that a simple scattering picture may not be valid.
The emission at 100 $\mu$m decreases at high latitudes, following the csc$|$b$|$ relation, with the simple model suggesting that the FUV intensity should decrease proportionally. Our results, as in \citet{2011Seonb}, show that the FUV intensity is decreasing faster than expected, leading to a smaller typical value for the slope at high latitudes. \subsection{Modified cosecant fit and scattering properties} The changing slope between FUV intensity and 100 $\mu$m emission at high latitudes is related to the deviations from a cosecant dependence (Figure \ref{fig:fuvsin}). As discussed in Section \ref{sec:galactictrends}, a function of the form $I=A/\sin|b|$ for FUV intensity with the offset removed is not able to fully describe the observed intensity. Adding an extra term to the function, making it $I=A/\sin|b|+D$, yields better fits for $|$b$| >$ 25$^\circ$ (see also \citealt{2011Seonb}). But under the assumption that all isotropic components have been accounted for and removed, there is no physical basis for the inclusion of the constant $D$. The simple cosecant fit does not include parameters for non-isotropic dust scattering, instead assuming that the dust scattering scale factor was constant with latitude. \citet{1979Jura} proposed that the surface brightness of a cloud at various latitudes is a function not just of the ISRF, but also of the scattering function for the dust, assuming all illumination originates in the plane. With the inclusion of optical depth by \citet{1992Wright}, the dust scattered intensity, S, can be expressed as: \begin{equation} S = S_o \tau a (1 - 1.1 g \sqrt{\sin|b|}) \label{eq:scatter} \end{equation} This approximation is valid for $|$b$| >$ 10$^\circ$, $g < 0.85$, and optical depths of less than 1, where S$_o$ is the peak scattered ISRF.
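Equation \ref{eq:scatter} is straightforward to evaluate and fit numerically. The sketch below is an illustration only: it folds in an assumed plane-parallel column $\tau \propto 1/\sin|b|$ with an arbitrary normalization, and uses scipy's curve\_fit in place of the actual non-linear least-squares procedure. Note that $S_o$, $a$, and the column normalization enter only as a product, which is the degeneracy between $S_o$ and $a$ noted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def scattered_intensity(sinb, amp, g):
    """Wright (1992) form with a plane-parallel column folded in:
    S = amp / sin|b| * (1 - 1.1 g sqrt(sin|b|)), amp = S_o * a * tau_perp."""
    return amp / sinb * (1.0 - 1.1 * g * np.sqrt(sinb))

# Synthetic "median FUV vs. latitude" points built from the best-fit values
# quoted in the text (a = 0.62, g = 0.78, S_o = 6260 CU); tau_perp = 0.1
# is an arbitrary illustrative normalization.
sinb = np.linspace(0.35, 1.0, 60)
rng = np.random.default_rng(2)
data = scattered_intensity(sinb, 6260.0 * 0.62 * 0.1, 0.78) \
       + rng.normal(0.0, 5.0, sinb.size)

(amp_fit, g_fit), _ = curve_fit(scattered_intensity, sinb, data, p0=(300.0, 0.5))
```

For $g > 0$ the intensity falls faster than csc$|$b$|$ toward the poles, which is the qualitative behavior seen in the data.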
To calculate optical depth, we used E(B-V) values from \citet{1998Schlegel}, and a standard optical depth calculation: \begin{equation} \tau = \frac{R_{\lambda}}{1.086} \times E(B-V) \label{eq:scatter2} \end{equation} We then use Equation \ref{eq:scatter} and fit for values of $a$, $g$, and S$_o$. We compare predicted intensity to FUV intensity (with 300 CU offset removed), and find the best fit values are 0.62 $\pm$ 0.04 and 0.78 $\pm$ 0.05, for $a$ and $g$ respectively, with S$_o$ = 6260 $\pm$ 400 CU for sin$|$b$| >$ 0.3, or $|$b$| >$ 20$^\circ$. Here, a non-linear least-squares fit was applied to the median of FUV intensity in bins of $|$b$|$ = 1.0 degree. There is some degeneracy between the choice of peak intensity S$_o$ and albedo $a$. The albedo value we predict is related in part to the selection of S$_o$. Larger values of S$_o$ allow for a smaller $a$. If we force S$_o$ = 5000, the best fit model predicts an albedo of 0.75 $\pm$ 0.04, while $g$ remains unchanged. The value of S$_o$ in the best fit model is similar to the 5800 CU scaling used by \citet{1991Hurwitz}. An overlay of the fit on the data is shown in Figure \ref{fig:scatterplot}. As with the cosecant fit, this modified fit is not valid at low latitudes. The best fit we find is similar to the two part cosecant fit described in Section \ref{sec:galactictrends}, but is able to more accurately capture the effect of asymmetrical scattering. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{agfit.pdf} \caption{2-D histograms of FUV intensity vs. Galactic latitude, with 300 CU offset removed. \textbf{Left Plot:} FUV intensity vs. Galactic latitude, $|$b$|$. \textbf{Right Plot:} FUV sin$|$b$|$ vs. sin$|$b$|$. Blue dots are the median for bins of 1$^\circ$, above $|$b$|$=20$^\circ$, with blue lines indicating the standard deviation. The red line is the best fit of Equation \ref{eq:scatter} for FUV intensity.
Values of 0.62 $\pm$ 0.04, 0.78 $\pm$ 0.05, and 6260 $\pm$ 400 are used, for $a$, $g$, and $S_o$, respectively.}\label{fig:scatterplot} \end{figure} The values we derive for $a$ and $g$ are slightly higher but within the limits of previous measurements. \citet{2003Draine}, in a review of previous work, reported a range of modeled values for diffuse Galactic light (DGL) plus values predicted from dust models: $a$ varied between 0.2 and 0.6 in the FUV, while $g$ varied between 0.0 and 0.8. Albedo from the dust models of \citet{2001Weingartner} was $a$ $\approx$ 0.4, with predicted scattering asymmetry of $g$ $\approx$ 0.7. \citet{1994Witt} found $a =$ 0.5 and $g =$ 0.9. \citet{2001Schiminovich} fit for values of $a =$ 0.45 $\pm$ 0.05 and $g =$ 0.77 $\pm$ 0.1. \citet{2011Murthy} found limits on $g$ of 0.58 $\pm$ 0.12, based on scattering angles around individual stars. \citet{2008Lee} find an albedo of $a =$ 0.36 $\pm$ 0.20, and $g =$ 0.52 $\pm$ 0.22. Reported values for $a$ and $g$ for individual regions or clouds have lower values than those of the DGL. For example, \citet{2013Lim} found $a =$ 0.42 $\pm$ 0.05 and $g =$ 0.47$^{+0.11}_{-0.27}$, for the Taurus-Auriga-Perseus complex. See \citet{2003Draine} for additional clouds and reflection nebulae. The range of values for $a$ and $g$ in individual regions reflects in part the ambiguity of geometry in these clouds. Values for $g$ in particular can vary if the dust is placed behind the illuminating stars or between the stars and the observer. \subsection{Deviations in FUV intensity} After accounting for the FUV offset and the scattering asymmetry of Galactic dust grains, there remain two high latitude regions of the sky that are worth further consideration. These two regions, one in the northern hemisphere and one in the southern hemisphere, have high FUV/IR ratios ($\sim$600 CU/(MJy/sr)) compared to ratios found in areas at the same latitude ($\sim$200 CU/(MJy/sr)).
As shown in Figure \ref{fig:allskyfuvir}, the regions above b=30$^\circ$ and between 60$^\circ$ $< l <$ 120$^\circ$ in the northern hemisphere, and below b=-30$^\circ$ and between 240$^\circ$ $< l <$ 300$^\circ$ in the southern hemisphere, have somewhat elevated FUV intensity given the dust content, even with the offset removal. These regions do partially coincide with structures in the halo (high latitude Galactic clouds) and some Galactic satellites including the Magellanic Clouds and stream, but it is unlikely that excess FUV from these objects is being detected. Uniformly increasing the offset to require that these regions of the sky have zero FUV intensity (as the low levels of N$_{\rm HI}$ and 100 $\mu$m emission might require) ultimately causes other higher latitude regions of the sky to have negative FUV intensities. Thus, we do not think that the excess FUV intensity in these regions is a result of underestimating the offset. Instead, we can consider two explanations for the excess FUV intensity here. The first case is that these regions contain additional FUV intensity that is unrelated to the dust content. There are well known superbubble regions which contain FUV line emission due to ionized gas and other sources. However, known superbubbles are not coincident with the regions of interest, instead being adjacent (Orion-Eridanus in the south and Ophiuchus in the north) and likely unrelated to the FUV emission found in these regions. The known line emission in superbubble regions is estimated at around 32700 LU (in the case of Orion-Eridanus, \citealt{2011Jo,2006Kregenow}), which would translate to 87 CU in the GALEX FUV band. It cannot be ruled out that some additional line emission is contributing to the slight FUV excess in these regions, potentially excited by scattering or remnant shocks from the superbubble. It might be more natural to consider a second possibility that posits a causal link between the low dust content and enhanced FUV intensity.
One explanation for this may be that these regions are the remnants of old superbubbles. Such a remnant would have a low dust content, as the superbubble cleared out the dusty ISM in the region, although a significant population of older UV-bright stars might remain to illuminate the surrounding area. That this could be the case is loosely suggested by the distribution of OB and A type stars from TD-1, as shown in Figure 15 of \citet{2011Seonb}, and by the lower ratios of H$\alpha$/FUV, which may indicate the presence of a softer local radiation field with fewer ionizing photons. Furthermore, the presence of holes or chimneys in these directions is also suggested by observations of the 3-D distribution of the Local Bubble \citep{2003Lallement} and the link to the EUV/soft-X-ray background. Nearby nebular regions in the disk (e.g. the Gum Nebula) remain poorly understood and have been shown to contain expanding gas and clouds \citep{2001Woermann} that may also trace successive generations of star formation. We note that while star formation rate (e.g. FUV starlight) and molecular gas are globally correlated in external galaxy disks, on smaller physical scales less than 1 kpc they are known to show considerable scatter (e.g. \citealt{2010Schruba,2013Leroy}). \section{Summary} \label{sec:summary} We have constructed an all-sky map of diffuse Galactic FUV intensity, using GALEX FUV AIS data, covering 65\% of the sky. We have compared our map to other maps of the diffuse FUV sky and to maps of complementary Galactic quantities. We find the FUV intensity is highly dependent on a combination of 100 $\mu$m emission, Galactic latitude, and proximity to UV bright stars and OB associations. Our main conclusions are: \begin{enumerate} \item FUV intensity is highest near the Galactic plane and around known OB associations and lowest at high latitudes.
\item There is a $\sim$300 CU FUV isotropic offset which is likely due to a combination of airglow (likely the dominant contributor), a small extragalactic background component including continuum light from unresolved galaxies, and/or a Galactic component not traced by other indicators. \item FUV intensity and 100 $\mu$m emission show a linear correlation below 8 MJy/sr of 100 $\mu$m. \item FUV intensity and N$_{\rm HI}$ show a linear correlation below 1.2 $\times$ 10$^{21}$ cm$^{-2}$. \item FUV intensity follows a modified cosecant shape with Galactic latitude with low intensity at high latitudes due to strongly forward-scattering dust grains. \item We calculate a best fit value of $g$=0.78 $\pm$0.05 for the scattering asymmetry, with $a$=0.62 $\pm$0.04 for albedo, and a peak scattering intensity, S$_o$=6260 $\pm$ 400 CU, for all points with $|$b$|> $20$^\circ$. \end{enumerate} A simple picture of this behavior is that the direct, linear variation of FUV intensity at low 100 $\mu$m emission can be explained as scattered starlight off of low optical depth dust. As 100 $\mu$m emission increases, the FUV intensity increases until reaching a plateau where the dust begins to self-shield. This plateau occurs at around 8 MJy/sr and appears to be constant across the sky. The exact ratio of FUV to 100 $\mu$m emission appears to depend on Galactic latitude, with starlight more effectively scattered at lower latitudes. The scatter in diffuse FUV intensity across a single latitude is primarily caused by anisotropies in the interstellar radiation field, including the scale heights of different stellar components, and geometrical effects caused by the exact structure of individual dust clouds. Less important, but still a component of the scatter, could be variations in the type of dust present and the properties of that dust.
Of further interest are individual, small regions (less than 1 degree across) where FUV intensity deviates from the expected linear relationship with 100 $\mu$m intensity and modified cosecant model with Galactic latitude. These regions are typically individual dusty clouds or groups of clouds with high 100 $\mu$m emission. While only 10 \% of our data covers points with 100 $\mu$m emission above 8 MJy/sr, this still contains numerous regions with flat or even inverse relationships between FUV and 100 $\mu$m. In a future paper, we discuss in detail individual clouds that deviate from the models described above, some of which show evidence for FUV obscuration or excess FUV intensity. \acknowledgements The authors wish to thank the anonymous reviewer for their detailed and constructive comments. This publication is based on observations made with the NASA Galaxy Evolution Explorer. GALEX was operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. We acknowledge the use of the Legacy Archive for Microwave Background Data Analysis (LAMBDA), part of the High Energy Astrophysics Science Archive Center (HEASARC). HEASARC/LAMBDA is a service of the Astrophysics Science Division at the NASA Goddard Space Flight Center. This research made use of Montage, funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. Montage is maintained by the NASA/IPAC Infrared Science Archive. The authors also wish to thank Josh Peek for helpful discussions.
\section{\label{sec:Sec1}Introduction} A systematic measurement of the energy dependence of the elastic scattering of positive pions from $^{12}C$ presented in Ref.\cite{Alex98} showed oscillations in the excitation function at energies around 35 MeV. The differential cross sections were measured at six scattering angles $(37^{\circ}, 65^{\circ}, 83^{\circ}, 103^{\circ}, 118^{\circ}, 142^{\circ})$ in the energy range of 18-44 MeV with an increment in the incident energy of 2 MeV. The differential cross sections were compared to the calculations made within the framework of the unitary scattering theory (UST) of pion-nucleus scattering developed in Ref.\cite{MKH89}. Despite the good description of the measured differential cross sections, the experimental data presented in terms of the excitation function (differential cross section at a given scattering angle as a function of energy) showed oscillation structures which the UST approach did not reproduce. These oscillations become more pronounced at angles around $90^{\circ}$. A typical disagreement between theory and experiment for the excitation function at $83^{\circ}$ is shown in Figure \ref{fig: fig1} by the dashed line. One of the possible explanations of these oscillations could be the formation of a diproton resonance in the $\pi NN$ system. However, a systematic experimental study \cite{Pasyuk97} of the $\pi ^{+}+d \rightarrow p+p$ reaction in the same energy range of 18 - 44 MeV with an increment of 1 - 2 MeV did not show any oscillations in either the total or differential cross sections. In the present paper we explore the possibility of explaining these oscillations by the formation of quasi-stationary pionic atom states in the vicinity of the threshold of the DCX reaction channel. The DCX $^{12}\text{C}(\pi^{+},\pi^{-})^{12}\text{O}$ reaction creates two oppositely charged particles that can form pionic atom states below the threshold of this reaction.
If a $\pi^{-}$ forms a bound state with $^{12}\text{O}$, it cannot escape below the threshold due to lack of energy. It is known that such quasi-bound states can manifest themselves as resonances in the elastic cross section. This type of sub-threshold resonance was first investigated and described by A. Baz' \cite{Baz59,Baz71}. Another argument in favor of this mechanism comes from the fact that the mass excess of $^{12}\text{O}$ is 32.06 MeV \cite{Ajzen85}. This brings the Q-value of the DCX channel in $\pi^{+}$-$^{12}\text{C}$ scattering into the $\sim 30$ MeV energy region. It is worthwhile to note that one of the first experimental observations of $^{12}\text{O}$ was obtained in the DCX reaction $^{12}\text{C}(\pi^{+},\pi^{-})^{12}\text{O}$ \cite{Mord85}. The paper is organized as follows. In Section II we present a short overview of the UST approach, which is used for the description of the pion-nucleus interaction. Section III is devoted to the theory of sub-threshold resonances due to the formation of quasi-bound pionic atom states. In Section IV, a systematic comparison of theoretical calculations with the data is presented. In Section V we discuss the main results of the paper. \section{\label{sec:Sec2}Theory} In this section we briefly summarize the UST formalism \cite{MKH89} that we use in the description of pion-nucleus scattering. For simplicity we consider the scattering of pions by nuclei with zero spin. The pion-nucleus scattering amplitude is presented in a standard way as \begin{equation} f_{\pi A}= f_{C} + f_{sc}, \label{eq:S1} \end{equation} where $f_{C}$ is the Coulomb amplitude, and $f_{sc}$ is the nuclear-Coulomb amplitude \begin{equation} f_{sc}= \frac {1}{2ik}\sum_{l=0}^{\infty}(2l+1)e^{2i\sigma^{\pm}_l}\left(S_le^{2i\delta^{\pm}_{R,l}} - 1\right)P_l(\cos\theta), \label{eq:S2} \end{equation} where $\sigma^{\pm}_l$ are the Coulomb phases and $\delta^{\pm}_{R,l}$ are the Coulomb corrections caused by the effects of the Coulomb distortion of the pion wave.
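As an illustration (not the UST code of Ref.\cite{MKH89}), the standard partial-wave form of Eq.~(\ref{eq:S2}) can be evaluated once the phases are specified; the phase-shift values below are arbitrary placeholders for a few low partial waves.

```python
import numpy as np
from scipy.special import eval_legendre

def nuclear_coulomb_amplitude(theta, k, sigma, delta_R, delta_piA):
    """Partial-wave sum f_sc = (1/2ik) sum_l (2l+1) exp(2i sigma_l)
    [S_l exp(2i delta_R,l) - 1] P_l(cos theta), with S_l = exp(2i delta_piA,l)."""
    f = np.zeros_like(theta, dtype=complex)
    cost = np.cos(theta)
    for l, (s, dR, dh) in enumerate(zip(sigma, delta_R, delta_piA)):
        S_l = np.exp(2j * dh)
        f += (2 * l + 1) * np.exp(2j * s) \
             * (S_l * np.exp(2j * dR) - 1.0) * eval_legendre(l, cost)
    return f / (2j * k)

# Arbitrary placeholder phases (radians) for l = 0, 1, 2
theta = np.linspace(0.1, np.pi, 50)
f_sc = nuclear_coulomb_amplitude(theta, k=0.6,
                                 sigma=[0.02, 0.01, 0.005],
                                 delta_R=[0.01, 0.005, 0.002],
                                 delta_piA=[0.30, 0.15, 0.05])
dsigma = np.abs(f_sc) ** 2   # hadronic-Coulomb part of the cross section
```

In a full calculation the point-Coulomb amplitude $f_C$ must be added coherently before squaring, and the sum carried to convergence in $l$.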
A detailed procedure for calculating these corrections is given in Ref.\cite{MKH89}. The hadronic part is represented by the S-matrix, \begin{equation} S_l = e^{2i\delta _{\pi A,l}}, \label{eq:S3} \end{equation} where $\delta _{\pi A,l}$ are pure hadronic phase shifts which we calculate within the framework of the UST approach \cite{MKH89}. The UST approach is based on the method of evolution with respect to the coupling constant \cite{DAK65,DAK79}. The basic equations are formulated for the calculation of the pion-nucleus phase shifts, \begin{equation} {\delta}_{\pi A}(k)= {\delta}^{pot}_{\pi A}(k)~+~{\delta}^{abs}_{\pi A}(k). \label{eq:S4} \end{equation} Here, ${\delta}^{pot}$ is the part of the pion-nucleus phase shift that is formed by the multiple scattering of a pion by the nuclear nucleons, and ${\delta}^{abs}$ is the absorption correction. The ``potential'' part is expressed in terms of the pion-nucleon phase shifts and the nuclear ground state characteristics such as the nuclear form factor and correlation functions. The absorption part is expressed in terms of the absorption parameters ${\tilde B}_0$ and ${\tilde C}_0$, \begin{equation} {\delta}^{abs}_{\pi A}(k) = A(A-1)\frac{1+\epsilon}{1+2\epsilon/A} {\hat \rho}^2(\vec q) [{\tilde B}_0(k)+{\tilde C}_0(k)({\vec{\kappa '}}\cdot {\vec \kappa})], \label{eq:S5} \end{equation} where $\epsilon={{\omega}_\pi}(k)/2M$, ${\omega}_\pi$ is the pion energy, M is the mass of a nucleon, ${\hat \rho}^2(\vec q)$ is the Fourier transform of the square of the nuclear density $\rho (r)$ normalized to unity, ${\vec q}={\vec k}'- {\vec k}$ is the momentum transfer, and ${\vec \kappa}$ and ${\vec \kappa}'$ are the pion momenta in the $\pi-2N$ center-of-mass system.
The absorption parameters determined from the pionic atom data are \cite{MKH89} \begin{eqnarray} {\tilde B}_0(k)~& =& ~(-0.1~+~i0.1)~\text{fm}^4, \nonumber \\ {\tilde C}_0(k)~& = &~(-2.8~+~i1.0)~\text{fm}^6. \label{eq:S6} \end{eqnarray} In its standard form the UST formalism does not take into account the possibility of the formation of sub-threshold resonances in pion-nucleus interactions. \section{\label{sec:Sec3}DCX and Sub-threshold Resonances } The opening of the DCX reaction channel \begin{equation} \pi^{+} + {}^{12}\text{C}\rightarrow (\pi^{-}, {}^{12}\text{O})^{*}\rightarrow\pi^{+} + {}^{12}\text{C} , \label{eq:S7} \end{equation} creates the possibility of the formation of bound pionic atom states below the threshold of this reaction. As was shown by Baz' \cite{Baz59,Baz71}, the formation of such sub-threshold bound states manifests itself as resonances in the elastic cross section. Refs.~\cite{Baz59,Baz71} considered the general case of elastic scattering of two particles, $X(a,a)X$, below the threshold of an inelastic channel $X(a,b)Y$ in which the particles $b$ and $Y$ can form bound states. The main idea of the theoretical description of the effect of sub-threshold resonances is that one can neglect the energy dependence of the wave functions of the $(a,X)$ and $(b,Y)$ systems arising from the strong interaction and focus on the energy dependence of the Coulomb wave function of the bound $(b,Y)$ system.
The formation of quasi-bound states due to the opening of the DCX channel modifies the partial S-matrix (\ref{eq:S3}) in the following way, \begin {eqnarray} S_l=e^{2i(\delta_{\pi A, l}+\delta^{res}_l) }, \label{eq:S8} \end{eqnarray} where the resonance part is given by \begin {eqnarray} \delta ^{res}_l = \arctan\frac{\delta_2 +\kappa_2(-1)^l\zeta _{l} \cot\pi \eta}{\delta_1 +\kappa_1(-1)^l\zeta _l \cot \pi \eta}, \label{eq:S9} \end{eqnarray} \begin {eqnarray} \zeta _{l} = \frac{\pi(2kr \eta)^{2l+1}}{(2l+1)\,\Gamma^2(2l+1)}, \label{eq:S10} \end{eqnarray} where $\eta ={Z\alpha}/{\beta}$ is the Sommerfeld parameter: $Z$ is the nuclear charge, $\alpha \approx 1/137$ is the fine-structure constant, and $\beta$ is the pion velocity in the pion-nucleus center-of-mass frame. Below the threshold ($E < E_{thr}$) this parameter is given by \begin {eqnarray} \eta=Z\alpha\sqrt{\frac{{\mu}_{\pi A}}{2\vert{E-E_{thr}}\vert}}, \label{eq:S11} \end{eqnarray} where ${\mu}_{\pi A}$ is the pion-nucleus reduced mass. The parameters $\delta_{1,2}$ and $\kappa_{1,2}$ are the real and imaginary parts of the energy-independent complex parameters $\delta$ and $\kappa$ in the vicinity of the threshold energy $E_{thr}$. These constants are expressed in terms of the logarithmic derivatives of the pion-nucleus wave functions in the strong interaction region. The resonance energies are determined by the condition $\delta^{res}_l=n\pi+\frac{\pi}{2}$, which gives the equation \begin{equation} \cot \pi \eta =(-1)^l\frac{\delta_1}{\zeta _l \kappa_1}. \label{eq:S12} \end{equation} The solution of this equation can be written as \begin{eqnarray} E^{res}_{nl} &=& E_{thr}- \frac{Z^2e^4{\mu}_{\pi A}}{2n^2\xi^2_{nl}}, \nonumber \\ \xi _{nl} &\equiv& 1 +\frac{1}{n\pi}\arctan\Bigl[(-1)^l\frac{\zeta_l \kappa_1}{\delta_1}\Bigr], \quad n=1,2,3,\ldots \label{eq:S13} \end{eqnarray} Here, $\xi _{nl}$ represents the strong interaction shift of the pure Coulomb pionic atom energy levels, \begin{equation} E^C_n= -\frac{Z^2e^4{\mu}_{\pi A}}{2n^2}.
\label{eq:S14} \end{equation} From Eq.(\ref{eq:S13}) it follows that for a given total pion-nucleus angular momentum ($l$) there is an infinite number of resonances whose density increases as $E \rightarrow E_{thr}$. It is easy to see that the width of the resonance region is determined by the energy of the first Coulomb level. For reaction (\ref{eq:S7}) the energy of the first pionic atom level is given by \begin{equation} E^C_1=-\frac{{\mu _{\pi A}}(Z+2)^2e^4}{2}. \label{eq:S15} \end{equation} In the vicinity of each resonance energy $E=E^{res}_{nl}$ the S-matrix can be approximated by the Breit-Wigner formula \begin {equation} S_l \approx e^{2i{\delta_{\pi A,l}}}{\bigl(1-\sum_{n=1}^{\infty}\frac{i{\Gamma}^{e}_{nl}}{E-E^{res}_{nl}+i\Gamma^{e}_{nl}/2}\bigr)}, \label{eq:S16} \end{equation} where ${\Gamma}^e_{nl}$ is the elastic width of each resonance. In applications to real processes the upper limit should be replaced by some finite number $N$ determined by the experimental energy resolution. A simple procedure for generalizing this formalism to the important case in which one of the particles created in the opened reaction channel is unstable was proposed in Ref.~\cite{Baz59}. In this case Eq.(\ref{eq:S16}) is replaced by \begin {equation} S_l \approx e^{2i{\delta_{\pi A,l}}}{\bigl(1-\sum_{n=1}^{N}\frac{i{\Gamma}^{e}_{nl}}{E-E^{res}_{nl}+i(\Gamma^{e}_{nl}+\Gamma)/2}\bigr)}, \label{eq:S17} \end{equation} where $\Gamma$ represents the particle's energy width. In the considered case this particle is the nucleus created in the DCX reaction. Formula (\ref{eq:S17}) can be simplified if the width of the created nucleus is much bigger than the elastic widths of the corresponding sub-threshold resonances.
Indeed, if $\Gamma \gg \Gamma^{e}_{nl}$ one can neglect the quantities $\Gamma^{e}_{nl}$ in the denominators and present Eq.(\ref{eq:S17}) in the following form, \begin {equation} S_l \approx e^{2i{\delta_{\pi A,l}}}{\bigl(1-\frac{i\Gamma^e_{tot}}{E-E^{res}_l+i\Gamma/2}\bigr)}, \label{eq:S18} \end{equation} where \begin {equation} \Gamma^e_{tot}=\sum_{n=1}^{\infty}\Gamma^{e}_{nl}, \label{eq:S18a} \end{equation} and $E^{res}_l$ is some average value of the sub-threshold resonance energy. This formula can be re-written as \begin {equation} S_l \approx e^{2i{\delta_{\pi A,l}}}{\bigl(1- \frac{i\gamma\Gamma}{E-E^{res}_l+i\Gamma/2}\bigr)}, \label{eq:S18b} \end{equation} where $\gamma\equiv \Gamma^e_{tot}/\Gamma$. The derived formula is a single-term Breit-Wigner approximation for the infinite series of sub-threshold resonances contributing to the scattering process. It is important to note that $\Gamma^e_{tot}$ is an effective elastic width representing the contribution of all resonances at a given orbital momentum $l$. \section{\label{sec:Calc}Calculations} In the scattering of positive pions from $^{12}\text{C}$ the DCX reaction creates two oppositely charged particles, $(\pi^{-},{}^{12}\text{O})$, which can form a pionic atom below the threshold of this reaction. This reaction has a positive Q-value ($32.06$ MeV). In addition, a positive pion needs to overcome the nuclear Coulomb barrier. Therefore, the threshold energy of the DCX reaction channel is determined by the sum of the reaction Q-value and the magnitude of the Coulomb repulsion barrier $\delta V_C$, i.e. $E_{thr}\approx Q+ \delta V_{C}$, where $\delta V_{C}=Ze^2/R$, $R=r_0 A^{1/3}$, $r_0=1.1$~fm. For $(\pi^{+},{}^{12}\text{C})$, $\delta V_{C}\approx 3.43$ MeV. Therefore, the threshold energy is $E_{thr}\approx 35.49$ MeV. The resonance energies are shifted down from the threshold energy by the amount of the corresponding binding energy in accordance with Eq.(\ref{eq:S13}).
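The threshold and pionic-atom energies used here follow from elementary formulas; the following short numerical sketch (not part of the original analysis; standard values of $\alpha$, $e^2$ in MeV\,fm, and the particle masses are assumed) reproduces them:

```python
# Numerical sketch: the DCX threshold E_thr ~ Q + dV_C and the Coulomb
# levels E^C_n of the (pi-, 12O) pionic atom, Eq. (15).
# Standard constants and masses are assumed here, not taken from the paper.
ALPHA = 1.0 / 137.036      # fine-structure constant
E2    = 1.44               # e^2 = alpha * hbar*c in MeV fm (rounded)
M_PI  = 139.570            # charged-pion mass, MeV
U     = 931.494            # atomic mass unit, MeV

# Threshold: Q-value plus the Coulomb barrier Z e^2 / (r0 A^(1/3)) for 12C.
Q, Z_C, A = 32.06, 6, 12
R = 1.1 * A ** (1.0 / 3.0)                 # nuclear radius, fm
dV_C = Z_C * E2 / R                        # Coulomb barrier, MeV
E_thr = Q + dV_C                           # threshold energy, MeV

# Coulomb levels of the (pi-, 12O) atom, with charge Z = Z_C + 2 = 8.
M_O12 = 12.0 * U + 32.06                   # 12O mass from its mass excess, MeV
mu = M_PI * M_O12 / (M_PI + M_O12)         # pion-nucleus reduced mass, MeV
E_C = lambda n: -0.5 * mu * ((Z_C + 2) * ALPHA) ** 2 / n ** 2

print(f"dV_C = {dV_C:.2f} MeV, E_thr = {E_thr:.2f} MeV")
print(f"E_C(1) = {E_C(1):.2f} MeV, 2s-1s spacing = {E_C(2) - E_C(1):.2f} MeV")
```

This reproduces the numbers used in the analysis: $\delta V_C \approx 3.43$ MeV, $E^C_1 \approx -0.23$ MeV, and a $2s$--$1s$ spacing of $\approx 0.18$ MeV.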
The lowest resonance energy corresponds to the $1s$-state in the $(\pi^{-},{}^{12}\text{O})$ atom. Using Eq.(\ref{eq:S15}) one obtains $E^C_1\approx -0.23$ MeV and hence the resonance energy $E^{res}_{1s}\approx 35.26$ MeV. In the calculations below this value is also used as the average sub-threshold resonance energy in Eq.(\ref{eq:S18}). The $^{12}\text{O}$ nucleus is unstable. The width of the g.s. of unbound $^{12}\text{O}$ is known with a large uncertainty: $\Gamma = 0.40 \pm 0.25$ MeV \cite{Ajzen85}. The main decay mode is two-proton emission to the ground state of $^{10}$C. The ``elastic'' strong interaction width of the $1s$-state of $(\pi^{-},{}^{12}\text{O})$ is about $10^{-3}$ MeV (see, e.g. \cite{Swanner}). Since the elastic width is much smaller than the width of $^{12}\text{O}$, one can use the derived one-term Breit-Wigner approximation (\ref{eq:S18}). The spin and parity of the $^{12}\text{O}$ g.s. is $0^{+}$. Therefore, the sub-threshold s-resonances can be generated by the pion s-wave only. The s-wave S-matrix is given by \begin {equation} S_0 \approx e^{2i{\delta_{\pi A,0}}}{\bigl(1-\frac{i\gamma\Gamma}{E-E^{res}_0+i\Gamma/2}\bigr)}, \label{eq:S19} \end{equation} where $E^{res}_0=E_{thr} + E^C_{1}\approx 35.26$ MeV, and $\Gamma =0.4$ MeV. The $2s$-energy level in the $(\pi^{-},{}^{12}\text{O})$ atom is separated from the $1s$ level by $0.18$ MeV, and the distance between the higher energy levels decreases rapidly, as $\sim 1/n^3$. Therefore, one can expect that only several low-lying levels make a noticeable contribution to $\Gamma^{e}_{tot}$. In our calculations we consider this quantity as a free parameter to be determined from the data. In Figure \ref{fig: fig1} we present calculations at $83^{\circ}$ for different values of $\gamma=\Gamma^{e}_{tot}/\Gamma$. This parameter determines the magnitude of the effective elastic width $\Gamma^{e}_{tot}$. In our calculations $\Gamma = 0.4$~MeV.
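The effect of the resonant factor on the s-wave S-matrix can be illustrated numerically. The sketch below uses the single-term Breit-Wigner factor of Eq.~(\ref{eq:S18}) with $\Gamma^e_{tot}=\gamma\Gamma$, in the standard sign convention in which the resonance pole lies at $E^{res}-i\Gamma/2$:

```python
# Sketch of the single-term Breit-Wigner factor of Eq. (18), with
# Gamma_e_tot = gamma * Gamma and the pole at E_res - i*Gamma/2.
# On resonance the modulus of the s-wave S-matrix drops to 1 - 2*gamma;
# far from resonance it returns to unity.
def bw_factor(E, E_res=35.26, Gamma=0.4, gamma=0.1):
    """Resonant factor multiplying exp(2i delta_{piA,0}); energies in MeV."""
    return 1.0 - 1j * gamma * Gamma / (E - E_res + 1j * Gamma / 2.0)

print(f"|factor| on resonance: {abs(bw_factor(35.26)):.2f}")   # 1 - 2*0.1 = 0.80
print(f"|factor| at 45 MeV:    {abs(bw_factor(45.0)):.4f}")
```

For $\gamma = 0.1$ the dip reaches $|S_0| = 0.8$, i.e. a sizeable absorption confined to an energy window of width $\sim\Gamma$ around $E^{res}$.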
It is seen that the best description of the data is obtained with $\gamma \approx 0.1$, which corresponds to $\Gamma ^{e}_{tot}\approx$ 0.04 MeV. \begin{figure}[h] \includegraphics[scale=0.3]{fig1} \caption{\label{fig: fig1}The excitation function at $83^{\circ}$. Experimental data are taken from Ref.~\cite{Alex98}. The dashed line shows the UST calculation without the effect of sub-threshold resonances ($\gamma =0$); the dotted line corresponds to $\gamma=0.01$; dash-dotted to $\gamma=0.05$; solid to $\gamma =0.1$; dash-dot-dotted to $\gamma =0.5$; and short dash-dotted to $\gamma =1.0$.} \end{figure} The results of the calculations of the excitation function with and without the sub-threshold resonance effect at all scattering angles measured in Ref.~\cite{Alex98} are presented in Figure \ref{fig:fig2}. These calculations were performed with $\gamma=0.1$ ($\Gamma ^{e}_{tot}\approx$ 0.04 MeV), the value found to provide the best description of the data at $83^{\circ}$. \begin{figure}[h] \includegraphics[scale=0.3]{fig2} \caption{\label{fig:fig2} Excitation functions at fixed scattering angles. Black squares are the data from \cite{Alex98}; the lines show the UST calculations without ($\gamma=0$) and with ($\gamma=0.1$) inclusion of the sub-threshold resonance effect from the formation of the $({\pi}^{-}, {}^{12}\text{O})$ atom.} \end{figure} Figure \ref{fig:fig3} shows the effect of the s-wave sub-threshold resonance on the total cross sections. One can see that the reaction cross section can reach a magnitude of $\sim 300$ mb. \begin{figure}[h] \includegraphics[scale=0.3]{fig3} \caption{\label{fig:fig3} Total cross sections: $\sigma _{el}, \sigma _{tot}$, and $\sigma _{r}= \sigma _{tot}-\sigma _{el}$. The lines present the results of the UST calculations without ($\gamma=0$) and with ($\gamma=0.1$) the sub-threshold resonance effect. The experimental data are taken from Ref.~\cite{Saunders}.}
\end{figure} At each resonance energy the partial cross section reaches its ``kinematic'' maximum $ \sigma_{l,{res}}=\frac{4\pi}{k^2}(2l+1)$. For example, at $T_{\pi}=35$ MeV and $l=0$, $\sigma_{0,{res}}=\frac{4\pi}{k^2}\approx 500$ mb. It means that even at low energies the reaction cross section can be quite big, comparable to the pion-nucleus cross sections in the $\Delta _{33}$ resonance region. There are no direct systematic experimental data on total cross sections in the sub-threshold resonance region. The data from Ref.~\cite{Saunders} cover the energy region from 45 to 65 MeV and are in agreement with the UST calculations. \section{\label{sec:Sec4} Conclusion} In this paper we presented an explanation of the oscillations of the excitation function observed in the scattering of positive pions from $^{12}\text{C}$ at pion energies in the 30--35 MeV region \cite{Alex98}. It is shown that these oscillations can be explained by the formation of quasi-stationary pionic atom states in the vicinity of the threshold of the DCX reaction channel. The threshold energy is determined by the reaction Q-value and the pion's kinetic energy required to overcome the nuclear Coulomb barrier. In the considered case the threshold energy is about 35 MeV. The width of the resonance region is determined by the magnitude of the first Coulomb level of the pionic atom, which is about 0.23 MeV for the $(\pi^{-},{}^{12}\text{O})$ atom. The narrowness of this region may explain why other experimental groups (a detailed comparison of the existing experimental data is given in Ref.~\cite{Alex98}) did not see these oscillations. Fortunately, the data sets at different pion energies in Ref.~\cite{Alex98} included the energy 35.4 MeV. In the presented analysis there is one free parameter, the elastic width of the sub-threshold resonances. The best description of the oscillations was obtained with $\Gamma^e_{tot}\approx 0.04$ MeV.
It is important to note that this value refers to the integrated elastic width that accounts for the contribution of an infinite series of resonances at a given orbital momentum. In Section III it was shown that one can approximate the sum over all resonances by a single Breit-Wigner formula (\ref{eq:S18}) if the particle's decay width is much bigger than the corresponding elastic widths. In the considered reaction this condition is satisfied since the $^{12}\text{O}$ decay width is $\sim 0.4$ MeV. As the pion energy approaches the sub-threshold resonance energies the reaction cross section varies significantly, as shown in Figure \ref{fig:fig3}. It means that, although the DCX reaction cross section itself is quite small at low energies, the sub-threshold resonance effect amplifies the role of the DCX in pion-nucleus dynamics. In addition, if the decay width of the nucleus created by the DCX is much bigger than the corresponding elastic width, the final state of the quasi-bound system is determined by the nuclear decay. In other words, one can say that positive pions may become effective ``burners'' of nuclei when their energy matches the energy of sub-threshold resonances caused by the formation of pionic atom states below the threshold of the DCX reaction. \begin{acknowledgements} The author is indebted to V. B. Belyaev and J.R. Peterson for stimulating discussions and helpful advice. \end{acknowledgements} \nocite{*}
\section{Introduction} Understanding the mechanism of unconventional superconductivity in structures lacking inversion symmetry has been a tough challenge ever since the discovery of the heavy fermion noncentrosymmetric (NCS) superconductor CePt$_{3}$Si \cite{Bauer2004,EBA}. The lack of an inversion center in the crystal structure of a noncentrosymmetric superconductor makes parity an unconserved quantity. As a result, the superconducting ground state of an NCS superconductor may exhibit a mixing of spin-singlet and spin-triplet pair states \cite{rashba,sky,kv,ia,pa,fujimoto1,fujimoto2,fujimoto3,mdf}. The parity-mixed superconducting ground state gives rise to several anomalous superconducting properties, e.g. an upper critical field exceeding the Pauli limit, nodes in the superconducting gap, a helical vortex state, and time-reversal symmetry breaking.\\ Several NCS superconducting systems have been investigated to study the effects of broken inversion symmetry \cite{nr1,RT1,rf,rhf,rz3,YC,LC,rw,mib,lip,mac1,rg,ig,lrs,lps,rb1}, but the majority of them appear to show $\textit{s}$-wave superconductivity. Theoretical predictions suggest that NCS superconductors are prime candidates to exhibit time-reversal symmetry breaking (TRSB) due to their admixed superconducting ground states. To date only a few NCS superconductors, Re$_{6}$Zr \cite{rz1}, LaNiC$_{2}$ \cite{lnc2}, SrPtAs \cite{SPA} and La$_{7}$Ir$_{3}$ \cite{li1}, have been reported to show TRSB. It is a rarely observed phenomenon and, apart from NCS superconductors, it has only been observed in a few unconventional superconductors, e.g. Sr$_{2}$RuO$_{4}$ \cite{sro1,sro2}, UPt$_{3}$ \cite{UP1,UP2}, PrPt$_{4}$Ge$_{12}$ \cite{ppg}, LaNiGa$_{2}$ \cite{LNG}, and Lu$_{5}$Rh$_{6}$Sn$_{18}$ \cite{LoS}. The discrepancy between theory and experiment, and the possibility of realizing an unconventional superconducting state with TRSB in NCS superconductors, are of great interest.
To understand the superconducting mechanism, it is necessary to study new NCS superconducting systems by combining bulk measurements such as transport, magnetization, and heat capacity with local probe techniques like muon spectroscopy. Muon spectroscopy is one of the most direct methods of detecting an unconventional superconducting ground state. This technique can accurately determine the temperature dependence of the magnetic penetration depth and the onset of time-reversal symmetry breaking in superconductors.\\ Here we report on the superconducting state of the binary NCS compound Nb$_{0.5}$Os$_{0.5}$ ($\alpha$-Mn structure), with superconducting transition temperature $T_{c}$ = 3.07 K. Resistivity, magnetization, and specific heat measurements were carried out to explore the superconducting properties of Nb$_{0.5}$Os$_{0.5}$. $\mu$SR measurements in transverse-field (TF) and longitudinal-field (LF) configurations are used to probe the flux line lattice (FLL) and time-reversal symmetry breaking, respectively. \section{Experimental Details} The polycrystalline sample of Nb$_{0.5}$Os$_{0.5}$ was prepared by arc melting. Stoichiometric amounts of Nb (99.95$\%$, Alfa Aesar) and Os (99.95$\%$, Alfa Aesar) were placed on the water-cooled copper hearth in an ultrapure argon gas atmosphere. The sample was inverted and remelted several times to ensure homogeneity, and the observed weight loss is negligible. The phase analysis was done using x-ray diffraction (XRD) at room temperature on an X'pert PANalytical diffractometer. The magnetization and ac susceptibility measurements were performed using the magnetic property measurement system (MPMS 3, Quantum Design Inc.). The electrical resistivity and specific heat measurements were done using the physical property measurement system (PPMS, Quantum Design Inc.). The $\mu$SR measurements were carried out using the MuSR spectrometer at the ISIS facility, Rutherford Appleton Laboratory, Didcot, U. K.
in both longitudinal and transverse geometries. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig1.eps} \caption{\label{Fig1:xrd} Powder XRD pattern for the Nb$_{0.5}$Os$_{0.5}$ sample recorded at room temperature using Cu $K_{\alpha}$ radiation. The solid red line shows the experimental data. The dotted blue line corresponds to the Rietveld refinement of the pattern.} \end{figure} \section{Results and Discussion} \subsection{Sample characterization} The powder x-ray diffraction pattern for Nb$_{0.5}$Os$_{0.5}$ was collected at room temperature. Rietveld refinement was performed using the HighScore Plus software. As observed from Fig. 1, the Nb$_{0.5}$Os$_{0.5}$ sample has no impurity phase. It can be indexed by the cubic, noncentrosymmetric $\alpha$-Mn structure (space group $I \bar{4}3m$, No. 217) with the lattice parameter a = 9.765(3) \text{\AA}. \subsection{Normal and superconducting state properties} \subsubsection{Electrical resistivity} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig2.eps} \caption{\label{Fig2:Resistivity} The resistivity $\rho(T)$ of Nb$_{0.5}$Os$_{0.5}$ measured in zero field in the temperature range 1.85 K $\le$ T $\le$ 300 K. The inset shows $\rho(T)$ measured in several applied magnetic fields.} \end{figure} The electrical resistivity measurement was done by the ac transport technique in the temperature range 1.85 K $\le$ $\textit{T}$ $\le$ 300 K in zero field (see Fig. 2). Zero resistivity is attained around $T_{c}^{0}$ $\approx$ 3.1 K. The normal state resistivity remains almost temperature independent up to the highest measured temperature, indicating that Nb$_{0.5}$Os$_{0.5}$ exhibits poor metallicity. The low value of the residual resistivity ratio (RRR) ($\frac{\rho(300)}{\rho(10)}$ = 1.05) suggests the dominance of strong electronic scattering due to disorder.
The resistivity measurements as a function of temperature were also done under different applied magnetic fields (up to 3 T, see inset of Fig. 2) to determine the upper critical field. \subsubsection{Magnetization} The magnetization measurement was done in zero-field-cooled warming (ZFCW) and field-cooled cooling (FCC) modes in an applied field of 5 mT (see Fig. 3(a)). The superconducting transition was observed at $T_{c}^{onset}$ = 3.07 K, with a transition width of $\Delta T_{c}$ = 0.21 K. Low-field M-H measurements were done at different temperatures to determine the lower critical field $H_{c1}$(0), defined as the field of the first deviation from linearity in the low-field region of the M vs H curves (see Fig. 3(b)). Using the formula $H_{c1}(T)= H_{c1}(0)(1-(T/T_{c})^{2})$ for the temperature variation of $H_{c1}(T)$, we estimated $H_{c1}$(0) = 3.06 $\pm$ 0.05 mT. \begin{figure*}[t] \includegraphics[width=2.1\columnwidth]{Fig3.eps} \caption{\label{Fig3:zfc} (a) The magnetization data for Nb$_{0.5}$Os$_{0.5}$ taken in a 5 mT field show the superconducting transition at $T_{c}$ = 3.07 K. (b) The lower critical field $H_{c1}$(0) estimated by the GL formula is 3.06 mT. The inset shows the M vs H curves taken at various temperatures. (c) The upper critical field $H_{c2}$(T) obtained from magnetization, ac susceptibility, resistivity, and specific heat measurements. The dotted lines show the GL fits, yielding $H_{c2}$(0) $\simeq$ 5.4 T for Nb$_{0.5}$Os$_{0.5}$.} \end{figure*} The temperature dependence of the upper critical field $H_{c2}$(T) was obtained by measuring the field dependence of the superconducting transition $T_{c}$ in magnetization, ac susceptibility, resistivity, and specific heat measurements. It is evident from Fig. 3(c) that $H_{c2}$ varies almost linearly with temperature and is best fitted by the Ginzburg-Landau (GL) relation $H_{c2}(T) = H_{c2}(0)\frac{(1-t^{2})}{(1+t^2)}$, where t = $T/T_{c}$.
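As a minimal illustration of the two fitting forms just quoted (with the fitted values $H_{c1}(0) = 3.06$ mT and $H_{c2}(0) = 5.4$ T taken from this work, everything else elementary):

```python
# Sketch of the empirical temperature dependences used for the fits:
# H_c1(T) = H_c1(0)(1 - t^2) and the GL form H_c2(T) = H_c2(0)(1 - t^2)/(1 + t^2),
# with t = T/Tc.  Both vanish at t = 1 and reach their quoted zero-temperature
# values at t = 0.
def h_c1(t, h0=3.06):        # mT
    return h0 * (1.0 - t ** 2)

def h_c2(t, h0=5.4):         # T
    return h0 * (1.0 - t ** 2) / (1.0 + t ** 2)

for t in (0.0, 0.5, 0.9, 1.0):
    print(f"t = {t:.1f}: H_c1 = {h_c1(t):.2f} mT, H_c2 = {h_c2(t):.2f} T")
```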
By fitting the above equation to the $H_{c2}$-T data, the specific heat and magnetization measurements give $H_{c2}$(0) $\simeq$ 5.4 $\pm$ 0.1 T, whereas the resistivity and ac susceptibility measurements give $H_{c2}$(0) $\simeq$ 4.6 $\pm$ 0.1 T. Using the relation $H_{c2}$(0) = $\Phi_{0}/2\pi\xi_{GL}^{2}$, where $\Phi_{0}$ is the magnetic flux quantum $(h/2e)$, we obtained $\xi_{GL}(0)$ = 78.12 \text{\AA}. Other superconducting parameters such as the Ginzburg-Landau parameter $\kappa_{GL}$(0) (= 61), the penetration depth $\lambda_{GL}$(0) (= 4774 \text{\AA}), and the thermodynamic critical field $H_{c}$(0) (= 62.6 mT) were calculated using the standard relations given in Ref. \cite{tin}.\\ For a type-II BCS superconductor in the dirty limit, the orbital limit of the upper critical field $H_{c2}^{orbital}$(0) is given by the Werthamer-Helfand-Hohenberg (WHH) \cite{EH,NRW} expression $H_{c2}^{orbital}$(0) = -0.693 $T_{c}\left.\frac{dH_{c2}(T)}{dT}\right|_{T=T_{c}}$. Using the initial slope of 2.1 T K$^{-1}$ from the $H_{c2}$-T phase diagram, $H_{c2}^{orbital}$(0) in the dirty limit was estimated to be 4.46 T. Within the $\alpha$-model the Pauli limiting field is given by $H_{c2}^{p}$(0) = 1.86$T_{c}(\alpha/\alpha_{BCS})$ \cite{DC}. Using $\alpha$ = 1.81 (from the specific heat measurement), this yields $H_{c2}^{p}$(0) = 5.85 T. The upper critical field $H_{c2}$(0) calculated above is close to both the orbital and the Pauli limiting fields. Therefore, detailed investigations of the upper critical field in high-quality single crystals of Nb$_{0.5}$Os$_{0.5}$ are highly desirable.\\ \subsubsection{Specific heat} The temperature dependence of the specific heat was collected in zero field.
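Before turning to the specific heat data, the critical-field estimates above can be cross-checked numerically; a sketch assuming only the standard flux quantum and the fit values quoted in the text:

```python
# Cross-check of the quoted estimates: xi_GL(0) from H_c2(0), the WHH
# orbital limit from the slope at Tc, and the alpha-model Pauli limit.
import math

PHI0  = 2.0678e-15     # flux quantum h/2e, Wb
Hc2_0 = 5.4            # T (specific heat / magnetization value)
Tc    = 3.07           # K
slope = 2.1            # -dH_c2/dT at Tc, in T/K
alpha, alpha_BCS = 1.81, 1.764

xi_A      = math.sqrt(PHI0 / (2.0 * math.pi * Hc2_0)) * 1e10   # in Angstrom
Hc2_orb   = 0.693 * Tc * slope                                  # WHH, dirty limit
Hc2_pauli = 1.86 * Tc * (alpha / alpha_BCS)                     # alpha-model

print(f"xi_GL(0) ~ {xi_A:.1f} A, H_orb(0) ~ {Hc2_orb:.2f} T, H_P(0) ~ {Hc2_pauli:.2f} T")
```

The outputs agree with $\xi_{GL}(0) \approx 78$ \AA, $H_{c2}^{orbital}(0) \approx 4.46$ T, and $H_{c2}^{p}(0) \approx 5.85$ T quoted above.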
The normal-state low-temperature specific heat data above $T_{c}$ were fitted with the expression $C/T$ = $\gamma_{n}+\beta_{3}T^{2}+\beta_{5}T^{4}$ in the limit $\textit{T}$ $\to$ 0, to extract the electronic contribution ($\gamma_{n}$) and the phononic contributions ($\beta_{3}$, $\beta_{5}$) to the specific heat. The solid red line in the inset of Fig. 4 shows the best fit to the data, which yields $\gamma_{n}$ = 3.42 $\pm$ 0.01 mJ mol$^{-1}$ K$^{-2}$, $\beta_{3}$ = 0.039 $\pm$ 0.002 mJ mol$^{-1}$ K$^{-4}$, and $\beta_{5}$ = 0.205 $\pm$ 0.004 $\mu$J mol$^{-1}$ K$^{-6}$. The value of $\beta_{3}$ corresponds to a Debye temperature $\theta_{D}$ of 367 K. The Sommerfeld coefficient is proportional to the density of states $D_{C}(E_{F})$ at the Fermi level via $\gamma_{n}$ = $(\pi^{2}k_{B}^{2}/3)D_{C}(E_{F})$; using $\gamma_{n}$ = 3.42 $\pm$ 0.01 mJ mol$^{-1}$ K$^{-2}$ we obtained $D_{C}(E_{F})$ = 1.45 states/(eV f.u.).\\ \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig4.eps} \caption{\label{Fig4:hc2} The single-gap BCS expression given in Eq. (3) fits the data fairly well for $\Delta$(0)/$k_{B}T_{c}$ = 1.81 in Nb$_{0.5}$Os$_{0.5}$. Inset: The low-temperature specific heat data above $T_{c}$ fitted with the Debye model (solid red line).} \end{figure} The electron-phonon coupling constant can be calculated using the McMillan equation \cite{WL} \begin{equation} \lambda_{e-ph} = \frac{1.04+\mu^{*}\ln(\theta_{D}/1.45T_{c})}{(1-0.62\mu^{*})\ln(\theta_{D}/1.45T_{c})-1.04 } , \label{eqn1:ld} \end{equation} where $\mu^{*}$ is the Coulomb repulsion parameter, typically taken as $\mu^{*}$ = 0.13 for intermetallic superconductors. Using $T_{c}$ = 3.07 K and $\theta_{D}$ = 367 K for Nb$_{0.5}$Os$_{0.5}$, we obtained $\lambda_{e-ph}$ $\simeq$ 0.53. This value is comparable to that of other fully gapped NCS superconductors \cite{nr1,RT1,sas}, suggesting that Nb$_{0.5}$Os$_{0.5}$ is a weakly coupled superconductor.
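The Debye-temperature and coupling-constant arithmetic can be reproduced as follows. This is a sketch, assuming one atom per mole in the Debye relation $\beta_{3} = 12\pi^{4}R/5\theta_{D}^{3}$ (consistent with $\theta_{D}$ = 367 K quoted above):

```python
# Sketch: theta_D from beta_3 via beta_3 = 12 pi^4 R / (5 theta_D^3)
# (one atom per mole assumed), then the McMillan coupling, Eq. (1).
import math

R_GAS   = 8.314        # gas constant, J mol^-1 K^-1
beta3   = 0.039e-3     # J mol^-1 K^-4, from the fit
Tc      = 3.07         # K
mu_star = 0.13         # Coulomb pseudopotential, as in the text

theta_D = (12.0 * math.pi ** 4 * R_GAS / (5.0 * beta3)) ** (1.0 / 3.0)

L = math.log(theta_D / (1.45 * Tc))
lam = (1.04 + mu_star * L) / ((1.0 - 0.62 * mu_star) * L - 1.04)

print(f"theta_D ~ {theta_D:.0f} K, lambda_e-ph ~ {lam:.2f}, m*/m_e = 1 + lambda ~ {1.0 + lam:.2f}")
```

The result, $\theta_{D} \approx 368$ K and $\lambda_{e-ph} \approx 0.53$, agrees with the values above within rounding.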
Using the value of $\lambda_{e-ph}$, we have calculated the effective mass of the quasiparticles, $m^{*}$ = 1.53 $m_{e}$ \cite{GG}. The electronic contribution to the specific heat is obtained by subtracting the phononic contribution. The normalized specific heat jump $\frac{\Delta C_{el}}{\gamma_{n}T_{c}}$ is 1.48 for $\gamma_{n}$ = 3.42 mJ mol$^{-1}$ K$^{-2}$, which is close to the value for a BCS superconductor (= 1.43) in the weak coupling limit. The temperature dependence of the normalized entropy S in the superconducting state of a single-gap BCS superconductor is given by \begin{equation} \frac{S}{\gamma_{n}T_{c}} = -\frac{6}{\pi^2}\left(\frac{\Delta(0)}{k_{B}T_{c}}\right)\int_{0}^{\infty}[ \textit{f}\ln(f)+(1-f)\ln(1-f)]dy , \label{eqn2:s} \end{equation} where $\textit{f}$($\xi$) = [exp($\textit{E}$($\xi$)/$k_{B}T$)+1]$^{-1}$ is the Fermi function, $\textit{E}$($\xi$) = $\sqrt{\xi^{2}+\Delta^{2}(t)}$, $\xi$ is the energy of normal electrons measured relative to the Fermi energy, $\textit{y}$ = $\xi/\Delta(0)$, $\mathit{t = T/T_{c}}$, and $\Delta(t)$ = $\Delta(0)$tanh[1.82(1.018(($\mathit{1/t}$)-1))$^{0.51}$] is the BCS approximation for the temperature dependence of the energy gap. The normalized electronic specific heat is then calculated from the normalized entropy via \begin{equation} \frac{C_{el}}{\gamma_{n}T_{c}} = t\frac{d(S/\gamma_{n}T_{c})}{dt} . \label{eqn3:Cel} \end{equation} Below $T_{c}$, $C_{el}$ is described by Eq. (3), whereas above $T_{c}$ it is equal to $\gamma_{n}T$. Figure 4 shows the fit of the specific heat data using Eq. (3), which yields $\alpha$ = $\Delta(0)/k_{B}T_{c}$ = 1.81 $\pm$ 0.02.
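Equation (2) requires only a one-dimensional numerical integral; the following sketch (plain Riemann sum, fitted $\alpha$ = 1.81, energies in units of $k_{B}T_{c}$) checks it against its normal-state limit $S/\gamma_{n}T_{c} = 1$ at $T = T_{c}$:

```python
# Numerical sketch of the alpha-model entropy, Eq. (2), with the fitted
# alpha = Delta(0)/k_B Tc = 1.81; energies are in units of k_B Tc.
import math

ALPHA = 1.81

def gap(t):
    """Reduced BCS gap Delta(t)/Delta(0) in the usual approximation."""
    if t >= 1.0:
        return 0.0
    return math.tanh(1.82 * (1.018 * (1.0 / t - 1.0)) ** 0.51)

def entropy(t, ymax=30.0, n=6000):
    """S/(gamma_n Tc) from Eq. (2), via a plain Riemann sum over y."""
    d, dy, total = gap(t), ymax / n, 0.0
    for i in range(1, n + 1):
        y = i * dy
        E = ALPHA * math.sqrt(y * y + d * d)
        f = 1.0 / (math.exp(E / t) + 1.0)
        if f > 0.0:
            total += f * math.log(f) + (1.0 - f) * math.log(1.0 - f)
    return -(6.0 / math.pi ** 2) * ALPHA * total * dy

# At t = 1 the gap closes and the integral reproduces the normal-state
# entropy S/(gamma_n Tc) = 1; deep in the superconducting state S is tiny.
print(f"S(1.0) = {entropy(1.0):.3f}, S(0.2) = {entropy(0.2):.5f}")
```

Differentiating this entropy with respect to $t$, as in Eq. (3), gives the electronic specific heat curve plotted in Fig. 4.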
The obtained value is close to the BCS value $\alpha_{BCS}$ = 1.764 in the weak coupling limit, suggesting single-gap BCS-like superconductivity in Nb$_{0.5}$Os$_{0.5}$.\\ In the $\alpha$-model, the BCS parameter $\alpha_{BCS}$ is replaced by $\alpha$, which can be determined using the formula $\Delta C_{el}/\gamma_{n}T_{c} = 1.426(\alpha/\alpha_{BCS})^{2}$ \cite{DC}. Substituting the normalized specific heat jump $\Delta C_{el}/\gamma_{n}T_{c}$ = 1.48 of our sample, we get $\alpha$ = 1.8, in good agreement with the fitted value. \\ \begin{table}[h!] \caption{Normal and superconducting properties of Nb$_{0.5}$Os$_{0.5}$} \begin{center} \begin{tabular}[b]{lcc}\hline\hline Parameter& unit& value\\ \hline \\[0.5ex] $T_{c}$& K& 3.07\\ $H_{c1}(0)$& mT& 3.06 \\ $H_{c2}(0)$& T& 5.4 \\ $H_{c}(0)$& mT& 62.6 \\ $H_{c2}^{orbital}(0)$& T& 4.46\\ $H_{c2}^{P}(0)$& T& 5.85\\ $\xi_{GL}$& \text{\AA}& 78.12\\ $\lambda_{GL}$& \text{\AA}& 4774\\ $\kappa_{GL}$& &61\\ $\gamma_{n}$& mJ mol$^{-1}$ K$^{-2}$& 3.42\\ $\beta_{3}$ & mJ mol$^{-1}$ K$^{-4}$& 0.039\\ $\theta_{D}$& K& 367\\ $\lambda_{e-ph}$& &0.53\\ $D_{C}(E_{F})$& states/(eV f.u.)& 1.45\\ $\Delta C_{el}/\gamma_{n}T_{c}$& &1.48\\ $\Delta(0)/k_{B}T_{c}$& &1.81 \\[0.5ex] \hline\hline \end{tabular} \par\medskip\footnotesize \end{center} \end{table} \subsubsection{Muon spin relaxation and rotation} The superconducting ground state of Nb$_{0.5}$Os$_{0.5}$ was further analyzed by $\mu$SR relaxation and rotation measurements. The zero-field muon spin relaxation (ZF-$\mu$SR) spectra were collected below ($T$ = 40 mK) and above ($T$ = 3.5 K) the transition temperature ($T_{c}$ = 3.07 K), as displayed in Fig. 5. The absence of any oscillatory component in the spectra confirms the absence of the atomic moments generally associated with an ordered magnetic structure.
In the absence of atomic moments, the muon-spin relaxation in zero field is described by the Gaussian Kubo-Toyabe (KT) function \cite{RSH} \begin{equation} G_{\mathrm{KT}}(t) = \frac{1}{3}+\frac{2}{3}(1-\sigma^{2}_{\mathrm{ZF}}t^{2})\mathrm{exp}\left(\frac{-\sigma^{2}_{\mathrm{ZF}}t^{2}}{2}\right) , \label{eqn4:zf} \end{equation} where $\sigma_{\mathrm{ZF}}$ accounts for the relaxation due to the static, randomly oriented local fields associated with the nuclear moments at the muon site. The spectra are well described by the function \begin{equation} A(t) = A_{1}G_{\mathrm{KT}}(t)\mathrm{exp}(-\Lambda t)+A_{\mathrm{BG}} , \label{eqn5:tay} \end{equation} where $A_{1}$ is the initial asymmetry, $\Lambda$ is the electronic relaxation rate, and $A_{\mathrm{BG}}$ is the time-independent background contribution from the muons stopped in the sample holder. Fitting both ZF-$\mu$SR spectra (Fig. 5) with Eq. (5) yields similar sets of parameters within the sensitivity of the instrument. In the superconducting state, if a spin-triplet component were present, an additional relaxation should be observed \cite{lnc2,rz1,SPA,li1,sro1}. It is clearly absent in Fig. 5, where identical relaxation signals can be observed on either side of the superconducting transition temperature. This leads to the conclusion that time-reversal symmetry is preserved in Nb$_{0.5}$Os$_{0.5}$ within the detection limit of $\mu$SR.\\ \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig5.eps} \caption{\label{Fig5:ZFM} Zero-field $\mu$SR spectra collected below (40 mK) and above (3.5 K) the superconducting transition temperature. The solid lines are fits to the function given in Eq. (5). } \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig6.eps} \caption{\label{Fig6:TFM} Representative TF $\mu$SR signals collected at (a) 3.5 K and (b) 0.1 K in an applied magnetic field of 30 mT. The solid lines are fits using Eq.
(6).} \end{figure} Transverse-field muon spin rotation (TF-$\mu$SR) measurements were done to gain information on the superconducting gap structure of Nb$_{0.5}$Os$_{0.5}$. Asymmetry spectra were recorded above (3.5 K) and below (0.1 K) the transition temperature $T_{c}$ in a transverse field of 30 mT, as shown in Fig. 6. The TF-$\mu$SR precession signals were fitted using an oscillatory decaying Gaussian function \begin{equation} G_{\mathrm{TF}}(t) = A_{1}\mathrm{exp}\left(\frac{-\sigma^{2}t^{2}}{2}\right)\mathrm{cos}(\omega_{1}t+\phi)+A_{2}\mathrm{cos}(\omega_{2}t+\phi) , \label{eqn6:Tranf} \end{equation} where $\omega_{1}$ and $\omega_{2}$ are the frequencies of the muon precession signal and the background signal, respectively, $\phi$ is the initial phase offset, and $\sigma$ is the Gaussian muon-spin relaxation rate. Figure 6(a) shows the signal in the normal state, where the depolarization rate is small, attributable to the homogeneous field distribution throughout the sample. The significant depolarization rate in the superconducting state shown in Fig. 6(b) is due to the flux line lattice (FLL) in the mixed state of the superconductor, which gives rise to an inhomogeneous field distribution. The depolarization arising from the static fields of the nuclear moments, $\sigma_{\mathrm{N}}$, is assumed to be temperature independent and adds in quadrature to the contribution from the field variation across the flux line lattice, $\sigma_{\mathrm{FLL}}$: \begin{equation} \sigma^{2} = \sigma_{\mathrm{N}}^{2}+\sigma_{\mathrm{FLL}}^{2} . \label{eqn7:sigma} \end{equation} The muon-spin relaxation rate in the superconducting state $\sigma_{\mathrm{FLL}}$ is related to the London magnetic penetration depth $\lambda$, and thus to the superfluid density $n_{s}$, by \begin{equation} \frac{\sigma_{\mathrm{FLL}}(T)}{\sigma_{\mathrm{FLL}}(0)} = \frac{\lambda^{-2}(T)}{\lambda^{-2}(0)} .
\label{eqn8:sfd} \end{equation} For an $\textit{s}$-wave BCS superconductor in the dirty limit, the temperature dependence of the London magnetic penetration depth is given by \begin{equation} \frac{\lambda^{-2}(T)}{\lambda^{-2}(0)} = \frac{\Delta(T)}{\Delta(0)}\mathrm{tanh}\left[\frac{\Delta(T)}{2k_{B}T}\right] , \label{eqn9:lpd} \end{equation} where $\Delta(T) = \Delta_{0}\,\delta(T/T_{c})$. The temperature dependence of the gap in the BCS approximation is given by the expression $\delta(T/T_{c}) = \mathrm{tanh}[1.82(1.018(T_{c}/T-1))^{0.51}]$. Taking the dirty-limit expression for Nb$_{0.5}$Os$_{0.5}$ and combining Eqs. (7), (8), and (9), a model is obtained for a dirty-limit single-gap $s$-wave superconductor, where $\sigma(T)$ above $T_{c}$ is equal to $\sigma_{\mathrm{N}}$ and below $T_{c}$ is given by Eq. (10), which contains contributions from both $\sigma_{\mathrm{N}}$ and $\sigma_{\mathrm{FLL}}$. \begin{equation} \sigma(T) = \sqrt{\sigma_{\mathrm{FLL}}^{2}(0)\frac{\Delta^{2}(T)}{\Delta^{2}(0)}\mathrm{tanh}^{2}\left[\frac{\Delta(T)}{2k_{B}T}\right]+\sigma_{\mathrm{N}}^{2}} . \label{eqn10:fs} \end{equation} \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig7.eps} \caption{\label{Fig7:swave} The temperature dependence of the muon-spin relaxation rate $\sigma$(T) collected at an applied field of 30 mT. The solid blue line shows the $s$-wave fit for a dirty-limit superconductor using Eq. (10).} \end{figure} The temperature dependence of the muon depolarization rate $\sigma$ was measured in an applied field of 30 mT, as shown in Fig. 7. The depolarization rate $\sigma$ remains temperature independent down to $T_{c}$, reflecting the static, randomly oriented nuclear magnetic moments; below $T_{c}$, $\sigma$ increases due to the formation of the well-ordered FLL. The best fit to the $\sigma(T)$ data was obtained with the single-gap BCS model (Eq. (10)), shown by the solid blue line in Fig.
7, where we have obtained $\sigma_{\mathrm{N}}$ = 0.366 $\pm$ 0.002 $\mu$s$^{-1}$, $\sigma$(0) = 0.444 $\pm$ 0.001 $\mu$s$^{-1}$, and $\Delta$(0) = 0.50 $\pm$ 0.02 meV. The value of $\alpha$ = $\Delta(0)/k_{B}T_{c}$ = 1.89 is close to the value ($\alpha$ = 1.81) obtained from the low temperature specific heat measurement. Thus, the TF-$\mu$SR measurements together with the specific heat measurement confirm that Nb$_{0.5}$Os$_{0.5}$ is an $s$-wave superconductor.\\ The penetration depth $\lambda$(0) at T = 0 K can be calculated directly from $\sigma_{\mathrm{FLL}}$(0) = 0.251 $\pm$ 0.001 $\mu$s$^{-1}$ using the relation \cite{JES,EHB} \begin{equation} \frac{\sigma_{\mathrm{FLL}}^2(0)}{\gamma_{\mu}^2} = 0.00371 \frac{\Phi_{0}^{2}}{\lambda^{4}(0)} , \label{eqn11:lam} \end{equation} where $\gamma_{\mu}$/2$\pi$ = 135.53 MHz/T is the muon gyromagnetic ratio and $\Phi_{0}$ is the magnetic flux quantum. The resulting penetration depth is $\lambda$(0) = 6538 $\pm$ 13 \text{\AA}. This estimate is slightly higher than $\lambda_{GL}$(0), which could be due to the dirty-limit superconductivity in Nb$_{0.5}$Os$_{0.5}$.\\ Uemura $\textit{et al.}$ showed in 1991 that superconductors can be classified as conventional or unconventional \cite{KK,YJU} based on the ratio of the transition temperature ($T_{c}$) to the Fermi temperature ($T_{F}$). It was shown that the unconventional, exotic superconductors fall in the range 0.01 $\leq$ $\frac{T_{c}}{T_{F}}$ $\leq$ 0.1. \begin{figure} \includegraphics[width=1.0\columnwidth]{Fig8.eps} \caption{\label{Fig8:up} The Uemura plot showing the superconducting transition temperature $T_{c}$ vs the effective Fermi temperature $T_{F}$, where Nb$_{0.5}$Os$_{0.5}$ is shown as a solid red square.
The other data points plotted between the blue solid lines represent the different families of unconventional superconductors.} \end{figure} The Fermi temperature can be calculated using the relation \begin{equation} k_{B}T_{F} = \frac{\hbar^{2}}{2}(3\pi^{2})^{2/3}\frac{n_{s}^{2/3}}{m_{e}[1+\lambda_{e-ph}]} , \label{eqn12:ft} \end{equation} where n$_{s}$ is the density of paired electrons and $\lambda_{e-ph}$ is the electron-phonon coupling constant. Using the Sommerfeld coefficient for Nb$_{0.5}$Os$_{0.5}$ \cite{ck}, we have calculated the number density of electrons n$_{e}$ = 2.94 $\times$ 10$^{30}$ $m^{-3}$. Since the estimated mean free path $\textit{l}$ (0.56 \text{\AA}) $\ll$ $\xi_{0}$ (14091 \text{\AA}), the density of paired electrons in Nb$_{0.5}$Os$_{0.5}$ is given by n$_{s}$ $\simeq$ n$_{e}$ $\frac{\textit{l}}{\xi_{0}}$ = 1.17 $\times$ 10$^{26}$ $m^{-3}$. This result is verified from the magnetic penetration depth $\lambda$ calculated from the muon analysis, where the density of paired electrons is given by n$_{s}$ = $\frac{m_{e}(1+ \lambda_{e-ph})}{\mu_{0}e^{2}\lambda^{2}}$ $\simeq$ 1.01 $\times$ 10$^{26}$ $m^{-3}$.\\ Using this value of n$_{s}$ in Eq. (12) yields $T_{F}$ = 662 K, giving the ratio $\frac{T_{c}}{T_{F}}$ = 0.0046, just outside the range of unconventional superconductors, as shown by the solid red square in Fig. 8, where the blue solid lines represent the band of unconventional superconductors. A similar result is obtained if we express the superfluid density in terms of the muon spin-relaxation rate $\sigma(0)$ $\propto$ $\lambda(0)^{-2}$ $\propto$ $\rho_{s}$(0) as in the original Uemura plot. \section{Conclusion} The transport, magnetization, and heat capacity measurements confirm type-II, $\textit{s}$-wave superconductivity in Nb$_{0.5}$Os$_{0.5}$ with transition temperature $T_{c}$ = 3.07 K. The lower and upper critical fields are estimated to be $H_{c1}\simeq$ 3.06 mT and $H_{c2}\simeq$ 5.4 T, respectively.
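For reproducibility, the headline numbers of the muon analysis can be cross-checked with a short script. This is only a sketch: it evaluates Eq. (11) with the fitted $\sigma_{\mathrm{FLL}}(0)$ and Eq. (12) with the estimated n$_{s}$; the electron-phonon coupling value $\lambda_{e-ph}=0.5$ used here is an illustrative assumption (the paper's own value follows from the specific-heat analysis and is not reproduced in this excerpt).

```python
import math

# quoted experimental inputs
SIGMA_FLL0 = 0.251e6          # s^-1, flux-line-lattice relaxation rate at T = 0
N_S = 1.17e26                 # m^-3, paired-electron density
T_CRIT = 3.07                 # K, transition temperature
LAMBDA_EPH = 0.5              # electron-phonon coupling (assumed illustrative value)

# physical constants
HBAR = 1.0546e-34             # J s
M_E = 9.109e-31               # kg
K_B = 1.381e-23               # J/K
PHI0 = 2.068e-15              # Wb, magnetic flux quantum
GAMMA_MU = 2 * math.pi * 135.53e6   # rad s^-1 T^-1, muon gyromagnetic ratio

# Eq. (11): sigma_FLL(0)^2 / gamma_mu^2 = 0.00371 Phi0^2 / lambda(0)^4
lam = (0.00371 * PHI0**2 * GAMMA_MU**2 / SIGMA_FLL0**2) ** 0.25
print(lam * 1e10)             # ~6.5e3 Angstrom, matching the quoted lambda(0)

# Eq. (12): k_B T_F = (hbar^2/2) (3 pi^2)^(2/3) n_s^(2/3) / (m_e (1 + lambda_eph))
t_f = HBAR**2 / 2 * (3 * math.pi**2) ** (2 / 3) * N_S ** (2 / 3) \
      / (M_E * (1 + LAMBDA_EPH) * K_B)
print(t_f, T_CRIT / t_f)      # T_F of a few hundred K; Tc/TF ~ 0.005 < 0.01
```

With these inputs the Uemura ratio lands just below the 0.01 boundary of the unconventional band, consistent with the classification discussed above.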
The TF-$\mu$SR measurements further confirm $\textit{s}$-wave superconductivity. The ZF-$\mu$SR measurements show no evidence of long-range magnetic ordering or of any additional relaxation channel in the superconducting state, confirming that time-reversal symmetry is preserved in Nb$_{0.5}$Os$_{0.5}$. This result is at odds with the expectation of time-reversal symmetry breaking in NCS superconductors arising from admixed (spin-singlet/spin-triplet) pairing states. Several other NCS superconductors (both weakly and strongly correlated) have been reported to show similar results, suggesting that some other mechanism may control the occurrence of TRSB in NCS superconductors. In order to understand the presence or absence of time-reversal symmetry breaking in NCS superconducting compounds, it is clearly important to search for new NCS superconductors. \section{Acknowledgments} R.~P.~S.\ acknowledges Science and Engineering Research Board, Government of India for the Ramanujan Fellowship through Grant No. SR/S2/RJN-83/2012 and Newton Bhabha funding. We thank ISIS, STFC, UK for the muon beamtime to conduct the $\mu$SR experiments.
\section{Introduction} \label{Intro} The paradigm example of a topological complexity invariant for dynamical systems is topological entropy. It measures the exponential growth, in time, of orbits distinguishable at finite precision and can be used to compare the complexity of dynamical systems defined on arbitrary compact metric spaces. Moreover, it is central to the powerful machinery of thermodynamic formalism. There are, however, two situations where entropy does not provide very much information, namely when it is either zero or infinite. In the latter case, mean topological dimension has been identified as a suitable substitute. Its theoretical significance is demonstrated, for example, by the fact that zero mean dimension is one of the few dynamical consequences of unique ergodicity \cite{LindenstraussWeiss2000MeanDimension}. Our focus here lies on the zero entropy regime, and in particular on the very onset of dynamical complexity and the breakdown of equicontinuity. We are looking for a dynamically defined positive real-valued quantity which \romanlist \item is an invariant of topological conjugacy and has other good properties; \item gives value zero to isometries and Morse-Smale systems; \item is able to detect, as test cases, the complexity inherent in the dynamics of Sturmian shifts or Denjoy homeomorphisms on the circle, by taking positive values for such systems. \listend There exist several concepts to describe the complexity of systems in the zero entropy regime (see, for example, \cite{Misiurewicz1981,Smital1986, MisiurewiczSmital1988,KolyadaSharkovsky1991,Carvalho1997, Ferenczi1997MeasureTheoreticComplexity,KatokThouvenot1997SlowEntropy,Ferenczi1999, BlanchardHostMaas2000TopologicalComplexity, HasselblattKatok2002HandbookPrincipalStructures, FerencziPark2007,HuangParkYe2007, HuangYe2009,ChengLi2010,DouHuangPark2011,Marco2013,KongChen2014}).
Some of them have properties that may be considered shortcomings, although this partly depends on the viewpoint and the particular purpose one has in mind. To be more precise, let us consider one example of a standard approach to measure the complexity of zero entropy systems, namely, the (modified) power entropy \cite{HasselblattKatok2002HandbookPrincipalStructures}. In the context of tiling spaces and minimal symbolic subshifts, power entropy is more commonly known as polynomial word complexity and is a well-established tool to describe the complexity of aperiodic sequences. However, it turns out that power entropy gives positive values to Morse-Smale systems, whereas modified power entropy is too coarse to distinguish Sturmian subshifts or Denjoy examples from irrational rotations. We are thus taking an alternative and complementary direction, which leads us to define the notions of \emph{asymptotic separation numbers} and \emph{amorphic complexity}. These are based on an asymptotic notion of separation, which is the main qualitative difference from the previous two concepts, since the latter rely in their definition on the classical Bowen-Dinaburg/Hamming metrics which consider only finite time-scales. As a consequence, ergodic theorems can be applied in a more or less direct way to compute or estimate amorphic complexity in many situations. In order to fix ideas, we concentrate on the dynamics of continuous maps defined on metric spaces. Continuous-time systems and more general group actions will be treated in future work.\medskip Let $(X,d)$ be a metric space and $f:X\to X$. Given $x,y\in X$, $\delta>0$, $\nu\in(0,1]$ and $n\in\N$ we let \begin{equation}\label{e.separation_count} \countsep{n}(f,\delta,x,y)\ := \ \#\left\{0\leq k<n\;|\; d(f^k(x),f^k(y))\geq\delta\right\}. \end{equation} We say that $x$ and $y$ are \emph{$(f,\delta,\nu)$-separated} if \[ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,\delta,x,y)}{n}\ \geq \ \nu \ .
\] A subset $S\subseteq X$ is said to be \emph{$(f,\delta,\nu)$-separated} if all distinct points $x,y\in S$ are $(f,\delta,\nu)$-separated. The {\em (asymptotic) separation number} $\Sep(f,\delta,\nu)$, for distance $\delta>0$ and frequency $\nu\in(0,1]$, is then defined as the largest cardinality of an $(f,\delta,\nu)$-separated set in $X$. If these quantities are finite for all $\delta,\nu>0$, we say $f$ has {\em finite separation numbers}, otherwise we say it has {\em infinite separation numbers}. Further, if $\Sep(f,\delta,\nu)$ is uniformly bounded in $\nu$ for all $\delta>0$, we say that $f$ has {\em bounded separation numbers}, otherwise we say {\em separation numbers are unbounded}. These notions provide a first qualitative indication concerning the complexity of a system. Roughly speaking, finite but unbounded separation numbers correspond to dynamics of intermediate complexity, which we are mainly interested in here. Once a system behaves `chaotically', in the sense of positive entropy or weak mixing, separation numbers become infinite. \begin{theorem} Suppose $X$ is a compact metric space and $f:X\to X$ is continuous. If $f$ has positive topological entropy or is weakly mixing with respect to some invariant probability measure $\mu$ with non-trivial support, then it has infinite separation numbers. \end{theorem} The proof is given in Section~\ref{QualitativeBehaviour}. Obviously, if $f$ is an isometry or, more generally, equicontinuous, then its separation numbers are bounded. Moving away from equicontinuity, one encounters the class of almost automorphic systems, which are central objects of study in topological dynamics and include many examples of both theoretical and practical importance \cite{auslander1988minimal}. At least in the minimal case, separation numbers are suited to describe this transition, as the following result shows.
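Before stating the result, it may help to see the separation mechanism at work in the simplest almost automorphic examples. Coding the irrational rotation $\xi\mapsto\xi+\alpha$ by the interval $[0,\alpha)$ — the Sturmian construction — two orbits whose base points lie a distance $\eps$ apart receive different symbols exactly when the rotation orbit visits a union of two intervals of total length $2\eps$, so by unique ergodicity such a pair is separated with frequency $2\eps$: separation numbers are finite but unbounded as $\nu\to 0$. The following numerical sketch (golden-mean rotation number and parameter values chosen purely for illustration) confirms this:

```python
import math

ALPHA = (math.sqrt(5) - 1) / 2   # golden-mean rotation number (illustrative)
EPS = 0.01                        # distance between the two base points
N = 200_000                       # number of iterates used for the frequency

def symbol(t):
    """Sturmian coding of the rotation orbit by the interval [0, alpha)."""
    return 1 if (t % 1.0) < ALPHA else 0

# frequency with which the codings of the orbits of 0 and eps disagree;
# disagreement happens iff k*alpha mod 1 lies in [alpha-eps, alpha) or [1-eps, 1)
diff = sum(symbol(k * ALPHA) != symbol(k * ALPHA + EPS) for k in range(N))
print(diff / N)   # tends to 2*eps = 0.02 by unique ergodicity
```

Halving $\eps$ halves the separation frequency, which is the source of the unbounded growth of $\Sep(f,\delta,\nu)$ as $\nu\to 0$ in such systems.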
In order to state it, suppose that $(X,d)$ and $(\Xi,\rho)$ are metric spaces and $f:X\to X$ and $g:\Xi\to\Xi$ are continuous. We say that $f$ is an extension of $g$ if there exists a continuous onto map $h:X\to \Xi$ such that $h\circ f=g\circ h$. The map $f$ is called an {\em almost 1-1 extension} of $g$ if the set $\{\xi\in\Xi\mid \#h^{-1}(\xi)=1\}$ is dense in $\Xi$. In the case that $g$ is minimal, this condition can be replaced by the weaker assumption that there exists one $\xi\in\Xi$ with $\#h^{-1}(\xi)=1$. We further say that $f$ is an {\em almost sure 1-1 extension} if the set $\{\xi\in \Xi\mid \#h^{-1}(\xi)>1\}$ has measure zero with respect to every $g$-invariant probability measure $\mu$ on $\Xi$.\foot{Note that if $g$ is equicontinuous and minimal, then it is uniquely ergodic. Hence, there is only one measure to consider in this case.} Due to Veech's Structure Theorem \cite{veech1965almost}, almost automorphic minimal systems can be defined as almost 1-1 extensions of equicontinuous minimal systems. \begin{theorem} \label{t.automorphic_systems} Suppose $X$ is a compact metric space and $f:X\to X$ is a homeomorphism. \alphlist \item If $f$ is minimal and almost automorphic, but not equicontinuous, then $f$ has unbounded separation numbers. \item If $f$ is an almost sure 1-1 extension of an equicontinuous system, then $f$ has finite separation numbers. \listend \end{theorem} Again, the proof is given in Section~\ref{QualitativeBehaviour}. Two examples for case (b) discussed below are regular Toeplitz flows and Delone dynamical systems arising from cut and project quasicrystals. We refer to \cite{LiTuYe2014,DownarowiczGlasner2015} for some recent progress on extensions of minimal equicontinuous systems.\medskip In order to obtain quantitative information, we proceed to study the scaling behaviour of separation numbers as the separation frequency $\nu$ goes to zero. In principle, one may consider arbitrary growth rates (see Section~\ref{sec_definitions}). 
However, as the examples we discuss all indicate, it is polynomial growth which is the most relevant. Given $\delta>0$, we let \begin{equation}\label{d.delta_amorphic_complexity} \uac(f,\delta)\ := \ \varliminf_{\nu\to 0} \frac{\log \Sep(f,\delta,\nu)}{-\log \nu} \ ,\qquad \oac(f,\delta) \ := \ \varlimsup_{\nu\to 0} \frac{\log\Sep(f,\delta,\nu)}{-\log \nu} \end{equation} and define the {\em lower}, respectively {\em upper amorphic complexity of $f$} as \begin{equation}\label{e.amorphic_complexity} \uac(f) \ := \ \sup_{\delta>0} \uac(f,\delta) \eqand \oac(f) \ := \ \sup_{\delta>0} \oac(f,\delta) \ . \end{equation} If both values coincide, $\fsc(f):=\uac(f)=\oac(f)$ is called the {\em amorphic complexity of $f$}. We note once more that the main difference to the notion of (modified) power entropy is the fact that we use an asymptotic concept of separation, and the scaling behaviour that is measured is not the one with respect to time, but that with respect to the separation frequency. Somewhat surprisingly, this makes amorphic complexity quite well-accessible to rigorous computations and estimates. The reason is that separation frequencies often correspond to certain ergodic averages or visiting frequencies, which can be determined by the application of ergodic theorems. We have the following basic properties. \begin{proposition} Suppose $X,Y$ are compact metric spaces and $f:X\to X,\ g:Y\to Y$ continuous maps. Then the following statements hold. \alphlist \item {\em Factor relation:} If $g$ is a factor of $f$, then $\oac(f)\geq\oac(g)$ and $\uac(f)\geq \uac(g)$. In particular, amorphic complexity is an invariant of topological conjugacy. \item {\em Power invariance:} For all $m\in\N$ we have $\oac(f^m)=\oac(f)$ and $\uac(f^m)=\uac(f)$. \item {\em Product formula:} If upper and lower amorphic complexity coincide for both $f$ and $g$, then the same holds for $f\times g$ and we have $\fsc(f\times g)=\fsc(f)+\fsc(g)$. 
Otherwise, we have $\oac(f\times g)\leq \oac(f)+\oac(g)$ and $\uac(f\times g)\geq \uac(f)+\uac(g)$. \item {\em Commutation invariance:} $\oac(f\circ g)=\oac(g\circ f)$ and $\uac(f\circ g)=\uac(g\circ f)$. \listend \end{proposition} As the power invariance indicates, amorphic complexity behaves quite differently from topological entropy in some aspects. In this context, it should also be noted that no variational principle can be expected for amorphic complexity. This is a direct consequence of requirement (iii) above, which is met by amorphic complexity (see Proposition \ref{p.ac_morse_iso_Sturmian_Denjoy}). The reason is that, since Sturmian subshifts, Denjoy examples and irrational rotations are uniquely ergodic and measure-theoretically isomorphic, they cannot be distinguished on a measure-theoretic level. Hence, no reasonable analogue to the variational principle of topological entropy can be satisfied. \begin{proposition}\label{p.ac_morse_iso_Sturmian_Denjoy} Amorphic complexity is zero for all isometries and Morse-Smale systems, but equals one for Sturmian subshifts and Denjoy examples on the circle. \end{proposition} The proof is given in Sections \ref{Isometries} and \ref{BasicExamples}. By means of some elementary examples in Section~\ref{PowerEntropy}, we will also demonstrate that no direct relations -- in terms of inequalities -- exist between amorphic complexity and the notions of power entropy and modified power entropy.\medskip The arguments in the proof of Theorem~\ref{t.automorphic_systems} can be quantified, at least to some extent, to obtain an upper bound on amorphic complexity for minimal almost sure 1-1 extensions of isometries. In rough terms, the result reads as follows. Details will be given in Section~\ref{QuantitativeAutomorphic}. By $\overline{\Dim}_B(A)$ we denote the upper box dimension of a totally bounded subset $A$ of a metric space.
\begin{theorem}\label{t.automorphic_quantitative_intro} Suppose $X$ and $\Xi$ are compact metric spaces and $f:X\to X$ is an almost sure 1-1 extension of a minimal isometry $g:\Xi\to\Xi$ with factor map $h$. Further, assume that the upper box dimension of $\Xi$ is finite and strictly positive. Then \begin{equation} \label{e.automorphic_estimate} \oac(f) \ \leq \ \frac{\gamma(h)\cdot\overline\Dim_B(\Xi)} {\overline\Dim_B(\Xi)-\sup_{\delta>0}\overline\Dim_B(E_\delta)} \ , \end{equation} where $E_\delta=\{\xi\in\Xi\mid \diam(h^{-1}(\xi))\geq \delta\}$ and $\gamma(h)$ is a scaling factor depending on the local properties of the factor map $h$. \end{theorem} The proof is given in Section~\ref{QuantitativeAutomorphic}. It should be mentioned that, at least according to our current understanding, this result is of a rather abstract nature. The reason is that the scaling factor $\gamma(h)$, defined by \eqref{e.gamma_def}, seems to be difficult to determine in specific examples. However, it turns out that in many cases direct methods can be used instead to obtain improved explicit estimates.\medskip In this direction, we will first investigate {\em regular Toeplitz flows} in Section~\ref{RegularToeplitzFlows}. Given a finite alphabet $A$, a sequence $\omega=(\omega_k)_{k\in\I}\in A^\I$ with $\I=\N_0$ or $\Z$ is called {\em Toeplitz} if for all $k\in\I$ there exists $p\in\N$ such that $\omega_{k+p\ell}=\omega_k$ for all $\ell\in\N$. In other words, every symbol in a Toeplitz sequence occurs periodically. Thus, if we let $\Per(p,\omega)=\{k\in\I\mid \omega_{k+p\ell}=\omega_k \ \textrm{for all}\ \ell\in\N\}$, then $\bigcup_{p\in\N}\Per(p,\omega)=\I$. By $D(p)=\#(\Per(p,\omega)\cap[0,p-1])/p$ we denote the density of the $p$-periodic positions. If $\lim_{p\to\infty} D(p)=1$, then the Toeplitz sequence is called {\em regular}.
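To make the notion of regularity concrete, the densities $D(p)$ can be checked numerically for the paperfolding sequence discussed next. The sketch below uses a standard arithmetic description of that sequence (the $n$-th symbol, $n\geq 1$, is $1$ precisely when the odd part of $n$ is congruent to $1$ modulo $4$) and approximates $\Per(p,\omega)$ on a finite truncation; the computed densities along the periods $p=2^\ell$ increase towards $1$, consistent with $D(2^\ell)=1-2^{1-\ell}$.

```python
def paperfold(n):
    """Paperfolding sequence: symbol is 1 iff the odd part of n is 1 mod 4."""
    while n % 2 == 0:
        n //= 2
    return 1 if n % 4 == 1 else 0

N = 1 << 16
w = [paperfold(n) for n in range(1, N + 1)]   # finite truncation of omega

def density(p):
    """Fraction of positions k in [0, p) with w[k + p*l] = w[k] for all l."""
    periodic = 0
    for k in range(p):
        if all(w[k + p * l] == w[k] for l in range(1, (N - 1 - k) // p + 1)):
            periodic += 1
    return periodic / p

for p in (4, 8, 16, 32):
    print(p, density(p))   # 0.5, 0.75, 0.875, 0.9375 -> approaching 1
```

Since the densities tend to $1$ along the weak periodic structure $p_\ell=2^\ell$, the sequence is regular in the sense of the definition above.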
A well-known example of a regular Toeplitz sequence is the paperfolding sequence, also known as the dragon curve sequence \cite{AlloucheBacher1992}. We call a sequence $(p_\ell)_{\ell\in\N}$ of integers such that $p_{\ell+1}$ is a multiple of $p_\ell$ for all $\ell\in\N$ and $\bigcup_{\ell\in\N}\Per(p_\ell,\omega)=\I$ a {\em weak periodic structure} for $\omega$. More details are given in Section~\ref{RegularToeplitzFlows}. We denote the shift orbit closure of $\omega$ by $\Sigma_\omega$ such that $(\Sigma_\omega,\sigma)$ is the subshift generated by $\omega$. \begin{theorem} \label{t.toeplitz} Suppose $\omega$ is a non-periodic regular Toeplitz sequence with weak periodic structure $(p_\ell)_{\ell\in\N}$. Then \[ \oac\left(\left.\sigma\right|_{\Sigma_\omega}\right) \ \leq \ \varlimsup_{\ell\to\infty}\frac{\log p_{\ell+1}}{\log(1-D(p_\ell))} \ . \] \end{theorem} In Section~\ref{RegularToeplitzFlows}, we demonstrate by means of examples that this estimate is sharp and that a dense set of values in $[1,\infty)$ is attained (Theorem~\ref{t.toeplitz_sharpbound_examples} and Corollary~\ref{c.toeplitz_densevalues}). A more comprehensive treatment of amorphic complexity for symbolic systems of intermediate complexity (zero topological entropy) will be given in \cite{FG2014} (see also Section \ref{BesicovitchSpace}). In Section~\ref{PinchedSystems}, we take a closer look at strange non-chaotic attractors (SNA) appearing in so-called pinched skew products. The latter are known as paradigm examples for the occurrence of SNA \cite{grebogi/ott/pelikan/yorke:1984,keller:1996,glendinning/jaeger/keller:2006}. In this case technical issues prevent a straightforward computation of amorphic complexity, and we are only able to apply a modified version of the concept. 
However, since the attempt to distinguish SNA and smooth (non-strange) attractors in skew product systems by means of topological invariants has been the origin of our investigations, it seemed important to include these findings.\medskip Finally, we also want to include a research announcement of a result from the forthcoming paper \cite{FGJ2014AmorphicComplexityQuasicrystals}, which fits well into the above discussion. Suppose $\tilde L$ is a cocompact discrete subgroup of $\R^m\times\R^D$ such that $\pi_1:\tilde L\to \R^m$ is injective and $\pi_2:\tilde L\to \R^D$ has dense image. Further, assume that $W\subset \R^D$ is compact and satisfies $W=\overline{\inte(W)}$. The pair $(\tilde L,W)$ is called a {\em cut and project scheme} and defines a Delone subset $\Lambda(W):=\pi_1\big((\R^m\times W)\cap \tilde L\big)$ of $\R^m$. A natural $\R^m$-action on the space of Delone sets in $\R^m$ is given by $(t,\Lambda)\mapsto \Lambda-t$. Taking the orbit closure $\Omega(\Lambda(W)) := \overline{\{\Lambda(W)-t\mid t\in\R^m\}}$ of $\Lambda(W)$, in a suitable topology, we obtain a {\em Delone dynamical system} $(\Omega(\Lambda(W)),\R^m)$ whose dynamical properties are closely related to the geometry of the Delone set $\Lambda(W)$. We refer to \cite{Schlottmann1999GeneralizedModelSets,Moody2000ModelSetsSurvey, LagariasPleasants2003RepetitiveDeloneSets,BaakeLenzMoody2007Characterization} and references therein for further details. For the amorphic complexity, adapted to general actions of amenable groups, we obtain \begin{theorem}[\cite{FGJ2014AmorphicComplexityQuasicrystals}] Suppose $(\tilde L,W)$ is a cut and project scheme in $\R^m\times \R^D$ and $(\Omega(\Lambda(W)),\R^m)$ is the associated Delone dynamical system. Then \begin{equation}\label{e.quasicrystal_estimate} \oac(\Omega(\Lambda(W)),\R^m) \ \leq \ \frac{D}{D-\overline\Dim_B(\partial W)} \ . 
\end{equation} \end{theorem} As in the case of regular Toeplitz flows, it can be demonstrated by means of examples that this estimate is sharp. At the same time, equality does not always hold. It is well-known that under the above assumptions the dynamical system $(\Omega(\Lambda(W)),\R^m)$ is an almost 1-1 extension of a minimal and isometric $\R^m$-action on a $D$-dimensional torus. Moreover, it turns out that with the notions of Theorem~\ref{t.automorphic_quantitative_intro} we have $\overline\Dim_B(\partial W)=\overline\Dim_B(E_\delta)$ for all $\delta>0$. Thus, \eqref{e.quasicrystal_estimate} can be interpreted as a special case of \eqref{e.automorphic_estimate}, with $\gamma(h)=1$. However, as we have mentioned, the proof is independent and based on more direct arguments.\medskip \noindent{\bf Acknowledgments.} The above results were first presented during the conference `Complexity and Dimension Theory of Skew Product Systems' in Vienna in September 2013, and we would like to thank the organisers Henk Bruin and Roland Zweim\"uller for creating this opportunity as well as the Erwin-Schr\"odinger-Institute for its hospitality and the superb conditions provided during the event. T.\ J.\ also thanks the organisers of the `Dynamics and Numbers activity' (MPIM Bonn, June--July 2014), during which this work was finalized. We are indebted to Tomasz Downarowicz for his thoughtful remarks, and in particular for suggesting the study of Toeplitz systems. All authors acknowledge support of the German Research Council (Emmy Noether Grant Ja 1721/2-1) and M.\ G.\ has been supported by a doctoral scholarship of the `Studienstiftung des deutschen Volkes'. \section{Qualitative behaviour of asymptotic separation numbers} \label{QualitativeBehaviour} Let $(X,\Bcal,\mu)$ be a probability space and let $\mu$ be invariant with respect to the measurable map $f:X\to X$. For the definition of ergodic and weak-mixing measures, respectively, see for example \cite{Walters1982}. 
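Before the formal arguments, a quick numerical illustration of why `chaotic' behaviour forces infinite separation numbers may be helpful. This is only a sketch: it uses the full shift over $\{0,1\}$ with a metric in which two sequences are $\delta$-apart whenever their $0$-th symbols differ. For the Bernoulli$(1/2,1/2)$ measure, which is weak-mixing, a typical pair of points disagrees in the $0$-th coordinate with asymptotic frequency $1/2$, so any number of typical points forms a pairwise $(\sigma,\delta,\nu)$-separated set for every $\nu<1/2$:

```python
import itertools
import random

random.seed(1)
N, M = 50_000, 5   # truncated orbit length and number of sample points

# M "typical" points of the full shift over {0, 1}; shifting and reading the
# 0-th coordinate amounts to comparing the sequences index by index
pts = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]

for x, y in itertools.combinations(pts, 2):
    freq = sum(a != b for a, b in zip(x, y)) / N
    print(round(freq, 3))   # each close to 1/2: every pair is separated
```

Increasing $M$ produces arbitrarily large separated sets, which is exactly the conclusion of the theorems proved below.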
\begin{theorem}[{\cite[Theorem 4.10.6]{BrinStuck2002}}] \label{theorem_equivalence_weak_mixing_ergodicity} The following statements are equivalent \begin{enumerate} \item[(a)] $\mu$ is weak-mixing with respect to $f$, \item[(b)] $\mu^m=\Times_{k=1}^{m}\mu$ is ergodic with respect to $\Times_{k=1}^m f$ for all $m\geq 2$. \end{enumerate} \end{theorem} \begin{theorem} Let $(X,d)$ be a metric space. Suppose $f:X\to X$ is Borel measurable and $\mu$ is a Borel probability measure invariant under $f$. Furthermore, assume that $\mu$ is weak-mixing with respect to $f$ and its support is not a single point. Then $f$ has infinite separation numbers. \end{theorem} \begin{proof} For each $\delta>0$ we define the function $h_\delta:X^2\to\{0,1\}$ as $h_\delta(z,w):=\Theta(d(z,w)-\delta)$ where $\Theta:\R\to\{0,1\}$ is the Heaviside step function. Note that \[ \frac{1}{n}\countsep{n}(f,\delta,x,y) \ = \ \frac{1}{n}\sum\limits_{k=0}^{n-1}h_\delta\big(f^k(x),f^k(y)\big) \ . \] Since $\mu$ is not supported on a single point, we can find $\delta_0>0$ and $\nu_0>0$ such that for all $\delta\leq\delta_0$ we have \begin{align}\label{thm_weak_mixing_heaviside_prop} \int h_\delta d\mu^2\ \geq \ \nu_0 \ . \end{align} (Note that $\int h_{\delta'}d\mu^2\geq \int h_\delta d\mu^2$ for $\delta'\leq\delta$.) Fix $\delta\in(0,\delta_0]$, $\nu\in (0,\nu_0]$ and let \begin{align} \label{pairwise_frequency _separation} \phi_m:X^m\to\R^{m(m-1)/2}: \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_m \end{pmatrix} \mapsto\lim\limits_{n\to\infty}\frac{1}{n}\sum\limits_{k=0}^{n-1} \begin{pmatrix} h_\delta(f^k(x_1),f^k(x_2))\\ h_\delta(f^k(x_1),f^k(x_3))\\ \vdots\\ h_\delta(f^k(x_{m-1}),f^k(x_m)) \end{pmatrix} \end{align} for each $m\geq 2$. Since $h_\delta$ is bounded, observe that the functions $(x_1,\dots,x_m)\mapsto h_\delta(x_i,x_j)$ with $1\leq i<j\leq m$ are in $L^1(\mu^m)$. By ergodicity of $\mu^m$, the limits in \eqref{pairwise_frequency _separation} exist $\mu^m$-almost everywhere. 
Further, $\phi_m$ is $\mu^m$-almost surely constant and all its entries are different from zero, since we have \begin{eqnarray*} \lefteqn{\lim\limits_{n\to\infty}\frac{1}{n}\sum\limits_{k=0}^{n-1} h_\delta\left(f^k(x_i),f^k(x_j)\right) \ = \ \int\limits_{X^m}h_\delta(x_i,x_j)d\mu^m(x_1,\dots,x_m) } \\ &= & \int\limits_{X^2}h_\delta(x_i,x_j)d\mu(x_i)d\mu(x_j) \ \geq \ \nu_0 \ > \ 0 \hspace{8eM} \end{eqnarray*} for $1\leq i<j\leq m$ by \eqref{thm_weak_mixing_heaviside_prop}. Thus, the above implies that for each $m\in\N$ there exist at least $m$ points that are pairwise $(f,\delta,\nu)$-separated, so that \[ \Sep(f,\delta,\nu) \ \geq \ m \ . \] Since $m$ was arbitrary and the pair $(\delta,\nu)$ is fixed, we get that $\Sep(f,\delta,\nu)$ is infinite. \end{proof}\smallskip The analogous statement for maps with positive topological entropy is a direct consequence of a result of Downarowicz in \cite{Downarowicz2014}. In order to state it, we say that two points $x$ and $y$ in a metric space $(X,d)$ are \emph{DC2-scrambled with respect to $f$} if the following two conditions are fulfilled \begin{eqnarray} \forall\delta>0&:&\varlimsup\limits_{n\to\infty} \frac{\#\left\{0\leq k<n\;|\;d(f^k(x),f^k(y))<\delta_{\phantom 0}\right\}} {n} \ = \ 1 \ ,\nonumber\\ \exists\delta_0>0&:&\varliminf\limits_{n\to\infty} \frac{\#\left\{0\leq k<n\;|\;d(f^k(x),f^k(y))<\delta_0\right\}} {n}\ < \ 1 \ .\label{e.scrambling2} \end{eqnarray} Furthermore, we say that a subset $S\subseteq X$ is \emph{DC2-scrambled} if any pair $x,y\in S$ with $x\neq y$ is DC2-scrambled. The set $S$ is called \emph{uniformly DC2-scrambled} if the $\delta_0$'s and the lower frequencies in \eqref{e.scrambling2} are uniform for all pairs $x,y\in S$ with $x\neq y$. Now by \cite[Theorem 1.2]{Downarowicz2014}, if $f$ has positive topological entropy, then there exists an uncountable DC2-scrambled set $S$, and as stated in \cite[Remark 2]{Downarowicz2014} this set can be chosen uniformly DC2-scrambled. 
It is then obvious from \eqref{e.scrambling2} that the points in $S$ are pairwise $(f,\delta,\nu)$-separated for the respective parameters $\delta,\nu>0$, i.e.\ $\Sep(f,\delta,\nu)=\infty$. Thus, we obtain \begin{theorem} Let $(X,d)$ be a compact metric space. Suppose $f:X\to X$ is a continuous map with positive topological entropy. Then $f$ has infinite separation numbers. \end{theorem} \smallskip We now turn to the opposite direction and aim to show that almost sure 1-1 extensions of equicontinuous systems have finite separation numbers. In order to do so, we need to introduce some further notions and preliminary statements. Suppose $(X,d)$ and $(\Xi,\rho)$ are compact metric spaces and $f:X\to X$ is an extension of $g:\Xi\to \Xi$ with factor map $h:X\to \Xi$. For $x\in X$, define the {\em fibre of $x$} as $F_x:=h^{-1}(h(x))$. Denote the collection of fibres by $\cF:=\{F_x \mid x\in X\}$. Given $\delta>0$, let \[ \textstyle\cF_{\delta}\ := \ \{x\in X\;|\;\diam(F_x)\geq\delta\} \ = \ \bigcup_{\stackrel{F\ssq\cF}{\diam(F)\geq\delta}} F \ . \] Further, let $\cF_{>0}:=\bigcup_{\delta>0}\cF_\delta$, $E_\delta:=h(\cF_\delta)$ and $E:=h(\cF_{>0})$. Obviously, both $\cF_\delta$ and $E_\delta$ are decreasing in $\delta$. The next lemma is well-known and we omit the easy proof. \begin{lemma}\label{lemma_E_delta_closed} The set $\cF_\delta$ is closed for all $\delta>0$. \end{lemma} Note that as a direct consequence the sets $\cF_{>0}$, $E_\delta$ and $E$ are Borel measurable. The following basic observation will be crucial in the proof of the next theorem. From now on, we denote by $B_{\eps}(A)$ for $\eps>0$ the open $\eps$-neighborhood of a subset $A$ of a metric space. \begin{lemma} \label{l.eta_function} For all $\delta>0$ and $\varepsilon>0$ there exists $\eta=\eta_\delta(\eps)>0$ such that if $x,y\in X$ satisfy $d(x,y)\geq\delta$ and $\rho(h(x),h(y))<\eta$, then $h(x)$ and $h(y)$ are contained in $B_\eps(E_\delta)$. 
\end{lemma} \begin{proof} Assume for a contradiction that the statement is false. Then there are $\delta, \varepsilon>0$ and sequences $(x_k)_{k\in\N}$, $(y_k)_{k\in\N}$ in $X$ such that $h(x_k)\notin B_\eps(E_\delta)$ or $h(y_k)\notin B_\eps(E_\delta)$ and $d(x_k,y_k)\geq\delta$ for all $k\in\N$, but $\rho(h(x_k),h(y_k))\to 0$ as $k\to\infty$. By going over to subsequences if necessary, we may assume that $(h(x_k))_{k\in\N}$ lies in $X\backslash B_\eps(E_\delta)$ and that $(x_k)_{k\in\N}$ and $(y_k)_{k\in\N}$ converge. Let $x:=\lim_{k\to\infty} x_k$ and $y:=\lim_{k\to\infty} y_k$. Then $d(x,y)\geq\delta$ and $h(x) =\kLim h(x_k) \notin B_\eps(E_\delta)$. However, $h(x)=h(y)$ and thus $\diam(F_x)=\diam(F_y)\geq\delta$, such that $x\in\cF_\delta$, which is the required contradiction. \end{proof} \begin{theorem}\label{t.finite_separation_numbers} Let $f:X\to X$ be a continuous map. Further, assume that $f$ is an almost sure 1-1 extension of an isometry $g:\Xi\to\Xi$. Then $f$ has finite separation numbers. \end{theorem} Note that this implies Theorem~\ref{t.automorphic_systems}(b), since any equicontinuous system is an isometry with respect to an equivalent metric. \begin{proof} Denote by $\cM(g)$ the set of all $g$-invariant Borel probability measures on $\Xi$. Fix $\delta>0$ and $\nu>0$. We claim that since $\mu(E_\delta)\leq \mu(E)=0$ for all $\mu\in\cM(g)$, there exists $\eps>0$ such that \begin{equation} \label{e.eps_choice} \mu\left(\overline{B_\eps(E_\delta)}\right) \ < \ \nu \quad \textrm{for all } \mu\in\cM(g) \ . \end{equation} Otherwise, it would be possible to find a sequence $\mu_n\in\cM(g)$ with $\mu_n\big(\overline{B_{1/n}(E_\delta)}\big)\geq \nu$, which can be chosen such that it converges to some $\mu\in\cM(g)$ in the weak-$\ast$-topology. If $\varphi_m(\xi):=\max\{1-m\cdot d(\xi,B_{1/m}(E_\delta)),0\}$, then we have $\int_\Xi\varphi_m\ d\mu_n\geq \nu$ for all $n\geq m$ and hence $\int_\Xi\varphi_m\ d\mu\geq \nu$ for all $m\in\N$. 
However, this implies $\mu(E_\delta)\geq \nu$ by dominated convergence, contradicting our assumptions. Hence, we may choose $\eps>0$ as in \eqref{e.eps_choice}. This, in turn, implies that \begin{equation}\label{e.visits_to_B_eps} \varlimsup_{n\to\infty}\frac{\#\big\{0\leq k<n \mid g^k(\xi)\in \overline{B_\eps(E_\delta)}\big\}}{n} \ < \ \nu \end{equation} for all $\xi\in \Xi$. If this were not the case, it would again be possible to construct a $g$-invariant measure $\mu$ contradicting \eqref{e.eps_choice}, this time as a limit of finite sums $\mu_{\ell}:=\frac{1}{n_{\ell}}\sum_{k=0}^{n_{\ell}-1} \delta_{g^k(\xi)}$ of weighted Dirac measures for some $\xi\in\Xi$ that does not satisfy \eqref{e.visits_to_B_eps}. (Note that in this situation we have $\mu_{\ell}\big(\overline{B_\eps(E_\delta)}\big)\geq\nu$ for all $\ell\in\N$, and this inequality carries over to the limit $\mu$ by the Portmanteau Theorem.) Hence, given any pair $x,y\in X$, the frequency with which the iterates of $h(x)$ and $h(y)$ simultaneously visit $\overline{B_\eps(E_\delta)}$ is smaller than $\nu$. Together with Lemma \ref{l.eta_function}, this implies that if $\rho(h(x),h(y))<\eta_{\delta}(\eps)$, then the points $x$ and $y$ cannot be $(f,\delta,\nu)$-separated. Thus, if $S\ssq X$ is an $(f,\delta,\nu)$-separated set, then the set $h(S)$ must be $\eta_\delta(\eps)$-separated (compare Section \ref{BesicovitchSpace}) with respect to the metric $\rho$. By compactness, the maximal cardinality $N$ of an $\eta_\delta(\eps)$-separated set in $\Xi$ is finite. We obtain \begin{equation}\label{e.automorphic_Sep_bound} \Sep(f,\delta,\nu) \ \leq \ N \ . \end{equation} Since $\delta>0$ and $\nu>0$ were arbitrary, this completes the proof. \end{proof} As immediate consequences, we obtain \begin{corollary} If for all $\delta>0$ the set $E_{\delta}$ is finite and contains no periodic point, then $f$ has finite separation numbers.
\end{corollary} \begin{corollary} If $\lim_{n\to\infty}\diam\big(F_{f^n(x)}\big)=0$ for all $x\in X$, then $f$ has finite separation numbers. \end{corollary} For the second corollary, note that if $\mu(E_\delta)>0$ for some $\delta>0$ and some $\mu\in\cM(g)$, then Poincaré's Recurrence Theorem would yield a point $x\in X$ with $\diam\big(F_{f^n(x)}\big)\geq\delta$ for infinitely many $n\in\N$, a contradiction. It remains to prove part (a) of Theorem~\ref{t.automorphic_systems}, which we restate as \begin{theorem} \label{t.unbounded_separation} Let $f:X\to X$ be a continuous map. Further, assume that $f$ is a minimal almost 1-1 extension of an isometry $g:\Xi\to \Xi$ such that the factor map $h$ is not injective. Then $f$ has unbounded separation numbers. \end{theorem} For the proof, we will again need two preliminary lemmas. Given $x,y\in X$ and $\delta>0$, we let \begin{equation} \label{e.separation_frequency} \nu(f,\delta,x,y) \ := \ \varlimsup_{n\to\infty} \ntel\countsep{n}(f,\delta,x,y) \ . \end{equation} \begin{lemma} \label{l.point_separation} Suppose $V_1,V_2\ssq \Xi$ are two open sets which satisfy $d(h^{-1}(V_1),h^{-1}(V_2))\geq \delta$. Then $\nu(f,\delta,x_1,x_2)>0$ for all $x_1\in h^{-1}(V_1)$ and $x_2\in h^{-1}(V_2)$. \end{lemma} \begin{proof} Let $\xi_1:=h(x_1)$ and $\xi_2:=h(x_2)$. By assumption, we have that $d(f^k(x_1),f^k(x_2))\geq \delta$ whenever $g^k(\xi_1)\in V_1$ and $g^k(\xi_2)\in V_2$. Consequently, \begin{equation} \label{e.separation_frequency2} \nu(f,\delta,x_1,x_2) \ \geq \ \varlimsup_{n\to\infty} \ntel \#\left\{ 0\leq k<n \mid (g\times g)^k(\xi_1,\xi_2)\in V_1\times V_2\right\} \ . \end{equation} However, as $g$ is an isometry, so is $g\times g$. This implies that all points $(\xi_1,\xi_2)\in \Xi\times \Xi$ are almost periodic, and the set of return times to any of their neighbourhoods is syndetic \cite{auslander1988minimal}. Hence, the right-hand side of \eqref{e.separation_frequency2} is strictly positive. \end{proof} \begin{lemma}\label{l.good_neighbourhoods} Suppose $f$ is a minimal almost 1-1 extension of $g$ and $\diam(h^{-1}(\xi))>\delta$ for some $\xi\in \Xi$.
Then for every neighbourhood $U$ of $\xi$ there exist $V_1,V_2\ssq U$ such that $d(h^{-1}(V_1),h^{-1}(V_2)) > \delta$. \end{lemma} \begin{proof} Due to minimality, singleton fibres are dense in $X$. Hence, it is possible to find $x_1,x_2\in h^{-1}(U)$ such that $F_{x_i}=\{x_i\}$, $i\in\{1,2\}$ and $d(x_1,x_2)>\delta$. Then, by continuity, any sufficiently small neighbourhoods $V_i$ of $h(x_i)$ will satisfy $d(h^{-1}(V_1),h^{-1}(V_2))>\delta$. \end{proof} \begin{proof}[\bf Proof of Theorem~\ref{t.unbounded_separation}.] Since the factor map $h$ is not injective, there exists $\xi\in \Xi$ with $\diam(h^{-1}(\xi))>\delta$ for some $\delta>0$. We will construct, by induction on $k\in\N$ with $k\geq 2$, a sequence of finite families of disjoint open sets $V^k_1\ld V^k_k$ with the property that for all $1\leq i<j\leq k$ there exists $n^k_{i,j}\in \N_0$ such that \begin{equation}\label{e.open_set_properties} d\left(h^{-1} \left( g^{n^k_{i,j}}\left(V^k_i\right)\right), h^{-1}\left(g^{n^k_{i,j}}\left(V^k_j\right)\right)\right) \ > \ \delta \ . \end{equation} For any family of points $x^k_i\in h^{-1}(V^k_i)$, $i\in\{1\ld k\}$, and $1\leq i<j\leq k$ we will then have \[ \nu\left(f,\delta,x^k_i,x^k_j\right) \ = \ \nu\left(f,\delta,f^{n^k_{i,j}}\left(x^k_i\right), f^{n^k_{i,j}}\left(x^k_j\right)\right) \ > \ 0 \] by Lemma~\ref{l.point_separation}. Thus, if $\nu_k:=\min\left\{\nu\left(f,\delta,x^k_i,x^k_j\right)\mid 1\leq i<j \leq k\right\}$, then $\{x^k_1\ld x^k_k\}$ is an $(f,\delta,\nu_k)$-separated set of cardinality $k$. This implies that $\sup_{\nu>0} \Sep(f,\delta,\nu)$ is infinite, as required, since $k$ was arbitrary. It remains to construct the disjoint open sets $V^k_i$. For $k=2$, the sets $V^2_1$ and $V^2_2$ can be chosen according to Lemma~\ref{l.good_neighbourhoods} with $n^2_{1,2}=0$. Suppose that $V^k_1\ld V^k_k$ have been constructed as above. By minimality, there exists $n\in\N$ such that $g^n(V^k_k)$ is a neighbourhood of $\xi$.
Lemma~\ref{l.good_neighbourhoods} yields the existence of open sets $V,V'\ssq g^n(V^k_k)$ with $d(h^{-1}(V),h^{-1}(V'))>\delta$. We now set \[ V^{k+1}_i\ := \ V^k_i\ \textnormal{ for }\ i\in\{1\ld k-1\}\ , \ V^{k+1}_k \ := \ g^{-n}(V) \quad\textnormal{and}\quad V^{k+1}_{k+1}\ := \ g^{-n}(V') \ , \] so that $V^{k+1}_k\cup V^{k+1}_{k+1}\ssq V^k_k$. Choosing $n^{k+1}_{i,j}:=n^k_{i,j}$ if $1\leq i<j\leq k-1$, $n^{k+1}_{i,j}:=n^k_{i,k}$ if $1\leq i\leq k-1$ and $j\in\{k,k+1\}$ and $n^{k+1}_{k,k+1}:=n$, we obtain that \eqref{e.open_set_properties} is satisfied for all $1\leq i<j\leq k+1$. \end{proof} \section{Properties of amorphic complexity and basic examples} \label{sec_definitions} \subsection{More general growth rates} As mentioned in the introduction, one may consider growth rates more general than just polynomial ones in the definition of amorphic complexity. We call $a:\R_+\times(0,1]\to\R_+$ a {\em scale function} if $a(\,\cdot\,,\nu)$ is non-decreasing, $a(s,\,\cdot\,)$ is decreasing and $\lim_{\nu\to 0}a(s,\nu)=\infty$ for all $s\in\R_+$. If the separation numbers of $f$ are finite, then we let \begin{equation}\label{def_fsc_delta} \begin{aligned} \uac(f,a,\delta)&:=\sup\left\{s>0\;\left |\;\varliminf \limits_{\nu\to 0}\frac{\Sep(f,\delta,\nu)}{a(s,\nu)}>0\right.\right\} \ ,\\ \oac(f,a,\delta)&:=\sup\left\{s>0\;\left|\;\varlimsup \limits_{\nu\to 0}\frac{\Sep(f,\delta,\nu)}{a(s,\nu)}>0\right.\right\} \end{aligned} \end{equation} and proceed to define the \emph{lower and upper amorphic complexity of $f$ with respect to the scale function $a$} as \begin{equation} \begin{aligned} \label{d.general_amorphic_compexity} \uac(f,a)&:=\sup\limits_{\delta>0}\,\uac(f,a,\delta) \ ,\\ \oac(f,a)&:=\sup\limits_{\delta>0}\,\oac(f,a,\delta) \ . \end{aligned} \end{equation} As before, if $\uac(f,a)=\oac(f,a)$, then their common value is denoted by $\fsc(f,a)$. If $a(s,\nu)=\nu^{-s}$, then this reduces to the definition given in the introduction.
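The scaling behaviour in \eqref{def_fsc_delta} can also be explored numerically. The following Python sketch is only an informal illustration of ours and not part of the formal development: the greedy strategy, the sample sizes and all parameter values are ad-hoc choices, and the $\limsup$ in the separation frequency is replaced by a finite-time average. It estimates $\Sep(f,\delta,\nu)$ from finite orbit segments; for an irrational rotation the estimate is bounded uniformly in $\nu$, whereas for the full shift over $\{0,1\}$ with the Cantor metric essentially every pair of randomly drawn sequences is separated, in line with the fact that positive topological entropy forces infinite separation numbers.

```python
import random

def sep_estimate(points, orbit, dist, delta, nu, n):
    # Greedy lower bound for Sep(f, delta, nu): keep a point if its empirical
    # separation frequency (limsup replaced by an average over n iterates)
    # against every previously kept point is at least nu.
    def freq(x, y):
        ox, oy = orbit(x, n), orbit(y, n)
        return sum(dist(a, b) >= delta for a, b in zip(ox, oy)) / n
    S = []
    for p in points:
        if all(freq(p, q) >= nu for q in S):
            S.append(p)
    return len(S)

# Irrational rotation: an isometry, so two points are separated iff their
# distance is at least delta, and the estimate is bounded uniformly in nu.
alpha = 0.5 ** 0.5
rot_orbit = lambda x, n: [(x + k * alpha) % 1.0 for k in range(n)]
circ = lambda a, b: min(abs(a - b), 1.0 - abs(a - b))

# Full shift on {0,1} with the Cantor metric: randomly chosen sequences
# disagree in roughly half of all coordinates, so every pair is separated.
shift_orbit = lambda x, n: [x[k:] for k in range(n)]
def cantor(a, b):
    for j in range(len(a)):
        if a[j] != b[j]:
            return 2.0 ** (-j)
    return 0.0

random.seed(1)
pts_circle = [random.random() for _ in range(50)]
pts_shift = [tuple(random.randint(0, 1) for _ in range(300)) for _ in range(50)]
print(sep_estimate(pts_circle, rot_orbit, circ, 0.25, 0.05, 200))  # at most 4
print(sep_estimate(pts_shift, shift_orbit, cantor, 0.5, 0.05, 200))  # all 50 points
```

On the circle, a set of points with pairwise distance at least $1/4$ has at most four elements, so the first estimate cannot exceed $4$ no matter how small $\nu$ is chosen; the second estimate grows with the sample size.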
In order to obtain good properties, however, some regularity has to be imposed on the scale function. We say a scale function $a$ is \emph{O-(weakly) regularly varying (at the origin) with respect to $\nu$} if \[ \varlimsup\limits_{\nu\to 0}\frac{a(s,c\nu)}{a(s,\nu)} \] is finite for each $s,c>0$. Under this assumption, a part of the theory can be developed in a completely analogous way, until specific properties of polynomial growth start to play a role. For the sake of simplicity, we refrain from stating the results in this section in their full generality. However, we provide extra comments in each subsection to specify the class of scale functions the corresponding results extend to. For more information on O-regularly varying functions, see for example \cite{AljancicArandelovic1977,BuldyginKlesovSteinebach2006} and references therein. \subsection{Definition via $(f,\delta,\nu)$-spanning sets} \label{SpanningSets} As in the case of topological entropy, amorphic complexity can be defined in an equivalent way by using spanning sets instead of separating sets. A subset $S$ of a metric space $(X,d)$ is said to be \emph{$(f,\delta,\nu)$-spanning} if for all $x\in X$ there exists a $y\in S$ such that \[ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,\delta,x,y)}{n} \ < \ \nu \ . \] By $\Span(f,\delta,\nu)$ we denote the smallest cardinality of any $(f,\delta,\nu)$-spanning set in $X$. \begin{lemma}\label{lem_relations_sep_span} Let $f:X\to X$ be a map, $\delta>0$ and $\nu\in(0,1]$. We have that \begin{equation}\label{e.span_sep_relation} \Sep(f,\delta,\nu)\ \geq \ \Span(f,\delta,\nu) \eqand \Span(f,\delta,\nu/2)\ \geq \ \Sep(f,2\delta,\nu)\ . \end{equation} \end{lemma} \begin{proof} For the first inequality, the proof is similar to the argument in the comparison of the separating and spanning sets in the classical definition of topological entropy \cite[Chapter 7.2]{Walters1982}. For the second inequality, assume \twlog\ that $\Span(f,\delta,\nu/2)<\infty$. 
Let $S\subseteq X$ be an $(f,\delta,\nu/2)$-spanning set of cardinality $\Span(f,\delta,\nu/2)$ and assume for a contradiction that $\tilde S\subseteq X$ is an $(f,2\delta,\nu)$-separated set with $\#\tilde S>\#S$. Then for some $y\in S$ there exist $x_1,x_2\in\tilde S$ such that \[ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,\delta,x_i,y)}{n} \ < \ \frac{\nu}{2} \] with $i\in\{1,2\}$. However, due to the triangle inequality we have that \[ \countsep{n}(f,2\delta,x_1,x_2) \ \leq \ \countsep{n}(f,\delta,x_1,y)+\countsep{n}(f,\delta,y,x_2) \ \] and consequently \[ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,2\delta,x_1,x_2)}{n} \ \leq \ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,\delta,x_1,y)}{n} + \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(f,\delta,x_2,y)}{n} \ < \ \nu \ . \] This contradicts the fact that $x_1$ and $x_2$ are $(f,2\delta,\nu)$-separated. \end{proof} \begin{corollary}\label{c.span_sep} Given a metric space $X$ and $f:X\to X$, we have that \begin{equation} \uac(f) \ = \ \sup_{\delta>0} \varliminf_{\nu\to 0} \frac{\log\Span(f,\delta,\nu)}{-\log\nu} \eqand \oac(f) \ = \ \sup_{\delta>0} \varlimsup_{\nu\to 0} \frac{\log\Span(f,\delta,\nu)}{-\log\nu} \ . \end{equation} \end{corollary} \begin{remarks}\mbox{} \alphlist \item The above statement remains true if $a(s,\nu)=\nu^{-s}$ is replaced by any O-regularly varying scale function. \item In the definition of $(f,\delta,\nu)$-separated sets and $(f,\delta,\nu)$-spanning sets one could also use $\liminf$ instead of $\limsup$, and thus define the notions of \emph{strongly $(f,\delta,\nu)$-separated sets} and \emph{weakly $(f,\delta,\nu)$-spanning sets}, respectively. However, there is no analogue to the second inequality in \eqref{e.span_sep_relation} in this case. \listend \end{remarks} \subsection{Factor relation and topological invariance} We assume that $X$ and $\Xi$ are arbitrary metric spaces, possibly non-compact. 
The price to pay for this is that we have to assume the uniform continuity of the factor map. All the assertions of this section remain true for arbitrary scale functions. \begin{proposition} Assume $g:\Xi\to \Xi$ is a factor of $f:X\to X$ with a uniformly continuous factor map $h:X\to \Xi$. Then $\uac(f)\geq\uac(g)$ and $\oac(f)\geq \oac(g)$. \end{proposition} \begin{proof} We denote the metric on $X$ and $\Xi$ by $d$ and $\rho$, respectively. The uniform continuity of $h$ implies that for every $\delta>0$ there exists $\tilde\delta>0$ such that $\rho(h(z),h(w))\geq\delta$ implies $d(z,w)\geq\tilde\delta$. Suppose $\xi,\xi'\in \Xi$ are $(g,\delta,\nu)$-separated. Then there exist $x,x'\in X$ such that $h(x)=\xi$ and $h(x')=\xi'$. Since $\rho(g^k(\xi),g^k(\xi'))\geq\delta$ implies $d(f^k(x),f^k(x'))\geq\tilde\delta$, the points $x$ and $x'$ need to be $(f,\tilde \delta,\nu)$-separated. Given $\nu\in(0,1]$, this means that if $S\subseteq \Xi$ is a $(g,\delta,\nu)$-separated set, then there exist $\tilde S\subseteq X$ with $h(\tilde S)=S$ and $\tilde\delta>0$ such that $\tilde S$ is an $(f,\tilde\delta, \nu)$-separated set. Therefore, for all $\nu\in(0,1]$ we get \begin{align*} \Sep(f,\tilde\delta,\nu)\geq \Sep(g,\delta,\nu) \ . \end{align*} The assertions follow easily. \end{proof} \begin{corollary}\label{fsc_top_inv} Suppose $X$ and $\Xi$ are compact and let $f:X\to X$ and $g:\Xi\to \Xi$ be conjugate. Then $\uac(f)=\uac(g)$ and $\oac(f)=\oac(g)$. \end{corollary} For the next corollary, observe that $f\circ g$ is an extension of $g\circ f$ with factor map $h=g$, and conversely $\tilde h=f$ is a factor map from $g\circ f$ to $f\circ g$. \begin{corollary} Suppose $f:X\to X$ and $g:X\to X$ are uniformly continuous. Then $\uac(f\circ g)=\uac(g\circ f)$ and $\oac(f\circ g)=\oac(g\circ f)$. \end{corollary} \subsection{Power invariance and product rule} We first consider iterates of $f$.
In contrast to topological entropy, taking powers does not affect the amorphic complexity. Throughout this section, we assume that $X$ and $Y$ are metric spaces. \begin{proposition} Assume $f:X\to X$ is uniformly continuous and let $m\in\N$. Then $\uac(f^m)=\uac(f)$ and $\oac(f^m)=\oac(f)$. \end{proposition} \begin{proof} Since all iterates of $f$ are uniformly continuous as well, we have that for every $\delta>0$ there exists $\tilde\delta>0$ such that $d(f^i(z),f^i(w))\geq\delta$ implies $d(z,w)\geq\tilde\delta$ for all $i\in\{0\ld m-1\}$. Suppose $x,y\in X$ are $(f,\delta,\nu)$-separated. Assume that $d(f^k(x),f^k(y))\geq\delta$ with $k=m\cdot\tilde k+i$, where $\tilde k\in\N_0$ and $i\in\{0,\dots,m-1\}$. Then by the above we have $d\big(f^{m\tilde k}(x),f^{m\tilde k}(y)\big)\geq\tilde\delta$. This means that for $\tilde n\in\N$ and $n\in\{m\cdot\tilde n,\dots,m(\tilde n+1)-1\}$ we get \[ \ntel\countsep{n}(f,\delta,x,y) \ \leq \ \ntel\left(m\cdot\countsep{\tilde n}(f^m,\tilde\delta,x,y)+m\right) \ \leq \ \frac{1}{\tilde n}(\countsep{\tilde n}(f^m,\tilde\delta,x,y)+1)\ . \] By taking the $\limsup$ we get that $x$ and $y$ are $(f^m,\tilde\delta,\nu)$-separated. Hence, \begin{align}\label{prop_iterates_of_f_1} \Sep(f^m,\tilde\delta,\nu)\ \geq \ \Sep(f,\delta,\nu) \ . \end{align} Conversely, suppose that $x$ and $y$ are $(f^m,\delta,\nu)$-separated. Then for $k\geq 1$ it follows from $d(f^{mk}(x),f^{mk}(y))\geq\delta$ that $d\big(f^{\tilde k}(x),f^{\tilde k}(y)\big)\geq\tilde\delta$ for all $\tilde k\in\{m(k-1)+1,\dots,mk\}$. Each $\tilde n\in\N$ belongs to a block $\{m(n-1)+1,\dots,m\cdot n\}$ with $n\in\N$ and we have \[ \frac{1}{\tilde n}\countsep{\tilde n}(f,\tilde\delta,x,y) \ \geq \ \frac{1}{\tilde n}\left(m\cdot\countsep{n}(f^m,\delta,x,y)-m\right) \ \geq\ \ntel\left(\countsep{n}(f^m,\delta,x,y)-1\right) \ . \] Again, by taking the $\limsup$ we get that $x$ and $y$ are $(f,\tilde\delta,\nu)$-separated. 
Hence, \begin{align}\label{prop_iterates_of_f_2} \Sep(f,\tilde\delta,\nu) \ \geq \ \Sep(f^m,\delta,\nu) \ . \end{align} Using \eqref{prop_iterates_of_f_1} and \eqref{prop_iterates_of_f_2}, we get that $\uac(f^m)=\uac(f)$ and $\oac(f^m)=\oac(f)$. \end{proof} \begin{remarks}\mbox{} \alphlist \item The above result remains true for arbitrary scale functions. \item If $f$ is not uniformly continuous, then we still have $\Sep(f,\delta,\nu/m)\geq\Sep(f^m,\delta,\nu)$. This yields $\uac(f,a)\geq\uac(f^m,a)$ and $\oac(f,a)\geq\oac(f^m,a)$ for $a$ O-regularly varying. \listend \end{remarks} In contrast to the above, the product formula is specific to polynomial growth or, more generally, to scale functions satisfying a product rule of the form $a(s+t,\nu)=a(s,\nu)\cdot a(t,\nu)$. \begin{proposition} Let $f:X\to X$ and $g: Y\to Y$. Then $\uac(f\times g)\geq\uac(f)+\uac(g)$ and $\oac(f\times g)\leq \oac(f)+\oac(g)$. Therefore, if the limits $\fsc(f)$ and $\fsc(g)$ exist, we get \[ \fsc(f\times g)=\fsc(f)+\fsc(g) \ . \] \end{proposition} \begin{proof} We denote the metric on $X$ and $Y$ by $d_X$ and $d_Y$, respectively. Let $d$ be the maximum metric on the product space $X\times Y$. Using Corollary~\ref{c.span_sep}, the assertions are direct consequences of the following two inequalities, which we show for all $\delta>0$ and $\nu\in(0,1]$ \begin{eqnarray} \label{sep_product_inequality}\Sep(f\times g,\delta,\nu) & \geq & \Sep(f,\delta,\nu)\cdot \Sep(g,\delta,\nu) \ , \\ \label{span_product_inequality}\Span(f\times g,\delta,\nu) & \leq & \Span(f,\delta,\nu/2)\cdot\Span(g,\delta,\nu/2) \ . \end{eqnarray} For proving \eqref{sep_product_inequality} assume that $S_X\subseteq X$ and $S_Y\subseteq Y$ are $(f,\delta,\nu)$- and $(g,\delta,\nu)$-separated sets, respectively, with cardinalities $\Sep(f,\delta,\nu)$ and $\Sep(g,\delta,\nu)$, respectively. Then $S:=S_X\times S_Y\subseteq X\times Y$ is an $(f\times g,\delta,\nu)$-separated set. This implies \eqref{sep_product_inequality}. 
Now, in order to prove \eqref{span_product_inequality} assume that $\tilde S_X\subseteq X$ and $\tilde S_Y\subseteq Y$ are $(f,\delta,\nu/2)$- and $(g,\delta,\nu/2)$-spanning sets, respectively, with cardinalities $\Span(f,\delta,\nu/2)$ and $\Span(g,\delta,\nu/2)$, respectively. The set $\tilde S:=\tilde S_X\times\tilde S_Y\subseteq X\times Y$ is $(f\times g,\delta,\nu)$-spanning, since for arbitrary $(x,y)\in X\times Y$ there are $\tilde x\in\tilde S_X$ and $\tilde y\in\tilde S_Y$ such that \begin{multline*} \countsep{n}(f\times g,\delta,(x,y),(\tilde x,\tilde y)) =\#\left\{0\leq k<n\;|\;d\big((f\times g)^k(x,y), (f\times g)^k(\tilde x,\tilde y)\big)\geq\delta\right\}\\ \leq\#\left\{0\leq k<n\;|\; d_X(f^k(x),f^k(\tilde x))\geq\delta\right\} +\#\left\{0\leq k<n\;|\; d_Y(g^k(y),g^k(\tilde y))\geq\delta\right\} \ .\qedhere \end{multline*} \end{proof} \subsection{Isometries, Morse-Smale systems and transient dynamics}\label{Isometries} It is obvious that all isometries have bounded separation numbers and zero amorphic complexity, since $\Sep(f,\delta,\nu)$ does not depend on $\nu$ in this case. Similarly, amorphic complexity is zero for Morse-Smale systems. Here, we call a continuous map $f$ on a compact metric space $X$ \emph{Morse-Smale} if its non-wandering set $\Omega(f)$ is finite. This implies that $\Omega(f)$ consists of a finite number of fixed or periodic orbits, and for any $x\in X$ there exists $y\in\Omega(f)$ with $\nLim f^{np}(x)=y$, where $p$ is the period of $y$. Since orbits converging to the same periodic orbit cannot be $(f,\delta,\nu)$-separated, we obtain $\Sep(f,\delta,\nu)\leq \#\Omega(f)$ for all $\delta,\nu>0$. Hence, separation numbers are even bounded uniformly in $\delta$ and $\nu$. This shows that amorphic complexity is, in some sense, less sensitive to transient behaviour than power entropy, which gives positive value to Morse-Smale systems (see Section~\ref{PowerEntropy}). 
However, amorphic complexity is not entirely insensitive to transient dynamics, and the relation $\fsc(f)=\fsc(\left.f\right|_{\Omega(f)})$ does not always hold. An example can be given as follows. Let $f:[0,1]\times\kreis\to[0,1]\times \kreis$ be of the form $f(x,y):=(g(x),y+\alpha(x)\bmod 1)$, where $\T^1:=\R/\Z$, $\alpha:[0,1]\to\R$ is continuous and $g:[0,1]\to[0,1]$ is a Morse-Smale homeomorphism with unique attracting fixed point $x_a=0$ and unique repelling fixed point $x_r=1$, so that $\kLim g^k(x)=0$ for all $x\in(0,1)$. Let $x_0\in(0,1)$ and $x_k:=g^k(x_0)$ for $k\in\N$ and $x_0':=(x_0+x_1)/2$. Suppose $\alpha$ is given by \begin{equation}\label{e.alpha} \alpha(x) \ := \ \left\{ \begin{array}{cl} 0 & \textrm{if } x\in\{0\}\cup (x_0,1] ;\\ 1-2\frac{|x_0'-x|}{x_0-x_1} & \textrm{if } x\in (x_1,x_0];\\ \ktel \alpha\left(g^{-(k-1)}(x)\right) & \textrm{if } x \in (x_k,x_{k-1}],\ k\geq 2; \end{array}\right. \ . \end{equation} Then, if $x,x'\in [x_1,x_0']$, we have that \begin{equation}\label{e.different_speeds} \left| \sum_{k=0}^{n-1} \alpha\circ g^k(x) - \sum_{k=0}^{n-1} \alpha\circ g^k(x')\right| \ = \ 2\frac{\abs{x-x'}}{x_0-x_1}\sum _{k=1}^{n} \frac{1}{k} \ . \end{equation} This means that one of the two points $(x,0),(x',0)$ performs infinitely many more turns around the annulus $[0,1]\times\kreis$ than the other as $n\to\infty$, and it is not difficult to deduce from \eqref{e.different_speeds} that $(x,0),(x',0)$ are $(f, \delta,\nu)$-separated for some fixed $\delta,\nu>0$ independent of $x,x'$. Hence, $[x_1,x_0']\times\{0\}$ is an uncountable $(f,\delta,\nu)$-separated set, and we obtain $\Sep(f,\delta,\nu)=\infty$. It should be interesting to describe which types of transient behaviour have an impact on amorphic complexity and which ones do not, and thus to understand whether this quantity may be used to distinguish qualitatively different types of transient dynamics.
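The logarithmic divergence in \eqref{e.different_speeds} is easy to observe numerically. In the following Python sketch, again only an informal illustration, the concrete choice $g(x)=x/(2-x)$, which fixes $0$ and $1$ with $g'(0)=1/2$ and $g'(1)=2$, and the base point $x_0=1/2$ are our own ad-hoc choices. The Birkhoff sums of $\alpha$ are evaluated along orbits of $g$ and compared with the right-hand side of \eqref{e.different_speeds}.

```python
def g(x):                     # ad-hoc Morse-Smale interval map: 0 attracting, 1 repelling
    return x / (2.0 - x)

def g_inv(y):                 # exact inverse of g
    return 2.0 * y / (1.0 + y)

x0 = 0.5                      # ad-hoc base point
x1 = g(x0)                    # = 1/3
x0p = (x0 + x1) / 2.0         # x_0'

def alpha(x):
    # evaluate alpha from (e.alpha) by pulling x back into the
    # fundamental domain (x1, x0] and dividing by the number of steps
    if x == 0.0 or x > x0:
        return 0.0
    k = 1
    while x <= x1:
        x = g_inv(x)
        k += 1
    return (1.0 - 2.0 * abs(x0p - x) / (x0 - x1)) / k

def birkhoff_diff(x, xp, n):
    # |sum_{k<n} alpha(g^k(x)) - sum_{k<n} alpha(g^k(x'))|
    s = 0.0
    for _ in range(n):
        s += alpha(x) - alpha(xp)
        x, xp = g(x), g(xp)
    return abs(s)

n, x, xp = 25, 0.36, 0.40     # both points lie in (x1, x0')
lhs = birkhoff_diff(x, xp, n)
rhs = 2.0 * abs(x - xp) / (x0 - x1) * sum(1.0 / k for k in range(1, n + 1))
print(lhs, rhs)               # the two agree up to rounding and grow like log n
```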
However, we are not going to pursue this issue further here, but confine ourselves to give a simple criterion for the validity of the equality $\fsc(f)=\fsc(\left.f\right|_{\Omega(f)})$. We say $f$ has the \emph{unique target property} if for every $x\in X\smin\Omega(f)$ there exists $y\in\Omega(f)$ such that $\nLim d(f^n(x),f^n(y))=0$. Then the following statement is easy to prove. \begin{lemma} \label{l.unique_target} If $f$ has the unique target property, then $\uac(f)=\uac\big(\left.f\right|_{\Omega(f)}\big)$ and $\oac(f)=\oac\big(\left.f\right|_{\Omega(f)}\big)$. \end{lemma} In fact, the nonwandering set $\Omega(f)$ plays no special role in the definition of the unique target property or in the above lemma and can be replaced by any other subset of $X$ (even invariance is not necessary). For later use (see Section~\ref{PinchedSystems}), we provide a precise formulation. Given $E\ssq X$, we let \begin{equation}\label{e.subset_separation_numbers} \Sep_{E}(f,\delta,\nu) \ := \ \sup\left\{\#A \mid A\ssq E \textrm{ and } A \textrm{ is }(f,\delta,\nu) \textrm{-separated}\right\} \end{equation} and define \begin{align}\label{eq: defn h am for subsets} \begin{split} \uac_{E}(f,\delta) & \ := \ \varliminf_{\nu\to 0} \frac{\log \Sep_{E}(f,\delta,\nu)}{-\log \nu}\quad , \quad \uac_{E}(f) \ := \ \sup_{\delta>0}\uac_{E}(f,\delta)\ ,\\ \oac_{E}(f,\delta) & \ := \ \varlimsup_{\nu\to 0} \frac{\log\Sep_{E}(f,\delta,\nu)}{-\log \nu}\quad , \quad \oac_{E}(f)\, \ := \ \sup_{\delta>0}\oac_{E}(f,\delta) \ . \end{split} \end{align} We say $f$ has the \emph{unique target property with respect to $E\ssq X$} if for all $x\in X$ there exists $y\in E$ such that $\nLim d(f^n(x),f^n(y))=0$. \begin{lemma}\label{lem: unique target general} Suppose $f:X\to X$ has the unique target property with respect to $E\ssq X$. Then $\uac(f)=\uac_{E}(f)$ and $\oac(f)=\oac_{E}(f)$. \end{lemma} \begin{proof} Suppose $S=\{x_1\ld x_m\} \ssq X$ is an $(f,\delta,\nu)$-separated set.
Then by assumption there exist $y_1\ld y_m\in E$ such that $\nLim d(f^n(x_i),f^n(y_i))=0$ for all $i\in\{1\ld m\}$. Hence, the set $\tilde S:=\{y_1\ld y_m\}\ssq E$ is $(f,\delta,\nu)$-separated as well. This shows that $\Sep_{E}(f,\delta,\nu)\geq\Sep(f,\delta,\nu)$, and since the reverse inequality is obvious this proves the statement. \end{proof} \subsection{Denjoy examples and Sturmian subshifts} \label{BasicExamples} We start with some standard notation concerning circle maps and symbolic dynamics. Let $\T^1=\R/\Z$ be the circle and denote by $d$ the usual metric on $\T^1$. Further, we denote the open and the closed counter-clockwise interval from $a$ to $b$ in $\T^1$ by $(a,b)$ and $[a,b]$, respectively. The Lebesgue measure on $\T^1$ is denoted by $\Leb$. Moreover, the rigid rotation with angle $\alpha\in\R$ is denoted by $R_\alpha(x):=x+\alpha\mod 1$. For a finite set $A$ we denote by $\sigma$ the left shift on $\Sigma_A:=A^{\I}$ where $\I$ equals either $\N_0$ or $\Z$. The product topology on $\Sigma_A$ is induced by the \emph{Cantor metric} $\rho(x,y):=2^{-j}$ where $x=(x_k)_{k\in\I}$, $y=(y_k)_{k\in\I}\in\Sigma_A$ and $j:=\min\{|k|: x_k\neq y_k\textnormal{ with } k\in\I\}$.\smallskip We first recall some basics about Sturmian subshifts and Denjoy homeomorphisms of the circle. For Sturmians, we mainly follow \cite[Section 2.2]{ChazottesDurand2005}. Assume that $\alpha\in (0,1)$ is irrational. Consider the coding map $\phi_{\alpha}:\T^1\to\{0,1\}$ defined via $\phi_{\alpha}(x)=0$ if $x\in I_0:=[0,1-\alpha)$ and $\phi_{\alpha}(x)=1$ if $x\in I_1:=[1-\alpha,1)$. Set \[ \Sigma_{\alpha}\ := \ \overline{ \left\{(\phi_{\alpha}(R_{\alpha}^k(x)))_{k\in\Z}\;|\;x\in\T^1\right\}} \ \subset \ \Sigma_{\{0,1\}} \ . \] The subshift $(\Sigma_{\alpha},\sigma)$ is called the \emph{Sturmian subshift generated by $\alpha$} and its elements are called \emph{Sturmian sequences}. 
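The coding map $\phi_\alpha$ is straightforward to implement. The following Python sketch, an informal illustration of ours in which the sample length and the base point are arbitrary choices, generates an initial segment of a Sturmian sequence for the golden mean rotation number and counts its distinct subwords, recovering the well-known factor complexity $p(n)=n+1$ of Sturmian sequences.

```python
from math import sqrt

alpha = (sqrt(5) - 1) / 2          # golden rotation number
N, x = 5000, 0.1                   # sample length and base point (arbitrary)

# code the orbit of x under R_alpha with the partition {I_0, I_1}
seq = []
for _ in range(N):
    seq.append(1 if x >= 1 - alpha else 0)
    x = (x + alpha) % 1.0

# a Sturmian sequence has exactly n + 1 distinct factors of each length n
for n in range(1, 11):
    factors = {tuple(seq[i:i + n]) for i in range(N - n + 1)}
    print(n, len(factors))         # prints n, n+1
```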
According to \cite{MorseHedlund1940}, there exists a map $h:\Sigma_{\alpha}\to\T^1$ semi-conjugating $\sigma$ and $R_{\alpha}$ with the property that $\#h^{-1}(x)=2$ for $x\in\{k\alpha\mod 1\;|\;k\in\Z\}$ and $\#h^{-1}(x)=1$ otherwise. If $x=k\alpha$, then one of the two alternative sequences in $h^{-1}(x)$ corresponds to the coding with respect to the original partition $\{I_0, I_1\}$, whereas the other one corresponds to the coding with respect to the partition $\{(0,1-\alpha],(1-\alpha,1]\}$. Further information is given in \cite[Section 1.6]{BlanchardMaassNogueira2000}.\smallskip Poincaré's classification of circle homeomorphisms in \cite{Poincare1885} states that to each orientation preserving homeomorphism $f:\T^1\to\T^1$ of the circle we can associate a unique real number $\alpha\in[0,1)$, called the rotation number of $f$, such that $f$ is semi-conjugate, via an orientation preserving map, to the rigid rotation $R_{\alpha}$, provided $\alpha$ is irrational (see also \cite{demelo/vanstrien:1993,HasselblattKatok1997}). Another classical result by A. Denjoy \cite{Denjoy1932} states that if $f$ is a diffeomorphism such that its derivative is of bounded variation, then $f$ is even conjugate to $R_{\alpha}$. In this case, the amorphic complexity is zero. However, Denjoy also constructed examples of $C^1$ circle diffeomorphisms with irrational rotation number that are not conjugate to a rotation and later, Herman \cite{Herman1979} showed that these examples can be made $C^{1+\varepsilon}$ for any $\varepsilon<1$. Such maps are commonly called \emph{Denjoy examples} or {\em Denjoy homeomorphisms}. From Poincar\'e's classification, it is known that in this case there exist \emph{wandering intervals}, that is, open intervals $I\subset \kreis$ such that $f^n(I)\cap I=\emptyset$ for all $n\geq 1$. Any Denjoy example has a unique minimal set $C$, which is a Cantor set and coincides with the non-wandering set $\Omega(f)$.
All connected components of $\kreis\smin C$ are wandering intervals, and the length of their $n$-th iterates goes to zero as $n\to\infty$. Since the endpoints of these intervals belong to the minimal set, this also implies that Denjoy examples have the unique target property. \smallskip Not surprisingly, there is an intimate connection between Denjoy examples and Sturmian subshifts. Let $f$ be a Denjoy homeomorphism with rotation number $\alpha$ and suppose it has a unique wandering interval $I$, in the sense that the minimal set $C=\kreis\smin\bigcup_{n\in\Z} f^n(I)$.\foot{It is possible to have several connected components of $\kreis\smin C$ with pairwise disjoint orbits.} Given any $x_0\in I$, let $J_0:=[f(x_0),x_0)$ and $J_1:=[x_0,f(x_0))$. Then for every $x\in\kreis$ the coding $\big(\ind_{J_1}\circ f^n(x)\big)_{n\in\Z}$, where $\ind_{J_1}$ denotes the indicator function of $J_1$, is a Sturmian sequence in $\Sigma_\alpha$. Moreover, in this situation any point in the minimal set $C$ has a unique coding. This yields the following folklore statement. \begin{lemma} \label{l.Sturmian_Denjoy} For any Sturmian subshift $(\Sigma_\alpha,\sigma)$ there exists a Denjoy homeomorphism $f$ with minimal set $C$ such that $\left.f\right|_{C}$ is conjugate to $\left.\sigma\right|_{\Sigma_\alpha}$. \end{lemma} For our purposes, this means that we only have to determine the amorphic complexity of Denjoy examples. Note that the converse to the above lemma is false: if $f$ has multiple wandering intervals with pairwise disjoint orbits, then $\left.f\right|_{C}$ is not conjugate to a Sturmian subshift. \begin{theorem} \label{t.denjoy} Suppose $f:\T^1\to\T^1$ is a Denjoy homeomorphism. Then $\fsc(f)=1$. \end{theorem} Since Denjoy examples have the unique target property, Lemma~\ref{l.unique_target} yields that $\fsc(\left.f\right|_{C})=\fsc(f)=1$.
Together with Corollary~\ref{fsc_top_inv} and Lemma~\ref{l.Sturmian_Denjoy}, this implies \begin{corollary} For any Sturmian subshift $(\Sigma_\alpha,\sigma)$ we have $\fsc\big(\left.\sigma\right|_{\Sigma_\alpha}\big)=1$. \end{corollary} Theorem~\ref{t.denjoy} is a direct consequence of the following two lemmas. However, before we proceed, we want to collect some more facts concerning Denjoy examples, following mainly \cite[Section 0]{Markley1970} and \cite[Section 2]{Hernandez-CorbatoOrtega2012}. The Cantor set $C=\Omega(f)$ can be described as \[ C\ = \ \T^1\backslash\bigcup\limits_{\ell=1}^\infty (a_{\ell},b_{\ell}) \ , \] where $((a_{\ell},b_{\ell}))_{\ell\in\N}$ is a family of open and pairwise disjoint intervals. The \emph{accessible points $A\subset\T^1$ of $C$} are defined as the union of the endpoints of these intervals and the \emph{inaccessible points of $C$} are defined as $I:=C\backslash A$. A \emph{Cantor function $p:\T^1\to\T^1$ associated to $C$} is a continuous map satisfying \[ p(x)=p(y)\ \Longleftrightarrow \ x=y\textnormal{ or } x,y \in [a_{\ell},b_{\ell}] \textnormal{ for some }\ell\geq 1 \ , \] that is, $p$ collapses the intervals $[a_{\ell},b_{\ell}]$ to single points and is invertible on $I$. From this definition it is not difficult to deduce that $p$ is onto and that $p(A)$ is countable and dense in $\T^1$. Furthermore, we can assume without loss of generality that $p\circ f=R_{\alpha}\circ p$, where $\alpha\in [0,1)\backslash\Q$ is the rotation number of $f$, see \cite[Section 2]{Markley1970}. \begin{lemma} Let $f:\T^1\to\T^1$ be a Denjoy homeomorphism. Then there exists $\delta>0$ such that $\Sep(f,\delta,\nu)\geq\lfloor 1/\nu\rfloor$ for all $\nu\in(0,1]$. \end{lemma} Note that by definition this implies that $\uac(f)\geq 1$. \begin{proof} Suppose $\nu\in(0,1/2]$. 
Since $p(A)$ is dense in $\T^1$, we can choose for each $m\in\{1,2,3\}$ a point $\zeta_m\in p(A)$ such that \begin{equation}\label{property_zeta_m_s} d(\zeta_m,\zeta_n) \ > \ 1/4 \quad \textrm{for } m\neq n \ . \end{equation} Note that to each $\zeta_m$ we can associate an interval $[a_{\ell_m},b_{\ell_m}]$ with $p\big([a_{\ell_m},b_{\ell_m}]\big)=\{\zeta_m\}$. Now, choose $\delta>0$ such that \[ \delta\ \leq \ \min\limits_{m=1}^3d\big(a_{\ell_m},b_{\ell_m}\big) \ . \] Since $p(I)$ has full Lebesgue measure in $\T^1$, we can choose a set of $\lfloor 1/\nu\rfloor$ points \[ M\ =\ \big\{x_1,\dots,x_{\lfloor 1/\nu\rfloor}\big\}\ \subset\ I \ , \] such that $p(M)$ is an equidistributed lattice in $\T^1$ with distance $1/\lfloor 1/\nu\rfloor \geq \nu$ between adjacent vertices. Consider distinct points $x_i, x_j\in M$ and assume without loss of generality that $\Leb([p(x_i),p(x_j)])\leq 1/2$. Set $P:=[p(x_i),p(x_j)]$. If $\zeta_1\in R_{\alpha}^k(P)$ for some $k\geq 0$, then due to \eqref{property_zeta_m_s} we have that $\zeta_2\in\T^1\backslash R_{\alpha}^k(P)$ or $\zeta_3\in\T^1\backslash R_{\alpha}^k(P)$, such that both $[f^k(x_i),f^k(x_j)]$ and $[f^k(x_j),f^k(x_i)]$ contain some interval $[a_{\ell_m},b_{\ell_m}]$ with $m\in\{1,2,3\}$. Hence, we have \[ d(f^k(x_i),f^k(x_j))\ \geq\ \delta \ . \] Consequently, we obtain \[ \frac{\countsep{n}(f,\delta,x_i,x_j)}{n}\ \geq \ \frac{\#\left\{0\leq k<n\;|\;\zeta_1\in R_{\alpha}^k(P)\right\}}{n} \ . \] By Weyl's Equidistribution Theorem \cite[Example 4.18]{EinsiedlerWard2011}, the right-hand side converges to $\Leb(P)\geq\nu$ as $n\to\infty$. This means that $x_i$ and $x_j$ are $(f,\delta,\nu)$-separated, so that $M$ is an $(f,\delta,\nu)$-separated set. \end{proof} \begin{lemma} Let $f:\T^1\to\T^1$ be a Denjoy homeomorphism. Then for any $\delta>0$ there exists a constant $\kappa=\kappa(\delta)$ such that \[ \Span(f,\delta,\nu) \ \leq \ \kappa/\nu \quad \textrm{for all } \nu\in(0,1] \ .
\] \end{lemma} Together with Corollary \ref{c.span_sep}, this implies that $\oac(f)\leq 1$, thus completing the proof of Theorem~\ref{t.denjoy}. \begin{proof} We show that if $0<\tilde\nu\leq 1/(2(\lceil 1/\delta\rceil+1))$, then \[ \Span(f,\delta,2\tilde\nu(\lceil 1/\delta\rceil+1)) \ \leq \ \lceil 1/\tilde\nu\rceil \ . \] Since $\lceil 1/\tilde \nu\rceil \leq 2/\tilde \nu$, this implies the statement with $\kappa(\delta):=4(\lceil 1/\delta\rceil+1)$. Let $\mu:=\Leb\circ p^{-1}$ and define the function $\varphi_{\tilde\nu}:\T^1\to[0,\infty)$ by \[ \varphi_{\tilde\nu}(x):=\mu([x,x+\tilde\nu]) \ . \] Note that $d(x,y)\leq\mu([p(x),p(y)])$ and that $\varphi_{\tilde\nu}(x)=d(p^{-1}(x),p^{-1}(x+\tilde\nu))$ almost everywhere. In particular, $\varphi_{\tilde\nu}$ is measurable. Now, consider a subset $\tilde I\subseteq I$ such that \begin{align}\label{property_ba_varphi_alpha} \frac{\#\left\{0\leq k<n\;|\; \varphi_{\tilde\nu}(R_{\alpha}^k(x))\geq\delta\right\}}{n} \ \longrightarrow\ \Leb(\{x\in\kreis\mid \varphi_{\tilde\nu}(x)\geq\delta\}) \quad\textrm{as}\quad n\to\infty \end{align} for all $x\in p(\tilde I)$. Let $\{\varphi_{\tilde\nu}\geq\delta\}:=\{x\in\kreis\mid \varphi_{\tilde\nu}(x)\geq\delta\}$. Using Birkhoff's Ergodic Theorem, we know that $\tilde I$ can be chosen such that $p(\tilde I)$ has full Lebesgue measure. Hence, we can choose a set of $\lceil 1/\tilde\nu\rceil$ points \[ M:=\big\{x_1,\dots,x_{\lceil 1/\tilde\nu\rceil}\big\}\subset\tilde I, \] such that $p(M)$ is an equidistributed lattice in $\T^1$ with distance $1/\lceil 1/\tilde\nu\rceil \leq \tilde\nu$ between adjacent vertices. Our aim is to show that $M$ is an $(f,\delta,2\tilde\nu(\lceil 1/\delta\rceil+1))$-spanning set. For arbitrary $y\in\T^1$, let $x_i,x_j\in M$ be the two adjacent lattice points with $p(y)\in [p(x_i),p(x_j)]$ (that is, $j=i+1$ or $i=\lceil 1/\tilde\nu\rceil$ and $j=1$). 
Then \[ R_{\alpha}^k[p(x_i),p(y)]\ \subseteq\ [R_{\alpha}^k(p(x_i)),R_{\alpha}^k(p(x_i))+\tilde\nu] \] for $k\geq 0$, and this implies \begin{eqnarray*} \lefteqn{ d\left(f^k(x_i),f^k(y)\right) \ \leq \ \mu\left([p(f^k(x_i)),p(f^k(y))]\right)} \\ & = & \mu\left(R_{\alpha}^k[p(x_i),p(y)]\right) \ \leq \ \varphi_{\tilde\nu}(R_{\alpha}^k(p(x_i))) \ . \end{eqnarray*} We get that \[ \frac{\countsep{n}(f,\delta,x_i,y)}{n}\ \leq \ \frac{\#\left\{0\leq k<n\;|\; \varphi_{\tilde\nu}(R_{\alpha}^k(p(x_i)))\geq\delta\right\}}{n} \] and using \eqref{property_ba_varphi_alpha} we know that the right-hand side converges to $\Leb(\{\varphi_{\tilde\nu}\geq\delta\})$ as $n\to\infty$. It remains to show that $\Leb(\{\varphi_{\tilde\nu}\geq\delta\})< 2\tilde\nu(\lceil 1/\delta\rceil+1)$. Suppose for a contradiction that this inequality does not hold. Then $\{\varphi_{\tilde\nu}\geq \delta\}$ is not contained in a union of fewer than $\lceil 1/\delta\rceil+1$ intervals of length $2\tilde\nu$. Consequently, there exist at least $\lceil 1/\delta\rceil+1$ points $\zeta_i\in\T^1$ with $\varphi_{\tilde\nu}(\zeta_i)\geq\delta$ and $d(\zeta_i,\zeta_j)\geq\tilde\nu$ for $i\neq j$. We thus obtain \[ \mu(\T^1)\ \geq \ \sum\limits_{i=1}^{\lceil 1/\delta\rceil+1} \mu([\zeta_i,\zeta_i+\tilde\nu]) \ = \ \sum\limits_{i=1}^{\lceil 1/\delta\rceil+1} \varphi_{\tilde\nu}(\zeta_i)\ \geq \ 1+\delta\ > \ 1 \ , \] which is a contradiction. This means $\varlimsup_{n\to\infty}\countsep{n}(f,\delta,x_i,y)/n \leq\Leb(\{\varphi_{\tilde\nu}\geq\delta\}) <2\tilde\nu(\lceil1/\delta\rceil+1)$, and since $y$ was arbitrary this shows that $M$ is an $(f,\delta,2\tilde\nu(\lceil 1/\delta\rceil+1))$-spanning set. This completes the proof. \end{proof} \subsection{Relations to power entropy}\label{PowerEntropy} Given a compact metric space $(X,d)$ and a continuous map $f:X\to X$, the {\em Bowen-Dinaburg metrics} are given by $d_n(x,y):=\max_{i=0}^{n-1}d(f^i(x),f^i(y))$.
A set $S\ssq X$ is called {\em $(f,\delta,n)$-separated}, for $\delta>0$ and $n\in\N$, if $d_n(x,y)\geq \delta$ for all $x\neq y\in S$. Let $\wh S(f,\delta,n)$ denote the maximal cardinality of an $(f,\delta,n)$-separated set. Then topological entropy, defined as \[ \htop(f) \ := \ \sup_{\delta>0} \nLim \frac{\log\wh S(f,\delta,n)}{n} \ , \] measures the exponential growth of these numbers; see, for example, \cite{Walters1982} for more information. If topological entropy is zero, then {\em power entropy} instead simply measures the polynomial growth rate, given by \[ h_{\textrm{pow}}(f) \ := \ \sup_{\delta>0} \varlimsup_{n\to\infty} \frac{\log \wh S(f,\delta,n)}{\log n} \ . \] We refer to \cite{HasselblattKatok2002HandbookPrincipalStructures} and \cite{Marco2013} for a more detailed discussion. Now, note that already one wandering point is enough to ensure that the power entropy is at least one \cite{Labrousse2013}. Given a Morse-Smale homeomorphism on a compact metric space, we hence conclude that the corresponding power entropy is positive, as claimed above. This shows that we may have $h_{\textrm{pow}}(f)>\fsc(f)$. Conversely, consider the map $f:\torus\to\torus,\ (x,y)\mapsto (x,x+y)$ where $\T^2:=\R^2/\Z^2$. Then given $z=(x,y)$ and $z'=(x',y')$, we have that \[ d_n(z,z') \ \leq \ n|x-x'|+|y-y'| \ , \] which implies that $\wh S(f,\delta,n) \ \leq \ \frac{C\cdot n}{\delta^2}$ for some constant $C>0$. Hence, $h_{\textrm{pow}}(f)\leq 1$. However, at the same time we have that if $x\neq x'$, then $z$ and $z'$ rotate in the vertical direction with different speeds, and this makes it easy to show that $\kreis\times\{0\}$ is an $(f,\delta,\nu)$-separated set for suitable $\delta,\nu>0$, so that $\Sep(f,\delta,\nu)=\infty$.
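To make the vertical-speed mechanism concrete, here is a minimal numerical sketch in Python (ours, purely illustrative and not part of the argument): for $z=(x,0)$ and $z'=(x',0)$ one has $f^k(z)=(x,kx)$, so the vertical distance of the two orbits at time $k$ is the circle distance of $k(x-x')$ from $0$, and for irrational $x-x'$ the frequency of times at which this distance is at least $\delta$ tends to $1-2\delta$ by equidistribution.

```python
# Sketch (not from the paper): estimate the frequency of delta-separation
# of the orbits of (x,0) and (x',0) under f(x,y) = (x, x+y) on T^2.
# Since f^k(x,y) = (x, y+kx), the vertical distance at time k equals the
# circle distance of k*(x - x') from 0.
import math

def circle_dist(a, b):
    """Distance on T^1 = R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def separation_frequency(dx, delta, n):
    """Frequency of times k < n with d(k*dx mod 1, 0) >= delta."""
    count = sum(1 for k in range(n) if circle_dist((k * dx) % 1.0, 0.0) >= delta)
    return count / n

dx = math.sqrt(2) - 1          # irrational difference x - x'
freq = separation_frequency(dx, delta=0.25, n=200_000)
print(freq)                    # close to 1 - 2*0.25 = 0.5
```

Since this frequency stays bounded away from zero for small $\delta$, any two points of $\kreis\times\{0\}$ with distinct first coordinates are $(f,\delta,\nu)$-separated for a suitable $\nu>0$, in line with $\Sep(f,\delta,\nu)=\infty$.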
Hence, we may also have $\fsc(f)>h_{\textrm{pow}}(f)$, showing that no inequality holds between the two quantities.\smallskip {\em Modified power entropy} $h_{\textrm{pow}}^*$ is defined in the same way as power entropy, with the only difference that the metrics $d_n$ in the definition are replaced by the {\em Hamming metrics} \[ d_n^*(x,y) \ := \ \frac{1}{n}\sum_{i=0}^{n-1} d(f^i(x),f^i(y)) \ . \] Since $d_n^*\leq d_n$, modified power entropy never exceeds power entropy, and it can be shown that for Morse-Smale systems it is always zero. The same is true, however, for Denjoy examples and Sturmian subshifts, so that modified power entropy does not seem suitable to detect topological complexity on the very fine level we are interested in here. The same example $f(x,y)=(x,x+y)$ as above shows that we may have $\fsc(f)>h_{\textrm{pow}}^*(f)$. An example for the opposite inequality is more subtle, but can be made such that it demonstrates at the same time the non-existence of a variational principle for the modified power entropy (a question that was left open in \cite{HasselblattKatok2002HandbookPrincipalStructures}). It will be contained in the forthcoming note \cite{GroegerJaeger2015ModifiedPowerEntropy}. \subsection{Besicovitch space}\label{BesicovitchSpace} In this section, we want to state some basic results concerning amorphic complexity in the context of symbolic systems. The corresponding proofs will be included in the forthcoming paper \cite{FG2014}, where amorphic complexity of symbolic systems is studied more systematically. Let $A$ be a finite set, $\Sigma_A:=A^{\N_0}$ and $\rho$ the Cantor metric on $\Sigma_A$ (see Section \ref{BasicExamples}). For a general continuous map $f:X\to X$ on a compact metric space $X$ and some $\delta>0$ we cannot expect that $\varlimsup_{n\to\infty}\countsep{n}(f,\delta,\,\cdot\,,\,\cdot)/n$ is a metric (not even a pseudo-metric, since the triangle inequality will usually fail).
However, this changes in the setting of symbolic dynamics. \begin{proposition} We have that $\big(\tilde d_\delta\big)_{\delta \in (0,1]}$, defined as \begin{align*} \tilde d_{\delta}(x,y)\ :=\ \varlimsup_{n\to\infty} \frac{\countsep{n}(\sigma,\delta,x,y)}{n}\quad\textnormal{for} \quad x,y\in\Sigma_A \ , \end{align*} is a family of bi-Lipschitz equivalent pseudo-metrics. \end{proposition} Note that $\tilde d_1$ is usually called the \emph{Besicovitch pseudo-metric} and it turns out to be especially useful for understanding certain dynamical behaviour of cellular automata (see, for example, \cite{BlanchardFormentiKurka1997} and \cite{CattaneoFormentiMargaraMazoyer1997}). Now, following a standard procedure, we introduce the equivalence relation \begin{align*} x \ \sim y \ : \ \Leftrightarrow \ \tilde d_{\delta}(x,y) \ = \ 0 \quad\textnormal{for}\quad x,y\in\Sigma_A \ . \end{align*} Due to the previous proposition, this relation is well-defined and independent of the chosen $\delta$. Denote the corresponding projection mapping by $[\,\cdot\,]$. We equip $\big[\Sigma_A\big]$ with the metric $d_\delta\left([x],[y]\right):= \tilde d_\delta\left(x,y\right)$, $[x]$, $[y]\in\big[\Sigma_A\big]$ for some $\delta \in (0,1]$ and call $\big(\big[\Sigma_A\big], d_\delta\big)$ the \emph{Besicovitch space}. Given a subshift $\Sigma\ssq\Sigma_A$, we also call $[\Sigma]$ the {\em Besicovitch space associated to $\Sigma$}. We have the following properties. \begin{theorem}[\cite{BlanchardFormentiKurka1997, CattaneoFormentiMargaraMazoyer1997}] The Besicovitch space $[\Sigma_A]$ is perfect, complete, pathwise connected and (topologically) infinite dimensional. However, it is neither locally compact nor separable. \end{theorem} Note that we can define the shift map on the Besicovitch space as well and that it becomes an isometry. Before we proceed, we need to give the definition of box dimension in general metric spaces $(X,d)$. 
The \emph{lower} and \emph{upper box dimension} of a totally bounded subset $E\subseteq X$ are defined as \begin{align*} \underline\Dim_B(E)\ := \ \varliminf\limits_{\eps\to 0} \frac{\log N_\eps(E)}{-\log\eps} \quad\textnormal{and}\quad \overline\Dim_B(E)\ := \ \varlimsup\limits_{\eps\to 0} \frac{\log N_\eps(E)}{-\log\eps} \ , \end{align*} where $N_\eps(E)$ is the smallest number of sets of diameter strictly smaller than $\eps$ needed to cover $E$. If $\underline\Dim_B(E)=\overline\Dim_B(E)$, then we call their common value $\Dim_B(E)$ the \emph{box dimension of $E$}. Further, let $M_\eps(E)$ be the maximal cardinality of an $\eps$-separated subset of $E$, that is, a set $S\ssq E$ with $d(x,y)\geq\eps$ for all $x\neq y\in S$. Then one can replace $N_\eps(E)$ by $M_\eps(E)$ in the definition of box dimension \cite[Proposition 1.4.6]{Edgar1998}. Now, suppose $(\Sigma,\sigma)$ is a subshift of $(\Sigma_A,\sigma)$. If $\left.\sigma\right|_{\Sigma}$ has finite separation numbers, we observe for each $\delta\in (0,1]$ that \[ \Sep(\left.\sigma\right|_{\Sigma},\delta,\nu)=M_\nu([\Sigma]) \quad\textnormal{and}\quad \Span(\left.\sigma\right|_{\Sigma},\delta,\nu)=N_\nu([\Sigma]) \quad\textnormal{in}\quad \big(\big[\Sigma_A\big], d_\delta\big) \] for all $\nu\in (0,1]$. This immediately implies \begin{proposition} Let $\Sigma$ be a subshift of $\Sigma_A$. Then \begin{itemize} \item[(a)] $\left.\sigma\right|_{\Sigma}$ has finite separation numbers if and only if $[\Sigma]$ is totally bounded in $\big[\Sigma_A\big]$, and \item [(b)] in this setting, $\uac(\left.\sigma\right|_{\Sigma})=\underline\Dim_B([\Sigma])$ and $\oac(\left.\sigma\right|_{\Sigma})=\overline\Dim_B([\Sigma])$. 
\end{itemize} \end{proposition} This means for example that all regular Toeplitz subshifts $\Sigma$ (see Section \ref{RegularToeplitzFlows}) have a totally bounded associated Besicovitch space, using Theorem \ref{t.finite_separation_numbers} (in fact one can show by a more direct argument that $[\Sigma]$ is even compact), and that we can find regular Toeplitz subshifts with associated Besicovitch spaces of arbitrarily high box dimension, see Theorem \ref{t.toeplitz_sharpbound_examples}. An example of a minimal and uniquely ergodic subshift with zero topological entropy such that its projection is not totally bounded is the subshift given by the shift orbit closure of the well-known Prouhet-Thue-Morse sequence. (See, for example, \cite{AlloucheShallit1992} for the definition of this sequence and further information.) The fact that the projection is not totally bounded follows directly from the strict positivity of the \emph{aperiodicity measure} of the Prouhet-Thue-Morse sequence $x$, defined as $\inf_{m\in\N} \varliminf_{n\to\infty} S_n(\sigma,1,x,\sigma^m(x))/n$, see \cite{PritykinUlyashkina2009, MorseHedlund1938}. \section{Quantitative analysis of almost sure 1-1 extensions of isometries} \label{QuantitativeAutomorphic} The aim of this section is to give a quantitative version of the argument in the proof of Theorem~\ref{t.finite_separation_numbers} in order to obtain an upper bound for amorphic complexity in this situation. For the whole section let $X$ and $\Xi$ be compact metric spaces and $f:X\to X$ an almost sure 1-1 extension of $g:\Xi\to\Xi$, with factor map $h$. Further, assume that $g$ is a minimal isometry, with unique invariant probability measure $\mu$.\footnote{Note that a minimal isometry is necessarily uniquely ergodic.} In this case, it is easy to check that the measure of an $\eps$-ball $B_\eps(\xi)$ does not depend on $\xi\in\Xi$.
For the scaling of this measure as $\eps\to 0$, we have \begin{lemma} In the above situation, we get \begin{equation*} \varlimsup_{\eps\to 0} \frac{\log\mu(B_\eps(\xi))}{\log \eps} \ = \ \overline{\Dim}_B(\Xi) \end{equation*} for all $\xi\in\Xi$ and the analogous equality holds for the limit inferior. \end{lemma} \begin{proof} Recall that we can also use $M_\eps(\Xi)$ in the definition of the box dimension of $\Xi$ (see Section \ref{BesicovitchSpace}). Let $\hat\mu(\eps):=\mu(B_\eps(\xi))$, where $\xi\in\Xi$ is arbitrary, and suppose $S\ssq \Xi$ is an $\eps$-separated subset with cardinality $M_\eps(\Xi)$. Observe that the $\eps/2$-balls $B_{\eps/2}(\xi)$ with $\xi\in S$ are pairwise disjoint. We obtain $1=\mu(\Xi) \geq \sum_{\xi\in S}\hat\mu(\eps/2)$ and thus $M_{\eps}(\Xi)\leq 1/\hat\mu(\eps/2)$. Hence, \begin{eqnarray*} \overline{\Dim}_B(\Xi) & = & \varlimsup_{\eps\to 0} \frac{\log M_{\eps}(\Xi)}{-\log \eps} \ \leq \ \varlimsup_{\eps\to 0} \frac{\log \hat\mu(\eps/2)}{\log \eps} \ = \ \varlimsup_{\eps\to 0} \frac{\log\hat\mu(\eps)}{\log \eps} \ . \end{eqnarray*} Conversely, the $\eps$-balls $B_{\eps}(\xi)$ with centres $\xi$ in $S$ cover $\Xi$, and this easily leads to the reverse inequality. \end{proof} By the Minkowski characterisation of box dimension, we have for $E\ssq \Xi$ \begin{equation} \label{e.Minkowski} \overline{\Dim}_B(E) \ = \ \overline{\Dim}_B(\Xi) - \varliminf_{\eps\to 0} \frac{\log\mu(B_\eps(E))}{\log \eps} \ . \end{equation} The proof of this fact in the setting above is the same as in Euclidean space, see, for example, \cite{Falconer2007FractalGeometry}. We denote by $\eta_{\delta}(\eps)$ the constant given by Lemma~\ref{l.eta_function} and let \begin{equation}\label{e.gamma_def} \gamma(h) \ := \ \varlimsup_{\delta\to 0}\varlimsup_{\eps\to 0} \ \frac{\log\eta_\delta(\eps)}{\log\eps} \ . 
\end{equation} This is the scaling factor from Theorem \ref{t.automorphic_quantitative_intro}, which we restate here as \begin{theorem}\label{t.automorphic_quantitative} Suppose that the upper box dimension of $\Xi$ is finite and strictly positive and $\gamma(h)>0$. Then under the above assumptions, we have \begin{equation}\label{e.automorphic_upper_bound} \oac(f) \ \leq \ \frac{\overline\Dim_B(\Xi)\cdot \gamma(h)} {\overline\Dim_B(\Xi)-\sup_{\delta>0}\overline{\Dim}_B(E_\delta)} \ , \end{equation} where $E_\delta=\{\xi\in \Xi\mid \diam(h^{-1}(\xi))\geq \delta\}$. \end{theorem} \begin{proof} Without loss of generality, we assume that $\gamma(h)$ is finite and fix $\delta>0$. Going back to the end of the proof of Theorem~\ref{t.finite_separation_numbers}, we find that according to its definition the number $N$ in \eqref{e.automorphic_Sep_bound} is equal to $M_{\eta_\delta(\eps)}(\Xi)$. Thus, we have already shown that if $\nu>\mu(B_\eps(E_\delta))$ for some $\eps>0$, then $\Sep(f,\delta,\nu) \leq M_{\eta_\delta(\eps)}(\Xi)$. Now, note that $\mu(B_\eps(E_\delta))$ is monotonically decreasing to $0$ as $\eps\to 0$. For $\nu$ small enough choose $k\in\N$ such that $\mu(B_{2^{-k-1}}(E_\delta))<\nu\leq \mu(B_{2^{-k}}(E_\delta))$. We obtain \begin{eqnarray*} \oac(f,\delta) & \leq & \varlimsup_{k\to\infty} \frac{\log M_{\eta_\delta(2^{-k-1})}(\Xi)}{-\log\mu(B_{2^{-k}}(E_\delta))} \\ & = & \varlimsup_{k\to\infty} \frac{\log M_{\eta_\delta(2^{-k-1})}(\Xi)}{-\log\eta_\delta(2^{-k-1})} \cdot \frac{\log\eta_\delta(2^{-k-1})}{\log 2^{-k-1}} \cdot \frac{\log 2^{-k-1}}{\log\mu(B_{2^{-k}}(E_\delta))} \\ & \leq & \overline\Dim_B(\Xi)\cdot \gamma(h) \cdot\left(\varliminf_{k\to\infty} \frac{\log\mu(B_{2^{-k}}(E_\delta))}{\log 2^{-k}}\right)^{-1} \\ & = & \frac{\overline\Dim_B(\Xi)\cdot \gamma(h)} {\overline\Dim_B(\Xi)-\overline{\Dim}_B(E_\delta)} \ , \end{eqnarray*} where we use \eqref{e.Minkowski} for the last equality.
Taking the supremum over all $\delta>0$ yields \eqref{e.automorphic_upper_bound}. \end{proof} \section{Regular Toeplitz flows}\label{RegularToeplitzFlows} Inspired by earlier constructions of almost periodic functions by Toeplitz, the notions of Toeplitz sequences and Toeplitz subshifts or flows were introduced by Jacobs and Keane in 1969 \cite{JacobsKeane1969}. Since then, these systems have been used by various authors to provide a series of interesting examples of symbolic dynamics with intriguing dynamical properties, see for example \cite{MarkleyPaul1979PositiveEntropyToeplitzFlows, Williams1984ToeplitzFlows} or \cite{Downarowicz2005} and references therein. In what follows, we will study the amorphic complexity of so-called regular Toeplitz subshifts. Let $A$ be a finite alphabet, $\Sigma_A=A^\I$ with $\I=\N_0$ or $\Z$ and $\rho$ the Cantor metric on $\Sigma_A$ (see Section \ref{BasicExamples}). Assume that $\omega \in \Sigma_A$ is a non-periodic Toeplitz sequence with associated Toeplitz subshift $(\Sigma_\omega,\sigma)$, as defined in Section~\ref{Intro}. Given $p\in\N$ and $x=(x_k)_{k\in\I}\in\Sigma_A$, let \[ \Per(p,x):=\{k\in\I\;|\;x_{k}=x_{k+p\ell} \textnormal{ for all }\ell\in\N\} \ . \] We call the $p$-periodic part of $\omega$ the \emph{$p$-skeleton of $\omega$}. To be more precise, define the $p$-skeleton of $\omega$, denoted by $S(p,\omega)$, as the sequence obtained by replacing $\omega_k$ with the new symbol `$\ast$' for all $k\notin\Per(p,\omega)$. Note that the $p$-skeletons of two arbitrary points in $\Sigma_{\omega}$ coincide after shifting one of them by at most $p-1$ positions. We say that $p$ is an \emph{essential period of $\omega$} if $\Per(p,\omega)$ is non-empty and does not coincide with $\Per(\tilde p,\omega)$ for any $\tilde p<p$.
A \emph{weak periodic structure of $\omega$} is a sequence $(p_{\ell})_{\ell\in\N}$ such that each $p_{\ell}$ divides $p_{\ell+1}$ and \begin{align}\label{Toeplitz_periods} \bigcup\limits_{\ell\in\N}\Per(p_{\ell},\omega)=\I \ . \end{align} If, additionally, all the $p_{\ell}$'s are essential, we call $(p_{\ell})_{\ell\in\N}$ a \emph{periodic structure of $\omega$}. For every (non-periodic) Toeplitz sequence we can find at least one periodic structure \cite{Williams1984ToeplitzFlows}. \begin{remark}\label{remark_non_essential_ps} Note that from each weak periodic structure we can obtain a periodic structure in the following way. Suppose $(p_{\ell})_{\ell\in\N}$ is a weak periodic structure of $\omega$. Without loss of generality, we can assume that $\Per(p_{\ell},\omega)\neq\emptyset$ and $\Per(p_{\ell},\omega)\subsetneq\Per(p_{\ell+1},\omega)$ for all $\ell\in\N$ (recall that $\omega$ is non-periodic). For each $p_{\ell}$ choose the smallest $\tilde p_{\ell}\in\N$ such that $\Per(\tilde p_{\ell},\omega)$ coincides with $\Per(p_{\ell},\omega)$. Then by definition $\tilde p_{\ell}$ is an essential period. Since $p_{\ell}$ divides $p_{\ell+1}$ we have $\Per(\tilde p_{\ell},\omega) \subset\Per(\tilde p_{\ell+1},\omega)$. The next lemma and the minimality of the $\tilde p_{\ell}$'s imply that $\tilde p_{\ell}$ divides $\tilde p_{\ell+1}$ for each $\ell\in\N$, so that $(\tilde p_{\ell})_{\ell\in\N}$ is a periodic structure. \end{remark} The next lemma is probably well-known to experts and we omit the proof here. \begin{lemma}\label{Toeplitz_gcd} If $\Per(p,x)\subseteq\Per(q,x)$, then $\Per(\gcd(p,q),x)=\Per(p,x)$ where $x\in\Sigma_A$ and $p,q\in\N$. \end{lemma} Given $p\in\N$, we define the relative density of the $p$-skeleton of $\omega$ by \[ D(p)\ := \ \frac{\#(\Per(p,\omega)\cap [0,p-1])}{p} \ . \] Since $\omega$ is non-periodic, we have $D(p)\leq 1-1/p$.
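The quantities $\Per(p,\omega)$ and $D(p)$ are easy to explore numerically. The following Python sketch (ours, for illustration only; it checks the periodicity condition along a finite prefix, and the alternating hole-filling scheme is just one standard way to generate a regular Toeplitz sequence) produces a Toeplitz sequence with weak periodic structure $p_\ell=2^\ell$, for which $D(2^\ell)=1-2^{-\ell}$.

```python
def toeplitz_prefix(n):
    """Prefix of a Toeplitz sequence built by successive hole filling:
    stage l fills every second remaining hole, alternating the symbols
    0 and 1; this yields the weak periodic structure p_l = 2^l."""
    seq = [None] * n
    symbol = 0
    while any(s is None for s in seq):
        holes = [k for k, s in enumerate(seq) if s is None]
        for k in holes[::2]:
            seq[k] = symbol
        symbol = 1 - symbol
    return seq

def density_D(p, seq):
    """Approximate D(p): the relative density of the p-skeleton, with
    Per(p, omega) checked only along the given finite prefix."""
    n = len(seq)
    periodic = [k for k in range(p)
                if all(seq[k] == seq[k + p * l]
                       for l in range(1, (n - 1 - k) // p + 1))]
    return len(periodic) / p

omega = toeplitz_prefix(2 ** 12)
print([density_D(2 ** l, omega) for l in range(1, 4)])  # [0.5, 0.75, 0.875]
```

For this sequence the computed densities $D(2^\ell)=1-2^{-\ell}$ realise the general bound $D(p)\leq 1-1/p$ with equality along the periodic structure.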
For a (weak) periodic structure $(p_{\ell})_{\ell\in\N}$, the densities $D(p_{\ell})$ are non-decreasing in $\ell$ and we say that $(\Sigma_{\omega},\sigma)$ is a \emph{regular Toeplitz subshift} if $\lim_{\ell\to\infty}D(p_{\ell})=1$. Note that regularity of a Toeplitz subshift does not depend on the chosen (weak) periodic structure (use \eqref{Toeplitz_periods} and Lemma \ref{Toeplitz_gcd}). It is well-known that a regular Toeplitz subshift is an almost sure 1-1 extension of a minimal isometry (an odometer) \cite{Downarowicz2005}. Thus, we obtain from Theorem~\ref{t.finite_separation_numbers} that its asymptotic separation numbers are finite. However, as mentioned in the introduction, a quantitative analysis is possible and yields the following. \begin{theorem} \label{t.toeplitz_estimate} Suppose $(\Sigma_{\omega},\sigma)$ is a regular Toeplitz subshift and let $(p_{\ell})_{\ell\in\N}$ be a (weak) periodic structure of $\omega$. For $\delta,s>0$ we have \[ \varlimsup\limits_{\nu\to 0} \frac{\Sep(\sigma,\delta,\nu)}{\nu^{-s}} \ \leq \ C\cdot\varlimsup\limits_{\ell\to\infty} \frac{p_{\ell+1}}{(1-D(p_{\ell}))^{-s}} \ , \] with $C=C(\delta,s)>0$. \end{theorem} Note that this directly implies Theorem~\ref{t.toeplitz}. \begin{proof} Recall that since $\omega$ is a regular Toeplitz sequence, the densities $D(p_{\ell})$ are non-decreasing and converge to $1$. Choose $m\in\N$ with $2^{-m}<\delta\leq 2^{-m+1}$ and $\ell\in\N$ such that \begin{align} \label{toeplitz_upper_bound_cond_nu} (2m+1)2(1-D(p_{\ell+1}))\ < \ \nu\ \leq\ (2m+1)2(1-D(p_{\ell})) \ . \end{align} Then we have \[ \Sep(\sigma,\delta,\nu)\ \leq \ \Sep(\sigma,2^{-m},(2m+1)2(1-D(p_{\ell+1}))) \] and claim that the second term is bounded from above by $p_{\ell+1}$. Assume for a contradiction that there exists a $(\sigma, 2^{-m},(2m+1)2(1-D(p_{\ell+1})))$-separated set $S\subseteq\Sigma_{\omega}$ with more than $p_{\ell+1}$ elements.
Then, there are at least two points $x=(x_k)_{k\in\I}$, $y=(y_k)_{k\in\I}\in S$ with the same $p_{\ell+1}$-skeleton. This means $x$ and $y$ can differ at most at the remaining positions $k\notin\Per(p_{\ell+1},x)=\Per(p_{\ell+1},y)$. Using the fact that $\rho(x,y)\geq 2^{-m}$ if and only if $x_k\neq y_k$ for some $k\in\I$ with $|k|\leq m$, we obtain \begin{eqnarray*} \lefteqn{ \varlimsup\limits_{n\to\infty}\frac{\countsep{n}(\sigma,2^{-m},x,y)}{n} \ \leq \ (2m+1)\varlimsup\limits_{n\to\infty} \frac{\#\left\{0\leq k<n\;|\;x_k\neq y_k\right\}}{n} } \\ & \leq & (2m+1)\varlimsup\limits_{n\to\infty} \frac{\#([0,n-1]\smin \Per(p_{\ell+1},\omega))}{n} \ = \ (2m+1)(1-D(p_{\ell+1})) \ . \end{eqnarray*} However, this contradicts \eqref{toeplitz_upper_bound_cond_nu}. Hence, we obtain \[ \frac{\Sep(\sigma,\delta,\nu)}{\nu^{-s}} \ \leq \ C(\delta,s)\cdot\frac{p_{\ell+1}}{(1-D(p_{\ell}))^{-s}} \ , \] where $C(\delta,s):=(2(2m+1))^s$. Note that $m$ only depends on $\delta$. Taking the limit superior yields the desired result. \end{proof} For the remainder of this section, our aim is to provide a class of examples demonstrating that the above estimate is sharp and that the amorphic complexity of regular Toeplitz flows takes at least a dense subset of values in $[1,\infty)$. To that end, we first recall an alternative definition of Toeplitz sequences (cf.\ \cite{JacobsKeane1969}). Consider the extended alphabet $\mathcal A:=A\cup\{\ast\}$ where we can think of $\ast$ as a hole or placeholder like in the definition of the $p$-skeleton. Then, $\omega\in\Sigma_A$ is a Toeplitz sequence if and only if there exists an \emph{approximating sequence} $(\omega^{\ell})_{\ell\in\N}$ of periodic points in $(\Sigma_{\mathcal A},\sigma)$ such that (i) for all $k\in\I$ we have $\omega_k^{\ell+1}=\omega_k^{\ell}$ as soon as $\omega_k^{\ell}\in A$ for some $\ell\in\N$ and (ii) $\omega_k=\lim_{\ell\to\infty}\omega_k^{\ell}$, see \cite{Eberlein1971}.
Such an approximating sequence of a Toeplitz sequence is not unique. For example, every sequence of $p_{\ell}$-skeletons $(S(p_{\ell},\omega))_{\ell\in\N}$ with $(p_{\ell})_{\ell\in\N}$ a (weak) periodic structure satisfies these properties. Let us interpret Theorem~\ref{t.toeplitz_estimate} in this context. For a $p$-periodic point $x\in\Sigma_{\mathcal A}$, we can define the relative density of the holes in $x$ by \[ r(x) \ := \ \frac{\#\{0\leq k<p\;|\; x_k=\ast\}}{p} \ . \] Note that $D(p)=1-r\left(S(p,\omega)\right)$ for every $p\in\N$. Suppose $(\omega^{\ell})_{\ell\in\N}$ is an approximating sequence of $\omega$. We say $(p_{\ell})_{\ell\in\N}$ is a \emph{sequence of corresponding periods of $(\omega^{\ell})_{\ell\in\N}$} if $p_{\ell}$ divides $p_{\ell+1}$ and $\sigma^{p_{\ell}}(\omega^{\ell})=\omega^{\ell}$ for each $\ell\in\N$. We have that $r(\omega^{\ell})\geq 1/p_{\ell}$. Moreover, $r(\omega^{\ell}) \geq 1-D(p_\ell)$, so that Theorem~\ref{t.toeplitz_estimate} implies \begin{corollary} \label{cor_upper_bound_ham} Assume $(\Sigma_{\omega},\sigma)$ is a regular Toeplitz subshift. Let $(\omega^{\ell})_{\ell\in\N}$ be an approximating sequence of $\omega$ and let $(p_{\ell})_{\ell\in\N}$ be a sequence of corresponding periods of $(\omega^{\ell})_{\ell\in\N}$. Furthermore, assume $p_{\ell+1}\leq C p_{\ell}^t$ and $r(\omega^{\ell})\leq K/p_{\ell}^u$ for $\ell$ large enough, where $C,t\geq 1$, $u\in (0,1]$ and $K>0$. Then \[ \oac(\sigma)\leq\frac{t}{u} \ . \] \end{corollary} For the construction of examples, it will be convenient to use so-called $(p,q)$-Toeplitz (infinite) words, as introduced in \cite{CassaigneKarhumaeki1997}. Let $\I=\N_0$. Suppose $v$ is a finite and non-empty word with letters in $\mathcal A$ and at least one entry distinct from $\ast$. Let $\abs{v}$ be its length and $\abs{v}_{\ast}$ be the number of holes in $v$. 
We use the notation $\overline v\in\Sigma_{\mathcal A}$ for the one-sided periodic sequence that is created by repeating $v$ infinitely often. Define the sequence $(T_{\ell}(v))_{\ell\in\N}$ recursively by \[ T_{\ell}(v) \ := \ F_{v}(T_{\ell-1}(v)) \ , \] where $T_0(v):=\overline\ast$ and $F_{v}:\Sigma_{\mathcal A}\to\Sigma_{\mathcal A}$ assigns to each $x\in\Sigma_{\mathcal A}$ the sequence that is obtained from $\overline v$ by replacing the subsequence of all occurrences of $\ast$ in $\overline v$ by $x$. We get that $(T_{\ell}(v))_{\ell\in\N}$ is an approximating sequence and denote the corresponding Toeplitz sequence by $T(v)$ \cite{CassaigneKarhumaeki1997}. Setting $p:=\abs{v}$, $q:=\abs{v}_{\ast}$ and $d:=\gcd(p,q)$, we say $T(v)$ is a \emph{$(p,q)$-Toeplitz word}. One particularly nice feature of $(p,q)$-Toeplitz words is that in order to exclude periodicity one only has to check a short prefix of the sequence. \begin{theorem}[{\cite[Theorem 4]{CassaigneKarhumaeki1997}}] \label{thm_periodicity_Toeplitz_words} Let $T(v)$ be a $(p,q)$-Toeplitz word. Then $T(v)$ is periodic if and only if its prefix of length $p$ is $d$-periodic. \end{theorem} \begin{theorem}\label{t.toeplitz_sharpbound_examples} Suppose $m\in\N$ and let $0{\,^m}1$ be the word starting with $m$ zeros and ending with a single one. Furthermore, let $v$ be a word with letters in $\mathcal A=\{0,1,\ast\}$ such that $1\leq\abs{v}_{\ast}\leq\abs{v}\leq m$. Then $\omega:=T(0{\,^m}1v)$ is a $(p,q)$-Toeplitz word (with $p=m+1+\abs{v}$ and $q=\abs{v}_{\ast}$) and the corresponding regular Toeplitz subshift $(\Sigma_{\omega},\sigma)$ has amorphic complexity \[ \fsc(\sigma)=\frac{\log p/d}{\log p/q} \ . \] \end{theorem} \begin{proof} Define for each $n\in\N$ and $x=(x_k)_{k\in\N_0}$, $y=(y_k)_{k\in\N_0}\in\Sigma_{\mathcal A}$ \[ S_{n}(x,y) \ := \ \#\left\{0\leq k<n\;|\;x_k, y_k\neq\ast\textnormal{ and } x_k\neq y_k\right\} \ .
\] Observe that \begin{align*} \begin{split} \lefteqn{S_{p}\big(T(0{\,^m}1v),\sigma^{j}(T(0{\,^m}1v))\big)}\\ &\quad\geq \ S_{p}\big(T_{1}(0{\,^m}1v), \sigma^{j}(T_{1}(0{\,^m}1v))\big) \ = \ S_{p}\big(\overline{0{\,^m}1v},\sigma^{j}(\overline{0{\,^m}1v})\big) \ \geq \ 1 \end{split} \end{align*} for every $0<j<p$ due to the special form of the prefix $0{\,^m}1$ and the assumption $\abs{v}\leq m$. This directly implies that $\omega$ is non-periodic, using Theorem \ref{thm_periodicity_Toeplitz_words}. To get an upper bound for $\oac(\sigma)$, note that $(p^{\ell}/d^{\ell-1})_{\ell\in\N}$ is a sequence of corresponding periods of $(T_{\ell}(0{\,^m}1v))_{\ell\in\N}$ and $r(T_{\ell}(0{\,^m}1v))=q^{\ell}/p^{\ell}$ for each $\ell\in\N$. This is proved easily by induction: The statement is true for $T_1(0{\,^m}1v) = \overline{0{\,^m}1v}$. When going from $\ell$ to $\ell+1$, by the induction hypothesis each of the $p^\ell/d^{\ell-1}$-periodic blocks of $T_\ell(0{\,^m}1v)$ has $q^{\ell}/d^{\ell-1}$ free positions. In order to accommodate $q/d$ such periodic blocks of $T_\ell(0{\,^m}1v)$ it needs $p^{\ell}/d^\ell$ of the $p$-periodic blocks of $\overline{0{\,^m}1v}$ with $q$ free positions each. Thus, the resulting periodic block of $T_{\ell+1}(0{\,^m}1v)$ has length $p^{\ell+1}/d^\ell$ and $q^{\ell+1}/d^{\ell}$ free positions. Now, Corollary \ref{cor_upper_bound_ham} gives the desired upper bound. In order to prove the lower bound, we show by a similar induction that \begin{align}\label{0m1_lb} S_{p^{\ell}/d^{\ell-1}}\big(T_{\ell}(0{\,^m}1v), \sigma^{j}(T_{\ell}(0{\,^m}1v))\big) \ \geq \ q^{\ell-1}/d^{\ell-1} \end{align} for every $0<j<p^{\ell}/d^{\ell-1}$ and $\ell\in\N$. 
If $j$ is not a multiple of $p$, then by the induction hypothesis each $p^\ell/d^{\ell-1}$-periodic block of $T_\ell(0{\,^m}1v)$ has $p/d\cdot q^{\ell-2}/d^{\ell-2}$ mismatches with $\sigma^j(T_\ell(0{\,^m}1v))$ coming from the mismatches of the $p/d$ contained $p^{\ell-1}/d^{\ell-2}$-periodic blocks of $T_{\ell-1}(0{\,^m}1v)$ with $\sigma^j(T_{\ell-1}(0{\,^m}1v))$. If $j$ is a multiple of $p$, then the mismatches result in a similar way from the shift in the sequences that are inserted into $\overline{0{\,^m}1v}$, since $\sigma^{ip}(T_\ell(0{\,^m}1v))=F_v(\sigma^{iq}(T_{\ell-1}(0{\,^m}1v)))$. Note that the fact that $p^{\ell}/d^{\ell-1}$ is a minimal period comes from the assumption that $d=\gcd(p,q)$. As a direct consequence from \eqref{0m1_lb}, we obtain that for all $\ell\in\N$ and $0\leq i<j<p^{\ell}/d^{\ell-1}$ \[ S_{p^{\ell}/d^{\ell-1}}\big(\sigma^{i}(T_{\ell}(0{\,^m}1v)), \sigma^{j}(T_{\ell}(0{\,^m}1v))\big)\geq q^{\ell-1}/d^{\ell-1} \ . \] Hence, \[ \{\omega,\sigma(\omega),\dots,\sigma^{p^{\ell}/d^{\ell-1}-1}(\omega)\} \] is a $(\sigma,1,q^{\ell-1}/p^{\ell})$-separated set. For $\nu$ small enough choose $\ell\in\N$ such that $q^{\ell}/p^{\ell+1}<\nu\leq q^{\ell-1}/p^{\ell}$ and observe \[ \frac{\Sep(\sigma,\delta,\nu)}{\nu^{-s}} \ \geq \ \frac{\Sep(\sigma,1,q^{\ell-1}/p^{\ell})}{\nu^{-s}} >\frac{p^{\ell}}{d^{\ell-1}}\cdot\frac{q^{\ell s}}{p^{(\ell+1)s}} \] for $\delta, s>0$. This yields $\uac(\sigma)\geq(\log p/d)/(\log p/q)$. \end{proof} As the set $\{\log p/\log (p/q)\mid p,q\in\N,\, \gcd(p,q)=1\}$ is dense in $[1,\infty)$, we obtain \begin{corollary}\label{c.toeplitz_densevalues} In the class of $(p,q)$-Toeplitz words, amorphic complexity takes (at least) a dense set of values in $[1,\infty)$. \end{corollary} \begin{remark} From the results in \cite[Theorem 5]{CassaigneKarhumaeki1997}, one can directly conclude that for all (non-periodic) $(p,q)$-Toeplitz words the power entropy equals $(\log p/d)/(\log p/q)$.
Thus, for the examples provided by the last theorem, power entropy and amorphic complexity coincide. It would be interesting to know if this is true for all $(p,q)$-Toeplitz words, or if not, in which cases this equality holds. \end{remark} \section{Strange non-chaotic attractors in pinched skew products} \label{PinchedSystems} As we have mentioned in previous sections, one of the main reasons for considering amorphic complexity is the fact that it gives value zero to Morse-Smale systems, while power entropy assigns a positive value to these. The latter is unsatisfactory from an abstract viewpoint, since such dynamics should certainly be considered entirely trivial. At the same time, however, this issue may also raise practical problems. In more complicated systems, attractor-repeller dynamics may coexist with other more subtle dynamical mechanisms. In this case, the contribution of the Morse-Smale component to power entropy may obscure other effects, and two systems may not be distinguishable despite a clearly different degree of dynamical complexity. Of course, the computation of topological complexity invariants in more complex non-linear dynamical systems will generally be difficult and technically involved. Nevertheless, we want to include one classical example in this section which fits the situation described above. In order to keep the exposition brief, we concentrate on a positive qualitative result for amorphic complexity and refrain from going into detail concerning (modified) power entropy.\smallskip Recall that $\T^1=\R/\Z$, $d$ is the usual metric on $\T^1$ and $\Leb$ denotes the Lebesgue measure on $\T^1$ (cf.\ Section \ref{BasicExamples}). Suppose $f:\kreis\times[0,1]\to\kreis\times[0,1]$ is a continuous map of the form \begin{equation} \label{eq:1} f(\theta,x) \ = \ (\theta+\omega\mod 1,f_\theta(x)) \ , \end{equation} where $\omega\in\kreis$ is irrational. For the sake of simplicity we will suppress `$\textrm{mod } 1$' in the following.
The maps $f_\theta:[0,1]\to[0,1]$ are called {\em fibre maps}; $f$ itself is often called a {\em quasiperiodically forced (qpf) 1D map}. If all the fibre maps in (\ref{eq:1}) are monotonically increasing, then the topological entropy of $f$ is zero.\footnote{This is a direct consequence of \cite[Theorem 17]{bowen:1971}.} Notwithstanding, systems of this type may exhibit considerable dynamical complexity. A paradigm example in this context is the class of so-called {\em pinched skew products}, introduced by Grebogi et al.\ in \cite{grebogi/ott/pelikan/yorke:1984} and later treated rigorously by Keller \cite{keller:1996}. In order to fix ideas, we concentrate on the specific parameter family \begin{equation} \label{eq:2} f(\theta,x) \ = \ (\theta+\omega,\tanh(\alpha x)\cdot \sin(\pi \theta)) \ , \end{equation} which is close to the original example introduced by Grebogi and his coworkers. The crucial features of this system are that \romanlist \item the zero line $\kreis\times\{0\}$ is $f$-invariant; \item the fibre maps $f_\theta:x\mapsto \tanh(\alpha x)\cdot \sin(\pi\theta)$ are all concave; \item the fibre map $f_{0}$ sends the whole interval $[0,1]$ to $0$. \listend Item (iii) is often referred to as {\em pinching}. It is the defining property of the general class of pinched skew products, as introduced in \cite{glendinning:2002}. Note that all of the arguments and statements in this section immediately carry over to a whole class of fibre maps and higher-dimensional rotations in the base (see the set $\mathcal T^\ast$ and Example 4.1 in \cite{GroegerJaeger2013}). A function $\varphi:\kreis\to[0,1]$ is called an {\em invariant graph} of (\ref{eq:1}) if $f_\theta(\varphi(\theta))=\varphi(\theta+\omega)$ for all $\theta\in\kreis$.
In this case, the associated point set $\Phi:=\{(\theta,\varphi(\theta))\mid \theta\in\kreis\}$ is $f$-invariant.\footnote{Slightly abusing notation, the term {\em invariant graph} is used both for the function and its graph.} If the fibre maps are all differentiable, the {\em (vertical) Lyapunov exponent} of $\varphi$ is defined as \begin{equation}\label{eq:3} \lambda(\varphi) \ := \ \int_{\kreis} \log|f'_\theta(\varphi(\theta))| \ d\theta \ . \end{equation} If $\lambda(\varphi)\leq 0$, then $\Phi$ is an attractor in the sense of Milnor \cite{milnor:1985} (see, for example, \cite[Proposition 3.3]{jaeger:2003}). If $\varphi$ is continuous, then $\Phi$ is even a topological attractor, and its basin of attraction contains an open annular neighbourhood of $\Phi$. In case $\varphi$ is not continuous, the attractor $\Phi$ combines non-chaotic dynamics (zero entropy, absence of positive Lyapunov exponents) with a complicated topological structure (related to the absence of continuity, see \cite{stark:2003,jaeger:2007} for more information). Due to this combination of properties, it is called a {\em strange non-chaotic attractor (SNA)}. As mentioned above, the zero line $\Phi_0:=\kreis\times\{0\}$ is an invariant graph of (\ref{eq:2}). An elementary computation yields $\lambda(\varphi_0) = \log \alpha - \log 2$, since $f'_\theta(0)=\alpha\sin(\pi\theta)$ and $\int_{\kreis}\log\sin(\pi\theta)\ d\theta=-\log 2$. If $\alpha\leq 2$, so that $\lambda(\varphi_0)\leq 0$, then $\Phi_0$ is the global attractor of the system, meaning that $\Phi_0=\bigcap_{n\in\N} f^n(\kreis\times[0,1])$. Accordingly, all orbits converge to the zero line, that is, $\nLim f^n_\theta(x)=0$, where $f^n_\theta=f_{\theta+(n-1)\omega}\circ\ldots\circ f_\theta$. If $\alpha>2$, then this picture changes drastically. Now, a second invariant graph $\varphi^+$ with negative Lyapunov exponent appears, which satisfies $\varphi^+(\theta)>0$ for $\Leb$-a.e.\ $\theta\in\kreis$ \cite{keller:1996}. However, at the same time there exists a dense set of $\theta$'s with $\varphi^+(\theta)=0$.
This latter fact is easy to see: by property (iii) above, $f_{0}(\varphi^+(0))=0$ and hence $\varphi^+(\omega)=0$ by invariance; since the fibre maps fix $0$, it follows inductively that $\varphi^+$ vanishes on the dense set $\{n\omega\mid n\in\N\}$. Thus, $\varphi^+$ is an SNA (see Figure~\ref{fig: upper bounding graphs}(a)). \begin{figure} \centering \subfloat[]{\includegraphics[width=2.6in, height=2in]{pinched}} \subfloat[]{\includegraphics[width=2.6in, height=2in]{non_pinched}} \caption{The upper bounding graphs of $f$ (a) and $f^\eps$ (b) with $\eps=0.05$. In both cases, $\omega$ is the golden mean and $\alpha=3$. The horizontal axis is $\T^1$, the vertical axis $[0,1]$.} \label{fig: upper bounding graphs} \end{figure} We note that $\varphi^+$ can be defined as the upper bounding graph of the global attractor $\mathcal{A}:=\bigcap_{n\in\N} f^n(\kreis\times[0,1])$, that is, \begin{equation}\label{e.upper_bounding_graph} \varphi^+(\theta) \ := \ \sup\{x\in[0,1]\mid (\theta,x)\in\mathcal{A}\} \ . \end{equation} The rigorous proof of these facts in \cite{keller:1996} is greatly simplified by the particular structure of pinched skew products. More natural systems, though, often occur as the time-one-maps of flows generated by scalar differential equations with quasiperiodic right-hand side. In particular, this means that such systems are invertible and the non-invertible pinched skew products have a certain toy-model character. Nowadays, however, established methods of multiscale analysis yield a wealth of results about the existence and structure of SNAs in broad classes of invertible systems as well \cite{young:1997,bjerkloev:2005,bjerkloev:2007,jaeger:2006,fuhrmann:2014}. In many cases, it turned out that this machinery makes it possible to transfer results and insights first obtained for pinched systems to a more general setting.
One example is the computation of the Hausdorff dimension of SNAs \cite{GroegerJaeger2013,FGJ2014DimensionsSNA}, another is a question about the structure of their topological closure ({\em filled-in property}) \cite{jaeger:2007,bjerkloev:2007,FGJ2014DimensionsSNA}, going back to Herman \cite{herman:1983}. In this sense, pinched systems have proven to be very adequate models for more general qpf systems.\medskip If (\ref{eq:2}) is slightly modified by adding a small positive constant $\eps>0$ to the multiplicative forcing term $\sin(\pi\theta)$, we obtain a new system \begin{equation}\label{eq:4} f^\eps(\theta,x) \ = \ (\theta+\omega,\tanh(\alpha x) \cdot(\sin(\pi\theta) + \eps)) \ . \end{equation} In this case, the Lyapunov exponent $\lambda(\varphi_0)=\log\alpha+\int_{\kreis}\log|\sin(\pi\theta)+\eps|\ d\theta$ still increases strictly with $\alpha$, and there exists a critical value $\alpha_c=e^{-\int_{\kreis}\log|\sin(\pi\theta)+\eps|\ d\theta}$ at which $\lambda(\varphi_0)=0$ (for $\eps=0$, this recovers $\alpha_c=2$). If $\alpha\leq \alpha_c$, the graph $\Phi_0$ is the global attractor as before, and if $\alpha>\alpha_c$ a second invariant graph $\varphi^+$ with negative Lyapunov exponent appears above $\Phi_0$. However, there is one important qualitative difference from the previous situation. Due to the invertibility of the system, it is easy to show that the graph $\Phi^+$ is a continuous curve (see Figure~\ref{fig: upper bounding graphs}(b)). The resulting dynamics are much simpler than in the case of an SNA. In particular, the system is conjugate to the direct product of the underlying irrational rotation with a Morse-Smale map $g$ on $[0,1]$, with unique repelling fixed point $0$ and unique attracting fixed point $x\in(0,1)$, and all points outside $\Phi_0$ are Lyapunov stable. In contrast to this, the system (\ref{eq:2}) with $\alpha>2$ has sensitive dependence on initial conditions \cite{glendinning/jaeger/keller:2006}, and thus has no Lyapunov stable points at all.
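Since $\lambda(\varphi_0)=\log\alpha+\int_{\kreis}\log|\sin(\pi\theta)+\eps|\,d\theta$ for the zero line of (\ref{eq:4}), solving $\lambda(\varphi_0)=0$ gives $\alpha_c=e^{-\int_{\kreis}\log|\sin(\pi\theta)+\eps|\,d\theta}$, and $\eps=0$ recovers the pinched threshold $\alpha_c=2$. A minimal numerical check (a sketch only; the midpoint rule is used merely to avoid the logarithmic singularity at $\theta=0$):

```python
import math

def alpha_crit(eps, n=100000):
    """Critical alpha at which the Lyapunov exponent of the zero line
    vanishes: alpha_c = exp(-int_0^1 log(sin(pi*theta) + eps) dtheta).
    Midpoint Riemann sum; midpoints avoid the singularity at theta = 0."""
    total = sum(math.log(math.sin(math.pi * (k + 0.5) / n) + eps)
                for k in range(n))
    return math.exp(-total / n)

# eps = 0 is the pinched case: int_0^1 log(sin(pi*theta)) dtheta = -log 2,
# hence alpha_c = 2, matching the threshold for the pinched system above
assert abs(alpha_crit(0.0) - 2.0) < 1e-2
# adding eps > 0 raises the integrand, so the critical value drops below 2
assert alpha_crit(0.05) < 2.0
```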
It is thus reasonable to expect that both cases can be distinguished by means of a suitable topological complexity invariant. However, in this case the rigorous analysis is more difficult than in the previous sections. The reason is that while the qualitative analysis of pinched skew products is comparatively easy due to their particular structure, a more detailed quantitative study is still rather involved on a technical level. In particular, it typically requires excluding an exceptional set of measure zero from the considerations, on which the dynamics are hard to control. Since topological complexity invariants in the zero entropy regime typically do not satisfy a variational principle (see Introduction), the lack of control even on a set of measure zero impedes their computation. For this reason, we do not attempt to determine the power entropy or modified power entropy of (\ref{eq:2}) in a rigorous way. However, based on the intuition gained from previous work on pinched systems in \cite{jaeger:2007,GroegerJaeger2013} and heuristic arguments, we expect that the power entropy of (\ref{eq:2}) with $\alpha>2$ equals 1, whereas the modified power entropy is zero. It is easy to show that the same values are attained by (\ref{eq:4}) with $\alpha>\alpha_c$, and hence neither quantity is suitable to distinguish between the two substantially different types of behaviour. As we have mentioned before, this was one of the original motivations for the introduction of amorphic complexity. In principle, though, the same restrictions as for the computation of (modified) power entropy hold for amorphic complexity, and the existence of the exceptional uncontrolled set does not allow a straightforward application of the concept. What we concentrate on here is to show that amorphic complexity distinguishes between SNAs and continuous attractors. This is the main result of this section.
Recall that $\omega\in\kreis$ is called {\em Diophantine} if there exist constants $c,d>0$ such that \begin{equation}\label{e.Diophantine} d(n\omega,0) \ \geq \ cn^{-d} \end{equation} for all $n\in\N$. \begin{theorem}\label{t.pinched_systems} Suppose $\omega$ is Diophantine and $\alpha$ in (\ref{eq:2}) is sufficiently large. Then there exists an invariant (under the rotation by angle $\omega$) set $\Omega\ssq\kreis$ of full Lebesgue measure such that \[ 0 \ < \ \uac\big(\left.f\right|_{\Omega\times[0,1]}\big)\ \leq \ \oac\big(\left.f\right|_{\Omega\times[0,1]}\big)\ < \ \infty \ . \] In contrast to this, we have $\textrm{ac}(f^\eps)=0$ if $f^\eps$ is given by (\ref{eq:4}) with $\eps>0$ and any $\alpha\geq 0$. \end{theorem} Note that $\textrm{ac}(f^\eps)=0$ follows immediately from the conjugacy between $f^\eps$ and the product of the underlying rotation with the Morse-Smale map $g$ from above, using Corollary~\ref{fsc_top_inv}. \begin{remark}\label{r.ac_measure} The approach of restricting to a subset of full measure taken in the above statement can be formalized in a more systematic way. Although we do not pursue this issue much further here, we believe that this may make the concept of amorphic complexity applicable to an even broader range of systems. Let $(X,d)$ be a metric space, let $f:X\to X$ be a map and let $E\ssq X$. For the definition of $\Sep_{E}(f,\delta,\nu)$ see \eqref{e.subset_separation_numbers}. Further, we say $A\ssq E$ is \emph{$(f,\delta,\nu)$-spanning in $E$} if for each $x \in E$ there is $y\in A$ such that $\varlimsup_{n\to \infty}\frac{1}{n}S_n(f,\delta,x,y)<\nu$. Let $\Span_{E}(f,\delta,\nu)$ be the smallest cardinality of any $(f,\delta,\nu)$-spanning set in $E$. For the definition of $\uac_{E}(f)$ and $\oac_{E}(f)$ see \eqref{eq: defn h am for subsets} and note that similarly as before we can use $\Span_{E}(f,\delta,\nu)$ instead of $\Sep_{E}(f,\delta,\nu)$ there (see Section~\ref{SpanningSets}).
Now, suppose we are given a Borel probability measure $\mu$ on $X$. Then we define \begin{align*} \uac_{\mu}(f)&\ := \ \inf\left\{\uac_{E}(f) \mid E\ssq X \textrm{ and }\mu(E)=1\right\} \ ,\\ \oac_\mu(f)&\ := \ \inf\left\{\oac_E(f) \mid E\ssq X \textrm{ and }\mu(E)=1\right\} \ . \end{align*} With these notions, what we actually show is that under the assumptions of Theorem~\ref{t.pinched_systems} we have \begin{align*} 0<\uac_{\mu}(f) \leq \oac_{\mu}(f) < \infty, \end{align*} where $\mu$ can be either the Lebesgue measure on $\kreis\times[0,1]$, or the measure $\mu_{\phi^+}$, which is the Lebesgue measure on $\kreis$ lifted to the graph $\Phi^+$, i.e.\ $\mu_{\phi^+}(A):=\Leb(\pi_{\theta}(A\cap\Phi^+))$ where $A\subseteq\kreis\times[0,1]$ is Borel measurable and $\pi_\theta$ is the projection onto the first coordinate. Note that these statements are slightly stronger than the ones given in Theorem~\ref{t.pinched_systems}. \end{remark} We first consider the lower bound. To this end, we will focus on the SNA $\Phi^+$ and show that the restriction of $f$ to this set already has positive lower amorphic complexity. \begin{proposition}\label{prop: ac > 0} Suppose $\omega$ is Diophantine and $\alpha$ in (\ref{eq:2}) is sufficiently large. Then there is a positive lower bound for $\uac_{\Phi^+\cap(\Omega\times[0,1])}(f)$, uniform over all $\Omega\ssq\kreis$ with $\Leb(\Omega)=1$. \end{proposition} For the proof, we need a number of preliminary statements taken from previous studies of pinched skew products in \cite{jaeger:2007,GroegerJaeger2013}. First, \cite[Lemma 4.2]{GroegerJaeger2013} states that if $\alpha$ in (\ref{eq:2}) is sufficiently large, then there exist constants $\gamma,L_0,\beta,a,b>0$ and $m\in\N$ such that the following conditions are satisfied.
\begin{eqnarray}\label{e.m} m & \geq & 22(1+1/\gamma) \\ a & \geq & (m+1)^d\label{e.a}\\ b & \leq & c\label{e.b1}\\ b & < & d(n\omega,0) \ \textrm{for all } n\in\{1\ld m-1\}\label{e.b2}\\ \abs{f_\theta(x)-f_\theta(y)} & \leq & \alpha^{-\gamma}\abs{x-y} \ \textrm{for all } \theta\in\kreis,\ x,y\in[L_0,1] \label{e.contraction}\\ f_\theta(x) & \geq & \min\left\{L_0,ax\right\}\cdot \label{e.reference_system} \min\left\{1,2d(\theta,0)/b\right\} \ \textrm{for all } (\theta,x)\in\kreis\times[0,1] \end{eqnarray} It is worth mentioning that we can choose $a$ proportional to $\alpha$. Moreover, we note that \begin{eqnarray}\label{e.maximal_expansion} \abs{f_\theta(x)-f_\theta(y)} & \leq & \alpha\abs{x-y} \ \textrm{for all } \theta\in\kreis,\ x,y\in [0,1]\ ,\\ \abs{f_\theta(x)-f_{\theta'}(x)} & \leq & \pi d(\theta,\theta') \ \textrm{for all } \theta,\theta'\in\kreis,\ x\in[0,1] \ . \end{eqnarray} Given any $n\in\N$, let $r_n := \frac{b}{2}a^{-\frac{n-1}{m}}$ and $\tau_n:=n\omega$. We will need the following elementary estimate. \begin{lemma}[\cite{jaeger:2007,GroegerJaeger2013}] \label{lem: consequence of diophantine w} Let $n\in \N$ and suppose $d(\tau_n,0)\leq \ell b\cdot a^{-i}$ for some $i>0$ and $\ell>0$. Then $n\geq\frac{a^{i/d}}{\ell^{1/d}}$. \end{lemma} \begin{proof} (\ref{e.Diophantine}) implies $c\cdot n^{-d}\leq \ell b\cdot a^{-i}$, and using (\ref{e.b1}) we get $n^{-d}\leq\ell a^{-i}$. \end{proof} In order to analyse the dynamics of $f$ on $\Phi^+$, it turns out to be crucial that $\varphi^+$ is approximated by the so-called \emph{iterated boundary lines} $\nfolge{\phi_n}$ of (\ref{eq:2}). These are given by \begin{align*} \varphi_n: \kreis \to [0,1];\quad\theta \mapsto f_{\theta-n\omega}^n(1) \ , \end{align*} with $f^n_\theta(x):=\pi_x\circ f^n(\theta,x)=f_{\theta+(n-1)\omega} \circ\ldots\circ f_\theta(x)$ where $\pi_x$ is the projection onto the second coordinate. 
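Although not needed for the proofs, the iterated boundary lines are easy to explore numerically; the following sketch (the golden-mean $\omega$ and $\alpha=3$ are purely illustrative choices) evaluates $\phi_n(\theta)=f^n_{\theta-n\omega}(1)$ for the family (\ref{eq:2}) and checks that the sequence decreases pointwise.

```python
import math

OMEGA = (math.sqrt(5) - 1) / 2  # golden mean, an illustrative irrational
ALPHA = 3.0                     # above the critical value alpha_c = 2

def fibre(theta, x):
    """Fibre map of the pinched family: f_theta(x) = tanh(alpha*x)*sin(pi*theta)."""
    return math.tanh(ALPHA * x) * math.sin(math.pi * theta)

def phi_n(theta, n):
    """n-th iterated boundary line: phi_n(theta) = f^n_{theta - n*omega}(1)."""
    t, x = (theta - n * OMEGA) % 1.0, 1.0
    for _ in range(n):
        x = fibre(t, x)
        t = (t + OMEGA) % 1.0
    return x

# since every fibre map is increasing and f_theta(1) <= 1, the sequence
# (phi_n) decreases pointwise towards the upper bounding graph phi^+
values = [phi_n(0.3, n) for n in range(1, 9)]
assert all(v >= w for v, w in zip(values, values[1:]))
```

In the same spirit one can observe that $\phi_n$ vanishes at $\tau_1=\omega$ for every $n\geq 1$, since the orbit passes through the pinching fibre $\theta=0$.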
Note that by the monotonicity of the maps $f_\theta$, the sequence $(\phi_n)_{n\in \N}$ is decreasing. Further, as a consequence of the definition of $\phi^+$ in (\ref{e.upper_bounding_graph}), it can be shown easily that $\phi_n \to \phi^+$ pointwise as $n\to \infty$ \cite{GroegerJaeger2013}. The following proposition tells us to which degree the $n$-th iterated boundary line approximates the graph $\phi^+$. \begin{proposition}[\cite{jaeger:2007},\cite{GroegerJaeger2013}] \label{properties_approx_graphs} Given $q\in\N$, the following holds. \begin{enumerate} \item[(i)] $|\varphi_{n}(\theta)- \varphi_{n}(\theta')|\leq\pi\alpha^n d(\theta,\theta')$ for all $n\in\N$ and $\theta,\theta'\in\kreis$. \item[(ii)] There exists $\lambda>0$ such that if $n\geq mq+1$ and $\theta\notin\bigcup_{j=q}^n B_{r_{j}}(\tau_j)$, then $|\varphi_{n}(\theta)-\varphi_{n-1}(\theta)|\leq \alpha^{-\lambda(n-1)}$. \end{enumerate} \end{proposition} \begin{figure} \centering \subfloat[]{\includegraphics[width=1.8in]{1}} \subfloat[]{\includegraphics[width=1.8in]{2}} \subfloat[]{\includegraphics[width=1.8in]{3}}\\ \subfloat[]{\includegraphics[width=1.8in]{4}} \subfloat[]{\includegraphics[width=1.8in]{5}} \subfloat[]{\includegraphics[width=1.8in]{6}} \caption{The iterated boundary lines $\phi_n$ for $n=1,\ldots,6$.} \label{fig: iterated boundary lines} \end{figure} Figure~\ref{fig: iterated boundary lines} shows the development of the iterated upper boundary lines for $n=1\ld 6$. As can be seen, $\phi_n$ has exactly $n$ zeros (at $\tau_1,\ldots,\tau_n$). In order to describe the qualitative behaviour, we refer to $\left.\vphantom{T}\psi\right|_{B_{r_{j}}(\tau_j)}$ as the \emph{$j$-th peak} of $\psi$ or the \emph{peak of $\psi$ around $\tau_j$}, where $\psi\in\left\{\phi^+,\phi_j,\phi_{j+1},\ldots\right\}$. 
We say the $j$-th peak is a \emph{fresh peak} if the $2r_j$-neighbourhood of $\tau_j$ does not intersect any previous peak, that is, $B_{2r_{j}}(\tau_j)\cap B_{r_{l}}(\tau_l)=\emptyset$ for each $1\leq l<j$. In the following, we label the fresh peaks by $n_1<n_2<\ldots$, that is, there is $j\in \N$ with $n_j=l$ if and only if the $l$-th peak is fresh. \begin{lemma}\label{lem: fresh peaks density} If $\alpha$ is large enough, there are infinitely many fresh peaks and they appear with positive density, that is, \begin{align*} \varliminf_{j\to \infty} j/n_j\ >\ 0\ . \end{align*} \end{lemma} \begin{proof} Let \begin{align*} N(k):= \left\{j \in \{2,\ldots,k\}\mid \textrm{there is } 1\leq l < j \text{ with } B_{2r_j}(\tau_j)\ssq B_{2 r_l}(\tau_l) \right\}. \end{align*} Thus, $N(n_j)$ contains the complement of $\left\{n_l\mid 1\leq l\leq j\right\}$ in $\{2,\ldots,n_j\}$. Further, \begin{align*} \#N(k)&\ \leq\ \sum_{l=1}^{k-1} \#\left\{j \in \{l+1,\ldots,k\} \mid B_{2 r_j}(\tau_j) \ssq B_{2 r_l}(\tau_l)\right\}\\ &\stackrel{\mathclap{\textrm{Lemma~\ref{lem: consequence of diophantine w}}}} {\ \leq\ }\#\left\{j \in \{2,\ldots,k\}\mid B_{2 r_j}(\tau_j) \ssq B_{2r_1} (\tau_1)\right\}+\sum_{l=2}^{k-1} \frac{(k-l)2^{1/d}}{a^{(l-1)/md}}\\ &\ \leq\ \#\left\{j \in \{2,\ldots,k\}\mid B_{2 r_j}(\tau_j) \ssq B_{2 r_1} (\tau_1)\right\}+ 2^{1/d}k \sum_{l=2}^{k-1} a^{-(l-1)/md}\\ &\ <\ \#\left\{j \in \{2,\ldots,k\}\mid B_{2 r_j}(\tau_j) \ssq B_{2 r_1} (\tau_1)\right\}+ \frac{2^{1/d}ka^{-1/md}}{1-a^{-1/md}}\ .
\end{align*} Thus, for big enough $\alpha$ (and hence big enough $a$) and due to (\ref{e.b2}), there are infinitely many fresh peaks and \begin{align*} &\varliminf_{j\to \infty} j/n_j\ \geq\ \varliminf_{j\to \infty} \frac{n_j-\#N(n_j)}{n_j}\ \geq\ 1-\Leb(B_{2r_1}(\tau_1)) -\frac{2^{1/d}a^{-1/md}}{1-a^{-1/md}}\ >\ 0\ .\qedhere \end{align*} \end{proof} Since $\left(\phi_n\right)_{n\in\N}$ is monotonically decreasing and each iterated boundary line is continuous, we know that $\phi^+$ is close to zero in a neighbourhood of each $\tau_n$. However, the next statement tells us that for most $\theta$ in a neighbourhood of a fresh peak, $\phi^+$ is larger than some threshold $\delta_0>0$. This dichotomy is the basis for the mechanism by which we prove Proposition~\ref{prop: ac > 0}. \begin{lemma}\label{lem: fresh peaks height} Suppose $\alpha$ is large enough. There exist $\delta_0>0$ and a super-exponentially fast decaying sequence $\left(\eps_n\right)_{n\in\N}$ such that \begin{align*} \Leb\left(\left\{\theta \in B_{2r_{n_j}}(\tau_{n_j})\setminus B_{r_{n_j}}(\tau_{n_j})\mid \phi^+(\theta)< \delta_0\right\}\right) \ <\ \eps_{n_j}\ . \end{align*} \end{lemma} \begin{proof} Let $\ell:=m+1$. Since $\phi_{\ell}$ is continuous and $\phi_{\ell}(\theta)\neq 0$ for $\theta\notin\{\tau_1,\ldots,\tau_{\ell}\}$, there exists $\delta_0>0$ such that \begin{align}\label{eq: phi1 > L0} \phi_{\ell}(\theta)\ \geq\ 2\delta_0 \end{align} for $\theta\notin \bigcup_{j=1}^{\ell} B_{r_j}(\tau_j)$. Due to (\ref{e.b2}), we have that if $\alpha$ (and hence $a$) is large enough, then $\kreis\setminus\bigcup_{j=1}^{\infty}B_{r_j}(\tau_j)$ is non-empty. Let $\theta\notin\bigcup_{j=1}^{\infty} B_{r_j}(\tau_j)$.
For $k \in \N$ with $k>\ell$, Proposition~\ref{properties_approx_graphs} (ii) yields \begin{align*} \left|\phi_k(\theta)-\phi_{\ell}(\theta)\right|\ \leq\ \sum_{j=\ell+1}^k\left|\phi_{j}(\theta)-\phi_{j-1}(\theta)\right|\ \leq\ \sum_{j=\ell}^{k-1} \alpha^{-\lam j}\ \leq\ \frac{\alpha^{-\lam \ell}}{1-\alpha^{-\lam}}\ . \end{align*} Together with equation (\ref{eq: phi1 > L0}), this gives \begin{align}\label{eq: phil > L0/2} \phi_k(\theta)\ \geq\ 2\delta_0-\frac{\alpha^{-\lam \ell}}{1-\alpha^{-\lam}}\ >\ \delta_0 \end{align} for sufficiently large $\alpha$. Let $j\geq2$. Since the $n_j$-th peak is fresh, we have $B_{2r_{n_j}}(\tau_{n_j})\cap \bigcup_{l=1}^{n_j-1} B_{r_l}(\tau_l)=\emptyset$. Further, Lemma~\ref{lem: consequence of diophantine w} yields that the first index $l>n_j$ at which a peak intersects $B_{2r_{n_j}}(\tau_{n_j})$ is bounded from below by $n_j+a^{(n_j-1)/md}$. Hence, \begin{align*} B_{2r_{n_j}}(\tau_{n_j})\setminus \bigcup_{l\geq 1} B_{r_l}(\tau_l)&\ =\ B_{2r_{n_j}}(\tau_{n_j})\setminus\bigcup_{l\geq {n_j}} B_{r_l}(\tau_l)\\ &\ =\ \left (B_{2r_{n_j}}(\tau_{n_j})\setminus B_{r_{n_j}}(\tau_{n_j})\right)\setminus \bigcup_{l\geq a^{({n_j}-1)/md}+{n_j}} B_{r_l}(\tau_l)\ . \end{align*} Note that for $n\in\N$ \begin{align*} \Leb\left(\bigcup_{l\geq a^{(n-1)/md}+n} B_{r_l}(\tau_l)\right) &\ \leq\ b \sum_{l\geq a^{(n-1)/md}+n}a^{-(l-1)/m}\\ &\ \leq\ \frac b{1-a^{-1/m}}a^{-\left(a^{(n-1)/md}+n-1\right)/m}\ =:\ \eps_n\ . \end{align*} By means of equation \eqref{eq: phil > L0/2}, this proves the statement since $\phi_k\to \phi^+$ as $k\to\infty$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop: ac > 0}] Suppose $\delta<\frac{\delta_0}2$, where $\delta_0$ is chosen as in Lemma \ref{lem: fresh peaks height} and set $\eta_{j}:=\delta/(2\pi\alpha^{n_j})$. Further, let $\Omega$ be a given set of full measure.
As $\left(\phi_n\right)_{n\in\N}$ is monotonically decreasing, Proposition \ref{properties_approx_graphs} (i) shows that \begin{align}\label{eq: phi small close to peaks} \phi^+(\theta)<\delta\quad\textrm{ for all }\quad \theta\in B_{2\eta_j}(\tau_{n_j})\ . \end{align} By passing to a subsequence if necessary (such that $n_1$ is big enough), Lemma~\ref{lem: fresh peaks height} yields that for each $j\in \N$ there is $\theta^{n_j}\in B_{2r_{n_j}}(\tau_{n_j})$ with \begin{align}\label{eq: phi big outside of fresh peaks} \Leb\left(\left\{\theta\in B_{\eta_j} \left(\theta^{n_j}\right)\mid\phi^+(\theta)>2\delta\right\}\right) \ >\ \eta_j\ . \end{align} Set $\Delta_j:=\tau_{n_j}-\theta^{n_j}$. By \eqref{eq: phi small close to peaks} and \eqref{eq: phi big outside of fresh peaks}, we have that \begin{align}\label{eq: measure of seperating region} \Leb\left(\left\{\theta\in \kreis: \abs{\phi^+(\theta)-\phi^+(\theta+\Delta)}>\delta \text{ for all } \Delta \in B_{\eta_j}(\Delta_j)\right\}\right)\ \geq\ \eta_j\ . \end{align} We denote by $\Omega_j$ the set of those $\theta$ which visit the set $\{\theta\in \kreis : \abs{\phi^+(\theta)-\phi^+(\theta+\Delta)}>\delta \text{ for all }\Delta \in B_{\eta_j}(\Delta_j)\}$ with asymptotic frequency at least $\eta_j$. Note that by \eqref{eq: measure of seperating region} and Birkhoff's Ergodic Theorem, $\Leb(\Omega_j)=1$, so that $\tilde\Omega:=\Omega\cap\bigcap_{j\in\N}\Omega_j$ has full measure. Next, we choose $2^j$ points in $\Phi^+\cap (\tilde \Omega\times[0,1])$ which are mutually $(f,\delta,\eta_j)$-separated. Let $\theta\in\bigcap_{x\in\{0,1\}^j}\big(\tilde\Omega-\sum_{k=1}^jx_k\Delta_k\big)$ and define $\theta_x=\theta+\sum_{k=1}^jx_k\Delta_k$ where $x=(x_1,\ldots,x_j)\in\{0,1\}^j$.
By passing to a further subsequence of $\left(n_j\right)_{j\in\N}$ (still of positive density) if necessary, we may assume without loss of generality that $\sum_{k=i+1}^\infty (\eta_k+|\Delta_k|)<\eta_i$ for all $i\in\N$, which ensures that (after exchanging $x$ and $y$ if necessary) $\theta_x-\theta_y\in B_{\eta_\ell}(\Delta_\ell)$ for distinct $x,y\in \{0,1\}^j$ and $\ell:=\min\{k\mid x_k\neq y_k\}\leq j$. By definition, we have for all $x\in\{0,1\}^j$ that $\theta_x\in\tilde\Omega$ and hence, the set $\{(\theta_x,\phi^+(\theta_x))\mid x\in \{0,1\}^j\}$ is $(f,\delta,\eta_j)$-separated. We have thus shown \begin{align*} \varliminf_{\nu \to 0} \frac{\log\Sep_{\Phi^+\cap(\Omega\times[0,1])} (f,\delta,\nu)}{\log \nu^{-1}} &\ \geq\ \varliminf_{j\to\infty} \frac{\log\Sep_{\Phi^+\cap(\tilde\Omega\times[0,1])} (f,\delta,\eta_{j})}{\log \eta_{j+1}^{-1}}\\ &\ \geq\ \varliminf_{j\to\infty} \frac{\log 2^j}{n_{j+1}\log\alpha-\log(\delta/2\pi)} \ =\ \frac{\log 2}{\log \alpha }\varliminf_{j\to\infty} j/n_{j+1} \ >\ 0 \end{align*} by Lemma~\ref{lem: fresh peaks density}. As $\varliminf_{j\to\infty} j/n_{j+1}$ is independent of the set $\Omega$, this proves the statement. \end{proof} We now turn to the upper bound of the amorphic complexity. Let $\Omega\ssq\kreis$ be the set of all $\theta\in\kreis$ such that for all $q\in \N$ \begin{align*} \lim\limits_{n\to\infty}\frac{\#\left\{0\leq i\leq n-1\mid\theta+i\omega \in\bigcup_{j=q}^\infty B_{r_{j}}(\tau_j)\right\}}{n}\ =\ \Leb\left(\bigcup_{j=q}^\infty B_{r_{j}}(\tau_j)\right). \end{align*} Note that Birkhoff's Ergodic Theorem yields that $\mu_{\phi^+}(\Phi^+\cap(\Omega\times[0,1]))=\Leb(\Omega)=1$ (see Remark \ref{r.ac_measure}). The upper bound on $\oac_{\Phi^+\cap(\Omega\times[0,1])}(f)$ will follow easily from the following assertion.
\begin{lemma}\label{lem: generic points close enough->no separation} There exist $\kappa,c_0>0$ such that for all positive $\delta$ and small enough $\nu$, we have that for each $\theta,\theta'\in\Omega$ with $d(\theta,\theta')<\eps=\eps(\delta,\nu)=c_0\delta\nu^{\kappa m}$ the points $(\theta,\phi^+(\theta))$ and $(\theta',\phi^+(\theta'))$ are not $(f,\delta,\nu)$-separated. \end{lemma} \begin{proof} Observe that there is a constant $C>0$, independent of both $\delta$ and $\nu$, such that for $q(\nu):=\left\lceil-C\log\nu\right\rceil$ we have \begin{align}\label{eq: defn of q(nu)} \Leb\left(\bigcup_{j=q(\nu)}^\infty B_{r_j}(\tau_j)\right) \ \leq\ \sum_{j=q(\nu)}^\infty a^{-(j-1)/m}\ <\ \nu/2\ . \end{align} Set $n(\nu):=mq(\nu)+1$ and assume that $n(\nu)$ is large enough (i.e.\ $\nu$ is small) to guarantee \begin{align}\label{eq: quality of approximation after n(nu) iterations} \abs{\varphi^+(\theta)-\varphi_{n(\nu)}(\theta)}\ <\ \delta/4 \end{align} for all $\theta\notin\bigcup_{j=q(\nu)}^\infty B_{r_{j}}(\tau_j)$ (cf. Proposition~\ref{properties_approx_graphs}(ii)). Note that if $d(\theta,\theta')<\eps(\delta,\nu):= \delta/(4\pi\alpha^{m(-C\log \nu+1)+1})$, which is of the form $c_0\delta\nu^{\kappa m}$ with $c_0=1/(4\pi\alpha^{m+1})$ and $\kappa=C\log\alpha$, then \begin{align}\label{eq: lipschitz condition phi n(nu)} \abs{\varphi_{n(\nu)}(\theta)- \varphi_{n(\nu)}(\theta')}\ \leq\ \pi\alpha^{n(\nu)}d(\theta,\theta')\ <\ \delta/4 \end{align} for all $\theta,\theta'$ (cf. Proposition~\ref{properties_approx_graphs}(i)). Now, assume $\theta,\theta'\in\Omega$ satisfy $d(\theta,\theta')<\eps(\delta,\nu)<\delta/4$. By \eqref{eq: defn of q(nu)} and definition of $\Omega$, we know that \begin{align*} \lim\limits_{n\to\infty}\frac{\#\left\{0\leq i\leq n-1\mid\theta+i\omega, \theta'+i\omega\notin\bigcup_{j=q(\nu)}^\infty B_{r_{j}}(\tau_j)\right\}}{n} \ >\ 1-\nu.
\end{align*} Further, \eqref{eq: quality of approximation after n(nu) iterations} and \eqref{eq: lipschitz condition phi n(nu)} yield that $\abs{\phi^+(\theta+i\omega)-\phi^+(\theta'+i\omega)}<\frac34\delta$ whenever both $\theta+i\omega$ and $\theta'+i\omega$ are not in $\bigcup_{j=q(\nu)}^\infty B_{r_{j}}(\tau_j)$. Hence, \begin{align*} \varlimsup_{n\to \infty}\frac{S_n(f,\delta, (\theta,\phi^+(\theta)), (\theta',\phi^+(\theta')))}{n}\ <\ \nu, \end{align*} so that $(\theta,\phi^+(\theta))$ and $(\theta',\phi^+(\theta'))$ are not $(f,\delta,\nu)$-separated. \end{proof} We thus have \begin{proposition}\label{prop: upper ac on the graph} Suppose $\alpha$ in (\ref{eq:2}) is sufficiently large and $\Omega$ is as in the previous lemma. Then \begin{align*} \oac_{\Phi^+\cap(\Omega\times[0,1])}(f)\ \leq\ \kappa m, \end{align*} with $\kappa$ as in Lemma~\ref{lem: generic points close enough->no separation}. \end{proposition} \begin{proof} By the previous lemma, we know that for small enough $\nu$ \begin{align*} &\Span_{\Phi^+\cap(\Omega\times[0,1])}(f,\delta,\nu)\ \leq\ \left\lceil\frac1{\eps(\delta,\nu)}\right\rceil+1\ =\ \left\lceil\frac{\nu^{-\kappa m}}{c_0\delta}\right\rceil+1\ <\ 2\frac{\nu^{-\kappa m}}{c_0\delta}\ .\qedhere \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{t.pinched_systems}] It remains to show the upper bound. To that end, we show that there is an invariant set of full measure $\tilde\Omega\ssq\Omega$ ($\Omega$ as above) such that \begin{align}\label{eq: fibrewise attraction} \lim_{n\to \infty} \big(\phi^+(\theta+n\omega)-f_\theta^n(x)\big)\ = \ 0 \end{align} for all $\theta\in\tilde \Omega$ and $x\in (0,1]$. In other words, we show that $\left.f\right|_{\tilde\Omega\times(0,1]}$ has the unique target property with respect to $\Phi^+\cap(\tilde\Omega\times(0,1])$.
By means of Lemma~\ref{lem: unique target general} and Proposition~\ref{prop: upper ac on the graph} this yields that $\oac(\left.f\right|_{\tilde\Omega\times(0,1]})\leq\kappa m$, and it is easy to see that $\kappa m$ is in fact an upper bound for $\oac(\left.f\right|_{\tilde\Omega\times[0,1]})$. Note that since $\phi^+$ is the upper bounding graph of the global attractor, we have \eqref{eq: fibrewise attraction} for all $\theta$ and $x\in [\phi^+(\theta),1]$. Now, define $\psi(\theta):=\sup\{x\in [0,\phi^+(\theta)]\mid \eqref{eq: fibrewise attraction}\text{ does not hold}\}$. Due to monotonicity, $\psi$ is an invariant graph. By \cite[Proposition~3.3]{jaeger:2003}, we have that for $\Leb$-a.e.\ $\theta$ there is $\delta(\theta)>0$ such that \eqref{eq: fibrewise attraction} holds for $x\in (\phi^+(\theta)-\delta(\theta),\phi^+(\theta)]$. Hence, $\psi$ is distinct from $\phi^+$. Since concavity of the fibre maps $f_\theta$ only allows for two invariant graphs (up to equality on a set of full measure, cf.\ \cite[Theorem~2.1]{AnagnostopoulouJaeger2012}), this shows that $\psi=0$ Lebesgue almost surely. Set $\tilde \Omega := \bigcap_{n\in\Z} \big(n \omega + \{\theta \in \Omega \mid \psi(\theta)=0\}\big)$. This completes the proof. \end{proof}
\section{Introduction} \label{s:intro} Recommender systems suffer from the well-known cold-start problem~\cite{schein2002methods} that arises when users have rated no or only a few items. The cold-start problem is particularly problematic in neighborhood-based recommendation approaches such as collaborative filtering (CF)~\cite{schafer2007collaborative} since the ratings of these users cannot be exploited to find similar users. Trust-based recommender systems (e.g., \cite{fazeli2014,lathia2008trust,massa2004trust,donovan2005trustrecsys}) have been proposed as a potential remedy for the cold-start problem. They alleviate this problem by generating a trust network, i.e., a type of social network in which nodes usually represent users and edges represent trust connections between users based on their explicitly expressed or implicitly derived trust relationships. Although trust is a complex and ambiguous concept from social sciences, in the context of recommender systems, we use a simple interpretation in which users trust other users in the system if they trust their opinions and ratings on different items~\cite{massa2004trust}. The resulting trust network can be used to find the $k$ most similar users, whose items are recommended to a target user. Trust networks are, however, typically sparse~\cite{kim2015sparse} since only a fraction of users have trust connections, which makes finding the $k$ most similar users challenging. In the present work, we explore the utility of \emph{graph embeddings} to extract the $k$ most similar users from trust networks. To that end, we conduct experiments on three publicly available benchmark datasets often used in studies on trust-based recommender systems: Epinions~\cite{massa2004trust}, Ciao~\cite{tang-etal12a}, and Filmtrust~\cite{guo2013novel}.
We empirically evaluate a range of state-of-the-art graph embedding approaches~\cite{GOYAL2018} from four distinct method families, i.e., \begin{inparaenum}[(i)] \item factorization-based methods, \item random-walk-based approaches, \item methods based on deep learning, and \item the LINE approach~\cite{tang2015line} which falls into none of these families, \end{inparaenum} with respect to their ability to deliver accurate, novel, and diverse recommendations~\cite{Zhang2012aurealist} for cold-start users. In our experimental setup, we split each dataset into a validation set (warm-start users) and a test set (cold-start users). For each graph embedding approach, we perform a hyperparameter optimization on each validation set. We then select the hyperparameters which result in the highest recommendation accuracy. We generate recommendations for each target user in a CF manner by finding the $k$ most similar neighbors using the learned embeddings and ranking their items by similarity scores. Finally, we evaluate the resulting graph embeddings against a corresponding test set with respect to accuracy and beyond-accuracy metrics. We compare the graph embedding approaches against five baselines from trust-based recommender systems, commonly used in cold-start settings: (i) Most Popular (MP) recommends the most frequently rated items, (ii) $Trust_{dir}$ extracts trusted users directly from a trust network, (iii) $Trust_{undir}$ ignores edge directions and extracts neighbors from the resulting undirected network, (iv) $Trust_{jac}$ applies the Jaccard coefficient on the explicit trust network, and (v) $Trust_{Katz}$~\cite{duricic2018regular} computes the Katz similarity to infer transitive trust relationships between users. To quantify the algorithmic performance, we evaluate recommendation quality in terms of nDCG, novelty, diversity and user coverage.
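The neighborhood-based recommendation step described above can be sketched as follows (a minimal illustration with toy embeddings and ratings; all names are hypothetical, and no particular embedding method is assumed):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(target, embeddings, ratings, k=2, top_n=3):
    """k-nearest-neighbour CF on node embeddings: score each unseen item
    by the summed similarity of the neighbours who rated it."""
    sims = sorted(
        ((cosine(embeddings[target], emb), user)
         for user, emb in embeddings.items() if user != target),
        reverse=True)[:k]
    scores = {}
    for sim, user in sims:
        for item in ratings.get(user, ()):
            if item not in ratings.get(target, ()):
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# toy data: users a and b have nearby embeddings, c points elsewhere
embeddings = {'a': [1.0, 0.1], 'b': [0.9, 0.2], 'c': [-1.0, 0.9]}
ratings = {'a': {'i1'}, 'b': {'i1', 'i2'}, 'c': {'i3'}}
print(recommend('a', embeddings, ratings, k=1))  # → ['i2']
```

In the actual evaluation, the embeddings would be those learned by the respective graph embedding method on the trust network, and the recommendation lists would be scored with nDCG and the beyond-accuracy metrics.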
We find that, owing to their ability to create a representation of every user in a network, graph embeddings are able to improve user coverage when compared to the baseline approaches. Our experiments also show that random-walk-based approaches, i.e., Node2vec and DeepWalk, consistently outperform other graph embedding methods on all three datasets in terms of recommendation accuracy. Finally, we find a positive correlation between novelty and accuracy in all three datasets, suggesting that users in the respective platforms tend to prefer novel content. Summing up, our contributions are three-fold. Firstly, we provide a large-scale empirical study on the efficacy of graph embedding approaches in trust-based recommender systems. Secondly, unlike many previous studies, which evaluated only recommendation accuracy, we compare different approaches with respect to beyond-accuracy metrics such as novelty, diversity, and user coverage. Lastly, our results provide new insights into user preferences based on correlations between different recommendation quality metrics. \vspace{-2mm} \section{Graph Embeddings} \label{s:approaches} In this study, we compare the recommendation performance of graph embedding approaches from four distinct method families~\cite{GOYAL2018}, i.e., factorization-based methods, random-walk-based approaches, deep-learning-based approaches, and the LINE approach~\cite{tang2015line}, which falls in neither of the first three families. \para{Factorization-based Approaches.} Factorization-based approaches produce node embeddings using matrix factorization. The inner product between the resulting node embedding vectors approximates a deterministic graph proximity measure~\cite{hamilton2017graphreplearning}.
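To make this intuition concrete, the following minimal sketch (our own illustration, not one of the specific methods evaluated below) factorizes a toy adjacency matrix with a truncated SVD, so that inner products of the resulting low-dimensional vectors approximate the original proximities:

```python
import numpy as np

# Toy symmetric adjacency matrix of a 4-node trust network (hypothetical data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Rank-d factorization: keep only the d leading singular triplets.
d = 2
U, s, Vt = np.linalg.svd(A)
emb = U[:, :d] * np.sqrt(s[:d])        # node embeddings (n x d)
ctx = Vt[:d, :].T * np.sqrt(s[:d])     # context embeddings (n x d)

# Inner products of the embeddings approximate the proximity matrix A.
approx = emb @ ctx.T
print(np.round(approx, 2))
```

The rank-$d$ truncation is what forces similar nodes (here, nodes with similar neighborhoods) onto nearby embedding vectors.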
In total, we investigate five different factorization approaches: \noindent - \emph{Graph Factorization (GF)}~\cite{ahmed2013distributed} factorizes the adjacency matrix and determines proximity between nodes directly on the adjacency matrix.\addtocounter{footnote}{-2}\footnote{\label{gem} Implementation used: \url{https://github.com/palash1992/GEM-Benchmark}} \noindent - \emph{Laplacian Eigenmaps (LE)}~\cite{belkin2002laplacian} factorizes the normalized Laplacian matrix and preserves the $1^{st}$-order proximity.\footnoteref{gem} \noindent - \emph{Locally Linear Embedding (LLE)}~\cite{roweis2000nonlinear} minimizes the squared difference between the embedding of a node and a linear combination of its neighbors' embeddings, weighted by the edges connecting to them. The solution of this minimization problem reduces to a factorization problem.\footnoteref{gem} \noindent - \emph{High-Order Proximity preserved Embedding (HOPE)}~\cite{Ou2016} is able to preserve higher-order proximities and to capture asymmetric transitivity.\footnoteref{gem} \noindent - \emph{Graph Representations with Global Structural Information (GraRep)}~\cite{cao2015grarep} can handle higher-order similarity as it considers powers of the adjacency matrix.\footnote{Implementation used: \url{https://github.com/benedekrozemberczki/role2vec}} \para{Random Walk-based Approaches.} RW-based approaches first identify the context of a node with a random walk and then typically learn the embeddings using a skip-gram model~\cite{GOYAL2018}. In total, we evaluate three different approaches: \noindent - \emph{DeepWalk}~\cite{Perozzi2014} extracts node sequences with truncated random walks and applies a skip-gram model~\cite{Mikolov2013} with hierarchical softmax on the node pairs.\footnote{Implementation used: \url{https://github.com/phanein/deepwalk}} \noindent - \emph{Node2vec}~\cite{Grover2016} extends DeepWalk with hyperparameters to configure the depth and breadth of the random walks.
In contrast to DeepWalk, which only allows unbiased random walks over the graph, Node2vec enables the definition of flexible, biased random walks~\cite{hamilton2017graphreplearning}.\footnote{Implementation used: \url{https://github.com/aditya-grover/node2vec}} \noindent - \emph{Role2vec}~\cite{Ahmed2018} uses attributed random walks to learn embeddings. As Role2vec allows the definition of functions that map feature vectors to types, it can learn embeddings of \emph{types of nodes}.\footnote{Implementation used: \url{https://github.com/benedekrozemberczki/role2vec}} \para{Deep Learning-based Approaches.} Such approaches use deep neural network models to generate node embeddings. In total, we study three deep-learning-based models: \noindent - \emph{Deep Neural Networks for Graph Representations (DNGR)}~\cite{cao2016deep} uses random surfing to build a normalized node co-occurrence matrix and employs a stacked denoising autoencoder to learn node embeddings.\footnote{Implementation used: \url{https://github.com/ShelsonCao/DNGR}} \noindent - \emph{Structural Deep Network Embedding (SDNE)}~\cite{wang2016structural} finds neighbors by means of $1^{st}$- and $2^{nd}$-order proximity and learns node embeddings via autoencoders.\footnote{Implementation used: \url{https://github.com/suanrong/SDNE}} \noindent - \emph{Graph SAmple and aggreGatE (GraphSAGE)}~\cite{hamilton2017inductive} is a multi-layered graph convolutional neural network, which represents nodes internally by aggregating their sampled neighborhoods and utilizes a random-walk-based cost function for unsupervised learning. GraphSAGE performs the convolution in the graph space.
It uses either mean-based, GCN-based, LSTM-based, mean-pooling, or max-pooling models for aggregation.\footnote{Implementation used: \url{https://github.com/williamleif/GraphSAGE}} \para{Large-Scale Information Network Embedding.} \emph{LINE}~\cite{tang2015line} creates embeddings that preserve $1^{st}$-order and $2^{nd}$-order proximities, which are represented as joint and conditional probability distributions, respectively.\footnote{Implementation used: \url{https://github.com/tangjianpku/LINE}} \section{Preliminaries} \label{s:study} \subsection{Datasets} \label{s:study:datasets} We employ three open datasets commonly used when evaluating trust-based recommender systems, i.e., Epinions~\cite{massa2004trust}, Ciao~\cite{tang-etal12a}, and FilmTrust~\cite{guo2013novel}. For all three datasets, we create an unweighted trust network, in which each node represents a user and each directed edge denotes a trust relationship between two users. The trust network is then represented by an adjacency matrix $\matr{A}$, where $\matr{A}_{u,v}$ is $1$ in case of a trust link between $u$ and $v$, and $0$ otherwise. In preliminary experiments, we found that most of the approaches achieved better accuracy results with an undirected network. One possible explanation is that removing the edge direction increases the average number of edges for each node and reduces the sparsity of the adjacency matrix. Moreover, some approaches are not able to consider link direction by design. Therefore, we convert the trust network to an undirected network in all of our experiments by removing edge direction, thus making $\matr{A}$ symmetric. Furthermore, we create a ratings matrix $\matr{R}$, where each non-zero entry $\matr{R}_{u,i}$ represents a rating given by a user~$u$ to an item~$i$. Table~\ref{tab:dataset-stats} shows basic statistics for all three datasets.
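The network construction just described can be sketched as follows (with a hypothetical toy edge list in place of the actual dataset files):

```python
import numpy as np

# Hypothetical directed trust edges (truster, trustee) over 4 users.
edges = [(0, 1), (1, 2), (3, 2)]
n_users = 4

# Unweighted directed adjacency matrix: A[u, v] = 1 iff u trusts v.
A = np.zeros((n_users, n_users))
for u, v in edges:
    A[u, v] = 1

# Drop edge direction as in our experiments: make A symmetric.
A_undir = np.maximum(A, A.T)
print(A_undir)
```

Taking the element-wise maximum with the transpose is one simple way to symmetrize the matrix without double-counting reciprocal trust links.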
\begin{table}[H] \setlength{\tabcolsep}{12pt} \centering \renewcommand{\arraystretch}{1.25} \caption{Dataset statistics.} \label{tab:dataset-stats} \scalebox{0.92}{ \begin{tabular}{l|r|r|r|r|r} Dataset & \#Users & \multicolumn{1}{c|}{\#Items} & \#Edges & \#Ratings & \multicolumn{1}{c}{Graph density} \\ \hline \hline Epinions~\cite{massa2004trust} & 49,288 & 139,738 & 487,183 & 664,824 & $2 \times 10^{-4}$ \\ \hline Ciao~\cite{tang-etal12a} & 19,533 & 16,121 & 40,133 & 72,665 & $1.85 \times 10^{-3}$ \\ \hline Filmtrust~\cite{guo2013novel} & 1,642 & 2,071 & 1,853 & 35,497 & $2.43 \times 10^{-3}$ \end{tabular} } \renewcommand{\arraystretch}{1} \end{table} \para{Dataset Splits.} We split each dataset into two sets: warm-start users, i.e., users with $>10$ item ratings, and cold-start users, i.e., users with $\leq 10$ item ratings. While we use the subset of warm-start users as a validation set for hyperparameter optimization concerning recommendation accuracy, the subset of cold-start users is used as a test set for measuring algorithm performance. Table~\ref{tab:split} reports the number of cold-start and warm-start users in our datasets.
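A minimal sketch of this split, assuming the per-user rating counts have already been computed (hypothetical counts):

```python
# Hypothetical rating counts per user id.
ratings_per_user = {0: 25, 1: 3, 2: 11, 3: 10, 4: 0}

# Warm-start users (> 10 ratings) form the validation set,
# cold-start users (<= 10 ratings) form the test set.
warm = [u for u, c in ratings_per_user.items() if c > 10]
cold = [u for u, c in ratings_per_user.items() if c <= 10]

print(warm, cold)  # -> [0, 2] [1, 3, 4]
```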
\begin{table}[t] \setlength{\tabcolsep}{12pt} \centering \renewcommand{\arraystretch}{1.1} \caption{Number of users per dataset split.} \label{tab:split} \scalebox{0.97}{ \begin{tabular}{c|cc|cc} \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\textbf{Users with ratings}} & \multicolumn{2}{c}{\textbf{Users with ratings \& trust}} \\ \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Warm-start} & \multirow{2}{*}{Cold-start} & \multicolumn{1}{c}{Warm-start} & \multicolumn{1}{c}{Cold-start} \\ & & & \multicolumn{1}{c}{(Validation set)} & \multicolumn{1}{c}{(Test set)} \\ \hline \multicolumn{1}{l|}{Epinions} & 14,769 & 25,393 & 14,769 & 25,393 \\ \multicolumn{1}{l|}{Ciao} & 1,020 & 16,591 & 571 & 2,124 \\ \multicolumn{1}{l|}{Filmtrust} & 963 & 545 & 499 & 241 \\ \end{tabular} } \renewcommand{\arraystretch}{1} \end{table} \subsection{Experimental Setup} \label{s:study:designmetrics} The initial directed trust network is converted to an undirected network by removing edge directions. The resulting undirected symmetric $\matr{A}$ is then used as an input for the graph embedding methods, which, as a result, create a $d$-dimensional embedding for each node (i.e., user) in the graph. \para{Recommendation Strategy.} After generating the embedding for each node in the graph, a similarity matrix $\matr{S}$ is created based on the pairwise cosine similarity between nodes' embeddings. Recommendations are generated in a kNN manner where we find the $k$-nearest neighbors $N_k$ (i.e., $k$ most similar users) for the target user $u_t$ using the similarity matrix $\matr{S}$. We use $k=40$ across all of our experiments as in~\cite{duricic2018regular}. 
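A minimal sketch of this kNN strategy, assuming the learned embeddings and the rating matrix are given as dense arrays (hypothetical toy data, with a small $k$ for readability):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 8))                       # embeddings of 5 users (d = 8)
R = rng.integers(0, 6, size=(5, 4)).astype(float)   # ratings of 4 items

# Pairwise cosine similarity between user embeddings.
norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
S = norm @ norm.T

# k most similar users for a target user (excluding the user itself).
k, target = 2, 0
order = np.argsort(-S[target])
neighbors = [v for v in order if v != target][:k]

# Rank the neighbors' items by similarity-weighted ratings.
scores = sum(S[target, v] * R[v] for v in neighbors)
ranking = np.argsort(-scores)
print(ranking)
```

In our experiments the same procedure is applied with $k=40$ and the top $10$ ranked items are recommended.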
Then, we assign a score to all items the users in $N_k$ have interacted with: \begin{equation} \label{eq:score} score(i, u_t) = \sum_{v \in N_k} S_{u_t, v} \cdot R_v(i), \end{equation} \noindent where $R_v(i)$ corresponds to the rating assigned by the user $v$ to the item $i$ and $S_{u_t, v}$ corresponds to the similarity score in $\matr{S}$ between the target user $u_t$ and the neighbor user $v$ from $N_k$. For each target user $u_t$ with $n$ rated items, we recommend $10$ items ranked according to Eq.~\ref{eq:score} and compare them with the actually rated items. \para{Evaluation Metrics.} Previous research has shown~\cite{Kaminskas2016beyondaccuracy} that accuracy may not always be the only or the best criterion for measuring recommendation quality. Typically, there is a trade-off between accuracy, novelty, and diversity since users also like to explore novel and diverse content depending on the context. Therefore, in our work, we examine both novelty and diversity as well as accuracy. In particular, in our experimental setup, we use the following four accuracy and beyond-accuracy metrics for evaluation. \\ \emph{Normalized Discounted Cumulative Gain (nDCG$@n$)} -- a ranking-dependent metric measuring recommendation accuracy based on the Discounted Cumulative Gain (DCG) measure~\cite{jarvelin2008discounted}. \\ \emph{Novelty$@n$} -- corresponds to a recommender's ability to recommend long-tail items that the target user has probably not yet seen. We compute novelty using the Expected Popularity Complement (EPC) metric~\cite{vargas2011rank}. \\ \emph{Diversity$@n$} -- describes how dissimilar the items in the recommendation list are. We calculate it as the average dissimilarity of all pairs of items in the recommendation list~\cite{smyth2001similarity}.
More specifically, we use cosine similarity to measure the dissimilarity of two items based on doc2vec embeddings~\cite{mikolov2013distributed} learned using the item vector from $\matr{R}$, where each rating is replaced with the user id. \\ \emph{User Coverage} -- defined as the number of users for whom at least one item recommendation could be generated, divided by the total number of users in the target set~\cite{massa2004trust}. \para{Baseline Approaches.} We evaluate the graph embedding approaches against five different baselines: \\ - \emph{Explicit directed trust (Trust$_{dir}$)} is a naive trust-based approach that uses the unweighted, directed trust network's adjacency matrix for finding a user's nearest neighbors, i.e., $\matr{S} = \matr{A}$. \\ - \emph{Explicit undirected trust (Trust$_{undir})$} is similar to Trust$_{dir}$ but converts the network to an undirected one by ignoring the edge direction, thus making $\matr{A}$ symmetric, i.e., $\matr{S} = \matr{A}_{undir}$. \\ - \emph{Explicit trust with Jaccard (Trust$_{jac}$)} uses the Jaccard index on the undirected trust network $\matr{A}_{undir}$; $\matr{S}$ is the result of calculating the pairwise Jaccard index on $\matr{A}_{undir}$. The intuition behind this algorithm is that two users are treated as similar if they have adjacent users, i.e., trustors and trustees, in common. \\ - \emph{Explicit trust with Katz similarity (Trust$_{Katz}$)}~\cite{duricic2018regular} exploits regular equivalence, a concept from network science, by using Katz similarity to model transitive trust relationships between users. \\ - \emph{Most Popular (MP)} is a non-personalized approach in recommender systems, which recommends the most frequently rated items. \\ \vspace{-8mm} \section{Results} \label{s:results} Table~\ref{tab:ndcg2} shows our results in terms of nDCG, novelty, diversity, and user coverage for $n=10$ recommendations on cold-start users (test set).
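As a concrete illustration of the main accuracy metric, the following is a minimal nDCG$@n$ sketch with binary relevance (the exact gain and discount variants used in our evaluation may differ):

```python
import math

def ndcg_at_n(recommended, relevant, n=10):
    """Binary-relevance nDCG@n: DCG of the recommendation list
    divided by the DCG of an ideal ranking of the relevant items."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:n]) if item in relevant)
    ideal_hits = min(len(relevant), n)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# One hit at rank 2 out of two relevant items.
print(round(ndcg_at_n([3, 1, 7], {1, 2}), 3))  # -> 0.387
```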
The reported results depict those hyperparameter configurations\footnote{Details on the hyperparameter optimization can be found at: \url{https://github.com/tduricic/trust-recommender/blob/master/docs/hyperparameter-optimization.md}}, which achieve the highest recommendation accuracy on warm-start users (validation set). \iffalse \subsection{Hyperparameter Optimization} \label{sec:hyp_opt} For the graph embedding approaches, we perform a grid search on the parameters and, where possible, experiment with different model architectures using the validation set, i.e., the warm-start users. Table~\ref{tab:ndcgconf} reports on the best performing configurations for each approach and dataset concerning nDCG. LLE and LE do not have hyperparameters and are hence not part of the table. All embeddings are evaluated for dimension size $d=128$ except for GraRep where $d=120$, which we explain below. \para{GF.} We perform a grid search that includes three different values for the learning rate $\lambda \in \{0.1, 0.5, 1.0\}$ of the Graph Factorization approach. \para{HOPE.} The evaluation includes four different high-order proximity measurements, i.e., Katz with a decay parameter $\beta \in \{0.1, 0.9, 0.99\}$, Rooted PageRank (RPR) with probability of randomly walking to a neighbor $\alpha \in \{0.5, 0.85, 0.99\}$, Common Neighbors (CN), and Adamic-Adar (AA). Katz and RPR can be classified as proximities which preserve global asymmetric transitivity. CN and AA preserve local asymmetric transitivity. This results in $8$ experiments per dataset. We observe that nDCG for both Ciao and Epinions drops with higher $\alpha$ with RPR similarity. \para{GraRep.} We evaluate four different higher-order similarities (number of adjacency matrix powers), more specifically, $1, 2, 3$, and $4$, with a total of $4$ experiments per dataset.
GraRep requires having a target embedding size divisible by all orders, which is why $d=120$ is selected as it is the highest number below 128 that is divisible by all four considered orders. Interestingly, higher-order similarities lead to lower nDCG on Epinions and Ciao, meaning that direct connections are preferred over transitive ones. However, that is not the case on Filmtrust, where transitive links have a more significant impact on recommendation accuracy. \para{Node2vec.} In Node2vec, we use the concept of second-order random walks, i.e., we can vary the return parameter $p$ as well as the in-out parameter $q$ that controls the probability of a walk revisiting nodes~\cite{grover2016node2vec}. We set $p=1$ in our experiments and change $q \in \{0.01,0.5,1,2,100\}$, with low $q$ indicating a response to homophily, which defines to which local community a node belongs, and high $q$ indicating a response to structural roles of nodes, e.g., hub nodes, bridge nodes, and peripheral nodes. We also experiment with different walk lengths $t \in \{20,40,80\}$ and window sizes $w \in \{3,5,8\}$. This results in a total of $45$ Node2vec experiments per dataset. With respect to nDCG, on Epinions and Filmtrust we can observe a slight improvement for extremely high $q$, while on Ciao, we see a slight decrease. Moreover, Filmtrust prefers both extreme $q$ values in terms of nDCG to the case where $p=q$. \para{DeepWalk.} The grid search explored the walk length of the random walk $t \in \{5,10,20,40,80,120,160\}$ as well as the window size $w \in \{1,2,3,5,8\}$ for the skip-gram model, amounting to $35$ DeepWalk experiments per dataset in total. We observe that with larger window sizes, we reach higher nDCG on average for Epinions and Ciao, while the opposite holds for Filmtrust. \para{Role2vec.} We focus on the role-mapping in Role2vec instead of the window size.
We tested two types of node features, i.e., graph motifs features, which are small subgraphs of size 3 surrounding the node of interest, and Weisfeiler-Lehman features, which aggregate degrees of the adjacent nodes. We experiment with these feature types in combination with different choices of the number of clusters $ \in \{25,50,75\}$ for the role function and different walk lengths $t \in \{20,40,80\}$. We conduct all Role2vec experiments with window size $w = 5$ totaling to $18$ experiments. \begin{table*}[t] \setlength{\tabcolsep}{7pt} \centering \renewcommand{\arraystretch}{1.1} \caption{\textbf{Best performing hyperparameter setting for each evaluated graph embedding approach based on nDCG@10}} \scalebox{.85}{ \begin{tabular}{c?c?c?c|c|c} \textbf{Cat.} & \textbf{Approach} & \textbf{Parameter} & \textbf{Epinions} & \textbf{Ciao} & \textbf{Filmtrust} \\ \specialrule{1pt}{0pt}{2pt} \specialrule{1pt}{0pt}{0pt} \multirow{4}{*}{\rotatebox[origin=c]{90}{\parbox{1cm}{\centering \textbf{Fact.}}}} & GF & $\lambda$ & 0.1 & 0.5 & 1.0 \\ \clineB{2-6}{1} & \multirow{2}{*}{HOPE} & \multirow{2}{*}{similarity} & Katz & \multirow{2}{*}{CN} & RPR \\ & & & $\beta=0.01$ & & $\alpha=0.85$ \\ \clineB{2-6}{1} & GraRep & order &1 & 1 & 4 \\ \specialrule{1pt}{0pt}{0pt} \multirow{9}{*}{\rotatebox[origin=c]{90}{\parbox{3cm}{\centering \textbf{Random Walk}}}} & \multirow{4}{*}{Node2vec} & p & 1 & 1 & 1 \\ & & q & 0.5 & 2 & 100 \\ & & walk length & 80 & 80 & 80 \\ & & window size & 8 & 3 & 3 \\ \clineB{2-6}{1} & \multirow{2}{*}{DeepWalk} & walk length & 120 & 10 & 160 \\ & & window size & 8 & 8 & 2 \\ \clineB{2-6}{1} & \multirow{3}{*}{Role2vec} & feature type & WL & WL & WL \\ & & walk length & 20 & 40 & 80 \\ & & \#clusters & 75 & 75 & 25 \\ \specialrule{1pt}{0pt}{0pt} \multirow{5}{*}{\rotatebox[origin=c]{90}{\parbox{1cm}{\centering \textbf{DL}}}} & DNGR & hidden layers & [256,192,128] & [128,128,128,128] & [256,192,128] \\ \clineB{2-6}{1} & \multirow{2}{*}{SDNE} & $\alpha_{2nd}$ & 0.4 
& 0.4 & 0.3 \\ & & $\beta$ & 30 & 30 & 20 \\ \clineB{2-6}{1} \clineB{2-6}{1} & \multirow{3}{*}{GraphSAGE} & model & GCN & GCN & LSTM \\ & & learning rate & 0.0001 & 0.001 & 0.001 \\ & & neg. samples & 3 & 3 & 5 \\ \clineB{2-6}{1} \specialrule{1pt}{0pt}{0pt} \multirow{3}{*}{\centering \textbf{LINE}} & \multirow{3}{*}{LINE} & Order & 1 & 1 & 2 \\ & & learning rate & 0.025 & 0.01 & 0.0001 \\ & & neg. samples & 13 & 5 & 8 \\ \specialrule{1pt}{0pt}{0pt} \end{tabular} } \renewcommand{\arraystretch}{1} \label{tab:ndcgconf} \end{table*} \para{DNGR.} We experiment with three different autoencoder architectures, i.e., sizes of hidden layers $ \in \{[256,192,128], [128,128,128], [128,128,128,128]\}$ For DNGR experiments, we set the random surfing rate to $0.98$ as it showed a better performance in~\cite{cao2016deep}. \para{SDNE.} We evaluate different values for $\alpha_{2nd} \in \{0.1, 0.2, 0.3, 0.4\}$ and $\beta \in \{10, 20, 30\}$. The hyperparameter $\alpha_{2nd}$ controls the influence of second-order proximity in the loss function. The hyperparameter $\beta$ controls the reconstruction weight of the non-zero elements in the training graph. We identified that a higher $\alpha_{2nd}$ and $\beta$ resulted in higher nDCG best on all three datasets. \para{GraphSAGE.} With GraphSAGE, we can select the aggregator (i.e., mean, GCN-based, LSTM-based, and max pooling aggregation) and the learning rate as well as the number of negative samples for the unsupervised learning loss function. We evaluated GraphSAGE with these four model types, varying the learning rate $\lambda \in \{0.0001,0.001,0.01\}$ and the number of negative samples $Q \in \{3,5,8\}$. In total, we carried out $36$ experiments for GraphSAGE on each dataset. \para{LINE.} In LINE, we can select the proximity order, i.e., first-order and second order. We tried both orders, and the first-order achieved higher nDCG on Epinions and Ciao. 
We configured the learning rate $\lambda \in \{0.0001,0.001,0.005,0.01,0.025,0.125\}$ and the number of negative samples that are used to optimize the probabilistic model $Q \in \{2,3,5,8,13,21\}$. This sums up to $72$ experiments in total. \fi \subsection{Accuracy Results} To ease the interpretation of the evaluation results across all three datasets, we rank the results by nDCG and compute the average of these ranks. Correspondingly, in the Rank$_{nDCG}$ column, we show the resulting average rank for the three datasets for recommendation accuracy. We can observe that RW-based approaches, especially Node2vec and DeepWalk, are the best performing approaches on all three datasets. In most cases, approaches based on graph embeddings outperform the baselines, except for Trust$_{jac}$ on Epinions. Contrary to a study conducted in~\cite{duricic2018regular}, Trust$_{jac}$ achieves higher accuracy in comparison with Trust$_{Katz}$. The reason is that in the present work, we convert the trust network to an undirected network, i.e., do not consider the direction of the trust edge. HOPE and Laplacian Eigenmaps perform best among the factorization-based approaches; LINE shows a good performance on all three datasets concerning all three metrics, and GraphSAGE is the best deep learning approach. SDNE does not perform well in our experiments, which we attribute to not exploring a sufficiently broad range of hyperparameters. \begin{table*}[t!] \setlength{\tabcolsep}{2pt} \centering \renewcommand{\arraystretch}{1.25} \caption{\textbf{Evaluation results on cold-start users for different trust-based CF approaches for $n=10$ recommendations concerning nDCG, novelty, diversity, and user coverage comparing approaches from five different algorithm families across three different datasets. 
Values marked with $^{\boldsymbol{*}}$ denote that the corresponding approach was significantly better than every other approach with respect to the appropriate metric according to a Wilcoxon signed-rank test (Bonferroni corrected, $p < 0.01$). Rank$_{nDCG}$ is calculated by summing nDCG-based ranks across the datasets and re-ranking the sums.}} \scalebox{.72}{ \begin{tabular}{c?c??c??c|c|c|c?c|c|c|c?c|c|c|c} \multirow{2}{*}{\textbf{Cat.}} & \multirow{2}{*}{\textbf{Approach}} & % \multirow{2}{*}{\textbf{\shortstack{Rank\\ $_{nDCG}$}}} & \multicolumn{4}{c?}{\textbf{Epinions}} & \multicolumn{4}{c?}{\textbf{Ciao}} & \multicolumn{4}{c}{\textbf{Filmtrust}} \\ \clineB{4-15}{2.5} & & & \textbf{nDCG} & \textbf{Nov.} & \textbf{Div.} & \textbf{UC} & \textbf{nDCG} & \textbf{Nov.} & \textbf{Div.} & \textbf{UC} & \textbf{nDCG} & \textbf{Nov.} & \textbf{Div.} & \textbf{UC} \\ \specialrule{1pt}{0pt}{2pt} \specialrule{1pt}{0pt}{0pt} \multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Baseline}}} & Trust$_{dir}$ & 15 & .0245 & .0060 & .6006 & 59.2\% & .0140 & .0028 & .3700 & 3.9\% & .2655 & .0313 & \textbf{.2784} & 30.3\% \\ \clineB{2-15}{1} & Trust$_{undir}$ & 15 & .0260 & .0063 & .5960 & 97.0\% & .0127 & \textbf{.0045} & .3632 & 11.4\% & .2739 & .0284 & .2731 & 42.0\% \\ \clineB{2-15}{1} & Trust$_{jac}$ & 11 & .0373 & .0056 & .6548 & 99.9\% & .0176 & .0027 & .3996 & 12.8\% & .3387 & \textbf{.0369} & .2266 & 36.1\% \\ \clineB{2-15}{1} & Trust$_{Katz}$ & 12 & .0290 & .0046 & .6979 & \multirow{14}{*}{\rotatebox[origin=c]{90}{100 \%}} & .0158 & .0026 & .3842 & 13.0\% & .3681 & .0322 & .2185 & 42.9\% \\ % \clineB{2-6}{1} \clineB{8-15}{1} & MP & 17 & .0134 & .0015 & \textbf{.7621}$^{\boldsymbol{*}}$ & & .0135 & .0012 & \textbf{.5666} & % 100\% & .3551 & .0137 & .1672 & 100\% \\ \clineB{1-6}{3} \clineB{8-15}{3} \multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Factorization}}} & LLE & 7 & .0309 & .0044 & .6977 & & .0239 & .0036 & .4013 & \multirow{12}{*}{\rotatebox[origin=c]{90}{13.1 \%}} & 
.3649 & .0159 & .1926 & \multirow{12}{*}{\rotatebox[origin=c]{90}{44.2 \%}} \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & LE & 3 & .0318 & .0045 & .6961 & & .0231 & .0034 & .3962 & & .3715 & .0161 & .1853 & \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & GF & 14 & .0138 & .0023 & .7024 & & .0154 & .0022 & .3970 & & .3686 & .0154 & .1945 & \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & HOPE & 3 & .0331& .0047 & .6728 & & .0220 & .0033 & .3956 & & .3718 & .0158 & .1827 & \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & GraRep & 7 & .0298 & .0042 & .6704 & & .0209 & .0030 & .3974 & & .3694 & .0147 & .1859 & \\ \clineB{1-6}{3} \clineB{8-10}{3} \clineB{12-14}{3} \multirow{3}{*}{\rotatebox[origin=c]{90}{ \parbox{1cm}{\centering \textbf{RW}}}} & Node2vec & \textbf{1} & .0413 & .0064 & .6581 & & .0228 & .0036 & .4042 & & \textbf{.3904} & .0151 & .2235 & \\%\clineB{2-15}{1} \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & DeepWalk & 2 & \textbf{.0435}$^{\boldsymbol{*}}$ & \textbf{.0067}$^{\boldsymbol{*}}$ & .6707 & & \textbf{.0247}$^{\boldsymbol{*}}$ & .0037 & .3992 & & .3654 & .0152 & .1950 & \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & Role2vec & 6 & .0363 & .0054 & .6910 & & .0149 & .0024 & .3933 & & .3695 & .0151 & .1919 & \\ \clineB{1-6}{3} \clineB{8-10}{3} \clineB{12-14}{3} \multirow{3}{*}{\rotatebox[origin=c]{90}{\parbox{1cm}{\centering \textbf{DL}}}} & DNGR & 10 & .0353 & .0051 & .6869 & & .0197 & .0031 & .4023 & & .3583 & .0142 & .1959 & \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & SDNE & 12 & .0184 & .0022 & .7412 & & .0175 & .0028 & .3921 & & .3687 & .0152 & .2003& \\ % \clineB{2-6}{1} \clineB{8-10}{1} \clineB{12-14}{1} & GS & 7 & .0325 & .0047 & .6810 & & .0216 & .0031 & .3963 & & .3678 & .0151 & .1883 & \\ \clineB{1-6}{3} \clineB{8-10}{3} \clineB{12-14}{3} \multirow{1}{*}{\rotatebox[origin=c]{0}{\textbf{LINE}}} & LINE & 5 & .0407 & .0063 & .6566 & & .0222 & .0033 & .3992 & & .3667 & .0150 & 
.1947 & \\ \specialrule{1pt}{0pt}{0pt} \end{tabular} } \renewcommand{\arraystretch}{1} \label{tab:ndcg2} \end{table*} \subsection{Beyond-Accuracy Results} \para{Novelty, Diversity, and User Coverage.} Besides being superior in terms of Rank$_{nDCG}$, Node2vec also achieves high novelty and diversity scores. Moreover, it performs similarly to or better than the other RW-based methods across all three datasets. Factorization-based approaches show average performance concerning both novelty and diversity, except for GF, which scores very low on novelty and above average on diversity. DL approaches show average to below-average performance on novelty and average performance on diversity. Trust-based baselines achieve high novelty scores in general and, not surprisingly, MP has a high diversity score and the worst novelty score out of all approaches. Since all graph embedding approaches create a latent representation of each user in a trust network and use it to generate a set of item recommendations, there are no differences among them in user coverage. Except for MP, all baselines result in lower user coverage than the graph embedding approaches. Since MP provides the same list of recommendations to all users, it always has maximum user coverage. \para{Evaluation Metrics and User Preferences.} Table~\ref{tab:ndcg2} reports only mean values for each of the approaches. However, we store individual nDCG, novelty, and diversity values for each target user and each approach and dataset. By computing the Kendall rank correlation coefficient (Bonferroni corrected, $p<0.01$) on non-zero metric values for all approaches, we can get an insight into user preferences for each dataset. In this manner, we observe a statistically significant positive mean correlation across all three datasets between nDCG and novelty, ranging from $0.43$ on Epinions to $0.36$ on Filmtrust. This suggests that users on all three platforms prefer recommendations with higher novelty, especially on Epinions.
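This correlation analysis can be sketched as follows (a pure-Python tau-a illustration on hypothetical per-user values; our evaluation uses the tie-aware coefficient with Bonferroni-corrected significance tests):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a, no tie handling) between two
    equally long lists of per-user metric values."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical per-user nDCG and novelty values for one approach.
ndcg_vals    = [0.10, 0.40, 0.25, 0.05, 0.30]
novelty_vals = [0.01, 0.06, 0.04, 0.02, 0.05]

print(kendall_tau(ndcg_vals, novelty_vals))  # -> 0.8
```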
We also observe a statistically significant mean negative correlation between diversity and novelty on Epinions ($-0.15$), which suggests that on this platform, more novel content seems to be less diverse. \iffalse \subsection{Correlation of nDCG, Diversity and Novelty} In order to uncover the effect of the utilized graph embedding approaches on a specific dataset, we further study the relationship between all reported evaluation metrics. That is, we apply a Kendall tau~\cite{knight1966computer} correlation test to measure the correlation between each pair of these metrics. The high $p$-value of the Kendall tau test, indicating the probability of the absence of correlation, in the Filmtrust case suggests the insignificance of the correlation. However, this is not the case for Ciao or Epinions. In Ciao, we can see a high correlation between nDCG and novelty with an average $\overline{\tau}=0.1563$ comparing to Epinions $\overline{\tau}=0.066$. Another positive correlation for Ciao is between nDCG and diversity with average $\overline{\tau}=0.0476$, relatively larger than Epinions $\overline{\tau}=0.0026$. These two observations support that users on Ciao prefer more novel and diverse content than users on Epinions, which can be a reflection of the fact that items in Ciao are DVDs while items in Epinions are general products.
Epinions, on the other hand, shows a strong correlation between diversity and novelty with average $\overline{\tau}=0.546$ comparing to Ciao where the average is $\overline{\tau}=0.1084$. Figure~\ref{fig:correlationDist} shows the correlation between each pair of the considered metrics on each dataset. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{metrics-correlations} \caption{\textbf{Histogram showing Kendall tau correlations between the individual recommendation metrics for every approach on all three datasets.}} \label{fig:correlationDist} \end{figure} \fi \section{Conclusions and Future Work} \label{s:conclusion} In this work, we explored the utility of graph embedding approaches from four method families to generate latent user representations for trust-based recommender systems in a cold-start setting. We found that random-walk-based approaches (i.e., Node2vec and DeepWalk) consistently achieve the best accuracy. We additionally compared the methods concerning novelty, diversity, and user coverage. Our results showed that, again, Node2vec and DeepWalk scored high on novelty and diversity. Thus, they can provide a balanced trade-off between the three evaluation metrics. Moreover, our experiments showed that we can increase the user coverage of recommendations when we utilize graph embeddings in a $k$-nearest neighbor manner. Finally, a correlation analysis between the nDCG, novelty, and diversity scores revealed that in all three datasets, users tend to prefer novel recommendations. Hence, on these datasets, recommender systems should offer a good trade-off between accuracy and novelty. This could also explain the superior performance of the random-walk-based approaches, and we plan to investigate this in more detail in follow-up work. \para{Limitations and Future Work.} One limitation of this study is that we treated the trust networks as undirected while, in reality, they are directed.
This may have resulted in a loss of information, and as such, we aim to further explore how to preserve different properties of trust networks (e.g., asymmetry). Moreover, it is possible that we did not examine a sufficiently large hyperparameter space, which might have reduced the performance of some of the approaches, e.g., SDNE. Furthermore, we aim to explore node properties of $k$-nearest neighbors for all methods to find and interpret the critical node properties preserved by the graph embeddings, which impact the recommendation accuracy, thus providing a better understanding of the complex notion of trust. Finally, we aim to incorporate user features obtained from the rating matrix into graph embeddings learned on the trust network. \para{Acknowledgements.} This work is supported by the H2020 project TRUSTS (GA: 871481) and the ``DDAI'' COMET Module within the COMET – Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry for Digital and Economic Affairs (bmdw), the Austrian Research Promotion Agency (FFG), the province of Styria (SFG) and partners from industry and academia. The COMET Programme is managed by FFG.
\section{Introduction} Gradings for cluster algebras have been introduced in various ways by a number of authors and for a number of purposes. The evolution of the notion started with the foundational work of Fomin and Zelevinsky \cite{FZ-CA1}, who consider $\ensuremath{\mathbb{Z}}^{n}$-gradings where $n$ is precisely the rank of the cluster algebra. Shortly afterwards, in the course of considering Poisson structures compatible with cluster algebras, Gekhtman, Shapiro and Vainshtein \cite{GSV-CA-Poisson} gave a definition of a toric action on a cluster algebra, which dualises to that of a $\ensuremath{\mathbb{Z}}^{m}$-grading, where $m$ can now be arbitrary. In \cite{GradedCAs} the first author examined the natural starting case of finite type cluster algebras without coefficients. A complete classification of the integer multi-gradings that occur was given and it was observed that the gradings so obtained were all \emph{balanced}, that is, there exist bijections between the set of variables of degree $\underline{d}$ and those of degree $-\underline{d}$. This phenomenon was explained by means of graded generalised cluster categories, where---following \cite{DominguezGeiss}---by generalised cluster category we mean a 2-Calabi--Yau triangulated category $\curly{C}$ with a basic cluster-tilting object $T$. The definition made in \cite{GradedCAs} associates an integer vector (the multi-degree) to an object in the category in such a way that the vectors are additive on distinguished triangles and transform naturally under mutation. This is done via the key fact that every object in a generalised cluster category has a well-defined associated integer vector-valued datum called the index with respect to $T$; in order to satisfy the aforementioned two properties, degrees are necessarily linear functions of the index. The categorical approach has the advantage that it encapsulates the global cluster combinatorics, or more accurately the set of indices does. 
Another consequence is an explanation for the observed balanced gradings in finite type: the auto-equivalence of the cluster category given by the shift functor induces an automorphism of the set of cluster variables that reverses signs of degrees. Hence any cluster algebra admitting a (triangulated) cluster categorification necessarily has all of its gradings balanced (provided that the set of reachable rigid indecomposable objects, which is in bijection with the set of cluster variables, is closed under the shift functor). This is the case for finite type or, more generally, acyclic cluster algebras having no coefficients. Our main goal is to provide a version of the above in the Frobenius, i.e.\ exact category, setting, parallel to the triangulated one. A Frobenius category is an exact category with enough projective objects and enough injective objects, such that these two classes of objects coincide. From work of Fu and Keller \cite{FuKeller} and the second author \cite{Pressland}, we have a definition of a Frobenius cluster category and objects in such a category also have indices. From this we may proceed along similar lines to \cite{GradedCAs} to define gradings and degrees, except that we elect to work (a) over an arbitrary abelian group $\mathbb{A}$ and (b) in a more basis-free way by working K-theoretically and with the associated Euler form. We prove the foundational properties of gradings for Frobenius cluster categories: that degrees are compatible with taking the cluster character, that they are additive on exact sequences and that they are compatible with mutation. Furthermore, we prove an analogue of a result of Palu \cite{Palu-Groth-gp} in which we show that the space of gradings for a graded Frobenius cluster category $\curly{E}$ is closely related to the Grothendieck group, namely that the former is isomorphic to $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_{0}(\curly{E})}{\mathbb{A}}$.
This enables one to show that some categorical datum is a grading by seeing that it respects exact sequences, and conversely that from the cluster algebra categorified by $\curly{E}$ we may deduce information about $\mathrm{K}_{0}(\curly{E})$. We exhibit this in examples, notably the categories of Buan, Iyama, Reiten and Scott \cite{BIRS1} corresponding to Weyl group elements, also studied by Gei\ss, Leclerc and Schr\"oer \cite{GLS-PFV} in the context of categorifying cells in partial flag varieties. The homogeneous coordinate rings of Grassmannians are an example of particular importance in this area. They admit a graded cluster algebra structure but beyond the small number of cases when this structure is of finite type, little is known about the cluster variables. A first step towards a better understanding is to describe how the degrees of the cluster variables are distributed: Are the degrees unbounded? Does every natural number occur as a degree? Are there finitely many or infinitely many variables in each occurring degree? By using the Frobenius cluster categorification of Jensen, King and Su \cite{JKS} and the grading framework here, we can hope to begin to examine these questions. \subsection*{Acknowledgements} The work herein was begun during a research visit of the second author to Lancaster University, funded by EPSRC grant EP/M001148/1. The second author would like to thank Paul Balmer, Andrew Hubery, Alastair King, Sondre Kvamme and Pierre-Guy Plamondon for useful conversations, and acknowledge financial support from Sonderforschungsbereich 701 at Universit\"{a}t Bielefeld, and from the Max-Planck-Gesellschaft. \section{Preliminaries} \label{preliminaries} The construction of a cluster algebra of geometric type from an initial seed $(\underline{x},B)$, due to Fomin and Zelevinsky \cite{FZ-CA1}, is now well-known.
Here $\underline{x}=(x_1,\dotsc,x_n)$ is a transcendence base for a certain field of fractions of a polynomial ring, called a cluster, and $B$ is an $n\times r$ integer matrix whose uppermost $r\times r$ submatrix (the principal part of $B$) is skew-symmetrisable. If the principal part of $B$ is skew-symmetric, then $B$ is often replaced by the (ice) quiver $Q=Q(B)$ it defines in the natural way. We refer the reader who is unfamiliar with this construction to the survey of Keller \cite{Keller-CASurvey} and the books of Marsh \cite{Marsh-book} and of Gekhtman, Shapiro and Vainshtein \cite{GSV-Book} for an introduction to the topic and summaries of the main related results in this area. We set some notation for later use. For each index $1\leq k\leq r$, set \begin{align*} \underline{b}_{k}^{+} & = -\underline{\boldsymbol{e}}_{k}+\sum_{b_{ik}>0}b_{ik}\underline{\boldsymbol{e}}_{i} \qquad \text{and} \\ \underline{b}_{k}^{-} & = -\underline{\boldsymbol{e}}_{k}-\sum_{b_{ik}<0}b_{ik}\underline{\boldsymbol{e}}_{i}, \end{align*} where the vector $\underline{\boldsymbol{e}}_{i}\in \ensuremath{\mathbb{Z}}^{n}$ ($n$ being the number of rows of $B$) is the $i$th standard basis vector. Note that the $k$th column of $B$ may be recovered as $B_{k}=\underline{b}_{k}^{+}-\underline{b}_{k}^{-}$. Then for $1\leq k\leq r$, the exchange relation for mutation of the seed $(\underline{x},B)$ in the direction $k$ is given by \[ x_{k}^{\prime}=\underline{x}^{\underline{b}_{k}^{+}}+\underline{x}^{\underline{b}_{k}^{-}}, \] where for $\underline{a}=(a_{1},\dotsc ,a_{n})$ we set \[ \underline{x}^{\underline{a}} = \prod_{i=1}^{n} x_{i}^{a_{i}}. \] If $(\underline{x},B)$ is a seed, we call the elements of $\underline{x}$ cluster variables. The variables $x_1,\dotsc,x_r$ are called mutable, and $x_{r+1},\dots,x_n$ (which appear in the cluster of every seed related to $(\underline{x},B)$ by mutations) are called frozen. 
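For a small illustrative example of this notation (not used elsewhere in the paper), consider a seed with $n=3$ and $r=2$, cluster $\underline{x}=(x_{1},x_{2},x_{3})$ with $x_{3}$ frozen, and exchange matrix
\[ B=\begin{pmatrix} 0 & 1 \\ -1 & 0 \\ 1 & 0 \end{pmatrix}, \]
whose principal part is skew-symmetric. For $k=1$ we have $b_{21}=-1<0$ and $b_{31}=1>0$, so
\[ \underline{b}_{1}^{+}=-\underline{\boldsymbol{e}}_{1}+\underline{\boldsymbol{e}}_{3} \qquad \text{and} \qquad \underline{b}_{1}^{-}=-\underline{\boldsymbol{e}}_{1}+\underline{\boldsymbol{e}}_{2}, \]
and the exchange relation in direction $1$ reads
\[ x_{1}^{\prime}=\underline{x}^{\underline{b}_{1}^{+}}+\underline{x}^{\underline{b}_{1}^{-}}=\frac{x_{3}+x_{2}}{x_{1}}. \]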
Note that while some authors do not consider frozen variables to be cluster variables, it will be convenient for us to do so. We will sometimes also refer to the indices $1,\dotsc,r$ and $r+1,\dotsc,n$ as mutable and frozen respectively. Throughout, for simplicity, we will assume that all algebras and categories are defined over $\mathbb{C}$. All modules are left modules. For a Noetherian algebra $A$, we denote the abelian category of finitely generated $A$-modules by $\fgmod{A}$. If $B$ is a matrix, we denote its transpose by $B^{t}$. \subsection{Graded seeds, cluster algebras and cluster categories}\label{s:gradedCAs} Let $\mathbb{A}$ be an abelian group. The natural definition for an $\mathbb{A}$-graded seed is as follows. \begin{definition} A multi-graded seed is a triple $(\underline{x},B,G)$ such that \begin{enumerate}[label=(\alph*)] \item $(\underline{x}=(x_{1},\dotsc ,x_{n}),B)$ is a seed, and \item $G\in\mathbb{A}^n$, thought of as a column vector, satisfies $B^{t}G=0$. \end{enumerate} The matrix multiplication in (b) makes sense since $\mathbb{A}$ is a $\ensuremath{\mathbb{Z}}$-module. This is most transparent when $\mathbb{A}=\ensuremath{\mathbb{Z}}^m$, so that $G$ is an $n\times m$ integer matrix. \end{definition} From now on, unless we particularly wish to emphasise $\mathbb{A}$, we will drop it from the notation and simply use the term ``graded''. The above data defines $\deg_{G}(x_{i})=G_{i}\in\mathbb{A}$ (the $i$th component of $G$) and this can be extended to rational expressions in the generators $x_{i}$ in the obvious way. 
Condition (b) ensures that for each $1\leq k\leq r$, we have $\underline{b}_k^+\cdot G=\underline{b}_k^-\cdot G$, making sense of these dot products via the $\ensuremath{\mathbb{Z}}$-module structure of $\mathbb{A}$, so every exchange relation is homogeneous, and \[G_k':=\deg(x_k')=\underline{b}_k^+\cdot G=\underline{b}_k^-\cdot G.\] Thus we can also mutate our grading, and repeated mutation propagates a grading on an initial seed to every cluster variable and to the associated cluster algebra. Hence we obtain the following well-known result, given in various forms in the literature. \begin{proposition}\label{p:gradedCA} The cluster algebra $\curly{A}(\underline{x},B,G)$ associated to an initial graded seed $(\underline{x},B,G)$ is an $\mathbb{A}$-graded algebra. Every cluster variable of $\curly{A}(\underline{x},B,G)$ is homogeneous with respect to this grading. \qed \end{proposition} We refer the reader to \cite{GradedCAs} for a more detailed discussion of the above and further results regarding the existence of gradings, relationships between gradings and a study of $\ensuremath{\mathbb{Z}}$-gradings for cluster algebras of finite type with no coefficients. \subsection{Graded triangulated cluster categories}\label{s:graded-triang-cluster-categories} Our interest here is in generalising the categorical parts of \cite{GradedCAs}, which refer to models of cluster algebras without frozen variables given by $2$-Calabi--Yau triangulated categories, and explain how to interpret the data of a grading on the cluster algebra at this categorical level. Our main goal is to provide a similar theory for stably $2$-Calabi--Yau Frobenius categories, which may be used to model cluster algebras that do have frozen variables. In order to motivate what will follow for the Frobenius setting, we give the key definitions and statements from the triangulated case, without proofs as these may be found in \cite{GradedCAs}.
\begin{definition}[{\cite{DominguezGeiss}}] Let $\curly{C}$ be a triangulated 2-Calabi--Yau category with suspension functor $\Sigma$ and let $T\in \curly{C}$ be a basic cluster-tilting object. We will call the pair $(\curly{C},T)$ a generalised cluster category. \end{definition} Write $T=T_{1}\ensuremath{ \oplus} \dotsm \ensuremath{ \oplus} T_{r}$. Setting $\Lambda=\op{\End{\curly{C}}{T}}$, the functor\footnote{This functor is replaced by $E=F\Sigma$ in \cite{DominguezGeiss}, \cite{GradedCAs}; we use $F$ here, as in \cite{FuKeller}, for greater compatibility with the Frobenius case.} $F=\curly{C}(T,-)\colon \curly{C} \to \fgmod{\Lambda}$ induces an equivalence $\curly{C}/\text{add}(\Sigma T)\to\fgmod{\Lambda}$. We may also define an exchange matrix associated to $T$ by \[ (B_{T})_{ij}=\dim \text{Ext}_{\Lambda}^{1}(S_{i},S_{j})-\dim \text{Ext}_{\Lambda}^{1}(S_{j},S_{i}). \] Here the $S_{i}=FT_{i}/\mathop{\mathrm{rad}} FT_{i}$, $i=1,\dotsc ,r$ are the simple $\Lambda$-modules. Thus, if the Gabriel quiver of the algebra $\Lambda$ has no loops or $2$-cycles, $B_T$ is its corresponding skew-symmetric matrix. For each $X\in \curly{C}$ there exists a distinguished triangle \[ \bigoplus_{i=1}^{r} T_{i}^{m(i,X)} \to \bigoplus_{i=1}^{r} T_{i}^{p(i,X)} \to X \to \Sigma \left( \bigoplus_{i=1}^{r} T_{i}^{m(i,X)} \right) \] Define the index of $X$ with respect to $T$, denoted $\ind{T}{X}$, to be the integer vector with $\ind{T}{X}_{i}=p(i,X)-m(i,X)$. 
By \cite[\S 2.1]{Palu}, $\ind{T}{X}$ is well-defined and we have a cluster character \begin{align*} C_{?}^{T}\colon \text{Obj}(\curly{C}) &\to \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}] \\ X & \mapsto \underline{x}^{\ind{T}{X}}\sum_{\underline{e}} \chi(\mathrm{Gr}_{\underline{e}}(F\Sigma X))\underline{x}^{B_{T}\cdot\underline{e}} \end{align*} Here $\mathrm{Gr}_{\underline{e}}(F\Sigma X)$ is the quiver Grassmannian of $\Lambda$-submodules of $F\Sigma X$ of dimension vector $\underline{e}$ and $\chi$ is the topological Euler characteristic. We use the same monomial notation $\underline{x}^{\underline{a}}$ as previously. We recall that for any cluster-tilting object $U$ of $\curly{C}$, and any indecomposable summand $U_k$ of $U$, there are non-split exchange triangles \[ U_{k}^{*} \to M \to U_{k} \to \Sigma U_{k}^{*} \qquad \text{and} \qquad U_{k} \to M' \to U_{k}^{*} \to \Sigma U_{k} \] with $M,M' \in \operatorname{add}(U)$, that glue together to form an Auslander--Reiten $4$-angle \[U_k\to M'\to M\to U_k\] in $\curly{C}$ \cite[Definition~3.8]{IyamaYoshino}. If the quiver of $\op{\End{\curly{C}}{U}}$ has no loops or $2$-cycles incident with the vertex corresponding to $U_k$, then $M,M'\in\operatorname{add}(U/U_k)$ and $U^{*}=(U/U_{k})\ensuremath{ \oplus} U_{k}^{*}$ is again cluster-tilting. In the generality of our setting, these results are all due to Iyama and Yoshino \cite{IyamaYoshino}. The natural definition of a graded generalised cluster category is then the following. \begin{definition}[{\cite[Definition~5.2]{GradedCAs}}]\label{d:graded-gen-cl-cat} Let $(\curly{C},T)$ be a generalised cluster category and let $G\in\mathbb{A}^r$ such that $B_{T}G=0$. We call the tuple $(\curly{C},T,G)$ a graded generalised cluster category. \end{definition} Note that, in this context, $B_{T}$ is square and skew-symmetric, so we may suppress taking the transpose in the equation $B_{T}G=0$. 
\begin{definition}[{\cite[Definition~5.3]{GradedCAs}}]\label{d:degree-graded-gen-cl-cat} Let $(\curly{C},T,G)$ be a graded generalised cluster category. For any $X\in \curly{C}$, we define $\deg_{G}(X)=\ind{T}{X}\cdot G$.\end{definition} The main results about graded generalised cluster categories are summarised in the following Proposition, the most significant of these being \ref{p:prop-of-gen-cc-additive-on-triang}. The proofs in \cite{GradedCAs} are given for $\mathbb{A}=\ensuremath{\mathbb{Z}}^m$, but remain valid in the more general setting. \begin{proposition}[{\cite[\S5]{GradedCAs}}]\label{p:prop-of-gen-cc} Let $(\curly{C},T,G)$ be a graded generalised cluster category. \begin{enumerate}[label=(\roman*)] \item Let $\mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}]$ be $\mathbb{A}$-graded by $\deg_{G}(x_{i})=G_{i}$ (the $i$th component of $G$). Then for all $X \in \curly{C}$, the cluster character $C_{X}^{T}\in \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{r}^{\pm 1}]$ is homogeneous of degree $\deg_G(X)$. \item\label{p:prop-of-gen-cc-additive-on-triang} For any distinguished triangle $X\to Y \to Z \to \Sigma X$ of $\curly{C}$, we have \[ \deg_{G}(Y)=\deg_{G}(X)+\deg_{G}(Z).\] \item\label{p:prop-of-gen-cc-mutation} The degree $\deg_{G}$ is compatible with mutation in the sense that for every cluster-tilting object $U$ of $\curly{C}$ with indecomposable summand $U_{k}$ we have \[ \deg_{G}(U_{k}^{*})=\deg_{G}(M)-\deg_{G}(U_{k})=\deg_{G}(M')-\deg_{G}(U_{k}), \] where $U_{k}^{*}$, $M$ and $M'$ are as in the above description of exchange triangles in $\curly{C}$. 
\item\label{p:prop-of-gen-cc-Groth-gp} The space of gradings for a generalised cluster category $(\curly{C},T)$ may be identified with $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\mathbb{A}}$, where $\mathrm{K}_0(\curly{C})$ is the Grothendieck group of $\curly{C}$ as a triangulated category.\footnote{This statement corrects \cite[Proposition~5.5]{GradedCAs} for the case $\mathbb{A}=\ensuremath{\mathbb{Z}}$, which replaces $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\ensuremath{\mathbb{Z}}}$ by $\mathrm{K}_0(\curly{C})$ itself. The proof given in \cite{GradedCAs} proves the statement given here for an arbitrary abelian group essentially without modification. An example of $\curly{C}$ for which $\mathrm{K}_0(\curly{C})$ and $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{C})}{\ensuremath{\mathbb{Z}}}$ are non-isomorphic is provided by \cite[Thm.~1.3]{BKL}.} \item For each $X\in \curly{C}$, $\deg_{G}(\Sigma X)=-\deg_{G}(X)$. That is, for each $d\in \mathbb{A}$, the shift automorphism $\Sigma$ on $\curly{C}$ induces a bijection between the objects of $\curly{C}$ of degree $d$ and those of degree $-d$. \qed \end{enumerate} \end{proposition} Part~\ref{p:prop-of-gen-cc-mutation} of the preceding proposition shows how to mutate the data of $G$ when mutating the cluster-tilting object $T$, to obtain a new grading vector compatible with the exchange matrix of the new cluster-tilting object, defining the same grading on the cluster algebra. However, we may obtain an even stronger conclusion from part~\ref{p:prop-of-gen-cc-Groth-gp}, since this provides a ``base-point free'' definition of a grading, depending only on the category $\curly{C}$ and not on the cluster-tilting object $T$. 
Read differently, this shows that if $(\curly{C},T,G)$ is a graded generalised cluster category, then for any cluster-tilting object $T'\in\curly{C}$, there is a unique $G'\in\mathbb{A}^r$ such that $(\curly{C},T',G')$ is a graded generalised cluster category and $\deg_G(X)=\deg_{G'}(X)$ for all $X\in\curly{C}$. We will explain this in more detail below in the case of Frobenius categories. If $\curly{H}$ is the category of coherent sheaves on a weighted projective line with all weights odd, then the Grothendieck group of the cluster category $\curly{C}$ of $\curly{H}$ is a non-zero quotient of $\ensuremath{\mathbb{Z}}_2\oplus\ensuremath{\mathbb{Z}}_2$ \cite[Theorem~1.3]{BKL}. (If one only imposes relations coming from triangles obtained by projecting triangles of the derived category of $\curly{C}$ to $\curly{H}$, then one obtains exactly $\ensuremath{\mathbb{Z}}_2\oplus\ensuremath{\mathbb{Z}}_2$ \cite[Proposition~3.7(ii)]{BKL}, but $\curly{C}$ may have more triangles than these.) By part~\ref{p:prop-of-gen-cc-Groth-gp} of the preceding proposition, this cluster category $\curly{C}$ admits no $\ensuremath{\mathbb{Z}}$-gradings, but does admit $\ensuremath{\mathbb{Z}}_2$-gradings. In fact \cite[Proposition 3.10(ii)]{BKL}, any such grading is a linear combination of the functions giving the degree and rank of a sheaf modulo $2$. Part (v) of Proposition~\ref{p:prop-of-gen-cc} shows that for cluster algebras admitting a categorification by a generalised cluster category $(\curly{C},T)$ such that the mutation class of $T$ is closed under the shift functor $\Sigma$, all gradings must be balanced, meaning that for any $d\in\mathbb{A}$, the cluster variables of degree $d$ are in bijection with those of degree $-d$. 
If $Q$ admits a nondegenerate Jacobi-finite potential $W$, then the corresponding cluster algebra is categorified by the Amiot cluster category $\curly{C}_{Q,W}$, which has a cluster-tilting object $T$ whose endomorphism algebra is the Jacobian algebra of $(Q,W)$ \cite{Amiot}. If $Q$ admits a maximal green sequence, then it provides a sequence of mutations from $T$ to $\Sigma T$ in $\curly{C}_{Q,W}$, so the mutation class of $T$ is closed under $\Sigma$ \cite[Proposition~5.17]{Keller-QD}. It follows that all gradings of the cluster algebra associated to $Q$ are balanced. All of these assumptions hold, for example, when $Q$ is a finite acyclic quiver (so $W=0$); for the statement about maximal green sequences, see Br\"{u}stle, Dupont and P\'{e}rotin \cite[Lemma~2.20]{BDP}. Conversely, we can use gradings to show that certain cluster algebras cannot admit a categorification as above. For example, the Markov cluster algebra, all of whose exchange matrices are given by \[B=\begin{pmatrix}0&2&-2\\-2&0&2\\2&-2&0\end{pmatrix}\] or its negative, admits the grading $(1,1,1)$. This is an integer grading under which all cluster variables have strictly positive degrees, so it is not balanced. While the Markov quiver associated to $B$ has a non-degenerate potential for which the resulting (completed) Jacobian algebra is finite dimensional, and thus has an associated Amiot cluster category $\curly{C}$, this category has exactly two mutation classes of cluster-tilting objects. (One can also realise this Jacobian algebra as that coming from a tagged triangulation of the once-punctured torus; such triangulations can include tagged arcs or not, but it is not possible to mutate a triangulation without tagged arcs into one with tagged arcs, giving another explanation for the existence of these two mutation classes.) 
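A direct check confirms these claims for the grading $(1,1,1)$: every column of $B$ sums to zero, so $B^{t}G=0$, and mutation of the initial seed at $k=1$ yields
\[ x_{1}^{\prime}=\frac{x_{2}^{2}+x_{3}^{2}}{x_{1}}, \qquad \deg(x_{1}^{\prime})=2-1=1. \]
Since mutation at any vertex of the Markov quiver replaces $B$ by $-B$, every seed reachable from the initial one has exchange matrix $\pm B$ and, inductively, degree vector $(1,1,1)$; each new cluster variable then has degree $2\cdot 1-1=1$. Thus every cluster variable is homogeneous of degree $1$, confirming that all degrees are strictly positive.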
The shift functor on $\curly{C}$ takes rigid indecomposable objects appearing as summands in one mutation class (which correspond to cluster variables) to rigid indecomposables from the other class (which do not), allowing the existence of a non-balanced grading on the cluster algebra. It has been shown by Ladkani that many of these properties hold more generally for quivers arising from triangulations of punctured surfaces \cite{Ladkani}. \section{Graded Frobenius cluster categories} In this section, we provide the main technical underpinnings for the Frobenius version of the above theory, in which we consider exact categories rather than triangulated ones. Background on exact categories, and homological algebra in them, can be found in B\"{u}hler's survey \cite{Buehler}. An exact category $\curly{E}$ is called a Frobenius category if it has enough projective objects and enough injective objects, and these two classes of objects coincide. A typical example of such a category is the category of finite dimensional modules over a finite dimensional self-injective algebra. More generally, if $B$ is a Noetherian algebra with finite left and right injective dimension as a module over itself (otherwise known as an Iwanaga--Gorenstein algebra), the category \[\operatorname{GP}(B)=\{X\in\fgmod{B}:\Ext{i}{B}{X}{B}=0,\ i>0\},\] is Frobenius \cite{Buchweitz}. (Here $\operatorname{GP}(B)$ is equipped with the exact structure in which the exact sequences are precisely those that are exact when considered in the abelian category $\fgmod{B}$.) The initials ``GP'' are chosen for ``Gorenstein projective''. Given a Frobenius category $\curly{E}$, its stable category $\underline{\curly{E}}$ is formed by taking the quotient of $\curly{E}$ by the ideal of morphisms factoring through a projective-injective object. 
By a famous result of Happel \cite[Theorem~2.6]{Happelbook}, $\underline{\curly{E}}$ is a triangulated category with shift functor $\Omega^{-1}$, where $\Omega^{-1}X$ is defined by the existence of an exact sequence \[0\to X\to Q \to\Omega^{-1}X\to0\] in which $Q$ is injective. The distinguished triangles of $\underline{\curly{E}}$ are isomorphic to those of the form \[X\to Y\to Z\to\Omega^{-1}X\] where \[0\to X\to Y\to Z\to 0\] is a short exact sequence in $\curly{E}$. \begin{definition} A Frobenius category $\curly{E}$ is stably $2$-Calabi--Yau if the stable category $\underline{\curly{E}}$ is Hom-finite and there is a functorial duality \[\mathrm{D}\Ext{1}{\curly{E}}{X}{Y}=\Ext{1}{\curly{E}}{Y}{X}\] for all $X,Y\in\curly{E}$. \end{definition} \begin{remark} The above definition is somewhat slick---it is equivalent to requiring that $\underline{\curly{E}}$ is $2$-Calabi--Yau as a triangulated category (that is, that $\underline{\curly{E}}$ is Hom-finite and $\Omega^{-2}$ is a Serre functor), as one might expect. \end{remark} Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category. If $U$ is cluster-tilting in $\curly{E}$, then it is also cluster-tilting in the $2$-Calabi--Yau triangulated category $\underline{\curly{E}}$, and a summand $U_k$ of $U$ is indecomposable in $\underline{\curly{E}}$ if and only if it is indecomposable and non-projective in $\curly{E}$. Thus for any cluster-tilting object $U$ of $\curly{E}$ and for any non-projective indecomposable summand $U_{k}$ of $U$, we can lift the exchange triangles involving $U_k$ from $\underline{\curly{E}}$ to $\curly{E}$, and obtain exchange sequences \[0\to U_{k}^{*} \to M \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to M' \to U_{k}^{*} \to 0 \] with $M,M' \in\operatorname{add}{(U)}$. If the quiver of $\op{\End{\curly{E}}{U}}$ has no loops or $2$-cycles incident with the vertex corresponding to $U_k$, then $U^{*}=(U/U_{k})\ensuremath{ \oplus} U_{k}^{*}$ is again cluster-tilting, just as in the triangulated case.
Fu and Keller \cite{FuKeller} give the following definition of a cluster character on a stably $2$-Calabi--Yau Frobenius category. \begin{definition}[{\cite[Definition~3.1]{FuKeller}}] Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category, and let $R$ be a commutative ring. A cluster character on $\curly{E}$ is a map $\phi$ on the set of objects of $\curly{E}$, taking values in $R$, such that \begin{itemize} \item[(i)]if $M\ensuremath \cong M'$ then $\phi_{M}=\phi_{M'}$, \item[(ii)]$\phi_{M\ensuremath{ \oplus} N}=\phi_{M}\phi_{N}$, and \item[(iii)]if $\dim\Ext{1}{\curly{E}}{M}{N}=1$ (equivalently, $\dim\Ext{1}{\curly{E}}{N}{M}=1$), and \begin{align*} &0\to M\to X\to N\to 0,\\ &0\to N\to Y\to M\to 0 \end{align*} are non-split sequences, then \[\phi_{M}\phi_{N}=\phi_{X}+\phi_{Y}.\] \end{itemize} \end{definition} Let $\curly{E}$ be a stably $2$-Calabi--Yau Frobenius category, and assume there exists a cluster-tilting object $T\in\curly{E}$. Assume without loss of generality that $T$ is basic, and let $T=\bigoplus_{i=1}^nT_i$ be a decomposition of $T$ into pairwise non-isomorphic indecomposable summands. We number the summands so that $T_i$ is projective if and only if $r<i\leq n$. Let $\Lambda=\op{\End{\curly{E}}{T}}$, and $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}=\Lambda/\Lambda e\Lambda$, where $e$ is the idempotent given by projection onto the maximal projective-injective summand $\bigoplus_{i=r+1}^nT_i$ of $T$. We assume that $\Lambda$ is Noetherian, as with this assumption the forms discussed below will be well-defined. The examples that concern us later will have Noetherian $\Lambda$, but we acknowledge that this assumption is somewhat unsatisfactory, given that it is often difficult to establish. 
Fu and Keller \cite{FuKeller} show that such a $T$ determines a cluster character on $\curly{E}$, as we now explain; while the results of \cite{FuKeller} are stated in the case that $\curly{E}$ is Hom-finite, the assumption that $\Lambda$ is Noetherian is sufficient providing one is careful to appropriately distinguish between the two Grothendieck groups $\mathrm{K}_0(\fgmod{\Lambda})$ and $\mathrm{K}_0(\fd{\Lambda})$ of finitely generated and finite dimensional $\Lambda$-modules respectively. We write \begin{align*} F&=\Hom{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\Lambda},\\ E&=\Ext{1}{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\Lambda}. \end{align*} Note that $E$ may also be expressed as $\Hom{\underline{\curly{E}}}{T}{\Omega^{-1}(-)}$, meaning it takes values in $\fgmod{\underline{\Lambda}}$. For $M\in\fgmod{\Lambda}$ and $N\in\fd{\Lambda}$, we write \begin{align*} \ip{M}{N}_1&=\dim\Hom{\Lambda}{M}{N}-\dim\Ext{1}{\Lambda}{M}{N},\\ \ip{M}{N}_3&=\dim\Hom{\Lambda}{M}{N}-\dim\Ext{1}{\Lambda}{M}{N}+\dim\Ext{2}{\Lambda}{M}{N}-\dim\Ext{3}{\Lambda}{M}{N}. \end{align*} The algebra $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}$ is finite dimensional since $\underline{\curly{E}}$ is Hom-finite, so $\fgmod{\underline{\Lambda}}\subseteq\fd\Lambda$. Fu and Keller show \cite[Proposition~3.2]{FuKeller} that if $M\in\fgmod{\underline{\Lambda}}$, then $\ip{M}{N}_3$ depends only on the dimension vector $(\dim\Hom{\Lambda}{P_i}{M})_{i=1}^n$, where the $P_i=FT_i$ are a complete set of indecomposable projective $\Lambda$-modules. Thus if $v\in\ensuremath{\mathbb{Z}}^r$, we define \[\ip{v}{N}_3:=\ip{M}{N}_3\] for any $M\in\fgmod{\underline{\Lambda}}$ with dimension vector $v$. Let $R=\mathbb{C}[x_1^{\pm1},\dotsc,x_n^{\pm1}]$ be the ring of Laurent polynomials in $x_1,\dotsc,x_n$. 
Define a map $X\mapsto C^T_{X}$ on objects of $\curly{E}$, taking values in $R$, via the formula \[C^T_{X}=\prod_{i=1}^nx_i^{\ip{FX}{S_i}_1}\sum_{v\in\ensuremath{\mathbb{Z}}^r}\chi(\text{Gr}_v(EX))\prod_{i=1}^nx_i^{-\ip{v}{S_i}_3}.\] Here, as before, $\text{Gr}_v(EX)$ denotes the projective variety of submodules of $EX$ with dimension vector $v$, and $\chi(\text{Gr}_v(EX))$ denotes its Euler characteristic. The modules $S_i=FT_i/\mathop{\mathrm{rad}} FT_i$ are the simple tops of the projective modules $P_i$. By \cite[Theorem~3.3]{FuKeller}, the map $X\mapsto C^T_{X}$ is a cluster character, with the property that $C^T_{T_i}=x_i$. The cluster-tilting object $T$ also determines an index for each object $X\in\curly{E}$. To see that this quantity is well-defined we will use the following lemma, the proof of which is included for the convenience of the reader. \begin{lemma} \label{approximations-are-admissible} Let $\curly{E}$ be an exact category, and let $M,T\in\curly{E}$. \begin{itemize} \item[(i)]If there exists an admissible epimorphism $T'\to M$ for $T'\in\operatorname{add}{T}$, then any right $\operatorname{add}{T}$-approximation of $M$ is an admissible epimorphism. \item[(ii)]If there exists an admissible monomorphism $M\to T'$ for $T'\in\operatorname{add}{T}$, then any left $\operatorname{add}{T}$-approximation of $M$ is an admissible monomorphism. \end{itemize} \end{lemma} \begin{proof} We prove only (i), as (ii) is dual. Pick an admissible epimorphism $\pi\colon T'\to M$ with $T'\in\operatorname{add}{T}$ and a right $\operatorname{add}{T}$-approximation $f\colon R\to M$.
Consider the pullback square \[\begin{tikzcd}[column sep=20pt] X\arrow{r}{g}\arrow{d}{\pi'}&T'\arrow{d}{\pi}\\ R\arrow{r}{f}&M \end{tikzcd}\] As $f$ is a right $\operatorname{add}{T}$-approximation, there is a map $h\colon T'\to R$ such that the square \[\begin{tikzcd}[column sep=20pt] T'\arrow{r}{1}\arrow{d}{h}&T'\arrow{d}{\pi}\\ R\arrow{r}{f}&M \end{tikzcd}\] commutes, and so by the universal property of pullbacks, there is $g'\colon T'\to X$ such that $gg'=1$. Thus $g$ is a split epimorphism, fitting into an exact sequence \[\begin{tikzcd}[column sep=20pt] 0\arrow{r}&K\arrow{r}{i}&X\arrow{r}{g}&T'\arrow{r}&0. \end{tikzcd}\] It then follows, again by the universal property of pullbacks, that $\pi'i$ is a kernel of $f$. Since $f\pi'=\pi g$ is the composition of two admissible epimorphisms, $f$ is itself an admissible epimorphism by the obscure axiom \cite[A.1]{KellerCC}, \cite[(Dual of) Proposition~2.16]{Buehler}. \end{proof} Given an object $X\in\curly{E}$, we may pick a minimal right $\operatorname{add}{T}$-approximation $R_X\to X$, where $R_X$ is determined up to isomorphism by $X$ and the requirement that the approximation be minimal. Let $P\to X$ be an admissible epimorphism with $P$ projective, which exists since $\curly{E}$ has enough projectives; moreover $P\in\operatorname{add}{T}$ since $T$ is cluster-tilting. Thus by Lemma~\ref{approximations-are-admissible}, the approximation $R_X\to X$ is an admissible epimorphism, and so there is an exact sequence \[0\to K_X\to R_X\to X\to0\] in $\curly{E}$. Since $T$ is cluster-tilting, $K_X\in\operatorname{add}{T}$, and we define $\ind{T}{X}=[R_X]-[K_X]\in\mathrm{K}_0(\operatorname{add}{T})$. It is crucial here that $\ind{T}{X}$ is defined in $\mathrm{K}_0(\operatorname{add}{T})$, rather than in $\mathrm{K}_0(\curly{E})$ where it would simply be equal to $[X]$.
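For example, for $T'\in\operatorname{add}{T}$ the identity map is a minimal right $\operatorname{add}{T}$-approximation, so $K_{T'}=0$ and
\[ \ind{T}{T'}=[T']\in\mathrm{K}_0(\operatorname{add}{T}). \]
On the other hand, for arbitrary $X$ the image of $\ind{T}{X}$ under the natural map $\mathrm{K}_0(\operatorname{add}{T})\to\mathrm{K}_0(\curly{E})$ is $[R_X]-[K_X]=[X]$, by the defining exact sequence, so it is only in $\mathrm{K}_0(\operatorname{add}{T})$ that the index retains information beyond the class of $X$.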
We also associate to $T$ the exchange matrix $B_T$ given by the first $r$ columns of the antisymmetrisation of the incidence matrix of the quiver of $\Lambda$. By definition, $B_T$ has entries \[(B_T)_{ij}=\dim\Ext{1}{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_j}{S_i}\] for $1\leq i\leq n$ and $1\leq j\leq r$. \begin{definition}[{cf.\ \cite[Definition~3.3]{Pressland}}]\label{d:Frob-cl-cat} A Frobenius category $\curly{E}$ is a Frobenius cluster category if it is Krull--Schmidt, stably $2$-Calabi--Yau and satisfies $\operatorname{gldim}(\op{\End{\curly{E}}{T}})\leq 3$ for all cluster-tilting objects $T\in\curly{E}$, of which there is at least one. \end{definition} Note that a Frobenius cluster category $\curly{E}$ need not be Hom-finite, but the stable category $\underline{\curly{E}}$ must be, since this is part of the definition of $2$-Calabi--Yau. Let $\curly{E}$ be a Frobenius cluster category. Let $T=\bigoplus_{i=1}^n T_{i}\in \curly{E}$ be a basic cluster-tilting object, where each $T_i$ is indecomposable and is projective-injective if and only if $i>r$, let $\Lambda=\op{\End{\curly{E}}{T}}$ be its endomorphism algebra, and let $\underline{\Lambda}=\op{\End{\underline{\curly{E}}}{T}}$ be its stable endomorphism algebra. We continue to write $F=\Hom{\curly{E}}{T}{-}\colon \curly{E}\to\fgmod{\Lambda}$ and $E=\Ext{1}{\curly{E}}{T}{-}\colon\curly{E}\to\fgmod{\underline{\Lambda}}$. Since $\underline{\curly{E}}$ is Hom-finite, $\underline{\Lambda}$ is a finite dimensional algebra. The Krull--Schmidt property for $\curly{E}$ is equivalent to $\curly{E}$ being idempotent complete and having the property that the endomorphism algebra $A$ of any of its objects is a semiperfect ring \cite[Corollary~4.4]{KrauseKS}, meaning that there is a complete set $\{e_i:i\in I\}$ of pairwise orthogonal idempotents of $A$ such that $e_iAe_i$ is local for each $i\in I$.
For many representation-theoretic purposes, semiperfect $\mathbb{K}$-algebras behave in much the same way as finite dimensional ones; for example, if $A$ is semiperfect then the quotient $A/\mathop{\mathrm{rad}}{A}$ is semi-simple, and its idempotents lift to $A$. For more background on semiperfect rings, see, for example, Anderson and Fuller \cite[Chapter~27]{AndersonFuller-Book}. For us, a key property of a semiperfect ring $A$ is that the $A$-modules $Ae_i/\mathop{\mathrm{rad}}{Ae_i}$ (respectively, their projective covers $Ae_i$) form a complete set of finite dimensional simple $A$-modules (respectively indecomposable projective $A$-modules) up to isomorphism \cite[Proposition~27.10]{AndersonFuller-Book}. As we will require this later, we include being Krull--Schmidt in our definition of a Frobenius cluster category, noting that other work in this area---notably the original definition in \cite{Pressland}---requires only idempotent completeness. Since $\Lambda$ is Noetherian and $\operatorname{gldim}{\Lambda}\leq 3$, the Euler form \[\ip{M}{N}_e=\sum_{i\geq0}(-1)^i\dim\Ext{i}{\Lambda}{M}{N}\] is well-defined as a map $\mathrm{K}_0(\fgmod\Lambda)\times \mathrm{K}_0(\fd\Lambda)\to\ensuremath{\mathbb{Z}}$, and coincides with the form $\ip{-}{-}_3$ introduced earlier. \begin{remark} \label{exchangematrixgivesip} By a result of Keller and Reiten \cite[\S4]{KellerReiten} (see also \cite[Theorem~3.4]{Pressland}), $\fgmod{\Lambda}$ has enough $3$-Calabi--Yau symmetry for us to deduce that $\dim\Ext{k}{\Lambda}{S_i}{S_j}=\dim\Ext{3-k}{\Lambda}{S_j}{S_i}$ when $1\leq j\leq r$. It follows that \[(-B_T)_{ij}=\ip{S_i}{S_j}_3=\ip{S_i}{S_j},\] so the matrix of $\ip{-}{-}$, when restricted to the span of the simple modules in the first entry and the span of the first $r$ simple modules in the second entry, is given by $-B_T$. 
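Explicitly, since $\operatorname{gldim}{\Lambda}\leq 3$, for $1\leq j\leq r$ the Euler form computation runs
\begin{align*}
\ip{S_i}{S_j}_3&=\dim\Hom{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_i}{S_j}+\dim\Ext{2}{\Lambda}{S_i}{S_j}-\dim\Ext{3}{\Lambda}{S_i}{S_j}\\
&=\dim\Hom{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_i}{S_j}+\dim\Ext{1}{\Lambda}{S_j}{S_i}-\dim\Hom{\Lambda}{S_j}{S_i}\\
&=\dim\Ext{1}{\Lambda}{S_j}{S_i}-\dim\Ext{1}{\Lambda}{S_i}{S_j}=-(B_T)_{ij},
\end{align*}
the Hom terms cancelling because $\dim\Hom{\Lambda}{S_i}{S_j}=\dim\Hom{\Lambda}{S_j}{S_i}$ for simple modules.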
\end{remark} One can show by taking projective resolutions that the classes $[P_i]$ of indecomposable projective $\Lambda$-modules span $\mathrm{K}_0(\fgmod{\Lambda})$. Moreover, since $\ip{P_i}{S_j}_e=\delta_{ij}$, any $x\in \mathrm{K}_0(\fgmod{\Lambda})$ has a unique expression \[x=\sum_{i=1}^n\ip{x}{S_{i}}_{e}[P_i]\] as a linear combination of the $[P_i]$, and so these classes in fact freely generate $\mathrm{K}_0(\fgmod{\Lambda})$. Recall from the definition of the index that if $X\in\curly{E}$, there is an exact sequence \[0\to K_X\to R_X\to X\to 0\] in which $K_X$ and $R_X$ lie in $\operatorname{add}{T}$. Since $E$ vanishes on $\operatorname{add}{T}$, the functor $F$ takes the above sequence to a projective resolution \[0\to FK_X\to FR_X\to FX\to 0\] of $FX$ in $\fgmod{\Lambda}$. Thus $FX$ has projective dimension at most $1$, and so $\ip{FX}{-}_1=\ip{FX}{-}_e$. We can therefore rewrite the cluster character of $X$ as \[C^T_{X}=\prod_{i=1}^nx_i^{\ip{FX}{S_i}_e}\sum_{v\in\ensuremath{\mathbb{Z}}^r}\chi(\text{Gr}_v(EX))\prod_{i=1}^nx_i^{-\ip{v}{S_i}_e}.\] We now proceed to define gradings for Frobenius cluster categories. We can follow the same approach as in the triangulated case, using the index. However, by \cite{FuKeller}, we have the following expansion of the index in terms of the classes of the indecomposable summands of $T$: \[ \ind{T}{X}=\sum_{i=1}^n \ip{FX}{S_i}_{e}[T_{i}]\in \mathrm{K}_0(\operatorname{add}{T}). \] Since $\Ext{1}{\curly{E}}{T}{T}=0$, there are no non-split exact sequences in $\operatorname{add}{T}$, and so $\mathrm{K}_0(\operatorname{add}{T})$ is freely generated by the $[T_i]$. For the same reason, the functor $F$ is exact when restricted to $\operatorname{add}{T}$, and so induces a map $F_{\ast}\colon \mathrm{K}_{0}(\operatorname{add}{T}) \to \mathrm{K}_{0}(\fgmod{\Lambda})$, which takes $[T_i]$ to $[P_i]$, and so is an isomorphism. Applying this isomorphism to the above formula, we obtain $F_{\ast}(\ind{T}{X})=\sum \ip{FX}{S_{i}}_{e}[P_{i}]=[FX]$.
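As a consistency check on this expansion, taking $X=T_j$ gives $FX=P_j$ and $\ip{P_j}{S_i}_{e}=\delta_{ij}$, so the expansion reduces to
\[\ind{T}{T_j}=\sum_{i=1}^n\delta_{ij}[T_i]=[T_j],\]
as it must, and correspondingly $F_{\ast}(\ind{T}{T_j})=[P_j]=[FT_j]$.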
From this we see that if we wish to work concretely with matrix and vector entries, the index can be computed explicitly. For the general theory, however, the equivalent K-theoretic expression is cleaner and so we shall phrase our definition of grading in those terms, the above observation showing us that this is equivalent to the approach in \cite{GradedCAs}. We will define our $\mathbb{A}$-gradings to be certain elements of $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$. To state a suitable compatibility condition, it will be necessary to extend the Euler form to a $\ensuremath{\mathbb{Z}}$-bilinear form $\mathrm{K}_0(\fgmod{\Lambda})\times(\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A})\to\mathbb{A}$. In the by now familiar way, we do this using the $\ensuremath{\mathbb{Z}}$-module structure on $\mathbb{A}$, and, abusing notation, define \[\ip{x}{\sum y_i\otimes a_i}_e=\sum \ip{x}{y_i}_ea_i.\] It is straightforward to check that this form is well-defined and $\ensuremath{\mathbb{Z}}$-linear in each variable. Thus we arrive at the following definition of a graded Frobenius cluster category, exactly analogous to Definitions~\ref{d:graded-gen-cl-cat} and \ref{d:degree-graded-gen-cl-cat} in the triangulated case. \begin{definition} Let $\curly{E}$ be a Frobenius cluster category and $T$ a cluster-tilting object of $\curly{E}$ such that $\Lambda=\op{\End{\curly{E}}{T}}$ is Noetherian. We say that $G\in\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$ is a grading for $\curly{E}$ if $\ip{M}{G}_{e}=0$ for all $M\in\fgmod{\underline{\Lambda}}$. We call $(\curly{E},T,G)$ a graded Frobenius cluster category. \end{definition} \begin{definition} Let $(\curly{E},T,G)$ be a graded Frobenius cluster category. Define $\deg_{G}\colon \curly{E} \to \mathbb{A}$ by $\deg_{G}(X)=\ip{FX}{G}_{e}$. \end{definition} We record some straightforward consequences of the above definitions.
\begin{remark}\label{grading-remarks} {\ } \begin{enumerate}[label=(\roman*)] \item When considering $\ensuremath{\mathbb{Z}}$-gradings, we may use the natural isomorphism $\mathrm{K}_0(\fd\Lambda)\otimes_\ensuremath{\mathbb{Z}}\integ\stackrel{\sim}{\to}\mathrm{K}_0(\fd\Lambda)$ to think of a grading as an element of the Grothendieck group itself. Similarly, we can think of $\ensuremath{\mathbb{Z}}^m$-gradings as elements of $\mathrm{K}_0(\fd\Lambda)^m$. \item Using the basis of simples for $\mathrm{K}_0(\fd{\Lambda})$, we can write $G=\sum_{i=1}^n[S_i]\otimes G_i$ for some unique $G_i\in\mathbb{A}$. Writing $\underline{G}\in\mathbb{A}^n$ for the column vector with entries $G_i$, the grading condition is equivalent to requiring $B_T^t\underline{G}=0$, by Remark~\ref{exchangematrixgivesip} and the assumption that $\underline{\Lambda}$ is finite dimensional. \item Let $G_i$ be as in (ii). Since $FT_{i}=P_{i}$ and $\ip{P_i}{S_j}_{e}=\delta_{ij}$, we may compute \[\deg_{G}(T_{i})=\ip{FT_{i}}{G}_{e}=G_i,\] as expected. \end{enumerate} \end{remark} The K-theoretic phrasing of the above definition leads us to the following observation. \begin{lemma}\label{l:proj-inj-grading} Let $\curly{E}$ be Hom-finite, let $T\in\curly{E}$ be a cluster-tilting object with endomorphism algebra $\Lambda$ and let $V\in\curly{E}$ be projective-injective. Write $F=\Hom{\curly{E}}{T}{-}$. Then $[FV]\in \mathrm{K}_{0}(\fd \Lambda)$ is a $\ensuremath{\mathbb{Z}}$-grading for $\curly{E}$, and $\deg_{[FV]}(X)=\dim\Hom{\curly{E}}{X}{V}$. \end{lemma} \begin{proof} Letting $M\in \fgmod \underline{\Lambda}$, we need to check that $\ip{M}{FV}_{e}=0$. By the internal Calabi--Yau property of $\fgmod \Lambda$ (see Remark~\ref{exchangematrixgivesip}), we may instead check that $\ip{FV}{M}_{e}=0$. Firstly, $\Ext{i}{\Lambda}{FV}{M}=0$ for $i>0$ since $FV$ is projective. 
Recall from above that there is an idempotent $e\in\Lambda$, given by projecting onto a maximal projective summand of $T$, such that $\underline{\Lambda}=\Lambda/\Lambda e\Lambda$. Using this, $FV\in\operatorname{add}\Lambda e$ by the definition of $e$, and $\Hom{\Lambda}{\Lambda e}{M}=eM=0$ since $M$ is a $\underline{\Lambda}$-module. Hence $\Hom{\Lambda}{FV}{M}=0$ also, so that $\ip{FV}{M}_{e}=\ip{M}{FV}_{e}=0$ as required. By definition, $\deg_{[FV]}(X)=\dim\Hom{\Lambda}{FX}{FV}$ for $X\in\curly{E}$. Since $T$ is cluster-tilting, we have the short exact sequence \[0\to K_X\to R_X\to X\to 0,\] with $K_X,R_X\in\operatorname{add}{T}$, used to define the index. Applying $\Hom{\curly{E}}{-}{V}$, we obtain the exact sequence \[0\to\Hom{\curly{E}}{X}{V}\to\Hom{\curly{E}}{R_X}{V}\to\Hom{\curly{E}}{K_X}{V}.\] Alternatively, we can apply $\Hom{\Lambda}{F{-}}{FV}$ to obtain the exact sequence \[0\to\Hom{\Lambda}{FX}{FV}\to\Hom{\Lambda}{FR_X}{FV}\to\Hom{\Lambda}{FK_X}{FV}.\] Since $F$ restricts to an equivalence on $\operatorname{add}{T}$, and $V\in\operatorname{add}{T}$ since it is projective-injective, the right-hand maps in these two exact sequences are isomorphic, yielding an isomorphism $\Hom{\curly{E}}{X}{V}\cong\Hom{\Lambda}{FX}{FV}$ of their kernels, from which the result follows. \end{proof} This gives us a family of $\ensuremath{\mathbb{Z}}$-gradings canonically associated to any Hom-finite Frobenius cluster category; note that in fact we only need $FV=\Hom{\curly{E}}{T}{V}\in \fd \Lambda$, so for some specific Hom-infinite $\curly{E}$ and specific $V$ and $T$ the result may still hold. We will give some more examples of gradings later but first give the main results regarding graded Frobenius cluster categories, analogous to those in Proposition~\ref{p:prop-of-gen-cc} for the triangulated case. We treat the straightforward parts first. \begin{proposition} Let $(\curly{E},T,G)$ be a graded Frobenius cluster category. 
\begin{enumerate}[label=(\roman*)] \item Let $\mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{n}^{\pm 1}]$ be graded by $\deg_{G}(x_{i})=G_i$, where $G_i$ is defined as in Remark~\ref{grading-remarks}(ii). Then for all $X \in \curly{E}$, the cluster character $C_{X}^{T}\in \mathbb{C}[x_{1}^{\pm 1},\dotsc ,x_{n}^{\pm 1}]$ is homogeneous of degree $\deg_G(X)$. \item\label{p:prop-of-gen-cc-additive-on-exact-seq} For any exact sequence $0\to X\to Y \to Z \to 0$ in $\curly{E}$, we have \[ \deg_{G}(Y)=\deg_{G}(X)+\deg_{G}(Z). \] \item The degree $\deg_{G}$ is compatible with mutation in the sense that for every cluster-tilting object $U$ of $\curly{E}$ with indecomposable summand $U_{k}$ we have \[ \deg_{G}(U_{k}^{*})=\deg_{G}(M)-\deg_{G}(U_{k})=\deg_{G}(M')-\deg_{G}(U_{k}), \] where $U_{k}^{*}$, $M$ and $M'$ are as in the above description of exchange sequences in $\curly{E}$. It follows that $\deg_G(M)=\deg_G(M')$, which is the categorical version of the claim that all exchange relations in a graded cluster algebra are homogeneous. \end{enumerate} \end{proposition} \begin{proof} {\ } \begin{enumerate}[label=(\roman*)] \item As usual, for $v\in\ensuremath{\mathbb{Z}}^n$ we write $\underline{x}^v=\prod_{i=1}^nx_i^{v_i}$. Then if $\deg_{G}(x_i)=G_i$, we have \[\deg_{G}(\underline{x}^v)=\sum_{i=1}^nv_iG_i=\ip{\sum_{i=1}^nv_i[P_i]}{G}_{e}.\] Each term of $C_X^T$ may be written in the form $\lambda \underline{x}^v$, where \[v_i=\ip{FX}{S_i}_{e}-\ip{M}{S_i}_{e}\] for some $M\in\fgmod{\underline{\Lambda}}$, and $\lambda$ is a constant. It follows that \[\sum_{i=1}^nv_i[P_i]=[FX]-[M],\] so the degree of $\underline{x}^v$ is \[\ip{FX}{G}_{e}-\ip{M}{G}_{e}=\ip{FX}{G}_{e}=\deg_{G}(X),\] since $\ip{M}{G}_{e}=0$ by the definition of a grading. In particular, this is independent of $M$, so $C^T_X$ is homogeneous of degree $\deg_{G}(X)$.
\item Applying $F$ to the exact sequence $0\to X\to Y\to Z\to 0$ and truncating gives an exact sequence \[0\to FX\to FY\to FZ\to M\to0\] for some $M\subseteq EX$. In particular, $M\in\fgmod{\underline{\Lambda}}$. In $\mathrm{K}_0(\fgmod{\Lambda})$, we have \[[FX]+[FZ]=[FY]+[M],\] so applying $\ip{-}{G}_e$ gives \[\deg_G(X)+\deg_G(Z)=\deg_G(Y)+\ip{M}{G}_e=\deg_G(Y)\] since $M\in\fgmod{\underline{\Lambda}}$. \item This follows directly from (ii) applied to the exchange sequences \[0\to U_{k}^{*} \to M \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to M' \to U_{k}^{*} \to 0.\qedhere\] \end{enumerate} \end{proof} Since the shift functor on $\underline{\curly{E}}$ does not typically lift to an automorphism of $\curly{E}$, and projective-injective objects of $\curly{E}$ may have non-zero degrees, we have no natural analogue of Proposition~\ref{p:prop-of-gen-cc}(v) in the Frobenius setting. It remains to give an analogue of part~\ref{p:prop-of-gen-cc-Groth-gp}, concerning the relationship between gradings and the Grothendieck group of a graded Frobenius cluster category. The first part of the following theorem is directly analogous to \cite[Theorem 10]{Palu-Groth-gp} for the triangulated case. \begin{theorem}\label{t:grading-Groth-gp} Let $\curly{E}$ be a Frobenius cluster category with a cluster-tilting object $T$ such that $\Lambda=\op{\End{\curly{E}}{T}}$ is Noetherian. \begin{enumerate}[label=(\roman*)] \item\label{t:grading-Groth-gp-relns} The Grothendieck group $\mathrm{K}_{0}(\curly{E})$, as an exact category, is isomorphic to the quotient of $\mathrm{K}_{0}(\operatorname{add}_{\curly{E}} T)$ by the relations $[X_{k}]-[Y_{k}]$, for $1\leq k\leq r$, where \[0\to U_{k}^{*} \to Y_k \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to X_k \to U_{k}^{*} \to 0\] are the exchange sequences associated to the summand $U_k$ of $T$. 
\item\label{t:grading-Groth-gp-grading-space} The space of $\mathbb{A}$-gradings of $\curly{E}$, defined above as a subspace of $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$, is isomorphic to $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\mathbb{A}}$, via the map $G\mapsto\deg_G$. \end{enumerate} \end{theorem} \begin{proof} Let $\curly{H}^b(\operatorname{add}_{\curly{E}}{T})$ denote the bounded homotopy category of complexes with terms in $\operatorname{add}_{\curly{E}}{T}$, and let $\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T})$ denote the full subcategory of $\curly{E}$-acyclic complexes. By work of Palu \cite[Lemma~2]{Palu-Groth-gp}, there is an exact sequence \[\begin{tikzcd}[column sep=20pt] 0\arrow{r}&\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\curly{H}^b(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\curly{D}^b(\curly{E})\arrow{r}&0, \end{tikzcd}\] of triangulated categories, to which we may apply the right exact functor $\mathrm{K}_0$ to obtain \[\begin{tikzcd}[column sep=20pt] \mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\arrow{r}&\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\arrow{r}&\mathrm{K}_0(\curly{D}^b(\curly{E}))\arrow{r}&0. \end{tikzcd}\] By \cite[Proof of Lemma~9]{Palu-Groth-gp}, there is a natural isomorphism $\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\underline{\Lambda}})$. Moreover, since $T$ is cluster-tilting, there are no non-split exact sequences in $\operatorname{add}{T}$, and so $\mathrm{K}_0(\operatorname{add}{T})$ is freely generated by the indecomposable summands of $T$. Thus taking the alternating sum of terms gives an isomorphism $\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ \cite{Rose-Note}. 
These isomorphisms induce a commutative diagram \[\begin{tikzcd}[column sep=20pt] \mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\arrow{r}\arrow{d}&\mathrm{K}_0(\curly{H}^b(\operatorname{add}_{\curly{E}}{T}))\arrow{r}\arrow{d}&\mathrm{K}_0(\curly{D}^b(\curly{E}))\arrow{r}\arrow{d}&0\\ \mathrm{K}_0(\fgmod{\underline{\Lambda}})\arrow{r}{\varphi}&\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\mathrm{K}_0(\curly{E})\arrow{r}&0 \end{tikzcd}\] with exact rows. Since the two leftmost vertical maps are isomorphisms, the induced map $\mathrm{K}_0(\curly{D}^b(\curly{E}))\to\mathrm{K}_0(\curly{E})$, which is again given by taking the alternating sum of terms, is also an isomorphism. We claim that the map $\varphi$ in the above diagram is given by composing the map from $\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ to $\mathrm{K}_0(\fgmod{\Lambda})$ induced by the inclusion of categories with the inverse of the isomorphism $F_*\colon\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\Lambda})$. Since $\underline{\Lambda}$ is finite dimensional, the Grothendieck group $\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ is spanned by the classes of the simple $\underline{\Lambda}$-modules $S_k$ for $1\leq k\leq r$, so it suffices to check that $\varphi$ acts on these classes as claimed. Let \[0\to U_{k}^{*} \to Y_k \to U_{k} \to 0 \qquad \text{and} \qquad 0\to U_{k} \to X_k \to U_{k}^{*} \to 0\] be the exchange sequences associated to the summand $U_k$ of $T$. Then there is an exact sequence \[0\to FU_k\to FX_k\to FY_k\to FU_k\to S_k\to0.\] From this we see that $[S_k]=[FX_k]-[FY_k]=F_*([X_k]-[Y_k])$ in $\mathrm{K}_0(\fgmod{\Lambda})$, and so we want to show that $\varphi[S_k]=[X_k]-[Y_k]$. 
On the other hand, $[S_k]$ is the image of the class of the $\curly{E}$-acyclic complex \[\cdots\to0\to U_k\to X_k\to Y_k\to U_k\to0\to\cdots\] under Palu's isomorphism $\mathrm{K}_0(\curly{H}^b_{\curly{E}\text{-ac}}(\operatorname{add}_{\curly{E}}{T}))\stackrel{\sim}{\to}\mathrm{K}_0(\fgmod{\underline{\Lambda}})$ (cf.\ \cite[Proof of Theorem~10]{Palu-Groth-gp}), and the image $\varphi[S_k]$ of this complex in $\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ is $[X_k]-[Y_k]$, as we wanted. This yields \ref{t:grading-Groth-gp-relns}. Now applying $\Hom{\ensuremath{\mathbb{Z}}}{-}{\mathbb{A}}$ to the exact sequence \[\begin{tikzcd}[column sep=20pt] \mathrm{K}_0(\fgmod{\underline{\Lambda}})\arrow{r}{\varphi}&\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})\arrow{r}&\mathrm{K}_0(\curly{E})\arrow{r}&0 \end{tikzcd}\] shows that $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\mathbb{A}}$ is isomorphic to the kernel of $\varphi^t=\Hom{\ensuremath{\mathbb{Z}}}{\varphi}{\mathbb{A}}$, which we will show coincides with the space of gradings. Indeed, we may identify $\mathrm{K}_0(\operatorname{add}_{\curly{E}}{T})$ with $\mathrm{K}_0(\fgmod{\Lambda})$ via $F_*$, and then use the Euler form to identify $\mathrm{K}_0(\fd{\Lambda})\otimes_\ensuremath{\mathbb{Z}}\mathbb{A}$ with $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\fgmod{\Lambda})}{\mathbb{A}}$, the map \[x\mapsto\ip{-}{x}_e\] being an isomorphism as usual. Under this identification, we have $\varphi^tG=\ip{-}{G}_e|_{\mathrm{K}_0(\fgmod{\underline{\Lambda}})}$, and so $G\in\ker{\varphi}^t$ if and only if it is a grading. The claim that the isomorphism is given explicitly by $G\mapsto\deg_G=\ip{F(-)}{G}_e$ can be seen by diagram chasing, and hence \ref{t:grading-Groth-gp-grading-space} is proved. \end{proof} The significance of this theorem is that, as in the triangulated case, it provides a basis-free method to identify gradings on Frobenius cluster categories and the cluster algebras they categorify. 
In the latter context, basis-free essentially means free of the choice of a particular cluster. Specifically, as explained in more detail below, to establish that some categorical datum gives a grading, one only needs to check that it respects exact sequences. This is potentially significantly easier than checking the vanishing of the product $B_{T}^t\underline{G}$ where $B_{T}$ is given in terms of dimensions of $\text{Ext}$-spaces over the endomorphism algebra $\Lambda$ of some cluster-tilting object $T$. On the other hand, given some knowledge of the cluster algebra being categorified---in particular, knowing a seed---one can use the above theorem to deduce information about the Grothendieck group of the Frobenius cluster category. As promised in Section~\ref{preliminaries}, we can use Theorem~\ref{t:grading-Groth-gp} to see how the grading in a graded Frobenius cluster category is independent of the cluster-tilting object. Precisely, let $(\curly{E},T,G)$ be a graded Frobenius cluster category, and let $\deg_G$ be the corresponding function on $\mathrm{K}_0(\curly{E})$. Let $T'=\bigoplus_{i=1}^nT_i'$ be another cluster-tilting object, with $\Lambda'=\op{\End{\curly{E}}{T'}}$, and denote the simple $\Lambda'$-modules by $S_i'$ for $1\leq i\leq n$. Using the inverse of the isomorphism of Theorem~\ref{t:grading-Groth-gp}, we see that if $G'$ in $\mathrm{K}_0(\fd\Lambda')$ is given by \[G'=\sum_{i=1}^n\deg_G(T_i')[S_i'],\] then $(\curly{E},T',G')$ is a graded Frobenius cluster category with $\deg_G=\deg_{G'}$, as one should expect. Note that this statement holds even if, as can happen, there is no sequence of mutations from $T$ to $T'$. As was remarked about the triangulated case in \cite{GradedCAs}, these observations highlight how the categorification of a cluster algebra is able to see global properties, whereas the algebraic combinatorial mutation process is local.
The following example shows the theorem in action, although again we need the additional assumption of Hom-finiteness of $\curly{E}$. \begin{lemma}\label{l:dim-vector} Assume that $\curly{E}$ is Hom-finite and let $P$ be a projective-injective object. Then $\dim \Hom{\curly{E}}{P}{-}$ and $\dim\Hom{\curly{E}}{-}{P}$ define $\ensuremath{\mathbb{Z}}$-gradings for $\curly{E}$. \end{lemma} \begin{proof} Since $P$ is projective and injective, both $\Hom{\curly{E}}{P}{-}$ and $\Hom{\curly{E}}{-}{P}$ are exact functors, and so in each case taking the dimension yields a function in $\Hom{\ensuremath{\mathbb{Z}}}{\mathrm{K}_0(\curly{E})}{\ensuremath{\mathbb{Z}}}$. Then the result follows immediately from Theorem~\ref{t:grading-Groth-gp}. \end{proof} In sufficiently nice cases, applying this result with a complete set of indecomposable projectives will yield that the dimension vector of a module is a (multi-)grading. However, we remark that some care may be needed regarding which algebra we measure ``dimension vector'' over. If $\curly{E}\subset\fgmod{\Pi}$ for some algebra $\Pi$ (as in most examples), then we may consider the $\Pi$-dimension vector of $X\in\curly{E}$, defined in the usual way. On the other hand, any Hom-finite Frobenius cluster category $\curly{E}$ is equivalent to $\GP(B)\subset\fgmod{B}$ for $B$ the opposite endomorphism algebra of a basic projective generator $P=\bigoplus_{i=1}^nP_i$ of $\curly{E}$, by \cite[Theorem~2.7]{KIWY}. Re-interpreting all of the objects of $\curly{E}$ as $B$-modules, the projective-injectives will now be precisely the projective $B$-modules, and $(\dim\Hom{\curly{E}}{P_i}{X})$ is the $B$-dimension vector of $X$ (tautologically, since the equivalence $\curly{E}\to\GP(B)$ takes $X$ to $\Hom{\curly{E}}{P}{X}$). Note that $B$ may not be the same as the algebra $\Pi$ from which $\curly{E}$ originated, and the $B$-dimension vector of a module may differ from the $\Pi$-dimension vector. 
Given a complete set of projectives, it is natural to ask whether the associated grading might be standard, as defined in \cite{GradedCAs}; we briefly recall this definition and some related facts. \begin{definition} Let $(\underline{x},B)$ be a seed. We call a multi-grading $G$ whose columns are a basis for the kernel of $B$ a standard multi-grading, and call $(\underline{x},B,G)$ a standard graded seed. \end{definition} It is straightforward to see, from rank considerations, that mutation preserves the property of being standard. Moreover, as shown in \cite{GradedCAs}, if $(\underline{x},B,G)$ is a standard graded seed and $H$ is any grading for $(\underline{x},B)$, then there exists an integer matrix $M=M(G,H)$ such that for any cluster variable $y$ in $\curly{A}(\underline{x},B,H)$ we have \[ \deg_{H}(y)=\deg_{G}(y)M, \] where on the right-hand side we regard $y$ as a cluster variable of $\curly{A}(\underline{x},B,G)$ in the obvious way. That is, to describe the degree of a cluster variable of a graded cluster algebra $\curly{A}(\underline{x},B,H)$, it suffices to know its degree with respect to some standard grading $G$ and the matrix $M=M(G,H)$ transforming $G$ to $H$. In particular, to understand the distribution of the degrees of cluster variables, it suffices to know this for standard gradings. Since the statement applies in the particular case when $G$ and $H$ are both standard, we see that from one choice of basis for the kernel of $B$, we obtain complete information. For if we chose a second basis, the change of basis matrix tells us how to transform the degrees. Hence up to a change of basis, there is essentially only one standard grading for each seed. Then, depending on the particular Frobenius cluster category at hand, if we have knowledge of the rank of the exchange matrix, we may be able to examine categorical data such as the number of projective-injective modules or dimension vectors and hence try to find a basis for the space of gradings. 
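As a toy numerical illustration (the exchange matrix here is hypothetical, chosen for simplicity rather than arising from any particular category), take $n=3$, $r=1$ and $B^t=\begin{pmatrix}0&1&-1\end{pmatrix}$. The grading condition $B^t\underline{G}=0$ of Remark~\ref{grading-remarks}(ii) then has solution space of rank $n-\operatorname{rank}{B}=2$, with basis for example
\[G^{(1)}=\begin{pmatrix}1\\0\\0\end{pmatrix},\qquad G^{(2)}=\begin{pmatrix}0\\1\\1\end{pmatrix},\]
so that $G=(G^{(1)},G^{(2)})$ is a standard multi-grading. Any further grading, such as $H=\begin{pmatrix}2&3&3\end{pmatrix}^t$, is recovered as $H=2G^{(1)}+3G^{(2)}$; that is, $M(G,H)=\begin{pmatrix}2\\3\end{pmatrix}$ and $\deg_{H}(y)=\deg_{G}(y)M(G,H)$ for every cluster variable $y$.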
For example, for a basic cluster-tilting object $T$ in $\curly{E}$ a Hom-finite Frobenius cluster category, we have $n-r$ projective-injective summands in $T$: if the exchange matrix $B_{T}$ has full rank, a basis for the space of gradings has size $n-r$ so that, via Lemma~\ref{l:proj-inj-grading}, a canonical standard grading is given by the set $\{ [FT_{i}] \mid i>r \}$, which is linearly independent since it is a subset of the basis of projectives for $\mathrm{K}_0(\fd\Lambda)=\mathrm{K}_0(\fgmod{\Lambda})$. From knowledge of this standard grading, we then obtain any other grading by means of some linear transformation. In the next section, we do this for two important examples. \section{Examples of graded Frobenius cluster categories} \subsection{Frobenius cluster categories associated to partial flag varieties} Let $\mathfrak{g}$ be the Kac--Moody algebra associated to a symmetric generalised Cartan matrix. Let $\Delta$ be the associated Dynkin graph and pick an orientation $\vec{\Delta}$. Let $Q$ be the quiver obtained from $\vec{\Delta}$ by adding an arrow $\alpha^*\colon j\to i$ for each arrow $\alpha\colon i\to j$ of $\vec{\Delta}$. Then the preprojective algebra of $\Delta$ is \[\Pi=\ensuremath \mathbb{C} Q/\sum_{\alpha\in\vec{\Delta}}[\alpha,\alpha^*],\] which is, up to isomorphism, independent of the choice of orientation $\vec{\Delta}$. For each $w\in W$, the Weyl group of $\mathfrak{g}$, Buan, Iyama, Reiten and Scott \cite{BIRS1} have introduced a category $\curly{C}_{w}$; the following version of its construction follows \cite{GLS-KacMoody}, and is dual to the original. Assume $w$ has finite length and set $l(w)=n$; we do this for consistency with the notation used above but note that other authors (notably \cite{GLS-KacMoody}, \cite{GLS-QuantumPFV}) use $r$ and their $n$ is our $n-r$. Set $\hat{I}_{i}$ to be the indecomposable injective $\Pi$-module with socle $S_{i}$, the 1-dimensional simple module supported at the vertex $i$ of $Q$. 
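To keep a concrete instance in mind, consider the smallest non-trivial case, type $A_{2}$: take $\vec{\Delta}$ to be the quiver $1\to 2$ with arrow $\alpha$, so that $Q$ has arrows $\alpha\colon 1\to 2$ and $\alpha^{*}\colon 2\to 1$. The relation $[\alpha,\alpha^{*}]=\alpha\alpha^{*}-\alpha^{*}\alpha$ forces $\alpha\alpha^{*}=0$ and $\alpha^{*}\alpha=0$ separately, these two paths being supported at different vertices, so $\Pi$ is $4$-dimensional with basis $\{e_{1},e_{2},\alpha,\alpha^{*}\}$, and both indecomposable injectives $\hat{I}_{1}$ and $\hat{I}_{2}$ have dimension vector $(1,1)$.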
Given a module $W$ in $\fgmod \Pi$, we define \begin{itemize} \item $\mathrm{soc}_{(l)}(W):={\displaystyle \sum_{\substack{U\leq W \\ U\ensuremath \cong S_{l}}} U}$ and \item $\mathrm{soc}_{(l_{1},l_{2},\ldots,l_{s})}(W):=W_{s}$ where the chain of submodules $0=W_0\subseteq W_{1} \subseteq \cdots \subseteq W_{s} \subseteq W$ is such that $W_{p}/W_{p-1} \ensuremath \cong \mathrm{soc}_{(l_{p})}(W/W_{p-1})$. \end{itemize} Let $\mathbf{i}=(i_n,\dotsc,i_1)$ be a reduced expression for $w$. Then for $1\leq k \leq n$, we define $V_{\mathbf{i},k} := \mathrm{soc}_{(i_{k},i_{k-1},\ldots,i_{1})}(\hat{I}_{i_{k}})$. Set $V_{\mathbf{i}}=\bigoplus_{k=1}^{n} V_{\mathbf{i},k}$ and let $I$ be the subset of $\{ 1,\dotsc ,n\}$ such that the modules $V_{\mathbf{i},i}$ for $i\in I$ are $\curly{C}_{w}$-projective-injective. Set $I_{\mathbf{i}}=\bigoplus_{i\in I} V_{\mathbf{i},i}$ and $n-r=\card{I}$. Note that this is also the number of distinct simple reflections appearing in $\mathbf{i}$. Define \[ \curly{C}_{\mathbf{i}}=\operatorname{Fac}(V_{\mathbf{i}})\subseteq \text{nil}\ \Pi. \] That is, $\curly{C}_{\mathbf{i}}$ is the full subcategory of $\fgmod \Pi$ consisting of quotient modules of direct sums of finitely many copies of $V_{\mathbf{i}}$. Then $\curly{C}_{\mathbf{i}}$ and $I_{\mathbf{i}}$ are independent of the choice of reduced expression $\mathbf{i}$ (although $V_{\mathbf{i}}$ is not), so that we may write $\curly{C}_{w}:=\curly{C}_{\mathbf{i}}$ and $I_w:=I_{\mathbf{i}}$. It is shown in \cite{BIRS1} that $\curly{C}_{w}$ is a stably 2-Calabi--Yau Frobenius category. Moreover $\curly{C}_{w}$ has cluster-tilting objects: $V_{\mathbf{i}}$ is one such. Indeed, cluster-tilting objects are maximal rigid, and vice versa.
The indecomposable $\curly{C}_{w}$-projective-injective modules are precisely the indecomposable summands of $I_{w}$, and $\curly{C}_{w}=\text{Fac}(I_{w})$. Furthermore, it is shown in \cite[Proposition~2.19]{GLS-KacMoody} that the global dimension condition of Definition~\ref{d:Frob-cl-cat} also holds, leaving only the Krull--Schmidt condition. By \cite[Corollary~4.4]{KrauseKS}, we should check that the endomorphism algebras of objects of $\curly{C}_w$ are semiperfect, and that this category is idempotent complete. The first of these properties holds since $\curly{C}_w$ is Hom-finite. The second follows from the fact that $\curly{C}_w$ is a full subcategory of the idempotent complete category $\fgmod(\Pi/\operatorname{Ann}{I_w})$, and that if $M$ is an object of $\operatorname{Fac}(I_w)$, then so are all direct summands of $M$. We conclude that $\curly{C}_{w}$ is a Frobenius cluster category, in the sense of Definition~\ref{d:Frob-cl-cat}. Let $\Lambda=\op{\End{\curly{C}_{w}}{V_{\mathbf{i}}}}$ and $F=\Hom{\curly{C}_{w}}{V_{\mathbf{i}}}{-}$. Then, as above, the modules $P_{k}:=FV_{\mathbf{i},k}$ for $1\leq k\leq n$ are the indecomposable projective $\Lambda$-modules and the tops of these, $S_{k}$, are the simple $\Lambda$-modules. Recall that the exchange matrix obtained from the quiver of $\Lambda$, which we shall call $B_{\mathbf{i}}$, has entries \[(B_{\mathbf{i}})_{ij}=\dim\Ext{1}{\Lambda}{S_i}{S_j}-\dim\Ext{1}{\Lambda}{S_j}{S_i}\] for $1\leq i\leq n$ and $j\notin I$, so that the $r$ columns of $B_{\mathbf{i}}$ correspond to the mutable summands $V_{\mathbf{i},j}$, $j\notin I$, of $V_{\mathbf{i}}$. Let $L_{\mathbf{i}}$ be the $n\times n$ matrix with entries \[ (L_{\mathbf{i}})_{jk}=\dim\Hom{\Pi}{V_{\mathbf{i},j}}{V_{\mathbf{i},k}}-\dim\Hom{\Pi}{V_{\mathbf{i},k}}{V_{\mathbf{i},j}}.
\] \noindent By \cite[Proposition~10.1]{GLS-QuantumPFV} we have \[ \sum_{l=1}^{n} (B_{\mathbf{i}})_{lk}(L_{\mathbf{i}})_{lj}=2\delta_{jk}, \] and hence the matrix $B_{\mathbf{i}}$ has maximal rank, namely $r$. It follows that there exists some standard integer multi-grading $G_{\mathbf{i}}=(G_{1},\dotsc ,G_{n-r})\in \mathrm{K}_{0}(\fgmod{\Lambda})^{n-r}$ for $\curly{C}_{w}$ and $(\curly{C}_{w},V_{\mathbf{i}},G_{\mathbf{i}})$ is a graded Frobenius cluster category. As discussed above, such a standard grading can be used to construct all other gradings, so our goal is to identify one. We have additional structure on $\curly{C}_{w}$ that we may make use of. Namely, $\curly{C}_{w}$ is Hom-finite and we may apply Lemma~\ref{l:proj-inj-grading} with respect to the $\curly{C}_{w}$-projective-injective modules $V_{\mathbf{i},i}$ that are the indecomposable summands of $I_{\mathbf{i}}$. The resulting grading $[FV_{\mathbf{i},i}]$, $i\in I$, is standard, since its $n-r$ components are a subset of the basis of projectives for $\mathrm{K}_0(\fgmod{\Lambda})$, and so in particular are linearly independent. By Theorem~\ref{t:grading-Groth-gp}, the existence of this standard grading implies that the Grothendieck group $\mathrm{K}_0(\curly{C}_w)$ has rank $n-r$. We wish to understand this standard grading more explicitly. Note that the objects of $\curly{C}_{w}$ are $\Pi$-modules and we may consider dimension vectors with respect to the $\Pi$-projective modules. Then we notice that in fact the grading by $([FV_{\mathbf{i},i}])_{i\in I}$ is equal to the $\Pi$-dimension vector grading in the case at hand. 
This is because, by Lemma~\ref{l:proj-inj-grading}, the degree of $X$ with respect to $[FV_{\mathbf{i},i}]$ is $\dim\Hom{\Pi}{X}{V_{\mathbf{i},i}}$, and each $V_{\mathbf{i},i}$ is both a submodule and a minimal right $\curly{C}_w$-approximation of an indecomposable injective $\hat{I}_{i}$ for $\Pi$, so $\Hom{\Pi}{X}{V_{\mathbf{i},i}}=\Hom{\Pi}{X}{\hat{I}_{i}}$, the dimensions of the latter giving the $\Pi$-dimension vector of $X$. In \cite[Corollary~9.2]{GLS-KacMoody}, Gei\ss, Leclerc and Schr\"{o}er have shown that \[ \ensuremath \mbox{\underline{dim}}_{\Pi} V_{\mathbf{i},k}=\omega_{i_{k}}-s_{i_{1}}s_{i_{2}}\dotsm s_{i_{k}}(\omega_{i_{k}})\] for all $1\leq k\leq n$, where the $\omega_{j}$ are the fundamental weights for $\mathfrak{g}$ and the $s_{j}$ the Coxeter generators for $W$. This enables us to construct the above grading purely combinatorially. \begin{example} We consider the following seed associated to $\mathfrak{g}$ of type $A_{5}$ with\[ \mathbf{i}=(3,2,1,4,3,2,5,4,3), \] as given in \cite[Example~12.11]{GLS-QuantumPFV}. 
The modules $V_{k}:=V_{\mathbf{i},k}$, in terms of the usual representation illustrating their composition factors as $\Pi$-modules, are \begin{align*} V_1&=\begin{smallmatrix}3\end{smallmatrix}& V_2&=\begin{smallmatrix}3\\&4\end{smallmatrix}& V_3&=\begin{smallmatrix}3\\&4\\&&5\end{smallmatrix}\\\\ V_4&=\begin{smallmatrix}&3\\2\end{smallmatrix}& V_5&=\begin{smallmatrix}&3\\2&&4\\&3\end{smallmatrix}& V_6&=\begin{smallmatrix}&3\\2&&4\\&3&&5\\&&4\end{smallmatrix}\\\\ V_7&=\begin{smallmatrix}&&3\\&2\\1\end{smallmatrix}& V_8&=\begin{smallmatrix}&&3\\&2&&4\\1&&3\\&2\end{smallmatrix}& V_9&=\begin{smallmatrix}&&3\\&2&&4\\1&&3&&5\\&2&&4\\&&3\end{smallmatrix} \end{align*} The exchange quiver for this seed is \begin{center} \scalebox{1}{\input{initialseedforMat33.tikz}} \end{center} It is straightforward to see that $\Pi$-dimension vectors yield a grading: for example, looking at the vertex corresponding to $V_{1}$, the sums of the dimension vectors over incoming and outgoing arrows are $[0,1,2,1,0]$ and $[0,1,1,0,0]+[0,0,1,1,0]$ respectively, and these agree. \end{example} \subsection{Grassmannian cluster categories} Let $\Pi$ be the preprojective algebra of type $\mathsf{A}_{n-1}$, with vertices numbered sequentially, and let $Q_k$ be the injective module at the $k$th vertex. In \cite{GLS-PFV}, Gei\ss, Leclerc and Schr\"oer show that the category $\operatorname{Sub}\, Q_{k}$ of submodules of direct sums of copies of $Q_k$ ``almost'' categorifies the cluster algebra structure on the homogeneous coordinate ring of the Grassmannian of $k$-planes in $\ensuremath \mathbb{C}^n$, but is missing a single indecomposable projective object corresponding to one of the frozen variables of this cluster algebra.
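As a cross-check on the combinatorics, the dimension-vector formula of Gei\ss, Leclerc and Schr\"oer can be verified directly for the type $A_5$ example above. The following sketch is illustrative only; it assumes the convention that $\mathbf{i}=(i_n,\dotsc,i_1)$ is read from right to left, so that $i_1=3$, $i_2=4$, and so on, and it compares the computed vectors with the composition-factor counts visible in the displayed modules.

```python
# Sketch: verify dim_Pi V_{i,k} = w_{i_k} - s_{i_1}...s_{i_k}(w_{i_k}) for the
# type A_5 example with i = (3,2,1,4,3,2,5,4,3).  Conventions (right-to-left
# indexing, vertex numbering 1..5) are assumptions made for illustration.

n = 5
# Cartan matrix of type A_5: C[i][j] = <alpha_j, alpha_i^vee>
C = [[2 if i == j else -1 if abs(i - j) == 1 else 0 for j in range(n)]
     for i in range(n)]

word = [3, 2, 1, 4, 3, 2, 5, 4, 3]      # as printed: (i_9, ..., i_1)
i_seq = list(reversed(word))            # i_1, i_2, ..., i_9

def dim_vector(k):
    """Coefficients of w_{i_k} - s_{i_1}...s_{i_k}(w_{i_k}) in the simple roots."""
    j = i_seq[k - 1] - 1                # 0-based vertex of the fundamental weight
    r = [0] * n                         # current weight is w_j - sum_m r[m]*alpha_m
    for s in reversed(i_seq[:k]):       # apply s_{i_k} first, s_{i_1} last
        i = s - 1
        # pairing <current weight, alpha_i^vee>
        p = (1 if i == j else 0) - sum(C[i][m] * r[m] for m in range(n))
        r[i] += p                       # subtracting p*alpha_i raises r[i] by p
    return r

# Composition-factor counts read off from the module diagrams in the example:
assert dim_vector(1) == [0, 0, 1, 0, 0]          # V_1 = S_3
assert dim_vector(2) == [0, 0, 1, 1, 0]          # V_2 has factors 3, 4
assert dim_vector(9) == [1, 2, 3, 2, 1]          # V_9, the largest module
```

The same loop recovers the $\Pi$-dimension vectors of all nine modules, so the grading in the example can be generated purely from the Weyl group data.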
The category $\Sub{Q_k}$ is in fact dual to one of the categories $\curly{C}_w$ introduced in the previous section, for $\Delta=\mathsf{A}_{n-1}$ and $w$ a particular Weyl group element depending on $k$, so it is a Frobenius cluster category in the same way. Jensen, King and Su \cite{JKS} complete the categorification via the category $\CM(A)$ of maximal Cohen--Macaulay modules for a Gorenstein order $A$ (depending on $k$ and $n$) over $Z=\powser{\mathbb{C}}{t}$. One description of $A$ is as follows. Let $\Delta$ be the graph (of affine type $\tilde{\mathsf{A}}_{n-1}$) with vertex set given by the cyclic group $\ensuremath{\mathbb{Z}}_n$, and edges between vertices $i$ and $i+1$ for all $i$. Let $\Pi$ be the completion of the preprojective algebra on $\Delta$ with respect to the arrow ideal. Write $x$ for the sum of ``clockwise'' arrows $i\to i+1$, and $y$ for the sum of ``anti-clockwise'' arrows $i\to i-1$. Then we have \[A=\Pi/\langle x^k-y^{n-k}\rangle.\] In this description, $Z$ may be identified with the centre $\powser{\mathbb{C}}{xy}$ of $A$. Jensen, King and Su also show \cite[Theorem~4.5]{JKS} that there is an exact functor \linebreak $\pi\colon \CM(A) \to \operatorname{Sub}\, Q_{k}$, corresponding to the quotient by the ideal generated by $P_{n}$, and that for any $N\in \operatorname{Sub}\, Q_{k}$, there is a unique (up to isomorphism) minimal $M$ in $\CM(A)$ with $\pi M\ensuremath \cong N$ and $M$ having no summand isomorphic to $P_{n}$. Such an $M$ satisfies $\mathrm{rk}(M)=\dim \mathrm{soc}\ \pi M$, where $\mathrm{rk}(M)$ is the rank of each vertex component of $M$, thought of as a $Z$-module. We now show that $\CM(A)$ is again a Frobenius cluster category. Properties of the algebra $A$ mean that an $A$-module is maximal Cohen--Macaulay if and only if it is free and finitely generated as a $Z$-module. 
Since $Z$ is a principal ideal domain, and hence Noetherian, any submodule of a free and finitely generated $Z$-module is also free and finitely generated, and so $\CM(A)$ is closed under subobjects. In particular, $\CM(A)$ is closed under kernels of epimorphisms. Moreover \cite[Corollary~3.7]{JKS}, $A\in\CM(A)$, and so $\Omega(\fgmod{A})\subseteq\CM(A)$. As a $Z$-module, any object $M\in\CM(A)$ is isomorphic to $Z^k$ for some $k$, so we have that $\op{\End{Z}{M}}\cong Z^{k^2}$ is a finitely generated $Z$-module. Since $Z$ is Noetherian, the algebra \linebreak $\op{\End{A}{M}}\subseteq\op{\End{Z}{M}}$ is also finitely generated as a $Z$-module. Thus $\op{\End{A}{M}}$ is Noetherian, as it is finitely generated as a module over the commutative Noetherian ring $Z$. We may now apply \cite[Proposition~3.6]{Pressland} to see that any cluster-tilting object $T\in\CM(A)$ satisfies $\operatorname{gldim}{\op{\End{A}{T}}}\leq 3$. Moreover \cite[Corollary~4.6]{JKS}, $\underline{\CM}(A)=\underline{\operatorname{Sub}}\,{Q_k}$, so $\underline{\CM}(A)$ is $2$-Calabi--Yau, and $\CM(A)$ is a Frobenius cluster category. Unlike $\operatorname{Sub}\, Q_{k}$ and the $\curly{C}_w$, the category $\CM(A)$ is not Hom-finite. However, as already observed, the endomorphism algebras of its objects are Noetherian, so we may apply our general theory to this example. In their study of the category $\CM(A)$, Jensen, King and Su show the following. Let \[ \ensuremath{\mathbb{Z}}^{n}(k)=\{ x\in \ensuremath{\mathbb{Z}}^{n} \mid k\ \text{divides} \textstyle\sum_{i} x_{i} \} \] with basis $\alpha_{1},\dotsc ,\alpha_{n-1},\beta_{[n]}$, where the $\alpha_{j}=e_{j+1}-e_j$ are the negative simple roots for $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$ and $\beta_{[n]}=e_{1}+\dotsm +e_{k}$ is the highest weight for the representation $\bigwedge^{k}(\ensuremath \mathbb{C}^{n})$. 
Then by \cite[\S 8]{JKS} we have that $\mathrm{K}_{0}(\CM (A))\ensuremath \cong \mathrm{K}_{0}(A)\ensuremath \cong \ensuremath{\mathbb{Z}}^{n}(k)$; let $G\colon \mathrm{K}_{0}(\CM(A))\to \ensuremath{\mathbb{Z}}^{n}(k)$ denote the composition of these isomorphisms. The $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight of the cluster character of $M\in \CM(A)$ (called $\tilde{\psi}_{M}$ in \cite{JKS}) is given by the coefficients in an expression for $G[M]\in\ensuremath{\mathbb{Z}}^{n}(k)$ in terms of the basis of $\ensuremath{\mathbb{Z}}^n(k)$ given above \cite[Proposition~9.3]{JKS}, and thus this weight defines a group homomorphism $\mathrm{K}_0(\CM(A))\to\ensuremath{\mathbb{Z}}^n$. In the language of this paper, $\CM(A)$ is a graded Frobenius cluster category with respect to $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight, this giving a standard integer multi-grading. Let $\delta\colon \ensuremath{\mathbb{Z}}^{n}(k)\to \ensuremath{\mathbb{Z}}$ be the (linear) function $\delta(x)=\frac{1}{k}\sum_{i} x_{i}$. By the linearity of gradings, composing $G$ with $\delta$ yields a $\ensuremath{\mathbb{Z}}$-grading on $\CM (A)$ also. Explicitly, $\delta(x)$ is the $\beta_{[n]}$-coefficient of $x$ in our chosen basis, and is also equal to the dimension of the socle of $\pi M$, which is equal to $\mathrm{rk}(M)$, which is in turn the degree of the cluster character of $M\in \CM(A)$ as a homogeneous polynomial in the Pl\"{u}cker coordinates of the Grassmannian. It is well known that the cluster structure on the Grassmannian is graded with respect to either the $\mathrm{GL}_{n}(\ensuremath \mathbb{C})$-weight (also called the content of a minor, and, by extension, of a product of minors) or the natural grading associated to the Pl\"{u}cker embedding. The results of \cite{JKS} show that these gradings are indeed naturally reflected in the categorification of that cluster structure.
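The identity $\delta(x)=\frac{1}{k}\sum_i x_i$ for the $\beta_{[n]}$-coefficient follows because each $\alpha_j=e_{j+1}-e_j$ has entry sum $0$ while $\beta_{[n]}$ has entry sum $k$. A small sketch makes this explicit; the values $n=5$, $k=2$ and the random coefficient ranges are illustrative assumptions only.

```python
# Sketch: the beta-coefficient of x in the basis {alpha_1,...,alpha_{n-1}, beta_[n]}
# of Z^n(k) equals delta(x) = (1/k) * sum_i x_i.  Illustration with n = 5, k = 2.
from fractions import Fraction
import random

n, k = 5, 2
alphas = []
for j in range(n - 1):                   # alpha_j = e_{j+1} - e_j
    v = [0] * n
    v[j + 1], v[j] = 1, -1
    alphas.append(v)
beta = [1] * k + [0] * (n - k)           # beta_[n] = e_1 + ... + e_k

random.seed(0)
for _ in range(100):
    coeffs = [random.randint(-5, 5) for _ in range(n - 1)]
    d = random.randint(-5, 5)
    x = [d * beta[i] + sum(c * a[i] for c, a in zip(coeffs, alphas))
         for i in range(n)]
    assert sum(x) % k == 0               # x lies in Z^n(k)
    assert Fraction(sum(x), k) == d      # delta(x) recovers the beta-coefficient
```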
This opens the possibility of attacking some questions on, for example, the number of cluster variables of a given degree by examining rigid indecomposable modules in $\CM(A)$ of the corresponding rank, say. We hope to return to this application in the future. Of course, one can also argue directly that $\mathrm{rk}(M)$ yields a grading on $\CM (A)$, considering it as a function on $\mathrm{K}_{0}(\CM(A))$. Note that the socle dimension of $\pi M$ is not a grading on $\operatorname{Sub}\, Q_{k}$, but rather it is the datum within $\operatorname{Sub}\, Q_{k}$ that specifies how one should lift $\pi M$ to $M$ (see \cite[\S 2]{JKS} for an illustration of this). As described in the previous section, $\operatorname{Sub}\, Q_{k}$ (in its guise as one of the $\curly{C}_{w}$) does admit gradings, such as the grading describing the degree of the cluster character of $\pi M\in \operatorname{Sub}\, Q_{k}$ (called $\psi_{\pi M}$ in \cite{JKS}) with respect to the standard matrix generators. \small \bibliographystyle{halpha}
\makeatletter \renewcommand{\subsection}{\@startsection {subsection}% {2}% {0mm}% {-\baselineskip}% {0.15\baselineskip}% {\normalfont\normalsize}}% \makeatother \setlength{\topmargin}{-2.2cm} \setlength{\oddsidemargin}{-15mm} \setlength{\leftmargin}{-1in} \setlength{\textheight}{260mm} \setlength{\textwidth}{187mm} \setlength{\columnsep}{7mm} \setlength{\textfloatsep}{13pt} \setlength{\abovedisplayskip}{8pt} \setlength{\belowdisplayskip}{8pt} \renewcommand{\textfraction}{.1} \renewcommand{\bottomfraction}{.6} \setlength{\jot}{.2in} \linespread{0.9} \begin{document} \title{On a three-dimensional lattice approach for modelling corrosion induced cracking and its influence on bond between reinforcement and concrete} \author{\large {Peter Grassl and Trevor Davies}\\ {\em Department of Civil Engineering, University of Glasgow, Glasgow, United Kingdom}\\ } \date{} \pagestyle{empty} \thispagestyle{empty} \abstract{ABSTRACT: The present work involves the discrete modelling of corrosion induced cracking and its influence on the bond between reinforcement and concrete. A lattice approach is used to describe the mechanical interaction of a corroding reinforcement bar, the surrounding concrete and the interface between steel reinforcement and concrete. The cross-section of the ribbed reinforcement bar is taken to be circular, assuming that the interaction of the ribs of the deformed reinforcement bar and the surrounding concrete is included in a cap-plasticity interface model. The expansion of the corrosion product is represented by an eigenstrain in the lattice elements forming the interface. The lattice modelling approach is applied to the analysis of corrosion induced cracking and its influence on the bond strength. The model capabilities are assessed by comparing results of analyses with those from unconfined pull-out tests reported in the literature.
Future work will investigate the influence of the stiffness of interface elements and the effect of lateral confinement on corrosion induced cracking. \vspace{5mm} Keywords: lattice, concrete, cracking, reinforcement, corrosion \vspace{-5mm} } \maketitle \frenchspacing \section{INTRODUCTION} The present work involves the modelling of corrosion induced cracking of reinforced concrete by means of a three-dimensional lattice approach. Corrosion of reinforcement involves the transformation of steel into expansive rust. If the expansion of rust is restrained, it results in radial pressure in the confining material. For reinforced concrete, this radial pressure and accompanying transverse tensile stresses may cause cracking \shortcite{AndAloMol93}. This cracking is not desirable, since it reduces the anchorage capacity of the reinforcement \shortcite{LeeNogTom02}. Most of the anchorage capacity of deformed reinforcement is provided by ribs on the surface of the bar, which resist the slip between concrete and reinforcement by transferring inclined radial forces into the concrete~\shortcite{Tep79}. The capacity of the concrete to resist these forces can be significantly reduced by the corrosion induced cracking. Consequently, there is a considerable interest in developing models which can predict the mechanism of corrosion induced cracking and its influence on the bond capacity of reinforced concrete. Many of the present models include the effect of corrosion induced cracking on bond by reducing the bond strength of the interface between concrete and steel \shortcite{LeeNogTom02}. In these models, the relationship between the amount of rust and the reduction of bond strength is determined empirically. Thus, these models are of limited validity for the prediction of the influence of corrosion on bond. Only very few models describe the expansion of the rust, the radial pressure and the transverse stresses on the concrete explicitly \shortcite{Lun05}. 
These models have the potential to establish an analytical relationship between the expansion of rust, cracking and spalling. They can be combined with realistic bond models \shortcite{Lun05a}, so that the influence of corrosion induced cracking on the bond capacity can be predicted without the need of empirical relationships. However, this modelling framework is computationally demanding, since it requires three-dimensional modelling of the mechanical response of the concrete, the bond between reinforcement bar and concrete, and the reinforcement bar itself as shown in Figure~\ref{fig:3DBond}. \begin{figure} \begin{center} \epsfig{file=./fig3DBond.eps,width=9cm}\\ \end{center} \caption{Three-dimensional modelling of reinforced concrete: Concrete, reinforcement and bond between concrete and reinforcement are considered as individual phases. Left: Concrete cube (light grey) with reinforcement bar (dark grey); right: detail of interface between reinforcement and concrete.} \label{fig:3DBond} \end{figure} In earlier work, three-dimensional continuum-based models were used to describe the cracking of concrete surrounding the reinforcement. However, three-dimensional continuum modelling of cracking in concrete is challenging, since it is not straightforward to include the localised deformations in the continuum description. Combined with the modelling of the bond between concrete and reinforcement, it can be exceedingly difficult. This might explain the small number of models which are available based on this three-dimensional approach. Discrete approaches, such as lattice and particle models, might be a favourable alternative for describing this discontinuous problem. Their potential to model corrosion induced cracking and its influence on bond is assessed in the present study. Lattice approaches have been used successfully in the past to model the failure of concrete, as reported by \shortciteN{SchMie92b} and \shortciteN{BolSai98}.
The model by Bolander has been shown to accurately reproduce analytical solutions for elasticity and potential flow problems \shortcite{YipMohBol05,BolBer04}. Furthermore, it allows for the use of constitutive models, which are formulated by means of tractions and displacement jumps, as commonly used in interface approaches for concrete fracture \shortcite{CabLopCar06}. These have been shown to result in an element-size independent description of crack-openings. This lattice approach is used in the present study to describe the mechanical response of three phases, namely reinforcement, concrete and bond between reinforcement and concrete. In this approach, the lattice elements do not represent the meso-structure of the materials~\cite{ZubBaz87}. Instead, they are used to discretise the continuum. Thus, constitutive models are required for all three phases, which are described in more detail in the following sections. \section{MODELLING APPROACH} The present lattice approach for the modelling of corrosion induced cracking follows the framework developed by Bolander and his co-workers. The nodes of the lattice are randomly located in the domain to be analysed, subject to the constraint of a minimum distance \shortcite{ZubBaz87}. The arrangement of the lattice elements is determined from the edges of the tetrahedra of the Delaunay tessellation based on the randomly placed nodes. The cross-sectional properties of these elements are obtained from the dual Voronoi tessellation of the same set of random nodes. For the interface between reinforcement and concrete, the lattice nodes are not placed randomly but at special locations, such that the middle cross-sections of the lattice elements form the boundaries between the two phases \shortcite{BolBer04}. The nodes of the lattice elements have six degrees of freedom, namely three translations and three rotations. 
These degrees of freedom are related to three displacement and three rotation discontinuities at the centroid of the middle cross-section of the elements. The three rotation discontinuities are related to moments by elastic relationships. The three displacement discontinuities are used in interface constitutive models to determine the corresponding tractions. In the present study, an elastic interface model is used for the reinforcement. One limitation of the present lattice approach is that it only predicts Poisson's ratios of less than $1/4$ in 3D and less than $1/3$ in 2D. This is restrictive for the 3D modelling of the steel reinforcement, which has a Poisson's ratio greater than $1/4$. This limitation could be overcome by combining the lattice model with continuum tetrahedra, as discussed for the 2D case by \shortciteN{GraBazCus06}. However, this approach is beyond the scope of the present study. The interface model for concrete is based on a combination of plasticity and damage, which describes the softening and reduction of the unloading stiffness in tension as well as the nonlinear hardening response in highly confined compression \shortcite{Gra09a}. For the interface between concrete and reinforcement, a new cap-plasticity model is proposed, which is described in more detail in the next section. \subsection{\em Elasto-plastic cap model for the bond between concrete and reinforcement} In the lattice model, the nodal degrees of freedom are related to displacement jumps at the middle cross-section of the lattice element.
This three-dimensional displacement jump $\mathbf{u}_{\rm c} = \left(u_{\rm n}, u_{\rm s}, u_{\rm t}\right)^T$ is transformed into strains $\boldsymbol{\varepsilon} = \left(\varepsilon_{\rm n}, \varepsilon_{\rm s}, \varepsilon_{\rm t} \right)^T$ by means of the interface thickness $h$ as \begin{equation}\label{eq:smear} \boldsymbol{\varepsilon} = \dfrac{\mathbf{u}_{\rm c}}{h} \end{equation} The three subscripts $n$, $s$ and $t$ denote the normal and two tangential directions in the local coordinate system of the interface (Figure~\ref{fig:interface}a). The thickness $h$ of the interface (Figure~\ref{fig:interface}b) is chosen to be equal to the length of the lattice elements crossing the interface between reinforcement steel and concrete. \begin{figure}[h!] \begin{center} \begin{tabular}{c} \epsfig{file=./figInterfaceA.eps, width=5cm}\\ (a)\\ \epsfig{file=./figInterfaceB.eps, width=5cm}\\ (b) \end{tabular} \end{center} \caption{Interface. (a) Tangential plane of interface with the local coordinate system $n$, $s$ and $t$. (b) Cross-section of interface of thickness $h$.} \label{fig:interface} \end{figure} The strains are related to the nominal stress $\boldsymbol{\sigma} = \left(\sigma_{\rm n}, \sigma_{\rm s}, \sigma_{\rm t} \right)^T$ by the elasto-plastic stress-strain relationship \begin{equation}\label{eq:stressStrain} \boldsymbol{\sigma} = \mathbf{D}_{\rm e} \left(\boldsymbol{\varepsilon}- \boldsymbol{\varepsilon}_{\rm c} - \boldsymbol{\varepsilon}_{\rm p}\right) \end{equation} where $\mathbf{D}_{\rm e}$ is the elastic stiffness, $\boldsymbol{\varepsilon}_{\rm c} = \left(\varepsilon_{\rm c}, 0, 0 \right)^T$ is the eigenstrain describing the expansion of the corrosion product and $\boldsymbol{\varepsilon}_{\rm p} = \left(\varepsilon_{\rm pn}, \varepsilon_{\rm ps}, \varepsilon_{\rm pt} \right)^T$ is the plastic strain. The yield surface of the plasticity model consists of a Mohr-Coulomb friction law combined with an elliptical cap.
The shape of the cap surface is adjusted so that a smooth transition between the two surfaces is obtained (Figure~\ref{fig:surface}). This combination was initially proposed by \shortciteN{SwaSeo00} and further developed by \shortciteN{DolIbr07}. \begin{figure} \begin{center} \epsfig{file=./figSurface.eps,width=8.5cm} \end{center} \caption{Yield surface: Mohr-Coulomb friction law combined with a cap.} \label{fig:surface} \end{figure} The yield function $f$ depends on the normal stress $\sigma_{\rm n}$ and the shear stress norm $\sigma_{\rm q} = \sqrt{\sigma_{\rm s}^2 + \sigma_{\rm t}^2}$ as \begin{small} \begin{equation}\label{eq:yield} f = \left\{ \begin{array}{ll} \sigma_{\rm q} + \alpha \sigma_{\rm n} & \mbox{if $\sigma_{\rm n0} \leq \sigma_{\rm n}$} \vspace{0.5cm}\\ \sigma_{\rm q}^2 + \dfrac{\left(\sigma_{\rm n} + f_{\rm c} -a \right)^2}{\beta^2} - \dfrac{a^2}{\beta^2} & \mbox{if $\sigma_{\rm n} \leq \sigma_{\rm n0}$} \vspace{0.5cm} \end{array} \right. \end{equation} \end{small} \noindent where $\alpha$ is the friction angle and $f_{\rm c}$ is the compressive strength. Furthermore, \begin{equation} a = \dfrac{\beta \alpha f_{\rm c}}{\alpha \beta + \sqrt{1+\beta^2\alpha^2}} \end{equation} where $\beta$ is the ratio of the short and long radii of the cap ellipse (Figure~\ref{fig:surface}). At the point where the two parts of the yield surface meet, the normal stress is $\sigma_{\rm n0} = -\dfrac{a}{ \beta \alpha \sqrt{1+ \beta^2 \alpha^2 }}$. The rate of the plastic strains in Equation~\ref{eq:stressStrain} is \begin{equation}\label{eq:flow} \dot{\boldsymbol{\varepsilon}}_{\rm p} = \dot{\lambda} \dfrac{\partial g} {\partial \boldsymbol{\sigma}} \end{equation} where $g$ is the plastic potential and $\lambda$ is the plastic multiplier. In the present study, $g$ is chosen to be very similar to the yield function $f$.
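The smoothness of the transition between the friction line and the cap can be checked numerically from the expressions for $a$ and $\sigma_{\rm n0}$ above. The following sketch uses example parameter values chosen purely for illustration (they are not taken from the text) and verifies that the transition point lies on both surfaces with matching tangent.

```python
# Sketch: the Mohr-Coulomb line and the elliptical cap of the yield function
# meet with matching value and slope at sigma_n0.  The parameter values
# (alpha, beta, f_c) below are illustrative assumptions only.
import math

alpha, beta, f_c = 0.9, 0.5, 10.0
a = beta * alpha * f_c / (alpha * beta + math.sqrt(1 + beta**2 * alpha**2))
sn0 = -a / (beta * alpha * math.sqrt(1 + beta**2 * alpha**2))

# On the friction line f = 0 we have sigma_q = -alpha * sigma_n.
sq0 = -alpha * sn0

# The transition point also lies on the cap ellipse
# sigma_q^2 + (sigma_n + f_c - a)^2 / beta^2 = a^2 / beta^2:
lhs = sq0**2 + (sn0 + f_c - a)**2 / beta**2
assert abs(lhs - a**2 / beta**2) < 1e-9

# Tangency: the gradients of the line and the ellipse are parallel there.
# Line gradient ~ (alpha, 1); ellipse gradient ~ (2(sn0+f_c-a)/beta^2, 2 sq0).
gx, gy = 2 * (sn0 + f_c - a) / beta**2, 2 * sq0
assert abs(gx * 1 - gy * alpha) < 1e-9
```

The same check goes through symbolically for any $\alpha$, $\beta$, $f_{\rm c}>0$, which is why no special vertex stress return is needed in the transition region.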
The only difference between $g$ and $f$ is that $\alpha$ is replaced by the dilatancy angle $\psi$, so that the magnitude of the normal plastic strain generated during shear loading can be controlled. The plasticity model is completed by the loading and unloading conditions: \begin{equation}\label{eq:loadUn} f \leq 0 \mbox{,} \hspace{0.5cm} \dot{\lambda} \geq 0 \mbox{,} \hspace{0.5cm} \dot{\lambda} f = 0 \end{equation} This plasticity bond model is similar to the one developed by \shortciteN{Lun05a}. However, in the present work, the response is assumed to be perfectly plastic, i.e. the shape of the yield surface is independent of the plastic strains. The bond model will be extended to hardening and softening in future work. The implementation of the present model is simplified by introducing a smooth transition between the cap and the frictional law. Thus, a special vertex stress return in the transition region is not required. \subsection{\em Model for corrosion between concrete and reinforcement} The effect of corrosion is idealised as an eigenstrain $\varepsilon_{\rm c}$, which is determined from the free expansion of the corrosion product $u_{\rm c}$ as $\varepsilon_{\rm c} = u_{\rm c}/h$ (Figure~\ref{fig:corrosion}). \begin{figure} \begin{center} \begin{tabular}{c} \epsfig{file=./figCorrosion.eps,width=6.5cm} \end{tabular} \end{center} \caption{Representation of the corrosion process as an expansive layer of rust.} \label{fig:corrosion} \end{figure} This expansion is related to the corrosion penetration depth $x_{\rm c}$ as \begin{equation}\label{eq:corExp} u_{\rm c} = \sqrt{r^2 + \left(2 r x_{\rm c} - x_{\rm c}^2\right)\left(\lambda_{\rm c} - 1\right)} - r \end{equation} where $\lambda_{\rm c}$ is the ratio of the volumes of rust and steel.
The corrosion penetration $x_{\rm c}$ is related to the corrosion percentage $\rho_{\rm c}$ as \begin{equation}\label{eq:corPen} x_{\rm c} = r \left(1-\sqrt{1-\dfrac{\rho_{\rm c}}{100}}\right) \end{equation} \section{COMPARISON WITH EXPERIMENTAL RESULTS} The lattice approach is used to model the experiments reported by \shortciteN{LeeNogTom02}. The geometry and loading setup of the experiments are shown in Figure~\ref{fig:geometry}. Reinforcement bars ($\diameter = 13$~mm) embedded in concrete cubes were initially subjected to corrosion and subsequently pulled out. \begin{figure}[ht!] \begin{center} \begin{tabular}{c} \epsfig{file=./figLeeGeometrya.eps,width=7.5cm}\\ (a)\\ \epsfig{file=./figLeeGeometryb.eps,width=5.5cm}\\ (b) \end{tabular} \end{center} \caption{Geometry and loading set-up for the corrosion and pull-out tests reported by \protect \shortciteN{LeeNogTom02}. The reinforcement bar with a diameter $\diameter = 13$~mm is placed eccentrically in the $y$-direction in the concrete specimen. No lateral reinforcement is provided.} \label{fig:geometry} \end{figure} The concrete used in the experiments is characterised by a Young's modulus of $E_{\rm c} =22.6$~GPa, a Poisson's ratio of $\nu_{\rm c}=0.17$, a tensile strength of $f_{\rm t} = 2.7$~MPa and a compressive strength of $f_{\rm c} = 24.7$~MPa. The Young's modulus of the reinforcement is $E_{\rm s} = 183$~GPa. In the present study, the response of concrete, reinforcement and bond between concrete and reinforcement is modelled by the lattice approach described earlier. The lattice for the analyses is shown in Figure~\ref{fig:mesh}. For the reinforcement and the interface between reinforcement and concrete, the mesh is structured. For the concrete, the lattice is random. \begin{figure} \begin{center} \epsfig{file=./figMesh.eps,width=7cm} \end{center} \caption{Mesh for the lattice analysis.} \label{fig:mesh} \end{figure} Three analyses were performed.
In the first analysis, the reinforcement was pulled out without initial corrosion. In the other two analyses, corrosion percentages of $\rho_{\rm c} = 3.2$ and $16.8$~\% were considered before the pullout. Assuming uniform corrosion, the corrosion percentages were transformed according to Equation~\ref{eq:corPen} to corrosion penetrations of $x_{\rm c} = 105$ and $571$~$\mu$m. With an expansion ratio of $\lambda_{\rm c} = 1.4$, this gives, according to Equation~\ref{eq:corExp}, free corrosion product expansions of $u_{\rm c} = 41.6$ and $214.8$~$\mu$m, respectively. With an interface element thickness of 1~mm, this gives eigenstrains of $\varepsilon_{\rm c} = 0.0416$ and $0.2148$. In all three analyses, the load $F$ was controlled by the end slip in the form of relative horizontal displacements of nodes $A$ and $B$, as shown in Figure~\ref{fig:geometry}a. The results of the analyses are compared to the experimental results in the form of average bond stress-slip curves shown in Figure~\ref{fig:ld}. Here, the average bond stress was determined as $\tau = F/(\pi \diameter \ell)$, where $\ell = 6 \diameter$ is the embedded length. \begin{figure} \begin{center} \epsfig{file=./figExpLd.eps,width=9cm} \end{center} \caption{Comparison of predicted average bond stress-slip curves and experimental data reported by \protect \shortciteN{LeeNogTom02} for three corrosion percentages $\rho_{\rm c} = 0$, $3.2$ and $16.8$~\%.} \label{fig:ld} \end{figure} The pre-peak regime of the load-slip curves obtained in the analyses is in very good agreement with the experiments. However, the post-peak responses obtained in the analyses deviate considerably from those reported in the literature. In particular, the analysis of the corrosion-free case exhibits a more brittle response than reported in the experiments. On the other hand, the load-slip curves with initial corrosion exhibit a more ductile response than reported in the experiments.
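The corrosion quantities quoted earlier can be reproduced directly from Equations~\ref{eq:corExp} and \ref{eq:corPen}, with $r=6.5$~mm (half the 13~mm bar diameter), $\lambda_{\rm c}=1.4$ and $h=1$~mm as stated in the text. The following sketch checks this; the computed values agree with the quoted ones to within their rounding.

```python
# Sketch: reproduce x_c, u_c and eps_c from the corrosion-expansion relations,
# using r = 6.5 mm, lambda_c = 1.4 and h = 1 mm as given in the text.
import math

r = 6.5e3           # bar radius in micrometres
lam = 1.4           # volume ratio of rust to steel
h = 1.0e3           # interface thickness in micrometres

def penetration(rho):            # corrosion penetration x_c(rho_c)
    return r * (1 - math.sqrt(1 - rho / 100))

def expansion(x):                # free expansion u_c(x_c)
    return math.sqrt(r**2 + (2 * r * x - x**2) * (lam - 1)) - r

for rho, x_ref, u_ref, eps_ref in [(3.2, 105, 41.6, 0.0416),
                                   (16.8, 571, 214.8, 0.2148)]:
    x = penetration(rho)
    u = expansion(x)
    eps = u / h
    assert abs(x - x_ref) < 1        # micrometres, to the quoted rounding
    assert abs(u - u_ref) < 0.2
    assert abs(eps - eps_ref) < 2e-4
```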
The greater ductility of the corroded cases is surprising, since it is expected that cracking in the concrete cover should reduce the pressure at the interface and, thus, also the tangential stresses. Consequently, a more consistent pattern of post-peak response might be expected across all three cases. More studies are required to explore this observation. For the analysis without corrosion, the crack patterns at the peak bond stress and at the maximum slip (presented in Figure~\ref{fig:ld}) are shown in Figure~\ref{fig:crack1}. Crack patterns are visualised as those middle cross-sections of lattice elements in which the norm of the crack opening vector is greater than $10$~$\mu$m and increasing. \begin{figure}[ht] \begin{center} \begin{tabular}{c} \epsfig{file=./figCrack1.eps,width=6cm}\\ (a)\\ \epsfig{file=./figCrack2.eps,width=6cm}\\ (b)\\ \end{tabular} \end{center} \caption{Crack patterns for the pullout analysis for the corrosion-free case at (a) peak and (b) end of the average bond stress-slip curve. Cracks initiate at the interface between reinforcement and concrete and propagate to the specimen surface.} \label{fig:crack1} \end{figure} Thus, only active cracks are presented. At the peak of the average bond stress-slip curve, the concrete cover is cracked at its thinnest section (Figure~\ref{fig:crack1}a). With further slip, additional cracks initiate from the reinforcement and propagate radially into the specimen, as shown in Figure~\ref{fig:crack1}b. \begin{figure}[ht] \begin{center} \begin{tabular}{c} \epsfig{file=./figCrack3.eps,width=6cm}\\ (a) \\ \epsfig{file=./figCrack4.eps,width=6cm}\\ (b) \end{tabular} \end{center} \caption{Crack patterns for the analyses with (a) $3.2$~\% and (b) $16.8$~\% corrosion percentage before the pullout.} \label{fig:crack2} \end{figure} In Figure~\ref{fig:crack2}, the crack patterns are shown, for the two corrosion cases, at the end of the corrosion process.
For both corrosion cases, cracking of the concrete cover occurs before the pullout, which corresponds to the observations reported in the literature \shortcite{LeeNogTom02}. \section{CONCLUSIONS} In the present work a lattice approach is used to describe the mechanical interaction of a corroding reinforcement bar, the surrounding concrete and the interface between steel reinforcement and concrete. The cross-section of the ribbed reinforcement bar is taken to be circular, assuming that the interaction of the ribs of the deformed reinforcement bar and the surrounding concrete is included in a cap-plasticity interface model. This lattice approach is capable of representing many of the important characteristics of corrosion induced cracking and its influence on bond. The idealisation of the corrosion expansion as an eigenstrain allows for the modelling of corrosion induced cracking. Furthermore, the frictional bond law can model the decrease of the bond strength if the concrete is pre-cracked. Very good agreement with experimental results in the pre-peak regime of the bond stress-slip curves was obtained. More studies are required to investigate the post-peak response of the bond stress-slip curves. Also, further studies will be performed to investigate the influence of the element length of the interface between reinforcement and concrete on the analyses results. Also, we will study the influence of the stiffness of the lattice elements on corrosion induced cracking and its interplay with lateral confinement. \section*{ACKNOWLEDGEMENTS} The simulations were performed with the object-oriented finite element package OOFEM \shortcite{Pat99,PatBit01}, extended by the present authors. \bibliographystyle{chicago} \begin{small}
\section{Introduction} In this article we study the regularization of the minimization problem \begin{equation}\label{OCPl}\tag{$\mathbb P_0$} \min_{u\in\Uad} J_0(u)\quad\text{with}\quad J_0(u):= \frac{1}{2} \norm{Tu-z}^2_{H} \end{equation} for $T:U \to H$ a given linear and continuous operator between the control space $U:=L^2(\Omega_U)$ with scalar product $(\cdot,\cdot)_U$ and an arbitrary Hilbert space $H$ where $z\in H$ is a fixed function to be approached. The set $\Omega_U\subset\mathbb R^n$, $n\ge 1$, is a bounded measurable domain and the set of admissible controls $\Uad\subset U$ is given by \begin{equation} \label{E:Uad} \Uad:=\twoset{u\in U}{ a(x)\le u(x)\le b(x)\quad \text{for almost all $x\in\Omega_U$}} \end{equation} with fixed control bounds $a$, $b\in L^\infty(\Omega_U)$ fulfilling $a\le b$. We give two instances of $T$ as solution operators of linear partial differential equations (PDEs): \begin{exam}\label{exam:poisson} Let $y$ be the unique weak solution of the Poisson problem \begin{equation}\label{e:poisson} \begin{aligned} -\Delta y & = u &&\text{in $\Omega$,}\\ y & = 0 &&\text{on $\partial\Omega$} \end{aligned} \end{equation} for given $u\in\H$ on some bounded sufficiently regular domain $\Omega\subset \mathbb R^d$, $d\ge 1$, with boundary $\partial\Omega$. We set $\Omega_U:=\Omega$ and get $y=Tu$ where $T:U=\H \to H:=\H$ is the weak solution operator associated with problem \eqref{e:poisson}. \end{exam} \begin{exam}\label{exam:timedep} Consider the heat equation \begin{equation}\label{e:heateq} \begin{aligned} \partial_t y -\Delta y &= Bu &&\text{in }I\times\Omega\,,\\ y&=0&&\text{in } I\times\partial\Omega\,,\\ y(0)&=0&&\text{in } \Omega\,. \end{aligned} \end{equation} with a control operator $B:U\to \LIIVd$. We fix a time interval $I:=(0,T_e)\subset \mathbb R$ with a given end-time fulfilling $0<T_e<\infty$. Furthermore, we assume $\Omega$ to be a domain as in the previous example. 
Let $T:=SB$ be the control-to-state operator with $S:L^2(I,\Vd)\allowbreak \to H:=\LIIH$ being the weak solution operator for the heat equation \eqref{e:heateq}. We will discuss it later from \eqref{E:operatorS} onwards. Let us mention two instances of the control operator $B$: \begin{enumerate} \item (Distributed controls) We set $\Omega_U := I\times\Omega$. The control operator $B:U=H\rightarrow \LIIVd$ is given by $B:=\identity$, i.e., the identity mapping induced by the standard Sobolev imbedding $\iota: \H\hookrightarrow\Vd$. \item (Located controls) Let $\Omega_U := I$ and $g_1\in \H$ be a fixed function. The operator $B$ given by \begin{equation}\label{E:Bloccont} B: L^2(I,\mathbb R)\rightarrow \LIIVd\,,\quad u\mapsto \left( t\mapsto u(t)\iota g_1 \right), \end{equation} with $\iota$ from the previous item maps a control function $u$ depending only on time to a function distributed in space-time. With a little more effort one can consider the case of several fixed functions $g_1, \dots, g_D$, replacing $B$ by $u\mapsto \left( t\mapsto \sum_{i=1}^D u_i(t)\iota g_i \right)$ and seeking control functions $u_1, \dots, u_D$. We omit this generalization here to shorten the exposition and refer the interested reader to \cite{dissnvd}. \end{enumerate} \end{exam} To unify the examples just given, it is useful to write $T=SB$ with two continuous linear operators $B:U\to R$ and $S:R\to H$ where $R$ is an appropriately chosen function space. This decomposition is always possible for a given $T$ by taking $B=\identity$ and $S=T$ (or vice versa). Often, the solutions of \eqref{OCPl} possess a special structure: they take values only on the bounds $a$ and $b$ of the admissible set $\Uad$ given in \eqref{E:Uad} and are therefore called \emph{bang-bang solutions}.
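As an aside for the computationally minded reader, the located-control operator \eqref{E:Bloccont} has a simple matrix realization after discretization: $B$ becomes a Kronecker product of an identity in time with the (discretized) spatial profile. The following sketch illustrates this on a small, purely hypothetical grid (the grid sizes and the profile standing in for $g_1$ are chosen for illustration only):

```python
import numpy as np

# Hypothetical discretization: Nt time points, Nx spatial points.
Nt, Nx = 5, 4
g1 = np.linspace(0.0, 1.0, Nx)      # a fixed spatial profile, stands in for g_1

# Located control: B maps u in R^Nt to the space-time field (t, x) -> u(t) * g1(x),
# i.e. B = kron(I_Nt, g1) as an (Nt*Nx) x Nt matrix.
B = np.kron(np.eye(Nt), g1.reshape(-1, 1))

u = np.arange(1.0, Nt + 1)          # a control depending only on time
field = (B @ u).reshape(Nt, Nx)     # space-time representation

# Row t of `field` is u[t] * g1, matching the located-control example above.
assert np.allclose(field, np.outer(u, g1))
```

The distributed-control case of the first item corresponds to $B$ being the identity matrix in this picture.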
Theoretical and numerical questions related to this control problem attracted much interest in recent years, see, e.g., \cite{deckelnick-hinze}, \cite{wachsmuth1}, \cite{gong-yan}, \cite{wachsmuth2}, \cite{wachsmuth3}, \cite{wachsmuth4}, \cite{wachsmuth5}, \cite{felgenhauer2003}, \cite{alt-bayer-etal2}, \cite{alt-seydenschwanz-reg1}, and \cite{seydenschwanz-regkappa}. The last four papers are concerned with $T$ being the solution operator of an \emph{ordinary} differential equation, the first three papers with $T$ being a solution operator of an \emph{elliptic} PDE as in Example~\ref{exam:poisson}, and the remaining references with $T$ being a general linear operator as here. In \cite{dissnvd}, a brief survey of the content of these and some other related papers is given at the end of the bibliography. For an appropriate discretization of Example~\ref{exam:timedep} we refer to \cite{DanielsHinzeVierling}, \cite{dissnvd}, and the forthcoming \cite{danielshinze}, but see also the numerics section below. Problem~\eqref{OCPl} is in general ill-posed, meaning that a solution does not depend continuously on the datum $z$, see \cite[p. 1130]{wachsmuth2}. The numerical treatment of a discretized version of the problem is often challenging or even impossible. Therefore, we use Tikhonov regularization to overcome these difficulties. The \emph{regularized problem} is given by \begin{equation}\label{OCP}\tag{$\mathbb P_\alpha$} \min_{u\in\Uad} J_\alpha(u)\quad\text{with}\quad J_\alpha(u):= \frac{1}{2} \norm{Tu-z}^2_{H} + \frac{\alpha}{2} \norm{u}^2_{U} \end{equation} where $\alpha > 0$ denotes the regularization parameter. Formally, for $\alpha=0$, problem \eqref{OCP} reduces to problem \eqref{OCPl}, which is also called the \emph{limit problem}. Note that for the regularized problems \eqref{OCP}, $\alpha > 0$, and their discretizations, explicit solution representations are available and can be utilized for numerical implementation; cf. \eqref{FONC}, \eqref{FONCkh} below.
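To give a finite-dimensional impression of the regularized problem \eqref{OCP} and of the projection formula \eqref{FONC} quoted above, the following sketch (our own toy setup: a random matrix stands in for a discretized $T$, and constant box bounds for $\Uad$) solves the discrete problem by projected gradient descent and then checks the fixed-point characterization of the minimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 5))          # stands in for a discretized operator T
z = rng.standard_normal(8)
a, b, alpha = -1.0, 1.0, 0.1             # box bounds and regularization parameter

P = lambda v: np.clip(v, a, b)           # orthogonal projection onto U_ad

# Projected gradient descent on J_alpha(u) = 0.5*||Tu - z||^2 + 0.5*alpha*||u||^2.
u = np.zeros(5)
tau = 1.0 / (np.linalg.norm(T, 2) ** 2 + alpha)   # step size below 1/Lipschitz
for _ in range(20_000):
    grad = T.T @ (T @ u - z) + alpha * u
    u = P(u - tau * grad)

# At the minimizer, the discrete analogue of the fixed-point formula
# u_alpha = P( -(1/alpha) * B^* p_alpha ) holds, with B^* p_alpha ~ T^T (T u - z).
p = T.T @ (T @ u - z)
assert np.allclose(u, P(-p / alpha), atol=1e-6)
```

This is only a sketch of the finite-dimensional situation; the discretization of the PDE-constrained examples is a separate matter, treated in the references cited above.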
We recall in the \textbf{next section} basic properties of the regularized and the limit problem: Problem \eqref{OCP} has a solution $\uopt_\alpha$ for all $\alpha \ge 0$. If $\alpha > 0$, the solution is unique. If $\alpha = 0$ and the operator $T$ is injective, the solution of the limit problem \eqref{OCPl} is unique, too. Note that $T$ is injective in Examples~\ref{exam:poisson} and \ref{exam:timedep}. If $T$ is not injective, the limit problem might have several solutions. By $\hat u_0$ we denote \emph{the} solution of the limit problem with minimal $U$ norm, i.e. $ \hat u_0 = \argmin \twoset{\norm{u}_U}{\text{$u$ solves \eqref{OCPl}}}.$ We close the section by stating a first convergence result, which in particular shows that the regularized solutions $\uopt_\alpha$ converge to $\hat u_0$ if $\alpha$ tends to zero. More convergence results are obtained in the \textbf{third section} if a condition on the smoothness of the limit problem \eqref{OCPl} is fulfilled. For easy reference, we call this Assumption~\ref{A:sourcestruct} the \emph{source-measure condition} below. The main result is Theorem~\ref{T:regmain}, where we show convergence rates which improve known ones, see Table~\ref{tab:comp} for a detailed comparison. The necessity of the smoothness conditions just mentioned for obtaining better convergence rates is discussed in the \textbf{fourth section}. We present a new proof of the necessity of the measure condition which motivates another condition, namely \eqref{E:struct2}, in the special case of bang-bang solutions. This new condition \eqref{E:struct2} is exploited in the \textbf{fifth section}. We show that the condition implies the same convergence rates as the source-measure condition. The new condition is (almost) necessary to obtain these rates. Finally, it turns out that the new and the old condition coincide if the limit problem is of certain regularity.
The reason to introduce this new condition \eqref{E:struct2} is that it leads to an improved bound on the decay of smoothness in the weak derivative of the optimal control when $\alpha$ tends to zero. This bound is useful to derive improved convergence rates for the discretization errors of the regularized problem, which we sketch. The \textbf{last section} is concerned with a numerical example confirming our theoretical findings. \section{First results} \begin{lemm}\label{L:OCPexistence} The optimal control problem \eqref{OCP} admits for fixed $\alpha \ge 0$ at least one solution $\uopt_\alpha\in U$, which can be characterized by the first order necessary and sufficient optimality condition \begin{equation}\label{VarIneq} \uopt_\alpha\in\Uad,\quad \left( \alpha\uopt_\alpha + \dual B\popt_\alpha,u-\uopt_\alpha\right)_U \ge 0\quad \forall\ u\in\Uad \end{equation} where $\dual B$ denotes the adjoint operator of $B$, $\yopt_\alpha:=T\uopt_\alpha\in H$ is named \emph{optimal state}, and the so-called \emph{optimal adjoint state} $\popt_\alpha$ is defined by $\popt_\alpha:=\dual S(\yopt_\alpha-z)$. If $\alpha >0$ or $T$ is injective, the solution $\uopt_\alpha$ is unique. The quantities $\yopt_\alpha$ and $\popt_\alpha$ are always unique for given $\alpha\ge 0$ even if $\uopt_0$ is not. \end{lemm} \begin{proof} We have a convex optimization problem with a weakly lower semicontinuous cost functional on the non-empty, bounded, closed, and convex set $\Uad$. Therefore, classic theory as elaborated, e.g., in \cite{ekeland-temam}, guarantees existence and uniqueness. We refer to \cite[Theorem 1.46, p. 66]{hpuu} or \cite[Satz 2.14]{troeltzsch} for a proof in our specific setting. Note that in the case $\alpha = 0$, uniqueness of the state $\yopt_0$ follows from the fact that the cost functional of \eqref{OCPl} with respect to the state, i.e. $y\mapsto \frac 12 \norm{y-z}^2_H$, is strictly convex. 
Thus by injectivity of $T$, uniqueness of $\uopt_0$ can be derived since $\yopt_0 = T\uopt_0$. \end{proof} As a consequence of the fact that $\Uad$ is a closed and convex set in a Hilbert space we have the following lemma. \begin{lemm}\label{L:orthoproj} In the case $\alpha > 0$ the variational inequality \eqref{VarIneq} is equivalent to \begin{equation}\label{FONC} \uopt_\alpha = P_{\Uad}\left(-\frac{1}{\alpha}\dual B\popt_\alpha\right) \end{equation} where $P_{\Uad}: U \to \Uad$ is the orthogonal projection. \end{lemm} \begin{proof} See \cite[Corollary 1.2, p. 70]{hpuu} with $\gamma = \frac 1\alpha$. \end{proof} We now derive an explicit characterization of optimal controls. \begin{lemm}\label{L:ucharact} If $\alpha >0$, then for almost all $x\in \Omega_U$ there holds for the optimal control \begin{equation}\label{E:uoptpwchar} \uopt_\alpha (x)= \begin{cases} a(x)&\text{if $\dual B\popt_\alpha(x)+\alpha a(x) >0$},\\ -\alpha^{-1}\dual B\popt_\alpha(x) &\text{if $\dual B\popt_\alpha(x) + \alpha \uopt_\alpha(x) = 0$},\\ b(x)&\text{if $\dual B\popt_\alpha(x)+\alpha b(x) <0$}. \end{cases} \end{equation} Suppose $\alpha = 0$ is given. Then any optimal control fulfills a.e. in $\Omega_U$ \begin{equation}\label{E:bangbang} \uopt_0 (x) \begin{cases} =a(x)&\text{if $\dual B\popt_0(x) >0$},\\ \in [a(x),b(x)]&\text{if $\dual B\popt_0(x) =0$},\\ =b(x)&\text{if $\dual B\popt_0(x) <0$}. \end{cases} \end{equation} \end{lemm} \begin{proof} Let us first note that the variational inequality \eqref{VarIneq} is for $\alpha \ge 0$ equivalent to the following pointwise one: \begin{equation}\label{VarIneqpw} \forall ' x\in\Omega_U\ \forall\ v\in [a(x),b(x)] : \left( \alpha\uopt_\alpha(x) + \dual B\popt_\alpha(x),v-\uopt_\alpha(x)\right)_{\mathbb R} \ge 0 \end{equation} where ``$\forall '$'' denotes ``for almost all''. This can be shown via a Lebesgue point argument, see the proof of \cite[Lemma 2.26]{troeltzsch}. 
By a case distinction, one immediately derives \eqref{E:uoptpwchar} and \eqref{E:bangbang} from \eqref{VarIneqpw}. \end{proof} As a consequence of \eqref{E:bangbang} we have: If $\dual B\popt_0$ vanishes only on a subset of $\Omega_U$ with Lebesgue measure zero, the optimal control $\uopt_0$ is unique and a \emph{bang-bang solution}: it takes values only on the bounds $a$ and $b$ of the admissible set $\Uad$ given in \eqref{E:Uad}. If the limit problem \eqref{OCPl} admits several solutions, we denote by $\hat u_0$ the minimal $U$ norm solution, i.e. \begin{equation}\label{E:minnormsol} \hat u_0 = \argmin \twoset{\norm{u}_U}{\text{$u$ solves \eqref{OCPl}}}. \end{equation} Note that this minimization problem has a unique solution since the $U$ norm is strictly convex and the set of solutions of \eqref{OCPl} is non-empty, closed, and convex in $U$. The next theorem establishes convergence $\uopt_\alpha \to \hat u_0$ if $\alpha\to 0$, which is the reason to highlight the minimal $U$ norm solution among the solutions of \eqref{OCPl}. \begin{theo}\label{T:invprob} For the solution $(\uopt_\alpha,\yopt_\alpha)$ of \eqref{OCP} with $\alpha > 0$ and any solution $(\uopt_0,\yopt_0)$ of \eqref{OCPl}, there holds \begin{enumerate} \item The optimal control and the optimal state depend continuously on $\alpha$. More precisely, the inequality \begin{equation}\label{E:regestinvprob} \norm{\yopt_{\alpha'}-\yopt_\alpha}_H^2 +\alpha' \norm{\uopt_{\alpha'}-\uopt_\alpha}_U^2 \le (\alpha-\alpha')(\uopt_{\alpha}, \uopt_{\alpha'}-\uopt_{\alpha})_U \end{equation} holds for all $\alpha\ge 0$ and all $\alpha'\ge 0$. \item The regularized solutions converge to the minimal $U$ norm solution $\hat u_0$, i.e., \begin{equation}\label{E:uatou0} \norm{\uopt_\alpha - \hat u_0}_U\to 0 \quad\text{ if } \alpha\to 0. \end{equation} \item The optimal state satisfies the rate of convergence \begin{equation}\label{E:regconvgen} \norm{\yopt_\alpha-\yopt_0}_H = o(\sqrt{\alpha}).
\end{equation} \end{enumerate} \end{theo} \begin{proof} The Theorem is a collection of classic results from the theory of linear inverse problems with convex constraints (given here by $\Uad$) taken from \cite[Chapter~5.4]{engl-hanke-neubauer}, see also \cite{diss-neubauer}. \end{proof} \section{Refined convergence rates under additional assumptions} To prove better rates of convergence with respect to $\alpha$, we rely on the following assumption. \begin{assu}[{\cite[Assumption 3.1]{wachsmuth2}}]\label{A:sourcestruct} Let $\uopt_0$ be a solution of \eqref{OCPl}. There exist a set $A\subset \Omega_U$, a function $w\in H$ with $\dual Tw\in L^\infty(\Omega_U)$ and constants $\kappa > 0$ and $C\ge 0$, such that there holds the inclusion \begin{equation*} \twoset{x\in\Omega_U} { \dual B\popt_0(x)=0 }\allowbreak\subset A^c \end{equation*} for the complement $A^c=\Omega_U\backslash A$ of $A$ and in addition \begin{enumerate} \item (source condition) \begin{equation}\label{E:source} \chi_{A^c} \uopt_0 = \chi_{A^c} P_{\Uad}(\dual Tw). \end{equation} \item (($\popt_0$-)measure condition) \begin{equation}\label{E:struct} \forall\ \epsilon > 0:\quad \meas(\twoset{x\in A}{0 \le \abs{\dual B\popt_0(x)} \le \epsilon}) \le C\epsilon^\kappa \end{equation} with the convention that $\kappa :=\infty$ if the left-hand side of \eqref{E:struct} is zero for some $\epsilon > 0$. \end{enumerate} \end{assu} Source conditions of the form $\uopt_0 = P_{\Uad}(\dual Tw)$ are well known in the theory of inverse problems with convex constraints, see \cite{diss-neubauer} and \cite{engl-hanke-neubauer}. However, since they are usually posed almost everywhere, thus globally, they are unlikely to hold in the optimal control setting with $T$ as in, e.g., Example~\ref{exam:poisson}, see \cite[p. 860]{wachsmuth1}. Similar measure conditions were previously used for control problems with elliptic PDEs, starting with the analysis in \cite{wachsmuth1} and \cite{deckelnick-hinze}. 
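To make the measure condition \eqref{E:struct} tangible, consider a toy example of our own (not taken from the cited works): on $\Omega_U=(0,1)$, if $\dual B\popt_0$ has an isolated zero of order $k$, say $\dual B\popt_0(x)=(x-1/2)^k$, then $\meas(\{\abs{\dual B\popt_0}\le\epsilon\})=2\epsilon^{1/k}$, so \eqref{E:struct} holds with $\kappa=1/k$. A quick numerical check:

```python
import numpy as np

# Numerical check of the measure condition (toy sketch): on Omega_U = (0, 1),
# take B* p_0(x) = (x - 1/2)^k with a zero of order k at x = 1/2.
# Then meas({ |B* p_0| <= eps }) = 2 * eps**(1/k), i.e. kappa = 1/k.
x = np.linspace(0.0, 1.0, 2_000_001)
for k in (1, 2):
    p = (x - 0.5) ** k
    for eps in (1e-2, 1e-4):
        measure = np.mean(np.abs(p) <= eps)   # approximates the Lebesgue measure
        assert abs(measure - 2 * eps ** (1 / k)) < 1e-3
```

In particular, a simple (first-order) zero gives $\kappa = 1$, while flatter zeros give smaller $\kappa$ and hence, by the results below, slower rates.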
A condition related to the measure condition was also used to establish stability results for bang-bang control problems with autonomous ODEs, see \cite[Assumption 2]{felgenhauer2003}. In all the above-mentioned references, the measure condition \eqref{E:struct} is assumed to hold with $A=\Omega_U$, thus globally. Together with formula \eqref{E:bangbang} one immediately observes that this implies bang-bang controls. The combination of both conditions in Assumption~\ref{A:sourcestruct} was introduced in \cite{wachsmuth2} and also used in \cite{wachsmuth3}. In Theorem~\ref{T:regmain} we will show that if a solution $\uopt_0$ of \eqref{OCPl} fulfills Assumption~\ref{A:sourcestruct}, we have convergence $\uopt_\alpha\to\uopt_0$ for $\alpha\to 0$. From formula \eqref{E:uatou0} in Theorem~\ref{T:invprob} we conclude $\uopt_0 = \hat u_0$, which means: If Assumption~\ref{A:sourcestruct} is valid for a solution of \eqref{OCPl}, this solution has to be the minimal $U$ norm solution \eqref{E:minnormsol}. A key ingredient in our analysis of the regularization error is the following lemma, which has its origin in the proof of \cite[Theorem 3.5]{wachsmuth2}. \begin{lemm}\label{L:l1reg} Let Assumption~\ref{A:sourcestruct}.2 be valid for a solution $\uopt_0$ of \eqref{OCPl}. Then there holds with some constant $C>0$ independent of $\alpha$ and $u$ \begin{equation}\label{E:l1reg} C\norm{u-\uopt_0}_{L^1(A)}^{1+1/\kappa} \le (\dual B\popt_0,u-\uopt_0)_A \le (\dual B\popt_0,u-\uopt_0)_U \quad\forall\ u\in\Uad \end{equation} where $(\cdot,\cdot)_A$ and $(\cdot,\cdot)_U$ denote the scalar products in $L^2(A)$ and $U=L^2(\Omega_U)$, respectively. \end{lemm} \begin{proof} For $\epsilon > 0$ we define $B_\epsilon := \twoset{x\in A}{\abs{\dual B\popt_0}\ge\epsilon}$.
Using the (pointwise) optimality condition \eqref{VarIneqpw} and Assumption~\ref{A:sourcestruct}.2, we conclude for arbitrary $u\in\Uad$ \begin{equation*} \begin{aligned} \int_{\Omega_U} (\dual B\popt_0,u-\uopt_0)_{\mathbb R} &= \int_{\Omega_U} \abs{\dual B\popt_0}\abs{u-\uopt_0} \ge \int_A \abs{\dual B\popt_0}\abs{u-\uopt_0} \\ &\ge \epsilon \norm{u-\uopt_0}_{L^1(B_\epsilon)}\\ &\ge \epsilon \norm{u-\uopt_0}_{L^1(A)} -\epsilon \norm{u-\uopt_0}_{L^1(A\backslash B_\epsilon)}\\ &\ge \epsilon \norm{u-\uopt_0}_{L^1(A)} -\epsilon \norm{u-\uopt_0}_{L^\infty(\Omega_U)} \meas(A\backslash B_\epsilon) \\ &\ge \epsilon \norm{u-\uopt_0}_{L^1(A)} -c\epsilon^{\kappa + 1} \norm{u-\uopt_0}_{L^\infty(\Omega_U)} \end{aligned} \end{equation*} where without loss of generality $c>1$. Setting $\epsilon:=c^{-2/\kappa} \norm{u-\uopt_0}_{L^1(A)}^{1/\kappa} \norm{u-\uopt_0}_{L^\infty(\Omega_U)}^{-1/\kappa}$ yields \begin{equation*} \begin{aligned} \int_A (\dual B\popt_0,u-\uopt_0)_{\mathbb R} &\ge c^{-2/\kappa} (1-\frac 1c) \norm{u-\uopt_0}_{L^\infty(\Omega_U)}^{-1/\kappa} \norm{u-\uopt_0}_{L^1(A)}^{1+1/\kappa}\\ &\ge c^{-2/\kappa} (1-\frac 1c) \norm{b-a}_{L^\infty(\Omega_U)}^{-1/\kappa} \norm{u-\uopt_0}_{L^1(A)}^{1+1/\kappa}\\ \end{aligned} \end{equation*} by the definition of $\Uad$. \end{proof} With the previous lemma, we can now improve the inequality \eqref{E:regestinvprob} (setting there $\alpha:=0$) from general inverse problem theory, since the error in the control in the $L^1$ norm now appears on the left-hand side with a factor $C>0$ independent of $\alpha$. This is in contrast to the error in the $L^2$ norm. \begin{lemm}\label{L:l1regmore} Let Assumption~\ref{A:sourcestruct}.2 hold (with possibly $\meas (A)=0$) for a solution $\uopt_0$ of \eqref{OCPl}.
Then there holds for some $C>0$ independent of $\alpha$ \begin{multline*} \norm{\yopt_{\alpha}-\yopt_0}_H^2 + C \norm{\uopt_{\alpha}-\uopt_0}_{ L^1(A)}^{1+1/\kappa} + \alpha \norm{\uopt_{\alpha}-\uopt_0}_U^2\\ \le \alpha (\uopt_0, \uopt_0-\uopt_{\alpha})_U \quad\forall\ \alpha >0. \end{multline*} \end{lemm} \begin{proof} Adding the necessary condition for $\uopt_\alpha$ \eqref{VarIneq} with $u:=\uopt_0$, i.e., \begin{equation*} 0 \le \left( \alpha\uopt_\alpha + \dual B\popt_\alpha, \uopt_0-\uopt_\alpha\right)_U , \end{equation*} to the estimate \eqref{E:l1reg} of Lemma~\ref{L:l1reg} with $u:=\uopt_\alpha$, we get \begin{equation*} \begin{aligned} C\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}^{1+1/\kappa} &\le ( \dual B(\popt_\alpha-\popt_0), \uopt_0-\uopt_\alpha )_U + \alpha (\uopt_\alpha,\uopt_0-\uopt_\alpha)_U \\ &\le -\norm{\yopt_\alpha-\yopt_0}_H^2 + \alpha (\uopt_{\alpha}-\uopt_0,\uopt_0-\uopt_{\alpha})_U\\ &\quad\quad\quad + \alpha (\uopt_0,\uopt_0-\uopt_\alpha)_U\\ &\le -\norm{\yopt_\alpha-\yopt_0}_H^2 - \alpha \norm{\uopt_{\alpha}-\uopt_0}_U^2 + \alpha (\uopt_0,\uopt_0-\uopt_\alpha)_U. \end{aligned} \end{equation*} \end{proof} The following Lemma is extracted from the proof of \cite[Lemma 3.2]{wachsmuth2}. It shows how the source condition (Assumption~\ref{A:sourcestruct}.1) is taken into account to reduce the error estimate to the set $A$. \begin{lemm}\label{L:sourcecest} Let Assumption~\ref{A:sourcestruct}.1 (source condition) be satisfied for a solution $\uopt_0$ of \eqref{OCPl}. Then there holds with a constant $C>0$ \begin{equation*} (\uopt_0, \uopt_0-u)_U \le C( \norm{T(u-\uopt_0)}_H +\norm{u-\uopt_0}_{L^1(A)} ) \quad\forall\ u\in\Uad. \end{equation*} \end{lemm} \begin{proof} The source condition is equivalent to \[ 0\le (\chi_{A^c}(\uopt_0 - \dual Tw),u-\uopt_0)_U \quad\forall\ u\in\Uad. 
\] Using this representation, we can estimate \begin{align*} (\uopt_0, \uopt_0-u)_U &\le \left(\chi_{A^c}\dual Tw,\uopt_0-u\right)_U +\left(\chi_A \uopt_0,\uopt_0-u\right)_U\\ &\le \left(w,T(\uopt_0-u)\right)_H +\left(-\dual Tw+\uopt_0, \chi_A\left(\uopt_0-u\right)\right)_U. \end{align*} Since $\dual Tw\in L^\infty(\Omega_U)$, we get the claim. \end{proof} Using this lemma, we can now state regularization error estimates. We consider different situations with respect to the fulfillment of parts of Assumption~\ref{A:sourcestruct}. \begin{theo}\label{T:regmain} For the regularization error, the following holds with positive constants $c$ and $C$ independent of $\alpha > 0$, where $\uopt_0$ is in fact $\hat u_0$ as noted before Lemma~\ref{L:l1reg}. \begin{enumerate} \item The error in the optimal state fulfills the rate of convergence \begin{equation*} \norm{\yopt_\alpha-\yopt_0}_H = o(\sqrt{\alpha}). \end{equation*} \item Let Assumption~\ref{A:sourcestruct}.1 be satisfied with $\meas(A)=0$ (source condition holds a.e. on the domain) for a solution $\uopt_0$ of \eqref{OCPl}. Then the optimal control converges with the rate \begin{equation} \norm{\uopt_{\alpha}-\uopt_0}_U \le C\sqrt{\alpha}, \end{equation} and the optimal state converges with the improved rate \begin{equation}\label{E:regconvbetter} \norm{\yopt_{\alpha}-\yopt_0}_H \le C\alpha. \end{equation} \item Let Assumption~\ref{A:sourcestruct}.2 be satisfied with $\meas(A^c)=0$ (measure condition holds a.e. on the domain) for a solution $\uopt_0$ of \eqref{OCPl}. From \eqref{E:bangbang} we conclude that $\uopt_0$ is the unique solution of \eqref{OCPl}. Then the estimates \begin{align} \norm{\uopt_{\alpha}-\uopt_0}_{L^1(\Omega_U)} &\le C\alpha^{\kappa} \label{E:regconvumeas} \\ \norm{\uopt_{\alpha}-\uopt_0}_U &\le C\alpha^{\kappa/2} \label{E:regconvumeas2} \\ \norm{\yopt_{\alpha}-\yopt_0}_H &\le C\alpha^{(\kappa+1)/2} \label{E:regconvymeas} \end{align} hold true.
If furthermore $\kappa > 1$ holds and in addition \begin{equation}\label{E:dualTlinfty} \dual T:\range(T)\to L^\infty(\Omega_U) \quad\quad\text{exists and is continuous,} \end{equation} we can improve \eqref{E:regconvymeas} to \begin{equation}\label{E:regconvymeasimp} \norm{\yopt_{\alpha}-\yopt_0}_H \le C\alpha^{\kappa}. \end{equation} \item Let Assumption~\ref{A:sourcestruct} be satisfied with $\meas(A)\cdot\meas(A^c) > 0$ (source and measure condition on parts of the domain) for a solution $\uopt_0$ of \eqref{OCPl} and let in addition $\alpha < 1$. Then the estimates \begin{align} \norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)} &\le C\alpha^{\min(\kappa,\,\frac 2{1+1/\kappa})} \label{E:regconvusourcemeas} \\ \norm{\uopt_{\alpha}-\uopt_0}_U &\le C\alpha^{\min(\kappa,\,1)/2} \label{E:regconvusourcemeas2} \\ \norm{\yopt_{\alpha}-\yopt_0}_H &\le C\alpha^{\min((\kappa+1)/2,\,1)} \label{E:regconvysourcemeas} \end{align} hold true. If furthermore $\kappa > 1$ and \eqref{E:dualTlinfty} hold, we have the improved estimate \begin{equation}\label{E:regconvusourcemeasimp} \norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)} \le C\alpha^\kappa. \end{equation} \end{enumerate} \end{theo} \begin{proof} In this proof, we denote by $C_1, \ldots, C_4$ positive constants. 1. The estimate is just a repetition of \eqref{E:regconvgen}. 3. Let us recall the estimates of Lemma~\ref{L:l1regmore}, i.e., \begin{equation}\label{E:l1regmorecopy} \norm{\yopt_{\alpha}-\yopt_0}_H^2 + C \norm{\uopt_{\alpha}-\uopt_0}_{ L^1(A)}^{1+1/\kappa} + \alpha \norm{\uopt_{\alpha}-\uopt_0}_U^2 \le \alpha (\uopt_0, \uopt_0-\uopt_{\alpha})_U. 
\end{equation} By Young's inequality we can estimate with a constant $\hat C>0$ \begin{equation}\label{E:youngkappa} \hat C\alpha \norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)} \le \tilde{C}\alpha^{\kappa+1} + \frac{C}{2} \norm{\uopt_{\alpha}-\uopt_0}_{ L^1(A)}^{1+1/\kappa} \end{equation} where $C$ is the same constant as in \eqref{E:l1regmorecopy} and $\tilde C=\tilde C(C,\hat C,\kappa)$ is the constant from Young's inequality. If $A=\Omega_U$ up to a set of measure zero, we can combine both estimates taking $\hat C:=\norm{\uopt_0}_{L^\infty}$, and move the second summand of \eqref{E:youngkappa} to the left. This yields the claim since \begin{equation*} \frac{\kappa + 1}{1+1/\kappa} = \kappa . \end{equation*} The improved estimate \eqref{E:regconvymeasimp} can be obtained with the help of \eqref{E:regconvumeas} as follows \begin{equation*} \begin{aligned} \norm{\yopt_{\alpha}-\yopt_0}_H^2 &= (\dual T(\yopt_{\alpha}-\yopt_0),\uopt_\alpha-\uopt_0)_U \le C_1 \norm{\dual T(\yopt_{\alpha}-\yopt_0)}_{L^\infty} \norm{\uopt_\alpha-\uopt_0}_{L^1}\\ &\le C_2 \norm{\yopt_{\alpha}-\yopt_0}_H \norm{\uopt_\alpha-\uopt_0}_{L^1} \le C_3 \norm{\yopt_{\alpha}-\yopt_0}_H\, \alpha^\kappa. \end{aligned} \end{equation*} 2.+4. We combine \eqref{E:l1regmorecopy} with the estimate of Lemma~\ref{L:sourcecest} (with $u:=\uopt_\alpha$), invoke Cauchy's inequality and get \begin{multline*} \norm{\yopt_{\alpha}-\yopt_0}_H^2 + C \norm{\uopt_{\alpha}-\uopt_0}_{ L^1(A)}^{1+1/\kappa} + \alpha \norm{\uopt_{\alpha}-\uopt_0}_U^2\\ \le \alpha (\uopt_0, \uopt_0-\uopt_{\alpha})_U \le C_1\alpha ( \norm{\yopt_\alpha-\yopt_0}_H +\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)})\\ \le C_2\alpha^2 + \frac 12\norm{\yopt_\alpha-\yopt_0}_H^2 + C_1\alpha \norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}. \end{multline*} We now move the second addend to the left. If $\meas(A)=0$ (case 2.), we are done. Otherwise (case 4.) 
we continue estimating, making use of \eqref{E:youngkappa}, to get \begin{equation*} \norm{\yopt_{\alpha}-\yopt_0}_H^2 + C \norm{\uopt_{\alpha}-\uopt_0}_{ L^1(A)}^{1+1/\kappa} + \alpha \norm{\uopt_{\alpha}-\uopt_0}_U^2 \le C_3\alpha^{\min(2,\kappa+1)}, \end{equation*} from which the claim follows. To establish formula \eqref{E:regconvusourcemeasimp}, we integrate \eqref{VarIneqpw} over $A$, taking $v:=\uopt_0(x)$, to end up with \begin{equation*} 0 \le \left( \alpha\uopt_\alpha + \dual B\popt_\alpha, \uopt_0-\uopt_\alpha\right)_A . \end{equation*} By $(\cdot,\cdot)_A$ we again denote the scalar product in $L^2(A)$. We add this inequality to the estimate \eqref{E:l1reg} of Lemma~\ref{L:l1reg} with $u:=\uopt_\alpha$, to get \begin{equation*} C\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}^{1+1/\kappa} \le ( \dual B(\popt_\alpha-\popt_0), \uopt_0-\uopt_\alpha )_A + \alpha (\uopt_\alpha,\uopt_0-\uopt_\alpha)_A. \end{equation*} Making use of \eqref{E:dualTlinfty} and the convergence rate \eqref{E:regconvysourcemeas} with $\kappa > 1$, we conclude \begin{equation*} \norm{\dual B(\popt_0-\popt_\alpha)}_{L^\infty(\Omega_U)} = \norm{\dual T(\yopt_0-\yopt_\alpha)}_{L^\infty(\Omega_U)} \le C_1 \norm{\yopt_0-\yopt_\alpha}_H \le C_2\alpha. \end{equation*} Since $\uopt_\alpha\in L^\infty(\Omega_U)$ by \eqref{E:Uad}, combining both estimates gives \begin{equation*} \begin{aligned} C\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}^{1+1/\kappa} &\le C_3 (\norm{\dual B(\popt_\alpha-\popt_0)}_{L^\infty(A)} + \alpha) \norm{ \uopt_0-\uopt_\alpha }_{L^1(A)} \\ &\le C_4\alpha \norm{ \uopt_0-\uopt_\alpha }_{L^1(A)} . \end{aligned} \end{equation*} Dividing the expression by the norm on the right and taking the $\kappa$th power, we are done. \end{proof} Some remarks on the previous theorem are in order. Let us compare the first with the other cases, where Assumption~\ref{A:sourcestruct} is taken (partially) into account. In all cases, we get an improved convergence rate for the optimal state. 
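As an informal sanity check of the control rates, consider the following toy sketch of our own (with $T$ the identity, so $\uopt_\alpha$ has the pointwise closed form $\uopt_\alpha = P_{\Uad}(z/(1+\alpha))$; note that this $T$ is boundedly invertible, so the problem is well-posed and the bound $\mathcal O(\alpha^\kappa)$ need not be attained sharply):

```python
import numpy as np

# Toy check with T = identity on Omega_U = (0, 1): z is chosen so that
# u_0 = sgn(x - 1/2) is bang-bang and |B* p_0| = |x - 1/2|,
# i.e. the measure condition holds with kappa = 1.
x = np.linspace(0.0, 1.0, 1_000_001)
z = np.sign(x - 0.5) * (1.0 + np.abs(x - 0.5))
u0 = np.clip(z, -1.0, 1.0)                      # = sgn(x - 1/2) a.e.

errs = []
alphas = (1e-1, 1e-2, 1e-3)
for alpha in alphas:
    u_alpha = np.clip(z / (1.0 + alpha), -1.0, 1.0)
    errs.append(np.mean(np.abs(u_alpha - u0)))  # L^1(0,1) error

# Empirical convergence order from successive error ratios; the theorem
# guarantees at least order kappa = 1 (here the observed order is higher,
# since this toy T is boundedly invertible and the bound is not tight).
orders = np.diff(np.log(errs)) / np.diff(np.log(alphas))
assert np.all(orders >= 1.0)
```

A genuinely ill-posed instance, where the rates are expected to be sharper, is treated in the numerics section below.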
The second case replicates well known estimates from the theory of inverse problems with convex constraints, see, e.g., \cite{diss-neubauer} and \cite[Theorem 5.19]{engl-hanke-neubauer}. However, as indicated in the discussion after Assumption~\ref{A:sourcestruct}, this situation is unlikely to hold in the context of optimal control problems. Concerning the ``min''-functions in the estimates of case 4, we note that the left argument is chosen if $\kappa < 1$, the right one if $\kappa > 1$. In the case $\kappa = 1$, both expressions coincide. Thus the worse part of Assumption~\ref{A:sourcestruct} with respect to the rates of cases 2 and 3 dominates the convergence behavior of the regularization errors in the mixed situation of case 4 on the whole domain $\Omega_U$. This, however, is not the case locally on $A$, as \eqref{E:regconvusourcemeasimp} shows. As mentioned after Assumption~\ref{A:sourcestruct}, case 3 implies bang-bang controls. The condition \eqref{E:dualTlinfty} is fulfilled for Example~\ref{exam:poisson} since $\dual T:\H\to H^2(\Omega)\cap \HI \hookrightarrow \Loo$ by well-known regularity theory and Sobolev imbeddings, see, e.g., \cite{evans}, if $\Omega$ is sufficiently regular. For Example~\ref{exam:timedep}, condition \eqref{E:dualTlinfty} is also valid, see \cite[p. 24]{dissnvd}. \begin{table}[hbt] \begin{center} \begin{tabular}{c|c|c|c|c} \hline quantity $\le C\alpha^r$ & here & br & there & assumptions, source\\ \hline $\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}$ &$r=\kappa$ &$\leftarrow$ &$r=\frac{\kappa}{2-\kappa}$ &$\kappa <1$ \\ &&&& by \eqref{E:regconvumeas}/\eqref{E:regconvusourcemeas}\\ \rowcolor[gray]{.9} $\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}$ &$r=\kappa$ &$=$ &$r=\kappa$ &$\kappa = 1$ or (3. and $\kappa > 1$) \\ \rowcolor[gray]{.9} &&&& by \eqref{E:regconvumeas} or \eqref{E:regconvusourcemeas}\\ $\norm{\uopt_{\alpha}-\uopt_0}_{L^1(A)}$ &$r=\kappa$ &$\leftarrow$ &$r=\frac{\kappa+1}{2}$ &4. 
and $\kappa > 1$\\ &&&& by \eqref{E:regconvusourcemeasimp} \\ \hline $\norm{\uopt_{\alpha}-\uopt_0}_{U}$ &$r=\frac{\kappa}{2}$ &$\leftarrow$ &$r=\frac{\kappa}{2(2-\kappa)}$ &$\kappa < 1$\\ &&&& by \eqref{E:regconvumeas2} or \eqref{E:regconvusourcemeas2}\\ \rowcolor[gray]{.9} $\norm{\uopt_{\alpha}-\uopt_0}_{U}$ &$r=\frac{\kappa}{2}$ &$=$ &$r=\frac{\kappa}{2}$ &$\kappa=1$ or (3. and $\kappa > 1$)\\ \rowcolor[gray]{.9} &&&& by \eqref{E:regconvumeas2}\\ $\norm{\uopt_{\alpha}-\uopt_0}_{U}$ &$r=\frac{1}{2}$ &$=$ &$r=\frac{1}{2}$ &4. and $\kappa > 1$\\ &&&& by \eqref{E:regconvusourcemeas2} \\ \hline $\norm{\yopt_{\alpha}-\yopt_0}_H $ &$r=\frac{\kappa+1}{2}$ &$\leftarrow$ &$r=\frac{1}{2-\kappa}$ &$\kappa < 1$\\ &&&& by \eqref{E:regconvymeas} or \eqref{E:regconvysourcemeas}\\ \rowcolor[gray]{.9} $\norm{\yopt_{\alpha}-\yopt_0}_H $ &$r=1$ &$=$ &$r=1$ &$\kappa = 1$ or (4. and $\kappa > 1$)\\ \rowcolor[gray]{.9} &&&& by \eqref{E:regconvymeas} or \eqref{E:regconvysourcemeas}\\ $\norm{\yopt_{\alpha}-\yopt_0}_H $ &$r=\kappa$ &$\leftarrow$ &$r=\frac{\kappa+1}{2}$ &3. and $\kappa > 1$\\ &&&& by \eqref{E:regconvymeasimp}\\ \end{tabular} \end{center} \caption{ Comparison of convergence rates given in Theorem~\ref{T:regmain}.3+4 (``here'') with \cite[Theorem 3.2]{wachsmuth2} (``there''), assuming always \eqref{E:dualTlinfty}. The column ``br'' points to the \emph{better rate} (i.e. larger $r$) unless both coincide ($=$). We abbreviate by ``3.'' and ``4.'' the assumptions of Theorem~\ref{T:regmain}.3 and 4, respectively. } \label{tab:comp} \end{table} Let us finally compare in Table~\ref{tab:comp} the cases 3 and 4 with the convergence results of \cite[Theorem 3.2]{wachsmuth2} to point out which rates stated above are improved. Note for comparison, that \eqref{E:dualTlinfty} is always assumed in \cite[Theorem 3.2]{wachsmuth2} and $p_\alpha$ there is $\dual B\popt_\alpha$ here. 
If we assume \eqref{E:dualTlinfty}, we can estimate $ \norm{\dual B(\popt_0-\popt_\alpha)}_{L^\infty} =\norm{\dual T(\yopt_0-\yopt_\alpha)}_{L^\infty} \le C \norm{\yopt_0-\yopt_\alpha}_H $, and combine this with \eqref{E:regconvymeas}, \eqref{E:regconvymeasimp}, or \eqref{E:regconvysourcemeas}. Since in \cite[Theorem 3.2]{wachsmuth2} the convergence rates for $\norm{p_0-p_\alpha}_{L^\infty}$ are obtained in the same way, comparing the state rates gives the same results as comparing $\norm{\dual B(\popt_0-\popt_\alpha)}_{L^\infty}$ with $\norm{p_0-p_\alpha}_{L^\infty}$. We therefore omit the latter.
\clearpage
\section{Necessity of the additional assumptions}
Let us now consider the question of necessity of Assumption~\ref{A:sourcestruct} to obtain convergence rates, i.e., a converse of Theorem~\ref{T:regmain}. We first show that a convergence rate $\norm{\yopt_\alpha - \yopt_0}_H\le C \alpha$ implies that the source condition \eqref{E:source} holds at least on $\twoset{x\in\Omega_U}{ \dual B\popt_0(x)=0 }$. The following theorem is a version of \cite[Theorem 4]{wachsmuth3}, simplified for our purposes. It resembles a necessity result known from the theory of inverse problems, see, e.g., \cite[Theorem 5.19]{engl-hanke-neubauer} or \cite{diss-neubauer}. However, in inverse problems, the condition $T\uopt_0 = z$ is typically assumed.
\begin{theo}\label{T:necsource}
Let $\hat u_0$ be the minimal $U$-norm solution of \eqref{OCPl} defined in \eqref{E:minnormsol}. If we assume a convergence rate $\norm{\yopt_\alpha - \yopt_0}_H=\mathcal{O}(\alpha)$, then there exists a function $w\in H$ such that $\hat u_0 = P_{\Uad} (\dual Tw)$ holds pointwise a.e. on
\begin{equation}\label{E:Kset}
K:=\twoset{x\in\Omega_U}{ \dual B\popt_0(x)=0}.
\end{equation}
Thus \eqref{E:source} holds on $K$ instead of $A^c$. If even $\norm{\yopt_\alpha - \yopt_0}_H=o(\alpha)$, then $\hat u_0$ vanishes on $K$.
\end{theo}
\begin{proof}
We integrate the necessary condition \eqref{VarIneqpw} over $K$ to obtain
\[ 0 \le \left( \alpha\uopt_\alpha + \dual TT\left(\uopt_\alpha-\hat u_0\right), u-\uopt_\alpha \right)_K\quad\quad\forall\ u\in\Uad. \]
Dividing the expression by $\alpha$ and passing to the limit, we obtain with the help of \eqref{E:uatou0} the inequality
\[ 0 \le (\dual T\dot{y_0} + \hat u_0 , u - \hat u_0)_K \quad\quad\forall\ u\in\Uad \]
for any weak subsequential limit $\dot{y_0}$ of $\frac 1\alpha (\yopt_\alpha - \yopt_0)$, which exists due to the assumed rate. Taking $w:=-\dot{y_0}$, we obtain the equation $\hat u_0(x) = P_{[a(x),b(x)]}((\dual Tw)(x))$ pointwise on $K$ by varying $u$. Since $P_{\Uad}$ acts pointwise, we get the claim. The second assertion follows from the equality $\dot{y_0} = 0$ in case of $\norm{\yopt_\alpha - \yopt_0}_H=o(\alpha)$.
\end{proof}
We next show that if \eqref{E:dualTlinfty} and $\kappa > 1$ hold true, convergence as in Theorem~\ref{T:regmain}.3 implies the measure condition \eqref{E:struct}.
\begin{theo}\label{T:necmeasure}
Let us assume
\begin{equation}\label{E:zerosetinAc}
\exists\ A\subset \Omega_U:\quad \twoset{x\in\Omega_U}{\dual B\popt_0(x)=0}\subset A^c.
\end{equation}
Let us further assume the \emph{$\sigma$-condition}
\begin{equation}\label{E:sigmacond}
\exists\ \sigma > 0\ \forall'\ x\in\Omega_U:\quad a \le -\sigma < 0 < \sigma \le b
\end{equation}
where ``$\forall '$'' denotes ``for almost all''. If $\kappa > 1$ and convergence rates $\norm{\uopt_\alpha - \uopt_0}_{L^p(A)}^p + \norm{\dual B(\popt_\alpha - \popt_0)}_{L^\infty(A)} \le C\alpha^\kappa$ are known to hold for a solution $\uopt_0$ of \eqref{OCPl} and some real $p\ge 1$, then the measure condition \eqref{E:struct} from Assumption~\ref{A:sourcestruct} is fulfilled.
\end{theo} \begin{proof} Let us introduce the sets \begin{align*} A_0 &:= \twoset{x\in A}{-\dual B\popt_0 < 0 \text{ and } \alpha a \ge -\dual B\popt_\alpha},\\ A_1 &:= \twoset{x\in A}{-\dual B\popt_0 < 0 \text{ and } \alpha a < -\dual B\popt_\alpha < \alpha b},\\ A_2 &:= \twoset{x\in A}{ -\dual B\popt_0 < 0 < \alpha b \le -\dual B\popt_\alpha},\\ A_3 &:= \twoset{x\in A}{ -\dual B\popt_0 > 0 \text{ and } \alpha a < -\dual B\popt_\alpha < \alpha b},\\ A_4 &:= \twoset{x\in A}{ -\dual B\popt_0 > 0 > \alpha a \ge -\dual B\popt_\alpha }, \quad\text{and}\\ A_5 &:= \twoset{x\in A}{ -\dual B\popt_0 > 0 \text{ and } \alpha b \le -\dual B\popt_\alpha } . \end{align*} We also need two subsets of $A_1$ and $A_3$, respectively, namely by \eqref{E:sigmacond} \begin{align*} \tilde A_1 &:= \twoset{x\in A}{-\dual B\popt_0 < 0 \text{ and } -\alpha \frac\sigma 2 \le -\dual B\popt_\alpha \le \alpha \frac\sigma 2} \subset A_1, \quad\text{and}\\ \tilde A_3 &:= \twoset{x\in A}{ -\dual B\popt_0 > 0 \text{ and } -\alpha \frac\sigma 2 \le -\dual B\popt_\alpha \le \alpha \frac\sigma 2} \subset A_3. \end{align*} From \eqref{E:zerosetinAc} we conclude $A = A_0 \cup A_1 \cup A_2 \cup A_3 \cup A_4\cup A_5$, and from Lemma~\ref{L:ucharact} we infer \begin{equation}\label{E:palphameas} \begin{aligned} \int_A \abs{\uopt_0-\uopt_\alpha}^p &= \int_{A_1} \abs{a+\alpha^{-1}\dual B \popt_\alpha}^p +\int_{A_3} \abs{b+\alpha^{-1}\dual B \popt_\alpha}^p +\int_{A_2\cup A_4} \abs{a-b}^p\\ &\ge \int_{A_1} \abs{a+\alpha^{-1}\dual B \popt_\alpha}^p +\int_{A_3} \abs{b+\alpha^{-1}\dual B \popt_\alpha}^p\\ &\ge \int_{\tilde A_1} \abs{a+\alpha^{-1}\dual B \popt_\alpha}^p +\int_{\tilde A_3} \abs{b+\alpha^{-1}\dual B \popt_\alpha}^p\\ &\ge (\frac \sigma 2)^p \meas( \twoset{x\in A}{\abs{\dual B\popt_\alpha} \le \frac \sigma 2\alpha } ). 
\end{aligned} \end{equation} Note for the last step that $\tilde A_1 \cup \tilde A_3 = \twoset{x\in A}{\abs{\dual B\popt_\alpha} \le \frac \sigma 2\alpha } $ due to \eqref{E:zerosetinAc}. From $\norm{\uopt_\alpha - \uopt_0}_{L^p(A)}^p \le C\alpha^\kappa$ and \eqref{E:palphameas} we conclude \[ \meas(\twoset{x\in A}{\abs{\dual B\popt_\alpha} \le C_1\alpha }) \le C_2\alpha^\kappa. \] Since $\kappa > 1$ and $\norm{\dual B(\popt_\alpha - \popt_0)}_{L^\infty(A)} \le C\alpha^\kappa$, we get for some arbitrarily chosen $x\in A$ with $\abs{\dual B\popt_0(x)} \le \alpha C_1/2$ the estimate \[ \abs{\dual B\popt_\alpha(x)} \le \abs{\dual B\popt_0(x)} + \abs{\dual B(\popt_\alpha-\popt_0)(x)} \le \frac{C_1}{2} (\alpha + \alpha^{\kappa -\epsilon}) \le C_1\alpha \] for some sufficiently small $\epsilon = \epsilon(C_1,\kappa) > 0$. Consequently, we have \[ \meas(\twoset{x\in A}{ \abs{\dual B\popt_0} \le \frac{C_1}{2} \alpha }) \le C_2\alpha^\kappa. \] \end{proof} Concerning the previous Theorem, let us mention the related result \cite[Theorem 8]{wachsmuth3}. It has the same implication, but assumes \eqref{E:regconvumeas2} and \eqref{E:regconvymeas}, which imply the prerequisites of Theorem~\ref{T:necmeasure} in case of \eqref{E:dualTlinfty}. For the case $\kappa \le 1$, it is an open question whether the previous Theorem (and likewise \cite[Theorem 8]{wachsmuth3}) is valid. Note that the $\sigma$-condition \eqref{E:sigmacond} is a strengthening of the condition ``$a \le 0 \le b$ almost everywhere''. For \eqref{OCPl}, the problem we finally want to solve, this weaker assumption can always be met by a simple transformation of the variables. \section{Bang-bang solutions} In this section, we introduce at first a second measure condition and show that it implies the same convergence results as in Theorem~\ref{T:regmain}.3, thus might replace the $\popt_0$-measure condition \eqref{E:struct} from Assumption~\ref{A:sourcestruct}. 
We analyze necessity of the condition to obtain convergence rates and show that for bang-bang solutions fulfilling \begin{equation}\label{E:bangbangsol} \meas(\twoset{x\in\Omega_U}{ \dual B\popt_0(x)=0 })=0, \end{equation} both measure conditions coincide. Note that \eqref{E:bangbangsol} by \eqref{E:bangbang} implies uniqueness of the solution $\uopt_0$ of \eqref{OCPl}. \begin{defi}[$\popt_\alpha$-measure condition] If for the set \begin{equation}\label{E:setialpha} I_\alpha:=\twoset{x\in \Omega_U} {\alpha a < -\dual B\popt_\alpha < \alpha b} \end{equation} the condition \begin{equation}\label{E:struct2} \exists\ \bar\alpha >0\ \forall\ 0<\alpha<\bar\alpha: \quad \meas(I_\alpha)\le C\alpha^\kappa \end{equation} holds true (with the convention that $\kappa := \infty$ if the measure in \eqref{E:struct2} is zero for all $0<\alpha<\bar\alpha$), we say that the \emph{$\popt_\alpha$-measure condition} is fulfilled. \end{defi} The equality in the estimate \eqref{E:palphameas} from the proof of Theorem~\ref{T:necmeasure} shows that if the $\popt_\alpha$-measure condition holds and we assume the additional condition $ \meas(A_2\cup A_4)\le C\alpha^\kappa $ (with $A_i$ as in that proof), we get the convergence rate $ \norm{\uopt_\alpha - \uopt_0}_{L^p(\Omega_U)}^p \le C\alpha^\kappa $ for each $1\le p < \infty$ given \eqref{E:bangbangsol}. Interestingly, these additional conditions are not needed to obtain convergence in the control, as we will now show. \begin{theo}\label{T:convpalpha} If the $\popt_\alpha$-measure condition \eqref{E:struct2} and the $\sigma$-condition \eqref{E:sigmacond} are fulfilled, the convergence rates \begin{equation}\label{E:convpalpharates} \norm{\uopt_{\alpha}-\uopt_0}_{L^1(\Omega_U)} \le C\alpha^{\kappa} \quad\text{and}\quad \norm{\yopt_{\alpha}-\yopt_0}_H \le C\alpha^{(\kappa+1)/2} \end{equation} hold true for any solution $\uopt_0$ of \eqref{OCPl}. 
If in addition $\kappa > 1$ and \eqref{E:dualTlinfty} is fulfilled, we have the improved estimate \begin{equation}\label{E:convpalpharateimp} \norm{\yopt_{\alpha}-\yopt_0}_H \le C\alpha^{\kappa}. \end{equation} \end{theo} \begin{proof} Let $u\in\Uad$ be arbitrarily chosen. For the active set $I_\alpha^c$ of $\popt_\alpha$, which is the complement of the inactive set $I_\alpha$ defined in \eqref{E:setialpha}, we have by Lemma~\ref{L:ucharact}, making use of the $\sigma$-condition \eqref{E:sigmacond}, the estimate \begin{equation}\label{E:skpest1} (\dual B\popt_\alpha,u-\uopt_\alpha)_{I_\alpha^c} = \int_{I_\alpha^c} \abs{\dual B\popt_\alpha}\abs{u-\uopt_\alpha} \ge \sigma \alpha \norm{u-\uopt_\alpha}_{L^1(I_\alpha^c)}. \end{equation} Invoking the $\popt_\alpha$-measure condition \eqref{E:struct2}, we get on the inactive set itself the estimate \begin{equation}\label{E:skpest2} \abs{(\dual B\popt_\alpha,u-\uopt_\alpha)_{I_\alpha}} \le C\alpha \norm{u-\uopt_\alpha}_{L^1(I_\alpha)} \le CC_{ab}\alpha^{\kappa+1} \end{equation} with $C_{ab}=\max(\norm{a}_\infty,\norm{b}_\infty)$. Consequently, with $L^1:=L^1(\Omega_U)$ we get \begin{equation} \begin{aligned} \sigma \alpha \norm{u-\uopt_\alpha}_{L^1}-C\alpha^{\kappa+1} &\stackrel{\eqref{E:struct2}}{\le} \sigma \alpha \norm{u-\uopt_\alpha}_{L^1} - \sigma \alpha \norm{u-\uopt_\alpha}_{L^1(I_\alpha)}\\ &\stackrel{\phantom{\eqref{E:struct2}}}{=} \sigma \alpha \norm{u-\uopt_\alpha}_{L^1(I_\alpha^c)}\\ &\stackrel{\eqref{E:skpest1}}{\le} (\dual B\popt_\alpha,u-\uopt_\alpha)_{I_\alpha^c}\\ &\stackrel{\phantom{\eqref{E:struct2}}}{=} (\dual B\popt_\alpha,u-\uopt_\alpha) -(\dual B\popt_\alpha,u-\uopt_\alpha)_{I_\alpha}\\ &\stackrel{\eqref{E:skpest2}}{\le} (\dual B\popt_\alpha,u-\uopt_\alpha) + C\alpha^{\kappa+1}. \end{aligned} \end{equation} Rearranging terms, we conclude \begin{equation} \sigma \alpha \norm{u-\uopt_\alpha}_{L^1} \le (\dual B\popt_\alpha,u-\uopt_\alpha) + C\alpha^{\kappa+1}. 
\end{equation}
Taking $u:=\uopt_0$ in the previous estimate and adding the necessary condition \eqref{VarIneq} for $\uopt_0$ for the special case $u:=\uopt_\alpha$, i.e.,
\begin{equation}
(-\dual B\popt_0,\uopt_0 - \uopt_\alpha ) \ge 0,
\end{equation}
we get the estimate
\begin{equation}
\begin{aligned}
\sigma \alpha \norm{\uopt_0-\uopt_\alpha}_{L^1} &\le (\dual B(\popt_\alpha-\popt_0),\uopt_0-\uopt_\alpha) + C\alpha^{\kappa+1}\\
&= - \norm{\yopt_{\alpha}-\yopt_0}_I^2 + C\alpha^{\kappa+1},
\end{aligned}
\end{equation}
from which the claim follows. The improved estimate can be established as in the proof of Theorem~\ref{T:regmain}.
\end{proof}
The $\popt_\alpha$-measure condition \eqref{E:struct2} is slightly stronger than what is actually necessary in order to obtain the above convergence rates in the control.
\begin{coro}
Let $\uopt_0$ be a solution of \eqref{OCPl} and let us assume that the $\sigma$-condition \eqref{E:sigmacond} is valid. If the convergence rate $ \norm{\uopt_\alpha - \uopt_0}_{L^p(\Omega_U)}^p \le C\alpha^\kappa $ is known to hold for some real $p\ge 1$ and some real $\kappa > 0$, then the measure condition
\begin{equation}
\meas(\twoset{x\in \Omega_U}{ \alpha (a+\epsilon) \le -\dual B\popt_\alpha(x) \le \alpha (b-\epsilon) }) \le \frac{C}{\epsilon^p} \alpha^\kappa
\end{equation}
is fulfilled for each $0<\epsilon<\sigma$.
\end{coro}
\begin{proof}
This follows from the proof of Theorem~\ref{T:necmeasure}.
\end{proof}
If the limit problem possesses a certain regularity, the $\popt_\alpha$-measure condition is not stronger than the $\popt_0$-measure condition, and, as we show afterwards, both conditions coincide.
\begin{lemm}\label{L:palphamcond}
Let Assumption~\ref{A:sourcestruct} hold with $\meas(A^c)=0$ ($\popt_0$-measure condition is valid a.e. on $\Omega_U$). Let furthermore $\kappa \ge 1$ and \eqref{E:dualTlinfty} be valid. Then the $\popt_\alpha$-measure condition \eqref{E:struct2} is fulfilled.
\end{lemm} \begin{proof} Since the set $I_\alpha$ from \eqref{E:setialpha} fulfills $ I_\alpha \subset \twoset{x\in \Omega_U}{\abs{\dual B\popt_\alpha(x)} \le C\alpha} $ with $C=\max(\norm{a}_\infty,\norm{b}_\infty)$, we conclude with \eqref{E:dualTlinfty} and Theorem~\ref{T:regmain} that if $x\in I_\alpha$ and $\kappa \ge 1$, we have \[ \abs{\dual B\popt_0(x)}\le \abs{\dual B\popt_\alpha(x)} + \abs{\dual B(\popt_0-\popt_\alpha)(x)} \le C\alpha. \] With the $\popt_0$-measure condition \eqref{E:struct} we obtain the estimate \[ \meas(I_\alpha) \le \meas(\twoset{x\in \Omega_U}{\abs{\dual B\popt_0(x)} \le C\alpha}) \le C\alpha^\kappa, \] which concludes the proof. \end{proof} \begin{coro} Let a bang-bang solution be given which fulfills \eqref{E:bangbangsol}. In the case of $\kappa > 1$, \eqref{E:dualTlinfty}, and the $\sigma$-condition \eqref{E:sigmacond}, both measure conditions are equivalent. \end{coro} \begin{proof} One direction of the claim, namely ``$\popt_0$-m.c. $\Rightarrow$ $\popt_\alpha$-m.c.'', has already been shown in Lemma~\ref{L:palphamcond}. For the other direction, we know from Theorem~\ref{T:convpalpha} that the convergence rates \eqref{E:convpalpharates} and \eqref{E:convpalpharateimp} hold, which by \eqref{E:dualTlinfty} and Theorem~\ref{T:necmeasure} imply the $\popt_0$-measure condition. \end{proof} Let us now consider the situation that the optimal adjoint state fulfills the regularity \begin{equation}\label{E:hreg} \exists\ C>0 : \norm{\partial_x \dual B\popt_\alpha}_{L^\infty(\Omega_U)} \le C \end{equation} with a constant $C>0$ independent of $\alpha$ and $\partial_x$ denoting the weak differential operator. This bound is valid, e.g., for Example~\ref{exam:timedep}.2 (located controls), see \cite[p. 30]{dissnvd} for a proof. Furthermore, we assume the Sobolev regularity $a$, $b\in W^{1,\infty}(\Omega_U)$ for the control bounds. 
Since the orthogonal projection possesses for $f\in W^{1,\infty}(\Omega_U)$ the property \[ \norm{\partial_x P_{\Uad}(f)}_{L^\infty(\Omega_U)} \le \norm{\partial_x f}_{L^\infty(\Omega_U)} + \norm{\partial_x a}_{L^\infty(\Omega_U)} + \norm{\partial_x b}_{L^\infty(\Omega_U)}, \] see, e.g. \cite[Corollary 2.1.8]{ziemer}, we obtain with the projection formula \eqref{FONC} and the constant \begin{equation}\label{E:Cab} C_{ab} := \norm{\partial_x a}_{L^\infty(\Omega_U)} + \norm{\partial_x b}_{L^\infty(\Omega_U)} \end{equation} a bound on the derivative of the optimal control, namely \begin{equation}\label{E:uaainf} \norm{\partial_x \uopt_\alpha}_{L^\infty(\Omega_U)} \le \frac 1\alpha \norm{\partial_x \dual B\popt_\alpha}_{L^\infty(\Omega_U)} + C_{ab} \stackrel{\eqref{E:hreg}}{\le} C\frac 1\alpha, \end{equation} if $\alpha > 0$ is sufficiently small. If the $\popt_\alpha$-measure condition \eqref{E:struct2} is valid, this decay of smoothness in dependence of $\alpha$ can be relaxed in weaker norms, as the following Lemma shows. \begin{lemm}[Smoothness decay in the derivative]\label{L:smdecderiv} Let the $\popt_\alpha$-measure condition \eqref{E:struct2} be fulfilled and the regularity condition \eqref{E:hreg} be valid as well as $a$, $b\in W^{1,\infty}(\Omega_U)$. Then there holds with the constant $C_{ab}$ defined in \eqref{E:Cab} for sufficiently small $\alpha >0$ and each $p$ with $1\le p < \infty$ the inequality \begin{equation} \norm{\partial_x \uopt_\alpha}_{L^p(\Omega_U)} \le C\max(C_{ab},\alpha^{\kappa/p-1}) \end{equation} with a constant $C>0$ independent of $\alpha$. Note that $C_{ab}=0$ in the case of constant control bounds $a$ and $b$. 
\end{lemm}
\begin{proof}
We invoke \eqref{E:struct2} and \eqref{E:uaainf} to get the estimate
\begin{equation*}
\begin{aligned}
\norm{\partial_x \uopt_\alpha}_{L^p(\Omega_U)}^p &\le \meas(I_\alpha) \norm{\partial_x \uopt_\alpha}_{L^\infty(\Omega_U)}^p + \meas(\Omega_U)C_{ab}^p\\
&\le C\max(\alpha^{\kappa-p},C_{ab}^p)
\end{aligned}
\end{equation*}
with the set $I_\alpha$ from \eqref{E:setialpha}.
\end{proof}
Let us now briefly sketch an application of the previous lemma in the numerical analysis of a suitable finite element discretization of Example~\ref{exam:timedep}.2 (located controls), which has recently been analyzed in detail in \cite{dissnvd}, see also \cite{danielshinze}; the analysis is founded on a novel discretization scheme proposed in \cite{DanielsHinzeVierling}. Discretizing the regularized problem~\eqref{OCP} in space and time, one ends up with a problem $(\mathbb P_{kh})$ depending on the regularization parameter $\alpha > 0$ and the grid sizes $k$ and $h$ of the time and space grids, respectively, of the finite element discretization. This discretization is used later in the numerics section, where it is described in more detail. One can show that this problem again has a unique solution $\uopt_{\alpha,kh}$ and that the error fulfills ($\uopt_0$ being the unique solution of \eqref{OCPl})
\begin{multline}\label{E:theorem77}
\norm{\uopt_0-\uopt_{\alpha,kh}}_U^2 + \norm{\uopt_0-\uopt_{\alpha,kh}}_{L^1(\Omega_U)} \\
\le C \left( \alpha + h^2 + k^2 \max(1,C_{ab},\alpha^{\kappa/2-1}) \right)^\kappa
\end{multline}
with $C>0$ independent of $\alpha$, $k$, and $h$, see \cite[Theorem 77]{dissnvd}. Here, Lemma~\ref{L:smdecderiv} is used in the proof to get the factor $\alpha^{\kappa/2-1}$. This factor is clearly better (as $\alpha \to 0$) than the factor $\alpha^{-1}$ one would obtain using estimate \eqref{E:uaainf} alone. Using Lemma~\ref{L:smdecderiv} one can also show error estimates for other quantities, e.g., an adjoint state error.
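The scaling in Lemma~\ref{L:smdecderiv} can be observed on a drastically simplified one-dimensional model. In the following plain-Python sketch (all concrete choices are illustrative assumptions, not data from this paper), we mimic the projection formula with constant bounds $\pm 1$ (so $C_{ab}=0$) and a model adjoint $-\dual B\popt_\alpha(t)=t$ on $(-1,1)$, for which $\kappa=1$; the discrete $L^p$ norm of the derivative of $\uopt_\alpha(t)=P_{[-1,1]}(t/\alpha)$ then scales like $\alpha^{\kappa/p-1}$:

```python
def dudt_lp_norm(alpha, p, n=200000):
    """Midpoint-rule L^p(-1,1) norm of the derivative of
    u_alpha(t) = clip(t / alpha, -1, 1); the derivative equals
    1/alpha on the inactive set |t| <= alpha and 0 on the active set."""
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        t = -1.0 + (i + 0.5) * h
        du = 1.0 / alpha if abs(t) <= alpha else 0.0
        s += du ** p * h
    return s ** (1.0 / p)

p = 2.0
ratio = dudt_lp_norm(0.05, p) / dudt_lp_norm(0.1, p)
# The lemma predicts a scaling alpha^(kappa/p - 1) with kappa = 1, so halving
# alpha should multiply the norm by 2^(1 - 1/p) = sqrt(2) for p = 2.
print(ratio)  # close to 1.4142
```

The same experiment with other values of $p$ reproduces the factor $2^{1-1/p}$, in line with the exponent $\kappa/p-1$ of the lemma.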
\section{A numerical example}
To validate numerically the new convergence rates for the regularization error given in Theorem~\ref{T:regmain}.3, we construct in the next subsection a known (unique) solution $\uopt_0$ together with its problem data. The problem data (but not the solution $\uopt_0$) is used to numerically solve the \emph{regularized} problem \eqref{OCP}. In the subsection after that, we describe how this approximation is computed. In the last subsection, we present and discuss the numerical results.
\subsection{A limit problem with given unique solution}
To build a concrete instance of the limit problem \eqref{OCPl}, we consider the situation of Example~\ref{exam:timedep}.2 (heat equation with located controls). We modify the heat equation \eqref{e:heateq} in that we take into account a fixed initial value $y(0)=y_0$, which can be interpreted as a modification of $z$ in \eqref{OCPl}, more precisely $z = y_d - S(0,y_0)$, where $S(f,g)$ denotes the solution of the heat equation
\begin{equation}\label{E:heatfg}
\begin{aligned}
\partial_t y -\Delta y &= f &&\text{in }I\times\Omega\,,\\
y&=0&&\text{on } I\times\partial\Omega\,,\\
y(0)&=g&&\text{in } \Omega\,.
\end{aligned}
\end{equation}
We consider a given exact solution of the limit problem \eqref{OCPl} which we denote by $(\uopt,\yopt,\popt)$, thus omitting the index for $\alpha = 0$. Note that in what follows $\yopt = S(B\uopt,y_0)$, which differs from $T\uopt$. In the numerical procedure described below, we of course only make use of the problem data. The solution triple $(\uopt,\yopt,\popt)$ is only used to evaluate the error norms. To understand the construction of the test example, let us elaborate in more detail on the weak formulations of the solution operator $S$ and its adjoint for Example~\ref{exam:timedep}.2. This also motivates the discretization schemes stated below.
With the space \[ W(I):=\twoset{v\in \LIIV}{v_t \in \LIIVd} , \] the operator \begin{equation}\label{E:operatorS} S: \LIIVd\times \H \rightarrow W(I),\quad (f,g) \mapsto y:= S(f,g) , \end{equation} denotes the weak solution operator associated with the heat equation \eqref{E:heatfg}, which is defined as follows. For $(f,g) \in \LIIVd\times \H$ the function $y\in W(I)$ with $\langle \cdot,\cdot \rangle := \langle \cdot,\cdot \rangle_{\Vd\V}$ satisfies the two equations \begin{subequations}\label{E:WF} \begin{align} y(0)={}&g \\ \begin{split} \int_0^T \bigg\langle \partial_t y(t),v(t)\bigg\rangle + a(y(t),v(t))\, dt ={}& \int_0^T \bigg\langle f(t),v(t)\bigg\rangle\, dt\\ \phantom{=}{}&\quad\forall\, v\in \LIIV. \end{split} \end{align} \end{subequations} Note that by the embedding $W(I)\hookrightarrow C([0,T],\H)$, see, e.g., \cite[Theorem 5.9.3]{evans}, the first relation is meaningful.\\ In the preceding equation, the bilinear form $a:H^1(\Omega)\times H^1(\Omega)\to\mathbb R$ is given by \[ a(f,g):= \int_\Omega \nabla f(x) \nabla g(x)\ dx. \] The equations \eqref{E:WF} yield an operator $S$ in the sense of \eqref{E:operatorS}: \begin{lemm}[Properties of the solution operator $S$] \mbox{} \begin{enumerate} \item For every $(f,g) \in \LIIVd\times \H$ a unique state $y \in W(I)$ satisfying \eqref{E:WF} exists. Thus the operator $S$ from \eqref{E:operatorS} exists. Furthermore the state fulfills \begin{equation}\label{E:stabS} \norm{y}_{W(I)} \le C \left(\norm{f}_{\LIIVd}+\norm{g}_{\H}\right). \end{equation} \item Consider the bilinear form $A:W(I)\times W(I)\to\mathbb R$ given by \begin{equation}\label{bilinA} A(y,v):= \int_0^T -\bigg\langle v_t,y\bigg\rangle + a(y,v)\, dt + \bigg\langle y(T),v(T)\bigg\rangle \end{equation} with $\langle \cdot,\cdot \rangle := \langle \cdot,\cdot \rangle_{\Vd\V}$. 
Then for $y\in W(I)$, equation \eqref{E:WF} is equivalent to
\begin{equation}\label{E:WFM}
A(y,v) = \int_0^T \bigg\langle f,v\bigg\rangle\, dt + (g,v(0))_{\H}\quad\forall\ v\in W(I).
\end{equation}
Furthermore, $y$ is the only function in $W(I)$ fulfilling equation \eqref{E:WFM}.
\end{enumerate}
\end{lemm}
\begin{proof}
This can be derived using standard results, see \cite[Lemma~1]{dissnvd}.
\end{proof}
One can show with standard results, see, e.g., \cite[Lemma~2]{dissnvd}, that the optimal adjoint state $\popt \in W(I)$ is uniquely determined as the weak solution of the adjoint equation
\begin{equation}\label{E:AA}
A(v,\popt) = \int_0^T \langle \yopt-y_d,v\rangle_{\Vd\V}\, dt \quad \forall\ v\in W(I).
\end{equation}
This equation corresponds to the backward heat equation
\begin{equation}\label{E:bwheat}
\begin{aligned}
-\partial_t \bar p -\Delta \bar p &= \yopt-y_d &&\text{in }I\times\Omega\,,\\
\bar p&=0&&\text{on } I\times\partial\Omega\,,\\
\bar p(T)&=0 &&\text{on } \Omega.
\end{aligned}
\end{equation}
Let us now construct the test example. We make use of the fact that instead of the linear control operator $B$, given by \eqref{E:Bloccont}, we can also use an \emph{affine linear} control operator
\begin{equation}\label{E:Btilde}
\tilde B: U\rightarrow L^2(I,\Vd)\,,\quad u\mapsto g_0 + Bu
\end{equation}
where $g_0$ is a fixed function of suitable regularity; this is admissible since $g_0$ can be interpreted as a modification of $z$. With the space-time domain $\Omega\times I := (0,1)^2 \times (0,0.5)$, thus $T_e=0.5$, we choose the optimal control to be the lower bound of the admissible set, i.e., $\uopt := a_1 := -0.2$. For the upper bound we set $b_1:= 0.2$. With the function
\[ g_1(x_1,x_2) := \sin(\pi x_1)\sin(\pi x_2) \]
the optimal adjoint state is chosen as
\[ \popt(t,x_1,x_2) := (T_e-t)^{1/\kappa} g_1(x_1,x_2) \]
for some fixed $\kappa >0$ specified below.
With the constant $a:=2$, we take for the optimal state \begin{equation*} \yopt(t,x_1,x_2) := \cos\left(\frac{t}{T_e}\,2\pi a\right)\cdot g_1(x_1,x_2)\,, \end{equation*} from which we derive by \eqref{E:bwheat} \[ -\partial_t \popt - \Delta \popt = \frac 1\kappa (T_e-t)^{1/\kappa - 1} g_1 - (T_e-t)^{1/\kappa}\Delta g_1 = \yopt - y_d \,, \] which gives $y_d$. We also get the initial value of the optimal state $\yopt$: \[ y_0(x_1,x_2) = \yopt(0,x_1,x_2) = g_1(x_1,x_2)\,. \] Finally we obtain \begin{equation*} \begin{aligned} g_0&= \partial_t \yopt - \Delta \yopt - B\uopt\\ &= g_1 2 \pi \left( -\frac{a}{T_e} \sin\left(\frac{t}{T_e}\,2\pi a\right) + \pi\cos\left(\frac{t}{T_e}\,2\pi a\right) \right) -g_1 \cdot\uopt\,. \end{aligned} \end{equation*} This example fulfills the measure condition \eqref{E:struct} of Assumption~\ref{A:sourcestruct} with $\meas(A^c) = 0$ and exponent $\kappa$ from the definition of $\popt$. \subsection{Discretization of the regularized problem} We now describe the discretized regularized optimal control problem $(\mathbb P_{kh})$ which is solved as an approximation for \eqref{OCP}. Consider a partition $0=t_0 < t_1 < \dots < t_M=T_e$ of the time interval $\bar I=[0,T_e]$. With $I_m=[t_{m-1},t_m)$ we have $[0,T_e)=\bigcup_{m=1}^M I_m$. Furthermore, let $t_m^*=\frac{t_{m-1}+t_m}{2}$ for $m=1,\dots,M$ denote the interval midpoints. By $0=:t_0^* < t_1^* < \dots < t_M^* < t_{M+1}^*:=T_e$ we get a second partition of $\bar I$, the so-called \emph{dual partition}, namely $[0,T_e)=\bigcup_{m=1}^{M+1} I_m^*$, with $I_m^*=[t_{m-1}^*,t_m^*)$. The grid width of the first mentioned (primal) partition is given by the parameters $k_m=t_m-t_{m-1}$ and $k=\max_{1\le m\le M} k_m$. Here and in what follows we assume $k<1$. We also denote by $k$ (in a slight abuse of notation) the grid itself. On these partitions of the time interval, we define the Ansatz and test spaces of the Petrov--Galerkin schemes. 
These schemes will replace the continuous-in-time weak formulations of the state equation and the adjoint equation, i.e., \eqref{E:WFM} and \eqref{E:AA}, respectively. To this end, we first define for an arbitrary Banach space $X$ the semidiscrete function spaces
\begin{subequations}\label{e:semfs}
\begin{align}
P_k(X):&=\twoset{v\in C([0,T],X)}{\restr{v}{I_m}\in \mathcal P_1(I_m,X)} \hookrightarrow H^1(I,X),\\
P_k^*(X):&=\twoset{v\in C([0,T],X)}{\restr{v}{I_m^*}\in \mathcal P_1(I_m^*,X)} \hookrightarrow H^1(I,X),\\
\shortintertext{and}
Y_k(X):&=\twoset{v:[0,T]\rightarrow X}{\restr{v}{I_m}\in \mathcal P_0(I_m,X)}\,.
\end{align}
\end{subequations}
Here, $\mathcal P_i(J,X)$, $J\subset \bar I$, $i\in\{0,1\}$, is the set of polynomial functions in time of degree at most $i$ on the interval $J$ with values in $X$. Note that we can extend the bilinear form $A$ of \eqref{bilinA} in its first argument to $W(I)\cup Y_k(\V)$, thus consider the operator
\begin{equation}\label{bilinAe}
A:\left(W(I)\cup Y_k(\V)\right) \times W(I) \rightarrow \mathbb{R},\quad \text{$A$ given by \eqref{bilinA}}.
\end{equation}
Using continuous piecewise linear functions in space, we can formulate fully discretized variants of the state and adjoint equation. We consider a regular triangulation $\mathcal T_h$ of $\Omega$ with mesh size $h:=\max_{T\in\mathcal T_h}\allowbreak\diam(T)$, see, e.g., \cite[Definition (4.4.13)]{brenner-scott}, and $N=N(h)$ triangles. We assume that $h < 1$. We also denote by $h$ (in a slight abuse of notation) the grid itself. With the space
\begin{equation}
X_h := \twoset{\phi_h\in C^0(\bar\Omega)}{\restr{\phi_h}{T}\in \mathcal P_1(T,\mathbb R) \quad\forall\ T\in\mathcal T_h}
\end{equation}
we define $X_{h0} := X_h \cap \V$ to discretize $\V$.
We fix fully discrete ansatz and test spaces, derived from their semidiscrete counterparts from \eqref{e:semfs}, namely
\begin{equation}\label{E:spaceskh}
P_{kh}:=P_k(X_{h0}),\quad P_{kh}^*:=P_k^*(X_{h0}),\quad \text{and}\ Y_{kh}:=Y_k(X_{h0}).
\end{equation}
With these spaces, we introduce fully discrete state and adjoint equations as follows.
\begin{defi}[Fully discrete adjoint equation]\label{D:pkh}
For $h\in \LIIVd$ (here, in a slight abuse of notation, $h$ denotes a right-hand side, not the mesh size) find $p_{kh}\in P_{kh}$ such that
\begin{equation}\label{E:AdjDiscrh}
A(\tilde y,p_{kh}) =\int_0^T \langle h(t),\tilde y(t)\rangle_{\Vd\V}\, dt \quad\forall\ \tilde y \in Y_{kh}.
\end{equation}
\end{defi}
\begin{defi}[Fully discrete state equation]\label{D:ykh}
For $(f,g)\in L^2(I,\Vd)\times \H$ find $y_{kh}\in Y_{kh}$ such that
\begin{equation}\label{E:WFDh}
A(y_{kh},v_{kh})= \int_0^T \langle f(t),v_{kh}(t)\rangle_{\Vd\V}\, dt + (g,v_{kh}(0)) \quad\forall\ v_{kh}\in P_{kh}.
\end{equation}
\end{defi}
Existence and uniqueness of solutions to these two schemes follow as in the semidiscrete case discussed in \cite{DanielsHinzeVierling} or \cite[section~2.1.2]{dissnvd}. For error estimates of the two schemes, we refer again to \cite{dissnvd} or \cite{danielshinze}. We are now able to introduce the discretized optimal control problem, which reads
\begin{equation}\label{OCPkh}\tag{$\mathbb P_{kh}$}
\begin{aligned}
&\min_{ y_{kh}\in Y_{kh},u\in\Uad} J(y_{kh},u)=\min \frac{1}{2}\norm{y_{kh}-y_d}^2_I+ \frac{\alpha}{2}\norm{u}^2_U,\\
&\text{s.t. } y_{kh}=S_{kh}(Bu,y_0)
\end{aligned}
\end{equation}
where $\alpha$, $B$, $y_0$, $y_d$, and $\Uad$ are chosen as for \eqref{OCP} and $S_{kh}$ is the solution operator associated with the fully discrete state equation \eqref{E:WFDh}. Recall that the space $Y_{kh}$ was introduced in \eqref{E:spaceskh}.
For every $\alpha > 0$, this problem admits a unique solution triple ($\uoptd$, $\yoptd$, $\poptd$) where $\yoptd = S_{kh}(B\uoptd,y_0)$ and $\poptd$ denotes the discrete adjoint state which is the solution of the fully discrete adjoint equation \eqref{E:AdjDiscrh} with right-hand side $h:=\yoptd-y_d$. The first order necessary and sufficient optimality condition for problem \eqref{OCPkh} is given by \begin{equation}\label{VarIneqkh} \uoptd\in\Uad,\quad \left( \alpha\uoptd + \dual B\poptd,u-\uoptd\right)_U \ge 0\quad \forall\ u\in\Uad, \end{equation} which can be rewritten as \begin{equation}\label{FONCkh} \uoptd = P_{\Uad}\left(-\frac{1}{\alpha}\dual B\poptd\right). \end{equation} The above mentioned facts can be proven in the same way as for the continuous problem \eqref{OCP}. Note that the control space $U$ is not discretized in the formulation \eqref{OCPkh}. In the numerical treatment, the relation \eqref{FONCkh} is instead exploited to get a discrete control. This approach is called \emph{Variational Discretization} and was introduced in \cite{Hinze2005}, see also \cite[Chapter 3.2.5]{hpuu} for further details. \subsection{Numerical results} We solve numerically the regularized discretized problem \eqref{OCPkh} as an approximation of the limit problem \eqref{OCPl} in the situation of Example~\ref{exam:timedep}.2 with data given in the last but one subsection. Recall that we denote by $\uopt_{kh}$ the former, by $\uopt$ the latter. We investigate the behavior of the error $\norm{\uopt_{kh}-\uopt}$ if $\alpha \to 0$ for fixed small discretization parameters $k$ and $h$ and different values of the parameter $\kappa$ from the measure condition \eqref{E:struct}. To solve $(\mathbb P_{kh})$, a fixed-point iteration on the equation \eqref{FONCkh} is performed: Each fixed-point iteration is initialized with the starting value $u_{kh}^{(0)}:=a_1$ which is the lower bound of the admissible set. 
As a stopping criterion for the fixed-point iteration, we require for the discrete adjoint states belonging to the current and the previous iterate that
\[ \norm{ \dual B \left(p_{kh}^{(i)} - p_{kh}^{(i-1)}\right) }_{L^\infty(\Omega\times I)} < \mathrm{tol} \]
where $\mathrm{tol}:=10^{-5}$ is a prescribed threshold. We end up with what we denote by $u_{kh}$ in the tables below: an approximation of $\uoptd$. The idea is that if $\alpha$ is not too small in comparison to $k$ and $h$, we expect to have $\norm{u_{kh}-\uopt} \approx \norm{\uopt_\alpha - \uopt}$, which means that the influence of the discretization is negligible in relation to the influence of the regularization. Here, we report only on the errors in the optimal control since we observed no or only poor convergence in the errors of the optimal state and adjoint state, respectively. This might be due to the fact that the influence of the space- and time-discretization error is much larger than that of the regularization error. This phenomenon was also observed for elliptic problems, compare \cite{wachsmuth1}. We consider a fixed fine space-time mesh with $\text{Nh}=(2^5+1)^2$ nodes in space and $\text{Nk}=(2^{11}+1)$ nodes in time. The regularization parameter $\alpha = 2^{-\ell}$ is considered for $\ell=1,2,3,4,5,6$. The problem is solved for different values of $\kappa$, namely $\kappa = 0.3$, $0.5$, $1$, and $2$. Let us remark that the convergence of the fixed-point iteration for our example does not depend on the starting value. As one can see from the \emph{experimental order of convergence} (EOC) in Tables~\ref{tab:ex2k03}, \ref{tab:ex2k05}, \ref{tab:ex2k1}, and \ref{tab:ex2k2}, the new convergence rates of Theorem~\ref{T:regmain}.3, more precisely \eqref{E:regconvumeas} and \eqref{E:regconvumeas2}, can be observed numerically. It seems that they cannot be improved any further. In Figure~\ref{fig:ex2k1}, the convergence of the computed optimal control to the limit control as $\alpha \to 0$ is depicted.
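The EOC values in the following tables are obtained from consecutive error quotients: since $\alpha_\ell = 2^{-\ell}$ is halved in each step, $\mathrm{EOC}_\ell = \log(e_{\ell-1}/e_\ell)/\log(\alpha_{\ell-1}/\alpha_\ell) = \log_2(e_{\ell-1}/e_\ell)$. A minimal plain-Python sketch (the error values are copied from Table~\ref{tab:ex2k1}, i.e., the case $\kappa=1$) reproduces the tabulated orders:

```python
import math

def eoc(errors):
    """Experimental order of convergence for alpha_l = 2^{-l}:
    EOC_l = log(e_{l-1}/e_l) / log(alpha_{l-1}/alpha_l) = log2(e_{l-1}/e_l)."""
    return [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]

# L^1 and L^2 control errors from the kappa = 1 experiment
e_l1 = [0.04006495, 0.02000722, 0.00998774, 0.00498724, 0.00249053, 0.00123906]
e_l2 = [0.07304858, 0.05160925, 0.03646496, 0.02576440, 0.01820019, 0.01282180]

print([round(r, 2) for r in eoc(e_l1)])  # [1.0, 1.0, 1.0, 1.0, 1.01]
print([round(r, 2) for r in eoc(e_l2)])  # [0.5, 0.5, 0.5, 0.5, 0.51]
```

The observed orders match the predicted exponents $r=\kappa$ in $L^1$ and $r=\kappa/2$ in $L^2=U$.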
\begin{table}[hbt] \begin{center} \begin{tabular}{ccccccc} \hline & $\norm{\uopt-u_{kh}}$ & $\norm{\uopt-u_{kh}}$ & $\text{EOC}$ & $\text{EOC}$\\ $\ell$ & $L^1(I,\mathbb R)$ & $L^2(I,\mathbb R)$ & $L^1$ & $L^2$\\ \hline 1 & 0.09417668 & 0.13354708 & / & / \\ 2 & 0.08837777 & 0.12648809 & 0.09 & 0.08\\ 3 & 0.07681662 & 0.11533688 & 0.20 & 0.13\\ 4 & 0.06212895 & 0.10353644 & 0.31 & 0.16\\ 5 & 0.05008158 & 0.09264117 & 0.31 & 0.16\\ 6 & 0.04011694 & 0.08237596 & 0.32 & 0.17\\ \hline \end{tabular} \end{center} \caption{ Errors and EOC in the control ($\kappa = 0.3$, $\alpha \to 0$, $h$, $k$ fixed).} \label{tab:ex2k03} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{ccccccc} \hline & $\norm{\uopt-u_{kh}}$ & $\norm{\uopt-u_{kh}}$ & $\text{EOC}$ & $\text{EOC}$\\ $\ell$ & $L^1(I,\mathbb R)$ & $L^2(I,\mathbb R)$ & $L^1$ & $L^2$\\ \hline 1 & 0.07912861 & 0.11494852 & / & / \\ 2 & 0.05957289 & 0.09753159 & 0.41 & 0.24 \\ 3 & 0.04204449 & 0.08187630 & 0.50 & 0.25 \\ 4 & 0.02963509 & 0.06865675 & 0.50 & 0.25 \\ 5 & 0.02084162 & 0.05749818 & 0.51 & 0.26 \\ 6 & 0.01463170 & 0.04811089 & 0.51 & 0.26 \\ \hline \end{tabular} \end{center} \caption{ Errors and EOC in the control ($\kappa = 0.5$, $\alpha \to 0$, $h$, $k$ fixed).} \label{tab:ex2k05} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{ccccccc} \hline & $\norm{\uopt-u_{kh}}$ & $\norm{\uopt-u_{kh}}$ & $\text{EOC}$ & $\text{EOC}$\\ $\ell$ & $L^1(I,\mathbb R)$ & $L^2(I,\mathbb R)$ & $L^1$ & $L^2$\\ \hline 1 & 0.04006495 & 0.07304858 & / & / \\ 2 & 0.02000722 & 0.05160925 & 1.00 & 0.50 \\ 3 & 0.00998774 & 0.03646496 & 1.00 & 0.50 \\ 4 & 0.00498724 & 0.02576440 & 1.00 & 0.50 \\ 5 & 0.00249053 & 0.01820019 & 1.00 & 0.50 \\ 6 & 0.00123906 & 0.01282180 & 1.01 & 0.51 \\ \hline \end{tabular} \end{center} \caption{ Errors and EOC in the control ($\kappa = 1$, $\alpha \to 0$, $h$, $k$ fixed).} \label{tab:ex2k1} \end{table} \begin{figure} \centering \subfloat[$\ell=1$]{\includegraphics[trim=25mm 75mm 15mm 
91mm, clip,width=0.3\textwidth]{tht5-1-1.pdf}} \subfloat[$\ell=2$]{ \includegraphics[trim=25mm 75mm 15mm 91mm, clip,width=0.3\textwidth]{tht5-2-1.pdf}} \subfloat[$\ell=3$]{ \includegraphics[trim=25mm 75mm 15mm 91mm, clip,width=0.3\textwidth]{tht5-3-1.pdf}} \caption{ Optimal control $\uopt$ (solid) and computed counterpart $u_{kh}$ (dashed) over time after level $\ell$ ($\kappa = 1$, $\alpha \to 0$, $h$, $k$ fixed). } \label{fig:ex2k1} \end{figure} \begin{table}[hbt] \begin{center} \begin{tabular}{ccccccc} \hline & $\norm{\uopt-u_{kh}}$ & $\norm{\uopt-u_{kh}}$ & $\text{EOC}$ & $\text{EOC}$\\ $\ell$ & $L^1(I,\mathbb R)$ & $L^2(I,\mathbb R)$ & $L^1$ & $L^2$\\ \hline 1 & 0.01081546 & 0.03305084 & / & / \\ 2 & 0.00279478 & 0.01690248 & 1.95 & 0.97 \\ 3 & 0.00074507 & 0.00878066 & 1.91 & 0.94 \\ 4 & 0.00020543 & 0.00463711 & 1.86 & 0.92 \\ 5 & 0.00005823 & 0.00246523 & 1.82 & 0.91 \\ 6 & 0.00001564 & 0.00125068 & 1.90 & 0.98 \\ \hline \end{tabular} \end{center} \caption{ Errors and EOC in the control ($\kappa = 2$, $\alpha \to 0$, $h$, $k$ fixed).} \label{tab:ex2k2} \end{table} \printbibliography \end{document}
\section{Introduction}\setcounter{equation}{0}\quad Kinks are examples of topological solitons (for a review see \cite{MS}) in one space dimension, which interpolate between distinct vacua. A kink can be trivially embedded into three-dimensional Euclidean space as a solution which is independent of all but one spatial direction, and this is known as a domain wall. The energy of a domain wall is proportional to its surface area, therefore the embedding into $\bb{R}^3$ gives a domain wall with infinite energy. However, the domain wall does have finite energy per unit area, which is the tension of the wall, and is equal to the kink energy in the one-dimensional theory. Domain walls can be constructed in a three-dimensional compact space, in which case they have finite energy. A minimal energy configuration of domain walls approximates a minimal surface with increasing accuracy as the thickness of the domain wall is reduced. The surface associated with a configuration of domain walls can equivalently be defined as the level set of the field (taking the level set value to be the midpoint between the two vacua) or as a maximal energy density isosurface. This approach of modelling minimal surfaces using domain walls has been successfully implemented numerically to investigate both triply periodic and quasicrystalline minimal surfaces \cite{GH,SE}. In this paper, we extend the domain wall description of minimal surfaces to double bubbles, which are global minimal area surfaces which enclose and separate two prescribed volumes. We show that intersecting domain walls in a Wess-Zumino theory with three vacua form double bubbles under a certain volume-preserving flow. 
In 2002 it was proved \cite{proof} that in three-dimensional Euclidean space the double bubble consists of three segments of round spheres, with radii $R_1\le R_2\le R_3,$ satisfying the relation $R_1^{-1}=R_2^{-1}+R_3^{-1},$ and meeting in triple junctions at angles of $120^\circ.$ Such a configuration is known as the standard double bubble and is the familiar structure seen in soap bubbles. The proof \cite{proof} followed an earlier computer assisted proof \cite{comproof}, restricted to the situation in which the two prescribed volumes are equal. On compact spaces the double bubble problem is far more complicated. Even in the flat cubic three-torus, it is still an open problem to prove the form taken by a double bubble. The only rigorous result is that the standard double bubble is recovered if the size of the double bubble is sufficiently small compared to the size of the torus \cite{3dnum}. Numerical investigations have been performed for the flat cubic three-torus \cite{3dnum} using Brakke's surface evolver software \cite{brak}, which is based on adaptive computations of triangulations of the surface. These numerical results suggest ten different qualitative forms taken by the double bubble, depending upon the two bubble volumes relative to the volume of the torus. In two space dimensions the double bubble problem is better understood, and consists of determining the curve with minimal perimeter that encloses and separates two prescribed areas. Note that in the two-dimensional case the term volume will often be used rather than area, in order to keep the terminology of the three-dimensional situation, but we shall always refer to perimeter as the quantity to be minimized. In the flat square two-torus it has been proved \cite{2d} that the double bubble takes one of four different qualitative forms, and explicit formulae are available for the perimeter of each form as a function of the two bubble volumes. 
A numerical evaluation of these formulae then allows the determination of the form of the double bubble for any given pair of bubble volumes, and hence the construction of a phase diagram. To illustrate the applicability of our domain wall approach to double bubbles we reconstruct the phase diagram for double bubbles in the flat square two-torus and also construct all known examples of double bubbles in the flat cubic three-torus. \section{Domain walls in the Wess-Zumino model}\setcounter{equation}{0}\quad The field theory of interest in this paper is the bosonic sector of the Wess-Zumino model, which contains a single complex scalar field $\phi.$ For static fields the theory may be defined by its energy density, which is given by \begin{equation} {\cal E}=\frac{1}{4}\partial_i\phi\partial_i\bar\phi +\left|\frac{dW}{d\phi}\right|^2, \label{energydensity} \end{equation} where $W(\phi)$ is the superpotential. For applications to double bubbles we require a triply degenerate vacuum and therefore take the superpotential to be \begin{equation} W=\phi-\frac{1}{4}\phi^4. \label{superpotential} \end{equation} The zero energy vacuum field configurations are the three cube roots of unity $\phi=\omega^j\equiv \phi_j,$ where $\omega=e^{i2\pi/3},$ with $j\in\{1,2,3\}.$ In one space dimension this theory has six types of kink solutions (including anti-kinks, as we make no distinction between these in this paper), which are the minimal energy field configurations $\phi(x)$ that interpolate between any two distinct vacua, that is, $\phi(-\infty)=\phi_i$ and $\phi(+\infty)=\phi_j,$ with $i\ne j.$ These kinks satisfy a first-order Bogomolny equation and have an energy given by \begin{equation} E=\int^{\infty}_{-\infty}{\cal E}\,dx=|W(\phi_i)-W(\phi_j)|=3\sqrt{3}/4. 
\end{equation} Kinks in the above theory have a width of order one, but introducing a constant $\epsilon^2$ in front of the first term in (\ref{energydensity}) makes the kink width $O(\epsilon).$ The thin wall limit is $\epsilon\rightarrow 0,$ and rigorous results require proofs based on regularity as this limit is approached. However, as only relative scales are relevant, we find it more convenient for a numerical implementation to fix the kink scale to be of order one and consider all other scales in the problem (such as the size of a region containing a single vacuum) to be $O(\epsilon^{-1}).$ A trivial embedding of kinks into a higher-dimensional version of the theory (two and three spatial dimensions will be considered) yields domain walls with tension $\mu=3\sqrt{3}/4.$ The tension is simply the energy per unit area in three dimensions and the energy per unit length in two dimensions. A kink traces out a curved path in the $\phi$-plane, connecting two distinct vacua, but it can be shown that when viewed in the $W$-plane the path is simply a straight line \cite{GT}. We will make use of this fact later. This theory has a domain wall junction \cite{GT}, which also satisfies a first-order Bogomolny equation, in which the three types of domain wall mutually intersect at angles of $120^\circ$, dividing space into three equal sectors with a different vacuum value occurring in the interior of each sector. The junction has $120^\circ$ angles because all domain walls have the same tension and there must be a tension balance for an equilibrium configuration. The domain wall junction has been computed numerically, together with networks of junctions connected by domain wall segments that yield tilings of the plane \cite{Sa}. The salient features are that the energy of a domain wall is proportional to its surface area, and three intersecting domain walls form a junction with $120^\circ$ angles.
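These statements are easy to verify directly (a hedged numerical sketch, not part of the original computations): the cube roots of unity annihilate $dW/d\phi=1-\phi^3$, one finds $W(\omega^j)=\frac{3}{4}\omega^j$, and every pair of distinct vacua yields the same kink energy $3\sqrt{3}/4\approx 1.299$.

```python
import cmath
import itertools
import math

W = lambda p: p - p**4 / 4                       # superpotential
vacua = [cmath.exp(2j * cmath.pi * j / 3) for j in (1, 2, 3)]

for p in vacua:
    assert abs(1 - p**3) < 1e-12                 # dW/dphi = 0 at each vacuum
for pi, pj in itertools.combinations(vacua, 2):
    # kink energy |W(phi_i) - W(phi_j)| equals the tension 3*sqrt(3)/4
    assert abs(abs(W(pi) - W(pj)) - 3 * math.sqrt(3) / 4) < 1e-12
```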
These properties suggest that domain walls in this system may provide a useful field theory description of double bubbles. \section{A volume-preserving flow}\setcounter{equation}{0}\quad A bubble configuration in three dimensions consists of a spherical domain wall separating one vacuum field in the interior of the bubble from a distinct vacuum field in the exterior of the bubble. The energy of such a bubble configuration is approximately $\mu A$, where $A$ is the surface area of the bubble and $\mu$ is the tension introduced above. The error in this approximation can be made arbitrarily small by increasing the ratio of the bubble radius to the width of the domain wall, that is, by approaching the thin wall limit. Clearly, for the energy density (\ref{energydensity}), there are no bubbles that are stationary points of the energy. In order to have a bubble (and later double bubbles) a constraint on the volume of the bubble must be included within the field theory description, since in this theory there is no pressure to resist the tension-induced collapse. In the following we describe how volume constraints may be included in a natural manner within the field theory. There are two obvious ways in which dynamics may be introduced into the static theory discussed in the previous Section. The first is to consider relativistic dynamics, which leads to the usual Lagrangian description of the Wess-Zumino model and yields a nonlinear wave equation which is second-order in the time derivative. The other obvious dynamics is gradient flow, which produces a nonlinear diffusion equation that is first-order in time and generates an energy decreasing evolution. Given the above discussion, it is easy to see that in both cases an initial bubble configuration will simply collapse, as this reduces the potential energy. 
In the real Ginzburg-Landau model, it is known that gradient flow dynamics can be modified by the introduction of a time-dependent effective magnetic field that preserves the global average of the field. This volume-preserving flow is used, for example, in the study of phase ordering and interface-controlled coarsening \cite{vpflow}. In this Section we describe a generalization of this volume-preserving flow which is applicable to the Wess-Zumino model and allows two independent volumes to be preserved during the flow. Motivated by the applications to follow, we consider the theory defined on a torus $T^d,$ with volume $V.$ First of all, for each vacuum we need to introduce a density function, such that it is equal to unity if the field is in the given vacuum and vanishes if the field is in any other vacuum. A simple choice is \begin{equation} v_i(\phi)=\frac{|\phi-\phi_j|^2|\phi-\phi_k|^2} {|\phi_i-\phi_j|^2|\phi_i-\phi_k|^2} =\frac{1}{9}|\phi-\phi_j|^2|\phi-\phi_k|^2, \label{vden} \end{equation} where $i,j,k$ are three distinct elements of $\{1,2,3\}.$ Clearly these functions satisfy the required property that $v_i(\phi_j)=\delta_{ij}.$ Next we define the volume occupied by each vacuum as \begin{equation} V_i=\int_{T^d} v_i\, d^dx. \label{volume} \end{equation} In the thin wall limit, the width of the domain wall is negligible compared to the volumes occupied by the vacua, and therefore in this limit $V_1+V_2+V_3\rightarrow V.$ The task is to construct an energy minimizing gradient flow in which both volumes $V_1$ and $V_2$ remain constant. Note that $V_3$ is then automatically constant, at least to the accuracy in which the thin wall limit is approached, as $V_3=V-V_1-V_2.$ The starting point is the gradient flow dynamics associated with the energy density (\ref{energydensity}) \begin{equation} \frac{\partial\phi}{\partial t_0} =-\frac{\delta {\cal E}}{\delta \bar\phi}\equiv F =\frac{1}{4}\partial_i\partial_i\phi+3(1-\phi^3)\bar\phi^2. 
\label{fflow} \end{equation} The dynamics that follows from flow in the $t_0$ time variable reduces the energy $E$ but also changes the volumes $V_i.$ Associated with each volume density $v_i$ one can define an alternative gradient flow \begin{equation} \frac{\partial\phi}{\partial t_i} =-\frac{\partial v_i}{\partial \bar\phi}\equiv f_i =\frac{1}{9}(\phi-\phi_j)(\phi-\phi_k)(2\bar\phi-\bar\phi_j-\bar\phi_k), \label{fiflow} \end{equation} so that evolution in the time variable $t_i$ is a flow that reduces the volume $V_i.$ The idea is to introduce a new flow, with time variable $t,$ that is constructed from the $t_0$ flow by projecting out the components in the directions of the $t_i$ flows. This makes the $t$ flow orthogonal to all $t_i$ flows and hence preserves all the volumes $V_i.$ The appropriate inner product is \begin{equation} <f,g>\,=\frac{1}{V}\int_{T^d} \bar f g \, d^dx, \end{equation} and we use the notation \begin{equation} ||f||=\sqrt{<f,f>}. \end{equation} Given the flows (\ref{fiflow}) an orthonormal set of flows $\hat f_i$ is constructed as \begin{equation} \tilde f_i=f_i-\sum_{j<i}<\hat f_j,f_i>\hat f_j, \quad \mbox{then}\quad \hat f_i=\frac{\tilde f_i}{||\tilde f_i||}. \end{equation} Finally, the $t$ flow is defined to be \begin{equation} \frac{\partial\phi}{\partial t} =F-\sum_{i=1}^2 <\hat f_i,F>\hat f_i, \label{tflow} \end{equation} which is a nonlocal and nonlinear reaction diffusion equation. By construction the flow (\ref{tflow}) is orthogonal to all the $t_i$ flows, that is, \begin{equation} <f_i,\frac{\partial \phi}{\partial t}>\,=0. 
\label{orthog} \end{equation} It is easy to prove that the flow (\ref{tflow}) preserves both volumes $V_i,$ with $i=1,2$, while reducing the energy $E.$ First of all, \begin{equation} \frac{dV_i}{dt}=\int_{T^d}\frac{dv_i}{dt}\, d^dx =2\Re\int_{T^d} \frac{\partial v_i}{\partial\phi}\frac{\partial \phi}{\partial t}\, d^dx =-2\Re\int_{T^d} \bar f_i \frac{\partial \phi}{\partial t} \, d^dx =-2V\Re <f_i,\frac{\partial \phi}{\partial t}>\,=0, \end{equation} where we have used the orthogonality property (\ref{orthog}). To prove that the flow is energy decreasing, it is convenient to write the sum in (\ref{tflow}) as a single term, by defining \begin{equation} \hat f=\frac{<\hat f_1,F>\hat f_1+<\hat f_2,F>\hat f_2} {\sqrt{<\hat f_1,F>^2+<\hat f_2,F>^2}}. \end{equation} Using the orthogonality property $<\hat f_1,\hat f_2>=0,$ allows (\ref{tflow}) to be rewritten as \begin{equation} \frac{\partial\phi}{\partial t} =F-<\hat f,F>\hat f. \label{tflow2} \end{equation} Therefore, \begin{eqnarray} & &\frac{dE}{dt}=\int_{T^d}\frac{d {\cal E}}{dt}\, d^dx =2\Re\int_{T^d} \frac{\partial \bar\phi}{\partial t} \frac{\delta {\cal E}}{\delta \bar \phi} \, d^dx =-2\Re\int_{T^d} \frac{\partial \bar \phi}{\partial t} F \, d^dx =-2V\Re <\frac{\partial \phi}{\partial t},F>\nonumber\\ & & =-2V\Re <\frac{\partial \phi}{\partial t}, \frac{\partial \phi}{\partial t}+<\hat f,F>\hat f> =-2V\Re <\frac{\partial \phi}{\partial t}, \frac{\partial \phi}{\partial t}> \,\,\le 0, \end{eqnarray} where the last expression uses the fact that the $t$ flow is perpendicular to the combined volume reducing flow, that is, $<\frac{\partial \phi}{\partial t},\hat f>\,=0.$ In summary, we have proved that the flow (\ref{tflow}) is energy decreasing while preserving both volumes $V_1$ and $V_2.$ The end points of this flow are therefore equilibrium configurations that are critical points of the energy functional under constrained variations that preserve both volumes. 
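The construction of the projected flow can be illustrated on discretized data. In the sketch below, random complex arrays stand in for $F$, $f_1$ and $f_2$ (the actual fields would come from the simulation), and the assertions confirm that the resulting flow direction is orthogonal to both volume-reducing flows to machine precision.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 32)
V = shape[0] * shape[1]

def ip(f, g):
    """Discrete inner product <f,g> = (1/V) * sum of conj(f)*g."""
    return np.sum(np.conj(f) * g) / V

F  = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
f1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
f2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Gram-Schmidt orthonormalization of the volume-reducing flows
h1 = f1 / np.sqrt(ip(f1, f1).real)
t2 = f2 - ip(h1, f2) * h1
h2 = t2 / np.sqrt(ip(t2, t2).real)

# projected flow (tflow): orthogonal to f_1 and f_2, so both volumes are conserved
dphi_dt = F - ip(h1, F) * h1 - ip(h2, F) * h2
assert abs(ip(h1, h2)) < 1e-12
assert abs(ip(f1, dphi_dt)) < 1e-10
assert abs(ip(f2, dphi_dt)) < 1e-10
```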
Static solutions of this flow are candidates for double bubbles. The volume-preserving flow of the real Ginzburg-Landau model is recovered from the above generalization by restricting $\phi$ to be real $\phi=\bar\phi\equiv \varphi$ and choosing the volume density to be $v_1=(\varphi-\varphi_2)/(\varphi_1-\varphi_2),$ where $\varphi_1$ and $\varphi_2=-\varphi_1$ are the two real vacua. In this case $f_1=1$ and the flow takes the form \begin{equation} \frac{\partial\varphi}{\partial t}=F-<F>, \end{equation} where $<F>\,\equiv\,<1,F>$ is the average value of $F$. This flow clearly preserves the average volume as $\frac{\partial}{\partial t}<\varphi>=0.$ The definition of a double bubble requires the surface to be the global minimum of the area functional, with the prescribed volume constraints. Therefore, there may be a number of surfaces which are local minima of the surface area, or even saddle points, but are not double bubbles. All local minima must be found to determine which is the double bubble, and this applies to the field theory energy computations too. In the following two Sections we present the results of a numerical implementation of the above volume-preserving flow. Double bubbles in the two-torus and three-torus are studied and comparisons made with previous results obtained using other methods that are not based on field theory. \section{Double bubbles in the two-torus}\setcounter{equation}{0}\quad In this Section we restrict our investigations to two-dimensional double bubbles. As mentioned earlier, we shall retain the three-dimensional notation and refer to the volume of a bubble (even though it is an area). The problem is to find curves with minimal perimeter that enclose and separate two prescribed volumes. Although we work on the flat square two-torus, the case of the Euclidean plane can be recovered if the torus is sufficiently large, so that no curve of the double bubble wraps a cycle of the torus. 
As a first illustration of our method, we discuss an example of this type. \begin{figure}[ht] \begin{center} \includegraphics[width=15cm]{mg08.ps} \caption{The image on the left shows the initial condition and the image on the right is the subsequent final field configuration obtained under the volume-preserving flow. Different colours (or shades of grey) represent regions in space where the field takes values in one of the three different vacua $\phi_1,\phi_2,\phi_3.$ Lines denote the domain wall positions. } \label{fig-mg08} \end{center} \end{figure} Figure \ref{fig-mg08} presents an initial condition and the subsequent final field configuration obtained under the volume-preserving flow. The lattice contains $401^2$ grid points with a lattice spacing $\Delta x=0.2$ so that the volume of the torus is $V=6400.$ The Laplacian is evaluated using second-order accurate finite difference approximations and the flow is evolved using a simple first-order accurate explicit method with timestep $\Delta t=0.008.$ All inner products are evaluated by approximating the integral by a sum over lattice sites. The initial condition in Figure~\ref{fig-mg08} consists of two equal-volume rectangles, each of volume $V_1=V_2=0.15V,$ with the field set to the vacuum value $\phi_1$ inside one rectangle and $\phi_2$ inside the other rectangle. Outside these rectangles the field takes the value $\phi_3.$ The final configuration shown is at time $t=400,$ when the field is static to the accuracy that we compute. The equilibrium field configuration clearly takes the form of the standard double bubble in the plane with equal volumes. Note the $120^\circ$ intersection angles of the domain wall junctions.
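The discretization just described can be sketched as follows; this is a simplified illustration of the scheme, with the volume-preserving projection omitted. As a basic sanity check, a uniform vacuum field is a fixed point of the unconstrained gradient flow (\ref{fflow}).

```python
import numpy as np

dx, dt = 0.2, 0.008   # lattice spacing and timestep from the simulations

def laplacian(phi):
    """Second-order five-point Laplacian with periodic boundary conditions."""
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / dx**2

def gradient_flow_step(phi):
    """One explicit Euler step of the unconstrained flow (fflow):
    dphi/dt = F = (1/4)*lap(phi) + 3*(1 - phi^3)*conj(phi)^2.
    (The projection onto volume-preserving directions is omitted here.)"""
    F = 0.25 * laplacian(phi) + 3 * (1 - phi**3) * np.conj(phi)**2
    return phi + dt * F

phi = np.full((16, 16), np.exp(2j * np.pi / 3))   # uniform vacuum phi_1
assert np.allclose(gradient_flow_step(phi), phi)  # vacua are fixed points
```

In the full scheme the right-hand side $F$ is replaced by the projected flow of the previous Section before the Euler update is applied.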
The perimeter, $P,$ of a standard double bubble in the plane with equal volumes $V_1=V_2$ satisfies \cite{2d} \begin{equation} \frac{P}{\sqrt{V_1}}=\frac{16\pi+6\sqrt{3}}{\sqrt{24\pi+9\sqrt{3}}}=6.359\ldots \label{exact2d} \end{equation} Computing the perimeter length in the field theory, as the ratio of the energy to the tension $E/\mu,$ gives the result \begin{equation} \frac{E}{\mu\sqrt{V_1}}=6.370, \label{approx2d} \end{equation} and therefore the field theory computation has an error of less than $\frac{1}{4}\%$ for this choice of simulation parameters, which uses quite a modest grid. The accuracy can be improved by increasing the number of grid points, so that the size of the torus $\sqrt{V}$ increases relative to the width of the domain wall, that is, the system is moved closer to the thin wall limit. In Section~\ref{sec-mod} we discuss some modifications that can be applied to improve the accuracy of the computations without the need to increase the grid size. Note that from now on we shall refer to all volumes in units of the torus volume, so for the above example we simply write $V_1=0.15,$ etc. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{mg10.ps} \caption{Equilibrium fields obtained from four different simulations with varying volumes. Different colours (or shades of grey) represent regions in space where the field takes values in one of the three different vacua. The configurations shown are known as (a) the standard double bubble, (b) the standard chain, (c) the band lens, (d) the double band. } \label{fig-mg10} \end{center} \end{figure} Figure~\ref{fig-mg10} presents equilibrium fields obtained from four different simulations with volumes $\{V_1,V_2\}$ given by (a) $\{0.1,0.15\}$; (b) $\{0.3,0.2\}$; (c) $\{0.35,0.05\}$; (d) $\{0.3,0.4\}$. In each case the initial field consists of rectangular regions of vacuum, similar to that shown in Figure~\ref{fig-mg08}, though some of the rectangles were chosen to wrap the torus.
In each case, for the chosen volumes, the resulting field configuration provides a good description of the double bubble. The configurations shown are known as (a) the standard double bubble, (b) the standard chain, (c) the band lens, (d) the double band. It has been proved that for all volumes $\{V_1,V_2\}$ the double bubble takes one of these four forms, and a phase diagram has been computed to determine which of the four forms is taken for any given volumes \cite{2d}. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{mg14.ps} \caption{A phase diagram indicating the form taken by the double bubble as a function of the individual bubble volumes. The position of a point inside the triangle is related to the bubble volumes through equation (\ref{point}). The centroid of the triangle is where all three volumes are equal, whereas regions near the vertices are where two volumes are much smaller than the third. Finally, one of the volumes vanishes at an edge, where the included scale gives the fraction of the total volume occupied by one of the two remaining volumes. } \label{fig-mg14} \end{center} \end{figure} Using the field theory description we have been able to reproduce this phase diagram by performing simulations for a range of volumes. This phase diagram is presented in Figure~\ref{fig-mg14}. The set of possible volumes, $V_1,V_2,V_3,$ such that $V_1+V_2+V_3=1,$ is represented by the interior of an equilateral triangle. Explicitly, the $(x,y)$ coordinates in the plane containing the triangle are related to the volumes by \begin{equation} (x,y)=\left(\frac{1}{2}(V_1-V_2),\frac{1}{2\sqrt{3}}(2-3V_1-3V_2)\right). \label{point} \end{equation} The centroid of the triangle is $(x,y)=(0,0)$ and corresponds to $V_1=V_2=V_3=\frac{1}{3},$ so that all three volumes (including the volume exterior to both bubbles) are equal.
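Two of the closed-form expressions used in this Section are easy to check numerically: evaluating (\ref{exact2d}) reproduces the quoted value $6.359\ldots,$ and the map (\ref{point}) sends the centroid to the origin and the three pure-vacuum corners to the vertices of an equilateral triangle of unit side. A short sketch:

```python
import math

# equal-volume perimeter of the standard double bubble in the plane, eq. (exact2d)
P = (16 * math.pi + 6 * math.sqrt(3)) / math.sqrt(24 * math.pi + 9 * math.sqrt(3))
print(round(P, 3))                      # → 6.359

def triangle_point(V1, V2):
    """Bubble volumes (as fractions of the torus volume) mapped to
    phase-diagram coordinates, eq. (point)."""
    return (0.5 * (V1 - V2), (2 - 3 * V1 - 3 * V2) / (2 * math.sqrt(3)))

x, y = triangle_point(1 / 3, 1 / 3)     # centroid: all three volumes equal
assert abs(x) < 1e-15 and abs(y) < 1e-15

# corners: one volume fills the whole torus; the triangle is equilateral, side 1
verts = [triangle_point(1, 0), triangle_point(0, 1), triangle_point(0, 0)]
for (xa, ya), (xb, yb) in zip(verts, verts[1:] + verts[:1]):
    assert abs(math.hypot(xa - xb, ya - yb) - 1) < 1e-12
```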
In this case, both bubble volumes are large and therefore it is favourable to wrap both bubbles around the torus, as this reduces perimeter length, resulting in the double band. Note that the double bubble problem is symmetric under the interchange of any two of the three volumes; therefore the computation can be restricted to the region $V_3\ge V_2\ge V_1,$ and the remainder of the phase diagram constructed from this sixth of the triangle by symmetry. The regions near the vertices of the triangle correspond to the situation in which two volumes are much smaller than the third, and therefore the standard double bubble is recovered. The edges of the triangle are where one of the volumes vanishes, with the mid-point of an edge associated with the two non-vanishing volumes being equal. In a region around the mid-point of an edge the band lens is the optimal form. Finally, if two volumes are reasonably similar in size, and not too small, then the double bubble takes the form of the chain. The phase diagram displayed in Figure~\ref{fig-mg14} is in excellent agreement with the one presented in \cite{2d} and confirms the applicability of our field theory approach to double bubbles. \section{Double bubbles in the three-torus}\setcounter{equation}{0}\quad In this Section we turn our attention to the more complicated problem of double bubbles on the flat cubic three-torus. In this case even the classification of the types of double bubble that exist is an open problem and only numerical results are available \cite{3dnum}. If both bubble volumes are small compared to the volume of the torus then the standard double bubble in three-dimensional Euclidean space is recovered. One also expects various three-dimensional analogues of the two-dimensional double bubbles discussed in the previous Section. Numerical investigations have been performed \cite{3dnum} using the surface evolver software \cite{brak} and yield a number of forms taken by double bubbles for appropriate volumes.
These results suggest that there are ten different forms taken by double bubbles. Equilibrium configurations were also found that do not appear to be double bubbles for any values of the volumes. \begin{figure} \begin{center} \includegraphics[width=17cm]{mg16.ps} \caption{Energy density isosurfaces for equilibrium configurations corresponding to the ten different conjectured double bubble forms. } \label{fig-mg16} \end{center} \end{figure} Using our field theory approach with domain walls, which we stress is very different to the surface triangulation method used in \cite{3dnum}, we have been able to reproduce all ten types of equilibrium configurations conjectured to be double bubbles in \cite{3dnum}. In Figure~\ref{fig-mg16} we display the ten different conjectured double bubble forms, by plotting an energy density isosurface for each, and we include the name of each surface using the nomenclature of \cite{3dnum}. The initial conditions used to generate these surfaces were constructed using the same approach as in the two-dimensional case described in the previous Section, namely, we take a collection of cuboids with assigned vacuum fields. \begin{figure}[ht] \begin{center} \includegraphics[width=15cm]{mg17.ps} \caption{Energy density isosurfaces for six types of equilibrium configurations which are never double bubbles, that is, none is the global minimal energy surface for any values of the volumes. } \label{fig-mg17} \end{center} \end{figure} Six types of equilibrium configurations are presented in Figure~\ref{fig-mg17} which are never double bubbles, that is, none is the global minimal energy surface for any values of the volumes. The transverse cylinder, torus bubble, inner tube and double hydrant are saddle points which are unstable to symmetry-breaking perturbations. The central cylinder and hydrant lens appear to be local minima, for suitable values of the volumes.
\begin{figure} \begin{center} \includegraphics[width=9cm]{mgtc.ps} \caption{Energy density isosurfaces at increasing times under volume-preserving flow. The initial condition is of a similar type to the transverse cylinder but evolves into a cylinder cross. } \label{fig-mgtc} \end{center} \end{figure} An example evolution under volume-preserving flow is presented in Figure~\ref{fig-mgtc}. The initial condition, Figure~\ref{fig-mgtc}(a), is of a similar type to the transverse cylinder, but it evolves into a stable cylinder cross, Figure~\ref{fig-mgtc}(d). This simulation demonstrates both the instability of the transverse cylinder and the stability of the cylinder cross. The results presented in this Section confirm the applicability of the domain wall approach to three-dimensional double bubbles and show that the method can be numerically realized with reasonable computing resources. Evolutions that involve topology changes of the surface are automatically dealt with within the field theory description, since the surface is obtained as a level set. This allows a large region of configuration space to be explored without the need for fine-tuned initial conditions, as demonstrated in Figure~\ref{fig-mgtc}, where the final equilibrium configuration is quite different from the initial condition. Our results provide some confirmation of the numerical results in \cite{3dnum} using very different methods, and therefore provide evidence for the accuracy of both approaches. The phase diagram for three-dimensional double bubbles computed in \cite{3dnum} could also be obtained using our field theory approach, though we have not pursued that here. \section{Modified volumes and penalty functions}\label{sec-mod}\setcounter{equation}{0}\quad In this Section we discuss an improvement on the simple volume density functions introduced earlier and also describe an alternative method to the volume-preserving flow.
The volume density functions (\ref{vden}) satisfy the minimal requirement that $v_i(\phi_j)=\delta_{ij}.$ In the thin wall limit the volume of any region in which the field is not in a vacuum value tends to zero, so the above minimal requirement is all that is relevant. However, in any numerical implementation there is a non-zero value of the wall thickness (that is, $\epsilon\ne 0$), and the volume density (\ref{vden}) will provide a contribution to the volume as the domain wall is crossed. Considering the volume $V_i$ then the density will be non-zero in the interior of the domain wall connecting vacuum $\phi_i$ to $\phi_j.$ This is reasonable since the region containing vacuum $\phi_i$ now has a diffuse rather than sharp boundary. However, the simple volume density (\ref{vden}) is also non-zero in the interior of the domain wall connecting vacuum $\phi_j$ to $\phi_k$ and this is certainly not related to the volume $V_i.$ Of course, as the thin wall limit is approached this effect tends to zero, but from the point of view of improving numerical accuracy it would be desirable if this contribution could be eliminated for any wall thickness. This can be achieved using the modification presented below. The above comments lead to an additional requirement on the volume density, namely that $v_i(\phi_{jk})=0,$ where $\phi_{jk}$ denotes the domain wall solution connecting vacuum $\phi_j$ to $\phi_k.$ Recall that although a domain wall solution traces out a curved path in the $\phi$-plane it is a straight line in the $W$-plane. 
The additional requirement can therefore be met by choosing a volume density $v_i(\phi)$ that vanishes along the whole line in the $W$-plane that connects the points $W(\phi_j)$ and $W(\phi_k).$ The simplest choice is to take \begin{equation} v_i(\phi)= \bigg(\frac{ \Im\{(W(\phi)-W(\phi_k))(\overline{W(\phi_j)}-\overline{W(\phi_k)})\}} {\Im\{(W(\phi_i)-W(\phi_k))(\overline{W(\phi_j)}-\overline{W(\phi_k)})\}} \bigg)^2, \label{modvden} \end{equation} which clearly satisfies the required properties. We have used this modified definition of the volume density and found that, to a good accuracy, the same results are obtained as with (\ref{vden}). The modified volume density indeed solves the problem of spurious contributions to the volume, though as mentioned earlier, such errors were already small by virtue of the fact that they vanish as the thin wall limit is approached. The volume-preserving flow introduced in this paper is an elegant method to minimize energy while constraining volumes. A less sophisticated approach, which is often used in constrained numerical minimization, is to introduce a penalty function. Explicitly, this involves the addition to the field theory energy of a contribution of the form \begin{equation} E_{penalty}=\lambda\{(V_1-V_1^*)^2+(V_2-V_2^*)^2\}, \end{equation} where $V_1^*$ and $V_2^*$ are the two required values of the volumes $V_1$ and $V_2$ and $\lambda$ is a large positive constant, $\lambda \gg 1.$ This additional contribution to the energy is known as a penalty function, since it penalizes field configurations that do not have the required values for the volumes. 
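To make the penalty approach concrete, the minimization amounts to plain gradient descent on the augmented energy. The following toy sketch is not the actual field theory: the quadratic ``energy'' and the scalar stand-ins for the volume functionals, as well as all parameter values, are invented purely for illustration. It shows how a large $\lambda$ enforces the constraints up to corrections of order $1/\lambda$:

```python
import numpy as np

# Toy sketch of the penalty method: gradient descent on
# E + lambda*((V1 - V1*)^2 + (V2 - V2*)^2).
# The "energy" and "volumes" below are invented stand-ins for illustration.
lam = 1.0e3                       # penalty strength, lambda >> 1
V1_star, V2_star = 1.0, 2.0       # required volumes

E  = lambda x: 0.5 * np.sum(x**2)  # stand-in for the field energy
V1 = lambda x: x[0]                # stand-in volume functionals
V2 = lambda x: x[1]

def grad(x):
    gE = x.copy()                                  # dE/dx for the quadratic E
    gE[0] += 2.0 * lam * (V1(x) - V1_star)         # penalty gradients
    gE[1] += 2.0 * lam * (V2(x) - V2_star)
    return gE

x = np.array([0.0, 0.0, 3.0])
for _ in range(100000):
    x -= 1.0e-4 * grad(x)          # plain gradient flow

# The constrained components settle at V_i - V_i* = O(1/lambda), while the
# unconstrained component relaxes towards the energy minimum at 0.
print(x, E(x), V1(x) - V1_star, V2(x) - V2_star)
```

In the field theory the same scheme applies with $E$ the domain wall energy and $V_1, V_2$ the volume integrals, the gradient becoming a functional derivative evaluated on the lattice.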
In the limit as $\lambda\rightarrow\infty$ the penalty function enforces the required volumes $V_i=V_i^*.$ If standard gradient flow is applied to the energy function with the additional term and a finite value of $\lambda$ then the volumes will approach the required values, to an arbitrary accuracy controlled by the value of $\lambda.$ For fields with the correct volumes the additional contribution to the energy vanishes and the usual energy dominates, producing field configurations that minimize the usual energy with the given fixed volumes as a constraint. The above penalty function method is not as elegant as the volume-preserving flow technique and requires a careful choice of $\lambda,$ to ensure that the volume constraints are satisfied to a good accuracy whilst preventing the penalty contribution from completely dominating over the remaining energy term. However, it does have one practical advantage concerning global minima and equilibrium configurations, as we now discuss. It is easy to see that there are equilibrium surfaces, that is, stationary points of the area functional restricted to volume-preserving variations, which have non-trivial zero modes and therefore are not even local energy minima. A simple example in $\bb{R}^3$ is one spherical bubble entirely contained within a second spherical bubble. This is an equilibrium configuration which has a zero mode corresponding to the translation of the inner bubble, so that it remains entirely within the outer bubble. The corresponding field theory configuration is also an equilibrium solution and the volume-preserving flow will end at such a static solution given appropriate initial conditions. To find the global energy minimum, which is the standard double bubble, requires an appropriate choice of initial conditions to avoid the flow getting trapped in an equilibrium solution of the above type.
More complicated examples of the above phenomenon also exist, such as the equilibrium chain configurations in $T^2,$ where each junction is in equilibrium but the relative proportions of the chain segments are not those of minimal area. Even in the thin wall limit, the penalty function method does not solve precisely the same problem as area minimization for given fixed volumes, because $\lambda$ is finite. This difference is sufficient to remove the zero modes discussed above, and it turns out that a numerical implementation of the penalty function method yields global energy minima from a much larger set of initial conditions than the volume-preserving flow technique. In fact, the penalty function method was used to construct the double bubble chains in two and three dimensions presented earlier, as it is more efficient than using volume-preserving flow to compute solutions of the chain type. \section{Conclusion}\setcounter{equation}{0}\quad We have presented a field theory description of double bubbles using domain walls in a Wess-Zumino model with a volume-preserving flow. The applicability of this approach has been demonstrated by reproducing the phase diagram for double bubbles in the flat square two-torus and all examples of candidate double bubbles in the flat cubic three-torus. In addition to providing an alternative numerical approach to computing double bubbles, one may speculate that the field theory formulation in terms of a volume-preserving gradient flow might be useful in proving rigorous mathematical results, in the same way that Ricci flow has proved to be such a powerful tool in the study of surfaces. By using a higher-order superpotential, a field theory with more than three vacua can be constructed and the volume-preserving flow can be used to find equilibrium configurations. However, such a system does not describe triple bubbles, or their multi-bubble generalizations, since not all domain wall tensions will be equal.
To provide a field theory description of multi-bubbles requires additional fields beyond that of a single complex scalar, so that more than three vacua can exist with all domain wall tensions being equal. For example, for triple bubbles an appropriate field would be a real three-component field with four vacua and a tetrahedrally invariant potential. \section*{Acknowledgements} MG thanks the EPSRC for a research studentship. PMS thanks Gary Gibbons and Stephen Watson for useful discussions. The numerical computations were performed on the Durham HPC cluster HAMILTON.
\section{Staying in the Lorentz sector} In the context of the attempts to provide a quantum theory of gravity or to describe spacetime quantum-mechanically, some works \cite{KotE, KotF, StaA} have lately found it quite useful to introduce a peculiar sort of effective or quantum metric $q_{ab}$, also called qmetric, which acts to some extent as a metric while at the same time allowing for the existence of a finite limiting distance $L$ between two events in their coincidence limit. In this way it implements an intrinsic discreteness of spacetime, while still not abandoning the benefits, for calculus, associated with a continuous description of spacetime. One point of merit of this qmetric approach appears to be its genericity. Indeed, the quantum description it offers does not come from a specific quantum theory of gravity but arises instead directly from simply requiring length quantization, a feature one is likely to find in most specific models, and which as such has the status of quite a generic expectation when quantizing gravity. In Loop Quantum Gravity (LQG) \cite{AshD, ThiA, RovQ}, for example, quantization goes through the discretization of the classical theory (general relativity) and the introduction of a quantum theory associated to this discretization. We do get length quantization in it, not directly, however, but as a consequence of the general quantization procedure just mentioned. What we can say is that, concerning length quantization effects, it seems in principle we can compare what is predicted by LQG and by the qmetric, with the predictions of the latter coming from length quantization without any specific theory attached, and those of the former coming instead from the quantum framework provided by the specific theory.
This means that the results one can obtain with the qmetric approach could have wide-ranging applicability within the various specific quantum gravity models, no matter how much they may differ from one another in their starting assumptions and perspective (for example, whether the quantum theory of gravity has to come from the quantization of the classical theory of gravity or hinges instead on some, as yet unknown/untested, physics at the Planck scale), and should in principle be recoverable in any one of them (and results in this sense have been reported in \cite{CouA, CouB}). One result one gets thanks to the quantum metric $q_{ab}$ is the possibility of providing a notion of degrees of freedom or of number of (quantum) states of spacetime at a point \cite{Pad04, Pad06, Pad08, Pad10}, a fact which paves the way to a statistical description of the field equations, and then to expressing the basic tenets of gravity in the language of thermodynamics (as opposed to geometry) \cite{Pad04}. This endows a previous statistical derivation of the field equations \cite{PadG, PadF} and the notion of horizon microscopic degrees of freedom \cite{PadK, PadL}, as well as recent results connecting so-called black hole chemistry \cite{ManB, ManA} with horizon degrees of freedom \cite{Var}, all arising from macroscopic spacetime thermodynamics, with a microscopic mechanism seemingly able to directly justify these degrees of freedom. Key to the notion of degrees of freedom or of number of states of quantum spacetime is a quantity, denoted here $\rho$, defined in terms of the $(D-1)$-dimensional areas (spacetime is assumed $D$-dimensional) of hypersurfaces formed by points at assigned distance from some point $P$, in the space coming from Euclideanisation of the original spacetime around $P$.
The basic feature about it is that, according to the effective metric, these $(D-1)$-areas remain finite in the coincidence limit of the hypersurfaces shrinking to $P$ \cite{Pad04} (and clearly, one would expect analogous results to hold true in the Lorentzian sector). This Euclideanisation might be a point of merit, perhaps providing insight into what the structure of the metric might be at the smallest scales. The procedure usually taken to go from a Lorentz signature to a Euclidean signature, the Wick rotation to imaginary time, even if well-established in flat spacetime, is however not free from ambiguities in general curved spacetimes \cite{VisJ}. Based on a consistent prescription for this \cite{VisJ} (coming from reconsidering the Wick rotation as an analytic continuation of the metric), results concerning the curvature tensors, when migrating from Lorentzian to Euclidean signature spacetimes, have been presented \cite{KotL}, showing that it is worth pursuing this line in a qmetric context. On the other hand, it is not so clear what the role of Euclidean manifolds is at a fundamental level, i.e. whether at the smallest scales the reference manifold is really to be considered Euclidean instead of Lorentzian. There is, after all, no physical evidence supporting a non-Lorentzian signature for spacetime. And the context in which a dynamical signature change is foreseen --from a fundamental Euclidean signature to the Lorentzian signature we see in the universe--, along the lines e.g. of \cite{GreA, GreB}, has been pointed out to entail difficulties whose origin can be traced back to how, or how strongly, quantum field theory reacts to changes of signature in the underlying manifold (effects like the production of an infinite number of particles with infinite energy) \cite{VisAA}.
Sound arguments have also been reported, based on the consideration of saddle-point approximation methods, showing that quantum gravity amplitudes should be defined first in terms of Lorentzian path integrals \cite{SorB}. Likewise, from Wick rotation as analytic continuation of the metric, it has been suggested that the functional integral should be computed not over all Euclidean manifolds but only over those compatible with a Lorentz structure \cite{VisJ}. In causal dynamical triangulation, evidence has indeed been reported that one gains control on the functional integration if the sum is not taken over all Euclidean geometries, but is restricted instead only to those which are associated with Lorentzian causal geometries \cite{AmbC}. In view of all this, one would then like to know whether the qmetric approach allows one to pick up quantum degrees of freedom without ever abandoning the Lorentz sector. The aim of the present study is precisely to develop a concept of $\rho$ in the Lorentz sector directly, i.e. with no reliance on Euclideanised space. A partial result in this direction has already been presented in \cite{PesK}. There, a notion of $\rho$ for timelike geodesics has been introduced and its expression has been derived (and the case of spacelike geodesics goes along similar lines). What is left is the consideration of null geodesics, and this is the case we study here. As we will see, this involves the introduction of a notion of quantum metric $q_{ab}$ for null separated events, thereby completing a quantum formulation of spacetime intervals. \section{$\rho$ for timelike/spacelike geodesics} Let us start by recalling what we can do with timelike/spacelike geodesics. We briefly rephrase what is reported in \cite{PesK} for the timelike case, using here a notation which encompasses both the timelike and the spacelike case at one stroke.
We consider timelike/spacelike geodesics through a generic point $P$ in spacetime, and introduce the two hypersurfaces $\Sigma_\epsilon(P, l)$, $\epsilon = +1$ for spacelike geodesics and $\epsilon = -1$ for timelike ones, of all points $p$ at assigned squared distance from $P$: \begin{eqnarray} \Sigma_\epsilon(P, l) = \big\{p: \ \epsilon\sigma^2(p, P) = l^2 \big\}, \nonumber \end{eqnarray} where $ \sigma^2(p, P) $ is the squared geodesic distance between $P$ and $p$ ($\sigma^2(p, P) = 2 \Omega(p, P)$, with $\Omega(p, P)$ the Synge world function \cite{Syn}), and $l = \sqrt{l^2}$ is non-negative. Proceeding analogously to the Euclidean definition, $\rho$ is given in terms of the generic-to-flat ratio of area elements on $\Sigma_\epsilon(P, l)$, as measured according to the effective metric, in the limit $l \rightarrow 0$. For each assigned normalised vector $n^a$ at $P$ ($n^a n_a = \epsilon$), we consider the intersection point $p$ between the geodesic $\mu(n^a)$ with tangent at $P$ $t^a(P) = n^a$ and the hypersurface $\Sigma_\epsilon(P, l)$. Calling $y^i$, $i = 1, ..., D-1$ coordinates on $\Sigma_\epsilon(P, l)$ such that $y^i(p) =0$, we consider a segment $I$ of the hypersurface $\Sigma_\epsilon(P, l)$ around $p$, defined as $ I = \{dy^i\}, $ where the $dy^i$ are thought of as fixed when $l$ is varied. The $(D-1)$-dimensional area of $I$ is \begin{eqnarray} d^{D-1}V(p) = \sqrt{- \epsilon h(p)} \ d^{D-1}y, \nonumber \end{eqnarray} where $h_{ij}$ are the components of the metric on $\Sigma_\epsilon(P, l)$ in the coordinates $y^i$, a metric which coincides with the one induced by the spacetime metric $g_{ab}$. What we have to consider is the area $[d^{D-1}V]_q$ of $I$ as measured through the effective metric $q_{ab}$.
The effective metric is described \cite{KotE, StaA} in terms of the bitensor $q_{ab}(p, P)$ which stems from requiring that the squared geodesic distance $\sigma^2$ gets modified into $ \sigma^2 \rightarrow [\sigma^2]_q = {S_L}(\sigma^2) $ with {\bf (R1)} $S_0 = \sigma^2$, {\bf (R2)} ${S_L}(0^\pm) = \pm L^2$, and {\bf (R3)} the kernel $G(\sigma^2)$ of the d'Alembertian gets modified into $G(\sigma^2) \rightarrow [G]_q(\sigma^2) = G({S_L})$ in all maximally symmetric spacetimes. These requirements give, for spacelike or timelike geodesics, the expression \begin{eqnarray}\label{qab} q_{ab}(p, P) = A(\sigma^2) g_{ab}(p) + \epsilon \Big(\frac{1}{\alpha(\sigma^2)} - A(\sigma^2)\Big) t_a(p) t_b(p), \end{eqnarray} where $t^a$ is the normalized tangent vector ($g_{ab} t^a t^b = \epsilon$), which does not change its timelike or spacelike character when passing to the qmetric, \begin{eqnarray}\label{A} A = \frac{S_L}{\sigma^2} \Big(\frac{\Delta}{\Delta_S}\Big)^\frac{2}{D-1}, \end{eqnarray} \begin{eqnarray}\label{alpha} \alpha = \frac{S_L}{\sigma^2 (S^\prime_L)^2} \end{eqnarray} ($^\prime$ indicates differentiation with respect to the argument $\sigma^2$), where \begin{eqnarray}\label{vanVleck} \Delta(p, P) = - \frac{1}{\sqrt{g(p) g(P)}} {\rm det}\Big[-\nabla^{(p)}_a \nabla^{(P)}_b \frac{1}{2} \sigma^2(p, P)\Big] \end{eqnarray} is the van Vleck determinant (\cite{vVl, Mor, DeWA, DeWB}; see \cite{Xen, VisA, PPV}), which is a biscalar, and the biscalar $\Delta_S(p, P)$ is $\Delta_S(p, P) = \Delta({\tilde p}, P)$, where $\tilde p$ is that point on the geodesic through $P$ and $p$ (on the same side of $p$ with respect to $P$) which has $\sigma^2({\tilde p}, P) = S_L(p, P)$. $\alpha$ is determined by the request that the formula for the squared geodesic distance \begin{eqnarray}\label{HJ} g^{ab} \partial_a\sigma^2 \partial_b\sigma^2 = 4 \sigma^2 \end{eqnarray} (Hamilton-Jacobi equation) gets transformed into $q^{ab} \partial_a S_L \partial_b S_L = 4 S_L$; $A$ is determined by the request {\bf R3}.
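As a consistency check, the statement that (\ref{alpha}) turns the Hamilton-Jacobi equation (\ref{HJ}) into $q^{ab} \partial_a S_L \partial_b S_L = 4 S_L$ can be verified with a few lines of computer algebra: $\partial_a S_L = S^\prime_L \, \partial_a \sigma^2$ points along the geodesic tangent, on which $q^{ab}$ rescales $g^{ab}$ by $\alpha$ (from $q^{ab} t_a t_b = \epsilon \alpha$), so the contraction reduces to $4 \alpha \sigma^2 (S^\prime_L)^2$. A sketch (symbol names are our own):

```python
import sympy as sp

sigma2 = sp.Symbol('sigma2')
S = sp.Function('S_L')(sigma2)     # the modified squared distance S_L(sigma^2)
Sp = sp.diff(S, sigma2)

alpha = S / (sigma2 * Sp**2)       # definition (alpha)

# d_a S_L = S_L' d_a sigma^2 lies along the tangent, where q^{ab} rescales
# g^{ab} by alpha; with g^{ab} d_a sigma^2 d_b sigma^2 = 4 sigma^2 this gives
lhs = alpha * Sp**2 * 4 * sigma2   # q^{ab} d_a S_L d_b S_L
print(sp.simplify(lhs - 4 * S))    # -> 0, i.e. q^{ab} d_a S_L d_b S_L = 4 S_L
```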
From the effective metric $[h_{ab}]_q(p, P)$ induced by $q_{ab}(p, P)$ at $p$ on $\Sigma_\epsilon(P, l)$, we get the effective-metric $(D-1)$-dimensional area of $I$ as \begin{eqnarray} [d^{D-1}V]_q(p, P) = \Big[\sqrt{- \epsilon h}\Big]_q(p, P) \ d^{D-1}y . \nonumber \end{eqnarray} As in the Euclidean approach, $\rho$ can then be defined as the ratio of effective-metric $(D-1)$-dimensional area of $I$ for the actual metric configuration, $[d^{D-1}V]_{q(g)}(p, P)$, to what we would have were spacetime flat, $[d^{D-1}V]_{q(\eta)}(p, P)$ ($\eta_{ab}$ is Minkowski metric), in the limit $p \rightarrow P$ along $\mu(n^a)$, i.e. \begin{eqnarray}\label{rho_def} \rho(P, n^a) = \bigg(\lim_{p \rightarrow P} \frac{[d^{D-1}V]_{q(g)}(p, P)}{[d^{D-1}V]_{q(\eta)}(p, P)}\bigg) _{\mu(n^a)} . \end{eqnarray} $\rho$ is then derived in terms of the quantities $A$ and $\alpha$ defining the effective metric. The effective metric $[h_{ab}]_q$ induced by $q_{ab}$ turns out to be \begin{eqnarray} [h_{ab}]_q(p, P) = A(\sigma^2) h_{ab}(p) \nonumber \end{eqnarray} \cite{KotG}, which implies \begin{eqnarray} \Big[\sqrt{- \epsilon h}\Big]_q(p, P) = A(\sigma^2)^{\frac{D-1}{2}} \sqrt{- \epsilon h(p)}, \nonumber \end{eqnarray} and then \begin{eqnarray} [d^{D-1}V]_q(p, P) &=& A(\sigma^2)^{\frac{D-1}{2}} d^{D-1}V(p), \nonumber \end{eqnarray} where $d^{D-1}V(p)$ indicates the proper area of $I$ according to the ordinary metric. Here we see that only $A$, and not $\alpha$, is actually involved in the determination of $\rho$. 
Introducing on $\Sigma_\epsilon$, in a neighbourhood of $p$, mutually orthogonal coordinates $z^i$ such that each of them, $z^{\bar i}$, can be written in the form $z^{\bar i} = l \eta$ with the parameter $\eta$ such that $l d\eta$ is a proper distance or proper-time difference, and choosing as $I$ the (hyper)cube $\{dz^i\}$ defined by $dz^i = l d\eta, \forall i$, we obtain \begin{eqnarray} [d^{D-1}V]_q(p, P) &=& A(\sigma^2)^{\frac{D-1}{2}} l^{D-1} \big(1 + {\cal O}(l^2)\big) (d\eta)^{D-1} \nonumber \end{eqnarray} where the ${\cal O}(l^2)$ term represents the effects of curvature (and is thus of course absent in the flat case), and clearly $l = \sqrt{\epsilon \sigma^2}.$ Using the expression (\ref{A}) for $A$, we get \begin{eqnarray} [d^{D-1}V]_q(p, P) = [\epsilon S_L]^{\frac{D-1}{2}} \frac{\Delta(p, P)}{\Delta_S(p, P)} \ \big(1 + {\cal O}(l^2)\big) (d\eta)^{D-1} \nonumber \end{eqnarray} and, in the limit $p \rightarrow P$ along $\mu(n^a)$, \begin{eqnarray} \lim_{l \rightarrow 0} \ [d^{D-1}V]_q(p, P) = L^{D-1} \frac{1}{\Delta_L(P, n^a)} (d\eta)^{D-1}, \nonumber \end{eqnarray} with $\Delta_L(P, n^a) = \Delta({\bar p}, P)$, where $\bar p$ is that point on the geodesic $\mu(n^a)$ (on the side in the direction $n^a$) which has $l = L$. This shows that both the numerator and the denominator in expression (\ref{rho_def}) remain non-vanishing in the coincidence limit $p \rightarrow P$, exactly as happens in the Euclidean case. Since for flat spacetime $\Delta = 1$ identically, and then also $\Delta_L = 1$, we finally have \begin{eqnarray}\label{rho} \rho(P, n^a) = \frac{1}{\Delta_L(P, n^a)}, \end{eqnarray} where $\Delta_L$ is that of the generic metric $g_{ab}$. The scope of this exact expression for $\rho$ clearly includes strictly Riemannian manifolds (such as the one obtained from Euclideanisation).
Expanding $\Delta(p, P)$ in powers of $l$ (\cite{DeWA}; \cite{Xen, VisA, PPV}), \begin{eqnarray}\label{expansion} \Delta(p, P) = 1 + \frac{1}{6} l^2 R_{ab} t^a t^b + o\big(l^2 R_{ab} t^a t^b\big) \end{eqnarray} ($t^a t_a = \epsilon$) gives \begin{eqnarray}\label{DeltaL} \Delta_L(P, n^a) = 1 + \frac{1}{6} L^2 R_{ab}(P) n^a n^b + o\big(L^2 R_{ab}(P) n^a n^b\big), \end{eqnarray} and \begin{eqnarray}\label{rho_expansion} \rho(P, n^a) = 1 -\frac{1}{6} L^2 R_{ab}(P) n^a n^b + o\big(L^2 R_{ab}(P) n^a n^b\big). \end{eqnarray} Again, this applies identically also to Riemannian manifolds (such as the one obtained from Euclideanisation), and its form coincides with the expansion obtained in \cite{Pad04, Pad06, Pad08, Pad10} by defining $\rho$ in the Euclideanised space. \section{qmetric and null geodesics} If we try to extend the scope of the effective metric approach to include null geodesics, expression (\ref{qab}) becomes ill defined in this case, since $\sigma^2 =0$ all along any null geodesic, and in principle we are then in trouble. We notice however the following. Any affine parametrization $\lambda$ of a null geodesic can be thought of as a measure of distance along the geodesic performed by a canonical observer picked at a certain point $x$ of the geodesic and parallel transported along the geodesic. Since, when going to the effective metric $q_{ab}$, the squared distance in the coincidence limit is the finite value $\epsilon L^2$ (request {\bf R2} above), we can expect that the effect of the qmetric in the null case is to induce a mapping of the parametrization $\lambda$ to a new parametrization $\tilde \lambda = \tilde \lambda(\lambda)$, with ${\tilde \lambda} \rightarrow L$ when $\lambda(p, P) \rightarrow 0$, i.e. when $p \rightarrow P$.
In analogy with the spacelike/timelike case, we can then think of giving an expression for $q_{ab}(p, P)$ when $p$ is on a null geodesic from $P$ in terms of two functions $\alpha_\gamma = \alpha_\gamma(\lambda)$ and $A_\gamma = A_\gamma(\lambda)$ defined on the geodesic, and determined by a condition on the squared geodesic distance and one on the d'Alembertian. In other words, this suggests we assume that the effects of the existence of a limiting length are captured by an effective metric bitensor $q_{ab}$ as above, with its expression on a null geodesic stemming from requiring that the affine parametrization $\lambda$ gets modified into $\lambda \rightarrow [\lambda]_q = {\tilde \lambda}(\lambda)$ with {\bf (G1)} ${\tilde \lambda} = \lambda$ if $L = 0$ (or ${\tilde \lambda} \simeq \lambda$ when $\lambda \rightarrow \infty$), {\bf (G2)} ${\tilde \lambda}(0^+) = L$, and {\bf (G3)} the kernel $G(\sigma^2)$ gets modified into $[G]_q(\sigma^2) = G(S_L)$ in all maximally symmetric spacetimes, i.e. {\bf (G3)} coincides with {\bf (R3)} above on null geodesics. We see that dealing with the null case is not so straightforward, in that we are forced to rewrite from scratch, for this case, the rules for going to the qmetric given a metric, in terms of an affine parameter $\lambda$ defined on null geodesics only, i.e. $q_{ab}$ is defined strictly on null geodesics and knows nothing outside them. This, moreover, leads to the tricky circumstance that the operators we look at when constraining the expression for $q_{ab}$ (e.g. the d'Alembertian) should be considered in a form which does not hinge on any knowledge, regarding the elements which enter the definition of the operator itself (directional derivatives, vectors), of what happens outside the $(D-1)$-dimensional submanifold swept by all the null geodesics emanating from a point.
Let $\gamma$ be a null geodesic through $P$, with affine parameter $\lambda = \lambda(p, P)$ with $\lambda(P, P) = 0$, and null tangent vector $l^a = \frac{dx^a}{d\lambda}$, i.e. $\nabla_a (\sigma^2) = 2 \lambda l_a$ (see e.g. \cite{PPV}). We introduce a canonical observer at $P$, with velocity $V^a$, such that $l_a V^a = -1$. By parallel transport of the observer along $\gamma$, this relation extends all along $\gamma$, with $\lambda$ having the meaning of a distance as measured by this observer. We affinely parametrize any other null geodesic $\hat \gamma$ which goes through $P$, and require ${\hat l}_a V^a = -1$. What we obtain this way is a $(D-1)$-dimensional congruence $\Gamma$ of null geodesics emanating from $P$ which is affinely parametrized and has deviation vectors orthogonal to the geodesics. We introduce a second null vector $m^a$ at $P$, defined by $m^a \equiv 2 V^a - l^a$ and parallel transported along the geodesic. This gives $m_a V^a = -1$ and $m_a l^a = -2$ all along $\gamma$. The vector $m^a$ does depend on the observer we have chosen. Let $q_{ab}(p, P)$, $p$ on $\gamma$, be of the form \begin{eqnarray}\label{qab_null} q_{ab} = A_\gamma g_{ab} -\frac{1}{2} \Big(\frac{1}{\alpha_\gamma} - A_\gamma\Big) (l_a m_b + m_a l_b). \end{eqnarray} From $q^{ab} q_{bc} = \delta^a_c$, we get \begin{eqnarray} q^{ab} = \frac{1}{A_\gamma} g^{ab} + \frac{1}{2} \Big(\frac{1}{A_\gamma} - \alpha_\gamma\Big) (l^a m^b + m^a l^b), \end{eqnarray} where $l^a = g^{ab} l_b$, $m^a = g^{ab} m_b$. Notice that $q^{ab} l_a l_b = 0$, and the geodesic is null also according to the qmetric. Our first task is to determine the form of $\alpha_\gamma$. To this aim, we make use of the request that $[l^a]_q = dx^a/d\tilde\lambda$ be parallel transported according to the qmetric. We need this if $\tilde\lambda$ is to be interpreted as a (quantum) arc length according to a canonical observer.
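Before turning to that, the stated form of $q^{ab}$ can be checked to invert (\ref{qab_null}) by direct computation, using only $l^a l_a = m^a m_a = 0$ and $l^a m_a = -2$. A quick symbolic matrix sketch in a flat background, with our own illustrative choices $D=4$, $V^a = (1,0,0,0)$ and $l^a = (1,0,0,1)$:

```python
import sympy as sp

A, al = sp.symbols('A alpha', positive=True)
g  = sp.diag(-1, 1, 1, 1)                 # flat metric, signature (-,+,+,+)
gi = g.inv()

lu = sp.Matrix([1, 0, 0, 1])              # null l^a, with l_a V^a = -1 for V^a=(1,0,0,0)
mu = sp.Matrix([1, 0, 0, -1])             # m^a = 2 V^a - l^a, also null, m_a l^a = -2
ld, md = g * lu, g * mu                   # lowered indices

sym = lambda x, y: x * y.T + y * x.T      # symmetrized outer product x_a y_b + y_a x_b

q_dn = A * g  - sp.Rational(1, 2) * (1/al - A) * sym(ld, md)   # q_{ab}, eq. (qab_null)
q_up = gi / A + sp.Rational(1, 2) * (1/A - al) * sym(lu, mu)   # claimed inverse q^{ab}

print(sp.simplify(q_up * q_dn))           # -> 4x4 identity, i.e. q^{ab} q_{bc} = delta^a_c
```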
We have \begin{eqnarray} [l^b]_q \ [\nabla_b]_q \ [l_c]_q &=& \frac{d\lambda}{d\tilde\lambda} \, l^b \Bigg(\partial_b\bigg(\frac{d\lambda}{d\tilde\lambda} \frac{1}{\alpha_\gamma} l_c\bigg) - [\Gamma^a_{bc}]_q \frac{d\lambda}{d\tilde\lambda} \ \frac{1}{\alpha_\gamma} l_a\Bigg), \end{eqnarray} where $ [l_c]_q = q_{ac} [l^a]_q = \frac{d\lambda}{d\tilde\lambda} \, \frac{1}{\alpha_\gamma} \, l_c. $ Here, from $ {\Gamma^a}_{bc} = \frac{1}{2} g^{ad} (-\partial_d g_{bc} + \partial_c g_{bd} +\partial_b g_{dc}), $ we have \begin{eqnarray} [{\Gamma^a}_{bc}]_q &=& \frac{1}{2} q^{ad} (-\partial_d q_{bc} + \partial_c q_{bd} +\partial_b q_{dc}) \nonumber \\ &=& \frac{1}{2} q^{ad} (-\nabla_d q_{bc} + 2 \nabla_{\left(b\right.} q_{\left.c\right)d}) + {\Gamma^a}_{bc} \nonumber \end{eqnarray} (cf. \cite{KotG}). Using this, we get \begin{eqnarray}\label{rayqab2_23_1} [l^b]_q \ [\nabla_b]_q \ [l_c]_q &=& \frac{d\lambda}{d\tilde\lambda} \, l_c \, \frac{d}{d\lambda} \bigg(\frac{d\lambda}{d\tilde\lambda} \, \frac{1}{\alpha_\gamma}\bigg) - \frac{1}{2} \, \bigg(\frac{d\lambda}{d\tilde\lambda}\bigg)^2 \, l^d l^b \, (-\nabla_d q_{bc} + 2 \nabla_{\left(b\right.} q_{\left.c\right)d}) \nonumber \\ &=& \frac{d\lambda}{d\tilde\lambda} \, l_c \, \frac{d}{d\lambda} \bigg(\frac{d\lambda}{d\tilde\lambda} \, \frac{1}{\alpha_\gamma}\bigg) - \bigg(\frac{d\lambda}{d\tilde\lambda}\bigg)^2 \, \Big(\frac{1}{\alpha_\gamma} - A_\gamma\Big) \ l^b \nabla_c l_b, \end{eqnarray} where in the first equality we used $ l^b \nabla_b l_c = 0 $ and $ q^{ad} l_a = \alpha_\gamma l^d, $ and, in the second, $ l^d l^b \nabla_c q_{bd} = 2 \big(\frac{1}{\alpha_\gamma} - A_\gamma\big) \, l^b \nabla_c l_b. $ Here, $ \nabla_c l_b $ leads us to consider variations of $l_b$ outside $\Gamma$. However, in whichever way the null vector $l^b$ might be thought to be extended outside $\Gamma$, it will always hold true that $ \nabla_c(l^b l_b) = 2 l^b \nabla_c l_b = 0.
$ We then have \begin{eqnarray}\label{rayqab2_23_2} [l^b]_q \ [\nabla_b]_q \ [l_c]_q &=& \frac{d\lambda}{d\tilde\lambda} \, l_c \, \frac{d}{d\lambda} \bigg(\frac{d\lambda}{d\tilde\lambda} \, \frac{1}{\alpha_\gamma}\bigg). \end{eqnarray} $ [l^b]_q \ [\nabla_b]_q \ [l_c]_q = 0 $ requires $ \alpha_\gamma = K \frac{d\lambda}{d\tilde\lambda}, $ with $K$ a constant. To determine $K$ we use the following. When $\lambda\to \infty$, $d\lambda/d\tilde\lambda \to 1$ and we must also have $ q_{ab} \to g_{ab}. $ This implies both $ \lim_{\lambda\to \infty} A_\gamma = 1 $ and $ K = 1. $ What we get is thus \begin{eqnarray}\label{alpha_gamma} \alpha_\gamma = \frac{1}{d{\tilde \lambda}/d\lambda}. \end{eqnarray} As for the determination of $A_\gamma$, we have to refer to {\bf G3}, i.e. we consider the d'Alembertian in maximally symmetric spaces at points on null geodesics. We first try to find a convenient expression for the d'Alembertian. Due to maximal symmetry, we can think in terms of $f = f(\sigma^2)$ and write \begin{eqnarray} \Box f &=& \nabla_a \nabla^a f \nonumber \\ &=& \nabla_a\Big(\partial^a\sigma^2 \frac{df}{d\sigma^2}\Big) \nonumber \\ &=& \big(\nabla_a \partial^a\sigma^2\big) \frac{df}{d\sigma^2} + \big(\partial^a\sigma^2\big) \partial_a \frac{df}{d\sigma^2} \nonumber \\ &=& \big(\nabla_a \partial^a\sigma^2\big) \frac{df}{d\sigma^2} + \big(\partial^a\sigma^2\big) \big(\partial_a\sigma^2\big) \frac{d^2f}{d(\sigma^2)^2}. \nonumber \end{eqnarray} When going to the null geodesic $\gamma$, $ \big(\partial^a\sigma^2\big) \big(\partial_a\sigma^2\big) \rightarrow (2 \lambda l^a) (2 \lambda l_a) = 0 $ and we get \begin{eqnarray} \nonumber \Box f = \big(\nabla_a \partial^a\sigma^2\big) \frac{df}{d\sigma^2}. \end{eqnarray} At a point $p'$ close to $\Gamma$ but, possibly, not exactly on it, we can write (cf.
\cite{VisA}) \begin{eqnarray}\label{notquiteongamma} \partial^a\sigma^2_{|p'} = 2 \lambda \ l^a_{|p'} + 2 \nu \ m^a_{|p'}, \end{eqnarray} where $\lambda$ and $\nu$ are curvilinear null coordinates of $p'$ (there is a unique point $p$ on $\Gamma$ from which $p'$ is reachable through a null geodesic $\beta$ with tangent $m^a$ at $p$; $\nu$ is the affine parameter of $p'$ along $\beta$, with $\nu(p) = 0$), $l^a_{|p'}$ and $m^a_{|p'}$ are $l^a$ and $m^a$ parallel transported along $\beta$ from $p$ to $p'$. This gives, on $\gamma$, \begin{eqnarray}\label{nabla_partial} \nabla_a\partial^a\sigma^2 &=& 2 \ \big(\lambda \nabla_a l^a + l^a \partial_a\lambda + m^a \partial_a\nu\big) \nonumber \\ &=& 2 \ \big(\lambda \nabla_a l^a + 2), \end{eqnarray} and then \begin{eqnarray} \Box f = \big(4 + 2 \lambda \nabla_a l^a\big) \frac{df}{d\sigma^2} = \big(4 + 2 \lambda \nabla_i l^i\big) \frac{df}{d\sigma^2}, \end{eqnarray} $i = 1, ..., D-1$ indices of components on $\Gamma$. Here, we emphasized the fact that, since the covariant derivative of $l^a$ along $\beta$ is 0, $\nabla_a l^a$ is completely defined within $\Gamma$ and coincides with the expansion of $\Gamma$, $\nabla_a l^a = \nabla_i l^i$. Going to the qmetric, the geodesic $\gamma$ remains null, and we have \begin{eqnarray}\label{Box_q1} [\Box f]_q &=& \big(4 + 2 [\lambda \nabla_a l^a]_q\big) \Big[\frac{df}{d\sigma^2}\Big]_q \nonumber \\ &=& \big(4 + 2 [\lambda]_q \ [\nabla_a l^a]_q\big) \frac{d[f]_q}{dS_L} \nonumber \\ &=& \big(4 + 2 {\tilde \lambda} \ [\nabla_a l^a]_q\big) \Big(\frac{df}{d\sigma^2}\Big)_{|\sigma^2=S_L}. \end{eqnarray} Here $ [l^a]_q = dx^a/d{\tilde \lambda} = (d\lambda/d{\tilde \lambda}) \ l^a, $ and $ f\!\!: \sigma^2 \mapsto f(\sigma^2) $ gets mapped by the qmetric into $ [f]_q\!\!: \sigma^2 \mapsto S_L \mapsto f(S_L)=[f]_q(\sigma^2) $ which has $ \frac{d[f]_q}{dS_L} = (\frac{df}{d\sigma^2})_{|\sigma^2=S_L}. $ As for the divergence, we have $ [\nabla_a l^a]_q = [(\partial_a + {\Gamma^b}_{ab}) l^a]_q. 
$ From \begin{eqnarray} [{\Gamma^b}_{ab}]_q &=& \frac{1}{2} q^{bc} (-\nabla_c q_{ab} + 2 \nabla_{\left(a\right.} q_{\left.b\right)c}) + {\Gamma^b}_{ab} \nonumber \\ &=& \frac{1}{2} q^{bc} \nabla_a q_{bc} + {\Gamma^b}_{ab}, \nonumber \end{eqnarray} we get \begin{eqnarray} \nonumber [\nabla_a l^a]_q = \nabla_a\Big(\frac{d\lambda}{d{\tilde \lambda}} l^a\Big) + \frac{1}{2} q^{bc} (\nabla_a q_{bc}) \ \frac{d\lambda}{d{\tilde\lambda}} \ l^a. \end{eqnarray} This expression shows explicitly that all differentials are indeed taken on $\Gamma$. Using formula (\ref{qab_null}) for $q_{ab}$, direct computation gives \begin{eqnarray}\label{rayqab2_25_1} [\nabla_a l^a]_q &=& \frac{d\lambda}{d{\tilde\lambda}} \ \nabla_i l^i + \frac{d}{d\lambda} \Big(\frac{d\lambda}{d{\tilde\lambda}}\Big) + \frac{1}{2} \frac{d\lambda}{d{\tilde\lambda}} \Big\{ (D-2) \frac{d}{d\lambda} \ln A_\gamma - 2 \frac{d}{d\lambda} \ln \alpha_\gamma\Big\} \nonumber \\ &=&\frac{d\lambda}{d{\tilde\lambda}} \ \nabla_i l^i + \frac{1}{2} (D-2) \frac{d\lambda}{d{\tilde\lambda}} \frac{d}{d\lambda} \ln A_\gamma, \nonumber \end{eqnarray} where in the second equality we used the expression (\ref{alpha_gamma}) for $\alpha_\gamma$. Inserting this into equation (\ref{Box_q1}), we get \begin{eqnarray} [\Box f]_q = \Big\{4 + 2 {\tilde\lambda} \frac{d\lambda}{d{\tilde\lambda}} \ \nabla_i l^i + {\tilde\lambda} \ (D-2) \frac{d\lambda}{d{\tilde\lambda}} \frac{d}{d\lambda} \ln A_\gamma\Big\} \Big(\frac{df}{d\sigma^2}\Big)_{|\sigma^2=S_L}. \end{eqnarray} Now we are ready to implement condition {\bf G3}. We require that, if $G = G(\sigma^2_{|\tilde p'})$, with $\sigma^2_{|\tilde p'} = S_L$, is a solution of $\Box G = 0$ in $\tilde p$ at $\tilde\lambda$ on $\gamma$ ($\tilde p'$ is in a ($D$-dim) neighbourhood of $\tilde p$; with $ \sigma^2_{|\tilde p'}/\sigma^2_{|p'} \to \tilde\lambda^2/\lambda^2 $ when $ \tilde p' \to \tilde p $ and $ p' \to p, $ due to continuity reasons), i.e.
if $\Box G_{|\tilde p} = 0$, then $[G]_q(\sigma^2) \equiv G(S_L(\sigma^2))$ be solution of $ [\Box G]_q = 0 $ in $p$ at $\lambda$ on $\gamma$, i.e. \begin{eqnarray}\label{tra21e22} 4 + 2 {\tilde\lambda} \frac{d\lambda}{d{\tilde\lambda}} \ \nabla_i l^i + {\tilde\lambda} \ (D-2) \frac{d\lambda}{d{\tilde\lambda}} \frac{d}{d\lambda} \ln A_\gamma = 0 \end{eqnarray} in $p$. We proceed first to calculate $\Box G_{|\tilde p}$. In $\tilde p'$, we have \begin{eqnarray} \Box G_{|\tilde p'} &=& \big(\nabla_a \nabla^a G\big)_{|\tilde p'} \nonumber \\ &=& \nabla_a \Big( (\partial^a \sigma^2_{|\tilde p'}) \frac{dG}{d\sigma^2_{|\tilde p'}}\Big) \nonumber \\ &=& \big(\nabla_a\partial^a \sigma^2_{|\tilde p'}\big) \frac{dG}{d\sigma^2_{|\tilde p'}} + \big(\partial^a \sigma^2_{|\tilde p'}\big) \, \partial_a \frac{dG}{d\sigma^2_{|\tilde p'}} \nonumber \\ &=& \big(\nabla_a\partial^a \sigma^2_{|\tilde p'}\big) \frac{dG}{d\sigma^2_{|\tilde p'}} + \big(\partial^a \sigma^2_{|\tilde p'}\big) \big(\partial_a \sigma^2_{|\tilde p'}\big) \, \frac{d}{d\sigma^2_{|\tilde p'}} \bigg(\frac{dG}{d\sigma^2_{|\tilde p'}}\bigg). \end{eqnarray} When $\tilde p' \to \tilde p$ on $\gamma$, $ \big(\partial^a \sigma^2_{|\tilde p'}\big) \big(\partial_a \sigma^2_{|\tilde p'}\big) \to (2 {\tilde\lambda} \, {l^a}_{|{\tilde p}}) (2 {\tilde\lambda} \, {l_a}_{|{\tilde p}}) = 0 $ and thus what matters here is the first term. We have \begin{eqnarray} \nabla_a \partial^a \sigma^2_{|\tilde p'} &=& \big(\nabla_a (2 \lambda \, l^a + 2 \nu \, m^a)\big)_{|\tilde p'} \nonumber \\ &=& \nabla_a(2 \tilde\lambda \, {l^a}_{|\tilde p'} + 2 \tilde\nu \, {m^a}_{|\tilde p'}), \end{eqnarray} where we made use of relation (\ref{notquiteongamma}) and wrote $ \tilde\lambda = \frac{1}{2} ({\tilde t} + {\tilde r}), $ $ \tilde\nu = \frac{1}{2} ({\tilde t} - {\tilde r}).
$ When going to $\gamma$, we get \begin{eqnarray} (\nabla_a \partial^a \sigma^2)_{|\tilde p} &=& 2 \tilde\lambda \, (\nabla_a l^a)_{|\tilde p} + 2 {l^a}_{|\tilde p} \, \nabla_a \tilde\lambda + 2 {m^a}_{|\tilde p} \, \nabla_a \tilde\nu \nonumber \\ &=& 2 \tilde\lambda \, (\nabla_i l^i)_{|\tilde p} + 4, \end{eqnarray} for $\tilde \lambda$ is the affine parameter $\lambda$ at $\tilde p$, and $ {m^a}_{|\tilde p} = dx^a/d{\tilde\nu}. $ Thus, we have \begin{eqnarray}\label{qmetric2_27_3} \Box G_{|\tilde p} &=& \bigg( 2 \tilde\lambda \, (\nabla_i l^i)_{|\tilde p} + 4 \bigg) \, \frac{dG}{d\sigma^2_{|\tilde p}}. \end{eqnarray} $ \Box G_{|\tilde p} = 0 $ then means \begin{eqnarray} 2 \tilde\lambda \, (\nabla_i l^i)_{|\tilde p} + 4 = 0. \end{eqnarray} Inserting this into (\ref{tra21e22}), one obtains \begin{eqnarray} -2 \tilde\lambda \, (\nabla_i l^i)_{|\tilde p} + 2 \tilde\lambda \, \frac{d\lambda}{d\tilde\lambda} \, \nabla_i l^i + \tilde\lambda \, (D-2) \, \frac{d\lambda}{d\tilde\lambda} \, \frac{d}{d\lambda} \ln A_\gamma = 0, \nonumber \end{eqnarray} which is \begin{eqnarray}\label{44_5} -2 \, \frac{d\tilde\lambda}{d\lambda} \, (\nabla_i l^i)_{|\tilde p} + 2 \, \nabla_i l^i + (D-2) \, \frac{d}{d\lambda} \ln A_\gamma = 0. \end{eqnarray} Thanks to the relation (\cite{DeWA, DeWB}; see \cite{VisA, PPV}) \begin{eqnarray} \nabla_a^{(p)}\big[\Delta(p, P) \nabla^a_{(p)} \sigma^2(p, P)\big] = 2 D \ \Delta(p, P) \end{eqnarray} (valid for spacelike/timelike as well as null geodesics), which gives \begin{eqnarray} \nonumber \nabla_a \partial^a \sigma^2 = 2 D + (\nabla_a \ln \Delta^{-1}) \ \partial^a \sigma^2 \end{eqnarray} with $\partial^a \sigma^2 = 2 \lambda l^a$ on $\gamma$, using (\ref{nabla_partial}) the expansion of the congruence can be usefully expressed in terms of the van Vleck determinant as (cf. 
\cite{VisA}) \begin{eqnarray} \nabla_a l^a = \nabla_i l^i = \frac{D-2}{\lambda} + \frac{d}{d\lambda}\ln \Delta^{-1} \end{eqnarray} and \begin{eqnarray} (\nabla_a l^a)_{|\tilde p} = (\nabla_i l^i)_{|\tilde p} = \frac{D-2}{\tilde\lambda} + \frac{d}{d\tilde\lambda}\ln \Delta_S^{-1}, \end{eqnarray} where $\Delta_S$ is the van Vleck determinant evaluated at $\tilde p$. Substituting this, equation (\ref{44_5}) above becomes \begin{eqnarray} -2 \, \bigg( \frac{d\tilde\lambda}{d\lambda} \, \frac{D-2}{\tilde\lambda} + \frac{d}{d\lambda}\ln \Delta_S^{-1}\bigg) + 2 \, \bigg(\frac{D-2}{\lambda} + \frac{d}{d\lambda}\ln \Delta^{-1}\bigg) + (D-2) \, \frac{d}{d\lambda} \ln A_\gamma = 0, \nonumber \end{eqnarray} or \begin{eqnarray} -2 \, \frac{d\tilde\lambda}{d\lambda} \, \frac{1}{\tilde\lambda} -\frac{2}{D-2} \, \frac{d}{d\lambda}\ln \Delta_S^{-1} +\frac{2}{\lambda} +\frac{2}{D-2} \, \frac{d}{d\lambda}\ln \Delta^{-1} + \frac{d}{d\lambda} \ln A_\gamma = 0, \nonumber \end{eqnarray} which is \begin{eqnarray} \frac{d}{d\lambda} \ln \Bigg(\frac{\lambda^2}{{\tilde\lambda}^2} \, \Big(\frac{\Delta_S}{\Delta}\Big)^{\frac{2}{D-2}} A_\gamma\Bigg) = 0. \end{eqnarray} Thus \begin{eqnarray} A_\gamma = C \, \frac{\tilde\lambda^2}{{\lambda}^2} \, \Big(\frac{\Delta}{\Delta_S}\Big)^{\frac{2}{D-2}}, \nonumber \end{eqnarray} where $C$ is a constant. To determine $C$, we note that using this expression we get, in the $\lambda \rightarrow \infty$ limit, $A_\gamma \rightarrow C$. Since, as we saw, $q_{ab} \rightarrow g_{ab}$ in the same limit implies $A_\gamma \rightarrow 1$, we get $C = 1$. Our expression for $A_\gamma$ is finally \begin{eqnarray}\label{A_gamma} A_\gamma = \frac{\tilde\lambda^2}{{\lambda}^2} \, \Big(\frac{\Delta}{\Delta_S}\Big)^{\frac{2}{D-2}}. 
\end{eqnarray} In conclusion, in this Section we have obtained the expression (\ref{qab_null}) for the qmetric $q_{ab}$ for null geodesics, with the functions $\alpha_\gamma$ and $A_\gamma$ in it, defined on the null geodesics, required to have the expressions given by equations (\ref{alpha_gamma}) and (\ref{A_gamma}). We notice that no dependence on the chosen canonical observer is present in $\alpha_\gamma$ or $A_\gamma$. The expression (\ref{qab_null}) for $q_{ab}$, however, does depend on the observer, through $m^a$. \section{$\rho$ for null geodesics (Lorentz sector)} Using the results of the previous Section, we now derive an expression for $\rho$ for null geodesics. In complete analogy with the timelike/spacelike case, this quantity can be defined, in the Lorentz sector, as (cf. equation (\ref{rho_def})) \begin{eqnarray}\label{rho_null} \rho(P, l^a) = \bigg(\lim_{p \rightarrow P} \frac{[d^{D-1}V]_{q(g)}(p, P)}{[d^{D-1}V]_{q(\eta)}(p, P)}\bigg) _{\gamma(l^a)}. \end{eqnarray} Here, $\gamma(l^a)$ is a null geodesic through $P$, affinely parameterized through $\lambda = \lambda(p, P)$ with $\lambda(P, P) = 0$, with tangent vector $k^a = dx^a/d\lambda$ along it which takes the value $l^a$ at $P$, i.e. $l^a = k^a_{|P}$. The limit is taken for $p$ approaching $P$ along $\gamma(l^a)$. $ d^{D-1}V $ is a $(D-1)$-dim volume element of a null hypersurface $\Sigma_\gamma$ through $p$, defined by $\Phi = {\rm const}$, with $-(\partial_a \Phi)_{|p} = (k_a)_{|p}$. Apart from this condition on the gradient, the hypersurface $\Sigma_\gamma$ is arbitrary. $ [d^{D-1}V]_q $ is the volume of that same element of hypersurface, according to the qmetric, with $\Sigma_\gamma$ being null also according to the qmetric ($q^{ab} k_a k_b = 0$, as we saw before). The index $q(g)$, or simply $q$, refers to a generic metric $g_{ab}$, while $q(\eta)$ is for the flat case. $d^{D-1}V$ can be written as follows (\cite{PoiA, PadN}, e.g.).
Using the vector $m^a$ as defined in the previous Section, we can write the metric transverse to $k^a$ at $p$ as \begin{eqnarray} \nonumber h_{ab} = g_{ab} + \frac{1}{2} (k_a m_b + m_a k_b). \end{eqnarray} Introducing the coordinates $(\lambda, \theta^A)$ for $\Sigma_\gamma$, with the coordinates $\theta^A$ spanning the $(D-2)$-dim space transverse to the generators of $\Sigma_\gamma$, the induced metric on the $(D-2)$-dim space is given by \begin{eqnarray} \nonumber \sigma_{AB} &=& g_{ab} e^a_A e^b_B \nonumber \\ &=& h_{ab} e^a_A e^b_B \nonumber \end{eqnarray} in terms of the vectors $e^a_A = \big(\frac{\partial x^a}{\partial \theta^A}\big)_\lambda$ ($e^a_A$ is orthogonal to both $k^a$ and $m^a$). The volume element can then be written as \begin{eqnarray} d^{D-1}V = \sqrt{\sigma} \ d^{D-2}\theta \ d\lambda, \end{eqnarray} with $\sigma = \det (\sigma_{AB})$. Going to the qmetric, $k^a = dx^a/d\lambda$ gets mapped to $[k^a]_q = dx^a/d{\tilde\lambda} = (d\lambda/d{\tilde\lambda}) k^a$. $\Sigma_\gamma$ is null also according to the qmetric, and the metric transverse (according to $q_{ab}$) to $[k^a]_q$ is given by \begin{eqnarray} [h_{ab}]_q = q_{ab} + \frac{1}{2} \Big([k_a]_q [m_b]_q + [m_a]_q [k_b]_q\Big), \nonumber \end{eqnarray} with $ [k_a]_q = q_{ab} [k^a]_q = \frac{1}{\alpha_\gamma} \frac{d\lambda}{d{\tilde\lambda}} k_a = k_a, $ and $ [m_a]_q = \frac{d{\tilde\lambda}}{d\lambda} m_a $ (to get $q^{ab} [k_a]_q [m_b]_q = -2$). Using the expression (\ref{qab_null}) for $q_{ab}$, we get \begin{eqnarray} [h_{ab}]_q = A_\gamma h_{ab}, \end{eqnarray} and, from $ e^a_A = \big(\frac{\partial x^a}{\partial \theta^A}\big)_\lambda = \big(\frac{\partial x^a}{\partial \theta^A}\big)_{\tilde\lambda} = [e^a_A]_q, $ \begin{eqnarray}\label{sigma_q} [\sigma_{ab}]_q &=& q_{ab} [e^a_A]_q [e^b_B]_q \nonumber \\ &=& [h_{ab}]_q [e^a_A]_q [e^b_B]_q \nonumber \\ &=& [h_{ab}]_q e^a_A e^b_B \nonumber \\ &=& A_\gamma \sigma_{ab}.
\end{eqnarray} The qmetric volume element is $ [d^{D-1}V]_q = [\sqrt{\sigma}]_q \, d^{D-2}\theta \ d{\tilde\lambda} = [d^{D-2} {\cal A}]_q \, d{\tilde\lambda} $ with $ [d^{D-2} {\cal A}]_q = [\sqrt{\sigma}]_q \, d^{D-2}\theta $ the $(D-2)$-dim area of the element of surface transverse to the generators according to the qmetric, and $d^{D-2} {\cal A} = \sqrt{\sigma} \ d^{D-2}\theta$ the area according to $g_{ab}$. Incidentally, this form of $[d^{D-1}V]_q$ gives, from \begin{eqnarray} \frac{[d^{D-1}V]_{q(g)}}{[d^{D-1}V]_{q(\eta)}} = \frac{[d^{D-2}{\cal A}]_{q(g)}}{[d^{D-2}{\cal A}]_{q(\eta)}}, \end{eqnarray} an equivalent way to express $\rho$, as \begin{eqnarray} \rho(P, l^a) = \bigg(\lim_{p \rightarrow P} \frac{[d^{D-2}{\cal A}]_{q(g)}(p, P)}{[d^{D-2}{\cal A}]_{q(\eta)}(p, P)}\bigg) _{\gamma(l^a)}. \end{eqnarray} From (\ref{sigma_q}), \begin{eqnarray} [d^{D-1}V]_q &=& [\sqrt{\sigma}]_q \ d^{D-2}\theta \ d{\tilde\lambda} \nonumber \\ &=& A_\gamma^{\frac{D-2}{2}} \sqrt{\sigma} \ d^{D-2}\theta \ d{\tilde\lambda} \nonumber \\ &=& A_\gamma^{\frac{D-2}{2}} d^{D-2} {\cal A} \ d{\tilde\lambda}. \end{eqnarray} Using, on the $(D-2)$-surface, orthogonal coordinates $z^A$ such that any chosen one of them, $z^{\bar A}$, can be put in the form $z^{\bar A} = \lambda \, \chi$, with $\chi$ such that $\lambda \, d\chi$ is proper distance, we can write \begin{eqnarray} \nonumber [d^{D-1}V]_q = A_\gamma^{\frac{D-2}{2}} \lambda^{D-2} \ (1 + {\cal O}(\lambda^2)) \ (d\chi)^{D-2} d{\tilde\lambda}, \end{eqnarray} where the ${\cal O}(\lambda^2)$ term represents the effects of curvature and is absent in the flat case. Substituting here the expression (\ref{A_gamma}) for $A_\gamma$, we get \begin{eqnarray} [d^{D-1}V]_q &=& {\tilde\lambda}^{D-2} \frac{\Delta}{\Delta_S} \ (1 + {\cal O}(\lambda^2)) \ (d\chi)^{D-2} d\tilde\lambda.
\nonumber \end{eqnarray} Taking the limit $\lambda \rightarrow 0$ we see that this quantity, as well as $[d^{D-2}{\cal A}]_q$, does not vanish, going to the values \begin{eqnarray} \lim_{\lambda \rightarrow 0} \ [d^{D-1}V]_q = L^{D-2} \frac{1}{\Delta_L(P, l^a)} \ (d\chi)^{D-2} d\tilde\lambda, \end{eqnarray} and \begin{eqnarray} \lim_{\lambda \rightarrow 0} \ [d^{D-2}{\cal A}]_q = L^{D-2} \frac{1}{\Delta_L(P, l^a)} \ (d\chi)^{D-2}, \end{eqnarray} with $\Delta_L(P, l^a) = \Delta({\bar p}, P)$, where $\bar p$ is that point on the null geodesic $\gamma(l^a)$ which has $\lambda({\bar p}, P) = L$. In the flat case, $\Delta = 1$ identically and then $\Delta_L(P, l^a) = 1$, as we said, and the expressions above reduce to $ \lim_{\lambda \rightarrow 0} [d^{D-1}V]_{q(\eta)} = L^{D-2} \ (d\chi)^{D-2} d\tilde\lambda $ and $ \lim_{\lambda \rightarrow 0} [d^{D-2}{\cal A}]_{q(\eta)} = L^{D-2} \ (d\chi)^{D-2}. $ Thus, \begin{eqnarray}\label{Delta_null} \rho(P, l^a) &=& \frac{\lim_{\lambda \rightarrow 0} [d^{D-1}V]_{q(g)}} {\lim_{\lambda \rightarrow 0} [d^{D-1}V]_{q(\eta)}} \nonumber \\ &=& \frac{1}{\Delta_L(P, l^a)}. \end{eqnarray} We obtain then, in the null case, that same form we found in the timelike/spacelike case. Since $l^a$ is assigned with the null geodesic at the start, we notice that, even if the qmetric $q_{ab}$ does depend on the chosen observer (through $m^a$), no dependence on the observer is left in $\rho$. For timelike/spacelike geodesics, we gave an expansion of $\Delta(p, P)$ in powers of $l = \sqrt{\epsilon \sigma^2}$ (equation (\ref{expansion})). For (affinely parameterized) null geodesics, $\Delta(p, P)$ can be analogously expanded in powers of $\lambda$ as (\cite{DeWA}; \cite{Xen, VisA, PPV}) \begin{eqnarray} \Delta(p, P) = 1 + \frac{1}{6} \lambda^2 R_{ab}(P) l^a l^b + o(\lambda^2 R_{ab}(P) l^a l^b).
\end{eqnarray} For $l^a$ in a neighbourhood of $0$, this gives \begin{eqnarray} \Delta_L(P, l^a) = 1 + \frac{1}{6} L^2 R_{ab}(P) l^a l^b + o(L^2 R_{ab}(P) l^a l^b), \end{eqnarray} and \begin{eqnarray} \rho(P, l^a) = 1 - \frac{1}{6} L^2 R_{ab}(P) l^a l^b + o(L^2 R_{ab}(P) l^a l^b). \end{eqnarray} This expression for $\rho$ is analogous to that reported above for timelike/spacelike geodesics (equation (\ref{rho_expansion})), and coincides with the expression which has been found through recourse to the Euclidean sector \cite{Pad04, Pad06, Pad08, Pad10}. \section{Conclusions} Starting from the quantum metric $q_{ab}$ put forward in \cite{KotE, KotF, StaA} for timelike/spacelike intervals from the assumption of existence of a lower limit length (along with some consistency conditions), we have introduced a notion of quantum metric $q_{ab}$ for null separated events, and found an expression for it in equation (\ref{qab_null}) (with (\ref{alpha_gamma}) and (\ref{A_gamma})). This expression, and the already existing expressions for timelike and spacelike geodesics \cite{StaA}, complete the task of providing quantum expressions for any kind of spacetime intervals. This quantum metric comes from a single basic request, that of length quantization, not from a specific quantum theory of gravity. As such, it finds in principle wide applicability across any specific quantum model of gravity which foresees quantization of length, i.e.\ in practice several, if not all, models. This means that in any such model these formulae might be reproducible and cross-checkable. The formulae for $q_{ab}$ for non-null intervals hint towards a statistical interpretation of spacetime \cite{Pad04}, and this is exploited in the introduction of a scalar function $\rho(P, v^a)$ expressing the density of quantum states, at event $P$ in the direction $v^a$, associated with atoms we may think spacetime is made of \cite{Pad04, Pad06, Pad08, Pad10}.
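The leading-order behaviour of $\rho$ given above can be checked symbolically; a minimal sketch using sympy (the symbol {\tt x} is our abbreviation for the scalar $L^2 R_{ab}(P) l^a l^b$, taken as a small parameter):

```python
import sympy as sp

# x abbreviates L^2 R_ab(P) l^a l^b, assumed small
x = sp.symbols('x')

# Leading expansion of the van Vleck determinant evaluated at lambda = L
Delta_L = 1 + x / 6

# rho = 1/Delta_L, expanded to first order in x
rho = sp.series(1 / Delta_L, x, 0, 2).removeO()

# Agrees with rho = 1 - (1/6) L^2 R_ab(P) l^a l^b + o(...)
assert sp.expand(rho - (1 - x / 6)) == 0
```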
Crucial to this is the realization that, according to the quantum metric $q_{ab}$ as applied to the Euclidean sector, the cross-sectional area of an equi-geodesic surface centered at $P$ does not vanish but goes to a finite limit when the surface shrinks classically to $P$, signalling this way (quantum) degrees of freedom for spacetime at $P$ \cite{Pad04}. Here, we have used the formula for $q_{ab}$ for null separated events to derive an expression for $\rho$ for $v^a$ null, thus remaining entirely within the Lorentz sector, i.e. without making use of Euclideanization (which is how $\rho$ was originally introduced). Key to this has been the finding that, analogously to what happens in the Euclidean case, according to the null quantum metric the cross-sectional area of a null equi-geodesic surface centered at $P$ does not vanish but remains finite when the surface shrinks classically to $P$. The formula we obtain for $\rho$ turns out to coincide with the formula derived through Euclideanization. The formula for null intervals, joined with the formulae for the timelike/spacelike cases, provides a complete account of $\rho$ based on quantum spacetime intervals. {\it Acknowledgements.} I am grateful to Sumanta Chakraborty and Dawood Kothawala for remarks and discussions on the topics of the paper.
\section{Price competition and breaking the ``Bertrand curse''} \label{s:bertrand-single} The results of Appendix~\ref{s:cournot} were for the Cournot notion of competition by offered capacity. Each SP decides to serve demand $q_i$ and then the price is determined by the total capacity $q_1+q_2$. The main alternative notion of competition is a {\em Bertrand} competition in which the providers offer a price to the market. In this section we examine how a price-based competition operates under various notions of price-sensitivity for the end users. We start with the outcome of our running example for the case of a Bertrand competition. \subsection{Bertrand analysis for the running example.} Recall our example from Section~\ref{s:narrative} in which the price elasticity function has parameters ${\varepsilon}=1.25$ and $Q=1000$. The per-unit capacity costs are $\beta_1=\$2.5M$ for SP1 and $\beta_2=\$2M$ for SP2 and the fixed capacity costs are $\alpha_1 = \$50M$ and $\alpha_2 = \$100M$. We now consider the dynamics of this example under the simplest type of Bertrand game. If the SPs offer different prices then all the demand goes to the one with the lowest price. If the two SPs offer the same price then the demand is split between them. The value of the lowest price determines the amount of demand in the market. \begin{figure*}[htb] \begin{center} \includegraphics[width=2.6in]{monopolyp.png} ~~ \includegraphics[width=2.6in]{monopolyp-blowup.png} \caption{(Left) The profit for the two SPs as a function of the price $p$. Here $\alpha_1=\$50M$, $\beta_1=\$2.5M$, $\alpha_2=\$100M$ and $\beta_2=\$2M$. (Right) A blown-up version of the figure.} \label{f:monopolyqp} \end{center} \end{figure*} In Figure~\ref{f:monopolyqp} (left) we show the monopoly profit function for these parameters as a function of price, $p$. We denote these functions by $\Pi_1^{mon}(p)$ and $\Pi_2^{mon}(p)$. Figure~\ref{f:monopolyqp} (right) shows a blown-up version of the figure.
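The break-even prices read off the figure can also be reproduced numerically; a minimal sketch in Python (the helper names and the bisection bracket $[1,10]$ are our own choices; the parameters are those stated above):

```python
EPS, Q = 1.25, 1000.0            # price elasticity and demand scale
ALPHA1, BETA1 = 50.0, 2.5        # SP1 fixed and per-unit costs ($M)
ALPHA2, BETA2 = 100.0, 2.0       # SP2 fixed and per-unit costs ($M)

def demand(p):
    """Constant-elasticity demand q(p) = Q p^(-eps), in PB."""
    return Q * p ** (-EPS)

def monopoly_profit(p, alpha, beta):
    """Pi^mon(p) = p q(p) - (alpha + beta q(p)), in $M."""
    q = demand(p)
    return p * q - (alpha + beta * q)

def break_even(alpha, beta, lo=1.0, hi=10.0):
    """Lowest price at which the monopoly profit turns non-negative,
    found by bisection on the rising part of the profit curve."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if monopoly_profit(mid, alpha, beta) < 0.0:
            lo = mid
        else:
            hi = mid
    return hi
```

With these parameters `break_even` returns a price just under $\$2.68M$ for SP1 and about $\$2.28M$ for SP2, while `demand(2.67)` is about $293$ PB and `monopoly_profit(2.67, ALPHA2, BETA2)` is about $\$96M$, consistent with the solution discussed next.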
We see from the figure that $\Pi_1^{mon}(p)\le 0$ for $p\le 2.68$ and $\Pi_2^{mon}(p)\le 0$ for $p\le 2.28$. Hence if $p_1\ge 2.68$ then SP2 can always claim all the demand and gain a positive profit by setting $p_2=\frac{1}{2}(p_1+2.28)$. As a result, a natural strategy for SP2 is to set $p_2$ slightly under 2.68 (e.g.\ at $2.67$). This effectively drives SP1 out of the market since there is no setting for $p_1$ that would allow it to claim nonzero demand and make a positive profit. The resulting solution is: \begin{eqnarray*} p_1=\$2.68M,&~&q_1=0PB,~\Pi_1=\$0M\\ p_2=\$2.67M,&~&q_2=293PB,~\Pi_2=\$96.3M. \end{eqnarray*} We now comment on the difference between the Bertrand and Cournot results and how they compare to the sharing scenario. \begin{itemize} \item In the Cournot game, each SP has the ability to drive the other out of the market. In the Bertrand game, only SP2 can do that. The reason for this asymmetry in the Bertrand game is that SP2 has a lower variable cost, i.e. $\beta_2<\beta_1$. (For example, this might be due to SP2 having a lower cost for deploying capacity.) Hence SP2 can make a profit at a lower price than SP1. \item If competition is modeled according to a Bertrand game, SP2 is better off sharing with SP1 than driving SP1 out of the market. In contrast, if competition is modeled according to a Cournot game, both SPs can do better than sharing if they are able to drive the other SP out of the market. \item With the Bertrand game, if SP2 drives SP1 out of the market, there is no action of SP1 that would cause SP2 to have a negative profit. In contrast, in the Cournot game if one SP is aggressive and tries to drive the other out of the market, the aggressive SP is in danger of making a negative profit if the other SP refuses to be submissive and also decides to act aggressively. \end{itemize} We now provide a general analysis of the Bertrand game. 
In Section~\ref{s:bertrand1} we examine the basic Bertrand model and then in Section~\ref{s:bertrand-varian} we show how the results change when not all users are price sensitive. This type of model has been considered as one way of breaking the ``Bertrand curse'' which occurs when both providers have the same costs and so neither can achieve a profit. In Section~\ref{s:sharing-price-competition} we consider another extension of the basic model that is motivated by potential actions of a regulator. In this extension (a combination of the sharing model and the Bertrand game) the SPs are allowed to share costs but the regulator enforces that they must still compete on price. \subsection{Bertrand model 1: all end users are price sensitive} \label{s:bertrand1} In this simplest case we assume that all demand goes to the SP that offers the lowest price and if both SPs offer the same price then the demand is split between them. Suppose that the $\alpha_i,\beta_i$ parameters are fixed. \begin{itemize} \item Suppose $\alpha_i\ge Q(({\varepsilon}\beta_i/({\varepsilon}-1))^{1-{\varepsilon}}-\beta_i^{1-{\varepsilon}}({\varepsilon}/({\varepsilon}-1))^{-{\varepsilon}})$ for $i=1,2$. In this case a Nash equilibrium is for both SPs to stay out of the market and keep a profit of zero. This is because neither can attain a positive profit, even when acting as a monopoly. \item If the above condition does not hold for SP $i$, let $\underline{p}_i=\min\{p:Qp^{1-{\varepsilon}}-(\alpha_i+\beta_iQp^{-{\varepsilon}})\ge 0\}$ and let $\bar{p}_i=\max\{p:Qp^{1-{\varepsilon}}-(\alpha_i+\beta_iQp^{-{\varepsilon}})\ge 0\}$. Suppose without loss of generality that $\underline{p}_1\ge \underline{p}_2$. If $\bar{p}_2\le \underline{p}_1$ then a Nash equilibrium is for SP2 to set price $p_2=p_2^{mon}$ and for SP1 to stay out of the market.
\item If $\bar{p}_2> \underline{p}_1$ and $\underline{p}_2< \underline{p}_1$ (i.e.\ with strict inequality) then a natural solution is for SP2 to set price $p_2=\min\{p_2^{mon},\underline{p}_1\}$ and for SP1 to stay out of the market. Note that this is not a Nash equilibrium in the strict sense since if SP1 does not participate then the optimal action of SP2 is to set its price to its monopoly price. However, if SP2 did that then SP1 could potentially get back into the market. Hence a more natural course of action for SP2 is to set its price to the best price that keeps SP1 out of the market. \item If $\underline{p}_2=\underline{p}_1$ then it is not hard to see that the only Nash equilibrium is for both SPs to set price $p_i=\underline{p}_i$ which gives them zero profit. This is an example of the so-called {\em Bertrand curse} in which either one provider is driven completely out of the market or else both providers make zero profit. \end{itemize} \subsection{Bertrand model 2: not all end users are price sensitive} \label{s:bertrand-varian} The Bertrand curse is generally viewed as an undesirable state of affairs and so there has been much research on methods to avoid it. We now consider how our results change in a framework of Bagwell and Lee~\cite{BagwellL14} that was inspired by earlier work of Varian~\cite{Varian80}. In this model we assume that a fraction $I$ of the end users (the ``informed'' users) are price sensitive and a fraction $U=1-I$ (the ``uninformed'' users) are price insensitive. However, each end user still generates demand based on price according to the price elasticity function. Let, \begin{eqnarray*} \Pi_i^+(p)&=&(I+\frac{U}{2})pq(p)-(\alpha_i+\beta_iq(p))\\ \Pi_i^-(p)&=&\frac{U}{2}pq(p)-(\alpha_i+\beta_iq(p)) \end{eqnarray*} In other words, $\Pi_i^+(p)$ is the profit function when SP $i$ has the low price and $\Pi_i^-(p)$ is the profit function when SP $i$ has the high price.
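For concreteness, these split-demand profit functions are easy to tabulate numerically. A minimal sketch, reusing the running-example demand parameters; the informed fraction $I=0.6$ is an illustrative choice of ours, not a value from the text:

```python
Q, EPS = 1000.0, 1.25    # running-example demand parameters
I = 0.6                  # informed (price-sensitive) fraction; illustrative
U = 1.0 - I              # uninformed fraction, split evenly between the SPs

def demand(p):
    """q(p) = Q p^(-eps)."""
    return Q * p ** (-EPS)

def profit_low(p, alpha, beta):
    """Pi_i^+(p): SP i has the low price, serving the informed users
    plus half of the uninformed ones."""
    q = demand(p)
    return (I + U / 2.0) * p * q - (alpha + beta * q)

def profit_high(p, alpha, beta):
    """Pi_i^-(p): SP i has the high price, serving only half of the
    uninformed users (the cost term follows the definition above)."""
    q = demand(p)
    return (U / 2.0) * p * q - (alpha + beta * q)
```

For any $p>0$ the two functions differ by exactly $I\,p\,q(p)$, so undercutting always raises gross revenue by the informed users' spend.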
Let $[\underline{p}^-_i,\bar{p}^-_i]$ be the price range on which $\Pi^-_i$ is non-negative and let $[\underline{p}^+_i,\bar{p}^+_i]$ be the price range on which $\Pi^+_i$ is non-negative. We assume without loss of generality that $\underline{p}^+_2\le \underline{p}^+_1$. For this case we claim that the following is a stable solution. SP1 sets its price $p_1$ so as to maximize $\Pi^-_1$. Now let $\hat{p}_1\le p_1$ be such that $\Pi^+_1(\hat{p}_1)=\Pi^-_1(p_1)$. SP2 sets its price $p_2=\min\{\hat{p}_1,\arg\max\Pi^+_2(p)\}$. We show that this solution is stable in the following sense. First of all, given the price offered by SP2, SP1 cannot improve its profit with any other price. Hence it satisfies the property of a Nash equilibrium from the perspective of SP1. It does not satisfy the property of a Nash equilibrium from the perspective of SP2 since SP2 could potentially increase its profit if SP1 keeps its price fixed. However, if SP2 did that then SP1 could choose a new price in which SP1 does better and SP2 does worse. Another way to look at this is that these prices form a subgame perfect Nash equilibrium for the Stackelberg game in which SP2 sets its price first and then SP1 follows. \subsection{Sharing on cost with price competition} \label{s:sharing-price-competition} When considering the benefits of sharing in the context of a Bertrand competition, one model of sharing would be exactly as was considered before in the context of a Nash Cournot competition. The service providers provide capacity based on the minimum of their costs and then calculate a monopoly price with respect to those costs. The above notion of a Bertrand competition with insensitive users gives rise to another notion of sharing that might be more appealing to a regulator. In particular, the SPs are allowed to cooperate on cost when building capacity. However, when offering service to end users they must still compete on price.
This gives rise to a Bertrand competition in which each SP has parameters $\alpha_{\min}=\min\{\alpha_1,\alpha_2\}$ and $\beta_{\min}=\min\{\beta_1,\beta_2\}$. In the case of a Bertrand competition in which all users are price sensitive, both SPs would offer a price $p=\beta_{\min}$ and so neither would generate a profit. However, for the case in which some users are price insensitive, there is a stable situation as described above in which one SP offers a low price in order to get all the price sensitive users while the other one offers a high price in order to get all the price insensitive users. \section{Detailed analysis of the Cournot competition} \label{s:cournot} We now present our more detailed general analysis. We start with the competitive setting governed by the Cournot game, in which the SPs compete by deciding how much demand they wish to serve. Our analysis is more complex than the textbook Cournot analysis since the presence of the non-zero $\alpha_i$ parameters means that each SP is faced with a decision regarding whether or not to compete. In the Cournot setting the price is determined by the aggregate demand, i.e.\ $$ p^{NC}=\left(\frac{q_1^{NC}+q_2^{NC}}{Q}\right)^{-1/{\varepsilon}}, $$ We first consider a relaxation of the game in which the $q_i$ values can be negative and the profit is always given by $\Pi_i=q_ip_i-(\alpha_i+\beta_iq_i)$, regardless of whether or not it is positive. In this case we assume that the quantities $q_1^{NC}$, $q_2^{NC}$ are chosen so that they form a Nash equilibrium with respect to the SP profits. Hence we wish to find a solution for which $\partial \Pi_i/\partial q_i=0$ for $i=1,2$.
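Before deriving the closed form, note that this equilibrium can also be located numerically by iterating best responses, each obtained by bisection on the first-order condition $p(S)\,(1 - q_i/({\varepsilon} S)) = \beta_i$ (the fixed costs $\alpha_i$ drop out of the first-order conditions). A minimal sketch with the running-example parameters; the starting point and the bisection brackets are our own choices:

```python
EPS, Q = 1.25, 1000.0
B1, B2 = 2.5, 2.0        # per-unit costs beta_1, beta_2 ($M/PB)

def price(S):
    """Inverse demand p = (S/Q)^(-1/eps) for aggregate quantity S."""
    return (S / Q) ** (-1.0 / EPS)

def foc(qi, qj, beta):
    """d Pi_i/d q_i = p(S)(1 - q_i/(eps S)) - beta_i, with S = q_i + q_j."""
    S = qi + qj
    return price(S) * (1.0 - qi / (EPS * S)) - beta

def best_response(qj, beta, lo=1e-6, hi=1e4):
    """The first-order condition is strictly decreasing in q_i here,
    so bisection applies."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if foc(mid, qj, beta) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Iterate best responses to the (relaxed) Nash-Cournot fixed point
q1, q2 = 100.0, 100.0
for _ in range(100):
    q1 = best_response(q2, B1)
    q2 = best_response(q1, B2)
```

For these parameters the formula for $t$ in the theorem below gives $t = (1 - 0.8\cdot 0.2)/(0.8 - 0.2) = 1.4$, and the iteration reproduces $q_2/q_1 = 1.4$.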
\begin{theorem} When the SPs compete on quantity in the relaxed Nash-Cournot game, the solution is given by, \begin{eqnarray*} t&=&\frac{1-\beta_2/\beta_1 (1-1/\varepsilon)}{\beta_2/\beta_1-(1-1/\varepsilon)}\\ q_1^*&=& Q\left(\frac{1+t(1-\frac{1}{\varepsilon})}{\beta_2(1+t)^{1+\frac{1}{\varepsilon}}}\right)^{\varepsilon}\\ q_2^*&=&Q\left( \frac{\frac{1}{t}(1-\frac{1}{\varepsilon})+1}{\beta_1(\frac{1}{t}+1)^{1+\frac{1}{\varepsilon}}} \right)^{\varepsilon}\\ p^{NC}&=& \left(\frac{q_1^*+q_2^*}{Q}\right)^{-1/\varepsilon}\\ \Pi_1^{NC}&=&\Pi_1(q_1^*,q_2^*)= \left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon} q_1^* - \alpha_1 -\beta_1 q_1^*\\ \Pi_2^{NC}&=&\Pi_2(q_1^*,q_2^*)= \left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon} q_2^* - \alpha_2 -\beta_2 q_2^* \end{eqnarray*} \end{theorem} \begin{proof} For ease of notation we drop the superscript $NC$ in this analysis. Define $S:=q_1+q_2$. The solution is given by: \begin{equation} \partial \Pi_1(q_1,q_2)/\partial q_1 = 0 \,, \end{equation} that is, \begin{equation} \label{eq:2N_1} S^{-(1+(1/\varepsilon))}(S-q_1/\varepsilon) = \beta_1 Q^{-1/\varepsilon} \,, \end{equation} and \begin{equation} \partial \Pi_2(q_1,q_2)/\partial q_2 = 0 \,, \end{equation} that is, \begin{equation} \label{eq:2N_2} S^{-(1+(1/\varepsilon))}(S-q_2/\varepsilon) = \beta_2 Q^{-1/\varepsilon} \,. 
\end{equation} For any fixed $q_2$, we can solve Equation~\ref{eq:2N_1} according to: \begin{eqnarray*} (q_1+q_2)^{-(1+\frac{1}{\varepsilon})}(q_1(1-\frac{1}{\varepsilon})+q_2)&=&\beta_1Q^{-\frac{1}{\varepsilon}}\\ \Leftrightarrow (q_1+q_2)^{1+\varepsilon}(q_1(1-\frac{1}{\varepsilon})+q_2)^{-\varepsilon}&=&\beta_1^{-\varepsilon}Q\\ \Leftrightarrow q_2^{1+\varepsilon}(\frac{q_1}{q_2}+1)^{1+\varepsilon}q_2^{-\varepsilon}(\frac{q_1}{q_2}(1-\frac{1}{\varepsilon})+1)^{-\varepsilon}&=&\beta_1^{-\varepsilon}Q\\ \Leftrightarrow \left( \frac{(\frac{q_1}{q_2}+1)^{1+\frac{1}{\varepsilon}}}{\frac{q_1}{q_2}(1-\frac{1}{\varepsilon})+1} \right)&=&\left(\frac{Q}{q_2\beta_1^{\varepsilon}}\right)^{1/\varepsilon}. \end{eqnarray*} In other words, $q_1/q_2$ is the solution to the equation, $$ (z+1)^{1+\frac{1}{\varepsilon}}=Az+B, $$ where $A=(1-\frac{1}{\varepsilon})(Q/(q_2\beta_1^{\varepsilon}))^{1/\varepsilon}$ and $B=(Q/(q_2\beta_1^{\varepsilon}))^{1/\varepsilon}$. We use $\hat{q}_1(q_2)$ to denote this solution for any fixed value of $q_2$. For any fixed $q_1$, we can solve Equation~\ref{eq:2N_2} according to: \begin{eqnarray*} (q_1+q_2)^{-(1+\frac{1}{\varepsilon})}(q_1+q_2(1-\frac{1}{\varepsilon}))&=&\beta_2Q^{-\frac{1}{\varepsilon}}\\ \Leftrightarrow (q_1+q_2)^{1+\varepsilon}(q_1+q_2(1-\frac{1}{\varepsilon}))^{-\varepsilon}&=&\beta_2^{-\varepsilon}Q\\ \Leftrightarrow q_1^{1+\varepsilon}(1+\frac{q_2}{q_1})^{1+\varepsilon}q_1^{-\varepsilon}(1+\frac{q_2}{q_1}(1-\frac{1}{\varepsilon}))^{-\varepsilon}&=&\beta_2^{-\varepsilon}Q\\ \Leftrightarrow \left( \frac{(1+\frac{q_2}{q_1})^{1+\frac{1}{\varepsilon}}}{1+\frac{q_2}{q_1}(1-\frac{1}{\varepsilon})} \right)&=&\left(\frac{Q}{q_1\beta_2^{\varepsilon}}\right)^{1/\varepsilon}. 
\end{eqnarray*} In other words, $q_2/q_1$ is the unique positive solution to the equation, $$ (1+z)^{1+\frac{1}{\varepsilon}}=Az+B, $$ where $A=(1-\frac{1}{\varepsilon})(Q/(q_1\beta_2^{\varepsilon}))^{1/\varepsilon}$ and $B=(Q/(q_1\beta_2^{\varepsilon}))^{1/\varepsilon}$. We use $\hat{q}_2(q_1)$ to denote this solution for any fixed value of $q_1$. Suppose we now want to solve both Equations~\ref{eq:2N_1} and \ref{eq:2N_2} simultaneously. By dividing the two equations we have \begin{eqnarray*} \beta_2(S-\frac{q_1}{\varepsilon}) &=& \beta_1(S-\frac{q_2}{\varepsilon})\\ \Rightarrow \frac{\beta_2}{\beta_1}(q_1(1-\frac{1}{\varepsilon})+q_2) &=& q_1+q_2(1-\frac{1}{\varepsilon})\\ \Rightarrow (1-\frac{\beta_2}{\beta_1}(1-\frac{1}{\varepsilon}))q_1&=&(\frac{\beta_2}{\beta_1}-(1-\frac{1}{\varepsilon}))q_2. \end{eqnarray*} For $\beta_2/\beta_1 \neq (1-1/\varepsilon)$, we have \begin{equation} q_2 = \frac{1- (\beta_2/\beta_1) (1-1/\varepsilon)}{\beta_2/\beta_1-(1-1/\varepsilon)}q_1 \,, \end{equation} and so $t:=q_2/q_1=\frac{1-\beta_2/\beta_1 (1-1/\varepsilon)}{\beta_2/\beta_1-(1-1/\varepsilon)}$, which can be calculated by knowing the parameters $\varepsilon$, $\beta_1,\beta_2$. If we let $(q_1^*,q_2^*)$ be the solution to the simultaneous equations then it follows, \begin{equation} \label{eq:q1nc} q_1^*= Q\left(\frac{1+t(1-\frac{1}{\varepsilon})}{\beta_2(1+t)^{1+\frac{1}{\varepsilon}}}\right)^{\varepsilon} \,, \end{equation} and \begin{equation} \label{eq:q2nc} q_2^*=Q\left( \frac{\frac{1}{t}(1-\frac{1}{\varepsilon})+1}{\beta_1(\frac{1}{t}+1)^{1+\frac{1}{\varepsilon}}} \right)^{\varepsilon} \,. \end{equation} (The ratio $q_2^*/q_1^*$ indeed equals $t$). 
The corresponding profits are given by, \begin{equation} \Pi_1^{NC}:=\Pi_1(q_1^*,q_2^*)= \left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon} q_1^* - \alpha_1 -\beta_1 q_1^*\,, \end{equation} and \begin{equation} \Pi_2^{NC}:=\Pi_2(q_1^*,q_2^*)= \left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon} q_2^* - \alpha_2 -\beta_2 q_2^*\,. \end{equation} \end{proof} We now consider the real, i.e.\ non-relaxed, problem and derive the conditions under which the above Nash equilibrium is a valid solution. This is the case if $q_1^*$, $q_2^*$, $\Pi_1(q_1^*,q_2^*)$ and $\Pi_2(q_1^*,q_2^*)$ are all non-negative. From the above analysis we immediately have, \begin{lemma} \label{l:nash-viable} The pair $(q_1^*,q_2^*)$ is a viable solution to the original problem if and only if, \begin{eqnarray*} (1-1/\varepsilon) &\le& \min\{\beta_1/\beta_2,\beta_2/\beta_1\},\\ \alpha_1&\le&\left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon}q_1^*-\beta_1 q_1^*,\\ \alpha_2&\le&\left(\frac{Q}{q_1^*+q_2^*}\right)^{1/\varepsilon}q_2^*-\beta_2 q_2^*. \end{eqnarray*} \end{lemma} Suppose that the conditions of Lemma~\ref{l:nash-viable} do not hold. Note that due to the unimodal nature of the profit curve, if $q_i^*$ is negative then SP $i$ would be better off setting $q_i=0$. Similarly, if $\Pi_i^{NC}$ is negative then SP $i$ would be better off setting $q_i=0$. In each of these cases we say that SP $i$ is {\em driven out of the market}. However, as we first discussed in Section~\ref{s:aggressive-submissive}, even if the Nash equilibrium is a viable solution, an SP may still be incentivized to try to drive the other SP out of the market, and so we now investigate that situation in detail. In particular let, $$ q'_1=\arg\max_{q_1:\Pi_2(q_1,\hat{q}_2(q_1))\le 0}\{\Pi_1(q_1,0)\}. $$ In other words, let $q'_1$ be the value of $q_1$ that maximizes the profit of SP1 assuming that it can drive SP2 out of the market even if SP2 gives the best response.
Similarly let, $$ q'_2=\arg\max_{q_2:\Pi_1(\hat{q}_1(q_2),q_2)\le 0}\{\Pi_2(0,q_2)\}. $$ With these formulas in place, SP1 chooses from the following three options. \begin{itemize} \item Option 1 (Nash-Cournot). Playing value $q_1^*$ under the assumption that SP2 plays value $q_2^*$. This option is viable if all of the quantities $q_1^*$, $q_2^*$, $\Pi_1(q_1^*,q_2^*)$ and $\Pi_2(q_1^*,q_2^*)$ are non-negative, i.e.\ if the conditions of Lemma~\ref{l:nash-viable} hold. \item Option 2 (Aggression). Playing value $q'_1$ with the expectation that SP2 plays value $0$ (i.e.\ it does not participate in the market). \item Option 3 (Submission). Playing value $0$, i.e.\ not participating in the market. \end{itemize} SP2 is faced with an analogous set of options and so we obtain the profit table in Figure~\ref{f:payofftable-gen}, which is a general version of the first three columns and rows of Figure~\ref{f:payofftable}. \begin{figure*}[htb] \centering \begin{tabular}{|c|c|c|c|}\hline & Nash-Cournot & Aggression & Submission \\ \hline Nash-Cournot&$(\Pi_1(q^*_1,q^*_2),\Pi_2(q^*_1,q^*_2))$ &$(\Pi_1(q^*_1,q'_2),\Pi_2(q^*_1,q'_2))$ &$(\Pi_1(q^*_1,0),0)$\\ \hline Aggression&$(\Pi_1(q'_1,q^*_2),\Pi_2(q'_1,q^*_2))$ &$(\Pi_1(q'_1,q'_2),\Pi_2(q'_1,q'_2))$ &$(\Pi_1(q'_1,0),0)$ \\ \hline Submission&$(0,\Pi_2(0,q^*_2))$ &$(0,\Pi_2(0,q'_2))$ &$(0,0)$\\ \hline \end{tabular} \caption{The profits $(\Pi_1,\Pi_2)$ due to the different strategy combinations. The rows represent the decisions for SP1 and the columns represent the decisions for SP2.} \label{f:payofftable-gen} \end{figure*} \iffalse A number of scenarios could occur, depending on the parameters. (Note that not all of them are mutually exclusive). \begin{itemize} \item The four quantities $q_1^*$, $q_2^*$, $\Pi_1(q_1^*,q_2^*)$ and $\Pi_2(q_1^*,q_2^*)$ are all non-negative and either $q'_1,q'_2$ do not exist or else $\Pi_1(q_1^*,q_2^*)\ge \Pi_1(q'_1,0)$ and $\Pi_2(q_1^*,q_2^*)\ge \Pi_2(0,q'_2)$.
In this case the natural solution is $(q_1^*,q_2^*)$. \item Either $q_2^*<0$ or $\Pi_2(q_1^*,q_2^*)<0$. In this case $q'_1$ exists and so a natural solution is $(q'_1,0)$. Note that in this case $(q'_1,0)$ is {\em not} a Nash equilibrium. If SP2 plays $0$ then the optimal action for SP1 is to play its monopoly action $q_1^{mon}$. However, by doing so it would give SP2 the opportunity to get back into the market. Hence $(q'_1,0)$ is a more natural strategy for SP1 to utilize. \item Either $q_1^*<0$ or $\Pi_1(q_1^*,q_2^*)<0$. In this case $q'_2$ exists and so a natural solution is $(0,q'_2)$. Similar to the previous item this is not a Nash equilibrium but it does represent a natural solution. \item The four quantities $q_1^*$, $q_2^*$, $\Pi_1(q_1^*,q_2^*)$ and $\Pi_2(q_1^*,q_2^*)$ are all non-negative but $q'_1$ exists and $\Pi_1(q'_1,0)>\Pi_1(q_1^*,q_2^*)$. In this case SP1 can play $q'_1$ to drive SP2 out of the market. SP2 can then either ``acquiesce'' and play $0$ or it can try to fight back and play $q_2^*$. \item The four quantities $q_1^*$, $q_2^*$, $\Pi_1(q_1^*,q_2^*)$ and $\Pi_2(q_1^*,q_2^*)$ are all non-negative but $q'_2$ exists and $\Pi_2(0,q'_2)>\Pi_2(q_1^*,q_2^*)$. In this case SP2 can play $q'_2$ to drive SP1 out of the market. SP1 can then either ``acquiesce'' and play $0$ or it can try to fight back and play $q_1^*$. \item If both $q'_1$ and $q'_2$ exist then if both SPs try to drive the other out of the market then the solution would be $(q'_1,q'_2)$ in which case neither SP makes a positive profit. \end{itemize} (In Figure~\ref{f:hierarchy} these cases make up the ``Competition$\rightarrow$Cournot'' branches except for the one labeled ``Regulator enforces NE price''.) 
\fi \iffalse More precisely, by using~(\ref{eq:q1nc}) and~(\ref{eq:q2nc}), one can work out \begin{equation} \label{eq:pi1NC} \Pi_1^{NC}= \bar{\beta}^{1-\varepsilon} Q \varepsilon \left(1-\frac1{2\varepsilon}\right)^{\varepsilon-1} \left(\frac{\rho_2-\rho_1 (1-1/\varepsilon)}{\rho_1+\rho_2}\right)^2 - \alpha_1 \,, \end{equation} and \begin{equation} \label{eq:pi2NC} \Pi_2^{NC}= \bar{\beta}^{1-\varepsilon} Q \varepsilon \left(1-\frac1{2\varepsilon}\right)^{\varepsilon-1} \left(\frac{\rho_1-\rho_2 (1-1/\varepsilon)}{\rho_1+\rho_2}\right)^2 - \alpha_2 \,, \end{equation} where \begin{equation} \label{eq:btmean} \bar{\beta} := \frac{\beta_1+\beta_2}{2} \,. \end{equation} \fi \section{Network Sharing} \label{s:sharing} We now examine the situation under network sharing. In this case the SPs cooperate and use the lowest cost parameters that are available, i.e.\ $$ C^{coop}(q)=\alpha_{min}+\beta_{min}q, $$ where $\alpha_{min}=\min\{\alpha_1,\alpha_2\}$ and $\beta_{min}=\min\{\beta_1,\beta_2\}$. We first assume that the combined entity is able to use monopoly pricing. In this case the combined price, demand and profit are given by, \begin{eqnarray*} p^{coop}&=&{\varepsilon} \beta_{min}/({\varepsilon}-1)\\ q^{coop}&=&Q({\varepsilon} \beta_{min}/({\varepsilon}-1))^{-{\varepsilon}}\\ \Pi^{coop}&=&p^{coop} q^{coop} - (\alpha_{min}+\beta_{min} q^{coop}). \end{eqnarray*} It remains to determine how the profit is split between the SPs. A natural way to do this is via the Shapley value, which gives SP $i$ its expected contribution to the coalition assuming that the SPs create the coalition in a random order. There are two ways to calculate this number depending on whether we incorporate {\em externalities}~\cite{MichalakRMSJ10,Myerson77} from outside the coalition.
More precisely, when SP $i$ is the first member of the coalition, we can either assume that it can utilize monopoly pricing or we can assume that it still has to compete against the other SP according to the Nash-Cournot game. The former case might be more appropriate in a rural setting in which an SP that enters the market late is unlikely to participate in the market unless it can share. The latter case might be more appropriate in an urban situation in which both SPs feel compelled to enter the market regardless of whether or not they can share. In the first case we get, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(\Pi_1^{mon}+\Pi^{coop}-\Pi_2^{mon})\\ \Pi_2^{coop}&=&\frac{1}{2}(\Pi_2^{mon}+\Pi^{coop}-\Pi_1^{mon}). \end{eqnarray*} In the latter case we get, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(\Pi_1^{NC}+\Pi^{coop}-\Pi_2^{NC})\\ \Pi_2^{coop}&=&\frac{1}{2}(\Pi_2^{NC}+\Pi^{coop}-\Pi_1^{NC}). \end{eqnarray*} (In Figure~\ref{f:hierarchy} these cases make up the {\bf ``Cooperation$\rightarrow$No regulator$\rightarrow$Profit shared via Shapley allocation''} branches.) From a regulator's point of view, the downside of sharing under monopoly pricing is that the price is significantly higher than the competitive case. We now look at an alternative framework in which the price is restricted by a regulator to be the same as in the Nash-Cournot game. In this setting the solution becomes, \begin{eqnarray*} p^{coop}&=&p^{NC}\\ q^{coop}&=&Q(p^{NC})^{-{\varepsilon}}\\ \Pi^{coop}&=&q^{coop}p^{coop}-(\alpha_{min}+\beta_{min}q^{coop}). \end{eqnarray*} Now when we consider the Shapley value without externalities, it only makes sense to assume that the price is constrained to be the regulated price, regardless of the size of the coalition. Hence in all cases the price and demand are the same and so the only difference between the coalitions is the cost. 
\begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(q^{coop}p^{coop}-(\alpha_1+\alpha_{min}-\alpha_2)\\&&-(\beta_1+\beta_{min}-\beta_2)q^{coop})\\ \Pi_2^{coop}&=&\frac{1}{2}(q^{coop}p^{coop}-(\alpha_2+\alpha_{min}-\alpha_1)\\&&-(\beta_2+\beta_{min}-\beta_1)q^{coop}) \end{eqnarray*} For the Shapley value with externalities we have, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}((q_1^{NC}+q^{coop}-q_2^{NC})p^{coop}-(\alpha_1+\alpha_{min}-\alpha_2)\\&&-(\beta_1q_1^{NC}+\beta_{min}q^{coop}-\beta_2q_2^{NC}))\\ \Pi_2^{coop}&=&\frac{1}{2}((q_2^{NC}+q^{coop}-q_1^{NC})p^{coop}-(\alpha_2+\alpha_{min}-\alpha_1)\\&&-(\beta_2q_2^{NC}+\beta_{min}q^{coop}-\beta_1q_1^{NC})) \end{eqnarray*} \section{Multiple Geographic Regions} \label{s:narrative-multiple} One of the main reasons that regulators allow network sharing even though it leads to loss of competition is that it allows service providers to more quickly offer service over a large geographic region. In order to investigate this phenomenon, we now show how our analysis extends when it is not the case that both SPs can offer service by themselves over the entire market. For this analysis we focus on the specific SP cost parameters given in Section~\ref{s:narrative}. We consider two scenarios, both of which have two regions W and E. In the first scenario SP1 can provide service in region W (with its $\alpha$ value halved to represent fixed costs in one region only), and SP2 can provide service in region E (with its $\alpha$ value also halved). There are three types of user, W, E and WE. WE users need a service provider that can provide service in both regions. Hence if the SPs do not cooperate then these users cannot be served. Let $N_X$ be the fraction of users of type $X$. For our numerical example we assume that $N_W=N_E=N_{WE}=\frac{1}{3}$. 
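The per-region values used in what follows can be computed directly from Lemma~\ref{l:monopoly}. The following Python sketch does so under the assumption (consistent with the numbers reported in this section) that each user type contributes its fraction $N_X$ of the demand constant $Q$, and that each SP's fixed cost is halved as described above:

```python
# Per-region monopoly values when each SP serves a single region.
# Assumptions of this sketch (consistent with the numbers in this section):
# each user type contributes a fraction N_X of the demand constant Q, and
# each SP's fixed cost alpha is halved because it builds in one region only.
eps, Q = 1.25, 1000.0
N_W = N_E = 1.0 / 3.0

def monopoly(alpha, beta, Q_eff):
    # Monopoly price/demand/profit from the monopoly lemma,
    # with effective demand constant Q_eff.
    p = eps * beta / (eps - 1)
    q = Q_eff * p ** (-eps)
    return p, q, p * q - (alpha + beta * q)

p1, q1, prof1 = monopoly(alpha=25.0, beta=2.5, Q_eff=N_W * Q)  # SP1 in region W
p2, q2, prof2 = monopoly(alpha=50.0, beta=2.0, Q_eff=N_E * Q)  # SP2 in region E
print(p1, q1, prof1)  # roughly 12.5, 14.18, 116.8
print(p2, q2, prof2)  # roughly 10, 18.74, 99.96
```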
In the case without sharing there is no competition and we have, \begin{eqnarray*} p_1^{mon}=\$12.5M,&~&q_1^{mon}=14.18PB,~\Pi_1^{mon}=\$116.82M\\ p_2^{mon}=\$10M,&~&q_2^{mon}=18.74PB,~\Pi_2^{mon}=\$99.96M \end{eqnarray*} In the case of cooperation we assume that each SP builds the network in its ``own'' region but the two SPs cooperate on price over the entire market. Hence the combined entity has a monopoly with parameters $(\alpha_1,\beta_1)$ in the W region and a monopoly with parameters $(\alpha_2,\beta_2)$ in the E region. Thus, $$ p^{coop}=\$11.25M,~q^{coop}=48.54PB,~\Pi^{coop}=\$362M $$ Hence from the Shapley value we have, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(116.82+362-99.96)=\$189.43M\\ \Pi_2^{coop}&=&\frac{1}{2}(99.96+362-116.82)=\$172.57M \end{eqnarray*} In the second scenario we initially assume that SP1 can only serve users in the W region but SP2 can serve users in both the W and E regions. Hence SP2 has a monopoly on both the E users and the WE users. For these two sets of users we have, $$ p_2=\$10M,~q_2=37.49PB $$ For the W users we have a competition between the SPs. If it is a Nash-Cournot competition then we have: \begin{eqnarray*} p_1^{NC}=\$3.75M,&~&q_1^{NC}=26.62PB\\ p_2^{NC}=\$3.75M,&~&q_2^{NC}=37.26PB \end{eqnarray*} The total profit in this case is: \begin{eqnarray*} \Pi_1^{NC}=\$8.28M,&~&\Pi_2^{NC}=\$265M \end{eqnarray*} If it is a Bertrand competition then SP2 is always incentivized to compete since it acts as a monopoly for the E and WE users. Hence it sets its price to the minimum that drives SP1 out of the market for the W users, i.e.\ we have, \begin{eqnarray*} p_1^B=\$3.75M,&~&q_1^B=0PB,~\Pi_1^B=\$0M\\ p_2^B\in\{\$2.77M,\$10M\},&~&q_2^B=130.77PB,~\Pi_2^B=\$272M \end{eqnarray*} where $p_2^B=\$2.77M$ for the W users and $p_2^B=\$10M$ for the WE and E users. For the cooperative solution the SPs have to use the SP2 parameters in the E region but they can use the optimum of the SP1 and SP2 parameters in the W region.
Hence in this case we have, $$ p^{coop}=\$11.25M,~q^{coop}=48.54PB,~\Pi^{coop}=\$362M. $$ In order to calculate the Shapley value to split the profit we need to know the individual monopoly values in this case. \begin{eqnarray*} p_1^{mon}=\$12.5M,&~&q_1^{mon}=14.18PB,~\Pi_1^{mon}=\$117M\\ p_2^{mon}=\$10.0M,&~&q_2^{mon}=56.2PB,~\Pi_2^{mon}=\$350M. \end{eqnarray*} Hence, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(117+362-350)=\$64.5M\\ \Pi_2^{coop}&=&\frac{1}{2}(350+362-117)=\$297.5M \end{eqnarray*} \section{Cost Model} \label{s:model} We consider two Service Providers (SPs) that we denote SP1 and SP2. We begin with a single geographical region in which both SPs operate. If SP $i$ serves demand\footnote{Throughout this paper we employ a coarse measure of demand, namely bytes per month across the whole region. Although this is coarse, the expenses of an operator are closely tied to that number. We also assume that demand is based on a single price for each operator. In reality each operator offers multiple data plans with different sizes and costs. We leave the incorporation of different data plans into our analysis as an interesting direction for future work.} of size $q_i$ then its cost is given by, $$ C_i(q_i)=\left\{\begin{array}{cc}\alpha_i+\beta_iq_i&q_i>0\\0&q_i=0\end{array}\right., $$ for some parameters $\alpha_i>0,\beta_i\ge 0$. The parameter $\alpha_i$ reflects a fixed cost, e.g.\ the cost of infrastructure such as buildings or spectrum, whereas the parameter $\beta_i$ reflects the cost of serving a unit of demand, e.g.\ the cost of deploying equipment. The level of demand in the region (denoted $q$) is closely related to the price offered to the end users (denoted $p$). Our assumption is that all users in the region are price conscious. In particular, we assume that the market is elastic with elasticity coefficient ${\varepsilon}$ greater than 1, i.e.\ $ q(p)=Qp^{-{\varepsilon}}, $ for some parameter $Q$ (quantity sold at unit price). 
Roughly speaking this means that a 1\% reduction in price results in an $({\varepsilon}-1)$\% increase in revenue. We note that a change in the demand can reflect both a change in the number of end users creating that demand as well as a change in the demand per end user. If SP $i$ serves demand $q_i>0$ at price $p_i$ then it receives a profit given by, $\Pi_i=q_ip_i-C_i(q_i) =q_ip_i-(\alpha_i+\beta_iq_i)$. We observe that the fixed cost $\alpha_i$ makes the cost function non-convex which distinguishes our analysis from many previous studies of network economics. In particular, if $\alpha_i\gg 0$ then SP $i$ may not wish to participate in the market because even as a monopolist it is unable to make a profit. If this decision is due to the capacity (Cournot) or price (Bertrand) offered by the other SP then we say that SP $i$ is {\em driven out of the market}. \iffalse We examine a number of basic regimes. \begin{enumerate} \item Only one SP wants to offer service in which case we are in a {\em monopoly} environment. In this case the single SP wants to choose the price so as to maximize its profit. \item Both SPs decide to coooperate and serve demand at the lowest cost possible. In this case we need to determine how the resulting profits are split amont the SPs. We do this via the notion of Shapley value. \item Both SPs offer service and they do not cooperate. In particular they compete according to a Cournot game in which they both offer a certain amount of service and then the price is fixed according to the total service offered. \item Both SPs offer service and they do not cooperate. In particular they compete according to a Bertrand game in which they both offer a price and the demand goes to the operator with the lowest price. In this situation we wish to avoid the {\em Bertrand curse} in which price is driven down to cost and so neither operator can make a profit. \end{enumerate} We also consider a number of variants to the basic model described above. 
\begin{enumerate} \item For most of the paper we assume that the operators make pricing decisions so as to optimize their profit. However, this can lead to situations where users suffer excessively from SP cooperation. To address these situations we consider frameworks where the price for demand is set by a regulator. \item In general we assume that cooperation means that operators jointly deploy infrastructure (at minimum cost) and they also jointly decide price and capacity so as to maximize their combined profit. We also consider a hybrid model in which the operators cooperate to deploy capacity but they still must compete on price and capacity. \item Our initial assumption is that all users are price conscious. That may not be strictly true and so we also employ a model from Bagwell and Lee~\cite{} in which only an $I$ fraction of the demand is price conscious. We refer to this fraction as the informed fraction. The remainder of the demand chooses an SP at random. \item Lastly, a key goal of our analysis is to show how network sharing allows a new operator to quickly cover a larger geographical area. To model this we consider a model with two geographical regions, W and E and one or more of the SPs may only offer service in one of the regions. We examine the tradoffs for both these SPs since they can now be asymmetric. \end{enumerate} \fi \begin{lemma} \label{l:monopoly} Under monopoly pricing we have, \begin{eqnarray*} p^{mon} &=& {\varepsilon} \beta/({\varepsilon}-1),~~~~~~~~q^{mon} = Q({\varepsilon} \beta/({\varepsilon}-1))^{-{\varepsilon}}\\ \Pi^{mon} &=& Q\left((\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{1-{\varepsilon}}-\beta(\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{-{\varepsilon}}\right)-\alpha. \end{eqnarray*} assuming that $\Pi^{mon}\ge 0$. (If not then the SP stays out of the market.) \end{lemma} The proof (which is standard) is given in Appendix~\ref{s:monopoly}. 
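As a quick numerical check of Lemma~\ref{l:monopoly}, the following Python sketch evaluates the monopoly formulas for one illustrative parameter set (SP2's costs from the example of Section~\ref{s:narrative}); it also verifies that the stated profit expression agrees with revenue minus cost:

```python
# Evaluating the monopoly formulas of the lemma for illustrative parameters
# (eps = 1.25, Q = 1000, and SP2's costs alpha = 100, beta = 2).
eps, Q = 1.25, 1000.0
alpha, beta = 100.0, 2.0

p_mon = eps * beta / (eps - 1)                  # monopoly price
q_mon = Q * (eps * beta / (eps - 1)) ** (-eps)  # induced demand
pi_mon = Q * ((eps * beta / (eps - 1)) ** (1 - eps)
              - beta * (eps * beta / (eps - 1)) ** (-eps)) - alpha

# The closed-form profit must agree with revenue minus cost at (p_mon, q_mon).
assert abs(pi_mon - (p_mon * q_mon - (alpha + beta * q_mon))) < 1e-9
print(p_mon, q_mon, pi_mon)  # 10.0, about 56.2, about 350
```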
\iffalse We now state some simple results on monopoly pricing under the above cost structure. Suppose there is a single entity offering service. This entity could be a single SP or it could be two SPs that are collaborating. We denote the resulting price by $p^{mon}$ and the demand by $q^{mon}$. Since the two quantities are related by the elasticity equation, the entity could either choose $q^{mon}$ (and then price $p^{mon}$ is determined) or it could choose $p^{mon}$ (and then demand $q^{mon}$ is determined). Suppose also that as above the cost is given by $\alpha+\beta q^{mon}$, for $q^{mon}>0$. \begin{lemma} \label{L:MONOPOLY} Under monopoly pricing we have, \begin{eqnarray*} p^{mon} = {\varepsilon} \beta/({\varepsilon}-1),~~~~~~~~q^{mon} &=& Q({\varepsilon} \beta/({\varepsilon}-1))^{-{\varepsilon}}\\ \Pi^{mon} = p^{mon} q^{mon} - (\alpha+\beta q^{mon}) &=& Q\left((\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{1-{\varepsilon}}-\beta(\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{-{\varepsilon}}\right)-\alpha. \end{eqnarray*} assuming that $\Pi^{mon}\ge 0$. The SP would only offer a nonzero capacity if $\Pi^{mon}\ge 0$ which is equivalent to the condition, $ \alpha\le Q\left((\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{1-{\varepsilon}}-\beta(\frac{{\varepsilon}\beta}{{\varepsilon}-1})^{-{\varepsilon}}\right). $ If this condition does not hold then the SP would stay out of the market (i.e.\ it would offer zero capacity). Hence for the remainder of the paper we assume the condition holds for both SPs. \end{lemma} \begin{proof} See Appendix~\ref{s:monopoly}. \end{proof} \fi \section{Proof of Lemma~\ref{l:monopoly}} \label{s:monopoly} \begin{proof} In the following we drop the $mon$ superscript. We determine the solution by setting $d\Pi/dq=0$. 
Recall our assumption that ${\varepsilon}>1$.\footnote{We assume ${\varepsilon}>1$ since otherwise the SPs would be incentivized to set an arbitrarily high price.} \begin{eqnarray*} \Pi &=& \frac{q^{1-(1/{\varepsilon})}}{Q^{-1/{\varepsilon}}}-(\alpha+\beta q)\\ \frac{d\Pi}{dq} &=& \frac{{\varepsilon}-1}{{\varepsilon}}\frac{q^{-1/{\varepsilon}}}{Q^{-1/{\varepsilon}}}-\beta. \end{eqnarray*} Hence $\frac{d\Pi}{dq}=0$ if and only if, \begin{eqnarray*} \frac{q^{-1/{\varepsilon}}}{Q^{-1/{\varepsilon}}}&=&\beta\frac{{\varepsilon}}{{\varepsilon}-1}\\ \Leftrightarrow q &=& Q\left(\frac{{\varepsilon}\beta}{{\varepsilon}-1}\right)^{-{\varepsilon}}\\ \Leftrightarrow p &=& \left(\frac{q}{Q}\right)^{-1/{\varepsilon}} = \left(\left(\frac{\beta{\varepsilon}}{{\varepsilon}-1}\right)^{-{\varepsilon}}\right)^{-1/{\varepsilon}}\\ \Leftrightarrow p &=& \frac{\beta{\varepsilon}}{{\varepsilon}-1}. \end{eqnarray*} \end{proof} \section{Narrative for a single example} \label{s:narrative} We begin by exploring the dynamics for a prototypical example. In this way we can better understand the fundamental dynamics at play rather than getting bogged down in the general equations (which can become somewhat complex). In later sections we consider the general case (with much of the proofs and derivations deferred to the Appendix). For our example the unit of demand is a petabyte (PB) and the unit of price is \$1M. For these units the price elasticity function has parameters ${\varepsilon}=1.25$ and $Q=1000$. The per-unit capacity costs are $\beta_1=\$2.5M$ for SP1 and $\beta_2=\$2M$ for SP2 per PB of wireless capacity. The fixed capacity costs (representing the cost of participating in the market, e.g., for buying spectrum or building cell towers) are $\alpha_1 = \$50M$ and $\alpha_2 = \$100M$.
For these parameters, the monopoly price, demand and profit for each operator is given by, \begin{eqnarray*} p_1^{mon}=\$12.5M,&~&q_1^{mon}=42.6PB,~\Pi_1^{mon}=\$376M\\ p_2^{mon}=\$10.0M,&~&q_2^{mon}=56.2PB,~\Pi_2^{mon}=\$350M \end{eqnarray*} \subsection{Network Sharing} \label{s:narrative-sharing} We first examine the situation under network sharing. This is the easiest case to consider since we do not need to worry about the competitive dynamics between the SPs. In particular the SPs cooperate and use the lowest cost parameters that are available, i.e.\ $$ C^{coop}(q)=\alpha_{min}+\beta_{min}q, $$ where $\alpha_{min}=\min\{\alpha_1,\alpha_2\}$ and $\beta_{min}=\min\{\beta_1,\beta_2\}$. (See Figure~\ref{f:sharingcost} for a depiction of $C^{coop}(q)$ in comparison to $C_1(q)$ and $C_2(q)$.) \begin{figure}[htb] \begin{center} \includegraphics[width = 3.in]{sharingcost.png} \caption{The cost function $C^{coop}(q)$ compared against $C_1(q)$ and $C_2(q)$.} \label{f:sharingcost} \end{center} \end{figure} We assume that the combined entity is able to use monopoly pricing. In this case the combined price, demand and profit for our running example is given by, $$ p^{coop}=\$10.0M,~ q^{coop}=56.2PB,~ \Pi^{coop}=\$400M $$ It remains to determine how the profit is split between the SPs. A natural way to do this is via the Shapley value which gives to SP $i$ its expected contribution to the coalition assuming that the SPs create the coalition in a random order. If we assume that the first SP to enter the coalition can utilize monopoly pricing then the profits are given by, \begin{eqnarray*} \Pi_1^{coop}&=&\frac{1}{2}(\Pi_1^{mon}+\Pi^{coop}-\Pi_2^{mon})=\$213M,\\ \Pi_2^{coop}&=&\frac{1}{2}(\Pi_2^{mon}+\Pi^{coop}-\Pi_1^{mon})=\$187M. \end{eqnarray*} In order for an SP to determine whether network sharing is the best option, it needs to compare its profits under sharing with its profits for the case in which it competes with the other SP. 
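The sharing numbers above follow mechanically from Lemma~\ref{l:monopoly} and the Shapley split; a minimal Python sketch reproducing them (parameter values are those of the running example, and the tolerances reflect the rounding used in the text):

```python
# Reproducing the sharing numbers of this section: the combined entity prices
# as a monopolist with the cheaper cost parameters, and the profit is split
# via the Shapley value using the individual monopoly profits.
eps, Q = 1.25, 1000.0
a1, b1 = 50.0, 2.5   # SP1 costs (alpha, beta)
a2, b2 = 100.0, 2.0  # SP2 costs (alpha, beta)

def monopoly_profit(alpha, beta):
    p = eps * beta / (eps - 1)
    q = Q * p ** (-eps)
    return p * q - (alpha + beta * q)

pi1_mon = monopoly_profit(a1, b1)                    # about 375
pi2_mon = monopoly_profit(a2, b2)                    # about 350
pi_coop = monopoly_profit(min(a1, a2), min(b1, b2))  # about 400

pi1_coop = 0.5 * (pi1_mon + pi_coop - pi2_mon)       # about 213
pi2_coop = 0.5 * (pi2_mon + pi_coop - pi1_mon)       # about 187
print(pi1_coop, pi2_coop)
```

Note that the two Shapley shares sum to the cooperative profit, as they must.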
As mentioned in Section~\ref{s:intro}, there are two standard notions of competition: a Cournot game, in which the SPs offer capacity to the market and the market sets the price, and a Bertrand game, in which the SPs directly offer prices to the market. We begin by examining the Cournot game and defer the corresponding analysis of the Bertrand game to the Appendix. \subsection{Cournot Game} \label{s:cournot-narrative} \begin{figure*}[htb] \begin{center} \includegraphics[width=2.0in]{nash.png} ~~ \includegraphics[width=2.0in]{bestresponse_q1.png} ~~ \includegraphics[width=2.0in]{bestresponse_q2.png} \caption{(Left) The best response for each SP given the actions of the other SP. (Middle) The profits of the two SPs as a function of $q_1$ (assuming SP2 plays the best response). (Right) The profits of the two SPs as a function of $q_2$ (assuming SP1 plays the best response). The vertical lines represent the Nash-Cournot solution ($q_1=80$, $q_2=112$). } \label{f:profits-q1q2} \end{center} \end{figure*} In the Cournot game SP $i$ offers to serve demand $q_i$ and then the market determines a common price $p$ based on $q_1$ and $q_2$. In particular, $$ p=\left(\frac{q_1+q_2}{Q}\right)^{-1/{\varepsilon}}. $$ The main complication with the Cournot game in our setting is the presence of the $\alpha_i$ terms, which introduce a discontinuity into the profit functions. If SP $i$ cannot generate a positive profit for any value $q_i>0$ then it will simply set $q_i=0$ and accept zero profit. In this case we say that SP $i$ is {\em driven out of the market}. Figure~\ref{f:profits-q1q2} (left) illustrates the behavior of the Cournot game. The plot shows the best response for each SP, given the actions of the other SP. In particular, for any value $q_1>0$ (on the x-axis), the green dashed curve represents the value of $q_2$ (on the y-axis) that maximizes the profit of SP2 assuming that the action of SP1 is $q_1$.
The blue solid curve is similar and represents the best response for SP1 but with the axes flipped. In particular, for any value $q_2$ (on the y-axis), the blue curve represents the value of $q_1$ (on the x-axis) that maximizes the profit of SP1 assuming that the action of SP2 is $q_2$. The crossing point of the curves at $(q_1,q_2)=(80,112)$ represents a Nash equilibrium. (A closed-form expression for this equilibrium is presented in Appendix~\ref{s:cournot}.) In other words, if the action of SP1 is $q_1=80$ then the optimal action of SP2 is $q_2=112$, and if the action of SP2 is $q_2=112$ then the optimal action of SP1 is $q_1=80$. At this point the full set of quantity, price and profit values is given by, \begin{eqnarray*} p_1^{NC}=\$3.75M,&~&q_1^{NC}=80PB,~\Pi_1^{NC}=\$50M\\ p_2^{NC}=\$3.75M,&~&q_2^{NC}=112PB,~\Pi_2^{NC}=\$96M. \end{eqnarray*} \subsection{The impact of a regulator} As we will discuss shortly, the profits for each SP in the sharing scenario are significantly higher than they are in the competitive scenario. One main reason for this is that in the sharing case the combined entity is able to set a monopoly price. The profit increase therefore comes at the expense of the end users who have to pay higher prices. A regulator may deem this to be anti-competitive and as a result may impose an upper bound on the price that the combined entity can charge if the two SPs decide to share. A natural candidate for this upper bound is the price corresponding to the Nash equilibrium that we computed in the previous section. Sharing can still be beneficial even if a regulator restricts the price because the combined entity can take advantage of the reduced cost function. (See Figure~\ref{f:sharingcost}.)
Moreover, in contrast to the full competitive case of Section~\ref{s:cournot-narrative} the SPs will share profits according to Shapley value and so we have, \begin{eqnarray*} p^{reg}&=&p_1^{NC}=\$3.75M,~q^{reg}=q_1^{NC}+q_2^{NC}=192PB,\\ \Pi^{reg}&=&p^{reg}q^{reg}-(\alpha_{min}+\beta_{min}q^{reg})=\$286M \end{eqnarray*} This combined profit is shared via Shapley value and results in, $$ \Pi_1^{reg}=\$120M,~\Pi_2^{reg}=\$166M $$ \subsection{Comparison of network sharing and the Cournot game} In the table in Figure~\ref{f:simplepayofftable} we summarize the price $p$ and the profits $\Pi_1,\Pi_2$ for the case that the SPs compete according to the Nash Equilibrium of the Cournot game (the so-called Nash-Cournot solution), as well as the cases of network sharing with and without a regulator. We see that for the case of network sharing without a regulator, both SPs have significantly higher profit than in the Nash-Cournot solution but this is partly because they can charge a monopoly price to the end users. In the case that a regulator enforces an upper bound on price equal to the Nash-Cournot price, both SPs still obtain a higher profit with network sharing than in the Nash-Cournot solution. This latter effect comes from the fact that the SPs can share the cost of the network and utilize the cost parameters $\alpha_{min},\beta_{min}$ rather than $\alpha_i,\beta_i$. \begin{figure}[htb] \centering \begin{tabular}{|c|c|c|}\hline & price per PB, $p$ & profits $(\Pi_1,\Pi_2)$ \\ \hline Nash-Cournot&3.75&(50, 96)\\ \hline Sharing (no regulator)&10&(213,187)\\ \hline Sharing (regulator)&3.75&(120,166)\\ \hline \end{tabular} \caption{The price $p$ and the SP profits $(\Pi_1,\Pi_2)$ due to the Nash-Cournot solution and the sharing scenarios. 
All quantities are in units of \$1M.} \label{f:simplepayofftable} \end{figure} \subsection{Aggressive and submissive strategies} \label{s:aggressive-submissive} We now address the more complex dynamics that can arise due to the fact that $\alpha_i>0$. In particular, we ask whether the SPs will be motivated to conform to the Nash-Cournot solution or whether they might be tempted to deviate from that action. Note that both the blue and the green curves hit zero in Figure~\ref{f:profits-q1q2} (left), i.e.\ both SPs have the ability to drive the other one out of the market. Figure~\ref{f:profits-q1q2} further illustrates why the temptation to deviate from the Nash equilibrium might exist. In particular, Figure~\ref{f:profits-q1q2} (middle) shows the profits of the two SPs given a fixed value of $q_1$. More precisely, for each value of $q_1$, the solid blue and dashed green curves show the profits of SP1 and SP2 respectively, if SP2 plays its optimal response. We see that there is a big discontinuity in the profit of SP1. At the point ($q_1=154$) at which SP1 drives SP2 out of the market, the optimal response of SP2 jumps from a non-zero value to zero. This means there is less capacity available which in turn drives up the price and hence the profit of SP1. Note however that it cannot claim the monopoly profit since it cannot reduce to the monopoly point $q_1=42.6$ without letting SP2 back into the market. However, Figure~\ref{f:profits-q1q2} (right) shows that this process could work the other way round as well. In particular, this figure shows the profits of the two SPs given a fixed value of $q_2$. For each value of $q_2$, the solid blue and dashed green curves show the profits of SP1 and SP2 respectively, if SP1 plays its optimal response. This time we see a big jump in the profit of SP2 at the point at which it drives SP1 out of the market. 
As a result, each SP could benefit if it sets its $q_i$ at a level that drives the other SP out of the market {\em and the other SP acquiesces to being driven out}. We can therefore model the game by assuming that the SPs have the following discrete choices. \begin{itemize} \item Nash-Cournot: In this case each SP assumes that the other SP will compete in the market, in which case it makes sense to play the Nash equilibrium value. \item Aggression: An aggressive SP will play at a level that drives the other SP out of the market. \item Submission: A submissive SP will accept being shut out of the market (rather than fighting it and potentially taking a loss). \item Sharing: An SP can offer to enter a sharing arrangement with the other SP. However, the arrangement only goes into effect if both SPs agree to it. If both SPs do agree then we obtain the profits presented in Section~\ref{s:narrative-sharing} (that depend on whether or not a regulator caps the price $p$). \end{itemize} \begin{figure*}[htb] \centering \begin{tabular}{|c|c|c|c|c|c|}\hline & & & & Sharing & Sharing \\ (SP1 profit,SP2 profit) & Nash-Cournot & Aggression & Submission & (no regulator) & (regulator) \\ \hline Nash-Cournot &(50, 96)&(-1,80)&({\em 353,0})& NA & NA\\ \hline Aggression&(9,-1)&(-48,-17)&(254,0)&NA & NA\\ \hline Submission&({\em 0,321})&(0,271)&(0,0)&NA & NA\\ \hline Sharing (no regulator) &NA&NA&NA&(213,187) &NA\\ \hline Sharing (regulator) &NA&NA&NA&NA&(120,166)\\ \hline \end{tabular} \caption{The profits $(\Pi_1,\Pi_2)$ due to the different strategy combinations. The rows represent the decisions for SP1 and the columns represent the decisions for SP2.} \label{f:payofftable} \end{figure*} The table in Figure~\ref{f:payofftable} shows the outcomes of the resulting game. The rows represent the decisions for SP1 and the columns represent the decisions for SP2. The entries have the form (SP1 profit, SP2 profit).
The profits due to sharing are included in the bottom right corner of the table. Since sharing only takes place if both parties agree to share, there is no entry in the table if only one SP is sharing. We remark that the italicized entries (where one SP plays the Nash-Cournot strategy and the other plays the Submission strategy) are not viable outcomes because the submissive SP would be better off playing the Nash-Cournot strategy as well. From the above we can conclude the following. \begin{itemize} \item Network sharing is better for both SPs than the Nash-Cournot solution \item If sharing does not take place then the Nash-Cournot solution is the unique Nash equilibrium since that is the only point at which both curves cross in Figure~\ref{f:profits-q1q2} (left). \item An SP might be tempted to deviate from the Nash-Cournot solution since if it is aggressive then the best response of the other SP is to be submissive in which case the aggressive SP will do even better than sharing. (We note that this is not a Nash equilibrium since if one SP sets $q_i=0$ then the best response of the other SP is to set $q_{3-i}=q_{3-i}^{mon}$.) \item The downside of aggression is that if both SPs are aggressive then they both make negative profit and hence are worse off (as in standard games of chicken). Whether or not an SP will choose to be aggressive will largely depend on how it expects the other SP to react. \end{itemize} \section{General Analysis} \label{s:general} As mentioned earlier, most of our general analysis is deferred to the Appendices. However, in the following we state our main results for the case of network sharing compared with a Cournot competition.
For the case of sharing the combined price, demand and profit is given by, \begin{eqnarray*} p^{coop}&=&{\varepsilon} \beta_{min}/({\varepsilon}-1)\\ q^{coop}&=&Q({\varepsilon} \beta_{min}/({\varepsilon}-1))^{-{\varepsilon}}\\ \Pi^{coop}&=&p^{coop} q^{coop} - (\alpha_{min}+\beta_{min} q^{coop}), \end{eqnarray*} where $\alpha_{min}=\min\{\alpha_1,\alpha_2\}$ and $\beta_{min}=\min\{\beta_1,\beta_2\}$. The profit is shared according to, \begin{eqnarray} \Pi_1^{coop}&=&\frac{1}{2}(\Pi_1^{mon}+\Pi^{coop}-\Pi_2^{mon})\label{eq:shareprofit1}\\ \Pi_2^{coop}&=&\frac{1}{2}(\Pi_2^{mon}+\Pi^{coop}-\Pi_1^{mon})\label{eq:shareprofit2}, \end{eqnarray} where $\Pi^{mon}_1,\Pi^{mon}_2$ are the monopoly profits for SPs 1 and 2 respectively. For the case of the Cournot competition, for any given $q_1,q_2$ the profits of the SPs are specified: \begin{eqnarray} \Pi_1(q_1,q_2)&=& \left(\frac{Q}{q_1+q_2}\right)^{1/\varepsilon} q_1 - \alpha_1 -\beta_1 q_1 \label{eq:prof1}\\ \Pi_2(q_1,q_2)&=& \left(\frac{Q}{q_1+q_2}\right)^{1/\varepsilon} q_2 - \alpha_2 -\beta_2 q_2 \label{eq:prof2} \end{eqnarray} We first need to calculate the best response function for each provider. If SP2 offers to serve demand $q_2$, we show that the best response of SP1 is to set $q_1$ so that $q_1/q_2$ is the solution to the equation, $$ (z+1)^{1+\frac{1}{\varepsilon}}=Az+B, $$ where $A=(1-\frac{1}{\varepsilon})(Q/(q_2\beta_1^{\varepsilon}))^{1/\varepsilon}$ and $B=(Q/(q_2\beta_1^{\varepsilon}))^{1/\varepsilon}$, assuming that this leads to a positive profit. We use $\hat{q}_1(q_2)$ to denote this solution for any fixed value of $q_2$. The best response function for SP2 can be defined analogously. 
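The best-response equation above is straightforward to solve numerically. A Python sketch using simple bisection; the parameter values for $Q$, $\varepsilon$, $\beta_1$, $\alpha_1$ and $q_2$ are illustrative assumptions:

```python
# Numerical solve of SP1's best-response condition
#   (z + 1)^(1 + 1/eps) = A*z + B,  where z = q1/q2,
# with B = (Q / (q2 * beta1^eps))^(1/eps) and A = (1 - 1/eps) * B.
# Parameter values below are illustrative assumptions.
Q, EPS = 10_000.0, 2.0
BETA1, ALPHA1 = 1.0, 100.0
q2 = 100.0

B = (Q / (q2 * BETA1 ** EPS)) ** (1.0 / EPS)
A = (1.0 - 1.0 / EPS) * B

def F(z):
    return (z + 1.0) ** (1.0 + 1.0 / EPS) - (A * z + B)

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection; assumes f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# F(0) = 1 - B < 0 here, and F grows superlinearly, so a root exists.
z_hat = bisect(F, 0.0, 1e3)
q1_hat = z_hat * q2             # SP1's best response to q2

def profit1(q1):
    return (Q / (q1 + q2)) ** (1.0 / EPS) * q1 - ALPHA1 - BETA1 * q1

# The root is a local maximum of SP1's profit: nudging q1 either way hurts.
assert profit1(q1_hat) >= profit1(0.9 * q1_hat)
assert profit1(q1_hat) >= profit1(1.1 * q1_hat)
```

Iterating the two best-response maps $\hat{q}_1(q_2)$ and $\hat{q}_2(q_1)$ in turn is one way to approximate the Nash-Cournot point whose closed form is given next.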
Now that the best response functions are in place, we can calculate the Nash-Cournot solution which is given by, \begin{eqnarray*} t&=&\frac{1-\beta_2/\beta_1 (1-1/\varepsilon)}{\beta_2/\beta_1-(1-1/\varepsilon)}\\ q_1^*&=& Q\left(\frac{1+t(1-\frac{1}{\varepsilon})}{\beta_2(1+t)^{1+\frac{1}{\varepsilon}}}\right)^{\varepsilon}\\ q_2^*&=&Q\left( \frac{\frac{1}{t}(1-\frac{1}{\varepsilon})+1}{\beta_1(\frac{1}{t}+1)^{1+\frac{1}{\varepsilon}}} \right)^{\varepsilon} \end{eqnarray*} (We remark that in some cases this Nash Equilibrium may not exist if either $q^*_1$ or $q^*_2$ is negative or if either of the corresponding profits is negative.) As we saw in our running example, even if the Nash Equilibrium does exist an SP may have an incentive to not play the Nash equilibrium solution but instead try to drive the other SP out of the market. If SP1 wishes to be aggressive in this way then it sets $q_1=q'_1$, where $$ q'_1=\arg\max_{q_1:\Pi_2(q_1,\hat{q}_2(q_1))\le 0}\{\Pi_1(q_1,0)\}. $$ (A similar expression can be derived for the aggressive strategy of SP2.) Since for the submission strategy SP $i$ simply sets $q_i=0$, we have now defined the values of $q_1,q_2$ for the cases of Nash-Cournot, Aggression and Submission. For any problem instance we can then utilize the profit functions of Equations (\ref{eq:prof1}) and (\ref{eq:prof2}) for each possible pair of strategies and combine the results with the profits for sharing (given in Equations (\ref{eq:shareprofit1}) and (\ref{eq:shareprofit2})) in order to obtain a table of the form shown in Figure~\ref{f:payofftable}. \section{Conclusions} In this paper we have presented a model to illustrate the options facing two Service Providers who are deciding whether or not to share network infrastructure. Our cost function has a fixed component and hence is non-convex.
We presented a taxonomy of problem variants and derived the profit for each SP both in the case that they share infrastructure as well as the case that they are competitors. For the latter case the dynamics are complicated because the fixed cost function gives an SP the option to be aggressive and try to drive the other SP out of the market. For our running example the profit to each SP in the case of sharing is significantly higher than when they compete fairly according to a Nash Equilibrium, and the profit is comparable to the case in which the SP is aggressive and the other SP is submissive. For this example each SP would likely consider sharing to be a better option since even if it plays aggressively in a competitive setting there is no way to ensure that the other SP would not try to be aggressive as well. A number of problems remain. In particular we would like to determine how the dynamics would change if the two SPs only share a portion of the network infrastructure. We would also like to incorporate roaming agreements into the model since they represent a ``halfway'' point between total competition and full cooperation. \iffalse In order to understand the benefits of network sharing we need to understand how operators compete. It is natural to consider a scenario where network providers compete on price in the style of a Bertrand competition. In contrast with a Cournot model where sellers compete on volume and the equilibrium behavior generates non-zero profit, in a standard Bertrand game all sellers make zero profit. This is because buyers make purchases from the seller(s) that have the lowest cost of producing the item on sale and the purchase price equals the manufacturing price for these sellers. Varian~\cite{Varian80} looked at how the above issue can be circumvented by assuming that not all buyers are price sensitive. One subset of the buyers pays attention to price and all of their business goes to the seller with the lowest price.
The remaining buyers do not care about price and their business is randomly distributed among the remaining sellers. In this case each seller has a choice. They can either set their price high and obtain a large profit from a smaller number of buyers or they can compete on price in order to try and claim an increased share of the market but at lower profit. Varian was able to show that in this framework equilibrium behavior produces non-zero profit for the seller. We follow a later adaptation of this model to a telecom setting due to Bagwell and Lee~\cite{BagwellL14} and show how it can be used to compare sharing options for operators with different geographic footprints. \fi \section{Introduction} \label{s:intro} As communication technologies become increasingly complex, Service Providers (SPs) need to make ever larger investments in order to bring the latest network generation to their end users. As one way to defray these costs, SPs are looking at network sharing agreements that allow them to upgrade their networks more quickly at lower cost. For example, it has been observed that the most lucrative 10\% of mobile access markets already account for over 50\% of an SP's revenues whereas the remaining 90\% of the markets are ``subsidized'' by those~\cite{Larsen12}. Network sharing is especially attractive in the less profitable regions since it minimizes the investment that SPs need to make there. Network sharing can take many forms depending on the assets that are shared. In the most extreme case the entire network is shared. For this case the only way in which the SPs can distinguish themselves is via different service plans. The actual network performance will be the same for both SPs. In less extreme cases only parts of the network will be shared. Examples include one or more of real-estate sharing, tower sharing, RAN (radio access network) sharing and core network sharing. Sharing of such inactive elements is becoming increasingly popular. 
In China the three major operators have formed a joint venture to share the towers that host their radio equipment~\cite{DengWW15}. In this work we focus on two service providers and derive a model to help understand the implications of network sharing. We wish to understand both the implications for the profit of the SPs as well as the prices faced by the end users. We note that network sharing is a topic of interest to regulators since it can affect the competitive make-up of a market. A regulator may wish to make sure that prices do not rise excessively before signing off on a sharing agreement. We incorporate the effects of such a regulator into our analysis. The main question we ask is: {\em How does network pricing and capacity provisioning differ in the case that the SPs cooperate versus the case that they compete?} In general, we show that sharing can be beneficial for a wide range of network parameters. In particular: \begin{itemize} \item We show that a sharing strategy can generate significantly higher profits for the service providers than if they compete and act according to a Nash equilibrium strategy. \item We demonstrate that in some situations this gain due to sharing holds, even if a service provider has the market power to drive the other service provider out of the market. \end{itemize} For the case of not sharing, we analyze the competition between the service providers for the case in which providers decide on how much capacity to deploy (which gives rise to a Nash-Cournot game). In the Appendix we also study the case in which providers decide on the price they offer to the market (which gives rise to a Bertrand game). A key difficulty is that in many situations service providers have a {\em fixed cost component} for entering a market which gives rise to a non-convex cost function. Competition in this setting is non-standard and leads to a potential situation in which one provider can drive the other out of the market.
Analyzing how this occurs is a key component of our analysis. \iffalse At a qualitative level, network sharing has the following effects. \begin{itemize} \item Network sharing will reduce the overall cost required for network investment since multiple operators can take advantage of economies of scale and also utilize whichever one has the best vendor contracts. This effect is a benefit for both network operators and end users. \item If network sharing allows collusion on price then prices may rise since the operators can shift from competitive pricing to monopoly pricing. A regulator will be interested in making sure that an element of competition remains, even after a network sharing agreement is put in place. \item Network sharing allows operators to offer service more quickly over a larger geographic region than if each had to offer ubiquitous coverage alone. This is the main reason why regulators are open to network sharing agreements since they balance the potential loss of competition against a quicker rollout of new technologies. To the best of our knowledge this aspect of network sharing has not been studied before. \end{itemize} The questions that we wish to ask are: \begin{itemize} \item Will network sharing improve the overall economic situation of a network operator? On the one hand, an operator can take advantage of greater economies of scale when deploying infrastructure. It can also potentially benefit from a reduction in competition. On the other hand, an operator that currently has a competitive advantage with respect to another operator might lose that advantage if it has to share infrastructure with that other operator. \item Will network sharing improve the economic situation for an end user? On the one hand, the greater economies of scale can be passed from the operator to the end user. On the other hand, consumer prices could potentially rise due to the reduction in competition between operators. 
\item How will operators and end users be affected if an operator that currently offers service in a local region only can expand its footprint by partnering with another operator that has a more global reach? \end{itemize} In order to answer the above questions, we need to understand the detailed dynamics between two operators that are sharing infrastructure and the detailed dynamics between two operators that are in full competition. In order to do this, we require a model for how operators compete with one another. There are two standard ways of modeling competition. In a Nash-Cournot game, operators compete by offering a certain amount of network capacity. In a Bertrand game, operators compete by offering a certain price. Our paper is divided into the following parts. \fi \subsection{Problem variants} \begin{figure}[htb] \begin{center} \includegraphics[scale = 0.30]{options.png} \caption{A schematic of the problem variants that we consider.} \label{f:hierarchy} \end{center} \end{figure} Much of the paper will focus on the dynamics and outcomes of a given competitive or collaborative scenario. Once those scenarios are established we can determine whether or not an SP would be better off entering into a sharing agreement or going it alone. However, there are many types of competitive or cooperative arrangements that each lead to different results for the SPs. We begin by giving a high-level description of the different regimes that can arise. These can be categorized in a hierarchical manner as shown in Figure~\ref{f:hierarchy}. At a high level each SP needs to determine whether to {\em compete} with the other SP or whether to {\em cooperate} with it. For the case that SPs decide to compete there are two types of competition, a {\em Bertrand} game in which the SPs each offer their own price and the end users react accordingly, and a {\em Cournot} game in which each SP offers a certain capacity into the market and the market price is set accordingly.
The Cournot game admits a number of possible variations. First of all, depending on the problem parameters, there may or may not be a {\em Nash equilibrium} solution for the SPs. If there is a Nash equilibrium solution, then two situations can arise. In one of them there is a {\em regulator} that forces the end user price to be the one determined by the equilibrium solution. In the other there is no regulator and so each SP has to decide whether to play the equilibrium solution. The reason it might not do so is that depending on the cost parameters an SP might have the ability to force the other SP out of the market, i.e.\ it can play a strategy such that the other SP has no incentive to participate in the market. However, that is a risky strategy in that if both SPs try it then they could both be worse off than playing the equilibrium solution. We remark that the ability of one SP to drive the other out of the market arises solely because of the fixed component in the cost that leads to a non-convex cost function. Our remaining analysis considers the situation when SPs cooperate and share their networks. For this case there are variants depending on whether there is a regulator that can enforce price limits. If there is a regulator then we assume that the end user price is forced to be the same as it would be if the SPs were competing according to a Cournot game. If there is no such regulator then we assume that the SPs can set prices as if the combined entity were a monopoly. The remaining decisions refer to how the profits are shared when the two SPs cooperate. We assume that this depends on whether or not both SPs are {\em credible players}, i.e.\ whether or not they can legitimately offer service by themselves. If one SP deems that the other is not credible, then it would look to take the vast majority of the profit for itself.
If both SPs can credibly offer service on their own, then we assume that profits in the sharing scenario are divided up according to appropriate schemes (e.g.\ based on Shapley value). \subsection{Paper Organization} In Section~\ref{s:model} we define the models that we use throughout the paper and we outline the equations that define optimal demand and price for a monopolistic SP. In Section~\ref{s:narrative} we explore the dynamics for a prototypical example in a single geographic region. By starting with a concrete example we can better understand the fundamental dynamics at play rather than getting bogged down in the general equations (which can become somewhat complex). For this example we consider most of the regimes outlined in Figure~\ref{f:hierarchy}. In Section~\ref{s:general} we outline our results for the general case with much of the detailed analysis deferred to the Appendix. In particular, in Appendix~\ref{s:cournot} we present a more detailed study of the Cournot game in which the providers compete via deployed capacity. There we derive the equations that characterize the Nash Equilibrium if it exists under our non-convex cost functions. We also examine the conditions under which one SP can drive the other out of the market. In Appendix~\ref{s:sharing} we determine the SP profits that arise when they enter into a sharing agreement. These profits are determined by how the combined profit is shared. This is calculated via two notions of Shapley value (one of which is based on the notion of ``Shapley value with externalities''.) We also explore a regulatory framework that enforces Nash prices so that end users are not penalized in the sharing scenario. Lastly, we examine how an SP would approach a sharing agreement if it does not deem the other SP to be a credible competitor, i.e.\ if it does not believe the other SP has the cost structure to operate a network on its own.
In Appendix~\ref{s:bertrand-single} we analyze the dynamics when the competitive setting is modeled as a Bertrand game. The analysis of the Bertrand game is in general simpler than the corresponding Cournot analysis. This is because in a Bertrand game the SP with the better cost structure always has the ability to drive the other SP out of the market. In addition to the basic Bertrand game we study a number of variants. In one of them the SPs are able to share network costs but they must still compete on price. In another variant only a subset of the end users are deemed to be price conscious. In Appendix~\ref{s:narrative-multiple} we extend our analysis to a situation where {\em a priori} the SPs offer service in different geographic regions. This is an especially attractive situation for network sharing since it allows each SP to offer service over the entire market more rapidly. \iffalse Our paper is structured as follows. In general, the comparison between sharing and not-sharing will of course be influenced by many factors, including the cost of deploying a network, the price sensitivity of the end users and the exact rules for competition among operators. However, in order to make our results more concrete, we illustrate our analysis using specific examples with well-defined network cost and price sensitivity functions. \begin{itemize} \item In order to understand the network dynamics without sharing, we need a way to model competition between operators. The natural way to think about this is via price competition. However, this leads to a standard Bertrand model that is subject to the well-known ``Bertrand curse'' in which all profits are driven to zero. To counter this, in Section~\ref{s:bagwell-lee} we present a framework due to Bagwell and Lee~\cite{BagwellL14} that was in turn based on earlier work by Varian~\cite{Varian80}. In this framework only a subset of the users are assumed to be price sensitive. 
The remaining users choose among operators at random. This creates equilibria in the non-sharing case in which operators have profit that is bounded away from zero. \item In Section~\ref{s:model} we present our model. For much of the paper we consider a country that is divided into two geographic regions, W and E. We have a global provider that {\em a priori} intends to build out capacity in both regions and two local providers that {\em a priori} only intend to build out in one of the regions. We also have a pool of users that are either global (and hence need service in both regions) or local (and hence need service in only one of the regions). By symmetry, we typically focus on the pricing dynamics in only one of the local regions (region W). \item In Section~\ref{s:model} we also introduce the different scenarios that we compare. In the first (``No Sharing'') there is no network sharing and the operators are pure competitors. In the second (``Full sharing with no price competition''), there is network sharing and the local and global operators build their network using the cheapest price to which they have access. In this scenario the sharing operators are able to collude on price and therefore use monopoly pricing for all users. In the third scenario, (``Full sharing with price competition'') the local and global operators again get to build the network using the cheapest available cost. However, in this case a regulator stipulates that they must still compete and therefore cannot use monopoly pricing. \item In Section~\ref{s:numerical} we present the outcomes for the different scenarios using a specific set of cost functions and end user price sensitivity functions. In particular for the case of full sharing with no price competition, we show that both the local and global operators achieve higher profit from the local users than in the case of no sharing. In contrast, the global operator achieves a smaller profit from the global users.
From the user perspective, the global users see higher prices but the local users see lower prices. Since a regulator would be wary of approving a sharing scheme that leads to higher prices we also consider the situation with full sharing but where the operators are expected to compete on price. In this case the change in operator profit when compared to the no sharing case is directionally the same as for full sharing with no competition. However, in this case both local and global users witness a decrease in price, except for the unlikely case in which the operators' {\em a priori} network costs are very different. \item In Section~\ref{s:extensions} we consider the implications of two variants of the model. In the first we assume that only some of the network elements are shared. For the second we consider the results for the full sharing model with an alternative model that aims to capture national roaming. \end{itemize} \fi \iffalse The motivation of our model is that network sharing benefits an operator in two ways. First of all it allows two operators to pool the cost of building the network and take advantage of lower construction costs. Second, if two operators have slightly different areas where their coverage is strong, it allows each of them to take advantage of the combined coverage area. Indeed, according to the GSMA report this is one of the main reasons that regulators are willing to allow sharing since one of its downsides (from a consumer perspective) is that it can reduce competition. Sharing allows an operator to quickly ramp up service of a new technology (e.g.\ 5G) over a wide coverage area and thereby offer its end users the benefits of this ubiquitous coverage. \fi \subsection{Previous Work} The GSM Association wrote an influential report~\cite{GSMA} examining the ways in which wireless infrastructure can be shared.
This report describes some existing sharing agreements that are already in place and discusses the economic and regulatory implications. In \cite{JanssenLS14}, Janssen et al.\ discuss the statistical multiplexing gains that can be obtained by combining capacity in a network sharing arrangement. The economics of the Chinese tower sharing agreement mentioned earlier were analyzed by Deng et al.\ in \cite{DengWW15}. Malanchini and Gruber studied small cell sharing in \cite{MalanchiniG15} and presented ways in which operators could still differentiate themselves (e.g.\ via power management) even if all network resources are shared. The papers \cite{AlQahtani08,KokkuMZR12,MalanchiniVA14,ValentinJA13} discuss ways in which network sharing could be realized in practice. In particular, \cite{KokkuMZR12} discusses a technique known as ``network slicing'' in which the resource allocation algorithms at wireless basestations reserve a fraction of resources for each SP. The general economics literature contains many analyses of duopolies with various cost structures (e.g.\ \cite{Saporiti08}). However, to the best of our knowledge previous work has not considered Cournot competition for network providers under non-convex cost functions with fixed costs, and there has not been a comparison of how the dynamics under sharing compare to the competitive dynamics. \input{model} \input{narrative} \bibliographystyle{abbrv}
\section{Future Work} We are interested in making the circuit design system more intelligent by incorporating a circuit simulator (e.g., SPICE), a physics engine (e.g., Open Dynamics Engine) to simulate printed characters' dynamics, a schematic image recognition system~\cite{Arvo:2000:FSC:354401.354413}, or an interactive sketch beautification system~\cite{Igarashi:1997:IBT:263407.263525} to facilitate the user's creative circuit integrated 3D object design. \section{Conclusion} We presented SurfCuit: a system that integrates circuits into 3D prints by mounting them on the printed surface. Our construction method enables building rather complex, highly-conductive circuit patterns robustly on FDM-based 3D prints. Our interactive design system enables intuitive input and 3D layout of electric circuits on 3D geometry. \balance \bibliographystyle{acm-sigchi} \section{Introduction} \label{sec:introduction} Recent advances in consumer 3D printing technology have made it possible for end users to casually fabricate 3D plastic objects. Many 'makers' would like to add interactivity to their printed objects using sensors, lights, motors, and so on. However, incorporating the necessary electric circuits into these objects has not become inherently easier with 3D printing. Electric circuits are typically designed in 2D, and mounted on planar geometries, such as printed circuit boards (PCBs). Inserting a flat circuit inside a 3D object requires extensive geometry editing to create cavities, wire routing paths, and fixtures. This is generally beyond the reach of the novice or casual maker. \if0 There is much research about circuit construction, but there is still room for improvement. \fi An alternative is to use 3D circuitry, where 3D traces are embedded into the object volume or surface. However, existing CAD interfaces and fabrication techniques have not been designed with 3D circuits in mind.
In this paper, we demonstrate both a design tool and fabrication technique to integrate the mechanical and electrical functions of simple objects. Our approach, which we call SurfCuit, allows the user to design and construct functional and durable electric circuits on the surface of 3D prints (see Fig.~\ref{fig:teaser}). As a prototyping method, surface mounting has various advantages over embedded circuitry. First, construction is much easier since the parts are accessible from outside -- the user does not need to insert the electric parts during printing, or try to fit components into tiny cavities. Second, it is easy to debug and repair surface-mounted circuits, while in many embedded-circuit applications this is difficult or impossible. Finally, the circuit design task is greatly simplified -- compared to three-dimensional arrangements of cavities, fixtures, and wire channels, surface layouts are intuitive and efficient to create. \begin{figure}[t!] \includegraphics[width=86mm]{img/teaser/teaser.pdf} \caption{SurfCuit allows the user to design and fabricate surface mounted circuits on 3D prints (left). An illumination circuit (top right) is mounted on the surface of Christmas tree shape (bottom right). } \label{fig:teaser} \end{figure} Our key challenges are (i) how to fabricate complex circuits on the surface of 3D prints and (ii) how to help the user perform the \emph{circuit layout} task directly on the 3D surface in a computational design tool. Conductive inks do not readily adhere to 3D print plastics and their resistance is too high for many types of components. Instead, SurfCuit uses copper tape and tubes that are soldered together to achieve mechanically durable and highly conductive circuits. We leverage the fact that near soldering temperature, 3D-printed PLA plastic melts to a sticky viscous fluid that bonds well to copper material. 
To further enhance the fabrication process, our design tool adds shallow channels and holes to the 3D model before printing. These channels both help the user to re-create their virtual circuit traces in the physical world, and also help to firmly affix the copper tape and through-hole parts onto the 3D print. The traces which connect components in an electrical circuit must be physically isolated, i.e. they cannot intersect with each other. Because they exist in a (possibly curved) 2D space, with even a moderate number of traces it becomes very difficult to create the circuit without carefully planning out the traces ahead of time. Board layout and planning software like EAGLE is an essential tool for planar circuit design. However, no existing circuit planning tool is applicable to arbitrary 3D surfaces. In addition, strips of copper tape cannot follow arbitrary paths on 3D surfaces, as many paths introduce too much torsion into the tape, which will result in kinks or tears. The strips should be laid out along \emph{geodesics}, and without computational guidance it is very difficult to ensure that the 3D traces have this property. SurfCuit offers an interactive design tool that allows the user to easily adapt an existing planar circuit schematic to a 3D surface. To demonstrate the capabilities of SurfCuit, we have designed and fabricated a variety of 3D objects with 3D circuitry. Some of these examples involve circuit complexity and levels of voltage/current that are well beyond what has been demonstrated in the literature. In addition, we also perform some destructive testing to demonstrate the robustness of our fabrication process. We show various examples of how our system facilitates the user's creation of functional 3D objects with electric circuits.
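The geodesic requirement can be made concrete with a small discrete check (our own illustration, not part of the SurfCuit implementation): a surface path that flat tape can follow without twisting has zero geodesic curvature, i.e. its discrete turning, projected into the local tangent plane, vanishes. On the unit sphere, the equator (a geodesic) passes this test while a circle of latitude does not:

```python
import math

def tangent_turning(points):
    """Mean in-plane turning angle of a polyline on the unit sphere.

    For each interior vertex b, the incoming and outgoing edge vectors
    are projected onto the tangent plane at b (the normal at b is b
    itself on the unit sphere); a geodesic turns ~zero in that plane.
    """
    def sub(u, v): return tuple(a - b for a, b in zip(u, v))
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def proj(u, n):                     # remove the normal component
        d = dot(u, n)
        return tuple(a - d * b for a, b in zip(u, n))
    angles = []
    for a, b, c in zip(points, points[1:], points[2:]):
        v1, v2 = proj(sub(b, a), b), proj(sub(c, b), b)
        cosang = dot(v1, v2) / math.sqrt(dot(v1, v1) * dot(v2, v2))
        angles.append(math.acos(max(-1.0, min(1.0, cosang))))
    return sum(angles) / len(angles)

def latitude_circle(phi, n=200):        # phi = 0 gives the equator
    ts = [2 * math.pi * k / n for k in range(n + 1)]
    return [(math.cos(phi) * math.cos(t),
             math.cos(phi) * math.sin(t),
             math.sin(phi)) for t in ts]

# The equator is a geodesic (no in-plane turning); a 45-degree latitude
# circle is not, so flat tape laid along it would kink or tear.
assert tangent_turning(latitude_circle(0.0)) < 1e-6
assert tangent_turning(latitude_circle(math.pi / 4)) > 0.01
```

A layout tool could use the same projected-turning measure on mesh paths to warn when a drawn trace deviates too far from a geodesic to be realized in tape.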
\section{Background and Related Work} \label{sec:related work} \subsection{Molded Interconnect Devices} Various advanced manufacturing technologies support the fabrication of circuitry that conforms to curved surfaces. For example, the Optomec Aerosol Jet system can create metal traces on simple 3D forms using laser metal deposition techniques. Another technology, Molded Interconnect Devices~(MID), makes it possible to install circuitry on plastic surfaces. MID enables functional integration of electric circuits into small spaces, and is thus often used for compact implementation of electronic products such as cell phones, cars, and advanced micro robots (see Figure~\ref{fig:MID}). However, manufacturing such objects involves multi-axis laser engraving machines and etching/plating baths only found in advanced industrial facilities. Our SurfCuit system is inspired by these techniques, but our goal is to introduce MID-style fabrication in a more accessible context. In addition, currently there are no CAD tools for MID design. Our SurfCuit design tool is directly applicable to these advanced manufacturing methods. \begin{figure}[htbp!] \centering \includegraphics[width=86mm]{img/mid/mid.pdf} \caption{Example of molded interconnect devices. (Left) A robotic finger tip sensor by CITEC, Bielefeld University. (Right) Festo Bionic Ant Robot} \label{fig:MID} \end{figure} \begin{figure*}[t!] \includegraphics[width=177.8mm]{img/workflow/workflow.pdf} \caption{Workflow of SurfCuit. The user first draws a 2D schematic diagram of a circuit (a), then positions the electrical components on a 3D shape and connects them with curved traces (b). SurfCuit automatically generates channels and holes on the surface (c) to guide the user in placement of copper tapes and tubes.
Finally, the user solders the copper pieces together to achieve a robust circuit on the 3D print.} \label{fig:workflow} \end{figure*} \subsection{Interactive 3D Prints} Various works in the human-computer interaction and fabrication literature have addressed the topic of adding interactivity to 3D prints. Techniques have been presented to convert 3D prints into sensors based on cameras~\cite{Savage:2013:SES:2501988.2501992}, acoustics~\cite{Ono:2013:TAA:2501988.2501989}, and light guides~\cite{Willis:2012:POP:2380116.2380190}. Sato et al.~\cite{Sato:2012:TET:2207676.2207743} studied frequency-dependent impedance properties to detect configurations of conductive 3D objects. Printput~\cite{raey} and Capricate~\cite{Schmitz:2015:CFP:2807442.2807503} use conductive filaments to convert the surface of 3D prints into capacitive touch sensors. However, such conductive filament traces have very high resistance, making it difficult to supply enough current to drive larger components (see Section~Resistance Comparison). Fabricating 3D circuit traces inside ``tubes'' passing through the interior of 3D prints was explored by Savage et al.~\cite{Savage:2014:STA}. This method is effective at hiding the circuitry, but also illustrates the challenge of repairing such devices. The capabilities of the automatic routing algorithms also limit the complexity of the circuits that can be designed with this approach. Hudson~\cite{Hudson:2014:PTB:2556288.2557338} studied the use of conductive threads for yarn-based soft 3D printing. The commercially-available Voxel8 3D printer~\cite{voxel8} can embed circuits in a plastic print using conductive inks. The liquid ink can be deposited on the outer surface of prints, but only in upward-facing regions. \subsection{Prototyping Flat Circuits} Solderless prototyping methods such as traditional breadboards and LittleBits~\cite{Bdeir:2009:EML} are the fastest way to assemble a circuit, but these methods are not suitable for permanent use. 
The resulting circuits are fragile and also space-consuming. Recently, advances in conductive ink have made it possible to directly deposit circuit traces on flat sheets using consumer ink-jet printers~\cite{Kawahara:2013:IIC}. ShrinkCuit~\cite{Lo:2014:SSS:2642918.2647421} uses Shrinky Dinks\texttrademark~as the substrate to enhance the complexity and conductivity of conductive-ink-based circuits. Sketch In Circuit~\cite{Qi:2014:SCD:2556288.2557391} uses copper tape for prototyping traces on paper. Circuit Sticker~\cite{Hodges:2014:CSP} enables rapid prototyping of more complex circuits by pasting ready-made circuit boards on top of traces. Ramakers et al.\ presented an interactive design system for circuits printed on paper~\cite{Ramakers:2015:PIA:2702123.2702487}. The motivation behind these works is to enable creative circuit prototyping for ``makers'' and non-experts. We share this motivation, and our goal is to extend these ideas to arbitrary free-form 3D surfaces with a robust construction technique and an interactive trace layout interface. \subsection{Circuit Design Tools} There are many circuit layout design tools for planar circuit boards. For example, commercial packages such as Eagle\texttrademark, AutoTRAX\texttrademark, and DipTrace\texttrademark\ provide comprehensive environments for PCB design and simulation. Autodesk 123D Circuits~\cite{123Dcircuits} provides a layout design system for breadboards. Autodesk Project Wire provides 3D circuit layout for the Voxel8 printer~\cite{voxel8}. Our SurfCuit 3D circuit design tool enables novices to quickly design 3D traces constrained to the surface of an arbitrary input mesh. \section{SurfCuit Circuit Fabrication} \label{sec:implementation} \subsection{Construction Procedure} The robustness of electrical connections is very important for permanent circuit construction --- disconnection of a single trace can disable an entire circuit. 
However, constructing robust, highly conductive traces on curved 3D-printed surfaces has been difficult. In this paper, we take advantage of the fact that PLA and ABS plastics melt into sticky, glue-like viscous fluids at around 200$^\circ$C. Since the melting point of solder is also around that temperature, soldering traces and pins melts the surrounding 3D plastic and strengthens the mechanical bonds between them. The result is that after cooling, the highly conductive copper traces are firmly affixed to the surface of the 3D print. Note that our approach is inspired by recent works that exploit melting behaviors for fabrication~\cite{Mueller:2014:LOL:2590181.2567782,Sageman-Furnas:2015:MFC:2820903.2820915}. However, here we use melting for bonding, not for forming. This melting technique creates strong connections, but actually using it on 3D surfaces requires some pre-planning. Computational design tools are necessary to plan the spatial layout for complex circuits. To guide the user in fabricating their design, we computationally generate channels and holes in the 3D surface before printing, which also helps to increase circuit robustness. Hence, the workflow of SurfCuit fabrication is as follows (see Fig.~\ref{fig:workflow}): \begin{enumerate} \item In the SurfCuit design tool, the user first draws a 2D circuit schematic diagram, then lays out the electric parts and traces on a 3D surface. \item SurfCuit generates a 3D mesh with channels and holes that guide the user to install copper tape and tubes on the 3D print. \item Then, the user covers these copper traces with solder, and solders the traces, pins, and electric parts together. \item Finally, a thin layer of clear lacquer spray electrically protects the traces. \end{enumerate} With SurfCuit, a novice maker can easily create complex functional 3D objects using widely-available single-material FDM printers. 
The soldering process requires exactly the same skills as fabricating a 2D circuit, which nearly anyone can learn to do. Applying solder to the copper tape is not difficult, as the solder naturally flows into the trace channels due to surface tension (see accompanying video). This soldering step also thickens the traces, further increasing mechanical robustness and electrical conductivity. \subsection{Construction Detail} Our circuit fabrication process is intended to be used with through-hole parts. Through-hole parts are desirable because they are widely available and easy to manually solder. More importantly, through-hole parts achieve mechanically stronger bonds compared to surface-mounted parts. Thus, they can be left exposed on the surface. The interval between pins is typically 1/10~inch = 2.54~mm for through-hole parts. Our technique maintains sufficient isolation between neighboring traces by keeping the trace width and pin diameter smaller than this interval (see Fig.~\ref{fig:construction}). \begin{figure}[htbp!] \includegraphics[width=86mm]{img/construction/construction.pdf} \caption{(left)~Dimensions of holes and channels generated by our tool to guide copper tape and tube placement. We used 1.5~mm-wide copper tape and copper tubes with 1/16~inch (1.6~mm) diameter. The images on the right show the channels and holes in a 3D print (top), and the post-soldering result (bottom). } \label{fig:construction} \end{figure} The channels and holes generated by our design tool are essential for the fabrication process. We arrived at this process after multiple iterations with simpler techniques. The benefits of this method include: \begin{itemize} \item The channels and holes help the user to accurately reproduce their complex virtual designs. \item The channels and holes provide a large contact area between the copper traces and the 3D print. 
\item The holes help to temporarily hold components in place while they are soldered. \item The channels partially enclose the traces and prevent them from peeling off. \item Since the copper tape is placed in a V-shaped channel, it is easy to cover the channel with solder -- surface tension makes the solder naturally flow into the channel. \end{itemize} The use of solid copper, rather than conductive inks or similar alternatives, is desirable for three reasons. First, copper has comparatively high conductivity / low resistance (less than $0.5~\Omega/\mathrm{m}$ in our traces). Thus, we are not limited by parasitic resistance when using the large electric currents typically necessary for driving small motors or incandescent lights. Second, its high heat conductivity makes it possible to bond copper to 3D-printed plastics via the application of heat. Finally, copper is solderable. Solder quickly spreads on the surface of copper and creates mechanically and electrically robust bonds. Compared to silver-based conductive inks (the most conductive alternative), copper is also inexpensive and widely available. To make 1/16~inch traces, we split widely available 1/8~inch copper tape (about \$10 for 50~m) in half using a rotary cutter. For copper pins, we used 1/16~inch copper tubes manufactured by K\&S Engineering, Inc.\ (about \$3 for 1~m). \subsection{Resistance Comparison} During the development of our fabrication process, we experimented with various conductive inks and copper paints. However, we found that most are difficult to apply to plastic surfaces. Liquid-based materials were also unreliable, as a small crack results in electrical disconnection. This is especially problematic on the rough surfaces of FDM 3D prints, where the ink tends to pool into the small cavities and channels produced by the printing process, creating highly variable thickness in the conductive layer. 
Furthermore, as reported in previous studies, the electric resistance of conductive liquids is very high (e.g., 11.48~$\Omega$ for a 28cm~x~0.5cm trace using silver conductive ink~\cite{Lo:2014:SSS:2642918.2647421} and 2~$\Omega$/inch for a 3mm-diameter tunnel filled with copper paint~\cite{Savage:2014:STA}). We also tried drawing traces with conductive PLA filament using a hand-held plastic extruder, but again the resistance is significantly higher than that of typical PCB traces, and thus cannot support many common circuits. For example, Black Magic 3D's Conductive Graphene Filament, which has one of the highest conductivities among the filaments on the market, still has a volume resistivity of 0.6~$\Omega\cdot\mathrm{cm}$. To achieve resistance similar to our traces (0.5~$\Omega/\mathrm{m}$), the cross-sectional area would need to be at least 120~$\mathrm{cm}^2$ ($\approx$11cm~x~11cm), vastly larger than the through-hole component pitch (2.54~mm). In other words, if we use conductive filament for traces, and the traces have a 1.5mm~x~1.5mm cross-section (small enough to connect to through-hole components), then a 20~cm long trace has more than 0.5~k$\Omega$ resistance. Such resistance causes over a 1~V voltage drop with only 2~mA of current, which is barely enough to light an LED. Generally, conductive filaments are sufficient for capacitive touch sensors or blinking LEDs, but they fall short when attempting to drive common micro-controllers, actuators, and transducers. \subsection{Robustness Comparison} The use of copper tape in circuits-on-surfaces is not entirely novel; in particular, copper tape is widely used in wearables and fashion tech. However, these circuits are generally very fragile. To demonstrate the mechanical robustness of our approach, we made a qualitative comparison with a na\"ive method using destructive testing (see Figure~\ref{fig:robustness}). 
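The filament-versus-copper arithmetic in the Resistance Comparison above reduces to $R = \rho L / A$. A minimal sketch (ours, not the authors'; it uses only the figures quoted in the text) reproduces the numbers:

```python
# Back-of-the-envelope check of the resistance comparison, using
# R = rho * L / A with the values quoted in the text.

RHO_FILAMENT = 0.6e-2   # 0.6 ohm*cm volume resistivity, converted to ohm*m
COPPER_TRACE = 0.5      # ohm/m, measured for the soldered copper traces

# Cross-sectional area a filament trace would need to match copper:
area_needed_m2 = RHO_FILAMENT / COPPER_TRACE   # m^2
area_needed_cm2 = area_needed_m2 * 1e4         # -> 120 cm^2
side_cm = area_needed_cm2 ** 0.5               # -> ~11 cm square side

# A realistic 1.5 mm x 1.5 mm filament trace, 20 cm long:
area_m2 = (1.5e-3) ** 2
resistance = RHO_FILAMENT * 0.20 / area_m2     # -> more than 0.5 kOhm
voltage_drop = resistance * 2e-3               # -> more than 1 V at 2 mA

print(area_needed_cm2, side_cm, resistance, voltage_drop)
```
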
In the na\"ive construction method, traces are simply copper tapes placed on the 3D print without the channels and solder. The tape has a sticky backing which is sufficient to hold it in place. The test circuits light an LED using a trace pattern that consists of more than twenty copper tape segments (see Figure~\ref{fig:robustness}). We then brushed the circuits for one minute with nylon brushes to observe the robustness of the circuit. We first used a relatively soft nylon kitchen brush designed for washing dishes, and then switched to a very hard nylon brush meant for scraping off rust. The circuit with na\"ive construction immediately stopped functioning when the soft nylon brush was applied. This is because the connection between overlapping copper tape segments is particularly weak and breaks under small mechanical forces. After a few additional seconds of brushing, the copper tape segments in the na\"ive construction started to peel off the plastic. After one minute, the copper traces in the na\"ive construction were severely damaged, to the point where the circuit would need to be entirely rebuilt. On the other hand, there was no visible damage to the SurfCuit traces even after brushing with the hard brush. The circuit with SurfCuit construction stayed functional during the entire experiment. Please refer to the accompanying video for details of the comparison. \begin{figure}[htbp!] \includegraphics[width=86mm]{img/robustness/robustness.pdf} \caption{(left)~LED lighting circuits with na\"ive construction and SurfCuit construction. (right) After brushing for one minute, the traces in the na\"ive construction were severely damaged while the SurfCuit traces were intact.} \label{fig:robustness} \end{figure} \section{SurfCuit Design Tool} Designing the trace layout for a complex circuit (e.g., 10+ connection points) typically requires planning before starting construction. Each trace that is added constrains the design space of future traces, because no traces can intersect. 
For flat 2D circuits, we can plan on paper or in vector-graphics CAD tools. However, since our circuits are on 3D free-form surfaces, we cannot lay out traces on flat 2D geometry. It is also not practical to lay out traces as 3D space curves, as keeping them ``on the surface'' is very cumbersome. Hence, we developed a domain-specific interactive CAD tool which allows users to arrange parts and traces on arbitrary 3D surfaces. \subsection{User Interface of the SurfCuit Design Tool} Our SurfCuit design tool has two modes: a 2D schematic design mode and a 3D part and trace layout mode (see Fig.~\ref{fig:interface}). The schematic mode lets the user specify electronic parts and their connections in the form of a 2D diagram, while the 3D layout mode lets the user arrange the parts and traces on the 3D model's surface. The user can switch back and forth between these modes during circuit editing. While the user edits the circuit in one mode, a small window highlights the electric parts or traces currently being edited in the other mode, making the 2D/3D correspondence easy to understand. Note that we are inspired by existing works showing highly abstracted schematic diagrams while editing complex models \cite{Zhu:2011:SDI:2070781.2024168,Kazi:2014:KSD:2642918.2647375}. \begin{figure}[htbp!] \includegraphics[width=86mm]{img/interface/interface.pdf} \caption{(left)~SurfCuit 3D trace design interface. The user designs traces on a 3D object by dragging and rotating parts. The 2D schematic window is shown at the same time to facilitate understanding of the circuit structure. (right bottom)~The user can change the connectivity of traces with a simple gesture.} \label{fig:interface} \end{figure} A schematic diagram is a desirable circuit input, rather than drawing traces directly on the 3D surface, because schematics are symbolic and thus easily comprehensible. In our schematic editor, the user places symbols of electric parts and specifies their connections. 
To keep the schematic tidy, the user can also add or delete points on a trace and switch the connectivity of points inside connected traces (see Fig.~\ref{fig:interface}-right bottom). Note that thousands of schematic diagrams are widely available on the internet, for virtually any kind of circuit. Thus, inputting such diagrams does not require sophisticated electronics knowledge; a novice designer can simply copy an existing schematic. The electric parts and their connections, specified in the 2D schematic window, are imported into the 3D layout window. The user lays out the parts by dragging and rotating them on the 3D surface. The traces are automatically generated on the surface in real-time during editing, making it easy for the user to lay out parts while avoiding intersecting traces. Similar to the schematic editor, the user can also add/delete points on a trace and change the connectivity of points inside connected traces. Note that the 3D operations maintain the topological connections between pins of parts. These connections are specified in the 2D schematic window and are automatically reflected in the 3D layout window. Finally, when the user presses the ``print'' button, the system creates the necessary channels and holes on the mesh that correspond to the design, and then exports the geometry for use in 3D printing software. Channels on the surface are generated using the stroke parametrization technique~\cite{Schmidt13}, which generates texture coordinates with minimal distortion around a stroke. We simply displace points of the mesh in the normal direction according to the distance computed from the parametrization. \begin{figure}[htbp!] \includegraphics[width=86mm]{img/method/method.pdf} \caption{Surface trace generation algorithm. (left)~Starting from two endpoints, the line segment between the two points is projected on the surface to find tangential directions for each point. 
We then incrementally slide each point along the surface in that projected direction, until they meet each other. (right)~If the algorithm fails, we annotate the failure to prompt the user to add an additional point.} \label{fig:method} \end{figure} \begin{figure*}[htbp!] \includegraphics[width=177.8mm]{img/results/results.pdf} \caption{Fabricated SurfCuit examples and their schematic diagrams.} \label{fig:results} \end{figure*} \subsection{Algorithmic Detail of SurfCuit Trace Computation} SurfCuit updates the routing of traces interactively during the user's editing. A trace is computed as a curve on the surface connecting two endpoints, each of which is either a pin or a user-specified trace midpoint. To make manual construction easy, traces should be as short and straight as possible, as this introduces the smallest amount of twisting (torsion) in the copper tape. The shortest (and thus straightest) path on a surface connecting two points is called a \emph{geodesic}. There are various existing methods to compute geodesics~(e.g.,~\cite{Surazhsky:2005:FEA:1073204.1073228}); however, true geodesics are expensive to compute. Thus, we use a heuristic method to estimate geodesics in real time (see Fig.~\ref{fig:method}-left). The basic idea behind our method is to start with the two endpoints $p_0$ and $q_0$, and move them toward each other until they meet. The direction of movement is defined by a vector in the tangent plane at each point. We compute these directions by first finding a 3D direction, and then projecting into the tangent plane and normalizing. 
For a pair of points $p_i$ and $q_i$, the tangent-plane directions are: \begin{eqnarray} \vec{t}_{pq_i} = \frac{(\bi{I}-\vec{n}_{p_i}\otimes\vec{n}_{p_i})(\vec{q}_i-\vec{p}_i) }{|(\bi{I}-\vec{n}_{p_i}\otimes\vec{n}_{p_i})(\vec{q}_i-\vec{p}_i)|}\\ \vec{t}_{qp_i} = \frac{(\bi{I}-\vec{n}_{q_i}\otimes\vec{n}_{q_i})(\vec{p}_i-\vec{q}_i) }{|(\bi{I}-\vec{n}_{q_i}\otimes\vec{n}_{q_i})(\vec{p}_i-\vec{q}_i)|} \end{eqnarray} where $\bi{I}$ is the identity matrix, and $\vec{n}_p$ and $\vec{n}_q$ are the unit normal vectors at $\vec{p}$ and $\vec{q}$. We then update the positions of the points ($p_i\rightarrow p_{i+1},q_i\rightarrow q_{i+1}$) by taking small steps in the directions of $\vec{t}_{pq_i}$ and $\vec{t}_{qp_i}$, respectively. To keep the resulting points on the triangle mesh, we use discrete parallel transport~\cite{Bergou:2008:DER:1360612.1360662}. First, a point on a triangle is advanced in the direction of the triangle's tangent vector $\vec{t}$ until it hits the boundary (edge) of the triangle. Then, at the boundary we transform $\vec{t}$ into the plane of the next triangle using a minimal rotation around the edge of the triangle, i.e., the rotation that takes the current triangle normal to the neighboring triangle normal. We stop each update once $p_i$ and $q_i$ have moved 1~mm along the surface, and call the resulting points $p_{i+1}$ and $q_{i+1}$. This simple algorithm can fail when the normal of the surface (i.e., $\vec{n}_p$ or $\vec{n}_q$) becomes parallel to $\vec{p}_i-\vec{q}_i$~(see Fig.~\ref{fig:method}-right). In such cases, we annotate the failure as a line connecting $p_0$ and $q_0$ in order to prompt the user to add another point along the trace. Clearly, this algorithm does not produce a bounded approximation to a geodesic; however, it does produce exact geodesics for simple surface geometries such as planes and spheres. In practice, we have found that it is highly effective at producing low-torsion 3D curves which are ideal for trace fabrication. 
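The two geometric steps above -- projecting the chord direction into the tangent plane, and parallel-transporting a tangent vector across a triangle edge -- can be sketched as follows. This is our illustrative reconstruction, not the authors' implementation; the function and variable names are ours.

```python
# Sketch of the two geometric primitives in the trace heuristic:
# (1) t = (I - n n^T)(q - p), normalized; (2) discrete parallel
# transport of a tangent vector via the minimal rotation taking one
# triangle normal to its neighbor's (Rodrigues' formula).

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    n = dot(v, v) ** 0.5
    return [x / n for x in v]

def tangent_direction(p, q, n_p):
    """Project (q - p) by (I - n⊗n) at unit normal n_p, then normalize.
    Returns None when n_p is (nearly) parallel to q - p, which is the
    failure case the tool reports so the user can add a midpoint."""
    d = sub(q, p)
    t = [d[i] - dot(d, n_p) * n_p[i] for i in range(3)]
    if dot(t, t) < 1e-18:
        return None
    return normalize(t)

def parallel_transport(t, n_from, n_to):
    """Apply to t the minimal rotation taking unit normal n_from to
    n_to (rotation about their shared-edge direction)."""
    axis = cross(n_from, n_to)
    s = dot(axis, axis) ** 0.5      # sin of the rotation angle
    c = dot(n_from, n_to)           # cos of the rotation angle
    if s < 1e-12:
        return t                    # coplanar triangles: nothing to do
    a = [x / s for x in axis]
    # Rodrigues' rotation formula
    return [t[i] * c
            + cross(a, t)[i] * s
            + a[i] * dot(a, t) * (1.0 - c) for i in range(3)]
```

Note that a tangent vector lying along the shared edge (i.e., along the rotation axis) is left unchanged by the transport, as expected.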
\section{SurfCuit Examples} To demonstrate the effectiveness of our approach, we present seven different examples created using our SurfCuit design tool and fabrication method. Each example is chosen to showcase particular properties of SurfCuit. Specifically, we demonstrate the integration of many different sensors (for light, touch, and sound), controller ICs, and transducers (for light, electromagnetic actuation, sound, and radio waves) into small spaces. Note that all the examples are fully self-contained and work without external controllers or power sources. Please refer to the accompanying video for more detail. We can fabricate SurfCuits on a wide variety of input meshes. In these examples, the input meshes were taken from the Thingiverse\texttrademark~3D model repository (http://www.thingiverse.com). We did not need to design new shapes from scratch to accommodate the electric circuits. \subsection{Enriching 3D Prints with Sound and Lights} \subsubsection{Christmas Tree} Figure~\ref{fig:teaser} shows a Christmas tree~(\href{https://www.thingiverse.com/thing:608606}{thing:608606}) that blinks 13 LEDs with asynchronous timing using a 16-pin timer IC (CD4060). This example packs many components into a relatively small volume: 21 electric parts, 20 traces, and one 9-volt battery are integrated into a 12cm~x~6cm~x~6cm volume. This example also demonstrates the inclusion of an IC with many pins using SurfCuit. Such complex circuits typically need cables over the traces when constructed on a single-sided 2D board. However, with SurfCuit, we can take advantage of the three-dimensional structure of the object to avoid such cables. For example, if we cannot connect two parts on the front side without intersection, the trace can go around the back side. Because the circuit construction is three-dimensional, there are more degrees of freedom in the trace layout. 
\subsubsection{Chirping Birds} SurfCuit enables the fabrication of complex circuits in a very small volume. The chirping bird~(Fig.~\ref{fig:results}) integrates a light theremin circuit, which uses a 555 timer IC and a photoresistor, into a 3D bird shape (\href{https://www.thingiverse.com/thing:359531}{thing:359531}). The light theremin circuit modulates the pitch of the sound according to the intensity of light received by an LDR sensor. Thus, a user can create chirping sounds by waving a hand over the top of the bird. This behavior significantly enhances the static bird geometry -- not only does it generate sounds, it makes the bird a playable instrument. This example also demonstrates circuit integration into a small space. The model's volume is very small (2.5cm~x~3cm~x~6cm), but because we use the full 3D space we can fit both the theremin circuit and 2cm-diameter batteries. Our interactive layout tool allowed us to avoid obstructing semantically-important features such as the bird's face, and to place the batteries and switches in the occluded area behind the tail. \subsubsection{Smartphone Stand} The speaker-embedded iPhone stand~(Fig.~\ref{fig:results}) augments a smartphone stand shape (\href{https://www.thingiverse.com/thing:642881}{thing:642881}) by integrating a circuit using an LM386 amplifier IC to amplify the sound signal. This smartphone stand exemplifies the integration of geometric and electrical functionality. While the original geometry provides the function of holding a smartphone, the circuit amplifies the audio signal. 3D printing makes it easy to fabricate the precise shape needed. Achieving the same geometric functionality would be very difficult with flat circuits. \subsubsection{Police Car} The police car example~(Fig.~\ref{fig:results}) integrates a circuit that blinks an LED beacon while a magnetic speaker generates siren sounds, modulated by two 555 timer ICs. 
Aside from the functionality of making the sounds and lights of a police car, the surface-mounted circuit also gives a mechanical appearance to the 3D-printed shape. The input mesh (\href{https://www.thingiverse.com/thing:806770}{thing:806770}) is clearly a car, but it lacks any detailed texture, in large part because the printer is limited to a single material. The circuitry on the car body gives the shape some definition, creating a more interesting machine-like appearance that would not be possible with 3D printing alone. \subsection{Dynamic 3D Prints with High-Current Circuits} As previously discussed, our traces have only small amounts of parasitic electric resistance, and thus can handle large amounts of electric current (up to 1~Ampere, possibly more). This is enough current to drive electromagnetic actuators, allowing us to create mechanized objects. \subsubsection{Octopus Fan} Our fan example (Fig.~\ref{fig:current}-left) demonstrates a touch-sensitive USB fan, where the user can toggle a DC motor fan on and off by touching specific locations on the print. The shape of the fan is designed to be clipped to the top of a computer monitor. Upon physical contact, a 555 timer IC detects a small current transmitted through the user's body, and toggles the motor control. We use a MOSFET to amplify the output and drive the 5V DC motor with about 1.0~A of current. SurfCuit is convenient for fabricating touch-sensitive objects since the traces are naturally exposed on the surface. We successfully mounted long curved traces on the octopus's tentacle (\href{https://www.thingiverse.com/thing:158069}{thing:158069}). \begin{figure}[htbp!] \centering \includegraphics[width=86mm]{img/current/current.pdf} \caption{Examples of high-current electric circuits. (Left) Octopus USB fan with a touch switch. 
(Right) Cat robot waving its hand and shaking its head.} \label{fig:current} \end{figure} \subsubsection{Cat Robot} In the waving cat example~(Fig.~\ref{fig:current}-right), an ATtiny85, an Arduino-compatible programmable micro-controller, drives two servo motors which wave the arm and shake the head of a cat statue (\href{https://www.thingiverse.com/thing:163032}{thing:163032}). This robot also draws several hundred milliamperes of current at peak load. The use of a programmable micro-controller makes the interaction/behavior design of this object much more flexible. Again, SurfCuit's highly conductive traces are critical in allowing the micro-controller to drive the various outputs. \begin{figure}[b!] \centering \includegraphics[width=86mm]{img/covert/covert.pdf} \caption{A covert ``Squirrel Spy'' which contains a concealed FM transmitter. } \label{fig:covert} \end{figure} \subsection{Concealed Circuit} So far, our examples have placed the circuit components and traces on the exterior surface of objects. These circuits have increased the functionality and interactivity of the 3D prints via sensor-controlled transducers. However, adding on-surface circuits does involve modifying the original surface, which may be undesirable if the surface has specific functional or aesthetic purposes. An example of such a requirement arises in cases where we wish to obscure the functionality of the circuit. Fig.~\ref{fig:covert} shows an FM transmitter that is concealed inside the shape of a squirrel~(\href{http://www.thingiverse.com/thing:11705}{thing:11705}). This ``squirrel spy'' could be used in covert recording or ``nanny-mic'' type applications. To create this object, we split the squirrel geometry along a curved partition surface, and then implemented the circuit on the interior curved cross-sections. Using a curved partition, rather than a planar cut, allows us to design a parting line following concave regions of the surface, which are more easily concealed. 
We could, of course, have created a larger cavity and installed a flat PCB, but with SurfCuit we do not have to worry about the complexities of orienting and mounting the board. In this example, we also show that SurfCuit can handle very high-frequency circuits (about 100~MHz). Operation at such frequencies is very sensitive to the parasitic resistance that would be present with less robust fabrication processes. \subsection{Circuit as a Design Element} While in some cases we might wish to hide the circuit, in others we can actively use the circuit as part of the design aesthetic. Many people find beauty in circuits, as seen in circuit jewelry (e.g., Circuit Breaker Labs~\cite{breaker}) and wearable fashion shows. In fact, many ground-breaking works of industrial design have integrated internal engineering mechanisms into the aesthetics of the design. Notable examples include Swiss watches showing the gear work or ``movement'', the iMac G3's translucent body, and the intentionally-exposed functional and structural elements of the Pompidou museum in Paris. Although we cannot claim our own results as works of art, SurfCuit enables this aesthetic by making it easy for creative users to integrate circuitry elements into the external design of complex 3D shapes. For example, our featureless car above was turned into a police car with more interesting ``steampunk'' styling. To further illustrate this concept, we created a circuit that illuminates EL (electro-luminescent) wires whose placement is designed based on an existing circuit-like facial tattoo (see Fig.~\ref{fig:beauty}). The core of the circuit is an inverter that converts 9V DC to 120V AC using a 555 timer IC and a micro transformer. Although most of the traces do not contribute to the function of the circuit, the texture of the metal traces completely changes the aesthetic of the otherwise smooth and monochrome 3D-printed head (\href{http://www.thingiverse.com/thing:33503}{thing:33503}). 
This example also demonstrates the use of quite high-voltage circuits with SurfCuit, where insulation between traces is critical. \begin{figure}[htbp!] \centering \includegraphics[width=86mm]{img/beauty/beauty.pdf} \caption{ The facial tattoo model uses circuit trace patterns as a design element (left). The model is inspired by an artistic circuit tattoo by \href{http://faeriegem.deviantart.com/art/UV-Circuitboard-Face-Paint-255440051}{Faeriegem} (right). } \vspace{+9mm} \label{fig:beauty} \end{figure}
\section*{Methods} \textbf{Material growth.} Multiple Bi$_2$Se$_3$/EuSe heterostructures were grown using molecular beam epitaxy (MBE); details of the growth, structural, and magnetic characterization of EuSe films grown on different substrates can be found in Ref.~\onlinecite{Wang2022a}. For this study we use EuSe(001)/Bi$_2$Se$_3$ (20nm/16QL) grown on a sapphire substrate; HRTEM images can be found in the Supplementary material. \textbf{PNR \& XRR.} PNR experiments were performed on the Magnetism Reflectometer at the Spallation Neutron Source at Oak Ridge National Laboratory \cite{Lauter2009}, using neutrons with wavelengths $\lambda$ in the range of 0.25--0.85 nm and a high polarization of 98.5--99\% of the neutron beam. The measurements were carried out in a closed-cycle refrigerator (Advanced Research System) equipped with a 1.15 T Bruker electromagnet and/or a 5 T cryomagnet. Using the time-of-flight method, a collimated polychromatic beam of polarized neutrons with a wavelength range $\Delta\lambda$ is incident on the film at a grazing angle $\theta$, interacting with atomic nuclei and the spins of unpaired electrons. The reflected intensities $R^+$ and $R^-$ are measured as a function of the wave vector transfer, $Q = 4\pi\sin(\theta)/\lambda$, with the neutron spin parallel (+) or antiparallel (-), respectively, to the applied field. To separate nuclear scattering from magnetic scattering, the spin asymmetry ratio $SA = (R^+ - R^-)/(R^+ + R^-)$ is calculated, for which $SA = 0$ means the absence of a magnetic moment in the system. Being electrically neutral, spin-polarized neutrons penetrate the entire multilayer structure and probe the magnetic and structural composition of the film and hidden interfaces down to the substrate. PNR is a deeply penetrating, depth-sensitive technique that examines the chemical and magnetic depth profiles of materials with a resolution of 0.5 nm. 
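The two derived quantities defined above, $Q = 4\pi\sin(\theta)/\lambda$ and $SA = (R^+ - R^-)/(R^+ + R^-)$, can be expressed directly in code. This is an illustrative sketch of the formulas only (our function names, not part of any analysis software used in the study):

```python
import math

def wavevector_transfer(theta_deg, wavelength_nm):
    """Q = 4*pi*sin(theta)/lambda for grazing angle theta in degrees,
    with lambda in nm (Q then comes out in nm^-1)."""
    return 4.0 * math.pi * math.sin(math.radians(theta_deg)) / wavelength_nm

def spin_asymmetry(r_plus, r_minus):
    """SA = (R+ - R-)/(R+ + R-); SA = 0 indicates no net magnetic
    moment in the system."""
    return (r_plus - r_minus) / (r_plus + r_minus)
```
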
The depth profiles of the nuclear and magnetic scattering length densities (NSLD and MSLD) correspond to the depth profiles of the chemical composition and in-plane magnetization vector distribution, respectively. XRR measurements were performed at the Center for Nanophase Materials Sciences (CNMS), Oak Ridge National Laboratory, on a PANalytical X'Pert Pro MRD equipped with a hybrid monochromator and a Xe proportional counter. The X-ray beam was generated at 45 kV/40 mA, and the beam wavelength after the hybrid mirror was $\lambda=1.5406$~\AA\ (Cu K$\alpha_1$ radiation). \textbf{Transport measurements.} For transport studies, devices in Hall bar geometry were fabricated using conventional e-beam lithography and ion milling. Ohmic contacts on mesoscopic samples were formed by ion milling the top EuSe layer and depositing Ti/Au or Nb/NbN films. Transport measurements were performed in a variable-temperature $^3$He refrigerator with an 8 T magnet using standard low-frequency lock-in techniques. \section*{Acknowledgments} The authors acknowledge support by the NSF DMR-2005092 (Y.W., L.P.R.) and NSF DMR-1905277 (X.L., B.A.A.) grants. M.Z. and T.O. acknowledge use of the facilities for High Resolution Electron Microscopy at the University of Notre Dame. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center. This research used resources at the Spallation Neutron Source, a Department of Energy Office of Science User Facility operated by Oak Ridge National Laboratory. XRR measurements were conducted at the Center for Nanophase Materials Sciences (CNMS), which is a DOE Office of Science User Facility. The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{Author contributions} L.P.R., J.K.F. and X.L. conceived the experiments, Y.W.
performed magnetization and transport measurements, V.L. performed PNR measurements and analysed PNR and XRR data, H.A. took part in PNR experiments, J.K. carried out XRR measurements, J.W. performed high-resolution XRD measurements, S.T.K. and P.U. performed micromagnetic simulations, M.Z. and T.A.O. performed HRTEM. Y.W., V.L., O.M., S.T.K., B.A.A., X.L. and L.P.R. wrote the manuscript.
\section{Introduction} The region-based object detectors~\cite{rcnn,fastrcnn,fasterrcnn,fpn,cascade,ecr} popularized by the R-CNN framework~\cite{rcnn} are conceptually intuitive and flexible, and have achieved top accuracies on challenging benchmarks like MS-COCO~\cite{coco}. Region-based detectors first generate a sparse set of object proposals, and then refine the proposal locations and classify them as one of the foreground classes or as background using a detection head. One crucial module in such a proposal-driven pipeline is the RoIPool~\cite{fastrcnn} or RoIAlign~\cite{maskrcnn} operator, which is responsible for extracting RoI (Region of Interest) features aligned with the proposal locations for the detection head. \begin{figure}[t] \centering \subfloat[Filtering out noisy detections.] {\includegraphics[width=0.48\columnwidth]{figs/motivation_1.pdf} } \subfloat[Recognizing indistinctive objects.] {\includegraphics[width=0.48\columnwidth]{figs/motivation_2.pdf} } \scriptsize \caption{Motivation and example results of our Hierarchical Context Embedding (HCE) framework. By incorporating discriminative context information, our framework can effectively filter out noisy false positive background detections, and correctly classify objects (\textit{e.g.}, ``\texttt{skateboard}'') which possess no distinctive appearances.} \label{fig:motivation} \end{figure} In this paper, we revisit the RoI features in region-based detectors from the perspective of context information embedding. Our key motivation relies on the fact that, while each RoI in a very deep CNN may have a very large theoretical receptive field that often spans the entire input image~\cite{fastrcnn}, the effective receptive field~\cite{Luo2016} may occupy only a fraction of it, making the RoI features insufficient for characterizing objects that are highly dependent on context information, such as ``\texttt{bowl}'' and ``\texttt{skateboard}''.
Here, contextual information means any auxiliary information that can assist in suppressing false positive detections in noisy backgrounds, or in recognizing objects that have no distinctive appearances themselves. For example, as shown in~Fig.~\ref{fig:motivation} (a), the semantic features of ``\texttt{motorcycle}'' are strong evidence for filtering out the activations of irrelevant object categories like ``\texttt{spoon}'', ``\texttt{bowl}'', and ``\texttt{sink}''. On the other hand, as shown in~Fig.~\ref{fig:motivation} (b), the scene and even the human pose are useful clues for correctly classifying a proposal as ``\texttt{skateboard}'' rather than ``\texttt{tennis racket}''. Recently, several works have exploited region-level context information to improve the localization ability of two-stage detectors. Chen~\textit{et al.}~\cite{context} demonstrated that rich contextual information from neighboring regions can better refine the proposal locations for two-stage detectors. Kantorov~\textit{et al.}~\cite{kantorov2016contextlocnet} leveraged surrounding context regions to improve weakly supervised object localization. However, to the best of our knowledge, there is currently no enabling framework that is systematically designed for embedding context information to improve the \textit{classification ability} of region-based detectors. In this paper, we present a novel Hierarchical Context Embedding (HCE) framework for region-based object detectors. Our framework consists of three modules. Firstly, we consider that the simplest way to break the contextual limit in object detection is to partially cast object-level feature learning as an image-level multi-label classification task. Building upon this realization, we design an image-level categorical embedding module, which in essence is a multi-label classifier upon the detection backbone, in parallel with the existing region-based detection head.
It enables the backbone to exploit the whole-image context to learn discriminative features for context-dependent object categories. Even as a standalone enhancement, our image-level categorical embedding module leads to improvements over existing region-based detectors. Upon the image-level categorical embedding module, at the instance level, we design a simple but effective process to generate hierarchical contextual RoI features that can be directly utilized by the region-wise detection head. Because our contextual RoI features are enhanced by image-level categorical supervision and exploit larger contexts, they are by nature complementary to conventional RoI features, which are trained by region-based detectors and exploit only limited context. Then, early-and-late fusion strategies, \emph{i.e.}, feature fusion and confidence fusion, are designed to make full use of our contextual RoI features. Through quantitative experiments, we demonstrate that they can be combined to further boost the classification accuracy of the detection head. In general, our proposed HCE framework is easy to implement and is end-to-end trainable. We conduct extensive experiments on MS-COCO 2017~\cite{coco} to validate the effectiveness of our HCE framework. Without bells and whistles, our HCE framework delivers consistent accuracy improvements for almost all existing mainstream region-based detectors, including FPN~\cite{fpn}, Mask R-CNN~\cite{maskrcnn} and Cascade R-CNN~\cite{cascade}. We also conduct ablation studies to verify the effectiveness of each module involved in our HCE framework. Fig.~\ref{fig:motivation} shows example results of the baseline method and our method, demonstrating that our framework can effectively filter out noisy background detections and correctly classify indistinctive objects by leveraging the exploited context information.
\section{Related Work} \subsection{Region-based Object Detection} Convolutional neural networks have led to a paradigm shift in object detection over the past decade~\cite{liu2019deep}. Among a large number of approaches, the two-stage R-CNN series~\cite{rcnn,fastrcnn,fasterrcnn,fpn,cascade} has become the leading detection framework. The pioneering work, \emph{i.e.}, R-CNN~\cite{rcnn}, extracts region proposals from an image with selective search~\cite{uijlings2013selective}, and applies a convolutional network to classify each region of interest independently. Fast R-CNN~\cite{fastrcnn} improves R-CNN by sharing convolutional features among RoIs, which enables fast training and inference. Then, Faster R-CNN~\cite{fasterrcnn} advances region proposal generation with a Region Proposal Network (RPN). The RPN shares the feature extraction backbone with the detection head, which in essence is a Fast R-CNN~\cite{fastrcnn}. Faster R-CNN is a well-established two-stage detection framework, and is the foundation for many follow-up works~\cite{dai2016r,fpn}. In recent years, several algorithms have been proposed to further improve the two-stage Faster R-CNN framework. For example, Feature Pyramid Networks (FPN)~\cite{fpn} construct inherent CNN feature pyramids, which largely improve the detection performance for small objects. Mask R-CNN~\cite{maskrcnn} extends Faster R-CNN with a mask branch, boosting the performance of both object detection and instance segmentation. Cascade R-CNN~\cite{cascade} utilizes a multi-stage training strategy to progressively improve the quality of region proposals, and demonstrates significant gains for high-quality (measured by higher IoUs) object detection. Complementary to these works, in this paper we focus on developing a Hierarchical Context Embedding (HCE) framework to improve the \textit{classification ability} of all region-based detectors.
Thanks to the simplicity and generalization ability of our HCE framework, it brings consistent and significant improvements over the aforementioned leading region-based detectors, \textit{e.g.}, FPN, Mask R-CNN and Cascade R-CNN. \subsection{Context Information for Object Detection} In object detection, both global context~\cite{galleguillos2010context} and local context~\cite{rabinovich2007objects} are widely exploited for improving performance, especially when object appearances are insufficient due to small object size, occlusion, or poor image quality. Our work is inspired by some of these previous works, but our key motivation and implementation differ significantly from theirs. Next, we review several topics in object detection that are closely related to our work. \subsubsection{Combined Localization and Classification.} Before the era of deep learning, Harzallah \textit{et al.}~\cite{harzallah2009combining} proposed to combine two closely related tasks, \textit{i.e.}, object localization and image classification. They demonstrated that classification can improve detection through a contextual combination, and vice versa. Similar in spirit, we utilize the full image-level context to learn object-level concepts. But differently, we utilize global context to learn CNN features rather than the hand-crafted features adopted in~\cite{harzallah2009combining}. Furthermore, we integrate hierarchical contextual clues from both whole images and regions of interest into modern region-based CNN detectors, rather than the traditional sliding-window detector used by~\cite{harzallah2009combining}. \subsubsection{Region Proposal Refinement.} Recently, Chen \textit{et al.}~\cite{context} explored rich contextual information to refine region proposals for object detection. Neighboring regions with useful contexts can benefit the localization quality of region proposals, which further leads to better detection performance.
Instead of refining proposals, we focus on improving the \textit{classification ability} of region-based detectors by embedding hierarchical contextual clues. \subsubsection{Weakly-Supervised Object Detection.} In weakly supervised object detection, bounding box annotations are not provided, and only image-level categorical labels are available. The common practice~\cite{cinbis2016weakly,weakly,bilen2015weakly,kantorov2016contextlocnet} in this area is to first generate a set of noisy object proposals, and then learn from these noisy proposals with specially designed robust algorithms. Among them, Kantorov \textit{et al.}~\cite{kantorov2016contextlocnet} proposed a context-aware deep network which leverages surrounding context regions to improve localization. Unlike the usage of region-level context information~\cite{kantorov2016contextlocnet} for weakly supervised detection, we focus on the task of fully-supervised object detection, and particularly exploit global image-level context to advance the recognition of context-dependent object categories. \subsection{Context Information for Other Vision Tasks} Beyond object detection, context information has also been utilized to improve other vision tasks. For example, Wang \textit{et al.}~\cite{rnn_attention} leveraged attention mechanisms and LSTMs to discover semantic-aware regions and capture long-range contextual dependencies for multi-label image recognition. He \textit{et al.}~\cite{adaptive} proposed an adaptive context module to generate multi-scale context representations for semantic segmentation. Qu \textit{et al.}~\cite{deshadownet} embedded multi-context information (the appearance of the input image and semantic understanding) to obtain the shadow matte. Byeon \textit{et al.}~\cite{contextvp} leveraged LSTM units to capture the entire available past context for video prediction.
Li \textit{et al.}~\cite{derain} adopted dilated convolutions to acquire more contextual information for single-image deraining. \section{Approach} \subsection{Framework Overview} We begin by briefly describing our Hierarchical Context Embedding (HCE) framework (see Fig.~\ref{fig:model}) for region-based object detection. Firstly, an image-level categorical embedding module is employed to advance the feature learning of objects that are highly dependent on larger context clues. Then, hierarchical contextual RoI features are generated by fusing both instance-level and global-level information derived from the image-level categorical embedding module. Finally, early-and-late fusion modules are designed to make full use of the contextual RoI features to improve the classification performance. Our HCE framework is flexible and generalizable, as it can be applied as a plug-and-play component for almost all mainstream region-based object detectors. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{figs/model.pdf} \caption{Overview of our Hierarchical Context Embedding (HCE) framework. At the image level, we design an \emph{image-level categorical embedding} module upon the detection backbone, which enables the network to learn object-level concepts from global-level context. At the instance level, we generate \emph{hierarchical contextual RoI features} that are complementary to conventional RoI features, and design the early-and-late fusion strategies (\emph{i.e.}, \emph{feature fusion} and \emph{confidence fusion}) to make full use of the contextual RoI features for improving the classification accuracy of the detection head. } \label{fig:model} \end{figure} \subsection{Image-Level Categorical Embedding} As aforementioned, conventional RoI-based training for region-based detectors may lack context information, which is crucial for learning discriminative filters for context-dependent objects.
To break this limitation, in parallel with the RoI-based branch, we exploit image-level categorical embedding upon the detection backbone, enabling the backbone to learn object-level concepts adaptively from \textit{global-level} context. Our image-level categorical embedding module does not require additional annotations, as the image-level labels can be conveniently obtained by collecting all instance-level categories in an image. Essentially, our image-level categorical embedding module is based on a multi-label classifier. As shown in Fig.~\ref{fig:model} and Fig.~\ref{fig:module} (a), we first apply a $3 \times 3$ convolution layer on the output of ResNet conv$_5$ to obtain the input feature map, and then employ both global max-pooling (GMP) and global average-pooling (GAP) for feature aggregation (as in~\cite{woo2018cbam}). Here, the additional $3 \times 3$ convolution layer aims to alleviate possible side effects on the original detection backbone. We refer to the input feature map to our image-level embedding module as the \emph{context-embedded image feature}. This is because the input feature map conveys whole-image context for learning all object categories that appear in the image, and in turn, each location of the feature map is supervised by all object categories. By contrast, conventional RoI-based training by region-based detectors exploits only limited context for learning each object category. Formally, let $\bm{X} \in \mathbb{R}^{d \times h \times w}$ denote the input feature map, where $d$ is the channel dimensionality, and $h$ and $w$ are the height and width, respectively.
Then, the multi-label classifier is constructed from $C$ binary classifiers for all categories: \begin{equation} \hat{\bm{y}}_{cls} = f_{cls}(f_{gmp}(\bm{X}) + f_{gap}(\bm{X})) \in \mathbb{R}^{C} \,, \end{equation} where $C$ denotes the number of categories, each element of $\hat{\bm{y}}_{cls}$ is a confidence score (logits), and $f_{cls}$ is the binary classifier, modeled as a single fully-connected layer. We assume that the ground truth label of an image is $\bm{y} \in \mathbb{R}^{C}$, where $y^{i} \in \{0, 1\}$ denotes whether an object of category $i$ appears in the image or not. The multi-label loss can be formulated as follows \begin{equation} \mathcal{L}_{mll}= - \sum_{c=1}^{C}\left[y^{c}\log(\sigma(\hat{y}^{c}_{cls})) + (1-y^{c})\log(1-\sigma(\hat{y}^{c}_{cls}))\right] \,, \end{equation} where $\sigma(\cdot)$ is the sigmoid function. \begin{figure}[t] \centering \subfloat[Image-level categorical embedding.] {\includegraphics[width=0.48\columnwidth]{figs/image_level_embedding.pdf} } \subfloat[Hierarchical contextual RoI feature.] {\includegraphics[width=0.48\columnwidth]{figs/roi_generation.pdf} } \caption{The design of our image-level categorical embedding module and hierarchical contextual RoI feature generation module.} \label{fig:module} \end{figure} Because the global feature learning strategy is complementary to RoI-based training, our image-level categorical embedding module alone can boost the performance of existing region-based detectors (demonstrated later by experiments, cf. Table~\ref{table:context}). However, one limitation of image-level categorical embedding is that the derived context-embedded image feature cannot be directly used by the detection head. \subsection{Hierarchical Contextual RoI Feature Generation} To further benefit region-wise classification, we generate hierarchical contextual RoI features by combining the instance-level and global-level information from the context-embedded image features.
The hierarchical contextual RoI feature generation process is shown in Fig.~\ref{fig:module} (b). \subsubsection{Context-Embedded Instance-Level Feature} We apply RoIAlign~\cite{maskrcnn} with proposals generated by the RPN on the context-embedded feature map $\bm{X}$ to obtain RoI features $\bm{x}_{instance}$: \begin{equation} \bm{x}_{instance} = f_{RoIAlign}(\bm{X}; h', w') \in \mathbb{R}^{d \times 7 \times 7} \,, \label{eq:instance} \end{equation} where $f_{RoIAlign}(\cdot)$ is the RoIAlign operation, and $h'$ and $w'$ are the height and width of the RoI, respectively. As $\bm{x}_{instance}$ is extracted from the context-embedded image feature $\bm{X}$, we term it the \emph{context-embedded instance-level feature}. \subsubsection{Context-Aggregated Global-Level Feature} To leverage larger context, we apply RoIAlign over the entire context-embedded image feature $\bm{X}$ to aggregate global-level context. We refer to the derived RoI feature as the \emph{context-aggregated global-level feature} $\bm{x}_{global}$: \begin{equation} \bm{x}_{global} = f_{RoIAlign}(\bm{X}; H, W) \in \mathbb{R}^{d \times 7 \times 7} \,, \label{eq:global} \end{equation} where $H$ and $W$ are the height and width of the input image, respectively. Once the context-embedded instance-level feature $\bm{x}_{instance}$ and the context-aggregated global-level feature $\bm{x}_{global}$ are obtained, we concatenate the two RoI features and apply a $1 \times 1$ convolution layer to obtain our hierarchical contextual RoI feature $\bm{x}_{context}$: \begin{equation} \bm{x}_{context} = f_{conv}([\bm{x}_{instance} : \bm{x}_{global}]) \in \mathbb{R}^{d \times 7 \times 7} \,, \end{equation} where $f_{conv}(\cdot)$ denotes the $1 \times 1$ convolution operation, $[:]$ refers to concatenation, and a ReLU nonlinearity follows the convolution layer.
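As a toy sketch of this combination step (pure Python, single RoI, with the $1 \times 1$ convolution written out as a per-location linear map; all names and shapes are illustrative, not the actual implementation):

```python
def fuse_contextual_roi(x_instance, x_global, weights, bias):
    """Toy hierarchical fusion: concatenate two (d, h, w) RoI features along
    the channel axis, then apply a 1x1 convolution (a per-location linear
    map from 2d input channels to d output channels) followed by ReLU."""
    d, h, w = len(x_instance), len(x_instance[0]), len(x_instance[0][0])
    out = [[[0.0] * w for _ in range(h)] for _ in range(d)]
    for i in range(h):
        for j in range(w):
            # Channel concatenation [x_instance : x_global] at location (i, j).
            pixel = ([x_instance[c][i][j] for c in range(d)] +
                     [x_global[c][i][j] for c in range(d)])
            for o in range(d):
                val = bias[o] + sum(weights[o][c] * pixel[c]
                                    for c in range(2 * d))
                out[o][i][j] = max(0.0, val)  # ReLU
    return out
```

In the actual detector the same operation is a learned $d$-channel $1 \times 1$ convolution over $7 \times 7$ RoI maps; the point of the sketch is that each output location mixes the instance-level and global-level channels of that location only.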
As the resulting hierarchical contextual RoI feature $\bm{x}_{context}$ absorbs rich context information from the context-embedded image feature $\bm{X}$, it is by nature complementary to the conventional RoI feature extracted from the feature pyramid network (FPN)~\cite{fpn}. \subsection{Early-and-Late Fusion and Inference} To make full use of our contextual RoI feature $\bm{x}_{context}$, we design early-and-late fusion strategies, \emph{i.e.}, feature fusion and confidence fusion, which have been proven effective in many applications~\cite{gunes2005affect,ebersbach2017fusion}. We show that early-and-late fusion is also well suited to improving region-wise detectors, as it can fully absorb hierarchically embedded information from different levels. \subsubsection{Feature Fusion} To incorporate our contextual RoI features $\bm{x}_{context}$ into the region-based detection pipeline, the simplest way is to fuse them with the original RoI features extracted from the feature pyramid network (FPN)~\cite{fpn} by element-wise addition. Formally, let $\bm{x}_{FPN}$ denote the original RoI feature extracted from the FPN, and $\bm{x}_{fusion}$ denote the fused RoI feature; then we have: \begin{equation} \bm{x}_{fusion} = \bm{x}_{context} + \bm{x}_{FPN} \in \mathbb{R}^{d \times 7 \times 7}. \end{equation} As shown in Fig.~\ref{fig:model}, the fused feature map $\bm{x}_{fusion}$ is then fed into the 2$fc$ detection head to produce refined bounding boxes and classification scores. \subsubsection{Confidence Fusion} We also consider a simple confidence fusion strategy which is complementary to feature fusion. We apply the 2$fc$ head on our hierarchical contextual RoI feature $\bm{x}_{context}$ to produce classification confidences (logits), and then fuse them with those from the corresponding FPN RoI feature $\bm{x}_{FPN}$ by addition.
Formally, let $\hat{\bm{y}}_{fusion}$ denote the fused confidence: \begin{equation} \hat{\bm{y}}_{fusion} = f_{2fc}(\bm{x}_{context}) + f_{2fc}(\bm{x}_{FPN}) \in \mathbb{R}^{C}. \end{equation} The fused confidence is transformed by a softmax layer to produce a new classification score. For each proposal, the classification score $\hat{\bm{y}}_{fusion}$, paired with the refined bounding box predicted from the FPN RoI feature, forms another prediction in parallel with the prediction from the feature fusion branch. It is worth mentioning that the weights of the 2$fc$ head applied to the different RoI features are shared. \subsubsection{Inference} Our early-and-late fusion strategy produces two different predictions for a single object proposal. To obtain the final result, as shown by the pipeline in Fig.~\ref{fig:model}, we first collect all the boxes and confidences from the two prediction branches (\emph{i.e.}, feature fusion and confidence fusion), and then perform NMS over all these boxes. Furthermore, as demonstrated later in experiments, while our two fusion strategies are complementary during training, using only one prediction branch during inference does not cause an obvious performance drop but reduces computational cost. However, using only one fusion strategy for training is inferior to using both strategies together. \subsubsection{Loss Function} The whole network is trained end-to-end, and the overall loss is computed as follows: \begin{equation} \mathcal{L} = \mathcal{L}_{feat} + \mathcal{L}_{conf} + \mathcal{L}_{mll} + \mathcal{L}_{rpn} \,, \end{equation} where $\mathcal{L}_{feat}$ and $\mathcal{L}_{conf}$ are the losses for the feature fusion and confidence fusion branches, respectively. All loss terms are weighted equally, without extra hyper-parameters to characterize the trade-off between them, which shows that HCE generalizes well without delicate tuning.
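The confidence fusion step above amounts to adding the per-class logits from the two branches and renormalizing with a softmax; a minimal sketch (function names are illustrative):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence_fusion(logits_context, logits_fpn):
    """Late fusion: add the per-class logits predicted from the contextual
    RoI feature and from the FPN RoI feature, then apply softmax."""
    fused = [a + b for a, b in zip(logits_context, logits_fpn)]
    return softmax(fused)
```

Because the logits are summed before the softmax, a class obtains a high fused score only when it is supported by both the contextual and the conventional RoI feature.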
\section{Experiments} We conduct extensive experiments on the MS-COCO 2017 dataset~\cite{coco} to demonstrate the effectiveness and generalization ability of our hierarchical context embedding framework. MS-COCO 2017 is the most popular benchmark for general object detection, containing 80 object categories, 118K images for training, 5K images for validation (\texttt{val}) and 20K for testing (\texttt{test-dev}). We report the standard COCO-style Average Precision (AP) with IoU thresholds from 0.5 to 0.95 at an interval of 0.05 as the metric. All models are trained on the COCO training set and evaluated on the \texttt{val} set. For fair comparisons with the state-of-the-art, we also report results on the \texttt{test-dev} set. \subsection{Implementation Details} We implement our method and re-implement all baseline methods based on the MMDetection codebase~\cite{mmdetection}. The re-implementations of the baselines strictly follow the default settings of MMDetection. Images are resized such that the short edge has 800 pixels while the long edge has at most 1333 pixels. We use no data augmentation except horizontal flipping for training. ResNet, pre-trained on ImageNet~\cite{imagenet}, is used as the backbone. Models are trained with a batch size of 16 on 8 GPUs. We train all models with the SGD optimizer for 12 epochs in total, with an initial learning rate of 0.02 decreased by a factor of 0.1 at the 8th and 11th epochs. Weight decay and momentum are set to 0.0001 and 0.9, respectively. We also adopt a linear warm-up strategy at the beginning of training. \subsection{Comparisons with Baselines} \begin{table}[t] \centering \setlength{\tabcolsep}{1.3mm} \caption{Comparison with baselines (FPN~\cite{fpn}, Mask R-CNN~\cite{maskrcnn} and Cascade R-CNN~\cite{cascade}) on MS-COCO 2017 \texttt{val}. ``HCE'' denotes that the models are trained and inferred with both feature fusion and confidence fusion.
Clearly, our HCE framework achieves consistent accuracy gains over all baseline detectors on all evaluation metrics. } \begin{tabular}{c|c|c|ccc|ccc} \hline Backbone & Method & HCE & AP & AP$^{50}$ & AP$^{75}$ & AP$^{S}$ & AP$^{M}$ & AP$^{L}$ \\ \hline \multirow{6}{*}{ResNet-50-FPN} & \multirow{2}{*}{FPN} & & 36.3 & 58.3 & 39.1 & 21.6 & 40.2 & 46.9 \\ & & \checkmark & \textbf{38.4} & \textbf{61.0} & \textbf{41.8} & \textbf{22.9} & \textbf{42.5} & \textbf{49.1} \\ \cline{2-9} & \multirow{2}{*}{Mask R-CNN} & & 37.3 & 59.1 & 40.3 & 22.2 & 41.1 & 48.3 \\ & & \checkmark & \textbf{38.8} & \textbf{61.3} & \textbf{42.1} & \textbf{23.2} & \textbf{42.8} & \textbf{49.7} \\ \cline{2-9} & \multirow{2}{*}{Cascade R-CNN} & & 40.5 & 58.7 & 44.1 & 22.3 & 43.6 & 53.8 \\ & & \checkmark & \textbf{41.7} & \textbf{60.5} & \textbf{45.0} & \textbf{23.4} & \textbf{44.9} & \textbf{55.2} \\ \hline \multirow{6}{*}{ResNet-101-FPN} & \multirow{2}{*}{FPN} & & 38.3 & 60.1 & 41.7 & 22.8 & 42.8 & 49.8 \\ & & \checkmark & \textbf{40.0} & \textbf{62.3} & \textbf{43.4} & \textbf{24.0} & \textbf{44.1} & \textbf{51.9} \\ \cline{2-9} & \multirow{2}{*}{Mask R-CNN} & & 39.4 & 60.9 & 43.0 & 23.3 & 43.7 & 51.5 \\ & & \checkmark & \textbf{40.5} & \textbf{62.6} & \textbf{44.0} & \textbf{24.4} & \textbf{44.5} & \textbf{53.4} \\ \cline{2-9} & \multirow{2}{*}{Cascade R-CNN} & & 41.9 & 60.1 & 45.7 & 23.2 & 45.9 & 56.2 \\ & & \checkmark & \textbf{43.0} & \textbf{61.6} & \textbf{46.9} & \textbf{24.6} & \textbf{46.6} & \textbf{57.4} \\ \hline \end{tabular} \label{table:baseline} \end{table} To demonstrate the generality of our HCE framework, we consider three well-known region-based object detectors as our baseline systems, including Feature Pyramid Network (FPN)~\cite{fpn}, Mask R-CNN~\cite{maskrcnn} and Cascade R-CNN~\cite{cascade}. All detectors are instantiated with two different backbones, \emph{i.e.}, ResNet-50 and ResNet-101 with FPN.
Integrating our framework with Mask R-CNN and Cascade R-CNN is as straightforward as with FPN. For example, we apply our framework within each training stage of Cascade R-CNN. Comparison results on MS-COCO 2017 \texttt{val} are shown in Table~\ref{table:baseline}. Our HCE framework achieves consistent accuracy gains over all baseline detectors on all evaluation metrics. Specifically, without bells and whistles, our method improves AP by 2.1\% and 1.7\% for FPN with the ResNet-50 and ResNet-101 backbones, respectively. For the more advanced Mask R-CNN and Cascade R-CNN, our method also brings more than 1\% AP improvement on both the ResNet-50 and ResNet-101 backbones, \textit{e.g.,} improving the AP of Mask R-CNN with ResNet-50-FPN from 37.3\% to 38.8\%. Additionally, it can be observed that our improvements for the Mask R-CNN and Cascade R-CNN baselines are not as significant as those for FPN. We conjecture that this is because Mask R-CNN and Cascade R-CNN themselves integrate mechanisms for better feature learning, which might overlap with the performance gains of our method. Specifically, Mask R-CNN benefits from extra accurate instance-level mask supervision, while Cascade R-CNN enjoys IoU-specific multi-stage training to progressively refine object proposals and learn discriminative features for IoU-specific proposals. However, even in these cases, our method still obtains at least $+1\%$ AP improvement over these competing baselines. \subsection{Error Analyses} \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{figs/error.pdf} \caption{Error analyses: these illustrations show the percentage of different error types in the top N detections (N = \# objects in that category).} \label{fig:error} \end{figure} In the following, we perform error analyses to further understand in what aspects our HCE framework improves region-based object detectors. Following the settings of~\cite{yolo}, we choose the top N predictions for each category at inference time.
Each prediction is classified based on the type of error: \begin{itemize} \item Correct: correct class and IoU $>$ 0.5 \item Location Error: correct class and 0.1 $<$ IoU $<$ 0.5 \item Background Error: IoU $<$ 0.1 for any object \item Classification Error: class is wrong and IoU $>$ 0.5 \item Other: class is wrong and 0.1 $<$ IoU $<$ 0.5 \end{itemize} We compare the different error types between the FPN baseline and our method with ResNet-50 as the backbone on MS-COCO 2017 \texttt{val}. Fig.~\ref{fig:error} shows the results for each error type averaged across all 80 categories, as well as for ``\texttt{hot dog}'', ``\texttt{snowboard}'' and ``\texttt{baseball glove}'', which are highly dependent on context information. Clearly, our method can effectively improve the classification ability of region-based detectors and reduce background errors to a large extent, without compromising the localization performance or increasing other types of errors. Our improvements are particularly noticeable for context-dependent object categories. For example, the (normalized) correctly recognized instances of ``\texttt{hot dog}'' increase from 44.8\% to 51.2\%, while the background false positive detections decrease from 17.6\% to 14.4\%. These observations validate that our HCE framework can indeed improve the classification ability. \subsection{Ablation Studies} In this section, we conduct three series of ablation experiments to analyze the proposed method, using ResNet-50 as the backbone on MS-COCO 2017 \texttt{val}. \subsubsection{Context Embedding Operations} We first investigate the impacts of different context embedding operations in our HCE framework. Specifically, three context embedding operations are involved in our framework. Firstly, the image-level categorical embedding module employs multi-label learning (denoted as ``MLL'') to embed global-level context to advance the learning of context-dependent categories.
Then, to further improve region-based classification, both the context-embedded instance-level feature (denoted as ``Instance'') and the context-aggregated global-level feature (denoted as ``Global'') are combined to generate the hierarchical contextual RoI feature. Table~\ref{table:context} shows the performance improvements obtained by progressively integrating more context embedding operations. Solely applying MLL on the detection backbone gives a $0.5\%$ AP improvement, verifying that image-level categorical embedding advances feature learning for context-dependent object categories. The context-embedded instance-level feature, which can be directly utilized by the detection head, brings another $1.0\%$ AP improvement. Finally, global-level context embedding for the contextual RoI feature adds a further $0.6\%$ AP. These results suggest that the context embedding operations in our framework are complementary to each other. \begin{table}[t] \centering \setlength{\tabcolsep}{2.8pt} \caption{Impacts of different context embedding operations on MS-COCO 2017 \texttt{val}. ``MLL'' means we leverage the image-level categorical embedding module to advance the learning of context-dependent categories. ``Instance'' and ``Global'' denote that we utilize instance-level (cf. Eq~(\ref{eq:instance})) or global-level (cf.
Eq~(\ref{eq:global})) contextual features to further improve the region-wise detection head.} \begin{tabular}{c|ccc|ccc|ccc} \hline Method & MLL & Instance & Global & AP & AP$^{50}$ & AP$^{75}$ & AP$^{S}$ & AP$^{M}$ & AP$^{L}$ \\ \hline \multirow{4}{*}{FPN} & & & & 36.3 & 58.3 & 39.1 & 21.6 & 40.2 & 46.9 \\ & \checkmark & & & 36.8 & 58.9 & 39.7 & 21.9 & 40.5 & 47.2 \\ & \checkmark & \checkmark & & 37.8 & 59.9 & 40.9 & 22.2 & 41.4 & 48.9 \\ & \checkmark & \checkmark & \checkmark & \textbf{38.4} & \textbf{61.0} & \textbf{41.8} & \textbf{22.9} & \textbf{42.5} & \textbf{49.1} \\ \hline \end{tabular} \label{table:context} \end{table} \begin{table}[t] \centering \setlength{\tabcolsep}{5pt} \caption{Effects of different fusion strategies during \textit{training}, evaluated by detection performance on MS-COCO 2017 \texttt{val}. The models share the same backbone network ResNet50-FPN. ``FF Train'' means that we apply feature fusion (FF) for training, while ``CF Train'' means that confidence fusion (CF) is applied for training.} \begin{tabular}{c|cc|ccc|ccc} \hline Method & FF Train & CF Train & AP & AP$^{50}$ & AP$^{75}$ & AP$^{S}$ & AP$^{M}$ & AP$^{L}$ \\ \hline \multirow{4}{*}{FPN} & & & 36.8 & 58.9 & 39.7 & 21.9 & 40.5 & 47.2 \\ & \checkmark & & 37.6 & 60.3 & 40.7 & 22.5 & 41.4 & 48.2\\ & & \checkmark & 37.4 & 60.2 & 40.1 & 23.0 & 41.1 & 47.6\\ & \checkmark & \checkmark & \textbf{38.4} & \textbf{61.0} & \textbf{41.8} & \textbf{22.9} & \textbf{42.5} & \textbf{49.1} \\ \hline \end{tabular} \label{table:head} \end{table} \begin{table}[t] \centering \setlength{\tabcolsep}{4pt} \caption{Effects of different fusion strategies in testing, evaluated by inference time and detection performance on MS-COCO 2017 \texttt{val}. Note that all models are trained with both fusion strategies. ``FF Test'' denotes that we evaluate the feature fusion (FF) strategy during inference, while ``CF Test'' means that the results are evaluated by the confidence fusion (CF) strategy.
Inference speed is evaluated on a single 1080ti GPU.} \begin{tabular}{c|cc|c|ccc|ccc} \hline Method & FF Test & CF Test & Speed & AP & AP$^{50}$ & AP$^{75}$ & AP$^{S}$ & AP$^{M}$ & AP$^{L}$ \\ \hline \multirow{4}{*}{FPN} & & & 0.087s & 36.3 & 58.3 & 39.1 & 21.6 & 40.2 & 46.9 \\ & \checkmark & & 0.090s & 38.2 & 60.8 & 41.5 & 22.6 & 42.2 & 49.0 \\ & & \checkmark & 0.094s & 38.3 & 60.8 & 41.6 & 22.8 & 42.3 & 49.0 \\ & \checkmark & \checkmark & 0.100s & \textbf{38.4} & \textbf{61.0} & \textbf{41.8} & \textbf{22.9} & \textbf{42.5} & \textbf{49.1} \\ \hline \end{tabular} \label{table:test} \end{table} \subsubsection{Fusion Strategies in Training} We consider the two proposed fusion strategies, feature fusion and confidence fusion, to be complementary. To verify this, we evaluate the performance when training the model with feature fusion and confidence fusion individually, as well as with both of them. Table~\ref{table:head} shows the results of the different fusion strategies. Specifically, ``FF Train'' means that we apply feature fusion (FF) for training, while ``CF Train'' means that confidence fusion (CF) is applied for training. Utilizing feature fusion and confidence fusion individually for training outperforms the baseline (FPN with MLL) by $0.8\%$ and $0.6\%$ AP, respectively. Training with both fusion strategies achieves the best result, and is clearly better than using each fusion strategy separately. \subsubsection{Fusion Strategies in Testing} We also evaluate each fusion strategy independently during inference, with all HCE models trained with both fusion strategies. Table~\ref{table:test} shows the results of each fusion strategy and of the combined fusion strategies. ``FF Test'' denotes that we evaluate the feature fusion (FF) strategy during inference, while ``CF Test'' means that the results are evaluated by the confidence fusion (CF) strategy.
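To make the confidence fusion branch concrete, one simple way to picture it is as an average of the class posteriors from the plain and the context-embedded classification heads. This is a minimal sketch under our own assumptions — the paper's actual fusion operation is defined by its referenced equations and may differ, and the mixing weight \texttt{w} below is hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_fusion(logits_plain, logits_context, w=0.5):
    """Blend the class posteriors of the two heads (w is hypothetical)."""
    return w * softmax(logits_plain) + (1.0 - w) * softmax(logits_context)

# 4 proposals, 81 classes (80 COCO categories + background)
fused = confidence_fusion(np.random.randn(4, 81), np.random.randn(4, 81))
```

Blending posteriors (rather than raw logits) keeps the fused scores a valid probability distribution per proposal, which is convenient for downstream score thresholding and NMS.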
We can see that once the model is trained with both fusion strategies, using only one fusion branch for inference causes no obvious accuracy drop while reducing computation. For example, using the feature fusion branch for inference adds only a minimal time cost (0.003s) over the baseline, but increases the AP from 36.3\% to 38.2\%. These results further confirm the complementarity of the two proposed fusion strategies. \begin{table}[t] \centering \setlength{\tabcolsep}{1pt} \caption{Comparisons with the state-of-the-art single-model detectors on MS-COCO 2017 \texttt{test-dev}. ``*'' denotes using tricks (with bells and whistles) during inference.} \begin{tabular}{c|c|cccccc} \hline Method & Backbone & AP & AP$^{50}$ & AP$^{75}$ & AP$^{S}$ & AP$^{M}$ & AP$^{L}$ \\ \hline YOLOv3~\cite{yolov3} & Darknet-53 & 33.0 & 57.9 & 34.4 & 18.3 & 35.4 & 41.9 \\ SSD513~\cite{ssd} & Res101 & 31.2 & 50.4 & 33.3 & 10.2 & 34.5 & 49.8 \\ RetinaNet~\cite{focalloss} & Res101-FPN & 39.1 & 59.1 & 42.3 & 21.8 & 42.7 & 50.2 \\ FCOS~\cite{fcos} & Res101-FPN & 41.5 & 60.7 & 45.0 & 24.4 & 44.8 & 51.6 \\ \hline FPN~\cite{fpn} & Res101-FPN & 36.2 & 59.1 & 39.0 & 18.2 & 39.0 & 48.2 \\ Mask R-CNN~\cite{maskrcnn} & Res101-FPN & 38.2 & 60.3 & 41.7 & 20.1 & 41.1 & 50.2 \\ Cascade R-CNN~\cite{cascade} & Res101-FPN & 42.8 & 62.1 & 46.3 & 23.7 & 45.5 & 55.2 \\ Deformable R-FCN*~\cite{deformable} & Aligned-Inception-ResNet & 37.5 & 58.0 & 40.8 & 19.4 & 40.1 & 52.5 \\ DCNv2*~\cite{dv2} & Res101-DeformableV2 & 46.0 & \textbf{67.9} & \textbf{50.8} & \textbf{27.8} & 49.1 & \textbf{59.5} \\ IoU-Net~\cite{iounet} & Res101-FPN & 40.6 & 59.0 & -- & -- & -- & -- \\ TridentNet~\cite{trident} & Res101 & 42.7 & 63.6 & 46.5 & 23.9 & 46.6 & 56.6 \\ Cascade +Rank-NMS~\cite{ranking} & Res101-FPN & 43.2 & 61.8 & 47.0 & 24.6 & 46.2 & 55.4 \\ \hline HCE FPN & Res101-FPN & 41.0 & 63.5 & 44.7 & 23.4 & 44.2 & 52.2 \\ HCE Mask R-CNN & Res101-FPN & 41.6 & 63.9 & 45.4 & 23.7 & 44.7 & 53.1 \\ HCE Cascade R-CNN & Res101-FPN &
44.1 & 63.2 & 47.9 & 25.2 & 46.9 & 57.0 \\ HCE Cascade R-CNN* & Res101-FPN & \textbf{46.5} & 65.6 & 50.6 & 27.4 & \textbf{49.9} & 59.4 \\ \hline \end{tabular} \label{table:soa} \end{table} \subsection{Comparisons with State-of-the-art} We compare our proposed method with the state-of-the-art on MS-COCO 2017 \texttt{test-dev}. For fair comparison, we report the performance of all methods with single-model inference. Specifically, we apply our method to FPN, Mask R-CNN and Cascade R-CNN in the 2$\times$ training scheme without bells and whistles. Table~\ref{table:soa} shows all comparison results. Our hierarchical context embedding framework, when integrated with the FPN, Mask R-CNN and Cascade R-CNN object detectors, consistently outperforms state-of-the-art object detectors using the same backbone network. For fair comparison with Deformable R-FCN* and DCNv2*, which adopt a multi-scale 3$\times$ training scheme and multi-scale testing, we follow the same experimental setting to train our HCE Cascade R-CNN*. It gives an AP of 46.5\%, which surpasses both Deformable R-FCN* and DCNv2*. These results demonstrate the superior performance of the proposed context embedding framework. \section{Conclusions} In this paper, we investigated how the lack of context information limits conventional region-based detectors, and proposed a novel and effective Hierarchical Context Embedding (HCE) framework to strengthen the classification ability of current region-based detectors. Comprehensive experiments demonstrated consistent accuracy gains over mainstream region-based detectors, including FPN, Mask R-CNN and Cascade R-CNN. In the future, we will concentrate on extending the scope of our HCE framework and adapting it to the one-stage detection paradigm.\\ \noindent \textbf{Acknowledgements}: Z.-M. Chen's contribution was made when he was an intern at Megvii Research Nanjing.
This research was supported by the National Key Research and Development Program of China under Grant 2017YFA0700800, the National Natural Science Foundation of China under Grant 61772257, and the Fundamental Research Funds for the Central Universities under Grant 020914380080. \clearpage \bibliographystyle{splncs04}
\section{Introduction} Chiral magnets have attracted a great deal of attention for a long time \cite{dzyaloshinskii1964,bak1980theory,muhlbauer2009skyrmion}. The absence of inversion symmetry in the atomic lattice gives rise to twists of magnetization $\mathbf M(\mathbf r)$ in magnetically ordered states, which range from simple helices to intricate periodic lattices of skyrmions and magnetic hedgehogs. The microscopic mechanism responsible for the twisting of magnetization is the spin-orbit coupling manifesting itself in magnetic insulators as the Dzyaloshinskii-Moriya (DM) interaction of the form $\mathbf M \cdot (\nabla \times \mathbf M)$ in the continuum approximation \cite{dzyaloshinskii1964}. On the atomistic level, the DM interaction is represented by the pairwise spin interaction $\mathbf D_{ij} \cdot (\mathbf S_i \times \mathbf S_j)$, where $\mathbf D_{ij}$ is a vector specific to the bond connecting spins $\mathbf S_i$ and $\mathbf S_j$ \cite{moriya1960}. Determination of spin interactions in chiral magnets is very important for the understanding of their magnetic states. We present an experimental study of the chiral magnet $\text{Cu}_2\text{OSeO}_3$ by means of inelastic neutron scattering. This compound has a cubic lattice symmetry without an inversion center (space group $P2_13$) \cite{belesi2011magnetic} and exhibits paramagnetic, helical, conical, and skyrmion-crystal phases as a function of temperature and applied magnetic field \cite{adams2012long,white2012electric,seki2012formation,reim2017impact,makino2017thermal,bannenberg2017reorientations,white2018electric,qian2018new,chacon2018observation}. The structural unit cell has 16 magnetic Cu$^{2+}$ spin-1/2 ions which makes a microscopic description at the level of individual spins rather complex and impractical. 
Romhanyi \emph{et al.} \cite{romhanyi2014entangled,ozerov2014establishing,portnichenko2016magnon,tucker2016spin} introduced a microscopic model with Heisenberg exchange interactions of five different strengths: $J_s^{\text{AF}}, J_s^{\text{FM}}, J_w^{\text{AF}}, J_w^{\text{FM}}, J^{\text{AF}}_{\text{o.o}}$ (FM and AF denote ferromagnetic and antiferromagnetic interactions, respectively), shown in Fig.~\ref{Fig1}(a). As will be shown below, this model nonetheless misses significant features of the low-energy magnon spectrum. While these problems might be remedied by the addition of DM interactions, a further increase in complexity would be undesirable. Fortunately, magnetic interactions in $\text{Cu}_2\text{OSeO}_3$ exhibit a hierarchy of energy scales \cite{romhanyi2014entangled,janson2014quantum,grigoriev2019spin}, which allows for efficient modeling at a coarse-grained level, wherein quartets of strongly interacting spins are treated as effective spins with weaker interactions between them. Hints of this hierarchy can be seen in the inelastic neutron spectrum shown in Fig.~\ref{Fig1}(b). It reveals four strongly dispersing magnon bands at low energies (0-12 meV) separated by a large gap from high-energy magnon bands with a relatively weak dispersion (25-33 meV). The low-energy branches are spin waves in which the spins within each strongly coupled tetrahedron precess in phase with each other and can be described by a single effective spin within a coarse-grained model [Fig.~\ref{Fig1}(c,d)], while the high-energy magnons are associated with the intra-cluster interactions. To bring out the interactions that are relevant for the complex phase diagram and ordered structures, we focus on the low-energy inter-cluster magnons in our study. The coarse-grained picture we adopt enables us to identify and refine the magnitude of the anisotropic interaction terms relevant to the helical and skyrmionic spin textures in $\text{Cu}_2\text{OSeO}_3$.
We show that these terms can be gleaned from specific features in high-resolution neutron scattering spectra at energies well beyond the collective energy scales of the mesoscopic phases. We also show how to define the relevant low-energy degrees of freedom for a complex magnetic material with a hierarchy of energy scales and provide a simple expression for the corresponding inelastic scattering cross section in terms of a cluster form factor. \par The paper is organized as follows: In Sec.~\ref{Exp} we present our detailed inelastic magnetic neutron scattering data for $\text{Cu}_2\text{OSeO}_3$ with a focus on the new features that they reveal in the low-energy regime. These features are then related to DM interactions and the associated incommensurate ground state through the simplified coarse-grained model introduced in Sec.~\ref{theo}. In Sec.~\ref{Num} we numerically calculate the structure factors after deriving the effective form factor (details in Appendix~\ref{Formderive}), and determine the set of interaction parameters by a pixel-to-pixel fit to the data. The resulting best-fit parameters are listed in Table~\ref{Pa}, bolstered by a detailed discussion of the reliability of the fit and the corresponding error bars in Appendix~\ref{Reliability}. The power of the effective model and its limitations are identified and discussed in Sec.~\ref{Dis} before concluding in Sec.~\ref{Concl}.\par Throughout this paper, we use the same lattice structure conventions as Janson et al.~\cite{janson2014quantum}, where the coordinates of the 16 Cu ions within the unit cell of a right-handed enantiomer are listed. These are reproduced in Table~\ref{Posit} of Appendix~\ref{LWST}. \begin{figure*}[htbp!] \includegraphics[width=0.8\textwidth ]{figure1_new.png} \caption{(a): Structure of the right-handed enantiomer of cubic $\text{Cu}_2\text{OSeO}_3$ ($a=8.911~\text{\AA}$, space group $P2_13$ \cite{bos2008magnetoelectric,belesi2011magnetic}).
Each unit cell contains 16 $\text{Cu}^{2+}$ ions. The two distinct $\text{Cu}^{2+}$ sites are labeled Cu-1 (white) and Cu-2 (black), respectively. $J_s^{\text{AF}}$ (blue, thick) and $J_s^{\text{FM}}$ (red, thick) are the dominant magnetic interactions. (b) The measured inelastic magnetic neutron scattering cross section acquired with incident neutron energy $E_i=60$ meV at $T=4$~K. The 4D data set is displayed as slices along a trajectory in momentum space connecting the high symmetry points $\Gamma(h,k,l)$; $X(h,k,l+\frac{1}{2})$; $M(h,k+\frac{1}{2},l+\frac{1}{2})$; and $R(h+\frac{1}{2},k+\frac{1}{2},l+\frac{1}{2})$. Here, $h,k$, and $l$ are integers. The integration range in the perpendicular $\mathbf{Q}$ directions is 0.1 $\text{\AA}^{-1}$. (c) Each strong tetrahedron is composed of one Cu-1 and three Cu-2 sites, with AF interactions between Cu-1 and Cu-2 sites, and FM interactions between Cu-2 sites. This results in an effective spin-1 cluster with the Cu-1 spin antiparallel to three parallel Cu-2 spins. (d) The effective spins occupy a distorted FCC lattice with effective ferromagnetic inter-cluster interactions. We define the sites connected by the bonds $J_1^{\text{FM}}$ and $J_2^{\text{FM}}$ to be nn and nnn, respectively. (e) The measured inelastic magnetic neutron scattering cross section acquired with $E_i=20$ meV, focusing on the energy range indicated by the gray box in (b). The panel shows the average intensity along the indicated trajectories in the Brillouin zones centered at (021), (111), (121) and (122), averaging over $\pm 0.05~\mathrm{\AA^{-1}}$ in the perpendicular $\bf Q$-directions. For (111), only data with energy transfers below 10.5 meV are included in the average, since higher energy transfers are not covered well due to kinematic limitations. Four magnon modes are generally observed, corresponding to the four clusters per unit cell. Additional modes can result from down-folding due to the incommensurate helimagnetic ground state and domain averaging.
The intensity band at 2 meV arises from a spurious process unrelated to $\text{Cu}_2\text{OSeO}_3$.} \label{Fig1} \end{figure*} \section{Inelastic neutron scattering}\label{Exp} Single crystals of $\text{Cu}_2\text{OSeO}_3$ were grown by chemical vapor transport. Approximately 50 crystals were co-aligned on an aluminum holder for a total sample mass $m\approx5.1$ g and a full width at half maximum (FWHM) mosaic of $\approx 0.5^{\circ}$. No provision was made to check individual crystal chirality or orientation apart from aligning the four-fold axes, so the overall mosaic has approximate cubic symmetry. Time-of-flight inelastic neutron scattering data were acquired on the SEQUOIA instrument at the Spallation Neutron Source. Incoming neutron energies of $E_i=60$ meV and 20 meV were used with the high-flux chopper operating at 240 Hz and the high-resolution chopper operating at 180 Hz, respectively. The corresponding FWHM elastic energy resolutions were 3 meV and 0.5 meV, respectively. The data were acquired at $T=4$~K, which is far below the critical temperature $T_c=58$~K. The sample was cooled using a closed-cycle refrigerator, and rotated through $180^\circ$ in $0.5^\circ$ steps about the $(h\bar{h}0)$ axis. The same spectrometer settings were used to measure vanadium incoherent scattering for absolute normalization of the differential scattering cross section. The total beam time accumulated was 0.0655 Ah for $E_i=60$ meV and 0.0673 Ah for $E_i=20$~meV. The data were analyzed in Mantid \cite{arnold2014mantid}, where background contributions were masked, and subsequently symmetrized in the m$\bar{3}$m Laue class using Horace \cite{ewings2016horace}.\par \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{figure3_anew_0203.png} \caption{(a-j) Inelastic magnetic neutron scattering spectra for $\rm Cu_2OSeO_3$ acquired at $T=4$~K at high symmetry points in the Brillouin zone.
Red symbols show neutron intensity data averaged over $(0.084~\text{\AA}^{-1})^3\times(0.2~\text{meV})$ in the 4D $\mathbf{Q}-\hbar\omega$ space. The blue line shows the result of a highly constrained calculation of the scattering cross section associated with spin waves described by the effective spin-1 model with the optimized exchange parameters listed in Table~\ref{Pa}. The FWHM of the peaks (blue) was determined from the instrument energy resolution and a phenomenological relaxation rate $\tilde{\Gamma}=0.19$~meV characterizing on average the extra physical broadening throughout the Brillouin zone (see Sec.~\ref{Num} and Appendix~\ref{reso}). Note the excess broadening of the lower mode at the $X$ point (h-j), which we ascribe to two-magnon decay processes that are kinematically accessible here and effectively destroy the $X$-point single magnon (Fig.~\ref{Fig2}). As discussed in Sec.~\ref{theo}, we expect two two-fold degenerate modes at $R$. In the measured cross section at high momentum, a third mode at 6.9 meV can also be observed. The intensity of this mode, averaged over $(0.084~\text{\AA}^{-1})^3$ and integrated over [6,7.8] meV, is plotted versus $|\mathbf{Q}|^2$ in (k). The linear fit indicates this mode is a phonon. The 8.4 meV modes marked in (a,b) were discussed in Ref.~\cite{laurita2017low}. Error bars in all figures represent one standard deviation.} \label{Fig3} \end{figure*} The $E_i=60$ meV inelastic neutron scattering cross section in Fig.~\ref{Fig1}(b) shows a large ($\approx 13$ meV) energy gap separating the four lowest branches from higher-energy modes. The $E_i=20$ meV data are displayed as a false-color image in Fig.~\ref{Fig1}(e) and as energy cuts at representative high symmetry points $R(\frac{1}{2}, \frac{5}{2},\frac{1}{2})$, $X(1,2,\frac{1}{2})$, $M(\frac{1}{2},2,\frac{1}{2})$, and $\Gamma(1,2,2)$ in Fig.~\ref{Fig3}.
The high symmetry points are defined as $\Gamma(h,k,l)$; $X(h,k,l+\frac{1}{2})$; $M(h,k+\frac{1}{2},l+\frac{1}{2})$; and $R(h+\frac{1}{2},k+\frac{1}{2},l+\frac{1}{2})$, with $h,k$, and $l$ integers. While broadly consistent with the prior work \cite{romhanyi2014entangled}, our high-resolution data reveal important new features: (1) A splitting at the $R$ point, $\Delta_R=1.6(2)$ meV, between the two modes with dominant intensity (previously reported by \textcite{tucker2016spin}), whereas the Heisenberg model of \textcite{romhanyi2014entangled} implies four-fold degeneracy. A third mode between 6 meV and 8 meV can also be observed at $R$ points for high momentum transfer. Consistent with Ref.~\cite{tucker2016spin}, we identify this mode as a phonon (Fig.~\ref{Fig3}(k)) based on the $|\textbf{Q}|^2$ dependence of the integrated intensity~\cite{Zaliznyak_2014}. (2) Near the $X$ point there is a dramatic broadening of the lower branch (between 4 and 8 meV in Fig.~\ref{Fig1}(e)), where the Heisenberg model \cite{romhanyi2014entangled} calls for two-fold degeneracy. (3) The optical modes at the $\Gamma$ point at 11.6 meV, which in the Heisenberg model are triply degenerate, are split into three modes with splitting $\Delta_{\Gamma}^{o}=0.7(3)$ meV, see Sec.~\ref{alge}. In the following we show that these features directly reflect symmetry-allowed DM interactions and the associated incommensurate nature of the ground state. \par As apparent in Fig.~\ref{Fig1}(e), the low-energy parts ($<2$ meV) of the inelastic magnetic scattering at $\Gamma$ points overlap with the tails of elastic coherent and incoherent nuclear and magnetic scattering as a result of the finite energy resolution of the measurements. To resolve magnetic scattering in this low-energy regime, we used the MACS instrument~\cite{Rodriguez_2008} at the NIST Center for Neutron Research in a separate experiment on the same sample.
The final energy was fixed at $E_f=2.4$ meV, resulting in an FWHM elastic energy resolution of 0.08 meV. The data were acquired at $T=1.6$ K. We were able to resolve the magnon dispersion for energy transfers from $\hbar\omega=0.2$ meV to 1.2 meV. The data were processed using the software DAVE~\cite{Azuah_2009} and folded assuming cubic symmetry.\par A fixed $\hbar\omega=1.15$ meV slice of the MACS data near the $\Gamma(1,\bar{1},\bar{1})$ zone center is shown in Fig.~\ref{MACS1}(a). Within experimental accuracy, the dispersion is isotropic. Note the four point-like signals outside the rings in Fig.~\ref{MACS1}(a). These are remnants of Bragg diffraction of 2.4 meV neutrons diffusely scattered from the monochromator, which were partially subtracted as described in Appendix~\ref{spur}. We approximate the dispersion as $E(q)=Dq^2+\Delta_{\Gamma}$, where $q$ is the distance from the $\Gamma$ point, $D$ is the spin-wave stiffness, and $\Delta_{\Gamma}$ is a possible anisotropy gap. Taking into account the coarse out-of-plane $\mathbf{Q}$ resolution of MACS and its energy resolution as described in Appendix~\ref{MRcal}, a pixel-to-pixel fit to the data yields $D=67(8)~\text{meV}~ \text{\AA}^2$. This is slightly larger than a previous neutron report~\cite{portnichenko2016magnon} and than the value implied by the overall model parameters in Table~\ref{Pa}, which fit the SEQUOIA data at higher energy transfers and correspond to $D=58(2)~\text{meV}~ \text{\AA}^2$, where the quoted range indicates the orientational anisotropy. The data place an upper bound of 0.1 meV on $\Delta_{\Gamma}$, which is consistent with other experiments~\cite{kobets2010microwave,prasai2017ballistic}. Fig.~\ref{MACS1}(c,d) compare the angle-averaged neutron scattering intensity data to the resolution-smeared intensity distribution anticipated for the best-fit coarse-grained model of Table~\ref{Pa}.
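Because the dispersion model $E(q)=Dq^2+\Delta_{\Gamma}$ is linear in $q^2$, extracting $D$ and $\Delta_{\Gamma}$ reduces to ordinary least squares. The following is an illustrative sketch on synthetic data only — the stiffness and noise level are placeholders, and the actual analysis fits pixel-to-pixel against the resolution-convolved model rather than against extracted dispersion points:

```python
import numpy as np

# Illustrative least-squares fit of E(q) = D*q^2 + Delta to synthetic data.
rng = np.random.default_rng(0)
q = np.linspace(0.05, 0.15, 20)                       # 1/Angstrom
E_obs = 67.0 * q**2 + rng.normal(0.0, 0.02, q.size)   # meV, gapless input

A = np.column_stack([q**2, np.ones_like(q)])          # design matrix in q^2
(D_fit, Delta_fit), *_ = np.linalg.lstsq(A, E_obs, rcond=None)
# D_fit should recover ~67 meV A^2 and Delta_fit should be near zero.
```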
Here the effects of momentum and energy resolution were taken into account as described in Appendix~\ref{MACSapp}, where we also discuss evidence for the incommensurate ground state in the form of a physical momentum-space broadening of the low-energy modes.\par \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{figure5_MACS_0210.png} \caption{(a) Constant $\hbar\omega=1.15(15)$ meV slice through the MACS data near the ${\bf Q}_0=(1\bar{1}\bar{1})$ zone center. The spin-wave signal forms a circle, which indicates an isotropic dispersion. (b) Spin-wave model calculation using the parameters in Table~\ref{Pa}, numerically convolved with the instrumental resolution described in Appendix~\ref{MACSapp}. (c) $Q_\parallel-\omega$ intensity map of the MACS data following azimuthal averaging around ${\bf Q}_0$. Due to the azimuthal averaging, the error bars of the pixels are inversely proportional to $Q_{\parallel}$; the pixels near $Q_{\parallel}=0$ (for example, the bright pixels at $\hbar\omega=0.4,0.6,0.9$ meV) have significantly larger error bars than pixels at finite $Q_{\parallel}$ and are thus less reliable. (d) Calculated $Q_\parallel-\omega$ intensity map using the parameters in Table~\ref{Pa} and the same azimuthal averaging as for the experimental data. Data in (a,c) share the same color scale and were not independently normalized. The calculated results in (b,d) share the same normalized color scale. Dashed lines in (c,d) mark the lowest accessible energy transfer (0.2 meV) in the MACS experiment.} \label{MACS1} \end{figure} \section{Spinwave model}\label{theo} Without compromising accuracy, a great simplification in modeling the low-energy spin dynamics of $\text{Cu}_2\text{OSeO}_3$ can be achieved by treating each strong tetrahedron as a rigid cluster with an effective spin $S=1$. The corresponding coarse-grained lattice shown in Fig.~\ref{Fig1}(d) is a distorted FCC lattice with the same space group $P2_13$ as the original lattice.
There are two different types of ferromagnetic interaction between the effective spins. As shown in Fig.~\ref{Fig1}(a,d), we define the bond arising from $J_w^{\text{AF}}$ and $J_{\text{o.o}}^{\text{AF}}$ to be $J_1$ (nearest neighbor, nn). The interaction arising from $J_w^{\text{FM}}$ is denoted $J_2$ (next-nearest neighbor, nnn). The Hamiltonian for the effective model reads \begin{equation} \mathcal{H}_J = \sum_{\langle ij \rangle}J_1\mathbf{S}_i\cdot\mathbf{S}_j + \sum_{\langle \langle ij \rangle \rangle}J_2 \mathbf{S}_i\cdot\mathbf{S}_j, \end{equation} where $\langle ij \rangle$ and $\langle \langle ij \rangle \rangle$ denote pairs of first and second neighbors, respectively. We then use the standard Holstein-Primakoff (HP) substitution for collinear structures and expand to order $1/S$ before setting $S=1$. The dispersion relation for the resulting quadratic magnon hopping model (Fig.~\ref{Fig2}(a)) is broadly consistent with the inelastic neutron scattering data in Fig.~\ref{Fig1}(e), yet dramatically simpler and with fewer parameters than a microscopic model~\cite{yang2012strong,romhanyi2014entangled}. The energy of the optical modes at the $\Gamma$ point (also the bandwidth of the magnon bands below 13 meV) is $8|J_1+J_2|\approx 12$ meV, while the $M$-point splitting reflects the difference between $J_1$ and $J_2$: $4|J_1-J_2|\approx 1.2$ meV. Following the previous DFT calculation \cite{janson2014quantum} and assuming that $|J_1|<|J_2|$ leads to the parameters and calculated magnon dispersion in Fig.~\ref{Fig2}(a) (magenta). High-temperature expansion yields \cite{janson2014quantum} $\Theta_{CW}\approx-4(J_1+J_2)=70$ K, which is consistent with the Curie-Weiss temperature $\Theta_{CW}=69(2)$~K extracted from high-temperature susceptibility data \cite{bos2008magnetoelectric}.
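As a quick numerical consistency check of the analytic relations quoted above, using the couplings $J_1=-0.6$ meV and $J_2=-0.9$ meV from Fig.~\ref{Fig2}(a) and the conversion $1~\text{meV}\approx 11.604$ K:

```python
# Consistency check of the coarse-grained model's analytic relations,
# with J1 and J2 taken from the fitted dispersion (Fig. 2).
J1, J2 = -0.6, -0.9                  # meV, effective FM couplings

optical_gamma = 8 * abs(J1 + J2)     # Gamma-point optical energy, ~12 meV
m_splitting = 4 * abs(J1 - J2)       # M-point splitting, ~1.2 meV
theta_cw = -4 * (J1 + J2) * 11.604   # Curie-Weiss temperature, ~69.6 K
```

All three numbers agree with the quoted values of 12 meV, 1.2 meV, and $\Theta_{CW}\approx 70$ K.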
However, contrary to the helimagnetic state of $\rm Cu_2OSeO_3$, this model is ferromagnetic, and it does not yet account for the previously enumerated features (splitting of the magnon modes at the $\Gamma$ and $R$ points, broadening of the lower magnon branches at the $X$ point) of the high-resolution data in Sec.~\ref{Exp}, nor for the helical ground state. \par To account for these, we augment the model with symmetry-allowed DM interactions: \begin{equation} \mathcal{H}_D = \sum_{\langle ij \rangle}\mathbf{D}_{ij}\cdot(\mathbf{S}_i\times\mathbf{S}_j) +\sum_{\langle\langle ij\rangle\rangle}\mathbf{D}'_{ij}\cdot(\mathbf{S}_i\times\mathbf{S}_j). \end{equation} The nearest-neighbor DM vectors $\mathbf{D}_{ij}$ are related to each other by lattice symmetries and can be expressed in terms of their coordinates in a local frame, $\mathbf{D}_{ij} = (d_1, d_2, d_3)$. The same applies to the second-neighbor DM vectors $\mathbf{D}'_{ij}$. The absence of mirror symmetries in $\text{Cu}_2\text{OSeO}_3$ means there are no constraints on these 6 parameters. The DM vectors for each bond are listed in Table~\ref{conv} of Appendix~\ref{LWST}. The DM vector for a representative nn bond is shown in Fig.~\ref{Fig2}. Determining the exact ground state and spin-wave dispersion relation for a general set of DM interactions is non-trivial. Appendix~\ref{LWST} describes a semi-quantitative analysis, the results of which we now summarize. \par \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{analytic.png} \caption{(a) Magnon dispersion calculated for $\mathcal{H}_J$ with $J_1=-0.6$ meV, $J_2=-0.9$ meV (magenta), and for $\mathcal{H}_{\text{tot}}\equiv\mathcal{H}_J+\mathcal{H}_D$ with $d_1=-d_1'=0.2$ meV and all other DM components zero (green). The general features of the $E_i=20$ meV inelastic neutron data (Fig.~\ref{Fig1} and Fig.~\ref{Fig3}) are reproduced. The DM interactions lift the $R$-point degeneracy, as observed in the experimental data (Fig.~\ref{Fig3}(e-g)).
The colored background shows the density of states (DOS) of the two-magnon continuum for each momentum along the high-symmetry directions. The unit of the DOS is $1/(\text{\AA}^{-1}~\text{meV})$ per unit cell. (b) Local coordinate system defining the DM interactions for nn and nnn effective spins. Only the DM interaction for a single nn pair, $\mathbf{D}_{14}$, is shown. For nnn bonds (c) a similar set of $(d_1',d_2',d_3')$ projections can be defined. There are no symmetry constraints on $\mathbf{D}$ or $\mathbf{D}^\prime$. Only the components $d_1$ and $d_1'$ contribute to the splitting at the $R$ point.} \label{Fig2} \end{figure} \subsection{R Point Splitting} The $R$-point splitting $\Delta_R=1.6(2)$ meV is closely related to the DM components $d_1$ and $d_1'$, which mix the magnon modes of the four sublattices in the coarse-grained unit cell. Specifically, we find $\Delta_R=4|d_1-d_1'|$. Field-theoretical analysis \cite{janson2014quantum} yields the helical pitch $|\mathbf k_h|\propto |d_1+d_1'|$ when all other DM components vanish. We note that the splitting at the $R$ point, $\Delta_R$, is independent of whether the ground state is incommensurate (i.e., whether $k_h$ is finite). The little group of the lattice space group $P2_13$ at the $R$ point has no four-dimensional irreducible representation to protect any four-fold degeneracy \cite{elcoro2017double}, even when the magnetic structure is commensurate. It follows that if $d_1$ and $d_1'$ were the only anisotropy parameters, they would be uniquely determined by $\Delta_R$ and $k_h$. While symmetric anisotropic exchange can also contribute to $\Delta_R$, the absence of a significant $\Gamma$-point gap in the excitation spectrum indicated by the present data ($\Delta_{\Gamma}\leq 0.1$~meV), microwave \cite{kobets2010microwave}, and specific heat \cite{prasai2017ballistic} data constrains such anisotropy terms.
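The $R$-point relation above can be checked against the illustrative DM components used in Fig.~\ref{Fig2}(a), $d_1=-d_1'=0.2$ meV. Notably, this choice reproduces the measured splitting while the helical-pitch combination $|d_1+d_1'|$ vanishes, illustrating numerically that $\Delta_R$ does not require an incommensurate state:

```python
# Delta_R = 4|d1 - d1'| with the Fig. 2 parameters d1 = -d1' = 0.2 meV.
d1, d1p = 0.2, -0.2            # meV

delta_R = 4 * abs(d1 - d1p)    # ~1.6 meV, matching the measured 1.6(2) meV
pitch_scale = abs(d1 + d1p)    # 0: |k_h| ∝ |d1 + d1'| vanishes here
```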
\par \subsection{X Point Broadening} The lower branch of the $X$ point magnon dispersion should be two-fold degenerate, because the corresponding little group of $P2_13$ only has two-dimensional irreducible representations \cite{elcoro2017double}. For an incommensurate ground state, the symmetry of the magnon Hamiltonian is lowered by the magnetic structure, which selects one particular $\langle100\rangle$ direction. Thus the $X$ point along the magnetic wave vector (defined as $Z$) is distinguishable from the orthogonal $X$-directions. Our measurements, however, are carried out on a multi-domain sample so that $X$ and $Z$ point data are superimposed. This effect may contribute to the $X$-point broadening, though it cannot account for the continuum between 4 and 8 meV at the $X$-point (Fig.~\ref{Fig1}(e), Fig.~\ref{Fig3}(h-j)). \par In Fig.~\ref{Fig2}(a), we also indicate the phase space for two-magnon states. The colormap background indicates areas in $\mathbf{P}-E_2(\mathbf{P})$ space where $\mathbf{P}=\mathbf{p}_1+\mathbf{p}_2$ and $E_2(\mathbf{P})=E(\mathbf{p}_1)+E(\mathbf{p}_2)$ represents the two-magnon continuum for a given momentum $\mathbf P$. Here $E(\mathbf{p}_1)$ is the energy of a single magnon of momentum $\mathbf{p}_1$ given by $\mathcal{H}_J$. We notice that the shape of the two-magnon continuum near the $X$ point and along the $MR$ edge closely resembles the broadened region of the inelastic neutron data (see Fig.~\ref{Fig1}(e)). This suggests possible one-to-two magnon decay allowed by the non-collinear magnetic structure, as observed in various magnetic systems \cite{Stone_2006,Plumb_2015}. The crossing of the single magnon dispersion through the two-magnon phase space means the kinematic constraints (conservation of energy and momentum) are satisfied. This is a necessary but not sufficient condition for spontaneous magnon decay \cite{zhitomirsky2013colloquium}.
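The kinematic check described above (does the single-magnon energy lie inside the two-magnon continuum $E(\mathbf p_1)+E(\mathbf P-\mathbf p_1)$?) can be sketched numerically. The dispersion below is a hypothetical one-dimensional ferromagnetic band, not the actual four-branch spectrum of $\mathcal{H}_J$:

```python
import numpy as np

# Toy 1D ferromagnetic dispersion (illustrative only, not H_J):
J = 1.0
def E(p):
    return 2.0 * J * (1.0 - np.cos(p))

p1 = np.linspace(-np.pi, np.pi, 4001)

def continuum_bounds(P):
    """Edges of the two-magnon continuum E(p1) + E(P - p1) at total momentum P."""
    e2 = E(p1) + E(P - p1)
    return e2.min(), e2.max()

# One-to-two magnon decay is kinematically allowed where the single-magnon
# branch lies inside the continuum (a necessary, not sufficient, condition):
P = 2.0
lo, hi = continuum_bounds(P)
assert lo <= E(P) <= hi
```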
The lower branch of the magnon modes around the $X$ point can in principle decay into two acoustic magnons. The density of states (DOS) of the two-magnon continuum reflects the number of one- to two-magnon decay channels. However, the resulting line width (decay rate) is controlled by the magnitude of interaction vertices: indeed, the single-magnon modes with the most significant broadening (the lower modes at the $X$ point and the $XM$ and $XR$ edges) do not coincide with the largest two-magnon continuum DOS. It is interesting to note, however, that the observed scattering intensity near the $X$-point closely follows the calculated two-magnon continuum. This points to the possibility that single magnon excitations are completely destabilized in this region of the Brillouin zone and replaced by two-magnon excitations. \par Another possible mechanism for broadening at the said momenta is magnon-phonon interactions. A previous inelastic neutron scattering experiment at $T=70$~K \cite{tucker2016spin} reported an acoustic phonon mode around 5 meV and an optical phonon around 8 meV at the $X$ point. These two phonons overlap with the broadened lower branches of magnons at the $X$ point and along the $XR$ edge. The hybridization of crossing magnon and phonon modes at the zone boundary may play a role in the apparent magnon decay. A similar explanation was proposed for magnon softening in ferromagnetic manganese perovskites \cite{Dai_2000}.
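The magnon-phonon hybridization scenario can be illustrated with a two-level avoided-crossing toy model; the energies and coupling used here are illustrative, not fitted to $\text{Cu}_2\text{OSeO}_3$:

```python
import numpy as np

# 2x2 avoided crossing between a magnon at E_m and a phonon at E_ph,
# coupled with strength g; the hybridized branches are the eigenvalues.
def hybridized_levels(E_m, E_ph, g):
    return np.linalg.eigvalsh(np.array([[E_m, g], [g, E_ph]]))

g = 0.3  # meV, illustrative coupling strength
lo, hi = hybridized_levels(5.0, 5.0, g)  # exactly at the crossing point
assert abs((hi - lo) - 2.0 * g) < 1e-9   # minimum gap of the repelled modes is 2g
```

Away from the crossing the two branches revert to the bare magnon and phonon energies, so the hybridization mainly broadens and repels modes where the dispersions intersect, as at the zone boundary here.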
A thorough quantitative analysis is needed to distinguish between these scenarios.\par \subsection{Splitting of Optical Modes at the $\Gamma$ Point} The splitting of the optical modes at the $\Gamma$ point is affected by $d_2,d_2',d_3,d_3'$, but not by $d_1$ or $d_1'$ (Appendices \ref{LWST} and \ref{Reliability}).\par In Fig.~\ref{Fig2}, we show as green lines the magnon dispersion calculated for $\mathcal{H}_{\text{tot}}\equiv\mathcal{H}_J+\mathcal{H}_D$ with the same $J_1, J_2$ as previously employed, $d_1=-d_1'=0.2$ meV, and the remaining DM components set to 0. This is a special case ($d_1=-d_1'$) in which the DM interactions cancel and lead to a collinear FM ground state with $k_h=0$, while reproducing the experimentally observed energy splitting $\Delta_R=1.6$ meV at the $R$ point. Note the mode splitting along the $XM$, $XR$, and $MR$ edges due to the multi-domain effect. The optical modes at the $\Gamma$ point, however, remain degenerate. By including other components of the DM interaction, the dispersion at the $M$ point is modified, so the relationship $4|J_1-J_2|\approx 1.2$ meV associated with the experimentally determined $M$-point splitting does not strictly hold in the following numerical fit. \section{Quantitative comparison}\label{Num} \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{ppt_figure4_cl_new_nor_0207.png} \caption{Comparison between experimental (a) and calculated (b) cross section along a path in momentum space that connects labeled high symmetry points. The color bars indicate the intensity scale. In (a), the integration range in the perpendicular $\mathbf{Q}$ direction is $\pm 0.05~\mathrm{\AA^{-1}}$. (c) shows the measured and calculated integrated intensity $S(\mathbf{Q})$ (the calculated result is multiplied by the constant of proportionality $C$, see Sec.~\ref{Dis} (4)). The excellent agreement throughout multiple zones validates the effective-spin formalism and the use of an effective-spin form factor.
Error bars in (c) represent one standard deviation.} \label{Fig4} \end{figure*} To make further progress towards an accurate effective-spin Hamiltonian $\mathcal{H}_{\text{tot}}$ for $\rm Cu_2OSeO_3$, we use the Matlab library SpinW\_R3176 \cite{toth2015linear} to calculate the dynamical structure factor for approximate single wavevector helical ground states. Contributions from multiple magnetic domains are superimposed in our sample. Though there exist several theoretical methods to calculate the ground state wavevector and chirality or handedness of the magnetic helicoid from microscopic parameters \cite{janson2014quantum,chizhikov2015microscopic}, in this work we use a numerical approach to obtain the magnetic ground state for a given set of interaction parameters during the optimization of $\mathcal{H}_{\text{tot}}$. First we use the Luttinger-Tisza method \cite{Litvin_1974} to determine the overall magnetic wavevector. We then use the Monte-Carlo method to optimize the relative directions of the 4 effective spins. These steps are repeated until we obtain a single wavevector state with the lowest possible energy. We require the resulting wavevector to be consistent with the magnetic wavevector $k_h$ \cite{adams2012long} and the chirality previously determined by SANS \cite{dyadkin2014chirality}.\par For comparison with the measured neutron scattering cross section we must take into account the internal structure of the effective spin. As detailed in Appendix \ref{Formderive}, this is accomplished by multiplying the effective-spin cross section with the form factor of a ferrimagnetic tetrahedron. The instrumental resolution was handled approximately by replacing the delta-function spectral function of the idealized spin wave cross section with a Gaussian energy resolution function.
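Returning to the ground-state step: the way a weak DM term selects a long-pitch helix in the Luttinger-Tisza stage can be caricatured with a single J-D spin chain, a toy model with assumed parameters, not the actual four-sublattice Hamiltonian:

```python
import numpy as np

# Classical energy per bond of a planar helix with turn angle phi on a
# chain with FM exchange J and a DM component D along the helix axis:
#   E(phi) = -J cos(phi) + D sin(phi)
J, D = 0.75, 0.04  # meV; D << J, illustrative values only

phi = np.linspace(-np.pi, np.pi, 200001)
energy = -J * np.cos(phi) + D * np.sin(phi)
phi_min = phi[np.argmin(energy)]

# The minimizer is the small turn angle phi* = -arctan(D/J), with energy
# -sqrt(J^2 + D^2): a long-wavelength helix rather than a collinear FM.
assert abs(phi_min + np.arctan(D / J)) < 1e-4
assert abs(energy.min() + np.hypot(J, D)) < 1e-8
```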
To the calculated $E_i$-dependent energy resolution of the instrument, we added a phenomenological width $2\bar{\Gamma}=0.37$ meV in quadrature to match the experimental FWHM at the $R$ point (see Appendix~\ref{reso} for details). Possible origins of $\bar{\Gamma}$ include a finite spin wave relaxation rate for the gapless non-collinear spin structure and apparent broadening due to the down-folding effects associated with the incommensurate spin structure. The finite $\mathbf{Q}$-resolution of the instrument is not explicitly included and could also in part be the origin of $\bar{\Gamma}$. We then carry out a pixel-by-pixel least-squares fit of the measured versus calculated ${\bf Q}$- and $\hbar\omega$-dependent intensity. For each set of interaction parameters in $\mathcal{H}_{\text{tot}}$ we determined the constant of proportionality $C$ between model and data by fitting the equal time structure factor $S(\mathbf{Q})=\int_0^{\infty}d\omega S(\mathbf{Q},\omega)$. Two enantiomers and three magnetic domains with $\mathbf{k}_h$ along different $\langle100\rangle$ directions were superimposed in the calculated $S(\mathbf{Q},\omega)$. The corresponding measured versus calculated structure factor is shown in Fig.~\ref{Fig4}. For a quantitative examination of the quality of this constrained fit, Fig.~\ref{Fig3} further shows cuts versus energy of $S(\mathbf{Q},\omega)$ at selected high symmetry points in the Brillouin zone. The best-fit parameters thus extracted are listed in Table \ref{Pa}. The calculated dispersion from this set of parameters in the energy range below 1.2 meV is shown in Fig.~\ref{MACS1}(d) to compare with the MACS data shown in Fig.~\ref{MACS1}(c). Resolution effects play a significant role here and are partially taken into account as described in Sec.~\ref{MRcal}. Momentum space broadening associated with the incommensurate nature of the ground state is also apparent in this low energy regime (Sec.~\ref{MRcal}).
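The width combination described above (calculated instrumental resolution plus the phenomenological $2\bar{\Gamma}$, added in quadrature) amounts to the following; the 0.5 meV instrumental value used in the example is hypothetical:

```python
import math

# Gaussian FWHM used to broaden each delta-function spin-wave peak:
# the E_i-dependent instrumental resolution and the phenomenological
# width 2*Gamma_bar = 0.37 meV are combined in quadrature.
GAMMA_BAR = 0.185  # meV, so that 2*Gamma_bar = 0.37 meV

def total_fwhm(instrumental_fwhm):
    return math.hypot(instrumental_fwhm, 2.0 * GAMMA_BAR)

# e.g. for a hypothetical 0.5 meV instrumental resolution:
assert abs(total_fwhm(0.5) - math.sqrt(0.5**2 + 0.37**2)) < 1e-12
```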
Fitting the raw data to an isotropic quadratic dispersion of the form $E(q)=Dq^2+\Delta_{\Gamma}$ yields $D=67(8)~\text{meV}~\text{\AA}^2$, $\Delta_{\Gamma}=0.0(1)$ meV, slightly larger than the model, which yields $D=58(2)~ \text{meV}~\text{\AA}^2$ and $\Delta_{\Gamma}=0^{+0.03}_{-0.01}$ meV. Note that here we are not probing the lower energy regime where helimagnons can be expected for $q<k_h$ and $\hbar \omega\leq 0.1$ meV. \begin{center} \begin {table*} [!htbp] \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Parameter&$J_1$ & $J_2$ & $d_1$ & $d_2$ &$d_3$ &$d_1'$ &$d_2'$ &$d_3'$ \\ [0.5ex] \hline Best fit (meV)&$-0.58_{-0.03}^{+0.08}$&$-0.93_{-0.07}^{+0.10}$& $0.24_{-0.03}^{+0.02}$& $-0.05$& $-0.15$& $-0.16_{-0.03}^{+0.02}$& $-0.10$& $0.36$ \\ \hline \end{tabular} \caption {Optimized parameters resulting from the pixel-by-pixel fit, shown in Fig.~\ref{Fig3} and Fig.~\ref{Fig4}. These parameters stabilize a helimagnetic ground state with $k_h=0.0143$ r.l.u. (compared to $0.0145(11)$ r.l.u. from \cite{adams2012long}) and with the same magnetic chirality as the lattice chirality \cite{dyadkin2014chirality}. The range of confidence is given for $J_1,J_2,d_1,d_1'$; there are four sectors of parameters with $J_1,J_2$ and $d_1,d_1'$ interchanged that produce a similar quality fit. $d_2,d_3,d_2',d_3'$ are not well bounded by this fit. See Appendix \ref{Reliability} for a more detailed discussion of what can be said about these model parameters based on the neutron data.
Specifically, we obtain three empirical constraints on $d_2,d_3,d_2'$, and $d_3'$.} \label{Pa} \end {table*} \end{center} \section{Discussion}\label{Dis} Fig.~\ref{Fig3} and Fig.~\ref{Fig4} show good agreement between model and data both in terms of dispersion and intensity. The effective model $\mathcal{H}_{\text{tot}}$ with only 4 parameters ($J_1,J_2,d_1,d_1'$) already accounts for most of the features of the measured magnon dispersion, including the $R$ point splitting, which requires anisotropic interactions \cite{tucker2016spin}. Despite playing a secondary role and being less well constrained by the measured inelastic neutron scattering data, $d_2,d_2',d_3$ and $d_3'$ are included to account for the splitting of the optical modes at the $\Gamma$ point and the broadening of peaks at $M$. This shows that DM interactions can have a non-negligible influence on magnon spectra beyond the low energy regime, while still stabilizing an incommensurate ground state with small $k_h$ consistent with previously reported SANS data. The consistency of the calculated and measured intensity throughout multiple Brillouin zones validates the use of an effective form factor for cluster-spins and solidifies the hierarchical approach to this compound. Several discrepancies, however, remain due to the complexity of the physical system and the limits of the model, which we discuss individually here. \par (1) Since the ground state is helical and incommensurate, with real space periodicity $\frac{2\pi}{k_h}$, the period of the magnon dispersion should be $k_h$ in the direction of the wavevector instead of 1 r.l.u.
For a single magnetic domain with $\mathbf{k}_h$ along a certain $\langle100\rangle$ direction, the observable magnon modes at $\mathbf{q}$ with $q_{\perp}\neq 0$ ($q_{\perp}$ is the component of $\mathbf{q}$ perpendicular to $\mathbf{k}_h$) are magnon modes originating from $\Gamma$ points (denoted as $\mathbf{q}$ modes) and those from $\pm N\mathbf{k}_h$ (denoted as $\mathbf{q}\pm N\mathbf{k}_h$ modes with $N\geq 1$). Along the direction of $\mathbf{k}_h$ ($q_{\perp}=0$), we expect to observe only $\mathbf{q}$ and $\mathbf{q}\pm \mathbf{k}_h$ modes if we have a single $k_h$ helical ground state, while the cantings and phase shifts due to multiple sublattices and possible higher-order spin-orbit coupling terms may introduce additional modes with smaller weight \cite{Janoschek_2010,Kugler_2015}. In our measured cross-section, due to the presence of multiple magnetic domains, we generally expect to observe $\mathbf{q}\pm N\mathbf{k}_h$ modes at any finite $\mathbf{q}$. For practical reasons we only include $\mathbf{q}$ and $\mathbf{q}\pm\mathbf{k}_h$ in the calculation; all higher order folding modes are therefore neglected. A $\Gamma$ point magnetic excitation at 8.4 meV was detected by THz optical spectroscopy \cite{laurita2017low}, which can also be observed in our neutron data (see Fig.~\ref{Fig3}(a,b)). It was interpreted as a magnon folded back from high momentum. This mode does not appear in our calculation, presumably because our model does not properly take into account such down-folding effects. \par (2) The model treats each cluster as a rigid classical spin-1, which is equivalent to assuming $J_s^{\text{AF}}\rightarrow \infty$ when in fact $J_s^{\text{AF}}=12.5$ meV \cite{portnichenko2016magnon} is large but finite. As a result, the ground state will be a superposition of spin-1 and spin-2 states due to exchange interactions with neighboring tetrahedra \cite{romhanyi2014entangled}, as well as of spin-0 states due to intra-tetrahedra DM interactions.
The effects of this can be seen in the ratio between the magnon energy at the $\Gamma$ point and the center of the two modes at the $R$ point. This ratio is strictly 4:3 in the rigid cluster model. In the measured data, the energy of the optical modes at the $\Gamma$ point is around 11.6(2) meV, so that the model correspondingly would predict a center energy of 8.7(2) meV at the $R$ point. The center energy at the $R$ point is, however, observed slightly higher, at 9.2(2) meV. This 0.5 meV deviation cannot be accommodated in the rigid spin-1 model by varying the exchange parameters. Instead the fit procedure leads to a compromise as in Fig.~\ref{Fig3}. This deviation may also be caused by the magnon-phonon coupling between the two magnon modes and the 6.9 meV phonon mode that we identify in Fig. \ref{Fig3}(e-g) and (k). A similar phonon magnetochiral effect was recently proposed in the context of an ultrasound experiment \cite{Nomura_2019}. (3) The overall broadening of magnon peaks exceeds the instrumental resolution, corresponding to a relaxation rate $\bar{\Gamma}=0.18(5)$ meV throughout the Brillouin zone. At the $X$ point between 4 and 8 meV (see Fig.~\ref{Fig3} (h-j)), the single magnon branch actually vanishes and is replaced by continuum scattering in a region of ${\bf Q}-\omega$ space that closely matches that of the kinematically allowed two-magnon continuum. The broadening of the upper magnon branch (around 12 meV) at the $X$ point also exceeds the average phenomenological FWHM corresponding to $\bar{\Gamma}$ (see Appendix~\ref{reso}). We believe these effects arise from magnon interactions and decay processes, as should be anticipated for a low-symmetry and low-spin ($S=1$) gapless magnet.\par (4) In this study we have used two methods to normalize the neutron data. The first is vanadium incoherent scattering, which gives a normalization factor $N_v$ with a systematic uncertainty of $\approx 15\%$.
We further calculate and compare the Bragg intensities (Appendix~\ref{Bn}), and get a normalization factor $N_B\approx 1.2N_v$ with $\approx 30\%$ uncertainty. Throughout the paper we have adopted $N_B$ for data normalization. The constant of proportionality $C$ between the normalized measured magnetic cross section and the calculated cross section is fitted to be $1.15(5)$. Considering the presence of phonon cross-sections and background scattering, the calculated result of our rigid spin-cluster model is consistent with the experimental data normalized by $N_B$ within uncertainty. Besides limitations in the accuracy of the absolute normalization of the measured neutron scattering cross section, the following reasons may also cause discrepancies between the calculated and measured magnetic cross-sections: (1) The spin density distribution around $\text{Cu}^{2+}$ may be more extended than for atomic $3d^9$ electrons \cite{dianoux2002neutron}, even spreading onto the ligand sites. This may cause a more rapid decrease of the magnetic form factor $F(\mathbf{Q})$ (see Appendix~\ref{Formderive}) as a function of $Q$ than accounted for in the analysis. (2) The ground state and low energy excited states of the system may be more entangled \cite{ozerov2014establishing,romhanyi2014entangled} than the rigid limit we take. Such quantum entanglement may reduce (increase) the effective spin length for each $\text{Cu}^{2+}$ by admixing spin-0 (spin-2) states into the ground state and the low energy excited states. (3) The higher order folding modes ($\mathbf{q}\pm N\mathbf{k}_h$, $N>1$) we neglect may cause the distribution of spectral weight to differ from calculations neglecting these components. (4) Furthermore, the finite momentum resolution of the instrument has not been fully quantified and included in the comparison between model and data. \section{Conclusion}\label{Concl} $\text{Cu}_2\text{OSeO}_3$ is a complex low symmetry magnetic material.
The complexity starts with a large structural unit cell containing 16 magnetic ions. The lack of inversion symmetry gives rise to a chiral magnetic order with a periodicity that is incommensurate with the crystalline lattice. Understanding the spectrum of excitations in such a magnet is a non-trivial task that we dedicated ourselves to in this paper. We conducted an inelastic neutron scattering experiment on $\text{Cu}_2\text{OSeO}_3$ focusing on the four lowest magnon branches and built a quantitative effective spin model that can be the basis for describing its low energy magnetism. The model includes DM interactions that stabilize the helimagnetic order. Features of the magnon spectrum missed in previous experiments and calculations have been quantitatively established and related to the incommensurability of the magnetic order. The interaction parameters were obtained by fitting the model to $\mathbf{Q}-E$ slices through four dimensional inelastic magnetic neutron scattering data. The resulting coarse-grained model provides an accurate description of the four lowest energy branches of the magnon spectrum. The methods exemplified by this work can be extended to other magnets where dominant interactions lead to the formation of effective spins at low energies. Our model will facilitate understanding of the complicated phase diagram of $\text{Cu}_2\text{OSeO}_3$ including the exotic skyrmion phase. \section*{Acknowledgments} This work was supported as part of the Institute for Quantum Matter, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0019331. CB and JK were supported by the Gordon and Betty Moore Foundation under the EPIQS program, grant number GBMF-4532. Access to MACS was provided by the Center for High Resolution Neutron Scattering, a partnership between the National Institute of Standards and Technology and the National Science Foundation under Agreement No.
DMR-1508249. We wish to thank Jonathan Gaudet and Predrag Nikolic for useful discussions on understanding the $X$-point broadening, and Jiao Lin for help in evaluating the instrumental resolution of SEQUOIA.
\usepackage{lineno} \begin{document} \nolinenumbers \title{SMIXS: Novel efficient algorithm for non-parametric mixture regression-based clustering} \author{Peter Mlakar \inst{1} \and Tapio Nummi \inst{4} \and Polona Oblak \inst{2}\and Jana Faganeli Pucer\inst{3}} \authorrunning{P. Mlakar et al.} \institute{University of Ljubljana, Faculty of Computer and Information Science,\newline Republic of Slovenia, Slovenian Environment Agency \newline \email{[email protected]} \and University of Ljubljana, Faculty of Computer and Information Science \email{[email protected]} \and University of Ljubljana, Faculty of Computer and Information Science \email{[email protected]} \and Tampere University, Faculty of Information Technology and \newline Communication Sciences \newline \email{[email protected]}} \maketitle \begin{abstract} We investigate a novel non-parametric regression-based clustering algorithm for longitudinal data analysis. Combining natural cubic splines with Gaussian mixture models (GMM), the algorithm can produce smooth cluster means that describe the underlying data well. However, the algorithm has some shortcomings: high computational complexity in the parameter estimation procedure and a numerically unstable variance estimator. Therefore, to further increase the usability of the method, we incorporate approaches to reduce its computational complexity, develop a new, more stable variance estimator, and introduce a new smoothing parameter estimation procedure. We show that the developed algorithm, SMIXS, performs better than GMM on a synthetic dataset in terms of clustering and regression performance. We demonstrate the impact of the computational speed-ups, whose correctness we formally prove in the new framework.
Finally, we perform a case study by using SMIXS to cluster vertical atmospheric measurements to determine different weather regimes. \keywords{mixture models \and regression \and clustering \and smoothing splines} \end{abstract} \section{Introduction} Longitudinal datasets contain samples described by measurements of one or more dependent variables over one independent variable, frequently denoted as time. We collect such datasets with the intent of studying the time-dependent developmental nature of individual samples through the fluctuations in their measurements. To this end, we can use regression techniques to provide insights regarding the dependence structure. Furthermore, we are able to extract additional information if subsets of samples share or exhibit similar developmental trends. For such cases, mixture regression methods or clustering algorithms can prove useful. Both of the aforementioned properties of longitudinal datasets are crucial for understanding the nature of the data; the methods used to analyze these properties are therefore of similar importance. We can conduct longitudinal data analysis in different frameworks, each pertaining to a different view of the problem. One can approach the issue through the lens of generalized linear mixed models \cite{bolker2015linear,clskom}, k-means variants extended to perform better in a longitudinal setting \cite{clskml}, Bayesian methods \cite{clslu}, or generalized estimating equations \cite{wang2014generalized}. For a comparison and overview of different longitudinal analysis techniques, we refer to \cite{doi:10.1080/03610918.2020.1861464,fitzmaurice2012applied,wu2006nonparametric}. Here, we focus on a finite mixture model approach \cite{mclachlan2019finite,everitt2013finite}, due to its flexibility and statistical interpretability. Specifically, we study the non-parametric regression-based clustering algorithm developed by \citet{nummi2018semiparametric}.
This algorithm leverages Gaussian mixture models (GMM) and smoothing splines to construct $c$ mixtures, described by smooth mean curves. The smoothing splines constrain the individual mixture mean curves based on their estimated roughness. We do this to control the effect noise would have on the final regressor, since many real-world datasets contain noise in their measurements. Additionally, the development within latent clusters can be complicated, and a general smooth function provides a good approximation of it. We can control the amount of smoothing enforced per mixture with a smoothing parameter, rendering the algorithm flexible even when processing non-homogeneous data, since each mixture is capable of adapting to specific parts of the dataset. Therefore, the result of this procedure is the formulation of mixture means as smooth functions that continuously model transitions between measurements in an energy-optimal way \cite{green1993nonparametric}. This differentiates the proposed algorithm from GMM, making it more resilient to strong noise signals present in the dataset, resulting in better clustering as well as regression performance. However, some drawbacks mitigate the practical applicability of the algorithm \cite{nummi2018semiparametric}. First, the exact estimation of this algorithm is subject to high computational complexity. Because of the iterative nature of the optimization procedure and the computation of matrix inverses required for the estimation of parameters, the algorithm quickly becomes intractable. Adding to the computational burden is the smoothing parameter selection procedure, which requires multiple estimations in each optimization iteration.
Unrelated to the computational complexity woes, the variance parameter estimator exhibits unwanted numerical behaviour in the context of Expectation Maximization (EM) \cite{mclachlan2019finite}. Lastly, we were not able to find any comparative studies of the algorithm \cite{nummi2018semiparametric} against other existing methods, which would further bolster the algorithm's usability. Therefore, to overcome these issues, we propose a new algorithm, SMIXS, with the following major contributions: \begin{itemize}[label=$\bullet$] \item Implementation of speed-ups for crucial computational bottlenecks. \item Alternative smoothing parameter selection procedure using gradient descent. \item Derivation of a more stable penalty-corrected variance estimator. \item Comparison of SMIXS against the base GMM in terms of regression performance, clustering performance, and computational complexity. \item An open source implementation of the algorithm in Julia and Python available on GitHub \cite{smixsgithub}. \end{itemize} The remainder of the paper is structured in the following manner. We present the SMIXS algorithm in Section \ref{ch:mth}. In Section \ref{ch:rslt} we describe the empirical analysis conducted on a synthetic dataset, the results of which are provided in Subsection \ref{sec:syneval}. We present the results of a case study in Subsection~\ref{ch:cstd}, where we use SMIXS to cluster atmospheric sounding data of temperature measurements. Additional derivations and more detailed explanations are provided in Appendix \ref{ch:appendix}. \section{Mixture regression for longitudinal clustering} \label{ch:mth} The SMIXS algorithm builds upon the Gaussian mixture model, constraining its mixture means by using natural cubic smoothing splines. This enables the modeling of a rich set of curves \cite{nummi2018semiparametric} while also better describing the underlying development in latent groups.
The parameters of the SMIXS model are estimated using the Expectation Maximization (EM) algorithm \cite{dempster1977maximum,mclachlan2019finite}. Let us first define the notation used in the following sections: \begin{itemize}[label=$\bullet$] \item Let $n$ denote the number of total samples and let vector $\bm y_i$ denote the $i$-th sample with $p$ elements. \item Let $\Omega$ be the set of all model parameters, let $\omega_k$ denote the parameter subset belonging to the $k$-th mixture of the model, and let $c$ be the number of mixtures. \item Each $\omega_k$ contains the following parameters: the mixture mean vector $\bm \mu_k$ with $p$ elements, the mixture standard deviation $\sigma_k$, and the mixing proportion $\pi_k$. The mixing proportions denote the relative importance of individual mixtures and sum to one, $\sum_{k = 1}^{c}\pi_k = 1$. \item To convert the Gaussian mixture fitting problem from an incomplete to a complete framework we use the random variable $z_{ik}$ to denote whether or not the $i$-th sample belongs to the $k$-th cluster. If we knew the true values of these random variables for each individual, the clustering would be rendered trivial. Since we do not, we can look at this as a type of missing data problem, which facilitates the estimation of the model parameters. The estimator of the variable $z_{ik}$, denoted as $\hat{z}_{ik}$ (the hat symbol over a variable denotes its estimator), is computed as $\hat{z}_{ik} = E_{\Omega}(z_{ik}|\bm y_i) = \frac{\pi_k f_k(\bm y_i, \omega_k)}{\sum_{l = 1}^{c}\pi_l f_l(\bm y_i, \omega_l)}.$ Each $\hat{z}_{ik}$ is a real number in the interval $[0, 1]$. \item Let $f_k$ denote the $k$-th mixture probability density function, which in our case amounts to the multivariate Gaussian distribution with a diagonal covariance matrix $\sigma_k^2\bm I$. Therefore, within each mixture, we assume independence between measurements and homogeneous variance.
\item Let the roughness matrix $\bm G$ be defined as $\bm G = \bm Q \bm R^{-1} \bm Q^\top$, where $\bm Q$ and $\bm R$ represent two band matrices. They encode the relationship between a smoothing spline's values and its second derivatives at the spline knots. For more details see \citet{green1993nonparametric}. \end{itemize} The quantity maximized during the EM maximization step, with respect to the SMIXS model parameters, is the penalized conditional expectation of the log-likelihood. To define the penalty term we begin with the smoothing parameter $\lambda_k$, which controls the penalty's relative importance compared to the regression enforced by the conditional expectation. The penalty term is then defined as \begin{align} \label{eq:pnlt} P_k = \lambda_k\bm\mu_k^\top\bm G\bm\mu_k, \end{align} and the penalized conditional expectation is written as \begin{align} \label{eq:etilde} \tilde{E}(\Omega, \bm y) = \sum_{i = 1}^{n}\sum_{k = 1}^{c}\hat{z}_{ik}(\log (\pi_k) + \log (f_k(\bm y_i, \omega_k))) - \sum_{k = 1}^{c}P_k. \end{align} The penalty term $P_k$ restrains the individual mixture means based on their roughness. We define the roughness as the definite integral of the squared second derivative of a smooth function interpolating the individual mixture mean elements. The nature of this roughness penalty forces the mixture mean to take on the values of a natural cubic spline at the knots \cite{green1993nonparametric}. \\ Continuing with the maximization step of EM, we are required to estimate the remaining model parameters.
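A sketch of how $\bm Q$, $\bm R$, and the roughness matrix $\bm G = \bm Q \bm R^{-1} \bm Q^\top$ can be assembled for given knot positions, following the band-matrix construction of \citet{green1993nonparametric} (Python is used for illustration, mirroring one of the released implementation languages):

```python
import numpy as np

def roughness_matrix(t):
    """G = Q R^{-1} Q^T for natural cubic smoothing splines with knots t."""
    p = len(t)
    h = np.diff(t)                    # knot spacings h_1, ..., h_{p-1}
    Q = np.zeros((p, p - 2))          # band matrix of second differences
    R = np.zeros((p - 2, p - 2))      # symmetric tridiagonal band matrix
    for j in range(1, p - 1):         # loop over interior knots
        Q[j - 1, j - 1] = 1.0 / h[j - 1]
        Q[j, j - 1] = -1.0 / h[j - 1] - 1.0 / h[j]
        Q[j + 1, j - 1] = 1.0 / h[j]
        R[j - 1, j - 1] = (h[j - 1] + h[j]) / 3.0
        if j < p - 2:
            R[j - 1, j] = R[j, j - 1] = h[j] / 6.0
    return Q @ np.linalg.solve(R, Q.T)

G = roughness_matrix(np.arange(8.0))
assert np.allclose(G, G.T)              # G is symmetric
line = 2.0 + 3.0 * np.arange(8.0)       # a straight line has zero curvature,
assert np.allclose(G @ line, 0.0)       # so it incurs no roughness penalty
```

The final assertions verify the defining property of the penalty: $\bm\mu^\top\bm G\bm\mu$ vanishes exactly for constant and linear mean vectors.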
We compute the mixture proportion estimators as the average of the $n$ membership estimators $\hat{z}_{ik}$ for the $k$-th mixture, \begin{align*} \hat{\pi}_k = \frac{1}{n}\sum_{i = 1}^{n} \hat{z}_{ik}, \end{align*} and the cluster mean estimators can be calculated by \begin{equation} \label{eq:mu_est} \hat{\bm \mu}_k = \left (\sum_{i = 1}^{n}\hat{z}_{ik}\bm I + \alpha_k\bm G \right )^{-1}\sum_{i = 1}^{n}\hat{z}_{ik}\bm y_i, \end{equation} where $\alpha_k = \frac{\lambda_k}{\sigma_k^2}$. This smoothing weight substitution enables the computation of the mixture mean estimator without the direct need to calculate the variance. After we determine $\alpha_k$ we can proceed with the estimation of the other parameters, without any loss of generality, in the order $\hat{\bm \mu}_k$, $\hat{\sigma}^2_k$. For more details concerning the substitution refer to \citet{green1993nonparametric}. The procedures with which we select the smoothing parameter $\alpha_k$ and compute the variance estimators differ from the ones proposed by \citet{nummi2018semiparametric}. We introduce and provide arguments for the use of our approaches in the following sections. \subsection{Variance estimator} To calculate the variance estimator as defined in \citet{nummi2018semiparametric}, we first select a smoothing parameter value, compute the corresponding mixture mean, and then estimate the variance of a multivariate Gaussian distribution, disregarding the penalty term's direct influence on the variance. The rationale behind this is that when we estimate the mean of a mixture we consider the smoothing constraint. Therefore, when we compute the variance, for which we require the mean, no additional consideration of the smoothing parameter is needed, as it was applied at the point of mean estimation. However, we found that this yields unstable performance in certain cases. To be exact, the expectation maximized during EM decreases in value over consecutive iterations.
This is unwanted behaviour in the EM framework \cite{wu1983convergence} and is due to the fact that the variance computed this way is not the maximum expected log-likelihood estimator. To this end, we introduce the penalty-corrected variance estimator, which we obtain by maximizing the expectation in Equation (\ref{eq:etilde}) with respect to $\sigma_k^2$. This estimator is defined as \begin{align*} \hat{\sigma}_k^2 = \frac{\sum_{i = 1}^{n}\hat{z}_{ik}(\bm y_i - \bm \mu_k)^{\top}(\bm y_i - \bm \mu_k) + \alpha_k\bm \mu_k^{\top}\bm G\bm \mu_k}{\sum_{i = 1}^{n} \hat{z}_{ik}p}. \end{align*} We provide a more detailed derivation in Appendix \ref{ch:appendix}. The addition of $\alpha_k\bm \mu_k^{\top}\bm G\bm \mu_k$ to the numerator compensates for the smoothness constraint and mitigates the unwanted convergence behaviour. \subsection{Alpha parameter selection} There are multiple ways one can select the value of the smoothing parameter $\alpha_k$. As a possible alternative, \citet{nummi2018semiparametric} proposed the cluster-wise maximization of the so-called profile log-likelihood function with respect to the corresponding smoothing parameter $\alpha_k$. This entails that the optimal smoothing parameter $\alpha_k$ is the one that minimizes the variance $\hat{\sigma}_k^2$. However, such an estimator is biased toward small $\alpha_k$ values, as the variance is smallest when the mixture mean follows the weighted arithmetic average, i.e. when the smoothing parameter equals zero. Therefore, we propose the use of cross-validation, which is also supported in the literature \cite{green1993nonparametric}. Let $\hat{\bm \mu}_j^{-\{ij\}}=\hat{\bm \mu}_j^{-\{ij\}}(\alpha_k)$ denote the $j$-th element of the $k$-th mixture mean estimator, defined in Equation \eqref{eq:mu_est}, computed by omitting the $j$-th measurement of sample $i$.
Then cross-validation can be written as \begin{equation} \label{eq:cv} CV(\alpha_k) = \sum_{i = 1}^{n}\hat{z}_{ik}\sum_{j = 1}^{p}(\hat{\bm \mu}_j^{-\{ij\}} - \bm y_{ij})^2. \end{equation} This, however, requires a grid search over multiple $\alpha_k$ values, significantly slowing the estimation procedure. It is also not sensible to do a comprehensive grid search in the starting iterations of the EM algorithm, since the remaining parameter initializations are not yet optimized. To this end, we suggest a gradient descent approach, since it eliminates the need to evaluate many $\alpha_k$ values at each iteration of the EM algorithm. By limiting $\alpha_k$ to the interval $[1, 10^6]$ and starting at $\alpha_k = 1$, we conduct gradient descent on the cross-validation score, taking one step per EM iteration. By approximating the derivative of $CV$ by its difference quotient and denoting the update rate with $\theta$, we define the update of $\alpha_k$ to be \begin{align*} \alpha_k^{new} = \alpha_k - \theta \frac{CV(\alpha_k + h) - CV(\alpha_k)}{h}. \end{align*} In our case we chose $\theta = 10^{-3}$ and $h = 0.1$. This largely avoids the computational burden of the grid search and at the same time allows us to reach satisfactory smoothness after multiple iterations. We can also utilize a dynamic learning rate in this procedure, which might result in faster convergence. However, one must note that complex learning rate estimators usually require additional non-trivial computations, which might nullify the expected benefits. For an example of a dynamic learning rate used in this context refer to \citet{mlakar2021application}. \subsection{Computational complexity reduction} There are two potentially problematic computational bottlenecks in the presented algorithm. To alleviate these burdens, we identified appropriate solutions which we describe in this section.
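Before turning to those solutions, we note that the $\alpha_k$ update above amounts to a single clipped finite-difference gradient step per EM iteration. A minimal sketch, with a toy convex score standing in for the true cross-validation function and function names of our own choosing:

```python
import numpy as np

def alpha_step(cv, alpha_k, theta=1e-3, h=0.1, lo=1.0, hi=1e6):
    """One gradient-descent step on the cross-validation score cv,
    approximating the derivative by a forward difference quotient and
    clipping alpha_k back into the allowed interval [1, 1e6]."""
    grad = (cv(alpha_k + h) - cv(alpha_k)) / h
    return float(np.clip(alpha_k - theta * grad, lo, hi))

# Toy convex score standing in for CV(alpha): repeated steps drift toward
# its minimiser while never leaving the allowed interval.
toy_cv = lambda a: (a - 50.0) ** 2
alpha = 1.0
for _ in range(3000):
    alpha = alpha_step(toy_cv, alpha)
```

In the algorithm itself only one such step is taken per EM iteration, so the cost per iteration is two evaluations of $CV$ rather than a full grid.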
For more details on both approaches refer to \citet{green1993nonparametric} and \citet{reinsch1967smoothing}, and to the Appendix \ref{ch:appendix}. The first computational problem is computing the matrix inverse \begin{equation}\label{eq:S} \bm S = \left ( \sum_{i = 1}^{n}\hat{z}_{ik}\bm I + \alpha_k \bm G \right )^{-1} \end{equation} in Equation \eqref{eq:mu_est}, which we call the smoothing matrix. We efficiently compute the inverse by using the Reinsch algorithm \cite{reinsch1967smoothing}. This enables us to express the matrix $\left (\sum_{i = 1}^{n}\hat{z}_{ik}\bm I + \alpha_k\bm G \right )$ in a more suitable form. Define first the matrix $\bm W_k = \sum_{i = 1}^{n}\hat{z}_{ik}\bm I$ and the vector $\tilde{\bm y}_k = \sum_{i = 1}^{n}\hat{z}_{ik}\bm y_i$. Also let the vector $\bm \gamma_k$ denote the vector of second-order derivatives of the natural cubic spline described by $\bm \mu_k$, evaluated at its knots. This entails that we can implicitly express the mixture mean estimator in the following way \begin{equation}\label{eq:R+QWQ} \bm Q^\top \bm W^{-1}_k \tilde{\bm y}_k = \left (\bm R + \alpha_k \bm Q^\top \bm W^{-1}_k \bm Q \right) \hat{\bm \gamma}_k. \end{equation} Note that since $\bm R + \alpha_k \bm Q^\top \bm W^{-1}_k \bm Q $ is a symmetric, pentadiagonal, positive-definite matrix, we can efficiently compute its inverse using its Cholesky decomposition. This enables us to estimate $\hat{\bm \gamma}_k$ from Equation \eqref{eq:R+QWQ}, from which we can compute the mixture mean estimator $\hat{\bm \mu}_k$ from~\eqref{eq:mu_est} in linear time with respect to the number of measurements. To compute the cross-validation score, the individual mixture mean vector would have to be estimated for each omission of one measurement from each sample. This renders the computation of the cross-validation score cumbersome on its own, and moreover this procedure is performed for each value of $\alpha_k$ we wish to evaluate.
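As a concrete, dense-for-clarity sketch of this first route, the following code assembles the band matrices of \citet{green1993nonparametric}, solves the small system of Equation \eqref{eq:R+QWQ} for $\hat{\bm \gamma}_k$, and recovers $\hat{\bm \mu}_k$; the function names are ours, and a real implementation would replace the dense solves with a banded Cholesky factorization to obtain the linear-time behaviour:

```python
import numpy as np

def spline_bands(t):
    """Band matrices Q, R of Green & Silverman (1993) for knots t."""
    p, h = len(t), np.diff(t)
    Q = np.zeros((p, p - 2))
    R = np.zeros((p - 2, p - 2))
    for j in range(1, p - 1):
        Q[j - 1, j - 1] = 1.0 / h[j - 1]
        Q[j, j - 1] = -1.0 / h[j - 1] - 1.0 / h[j]
        Q[j + 1, j - 1] = 1.0 / h[j]
        R[j - 1, j - 1] = (h[j - 1] + h[j]) / 3.0
        if j < p - 2:
            R[j - 1, j] = R[j, j - 1] = h[j] / 6.0
    return Q, R

def reinsch_mean(t, z_sum, y_tilde, alpha):
    """Solve (z_sum*I + alpha*G) mu = y_tilde with G = Q R^{-1} Q^T via the
    small system (R + alpha Q^T W^{-1} Q) gamma = Q^T W^{-1} y_tilde for the
    spline's second derivatives gamma, then mu = W^{-1}(y_tilde - alpha*Q gamma)."""
    Q, R = spline_bands(t)
    w_inv = 1.0 / z_sum                   # W_k is a multiple of the identity
    A = R + alpha * w_inv * (Q.T @ Q)     # symmetric pentadiagonal system
    gamma = np.linalg.solve(A, w_inv * (Q.T @ y_tilde))
    return w_inv * (y_tilde - alpha * Q @ gamma)
```

The small system has size $p-2$ and bandwidth two, which is what makes an $O(p)$ banded solve possible.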
To tackle this second problem, we speed up the procedure using the algorithm of \citet{hutchinson1985smoothing}. The difference $\hat{\bm \mu}_j^{-\{ij\}} - \bm y_{ij}$ required for the estimation of the cross-validation score can then be expressed as \begin{equation} \label{eq:diff} \hat{\bm \mu}_j^{-\{ij\}} - \bm y_{ij} = \frac{\hat{\bm \mu}_j - \bm y_{ij}}{1 - \bm S_{jj}\hat{z}_{ik}}, \end{equation} where $\bm S_{jj}$ are the diagonal elements of the smoothing matrix $ \bm S$. For more details refer to Appendix \ref{ch:appendix}. Note that Equation (\ref{eq:diff}) is a powerful result, as it lets us express the difference $\hat{\bm \mu}_j^{-\{ij\}} - \bm y_{ij}$ in terms of the mixture mean estimator $\hat{\bm \mu}_k$ computed without omitting any data. This entails that only one mixture mean estimation per $\alpha_k$ is required to calculate its corresponding cross-validation score over all omissions. \section{Empirical evaluation} \label{ch:rslt} We empirically evaluate and compare SMIXS to the base GMM algorithm, from which SMIXS is derived, on several synthetic datasets. We evaluate its clustering performance, regression performance, and computational complexity. We implement our own version of the GMM algorithm, which differs from the SMIXS implementation only in the maximization step. This ensures that any difference in performance is due solely to the way SMIXS estimates the model parameters in the maximization step. By comparing the performance of the two algorithms we want to show the advantages of SMIXS in terms of clustering and regression curve accuracy. The running times of both SMIXS and GMM algorithms depend on the number of clusters we wish to find, the number of subjects present in a dataset, and the number of measurements the dataset contains for each subject. The synthetic datasets enable us to investigate their performance by varying the input variables mentioned above.
Also, instead of using a static learning rate to conduct the smoothing parameter estimation in the synthetic dataset study, we use a dynamic one based on the approximation of the second derivative \cite{mlakar2021application}. This might speed up the convergence of the estimation procedure since the learning rate is adjusted based on the slope of the gradient, albeit at the cost of additional resources spent estimating the second derivative. To show the applicability of the SMIXS algorithm to a real-world problem, we cluster atmospheric sounding data from Ljubljana. Temperature inversions in Ljubljana are common in winter, which greatly affects air quality. We hypothesize that a mixture analysis could improve the prediction of PM$_{10}$ concentrations. In this case study, we use a static learning rate in the smoothing parameter estimation procedure, since it provides good clustering and regression performance. \subsection{Algorithm initialization} The EM algorithm is an iterative approach, whose performance is highly dependent upon the initial values of the involved parameters. The result of the EM algorithm is usually a local rather than the global maximum. Finding good results therefore necessitates multiple runs with different starting points. We evaluate the quality of each run using the log-likelihood. To initialize the starting parameters of cluster means, cluster variances, and mixture proportions, we utilize the k-means algorithm. \subsection{Synthetic dataset construction}\label{sec:synth_data} For the quantitative analysis, we constructed a synthetic dataset generator, which is available on GitHub \cite{smixsgithub}. The generator constructs a dataset with $c$ clusters, $p$ measurements, and $n$ samples or subjects. The data have one independent variable (time) and one dependent variable (the measurements). We sample individual subjects from their clusters by adding white noise to the corresponding cluster means.
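This sampling step can be sketched as follows (a minimal illustration with hypothetical names; arbitrary smooth curves stand in here for the Perlin-noise means used by the actual generator):

```python
import numpy as np

def sample_dataset(means, n_per_cluster, noise_sd, seed=0):
    """Sample subjects by adding white noise to their cluster means.
    `means` is a (c, p) array of smooth cluster-mean curves."""
    rng = np.random.default_rng(seed)
    Y, labels = [], []
    for k, mu in enumerate(means):
        Y.append(mu + rng.normal(0.0, noise_sd, size=(n_per_cluster, len(mu))))
        labels += [k] * n_per_cluster
    return np.vstack(Y), np.array(labels)

# Two smooth stand-in cluster means over p = 30 time points.
t = np.linspace(0.0, 1.0, 30)
means = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
Y, labels = sample_dataset(means, n_per_cluster=5, noise_sd=0.1)
```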
The means are smooth functions created by sampling Perlin noise \cite{perlin1985image}. We added different levels of noise to simulate low to high distortion in measurements. Examples of randomly generated datasets can be seen in Figure \ref{fig:synth}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_1.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_2.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_3.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_4.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_5.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_6.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_7.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_8.png} \includegraphics[width=0.32\textwidth]{figures/figures_synth/f11_9.png} \end{center} \caption{Examples of nine randomly generated datasets containing three clusters with a low level of white noise \cite{mlakar2021application}.} \label{fig:synth} \end{figure} To test the resilience of the algorithm to noise, we vary the amount of noise present in a dataset. An example of varying noise levels can be seen in Figure \ref{fig:synth_noise}. We created $500$ different datasets for each of five different cluster counts. Each of these $2500$ datasets was then subjected to four increasing levels of noise, resulting in $10000$ total datasets. We used these datasets to analyse the clustering and regression performance of both SMIXS and GMM. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{figures/figures_synth/fn0.png} \caption{Example of a dataset subjected to increasing amounts of white noise.
Each color represents samples from a specific latent cluster \cite{mlakar2021application}.} \label{fig:synth_noise} \end{figure} \subsection{Performance metrics} \label{sec:perf-met} For the evaluation of clustering performance, we use the F-score \cite{chinchor1993muc}, which is defined as \begin{align*} F = 2\frac{P R}{P + R}, \end{align*} where $P$ denotes precision and $R$ denotes recall. The multi-class confusion matrix enables us to compute the F-score for each cluster separately, treating it as a binary classification problem. To get the final F-score, we average the per-cluster F-scores, giving equal weight to each cluster independent of its size. We compare SMIXS and GMM by counting the number of times one of them outperforms the other in terms of F-score. We also track the margin of the better-performing algorithm. We evaluate the quality of the mean cluster curve generated by GMM and SMIXS by calculating the sum of squared differences between the true cluster curves (which are known for synthetic data) and the cluster curves produced by the clustering algorithms. For the final evaluation and comparison, we average the squared differences over all clusters. Finally, we compare the computational complexities of GMM and SMIXS. We also analyze the effects that some implemented speed-ups have on the SMIXS algorithm, in particular the addition of the Reinsch algorithm. For this evaluation, we examine three versions of SMIXS relative to the base GMM: \begin{itemize}[label=$\bullet$] \item SMIXS; complete algorithm as described in Section~\ref{ch:mth} with $\alpha$ optimization and all the time complexity reductions, \item SMIXS CA; algorithm SMIXS without $\alpha$ optimization, and \item SMIXS CA-NR; algorithm SMIXS without $\alpha$ optimization and without the Reinsch algorithm. \end{itemize} We do not evaluate the computational effects of the algorithm of \citet{hutchinson1985smoothing}.
The high computational complexity of the base algorithm renders it intractable on the same dataset scale as the above-mentioned variants. Judging from the theoretical implications of the algorithms of \citet{hutchinson1985smoothing} and \citet{green1993nonparametric}, and from our own testing, the computational complexity reduction is significant, especially when combined with the Reinsch algorithm \cite{reinsch1967smoothing}. \subsection{Synthetic evaluation} \label{sec:syneval} \subsubsection{Clustering performance analysis} Looking at Figure \ref{fig:synth_cls}, we can see that SMIXS outperforms GMM in almost all scenarios. \begin{figure}[htb] \centering \includegraphics[width=0.9\textwidth]{figures/figures_synth/synFS2.png} \caption{Clustering performance of SMIXS versus GMM. Columns correspond to the number of datasets on which a specific algorithm performed better than the other. The exception is the light blue column, which corresponds to the number of datasets on which they both performed equally well \cite{mlakar2021application}.} \label{fig:synth_cls} \end{figure} When the amount of noise is low, as seen at noise level one, both algorithms perform equally well for small cluster counts. This is due to the clustering problems being simple enough that both algorithms construct a perfect clustering of the dataset. In those cases, the F-score is the same for both algorithms. But as the number of clusters in the dataset increases, so does the lead of SMIXS over GMM. Increasing the amount of noise exacerbates this trend, revealing that SMIXS copes better with a higher number of clusters and high-noise situations compared to GMM, while also not performing worse in the case of a smaller number of clusters or low noise. The margins in performance displayed in Figure \ref{fig:synth_cls_mrg} further corroborate these findings. The median F-score for SMIXS is consistently above that of GMM.
Likewise, the lower quartiles never extend below those of GMM, and GMM's upper quartiles never extend above SMIXS's. Again, the only exceptions are the low-noise, low-cluster-count examples, where both algorithms performed equally well. \begin{figure}[htb] \begin{center} \includegraphics[width=0.33\textwidth]{figures/figures_synth/f_noise_1.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/f_noise_2.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/f_noise_3.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/f_noise_4.png} \end{center} \caption{Clustering performance of SMIXS and GMM. The box plots aggregate the results of the respective algorithms over all the datasets for a specific noise level and cluster count. Higher values denote better performance for that algorithm.} \label{fig:synth_cls_mrg} \end{figure} \subsubsection{Regression performance analysis} To investigate the effectiveness of smoothing splines on regression accuracy when the latent generator functions are themselves smooth, we compute the mean squared error between the estimated cluster means and the true cluster means for both GMM and SMIXS. This gives us an idea of how far from the ground truth the regressed means are. First, let us look at bar graphs showing the number of datasets where one algorithm outperformed the other in terms of mean squared error (see Figure \ref{fig:synth_rg}). \begin{figure}[htb] \centering \includegraphics[width=0.9\textwidth]{figures/figures_synth/synSQ2.png} \caption{Regression performance of SMIXS versus GMM. Columns correspond to the number of datasets on which a specific algorithm performed better than the other. The exception is the light blue column, which corresponds to the number of datasets on which they both performed equally well \cite{mlakar2021application}.} \label{fig:synth_rg} \end{figure} The difference is more pronounced here compared to the previous analysis.
Here, even at low noise levels and low cluster counts, SMIXS outperforms GMM on a very large number of datasets. This demonstrates that smoothing splines offer clear benefits when regressing smooth latent generator functions. Examples of the regression curves constructed from both GMM and SMIXS, and their comparison to the ground truth, are visible in Figure \ref{fig:synth_rg_ex}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth]{figures/figures_synth/synSqEx0GMM.png} \includegraphics[width=0.45\textwidth]{figures/figures_synth/synSqEx0SMIXS.png} \end{center} \caption{Examples of the regression accuracy difference between GMM and SMIXS. SMIXS successfully dampens the effect small noise perturbations have on the regular mean. These perturbations are visible in the regression curve of GMM \cite{mlakar2021application}.} \label{fig:synth_rg_ex} \end{figure} The regression performance in terms of root mean squared error is displayed in Figure \ref{fig:synth_rg_mrg}. As the amount of noise increases, so does the margin in favour of SMIXS, confirming our previous findings that it is the better regression algorithm for smooth latent generating functions. \begin{figure}[htb] \begin{center} \includegraphics[width=0.33\textwidth]{figures/figures_synth/s_noise_1.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/s_noise_2.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/s_noise_3.png} \includegraphics[width=0.33\textwidth]{figures/figures_synth/s_noise_4.png} \end{center} \caption{Regression performance of SMIXS and GMM. The box plots contain the root mean squared errors (RMSE) between the algorithms' computed means and the true cluster curves over all datasets.} \label{fig:synth_rg_mrg} \end{figure} \subsubsection{Computational complexity analysis} Clustering performance is only a part of the complete performance picture.
We are also interested in how the execution time of the SMIXS algorithm compares to GMM and other variants of SMIXS, namely SMIXS CA-NR and SMIXS CA (as described in Section \ref{sec:perf-met}). Therefore, we investigate three scenarios where we create different synthetic datasets with varying numbers of clusters, measurements, and subjects. This reveals how performance scales with progressively larger datasets. Observing the results of the analysis displayed in Figure \ref{fig:synth_cplx}, we can immediately see that GMM is the fastest of all the tested approaches. This is not surprising, as all the other algorithms constitute an upgrade to GMM. However, in all tests the SMIXS CA variant follows GMM closely, displaying a relatively small performance hit. This suggests that if we predetermine a degree of smoothing for a specific dataset, the SMIXS CA algorithm could serve as a valid alternative to GMM even from the execution time perspective (keeping in mind the other performance benefits of smoothing). Of special note is also the performance of the SMIXS CA-NR variant in the case where we increase the number of measurements. At its peak it is almost $100$ times slower than the remaining methods, clearly demonstrating the effectiveness of the Reinsch algorithm in our framework, since SMIXS CA-NR lacks this speed-up. \begin{figure}[htb] \begin{center} \includegraphics[width=0.45\textwidth]{figures/figures_synth/sybTimeClusterCmp.png} \includegraphics[width=0.45\textwidth]{figures/figures_synth/sybTimeMeasurCmp.png} \includegraphics[width=0.45\textwidth]{figures/figures_synth/sybTimeSubjCmp.png} \end{center} \caption{Computational complexity of SMIXS variants plotted against GMM. Each figure represents the relative execution time compared to GMM by varying one parameter in the synthetic dataset.
This allows us to inspect the different impacts dataset parameters have on the algorithm execution \cite{mlakar2021application}.} \label{fig:synth_cplx} \end{figure} \subsection{Case study -- Clustering of atmospheric sounding data} \label{ch:cstd} To show the applicability of SMIXS, we use it to cluster atmospheric sounding data from Ljubljana, Slovenia. These measurements are an example of longitudinal data, where the air temperature is measured over different pressure levels (altitudes). Every measurement produces a curve, where temperature is the dependent variable and altitude the independent one. When we record such atmospheric measurements, the effects of different measuring locations, atmospheric states, and measurement times manifest as variations in the data. Therefore, to extract and encode these variations and potential similarities between individual samples, one can utilize cluster and regression analysis. Ljubljana, the capital of Slovenia, lies in a basin $295$ meters above sea level with a very unfavorable dispersion situation \cite{pucer2018impact}. It exhibits a continental climate with cold winters and hot summers. Temperature inversions are common in winter. Air temperature usually decreases with increasing altitude, but when the air at the ground is cooler than the air above it, we have a temperature inversion. During a temperature inversion, the temperature increases from the ground up to a certain altitude, above which it starts decreasing as expected. Temperature inversions affect air quality \cite{pucer2018impact}. The main air pollutant measured in Ljubljana is PM$_{10}$ \cite{pucer2018impact}, and although concentrations have been decreasing in recent years, they can still get very high on days with a temperature inversion and low wind.
The Slovenian Environment Agency (ARSO) \cite{arso} performs atmospheric sounding \cite{golden1986} every day at 5 in the morning using a radiosonde attached to a weather balloon. This way they measure temperature (temperature profile), air humidity, and wind speed at different altitudes, with the mean maximum altitude being $19913.7$ meters above sea level. Early morning temperature inversions can easily be identified and characterized by visually inspecting such temperature measurements, but their automatic processing is not straightforward. In this case study, we show how SMIXS can be used for the automatic processing of meteorological sounding data by clustering them into several clusters. Clustering helps us show that the depth of the morning temperature inversion is associated with higher daily PM$_{10}$ concentrations in Ljubljana. The data used consist of $100$ temperature measurements from altitudes of $300$ to $750$ meters above sea level at Ljubljana, capturing the most relevant air layers. Temperature inversions usually occur at lower altitudes. The closer they are to the ground, the more they affect air pollutant concentrations. Daily PM$_{10}$ concentrations are measured at the same location as the starting point of the atmospheric sounding. The data were provided by ARSO and span the years $2017$ to $2019$, resulting in $1072$ samples. We assessed the adequate number of clusters with the Bayesian information criterion (BIC) \cite{schwarz1978estimating}. We observe the plotted BIC curve for an increasing number of clusters from $2$ to $19$ and choose the number of clusters after which the decrease in BIC is not significant enough. In our case, this was when BIC improved by less than three percent as we increased the number of clusters.
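This selection rule can be sketched as follows (a hypothetical helper of ours illustrating the three-percent heuristic, assuming lower BIC is better):

```python
def pick_cluster_count(bics, counts, rel_tol=0.03):
    """Walk through increasing cluster counts and stop as soon as adding a
    cluster improves BIC by less than rel_tol (three percent here) relative
    to the previous value; return the last count before that point."""
    for i in range(1, len(bics)):
        improvement = (bics[i - 1] - bics[i]) / abs(bics[i - 1])
        if improvement < rel_tol:
            return counts[i - 1]
    return counts[-1]
```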
This procedure is in essence a heuristic, but it provides a good guideline as to the likely optimal number of clusters, trading off interpretability (too many clusters are hard to interpret) against cluster homogeneity. The analysis took $60$ minutes to complete, with $50$ initializations per number of clusters. Each time we keep the best initialization in terms of BIC. The centroids with the associated standard deviations of the clustered atmospheric sounding temperature profiles are shown in Figure \ref{fig:centroidi}. The colour of the centroids represents the measured daily PM$_{10}$ concentration. The centroid on the right represents the days with the most extreme temperature inversions, which are also the most polluted (reddest) days. Observing the centroids from right to left, we can see that clusters on the left are much less polluted (green) than the ones on the right, as expected. When there is an extreme temperature inversion in the morning, it usually does not break down during the day, and the smog-laden air remains trapped near the ground. When there is no temperature inversion, or only a shallow one, in the morning, the air masses usually mix at some point during the day due to atmospheric convection and the pollution dissipates. \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{./figures/cls_means_whole.png} \caption{Centroids of the $15$ different clusters representing different temperature profiles. To produce the $15$-cluster analysis, SMIXS required five minutes, conducting $50$ different clusterings and choosing the best one as the final result.} \label{fig:centroidi} \end{figure} Figure \ref{fig:cls1_cls4} represents the most and the least polluted clusters. Above each plot are the mean cluster PM$_{10}$ concentrations with their associated standard deviation.
Cluster 1 represents a normal situation where the temperature decreases with increasing altitude, and the PM$_{10}$ concentrations associated with it are predominantly low. Cluster 4 comprises extreme temperature inversion profiles, and the associated concentrations are high. From the sizes of both clusters, we can conclude that extreme inversion situations are quite rare and typical for the winter months, while days with no temperature inversion and low concentrations are common throughout the year, and more common still in late spring, summer and early autumn. \begin{figure}[htb] \begin{tabular}{cc} \includegraphics[width=0.412\textwidth]{./figures/pers_whole_cluster_1.png} & \includegraphics[width=0.412\textwidth]{./figures/pers_whole_cluster_4.png}\\ \includegraphics[width=0.412\textwidth]{./figures/cls_0_freq_inter.png} & \includegraphics[width=0.412\textwidth]{./figures/cls_3_freq_inter.png}\\ \end{tabular} \caption{Temperature profiles from Clusters 1 and 4 as clustered by the SMIXS algorithm (top) and the monthly frequencies of the two clusters (bottom). Cluster 1 has many more members than Cluster 4.} \label{fig:cls1_cls4} \end{figure} Figure \ref{fig:frequencies} represents the relative frequencies of the temperature profiles in each cluster. It shows the same pattern as we saw with Clusters 1 and 4. Extreme temperature inversions are quite uncommon, but temperature inversions in general are not (see Clusters 14 and 12). Still, days without temperature inversions are much more common, especially outside the winter months. \begin{figure}[htb] \centering \includegraphics[width=0.7\textwidth]{./figures/cls_frequency.png} \caption{Relative frequencies of the temperature profiles clustered in different clusters.
The colour represents the average daily PM$_{10}$ concentration.} \label{fig:frequencies} \end{figure} The application of SMIXS enabled us to characterize the temperature profile types typical for Ljubljana, and to assess the frequency of temperature inversions and the frequencies of the different profiles per month. It also enabled us to link different temperature profiles with PM$_{10}$ concentrations. All this would be impossible using the raw atmospheric sounding data. From this case study, we can see that clustering longitudinal data with SMIXS can help with their interpretation. In future work, we could add the cluster information of the meteorological sounding data to PM$_{10}$ prediction models \cite{faganeli2018bayesian}. \section{Conclusion} \label{ch:cncl} In this work, we proposed improvements to the longitudinal data analysis algorithm originally introduced by \citet{nummi2018semiparametric}. We implemented computational speed-ups pertaining to smoothing splines and modified them to fit this novel context, where they are combined with Gaussian mixture models (GMM). We also provided a new numerically stable variance estimator derived from the expectation-maximization framework. Lastly, we defined and implemented a new smoothing parameter estimation technique, which enabled our algorithm to exhibit a lower computational complexity compared to other prevalent methods such as grid search, while still yielding good smoothing results. The algorithm is available on GitHub \cite{smixsgithub}. We tested the SMIXS algorithm on synthetic datasets, where we compared its performance to GMM in terms of regression accuracy, clustering accuracy, and computational complexity. We showed that SMIXS achieves better results than GMM in both regression and clustering tests but lags behind GMM in computational complexity.
However, by predetermining the magnitude of smoothing enforced on each mixture mean, the execution time of SMIXS follows that of GMM closely, while still retaining the benefits brought by the introduction of smoothing splines into the GMM framework. Finally, we conducted a case study by analysing the atmospheric sounding data measured over the city of Ljubljana, the capital of Slovenia. SMIXS yielded interpretable clusters from a large number of otherwise uninterpretable trajectories. The identified clusters showed good correlations with daily PM$_{10}$ concentrations, making them a prospective feature for future PM$_{10}$ prediction models. For potential future research, different methods of selecting the smoothing parameter could be explored. Gradient descent is but one possibility, which itself allows for multiple variations (multiple $\alpha_k$ optimization steps per EM iteration, different types of dynamic learning rates). Striking a balance between computational complexity and estimation accuracy is difficult, which makes this problem challenging to solve. Another possible avenue of research would be to compare the performance of GMM and SMIXS, much in the same way as we did in our work, but with a smaller number of measurements in the dataset. In such cases, it can be expected that the amount of required smoothing would increase, therefore potentially allowing for a deeper analysis of the regression performance. \section*{Acknowledgement} The authors would like to acknowledge the Slovenian Environment Agency (ARSO), which provided the PM$_{10}$ concentrations and atmospheric sounding data. This work was supported by the Slovenian Research Agency (ARRS) research core funding P2-0209 (Jana Faganeli Pucer) and P1-0222 (Polona Oblak). \clearpage \bibliographystyle{splncs04nat}
\section{Introduction} \label{Introduction} The interaction between atoms and surfaces is of fundamental importance in physics, chemistry and biology. For example, atom-surface interactions play an important role in atomic force microscopy \cite{Binnig}, and they also affect the dynamical properties of a nearby atom or molecule. Hence the considerable attention they have gained in the last decades. The study of the electron dynamics and spectroscopy of highly excited Rydberg atoms in external fields has been a very active field of research in recent years \cite{Inarrea1, Inarrea2, Hua1, Salas}. Rydberg atoms are particularly important in these studies since they are sensitive to perturbations due to their large size and the weak binding of the excited electron; besides, they can be prepared and manipulated in the laboratory. The study of the dynamics of Rydberg atoms near a metallic surface has attracted great attention since this system can simulate many dynamical effects of atoms in strong fields, such as the Zeeman-Stark effect, the diamagnetic effect and the instantaneous van der Waals interaction. Since the interaction of a Rydberg atom with a metal surface takes place far from the surface (compared with the atom size), the atom-surface interaction can be modeled by the electrostatic method of images, i.e. the images of the electric charges of the atom act as another atom \cite{Ganesan, Simonovic, Dunning, Simonovic2}. The image atom therefore exerts additional forces on the atomic electron, thus affecting its dynamical properties. This van der Waals force between the atom and the nearby metal surface plays a vital role in the adsorption process. The same idea has been extended to Rydberg atoms near dielectric surfaces, where the classical motion is found to be chaotic or regular depending on the atom-surface distance \cite{Hua2, Hua3, Hua4}. In this letter we aim to study the dynamics of a Rydberg atom near a topologically insulating surface. 
Topological insulators (TIs) are an emerging class of materials which have attracted much attention in condensed matter physics. They are characterized by a bulk insulating behavior with metallic surface states protected by time-reversal (TR) symmetry \cite{Qi-Review, Liang}. In addition to their interesting electronic properties, TIs also display unusual electromagnetic properties, specifically the topological magnetoelectric effect (TME), which consists in the mixing of the electric and magnetic induction fields. Many interesting TMEs have been predicted, but none of them has been detected in the laboratory. The most striking consequence of the TME is the image magnetic monopole effect \cite{Qi-Science}, which consists in the following. Consider bringing an electric charge near the surface of a TI: in addition to the image electric charge, an image magnetic monopole will also appear inside the material. Physically, the monopole magnetic field is induced by a circulating vortex Hall current on the surface of the TI, which is sourced by the (tangential component of the) electric field next to the interface, rather than by a point magnetic charge. The problem we shall consider in this letter is that of bringing a Rydberg hydrogen atom near the surface of a TI, as shown in fig. \ref{System}. Due to the image magnetic monopole effect, the electric charges of the atom will produce image electric charges and image magnetic monopoles located inside the material, which in turn will interact with the atomic electron. More precisely, here we study the dynamics of the atomic electron when interacting with the image electric and magnetic fields. Using numerical techniques and Poincar\'{e} surfaces of section, we explore extensively the structure of the phase space for the TI TlBiSe$_{2}$. The phase space of the system is found to be separable into regions of vibrational and rotational motion. 
We show that vibrational-rotational-vibrational type transitions can be tuned with the topological magnetoelectric polarizability (TMEP) $\theta$. \section{Rydberg hydrogen atom near a TI surface} In this section, we briefly review the electromagnetic response of TIs and the image magnetic monopole effect, and then establish the Hamiltonian of the system. \subsection{Electromagnetic response of TIs} It has been suggested recently, based on subtle field-theoretical considerations, that the low-energy effective field theory which describes the electromagnetic response of TIs (independently of microscopic details) is defined by the Lagrangian \begin{align} \mathcal{L} = \frac{1}{8 \pi} \left( \varepsilon \textbf{E} ^{2} - \frac{1}{\mu} \textbf{B} ^{2} \right) + \frac{\alpha}{4 \pi ^{2}} \theta \textbf{E} \cdot \textbf{B} , \label{Lagrangian} \end{align} where $\textbf{E}$ and $\textbf{B}$ are the electromagnetic fields, $\alpha \simeq 1 / 137$ is the fine structure constant, $\varepsilon$ and $\mu$ are the permittivity and permeability, respectively, and $\theta$ is the TMEP. Because of TR symmetry, this theory is a good description of the bulk of a trivial insulator when $\theta = 0$ and of the bulk of a TI when $\theta = \pi$. When the boundary is included, it is a fair description of both the bulk and the surface only when a TR breaking perturbation is induced on the surface to gap the surface states. In this work we consider that the TR perturbation is a magnetic coating of small thickness which gaps the surface fermions, as shown in fig. \ref{System}. In the described situation, for a TI hosting $2n+1$ surface fermions (with $n \in \mathbb{Z}$), $\theta$ can be shown to be quantized as $\theta = \pm ( 2n+1 ) \pi$. Positive and negative values of $\theta$ are related to different signs of the magnetization in the direction perpendicular to the surface. The axion coupling term in Eq. 
(\ref{Lagrangian}) does not modify the Maxwell equations; it only redefines the constitutive relations as $\textbf{D} = \varepsilon \textbf{E} + \alpha (\theta / \pi) \textbf{B}$ and $\textbf{H} = \textbf{B} / \mu - \alpha (\theta / \pi) \textbf{E}$. Therefore, nontrivial effects due to the topological term (known as TMEs) appear only at the interface between a TI and a trivial insulator, where the TMEP suddenly changes. Specific TMEs have been predicted. For example, when polarized light propagates through a TI surface whose surface states have been gapped by TR symmetry breaking, topological Faraday and Kerr rotations take place \cite{Maciejko}. It was also recently proposed that the sign of the Casimir force between TIs can be tuned by means of the TMEP \cite{Grushin1, Grushin2}. As we will see later, the dynamics of a Rydberg hydrogen atom near a TI can also be tuned by means of the TMEP in a similar fashion. \begin{figure}[tbp] \begin{center} \includegraphics[width=2.5in]{figure.pdf} \end{center} \caption{{\protect\small Schematic of a Rydberg hydrogen atom near a three-dimensional TI half-space and its image electric charges and image magnetic monopoles. The TI is covered with a thin magnetic layer (not to scale) which gaps the surface states.}} \label{System} \end{figure} \subsection{Image magnetic monopole effect} Through the constitutive relations and the usual electromagnetic equations, an applied electric field can induce a magnetization, while a magnetic field can induce a polarization. More precisely, a tangential electric field on the surface of a TI generates a transverse surface current, giving rise to a half-integer quantum Hall effect. Therefore, when a pointlike electric charge is brought near the surface of a TI, the tangential component of its electric field induces a vortex Hall current, which generates a magnetic field that can be simulated by an image magnetic monopole inside the material. This is known as the image magnetic monopole effect. 
This image magnetic monopole effect can be studied straightforwardly in the same way as the image charge problem in an ordinary insulator \cite{Qi-Science}. However, the same result can also be obtained using more general techniques, such as the $SL (2 , \mathbb{Z})$ electric-magnetic duality group of TIs \cite{Karch} and Green's function techniques \cite{MCU1, MCU2, MCU3, MCU4}. In short, let us consider the geometry of fig. \ref{System}. The left half-space ($z<0$) is occupied by a TI with dielectric constant $\varepsilon$, magnetic permeability $\mu$ and TMEP $\theta$, whereas the right half-space ($z>0$) is the vacuum. A point electric charge $q$ is located in vacuum at $\textbf{r} _{0} = b \hat{\textbf{e}} _{z}$ from the TI. For $z>0$, the electric field can be interpreted as due to two point electric charges: one of strength $q$ at $\textbf{r} _{0}$, and the other, the image charge, of strength $q ^{\prime} = - \kappa q$, with \begin{align} \kappa = \frac{(\varepsilon - 1) (1 + 1 / \mu) + \tilde{\alpha} ^{2}}{(\varepsilon + 1) (1 + 1 / \mu) + \tilde{\alpha} ^{2}} \quad , \quad \tilde{\alpha} = \alpha (\theta / \pi ) , \label{kappa} \end{align} at the mirror point $- \textbf{r} _{0}$. The magnetic field can be interpreted as that of a magnetic monopole of strength \begin{align} g = \frac{2 q \tilde{\alpha}}{(\varepsilon + 1) (1 + 1 / \mu) + \tilde{\alpha} ^{2}} , \label{ImageMonopole} \end{align} at $- \textbf{r} _{0}$. Going back to our problem, when a Rydberg hydrogen atom (composed of two point electric charges) is brought near the surface of a TI, as shown in fig. \ref{System}, in addition to the image electric charges (shown in red), image magnetic monopoles also appear inside the material (shown in blue). The interaction of these images with the atomic electron will affect the dynamical properties of the latter. This is the chief motivation of this letter. 
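As a quick numerical check of eqs. (\ref{kappa}) and (\ref{ImageMonopole}), the short sketch below (our own illustrative Python, with the source charge set to $q = 1$ in Gaussian units) evaluates the image strengths for TlBiSe$_{2}$ ($\varepsilon = 4$, $\mu = 1$, $\theta = \pi$), and also verifies that $g$ is maximized at $\tilde{\alpha} = \sqrt{(\varepsilon + 1)(1 + 1/\mu)}$, where $\kappa = \varepsilon/(\varepsilon + 1)$ and $g = 1/\tilde{\alpha}$.

```python
# Image electric charge factor kappa and image monopole strength g,
# eqs. (kappa) and (ImageMonopole), with the source charge q = 1.
import math

ALPHA = 1 / 137.036  # fine structure constant

def image_strengths(eps, mu, alpha_tilde):
    """Return (kappa, g) for given material constants and alpha-tilde."""
    denom = (eps + 1) * (1 + 1 / mu) + alpha_tilde**2
    kappa = ((eps - 1) * (1 + 1 / mu) + alpha_tilde**2) / denom
    g = 2 * alpha_tilde / denom
    return kappa, g

# TlBiSe2: eps = 4, mu = 1, theta = pi  =>  alpha-tilde = alpha
kappa, g = image_strengths(4.0, 1.0, ALPHA)
print(f"kappa = {kappa:.6f}, g = {g:.3e}")  # kappa ~ 0.6, g = O(alpha)

# Critical coupling maximizing g: alpha-tilde_c = sqrt((eps+1)(1+1/mu))
a_c = math.sqrt((4.0 + 1) * (1 + 1 / 1.0))  # sqrt(10)
kappa_c, g_c = image_strengths(4.0, 1.0, a_c)
print(f"kappa_c = {kappa_c:.3f} (= eps/(eps+1)), g_c = {g_c:.6f} (= 1/a_c)")
```

For the physical value $\theta = \pi$ the monopole strength is only of order $\alpha$, which is why appreciable changes in the orbits are expected only near the much larger critical coupling considered later in the letter.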
\subsection{Hamiltonian} Here we derive the Hamiltonian of the classical system depicted in fig. \ref{System}. Let us consider the motion of an electron in a Coulomb field induced by an infinitely massive nucleus of charge $e >0$ at the origin of the coordinate system. The TI-vacuum interface is located at $z = -b$, and the TI is assumed to be covered with a thin magnetic coating such that the surface states are gapped and thus the TME takes place. Therefore, the atomic electron will be affected by image electric charges and image magnetic monopoles as well, and thus the appropriate Hamiltonian is that of a charged particle in electromagnetic fields, i.e. $\mathcal{H} = \left( \textbf{p} + e \textbf{A} \right) ^{2} / 2 m _{e} - e \phi (\textbf{r})$. The interaction between the involved electric charges and the atomic electron is given by \begin{align} - e \phi (\textbf{r}) = - \frac{e ^{2}}{r} + \frac{\kappa e ^{2}}{\sqrt{\rho ^{2} + (z+2b) ^{2}}} - \frac{\kappa e ^{2}}{4 (z + b)} , \label{CoulombInt} \end{align} where $\rho ^{2} = x ^{2} + y ^{2}$ and $r ^{2} = \rho ^{2} + z ^{2}$. The first term describes the usual Coulomb interaction between the nucleus and the atomic electron, and the last two terms account for the interaction between the latter and the two image electric charges. Due to the TME, the atomic charges will also produce image magnetic monopoles, whose magnetic fields will in turn interact with the atomic electron. In this letter we are concerned with the classical picture of this system, and therefore the electron spin is not considered. However, the image monopole magnetic fields will affect the orbital motion. In the Coulomb gauge, the vector potential is \begin{align} \textbf{A} (\textbf{r}) = \frac{g}{\rho ^{2}} \left( 1 - \frac{z + 2 b}{\sqrt{\rho ^{2} + (z + 2 b) ^{2}}} \right) \left( y \hat{\textbf{e}} _{x} - x \hat{\textbf{e}} _{y} \right) . 
\label{VectorInt} \end{align} This is nothing but the Schwinger vector potential, which is singular along the Dirac string $\vartheta = \pi$ \cite{Schwinger}. The last two terms in the Coulomb interaction (\ref{CoulombInt}) and the vector potential (\ref{VectorInt}) break the spherical symmetry of the free atom. However, the axial symmetry (around the $z$ axis) is still present. Therefore, in cylindrical coordinates $(\rho, z, \phi , p _{\rho} , p _{z} , p _{\phi})$ and atomic units, the Hamiltonian $\mathcal{H}$ of the system reads \begin{align} \mathcal{H} = \frac{p ^{2} _{\rho} + p ^{2} _{z}}{2} + \frac{p _{\phi} ^{2}}{2 \rho ^{2}} + \textbf{p} \cdot \textbf{A} + \frac{\textbf{A} ^{2}}{2} - \frac{1}{r} \notag \\ + \frac{\kappa}{\sqrt{\rho ^{2} + (z+2b) ^{2}}} - \frac{\kappa}{4 (z + b)} . \label{Hamiltonian} \end{align} Owing to the axial symmetry, the $z$ component $l _{z} = p _{\phi}$ of the angular momentum is conserved and the system (\ref{Hamiltonian}) is reduced to a dynamical system with two degrees of freedom. Here we consider the $l _{z} = 0$ case, in such a way that, besides the energy $E = \mathcal{H}$, the dynamical behavior of the atomic electron will depend on the external parameters $\kappa$, $g$ and $b$. \section{Classical dynamics of a Rydberg hydrogen atom near a TI surface} The hydrogen atom is one of the few real physical systems that are integrable. In the presence of external fields, its integrability depends on the type of perturbation. Since the perturbations in eq. (\ref{Hamiltonian}) couple the $\rho$ and $z$ coordinates, our problem is in general non-integrable. 
Therefore, here we explore the phase space structure by means of numerical techniques and Poincar\'{e} surfaces of section for the TI TlBiSe$_{2}$. \subsection{Scaling and regularization} To proceed with the numerical analysis of the dynamical behaviour of the system, we scale the coordinates and momenta in the form $\tilde{\textbf{r}} = \textbf{r} / b$, $\tilde{\textbf{p}} = \textbf{p} \sqrt{b}$. After dropping the tildes, the Hamiltonian (\ref{Hamiltonian}) becomes \begin{align} \tilde{\mathcal{H}} \equiv \xi = \frac{p ^{2} _{\rho} + p ^{2} _{z}}{2} - \frac{1}{\sqrt{\rho ^{2} + z ^{2}}} + \frac{\kappa}{\sqrt{\rho ^{2} + (z +2) ^{2}}} \notag \\ - \frac{\kappa}{4 (z + 1)} + \frac{\tilde{g} ^{2}}{2 \rho ^{2}} \left( 1 - \frac{z + 2}{\sqrt{\rho ^{2} + (z +2) ^{2}}} \right) ^{2} , \label{ScaledHamiltonian} \end{align} and the dynamics does not depend on the energy $E$ and the nucleus-surface distance $b$ separately, but only on the scaled energy $\xi = E b$. Also, we observe that the $\kappa$ parameter is unaffected by the scaling; however, the magnetic monopole strength becomes a running coupling constant $\tilde{g} = g / \sqrt{b}$ which depends on the distance $b$. It is useful to study the shape of the effective potential $\mathcal{U} (\rho ,z) = U (\rho ,z) + V (\rho ,z)$ in eq. (\ref{ScaledHamiltonian}), \begin{align} U (\rho ,z) &= - \frac{1}{\sqrt{\rho ^{2} + z ^{2}}} + \frac{\kappa}{\sqrt{\rho ^{2} + (z +2) ^{2}}} - \frac{\kappa}{4 (z + 1)} , \notag \\ V (\rho ,z) &= \frac{\tilde{g} ^{2}}{2 \rho ^{2}} \left( 1 - \frac{z + 2}{\sqrt{\rho ^{2} + (z +2) ^{2}}} \right) ^{2} , \end{align} through the determination of its critical points $P _{c} = (\rho _{c} , z _{c})$, which are given by the roots of the equations $\partial _{z} \mathcal{U} =0$ and $\partial _{\rho} \mathcal{U} = 0$. On the one hand, we observe that due to the Coulombic term $U (\rho ,z)$, the effective potential exhibits an infinite potential well at the origin. 
On the other hand, we find that the function $V (\rho , z)$ arising from the magnetic monopole contribution satisfies the following properties: $\lim _{\rho \rightarrow 0} V = 0$, $\lim _{\rho \rightarrow 0} \partial _{z} V = 0$ and $\lim _{\rho \rightarrow 0} \partial _{\rho} V = 0$. Therefore, the critical points of $\mathcal{U} (\rho ,z)$, when they exist, lie on the $z$ axis ($\rho _{c} = 0$). One can directly verify that the condition $\partial _{\rho} \mathcal{U} = 0$ is satisfied at $\rho _{c} = 0$. Therefore we determine $z _{c}$ from \begin{align} \partial _{z} \mathcal{U} (\rho _{c} , z _{c}) = \frac{z _{c}}{\vert z _{c} \vert ^{3}} - \frac{\kappa}{(z _{c} + 2) ^{2}} + \frac{\kappa}{4 (z _{c} +1) ^{2}} = 0 . \label{ConditionZ} \end{align} Note that $V (\rho , z)$ does not contribute to the determination of this critical point, since it vanishes on the $z$ axis away from the Dirac string. Furthermore, due to the divergent electron-(image)electron interaction at $z=-1$, a critical point always exists in the interval $(-1,0)$. To proceed with the numerical determination of $z _{c}$, we fix the constants by choosing the recently discovered TI TlBiSe$_{2}$ \cite{TlBiSe2, TlBiSe22, TlBiSe23}, for which $\theta = \pi$, $\mu = 1$ and $\varepsilon = 4$, so that $\kappa \approx 0.6$. Numerical solution of eq. (\ref{ConditionZ}) in the region of interest yields $z _{c} = - 0.739304$. By substituting the critical point $P _{c} = (0, z _{c})$ in the corresponding Hessian matrix we readily find that $P _{c}$ is a saddle point, whose energy is $\xi _{c} = -1.45208$. Physically, because $l _{z} = 0$ there is no centrifugal barrier and the electron can reach the origin, where the Hamiltonian (\ref{ScaledHamiltonian}) presents a singularity. The saddle point $P _{c}$ is the ionization channel through which the atomic electron can be captured by the surface. To avoid the Coulomb singularity, we perform the so-called Levi-Civita regularization \cite{LeviCivita}. 
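The quoted numbers are straightforward to reproduce. A minimal sketch (our own code, taking $\kappa = 0.6$ exactly and using SciPy's \texttt{brentq} root finder) solves eq. (\ref{ConditionZ}) on the interval $(-1,0)$ and evaluates the on-axis potential at the root:

```python
# Locate the saddle point of the effective potential on the z axis by
# solving eq. (ConditionZ), and evaluate the critical scaled energy
# xi_c = U(0, z_c). Reproduces z_c ~ -0.739304 and xi_c ~ -1.45208.
from scipy.optimize import brentq

KAPPA = 0.6  # TlBiSe2: eps = 4, mu = 1, theta = pi

def dU_dz(z):
    """d U / d z on the z axis (scaled coordinates)."""
    return z / abs(z)**3 - KAPPA / (z + 2)**2 + KAPPA / (4 * (z + 1)**2)

def U_axis(z):
    """Coulombic part of the effective potential on the z axis."""
    return -1 / abs(z) + KAPPA / abs(z + 2) - KAPPA / (4 * (z + 1))

# The divergent electron-image interaction at z = -1 guarantees a sign
# change of dU/dz across (-1, 0), so a bracketing root finder applies.
z_c = brentq(dU_dz, -0.999, -0.001)
xi_c = U_axis(z_c)
print(f"z_c  = {z_c:.6f}")   # ~ -0.739304
print(f"xi_c = {xi_c:.5f}")  # ~ -1.45208
```

With these reference values at hand, we can turn to the regularization of the Coulomb singularity.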
This procedure consists in a change to semiparabolic coordinates $(u,v)$, \begin{align} \rho = uv \qquad , \qquad z = (u ^{2} - v ^{2}) / 2 . \end{align} The conjugate momenta, $p _{u} = du / d \tau$ and $p _{v} = dv / d \tau$, are defined with respect to the rescaled time $\tau$, given by $d t = (u ^{2} + v ^{2}) \, d \tau$. Finally, after an overall multiplication by $u ^{2} + v ^{2}$, the Hamiltonian (\ref{ScaledHamiltonian}) reads \begin{align} \tilde{\mathcal{H}} ^{\prime} &= 2 = \frac{p _{u} ^{2} + p _{v} ^{2}}{2} - \xi (u ^{2} + v ^{2}) \label{ParabolicHamiltonian} \\ & + \frac{2 \kappa (u ^{2} + v ^{2})}{\sqrt{4 u ^{2} v ^{2} + (u ^{2} - v ^{2} + 4) ^{2}}} - \frac{\kappa (u ^{2} + v ^{2})}{2 ( u ^{2} - v ^{2} + 2)} \notag \\ & + \frac{\tilde{g} ^{2}}{2} \left( \frac{1}{u ^{2}} + \frac{1}{v ^{2}} \right) \left( 1 - \frac{u ^{2} - v ^{2} + 4}{\sqrt{4 u ^{2} v ^{2} + (u ^{2} - v ^{2} + 4) ^{2}}} \right) ^{2} . \notag \end{align} Note that the regularized Hamiltonian $\tilde{\mathcal{H}} ^{\prime}$ takes the constant value $2$ and the scaled energy $\xi$ appears as a parameter. The Hamilton equations of motion arising from (\ref{ParabolicHamiltonian}) are \begin{align} \dot{u} = p _{u} \qquad &, \qquad \dot{v} = p _{v} , \\ \dot{p} _{u} = - \frac{\partial \tilde{\mathcal{H}} ^{\prime}}{\partial u } \qquad &, \qquad \dot{p} _{v} = - \frac{\partial \tilde{\mathcal{H}} ^{\prime}}{\partial v} . \end{align} Next we explore the structure of the phase space by means of numerical techniques and Poincar\'{e} surfaces of section. 
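As a consistency check before computing surfaces of section, one can integrate these equations numerically and monitor the conserved value $\tilde{\mathcal{H}} ^{\prime} = 2$. The sketch below is illustrative only: it takes $\kappa = 0.6$ and $\xi = -2$, drops the $O(\tilde{g} ^{2})$ monopole term (of order $10^{-10}$ in scaled units for $b = 10,000$ a.u.), evaluates the gradients by central finite differences, and places an arbitrary initial condition on the shell $\tilde{\mathcal{H}} ^{\prime} = 2$.

```python
# Integrate the regularized equations of motion and verify that H' stays
# at its fixed value 2. The O(g~^2) monopole term is dropped (negligible
# for b = 10,000 a.u.); kappa = 0.6, xi = -2 (confined regime, xi < xi_c).
import numpy as np
from scipy.integrate import solve_ivp

KAPPA, XI = 0.6, -2.0

def hamiltonian(u, v, pu, pv):
    s = u**2 + v**2
    R = np.hypot(2 * u * v, u**2 - v**2 + 4)
    return (pu**2 + pv**2) / 2 - XI * s \
        + 2 * KAPPA * s / R - KAPPA * s / (2 * (u**2 - v**2 + 2))

def rhs(t, y, h=1e-6):
    u, v, pu, pv = y
    # central finite differences for dH'/du and dH'/dv
    dHdu = (hamiltonian(u + h, v, pu, pv) - hamiltonian(u - h, v, pu, pv)) / (2 * h)
    dHdv = (hamiltonian(u, v + h, pu, pv) - hamiltonian(u, v - h, pu, pv)) / (2 * h)
    return [pu, pv, -dHdu, -dHdv]

# arbitrary initial condition on the shell H' = 2: solve for pu0 > 0
u0, v0, pv0 = 0.5, 0.3, 0.0
pu0 = np.sqrt(2 * (2 - hamiltonian(u0, v0, 0.0, pv0)))
sol = solve_ivp(rhs, (0, 20), [u0, v0, pu0, pv0],
                rtol=1e-10, atol=1e-12, max_step=0.05)
drift = max(abs(hamiltonian(*sol.y[:, k]) - 2) for k in range(sol.y.shape[1]))
print(f"max |H' - 2| along trajectory: {drift:.2e}")
```

An energy drift many orders of magnitude below unity confirms that the trajectories feeding the Poincar\'{e} sections are numerically reliable.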
\begin{figure}[tbp] \begin{center} \includegraphics[width=1.7in]{PS1.pdf} \label{PS1} \includegraphics[width=1.7in]{PS2.pdf} \label{PS2} \\ \includegraphics[width=1.7in]{PS3.pdf} \label{PS3} \includegraphics[width=1.7in]{PS4.pdf} \label{PS4} \end{center} \caption{{\protect\small Poincar\'{e} surfaces of section for different values of the scaled energy}.} \label{Poincare} \end{figure} \subsection{Phase space structure} To get an overall vision of the phase space structure, we fix the nucleus-surface distance to be $b = 10,000$ a.u. $\approx 530$ nm, which is appropriate for experimental realization with Rydberg hydrogen atoms. In fig. \ref{Poincare}, we plot the Poincar\'{e} surfaces of section for different values of the scaled energy $\xi$. The Poincar\'{e} surface of section is plotted in the $v-p_{v}$ plane; it is therefore defined by all trajectories which intersect $u = 0$ with $p _{u} >0$. There is an essential difference between the cases $\xi < \xi _{c}$ and $\xi > \xi _{c}$, where the critical value $\xi _{c} = -1.45208$ corresponds to a highly excited atom with principal quantum number $n = 59$. We first discuss the case $\xi < \xi _{c}$, where the electron is confined to the infinite potential well and its dynamics is still close to the integrable limit $\xi \rightarrow - \infty$. Figures \ref{Poincare}a and \ref{Poincare}b show the Poincar\'{e} surfaces of section for $\xi = -2$ ($n = 50 $) and $\xi = -1.5$ ($n = 58$), respectively. We observe that the phase space is divided into several regions. The stable fixed point in the central region $(v=0,p _{v} = 0)$ corresponds to rectilinear orbits along the $u$ axis, which correspond to physical orbits along the positive $z$ axis. The ellipses around this fixed point are vibrational-type quasiperiodic orbits with the same symmetry pattern, i.e. mainly localized along the positive $z$ axis. Figure \ref{Orbits}a shows a vibrational-type orbit. 
The two stable fixed points located symmetrically to the left and right of the central region correspond to almost circular orbits with opposite angular momenta. The levels around these points are rotational-type quasiperiodic orbits with the same symmetry pattern. Figure \ref{Orbits}b shows a rotational-type orbit. Finally, the levels in the exterior region are vibrational-type quasiperiodic orbits mainly oriented along the negative $z$ axis. \begin{figure}[tbp] \begin{center} \includegraphics[width=1.7in]{Vib1.pdf} \label{Vib} \includegraphics[width=1.7in]{Rot1.pdf} \label{Rot} \end{center} \caption{{\protect\small Examples for: a) vibrational and b) rotational types of orbits.}} \label{Orbits} \end{figure} Now we focus on the opposite regime, when the energy is high enough that the electron can escape from the nuclear attraction. In this case, the system tends to be nonintegrable and the electron has access to the ionization channel located along the negative $z$ axis. In figures \ref{Poincare}c and \ref{Poincare}d we show Poincar\'{e} surfaces of section for $\xi = -1$ ($n = 71$) and $\xi = - 0.5$ ($n = 100$), respectively. A rich variety of behavior is observed in the Poincar\'{e} sections as the energy $\xi$ increases. In fig. \ref{Poincare}c we observe a behavior similar to that of the case $\xi < \xi _{c}$. In fig. \ref{Poincare}d we observe that the vibrational-type closed orbits oriented along the negative $z$ axis, representing quasiperiodic behavior, are replaced by irregular patterns, and eventually the Poincar\'{e} plane seems to be swamped by chaos. An interesting consequence of this system is the atomic ionization for $\xi > \xi _{c}$, which can be understood in phase space as follows. 
Since the escape channel is located along the negative $z$ axis (the $v$ axis in regularized coordinates), when $\xi > \xi _{c}$ the levels in the exterior region of the Poincar\'{e} surfaces of section are not bounded regions, because the rectilinear orbits (along the $v$ axis with $u=0$) are the first orbits to ionize. Since not all trajectories ionize, as part of the phase space remains isolated from the ionization channel (mainly the rectilinear orbits along the positive $z$ axis, i.e. the $u$ axis with $v=0$ in regularized coordinates), ionized orbits will appear in the phase space as unbounded regions enveloping stable orbits. This behavior is not shown in fig. \ref{Poincare}. A deeper understanding of this charge-transfer mechanism is obtained when it is described in terms of chemical reaction dynamics. In this letter we focus on the consequences of axion electrodynamics on the quasiperiodic orbits; the problem of atomic ionization due to TI surfaces is left for future investigations. \section{Tuning the dynamics of the atomic electron} In the previous section we have considered the case of the TI TlBiSe$_{2}$, for which $\theta = \pi$, $\varepsilon = 4$ and $\mu = 1$. As discussed before, when TR symmetry is broken on the surface of the TI (by adding a magnetic coating) the system becomes fully gapped and the TMEP is quantized in odd integer multiples of $\pi$, $\theta = \pm (2n+1) \pi$, where $n \in \mathbb{Z}$. The two signs correspond to the two possible orientations of the magnetization in the direction perpendicular to the surface, and $2n+1$ corresponds to the number of surface fermions. As we will demonstrate in the following, the dynamical properties of the system are sensitive to the value of $\theta$, and the theoretical tunability of its value will allow us to describe a mechanism for switching between vibrational and rotational types of orbits. 
\begin{figure}[tbp] \begin{center} \includegraphics[width=1.7in]{PSE1.pdf} \label{PSE1} \includegraphics[width=1.7in]{PSE2.pdf} \label{PSE2} \\ \includegraphics[width=1.7in]{RotE1.pdf} \label{RotE1} \includegraphics[width=1.7in]{VibE2.pdf} \label{VibE2} \end{center} \caption{{\protect\small Tuning the dynamics with the TMEP.}} \label{Poincare2} \end{figure} From eqs. (\ref{kappa}) and (\ref{ImageMonopole}) we can see that high values of $\theta$ and low values of $\varepsilon$ favor the topological contribution. Therefore, to amplify the effects due to the topological nontriviality of the material, let us consider the value of $\theta$ which maximizes the magnetic monopole strength $g$, namely $\tilde{\alpha} _{c} = \pm \sqrt{(\varepsilon +1)(1 + 1 / \mu)}$. This critical value produces $\kappa = \varepsilon / (\varepsilon + 1)$ and $g = 1 / \tilde{\alpha} _{c}$. Now let us consider a hypothetical topological insulator with the same optical properties as the TI TlBiSe$_{2}$ ($\varepsilon =4$, $\mu = 1$) but hosting a large number of Dirac cones on its surface. Most of the TIs discovered to date exhibit a single or a small number of surface fermions, and the problem of characterizing materials with large $\theta$ is still an open question. Therefore, although in practice the tunability of the TMEP from zero to high values is very unlikely, it is suitable to illustrate the effects of the topological nontriviality. In order to analyze the consequences of switching the TMEP from zero to high values, let us first consider the case $\theta = 0$ and compute the Poincar\'{e} surface of section for a given initial condition $\boldsymbol{\lambda} _{0} (\xi) = (p _{u 0} , p _{v 0} , u _{0} , v _{0} )$ and scaled energy $\xi = - 0.5$, as shown in fig. \ref{Poincare2}a. Here we recall that one of the components of the initial condition (e.g. 
$p _{v 0}$) is determined from the others through the energy shell condition $\tilde{\mathcal{H}} ^{\prime} (\boldsymbol{\lambda} _{0}, \tilde{\alpha} = 0) = 2$. We conveniently choose the rotational-type orbit depicted in fig. \ref{Poincare2}c. If we turn on the $\theta$ parameter from zero to $\pi$, the Kolmogorov-Arnold-Moser (KAM) theorem guarantees that the orbits do not significantly change their character with respect to the case $\theta = 0$, since the perturbations to the Hamiltonian due to the topological contributions are very small (of order $\alpha$). However, if we turn on the $\tilde{\alpha}$ parameter to its critical value $\tilde{\alpha} _{c}$, such perturbations will no longer be small, but strong enough to produce significant changes in the electron dynamics (the KAM theorem does not hold), as we will demonstrate in the following. Due to the energy shell condition $\tilde{\mathcal{H}} ^{\prime} (\boldsymbol{\lambda} _{1}, \tilde{\alpha} _{c}) = 2$, the initial condition now becomes $\boldsymbol{\lambda} _{1} (\xi) = (p _{u 0} , p _{v 1} , u _{0} , v _{0} )$. As shown in fig. \ref{Poincare2}b, the Poincar\'{e} surface of section changes its character significantly under the strong perturbations. Clearly, a rotational-vibrational type transition is induced by the change in $\tilde{\alpha}$. The corresponding vibrational-type orbit is shown in fig. \ref{Poincare2}d. One can demonstrate that a vibrational-rotational type transition can likewise be tuned with the TMEP. \section{Conclusions} \label{Conclusions} The problem of the dynamics of Rydberg atoms near surfaces is important in its own right in the fields of physics, chemistry and biology. When a slowly moving atom or ion approaches a metallic surface, the mutual interaction leads to electronic processes of great interest in physics. For example, as the atom approaches the surface, the outer electron is captured by the surface and the atom ionizes. 
After this charge transfer process, the positive ion is attracted to the surface by its image charge and is finally neutralized by an Auger process. This system has been widely studied in the literature, and extensions to dielectric surfaces have also been analyzed. In this letter, we have studied the classical dynamics of a Rydberg hydrogen atom near the surface of a planar topological insulator, as shown in fig. \ref{System}. Due to the large size of a Rydberg atom, the interaction with the TI surface takes place relatively far from the surface and is dominated by nonretarded electromagnetic forces. By virtue of the TME, the electric charges of the atom produce image electric charges and image magnetic monopoles located inside the material, which in turn interact with the atomic electron, thus affecting its dynamics. Owing to the axial symmetry of the system, when the Hamiltonian is expressed in cylindrical coordinates the $z$ component $l _{z}$ of the angular momentum is conserved, and the system reduces to two degrees of freedom. We have restricted our analysis to the case $l _{z} = 0$. By means of numerical techniques and Poincar\'{e} surfaces of section, we have explored the phase space structure of the system for the TI TlBiSe$_{2}$. The corresponding surfaces of section are shown in fig. \ref{Poincare} for different values of the scaled energy $\xi$, where we have distinguished important structures which include stable fixed points. The levels around these points include vibrational (fig. \ref{Orbits}a) and rotational (fig. \ref{Orbits}b) types of orbits. When the energy of the electron is greater than the critical energy, the electron has access to the ionization channel and can be captured by the TI surface. Interestingly, we have shown that vibrational-rotational-vibrational type transitions can be tuned with the TMEP $\theta$. To this end, we first consider a particular rotational-type orbit of the atomic electron for $\theta = 0$. 
Next we turn on the TMEP to its critical value $\tilde{\alpha} _{c}$ and observe the transition to a vibrational-type orbit. We provide a visual demonstration of this effect in fig. \ref{Poincare2}. Importantly, this transition is a unique signature of the topological nontriviality of the material. In practice, observing this effect requires a topological insulator hosting a large number of surface fermions on its boundary; however, the problem of characterizing such materials is still an open question in condensed matter physics. The proposed effect could also be explored in other magnetodielectric materials which are described by higher axion couplings, such as Cr$_{2}$O$_{3}$ \cite{Cr2O3}. However, these materials induce more general magnetoelectric couplings not considered in our model \cite{Essin}. A further extension of this work is the full quantum-mechanical treatment of the system \cite{Urrutia}. Also, inspired by the recently discovered semimetallic phases of quantum matter, it would be interesting to analyze the dynamics of Rydberg atoms near Dirac and Weyl semimetals. We hope that our results will be useful in understanding the dynamical behavior of atoms, ions, or molecules near topological phases of matter, and will encourage experimental efforts to attain full characterization of these novel phases. \acknowledgments AMR has been supported in part by the projects DGAPA(UNAM) IN104815 and CONACyT (M\'{e}xico) 237503.
\section{Home-grown Efficiency Maps \label{EMcreation}} While most efficiency maps included in the {SModelS\,v1.1}\ database were directly provided by the experimental collaborations as auxiliary material for their publications, we also produced a number of them ourselves in order to improve the coverage of simplified models. Such `home-grown' efficiency maps are particularly relevant for topologies with one or more intermediate particles: here we need several mass planes for the interpolation, but often only one is provided in the official results. The home-grown efficiency maps included in the v1.1.1 database are summarised in Table~\ref{tab:processdetails}. The TxNames denote the following simplified models:\footnote{For gluino decays, charge-conjugated final states are implicitly assumed.} \begin{table}[t!]\centering \footnotesize \begin{tabular}{|l|l|l|} \hline Analysis & Process & SMS topologies \\ \hline\hline \multirow{2}{*}{ATLAS-SUSY-2013-04~\cite{Aad:2013wta}} & $pp \to \tilde g \tilde g \,(j)$ & T1bbbb, T1btbt, T5, T5WW, T5ZZ \\ & $pp \to \tilde t_1 \tilde t_1^* \,(j)$ & T2tt, T6bbWW \\ \hline \multirow{6}{*}{CMS-SUS-13-012~\cite{Chatrchyan:2014lfa}} & $pp \to \tilde g\tilde g \,(j)$ & T1bbbb, T1btbt, T5, T5bbbb, T5tttt, T5WW, T5ZZ \\ & $pp \to \tilde t_1 \tilde t_1^* \, (j)$ & T2tt, T6bbWW \\ & $pp \to \tilde b_1 \tilde b_1^* \, (j)$ & T2bb \\ & $ pp \to \tilde\chi^\pm_1 \tilde\chi^0_2 \,(j)$ & TChiWZ \\ & $ pp \to \tilde\chi^+_1 \tilde\chi^-_1\,(j)$ & TChiWW \\ & $ pp \to \tilde\chi^0_2 \tilde\chi^0_2\,(j)$ & TChiZZ \\ \hline \multirow{2}{*}{ATLAS-SUSY-2013-11~\cite{Aad:2014vma}} & $ pp \rightarrow \tilde{l} \tilde{\bar l}$ & TSlepSlep \\ & $ pp \to \tilde\chi^+_1 \tilde\chi^-_1$ & TChiWW, TChipChimSlepSnu \\ \hline ATLAS-SUSY-2013-05~\cite{Aad:2013ija} & $pp \to \tilde b_1 \tilde b_1^* (j)$ & T2bb \\[1mm] \hline \end{tabular} \caption{Summary of the `home-grown' efficiency maps included in the v1.1.1 database.} \label{tab:processdetails} \end{table} 
\noindent \begin{itemize} \item T1bbbb: $p p \rightarrow \tilde g \tilde g$, $ \tilde g \rightarrow b \bar b \tilde\chi^0_1$\,; \item T1btbt: $p p \rightarrow \tilde g \tilde g$, $ \tilde g \rightarrow b t \tilde\chi^0_1$\,; \item T2bb: $p p \rightarrow \tilde b_1 \tilde b_1^*$, $\tilde b_1 \rightarrow b \tilde \chi^0_1$\,; \item T2tt: $p p \rightarrow \tilde t_1 \tilde t_1^*$, $\tilde t_1 \rightarrow t \tilde \chi^0_1$\,; \item T5: $p p \rightarrow \tilde g \tilde g$, $ \tilde g \rightarrow q \tilde q, \tilde q \rightarrow q \tilde\chi^0_1$\,; \item T5bbbb: $p p \rightarrow \tilde g \tilde g$, $\tilde g \rightarrow b\tilde{b}_1 , \tilde b_1 \rightarrow b \tilde\chi^0_1$\,; \item T5tttt: $p p \rightarrow \tilde g \tilde g$, $\tilde g \rightarrow t\tilde{t}_1 , \tilde t_1 \rightarrow t \tilde\chi^0_1$\,; \item T5WW: $p p \rightarrow \tilde g \tilde g$, $\tilde g \rightarrow q \bar{q}' \tilde \chi^{\pm}_1$, $\tilde \chi^{\pm}_1 \rightarrow W^\pm \tilde \chi^0_1$\,; \item T5ZZ: $p p \rightarrow \tilde g \tilde g$, $\tilde g \rightarrow q \bar{q} \tilde \chi^0_2$, $\tilde \chi^0_2 \rightarrow Z \tilde \chi^0_1$\,; \item T6bbWW: $p p \rightarrow \tilde t_1 \tilde t_1^*$, $\tilde t_1 \rightarrow b \tilde\chi^+_1$, $\tilde \chi^+_1 \rightarrow W^+ \tilde \chi^0_1$\,; \item TChiWW: $p p \rightarrow \tilde \chi^+_1 \tilde \chi^-_1$, $\tilde \chi^\pm_1 \rightarrow W^{\pm} \tilde \chi^0_1$\,; \item TChiWZ: $p p \rightarrow \tilde \chi^\pm_1 \tilde \chi^0_2$, $\tilde\chi^\pm_1 \rightarrow W^{\pm} \tilde \chi^0_1$, $\tilde \chi^0_2 \rightarrow Z \tilde \chi^0_1$ \,; \item TChiZZ: $p p \rightarrow \tilde \chi^0_2 \tilde \chi^0_2 $, $\tilde \chi^0_2 \rightarrow Z \tilde \chi^0_1$\,; \item TSlepSlep: $p p \rightarrow \tilde e^+ \tilde e^-$ or $\tilde \mu ^+ \tilde \mu^-$, $ \tilde e^{\pm} \rightarrow e^{\pm} \tilde\chi^0_1$, $ \tilde \mu^{\pm} \rightarrow \mu^{\pm} \tilde\chi^0_1$\,; \item TChipChimSlepSnu: $p p \rightarrow \tilde \chi^+_1 \tilde \chi^-_1$, $\tilde{\chi}_1 ^{\pm} 
\rightarrow l^{\pm}\tilde{\nu}_{l} $ or $\nu_{l}\tilde{l}^{\pm}$, $\tilde{\nu}_l \rightarrow \nu_l \tilde{\chi}_1^0$, $\tilde{l}^{\pm} \rightarrow l^{\pm}\tilde{\chi}_1^0$\,. \end{itemize} Monte Carlo events for the above processes were generated using {\sc MadGraph5\textunderscore}a{\sc MC@NLO}~\cite{Alwall:2014hca} (v.2.3.3) with decays and hadronisation performed via the integrated installation of {\sc Pythia\,6.4}~\cite{Sjostrand:2006za}. For recasting analyses sensitive to hadronic final states, events were generated including up to one extra jet, see Table \ref{tab:processdetails}. The merging was performed using Matrix Element--Parton Shower (ME-PS) matching according to the $k_T$-jet MLM scheme~\cite{MLM:scheme,Hoche:2006ph,Alwall:2007fs} with merging and matching parameters set to $(Qcut,XQcut)=(90,50)$~GeV, following CMS practice. The hadronised samples were then passed through {\sc MadAnalysis\,5}~\cite{Dumont:2014tja,Conte:2014zja} (v1.4.4) or {\sc CheckMATE}~\cite{Drees:2013wra} (v1.2.2) for further processing. Both codes interface to the {\sc Delphes\,3}~\cite{deFavereau:2013fsa} detector simulation, using {\sc FastJet}~\cite{Cacciari:2011ma} for jet clustering. Concretely, we used the following recasting codes: \noindent \begin{itemize} \item the {\sc MadAnalysis\,5}\ recast code \cite{ATLAS-SUSY-2013-04MA5} for the ATLAS multi-jet + MET analysis~\cite{Aad:2013wta}; \item the {\sc MadAnalysis\,5}\ recast code \cite{CMS-SUS-13-012MA5} for the CMS multi-jet + MET analysis~\cite{Chatrchyan:2014lfa}; \item the {\sc MadAnalysis\,5}\ recast code \cite{ATLAS-SUSY-2013-11MA5} for the ATLAS dilepton plus MET analysis~\cite{Aad:2014vma}; \item {\sc CheckMATE}~\cite{Drees:2013wra} for the ATLAS third-generation squark search~\cite{Aad:2013ija}.
\end{itemize} In the case of topologies with one-step cascade decays (T5, T5bbbb, T5tttt, T5WW, T5ZZ, T6bbWW, TChipChimSlepSnu), we created at least three different mass planes, in which the mass of the intermediate sparticle takes a value close to the mother mass, a value close to the neutralino mass, and a value halfway between the two, in order to obtain good coverage of the parameter space. To this end, a parameterization in terms of relative mass differences, $m_{Interm.} = x\cdot m_{Mother} + (1 - x) \cdot m_{\tilde \chi^0_1}$ with $x={\rm const.}$, was used. Moreover, where relevant, we produced planes with fixed mass gaps $\Delta M(A,B)=m_A-m_B$ to improve the accuracy of the interpolation, {e.g.}\ for T5tttt near the kinematic boundary for on-shell top quarks, and for the T5WW and T6bbWW models (only for CMS-SUS-13-012) in the region where the chargino decays via an off-shell $W$ boson. For the T1btbt model, the chargino and the LSP were considered mass-degenerate. An overview of the various mass planes is given in Table~\ref{tab:massplanes}. \begin{table}\centering\footnotesize \begin{tabular}{| l | l | l |} \hline TxName & mass planes, $x$ parametrization & mass planes, fixed $\Delta M(A,B)$ [GeV] \\ \hline \hline T5 & $x= 0.05,\, 0.5,\, 0.95$ & \\ T5tttt & $x=0.5$ & $\Delta M(\tilde g,\tilde t_1) = 177$, $\Delta M(\tilde t_1,\tilde\chi^0_1) = 177$ \\ T5bbbb & $x = 0.05,\, 0.5,\, 0.95$ & \\ T5WW & $x = 0.05,\, 0.5,\, 0.95$ & $\Delta M(\tilde\chi^\pm_1,\tilde\chi^0_1) = 10, 75$ \\ T5ZZ & $x = 0.05,\, 0.5,\, 0.95$ & \\ T6bbWW & $x = 0.1,\, 0.5,\, 0.9$ & $\Delta M(\tilde\chi^\pm_1,\tilde\chi^0_1) = 10, 75$ \\ TChipChimSlepSnu & $x=0.05,\, 0.25,\, 0.50,\, 0.75,\, 0.95$ & $\Delta M(\tilde\chi^\pm_1,\tilde l)=5,10,15$, $\Delta M(\tilde l,\tilde\chi^0_1)=5,10,15$ \\ \hline \end{tabular} \caption{Mass planes produced for topologies with one-step cascade decays.
\label{tab:massplanes}} \end{table} In the input SLHA files, we assumed diagonal mixing matrices for both charginos and neutralinos, giving a pure bino $\tilde \chi^0_1$ and pure winos $\tilde \chi^{\pm}_1$ and $\tilde \chi^0_2$. However, since {\sc Pythia\,6.4} was used for decaying the intermediate particles, any effects in kinematic distributions that might arise from sparticle mixing (due to polarization of the decay products) as well as spin correlations are disregarded. No additional radiation was produced for ATLAS-SUSY-2013-11 since the signal regions targeting these specific SMS veto the presence of jets. For the TSlepSlep simplified model, we took $\tilde l \equiv \tilde e_L, \tilde e_R, \tilde\mu_L, \tilde\mu_R$, with all of them being mass-degenerate. For the TChipChimSlepSnu model, the intermediate left-handed sleptons and sneutrinos are taken to be mass-degenerate for all three slepton flavours. We have verified that our procedure reproduces well the official 95\%~CL exclusion curves whenever provided by the experimental collaborations; the validation plots are available at~\cite{smodels:validation}. For completeness, we also note that detailed validation notes for the {\sc MadAnalysis\,5}\ recast codes are available at~\cite{ATLASSUSY201304MA5,CMSSUS13012validation,ATLASSUSY201311validation} and for {\sc CheckMATE}\ at~\cite{CMvalidation}. \section{Basic Concepts and Structure} \label{SModelSDefs:basic-concepts-and-definitions} A central task of {SModelS}\ is the decomposition of a complex BSM model into its set of simplified model topologies. As mentioned in the introduction, this decomposition relies on several assumptions~\cite{Kraml:2013mwa}: first, the production channel is not taken into account, and only on-shell particles are considered in cascade decays. Virtual particles are replaced by an effective vertex, where only the on-shell decay products are specified.
Additionally, new states are described only by their masses, neglecting all other quantum numbers, thus disregarding potential influences of the spin and coupling structure on the selection efficiencies. Finally, it should be noted that the SMS approach is only valid within the narrow-width approximation. For a safe application of {SModelS}\ (in particular to non-MSSM or non-SUSY scenarios), these assumptions should be understood and, if needed, verified. A few studies on the validity of the SMS assumptions are available. For example, the effects of alternative production channels in squark simplified models were studied in~\cite{Edelhauser:2014ena}. The effect of a different spin structure was studied for the case of the dijet+MET final state in~\cite{Edelhauser:2015ksa}, for the dilepton+MET final state in~\cite{Arina:2015uea} and for $t\bar{t}$+MET final states in~\cite{Kraml:2016eti}. For all these cases it was found that the application of SMS limits is safe. Generally, however, the validity of the SMS assumptions will depend on the concrete model under consideration, as well as on the details of the experimental search. In particular, an inclusive cut-and-count search might be less sensitive to differences than a shape-based analysis. Examples for which the SMS assumptions do not hold include the mono-X analyses performed in the context of dark matter searches. Dark matter simplified model results are therefore not included in {SModelS\,v1.1}. In this section we first explain the basic concepts and language used in {SModelS\,v1.1}\ to describe simplified model topologies. This is followed by detailed descriptions of UL-type and EM-type results and of the database of experimental results.
\subsection{Simplified Model Definitions} \label{TheoryDefinitions:theorydefs}\label{TheoryDefinitions::doc}\label{TheoryDefinitions:theory-definitions} \subparagraph{Elements} \label{TheoryDefinitions:elements}\label{TheoryDefinitions:element} A simplified model topology representing a specific cascade decay of a pair of BSM states produced in the hard scattering is called an `element' in the SModelS language. Elements contain the final-state ($\mathbb{Z}_2$-even) particles appearing in the cascade decay as well as the masses of the BSM ($\mathbb{Z}_2$-odd) states in the decay chain. An element may also hold information about its corresponding `weight' (cross section times branching ratio times efficiency).\footnote{In order to treat the UL and EM map results on the same footing, {SModelS\,v1.1}\ applies a trivial binary efficiency to elements for UL-type results, as will be explained later.} Figure~\ref{TheoryDefinitions:elementscheme} illustrates an element and its properties. \begin{figure}[h!] \begin{center} \includegraphics[width=0.44\linewidth]{topSchemeB.png}\qquad\includegraphics[width=0.34\linewidth]{{elementB}.png} \end{center} \caption{Representation of an element, including its overall properties (left) and a concrete example with the information used in {SModelS}\ (right).}\label{TheoryDefinitions:elementscheme} \end{figure} {SModelS}\ works under the inherent assumption that, for collider purposes, all the essential pro\-per\-ties of a BSM model can be encapsulated by its elements. While some caveats apply (see above), such an assumption is extremely helpful to cast the theoretical predictions of a specific BSM model in a model-independent framework, which can then be compared against the corresponding experimental limits.
For instance, as shown in Figure~\ref{TheoryDefinitions:elementscheme}, only the masses of the BSM states are used; other properties, such as their spins or other quantum numbers, are ignored (the PIDs are, however, stored for book-keeping). Below we describe in more detail the element properties and their implementation in SModelS. \subparagraph{Vertices} \label{TheoryDefinitions:vertex}\label{TheoryDefinitions:vertices} Each $\mathbb{Z}_2$-odd decay is represented by a vertex containing its final states (one $\mathbb{Z}_2$-odd state and the $\mathbb{Z}_2$-even particles). \subparagraph{Final States ($\mathbb{Z}_2$-even)} \label{TheoryDefinitions:final-states-z2-even}\label{TheoryDefinitions:final-states} Final states indicate all $\mathbb{Z}_2$-even states coming out of a vertex. In most cases, these correspond to Standard Model particles (leptons, gauge bosons, Higgs, ...). Note that, if the input model contains BSM states which are $\mathbb{Z}_2$-even, such as additional Higgs bosons, these are also treated as final states. In contrast, stable or long-lived $\mathbb{Z}_2$-odd particles which might appear in the detector (either as MET or charged tracks) are \emph{not} classified as final states. \subparagraph{Intermediate States ($\mathbb{Z}_2$-odd)} \label{TheoryDefinitions:odd-states}\label{TheoryDefinitions:intermediate-states-z2-odd} The $\mathbb{Z}_2$-odd states are always assumed to consist of BSM particles with $\mathbb{Z}_2$-conserving decays of the form: ($\mathbb{Z}_2$-odd state) \(\rightarrow\) ($\mathbb{Z}_2$-odd state) + final states. The only information kept from the intermediate states is their masses, see Figure~\ref{TheoryDefinitions:elementscheme} (right). If an intermediate state is stable and neutral, it is considered as a MET signal. \subparagraph{Branches} \label{TheoryDefinitions:branches}\label{TheoryDefinitions:branch} A branch is the basic substructure of an element.
It represents the cascade decays of a single initial $\mathbb{Z}_2$-odd state. The diagram in Figure~\ref{TheoryDefinitions:branchTop} illustrates an example of a branch. The structure of each branch is fully defined by its number of vertices and the number of final states coming out of each vertex. Furthermore, the branch also holds the information about the final states originating from each vertex and the masses of the intermediate states, as shown in Figure~\ref{TheoryDefinitions:branchTop} (right). \begin{figure}[h!] \begin{center} \includegraphics[width=0.350\linewidth]{{branchTopB}.png} \qquad \includegraphics[width=0.350\linewidth]{branchElB.png} \end{center} \caption{Representation of a branch, including its overall structure (left) and a concrete example with all the information it holds in the SModelS scheme (right).}\label{TheoryDefinitions:branchTop} \end{figure} \subparagraph{Element Representation: Bracket Notation} \label{TheoryDefinitions:element-representation-bracket-notation}\label{TheoryDefinitions:notation} The structure and final states of elements can conveniently be represented in textual form using a notation of nested brackets. Figure~\ref{TheoryDefinitions:bracketnotation} shows how to convert between the graphical and bracket representations of an element. \begin{figure}[h!] \begin{center} \includegraphics[width=0.7\linewidth]{bracketNotationB.png} \end{center} \caption{Conversion of an element to the bracket notation used in SModelS.} \label{TheoryDefinitions:bracketnotation} \end{figure} The brackets are ordered and nested in the following way. The outermost brackets correspond to the branches of the element. The branches are sorted according to their size and each branch contains an \emph{ordered} list of vertices. Each vertex contains a list of the final states originating from it. 
Schematically, for the example in Figure~\ref{TheoryDefinitions:bracketnotation}, we have: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
element = [branch1, branch2]
branch1 = [vertex1]
vertex1 = [l+,l\char`\-{}]
branch2 = [vertex1,vertex2]
vertex1 = [l+]
vertex2 = [nu]
\end{Verbatim} Using the above scheme, it is possible to unambiguously describe each element with a simple list of nested brackets. However, in order to fully specify all the information pertaining to a single element, we must also include the list of intermediate state masses and the element weight. The intermediate state masses are represented by a mass array for each branch, as shown in Figure~\ref{TheoryDefinitions:massnotation}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.75\linewidth]{{massNotationB}.png} \end{center} \caption{Example of an element `mass array', as used by SModelS.} \label{TheoryDefinitions:massnotation} \end{figure} \subparagraph{Topologies} \label{TheoryDefinitions:topologies}\label{TheoryDefinitions:topology} It is often useful to classify elements according to their overall structure or topology. Each topology corresponds to an \emph{undressed} element, stripped of its final states and $\mathbb{Z}_2$-odd masses. Therefore, the topology is fully determined by its number of branches, the number of vertices in each branch, and the number of final states coming out of each vertex. An example of a topology is shown in Figure~\ref{TheoryDefinitions:globTop}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.33\linewidth]{{globTopB}.png} \end{center} \caption{An example of a topology, which represents a class of elements.} \label{TheoryDefinitions:globTop} \end{figure} Within SModelS, elements are grouped according to their topology. Hence, topologies represent a list of elements sharing a common basic structure (same number of branches, vertices and final states in each vertex).
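To make this grouping concrete, the topology of an element written in the nested-bracket notation can be obtained by counting branches, vertices per branch, and final states per vertex. The following plain-Python sketch is ours, not the SModelS implementation; all names and example elements are purely illustrative:

```python
from collections import defaultdict

def topology_signature(element):
    # An element is a list of branches, a branch an ordered list of
    # vertices, and a vertex a list of Z2-even final states.  The
    # topology keeps only the counts, dropping the final-state labels.
    return tuple(tuple(len(vertex) for vertex in branch) for branch in element)

elements = [
    [[['l+', 'l-']], [['l+'], ['nu']]],   # the example element from the text
    [[['e+', 'e-']], [['e+'], ['nu']]],   # same structure, different labels
    [[['jet']], [['jet']]],               # a different topology
]

# group elements sharing a common basic structure
topologies = defaultdict(list)
for el in elements:
    topologies[topology_signature(el)].append(el)
# -> two topologies: ((2,), (1, 1)) holding two elements, and ((1,), (1,))
```

Any two elements with the same signature belong to the same topology, regardless of their final-state labels or mass arrays.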
\subsection{Database Definitions and Structure} \label{DatabaseDefinitions:databasedefs}\label{DatabaseDefinitions::doc}\label{DatabaseDefinitions:database-definitions} SModelS contains a large database of SMS results from ATLAS and CMS SUSY searches.\footnote{Extensions to non-MET searches are foreseen for future SModelS releases.} Starting with version~1.1, two types of experimental constraints are used: \begin{itemize} \item {\bf Upper Limit (UL)} constraints: constraints on \(\sigma \times BR\) of simplified models, usually provided by the experimental collaborations; \item {\bf Efficiency Map (EM)} constraints: constraints on the total signal (\(\sum \sigma \times BR \times \epsilon\)) in a specific signal region. Here \(\epsilon\) denotes the acceptance times efficiency. These are either directly provided by the experimental collaborations or computed by theory groups. \end{itemize} Although the two types of constraints above are very distinct, the folder structure and the object structure of SModelS are sufficiently flexible to handle both UL-type and EM-type results simultaneously. The database itself can be seen as a collection of experimental results, each of which obeys the following hierarchical structure: \begin{figure}[h!t]\centering \includegraphics[width=0.98\linewidth]{{databaseScheme}.png} \caption{Basic folder structure of the SModelS database.} \label{DatabaseDefinitions:databasescheme} \end{figure} \begin{samepage} \begin{itemize} \item {\bf Experimental Result: } corresponds to an experimental publication (published paper or preliminary result like a conference note or analysis summary) and contains a list of Data Sets as well as general information about the result (luminosity, publication reference, ...). \begin{itemize} \item {\bf Data Set: } represents a distinct signal region for a given experimental result. For EM-type results, there is one data set per signal region, each containing the relevant efficiency maps.
UL-type results are treated analogously, but, since they correspond to the best signal region or a combination of signal regions, they have just one data set containing the relevant upper limit map. \begin{itemize} \item {\bf Upper Limit map:} contains the upper limit constraints for UL-type results. Each map refers to a single simplified model (or more precisely to a single element or sum of elements). \item {\bf Efficiency map:} contains the efficiencies for EM-type results. Each map refers to a single element or sum of elements. \end{itemize} \end{itemize} \end{itemize} \end{samepage} A schematic summary of the above structure is shown in Figure~\ref{DatabaseDefinitions:databasescheme}. Below we describe in more detail the main concepts and building blocks of the SModelS database. \subsubsection{Upper Limit Results} \label{DatabaseDefinitions:ultype}\label{DatabaseDefinitions:experimental-result-upper-limit-type} Upper Limit experimental results contain the constraints on the cross section times branching ratio (\(\sigma \times BR\)) for simplified models from a specific experimental publication or preliminary result. These constraints are typically given in the form of UL maps, which correspond to 95\% confidence level (CL) upper limit values on \(\sigma \times BR\) as a function of the respective parameter space (usually BSM masses or slices over mass planes). The UL values usually correspond to the best signal region (for a given point in parameter space), to a combination of signal regions, or to more involved limits obtained by other methods. An example of a UL map is shown in Figure~\ref{DatabaseDefinitions:ulplot}. \begin{figure}[t!] \begin{center} \includegraphics[width=0.8\linewidth]{{ULexample}.png} \end{center} \caption{Example of a UL map from~\cite{Chatrchyan:2013lya}.
The information used by SModelS is indicated by arrows.} \label{DatabaseDefinitions:ulplot} \end{figure} Within SModelS, the UL map shown in Figure~\ref{DatabaseDefinitions:ulplot} is used to constrain the simplified model $\tilde{q} + \tilde{q} \to \left(jet+\tilde{\chi}_1^0\right) + \left(jet+\tilde{\chi}_1^0\right)$. In the SModelS notation defined in Section~\ref{TheoryDefinitions:theory-definitions}, this simplified model is mapped to the element \([[[jet]],[[jet]]]\). Usually a single experimental publication contains several UL maps, each one constraining different simplified models. We stress that the exclusion curve shown in the UL map above is never used by SModelS. \subparagraph{Upper Limit Constraint} \label{DatabaseDefinitions:upper-limit-constraint}\label{DatabaseDefinitions:ulconstraint} The upper limit constraint specifies which simplified model is constrained by the respective UL map. For simple constraints, such as the one shown in Figure~\ref{DatabaseDefinitions:ulplot}, it corresponds to a single element (\([[[jet]],[[jet]]]\)). In some cases, however, the constraint corresponds to a sum of elements. As an example, consider the ATLAS upper limit map shown in Figure~\ref{DatabaseDefinitions:constraintplot}. \begin{figure}[th!]
\begin{center} \includegraphics[width=0.9\linewidth]{{constraintExample}.png} \end{center} \caption{An example of a UL map from \cite{ATLAS-CONF-2013-049} which constrains the sum of two elements; correspondingly, the UL constraint is written as a sum of two elements in the SModelS database.} \label{DatabaseDefinitions:constraintplot} \end{figure} Here, the upper limits apply to the sum of the cross sections: \begin{gather} \begin{split}\sigma = \sigma([[[e^+]],[[e^-]]]) + \sigma([[[\mu^+]],[[\mu^-]]])\end{split}\notag \end{gather} In this case the UL constraint is: \begin{gather} \begin{split}[[[e^+]],[[e^-]]] + [[[\mu^+]],[[\mu^-]]]\end{split}\notag \end{gather} where it is understood that the sum is over the weights of the respective elements and not over the elements themselves. Note that the sum can be over particle charges, flavours, or more complex combinations of elements. However, almost all experimental results sum only over elements sharing a common topology. \subparagraph{Upper Limit Conditions} \label{DatabaseDefinitions:ulconditions}\label{DatabaseDefinitions:upper-limit-conditions} When the analysis constraints are non-trivial (i.e.\ refer to a sum of elements), it is often the case that there are implicit (or explicit) assumptions about the contribution of each element. For instance, in Figure~\ref{DatabaseDefinitions:constraintplot}, it is implicitly assumed that each lepton flavour contributes equally to the summed cross section: \begin{gather} \begin{split}\sigma([[[e^+]],[[e^-]]]) = \sigma([[[\mu^+]],[[\mu^-]]]) \;\;\; \mbox{(condition)}\end{split}\notag \end{gather} Therefore, when applying these constraints to general models, one must also verify that these conditions are satisfied.
Once again, we can express these conditions in bracket notation: \begin{gather} \begin{split}[[[e^+]],[[e^-]]] = [[[\mu^+]],[[\mu^-]]] \;\;\; \mbox{(condition)}\end{split}\notag \end{gather} where it is understood that the condition refers to the weights of the respective elements and not to the elements themselves. In several cases it is desirable to relax the analysis conditions, so that the analysis upper limits can be applied to a broader spectrum of models. Once again, for the example mentioned above, it might be reasonable to impose instead: \begin{gather} \begin{split}[[[e^+]],[[e^-]]] \simeq [[[\mu^+]],[[\mu^-]]] \;\;\; \mbox{(fuzzy condition)}\end{split}\notag \end{gather} The \emph{departure} from the exact condition can then be properly quantified and one can decide whether the analysis upper limits are applicable or not to the model being considered. Concretely, SModelS computes for each condition a number between 0 and 1, where 0 means the condition is exactly satisfied and 1 means it is maximally violated. Allowing for a \(20\%\) violation of a condition corresponds approximately to a ``condition violation value'' (or simply condition value) of 0.2. The condition values are given as an output of SModelS, so the user can decide on the allowed violation. \subsubsection{Efficiency Map Results} \label{DatabaseDefinitions:emtype}\label{DatabaseDefinitions:experimental-result-efficiency-map-type} Efficiency maps correspond to a grid of simulated acceptance times efficiency (\(A \times \epsilon\)) values for a specific signal region and a specific simplified model. In the following we will refer to \(A \times \epsilon\) simply as \emph{efficiency} and denote it by \(\epsilon\). Furthermore, additional information, such as the luminosity, the number of observed and expected events, etc., is also stored in an EM-type result.
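To make the constrained quantity concrete: for an EM-type result it is the total signal \(\sum \sigma \times BR \times \epsilon\), summed over all elements falling into a given data set. Below is a minimal Python sketch with invented numbers; only the 5.681~fb upper limit is taken from the \path{dataInfo.txt} example shown later in this section, and the function name is ours, not the SModelS API:

```python
def total_signal_fb(contributions):
    """Total signal in one signal region (data set):
    sum over elements of sigma*BR times efficiency.
    contributions: list of (sigma*BR in fb, efficiency) pairs."""
    return sum(xsec * eff for xsec, eff in contributions)

# two elements contributing to the same data set (toy numbers)
contributions = [(500.0, 0.0011), (120.0, 0.0007)]
signal = total_signal_fb(contributions)  # 0.55 + 0.084 = 0.634 fb

# compare with the 95% CL signal upper limit of that signal region,
# here the 5.681 fb quoted in the dataInfo.txt example of this section
r_value = signal / 5.681  # r > 1 would mean the model point is excluded
```

With these toy numbers the ratio is well below one, so no exclusion would be claimed from this signal region alone.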
An important difference between UL-type results and EM-type results is the existence of several signal regions in the latter, which in SModelS are mapped to data sets. While UL-type results contain a single data set, EM results hold several data sets, one for each signal region (see Figure~\ref{DatabaseDefinitions:databasescheme}). Each data set contains one or more efficiency maps, one for each element or sum of elements. The efficiency map is usually a function of the BSM masses appearing in the element, as shown in Figure~\ref{DatabaseDefinitions:emplot}. \begin{figure}[th!] \begin{center} \includegraphics[width=0.8\linewidth]{{EMexample}.png} \end{center}\vspace*{-4mm} \caption{An example of an efficiency map from~\cite{Chatrchyan:2014lfa}. The information used by SModelS is indicated by arrows.} \label{DatabaseDefinitions:emplot} \end{figure} The concept of efficiency maps can also be extended to UL-type results. For the latter, the efficiencies for a given element are either 1, if the element appears in the UL constraint, or 0, otherwise. Although trivial, this extension allows us to treat EM-type results and UL-type results in a very similar fashion (see Section~\ref{TheoryPredictions:theorypredictions} for more details). \subsubsection{TxName Convention} \label{DatabaseDefinitions:txname-convention}\label{DatabaseDefinitions:txname} Since describing the simplified models appearing in the upper limit or efficiency maps in bracket notation can be rather lengthy, it is useful to define a shorthand notation for the constraints. SModelS adopts a notation based on the CMS SMS conventions, where each specific constraint is labeled as \emph{T\textless{}constraint name\textgreater{}}, which we refer to as \emph{TxName}. For instance, the TxName corresponding to the constraint in Figure~\ref{DatabaseDefinitions:constraintplot} is \emph{TSlepSlep}. A complete list of TxNames can be found in \cite{smodels:dictionary}.
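Combining the TxName shorthand with the trivial binary efficiency for UL-type results described above, the bookkeeping can be sketched as follows. This is a toy illustration only; the dictionary and function names are ours, not the SModelS database or API, and the constraints listed follow examples quoted in this paper:

```python
# A TxName labels a constraint, i.e. an element or a sum of elements
# (each element written in the nested-bracket notation of the text).
constraints = {
    'TSlepSlep': [[[['e+']], [['e-']]], [[['mu+']], [['mu-']]]],
    'T2':        [[[['jet']], [['jet']]]],
}

def ul_efficiency(txname, element):
    """Trivial binary efficiency used to treat UL-type results on the
    same footing as EM-type results: 1 if the element appears in the
    UL constraint, 0 otherwise."""
    return 1.0 if element in constraints[txname] else 0.0
```

For instance, the element \([[[e^+]],[[e^-]]]\) gets efficiency 1 under \emph{TSlepSlep}, while a dijet element gets efficiency 0.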
\subsubsection{Database Structure} \label{DatabaseStructure:experimental-result-folder} Let us now discuss in more detail how the information about the experimental results is stored in the database. As shown in Figure~\ref{DatabaseDefinitions:databasescheme}, the basic structure is an ordinary (UNIX) directory hierarchy. A thin Python layer provides access to the database. In the official release, the database is organized according to the center-of-mass energies (`8TeV' and `13TeV'), followed by the name of the collaboration (e.g. `8TeV/ATLAS'). The third level of the directory hierarchy encodes the experimental results: \begin{itemize} \item {} 8TeV/ATLAS/ATLAS-SUSY-2013-02 \item {} 8TeV/CMS/CMS-SUS-12-024 \item {} ... \end{itemize} This folder structure is, however, flexible and may be customised by the user. Both more and fewer subdirectories are allowed, making, for instance, a simple division according to the data's provenance (`LHC', `FastLim', `MadAnalysis5', \ldots) a viable alternative. One small requirement imposed on the database is that the top-level directory must contain a \code{version} file with a simple version string. The structure of the experimental result folders is as follows. Each experimental result folder contains: \begin{itemize} \item {} a folder for each data set (e.g. \code{data}) \item {} a \path{globalInfo.txt} file \end{itemize} The \path{globalInfo.txt} file contains the meta information about the experimental result. It defines the center-of-mass energy \(\sqrt{s}\), the integrated luminosity, the id used to identify the result, and additional information about the source of the data.
Below we show the content of \path{CMS-SUS-12-024/globalInfo.txt} as an example: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
sqrts: 8.0*TeV
lumi: 19.4/fb
id: CMS\char`\-{}SUS\char`\-{}12\char`\-{}024
url: https://twiki.cern.ch/twiki/bin/view/CMSPublic/
  PhysicsResultsSUS12024
arxiv: http://arxiv.org/abs/1305.2390
publication: http://www.sciencedirect.com/science/article/pii/
  S0370269313005339
implementedBy: Wolfgang Waltenberger
lastUpdate: 2015/5/11
\end{Verbatim} \subparagraph{Data Set Folder} \label{DatabaseStructure:data-set-folder} Each data set folder contains: \begin{itemize} \item {} the Upper Limit maps for UL-type results or Efficiency maps for EM-type results (\path{TxName.txt} files) \item {} a \path{dataInfo.txt} file containing meta information about the Data Set \end{itemize} \subparagraph{Data Set Folder: Upper Limit Type} \label{DatabaseStructure:data-set-folder-upper-limit-type} Since UL-type results have a single data set, the info file only holds some trivial information, such as the type of experimental result (UL) and the data set id (`None' for UL-type results). For example, the content of \path{CMS-SUS-12-024/data/dataInfo.txt} is \begin{Verbatim}[commandchars=\\\{\},frame=lines]
dataType: upperLimit
dataId: None
\end{Verbatim} Each \code{TxName.txt} file contains the UL map for a given simplified model as well as some meta information, including the corresponding constraint and conditions.
The first few lines of \path{CMS-SUS-12-024/data/T1tttt.txt} read: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
txName: T1tttt
conditionDescription: None
condition: None
constraint: [[[\char`\'{}t\char`\'{},\char`\'{}t\char`\'{}]],[[\char`\'{}t\char`\'{},\char`\'{}t\char`\'{}]]]
figureUrl: https://twiki.cern.ch/twiki/pub/CMSPublic/
  PhysicsResultsSUS12024/T1tttt\char`\_{}exclusions\char`\_{}corrected.pdf
\end{Verbatim} The second block contains the upper limits as a function of the BSM masses, according to the mass array convention defined in Section~\ref{TheoryDefinitions:theory-definitions}. It is given as a Python array with the structure \code{[[masses,upper limit], [masses,upper limit],...]}. Explicitly, continuing the above example: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
upperLimits: [
[[[400.0*GeV, 0.0*GeV], [400.0*GeV, 0.0*GeV]], 1.815773*pb],
[[[400.0*GeV, 25.0*GeV], [400.0*GeV, 25.0*GeV]], 1.806528*pb],
[[[400.0*GeV, 50.0*GeV], [400.0*GeV, 50.0*GeV]], 2.139336*pb],
[[[400.0*GeV, 75.0*GeV], [400.0*GeV, 75.0*GeV]], 2.472143*pb],
...
\end{Verbatim} \subparagraph{Data Set Folder: Efficiency Map Type} \label{DatabaseStructure:data-set-folder-efficiency-map-type} For EM-type results, the \path{dataInfo.txt} file contains an id identifying the data set (signal region), the numbers of observed events and expected background events, the error on the number of background events, and the respective signal upper limits for this signal region. Below we list the content of \path{CMS-SUS-13-012-eff/3NJet6_1000HT1250_200MHT300/dataInfo.txt} as an example: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
dataType: efficiencyMap
dataId: 3NJet6\char`\_{}1000HT1250\char`\_{}200MHT300
observedN: 335
expectedBG: 305
bgError: 41
upperLimit: 5.681*fb
expectedUpperLimit: 4.585*fb
\end{Verbatim} Each \code{TxName.txt} file then contains the efficiency map for a given simplified model as well as some meta information.
Here are the first few lines of \path{CMS-SUS-13-012-eff/3NJet6_1000HT1250_200MHT300/T2.txt}: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
txName: T2
conditionDescription: None
condition: None
constraint: [[[\char`\'{}jet\char`\'{}]],[[\char`\'{}jet\char`\'{}]]]
figureUrl: https://twiki.cern.ch/twiki/pub/CMSPublic/
PhysicsResultsSUS13012/Fig\char`\_{}7a.pdf
\end{Verbatim} This first block of data in the \code{T2.txt} file contains the element (\([[[\mbox{jet}]],[[\mbox{jet}]]]\)), in bracket notation, to which the efficiencies refer, as well as a reference to the original data source and some additional information. The second block of data contains the efficiencies as a function of the BSM masses, given as a Python array with the structure \code{[[masses,efficiency], [masses,efficiency],...]}: \begin{Verbatim}[commandchars=\\\{\},frame=lines]
efficiencyMap: [
[[[312.5*GeV, 12.5*GeV], [312.5*GeV, 12.5*GeV]], 0.00109],
[[[312.5*GeV, 62.5*GeV], [312.5*GeV, 62.5*GeV]], 0.00118],
[[[312.5*GeV, 112.5*GeV], [312.5*GeV, 112.5*GeV]], 0.00073],
[[[312.5*GeV, 162.5*GeV], [312.5*GeV, 162.5*GeV]], 0.00044],
...
\end{Verbatim} \subsubsection{Binary (Pickle) Format} \label{DatabaseStructure:database-binary-pickle-format}\label{DatabaseStructure:databasepickle} The first time the \emph{Database} class is instantiated, the text files in \path{database-path} are loaded and parsed, and the corresponding data objects are built. The efficiency and upper limit maps themselves are subjected to standard preprocessing steps such as a principal component analysis and Delaunay triangulation. The simplices defined during triangulation are then used for linearly interpolating the data grid, thus allowing SModelS to compute efficiencies or upper limits for arbitrary mass values (as long as they fall inside the data grid).
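The interpolation step can be illustrated with \code{scipy}, which is among the SModelS dependencies. In the sketch below the PCA preprocessing is omitted, and the grid points and upper-limit values are invented for illustration:

```python
# Sketch of the grid interpolation (PCA step omitted): triangulate the mass
# points and interpolate linearly within the simplices. The grid and the
# upper-limit values below are made up for illustration.
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

# (m_mother, m_LSP) grid points in GeV with hypothetical upper limits in pb
points = np.array([[400.0, 0.0], [400.0, 25.0], [500.0, 0.0], [500.0, 25.0]])
values = np.array([1.0, 2.0, 3.0, 4.0])

tri = Delaunay(points)                    # simplices used for interpolation
interpolate = LinearNDInterpolator(tri, values)

inside = float(interpolate(450.0, 12.5))  # linear interpolation inside grid
outside = float(interpolate(600.0, 12.5)) # outside the convex hull -> nan
```

Mass points outside the convex hull of the grid yield \code{nan}, corresponding to the statement above that only masses inside the data grid can be constrained.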
This procedure provides an efficient and numerically robust way of dealing with generic data grids, including arbitrary parametrizations of the mass parameter space, irregular data grids and asymmetric branches. For the sake of efficiency, the entire database -- including the Delaunay triangulation -- is then serialized into a pickle file, which will be read directly the next time the database is loaded. If any changes in the database folder structure are detected, or if the Python or the SModelS version has changed, SModelS will automatically re-build the pickle file. This action may take a few minutes, but it is performed only once. If desired, the pickling process can be skipped using the option \emph{force\_load = `txt'} when running SModelS (see Section~\ref{RunningSModelS:runningsmodels}). \section{Conclusions and Outlook}\label{Conclusions} SModelS is an automatised tool for interpreting simplified model results from the LHC. It can decompose the signatures of any BSM model containing a $\mathbb{Z}_2$ symmetry into its SMS topologies and compare them to the existing LHC constraints from a large database of experimental results. Version~1.1 of the code, presented in this paper, includes several new features, most importantly the use of efficiency maps. Efficiency maps allow us to combine the results from different topologies and thus improve the constraining power of the tool. {SModelS\,v1.1}\ is also equipped with a likelihood and $\chi^2$ calculator useful for a basic statistical interpretation of the results. Moreover, extended information is provided on the topology coverage. Several speed improvements are further included in order to allow for faster testing of BSM models. Not counting results marked as `superseded by a more recent analysis', the database v1.1.1 comprises 186 results (125 upper limits and 61 efficiency maps) from 21 ATLAS and 23 CMS SUSY searches~\cite{smodels:listofanalyses}, covering a total of 37 simplified models.
Of these 44 searches, 11 (4 from ATLAS and 7 from CMS) are based on early 13~TeV Run~2 data, comprising 35 UL-type and 2 EM-type results. FastLim-1.0 efficiency maps converted to {SModelS\,v1.1}\ format are also available; they cover another 15 simplified models. A new database browser tool is provided for easy access to the information stored in the database. An update of the database with more 13~TeV results is in preparation and will be released soon, as will be new `home-grown' efficiency maps for testing topologies currently absent in the {SModelS}\ database. To illustrate the importance of the extension to efficiency maps presented here, we have tested how {SModelS\,v1.1}\ constrains the publicly available points from the ATLAS pMSSM study~\cite{Aad:2015baa}. For a direct comparison with ATLAS, we here consider only 8~TeV results in the database. Points with long-lived BSM particles, which currently cannot be treated in {SModelS}, are discarded. The fraction of points excluded by ATLAS which is also excluded by {SModelS}\ is shown in Figure~\ref{pMSSMgluinoPlot} as a function of the gluino mass. As can be seen, the gain from using efficiency maps is quite substantial, in particular for gluinos within LHC reach that exhibit a variety of different decay modes.% \footnote{A detailed study of the coverage of the pMSSM by simplified model results will be presented elsewhere.} Generally, the improvement achieved by using efficiency maps is particularly relevant for regions of parameter space where the signal cross section is spread over several competing simplified model topologies. We therefore ask the ATLAS and CMS collaborations to provide, whenever possible, efficiency maps for each (aggregated) signal region in addition to the UL maps---this will considerably enhance the usefulness of their SMS interpretations.
The alternative is to produce `home-grown' efficiency maps based on implementations of the experimental analyses in public recast frameworks like {\sc MadAnalysis\,5}~\cite{Conte:2014zja,Dumont:2014tja} or {\sc CheckMATE}~\cite{Drees:2013wra,Dercks:2016npn} as presented in Appendix~\ref{EMcreation}. This is, however, clearly less precise than using `official' efficiency maps. \begin{figure}[t!]\centering \includegraphics[width=0.8\linewidth]{excludedHisto.png} \caption{Fraction of the ATLAS-excluded pMSSM points~\cite{Aad:2015baa} also excluded by {SModelS}\ as a function of the gluino mass. The exclusion using only UL-type results (blue) is contrasted to the exclusion achieved by using also EM-type results (red). Note that this is based on 8 TeV results only; points with long-lived sparticles are not considered.} \label{pMSSMgluinoPlot} \end{figure} In conclusion, {SModelS\,v1.1}\ marks an important milestone for the SModelS collaboration, and towards the efforts of using SMS results in a systematic fashion. However, the mission is far from complete: extending the software to non-MET final states is but one out of several major improvements envisaged for future versions of SModelS. \section*{Acknowledgements} We thank the ATLAS and CMS SUSY groups for helpful discussions on their results, and in particular for providing SMS cross section upper limits and efficiency maps in digital format. This work was supported in part by the French ANR, project DMAstro-LHC ANR-12-BS05-0006, and the Theory-LHC-France Initiative of the CNRS (INP/IN2P3). F.A.\ is supported by the Austrian FWF, project P26896-N27, Su.K.\ by the ``New Frontiers'' program of the Austrian Academy of Sciences, U.L.\ by the ``Investissements d'avenir, Labex ENIGMASS'', and A.L.\ by the S\~ao Paulo Research Foundation (FAPESP), projects 2015/20570-1 and 2016/50338-6. 
The work of J.S.\ is supported by the collaborative research center SFB676 ``Particles, Strings, and the Early Universe'' by the German Science Foundation (DFG) and by the German Federal Ministry of Education and Research (BMBF). \section{Installation and Deployment}\label{Installation} \subsection{Standard Installation} \label{Installation:standard-installation} SModelS is Python code that requires Python version 2.6 or later with the Python packages \code{setuptools, unum, numpy, argparse, docutils} ($\ge 0.3$), \code{scipy} ($\ge 0.9.0$), and \code{pyslha} ($\ge 3.1.0$). The cross section computer provided by \code{smodelsTools.py} makes use of {\sc Pythia\,6.4}\xspace~\cite{Sjostrand:2006za} or {\sc Pythia\,8.2}\xspace~\cite{Sjostrand:2006za,Sjostrand:2014zea} together with {\sc NLLfast}\xspace 1.2 (7 TeV), 2.1 (8 TeV), and 3.1 (13 TeV) \cite{Beenakker:1996ch,Beenakker:1997ut,Kulesza:2008jb,Kulesza:2009kq,Beenakker:2009ha,Beenakker:2010nq,Beenakker:2011fu}. These programs need not be downloaded separately: {\sc Pythia\,6.4.27}\xspace and {\sc NLLfast}\xspace are included in the {SModelS\,v1.1}\ distribution, while {\sc Pythia\,8}\xspace will be downloaded and compiled automatically when required. A fortran compiler (preferably gfortran) is needed to compile {\sc Pythia\,6.4}\xspace and a C++ compiler for {\sc Pythia\,8}\xspace. In addition, the database browser in \code{smodelsTools.py} requires IPython. {SModelS\,v1.1}\ has been tested on Ubuntu, Scientific Linux (CERN) 5 and 6, as well as on Mac OS X 10.11. The {SModelS}\ package can be downloaded from~\cite{smodels:wiki}. Unpacking the tarball with \begin{Verbatim}[commandchars=\\\{\}] tar -zxvf smodels-v1.1.x.tgz \end{Verbatim} \noindent creates the directory \path{smodels-v1.1.x}, where the code (subdirectory \allowbreak \path{smodels}) and the results database (subdirectory \path{smodels-database}) are located. 
For installation, {SModelS}\ makes use of Python's \code{setuptools}: \begin{Verbatim}[commandchars=\\\{\}]
python setup.py install
\end{Verbatim} \noindent should install the entire project and compile the internal {\sc Pythia}\xspace and {\sc NLLfast}\xspace versions. It should also resolve the external dependencies, {i.e.}\ install the Python libraries listed above using e.g. \emph{pip}. If the Python libraries are installed in a system folder (as is the default behavior), it will be necessary to run the install command with superuser privilege. Alternatively, one can run \code{setup.py} with the \code{-{}-user} flag: \begin{Verbatim}[commandchars=\\\{\}]
python setup.py install --user
\end{Verbatim} In case the compilation of {SModelS}\ fails, it is advised to try to compile the tools manually, by issuing \code{make} in the \path{lib/} or in the \path{smodels-v1.1.x} directory. In case the installation of the external libraries fails, one can also try to install them manually and then rerun \code{setup.py}. To further help with installation and deployment, there is also a diagnostic tool available: \begin{Verbatim}[commandchars=\\\{\}]
python smodelsTools.py toolbox
\end{Verbatim} \noindent lists and checks all internal ({\sc Pythia}\xspace and {\sc NLLfast}\xspace) and external (\code{numpy, scipy, unum, ...}) dependencies. More details on the installation procedure, in particular specific instructions for installation on Ubuntu, Scientific Linux, other platforms or without superuser privileges can be found in the \path{README.rst} file provided with the code. In case everything fails, please contact the authors at \path{[email protected]}. The installation procedure also installs the {SModelS}\ database of experimental results in the \code{smodels-database} subdirectory.
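A minimal hand-rolled check in the spirit of the toolbox diagnostic (purely illustrative, not the tool itself) could simply test whether the required Python packages are importable:

```python
# Illustrative stand-in for a dependency check (the real diagnostic is
# 'smodelsTools.py toolbox'): report which required packages are importable.
import importlib

REQUIRED = ["setuptools", "unum", "numpy", "argparse", "docutils",
            "scipy", "pyslha"]

def check_dependencies(modules=REQUIRED):
    """Map each module name to True/False depending on importability."""
    status = {}
    for name in modules:
        try:
            importlib.import_module(name)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

for name, ok in check_dependencies().items():
    print(("[ok]   " if ok else "[FAIL] ") + name)
```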
As mentioned in Section~\ref{Conclusions}, the v1.1.1 database release contains 186 results (125 upper limits and 61 efficiency maps) from 44 searches; 24 of the EM-type results were `home-grown' using {\sc MadAnalysis\,5}~\cite{Dumont:2014tja} and {\sc CheckMATE}~\cite{Drees:2013wra} recasting, see Appendix~\ref{EMcreation}. The database also includes 35 preliminary results from 13 ATLAS and 4 CMS notes which were superseded by a subsequent publication; they are kept in the database for historical reasons but are not used in the default settings in {SModelS}. The complete list of analyses and results included in the database can be consulted at~\cite{smodels:listofanalyses}. We certify that all the results in the official database release have been carefully validated; the validation material can be found at~\cite{smodels:validation}. \subsection{Adding Results to the Database} The database can conveniently be updated independently from {SModelS}\ code updates. It suffices to unpack any new database tarball and replace the database directory. In the same fashion, one can easily add additional results as explained below. \subsubsection{Adding FastLim data} \label{Installation:adding-fastlim-data} The official SModelS database can be augmented with data from the FastLim~\cite{Papucci:2014rja} database. A tarball with the \emph{properly converted fastlim-1.0} efficiency maps can be downloaded from \cite{smodels:wiki}. Unpacking the tarball in the top-level directory of the database: \begin{Verbatim}[commandchars=\\\{\}]
mv smodels-v1.1-fastlim-1.0.tgz <smodels-database folder>
cd <smodels-database folder>
tar -xzvf smodels-v1.1-fastlim-1.0.tgz
rm smodels-v1.1-fastlim-1.0.tgz
\end{Verbatim} adds the FastLim-1.0 results in the standard directory structure ({i.e.}\ in \path{8TeV/ATLAS/}), see Section~\ref{DatabaseDefinitions:databasedefs}. {SModelS}\ auto-detects FastLim results and issues an acknowledgement.
When using these results, please properly cite the FastLim paper~\cite{Papucci:2014rja} and the relevant experimental results; for convenience, a bibtex file is provided in the smodels-fastlim tarball. For completeness we note that FastLim-1.0 contains 264 EM-type results based on 11 ATLAS conference notes, which were recast by the authors. Since in {SModelS\,v1.1}\ efficiencies with a relative statistical uncertainty greater than 25\% are set to zero and, moreover, zero-only EMs are discarded by default, effectively 163 EM-type results from 9 conference notes are used in practice. \subsubsection{Adding one's own results} \label{Installation:adding-one-s-own-results} The database is organized as files in an ordinary directory hierarchy. Therefore, adding additional experimental results is a matter of copying and editing text files. Once the new folders and files have been added following the format explained in Section~\ref{DatabaseDefinitions:databasedefs}, {SModelS}\ automatically rebuilds the binary (pickle) database file. \section{Introduction}\label{sec:intro} The ATLAS and CMS experiments at the Large Hadron Collider (LHC) are searching for new physics beyond the Standard Model (BSM) in many different channels. An important class of these searches considers final states with large missing transverse energy (MET), targeting supersymmetry (SUSY) with R-parity conservation or other models with a new conserved parity. In order to design optimal analyses that look for specific final states and to communicate the results, the concept of simplified models~\cite{hep-ph/0703088, 0810.3921, 1105.2838, Okawa:2011xg, 1301.2175} has been widely adopted by the experimental collaborations. While limits in terms of simplified model spectra (SMS) are a convenient way to illustrate and compare the reach of specific analyses, understanding how they constrain a realistic model with a multitude of relevant production channels and decay modes quickly becomes a non-trivial task.
To tackle this issue we have created {SModelS}~\cite{Kraml:2013mwa,Kraml:2014sna}, an automatised tool for interpreting simplified model results from the LHC. The principle of {SModelS}\ is to decompose BSM collider signatures featuring a $\mathbb{Z}_2$ symmetry into simplified model topologies, using a generic procedure where each SMS is defined by the vertex structure and the SM final state particles; BSM particles are described only by their masses, production cross sections and branching ratios. The SModelS decomposition corresponds to an approximate mapping of the original model signatures to a coherent sum of simplified model topologies. The underlying assumption~\cite{Kraml:2013mwa} is that differences in the event kinematics ({e.g.}\ from different production mechanisms or from the spin of the BSM particle) do not significantly affect the signal selection efficiencies. The tool can then be used for any BSM model with a $\mathbb{Z}_2$-like symmetry as long as all heavier R-odd particles (cascade-)decay promptly to the lightest one, which should be electrically and colour neutral.% \footnote{Charged tracks may also be treated in an SMS context~\cite{Heisig:2015yla} and will be available in future versions of {SModelS}.} Note that due to the $\mathbb{Z}_2$ symmetry only pair production or associated production of two BSM particles is considered, and MET is always implied in the final state description. Regarding experimental results, the tool contains a large database of SMS results from ATLAS and CMS SUSY searches. Since the publication of SModelS\,v1.0~\cite{Kraml:2014sna} in December 2014, the code base has undergone significant structural changes. Version~1.1 comes with many new features, the most important one being the {\it extension to efficiency maps}. 
\noindent The advantage of efficiency maps (EM) over the previously (and still) used cross section upper limit (UL) maps is that they make it possible to combine contributions from different SMS topologies to the same signal region (SR) of a given experimental analysis. This significantly enhances the coverage and constraining power of the SMS results. Further novelties of this release include: \begin{description} \item [\quad --] {\it a new and more flexible database format;} \item [\quad --] {\it extended information on the topology coverage;} \item [\quad --] {\it inclusion of likelihood and \(\chi^2\) calculation for {EM-type results};} \item [\quad --] {\it inclusion of a database browser tool for easy access to the information stored in the database;} \item [\quad --] {\it performance improvement for the decomposition of the input model;} \item [\quad --] {\it inclusion of new simplified model results to the database, including a few 13 TeV results;} \item [\quad --] {\it overall significant speedups of the code and the database.} \end{description} The purpose of this paper is to present these new features, describing in detail the concepts and procedures used in the code and the database. Particular attention is given to how upper limits and efficiency maps are dealt with simultaneously. The (re-)interpretation of LHC searches, which is the main goal of {SModelS}, has recently become a very active field. Several other public tools have been developed by different groups. In particular, FastLim~\cite{Papucci:2014rja} and XQCAT~\cite{Barducci:2014gna} employ pre-computed efficiency maps to constrain, respectively, the minimal supersymmetric standard model (MSSM) and extra heavy quark scenarios. SUSY-AI~\cite{Caron:2016hib} is a code which has been trained with machine learning techniques to reproduce the ATLAS exclusion for the phenomenological MSSM.
Finally, {\sc CheckMATE}~\cite{Drees:2013wra,Dercks:2016npn}, {\sc MadAnalysis\,5}~\cite{Conte:2014zja,Dumont:2014tja}, Rivet ($\ge2.5$) \cite{Buckley:2010ar} and GAMBIT's ColliderBit\cite{Athron:2017ard,Balazs:2017moi} allow for more general recasting of ATLAS and CMS searches based on Monte Carlo event simulation. Last but not least, Contur~\cite{Butterworth:2016sqg} aims at constraining BSM scenarios from differential Standard Model measurements based on the Rivet~\cite{Buckley:2010ar} toolkit. The rest of the paper is organised as follows. We start in Section~\ref{SModelSDefs:basic-concepts-and-definitions} by explaining the basic concepts of {SModelS}, including the structure of UL-type and EM-type results and of the database of experimental results. In Section~\ref{Structure:smodels-structure}, we then discuss in detail the decomposition procedure, the computation of the theory predictions, the comparison of theory predictions against the experimental results, and how missing topologies are identified. How to run {SModelS}\ is shown in Section~\ref{RunningSModelS:running-smodels}. Section~\ref{Conclusions} contains conclusions and an outlook. Installation instructions are provided in Appendix~\ref{Installation}. Appendix~\ref{Tools:smodels-tools} describes some useful SModelS-internal tools, while Appendix~\ref{EMcreation} provides information about `home-grown' efficiency maps included in the database. A complete, browsable manual and an extensive code documentation are provided in html format in the `docs' folder of the distribution; they are also available online at \cite{smodels:wiki}. In case of problems using {SModelS}, we kindly ask the user to contact the authors via \code{[email protected]}.\\ \vspace*{6mm} \hrule \vspace*{1mm} \noindent {\bf Note:} When using {SModelS\,v1.1}\ for scientific work (or any other purposes), please cite this paper as well as the original SModelS publication~\cite{Kraml:2013mwa}. 
In scientific publications, please also cite the programs {SModelS\,v1.1}\ makes use of: {\sc Pythia\,6.4}\xspace~\cite{Sjostrand:2006za} and/or {\sc Pythia\,8.2}\xspace~\cite{Sjostrand:2006za,Sjostrand:2014zea}, {\sc NLLfast}\xspace \cite{Beenakker:1996ch,Beenakker:1997ut,Kulesza:2008jb,Kulesza:2009kq,Beenakker:2009ha,Beenakker:2010nq,Beenakker:2011fu}, and PySLHA~\cite{Buckley:2013jua}. For convenience, these citations are provided as a bibtex file in the {SModelS}\ distribution. We also strongly recommend citing all the experimental results used; a separate bibtex file is provided to that effect in the {SModelS}\ database folder. In case you use the FastLim efficiency maps in {SModelS}\ (see `adding FastLim data' in Appendix~\ref{Tools:smodels-tools}), please also properly cite FastLim~\cite{Papucci:2014rja} and the relevant experimental results. These references are also provided in a bibtex file with the distribution. \vspace*{2mm} \hrule \section{The SModelS Procedure} \label{Structure:smodels-structure} In this section we describe in detail the main tasks performed by SModelS: the simplified model decomposition, the computation of the relevant signal cross-sections (``theory predictions'') and how these are confronted with the constraints stored in the database. Finally, we explain how missing topologies are identified. First, however, we need to describe how the information about the full model is given as an input. \subsection{Input Model} \label{BasicInput:basicinput}\label{BasicInput:index-0}\label{BasicInput::doc}\label{BasicInput:basic-input} The information about the input (full) model can be given as \begin{itemize} \item {} an SLHA (SUSY Les Houches Accord~\cite{Skands:2003cj}) file containing masses, branching ratios and cross sections for the BSM states, or \item {} an LHE (Les Houches Event~\cite{Alwall:2006yp}) file containing parton level events. \end{itemize} The SLHA format is usually more compact and best suited for supersymmetric models.
On the other hand, an LHE file can always be generated for any BSM model (through the use of your favorite MC generator).\footnote{{SModelS}\ can easily be used for non-SUSY models as long as they present a $\mathbb{Z}_2$-type symmetry. However, as mentioned at the beginning of Section~\ref{SModelSDefs:basic-concepts-and-definitions} (see also \cite{Kraml:2013mwa}), a few caveats need to be taken into account. It is the responsibility of the user to make sure that the experimental results used actually apply to the model under consideration---if necessary, a subset of results can be selected via the \code{parameters.ini} file, see Section~\ref{RunningSModelS:the-parameters-file}. } In this case, however, the precision of the results is limited by the MC statistics used to generate the file. \emph{\bf In the case of SLHA input}, the production cross sections for the BSM states have to be included in the SLHA file following the format defined in~\cite{slha:xsections}. Figure~\ref{BasicInput:xsecblock} exemplifies the SLHA cross section format with \(pp \rightarrow \tilde{u}_L^{\ast} + \tilde{g}\) at a center-of-mass energy of 8 TeV and at NLO+NLL QCD accuracy. The information used by SModelS is the center-of-mass energy, the outgoing particle PIDs, the cross section value, and the QCD order. \emph{If the input file contains two cross sections for the same process but at different QCD orders, only the highest order will be used.} \begin{figure}[h!] \begin{center} \includegraphics[width=1.25\linewidth]{xsecBlock.png} \end{center} \caption{Format of the SLHA cross section block as used in SModelS. Arrows annotate the meaning of each of the entries in the block.} \label{BasicInput:xsecblock} \end{figure} For the MSSM and some of its extensions,\footnote{Typically extensions by a dark matter candidate other than the neutralino as the lightest SUSY particle (LSP).
If the direct production of non-MSSM particles is important, the relevant cross sections have to be computed by external tools and then added by the user to the SLHA file.} the cross sections may be calculated automatically and appended to the SLHA file using the cross-section calculator tool provided by SModelS, see Appendix~\ref{Tools:cross-section-calculator}. \emph{\bf In the case of LHE input}, the total production cross section as well as the center-of-mass energy should be listed in the \textless{}init\textgreater{}\textless{}/init\textgreater{} block, according to the standard LHE format. Moreover, all the $\mathbb{Z}_2$-even particles (see definition in Section~\ref{TheoryDefinitions:theory-definitions}) should be set as stable, since in SModelS they are effectively considered as final states. When generating the events it is also important to ensure that no mass smearing is applied, so the mass values for a given particle are the same throughout the LHE file. \subparagraph{New Particles} \label{BasicInput:new-particles}\label{BasicInput:newparticles} Besides information about the masses and branching ratios, the user must also define which particles are $\mathbb{Z}_2$-odd states (intermediate states) and which are $\mathbb{Z}_2$-even (final states). These definitions must be given in the \code{particles.py} file, where some default values (for SM and MSSM particles) are already loaded. If the user wants to check the SLHA input file for possible errors (see Appendix~\ref{Tools:filechecks}), it is also necessary to define the particle's quantum numbers in the \code{particles.py} file. \subsection{Decomposition into Simplified Models} \label{Decomposition:decomposition}\label{Decomposition::doc}\label{Decomposition:decomposition-into-simplified-models} Given an input model, the first task of SModelS is to decompose it into a sum of simplified models (or `elements' in SModelS language). 
Depending on the input format (SLHA or LHE file), one of two distinct decomposition methods is applied: either the SLHA-based or the LHE-based decomposition. \subsubsection{SLHA-based Decomposition} \label{Decomposition:slha-based-decomposition}\label{Decomposition:slhadecomp} The SLHA file describing the input model is required to contain the masses of all the BSM states as well as their production cross sections and decay branching ratios. All of this information must follow the SLHA file standard. Once the production cross sections are read from the input file, all the cross sections for \emph{production of two} $\mathbb{Z}_2$-odd \emph{states} are stored and serve as the initial step for the decomposition. (All the other cross sections with a different number of $\mathbb{Z}_2$-odd states are ignored.) Starting from these primary mothers, all the possible decays are generated according to the information contained in the DECAY blocks. This procedure is represented by the first decomposition step shown in Figure~\ref{Decomposition:decomp2}. Each of the possible cascade decays for each mother corresponds to a branch. In order to finally generate elements, all the branches are combined in pairs according to the branching ratios as shown by the second step in Figure~\ref{Decomposition:decomp2}. \begin{figure}[h!] \begin{center} \includegraphics[width=0.80\linewidth]{{decomp2B}.png} \end{center} \caption{SModelS decomposition process: the diagram represents the creation of elements given the primary mother pair production and subsequent branches.} \label{Decomposition:decomp2} \end{figure} Each of the elements generated according to the procedure just described will also store its weight, which equals the element production cross section times all the branching ratios appearing in it. In order to avoid too large a number of elements, only those satisfying a {\it minimum weight requirement} are kept. Furthermore, the elements are grouped according to their topologies.
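The branch generation and pairing just described can be sketched schematically as follows. All names and the data model are hypothetical and much simplified with respect to the SModelS internals (in particular, real branches keep the full vertex structure rather than a flat list of final states, and the weight cut is applied already during branch generation):

```python
# Schematic (hypothetical names, simplified data model) version of the
# SLHA-based decomposition: expand each primary mother into its cascade-decay
# branches, then pair branches; the element weight is sigma times the product
# of all branching ratios along both branches.
from itertools import product

def branches(pid, decays):
    """All cascade decays of 'pid': list of (SM final states, BR product)."""
    if pid not in decays:                 # stable Z2-odd state ends the branch
        return [((), 1.0)]
    out = []
    for br, sm_finals, daughter in decays[pid]:
        for finals, weight in branches(daughter, decays):
            out.append((tuple(sm_finals) + finals, br * weight))
    return out

def decompose(xsecs, decays, sigmacut):
    """Combine branch pairs into elements, dropping weights below sigmacut."""
    elements = []
    for (pid1, pid2), sigma in xsecs.items():
        for (f1, w1), (f2, w2) in product(branches(pid1, decays),
                                          branches(pid2, decays)):
            weight = sigma * w1 * w2      # sigma x BR of the element
            if weight >= sigmacut:
                elements.append(((f1, f2), weight))
    return elements

# Toy model: gluino pair production; gluino decays to the neutralino LSP
# either via jets (BR 0.7) or via b quarks (BR 0.3).
decays = {1000021: [(0.7, ("jet", "jet"), 1000022),
                    (0.3, ("b", "b"), 1000022)]}
elements = decompose({(1000021, 1000021): 10.0}, decays, sigmacut=0.01)
```

For this toy input the four branch combinations give four elements whose weights sum to the input cross section of 10.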
The final output of the SLHA decomposition is a list of such topologies, where each topology contains a list of the elements generated during the decomposition. \subparagraph{Minimum Decomposition Weight} \label{Decomposition:minweight}\label{Decomposition:minimum-decomposition-weight} Some models may contain a large number of new states and each may have a large number of possible decays. As a result, long cascade decays are possible and the number of elements generated by the decomposition process may become too large, resulting in excessively long computing times. For most practical purposes, however, elements with very small weights can be discarded, since they will fall well below the experimental limits. Therefore, during the SLHA decomposition, whenever an element is generated with a weight below some minimum value, this element, and all elements which would be derived from it, are ignored. The minimum weight to be considered is set by the {\tt sigmacut} parameter and is easily adjustable, see Section~\ref{RunningSModelS:parameterfile}. Note that, when computing the theory predictions, the weights of several elements can be combined. Hence it is recommended to set the value of {\tt sigmacut} approximately one order of magnitude below the minimum signal cross sections the experimental data can constrain. \subsubsection{LHE-based Decomposition} \label{Decomposition:lhe-based-decomposition}\label{Decomposition:lhedecomp} More general models can be input through an LHE event file containing parton-level events, including the production of the primary mothers and their cascade decays. Each event can then be directly mapped to an element with the element weight corresponding to the event weight. Finally, identical elements can be combined (adding their weights). How the information from an LHE event is used to construct an SModelS element is illustrated in Figure~\ref{Decomposition:event}.
\begin{figure}[h!]\centering \includegraphics[width=\linewidth]{{eventExample}.png} \caption{An example of an LHE event. Arrows indicate the information used by SModelS to construct the corresponding element.} \label{Decomposition:event} \end{figure} Notice that, for the LHE decomposition, the elements generated are restricted to the events in the input file. Hence, the uncertainties on the element weights (and which elements are actually generated by the model) are fully dependent on the Monte Carlo statistics used to generate the LHE file. Also, when generating the events it is important to ensure that no mass smearing is applied, so the events always contain the same mass value for a given particle. \subsubsection{Compression of Elements} \label{Decomposition:compression-of-elements}\label{Decomposition:elementcomp} During the decomposition process it is possible to perform several simplifications on the elements generated. In both the LHE and SLHA-based decompositions, two useful simplifications are possible: mass compression and invisible compression. The main advantage of performing these compressions is that the simplified element is always shorter (has fewer cascade decay steps), which makes it more likely to be constrained by experimental results. The details behind the compression methods are as follows: \subparagraph{Mass Compression} \label{Decomposition:masscomp}\label{Decomposition:mass-compression} In case of small mass differences, the decay of an intermediate state to a nearly degenerate one will in most cases produce soft final states, which cannot be experimentally detected. Consequently, it is a good approximation to neglect the soft final states and \emph{compress} the respective decay, as shown in Figure~\ref{Decomposition:masscompfig}. \begin{figure}[b!]
\begin{center} \includegraphics[width=0.90\linewidth]{{massCompB}.png} \end{center} \caption{Schematic representation of `mass compression' performed by SModelS to deal with soft final states.} \label{Decomposition:masscompfig} \end{figure} After the compression, only the lighter of the two near-degenerate masses is kept in the element, as shown in Figure~\ref{Decomposition:masscompfig}. The main parameter which controls the compression is called \code{minmassgap} (see Section~\ref{RunningSModelS:parameterfile}), which corresponds to the maximum value of \(\epsilon\) in Figure~\ref{Decomposition:masscompfig} up to which the compression is performed: \begin{gather} \begin{split}& \mbox{if } |M_j - M_{j+1}| < {\tt minmassgap} \rightarrow \mbox{the decay is compressed}\\ & \mbox{if } |M_j - M_{j+1}| > {\tt minmassgap} \rightarrow \mbox{the decay is NOT compressed}\\\end{split}\notag \end{gather} Note that the compression is an approximation since the final states, depending on the boost of the parent state, may not always be soft. It is recommended to choose values of \code{minmassgap} of 1--10~GeV; the default value is 5~GeV. \subparagraph{Invisible Compression} \label{Decomposition:invisible-compression}\label{Decomposition:invcomp} Another type of compression is possible when the final states of the last decay are invisible. The most common example is \begin{gather} \begin{split}A \rightarrow \nu + B\end{split}\notag \end{gather} as the last step of the decay chain, where \(B\) is an invisible particle leading to a MET signature. Since both the neutrino and \(B\) are invisible, for all experimental purposes the effective MET object is \(B + \nu = A\). Hence it is possible to omit the last step in the cascade decay, resulting in a compressed element. Note that this compression can be applied consecutively to several steps of the cascade decay if all of them contain only invisible final states, as illustrated in Figure~\ref{Decomposition:massinvpfig}.
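The two compression criteria can be sketched as follows (a minimal illustration with hypothetical names; masses in GeV, ordered along the cascade, and a cascade represented as a list of final-state lists):

```python
# Minimal sketch of the two element compressions (hypothetical names,
# not the actual SModelS implementation).
def mass_compress(masses, minmassgap=5.0):
    """If |M_j - M_{j+1}| < minmassgap, merge the two near-degenerate
    states, keeping only the lighter mass."""
    compressed = [masses[0]]
    for m in masses[1:]:
        if abs(compressed[-1] - m) < minmassgap:
            compressed[-1] = min(compressed[-1], m)  # keep the lighter state
        else:
            compressed.append(m)
    return compressed

def invisible_compress(vertices, invisible=("nu",)):
    """Strip trailing cascade steps whose final states are all invisible."""
    out = list(vertices)
    while out and all(p in invisible for p in out[-1]):
        out.pop()
    return out
```

For example, a cascade with masses [300, 298, 100] and the default 5~GeV gap compresses to [298, 100], and a final vertex containing only a neutrino is removed.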
\begin{figure}[h!] \begin{center} \includegraphics[width=0.900\linewidth]{{invCompB}.png} \end{center} \caption{A schematic representation of `invisible compression' as performed by SModelS to deal with the emission of invisible particles in the final steps of the cascade decay.} \label{Decomposition:massinvpfig} \end{figure} \subsection{Computing Theory Predictions} \label{TheoryPredictions:theory-predictions}\label{TheoryPredictions:theorypredictions}\label{TheoryPredictions::doc} After the decomposition of the input model into simplified models, the next step consists of computing the relevant signal cross sections (or \emph{theory predictions}) for comparison with the experimental limits. Note that UL and EM-type results each require different theoretical predictions to be compared against experimental limits. While UL-type results constrain the total weight (\(\sigma \times BR\)) of a given simplified model, EM-type results constrain the total signal cross-section (\(\sum \sigma \times BR \times \epsilon\)) in a given signal region (data set). We deal with both types of results on an equal footing by defining \begin{equation} \mbox{theory prediction } = \sum_{elements} (\sigma \times BR \times \epsilon) \end{equation} where $\epsilon$ is a ``generalized efficiency'': for EM-type results it corresponds to the simplified model signal efficiency, while for UL-type results it equals 1 (0) if the element is (is not) constrained by the experimental result. With this definition the theory prediction value can then be directly compared to the respective 95\%~CL upper limit on the signal cross-section extracted from the UL map (for UL-type results) or to the signal upper limit for the respective signal region (for EM-type results). The calculation of the theory prediction can always be divided into two main steps: \emph{Element Selection} and \emph{Element Clustering}, as shown schematically in Figure~\ref{TheoryPredictions:theoryPredScheme}.
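The generalized-efficiency definition above amounts to a single sum over elements; a minimal sketch (the element dictionaries and helper names are hypothetical, not the SModelS API):

```python
# Sketch of the theory prediction as a sum of sigma x BR x eps over
# elements; the element representation and names are hypothetical.
def theory_prediction(elements, efficiency):
    """`efficiency` maps an element to eps: a 0/1 indicator for UL-type
    results, or a signal-region efficiency for EM-type results."""
    return sum(el["weight"] * efficiency(el) for el in elements)

# UL-type case: eps = 1 only for elements constrained by the result
ul_efficiency = lambda el: 1.0 if el["constrained"] else 0.0
```

For an EM-type result, `efficiency` would instead look up the simplified-model efficiency for the element's mass in the corresponding efficiency map.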
These steps are described in more detail below. \begin{figure}[h!] \begin{center} \includegraphics[width=\linewidth]{{theoryPredScheme}.png} \end{center} \caption{SModelS procedure for computing the signal cross sections. The generated elements undergo the selection and clustering procedures before the final theory prediction can be compared with the corresponding experimental limit.} \label{TheoryPredictions:theoryPredScheme} \end{figure} \subsubsection{Element Selection} Given a specific experimental result (either EM-type or UL-type), the first step for computing the theory predictions is to select the simplified models which are constrained by the corresponding result. For EM-type results this corresponds to selecting all elements which contain non-zero efficiencies ($\epsilon$) for a given signal region, while for UL-type results this step implies selecting all elements which are constrained by the corresponding UL map (see Section~\ref{DatabaseDefinitions:database-definitions}). During this step, the selected elements have their weights (\(\sigma \times BR\)) rescaled by their corresponding efficiencies. As mentioned above, for UL-type results these efficiencies are trivial and equal 1 (0) if the element appears (does not appear) in the UL constraint. On the other hand, for EM-type results, these efficiencies are obtained from the efficiency maps stored in the database for the corresponding experimental result/data set. At the end of the element selection step only the elements with non-zero (rescaled) weights are relevant for computing the theory prediction for the corresponding experimental result and all the other elements are ignored. The procedure just described is illustrated graphically in Figures~\ref{TheoryPredictions:ULselection} and~\ref{TheoryPredictions:EMselection} for an UL-type and EM-type result, respectively. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.950\linewidth]{{ULselection}.png} \end{center} \caption{Procedure for selecting elements for an UL-type result.} \label{TheoryPredictions:ULselection} \end{figure} \begin{figure}[t!] \label{TheoryPredictions:emselectionfig} \begin{center} \includegraphics[width=0.950\linewidth]{{EMselection}.png} \end{center} \caption{Procedure for selecting elements for EM-type results.} \label{TheoryPredictions:EMselection} \end{figure} \subsubsection{Element Clustering} \label{TheoryPredictions:element-clustering}\label{TheoryPredictions:ulcluster} Naively one would expect that after all the elements constrained by the experimental result have been selected and their weights have been rescaled by the corresponding efficiencies, it is trivial to compute the theory prediction. One must simply sum up the weights (\(\sigma \times BR \times \epsilon\)) of all the selected elements. This is indeed the case for EM-type results, where the final theory prediction is simply given by the sum of the selected element weights. For UL-type results, however, the experimental limit on $\sigma \times BR$ only applies to elements with the same mass\footnote{When referring to an element mass, we mean all the intermediate state masses appearing in the element (see Section~\ref{TheoryDefinitions:theory-definitions} for more details). Two elements are considered to have identical masses if their mass arrays are identical.} (or mass array). As a result, the selected elements must be grouped into \emph{clusters} of equal masses. When grouping the elements, one can allow for small mass differences, since the experimental efficiencies should not be strongly sensitive to them. For instance, assume two elements contain identical mass arrays, except for the parent masses which differ by 1~MeV.
In this case it is obvious that for all experimental purposes the two elements have identical masses and should contribute to the same theory prediction ({e.g.}, their weights should be added when computing the signal cross section). Unfortunately there is no way to unambiguously define ``similar masses'' and the definition should depend on the experimental result, since different results will be more or less sensitive to mass differences. SModelS uses an UL map-dependent measure of the distance between two element masses, as described below. If two of the selected elements have a mass distance smaller than a maximum value (defined by {\tt maxDist}), they are grouped in the same mass cluster, as illustrated in Figure~\ref{TheoryPredictions:ULcluster}. Once all the elements have been clustered, their weights can finally be added together and compared against the experimental upper limit. \begin{figure}[th!] \begin{center} \includegraphics[width=0.900\linewidth]{{ULcluster}.png} \end{center} \caption{Element clustering procedure in SModelS.} \label{TheoryPredictions:ULcluster} \end{figure} \subparagraph{Mass Distance} \label{TheoryPredictions:massdist}\label{TheoryPredictions:mass-distance} As mentioned above, for UL-type results it is necessary to group elements with similar masses. Since an absolute definition of ``similar masses'' is not possible and the sensitivity to mass differences depends on the experimental result, SModelS uses an ``upper limit map-dependent'' definition. For each element's mass array, the upper limit for the corresponding mass values is obtained from the UL map. This way, each mass array is mapped to a single number (the cross section upper limit for the experimental result). Then the distance between the two elements' masses is simply given by the relative difference between their respective upper limits.
More explicitly: \begin{align} \mbox{Element } A\; (& M_A = [[M1,M2,...],[m1,m2,...]]) \rightarrow \mbox{ Upper Limit}(M_A) = x \notag\\ \mbox{Element } B\; (& M_B = [[M1',M2',...],[m1',m2',...]]) \rightarrow \mbox{ Upper Limit}(M_B) = y \notag\\ & \Rightarrow \mbox{mass distance}(A,B) = \frac{2|x-y|}{(x+y)} \end{align} where \(M_A,M_B\) (\(x,y\)) are the mass arrays (upper limits) for the elements A and B, respectively. If the mass distance of two elements is smaller than {\tt maxDist}, the two masses are considered similar. The default value for {\tt maxDist} in SModelS is 0.2. Notice that the above definition of mass distance quantifies the experimental analysis' sensitivity to mass differences, which is the relevant parameter when clustering elements. Also, a check is performed to ensure that masses with very distinct values but similar upper limits are not clustered together. \subsection{Confronting Predictions with Experimental Limits} \label{ConfrontPredictions:confrontpredictions}\label{ConfrontPredictions::doc} \label{ConfrontPredictions:confronting-predictions-with-experimental-limits} \subparagraph{R-Values} Once the relevant signal cross sections (theory predictions) have been computed for the input model, they must be compared to the respective experimental constraints. SModelS reports the result in the form of $r$-values defined as \begin{equation} r = {\textrm{(theory prediction)/(upper limit)}}. \end{equation} In the case of an UL-type result, the theory predictions typically consist of a list of signal cross sections (one for each cluster). Each theory prediction must then be compared to its corresponding upper limit. This limit is simply the cross section upper limit provided by the experimental publication or preliminary result and is extracted from the corresponding UL map (see Section~\ref{DatabaseDefinitions:database-definitions}). 
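As a minimal sketch of the comparison just described (hypothetical helper names; the actual SModelS interface differs):

```python
# Sketch of the r-value comparison (hypothetical names).
def r_value(theory_prediction, upper_limit):
    """r = (theory prediction)/(upper limit)."""
    return theory_prediction / upper_limit

def is_excluded(predictions_and_limits):
    """A model is considered excluded if any r-value exceeds 1."""
    return any(r_value(p, ul) > 1.0 for p, ul in predictions_and_limits)
```

Here `predictions_and_limits` stands for the list of (theory prediction, upper limit) pairs collected over clusters or signal regions.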
For EM-type results there is a single theory prediction ($\sum \sigma\times BR\times \epsilon$) for each data set (or signal region). This value must be compared to the upper limit on the number of signal events for the corresponding signal region. This upper limit is easily computed using the number of observed and expected events for the data set and their uncertainties and is typically stored in the database. Since most EM-type results have several signal regions (data sets), there will be one theory prediction/upper limit for each data set. By default SModelS considers only the best data set, {i.e.}\ the one with the largest value for $r_{exp} = \mbox{(theory prediction)}/\mbox{(expected limit)}$. In this case each EM-type result will have a single $r$-value, corresponding to the signal region with the best expected limit. The procedure described above can be applied to all the experimental results in the database, resulting in a list of theory predictions and upper limits for each experimental result. A model can then be considered excluded by the experimental results if, for one or more predictions, we have $r>1$.% \footnote{The statistical significance of the exclusion statement is difficult to quantify exactly, since the model is being tested by a large number of results simultaneously.} SModelS can also identify the relevant simplified model topologies which have large cross-sections, but are not constrained by any of the experimental results in the database, see Section~\ref{Tools:topology-coverage}. \input{likelihoods} \subsection{Identifying Missing Topologies} \label{Tools:topcoverage}\label{Tools:topology-coverage} The constraints provided by SModelS are obviously limited by the available set of simplified model interpretations in the database. 
Therefore it is interesting to identify classes of missing simplified models (termed `missing topologies') which are relevant for a given input model, but are not constrained by the results in the SModelS database. This task is performed as a last step in SModelS, once the decomposition and the theory predictions have been computed. Using the decomposition output, the elements (see Section~\ref{TheoryDefinitions:theory-definitions}) which are not tested by any of the experimental results in the database are grouped into the following classes:% \footnote{If mass or invisible compression are turned on, elements which can be compressed are not considered, to avoid double counting.} \begin{itemize} \item {} \emph{missingTopos}: elements whose final states are not tested by any of the experimental results in the database (independent of the element mass). The missing topologies are further classified as: \begin{itemize} \item {} \emph{longCascade}: elements with long cascade decays (more than one intermediate particle in one of the branches); \item {} \emph{asymmetricBranches}: elements with one branch differing from the other (not considering cases of long cascade decays, see above). \end{itemize} \item {} \emph{outsideGrid}: elements which could be tested by one or more experimental results, but are not constrained because the mass array falls outside the experimental mass grids. \end{itemize} Usually the list of \emph{missingTopos} or \emph{outsideGrid} elements is of considerable length. Hence, to compress this list, all elements differing only by their masses (with the same final states) or electric charges are combined into a \emph{missing} or \emph{outsideGrid} topology. Moreover, by default, electrons and muons are combined to light leptons (denoted ``l''); gluons and light quarks are combined into jets. The \emph{missing} topologies are then further classified (if applicable) into \emph{longCascade} or \emph{asymmetricBranches} topologies. 
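The classification into \emph{longCascade} and \emph{asymmetricBranches} can be sketched as follows (a hypothetical representation of an element as a pair of branches, each a list of intermediate BSM states; not the SModelS code):

```python
# Sketch of the missing-topology classification (hypothetical names).
def classify_missing(branch1, branch2):
    """longCascade: more than one intermediate particle in a branch;
    asymmetricBranches: the branches differ (excluding long cascades)."""
    if len(branch1) > 1 or len(branch2) > 1:
        return "longCascade"
    if branch1 != branch2:
        return "asymmetricBranches"
    return "other"
```

Note the order of the checks: an element with both a long cascade and asymmetric branches is counted as \emph{longCascade}, matching the exclusion stated above.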
The topologies for each of the four categories are then grouped according to the final state (for the \emph{missingTopos} and \emph{outsideGrid} classes) or according to the PIDs of the initially produced mother particles (for the \emph{longCascade} and \emph{asymmetricBranches} classes). For the latter, elements deriving from different mother particles but with the same final state and mass array cannot be distinguished and are therefore combined. The full list of mother PIDs is provided in the Python output or as a comment in both the `stdout' and `summary' outputs. The topology coverage tool is normally called from within SModelS ({e.g.}, when running \code{runSModelS.py}) by setting \textbf{testCoverage=True} in the parameters file (see Section~\ref{RunningSModelS:parameterfile}). In the output, contributions in each category are ordered by cross section. By default only the ten with the largest cross sections are shown. \section{SModelS Tools} \label{Tools::doc}\label{Tools:smodels-tools} {SModelS}\ comes with a number of tools that may be convenient for the user: \begin{itemize} \item {} a cross section calculator based on {\sc Pythia}\xspace and {\sc NLLfast}\xspace, \item {} SLHA and LHE file checkers to check your input files for completeness and sanity, \item {} a database browser to provide easy access to the database of experimental results. \end{itemize} \subsection{Cross-Section Calculator} \label{Tools:xseccalc}\label{Tools:cross-section-calculator} This tool computes LHC production cross sections for \emph{MSSM particles} and writes them out in an \emph{SLHA-type format} (see ``SLHA Format for Cross-Sections'' in Section~\ref{BasicInput:basicinput}). This can be particularly convenient for adding cross sections to SLHA input files. The calculation is done at LO with {\sc Pythia}\xspace (optionally {\sc Pythia\,6.4}\xspace or {\sc Pythia\,8.2}\xspace); K-factors for colored particles are computed with {\sc NLLfast}\xspace.
\\ \noindent \textbf{The usage of the cross section calculator is:} \begin{Verbatim}[frame=lines] smodelsTools.py xseccomputer [-h] -f FILENAME [-s SQRTS [SQRTS ...]] [-e NEVENTS] [-v VERBOSITY] [-c NCPUS] [-p] [-q] [-k] [-n] [-N] [-O] [-6] [-8] \end{Verbatim} \begin{description} \item[{\emph{arguments}:}] \leavevmode\begin{optionlist}{2cm} \item [-h, -{-}help] show help message and exit. \item [-s SQRTS, -{-}sqrts SQRTS] (int): LHC center-of-mass energy in TeV for computing the cross sections. Can be more than one value; default is both 8 and 13. \item [-e NEVENTS, -{-}nevents NEVENTS] (int): number of Monte Carlo events to be simulated when running {\sc Pythia}\xspace. \item [-c NCPUS, -{-}ncpus NCPUS] (int): number of CPU cores to be used simultaneously. $-1$ means `all'. This is only used when cross sections are computed for multiple SLHA files. \item [-p, -{-}tofile] if set, the cross sections will be written back to the file. If the input file already contains cross sections, only new records will be written. If not set, the cross sections will be written to the screen, only. \item [-q, -{-}query] only query whether the input file already contains cross sections. \item [-k, -{-}keep] keep the temporary directory containing the {\sc Pythia}\xspace run output. This option is only relevant when checking for errors when running {\sc Pythia}\xspace. \item [-n, -{-}NLO] use {\sc Pythia}\xspace and {\sc NLLfast}\xspace to compute NLO cross sections (default is LO). Note that since {\sc NLLfast}\xspace only contains results for production of gluinos and squarks (incl.\ 3rd generation), only these cross sections will be generated. \item [-N, -{-}NLL] use {\sc Pythia}\xspace and {\sc NLLfast}\xspace to compute NLO+NLL cross sections (takes precedence over NLO, default is LO). Note that since NLLfast only contains results for production of gluinos and squarks (incl.\ 3rd generation), only these cross sections will be generated. 
\item [-O, -{-}LOfromSLHA] if set, SModelS will read the LO cross sections from the input file and use NLLfast to compute the NLO or NLO+NLL cross sections for squarks and gluinos. \item [-f FILENAME, -{-}filename FILENAME] SLHA file to compute cross sections for. If a directory is given, compute cross sections for all files in the directory. \item [-v VERBOSITY, -{-}verbosity VERBOSITY] Verbosity (`debug', `info', `warning', `error'. Default is `info'). \item [-6, -{-}pythia6] use {\sc Pythia\,6}\xspace for LO cross sections. \item [-8, -{-}pythia8] use {\sc Pythia\,8}\xspace for LO cross sections (default). \end{optionlist} \end{description} \noindent Further {\sc Pythia}\xspace parameters are defined in \code{smodels/etc/pythia8.cfg} \\ ({\sc Pythia\,8}\xspace) or \code{smodels/etc/pythia.card} ({\sc Pythia\,6}\xspace).\\ \noindent A typical usage example is: \begin{Verbatim}[commandchars=\\\{\},frame=none] smodelsTools.py xseccomputer \char`\-{}s 8 13 \char`\-{}e 50000 \char`\-{}p \char`\-{}f inputFiles/slha/compressedSpec.slha -6 \end{Verbatim} \noindent which will compute 8~TeV and 13~TeV LO cross sections (at the LHC) with {\sc Pythia\,6}\xspace for all MSSM processes using 50k MC events. 
If, \emph{after} the LO cross sections have been computed, one wants to add the NLO+NLL cross sections for gluinos and squarks (incl.\ 3rd generation): \begin{Verbatim}[commandchars=\\\{\},frame=none] smodelsTools.py xseccomputer -s 8 13 -p -N -O -f inputFiles/slha/compressedSpec.slha \end{Verbatim} The resulting file will then contain LO cross sections for all MSSM processes and NLO+NLL cross sections for the available processes in NLLfast (gluino and squark production).\footnote{If a higher precision is needed, for instance for electroweak production or for gluino/squark masses outside the {\sc NLLfast}\xspace\ grids, this has to be taken care of by the user.} When reading the input file, SModelS will then use only the highest-order cross section available for each process. \subsection{Input File Checks} \label{Tools:input-file-checks}\label{Tools:filechecks} As discussed in Section~\ref{BasicInput:basicinput}, SModelS accepts both SLHA and LHE input files. It can be convenient to perform certain sanity checks on these files as described below. \paragraph{\bf LHE File Checker} \label{Tools:lhe-file-checker}\label{Tools:lhechecks} For an LHE input file only very basic checks are performed, namely that \begin{itemize} \item {} the file exists, \item {} it contains at least one event, \item {} the information on the total cross section and the center of mass energy can be found. 
\end{itemize} \noindent \textbf{The usage of the LHE checker is simply:} \begin{Verbatim}[commandchars=\\\{\},frame=lines] smodelsTools.py lhechecker [-h] -f FILENAME \end{Verbatim} \begin{description} \item[{\emph{arguments}:}] \leavevmode\begin{optionlist}{3cm} \item [-h, -{-}help] show this help message and exit \item [-f FILENAME, -{-}filename FILENAME] name of input LHE file \end{optionlist} \end{description} \noindent A typical usage example is: \begin{Verbatim}[commandchars=\\\{\},frame=none] smodelsTools.py lhechecker -f inputFiles/slha/gluino_squarks.lhe \end{Verbatim} \paragraph{\bf SLHA File Checker} \label{Tools:slha-file-checker}\label{Tools:slhachecks} The SLHA file checker allows one to perform quite rigorous checks of SLHA input files. Concretely, it verifies that \begin{itemize} \item {} the file exists and is given in SLHA format, \item {} the file contains masses and decay branching ratios in standard SLHA format, \item {} the file contains cross sections according to the SLHA format for cross sections, \item {} the lightest $\mathbb{Z}_2$\emph{-odd state} (the LSP in supersymmetric models) is neutral, \item {} there are no stable charged particles nor displaced vertices (no non-prompt visible decays), as currently all the analyses considered by SModelS require a prompt MET signature. \end{itemize} \noindent In addition, one can ask that \begin{itemize} \item {} all decays listed in the DECAY block are kinematically allowed, \emph{i.e.} the sum of masses of the decay products may not exceed the mother mass. \emph{This check for ``illegal decays'' is turned off by default.} \end{itemize} \noindent If any of the above tests fails (returns a negative result), an error message is shown. Some more comments are in order. In order to check that the lightest $\mathbb{Z}_2$-odd state has zero electric and color charges, the quantum numbers of the BSM particles must be given in the \code{qNumbers} dictionary in \code{particles.py}.
The format is\\ \noindent {\tt [2*spin, 3*electric charge, dimension of SU(3) representation]}\\ \noindent The list of quantum numbers is also required to check for displaced vertices or heavy charged particles. The check for long-lived (or stable) particles first verifies that these appear in one of the cross section blocks and that their cross section exceeds the minimum cross section value defined by \code{sigmacut} in the parameters file, see Section~\ref{RunningSModelS:parameterfile}. If the cross section is larger than \code{sigmacut} and the particle is stable, the checker verifies that it is neutral (both electric and color charges are zero). On the other hand, if the particle is unstable, but its lifetime (times \emph{c}) is larger than a minimum value (\emph{default = 10 mm}), the particle's decay is considered non-prompt. For these cases all channels are then checked for visible decay products. If the branching ratio to visible decays times the maximum production cross section for the particle exceeds \code{sigmacut}, the particle's decay is considered a displaced vertex.
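As an illustration of the \code{qNumbers} format above (the PDG codes below follow the usual MSSM conventions; the checking function is a hypothetical sketch, not the SModelS code):

```python
# qNumbers entries: [2*spin, 3*electric charge, dim of SU(3) representation]
qNumbers = {
    1000022: [1, 0, 1],  # neutralino-like LSP: spin 1/2, Q = 0, color singlet
    1000021: [1, 0, 8],  # gluino: spin 1/2, Q = 0, color octet
}

def is_neutral(pid):
    """Hypothetical sketch of the LSP check: passes only for a
    color-singlet state with zero electric charge."""
    spin2, charge3, color_dim = qNumbers[pid]
    return charge3 == 0 and color_dim == 1
```

With these entries, a neutralino-like LSP passes the check while a gluino, being a color octet, would not.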
\\ \noindent \textbf{The usage of the SLHA checker is:} \begin{Verbatim}[frame=lines] smodelsTools.py slhachecker [-h] [-xS] [-lsp] [-longlived] [-m DISPLACEMENT] [-s SIGMACUT] [-illegal] [-dB] -f FILENAME \end{Verbatim} \begin{description} \item[{\emph{arguments}:}] \leavevmode\begin{optionlist}{3cm} \item [-h, -{-}help] show this help message and exit \item [-xS, -{-}xsec] turn off the check for xsection blocks \item [-lsp, -{-}lsp] turn off the check for charged lsp \item [-longlived, -{-}longlived] turn off the check for stable charged particles and visible displaced vertices \item [-m DISPLACEMENT, -{-}displacement DISPLACEMENT] give maximum displacement of secondary vertex in m \item [-s SIGMACUT, -{-}sigmacut SIGMACUT] give sigmacut in fb \item [-illegal, -{-}illegal] turn on check for kinematically forbidden decays \item [-dB, -{-}decayBlocks] turn off the check for missing decay blocks \item [-f FILENAME, -{-}filename FILENAME] name of input SLHA file \end{optionlist} \end{description} \noindent A typical usage example is: \begin{Verbatim}[commandchars=\\\{\},frame=none] smodelsTools.py slhachecker -m 0.001 -s 0.01 -f inputFiles/slha/lightSquarks.slha \end{Verbatim} \noindent Running this will print the status flag and a message with potential warnings and error messages. \subsection{Database Browser} \label{Tools:database-browser}\label{Tools:databasebrowser} The database browser is a tool based on IPython which provides an easy way to directly access the SModelS database. It provides several methods to select experimental results or data sets satisfying some user-defined conditions as well as to access the metadata and data inside each experimental result.
\\ \noindent \textbf{The usage of the browser interface is:} \begin{Verbatim}[commandchars=\\\{\},frame=lines] smodelsTools.py database-browser [-h] -p PATH_TO_DATABASE [-t] \end{Verbatim} \begin{description} \item[{\emph{arguments}:}] \leavevmode\begin{optionlist}{3cm} \item [-h, -{-}help] show this help message and exit \item [-p PATH\_TO\_DATABASE,] \item [-{-}path\_to\_database PATH\_TO\_DATABASE] path to SModelS database \item [-t, -{-}text] load text database, don't even search for binary database file \end{optionlist} \end{description} \noindent A typical usage example is: \begin{Verbatim}[commandchars=\\\{\},frame=none] smodelsTools.py database\char`\-{}browser \char`\-{}p ./smodels\char`\-{}database \end{Verbatim} Loading the database may take a few seconds if the binary database file exists. Otherwise the {pickle file} will be created. Starting the browser opens an IPython session, which can be used to select specific experimental results (or groups of experimental results), check upper limits and/or efficiencies for specific masses/topologies and access all the available information in the database. A simple example is given below: \begin{Verbatim}[commandchars=\\\{\},frame=none,fontsize=\small] In [1]: print browser \char`\#{}Print all experimental results in the browser [\char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2015\char`\-{}09\char`\'{}, \char`\'{}CMS\char`\-{}SUS\char`\-{}PAS\char`\-{}15\char`\-{}002\char`\'{}, \char`\'{}ATLAS\char`\-{}CONF\char`\-{}2012\char`\-{}105\char`\'{}, ...] 
In [2]: browser.selectExpResultsWith(txName = \char`\'{}T1tttt\char`\'{}, dataType = \char`\'{}upperLimit\char`\'{}) \char`\#{}Select only the UL results with the topology T1tttt In [3]: print browser \char`\#{}Print all experimental results in the browser (after selection) [\char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2015\char`\-{}09\char`\'{}, \char`\'{}CMS\char`\-{}SUS\char`\-{}PAS\char`\-{}15\char`\-{}002\char`\'{}, \char`\'{}ATLAS\char`\-{}CONF\char`\-{}2012\char`\-{}105\char`\'{}, \char`\'{}ATLAS\char`\-{}CONF\char`\-{}2013\char`\-{}007\char`\'{}, \char`\'{}ATLAS\char`\-{}CONF\char`\-{}2013\char`\-{}061\char`\'{}, \char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2013\char`\-{}04\char`\'{}, \char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2013\char`\-{}09\char`\'{}, \char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2013\char`\-{}18\char`\'{}, \char`\'{}CMS\char`\-{}PAS\char`\-{}SUS\char`\-{}12\char`\-{}026\char`\'{}, ...] \char`\#{}Define masses for the T1tttt topology: In [4]: gluinoMass, LSPmass = 800.*GeV, 100.*GeV \char`\#{}Get UL for a specific experimental result In [5]: browser.getULFor(\char`\'{}CMS\char`\-{}SUS\char`\-{}PAS\char`\-{}15\char`\-{}002\char`\'{},\char`\'{}T1tttt\char`\'{}, [[gluinoMass,LSPmass],[gluinoMass,LSPmass]]) Out[5]: 5.03E\char`\-{}02 [pb] \char`\#{}Get the upper limits for all the selected results for the given \char`\#{}topology and mass In [6]: for expResult in browser: ...: print expResult.getValuesFor(\char`\'{}id\char`\'{}),\char`\'{}UL = \char`\'{}, expResult.getUpperLimitFor(txname=\char`\'{}T1tttt\char`\'{},mass= [[gluinoMass,LSPmass],[gluinoMass,LSPmass]]) ...: [\char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2015\char`\-{}09\char`\'{}] UL = None [\char`\'{}CMS\char`\-{}SUS\char`\-{}PAS\char`\-{}15\char`\-{}002\char`\'{}] UL = 5.03E\char`\-{}02 [pb] [\char`\'{}ATLAS\char`\-{}CONF\char`\-{}2012\char`\-{}105\char`\'{}] UL = 6.70E\char`\-{}02 [pb] [\char`\'{}ATLAS\char`\-{}CONF\char`\-{}2013\char`\-{}007\char`\'{}] UL = 2.40E\char`\-{}02 [pb] ... 
\char`\#{}Print the luminosities for the selected experimental results In [7]: for exp in browser: ...: print exp.getValuesFor(\char`\'{}id\char`\'{}), exp.getValuesFor(\char`\'{}lumi\char`\'{}) ...: [\char`\'{}ATLAS\char`\-{}SUSY\char`\-{}2015\char`\-{}09\char`\'{}] [3.20E+00 [1/fb]] [\char`\'{}CMS\char`\-{}SUS\char`\-{}PAS\char`\-{}15\char`\-{}002\char`\'{}] [2.20E+00 [1/fb]] [\char`\'{}ATLAS\char`\-{}CONF\char`\-{}2012\char`\-{}105\char`\'{}] [5.80E+00 [1/fb]] [\char`\'{}ATLAS\char`\-{}CONF\char`\-{}2013\char`\-{}007\char`\'{}] [2.07E+01 [1/fb]] ... \end{Verbatim} \section{Using SModelS} \label{RunningSModelS:running-smodels}\label{RunningSModelS:runningsmodels}\label{RunningSModelS::doc} {SModelS}\ ships with a command-line tool, \code{runSModelS.py}, which is deemed configurable enough to cover a large range of applications. The functionalities contained in this tool include detailed checks of input SLHA or LHE files, running the decomposition, evaluating the theory predictions and comparing them to the experimental limits available in the database, determining missing topologies and printing the output in several available formats. Moreover, starting from v1.1.0, \code{runSModelS.py} can process a whole folder containing a set of SLHA or LHE files in a parallelized fashion. The command-line tool and the parameter file are described in detail below. Users familiar with Python and the SModelS basics may however prefer to write their own main routine. A simple example code is provided in \code{Example.py} and explained step-by-step in the html manual. Finally, {SModelS}\ can also conveniently be used within micrOMEGAS, as explained in \cite{Barducci:2016pcb}. 
\subsection{Usage of runSModelS.py} \label{RunningSModelS:runsmodels-py}\label{RunningSModelS:runsmodels} \begin{Verbatim}[commandchars=\\\{\},frame=lines] runSModelS.py {[}-h{]} -f FILENAME {[}-p PARAMETERFILE{]} {[}-o OUTPUTDIR{]} {[}-d{]} {[}-t{]} {[}-V{]} {[}-c{]} {[}-v VERBOSE{]} {[}-T TIMEOUT{]} \end{Verbatim} \begin{description} \item[{\emph{arguments}:}] \leavevmode \begin{optionlist}{3cm} \item [-h, -{-}help] show this help message and exit. \item [-f FILENAME, -{-}filename FILENAME] name of SLHA or LHE input file or a directory path (required argument). If a directory is given, loop over all files in the directory. \item [-p PARAMETERFILE, -{-}parameterFile PARAMETERFILE] name of the parameter file, where most options are defined (optional argument). If not set, use all parameters from {\code{smodels/etc/\allowbreak parameters\_default.ini}}. \item [-o OUTPUTDIR, -{-}outputDir OUTPUTDIR] name of output directory (optional argument). The default folder is {\tt ./results/}. \item [-d, -{-}development] if set, SModelS will run in development mode and exit if any errors are found \item [-t, -{-}force\_txt] force loading the text database. \item [-V, -{-}version] show program's version number and exit \item [-c, -{-}run-crashreport] parse crash report file and use its contents for a SModelS run. Supply the crash file simply via `-- filename myfile.crash'. \item [-v VERBOSE, -{-}verbose VERBOSE] sets the verbosity level (debug, info, warning, error). Default value is ``info''. \item [-T TIMEOUT, -{-}timeout TIMEOUT] define a limit on the running time (in secs). If not set, run without a time limit. If a directory is given as input, the timeout will be applied for each individual file. 
\end{optionlist} \end{description} \noindent A typical usage example is: \begin{Verbatim}[commandchars=\\\{\}] runSModelS.py \char`\-{}f inputFiles/slha/gluino_squarks.slha \\ \char`\-{}p parameters.ini \char`\-{}o ./ \char`\-{}v warning \end{Verbatim} The resulting output will be generated in the current folder, according to the printer options set in the parameter file. \subsection{Parameter File} \label{RunningSModelS:the-parameters-file}\label{RunningSModelS:parameterfile} The basic options and parameters used by \code{runSModelS.py} are defined in the parameter file. An example parameter file, including all available parameters together with a short description, is stored in \code{parameters.ini}. If no parameter file is specified, the default parameters stored in \code{smodels/etc/\allowbreak parameters\_default.ini} are used. Below we describe each entry in the parameter file. \noindent\begin{description} \item[\emph{options}:] main options for turning SModelS features on or off \end{description} \begin{itemize} \item {} \textbf{checkInput} (True/False): if True, \code{runSModelS.py} will run the file check tool on the input file and verify that the input contains all the necessary information. \item {} \textbf{doInvisible} (True/False): turns invisible compression on or off during the decomposition. \item {} \textbf{doCompress} (True/False): turns mass compression on or off during the decomposition. \item {} \textbf{computeStatistics} (True/False): turns the likelihood and \(\chi^2\) computation on or off for EM-type results (see Section~\ref{ConfrontPredictions:confronting-predictions-with-experimental-limits}). \item {} \textbf{testCoverage} (True/False): set to True to run the coverage tool (see Section~\ref{Tools:topology-coverage}). 
\end{itemize}

\noindent\begin{description}
\item[\emph{parameters}:] basic parameter values for running SModelS
\end{description}
\begin{itemize}
\item {} \textbf{sigmacut} (float): minimum value for an element weight (in fb). Elements with a weight below \code{sigmacut} are neglected during decomposition of SLHA input files (see Section~\ref{Decomposition:slhadecomp}). The default value is $0.03$~fb. Note that, depending on the input model, the running time may increase considerably if sigmacut is too low, while too large values might eliminate relevant elements.
\item {} \textbf{minmassgap} (float): maximum value of the mass difference (in GeV) for performing mass compression. The default value is $5$~GeV. \emph{Only used if} \code{doCompress = True}.
\item {} \textbf{maxcond} (float): maximum allowed value (in the {[}0,1{]} interval) for the violation of upper limit conditions. A zero value means the conditions are strictly enforced, while 1 means the conditions are never enforced. \emph{Only relevant for printing the} output summary.
\item {} \textbf{ncpus} (int): number of CPUs. When processing multiple SLHA/LHE files, SModelS can be run in a parallelized fashion, splitting up the input files into equal chunks. \code{ncpus=-1} uses the total number of CPU cores of the machine.
\end{itemize}

\noindent\begin{description}
\item[\emph{database}:] allows for selection of a subset of experimental results from the database
\end{description}
\begin{itemize}
\item {} \textbf{path}: the absolute or relative path to the database. The user can supply either the directory name of the database, or the path to the pickle file (see Section~\ref{DatabaseStructure:databasepickle}).
\item {} \textbf{analyses} (list of results): set to {\emph{all}} to use all available results. If a list of experimental analyses is given, only these will be used.
For instance, setting \code{analyses = ATLAS-SUSY-2015-09, CMS-PAS-SUS-15-002} will only use the experimental results from these two analyses.
\item {} \textbf{txnames} (list of topologies): set to {\emph{all}} to use all available simplified model topologies. The topologies are labeled according to the TxName convention. If a list of TxNames is given, only the corresponding topologies will be considered. For instance, setting \code{txnames=T2} will only consider experimental results for $pp \to \tilde{q} + \tilde{q} \to (jet+\tilde{\chi}_1^0) + (jet+\tilde{\chi}_1^0)$ and the output will only contain constraints for this topology. A list of all topologies and their corresponding TxNames can be found at~\cite{smodels:dictionary}.
\item {} \textbf{dataselector} (list of datasets): set to {\emph{all}} to use all available data sets. If dataselector = upperLimit (efficiencyMap), only UL-type results (EM-type results) will be used. Furthermore, if a list of signal regions (data sets) is given, only the experimental results containing these data sets will be used. For instance, if \code{dataselector = SRA mCT150, SRA mCT200}, only these signal regions will be used.
\item {} \textbf{discardZeroes} (True/False): set to True to discard all efficiency maps with zero-only entries.
\end{itemize}

\noindent\begin{description}
\item[\emph{printer}:] main options for the output format
\end{description}
\begin{itemize}
\item {} \textbf{outputType} (list of outputs): use to list all the output formats to be generated. Available output formats are: summary, stdout, log, python, xml and slha.
\end{itemize}

\noindent\begin{description}
\item[\emph{stdout-printer}:] options for the stdout or log printer
\end{description}
\begin{itemize}
\item {} \textbf{printDatabase} (True/False): set to True to print the list of selected experimental results to stdout.
\item {} \textbf{addAnaInfo} (True/False): set to True to include detailed information about the TxNames tested by each experimental result. \emph{Only used if printDatabase=True}.
\item {} \textbf{printDecomp} (True/False): set to True to print basic information from the decomposition (topologies, total weights, ...).
\item {} \textbf{addElementInfo} (True/False): set to True to include detailed information about the elements generated by the decomposition. \emph{Only used if printDecomp=True}.
\item {} \textbf{printExtendedResults} (True/False): set to True to print extended information about the theory predictions, including the PIDs of the particles contributing to the predicted cross section, their masses and the expected upper limit (if available).
\item {} \textbf{addCoverageID} (True/False): set to True to print the list of element IDs contributing to each missing topology (see Section~\ref{Tools:topology-coverage}). \emph{Only used if testCoverage = True}. This option should be used along with \emph{addElementInfo = True} so the user can precisely identify which elements were classified as missing.
\end{itemize}

\noindent\begin{description}
\item[\emph{summary-printer}:] options for the summary printer
\end{description}
\begin{itemize}
\item {} \textbf{expandedSummary} (True/False): set to True to include in the summary output all applicable experimental results, or to False to report only the strongest one.
\end{itemize}

\noindent\begin{description}
\item[\emph{python-printer}:] options for the Python printer
\end{description}
\begin{itemize}
\item {} \textbf{addElementList} (True/False): set to True to include in the Python output all information about all elements generated in the decomposition. If set to True the output file can be quite large.
\end{itemize}

\noindent\begin{description}
\item[\emph{xml-printer}:] options for the xml printer
\end{description}
\begin{itemize}
\item {} \textbf{addElementList} (True/False): set to True to include in the xml output all information about all elements generated in the decomposition. If set to True the output file can be quite large.
\end{itemize}

\subsection{Output}
\label{RunningSModelS:the-output}\label{RunningSModelS:smodelsoutput}
The results of \code{runSModelS.py} are printed in the format(s) specified by the \textbf{outputType} in the parameter file. The following formats are available:
\begin{itemize}
\item {} a human-readable screen output (stdout) or log output. These are intended to provide detailed information about the database, the decomposition, the theory predictions and the missing topologies. The complexity of the output can be controlled through several options in the parameter file. Due to its size, this output is not suitable for storing the results from a large scan; it is more appropriate for a single-file input.
\item {} a human-readable text file output containing a summary of the run. This format contains the main SModelS results: the theory predictions and the missing topologies, as described in detail below. It can be used for a large scan, since the output can be made quite compact using the options in the parameter file.
\item {} a Python dictionary printed to a file, containing information about the decomposition, the theory predictions, and the missing topologies. The output can become quite long if all options in the parameter file are set to True. However, this output can easily be imported into a Python environment, making it easy to access the desired information. For users familiar with the Python language this is the recommended format.
\item {} an xml file containing information about the decomposition, the theory predictions and the missing topologies. The output can become quite long if all options are set to True.
\item {} a .smodelsslha file (outputType=slha) containing a summary of the most constraining results and the missing topologies, in the SLHA-type format described in~\cite{Barducci:2016pcb}.
\end{itemize}
\vspace*{6mm} \hrule \vspace*{2mm}
\noindent {\bf Notes:}
\begin{itemize}
\item {} The list of elements can be extremely long. Try setting \textbf{addElementInfo} = False and/or \textbf{printDecomp} = False to obtain a smaller output.
\item{} A word of caution is in order regarding a potentially naive use of the highest $r$-value reported by SModelS, as this does not necessarily come from the most sensitive analysis. For a rigorous statistical interpretation, one should use the $r$-value of the result with the highest \emph{expected} $r$ ($r_{exp}$). Unfortunately, for UL-type results, the expected limits are often not available; $r_{exp}$ is then reported as N/A in the SModelS output.
\item{} We also point out that the $r$-values do not include systematic uncertainties for the signal prediction. The signal uncertainties may be relevant, in particular in cases where cross sections are computed only at LO.\\
\hrule
\end{itemize}
\bigskip

As an example we explain below the text-type (summary) output obtained from the sample file \path{gluino_squarks.slha} in \path{inputFiles/slha/}. The output file is written in terms of the following blocks:
\begin{itemize}\item information about the basic input parameters and the status of the run:\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Input status: 1
Decomposition output status: 1 #decomposition was successful
# Input File: inputFiles/slha/gluino_squarks.slha
# maxcond = 0.2
# minmassgap = 5.
# ncpus = 1
# sigmacut = 0.03
# Database version: 1.1.1
\end{Verbatim}
\begin{itemize}
\item a list of all the theory predictions obtained and the corresponding experimental upper limits. If \code{expandedSummary = False} only the most constraining experimental result is printed.
For each applicable experimental result, the corresponding experimental result ID and the center-of-mass energy (sqrts) are printed together with the theory cross section (`Theory\_Value'), the observed upper limit (`Exp\_limit'), the (theory cross section)/(observed upper limit) ratio (r) and, when available, the expected $r$ value (r\_expected). Moreover, the condition violation is given for UL-type results; for EM-type results the signal region used is printed. Finally, the TxNames contributing to the signal cross section are listed and, if \code{computeStatistics = True}, the $\chi^2$ and likelihood values are printed for EM-type results:\end{itemize}
\begin{Verbatim}[fontsize=\footnotesize]
#Analysis  Sqrts  Cond_Violation  Theory_Value(fb)  Exp_limit(fb)  r  r_expected

CMS-SUS-13-019      8.00E+00  0.0  1.773E+00  3.760E+00  4.716E-01  N/A
Signal Region:  (UL)
Txnames:  T2
--------------------------------------------------------------------------------
ATLAS-SUSY-2013-02  8.00E+00  0.0  6.617E+00  1.718E+01  3.851E-01  N/A
Signal Region:  (UL)
Txnames:  T6WW
--------------------------------------------------------------------------------
ATLAS-SUSY-2013-02  8.00E+00  0.0  5.525E-01  1.818E+00  3.039E-01  3.653E-01
Signal Region:  SR2jt
Txnames:  T1, T2
Chi2, Likelihood =  4.185E-02  2.542E-02
--------------------------------------------------------------------------------
...
\end{Verbatim}
\begin{itemize}\item the maximum value for the (theory cross section)/(observed upper limit) ratio. If this value is higher than 1, the input model is likely excluded by one of the experimental results.
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
The highest r value is = 0.471627309932
\end{Verbatim}
\begin{itemize}\item summary information about the missing topologies, if \code{testCoverage = True}. The total `missing topology' cross section corresponds to the sum of weights of all elements which are not tested by any experimental result.
If an element is constrained by one or more experimental results, but its mass is outside the efficiency or upper limit grids, its cross section is instead included in the ``outside the mass grid'' category. Finally, the elements which contribute to the total missing topology cross section are subdivided into elements with long decays (cascades with more than one intermediate particle) and with asymmetric branches.
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Total missing topology cross section (fb): 2.767E+02
Total cross section where we are outside the mass grid (fb): 1.760E-01
Total cross section in long cascade decays (fb): 1.096E+02
Total cross section in decays with asymmetric branches (fb): 1.630E+02
\end{Verbatim}
\begin{itemize}\item detailed information about the missing topologies, sorted by their cross sections. The element cross section (weight) as well as its description in bracket notation is included. For definiteness, the entries in this and the following lists are sorted by their weights.
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Missing topologies with the highest cross sections (up to 10):
Sqrts (TeV)   Weight (fb)   Element description
 8.0          1.601E+01     # [[[jet],[W]],[[jet,jet],[W]]]
 8.0          1.395E+01     # [[[jet],[jet,jet],[W]],[[jet,jet],[W]]]
 8.0          9.206E+00     # [[[b,t],[W]],[[jet,jet],[W]]]
...
\end{Verbatim}
\begin{itemize}\item detailed information about the topologies which are outside the experimental results data grid:
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Contributions outside the mass grid (up to 10):
Sqrts (TeV)   Weight (fb)   Element description
 8.0          1.440E-01     # [[[jet]],[[t,t]]]
 8.0          3.203E-02     # [[[t],[W]],[[t],[W]]]
\end{Verbatim}
\begin{itemize}\item information about the missing topologies with long cascade decays. The long cascade decays are classified by the initially produced mother particles.
If more than one pair of mothers contributes to the same class of elements, the full list is given in the comment.
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Missing topos: long cascade decays (up to 10 entries), sqrts = 8 TeV:
Mother1  Mother2  Weight (fb)  # allMothers
1000021  2000002  3.743E+01    # [[1000021, 2000002]]
1000002  1000021  1.626E+01    # [[1000002, 1000021]]
1000021  2000001  1.282E+01    # [[1000021, 2000001]]
...
\end{Verbatim}
\begin{itemize}\item information about the missing topologies with asymmetric decays, in the same format as the long cascade decay description:
\end{itemize}
\begin{Verbatim}[frame=none,fontsize=\footnotesize]
Missing topos: asymmetric branches (w/o long cascades, up to 10), sqrts = 8 TeV
Mother1  Mother2  Weight (fb)  # allMothers
1000002  1000021  4.725E+01    # [[1000002, 1000021]]
1000021  1000021  4.324E+01    # [[1000021, 1000021]]
1000021  2000002  2.215E+01    # [[1000021, 2000002]]
...
\end{Verbatim}
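To illustrate the note of caution above regarding observed versus expected $r$-values, the following Python sketch shows how one might post-process a list of theory-prediction entries, picking both the highest observed $r$ and the result with the highest \emph{expected} $r$. The list structure and key names are hypothetical stand-ins for the actual Python-printer dictionary (whose exact layout may differ); the numbers are taken from the sample run above.

```python
# Illustrative post-processing of SModelS theory predictions.
# The entries below are hypothetical stand-ins modeled on the summary
# output shown above; the real Python-printer layout may differ.
predictions = [
    {"analysis": "CMS-SUS-13-019",     "r": 0.4716, "r_expected": None},
    {"analysis": "ATLAS-SUSY-2013-02", "r": 0.3851, "r_expected": None},
    {"analysis": "ATLAS-SUSY-2013-02", "r": 0.3039, "r_expected": 0.3653},
]

def highest_r(preds):
    """Result with the highest observed r (naive exclusion check)."""
    return max(preds, key=lambda p: p["r"])

def most_sensitive(preds):
    """Result with the highest *expected* r, skipping N/A entries,
    as recommended above for a rigorous statistical interpretation."""
    with_exp = [p for p in preds if p["r_expected"] is not None]
    return max(with_exp, key=lambda p: p["r_expected"]) if with_exp else None

best = highest_r(predictions)
sens = most_sensitive(predictions)
print("highest r: %s (r = %.4f)" % (best["analysis"], best["r"]))
if sens is not None:
    print("most sensitive: %s (r = %.4f, r_exp = %.4f)"
          % (sens["analysis"], sens["r"], sens["r_expected"]))
```

With the sample numbers above, the highest observed $r$ comes from CMS-SUS-13-019, while the only result carrying an expected limit (and hence usable for the sensitivity-based choice) is the ATLAS-SUSY-2013-02 efficiency-map entry.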
\section{Introduction}
Ever since the proposition in 1988 by Tsallis \cite{tsallis0}, \cite{tsallis1} of a nonextensive generalization of the canonical formalism of statistical mechanics, a spirited discussion \cite{cho1}, \cite{latora1} has grown on the foundations of this branch of physics. But only recently \cite{robledo0}-\cite{robledo3} has firm evidence appeared about the relevance of the generalized formalism for specific model system situations. These studies present rigorous results on the dynamics associated to critical attractors in prototypical nonlinear one-dimensional maps, such as those at the pitchfork and tangent bifurcations and at the accumulation point of the former, the so-called onset of chaos \cite{schuster1}. As these are very familiar and well-understood features of these maps, it is of interest to see how previous knowledge fits in with the new perspective. Also, clear-cut calculations may help clarify the physical reasons (believed in these examples to be a breakdown in the chain of increasing randomness from non-ergodicity to completely developed chaoticity) for the departure from the Boltzmann-Gibbs (BG) statistics and the applicability of the nonextensive generalization. Here we review briefly specific results associated to the aforementioned states in unimodal maps, which are all characterized by vanishing Lyapunov coefficients. We also recall how the dynamics at the tangent bifurcation appears to be related to that at thermal critical states. In addition, time evolution at the noise-perturbed onset of chaos is seen to be closely analogous to the glassy dynamics observed in supercooled molecular liquids.

\section{Dynamics at the pitchfork and tangent bifurcations}
As pointed out in Refs.
\cite{robledo0}, \cite{baldovin1}, \cite{robledo1} the long-known \cite{schuster1} exact geometric or {\it static} solution of the Feigenbaum renormalization group (RG) equations for the tangent bifurcation in unimodal maps of nonlinearity $\zeta >1$ also describes the {\it dynamics} of iterates at such a state. A straightforward extension of this approach applies also to the pitchfork bifurcations. We recall that the period-doubling and intermittency transitions are based on the pitchfork and the tangent bifurcations, respectively, and that at these critical states the ordinary Lyapunov coefficient $\lambda _{1}$ vanishes. The sensitivity to initial conditions $\xi _{t}$ was determined analytically and its relation with the rate of entropy production examined \cite{robledo1}. The fixed-point expressions have the specific form that corresponds to the temporal evolution suggested by the nonextensive formalism. These studies contain the derivation of their $q$-generalized Lyapunov coefficients $\lambda _{q}$ and the description of the different possible types of sensitivity $\xi _{t}$. By considering as starting point the $\zeta $-logistic map $f_{\mu }(x)=1-\mu \left| x\right| ^{\zeta }$, $\zeta >1$, $-1\leq x\leq 1$, it is found that for both infinite sets of pitchfork and tangent bifurcations $\xi _{t}$, defined as $\xi _{t}\equiv \lim_{\Delta x_{0}\to 0}(\Delta x_{t}/\Delta x_{0})$ (where $\Delta x_{0}$ is the length of the initial interval and $\Delta x_{t}$ its length at time $t$), has the form suggested by Tsallis,
\begin{equation}
\xi _{t}(x_{0})=\exp _{q}[\lambda _{q}(x_{0})\ t]\equiv [1-(q-1)\lambda _{q}(x_{0})\ t]^{-\frac{1}{q-1}},
\label{sensitivity_00}
\end{equation}
that yields the customary exponential $\xi _{t}$ with Lyapunov coefficient $\lambda _{1}(x_{0})$ when $q\rightarrow 1$. In Eq.
(\ref{sensitivity_00}) $q$ is the entropic index and $\lambda _{q}$ is the $q$-generalized Lyapunov coefficient; $\exp _{q}(x)\equiv [1-(q-1)x]^{-1/(q-1)}$ is the $q$-exponential function. The pitchfork and the left-hand side of the tangent bifurcations display weak insensitivity to initial conditions, while the right-hand side of the tangent bifurcations presents a `super-strong' (faster than exponential) sensitivity to initial conditions \cite{baldovin1}. For the transition to periodicity of order $n$ the composition $f_{\mu }^{(n)}$ is first considered. In the neighborhood of one of the $n$ points tangent to the line with unit slope one obtains $f^{(n)}(x)=x+u\left| x\right| ^{z}+o(\left| x\right| ^{z})$, where $u>0$ is the expansion coefficient. The general result obtained is $q=2-z^{-1}$ and $\lambda _{q}(x_{0})=zux_{0}^{z-1}$ \cite{baldovin1}, \cite{robledo1}. At the tangent bifurcations one has $f_{\mu }^{(n)}(x)=x+ux^{2}+o(x^{2})$, $u>0$, and from $z=2$ one gets $q=3/2$. For the pitchfork bifurcations one has instead $f_{\mu }^{(n)}(x)=x+ux^{3}+o(x^{3})$, because $d^{2}f_{\mu }^{(2^{k})}/dx^{2}=0$ at these transitions, and $u<0$ is now the coefficient associated with $d^{3}f_{\mu }^{(2^{k})}/dx^{3}<0$. In this case we have $z=3$ in $q=2-z^{-1}$ and one obtains $q=5/3$. Notably, these specific results for the index $q$ are valid for all $\zeta >1$ and therefore define the existence of only two universality classes for unimodal maps, one for the tangent and the other for the pitchfork bifurcations \cite{baldovin1}. See Figs. 2 and 3 in Ref. \cite{baldovin1}. Notice that our treatment of the tangent bifurcation differs from other studies of intermittency transitions \cite{gaspard1} in that there is no feedback mechanism of iterates into the origin of $f^{(n)}(x)$ or of its associated fixed-point map.
Here impeded or incomplete mixing in phase space (a small interval neighborhood around $x=0$) arises from the special `tangency' shape of the map at the pitchfork and tangent transitions, which produces monotonic trajectories. This has the effect of confining or expelling trajectories, causing anomalous phase-space sampling, in contrast to the thorough coverage in generic states with $\lambda _{1}>0$. By construction, the dynamics at the intermittency transitions describes a purely nonextensive regime.

\section{Link between intermittent dynamics and dynamics at thermal criticality}
An unanticipated relationship has been shown by Contoyiannis {\it et al.} \cite{athens1}-\cite{athens4} to exist between the intermittent dynamics of nonlinear maps at a tangent bifurcation and the equilibrium dynamics of fluctuations at an ordinary thermal critical point. With the aim of obtaining the properties of clusters or domains of the order parameter at criticality, a Landau-Ginzburg-Wilson (LGW) approach was carried out such that the dominant contributions to the partition function arise from a singularity (similar to an instanton) located in the space outside the cluster \cite{athens1}, \cite{athens2}. Then \cite{athens3}, \cite{athens4}, a nonlinear map for the average order parameter was constructed whose dynamics reproduce the averages of the cluster critical properties. This map has as a main feature the tangent bifurcation, and as a result time evolution is intermittent. The starting point is the partition function of the $d$-dimensional system at criticality,
\begin{equation}
Z=\int D[\phi ]\exp (-\Gamma _{c}[\phi ]),
\label{partition1}
\end{equation}
where
\begin{equation}
\Gamma _{c}[\phi ]=g_{1}\int_{\Omega }dV\left[ \frac{1}{2}(\nabla \phi )^{2}+g_{2}\left| \phi \right| ^{\delta +1}\right]
\label{landau1}
\end{equation}
is the critical LGW free energy of a system of $d$-dimensional volume $\Omega $, $\phi $ is the order parameter (e.g.
magnetization per unit volume) and $\delta $ is the critical isotherm exponent. By considering the space-averaged magnetization $\Phi =\int_{V}\phi (x)dV$, the statistical weight
\begin{equation}
\rho (\Phi )=\exp (-\Gamma _{c}[\Phi ])/Z,
\label{invariant1}
\end{equation}
where $\Gamma _{c}[\Phi ]\sim g_{1}g_{2}\Phi ^{\delta +1}$ and $Z=\int d\Phi \exp (-\Gamma _{c}[\Phi ])$, was shown to be the invariant density of a statistically equivalent one-dimensional map. The functional form of this map was obtained as the solution of an inverse Frobenius-Perron problem \cite{athens3}. For small values of $\Phi $ the map has the form
\begin{equation}
\Phi _{n+1}=\Phi _{n}+u\Phi _{n}^{\delta +1}+\epsilon ,
\label{tangent1}
\end{equation}
where the amplitude $u$ depends on $g_{1}$, $g_{2}$ and $\delta $, and the shift parameter $\epsilon \sim R^{-d}$. Eq. (\ref{tangent1}) can be recognized as that describing the intermittency route to chaos in the vicinity of a tangent bifurcation \cite{schuster1}. The complete form of the map displays a superexponentially decreasing region that takes back the iterate close to the origin in one step. Thus the parameters of the thermal system determine the dynamics of the map. Averages made of order-parameter critical configurations are equivalent to iteration time averages along the trajectories of the map close to the tangent bifurcation. The mean number of iterations in the laminar region was seen to be related to the mean magnetization within a critical cluster of radius $R$. There is a corresponding power-law dependence of the duration of the laminar region on the shift parameter $\epsilon $ of the map \cite{athens3}. For $\epsilon >0$ the (small) Lyapunov coefficient is simply related to the critical exponent $\delta $ \cite{athens2}. As the size of subsystems or domains of the critical system is allowed to become infinitely large, the Lyapunov coefficient vanishes and the duration of the laminar episodes of intermittency diverges.
Without the feedback-to-the-origin feature in the map we recover the conditions described in the previous section. See Ref. \cite{robledo2} for more details.

\section{Dynamics inside the Feigenbaum attractor}
The dynamics at the chaos threshold, also referred to as the Feigenbaum attractor, of the $\zeta $-logistic map at $\mu _{c}$ has been analyzed recently \cite{baldovin2}, \cite{baldovin3}. By taking as initial condition $x_{0}=0$ we found that the resulting orbit consists of trajectories made of intertwined power laws that asymptotically reproduce the entire period-doubling cascade that occurs for $\mu <\mu _{c}$. This orbit captures the properties of the so-called `superstable' orbits at $\overline{\mu }_{n}<\mu _{c}$, $n=1,2,...$ \cite{schuster1}. Here again the Lyapunov coefficient $\lambda _{1}$ vanishes and in its place there appears a spectrum of $q$-Lyapunov coefficients $\lambda _{q}^{(k)}$. This spectrum was originally studied in Refs. \cite{politi1}, \cite{mori1} and our interest has been to examine the relationship of its properties with the Tsallis statistics. We found that the sensitivity to initial conditions has precisely the form of a $q$-exponential, of which we determine the $q$-index and the associated $\lambda _{q}^{(k)}$. The appearance of a specific value for the $q$ index (and actually also that for its conjugate value $Q=2-q$) turns out to be due to the occurrence of Mori's `$q$ transitions' \cite{mori1} between `local attractor structures' at $\mu _{c}$. Furthermore, we have also shown that the dynamical and entropic properties at $\mu _{c}$ are naturally linked through the nonextensive expressions for the sensitivity to initial conditions $\xi _{t}$ and for the entropy $S_{q}$ in the rate of entropy production $K_{q}^{(k)}$. We have corroborated analytically \cite{baldovin2} the equality $\lambda _{q}^{(k)}=K_{q}^{(k)}$ given by the nonextensive statistical mechanics.
Our results support the validity of the $q$-generalized Pesin identity at critical points of unimodal maps. Thus, the absolute values for the positions $x_{\tau }$ of the trajectory with $x_{t=0}=0$ at time-shifted $\tau =t+1$ have a structure consisting of subsequences with a common power-law decay of the form $\tau ^{-1/(1-q)}$ with $q=1-\ln 2/[(\zeta -1)\ln \alpha (\zeta )]$, where $\alpha (\zeta )$ is the Feigenbaum universal constant that measures the period-doubling amplification of iterate positions \cite{baldovin1}. That is, the Feigenbaum attractor can be decomposed into position subsequences generated by the time subsequences $\tau =(2k+1)2^{n}$, each obtained by proceeding through $n=0,1,2,...$ for a fixed value of $k=0,1,2,...$. See Fig. \ref{figura1}. The $k=0$ subsequence can be written as $x_{t}=\exp _{2-q}(-\lambda _{q}^{(0)}t)$ with $\lambda _{q}^{(0)}=(\zeta -1)\ln \alpha (\zeta )/\ln 2$. These properties follow from the use of $x_{0}=0$ in the scaling relation \cite{baldovin1}
\begin{equation}
x_{\tau }\equiv \left| g^{(\tau )}(x_{0})\right| =\tau ^{-1/(1-q)}\left| g(\tau ^{1/(1-q)}x_{0})\right| .
\label{trajectory1}
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=6cm,angle=-90]{arobfig1.eps}
\caption{Absolute values of positions, in logarithmic scales, of the first $1000$ iterations $\tau $ for a trajectory of the logistic map at the onset of chaos $\mu _{c}(0)$ with initial condition $x_{in}=0$. The numbers correspond to iteration times. The power-law decay of the time subsequences described in the text can be clearly appreciated.}
\label{figura1}
\end{figure}
The sensitivity associated to trajectories with other starting points $x_{0}\neq 0$ within the attractor can be determined similarly with the use of the time subsequences $\tau =(2k+1)2^{n}$.
One obtains $\lambda _{q}^{(k)}=(\zeta -1)\ln \alpha (\zeta )/[(2k+1)\ln 2]>0$, $k=0,1,2,...$, the positive branch of the Lyapunov spectrum, when the trajectories start at the most crowded ($x_{\tau =0}=1$) and finish at the most sparse ($x_{\tau =2^{n}}=0$) region of the attractor. By inverting the situation we obtain $\lambda _{2-q}^{(k)}=-2(\zeta -1)\ln \alpha (\zeta )/[(2k+1)\ln 2]<0$, $k=0,1,2,...$, the negative branch of $\lambda _{q}^{(k)}$, i.e. starting at the most sparse ($x_{\tau =0}=0$) and finishing at the most crowded ($x_{\tau =2^{n}+1}=1$) region of the attractor. Notice that $Q=2-q$, as $\exp _{Q}(y)=1/\exp _{q}(-y)$. For the case $\zeta =2$ see Refs. \cite{baldovin2} and \cite{baldovin3}; for general $\zeta >1$ see Refs. \cite{mayoral1} and \cite{mayoral2}, where also a different and more direct derivation is used. So, when considering these two dominant families of orbits, all the $q$-Lyapunov coefficients appear associated to only two specific values of the Tsallis index, $q$ and $2-q$. As a function of the running variable $-\infty <{\sf q}<\infty $, the $\lambda _{q}^{(k)}$ coefficients become a function $\lambda ({\sf q})$ with two steps located at ${\sf q}=q=1\mp \ln 2/[(\zeta -1)\ln \alpha (\zeta )]$. In this manner contact can be established with the formalism developed by Mori and coworkers and the $q$ phase transition obtained in Ref. \cite{mori1}. The step function for $\lambda ({\sf q})$ can be integrated to obtain the spectrum $\phi ({\sf q})$ ($\lambda ({\sf q})\equiv d\phi /d{\sf q}$) and its Legendre transform $\psi (\lambda )$ ($\equiv \phi -(1-{\sf q})\lambda $), the dynamic counterparts of the Renyi dimensions $D_{{\sf q}}$ and the spectrum $f(\alpha )$ that characterize the geometry of the attractor. The constant slopes in the spectrum $\psi (\lambda )$ represent Mori's $q$ transitions for this attractor, and the value $1-q$ coincides with that of the slope previously detected \cite{politi1}, \cite{mori1}. See Fig.
\ref{figura2}. Details appear in Ref. \cite{mayoral2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm,angle=0]{arobfig2.eps}
\caption{a) The Lyapunov coefficient function $\lambda({\sf q})$ at the chaos threshold at $\mu _{c}$ and b) the spectrum $\psi (\lambda )$. See text for description.}
\label{figura2}
\end{figure}
Ensembles of trajectories with starting points close to $x_{\tau =0}=1$ expand in such a way that a uniform distribution of initial conditions remains uniform for all later times $t\leq T$, where $T$ marks the crossover to an asymptotic regime. As a consequence of this we established \cite{baldovin3} the identity of the rate of entropy production $K_{q}^{(k)}$ with $\lambda _{q}^{(k)}$. The $q$-generalized rate of entropy production $K_{q}$ is defined via $K_{q}t=S_{q}(t)-S_{q}(0)$, $t$ large, where
\begin{equation}
S_{q}\equiv \sum_{i}p_{i}\ln _{q}\left( \frac{1}{p_{i}}\right) =\frac{1-\sum_{i}^{W}p_{i}^{q}}{q-1}
\label{tsallis1}
\end{equation}
is the Tsallis entropy, and where $\ln _{q}y\equiv (y^{1-q}-1)/(1-q)$ is the inverse of $\exp _{q}(y)$. See Figs. 2 and 3 in Ref. \cite{baldovin3}. Consider now the logistic map $\zeta =2$ in the presence of additive noise,
\begin{equation}
x_{t+1}=f_{\mu }(x_{t})=1-\mu x_{t}^{2}+\chi _{t}\sigma ,\;-1\leq x_{t}\leq 1,\;0\leq \mu \leq 2,
\label{logistic1}
\end{equation}
where $\chi _{t}$ is Gaussian-distributed with $\left\langle \chi _{t}\chi _{t^{\prime }}\right\rangle =\delta _{t,t^{\prime }}$, and $\sigma $ measures the noise intensity. We recall briefly the known properties of this problem \cite{schuster1}, \cite{crutchfield1}. Except for a set of zero measure, all the trajectories with $\mu _{c}(\sigma =0)$ and initial condition $-1\leq x_{0}\leq 1$ fall into the attractor with fractal dimension $d_{f}=0.5338...$.
These trajectories represent nonergodic states, since as $t\rightarrow \infty $ only a Cantor set of positions is accessible out of the total phase space $-1\leq x\leq 1$. For $\sigma >0$ the noise fluctuations wipe out the sharp features of the periodic attractors as these widen into bands similar to those in the chaotic attractors; nevertheless there remains a well-defined transition to chaos at $\mu _{c}(\sigma )$ where the Lyapunov exponent $\lambda _{1}$ changes sign. The period doubling of bands ends at a finite number of bands $2^{N(\sigma )}$ as the edge of chaos transition is approached, and this number decreases on the other side of the transition. This effect displays scaling features and is referred to as the bifurcation gap \cite{schuster1}, \cite{crutchfield1}. When $\sigma >0$ the trajectories visit sequentially a set of $2^{n}$ disjoint bands or segments leading to a cycle, but the behavior inside each band is completely chaotic. These trajectories represent ergodic states, as the accessible positions have a fractal dimension equal to the dimension of phase space. Thus the removal of the noise, $\sigma \rightarrow 0$, leads to an ergodic to nonergodic transition in the map. In the presence of noise ($\sigma $ small) one obtains, instead of Eq. (\ref{trajectory1}) \cite{robledo3}, \begin{equation} x_{\tau }=\tau ^{-1/(1-q)}\left| g(\tau ^{1/(1-q)}x)+\chi \sigma \tau ^{1/(1-r)}G_{\Lambda }(\tau ^{1/(1-q)}x)\right| , \label{trajectory3} \end{equation} where $G_{\Lambda }(x)$ is the first-order perturbation eigenfunction, and where $r=1-\ln 2/\ln \kappa \simeq 0.6332$. Use of $x_{0}=0$ yields $x_{\tau }=\tau ^{-1/(1-q)}\left| 1+\chi \sigma \tau ^{1/(1-r)}\right| $ or $x_{t}=\exp _{2-q}(-\lambda _{q}t)\left[ 1+\chi \sigma \exp _{r}(\lambda _{r}t)\right] $, where $t=\tau -1$ and $\lambda _{r}=\ln \kappa /\ln 2$. At each noise level $\sigma $ there is a 'crossover' or 'relaxation' time $t_{x}=\tau _{x}-1$ when the fluctuations start suppressing the fine structure of the orbits with $x_{0}=0$.
This time is given by $\tau _{x}=\sigma ^{r-1}$, the time when the fluctuation term in the perturbation expression for $x_{\tau }$ becomes unbounded by $\sigma $, i.e. $x_{\tau _{x}}=\tau _{x}^{-1/(1-q)}\left| 1+\chi \right| $. There are two regimes for time evolution at $\mu _{c}(\sigma )$. When $\tau <\tau _{x}$ the fluctuations are smaller than the distances between neighboring subsequence positions of the $\sigma =0$ orbit at $\mu _{c}(0)$, and the iterate positions with $\sigma >0$ fall within small non-overlapping bands, each around the $\sigma =0$ position for that $\tau $. Time evolution follows a subsequence pattern analogous to that in the noiseless case. When $\tau \sim \tau _{x}$ the width of the noise-generated band reached at time $\tau _{x}=2^{N}$, where $N\sim -\ln \sigma /\ln \kappa $, matches the distance between adjacent positions, and this implies a cutoff in the progress along the position subsequences. At longer times $\tau >\tau _{x}$ the orbits no longer follow the detailed period-doubling structure of the attractor. The iterates now trail through increasingly chaotic trajectories as bands merge with time. This is the dynamical image, observed along the time evolution of the orbits of a single state $\mu _{c}(\sigma )$, of the static bifurcation gap originally described in terms of the variation of the control parameter $\mu $ \cite{crutchfield1}, \cite{crutchfield2}, \cite{shraiman1}. The plateau structure of relaxation and the crossover time $t_{x}$ can be clearly observed in Fig. 1b of Ref. \cite{baldovin4}, where $\langle x_{t}^{2}\rangle -\langle x_{t}\rangle ^{2}$ is shown for several values of $\sigma $. \section{Parallels with dynamics near glass formation} We recall the main dynamical properties displayed by supercooled liquids on approach to glass formation. One is the growth of a plateau, and consequently a two-step process of relaxation, in the time evolution of two-time correlations \cite{debenedetti1}.
This consists of a primary power-law decay in the time difference $\Delta t$ (so-called $\beta $ relaxation) that leads into the plateau, the duration $t_{x}$ of which diverges as a power law of the difference $T-T_{g}$ as the temperature $T$ decreases to a glass temperature $T_{g}$. After $t_{x}$ there is a secondary power-law decay (so-called $\alpha $ relaxation) away from the plateau \cite{debenedetti1}. A second important (nonequilibrium) dynamic property of glasses is the loss of time translation invariance observed for $T$ below $T_{g}$, a characteristic known as aging \cite{bouchaud1}. The time fall-off of relaxation functions and correlations displays a scaling dependence on the ratio $t/t_{w}$, where $t_{w}$ is a waiting time. A third notable property is that the experimentally observed relaxation behavior of supercooled liquids is effectively described, via reasonable heat capacity assumptions \cite{debenedetti1}, by the so-called Adam-Gibbs equation, $t_{x}=A\exp (B/(TS_{c}))$, where $t_{x}$ is the relaxation time at $T$, and the configurational entropy $S_{c}$ is related to the number of minima of the fluid's potential energy surface \cite{debenedetti1}. We compare the dynamic properties at the edge of chaos described in the previous section with those known for the process of vitrification of a liquid as $T\rightarrow T_{g}$. At noise level $\sigma $ the orbits visit points within the set of $2^{N}$ bands and, as explained in Ref. \cite{robledo3}, this takes place in time in the same way that period doubling and band merging proceed in the presence of a bifurcation gap when the control parameter is run through the interval $0\leq \mu \leq 2$. Namely, the trajectories starting at $x_{0}=0$ duplicate the number of visited bands at times $\tau =2^{n}$, $n=1,...,N$; the bifurcation gap is reached at $\tau _{x}=2^{N}$, after which the orbits fall within bands that merge by pairs at times $\tau =2^{N+n}$, $n=1,...,N$.
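The subsequence scaling and the noise-induced cutoff described above are straightforward to reproduce numerically. The following minimal Python sketch (an illustration only, not the computation of the cited references; the numerical constants are standard estimates) iterates the map of Eq. (\ref{logistic1}) at the noiseless chaos threshold $\mu _{c}(0)$ and checks that the $x_{0}=0$ orbit positions at $\tau =2^{n}$ shrink by the universal factor $\alpha \simeq 2.5029$ per period doubling, the power law behind $x_{\tau }=\tau ^{-1/(1-q)}$ with $q=1-\ln 2/\ln \alpha $.

```python
import random

# Illustrative sketch only: the constants below are standard numerical
# estimates, not values taken from the cited references.
MU_C = 1.40115518909205   # chaos threshold mu_c(0) of x' = 1 - mu x^2
ALPHA = 2.502907875       # Feigenbaum universal constant alpha

def trajectory(x0, n_steps, sigma=0.0, seed=0):
    """Iterate x_{t+1} = 1 - mu_c x_t^2 + chi_t sigma with Gaussian chi_t."""
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n_steps):
        noise = sigma * rng.gauss(0.0, 1.0) if sigma > 0.0 else 0.0
        xs.append(1.0 - MU_C * xs[-1] ** 2 + noise)
    return xs

# Noiseless orbit from x_0 = 0: positions at tau = 2^n shrink by the
# universal factor alpha at each doubling, i.e. |x_{2^n}| ~ alpha^{-n}.
xs = trajectory(0.0, 2 ** 10)
ratios = [abs(xs[2 ** n]) / abs(xs[2 ** (n + 1)]) for n in range(2, 8)]
```

Switching on a small $\sigma >0$ in the same routine blurs this fine structure once $\tau $ exceeds the crossover time $\tau _{x}\simeq \sigma ^{r-1}$, as described in the text.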
The sensitivity to initial conditions grows as $\xi _{t}=\exp _{q}(\lambda _{q}t)$ ($q=1-\ln 2/\ln \alpha <1$) for $t<t_{x}$, but for $t>t_{x}$ the fluctuations dominate and $\xi _{t}$ grows exponentially, as the trajectory has become chaotic ($q=1$) \cite{robledo3}. This behavior was interpreted \cite{robledo3} to be the dynamical-system analog of the $\alpha $ relaxation in supercooled fluids. The plateau duration $t_{x}\rightarrow \infty $ as $\sigma \rightarrow 0$. Additionally, trajectories with initial conditions $x_{0}$ not belonging to the attractor exhibit an initial relaxation process towards the plateau as the orbit approaches the attractor. This is the map analog of the $\beta $ relaxation in supercooled liquids. The entropy $S_{c}(\mu _{c}(\sigma ))$ associated to the distribution of iterate positions (configurations) within the set of $2^{N}$ bands was determined in Ref. \cite{robledo3}. This entropy has the form $S_{c}(\mu _{c}(\sigma ))=2^{N}\sigma s$, since each of the $2^{N}$ bands contributes with an entropy $\sigma s$, where $s=-\int_{-1}^{1}p(\chi )\ln p(\chi )d\chi $ and where $p(\chi )$ is the distribution for the noise random variable. Given that $2^{N}=1+t_{x}$ and $\sigma =(1+t_{x})^{-1/(1-r)}$, one has $S_{c}(\mu _{c},t_{x})/s=(1+t_{x})^{-r/(1-r)}$ or, conversely, \begin{equation} t_{x}=(s/S_{c})^{(1-r)/r}. \label{adamgibbs1} \end{equation} Since $t_{x}\simeq \sigma ^{r-1}$, $r-1\simeq -0.3668$ and $(1-r)/r\simeq 0.5792$, then $t_{x}\rightarrow \infty $ and $S_{c}\rightarrow 0$ as $\sigma \rightarrow 0$, i.e. the relaxation time diverges as the 'landscape' entropy vanishes. We interpret this relationship between $t_{x}$ and the entropy $S_{c}$ to be the dynamical-system analog of the Adam-Gibbs formula for a supercooled liquid. Notice that Eq. (\ref{adamgibbs1}) is a power law in $S_{c}^{-1}$, while for structural glasses it is an exponential in $S_{c}^{-1}$ \cite{debenedetti1}.
This difference is significant, as it indicates how the superposition of molecular structure and dynamics upon the bare ergodicity-breakdown phenomenon built into the map modifies the vitrification properties. The aging scaling property of the trajectories $x_{t}$ at $\mu _{c}(\sigma )$ was examined in Ref. \cite{robledo3}. The case $\sigma =0$ is readily understood because this property is actually built into the position subsequences $x_{\tau }=\left| g^{(\tau )}(0)\right| $, $\tau =(2k+1)2^{n}$, $k,n=0,1,...$, referred to above. These subsequences can be employed for the description of trajectories that are at first held at a given attractor position for a waiting period of time $t_{w}$ and then released to the normal iterative procedure. For illustrative purposes we select the holding positions to be any of those for a waiting time $t_{w}=2k+1$, $k=0,1,...$, and notice that for the $x_{in}=0$ orbit these positions are visited at odd iteration times. The lower-bound positions for these trajectories are given by those of the subsequences at times $(2k+1)2^{n}$. See Fig. 1. Writing $\tau $ as $\tau =t_{w}+t$ we have that $t/t_{w}=2^{n}-1$ and $x_{t+t_{w}}=g^{(t_{w})}(0)g^{(t/t_{w})}(0)$, or \begin{equation} x_{t+t_{w}}=g^{(t_{w})}(0)\exp _{q}(-\lambda _{q}t/t_{w}). \label{trajectory5} \end{equation} This fully developed aging property is gradually modified when noise is turned on. The presence of a bifurcation gap limits its range of validity to total times $t_{w}+t<t_{x}(\sigma )$, and so it progressively disappears as $\sigma $ is increased. \section{Concluding remarks} The implications of joining the results about intermittency described in Sections 2 and 3 are apparent. In the critical clusters of infinite size $R\rightarrow \infty $ the dynamics of fluctuations obeys nonextensive statistics. This is expressed via the time series of the average order parameter $\Phi _{n}$, i.e.
trajectories $\Phi _{n}$ with close initial values separate in a superexponential fashion according to Eq. (\ref{sensitivity_00}) with $q=(2\delta +1)/(\delta +1)>1$ and with a $q$-Lyapunov coefficient $\lambda _{q}$ determined by the system parameter values $\delta $, $g_{1}$, $g_{2}$ \cite{robledo2}. Also, as described in Sections 4 and 5, the dynamics of noise-perturbed logistic maps at the chaos threshold exhibits the most prominent features of glassy dynamics in supercooled liquids. The existence of this analogy cannot be considered accidental, since the limit of vanishing noise amplitude $\sigma \rightarrow 0$ (the counterpart of the limit $T-T_{g}\rightarrow 0$ in the supercooled liquid) entails loss of ergodicity. Here we have shown that this nonergodic state corresponds to the limiting state, $\sigma \rightarrow 0$, $t_{x}\rightarrow \infty $, of a family of small-$\sigma $ noisy states with glassy properties, which are well described for $t<t_{x}$ via the $q$-exponentials of the nonextensive formalism \cite{robledo3}. What is the significance of the connections we have reviewed? Are there other connections between critical phenomena and transitions to chaos? Are all critical states (infinite correlation length with vanishing Lyapunov coefficients) outside BG statistics? Where, and in that case why, does Tsallis statistics apply? Is ergodicity failure the basic playground for applicability of generalized statistics? We can mention some noticeable limitations in the examples discussed. In the case of intermittency we have focused only on a single laminar episode (or escape from a position very close to tangency), though this can be of very large duration. In the case of dynamics at the edge of chaos we have paid attention only to the dominant features of the multifractal attractor (the most sparse, $x=0$, and most crowded, $x=1$, regions) as starting and final orbit positions.
{\bf Acknowledgments.} I am grateful to Constantino Tsallis for many valuable discussions. I thank Fulvio Baldovin and Estela Mayoral-Villa for their exceptional participation in the studies here described. Work partially supported by DGAPA-UNAM and CONACyT (Mexican Agencies).
\section{Introduction} A large body of observational evidence from the scale of dwarf galaxies to cosmological scales indicates that 27\% of the mass-energy content of the Universe is in the form of dark matter (DM)~\cite{Ade:2015xua,Bertone:2010zza}. Galaxies like our own are believed to be embedded in a DM halo which extends to more than ten times the scale of the visible galaxy. Broad efforts to identify the particle nature of DM during the last decade have brought together the particle physics, cosmology and astrophysics communities, and may ultimately enable us to constrain the nature of DM by combining the results of cosmological simulations, astrophysical observations, and particle DM searches. In standard cosmology, the DM problem requires new physics beyond the Standard Model of particle physics, since none of the known particles have the required properties to be the DM particle. One of the most extensively studied classes of DM candidates is that of Weakly Interacting Massive Particles (WIMPs), which can be searched for via direct, indirect and accelerator searches~\cite{Bertone:2010zza,Jungman:1995df,Bergstrom00,Bertone05}. Direct DM detection experiments search for WIMPs by measuring the small recoil energy deposited in an underground detector by a target nucleus after a collision with a WIMP. Several direct detection experiments are running around the globe, and have set stringent constraints on the DM-nucleon interaction cross section. For more than a decade the DAMA experiment has reported a hint of a DM signal~\cite{Bernabei:2016ivu}, which is in strong tension with the results of other direct detection experiments. The LUX (Large Underground Xenon) experiment currently sets the strongest exclusion limits in the plane of DM mass and spin-independent DM-nucleon cross section for large ($>6$ GeV) DM masses~\cite{Akerib:2016vxi}.
The interpretation of direct detection results is complicated by large uncertainties in the astrophysical distribution of DM in the Solar neighborhood. When presenting results in the DM mass and scattering cross section plane, it is usually assumed that the DM distribution in the Galaxy is described by the so-called Standard Halo Model (SHM)~\cite{Drukier:1986tm}: an isothermal sphere of DM, with an isotropic Maxwell-Boltzmann velocity distribution in the Galactic rest frame whose most probable (peak) speed equals the local circular speed. The DM particles are assumed to be in hydrostatic equilibrium, where the random velocity of the DM particles in the halo provides a collisionless pressure which balances the gravitational potential of the galaxy and prevents the halo from collapsing~\cite{Drukier:1986tm}. The SHM provides, however, a simplistic description of the DM distribution, and modern estimates rely instead on cosmological simulations of galaxy formation. Using the $\Lambda$CDM cosmological model with parameters inferred from precise cosmic microwave background measurements as the initial conditions, the universe can in fact be simulated starting at very large redshifts, before structure formation. We point the reader to Refs.~\cite{Kuhlen:2012ft} and \cite{Frenk:2012ph} for detailed reviews of cosmological simulations. Earlier N-body simulations were performed assuming that all the matter content of the universe is in the form of collisionless DM. High resolution DM-only (DMO) simulations predict local DM velocity distributions which deviate substantially from a Maxwellian distribution~\cite{Vogelsberger:2008qb, Kuhlen:2009vh}. However, DMO simulations have significant systematic uncertainties in their predictions due to neglecting the effect of baryons. Baryons play a significant role in the process of galaxy formation, and are necessary to draw realistic predictions from cosmological simulations.
In recent years, hydrodynamical simulations which include baryonic physics have become both feasible and realistic. This is mainly due to advances in the physical modeling of baryonic processes, the exponential growth of computing power, and improvements in numerical techniques. Many of the current hydrodynamical simulations are able to reproduce key properties of galaxies, and have achieved significant agreement with observations. DM substructures in both the spatial and velocity distribution of the Milky Way (MW) halo, in the form of DM streams or DM clumps, can also affect direct detection event rates. The effect of the DM stream associated with the accretion of the Sagittarius dwarf galaxy on direct detection event rates has been studied in isolated simulations of the MW, and shown to be non-negligible~\cite{Purcell:2012sh}. However, high resolution DMO simulations predict that the local DM distribution is very smooth. The DM density in the Solar neighborhood differs by at most 15\% from the mean over the best-fit ellipsoidal equidensity contour at the 99.9\% CL~\cite{Vogelsberger:2008qb}. DMO simulations have also found that DM streams in the Solar neighborhood are unlikely to be important~\cite{Vogelsberger:2010gd}. The DM substructures of the MW have recently been studied in hydrodynamical simulations, and it has been found that the presence of baryons, which cause tidal disruption, reduces the DM substructures in the inner parts (within $\sim 10$ kpc) of the MW~\cite{Sawala:2016tlo, Garrison-Kimmel:2017zes}. However, higher resolution hydrodynamical simulations are needed to study the DM substructures at the Solar position. In this article, we will review the local DM distribution extracted from different hydrodynamical simulations and the implications for DM direct detection.
In particular, we will discuss the direct detection predictions of the simulations performed by Ling {\it et al.}~\cite{Ling:2009eh}, Eris~\cite{Kuhlen:2013tra}, NIHAO~\cite{Butsky:2015pya}, EAGLE and APOSTLE~\cite{Bozorgnia:2016ogo}, MaGICC~\cite{Kelso:2016qqj}, and Sloane {\it et al.}~\cite{Sloane:2016kyi}. The paper is structured as follows. In Section~\ref{Notation} we review the formalism for computing direct detection event rates and the astrophysical inputs relevant for direct detection. In Section~\ref{simulations} we present the details of the different simulations studied in this work. In Section~\ref{IdentifyMW} we discuss the different criteria used by different simulation groups to identify simulated MW analogues. In Section~\ref{DMdensity} we review the local DM density extracted from different simulations, as well as the halo shapes of the MW analogues. In Section~\ref{f(v)} we discuss the local DM velocity distributions of different haloes and how they compare to the Maxwellian velocity distribution. The possibility of the existence of a dark disk in different simulations is also studied. In Section~\ref{Implications} we review the implications for direct detection and present an analysis of direct detection data using the DM distributions from different simulations. In Section~\ref{Nonstandard} we comment on non-standard DM-nucleus interactions, and we conclude in Section~\ref{conclusions}. \section{Dark matter direct detection} \label{Notation} \subsection{Event rate in DM direct detection} Consider the elastic collision of a DM particle $\chi$ with mass $m_\chi$ and a target nucleus with mass $m_A$ and atomic mass number $A$.
The energy differential event rate (per unit energy, detector mass, and time) is given by \begin{equation} \label{rate} \frac{d R}{d E_R} = \frac{\rho_\chi}{m_\chi} \frac{1}{m_A}\int_{v>v_{\rm min}}d^3 v \frac{d\sigma_A}{d{E_R}} v f_{\rm det}(\vect v, t), \end{equation} where $E_R$ is the energy of the recoiling nucleus, $\rho_\chi$ is the local DM density, $d\sigma_A/d E_R$ is the energy differential DM--nucleus scattering cross section, $f_{\rm det}(\vect v, t)$ is the local DM velocity distribution in the detector rest frame normalized to 1, and $\vect v$ is the relative velocity between DM and the nucleus, while $v\equiv |\vect{v}|$. For a DM particle to deposit a recoil energy $E_R$ in the detector, a minimum speed $v_{\rm min}$ is required, \begin{equation} v_{\rm min}=\sqrt{\frac{m_A E_R}{2 {\mu_{\chi A}^2}}}, \label{eq:vm} \end{equation} where $\mu_{\chi A}=m_\chi m_A/(m_\chi + m_A)$ is the DM--nucleus reduced mass. For the case of spin-independent DM-nucleus scattering with equal couplings of DM to protons and neutrons, the energy differential DM--nucleus cross section can be written in terms of the spin-independent DM-nucleon scattering cross section, $\sigma_{\rm SI}$, \begin{align} \frac{d\sigma_A}{dE_R} = \frac{m_A A^2}{2\mu_{\chi p}^2 v^2} {\sigma_{\rm SI}} F^2(E_R) \,. \label{eq:dsigmadE} \end{align} Here, $\mu_{\chi p}$ is the DM-nucleon reduced mass, and $F(E_R)$ is a form factor taking into account the finite size of the nucleus. 
From Eqs.~\eqref{rate} and \eqref{eq:dsigmadE}, the energy differential event rate can be written as \begin{equation}\label{eq:Reta} \frac{d R}{d E_R} = \frac{A^2 \sigma_{\rm SI}}{2 m_\chi \mu_{\chi p}^2} \, F^2(E_R) \, \rho_\chi \eta(v_{\rm min}, t), \end{equation} where \begin{equation}\label{eq:eta} \eta(v_{\rm min}, t) \equiv \int_{v > v_{\rm min}} d^3 v \frac{f_{\rm det}(\vect v, t)}{v} \,, \end{equation} is the halo integral, which together with the local DM density $\rho_\chi$ contains the astrophysical dependence of the event rate. The time dependence of the differential event rate is due to the time dependence of the velocity of the Earth with respect to the Sun, $\vect v_e(t)$. To find the DM velocity distribution in the detector frame, one has to boost the DM velocity distribution in the Galactic frame by the Sun's velocity with respect to the center of the Galaxy, $\vect v_s$, and the Earth's velocity with respect to the Sun, $\vect v_e(t)$, such that $f_{\rm det}(\vect v, t) = f_{\rm gal}(\vect v + \vect v_s + \vect v_e(t))$. \subsection{Astrophysical inputs} The DM density, $\rho_\chi$, and velocity distribution, $f_{\rm det}(\vect v, t)$, at the position of the Sun are the astrophysical inputs entering the direct detection event rate. To present the results of different direct detection experiments in the plane of the DM-nucleon scattering cross section and the DM mass, and hence to make any predictions for the particle physics nature of DM, an assumption for $\rho_\chi$ and $f_{\rm det}(\vect v, t)$ is required. Notice that the local DM density enters as a normalization of the event rate, while the DM velocity distribution enters the event rate through an integration over DM velocities in the detector frame (see Eq.~\ref{rate}). Depending on the target nuclei and the energy range probed by a direct detection experiment, the minimum speed, $v_{\rm min}$, is determined for that particular experiment.
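As a concrete illustration of the kinematic threshold in Eq.~\eqref{eq:vm}, the short Python sketch below evaluates $v_{\rm min}$ for a xenon target ($A\simeq 131$); the recoil energy and DM masses used are illustrative choices, not tied to any particular experiment.

```python
import math

# Minimal kinematics sketch of v_min = sqrt(m_A E_R / (2 mu_{chi A}^2)).
# Natural units with c = 1; masses in GeV, recoil energy converted from keV.
C_KM_S = 2.99792458e5   # speed of light in km/s
M_U = 0.9315            # atomic mass unit in GeV (approximate)

def v_min_km_s(m_chi_gev, mass_number, e_r_kev):
    """Minimum DM speed (km/s) needed to deposit e_r_kev on a nucleus."""
    m_a = mass_number * M_U
    mu = m_chi_gev * m_a / (m_chi_gev + m_a)  # DM-nucleus reduced mass
    e_r = e_r_kev * 1e-6                      # keV -> GeV
    return math.sqrt(m_a * e_r / (2.0 * mu ** 2)) * C_KM_S

# Same 5 keV xenon recoil, two illustrative DM masses:
v_light = v_min_km_s(10.0, 131, 5.0)   # ~ 570 km/s for a 10 GeV WIMP
v_heavy = v_min_km_s(100.0, 131, 5.0)  # ~ 95 km/s for a 100 GeV WIMP
```

For the values chosen, a light WIMP must move several times faster than a heavy one to produce the same recoil, which is why low-mass DM searches are sensitive mainly to the high-speed tail of $f_{\rm det}(\vect v, t)$.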
Thus, different experiments probe different DM speed ranges, and their dependence on the DM velocity distribution varies. In the SHM, the DM velocity distribution in the Galactic frame is assumed to be a Maxwellian distribution with a peak speed equal to the local circular speed, $v_c$, truncated at the escape speed, $v_{\rm esc}$, from the Galaxy, \begin{equation}\label{eq:Maxw} f_{\rm gal}({\bf v}) = \begin{cases} N\exp{\left(-{\bf v}^2/v_c^2\right)} & v<v_{\rm esc}\\ 0 & v \geq v_{\rm esc} \end{cases} \end{equation} where $N$ is a normalization factor. The local circular speed is usually assumed to be 220 or 230 km$/$s, and the commonly adopted escape speed is 544 km$/$s. When adopting the SHM, the uncertainties in the local circular speed and the escape speed from the Galaxy are usually neglected. The local circular speed can range from $(220 \pm 20)$~km$/$s to $(279 \pm 33)$~km$/$s~\cite{McMillan:2009yr}. Even if the Maxwellian functional form describes the local DM velocity distribution well, the relationship between its peak speed and the local circular speed can be nontrivial. The Galactic escape speed at the Solar position found by the RAVE survey is $v_{\rm esc} = 533^{+54}_{-41}$~km$/$s at the 90\% CL. The fiducial value for the local DM density in the SHM is 0.3 GeV$/$cm$^3$. There are two main approaches for determining $\rho_\chi$ from observations: local methods, where kinematical data from a nearby population of stars are used to constrain the total Galactic potential, and global methods, which are based on mass modeling of the DM and baryonic components of the MW and fits to kinematical data across the whole Galaxy.
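To make the role of these astrophysical inputs concrete, the following Python sketch gives a minimal Monte Carlo estimate of the halo integral of Eq.~\eqref{eq:eta} for the truncated Maxwellian of Eq.~\eqref{eq:Maxw}, boosted to the detector frame. It assumes the fiducial SHM parameters quoted above and a Solar speed of 232 km$/$s (an assumed illustrative value), and neglects the small time-dependent Earth velocity, so it is a rough illustration rather than a substitute for the full time-dependent calculation.

```python
import math
import random

# Fiducial SHM parameters quoted in the text; the Solar speed of 232 km/s
# is an assumed illustrative value.
V_C, V_ESC, V_SUN = 220.0, 544.0, 232.0  # km/s

def sample_speed_det(rng):
    """Draw a velocity from the truncated Maxwellian in the Galactic frame
    and return its speed in the detector (Solar) frame."""
    while True:
        v = [rng.gauss(0.0, V_C / math.sqrt(2.0)) for _ in range(3)]
        if math.sqrt(sum(c * c for c in v)) < V_ESC:
            break
    v[2] -= V_SUN  # boost: subtract the Sun's velocity, taken along z
    return math.sqrt(sum(c * c for c in v))

def eta(v_min, n=100_000, seed=1):
    """Monte Carlo estimate of eta(v_min) = <1/v for v > v_min>, (km/s)^-1."""
    rng = random.Random(seed)
    speeds = [sample_speed_det(rng) for _ in range(n)]
    return sum(1.0 / s for s in speeds if s > v_min) / n

eta_0 = eta(0.0)      # roughly 3e-3 (km/s)^-1 for these parameters
eta_500 = eta(500.0)  # smaller: only the high-speed tail contributes
```

The monotonic decrease of $\eta(v_{\rm min})$ with $v_{\rm min}$, and its vanishing above $v_{\rm esc}$ plus the Solar speed, are the features that control the relative sensitivity of experiments with different thresholds.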
The recent local and global estimates of the local DM density are in the range $\rho_\chi = (0.2 - 0.8)$ GeV$/$cm$^3$~\cite{Read:2014qva, Catena:2009mf, Weber:2009pt, Salucci:2010qr, McMillan:2011wd, Garbari:2011dh, Iocco:2011jz, Garbari:2011tv, Bovy:2012tw, Zhang:2012rsb, Bovy:2013raa, Pato:2015dua, Silverwood:2015hxa, Huang:2016, McMillan:2016}. \section{Hydrodynamical simulations} \label{simulations} Recently, many realistic hydrodynamical simulations have become possible, and several simulation groups have been able to produce disk galaxies with realistic masses and sizes. In this section we broadly review the hydrodynamical simulations which have been used to extract the DM distribution in MW-like galaxies and to study the implications for DM direct detection. We point the reader to Refs.~\cite{Ling:2009eh}, \cite{Kuhlen:2013tra}, \cite{Butsky:2015pya}, \cite{Bozorgnia:2016ogo}, \cite{Kelso:2016qqj}, \cite{Sloane:2016kyi}, and the references therein for the details of each simulation. The parameters of the simulations discussed in this review are presented in Table~\ref{tab:sims}. Notice that each simulation adopts a different method for solving the hydrodynamical equations, a different galaxy formation model, a different spatial resolution, and a different DM particle mass. The cosmological parameters used as initial conditions in the simulations are given in Table~\ref{tab:CosmoPar}. We expect that the small differences in the cosmological parameters used by different simulation groups will not cause a significant variation among their results. The different hydrodynamical prescriptions and parameters of the simulations (given in Table~\ref{tab:sims}) are expected to cause a much larger variation among the end results. Some of the most relevant baryonic processes for causing macroscopic changes in a simulated galaxy are gas cooling, the star formation treatment, and the supernova feedback mechanism.
Except for Eris, all the simulations discussed below assume that hydrogen, helium, and metals are present, and include metal line cooling processes. \begin{table} \centering \begin{tabular}{@{}cccccc@{}} \hline Simulation & code & $N_{\rm DM}$ & $m_{\rm g}~[{\rm M}_\odot]$ & $m_{\rm DM}~[{\rm M}_\odot]$ & $\epsilon$ [pc] \\ \hline Ling {\it et al.} & {\sc ramses} & $2662$ & -- & $7.46\times10^{5}$ & 200 \\ Eris & {\sc gasoline} & 81213 & $2\times10^{4}$ & $9.80\times10^{4}$ & 124 \\ NIHAO & {\sc ESF-Gasoline2} & -- & $3.16 \times10^{5}$ & $1.74\times10^{6}$ & 931 \\ EAGLE (HR) & {\sc P-Gadget (anarchy)} & 1821--3201 & $2.26\times10^{5}$ & $1.21\times10^{6}$ & 350 \\ APOSTLE (IR) & {\sc P-Gadget (anarchy)} & 2160, 3024 & $1.3\times10^{5}$ & $5.9\times10^{5}$ & 308 \\ MaGICC & {\sc gasoline} & 4849, 6541 & $2.2 \times 10^5$ & $1.11\times10^{6}$ & 310 \\ Sloane {\it et al.} & {\sc gasoline} & 5847--7460 & $2.7 \times 10^4$ & $1.5\times10^{5}$ & 174 \\ \hline \end{tabular} \caption{Parameters of the simulations discussed in this article. The columns specify the code used to perform the hydrodynamical simulation, the number of DM particles, $N_{\rm DM}$, within the torus region defined for each simulation in Section \ref{SpeedDist} around the Solar circle of the selected haloes, the initial gas particle mass, $m_g$, the DM particle mass, $m_{\rm DM}$, and the Plummer-equivalent physical softening length, $\epsilon$. The initial gas particle mass is undefined for the Ling {\it et al.} simulation since a grid code is used in that case.
} \label{tab:sims} \end{table} \begin{table} \centering \begin{tabular}{@{}ccccccc@{}} \hline Simulation & $\Omega_m$ & $\Omega_\Lambda$ & $\Omega_b$ & $H_0$ [km s$^{-1}$ Mpc$^{-1}$] & $n_s$ & $\sigma_8$ \\ \hline Ling {\it et al.} & 0.3 & 0.7 & 0.045 & 70 & -- & -- \\ Eris & 0.268 & -- & 0.042 & 73 & 0.96 & 0.76 \\ NIHAO & 0.3175 & 0.6825 & 0.0490 & 67.1 & 0.9624 & 0.8344 \\ EAGLE (HR) & 0.307 & 0.693 & 0.0482 & $67.8$ & 0.961 & 0.83 \\ APOSTLE (IR) & 0.272 & 0.728 & 0.0455 & $70.4$ & 0.967 & 0.81 \\ MaGICC & 0.24 & 0.76 & 0.04 & 73 & -- & 0.79 \\ Sloane {\it et al.} & 0.26 & 0.74 & 0.0455 & $73$ & 0.96 & 0.77 \\ \hline \end{tabular} \caption{Cosmological parameters used to generate the initial conditions in different simulations.} \label{tab:CosmoPar} \end{table} \subsection{Ling {\it et al.}} The simulation performed by Ling {\it et al.}~\cite{Ling:2009eh} uses the cosmological Adaptive Mesh Refinement code {\sc ramses}~\cite{Teyssier:2001cp}, and a ``zoom-in" technique to re-simulate a selected DMO halo at higher resolution. The mass and spatial resolution decrease as the distance from the central region increases. In this simulation, a standard Schmidt law is used to implement star formation by generating stars as a Poisson random process~\cite{Rasera:2005gq}. A second-order unsplit Godunov scheme is used to describe gas dynamics, and radiative gas cooling is implemented based on a standard equilibrium photo-ionised mixture of hydrogen and helium, where excess cooling due to metals is taken into account~\cite{Katz:1992db, Sutherland:1993ec}. Supernova feedback is taken into account using the recipe described in Ref.~\cite{Dubois:2008mz}. \subsection{Eris} The Eris hydrodynamical simulation~\cite{Kuhlen:2013tra} is a cosmological zoom-in simulation of a single MW analogue.
The Eris simulation run (details in Ref.~\cite{Guedes:2011ux}) was performed using the N-body$+$smooth particle hydrodynamics (SPH) code {\sc gasoline}~\cite{Wadsley:2003vm} and includes galaxy formation physics. The end product of their simulation is a barred, late-type spiral galaxy similar to the MW. The baryonic processes included in the simulation are: star formation based on an atomic gas density threshold of 5 atoms cm$^{-3}$ with 10\% star formation efficiency, radiative cooling at low temperatures which is Compton, atomic, and metallicity dependent, heating from supernova explosions and a cosmic ultraviolet field, and a ``blastwave" scheme for supernova feedback~\cite{Stinson:2006cp}. In the blastwave model, the thermal heating of gas from supernovae is mimicked by locally suspending the gas cooling. Notice that gas metal cooling is ignored in the Eris simulation, and this may affect the DM distribution in the Solar neighborhood. In addition to the Eris hydrodynamical simulation, this study includes a counterpart DMO simulation, ErisDark, which shares the same halo formation history as Eris but treats all matter as collisionless DM. \subsection{NIHAO} The NIHAO (Numerical Investigation of a Hundred Astrophysical Objects) simulations~\cite{Wang:2015jpa} are a large suite of cosmological zoom-in simulations, and use an updated version of the MaGICC simulations~\cite{Stinson:2012uh} for including baryonic processes. The initial conditions are generated using the Planck 2014 cosmological parameters listed in Table~\ref{tab:CosmoPar}. An improved version of the SPH code {\sc gasoline}~\cite{Wadsley:2003vm} is used, namely the {\sc ESF-Gasoline2} code, which includes a revised treatment of the hydrodynamics~\cite{2014MNRAS.442.3013K}. The same numerical resolution is maintained across the wide mass range in the simulations, from dwarf galaxies to massive MW-like spiral galaxies.
The galaxies in the NIHAO simulations reproduce the observed stellar mass-halo mass relation. The improved version of {\sc gasoline} includes a subgrid model for turbulent mixing of metals and energy~\cite{Wadsley:2008}, photoelectric heating of dust grains, ultraviolet heating, and ionization and cooling due to hydrogen, helium and metals~\cite{Shen:2009zd}. The star formation and feedback are similar to the model used in the MaGICC simulations~\cite{Stinson:2012uh}. Star formation occurs for gas temperature and density thresholds of 15000 K and 10.3 atoms cm$^{-3}$, respectively. The blastwave supernova feedback mechanism~\cite{Stinson:2006cp} is used, and the cooling function is affected by metals which are produced by type II and type Ia supernovae~\cite{Shen:2009zd}. Each halo in the NIHAO project is initially simulated at high resolution without baryons, and thus has a DMO counterpart. Notice that in Table~\ref{tab:sims} we only list the gas and DM particle masses for one of the MW-like haloes (`g1.92e12'), which is the only information presented in Ref.~\cite{Butsky:2015pya}. \subsection{EAGLE and APOSTLE} The simulations in the EAGLE project~\cite{Schaye:2015,Crain:2015} were performed using a state-of-the-art hydrodynamical SPH implementation, {\sc anarchy}~\cite{DallaVecchia:2015,Schaller:2015b}, which was built on top of an optimized version of the SPH code {\sc gadget}~\cite{Springel:2005}, as well as a detailed subgrid model of galaxy formation. The parameters of the subgrid model are calibrated to produce the observed relation between galaxy stellar mass and size for disk galaxies at $z=0.1$. The Planck 2013~\cite{Planck:2014} cosmological parameters are assumed (see Table~\ref{tab:CosmoPar}). In Ref.~\cite{Bozorgnia:2016ogo}, the implications for direct detection are studied using the EAGLE high resolution (HR) simulation, which in the EAGLE papers is referred to as Recal-L025N0752.
The APOSTLE project~\cite{Sawala:2015} uses the same code as EAGLE, and applies it to a series of zoomed regions containing DM halo pairs analogous to the MW--M31, or Local Group, system. We use the twelve intermediate resolution APOSTLE volumes, which we refer to as APOSTLE IR, and which are comparable in resolution to EAGLE HR\footnote{Higher resolution simulations, denoted as APOSTLE HR, also exist within the APOSTLE project, but were not used since their stellar masses are lower than the observed MW stellar mass range.}. These simulations were run using the WMAP7 cosmological parameters, given in Table~\ref{tab:CosmoPar}. The simulations of the EAGLE project use state-of-the-art subgrid models and numerical techniques to include star formation and its energy feedback, radiative cooling, stellar mass loss and metal enrichment, gas accretion onto and mergers of supermassive black holes, as well as AGN feedback. Feedback efficiencies are calibrated to reproduce the present-day stellar mass function and the observed relation between stellar mass and black hole mass, taking into account the galaxy sizes. Feedback from massive stars and AGN is significantly improved compared to previous simulations, such that thermal energy is injected into the gas without turning off hydrodynamical forces or radiative cooling. As a result, galactic winds are generated without the need to specify wind directions, velocities, or other information. For the EAGLE and APOSTLE simulations, companion DMO simulations were run treating all the matter content as collisionless. Therefore we can directly compare the DM distribution of galaxies in the hydrodynamical simulations with their DMO counterparts. \subsection{MaGICC} The MaGICC (Making Galaxies in a Cosmological Context) hydrodynamical simulations~\cite{Stinson:2012uh} were carried out using the SPH code {\sc gasoline}~\cite{Wadsley:2003vm}. Initial conditions are generated from the WMAP3 cosmology, as listed in Table~\ref{tab:CosmoPar}.
A sample of MW-like haloes is identified from a DMO simulation and re-evolved at high resolution from high redshift, including baryonic physics. High resolution particles are added in the region of interest using the zoom-in technique, while other regions contain lower resolution particles. Low-temperature metal cooling~\cite{Shen:2009zd}, ultraviolet background radiation, and a Schmidt-Kennicutt star formation law~\cite{Kennicutt:1997ng} are included in the simulations. The blastwave model for supernova feedback~\cite{Stinson:2006cp} is implemented, as well as early energy feedback from massive stars into the interstellar medium. The early feedback results in simulated galaxies which have small realistic bulges and do not appear to exhibit any problems stemming from overcooling. In addition to the hydrodynamical simulations, one halo in a DMO simulation is also studied in MaGICC. This halo was generated with cosmological initial conditions identical to one of the haloes in the hydrodynamical simulations. \subsection{Sloane {\it et al.}} Sloane {\it et al.}~\cite{Sloane:2016kyi} study galaxies which were simulated using the N-body$+$SPH code {\sc gasoline}~\cite{Wadsley:2003vm}. Four galaxies are selected and re-simulated using the zoom-in technique. The initial conditions used in the simulation assume the WMAP3 cosmological parameters (in Table~\ref{tab:CosmoPar}). The simulations include blastwave supernova feedback~\cite{Stinson:2006cp} and metal line cooling~\cite{Shen:2009zd}. A DMO version of each galaxy exists, which is simulated with the same initial conditions but does not include baryonic physics.
\begin{table} \centering \begin{tabular}{@{}cccccc@{}} \hline Simulation & Count & $M_{\rm star}~[\times 10^{10} {\rm M}_\odot]$ & $M_{\rm halo}~[\times 10^{12} {\rm M}_\odot]$ & $\rho_\chi$ [GeV$/$cm$^3]$ & $v_{\rm peak}$ [km$/$s] \\ \hline Ling {\it et al.} & 1 & $\sim 8$ & $0.63$ & 0.37--0.39 & 239 \\ Eris & 1 & $3.9$ & $0.78$ & 0.42 & 239 \\ NIHAO& 5 & 15.9 & $\sim 1$ & 0.42 & 192--363 \\ EAGLE (HR) & 12 & 4.65--7.12 & 2.76--14.26 & 0.42--0.73 & 232--289 \\ APOSTLE (IR) & 2 & 4.48, 4.88 & 1.64--2.15 & 0.41--0.54 & 223--234 \\ MaGICC & 2 & 2.4--8.3 & 0.584, 1.5 & 0.346, 0.493 & 187, 273 \\ Sloane {\it et al.} & 4 & 2.24--4.56& 0.68--0.91 & 0.3--0.4 & 185--204 \\ \hline \end{tabular} \caption{The number and properties of the selected MW analogues in the simulations discussed in this paper. The columns specify the number of the MW analogues, (the range of) their stellar mass $M_{\rm star}$, halo mass $M_{\rm halo}$, local DM density $\rho_\chi$, and best fit peak speed of the Maxwellian distribution, $v_{\rm peak}$, for each simulation.} \label{tab:MW-like} \end{table} \section{Identifying simulated Milky Way analogues} \label{IdentifyMW} To make precise quantitative predictions for the DM distribution from simulations, one needs to identify simulated galaxies which satisfy MW observational constraints. The criteria used to select simulated galaxies which resemble the MW are widely different among different simulation groups, and we review them in this section. The number and properties of the MW analogues selected in each simulation are presented in Table~\ref{tab:MW-like}. Notice that the halo mass has a different definition in each simulation as described below. \subsection{Ling {\it et al.}} The MW analogue in the simulation performed by Ling {\it et al.} has a total mass of $6.3 \times 10^{11}~{\rm M}_\odot$ within a virial radius defined as the radius containing a density equal to 200 times the critical density. 
This halo has a steady accretion rate and no major mergers in the last 8 Gyr. The bulge mass is close to the disk mass of $4 \times 10^{10}~{\rm M}_\odot$, which deviates from the MW observational constraints. \subsection{Eris} One high resolution simulated MW analogue exists in Eris. At redshift zero this MW analogue has a local circular velocity of 205 km$/$s, a bulge to disk ratio of 0.35, a stellar mass of $3.9 \times 10^{10}~{\rm M}_\odot$, and a virial mass of $7.8 \times 10^{11}~{\rm M}_\odot$. The virial mass is defined as the mass enclosed within the sphere that contains a mean density of 98 times the critical density of the Universe. \subsection{NIHAO} Four ``test'' galaxies are considered in Ref.~\cite{Butsky:2015pya} with masses between $5 \times 10^{10}$ and $10^{12}~{\rm M}_\odot$ within a virial radius defined as the radius containing a density equal to 200 times the critical density of the Universe. From these test galaxies, one with halo mass of $10^{12}~{\rm M}_\odot$ (called `g1.92e12') is considered as MW-like. Additionally, the DM velocity distribution is presented for four more galaxies (called `g8.26e11', `g1.12e12', `g1.77e12', and `g2.79e12'), which have been chosen to have a halo mass of $\approx 10^{12}~{\rm M}_\odot$ and a stellar mass similar to the MW. Notice that in Table~\ref{tab:MW-like}, we only list the stellar mass for g1.92e12 which is presented in Ref.~\cite{Butsky:2015pya}. For the local DM density we present in Table~\ref{tab:MW-like} the mean of the average local DM density for all galaxies with halo masses in the range of ($7.5 \times 10^{11} - 3.5 \times 10^{12}$)~${\rm M}_\odot$ as discussed in Section~\ref{DMdensity}. \subsection{EAGLE and APOSTLE} In the EAGLE HR and APOSTLE IR simulations, a total of 14 MW analogues (called `E1' to `E12' for EAGLE HR and `A1', `A2' for APOSTLE IR in Ref.~\cite{Bozorgnia:2016ogo}) are identified. 
We first identify all haloes in the mass range $5\times10^{11}<M_{200}/{\rm M}_\odot<2\times10^{13}$, where $M_{200}$ is defined as the mass enclosed within the sphere that contains a mean density 200 times the critical density. We then impose two additional selection criteria for identifying MW analogues: (i) The rotation curves of the simulated galaxies fit the observed MW rotation curve from Refs.~\cite{Iocco:2015xga, Pato:2017yai} well. (ii) The stellar mass of the simulated galaxies falls within the 3$\sigma$ MW stellar mass range derived from observations, $4.5 \times10^{10}<M_{*}/{\rm M}_\odot<8.3 \times10^{10}$~\cite{McMillan:2011wd}. As discussed in Ref.~\cite{Bozorgnia:2016ogo}, these two criteria are chosen from among the many possible ways of defining a MW analogue (others include the star formation rate and age) because they directly affect the local circular speed, and therefore the peak of the local DM speed distribution in the Galactic reference frame (see for example Fig.~1 of Ref.~\cite{Bozorgnia:2016ogo}). Notice that the halo masses of our selected MW analogues are higher than the MW halo mass, $M_{200, {\rm MW}}=1.2^{+0.7}_{-0.4} \times 10^{12}~{\rm M}_\odot$, expected from abundance matching~\cite{Busha:2010sg}. This is probably a result of the slightly too efficient feedback in the simulated haloes in EAGLE HR~\cite{Schaye:2015, Crain:2015}. As discussed in Ref.~\cite{Bozorgnia:2016ogo}, the large halo masses and the mismatch between halo mass and stellar mass do not affect the implications for DM direct detection. In particular, over the small halo mass range probed, we find little correlation between the halo mass and the local DM density or velocity distribution~\cite{Bozorgnia:2016ogo}. \subsection{MaGICC} Two simulated MW-like disk galaxies (called `g1536' and `g15784' in Ref.~\cite{Kelso:2016qqj}) with different accretion histories are identified in the MaGICC simulations~\cite{Kelso:2016qqj}.
Their virial masses are $5.84 \times 10^{11} {\rm M}_\odot$ and $1.50 \times 10^{12} {\rm M}_\odot$, where the virial radius is defined as the radius containing a density equal to $\sim 100$ times the critical density. Galaxies with masses between $\sim 5 \times 10^{11}$ and $\sim 2 \times 10^{12} {\rm M}_\odot$ were first selected at random from a DMO simulation, requiring that at $z=0$ there was no structure with a mass greater than $\sim 5 \times 10^{11} {\rm M}_\odot$ within 2.7 Mpc of the halo. The simulations for the regions containing the selected galaxies were re-evolved with baryonic physics. Both MW-like galaxies have present day halo masses similar to that of the MW. The stellar mass of g1536 is lower than the observed MW stellar mass, whereas g15784 has slightly more stellar mass than the MW. The two galaxies especially differ from one another in their stochastic accretion histories and in the initial conditions from which they evolved. The last merger of g1536 occurred at $z=2.9$, while g15784 had its last major merger at $z=2$. The DMO simulation (called `g1536DM' in Ref.~\cite{Kelso:2016qqj}) was generated with the same initial conditions as g1536. Notice that the rotation curves of all three haloes differ from the MW rotation curve due to their different total mass distributions~\cite{Kelso:2016qqj}. \subsection{Sloane {\it et al.}} Four simulated MW analogues (called `h239', `h258', `h277', and `h285' in Ref.~\cite{Sloane:2016kyi}) with virial masses\footnote{It is not clear how the virial mass is defined in Ref.~\cite{Sloane:2016kyi}.} in the range of (0.7 -- 0.9) $\times 10^{12} {\rm M}_\odot$ are selected in the simulation performed by Sloane {\it et al.} The selected galaxies are chosen to span a range of merger histories, with three out of four chosen to have recent mergers. Only h277 has a relatively quiescent merger history similar to the MW, with its last merger at $z \sim 3$. 
h239 was continually bombarded by small galaxies, and h285 underwent recent mergers from $z \sim 1.7$ until $z \sim 0.8$, including one counter-rotational merger. Finally, h258, which has a prominent dark disk (see the discussion in Section~\ref{VelComponents}), had an equal-mass merger at $z \sim 1$. This is the same halo studied in Ref.~\cite{Read:2009iv} at a lower resolution compared to Sloane {\it et al.}~\cite{Sloane:2016kyi}. \section{Local dark matter density and halo shape} \label{DMdensity} After having selected the simulated MW analogues, the next step is to extract the DM density in the Solar neighborhood for each simulated halo. To find the DM density at the Solar circle, a region around $\sim 8$ kpc from the Galactic center has to be identified. Each simulation group uses a slightly different region at the Solar circle to determine the average local DM density. Current hydrodynamical simulations are limited in resolution, and cannot resolve the local variations in the DM distribution at different azimuthal angles around the Solar circle. Therefore, the DM density is usually averaged over a cylindrical or spherical region around the Solar circle. Studying the shape of the inner DM halo helps our understanding of how baryonic physics affects the local DM density. In most simulations, the sphericity, $s$, of the DM halo, defined as the ratio of its minor to major axis, is determined. If the DM halo is a perfect sphere, $s=1$, while $s<1$ denotes a departure from a spherical shape. \subsection{Ling {\it et al.}} The average DM density in the Solar neighborhood is found in a torus aligned with the stellar disk, with galactocentric radii between 7 and 9 kpc and height between $-1$ kpc and 1 kpc with respect to the Galactic plane. Alternatively, a torus with the same radii, but a height between $-0.1$ kpc and 0.1 kpc, is considered. The average local DM density is 0.37 and 0.39 GeV$/$cm$^3$ for the larger and smaller torus, respectively.
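As a concrete illustration of this torus-averaging procedure, the following sketch (with hypothetical inputs: `positions` in kpc, given in a frame where the stellar disk lies in the $x$-$y$ plane, and particle `masses` in ${\rm M}_\odot$; the function name and default radii are our own choices) estimates the mean DM density in a disk-aligned torus and converts it to GeV$/$cm$^3$:

```python
import numpy as np

# Unit conversion: 1 Msun/kpc^3 expressed in GeV/cm^3
# (1 Msun = 1.115e57 GeV, 1 kpc = 3.086e21 cm)
MSUN_KPC3_TO_GEV_CM3 = 1.115e57 / (3.086e21) ** 3

def torus_density(positions, masses, r_in=7.0, r_out=9.0, half_height=1.0):
    """Mean DM density in a disk-aligned torus (radii and heights in kpc,
    masses in Msun); returns the density in GeV/cm^3."""
    R = np.hypot(positions[:, 0], positions[:, 1])  # cylindrical radius
    inside = (R > r_in) & (R < r_out) & (np.abs(positions[:, 2]) < half_height)
    # Volume of the cylindrical shell between r_in and r_out, full height 2h
    volume = np.pi * (r_out**2 - r_in**2) * 2.0 * half_height  # kpc^3
    return masses[inside].sum() / volume * MSUN_KPC3_TO_GEV_CM3
```

The default region (radii of 7 -- 9 kpc, half-height 1 kpc) matches the larger torus described above; other groups' cylindrical or spherical regions amount to changing the particle selection and volume.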
Due to the dark disk component (discussed in Section~\ref{VelComponents}), the shape of the DM halo is oblate. \subsection{Eris} The Eris simulation considers a cylindrical volume aligned with the stellar disk with a height extending from $-0.1$ kpc to 0.1 kpc. The DM density profile is calculated in evenly spaced logarithmic bins in the cylinder. The DM density in the cylinder at 8 kpc is 0.42 GeV$/$cm$^3$, which is 34\% higher than the spherically averaged DM density at 8 kpc, and 31\% higher than the spherically averaged DM density in the ErisDark DMO simulation. This is a result of both a contraction of the DM halo due to dissipational baryonic processes occurring in the plane of the Galactic disk and the natural tendency of galaxies to be aligned with the halo in which they live. The DM halo is oblate, with an intermediate to major axis ratio of 0.99 and a minor to major axis ratio of $s=0.69$, which is rounder and more axisymmetric compared to the DM halo in ErisDark. \subsection{NIHAO} The DM density profile is presented individually for the four test galaxies specified in Ref.~\cite{Butsky:2015pya}. For g1.92e12, which has a mass similar to the MW, the spherically averaged DM density is 0.7 GeV$/$cm$^3$ at 8 kpc, as read off from Fig.~11 of Ref.~\cite{Butsky:2015pya}. This is similar to the local DM density of the DMO counterpart of this halo, as seen from the same figure. The average DM density for all galaxies in the halo mass range of $7.5 \times 10^{11}<M_{\rm halo}<3.5 \times 10^{12}~{\rm M}_\odot$ is presented in Fig.~12 of Ref.~\cite{Butsky:2015pya}. At 8 kpc, the mean value of the average DM density is 0.42 GeV$/$cm$^3$, as read off from the top right panel of that figure. The shape of the inner halo, measured at 12\% of the virial radius, for the MW-like galaxies (mass of $\approx 10^{12}~{\rm M}_\odot$) in the hydrodynamical simulation is close to spherical, with an average minor to major axis ratio of $s=0.8$.
This is rounder than the shape of the inner DM halo in the DMO simulation, which is triaxial. As expected, the effect of the baryonic processes which modify the shape of the DM distribution is strongest in the inner halo. \subsection{EAGLE and APOSTLE} We find the local DM density of the simulated MW analogues in the EAGLE HR and APOSTLE IR simulations by considering a torus aligned with the stellar disk, with inner and outer radii of 7 and 9 kpc from the Galactic center, respectively, and a height from $-1$ to 1 kpc with respect to the Galactic plane. The average local DM density in this torus is 0.42 -- 0.73 GeV$/$cm$^3$ for the 12 haloes in the EAGLE HR simulation, and 0.41 -- 0.54 GeV$/$cm$^3$ for the two haloes in APOSTLE IR. The DM density varies on average by 32\% along the torus, which is smaller than the halo-to-halo scatter. For two out of the 14 MW analogues, the local DM density in the torus is more than 20\% larger than the average local DM density in a spherical shell with radius between 7 and 9 kpc. Such an increase could be a result of the DM halo contraction due to dissipational baryonic physics. We find that at 5 kpc the haloes are close to spherical, with sphericities in the range $s=0.85$ -- 0.95. The sphericities are lower by less than 10\% at 8 kpc. The deviation of the halo shapes towards either prolate or oblate distributions is very small. The DM sphericity is higher in the hydrodynamical simulations compared to the DMO case, in agreement with the results of earlier simulations~\cite{Dubinski:1993df, Bryan:2012mw} and higher resolution APOSTLE simulations~\cite{Schaller:2015mua}. \subsection{MaGICC} To find the local DM density for the two MW-like galaxies in the MaGICC simulations, a torus at the Solar circle is considered with a major radius of 8 kpc and a minor radius of 2 kpc. The average DM density in the torus is 0.346 and 0.493 GeV$/$cm$^3$ for the two galaxies.
The shape of one of the DM haloes (g1536) in the MaGICC hydrodynamical simulations is close to axisymmetric, with an intermediate to major axis ratio of $\sim 1$, and flattened, with an intermediate to minor axis ratio of $\sim 0.75$. This halo has a sphericity of $s \sim 0.75$ within 5 kpc, as read from the middle panel of Fig.~1 in Ref.~\cite{Kelso:2016qqj}. The other halo (g15784) is nearly spherical with $s \sim 0.9$ within 50 kpc. The differences in the shape of the two haloes are a result of their different accretion histories~\cite{Kelso:2016qqj}. The shape of the halo in the DMO simulation (g1536DM) is highly prolate, except in the inner 2 kpc region. Due to the absence of baryons, the DM halo in the DMO simulation is also triaxial. \subsection{Sloane {\it et al.}} Sloane {\it et al.} consider a cylindrical annulus in the plane of the stellar disk with a central radius of 8 kpc, and height and width of 1 kpc. The average local DM density is $\sim 0.3$ -- 0.4 GeV$/$cm$^3$ for the haloes they consider\footnote{It is not clear if this is the average DM density in the cylindrical annulus region or in a spherical shell around the Solar circle.}. The shapes of the DM haloes are not discussed in Ref.~\cite{Sloane:2016kyi}. \section{Local dark matter velocity distribution} \label{f(v)} In order to determine if the SHM provides an accurate description of the DM halo in the Solar neighborhood, one can compare the DM velocity distribution of the simulated MW analogues in the Galactic rest frame to a Maxwellian velocity distribution with a peak speed of 220 km/s, which is usually assumed in the direct detection community. In the case of deviations from the SHM, the DM velocity distribution can be fitted with a Maxwellian distribution with a free peak speed, as well as with various other fitting functions.
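The SHM benchmark used throughout this section is the Maxwellian speed distribution $f(v) \propto v^2 \exp(-v^2/v_{\rm peak}^2)$ with $v_{\rm peak}=220$ km$/$s. A minimal sketch of this reference distribution (the function name is ours; the high-speed cutoff at the escape speed is ignored here):

```python
import numpy as np

def shm_speed_distribution(v, v_peak=220.0):
    """Maxwellian speed distribution in the Galactic rest frame,
    f(v) ~ v^2 exp(-v^2/v_peak^2), normalized to unit integral.
    Speeds are in km/s; the distribution peaks at v = v_peak."""
    # Analytic normalization: integral of v^2 exp(-v^2/v0^2) over [0, inf)
    norm = np.sqrt(np.pi) * v_peak**3 / 4.0
    return v**2 * np.exp(-(v / v_peak) ** 2) / norm
```

Fitting a Maxwellian with a free peak to a simulated halo then amounts to adjusting the single parameter $v_{\rm peak}$ of this function.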
The velocity vector of the simulation particles is specified in a reference frame in the plane of the galaxy, with origin at the Galactic center, the $r$-axis in the radial direction, $\theta$ in the tangential direction, and the $z$-axis perpendicular to the stellar disk. The DM velocity modulus (speed) distribution, as well as the distributions of the radial ($v_r$), azimuthal ($v_\theta$), and vertical ($v_z$) components of the DM velocity distribution, can be calculated. The DM speed distribution and the three components of the velocity distribution are individually normalized to unity, such that $\int dv f(v)=1$ and $\int dv_i f(v_i)=1$ for $i=r, \theta, z$. Notice that the speed distribution, $f(v)$, is related to the velocity distribution, $f({\bf v})$, by $f(v) = v^2 \int d\Omega_{\bf v} f({\bf v})$, where $d\Omega_{\bf v}$ is an infinitesimal solid angle around the direction of the DM velocity ${\bf v}$. \subsection{DM speed distribution} \label{SpeedDist} As expected, due to the differences in hydrodynamical approach, cosmological parameters, and definitions of physical quantities, the simulations we analyzed exhibit a variety of local speed distributions. We found in particular that: \begin{itemize} \item {\bf There exists a large variation in local speed distributions between the results of different simulations}. To illustrate this point, we show in the left panel of Fig.~\ref{fig:fv} the local speed distributions in the Galactic reference frame for the MW-like halo from each hydrodynamical simulation whose speed distribution is farthest from the SHM Maxwellian with a peak speed of 220 km$/$s (shown as a black dashed curve in the same plot). These haloes are g2.79e12 from NIHAO, E3 from EAGLE HR, g1536 from MaGICC, and h258 from Sloane {\it et al}. \item {\bf There is a substantial halo-to-halo variation in local DM speed distributions within a given simulation suite}.
We demonstrate this point in the right panel of Fig.~\ref{fig:fv}, where we present the local DM distributions of two haloes in the EAGLE HR simulations which have speed distributions closest to (halo E12, shown in green) and farthest from (halo E3, shown in orange) the SHM, as well as the two haloes in the APOSTLE IR simulations (haloes A1 and A2, shown in blue and red, respectively). The shaded bands specify the 1$\sigma$ uncertainty in the speed distribution from each halo. \end{itemize} \begin{figure}[t] \includegraphics[width=0.49\textwidth]{Figs/fvall1halo-far.pdf} \includegraphics[width=0.49\textwidth]{Figs/fvEagleApostle.pdf} \caption{Local DM speed distributions in the Galactic rest frame for (a) MW-like haloes in different hydrodynamical simulations (solid colored lines) which have the farthest speed distribution from the SHM Maxwellian with a peak speed of 220 km$/$s (dashed black line), and (b) two MW-like haloes in the EAGLE HR simulation which have speed distributions closest to (green) and farthest from (orange) the SHM and two haloes in the APOSTLE IR simulation (blue and red). The solid curves and shaded regions specify the mean of the DM speed distribution and its 1$\sigma$ error, respectively. The speed distributions plotted on the left panel are taken from Refs.~\cite{Ling:2009eh, Kuhlen:2013tra, Butsky:2015pya, Bozorgnia:2016ogo, Kelso:2016qqj, Sloane:2016kyi}. \label{fig:fv}} \end{figure} A number of fitting formulae have been proposed in the literature to parametrize the departure from the standard Maxwellian, including: \begin{itemize} \item Generalized Maxwellian distribution~\cite{Ling:2009eh}: \begin{equation} \label{eq:genMax} f(v) \propto v^2 \exp[ -( v / v_0 )^{2 \alpha} ] \, , \end{equation} with free parameters $v_0$ and $\alpha$. A standard Maxwellian distribution is recovered for $\alpha = 1$. 
\vspace{15pt} \item The speed distribution proposed by Mao {\it et al.}~\cite{Mao:2012hf}: \begin{equation} \label{eq:Mao} f(v) \propto v^2 \exp[ - v/ v_0 ] ( v_{\rm esc}^2 - v^2)^p ~\Theta (v_{\rm esc} - v)\, , \end{equation} with free parameters $v_0$ and $p$. \vspace{15pt} \item The Tsallis~\cite{Tsallis:1987eu} distribution: \begin{equation} \label{eq:Tsallis} f(v) \propto v^2 \left(1 - (1-q) v^2/v_0^2 \right)^{q/(1-q)}\, , \end{equation} with free parameters $v_0$ and $q$. We obtain the standard Maxwellian distribution for $q \rightarrow 1$. \end{itemize} Notice that all the fitting functions are normalized such that $\int_0^{v_{\rm esc}} dv f(v)=1$. Below, we describe how well the local DM speed distributions of the MW analogues in each simulation fit a Maxwellian distribution with a free peak as well as the above fitting formulae. \subsubsection{Ling {\it et al.}} The DM speed distribution for the MW analogue is considered in a torus aligned with the stellar disk with radii between 7 and 9 kpc and a height between $-1$ kpc and 1 kpc. There are 2662 particles in this region. Strong deviations from the best fit Maxwellian speed distribution ($\alpha =1$ in Eq.~\ref{eq:genMax}) are observed, with a large deficit at large speeds. Although a Tsallis distribution (Eq.~\ref{eq:Tsallis}) provides an excellent fit to the DM particles in a spherical shell around 8 kpc, it does not provide a good fit to the particles in the torus around the Solar position. \subsubsection{Eris} To find the average DM speed distribution in the Galactic rest frame at the Solar position, Eris considers an annulus aligned with the stellar disk with inner and outer radii of 6 kpc and 10 kpc from the Galactic center, respectively, and with a height spanning from $-2$ kpc to 2 kpc. There are 81,213 DM particles in this region.
The DM speed distribution in the annulus has a mean speed of 220.8 km$/$s, and is compared to the SHM consisting of a Maxwellian velocity distribution with a peak speed of 220 km$/$s, and to a Maxwellian velocity distribution with the same peak speed as the simulation. At all speeds less than 350 km$/$s, the DM speed distribution in the simulation is larger than the SHM, and it decreases sharply at higher speeds. The DM speed distribution of the simulation is not a perfect fit to the matched-to-peak Maxwellian distribution, with a slight excess at 230 to 380 km$/$s and a deficit at higher speeds. The distribution instead fits well the fitting function proposed by Mao {\it et al.} (Eq.~\ref{eq:Mao}). To identify the effect of baryons on the local DM distribution, a comparison with the ErisDark DMO simulation is performed in Ref.~\cite{Kuhlen:2013tra}. In ErisDark, the average DM speed distribution is found in a spherical shell of width 4 kpc around the Solar circle. The DM speed distribution shows the usual departures from the Maxwellian distribution seen in DMO simulations, i.e.~fewer particles close to the peak and an excess of particles at high speeds. The speed distribution is well fit by the Mao {\it et al.} fitting function. Compared to Eris, the DM speed distribution is broadened and shifted to higher speeds. Baryons fall into the center of the DM halo and deepen the Galactic potential well, resulting in more high speed DM particles at the Solar position. The matched-to-peak Maxwellian is a much better fit to the local DM speed distribution in Eris than in ErisDark, indicating that {\it including baryons in the simulation results in DM speed distributions closer to a Maxwellian functional form}. \subsubsection{NIHAO} To find the local DM speed distribution in the Galactic rest frame, the NIHAO collaboration considers a spherical shell with radius between 7 kpc and 9 kpc.
The speed distributions from the simulated galaxies are compared to their best fit Maxwellian and Gaussian distributions. They find that compared to the best fit Maxwellian, the speed distributions of the simulated galaxies fall faster at high velocities, and are more symmetric around the peak speed. However, the speed distributions are fitted well by a Gaussian distribution. The local DM speed distributions for the haloes in the DMO simulations can be well fitted with a Maxwellian distribution. The large difference between the speed distributions in the hydrodynamical and DMO simulations may be due to the strong feedback model adopted in the NIHAO simulations which results in a strong baryonic impact. Due to the halo contraction in the hydrodynamical simulations, the peak of the DM velocity distribution moves to higher speeds compared to the DMO simulations. \subsubsection{EAGLE and APOSTLE} For each simulated MW analogue in the EAGLE HR and APOSTLE IR simulations, the average DM speed distribution is found in the same torus as described in Section~\ref{DMdensity} used for finding the local DM density. The torus contains a total of 1821 -- 3201 particles depending on the halo in the EAGLE and APOSTLE simulations. The local speed distributions of the MW analogues are fitted with various fitting functions. The Maxwellian distribution with a free peak ($\alpha =1$ in Eq.~\ref{eq:genMax}) provides a better fit to haloes in the hydrodynamical simulations compared to their DMO counterparts. The range of the best fit peak speeds of the Maxwellian distribution is 223 -- 289 km$/$s. The fitting function of Mao {\it et al.} (Eq.~\ref{eq:Mao}) which has one extra free parameter provides the best fit for almost all MW analogues. 
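The fitting formulae of Eqs.~(\ref{eq:genMax})--(\ref{eq:Tsallis}) are straightforward to tabulate and normalize numerically. A schematic implementation (function names and the illustrative escape speed are our own; none of this is taken from the simulation papers):

```python
import numpy as np

V_ESC = 550.0  # illustrative local escape speed in km/s

def gen_maxwellian(v, v0, alpha):
    """Generalized Maxwellian; alpha = 1 recovers the standard form."""
    return v**2 * np.exp(-((v / v0) ** (2.0 * alpha)))

def mao(v, v0, p):
    """Mao et al. form, truncated at the escape speed via the clip."""
    return v**2 * np.exp(-v / v0) * np.clip(V_ESC**2 - v**2, 0.0, None) ** p

def tsallis(v, v0, q):
    """Tsallis form; q -> 1 recovers the standard Maxwellian."""
    base = np.clip(1.0 - (1.0 - q) * v**2 / v0**2, 0.0, None)
    return v**2 * base ** (q / (1.0 - q))

def normalize(f_vals, v):
    """Normalize a tabulated f(v) to unit integral (trapezoidal rule)."""
    integral = np.sum(0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(v))
    return f_vals / integral
```

Each tabulated form can then be passed to a standard least-squares fitter, with $v_0$ and the shape parameter left free, against the binned speed distribution of a simulated halo.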
Figure~\ref{fig:HydroDMO} shows the local DM speed distribution for a MW-like halo in the EAGLE hydrodynamical simulation (solid orange line and its 1$\sigma$ error band) which has a speed distribution farthest from the SHM (halo E3) and its best fit Maxwellian speed distribution (dashed orange line). For comparison, the DM speed distribution for the same halo in the DMO simulation (solid brown line and its 1$\sigma$ error band) and its best fit Maxwellian speed distribution (dashed brown line) is also shown. The best fit Maxwellian speed distribution gives a good fit to the speed distribution of the simulated halo in the hydrodynamical case, but cannot provide a good fit in the DMO case. For most haloes in the EAGLE and APOSTLE DMO simulations, there are large deficits of DM particles at the peak, and an excess at low and very high speeds compared to the best fit Maxwellian distribution. These features are similar to those seen in other DMO simulations~\cite{Vogelsberger:2008qb, Kuhlen:2009vh}. \begin{figure}[t] \centerline{\includegraphics[width=0.6\textwidth]{Figs/fvHydroDMO.pdf}} \caption{Comparison of the local DM speed distributions in the Galactic rest frame for a MW analogue (halo E3) in the EAGLE HR hydrodynamical simulation (solid orange line and its 1$\sigma$ error band) and its DMO counterpart (solid brown line and its 1$\sigma$ error band). Dashed lines specify the best fit Maxwellian speed distributions for the hydrodynamical (orange) and DMO (brown) case. \label{fig:HydroDMO}} \end{figure} \subsubsection{MaGICC} The same torus described in Section~\ref{DMdensity} is considered to find the average DM speed distribution. The region of the torus contains 4849 and 6541 particles for the two MW-like galaxies in the MaGICC simulations. The speed distributions of the simulated haloes are compared to the SHM speed distribution inferred for each halo from its mass distribution. 
Other than the inferred SHM, two additional approximations are considered: the best fit Maxwellian speed distribution assuming a stationary halo, as well as one allowing for a bulk rotation of the halo. The SHM provides a reasonable fit to the speed distributions of the simulated haloes, except in the high speed tail, where there is some deficit of particles in the simulations compared to the SHM. The best fit Maxwellian distributions with and without rotation also provide good fits to the speed distributions from the simulations, and are almost identical. In agreement with previous DMO simulations, the SHM is not a good fit to the speed distribution of the halo in the MaGICC DMO simulation. \subsubsection{Sloane {\it et al.}} Sloane {\it et al.} consider the same cylindrical annulus described in Section~\ref{DMdensity} to find the local DM speed distribution. There are a total of 5847 -- 7460 DM particles in the annulus, depending on the halo. The average DM speed distribution in this annulus is found for each halo, and compared to the SHM assuming a Maxwellian speed distribution with a peak speed of 220 km$/$s, the best fit Maxwellian speed distribution, and the empirical speed distribution from Mao {\it et al.} (Eq.~\ref{eq:Mao}). Compared to the SHM, there is a deficit of high velocity DM particles in the simulations. However, the SHM is a better fit to the haloes in the hydrodynamical simulations compared to their DMO counterparts. The best fit Mao {\it et al.} fitting function provides a better fit to the speed distributions from the simulations compared to the SHM. As expected, including baryons in the simulation increases the gravitational potential in the galactic disk and results in more high speed DM particles compared to the DMO case.
\subsection{Best fit peak speed and local circular speed} \label{vpeakvc} In an isothermal halo, the DM velocity distribution in the Galactic rest frame is a Maxwellian distribution with a peak speed equal to the local circular speed. In Fig.~\ref{fig:vpeakvc} we show how the best fit peak speed, $v_{\rm peak}$, of the Maxwellian speed distribution in the Galactic rest frame for each simulated MW analogue in the EAGLE HR simulation is correlated with the local circular speed, $v_c$, at 8 kpc for that halo. Notice that $v_c$ is computed from the total mass (in stars, gas, and DM) of the halo enclosed within a sphere of radius 8 kpc. For comparison, we also show the correlation of $v_{\rm peak}$ and $v_c$ for the DMO counterparts of the same haloes. One can draw a few important conclusions from Fig.~\ref{fig:vpeakvc}. The local circular speeds for all MW analogues in the hydrodynamical simulation are larger than the local circular speed of their DMO counterparts. This is not surprising since the effect of baryons is especially important in the Solar circle and results in an increased $v_c$. As discussed in Ref.~\cite{Bozorgnia:2016ogo}, the enclosed stellar mass within the Solar circle is a large fraction of the total stellar mass, and hence $v_c$ at 8 kpc is strongly correlated with the total stellar mass. The presence of baryons results in deepening the gravitational potential as well as a compression of the DM distribution, leading to a larger $v_c$. Moreover, in the hydrodynamical haloes the rotation curves at 8 kpc are close to reaching their maximum values, whereas the DMO rotation curves are still rising at 8 kpc. Hence, it is also not surprising that at the Solar circle, the MW-like haloes in the hydrodynamical simulation are closer to having an isothermal distribution compared to their DMO counterparts. This effect was also noticed and mentioned in Ref.~\cite{Kelso:2016qqj} for the haloes in the MaGICC simulations. 
Additionally, most MW analogues in the hydrodynamical simulation have a best fit Maxwellian peak speed, $v_{\rm peak}$, larger than their DMO counterparts. As discussed in Section~\ref{SpeedDist}, this effect, which is also seen in all other simulations discussed in this work (except for Ling {\it et al.}, which does not have a DMO counterpart), is due to baryons deepening the total gravitational potential of the galaxy and resulting in more high speed particles at the Solar circle. It is clear from Fig.~\ref{fig:vpeakvc} that assuming an isothermal distribution for the DMO haloes, namely setting the peak speed of the Maxwellian speed distribution equal to the local circular speed inferred from the DMO simulation, leads to local DM speed distributions which significantly differ from the true DM distribution of the simulated halo. We demonstrate how this assumption can affect direct detection implications in Fig.~\ref{fig:etaDMOHydro}. \begin{figure}[t] \centerline{\includegraphics[width=0.6\textwidth]{Figs/vpeakvc.pdf}} \caption{The correlation of the local circular speed, $v_c$, at 8 kpc with the peak speed, $v_{\rm peak}$, of the best fit Maxwellian speed distribution for the MW-like haloes in the EAGLE HR hydrodynamic simulation (blue) and their DMO counterparts (red). The case of an isothermal halo where $v_{\rm peak}=v_c$ is shown as a dashed black line. \label{fig:vpeakvc}} \end{figure} \subsection{DM velocity distribution components and dark disks} \label{VelComponents} To assess whether any velocity anisotropy is present at the Solar radius, one can study the radial, azimuthal, and vertical components of the DM velocity distribution in the Galactic frame. In this section we describe how well the DM velocity components of the MW analogues in each simulation fit different fitting functions.
In particular, the following fitting functions are considered: \begin{itemize} \item Gaussian function: \begin{equation}\label{eq:Gauss} f(v_i) = \frac{1}{\sqrt{\pi} v_0 } \exp \left[-(v_i - \mu)^2/v_0^2 \right] \, , \end{equation} with free parameters $v_0$ and $\mu$. \vspace{15pt} \item Generalized Gaussian function: \begin{equation}\label{eq:GenGauss} f(v_i) = \frac{1}{2 v_0 \Gamma(1 + 1/(2 \alpha))} \exp \left[ -\left((v_i - \mu)^2/v_0^2 \right)^\alpha \right] \, , \end{equation} with free parameters $v_0$, $\mu$, and $\alpha$. \end{itemize} The functions are normalized such that $\int_{-\infty}^{\infty} dv_i f(v_i)=1$. In cases where there is an asymmetry in the azimuthal component of the DM velocity distribution, it can be fitted with a double Gaussian function: \begin{equation}\label{eq:DoubleGauss} f(v_\theta) = c_1 f_{\rm 1}^{\rm Gauss}(v_\theta; v_1, \mu_1) + c_2 f_{\rm 2}^{\rm Gauss}(v_\theta; v_2, \mu_2) \, , \end{equation} with free parameters $c_1, v_1, \mu_1, v_2$, and $\mu_2$; $c_2$ is instead constrained by requiring $f(v_\theta)$ to be normalized to 1, i.e. $c_1 + c_2 = 1$. By studying the azimuthal component of the DM velocity distribution in different simulated haloes, we can also search for the existence of a ``dark disk''. A dark disk can form when merging satellite galaxies are disrupted by the baryons in the galactic disk. These satellites are dragged towards the galactic plane and disrupted by tidal forces, and their accreted material forms a DM disk co-rotating with the stellar disk~\cite{Read:2008fh}. The existence of a dark disk can modify the signals in direct detection experiments~\cite{Read:2009iv, Purcell:2009yp, Bruch:2008rx}. In particular, if a large fraction of DM particles in the Solar neighborhood are in the disk, the direct detection event rates could be enhanced, especially in the low recoil energy range.
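As a minimal illustration of how the velocity components are fitted, the sketch below implements the generalized Gaussian of Eq.~\ref{eq:GenGauss} and recovers its parameters from noiseless synthetic component data; the values $v_0 = 160$~km$/$s and $\mu = 20$~km$/$s are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def gen_gauss(v, v0, mu, alpha):
    """Generalized Gaussian of Eq. (GenGauss); alpha = 1 recovers
    the plain Gaussian of Eq. (Gauss)."""
    norm = 2.0 * v0 * gamma(1.0 + 1.0 / (2.0 * alpha))
    return np.exp(-(((v - mu)**2 / v0**2))**alpha) / norm

# Synthetic velocity-component data: a Gaussian (alpha = 1) with
# v0 = 160 km/s and a small net rotation mu = 20 km/s (illustrative)
v = np.linspace(-600.0, 600.0, 241)
f_data = gen_gauss(v, 160.0, 20.0, 1.0)

# Bounds keep alpha positive during the fit
popt, _ = curve_fit(gen_gauss, v, f_data, p0=[150.0, 0.0, 0.9],
                    bounds=([10.0, -100.0, 0.3], [500.0, 100.0, 3.0]))
v0_fit, mu_fit, alpha_fit = popt
```

A fitted $\alpha$ close to 1 indicates a nearly Gaussian component, while $\alpha < 1$ signals heavier tails, as found for the tangential component in some haloes.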
Depending on the rotation speed of the DM disk compared to the stellar disk, the phase of the annual modulation signal could also be shifted. In this section we will also review the predictions of recent hydrodynamical simulations for the existence of a dark disk component in simulated MW-like haloes. \subsubsection{Ling {\it et al.}} The components of the local DM velocity distribution for the haloes studied in Ling {\it et al.} exhibit anisotropy, and a strong deviation from a Gaussian velocity distribution (Eq.~\ref{eq:Gauss}). The mean of the radial and vertical velocity distributions is compatible with zero, while the DM particles exhibit rotation in the azimuthal direction with a minimum lag velocity of 75 km$/$s compared to the galactic disk, suggesting the existence of a dark disk component. The tangential velocity distribution is fitted well with a double Gaussian (Eq.~\ref{eq:DoubleGauss}), and the rotating DM component in the disk constitutes $\sim 25$\% of the total local DM density. The small fraction of the dark disk in this halo suggests a quiescent merger history, and is not expected to affect direct detection signals significantly~\cite{Ling:2009eh}. \subsubsection{Eris} For the simulated halo in Eris, the radial and vertical velocity distributions at the Solar circle have a mean compatible with zero, while the tangential velocity distribution is skewed towards positive tangential velocities. This is indicative of a population of DM particles co-rotating with the MW disk. To assess if this co-rotation is due to the presence of a dark disk component, the amount of material accreted from satellite galaxies, deposited into the disk, and co-rotating with the stellar disk is computed. Computed in this way, 9.1\% of the DM density in the disk is due to the dark disk component. 
If the additional criterion that the rotation speed of the dark disk lies within 50 km$/$s of the stellar rotation speed is applied, the fraction of the dark disk will be reduced to 3.2\%. The radial and azimuthal DM velocity distributions are narrower in ErisDark compared to Eris in which dissipational baryonic processes have broadened the distributions. However, the vertical velocity distribution in ErisDark is slightly broader than Eris. The asymmetry seen in the azimuthal velocity distribution in Eris does not occur in ErisDark, and the three components of the velocity distribution have a mean compatible with zero. \subsubsection{NIHAO} The components of the DM velocity distribution were not analyzed in Ref.~\cite{Butsky:2015pya}. The DM velocity distribution was however found to be the same in a spherical shell of radius between 7 and 9 kpc, and in a torus aligned with the plane of the disk, suggesting that the DM distribution is spherically symmetric. Based on this observation, Ref.~\cite{Butsky:2015pya} concludes that their simulated haloes do not have dark disks. \subsubsection{EAGLE and APOSTLE} The three components of the DM velocity distribution for the MW analogues in the EAGLE HR and APOSTLE IR simulations show clear velocity anisotropy. The radial velocity distribution is broader than the vertical and tangential distributions. The distribution of the radial and vertical velocity components are generally peaked around zero, and well fitted with a generalized Gaussian (Eq.~\ref{eq:GenGauss}) with $\alpha \sim 1$ (close to a Gaussian function). The tangential velocity distribution is well fitted with either a Gaussian or generalized Gaussian with $0.6< \alpha < 1$. In five haloes, there is a significant non-zero mean azimuthal speed ($|\mu| > 20$~km$/$s). For the DMO counterparts of the MW analogues, the distributions of the radial and vertical velocity components peak around zero. 
The mean of the best fit generalized Gaussian for the azimuthal velocity component in the hydrodynamical case is larger than the corresponding mean in the DMO simulation by 3$\sigma$. To study if the asymmetry in the azimuthal component of the DM velocity distribution in some haloes in the hydrodynamical case is due to the existence of a dark disk, we compare the azimuthal velocity distribution of the star particles in the torus for each halo with that of DM particles. Only two haloes have a mean azimuthal velocity comparable (within 50 km$/$s) to that of the stars. The azimuthal component of the DM velocity distribution can also be fitted with a double Gaussian (Eq.~\ref{eq:DoubleGauss}). For the two haloes rotating as fast as the stars, the fraction of the rotating component extracted from the double Gaussian is 32\% and 43\%. In summary, from the 14 MW analogues in the EAGLE HR and APOSTLE IR hydrodynamical simulations, there is a hint for the existence of a co-rotating dark disk only for two haloes. Dark disks have also been extensively searched for in 24 haloes in the APOSTLE IR simulations~\cite{Schaller:2016uot}, two of which are the MW analogues (based on our criteria defined in Section~\ref{IdentifyMW}) which we have studied in Ref.~\cite{Bozorgnia:2016ogo} and reviewed in this work. Ref.~\cite{Schaller:2016uot} finds that only one out of 24 haloes in APOSTLE IR shows evidence for a dark disk. The dark disk component developed for this one halo because of a recent impact with a large satellite, and from MW kinematical data we do not expect such an encounter for our Galaxy. \subsubsection{MaGICC} In the two MaGICC MW-like galaxies studied in Ref.~\cite{Kelso:2016qqj}, the SHM inferred from the mass of each simulated halo, as well as the best fit Maxwellian velocity distributions with and without rotation, are almost indistinguishable and provide good fits to the components of the velocity distributions of the simulated haloes.
In the DMO simulation, the SHM is a bad fit to all three components of the DM velocity distribution at all velocities. Compared to the SHM, the best fit Maxwellian with and without rotation provide better fits to the DMO distributions, but the discrepancies are still large. The best fit rotational speed of the DM particles in the torus is at most $\sim 20$~km$/$s, suggesting that the two simulated galaxies do not have a rotating dark disk component. Similar to the conclusion reached in Refs.~\cite{Bozorgnia:2016ogo} and \cite{Schaller:2016uot} for the EAGLE HR and APOSTLE IR simulations, Ref.~\cite{Kelso:2016qqj} concludes that dark disks are not a generic prediction of hydrodynamical simulations. \subsubsection{Sloane {\it et al.}} The components of the DM velocity distribution are not presented in Sloane {\it et al.}~\cite{Sloane:2016kyi}. It is however mentioned that the DM component of some of the galaxies studied in the hydrodynamical simulation shows evidence of co-rotation with the stellar disk, pointing to the presence of a dark disk. As discussed in Section~\ref{IdentifyMW}, one of the haloes (h258) had a large recent merger. For one of the other studied haloes (h285), the DM shows evidence of counter-rotation with respect to the stellar disk, which is not surprising since the halo had a recent counter-rotational merger. Only one of the selected haloes has a relatively quiescent merger history similar to the MW. The DM velocity distributions for the haloes in the DMO simulation are largely isotropic. \section{Implications for direct detection} \label{Implications} In this section we discuss the halo integrals (given in Eq.~\ref{eq:eta}) and direct detection exclusion limits or allowed regions in the plane of DM mass and scattering cross section, as extracted from the Eris, EAGLE and APOSTLE, MaGICC, and Sloane {\it et al.} simulations.
Notice that Ling {\it et al.}~\cite{Ling:2009eh} and Butsky {\it et al.}~\cite{Butsky:2015pya} (NIHAO simulations) do not provide plots of halo integrals for their simulated haloes, and are therefore not included in the discussion of this section. \subsection{Halo integrals} The left panel of Fig.~\ref{fig:eta} shows the time averaged halo integrals (Eq.~\ref{eq:eta}) obtained from the DM velocity distributions of simulated haloes in different simulations as a function of the minimum WIMP speed, $v_{\rm min}$. To compare the results of different simulations with the SHM Maxwellian (with peak speed of 220~km$/$s and escape speed of 544~km$/$s, specified with a dashed black line), only the halo integral of the halo whose DM speed distribution in the Galactic rest frame is farthest from the SHM is shown for each simulation. The DM speed distributions for the same haloes are presented in the left panel of Fig.~\ref{fig:fv}. The right panel of Fig.~\ref{fig:eta} shows the halo integrals and their 1$\sigma$ uncertainties for the two MW analogues in the EAGLE HR simulation with DM speed distributions closest to (halo E12, shown in green) and farthest from (halo E3, shown in orange) the SHM Maxwellian, and the two MW analogues in the APOSTLE IR simulation (haloes A1 and A2, shown in blue and red, respectively). The DM speed distributions for the same haloes are shown in the right panel of Fig.~\ref{fig:fv}. The halo integrals obtained from the best fit Maxwellian speed distributions are shown as dashed curves with matching colors for each halo.
To obtain the best fit Maxwellian halo integral for each halo, we boost the best fit Maxwellian speed distribution in the Galactic rest frame by the local circular speed at 8 kpc for that halo\footnote{Notice that in Ref.~\cite{Bozorgnia:2016ogo} the best fit Maxwellian halo integrals were obtained by boosting the best fit Maxwellian speed distribution in the Galactic rest frame for each halo by its best fit Maxwellian peak speed, instead of boosting by its local circular speed. Since boosting by the local circular speed of each halo is more consistent with the simulation data, we obtain an even better fit to the halo integrals of the simulated haloes (as shown in the right panel of Fig.~\ref{fig:eta}) compared to the results presented in Ref.~\cite{Bozorgnia:2016ogo}.}. \begin{figure}[t] \includegraphics[width=0.49\textwidth]{Figs/petaAll.pdf} \includegraphics[width=0.49\textwidth]{Figs/petaEagleApostle.pdf} \caption{Time averaged halo integrals for (a) haloes in different simulations (colored lines) compared to the SHM Maxwellian halo integral with peak speed of 220 km$/$s and escape speed of 544~km$/$s (dashed black line), and (b) the two haloes in the EAGLE HR simulation with DM speed distributions closest to (green) and farthest from (orange) the SHM Maxwellian and the two MW analogues in the APOSTLE IR simulation (blue and red). The solid lines and shaded regions show the mean halo integral and its 1$\sigma$ uncertainty, respectively. The dashed colored lines, which are very similar to the solid lines, show the halo integrals obtained from the best fit Maxwellian speed distribution for each halo (with matching colors). The curves in the left panel are obtained from Refs.~\cite{Kuhlen:2013tra, Bozorgnia:2016ogo, Kelso:2016qqj, Sloane:2016kyi}. The green curve in the left panel shows the average of the maximum and minimum halo integrals presented in Sloane {\it et al.}~\cite{Sloane:2016kyi}. 
\label{fig:eta}} \end{figure} Below we discuss how well the halo integrals of the simulated haloes fit the SHM or the best fit Maxwellian halo integral. \subsubsection{Eris} The time-averaged halo integral of the halo in Eris is compared to the SHM with peak speed of 220 km$/$s. At low and intermediate values of $v_{\rm min}$ there are only small differences between the Eris and SHM halo integrals. In particular, for $v_{\rm min}<180$~km$/$s, the halo integral in Eris is larger by up to 15\% compared to the SHM, but is suppressed at larger $v_{\rm min}$. The suppression of the halo integral at high $v_{\rm min}$ compared to the SHM may be due to the non-rotating DM density enhancement in the disk. The tail of the halo integral in Eris is moved to higher DM speeds compared to ErisDark. This is due to the broadening of the DM speed distribution in Eris compared to ErisDark as a result of the halo contraction. \subsubsection{EAGLE and APOSTLE} As inferred from Fig.~\ref{fig:eta}, there is a significant halo-to-halo scatter in the halo integrals of the MW analogues in the EAGLE HR and APOSTLE IR simulations. The halo integrals obtained from the best fit Maxwellian speed distributions fall within the 1$\sigma$ uncertainty band of the halo integrals for all but one MW analogue, where there is only a very small deviation at large $v_{\rm min}$. On the other hand, the SHM Maxwellian with a peak speed of 220 km$/$s does not fit the halo integrals of all the MW analogues, especially at higher $v_{\rm min}$. This is due to the different peak speeds of the DM velocity distributions of the MW analogues compared to the SHM. In the left panel of Fig.~\ref{fig:etaDMOHydro} we show a comparison of the halo integrals of a MW analogue in the EAGLE HR hydrodynamical simulation and its DMO counterpart obtained from the DM speed distributions presented in Fig.~\ref{fig:HydroDMO}. 
Baryons significantly affect the velocity distribution and the halo integrals at the Solar radius, resulting in a shift of the tails of the halo integral towards higher minimum velocities compared to the DMO case. For both the hydrodynamical and DMO cases, the best fit Maxwellian halo integral is computed from the best fit Maxwellian DM speed distribution in the Galactic rest frame boosted by the local circular speed of the corresponding halo. The best fit Maxwellian halo integrals computed in this way fall within the 1$\sigma$ uncertainty band of the halo integrals of the simulated halo in both cases, although the fit is better in the hydrodynamical case. In the right panel of Fig.~\ref{fig:etaDMOHydro} we show a comparison of the best fit Maxwellian halo integrals (dashed lines, shown also in the left panel of the same figure) and the inferred SHM halo integrals (dotted lines), for each halo in the hydrodynamical and DMO cases. The inferred SHM halo integrals are computed by assuming a Maxwellian DM speed distribution for each halo with a peak speed equal to the local circular speed of that halo. For the DMO case, the inferred SHM and the best fit Maxwellian halo integrals are significantly different. This is due to the large difference between the best fit Maxwellian peak speed of the DMO haloes and their local circular speed, as shown in Fig.~\ref{fig:vpeakvc} and discussed in Section~\ref{vpeakvc}. The inferred SHM, however, provides a better fit to the halo integral of the hydrodynamical halo compared to the DMO case. These conclusions remain the same for all the MW analogues in the EAGLE HR and APOSTLE IR simulations and their DMO counterparts. In general, halo integrals computed from the best fit Maxwellian speed distributions boosted by the local circular speed of each halo provide an excellent fit to the halo integrals of the MW analogues. 
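The procedure described above -- building the halo integral from a best fit Maxwellian boosted by the local circular speed -- can be sketched as follows. This is a simplified version with no escape-speed truncation, and the 230~km$/$s peak and lab speeds are illustrative assumptions rather than values of a specific halo.

```python
import numpy as np

def boosted_maxwellian(v, v_peak, v_lab):
    """Detector-frame speed distribution obtained by boosting a
    Maxwellian with peak speed v_peak by the lab speed v_lab
    (no escape-speed cut, for simplicity)."""
    return (v / (np.sqrt(np.pi) * v_peak * v_lab)) * (
        np.exp(-((v - v_lab) / v_peak)**2)
        - np.exp(-((v + v_lab) / v_peak)**2))

def eta(v_grid, f_det, v_min):
    """Halo integral eta(v_min) = int_{v > v_min} dv f_det(v)/v,
    from a binned detector-frame speed distribution."""
    mask = v_grid > v_min
    return np.trapz(f_det[mask] / v_grid[mask], v_grid[mask])

# Illustrative numbers only: peak speed and circular speed of 230 km/s
v_grid = np.linspace(1.0, 1500.0, 1500)
f_det = boosted_maxwellian(v_grid, 230.0, 230.0)
eta_0, eta_300 = eta(v_grid, f_det, 0.0), eta(v_grid, f_det, 300.0)
```

By construction $\eta(v_{\rm min})$ decreases monotonically with $v_{\rm min}$, which is why differences between the simulated and Maxwellian distributions are most visible in the high velocity tail.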
\begin{figure}[t] \includegraphics[width=0.49\textwidth]{Figs/petaDMOHydro.pdf} \includegraphics[width=0.49\textwidth]{Figs/petaDMOHydroComp.pdf} \caption{(a) Comparison of the halo integrals for a MW analogue (halo E3) in the EAGLE HR hydrodynamical simulation (solid orange line and its 1$\sigma$ error band) and its DMO counterpart (solid brown line and its 1$\sigma$ error band) obtained from the DM speed distributions shown in Fig.~\ref{fig:HydroDMO}. Dashed lines specify the halo integrals obtained from the best fit Maxwellian distributions for the hydrodynamical (orange) and DMO (brown) haloes. (b) Comparison of the halo integrals obtained from the best fit Maxwellian (dashed lines, same as shown in panel (a)) and the inferred SHM distributions (dotted lines) for the same MW analogue and its DMO counterpart shown in panel (a). \label{fig:etaDMOHydro}} \end{figure} \subsubsection{MaGICC} As seen in Fig.~6 of Ref.~\cite{Kelso:2016qqj}, the halo integrals of the two MW analogues in MaGICC fit well the halo integrals for the SHM inferred from the mass of each halo as well as those for the best fit Maxwellian with and without rotation. Notice that some deviations exist at high $v_{\rm min}$ compared to the inferred SHM, which may be due to the low particle counts in the simulations at high speeds~\cite{Kelso:2016qqj}. The inferred SHM provides a very poor fit to the halo integral of the halo in the DMO simulation. However, the best fit Maxwellian with and without rotation both provide good fits for the DMO case. Notice that these conclusions are the same as those reached for the EAGLE simulations. \subsubsection{Sloane {\it et al.}} Sloane {\it et al.} study the maximum and minimum halo integrals during a year. They find that the halo integrals of the simulated haloes are lower than the SHM (with peak speed of 220 km$/$s) at high velocities, and higher than the SHM at low velocities.
However, the SHM halo integrals show an improved fit to the halo integrals of the simulated haloes in the hydrodynamic simulation compared to the DMO case. From Fig.~3 of Ref.~\cite{Sloane:2016kyi} one can also notice that other than one of the selected haloes, which has a prominent dark disk (h258), the best fit Maxwellian halo integrals show only small deviations at high $v_{\rm min}$ with respect to the halo integrals of the simulated haloes. In Fig.~\ref{fig:eta} we show the time-averaged halo integral for this same halo which has the farthest halo integral compared to the SHM, computed from the maximum and minimum halo integrals for h258 presented in Fig.~3 of Ref.~\cite{Sloane:2016kyi}. \subsection{Dark matter parameters} The DM distribution extracted from simulations could be used directly in the analysis of data from different direct detection experiments. We discuss how the allowed regions and exclusion limits set by direct detection experiments in the plane of spin-independent cross section and DM mass change compared to the SHM for the MW-like haloes studied in Refs.~\cite{Bozorgnia:2016ogo}, \cite{Kelso:2016qqj} and \cite{Sloane:2016kyi} using the EAGLE HR, MaGICC, and Sloane {\it et al.} simulations. The other hydrodynamical simulations either do not perform an analysis of direct detection data, or the data used are outdated, and therefore we do not present them here. For the purpose of comparing the predictions of different simulations for DM direct detection, we consider the positive signal from DAMA/LIBRA~\cite{Bernabei:2013xsa} (DAMA for short) and the null result from the LUX~\cite{Akerib:2013tjd} experiment. The DAMA experiment, which measures the scintillation signal in NaI crystals, has observed a 9.3$\sigma$ annual modulation signal over 14 annual cycles for a total exposure of 1.33 ton yr. The data on the DAMA annual modulation amplitude can be used to set the preferred DAMA regions in the $m_{\rm \chi}$ -- $\sigma_{\rm SI}$ plane.
The LUX experiment is a dual phase (liquid $+$ gas) detector operating in the Sanford Underground Laboratory in South Dakota, and measures the ionization and scintillation signals. The latest results presented by the LUX experiment~\cite{Akerib:2016vxi} for an exposure of $3.35 \times 10^4$ kg day are consistent with background. Notice that for the analysis of DAMA data, Refs.~\cite{Bozorgnia:2016ogo} and \cite{Kelso:2016qqj} use the latest DAMA results from Ref.~\cite{Bernabei:2013xsa}, while Ref.~\cite{Sloane:2016kyi} uses the slightly older DAMA data from Ref.~\cite{Bernabei:2010mq} with a total exposure of 1.17 ton yr. Notice also that Refs.~\cite{Bozorgnia:2016ogo}, \cite{Kelso:2016qqj} and \cite{Sloane:2016kyi} all use the 2014 LUX data~\cite{Akerib:2013tjd} which had an exposure of $\sim 10^4$ kg day. The LUX data have been updated since, but the conclusions discussed in Refs.~\cite{Bozorgnia:2016ogo}, \cite{Kelso:2016qqj} and \cite{Sloane:2016kyi}, and summarized here, remain the same. We refer the reader to Refs.~\cite{Bozorgnia:2016ogo}, \cite{Kelso:2016qqj} and \cite{Sloane:2016kyi} for the details of the analysis of direct detection data. The left panel of Fig.~\ref{Fig:ExcLim} shows the LUX exclusion limit (at 90\% CL) and the DAMA allowed region (at 3$\sigma$) for a MW-like halo from the EAGLE HR, MaGICC\footnote{Notice that the LUX exclusion limit for the MW analogue in MaGICC presented here is weaker at low DM masses than those presented in Fig.~9 of Ref.~\cite{Kelso:2016qqj}. The reason is that Ref.~\cite{Kelso:2016qqj} does not use the same lower energy bound of 3 keV that LUX uses when computing the exclusion limits~\cite{KelsoPrivCom}.
Therefore, instead of reading off the LUX exclusion limit for the MaGICC MW-like halo from Ref.~\cite{Kelso:2016qqj}, we compute it from the halo integral presented in Fig.~6 of Ref.~\cite{Kelso:2016qqj} using the 3 keV LUX energy threshold.} and Sloane {\it et al.} simulations, which has a local DM speed distribution farthest from the SHM Maxwellian with a peak speed of 220 km$/$s. The local DM speed distributions and halo integrals for the same haloes are shown in the left panels of Figs.~\ref{fig:fv} and \ref{fig:eta}, respectively. The local DM density of the haloes in the EAGLE HR (halo E3) and MaGICC (halo g1536) simulations is 0.68 and 0.346 GeV$/$cm$^3$, respectively, which is used to obtain the exclusion limits and preferred regions in the left panel of Fig.~\ref{Fig:ExcLim}. For the Sloane {\it et al.} halo, the local DM density is set to 0.4 GeV$/$cm$^3$~\cite{Sloane:2016kyi}. In order to solely show the effect of the DM velocity distribution on direct detection results, in the right panel of Fig.~\ref{Fig:ExcLim} we show the results for the same haloes as in the left panel, but setting the local DM density to 0.3 GeV$/$cm$^3$ for all haloes. One can see from Fig.~\ref{Fig:ExcLim} that the largest difference between the exclusion limits and allowed regions with respect to the SHM at all DM masses is due to the variation of the local DM density of the simulated haloes with respect to the SHM. The shift at low DM masses is the result of the different high velocity tail of the halo integrals of the simulated haloes with respect to the SHM.
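The rescaling between local DM densities is straightforward: since the event rate is proportional to $\rho_\chi\,\sigma$, an exclusion limit on the cross section scales as $1/\rho_\chi$. A minimal sketch (the numerical limit value below is hypothetical, used only to show the arithmetic):

```python
def rescale_limit(sigma_limit, rho_halo, rho_ref=0.3):
    """Cross-section limit derived with local density rho_halo
    (GeV/cm^3), rescaled to the reference density rho_ref.
    Since rate ~ rho * sigma, the limit scales as 1/rho."""
    return sigma_limit * rho_halo / rho_ref

# E.g. a (hypothetical) limit of 1e-45 cm^2 derived with halo E3's
# density of 0.68 GeV/cm^3 weakens by a factor 0.68/0.3 when quoted
# at the reference density of 0.3 GeV/cm^3
sigma_ref = rescale_limit(1.0e-45, 0.68)
```

This is the operation used to go from the left to the right panel of Fig.~\ref{Fig:ExcLim}; any residual differences between the rescaled curves then reflect the velocity distributions alone.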
\begin{figure}[t] \includegraphics[width=0.49\textwidth]{Figs/ExcLimAll.pdf} \includegraphics[width=0.49\textwidth]{Figs/ExcLimAll-rhoFix.pdf} \caption{LUX exclusion limit (at 90\% CL) and DAMA preferred regions (at 3$\sigma$) in the spin-independent DM-nucleon cross section and DM mass plane for a halo in the EAGLE HR (orange), MaGICC (purple), and Sloane {\it et al.} (green) simulations, as compared to the SHM Maxwellian (dashed black) with peak speed of 220 km$/$s. The local DM density is different for each halo in the left panel, whereas $\rho_\chi=0.3$~GeV$/$cm$^3$ for all haloes in the right panel. The curves in the left panel are obtained from Refs.~\cite{Bozorgnia:2016ogo, Kelso:2016qqj, Sloane:2016kyi}, and rescaled for the local DM density of 0.3~GeV$/$cm$^3$ to obtain the curves in the right panel. \label{Fig:ExcLim}} \end{figure} \section{Comments on non-standard interactions} \label{Nonstandard} The direct detection implications discussed so far in this review hold for the case of standard spin-independent and spin-dependent DM-nucleus interactions, where the energy differential cross section which enters in the direct detection event rate is proportional to $v^{-2}$ (Eq.~\ref{eq:dsigmadE} for the spin-independent case). This leads to event rates proportional to the halo integral, $\eta(v_{\rm min}, t)$, as given in Eqs.~\ref{eq:Reta} and \ref{eq:eta}. The DM-nucleus interaction can, however, be more complicated. For non-standard interactions, the differential cross section can have a different dependence on the relative velocity between the DM and the nucleus, and also depend on the exchanged momentum. The classification of all possible DM-nucleus interactions has been performed in the non-relativistic limit~\cite{Fan:2010gt, Fitzpatrick:2012ix, Fitzpatrick:2012ib}. 
For a very general set of non-relativistic effective operators, the energy differential cross section can be written as a linear combination of a velocity-dependent term proportional to $v^{-2}$ and a velocity-independent term~\cite{DelNobile:2013sia, Kahlhoefer:2016eds}. The former gives rise to the usual halo integral, $\eta(v_{\rm min}, t)$, while the velocity-independent term gives rise to a new velocity integral of the form \begin{equation}\label{eq:h} h(v_{\rm min}, t)=\int_{v>v_{\rm min}} d^3 v~v~ f_{\rm det}({\bf v},t). \end{equation} As a result, direct detection event rates will be a sum of two terms, one proportional to the usual halo integral, $\eta(v_{\rm min}, t)$, and the other proportional to $h(v_{\rm min}, t)$. An example of this linear combination of the two different velocity dependences is the case of magnetic dipole DM, in which the DM is assumed to be a fermion interacting via a magnetic dipole moment with the nucleus. We extract the velocity integral, $h(v_{\rm min}, t)$, from the local DM distributions in the EAGLE HR and APOSTLE IR simulations and compare them with $h(v_{\rm min}, t)$ obtained from the best fit Maxwellian speed distributions in the Galactic rest frame and boosted by the local circular speed of each halo. Figure~\ref{fig:h} shows the velocity integrals for the same four haloes shown in the right panel of Fig.~\ref{fig:eta}. The conclusions are similar to those obtained for the usual halo integral, $\eta(v_{\rm min})$. Namely, for all but one MW analogue, the best fit Maxwellian $h(v_{\rm min})$ falls within the 1$\sigma$ uncertainty band of the time-averaged $h(v_{\rm min})$ of the simulated haloes. For one halo, only a very small deviation at large $v_{\rm min}$ exists between the function $h(v_{\rm min})$ extracted from the simulated halo and the best fit Maxwellian distribution.
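Numerically, $h(v_{\rm min})$ is obtained from a binned speed distribution in the same way as $\eta(v_{\rm min})$, with a factor of $v$ in the integrand instead of $1/v$. A minimal sketch follows; the plain (unboosted) Maxwellian with a 230~km$/$s peak speed is an illustrative assumption, chosen so the result can be checked against the known mean speed $2 v_{\rm peak}/\sqrt{\pi}$.

```python
import numpy as np

def h_integral(v_grid, f_det, v_min):
    """Velocity integral h(v_min) = int_{v > v_min} dv v f_det(v)
    of Eq. (eq:h), from a binned speed distribution."""
    mask = v_grid > v_min
    return np.trapz(v_grid[mask] * f_det[mask], v_grid[mask])

# Illustrative check with a plain Maxwellian (peak speed 230 km/s):
# h(0) is then the mean speed, 2 * v_peak / sqrt(pi)
v_peak = 230.0
v = np.linspace(0.0, 1500.0, 3000)
f = 4.0 / np.sqrt(np.pi) * v**2 / v_peak**3 * np.exp(-(v / v_peak)**2)
h0 = h_integral(v, f, 0.0)   # close to 2 * v_peak / sqrt(pi)
```

In practice $f_{\rm det}$ is the detector-frame distribution of each simulated halo, and the same binned data therefore yield both $\eta(v_{\rm min})$ and $h(v_{\rm min})$.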
\begin{figure}[t] \centerline{\includegraphics[width=0.6\textwidth]{Figs/phEagleApostle.pdf}} \caption{Same as the right panel of Fig.~\ref{fig:eta}, but showing the time-averaged velocity integral, $h(v_{\rm min})$ (given in Eq.~\ref{eq:h}). \label{fig:h}} \end{figure} \section{Discussion and Conclusions} \label{conclusions} We have reviewed the status of hydrodynamical simulations and their implications for DM direct detection. Despite the diversity of numerical approaches, and of the criteria adopted to identify MW-like galaxies, a number of common trends can be identified, and interesting conclusions can be drawn, in particular on the impact of baryonic physics on the local dark matter velocity distribution, the presence of dark disks, and the implications for the direct detection of dark matter with standard and non-standard interactions. We summarize here the main conclusions: \begin{itemize} \item {\bf Identification of Milky Way-like galaxies:} The criteria used to select simulated galaxies which resemble the MW are widely different among different simulation groups (see Table~\ref{tab:MW-like}). We recommend the prescription introduced in Ref.~\cite{Bozorgnia:2016ogo}, based on a selection of haloes in the mass range $5\times10^{11}<M_{200}/{\rm M}_\odot<1\times10^{14}$; with a distribution of dark matter and baryons fitting the observed MW rotation curve~\cite{Iocco:2015xga,Pato:2017yai}; and with stellar mass in the range inferred from observations, $4.5 \times10^{10}<M_{*}/{\rm M}_\odot<8.3 \times10^{10}$~\cite{McMillan:2011wd}.
\item {\bf Local DM density and halo shape:} The definition of the local DM density varies among simulations, but it was always found to be consistent with the range inferred from local and global observations, $\rho_\chi = (0.2 - 0.8)$ GeV$/$cm$^3$~\cite{Read:2014qva, Catena:2009mf, Weber:2009pt, Salucci:2010qr, McMillan:2011wd, Garbari:2011dh, Iocco:2011jz, Garbari:2011tv, Bovy:2012tw, Zhang:2012rsb, Bovy:2013raa,Pato:2015dua, Silverwood:2015hxa, Huang:2016, McMillan:2016}. Although the shape of the inner DM halo varies in different simulations, baryons always tend to make the haloes more spherical\footnote{This conclusion could not be verified in the case of Ling {\it et al.}, who do not have a DMO case, and Sloane {\it et al.}, who do not study the shape of DM haloes.}. The sphericity (minor to major axis ratio) of the inner haloes in the selected MW-like galaxies in the different simulations studied here lies in the range $s=[0.69, 0.95]$. \item {\bf Local DM speed distribution:} In all the simulations analyzed above, including baryons shifts, as expected, the peak of the local DM speed distribution to higher speeds with respect to the DMO case (except in Ling {\it et al.}, which does not have a DMO case). This is due to the dissipative nature of baryons, which fall into the center of the MW halo and deepen the gravitational potential in the inner halo. Including baryons also appears to make the local DM speed distributions more Maxwellian in most cases. One notable exception, however, is the NIHAO simulation, where the DM speed distributions are less Maxwellian when baryons are included, possibly due to strong baryonic feedback. \item {\bf Dark disks:} None of the simulations we analyzed point towards the existence of a sizable dark disk that could significantly impact direct detection experiments.
Ling {\it et al.} find a 25\% fraction for the dark disk, whereas the Eris simulation finds a 3.2--9.1\% dark disk fraction depending on how this fraction is computed. No dark disks are found in the NIHAO and MaGICC simulations, and dark disks are rare in the EAGLE and APOSTLE simulations. Some of the galaxies studied in Sloane {\it et al.} have a dark disk. The cases where dark disks appear more prominent arise from simulations in which a large satellite merged with the MW in the recent past, a circumstance robustly excluded by MW kinematical data. \item {\bf Halo integrals:} The halo integrals of simulated MW-like galaxies in the EAGLE and APOSTLE, MaGICC, and Sloane {\it et al.} simulations are similar to those obtained from the best fit Maxwellian velocity distributions. The only exception is the halo integral of one simulated galaxy in Sloane {\it et al.}, which shows some discrepancy with respect to the best fit Maxwellian halo integral, due to the presence of a prominent dark disk (see the discussion of dark disks above). We have also checked the velocity integral (Eq.~\ref{eq:h}), which arises from the velocity-independent term in the energy differential cross section for non-standard interactions, and again found that for the MW analogues in EAGLE and APOSTLE, the best fit Maxwellian velocity integral provides an excellent fit to the velocity integral obtained from the simulations. \item {\bf Implications for direct detection:} For the analysis of direct detection results we recommend the adoption of a Maxwell-Boltzmann velocity distribution, with a peak speed, $v_{\rm peak}$, constrained by hydrodynamical simulations of MW-like galaxies, and an independent local circular speed, $v_c$, constrained by observations or simulations. A better selection of MW-like galaxies in numerical simulations, along the lines discussed above, will substantially reduce the astrophysical uncertainties in the exclusion limits and allowed regions set by direct DM detection experiments.
\end{itemize} \section*{Acknowledgments} We thank Mark Lovell and Matthieu Schaller for useful discussions, and providing valuable feedback on this review. We especially thank Chris Kelso for detailed discussions on the MaGICC simulation results. N.B. would like to express a special thanks to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and support during the 2016 workshop ``Dark Matter in the Milky Way". G.B. (P.I.) and N.B.~acknowledge support from the European Research Council through the ERC starting grant WIMPs Kairos. \bibliographystyle{JHEP.bst}
\section*{Introduction} In \cite{Sim1} and \cite{Sim2}, Simpson defined the coarse Betti, de Rham and Dolbeault moduli spaces of a smooth projective complex variety. These are all algebraic spaces, with a complex analytic isomorphism between the Betti and de Rham moduli spaces. The non-abelian Hodge Theorem of \cite[Theorem 7.18]{Sim2} is a homeomorphism between the de Rham and Dolbeault moduli spaces, the key to which was the correspondence between semisimple local systems, pluriharmonic bundles and Higgs bundles. The reductive pro-algebraic fundamental group $\pi_1(X,x)^{\red}$ introduced in \cite{Simpson} encapsulates, in a single object, all the information about the category of semisimple local systems. When $X$ is a compact K\"ahler manifold, the group scheme $\pi_1(X,x)^{\red}$ also has a pure Hodge structure in the form of a discrete circle action, and a description in terms of Higgs bundles. However, it has long been realised that the reductive pro-algebraic fundamental group is slightly inadequate. From it we can recover the points of the Betti moduli space, and from the full pro-algebraic fundamental group we can even recover their infinitesimal neighbourhoods, but in general these groups convey no information about how the neighbourhoods glue together. A further source of dissatisfaction is the discontinuity of the circle action on $\pi_1(X,x)^{\red}$, since it is continuous on moduli spaces. The key idea behind this paper is that we can produce finer and more satisfactory invariants by looking at representations with analytic structure. The group scheme $\pi_1(X,x)^{\red}$ can be recovered from representations in finite-dimensional matrix algebras, but the Riemann--Hilbert correspondence between Betti and de Rham moduli holds with coefficients in any Banach algebra. We accordingly construct Betti, de Rham and Dolbeault moduli functors on Banach algebras, and recover the analytic moduli spaces from these functors. 
The framed Betti and de Rham functors are represented by a Fr\'echet algebra which we regard as the analytic completion of $\R[\pi_1(X,x)]$. To understand the topological structure underlying these analytic spaces, we then restrict to $C^*$-algebras rather than Banach algebras. There are notions of unitary and pluriharmonic representations with coefficients in any $C^*$-algebra, and the homeomorphism of moduli spaces above extends to an isomorphism between the semisimple de Rham functor and the polystable Dolbeault functor on polynormal $C^*$-algebras, via isomorphisms with the pluriharmonic functor. The $C^*$-algebra of bounded operators gives us a notion of pluriharmonic local systems in Hilbert spaces, and there is a form of Hodge decomposition for these local systems. Lurking behind these comparisons is the twistor moduli functor on Fr\'echet $\O_{\bP^1}^{\hol}$-algebras. Its fibres at $\pm i \in \bP^1$ are the Dolbeault functor and its conjugate, while all other fibres are isomorphic to the de Rham functor. The Deligne--Hitchin twistor space can be recovered as an analytic space from the twistor moduli functor, and pluriharmonic torsors give a splitting of the twistor moduli functor on $C^*$-algebras over $C(\bP^1(\Cx))$. Twistor cochains then admit a Hodge decomposition on pulling back along the Hopf fibration $SU_2 \to \bP^1(\Cx)$, and a continuous circle action serves to promote twistor structures to Hodge structures. The structure of the paper is as follows. In \S \ref{prorepsn}, we cover some background material on pro-representability. Proposition \ref{PNprop} then establishes a topological analogue of Tannaka duality for polynormal $C^*$-algebras and unitary representations, while Lemma \ref{PNlemma2} gives a similar result for non-unitary representations. In \S \ref{derhamsn}, we introduce the framed Betti and de Rham functors $\oR^{B}_{X,x}$, $\oR^{\dR}_{X,x}$ on Fr\'echet algebras for any manifold $X$. 
In Proposition \ref{cfBettiDR}, we establish an isomorphism $\oR^{B}_{X,x}(A)\cong\oR^{\dR}_{X,x}(A)$ for any Fr\'echet algebra $A$. We can even recover the analytic structure of moduli spaces of $G$-bundles from these symmetric monoidal functors (Remark \ref{recoveranalyticG}). Proposition \ref{fgrep} then shows that $\oR^{B}_{X,x}$ is represented by a Fr\'echet algebra completion $E^B_{X,x}$ of $\R[\pi_1(X,x)]$. This is a Fr\'echet bialgebra from which $\pi_1(X,x)$ can be recovered (Lemma \ref{recovergroup}). In Definition \ref{pluritorsor}, we introduce a symmetric monoidal functor $ \oR^J_{X,x} $ on $C^*$-algebras, parametrising pluriharmonic bundles on a compact K\"ahler manifold $X$. This is representable by a pro-$C^*$-bialgebra $E^J_{X,x}$ (Proposition \ref{pluriprorep} and Lemma \ref{tensorstr}). There are also a symmetric monoidal functor $\oR^{\Dol}_{X,x}$ on Fr\'echet algebras associated to Dolbeault moduli, which is seldom representable, and a harmonic functor extending the definition of $\oR^J_{X,x} $ to all Riemannian manifolds $X$, but with substantial loss of functoriality. In \S \ref{cfansn} we establish relations between the various functors. We can recover the topology on moduli spaces of semisimple representations from $E^J_{X,x}$ (Theorem \ref{Hsstopthm}). Proposition \ref{PNssprop} then gives a Tannakian description of the polynormal completion of $E^J_{X,x}$, while Corollary \ref{PNrepss} gives a simple characterisation of continuous morphisms from $E^J_{X,x} $ to polynormal $C^*$-algebras. \S \ref{Dolsn} then gives similar results for $\oR^{\Dol}_{X,x}$. Lemma \ref{EJabgp} shows that grouplike elements $G((E^J_{X,x})^{\ab})$ of the abelianisation of $E^J_{X,x}$ are just $\H_1(X,\Z\oplus \R)$, with consequences for complex tori. There is a continuous circle action on $E^J_{X,x}$, so it is a pro-$C^*$ dynamical system (Proposition \ref{circleX}). 
This allows us to regard $E^J_{X,x}$ as an analytic non-abelian Hodge structure of weight $0$ (Remark \ref{pureMHSrk}). In Example \ref{circleEJabgp}, we see that the circle action on $G((E^J_{X,x})^{\ab})$ is just given by the Hodge structure on $\H^1(X,\R)$. Proposition \ref{VHSsemidirect} then characterises pure Hilbert variations of Hodge structure as representations of $E^J_{X,x}\rtimes S^1$. \S \ref{cohosn} is concerned with Hilbert space representations of $E^J_{X,x}$, which correspond to pluriharmonic local systems $\vv$ in Hilbert spaces. We can identify reduced cohomology $\bar{\H}^*(X,\vv)$ with the space of smooth $\vv$-valued harmonic forms, as well as establishing the principle of two types and a formality result (\S \ref{redcohosn}). There are analogous, but weaker, results for non-reduced cohomology (\S \ref{nonredcohosn}). The same is true of direct limits of Hilbert space representations, and Corollary \ref{univlocsys} shows that the universal such representation is the continuous dual $(E^J_{X,x})'$ (which can be regarded as the predual of the $W^*$-envelope of $E^J_{X,x}$). In \S \ref{twistorHodgecohosn}, these results are extended to show that the natural twistor structure on $\bar{\H}^n(X,\vv)$ is pure of weight $n$ (Corollary \ref{harmcoho2}), with a weaker result for non-reduced cohomology (Proposition \ref{finecoho}). If $\bE$ is the local system associated to the $\pi_1(X,x)$-representation $E^J_{X,x}$, then Proposition \ref{redenrich} shows that this twistor structure can be enhanced to a form of analytic Hodge filtration on the de Rham algebra $A^{\bt}(X, \bE')$. In \S \ref{SU2sn}, we re-interpret splittings of twistor structures and Archimedean monodromy in terms of the Hopf fibration. Finally, in \S \ref{twistorfamilysn}, we introduce a whole twistor family of framed moduli functors on Fr\'echet algebras over $\bP^1(\Cx)$, from which we can recover both de Rham and Dolbeault functors.
The coarse quotient of the framed twistor space is just the Deligne--Hitchin twistor space (Remark \ref{cfdh}). The twistor family carries a natural involution $\sigma$, and we show in Proposition \ref{sigmasections} that $\sigma$-equivariant sections of the framed twistor space are just framed pluriharmonic bundles. Theorem \ref{HTtopthm}, Proposition \ref{PNTprop} and Corollary \ref{PNrepT} then give analogues of Theorem \ref{Hsstopthm}, Proposition \ref{PNssprop} and Corollary \ref{PNrepss} in the twistor setting, describing twistors with $C^*$-algebra coefficients and the topology of the twistor space. I would like to thank Carlos Simpson for originally posing the problem of finding finer invariants, and for helpful discussions. \subsection*{Notation} We will use $k$ to denote either of the fields $\R, \Cx$. \begin{definition} Given a $k$-Hilbert space $H$, write $L(H)$ for the space of $k$-linear bounded operators on $H$, with the norm topology. \end{definition} \begin{definition} Given topological spaces $X,Y$, we write $C(X,Y)$ for the set of continuous maps from $X$ to $Y$. \end{definition} \begin{definition} Given a group $G$ acting on a set $X$, write $[X/G]$ for the groupoid with objects $X$ and morphisms $X \by G$, where the source of $(x,g)$ is $x$ and the target is $xg$. Composition of morphisms is given by $(xg,h) \circ (x,g)= (x, gh)$. \end{definition} \begin{definition} Given a group $G$ acting on sets $S,T$, write $S\by_GT$ for the quotient of $S\by T$ by the $G$-action $g(s,t)= (gs,g^{-1}t)$. \end{definition} \tableofcontents \section{Pro-representability of functors on unital Banach algebras and $C^*$-algebras}\label{prorepsn} \begin{definition} Given a functor $F\co \C \to \Set$, an object $A \in \C$ and an element $\xi \in F(A)$, we follow \cite[\S A.3]{descent} in saying that the pair $(A, \xi)$ is \emph{minimal} if for any pair $(A', \xi')$ and any strict monomorphism $f\co A' \to A$ with $F(f)(\xi')= \xi$, $f$ must be an isomorphism. 
We say that a pair $(A'', \xi'')$ is \emph{dominated} by a minimal pair if there exists a minimal pair $(A, \xi)$ and a morphism $g\co A \to A''$ with $F(g)(\xi)= \xi''$. \end{definition} \begin{definition} As in \cite[\S A.3]{descent}, we say that a functor $F\co \C \to \Set$ on a category $\C$ containing all finite limits is \emph{left exact} if it preserves all finite limits. This is equivalent to preserving finite products and equalisers, or to preserving fibre products and the final object. \end{definition} \begin{lemma}\label{dominatelemma} Let $\C$ be a category containing finite limits, and take a left exact functor $F\co \C \to \Set$. Assume that for any cofiltered inverse system $\{A_i\}_i$ of strict subobjects of any object $A\in \C$, the limit $\Lim_i A_i$ exists, and that the map \[ F(\Lim_i A_i) \to \Lim_i F(A_i) \] is an isomorphism. Then every pair $(A, \xi\in F(A))$ is dominated by a minimal pair. \end{lemma} \begin{proof} Given the pair $(A, \xi)$, let $I$ be the full subcategory of the overcategory $\C \da A$ consisting of strict monomorphisms $B \to A$ for which $\xi$ lifts to $F(B)$. Note that this lift must be unique by the monomorphism property. If $f\circ g$ is a strict monomorphism, then so is $g$, which implies that all morphisms in $I$ must be strict monomorphisms in $\C$. Moreover, left-exactness of $F$ guarantees that $I$ is closed under the fibre product $\by_A$. The monomorphism properties imply that parallel arrows in $I$ are equal, so $I$ is a cofiltered category. By hypothesis, the limit $L:= \Lim_{B \in I} B$ exists in $\C$. It is necessarily a strict subobject of $A$, since it is the limit of all parallel maps sourced at $A$ and equalised by some $B \in I$. The unique lifts of $\xi$ to each $F(B)$ define an element of \[ \Lim_{B \in I} F(B), \] so by hypothesis we have a corresponding element $\eta \in F(L)$. Therefore $L$ is an object of $I$ and is in fact the initial object of $I$.
The pair $(L, \eta)$ is therefore minimal, and dominates $(A, \xi)$ as required. \end{proof} \begin{definition} Recall from \cite[\S A.2]{descent} that a pro-object $X \in \pro(\C)$ is said to be \emph{strict} if it is isomorphic to a pro-object of the form $\{X_i\}$, where each map $X_i \to X_j$ is an epimorphism. A functor $F\co \C \to \Set$ is said to be \emph{strictly pro-representable} if there exists a strict pro-object $X$ with $F\cong \Hom_{\pro(\C)}(X,-)$. \end{definition} \begin{proposition}\label{prorepprop} Let $\C$ be a category containing finite limits and limits of cofiltered inverse systems of strict subobjects. Then a functor $F\co \C \to \Set$ is strictly pro-representable if and only if \begin{enumerate} \item $F$ is left exact; \item $F$ preserves limits of cofiltered inverse systems of strict subobjects. \end{enumerate} \end{proposition} \begin{proof} If $F$ satisfies the conditions, then it is left-exact, and by Lemma \ref{dominatelemma} every pair is dominated by a minimal pair. It therefore satisfies the conditions of \cite[Proposition A.3.1]{descent}, so is strictly pro-representable. Conversely, every pro-representable functor $F$ is left-exact, so we need only show that the second condition holds. Write $F= \LLim_{\alpha} \Hom(R_{\alpha},-)$ for a strict inverse system $\{R_{\alpha}\}_{\alpha}$, and take a cofiltered inverse system $\{A_i\}_i$ of strict subobjects of some object $A\in \C$. Given an element $x \in \Lim_i F(A_i)$ with image $x_i$ in $F(A_i)$, by definition there exist objects $R_{\alpha_i}$ and maps $y_i \co R_{\alpha_i} \to A_i$ lifting $x_i$. Now fix $i$; the liftings are compatible in the sense that for $j>i$ (increasing $\alpha_j$ if necessary), there is a commutative diagram \[ \begin{CD} R_{\alpha_j} @>{y_j}>> A_j \\ @VVV @VVV\\ R_{\alpha_i} @>{y_i}>> A_i.
\end{CD} \] Since $A_j \to A_i$ is a strict monomorphism and $R_{\alpha_j}\to R_{\alpha_i}$ an epimorphism, $y_i$ must lift to a map $y_{ij}\co R_{\alpha_i} \to A_j$. Since $A_j \to A_i$ is a monomorphism, the lifting $y_{ij}$ is unique. Considering all $j>i$ together, this gives us a unique map $\tilde{x}\co R_{\alpha_i} \to \Lim_j A_j$, which gives rise to a unique pre-image $\tilde{x} \in F(\Lim_j A_j)$, as required. \end{proof} \subsection{Banach algebras} \begin{definition} Write $\Ban\Alg_k$ for the category of unital (not necessarily commutative) Banach algebras over $k$, with bounded morphisms. \end{definition} \begin{proposition}\label{repbanalg} Take a functor $F\co \Ban\Alg_k\to \Set$ such that \begin{enumerate} \item $F$ preserves all finite limits (equivalently: preserves fibre products and the final object), \item $F$ preserves monomorphisms (i.e. maps closed subalgebras to subsets), and \item for all inverse systems $S$ of closed subalgebras, the map \[ F(\bigcap_{s \in S} A_s) \to \bigcap_{s \in S} F(A_s) \] is an isomorphism. \end{enumerate} Then $F$ is strictly pro-representable. \end{proposition} \begin{proof} Given a Banach algebra $B$, every strict subobject of $B$ is a closed subalgebra. For any cofiltered inverse system $\{A_i\}_i$ of strict subobjects of any object $B$, the limit $\Lim_i A_i$ exists in $\Ban\Alg_k$ and is given by $\bigcap_i A_i$. The result now follows from Proposition \ref{prorepprop}. \end{proof} \subsection{$C^*$-algebras} \begin{definition} Write $C^*\Alg_k$ for the category of unital (not necessarily commutative) $C^*$-algebras over $k$, with bounded involutive morphisms. Explicitly, a complex $C^*$-algebra is a complex Banach algebra equipped with an antilinear involution $*$ satisfying $(ab)^*=b^*a^*$ and $\|a^*a\|= \|a\|^2$ for all $a \in A$. 
A real $C^*$-algebra is a real Banach algebra equipped with a linear involution $*$ satisfying the conditions above, and having the additional property that $1+a^*a$ is invertible for all $a \in A$. \end{definition} A Banach $*$-algebra over $k$ is a $C^*$-algebra if and only if it is isometrically $*$-isomorphic to a self-adjoint norm-closed algebra of bounded operators on a Hilbert $k$-space; for $k=\R$, this is Ingelstam's Theorem \cite[8.2 and 15.3]{goodearl}. \begin{proposition}\label{repcstar} Take a functor $F\co C^*\Alg_k\to \Set$ such that \begin{enumerate} \item $F$ preserves all finite limits (equivalently: preserves fibre products and the final object), \item $F$ preserves monomorphisms (i.e. maps $C^*$-subalgebras to subsets), and \item for all inverse systems $S$ of nested $C^*$-subalgebras, the map \[ F(\bigcap_{s \in S} A_s) \to \bigcap_{s \in S} F(A_s) \] is an isomorphism. \end{enumerate} Then $F$ is strictly pro-representable. \end{proposition} \begin{proof} The proof of Proposition \ref{repbanalg} carries over. \end{proof} \begin{lemma}\label{consistentCstar} Every complex $C^*$-algebra becomes a real $C^*$-algebra by forgetting the multiplication by $\Cx$. \end{lemma} \begin{proof} We just need to show that $1+a^*a$ is invertible for all $a \in A$. Now, $x \mapsto (1+|x|)^{-1}$ is a continuous function on $\R$ and $a^*a$ is positive self-adjoint, so the continuous functional calculus implies that $(1+a^*a)^{-1} \in A$, as required. \end{proof} \begin{lemma}\label{GalCstar} The category $C^*\Alg_{\R}$ is equivalent to the category of pairs $(A, \tau)$, for $A \in C^*\Alg_{\Cx}$ and an involution $\tau\co A \to A$ satisfying \begin{enumerate} \item $\tau(ab)= \tau(a)\tau(b)$, \item $\tau(a)^*= \tau(a^*)$, and \item $\tau(\lambda)= \bar{\lambda}$ for $\lambda \in \Cx$.
\end{enumerate} \end{lemma} \begin{proof} Given $B \in C^*\Alg_{\R}$, set $A:= B\ten_{\R}\Cx$; this is a complex $C^*$-algebra, with involution $(b\ten \lambda)^*= b^*\ten \bar{\lambda}$. The involution $\tau$ is then given by complex conjugation, with $\tau(b\ten \lambda)= b\ten \bar{\lambda}$. For the quasi-inverse construction, we send a pair $(A, \tau)$ to the algebra $A^{\tau}$ of $\tau$-invariants. That this is a real $C^*$-algebra follows from Lemma \ref{consistentCstar}. To see that these are quasi-inverse functors, first note that $(B\ten_{\R}\Cx)^{\tau}=B$. Next, observe that because $\tau$ is antilinear, we can write $A = A^{\tau} \oplus i A^{\tau} \cong A^{\tau}\ten_{\R}\Cx$ for all pairs $(A, \tau)$ as above. \end{proof} \begin{definition} For a complex (resp. real) $*$-algebra $A$, write $U(A)$ for the group of unitary (resp. orthogonal) elements \[ \{a \in A\,:\, a^*a=aa^*=1\}. \] Write $\fu(A)$ for the Lie algebra of anti-self-adjoint elements \[ \{a \in A\,:\, a^*+a=0\}, \] and write $S(A)$ for the self-adjoint elements of $A$, noting that $\fu(A)= iS(A)$ when $A$ is complex. \end{definition} \subsection{Representations and polynormal $C^*$-algebras}\label{PNsn} \subsubsection{Representation spaces} Fix a real unital $C^*$-algebra $A$. \begin{definition} Write $\Rep_n^*(A)$ for the space of unital continuous $*$-homomorphisms $\rho \co A \to \Mat_n(\Cx)$ equipped with the topology of pointwise convergence. Write $\Irr_n^*(A)\subset \Rep_n^*(A)$ for the subspace of irreducible representations. \end{definition} \begin{definition} Define $\FD\oHilb$ to be the groupoid of complex finite-dimensional Hilbert spaces and unitary isomorphisms. \end{definition} \begin{definition} Write $\FD^*\Rep(A)$ for the groupoid of pairs $(V, \rho)$ for $V\in \FD\oHilb$ and $\rho \co A \to \End(V)$ a unital continuous $*$-homomorphism. Morphisms are given by unitary isomorphisms intertwining representations.
The set of objects of $\FD^*\Rep(A)$ is given the topology of pointwise convergence. \end{definition} \begin{definition}\label{PNdef} Define $A_{\PN}$ to be the ring of $\Gal(\Cx/\R)$-equivariant continuous additive endomorphisms of the fibre functor $\FD^*\Rep(A) \to \FD\oHilb$, $(V,\rho)\mapsto V$. Explicitly, an element $\eta \in A_{\PN}$ associates to each pair $(V,\rho)$ an element $\eta(V, \rho) \in \End(V)$, subject to the conditions: \begin{enumerate} \item For any unitary isomorphism $u \co V \to W$, we have $\eta(W, u\rho u^{-1}) = u\eta(V,\rho) u^{-1}$. \item For any $(V_1,\rho_1), (V_2, \rho_2) \in \FD^*\Rep(A)$, we have $\eta(V_1\oplus V_2, \rho_1\oplus \rho_2) = \eta(V_1, \rho_1) \oplus \eta(V_2, \rho_2)$. \item The maps $\eta \co \Rep_n^*(A) \to \Mat_n(\Cx)$ are continuous and $\Gal(\Cx/\R)$-equivariant. \end{enumerate} \end{definition} \begin{lemma}\label{PNlemma} The ring $A_{\PN}$ has the structure of a pro-$C^*$-algebra over $A$. \end{lemma} \begin{proof} We may describe $A_{\PN}$ as the categorical limit of a diagram of $*$-homomorphisms between the $C^*$-algebras $C(\Rep_n^*(A),\Mat_n(\R))$, thus making it into a pro-$C^*$-algebra. The $*$-homomorphism $A \to A_{\PN}$ is given by mapping $a$ to the transformation $\eta_a(V,\rho)= \rho(a)$. \end{proof} \subsubsection{Polynormal $C^*$-algebras} \begin{definition} Recall from \cite{pearcy} that an algebra $A$ is said to be $n$-\emph{normal} if for all $a_1, \ldots, a_{2n} \in A$, we have \[ \sum_{\sigma \in S_{2n}} \sgn(\sigma) a_{\sigma(1)} a_{\sigma(2)}\cdots a_{\sigma(2n)}=0. \] Call an algebra \emph{polynormal} if it is $n$-\emph{normal} for some $n$. \end{definition} By the Amitsur--Levitzki theorem, the algebra of $n \by n$-matrices over a commutative ring is $n$-normal. Also note that an $n$-normal algebra is $k$-normal for all $k\ge n$.
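To make the defining identity concrete, consider the lowest case (a standard observation, not from the source): for $n=1$ the standard identity of degree $2$ reads

```latex
% The case n = 1 of the standard identity (standard observation,
% not taken from the source):
\[
\sum_{\sigma \in S_{2}} \sgn(\sigma)\, a_{\sigma(1)} a_{\sigma(2)}
  \;=\; a_1 a_2 - a_2 a_1 \;=\; 0,
\]
```

so an algebra is $1$-normal precisely when it is commutative, in agreement with the Amitsur--Levitzki theorem applied to $1 \by 1$ matrices over a commutative ring.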
Note that by restricting to $n$-dimensional representations, we get that for any real $C^*$-algebra $A$, the ring $A_{\PN}$ of Definition \ref{PNdef} is an inverse limit $A_{\PN} = \Lim_n A_{\PN,n}$ of $n$-normal $C^*$-algebras. We now have a result combining aspects of Tannaka and Takesaki duality: \begin{proposition}\label{PNprop} If $A$ is a polynormal unital $C^*$-algebra, then the morphism $A \to A_{\PN}$ of Lemma \ref{PNlemma} is an isomorphism. \end{proposition} \begin{proof} Since $A$ is $N$-normal for some integer $N$, \cite[\S 3]{pearcy} implies that $A$ is of type $I$, with all complex irreducible representations of $A$ having dimension at most $ N$. For a sufficiently large cardinal $\alpha$, \cite{bichteler} characterises the $W^*$-envelope $A''\ten \Cx$ of $A\ten \Cx$ as the ring defined analogously to $A_{\PN}$ by replacing ``continuous'' with ``bounded'' and ``finite-dimensional'' with ``of dimension at most $\alpha$''. Since all irreducible representations are at most $N$-dimensional, the direct integral decomposition of $A$-representations gives us an injective map $A_{\PN} \to A''$, with boundedness following because any $a \in A_{\PN}$ is bounded on $\coprod_{k \le N}\Rep_k^*(A)$. Now, \cite{AkemannShultz} defines a ring $A_c \subset A''$ to consist of those $b$ for which the functions $b,b^*b,bb^*$ are weakly $*$-continuous on the space $P(A)$ of pure states of $A$. Since all irreducible representations arise as subrepresentations of $N$-dimensional representations, continuity on $ \Rep_N^*(A)$ suffices to give continuity on $P(A)$, so the inclusion $A_{\PN} \to A_c$ is an isomorphism. By \cite{BunceDeddens}, the spectrum of $A$ is Hausdorff, and since $A$ is type $I$, \cite{AkemannShultz} then observes that $A$ is perfect, which means that the inclusion $A \to A_c$ is in fact an isomorphism. Thus the map $A \to A_{\PN}$ is an isomorphism. 
\end{proof} \subsection{The category of $C^*$-algebras with completely bounded morphisms} \subsubsection{Basic properties} \begin{lemma}\label{Cstarimage} If $A$ is a $C^*$-algebra and $f\co A \to B$ a morphism of Banach algebras, then the image of $f$ has the natural structure of a $C^*$-algebra, with $f\co A \to \im(f)$ becoming a $C^*$-homomorphism. \end{lemma} \begin{proof} The kernel of $f$ is a closed two-sided ideal. Thus by \cite[Theorem 3]{segalIdeals}, $A/\ker f$ is a $C^*$-algebra, as required. \end{proof} \begin{definition}\label{cbdef} Recall that a homomorphism $\pi\co A \to B$ of Banach algebras is said to be \emph{completely bounded} if \[ \sup_{n \in \N} \|M_n(\pi)\|< \infty, \] where $M_n(\pi)\co M_n(A) \to M_n(B)$ is the morphism on $n \by n$ matrices given by $\pi$. Given a pro-Banach algebra $A = \Lim_i A_i$ and a Banach algebra $B$, any morphism $\pi\co A \to B$ factors through some $A_i$, and we say that $\pi$ is \emph{completely bounded} if the map $A_i \to B$ is so. Write $\Hom(A,B)_{cb}$ for the set of completely bounded homomorphisms from $A$ to $B$. \end{definition} \begin{lemma}\label{KSPlemma} If $A$ is a $C^*$-algebra, then any completely bounded homomorphism $f\co A \to L(H)$ is conjugate to a $*$-homomorphism of $C^*$-algebras. \end{lemma} \begin{proof} This is the main result of \cite{paulsen}. \end{proof} \begin{remark}\label{kadison} Kadison's similarity problem asks whether all bounded (non-involutive) homomorphisms between $C^*$-algebras are in fact completely bounded. The answer is affirmative in a wide range of cases, but the general problem remains open. Note that Gardner showed (\cite[Theorem A]{gardner}) that all Banach isomorphisms of $C^*$-algebras are conjugate to $C^*$-homomorphisms (and hence completely bounded).
Also note that by \cite[Theorem 3]{segalIdeals}, every closed two-sided ideal of a $C^*$-algebra is a $*$-ideal; combined with Gardner's result, this implies that any bounded surjective map $A \to B$ between $C^*$-algebras must be conjugate to a $C^*$-homomorphism, hence completely bounded. The same is true of any bounded map between $C^*$-algebras whose image is a $C^*$-subalgebra. \end{remark} \begin{definition}\label{cbalgdef} Let $C^*B\Alg_k$ be the category of unital $C^*$-algebras over $k$, with completely bounded morphisms (which need not preserve $*$). \end{definition} \begin{lemma}\label{equivartpluri} For complex $C^*$-algebras $A,B$, giving a $U(B)$-equivariant function $f\co \Hom_{C^*\Alg_{\Cx}}(A,B) \to B$ (for the adjoint action on $B$) is equivalent to giving a $B^{\by}$-equivariant function $\tilde{f}\co \Hom_{C^*B\Alg_{\Cx}}(A,B) \to B$. \end{lemma} \begin{proof} There is a canonical inclusion $\iota\co \Hom_{C^*\Alg_{\Cx}}(A,B)\to \Hom_{C^*B\Alg_{\Cx}}(A,B)$, so given $\tilde{f}$, we just set $f$ to be $\tilde{f} \circ \iota$. The polar decomposition allows us to write $B^{\by}= B_{++}U(B)$, where $B_{++}\subset S(B)$ is the subset of strictly positive self-adjoint elements. Given $f$, there is thus an associated $B^{\by}$-equivariant function $\tilde{f}\co \Hom_{C^*\Alg_{\Cx}}(A,B)\by B_{++} \to B$ given by $\tilde{f}(p,g)= g^{-1}f(p)g$. By Lemma \ref{KSPlemma}, the map $\Hom_{C^*\Alg_{\Cx}}(A,B)\by B_{++}\to \Hom_{C^*B\Alg_{\Cx}}(A,B)$ is surjective, and we need to check that $\tilde{f}$ descends. Now, if $\xi \in S(B)$ has the property that $\exp(\xi)$ fixes $p(A)$ under conjugation, then $\exp(i\xi t)$ commutes with $f(p)$ for all $t$, so $i\xi$ must also. Thus $\xi$ and hence $\exp(\xi)$ commute with $f(p)$, so $\tilde{f}$ does indeed descend. \end{proof} \subsubsection{Representations} We now fix a real unital $C^*$-algebra $A$.
\begin{definition} Define $\FD\Vect$ to be the groupoid of complex finite-dimensional vector spaces and linear maps. \end{definition} \begin{definition} Write $\FD\Rep(A)$ for the category of pairs $(V, \rho)$ for $V\in \FD\Vect$ and $\rho \co A \to \End(V)$ a unital continuous morphism of Banach algebras. Morphisms are given by linear maps intertwining representations. The set of objects of $\FD\Rep(A)$ is given the topology of pointwise convergence. \end{definition} Note that the objects of $\FD\Rep(A)$ decompose into direct sums of irreducibles. \begin{lemma}\label{PNlemma2} The ring $A_{\PN}$ of Lemma \ref{PNlemma} is isomorphic to the ring $A_{\PN'}$ of $\Gal(\Cx/\R)$-equivariant continuous additive endomorphisms of the fibre functor $\eta\co \FD\Rep(A) \to \FD\Vect$. \end{lemma} \begin{proof} Restriction to $*$-representations gives us a map $\psi\co A_{\PN'} \to A_{\PN}$. For a commutative $C^*$-algebra $C$, the $C^*$-algebra $\Mat_k(C)$ is of type $I$. This means that any bounded map $A \to \Mat_k(C)$ is completely bounded, so taking $B= \Mat_k(C)$ in Lemma \ref{equivartpluri} for all $k$ ensures that $\psi$ is an isomorphism. \end{proof} \begin{definition}\label{FDrepbase} Given a commutative unital real $C^*$-algebra $A$ and a $*$-homomorphism $A \to B$ of real $C^*$-algebras, write $\FD\Rep_{\hat{A}}(B)$ for the category of triples $(f,V, \rho)$ for $f \in \hat{A}$ (the spectrum of $A$), $V\in \FD\Vect$ and $\rho \co B \to \End(V)$ a unital continuous morphism of Banach algebras for which $\rho(a)=f(a)\id$ for all $a \in A$. Morphisms are given by linear maps intertwining representations. The set of objects of $\FD\Rep_{\hat{A}}(B)$ is given the topology of pointwise convergence. The category $\FD\Rep_{\hat{A}}(B) $ has an additive structure over $\hat{A}$, given by $(f, V_1, \rho_1)\oplus (f,V_2, \rho_2)=(f,V_1\oplus V_2, \rho_1\oplus\rho_2)$. 
\end{definition} \begin{lemma}\label{PNlemma2base} Given a commutative unital $C^*$-algebra $A$ and a $*$-homomorphism $A \to B$ of real $C^*$-algebras, the ring $B_{\PN}$ of Lemma \ref{PNlemma} is isomorphic to the ring of $\Gal(\Cx/\R)$-equivariant continuous additive endomorphisms of the fibre functor $\eta\co \FD\Rep_{\hat{A}}(B) \to \FD\Vect$. \end{lemma} \begin{proof} This just combines the proofs of Lemma \ref{PNlemma2} and Proposition \ref{PNprop}. The only modification is to observe that $A= C(\hat{A}, \Cx)^{\Gal(\Cx/\R)}$, and that for any irreducible representation $\rho\co B \to \End(V)$, we necessarily have $\rho|_{A}= f\id$, for some $f \in \hat{A}$. \end{proof} \section{The Betti, de Rham and harmonic functors on Banach algebras}\label{derhamsn} \subsection{The Riemann--Hilbert correspondence} \begin{definition} Given a path-connected topological space $X$ with basepoint $x$ and a unital $\R$-algebra $B$, define the Betti representation space $\oR^B_{X,x}(B)$ by \[ \oR^B_{X,x}(B):= \Hom_{\gp}(\pi_1(X,x), B^{\by}), \] where $B^{\by}$ is the multiplicative group of units in $B$. Define the representation groupoid $\cR^B_X(B)$ by $ \cR^B_X(B):= [\oR^B_{X,x}(B)/B^{\by}]$, where $B^{\by}$ acts by conjugation. Note that this is independent of the choice of basepoint (being equivalent to the groupoid of $B^{\by}$-torsors on $X$). \end{definition} \begin{definition}\label{DRtorsor} Given a connected manifold $X$ with basepoint $x$ and a Banach algebra $B$, define the de Rham groupoid $\cR^{\dR}_X(B)$ to be the groupoid of smooth $B^{\by}$-bundles with flat connections. Thus $\cR^{\dR}_X(B)$ consists of pairs $(\sT, D)$, where $\sT$ is a right $\sA^0_X(B^{\by})$-torsor, and $D$ is a flat connection on $\sT$. Explicitly, write $\sA^n_X(\ad\sT):= \sT\by_{\sA^0_X(B^{\by})}\sA^n_X(B)$, for the adjoint action of $B^{\by}$ on $B$. 
Then a flat connection on $\sT$ is \[ D\co \sT \to \sA^1(\ad\sT) \] satisfying \begin{enumerate} \item $D$ is a $d$-connection: $D(pg)= \ad_gD(p) + g^{-1}dg $, for $g \in \sA^0_X(B^{\by})$; \item $D$ is flat: $(\ad D)\circ D = 0$. \end{enumerate} Define $\oR^{\dR}_{X,x}(B)$ to be the groupoid of triples $(\sT, D,f)$, where $(\sT, D) \in \cR^{\dR}_X(B)$ and $f \in x^*\sT$ is a distinguished element. Since $ \oR^{\dR}_{X,x}(B)$ has no non-trivial automorphisms, we will regard it as a set-valued functor (given by its set of isomorphism classes). \end{definition} Note that $B^{\by}$ acts on $\oR^{\dR}_{X,x}(B)$ by changing the framing, and that the quotient groupoid is then equivalent to $\cR^{\dR}_X(B)$. \begin{proposition}\label{cfBettiDR} For any pointed connected manifold $(X,x)$, and any Banach algebra $B$, there are canonical equivalences \[ \cR^{\dR}_X(B) \simeq \cR^B_X(B), \quad \oR^{\dR}_{X,x}(B)\cong \oR^B_{X,x}(B) \] functorial in $X,x$ and $B$. \end{proposition} \begin{proof} When $B$ is a finite-dimensional matrix algebra, this is \cite[5.10]{GM}. The same proof carries over to Banach algebras, noting that the argument for existence of parallel transport (\cite[\S II.3]{KN1}) holds in this generality, since $\exp(b)=\sum_{n\ge 0} b^n/n!$ converges and is invertible for all $b \in B$. \end{proof} \begin{remark}\label{recoveranalytic} A Fr\'echet algebra is a countable inverse limit of Banach algebras. Thus $\oR^{\dR}_{X,x}$ has a natural extension to a functor on Fr\'echet algebras, and the equivalences of Proposition \ref{cfBettiDR} extend to Fr\'echet algebras. By considering the Fr\'echet algebras of analytic functions $U \to \Mat_n(\Cx)$, for complex analytic spaces $U$, we can of course recover the analytic structure of the variety $\Hom(\pi_1(X,x), \GL_n(\Cx))$ from the set-valued functor $\oR^B_{X,x}$ on Banach algebras. 
Proposition \ref{cfBettiDR} then allows us to recover the analytic variety $\Hom(\pi_1(X,x), \GL_n(\Cx))$ from $\oR^{\dR}_{X,x}$ as well, via \[ \Hom(\pi_1(X,x), \GL_n(U))\cong \oR^{\dR}_{X,x}(\Mat_n(U)). \] In \cite{Sim2}, the varieties $ \oR^B_{X,x}(\Mat_n(-))$ and $\oR^{\dR}_{X,x}(\Mat_n(-))$ are denoted by $\oR_B(X,x,n)$ and $\oR_{\DR}(X,x,n)$. \end{remark} \begin{lemma}\label{tensorstr0} For any real Banach algebras $B,C$, there is a canonical map \[ m\co \oR^B_{X,x}(B)\by \oR^B_{X,x}(C) \to \oR^B_{X,x}(B\ten^{\pi}_{\R}C), \] where $\ten^{\pi}$ is the projective tensor product. This makes $\oR^B_{X,x}$ into a symmetric monoidal functor, with unit corresponding to the trivial representation in each $\oR^B_{X,x}(B)$. \end{lemma} \begin{proof} Given representations $\rho_1\co \pi_1(X,x) \to B^{\by}$ and $\rho_2\co \pi_1(X,x) \to C^{\by}$, we obtain $\rho_1\ten \rho_2\co \pi_1(X,x) \to (B\ten C)^{\by}$. Taking completion with respect to the projective cross norm gives the required result. \end{proof} \begin{remark}\label{recoveranalyticG} Given any complex affine group scheme $G$, we may use the tensor structure on $\oR^B_{X,x}$ to recover the affine analytic variety $ \Hom(\pi_1(X,x), G(\Cx))$. Explicitly, $O(G)$ is a coalgebra, so can be written as a nested union of finite-dimensional coalgebras. Therefore $O(G)^{\vee}$ is a pro-finite-dimensional algebra, and hence a Fr\'echet algebra. Multiplication on $G$ gives us a comultiplication $\mu \co O(G)^{\vee} \to O(G\by G)^{\vee}$. For any complex analytic space $U$, we may then characterise $ \Hom(\pi_1(X,x), G(U))$ as \[ \{\rho \in \oR^B_{X,x}(C(U, O(G)^{\vee}))\,:\, \mu(\rho)= m(\rho, \rho) \in \oR^B_{X,x}(C(U, O(G\by G)^{\vee}))\}. \] \end{remark} \subsection{Representability of the de Rham functor} \begin{lemma}\label{freeprorep} Given a free group $\Gamma= F(X)$, the functor \[ A \mapsto \Hom_{\gp}(\Gamma, A^{\by}) \] on the category of real Banach algebras is pro-representable.
\end{lemma} \begin{proof} Given a function $\nu\co X \to [1, \infty)$, let $\bar{\nu}\co \Gamma \to [1, \infty)$ be the largest function subject to the conditions \begin{enumerate} \item $\bar{\nu}(1)=1$; \item $\bar{\nu}(x)=\bar{\nu}(x^{-1})= \nu(x)$ for all $x \in X$; \item $\bar{\nu}(gh)\le \bar{\nu}(g)\bar{\nu}(h)$. \end{enumerate} Explicitly, we write any $g \in \Gamma$ as a reduced word $g= x_1^{n_1}x_2^{n_2}\ldots x_k^{n_k}$, then set $\bar{\nu}(g):= \prod_{i=1}^k \nu(x_i)^{|n_i|}$. We now define a norm $\|-\|_{1,\nu}$ on $k[\Gamma]$ by setting \[ \|\sum_{\gamma \in \Gamma} \lambda_{\gamma} \gamma\|_{1,\nu}:= \sum_{\gamma \in \Gamma} |\lambda_{\gamma}|\cdot \bar{\nu}(\gamma). \] Now, given any representation $\rho\co \Gamma \to A^{\by}$, we may define $\nu\co X \to [1, \infty)$ by \[ \nu(x):= \max\{\|\rho(x)\|, \|\rho(x^{-1})\|\}; \] this is at least $1$ because $1= \rho(x)\rho(x^{-1})$, so $1\le \|\rho(x)\|\cdot \|\rho(x^{-1})\|$. It follows that for all $v \in k[\Gamma] $ we have $\|\rho(v)\|\le \|v\|_{1,{\nu}}$, so $\rho$ determines a map \[ k[\Gamma]^{\wedge_{\nu}}\to A, \] where $k[\Gamma]^{\wedge_{\nu}}$ denotes the Banach algebra obtained by completing $k[\Gamma]$ with respect to the norm $\|-\|_{1,{\nu}}$. Next, give $[1, \infty)^X $ the structure of a poset by saying ${\nu}_1\le {\nu}_2$ provided ${\nu}_1(x) \le {\nu}_2(x)$ for all $x \in X$. This is in fact a directed set, since we can define $\max\{{\nu}_1,{\nu}_2\}$ pointwise. There is a canonical morphism \[ k[\Gamma]^{\wedge_{{\nu}_2}}\to k[\Gamma]^{\wedge_{{\nu}_1}} \] whenever ${\nu}_1 \le{\nu}_2$, which gives us an inverse system $k[\Gamma]^{\an}:= \{k[\Gamma]^{\wedge_{\nu}}\}_{\nu}$ of Banach algebras, indexed by the directed set $[1, \infty)^X$. Thus we have shown that \[ \Hom_{\gp}(\Gamma, A^{\by}) \cong \LLim_{{\nu} \in [1, \infty)^X} \Hom_{k\Ban\Alg} (k[\Gamma]^{\wedge_{\nu}}, A), \] functorial in Banach $k$-algebras $A$.
In other words, our pro-representing object is the inverse system $k[\Gamma]^{\an}$. \end{proof} \begin{example}\label{completeZ} Take $X= \{z\}$, so $\Gamma= \Z$, and let ${\nu}(z)=R$. Then elements of $k[\Z]^{\wedge_{\nu}} $ are $\sum_{i \in \Z} \lambda_i z^i$ such that \[ \sum_{i \ge 0} |\lambda_i| R^i< \infty \quad\text{and}\quad \sum_{i \le 0} |\lambda_i| R^{-i}< \infty. \] Thus $\Cx[\Z]^{\wedge_{R}}$ is the ring of analytic functions on the annulus $R^{-1}\le |z|\le R$. Hence $\Lim_{R} \Cx[\Z]^{\wedge_{R}}$ is the ring of analytic functions on $\Cx^*$, while $ \Lim_{R} \R[\Z]^{\wedge_{R}}$ is the subring consisting of functions $f$ with $\overline{f(z)}= f(\bar{z})$. Contrast this with the isometric Banach completion of $\Cx[\Z]$, which just gives $\Cx[\Z]^{\wedge_{1}}$, the Wiener algebra of absolutely convergent Fourier series on the circle. \end{example} \begin{lemma}\label{fgfreerep} Given a finitely generated free group $\Gamma= F(X)$, the functor \[ A \mapsto \Hom_{\gp}(\Gamma, A^{\by}) \] on the category of Fr\'echet $k$-algebras is representable. \end{lemma} \begin{proof} We may embed $\N_1$ in $[1, \infty)^X$ as a subset of the constant functions. Since $X$ is finite, $\N_1$ is a cofinal subset of $[1, \infty)^X $, giving us an isomorphism \[ \{k[\Gamma]^{\wedge_{\nu}}\}_{{\nu} \in [1, \infty)^X } \cong \{k[\Gamma]^{\wedge_n}\}_{n \in \N_1 } \] in the category of pro-Banach $k$-algebras. Since $\N_1$ is countable, $k[\Gamma]^{\an}:= \Lim_n k[\Gamma]^{\wedge_n}$ is a Fr\'echet algebra. Applying the proof of Lemma \ref{freeprorep}, we have shown that \[ \Hom(\Gamma, A^{\by})\cong \Hom_{k\Fr\Alg}( k[\Gamma]^{\an} ,A) \] for all Banach algebras $A$. Since any Fr\'echet algebra $A$ can be expressed as an inverse limit $A= \Lim_i A_i$ of Banach algebras, it follows that the same isomorphism holds for all Fr\'echet algebras, so the functor is representable in Fr\'echet algebras.
\end{proof} \begin{proposition}\label{fgrep} Given a finitely generated group $\Gamma$, the functor \[ A \mapsto \Hom_{\gp}(\Gamma, A^{\by}) \] on the category of Fr\'echet $k$-algebras is representable. \end{proposition} \begin{proof} Choose generators $X$ for $\Gamma$, so $\Gamma= F(X)/K$ for some normal subgroup $K$. Lemma \ref{fgfreerep} gives a Fr\'echet $k$-algebra $k[F(X)]^{\an}$ governing representations of $F(X)$. Since \[ \Hom_{\gp}(\Gamma, A^{\by})= \{f \in \Hom_{\gp}(F(X), A^{\by})\,:\, f(K)=\{1\}\}, \] our functor will be represented by a quotient of $k[F(X)]^{\an}$. Specifically, let $I$ be the closed ideal of $k[F(X)]^{\an} $ generated by $\{k-1\,:\, k \in K\}$, and set $k[\Gamma]^{\an}:= k[F(X)]^{\an}/I$. This is a Fr\'echet algebra, and \[ \Hom_{\gp}(\Gamma, A^{\by})\cong \Hom_{k\Fr\Alg}( k[\Gamma]^{\an} ,A) \] for all Fr\'echet algebras $A$. For an explicit description of $k[\Gamma]^{\an}$, note that the system of norms is given by \[ \|\sum \lambda_{\gamma}\gamma \|_{1,n}= \sum |\lambda_{\gamma}| \cdot n^{w(\gamma)}, \] where $w(\gamma)$ is the minimal word length of $\gamma$ in terms of $X$. \end{proof} When combined with its tensor structure, this implies that the functor of Proposition \ref{fgrep} is a very strong invariant indeed: \begin{lemma}\label{recovergroup} The tensor structure of Lemma \ref{tensorstr0} gives $k[\Gamma]^{\an} $ the structure of a Fr\'echet bialgebra. The group \[ G(k[\Gamma]^{\an})=\{a \in k[\Gamma]^{\an}\,:\, \mu(a)= a\ten a \in k[\Gamma]^{\an}\ten^{\pi} k[\Gamma]^{\an},\, \vareps(a)=1 \in k\} \] of grouplike elements of $ k[\Gamma]^{\an} $ is then $\Gamma$. \end{lemma} \begin{proof} Applying the map $m$ of Lemma \ref{tensorstr0} to $(\xi,\xi)$, for the canonical element $\xi \in\Hom_{\gp}(\Gamma,(k[\Gamma]^{\an})^{\by}) $ gives us a comultiplication $\mu\co k[\Gamma]^{\an}\to k[\Gamma]^{\an}\ten^{\pi}k[\Gamma]^{\an}= k[\Gamma\by \Gamma]^{\an}$ and a co-unit $\vareps\co k[\Gamma]^{\an} \to k$.
On the topological basis $\Gamma$, we must have $\mu(\gamma)= (\gamma, \gamma)$ and $\vareps(\gamma)=1$. Expressing $a\in G(k[\Gamma]^{\an})$ as $\sum_{\gamma \in \Gamma} a_{\gamma} \gamma$, note that the conditions become $a_{\gamma} a_{\delta}=0$ for $\gamma\ne \delta$, $a_{\gamma}^2=a_{\gamma}$, and $\sum a_{\gamma}=1$; thus $a=\gamma$ for some $\gamma\in \Gamma$. \end{proof} \begin{example}\label{completeab} Arguing as in Example \ref{completeZ}, for $\Gamma$ abelian and finitely generated, $\Cx[\Gamma]^{\an}$ is isomorphic to the ring of complex analytic functions on $\Hom_{\gp}(\Gamma, \Cx^*)$, while $\R[\Gamma]^{\an} \subset \Cx[\Gamma]^{\an}$ consists of $\Gal(\Cx/\R)$-equivariant functions. The multiplicative analytic functions are of course just $\Gamma$ itself. \end{example} Proposition \ref{cfBettiDR} then implies: \begin{corollary} The functors $\oR^{\dR}_{X,x}$ and $\oR^B_{X,x}$ on real Fr\'echet algebras are representable. \end{corollary} \begin{remark}\label{htpydgrk} Adapting the ideas of \cite{htpy}, the functor $\oR^B_{X,x}$ has a natural extension to those simplicial Fr\'echet algebras $B_{\bt}$ for which $B_n \to \pi_0B$ is a pro-nilpotent extension for each $n$. Explicitly, we could set $ \oR^B_{X,x}(B)$ to be the set of homotopy classes of maps $G(\Sing(X,x)) \to B_{\bt}^{\by}$ of simplicial groups, where $G$ is Kan's loop group. This functor admits a tensor structure extending Lemma \ref{tensorstr0}. The functor $\oR^{\dR}_{X,x}$ has a natural extension to those differential graded Fr\'echet algebras $B_{\bt}$ for which $B_0 \to \H_0B$ is a pro-nilpotent extension. Explicitly, $\oR^{\dR}_{X,x}(B) $ would consist of pairs $(\sT_0, D)$, where $\sT_0$ is a $\sA^0_X(B^{\by}_0)$-torsor and $D\co \sT_0 \to \prod_n \sA^{n+1}_X\ten_{\sA^0_X}\ad \sT_n(n+1)$ is a flat hyperconnection, where $\ad \sT_n := \sT_0\by_{\sA^0_X(B^{\by}_0) }\sA^0_X(B_n)$.
It then seems likely that \cite[Corollary \ref{htpy-bigequiv}]{htpy} should adapt to give natural isomorphisms $\oR^B_{X,x}(B) \cong \oR^{\dR}_{X,x}(NB)$, where $N$ is Dold--Kan normalisation. \end{remark} \subsection{The pluriharmonic functor} Fix a compact connected K\"ahler manifold $X$, with basepoint $x \in X$. \begin{definition}\label{Adef} Given a real Banach space $B$, denote the sheaf of $B$-valued $\C^{\infty}$ $n$-forms on $X$ by $\sA^n_X(B)$, and let $\sA_X^{\bt}$ be the resulting complex. Write $A^{\bt}(X,B):= \Gamma(X, \sA_X^{\bt}(B))$. We also write $\sA^{\bt}_X:= \sA^{\bt}_X(\R)$ and $A^{\bt}(X):= A^{\bt}(X,\R)$. \end{definition} \begin{definition}\label{Sdef} Define $S$ to be the real algebraic group $\prod_{\Cx/\R} \bG_m$ obtained as in \cite{Hodge2} 2.1.2 from $\bG_{m,\Cx}$ by restriction of scalars. Note that there is a canonical inclusion $\bG_m \into S$. \end{definition} The following is a slight generalisation of \cite[Definition \ref{mhs-dmd}]{mhs}: \begin{definition}\label{dmd} For any real Banach space $B$, there is an action of $S$ on $\sA^*_X(B)$, which we will denote by $a \mapsto \lambda \dmd a$, for $\lambda \in \Cx^* = S(\R)$. For $a \in (A^*(X)\ten \Cx)^{pq}$, it is given by $$ \lambda \dmd a := \lambda^p\bar{\lambda}^qa. $$ \end{definition} \begin{definition}\label{pluritorsor} Given a real $C^*$-algebra $B$, define $\cR^J_X(B)$ to be the groupoid of pairs $(U(\sP), D)$, where $U(\sP)$ is a right $\sA^0_X(U(B))$-torsor, and $D$ is a pluriharmonic connection on $U(\sP)$. Explicitly, write $\sP:= U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(B^{\by})$, and \begin{align*} \ad\sP &:= \sP\by_{\sA^0_X(B^{\by})}\sA^0_X(B)\\ &= U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(B)\\ &= [U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(\fu(B))]\oplus [U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(S(B))], \end{align*} where $U(B)$ and $B^{\by}$ act on $B$ by the adjoint action. 
Then a pluriharmonic connection on $\sP$ is \[ D\co U(\sP) \to \ad\sP \] satisfying \begin{enumerate} \item $D$ is a $d$-connection: $D(pu)= \ad_uD(p) + u^{-1}du $, for $u \in \sA^0_X(U(B))$; \item $D$ is flat: $(\ad D)\circ D = 0$; \item $D$ is pluriharmonic: $(\ad D) \circ D^c + (\ad D^c)\circ D=0$. \end{enumerate} Here, $D=d^+ + \vartheta$ comes from the decomposition of $\ad\sP$ into anti-self-adjoint and self-adjoint parts, and $D^c= i\dmd d^+ -i \dmd \vartheta$. Define $\oR^J_{X,x}(B)$ to be the groupoid of triples $(U(\sP), D,f)$, where $(U(\sP), D) \in \cR^J_X(B)$ and $f \in x^*U(\sP)$ is a distinguished element. Since $ \oR^J_{X,x}(B)$ has no non-trivial automorphisms, we will regard it as a set-valued functor (given by its set of isomorphism classes). \end{definition} \begin{remarks}\label{RJBanAlg} Note that there is a natural action of $U(B)$ on $\oR^J_{X,x}(B)$, given by changing the framing. The quotient groupoid $[\oR^J_{X,x}(B)/U(B)]$ is thus equivalent to $\cR^J_X(B)$. In \cite[Lemma 7.13]{Sim2}, the set $\oR^J_{X,x}(\Mat_n(\Cx))$ is denoted by $\oR_{\DR}^J(X,x,n)$. Also note that the definition of $\oR^J_{X,x}(B)$ can be extended to any real Banach $*$-algebra $B$. However, this will not be true of the harmonic functor of \S \ref{harmonicsn}. \end{remarks} \begin{example}\label{plurilocsys} When $V$ is a real Hilbert space, the algebra $L(V)$ of bounded operators on $V$ is a real $C^*$-algebra. Then $\cR^J_X(L(V))$ is equivalent to the groupoid of pluriharmonic local systems $\vv$ in Hilbert spaces on $X$, fibrewise isometric to $V$. The connection $D\co \sA^0(\vv) \to \sA^1(\vv)$ must satisfy the pluriharmonic condition that $DD^c+D^cD=0$, for $D^c$ defined with respect to the smooth inner product $\vv \by \vv \to \sA^0_X$. Isomorphisms in $\cR^J_X(L(V))$ preserve the inner product.
\end{example} \begin{definition}\label{deRhamproj} Define the de Rham projection \[ \pi_{\dR}\co \oR^J_{X,x}(B) \to \oR^{\dR}_{X,x}(B) \] by mapping $(U(\sP),D, f)$ to the framed flat torsor $(\sP,D, f)= (U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(B^{\by}), D, f\by_{U(B)}B^{\by})$. \end{definition} \begin{proposition}\label{pluriprorep} The functor $\oR^J_{X,x} \co C^*\Alg \to \Set$ is strictly pro-representable, by an object $E^J_{X,x} \in \pro(C^*\Alg)$. \end{proposition} \begin{proof} The final object in $C^*\Alg$ is $0$, and $\oR^J_{X,x}(0)$ is the one-point set, so $\oR^J_{X,x}$ preserves the final object. Given maps $A \to B \la C$ in $C^*\Alg$ and $(p_A, p_C) \in \oR^J_{X,x}(A)\by_{\oR^J_{X,x}(B)}\oR^J_{X,x}(C)$, we get \begin{align*} \pi_{\dR}(p_A, p_C)\in &\oR^{\dR}_{X,x}(A)\by_{\oR^{\dR}_{X,x}(B)}\oR^{\dR}_{X,x}(C)\\ &\cong \oR^B_{X,x}(A)\by_{\oR^B_{X,x}(B)}\oR^B_{X,x}(C)\\ &\cong\oR^B_{X,x}(A\by_BC). \end{align*} Thus we have a flat torsor $(\sP, D) \in \oR^{\dR}_{X,x}(A\by_BC)$. It follows that $p_A\cong (U(\sP_A), D)$ for some orthogonal form $U(\sP_A) \subset \sP_A=\sP\by_{\sA^0_X((A\by_BC)^{\by} )}\sA^0_X(A^{\by}) $, and similarly for $p_C$. Since the images of $p_A$ and $p_C$ are equal in $\oR^{\dR}_{X,x}(B)$, there is a framed orthogonal isomorphism $\alpha\co U(\sP_A)\by_{\sA^0_X(U(A))}\sA^0_X(U(B)) \to U(\sP_C)\by_{\sA^0_X(U(C))}\sA^0_X(U(B))$, inducing the identity on $\sP_B$. Hence $\alpha$ must itself be the identity, so both $U(\sP_A)$ and $U(\sP_C)$ give the same unitary form $U(\sP_B)$ for $\sP_B$. It is easy to check the pluriharmonic conditions, giving an element \[ (U(\sP_A)\by_{U(\sP_B)}U(\sP_C),D)\in \oR^J_{X,x}(A\by_BC ) \] over $(p_A, p_C)$. This is essentially unique, so $\oR^J_{X,x}$ preserves fibre products, and hence finite limits. Now, given a $C^*$-subalgebra $A \subset B$, the map $\oR^J_{X,x}(A)\to \oR^J_{X,x}(B)$ is injective.
This follows because if two framed pluriharmonic bundles $\sP_1, \sP_2$ in $\oR^J_{X,x}(A)$ become isomorphic in $\oR^J_{X,x}(B)$, compatibility of framings ensures that the isomorphism $f$ maps $x^*\sP_1$ to $x^*\sP_2$. Since $f$ is compatible with the connections, it thus gives an isomorphism $f\co \sP_1 \to \sP_2$, by considering the associated local systems. Finally, given an inverse system $\{A_i\}_i$ of nested $C^*$-subalgebras of a $C^*$-algebra $B$ and an element of $\bigcap_i \oR^J_{X,x}(A_i)$, we have a compatible system $\{(\sP_i,D_i, f_i)\}_i$. Set $\sP:= \Lim_i \sP_i$, with connection $D$ and framing $f$ induced by the $D_i$ and $f_i$. This defines a unique element of $\oR^J_{X,x}(\bigcap_i A_i)$, showing that \[ \oR^J_{X,x}(\bigcap_{i } A_i) \cong \bigcap_{i} \oR^J_{X,x}(A_i). \] Thus all the conditions of Proposition \ref{repcstar} are satisfied, so $\oR^J_{X,x}$ is strictly pro-representable. \end{proof} \begin{definition}\label{tensorCstar} Given pro-$C^*$-algebras $B,C$ over $k$, define $B\hat{\ten}_kC$ to be the maximal $k$-tensor product of $B$ and $C$, as defined in \cite[Definition 3.1]{phillips}; this is again a pro-$C^*$-algebra. \end{definition} \begin{lemma}\label{tensorstr} For any real pro-$C^*$-algebras $B,C$, there is a canonical map \[ m \co \oR^J_{X,x}(B)\by \oR^J_{X,x}(C) \to \oR^J_{X,x}(B\hat{\ten}_{\R}C), \] making $\oR^J_{X,x}$ into a symmetric monoidal functor, with unit corresponding to the trivial torsor in each $\oR^J_{X,x}(B)$. \end{lemma} \begin{proof} Given $((U(\sP), D, f), (U(\sQ),E, g))$ on the left-hand side, we first form the $\sA^0_X(U(B\hat{\ten}C))$-torsor $U(\sR)$ given by $U(\sR):= (U(\sP)\by U(\sQ))\by_{\sA^0_X(U(B)\by U(C))}\sA^0_X(U(B\hat{\ten}C))$.
We then define a connection $F$ on $U(\sR)$ determined by \[ F(p,q,1)= (Dp,q)+ (p,Eq) \in \sA^1_X(\ad \sR)= (U(\sP)\by U(\sQ))\by_{\sA^0_X(U(B)\by U(C))}\sA^1_X(B\hat{\ten}C) \] for $p \in U(\sP), q \in U(\sQ)$. This is clearly flat and pluriharmonic, and the construction is also symmetric monoidal. \end{proof} Note that this gives $E^J_{X,x}$ the structure of a pro-$C^*$-bialgebra, with comultiplication $\mu \co E^J_{X,x} \to E^J_{X,x} \hten E^J_{X,x}$ coming from $m$, and counit $\vareps\co E^J_{X,x} \to k$ coming from the trivial torsor. The following is immediate: \begin{lemma}\label{Pfunctorial} For any morphism $f\co X \to Y$ of compact connected K\"ahler manifolds, there is a natural transformation \[ f^*\co \oR^J_{Y,fx} \to \oR^J_{X,x} \] of functors. \end{lemma} \subsection{Higgs bundles} \begin{definition} Given a complex Banach algebra $B$, write $\O_X(B)$ for the sheaf on $X$ given locally by holomorphic functions $X \to B$. \end{definition} \begin{definition} For a complex Banach algebra $B$, a Higgs $B$-torsor on $X$ consists of an $\O_X(B)^{\by}$-torsor $\sT$, together with a Higgs form $\theta \in \ad \sT\ten_{\O_X}\Omega^1_X$, where $\ad \sT:= \sT\by_{\O_X(B)^{\by}, \ad}\O_X(B)$, satisfying \[ \theta \wedge\theta = 0 \in \ad\sT\ten_{\O_X}\Omega^2_X. \] \end{definition} \begin{definition} Let $\cR^{\Dol}_{X}(B)$ be the groupoid of Higgs $B$-torsors, and $\oR^{\Dol}_{X,x}(B)$ the groupoid of framed Higgs bundles $(\sT, \theta, f)$, where $f \co B^{\by}\to x^*\sT$ is a $B^{\by}$-equivariant isomorphism. Alternatively, we may think of $f$ as a distinguished element of $x^*\sT$. Note that $\oR^{\Dol}_{X,x}(B)$ is a discrete groupoid, so we will usually identify it with its set of isomorphism classes. Also note that there is a canonical action of $B^{\by}$ on $\oR^{\Dol}_{X,x}(B)$ given by the action on the framings. This gives an equivalence \[ \cR^{\Dol}_{X}(B) \simeq [\oR^{\Dol}_{X,x}(B)/B^{\by}] \] of groupoids.
\end{definition} The following is immediate: \begin{lemma} Giving a Higgs $B$-torsor on $X$ is equivalent to giving an $\sA^0_X(B^{\by})$-torsor $\sQ$ equipped with a flat $\bar{\pd}$-connection, i.e. a map \[ D''\co \sQ \to \ad\sQ:= \sQ\by_{\sA^0_X(B^{\by}), \ad}\sA^1_X(B) \] satisfying \begin{enumerate} \item $D''(pg)= \ad_gD''(p) + g^{-1}\bar{\pd}g $, for $g \in \sA^0_X(B^{\by})$; \item $(\ad D'')\circ D'' = 0$. \end{enumerate} \end{lemma} \begin{remark}\label{Dolnotprorep} Note that unlike the Betti, de Rham, and harmonic functors, the Dolbeault functor cannot be pro-representable in general. This is for the simple reason that a scheme whose functor of points is left-exact must be affine, but the Dolbeault moduli space is seldom so, since it contains the Picard scheme. \end{remark} \begin{definition}\label{Ddecomp} Given $(U(\sP), D) \in \cR^J_X(B)$, decompose $d^+$ and $\vartheta$ into $(1,0)$ and $(0,1)$ types as $d^+=\pd + \bar{\pd}$ and $\vartheta= \theta+\bar{\theta}$. Now set $D'=\pd + \bar{\theta}$ and $D''= \bar{\pd}+ \theta$. Note that $D= D'+D''$ and $D^c= iD'-iD''$. \end{definition} \begin{definition} For a complex $C^*$-algebra $B$, define the Dolbeault projection map $\pi_{\Dol}\co\cR^J_X(B) \to \cR^{\Dol}_X(B)$ by sending $(U(\sP),D)$ to $(\sP, D'')$, for $\sP= U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(B^{\by})$ as in Definition \ref{pluritorsor}. \end{definition} \subsection{The harmonic functor}\label{harmonicsn} We now let $X$ be any compact Riemannian real manifold. \begin{definition} Given a compact Riemannian manifold $X$, a real $C^*$-algebra $B$, a right $\sA^0_X(U(B))$-torsor $U(\sP)$ and a flat connection $D$ \[ D\co U(\sP) \to \ad\sP, \] say that $D$ is a harmonic connection if $(d^+)^*\vartheta=0 \in \Gamma(X,\ad\sP)$, for $d^+, \vartheta$ defined as in Definition \ref{pluritorsor}, and the adjoint $*$ given by combining the involution $*$ on $\ad\sP$ with the adjoint on $\sA^*_X$ given by the Riemannian metric.
\end{definition} \begin{lemma} A flat connection $D$ as above on a compact K\"ahler manifold is harmonic if and only if it is pluriharmonic. \end{lemma} \begin{proof} The proof of \cite[Lemma 1.1]{Simpson} carries over to this generality. \end{proof} \begin{definition} The lemma allows us to extend Definitions \ref{pluritorsor} and \ref{deRhamproj} to any compact Riemannian manifold $X$, replacing pluriharmonic with harmonic in the definition of $\cR^J_X(B)$ and $\oR^J_{X,x}(B)$. \end{definition} \begin{proposition}\label{harmprorep} The functor $\oR^J_{X,x} \co C^*\Alg_{\R} \to \Set$ is strictly pro-representable, by an object $E^J_{X,x} \in \pro(C^*\Alg_{\R})$. \end{proposition} \begin{proof} The proof of Proposition \ref{pluriprorep} carries over. \end{proof} Note that Lemma \ref{tensorstr} carries over to the functor $\oR^J_{X,x}$ for any compact Riemannian manifold $X$. The following is immediate: \begin{lemma}\label{Pfunctorial2} For any local isometry $f\co X \to Y$ of compact connected real Riemannian manifolds, there is a natural transformation \[ f^*\co \oR^J_{Y,fx} \to \oR^J_{X,x} \] of functors. \end{lemma} Note that this is much weaker than Lemma \ref{Pfunctorial}, the pluriharmonic functor being \emph{a priori} functorial with respect to all morphisms. \section{Analytic non-abelian Hodge theorems}\label{cfansn} \subsection{The de Rham projection}\label{dRprojsn} Fix a compact connected real Riemannian manifold $X$, with basepoint $x \in X$. The argument of \cite[Lemma 7.17]{Sim2} (which is only stated for $X$ K\"ahler) shows that $\pi_{\dR}$ gives a homeomorphism \[ \oR^J_{X,x}(\Mat_n(\Cx))/U(n) \to \Hom(\pi_1(X,x), \GL_n(\Cx))//\GL_n(\Cx), \] where $//$ denotes the coarse quotient (in this case, the Hausdorff completion of the topological quotient). As an immediate consequence, note that \[ \oR^J_{X,x}(\Cx) \to \Hom_{\gp}(\pi_1(X,x), \Cx^*) \] is a homeomorphism.
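To make the coarse quotient concrete in the simplest non-trivial case, take $\pi_1(X,x)=\Z$ (e.g. $X=S^1$, a compact Riemannian manifold), so a representation is a single invertible matrix. The sketch below is our own illustration, not part of the text, and assumes only \texttt{numpy}: it shows why a unipotent Jordan block $J$ and its semisimplification, the identity $I$, define the same point of $\Hom(\Z, \GL_2(\Cx))//\GL_2(\Cx)$, since the conjugation orbit of $J$ accumulates at $I$ and invariant functions such as traces of powers cannot separate them.

```python
import numpy as np

# Illustrative sketch only: in Hom(Z, GL_2(C)) // GL_2(C) the non-semisimple
# representation given by a unipotent Jordan block J is identified with its
# semisimplification, the identity I, because the GL_2-orbit of J meets every
# neighbourhood of I.

J = np.array([[1.0, 1.0], [0.0, 1.0]])   # non-semisimple monodromy matrix
I = np.eye(2)                            # its semisimplification

for t in [1.0, 0.1, 0.001]:
    c = np.diag([t, 1.0])
    conj = c @ J @ np.linalg.inv(c)      # equals [[1, t], [0, 1]]
    assert np.isclose(np.abs(conj - I).max(), t)   # orbit approaches I as t -> 0

# Conjugation-invariant functions (traces of the monodromy of each element
# of Z, i.e. traces of powers) agree on J and I:
for k in range(1, 5):
    assert np.isclose(np.trace(np.linalg.matrix_power(J, k)),
                      np.trace(np.linalg.matrix_power(I, k)))
```

This is exactly the failure of Hausdorffness that the coarse quotient repairs; the harmonic representatives on the left-hand side of the homeomorphism pick out the closed (semisimple) orbits.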
Thus the abelianisation of $E^J_{X,x}\ten \Cx$ is isomorphic to the commutative $C^*$-algebra $C( \Hom(\pi_1(X,x), \Cx^*), \Cx)$, with the abelianisation of $E^J_{X,x} \subset E^J_{X,x}\ten \Cx$ consisting of $\Gal(\Cx/\R)$-equivariant functions. We now adapt these results to recover a finer comparison between the respective functors. \subsubsection{Harmonic representations} \begin{proposition}\label{uniquemetric} For a compact Riemannian manifold $X$ and all real $C^*$-algebras $B$, the de Rham projection \[ \pi_{\dR}\co \oR^J_{X,x}(B) \to \oR^{\dR}_{X,x}(B) \] has the property that if $p_1, p_2 \in \oR^J_{X,x}(B)$ and if $\ad_b\pi_{\dR}(p_1)= \pi_{\dR}(p_2)$ for some strictly positive self-adjoint element $b \in B$, then $p_1=p_2$. Thus \[ \pi_{\dR}\co \oR^J_{X,x}(B)/U(B) \to \oR^{\dR}_{X,x}(B)/B^{\by} \] is injective. \end{proposition} \begin{proof} The first statement above implies the second: it suffices to show that for any $(U(\sP), D, f) \in \oR^J_{X,x}(B)$, there are no other harmonic representations in the $B_{++}$-orbit of $\pi_{\dR}(U(\sP),D, f)$. Since $B_{++}= \exp(S(B))$ (by the continuous functional calculus), we can equivalently look at the orbit under the exponential action of the set $S(B)$. We adapt the proof of \cite[Proposition 2.3]{corlette}. The harmonic condition $(d^+)^*\vartheta=0$ is equivalent to saying that for all $\xi \in A^0(X, i\ad \sP)$ \[ \<\vartheta, d^+\xi\>=0 \in A^0(X, B), \] where $\<-,-\>$ is defined using the Riemannian metric. Now, the set of flat connections on $\sP$ admits a gauge action $\star$ of the smooth automorphism group of $\sP$, and hence via exponentiation an action of the additive group $\Gamma(X,\ad \sP)$. An isomorphism in $B_{++}$ between two flat connections corresponds to an element of $\exp(\Gamma(X, S(\ad \sP)))$, for $S(\ad\sP) \subset \ad \sP$ consisting of self-adjoint elements, giving a gauge between the respective connections.
Thus the $S(B)$-orbit above is given by looking at the orbit of $D$ under $\Gamma(X, S(\ad \sP))$. By analogy with \cite[Proposition 2.3]{corlette}, we fix $\xi \in A^0(X, S(\ad \sP))$, let $d^+_t, \vartheta_t$ be the anti-self-adjoint and self-adjoint parts of $\exp(\xi t)\star D$, and set \[ f(t):= \< \vartheta_t, \vartheta_t \> \in A^0(X,B). \] Now, $\frac{d}{dt}(\exp(\xi t)\star D) =(\exp(\xi t)\star D)\xi = d^+_t\xi + \vartheta_t\wedge \xi$, so $\frac{d}{dt}\vartheta_t= d^+_t\xi$ and \[ f'(t)= 2\<\xi, (d^+_t)^*\vartheta_t \> \in A^0(X,B). \] In other words, $D_t:= \exp(\xi t)\star D$ is harmonic if and only if $f'(t)=0$ for all $\xi$. Now, if we set $\hat{D}_t:= d^+_t-\vartheta_t$, the calculations of \cite[Proposition 2.3]{corlette} adapt to give \[ 2f''(t)= \|D_t\xi +\hat{D}_{t}\xi\|^2+ \|D_t\xi -\hat{D}_{t}\xi\|^2\in A^0(X,B), \] where $\|v\|^2:= \<v,v\>$; unlike Corlette, we are only taking the inner product with respect to the Riemannian metric, not imposing an additional inner product on $B$. Note that $f''(t)$ is an element of $A^0(X,B_{+})$, which lies in $A^0(X,B_{++})$ unless $D_t\xi=0$. If we start with a harmonic connection $D$, this implies that $\exp(\xi)\star D$ is harmonic if and only if $D\xi=0$. However, when $D\xi=0$ we have $\exp(\xi)\star D=D$, showing that $D$ is the unique harmonic connection in its $B_{++}$-orbit. \end{proof} \begin{corollary}\label{Zpluri} For a complex $C^*$-algebra $B$ and an element $p \in \oR^J_{X,x}(B)$, the centraliser $\z(\pi_{\dR}(p),B^{\by})$ of $\pi_{\dR}(p)$ under the adjoint action of $B^{\by}$ is given by \[ \z(\pi_{\dR}(p),B^{\by})= \exp(\{ b \in S(B)\,:\, e^{ibt} \in \z(p, U(B)) \forall t \in \R\})\rtimes\z(p, U(B)); \] beware that this is the semidirect product of a set with a group. \end{corollary} \begin{proof} Take $g \in \z(\pi_{\dR}(p),B^{\by})$, and observe that the polar decomposition allows us to write $g = \exp(b)u$, for $u\in U(B)$ and $b \in S(B)$.
Since $\pi_{\dR}$ is $U(B)$-equivariant, we have \[ \ad_{\exp(b)}(\pi_{\dR}(\ad_u(p)))= \ad_g(\pi_{\dR}(p))=\pi_{\dR}(p). \] Thus Proposition \ref{uniquemetric} implies that $ \ad_u(p)=p$, so $u \in \z(p, U(B))$. Since $\z(\pi_{\dR}(p),B^{\by})$ is a group, this implies that $\exp(b) \in \z(\pi_{\dR}(p),B^{\by})$, and hence that $\exp(b)$ commutes with the image of $\pi_{\dR}(p)$. We may apply the continuous functional calculus to take logarithms, showing that $b$ itself commutes with the image of $\pi_{\dR}(p)$, so $ib t$ does also. But then $\exp(ib t)\in \z(p, U(B))$ for all $t$, as required. Conversely, if $\exp(ib t)\in \z(p, U(B))$ for all $t$, then $\exp(-ib t)\pi_{\dR}(p)\exp(ib t)= \pi_{\dR}(p)$, and differentiating in $t$ for each element of $\pi_1(X,x)$, we see that $ib$ commutes with $\pi_{\dR}(p)$. Thus $\exp(b) \in \z(\pi_{\dR}(p),B^{\by})$. \end{proof} \subsubsection{Topological representation spaces and completely bounded maps} \begin{lemma}\label{Banachplurifunctor} For the real pro-$C^*$-algebra $E^J_{X,x}$ of Proposition \ref{pluriprorep}, there is a canonical map $\pi_{\dR}\co\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},B) \to \oR^{\dR}_{X,x}(B)$, functorial in real Banach algebras $B$. \end{lemma} \begin{proof} Given $f\co E^J_{X,x} \to B$, Lemma \ref{Cstarimage} factors $f$ as the composition of a surjective $C^*$-homomorphism $g\co E^J_{X,x}\to C$ and a continuous embedding $C \into B$. The de Rham projection of Definition \ref{deRhamproj} then gives us an element $\pi_{\dR}(g)\in \oR^{\dR}_{X,x}(C)$. Combining this with the embedding $C^{\by} \to B^{\by}$ then provides the required element of $ \oR^{\dR}_{X,x}(B)$. \end{proof} \begin{proposition}\label{banachreps} For any real $C^*$-algebra $B$, the map of Lemma \ref{Banachplurifunctor} induces an injection \[ \pi_{\dR}\co \Hom(E^J_{X,x},B)_{cb} \into \oR^{\dR}_{X,x}(B), \] for the completely bounded morphisms of Definition \ref{cbdef}. 
\end{proposition} \begin{proof} Since $B$ can be embedded as a closed $C^*$-subalgebra of $L(H)$ for some complex Hilbert space $H$, we may replace $B$ with $L(H)$. By Lemma \ref{KSPlemma}, any completely bounded homomorphism $f\co E^J_{X,x} \to L(H)$ is conjugate to a $*$-morphism, since $E^J_{X,x}$ is a pro-$C^*$-algebra. Therefore Proposition \ref{uniquemetric} shows that \[ \Hom(E^J_{X,x},L(H))_{cb}/\GL(H) \into \oR^{\dR}_{X,x}(L(H))/\GL(H). \] Take a homomorphism $f\co E^J_{X,x} \to L(H)$ of $C^*$-algebras; it suffices to show that the centralisers of $f$ and of $\pi_{\dR}(f)$ are equal. By Corollary \ref{Zpluri}, we know that \[ \z(\pi_{\dR}(f),\GL(H))= \exp(\{ b \in S(L(H))\,:\, e^{ibt} \in \z(f, U(H)) \forall t \in \R\})\rtimes\z(f, U(H)). \] If $e^{ibt}$ commutes with $f$ for all $t$, then $e^{ibt}fe^{-ibt}=f$, and differentiating in $t$ shows that $b$ commutes with $f$. Therefore $\exp(b)$ commutes with $f$, showing that \[ \z(\pi_{\dR}(f),\GL(H))\subset \z(f,\GL(H)). \] The reverse inclusion is automatic, giving the required result. \end{proof} \begin{remark}\label{cfredpi} In \cite{Simpson, mhs}, the pro-reductive fundamental group $\pi_1(X,x)^{\red}_k$ is studied --- this is an affine group scheme over $k$. By Tannakian duality (\cite[Ch. II]{tannaka}), we can interpret the dual $O(\pi_1(X,x)^{\red}_{\R})^{\vee}$ of the ring of functions as the ring of discontinuous additive $\Gal(\Cx/\R)$-equivariant endomorphisms of $\eta_x^{\dR, \ss}$. The group scheme $\pi_1(X,x)^{\red}_k$ encodes all the information about the sets of finite-dimensional representations of $\pi_1(X,x)$. As we will now see, $ (E^J_{X,x})_{\PN}$ encodes all the information about their topologies as well.
\end{remark} \begin{theorem}\label{Hsstopthm} For any positive integer $n$, $\pi_{\dR}$ gives a homeomorphism $\pi_{\dR,\ss}$ between the space $\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx))$ with the topology of pointwise convergence, and the subspace of $\oR^{\dR}_{X,x}(\Mat_n(\Cx))$ whose points correspond to semisimple local systems. \end{theorem} \begin{proof} The isomorphism $\pi_{\dR,\ss}$ is given on points by the proof of \cite[Theorem 3.3]{corlette}, since completely bounded algebra homomorphisms $E^J_{X,x} \to \Mat_n(\Cx)$ are those conjugate to $*$-homomorphisms, which in turn correspond to harmonic local systems. We need to show that this is a homeomorphism. Consider the map $\pi_{\dR}^{\sharp}\co E^{\dR}_{X,x}\to E^J_{X,x}$ of pro-Banach algebras. If $T_i \to T$ is a convergent net in $\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx))$, then $T_i(\pi_{\dR}^{\sharp}(\gamma)) \to T(\pi_{\dR}^{\sharp}(\gamma))$, so $\pi_{\dR,\ss}$ is continuous. Now, the de Rham projection $\pi_{\dR} \co \oR^J_{X,x}(B)\to \oR^{\dR}_{X,x}(B)$ is automatically injective for $C^*$-algebras $B$. Thus the inclusion in $E^J_{X,x}$ of the pro-$C^*$-subalgebra generated by $\pi_{\dR}^{\sharp}(\pi_1(X,x))$ must be an epimorphism, since $\R[\pi_1(X,x)]$ is dense in $E^{\dR}_{X,x}$. By \cite[Proposition 2]{reidEpimorphisms}, an epimorphism of $C^*$-algebras is surjective, so $E^J_{X,x}$ must be generated as a pro-$C^*$-algebra by $\pi_{\dR}^{\sharp}(\pi_1(X,x))$, and as a pro-Banach algebra by $\pi_{\dR}^{\sharp}(\pi_1(X,x)) \cup \pi_{\dR}^{\sharp}(\pi_1(X,x))^*$. Now, if we have a convergent net $\pi_{\dR}(T_i) \to \pi_{\dR}(T)$, then $T_i(\pi_{\dR}^{\sharp}\gamma) \to T(\pi_{\dR}^{\sharp}\gamma)$ for all $\gamma \in \pi_1(X,x)$, so it suffices to show that the same holds for $(\pi_{\dR}^{\sharp}\gamma)^* $. Given $T \in \Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx))$, define $T^*$ by $T^*(e):= T(e^*)^*$.
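As a quick check (spelling out a step left implicit here), $T^*$ is again an algebra homomorphism: for $e,f \in E^J_{X,x}$,
\[ T^*(ef)= T\bigl((ef)^*\bigr)^*= T(f^*e^*)^*= \bigl(T(f^*)T(e^*)\bigr)^*= T(e^*)^*T(f^*)^*= T^*(e)T^*(f), \]
since $*$ is an anti-automorphism and $T$ is multiplicative.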
We wish to show that $T_i^*(\pi_{\dR}^{\sharp}\gamma) \to T^*(\pi_{\dR}^{\sharp}\gamma) $, which will follow if $\pi_{\dR}(T_i^*) \to \pi_{\dR}(T^*)$. As in the proof of Proposition \ref{banachreps}, we can write $T= \ad_g(S)$, for $S \co E^J_{X,x} \to \Mat_n(\Cx)$ a $*$-homomorphism and $g \in \GL_n(\Cx)$. Then $T^*= \ad_{(g^*)^{-1}}(S)= \ad_{(gg^*)^{-1}}(T)$; this means that if we write $\pi_{\dR}(T)= (\sV,D, f)$, then $\pi_{\dR}(T^*)=(\sV,D, (f^*)^{-1}) $, where $f^*\co x^*\sV\to\Cx^n$ is defined using the harmonic metric on $\sV$ and the standard inner product on $\Cx^n$. Explicitly, this means that we can describe the involution $*$ on semisimple elements of $\oR^{B}_{X,x}(\Mat_n(\Cx))$ by $\rho^*= (C(\rho)^{-1})^{\dagger}$, where $C$ is the Cartan involution of \cite{Simpson}. If $D= d^++\vartheta$ is the decomposition into anti-hermitian and hermitian parts with respect to the harmonic metric, then $C(\sV,d^++\vartheta , f)=(\sV,d^+-\vartheta , f)$. The proof of \cite[Theorem 3.3]{corlette} ensures that the decomposition $D \mapsto (d^+, \vartheta)$ is continuous in $D$, so $C$ is continuous. Hence $\pi_{\dR}(T) \mapsto \pi_{\dR}(T^*)$ is also continuous, which gives the convergence required. \end{proof} \subsubsection{The polynormal completion and Tannaka duality}\label{recovertopology} \begin{definition}\label{FDDRcat} Let $\FD\oR^{\dR}_{X,x}$ be the category of pairs $(V, p)$ for $V\in \FD\Vect$ and $p \in \oR^{\dR}_{X,x}(\End(V))$. Morphisms $f\co (V_1, p_1)\to (V_2, p_2)$ are given by linear maps $f\co V_1 \to V_2$ for which the adjoint action of \[ \begin{pmatrix} \id & 0 \\ f & \id \end{pmatrix} \in \begin{pmatrix} \End(V_1) & 0 \\ \Hom(V_1,V_2) & \End(V_2) \end{pmatrix} \] on $\FD\oR^{\dR}_{X,x} \left(\begin{smallmatrix} \End(V_1) & 0 \\ \Hom(V_1,V_2) & \End(V_2) \end{smallmatrix}\right)$ fixes $p_1 \oplus p_2$. Write $\eta_x^{\dR}\co \FD\oR^{\dR}_{X,x} \to \FD\Vect$ for the fibre functor $(V,p) \mapsto V$.
Let $\FD\oR^{\dR,\ss}_{X,x} \subset \FD\oR^{\dR}_{X,x}$ be the full subcategory in which objects correspond to semisimple local systems, with fibre functor $\eta_x^{\dR, \ss}$. \end{definition} \begin{proposition}\label{PNssprop} The ring $ (E^J_{X,x})_{\PN}$ is isomorphic to the ring of continuous additive $\Gal(\Cx/\R)$-equivariant endomorphisms of $\eta_x^{\dR, \ss}$. \end{proposition} \begin{proof} This just combines Lemma \ref{PNlemma2} and Theorem \ref{Hsstopthm}. \end{proof} \begin{remark}\label{nonss} This leads us to contemplate the structure of the ring of continuous additive endomorphisms $f$ of $\eta_x^{\dR}$. Any finite-dimensional $\Cx$-algebra arises as a subalgebra of some matrix algebra, so any such $f$ induces continuous maps $\Hom_{\gp}(\pi_1(X,x),B^{\by}) \to B$ for all finite-dimensional algebras $B$. In particular, this holds when $B= \Mat_n(A)$ for some Artinian $\Cx$-algebra $A$, from which it follows that the maps \[ f_V \co \oR^{\dR}_{X,x}(\End(V)) \to \End(V) \] are all analytic. In other words, any continuous additive endomorphism of $\eta_x^{\dR}$ is automatically analytic. When $\pi_1(X,x)$ is abelian, this ensures that the ring $(E^{\dR}_{X,x})_{\FD}\ten\Cx$ of such endomorphisms is the ring of complex analytic functions on $\Hom_{\gp}(\pi_1(X,x), \Cx^*)$, which by Example \ref{completeab} is just $\Cx[\pi_1(X,x)]^{\an}$. In general, the ring $(E^{\dR}_{X,x})_{\FD}$ is an inverse limit of polynormal Banach algebras, but it is not clear to the author whether it is the pro-polynormal completion of the Fr\'echet algebra $E^{\dR}_{X,x}$. \end{remark} \begin{definition}\label{ssRdef} Given a $k$-normal real $C^*$-algebra $B$, define $ \oR^{\dR,\ss}_{X,x}(B)\subset \oR^{\dR}_{X,x}(B)$ to be the subspace consisting of those $p$ for which $\psi(p)$ corresponds to a semisimple local system for all $\psi \co B \to \Mat_k(\Cx)$. 
\end{definition} \begin{corollary}\label{PNrepss} For any $k$-normal real $C^*$-algebra $B$, $ \oR^{\dR,\ss}_{X,x}(B)$ is isomorphic to the set of continuous algebra homomorphisms $E^J_{X,x} \to B$. \end{corollary} \begin{proof} Since $B$ is $k$-normal, any such morphism $ E^J_{X,x} \to B$ factors uniquely through $ (E^J_{X,x})_{\PN}$. By Proposition \ref{PNssprop}, a homomorphism $(E^J_{X,x})_{\PN} \to B $ corresponds to a continuous additive $\Gal(\Cx/\R)$-equivariant functor $p^* \co \FD\Rep(B) \to \FD\oR^{\dR,\ss}_{X,x}$ of topological categories fibred over $\FD\Vect$. An element $p \in \oR^{\dR}_{X,x}(B)$ satisfies this condition provided $p^*$ maps to $\FD\oR^{\dR,\ss}_{X,x} \subset \FD\oR^{\dR}_{X,x}$. \end{proof} \begin{remark}\label{likelygenerality} It is natural to ask whether the non-abelian Hodge theorem of \cite{Sim2} extends from finite-dimensional matrix algebras to more general $C^*$-algebras $B$. Proposition \ref{PNssprop} can be thought of as an extension of the correspondence to polynormal $C^*$-algebras, but it seems unlikely to adapt much further, because the arguments of \cite{Sim2,corlette} rely on sequential compactness of $U_n$. \end{remark} \subsection{Residually finite-dimensional completion, products and complex tori}\label{RFDsn} \begin{definition}\label{RFDdef} A pro-$C^*$-algebra $A$ is said to be \emph{residually finite-dimensional} if it has a separating family of finite-dimensional $*$-representations. Given a pro-$C^*$-algebra $A$, define the pro-$C^*$-algebra $A_{\RFD}$ to be the universal residually finite-dimensional quotient of $A$. Explicitly, $A_{\RFD}$ is the quotient of $A$ with respect to the pro-ideal given by the system of kernels of finite-dimensional $*$-representations of $A$. \end{definition} Note that polynormal $C^*$-algebras are residually finite-dimensional, so we have completions $A \to A_{\RFD} \to A_{\PN}$ for general $A$.
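As a simple illustration (not needed in what follows): for a commutative $C^*$-algebra $A=C(K,\Cx)$ with $K$ compact Hausdorff, the evaluation characters
\[ \mathrm{ev}_x \co C(K,\Cx) \to \Cx, \qquad \mathrm{ev}_x(f)=f(x), \]
form a separating family of one-dimensional $*$-representations, so $A$ is already residually finite-dimensional and $A \to A_{\RFD}$ is an isomorphism.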
\begin{proposition}\label{productprop} Given compact connected K\"ahler manifolds $X$ and $Y$, there is an isomorphism $(E^J_{X\by Y, (x,y)})_{\RFD} \cong (E^J_{X,x})_{\RFD}\hat{\ten}( E^J_{Y,y})_{\RFD}$. \end{proposition} \begin{proof} The projections give canonical elements of $\oR^J_{X\by Y, (x,y)}(E^J_{X,x})$ and $\oR^J_{X\by Y, (x,y)}(E^J_{Y,y})$, which by Lemma \ref{tensorstr} give rise to a canonical map \[ f\co E^J_{X\by Y, (x,y)} \to E^J_{X,x}\hat{\ten} E^J_{Y,y}. \] By \cite{corlette}, every finite-dimensional representation of $E^J_{X \by Y, (x,y)}$ corresponds to a semisimple representation of $\pi_1(X\by Y,(x,y))= \pi_1(X,x) \by \pi_1(Y,y)$, so factors through $E^J_{X,x}\hat{\ten} E^J_{Y,y}$. Since $(E^J_{X \by Y, (x,y)})_{\RFD} \subset \prod_i \End(V_i)$ where $V_i$ ranges over finite-dimensional irreducible representations, this implies that \[ f_{\RFD}\co (E^J_{X\by Y, (x,y)})_{\RFD} \to (E^J_{X,x})_{\RFD}\hat{\ten}( E^J_{Y,y})_{\RFD} \] is injective. However, the basepoint $y$ gives us a map $X \to X \by Y$, and hence $E^J_{X,x} \to E^J_{X \by Y, (x,y)} $, ensuring that $(E^J_{X,x})_{\RFD} $ lies in the image of $f_{\RFD}$; a similar argument applies to $Y$. Thus $f_{\RFD} $ is surjective, and hence an isomorphism. \end{proof} \begin{remark} Note that we have only imposed the hypothesis that the manifolds be K\"ahler in order to use the functoriality properties of Lemma \ref{Pfunctorial}, since Lemma \ref{Pfunctorial2} is too weak to apply to the maps between $X\by Y$ and $X$. \end{remark} \begin{lemma}\label{ablemma} Given a compact connected K\"ahler manifold $X$, the commutative quotient $(E^J_{X,x})^{\ab} $ is given by \begin{align*} (E^J_{X,x})^{\ab}\ten \Cx &= C(\H^1(X,\Cx^*),\Cx)\\ (E^J_{X,x})^{\ab} &= \{f \in C(\H^1(X, \Cx^*),\Cx)\,:\, f(\bar{\rho})= \overline{f(\rho)}\}.
\end{align*} \end{lemma} \begin{proof} Since $(E^J_{X,x})^{\ab} $ is a commutative $C^*$-algebra, the Gelfand--Naimark theorem gives $(E^J_{X,x})^{\ab}\ten \Cx\cong C(\Hom(E^J_{X,x}, \Cx),\Cx) $, with $(E^J_{X,x})^{\ab}$ given by $\Gal(\Cx/\R)$-invariants. By \S \ref{recovertopology}, we have a homeomorphism $\Hom(E^J_{X,x}, \Cx) \cong \H^1(X,\Cx^*)$, which completes the proof. \end{proof} \begin{corollary}\label{abcor} For $X$ a complex torus with identity $e$ and a fixed Riemannian metric, we have \begin{align*} (E^J_{X,e})_{\RFD}\ten \Cx &= C(\H^1(X,\Cx^*),\Cx)\\ (E^J_{X,e})_{\RFD} &= \{f \in C(\H^1(X, \Cx^*),\Cx)\,:\, f(\bar{\rho})= \overline{f(\rho)}\}. \end{align*} \end{corollary} \begin{proof} Multiplication on $X$ gives a pointed morphism $X \by X \to X$ and hence by functoriality of $P$ in $X$ and Proposition \ref{productprop}, we have a morphism \[ ( E^J_{X,e})_{\RFD}\hat{\ten}(E^J_{X,e})_{\RFD} \to (E^J_{X,e})_{\RFD} \] of real pro-$C^*$-algebras, and we may apply Lemma \ref{ablemma}. \end{proof} \begin{remark} If it were the case that all irreducible representations of $\pi_1(X,x)$ were harmonic and similarly for $\pi_1(Y,y)$, then the proof of Proposition \ref{productprop} would adapt to show that $E^J_{X\by Y, (x,y)} \cong E^J_{X,x}\hat{\ten} E^J_{Y,y}$. As in the proof of Corollary \ref{abcor}, that would then imply commutativity of $E^J_{X,e}$ for complex tori $(X,e)$, giving $E^J_{X,e}\ten \Cx = C(\Hom(\pi_1(X,e), \Cx^*),\Cx)$. \end{remark} \begin{lemma}\label{EJabgp} Given a compact connected K\"ahler manifold $X$, the grouplike elements $G((E^J_{X,x})^{\ab}) $ (see Lemma \ref{recovergroup}) of the commutative quotient $(E^J_{X,x})^{\ab} $ are given by \begin{align*} G((E^J_{X,x})^{\ab}\ten \Cx) &\cong \H_1(X,\Z\oplus\Cx) \\ G((E^J_{X,x})^{\ab}) &\cong \H_1(X,\Z\oplus\R), \end{align*} with the map $\pi_1(X,x)^{\ab} \to G((E^J_{X,x})^{\ab})$ given by the diagonal map $\Z \to \Z \oplus \R$.
\end{lemma} \begin{proof} The coalgebra structure on $(E^J_{X,x})^{\ab}$ corresponds under Lemma \ref{ablemma} to the group structure on $\H^1(X, \Cx^*) $. Thus $G((E^J_{X,x})^{\ab}) $ consists of continuous functions $f\co \H^1(X, \Cx^*)\to \Cx$ with $f(1)=1$ and $f(ab)=f(a)f(b)$. We have an isomorphism $\Cx^* \cong S^1 \by \R$, given by $re^{i\phi} \mapsto (e^{i\phi}, \log r)$. Thus $\H^1(X,\Cx^*)\cong \H^1(X,S^1)\by\H^1(X,\R)$. By Pontrjagin duality, a continuous group homomorphism $\H^1(X,S^1) \to \Cx^*$ is just an element of $\H_1(X,\Z)$, and a continuous group homomorphism $\H^1(X,\R) \to \Cx^*$ is an element of $\H_1(X,\Cx)$. \end{proof} \subsection{The Dolbeault projection}\label{Dolsn} Now let $X$ be a compact connected K\"ahler manifold with basepoint $x \in X$. \begin{proposition}\label{uniquemetric2} For all complex $C^*$-algebras $B$, the Dolbeault projection \[ \pi_{\Dol}\co \oR^J_{X,x}(B) \to \oR^{\Dol}_{X,x}(B) \] has the property that if $p_1, p_2 \in \oR^J_{X,x}(B)$ and if $\ad_b\pi_{\Dol}(p_1)= \pi_{\Dol}(p_2)$ for some strictly positive self-adjoint element $b \in B$, then $p_1=p_2$. Thus \[ \pi_{\Dol}\co \oR^J_{X,x}(B)/U(B) \to \oR^{\Dol}_{X,x}(B)/B^{\by} \] is injective. \end{proposition} \begin{proof} The proof of Proposition \ref{uniquemetric} adapts, replacing $D$ with $D''$. \end{proof} \begin{corollary}\label{Zpluri2} For an element $p \in \oR^J_{X,x}(B^{\by})$, the centraliser $\z(\pi_{\Dol}(p),B^{\by})$ of $\pi_{\Dol}(p)$ under the adjoint action of $B$ is given by \[ \z(\pi_{\Dol}(p),B^{\by})= \exp(\{ b \in S(B)\,:\, e^{ibt} \in \z(p, U(B)) \forall t \in \R\})\rtimes\z(p, U(B)); \] beware that this is the semidirect product of a set with a group. \end{corollary} \begin{proof} The proof of Corollary \ref{Zpluri} carries over.
\end{proof} \begin{proposition}\label{banachreps2} For the real pro-$C^*$-algebra $E^J_{X,x}$ of Proposition \ref{pluriprorep}, there is a canonical map $\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},B) \to \oR^{\Dol}_{X,x}(B)$, functorial in complex Banach algebras $B$. This induces an injection \[ \Hom(E^J_{X,x},B)_{cb} \into \oR^{\Dol}_{X,x}(B) \] whenever $B$ is a $C^*$-algebra. \end{proposition} \begin{proof} The proofs of Lemma \ref{Banachplurifunctor} and Proposition \ref{banachreps} carry over to this context, replacing Proposition \ref{uniquemetric} and Corollary \ref{Zpluri} with Proposition \ref{uniquemetric2} and Corollary \ref{Zpluri2}. \end{proof} \begin{theorem}\label{Hsttopthm} For any positive integer $n$, there is a homeomorphism $\pi_{\Dol,\st}$ between the space $\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx))$ with the topology of pointwise convergence, and the subspace of $\oR^{\Dol}_{X,x}(\Mat_n(\Cx)) $ consisting of polystable Higgs bundles $E$ with $\ch_1(E)\cdot [\omega]^{\dim X -1}=0$ and $\ch_2(E)\cdot [\omega]^{\dim X -2}=0 $. \end{theorem} \begin{proof} The isomorphism of points is given by \cite[Theorem 1]{Simpson}. Replacing Proposition \ref{banachreps} with Proposition \ref{banachreps2}, the argument from the proof of Theorem \ref{Hsstopthm} shows that the map $\pi_{\Dol}\co \Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx))\to \oR^{\Dol}_{X,x}(\Mat_n(\Cx)) $ is continuous, so we just need to show that it is open. Now, \cite[Proposition 7.9]{Sim2} implies that the isomorphism $\pi_{\dR,\ss} \circ \pi_{\Dol,\st}^{-1}$ is continuous. Since $\pi_{\dR,\ss}$ is a homeomorphism by Theorem \ref{Hsstopthm}, $\pi_{\Dol,\st}$ must also be a homeomorphism. \end{proof} \begin{definition}\label{FDDolcat} Let $\FD\oR^{\Dol}_{X,x}$ be the category of pairs $(V, p)$ for $V\in \FD\Vect$ and $p \in \oR^{\Dol}_{X,x}(\End(V))$, with morphisms defined by adapting the formulae of Definition \ref{FDDRcat}. 
Let $\FD\oR^{\Dol,\st}_{X,x} \subset \FD\oR^{\Dol}_{X,x}$ be the full subcategory in which objects correspond to those of Theorem \ref{Hsttopthm}. Write $\eta_x^{\Dol}\co \FD\oR^{\Dol}_{X,x} \to \FD\Vect$, $\eta_x^{\Dol,\st}\co \FD\oR^{\Dol,\st}_{X,x} \to \FD\Vect$ for the fibre functors $(V,p) \mapsto V$. \end{definition} \begin{proposition}\label{PNstprop} The ring $ (E^J_{X,x})_{\PN}\ten \Cx$ is isomorphic to the ring of continuous additive endomorphisms of $\eta_x^{\Dol, \st}$. \end{proposition} \begin{proof} The proof of Proposition \ref{PNssprop} carries over, replacing Theorem \ref{Hsstopthm} with Theorem \ref{Hsttopthm}. \end{proof} \begin{definition}\label{stRdef} Given a $k$-normal complex $C^*$-algebra $B$, define $ \oR^{\Dol,\st}_{X,x}(B)\subset \oR^{\Dol}_{X,x}(B)$ to be the subspace consisting of those $p$ for which $\psi(p)\in \FD\oR^{\Dol,\st}_{X,x}$ for all $\psi \co B \to \Mat_k(\Cx)$. \end{definition} \begin{corollary}\label{PNrepst} For any $k$-normal complex $C^*$-algebra $B$, $ \oR^{\Dol,\st}_{X,x}(B)$ is isomorphic to the set of continuous algebra homomorphisms $E^J_{X,x} \to B$. \end{corollary} \begin{proof} The proof of Corollary \ref{PNrepss} carries over, replacing Proposition \ref{PNssprop} with Proposition \ref{PNstprop}. \end{proof} \subsection{Circle actions and $C^*$-dynamical systems} \begin{definition} Define a circle action on a (real or complex) pro-$C^*$-algebra $A$ to be a continuous group homomorphism from $S^1$ to $\Aut_{\pro(C^*\Alg)}(A)$. Here, the topology on $\Aut_{\pro(C^*\Alg)}(A)$ is defined pointwise, so a net $f_i$ converges to $f$ if and only if $f_i(a) \to f(a)$ for all $a \in A$. 
\end{definition} The following is immediate: \begin{lemma}\label{circle1} Giving a circle action on a pro-$C^*$-algebra $A$ is equivalent to giving a pro-$C^*$-algebra homomorphism $f\co A \to C(S^1,A)$ satisfying \begin{enumerate} \item $1^*\circ f = \id_A\co A \to C(\{1\},A)=A$; \item the diagram \[ \begin{CD} A @>f>> C(S^1,A)\\ @VfVV @VV{C(S^1,f)}V\\ C(S^1,A) @>{m^*}>> C(S^1 \by S^1, A) \end{CD} \] commutes, where $m \co S^1 \by S^1 \to S^1$ is the multiplication. \end{enumerate} \end{lemma} \begin{lemma}\label{functcircle} If a functor $F\co C^*\Alg_k \to \Set$ is represented by a pro-$C^*$-algebra $A$, then to give a circle action on $A$ is equivalent to giving maps \[ {\alpha}_B\co F(B) \to F(C(S^1,B)), \] functorial in $B$, such that \begin{enumerate} \item $F(1^*)\circ {\alpha}_B = \id_{F(B)}\co F(B) \to F(B)$; \item the diagram \[ \begin{CD} F(B) @>{\alpha}_B>> F(C(S^1,B))\\ @V{\alpha}_BVV @VV{{\alpha}_{C(S^1,B)}}V\\ F(C(S^1,B)) @>{F(m^*)}>> F(C(S^1 \by S^1, B)) \end{CD} \] commutes, where $m \co S^1 \by S^1 \to S^1$ is the multiplication. \end{enumerate} \end{lemma} \begin{proof} If $A$ has a circle action ${\alpha}$, then a homomorphism $h\co A \to B$ gives rise to $C(S^1, h)\co C(S^1,A) \to C(S^1, B)$, and we define ${\alpha}_B(h):= C(S^1,h) \circ {\alpha}$. This clearly satisfies the required properties. Conversely, given maps ${\alpha}_B$ as above, write $A= \Lim_i A_i$ as an inverse limit of $C^*$-algebras, and let $h_i\co A \to A_i$ be the structure map. Then ${\alpha}_{A_i}(h_i) \in F(C(S^1, A_i))$ is a map $A \to C(S^1, A_i)$. Since the ${\alpha}_{A_i}(h_i)$ are compatible, we may take the inverse limit, giving a map \[ {\alpha}\co A \to C(S^1, A). \] To see that this defines a circle action, just observe that the conditions above ensure that $h_i \circ 1^*\circ {\alpha} = h_i$ and \[ C(S^1\by S^1, h_i)\circ C(S^1,{\alpha}) \circ {\alpha}= C(S^1\by S^1, h_i) \circ m^*\circ{\alpha} \] for all $i$.
Taking the inverse limit over $i$ shows that this satisfies the conditions of Lemma \ref{circle1}. Finally, note that these two constructions are clearly inverse to each other. \end{proof} \begin{proposition}\label{circleX} For every compact K\"ahler manifold $X$, there is a canonical continuous circle action on $E^J_{X,x}$. \end{proposition} \begin{proof} Given $(U(\sP), D, f) \in \oR^J_{X,x}(B)$, define ${\alpha}(U(\sP),D, f) \in \oR^J_{X,x}(C(S^1, B))$ as follows. Decompose $D= d^++\vartheta$ into anti-self-adjoint and self-adjoint parts. Set ${\alpha}(U(\sP)):= C(S^1, U(\sP))= U(\sP)\by_{\sA^0_X(U(B))}\sA^0_X(C(S^1, U(B)))$, and then define ${\alpha}(D):= d^++ t \dmd \vartheta$, where $t \in C(S^1, \Cx)$ is the canonical embedding and $\dmd$ is from Definition \ref{dmd}. Thus we have constructed ${\alpha}(U(\sP), D, f):= (C(S^1, U(\sP)), d^+ + t\dmd \vartheta, C(S^1, f))$, and it is easy to check that this satisfies the conditions of Lemma \ref{functcircle}. \end{proof} \begin{remark}\label{pureMHSrk} By considering finite-dimensional quotients of $E^J_{X,x}$, the circle action induces a continuous map \[ S^1 \by E^J_{X,x} \to O(\pi_1(X,x)^{\red}_{\R})^{\vee}, \] for $O(\pi_1(X,x)^{\red}_{\R})^{\vee}$ as in Remark \ref{cfredpi}. This descends to a discontinuous action of $S^1$ on $ O(\pi_1(X,x)^{\red}_{\R})^{\vee}$, as in \cite{Simpson} (made explicit in the real case as \cite[Lemma \ref{mhs-discreteact}]{mhs}). Note, however, that the circle action descends to continuous actions on $(E^J_{X,x})_{\RFD}, (E^J_{X,x})_{\PN}$ (which are subalgebras of $ O(\pi_1(X,x)^{\red}_{\R})^{\vee}$, though not closed). Continuity of the circle action ensures that the map \[ S^1 \by \pi_1(X,x) \to E^J_{X,x} \] is continuous, and hence that the induced map $ S^1 \by \pi_1(X,x) \to \pi_1(X,x)^{\red}_{\R}(\R) \subset O(\pi_1(X,x)^{\red}_{\R})^{\vee}$ is continuous.
Thus a continuous circle action on $E^J_{X,x}$ gives rise to a pure Hodge structure on $ \pi_1(X,x)^{\red}$ in the sense of \cite[\S 5]{Simpson}, but without needing to refer to $\pi_1(X,x)$ itself. This suggests that the most natural definition of a pure non-abelian Hodge structure is a continuous circle action on a pro-$C^*$-bialgebra. \end{remark} \begin{example}\label{circleEJabgp} Lemma \ref{ablemma} gives an isomorphism \[ (E^J_{X,x})^{\ab} = \{f \in C(\H^1(X, \Cx^*),\Cx)\,:\, f(\bar{\rho})= \overline{f(\rho)}\}, \] and Lemma \ref{EJabgp} then shows that the grouplike elements are $G((E^J_{X,x})^{\ab}) \cong \H_1(X,\Z\oplus\R)$. To describe the circle action on $(E^J_{X,x})^{\ab} $, it thus suffices to describe it on the space $\H^1(X, \Cx^*) $ of one-dimensional complex representations. Taking the decomposition $D= d^++\vartheta$ of a flat connection $D$ into anti-hermitian and hermitian parts, note that we must have $(d^+)^2= \vartheta^2=0$, because commutativity of $\Cx^*$ ensures that commutators vanish, everything else vanishing by hypothesis. This decomposition therefore corresponds to the isomorphism $\H^1(X, \Cx^*)\cong \H^1(X,S^1) \by \H^1(X,\R)$. Since the action is given by $\vartheta \mapsto t \dmd \vartheta$ for $t \in S^1$, it follows that the $S^1$-action is just the $\dmd$-action on $\H^1(X,\R)$. On $G((E^J_{X,x})^{\ab}) \cong \H_1(X,\Z)\oplus \H_1(X,\R)$, this means that the circle action fixes $\H_1(X,\Z)$ and acts with the $\dmd$-action on $\H_1(X,\R)= \H^1(X,\R)^{\vee}$. \end{example} \begin{definition}\label{DSdef} Recall from \cite[Definition 2.6]{williamsXprod} that a $C^*$-dynamical system is a triple $(A, G, \alpha)$, for $G$ a locally compact topological group, $A$ a $C^*$-algebra, and $\alpha$ a continuous action of $G$ on $A$. \end{definition} \begin{lemma} The circle action ${\alpha}$ of Proposition \ref{circleX} gives rise to a pro-$C^*$-dynamical system $(E^J_{X,x}, S^1, {\alpha})$, i.e.
an inverse system of $C^*$-dynamical systems. \end{lemma} \begin{proof} Since $E^J_{X,x}$ is a pro-$C^*$-algebra, we may write it as an inverse system $E^J_{X,x}= \Lim_i E_i$, for $C^*$-algebras $E_i$. The circle action then sends the structure map $h_i \co E^J_{X,x} \to E_i$ to the map $C(S^1, h_i) \circ {\alpha} \co E^J_{X,x} \to C(S^1,E_i)$, and evaluation at $1 \in S^1$ recovers $h_i$. We may therefore set $E_{{\alpha}(i)}$ to be the closure of the image of $E^J_{X,x} \to C(S^1,E_i)$, and observe that $E_{{\alpha}(i)}$ is $S^1$-equivariant, with $E^J_{X,x}= \Lim_i E_{{\alpha}(i)}$. Thus $(E^J_{X,x}, S^1, {\alpha})= \Lim_i (E_{{\alpha}(i)},S^1, {\alpha})$ is a pro-$C^*$-dynamical system. \end{proof} The following is taken from \cite[Lemma 2.27]{williamsXprod}: \begin{definition} Given a $C^*$-dynamical system $(A, G, \alpha)$ and $f \in C_c(G,A)$, define \[ \|f\|:= \sup\{ \|\pi \rtimes U(f)\|\co (\pi, U) \text{ a covariant representation of } (A, G, \alpha)\}. \] Then $\|-\|$ is called the universal norm, and is dominated by $\|-\|_1$. The completion of $C_c(G,A)$ with respect to $\|-\|$ is the crossed product of $A$ by $G$, denoted $A \rtimes_{\alpha}G$. \end{definition} \begin{definition} Define a polarised real Hilbert variation of Hodge structures of weight $n$ on $X$ to be a real local system $\vv$, with a pluriharmonic metric on $\sA^0_X(\vv)$, equipped with a Hilbert space decomposition \[ \sA^0_X(\vv)\ten \Cx = \hat{\bigoplus}_{p+q=n} \sV^{pq}, \] (where $\hat{\bigoplus}$ denotes Hilbert space direct sum), with $\overline{\sV^{pq}}= \sV^{qp}$, and satisfying the conditions \[ \pd: \sV^{pq} \to \sV^{pq}\ten_{\sA^0_X(\Cx)}\sA^{10}_X, \quad \bar{\theta}: \sV^{pq} \to \sV^{p+1,q-1}\ten_{\sA^0_X(\Cx)}\sA^{01}_X, \] for the decomposition $D= \pd +\bar{\pd}+\theta + \bar{\theta}$ of Definition \ref{Ddecomp}.
\end{definition} \begin{proposition}\label{VHSsemidirect} Real Hilbert space representations of the non-unital pro-$C^*$-algebra $E^J_{X,x}\rtimes_{\alpha}S^1$ correspond to framed weight $0$ polarised real Hilbert variations of Hodge structure. \end{proposition} \begin{proof} By \cite[Proposition 2.29]{williamsXprod}, a $*$-representation $E^J_{X,x}\rtimes_{\alpha}S^1 \to L(H) $ for a Hilbert space $H$ consists of: \begin{enumerate} \item a $*$-representation $\rho \co E^J_{X,x} \to L(H)$, and \item a continuous representation $u\co S^1 \to U(H)$ \end{enumerate} such that \[ \rho({\alpha}(t,a)) = u(t)\rho(a)u(t)^{-1} \] for all $a \in E^J_{X,x}, t \in S^1$. In other words, in $\oR^J_{X,x}(C(S^1, L(H)))$, we have $\alpha(\rho)= u\rho u^{-1}$, so $\alpha(\rho)$ and $\rho$ are isomorphic in the groupoid $\cR^J_X(C(S^1, L(H)))$. Now, by definition of $E^J_{X,x} $, the representation $\rho$ corresponds to a real local system $\vv$, with a pluriharmonic metric on $\sA^0_X(\vv)$ and a Hilbert space isomorphism $f\co \vv_x \to H$. The representation $\alpha(\rho)$ corresponds to the connection $\alpha(D):= d^++ t \dmd \vartheta$ on $\sA^0_X(C(S^1,\vv))$ for the standard co-ordinate $t\co S^1 \to \Cx$, together with framing $f$. The condition that $\alpha(\rho)$ and $\rho$ are isomorphic then gives us a unitary gauge transformation $g$ between them. In other words, we have a continuous representation $g \co S^1 \to \Gamma(X, U(\sA^0_X(\vv)))$ with $\alpha(D) \circ g= g \circ D$. We must also have $g_x=u$. Thus $g$ gives us a Hilbert space decomposition \[ \sA^0_X(\vv)\ten \Cx = \hat{\bigoplus}_{p+q=0} \sV^{pq}, \] with $\overline{\sV^{pq}}= \sV^{qp}$, and $g(t)$ acting on $\sV^{pq}$ as multiplication by $t^{p-q}$. The condition $\alpha(D) \circ g= g \circ D$ then forces the conditions \[ \pd: \sV^{pq} \to \sV^{pq}\ten_{\sA^0_X(\Cx)}\sA^{10}_X, \quad \bar{\theta}: \sV^{pq} \to \sV^{p+1,q-1}\ten_{\sA^0_X(\Cx)}\sA^{01}_X, \] as required.
\end{proof} \begin{remark} Given any $E^J_{X,x}$-representation $V$, \cite[Example 2.14]{williamsXprod} gives an $E^J_{X,x} \rtimes S^1$-representation $\Ind_e^{S^1}V$. Its underlying Hilbert space is just the space $L^2(S^1, V)$ of $L^2$-measurable $V$-valued functions on the circle with respect to Haar measure. For the pluriharmonic local system $\vv$ associated to $V$, this therefore gives a weight $0$ variation $\Ind_e^{S^1}\vv$ of Hodge structures on $X$, with $\sA^0_X(\Ind_e^{S^1}\vv)= \sA^0_X(L^2(S^1, \vv))$. \end{remark} \section{Hodge decompositions on cohomology}\label{cohosn} Fix a compact K\"ahler manifold $X$. \begin{definition} Given a pluriharmonic local system $\vv$ in real Hilbert spaces on $X$ (as in Example \ref{plurilocsys}), the inner product on $\vv$ combines with the K\"ahler metric on $X$ to give inner products $\<-,-\>$ on the spaces $A^n(X,\vv)$ for all $n$. Given an operator $F$ on $A^*(X,\vv)$, we denote the adjoint operator by $F^*$. Let $\Delta= DD^*+D^*D$. \end{definition} \subsection{Sobolev spaces} Note that in general, the Laplacian $\Delta$ is not a bounded operator in the $L^2$ norm. We therefore introduce a system of Sobolev norms: \begin{definition} Define $L^n_{(2),s}(X, \vv)$ to be the completion of $A^n(X,\vv)$ with respect to the inner product $\<v,w\>_s:= \<v, (I + \Delta)^sw\>$. \end{definition} Note that we then have bounded operators $D,D^c \co L^n_{(2),s}(X, \vv)\to L^{n+1}_{(2),s-1}(X, \vv)$, $D^*,D^{c*}\co L^n_{(2),s}(X, \vv)\to L^{n-1}_{(2),s-1}(X, \vv)$ and $\Delta\co L^n_{(2),s}(X, \vv)\to L^n_{(2),s-2}(X, \vv)$. \begin{proposition}\label{sobolevtower} The maps $(I+\Delta)^k \co L^n_{(2),s}(X, \vv)\to L^n_{(2),s-2k}(X, \vv)$ are Hilbert space isomorphisms, and there are canonical inclusions $L^n_{(2),s}(X, \vv)\subset L^n_{(2),s-1}(X, \vv)$. \end{proposition} \begin{proof} The proofs of \cite[Proposition 2.3 and Lemma 2.4]{dodziuk} carry over to this generality.
\end{proof} \begin{definition} Define $\cH^n (X,\vv) \subset A^n(X,\vv)$ to consist of forms $\alpha$ with $\Delta\alpha=0$. Regard this as a pre-Hilbert space with the inner product $\<-,-\>$. \end{definition} The following implies that $\cH^n (X,\vv)$ is in fact a Hilbert space: \begin{lemma} The inclusions \[ \cH^n (X,\vv) \to \{\alpha \in L^n_{(2),0}\,:\, D\alpha =D^*\alpha=0\} \to \{\alpha \in L^n_{(2),0}\,:\, \Delta\alpha=0\} \] are Hilbert space isomorphisms. \end{lemma} \begin{proof} When $\vv$ is the local system associated to the $\pi_1(X)$-representation $\ell^2(\pi_1(X,x))$, this is \cite[Lemma 2.5]{dodziuk}, but the same proof carries over. \end{proof} \subsubsection{Decomposition into eigenspaces} \begin{definition} Define $T$ to be the composition of $L^n_{(2),0}(X, \vv)\xra{(I+\Delta)^{-1} } L^n_{(2),2}(X, \vv)\into L^n_{(2),0}(X, \vv)$. This is bounded and self-adjoint, with spectrum $\sigma(T) \subset (0,1]$. Thus the spectral decomposition gives $T= \int_{(0,1]} \lambda d\pi_{\lambda}$ for some projection-valued measure $\pi$ on $(0,1]$. \end{definition} For $S\subset [0, \infty)$ measurable, write \[ \nu(S):= \pi_{\{(1+\rho)^{-1}\,:\, \rho \in S\}}. \] Thus \[ T= \int_{\rho\in [0, \infty)} (1+\rho)^{-1} d\nu_{\rho}, \] and for $v \in \cL^n_{(2),s+2}(X,\vv)$ we have \[ \Delta v = \int_{\rho\in [0, \infty)} \rho d\nu_{\rho}v \in \cL^n_{(2),s}(X,\vv). \] If we set $E^n(S):= \nu(S)\cL_{(2),0}^n(X,\vv)$, observe that $E^n$ defines a measurable family of Hilbert spaces on $[0, \infty)$, and that we have direct integral decompositions \[ \cL_{(2),s}^n(X,\vv)= \int^{\oplus}_{\rho \in [0, \infty)} (1+\rho)^{-s/2} E^n_{\rho}. \] \subsubsection{Harmonic decomposition of eigenspaces}\label{eigensn} Since the operators $D,D^c, D^*, D^{c*}$ commute with $\Delta$, they descend to each graded Hilbert space $E^n(S)$, provided $S$ is bounded above. If $S$ also has a strictly positive lower bound, then $\Delta$ is invertible on $E^n(S)$, so \[ E^n(S)= \Delta E^n(S).
\] As $\Delta = DD^*+D^*D=D^cD^{c*}+D^{c*}D^c$, with $[D,D^c]=[D^*,D^c]=[D^{c*},D]=0$, this implies that \begin{align*} E^n(S)&= DE^{n-1}(S)\oplus D^*E^{n+1}(S)= D^cE^{n-1}(S)\oplus D^{c*}E^{n+1}(S)\\ &= DD^cE^{n-2}(S)\oplus D^*D^cE^n(S)\oplus DD^{c*}E^n(S)\oplus D^*D^{c*}E^{n+2}(S). \end{align*} Furthermore, $D\co D^*E^n(S)\to DE^{n-1}(S)$ and $D^*\co DE^n(S)\to D^*E^{n+1}(S)$ are isomorphisms, with similar statements for $D^c$. If $S$ is just bounded above and does not contain $0$, then the statements above still hold if we replace subspaces with their closures: \begin{proposition}\label{Hodgedecomp0} There are Hilbert space decompositions \begin{eqnarray*} \cL^n_{(2),s}(X, \vv) &= &\cH^n(X,\vv)\oplus\overline{\Delta \cL^n_{(2),s+2}(X, \vv) } \\ &= &\cH^n(X,\vv)\oplus\overline{D\cL^{n-1}_{(2),s+1}(X, \vv)}\oplus \overline{D^*\cL^{n+1}_{(2),s+1}(X, \vv)}\\ &=&\cH^n(X,\vv)\oplus \overline{D^c\cL^{n-1}_{(2),s+1}(X, \vv)}\oplus \overline{D^{c*}\cL^{n+1}_{(2),s+1}(X, \vv)}\\ &=&\cH^n(X,\vv)\oplus \overline{DD^c\cL^{n-2}_{(2),s+2}(X, \vv)} \oplus\overline{D^*D^c\cL^{n}_{(2),s+2}(X, \vv)}\\ &&\phantom{\cH^n(X,\vv)} \oplus \overline{DD^{c*}\cL^{n}_{(2),s+2}(X, \vv)} \oplus\overline{D^*D^{c*}\cL^{n+2}_{(2),s+2}(X, \vv)}. \end{eqnarray*} for all $s$. \end{proposition} \begin{proof} This is essentially the Hodge Theorem, and we can construct the decomposition by a slight modification of \cite[pp94--96]{GriffithsHarris}. We can define an approximate Green's function by \[ G_{\eps}:= \int_{(0,1-\eps]} \frac{\lambda}{1-\lambda} d\pi_{\lambda}. \] Now, note that $(I+\Delta)G_{\eps}= \int_{(0,1-\eps]} \frac{1}{1-\lambda} d\pi_{\lambda}$, which is bounded, so $G_{\eps}$ is the composition of the inclusion $ L^n_{(2),s+2}(X, \vv)\into L^n_{(2),s}(X, \vv)$ with a map \[ G_{\eps}\co L^n_{(2),s}(X, \vv)\to L^n_{(2),s+2}(X, \vv). \] Also note that $G_{\eps}$ commutes with $\Delta$, and that $\Delta G_{\eps} = \pi(0,1-\eps]= I- \pi(1-\eps,1]$.
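To spell out the spectral calculus behind the last identity: on the spectral piece at $\lambda \in (0,1]$, the operator $T=(I+\Delta)^{-1}$ acts as multiplication by $\lambda$, so $\Delta$ acts as $\lambda^{-1}-1=(1-\lambda)/\lambda$, and hence
\[ \Delta G_{\eps} = \int_{(0,1-\eps]} \frac{1-\lambda}{\lambda}\cdot \frac{\lambda}{1-\lambda}\, d\pi_{\lambda} = \int_{(0,1-\eps]} d\pi_{\lambda} = \pi(0,1-\eps]. \]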
As $\eps \to 0$, this means that $\pi(1) +\Delta G_{\eps}$ converges weakly to $I$. Since $\pi(1)$ is the projection onto $\cH^n(X,\vv)$, this gives the decomposition required (noting that norm closure and weak closure of a subspace are the same, by the Hahn--Banach Theorem). \end{proof} Now, for $v \in D^*E^n(S)$, $\<Dv, Dv\>= \<\Delta v, v\>$, so $\|Dv\|^2/\|v\|^2$ lies in $S$. \begin{definition}\label{Deltahalf} Define $\Delta^{\half}\co \cL_{(2),s+1}^n \to \cL_{(2),s}^n$ by \[ \Delta^{\half}:= \int_{\rho\in [0, \infty)} \rho^{\half} d\nu_{\rho}. \] \end{definition} This gives \begin{align*} D\cL_{(2),s+1}^{n-1}(X,\vv)= \Delta^{\half}\overline{D\cL_{(2),s+2}^{n-1}(X,\vv)}, \\ \ker D \cap \cL_{(2),s}^n(X,\vv)= \overline{D\cL_{(2),s+1}^{n-1}(X,\vv)} \oplus \cH^n(X, \vv). \end{align*} Thus \[ \H^n\cL_{(2),s}^{\bt}(X,\vv)\cong \cH^n(X, \vv) \oplus (\overline{D\cL_{(2),s-1}^{n-1}(X,\vv)}/\Delta^{\half}\overline{D\cL_{(2),s-2}^{n-1}(X,\vv)}). \] There are similar statements for the operators $D^c,D^*, D^{c*}$. \begin{definition}\label{Deltahalf2} Define $\Delta^{-\half}D\co \overline{D^*\cL_{(2),s+1}^n(X,\vv)} \to \cL_{(2),s}^n(X,\vv)$ by \[ \Delta^{-\half}D:= \Big(\int_{\rho\in (0, \infty)} \rho^{-\half} d\nu_{\rho}\Big) \circ D, \] and define $\Delta^{-\half}D^c\co \overline{D^{c*}\cL_{(2),s+1}^n(X,\vv)} \to \cL_{(2),s}^n(X,\vv)$ similarly. \end{definition} \begin{proposition}\label{DeltahalfProp} The operator $\Delta^{-\half}D$ gives a Hilbert space isomorphism from the closed subspace $\overline{D^*\cL_{(2),s+1}^n(X,\vv)} $ of $\cL_{(2),s}^{n-1}(X,\vv)$ to the closed subspace $\overline{D\cL_{(2),s+1}^{n-1}(X,\vv)} $ of $\cL_{(2),s}^{n}(X,\vv) $. Likewise, $\Delta^{-\half}D^c\co \overline{D^{c*}\cL_{(2),s+1}^n(X,\vv)} \to \overline{D^c\cL_{(2),s+1}^{n-1}(X,\vv)}$ is a Hilbert space isomorphism. \end{proposition} \begin{proof} We prove this for the first case, the second being entirely similar.
Given $a, b \in D^*A^n(X,\vv)$, we have \begin{align*} \<\Delta^{-\half}Da, \Delta^{-\half}Db\>_s &= \< (I+\Delta)^s \Delta^{-\half}Da, \Delta^{-\half}Db\>\\ &= \< (I+\Delta)^s \Delta^{-1} D^*Da, b\>\\ &= \< (I+\Delta)^s a, b\>= \<a,b\>_s, \end{align*} since $D^*a=0$ gives $D^*Da = \Delta a$. Taking Hilbert space completions with respect to $\<-,-\>_s$ then gives the required result. \end{proof} \subsection{The Hodge decomposition and cohomology} \begin{proposition}\label{sobolevlemma} The nested intersection $\bigcap_s L^p_{(2),s}(X,\vv) $ is the space $A^p(X,\vv)$ of $\C^{\infty}$ $\vv$-valued $p$-forms. \end{proposition} \begin{proof} When $\vv$ is finite-dimensional, this is the Global Sobolev Lemma, but the same proof applies for Hilbert space coefficients. \end{proof} \begin{theorem}\label{Hodgedecomp} There are pre-Hilbert space decompositions \begin{align*} A^n(X, \vv)&= \cH^n(X,\vv)&\oplus&\overline{\Delta A^n(X, \vv) }\\ &= \cH^n(X,\vv)&\oplus&\overline{DA^{n-1}(X, \vv)}\oplus \overline{D^*A^{n+1}(X, \vv)}\\ &= \cH^n(X,\vv)&\oplus&\overline{D^cA^{n-1}(X, \vv)}\oplus \overline{D^{c*}A^{n+1}(X, \vv)}\\ &= \cH^n(X,\vv)&\oplus&\overline{DD^cA^{n-2}(X, \vv)} \oplus\overline{D^*D^cA^{n}(X, \vv)}\oplus \overline{DD^{c*}A^{n}(X, \vv)} \oplus\overline{D^*D^{c*}A^{n+2}(X, \vv)}. \end{align*} for all $n$. \end{theorem} \begin{proof} We just take the inverse limit $\Lim_s$ of the decomposition in Proposition \ref{Hodgedecomp0}, and then make the substitution of Proposition \ref{sobolevlemma}. \end{proof} \subsubsection{Reduced cohomology}\label{redcohosn} \begin{definition} Given a cochain complex $C^{\bt}$ in topological vector spaces, write \[ \bar{\H}^n(C^{\bt}):= \H^n(C^{\bt})/\overline{\{0\}}, \] where $\overline{\{0\}}$ is the closure of $0$. Note that we could equivalently define $\bar{\H}^*$ as the quotient of the space of cocycles by the \emph{closure} of the space of coboundaries. 
Given a local system $\vv$ in topological vector spaces on $X$, define \[ \bar{\H}^n(X, \vv):= \bar{\H}^n(A^{\bt}(X, \vv)). \] \end{definition} \begin{corollary}\label{harmcoho} The maps \[ \cH^n(X,\vv) \to \bar{\H}^n(X,\vv) \] are all topological isomorphisms. \end{corollary} \begin{corollary}[Principle of two types]\label{p2t} As subspaces of $A^n(X, \vv)$, \[ \ker D \cap \ker D^c \cap (\overline{ DA^{n-1}(X,\vv) } + \overline{D^cA^{n-1}(X,\vv) }) = \overline{DD^cA^{n-2}(X,\vv)}. \] \end{corollary} \begin{lemma}[Formality]\label{formallemma} The morphisms \[ (\bar{\H}_{D^c}^*(X,\vv),0) \la (\z_{D^c}^*(X,\vv), D) \to (A^*(X,\vv), D) \] induce isomorphisms on reduced cohomology. \end{lemma} \begin{proof} The proof of \cite[Lemma 2.2]{Simpson} carries over to this generality. \end{proof} \begin{remark} Usually, formality statements such as Lemma \ref{formallemma} lead to isomorphisms on deformation functors (see \cite{GM} for the original case, and \cite[Proposition \ref{htpy-formalpins}]{htpy} for the case closest to our setting). However, there does not appear to be a natural deformation functor associated to topological DGLAs $L$ with obstruction space $\bar{\H}^2(L)$. Thus, in contrast to the pro-algebraic case, it is not clear whether there are natural completions of the homotopy groups which can be described in terms of the reduced cohomology ring. The description of the Archimedean monodromy in \cite[Theorem \ref{mhs-archmonthm}]{mhs} is even less likely to adapt, since it features the Green's operator $G$, which we have had to replace with a non-convergent sequence of operators. \end{remark} \subsubsection{Non-reduced cohomology}\label{nonredcohosn} Taking the inverse limit $\Lim_s$ of the decompositions of \S \ref{eigensn}, we obtain \begin{align*} DA^{n-1}(X,\vv) &= \Delta^{\half} \overline{DA^{n-1}(X,\vv)},\\ D^*A^{n+1}(X,\vv) &= \Delta^{\half} \overline{D^*A^{n+1}(X,\vv)}, \end{align*} with similar statements for $D^c, D^{c*}$.
Thus: \begin{proposition}\label{nonredcohoprop} \[ \H^nA^{\bt}(X,\vv) \cong \cH^n(X, \vv) \oplus(\overline{DA^{n-1}(X,\vv)} /\Delta^{\half}\overline{DA^{n-1}(X,\vv)} ). \] \end{proposition} Applying the operator $*$ then gives \[ \H^nA^{\bt}(X,\vv)\oplus_{\cH^n(X, \vv)} \H^{2d-n}A^{\bt}(X,\vv')\cong A^n(X,\vv)/\Delta^{\half}A^n(X,\vv). \] Moreover, we have topological isomorphisms \begin{align*} \Delta^{-\half}D\co \overline{D^*A^{n}(X,\vv)} &\to \overline{DA^{n-1}(X,\vv)},\\ \Delta^{-\half}D^c\co \overline{D^{c*}A^{n}(X,\vv)} &\to \overline{D^cA^{n-1}(X,\vv)}, \end{align*} for $\Delta^{-\half}D,\Delta^{-\half}D^c$ as in Definition \ref{Deltahalf2}. \subsection{The $W^*$-enveloping algebra} \subsubsection{ $E^J(X,x)'$ }\label{Evee} \begin{definition} Given a $C^*$-algebra $B$ and a positive linear functional $f$, define $B_f$ to be the Hilbert space completion of $B$ with respect to the bilinear form $\<a,b\>_f:= f(a^*b)$. We define $\pi_f$ to be the representation of $B$ on $B_f$ by left multiplication. Note that this is a cyclic representation, generated by $1 \in B_f$. \end{definition} \begin{lemma}\label{Edual} Given a $C^*$-algebra $B$, the topological dual is given by $B'= \LLim_f B_f'$, where $f$ ranges over the filtered inverse system of positive linear functionals on $B$. \end{lemma} \begin{proof} This amounts to showing that $B^{\vee\vee} = \Lim_f B_f$. Since the system is filtered (with $f+g \ge f,g$ and $B_g \to B_f$ for $g \ge f$), $\hat{B}:=\Lim_f B_f$ is the completion of $B$ with respect to the seminorms $\|b\|_f:= f(b^*b)^{\half}$. The space $B_f$ is the strong closure of $B$ in the cyclic representation $\pi_f$, which is just the image of $B^{\vee\vee}$, by the von Neumann bicommutant theorem. Since $B^{\vee\vee}$ is the completion of $B$ with respect to the system of weak seminorms for all representations, this implies that the map $\hat{B} \to B^{\vee\vee}$ is an equivalence.
\end{proof} \begin{lemma} For a $C^*$-algebra $B$, and a $B$-representation $V$ in Hilbert spaces, \[ \Hom_B(V, B') \cong V'. \] \end{lemma} \begin{proof} The space $\Hom_B(V, B') $ consists of continuous $B$-linear maps $V \to B'$, and hence of continuous bilinear maps $B\by V \to k$. These correspond to continuous linear maps $V \to k$, as required. \end{proof} Considering smooth morphisms from $X$ then gives: \begin{corollary}\label{univlocsys} For any $E:=E^J(X,x)$-representation $V$ in real Hilbert spaces, with corresponding local system $\vv$ on $X$, there is a canonical topological isomorphism \[ A^{\bt}(X, \vv) \cong \Hom_E(V', A^{\bt}(X, \bE')), \] where $\bE'$ is the direct system of pluriharmonic local systems corresponding to the ind-$E$-representation $E'$ given by left multiplication. \end{corollary} Of course, all the cohomological decomposition results of this section extend to direct limits, so apply to $\bE'$. Conversely, by Corollary \ref{univlocsys}, all such results for local systems $\vv$ can be inferred from the corresponding results with $\bE'$-coefficients. \begin{remark}\label{DGArk} The comultiplication $E^J(X,x)\to E^J(X,x)\hten E^J(X,x)$ of Lemma \ref{tensorstr} induces a multiplication \[ E^J(X,x)'\bar{\ten} E^J(X,x)' \to E^J(X,x)' \] on continuous duals, where $ (\Lim_i E_i)'\bar{\ten}(\Lim_jE_j)':= \LLim_{i,j} E_i' \bar{\ten} E_j' $ for $C^*$-algebras $E_i$ and the dual tensor product $\bar{\ten}$ of \cite[p. 210]{takesakiTOA}. In particular, $\bar{\ten}$ is a crossnorm, so we have a jointly continuous multiplication on $E^J(X,x)'$. Thus $A^{\bt}(X, \bE')$ is also equipped with a jointly continuous (graded) multiplication, so has the structure of a differential graded topological algebra.
\end{remark} \subsubsection{Failure of continuity} Since direct integrals of harmonic representations must be harmonic, Corollary \ref{harmcoho} and Proposition \ref{nonredcohoprop} provide us with information about the behaviour of cohomology in measurable families. In particular, they allow us to recover the space of measures on the topological spaces of cohomology groups fibred over the moduli spaces of local systems. Thus $E^J_{X,x}$ is a much finer invariant than the pro-algebraic completion of $\pi_1(X,x)$. It is natural to ask whether we can strengthen the Hodge decomposition to incorporate finer topological data. The following example indicates that no such strengthening holds for coefficients in $\bE$ itself: \begin{example} Let $X$ be a complex torus, so $\pi_1(X,e) \cong \Z^{2g}$. By Proposition \ref{abcor}, $(E^J_{X,e})_{\RFD}\ten \Cx = C(\Hom(\pi_1(X,e), \Cx^*),\Cx) \cong C( (\Cx^*)^{2g}, \Cx)$. The complex $A^{\bt}(X, \bE_{\RFD}\ten \Cx)$ then computes $\H^*(\Z^{2g}, C( (\Cx^*)^{2g}, \Cx))$. This is given by taking the completed tensor product of $2g$ copies of the complex $F$ given by $C(\Cx^*,\Cx) \xra{z-1} C(\Cx^*,\Cx)$, so \[ \bar{\H}^* (X, \bE_{\RFD}\ten \Cx)\cong \Cx[-2g]. \] However, $D^*D+DD^*= |z-1|^2$ on the complex $F$, so harmonic forms are given by \[ \cH^* (X, \bE_{\RFD}\ten \Cx)\cong 0. \] \end{example} \section{Twistor and Hodge structures on cochains, and $\SU_2$}\label{twistorHodgecohosn} \subsection{Preliminaries on non-abelian twistor and Hodge filtrations} The following is \cite[Definition \ref{mhs-cdef}]{mhs}: \begin{definition}\label{cdef} Define $C$ to be the real affine scheme $\prod_{\Cx/\R}\bA^1$ obtained from $\bA^1_{\Cx}$ by restriction of scalars, so for any real algebra $A$, $C(A)= \bA^1_{\Cx}(A\ten_{\R}\Cx)\cong A\ten_{\R}\Cx$. Choosing $i \in \Cx$ gives an isomorphism $C \cong \bA^2_{\R}$, and we let $C^*$ be the quasi-affine scheme $C - \{0\}$.
We let the real algebraic group $S=\prod_{\Cx/\R} \bG_m$ of Definition \ref{Sdef} act on $C$ and $C^*$ by inverse multiplication, i.e. \begin{eqnarray*} S \by C &\to& C\\ (\lambda, w) &\mapsto& (\lambda^{-1}w). \end{eqnarray*} \end{definition} Fix an isomorphism $C \cong \bA^2$, with co-ordinates $u,v$ on $C$ so that the isomorphism $C(\R) \cong \Cx$ is given by $(u,v) \mapsto u+iv$. Thus the algebra $O(C)$ associated to $C$ is the polynomial ring $\R[u,v]$. $S$ is isomorphic to the scheme $\bA^2_{\R} -\{(\alpha,\beta)\,:\, \alpha^2+\beta^2=0\}$, with the group isomorphism $S(\R) \cong \Cx^*$ given by $(\alpha,\beta) \mapsto \alpha+i\beta$, and the group isomorphism $S(\Cx) \cong (\Cx^*)^2$ given by $(\alpha,\beta) \mapsto (\alpha+i\beta, \alpha-i\beta)$. By \cite[Corollary \ref{mhs-flathfil} and Proposition \ref{mhs-flattfil}]{mhs}, real Hodge filtrations (resp. real twistor structures) correspond to $S$-equivariant (resp. $\bG_m$-equivariant) flat vector bundles on $C^*$. The latter arises because $[C^*/\bG_m] \simeq \bP^1_{\R}$, so $\bG_m$-equivariant sheaves on $C^*$ correspond to sheaves on $\bP^1$. The following is \cite[Definition \ref{mhs-rowdef}]{mhs}: \begin{definition}\label{rowdef} Define an $S$-action on the real affine scheme $\SL_2$ by $$ (\alpha,\beta, A) \mapsto\left( \begin{smallmatrix} 1 & 0 \\ 0 & \alpha^2+\beta^2 \end{smallmatrix} \right)A \left( \begin{smallmatrix} \alpha & \beta \\ -\beta & \alpha \end{smallmatrix} \right)^{-1}= \left( \begin{smallmatrix} \alpha^2+\beta^2 & 0 \\ 0 & 1 \end{smallmatrix} \right)^{-1}A \left( \begin{smallmatrix} \alpha & -\beta \\ \beta & \alpha \end{smallmatrix} \right). $$ Let $\row_1 :\SL_2 \to C^*$ be the $S$-equivariant map given by projection onto the first row. \end{definition} The subgroup scheme $\bG_m \subset S$ is given by $\beta=0$ in the co-ordinates above, and there is a subgroup scheme $S^1 \subset S$ given by $\alpha^2+\beta^2=1$. These induce an isomorphism $(\bG_m\by S^1)/(-1,-1)\cong S$.
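Since $S(A)=(A\ten_{\R}\Cx)^{\by}$, the group law in the co-ordinates $(\alpha,\beta)$ is multiplication of $\alpha+i\beta$ as formal complex numbers, and the two isomorphisms above are then multiplicative. As a purely illustrative numerical sanity check (the sample points and the use of numpy are incidental):

```python
import numpy as np

def s_mult(p, q):
    # Group law on S in the co-ordinates (alpha, beta): multiplication of
    # alpha + i*beta as formal complex numbers with entries in the base ring.
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def iso(p):
    # (alpha, beta) -> (alpha + i*beta, alpha - i*beta), giving S(C) = (C^*)^2.
    a, b = p
    return (a + 1j * b, a - 1j * b)

rng = np.random.default_rng(0)
p = tuple(rng.standard_normal(2) + 1j * rng.standard_normal(2))
q = tuple(rng.standard_normal(2) + 1j * rng.standard_normal(2))

# iso is a group homomorphism onto (C^*)^2 with componentwise multiplication.
assert np.allclose(iso(s_mult(p, q)),
                   [x * y for x, y in zip(iso(p), iso(q))])
```

Restricting to real $(\alpha,\beta)$, the first component of `iso` is the isomorphism $S(\R)\cong\Cx^*$.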
On these subgroups, the action on $\SL_2$ simplifies as follows: \begin{lemma}\label{rowlemma} The action of $\bG_m \subset S$ on $\SL_2$ is given by \[ (\alpha, A) \mapsto\left( \begin{smallmatrix} \alpha^{-1} & 0 \\ 0 & \alpha \end{smallmatrix} \right)A \] and the action of $S^1 \subset S$ is given by \[ (\alpha,\beta, A) \mapsto A \left( \begin{smallmatrix} \alpha & \beta \\ -\beta & \alpha \end{smallmatrix} \right)^{-1}. \] \end{lemma} The action of $S^1 \subset S$ descends via the maps above to an action on $\bP^1_{\R}$, which is just given by identifying $S^1$ with the real group scheme $\SO_2$. \subsection{The twistor structure on cochains} Fix a compact K\"ahler manifold $X$. \begin{definition} Define $\tD\co A^n(X, \vv)\ten\O_{C^*}\to A^{n+1}(X, \vv)\ten\O_{C^*}$ by \[ \tD= uD+vD^c, \] and write $\tilde{A}^{\bt}(X, \vv) $ for the resulting complex. Put a $\bG_m$-action on $\tilde{A}^{\bt}(X, \vv) $ by letting $A^n(X, \vv) $ have weight $n$, and giving $\O_{C^*}$ the action of $\bG_m \subset S$ from Definition \ref{cdef}. Define $\tilde{\z}^n(X,\vv):= \ker(\tD \co A^n(X,\vv)\ten\O_{C^*} \to A^{n+1}(X,\vv)\ten\O_{C^*})$, $\tilde{\b}^n(X,\vv):= \im(\tD \co A^{n-1}(X,\vv)\ten\O_{C^*} \to A^n(X,\vv)\ten\O_{C^*}) $ and \[ \bar{\tilde{\H}}^n(X,\vv):= \tilde{\z}^n(X,\vv)/\overline{\tilde{\b}^n(X,\vv)}. \] \end{definition} By analogy with \cite[Proposition \ref{mhs-flattfil} and Theorem \ref{mhs-mtsmal}]{mhs}, we regard the $\bG_m$-equivariant complex $\tilde{A}^{\bt}(X, \vv)$ over $C^*$ as a twistor filtration on $A^n(X, \vv)$. \begin{corollary}\label{harmcoho2} The canonical inclusion $\cH^n (X,\vv)(n)\ten\O_{C^*} \to \bar{\tilde{\H}}^n(X,\vv)$ is a $\bG_m$-equivariant topological isomorphism. \end{corollary} \begin{proof} It suffices to prove this on pulling back along the flat cover $\row_1\co \SL_2 \to C^*$. We may define $\tD^*\co A^n(X, \vv)\ten O(\SL_2)\to A^{n-1}(X, \vv)\ten O(\SL_2)$ by $\tD^*= yD^*-xD^{c*}$.
Then $[\tD, \tD^*]=\Delta$, and since $\Delta$ commutes with $D$ and $D^c$ it also commutes with $\tD$. The result now follows with the same proof as that of Corollary \ref{harmcoho}, replacing $D,D^*$ with $\tD, \tD^*$. \end{proof} \begin{proposition}\label{finecoho} If we write $\cH^n= \cH^n (X,\vv)$ and $M^m= \overline{DD^cA^{m-2}(X,\vv)}$, then there is a $\bG_m$-equivariant isomorphism \[ \tilde{\H}^n(X,\vv) \cong [(\cH^n \oplus M^n/\Delta^{\half}M^n)(n) \oplus (M^{n+1}/ \Delta^{\half}M^{n+1})(n-1)] \ten\O_{C^*} \] of quasi-coherent sheaves on $C^*$. \end{proposition} \begin{proof} Writing $A^m:= A^m(X,\vv)$, we have a commutative diagram \[ \xymatrix@R=0ex{ & \overline{DD^cA^{n-1}}(n+1)\ten \O_{C^*} \\ \overline{DD^{c*}A^{n}}(n)\ten \O_{C^*} \ar[ur]^-{vD^c} & & \overline{D^*D^{c}A^{n}}(n)\ten \O_{C^*} \ar[ul]_-{-uD}\\ & \overline{D^*D^{c*}A^{n+1}}(n-1)\ten \O_{C^*} \ar[ul]^-{uD}\ar[ur]_-{vD^c}, } \] which we may regard as a bicomplex. By Theorem \ref{Hodgedecomp}, the complex $\tilde{A}^{\bt}(X,\vv)$ decomposes into a direct sum of $\cH^n$'s and total complexes of the bicomplexes above. Arguing as in Proposition \ref{DeltahalfProp}, we have topological isomorphisms \begin{align*} \Delta^{-1} DD^c\co \overline{D^*D^{c*}A^{n+1}} &\to \overline{DD^cA^{n-1}} \\ \Delta^{-\half} D\co \overline{D^*D^{c}A^{n}} &\to \overline{DD^cA^{n-1}}\\ \Delta^{-\half}D^c\co \overline{DD^{c*}A^{n}}&\to \overline{DD^cA^{n-1}}, \end{align*} so the bicomplex above is linearly isomorphic to \[ \xymatrix@R=0.5ex{ & \overline{DD^cA^{n-1}}(n+1)\ten \O_{C^*} \\ \overline{DD^cA^{n-1}}(n)\ten \O_{C^*} \ar[ur]^-{v\Delta^{\half}} & & \overline{DD^cA^{n-1}}(n)\ten \O_{C^*} \ar[ul]_-{-u\Delta^{\half}}\\ & \overline{DD^cA^{n-1}}(n-1)\ten \O_{C^*} \ar[ul]^-{u\Delta^{\half}}\ar[ur]_-{v\Delta^{\half}}.
} \] Since the ideal $(u,v)$ generates $\O_{C^*}$, cohomology of the top level of the associated total complex is just \[ (\overline{DD^cA^{n-1}}/\Delta^{\half}\overline{DD^cA^{n-1}})(n+1)\ten \O_{C^*}, \] while cohomology of the bottom level is $0$. Moreover, the map \[ \overline{DD^cA^{n-1}}(n-1) \ten \O_{C^*} \xra{(u,v)} \overline{DD^cA^{n-1}}(n)^2\ten \O_{C^*} \] is an isomorphism to the kernel of $(v,-u)$, so cohomology of the middle level is isomorphic to \[ (\overline{DD^cA^{n-1}}(n-1)/ \Delta^{\half}\overline{DD^cA^{n-1}}(n-1)) \ten \O_{C^*}, \] which completes the proof. \end{proof} \begin{remark}\label{MTSbad} In particular, note that $\tilde{\H}^n(X,\vv)$ is of weights $n, n-1$ in general, unlike $\bar{\tilde{\H}}^n(X,\vv)$, which is pure of weight $n$. This means that the weight filtration given by good truncation cannot define a mixed twistor structure on $\tilde{\H}^n(X,\vv)$. \end{remark} We now have the following generalisation of the principle of two types: \begin{lemma}\label{p2t2} As subspaces of $A^n(X, \vv)\ten O(\SL_2)$, \begin{align*} &\ker \tD \cap \ker \tDc \cap (\overline{ \tD( A^{n-1}(X,\vv)\ten O(\SL_2)) } + \overline{\tDc (A^{n-1}(X,\vv) \ten O(\SL_2))})\\ &= \overline{\tD\tDc( A^{n-2}(X,\vv)\ten O(\SL_2))}. \end{align*} \end{lemma} \begin{proof} This follows from Corollary \ref{p2t}, with the same reasoning as \cite[Lemma \ref{mhs-gl2lemma}]{mhs}. \end{proof} \begin{lemma}[Formality]\label{formallemma2} The morphisms \[ (\bar{\H}_{\tDc}^*(A^*(X,\vv)\ten O(\SL_2)),0) \la (\z_{\tDc}^*(A^*(X,\vv)\ten O(\SL_2)), \tD) \to (A^*(X,\vv)\ten O(\SL_2), \tD) \] induce isomorphisms on reduced cohomology. \end{lemma} \begin{proof} The proof of \cite[Lemma 2.2]{Simpson} carries over to this generality, using Lemma \ref{p2t2}. \end{proof} Following Corollary \ref{univlocsys} and Remark \ref{DGArk}, the results above can all be regarded as statements about the topological differential graded algebra $\tilde{A}^{\bt}(X, \bE')$.
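For reference, the identity $[\tD,\tD^*]=\Delta$ used in the proof of Corollary \ref{harmcoho2} is a direct consequence of the relations already recorded: writing $[-,-]$ for the anticommutator of odd operators, we have $[D,D^{c*}]=[D^c,D^*]=0$ and $[D,D^*]=[D^c,D^{c*}]=\Delta$, so
\begin{align*}
[\tD,\tD^*] &= [uD+vD^c,\, yD^*-xD^{c*}]\\
&= uy\,[D,D^*] - ux\,[D,D^{c*}] + vy\,[D^c,D^*] - vx\,[D^c,D^{c*}]\\
&= (uy-vx)\,\Delta = \Delta,
\end{align*}
since $uy-vx=1$ on $\SL_2$.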
\subsection{The analytic Hodge filtration on cochains}\label{analyticMHS} Recall from \S \ref{Evee} that the local system $\bE'$ on $X$ is defined to correspond to the $\pi_1(X,x)$-representation given by left multiplication on $E^J(X,x)'$. \begin{proposition}\label{redenrich} The topological cochain complex $\tilde{A}^{\bt}(X, \bE')$ is equipped with a continuous circle action, satisfying: \begin{enumerate} \item the $S^1$-action and $\bG_m$-actions on $\tilde{A}^{\bt}(X, \bE')$ commute, \item the action of $S^1 \subset \Cx^*= S(\R)$ on $C^*$ makes $\tilde{A}^{\bt}(X, \bE')$ into an $S^1$-equivariant sheaf on $C^*$, and \item $-1 \in S^1$ acts as $-1 \in \bG_m$. \end{enumerate} \end{proposition} \begin{proof} Since $S^1$ acts on $E^J(X,x) $, it acts on $\bE'$, and we denote this action by $v \mapsto t \circledast v$, for $t \in S^1$. We may now adapt the proof of \cite[Theorem \ref{mhs-mhsmal}]{mhs}, defining an $S^1$-action on $\sA^*(X, \R)\ten_{\R}\bE' $ by setting $t \boxast (a\ten v) := (t \dmd a) \ten (t^2 \circledast v)$ for $t \in S^1$ and $\dmd$ as in Definition \ref{dmd}. Passing to the completion $\sA^*(X, \bE')$ completes the proof, with continuity following from Proposition \ref{circleX}. \end{proof} \begin{remarks} If the circle action of Proposition \ref{redenrich} were algebraic, then by \cite[Lemma \ref{mhs-tfilenrich}]{mhs} it would correspond to a Hodge filtration on $A^{\bt}(X, \bE')$. Since finite-dimensional circle representations are algebraic, we may regard Proposition \ref{redenrich} as the natural structure of an infinite-dimensional Hodge filtration. In Proposition \ref{redenrich}, note that we can of course replace $E^J(X,x)$ with any inverse system $B$ of $C^*$-algebra quotients of $E^J(X,x)$ to which the $S^1$-action descends, provided we replace $\bE$ with the local system associated to $B$. As observed in \S \ref{Evee}, we may substitute $\vv=\bE'$ in Proposition \ref{finecoho} and Lemma \ref{formallemma2}.
Note that the resulting isomorphisms are then equivariant with respect to the circle action of Proposition \ref{redenrich}. \end{remarks} \subsection{$\SU_2$}\label{SU2sn} As we saw in Corollary \ref{harmcoho2}, in order to define the adjoint operator $\tilde{D}^*$ to $\tilde{D}$, it is necessary to pull $\tilde{A}^{\bt}(X,\vv)$ back along the morphism $\row_1 \co \SL_2 \to C^*$. This gives us the complex \[ \row_1^*\tilde{A}^{\bt}(X,\vv)= (A^*(X, \vv) \ten O(\SL_2), \tD), \] where $\tD=uD+vD^c$, with adjoint $\tD^*= yD^*-xD^{c*}$. This leads us to consider the $*$-structure on $O(\SL_2)$ determined by $u^*=y, v^*=-x$. This implies $x^*=-v, y^*=u$, so \[ \begin{pmatrix}u & v \\ x& y \end{pmatrix}^*= \begin{pmatrix}y & -x \\ -v & u \end{pmatrix}, \] or $A^*= (A^{-1})^t$. \begin{lemma} The real $C^*$-enveloping algebra $C^*(O(\SL_2))$ of the real $*$-algebra $O(\SL_2)$ is the ring of continuous complex functions $f$ on $\SU_2$ for which \[ f(\bar{A}) = \overline{f(A)}. \] \end{lemma} \begin{proof} A $*$-morphism $O(\SL_2) \to \Cx$ is a matrix $A \in \SL_2(\Cx)$ with $\bar{A}= A^*= (A^{-1})^t$, so $A \in \SU_2$. Thus the Gelfand representation gives $C^*(O(\SL_2))\ten \Cx \cong C(\SU_2, \Cx)$. Now, writing $\Gal(\Cx/\R) =\<\tau\>$, and taking $f \in O(\SL_2)\ten \Cx$ and $A \in \SU_2$, we have $\tau(f)(A) = \overline{f(\bar{A})}$. This formula extends to give a $\Gal(\Cx/\R)$-action on $C(\SU_2, \Cx)$, and Lemma \ref{GalCstar} then gives \[ C^*(O(\SL_2))= C(\SU_2, \Cx)^{\tau}. \] \end{proof} Note that complex conjugation on $\SU_2$ is equivalent to conjugation by the matrix $\left(\begin{smallmatrix} 0 & 1\\ -1 & 0 \end{smallmatrix}\right)$. \subsubsection{The Hopf fibration} The action of $S(\Cx)$ on $\SL_2(\Cx)$ from Definition \ref{rowdef} does not preserve $\SU_2$. 
However, Lemma \ref{rowlemma} ensures that for the subgroup schemes $\bG_m,S^1 \subset S$, the groups $S^1 = S^1(\R) \subset S^1(\Cx)\cong \Cx^*$ and $S^1\subset \Cx^*\cong \bG_m(\Cx)$ both preserve $\SU_2$. Thus in the $C^*$ setting, the $S$-action becomes an action of $(S^1\by S^1)/(-1,-1)$ on $\SU_2$, given by \[ (s,t, A)\mapsto \left(\begin{smallmatrix} s^{-1} & 0 \\ 0 & s \end{smallmatrix} \right)A \left( \begin{smallmatrix} \Re t & \Im t \\ -\Im t & \Re t \end{smallmatrix} \right)^{-1}. \] Moreover, there is a $\Gal(\Cx/\R)$-action on this copy of $S^1 \by S^1$, with the non-trivial automorphism $\tau$ given by $\tau(s,t)= (s^{-1},t)$. The action of $(S^1\by S^1)/(-1,-1)$ is then $\tau$-equivariant. Alternatively, we may characterise our group as $S^1 \by S^1 \subset \Cx^*\by \Cx^* \cong S(\Cx)$, by sending $(s,t)$ to $(st,st^{-1})$. On this group $S^1 \by S^1$, the $\Gal(\Cx/\R)$-action is then given by $\tau(w',w'')= (\overline{w''}, \overline{w'})$. Now, consider the composition \[ \SL_2 \xra{\row_1} C^* \to [C^*/\bG_m] \cong \bP^1. \] On taking Gelfand representations of $C^*$-enveloping algebras, this gives rise to the map \[ \SU_2 \to \bP^1(\Cx), \] which is just the Hopf fibration $p \co S^3 \to S^2$, corresponding to the quotient by the action of $S^1 \subset \bG_m(\Cx)$ by diagonal matrices. The action of $\tau$ on $\SU_2$ and on $\bP^1(\Cx)$ is just given by complex conjugation. \subsubsection{Smooth sections}\label{smoothsnsn} If we write $\rho_n$ for the weight $n$ action of $\bG_m$ on $\bA^1$, then we may consider the topological vector bundle \[ \SU_2\by_{S^1, \rho_n}\Cx \] on $\bP^1(\Cx)$, for the action of $S^1 \subset \bG_m(\Cx)$ above. \begin{definition} Define $\sA^0_{\bP^1}\Cx(n)$ to be the sheaf of smooth sections of $\SU_2\by_{S^1, \rho_n}\Cx \to \bP^1(\Cx)$, and write $A^0(\bP^1, \Cx(n)):= \Gamma( \bP^1(\Cx),\sA^0_{\bP^1}\Cx(n))$. Beware that for $n\ne 0$, there is no local system generating $\sA^0_{\bP^1}\Cx(n)$.
\end{definition} For $U\subset \bP^1(\Cx)$, observe that $\Gamma( U,\sA^0_{\bP^1}\Cx(n)) $ consists of smooth maps \[ f\co p^{-1}(U) \to \Cx \] satisfying $f(\left(\begin{smallmatrix} s^{-1} & 0 \\ 0 & s \end{smallmatrix} \right)A)= s^nf(A)$ for all $s \in S^1$. For the quotient map $q\co C^* \to \bP^1$, we may characterise $\Gamma(U, \O_{\bP^1}^{\hol}(n))$ as the space of holomorphic maps \[ f\co q^{-1}(U) \to \Cx \] satisfying $f(su,sv)= s^nf(u,v)$ for all $s \in \Cx^*$. The embedding $S^3 \subset C^*(\Cx)$ thus yields \[ \O_{\bP^1}^{\hol}(n) \subset \sA^0_{\bP^1}\Cx(n), \] and indeed $\sA^0_{\bP^1}\Cx(n)= \O_{\bP^1}^{\hol}(n)\ten_{\O_{\bP^1}^{\hol} }\sA^0_{\bP^1}\Cx$. Now, for the conjugate sheaf $\overline{\O_{\bP^1}^{\hol}(n)}$, note that $\Gamma(U, \overline{\O_{\bP^1}^{\hol}(n)})$ is the space of anti-holomorphic maps \[ f\co q^{-1}(U) \to \Cx \] satisfying $f(su,sv)= \bar{s}^nf(u,v)$ for all $s \in \Cx^*$. Thus we have a canonical embedding \[ \overline{\O_{\bP^1}^{\hol}(n)} \subset \sA^0_{\bP^1}\Cx(-n), \] with $\sA^0_{\bP^1}\Cx(-n)= \overline{\O_{\bP^1}^{\hol}(n)}\ten_{\overline{\O_{\bP^1}^{\hol} }}\sA^0_{\bP^1}\Cx$. Note that the inclusion $O(\SL_2) \subset C(\SU_2, \Cx)$ gives \[ u,v \in A^0(\bP^1, \Cx(1))^{\tau}, \quad \bar{u},\bar{v} \in A^0(\bP^1, \Cx(-1))^{\tau} \] and \[ O(\SL_2) \subset \bigoplus_{n\in \Z} A^0(\bP^1, \Cx(n))^{\tau}. \] \begin{definition}\label{Ndef} By \cite[Definition \ref{mhs-Ndef}]{mhs}, there is a derivation $N$ on $O(\SL_2)$ given by $Nx=u, Ny=v, Nu=Nv=0$, for co-ordinates $\left(\begin{smallmatrix} u & v \\ x & y\end{smallmatrix}\right)$ on $\SL_2$. Since this annihilates $u,v$, it is equivalent to the $O(\SL_2)$-linear map \[ \Omega(\SL_2/C^*) \to O(\SL_2) \] given by $dx\mapsto u, dy \mapsto v$. \end{definition} Note that $N$ has weight $2$, and extends (by completeness) to give $\tau$-equivariant differentials \[ N \co \sA^0_{\bP^1} \Cx(n)\to \sA^0_{\bP^1} \Cx(n+2).
\] Note that $N$ is the composition of the anti-holomorphic differential \[ \bar{\pd}_{\bP^1}\co \sA^0_{\bP^1}\Cx(n) \to \sA^0_{\bP^1}\Cx(n)\ten_{ \overline{\O_{\bP^1}^{\hol} }} \overline{\Omega_{\bP^1}} \] with the canonical isomorphism $\Omega_{\bP^1}\cong \O(-2)$. \subsubsection{Splittings of the twistor structure}\label{splittingtwistor} As in \cite[Remark \ref{mhs-sltrivia}]{mhs}, we can characterise the map $\row_1\co \SL_2 \to C^*$ as the quotient $C^*=[\SL_2/\bG_a]$, where $\bG_a$ acts on $\SL_2$ as left multiplication by $ \left(\begin{smallmatrix} 1 & 0 \\ \bG_a & 1 \end{smallmatrix} \right)$. Here, the $S$-action on $\bG_a$ has $\lambda$ acting as multiplication by $\lambda\bar{\lambda}$. Therefore the map $q \circ \row_1 \co \SL_2 \to \bP^1$ is given by taking the quotient of $\SL_2$ by the Borel subgroup $B= \bG_m \ltimes \bG_a$, for the action above. The action of $\bG_m$ corresponds to weights, while the action of $\bG_a$ corresponds to the derivation $N$ above, which we regard as the Archimedean monodromy operator as in \cite[\S \ref{mhs-analogies}]{mhs}. \begin{definition} Given a pluriharmonic local system $\vv$, define $\breve{A}^{\bt}(X, \vv)$ to be the sheaf of $\O_{\bP^1}^{\hol}$-modules associated to the $\bG_m$-equivariant sheaf $\tilde{A}^{\bt}(X, \vv) $ on $C^*$. Explicitly, \[ \breve{A}^{\bt}(X, \vv)= (\bigoplus_n (q_*\tilde{A}^{\bt}(X, \vv))\ten_{\O_{\bP^1}^{\alg}}\O_{\bP^1}^{\hol}(n))^{\bG_m}, \] so \[ \breve{A}^n(X, \vv)= A^n(X,\vv)\ten_{\R} \O_{\bP^1}^{\hol}(n), \] with differential $\breve{D}=uD+vD^c$, for $u,v \in \Gamma(\bP^1, \O_{\bP^1}(1))$. \end{definition} \begin{definition} Write $\mathring{\sA}^{\bt}(X, \vv):= \sA^0_{\bP^1}\ten_{\O_{\bP^1}^{\hol}} \breve{A}^*(X, \vv) $, and observe that this admits an operator \[ \breve{D}^c:=-\bar{v}D+uD^c\co \mathring{\sA}^n(X, \vv) \to \mathring{\sA}^{n+1}(X, \vv)(-2).
\] \end{definition} Now, applying the map $O(\SL_2) \into \bigoplus_{n\in \Z} A^0(\bP^1, \Cx(n))^{\tau}$ to Lemmas \ref{p2t2},\ref{formallemma2} yields the following: \begin{lemma}\label{formallemma3} As subspaces of $\mathring{\sA}^n(X, \vv)$, \[ \ker \breve{D} \cap \ker \breve{D}^c \cap (\overline{ \breve{D} \mathring{\sA}^{n-1}(X, \vv) } + \overline{\breve{D}^c \mathring{\sA}^{n-1}(X, \vv)(2)}) = \overline{\breve{D}\breve{D}^c(\mathring{\sA}^{n-2}(X, \vv)(2))}. \] Thus the morphisms \[ (\bar{\sH}_{\breve{D}^c}^*(\mathring{\sA}^*(X, \vv)),0) \la (\sZ_{\breve{D}^c}^*( \mathring{\sA}^*(X, \vv)), \breve{D}) \to (\mathring{\sA}^*(X, \vv), \breve{D}) \] induce isomorphisms on reduced cohomology sheaves. \end{lemma} Now, $\tilde{A}^{\bt}(X, \bE')$ can be recovered from $\row_1^*\tilde{A}^{\bt}(X, \bE')$ and its nilpotent monodromy operator $N$, and Lemma \ref{formallemma2} says that $\row_1^*\tilde{A}^{\bt}(X, \bE')$ is equivalent to $\cH^*(X, \bE')\ten O(\SL_2)$ up to reduced quasi-isomorphism. Under the base change above, we have $N= \bar{\pd}_{\bP^1}$ giving an exact sequence \[ 0 \to \breve{A}^{\bt}(X, \vv) \to \mathring{\sA}^{\bt}(X, \vv) \xra{N} \mathring{\sA}^{\bt}(X, \vv)(2) \to 0. \] In other words, we can recover the topological DGA $\breve{A}^{\bt}(X, \bE')$ from the differential $\bar{\pd}=N$ on the topological DGA $\mathring{\sA}^{\bt}(X, \bE')$, and the latter is just $\bigoplus_n \sA_{\bP^1}( \cH^n(X, \bE')(n))$ up to reduced quasi-isomorphism. Also note that when we substitute $\vv:=\bE'$ in Lemma \ref{formallemma3}, the morphisms all become equivariant with respect to the circle action of Proposition \ref{redenrich}. This action makes $ \mathring{\sA}^n(X, \bE')$ into an $S^1$-equivariant sheaf over $\bP^1$, where the action on $\bP^1$ is given by $t^2 \in S^1$ sending $(u:v)$ to $t\dmd (u:v) =(au-bv:av+bu)$ for $t=a+ib \in S^1$.
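To see that this last formula is well defined, note that $t\dmd(u:v)$ is given by the rotation matrix of $t$, that $t\mapsto R_t$ is multiplicative, and that the two square roots $\pm t$ of $t^2$ give matrices differing by $-I$, which acts trivially on $\bP^1$. A purely illustrative numerical check of these two points:

```python
import numpy as np

def rot(t):
    # Matrix of t = a + ib acting on (u, v): (u, v) -> (a*u - b*v, a*v + b*u).
    return np.array([[t.real, -t.imag], [t.imag, t.real]])

t1, t2 = np.exp(0.7j), np.exp(-1.3j)

# t -> rot(t) is multiplicative, so the formula defines a circle action.
assert np.allclose(rot(t1) @ rot(t2), rot(t1 * t2))

# The square roots t, -t of t^2 act by matrices differing by -I, which
# rescales both homogeneous co-ordinates and so is trivial on P^1; hence
# the action of t^2 is independent of the choice of square root.
assert np.allclose(rot(-t1), -rot(t1))
```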
\section{The twistor family of moduli functors}\label{twistorfamilysn} \subsection{Fr\'echet algebras on projective space} On the complex manifold $\bP^1(\Cx)$, we have a sheaf $\O_{\bP^1}^{\hol}$ of holomorphic functions, which we may regard as a sheaf of Fr\'echet algebras. As a topological space, $\bP^1(\Cx)$ is equipped with a $\Gal(\Cx/\R)$-action, the non-trivial element $\tau$ acting on points by complex conjugation. There is also an isomorphism \[ \tau^{\sharp}_{\O} \co \tau^{-1} \O_{\bP^1}^{\hol} \to \O_{\bP^1}^{\hol}, \] given by \[ \tau^{\sharp}_{\O}(f)(z) = \overline{f(\bar{z})}, \] and satisfying \[ \tau^{\sharp}_{\O}\circ \tau^{-1}(\tau^{\sharp}_{\O}) = \id \co \O_{\bP^1}^{\hol} \to \O_{\bP^1}^{\hol}. \] \begin{definition} Define $\Fr\Alg_{\bP^1,\Cx}$ to be the category of sheaves $\sF$ of unital Fr\'echet $\O_{\bP^1}^{\hol}$-algebras, quasi-coherent in the sense that the maps \[ \sF(U)\ten^{\pi}_{\O_{\bP^1}^{\hol}(U)}\O_{\bP^1}^{\hol}(V) \to \sF(V) \] are isomorphisms for all open subspaces $V \subset U$, where $\ten^{\pi}$ here denotes projective tensor product. Define $\Fr\Alg_{\bP^1,\R}$ to be the category of pairs $(\sF, \tau^{\sharp}_{\sF})$ for $\sF \in \Fr\Alg_{\bP^1,\Cx}$ and \[ \tau^{\sharp}_{\sF} \co \tau^{-1} \sF \to \sF \] an $(\O_{\bP^1}^{\hol}, \tau^{\sharp}_{\O})$-linear isomorphism satisfying \[ \tau^{\sharp}_{\sF}\circ \tau^{-1}(\tau^{\sharp}_{\sF}) = \id_{\sF}. \] \end{definition} Note that for any Fr\'echet $k$-algebra $B$, the sheaf $B\ten^{\pi}_k \O_{\bP^1}^{\hol}$ lies in $ \Fr\Alg_{\bP^1,\Cx}$. When $k=\R$, the involution $\id_B\ten \tau^{\sharp}_{\O}$ makes $B\ten^{\pi}_k \O_{\bP^1}^{\hol}$ an object of $ \Fr\Alg_{\bP^1,\R}$. The forgetful functor $\Fr\Alg_{\bP^1,\R} \to \Fr\Alg_{\bP^1,\Cx}$ has a right adjoint, given by $\sF \mapsto \sF \oplus \tau^{-1}\sF $, with the involution $\tau^{\sharp}$ given by swapping summands, and the $\O_{\bP^1}^{\hol}$-structure on $\tau^{-1}\sF $ defined using $\tau^{\sharp}_{\O}$.
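On polynomial sections, the involution $\tau^{\sharp}_{\O}$ is simply coefficientwise complex conjugation: if $f(z)=\sum_k c_k z^k$ then $\overline{f(\bar z)}=\sum_k \bar{c}_k z^k$, so $\tau^{\sharp}_{\O}$ squares to the identity and fixes exactly the real polynomials. A purely illustrative numerical check (the sample polynomial is arbitrary):

```python
import numpy as np

def tau(coeffs):
    # tau(f)(z) = conj(f(conj(z))); on polynomial coefficients this is
    # plain complex conjugation.
    return np.conj(coeffs)

f = np.array([1 + 2j, -0.5j, 3.0 + 0j])
z = 0.3 + 0.8j

# tau is an involution and agrees with z -> conj(f(conj(z))) pointwise.
assert np.allclose(tau(tau(f)), f)
assert np.isclose(np.polyval(tau(f), z), np.conj(np.polyval(f, np.conj(z))))

# Its fixed points are the polynomials with real coefficients.
assert not np.allclose(tau(f), f)
assert np.allclose(tau(f.real), f.real)
```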
\subsection{The twistor functors} \begin{definition} Define the groupoid-valued functor $\cR^{\bT,\Cx}_X$ on $\Fr\Alg_{\bP^1,\Cx}$ by letting $\cR^{\bT,\Cx}_X(\sB) $ consist of pairs $(\sT, \tD)$, for $\sA^0_X(\pr_2^{-1}\sB^{\by})$-torsors $\sT$ on $X \by \bP^1(\Cx)$ with $x^*\sT$ trivial as a $\sB^{\by}$-torsor on $\bP^1(\Cx)$, and flat $ud+v\dc$-connections \[ \tD\co \sT \to \sA^1_X\ten_{\sA^0_X}\ad \sT(1). \] Here $u,v$ are the basis of $\Gamma(\bP^1_{\R}, \O(1))$ given by the co-ordinates $u,v$ of $C^*$ and the canonical map $C^* \to \bP^1$. Define the set-valued functor $\oR^{\bT,\Cx}_{X,x}$ on $\Fr\Alg_{\bP^1,\Cx}$ by letting $\oR^{\bT,\Cx}_{X,x}(\sB) $ be the groupoid of triples $(\sT, \tD,f)$, with $(\sT, \tD) \in \cR^{\bT,\Cx}_X(\sB)$ and framing $f \in \Gamma(\bP^1(\Cx), x^*\sT)$. \end{definition} \begin{definition} Define the groupoid-valued functor $\cR^{\bT}_X$ on $\Fr\Alg_{\bP^1,\R}$ by letting $\cR^{\bT}_X(\sB) $ consist of triples $(\sT, \tD, \tau_{\sT}^{\sharp})$, for $(\sT, \tD) \in \cR^{\bT,\Cx}_X(\sB)$ and isomorphism $\tau^{\sharp}_{\sT} \co (\id_X \by \tau)^{-1}\sT \to \sT$ satisfying the following conditions. The isomorphism \[ (\tau^{\sharp}_{\sB}, \tau^{\sharp}_{\sT})\co \sA^0_X(\pr_2^{-1}\sB^{\by}) \by_{\tau^{\sharp}_{\sB}, \sA^0_X(\pr_2^{-1}\tau^{-1}\sB^{\by})}(\id_X \by \tau)^{-1}\sT \to \sT \] must be a morphism of $\sA^0_X(\pr_2^{-1}\sB^{\by})$-torsors, and the diagram \[ \begin{CD} \tau^{-1}\sT @>{\tau^{-1}\tD}>> \sA^1_X\ten_{\sA^0_X}\tau^{-1}\sT(1)\\ @V{\tau^{\sharp}_{\sT}}VV @VV{\tau^{\sharp}_{\sT}}V\\ \sT @>{\tD}>> \sA^1_X\ten_{\sA^0_X}\sT(1) \end{CD} \] must commute. Define the functor $\oR^{\bT}_{X,x}$ on $\Fr\Alg_{\bP^1,\R}$ by letting $\oR^{\bT}_{X,x}(\sB)$ be the groupoid of quadruples $(\sT, \tD, \tau_{\sT}^{\sharp},f)$, for $(\sT, \tD, \tau_{\sT}^{\sharp})$ in $\cR^{\bT}_X(\sB) $ and \[ f \in \Gamma(\bP^1(\Cx), x^*\sT)^{ \tau_{\sT}^{\sharp}} \] a framing.
\end{definition} \begin{remark} Observe that the groupoids $\oR^{\bT,\Cx}_{X,x}(\sB), \oR^{\bT}_{X,x}(\sB)$ are equivalent to discrete groupoids, so we will regard them as set-valued functors (given by isomorphism classes of objects). Also note that $\cR^{\bT,\Cx}_X(\sB)$ and $\cR^{\bT}_X(\sB)$ are equivalent to the groupoid quotients $[\oR^{\bT,\Cx}_{X,x}(\sB)/\Gamma(\bP^1(\Cx), \sB^{\by})]$ and $[\oR^{\bT}_{X,x}(\sB)/\Gamma(\bP^1(\Cx), \sB^{\by})^{ \tau_{\sT}^{\sharp}}]$ respectively, with the action given by changing the framing. \end{remark} The following is straightforward: \begin{lemma} The functors $\oR^{\bT,\Cx}_{X,x}$ and $\cR^{\bT,\Cx}_{X}$ can be recovered from $\oR^{\bT}_{X,x}$ and $\cR^{\bT}_{X}$, respectively, via isomorphisms \[ \oR^{\bT,\Cx}_{X,x}(\sB)\cong \oR^{\bT}_{X,x}(\sB \oplus \tau^{-1}\sB) \quad \cR^{\bT,\Cx}_{X}(\sB)\cong \cR^{\bT}_{X}(\sB \oplus \tau^{-1}\sB). \] \end{lemma} Note that the proof of Lemma \ref{tensorstr0} carries over to give a canonical comultiplication \[ \oR^{\bT}_{X,x}(\sB_1) \by \oR^{\bT}_{X,x}(\sB_2) \to \oR^{\bT}_{X,x}(\sB_1\ten^{\pi} \sB_2). \] \subsection{Universality and $\sigma$-invariant sections} \begin{proposition}\label{twistoruniversal} The functors $\cR^{\dR}_X$ and $\cR^{\Dol}_X$ can be recovered from $\cR^{\bT}_{X}$ and $\cR^{\bT,\Cx}_{X}$, respectively. Likewise, $\oR^{\dR}_{X,x}$ and $\oR^{\Dol}_{X,x}$ can be recovered from $\oR^{\bT}_{X,x}$ and $\oR^{\bT,\Cx}_{X,x}$. \end{proposition} \begin{proof} Given a point $p \in \bP^1(\Cx)$ and a complex Fr\'echet algebra $B$, we may regard the skyscraper sheaf $p_*B$ as an object of $\Fr\Alg_{\bP^1,\Cx}$. For a real Fr\'echet algebra $B$ and $p \in \bP^1(\R)$, we may regard $p_*B\ten \Cx$ as an object of $\Fr\Alg_{\bP^1,\R}$, with $\tau^{\sharp}$ given by complex conjugation. Now, just observe that on pulling back to $(1:0) \in \bP^1(\R)$, the differential $ud+v\dc$ is just $d$. At $(1:-i) \in \bP^1(\Cx)$, we have $ud+v\dc= d-i\dc= 2\pd$.
Uncoiling the definitions, this gives \[ \oR^{\dR}_{X,x}(B) = \oR^{\bT}_{X,x}((1:0)_*B),\quad \oR^{\Dol}_{X,x}(B) = \oR^{\bT,\Cx}_{X,x}((1:-i)_*B), \] and similarly for $\cR$. \end{proof} \begin{lemma}\label{torsorsections} For a real Fr\'echet algebra $B$, the groupoid $\cR^{\bT}_{X}(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol})$ is equivalent to the groupoid of triples $(\sP, D,E)$, where $\sP$ is an $\sA^0_X(B^{\by})$-torsor on $X$, $D,E\co \sP \to \sA^1_X\ten_{\sA^0_X}\ad \sP$ are flat $d$- and $\dc$-connections, respectively, and $DE+ED=0$. \end{lemma} \begin{proof} Take an object $(\sT, \tD) \in \cR^{\bT}_{X}(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol})$. Triviality of the $(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol})^{\by}$-torsors $x^*\sT$ for $x \in X$ ensures that $\sT$ must be of the form $\sA^0_X(B\ten_{\R}^{\pi}\O_{\bP^1}^{\hol})^{\by}\by_{\sA^0_X(B^{\by})}\sP$ for some $\sP$ as above. Now, since $\Gamma(\bP^1,\O_{\bP^1}^{\hol}(1))^{\tau^{\sharp}}= \R u \oplus \R v$, the connection $\tD$ can be regarded as a $ud+v\dc$-connection \[ \tD\co \sP \to (\sA^1_X\ten_{\sA^0_X}\ad \sP)\ten_{\R}(\R u \oplus \R v), \] which we write as $uD+vE$. Flatness of $\tD$ is now a statement about $\Gamma(\bP^1,\O_{\bP^1}^{\hol}(2))^{\tau^{\sharp}}= \R u^2 \oplus \R uv \oplus \R v^2 $, with the $u^2$ and $v^2$ terms giving flatness of $D$ and $E$, and the $uv$ term giving the condition $[D,E]=0$. \end{proof} \begin{definition} Define the involution $\tau\sigma_{\bP}$ of the polarised scheme $(\bP^1, \O_{\bP^1}(1))$ to be the map induced by the action of $i \in S(\R)$ on $C^*$ from Definition \ref{cdef}. In particular, $\tau\sigma_{\bP}(u:v)= (v:-u)$. \end{definition} \begin{remark} On \cite[p12]{MTS}, the co-ordinate system $(u+iv:u-iv)$ on $\bP^1(\Cx)$ is used, and antiholomorphic involutions $\sigma, \tau$ are defined. In our co-ordinates, these become $\sigma(u:v)= (\bar{v}: -\bar{u})$ and $\tau(u:v)= (\bar{u}: \bar{v})$. This justifies the notation $\tau\sigma$ used above.
Also note that the $\bG_{m,\Cx}$-action on $\bP^1_{\Cx}$ given in \cite[p4]{MTS} is just the complex form of our circle action on $\bP^1_{\R}$ from \S\S \ref{analyticMHS} and \ref{splittingtwistor}. \end{remark} \begin{definition} Given a real Banach algebra $B$, define the involution $\tau\sigma'$ of $\cR^{\bT}_{X}(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol})$ by sending the pair $(\sT, \tD)$ to $(\tau\sigma_{\bP}^{-1}\sT, J\tau\sigma_{\bP}^{-1}\tD)$. Note that this is well-defined because $\tau\sigma_{\bP}^{-1}\tD $ is a $\tau\sigma_{\bP}(ud+v\dc)= (vd-u\dc)$-connection, and $Jd=\dc, J\dc=-d$, so $J\tau\sigma_{\bP}^{-1}\tD $ is a $(ud+v\dc)$-connection. \end{definition} \begin{definition} Given a $C^*$-algebra $B$, define the Cartan involution $C$ of $B^{\by}$ to be given by $C(g)= (g^{-1})^*$. Note that this induces a Lie algebra involution $\ad C\co b \mapsto -b^*$ on the tangent space $B$ of $B^{\by}$. If $B=A\ten \Cx$ for a real $C^*$-algebra $A$, we write $\tau$ for complex conjugation, so $C\tau$ is the involution $C\tau(g)= (\bar{g}^{-1})^*$. Note that $\ad C\tau$ is the $\Cx$-linear extension of $\ad C$ on $A$. \end{definition} Since $\cR^{\bT}_{X}(\sB) $ only depends on the group of units $\sB^{\by}$ of $\sB$, and its tangent Lie algebra $\sB$, the Cartan involution induces an involution $C\tau$ of $\cR^{\bT}_{X}(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol}) $. \begin{definition} Given a real Banach algebra $B$, define the involution $\sigma$ of $\cR^{\bT}_{X}(B\ten_{\R}^{\pi} \O_{\bP^1}^{\hol})$ by $\sigma:= (C\tau)(\tau\sigma')$. \end{definition} \begin{proposition}\label{sigmasections} For a real $C^*$-algebra $B$, there is a canonical isomorphism \[ \oR^{\bT}_{X,x}(\sO_{\bP^1}^{\hol}(B))^{\sigma} \cong \oR^J_{X,x}(B). 
\] \end{proposition} \begin{proof} Since $\oR^{\bT}_{X,x} $ is the groupoid fibre of $\cR^{\bT}_X\to \cR^{\bT}_{\{x\}}$ over the trivial torsor, Lemma \ref{torsorsections} shows that an object $(\sT, \tD,f)$ of $\oR^{\bT}_{X,x}(\sO_{\bP^1}^{\hol}(B))$ is a quadruple $(\sP, D,E,f)$ for $(\sP,D,E)$ as in that lemma, and $f$ our framing. We therefore begin by describing the $\sigma$-action on such data. For $(\sP, D,E,f)$ to be $\sigma$-invariant, we must have an isomorphism $\alpha\co (\sP, D,E,f) \to \sigma(\sP, D,E,f) $. The torsor $\sP$ maps under $\sigma$ to $C(\sP)$, with $f\in x^*\sP$ mapping to $C(f)$ (since $\sigma'$ and $\tau$ affect neither). The isomorphism $\alpha$ then gives $\alpha\co \sP \to C(\sP)$ such that $\alpha(f) = C(f) \in x^*C(\sP)$. Let $U(\sP) \subset \sP$ consist of sections $q$ with $\alpha(q)= C(q)$. This is non-empty (since its fibre at $x$ contains $f$), so it must be an $\sA^0_X(U(B))$-torsor, noting that $U(B)$ is the group of $C$-invariants in $B^{\by}$. Moreover $\sP= \sA^0_X(B^{\by})\by_{\sA^0_X(U(B))}U(\sP)$. Meanwhile, $\tau\sigma'(uD+vE) = vJD -uJE$, so $\tau\sigma'(D,E)= (-JE, JD)$. Thus the isomorphism $\alpha$ gives \[ D|_{U(\sP)}= -JCE, \quad E|_{U(\sP)}= JCD. \] In other words, $E=D^c$ and $E^c=-D$ (which are equivalent conditions). Thus $\oR^{\bT}_{X,x}(\sO_{\bP^1}^{\hol}(B))^{\sigma}$ is equivalent to the groupoid of triples $(U(\sP), D,f)$, with $D$ flat and $[D,D^c]=0$. \end{proof} \begin{remark}\label{sigmark} When $B= \Mat_n(\Cx)$, this shows that framed pluriharmonic local systems correspond to framed $\sigma$-invariant sections of the twistor functor. Without the framings, this will not be true in general, since a $\sigma$-invariant section of the coarse moduli space will give a non-degenerate bilinear form which need not be positive definite.
Note that for $U \subset \bP^1$, the set of isomorphism classes of $\cR^{\bT, \Cx}_{X}(\Mat_n(\sO_U^{\hol}))$ is the set of sections over $U$ of the twistor space $TW \to \bP^1$ of \cite[\S 3]{simfil}. \end{remark} \begin{remark}\label{twistordgrk} Although we have seen that $\cR^{\bT}_X$ together with its comultiplication encodes all the available information about twistor structures on moduli spaces of local systems, it does not carry information about higher homotopy and cohomology groups. There is, however, a natural extension of $\cR^{\bT}_X$ to differential graded Fr\'echet algebras, by analogy with \cite{htpy,mhs}. This would involve taking $\tD$ to be a hyperconnection $\tD\co \sT_0 \to \prod_n \sA^{n+1}_X\ten_{\sA^0_X}\ad \sT_n(n+1)$. The structures of \S \ref{cohosn} can all be recovered from this functor. \end{remark} \subsection{Topological twistor representation spaces} In this section, we will show that by considering continuous homomorphisms rather than $*$-homomorphisms, we can describe the entire semisimple locus of the twistor family from $(E^J_{X,x})_{\PN}$, rather than just the $\sigma$-equivariant sections. Given a point $(a: b) \in \bP^1(\Cx)$ and a complex Banach algebra $B$, we can generalise the construction of Proposition \ref{twistoruniversal} and consider the set $\oR^{\bT,\Cx}_{X,x}((a:b)_*B)$. This consists of torsors with flat $(ad+b\dc)$-connections. \begin{definition} Define $\oT_{X,x,n}:= \coprod_{(a:b) \in \bP^1(\Cx)} \oR^{\bT,\Cx}_{X,x}((a:b)_*\Mat_n(\Cx))$. \end{definition} Note that $\oT_{X,x,n}$ inherits a $\Gal(\Cx/\R)$-action from $\oR^{\bT,\Cx}_{X,x} $. We can also give $\oT_{X,x,n}$ a complex analytic structure, by saying that a map $f\co U \to \oT_{X,x,n}$ from an analytic space $U$ consists of an analytic map $f\co U \to \bP^1(\Cx)$ together with an element of $\oR^{\bT,\Cx}_{X,x}(f_*(\Mat_n\O_U))$. We will now investigate the underlying topological structure.
\begin{remark}\label{cfdh} The adjoint action of $\GL_n(\Cx)$ on $\oT_{X,x,n}$ is continuous, and indeed compatible with the complex analytic structure. This allows us to consider the coarse quotient $\oT_{X,x,n}//\GL_n(\Cx) $, which is the Hausdorff completion of the topological quotient, equipped with a natural complex analytic structure over $\bP^1(\Cx)$. A straightforward calculation shows that this coarse moduli space is precisely the Deligne-Hitchin twistor space, as constructed in \cite{hitchin} and described in \cite{simpsonwgt2} \S 3. \end{remark} Now, Proposition \ref{sigmasections} induces a map $\pi_{\bT}\co \oR^J_{X,x}(\Mat_n(\Cx)) \by \bP^1(\Cx) \to \oT_{X,x,n}$. For an explicit characterisation, note that an $(ad+b\dc)$-connection $\tD$ lies in $ \oR^J_{X,x}(\Mat_n(\Cx))$ if and only if \[ [\tD, JC\tD]=0, \] and that $JC\tD$ is a $(-\bar{b}d+\bar{a}\dc) = \sigma_{\bP}^{-1}(ad+b\dc)$-connection. \begin{definition} Given a flat $(ad+b\dc)$-connection $\tD$ on a finite-dimensional $\C^{\infty}$ vector bundle $\sV$ on $X$ for $(a:b) \in \bP^1(\Cx)- \{\pm i\}$, we say that $(\sV,\tD)$ is \emph{semisimple} if the local system $\ker \tD$ is so. \end{definition} \begin{definition} Define $\oT_{X,x,n}^{\st}\subset \oT_{X,x,n}$ by requiring that the fibre over any point of $\bP^1(\Cx)- \{\pm i\}$ consist of the semisimple objects, that the fibre over $i$ be $\oR^{\Dol,\st}_{X,x}(\Mat_n(\Cx))$ (\S \ref{Dolsn}), and that the fibre over $-i$ be its conjugate. We give $\oT_{X,x,n}^{\st}$ the subspace topology, so a map $K \to \oT_{X,x,n}^{\st}$ is continuous if the projection $f \co K \to \bP^1(\Cx)$ is so, and the map lifts to an element of $ \oR^{\bT,\Cx}_{X,x}(f_*C(K, \Mat_n(\Cx)))$. 
\end{definition} \begin{theorem}\label{HTtopthm} For any positive integer $n$, there is a natural homeomorphism $\pi_{\bT,\st}$ over $\bP^1(\Cx)$ between the space $\Hom_{\pro(\Ban\Alg)}(E^J_{X,x},\Mat_n(\Cx)) \by \bP^1(\Cx)$ with the topology of pointwise convergence, and the space $\oT_{X,x,n}^{\st}$. \end{theorem} \begin{proof} The homeomorphism is given on the fibre over $(a:b) \in \bP^1(\Cx)$ by $\pi_{\bT,\st}(U(\sP), D,f) = (\sP, aD+bD^c, f)$. The proofs of Theorems \ref{Hsstopthm} and \ref{Hsttopthm} adapt to show that $\pi_{\bT,\st}$ induces homeomorphisms on fibres over $\bP^1(\Cx)$, and in particular is an isomorphism on points. The same arguments also show that $\pi_{\bT,\st} $ and $\pi_{\dR} \circ \pi_{\bT,\st}^{-1}\co \oT_{X,x,n}^{\st}\to \oR^{\dR,\ss}_{X,x}(\Mat_n(\Cx))$ are continuous, so the result follows from Theorem \ref{Hsstopthm}. \end{proof} \begin{definition}\label{FDTcat} Let $\FD\oT_{X,x}^{\st}$ be the category of triples $(V,p, (a:b))$ for $V\in \FD\Vect$ and $(p,(a:b)) \in \oT_{X,x,n}^{\st}$ where $n=\dim V$. Morphisms are defined by adapting the formulae of Definition \ref{FDDRcat}. Write $\eta_x^{\bT,\st}\co \FD\oT^{\st}_{X,x} \to \FD\Vect\by \bP^1(\Cx)$ for the fibre functor $(V,p, (a:b)) \mapsto (V, (a:b))$. \end{definition} \begin{proposition}\label{PNTprop} The $C^*$-algebra $ (E^J_{X,x})_{\PN}\hten_{\R} C(\bP^1(\Cx), \Cx)$ is isomorphic to the ring of continuous additive endomorphisms of $\eta_x^{\bT, \st}$, and this isomorphism is $\Gal(\Cx/\R)$-equivariant. \end{proposition} \begin{proof} The proof of Proposition \ref{PNssprop} carries over, replacing Theorem \ref{Hsstopthm} with Theorem \ref{HTtopthm}, and Lemma \ref{PNlemma2} with Lemma \ref{PNlemma2base}. \end{proof} \begin{definition}\label{stTdef} Given a $k$-normal real $C^*$-algebra $B$ over $C(\bP^1(\Cx), \Cx)^{\Gal(\Cx/\R)}$, we may regard $B$ as an $\O_{\bP^1}^{\hol}$-algebra via the inclusion of holomorphic functions in continuous functions.
Then define $ \oR^{\bT,\st}_{X,x}(B)\subset \oR^{\bT}_{X,x}(B)$ to be the subspace consisting of those $p$ for which \[ (\psi(p),(a:b)) \in \oT_{X,x,k}^{\st} \] for all $(a:b) \in \bP^1(\Cx)$ and $\psi \co B \to \Mat_k(\Cx) $ with $\psi|_{C(\bP^1(\Cx), \Cx)^{\Gal(\Cx/\R)}}= \ev_{(a:b)}\id$. \end{definition} \begin{corollary}\label{PNrepT} For any $k$-normal real $C^*$-algebra $B$ over $C(\bP^1(\Cx), \Cx)^{\Gal(\Cx/\R)}$, there is a natural isomorphism between $ \oR^{\bT,\st}_{X,x}(B)^{\Gal(\Cx/\R)}$ and the set of continuous algebra homomorphisms $E^J_{X,x} \to B$. \end{corollary} \begin{proof} The proof of Corollary \ref{PNrepss} carries over, replacing Proposition \ref{PNssprop} with Proposition \ref{PNTprop}. \end{proof} \bibliographystyle{alphanum} \addcontentsline{toc}{section}{Bibliography}
\section{Introduction} Current cellular networks are expected to evolve towards heterogeneous networks (HetNets) to cope with the explosive demand for wireless data \cite{HetNet1-Qualcomm11,HetNet2-Ghosh12}. This requires service providers (SPs) to deploy small-cells in addition to traditional macro-cells. While macro-cells, such as cellular base stations, typically have large transmission power and are therefore capable of covering users within a large region, small-cells have much lower transmission power and are used to provide service to a local area. This characteristic makes small-cells an attractive choice in spectrum bands with strict power regulations. For example, in 2012, the FCC proposed to create a new Citizens Broadband Service in the 3550--3650 MHz band (3.5GHz Band), previously utilized for military and satellite operations \cite{FCC}. Due to the low power constraint within this band, only small-cells can be deployed. For SPs that want to use this band to expand their service, this type of bandwidth regulation needs to be taken into account in determining optimal resource allocation strategies. While the deployment of small-cells will increase overall data capacity, it also complicates network management and resource allocation for SPs. This includes how to differentiate pricing schemes and optimally split their limited bandwidth resources between macro- and small-cells, taking into account the fact that users in the network are also heterogeneous in terms of mobility patterns. Moreover, these decisions are further complicated by regulatory restrictions on certain bands, such as the designation of new spectrum in the 3.5GHz band only for small-cells. \subsection{Contributions} Our paper analyzes how regulatory requirements that certain bands be used only for small-cells affect competitive service providers allocating bandwidth between macro- and small-cell networks.
We also analyze the associated social welfare. At present new spectrum is typically apportioned based on an auction, and another goal is to provide insight into the social welfare achieved via winner-take-all auctions. Given the policy implications of such an analysis, we briefly discuss these results at a high level; detailed results are in Section~\ref{Sec:SW}. The scenario that we consider in the paper is the following. The spectrum regulator needs to allocate $B$ units of newly available bandwidth to two competitive SPs. The SPs have initial endowments of licensed bandwidth $B_{1}^o$ and $B_{2}^o$, and each gets a proportion of the new bandwidth, denoted as $B_{1}^n$ and $B_2^n$. The regulator determines the rules for this assignment using an appropriate auction procedure; e.g., an allocation of $B_{1}^n=B$ and $B_{2}^n=0$ corresponds to the outcome of a winner-take-all auction that SP 1 wins. The initial bandwidth can be allocated by each SP to either macro-cells or small-cells. In contrast, the new bandwidth can only be used for small-cells, as enforced by the regulatory constraint. In Figure~\ref{Fig:SW_2} we present a typical result that we obtain for different partitions of the new bandwidth between the two SPs. The blue line represents the social welfare achieved when the SPs cooperate and the new bandwidth comes with no restrictions; the red line represents the social welfare achieved when the SPs cooperate and the new bandwidth is restricted to small-cell use; and the green curve represents the social welfare achieved when the SPs compete with the new bandwidth restricted only to small-cell use. From the results of our previous work \cite{Competition5-Chen15}, it is easily verified that the blue line is also the social welfare achieved when the SPs compete and there is no restriction on the usage of the new bandwidth.
It is clear from Figure~\ref{Fig:SW_2} that the introduction of restrictions on the usage of new bandwidth results in a reduction in social welfare, and additionally, only specific partitions of the new bandwidth will lead to this reduced social welfare value being achieved with competing SPs. It should also be noted that the assignments that result from a winner-take-all auction yield much lower social welfare; these correspond to the two endpoints in the figure. Numerical investigations also show that the SP with the larger initial bandwidth endowment obtains the highest marginal revenue increase from any new bandwidth, with the other SP losing revenue when this occurs. The larger SP would thus bid higher to reduce the influence of the smaller SP. Thus one of the main contributions of our paper is to highlight the possibility of such negative outcomes with the specific designation chosen for small-cells, and also to point out the necessity of carrying out such an analysis before deciding on other regulatory constraints for newly available spectrum bands. \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth,height=0.35\textwidth]{SW_2.pdf} \caption{Social welfare versus $B_1^n$ with large $B$. Here $\text{SW}_{\text{wo}}^{*}$ is the optimal social welfare without regulatory constraints, $\text{SW}_{\text{w}}^{*}$ is the optimal social welfare with regulatory constraints, and $\text{SW}_{\text{w}}^{\text{NE}}$ is the equilibrium social welfare with regulatory constraints.} \label{Fig:SW_2} \end{figure} We now summarize our other contributions in this paper: 1. \emph{Incorporating Bandwidth Regulations into the HetNet Model}: Prior related work that considers bandwidth allocation assumes SPs are free to split spectrum between macro- and small-cells in any way. Here, we add small-cell bandwidth constraints that impose a minimum amount of small-cell bandwidth on the SPs.
This is primarily motivated by the FCC's 3.5GHz Band, which can only be used to deploy small-cells. The introduction of such bandwidth constraints has a direct influence on the optimal pricing and bandwidth allocation strategies of SPs. 2. \emph{Characterizing the impact of the regulatory constraints for SPs in both monopoly and competitive scenarios}: We analyze scenarios with both a monopoly SP and competitive SPs. We show that in the monopoly scenario the SP simply increases its small-cell bandwidth to the required minimum amount if its original small-cell bandwidth is less than the constraint. This applies to both social welfare maximization and revenue maximization. With two competitive SPs, there always exists a unique Nash equilibrium that depends on the regulatory constraints. We illustrate this by considering three cases corresponding to whether the original equilibrium allocation satisfies the two constraints. We characterize the equilibrium for each case. 3. \emph{Social Welfare Analysis}: We also quantify the influence of the regulatory constraints on the social welfare. We conclude that if the equilibrium without constraints violates the constraints, then social welfare loss is inevitable. However, the social welfare loss is always bounded, and the worst case happens when the spectrum regulator requires the SPs to allocate all bandwidth only to small-cells. In this extreme case there are no macro-cells, and consequently none of the mobile users receives wireless service.
In contrast, small-cell and macro-cell service are considered to be separate services in \cite{Opt4-Chen11,Opt5-Lin11,Opt6-Duan13,Opt7-Chen13,Competition1-Zhang13, Competition2-Hossain08, Competition5-Chen15,Investment-Chen16, Unlicensed1-Chen16}, the same as our model in this paper. Only optimal pricing is studied in \cite{Opt1-Shetty09, Opt3-Yun12, Competition1-Zhang13, Competition2-Hossain08}, while \cite{Opt2-Gussen11, Opt4-Chen11,Opt5-Lin11,Opt6-Duan13,Opt7-Chen13,Competition5-Chen15,Investment-Chen16,Unlicensed1-Chen16} consider joint pricing and bandwidth allocation, as in this paper. Additionally, except for \cite{Competition1-Zhang13, Competition2-Hossain08,Competition5-Chen15,Unlicensed1-Chen16, Investment-Chen16} that include the competitive scenario with multiple SPs, all the other work assumes only one SP. In this paper, we investigate both monopoly and competitive scenarios, and adopt a model similar to that in our previous work \cite{Opt7-Chen13, Competition5-Chen15} (which did not consider bandwidth regulations). The rest of the paper is organized as follows. We present the system model in Section \ref{Sec:System Model}. We consider monopoly and competitive scenarios in Section \ref{Sec:Monopoly} and Section \ref{Sec:Competitive}, respectively. Social welfare analysis is in Section \ref{Sec:SW}. We conclude in Section \ref{Sec:Conclusions}. All proofs of the main results can be found in the appendices. \section{System Model}\label{Sec:System Model} We adopt the mathematical model in our previous work \cite{Competition5-Chen15} for the analysis. We now describe the different aspects of it while pointing out the additional elements considered here. \subsection{SPs} We consider a HetNet with $N$ SPs providing separate macro- and small-cell service to all users. Denote the set of SPs as $\mathcal{N}$. Each SP is assumed to operate a two-tier cellular network consisting of macro- and small-cells that are uniformly deployed over a given area. 
We further assume all SPs have the same density of infrastructure deployment. We normalize the density of macro-cells to one. The density of small-cells is denoted as $N_S$. In our setting, macro-cells have high transmission power, and therefore can provide a large coverage range. In contrast, small-cells have low transmission power, and consequently a local coverage range. Each SP $i$ holds an exclusive license for a total bandwidth $B_i$.\footnote{For the monopoly SP scenario, we will ignore the subscript.} Since we assume all macro- and small-cells use separate bands, each SP $i$ needs to decide how to split its bandwidth into $B_{i,M}$, bandwidth allocated to macro-cells, and $B_{i,S}$, bandwidth allocated to small-cells. When determining this partition, every SP is required to conform to (possible) bandwidth regulations enforced by the spectrum regulator. Specifically, SP $i$ is required to guarantee a minimum amount of bandwidth allocated to small-cells, and this lower bound is denoted as $B_{i,S}^0$. For a fixed bandwidth allocation, the total achievable data rate provided by the macro-cells of SP $i$ is $C_{i, M}=B_{i, M}R_0$, where $R_0$ is the (average) spectral efficiency of the macro-cells. The total available rate in small-cells of SP $i$ is given by $C_{i, S}=\lambda_S B_{i, S}R_0$, where $\lambda_S>1$ reflects the increase in spectral efficiency due to smaller cell size, and possibly greater deployment density. Each SP $i$ provides separate macro- and small-cell services and charges the users a price \emph{per unit rate} for associating with its macro-cells or small-cells, namely, $p_{i, M}$ and $p_{i, S}$.
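To make the notation concrete, the capacity model and the regulatory constraint can be expressed in a few lines (a minimal sketch with illustrative parameter values; the function names are ours, not from the paper):

```python
# Capacity model sketch (illustrative values; names are ours):
# C_M = B_M * R0, C_S = lambda_S * B_S * R0, with the regulatory
# constraint B_S >= B_S0 and the budget B_M + B_S <= B_i, B_M >= 0.

def capacities(B_M, B_S, R0=1.0, lam_S=4.0):
    """Total achievable rates of the macro- and small-cell tiers."""
    return B_M * R0, lam_S * B_S * R0

def feasible(B_M, B_S, B_total, B_S0):
    """Check a bandwidth split against all constraints."""
    return B_M >= 0 and B_S >= B_S0 and B_M + B_S <= B_total

C_M, C_S = capacities(B_M=6.0, B_S=4.0)
print(C_M, C_S)                                    # 6.0 16.0
print(feasible(6.0, 4.0, B_total=10.0, B_S0=5.0))  # False: B_S < B_S0
```

Here $\lambda_S=4$ and $R_0=1$ are arbitrary choices; the small-cell tier delivers $\lambda_S$ times the rate per unit bandwidth, as in the model above.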
Denote the densities of mobile users and fixed users as $N_m$ and $N_f$, respectively. Note that the heterogeneity of the users can also arise from an equivalent model that assumes $(N_m+N_f)$ as the total density of users, who are mobile with probability $N_m/(N_m+N_f)$ and stationary with probability $N_f/(N_m+N_f)$. After user association, let $K_{i, M}$ and $K_{i, S}$ denote the mass of users connected to the macro- and small-cells of SP $i$, respectively. (Note that $K_{i, S}$ consists of fixed users only, whereas $K_{i, M}$ can consist of both mobile and fixed users.) \begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth,height=0.3\textwidth]{System_Model.pdf} \caption{System Model.} \label{Fig:System_Model} \end{figure} \subsection{User and SP Optimization} Figure \ref{Fig:System_Model} illustrates the network and market model. We now introduce the optimization problems corresponding to both users and SPs. We assume each user is endowed with a utility function, $u(r)$, which only depends on the service rate it gets. For simplicity of analysis, in this paper we assume that all users have the same $\alpha$-fair utility functions \cite{MoWalrand} with $\alpha \in (0,1)$: \begin{equation}\label{Eqn:UtilityFn} u(r)=\frac{r^{1-\alpha}}{1-\alpha}, \quad \alpha \in (0,1). \end{equation} This restriction enables us to explicitly calculate many equilibrium quantities, which appears to be difficult for more general classes of utility. Furthermore, this class is widely used in both networking and economics, where it is a subset of the class of iso-elastic utility functions.\footnote{In general $\alpha$-fair utilities require that $\alpha \geq 0$ to ensure concavity; requiring $\alpha>0$ ensures strict concavity but allows us to approach the linear case as $\alpha \rightarrow 0$. The restriction of $\alpha<1$ ensures that utility is non-negative so that a user can always ``opt out" and receive zero utility. 
Note also that as $\alpha \rightarrow 1$, we approach the $\log(\cdot)$ (proportional fair) utility function.} Each user chooses the service by maximizing its net payoff $W$, defined as its utility less the service cost. For a service with price $p$, this is equivalent to: \begin{equation} \label{Opt:User Optimization} W= \max_{r\geq 0}\quad u(r)-pr. \end{equation} For $\alpha$-fair utility functions, (\ref{Opt:User Optimization}) has the unique solution: \begin{equation}\label{Eqn:UserRateOpt} r^{*}= D(p)=(u^{\prime})^{-1}(p)=(1/p)^{1/\alpha}, \end{equation} where $D(p)$ here can be seen as the user's rate demand function. The maximum net payoff for a user is thus: \begin{equation} \label{Eqn:User Net Payoff} W^*(p) = u(D(p))- pD(p)=\frac{\alpha}{1-\alpha}p^{1-\frac{1}{\alpha}}. \end{equation} Recall that fixed users can choose between any macro- or small-cell service offered by any SP, while mobile users can only choose the macro-cell service provided by an SP. Here, we assume mobile users have priority connecting to macro-cells, which means macro-cells will only admit fixed users after the service requests of all mobile users have been addressed. For the association rules, we adopt the same process described in \cite{Competition5-Chen15}. That is, users always choose the service with the lowest price and fill the corresponding capacity. If multiple services have the same price, then the users are allocated across them in proportion to the capacities. Once a particular service capacity is exhausted, the leftover demand continues to fill the remaining services in the same fashion. Each SP determines the bandwidth split and service prices to maximize its revenue, which is the aggregate amount paid by all users associating with its macro- and small-cells. Meanwhile, SPs also need to conform to the constraints on small-cell bandwidth allocation.
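Before turning to the SPs' problem, the closed forms (\ref{Eqn:UserRateOpt}) and (\ref{Eqn:User Net Payoff}) can be sanity-checked numerically; the sketch below (our illustration, not the paper's code) verifies that $D(p)$ maximizes the user's net payoff and that $W^*(p)$ matches a direct evaluation:

```python
# alpha-fair utility u(r) = r^(1-alpha)/(1-alpha), 0 < alpha < 1 (our sketch).

def utility(r, alpha):
    return r ** (1 - alpha) / (1 - alpha)

def demand(p, alpha):
    # D(p) = (u')^{-1}(p): solves u'(r) = r^(-alpha) = p.
    return p ** (-1.0 / alpha)

def net_payoff(p, alpha):
    # W*(p) = alpha/(1-alpha) * p^(1 - 1/alpha).
    return alpha / (1 - alpha) * p ** (1 - 1 / alpha)

alpha, p = 0.5, 2.0
r_star = demand(p, alpha)                      # 0.25
direct = utility(r_star, alpha) - p * r_star   # u(D(p)) - p D(p) = 0.5
assert abs(direct - net_payoff(p, alpha)) < 1e-12
# D(p) should beat any nearby rate choice:
assert all(direct >= utility(r, alpha) - p * r
           for r in (0.1, 0.2, 0.3, 0.5, 1.0))
```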
Specifically, SP $i$ solves the following optimization problem: \begin{subequations} \begin{align} \text{maximize} \quad &S_i=p_{i, M}K_{i, M}D(p_{i, M})+p_{i, S}K_{i, S}D(p_{i, S}), \label {Opt:SP's optimization} \\ \text{subject to} \quad& B_{i, M}+B_{i, S}\le B_i, B_{i, M}\ge 0, B_{i, S}\ge B_{i,S}^0, \label {Opt:SP's optimization-Constraint1}\\ & 0<p_{i, M}, p_{i, S}<\infty. \label {Opt:SP's optimization-Constraint2} \end{align} \end{subequations} Alternatively, a social planner, such as the FCC, may seek to allocate bandwidth and set prices to maximize social welfare, which is the sum utility of all users, subject to the same constraints (\ref{Opt:SP's optimization-Constraint1}) and (\ref{Opt:SP's optimization-Constraint2}). This is given by: \begin{align} \text{maximize} \quad &\text{SW}=\sum\limits_{i=1}^{N}[K_{i, M}u(D(p_{i, M}))+K_{i, S}u(D(p_{i, S}))].\label{Opt:SW optimization} \end{align} \subsection{Sequential Game and Backward Analysis} We model the bandwidth and price adjustments of SPs in the network as a two-stage process: \begin{enumerate} \item Each SP $i$ first determines its bandwidth allocation $B_{i, M}, B_{i, S}$ between macro-cells and small-cells. Denote the aggregate bandwidth allocation profile as $\mathbf{B}$. \item Given $\mathbf{B}$ (assumed known to all SPs), the SPs announce prices for both macro-cells and small-cells. The users then associate with SPs according to the previous user association rule. \end{enumerate} We then do backward induction. That is, we first derive the price equilibrium under a fixed bandwidth allocation. We then characterize the bandwidth allocation equilibrium based on the price equilibrium obtained. \section{Monopoly Scenario with A Single SP}\label{Sec:Monopoly} We first study the bandwidth allocation when a single SP is operating in the network. 
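As a numerical preview of this section, the sketch below (our illustration, with arbitrary parameter values) evaluates the monopoly SP's revenue over possible splits. It assumes mobile users associate with macro-cells, fixed users with small-cells, and market-clearing prices in each tier, so the clearing price solves $K D(p)=C$, i.e.\ $p=(K/C)^{\alpha}$, and the tier revenue is $pC = K^{\alpha}C^{1-\alpha}$:

```python
# Monopoly revenue sketch under market-clearing prices (our illustration):
# tier revenue = p * C with p = (K/C)^alpha, i.e. K^alpha * C^(1-alpha).

def tier_revenue(K, C, alpha):
    return (K ** alpha) * (C ** (1 - alpha)) if C > 0 else 0.0

def monopoly_revenue(B_S, B, R0, lam_S, N_m, N_f, alpha):
    B_M = B - B_S
    return (tier_revenue(N_m, B_M * R0, alpha)
            + tier_revenue(N_f, lam_S * B_S * R0, alpha))

# Grid search over the split; without a binding constraint, the maximiser
# satisfies B_S / B_M = (N_f / N_m) * lam_S^(1/alpha - 1).
B, R0, lam_S, N_m, N_f, alpha = 10.0, 1.0, 4.0, 1.0, 1.0, 0.5
grid = [i * B / 10000 for i in range(1, 10000)]
B_S_star = max(grid, key=lambda b: monopoly_revenue(b, B, R0, lam_S, N_m, N_f, alpha))
ratio = N_f * lam_S ** (1 / alpha - 1)
closed_form = ratio * B / (ratio + N_m)            # = 8.0 for these values
assert abs(B_S_star - closed_form) < B / 1000
# A binding regulatory constraint B_S0 > closed_form simply shifts B_S up:
B_S0 = 9.0
B_S_constrained = max(B_S_star, B_S0)              # = 9.0
```

The grid search recovers the unconstrained optimal split, consistent with the allocation characterized in the theorem below; when the small-cell minimum binds, the split moves to the constraint boundary.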
This is similar to the analysis in our previous work \cite{Opt7-Chen13}, except that here we have an additional regulatory constraint that imposes a minimum bandwidth allocation to small-cells. This added constraint will change the optimal bandwidth allocation strategy for the monopoly SP. In \cite{Opt7-Chen13} it is concluded that for the set of $\alpha$-fair utility functions we use in this paper, the revenue-maximizing and social welfare-maximizing bandwidth allocations turn out to be the same. The following theorem states that the optimal bandwidth allocations under both objectives are still the same, but that a sufficiently large required minimum bandwidth for small-cells changes the optimal allocation. \begin{theorem}[Optimal Monopoly Bandwidth Allocation] \label{Thm:Monopoly Bandwidth Allocation} For a monopoly SP, the optimal revenue-maximizing and social welfare-maximizing bandwidth allocation strategies are the same and can be determined by the following cases: 1. If $B_S^0\le \frac{N_f\lambda_S^{1/\alpha-1}B}{N_f\lambda_S^{1/\alpha-1}+N_m}$, the optimal bandwidth allocation remains the same as that without the regulatory constraint. In this case it is given by: \begin{subequations} \begin{align} &B_S^{\text{SW}}=B_S^{\text{rev}}=\frac{N_f\lambda_S^{1/\alpha-1}B}{N_f\lambda_S^{1/\alpha-1}+N_m}, \\ &B_M^{\text{SW}}=B_M^{\text{rev}}=\frac{N_m B}{N_f\lambda_S^{1/\alpha-1}+N_m}. \end{align} \end{subequations} 2. If $B_S^0> \frac{N_f\lambda_S^{1/\alpha-1}B}{N_f\lambda_S^{1/\alpha-1}+N_m}$, the optimal bandwidth allocation changes to: \begin{equation} B_S^{\text{SW}}=B_S^{\text{rev}}=B_S^0, \quad B_M^{\text{SW}}=B_M^{\text{rev}}=B-B_S^0. \end{equation} Consequently, there will be both a welfare and a revenue loss if this case applies. In both cases the optimal macro- and small-cell service prices are market-clearing prices, i.e., the prices that equalize the total rate demand and the total rate supply in both cells.
\end{theorem} Theorem \ref{Thm:Monopoly Bandwidth Allocation} states that if the original optimal bandwidth allocation without the bandwidth regulations already satisfies the imposed constraint, then the SP just keeps the same bandwidth allocation. If the original bandwidth allocation violates the regulatory constraint, then the SP increases the small-cell bandwidth to the required level. This is because the added regulatory constraint does not change the concavity of the revenue or social welfare function with respect to the small-cell bandwidth, and further increasing the bandwidth allocation to small-cells would only lead to more revenue or social welfare loss. \section{Competitive Scenario with Two SPs}\label{Sec:Competitive} In this section we turn to the competitive scenario with two SPs, each of which maximizes its individual revenue. By the results of \cite{Competition5-Chen15}, the price equilibrium under any fixed bandwidth allocation is always the market-clearing price. We therefore focus on the bandwidth allocation Nash equilibrium. In the case without the additional regulatory constraint, the results of \cite{Competition5-Chen15} show that there exists a unique Nash equilibrium, at which the bandwidth allocations of the two SPs are given by: \begin{subequations} \begin{align} &B_{1,S}^{\text{NE}}=\frac{N_f\lambda_S^{1/\alpha-1}B_1}{N_f\lambda_S^{1/\alpha-1}+N_m}, B_{1,M}^{\text{NE}}=\frac{N_m B_1}{N_f\lambda_S^{1/\alpha-1}+N_m},\\ &B_{2,S}^{\text{NE}}=\frac{N_f\lambda_S^{1/\alpha-1}B_2}{N_f\lambda_S^{1/\alpha-1}+N_m}, B_{2,M}^{\text{NE}}=\frac{N_m B_2}{N_f\lambda_S^{1/\alpha-1}+N_m}. \end{align} \end{subequations} With the additional regulatory constraints, we have the following theorem characterizing the corresponding Nash equilibrium between the two SPs. \begin{theorem} \label{Thm:NE} With two SPs and a constraint on the minimum small-cell bandwidth, the Nash equilibrium exists and is unique.
Moreover, the total bandwidth allocated to small-cells by the two SPs is no less than that without the regulatory constraints. \end{theorem} Theorem \ref{Thm:NE} states that the existence and uniqueness of the Nash equilibrium are preserved after adding the regulatory constraints. This can be proved using methods similar to those in our previous work \cite{Competition5-Chen15}, with some modifications. The last part of the theorem may be more subtle than it appears. One may try to argue that if any of the constraints is violated, the corresponding SP needs to increase its bandwidth allocation to small-cells, so that the total bandwidth allocated to small-cells surely increases. However, this logic does not carry through if only one constraint is violated at the Nash equilibrium without the constraints. In that case, the SP whose constraint is violated must increase its bandwidth allocation to small-cells. However, the other SP, whose equilibrium small-cell bandwidth allocation without regulations satisfies the constraint, may potentially \emph{decrease} its bandwidth in small-cells in response to the increase in bandwidth allocation of its competitor. In that case, determining the change in total bandwidth requires a more detailed analysis. Nonetheless, Theorem \ref{Thm:NE} indicates that even in that case the total bandwidth in small-cells does not decrease. We will present a specific example later. Depending on whether the regulatory constraints are violated at the Nash equilibrium without the constraints, there are three cases to consider separately. We will see that, in each case, the Nash equilibrium behaves differently. {\it Case A: Both constraints are satisfied.} The new Nash equilibrium is the same as the Nash equilibrium without regulations. {\it Case B: Both constraints are violated.} The Nash equilibrium without regulations is no longer valid. The following proposition characterizes the properties of the new Nash equilibrium.
\begin{proposition}\label{Prop:Both Violated NE} In case B, the Nash equilibrium with regulatory constraints is one of the following types:\\ Type I. Both SPs increase their small-cell bandwidth allocations to exactly the required amount, i.e., $B_{1,S}=B_{1,S}^0, B_{2,S}=B_{2,S}^0$.\\ Type II. One SP increases its small-cell bandwidth exactly to the required amount, while the other SP increases further beyond the required amount, i.e., $B_{1,S}=B_{1,S}^0, B_{2,S}>B_{2,S}^0 \text{ or } B_{1,S}>B_{1,S}^0, B_{2,S}=B_{2,S}^0$. \end{proposition} It is conceptually easy to characterize the necessary and sufficient conditions for the first type of Nash equilibrium, since at such an equilibrium the marginal revenue with respect to a unit increase of small-cell bandwidth must be non-positive for both SPs. This is expressed analytically by the two corresponding inequalities: \begin{align}\label{Eqn:Boundary Point NE} \nonumber &\lambda_S{R_S^0}^{-\alpha}-{R_M^0}^{-\alpha}-\frac{\alpha \lambda_S^2B_{i,S}^0R_0}{N_f}{R_S^0}^{-\alpha-1}+\\ &\frac{\alpha(B_i-B_{i,S}^0)R_0}{N_m}{R_M^0}^{-\alpha-1} \le 0, \text{ for } i=1,2. \end{align} Here, $R_S^0$ and $R_M^0$ are defined as follows: \begin{subequations} \begin{align} &R_S^0=\frac{\lambda_S(B_{1,S}^0+B_{2,S}^0)R_0}{N_f},\\ &R_M^0=\frac{(B_1-B_{1,S}^0+B_2-B_{2,S}^0)R_0}{N_m}. \end{align} \end{subequations} {\it Case C: Only one constraint is violated.} Without loss of generality, we assume that at the Nash equilibrium without regulations, only SP 2's small-cell bandwidth allocation falls below the required threshold. In this case, the new Nash equilibrium is characterized by the following proposition: \begin{proposition}\label{Prop:One Violated NE} In case C, the Nash equilibrium with regulatory constraints is one of the following two types:\\ Type I. Both SPs set their small-cell bandwidth allocations to exactly the required amount, i.e., $B_{1,S}=B_{1,S}^0, B_{2,S}=B_{2,S}^0$.\\ Type II.
Only SP 2 allocates exactly the required minimum amount of bandwidth to small-cells, i.e., $B_{1,S} > B_{1,S}^0, B_{2,S}=B_{2,S}^0$. \end{proposition} Note that equation (\ref{Eqn:Boundary Point NE}) again gives the conditions under which a type I equilibrium arises. While the type I Nash equilibria in cases B and C both have the SPs allocating exactly the required minimum amount to small-cells, they are reached quite differently. In case B both SPs increase their small-cell bandwidth allocations, whereas in case C, one SP increases its small-cell bandwidth while the other SP decreases its small-cell bandwidth. Another difference worth pointing out is that in case C, the SP whose small-cell bandwidth allocation without regulations violates the constraint always operates exactly at the required minimum point at the new Nash equilibrium, while in a type II equilibrium for case B it increases its small-cell bandwidth beyond the minimum point. \\ Next we use a specific example in Figure \ref{Fig:NE_Region} to illustrate the different Nash equilibrium regions as a function of the small-cell bandwidth constraints discussed in the preceding cases. The system parameters for this case are: $\alpha=0.5, N_m=N_f=50, R_0=50, \lambda_S=2, B_1=2, B_2=1$. In this example the equilibrium small-cell bandwidth allocations without the regulatory constraints are: $B_{1,S}=1.34, B_{2,S}=0.67$. Region A corresponds to the Nash equilibrium in case A, which is also the equilibrium without the regulatory constraints. Regions B.I and B.II correspond to the type-I and type-II Nash equilibria in case B, where both constraints are violated at the original equilibrium, and the same rule applies to Regions C.I and C.II.
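The equilibrium values quoted in this example follow from the closed-form split given earlier (the same functional form appears, per SP, in Theorem \ref{Thm:Monopoly Bandwidth Allocation}); a sketch with a function name of our own:

```python
def unconstrained_split(B, N_f, N_m, lam_S, alpha):
    """Equilibrium (B_S, B_M) for one SP with total bandwidth B, no regulation:
    B_S = w*B/(w+N_m) with w = N_f * lam_S^(1/alpha - 1)."""
    w = N_f * lam_S ** (1.0 / alpha - 1.0)
    B_S = w * B / (w + N_m)
    return B_S, B - B_S

# Example parameters: alpha=0.5, N_m=N_f=50, lam_S=2, B_1=2, B_2=1.
B1S, B1M = unconstrained_split(2.0, 50, 50, 2.0, 0.5)
B2S, B2M = unconstrained_split(1.0, 50, 50, 2.0, 0.5)
assert abs(B1S - 4.0 / 3.0) < 1e-9   # 1.33..., quoted as 1.34
assert abs(B2S - 2.0 / 3.0) < 1e-9   # 0.66..., quoted as 0.67
```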
\begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth,height=0.35\textwidth]{NE_Region.pdf} \caption{Nash equilibrium regions for 2 SPs as the bandwidth regulations vary.} \label{Fig:NE_Region} \end{figure} \section{Social Welfare}\label{Sec:SW} In this section we focus on the social welfare analysis. In our previous work \cite{Opt7-Chen13,Competition5-Chen15}, we showed that for the set of $\alpha$-fair utility functions used here, the bandwidth allocation at equilibrium is always socially optimal in both the monopoly and competitive scenarios. With the additional regulatory constraints on the minimum amount of small-cell bandwidth, this is not necessarily true. Obviously, if the equilibrium without regulations already satisfies the regulatory constraints, then the preceding result still holds; this is case A in the previous section. Otherwise, a social welfare loss is incurred compared to the case without regulatory constraints. Denote by $\text{SW}_{\text{wo}}^{*}, \text{SW}_{\text{w}}^{\text{NE}}$ the equilibrium social welfare without and with regulatory constraints, respectively. The following theorem states that the loss in social welfare is lower bounded, and that the worst case occurs when the regulatory constraints require both SPs to allocate all bandwidth to small-cells. \begin{theorem}[Social Welfare] \label{Thm:SW} Compared to the case without the regulatory constraints, a social welfare loss is incurred when the following inequality holds: \begin{equation} \frac{N_f\lambda_S^{1/\alpha-1}}{N_f\lambda_S^{1/\alpha-1}+N_m} \sum\limits_{i\in \mathcal{N}}B_i< \sum\limits_{i\in \mathcal{N}}B_{i,S}^0. \end{equation} We have: \begin{equation} \frac{\text{SW}_{\text{w}}^{\text{NE}}}{\text{SW}_{\text{wo}}^{*}} \ge \Big(\frac{N_f\lambda_S^{1/\alpha-1}}{N_f\lambda_S^{1/\alpha-1}+N_m}\Big)^{\alpha}, \end{equation} where the bound is tight exactly when $B_{i,S}^0=B_i, \forall i\in \mathcal{N}$.
\end{theorem} In practice, a spectrum regulator, such as the FCC, may seek an optimal way to allocate newly available spectrum so that the market equilibrium yields the largest social welfare. We next use our results to analyze the case where the spectrum regulator needs to allocate a total available new bandwidth $B$ to two competitive SPs. SPs 1 and 2 each have initial licensed bandwidth $B_{1}^o$ and $B_{2}^o$, and get a proportion of the new bandwidth, denoted as $B_{1}^n$ and $B_2^n$. The initial bandwidth is free to use for either macro-cells or small-cells. In contrast, the new bandwidth can only be used for small-cells. As mentioned before, this is motivated by the 3.5 GHz band, where the FCC imposes a very low power limit, so that the band can only be used for small-cell deployment \cite{FCC}. The spectrum regulator needs to determine the optimal split of the new bandwidth such that the social welfare under the market equilibrium is maximized. We consider the following three scenarios for any possible bandwidth partition $(B_1^n, B_2^n)$: 1) The optimal social welfare without regulatory constraints, $\text{SW}_{\text{wo}}^{*}$. Note, from \cite{Competition5-Chen15}, this is the same as the equilibrium social welfare without regulatory constraints. This will be used as a benchmark. 2) The optimal social welfare with the regulatory constraints, which we denote as $\text{SW}_\text{w}^{*}$. 3) The equilibrium social welfare with regulatory constraints, $\text{SW}_{\text{w}}^{\text{NE}}$. The next theorem compares the three scenarios depending on the total amount of newly available bandwidth $B$. \begin{theorem}\label{Thm:Three Scenarios} Depending on the amount of new bandwidth $B$, there exists a bandwidth threshold \begin{equation} T=\frac{(B_1^o+B_2^o)N_f\lambda_S^{1/\alpha-1}}{N_m}, \end{equation} and we have the following conclusions:\\ 1.
If $B>T$, then $\text{SW}_{\text{w}}^{\text{NE}}\le \text{SW}_{\text{w}}^{*} < \text{SW}_{\text{wo}}^{*}$. The first inequality is binding, i.e., $\text{SW}_{\text{w}}^{\text{NE}}= \text{SW}_{\text{w}}^{*} < \text{SW}_{\text{wo}}^{*}$, if and only if equation (\ref{Eqn:Boundary Point NE}) holds. \\ 2. If $B\le T$, then $\text{SW}_{\text{w}}^{\text{NE}}\le \text{SW}_{\text{w}}^{*} = \text{SW}_{\text{wo}}^{*}$. The first inequality is binding, i.e., $\text{SW}_{\text{w}}^{\text{NE}} = \text{SW}_{\text{w}}^{*} = \text{SW}_{\text{wo}}^{*}$, if and only if the following condition is met: \begin{equation} B_1^n\in \Big[ B-\frac{B_2^oN_f\lambda_S^{1/\alpha-1}}{N_m}, \frac{B_1^oN_f\lambda_S^{1/\alpha-1}}{N_m} \Big], B_2^n=B-B_1^n. \end{equation} \end{theorem} Theorem \ref{Thm:Three Scenarios} states that if the total amount of newly available bandwidth is too large, then whether the two competing SPs maximize revenue or social welfare, some social welfare loss always occurs compared to the case without regulatory constraints. This can be explained as follows. With the set of $\alpha$-fair utility functions, the socially optimal strategy without regulatory constraints is to allocate bandwidth to macro- and small-cells in a fixed proportion. If the total amount of newly available bandwidth is not large, simply following the original allocation satisfies the regulation requirement and is therefore socially optimal. However, when the amount of new bandwidth becomes large, since the new bandwidth is required to be allocated to small-cells only, the original optimal proportion is no longer attainable given the small-cell bandwidth constraints. As a result, social welfare loss relative to the original allocation scheme becomes inevitable.
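Condition (\ref{Eqn:Boundary Point NE}), which also appears as the binding condition in case 1 of Theorem \ref{Thm:Three Scenarios}, is straightforward to evaluate numerically. The sketch below follows the marginal-revenue expressions derived in the appendix (the $\alpha$ factor on the macro-cell term comes from $u''(r)=-\alpha r^{-\alpha-1}$); the function name and argument layout are our own:

```python
def type_I_equilibrium(B, B_S0, N_f, N_m, lam_S, R0, alpha):
    """True if marginal revenue w.r.t. small-cell bandwidth is non-positive
    for both SPs at the floor point (B_S0[0], B_S0[1]), i.e. condition
    (Eqn:Boundary Point NE) holds. B and B_S0 are per-SP pairs."""
    RS0 = lam_S * sum(B_S0) * R0 / N_f
    RM0 = (sum(B) - sum(B_S0)) * R0 / N_m
    for Bi, Si in zip(B, B_S0):
        lhs = (lam_S * RS0 ** (-alpha) - RM0 ** (-alpha)
               - alpha * lam_S ** 2 * Si * R0 / N_f * RS0 ** (-alpha - 1)
               + alpha * (Bi - Si) * R0 / N_m * RM0 ** (-alpha - 1))
        if lhs > 0:
            return False
    return True

# With the earlier example parameters (alpha=0.5, N_m=N_f=50, R_0=50,
# lam_S=2, B_1=2, B_2=1) and floors (1.5, 0.8) above the unconstrained
# equilibrium, both marginal revenues are negative: a type-I point results.
assert type_I_equilibrium((2.0, 1.0), (1.5, 0.8), 50, 50, 2.0, 50.0, 0.5)
```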
Further, note that the bandwidth threshold at which this loss begins to occur is proportional to $\frac{N_f}{N_m}$, so that when there are more fixed users willing to use small-cells, the threshold increases. It is also increasing in $\lambda_S$, the gain in spectral efficiency of small-cells, and in the initial allotment of licensed bandwidth. Theorem \ref{Thm:Three Scenarios} also indicates that when the amount of newly available bandwidth is below the threshold, there exists a bandwidth split that achieves the optimal benchmark social welfare. This result suggests that if a spectrum regulator is planning to enforce bandwidth regulations on newly released bands, it should carefully consider the possible impacts on the market equilibrium without regulations. In particular, if the amount of newly available bandwidth is too large, imposing such regulations might lead to social welfare loss compared to the scenario where the regulations were not imposed. On the other hand, if the amount of new spectrum is small compared to the existing bands already licensed to SPs in the market, the influence of the bandwidth regulations on the market equilibrium is minor and controllable, and therefore no loss in social welfare is incurred. Figures \ref{Fig:SW_2} and \ref{Fig:SW_1} illustrate Theorem \ref{Thm:Three Scenarios}. The system parameters we use in both cases are: $\alpha=0.5, N_m=N_f=50, R_0=50,\lambda_S=4, B_1^o=1, B_2^o=1.2$. The figures differ in the amount of new bandwidth: in Figure \ref{Fig:SW_2}, $B=10$, while in Figure \ref{Fig:SW_1}, $B=6$. We can see that when the amount of newly available bandwidth is not large, there is a bandwidth split that achieves the optimal benchmark social welfare. However, when the amount of new bandwidth is large relative to the amount of original bandwidth of the SPs, no bandwidth split scheme achieves the optimal social welfare obtained without the constraints.
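Both the threshold $T$ of Theorem \ref{Thm:Three Scenarios} and the welfare-ratio bound of Theorem \ref{Thm:SW} are easy to check numerically; a sketch with our own helper names, using the figure parameters for $T$ and a separate small instance for the bound:

```python
def threshold(B1o, B2o, N_f, N_m, lam_S, alpha):
    """T = (B1o + B2o) * N_f * lam_S^(1/alpha - 1) / N_m (Theorem 4)."""
    return (B1o + B2o) * N_f * lam_S ** (1.0 / alpha - 1.0) / N_m

def social_welfare(B_S, B, N_f, N_m, lam_S, R0, alpha):
    """Sum of alpha-fair utilities at market clearing when a total small-cell
    bandwidth B_S (out of B) is allocated to small-cells."""
    u = lambda r: r ** (1.0 - alpha) / (1.0 - alpha)
    return N_f * u(lam_S * B_S * R0 / N_f) + N_m * u((B - B_S) * R0 / N_m)

# Figure parameters: alpha=0.5, N_m=N_f=50, lam_S=4, B_1^o=1, B_2^o=1.2.
T = threshold(1.0, 1.2, 50, 50, 4.0, 0.5)
assert abs(T - 8.8) < 1e-9           # B=10 exceeds T (loss); B=6 does not

# Worst case of Theorem 3: forcing all bandwidth into small-cells.
N_f = N_m = 50; lam_S = 2.0; alpha = 0.5; R0 = 50.0; B = 3.0
w = N_f * lam_S ** (1.0 / alpha - 1.0)
opt = social_welfare(w * B / (w + N_m), B, N_f, N_m, lam_S, R0, alpha)
worst = social_welfare(B, B, N_f, N_m, lam_S, R0, alpha)
assert abs(worst / opt - (w / (w + N_m)) ** alpha) < 1e-9   # bound is tight
```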
\begin{figure}[htbp] \centering \includegraphics[width=0.45\textwidth,height=0.35\textwidth]{SW_1.pdf} \caption{Social welfare versus $B_1^n$ with small $B$. } \label{Fig:SW_1} \end{figure} \section{Conclusions}\label{Sec:Conclusions} In this paper we considered the impact of bandwidth regulations on resource allocation in a HetNet. We showed that by imposing a required minimum bandwidth allocation to small-cells, the optimal bandwidth allocation strategies of SPs can change dramatically. While this change is relatively straightforward in the monopoly scenario, it turns out to be much more complicated in the competitive scenario with two SPs. Specifically, the existence and uniqueness of Nash equilibria are still preserved after adding the regulatory constraints. However, the equilibria can exhibit very different structures and characteristics as the constraints vary. We also showed that the introduction of such regulations may shift the equilibrium away from an efficient allocation, thus incurring some social welfare loss. Our results suggest that adding such regulatory constraints complicates the resource allocation schemes in HetNets. While these constraints may be introduced by the spectrum regulator to address other concerns, they can ultimately reduce the social welfare achieved. For future directions, we plan to study other policy considerations that we did not take into account in this paper, such as the implications for innovation, spectrum caps, and the management of harmful interference. \section{Proof of Theorem \ref{Thm:Monopoly Bandwidth Allocation}} The proof of Theorem \ref{Thm:Monopoly Bandwidth Allocation} is straightforward: we can apply the results in \cite{Opt7-Chen13} directly. In particular, if the original optimal bandwidth allocation still holds with the added regulation constraints, then we are done. Otherwise we have to increase the small-cell bandwidth allocation.
Both the revenue and the social welfare are concave functions of the small-cell bandwidth allocation, and at the original optimum the marginal revenue and social welfare increase per unit of added bandwidth are equal for macro- and small-cells. Hence, when the small-cell bandwidth increases, we enter the region where the marginal revenue and social welfare increase per unit of added small-cell bandwidth is smaller than that of macro-cells. As a result, the best option is to operate at the boundary point, i.e., to allocate exactly the required minimum amount of bandwidth to small-cells. \section{Proof of Theorem \ref{Thm:NE}} As this is a concave game, to prove the existence and uniqueness of the Nash equilibrium we can use the uniqueness theorem (Theorem 6) in Rosen's paper \cite{Rosen65}, which gives a sufficient condition in terms of a certain matrix being negative definite. In our previous work \cite{Competition5-Chen15} it was proved that the required matrix is negative definite for the corresponding game without bandwidth restrictions. Here the only difference is that we have additional linear constraints on the bandwidth allocations, which has no effect on this result. Therefore, the same arguments also apply here. As for the second part of the theorem, denote by $R_S$ and $R_S^{\prime}$ the average service rate in small-cells without and with the regulatory constraints, respectively. Suppose that at the Nash equilibrium with constraints, the total bandwidth allocated to small-cells is less than that without the regulatory constraints; then we have: \begin{equation} R_S^{\prime}<R_S. \end{equation} Denoting $D_i=\frac{\partial S_i}{\partial B_{i,S}}$, it follows that: \begin{align} \nonumber D_1+D_2=&\lambda_S\Big[2u^{\prime}(R_S)+R_Su^{\prime\prime}(R_S)\Big]-\\ &\Big[2u^{\prime}(R_M)+R_Mu^{\prime\prime}(R_M)\Big].
\end{align} Since $2u^{\prime}(r)+ru^{\prime\prime}(r)$ decreases in $r$, and since at the Nash equilibrium without constraints $D_1=D_2=0$, we can conclude that $D_1+D_2>0$ at the equilibrium with constraints. As a result, at least one of $D_1$ or $D_2$ must be greater than 0 at equilibrium. Without loss of generality, suppose $D_2>0$ at the equilibrium with constraints. Given $D_2>0$, it must be that $B_{2,S}^{\prime}=B_2$, and $D_1\le 0$. This is because if $D_1>0$ also held, then $B_{1,S}^{\prime}=B_1$, which contradicts the fact that $R_S^{\prime}<R_S$. Then at the Nash equilibrium without constraints, we have: \begin{align} \nonumber D_1=&\lambda_Su^{\prime}(R_S)+\lambda_S^2\frac{B_{1,S}R_0}{N_f}u^{\prime\prime}(R_S)\\ \nonumber &-u^{\prime}(R_M)-\frac{B_{1,M}R_0}{N_m}u^{\prime\prime}(R_M)\\ \nonumber =&\lambda_S\Big[u^{\prime}(R_S)+R_Su^{\prime\prime}(R_S)\Big]-\Big[u^{\prime}(R_M)+R_Mu^{\prime\prime}(R_M)\Big]\\ & -\lambda_S^2\frac{B_{2,S}R_0}{N_f}u^{\prime\prime}(R_S)+\frac{B_{2,M}R_0}{N_m}u^{\prime\prime}(R_M)=0. \label{Appen_B_Inequality1} \end{align} At the equilibrium with constraints, we similarly have: \begin{align}\label{Appen_B_Inequality2} \nonumber D_1=& \lambda_S\Big[u^{\prime}(R_S^{\prime})+R_S^{\prime}u^{\prime\prime}(R_S^{\prime})\Big]- \Big[u^{\prime}(R_M^{\prime})+R_M^{\prime}u^{\prime\prime}(R_M^{\prime})\Big]\\ & -\lambda_S^2\frac{B_2R_0}{N_f}u^{\prime\prime}(R_S^{\prime})\le 0. \end{align} However, since $u^{\prime}(r)+ru^{\prime\prime}(r)$ decreases in $r$, since $u^{\prime\prime}(r)<0$ and increases in $r$, and since $R_S^{\prime}<R_S$ and $R_M^{\prime}>R_M$, the inequality sign in (\ref{Appen_B_Inequality2}) must be reversed. Therefore we have a contradiction.
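The monotonicity facts invoked above ($2u'+ru''$ and $u'+ru''$ decreasing, $u''<0$ and increasing) are easy to confirm numerically for $\alpha$-fair utilities, since $u'(r)=r^{-\alpha}$ and $u''(r)=-\alpha r^{-\alpha-1}$; a small sketch:

```python
# Sanity checks of the monotonicity facts used in the proof, for alpha = 0.5.
alpha = 0.5
up  = lambda r: r ** (-alpha)                    # u'(r)
upp = lambda r: -alpha * r ** (-alpha - 1.0)     # u''(r)

grid = [0.5, 1.0, 2.0, 4.0]
for g in (lambda r: 2 * up(r) + r * upp(r),      # equals (2-alpha)*r^(-alpha)
          lambda r: up(r) + r * upp(r)):         # equals (1-alpha)*r^(-alpha)
    vals = [g(r) for r in grid]
    assert all(x > y for x, y in zip(vals, vals[1:]))   # strictly decreasing
assert all(upp(r) < 0 for r in grid)                    # u'' negative ...
vals = [upp(r) for r in grid]
assert all(x < y for x, y in zip(vals, vals[1:]))       # ... and increasing
```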
\section{Proof of Theorem \ref{Thm:SW}} Applying the same arguments we used in proving Theorem \ref{Thm:Monopoly Bandwidth Allocation}, we know that increasing the small-cell bandwidth allocation beyond the original equilibrium point only decreases the social welfare; the worst case therefore occurs at the point where all bandwidth is required to be allocated to small-cells. \section{Proof of Theorem \ref{Thm:Three Scenarios}} For scenarios 2) and 3), as long as the sum of the small-cell bandwidth allocations of the two SPs at the equilibrium without the constraints is at least the sum of the regulation constraints, the two scenarios coincide. This requires: \begin{equation} \frac{N_f\lambda_S^{1/\alpha-1}(B_1^o+B_1^n+B_2^o+B_2^n)}{N_f\lambda_S^{1/\alpha-1}+N_m}\ge B_1^n+B_2^n, \end{equation} which yields the following condition: \begin{equation} B\le \frac{(B_1^o+B_2^o)N_f\lambda_S^{1/\alpha-1}}{N_m}. \end{equation} Otherwise, if the preceding condition is not satisfied, the social welfare corresponding to the second scenario is also less than that corresponding to the first scenario, i.e., $\text{SW}_{\text{w}}^{*} < \text{SW}_{\text{wo}}^{*}$. On the other hand, the only possible way for scenario 3) to achieve the optimal social welfare corresponding to scenario 1) is to ensure that the Nash equilibrium is exactly the same as the one without the regulation constraints. This requires: \begin{equation} \frac{N_f\lambda_S^{1/\alpha-1}(B_1^o+B_1^n)}{N_f\lambda_S^{1/\alpha-1}+N_m}\ge B_1^n, \frac{N_f\lambda_S^{1/\alpha-1}(B_2^o+B_2^n)}{N_f\lambda_S^{1/\alpha-1}+N_m}\ge B_2^n, \end{equation} which can be simplified to: \begin{subequations} \begin{align} &B\le \frac{(B_1^o+B_2^o)N_f\lambda_S^{1/\alpha-1}}{N_m},\\ &B_1^n\in \Big[ B-\frac{B_2^oN_f\lambda_S^{1/\alpha-1}}{N_m}, \frac{B_1^oN_f\lambda_S^{1/\alpha-1}}{N_m} \Big].
\end{align} \end{subequations} When $\text{SW}_{\text{w}}^{*} < \text{SW}_{\text{wo}}^{*}$, this means that the required minimum total bandwidth allocation to small-cells is larger than the total small-cell bandwidth at the equilibrium without constraints. Since the social welfare is maximized at the original equilibrium and is a concave function of the total small-cell bandwidth, in this case the welfare-maximizing point under the constraints is exactly the required minimum small-cell bandwidth point, i.e., where $B_{1,S}+B_{2,S}=B_{1,S}^0+B_{2,S}^0$. The only possibility for scenario 3) to achieve this is to have $B_{1,S}=B_{1,S}^0, B_{2,S}=B_{2,S}^0$ at the Nash equilibrium with constraints. As a result, equation (\ref{Eqn:Boundary Point NE}) becomes exactly the condition for $\text{SW}_{\text{w}}^{\text{NE}}=\text{SW}_{\text{w}}^{*}$. \end{appendices}
\section{Introduction} \label{S0} In the coevolving voter model, an actor placed at a node of a social network either copies the state of his/her neighbor, or rewires the link so as to connect to another node in the same state as him/herself \cite{vzqz}. In the related Monte Carlo scheme, this decision is taken by randomly selected actors. The related probability of rewiring is $p$, and the probability of copying is $1-p$. In the mean-field scheme, differential equations have been written \cite{vzqz} to evaluate the process averaged over many decisions. The model is known to be useful for modeling catalytic reactions and the formation of public opinion \cite{cfl,frkr}. In its standard version, two states of actors are allowed and the mean degree of a node is uniform \cite{vzqz,cfl,vzqe}. Recently, the model has been generalized to the case where the mean degree of a node depends on its state \cite{toru}. Here we present a further generalization of the mean-field calculations, where both the mean degrees $\mu_a, \mu_b$ and the rewiring probabilities $p_a, p_b$ depend on the node states $a,b$. The derivation is exactly the same as in \cite{toru}, so we do not copy it here; it is only that $p_a, p_b$ are used instead of $p$.\\ Here we are particularly interested in the interpretation of the act of rewiring as a demonstration of homophily. According to \cite{mimc}, 'homophily is the principle that a contact between similar people occurs at a higher rate than among dissimilar people'. In Section \ref{S1} we argue that both rewiring and state copying have their counterparts in social reality. In Section \ref{S2} we provide the definitions of the model parameters. Two subsequent sections (\ref{S3} and \ref{S4}) are devoted to the analytical and numerical results of the model. In the last Section \ref{S5} we discuss the main results.
\section{Sociological background} \label{S1} In the process of simulated rewiring, individuals in both groups show a tendency to limit their social contacts to their own groups. This conforms to the Schelling segregation model. Thomas Schelling located the separation process in the spontaneous activities of individuals who want to be surrounded by people similar to themselves. The effect of these individual choices was the creation of unintended patterns of separation in the social structure \cite{tcs1}. As a result, the segregation between the two groups was the outcome of relatively mild individual preferences to live among neighbors similar to oneself. This process was strengthened when staying with other groups was indifferent or uncomfortable for the individual \cite{wfs}. \\ Schelling's model has also been confirmed in situations where group members are open to integration \cite{tcs2,ban}. The results of the analyses conducted by Junfu Zhang \cite{jnf1} regarding the spatial segregation process showed that separation occurs even when the majority of respondents declare that they prefer to live in integrated districts. This makes it possible to explain the mechanism of the persistence of segregation in societies that become more and more tolerant \cite{jnf2}. The research carried out by Fossett and Dietrich \cite{fsd} shows that segregation processes based on individual choices related to being surrounded by similar people are so strong that their occurrence does not depend on the structure of the city in which they happen. In addition, they are characterized by high durability because, as shown in \cite{wfs}, changes of individual preferences in this area are very slow.\\ Separation processes based on spontaneous actions of individuals, resulting from their individual preferences regarding being surrounded by similar persons, constitute a permanent and important factor causing the phenomenon of separation in society \cite{cort,fagi}.
They also affect the social network structures built individually by actors, which are based not only on the selection of people who create their nearest surroundings but also on engaging in interactions with other people in accordance with their preferences \cite{fagi}. In this sense, segregation processes based on the Schelling model are the effect of the natural tendency of individuals towards homophily, that is, to interact with those who are similar in terms of certain important characteristics, which has consequences for the structure of the network. As a result, personal networks are homogeneous in terms of many sociodemographic, behavioral and intrapersonal traits \cite{mimc}. The degree of homophily in social networks influences the level of the segregation processes taking place in them: the higher it is, the stronger the segregation tendencies that appear in social networks \cite{bss}. In this sense, homophily contributes to solidifying differences between groups, which deepens the separation between them. In addition, Golub and Jackson \cite{gj} have shown that it slows down the speed at which society reaches consensus.\\ Lazarsfeld and Merton \cite{lzm} distinguished two types of homophily. The first was related to the status of the individual (status homophily) and included basic sociodemographic dimensions that differentiate society, such as race, ethnicity, gender and age, and acquired traits, such as religion, education, occupation or specific behavioral patterns resulting from social status. The second type of homophily was based on values, attitudes and beliefs (value homophily). In this perspective, political orientation is a basis of homophily and can be a factor conducive to separation, which results in isolation among people sharing the same views.\\ As a result, actors with similar political beliefs will participate in groups corresponding to their preferences.
In a situation where an individual changes some personal traits that have so far influenced his membership in a particular group, e.g. political orientation, the group will be changed \cite{hrst}. This is important for the level of involvement of the actor in his activity in the new group. A similar process can be noticed in the case of conversions, where people who change affiliation, in this case a religious one, tend to have clear attitudes in harmony with the new group, often becoming ardent advocates of a new idea or ideology. This is due, among other things, to the fact that people (e.g. converts, migrants) who lose the frame of reference which has so far provided them with a social position are attracted by new ideas that compensate for those they lost \cite{elwr}. From the perspective of a convert, previous behavior or belief can be seen as anti-social or felt as a personal threat, resulting in the creation of a social form imagined as a community, but formulated as an organization that excludes the immoral and unifies the moral, according to the new beliefs \cite{snow}.\\ The idea of changing beliefs may be related to a continuum, where radical and total changes are distinguished, based on variations of Nock's distinction \cite{nock} between conversion and adhesion, to indicate the possibility of participating in religious groups and rituals without pursuing a new way of life (in the case of adhesion) \cite{maxh}. Therefore, converts tend to be closely integrated with the new group and to identify strongly with it. \section{The coevolving voter model} \label{S2} The medium of our considerations here is a random network, where nodes can be in two states, say $a$ and $b$. It is convenient to refer to these states as two spin states, up and down. In the coevolving voter model \cite{vzqz}, the temporal evolution of the network involves two processes, both of them driven by the existence of links between nodes in different states.
A randomly selected node either changes its state so as to adopt the state of its neighbor, or breaks the link and rewires it to join another node which is in the same state as itself. It is usually assumed that these actions are performed with probabilities $1-p$ and $p$, respectively. This dynamics ends up in one of two kinds of states: either nodes of different states are completely separated, or there remains some amount of contact. The role of the order parameter of this transition is played by the fraction $\rho$ of links between nodes in different states; they are termed 'active links'. In the former phase, $\rho=0$; in the latter, $\rho$ fluctuates around some value \cite{vzqz}.\\ The coevolving voter model has been generalized recently so as to allow for different numbers of neighbors of nodes in different states \cite{toru}. The related mean-field model has been developed accordingly; instead of one differential equation for the density $\rho(t)$ of active links we get three equations for $\rho(t)$, $m(t)$ and $\beta (t)$. There, the variable $m$ is defined as the order parameter of nodes: $m=(N_a-N_b)/N$, where $N_a$ ($N_b$) is the number of nodes in the state $a$ ($b$), and $N=N_a+N_b$ is the number of all nodes. Further, the variable $\beta$ is defined as the order parameter of links. Namely, we distinguish $M_{aa}$, i.e. the number of links from nodes $a$ directed towards nodes $a$, from the number $M_{ab}$ of links from nodes $a$ directed towards nodes $b$, and so on. Then, we get $M_{aa}-M_{bb}=N\mu\beta$, where $\mu$ is the mean number of neighbors (mean node degree), calculated over the whole network. Also, $M_{ab}=M_{ba}=N\mu \rho/2$. The equations for $m(t)$ and $\beta(t)$ match in such a way that a linear combination of these order parameters remains constant under time evolution \cite{toru}.
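For concreteness, the microscopic update rule described at the beginning of this section can be sketched as follows. This is a minimal Python illustration with a single rewiring probability $p$, not the simulation code used in \cite{vzqz,toru}; all function names are our own.

```python
import random

def update_step(state, adj, p, rng=random):
    """One update of the coevolving voter model.

    state: dict node -> spin ('a' or 'b')
    adj:   dict node -> set of neighbours (an undirected graph)
    p:     rewiring probability
    Returns True if the selected node had an active link, False otherwise.
    """
    i = rng.choice(list(state))
    # neighbours of i in the opposite state, i.e. the ends of active links
    active = [j for j in adj[i] if state[j] != state[i]]
    if not active:
        return False
    j = rng.choice(active)
    if rng.random() < p:
        # rewire: cut the active link i-j and attach i to a like-minded node
        candidates = [k for k in state
                      if state[k] == state[i] and k != i and k not in adj[i]]
        if candidates:
            k = rng.choice(candidates)
            adj[i].discard(j); adj[j].discard(i)
            adj[i].add(k); adj[k].add(i)
    else:
        # imitate: adopt the neighbour's state
        state[i] = state[j]
    return True

def density_active(state, adj):
    """The order parameter rho: fraction of links joining unlike nodes."""
    act = tot = 0
    for i in adj:
        for j in adj[i]:
            if i < j:
                tot += 1
                act += state[i] != state[j]
    return act / tot if tot else 0.0
```

Note that both branches conserve the number of links, so the mean degree $\mu$ is a constant of the dynamics, as assumed in the mean-field treatment below.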
Also, the subspace defined by the condition $\beta=m$ has been found to be invariant: once $\beta$ happens to be equal to $m$, it remains equal forever. All these results have been confirmed by extensive Monte Carlo simulations \cite{toru}. Yet both the mean-field and the MC method relied on the assumption that the probability $p$ of rewiring does not depend on the node state.\\ It is precisely this assumption which is relaxed here. The motivation of this step is the fact that most applications of the voter model \cite{cfl} and its coevolving versions \cite{vzqz} are phenomenological, such as competition between species, opinion formation and catalytic reactions. In such applications, any symmetry either needs justification or remains just a simplifying assumption. This is true in particular in social phenomena, where members of different groups can react in a measurably different way to varying external conditions \cite{curr,barb}. Our aim is to explore the coevolving voter model where $p_a \ne p_b$: the rewiring probabilities are different for different groups. The method is the mean-field approach, as applied in \cite{vzqz,toru}, and the whole approach presented here is a direct continuation of those papers.\\ We note that another approach to the voter model has been applied recently in \cite{lhdf} from the perspective of voting dynamics and political campaigns. Although the formalism applied there is different from the one applied here and more focused on game theory, the basic improvements of the theoretical description are parallel to our approach. In particular, the dynamics of the network structure reflects the applied strategies of actors at the network nodes, and these strategies depend on the node state.\\ In the next section we provide the obtained equations of motion for $\rho(t)$, $m(t)$ and $\beta (t)$, and a list of symbols which will be useful for interpretation. Section 3 is devoted to the results obtained by the numerical solution of the equations of motion.
In the last section we interpret these results in terms of social integration. \section{Analytical results} \label{S3} We have applied the same scheme of calculation as in Ref. \cite{toru}, with the distinction between the rewiring probabilities $p_a,p_b$ for nodes in different states as the only modification. The obtained equations of motion are as follows: \begin{eqnarray} \frac{d\rho}{dt} & = & \rho \frac{ \Big(-2 + m (2 - p_b) + (1 - \beta) \mu (1 - p_b) + p_b\Big)}{(1 - \beta) } \\ \nonumber & + & 2 \rho^2\frac{ \Big(1 - m - \mu (1 - \beta) \Big) (1 - p_b)}{ (1 - \beta)^2} \\ \nonumber & + & \rho \frac{\Big(-2 - m (2 - p_a) + (1 + \beta) \mu (1 - p_a) + p_a\Big)} {(1 + \beta) }\\ \nonumber & + & 2 \rho^2 \frac{ \Big(1 + m - \mu(1 + \beta) \Big) (1 - p_a)} { (1 + \beta)^2} \label{rdot} \end{eqnarray} \begin{equation} \frac{dm}{dt}= -\frac{(1+m)(1-p_a) \rho}{1+\beta} +\frac{(1-m)(1-p_b) \rho}{1-\beta} \label{mdot} \end{equation} \begin{eqnarray} \frac{d\beta}{dt} & = & \rho \frac{-(1 + \beta) \Big(1 - m + (1 - \beta) \mu \Big) p_b} {\mu(1 - \beta^2 )}\\ \nonumber & + & \rho \frac{ (1 - \beta) \Big(1 + m + (1 + \beta )\mu\Big) p_a } {\mu(1 - \beta^2 )} \label{bdot} \end{eqnarray} Some preliminary conclusions can be drawn from this form of the equations, even before they are solved numerically. First, contrary to the case $p_a=p_b$ considered in \cite{toru}, the subspace $m=\beta$ is not invariant anymore. Instead, putting $m=\beta$ in Eqns. (\ref{mdot},\ref{bdot}) we get \begin{eqnarray} \frac{dm}{dt} & = & \rho(p_a-p_b)\nonumber\\ \frac{d\beta}{dt} & = & \frac{\mu+1}{\mu}\rho(p_a-p_b) \label{meqb} \end{eqnarray} This means that $|m-\beta|$ increases in time if initially it is equal to zero. Further, we might like to reproduce the condition of stability of the stationary phase $\rho \ne 0$, as it was done in \cite{vzqz}. The reasoning is as follows: let us put $d\rho/dt$ in Eq. (\ref{rdot}) in the form $d\rho/dt=\rho(A+B\rho)$.
The boundary of the stability of $\rho = 0$ is where $A=0$. Having substituted $m=\beta$ there, we get the solution $p_a+p_b=2(\mu-2)/(\mu-1)$, which nicely coincides with the results of \cite{vzqz} obtained for $p_a=p_b$. Yet, as shown just above, the condition $m=\beta$ is not justified {\it a priori}. In the next section we will analyse the cases where this condition is fulfilled.\\ All quantities of interest can be expressed by the three variables $\rho$, $m$ and $\beta$. In particular, let us denote the mean degrees of nodes $a,b$ as $\mu_a,\mu_b$. Then we have \begin{eqnarray} \mu_a=\mu \frac{1+\beta}{1+m}\nonumber\\ \mu_b=\mu \frac{1-\beta}{1-m} \label{muab} \end{eqnarray} Again, for $m=\beta$ we get $\mu_a=\mu_b = \mu$; in this limiting case there is no topological difference between the nodes $a$ and $b$.\\ \section{Numerical results} \label{S4} The mean degree $\mu=4$ is kept fixed for all calculations. As expected, for $p_a=p_b$ and initial $\beta=m$, both $m$ and $\beta$ remain constant and equal to their initial values. The density $\rho$ of active bonds decreases with $p$ and vanishes at $p=2/3$, which agrees with the formula \cite{vzqz} $p_c=(\mu-2)/(\mu-1)$. The final value of $\rho$ does not change with the initial value of $\rho$, yet it depends on the value of $m$. To demonstrate this, in Fig. (\ref{f1}) we show three plots of $\rho (p)$, for different values of $m$.\\ \begin{figure}[!hptb] \begin{center} \includegraphics[width=\columnwidth]{41} \caption{Three plots of stationary values of $\rho (p)$ for $p\equiv p_a=p_b$ and $\beta=m$. The plots are made for $m=0$ (upper plot), $m=0.5$ (middle plot), and $m=0.8$ (lower plot). } \label{f1} \end{center} \end{figure} In Fig. (\ref{f2}), the data on stationary $\rho (p)$ are presented for $p\equiv p_a=p_b$, $\beta \ne m$ and various initial conditions.
As we see, a change of the initial value $\rho _0$ of the density of active links leads to a change of the transition probability $p_c$ where $\rho$ vanishes, leaving, however, the stationary value of $\rho$ unchanged. Further, a change of the initial values $m_0$ and $\beta _0$ not only influences the stationary value of $\rho$, but also may change the transition character, from continuous to discontinuous or back. This variation is shown in Fig. (\ref{f2}). Further, as long as the stationary value of $\rho$ is different from zero, the time evolution drives the system to $m=\beta$, i.e. to the state where the mean degree of all nodes is the same. However, once $\rho$ reaches the zero value, the system evolution is frozen and the values of the order parameters $m$ and $\beta$ remain different. In other words, the mean degrees of nodes depend on the index, $a$ or $b$. This fact is demonstrated in Fig. (\ref{f3}), for both the continuous and the discontinuous transition. \\ \begin{figure}[!hptb] \begin{center} \includegraphics[width=\columnwidth]{43} \caption{The plots of stationary values of $\rho (p)$ for $p\equiv p_a=p_b$ and $\beta \ne m$. The plots are made for the initial data as follows: A ($\rho_0 = 0.4$, $m_0 = 0.7$, $\beta _0 = -0.7$), B ($\rho_0 = 0.8$, $m_0 = 0.7$, $\beta _0 = -0.7$), C ($\rho_0 = 0.8$, $m_0 = 0.7$, $\beta _0 = -0.2$) and D ($\rho_0 = 0.5$, $m_0 = 0.8$, $\beta _0 = -0.6$).} \label{f2} \end{center} \end{figure} \begin{figure}[!hptb] \begin{center} \includegraphics[width=\columnwidth]{44} \includegraphics[width=\columnwidth]{47} \caption{The plots of stationary values of $m(p)$ and $\beta (p)$ for $p\equiv p_a=p_b$ and $\beta \ne m$.
The plots are made for the initial data as follows: A ($\rho_0 = 0.4$, $m_0 = 0.7$, $\beta _0 = -0.7$), B ($\rho_0 = 0.8$, $m_0 = 0.7$, $\beta _0 = -0.7$) (upper figure), C ($\rho_0 = 0.8$, $m_0 = 0.7$, $\beta _0 = -0.2$) and D ($\rho_0 = 0.5$, $m_0 = 0.8$, $\beta _0 = -0.6$) (lower figure).} \label{f3} \end{center} \end{figure} Our main result is related to the case when $p_a \ne p_b$. Then we get $\rho=0$ in all stationary states: the nodes with spin $a$ are not neighbors of the nodes with spin $b$. This outcome may happen within two scenarios: either $m$ and $\beta$ get frozen, or nodes of a given spin disappear. The scenario depends on the initial conditions, i.e. the values of $\rho _0$, $m_0$ and $\beta _0$. In Fig. (\ref{f4}), examples are shown of contour maps of $m$ on the plane ($p_b,p_a$). These maps give an insight into the areas where one of the spin orientations is absent or rare. These examples are shown to demonstrate that the outcome of the time evolution depends on the initial conditions. Yet, the fraction of spins $a$ is always at least 0.1 if $p_a > p_b$, and the fraction of spins $b$ is always at least 0.1 if $p_b > p_a$.\\ In Fig. (\ref{f5}) two examples are shown of the areas where one of the spin orientations is absent. For clarity of the picture, the surface plots are limited to the cases where $|m|>0.99$. The upper plot is made for the symmetric system, the same as in the upper plot in Fig. (\ref{f4}). The lower plot in Fig. (\ref{f5}) is for the same data as the middle plot in Fig. (\ref{f4}). \\ In Fig. (\ref{f6}) we show an exemplary dependence of the stationary density $\rho$ of active links on the position in the plane ($p_b,p_a$). It is only for $p_a=p_b$ that $\rho$ is different from zero.
This result is observed for all initial states ($\rho _0, m_0, \beta _0$).\\ \begin{figure}[!hptb] \begin{center} \includegraphics[width=0.75\columnwidth]{f3} \includegraphics[width=0.75\columnwidth]{f4} \includegraphics[width=0.75\columnwidth]{f5} \caption{Contour maps for the magnetization $m$ vs the position on the plane ($p_a,p_b$) for a selected set of initial conditions: $\rho _0=0.5$, $m_0=0.0$, and $\beta _0=0.0$ (upper plot), $\rho _0=0.5$, $m_0=0.8$, and $\beta _0=-0.6$ (middle plot) and $\rho _0=0.5$, $m_0=0.8$, and $\beta _0=0.6$ (lower plot). The lines of constant $m$ are made for $m=-0.8,-0.6,-0.4,-0.2,0.0,0.2,0.4,0.6,0.8$.} \label{f4} \end{center} \end{figure} \begin{figure}[!hptb] \begin{center} \includegraphics[width=\columnwidth]{14} \includegraphics[width=\columnwidth]{31} \caption{The magnetization $m$ vs the position on the plane ($p_a,p_b$) for a selected set of initial conditions: $\rho _0=0.5$, $m_0=0.0$, and $\beta _0=0.0$ (upper plot), $\rho _0=0.5$, $m_0=0.8$, and $\beta _0=-0.6$ (lower plot). For clarity, the plots are limited to the areas where $|m|>0.99$.} \label{f5} \end{center} \end{figure} \begin{figure}[!hptb] \begin{center} \includegraphics[width=\columnwidth]{23} \caption{The density of active links $\rho$ vs the position on the plane ($p_a,p_b$) for a selected set of initial conditions: $\rho _0=0.5$, $m_0=0.0$, and $\beta _0=0.0$. The result is $\rho=0$ except at the line $p_b=p_a<2/3$. } \label{f6} \end{center} \end{figure} \section{Discussion} \label{S5} When we treat the values of the probabilities $p_a$, $p_b$ as strategies of actors at nodes $a$ and $b$, we see that the community $a$ will not become extinct as long as $p_a>p_b$. Conversely, the community $b$ will not become extinct as long as $p_a<p_b$. Once both communities are engaged in the game and both apply these conditions, both end up in the state where $p_a=p_b=1$, i.e. they separate immediately, preserving their initial numbers $N_a,N_b$.
Another solution is when one community copies the rewiring probability $p$ of the other community. Then the results described in Figs. (\ref{f1},\ref{f2},\ref{f3}) remain valid.\\ We note that even in well-organized communities such collective decisions are rarely conscious. As we discussed in Section \ref{S1}, the patterns of separated communities emerge as unintended results of individual decisions. Following the classics, we could distinguish between 'community in itself' and 'community for itself', to conclude that the latter is a premature reification \cite{oxf}. Actually, the probabilities $p_a$, $p_b$ can be treated as strategies only within evolutionary game theory \cite{egt}, and not as conscious decisions which optimize an expected outcome.\\ For the modeling within the coevolving voter model, our results give two insights. First, following \cite{toru}, we demonstrate that for $p_a=p_b$ the final outcome of the time evolution of the system depends on its initial state. On the one hand, this result is interesting as an example of a collective memory in a social network, which remains out of sight of individual actors. On the other hand, the effect makes any systematic search of the model more difficult. It is tempting, for example, to determine the initial conditions $\rho_0, m_0, \beta_0$ which make the transition to the frozen state $\rho=0$ continuous or discontinuous. Obviously, such research should be done for the whole range of the rewiring probability $p_a=p_b$. This task, far from being completed, is only mentioned here.\\ Our second insight is related to the case when $p_a \ne p_b$. Paradoxically, in this case the result is clearer: $\rho$ is equal to zero in all stationary states, unless the non-generic condition $p_a=p_b$ is fulfilled. For social interpretation, this result is disappointing, as it indicates that the coexistence of communities in mutual contact is not possible.
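The short-time mechanism behind this result can be checked directly by integrating the mean-field equations of motion of Section \ref{S3}; a minimal forward-Euler sketch (our own illustration, not the solver used to produce the figures) is:

```python
# Forward-Euler integration of the mean-field equations for rho, m and beta
# with state-dependent rewiring probabilities p_a, p_b.

def derivs(rho, m, b, mu, pa, pb):
    """Right-hand sides d(rho)/dt, dm/dt, d(beta)/dt (b stands for beta)."""
    drho = (rho * (-2 + m*(2 - pb) + (1 - b)*mu*(1 - pb) + pb) / (1 - b)
            + 2*rho**2 * (1 - m - mu*(1 - b)) * (1 - pb) / (1 - b)**2
            + rho * (-2 - m*(2 - pa) + (1 + b)*mu*(1 - pa) + pa) / (1 + b)
            + 2*rho**2 * (1 + m - mu*(1 + b)) * (1 - pa) / (1 + b)**2)
    dm = -(1 + m)*(1 - pa)*rho/(1 + b) + (1 - m)*(1 - pb)*rho/(1 - b)
    db = rho * (-(1 + b)*(1 - m + (1 - b)*mu)*pb
                + (1 - b)*(1 + m + (1 + b)*mu)*pa) / (mu*(1 - b**2))
    return drho, dm, db

def integrate(rho, m, b, mu, pa, pb, dt=0.01, steps=200):
    """Crude explicit Euler scheme; rho is clipped at zero (frozen state)."""
    for _ in range(steps):
        drho, dm, db = derivs(rho, m, b, mu, pa, pb)
        rho, m, b = max(rho + dt*drho, 0.0), m + dt*dm, b + dt*db
        if rho == 0.0:
            break  # no active links left: the evolution stops
    return rho, m, b
```

On the symmetric subspace $m=\beta$ the derivatives reduce to Eq. (\ref{meqb}), so for $p_a \ne p_b$ the trajectory immediately leaves this subspace: starting from $\rho_0=0.5$, $m_0=\beta_0=0$ with $\mu=4$, $p_a=0.3$, $p_b=0.1$, both $m$ and $\beta$ become positive at different rates while $\rho$ decays.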
It is fair to quote David Landes here: "Where one group is strong enough to push another around and stands to gain by it, it will do so." \cite{dsl}. In game theory, a similar frustration has been raised by the famous Prisoner's Dilemma: a proof that in given circumstances, cooperation is not reasonable for an individual \cite{strf}. We hope that the coevolving voter model will find further and more constructive generalizations. \section*{Acknowledgements} One of the authors (K.K.) is grateful to Janusz Ho{\l}yst for hospitality and discussions, which initiated the calculations. This work was partly supported by the Faculty of Physics and Applied Computer Science (11.11.220.01/2) and by the Faculty of Humanities (11.11.430.158) AGH UST statutory tasks within subsidy of the Ministry of Science and Higher Education.
\section{Introduction} In multi-agent emergent communication research, one of the long-term goals is to develop machines that can successfully communicate between themselves but also with humans. Visual communication in the form of sketching and drawing, which has long preceded written language communication, can directly be interpreted by a human observer \citep{gelb1963study, eitz2012humans}. Recently~\citet{NEURIPS2021_drawingtocomm} demonstrated that it was possible to train a pair of agents parameterised by artificial neural networks to play a visual communication game by communicating through line drawings. Such a demonstration was often discussed as a logical next step in the emergent communication community but was challenging until the innovation of a differentiable rasterisation algorithm \citep{DBLP:journals/corr/abs-2103-16194} that allowed end-to-end training. The referential signalling games, inspired by \citet{Lewis1969-LEWCAP-4}, involved a `receiver' agent that was presented with a set of images, and a `sender' agent that was presented with a single image from the receiver's set. The goal of the game was for the sender agent to communicate to the receiver which image they had by drawing a picture using a fixed number of line strokes. Without specific inductive biases, it was demonstrated that successful communication could be achieved between the agents; however, the images themselves were not interpretable; they were in essence the visual equivalent of a hash code. \citet{NEURIPS2021_drawingtocomm}, however, went on to further demonstrate that, with appropriate representational biases, induced by additional loss functions during training, there was strong evidence that the drawings produced by the machine could also be used to communicate successfully with humans. In this paper, we investigate the factors that lead to shared visual representations of drawing between humans and machines.
More concretely, we explore the effect of different perceptual losses and visual encoders on making sketches produced by a drawing model ~\citep{NEURIPS2021_drawingtocomm} more interpretable to a human observer. We start by introducing a more powerful network for encoding visual information --- the pretrained Vision Transformer~\citep{dosovitskiy2020vit} from the CLIP framework~\citep{radford2021learning}, and then explore different approaches to inducing the network to produce more understandable drawings. We compare against the pretrained VGG16 feature extractor used in the original model and develop an approach that enables us to ask what the main semantic content of the drawings is using ``prompt engineering''~\citep{radford2021learning} with the CLIP model. \section{Background} Marr's classical theory of vision~\citep{Marr:1982:VCI:1095712} posited that sketch-like representations were a fundamental part of human vision. Works from neuroscience also suggest that simple line drawings capture the core visual features needed to recognise physical objects \citep{biederman1988surface} and activate the same brain regions used to distinguish natural scene categories from colour photographs \citep{walther2011simple}. Recent works on visual communication games \citep{NEURIPS2021_drawingtocomm, fernando2020language}, which are analogous to the approach taken by emergent communication research \citep{Havrylov2017, lazaridou2017multi, lazaridou2018emergence, bouchacourt2018agents}, have shown that agents can successfully convey information by drawing, which is a much simpler and more interpretable form than language. The main challenges with a language-based communication protocol developed between agents playing such games are its interpretability and grounding in natural language, which would allow a human to understand and use the information conveyed \citep{lowe2019pitfalls}.
Drawing and sketching, on the other hand, have been used since prehistoric times by humans to depict the surrounding visual world, long before the emergence of written language \citep{gelb1963study}. Sketches of a visual scene produced by a human could have various levels of abstraction: from very detailed and close to reality representations, to depictions of just the semantics expressed in terms of the objects in the scene and their relations, to low-level representations like edges and key points. This work, however, looks at the factors that can make a \textit{constrained} visual communication channel interpretable for human observers. \section{Extended Model with CLIP} In this work, we follow the game setup proposed by \cite{NEURIPS2021_drawingtocomm}, which was inspired by \cite{Havrylov2017}'s image guessing game. As illustrated in \cref{fig:overview}, the game requires the sender to communicate the target image to the receiver, by sketching 20 black straight lines. The receiver has to guess the correct image from a pool of photographs consisting of $K$ distractors plus the target. The model is trained end-to-end with a multi-class hinge loss; we refer to this game objective as $\gameloss$. However, the addition of a perceptual loss, $\perceploss$, has been shown to improve humans' ability to recognise the object depicted in the sketch \citep{NEURIPS2021_drawingtocomm}. An overview of the agents' architecture and game setup is shown in \cref{fig:overview}. In the ``original'' game setup, the photograph that the sender communicates about matches the target from the receiver's pool of images. \cite{NEURIPS2021_drawingtocomm}, however, proposed other game setups in which this requirement does not hold. For the purpose of this study, we only explore the original game variant, for which we set the number of distractors, $K$, to 99. It is worth noting that this sort of game would be very difficult for humans to play.
Guessing the target from a set of 100 images, which could contain multiple examples from the same class as the target, based only on a 20-line black and white sketch seems impossible for humans. The trained agents, however, manage to establish a visual communication protocol that can be used to successfully solve the task as shown in \cref{subsec:results}. \begin{figure}[ht!] \centering \resizebox{0.9\textwidth}{!}{\input{images/model-diagram/model.tikz}} \caption{\textbf{Model overview.} Two agents are trained to play an image guessing game in which they communicate through a simple line drawing. As indicated, unlike in \citet{NEURIPS2021_drawingtocomm}'s original model, we also experiment with a more powerful pretrained Vision Transformer encoder module (ViT-B/32 from CLIP~\citep{radford2021learning}). An additional perceptual loss (not shown) between the sender's input photo and output sketch induces the sketch to be more understandable. Figure adapted from \citet{NEURIPS2021_drawingtocomm}.} \label{fig:overview} \end{figure} \subsection{Inducing interpretable drawings with ``perceptual'' losses}\label{section:perceplosses} \citet{zhang2018perceptual} demonstrated that a loss computed using the weighted difference of features extracted across a range of low and intermediate layers in a pretrained VGG-16 \citep{Simonyan15} and AlexNet \citep{krizhevsky2012imagenet} CNN could predict human perception of the similarity of images. Building upon this idea \citet{NEURIPS2021_drawingtocomm} demonstrated that such a loss function could be used to induce a drawing agent to produce sketches that were significantly more interpretable by humans (in the sense of improved agent-human gameplay) than agents trained without such a loss. In the latter case, the agents could learn to play the game well and generalise to unseen images, but the drawings produced by the sender were essentially visual representations of hash codes with rather random sets of lines. 
With our modification to move from the VGG16 feature extraction network to the Vision Transformer (ViT) model of the CLIP framework \citep{radford2021learning}, we first explored whether a similar type of perceptual loss to that used by \citet{NEURIPS2021_drawingtocomm} would be possible. We extracted the features after each transformer residual block from both the sketch produced by the sender and the corresponding photo that was presented to the sender and used these to compute the loss. The loss itself involves normalising each layer's features, computing the sum squared difference between sketch ($\bm S$) and image ($\bm I$) features at each layer $l$ and performing a weighted sum over the layers, $L$, \begin{equation} \perceploss(\bm{S}, \bm{I}, \bm{w}) = \sum_{l \in L} \frac{\bm{w}_l}{n_l} \big\| \hat{\bm{S}}^{(l)} - \hat{\bm{I}}^{(l)} \big\|^2_2 \; , \end{equation} where $n_l$ is the dimensionality of the $l$-th layer feature. For the experiments presented here, we used fixed uniform weights for each layer, $w_l=1 \, \forall l \in L$. The effect of different weights is explored by \citet{NEURIPS2021_drawingtocomm}. We also investigate another method for generating interpretable drawings, inspired by the approach taken by \citet{CLIPDraw}. The key idea is that we add an extra loss, $\cliploss$, computed as the cosine distance (negative cosine similarity) between the encoded representations, $ f(\cdot)$ (e.g. from the last layer of the ViT encoder, or \verb|relu5_3| of the VGG16), of the generated sketch and input image. However, such a loss alone does not result in sketches that are perceptually similar to the input, so instead perceptual similarity is induced by computing the loss over a set of randomly \textit{transformed} sketches, $T$: \begin{equation} \cliploss(\bm{S}, \bm{I}) = - \sum_{t \in T} \frac{f(t(\bm S)) \cdot f(\bm I)}{\| f(t(\bm S))\| \|f(\bm I)\|} \; .
\end{equation} Following \citet{CLIPDraw}, $T$ consists of four randomly sampled transformations created by applying a random perspective transformation and random resizing and cropping in sequence. This crude modelling of physical spatial constraints is sufficient to induce the sketches to be interpretable. \subsection{Results}\label{subsec:results} We next present results of the visual communication game played with STL-10 images \citep{coates2011analysis} in the original game setup as described by \cite{NEURIPS2021_drawingtocomm}. \Cref{fig:resultsmodels} shows test communication success rates and sketches produced by models constructed with either a VGG16 or ViT image encoder, trained with only the game objective $\gameloss$, or with the addition of either of the two perceptual losses, $\perceploss$ and $\cliploss$, described in \cref{section:perceplosses}. For the experiments run with the ImageNet-pretrained VGG16 image encoder, we use the same parameters as specified in \cite{NEURIPS2021_drawingtocomm}. When replacing the image encoder with CLIP's pretrained ViT-B/32 model \citep{radford2021learning}, we found that increasing the hidden sizes of the Primitive Decoder, shown in \cref{fig:overview}, from 64 and 256 to 1024 each, significantly improves the quality of the sketches. It is also worth noting that a larger learning rate, specifically 0.001, is needed for this model to converge. Results show that models trained with only the game objective achieve the highest communication success rate, i.e. agents can successfully communicate about the target image, although they do so by drawing what look to us like random sets of lines. The addition of perceptual losses to $\gameloss$ leads to significantly more interpretable sketches. To assess the level of interpretability, we extended the human evaluation performed by \citet{NEURIPS2021_drawingtocomm} to include the models studied in this work.
This pilot study consists of pairing a pre-trained Sender agent with a human Receiver to play the visual communication game, through a user interface that presents the human with a sketch and 10 possible photographs to choose from. Each human participant played 30 games (i.e. identified 30 sketches) with $K=9$ distractors for each model configuration. The games are sampled randomly from all those possible within the STL-10 test dataset. In \cref{fig:resultsmodels}, we include human communication success rates averaged over the 6 participants taking part in this pilot study, for the games played with sketches generated by the corresponding models' Sender agent. \Cref{tab:humaneval-extented} shows the communication success between the agents playing the games included in this pilot study, the human success rate and an additional measure, the human class communication rate, that looks at the accuracy of humans at determining the class of the sketch rather than the specific instance. The model using CLIP's image encoder, pretrained on the task of matching (image, text) pairs, leads to better image representations, and, eventually, to sketches that can be more easily interpreted by humans than those produced by a model with a VGG16 encoder pretrained for the supervised image classification task on ImageNet. Similarly, we observed that the addition of either of the perceptual losses significantly improves humans' ability to recognise the main category depicted in the sketch. More sketches generated during testing can be found in \cref{app:moresketches}.
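For concreteness, the two objectives above can be sketched with plain Python lists standing in for feature tensors. This is an illustrative re-implementation only: in the actual model the features come from the VGG16 or ViT encoders, and the per-layer normalisation is channel-wise in the style of \citet{zhang2018perceptual}, which we approximate here with whole-vector normalisation.

```python
import math

def _unit(v):
    """L2-normalise a feature vector."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def percep_loss(sketch_feats, image_feats, weights=None):
    """Layer-weighted squared distance between normalised features.

    sketch_feats / image_feats: one feature vector per layer l in L.
    weights: the w_l of the loss definition; uniform w_l = 1 by default.
    """
    weights = weights or [1.0] * len(sketch_feats)
    total = 0.0
    for w, s, i in zip(weights, sketch_feats, image_feats):
        s_hat, i_hat = _unit(s), _unit(i)
        # w_l / n_l times the squared difference of normalised features
        total += w / len(s_hat) * sum((a - b) ** 2
                                      for a, b in zip(s_hat, i_hat))
    return total

def clip_loss(sketch_view_feats, image_feat):
    """Summed negative cosine similarity over the augmented sketch views T."""
    i_hat = _unit(image_feat)
    return -sum(sum(a * b for a, b in zip(_unit(s), i_hat))
                for s in sketch_view_feats)
```

Identical sketch and image features give a zero perceptual loss, while each perfectly aligned augmented view contributes $-1$ to the CLIP-style loss.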
\begin{figure}[tb] \centering \begingroup \setlength{\tabcolsep}{1pt} \renewcommand{\arraystretch}{0} \setlength\extrarowheight{0pt} \begin{tabular}{m{0.10\linewidth}m{0.821\linewidth}m{0.065\linewidth}} \begin{flushleft} {\scriptsize \textbf{model name,\\loss\\(agent acc.)}} \end{flushleft} & \includegraphics[trim=0 452 0 226,clip,width=\linewidth]{images/svrhm_per_class/clienc_sop_photos.png} & \begin{flushright} {\scriptsize \textbf{human acc. \\(stddev)}} \end{flushright} \\ \begin{flushleft}{\scriptsize ViT-B/32, $\gameloss$ (69.8\%)}\end{flushleft} & \includegraphics[trim=0 452 0 226,clip,width=\linewidth]{images/svrhm_per_class/clipenc_gameloss_sketches.png} & \begin{flushright} {\scriptsize $5.6\%$ $(\pm 3.1)$ } \end{flushright}\\ \begin{flushleft}{\scriptsize ViT-B/32, $+\perceploss$ (63.4\%)} \end{flushleft} & \includegraphics[trim=0 452 0 226,clip,width=\linewidth]{images/svrhm_per_class/clienc_sop_sketches.png}& \begin{flushright} {\scriptsize $45.3\%$ $(\pm 5.4)$} \end{flushright}\\ \begin{flushleft}{\scriptsize ViT-B/32, $+\cliploss$ (61.1\%)} \end{flushleft} & \includegraphics[trim=0 410 0 205,clip,width=\linewidth]{images/svrhm_per_class/jonclip_cliploss_4rows_sketches.png}& \begin{flushright}{\scriptsize $62.7\%$ $(\pm 11.6)$} \end{flushright}\\ \begin{flushleft}{\scriptsize VGG16, $\gameloss$ (75.7\%) } \end{flushleft} & \includegraphics[trim=0 196 0 98,clip,width=\linewidth]{images/svrhm_per_class/vgg_gameloss_sketches.png}& \begin{flushright} {\scriptsize $8.3\%$ $(\pm 5.4)$ }\end{flushright}\\ \begin{flushleft}{\scriptsize VGG16, $+\perceploss$ (72.1\%) }\end{flushleft} & \includegraphics[trim=0 196 0 98,clip,width=\linewidth]{images/svrhm_per_class/vgg_soploss_sketches.png}& \begin{flushright} {\scriptsize $38.3\%$ $(\pm 2.5)$} \end{flushright}\\ \begin{flushleft}{\scriptsize VGG16, $+\cliploss$ (51.8\%) }\end{flushleft} & \includegraphics[trim=0 196 0 98,clip,width=\linewidth]{images/svrhm_per_class/vgg_cliploss_sketches.png}& 
\begin{flushright} {\scriptsize $34.0\%$ $(\pm 3.9)$ }\end{flushright} \end{tabular} \endgroup \caption{\textbf{Sketches from the visual communication game using STL-10 dataset with different image encoders and ``perceptual'' losses}. Models trained with the $\gameloss$ only do not learn to draw in an interpretable fashion. For both ViT-B/32 and VGG16 image encoders, the addition of either perceptual loss induces more structure into the resulting drawings, making them more similar to the subject of the image, although it decreases the agents' communication success (shown in brackets on the left side). Perceptual losses also quantitatively improve human performance when pitted against agents (note that reported human accuracies on the right-hand side are for games with 9 distractors as opposed to the agent accuracies on the left with 99 distractors). CLIP-pretrained ViT-B/32 models have higher human performance than the VGG models.} \label{fig:resultsmodels} \end{figure} \begin{table} \caption{\textbf{Human evaluation results -- extended.} Trained agents communicate successfully between themselves in all settings. The addition of either perceptual loss allows humans to achieve significantly better than random performance (images from STL-10, original games have 9 distractors/game for these experiments \& random chance is 10\%). In addition, humans are better at guessing the correct image class when the models are trained with either of the perceptual losses.} \label{tab:humaneval-extented} \centering \begin{tabular}{lllll} \toprule & & Agent & Human & Human \\ Model & Loss & comm. rate & comm. rate & class comm.
rate \\ \midrule VGG16 & $\gameloss$ & $100\%$ & $8.3\% (\pm 5.4)$ & $15.0\% (\pm 2.5)$\\ VGG16 & $\gameloss+\perceploss$ & $93.3\%$ & $38.3\% (\pm 2.5)$ & $55.6\% (\pm 7.1)$\\ VGG16 & $\gameloss+\cliploss$ & $86.7\%$ & $34.0\% (\pm 3.9)$ & $49.3\% (\pm 7.7)$ \\ ViT-B/32 & $\gameloss$ & $93.3\%$ & $5.6\% (\pm 3.1)$ & $15.6\% (\pm 3.1)$\\ ViT-B/32 & $\gameloss+\perceploss$ & $96.7\%$ & $45.3\% (\pm 5.4)$ & $63.3\% (\pm 7.0)$\\ ViT-B/32 & $\gameloss+\cliploss$ & $96.7\%$ & $62.7\% (\pm 11.6)$ & $83.3\% (\pm 9.2)$\\ \bottomrule \end{tabular} \end{table} \section{Exploring what the drawings mean with prompt engineering} It would be beneficial to understand what information the sender agent is trying to convey through its sketch and compare that to what a human playing the game might try to impart. In the particular game setting we are using, if one communicates only the object in the scene then the expected communication rate would only be 10\%. To achieve higher rates, much more nuanced information about the image contents needs to be conveyed. Ultimately, understanding what is being communicated, and how it differs from what humans would communicate, would allow us to design better approaches to inducing more human-like behaviour in the model and the agent's internal representations. To start to explore this in more detail, we demonstrate that we can begin to answer the question of what is being communicated by using the CLIP model as a probe. With the technique of prompt engineering, where a set of textual prompts are encoded with CLIP's language model, it becomes possible to ask basic questions about how CLIP perceives a sketch in terms of the semantic content. For the initial experiments presented here we use two prompt templates: \texttt{``a drawing of a XXX.''} and \texttt{``a photo of a XXX.''}. The placeholder (\texttt{XXX}) is replaced by the 10 different classes in the STL-10 dataset to create a complete set of 20 prompts.
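The probing procedure can be sketched as follows. The snippet only illustrates the prompt construction and the cosine-similarity argmax behind $\mathrm{c}(\bm I)$ and $\mathrm{tp}(\bm I)$; the CLIP image and text encoders are stood in for by placeholder embedding vectors, so the function names and inputs are our own illustrative assumptions, not the actual model interface.

```python
import numpy as np

STL10_CLASSES = ["airplane", "bird", "car", "cat", "deer",
                 "dog", "horse", "monkey", "ship", "truck"]
TEMPLATES = ["a drawing of a {}.", "a photo of a {}."]

def build_prompts():
    # 2 templates x 10 classes -> 20 prompts, each tagged with (type, class, text)
    return [(t.split(" ")[1], c, t.format(c))
            for t in TEMPLATES for c in STL10_CLASSES]

def closest_prompt(image_emb, prompt_embs, prompts):
    """Return the (type, class) of the prompt closest to the image embedding,
    using cosine similarity in the shared embedding space."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    best = int(np.argmax(txt @ img))
    kind, cls, _ = prompts[best]
    return kind, cls  # e.g. ("drawing", "cat") -> tp(I) and c(I)
```

In practice `image_emb` and `prompt_embs` would be produced by CLIP's image and text encoders respectively; here the logic is isolated so it can be checked with synthetic vectors.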
For each of the models we then compute a number of statistics regarding CLIP's perception of the sketch, the target image (i.e. the receiver image that is the true answer) and the guessed image (the image that the receiver actually picked), averaged over all 8000 possible games in the STL-10 test set. More specifically, we ask which class CLIP perceives an image $\bm I$ to be, using a function $\mathrm{c}(\bm I)$ which returns the placeholder of the closest prompt (using cosine similarity in the embedding space), and compare CLIP's predicted class between the sketch, guess and target. In a similar way, we also compute which of the two templates \texttt{\{photo, drawing\}} CLIP predicts the sketch, target and guess to belong to. Finally, we also utilise a function $\mathrm{gt}(input)$ which returns the STL-10 ground-truth class label of the sender agent's input, to allow us to analyse to what extent CLIP's perception of the images matches the true label. The results of this analysis are shown in \cref{tab:promptresults}. \begin{table}[h] \caption{\textbf{Comparing models with CLIP using prompt engineering: $\mathrm{c}(\bm I)$ returns which class CLIP perceives image $\bm I$ to be; $\mathrm{gt}(input)$ returns the true class of the sender agent's input photo; $\mathrm{tp}(\bm I)$ returns the type (photo or drawing) CLIP predicts $\bm I$ to be.} There are significant differences between models; however, it is clear that the perceptual losses strongly encourage a more object-centric representation.
CLIP is very good at telling the difference between sketches and photos in all cases, despite the perceptual losses pulling together the representations.} \label{tab:promptresults} \centering \begin{tabular}{rcccccc} \toprule & \multicolumn{3}{c}{VGG16 encoder} & \multicolumn{3}{c}{ViT-B/32 encoder}\\ & {\small $\gameloss$} & {\small $+\perceploss$} & {\small $+\cliploss$} & {\small $\gameloss$} & {\small $+\perceploss$} & {\small $+\cliploss$} \\ \midrule c(sketch)==gt(input) & 7.3\% & 24.0\% & 41.2\% & 9.4\% & 96.4\% & 96.6\% \\ c(sketch)==c(target) & 7.3\% & 24.2\% & 41.5\% & 9.7\% & 96.4\% & 96.5\% \\ c(sketch)==c(guess) & 7.6\% & 24.6\% & 38.8\% & 10.4\% & 94.9\% & 96.1\% \\ c(target)==gt(input) & 97.3\% & 97.3\% & 97.3\% & 97.3\% & 97.3\% & 97.3\% \\ c(guess)==gt(input) & 85.6\% & 82.1\% & 76.5\% & 77.6\% & 94.8\% & 95.8\% \\ \midrule tp(sketch)=='drawing' & 100\% & 99.9\% & 99.9\% & 99.9\% & 96.2\% & 99.3\% \\ tp(target)=='photo' & 99.4\% & 99.4\% & 99.4\% & 99.4\% & 99.4\% & 99.4\% \\ tp(guess)=='photo' & 99.4\% & 98.4\% & 98.4\% & 99.0\% & 99.4\% & 99.4\% \\ \bottomrule \end{tabular} \end{table} The results in \cref{tab:promptresults} indicate that both forms of perceptual loss do a good job of making the sender agent produce sketches that capture the main class of object in the input image. There is an inherent bias towards CLIP-generated sketches because the same model is being used to perform the generation and the probing. The fact that for all models \texttt{c(guess)==gt(input)} rates are so high suggests CLIP identifies the class of the guessed image to be correct, i.e. to be the same as the ground truth label. On the other hand, \texttt{c(sketch)==c(target)} measures whether the class CLIP assigns to the sketch is the same as the class CLIP assigns to the target image.
The fact that this measure is much lower across all VGG16 models suggests that sketches produced with this feature encoder are not as interpretable to the CLIP model as the sketches produced with the ViT-B/32 encoder. As can be seen in \cref{fig:resultsmodels}, the CLIP-generated sketches are qualitatively more interpretable than the VGG ones. When looking at these results, bear in mind that the communication game itself is entirely self-supervised; the notion of object class is clearly not required for successful communication and is instead a side effect of inducing a perceptual loss between internal representations. The results also show that despite the perceptual losses forcing the representations of the sketch and image together, the CLIP-based probe is able to recognise the sketch as being a drawing and the receiver images from the dataset as being photos almost all of the time. \section{Discussion and Future Directions} The aim of this study was to explore how representational losses influence the sketches produced by artificial agents playing a visual communication game. We showed that the addition of either perceptual loss to the communication game leads to qualitatively more recognisable sketches than those produced by agents trained with the game objective only. Although the additional representational losses slightly decrease the agents' ability to communicate, they significantly increase the possibility of recognising the sketches as the semantic category of the photographs they represent (as shown in \cref{tab:promptresults} and with the human evaluation presented in \cref{fig:resultsmodels}). The striking differences between images from the two loss formulations raise many questions, and this is an area we intend to explore in future work. There are also many other loss formulations that it would be exciting to experiment with.
Going forward, it would also be interesting to explore whether it is possible to minimise the drop in communication rates that arises from introducing this perceptual bias. Our brief experiments with prompt engineering in the previous section open up a number of doors for future analysis. The game being played by the agents is complex, and the communication success rates are far in excess of the 10\% that would na\"ively result from the models only communicating information about the class. The obvious next step is to question what additional information is being conveyed in the sketches: is it interpretable semantic information about the input image, is it some kind of neural hash code that just happens to allow communication to succeed, or is it a mixture of both aspects? Further, similar to the causal interventions that are applied in emergent communication scenarios with non-visual channels, we would also wish to explore which parts of a sketch (perhaps bundles of strokes) contribute to particular aspects of semantic meaning. We hope that, with richer datasets and considerably more engineering of prompts, the setup we have outlined in this paper will allow these goals to be achieved. \newpage \begin{ack} D.M. is supported by the EPSRC Doctoral Training Partnership (EP/R513325/1). J.H. received funding from the EPSRC Centre for Spatial Computational Learning (EP/S030069/1). The authors acknowledge the use of the IRIDIS High-Performance Computing Facility, the ECS Alpha Cluster, and associated support services at the University of Southampton in the completion of this work. \end{ack} { \small \bibliographystyle{svrhm_2021}
\section{Introduction} \label{sec:intro} Rogue waves threaten the safety and survivability of marine structures. The mechanisms leading to the formation of such extreme waves have been investigated and probabilistic descriptions have been derived to provide improved design criteria (e.g. \cite{perlin2013breaking,bitner2014occurrence,toffoli2012statistics,alberello2016non}). Breaking of large waves is the most hazardous condition in terms of wave forces on marine structures \cite{faltinsen1993sea,grue2002four,kim2008nonlinear}. However, it remains elusive how the mechanism leading to the formation of a rogue wave affects the wave shape at breaking and the associated kinematic field. Measurements under deep water breaking waves have shown that wave velocities, and the associated forces, exceed those predicted by potential flow theory in the crest region. Using Laser Doppler Anemometry (\textsc{lda}) under plungers, Easson \& Greated \cite{easson1984breaking} report velocities two times larger than those predicted by linear theory and forces five times larger than those of an equivalent 5$^{th}$ order Stokes wave. Analogous results are reported in Kim et al. \cite{kim1992kinematics} for spillers in a random sea. Measured particle velocities in the crest region exceed those predicted using an equivalent Stokes wave and linear superposition of the spectral components. Kim et al. \cite{kim1992kinematics} argue that the asymmetric shape (crest higher than the troughs, with a forward-leaning wave front) associated with large transient waves resulting from energy focussing might affect the accuracy of the estimation of the velocity field. Breaking waves have also been experimentally investigated by means of Particle Image Velocimetry (\textsc{piv}); compared to \textsc{lda}, this technique offers the advantage of obtaining fluid velocities over a plane (unlike pointwise \textsc{lda} measurements).
Under plungers, Skyner \cite{skyner1996acomparison} recorded particle velocities higher than the phase speed of the waves. Observations of velocities exceeding the phase speed were also made by Perlin et al. \cite{perlin1996anexperimental}, even though the fluid flow presents a different topology compared to Skyner \cite{skyner1996acomparison}. Differences in the flow structure are most likely related to a different underlying wave spectrum. \textsc{piv} was systematically employed by Grue et al. \cite{grue2003kinematics,grue2006experimental,grue2012orbital} to investigate breaking waves in deep water conditions. Monochromatic wave trains, unidirectional focussed wave groups and unidirectional random seas were all considered. Grue et al. \cite{grue2003kinematics} observed that all velocity profiles could be described by a universal profile if appropriate dimensionless parameters are chosen. The velocity profile beneath a wave can be approximated by a third order monochromatic Stokes wave with the same period and amplitude using the so-called Grue method \cite{grue2003kinematics}. The wavenumber $k$ and the steepness $\epsilon$ (product of the wavenumber and the linear wave amplitude $a$) are obtained by numerically solving the system of equations: \begin{equation} \begin{cases} \dfrac{\omega^2}{gk} = 1 + \epsilon^2 \\ k\eta_M = \epsilon + \dfrac{1}{2}\epsilon^2 + \dfrac{1}{2}\epsilon^3 \\ \end{cases} \label{eq_grue} \end{equation} The radial frequency is computed linearly from the trough-to-trough wave period (i.e. $\omega=2\pi/T_{TT}$, $T_{TT}$ being the period between the troughs around the crest). Once the solution is obtained, the velocity profile follows the exponential form: \begin{equation} u_G=\epsilon \sqrt{\frac{g}{k}}\exp{(k\eta)}. \label{eq_grue_u} \end{equation} The Grue velocity profile matches previous breaking measurements reported in e.g. \cite{kim1992kinematics,skyner1996acomparison,baldock1996laboratory}.
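As a minimal illustration, the system above can be solved with a simple fixed-point loop alternating between the nonlinear dispersion relation and the cubic steepness equation; the sketch below assumes deep water and SI units, and the function names and the initial guess are our own.

```python
import numpy as np

def grue_parameters(T_TT, eta_M, g=9.81, tol=1e-10, max_iter=100):
    """Solve the Grue system for wavenumber k and steepness eps,
    given the trough-to-trough period T_TT and the crest elevation eta_M."""
    omega = 2.0 * np.pi / T_TT
    eps = 0.1  # initial guess for the steepness
    k = omega**2 / g
    for _ in range(max_iter):
        k = omega**2 / (g * (1.0 + eps**2))          # nonlinear dispersion relation
        # solve 0.5*eps^3 + 0.5*eps^2 + eps - k*eta_M = 0 for the real positive root
        roots = np.roots([0.5, 0.5, 1.0, -k * eta_M])
        new_eps = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
        if abs(new_eps - eps) < tol:
            eps = new_eps
            break
        eps = new_eps
    return k, eps

def grue_velocity(k, eps, eta, g=9.81):
    # exponential profile, eta measured from the mean water level
    return eps * np.sqrt(g / k) * np.exp(k * eta)
```

The loop converges quickly for realistic steepnesses because the cubic has a single real positive root and the dispersion correction is $O(\epsilon^2)$.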
Furthermore, the Grue method compares well with second order potential flow predictions \cite{stansberg2006kinematics,johannessen2010calculations}. The good performance of the Grue method and its relative simplicity have established it as one of the methods commonly accepted as an industry standard to define the velocity profile under large waves \cite{stansberg2006kinematics}. Another method to estimate the velocity profile underneath a random wave field has been proposed by Donelan et al. \cite{donelan1992simple}. The method is based on a superposition of wave components. Unlike a traditional linear superposition, which has been found to overestimate crest velocities, in the Donelan method spectral wave components (surface and velocity corrections) are iteratively added to the perturbed solution. To compute the velocity profile the required steps are as follows. First, a Fourier Transform algorithm is used to compute the amplitudes, $a_n$, and phases, $\varepsilon_n$, of the surface elevation. A vertical grid, $z$, is defined. The successive velocity and amplitude increments are computed iteratively as \begin{equation} \delta u_n = a_n\omega_n \cos(\omega_n t+\varepsilon_n) \cdot \exp (k_n (z-\eta_{n-1})), \label{eq_do1} \end{equation} \begin{equation} u_n = u_{n-1}+\delta u_n, \label{eq_do2} \end{equation} \begin{equation} \delta \eta_n = a_n \cos(\omega_n t+\varepsilon_n), \label{eq_do3} \end{equation} \begin{equation} \eta_n = \eta_{n-1}+\delta \eta_n. \label{eq_do4} \end{equation} Finally, the velocities for grid points outside the water domain are set to zero. From the iterative procedure it can be deduced that for the $n^{th}$ component the mean water level is the pre-existing wavy surface and the velocities are computed over a varying $z$. The Donelan method has been found to compare well with field data \cite{donelan1992simple}.
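A minimal sketch of the iterative superposition above, assuming deep-water linear dispersion for each spectral component (the variable names are ours):

```python
import numpy as np

def donelan_velocity(a, omega, phase, z, t, g=9.81):
    """Iteratively superpose spectral components: each component rides on the
    wavy surface built from the previous ones (eta starts at the mean level)."""
    k = omega**2 / g                      # deep-water dispersion per component
    u = np.zeros_like(z, dtype=float)
    eta = 0.0
    for a_n, w_n, k_n, e_n in zip(a, omega, k, phase):
        # velocity increment evaluated relative to the pre-existing surface eta
        u += a_n * w_n * np.cos(w_n * t + e_n) * np.exp(k_n * (z - eta))
        eta += a_n * np.cos(w_n * t + e_n)
    u[z > eta] = 0.0                      # zero the velocities above the surface
    return u, eta
```

With a single spectral component the procedure reduces to the linear exponential profile, since the pre-existing surface is the mean water level.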
In this paper the predictive performance of the Grue and Donelan methods is tested against laboratory measurements of the velocity profile underneath breaking rogue waves. The formation of the breaking waves in the wave flume is controlled by wave focussing techniques, e.g. \cite{longuet1974breaking,tromans1991new}. Two techniques commonly used in model tests are compared: dispersive focussing \cite{longuet1974breaking,tromans1991new}, using different underlying JONSWAP spectra, and the Nonlinear Schr{\"o}dinger equation (NLS) framework \cite{zakharov1968stability}. Whereas the velocity field under breaking waves generated by dispersive focussing has been examined in the past, it is yet uncertain how it compares to the kinematic field of breaking events generated using breather solutions of the NLS, which more realistically replicate wave evolution at sea. The paper is structured as follows. In the next Section we describe the experimental set-up. The wave generation mechanisms are presented in Section \ref{ch:wg}. The evolution in space of the wave group and its spectral properties are shown in the following Section. The description of the wave shape, the velocity profiles and the comparison with engineering methods are discussed in Section \ref{ch:piv}. Final remarks are reported in the Conclusions. \section{Experimental set-up} \label{ch:exp_su} The purpose of the experiments is to monitor the spatial evolution of a steep wave group and measure the water particle velocity at breaking. Experiments have been conducted in the Extreme Air-Sea Interaction facility (EASI) in the Michell Hydrodynamics Laboratory at The University of Melbourne (Australia). The wave flume is 60 $\times$ 2\,m (length $\times$ width). The water depth was set to 0.9\,m. At one end of the tank a computer-controlled cylindrical wave-maker produces user-defined wave forms. At the opposite end a sloping beach is installed to absorb the incoming wave energy.
Optical access, to perform \textsc{piv} measurements, is provided through a glass window on the side of the flume, located 34\,m from the wave-maker. A schematic of the facility and the experimental set-up is shown in Fig.~\ref{fig:flume}. \begin{figure}[htbp] \centerline{\includegraphics[trim={0 200 0 50},clip,width=0.5\textwidth]{flumenew.png}} \caption{Sketch of the EASI facility (not to scale).} \label{fig:flume} \end{figure} At the window, the shape of the breaking wave is recorded by a camera and \textsc{piv} measurements can be undertaken. This technique has been used to explore coastal and ocean processes at laboratory scale since the 90s, e.g. \cite{greated1992particle,chang1996measurement}. \textsc{piv} allows the calculation of the spatio-temporal properties of the kinematic field by cross-correlating pairs of images of a properly seeded fluid. The analysis of two images, taken a time $\Delta t$ apart, provides the displacement of the particles and consequently their velocity \cite{adrian2011particle}. Experiments are performed with a two-dimensional \textsc{piv} set-up, i.e. only the planar velocity components along the flow and in the vertical direction are extracted. The set-up is sketched in Fig.~\ref{fig:piv}. \begin{figure}[htbp] \centerline{\includegraphics[trim={0 120 0 80},clip,width=0.5\textwidth]{pivppt.png}} \caption{Sketch of the \textsc{piv} system (not to scale).} \label{fig:piv} \end{figure} The laser beam is generated by a Photonics DM20-527 dual head Nd:YLF laser that delivers 100\,mJ/pulse at 15\,Hz. The beam is converted into a light sheet at the centre of the tank via a series of optics. Images are recorded by an Andor CMOS camera equipped with a Nikkor f/3.5 60\,mm macro lens. The camera resolution is 2120 $\times$ 2560\,pixel and the corresponding field of view is approximately 170 $\times$ 200\,mm (horizontal $\times$ vertical). Silver coated glass spheres with a mean particle diameter of 10\,$\mu$m are used to seed the water.
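The cross-correlation step at the core of the \textsc{piv} processing can be illustrated with a toy example: the displacement of the seeding pattern between two frames is recovered from the peak of the FFT-based cross-correlation, and the velocity then follows as displacement times pixel size divided by $\Delta t$. This is only a sketch of the principle, not the PIVlab algorithm, which additionally uses interrogation windows, sub-pixel peak fitting and window deformation.

```python
import numpy as np

def displacement_from_pair(frame_a, frame_b):
    """Locate the cross-correlation peak between two frames and return the
    integer pixel displacement (dy, dx) of frame_b relative to frame_a."""
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    corr = np.real(np.fft.ifft2(np.conj(A) * B))   # circular cross-correlation
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    # map indices above the Nyquist index to negative shifts
    ny, nx = corr.shape
    dy = iy if iy <= ny // 2 else iy - ny
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx
```

Dividing the recovered displacement (converted to physical units through the image magnification) by the separation time $\Delta t$ gives the local velocity estimate.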
The laser and the particles used in the experiments provide a better image quality compared to a similar set-up used for preliminary tests \cite{alberello2016omae}. The separation time between image pairs is $\Delta t=2.5$\,ms. During the pre-processing step the water surface is manually detected to mask the air side, improving the quality of the subsequent cross-correlation algorithm. The PIVlab tool for MATLAB \cite{thielickePIVlab,thielicke2014PIVlab} is applied to extract the velocity field in the horizontal and vertical direction. The surface elevation is recorded by resistive wave gauges at various positions along the tank. The probe positions, relative to the wave-maker, are $x =$ 14.05, 25.15, 30.10, 32.60, 33.95, 34.90, 41.40, 45.15\,m and the beach starts at $x=51.40$\,m. The probes are not equispaced because their positioning along the tank is constrained by accessibility. The fifth probe, i.e. at $x=33.95$\,m, is located in the middle of the \textsc{piv} field of view to independently monitor the surface elevation where the breaking occurs. Note that during the \textsc{piv} recording the fifth probe was removed from the camera field of view to improve the image quality for the subsequent analysis. To obtain robust statistical results each test is repeated 20 times. \section{Wave generation} \label{ch:wg} The location of the breaking event in the wave tank is controlled deterministically by means of wave focussing techniques. In deep water conditions, dispersive focussing has commonly been used in the past. This method relies on the differential celerity of the components of the wave spectrum, i.e. longer waves propagate faster than shorter waves. By defining an initial phase shift at the wave-maker for each spectral component, it is possible to synchronise wave components at a specific point in space and time to generate the extreme wave, e.g. \cite{skyner1996acomparison,perlin1996anexperimental,longuet1974breaking}.
New Wave Theory (NWT) \cite{tromans1991new}, popular among practitioners, relies on this technique. Dispersive focussing explicitly exploits the linear properties of the waves. Higher order corrections exist to provide a more accurate prediction, since waves are in fact fully nonlinear. Modulational instability is one of the main mechanisms leading to the growth and, eventually, breaking of rogue waves \cite{tulin1999laboratory}. This mechanism relies on the nonlinear wave-wave interactions between wave components, which can be accurately modelled in the framework of the Nonlinear Schr\"odinger equation (NLS). Among the large class of breather solutions of the NLS, the Peregrine breather produces one rogue event starting from an almost monochromatic wave train \cite{alberello2016omae,peregrine1983water,chabchoub2011rogue}. This mechanism has recently been exploited to investigate ship response to extreme waves, e.g. \cite{onorato2013rogueplos,zhang2016modelling,klein2016peregrine}. \subsection{Dispersive focussing} According to linear wave theory the surface elevation at any given time and position is given by: \begin{equation} \eta (x,t) = \sum_{j=1}^N a_j \cos(\omega _jt - k_jx + \varepsilon _j). \label{eq_nwt1} \end{equation} In Eqn.~(\ref{eq_nwt1}), $\eta(x,t)$ denotes the surface elevation at time $t$ and position $x$, $\omega_j$ the wave frequency, $k_j$ the wave number, $\varepsilon_j$ the phase and $a_j$ the amplitude of the $j$-th spectral wave component. Amplitudes can be extracted from the input wave spectrum, e.g. JONSWAP. At the focussing point all spectral components are in phase; the linear crest amplitude is then: \begin{equation} A=\sum_{j=1}^N a_j. \label{eq_sum1} \end{equation} In our experiments amplitudes are extracted from an underlying JONSWAP spectrum. The peak wave period imposed at the wave-maker is $T_0=0.8$\,s. For the water depth of 0.9\,m, this peak period guarantees deep water conditions.
The corresponding wavelength is 1\,m and the associated wavenumber $k_0=2\pi$\,rad/m. Different peak enhancement factors $\gamma$ were analysed during the experiments, i.e. $\gamma=1,3,6$. The lower value corresponds to the Pierson-Moskowitz spectrum, $\gamma=3$ is close to the standard JONSWAP formulation, while $\gamma=6$ provides a narrower spectrum. Wave spectra are reconstructed using 256 wave components in the frequency range $0.5\leq \omega/\omega_0\leq 2$. The amplitude of each wave component is calculated as: \begin{equation} a_j = A\frac{S(\omega_j)}{\sum_{j=1}^N S(\omega_j)} \label{eq_sum2} \end{equation} where $S(\omega)$ is the input spectrum. The input signal at the wave-maker, i.e. $\eta(x=0,t)$, is reconstructed using Eqn.~(\ref{eq_nwt1}). The process of identifying the correct initial input surface elevation is repeated iteratively to obtain a single breaking wave at the desired location (i.e. in the camera field of view). The calibration of the breaking position is challenging \cite{tian2010energy}. Particular attention has been devoted to avoiding the formation of micro-breakers (or whitecapping) on the surface before the main breaking event detected with the \textsc{piv}. Two main difficulties are encountered: waves are fully nonlinear and the steepness at the breaking onset is unknown a priori. The degree of nonlinearity is related to the wave steepness which, for the present experiments, is high. One of the main consequences is the shifting of the focussing location compared to linear theory, e.g. \cite{baldock1996extreme}. Methods have been proposed to adjust the phases \cite{chaplin1996frequency,clauss2011new,fernandez2014extreme}. However, the wave shape at the breaking onset remains uncertain \cite{toffoli2010maximum}, even though wave focussing experiments have shown an inverse correlation between the wave steepness at the breaking onset and the spectral bandwidth \cite{perlin2013breaking}.
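The focussing recipe above can be sketched as follows: choosing the phases $\varepsilon_j = k_j x_f - \omega_j t_f$ puts every component in phase at the focus point $(x_f, t_f)$, where the linear crest elevation reaches $\sum a_j = A$. The JONSWAP shape below is a standard unnormalised form (the absolute spectral level is irrelevant since the amplitudes are rescaled by $A$), and all names are our own.

```python
import numpy as np

def jonswap(omega, omega_p, gamma):
    """Unnormalised JONSWAP spectral shape with peak enhancement factor gamma."""
    sigma = np.where(omega <= omega_p, 0.07, 0.09)
    r = np.exp(-((omega - omega_p) ** 2) / (2 * sigma**2 * omega_p**2))
    return omega**-5 * np.exp(-1.25 * (omega_p / omega) ** 4) * gamma**r

def focused_wave(A, T0, gamma, x_f, t_f, n=256, g=9.81):
    """Linear focussed wave group: amplitudes from the spectrum, phases chosen
    so that all components are in phase at (x_f, t_f)."""
    omega_p = 2 * np.pi / T0
    omega = np.linspace(0.5 * omega_p, 2.0 * omega_p, n)
    S = jonswap(omega, omega_p, gamma)
    a = A * S / S.sum()                     # amplitude of each component
    k = omega**2 / g                        # deep-water dispersion
    eps = k * x_f - omega * t_f             # phase shifts imposed at the wave-maker
    def eta(x, t):
        return np.sum(a * np.cos(omega * t - k * x + eps))
    return eta, a
```

Evaluating the returned `eta` at the focus point recovers the design amplitude $A$, while away from it the components decorrelate and the elevation is much smaller.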
\subsection{Nonlinear Schr{\"o}dinger Equation (NLS)} Compared to linear potential flow theory, the NLS equation provides an enhanced description of the nonlinear wave evolution. It describes the slowly varying envelope and is derived from the Euler equations written in Hamiltonian form. The equation for deep water waves, first derived by Zakharov \cite{zakharov1968stability}, reads: \begin{equation} i \left(\frac{\partial \psi}{\partial t} + c_g \frac{\partial \psi}{\partial x}\right) - \frac{\omega_0}{8k_0^2}\frac{\partial^2 \psi}{\partial x^2} - \frac{\omega_0 k_0^2}{2}|\psi|^2\psi=0 \label{eq_NLS} \end{equation} where $\psi$ is the envelope, $c_g=\omega_0/(2k_0)$ the group velocity, and $\omega_0$ and $k_0$ denote the wave frequency and wave number of the carrier wave component as imposed at the wave maker. In dimensionless form, Eqn.~(\ref{eq_NLS}) becomes: \begin{equation} i q_\tau + q_{\chi\chi} + 2 |q|^2q=0. \label{eq_NLSdimensionless} \end{equation} The transformations $\tau=-\omega_0t/(8k_0^2)$, $\chi=x-c_gt$ and $q = \sqrt{2}k_0^2\psi$ are used. One of the exact solutions of Eqn.~(\ref{eq_NLSdimensionless}) is the Peregrine breather \cite{peregrine1983water}: \begin{equation} q_P (\chi,\tau) = \left( 1 - \frac{4(1+4i\tau)}{1+4\chi^2+16\tau^2}\right) e^{2i\tau}. \label{eq_Peregrine} \end{equation} The surface elevation corresponding to the Peregrine breather is: \begin{equation} \eta_P(x,t)=\operatorname{Re}\{\psi_P\left(x,t\right)\exp\left[i\left(k_0x-\omega_0t+\varepsilon\right)\right]\} \end{equation} where $\psi_P$ denotes the solution $q_P$ after transformation back to dimensional variables. Away from the focussing point the surface elevation corresponds to a slightly perturbed monochromatic wave train (the Peregrine solution has an infinite modulation period). At the focussing point, the amplification factor of the extreme wave is 3, i.e. the rogue wave is three times higher than the monochromatic wave train from which it emerges.
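The threefold amplification and the decay to the background state of the dimensionless Peregrine solution above can be verified directly; this is a sketch in dimensionless variables only, the mapping back to a dimensional surface elevation being omitted.

```python
import numpy as np

def peregrine(chi, tau):
    """Dimensionless Peregrine breather q_P(chi, tau)."""
    chi = np.asarray(chi, dtype=float)
    denom = 1.0 + 4.0 * chi**2 + 16.0 * tau**2
    return (1.0 - 4.0 * (1.0 + 4.0j * tau) / denom) * np.exp(2.0j * tau)
```

At the focus, $|q_P(0,0)| = |1-4| = 3$ (the amplification factor of the rogue wave over the background train), while for $|\chi| \to \infty$ the modulus tends to the unit background.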
Analogously to the dispersive focussing, the Peregrine breather leads to the formation of only one rogue event in the wave tank, i.e. the solution is doubly localised in space and time. In the experiments, the solution is computed for a carrier wave with period $T_0=0.8$\,s. This corresponds to the peak period of the dispersive focussing cases. Similarly to the dispersive focussing cases, particular attention was devoted to avoiding the formation of micro-breakers before the camera field of view. Experiments under analogous conditions reported by Shemer \& Liberzon \cite{shemer2014lagrangian} suggest that a spiller can be expected in this case. \section{Wave evolution} \label{ch:eta} \subsection{Dispersive focussing} The time series of the surface elevation are presented in Fig.~\ref{fig:tseries}. The groups become more compact in time as they approach the breaking point (probe 5). After the breaking, the wave groups broaden again, i.e. the envelope is elongated. \begin{figure*}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=1\textwidth]{tseries}} \caption{Time series of the surface elevation at different distances (not equispaced) for the various wave configurations; the envelope is shown in red. Propagation is from bottom to top. The vertical shift is $0.1$\,m; a horizontal shift is applied to centre the wave group around $t=0$\,s. The breaking location is framed. Dispersive focussing cases are denoted DF, nonlinear focussing NL.} \label{fig:tseries} \end{figure*} The dimensionless spectra corresponding to the various stages of evolution are reported in Fig.~\ref{fig:spec}. A spectral transformation is observed as the group propagates along the tank. An energy downshift occurs during the wave focussing (i.e. up to the breaking point). The spectral transformation is more evident for narrower spectra ($\gamma=3,6$). After the breaking, energy is injected at higher frequencies ($1<\omega/\omega_0<1.5$).
\begin{figure*}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=1\textwidth]{spec3}} \caption{Normalised surface elevation spectra at different distances (not equispaced) for the various wave configurations. Propagation is from bottom to top. The breaking location is framed. Dispersive focussing cases are denoted DF, nonlinear focussing NL.} \label{fig:spec} \end{figure*} Linear and second order wave theory would not be able to predict any spectral evolution. The large steepness of the wave group at the breaking results in a high Benjamin-Feir Instability index (BFI) that underpins a strong nonlinear wave evolution. The Benjamin-Feir instability plays a substantial role despite the fact that the dominant mechanism is dispersive focussing. Results are consistent with predictions for fully nonlinear seas \cite{socquet2005probability}. The highly nonlinear evolution observed in the experiments also explains the difficulties encountered in calibrating the focussing location. During its evolution the group naturally tends to a more stable configuration (i.e. lower steepness) via a downshift of the energy and a consequent broadening of the spectrum itself. Due to constraints in the facility the first probe is already at about 15 wavelengths from the wave maker. At this distance nonlinear evolution has already taken place. The double peaked spectral structure, particularly marked for $\gamma=1$, is consistent with fully nonlinear numerical simulations and solutions of the modified NLS reported in Adcock \& Taylor \cite{adcock2016nonlinear}. Note, however, that in Adcock \& Taylor \cite{adcock2016nonlinear} breaking does not occur due to the lower steepness considered in their simulations. To assess the quality of the focussing at the breaking location, we use a quality factor $Q$, which is the ratio between the maximum measured wave elevation and the maximum elevation of the design wave \cite{johannessen2003nonlinear}.
The quality factor ranges in $0<Q<1$, with $Q=1$ corresponding to ideal focussing. In the experiments, although the dispersive focussing cases break at different steepnesses, and consequently amplitudes, the input energy content at the wave maker is the same. The quality $Q$ is 0.49, 0.61 and 0.66 for $\gamma=1$, 3 and 6, respectively. By narrowing the spectrum the quality increases, i.e. the wave shape is closer to the designed one. Note that in the current experiments steeper wave conditions that lead to breaking are investigated; a higher quality ($Q=0.95$) has been reported for non breaking cases \cite{johannessen2003nonlinear}. \subsection{Nonlinear Schr{\"o}dinger Equation (NLS)} The right panel in Fig.~\ref{fig:tseries} shows the evolution of the Peregrine solution. In this case the emergence and disappearance of the rogue wave event from an otherwise monochromatic wave train can be seen. The spectral evolution of the Peregrine solution contrasts with the one observed for the wave groups dominated by dispersive focussing, see Fig.~\ref{fig:spec}. In this case there is no downshift of the energy. The nonlinear wave evolution is already accounted for in the equation, i.e. the NLS, but a slight broadening can be seen at the base of the spectrum (this would be clearer in logarithmic scale, cf. \cite{alberello2016omae}). Wave breaking inhibits the time-reversal symmetry \cite{chabchoub2014time}, meaning that the focussing and defocussing processes are asymmetric (this is clearer in the time domain, see Fig.~\ref{fig:tseries}). A quality factor can also be defined for the Peregrine solution as the ratio between the measured amplification and the theoretical amplification, i.e. 3. In this case the quality factor is 0.9, much higher than the one recorded for the dispersive focussing cases. Using the NLS framework, a wave breaking closer to the designed shape can be obtained.
We must note that the Peregrine breather can be seen as a limiting case of dispersive focussing when the $\gamma$ parameter tends to infinity, i.e. an extremely narrow spectrum. \section{Wave breaking} \label{ch:piv} \subsection{Dispersive focussing} Although the spectral evolution provides fundamental information about the nonlinear wave interactions, wave breaking is a highly localised mechanism that strongly relates to the time series rather than the spectral characteristics, e.g. \cite{chalikov2012simulation}. Fig.~\ref{fig:tseriesz} provides the dimensionless time series at the wave breaking. Normalisation is done using the period and wavenumber imposed at the wave maker (i.e. $T_0$ and $k_0$). The asymmetry parameters are also reported; these can be used as an indication of proximity to breaking \cite{chalikov2012simulation}. $S_k$ denotes the vertical asymmetry, $A_s$ the horizontal asymmetry \cite{babanin2007predicting}. The former is a measure of how much higher the crest is than the trough, while the latter indicates whether the wave is leaning forward ($A_s<0$) or backwards ($A_s>0$). \begin{figure}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=0.5\textwidth]{tserieszn}} \caption{Close-up of the surface elevation for the breaking wave for the various wave configurations, from top to bottom: $DF,\gamma=1$, $DF,\gamma=3$, $DF,\gamma=6$ and $NLS$. A horizontal shift is applied to obtain the crest at $t=0$\,s. Values of asymmetry are reported in the plots.} \label{fig:tseriesz} \end{figure} In all cases the period of the extreme wave is shorter than the input period, despite the spectral downshift. Surface elevations are strongly asymmetric around the horizontal axis (i.e. $S_k$); asymmetry is less pronounced around the vertical axis (i.e. $A_s$). Most importantly, the analysis of the time series shows that the breaking occurs at different steepnesses for the different cases.
This further corroborates the difficulties of identifying the breaking onset a priori; breaking most likely depends on the phase difference between spectral components and, possibly, the overall energy redistribution among wave components \cite{johannessen2001laboratory,johannessen2010calculations}. Our observations confirm that the breaking onset occurs at larger amplitudes for narrower spectra, cf. \cite{perlin2013breaking}. Camera images allow a detailed analysis of the shape and velocity of the breaking wave in the space domain (Fig.~\ref{fig:piv_vel}). The smallest breaking wave, the one recorded for dispersive focussing and $\gamma=1$, results in a plunger, while spiller-like breaking waves are recorded for $\gamma=6$. In deep water conditions spillers and plungers are the only possible shapes of breaking waves, and the former is more frequent in the ocean \cite{duncan2001spilling}. The velocity field corresponding to the breaking wave images is shown in the right panel of Fig.~\ref{fig:piv_vel}. Larger velocities are recorded for higher wave amplitudes. \begin{figure*}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=1\textwidth]{vel_piv}} \caption{\textsc{piv} images of the breaking waves (left panels) and corresponding velocity field (right panels) for the various wave configurations, from top to bottom: $DF,\gamma=1$, $DF,\gamma=3$, $DF,\gamma=6$ and $NLS$.} \label{fig:piv_vel} \end{figure*} The dimensionless horizontal velocity profile under the crest averaged over 20 repetitions is shown in Fig.~\ref{fig:vel_pr_4}. The shaded area shows the confidence interval (i.e. $\pm$ the standard deviation $\sigma$, or 68\% confidence interval). The measured velocity is compared against the profiles defined by methods commonly used in engineering practice. The continuous line shows a reference exponential velocity profile (denoted $u_L$) of a monochromatic wave with amplitude $\eta_M$ and period $T_0$.
The dashed line ($u_G$) is the profile obtained by applying the Grue method \cite{grue2003kinematics}, which requires $\eta_M$ and the measured trough-to-trough period, i.e. $T_{TT}$, as inputs. The dash-dotted line ($u_D$) depicts the profile obtained by using the Donelan method \cite{donelan1992simple}. \begin{figure}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=0.5\textwidth]{vel_don}} \caption{Averaged horizontal velocity component for the various wave configurations, from top to bottom: $DF,\gamma=1$, $DF,\gamma=3$, $DF,\gamma=6$ and $NLS$. The confidence interval $\pm\sigma$ is shown as a shaded area. The reference theoretical solutions are also shown: linear (continuous), Grue (dashed) and Donelan (dash-dotted).} \label{fig:vel_pr_4} \end{figure} A common feature of the \textsc{piv} experiments is the narrow confidence interval at low $z$; the error, however, increases close to the crest of the wave. The presence of bubbles and the highly reflective interface make the \textsc{piv} analysis challenging. As a consequence, the uncertainty in the top part of the wave is high. Measurements show that plungers are more energetic than spillers, i.e. the deviation from the linear velocity profile is more accentuated for the dispersive focussing with $\gamma=1$: the dimensionless velocity $u/u_{L,z=0}$ reaches 3 at the crest ($u_{L,z=0}$ is the velocity computed from linear wave theory at $z=0$). In the dispersive focussing cases, velocities at the crest are between 2 and 3 times higher than those predicted by the linear theory. In general, the linear method leads to an overestimation of the velocity in the lower part of the wave but notably underestimates the velocity at the crest. The Grue method provides a better fit but it suffers from the same drawbacks. Furthermore, this method is sensitive to the definition of the local trough-to-trough wave period, which might lead to an underestimation of the velocities at all subsurface levels, see for example $\gamma=3$.
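For reference, the linear baseline $u_L$ used in these comparisons is the Airy deep-water profile, which decays exponentially with depth, $u_L(z)=a\,\omega\,e^{kz}$ for $z\le 0$, with $\omega^2=gk$ the deep-water dispersion relation. A minimal sketch (the amplitude and period values are illustrative, not the experimental ones):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def u_linear(z, a, T):
    """Linear (Airy) deep-water horizontal velocity under the crest:
    u_L(z) = a * omega * exp(k z) for z <= 0 (z = 0 at the mean water
    level), with omega = 2 pi / T and omega^2 = g k."""
    omega = 2.0 * math.pi / T
    k = omega ** 2 / G
    return a * omega * math.exp(k * z)

# Illustrative values, roughly in the range of the experiments:
a, T = 0.05, 0.8              # amplitude [m], period [s]
u0 = u_linear(0.0, a, T)      # velocity at the mean water level
u_deep = u_linear(-0.5, a, T)

print(round(u0, 3))   # 0.393, i.e. a * omega at z = 0
print(u_deep < u0)    # True: the linear profile decays with depth
```

The measured profiles above deviate from this exponential baseline most strongly near the crest, which is exactly where the linear method underestimates the velocity.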
The Donelan method agrees better with the measured velocity at all depths. The latter uses the entire time series and not only the local properties of the breaking wave, i.e. period and amplitude. However, even the Donelan method cannot reproduce the velocity at the tip of the crest, where the recorded velocities are about 25\% higher than the predictions, see Fig.~\ref{fig:vel_dif}. \begin{figure}[htbp] \centerline{\includegraphics[trim={0 0 0 0},clip,width=0.5\textwidth]{vel_dif}} \caption{Relative difference between \textsc{piv} measurements and the Donelan prediction. The NLS case is denoted as $\gamma\rightarrow\infty$. Circles denote comparison of velocities extracted below the crest at the mean water level, squares denote comparison at the crest.} \label{fig:vel_dif} \end{figure} \subsection{Nonlinear Schr{\"o}dinger Equation (NLS)} The solution generated using the Peregrine breather breaks at relatively high steepness, see Fig.~\ref{fig:tseriesz}. Note that the underlying spectral shape is a narrow spectrum (i.e. almost monochromatic). Narrow spectra, prone to wave growth via nonlinear mechanisms, lead to breaking waves at steepnesses closer to the Stokes limit \cite{tulin1999laboratory}. The Peregrine breather, which is driven by a nonlinear mechanism, requires a higher initial energy to evolve into a breaking shape. Compared to the dispersive focussing cases, the wave energy for the Peregrine breather is about 67\% higher. The horizontal asymmetry is rather low (i.e. a slight forward leaning is measured), while the vertical asymmetry indicates that the crest is twice the trough. Camera images (Fig.~\ref{fig:piv_vel}) show that the Peregrine breather breaks as a spiller under our experimental conditions, cf. \cite{shemer2014lagrangian}. In the case of the Peregrine breather, velocities at the crest are underestimated by the Grue method, whereas the Donelan method performs far better (Fig.~\ref{fig:vel_pr_4}).
The presence of the breaking leads to velocities at the crest 50\% higher than those predicted by the linear model and the Grue method. The Donelan method reproduces the measured velocity profile at all depths. However, the Donelan method also predicts velocities at the tip of the crest that are 20\% lower than those measured in the experiments, see Fig.~\ref{fig:vel_dif}. \section{Conclusions} \label{ch:conc} Two different focussing techniques, dispersive focussing and the Nonlinear Schr{\"o}dinger framework, are used to generate a breaking rogue wave event in a unidirectional wave flume and to compare the associated wave fields. These focussing techniques are the two main mechanisms leading to wave growth and, eventually, breaking in the ocean. The evolution has been recorded and the associated velocity field measured at the breaking by means of optical measurements, i.e. Particle Image Velocimetry (\textsc{piv}). From the experimental observations it can be inferred that the dominant generation mechanism, and hence the associated spectrum, strongly affects the dynamical evolution of the wave along the tank, the steepness at the breaking and the shape of the breaking wave itself. However, the experiments confirm that the relation between spectral properties and the time history at the breaking, in particular the breaking onset, remains elusive. Starting from the same initial wave period, i.e. $T_0=0.8$\,s, the dispersive focussing cases undergo a significant spectral change. Nonlinear interactions take place even if the main focussing mechanism is linear. The dynamical evolution is particularly evident in our experiments due to the high steepness and the long propagation distance between generation and breaking (about 35 wavelengths). Breaking occurs at higher steepness for the narrower spectrum, i.e. $\gamma=6$. However, the smaller wave is a more energetic plunger, whereas for increasing $\gamma$ the breaking is gentler.
Using the NLS framework to generate a breaking wave makes it possible to account for the nonlinear wave evolution. Better control of the focussing and the breaking can be achieved, i.e. the quality $Q$ is higher. In this case the steepness at the breaking is close to the Stokes limit but, despite the high steepness, the breaking is a less energetic spiller. Wave velocities, measured with Particle Image Velocimetry, are compared with standard engineering models, the Grue and Donelan methods, which use the recorded surface elevation to derive the velocity profile. The Grue method, which takes into account the maximum elevation and the local zero-crossing wave period, overestimates velocities in the lower part of the wave but underestimates the highest velocities at the crest. Furthermore, the Grue method is highly sensitive to the definition of the zero-crossing period; this is not straightforward in spectral conditions, where multiple wave components run over each other, making the shape of the breaking wave change rapidly in space and time. The Donelan method, which takes into consideration the entire time series, provides a better fit at all elevations. However, even when this more complex method is adopted, the velocities in the breaking region are severely underestimated (the Donelan method provides velocities 20\% lower than those measured). \section*{Acknowledgements} A.A. and F.N. are supported by the Swinburne University of Technology Postgraduate Research Award (SUPRA). A.C. acknowledges support from the Burgundy Region, The Association of German Engineers (VDI) and the Japan Society for the Promotion of Science (JSPS). \bibliographystyle{elsarticle-num}
\section{Introduction} The most valuable characteristics of quantum systems are quantum correlations such as entanglement, which represents a useful resource for applications unattainable in the framework of classical theory, such as quantum teleportation, quantum cryptography and dense coding \cite{hor, brus}. In the last two decades, the use of continuous variable (CV) systems has become a very powerful approach to quantum information processing, opening the way to various protocols and tasks, like quantum cryptography, quantum teleportation and quantum state discrimination \cite{braun,sera,weed}. CV quantum systems provide the quantum description of the propagating electromagnetic field, and are therefore particularly relevant for quantum communication and quantum techniques like detection, imaging and sensing. A significant problem in quantum information theory and its applications is to efficiently reveal the properties of an unknown quantum state, in particular to certify the presence of entanglement in a given unknown state. Usual entanglement criteria for CV systems consist of certain conditions on the second moments, or uncertainties, of quantum states, such as the positive partial transpose (PPT) criterion \cite{per, sim}. Therefore, for completely unknown states a full tomography is first required, in order to reconstruct the entire covariance matrix. However, this method may be a very resource-consuming and demanding experimental procedure, especially for quantum states with a high number of modes. In addition, the full information about the second moments of the state can be excessive for the characterisation of the entanglement present in the state. Instead, one can choose to measure certain fixed combinations of second moments, giving rise to a specific test, which detects entanglement in some states and does not detect it in others \cite{duan, aoki, look}.
Entanglement witnesses (EWs) represent another commonly used entanglement test, being directly accessible in experiments through measurable observables \cite{toth}. A Hermitian observable $W$ is an entanglement witness if for all separable states $\rho_s$, $\Tr[W\rho_s]\geq 0$ holds, while for some entangled state $\rho$ we have that $\Tr[W\rho]<0$ \cite{toth}. For CV systems a special instance of entanglement witnesses can be defined, which embodies the entanglement criterion in terms of the variances of the canonical observables of the state \cite{and, hyll, sper, shch}. Typically, entanglement witnesses are employed when certain knowledge about the state is available. Given an unknown quantum state, however, the complexity of the state and the absence of any information about it deprive us of a specific experimental strategy in tackling the problem of efficient entanglement detection. Therefore, the best strategy in this case would be to perform random measurements, serving as building blocks for the construction of an entanglement witness by means of a semidefinite optimization algorithm. This idea is inspired by an analogous method for the discrete-variable case, which was developed in \cite{jochn}. The paper is organized as follows. In Sec. 2 we present the theoretical framework of CV states, mainly based on the second moments description of the state. In Sec. 3 we introduce the entanglement witnesses based on the covariance matrix (CM) of the state, as presented in Refs. \cite{and, hyll}, and propose a set of stronger linear semidefinite constraints in order to characterize the EWs. Then, we simulate random homodyne measurements for two-mode CMs in Sec. 4 and formulate a semidefinite program (SDP) optimizing the witness constructible from given experimental data. We present the results of the efficiency of entanglement detection for random two-mode CMs and, in particular, for the class of squeezed vacuum states in Sec. 
5, and illustrate an example of a bipartite bound EW. A statistical analysis of our method is provided in Sec. 6, and a summary and conclusions are presented in Sec. 7. \section{Continuous variable systems} A CV system of $N$ canonical bosonic modes, like the quantized electromagnetic field with a Hamiltonian of a system of $N$ harmonic oscillators (modes), is defined in a Hilbert space $ \mathcal H=\bigotimes_{k=1}^N \mathcal H_k$, each one with an infinite-dimensional space $\mathcal H_k=L^2(\mathbb R)$ and two canonical observables $\hat x_k$ and $\hat p_k$, with the corresponding phase space variables of position $x_k$ and momentum $p_k$ \cite{braun, sera, weed}. One can define a vector of quadrature operators $\hat R^{\rm T}\equiv(\hat R_1,...,\hat R_{2N})=(\hat x_1,\hat p_1,...,\hat x_N,\hat p_N)$ satisfying the bosonic commutation relations \begin{equation}\label{cmr} [\hat R_i,\hat R_j]=i\Omega_{ij} \hat I,\quad i,j=1,...,2N, \end{equation} where $\hat I$ is the identity matrix, and $\Omega_{ij}$ are the elements of the fundamental symplectic matrix (we assume $\hbar=1$) \begin{equation}\label{sym} \Omega_N=\bigoplus_{1}^N \left({\begin{array}{*{50}c} 0 & 1 \\ -1 & 0 \\ \end{array}}\right). \end{equation} The primary role in this study is played by the statistical moments of the quadrature operators, which characterize the state with density operator $\rho$ \cite{braun,weed}, up to the second order: the displacement vector, which is the real vector $d$ of first order moments $d_i=\langle \hat R_i\rangle_\rho=\Tr[\hat R_i\rho]$, and the covariance matrix (CM), which is the real, symmetric matrix $\gamma$ whose entries are the second order moments in symmetrized form (the variances) of the quadrature operators, defined as \cite{sera, mukunda}: \begin{equation}\label{ptr} \gamma_{ij}=\langle\{\hat R_i-\langle\hat R_i\rangle, \hat R_j-\langle\hat R_j\rangle\}_+\rangle_\rho, \end{equation} where $\{ ,\}_+$ represents the anticommutator.
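In this convention ($\hbar=1$, anticommutator without a $1/2$ prefactor) the vacuum has CM $\gamma=I$, a one-mode squeezed vacuum has $\gamma={\rm diag}(e^{-2r},e^{2r})$, and the one-mode physicality condition stated next, $\gamma+i\Omega_1\geq 0$, reduces to $\det\gamma\geq 1$. A minimal pure-Python sketch of these facts (the squeezing values are arbitrary illustrations):

```python
import math

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# One-mode squeezed vacuum: gamma = diag(e^{-2r}, e^{2r}).  The vacuum
# (r = 0) has gamma = I here, since gamma_ii = <{R_i, R_i}_+> = 2<R_i^2> = 1.
def squeezed_vacuum_cm(r):
    return [[math.exp(-2 * r), 0.0], [0.0, math.exp(2 * r)]]

print(squeezed_vacuum_cm(0.0) == identity(2))    # True

# One-mode physicality check: det(gamma) >= 1.  A pure squeezed vacuum
# saturates the bound; exact entries (r = ln(2)/2) avoid rounding issues.
gamma_sq = [[0.5, 0.0], [0.0, 2.0]]
print(det2(gamma_sq) >= 1.0)                      # True: physical
print(det2([[0.5, 0.0], [0.0, 0.5]]) >= 1.0)      # False: violates uncertainty
```

Squeezing trades variance between the quadratures while keeping the product, i.e. $\det\gamma$, fixed at its vacuum value.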
The Robertson-Schr\"odinger uncertainty relation in terms of the CM reads \begin{equation}\label{unc} \gamma+i\Omega_N \geq 0, \end{equation} assuring that it is a CM of a physical quantum state. Gaussian states represent the class of CV states which are completely characterized by their first and second moments. The entanglement criteria discussed in this paper can also detect entanglement of non-Gaussian states. A quantum state of a bipartite system is entangled if it cannot be prepared by means of operations acting locally on the subsystems. In the case of separable states correlations are attributed to possible classical communication between subsystems, and hence are of classical origin. This reasoning carries over to CV systems, where a separability criterion can be defined in terms of CMs. If a CM $\gamma$ of a state of $N$ modes is fully separable, then there exist CMs $\gamma_i$, $i=1,\ldots, N$, corresponding to $N$ subsystems, such that \cite{ww} \begin{equation} \gamma\geq \bigoplus_{i=1}^N \gamma_i. \end{equation} Conversely, if this holds, then Gaussian states with CM $\gamma$ are separable. Therefore, if this criterion is violated, then the corresponding state is entangled, irrespective of whether it is Gaussian or not. If it is not violated, then a Gaussian state is separable, while a non-Gaussian state might be entangled.\\ In the following, we will refer to the situation of a $k-$partition of an $N-$mode system as the splitting or distribution of an $N-$mode system into $k$ subsystems, where every subsystem $j$ ($j=1,\ldots,k$) is composed of $N_j$ modes, such that $\sum_{j=1}^k N_j=N$.\\ \subsection{Symplectic transformations} Unitary operators acting on the quantum state space are equivalent to symplectic transformations which preserve the commutation relations of canonical variables. 
The real symplectic group is defined by \cite{dutta}: \begin{equation} Sp(2N,\mathbb R)=\{S\in \mathcal{M}(2N,\mathbb R): S\Omega_N S^{\rm T}=\Omega_N \}, \end{equation} where $S$ is a symplectic transformation acting in phase space as $\hat R \to \hat R'=S \hat R$, and $\mathcal{M}(2N, \mathbb R)$ denotes the set of $2N\times 2N$ real matrices. Symplectic transformations act by congruence on CMs: $\gamma'=S\gamma S^{\rm T}$. Every symplectic transformation can be decomposed using the Euler decomposition, which represents the singular value decomposition for real symplectic matrices \cite{dutta}: \begin{equation}\label{eu} S=K\Big[\bigoplus_{i=1}^{N} S(r_i) \Big] L, \end{equation} where $K$, $L$ are symplectic and orthogonal matrices, while \begin{equation}\label{sqr} S(r_i)=\left({\begin{array}{*{20}c} e^{-r_{i}} & 0 \\ 0 & e^{r_{i}} \end{array}}\right) \end{equation} are one-mode squeezing matrices (symplectic and nonorthogonal) with $r_i$ the squeezing parameter. The symplectic and orthogonal matrices form the maximal compact subgroup $K(N)$ within the noncompact group $Sp(2N,\mathbb R)$ \cite{dutta}. The group $K(N)$ is isomorphic to the group $U(N)$ of $N\times N$ complex unitary matrices: \begin{equation}\label{ku} K(N)=\{S(X,Y)|X-iY\in U(N)\}, \end{equation} where the corresponding symplectic matrices are of the following form: \begin{equation}\label{sorth} S(X,Y)=\left({\begin{array}{*{20}c} X & Y \\ -Y & X \end{array}}\right) \in Sp(2N, \mathbb R). \end{equation} Such transformations describe multiport interferometers and are called passive canonical unitaries, which preserve the photon number \cite{weed}. The active canonical unitaries correspond to nonorthogonal symplectic transformations, such as one-mode squeezers. 
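Both passive and active elements can be checked numerically against the defining relation $S\Omega_1 S^{\rm T}=\Omega_1$. A pure-Python sketch for the one-mode squeezer $S(r)$ introduced above (the squeezing values are arbitrary illustrations):

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

OMEGA1 = [[0.0, 1.0], [-1.0, 0.0]]  # one-mode symplectic form

def S_squeeze(r):
    """One-mode squeezing matrix S(r) = diag(e^{-r}, e^{r})."""
    return [[math.exp(-r), 0.0], [0.0, math.exp(r)]]

def is_symplectic(S):
    """Check the defining relation S Omega S^T = Omega."""
    M = matmul(matmul(S, OMEGA1), transpose(S))
    return all(abs(M[i][j] - OMEGA1[i][j]) < 1e-12
               for i in range(2) for j in range(2))

print(is_symplectic(S_squeeze(0.7)))            # True: squeezing preserves Omega
print(is_symplectic([[2.0, 0.0], [0.0, 2.0]]))  # False: uniform scaling is not symplectic
```

The check makes explicit why $S(r)$ is symplectic yet nonorthogonal: it preserves $\Omega_1$ but not the Euclidean norm.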
In the following we will use the theorem by Williamson \cite{will}, according to which every matrix $M\in\mathcal{M}(2N,\mathbb R)$ with $M\geq0$ can be brought to a diagonal form through symplectic transformations: \begin{equation} SMS^{\rm T}={\rm diag}(s_1, s_1,\ldots, s_N, s_N), \end{equation} where $s_1,\ldots,s_N\geq 0$ are called the symplectic eigenvalues of $M$. By \begin{equation}\label{defstr} {\rm str}[M]:=\sum_{i=1}^N s_i \end{equation} we will denote the symplectic trace of $M$. \section{Entanglement witnesses for covariance matrices} An entanglement witness based on CMs is characterised by a real symmetric matrix $Z\geq 0$ such that \cite{hyll} \begin{eqnarray}\label{subeq} &\Tr[Z \gamma_s]\geq 1, \qquad {\rm for~all~separable~CMs} \qquad \gamma_s, \nonumber\\ &\Tr[Z \gamma]<1, \qquad {\rm for~some~entangled~CM} \qquad \gamma.\label{12b} \end{eqnarray} The EWs based on second moments defined in Eqs. (\ref{12b}) represent hyperplanes in the space of CMs that separate some entangled states from the set of separable CMs. If there exists $Z$ which fulfills the conditions (\ref{12b}), then the state with CM $\gamma$ is entangled, irrespective of whether it is Gaussian or not, while if this test does not detect entanglement in a given non-Gaussian state, then the result is inconclusive. The following Theorem fully characterises the EWs for multimode CV states defined in Eqs. (\ref{12b}) for different entanglement classes. \textbf{Theorem.} (taken from \cite{and,hyll}) \textit{A covariance matrix $\gamma$ of a $k-$partite system with $\sum_{j=1}^k N_j=N$ modes is entangled with respect to this partition iff there exists $Z$ such that \begin{equation}\label{dett} \Tr[Z \gamma]< 1, \end{equation} where $Z$ is a real, symmetric $2N\times 2N$ matrix satisfying \begin{eqnarray}\label{wine} & Z\geq 0, \nonumber\\ & \sum_{j=1}^k{\rm str}[Z_j]\geq \frac 1 2, \end{eqnarray} where $Z_j$ is the block matrix on the diagonal of $Z$ acting on the subsystem $j$.
Matrices $Z$ are called EWs based on second moments.} Due to the convexity of the set of separable CMs, there always exists an EW $Z$ giving the result of Eq. (\ref{dett}) for an entangled CM $\gamma$. In Ref. \cite{hyll} the authors formulated the above theorem in a slightly different way: in addition to Eqs. (\ref{wine}) it is stated that such an EW has to satisfy also ${\rm str}[Z]<\frac 1 2$, instead of the condition $\Tr[Z \gamma]< 1$. Note that there is no contradiction between the conditions (\ref{wine}) and ${\rm str}[Z]< \frac 1 2$ that an EW has to satisfy, since the relation ${\rm str} [Z]\leq\sum_{i=1}^N{\rm str}[Z_i]$ holds \cite{and}.\\ \indent Nevertheless, the two formulations of the Theorem above are equivalent. In order to show this we will use the results from Ref. \cite{and}, where it is proven that $\Tr[Z\gamma]\geq 1$ for all separable CMs $\gamma$ if and only if $Z\geq 0$ and $\sum_{j=1}^k{\rm str}[Z_j]\geq \frac 1 2$. In addition, it is shown there that $\Tr[Z \gamma]\geq1$ for all CMs $\gamma$ if and only if $Z\geq 0$ and ${\rm str}[Z]\geq \frac 1 2$. As $Z\ngeq 0$ would contradict $\Tr[Z \gamma]\geq1$ for all separable CMs $\gamma$, it follows that if $\Tr[Z\gamma]<1$ for some CM $\gamma$ then ${\rm str}[Z]< \frac 1 2$. Conversely, if ${\rm str}[Z]< \frac 1 2$, then there exists some CM $\gamma$ such that $\Tr[Z\gamma]<1$. The problem of finding an EW that most robustly detects entanglement in a given CM arises as a semidefinite optimization problem (SDP) (see Ref. \cite{hyll}, where the authors also provide numerical routines performing this task). Here we consider the situation when no information about the state is available, and we aim at constructing the EWs from given random measurements. For this purpose, the description of EWs given in the Theorem above can serve as constraints in our optimization program.
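Evaluating the constraint $\sum_j {\rm str}[Z_j]\geq \frac 1 2$ is straightforward for a two-mode witness, since each diagonal block $Z_j$ is a positive $2\times 2$ matrix whose Williamson form is ${\rm diag}(s,s)$ with the single symplectic eigenvalue $s=\sqrt{\det Z_j}$. A pure-Python sketch (the witness blocks are arbitrary illustrative values, not an optimized witness):

```python
import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def str_block(m):
    """Symplectic trace of a positive 2x2 block: its Williamson form is
    diag(s, s) with s = sqrt(det M)."""
    return math.sqrt(det2(m))

# Illustrative diagonal blocks of a candidate two-mode witness Z.
Z1 = [[0.4, 0.1], [0.1, 0.3]]
Z2 = [[0.5, 0.0], [0.0, 0.2]]

total = str_block(Z1) + str_block(Z2)
print(total >= 0.5)   # True for this choice: the str-sum condition holds

# The bound Tr[M] >= 2 str[M], invoked in the proof below, is easy to verify:
print(Z1[0][0] + Z1[1][1] >= 2 * str_block(Z1))   # True
```

The same evaluation extends blockwise to more modes, one square root per mode.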
However, the inequality (\ref{wine}) cannot be used in this form as a semidefinite constraint in an SDP, because its left-hand side cannot be regarded as a linear function, since the symplectic eigenvalues of a matrix $M\geq 0$ are given by the eigenvalues of the matrix $M^{\frac 1 2}(i\Omega_{N}) M^{\frac 1 2}$ \cite{sera}. In the following, we propose a set of linear semidefinite constraints for EWs, which are stronger than the conditions (\ref{wine}). \textbf{Proposition.} \textit{For the entanglement witness $Z$ of a $k-$partite entangled $N-$mode covariance matrix with $\sum_{j=1}^k N_j=N$, the inequalities (\ref{wine}) are satisfied if the following conditions are fulfilled: \begin{eqnarray}\label{prop} & Z\geq 0, \nonumber\\ & Z_j+i\frac{x_j}{N_j} \Omega_{N_j} \geq 0, \quad x_j\in \mathbb{R},\quad j=1,\ldots,k-1,\\ & Z_k+i \frac{1}{N_k}\Big(\frac 1 2-\sum_{j=1}^{k-1}x_j\Big) \Omega_{N_k} \geq 0.\nonumber \end{eqnarray}} \begin{proof} \begin{enumerate} \item In the first part we prove the proposition for $k=N$. First, we prove this for $N=2$, i.e. for a two-mode witness with the following block form: \begin{equation}\label{z2} Z=\left({\begin{array}{*{20}c} Z_1 & Z_c \\ Z_c^{\rm T} & Z_2 \end{array}}\right), \end{equation} where $Z_1$ and $Z_2$ are $2 \times 2$ matrices. Since $Z$ is a positive semidefinite matrix, the principal submatrices $Z_1$ and $Z_2$ are also positive semidefinite. Let us assume the following inequality: \begin{equation} Z_1+i x \Omega_1 \geq 0,\quad {\rm where} \quad \Omega_1= \left({\begin{array}{*{20}c} 0 & 1 \\ -1 & 0 \end{array}}\right), \quad x\in \mathbb R.
\end{equation} By a symplectic transformation $S$, the positive matrix above can be brought to the Williamson normal form as follows \footnote{Any symplectic transformation preserves the symplectic eigenvalues, and since ${\rm Tr}[M]\geq 2 \ {\rm str}[M]$ holds for any positive matrix $M$ \cite{bathia}, symplectic transformations also preserve positivity.}: \begin{equation}\label{poz} S(Z_1+i x \Omega_1 )S^{T}=Z_1^w+i x \Omega_1=\left({\begin{array}{*{20}c} z_1 & ix \\ -ix & z_1 \end{array}}\right), \end{equation} where $Z_1^w={\rm diag}(z_1,z_1)$, with $z_1$ the positive symplectic eigenvalue of $Z_1$. The eigenvalues $\alpha$ of the matrix (\ref{poz}) are determined from the equation: \begin{equation} (z_1-\alpha)^2-x^2 =(z_1-\alpha-x )(z_1-\alpha+x )=0, \end{equation} and hence \begin{equation} z_1\pm x=\alpha\geq 0. \end{equation} Thus, the symplectic eigenvalue $z_1$ fulfills the inequality $z_1\geq\pm x$, i.e. $z_1\geq |x|$. A similar inequality can be formulated for the block matrix $Z_2$: \begin{equation} Z_2+i(\frac 1 2-x) \Omega_1 \geq 0, \end{equation} from which we obtain the following condition for the symplectic eigenvalue $z_2$: \begin{equation} z_2\geq \Big|\frac 1 2-x\Big|. \end{equation} Now, the sum of the symplectic eigenvalues gives: \begin{equation} z_1+z_2\geq |x|+\Big|\frac 1 2 -x\Big|\geq \Big|x+\frac 1 2 -x \Big|=\frac 1 2. \end{equation} The above inequality assures that the condition (\ref{wine}) is always fulfilled. The generalization to more modes is straightforward. For instance, consider a three-mode CM for which we want an EW detecting three-partite entanglement.
Then, according to the Proposition, we need to impose constraints on the three block diagonal matrices of the witness, which amount to the following inequalities for the corresponding symplectic eigenvalues: \begin{eqnarray}\label{gen} & z_1\geq |x_1|, \quad x_1\in \mathbb{R},\nonumber\\ & z_2\geq |x_2|, \quad x_2\in \mathbb{R},\\ & z_3\geq \Big|\frac 1 2 -x_1-x_2\nonumber\Big|. \end{eqnarray} These inequalities imply the constraint (\ref{wine}). \item In the second part, we present the generalization of the proof for $k-$partite entanglement of $N-$mode CMs, with $k<N$. Consider, for simplicity, a three-mode state and the bipartition between the first and the other two modes. The witness $Z$ is a $6\times 6$ matrix where $Z_1$ is the $2\times 2$ block diagonal matrix of $Z$ acting on the first mode, and we denote by $Z'$ the $4\times 4$ block matrix acting on the other two modes. Then the corresponding constraints on the witness are: \begin{eqnarray} & Z\geq 0, \nonumber\\ & Z_1+i x \Omega_{1} \geq 0, \quad x\in \mathbb{R},\\ & Z'+i \frac{1}{2}(\frac 1 2-x) \Omega_{2} \geq 0.\nonumber \end{eqnarray} If we denote by $z_1$ the symplectic eigenvalue of $Z_1$, and by $z_1'$, $z_2'$ the two symplectic eigenvalues of $Z'$, then the conditions above are equivalent to: \begin{eqnarray} & z_1\geq |x|, \quad x\in \mathbb{R},\nonumber\\ & z_1'\geq \frac 1 2 \Big|\frac 1 2 -x\Big|, \\ & z_2'\geq \frac 1 2 \Big|\frac 1 2 -x\Big|\nonumber, \end{eqnarray} which satisfy the condition (\ref{wine}). Since this is a bipartite state, the lower bounds for the three symplectic eigenvalues depend on a single parameter $x$, while, according to the Proposition, the detection of three-partite entanglement in a three-mode state would require two optimization parameters, $x_1$ and $x_2$ (see Eq. (\ref{gen})). The generalization of the proof to $N$ modes and $k$ parties is straightforward. 
Note that the conditions in the Proposition are stronger for $k-$partite entanglement (with $k<N$) than for genuine multipartite entanglement (i.e. $k=N$), where the optimization for every symplectic eigenvalue is done independently. \end{enumerate}\end{proof} While the semidefinite inequalities proposed in the previous Proposition present the advantage of being linear, the drawback of these constraints is that they are stronger than those required by the Theorem characterizing the EWs based on second moments, and therefore some EWs will not satisfy the conditions (\ref{prop}). \section{Entanglement witnesses from random measurements} Here we briefly present the physical set-up of homodyne detection, which encodes the experimental settings measuring the variances of the state. Homodyne measurements are phase sensitive measurements which allow the detection of the moments of the quadratures up to the second order \cite{braun,weed}. We denote by $\hat k$ and $\hat k^\dag$ the mode operators of our state. A simple scheme for balanced homodyne measurements is composed of a balanced beam splitter superposing the signal mode to be measured $\hat k$ with a strong local oscillator field $\alpha_{LO}=|\alpha_{LO}|e^{i\theta}$ with phase $\theta$, and two photon detectors, converting the electromagnetic modes into two output photon currents, $i_1$ and $i_2$. The actual quantity to be measured is the difference between the photon currents, given by: \begin{equation} \delta i =i_1-i_2=q |\alpha_{LO}| \ \langle\hat x_{\theta}\rangle, \end{equation} with $q$ being a constant, and $\hat x_{\theta}$ the generalized quadrature operator of mode $\hat k$ defined as: \begin{equation} \hat x_{\theta}=\frac{\exp{(-i\theta)}\hat k+\exp{(i\theta)}\hat k^{\dag}}{\sqrt{2}}, \end{equation} which covers the whole continuum of quadratures for $\theta \in [0, \pi]$. It was shown in Ref.
\cite{proj} that in the strong local oscillator limit the homodyne detection performs the projective measurements corresponding to the POVM $|x_\theta\rangle \langle x_\theta|$, where $| x_\theta\rangle$ is the eigenstate of the quadrature phase operator $\hat x_\theta$. For two-mode homodyne detection, we rely on the detection scheme proposed in Ref. \cite{dauria}, where the two-mode states are characterized by a single homodyne detector. Denoting by $\hat a$ and $\hat b$ the initial modes to be detected, the mode $\hat k$ arriving at the detector can be expressed as \cite{dauria} \begin{equation}\label{kk} \hat k=\exp(i \varphi) \cos \phi \ \hat a +\sin \phi \ \hat b, \end{equation} which corresponds to applying a phase shift of angle $\varphi$ between the horizontal and vertical polarization components, a polarization rotator of angle $\phi$, and a polarizing beam splitter (PBS) reflecting the vertically polarized component of the beam toward the detector \cite{dauria}. Using repeated measurements of the quadratures for a set of identical states, the homodyne data are collected, to which a probability distribution can be assigned, with the variance given by: \begin{equation}\label{p} \langle \hat x_{\theta}^2 \rangle-\langle \hat x_{\theta}\rangle^2=\Tr[P \gamma], \end{equation} where $P$ is the matrix for the measurement of the quadrature variance of the mode $\hat k$: \begin{equation} P=u u^{\rm T}, \quad u^{\rm T}=\left({\begin{array}{*{20}c} \cos \phi \cos(\theta-\varphi)& \cos \phi \sin(\theta-\varphi)&\sin \phi \cos \theta & \sin \phi \sin \theta \end{array}}\right). \end{equation} As $P$ is a symmetric, real $4\times4$ matrix, we can see that for $10$ different combinations of the angles $\theta$, $\phi$ and $\varphi$ the entire two-mode CM can be reconstructed (the number of unknown independent parameters in an $N-$mode CM is $N(2N+1)$).
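The measurement matrix $P=uu^{\rm T}$ above is symmetric, rank one, and has unit trace, since the coefficient vector $u$ is normalised for any choice of angles. A pure-Python sketch of its construction (the angle values are arbitrary illustrations):

```python
import math

def homodyne_P(theta, phi, varphi):
    """Rank-one measurement matrix P = u u^T for the variance of the
    generalized quadrature of the combined mode k, as in the text."""
    u = [math.cos(phi) * math.cos(theta - varphi),
         math.cos(phi) * math.sin(theta - varphi),
         math.sin(phi) * math.cos(theta),
         math.sin(phi) * math.sin(theta)]
    return [[u[i] * u[j] for j in range(4)] for i in range(4)]

P = homodyne_P(0.3, 1.1, 2.0)

# P is symmetric with unit trace, since |u|^2 = cos^2(phi) + sin^2(phi) = 1.
trace = sum(P[i][i] for i in range(4))
symmetric = all(abs(P[i][j] - P[j][i]) < 1e-12
                for i in range(4) for j in range(4))
print(round(trace, 12), symmetric)   # 1.0 True
```

Each random angle triple thus yields one linear functional $\Tr[P\gamma]$ of the CM; ten independent triples suffice to reconstruct a two-mode CM completely.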
The extension of detection to $N-$mode CV states by a single homodyne detector can be achieved by applying the same two-mode combination scheme $N-1$ times. For example, for the initial modes $\hat a$, $\hat b$ and $\hat c$, the generalized mode arriving at the detector is: \begin{equation} \hat k= \exp(i \varphi_1) \cos \phi \ \hat a + \exp(i \varphi_2)\sin \phi \cos \psi \ \hat b+ \sin \phi \sin \psi \ \hat c, \end{equation} from which we can see that for $\psi=0$ and $\varphi_2=0$ the two-mode case in Eq. (\ref{kk}) is recovered. We denote by $P_j$ the matrix of the $j$-th measurement. \subsection{Constructing witnesses} Random measurement directions in the case of two modes are given by random angles $\theta, \phi, \varphi$ that are drawn from a uniform distribution in an interval: \begin{eqnarray}\label{ang} 0\leq & \theta\leq \pi, \\ 0\leq & \phi \leq \pi, \\ 0\leq & \varphi < 2 \pi. \end{eqnarray} The problem of finding a witness operator $Z$, given the repeated independent measurements $P_j$ on the CM, reduces to finding the coefficients $c_j$ such that $Z=\sum_{j} c_j P_j$. Therefore, we apply the Proposition in order to find the best witness for two-mode CMs, and propose the following SDP: \begin{eqnarray} {\rm minimize~over} {~c_j,\, x:} {~~~~}{\bf{c\cdot m}}{}{}\nonumber \\ {\rm subject~to:}~~ {Z=\sum_j c_j P_j,}\nonumber \\ ~~~~~~~~~~~~~~~~~{Z=\left({\begin{array}{*{20}c} Z_1 & Z_c \\ Z_c^{\rm T} & Z_2 \end{array}}\right)\geq 0, }\label{min}\\ ~~~~~~~~~~~~~~~~~{Z_1+i x \Omega_1 \geq 0, }\nonumber \\ ~~~~~~~~~~~~~~~~~{Z_2+i (\frac 1 2 - x) \Omega_1 \geq 0}, \nonumber \end{eqnarray} where ${\bf m}$ is the vector of measured variances, with components $m_j=\Tr[P_j\gamma]$. This SDP finds the matrix $Z$, given the experimentally obtained data, such that \begin{equation}\label{w} \bf{c\cdot m}=\Tr[Z\gamma] \end{equation} takes its minimal value, while being an EW as defined in the Theorem above. If the obtained value in Eq.
(\ref{w}) is smaller than one, then the CV state with CM $\gamma$ can be unambiguously identified as being entangled. This SDP also allows for the identification of the minimal number of measurements that are required for entanglement assessment in arbitrary states. The number of measurements in a tomographically complete setting is given by $N(2N+1)$, where $N$ is the number of modes. This is the maximal number of measurement settings required to detect entanglement. However, the set of EWs described in the Proposition is more restrictive than the set of all EWs. The consequences will be discussed later. \section{Detection of non-PPT entanglement and bound entanglement} The proposed SDP has the immediate advantage that it does not require any information about the state, except the number of modes $N$. We will now test the performance of this method by simulating its implementation on random two-mode entangled CV states, and on four-mode bipartite bound entangled states. \\ \indent The entanglement of two-mode CMs with block structure given by: \begin{equation}\label{bl} \gamma=\left({\begin{array}{*{20}c} \gamma_1 & \varepsilon_{1,2}\\ \varepsilon_{1,2}^{\rm T} & \gamma_2 \end{array}}\right), \end{equation} is quantified by means of the logarithmic negativity \cite{ser4}: \begin{eqnarray}\label{ent} E=\max \{0,-\frac{1}{2}\log_2 f\}, \end{eqnarray} where \begin{eqnarray}\label{logneg} f=\frac{1}{2}(\det \gamma_1 +\det \gamma_2)-\det \varepsilon_{1,2} -\left({\left[\frac{1}{2}(\det \gamma_1+\det \gamma_2)-\det \varepsilon_{1,2}\right]^2-\det\gamma}\right)^{1/2}.\nonumber\end{eqnarray} An EW provides a lower bound for the logarithmic negativity measure when the positive partial transpose (PPT) criterion of separability is necessary and sufficient \cite{and}: \begin{equation} E\geq \log_2 \frac{1}{w}, \end{equation} where $w\in(0,1)$ is the outcome of measuring an EW on CM $\gamma$: $\Tr[Z\gamma]=w$. 
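The semidefinite constraints entering the SDP can be checked numerically for any candidate $Z$; the following sketch (our illustration, assuming numpy is available; it is a feasibility test of the constraints, not the SDP itself) verifies them via the smallest eigenvalues of the corresponding Hermitian matrices.

```python
import numpy as np

Omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])  # one-mode symplectic form

def satisfies_witness_constraints(Z, x, tol=1e-10):
    """Check the semidefinite constraints of the SDP:
    Z >= 0, Z_1 + i x Omega_1 >= 0, Z_2 + i (1/2 - x) Omega_1 >= 0."""
    Z1, Z2 = Z[:2, :2], Z[2:, 2:]
    blocks = [Z.astype(complex),
              Z1 + 1j * x * Omega1,
              Z2 + 1j * (0.5 - x) * Omega1]
    return all(np.linalg.eigvalsh(M).min() >= -tol for M in blocks)

# Z = I/2 with x = 1/4 is feasible; scaling Z down violates the
# uncertainty-type constraints:
feasible = satisfies_witness_constraints(0.5 * np.eye(4), 0.25)
infeasible = satisfies_witness_constraints(0.1 * np.eye(4), 0.25)
print(feasible, infeasible)
```

In an actual implementation the same three constraints would be imposed on the optimization variables of a semidefinite solver.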
For two-mode CMs the logarithmic negativity corresponds to the minimal\footnote{Compared to the optimal EW in state space, the minimal EW based on second moments gives the best estimate of the degree of entanglement the considered state has, but it is not necessarily the finest witness \cite{and}. } EW $Z_{min}$ giving the smallest possible value $w_{min}$. In the following we investigate the efficiency of our method for detecting entanglement of arbitrary CV states, with respect to the minimal number of measurements required to accomplish this task. Thus, given an arbitrary unknown CM our algorithm first computes the variance of the generalized quadrature (\ref{p}) for one random measurement direction in phase space and then carries out the SDP optimization to check if the state is entangled. If entanglement is not detected, additional random measurements are successively simulated and the optimization algorithm is executed each time until entanglement is detected. At least two measurement settings are required in order to detect entanglement. \subsection{Detecting entanglement in squeezed vacuum states} The CMs of squeezed vacuum states have the form: \begin{equation}\label{sts} \gamma=\left({\begin{array}{*{20}c} \cosh 2r & 0 & \sinh 2r & 0 \cr 0 & \cosh 2r & 0 & - \sinh 2r \cr \sinh 2r & 0 & \cosh 2r & 0 \cr 0 & - \sinh 2r & 0 & \cosh 2r \end{array}}\right), \end{equation} where $r$ is the squeezing parameter. For such states the logarithmic negativity can be calculated using (\ref{ent}), obtaining a linear dependence on the squeezing parameter: \begin{equation}\label{entv} E=2 ~ r \log_2 e,\end{equation} where $e$ is the base of the natural logarithm (Euler's number).
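The linear law (\ref{entv}) can be verified directly from Eqs. (\ref{ent})-(\ref{logneg}); a short numerical check (our sketch, numpy assumed) is:

```python
import numpy as np

def log_negativity(gamma):
    """Logarithmic negativity of a two-mode CM, Eqs. (ent)-(logneg)."""
    g1, g2, eps = gamma[:2, :2], gamma[2:, 2:], gamma[:2, 2:]
    s = 0.5 * (np.linalg.det(g1) + np.linalg.det(g2)) - np.linalg.det(eps)
    f = s - np.sqrt(s ** 2 - np.linalg.det(gamma))
    return max(0.0, -0.5 * np.log2(f))

r = 0.8
c, sh = np.cosh(2 * r), np.sinh(2 * r)
gamma = np.array([[c, 0, sh, 0],
                  [0, c, 0, -sh],
                  [sh, 0, c, 0],
                  [0, -sh, 0, c]])
E = log_negativity(gamma)
print(E, 2 * r * np.log2(np.e))  # both equal 2 r log_2(e)
```

Here $\det\gamma_1=\det\gamma_2=\cosh^2 2r$, $\det\varepsilon_{1,2}=-\sinh^2 2r$ and $\det\gamma=1$, so $f=e^{-4r}$ and $E=2r\log_2 e$, as claimed.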
Squeezed vacuum states are Gaussian states, which are naturally accessible in many experimental situations where spontaneous parametric down-conversion is involved, and they are also useful in many quantum optics applications \cite{braun, weed} \footnote{Recent experiments report the measurement of 15 dB of squeezed light \cite{vahl}, which corresponds to $r\approx 1.73$ according to the formula \cite{ads}: $ \# {\rm dB}= 10 \log_{10} e^{2 r}$.}. \begin{figure}[h] \centering \includegraphics[width=0.95\columnwidth]{lastt.png}\\ \caption{Fraction of entanglement detection of squeezed vacuum states: $5\times10^5$ runs of the algorithm on the two-mode squeezed vacuum states (\ref{sts}) with squeezing parameter $r\in[0,2]$. The logarithmic negativity is given by $E=2 ~ r \log_2 e$ (see Eq. (\ref{entv})). By successively adding measurement directions, the EW is evaluated at every round until the presence of entanglement is certified. The data are normalized such that they sum up to $1$ for every value of entanglement.} \end{figure} In Fig. 1 we show the fraction of entanglement detection of squeezed vacuum states, using random EWs. Contrary to intuition, states with less entanglement are more easily detected, i.e. they require on average fewer measurements than states with higher entanglement. This is due to the fact that in this case the amount of entanglement is linked to the strength of quadrature squeezing. It is well known that it is difficult to measure high squeezing in CV states \cite{vahl} (see also the explanation given in Fig. 2). Full tomography for two-mode CMs is reached with $10$ independent measurements. The CM (\ref{sts}) of the squeezed vacuum state has some zero elements, and with this knowledge about the state one would need only $6$ measurements to reconstruct the CM entirely. However, our method may require more than $6$ measurements to assess entanglement since we assumed no information about the states, except the dimension of the CM.
As a consequence of the stronger constraints imposed on the EWs in Eqs. (\ref{prop}) our method requires, with very low probability ($0.0094 \%$ in our example), more than the $10$ measurements corresponding to full tomography.\\ \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{var1.pdf}\qquad \quad \includegraphics[width=0.48\textwidth]{var2.pdf} \caption{ The variance of the generalized quadrature $\Tr[P \gamma]$, see Eq. (\ref{p}), of the squeezed vacuum CM, as a function of $\varphi\in [-\pi,\pi]$ and $\phi\in [-\pi,\pi]$, for $\theta=0$ and squeezing parameter $r=0.2$ (left) and $r=1$ (right). The horizontal plane represents $\Tr[P \gamma]=1$, which is the case of the squeezed vacuum states with $r=0$ (separable states). In the regions below this plane entanglement is detected.} \end{figure} In Fig. 2 we show the variance of the generalized quadrature $\Tr[P\gamma]$, see $(\ref{p})$, for $\theta=0$, as a function of $\varphi\in [-\pi,\pi]$ and $\phi\in [-\pi,\pi]$, for different values of the squeezing parameter, $r=0.2$ (left) and $r=1$ (right), of the squeezed vacuum state. The outcomes of the random measurements are represented by the points on this surface. The horizontal plane is given by $\Tr[P \gamma]=1$, which holds for a separable vacuum state with $r=0$. The areas below this plane, where $\Tr[P \gamma]<1$, correspond to the region of parameters $\varphi$ and $\phi$ for which entanglement is detected. We observe that the areas of the regions of entanglement detection decrease with increasing squeezing. This corresponds to the fact that highly squeezed states occupy a smaller region in phase space in terms of the angles $\phi, \varphi$. Thus, more random measurements are needed to detect entanglement. \subsection{Detecting entanglement in random covariance matrices} Random CMs are produced as follows.
Starting with a CM in diagonal form, with symplectic eigenvalues $\nu_i\geq1$ ($i=1,...,N$) randomly generated from a uniform distribution in a finite real interval $[1,t]$, $t>1$: \begin{equation}\label{th} \gamma_{th}= \bigoplus_{i=1}^N \left({\begin{array}{*{20}c} \nu_i & 0 \\ 0 & \nu_i \\ \end{array}}\right), \end{equation} the general random CMs $\gamma$ are created by applying random symplectic transformations $S\in Sp(2N,\mathbb R)$, as follows \cite{sera}: \begin{equation}\label{th1} \gamma=S \gamma_{th} S^{\rm T}. \end{equation} The matrix (\ref{th}) is the CM of thermal states, with the symplectic eigenvalue of every mode $i$ related to the thermal photon number $n_i$ as follows: $\nu_i=2 n_i +1$ \cite{sera}. Random symplectic matrices are generated using the Euler decomposition (\ref{eu}). First, unitary matrices $X$ and $Y$ in Eq. (\ref{ku}) are generated from the Haar distribution \cite{dutta}, and the symplectic orthogonal matrices $K$ and $L$ are formed as in Eq. (\ref{sorth}). The one-mode squeezers defined in Eq. (\ref{sqr}) are created by randomly choosing parameters $r_i$ via a uniform distribution in a finite interval. For this purpose we implemented the Matlab code presented in Ref. \cite{jagger}. In Fig. 3 we illustrate the efficiency of entanglement detection for general random two-mode CMs, created from thermal state CMs (\ref{th}) with random symplectic eigenvalues $\nu_i\in[1,5]$, by random symplectic transformations (\ref{eu}) with squeezing parameters $r_i\in[0,2]$. The probability that entanglement is detected by $11-12$ measurements in this case is $0.05 \%$. Our method shows a slight improvement in the efficiency of entanglement detection for highly entangled states compared to less entangled states, and most of the time it does not require full tomography.
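A minimal recipe for generating such random CMs can be sketched in a few lines of Python (our illustration, not the Matlab code of Ref. \cite{jagger}; numpy assumed). For compactness it uses the $(x_1,\dots,x_N,p_1,\dots,p_N)$ ordering, a QR-based Haar-random unitary, and the standard embedding of an $N\times N$ unitary into a $2N\times 2N$ symplectic orthogonal matrix.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 2  # number of modes; ordering (x_1..x_N, p_1..p_N)
Omega = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])

def haar_unitary(n):
    """Haar-random n x n unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def symplectic_orthogonal(u):
    """Embed an N x N unitary as a 2N x 2N symplectic orthogonal matrix."""
    return np.block([[u.real, u.imag], [-u.imag, u.real]])

# Euler decomposition S = K Z L: passive optics - squeezers - passive optics
K = symplectic_orthogonal(haar_unitary(N))
L = symplectic_orthogonal(haar_unitary(N))
r_sq = rng.uniform(0, 2, size=N)                       # squeezing parameters
Z = np.diag(np.concatenate([np.exp(r_sq), np.exp(-r_sq)]))
S = K @ Z @ L

nu = rng.uniform(1, 5, size=N)                         # symplectic eigenvalues >= 1
gamma = S @ np.diag(np.concatenate([nu, nu])) @ S.T    # random physical CM
print(np.allclose(S @ Omega @ S.T, Omega))             # S is symplectic
```

Since $\gamma=S\gamma_{th}S^{\rm T}$ with $S$ symplectic, the resulting CM automatically satisfies the uncertainty relation $\gamma+i\Omega\geq 0$.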
The evident difference in the efficiency of entanglement detection in random CMs compared to squeezed vacuum states may reside in the fact that highly squeezed states look classical in random measurement directions, which does not have to be the case for random states. In addition, squeezed vacuum states are a special class of states for which the logarithmic negativity has a linear dependence on the squeezing parameter alone (see Eq. (\ref{entv})), while for a general two-mode CM the logarithmic negativity also depends on the thermal photon numbers of the modes, and the simulation of entanglement detection shows a different behaviour. In general, it is unlikely to randomly draw an entangled state with high logarithmic negativity, especially for states with a high number of modes. However, for the two-mode CMs, with the range of entanglement considered in Fig. 3, a substantial fraction of randomly generated CMs is entangled, which allowed us to perform the simulation. \begin{figure}[h] \centering \includegraphics[width=0.95\columnwidth]{last52.png}\\ \caption{Fraction of entanglement detection for random two-mode states: $5\times10^5$ runs of the algorithm on random two-mode CMs for $\nu_i\in[1,5]$ and $r_i\in[0,2]$. By successively adding measurement directions, the EW is evaluated at every round until the presence of entanglement is certified. The data are normalized such that they sum up to $1$ for every value of entanglement.} \end{figure} \subsection{Detecting bipartite bound entanglement} Since the proposed SDP algorithm can be easily generalized to multi-mode CV states, we provide an example of a four-mode CM with $12$ independent parameters, mentioned in Ref.
\cite{wolf}, which has bipartite bound entanglement: \begin{equation}\label{4} \gamma=\left({\begin{array}{*{20}c} 2 & 0 & 0 & 0 & 1 & 0 & 0 & 0\cr 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1\cr 0 & 0 & 2 & 0 & 0 & 0 & -1 & 0\cr 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0\cr 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0\cr 0 & 0 & 0 & -1 & 0 & 4 & 0 & 0\cr 0 & 0 & -1 & 0 & 0 & 0 & 2 & 0\cr 0 & -1 & 0 & 0 & 0 & 0 & 0 & 4 \end{array}}\right). \end{equation} The detection of bound entanglement by our method proves that the EWs defined in the Theorem above go beyond criteria which detect entanglement only in states with non-positive partial transpose. A general $N$-mode CM has $N(2N+1)$ independent variables, and for the four-mode CM in Eq. (\ref{4}) by performing $36$ measurements our algorithm provides the best estimate of entanglement, $\Tr[Z_{min} \gamma]=0.8966$, which is in agreement with the results of Ref. \cite{hyll}. In Fig. 4 we depict the frequency of entanglement detection as a function of the number of random measurements composing the witness. The CM in Eq. (\ref{4}) is of a rather simple form; however, the construction of the EW detecting bound entanglement requires $33$ random measurements on average, since our SDP considers the number of modes of the state as the only available information. \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{fourg.eps}\\ \caption{Fraction of entanglement detection for $4-$mode bipartite bound entangled state, see Eq. (\ref{4}): $10^4$ runs of the algorithm. The data are normalized such that they sum up to $1$.} \end{figure} \section{Statistical analysis} Until now we have considered only ideal measurements, where we used the exact variances $m_i={\rm Tr}[P_i \gamma]=(\Delta \hat{x} _{\theta_i})^2$ (see Sec. 4) in order to construct the entanglement witness. In real experiments the accessible data are subject to statistical fluctuations.
In the following we perform the statistical analysis for the case of Gaussian states, that is, we assume that the data obtained in homodyning, which represent the collection of outcomes $X_{ij}=\langle \hat{x}_{\theta_i}\rangle_j$, ($j=1,\ldots,n_i$), from $n_i$ repetitions of the measurement with the measurement direction given by $\theta_i$, are governed by the normal probability distribution $\mathcal{N}_i(\mu_i,m_i)$ with mean $\mu_i$ and variance $m_i=(\Delta \hat x_{\theta_i})^2$. Given the homodyne data from $n_i$ measurements for a fixed measurement direction $\theta_i$, the sample variance denoted as $\bar P_i$, which estimates the variance $m_i$, is given by \footnote{Using $n_i-1$ instead of $n_i$ corrects the bias in the estimation of the population variance, and is called Bessel's correction \cite{ken}. This method is necessary when the population mean $\mu_i$ is unknown, but is estimated by the sample mean $\bar X_i$. }: \begin{equation}\label{sampv} \bar P_i=\frac{1}{n_i-1} \sum_{j=1}^{n_i} (X_{ij}-\bar X_{i})^2, \end{equation} where $\bar X_{i}$ is the sample mean: \begin{equation} \bar X_{i}=\frac{1}{n_i} \sum_{j=1}^{n_i} X_{ij}. \end{equation} In this case, the estimated value of our witness ${\rm Tr}[Z \gamma]$, denoted as $\bar Z$, is given by: \begin{equation}\label{zp} \bar Z=\sum_i c_i \bar P_i, \end{equation} where the index $i$ is used to denote different measurement settings, and the coefficients $c_i$ were introduced in Eq.(\ref{min}). In the case when the data comes from a Gaussian probability distribution, the distribution of the sample variance follows the $\chi_{n_i-1}^2$ distribution \cite{chi}: \begin{equation} \frac {n_i-1} {m_i}\bar P_i\sim \chi_{n_i-1}^2, \end{equation} where $\chi_{n_i-1}^2$ is the chi-square distribution with $n_i-1$ degrees of freedom, which by definition represents the distribution of the sum of squares of $n_i-1$ independent, standard normal random variables.
The statistical error carried by $\chi_{n_i-1}^2$ is given by: \begin{equation} \Delta \chi_{n_i-1}^2=\sqrt{{\rm Var}(\chi_{n_i-1}^2)}=\sqrt{2(n_i-1)}, \end{equation} where ${\rm Var}(\chi_{n_i-1}^2)=2(n_i-1)$ is the variance of the chi-square distribution \cite{chi}. Using the error propagation formula the uncertainty of $\bar P_i$ satisfies: \begin{equation}\label{inneq} \Delta \bar P_i=\frac{d \bar P_i}{d \chi_{n_i-1}^2} \Delta \chi_{n_i-1}^2=\frac{m_i}{n_i-1}\Delta \chi_{n_i-1}^2=m_i\sqrt{ \frac{2}{n_i-1} }. \end{equation} Using again standard error propagation and considering that the number of measurement repetitions is equal for every measurement direction, i.e., $n=n_i$ for every $i$, we obtain that the resulting error of $\bar Z$ defined in Eq. (\ref{zp}) has the following expression: \begin{equation}\label{indep} \Delta \bar Z=\sqrt{\sum_i \Big(\frac{d \bar Z}{d \bar P_i}\Big)^2 (\Delta \bar P_i)^2}=\sqrt{ \frac{2}{n-1} }\sqrt{\sum_i c_i^2 m_i^2}. \end{equation} We stress the fact that, although by our method we can also detect entanglement in non-Gaussian states, this formula for the error of the value of the witness is valid only for Gaussian states. If the $X_{ij}$ are not normally distributed, then the statistical analysis of entanglement witnesses based on second moments will require also higher moments of the distribution. In our method the coefficients $c_i$ are derived from the variances $m_i$ (see Eq. (\ref{min})), while Eq. (\ref{indep}) neglects the fact that they are not independent. To solve this difficulty one has to divide the homodyne data into two sets. First, one of them is used to derive the coefficients $c_i$, and then this witness is evaluated using the variances obtained from the other set of data \cite{mor}. In this way, the coefficients can be regarded as independent from the errors in the variances of the second set of data. 
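The scaling in Eq. (\ref{inneq}) is easy to confirm with a small Monte Carlo simulation (our illustration, numpy assumed): for Gaussian data, the empirical scatter of the Bessel-corrected sample variance matches $m_i\sqrt{2/(n_i-1)}$.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = 2.0      # population variance m_i of the quadrature data
n = 400           # repetitions per measurement setting
trials = 5000     # independent realizations of the sample variance

# Gaussian homodyne-like data; ddof=1 implements the Bessel-corrected
# sample variance of Eq. (sampv).
data = rng.normal(0.0, np.sqrt(m_true), size=(trials, n))
sample_vars = data.var(axis=1, ddof=1)

predicted = m_true * np.sqrt(2.0 / (n - 1))  # Eq. (inneq)
observed = sample_vars.std()
print(observed, predicted)  # agree to within a few percent
```

The same simulation applied to non-Gaussian data would show deviations, in line with the caveat above that the error formula holds only for Gaussian statistics.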
\begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{errlast.pdf}\\ \caption{The maximum of the $3\sigma$ confidence interval for the witness $Z$ of a Gaussian CM $\gamma$, as obtained by the statistical estimate according to Eq. (\ref{indep}). The horizontal dashed line indicates the minimal value of the witness for the considered CM $\Tr[Z_{min}\gamma]=0.852$. The vertical dashed lines indicate the number of measurement repetitions required to detect entanglement with $6$ (blue), $7$ (orange) and $8$ (green) measurement settings.} \end{figure} With the quantity in Eq. (\ref{indep}) it is possible to decide whether it is better to perform additional repetitions of the measurements, or to add new measurements to detect entanglement. For example, consider the detection of a weakly entangled CM with $\Tr[Z_{min} \gamma]=0.852$. In Fig. 5 the $3\sigma$-confidence of $\Tr[Z\gamma]$ is plotted as a function of the number $n$ of measurement repetitions. It shows that a certification of $\Tr[Z\gamma]<1$ with $99.7\%$-confidence is possible for $6$ measurements, which requires a high number of repetitions of the measurements. However, this number significantly decreases when adding another measurement setting. \section{Summary and conclusions} We have proposed a method to detect entanglement of unknown CV states, given only the dimension of their covariance matrices, using random homodyne measurements. Our method provides an alternative to performing full tomography. We characterize the entanglement witnesses based on second moments using stronger semidefinite constraints than those presented in Ref. \cite{hyll}, which guarantee that a valid witness is obtained in all cases. Therefore, a quantum state detected by this criterion can be unambiguously considered entangled. As these constraints are linear, they can be implemented in an SDP.
We studied the feasibility of this method in experimental situations, where the figure of merit is the number of measurements required to detect entanglement. First, we tested the proposed algorithm for two-mode squeezed vacuum states, for which the logarithmic negativity linearly depends on the squeezing. We showed that the number of necessary random measurements is very likely to be smaller than for full tomography. We observed an increasing number of measurements required to detect highly entangled states, which is explained by the well-known difficulties in detecting high squeezing. Our primary objective was to simulate the performance of this method for uniformly drawn random two-mode covariance matrices. Without adding any information about the states, we still found a reduction in the number of measurements needed to certify the presence of entanglement. The phenomenology of entanglement detection in random CV states is very similar to the case of decomposable witnesses for discrete systems \cite{jochn}. Hence, higher entanglement is easier to detect, but in our case this improvement is not as significant as in the discrete case. Only with low probability does our method need a tomographically complete set of measurements to detect entanglement in random two-mode states. Bound entangled CV states can also be efficiently detected by a random entanglement witness. Similarly to the previous cases, entanglement is detected with less than a tomographically complete set of measurements. The experimental scheme implementing our method for two-mode CV states consists of a phase shift in the polarization basis, a rotator of polarization, a polarizing beam splitter and a homodyne detector, as e.g. presented in Ref. \cite{dauria}. We also extended this scheme to multimode CV states. We investigated the statistical properties of the method and showed that it is robust against statistical errors.
\paragraph{Acknowledgements.} The authors thank Jochen Szangolies, Thomas Wagner and Matteo Paris for helpful discussions. Giulio Gianfelici has received funding from the German Federal Ministry of Education and Research (BMBF), within the Hardware-based Quantum Security (HQS) project. \section*{References}
{\bf Abstract.} We define the twisted de Rham cohomology and show how to use it to define the notion of an integral of the form $\int g(x) e^{f(x)}dx$ over an arbitrary ring. We also discuss a definition of a family of integrals and some properties of the homological definition of the integral. We show how to use the twisted de Rham cohomology in order to define the Frobenius map on the $p$-adic cohomology. Finally, we consider two-dimensional topological quantum field theories with general coefficients. \section {Introduction} Physicists usually work with real or complex numbers. It seems, however, that the consideration of other numbers (for example, $p$-adic numbers) can also be useful. This idea is not new (let us mention, for example, numerous papers devoted to $p$-adic strings). It was conjectured that $p$-adic numbers and/or adeles are relevant for the description of the space at small distances; this conjecture remains in the domain of speculation. However, it was shown that $p$-adic methods can be used as a mathematical tool that permits us to obtain information about theories over the complex numbers. For example, in \cite{inst} the $p$-adic analogue of the B-model was used to analyze integrality of instanton numbers. The $p$-adic B-model was defined there in a completely formal way; however, one can conjecture that it has a more physical definition and that such a definition can lead to a deeper understanding of standard topological sigma-models. One can make a stronger conjecture that many other physical theories can be formulated in the $p$-adic setting or, more generally, in the setting when the role of numbers is played by elements of any field or even of any ring and that such a formulation can be used to obtain information about standard physical theories. The present paper is a step in this direction.
Let us emphasize that our goal is to get new insights into the structure of standard physical theories from number-theoretic considerations; but one can hope that it is also possible to apply the ideas from physics to number theory. It was found recently that S-duality of gauge theories can be used to understand the geometric Langlands program \cite{KW}. It is natural to think that the Langlands program in number theory can also be analyzed by means of a corresponding version of gauge theory. We stressed that we want to use number theory in conventional physics. It is possible, however, that all physical quantities are quantized (there exists an elementary length, etc.). Then it is natural to believe that theories over the integers have direct physical meaning. To explain what we have in mind when speaking about ``physics over a ring'' we start with the following: \begin{defi} Physics is a part of mathematics devoted to the calculation of integrals of the form $\int g(x) e^{f(x)}dx.$ Different branches of physics are distinguished by the range of the variable $x$ and by the names used for $f(x),\, g(x)$ and for the integral. For example, in classical statistical physics $x$ runs over a symplectic manifold, $f(x)$ is called the Hamiltonian function and the integral has the meaning of a partition function or of a correlation function. In a $d$-dimensional quantum field theory $x$ runs over the space of functions on a $d$-dimensional manifold (the space of fields) and $f(x)$ is interpreted as an action functional. \end{defi} Of course, this is a joke: physics is not a part of mathematics. However, it is true that the main mathematical problem of physics is the calculation of integrals of the form $\int g(x) e^{f(x)}\,dx.$ If we work over an arbitrary ring $K$ the exponential function and the notion of the integral are not defined.
We will show that nevertheless one can give a suitable definition of an integral of the form $\int g(x) e^{f(x)}\,dx.$ Let us start with some simple remarks about integrals over $\mathbb{R}^n$ assuming that $g$ and $f$ are formal power series in the variable $\lambda$ with coefficients belonging to the ring of polynomials on $\mathbb{R}^n$ (in other words $f, g\in \mathbb{R}[x^1,...,x^n][[\lambda]]$). We note that this choice is different from $\mathbb{R}[[\lambda]][x^1,...,x^n]$ and it is more convenient for technical reasons. If $f$ can be represented as $f_0+\lambda V$ where $f_0$ is a negative quadratic form, then the integral $\int g(x) e^{f(x)}\,dx$ can be calculated in the framework of perturbation theory with respect to the formal parameter $\lambda$. We will fix $f$ and consider the integral as a functional $I(g)$ taking values in $\mathbb{R}[[\lambda]].$ It is easy to derive from the relation $$\int \partial _a (h(x) e^{f(x)})dx=0$$ that the functional $I(g)$ vanishes in the case when $g$ has the form $$g=\partial _a h +(\partial _a f) h.$$ One can show that this statement is sufficient to calculate $I(g)$ up to a constant factor. This is roughly equivalent to the observation that integration by parts is sufficient in this case to determine the integral as a power series with respect to $\lambda$. Later we will derive the uniqueness of $I(g)$ from some general considerations; however, one should notice that one can give an easy elementary proof by induction with respect to the degree of the polynomial $g$. One can consider the more general integral \begin{equation} \int e^{f(x)}\rho \end{equation} as a functional $I(\rho)$ whose argument is a form $\rho$ on $\mathbb{R}^n$. We assume that $\rho$ is a $k$-form with coefficients in $\mathbb{R}[x^1,...,x^n][[\lambda]]$ and the integrand is a closed form. The integration is performed over a $k$-dimensional subspace of $\mathbb{R}^n$.
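The vanishing of $I$ on combinations $g=\partial_a h+(\partial_a f)h$ is easy to check numerically in one variable (our illustration, plain Python): for $f(x)=-x^2/2$ and $h(x)=x^3$ one gets $g=3x^2-x^4$, which integrates to zero against $e^{f}$.

```python
import math

def weighted_integral(g, a=-12.0, b=12.0, steps=24000):
    """Trapezoidal approximation of the integral of g(x) exp(-x^2/2) dx;
    the integrand decays so fast that the rule is essentially exact here."""
    h = (b - a) / steps
    total = 0.0
    for k in range(steps + 1):
        x = a + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * g(x) * math.exp(-0.5 * x * x)
    return total * h

# h(x) = x^3, f(x) = -x^2/2  =>  g = h' + f' h = 3 x^2 - x^4
I_g = weighted_integral(lambda x: 3 * x ** 2 - x ** 4)
norm = weighted_integral(lambda x: 1.0)
print(I_g / norm)  # ~ 0: I vanishes on g = dh + (df) h
```

Indeed, the Gaussian moments give $\int(3x^2-x^4)e^{-x^2/2}dx=\sqrt{2\pi}\,(3\cdot 1-3)=0$, consistent with the numerical result.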
However, this integral is non-vanishing (recall that $f=f_0+\lambda V$) only in the case $k=n$, when it is essentially $I(g)$. Let us now consider a formal expression $I(\rho)=\int e^{f(x)}\rho$ where $x\in K^n$ and $\rho$ is a form on $K^n$ for an arbitrary ring $K$. We will assume that $f$ and the coefficients of the form $\rho$ belong to the ring $K[x^1,...,x^n][[\lambda]]$. Moreover, we will suppose that $f=f_0+\lambda V$ where $f_0=\frac{1}{2}x^t Ax$ and $A$ is an invertible matrix with entries from $K$. We will \emph{define} $I(\rho)$ as \emph{a} $K[[\lambda]]$-linear functional taking values in $K[[\lambda]]$ and vanishing on \begin{equation} \rho= dh+(df)h \end{equation} for an arbitrary form $h$. We will prove that this definition specifies $I(\rho)$ up to a constant factor on all forms satisfying $d\rho +(df)\rho=0$, in particular on all $n$-forms. This statement can be reformulated in homological terms by considering the twisted differential $d_f \rho=d\rho+(df)\rho$ on the space of all differential forms in $x^i$. We can normalize the functionals $I(g)$ and $I(\rho)$ by requiring that $I(g)=1$ if $g=1$ (or equivalently, $I(\rho)=1 $ if $\rho=dx^1...dx^n$). The normalized functionals are defined uniquely in the setting of perturbation theory if $f$ is a perturbation of a non-degenerate quadratic form. Notice that in the case when $K$ is a field one can use the standard Feynman diagram techniques with the propagator $A^{-1}$ and internal vertices specified by $V$ to calculate $I(g)$. The function $g$ determines the external vertices of the diagram. To prove this statement we notice that the sum of Feynman diagrams obeys \begin{equation} \label{d} I(gx^k)=I(A^{ka}\partial_ag)+I(A^{ka}\partial_a (\lambda V) g). \end{equation} This follows from the remark that multiplying $g$ by $x^k$ we add one external vertex to the diagram.
The diagrams for the new set of external vertices can be obtained from old diagrams by adding a new edge connecting the new external vertex to an old (external or internal) vertex; the first summand in the RHS of (\ref {d}) corresponds to an edge ending in an external vertex, the second summand to an edge ending in an internal vertex. On the other hand, (\ref {d}) is equivalent to the defining relation for the functional $I(g)$. Considering only Feynman diagrams without connected components having only internal vertices (i.e., discarding vacuum diagrams), we obtain the normalized functional $I(g)$. The interpretation of the defining relation for the functional $I(g)$ in terms of diagrams suggests a generalization of this relation to the case of infinite-dimensional integrals. The paper is organized as follows. First of all we define the twisted de Rham cohomology and show how to use it to define the notion of an integral over an arbitrary ring. We also discuss a definition of a family of integrals and some properties of the homological definition of the integral. Then we show how to use the twisted de Rham cohomology in order to define the Frobenius map on the $p$-adic cohomology. Finally, in the last section we consider two-dimensional topological quantum field theories with general coefficients. Throughout the paper we implicitly use the assumption that our algebraic manifolds are in fact affine varieties. This is not a crucial assumption and is used only to streamline the exposition. Whenever necessary the machinery of hypercohomology can be used to generalize the statements and constructions presented to the case of general algebraic manifolds. \section {Twisted de Rham cohomology and the homological definition of the integral} Let us consider a polynomial function $f(x)$ on the space $\mathbb{C}^n$. We define the twisted de Rham differential as $$d_f=d+df$$ where $d$ stands for the de Rham differential and $df$ denotes the operator of multiplication by the one-form denoted by the same symbol.
The twisted de Rham cohomology $\mathcal{H}_f$ is defined as the cohomology of the differential $d_f$ acting on the space of polynomial differential forms on $\mathbb{C}^n$. Notice that the restriction of the coefficients to polynomials is important: if we allow forms with arbitrary holomorphic coefficients, then the cohomology of $d_f$ is essentially trivial because the new differential is equivalent to the standard de Rham differential, namely $$d_f=e^{-f}\circ d\circ e^f.$$ It is easy to construct a linear functional on $\mathcal{H}_f$ starting with a singular cycle $\Gamma$ in $\mathbb{C}^n$ having the property that the function $e^f$ tends to zero faster than any power of $\|x\|$ when $x\in \Gamma$ tends to infinity. Namely, such a functional can be defined by the formula \begin{equation} \label{I} I([u])=\int _{\Gamma}{u e^f} \end{equation} where $u$ is a polynomial differential form obeying $d_f u=0$ and $[u]$ stands for its class in $\mathcal{H}_f$. The condition on the cycle $\Gamma$ ensures that the integral in (\ref{I}) makes sense, and the fact that the functional $I$ does not depend on the choice of the representative of the cohomology class $[u]$ follows as usual from Stokes' theorem. Moreover, one can show that every linear functional on $\mathcal{H}_f$ is a linear combination of functionals of this kind. Thus $\mathcal{H}_f$ captures exactly the minimal amount of information that is required to compute the integral over any possible contour.
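In one variable, the vanishing of $I$ on $d_f$-exact forms, which underlies this independence of the representative, can be checked directly with a computer algebra system. The sketch below is our own illustration (not taken from the text), with $f=-x^2/2$, $\Gamma=\mathbb{R}$, and an arbitrarily chosen polynomial $h$; it verifies that $\int_\Gamma (dh+(df)h)\,e^f=0$.

```python
import sympy as sp

x = sp.symbols('x')
f = -x**2 / 2                 # weight with rapid decay along Gamma = R
h = 1 + 2*x + 5*x**3          # an arbitrary polynomial 0-form

# d_f h = dh + (df) h; its coefficient function is h' + f'*h
exact = sp.diff(h, x) + sp.diff(f, x) * h

# the integral of a d_f-exact form against e^f vanishes (Stokes' theorem)
val = sp.integrate(exact * sp.exp(f), (x, -sp.oo, sp.oo))
print(sp.simplify(val))       # prints 0
```

The same computation with any other polynomial $h$ gives zero as well, which is exactly the statement that $I$ descends to $\mathcal{H}_f$.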
More precisely, if $X$ is a smooth algebraic variety and $f$ is an algebraic function on $X$, then there is a non-degenerate pairing between the singular homology of the pair $(X(\mathbb{C}), f^{-1}(\{z\in\mathbb{C}|-\mathfrak{R}(z)>C\gg 0\}))$ (which we will denote by $\mathcal{H}^f$) and $\mathcal{H}_f$ where $X(\mathbb{C})$ is viewed as an analytic manifold and $\mathcal{H}_f$ is defined by means of differential forms with algebraic coefficients (see for example \cite{kz}).\footnote{One may need to take hypercohomology in the definition of $\mathcal{H}_f$ if $X$ is not an affine variety.} The singular homology with integral coefficients specifies a lattice in $\mathcal{H}^f$; we say that the elements of this lattice are topologically integral. If the function $f$ has only a finite number of critical points one can prove that the cohomology $\mathcal{H}_f$ vanishes in all dimensions except $n$ and that in dimension $n$ it is isomorphic to the quotient of the polynomial ring $\mathbb{C}[x_1,..., x_n]$ by the ideal generated by the derivatives of $f$ with respect to $x_1,...,x_n$ (that is, to the Milnor ring, in another terminology the Jacobian ring). Under certain conditions one can prove that the dimension of $\mathcal{H}_f$ coincides with the dimension of the cohomology of the operator $df$ (the Barannikov-Kontsevich theorem).\footnote{This statement was proven in the case when $f$ is a regular projective function in \cite{SA}; another proof is given in \cite {OgV}. It is conjectured, see \cite{SA}, that it is also true in the case when the intersection of $f^{-1}(0)$ with the set of critical points of $f$ is projective.
One can characterize the dimension of $\mathcal{H}_f$ also as the total number of vanishing cycles for all the critical values of $f$.} In the case of a finite number of critical points the cohomology of $df$ is concentrated in dimension $n$ where it coincides with the Milnor ring; we obtain the description of $\mathcal{H}_f$ given above from the Barannikov-Kontsevich theorem. More generally, we can also define $\mathcal{H}_f(K)$ as the cohomology of the differential $d_f=d+df$ in the case when $f\in K[x_1,...x_n]$, i.e., $f$ is a polynomial function on $K^n$ where $K$ is an arbitrary ring. Furthermore, $f$ can be an algebraic function on a manifold over $K$; in that case $\mathcal{H}_f$ should be defined as the hypercohomology of the (differential graded) sheaf of forms equipped with the differential $d_f$. The above considerations prompt the following definition of the integral of the form (\ref {I}) where $f$ is a polynomial function on $K^n$ (or an algebraic function on a manifold over $K$) and $\rho$ is a differential form on $K^n$ with polynomial coefficients (or an algebraic differential form on a manifold) obeying $d_f \rho=0$ (notice that an $n$-form always obeys this condition). Namely, we define an integral as a $K$-linear functional on $\mathcal{H}_f (K)$. Below we restrict our attention to the case when $f$ is a polynomial function on $K^n$. The definition of $\mathcal{H}_f$ and of the integral can also be applied to the slightly more general case when the coefficients of the polynomial $f$ are not necessarily in $K$, but the coefficients of the form $df$ belong to $K$. This is due to the fact that $f$ itself appears nowhere in the definitions.
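For a concrete instance of the Milnor ring description above, one can compute $\dim \mathbb{C}[x,y]/(\partial_x f,\partial_y f)$ for an example of our choosing, $f=x^3+y^3-xy$, by counting the standard monomials of a Gr\"obner basis of the Jacobian ideal (a sympy sketch):

```python
from itertools import product
from sympy import symbols, groebner

# Milnor ring C[x,y]/(f_x, f_y) for the example f = x**3 + y**3 - x*y
x, y = symbols('x y')
f = x**3 + y**3 - x*y
G = groebner([f.diff(x), f.diff(y)], x, y, order='grevlex')

# leading exponent vector of each Groebner basis element
lead = [g.monoms(order='grevlex')[0] for g in G.polys]

def is_standard(m):
    """m is a standard monomial iff no leading monomial divides it."""
    return not any(all(mi >= li for mi, li in zip(m, l)) for l in lead)

# the quotient is finite-dimensional here, so a small search box suffices
basis = [m for m in product(range(8), repeat=2) if is_standard(m)]
print(len(basis), sorted(basis))
```

The four standard monomials $1, x, y, xy$ give dimension $4$, matching the four nondegenerate critical points of $f$ over the algebraic closure (equivalently, the four vanishing cycles).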
In the definition of the integral it is natural to require that $K$ is a ring without torsion elements (i.e., it injects into $K\otimes_{\mathbb{Z}}\mathbb{Q}$) and to neglect torsion elements in $\mathcal{H}_f (K)$, i.e., to consider the integral as an element of the quotient of $\mathcal{H}_f(K)$ with respect to its torsion subgroup; the quotient can be interpreted as the image of $\mathcal{H}_f(K)$ in $\mathcal{H}_f(K\otimes_{\mathbb{Z}}\mathbb{Q}) \cong\mathcal{H}_f(K)\otimes_{\mathbb{Z}}\mathbb{Q}$. In what follows we use the notation $\mathcal{H}'_f(K)$ for this quotient. Notice that the definitions of $\mathcal{H}_f$ and, to a certain extent, of the integral are functorial. This means that, for example, a homomorphism of rings $K\to K'$ maps a polynomial $f$ on $K^n$ to a polynomial $f'$ on $K'^n$ and $\mathcal{H}_f (K)$ to $\mathcal{H}_{f'}(K')$. More precisely, we recall that $\mathcal{H}_f (K)$ is computed as the cohomology of a certain twisted de Rham complex, with free $K$-modules in each degree. It is immediate that $\mathcal{H}_{f'}(K')$ is computed as the cohomology of the complex obtained from the one above, by tensoring it with $K'$ over $K$. Thus we have a natural map of complexes and so of their cohomologies as well. We note that in general $\mathcal{H}_{f'}(K')$ cannot be identified with $\mathcal{H}_f (K)\otimes_K K'$ and so a $K$-integral, which is a linear map from $\mathcal{H}_f (K)$ to $K$, cannot be extended to $\mathcal{H}_{f'}(K')$. However this is possible in the case when $K'$ is flat over $K$ (this is the case if we take our $K$ to be $\mathbb{Z}$ and $K'$ to be our torsion-free $K$ as above). In this case it is true that $\mathcal{H}_{f'}(K')\cong\mathcal{H}_f (K)\otimes_K K'$. It is also possible if $\mathcal{H}_f (K)$ is concentrated in degree $n$ since it is always true that $\mathcal{H}^n_{f'}(K')\cong\mathcal{H}^n_f (K)\otimes_K K'$. In other words, the integrals for different rings are related.
Moreover, if the polynomial $f$, or at least the form $df$, has integer coefficients, the study of the integral can be reduced to the case of the ring of integers $ \mathbb{Z}$. This follows from the remark that the ring of polynomial forms on $K^n$ can be realized as a tensor product of $K$ and the ring of polynomial forms over $\mathbb{Z}$; this allows us to apply the universal coefficients theorem for the calculation of $\mathcal{H}_f(K)$. In particular, if $f$ is such a polynomial that $df$ has integer coefficients we can consider $\mathcal{H}_f (K)$ for an arbitrary ring $K$; it follows from the universal coefficients theorem that for a torsion-free ring $K$ \begin{equation} \label{kk} \mathcal{H}_{f} (K)=\mathcal{H}_f(\mathbb{Z})\otimes K \end{equation} and also \begin{equation} \label{k} \mathcal{H}'_{f} (K)=\mathcal{H}'_f(\mathbb{Z})\otimes K. \end{equation} Notice that the group $\mathcal{H}_f (\mathbb{Z})$ can be very poorly behaved. More precisely, it can have not only an infinite number of generators, but also a lot of torsion.{\footnote {However, in the case when $f$ is a proper map and $p$ is sufficiently large one can prove that the group $\mathcal{H}_f(\mathbb{Z}_p)$ has a finite number of generators \cite{OgV}. }} Note that $\mathcal{H}'_f(\mathbb{Z})$ on the other hand is a free abelian group of rank equal to the dimension of $\mathcal{H}_f(\mathbb{Q})$ and the embedding of $\mathcal{H}'_f(\mathbb{Z})$ into $\mathcal{H}_f(\mathbb{Q})$ specifies an integral structure therein. We say that an element of $\mathcal{H}_f(\mathbb{Q})$ or of $\mathcal{H}_f(\mathbb{C})$ is integral (or, more precisely, algebraically integral) if it belongs to the image of $\mathcal{H}'_f(\mathbb{Z})$. Notice that there exists no simple relationship between the topological and the algebraic integrality.
Considering the pairing between the topologically integral elements of $\mathcal{H}^f(\mathbb{C})$ and the algebraically integral elements of $\mathcal{H}_f(\mathbb{C})$ we obtain in general transcendental numbers called exponential periods (see \cite{kz}). An important case of a quadratic $f$ is addressed below. The following proposition simply means that the integral is determined up to a multiplicative constant in the setting of the quadratic exponential. \begin{pr}Let $A$ be an invertible symmetric matrix with coefficients in $R$ and let $f=\frac{1}{2}x^t Ax$; then $\mathcal{H}_f(R)$ (i.e., the cohomology of $R[x_1,...,x_n][dx_i]$ with the differential $d+df$) is concentrated in degree $n$ and is isomorphic to $R$.\end{pr} \begin{proof} Let $A=(a_{ij})$ and $A^{-1}=(a^{ij})$. Thus we are interested in computing the cohomology of $R[x_i][dx_i]$ with the differential $(a^{ij}\partial_j+x_i)\otimes a_{ik}dx_k$, where summation over repeated indices is understood. This cohomology is the same (as a graded module) as that of $R[x_i][\xi_i]$ with the differential $(a^{ij}\partial_j+x_i)\otimes \xi_i$, where $\xi_i=a_{ij}dx_j$; this change of variables is invertible since $A$ is. It is easy to verify that $D_i=a^{ij}\partial_j+x_i$ form a regular sequence of commuting operators on $R[x_i]$, and furthermore $R[x_i]/(D_1,...,D_s)=R[x_{s+1},...,x_n]$. Thus $R[x_i][\xi_i]$, and so $R[x_i][dx_i]$, has cohomology only in the top degree, where it is $R$. \end{proof} Let us now consider the setting of perturbation theory assuming that the unperturbed theory is given by a quadratic form with an invertible matrix. The discussion that follows implies that also in this case the homological definition of the integral specifies it up to a factor, and uniquely if we add the normalization condition $I(dx_1 ... dx_n)=1$.
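To see the proposition in action, take $n=1$, $R=\mathbb{Z}$ and $A=(1)$, so that $df=x\,dx$ has integer coefficients (only $df$ matters, as noted earlier). The relations $[(h'+xh)\,dx]=0$ reduce every class $[g\,dx]$ to an integer multiple of $[dx]$, exhibiting the cohomology $\mathbb{Z}$ in degree one. A small sketch of this reduction (our own illustration):

```python
# Reduce the class of g(x) dx for f = x**2/2 over Z (the case A = (1)).
# From the exact forms (h' + x*h) dx with h = x**(m-1):
#   [x**m dx] = -(m-1) * [x**(m-2) dx]   and   [x dx] = 0.
def klass(coeffs):
    """coeffs[m] = coefficient of x**m in g; returns c with [g dx] = c*[dx]."""
    total = 0
    for m, c in enumerate(coeffs):
        if m % 2 == 1:
            continue                  # odd powers reduce to 0
        r = 1
        while m >= 2:                 # apply [x^m] = -(m-1)[x^(m-2)]
            r *= -(m - 1)
            m -= 2
        total += c * r
    return total

print(klass([5]))              # [5 dx]   -> 5
print(klass([0, 0, 1]))        # [x^2 dx] -> -1
print(klass([0, 0, 0, 0, 1]))  # [x^4 dx] -> 3
```

The values $(-1)^k(2k-1)!!$ for $[x^{2k}dx]$ are exactly the (formal) Gaussian moments produced by the defining relations, with no division ever required, which is the point of working over $\mathbb{Z}$.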
To construct the perturbation theory for $f=f_0+\lambda V$ it is convenient to work in terms of the ring $R_N(\lambda)$, by which we mean the quotient of the polynomial ring $R[\lambda]$ by the ideal generated by $\lambda^N$. After allowing $N$ to go to infinity we can consider $\lambda$ as a parameter of perturbation theory. Under certain conditions, in particular, in the case of the perturbation of a quadratic form $f_0={\frac{1}{2}x^t Ax}$, one has that the cohomology $\mathcal{H}_{\frac{1}{2}x^t Ax+\lambda V}$ is isomorphic to $\mathcal{H}_{\frac{1}{2}x^t Ax}$. To prove this we observe that the complex $C_f$ that computes $\mathcal{H}_f(R_N(\lambda))$ is filtered by the powers of the parameter $\lambda$; let us call this filtration $F$. When the associated spectral sequence degenerates, i.e., when the cohomology of the complex $C_f$ is isomorphic to the cohomology of the associated graded complex $Gr^F C_f$, we get the desired isomorphism since $Gr^F C_f$ computes $\mathcal{H}_{f_0}(R_N(\lambda))$. Let us return to the above proposition and replace $R$ by $R_N(\lambda)$. The matrix $A$ is still an invertible symmetric matrix with coefficients in $R$; thus it remains so in $R_N(\lambda)$ as well. We see that $\mathcal{H}_{\frac{1}{2}x^t Ax}(R_N(\lambda))$ is concentrated in degree $n$ and isomorphic to $R_N(\lambda)$. This leads to the degeneration of the spectral sequence since $Gr^F C_f$ has cohomology (which is just $\mathcal{H}_{\frac{1}{2}x^t Ax}(R_N(\lambda))$) concentrated in one degree only. Thus $\mathcal{H}_{\frac{1}{2}x^t Ax+\lambda V}(R_N(\lambda))$ is also concentrated in one degree, where it is isomorphic to $R_N(\lambda)$. There is another way to identify $\mathcal{H}_f(R_N(\lambda))$ with $\mathcal{H}_{f_0}(R_N(\lambda))$ that we will not use. Namely, if $R$ is a $\mathbb{Q}$-algebra, i.e. all the integers are invertible in $R$, then we always have a canonical identification $\mathcal{H}_f\cong\mathcal{H}_{f_0}$ via the exponential map.
Explicitly, writing $f=f_0+\lambda f_1+\lambda^2 f_2+...$, we have $\phi:\mathcal{H}_f\rightarrow\mathcal{H}_{f_0}$ with $\phi(\omega)=e^{f_1 \lambda+f_2\lambda^2+ ...}\omega$. Unfortunately this method destroys any integrality information. \begin{remark} By letting $N$ go to infinity in the definition of $R_N(\lambda)$ we pass outside the setting of polynomial differential forms. This is due to the simple observation that $R[[\lambda]][x_i]$, which does fit into our framework, is not the same as $R[x_i][[\lambda]]$, which is what we obtain after taking the limit of $R_N(\lambda)$'s. This is the first example of the relaxation of the polynomial condition. We will see another one later when we encounter overconvergent series. \end{remark} One can apply the above statements to prove some integrality results. Namely, the proposition above and the discussion that follows it show that, when the polynomials $g$ and $h$ have integer coefficients, and the symmetric negative definite matrix $A$ has integer entries and is invertible (over $\mathbb{Z}$), then the quotient $$\dfrac{\int_{\mathbb{R}^n}g(x)e^{\frac{1}{2}x^t Ax+\lambda h(x)}\,dx}{\int_{\mathbb{R}^n}e^{\frac{1}{2}x^t Ax+\lambda h(x)}\,dx}$$ is a power series in $\lambda$ with integer coefficients. (In fact only the one-form $dh$, and not $h$ itself, needs to have integer coefficients.) The expression at hand is a normalized homological integral over $\mathbb{Z}$. As we mentioned in the introduction the proof of uniqueness in the framework of perturbation theory can be given by induction with respect to the degree of the polynomial $g$. Analyzing this proof one can construct a version of perturbation theory for the normalized integral dealing only with integers. (In the standard Feynman perturbation expansion the integrality is not obvious.) Notice that the appropriate apparatus for the study of the groups $\mathcal{H}_f$ is the theory of D-modules \cite{B}, \cite{borel}.
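The integrality statement can be tested in a one-dimensional example of our choosing: $A=(-1)$ and $h=x^3$, i.e., $f=-x^2/2+\lambda x^3$. The defining relation $I(u'+f'u)=0$ for every polynomial $u$, together with $I(1)=1$, applied to $u=x^{m-1}$ gives $I(x^m)=(m-1)I(x^{m-2})+3\lambda I(x^{m+1})$, which determines all moments order by order in $\lambda$; the sketch below computes them in exact integer arithmetic.

```python
from functools import lru_cache

# f = -x**2/2 + lam*x**3  (A = (-1), perturbation h = x**3).
# Defining relation: I(u' + f'*u) = 0 for every polynomial u, with I(1) = 1.
# Taking u = x**(m-1) yields
#     I(x**m) = (m-1)*I(x**(m-2)) + 3*lam*I(x**(m+1)),
# which determines I(x**m) mod lam**N (the lam-depth of the recursion is <= N).

@lru_cache(maxsize=None)
def I(m, N):
    """Coefficients (c_0, ..., c_{N-1}) with I(x**m) = sum_k c_k*lam**k + O(lam**N)."""
    if N == 0:
        return ()
    if m == 0:
        return (1,) + (0,) * (N - 1)
    a = I(m - 2, N) if m >= 2 else (0,) * N
    b = I(m + 1, N - 1)               # this branch costs one power of lam
    return tuple((m - 1) * a[k] + (3 * b[k - 1] if k >= 1 else 0)
                 for k in range(N))

print(I(1, 4))   # (0, 3, 0, 135): <x>   = 3*lam + 135*lam**3 + ...
print(I(2, 4))   # (1, 0, 45, 0):  <x^2> = 1 + 45*lam**2 + ...
```

All coefficients come out integral, in agreement with the statement; the first of them reproduces the Wick-contraction count $\langle x\cdot x^3\rangle_0=3$, and no denominators ever appear in the computation.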
By definition a D-module is a sheaf of modules over the sheaf of rings of differential operators. In the case of interest, namely the linear situation that we focus on, the structure of a D-module is equivalent to the action of the Weyl algebra. Recall that the Weyl algebra is generated over the constants by the symbols $x_i$ and $\partial_{i}$ subject to the relations $$\partial_{i} x_j-x_j \partial_{i}=\delta_{ij}.$$ The most obvious example of a module over this algebra is the ring of polynomials in the variables $x_i$; we will denote it by $\mathcal{O}$. An important $D$-module for us is a modification of this construction. Namely, any polynomial $f$ specifies a new D-module structure on $\mathcal{O}$ (usually denoted by $\mathcal{O}e^f$) with the action of $x_i$ (viewed as elements of the Weyl algebra) unchanged, i.e., given by the operators of multiplication by the corresponding coordinates, while $\partial_i$ now acts as $\frac{\partial}{\partial x_i}+\frac{\partial f}{\partial x_i}\cdot$. The twisted de Rham cohomology coincides with the de Rham cohomology of this D-module. The notion of a D-module is a generalization of the notion of a vector bundle with a flat connection. From a somewhat different point of view, a $D$-module encodes a system of linear differential equations. Thus while a certain function may not exist algebraically, if it is a solution of a linear system of algebraic differential equations, one can consider a $D$-module that is associated to this system. In particular the function $e^f$ is a solution of $\partial y-(\partial f) y=0$; the corresponding D-module was described above as the D-module giving the twisted de Rham cohomology $\mathcal{H}_f$ as its usual de Rham cohomology. It is important to emphasize that the above definitions can be modified in many ways. In particular, in the definition of $\mathcal{H}_f$ one can consider the differential $d_f$ acting on a space of forms that is larger than the space of forms with polynomial coefficients.
The most important case is the case of $K=\mathbb{C}_p$ (the field of complex p-adic numbers) when it is convenient to work with overconvergent series instead of polynomials. One says that a series $\sum a_I x^I$ is overconvergent if $$\text{ord}_p a_I\geq c|I|+d$$ with $c>0$, i.e. $\sum a_I x^I$ converges on a neighborhood of the closed polydisc of radius $1$ around $0\in\mathbb{C}_p^n$. Let us denote by $\mathcal{H}_f^\dagger$ this particular modification. It is certainly not the case that one can always identify $\mathcal{H}_f$ with $\mathcal{H}_f^\dagger$; in general there is only an obvious map from one to the other that need not be surjective or injective. In fact it seems that sufficiently general criteria for addressing this issue are not known. One can prove, however, that this replacement does not change the cohomology in certain important special cases, such as in the discussion surrounding the construction of the Frobenius map in the next section. Recall that in the setting of perturbation theory and the quadratic exponential we have an essentially unique integral. This is not true in general and reflects the freedom of choice of an integrating contour. However, in certain cases we may use additional data to either ensure uniqueness or at least decrease the number of the available options. For example, if we have an action of a group $G$ that preserves the function $f$, then it induces an action of $G$ on $\mathcal{H}_f$. It is then natural to require that the integral be invariant under this action. This turns out to be of limited use since it is easy to see that the action of a Lie algebra on $\mathcal{H}_f$ is necessarily trivial (see equation (\ref{homotopy}) below). Thus if $G$ is a connected Lie group then this does not cut down our choices.
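To make the overconvergence condition quantitative in an example of our choosing: the series $e^{pz}=\sum_n (p^n/n!)z^n$ over $\mathbb{Q}_p$ has $\text{ord}_p(p^n/n!)=n-\text{ord}_p(n!)$, and since $\text{ord}_p(n!)=(n-s_p(n))/(p-1)$ the bound $\text{ord}_p a_n\geq c\,n$ holds with $c=(p-2)/(p-1)>0$ for $p>2$, so the series is overconvergent. A quick numerical check:

```python
from fractions import Fraction
from math import factorial

def ord_p(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p = 5
c = Fraction(p - 2, p - 1)
for n in range(1, 60):
    # coefficient a_n = p**n / n! of e^{p z}
    v = n - ord_p(factorial(n), p)
    assert Fraction(v) >= c * n       # ord_p(a_n) >= c*n, so c > 0 and d = 0 work
print("overconvergent: ord_p(a_n) >= %s * n for n < 60" % c)
```

The linear lower bound on the valuations is precisely what guarantees convergence on a polydisc of radius slightly larger than $1$.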
A more interesting case comes up when one considers the integral in families.\footnote{We will return to this in the last section.} More precisely, given a family of manifolds $X_\lambda$ equipped with functions $f_\lambda$ over a smooth parameter space $\Lambda$, we can consider the construction of $\mathcal{H}_{f_\lambda}$ on each $X_\lambda$. It is implicitly assumed that this arises from a smooth map of smooth spaces $p:X\rightarrow \Lambda$ and an $f$ on $X$, with $X_\lambda=p^{-1}(\lambda)$ and $f_\lambda=f|_{X_\lambda}$. If suitable conditions on the variation of $X_\lambda$ and $f_\lambda$ are imposed, then what one gets is a family of vector spaces of a fixed dimension that vary with $\lambda\in \Lambda$. In other words we have a vector bundle over $\Lambda$; we will denote it by $\mathcal{H}_{f/\Lambda}$. An integral depending on a parameter $\lambda\in \Lambda$ is then a section of the dual bundle $\mathcal{H}_{f/\Lambda}^*$. In fact this vector bundle (and thus its dual) comes with a flat connection generalizing the Gauss-Manin connection. This follows from a general fact that if one has a $D$-module on $X$, then by computing its fiberwise (along $p$) de Rham cohomologies we get a (graded) $D$-module on the base $\Lambda$. This does not require any extra assumptions on the variation of $X_\lambda$ and $f_\lambda$ and so the resulting $D$-module in general will not be a vector bundle with a connection. We sketch a construction of this structure in our special case below. Given a vector field $\xi$ on $\Lambda$ we must specify its action on $\mathcal{H}_{f/\Lambda}$. We do this as follows. Consider a lifting of $\xi$ to a vector field $\widetilde{\xi}$ on $X$. 
Let $\xi$ act on $\mathcal{H}_{f/\Lambda}$ by \begin{equation}\label{action}L_{\widetilde{\xi}}+\widetilde{\xi}(f)\end{equation} where $L_{\widetilde{\xi}}$ denotes the Lie derivative with respect to $\widetilde{\xi}$ acting on the space of forms on $X$ (more precisely relative forms on $X$ over $\Lambda$). Observe that this is independent of the choice of the particular lifting of $\xi$ as follows from the formula \begin{equation}\label{homotopy} \{d_\Lambda+d_\Lambda f,\iota_\eta\}=L_{\eta}+\eta(f) \end{equation} where $\{,\}$ denotes the anti-commutator, $d_\Lambda$ the fiberwise de Rham differential\footnote{Recall that $d_\Lambda+d_\Lambda f$ is the differential in the complex that computes $\mathcal{H}_{f/\Lambda}$.} and $\eta$ any vertical vector field (i.e. a vector field tangent to the fibers of $p$). Thus the action of a vertical vector field given by the equation (\ref{action}) is trivial on the cohomology. Note that precisely such an $\eta$ arises as the difference between any two choices of the lifting of $\xi$. In the case when only the $f_\lambda$ vary and $X_\lambda$ remain constant, i.e., $X=Y\times\Lambda$ there is a natural lifting of vector fields from $\Lambda$ to $X$ that makes actual computations simpler. The formula for the connection of course remains the same, but the Lie derivative can be interpreted as a usual derivative with respect to the parameters. It is natural to require that the $\lambda$-dependent integral is covariantly constant with respect to the generalized Gauss-Manin connection (in other words it specifies a flat section of the bundle $\mathcal{H}_{f/\Lambda}^*$). In some cases this assumption, together with the requirement that the section be single-valued and behave nicely at the boundary of the parameter space, determines the integral up to a constant factor. Let us give some details in the case when the coefficient ring is $\mathbb{C}$. 
The topologically integral sections of $\mathcal{H}_{f/\Lambda}^*$\footnote{Recall that they correspond to singular cycles over $\mathbb{Z}$ in appropriate relative homology.} are covariantly constant (but in general multi-valued). Using this remark one can obtain differential equations for the periods (Picard-Fuchs equations) from the Gauss-Manin connection. If the Picard-Fuchs equations have a unique (up to a factor) single-valued solution we can say that the integral is also defined up to a factor. For example this is true in the case when the Gauss-Manin connection has maximally unipotent monodromy at the point $\lambda=0$. One can prove some standard properties of the integral using the homological definition. We will formulate these properties as theorems about the groups $\mathcal{H}_f$. 1. Additivity. The property $$ \int_{A\cup B}=\int _A+\int _B - \int _{A\cap B}$$ corresponds to the Mayer-Vietoris exact sequence. 2. Change of variables takes the following form. If $\varphi:X\rightarrow Y$ and $f$ is a function on $Y$, then the usual pullback via $\varphi^*$ of forms induces a map from $\mathcal{H}_f$ to $\mathcal{H}_{f\circ\varphi}$. 3. The Fubini theorem is replaced by a spectral sequence. More precisely, to compute $\mathcal{H}_f$ with $f$ a function on $X$ (let us denote it by $\mathcal{H}_f(X)$ to make explicit its dependence on $X$) we compute the cohomology of a certain complex which in the case of a decomposition of the space into a direct product, i.e. $X=X_1\times X_2$, decomposes naturally into a double complex. The associated spectral sequence is thus the replacement for the Fubini theorem. Anything that leads to the degeneration of the spectral sequence is beneficial; in particular, the case when the cohomology is concentrated in only one degree is especially close to the familiar Fubini theorem.
In the case when $f=f_1+f_2$ with $f_i$ a function on $X_i$ we have that $C_f(X)\cong C_{f_1}(X_1)\otimes C_{f_2}(X_2)$ and so $$\mathcal{H}_f(X)\cong\mathcal{H}_{f_1}(X_1)\otimes\mathcal{H}_{f_2}(X_2).$$ 4. Fourier transform and the $\delta$-function. Let us consider a function $$f(t, x_1,..., x_n)=itP(x_1, ...,x_n)$$ on $\mathbb{R}^{n+1}$. If in the integral (\ref {I}) with this function the form $u$ does not depend on $t$ we can first integrate over $t$; the $\delta$-function we obtain reduces the integral to the integral over the hypersurface $P=0$. Therefore one should expect that the cohomology $\mathcal{H}_f$ in this case is isomorphic to the cohomology of the hypersurface. This statement was proven in \cite{katz} (in different terminology); other proofs were given in \cite{dworkdmod}, \cite{dworkdmod2}. We sketch a proof below. \begin{proof} The cohomology of the hypersurface $P=0$ is given by the complex $\Omega /(P,dP)$ with the differential inherited from the usual de Rham differential $d$ on the space $\Omega$ of differential forms on $\mathbb{R}^n$. The claim is that this is isomorphic (up to shift) to the cohomology of the space $\Omega'$ of differential forms on $\mathbb{R}^{n+1}$ with the twisted differential $d+d(tP)$, where $t$ is the variable on the extra copy of $\mathbb{R}$. The intermediate step is the complex $\Omega[P^{-1}]/\Omega$ with the differential coming from $d$ extended to $\Omega[P^{-1}]$ by the quotient rule. The claim is demonstrated by the following two maps, each of which induces an isomorphism on cohomology. The first is $$\Omega/(P,dP)\rightarrow\Omega[P^{-1}]/\Omega$$ $$\omega\mapsto\tilde{\omega}\, dP/P,$$ where $\tilde{\omega}$ denotes a lifting of $\omega$ to $\Omega$, and the second is $$\Omega[P^{-1}]/\Omega\rightarrow\Omega'$$ $$\omega/P^{i+1}\mapsto(-1)^i\,\omega\, t^i\,dt/i!$$ thus the composition is simply $$\omega\mapsto\tilde{\omega}\,dP\,dt.$$ \end{proof} The above can be rephrased as replacing constraints by extra variables in the integral.
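The hypersurface $P=0$ appearing here is required to be smooth in several statements below; smoothness over the algebraic closure can be tested effectively, since by the Nullstellensatz the system $P=\partial_1 P=\dots=\partial_n P=0$ has no solutions exactly when these polynomials generate the unit ideal. A sympy sketch with an example of our choosing:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
P = x**2 + y**2 + z**2 - 1          # example: an affine quadric

# singular points of P = 0 are the common zeros of P and its partials;
# by the Nullstellensatz there are none over the algebraic closure
# exactly when the ideal (P, P_x, P_y, P_z) is the unit ideal
G = groebner([P, P.diff(x), P.diff(y), P.diff(z)], x, y, z)
print(list(G.exprs))                # [1] means the hypersurface is smooth
```

Replacing $P$ by the cone $x^2+y^2-z^2$ yields a Gr\"obner basis different from $[1]$, detecting the singular point at the origin.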
The statement above also admits a generalization (see \cite{dworkdmod}, \cite{dworkdmod2}) from the case of a hypersurface to the case of higher codimension, though the proof is no longer as straightforward. Namely, let $X\subset Y$ be a pair of smooth varieties over a field $F$ of characteristic 0, where $F$ is for example $\mathbb{R}$, $\mathbb{C}$, or a $p$-adic field. Furthermore, for the sake of concreteness assume that $Y=F^n$. Let $X$ be cut out of $Y$ by the functions $f_1,..., f_m\in F[x^1,..., x^n]$ satisfying a suitable regularity condition as explained below. Recall that an $F$-point of $X$ is an $n$-tuple $(r_1,..., r_n)$ of elements of $F$ satisfying $f_i(r_1,..., r_n)=0$ for all $i$. Let us require that for every $F$-point of $X$, the $m\times n$ matrix with $F$-entries $$M=\left(\partial_{x_j}f_i(r_1,..., r_n)\right)$$ has full rank. More precisely, it is surjective as a map of $F$-vector spaces $$M:F^n\rightarrow F^m.$$ Let $f$ be an arbitrary function on $Y$, i.e. $f\in F[x^1,..., x^n]$, and let us consider its restriction to $X$. Introduce a new function $g$ on $Y\times F^m$ by setting $$g=f+t^1 f_1+...+t^m f_m$$ where $t^i$ are the coordinates on $F^m$. Then it follows for example from \cite{dworkdmod} that $$\mathcal{H}^i_f(X)\cong\mathcal{H}^{i+2m}_g(Y\times F^m)$$ for all $i$. The map from $\mathcal{H}_f(X)$ to the shift of $\mathcal{H}_g(Y\times F^m)$ can be written down explicitly as $$\omega\mapsto\widetilde{\omega}\,df_1 dt_1 ... df_m dt_m$$ where $\widetilde{\omega}$ is the lifting to $Y$ of the form $\omega$ on $X$. Thus integrals over a nonlinear $X$ can be replaced by integrals over the linear space $Y\times F^m$. \begin{remark} The considerations of this section cannot be applied to the functions on a superspace. In this case one should work with integral forms introduced in \cite{BL} instead of differential forms. Another possibility is to fix a volume element and to work with polyvector fields.
(In the superspace case this data specifies an integral form that can be integrated over a subspace of codimension $k$ where $k$ is the number of indices of the polyvector field.) It seems that this approach is also appropriate in the infinite-dimensional case. \end{remark} \section {Frobenius map} One can use the twisted de Rham cohomology to construct the Frobenius map on the p-adic cohomology (cohomology with coefficients in $K=\mathbb{C}_p$). By definition the Frobenius map transforms a point $x=(x_1, ..., x_n)\in K^n$ into a point $x^p=(x_1^p,...,x_n^p)$. If $f$ is a polynomial on $K^n$ the Frobenius map induces a map $\psi$ sending $\mathcal{H}_f$ into $\mathcal{H}_{f'}$ where $f'(x)=f(x^p)$. However, we would like to modify the definition of the Frobenius map in such a way that it transforms the cohomology group into itself. This modification is based on the remark that one can find a p-adic number $\pi$ such that the expression $e^{\pi(z^p-z)}$ considered as a series with respect to $z$ is overconvergent (thus we are no longer speaking of $\mathcal{H}_f$, but rather of $\mathcal{H}_f^\dagger$) and, what is equally important, we still have the result that the cohomology of the $D$-module $e^{\pi tP}$ (with \emph{overconvergent} forms) computes the cohomology of the hypersurface $P=0$ when it is smooth. The appropriate $\pi$ is found as a solution of the equation $\pi ^{p-1}=-p$ (see \cite{katz} for more details). We can define the Frobenius map $\Psi$ on the p-adic cohomology $\mathcal{H}_{\pi f}^\dagger$ as a map induced by the transformation of differential forms sending a form $\omega$ into the form $e^{\pi(f(x^p)-f(x))}\omega'$ where $\omega'$ is obtained from $\omega$ by means of the change of variables $x\to x^p$. Here we use the fact that $\mathcal{H}_{\pi f}^\dagger$ is defined in terms of differential forms with overconvergent coefficients, and multiplication by $e^{\pi(f(x^p)-f(x))}$ transforms a form of this kind into another form of the same kind.
One can say that the Frobenius map $\Psi$ is obtained from the ``naive'' Frobenius map $\psi$ by introducing a ``correcting factor'' $e^{\pi(f(x^p)-f(x))}$. Following the above, we can construct the Frobenius map on the p-adic cohomology of a hypersurface $P(x_1,...,x_n)=0$ where $P$ is a polynomial with p-adic coefficients. We identify this cohomology with the twisted de Rham cohomology corresponding to the function $f=t P(x_1,...,x_n)$ as before. It is a known fact (see \cite{katz} for example) that for this function the cohomology $\mathcal{H}_{\pi f}^\dagger $ is canonically isomorphic to $\mathcal{H}_{ f} $. Thus the Frobenius map on $\mathcal{H}_{\pi f}^\dagger $ transfers to $\mathcal{H}_{ f} $ which is identified (up to shift) with the p-adic cohomology of the hypersurface $P(x_1,...,x_n)=0$. This construction of the Frobenius map on the p-adic cohomology is equivalent to the original one provided by Dwork. It is believed that Dwork's construction is equivalent to the more modern construction via the crystalline site that is used in \cite{inst} and explained in terms of supergeometry in \cite{padicph}, but it seems that a complete proof of this equivalence does not exist in the literature. Notice that in the case when we have a family of hypersurfaces labeled by a parameter $\lambda $, we can construct a Frobenius map in two different and inequivalent ways. Namely, we can either raise $\lambda$ to the $p$-th power or not. In the former case we have a Frobenius that acts preserving the fibers of the family, and in the latter case we must modify the correction factor to be $e^{\pi(f(x^p,\lambda^p)-f(x,\lambda))}$ and now we get a Frobenius that mixes fibers. \section {Topological Landau-Ginzburg model and topological sigma-models} The main ingredient of a Landau-Ginzburg model is an algebraic family of algebraic manifolds $X_{\lambda}$ equipped with a family of algebraic functions $f_{\lambda}$.
Here $\lambda$ runs over a manifold $\Lambda$ (the base of the family). Denoting the union of $X_{\lambda}$ by $X$ we obtain an algebraic function $f$ on $X$ and a map $p$ from $X$ to $\Lambda$. In the simplest case we have a family of polynomials on the fixed space $X_\lambda=\mathbb{C}^n$. Recall that for every $\lambda \in \Lambda$ we can consider the twisted de Rham cohomology $\mathcal{H}_{\lambda} = \mathcal{H}_{f_{\lambda}}$; under some conditions these cohomologies form a vector bundle over $\Lambda$ that comes equipped with a flat connection (the Gauss-Manin connection). For an arbitrary family, the collection of $\mathcal{H}_{\lambda}$'s is naturally equipped with the structure of a D-module on the parameter space $\Lambda$ of $\lambda$. The D-module structure can be viewed as a flat connection if this union forms the total space of a vector bundle with fibers $\mathcal {H}_{\lambda}$. Another important ingredient of a Landau-Ginzburg model is a family of holomorphic volume elements $\Omega_{\lambda}$ on the manifolds $X_{\lambda}$. A special case of a Landau-Ginzburg model is a B-model; here the functions $f_{\lambda}$ identically vanish (in other words, a B-model is specified by a family of Calabi-Yau manifolds). More precisely, we are talking about a genus zero Landau-Ginzburg model and a B-model. From the viewpoint of a physicist one can define a B-model for an arbitrary genus as the quantization of a genus zero B-model; however, this definition is not mathematically rigorous due to some ambiguities in the quantization procedure. Under certain conditions one can prove that the Landau-Ginzburg model specifies a Frobenius manifold (i.e., a genus 0 TQFT); in particular, for an appropriate family of volume elements this can be demonstrated for a miniversal deformation of a function having one isolated critical point. This is also true for a B-model on a family of compact manifolds.
The above definitions of a Landau-Ginzburg model and of a B-model make sense over an arbitrary ring if the manifolds $X_{\lambda}$ are defined over this ring, with the one-forms $df_{\lambda}$ and the volume elements $\Omega_{\lambda}$ having coefficients belonging to the same ring as well. In particular, we obtain in this way the definition of the p-adic B-model used in \cite {inst}. The considerations of Sec.~3 imply that the Frobenius map that was crucial in \cite {inst} can be defined in the case of a Landau-Ginzburg model over $\mathbb{C}_p$. (However, to prove integrality results one needs a generalization of another definition of the Frobenius map that is applicable over $\mathbb{Z}_p$.) One can also consider an A-model on a manifold $Y$ defined over a field of characteristic zero. If the field is algebraically closed, the standard considerations permit us to relate the counting of algebraic curves to the homological calculations on the space of stable maps; otherwise we should modify the definitions by considering the intersections over the algebraic closure. Notice that instead of the K\"{a}hler metric one should consider an element of the two-dimensional cohomology. Strictly speaking, this element should obey some conditions that guarantee that the expressions we obtain are well defined at least as power series. We will disregard this subtlety here. In the genus zero case, the A-model specifies the quantum multiplication on the cohomology; the structure coefficients $c^k_{ab}$ of this multiplication determine a family $\nabla_a=\partial _a+zc^k_{ab}$ of flat connections on a trivial vector bundle over the two-dimensional cohomology of $Y$ (or, more generally, over the total cohomology) with fiber the total cohomology of $Y$. If the genus zero A-model is defined on a complete intersection $Y$ in a toric variety, then it is equivalent to a certain Landau-Ginzburg model $X_{\lambda}, z^{-1}f_{\lambda}, \Omega _{\lambda}$.
This statement (the mirror theorem) was proved by Givental over the complex numbers, but in fact his proof works over an arbitrary field of characteristic zero. It is also possible to formally derive the statement of the mirror theorem over a field of characteristic zero from the mirror theorem over the complex numbers. More precisely, one can construct a map from $\Lambda$ (the base of the family of manifolds $ X_{\lambda}$) into the two-dimensional cohomology of $Y$ (the mirror map); this map can be lifted to an isomorphism of vector bundles between the twisted de Rham cohomology groups $\mathcal {H}_{\lambda}$ over $\Lambda$ and the trivial vector bundle of cohomology groups of $Y$ over the two-dimensional cohomology of $Y$. The Gauss-Manin connection on the Landau-Ginzburg side corresponds to the connection $\nabla_a=\partial _a+zc^k_{ab}$ on the A-model side. If a projective manifold is defined over the integers, then the corresponding space of stable maps is also defined over the integers and hence over an arbitrary ring. This means that we can define, at least formally, an A-model for every ring; however, there exists no clear relation between this definition and the counting of algebraic curves. One can check that the correspondence between the A-model and the Landau-Ginzburg model given by the mirror theorem remains valid for an arbitrary integral domain if we neglect torsion, i.e., if we work with the groups $\mathcal{H}'_f (K)$. Namely, using the universal coefficient theorem we reduce the proof to the case of the ring of integers. In this case we should prove that the correspondence between the relevant cohomologies preserves the integral structure in $\mathcal{H}_f (\mathbb{Q})$. This follows from the integrality of the mirror map and from the remark that the connections on both sides of the mirror correspondence are compatible with the integral structure. {\bf Acknowledgments} We are indebted to M. Kontsevich, M. Movshev, A. Ogus and V.
Vologodsky for useful discussions.
\section{Introduction} \label{sec:introduction} \vspace{-6pt} Recent years have witnessed exciting achievements in the development of highly capable deep neural networks (DNNs), to the extent that new state-of-the-art (SOTA) results are being published frequently. However, achieving this level of performance requires either using extremely large architectures such as GPT-3 \cite{gpt3} or SEER \cite{seer} with billions of parameters (350GB of memory and 175B parameters in GPT-3), or ensembling many models. Consequently, inference is inefficient compared to light-weight models. To alleviate the problem of slow inference on large architectures, a natural solution is to apply some form of model compression. The model compression literature is rich and mature, and covers various techniques such as network quantization \cite{hawq, Jacob2018QuantizationAT, zeroq, haq, jin2020neural}, knowledge distillation ~\cite{hinton2015distilling, zhou2000m}, pruning ~\cite{cheng2017survey, he2017channel, gao2020rethinking, le2020paying}, or a combination of multiple techniques ~\cite{polino2018model, cheng2017survey, han2015deep}. After compression, the output DNN may have a reduced number of parameters or may operate in lower bit precision. However, there is a trade-off between the compression ratio and the accuracy of a model: aggressive compression leads to a significant performance drop that defeats the purpose. Moreover, compression applies a single solution to all data and is deterministic (static) at inference time, with no flexibility across data samples. That being said, compression techniques are commonly orthogonal to other approaches, in that a degree of compression can be added to other methods.
On the other hand, adaptive inference approaches propose to route to different branches of a DNN either stochastically, or based on some decision criteria on the input data \cite{branchynet, blockdrop, msdnet, ranet, yu2018slimmable, yu2019universally, yang2020mutualnet, wang2020resolution}. These methods are mostly based on architecture re-design, i.e., the model needs to be built in a specific way to support dynamic inference. This makes their training more complex and imposes additional non-trivial hyper-parameter tuning. Adaptive inference methods can broadly be categorized into redundancy-based and multi-exit structures. The redundancy-based approaches exploit the parameter redundancy in neural networks. To this end, \cite{lccl} designed a convolution-based controller layer, which reduces the computations in practice, even though it increases the overall network size. Others \cite{liu2018dynamic, blockdrop, veit2018convolutional, wang2018skipnet, sact, cnmm} dynamically skip some layers or blocks on the fly via selective layer execution. Multi-exit or multi-stage approaches, however, are based on architectures in which a network can exit from different paths based on some confidence criteria. Earlier techniques such as BranchyNet \cite{branchynet} incorporated an entropy-based threshold for routing. A similar approach was taken by \cite{panda2016conditional, berestizshevsky2019dynamically}, which train side classifiers to navigate to different paths. \cite{msdnet} proposed a multi-scale dense network to reuse feature maps of different scales, which was further improved in \cite{ranet} by designing a resolution adaptive network (RANet) that identifies low-resolution inputs as easy cases and processes them with cheaper computations. There are also works based on architecture search for dynamic inference models \cite{yuan2020enas4d}. It is also worth noting that the majority of the existing methods focus on the task of image classification and fail to study other applications.
\cite{adaptivefeeding} is an example where adaptive inference was investigated for the task of object detection, by leveraging a Support Vector Machine (SVM) classifier to route the workload. A downside of \cite{adaptivefeeding}, however, is that dynamically changing the routing traffic between the fast and slow branches requires retraining. Although the redundancy-based and multi-exit methods have made significant progress and work well in practice, we will show that they do not reach the levels of performance provided by our energy-based strategy. In addition, most of these methods require training models in a specific way necessitated by their architecture design. In contrast, our method works with out-of-the-box, already trained models without a need for re-training. In this paper, we propose an adaptive inference strategy that combines a large, deep, accurate model (called Teacher) with a small, shallow, fast one (called Student). Our method is based on an effective energy-based decision making module for routing different samples to the deep or shallow model. In this way, certain examples are sent to the Student model, which yields high-speed inference, and other examples go to the Teacher model, which is slower but highly accurate. Our method provides an inference-time trade-off between the inference latency and task accuracy. This can be thought of as a knob for users to play with in order to dynamically choose a desired point in the trade-off based on their required accuracy or latency. Figure \ref{fig:our-energy-ood} shows a high-level schematic of the proposed framework. In addition to our main adaptive inference strategy, we provide an extension called specialized EBJR, which provides more accurate and efficient inference by training the Student in a way that it only learns to perform the down-stream task partially (details in Section \ref{ssec:method_jr_specialized}).
\begin{figure*} \centering \vspace{-4pt} \includegraphics[width=1.0\linewidth]{figures/ebjr_framework2.png} \vspace{-15pt} \caption{\small The overall flow-diagram of the proposed energy-based joint reasoning (EBJR) method.} \label{fig:our-energy-ood} \end{figure*} The main contributions of this paper are summarized as follows: \vspace{-1pt} \begin{itemize} \vspace{-5pt} \item Combining small/shallow models (low accuracy and latency) with large/deep models (high accuracy and latency) to achieve \textbf{high accuracy and low latency}. Our method is easy to build and deploy, is architecture-agnostic, is applicable to different down-stream tasks (e.g., classification and object detection), and can be applied to existing pre-trained models (with no need for re-training). \vspace{-8pt} \item An \textbf{energy-based routing mechanism} for directing examples to the small (Student) or large (Teacher) model. This allows a dynamic trade-off between accuracy and computational cost that outperforms the previous works in adaptive inference (with zero overhead for real-time adjustment of speed/accuracy). \vspace{-8pt} \item Creating a small Student model \textbf{specialized} for a subset of tasks (e.g., top-C classes only) with high accuracy, along with a plus-one (+1) mechanism to distinguish the top-C-class data from the others. \end{itemize} \vspace{-15pt} \section{Energy-Based Joint Reasoning (EBJR)} \label{sec:method_jr} \vspace{-4pt} We introduce EBJR, a novel energy-based joint reasoning approach for adaptive inference. Our method is inspired by the fact that smaller (shallower/narrower) models typically have lower accuracy but very fast inference, while larger (deeper/wider) models are highly accurate but very slow. We combine the small model (denoted by Student) and the large model (denoted by Teacher) in an efficient and effective way to provide fast inference while maintaining high accuracy.
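At inference time, the joint reasoning described above reduces to a simple control flow; the sketch below is a minimal illustration of Figure \ref{fig:our-energy-ood} (the function and argument names are ours, not part of a released implementation):

```python
def ebjr_predict(x, student, teacher, router):
    """Route input x: use the fast Student when the Router accepts its
    output as reliable; otherwise fall back to the slow, accurate Teacher."""
    student_out = student(x)     # cheap forward pass, always executed
    if router(student_out):      # Router deems x an "easy" sample
        return student_out
    return teacher(x)            # hard sample: defer to the Teacher
```

Note that the Student's forward pass is always paid, so the scheme only helps when the Student is much cheaper than the Teacher and handles most of the traffic.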
A schematic of our framework is shown in Figure \ref{fig:our-energy-ood}. The main challenge here is to design an effective routing mechanism (denoted by Router) to decide which model to use for each input. For adaptive inference, the Router should also provide the option of dynamic trade-offs between accuracy and latency at the inference time. The Router module essentially operates like a binary classifier that directs easy samples to the Student and the hard ones to the Teacher. In some ways, this problem is also similar to the out-of-distribution (OOD) detection problem \cite{liu2020energy}, in which in- and out-of-distribution data are differentiated. OOD detection is generally used when a model sees some input test data that differs from its training data (in-distribution data); consequently, the predictions of the model on OOD samples would be unreliable. For our case, the Router should be able to identify whether or not the input data fits in the distribution with which the Student has been trained (i.e., there is a high probability that the Student can make accurate predictions for that input data). If not, the data is labelled as hard for the Student and should be forwarded to the Teacher, which has higher capability. In our work, we investigate the energy characteristics of data samples to route them effectively. \vspace{-10pt} \paragraph{Energy definitions.} Given an input data point $\textbf{x}$, the energy function is defined as $E(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}$ to map the input $\textbf{x}$ to a scalar, non-probabilistic energy value $y$. The probability distribution over a collection of energy values can be defined according to the Gibbs distribution \cite{hinton1994autoencoders, lecun2006tutorial}: $p(y|\textbf{x}) = \frac{1}{Z}\big(e^{-E(\textbf{x},y)}\big),$ where $Z(\textbf{x}) = \int_{y'} e^{-E(\textbf{x},y')}$ is the partition function.
The free energy \cite{lecun2006tutorial} of $\textbf{x}$ can then be expressed as the negative log of the partition function: \vspace{-4pt} \begin{equation} \label{eq:free_energy} F(\textbf{x}) = -\log \big(Z(\textbf{x})\big). \vspace{-4pt} \end{equation} In the following subsections, we describe our energy-based joint reasoning method and give formulations for classification, regression, and object detection problems. \vspace{-10pt} \subsection{Classification} \label{ssec:method_jr_classification} \vspace{-3pt} The Student classifier is defined as a function $S^c(\cdot)$ mapping the input $\textbf{x}$ to $C$ real-valued logits (i.e., for $C$ class labels): $S^c(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}^C$. In probability theory, we can use the output of the softmax function to represent a categorical distribution, that is, a probability distribution over $C$ different possible outcomes \cite{liu2020energy}. A categorical distribution using the softmax function is expressed by: \vspace{-10pt} \begin{equation} \label{eq:categorical_dist} p(y|\textbf{x}) = \frac{e^{S^c_y(\textbf{x})}}{\sum_i^C e^{S^c_i(\textbf{x})}}, \end{equation} where $S^c_y (\textbf{x})$ denotes the logit of the $y$th class label. The energy for a given input $(\textbf{x},y)$ in this case is defined as $E(\textbf{x},y)=-S^c_y(\textbf{x})$ \cite{liu2020energy}. The free energy function $F^c(\textbf{x};S^c)$ is then expressed similar to (\ref{eq:free_energy}) as: \vspace{-10pt} \begin{equation} \label{eq:classifier_free_energy} F^c(\textbf{x};S^c) = -\log \sum_i^C e^{S^c_i(\textbf{x})}. \vspace{-15pt} \end{equation} \vspace{-10pt} \paragraph{Problem.} We seek to identify samples suitable for the Student and would like to direct the others to the Teacher. A natural solution to this problem is to use the data density function and consider the inputs with low likelihood as hard (or unfit) samples.
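The free energy in (\ref{eq:classifier_free_energy}) is simply the negative log-sum-exp of the Student's logits, which can be computed stably with the usual max-shift; a minimal stdlib-only sketch (the function name is ours):

```python
import math

def free_energy(logits):
    """F^c(x; S^c) = -log(sum_i exp(S^c_i(x))).
    The max logit is factored out so large logits do not overflow."""
    m = max(logits)
    return -(m + math.log(sum(math.exp(s - m) for s in logits)))
```

A low free energy corresponds to a high likelihood under the Student's training distribution, which is what the routing rule thresholds.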
To this end, an energy-based density function for the Student can be defined as: \vspace{-4pt} \begin{equation} \label{eq:classifier_energy_density} p(\textbf{x}) = \frac{1}{Z^c}\big(e^{-F^c(\textbf{x};S^c)}\big), \vspace{-4pt} \end{equation} where the denominator $Z^c=\int_{\textbf{x}} e^{-F^c(\textbf{x};S^c)}$ is the normalizing constant, which can be intractable to compute or estimate. By taking the logarithm of both sides, we obtain: \vspace{-4pt} \begin{equation} \label{eq:classifier_log_density} \log \big( p(\textbf{x}) \big) = -F^c(\textbf{x};S^c) - \log(Z^c). \vspace{-4pt} \end{equation} \vspace{-12pt} \paragraph{Solution.} The $\log(Z^c)$ term is constant for all $\textbf{x}$ and does not affect the overall distribution of energy values. Thus, the negative free energy is linearly aligned with the log-likelihood function, which makes it suitable for detecting easy and hard samples. In this case, higher energy means lower likelihood, which represents harder (or more unfit) samples with respect to the Student's training distribution. More precisely, given a threshold $\delta$ on the density function such that $p(\textbf{x}) < \delta$, a corresponding threshold $t$ on the negative free energy can be calculated based on (\ref{eq:classifier_log_density}) as $-F^c(\textbf{x};S^c) < t = \log (\delta Z^c)$. In practice, for a given input, an energy function is applied to the Student outputs at inference to compute the energy score. Then, if the negative energy value is smaller than a threshold, the input is identified as a hard sample for the Student and is sent to the Teacher. Therefore, given the input data $\textbf{x}$, the Student $S^c(\textbf{x})$, and the threshold $t$, our energy-based Router $V^c(\textbf{x};S^c,t) \in \{0,1\}$ can simply be defined as: \vspace{-6pt} \begin{equation} \label{eq:classifier_Router} V^c(\textbf{x};S^c,t) = \begin{cases} \mbox{1} & \mbox{if } - F^c(\textbf{x};S^c) \geq t \\ \mbox{0} & \mbox{if } - F^c(\textbf{x};S^c) < t.
\end{cases} \vspace{-6pt} \end{equation} Let the Teacher classifier be $T^c(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}^{C'}$, with $C = C'$ (the same number of class labels as in the Student). Our joint reasoning classification function $J^c(\textbf{x};S^c,T^c,t) \in [1,C]$ can then be written by: \vspace{-10pt} \begin{equation} \label{eq:classifier_jr} J^c(\textbf{x};S^c,T^c,t) = \begin{cases} \mbox{$S^c(\textbf{x})$} & \mbox{if } V^c(\textbf{x};S^c,t) = 1 \\ \mbox{$T^c(\textbf{x})$} & \mbox{otherwise.} \end{cases} \vspace{-6pt} \end{equation} \begin{comment} \color{ForestGreen} \vspace{-16pt} \paragraph{Connection to softmax:} In addition to energy, softmax scores can also be used for analyzing the Student. Here, we study its mathematical connection with the energy score and its potential to solve the routing problem. The softmax score for a classifier is given by: \vspace{-8pt} \begin{equation} \label{eq:softmax_score} \max_{y} p(y|\textbf{x}) = \max_{y} \frac{e^{S^c_y(\textbf{x})}}{\sum_i^C e^{S^c_i(\textbf{x})}} = \frac{e^{S^c_{max}(\textbf{x})}}{\sum_i^C e^{S^c_i(\textbf{x})}}. \vspace{-4pt} \end{equation} By taking the logarithm of both sides, we start to see the connection between the log of the softmax and the free energy score formulated in (\ref{eq:classifier_free_energy}): \vspace{-8pt} \setlength{\belowdisplayskip}{0pt} \setlength{\belowdisplayshortskip}{0pt} \begin{equation} \begin{split} \log \max_{y} p(y|\textbf{x}) = \log (e^{S^c_{max}(\textbf{x})}) - \log \sum_i^C e^{S^c_i(\textbf{x})} = S^c_{max}(\textbf{x}) + F^c(\textbf{x};S^c), \end{split} \end{equation} where all logits are shifted by their maximum $S^c_{max}$. Plugging in the energy term to (\ref{eq:classifier_log_density}) yields: \vspace{-12pt} \begin{equation} \label{eq:softmax_log_density} \begin{split} \hspace{-6pt} \log \max_{y} p(y|\textbf{x}) = -\log(p(\textbf{x})) + S^c_{max}(\textbf{x}) - \log(Z^c). 
\end{split} \end{equation} It is observed that for the samples with high likelihood of being in the Student's distribution, the free energy goes lower, but the maximum logit tends to go higher. Due to this shifting, unlike the energy score, the softmax confidence score is not well-aligned with the probability density $p(\textbf{x})$. As a result, the confidence score is less reliable for routing. \end{comment} \color{black} \vspace{-11pt} \subsection{Regression} \label{ssec:method_jr_regression} \vspace{-3pt} A regressor maps an input $\textbf{x}$ to a target scalar $y$ and is defined as $S^r(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}$. For a given input $(\textbf{x},y)$, the energy function for a regressor is simply defined as $E(\textbf{x},y)=-S^r(\textbf{x},y)$. The regression problem can then be expressed by creating an energy-based model of the conditional density $p(y|\textbf{x})$ as: \vspace{-14pt} \begin{equation} \label{eq:regressor_dist} p(y|\textbf{x};S^r) = \frac{e^{S^r(\textbf{x},y)}}{\int_{y'} e^{S^r(\textbf{x},y')}}, \end{equation} where the denominator is the normalizing partition function that involves a computationally intractable integral. One solution is to obtain its approximations using the Monte Carlo importance sampling method described in \cite{gustafsson2020energy}. The free energy in this case is defined by: \vspace{-6pt} \begin{equation} \label{eq:regressor_free_energy} F^r(\textbf{x};S^r)= -\log \left( \int_{y'} e^{S^r(\textbf{x},y')} \right). \vspace{-0pt} \end{equation} Similar to (\ref{eq:classifier_energy_density}), the density function for a regressor using the energy-based model can be obtained as follows: \vspace{-4pt} \begin{equation} \label{eq:regressor_density} p(\textbf{x}) = \frac{1}{Z^r}\big(e^{-F^r(\textbf{x};S^r)}\big), \vspace{-0pt} \end{equation} where the denominator is the normalizing constant defined as $Z^r = \int_{\textbf{x}} e^{-F^r(\textbf{x};S^r)}$.
By taking the log of both sides: \vspace{-4pt} \begin{equation} \label{eq:regressor_log_density} \log \big( p(\textbf{x}) \big) = -F^r(\textbf{x};S^r) - \log(Z^r), \end{equation} which, as in the classification problem, shows that $-F^r(\textbf{x};S^r)$ is linearly aligned with the log-likelihood function, since $\log(Z^r)$ is constant for all $\textbf{x}$; this makes it suitable for our problem. Given the input data $\textbf{x}$, the Student regression model $S^r(\textbf{x})$, and a threshold $t$, our energy-based Router for a regression problem can simply be defined with $V^r(\textbf{x};S^r,t) \in \{0,1\}$ based on (\ref{eq:classifier_Router}). The joint reasoning function $J^r(\textbf{x};S^r,T^r,t) \in \mathbb{R}$ can also be expressed based on (\ref{eq:classifier_jr}), where $T^r(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}$ is the Teacher regression model. \vspace{-8pt} \subsection{Object detection} \label{ssec:method_jr_od} \vspace{-3pt} For the object detection task, which combines classification and regression, we can define the total free energy score as: $F^o(\textbf{x};S^c,S^r) = F^c(\textbf{x};S^c)+F^r(\textbf{x};S^r)$, where the regressor for predicting the 4 points of a bounding box is defined as $S^r(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}^4$. With $B$ detected boxes and $C$ class labels, the classifier's free energy score $F^c(\textbf{x};S^c)$ is formulated as: \vspace{-8pt} \begin{equation} \label{eq:od_cls_free_energy} F^c(\textbf{x};S^c) = \frac{1}{B}\big(-\sum_b^B \log \sum_i^C e^{S^c_{b,i}(\textbf{x})}\big), \vspace{-0pt} \end{equation} where $S^c_{b,i}$ is the classifier's output for the $i$th class label of the $b$th bounding box, with $b \in [1,B]$ and $i \in [1,C]$.
The regressor's free energy $F^r(\textbf{x};S^r)$ is also given by: \vspace{-6pt} \begin{equation} \label{eq:od_reg_free_energy} F^r(\textbf{x};S^r) = \frac{1}{4B}\big(-\sum_b^B \sum_j^4 \log \int_{y'} e^{S^r_{b,j}(\textbf{x},y')}\big), \vspace{-0pt} \end{equation} where $S^r_{b,j}$ is the regression output for the $j$th point of the $b$th bounding box, with $j \in [1,4]$. The energy-based joint reasoning function for the object detection task is finally defined as: \vspace{-4pt} \begin{equation} \small J^o \left(\textbf{x};S^o,T^o,t\right) = \begin{cases} \mbox{$T^o$(\textbf{x})} & \mbox{if } - F^o(\textbf{x};S^o) < t \\ \mbox{$S^o$(\textbf{x})} & \mbox{if } - F^o(\textbf{x};S^o) \geq t, \end{cases} \vspace{-4pt} \end{equation} where {\small$S^o = \{S^c,S^r\}$ and $T^o = \{T^c,T^r\}$} denote the Student and Teacher object detection models. \vspace{-8pt} \subsection{Specialized EBJR} \label{ssec:method_jr_specialized} \vspace{-2pt} In Section \ref{ssec:method_jr_classification}, it was assumed that the Student and Teacher models have an equal number of classes, that is, $C = C'$. As proved in \cite{abramovich2019classification}, in order to achieve good performance for a classifier with a large number of classes, a significantly large number of features is required. Since the Teacher model is assumed to be a very large model with a significant number of features, it is capable of handling more difficult tasks with a large $C'$. On the other hand, the small Student model may lack enough features to be able to effectively deal with a large $C$. In addition, in inference services such as public clouds, the majority of input data usually belongs to a small, popular subset of classes that are used frequently, for example, ``people'', ``cat'', ``dog'', ``car'', etc. (the supplementary materials contain example-per-class histogram plots for public datasets, which confirm this intuition).
Considering this fact, the Student can be trained and specialized to be highly accurate on this specific/popular subset (with a small $C$). Consequently, in our joint reasoning scheme, most of the input data can be handled by the Student in a very accurate and computationally efficient way. Let the specialized Student be $\Bar{S}^c(x): \mathbb{R}^D \rightarrow \mathbb{R}^{\Bar{C}+1}$, where $\Bar{C} \ll C$. To make sure the model can still exploit and learn from all data at training time, we label the data that do not belong to $\Bar{C}$ as an additional class (i.e., the `other' class). The extra class is also utilized as a supplementary mechanism in our Router to evaluate the performance of $\Bar{S}^c$ on a given input at inference time. Similar to a binary classifier, it is used for distinguishing the data with $\Bar{C}$ labels from the others. The specialized Student $\Bar{S}^c$ has another benefit for our energy-based Router. Since only a subset of class labels is used for training the Student, the energy difference between in- and out-of-distribution data, respectively denoted by $(\textbf{x},i)$ and $(\Bar{\textbf{x}},j)$, tends to be larger: \vspace{-6pt} \begin{equation} | \Bar{F}^c(\textbf{x};\Bar{S}^c_i)-\Bar{F}^c(\Bar{\textbf{x}};\Bar{S}^c_{j}) | > | F^c(\textbf{x};S^c_i)-F^c(\Bar{\textbf{x}};S^c_{j}) |, \label{eq:energy_gap} \vspace{-0pt} \end{equation} where $i \in [1,\Bar{C}]$ and $j \notin [1,\Bar{C}]$. The larger the energy difference, the better the Router can distinguish the fit and unfit data for the Student, which results in more accurate and efficient adaptive inference.
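The specialized routing rule formalized next combines the energy threshold with the extra-class check described above; a concrete stdlib-only sketch (names ours; the `other' class is assumed to occupy the last logit):

```python
import math

def specialized_router(logits, t):
    """Accept the specialized Student's answer iff (i) the negative free
    energy over the top-C classes exceeds the threshold t, and (ii) the
    predicted class is not the extra (+1) `other' class."""
    top = logits[:-1]                         # exclude the extra class
    m = max(top)
    neg_free_energy = m + math.log(sum(math.exp(s - m) for s in top))
    predicted = max(range(len(logits)), key=lambda i: logits[i])
    return neg_free_energy >= t and predicted != len(logits) - 1
```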
Given the input data $\textbf{x}$, the specialized Student $\Bar{S}^c(\textbf{x})$, and a threshold $t$, our specialized energy-based Router $\Bar{V}(\textbf{x};\Bar{S}^c,t) \in \{0,1\}$ is expressed as: \vspace{-4pt} \begin{equation} \small \Bar{V}(\textbf{x};\Bar{S}^c,t) = \begin{cases} \mbox{1} & \mbox{if } - \Bar{F}^c(\textbf{x};\Bar{S}^c) \geq t \text{~~ and~~} \Bar{S}^c(\textbf{x}) \in [1,\Bar{C}] \\ \mbox{0} & \mbox{if } - \Bar{F}^c(\textbf{x};\Bar{S}^c) < t \text{~~or~~} \Bar{S}^c(\textbf{x}) \in \{\Bar{C}+1\}, \end{cases} \vspace{-4pt} \end{equation} where ${\Bar{C}+1}$ denotes the extra class defined in $\Bar{S}^c$. The free energy $\Bar{F}^c(\textbf{x};\Bar{S}^c)$ for the specialized Student is calculated only over the top-$\Bar{C}$ classes, not the extra class, as follows: \vspace{-10pt} \begin{equation} \label{eq:specialized_classifier_free_energy} \Bar{F}^c(\textbf{x};\Bar{S}^c) = -\log \sum_i^{\Bar{C}} e^{\Bar{S}^c_i(\textbf{x})} ~~~\text{with}~~~ i \notin \{\Bar{C}+1\}. \vspace{0pt} \end{equation} Let the Teacher be $T^c(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}^{C'}$ with $\Bar{C} \ll C'$. Then, the specialized joint reasoning function $\Bar{J}(\textbf{x};\Bar{S}^c,T^c,t) \in [1,C']$ for making the predictions related to $\textbf{x}$ can be given by: \vspace{-6pt} \begin{equation} \Bar{J}(\textbf{x};\Bar{S}^c,T^c,t) = \begin{cases} \mbox{$\Bar{S}^c(\textbf{x})$} & \mbox{if } \Bar{V}(\textbf{x};\Bar{S}^c,t) = 1 \\ \mbox{$T^c(\textbf{x})$} & \mbox{otherwise}. \end{cases} \end{equation} \vspace{-16pt} \section{Experiments} \label{sec:experiments} \vspace{-4pt} In this section, we evaluate and discuss the performance of our EBJR approach along with the other related methods on image classification and object detection tasks on different benchmarks. We provide more results and ablation studies in the supplementary materials.
\vspace{-10pt} \subsection{Adaptive inference results} \label{sec:experiments_ai_ic} \vspace{-4pt} Figures \ref{fig:ic_comparison_results} and \ref{fig:ic_comparison_results_imagenet} show the classification results for EBJR and the SOTA in adaptive inference on the CIFAR-10, CIFAR-100, ImageNet, and Caltech-256 \cite{caltech256} datasets. We use multiple datasets not only to evaluate the generality of our method, but also because not all other methods published results on a single standard dataset. For all the datasets, we use DenseNet models \cite{densenet} for our Student and Teacher, except Caltech-256, for which ResNet models are used. Table \ref{tbl:ic_models} compares the details of the Student, Teacher, and EBJR models, including their accuracy, floating point operations (FLOPs), {and average inference time (latency).} \begin{figure*} \centering \begin{subfigure}{0.51\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/cifar10-comparison.png} \end{subfigure} \hspace{-13pt} \begin{subfigure}{0.51\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/cifar100-comparison.png} \end{subfigure} \vspace{-10pt} \caption{\label{fig:ic_comparison_results} \small Evaluation of EBJR and the SOTA in adaptive inference on CIFAR datasets.} \end{figure*} Note that many previous approaches are based on the DenseNet architecture and adaptively drop connections for inference speed-up. Thus, we also choose DenseNet as the main architecture to establish a fair comparison, although our method does not rely on any specific network design and can work with any black-box architecture. Moreover, we follow the standard practice in the previous works and analyze the results with FLOPs \cite{ranet,msdnet,cnmm,blockdrop}.
For our method, the total FLOPs is measured as a weighted average of the Teacher and Student FLOPs based on their usage frequency as: $FLOPs=\frac{1}{N_S+N_T}\big(N_{S} \cdot F_S+N_{T} \cdot (F_S+F_T)\big)$, where $N_S$ and $N_T$ are respectively the number of samples processed by the Student (with $F_S$ FLOPs) and the Teacher (with $F_T$ FLOPs). Note that the metric used in \cite{ranet,msdnet,cnmm} is multiply-accumulates (MACs), i.e., half the FLOPs used in this work. \begin{figure*} \centering \vspace{-4pt} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/imagenet-comparison.png} \end{subfigure} \hspace{-10pt} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/caltech256-comparison-v3.png} \end{subfigure} \vspace{-5pt} \caption{\small Evaluation of EBJR on the ImageNet (left) and Caltech-256 (right) datasets. {The numbers on the EBJR (random) curve show the percentage of samples processed by the Teacher.}} \label{fig:ic_comparison_results_imagenet} \end{figure*} In Figures \ref{fig:ic_comparison_results} and \ref{fig:ic_comparison_results_imagenet}, the trade-off between accuracy and computational cost is adaptively achieved in our method by choosing different values for the threshold parameter $t$ defined in (\ref{eq:classifier_Router}) and (\ref{eq:classifier_jr}). {The larger the threshold, the more input data are routed to the Teacher model, which results in more accurate but slower inference. As the Student is able to make accurate predictions for the majority of input data, adaptive inference with an appropriately small $t$ can almost reach the Teacher's accuracy, but with a much lower computational cost. For CIFAR-10, this strategy achieves the Teacher's accuracy with $\approx$2.2$\times$ less FLOPs. It can also lead to approximately 3$\times$ less FLOPs with an accuracy of $\approx$94.5\% (i.e., only $\approx$0.2\% lower than the Teacher)}.
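The weighted-average cost expression above can be sanity-checked against the CIFAR-10 column of Table \ref{tbl:ic_models}; the $\approx$28\% Teacher routing rate below is inferred from the reported averages rather than stated explicitly:

```python
def ebjr_flops(f_s, f_t, teacher_fraction):
    """Average per-sample cost: every sample pays the Student cost f_s, and
    the fraction routed onward additionally pays the Teacher cost f_t."""
    return f_s + teacher_fraction * f_t

# CIFAR-10 entries of Table 1, in units of 1e8 FLOPs
avg = ebjr_flops(0.54, 2.92, 0.28)  # ~1.36, matching the reported EBJR cost
speedup = 2.92 / avg                # ~2.15x fewer FLOPs than the Teacher alone
```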
The amount of speed-up for Caltech-256 is about 2$\times$, while maintaining the Teacher's accuracy of 89.87\%. For CIFAR-100 and ImageNet, which are more challenging benchmarks, the Teacher's top-1 accuracy is nearly reached with approximately 1.5$\times$ computational savings. Moreover, as illustrated in Figures \ref{fig:ic_comparison_results} and \ref{fig:ic_comparison_results_imagenet}, our method outperforms previous works such as RANet \cite{ranet} and MSDNet \cite{msdnet} on all three benchmarks across a variety of accuracy and cost combinations. \begin{table*} \vspace{-2pt} \fontsize{6.5}{9}\selectfont \begin{center} \begin{tabular}[t]{p{1.12cm}p{0.4cm}p{0.4cm}cp{0.4cm}p{0.4cm}cp{0.4cm}p{0.4cm}cp{0.4cm}p{0.4cm}c} \toprule \multicolumn{1}{c}{ } & \multicolumn{3}{c}{\textbf{CIFAR-10}} & \multicolumn{3}{c}{\textbf{CIFAR-100}} & \multicolumn{3}{c}{\textbf{ImageNet}} & \multicolumn{3}{c}{\textbf{Caltech-256}}\\ \\[-0.30cm] \cmidrule(l{3pt}r{3pt}){2-4} \cmidrule(l{3pt}r{3pt}){5-7} \cmidrule(l{3pt}r{3pt}){8-10} \cmidrule(l{3pt}r{3pt}){11-13} \\[-0.30cm] & \textbf{S} & \textbf{T} & \textbf{EBJR} & \textbf{S} & \textbf{T} & \textbf{EBJR} & \textbf{S} & \textbf{T} & \textbf{EBJR} & \textbf{S} & \textbf{T} & \textbf{EBJR} \\ \\[-0.30cm] \midrule \hspace{-5pt}\textbf{Depth} & 52 & 64 & - & 58 & 88 & - & 121 & 201 & - & 18 & 152 & -\\ \\[-0.30cm] \hspace{-5pt}\textbf{Growth Rate} & 6 & 12 & - & 6 & 8 & - & 12 & 32 & - & - & - & - \\ \\[-0.30cm] \hspace{-5pt}\textbf{Accuracy} {\tiny{($\%$)}} & 91.81 & 94.76 & \textbf{94.74} & 69.28 & 74.94 & \textbf{74.87} & 66.28 & 76.92 & \textbf{76.62} & 83.16 & 89.87 & \textbf{89.87} \\ \\[-0.30cm] \hspace{-5pt}\textbf{FLOPs} {\tiny{($\times10^8$)}} & 0.54 & 2.92 & \textbf{1.36} & 0.64 & 2.14 & \textbf{1.57} & 11.51 & 86.37 & \textbf{58.1} & 54.0 & 340.0 & \textbf{170.1} \\ \\[-0.30cm] { \hspace{-5pt}\textbf{Latency} {\tiny{(ms)}} } & 14.0 & 35.0 & \textbf{23.78} & 26.0 & 51.0 & \textbf{42.1} & 84.0 & 225.0 &
\textbf{196.8} & 25.0 & 200.0 & \textbf{113.6} \\ \\[-0.30cm] \bottomrule \end{tabular} \end{center} \vspace{-16pt} \caption{\label{tbl:ic_models}\small {Comparison of EBJR} with the Student (\textbf{S}) and Teacher (\textbf{T}) DenseNet models for CIFAR-10, CIFAR-100, and ImageNet; and ResNet models for Caltech-256 experiments.} \end{table*} \begin{figure*} \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/energy_dist_cifar10.png} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/energy_dist_cifar100.png} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/energy_dist_imagenet.png} \end{subfigure} \vspace{-5pt} \caption{\small Energy score distribution for CIFAR-10, CIFAR-100, and ImageNet.} \label{fig:energy_distribution} \end{figure*} To investigate the performance of the energy-based routing mechanism compared to other alternatives, we perform an ablation study on Caltech-256, where the energy score is replaced by the softmax confidence or entropy \cite{branchynet} scores. {We also include a random baseline in this experiment, where the input samples are randomly distributed between the Student and Teacher models (the experiment was run multiple times and the best result was reported).} The corresponding adaptive inference results are presented in Figure \ref{fig:ic_comparison_results_imagenet}-right. It is observed that the softmax- and entropy-based mechanisms can reach the Teacher's accuracy with $\approx$1.4$\times$ and $\approx$1.7$\times$ fewer FLOPs, respectively, falling short of the energy-based strategy's 2$\times$ speed-up. The theoretical analysis for the entropy score will be given in the supplementary materials.
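For illustration, all three routing signals compared in this ablation can be computed directly from the Student's logits. The following is a minimal numpy sketch, not our actual implementation; the function names are ours, and the conventions follow the text: a higher energy or confidence score keeps the sample with the Student, while for entropy, lower is more confident.

```python
import numpy as np

def energy_score(logits):
    # Negative free energy, -F(x;S) = log sum_i exp(S_i(x)), computed
    # stably; higher values indicate in-distribution samples that the
    # Student handles well.
    m = logits.max()
    return float(m + np.log(np.exp(logits - m).sum()))

def confidence_score(logits):
    # Maximum softmax probability (the BranchyNet-style confidence signal).
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    return float(p.max())

def entropy_score(logits):
    # Entropy of the softmax distribution; note that *lower* entropy means
    # more confidence, so a router thresholds it in the opposite direction.
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())
```

On a confidently predicted sample (one dominant logit), the energy and confidence scores rise while the entropy falls, which is what the routing thresholds exploit.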
Figure \ref{fig:energy_distribution} illustrates the energy score distribution for the samples processed by the Student (i.e., in-distribution data) and the Teacher (i.e., out-of-distribution data). As observed, the in-distribution samples (suitable for the Student) tend to have higher energy scores. Based on our experiments, the optimal setup for EBJR is achieved by choosing the threshold $t$ at the crossing point of the two distributions. As a consequence, by choosing $t$=12.0 for CIFAR-10, $\approx$70\% of the samples are handled by the Student with an accuracy of 99.0\%, and only $\approx$30\% are routed to the Teacher, which results in $\approx$3$\times$ fewer total FLOPs. For CIFAR-100 (with $t$=15.0) and ImageNet (with $t$=15.5), $\approx$50\% are processed by the Student (with an accuracy of $\approx$91.0\%), which achieves about 1.5$\times$ fewer FLOPs. \begin{wraptable}[5]{r}{0.55\textwidth} \centering \setlength{\tabcolsep}{5pt} \fontsize{6.5}{8}\selectfont \begin{tabular}[t]{p{1.55cm}cccc} \toprule \multicolumn{1}{c}{ } & \multicolumn{2}{c}{\textbf{CIFAR-10}} & \multicolumn{2}{c}{\textbf{ImageNet}}\\ \cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} & \textbf{EBJR} & \cite{tann2016runtime} & \textbf{EBJR} & \textbf{BL-Net}~\cite{park2015big} \\ \midrule \hspace{-4pt}\textbf{Accuracy loss} {\tiny{($\%$)}} & \textbf{0.0} & 0.96 & \textbf{0.9} (0.0) & 0.9 \\ \hspace{-4pt}\textbf{Power savings} {\tiny{($\%$)}} & \textbf{64.03} & 58.74 & \textbf{56.63} (32.93) & 53.7 \\ \bottomrule \end{tabular} \vspace{-6pt} \caption{\label{tbl:comparison_results_power}\small Power consumption vs. accuracy comparison.} \end{wraptable} \vspace{-13pt} \paragraph{Power-Accuracy Tradeoff.} In the literature, there are also some adaptive inference methods proposed for an efficient power-accuracy trade-off, for example, \cite{tann2016runtime} and BL-Net \cite{park2015big}.
In order to compare EBJR with these approaches, we use the strategy in \cite{lee2019energy} to calculate the power (or energy) consumption per image. As summarized in Table \ref{tbl:comparison_results_power}, the method in \cite{tann2016runtime} reduces the power consumption by 58.74\% with a 0.96\% accuracy loss on CIFAR-10, while EBJR (Figure \ref{fig:ic_comparison_results}) achieves 64.03\% power savings without any accuracy loss. Moreover, BL-Net \cite{park2015big} achieves a 53.7\% reduction in power consumption with an accuracy loss of 0.9\% on ImageNet. EBJR (Figure \ref{fig:ic_comparison_results_imagenet_mobilenet_coco}), on the other hand, provides a 56.63\% reduction in power consumption with the same accuracy drop. Unlike BL-Net, which does not reach the big model's accuracy, our method achieves the Teacher's accuracy with 32.93\% less power consumption. \vspace{-12pt} \paragraph{MobileNetV2-Based EBJR.} In addition to DenseNet, there exist SOTA methods based on other architectures such as MobileNetV2 \cite{sandler2018mobilenetv2}, for example, S-Net \cite{yu2018slimmable}, US-Net \cite{yu2019universally}, MutualNet \cite{yang2020mutualnet}, and RS-Net \cite{wang2020resolution}. In order to compare EBJR with these approaches, we run another set of experiments on ImageNet, where MobileNetV2 models with 128$\times$128 and 224$\times$224 input resolutions are used as our Student and Teacher, respectively. As shown in Figure \ref{fig:ic_comparison_results_imagenet_mobilenet_coco}-Left, EBJR achieves better performance than S-Net, US-Net, and MutualNet across all FLOPs, and also outperforms RS-Net at high FLOPs. RS-Net provides better results than EBJR at low FLOPs, owing to the less accurate Student used in EBJR. However, when EBJR and RS-Net are integrated and RS-Net's 128$\times$128 path is employed as the Student, the results improve and EBJR outperforms RS-Net at all trade-off points.
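The adaptive inference procedure underlying all of these comparisons can be summarized in a short sketch that routes each sample on the Student's energy score and tallies the weighted-average FLOPs defined earlier. This is an illustrative numpy sketch under our own naming, with the Student and Teacher as opaque callables, not the paper's implementation:

```python
import numpy as np

def free_energy(logits):
    # F(x;S) = -log sum_i exp(S_i(x)), computed stably.
    m = logits.max()
    return float(-(m + np.log(np.exp(logits - m).sum())))

def joint_inference(batch, student, teacher, t, f_s, f_t):
    # Route each sample on the Student's energy score -F; samples that do
    # not clear the threshold t are escalated to the Teacher.  Returns the
    # predictions and the weighted-average FLOPs per sample,
    # (N_S*F_S + N_T*(F_S + F_T)) / (N_S + N_T): escalated samples pay for
    # both networks, since the Student always runs first.
    preds, n_teacher = [], 0
    for x in batch:
        logits = student(x)
        if -free_energy(logits) > t:
            preds.append(int(np.argmax(logits)))
        else:
            n_teacher += 1
            preds.append(int(np.argmax(teacher(x))))
    n_student = len(batch) - n_teacher
    flops = (n_student * f_s + n_teacher * (f_s + f_t)) / len(batch)
    return preds, flops
```

Raising $t$ escalates more samples to the Teacher, trading FLOPs for accuracy, which is the knob swept in the figures above.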
\begin{figure*} \centering \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/imagenet-comparison-mobilenet.png} \end{subfigure} \hspace{-10pt} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/coco-comparison-v2.png} \end{subfigure} \vspace{-5pt} \caption{\small {\textbf{(Left)} Evaluation of MobileNetV2-based EBJR with previous works on ImageNet.} \small \textbf{(Right)} The performance of EBJR for object detection on MS-COCO (compared with EfficientDet \cite{efficientdet}).} \label{fig:ic_comparison_results_imagenet_mobilenet_coco} \end{figure*} \color{black} { \vspace{-15pt} \paragraph{Significance Test.} In order to evaluate the statistical significance of the results, we perform McNemar's test \cite{dietterich1998approximate} on EBJR and SOTA methods including RANet and RS-Net. McNemar's test is interpreted based on a given significance level $\alpha$ (commonly set to 0.05, corresponding to 95\% confidence) as well as the $p$-value and odds ratio calculated by the test.
The default assumption (null hypothesis), i.e., if $p > \alpha$, states that the two classifiers should have the same error rate, or that there should be no difference in the disagreements between them. However, if the null hypothesis is rejected, i.e., if $p \leq \alpha$, it suggests that the two classifiers disagree in different ways. After running the test over the EBJR vs. RANet predictions on CIFAR-10 and the EBJR vs. RS-Net predictions on ImageNet, $p$-values of $3.2\times10^{-5}$ and $2.4\times10^{-4}$ are respectively obtained. These very low $p$-values ($\ll 0.05$), which reject the null hypothesis, strongly confirm that there is a significant difference in the disagreements between EBJR and the other two models. Also, odds ratios of 1.42 and 1.14 are respectively obtained, which estimate how much better EBJR is compared to RANet and RS-Net. } Unlike image classification, adaptive inference for the object detection task has rarely been explored. We analyze the performance of EBJR on the task of object detection (formulated and described in Section \ref{ssec:method_jr_od}) on the MS-COCO dataset \cite{coco}. We employ EfficientDet-D0 and EfficientDet-D4 \cite{efficientdet} as the Student and Teacher, respectively. Figure \ref{fig:ic_comparison_results_imagenet_mobilenet_coco}-Right shows the adaptive inference results compared to the EfficientDet models (D0, D1, D2, D3, and D4). As shown in the figure, EBJR outperforms the standard EfficientDet models, reaching 97\% of the Teacher's mAP on MS-COCO with a 1.8$\times$ speed-up. For the same mAP level, the adaptive feeding method of \cite{adaptivefeeding} reports only a 1.3$\times$ speed-up. \vspace{-8pt} \subsection{Specialized EBJR} \label{ssec:experiments_sai_ic} \vspace{-2pt} In Section \ref{ssec:method_jr_specialized}, we argued that creating a specialized Student targeted to handle only the popular categories can make the joint inference more efficient.
To study this case, we run a set of experiments on a subset of the Open Images dataset (OID)~\cite{oid} that has been labeled using the 256 class labels of the Caltech-256 dataset. We train the Student with 20\% of the class labels (i.e., $\Bar{C}$=50 out of 256 labels) along with an extra one reserved for the other classes. In this setup, we choose the top-50 class labels with the largest number of samples in the OID training set. For testing, we randomly select a new set of size 3K from the OID validation set, where 75\% of the data belong to the top-50 labels. This is done to ensure that the initial assumption of `having the majority of samples from the popular classes' remains valid. \begin{figure*} \centering \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/caltech256-oid-specialized-v2.png} \label{fig:caltech256-oid-specialized-specialized} \end{subfigure} \hspace{-10pt} \begin{subfigure}{0.50\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/caltech256-data-percent-v2.png} \label{fig:caltech256-oid-specialized-data} \end{subfigure} \vspace{-15pt} \caption{\small The performance of the specialized EBJR on OID validation set. (left) The specialized EBJR with $\Bar{C}=50$ compared with the general case. (right) The impact of different input data percentages with different chosen subsets of classes for specialized EBJR.} \label{fig:caltech256-oid-specialized} \end{figure*} Figure \ref{fig:caltech256-oid-specialized}-left shows the results of this experiment. We see that, compared to the general cases of EBJR, the specialized EBJR provides the best performance under the assumption that the majority of input data belong to a small subset of classes. For example, compared to the Teacher, the specialized EBJR achieves $\approx$1.5$\times$ fewer FLOPs with the same accuracy. Figure \ref{fig:caltech256-oid-specialized}-right shows the effect of the percentage of data that belong to the top-$\Bar{C}$ classes.
As expected, the more data in the top-$\Bar{C}$ classes, the faster the joint model, since more load is directed to the Student, which is faster than the Teacher. We observe that when $\Bar{C}$ is too low or too high, e.g., $\Bar{C}$=10 or 100, the adaptive inference with the specialized Student becomes less efficient even with large percentages of data in the top-$\Bar{C}$ classes. For $\Bar{C}=20$ or 50, the specialized EBJR becomes more efficient, especially when $50\%$ or more of the data belong to the top-$\Bar{C}$ classes. More analysis will be given in the supplementary materials. { Note that EBJR is orthogonal to SOTA dynamic inference approaches, including the weight-sharing ones. In Figure \ref{fig:ic_comparison_results_imagenet_mobilenet_coco}-left, we applied EBJR on top of RS-Net and showed an improved performance. More results are given in the supplementary materials.} { One limitation of EBJR is the memory overhead due to the need for both the Student and Teacher at inference time. One solution to this problem is to run the largest possible Student on the edge and the Teacher on the cloud. If a desired accuracy is not met on the edge, the Router sends certain samples to the cloud for higher accuracy. Since the Student is the largest model that can fit on the device, it is expected to handle most cases, while the cloud is used only sparingly in accuracy-sensitive applications. In this setup, the overall accuracy is not bounded by what can run on the edge; the upper bound is what can run on the cloud. } \vspace{-10pt} \section{Conclusion} \label{sec:conclusion} \vspace{-4pt} In this paper, we presented an adaptive inference method that combines large but accurate models with small but fast models. We proposed an effective energy-based routing module for directing different samples to deep or shallow models.
Our method provided a trade-off between the inference latency and accuracy, which in practice is a useful knob for the users to adjust based on their required accuracy or latency, without a need for re-training. In addition, we provided an extension to our method for increasing the inference efficiency by training the shallow models in a way that they only learn to perform the downstream tasks partially. We presented theoretical and experimental evaluations in support of our method. We hope our work can help facilitate building efficient multi-model inference systems. \end{spacing} \clearpage {\small \section{Supplementary materials} \label{sec:supplementary} This section contains the supplementary materials. \subsection{Demo} \begin{figure*}[!b] \centering\includegraphics[width=1.0\columnwidth]{figures/demo.png} \caption{\small A screenshot of the provided demo application.} \label{supp:demo_screenshot} \end{figure*} \vspace{-6pt} In addition to the code, we also include a `Demo.mp4' video file that contains a demonstration of our framework. It is based on a screen recording of a web application we built to showcase the use-cases of our method in real-world scenarios. Figure \ref{supp:demo_screenshot} shows a screenshot of the demo application. \subsection{Ablation studies on CIFAR-10, CIFAR-100, and ImageNet} Figure \ref{fig:ablation_studies_cifar} shows the results of ablation studies of our EBJR method with different architectures for the Student and Teacher models on CIFAR (10 and 100) and ImageNet. We observe that the results do not vary excessively, which shows the robustness of the proposed method.
\begin{figure*}[!htb] \centering \includegraphics[width=0.49\linewidth]{figures/cifar10-comparison-ablation.png} \includegraphics[width=0.49\linewidth]{figures/cifar100-comparison-ablation.png} \includegraphics[width=0.49\linewidth]{figures/imagenet-comparison-ablation.png} \vspace{-4pt} \caption{Ablation studies of EBJR on CIFAR-10, CIFAR-100, and ImageNet with different architectures for the Student and Teacher models. DenseNet-$d$-$g$ denotes a DenseNet model with depth $d$ and growth rate $g$. ResNet-$d$ denotes a ResNet model with depth $d$.} \label{fig:ablation_studies_cifar} \end{figure*} \subsection{More experiments with RANet} In this experiment, we evaluate the performance of EBJR when SOTA architectures are used as our Student and Teacher models. In other words, we investigate whether our method can be added on top of other efficient methods such as RANet to benefit both from their designs and from our joint inference. To this end, we trained the RANet architecture with three scales (as suggested in the RANet work) on CIFAR-10, CIFAR-100, and ImageNet. The accuracy and computational cost of the Student and Teacher models used for the three datasets are summarized in Table \ref{tbl:ranet_models}. For the Student, we employed the RANet's first classifier from the first scale with 0.316 ($\times 10^8$) FLOPs. For the Teacher, the last classifier from the last scale with 1.89 ($\times 10^8$) FLOPs was used. Figure \ref{fig:ebjr_ranet} shows the corresponding adaptive inference results compared with the RANet baseline on CIFAR-10, CIFAR-100, and ImageNet. We observe that our method is orthogonal to RANet, and can improve it further.
\begin{table} \centering \fontsize{8}{10}\selectfont \begin{tabular}[t]{p{1.5cm}cccccc} \toprule \multicolumn{1}{c}{ } & \multicolumn{2}{c}{\textbf{CIFAR-10}} & \multicolumn{2}{c}{\textbf{CIFAR-100}} & \multicolumn{2}{c}{\textbf{ImageNet}}\\ \cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} & \textbf{S} & \textbf{T} & \textbf{S} & \textbf{T} & \textbf{S} & \textbf{T} \\ \midrule \hspace{-5pt}\textbf{Accuracy} {\tiny{($\%$)}} & 91.18 & 93.61 & 67.28 & 74.73 & 56.18 & 71.69 \\ \hspace{-5pt}\textbf{FLOPs} {\tiny{($\times10^8$)}} & 0.3162 & 1.898 & 0.3166 & 1.9 & 3.36 & 33.62\\ \bottomrule \end{tabular} \vspace{-5pt} \caption{\label{tbl:ranet_models}\small Details of the Student (\textbf{S}) and Teacher (\textbf{T}) EBJR (RANet) models for CIFAR-10, CIFAR-100, and ImageNet experiments.} \end{table} \begin{figure*}[!htb] \centering \includegraphics[width=0.49\linewidth]{figures/ebjr_ranet_cifar10.png} \includegraphics[width=0.49\linewidth]{figures/ebjr_ranet_cifar100.png} \includegraphics[width=0.49\linewidth]{figures/imagenet-ebjr-ranet.png} \vspace{-8pt} \caption{The performance of EBJR with RANet architecture, compared to the baseline RANet on CIFAR-10, CIFAR-100, and ImageNet.} \label{fig:ebjr_ranet} \end{figure*} \vspace{-12pt} \subsection{Alternative routing mechanisms: Softmax and Entropy} In Section \ref{sec:experiments_ai_ic}, an ablation study with some experiments (Figure \ref{fig:ic_comparison_results_imagenet}-right) was presented to analyze the softmax and entropy scores as alternative means of analyzing the Student. Here, we study the mathematical connection of them with the energy score and their potential to solve the routing problem. 
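The algebraic identities worked out in the next two subsections are easy to verify numerically. Below is a small self-contained check with random logits (treating the entries of the softmax as the probabilities; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=10)               # Student logits S^c_i(x)
p = np.exp(S) / np.exp(S).sum()       # softmax probabilities p(y|x)
F = -np.log(np.exp(S).sum())          # free energy F(x;S) = -log sum_i e^{S_i}

# Softmax identity: log max_y p(y|x) = S_max + F
# (the logits are shifted by the maximum logit).
lhs_softmax = np.log(p.max())
rhs_softmax = S.max() + F

# Entropy identity: H = U - F, with internal energy U = sum_i E(x,i) p_i
# and per-class energy E(x,i) = -S_i.
H = -(p * np.log(p)).sum()
U = (-S * p).sum()
```

Both pairs agree to machine precision, which matches the shift arguments made below: the confidence score differs from the free energy by the maximum logit, and the entropy differs from it by the internal energy.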
\subsubsection{Softmax-based Router} \label{ssec:method_jr_confidence} \vspace{-3pt} The softmax score for a classifier is expressed by: \begin{equation} \label{eq:softmax_score} \max_{y} p(y|\textbf{x}) = \max_{y} \frac{e^{S^c_y(\textbf{x})}}{\sum_i^C e^{S^c_i(\textbf{x})}} = \frac{e^{S^c_{max}(\textbf{x})}}{\sum_i^C e^{S^c_i(\textbf{x})}}. \end{equation} By taking the logarithm of both sides, we start to see the connection between the log of the softmax and the free energy score formulated in (\ref{eq:classifier_free_energy}): \begin{equation} \begin{split} \log \max_{y} p(y|\textbf{x}) = \log (e^{S^c_{max}(\textbf{x})}) - \log \sum_i^C e^{S^c_i(\textbf{x})} = S^c_{max}(\textbf{x}) + F^c(\textbf{x};S^c), \end{split} \end{equation} where all logits are shifted by their maximum logit $S^c_{max}(\textbf{x})$. Plugging the energy term into (\ref{eq:classifier_log_density}) yields: \begin{equation} \label{eq:softmax_log_density} \begin{split} \hspace{-6pt} \log \max_{y} p(y|\textbf{x}) = -\log(p(\textbf{x})) + S^c_{max}(\textbf{x}) - \log(Z^c). \end{split} \end{equation} It is observed that for samples with a high likelihood of being in the Student's distribution, the free energy goes lower, but the maximum logit tends to go higher. Due to this shifting, unlike the energy score, the softmax confidence score is not well aligned with the probability density $p(\textbf{x})$. As a result, the confidence score is less reliable for our Router to analyze the performance of the Student. \subsubsection{Entropy-based Router} The entropy score is a measure of the randomness in the information being processed, and is calculated as follows: \begin{equation} \begin{split} H(\textbf{x};S^c)=-\sum_i^C S^c_i \log (S^c_i), \\ \end{split} \end{equation} where $S^c_i(\textbf{x})$ is the predicted (softmax) probability corresponding to the $i$-th class label.
Let $U$ be the internal energy (i.e., the expectation value of the energy function \cite{oh2020entropy}), defined by: \begin{equation} \begin{split} U(\textbf{x};S^c) = \sum_i^C E(\textbf{x},i) S^c_i. \end{split} \end{equation} According to \cite{oh2020entropy}, the entropy can be defined in terms of the internal and free energy functions as: \begin{equation} H(\textbf{x};S^c) = U(\textbf{x};S^c) - F(\textbf{x};S^c), \end{equation} where all logits are shifted by the internal energy $U$. Substituting the free energy term from (\ref{eq:classifier_log_density}) yields: \begin{equation} \begin{split} H(\textbf{x};S^c) = \log(p(\textbf{x})) + U(\textbf{x};S^c) + \log(Z^c), \end{split} \end{equation} which shows that, due to the shifting caused by the internal energy, the entropy score is not reliably aligned with the probability density $p(\textbf{x})$. Thus, it is less suitable than the energy score as a routing signal for our Router. \begin{comment} \subsection{Energy distribution} \begin{figure*} \centering \includegraphics[width=0.32\linewidth]{figures/energy_cifar10.png} \includegraphics[width=0.32\linewidth]{figures/energy_cifar100.png} \includegraphics[width=0.32\linewidth]{figures/energy_imagenet.png} \vspace{-10pt} \caption{Distribution of negative energy over CIFAR-10, CIFAR-100, and ImageNet test sets with the Student models summarized in Table \ref{tbl:ic_models}.} \label{fig:energy_dist} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.32\linewidth]{figures/energy_sup_caltech.png} \includegraphics[width=0.32\linewidth]{figures/energy_unsup_caltech.png} \\ \includegraphics[width=0.32\linewidth]{figures/energy_sup_oid_spec.png} \includegraphics[width=0.32\linewidth]{figures/energy_top50_unsup_oid_spec.png} \vspace{-10pt} \caption{Distribution of negative energy over \textbf{(top row)} Caltech-256 with the configurations and Student models described in Section \ref{ssec:experiments_uai_ic}, and \textbf{(bottom
row)} OID-specialized with the Student models described in Section \ref{ssec:experiments_sai_ic}.} \label{fig:energy_dist2} \end{figure*} Figure \ref{fig:energy_dist} illustrates the distribution of the energy scores calculated for the the DenseNet Student models (details in Table \ref{tbl:ic_models}) on the CIFAR-10, CIFAR-100, and ImageNet datasets. We observe from the CIFAR-10 energy plot that almost 80\% of the samples reach the maximum negative energy (above $2.45$) over the DenseNet-52-6 Student model. In other words, the Student has been able to make highly accurate predictions for this portion of samples. So, the most optimal setup for EBJR on CIFAR-10 is achieved by choosing $t=2.45$, which routes 80\% of the samples to the Student and the Teacher's accuracy can be achieved with almost 3$\times$ less computations (as also described in Section \ref{sec:experiments_ai_ic}). Note that since the absolute value of the threshold values can be intuitively intractable for the users, in practice, it is more meaningful to let the users/operators control the accuracy-latency trade-off via a level-based parameter such as the sliding bar shown in our video demo. For CIFAR-100 and ImageNet, Figure \ref{fig:energy_dist} shows that a lower percentage (about 30\%) of the samples can reach the maximum negative energy. This is because CIFAR-100 and ImageNet are more complicated tasks, therefore their small Students are less capable compared to CIFAR-10. Moreover, Figure \ref{fig:energy_dist2} shows the energy distribution of the samples in Caltech-256 for both supervised and unsupervised Student models (details in Section \ref{ssec:experiments_uai_ic}). The energy plots for Caltech-256 imply that the supervised Student model can make accurate predictions for about 70\% of the input samples, while this number is at 60\% for the unsupervised model. 
Therefore, a more efficient adaptive inference can be achieved by the supervised model compared to the unsupervised one (as also observed in Figure \ref{fig:caltech256-oid-comparison}). Figure \ref{fig:energy_dist2} also shows the energy distribution corresponding to the OID validation set for both supervised and specialized/unsupervised ($\Bar{C}$=50) Students (details in Section \ref{ssec:experiments_sai_ic}). We observe that the specialized/unsupervised Student model is capable of making highly accurate predictions for almost 67\% of the samples (i.e., when $t$=3.94). The supervised model, on the other hand, can achieve good performance on only 50\% of the samples. \end{comment} \subsection{Imbalance in class distributions} In Section \ref{ssec:method_jr_specialized}, it was mentioned that in many practical applications, training or testing datasets are imbalanced. For example, consider a cloud inference API, which receives images as input, and most of the input images belong to a limited number of popular classes or categories. This motivated the specialized EBJR case. We studied the class distribution for the Caltech-256, OID, and MS-COCO datasets in Figures \ref{fig:caltech_oid_dist} and \ref{fig:coco_oid_dist}, and the statistics confirm our intuition. \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{figures/caltech_train_dist.png} \includegraphics[width=0.49\linewidth]{figures/oid_val_spec_dist.png} \vspace{-4pt} \caption{Data and class distributions over the Caltech-256 train set and OID validation set (for image classification with 256 labels). 
Due to the limited space, only a subset of class names is shown on the x-axis for better visualization.} \label{fig:caltech_oid_dist} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{figures/coco_train_dist.png} \includegraphics[width=0.49\linewidth]{figures/coco_val_dist.png} \vspace{-4pt} \caption{Data and category distributions over the OID train set and MS-COCO validation set (for object detection with 80 categories). Due to the limited space, only a subset of category names is shown on the x-axis for better visualization.} \label{fig:coco_oid_dist} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.49\linewidth]{figures/caltech256-oid-specialized-20-v2.png} \includegraphics[width=0.49\linewidth]{figures/caltech256-oid-specialized-10-v2.png} \vspace{-10pt} \caption{Adaptive inference results of the specialized EBJR with $\Bar{C}=20$ and $\Bar{C}=10$ on OID dataset.} \label{fig:caltech256-different-topn} \end{figure*} \subsection{More results on the specialized EBJR} Figure \ref{fig:caltech256-different-topn} shows the adaptive inference results for the specialized EBJR case. This figure shows the top-1 classification accuracy of the joint models when the top-$\bar{C}$=10 or 20 popular classes are used. For top-$\bar{C}$=10, we choose the top-10 class labels with the largest number of samples in the OID training set, and for testing, we randomly select a new set of size 1.7K from the OID validation set, where 70\% of the data belong to the top-10 labels. For top-$\bar{C}$=20, the size of the corresponding randomly selected validation set is 2K, where 75\% of the samples belong to the top-20 labels. It should be noted that the Teacher accuracies over the top-10 and top-20 validation sets are not the same because the validation sets are not identical (different data/label distributions).
It is observed from Figure \ref{fig:caltech256-different-topn} that top-$\bar{C}$=20 results in a better overall performance, and can achieve the same accuracy as the Teacher with 1.35$\times$ faster inference. Reducing the number of classes to 10 makes the performance worse, where almost no speed-up can be achieved compared to the Teacher. This suggests that limiting the majority of popular categories to a very low number of classes may hurt the performance. \subsection{More insights on inequality (\ref{eq:energy_gap})} The free energy of the Student $S^c(\textbf{x}): \mathbb{R}^D \rightarrow \mathbb{R}^C$ in (\ref{eq:classifier_free_energy}) can be broken into the logarithm of two terms as: \begin{equation} \begin{split} F^c(\textbf{x};S^c) = - \log \left( e^{S^c_y} + \sum_i^{\Bar{C}} e^{S^c_i(\textbf{x})} \right), \end{split} \end{equation} where $C = \Bar{C} + 1$ and $y \in \{\Bar{C} + 1\}$. Factoring out the term $e^{S^c_y}$ from inside the logarithm yields: \begin{equation} \begin{split} F^c(\textbf{x};S^c) = - \log(e^{S^c_y}) - \log \left(1 + \frac{\sum_i^{\Bar{C}} e^{S^c_i(\textbf{x})}}{e^{S^c_y}} \right). \end{split} \end{equation} By denoting the second term as $\hat{F^c}(\textbf{x};S^c)$, we have: \begin{equation} \label{eq:free_energy_new} F^c(\textbf{x};S^c) = - \big(S^c_y + \hat{F^c}(\textbf{x};S^c)\big).
\end{equation} \begin{table*} \centering \fontsize{8}{10}\selectfont \begin{tabular}[t]{p{1.5cm}cccccc} \toprule \multicolumn{1}{c}{ } & \multicolumn{3}{c}{\textbf{Caltech-256}} & \multicolumn{3}{c}{\textbf{OID}}\\ \cmidrule(l{3pt}r{3pt}){2-4} \cmidrule(l{3pt}r{3pt}){5-7} & \textbf{S} & \textbf{S$^*$} & \textbf{T} & \textbf{S} & \textbf{S$^*$} & \textbf{T} \\ \midrule \hspace{-5pt}\textbf{Accuracy} {\tiny{($\%$)}} & 83.16 & 70.71 & 89.87 & 75.0 & 74.1 & 86.64 \\ \hspace{-5pt}\textbf{FLOPs} {\tiny{($\times10^9$)}} & 5.4 & 5.4 & 34.0 & 5.4 & 5.4 & 34.0\\ \bottomrule \end{tabular} \caption{\label{tbl:caltech_models} \small The performance of the supervised and unsupervised Student (respectively denoted by \textbf{S} and \textbf{S$^*$}) and the Teacher (\textbf{T}) on Caltech-256 and OID validation sets.} \end{table*} Let $(x,y)$ be in-distribution and $(\Bar{x},\Bar{y})$ be out-of-distribution samples, where $y \in [1,\Bar{C}]$ and $\Bar{y} \notin [1,\Bar{C}]$. Based on (\ref{eq:free_energy_new}), the inequality (\ref{eq:energy_gap}) can be reformulated as: \begin{equation} \scriptsize \begin{split} | \underbrace{\Bar{F}^c(\textbf{x};\Bar{S}^c)}_\text{decrease} - \underbrace{\Bar{F}^c(\Bar{\textbf{x}};\Bar{S}^c)}_\text{increase} | > | \big( \underbrace{S^c_{\Bar{y}}}_\text{decrease} +\underbrace{\hat{F^c}(\Bar{\textbf{x}};S^c) }_\text{increase} \big) - \big( \underbrace{S^c_y}_\text{increase} +\underbrace{\hat{F^c}(\textbf{x};S^c)}_\text{decrease} \big) |, \end{split} \label{sup:eq:gaps} \end{equation} where the following can be observed for the left side of this inequality: \begin{itemize} \item Since $(x,y)$ is an in-distribution sample (high likelihood) and also $y \in [1,\Bar{C}]$, the free energy $\Bar{F}^c(\textbf{x};\Bar{S}^c)$ tends to be lower. \item Since $(\Bar{x},\Bar{y})$ is an out-of-distribution sample (low likelihood) and also $\Bar{y} \notin [1,\Bar{C}]$, the free energy $\Bar{F}^c(\Bar{\textbf{x}};\Bar{S}^c)$ tends to be higher.
\end{itemize} And for the right-hand side: \begin{itemize} \item For $(x,y)$, the $y$-th logit value $S^c_y$ tends to increase (high likelihood), which makes $\hat{F^c}(\textbf{x};S^c)$ decrease. \item For $(\Bar{x},\Bar{y})$, the $\Bar{y}$-th logit value $S^c_{\Bar{y}}$ tends to decrease (low likelihood), which makes $\hat{F^c}(\Bar{\textbf{x}};S^c)$ increase. \end{itemize} The two terms on the left-hand side tend to move in opposite directions, thereby enlarging the energy difference. The two terms on the right-hand side of (\ref{sup:eq:gaps}), on the other hand, do not show a similar behaviour, so their gap does not necessarily increase. \subsection{Unsupervised EBJR} \label{ssec:method_jr_unsupervised} So far, it was assumed that pre-trained Teacher and Student models are given, from which we created a joint inference bundle model. This assumption may not always hold. Suppose we have a large model that is highly accurate, but also very slow, and no dataset with ground-truth labels is available for training a small and fast Student model. In this scenario, in order to obtain an efficient joint reasoning model, we can distill the Teacher model into a small and fast Student architecture in a completely unsupervised manner. Unsupervised knowledge distillation is an emerging technique for leveraging the abundance of unlabeled data for label-free model training, and our framework is flexible in that it can organically incorporate it. The most straightforward application of unsupervised EBJR is to cloud services, which serve very large models for different machine learning tasks through cloud APIs. Such inference services can be replaced by our EBJR architecture, in which a side Student model is created for each large model. In this case, there is no need to re-train the large models or to acquire data labels.
By replacing the current large models behind the APIs with their joint reasoning equivalents, a speed-up can be achieved without a considerable loss in accuracy. For the classification problem, as an example, the commonly used cross-entropy loss function for training the Student is given by: \begin{equation} CE = - \sum_i^C \tau_i~\log(S^c_i(\textbf{x})) ~~~\text{with}~~~ \tau_i = T^c_i(\textbf{x}), \end{equation} \noindent where the pseudo-labels generated by the Teacher model are utilized as the targets (denoted by $\tau_i$) in the loss function. \begin{figure*} \centering \begin{subfigure}{0.51\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/caltech256-comparison.png} \label{fig:caltech256-oid-comparison_caltech256} \end{subfigure} \hspace{-13pt} \begin{subfigure}{0.51\textwidth} \centering \includegraphics[width=0.99\linewidth]{figures/caltech256-oid-comparison.png} \label{fig:caltech256-oid-comparison_oid} \end{subfigure} \vspace{-15pt} \caption{\small The performance of supervised and unsupervised EBJR on Caltech-256 and OID validation sets.} \label{fig:caltech256-oid-comparison} \end{figure*} \subsubsection{Experimental results - image classification} \label{ssec:experiments_jr_unsupervised} In this section, we study the performance of the unsupervised version of our method (see Section \ref{ssec:method_jr_unsupervised}). To this end, we perform unsupervised distillation of the Student using a set of unlabeled examples, which are passed to the Teacher to obtain pseudo-labels. The Student is then trained purely on these pseudo-labels. In this experiment, we use a ResNet-152 pre-trained on the Caltech-256 training set (22K examples in 256 classes) as the Teacher. The Student is a ResNet-18 trained on an unlabeled random subset of 56K OID images. For testing, we evaluate our approach on two validation sets: Caltech-256 (7.8K images) and a subset of the OID validation set (12K images).
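The pseudo-label cross-entropy used for this unsupervised training can be sketched numerically. The function names and toy logits below are illustrative, and the Teacher's softmax outputs stand in for the targets $\tau_i$:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_ce(student_logits, teacher_logits):
    """Soft-target cross-entropy: Teacher probabilities act as pseudo-labels tau."""
    tau = softmax(teacher_logits)
    log_s = np.log(softmax(student_logits) + 1e-12)
    return float(-(tau * log_s).sum(axis=-1).mean())
```

Since cross-entropy is minimized when the two distributions coincide, a Student whose logits match the Teacher's attains the lowest possible loss for a given input.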
The accuracies and computational costs of the Student (supervised and unsupervised) and Teacher on both validation sets are reported in Table \ref{tbl:caltech_models}. Note that we study these two validation sets since both can be valid measures depending on the target application. One represents the case in which a user provides a large Teacher model together with some validation data on which the joint model needs to attain a high accuracy. The other represents the case in which a user provides a large Teacher model, and the joint model is supposed to work well on the data that hit the cloud API, which are similar to the unlabeled data used to train the unsupervised joint model. Figure \ref{fig:caltech256-oid-comparison} presents the adaptive inference results of the unsupervised EBJR and compares them with the supervised case on both validation sets. For the supervised EBJR, the Student is trained on Caltech-256 (like the Teacher). As observed in Figure \ref{fig:caltech256-oid-comparison}-left, the unsupervised EBJR does not perform as well as the supervised case, because the distributions of the training and testing sets differ (OID vs. Caltech-256). However, when evaluated on the OID validation set, which follows the same distribution on which the Student was trained, a better performance is achieved (Figure \ref{fig:caltech256-oid-comparison}-right). It is shown in \cite{xie2020self, sohn2020simple, zoph2020rethinking} that using large amounts of unlabeled data for pseudo-label self-training can achieve results even better than the supervised models. In agreement with this observation, we will see later in this section that the performance of the unsupervised joint model tends to improve as larger amounts of unlabeled data are used. In some cases, it may even surpass the performance of the supervised model (see Figure \ref{fig:od_comparison_unsup}).
That being said, the results in Figure \ref{fig:caltech256-oid-comparison} are excellent for the supervised case, and still very promising for the unsupervised case, as the latter does not use any labels for training the joint model. \subsubsection{Experimental results - object detection} \label{ssec:experiments_jr_unsupervised_od} We also analyze the performance of the unsupervised variant of EBJR on the task of object detection on the MS-COCO dataset, where we employ the EfficientDet-D0 and EfficientDet-D4 architectures \cite{efficientdet} for the Student and Teacher, respectively. For the unsupervised EBJR, the OID training set is used as the unlabeled set. \begin{figure*} \centering \includegraphics[width=0.51\linewidth]{figures/coco-comparison.png} \vspace{-6pt} \caption{The adaptive inference performance of the supervised and unsupervised EBJR for object detection on MS-COCO (compared with the EfficientDet models \cite{efficientdet}).} \label{fig:od_comparison_unsup} \end{figure*} \begin{table*} \centering \fontsize{8}{10}\selectfont \begin{tabular}[t]{p{1.6cm}p{1.25cm}p{0.64cm}p{0.5cm}p{0.5cm}p{1.25cm}} \toprule \multicolumn{1}{c}{ } & \multicolumn{4}{c}{\textbf{Student}} & \multicolumn{1}{c}{\textbf{Teacher}}\\ \cmidrule(l{3pt}r{3pt}){2-5} \cmidrule(l{3pt}r{3pt}){6-6} \hspace{-7pt} \textbf{Mode} & Supervised & \multicolumn{3}{c}{\hspace{-8pt} Unsupervised} & Supervised\\ \cmidrule(l{3pt}r{3pt}){2-2} \cmidrule(l{3pt}r{3pt}){3-5} \cmidrule(l{3pt}r{3pt}){6-6} \hspace{-7pt} \textbf{Train-set size} & 118K & 160K & \hspace{-6pt} 320K & \hspace{-6pt} \textbf{1.7M} & 118K\\ \hspace{-7pt} \textbf{mAP} & 0.359 & 0.329 & \hspace{-6pt} 0.350 & \hspace{-6pt} \textbf{0.373} & 0.514\\ \cmidrule(l{3pt}r{3pt}){2-5} \hspace{-7pt} \textbf{FLOPs} {\tiny{($\times 10^9$)}} & \multicolumn{4}{c}{2.54} & 55.2\\ \bottomrule \end{tabular} \vspace{-4pt} \caption{\label{tbl:od_models}\small The performance of the Teacher and the supervised and unsupervised Student with different
train-set sizes on MS-COCO.} \end{table*} Table \ref{tbl:od_models} reports the performance of the Student model trained in the supervised and unsupervised settings, compared to the Teacher. For the unsupervised case, we tested different amounts of unlabeled data from OID. We observe that when sufficient unlabeled data (e.g., 1.7M in Table \ref{tbl:od_models}) are provided, the unsupervised Student can perform even better than the supervised one. Moreover, Figure \ref{fig:od_comparison_unsup} shows the adaptive inference results for both supervised and unsupervised (with 1.7M samples from OID) cases compared to the EfficientDet models (D0, D1, D2, D3, and D4). Both supervised and unsupervised EBJR outperform the standard EfficientDet models.
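A minimal sketch of the energy-based routing that underlies these joint-inference results follows. The threshold value and the stub models are hypothetical, and the exact acceptance criterion used by EBJR may differ; the sketch only shows the free-energy-gated fallback from Student to Teacher:

```python
import numpy as np

def free_energy(logits):
    # F(x) = -log sum_i exp(S_i(x)); lower free energy ~ more confident Student
    m = float(np.max(logits))
    return -(m + float(np.log(np.sum(np.exp(logits - m)))))

def joint_predict(x, student, teacher, threshold):
    """Accept the fast Student's answer when its free energy is low enough,
    otherwise fall back to the accurate but slow Teacher."""
    s_logits = student(x)
    if free_energy(s_logits) <= threshold:
        return int(np.argmax(s_logits)), "student"
    t_logits = teacher(x)
    return int(np.argmax(t_logits)), "teacher"
```

Sweeping the threshold trades accuracy against the fraction of inputs that reach the Teacher, which is what the adaptive-inference curves above plot.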
\section{Introduction} When Sir Isaac Newton (1642-1727) conceived the principle of inertia, he imagined a massive object that maintained its state of uniform rectilinear motion. In fact, Newton assumed that physical objects can exist by themselves, so that we can abstract their existence from the rest of the Universe without altering the dynamical qualities of the object in question. This point of view agreed with the Rationalist Philosophy of René Descartes (1596-1650), which posed for the first time in Western history the existence of a human being independent of the social environment, and hence the affirmation of the individual \textit{in and of itself}, one of the fundamental principles on which the future bourgeois ideology would be built. This ideological element penetrated Science, making it successful only in the macroscopic world, where the experimenter can control the measurement operations and their errors, making them as small as desired, which is equivalent to taking the classical limit of Planck's constant equal to zero. However, in the microscopic world there exist objects as small as molecules, atoms, and elementary particles, whose dimensions are of order $10^{-10}$ m or less; these particles are immersed in an ocean of interactions whose origin lies in the very existence of each of them, extended to the whole Universe. This ocean of interactions fluctuates stochastically, making it impossible to determine simultaneously and precisely the position and velocity of elementary particles. In this work it is shown that the density function of the action to which the particle is subjected is a Gaussian, which is a consequence of the Central Limit Theorem.
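The Central Limit Theorem claim can be illustrated numerically. In a toy model (entirely of our own construction, not from the paper) where the background action is the sum of many independent, bounded random contributions, the standardized sum has vanishing skewness and excess kurtosis, as a Gaussian should:

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_samples = 200, 20000
# toy model: the action S is a sum of many independent, bounded random
# contributions from the "ocean of interactions" (illustrative assumption)
S = rng.uniform(-1.0, 1.0, size=(n_samples, n_terms)).sum(axis=1)
S = (S - S.mean()) / S.std()
skewness = float(np.mean(S**3))
excess_kurtosis = float(np.mean(S**4) - 3.0)
```

Both moments come out close to zero, the Gaussian values, even though each individual contribution is uniform rather than Gaussian.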
Expressing this function in the frequency domain, another Gaussian distribution is found whose exponent is again a quadratic function of a linear combination of the spatial and temporal variables, where the coefficients of the spatial and temporal variables in the exponent are, respectively, the modulus of the wave vector $k$ and the angular frequency $\omega=\omega(\vec{k})$ of the wave. From this it follows that the modulus of the momentum $P$ is proportional to the standard deviation $\sigma$ of the background action field times the modulus of the wave vector, and the energy $E$ of the particle is likewise proportional to the angular frequency times the standard deviation of the background action field. It is further shown that the kernel of the integral equation transforming the frequency-domain representation into the usual space-time representation is the product of two Dirac deltas whose arguments are $\frac{P}{\hbar}-k$ and $\frac{E}{\hbar} - \omega$, respectively. Expressing these two distributions as a continuous superposition of plane waves, one concludes that the particle can be represented by a complex function of the form \begin{equation} \psi(x,t) = \frac{1}{\sqrt{2 \pi}} e^{i \big( \frac{Px-Et}{\hbar} \big)}. \end{equation} Starting from the last expression, using the nonrelativistic energy relation and the fact that the wave function of an electron in a potential $U(\vec{r})$ can be expressed as a superposition of plane waves, the Schr\"odinger equation of the elementary particle is derived in Section \ref{sec:sch}. In Section \ref{sec:inc} the Uncertainty Principle is derived from the probability density function of the background-field action of a free particle. In addition, the proportionality constant between the standard deviation and Planck's constant is obtained for traditional Quantum Mechanics.
Finally, the equation for the zero-point energy of the free particle--background action field system is found (see equation (\ref{eq:energia})). \section{Distribution Function of the Action and Physical Interpretation of Planck's Constant} \label{sec:cte} We begin by deriving the Gaussian density distribution of the background-field action on the particle. To this end, we first obtain the density function arising in a repeated set of measurements. In this way, the true value of the measurement is determined as the average of the experimental values obtained, and the statistical dispersion of the data in this process is obtained as well. The assumptions for obtaining the Gaussian density distribution are: \begin{enumerate} \item Small errors are more probable than large errors. \item For a given true value $p$, statistical dispersions of $\pm \varepsilon$ are equally probable. \item In the presence of several observations of the same quantity, the most probable value of that quantity is the average of the observations. \end{enumerate} Carl Friedrich Gauss (1777-1855) referred to this process as ``the most important problem of Mathematics in Natural Philosophy''. We now derive the Gaussian density distribution associated with the action of the particle. Let $p \in \mathbb{R}$ be the true (but unknown) value of the measurement of the physical quantity. One performs $n \in \mathbb{N}$ independent observations of the experiment associated with the measurement of the physical quantity $p$; these observations yield the measurements $M_{1}, M_{2}, \hdots, M_{n}$. Let moreover $\phi$ be the probability density function of the random error. The function $\phi$ is assumed to be differentiable, with $\phi(x) \neq 0$ for all $x \in \mathbb{R}$.
\\ Assumption 1 above implies that $\phi$ has a maximum at $x=0$, while assumption 2 implies that $\phi (x)=\phi (-x)$ for all $x \in \mathbb{R}$.\\ Define the function $f: \mathbb{R} \mapsto \mathbb{R}$ by \begin{equation*} f(x):=\frac{\phi{}'(x)}{\phi (x)}, \text{ for all } x \in \mathbb{R}. \end{equation*} Then \begin{equation*} f(-x)=-f(x), \text{ for all } x \in \mathbb{R}. \end{equation*} Note that $X_i:= M_{i}-p$ denotes the random variable associated with the error of the $i$-th measurement. Since these measurements (and errors) are assumed stochastically independent, it follows that \begin{equation*} \Omega_n :=\phi (M_{1}-p)\phi (M_{2}-p)\hdots \phi (M_{n}-p)=\prod_{i=1}^{n} \phi (M_{i}-p) \end{equation*} is the joint density associated with the $n$ errors. On the other hand, from assumption 3, \begin{equation*} \bar{M}_n:=\dfrac{M_{1}+M_{2}+\hdots+M_{n}}{n} \end{equation*} is the maximum-likelihood estimator of $p$. In other words, given the measurements $M_{1}, M_{2}, \hdots, M_{n}$, choosing $p=\bar{M}_n$ maximizes the value of $\Omega_n$. \\ We now evaluate the derivative of $\Omega_n$ at the point $p=\bar{M}_n$: \begin{align*} &0=\frac{d\Omega_n }{dp}\Big |_{p=\bar{M}_n} \\ &=- \sum_{i=1}^n \phi^{\prime}(M_{i}-\bar{M}_n)\prod_{j \neq i} \phi(M_{j}-\bar{M}_n) \\ &=- \sum_{i=1}^{n} \dfrac{\phi^{\prime}(M_{i}-\bar{M}_n)}{\phi(M_{i}-\bar{M}_n)} \prod_{k=1}^n \phi(M_{k}-\bar{M}_n)\\ &=- \sum_{i=1}^{n} f(M_{i}-\bar{M}_n) \; \Omega_n \\ & =-\Omega_n \; \sum_{i=1}^n f(M_{i}-\bar{M}_n). \end{align*} Then \begin{equation} \label{eq:1} \sum_{i=1}^n f(M_{i}-\bar{M}_n)=0. \end{equation} Now consider $M$ and $N$, two arbitrary random variables that could represent various physical magnitudes such as length, energy, or others.
Since the measurements given by the random variables $M_{i}$, $i= 1, \hdots, n$, may take arbitrary values, we now take \begin{equation} \label{eq:2} M_{1}=M,\ \ \ M_{2}=M_{3}=\cdots =M_{n}=M-nN. \end{equation} For such a set of measurements one therefore obtains \begin{equation*} \bar{M}_n=M-(n-1)N. \end{equation*} Then, from equation (\ref{eq:1}) and considering (\ref{eq:2}), \begin{equation*} f(M-[ M-(n-1)N])+(n-1)f(M-nN-(M-(n-1)N))=0. \end{equation*} Therefore, \begin{equation*} f[(n-1)N]=(n-1)f(N). \end{equation*} From the parity and continuity of $f$, it follows that there exists $k \in \mathbb{R}$ such that $f(x)=kx$ for all $x \in \mathbb{R}$. Hence, \begin{align*} &f(\lambda x)=\lambda k \;x =\lambda \; f(x), \text{ for all } x \in \mathbb{R}. \end{align*} Then \begin{equation*} \frac{\phi'(x)}{\phi (x)}= kx, \text{ for all } x \in \mathbb{R}. \end{equation*} From which it follows that \begin{equation*} \phi(x) = Ae^{\frac{kx^{2}}{2}}, \text{ for all } x \in \mathbb{R}. \end{equation*} Now define \begin{equation*} k=-\dfrac{1}{\sigma^{2}}. \end{equation*} Thus, \begin{equation*} \phi (x)=Ae^{-\frac{x^{2}}{2\sigma ^{2}}}, \text{ for all } x \in \mathbb{R}. \end{equation*} From the above expression, and assuming that $\int_{\mathbb{R}}\phi (x)dx=1$, the constant $A$ is given by \begin{equation*} A=\frac{1}{\sqrt{2\pi}\sigma}. \end{equation*} Consequently, \begin{equation} \label{eq:3} \phi (x)= \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{x^{2}}{2\sigma ^{2}}}, \text{ for all } x \in \mathbb{R}. \end{equation} Hence, the density distribution function of the background-field action is given by \begin{align*} \phi(S)=\frac{1}{\sqrt{2\pi} \sigma}e^{-\frac{S^{2}}{2\sigma ^{2}}}, \end{align*} where $S=Px-Et$; here $P$ is the momentum of the particle and $E$ denotes its energy.
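Assumption 3 is precisely what singles out the Gaussian form: for a Gaussian error law $\phi$, the joint likelihood $\Omega_n$ is maximized at the sample mean $\bar{M}_n$. This can be checked numerically (the measurement values below are illustrative):

```python
import numpy as np

def log_likelihood(p, measurements, sigma=1.0):
    # log of Omega_n = prod_i phi(M_i - p) for the Gaussian error law phi
    errors = measurements - p
    return float(np.sum(-0.5 * errors**2 / sigma**2
                        - 0.5 * np.log(2.0 * np.pi * sigma**2)))

measurements = np.array([1.2, 0.8, 1.1, 0.9, 1.0])
grid = np.linspace(0.0, 2.0, 2001)
values = [log_likelihood(p, measurements) for p in grid]
p_star = float(grid[int(np.argmax(values))])  # grid maximizer of Omega_n
```

The grid maximizer coincides with the sample mean of the measurements, as the derivation requires.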
Next, an interpretation of the standard deviation $\sigma$ in terms of Planck's constant $\hbar$ is presented (see Remark \ref{Obs1} below). Using an integral equation to represent the term $\sqrt{2\pi} \sigma \; \phi(S)$, one can write \begin{align} \label{eq:4} \exp\Big(-\frac{S^{2}}{2\sigma ^{2}}\Big)=\iint\limits_{\mathbb{R}^2} \ B(P,E,k,\omega)C(kx-\omega t)dkd\omega, \end{align} where the left-hand side of the last equation is essentially the density distribution function of the action $S$ in space-time, $C(kx-\omega t)$ is the representation of that distribution in Fourier space, and $B(P,E,k,\omega)$ is the Jacobian of the transformation between these two spaces. It is worth mentioning that this treatment was introduced by A. Einstein in one of his three famous papers of 1905 to describe Brownian motion, and is considered the starting point of the theory of Stochastic Processes. Let \begin{align*} \theta := kx-\omega t. \end{align*} Differentiating (\ref{eq:4}) with respect to $x$, and formally using the Dominated Convergence Theorem (Theorem \ref{tcd}) under the appropriate regularity and integrability hypotheses, one obtains \begin{equation*} -\frac{S}{ \sigma^{2} }\frac{\partial S}{\partial x}\exp\Big(-\frac{S^{2}}{2 \sigma^{2} }\Big) =\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C'(kx-\omega t) \frac{\partial \theta }{\partial x }dkd\omega. \end{equation*} Differentiating now (\ref{eq:4}) with respect to the variable $t$, and again formally using the Dominated Convergence Theorem, one sees that \begin{equation*} -\frac{S}{ \sigma^{2} }\frac{\partial S}{\partial t}\exp\Big(-\frac{S^{2}}{2 \sigma^{2} }\Big) =\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C'(kx-\omega t) \frac{\partial \theta }{\partial t }dkd\omega.
\end{equation*} Then, \begin{align*} &-\frac{S}{ \sigma^{2} }\frac{\partial S}{\partial x} \iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C(\theta )dkd\omega =\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C'(\theta) \frac{\partial \theta }{\partial x}dkd\omega,\\ &-\frac{S}{ \sigma^{2} }\frac{\partial S}{\partial t} \iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C(\theta )dkd\omega =\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) C'(\theta) \frac{\partial \theta }{ \partial t }dkd\omega. \end{align*} Thus, \begin{align*} &\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) \left [ \frac{S}{\sigma ^{2}}\frac{\partial S}{\partial x} C(\theta) +C'(\theta) \frac{\partial \theta }{\partial x }\right ]dkd\omega=0,\\ &\iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) \left [ \frac{S}{\sigma ^{2}}\frac{\partial S}{\partial t} C(\theta) +C'(\theta) \frac{\partial \theta }{\partial t }\right ]dkd\omega=0. \end{align*} Now consider \begin{equation} \label{eq:5} \frac{S}{\sigma ^{2}} \frac{\partial S}{\partial x} C(\theta) + C'(\theta) \frac{\partial \theta }{\partial x} =0, \end{equation} and \begin{equation} \label{eq:6} \frac{S}{\sigma ^{2}}\frac{\partial S}{\partial t} C(\theta) + C'(\theta) \frac{\partial \theta }{\partial t}=0. \end{equation} Multiplying equation (\ref{eq:5}) by $dx$ and equation (\ref{eq:6}) by $dt$, and adding, one gets \begin{equation*} \left [ C'(\theta)\frac{\partial \theta }{\partial x}dx +C'(\theta)\frac{\partial \theta }{\partial t}dt +C(\theta) \Big( \frac{S}{\sigma ^{2}}\frac{\partial S}{\partial x}dx +\frac{S}{\sigma ^{2}}\frac{\partial S}{\partial t}dt \Big) \right ]=0. \end{equation*} Then, \begin{align*} dC(\theta) + C(\theta) d\Big( \frac{S^2}{2 \sigma^2} \Big) = 0. \end{align*} Hence, \begin{align*} \frac{dC(\theta)}{C(\theta)} + d\Big( \frac{S^2}{2 \sigma^2} \Big) = 0. \end{align*} Integrating, one obtains \begin{align*} \ln (C(\theta)) + \frac{S^2}{2 \sigma^2} = \alpha, \end{align*} where $\alpha$ is a constant.
Then, \begin{align*} C(\theta) = e^\alpha \; e^{-\frac{S^{2}}{2\sigma ^{2}}}. \end{align*} Thus, \begin{align} \label{eq:7} C(kx-\omega t) = e^\alpha e^{-\frac{(Px-Et)^{2}}{2\sigma ^{2}}}, \text{ for all } x \in \mathbb{R}, \; \; t \in \mathbb{R}. \end{align} Taking $t=0$ in equation (\ref{eq:7}), it follows that \begin{align*} C(y) = e^\alpha e^{-\frac12 (\frac{P}{\sigma k})^2 y^2}, \text{ for all } y \in \mathbb{R}. \end{align*} On the other hand, choosing $x=0$ in (\ref{eq:7}), one sees that \begin{align*} C(y) = e^\alpha e^{-\frac12 (\frac{E}{\sigma \omega})^2 y^2}, \text{ for all } y \in \mathbb{R}. \end{align*} From the two expressions above, the following relation is obtained: \begin{align} \label{eq:8} \frac{P}{k} = \frac{E}{\omega}. \end{align} \begin{Remark} \label{Obs1} From ordinary Quantum Mechanics it is known that $\frac{P}{k} = \frac{E}{\omega} = \hbar$. Moreover, from (\ref{eq:7}) it follows that $\frac{P}{\sigma}$ is proportional to $k$, whence $\sigma$ is proportional to $\hbar$ (there exists $\beta>0$ such that $\hbar = \beta \sigma$). From here on, without loss of generality, we choose $\alpha = 0$ in (\ref{eq:7}). \end{Remark} Finally, we construct the plane-wave function associated with the free particle. Using equation (\ref{eq:4}) and the constant $\beta>0$ mentioned in Remark \ref{Obs1}, one has \begin{eqnarray*} e^{-\frac{(Px-Et)^{2}}{2\sigma ^{2}}} &=& \iint\limits_{\mathbb{R}^2} \ B(P,E,k,\omega)C(kx-\omega t)dkd\omega \\ &=& \iint\limits_{\mathbb{R}^2} B(P,E,k,\omega) e^{-\frac{\beta^2 (kx-\omega t)^2}{2}}dkd\omega. \end{eqnarray*} Taking for example $B(P,E,k,\omega) = \delta (\frac{P}{\beta \sigma}-k)\delta (\frac{E}{\beta \sigma }-\omega)$, one sees that \begin{equation*} e^{-\frac{(Px-Et)^{2}}{2\sigma ^{2}}} = \iint\limits_{\mathbb{R}^2} \delta \Big(\frac{P}{\beta \sigma }-k\Big) \delta \Big(\frac{E}{\beta \sigma }-\omega\Big) e^{-\frac{\beta^2 (kx-\omega t)^{2}}{2}}dkd\omega.
\end{equation*} Using the {\it{Fourier transform}} $\mathcal{F}$, also denoted by \; $\hat{}$ \;, one has \begin{equation*} \widehat{\frac{e^{ia\cdot}}{\sqrt{2\pi}}} = \delta_a, \; \; \text{ for all } a \in \mathbb{R}, \end{equation*} where $\delta_a$, for $a \in \mathbb{R}$, denotes the \textit{tempered distribution} defined by \begin{equation*} \delta_a (\varphi) = \varphi(a), \; \; \text{ for } \varphi \in \mathcal{S}(\mathbb{R}), \end{equation*} where $\mathcal{S}(\mathbb{R})$ is the {\it{Schwartz space}} of rapidly decreasing functions on $\mathbb{R}$. It then follows (using the usual notation in Physics) that \begin{equation*} \delta \Big(\frac{P}{\hbar} -k \Big) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{i\big(\frac{P}{\hbar} -k\big)x} dx \end{equation*} and \begin{equation*} \delta \Big(\frac{E}{\hbar} -\omega \Big) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-i\big(\frac{E}{\hbar} -\omega\big)t} dt. \end{equation*} Then, following the usual notation of Quantum Mechanics, one sees that \begin{equation*} \delta \Big(\frac{P}{\hbar} -k \Big) \delta \Big(\frac{E}{\hbar} -\omega \Big) = \frac{1}{2\pi} \iint\limits_{\mathbb{R}^2} e^{i \frac{(Px-Et)}{\hbar}} e^{-i(kx-\omega t)} dxdt. \end{equation*} From this, using Dirac's normalization condition, one obtains the spatial representation of the {\it{wave function of the free particle}}, given by \begin{equation*} \psi(x,t) = \frac{1}{\sqrt{2\pi}} e^{i\big(\frac{Px-Et}{\hbar}\big)}. \end{equation*} \section{Derivation of the Schr\"odinger Equation} \label{sec:sch} If an electron is subject to a potential energy $U(\vec{r})$, it forms the wave packet \begin{equation*} \psi (\vec{r},t)=\int D(\vec{k})e^{i(\vec{k} \cdot \vec{r}-\omega(\vec{k})t)}d^{3} \vec{k}, \end{equation*} where $D$ is the momentum-space representation of the wave function.
Differentiating the above expression with respect to time, and formally using the Dominated Convergence Theorem under suitable hypotheses, one obtains \begin{equation*} \frac{\partial \psi }{\partial t} (\vec{r},t) = \int -i\omega(\vec{k}) D(\vec{k})e^{i(\vec{k} \cdot \vec{r}- \omega(\vec{k}) t)}d^{3} \vec{k}, \end{equation*} and since $E= \hbar \omega(\vec{k})$, one sees that \begin{equation*} \frac{\partial \psi }{\partial t} (\vec{r},t) = \int -i\frac{E}{\hbar} D(\vec{k})e^{i(\vec{k} \cdot \vec{r}-\omega(\vec{k})t)}d^{3}\vec{k}. \end{equation*} Hence, \begin{equation*} i\hbar\frac{\partial \psi }{\partial t} (\vec{r},t) = \int ED(\vec{k})e^{i(\vec{k}\cdot \vec{r}-\omega(\vec{k})t)}d^{3}\vec{k}. \end{equation*} In the nonrelativistic case it is known that \begin{equation*} E=\dfrac{P^{2}}{2m}+U(\vec{r}). \end{equation*} Thus, \begin{equation*} i\hbar \frac{\partial \psi }{\partial t} (\vec{r},t) = \int\frac{P^{2}}{2m}D(\vec{k})e^{i(\vec{k}\cdot \vec{r}-\omega(\vec{k})t)}d^{3}\vec{k} +U(\vec{r}) \; \psi(\vec{r},t). \end{equation*} Again, formally using the Dominated Convergence Theorem under the appropriate hypotheses, one gets \begin{equation*} -\nabla ^{2}\psi (\vec{r},t) = \int k^{2}D(\vec{k}) e^{i(\vec{k} \cdot \vec{r}-\omega(\vec{k})t)}d^{3}\vec{k} =\int \frac{P^{2}}{\hbar^{2}}D(\vec{k})e^{i(\vec{k} \cdot \vec{r}-\omega(\vec{k})t)}d^{3}\vec{k}. \end{equation*} From which one obtains \begin{equation*} -\frac{\hbar^{2}}{2m} \nabla ^{2}\psi (\vec{r},t) = \int \frac{P^{2}}{2m}D(\vec{k})e^{i(\vec{k} \cdot \vec{r}- \omega(\vec{k})t)}d^{3}\vec{k}. \end{equation*} Consequently, \begin{equation} \label{eq:sh} i\hbar\frac{\partial \psi }{\partial t} (\vec{r},t) = -\frac{\hbar^{2}}{2m}\nabla ^{2}\psi (\vec{r},t) +U(\vec{r})\psi (\vec{r},t), \end{equation} which is the {\it{Schr\"odinger equation}}.
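As a consistency check (not part of the original derivation), substituting the one-dimensional free-particle plane wave $\psi(x,t) = \frac{1}{\sqrt{2\pi}} e^{i(Px-Et)/\hbar}$ into (\ref{eq:sh}) with $U=0$ reproduces the nonrelativistic dispersion relation:

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = E\,\psi,
\qquad
-\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} \psi}{\partial x^{2}}
 = \frac{P^{2}}{2m}\,\psi
\quad\Longrightarrow\quad
E = \frac{P^{2}}{2m}.
```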
\section{Derivation of the Uncertainty Principle} \label{sec:inc} Consider a free particle whose background-field action has probability density function \begin{align*} \phi(S)=\frac{1}{\sqrt{2\pi} \sigma}e^{-\frac{S^{2}}{2\sigma ^{2}}}, \end{align*} where \begin{align*} S = \int_0^t L(t') dt' \end{align*} is the action of the particle, $L=T-U$ is the Lagrangian in the sense of nonrelativistic Classical Mechanics, $T=\frac{P^2}{2m}$ is the kinetic energy of the particle, and $U$ is the potential energy, which for the free particle may be taken to be zero. The action of the free particle is then given by \begin{align*} S = \int_0^t \frac{P^2}{2m} dt' = \frac{P^2}{2m} t = \frac{P^2}{2} \frac{x}{mv} = \frac{Px}{2}. \end{align*} On the other hand, the variance of the action is \begin{align*} \sigma^2 = \overline{S^2} - {\bar{S}}^2, \end{align*} where \begin{align*} \overline{S^2} = \frac{1}{\sqrt{2\pi} \sigma} \int_{\mathbb{R}} e^{-\frac{S^{2}}{2\sigma ^{2}}} S^2 dS \end{align*} and \begin{align*} \bar{S} = \frac{1}{\sqrt{2\pi} \sigma} \int_{\mathbb{R}} e^{-\frac{S^{2}}{2\sigma ^{2}}} S dS =0. \end{align*} Then, \begin{align*} \sigma^2 = \overline{S^2} = \overline{\Big(\frac{Px}{2}\Big)^2}. \end{align*} Therefore, \begin{align*} \sigma = \frac{1}{2}\sqrt{\overline{(Px)^2}}. \end{align*} Since $\sigma$ is the standard deviation of the background-field action of the particle, any rectangle in phase space with sides $\Delta x$, $\Delta P$ has area greater than or equal to $\sigma$, that is, \begin{align*} \Delta x \Delta P \ge \sigma. \end{align*} To agree with ordinary Quantum Mechanics, one takes $\sqrt{\overline{(Px)^2}} = \hbar$. Then, \begin{align} \label{eq:inc} \Delta x \Delta P \ge \sigma = \frac{\hbar}{2}. \end{align} We now consider the case of minimum uncertainty, $\Delta x \Delta P = \sigma = \frac{\hbar}{2}$.
Let $\varphi$ be the wave function of the free particle corresponding to the minimum-uncertainty case in the coordinate representation. From the relation \begin{equation*} \phi(S) dS = |\varphi(x)|^2 dx, \end{equation*} which gives the probability that the free particle has an action between $S$ and $S + dS$, equal also to the probability that the free particle at minimum uncertainty lies between $x$ and $x+dx$, one concludes that \begin{equation*} \varphi(x) = \Big(\frac{P}{\hbar \sqrt{2 \pi}}\Big)^{\frac12} e^{-\frac{P^2x^2}{4 \hbar^2}}. \end{equation*} Substituting the last expression into the Schr\"odinger equation, one finally obtains \begin{equation} \label{eq:energia} E = \frac{\hbar \omega}{2} = m c^2, \end{equation} where $E$ and $m$ are the energy and mass of the free particle, respectively, $c$ is the speed of light, and $\omega \equiv \frac{P^2}{2 \hbar m}$ is the natural oscillation frequency of the particle inside the background action field. This last result will be analyzed in more detail in a future work. \section{Conclusions} We list some consequences of what has been presented in the previous sections. \begin{itemize} \item The objective character of elementary particles is recovered, since the macroscopic measuring apparatus constitutes the rest of the Universe. \item The causal nature of Quantum Theory is recovered, since ``spontaneous'' transitions are the product of perturbations of the background field acting on a particle in an ``excited'' quantum state, returning it to the stationary state of lowest energy. \item From Section \ref{sec:cte} it follows that Planck's constant is ``essentially'' the standard deviation of the field of background interactions with the particle.
\item From equation (\ref{eq:8}) and Remark \ref{Obs1}, the Planck and de Broglie postulates are deduced, establishing the wave-particle duality of traditional Quantum Mechanics. \item In Section \ref{sec:sch} the Schr\"odinger equation associated with a particle in a potential $U(\vec{r})$ is derived; it is given by equation (\ref{eq:sh}). This equation was obtained assuming that the wave function of the particle is a superposition of infinitely many plane waves. In traditional Quantum Mechanics, by contrast, the Schr\"odinger equation is postulated as the dynamical principle (see \cite{gp} for more details). \item In Section \ref{sec:inc} the Uncertainty Principle is derived from the theory presented in this article. Moreover, for ordinary Quantum Mechanics, the proportionality constant mentioned in Remark \ref{Obs1} is $\beta=2$. Finally, the physical behaviour of a free particle under minimum-uncertainty conditions is analyzed, yielding equation (\ref{eq:energia}), which corresponds to the zero-point energy of the particle--background action field system. \end{itemize} \section{Appendix} \begin{Theorem}[Lebesgue Dominated Convergence Theorem \cite{Bartle}] \label{tcd} Let $(\Omega, \mathcal{A}, \mu)$ be a measure space. Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of integrable functions ($f_n \in L(\Omega, \mathcal{A}, \mu)$ for all $n \in \mathbb{N}$) that converges almost everywhere to a real-valued measurable function $f$. If there exists an integrable function $g$ such that $|f_n| \le g$ for all $n \in \mathbb{N}$, then $f$ is integrable and \begin{equation*} \int f d\mu = \lim_{n \rightarrow +\infty} \int f_n d\mu. \end{equation*} \end{Theorem}
\section{Introduction} Although the MSSM is the leading candidate for new physics beyond the Standard Model and sensibly explains electroweak symmetry breaking by stabilizing the energy scale, it still leaves unanswered the open problems of the SM, among them the flavor problem \cite{FN}\cite{Diaz-Cruz:2001gf}. Furthermore, SUSY brings a new flavor problem which is closely related to the mass generation mechanism of the superpartners. Namely, a generic sfermion mass matrix could lead to unacceptably large FCNCs, which would exclude the model \cite{glashow,FCNC,segundo}. Several conditions or scenarios have been proposed to solve this problem, which reduce the number of free parameters and safely fit the experimental restrictions. The solutions handled in the literature include \cite{Diaz-Cruz:2005ri}:\\ \begin{itemize} \item {\it i)} {\em degeneration}, where different sfermion families have the same mass; \item {\it ii)} {\em proportionality}, where the trilinear $A$-terms are proportional to the Yukawa couplings (SUGRA) \cite{Hall}; \item {\it iii)} {\em decoupling}, where the superpartners are too heavy to affect the low energy physics (Split SUSY, focus point SUSY, inverted hierarchy) \cite{Feng}; \item {\it iv)} {\em alignment}, in which case the same physics that explains the fermion mass spectra and mixing angles would also explain the pattern of the sfermion mass spectra \cite{Nir:1993mx}.\\ \end{itemize} In the MSSM the particle mass spectrum depends on the SUSY breaking mechanism. The parametrization of SUSY breaking for the MSSM is called {\it Soft SUSY Breaking, SSusyB}. The scalar fields are grouped in a supermultiplet together with the fermion fields, in such a way that the scalar masses are linked to the SSusyB energy scale\footnote{see for instance \cite{Weinberg}} and the mass degeneracy can be broken by the SSusyB mechanism.\\ In this paper we study the slepton mass matrices.
Our goal is to determine the slepton mass eigenvalues, which are the ones that will hopefully be measured at the coming (LHC) and future (ILC) colliders. To this end, we propose a hierarchy within the mass matrices, which includes a sector with the property of being exactly diagonalizable. This sector mostly determines the slepton masses. We also include a sector with small off-diagonal entries that leads to lepton flavor violation (LFV), but we leave that analysis to future work.\\ The organization of this paper is as follows. In the next section we present the terms that contribute to the slepton mass matrix in the MSSM. Section 3 explicitly shows the ansatz proposed for the trilinear terms that contribute to this mass matrix, organized in the two contribution orders mentioned above, and obtains the expressions for the slepton masses. We present the numerical results for the parameter space in section 4. Finally, in section 5 we summarize our conclusions.\\ \section{Slepton Mass Matrix} The SUSY invariant terms, which contribute to the diagonal elements of the mass matrix, come from the auxiliary fields, namely the $F$- and $D$-terms. However, the mass matrix also includes terms that come from the soft SUSY Lagrangian \cite{Martin,Kuroda:1999ks}. Within the MSSM, this soft Lagrangian includes the following terms \begin{equation} \mathcal{L}_{soft}= \mathcal{L}_{sfermion}^{mass}+ \mathcal{L}_{bino}^{mass}+\mathcal{L}_{gaugino}^{mass}+\mathcal{L}_{gluino}^{mass}+\mathcal{L}_{Higgsino}^{mass}+ \mathcal{L}_{H\tilde{f}_{i}\tilde{f}_{j}} \label{softlag} \end{equation} In order to establish the free parameters of the model coming from this Lagrangian, we write down the form of the slepton masses and the Higgs-slepton-slepton couplings, the first and last terms of eq.
(\ref{softlag}), which are given as \begin{equation} \mathcal{L}_{soft}^{\tilde{l}}=-m_{\tilde{E},ij}^{2}\tilde{\bar{E}}^{i}\tilde{\bar{E}}^{j\dag} -m_{\tilde{L},ij}^{2}\tilde{L}^{i\dag}\tilde{L}^{j}-(A_{e,ij}\tilde{\bar{E}}^{i}\tilde{L}^{j}H_{1}+h.c) \end{equation} \noindent where the \textit{trilinear terms}, or \textit{A-terms}, are the coefficients of the scalar Higgs-sfermion couplings.\\ In principle, any scalars with the same quantum numbers can mix through the soft SUSY parameters \cite{Aitchison}. This general mixing includes the labels of the fermionic superpartners of both parities, and leads to a sfermion mass-squared matrix of dimension $6\times 6$, which can be written in block form as\\ \bigskip \begin{equation} \tilde{M}_{\tilde{f}}^{2}= \begin{pmatrix} M_{LL}^{2} & M_{LR}^{2} \\ M_{LR}^{2\dag} & M_{RR}^{2} \end{pmatrix} \label{massLR} \end{equation} \noindent where \begin{eqnarray} M_{LL}^{2}& =& m_{\tilde{L}}^{2}+M_{l}^{(0)2}+\frac{1}{2}\cos2\beta(2m_{W}^{2}-m_{Z}^{2}) \text{{\bf I}}_{3\times 3},\\ M_{RR}^{2} & = & m_{\tilde{E}}^{2}+M_{l}^{(0)2}-\cos2\beta\sin^{2}\theta_{W}m_{Z}^{2} \text{{\bf I}}_{3\times 3},\\ M_{LR}^{2} & = & \frac{A_{l}v\cos\beta}{\sqrt{2}}-M_{l}^{(0)}\mu\tan\beta, \label{Aterm} \end{eqnarray} \noindent where $M_{l}^{(0)}$ is the lepton mass matrix.\\ Lepton-flavor conservation is violated by the non-vanishing off-diagonal elements of each matrix, and the size of such elements is strongly constrained by experiments. In the SUSY Standard Model based on supergravity, it is assumed that the mass matrices $m_{\tilde{E}}^{2}$ and $m_{\tilde{L}}^{2}$ are proportional to the unit matrix, while $A_{e,ij}$ is proportional to the Yukawa matrix $y_{e,ij}$. With these soft terms, the lepton-flavor number is conserved exactly \cite{Hisano:1995nq}. However, in general soft-breaking schemes, we expect that some degree of flavor violation is generated.
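As an illustration of how the blocks of eq. (\ref{massLR}) assemble into the full $6\times 6$ mass-squared matrix, the following Python sketch builds $\tilde{M}^2_{\tilde f}$ numerically. All inputs, including the SUGRA-like choice $A_l = A_0\mathbf{1}$, are illustrative assumptions for this example, not fitted values:

```python
import numpy as np

# illustrative inputs (GeV); none of these values are fits
v, tan_beta, mu = 246.0, 15.0, -500.0
mZ, mW, sw2 = 91.19, 80.38, 0.2312           # sin^2(theta_W)
m0, A0 = 500.0, 500.0                        # common soft mass and trilinear scale
beta = np.arctan(tan_beta)
Ml = np.diag([0.000511, 0.10566, 1.77686])   # lepton mass matrix M_l^(0)
Al = A0 * np.eye(3)                          # SUGRA-like proportional A-terms

c2b = np.cos(2 * beta)
MLL = m0**2 * np.eye(3) + Ml @ Ml + 0.5 * c2b * (2 * mW**2 - mZ**2) * np.eye(3)
MRR = m0**2 * np.eye(3) + Ml @ Ml - c2b * sw2 * mZ**2 * np.eye(3)
MLR = Al * v * np.cos(beta) / np.sqrt(2) - Ml * mu * tan_beta

M6 = np.block([[MLL, MLR], [MLR.T, MRR]])    # full 6x6 mass-squared matrix
masses = np.sqrt(np.linalg.eigvalsh(M6))     # physical slepton masses (GeV)
```

With universal diagonal soft terms the off-diagonal flavor mixing is absent; the texture ansatz discussed next replaces $A_l$ with a non-trivial flavor structure.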
A particular proposal for this pattern is presented next.\\ \section{An ansatz for the mass matrix} The trilinear terms come directly from the soft SUSY breaking terms, and contribute to raising the superparticle masses. We analyze the consequences for the sfermion masses of assuming that such terms acquire a specific flavor structure, which is represented by some {\it textures}. Textures represent an {\it a priori} assumption \cite{Xing},\cite{fourtext}, in this case for the mixing between sfermion families. Such a structure implies that we can classify the matrix elements into three groups: the ones that contribute at leading order, those that could generate appreciable corrections, and those that can be discarded, obtaining a hierarchical texture form. \\ We propose an ansatz for the trilinear A-terms in the flavor basis, and study its effects on the physical states. We work in a scheme that admits exact diagonalization. First, we parameterize the off-diagonal terms assuming a flavor asymmetry inherited from the fermionic SM sector. In general, there is no reason to expect that the sfermion mass states are exactly degenerate, and there is no solid theoretical basis to consider such a pattern, although it is phenomenologically viable \cite{Weinberg,Haber}. \\ We assume, as in supergravity models, the condition of degeneracy on the pure Left and pure Right contributions: \begin{equation} M_{LL}^{2} \simeq M_{RR}^{2} \simeq \tilde{m}_{0}^{2} \mathbf{I}_{3\times 3}. \end{equation} Our ansatz for the $A$-terms is built using texture forms and the hierarchical structure pointed out above. The parametrization is obtained by assuming that the mixing between the third and second families is larger than the mixing with the first family. Furthermore, current data mainly suppress the FCNCs associated with the first two slepton families, but allow considerable mixing between the second and third slepton families \cite{Diaz-Cruz:2001gf}.
\\ Thus, our proposal includes dominant terms that mix the second and third families, as follows \begin{equation} A_{LO}= A'_{l}= \begin{pmatrix} 0 & 0 & 0 \\ 0 & w & z \\ 0 & y & 1 \end{pmatrix} A_{0}, \label{BLO} \end{equation} \noindent while the mixings with the first family are treated as corrections, and are given as:\\ \begin{equation} \delta A_{l}= \begin{pmatrix} e & s & r \\ s & 0 & 0 \\ r & 0 & 0 \end{pmatrix} A_{0}= \begin{pmatrix} \delta A_{e} & \delta A_{s} & \delta A_{r} \\ \delta A_{s} & 0 & 0 \\ \delta A_{r} & 0 & 0 \end{pmatrix} \label{BNLO} \end{equation}\\ In the case $w = 0$ we recover the ansatz given in Ref. \cite{Diaz-Cruz:2001gf}. The dominant terms give a mass matrix with a decoupled $4\times 4$ block, in the basis $\{\tilde{e}_{L},\tilde{e}_{R}, \tilde{\mu}_{L},\tilde{\mu}_{R},\tilde{\tau}_{L},\tilde{\tau}_{R}\}$, as\\ \begin{equation} \tilde{M}_{\tilde{l}}^{2}=\left( \begin{array}{cc|cccc} a & 0 & 0 & 0 & 0 & 0 \\ 0 & a & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & a & X_{2} & 0 & A_{z} \\ 0 & 0 & X_{2} & a & A_{y} & 0 \\ 0 & 0 & 0 & A_{y} & a & X_{3} \\ 0 & 0 & A_{z} & 0 & X_{3} & a \end{array}\right), \label{mBLObloq} \end{equation} \noindent with $a=\tilde{m}_{0}^{2}$, $X_{3}=\frac{1}{\sqrt{2}}A_{0}v\cos\beta-\mu m_{\tau}\tan\beta$ and $X_{2}=A_{w}-\mu m_{\mu}\tan\beta$. Here $\mu$ is the $SU(2)$-invariant coupling of the two different Higgs superfield doublets, $A_{0}$ is the trilinear coupling scale, and $\tan\beta=\frac{v_{2}}{v_{1}}$ is the ratio of the two vacuum expectation values of the two neutral Higgs fields; these three are MSSM parameters \cite{Aitchison,Accomando}.
\\ The correction takes the form:\\ \begin{equation} \delta \tilde{M}_{\tilde{l}}^{2}=\left( \begin{array}{cc|cccc} 0 & \delta A_{e} & 0 & \delta A_{s} & 0 & \delta A_{r} \\ \delta A_e & 0 & \delta A_{s} & 0 & \delta A_{r} & 0 \\ \hline 0 & \delta A_{s} & 0 & 0 & 0 & 0 \\ \delta A_{s} & 0 & 0 & 0 & 0 & 0 \\ 0 & \delta A_{r} & 0 & 0 & 0 & 0 \\ \delta A_{r} & 0 & 0 & 0 & 0 & 0 \end{array} \right) \label{mBNLObloq} \end{equation}\\ The explicit forms of $A_{z,y,w}$ and $\delta A$ are given in table \ref{formadeBi}.\\ \begin{table}[hbt] \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|c|c|} \hline dominant & correction \\ \hline $A_{z}=\frac{1}{\sqrt{2}}z A_{0}v\cos\beta$ & $\delta A_{s}=\frac{1}{\sqrt{2}}sA_{0}v\cos\beta$ \\ $A_{y}=\frac{1}{\sqrt{2}}y A_{0}v\cos\beta$ & $\delta A_{r}=\frac{1}{\sqrt{2}} rA_{0}v\cos\beta$\\ $A_{w}=\frac{1}{\sqrt{2}}w A_{0}v\cos\beta$ & $\delta A_{e}=0$\\ \hline \end{tabular} \renewcommand{\arraystretch}{1.0} \caption[]{\label{formadeBi} \it Explicit terms of the sfermion mass matrix ansatz, assuming $\delta A_{e}$ is a third order element.} \end{center} \end{table} In order to obtain the physical slepton eigenstates, we diagonalize the $4 \times 4$ mass sub-matrix given in (\ref{mBLObloq}). For simplicity we take $z=y$, which means that the mixings $\tilde{\mu}_L \tilde{\tau}_R$ and $\tilde{\mu}_R \tilde{\tau}_L$ are of the same order. The rotation of this block is performed with a unitary matrix $Z_{l}$, such that \begin{equation} Z_{l}^{\dag}M_{\tilde{l}}^{2}Z_{l}= \tilde{M}^{2}_{Diag}, \label{Mdiag} \end{equation} \noindent where \begin{equation} M_{\tilde{l}}^{2}=\left( \begin{array}{cccc} \tilde{m}_{0}^{2} & X_2 & 0 & A_y \\ X_2 & \tilde{m}_{0}^{2} & A_y & 0 \\ 0 & A_y & \tilde{m}_{0}^{2} & X_3 \\ A_y & 0 & X_3 & \tilde{m}_{0}^{2} \end{array} \right).
\end{equation}\\ Then the rotation matrix is given by \begin{equation} \begin{pmatrix} \tilde{e}_{L}\\ \tilde{\mu}_{L} \\ \tilde{\tau}_{L}\\ \tilde{e}_{R}\\ \tilde{\mu}_{R}\\ \tilde{\tau}_{R} \end{pmatrix} =\frac{1}{\sqrt{2}}\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & -\sin \frac{\varphi}{2} & -\cos \frac{\varphi}{2} & 0 & \sin \frac{\varphi}{2} & \cos \frac{\varphi}{2}\\ 0 & \cos \frac{\varphi}{2} & -\sin \frac{\varphi}{2} & 0 & -\cos \frac{\varphi}{2} & \sin \frac{\varphi}{2}\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & -\sin \frac{\varphi}{2} & \cos \frac{\varphi}{2} & 0 & -\sin \frac{\varphi}{2} & \cos \frac{\varphi}{2}\\ 0 & \cos \frac{\varphi}{2} & \sin \frac{\varphi}{2}& 0 & \cos \frac{\varphi}{2} & \sin \frac{\varphi}{2}\\ \end{array} \right) \begin{pmatrix} \tilde{e}_{1}\\ \tilde{\mu}_{1}\\ \tilde{\tau}_{1}\\ \tilde{e}_{2}\\ \tilde{\mu}_{2}\\ \tilde{\tau}_{2} \end{pmatrix} = Z_{B}^{l}\tilde{l}, \label{rotationB} \end{equation}\\ \noindent with \begin{eqnarray} \sin\varphi & =& \frac{2A_y}{\sqrt{4 A_y^2+\left(X_2 -X_3 \right)^2}},\nonumber \\ & & \nonumber \\ \cos\varphi & =& \frac{X_2 -X_3}{\sqrt{4 A_y^2+\left(X_2-X_3 \right)^2}}. \label{fi} \end{eqnarray} \noindent For $\mu <0$ we obtain the slepton mass hierarchy $m_{\tilde{\tau}_{1}}< m_{\tilde{\mu}_{1}} < m_{\tilde{\mu}_{2}} < m_{\tilde{\tau}_{2}}$. The eigenvalues are \begin{eqnarray} m^{2}_{\tilde{\mu}_{1}}& = & \frac{1}{2}(2 \tilde{m}_{0}^{2}+X_{2}+X_{3}-R),\nonumber\\ m^{2}_{\tilde{\mu}_{2}}& = & \frac{1}{2}(2 \tilde{m}_{0}^{2}-X_{2}-X_{3}+R),\nonumber\\ m^{2}_{\tilde{\tau}_{1}}& = & \frac{1}{2}(2 \tilde{m}_{0}^{2}-X_{2}-X_{3}-R),\nonumber\\ m^{2}_{\tilde{\tau}_{2}}&=&\frac{1}{2}(2\tilde{m}_{0}^{2}+X_{2}+X_{3}+R), \label{BLOmasses} \end{eqnarray} \noindent with $R=\sqrt{4 A_y^2+\left(X_{2}-X_3 \right)^2}$.\\ \section{Numerical results for slepton masses} From the expressions for the slepton masses (eq.
\ref{BLOmasses}), we analyze their dependence on the parameters. In figure \ref{fg:Bwy} we show the dependence on $y(=z)$ and $w$. Then, in the next two figures, we show the dependence of the slepton masses on the usual MSSM parameters, $\mu$, $A_{0}$ and $\tan\beta$. \\ We see that $X_{3}$ and $X_{2}$ are given in terms of $\mu$ and $\tan \beta$, with a strong dependence on the sign of $\mu$, so we obtain the following hierarchies of the slepton masses: \begin{eqnarray} \mu < 0 & & m_{\tau_{1}}<m_{\mu_{2}}<(m_{e_{1}}=m_{e_{2}})<m_{\mu_{1}}<m_{\tau_{2}}\\ \mu > 0 & & m_{\mu_{1}}<m_{\tau_{1}}<(m_{e_{1}}=m_{e_{2}})<m_{\tau_{2}}<m_{\mu_{2}} \end{eqnarray} \begin{figure}[hbt!] \begin{center} \renewcommand{\arraystretch}{0.3} \framebox {\includegraphics[width=13cm,height=12cm]{Bwy.EPS}} \caption[]{\label{fg:Bwy} {\it {\small Slepton mass dependence on the ansatz parameters $w$ (top) and $y$ (bottom), with $\tilde{m}_{0}=A_{0}=\mu_{susy}=500 GeV$ and $\tan\beta=15$, considering $\mu_{susy}<0$ and $\mu_{susy}>0$.}}} \end{center} \renewcommand{\arraystretch}{0.5} \end{figure} We observe this in the graphs of figure \ref{fg:Bwy}, where we vary independently the values of $y$ and $w$ in the range $[0.02,1]$ and set the soft SUSY breaking scale to $\tilde{m}_{0}=500\, GeV$, with $\tan \beta =15$. There is practically no dependence on the parameter $y$. For $w=0$ the four lightest sleptons are degenerate, and the two heaviest sleptons show practically no dependence on these parameters. The non-degeneracy increases up to $10\,GeV$ for $w=\pm 1$ for the two middle sleptons, smuons (or staus). This result tells us that if we are to explore the mixings of the second and third families, we have to take into account the contribution from the smuon mass term, represented here by the parameter $w$ in (\ref{BLO}).
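The closed-form eigenvalues of eq. (\ref{BLOmasses}) can be cross-checked against a direct numerical diagonalization of the $4\times 4$ sub-matrix; a minimal Python sketch, with purely illustrative values of $\tilde m_0^2$, $X_2$, $X_3$ and $A_y$ (in $GeV^2$, not taken from the figures):

```python
import numpy as np

# purely illustrative values in GeV^2
m02, X2, X3, Ay = 250000.0, 2000.0, 5000.0, 1500.0

# 4x4 sub-matrix of the dominant ansatz (basis mu_L, mu_R, tau_L, tau_R)
M = np.array([[m02, X2, 0.0, Ay],
              [X2, m02, Ay, 0.0],
              [0.0, Ay, m02, X3],
              [Ay, 0.0, X3, m02]])

R = np.sqrt(4 * Ay**2 + (X2 - X3)**2)
closed_form = 0.5 * np.array([2*m02 + X2 + X3 - R,   # m^2_mu1
                              2*m02 - X2 - X3 + R,   # m^2_mu2
                              2*m02 - X2 - X3 - R,   # m^2_tau1
                              2*m02 + X2 + X3 + R])  # m^2_tau2

# numerical spectrum agrees with the closed form
assert np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort(closed_form))
```

The agreement holds for any real inputs, since the sub-matrix is $\tilde m_0^2$ times the identity plus a matrix whose spectrum is $\pm\frac{1}{2}(X_2+X_3\pm R)$.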
As we said, the strongest dependence comes from the MSSM parameters, and the deviation from universality is manifested by the staus, which, in the case $\mu <0$, show a difference in the stau masses of $\sim 40 \,GeV$.\\ \begin{figure}[hbt!] \begin{center} \framebox {\includegraphics[width=12cm,height=10cm]{BtanB.EPS}} \caption[]{\label{fg:BtanB} {\it {\small Slepton masses with respect to $\tan \beta$ for $\mu_{susy}<0$, $\mu_{susy}>0$ and with $w=y=[0.02,1]$, $\tilde{m}_{0}=A_{0}=|\mu_{susy}|=500\,GeV$.}}} \end{center} \end{figure} In figure \ref{fg:BtanB} we examine the behavior of the slepton masses with $\tan\beta$; we vary the ansatz parameters through the interval $y=w=[0.02,1]$, with $\tilde{m}_{0}=500\,GeV$. We find that for $\mu <0$ the smuon masses become insensitive to $\tan\beta$ for $\tan\beta > 5 $, while for $\mu >0$ it is the stau masses that become insensitive, but only for $\tan \beta > 15$.\\ \begin{figure}[hbt!] \begin{center} \framebox {\includegraphics[width=13cm,height=12cm]{BmuA0.EPS}} \caption[]{\label{fg:BmuA0} {\it {\small Slepton mass dependence on $\mu_{susy}$, with $\tilde{m}_{0}=A_{0}=500\, GeV$ (top), and on $A_0$ for $\mu_{susy}<0$ and $\mu_{susy}>0$, with $\tilde{m}_{0}=|\mu_{susy}|=500\, GeV$ (bottom). Both with $\tan \beta=15$ and $y=w=1$.}}} \end{center} \end{figure} Although we have so far set all the SSusyB parameters equal to the SUSY breaking mass scale, $\tilde{m}_{0}=|\mu|=A_{0}$, this is not necessarily true. We explore independently the possible values of the Higgsino mass parameter $\mu$ relative to the soft mass term, as shown in the top panel of figure \ref{fg:BmuA0}. In the same way, we explore independently the trilinear $A$ coupling; the results are shown in the bottom panel of the same figure. In both cases we set the soft mass term to $\tilde{m}_{0}=500\, GeV$. We observe again the difference in the mass hierarchy between smuons and staus depending on the sign of $\mu$.
As for the dependence on the trilinear coupling, we observe that the non-degeneracy increases for $A_{0}> \tilde{m}_{0}$.\\ \section{Conclusions} We have studied the possible non-degeneracy of the slepton masses, using an ansatz for the slepton mass matrix. Specifically, we consider the mixing to occur between the second and third families, and assume that this mixing comes solely from left-right terms. We determine the dependence of the masses on the parameter space, including both the MSSM parameters and the proposed model parameters. This non-degeneracy could be measured in the cases where it is about $5\%$ of the SUSY soft-breaking mass scale $\tilde{m}_{0}$; this percentage is suggested by the experimental uncertainty.\\ We observe that the strongest dependence comes from the MSSM parameter space, while, as expected, the parameters of the ansatz act only to provide some non-zero terms.\\ A strong dependence on the sign of $\mu$ is manifested. The mass hierarchy changes depending on whether $\mu$ is positive or negative; this leads us to the conclusion that if the most expected mass hierarchy, {\it i.e.} $m_{\tilde{\tau}_{1}}<m_{\tilde{\mu}_{1}}<m_{\tilde{\mu}_{2}}<m_{\tilde{\tau}_{2}}$, is realized, then $\mu$ must be negative. We also observe that, for each case, \begin{itemize} \item For \emph{$\mu < 0$}, we obtain non-degeneracy of the staus, with a difference between them of 10\% or more for $\tan\beta \gtrsim 30$ and $|\mu|/\tilde{m}_{0}\gtrsim 1.6$, while the smuons remain practically degenerate. In this case, taking $A_{0}/\tilde{m}_{0} > 2 $ generates a difference in the stau masses of $ \sim 10\%$ of $\tilde{m}_{0}$ for $\tan\beta =15$, while for the smuons we reach only $1\%$.
For the ansatz parameters we also have an increase in the mass difference of up to $2\%$ for $y=w=1$. \item For \emph{$\mu > 0$}, the non-degeneracy is obtained for the smuons, and the difference between $\tilde{\mu}_{1}$ and $\tilde{\mu}_{2}$ can be larger than 10\% for $\tan\beta \gtrsim 30$ and $|\mu|/\tilde{m}_{0}\gtrsim 2$, while we obtain approximate stau degeneracy, where only for $A_{0}/\tilde{m}_{0}> 2 $ do we reach a difference of $ > 3\%$ of $\tilde{m}_{0}$.\\ Analyzing the ansatz parameters, we obtain an increased mass difference of up to $2\%$ for $y=w=1$, with the strongest dependence being on the $w$ parameter.\\ \end{itemize} Regarding $\tan\beta$, we conclude that if degenerate masses are measured then $\tan\beta$ should be around $10$, while otherwise the non-degeneracy is manifested either at small $\tan\beta$ (less than $\sim 5$) or at large values.\\ The mass differences found here could possibly be tested at the LHC, with some difficulties, but certainly at the ILC.\\
\section{Introduction} One system that has long been a paradigm of the quantum optics community is a single atom coupled to a single mode of the electromagnetic field, the Jaynes-Cummings model \cite{JC}. In practice the creation of a preferred field mode is accomplished by the use of an optical resonator. This resonator generally has losses associated with it, and the atom is coupled to vacuum modes out the side of the cavity, leading to spontaneous emission. Energy is put into the system by a driving field incident on one of the end mirrors. The investigation of such a system defines the subfield of cavity quantum electrodynamics \cite{CQED}. Cavity QED systems exhibit many nonclassical effects, as well as interesting nonlinear dynamics which can lead to optical bistability \cite{bistable} or chaotic dynamics \cite{chaos}. The presence of the cavity can also be used to enhance or reduce the atomic spontaneous emission rate \cite{CQED}. This system has also been studied extensively in the laboratory, but several practical problems arise \cite{expt1,expt2,thy}. There are typically many atoms in the cavity at any instant in time, but methods have been developed to load a cavity with a single atom. A major problem in experimental cavity QED stems from the fact that the atom(s) are not stationary, as is often assumed by theorists. The atoms have typically been in an atomic beam originating from an oven, or perhaps released from a magneto-optical trap. This results in inhomogeneous broadening of the atomic resonance from Doppler and/or transit-time broadening. Using slow atoms can reduce these effects, but the coupling of the atom to the light field in the cavity is spatially dependent, and as the atoms are in motion, the coupling is then time dependent; also, different atoms see different coupling strengths.
With greater control in recent years of the center-of-mass motion of atoms, developed by the cooling and trapping community, preliminary attempts have been made to investigate atoms trapped inside the optical cavity \cite{Cool}. The recent demonstration of a single-atom laser is indicative of the state of the art \cite{HJKSAL}. In this paper we consider a single-atom cavity QED system with the addition of an external potential, provided perhaps by an optical lattice, and study the photon statistics and conditioned field measurements of both the transmitted and fluorescent fields. We seek to understand (with a simple model at first) how the coupling of the atom's center-of-mass motion to the light field affects the nonclassical effects predicted and observed for a stationary atom. The system we consider is shown schematically in Fig. 1. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{cqedcmm.pdf} \end{tabular} \end{center} \caption[example] {Single atom in a weakly driven optical cavity. Here g is the reversible coupling rate between the mode of the cavity and the atom, $\kappa$ is the decay rate of the field mode of the cavity, $\gamma$ is the spontaneous emission rate. $Y$ is the external drive (taken to be a classical field).} \end{figure} The center-of-mass Hamiltonian, including the external lattice potential and the Jaynes-Cummings coupling, is \begin{equation} H = \frac{p^2}{2m} + \frac{1}{2}V_0 \cos^2{kz} + \hbar g_0\cos(k_L z)\left(a^\dagger\sigma_+ + a\sigma_-\right). \end{equation} A simplifying assumption is that $k=2k_L$, which is easily recreated in the lab through the use of a $\chi^{(2)}$ non-linearity, so that $z = k_L x$. This reduces the Schr\"odinger equation to \begin{equation} \frac{d^2\psi}{dx^2} - \frac{4m}{\hbar^2}\left(V_0 \pm g_0\sqrt{n}\right)\cos(2k_L x)\,\psi = -\frac{2mE}{\hbar^2}\,\psi, \end{equation} where we have taken advantage of the fact that, by working in the dressed-state picture, the Jaynes-Cummings term can be replaced by its eigenvalue $\pm\sqrt{n}$.
Defining the three constants \begin{eqnarray} z = k_L x \\ a = \frac{2mE}{k_L^2 \hbar^2}\\ q = \frac{2m}{k_L^2 \hbar^2}\left( V_0 \pm g_0\sqrt{n}\right) \end{eqnarray} puts the Schr\"odinger equation in the standard form of the Mathieu equation, as described in section 4.2: \begin{equation} \frac{d^2 \psi}{dz^2} + \left(a - 2q\cos{2z}\right)\psi = 0 \end{equation} The next step is to look at the probability that a transition between the ground and excited states of an atom will occur. In order to do this, we must examine how the transitions depend on the vibrational modes of the different electronic configurations. When a spatial overlap exists between the vibronic states (refer to figure \ref{FranckCondonNew.pdf}), the greater transition rates occur for the larger overlaps. This \lq\lq overlap\rq\rq\ is described using what are known as the Franck-Condon factors. Note that though we talk about the spatial overlap between transitions, there is no true spatial displacement in either the simple harmonic or Mathieu cases, so the Franck-Condon factors are modelling the atomic coupling to the field lattice.
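The characteristic values $a_m(q)$ of this Mathieu equation are available in standard numerical libraries. The sketch below (the value of $q$ is set by hand purely for illustration; in our problem it is fixed by $V_0$, $g_0$ and $n$ through the definition above) uses `scipy.special.mathieu_a` and `mathieu_b` to check two familiar limits:

```python
import numpy as np
from scipy.special import mathieu_a, mathieu_b

# q is set here by hand; physically q = 2m(V0 +/- g0*sqrt(n)) / (k_L^2 hbar^2)
q = 1.0

# at q = 0 the characteristic values reduce to a_m = m^2 (free-particle limit)
assert abs(mathieu_a(2, 0.0) - 4.0) < 1e-8

# for q > 0 the lowest even characteristic value a_0(q) is pushed below zero
a0 = mathieu_a(0, q)
assert a0 < 0.0

# the characteristic values a_0 < b_1 < a_1 < b_2 order the band edges
band_edges = [mathieu_a(0, q), mathieu_b(1, q), mathieu_a(1, q), mathieu_b(2, q)]
assert all(np.diff(band_edges) > 0)
```

The even (`mathieu_a`, for $ce_m$) and odd (`mathieu_b`, for $se_m$) characteristic values correspond to the Ce and Se eigenfunctions used for the vibrational states below.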
\begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{FranckCondonNew.pdf} \end{tabular} \end{center} \caption[example]{Need for Franck-Condon factors} \end{figure} It has already been shown that the coupling rate is given as \begin{equation} g=\mu_{eg}\sqrt{\frac{\hbar\omega_o}{2\epsilon_0 v}}\cos{kz} \end{equation} where \begin{equation} \mu_{eg}=\langle e|qz|g\rangle \end{equation} However, for completeness the Franck-Condon factors need to be included, which leads to \begin{equation} \mu_{eg}=\langle e|qz|g\rangle\langle k_o|k_\pm \rangle \end{equation} Our goal is then to express these Franck-Condon factors in terms of the dressed states of the harmonic oscillator with which we have been dealing: \begin{equation}\langle k_o|k_\pm \rangle = \int \psi_{k_0}^{*} \psi_{k_\pm}dy \end{equation} In the harmonic limit, both $\psi_{k,0}$ and $\psi_{k,\pm}$ can be expressed as a Gaussian function times a Hermite polynomial (for a full explanation, see \cite{Mambwe}). In terms of the Mathieu functions, however, this needs to be altered slightly, such that \begin{equation} \langle k_0 | k_\pm \rangle = \int \psi^*_{n,k,\pm}\psi_{n,k,0}dy, \end{equation} where some combination of the Ce and Se Mathieu wave functions needs to be included. This can be done analytically, so it will not be necessary to take this any further here: the analytical solution involves using the Ce and Se eigenfunctions to solve for the value of $q$ once a potential has been specified. Previously, the equations for the $\dot{C}$'s, the probability amplitudes, were derived. They now need to be written in a slightly different form in order to hide the time dependence.
This is really a change to a time-dependent basis, and begins by defining $D$'s in terms of $C$'s such that \begin{eqnarray} D_{0,l,g} & = & C_{0,l,g} \\ D_{1,l,\pm} & = & C_{1,l,\pm}e^{-i\big(l\Omega_{1,\pm}-l\Omega_{0} \pm g \big)t}\\ D_{2,l,\pm} & = & C_{2,l,\pm}e^{-i\big( l\Omega_{2,\pm}-l\Omega_{0} \pm \sqrt{2}g \big)t} \end{eqnarray} Because the weak-field limit is being examined, $C_{0,l,g}=1$, which gives $\dot{C}_{0,l,g}=0$. However, the above equations are still in the harmonic approximation. Going beyond this approximation and again using Mathieu eigenstates, our $D$'s become \begin{eqnarray} D_{0,l,g} &=& C_{0,l,g} \\ D_{1,l,\pm}&=& C_{1,l,\pm}e^{-i\left( E_{1,l,\pm} - E_{0,l}\right)t/{\hbar}}\\ D_{2,l,\pm} &=& C_{2,l,\pm}e^{-i\left( E_{2,l,\pm} - E_{0,l}\right)t/{\hbar}} \end{eqnarray} Using this, it can be shown that \begin{eqnarray} \dot{D}_{0,l,g} = 0\\ \dot{D}_{1,l,+} = -\bigg[ \frac{\gamma}{4}+\frac{\kappa}{2} +i \big( E_{1,l,+} - E_{0,l} + g\big) \bigg]D_{1,l,+}-\frac{Y}{\sqrt{2}}D_{0,l,g}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{1,l,-}\\ \dot{D}_{1,l,-} = -\bigg[ \frac{\gamma}{4}+\frac{\kappa}{2} +i \big( E_{1,l,-} - E_{0,l} - g \big) \bigg]D_{1,l,-}-\frac{Y}{\sqrt{2}}D_{0,l,g}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{1,l,+}\\ \dot{D}_{2,l,+} = -\bigg[ \frac{\gamma}{4}+\frac{3\kappa}{2} +i \big( E_{2,l,+} - E_{0,l} +\sqrt{2}g \big) \bigg]D_{2,l,+}-Y\big( \frac{1}{\sqrt{2}} + \frac{1}{2}\big)D_{1,l,+} \nonumber\\ + Y\big( \frac{1}{\sqrt{2}} - \frac{1}{2}\big)D_{1,l,-}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{2,l,-} \\ \dot{D}_{2,l,-} = -\bigg[ \frac{\gamma}{4}+\frac{3\kappa}{2} +i \big( E_{2,l,-}-E_{0,l}-\sqrt{2}g \big) \bigg]D_{2,l,-} + Y\big( \frac{1}{\sqrt{2}} - \frac{1}{2}\big)D_{1,l,+} \nonumber\\ - Y\big( \frac{1}{\sqrt{2}} + \frac{1}{2}\big)D_{1,l,-}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{2,l,+} \end{eqnarray} Now, though, these must be changed to account for the Franck-Condon factors that were
discussed in the previous chapter: \begin{equation} g_m \longrightarrow g_{\pm,l,m} = g_m \ast FC_{\pm,l,m} \end{equation} So the above equations are altered to obtain their final form: \begin{eqnarray} \dot{D}_{0,l,g} = 0\\ \dot{D}_{1,l,+} = -\bigg[ \frac{\gamma}{4}+\frac{\kappa}{2} +i \big( E_{1,l,+}-E_{0,l}+g_{1,l,+} \big) \bigg]D_{1,l,+}-\frac{Y}{\sqrt{2}}D_{0,l,g}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{1,l,-}\\ \dot{D}_{1,l,-} = -\bigg[ \frac{\gamma}{4}+\frac{\kappa}{2} +i \big( E_{1,l,-}-E_{0,l}-g_{1,l,-} \big) \bigg]D_{1,l,-}-\frac{Y}{\sqrt{2}}D_{0,l,g}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{1,l,+}\\ \dot{D}_{2,l,+} = -\bigg[ \frac{\gamma}{4}+\frac{3\kappa}{2} +i \big( E_{2,l,+}-E_{0,l}+\sqrt{2}g_{2,l,+} \big) \bigg]D_{2,l,+}-Y\big( \frac{1}{\sqrt{2}} + \frac{1}{2}\big)D_{1,l,+} \nonumber\\ + Y\big( \frac{1}{\sqrt{2}} - \frac{1}{2}\big)D_{1,l,-}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{2,l,-} \\ \dot{D}_{2,l,-} = -\bigg[ \frac{\gamma}{4}+\frac{3\kappa}{2} +i \big( E_{2,l,-}-E_{0,l}-\sqrt{2}g_{2,l,-} \big) \bigg]D_{2,l,-} + Y\big( \frac{1}{\sqrt{2}} - \frac{1}{2}\big)D_{1,l,+} \nonumber\\ - Y\big( \frac{1}{\sqrt{2}} + \frac{1}{2}\big)D_{1,l,-}-\bigg[ \frac{\gamma}{4}-\frac{\kappa}{2} \bigg]D_{2,l,+} \end{eqnarray} We can also prescribe an initial wave function in terms of Gaussian functions. We consider the overlap \begin{equation} \langle \Psi_{CM}|l \rangle \end{equation} and take the center-of-mass wavefunction to be a Gaussian \begin{equation} \Psi = Ae^{-y^2/2\Sigma^2} \end{equation} where $A$ is the normalization constant and $\Sigma$ is the width of the Gaussian.
By defining $\sigma = \Sigma/\sigma_0$ with $\sigma_0 = \left(\hbar/m\Omega_{0,l}\right)^{1/2}$, and finding the normalization to be $A = \left(m\Omega_{0,l}/\pi\hbar(2^n n!)^2 \right)^{1/4}$, we have \begin{equation} D_{0,l,g} = A\int_{-\infty}^\infty e^{-y^2/2\sigma^2}e^{-y^2/2}H_l(y)dy \end{equation} Relaxing the harmonic limit, $\Phi_m$ can be defined as a Mathieu function of order $m$, and $G(x)$ as a normalized Gaussian. Then \begin{equation} D_{0,l,g} = \int_{a}^{b} \Phi_l G(x)dx \end{equation} where the integral of the Gaussian $G(x)$ is given by the error function, \begin{equation} \int_{-x}^{x}G(x)dx = \mathrm{Erf}\left(\frac{x}{\sqrt{2}\sigma}\right) \end{equation} \begin{equation} \mathrm{Erf}(x) = 1 - \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-u^2}du \end{equation} Recall that when explaining the quantum trajectory formalism in the previous chapter, the condition of being in the weak-field limit was used. In the steady state for this case, there is a very small average photon number, and the probability of getting a collapse is small as well.
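The harmonic-limit overlap integrals above can be checked numerically. The sketch below restricts attention to the two ground states ($l=0$), for which the Franck-Condon factor of two Gaussians of widths $\sigma_1$, $\sigma_2$ has the closed form $\sqrt{2\sigma_1\sigma_2/(\sigma_1^2+\sigma_2^2)}$; the widths are illustrative choices for the two electronic surfaces, not values taken from the text:

```python
import numpy as np
from scipy.integrate import quad

def ho_ground(sigma):
    """Normalized Gaussian ground state of width sigma."""
    return lambda y: (1.0 / (np.pi * sigma**2))**0.25 * np.exp(-y**2 / (2 * sigma**2))

s1, s2 = 1.0, 1.5                    # illustrative widths of the two surfaces
psi1, psi2 = ho_ground(s1), ho_ground(s2)

# Franck-Condon factor <0_1|0_2> by quadrature versus the closed form
fc = quad(lambda y: psi1(y) * psi2(y), -np.inf, np.inf)[0]
exact = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2))
assert abs(fc - exact) < 1e-10
```

For equal widths the factor is unity, and it decreases as the widths (or, in the Mathieu case, the effective well depths $V_0 \pm g_0\sqrt{n}$) become more dissimilar, which is exactly the suppression the Franck-Condon factors introduce into the coupling $g$.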
The wavefunction for the steady state is written as \begin{equation} |\Psi_{SS}\rangle = \sum_{n,l}\left( D_{n,l,+}^{ss}|n,l,+\rangle + D_{n,l,-}^{ss}|n,l,-\rangle \right) \end{equation} and the normalized wavefunctions after a transmission or fluorescence collapse as \begin{eqnarray} |\Psi_{CT}(0)\rangle = \frac{a|\Psi_{ss}\rangle}{\left\| a|\Psi_{ss}\rangle \right\|}\\ |\Psi_{CF}(0)\rangle = \frac{\sigma_- |\Psi_{ss}\rangle}{\left\| \sigma_- |\Psi_{ss}\rangle \right\|} \end{eqnarray} The probability of a transmission (cavity emission) occurring at $\tau = 0$ is \begin{equation} P_T(\tau = 0) = 2\kappa \langle \Psi_{CT}|a^\dagger a |\Psi_{CT}\rangle \end{equation} and similarly for a fluorescence \begin{equation} P_F(\tau = 0) = 2\gamma\langle \Psi_{CF}|\sigma_+\sigma_- |\Psi_{CF}\rangle \end{equation} Putting these back into the equation for $g^{(2)}(\tau)$, \begin{eqnarray} g^{(2)}_{TT}(\tau) & = & \frac{\langle\Psi_{CT}|a^\dagger a|\Psi_{CT}\rangle}{\langle\Psi_{ss}|a^\dagger a|\Psi_{ss}\rangle}\nonumber\\ & = & \frac{\sum_{n,l}n|C_{g,n,l}^{CT}(\tau)|^2}{\sum_{n,l}n|C_{g,n,l}^{ss}|^2}\nonumber\\ & = & \frac{\sum_{l}|C_{g,1,l}^{CT}(\tau)|^2}{\sum_{l}|C_{g,1,l}^{ss}|^2} \end{eqnarray} Similarly for the fluorescence, \begin{equation} g^{(2)}_{FF}(\tau) = \frac{\sum_{l}|C_{e,0,l}^{CF}(\tau)|^2}{\sum_{l}|C_{e,0,l}^{ss}|^2} \end{equation} In order to define the amplitudes of the states, we need to look at the wave function at the steady state and also after a collapse.
These are expressed respectively as \begin{eqnarray} |\psi_{ss}\rangle = \sum^\infty_{l=0} \left( C_{1,l,+}^{ss}e^{-iE_{1,l,+}t}|1,l,+\rangle + C_{1,l,-}^{ss}e^{-iE_{1,l,-}t}|1,l,-\rangle\right)\\ |\psi(0)\rangle_{coll} = \sum_{n,l=0}^\infty \left( C_{g,n,l}^{coll}(t)e^{-iE_{g,n,l}t}|g,n,l\rangle + C_{e,n,l}^{coll}(t)e^{-iE_{e,n,l}t}|e,n,l\rangle\right) \end{eqnarray} where the initial amplitudes of the collapsed states are \begin{eqnarray} C_{g,1,l}^{coll}(0) = \frac{\sqrt{2}C_{g,2,l}^{ss}}{\sqrt{\sum_{l}\left( 2|C_{g,2,l}^{ss}|^2 + |C_{e,1,l}^{ss}|^2 \right)}}\\ C_{e,0,l}^{coll}(0) = \frac{C_{e,1,l}^{ss}}{\sqrt{\sum_{l}\left( 2|C_{g,2,l}^{ss}|^2 + |C_{e,1,l}^{ss}|^2 \right)}} \end{eqnarray} Now that all of the foundation has been laid, what exactly is the probability of getting either a transmission or a fluorescence event at time $t = \tau$ if a transmission or fluorescence event occurred at time $t=0$? The four possible combinations are labelled as TT, FF, TF, or FT, and they are expressed as \begin{eqnarray} g^{(2)}_{TT} & = & \frac{\langle a^\dagger(0)a^\dagger(\tau)a(\tau)a(0)\rangle}{\langle a^\dagger a \rangle^2}\\ g^{(2)}_{FF} & = & \frac{\langle \sigma_+(0)\sigma_+(\tau)\sigma_-(\tau)\sigma_-(0)\rangle}{\langle \sigma_+\sigma_-\rangle^2}\\ g^{(2)}_{TF} & = & \frac{\langle a^\dagger(0)\sigma_+(\tau)\sigma_-(\tau)a(0)\rangle}{\langle a^\dagger a \rangle\langle\sigma_+\sigma_-\rangle}\\ g^{(2)}_{FT} & = & \frac{\langle\sigma_+(0)a^\dagger(\tau)a(\tau)\sigma_-(0)\rangle}{\langle a^\dagger a \rangle\langle\sigma_+\sigma_-\rangle} \end{eqnarray} The collapse operators have been defined such that a transmission is $\sqrt{2\kappa}\,a$ and a fluorescence is $\sqrt{2\gamma}\,\sigma_-$, consistent with the collapse probabilities above, so that our collapsed states are \begin{eqnarray} |\psi_c^T \rangle = \frac{a|\psi_{ss}\rangle}{\left|a|\psi_{ss}\rangle\right|}\\ |\psi_c^F \rangle = \frac{\sigma_-|\psi_{ss}\rangle}{\left|\sigma_-|\psi_{ss}\rangle\right|} \end{eqnarray} and the final form of all our second-order correlation functions can be shown to be \cite{Joe} \begin{eqnarray}
g^{(2)}_{TT} = \frac{\sum_{\{m\} = 0}^\infty |C_{1,g\{m\}}^{CT}(\tau)|^2}{\sum_{\{m\} = 0}^\infty |C_{1,g\{m\}}^{ss}|^2}\\ g^{(2)}_{FF} = \frac{\sum_{\{m\} = 0}^\infty |C_{0,e\{m\}}^{CF}(\tau)|^2}{\sum_{\{m\} = 0}^\infty |C_{0,e\{m\}}^{ss}|^2}\\ g^{(2)}_{TF} = \frac{\sum_{\{m\} = 0}^\infty |C_{0,e\{m\}}^{CT}(\tau)|^2}{\sum_{\{m\} = 0}^\infty |C_{0,e\{m\}}^{ss}|^2}\\ g^{(2)}_{FT} = \frac{\sum_{\{m\} = 0}^\infty |C_{1,g\{m\}}^{CF}(\tau)|^2}{\sum_{\{m\} = 0}^\infty |C_{1,g\{m\}}^{ss}|^2} \end{eqnarray} \subsection{Anti-bunching} \label{Antibunching section} Once again, the Schwarz inequalities that classical fields obey are \begin{eqnarray} g^{(2)}(0) &\ge& 1\\ g^{(2)}(\tau) &\le& g^{(2)}(0)\\ |g^{(2)}(\tau)-1| &\le& |g^{(2)}(0)-1| \end{eqnarray} A violation of the first inequality means that no non-negative probability distribution describes the field. The other two inequalities give information about the photon distribution of our source. The third inequality describes any \lq\lq overshoot\rq\rq\ or \lq\lq undershoot\rq\rq\ properties, represented by \\ $|g^{(2)}(\tau)-1| > |g^{(2)}(0)-1|$ and $|g^{(2)}(\tau)-1| < |g^{(2)}(0)-1|$, respectively. Most interesting, though, is the second inequality. There are three possibilities for light: random, bunched, or anti-bunched (see figure \ref{Bunchingpic}). Random photon sources are represented by $g^{(2)}(\tau) = 1$, completely independent of $\tau$. Bunched light, or photon bunching, is associated with super-Poissonian statistics and satisfies $g^{(2)}(0) > g^{(2)}(\tau)$. Lastly is the case where $g^{(2)}(0) < g^{(2)}(\tau)$, known as anti-bunching, typically associated with sub-Poissonian statistics. In photon anti-bunching, there is a greater probability that photons arrive far apart rather than close together, making the detection pattern much more uniform. Such states admit no classical description of the field. There are two ways to describe the amount of anti-bunching of a source.
The first is perfect anti-bunching, the case where $g^{(2)}(0)=0$; the more anti-bunched a source is, the closer $g^{(2)}(0)$ is to zero. However, anti-bunching may also be characterized by the slope of $g^{(2)}(\tau)$ away from its initial value $g^{(2)}(0)$. The differences between these two terminologies will be explained later. In the weak field limit, the field quadrature is given as \begin{equation} \langle\hat{a}_\theta\rangle = \sum_l \left( C^*_{1,l}C_{0,l}e^{-i\theta} + C^*_{0,l}C_{1,l}e^{i\theta} \right) \end{equation} and so the correlation function for weak fields is \begin{equation} h_\theta(\tau) = \frac{\sum_l \left( C_{1,l}^{CT*}C_{0,l}^{CT}e^{-i\theta} + C_{0,l}^{CT*}C_{1,l}^{CT}e^{i\theta} \right)}{\sum_l \left( C_{1,l}^{ss*}C_{0,l}^{ss} + C_{0,l}^{ss*}C_{1,l}^{ss} \right)} \end{equation} Following the same format as for the $g^{(2)}$ functions, the four combinations can be written as \begin{eqnarray} h_{\theta}^{TT}(\tau) & = & \frac{\langle a_\theta(\tau)\rangle_{CT}}{\langle a_0(\tau)\rangle_{ss}}\\ h_{\theta}^{FF}(\tau) & = & \frac{\langle \sigma_\theta (\tau)\rangle_{CF}}{\langle\sigma_0(\tau)\rangle_{ss}}\\ h_{\theta}^{TF}(\tau) & = & \frac{\langle\sigma_\theta (\tau)\rangle_{CT}}{\langle\sigma_0(\tau)\rangle_{ss}}\\ h_{\theta}^{FT}(\tau) & = & \frac{\langle a_\theta (\tau)\rangle_{CF}}{\langle a_0(\tau)\rangle_{ss}} \end{eqnarray} and expressed in terms of probability amplitudes, \begin{eqnarray} h_{\theta}^{TT}(\tau) & = & \frac{\sum_{\{m\}} C_{1,g,\{m\}}^{CT}(\tau)C_{0,g,\{m\}}^{CT}(\tau)}{\sum_{\{m\}} C_{1,g,\{m\}}^{ss}C_{0,g,\{m\}}^{ss}} \cos\theta\\ h_{\theta}^{FF}(\tau) & = & \frac{\sum_{\{m\}} C_{0,e,\{m\}}^{CF}(\tau)C_{0,g,\{m\}}^{CF}(\tau)}{\sum_{\{m\}} C_{0,e,\{m\}}^{ss}C_{0,g,\{m\}}^{ss}} \cos\theta\\ h_{\theta}^{TF}(\tau) & = & \frac{\sum_{\{m\}} C_{0,e,\{m\}}^{CT}(\tau)C_{0,g,\{m\}}^{CT}(\tau)}{\sum_{\{m\}} C_{0,e,\{m\}}^{ss}C_{0,g,\{m\}}^{ss}} \cos\theta\\ h_{\theta}^{FT}(\tau) & = &
\frac{\sum_{\{m\}} C_{1,g,\{m\}}^{CF}(\tau)C_{0,g,\{m\}}^{CF}(\tau)}{\sum_{\{m\}} C_{1,g,\{m\}}^{ss}C_{0,g,\{m\}}^{ss}} \cos\theta \end{eqnarray} {\section{Inequalities and Non-Classical Behaviors} \setlength{\parindent}{0.25in} A set of figures is now presented. First, though, we remind the reader of the inequalities and of the cases in which the violations apply. In the graphs of $g^{(2)}(\tau)$, the data must be examined in two parts. The transmission and fluorescence cases follow a set of inequalities different from those for the cross correlations. \subsection{Transmission and Fluorescence} \setlength{\parindent}{0.25in} The inequality satisfied by classical fields with a positive definite probability distribution is \begin{equation} g^{(2)}(\tau)\le g^{(2)}(0) \end{equation} The two possible behaviors are therefore written as \begin{eqnarray} B: g^{(2)}(0) > g^{(2)}(\tau)\\ A: g^{(2)}(0) < g^{(2)}(\tau) \end{eqnarray} of which only the second violates the classical inequality. Simply put, they depend on the initial slope of the graph. An initial decrease in the graph signifies bunching \lq\lq B\rq\rq, whereas an initial increase signifies anti-bunching, \lq\lq A\rq\rq. \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2.5in, keepaspectratio]{ExampleB.pdf} \caption{\emph{Bunching represented by \lq\lq B\rq\rq.}}\label{Bunchingpic}\label{ExampleB} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2.5in, keepaspectratio]{ExampleA.pdf} \caption{\emph{Anti-bunching represented by \lq\lq A\rq\rq.}}\label{ExampleA} \end{figure} \clearpage \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2in, keepaspectratio]{ExampleSuperPoissonian.pdf} \caption{\emph{Example of a graph exhibiting super-Poissonian statistics, denoted \lq\lq SUP\rq\rq.}}\label{ExampleSuperPoissonian} \end{figure} The next inequality is derived from the fact that all well-behaved classical functions must obey the Schwarz inequality.
We know that the general form of the Schwarz inequality is \footnotesize \begin{eqnarray}\label{Schwartz1} \left(\int dx \int dy |f(x,y)g(x,y)|P(x,y)\right)^2 &\leq& \left(\int dx \int dy {f(x,y)}^2 P(x,y)\right) \nonumber \\ &\times& \left(\int dx \int dy \quad {g(x,y)}^2 P(x,y)\right). \end{eqnarray} \normalsize We can absorb the probability into functions of $x,y$ and write \footnotesize \begin{eqnarray}\label{Schwartz11} \left(\int dx \int dy |\bar{f}(x,y)\bar{g}(x,y)|\right)^2 &\leq& \left(\int dx \int dy {\bar{f}(x,y)}^2 \right) \nonumber \\ &\times& \left(\int dx \int dy \quad {\bar{g}(x,y)}^2 \right). \end{eqnarray} \normalsize By choosing $x = \bar{I}$, $y = \bar{I}_0$, $f(x,y) = \bar{I}$, $g(x,y) = 1$, and $P(x,y) = P(\bar{I}, t + \tau ; \bar{I}_0, t)$, where $P(\bar{I}, t + \tau ; \bar{I}_0, t)$ is the joint probability function that there is field intensity $\bar{I}$ at time $t +\tau$ and intensity $\bar{I}_0$ at time $t$, equation \ref{Schwartz1} becomes \begin{eqnarray}\label{Schwartz2} \left(\int d \bar{I} \quad \bar{I}P(\bar{I}, t +\tau)\right)^2 &\leq& \left( \int d \bar{I} \quad{\bar{I}}^2 P(\bar{I}, t+ \tau)\right) \times 1 \nonumber \\ {\langle \bar{I} \rangle}^2 &\leq& \langle {\bar{I}}^2 \rangle \nonumber \\ \frac{\langle {\bar{I}}^2 \rangle}{{\langle \bar{I} \rangle}^2} &\geq& 1. \end{eqnarray} Expressing the intensities in terms of the field, \begin{eqnarray}\label{Schwartz3} \frac{\langle {E^*(t)}^2 {E(t)}^2 \rangle}{{\langle E^{\ast}(t) E(t) \rangle}^2} &\geq& 1 \nonumber \\ \frac{\langle {a^{\dag}(t)}^2 {a(t)}^2 \rangle}{{\langle {a^{\dag}}(t) a(t)\rangle}^2} &\geq& 1. \end{eqnarray} Because the field is stationary, this can be written as \begin{equation}\label{Schwartz4} \frac{\langle {a^{\dag}(t)}^2 {a(t)}^2 \rangle}{{\langle {a^{\dag}} a\rangle}^2} \geq 1.
\end{equation} which is just the expression for $g^{(2)}(\tau)$ at $\tau=0$, so the final inequality is \begin{equation} g^{(2)}(0) \ge 1 \end{equation} When this is violated, no non-negative probability distribution describes the field (refer to section \ref{g2chpt} for explanation). If $g^{(2)}(0) > 1$ our data is super-Poissonian, and if $g^{(2)}(0) < 1$ our data is sub-Poissonian. These shall be referred to as \lq\lq SUP\rq\rq\ and \lq\lq SUB\rq\rq. An example of the super-Poissonian condition is shown in figure \ref{ExampleSuperPoissonian}. Please note specifically the notation used in this section, as some people refer to both of these violations as anti-bunching. The next set of inequalities represents what shall be termed an overshoot \lq\lq OS\rq\rq\ or an undershoot \lq\lq US\rq\rq. These are represented as \begin{eqnarray} |g^{(2)}(\tau) - 1| > |g^{(2)}(0) - 1|\\ |g^{(2)}(\tau) - 1| < |g^{(2)}(0) - 1| \end{eqnarray} respectively. An example of each of these is shown (see figures \ref{ExampleOvershoot}, \ref{ExampleUndershoot}). \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2.5in, keepaspectratio]{ExampleOvershoot.pdf} \caption{\emph{Overshoot represented by \lq\lq OS\rq\rq.}}\label{ExampleOvershoot} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2.5in, keepaspectratio]{ExampleUndershoot.pdf} \caption{\emph{Undershoot represented by \lq\lq US\rq\rq.}}\label{ExampleUndershoot} \end{figure} \clearpage \subsection{Cross Correlations} \setlength{\parindent}{0.25in} There are only two inequalities that must be examined for the cross-correlations. They will be called cross-violations, and denoted as \lq\lq CV1\rq\rq\ and \lq\lq CV2\rq\rq. A CV1 disobeys the inequality \begin{equation} g^{(2)}_{TF,FT}(0) \le \sqrt{g^{(2)}_{TT}(0)\, g^{(2)}_{FF}(0)} \end{equation} However, because the case being examined is for one atom, $g^{(2)}_{FF}(0)$ is always zero.
This is because $\sigma_-|e\rangle = |g\rangle$, while $\sigma_-|g\rangle = 0$ (refer to section \ref{TFFT}). This simplifies the CV1 to \begin{equation} g^{(2)}_{TF,FT}(0) \le 0 \end{equation} in which case there will always be a violation. As for CV2, \begin{equation} g^{(2)}_{TF}(0)-1 \le \sqrt{|g^{(2)}_{TT}(0)-1|\,|g^{(2)}_{FF}(0)-1|} \end{equation} which, with $g^{(2)}_{FF}(0) = 0$, simplifies to \begin{equation} g^{(2)}_{TF}(0)-1 \le \sqrt{|g^{(2)}_{TT}(0)-1|} \end{equation} \clearpage \section{Graphs for $g/\gamma = 1$, $\kappa/\gamma = 1.6$} \setlength{\parindent}{0.25in} Located on each graph will be a small table indicating which non-classical behaviors are present. If a behavior is not listed, it is assumed to be classical. The following is a reminder of each non-classical abbreviation. \begin{center} B - Bunching\\ A - Anti-bunching\\ SUB - Sub-Poissonian probability distribution\\ SUP - Super-Poissonian probability distribution\\ OS - Overshoot\\ US - Undershoot\\ CV1 - Cross-violation 1\\ CV2 - Cross-violation 2\\ \end{center} Note also that some of the graphs are not smooth lines, but instead have \lq\lq wiggles\rq\rq: these are beat frequencies. In the dressed-state picture, each successively higher level can be considered an un-coupled three-level system, and when these interact we see the beat frequencies. \clearpage \section{Inequalities and Non-Classical Behaviors} \setlength{\parindent}{0.25in} As described in Section \ref{Squeezing}, a violation of the classical behavior of $h_\theta$ is a sign of squeezing (refer to section 7.2.4). Again, the results will be divided into two categories - the transmissions and fluorescence, and the cross correlations. \subsection{Transmission and Fluorescence} \setlength{\parindent}{0.25in} There are only two possible violations to consider \cite{ElliotRice,ElliotJoeRice}. The violations will be denoted as \lq\lq S1\rq\rq\ and \lq\lq S2\rq\rq\ because they both signify squeezing.
They are defined respectively as \begin{eqnarray} 0 \le h_\theta(0) - 1 \le 1\\ |h_\theta(\tau) -1 | \le |h_\theta(0)-1|\le 1 \end{eqnarray} An S1 violation will therefore occur any time the value of $h_\theta(0)$ is not between 1 and 2. Note that because the fluorescence case has an initial value of zero, it will always exhibit an S1 violation. An S2 violation will occur any time the graph dips below zero or rises above two. Furthermore, an S2 violation can occur within a narrower range of values, dependent upon the initial value, analogous to the overshoot/undershoot violations of $g^{(2)}(\tau)$. Examples of S1 and S2 violations are shown below. \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2in, keepaspectratio]{ExampleS1.pdf} \caption{\emph{Example of a graph exhibiting an S1 violation.}}\label{S1ExampleS1} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2in, keepaspectratio]{ExampleS2.pdf} \caption{\emph{Example of a graph exhibiting an S2 violation.}\label{S2ExampleS2}} \end{figure} Finally, we consider violations of classical inequalities for the cross correlations.
These are also readily observed. \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=2in, keepaspectratio]{ExampleCV1CV2.pdf} \caption{\emph{Example of a graph exhibiting CV1 and CV2 violations.}} \end{figure} We now present tables that examine which nonclassical effects occur for which parameters. \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=6in, keepaspectratio]{g2tab1.pdf} \caption{\emph{Nonclassical effects in $g^{(2)}(\tau)$.}} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=6in, keepaspectratio]{g2tab2.pdf} \caption{\emph{Nonclassical effects in $g^{(2)}(\tau)$.}} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=6in, keepaspectratio]{g2tab3.pdf} \caption{\emph{Nonclassical effects in $g^{(2)}(\tau)$.}} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth,totalheight=6in, keepaspectratio]{htab.pdf} \caption{\emph{Nonclassical effects in $h_{\theta}(\tau)$.}} \end{figure} \section{Conclusions} We have investigated intensity-intensity and field-intensity correlations in a cavity QED system with an internal potential of periodicity $\lambda$, where $\lambda$ is the common wavelength of the atomic transition and cavity mode. When both the atom and cavity are off resonance, it was found that the anti-bunching in all cases diminished as the detunings increased. It was found that both the photon statistics and the wave-particle correlation functions are quite sensitive to the center-of-mass wave function. This can be eased by choosing atomic and cavity detunings equal and opposite (in units of their respective linewidths); in that case the nonclassical behavior is not reduced drastically. We saw that an increase in the width of the Gaussian in both the SHO and Mathieu cases washed out the non-classical features, with the Mathieu case washing out more rapidly as the Gaussian width increased.
In all cases investigated, the correlation functions $g^{(2)}_{FF}$ and $h^{FF}_{\theta}$ appear to be sensitive only to the Mathieu and SHO population distributions for large values of the Gaussian width. The intensity-field fluctuations are not as sensitive to detunings.
\section{Introduction} \label{sec:intro} In gravitational and gauge theories, asymptotic symmetries (AS) are a global remnant of large diffeomorphisms and gauge transformations which act non-trivially on physical data at spacetime infinity. The classic example of infinite-dimensional AS, and in many ways the best understood and applied, is that of (quantum) General Relativity (GR) in asymptotically 3D Anti-de Sitter (AdS$_3$) spacetime. The analysis of Brown and Henneaux~\cite{Brown:1986nw} uncovered Virasoro symmetries which presaged, and were ultimately elegantly incorporated into, the AdS$_3$/CFT$_2$ correspondence, translating into the implications of 2D conformal invariance and unitarity. The Virasoro structure and central charges, with modular invariance, led to a precise microscopic account~\cite{Strominger:1997eq} of the Bekenstein-Hawking entropy of AdS$_3$ Schwarzschild black holes, dual to the CFT$_2$ Cardy formula~\cite{Cardy:1986ie}. There is an ongoing program of exploiting this symmetry structure to address more detailed aspects of black hole information puzzles~\cite{Fitzpatrick:2016mjq, Fitzpatrick:2016ive}. In a similar vein to these gravitational asymptotic symmetries, 3D Chern-Simons (CS) gauge theories display infinite-dimensional Kac-Moody (KM) asymptotic symmetries with central extensions, reflecting 2D Wess-Zumino-Witten (WZW) current algebras via the technically simpler CS/WZW correspondence~\cite{Witten:1988hf, Elitzur:1989nr, Witten:1991mm, Gukov:2004id}. In higher dimensions the situation is intriguing, but less well understood. The primordial example is provided by the infinite-dimensional BMS ``supertranslations'' of GR in asymptotically 4D Minkowski spacetime (Mink$_4$)~\cite{Bondi:1962px, Sachs:1962wk}, later extended to include Virasoro-type ``superrotations''~\cite{Barnich:2009se, Barnich:2011ct}, and Kac-Moody asymptotic symmetries from 4D gauge theory~\cite{Strominger:2013lka, He:2014cra, He:2015zea, Kapec:2015ena}.
However, the symmetry algebras have appeared without central extensions, ordinarily required by unitarity in lower-dimensional contexts. There are new deep aspects in 4D, unifying asymptotic symmetries with soft limits of gravitons and gauge bosons, and with in-principle physical gravitational and gauge ``memory'' effects (see Ref.~\cite{Strominger:2017zoo} for a review and extensive list of references). There are also hopes of applying AS to help understand black hole information~\cite{Hawking:2016sgy, Hawking:2016msc, Strominger:2017aeh, Carney:2017jut, Carney:2017oxp}, although this is still under debate~\cite{Bousso:2017dny,Mirbabayi:2016axw, Gabai:2016kuf, Gomez:2017ioy, Donnelly:2017jcd, Bousso:2017rsx}. The asymptotic symmetries can be shown to derive from 2D current algebras ``living'' on the celestial sphere, but it is unclear what the precise connection is between this structure and some form of holography in Minkowski spacetime. One hint comes from an intermediate step between 4D and 2D: the soft limit of gravitational and gauge fields renders them effectively 3-dimensional, in a more nuanced generalization of the trivial loss of the time dimension in the {\it static} limit. In particular, some of the soft fields take the form of 3D GR and CS~\cite{Cheung:2016iub}, with close ties to the AdS$_3$/CFT$_2$ and CS/WZW correspondences~\cite{Witten:1988hf, Elitzur:1989nr, Witten:1991mm, Gukov:2004id}. In order to explore the connection of 4D asymptotic symmetries to holography, Ref.~\cite{Mishra:2017zan} turned to the study of asymptotic symmetries in (portions of) AdS$_4$, taking advantage of the well-established $\text{AdS}_4/\text{CFT}_3$ correspondence. In this context, there is a natural way to include 3D (conformal) GR and CS, by simply having them gauge the holographic CFT$_3$ at the outset. Applying 3D (conformal) GR and CS ($+$ CFT$_3$ ``matter") analyses then yields a set of infinite-dimensional asymptotic symmetries {\it with central extensions}. 
Even in the limit in which the external 3D GR and CS fields decouple from CFT$_3$, these asymptotic symmetries remain, but {\it lose their central extensions} as the price for restricting to CFT correlators with a well-defined decoupling limit. The resulting asymptotic symmetries closely parallel the supertranslation, superrotation and Kac-Moody asymptotic symmetries of Mink$_4$. In this paper, we continue the study of asymptotic symmetries in the context of $\text{AdS}_4/\text{CFT}_3$. We restrict our attention to gauge theory in the Poincare patch of AdS$_4$ for technical and conceptual simplicity, with 4D GR only an incidental presence needed for duality with CFT$_3$. Within this framework, we will identify different but interconnected ways in which Kac-Moody asymptotic symmetries arise. Most directly we extend the approach of Ref.~\cite{Mishra:2017zan} to the Poincare patch, with CS-gaugings of the holographic CFT defining new $\widetilde{\text{CFT}}$s, and the canonical CS structure leading to Kac-Moody asymptotic symmetries with finite central extensions. The AdS dual of the modified $\widetilde{\text{CFT}}$ shares the same 4D dynamics as the AdS dual of the original CFT, but with the former having an alternate set of AdS boundary conditions~\cite{Witten:2003ya} (particular to 4D). This is key to evading no-go arguments~\cite{Ashtekar:1999jx, Papadimitriou:2005ii} for infinite-dimensional asymptotic symmetries in $\text{AdS}_{d> 3}$. In the case of {\it abelian} gauge/global symmetries of AdS$_4$/CFT$_3$, we can make a stronger statement because the original CFT and the $\widetilde{\text{CFT}}$s are connected by $\text{SL}(2,\textbf{Z})$ ``mirror'' symmetry~\cite{Witten:2003ya}. From the AdS$_4$ viewpoint, this $\text{SL}(2,\textbf{Z})$ is associated to electric-magnetic duality, which relates the standard boundary conditions to alternate boundary conditions.
In this sense, Kac-Moody asymptotic symmetry structure already resides in the standard AdS$_4$/CFT$_3$ construction, albeit applied in suitable electric-magnetic/mirror dual variables. For both abelian and non-abelian theories, there is another way in which we will show that the standard AdS$_4$/CFT$_3$ theory contains the ``seeds'' of the alternate/$\widetilde{\text{CFT}}$ theory, namely by taking gauge-boson long-wavelength limits in the holographically emergent dimension within $\partial \text{AdS}_4$ correlators. We show that this ``holographic soft limit'' of the standard theory yields the correlators and Kac-Moody asymptotic symmetries of the alternate theory to leading order in the CS level, closely matching and adding physical significance to the decoupling limit AS analysis of Ref.~\cite{Mishra:2017zan}. Paralleling the connections in $\text{Mink}_4$ between asymptotic symmetries, soft limits and memory effects, we will show in $\text{AdS}_4$ abelian gauge theory that the KM asymptotic symmetries and holographic soft limits are closely connected to ``magnetic'' gauge memory effects. The paper is organized as follows. In Section~\ref{sec:ads4-poincare}, we introduce gauge theory in the Poincare patch of $\text{AdS}_4$, standard and alternate boundary conditions, and their holographic translations in terms of $\text{CFT}_3$ and $\widetilde{\text{CFT}}_3 \equiv \text{CS} + \text{CFT}_3$, respectively. In Section~\ref{sec:KM-from-MS} we derive the Kac-Moody asymptotic symmetries of the alternate $\text{AdS}_4/\widetilde{\text{CFT}}_3$ theory from its canonical CS structure. In Section~\ref{sec:EMduality-from-MirrorDuality}, we restrict to abelian theories and point out the passage from standard $\text{AdS}_4/\text{CFT}_3$ to alternate $\text{AdS}_4/\widetilde{\text{CFT}}_3$, and hence Kac-Moody asymptotic symmetries, via electric-magnetic/mirror duality.
In Section~\ref{sec:MirrorDual-from-SoftLimit-abelian}, we derive another passage from standard $\text{AdS}_4/\text{CFT}_3$ to alternate $\text{AdS}_4/\widetilde{\text{CFT}}_3$ in abelian theories, this time by introducing the ``holographic soft limit" in its simplest form. In Section~\ref{sec:MirrorDual-from-SoftLimit-nonAbelian} we generalize this soft limit analysis to non-abelian gauge theories in $\text{AdS}_4$, involving more careful treatment of multiple soft external lines. In Section~\ref{sec:memory} we describe (abelian) magnetic memory effects in standard $\text{AdS}_4/\text{CFT}_3$ and give their holographic interpretation and connections to Kac-Moody asymptotic symmetries structure and soft limits. We provide our conclusions in section~\ref{sec:discussion}, including several parallels and contrasts between the $\text{AdS}_4$ and $\text{Mink}_4$ asymptotic symmetry analyses. \section{$\text{AdS}_4$ Gauge Theory, Boundary Conditions and Holography} \label{sec:ads4-poincare} We describe the Poincare patch of $\text{AdS}_4$ by coordinates $X^M \equiv (t,x,y,z)$ and metric, \begin{align} ds_{\text{AdS}_4}^2 & = \frac{dt^2-dx^2-dy^2-dz^2}{z^2}, ~ z>0, \end{align} where we work in units of the $\text{AdS}$ radius of curvature. Its boundary, $\partial \text{AdS}_4 \equiv \text{Mink}_3$, is at $z=0$, with 3D coordinates $x^{\mu} \equiv (t,x,y)$. 
We consider AdS dynamics of the form, \begin{align} \mathcal{L}_{\text{AdS}_4} &= -\frac{1}{2g^2}\,\text{Tr}\,\mathcal{F}_{MN} \mathcal{F}^{MN} + \frac{\theta}{16\pi^2} \,\text{Tr}\, \mathcal{F}_{MN} \widetilde{\mathcal{F}}^{MN} +\mathcal{A}_M^a \mathcal{J}^{M\:a} + \cdots\:, \label{eq:AdS-dynamics} \end{align} where $\mathcal{A}_M \equiv \mathcal{A}_M^a t^a$ is a 4D gauge field with field strength $\mathcal{F}_{MN} \equiv \mathcal{F}_{MN}^a t^a$, $\mathcal{J}_M^a$ is the 4D current due to gauge-charged matter, $t^a$ are the generators of the gauge group, normalized as $\text{Tr}\, t^a t^b = \delta^{ab}/2$, and the ellipsis includes the 4D matter Lagrangian as well as 4D quantum gravity. We will not explicitly need the details of quantum gravity in this paper, but with it the $\text{AdS}_4$ theory has a $\text{CFT}_3$ holographic dual on Mink$_3$, which we will invoke (see Refs.~\cite{Aharony:1999ti, Sundrum:2011ic} for a review). \subsection{Standard ``Dirichlet'' Boundary Conditions} \label{sec:ads4-poincare-dirichlet} The standard $\text{AdS}_4$ boundary condition (b.c.) is \begin{align} \mathcal{A}_\mu^a(x^\nu,z) \xrightarrow[z \rightarrow 0]{} A_\mu^a(x^\nu)\:, \end{align} where $A^a_{\mu}(x^\nu)$ is the source for the dual $\text{CFT}_3$ conserved global current, $J_{\mu}^a(x^\nu)$. The 4D $\theta$-term introduces a subtlety, seen by the decomposition, \begin{align} \theta = \bar{\theta} + 2\pi \kappa,\:\:\bar{\theta} \in [0,2\pi),\:\: \kappa \in \mathbb{Z}. \label{eq:theta-decomposition} \end{align} 4D bulk physics only depends on the angle $\bar{\theta}$ as usual. For simplicity, in this paper we restrict attention to $\bar{\theta} =0$.
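To make the role of $\kappa$ explicit (a standard identity, stated here with $\widetilde{\mathcal{F}}^{MN} \equiv \frac{1}{2}\epsilon^{MNPQ}\mathcal{F}_{PQ}$ and up to orientation conventions), note that the $\theta$-term is a total derivative of the Chern-Simons current,
\begin{align}
\text{Tr}\, \mathcal{F}_{MN} \widetilde{\mathcal{F}}^{MN} = 2\, \partial_M K^M, \qquad K^M = \epsilon^{MNPQ}\, \text{Tr} \left( \mathcal{A}_N \partial_P \mathcal{A}_Q + \frac{2}{3}\, \mathcal{A}_N \mathcal{A}_P \mathcal{A}_Q \right),
\end{align}
so that, setting $\theta = 2\pi\kappa$, its integral reduces to a boundary term,
\begin{align}
\frac{\theta}{16\pi^2} \int_{\text{AdS}_4} d^4X \: \text{Tr}\, \mathcal{F}_{MN} \widetilde{\mathcal{F}}^{MN} = \frac{\kappa}{4\pi} \int_{\partial \text{AdS}_4} d^3x \: \epsilon^{\mu\nu\rho}\, \text{Tr} \left( A_\mu \partial_\nu A_\rho + \frac{2}{3}\, A_\mu A_\nu A_\rho \right),
\end{align}
a level-$\kappa$ CS action for the boundary value $A_\mu$.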
However, given the total derivative nature of the $\theta$-term, $\kappa$ survives as a $\partial \text{AdS}_4$ action for the source $A_\mu$, \begin{align} \mathcal{L}_{\text{Mink}_3} = \mathcal{L}_{\text{CFT}_3} + A_\mu^a J^{\mu\, a} + \frac{\kappa}{4\pi} \: \epsilon^{\mu\nu\rho} \: \text{Tr} \left( A_\mu \partial_\nu A_\rho + \frac23 A_\mu A_\nu A_\rho \right). \label{eq:CS-action} \end{align} This gives extra contact terms, consistent with 3D conformal invariance, in multi-current correlators at coincident points~\cite{Witten:2003ya}. For example, \begin{align} \left< T\left\{J_\mu^a(x)J_\nu^b(x')\cdots\right\} \right> \:\:\supset \:\: \kappa \: \epsilon_{\mu\nu\rho}\delta^{ab} \partial^\rho\delta^3(x-x') \left< \cdots \right>. \end{align} For vanishing source, $A = 0$, the boundary condition takes the ``Dirichlet'' (D) form $\mathcal{A}_\mu(x^\nu,z) \xrightarrow [z\rightarrow 0]{} 0$, or more gauge-invariantly, \begin{align} \mathcal{F}_{\mu\nu}^a (x^\nu, z) \xrightarrow [z \rightarrow 0]{} 0\:, \label{eq:Dbc} \end{align} since the 3D dual description is also gauge-invariant if we transform the source $A_\mu$ as a background 3D gauge field. \subsection{CS-gauged CFT$_3$ and Alternate Boundary Conditions} \label{sec:ads4-poincare-neumann} We define a modified $\widetilde{\text{CFT}}_3$ by simply elevating the source $A_\mu$ above to a fully dynamical field with the same action, Eq.~\eqref{eq:CS-action}. The $\kappa$ terms no longer represent contact terms for global current correlators of $\text{CFT}_3$, but rather a CS action for $A_\mu$, which then gauges the $\text{CFT}_3$ current $J_\mu$. Schematically, $\widetilde{\text{CFT}}_3 = \text{CS} + \text{CFT}_3$. The $\text{AdS}_4$ dual of $\widetilde{\text{CFT}}_3$ is given by the same bulk dynamics as for the original $\text{CFT}_3$ but with an alternate boundary condition~\cite{Witten:2003ya}. 
A large set of gauge-invariant boundary conditions respecting the $\text{AdS}_4$ isometries (3D conformal invariance) exists because one can replace the ``Dirichlet'' vanishing of ${\cal F}_{\mu \nu}$ at the boundary by vanishing of a more general linear combination of ${\cal F}_{\mu \nu}$ and $\widetilde{\cal F}_{\mu \nu}$. We see that the CS equations of motion corresponding to the action of Eq.~\eqref{eq:CS-action} are matched by an alternate boundary condition of the form, \begin{equation} \frac{\kappa}{2\pi} {\cal F}_{\mu \nu} + \frac{1}{g^2}\widetilde{\cal F}_{\mu \nu} \xrightarrow [z \rightarrow 0]{} 0, \label{eq:DandNbc} \end{equation} because of the standard holographic matching \begin{equation} 2 \widetilde{\cal F}_{\mu \nu} \equiv \epsilon_{\mu \nu \rho z} {\cal F}^{z \rho} \xrightarrow [z \rightarrow 0]{} 2g^2\,\epsilon_{\mu \nu \rho} J^{\rho}. \end{equation} In the simplest case, $\kappa = 0$, the alternate boundary condition is just a gauge-invariant version of the ``Neumann'' (N) boundary condition: \begin{align} 2 \widetilde{\mathcal{F}}_{\mu\nu} \equiv \epsilon_{\mu\nu\rho z}\mathcal{F}^{z \rho} \xrightarrow [z \rightarrow 0]{} 0\:, \label{eq:Nbc} \end{align} as is clear in axial gauge $\mathcal{A}^a_z = 0$, \begin{align} \mathcal{F}_{z \rho} = \partial_z \mathcal{A}_\rho \xrightarrow [z \rightarrow 0]{} 0\:. \label{eq:Nbc-axialGauge} \end{align} \section{Kac-Moody AS from CS Structure} \label{sec:KM-from-MS} In this section we consider the above AdS$_4$ gauge theory ($+$ quantum gravity) with the alternate boundary condition, or equivalently in 3D, $\widetilde{\text{CFT}}_3 \equiv$ CS $+$ CFT$_3$, with level $\kappa$. 3D CS gauge theory coupled to matter (provided here by $\text{CFT}_3$) describes relativistic (non)-abelian Aharonov-Bohm type effects between separated charges (e.g. see Ref.~\cite{Tong:2016kpv} for a review), thereby providing charged matter with quantum ``topological hair''.
This is manifest already in the CS Gauss Law constraint ($A^a_0$ equation of motion), \begin{align} \frac{\kappa}{2\pi}\, F_{xy}^a = J_0^a\:, \end{align} where $F_{\mu\nu}^a$ is the field strength of $A$. Outside the support of the charge density $J_0$, $F_{xy} = 0$, but spatial Wilson loops (as seen by test charges) here are non-trivial when enclosing charge $J_0$, as in Fig.~\ref{fig:CS-as-AB-effect}. \begin{figure}[h] \centering \includegraphics[width=.7\linewidth]{CS.pdf} \caption{\small{Non-trivial Wilson loops ${\cal C}$ enclosing charge density, giving rise to Aharonov-Bohm type effects on test charges.}} \label{fig:CS-as-AB-effect} \end{figure} Related to the topological nature of their Aharonov-Bohm effects, CS structure on 3D spacetimes with a 2D boundary can be mapped to WZW 2D current algebras, exhibiting Kac-Moody asymptotic symmetries at the 2D boundary~\cite{Witten:1988hf, Elitzur:1989nr, Witten:1991mm, Gukov:2004id}. In the present context however, CS lives on $\text{Mink}_3$, with no finite 2D boundary. But from the canonical viewpoint the state wavefunctional, $\Psi$, at some fixed time, say $t=0$, {\it does} exhibit Euclidean signature WZW/KM structure on the spatial $x-y$ plane at that time, the relevant Ward identities supplied by Gauss' Law~\cite{Witten:1988hf}. One can think of $\Psi(t=0)$ as given by a $\text{CS}$ + $\text{CFT}_3$ path integral on the earlier half of $\text{Mink}_3$, $t < 0$, a spacetime with 2D boundary $t=0$. \subsection{Gauss Law constraints on canonical CS fields} \label{subsec:GaussLawConstraint} To review this, we introduce complex coordinates, \begin{equation} u \equiv x + i y, ~ ~ \bar{u} \equiv x - i y, \end{equation} in which Gauss' Law ($A_0$ equation of motion) reads \begin{align} \left( \partial_{\bar{u}} j^a - 2i\kappa\,\partial_u A_{\bar{u}}^a - f^{abc} j^b A_{\bar{u}}^c \right) \Psi[A_{\bar{u}}] = 2\pi J^a_0 \Psi[A_{\bar{u}}].
\label{eq:gauss-law} \end{align} To explain our notation: from the CS Lagrangian of Eq.~\eqref{eq:CS-action} we see that (after integrating out $A_0^a$) $A_u$ and $A_{\bar{u}}$ are canonically conjugate. Here, we choose to work in $A_{\bar{u}}$ field-space, and denote a (non-canonically normalized, for later convenience) conjugate field-momentum by \begin{align} j^a(u,\bar{u}) \equiv i\pi \frac{\partial \mathcal{L}_{\text{CS}}}{\partial \dot{A}^a_{\bar{u}}} = 2i\kappa \, A_u^a. \end{align} The wavefunctional $\Psi$ is taken to depend on $A_{\bar{u}}$ (coherent state representation) and the CFT fields. At the quantum level the conjugate field-momentum is then given by \begin{equation} j^a(u,\bar{u}) = i\pi \frac{\delta}{\delta A_{\bar{u}}^a(u, \bar{u})}. \end{equation} The quantum Gauss' Law has the form of a functional differential equation that effectively determines the $A_{\bar{u}}$-dependence of the wavefunctional in terms of the matter CFT state. \subsection{Holomorphic 2D WZW current and KM symmetry from CS} \label{subsec:2DWZWCurrent} For simplicity, we begin by exploring $\Psi$ at $A_{\bar{u}} = 0$ and for the special case of the CFT state consisting only of pointlike disturbances at $t=0$, \begin{align} \Psi \propto \prod_{\alpha=1}^n \mathcal{O}_\alpha(u_\alpha,\bar{u}_\alpha)\left|0\right>\:, \end{align} where the $\mathcal{O}_\alpha$ are local operators. We discuss more general $A_{\bar{u}}$ below, and more general CFT states in the next subsection. For the special state above, Gauss' Law reduces to \begin{align} \partial_{\bar{u}} j^a(u,\bar{u}) \Psi [ A_{\bar{u}} = 0 ]\: = 2\pi\,\sum_{\alpha=1}^n T_{(\alpha)}^a \delta^2 (u - u_\alpha) \Psi [ A_{\bar{u}} = 0 ]\: , \label{eq:ward-identity} \end{align} where $T^a_{(\alpha)}$ is the representation of the (non-)abelian generator acting on the particular local CFT operator ${\cal O}_{\alpha}(u_{\alpha}, \bar{u}_{\alpha})$, giving its charge.
This equation can be integrated\footnote{We are assuming the wavefunctional is a well-behaved function of $A_{\bar{u}}$ at infinity, so that we do not have to include an analytic function of $u$ as integration constant in RHS of Eq.~\eqref{eq:IntegratedGaussLaw}.} to give \begin{align} j^a(u,\bar{u})\Psi[A_{\bar{u}} = 0 ] = \sum_\alpha \frac{T_{(\alpha)}^a}{u-u_\alpha}\Psi[A_{\bar{u}} = 0]\:, \label{eq:IntegratedGaussLaw} \end{align} using the identity $\partial_{\bar{u}} \left(1/(u - u_{\alpha})\right) = 2\pi\,\delta^2(u - u_{\alpha})$. From this we can then extract a 2D ``OPE'', matching that of a standard holomorphic WZW current with a charged operator in 2D Euclidean field theory (e.g. see Ref.~\cite{DiFrancesco:1997nk} for a review), \begin{align} j^a(u,\bar{u})\mathcal{O}_\alpha (u_\alpha, \bar{u}_\alpha) \:\: \xrightarrow[u\rightarrow u_\alpha]{} \:\: \frac{T_{(\alpha)}^a\mathcal{O}_\alpha(u_\alpha, \bar{u}_\alpha)}{u-u_\alpha}. \end{align} Next, we begin with non-vanishing $A_{\bar{u}} $ and act on Gauss' Law with the operator $j^b(u', \bar{u}') \equiv i\pi \delta/\delta A^b_{\bar{u}}(u', \bar{u}')$, and only then set $A_{\bar{u}} = 0$: \begin{align} \left[ \kappa \: \partial_u \delta^2(u-u')\delta^{ab} + \frac{1}{2\pi} \partial_{\bar{u}} j^a j^{b'} -\frac{i}{2} f^{abc} \delta^2(u-u') j^c \right] \Psi = j^b(u',\bar{u}')J_0^a(u,\bar{u})\Psi\:. \end{align} We consider $u$ away from any CFT local operators at $u_{\alpha}$ (within $\Psi$), so the right-hand side is non-singular in $u - u'$. The left-hand side can again be integrated, using the identity $-\partial_{\bar{u}}\left( 1/(u - u')^2\right) = \partial_{\bar{u}} \partial_u \left(1/(u - u')\right) = 2\pi\,\partial_u \delta^2(u - u')$, to give the $j j'$ OPE, \begin{align} j^a(u,\bar{u}) j^b(u',\bar{u}') \:\: \xrightarrow[u\rightarrow u']{} \:\: \frac{\kappa}{\left(u - u'\right)^2}\delta^{ab} + \frac{if^{abc}}{2(u-u')}j^c\:. 
\label{eq:jj-OPE-Abelian} \end{align} Choosing $u'=0$, the 2D holomorphic current can be expanded in a Laurent series of KM charges \begin{align} j^a(u) \equiv \sum_m \frac{Q_m^a}{u^{m+1}}\:. \end{align} Plugging this into the OPE and interpreting the result in standard 2D Euclidean radial quantization gives the KM symmetry algebra, \begin{align} \left[ Q_m^a, Q_n^b \right] = \kappa \: m \:\delta^{ab}\: \:\delta_{m,-n} + if^{abc}\: Q^c_{m+n}\:, \end{align} where the central extension is provided by the CS level $\kappa$. Via $\text{AdS}_4/\text{CFT}_3$ duality, we then conclude that with alternate boundary condition, Eq.~\eqref{eq:DandNbc}, this CFT derivation of the Kac-Moody algebra structure translates to $\text{AdS}_4$ gauge theory. So far our derivation focused on the special CFT state with all charged local operators acting on the vacuum at the same time, $t=0$, dual to all charged lines in $\text{AdS}_4$ arriving at the boundary at the same time $t=0$. Below, we consider more general CFT/AdS states. \subsection{General CFT states and non-holomorphicity of WZW current} \label{subsec:generalCFTstates} More typical CFT states cannot be described by purely local disturbances of the vacuum, created by just local operators at $t=0$. Instead, we can think of them as follows. If we consider the CFT to have a large-$N$ type gauge-theoretic structure, it will contain CFT-gauge charged ``quarks'' also transforming under a global symmetry of the CFT, which is then gauged by CS. The state at $t=0$ will consist of CFT-gauge singlet combinations of these 3D ``quarks'' and ``gluons'', but the quarks in a minimal CFT-singlet will typically not all be localized at a single point, but rather dispersed to some extent in 2D space.
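As an aside, the KM bracket derived above can be stress-tested independently of its CS origin: the Jacobi identity must hold, with the central contributions cancelling by mode-number conservation and the operator contributions cancelling by the Jacobi identity for $f^{abc}$. The following sketch (our illustration only; the $su(2)$ structure constants and the value of the level are arbitrary choices) verifies this numerically over a window of modes:

```python
import numpy as np

KAPPA = 2.0  # illustrative level; any value works

# su(2) structure constants f^{abc} = epsilon^{abc} (an arbitrary choice)
f = np.zeros((3, 3, 3))
f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1.0
f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1.0

def bracket(x, y):
    """KM bracket of linear combinations of the Q^a_m and the center.

    x, y: dicts mapping (a, m) -> coefficient; the key 'K' labels the
    central element.  Implements
    [Q^a_m, Q^b_n] = kappa m delta^{ab} delta_{m,-n} K + i f^{abc} Q^c_{m+n}.
    """
    out = {}
    for ka, cx in x.items():
        for kb, cy in y.items():
            if ka == 'K' or kb == 'K':
                continue  # the center commutes with everything
            (a, m), (b, n) = ka, kb
            if a == b and m == -n:
                out['K'] = out.get('K', 0) + cx * cy * KAPPA * m
            for c in range(3):
                if f[a, b, c] != 0.0:
                    key = (c, m + n)
                    out[key] = out.get(key, 0) + cx * cy * 1j * f[a, b, c]
    return out

def add(*ds):
    out = {}
    for d in ds:
        for k, v in d.items():
            out[k] = out.get(k, 0) + v
    return out

# Jacobi identity [[X,Y],Z] + [[Y,Z],X] + [[Z,X],Y] = 0 on a window of modes
modes = range(-2, 3)
max_violation = 0.0
for a in range(3):
    for b in range(3):
        for c in range(3):
            for l in modes:
                for m in modes:
                    for n in modes:
                        X, Y, Z = {(a, l): 1.0}, {(b, m): 1.0}, {(c, n): 1.0}
                        tot = add(bracket(bracket(X, Y), Z),
                                  bracket(bracket(Y, Z), X),
                                  bracket(bracket(Z, X), Y))
                        viol = max((abs(v) for v in tot.values()), default=0.0)
                        max_violation = max(max_violation, viol)
print("max Jacobi violation:", max_violation)
```

The violation vanishes identically: the cyclic sum of central terms is proportional to $f^{abc}(l+m+n)\,\delta_{l+m+n,0}=0$, and the operator terms cancel by the $f$-Jacobi identity.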
From this fundamental $\text{CFT}_3$ perspective, our construction of $j$ will still be a holomorphic current, with simple poles at the locations of the 3D quarks at $t=0$, and the entire KM algebra and symmetry structure via Gauss' Law still follows straightforwardly. However, from the $\text{AdS}_4$ dual perspective individual CFT quarks are not explicitly described, rather the 4D description is an effective ``hadronic" description of the different CFT-gauge-singlet combinations of 3D quarks and gluons, in terms of which we only see a ``smeared" continuum approximation to the fundamentally pointlike quark CS-charges, with $J_0$ taking the form of the boundary limit of the 4D transverse electric field. Local CFT/boundary operators can still be used to interpolate the more general states, but they must be allowed to act {\it before} $t=0$ so that their disturbance of the vacuum can spread by $t=0$. This is dual to 4D particles created at the boundary at early times having moved off into the bulk of $\text{AdS}_4$ by $t=0$. We illustrate the nature of this smearing in the case of abelian CS symmetry. The discrete sum over CS-charge locations in Eq.~\eqref{eq:ward-identity} is more generally replaced by the charge density $J_0$ as in Eq.~\eqref{eq:gauss-law}, so that $j$ in Eq.~\eqref{eq:IntegratedGaussLaw} is replaced by a ``smeared" integral over poles, \begin{equation} j = \int d^2 u' \frac{J_0(u', \bar{u}')}{u - u'}, \end{equation} rather than the discrete sum of poles that is more familiar from standard CS/WZW contexts. Nevertheless, we know from the CFT quark perspective that the KM symmetry structure is fully intact for general states. Even at the smeared level of description, the meaning of the KM charges can be discerned. For example, if we consider a state at $t=0$ with some finite region of support for $J_0$, then $j$ is holomorphic outside this region. 
If the support of $J_0$ excludes the origin, we can expand for small $u$, \begin{equation} j = - \sum_{n\geq0} \int d^2 u' \frac{J_0(u', \bar{u}') u^n}{u'^{n+1}}, \end{equation} corresponding to KM charges as moments of the charge distribution, \begin{equation} Q_n = - \int d^2 u' \, J_0(u', \bar{u}')\, u'^{\,n}, ~ n < 0, \end{equation} matching the coefficient of $u^{-n-1}$ above under the relabeling $n \rightarrow -n-1$. We can also expand for large $u$ compared to the support of $J_0$, \begin{equation} j = \sum_{n\geq0} \int d^2 u' \frac{J_0(u', \bar{u}') u'^n}{u^{n+1}}, \end{equation} thereby identifying effective KM charges, \begin{equation} Q_n = \int d^2 u' J_0(u', \bar{u}') u'^n, ~ n \geq 0. \end{equation} In later sections we will discuss ``smeared'' KM structure and associated memory effects in the context of $\partial \text{AdS}_4$ correlators with standard Dirichlet boundary conditions, which more closely parallel features of the Mink$_4$ S-matrix and memory effects. Nevertheless, the above features of KM structure from the canonical wavefunctional viewpoint for (the holographic dual of) alternate boundary conditions are already somewhat reminiscent of $\text{Mink}_4$. The 2D KM current construction in $\text{Mink}_4$ gauge theory has simple poles at angular locations of charged particles arriving at lightlike infinity, ${\cal I}^+$. But here too this simple pole structure can be smeared out if the charged particles instead arrive at timelike infinity~\cite{Strominger:2017zoo, Kapec:2015ena, Campiglia:2015qka}. However, in $\text{Mink}_4$ the final destination of charged particles is determined by their 4D mass, massless charges automatically arriving at ${\cal I}^+$ and massive charges at timelike infinity. In this sense, the simple pole structure in $\text{Mink}_4$ is more readily arranged, by restricting to a final state with only massless charges. By contrast in $\text{AdS}_4$, the restricted states at $t=0$ yielding simple pole structure do not follow automatically by restricting the 4D particle species/masses of the final state.
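These moment formulas are simple to test in a discretized setting, with the smeared $J_0$ replaced (purely for illustration) by a finite set of point charges $q_i$ at positions $u_i$ in an annulus, so that $j(u)=\sum_i q_i/(u-u_i)$ and the moment integrals become discrete sums:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete charge distribution on an annulus 0.5 <= |u_i| <= 1,
# so the support excludes both the origin and infinity.
n_charges = 20
u_i = rng.uniform(0.5, 1.0, n_charges) * np.exp(1j * rng.uniform(0, 2*np.pi, n_charges))
q_i = rng.uniform(-1.0, 1.0, n_charges)

def j_exact(u):
    """Smeared current j(u) = sum_i q_i / (u - u_i)."""
    return np.sum(q_i / (u - u_i))

N = 60  # truncation order of the Laurent series

# Large-|u| expansion: j = sum_{n>=0} Q_n / u^{n+1} with Q_n = sum_i q_i u_i^n
u_out = 2.0 + 1.0j
Q_pos = np.array([np.sum(q_i * u_i**n) for n in range(N)])
j_out = np.sum(Q_pos / u_out**(np.arange(N) + 1))

# Small-|u| expansion: j = sum_{m<0} Q_m / u^{m+1} with Q_m = -sum_i q_i u_i^m
u_in = 0.1 - 0.05j
Q_neg = np.array([-np.sum(q_i * u_i**m) for m in range(-1, -N - 1, -1)])
j_in = np.sum(Q_neg * u_in**(np.arange(N)))  # m = -n-1 contributes u^n

err_out = abs(j_out - j_exact(u_out))
err_in = abs(j_in - j_exact(u_in))
print(err_out, err_in)
```

Both truncated Laurent series reproduce $j$ to machine precision in their respective domains of convergence (outside and inside the annulus).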
Amusingly, the holographic perspective reveals that there is indeed a correlation between the mass of charges and the robustness of the simple pole structure of the 2D KM currents, but the correlation is given in terms of 3D holographic masses! Furthermore, it is for the massive case that the simple pole structure is robust and for the massless case that it is not. In CS theories with massive 3D charged species, the restriction to states with a few pointlike charged excitations at $t=0$ is automatic given a finite energy ``budget'', yielding simple-pole structure of $j$ generally. But a $\text{CFT}_3$ consists instead of 3D-massless (and strongly-coupled) ``quarks'' as discussed above, so a typical state is a collection of indefinite numbers of these ``quarks''. \section{AS from 4D Electric-Magnetic Duality/3D Mirror Symmetry} \label{sec:EMduality-from-MirrorDuality} We have seen that the alternate AdS$_4$ boundary condition, dual to the modified $\widetilde{\text{CFT}}_3$, explicitly contains CS and hence CS/WZW-related KM structure. But this analysis seems to exclude the case of the standard AdS$_4$ boundary condition, dual to the isolated original $\text{CFT}_3$. The remainder of this paper is devoted to showing different senses in which even this original unmodified theory does connect to Kac-Moody asymptotic symmetries. In this section, we will show that in the case of {\it abelian} AdS$_4$ gauge symmetry there is a full CS and Kac-Moody asymptotic symmetry structure arising from the standard boundary condition, when it is imposed on the 4D gauge theory in suitable electric-magnetic dual variables. At the holographic level, this shows how the standard and modified CFTs transform into one another via 3D mirror symmetries. The most familiar form of electric-magnetic duality arises from the invariance of pure Maxwell theory under \begin{align} \mathcal{F} \rightarrow \widetilde{\mathcal{F}},\: \widetilde{\mathcal{F}} \rightarrow -\mathcal{F}\:.
\end{align} More precisely, in the presence of charged matter it is described by a discrete duality transformation, $S$, which acts on states with electric charge $n g$ and magnetic charge $2 \pi m/g$ (where $n,m$ are integers by Dirac quantization) according to \begin{equation} S(n,m) = (m, -n). \end{equation} From the viewpoint of the 4D magnetic dual gauge field, $\widetilde{A}_M: ~ \widetilde{F}_{MN} = \partial_M \widetilde{A}_N - \partial_N \widetilde{A}_M$, the roles of the ``standard'' D and ``Neumann'' N boundary conditions are exchanged, as is clear from their gauge-invariant forms, Eq.~\eqref{eq:Dbc}, and Eqs.~\eqref{eq:Nbc}, \eqref{eq:Nbc-axialGauge}. That is, $ D \equiv \widetilde{N}, ~ N \equiv \widetilde{D}$. Electric-magnetic duality extends to a full $\text{SL}(2,\textbf{Z})$, generated by $S$ and $T$, where $T$ corresponds to the shift in the CP-violating parameter $\theta \rightarrow \theta + 2\pi$, another invariance of the bulk 4D physics. Witten has pointed out that general shifts in $\theta$ induce shifts in the spectrum of electric charges of states with non-zero magnetic charge. For the $2\pi \times {\rm integer}$ shift of $T$, this Witten effect~\cite{Witten:1979ey} corresponds to \begin{equation} T(n,m) = (n + m, m). \end{equation} In this way, $\text{SL}(2,\textbf{Z})$ duality exchanges ordinary electric charges with more general dyonic charges $(n,m)$. As we saw for the $S$ transformation above, the AdS boundary conditions are not invariant under the more general $\text{SL}(2,\textbf{Z})$ transformations, since they pick out the particular type of $(n,m)$ charge whose gauge field is given Dirichlet boundary condition, thereby defining the global current of the dual CFT. The standard boundary condition picks out ordinary electric charges $(1,0)$ of course.
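This $\text{SL}(2,\textbf{Z})$ bookkeeping is elementary to verify in matrix form, acting on charge doublets $(n,m)^T$. The sketch below (our illustration only) checks $S^2 = -1$ (charge conjugation), the order-3 relation $(ST)^3 = 1$ that holds in this sign convention for $S$, and, as an example used shortly, the $TS$ image of an ordinary electric charge $(1,0)$:

```python
import numpy as np

# S(n, m) = (m, -n) and T(n, m) = (n + m, m) acting on charge doublets (n, m)^T
S = np.array([[0, 1],
              [-1, 0]])
T = np.array([[1, 1],
              [0, 1]])
I2 = np.eye(2, dtype=int)

S2 = S @ S                                   # = -1: charge conjugation
ST_cubed = np.linalg.matrix_power(S @ T, 3)  # = +1 in this sign convention
ts_image = T @ S @ np.array([1, 0])          # TS acting on an electric charge

print(S2, ST_cubed, ts_image)
```

Here `ts_image` comes out as $(-1,-1)$, the dyonic charge singled out when standard boundary conditions are imposed after a $TS$ transformation.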
For a general $(n,m)$ the boundary conditions involve an obvious linear combination of the Dirichlet and Neumann boundary conditions, \begin{align} g n \mathcal{F}_{\mu \nu} + \frac{2 \pi m}{g}\widetilde{\mathcal{F}}_{\mu \nu} \xrightarrow[z\rightarrow 0]{} 0. \end{align} $\text{SL}(2,\textbf{Z})$ thereby incarnates as 3D mirror symmetry, transforming between the different CFTs given by these different boundary conditions. For example, if we first apply the $TS$ transformation to the 4D gauge theory and {\it then} impose standard boundary conditions, we get Dirichlet boundary condition applied to the gauge field that couples to $TS(1,0) = (-1,-1)$ charges, \begin{align} g \mathcal{F}_{\mu \nu} + \frac{2 \pi }{g}\widetilde{\mathcal{F}}_{\mu \nu} \xrightarrow[z\rightarrow 0]{} 0. \end{align} From the discussion of subsection~\ref{sec:ads4-poincare-neumann}, we see that this corresponds to a CS gauging of the original $\text{CFT}_3$, with level $\kappa = 1$. In this way, $\text{SL}(2,\textbf{Z})$ equates the standard boundary conditions of $\text{AdS}_4$ gauge theory with alternative boundary conditions, which then manifest Kac-Moody asymptotic symmetries as described earlier. \section{Alternate/$\widetilde{\text{CFT}}$ Correlators from ``Holographic Soft Limit''} \label{sec:MirrorDual-from-SoftLimit-abelian} We now turn to the sense in which the standard $\text{AdS}_4$ Dirichlet boundary condition, dual to $\text{CFT}_3$ in isolation, has implicit CS structure and AS in the original ``electric'' variables once we include a natural $\text{AdS}^{\text{Poincare}}$ generalization of the notion of ``soft limit'', applying whether the 4D gauge theory is abelian or non-abelian. This form of CS/AS represents our closest analog of the $\text{Mink}_4$ AS analysis developed in Ref.~\cite{Cheung:2016iub}, and also builds on the $\text{AdS}_4^{\text{Poincare}}$ discussion of Ref.~\cite{Mishra:2017zan}.
We begin with abelian gauge theory for simplicity in this section, and extend to non-abelian gauge theory in the next. \subsection{Fixed Helicity $\partial \text{AdS}_4$ Correlators} In $\text{Mink}_4$ an S-matrix amplitude with an external photon takes the form, \begin{align} \int_{\text{Mink}_4} d^4 X \mathcal{A}_M \mathcal{J}^M \:, \qquad \mathcal{A}_M(X) = \epsilon_M^\pm(q) e^{iq \cdot X}, \end{align} where $\mathcal{J}$ represents the on-shell current consisting of the rest of the amplitude with amputated photon leg, and $\epsilon_M^{\pm}(q)$ is the polarization vector for $\pm$ helicity, satisfying \begin{align} q^2 = q \cdot \epsilon^\pm = \epsilon^\pm \cdot \epsilon^\pm = 0, \epsilon^\pm \cdot \epsilon^\mp = 1\:. \label{eq:onShell} \end{align} In $\text{AdS}_4$ we compute boundary correlators rather than an S-matrix, \begin{align} \int_{\partial \text{AdS}_4} d^3 x A^\mu (x) \left< T\{ J_\mu^{\text{CFT}}(x)\cdots \}\right> = \int_{\text{AdS}_4} d^4 X \mathcal{A}_M(X) \mathcal{J}^M(X)\:, \quad \mathcal{A}_\mu(x,z) \xrightarrow[z\rightarrow 0]{} A_\mu (x)\:, \label{eq:AdS4-corr} \end{align} where $\mathcal{A}_M$ satisfies the AdS Maxwell's equations. Given the obvious Weyl invariance of the Maxwell action and the Weyl equivalence of AdS$_4$ to {\it half} of $\text{Mink}_4$, \begin{align} ds^2_{\text{AdS}_4} \:\: \underset{\text{Weyl}}{\sim} \:\: dt^2 - dx^2 - dy^2 - dz^2 \: , \quad z >0\:, \end{align} $\text{Mink}_4$ LSZ wavefunctions for external photons, $\mathcal{A}^\pm_M(X) = \epsilon_M^\pm(q) e^{iq \cdot X} $, are also valid choices for AdS correlators. This corresponds to a $\text{CFT}_3$ source, \begin{align} A^\pm_\mu (x) = \epsilon^\pm_\mu(q) e^{i\hat{q} \cdot x}\:, \hat{q}\equiv \left( q_0, q_x, q_y\right)\,. \end{align} While $A, \mathcal{A}$ are complex, their real and imaginary parts define standard $\partial \text{AdS}/\text{CFT}$ correlators, and we are just considering their complex superposition. 
We choose to work in 4D axial gauge, $\epsilon_z = 0$. It is clear that $A$'s of the above form span all possible sources in $\text{Mink}_3$ with timelike 3-momentum, $\hat{q}$, given that $J$ is conserved (in momentum space, $\hat{q}\cdot J(\hat{q}) = 0$). We see that 4D helicity for massless photons matches a 3D ``helicity'' for timelike CFT sources. The different helicity sources satisfy Chern-Simons-Proca (CSP) equations: \begin{align} 2\epsilon^{\mu\nu\rho} \partial_\nu A_\rho = \pm \: m_3 A^\mu\:, \quad m_3 \equiv q_z, \label{eq:CSP-EOM} \end{align} for $\pm$ helicity. Here, $m_3$ is the mass Casimir invariant of $\text{Mink}_3$, that is $m_3^2 = \hat{q}^2 \equiv \hat{q}_{\mu} \hat{q}^{\mu}$ for momentum eigenstates, so that $m_3 = q_z$ by Eq.~\eqref{eq:onShell}. This has a similar structure to the 3D CSP form of helicity-cut $\text{Mink}_4$ S-matrix amplitudes derived in Ref.~\cite{Cheung:2016iub}, where $m_3$ was the Casimir invariant of a Euclidean AdS$_3$ foliation of (a future light cone in) $\text{Mink}_4$. \subsection{The ``holographic soft limit'' of $\partial \text{AdS}_4$ correlators} In $\text{Mink}_4$, it was shown that the conventional (leading) soft photon limit of amplitudes, captured by the Weinberg Soft Theorems, is equivalent to the limit $m_3 \rightarrow 0$. Here, we simply translate the analogous definition of ``soft limit'' to the $\text{AdS}_4$ context, as vanishing CSP mass, $m_3 \rightarrow 0$, arriving at the (sourceless) CS equation, \begin{align} \epsilon^{\mu\nu\rho} \partial_\nu A_\rho = 0\:, \quad \partial_\mu A^\mu = 0\:. \label{eq:CS-From-CSP-EOM} \end{align} We also effectively have a Lorentz-gauge fixing condition as can be seen by taking the divergence of the CSP Eq.~\eqref{eq:CSP-EOM} for $m_3 \neq 0$ followed by $m_3 \rightarrow 0$. This gives rise to a ``soft'' $\partial \text{AdS}/\text{CFT}$ correlator, Eq.~\eqref{eq:AdS4-corr}, where \begin{align} \mathcal{A}_{\mu}(x,z) = A_\mu(x)\:, \quad \mathcal{A}_z = 0\:.
\end{align} This follows because $A$ is pure gauge in $\text{Mink}_3$ since $F = 0$ by Eq.~\eqref{eq:CS-From-CSP-EOM}, and therefore this $\mathcal{A}_M$ is pure gauge in $\text{AdS}_4$, hence trivially satisfying 4D Maxwell's equations and $\mathcal{A}_{\mu}(x,z) \xrightarrow [z \rightarrow 0]{} A_{\mu}(x)$. From the 4D viewpoint, unlike the standard notion of ``soft'' in Minkowski spacetime, it is (only) the holographically emergent direction's $z$-dependence, rather than $t$-dependence (overall energy), which is softened.\footnote{In both Mink$_4$ and AdS$_4$ it is important that the helicity is fixed as we take the soft limit.} The above 4D pure gauge configurations in the holographic soft limit are the ``large'' gauge transformations at the root of AS, which we now derive. It is convenient to focus on $\text{CFT}_3$ correlators of the form, \begin{align} \left< 0| T \left\{ e^{i\int d^3 x \: A_\mu(x)J^\mu(x)} \, \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) \right\} | {\rm in} \right>, \label{eq:generalAbelianAdSCorrelators} \end{align} as depicted in Fig.~\ref{fig:UnequalTimeScatteringAdS-Abelian}, where $A_\mu(x)$ is the source for ``soft'' photons, the ${\cal O}_{\alpha}$ are arbitrary local CFT operators with $U(1)$ charges $Q_{\alpha}$ (including possibly $J^{\mu}$ itself, corresponding to $\partial$AdS correlators for 4D photons which are ``hard'' in our sense), and the $\left|\text{in}\right>$ represents a generic initial CFT state.
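As a sanity check on the helicity/CSP structure of Eq.~\eqref{eq:CSP-EOM}, one can apply the 3D curl directly to the two boundary polarizations. The sketch below fixes its own explicit conventions ($\eta = \text{diag}(+,-,-)$, $\epsilon_{txy}=+1$, momentum along $z$), which we have not matched to the text, so the overall eigenvalue normalization is convention-dependent; the invariant content is that the two helicities are curl eigenvectors with opposite eigenvalues proportional to $q_z$:

```python
import numpy as np

# Our conventions (not fixed to match the text): eta = diag(+,-,-),
# epsilon_{txy} = +1 with lower indices, then raised with eta.
eta = np.diag([1.0, -1.0, -1.0])
eps_lower = np.zeros((3, 3, 3))
for (a, b, c), s in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                     (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps_lower[a, b, c] = s
eps_upper = np.einsum('ad,be,cf,def->abc', eta, eta, eta, eps_lower)

qz = 1.7                             # m_3 = q_z
omega = qz                           # null 4-momentum (omega, 0, 0, qz)
q_low = np.array([omega, 0.0, 0.0])  # hat-q with lowered 3D index

def curl(A_up):
    """2 eps^{mu nu rho} d_nu A_rho on a plane wave (d_nu -> i qhat_nu)."""
    return 2j * np.einsum('mnr,n,r->m', eps_upper, q_low, eta @ A_up)

eigs = {}
for sign in (+1, -1):                   # +/- helicity
    pol = np.array([0.0, 1.0, sign * 1j]) / np.sqrt(2.0)
    out = curl(pol)
    lam = out[1] / pol[1]               # candidate eigenvalue
    assert np.allclose(out, lam * pol)  # polarization is a curl eigenvector
    eigs[sign] = lam
print(eigs)
```

In these conventions the eigenvalues come out as $\mp 2 q_z$; the relative factor against Eq.~\eqref{eq:CSP-EOM} reflects the unfixed $\epsilon$/metric normalization, not the physics.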
\begin{figure} \centering \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.9\linewidth]{UnequalTimeScatteringAdS-Abelian.pdf} \caption{} \label{fig:UnequalTimeScatteringAdS-Abelian} \end{subfigure}\hfill% \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=.9\linewidth]{EqualTimeScatteringAdS-Abelian.pdf} \caption{} \label{fig:EqualTimeScatteringAdS-Abelian} \end{subfigure} \caption{\small{Typical $\partial \text{AdS}_4$ correlators involving 4D photons and matter particles, dual to $\text{CFT}_3$ correlators of the form Eq.~\eqref{eq:generalAbelianAdSCorrelators} involving the $U(1)$ current and other local operators. (a) corresponds to charged matter lines arriving at general times on the boundary, while (b) corresponds to the special case in which all charged matter arrives at $t=0$.}} \label{fig:AdS-Abelian-Scattering} \end{figure} We write the pure gauge form of $A$ solving the (Lorentz-gauge) CS equations as \begin{align} A_\mu(x) = \partial_\mu \lambda(x)\:,\qquad \square_{\text{Mink}_3}\lambda(x) = 0. \end{align} We can specify a particular solution in terms of the ``initial'' value ($t=0$), $ \bar{a}(u, {\bar{u}}) \equiv A_{\bar{u}}(u, \bar{u}, t=0) $, first determining \begin{align} \lambda( u, \bar{u}, t=0) = \int \frac{d^2 u'}{2\pi} \, \frac{\bar{a}(u',\bar{u}')}{u-u'}\, , \label{eq:lambda-zeroTime} \end{align} and then uniquely extending to all $t$ once we impose only positive frequencies (absorbing source) in $\lambda(u, \bar{u}, t)$, \begin{align} \lambda(q_u, q_{\bar{u}}, t) &= \lambda(q_u, q_{\bar{u}}, t = 0) \, e^{-2i\sqrt{q_u q_{\bar{u}}}\: t}. 
\label{eq:lambda-nonZeroTime} \end{align} By the CFT current Ward identity, \begin{equation} \partial_{\mu} J^{\mu} = -\sum_{\alpha} Q_{\alpha} \delta^3(x - x_{\alpha}), \label{eq:CFT-Ward-Identity} \end{equation} we find \begin{align} i\int d^3 x A_\mu(x) J^\mu (x) = i\sum_\alpha Q_{\alpha} \lambda(x_{\alpha})\:. \label{eq:A-dot-J} \end{align} \subsection{2D Holomorphic Abelian WZW Current from Holographic Soft Limit} Let us focus first on the special case that all the ${\cal O}_{\alpha}$ are simultaneous, $t_{\alpha} = 0$, as depicted in Fig.~\ref{fig:EqualTimeScatteringAdS-Abelian}, so that by Eqs.~\eqref{eq:A-dot-J}, \eqref{eq:lambda-zeroTime}, \begin{align} i\int d^3 x A_\mu(x) J^\mu (x) = -i\int \frac{d^2 u}{2\pi} \, \bar{a}(u, \bar{u}) \sum_\alpha \frac{Q_\alpha}{u - u_\alpha} \: . \end{align} Thinking of $\bar{a}(u, \bar{u})$ as a source defining a 2D current $j \equiv 2\pi i \,\delta/\delta \bar{a}(u, \bar{u})$, we arrive at a 2D holomorphic form for $j$, \begin{align} \left< 0| j(u, \bar{u}) \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) | {\rm in} \right> = \sum_\alpha \frac{Q_\alpha}{u - u_\alpha} \left< 0| \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) | {\rm in} \right>. \label{eq:ope-Abelian} \end{align} The simple pole structure of $j$ is clearly very similar to that observed in soft limits of the Mink$_4$ S-matrix. We can straightforwardly obtain multiple-$j$ correlators since the source is simply exponentiated, but there is no central extension singularity in $jj$ correlators as their insertion points coincide, for reasons further discussed in the next section.
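The positive-frequency extension in Eq.~\eqref{eq:lambda-nonZeroTime} uses the $\text{Mink}_3$ dispersion relation $\omega = 2\sqrt{q_u q_{\bar{u}}}$, which follows from $\square_{\text{Mink}_3} = \partial_t^2 - 4\,\partial_u \partial_{\bar{u}}$. A short symbolic check (our illustration only):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
qu, qub = sp.symbols('q_u q_ub', positive=True)

u = x + sp.I * y
ubar = x - sp.I * y

# Positive-frequency pure-gauge mode with omega = 2 sqrt(q_u q_ubar)
lam = sp.exp(sp.I * (qu * u + qub * ubar) - 2 * sp.I * sp.sqrt(qu * qub) * t)

# 3D wave operator in Cartesian coordinates: box = d_t^2 - d_x^2 - d_y^2
box_lam = sp.diff(lam, t, 2) - sp.diff(lam, x, 2) - sp.diff(lam, y, 2)
residual = sp.simplify(box_lam / lam)
print(residual)
```

The residual vanishes, confirming that modes with this frequency solve $\square_{\text{Mink}_3}\lambda = 0$.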
In the general case of non-simultaneous $t_{\alpha}$ (Fig.~\ref{fig:UnequalTimeScatteringAdS-Abelian}), Eq.~\eqref{eq:A-dot-J} gives a 2D current defined by source $\bar{a}$, \begin{align} j(u, \bar{u}) = -2\pi\,\sum_\alpha Q_\alpha \frac{\delta \lambda(\bar{a}, x_\alpha)}{\delta \bar{a}(u, \bar{u})}, \label{eq:j-non-simultaneous} \end{align} but this is no longer holomorphic, reminiscent of the case of massive charges in the Mink$_4$ S-matrix. We explore this non-holomorphic structure more closely in Section~\ref{sec:memory} in the context of the memory effect. \section{Non-abelian Generalization of Holographic Soft Limit and AS} \label{sec:MirrorDual-from-SoftLimit-nonAbelian} There is a natural generalization of ``soft'' to (tree-level) {\it non}-Abelian AdS$_4$ gauge theory. Generalizing Eq.~\eqref{eq:AdS4-corr}, we consider a 4D ``soft'' field ${\cal A}^{a}_{M}$ which is a complex solution to the 4D Yang-Mills equations, coupled to a 4D gauge current ${\cal J}^a_M$ representing other charged matter and ``hard'' gluons. The boundary limit ${\cal A}^{a}_{\mu} \xrightarrow[]{z \rightarrow 0} A^a_{\mu}$ of such a complex solution simply corresponds to a complex source $A_{\mu}$ for $J^{\text{CFT}}_{\mu}$ and its associated CFT correlators. When there are multiple ``soft'' gluons, we must generalize the fixing of helicity of ``soft'' photons in the Abelian case in a manner that is compatible with Yang-Mills self-couplings. This is given by requiring the {\it complex} ${\cal A}^{a}_{M}$ to be self-dual (or alternatively, anti-self-dual): \begin{equation} \frac12 \epsilon^{\mu \nu \rho} {\cal F}^a_{\mu \nu}(x,z) = i {\cal F}^{\rho z~a}(x,z) \quad \underset{\text{axial gauge}}{=} \quad i \partial_z {\cal A}^{a~\rho}(x,z), \end{equation} where ${\cal F}$ is the full non-abelian 4D field strength.
This is closely analogous to what is seen in 4D Minkowski spacetime, where the non-abelian soft ``branches'' attached to a hard scattering process are self-dual when all its external soft gluons have positive helicity~\cite{Cheung:2016iub}. In axial-gauge, the holographic soft limit is again that in which ${\cal A}^{a}_{\rho}$ is $z$-independent. Self-duality then implies the vanishing of all of ${\cal F}$, so that ${\cal A}$ is pure-gauge. The CFT source is simply given by $A_{\mu}^a \equiv {\cal A}^a_{\mu}(x,z \rightarrow 0) = {\cal A}^a_{\mu}(x)$, so that it satisfies a (sourceless) non-Abelian CS equation, \begin{align} \epsilon^{\mu\nu\rho} F_{\nu \rho}^a(x) = 0, \label{eq:CS-EOM-NonAbelian} \end{align} again closely analogous to the Mink$_4$ analysis. More precisely, there will also be an effective 3D gauge-fixing condition that results from the approach to the soft limit, but it will be more complicated than the simple 3D Lorentz gauge of the Abelian case, Eq.~\eqref{eq:CS-From-CSP-EOM}. As for the Abelian case, this condition will not be relevant for the special case of {\it equal-time} correlators of CFT local operators, to which we now turn. \subsection{2D Holomorphic Non-abelian WZW Current from Holographic Soft Limit} The vanishing of the non-Abelian field strength of the source in the soft limit has the solution, \begin{align} iA_\mu(x) = e^{-i\lambda(x)}\partial_\mu e^{i\lambda(x)}\:, \quad \lambda \equiv \lambda^a t^a, \: A_\mu \equiv A_\mu^a t^a\:, \label{eq:pure-gauge-non-abelian} \end{align} where the $\lambda^a(x)$ are {\it complex} gauge transformation fields, reflecting the complex nature of $A_{\mu}^a$ (necessary for Lorentzian self-dual gauge fields). 
Starting from the general correlator, \begin{align} \left< T \left\{ e^{i\int d^3 x \: A^a_\mu(x)J^{\mu a}(x)} \, \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) \right\} \right>\:, \end{align} we will again consider $\bar{a}^a(u, \bar{u}) \equiv A^a_{\bar{u}}(u, \bar{u}, t=0)$ as the independent variables behind our soft source $A_{\mu}(x)$, and define a 2D current \begin{align} j^a(u,\bar{u}) \equiv 2\pi i \,\frac{\delta}{\delta \bar{a}^a(u,\bar{u})}\:. \end{align} For single-$j$ correlators with equal-time ``hard'' operators, $t_{\alpha} = 0$, the non-Abelian structure is clearly irrelevant, and we arrive at the analog of Eq.~\eqref{eq:ope-Abelian} again, \begin{align} \left< 0| j^a(u, \bar{u}) \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) |{\rm in} \right> = \sum_\alpha \frac{T_{(\alpha)}^a}{u - u_\alpha} \left< 0| \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) |{\rm in} \right> \: . \label{eq:ope-NonAbelian} \end{align} Next we probe correlators $\langle j^a(u, \bar{u}) j^b (u', \bar{u}') \cdots \rangle$, to search for a non-abelian contribution to the $j j'$ 2D ``OPE''. This requires us to work to order $\bar{a}^2$. At first order in $\bar{a}$, we obviously have \begin{align} \lambda^{(1)\, a} ( u, \bar{u}, t=0) = \int \frac{d^2 u'}{2\pi} \: \frac{\bar{a}^a (u', \bar{u}')}{u-u'}\:, \end{align} as in the Abelian case. To second order, by Eq.~\eqref{eq:pure-gauge-non-abelian}, \begin{align} A_\mu^a(x) \approx \partial_\mu \lambda^{(1)\, a}(x) - \frac{1}{2}f^{abc}\lambda^{(1)\, b}(x) \partial_\mu \lambda^{(1)\, c}(x) + \partial_\mu \lambda^{(2)\, a}(x)\: .
\label{eq:A-to-second-order} \end{align} We can use the $\bar{u}$ component of this to solve for $\lambda^{(2)}(t=0)$, \begin{align} \partial_{\bar{u}}\lambda^{(2)\, a}(u, \bar{u}, t=0) &= \frac{1}{2} f^{abc}\lambda^{(1)\, b} (u, \bar{u}, t=0)\: \bar{a}^c(u, \bar{u}) \: , \end{align} from which we derive \begin{align} \lambda^{(2)\, a}(u, \bar{u}, t=0) &= \frac{1}{2} f^{abc} \int \frac{d^2 u'}{2\pi} \: \int \frac{d^2 u''}{2\pi} \: \frac{\bar{a}^b(u', \bar{u}') \, \bar{a}^c(u'',\bar{u}'')}{(u-u'')(u'' - u')} \: . \label{eq:lambda2} \end{align} In this way we see two types of non-abelian corrections enter into the typical $\partial \text{AdS}_4/\text{CFT}_3$ correlator compared to the abelian case, as depicted in Fig.~\ref{fig:ScatteringAdS-nonAbelian}. Of course there are non-abelian interactions in the 4D bulk, but we also have non-abelian corrections to the CFT ``softened" source $A^a_{\mu}$ when expressed in terms of the independent variables $\bar{a}^a$. \begin{figure} \centering \includegraphics[width=.5\linewidth]{ScatteringAdS-nonAbelian.pdf} \caption{\small{A typical $\partial \text{AdS}_4$ correlator for non-abelian AdS gauge theory, with all hard matter arriving at $t=0$. Note that there are both non-abelian bulk interactions and non-abelian corrections to the ``softened" source in terms of the independent variables $\bar{a}^a$. 
The leading source term $A^{(1)}$ is similar in form to the abelian case, while the next non-abelian correction $A^{(2)}$ is given by the last two terms in Eq.~\eqref{eq:A-to-second-order}.}} \label{fig:ScatteringAdS-nonAbelian} \end{figure} We see that Eq.~\eqref{eq:lambda2} can give rise to a non-trivial ``OPE'' divergence for coinciding $j$'s, so we drop $\lambda^{(1)}$ contributions to focus on that of $\lambda^{(2)}$: \begin{align} \int d^3 x \; A_\mu^a(x) J^{\mu\, a}(x) &\supset \int d^3 x \; \partial_\mu \lambda^{(2)\,a}(x) J^{\mu\, a}(x) \nonumber \\ &= - \int d^3 x \; \lambda^{(2)\,a}(x) \partial_\mu J^{\mu\, a}(x) = \sum_\alpha \lambda^{(2)\,a}(x_\alpha) T^a_{(\alpha)}\:. \end{align} Specializing to the simultaneous limit, $t_{\alpha} = 0$, \begin{align} \int d^3 x \; A_\mu^a(x) J^{\mu\, a}(x) &\supset \: \sum_\alpha \lambda^{(2)\,a}(u_\alpha, \bar{u}_\alpha, t_\alpha = 0) \, T^a_{(\alpha)} \nonumber \\ &= \frac{1}{2} f^{abc} \sum_\alpha \int \frac{d^2 u}{2\pi} \: \int \frac{d^2 u'}{2\pi} \: \frac{\bar{a}^b(u, \bar{u}) \, \bar{a}^c(u',\bar{u}')}{(u_\alpha-u')(u' - u)} \, T^a_{(\alpha)}\: . \end{align} We thereby derive, \begin{align} & \left< 0|T\left\{ j^a(u, \bar{u}) j^b(u', \bar{u}') \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) \right\} | {\rm in} \right> \nonumber \\ & \qquad \qquad \supset \:\: \frac{1}{2} f^{abc} \sum_\alpha T^c_{(\alpha)} \left\{ \frac{1}{(u_\alpha - u)(u- u')} - \frac{1}{(u_\alpha - u')(u'- u)} \right\} \left< 0| \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) | {\rm in} \right> \nonumber \\ & \qquad \qquad \underset{u' \rightarrow u}{\sim} \frac{f^{abc}}{u' - u} \sum_\alpha \frac{T_{(\alpha)}^c}{u-u_\alpha} \left< 0| \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n) | {\rm in} \right> \nonumber \\ & \qquad \qquad = \:\: \frac{f^{abc}}{u' - u} \left< 0| T\left\{j^c(u,\bar{u}) \mathcal{O}_1(x_1)\cdots \mathcal{O}_n(x_n)\right\} | {\rm in} \right>\: .
\end{align} In this sense, we have arrived at the Euclidean 2D KM ``OPE'', \begin{align} j^a(u,\bar{u})\;j^b(u',\bar{u}') \:\: \underset{u\rightarrow u'}{\sim} \:\: \frac{f^{abc}}{u-u'}\:j^c(u, \bar{u}), \end{align} but unlike the canonical Eq.~\eqref{eq:jj-OPE-Abelian} we see that we have vanishing central extension here! This absence of a central extension in AS from soft limits matches what is seen in 4D Minkowski spacetime. But as pointed out in Ref.~\cite{Cheung:2016iub}, it is closer to the truth to say that we have {\it infinite} central extension, as we review below. \subsection{Holographic Soft Limit as Portal from Standard to Alternate Theory} The structure of correlators of $j$ we see in the holographic soft limit with Dirichlet boundary condition precisely matches that found in Ref.~\cite{Mishra:2017zan} for alternate b.c in the $\kappa \rightarrow \infty$ limit, as shown there by simple $\kappa$-counting diagrammatic arguments. Here, we just give a heuristic argument for why this is so, based on the path integral for {\it dynamical} CS coupled to the CFT (dual to alternate boundary condition), \begin{align} \int \mathcal{D} A_\mu \exp\left\{ i\int d^3 x \: \frac{\kappa}{4\pi} \,\epsilon^{\mu\nu\rho}\,\text{Tr}\, \left( A_\mu \partial_\nu A_\rho +\frac{2}{3} \,A_\mu A_\nu A_\rho \right) + A_\mu^a\, J^{\mu\,a}_{\text{CFT}_3} \right\} \:. \end{align} We see that as the CS level $\kappa \rightarrow \infty$, there is a wild phase in the path integral, forcing the $\kappa$-dependent part of the action to be extremized, yielding Eq.~\eqref{eq:CS-EOM-NonAbelian}, derived here via the ``soft" limit. With the $t=0$ condition on the path integral, $A_{\bar{u}}^a(t=0) \equiv \bar{a}^a$ (and gauge-fixing), this leads to a specific $A_{\mu}^a(x)$. In this way, the alternate boundary condition becomes effectively Dirichlet boundary condition as $\kappa \rightarrow \infty$, in particular matching the holographic soft limit. 
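This extremization can be illustrated with a toy one-dimensional stationary-phase integral (the quadratic ``action'' $S(x)=x^{2}/2$ and the smooth weight $g$ below are arbitrary stand-ins of our own choosing, not the CS action itself): as $\kappa$ grows, the wildly oscillating phase cancels everywhere except near $S'(x)=0$, and the integral approaches the stationary-phase value.

```python
import numpy as np

# Toy stationary-phase check: I(kappa) = \int g(x) exp(i*kappa*S(x)) dx with
# S(x) = x^2/2 localizes onto the "equation of motion" S'(x) = 0 as kappa grows.
kappa = 400.0
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]
g = 1.0 / (1.0 + x**2)                      # smooth, arbitrary weight with g(0) = 1
f = g * np.exp(1j * kappa * x**2 / 2)

I_num = np.sum((f[:-1] + f[1:]) / 2) * dx   # trapezoidal rule
I_sp = np.sqrt(2*np.pi/kappa) * np.exp(1j*np.pi/4)   # stationary-phase prediction

rel_err = abs(I_num - I_sp) / abs(I_sp)
```

Already at $\kappa = 400$ the two agree at the percent level; in this sense the $\kappa$-dependent part of the action is forced onto its extremum as $\kappa \rightarrow \infty$.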
The one ``flaw" with this argument is that the $\kappa \rightarrow \infty$ limit for dynamical $A$ is ill-defined for $j j'$ correlators, precisely because of the central term in Eq.~\eqref{eq:jj-OPE-Abelian}. As pointed out in Ref.~\cite{Mishra:2017zan}, this is avoided by only considering connected correlators of the CS fields {\it with} the CFT, since the central term arises from connected correlators of CS with only itself. From the Dirichlet boundary condition viewpoint, this restriction is automatic since we are always considering soft dressing of ``hard'' CFT correlators. With this restriction, the central extension of KM is absent, as if it vanished, when in fact it is infinite as $\kappa \rightarrow \infty$. The seeds of alternate boundary condition correlators are contained in the Dirichlet boundary condition AdS$_4$ (pure CFT$_3$) correlators via their holographic soft limits. One can then unitarize these leading-in-$\kappa$ correlators by going to finite large $\kappa < \infty$, and including the simple pure-CS correlators, which contain the central extension. In this nuanced sense, AS from soft limits are a remnant of the alternate b.c theory, dual to the CS-gauged CFT$_3$. \section{CS Memory Effects and the Holographic Soft Limit} \label{sec:memory} Finally, we point out that AdS$_4$ gauge theory exhibits an analog of the electromagnetic ``memory'' phenomenon of Mink$_4$~\cite{Susskind:2015hpa,Pasterski:2015zua,Strominger:2015bla,Strominger:2017zoo}, closely connected to AS structure. The memory effect compares the parallel transport between two test charges far from a scattering process, long before and after the scattering event, more precisely given by a Wilson loop consisting of spatial transport between the two charges at early and late times, and temporal transport between those times. We focus on the abelian case. 
\subsection{Alternate boundary conditions and electric memory} We begin with the alternate boundary condition, in its dual formulation as $U(1)$ CS + CFT$_3$. Canonically, the CS fields are $A_{\bar{u}}, A_{u}$, effectively in temporal gauge $A_0 = 0$ after deriving the Gauss Law constraint. For simplicity, focusing on vanishing electromagnetic field strengths at early times (hence only neutral particles in the initial state), we can choose the further gauge condition $A_u(t = -\infty), A_{\bar{u}}(t= - \infty) =0$. We see that our canonical (can) fields therefore precisely define ``memory'' Wilson loops in more general (gen) gauges, \begin{align} A_i^{\text{can}}(u, \bar{u}, t=0)dx^i &= A_i^{\text{gen}}(u, \bar{u}, t=0)dx^i + \int_0^{-\infty} dt' A_0^{\text{gen}}(u+du, \bar{u}+d\bar{u}, t') \nonumber \\ & -A_i^{\text{gen}}(u, \bar{u}, t=-\infty)dx^i + \int_{-\infty}^0 dt' A_0^{\text{gen}}(u, \bar{u}, t')\:, ~~ {\rm where ~} i \equiv u,\bar{u}. \label{eq:CS-as-Wilson-loop} \end{align} The four terms on the right define four sides of a narrow gauge-invariant ``memory'' Wilson loop, from $u$ to $u+du$ at time $t=0$, to time $- \infty$ at $u+du$, back from $u+du$ to $u$ at time $- \infty$, and then from time $- \infty$ to $t=0$ at $u$. Similarly, a Wilson line of $A^{\text{can}}$ along a finite spatial curve ${\cal C}$ in the $x-y$ plane at time $t=0$ is equivalent to a more general memory Wilson {\it loop} in a general gauge, completing the curve with time-like lines to $t = - \infty$ and a spatial Wilson line reversing ${\cal C}$ at time $- \infty$. This is depicted in Fig.~\ref{fig:CS-as-memory}. Because of the Gauss Law constraint, the precise choice of ${\cal C}$ does not matter as long as one does not cross 3D charges in deforming the curve. In the above sense, arbitrary CS gauge theories describe the dynamics of memory effects in 3D. But when the CS charged matter is a CFT$_3$ with AdS$_4$ dual, the memory effects ``lift'' to 4D.
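The gauge invariance behind Eq.~\eqref{eq:CS-as-Wilson-loop} is easy to verify numerically in the abelian case: adding an arbitrary pure-gauge piece $\partial_\mu \lambda$ to a temporal-gauge configuration does not change the narrow memory loop, which collapses to the late-time canonical Wilson line. A minimal sketch in one spatial dimension (the field profile and gauge function below are arbitrary choices, with $t=-T$ standing in for $t \rightarrow -\infty$):

```python
import numpy as np

def trap(f, a, b, n=40001):
    # trapezoidal line integral of f along the straight parameter range [a, b]
    s = np.linspace(a, b, n)
    v = f(s)
    return np.sum((v[:-1] + v[1:]) / 2 * np.diff(s))

# Canonical (temporal-gauge) field: A_0 = 0, spatial part switched on smoothly
# from zero in the far past; the profile is an arbitrary choice.
A_can = lambda x, t: np.cos(x) * np.exp(t)

# Arbitrary gauge function lambda(x, t) = sin(2x) cos(t) and its gradient.
dlam_x = lambda x, t: 2*np.cos(2*x) * np.cos(t)
dlam_t = lambda x, t: -np.sin(2*x) * np.sin(t)

x0, x1, T = 0.3, 1.1, 40.0    # t = -T stands in for t -> -infinity

# Four-segment "memory" Wilson loop of A_gen = A_can + d(lambda):
loop = ( trap(lambda x: A_can(x, 0.0) + dlam_x(x, 0.0), x0, x1)   # spatial, t = 0
       + trap(lambda t: dlam_t(x1, t), 0.0, -T)                   # temporal leg at x1
       + trap(lambda x: A_can(x, -T) + dlam_x(x, -T), x1, x0)     # spatial, t = -T
       + trap(lambda t: dlam_t(x0, t), -T, 0.0) )                 # temporal leg at x0

# Late-time Wilson line of the canonical field alone:
line = trap(lambda x: A_can(x, 0.0), x0, x1)
```

The pure-gauge contributions telescope around the closed loop, so only the canonical $t=0$ line survives, in accordance with Eq.~\eqref{eq:CS-as-Wilson-loop}.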
The 3D memory Wilson loop above is now seen as a 4D memory Wilson loop at (or near) $\partial \text{AdS}_4$, $z=0$, far from a bulk scattering. This is similar to the $\text{Mink}_4$ memory Wilson loops at large distance from a scattering process~\cite{Susskind:2015hpa,Pasterski:2015zua,Strominger:2015bla}. In the alternate boundary condition AdS case, the CS Gauss Law gives a general relationship between the canonical memory fields $A_{u}^{\text{can}}, A_{\bar{u}}^{\text{can}}$. As noted in subsection~\ref{subsec:GaussLawConstraint}, this relationship effectively determines the CS quantum state completely in terms of the matter CFT state, say as a wavefunctional in $A_{\bar{u}}^{\text{can}}$ in coherent state basis. Both $A_{u}^{\text{can}}$ and $A_{\bar{u}}^{\text{can}}$ are determined as operators acting on this state. That is, Gauss' Law completely determines the memory effect at the quantum level. As we saw in subsections~\ref{subsec:2DWZWCurrent} and~\ref{subsec:generalCFTstates} Gauss' Law is essentially equivalent to the KM structure. Thus, at the most fundamental level, the memory effect is the physical face of the AS structure. \begin{figure} \centering \includegraphics[width=.7\linewidth]{CS-as-memory.pdf} \caption{\small{A general CS memory Wilson loop in Mink$_3$, comparing parallel transport along the spatial curve ${\cal C}$ at early and late times, where for simplicity the early state has vanishing gauge field strength. It can be viewed as composed of many narrow memory Wilson loops, with shared timelike lines canceling due to their opposing orientations. In terms of the canonical CS fields, effectively in temporal gauge, this general Wilson loop is therefore given by just the late Wilson line along ${\cal C}$. 
(See Eq.~\eqref{eq:CS-as-Wilson-loop})}} \label{fig:CS-as-memory} \end{figure} \subsection{Dirichlet boundary conditions and magnetic memory} Let us switch to Dirichlet boundary condition, in which case the boundary-localized memory Wilson loop vanishes (dual to the absence of CS fields, given just the isolated CFT$_3$). But we saw in Section~\ref{sec:EMduality-from-MirrorDuality} that in magnetic dual variables $\widetilde{\mathcal{A}}$ the boundary condition becomes effectively Neumann. This allows us to consider non-vanishing magnetic memories, given by 't Hooft loops (Wilson loops in $\widetilde{\mathcal{A}}(z\rightarrow 0)$). We will see that this can be non-trivial even in processes involving only standard electric charges but no magnetic charges. These effects are analogous to (the electric-magnetic dual of) the magnetic memory effects in $\text{Mink}_4$ discussed in Ref. \cite{Strominger:2015bla}. There is an important but subtle contrast with the previous subsection. From the holographic viewpoint of the magnetic dual description, there is a 3D $\widetilde{A}$ which is the mirror version of $A$ above. Naively, this $\widetilde{A}$ translates via AdS/CFT into $\widetilde{\cal A}(z=0)$ in the 4D description. But formally $\widetilde{A}$ has a CS level $\widetilde{\kappa} = 0$, so that rather than being a CS field it reduces to a simple Lagrange multiplier for $\widetilde{J}$, which translates via AdS/CFT to the Lagrange multiplier enforcing the Neumann boundary conditions in AdS. Thus, once we are considering the 4D magnetic dual description with Neumann boundary conditions, {\it this} $\widetilde{\cal A}(z=0)$ has already been integrated out of the theory. Instead, in this subsection we are considering the distinct Neumann bulk field $\widetilde{\mathcal{A}}(z)$ in the limit $z \rightarrow 0$. 
Unlike $A$ (or $\widetilde{A}$), $\widetilde{\mathcal{A}}_u(z\rightarrow 0)$ and $\widetilde{\mathcal{A}}_{\bar{u}}(z\rightarrow 0)$ are {\it not} canonically conjugate, and are not constrained by a (mirror) Gauss Law constraint. We begin with the standard $\text{AdS}/\text{CFT}$ identification of holographic charge density, \begin{align} J_0^{\text{CFT}} \equiv \frac{1}{g^2}{\cal F}_{0z}(z \rightarrow 0) = \frac{1}{g^2}\widetilde{\cal F}_{xy}(z \rightarrow 0) = \frac{-2i}{g^2} \left( \partial_u \widetilde{\mathcal{A}}_{\bar{u}}(z\rightarrow 0)-\partial_{\bar{u}}\widetilde{\mathcal{A}}_u(z\rightarrow 0) \right)\, . \label{eq:da-minus-daBar} \end{align} Note that this relates the magnetic $\widetilde{\mathcal{A}}(z\rightarrow 0)$ gauge field with the original electric CFT$_3$ current. For given $J$, this is a general constraint on the memories measured by the (temporal gauge) $\widetilde{\mathcal{A}}_u(z\rightarrow 0), \widetilde{\mathcal{A}}_{\bar{u}}(z\rightarrow 0)$. In special circumstances, analogous to the set-up in Mink$_4$, we can make a stronger statement. We will assume an initial state with vanishing field strengths, involving a non-trivial scattering of neutral particles deep in the bulk of AdS$_4$ which results in the production of 4D electromagnetic radiation and electrically (not magnetically) charged particles. We take the charges to be massless so that we can continue to treat AdS$_4$ as effectively Mink$_4/2$ by Weyl invariance, and take local CFT operators ${\cal O}_{\alpha}(x_{\alpha})$ to annihilate the charges on $\partial \text{AdS}_4$ at $t_{\alpha} < 0$, before the memory measurement at $t=0$. More generally, we take the radiation and particles to arrive at $\partial \text{AdS}_4$ earlier than $t=0$, and either be reflected away into the bulk or absorbed by boundary/CFT operators. Therefore, radiation from the bulk scattering does not contribute to the boundary $\widetilde{\mathcal{A}}(z\rightarrow 0)$ gauge fields at $t=0$.
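The complex-coordinate form of $\widetilde{\cal F}_{xy}$ used in Eq.~\eqref{eq:da-minus-daBar} is a purely algebraic identity, and can be checked symbolically. A minimal sympy sketch, under the assumed conventions $u=x+iy$, $A_{u}=(A_{x}-iA_{y})/2$, $A_{\bar{u}}=(A_{x}+iA_{y})/2$, $\partial_{u}=(\partial_{x}-i\partial_{y})/2$ (these normalizations are our assumption here, not restated in this section):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
Ax = sp.Function('A_x')(x, y)
Ay = sp.Function('A_y')(x, y)

# assumed conventions: u = x + i y, A_u = (A_x - i A_y)/2, A_ub = (A_x + i A_y)/2,
# d_u = (d_x - i d_y)/2, d_ub = (d_x + i d_y)/2
Au  = (Ax - sp.I*Ay) / 2
Aub = (Ax + sp.I*Ay) / 2
du  = lambda f: (sp.diff(f, x) - sp.I*sp.diff(f, y)) / 2
dub = lambda f: (sp.diff(f, x) + sp.I*sp.diff(f, y)) / 2

lhs = -2*sp.I*(du(Aub) - dub(Au))        # -2i (d_u A_ubar - d_ubar A_u)
Fxy = sp.diff(Ay, x) - sp.diff(Ax, y)    # field strength F_xy
residual = sp.simplify(sp.expand(lhs - Fxy))
```

A vanishing residual confirms that $-2i(\partial_u A_{\bar u} - \partial_{\bar u} A_u) = F_{xy}$ in these conventions.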
This set-up is depicted in Fig.~\ref{fig:MemoryInAdS}. \begin{figure} \centering \includegraphics[width=.5\linewidth]{MemoryInAdS.pdf} \caption{\small{A $\partial \text{AdS}_4$ correlator for radiation and charged matter created by a distant bulk scattering, initiated from an electromagnetically neutral state. We focus on a 't Hooft line at $t=0$ in temporal gauge, corresponding to a magnetic memory loop, allowed by the standard boundary conditions. It receives contributions from the secondary radiation emitted by charged matter annihilated at the boundary by local operators. Radiation from the bulk scattering is either absorbed by the CFT current $J$ or reflected by the boundary, and therefore does not contribute to the late-time 't Hooft line.}} \label{fig:MemoryInAdS} \end{figure} But further radiation can result when the charged particles are absorbed by ${\cal O}_{\alpha}$ on $\partial \text{AdS}_4$, effectively ``annihilating'' with their images in the Mink$_4$ covering space of Mink$_4/2 \sim \text{AdS}_4$. This secondary radiation from $z \sim 0$ can spread until $t=0$ and contribute to the boundary fields $\widetilde{\mathcal{A}}(z\rightarrow 0)$ then. In temporal gauge, the transverse radiation satisfies $\partial_x \widetilde{\mathcal{A}}_x(z\rightarrow 0) + \partial_y\widetilde{\mathcal{A}}_y(z\rightarrow 0) + \partial_z \widetilde{\mathcal{A}}_z(z\rightarrow 0)= 0$ as usual. Since the secondary radiation travels in the $x-y$ directions but remains at $z \sim 0$ in order to contribute to the memory measurement there, the $z$-momentum is subdominant, and we have \begin{align} \partial_x \widetilde{\mathcal{A}}_x(z\rightarrow 0) + \partial_y \widetilde{\mathcal{A}}_y(z\rightarrow 0) \equiv \partial_u \widetilde{\mathcal{A}}_{\bar{u}}(u, \bar{u}, z \rightarrow 0, t=0) + \partial_{\bar{u}} \widetilde{\mathcal{A}}_{u}(u, \bar{u}, z \rightarrow 0, t=0) \approx 0. 
\label{eq:da-plus-daBar} \end{align} We can then solve the simultaneous equations, Eqs.~\eqref{eq:da-minus-daBar}, \eqref{eq:da-plus-daBar}, for the memory fields, \begin{align} \widetilde{\mathcal{A}}_u(u,\bar{u}, z \rightarrow 0, t=0) &= -\frac{ig^2}{4} \int \frac{d^2 u'}{2\pi} \frac{J_0(u', \bar{u}', t=0)}{u' - u} \nonumber \\ \widetilde{\mathcal{A}}_{\bar{u}}(u,\bar{u}, z \rightarrow 0, t=0) &= \frac{ig^2}{4} \int \frac{d^2 u'}{2\pi} \frac{J_0(u', \bar{u}', t=0)}{\bar{u} - \bar{u}'}. \label{eq:a-aBar-Solution} \end{align} We now show that the above memory effect precisely matches the holographic soft limit we derived in Section~\ref{sec:MirrorDual-from-SoftLimit-abelian}. First we note that the secondary radiation satisfies the Maxwell equations, \begin{align} 0 &= \partial_i B_i + \partial_z B_z \approx \partial_i B_i \nonumber \\ 0 &= \partial_0 B_i + \epsilon_{ij}\partial_j E_z - \epsilon_{ij} \partial_z E_j \approx \partial_0 B_i + \epsilon_{ij}\partial_j E_z, ~ ~ {\rm where} ~ i \equiv x,y, \end{align} and where again the $z$-momentum is subdominant so that we drop the $\partial_z$ terms. Since we are near the boundary, we can translate $B_i \rightarrow g^2 \epsilon_{ij} J_j$ and $E_z \rightarrow g^2 J_0$, so that the above relations become \begin{align} \epsilon^{\mu\nu\rho}\partial_\nu J_\rho \approx 0\,. \end{align} Therefore $J_{\mu} \approx \partial_{\mu} \Phi$ is a total gradient. The current Ward identity, Eq.~\eqref{eq:CFT-Ward-Identity}, then reads \begin{align} \partial_\mu \partial^\mu \Phi = - \sum_\alpha Q_\alpha \delta^3(x-x_\alpha)\:, \end{align} with solution \begin{align} \Phi(x) = -i \sum_\alpha Q_\alpha G_S(x-x_\alpha)\:, \end{align} where $G_S$ is the Mink$_3$ scalar $\Phi$ propagator. 
Therefore, Eq.~\eqref{eq:a-aBar-Solution} reads \begin{align} \widetilde{\mathcal{A}}_{u}(u, \bar{u}, z \rightarrow 0, t=0) = -\frac{g^2}{4} \sum_\alpha Q_\alpha \: \int \frac{d^2 u'}{2\pi} \: \frac{\partial_0 G_S(u' - u_\alpha, \bar{u}' - \bar{u}_\alpha, -t_\alpha)}{u-u'}\: . \label{eq:A-solution-inTermsOfPropagator} \end{align} Let us compare this result with the holographic soft limit for non-simultaneous ${\cal O}_{\alpha}$, as given by \begin{align} j(u,\bar{u})= \sum_\alpha Q_\alpha \int \frac{d^2 u'}{u-u'} \int \frac{dq_u}{2\pi} \, \frac{dq_{\bar{u}}}{2\pi} \: e^{i q_u (u_\alpha - u')} \, e^{iq_{\bar{u}}(\bar{u}_\alpha - \bar{u}')} \, e^{-2i\sqrt{q_u q_{\bar{u}}}t_\alpha} \label{eq:j-in-terms-of-scalar-propagator} \end{align} following from Eqs.~\eqref{eq:j-non-simultaneous}, \eqref{eq:lambda-nonZeroTime}, \eqref{eq:lambda-zeroTime}. This precisely matches the form of memory, Eq.~\eqref{eq:A-solution-inTermsOfPropagator}, since the time-ordering in $G_S$ is fixed because all $t_{\alpha} < 0$. The special case of $t_{\alpha} \rightarrow 0$ in AdS$_4$ is similar to the case of {\it massless} charges in Mink$_4$ reaching lightlike infinity, in each case leading to holomorphic $j$ with simple poles. We see this explicitly at $t_{\alpha} =0$ in Eq.~\eqref{eq:j-in-terms-of-scalar-propagator}, where the Fourier transforms give $\delta^2(u' - u_{\alpha})$. General $t_{\alpha} \neq 0$ in AdS$_4$ is similar to the case of {\it massive} charges in Mink$_4$ which approach timelike infinity, in which case $j$ is not holomorphic. See Refs.~\cite{Strominger:2017zoo, Kapec:2015ena, Campiglia:2015qka} for the same smeared structure of poles in $\text{Mink}_4$ memory for massive charges as our Eq.~\eqref{eq:a-aBar-Solution}.
However, we see that in AdS$_4$ we have a clear holographic interpretation for this smearing in terms of the spreading of holographic charge density over time starting from $\delta$-function localization, $J_0 \propto \partial_0 G_S$, because the 3D charges are ``blobs" of massless CFT constituents. This is in contrast to a 3D theory with only 3D-massive point-particle charges (without 4D dual), where $J_0$ would retain the form of $\delta$-functions at particle locations over time, and the analogous construction of $j$ would have simple poles in $u$ without smearing over time. See the discussion in subsection~\ref{subsec:generalCFTstates}. \section{Discussion} \label{sec:discussion} In this paper we have studied infinite-dimensional Kac-Moody (KM) asymptotic symmetries arising in $\text{AdS}_4^{\text{Poincare}}$ gauge theories. The standard asymptotic analysis, famously admitting only the finite-dimensional global symmetries of a holographically dual CFT$_3$, was evaded in two steps, identified in Ref.~\cite{Mishra:2017zan} but taking their simplest form here. In the present context, the major step was to consider alternate AdS boundary conditions peculiar to four dimensions, holographically dual to a modified $\widetilde{\text{CFT}}_3$ obtained by an external Chern-Simons ($\text{CS}$) gauging of the original $\text{CFT}_3$. The second step was to restrict attention to boundary/CFT correlators (or wavefunctional) at a fixed time, say $t=0$, where the canonical CS structure yields holomorphic currents, whose Laurent expansion coefficients are KM charges. For more general correlators the physical essence of the KM symmetries is retained and generalized by the CS structure, but with a smearing out of the simple pole structure of KM holomorphic currents. We showed how all this connects to ``holographic soft limits'' in AdS$_4^{\text{Poincare}}$ which underlie its KM asymptotic symmetries, for both abelian and non-abelian gauge fields. 
The 4D fields in this ``soft limit'' take the form of 3D CS fields (implying alternate boundary conditions for the AdS dual) which then lead to KM symmetries on an effectively 2D boundary of the CS spacetime, via the CS/WZW correspondence. While soft limits yield the alternate/$\widetilde{\text{CFT}}_3$ theory to leading order in the associated CS level, in the sense of Ref.~\cite{Mishra:2017zan}, it is interesting to see if the all-orders theory (finite CS level) can naturally emerge from the standard/$\text{CFT}_3$ construction. We showed this for the case of {\it abelian} symmetry, where the standard construction imposed on electric-magnetic (mirror) dual variables assumes the alternate ($\widetilde{\text{CFT}}_3$) form in the original variables, with finite CS level in the holographic description! The KM symmetries were thereby seen to be generalizations of dyonic charge conservation rather than simple electric charge conservation. It is less clear whether there is a non-abelian generalization, given the key role played by the S-duality transformation exchanging electric and magnetic charges. Perhaps a good theoretical laboratory is provided by those special supersymmetric non-abelian theories in which S-duality persists~\cite{Gaiotto:2008ak}. There are several ways in which the KM structure derived in this work bears a resemblance to that of gauge theories in 4D Minkowski spacetime. It is useful to explore the similarities and differences in AS analyses of $\text{Mink}_4$ and $\text{AdS}_4$. In $\text{AdS}_4$, we have seen here and in Ref.~\cite{Mishra:2017zan} that holography allows us to straightforwardly and insightfully arrive at an AS structure previously unnoticed, whereas in $\text{Mink}_4$ there is a more familiar AS structure which may well point to some version of Minkowski holography, as yet unknown. In what follows, we comment on the similarities and differences, summarized briefly in Table~\ref{tab:mink-vs-ads}. 
The first hint that the AS structures in these two spacetimes may have some commonalities comes from the observation that the underlying CS gauge structure responsible for KM asymptotic symmetries in $\text{AdS}_4$, was also seen in the Minkowski analysis of Ref.~\cite{Cheung:2016iub}. Yet, naively, a close resemblance would have seemed unlikely -- $\text{AdS}_4$ and $\text{Mink}_4$ are different spacetimes, with very different boundary structures. Further, $\text{Mink}_4$ KM asymptotic symmetries reflect gauge-boson soft limits, whereas standard AdS$^{\rm global}$ lacks such soft limits. Nevertheless, we showed here that there is a simple generalization to ``holographic soft limits" in AdS$^{\text{Poincare}}$ which underlies its KM asymptotic symmetries. \renewcommand{\arraystretch}{1.5} \begin{table} \begin{tabular}{|p{7cm}|p{7cm}|} \hline ~~~~~~~~~~~~~~~~~~~~~~\textbf{Mink}$_4$ & ~~~~~~~~~~~~~~~~~~~~~~\textbf{AdS}$_4$ \\ \hline \hline S-matrix & $\partial\text{AdS}_4/\text{CFT}_3$ local correlators \\ \hline Timelike infinity $\equiv$ Euclidean $\text{AdS}_3$ & $\partial\text{AdS}_4 \equiv \text{Mink}_3$ \\ \hline Null infinity ($\mathcal{I}$), 2D geometry & Fixed time $t=0$ on $\partial\text{AdS}_4$, 2D geometry \\ \hline Soft limit, $m_3\rightarrow 0$, where $m_3$ is the Casimir invariant of Euclidean $\text{AdS}_3$~\cite{Cheung:2016iub} & Holographic soft limit, $m_3\rightarrow 0$, where $m_3$ is the Casimir invariant of $\text{Mink}_3$ \\ \hline $\text{CS}$ structure of soft fields & $\text{CS}$ structure of soft fields \\ \hline 2D holomorphic-WZW currents $j^a$ for (massless) charges hitting $\mathcal{I}$ & 2D holomorphic-WZW currents $j^a$ for charges hitting $t=0$ on $\partial\text{AdS}_4$ \\ \hline (Non-)abelian Kac-Moody AS & (Non-)abelian Kac-Moody AS \\ \hline Electric/Magnetic Memories & Electric/Magnetic Memories \\ \hline Electric flux Memory Kernel & Electric flux/Holographic charge density \\ \hline ~~~~~~~~~~~~~~~~~{\bf ?~?} & Holographic 
Duality \\ \hline ~~~~~~~~~~~~~~~~~{\bf ?~?} & $\widetilde{\text{CFT}}_3$ with fully dynamical $\text{CS}$ (finite level) \\ \hline \end{tabular} \caption{\small{The parallel developments between $\text{Mink}_4$ and $\text{AdS}_4$ gauge dynamics, their soft limits and associated infinite-dimensional KM asymptotic symmetries. AdS/CFT holography provides more of an explanatory structure in the case of AdS$_4$.}} \label{tab:mink-vs-ads} \end{table} In Mink$_4$ gauge theory, massive charges emerging from a scattering event asymptotically approach future timelike infinity. This is a space parametrized by particle boosts, geometrically 3D hyperbolic space or, more suggestively, Euclidean $\text{AdS}_3$~\cite{deBoer:2003vf}. Its boundary is future null infinity, ${\cal I}^+$, the destination for massless particles, which, while 3-dimensional, has 2D geometry due to the one null direction. In $\text{AdS}_4$, there are also asymptotic 3D and 2D geometries. The asymptotic infinity of $\text{AdS}_4^{\text{Poincare}}$ is of course the boundary $\equiv \text{Mink}_3$, the entire spacetime from the holographic perspective. The analogous ``2D boundary" for $\text{AdS}_4$ is provided by a constant time slice on $\partial \text{AdS}_4$, a boundary if one considers a wavefunctional on this time slice as determined by a path integral over just the earlier spacetime region. Canonically in CS, AS structure is associated to the wavefunctional, at say $t=0$, with its spacelike 2D geometry. In this work, we considered scattering in the bulk of $\text{AdS}_4$, with some outgoing particles headed to the boundary and absorbed by local ($\text{CFT}_3$) operators there. Charged particles arriving at $\partial \text{AdS}_4$ at $t=0$ then play a somewhat analogous role to massless charged particles arriving at ${\cal I}^+$ in Mink$_4$. This is seen more sharply by the soft CS/WZW structure that arises. 
In both cases, we get 2D holomorphic currents, with poles at the locations of the charges, and with Laurent expansions in terms of KM charges. Charged particles arriving at $\partial \text{AdS}_4$ at more general $t \neq 0$ are the analogs of massive charges arriving at timelike infinity in $\text{Mink}_4$ -- the 2D currents exist but are no longer holomorphic, the above-mentioned poles effectively being ``smeared"~\cite{Strominger:2017zoo, Kapec:2015ena, Campiglia:2015qka}. This smearing effect in the context of $\text{AdS}_4$ finds a natural holographic explanation in the tendency of 3D charge density to spread in a CFT$_3$ even if initially created in point-like form by a local operator. As discussed in subsection~\ref{subsec:generalCFTstates}, this smearing for general AdS$_4$ states does not compromise the KM structure, and furthermore in the CFT$_3$ dual description the smeared pole structure again resolves into discrete simple poles at the level of the 3D ``quarks" of the CFT. The analogy between $\text{AdS}_4$ and $\text{Mink}_4$ is imperfect in one significant regard: while massless 4D charges robustly arrive at null infinity, ${\cal I}^+$, in $\text{Mink}_4$, and massive charges do not, in $\text{AdS}_4$ there is no such robust determinant of whether 4D charges will arrive at $\partial \text{AdS}_4$ at $t=0$ or not. Instead, from the 3D Chern-Simons perspective the determining factor of whether KM currents have robust simple pole structure or not is whether 3D charges are massive or massless, respectively. Of course, for the $\text{CFT}_3$ dual to $\text{AdS}_4$ the fundamental charges are massless. Like in $\text{Mink}_4$, $\text{AdS}_4$ also has a close connection between KM symmetries and the memory effect, given by a large asymptotic spacetime Wilson loop. In $\text{AdS}_4$, the analogous Wilson loop at the AdS boundary must vanish by standard boundary conditions. 
Nevertheless, we demonstrated that non-trivial ``magnetic" memory effects exist even with standard boundary conditions in $\text{AdS}_4$, associated with non-vanishing 't Hooft loops on the boundary, and that these are closely related to holographic soft limits and KM structure. It is an exciting open question as to how the rich structure of asymptotic symmetries and memories imply a new form of ``hair" for complex 4D states such as black holes, and can algebraically encode information that might seem lost according to standard 4D effective field theory analysis. We hope that the simple form and derivation of asymptotic symmetries and memories presented here for $\text{AdS}_4^{\rm Poincare}$, and the deep connection to holography, will help to answer this question in the future. \acknowledgments The research of RS was supported in part by the NSF under grant number PHY-1620074 and by the Maryland Center for Fundamental Physics (MCFP). The research of AM was supported in part by the NSF under grant number PHY-1407744. \bibliographystyle{JHEP}
\section{Introduction} For non-interacting systems, the appearance of non-dissipative Hall-like currents is due to the existence of topological structures in the electronic spectrum\cite{TKN82,KM05}. In the case of the quantum Hall effect (QHE) the physical ingredient allowing for such structure is the presence of an external magnetic field breaking time reversal symmetry, while in the case of the quantum spin Hall effect (QSHE) it is the spin-orbit coupling that allows for spin-resolved currents along the edges\cite{SCN04,BZ05}. The quantum anomalous Hall effect (QAHE) lies somewhat in between: time reversal symmetry is broken by the presence of magnetic elements instead, and the QSHE can be understood as two copies of the QAHE related by time reversal symmetry\cite{H88}. Remarkably, not only has the QHE been measured\cite{KDP80}, but the QSHE and the QAHE have also been experimentally confirmed\cite{KWS07,CZF13}. A recurrent question is whether this electromagnetically induced response can be modified somehow. It is known that, due to the topological meaning of this response, there is no room for such modification unless interactions are present\cite{TSG82}. However, this statement applies to the Hall response in the DC limit. Little is experimentally known when time-dependent external electromagnetic fields are considered. Theoretical and experimental efforts have been carried out to understand and measure the frequency structure of closely related responses like the Faraday and Kerr effects in three-dimensional topological insulators (3DTI) and graphene\cite{GJ12,VSL12,SYY13}. Another important property of systems exhibiting Hall responses is the presence of one-dimensional conducting channels at the sample's edge. These edge states are responsible for the conducting properties of these phases of matter when the Fermi level lies in the bulk gap.
The transport properties of these conducting channels have been extensively studied in the literature, both in the DC and in the AC limits\cite{C03}. Besides their inherent interest in fundamental science\cite{GC11}, the potential applicability of such non-dissipative edge channels to future electronic devices, both in the DC and in the optical frequency regimes, hardly needs mentioning, especially for HgTe/CdTe based devices. For these reasons it is interesting to investigate how systems showing Hall responses (like HgTe/CdTe quantum wells) behave under the effect of time-varying electric fields. We will also see that the presence of external time-dependent electric fields can unveil previously unreported properties of such systems, like the one described in the present work, which arises when dipole interactions between states close to the Fermi level are considered. Such interactions are the manifestation in Condensed Matter Physics of the Stark effect, which is the electrical analogue of the Zeeman effect\cite{V01}. The rest of the paper is organized as follows: In section II we describe the Bernevig-Hughes-Zhang lattice model including the dipole moment terms and obtain its continuum version. In section III we calculate the induced Hall current by integrating out the fermions and obtaining the properly modified Chern-Simons term. We also describe here how the chiral edge states in the QAHE and QSHE get modified through the bulk-boundary correspondence. Finally, in section IV we summarize the results obtained. \section{The model} \begin{figure} \centering \includegraphics[width=1.1\columnwidth]{lattice2.eps} \caption{(Color online) Scheme of the system described in the text after \cite{NSO10}.
The green (light) circles and red (dark) lobules represent $s$ and $p$ orbitals, respectively.} \label{fig:lattice1} \end{figure} Let us consider as a prototypical model for the QAHE and QSHE the Bernevig-Hughes-Zhang model, corresponding to a square lattice with three orbitals per site\cite{BHZ06}. These orbitals are chosen to be s-like ($J=1/2$) and p-like ($J=3/2$). It is then conceivable that, when an external electric field is applied, intra-atomic dipole interactions might take place between these orbitals. For convenience we will consider time-dependent external electric fields in the dipole approximation, that is, we do not take into account spatial dependences of the fields: $\mathbf{E}=\mathbf{E}(t)$. The lattice Hamiltonian in the tight-binding approximation, in the absence of external perturbations, reads\cite{BHZ06,FK07,NSO10}: \begin{eqnarray} \nonumber H_{t}=&-&\sum_{i,a=x,y}t_{s}s^{+}_{i}s_{i+a}-\sum_{i,a=x,y}t_{p}p^{+}_{a,i}p_{a,i+a}-\\ \nonumber &-&\sum_{i,a=x,y}t_{ps}\left(s^{+}_{i}p_{a,i+a}-s^{+}_{i}p_{a,i-a}\right)+\\ &+&\sum_{i,a=x,y}\epsilon_{s}s^{+}_{i}s_{i}+\epsilon_{p}p^{+}_{a,i}p_{a,i}+h.c.\label{latham} \end{eqnarray} To (\ref{latham}) we have to add a spin-orbit coupling \begin{equation} H_{SO}=\sum_{i}\lambda L^{z}_{i}S^{z}_{i}.\label{SOham} \end{equation} The geometry of the hopping terms can be seen in Fig.~\ref{fig:lattice1}. When the spin-orbit term is considered, the on-site energies change to $E_{0}=\epsilon_{s}$, $E_{\pm1}=\epsilon_{p}\pm\frac{\lambda}{2}$. Assuming $\epsilon_{s}<\epsilon_{p}$ and $\lambda>0$ to be the largest energy scale involved in the problem, we can neglect the $p_{+1}\equiv (p_{x}+i p_{y})/\sqrt{2}$ orbital, drastically reducing the problem to a two-band problem. We will first focus on the QAHE, so we will work out the model (\ref{latham}) in its angular momentum polarized version\cite{LQD08,YZZ10}.
In order to capture the effect of the electric dipole terms we consider the standard radiation-matter coupling: \begin{equation} H_{d}=-q\mathbf{E}(t)\int d^{2}\mathbf{r}\psi^{+}(\mathbf{r})\mathbf{r}\psi(\mathbf{r}).\label{intham1} \end{equation} In the tight-binding approximation, the fermion operator is written as $\psi(\mathbf{r})=\sum_{i,\alpha}c_{i,\alpha}\phi_{\alpha}(\mathbf{r}-\mathbf{R}_{i})$ ($c_{i,\alpha}$ represents the amplitudes $s_{i}$ and $p_{i,-1}$, and $\phi_{\alpha}$ the orbital wave functions). Inserting this expression in (\ref{intham1}) and assuming that the resulting overlap integrals are nonzero only on the same site, we can write \begin{eqnarray} \nonumber &&\int d^{2}\mathbf{r}\phi^{*}_{\alpha}(\mathbf{r}-\mathbf{R}_{i})\mathbf{r}\phi_{\alpha'}(\mathbf{r}-\mathbf{R}_{i'})\approx\\ &\approx& \delta_{i,i'}\delta_{\alpha\alpha'}\mathbf{R}_{i}+\delta_{i,i'}\mathbf{d}_{\alpha\alpha'}. \end{eqnarray} When $\mathbf{E}$ is applied in-plane: \begin{eqnarray} H^{\uparrow}_{d}=-q E_{a}(t)\sum_{i,\alpha\alpha'}\left(R^{a}_{i}\delta_{\alpha\alpha'}+d^{a}_{\alpha\alpha'}\right)c^{+}_{i,\alpha}c_{i,\alpha'}.\label{dipole2} \end{eqnarray} The symbols $(\uparrow, \downarrow)$ refer to the two $m_{J}$ projections in the model (\ref{latham}) and (\ref{SOham}); recall that, due to the presence of the spin-orbit coupling, the spin operator is not a well-defined conserved quantity. In the more realistic case, $(\uparrow, \downarrow)$ refer to the signs $(+,-)$ of the total angular momentum projections of the states from which the $|\uparrow\rangle$ and $|\downarrow\rangle$ states are built \cite{MR11}. The dipole matrix elements are defined as $d^{j}_{\alpha\alpha'}=\int d^{2}\mathbf{r}\phi^{*}_{\alpha}r^{j}\phi_{\alpha'}$.
In our specific case, for the spin-up projection \begin{equation} d^{j}_{\alpha\alpha'}=\int d^{2}\mathbf{x}\langle s| x^{j}|p_{-1}\rangle= d\left(\delta^{jx}+i \delta^{jy}\right),\label{dipole} \end{equation} where $d=\int d^{2}\mathbf{x}\langle s| x|p_{-1}\rangle$. The electric dipole transitions behind this result occur for $\Delta m_{J}=\pm 1$ and $\Delta J=\pm 1$\cite{G05}. The first term in (\ref{dipole2}) is of the form $-q\mathbf{E}(t)\cdot\mathbf{R}_{i}$. The most direct way of dealing with this term is to change to the temporal gauge $A_{0}=0$ through the following phase change in the fermions: $c_{i}\rightarrow e^{i\Lambda_{i}(t)}c_{i}$, $c^{+}_{i}\rightarrow c^{+}_{i}e^{-i\Lambda_{i}(t)}$, with $\Lambda_{i}(t)=q\int_{t}\mathbf{E}(\tau)\mathbf{R}_{i}d\tau+\Lambda^{0}(\mathbf{R}_{i})\equiv q\mathbf{A}(t)\mathbf{R}_{i}+\Lambda^{0}(\mathbf{R}_{i})$. In the absence of an external magnetic field, we have the freedom to choose $\Lambda^{0}(\mathbf{R}_{i})=0$, so the Hamiltonian (\ref{latham}), after reducing to a two-component system, is now diagonal in momentum space: \begin{eqnarray} \nonumber H_{\uparrow}(\mathbf{k},\mathbf{A})&=&d_{1}(\mathbf{k},\mathbf{A})\tau_{1}-d_{2}(\mathbf{k},\mathbf{A})\tau_{2}+\\ &+&\mu(\mathbf{k},\mathbf{A})\tau_{0}+ m(\mathbf{k},\mathbf{A})\tau_{3},\label{lathabmatrix} \end{eqnarray} with the parameters $m$, $\mu$, and $d_{i}$ defined in Table~\ref{tab:delements}.
\begin{table} \caption{\label{tab:delements}Parameters for the BHZ model $\left(\mathbf{X}=a\mathbf{k}-qa\mathbf{A}\right)$} \begin{ruledtabular} \begin{tabular}{c} $\mu(\mathbf{X})=\frac{1}{2}(\epsilon_{s}+\epsilon_{p}-\frac{\lambda}{2}-(2t_{s}+t_{p})\sum_{j=x,y}\cos(X_{j}))$ \\ $m(\mathbf{X})=\frac{1}{2}(\epsilon_{s}-\epsilon_{p}+\frac{\lambda}{2}-(2t_{s}-t_{p})\sum_{j=x,y}\cos(X_{j}))$ \\ $d_{1}(\mathbf{X})=\sqrt{2}t_{sp}\sin(X_{x})$\\ $d_{2}(\mathbf{X})=\sqrt{2}t_{sp}\sin(X_{y})$ \end{tabular} \end{ruledtabular} \end{table} With this change of gauge, and in matrix notation, $H_{d}$ reads \begin{eqnarray} H^{\uparrow}_{d}=-q\frac{d}{c}(\dot{A}_{x}\tau_{1}-\dot{A}_{y}\tau_{2}).\label{dipole3} \end{eqnarray} Note that now the electronic states are coupled to the gauge vector field $\mathbf{A}$ through the standard Peierls substitution in (\ref{lathabmatrix}) and through a nonminimal coupling to the time derivative of the vector field in (\ref{dipole3}). Indeed the Hamiltonian (\ref{dipole3}) is nothing but a way of writing the Hamiltonian of the Stark effect. In order to ease the computations, we will work in the continuum limit, expanding (\ref{lathabmatrix}) around the $\Gamma$ point $\mathbf{k}=0$, and in the linear response regime, which corresponds to keeping terms linear in $\mathbf{A}$ in (\ref{lathabmatrix}) and in $\dot{\mathbf{A}}$ in (\ref{dipole3}). The Hamiltonian takes the form of a single species of Dirac fermion (dropping an irrelevant redefinition of the zero of energies): \begin{eqnarray} H_{\uparrow}(\mathbf{k})=vk_{x}\tau_{1}-vk_{y}\tau_{2}+m\tau_{3},\label{effham0} \end{eqnarray} with $m=(\epsilon_{s}-\epsilon_{p}+\lambda/2+t_{p}-2t_{s})/2$ and $v=\sqrt{2}at_{sp}$.
The total effective Hamiltonian describing the interaction between the electrons and the electromagnetic field is \begin{eqnarray} H^{\uparrow}_{A}=q \frac{v}{c}(A_{x}+\frac{d}{v}\dot{A}_{x})\tau_{1}-q \frac{v}{c}(A_{y}+\frac{d}{v}\dot{A}_{y})\tau_{2}.\label{effhamA} \end{eqnarray} A comment is in order here. Because we have fixed the temporal gauge $A_{0}=0$, the system described by (\ref{effham0}) and (\ref{effhamA}) is not invariant under the entire gauge group, but it is still invariant under time-independent gauge transformations. This is a standard situation when the Hamiltonian version of lattice gauge theories is considered\cite{KS75}, and it will not be hard to find gauge invariant expressions when calculating the effective electromagnetic action. If we now define the fermion field $\psi(x)=(s(x),p_{-1}(x))^{T}$ and the adjoint field $\bar{\psi}=(s^{+}(x),p^{+}_{-1}(x))\gamma^{0}$, with $\gamma^{0}=\tau_{3}$, the effective low energy fermionic action reads: \begin{eqnarray} \nonumber\mathcal{S}_{f}&=&\int d^{3}x -i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi- m\bar{\psi}\psi-\\ &-&q\bar{\psi}\gamma^{\mu}\psi(A_{\mu}+\zeta^{\nu}F_{\nu\mu}).\label{effaction} \end{eqnarray} Above we have defined $\gamma^{1}=\tau_{3}\tau_{1}=i\tau_{2}$ and $\gamma^{2}=-\tau_{3}\tau_{2}=i\tau_{1}$. We have written an entirely gauge invariant fermionic action by defining a constitutive constant vector, $\zeta^{\nu}=(d/v,0,0)$ in our specific problem. Note that the last term in the action (\ref{effaction}) has the form of a nonminimal coupling between the fermionic current and the gauge field. Nonminimal couplings to the electromagnetic field are not so rare in Condensed Matter Physics.
If an external magnetic field is applied to a spinful system, a Zeeman term of the form $H_{Z}=g\mu_{B}B_{a}\sum_{\mathbf{k},\alpha\alpha'}c^{+}_{\mathbf{k}\alpha}s^{a}_{\alpha\alpha'}c_{\mathbf{k},\alpha'}$ with $B_{a}=\epsilon_{abc}\partial_{b}A_{c}$ would need to be added to the Hamiltonian; similarly, a term analogous to (\ref{dipole3}) but coupling states with $\Delta m_{J}=0$, $\Delta J=\pm 1$ would appear if an electric field were applied perpendicularly to the sample, leading, together with the SO coupling, to the Rashba term in the tight-binding Hamiltonian\cite{KM05,TMS13}. The crucial difference with other nonminimal couplings is that the fourth term in (\ref{effaction}) is generated by the external electric field, and it will induce extra terms in the linear response regime. \section{Modified QAH response} Let us calculate the induced electronic current by computing the odd part of the effective field theory for the electromagnetic field. The effective action takes the form of a Chern-Simons action modified by the nonminimal term\cite{R84} \begin{eqnarray} \nonumber \Gamma_{eff}&=&\int d^{3}x\sigma_{xy}\epsilon^{\mu\rho\nu}(A_{\mu}+\zeta^{\lambda}F_{\lambda\mu})\partial_{\rho} (A_{\nu}+\zeta^{\sigma}F_{\sigma\nu})-\\ &-&J^{\mu}_{e}A_{\mu}.\label{effactionA} \end{eqnarray} In the continuum model $\sigma_{xy}=\frac{q^{2}}{4\pi}sign(m)$, while if we used the full lattice model $\sigma_{xy}=\frac{q^{2}}{2\pi}sign(m)$. From the effective action (\ref{effactionA}) we can easily read off the induced electronic current: \begin{eqnarray} \langle J^{\mu}_{e}\rangle\equiv\frac{\delta\Gamma_{eff}}{\delta A_{\mu}}=\sigma_{xy}\epsilon^{\mu\rho\nu}\partial_{\rho}A_{\nu}+\sigma_{xy}\epsilon^{\mu\rho\nu}\zeta^{\sigma}\partial_{\rho}F_{\sigma\nu}.\label{inducedcurrent} \end{eqnarray} Note that the induced current (\ref{inducedcurrent}) is gauge invariant.
The significance of the second term in the induced charge current (\ref{inducedcurrent}) is most apparent if we write it in components ($\mathbf{E}(t)$ pointing along the $y$ direction): \begin{equation} \langle J^{x}_{e}(t)\rangle=\sigma_{xy}E_{y}(t)+\sigma_{xy}\zeta^{0}\dot{E}_{y}(t).\label{currentcomponents} \end{equation} The Hall charge response of the system is now not just a term proportional to the applied electric field; it acquires an extra contribution proportional to the time derivative of the electric field. This second term, although coming from the odd part of the polarization tensor and being proportional to $\sigma_{xy}$, is not universal because it is proportional to $\zeta^{0}$. This parameter, with units of time, turns out to be odd under time reversal symmetry. Another important observation is that if the electric field $\mathbf{E}$ is time independent, this second term vanishes, so it will only be observable under AC fields. It is important to stress that the new terms in (\ref{currentcomponents}) do not renormalize the odd part of the polarization tensor which is the origin of (\ref{effactionA}). \section{Modified QSH response} Let us now see how the previous results get modified when we restore time reversal symmetry in the system by adding the other angular momentum projection in the BHZ model. For this projection the states to be considered are $s_{i}$ and $p_{i,1}$. In the basis formed by these two states the low energy momentum space Hamiltonian reads: \begin{equation} H_{\downarrow}(\mathbf{k})=-vk_{x}\tau_{1}-vk_{y}\tau_{2}+m\tau_{3}\equiv H^{*}_{\uparrow}(-\mathbf{k}),\label{spindownham} \end{equation} where $H_{\uparrow}(\mathbf{k})$ is defined in (\ref{effham0}).
Following the same steps leading to the Hamiltonian (\ref{dipole3}), it is not difficult to show that for this projection the dipole interaction takes the form \begin{equation} H^{\downarrow}_{d}=-q\frac{d}{c}(\dot{A}_{x}\tau_{1}+\dot{A}_{y}\tau_{2}),\label{dipoledown} \end{equation} which is nothing but the Hamiltonian (\ref{dipole3}) after applying the time reversal operation: $H^{\downarrow}_{d}=\mathcal{T}H^{\uparrow}_{d}\mathcal{T}^{-1}$. Let us write down the total low energy Hamiltonian for both projections: \begin{equation} H(\mathbf{k})=v s_{3}\tau_{1}k_{x}-vs_{0}\tau_{2}k_{y}+s_{0}\tau_{3}m.\label{totalham0} \end{equation} The Pauli matrices $(s_{0},\mathbf{s})$ stand for the two projections of $m_{J}$. If we want to go to a Lagrangian description of the problem, we define as before an adjoint spinor $\bar{\psi}=\psi^{+}\gamma^{0}$, with $\gamma^{0}=s_{0}\tau_{3}$, and a set of $\gamma$ matrices as $\gamma^{1}=i s_{3}\tau_{2}$ and $\gamma^{2}=i s_{0}\tau_{1}$. Because we now have more than a single species of fermion, we can define $\gamma_{5}=-i\gamma^{0}\gamma^{1}\gamma^{2}=s_{3}\tau_{0}$. With this choice of matrices, the two $m_{J}$ projections correspond to the two chiralities in the effective Hamiltonian. It is illuminating to see how the Hamiltonians (\ref{dipole3}) and (\ref{dipoledown}) read in terms of this set of matrices: \begin{equation} H_{d}=-q\frac{d}{c}\gamma_{5}\gamma^{i}\dot{A}_{i}.\label{dipoletotal} \end{equation} It means that the electronic states couple both minimally and nonminimally to the gauge field $A_{\mu}$, and the latter is a chiral coupling.
The fermionic action (\ref{effaction}) is modified accordingly to take into account both chiral species and the chiral coupling with $F_{\mu\nu}$: \begin{eqnarray} \nonumber\mathcal{S}_{f}&=&\int d^{3}x -i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi- m\bar{\psi}\psi-\\ &-&q\bar{\psi}\gamma^{\mu}(A_{\mu}+\gamma_{5}\zeta^{\nu}F_{\nu\mu})\psi.\label{effaction2} \end{eqnarray} In this case we have a different situation than the one described above for the spin polarized model. Now, due to the requirement that the theory be time reversal invariant, electrons of different spin couple to the electric field through the dipole term with opposite signs, so, as happens in the standard QSHE, a spin current will be generated: \begin{eqnarray} \langle J^{\mu}_{s}\rangle\equiv\langle J^{\mu}_{\uparrow}-J^{\mu}_{\downarrow}\rangle=2\sigma_{xy}\epsilon^{\mu\rho\nu}\partial_{\rho}A_{\nu},\label{spincurrent} \end{eqnarray} but now we will also get a nonzero charge response coming from the dipole term: \begin{eqnarray} \langle J^{\mu}_{e}\rangle\equiv\langle J^{\mu}_{\uparrow}+J^{\mu}_{\downarrow}\rangle=2\sigma_{xy}\epsilon^{\mu\rho\nu}\zeta^{\sigma}\partial_{\rho}F_{\sigma\nu}.\label{inducedcurrent2} \end{eqnarray} The appearance of a charge response in the time reversal invariant system is actually not surprising. The important observation is that the two species of fermions couple oppositely to the electric field through the dipole term. This sign difference conspires with the opposite sign of the Berry phase of the two species, leading to a non-vanishing value of $\langle J^{\mu}_{e}\rangle$. In this case, the role of the chiral field is played by $V_{\nu}=\zeta^{\sigma}F_{\sigma\nu}$\cite{CGV10,VAA13}. The expressions (\ref{currentcomponents}) and (\ref{spincurrent}-\ref{inducedcurrent2}) are the most important results of this work, as they are the fingerprint of the anomalous modifications in the QAH and QSH phases.
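The structure of the anomalous term in the charge response can be made concrete for a monochromatic in-plane field. The following sketch (not part of the derivation; $\sigma_{xy}$, $\zeta^{0}$, $E_{0}$, and $\omega$ are arbitrary dimensionless placeholders) evaluates $J^{x}=\sigma_{xy}(E_{y}+\zeta^{0}\dot{E}_{y})$ for $E_{y}(t)=E_{0}\cos(\omega t)$ and confirms that the dipole term rescales the Hall amplitude by $\sqrt{1+(\omega\zeta^{0})^{2}}$ and introduces a phase lead $\arctan(\omega\zeta^{0})$:

```python
import numpy as np

# Illustrative sketch: response J^x = sigma_xy * (E_y + zeta0 * dE_y/dt)
# to a monochromatic field E_y(t) = E0 * cos(w t).  All values are
# dimensionless placeholders chosen only to expose the structure.
sigma_xy, zeta0, E0, w = 1.0, 0.3, 1.0, 2.0

t = np.linspace(0.0, 2.0 * np.pi / w, 20001)  # one full period
E_y = E0 * np.cos(w * t)
dE_y = -E0 * w * np.sin(w * t)
J_x = sigma_xy * (E_y + zeta0 * dE_y)

# The dipole term rescales the Hall amplitude and shifts the phase:
amp_numeric = J_x.max()
amp_analytic = sigma_xy * E0 * np.sqrt(1.0 + (w * zeta0) ** 2)
phase_lead = np.arctan(w * zeta0)  # phase of J^x relative to E_y
print(amp_numeric, amp_analytic, phase_lead)
```

Since the anomalous term vanishes for a static field, this amplitude and phase modification is the signature one would look for under AC driving.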
However, the expressions (\ref{currentcomponents}) and (\ref{spincurrent}-\ref{inducedcurrent2}) are currently not directly measurable. Experimental tests must rely on transport measurements using the boundary states. \section{Modified edge states} Following the Callan-Harvey effect\cite{CH85}, let us consider a finite system $\Omega$ with a boundary $\partial\Omega$ described by the action (\ref{effactionA}) and apply a gauge transformation on the electromagnetic field, $A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\Lambda(x)$. Note that under gauge transformations $F_{\lambda\sigma}$ remains invariant. The variation of the effective action (\ref{effactionA}) under the previous transformation can be written as \begin{eqnarray} \delta_{\Lambda}\Gamma_{eff}=\sigma_{xy}\int_{\partial\Omega}d^{2}\Sigma_{\mu}\Lambda(x)\epsilon^{\mu\rho\nu}\partial_{\rho}(A_{\nu}+\zeta^{\sigma}F_{\sigma\nu}),\label{gaugevariation} \end{eqnarray} where $d^{2}\Sigma_{\mu}$ stands for the differential surface element pointing perpendicular to the surface defined by $\partial\Omega$. Without loss of generality we can choose this surface to be defined by the coordinates $(t,x)$, so $d^{2}\Sigma_{\mu}$ points along the $y$ direction. As usual, a non-vanishing variation (\ref{gaugevariation}) means that the system is not gauge invariant when confined to a finite geometry. This lack of gauge invariance must be compensated by another element in the system. In the standard case of a pure CS term, a one-dimensional chiral massless fermion appears at the boundary to restore gauge invariance (the so-called chiral massless Schwinger model). Let us see how this chiral fermion changes to deal with the second term in (\ref{gaugevariation}).
Let us write down a modified version of the fermionic action for the chiral Schwinger model\cite{H73,JR85}: \begin{eqnarray} \nonumber \mathcal{S}_{1+1}&=&\int d^{2}x -i\bar{\psi}\hat{\gamma}^{\mu}\partial_{\mu}\psi-\\ &-&q\bar{\psi}\hat{\gamma}^{\mu}P_{L}\psi(A_{\mu}+\zeta^{\nu}F_{\nu\mu}),\label{actionCSW} \end{eqnarray} with $\hat{\gamma}^{0}=\tau_{3}$, $\hat{\gamma}^{1}=i\tau_{1}$, $\hat{\gamma}_{5}=\tau_{2}$ and $P_{L}=\frac{1}{2}(1+\hat{\gamma}_{5})$. The presence of the projector $P_{L}$ means that only the left-handed fermionic mode is coupled to $A_{\mu}$ and $\zeta^{\nu}F_{\nu\mu}$. Actually, whether $A_{\mu}$ couples to the right- or left-handed fermion depends on the sign of $m$. If we integrate out the fermions and apply the previous gauge transformation, the variation is nonzero (meaning that the theory is not gauge invariant) and takes the form \begin{eqnarray} \delta_{\Lambda}\Gamma_{1+1}=-\frac{q^{2}}{4\pi}\int d^{2}x \Lambda(x)\epsilon^{\alpha\beta}\partial_{\alpha}(A_{\beta}+\zeta^{\sigma}F_{\sigma\beta}),\label{gaugevariation2} \end{eqnarray} which exactly cancels (\ref{gaugevariation}). The conclusion is clear: the edge mode in our system consists of a chiral massless Dirac fermion coupled both minimally and nonminimally to the external electromagnetic field. When time reversal symmetry is present, the one-dimensional metal is not described by the chiral Schwinger model (\ref{actionCSW}) but by the Schwinger model with a nonminimal chiral coupling term: \begin{eqnarray} \nonumber\mathcal{S}_{HL}&=&\int d^{2}x -i\bar{\psi}\hat{\gamma}^{\mu}\partial_{\mu}\psi-\\ &-& q\bar{\psi}\hat{\gamma}^{\mu}(A_{\mu}+\hat{\gamma}_{5}\zeta^{\nu}F_{\nu\mu})\psi.\label{actionHSM} \end{eqnarray} Recent measurements probe the spin polarization of currents in the QSHE\cite{BRB12,NSB13}.
We suggest using similar techniques, but allowing for time-varying voltages, to explore the consequences of the modifications of (\ref{actionCSW}) and (\ref{actionHSM})\footnote{The effect of anomalous terms in the Hall response can be tested by measuring the AC conductance $G(\omega)$ of the edge states. The precise computation of $G(\omega)$ will be presented in a separate publication.}. \section{Summary} In the present work we have found how the Hall responses of the QAHE and the QSHE are modified when we take into account the intra-atomic dipole matrix elements between the states close to the Fermi level. These modifications, although non-universal, are proportional to the Hall conductance $\sigma_{xy}$. We have also found how the edge states acquire an extra nonminimal coupling term with the electromagnetic field. This nonminimal term is different for the QAHE and the QSHE. These results can be tested by electrical conductance and tunnelling conductance measurements. The results presented here pave the way for the search of new phenomena in the physics of two-dimensional topological insulators. \section{Acknowledgements} The author acknowledges discussions with M. Sturla about the subject, and with H. Ochoa and B. Amorim during the early stages of this project. The author acknowledges the JAE-doc program and the Spanish MEC, through Grants No. FIS2011-23713 and No. PIB2010BZ-00512, for financial support.
\section{Introduction} \label{sec:intro} It has been more than a century since astrophysical jets were first observed \citep{Curtis1918PLicO..13....9C}. Astrophysical jets are collimated relativistic magnetized plasma outflows launched from compact accreting objects. They are found in stellar-mass black holes (BHs; \cite{Fender2004MNRAS.355.1105F}) and supermassive BHs \citep{Blandford2019ARA&A..57..467B}. The powers of some relativistic jets can exceed the Eddington limit of their BHs, requiring highly efficient energy conversion from accretion to outflows. However, the formation mechanism of powerful relativistic jets is still an open question. Theoretically, the Blandford-Znajek (BZ) mechanism \citep{Blandford1977MNRAS.179..433B} is believed to be the most plausible explanation for jet launching. In the BZ mechanism, the jet power is extracted from the rotation of the BH with the support of the magnetic fields threading the central BH. General relativistic magnetohydrodynamic (GRMHD) simulations confirm this process as a plausible and efficient jet power extraction mechanism (see, e.g., \cite{Komissarov2007MNRAS.380...51K, Tchekhovskoy2010ApJ...711...50T, Tchekhovskoy2011MNRAS.418L..79T, McKinney2012MNRAS.423.3083M, Takahashi2016ApJ...826...23T, Avara2016MNRAS.462..636A, Liska2022ApJ...935L...1L}). The BZ mechanism has two key parameters: the spin parameter of the BH and the large-scale magnetic field threading the BH. Numerical simulations suggest that strong magnetic fields are required to realize the observed powerful jets (e.g., \cite{McKinney2012MNRAS.423.3083M}). Such large magnetic flux accumulation is expected in the magnetically arrested disk (MAD) scenario (\cite{Narayan2003PASJ...55L..69N, Igumenshchev2003ApJ...592.1042I}, see also \cite{Bisnovatyi-Kogan1974Ap&SS..28...45B, Bisnovatyi-Kogan1976Ap&SS..42..401B}), where the magnetic field dominates the dynamics of the inner disk.
Although the Event Horizon Telescope (EHT) has resolved the inner accretion disks of M~87 and Sgr~A* with unprecedented spatial resolution, it is still not conclusive whether their accretion flows are in the MAD state or governed by other processes \citep{EHT2021ApJ...910L..13E, EHT2022ApJ...930L..16E, Blandford2022MNRAS.514.5141B}. Therefore, observational evidence of MAD in accretion systems is still lacking. Here, the presence of a magnetic field induces spectral line splitting through the Zeeman effect, owing to the interaction of the magnetic dipole moment of an electron with the magnetic field. The Zeeman effect has been applied to measure the magnetic fields of various astrophysical systems such as sunspots \citep{Hale1908ApJ....28..315H}, active stars \citep{Donati1997MNRAS.291..658D}, molecular clouds \citep{Nakamura2019PASJ...71..117N}, and the outer maser disk of an AGN \citep{Modjaz2005ApJ...626..104M}. The detectability of the Zeeman effect in X-raying accreting neutron stars has also been discussed in the literature \citep{Sarazin1977ApJ...216L..67S, Loeb2003PhRvL..91g1103L}. BH-powered X-ray binaries (XRBs) and active galactic nuclei (AGNs) ubiquitously show the Fe~\Ka fluorescence line in their X-ray spectra. Broadened Fe~\Ka lines imply that the cold reflecting medium is located near the BH, and they have been used for BH spin measurements \citep{Reynolds2021ARA&A..59..117R}. Hot plasmas, namely coronae, must also exist in the vicinity of BHs to reproduce the X-ray continuum spectra of accreting BHs. The geometrical configurations of hot coronae and cold reflectors have been debated in the literature (see e.g., \cite{Done2007A&ARv..15....1D, Meyer-Hofmeister2011A&A...527A.127M}).
Recently, by performing two-temperature GR-radiation-MHD simulations, \citet{Liska2022ApJ...935L...1L} demonstrated that a geometrically thin accretion disk transitions into a two-phase medium of cold gas clumps and a hot, magnetically dominated corona when the thin disk is threaded by large-scale poloidal magnetic fields. This numerical result can naturally explain the coexistence of broadened Fe~\Ka lines and hard X-ray continuum emission in XRBs and AGNs. In this Letter, we consider the Zeeman effect on the Fe~\Ka lines of XRBs and AGNs in the MAD state, assuming the two-phase medium in the inner accretion disk. Future X-ray satellite missions such as the X-Ray Imaging and Spectroscopy Mission ({\it XRISM}; \cite{XRISM2020SPIE11444E..22T}) and the Advanced Telescope for High ENergy Astrophysics ({\it Athena}; \cite{Athena2013arXiv1306.2307N}) will carry X-ray microcalorimeters with an energy resolution down to several~eV at 6~keV. We also discuss whether future X-ray missions can probe the MAD state via the Zeeman effect. \section{Zeeman Effect on MAD} The magnetic flux $\Phi_\mathrm{BH}$ threading a BH is described as \begin{equation} \Phi_\mathrm{BH}\equiv \phi_\mathrm{BH}(\dot{M}_\mathrm{BH}R_g^2c)^{1/2}, \end{equation} where $\phi_\mathrm{BH}$ is the dimensionless magnetic flux, $\dot{M}_\mathrm{BH}$ is the accretion rate onto the BH, and $R_g=GM_\mathrm{BH}/c^2$ is the gravitational radius. $M_\mathrm{BH}$ is the BH mass, $G$ is the gravitational constant, and $c$ is the speed of light. $\Phi_\mathrm{BH}$ in accretion flows is always nonzero since the magnetic flux is transported inward via accretion. $\phi_{\rm BH}$ is typically 20--50, depending on the accretion rate, based on GRMHD simulations (e.g., \cite{McKinney2012MNRAS.423.3083M, Avara2016MNRAS.462..636A, Liska2022ApJ...935L...1L}).
We set $\phi_{\rm BH}=30$ as a fiducial value, based on the recent GR-radiation-MHD simulation accounting for both a hot corona and a cold medium at $\sim35$\% of the Eddington luminosity \citep{Liska2022ApJ...935L...1L}. The MAD magnetic field at a distance $R\equiv r R_g $ from the BH is, assuming $BR^p=\mathrm{const.}$, \begin{eqnarray} B(R)&=&\frac{B(R_g)R_g^p}{R^p}\\ &=&\frac{\Phi_\mathrm{BH}}{\pi R_g^2 r^p}\\ &\simeq& 2.3\times10^9 \mathrm {G} \biggl(\frac{\phi}{30}\biggr)\biggl(\frac{m}{10}\biggr)^{-1/2} \biggl(\frac{\dot{m}}{0.3}\biggr)^{1/2}\biggl(\frac{r}{1}\biggr)^{-p}, \end{eqnarray} where $m\equiv M_{\mathrm BH}/M_\odot$ and $\dot{m}\equiv \dot{M}_\mathrm{BH}/\dot{M}_\mathrm{Edd}$ with a 10\% radiative efficiency. $\dot{M}_\mathrm{Edd}$ is the Eddington accretion rate for the mass $M_\mathrm{BH}$. The energy separation by the Zeeman effect is given by \begin{equation} \Delta E_\mathrm{split} \approx \frac{e\hbar}{2m_ec} \left(M_L + 2M_S\right)B\simeq 11.6~\mathrm{eV}~ \biggl(\frac{B}{10^9~\mathrm{G}}\biggr) \label{eq:zeeman} \end{equation} where $M_L$ and $M_S$ are the quantum numbers of the orbital angular momentum and the spin angular momentum. We consider the split transition lines associated with changes of $\Delta M_L=\pm1$ and $\Delta M_S=0$. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{fig.pdf} \end{center} \caption{Fe~\Ka line splitting by the Zeeman Effect for various masses and accretion rates as indicated in the figure. We set $\phi_\mathrm{BH}=30$, $p=2$, and $r=2$. The horizontal dashed and dotted lines represent the energy resolutions of {\it XRISM} and {\it Athena}, respectively. \label{fig:Zeeman}} \end{figure} In the accreting system, Eq.~\ref{eq:zeeman} can be rewritten as \begin{equation} \Delta E_\mathrm{split} \simeq 27~\mathrm{eV}~\biggl(\frac{\phi}{30}\biggr)\biggl(\frac{m}{10}\biggr)^{-1/2} \biggl(\frac{\dot{m}}{0.3}\biggr)^{1/2}\biggl(\frac{r}{1}\biggr)^{-p}.
\label{eq:split} \end{equation} Therefore, we can expect Fe~\Ka line splitting at the level of several tens of eV for XRBs and of order $10^{-3}$~eV for AGNs with $\phi_\mathrm{BH}=30$, $\dot{m}=0.3$, and $p=2$. The energy resolution of current X-ray CCDs at the Fe~\Ka line is $\sim120$~eV \citep{Ezoe2021RvMPP...5....4E}, larger than the typically expected split. With this resolution, we may only infer $B<10^{10}$~G for any available observations. Next-generation X-ray telescopes such as {\it XRISM} and {\it Athena} will carry X-ray micro-calorimeter instruments. The planned energy resolution at 6~keV is 7.0~eV for {\it XRISM} and 2.5~eV for {\it Athena}. Figure~\ref{fig:Zeeman} shows the expected Fe~\Ka line split for various BH masses and accretion rates with $\phi_\mathrm{BH}=30$, $p=2$, and $r=2$, together with the energy resolutions of {\it XRISM} and {\it Athena}. Next-generation X-ray telescopes will thus enable us to see the Zeeman line splitting in XRBs in the presence of MAD. About 20 XRBs have dynamically confirmed BHs \citep{Corral-Santana2016A&A...587A..61C}. Among them, low-mass black holes such as GX~339-4 and GRO~J1655-40, whose BH masses are $5.8\pm0.5~M_\odot$ \citep{Hynes2003ApJ...583L..95H} and $5.4\pm0.3~M_\odot$ \citep{Beer2002MNRAS.331..351B}, respectively, would be possible candidates for seeing the Zeeman effect on MAD. Another candidate is the persistent X-ray binary Cyg X-1, which has a BH mass of $14.8\pm1.0~M_\odot$ \citep{Orosz2011ApJ...742...84O}\footnote{A recent radio astrometric observation suggests a mass of $m=21.2\pm2.2$ \citep{Miller-Jones2021Sci...371.1046M}.}. A broad iron \Ka line has also been reported for Cyg X-1 \citep{Duro2011A&A...533L...3D, Duro2016A&A...589A..14D}, with an inner disk edge of $\sim1.6R_g$ \citep{Fabian2012MNRAS.424..217F}. Although the accretion rate depends on the state, it is about 1\% even in the low/hard state \citep{Yamada2013PASJ...65...80Y}.
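The fiducial estimates above can be cross-checked with a short order-of-magnitude script (a sketch in CGS units; the Eddington luminosity $L_\mathrm{Edd}=4\pi GM m_p c/\sigma_T$ and the 10\% radiative efficiency follow the assumptions in the text, and the function names are ours):

```python
import math

# CGS constants
G, c = 6.674e-8, 2.998e10
M_sun, m_p, sigma_T = 1.989e33, 1.673e-24, 6.652e-25
mu_B, erg_per_eV = 9.274e-21, 1.602e-12  # Bohr magneton (erg/G), eV in erg

def B_mad(m, mdot, phi=30.0, r=1.0, p=2.0):
    """MAD field B(R) in gauss: B = Phi_BH / (pi R_g^2 r^p) with
    Phi_BH = phi * sqrt(Mdot R_g^2 c) and a 10% radiative efficiency."""
    M = m * M_sun
    R_g = G * M / c**2
    L_edd = 4.0 * math.pi * G * M * m_p * c / sigma_T
    Mdot = mdot * L_edd / (0.1 * c**2)
    return phi * math.sqrt(Mdot * c) / (math.pi * R_g * r**p)

def dE_split_eV(m, mdot, phi=30.0, r=1.0, p=2.0):
    """Zeeman splitting in eV, taking M_L + 2M_S = 2 as in the 11.6 eV
    per 10^9 G estimate of the text."""
    return 2.0 * mu_B * B_mad(m, mdot, phi, r, p) / erg_per_eV

print(B_mad(10, 0.3))        # ~2.3e9 G for the fiducial case
print(dE_split_eV(10, 0.3))  # ~27 eV
# Cyg X-1: m = 14.8, reflector near r ~ 1.6
print(dE_split_eV(14.8, 0.03, r=1.6), dE_split_eV(14.8, 0.3, r=1.6))
```

The last line reproduces the Cyg X-1 values of a few eV quoted for $\dot{m}=0.03$ and $0.3$.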
The expected $\Delta E_\mathrm{split}$ is $2.7$ and $8.5$~eV for $\dot{m}=0.03$ and $0.3$, respectively. Therefore, {\it XRISM} and {\it Athena} would see the Zeeman effect on MAD even for a relatively massive stellar BH such as Cyg~X-1. \section{Discussions} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{zeeman_line.pdf} \end{center} \caption{Simulated broad Fe~\Ka line spectrum accounting for the Zeeman Effect based on the {\tt kerrdisk} model. Detailed parameters are described in the text. We set $\Delta E_\mathrm{split}=24$~eV. The inset in the figure shows an enlarged view of the $6.50$ to $6.56$ keV region to clarify the split feature. \label{fig:FeLine}} \end{figure} The Fe~\Ka lines appear broadened and skewed by the Doppler effect and gravitational redshift \citep{Fabian1989MNRAS.238..729F, Laor1991ApJ...376...90L}. These relativistic effects would blur the Zeeman-split lines. Figure~\ref{fig:FeLine} shows a simulated Fe~\Ka line in the MAD state. We apply the {\tt kerrdisk} code \citep{Brenneman2006ApJ...652.1028B} with emissivity indices $\alpha_1=\alpha_2=0$, inclination $i=30^\circ$, dimensionless spin parameter $a=0.9$, inner and outer radii $r_\mathrm{min}=r_\mathrm{ms}$ and $r_\mathrm{max}=100~r_\mathrm{ms}$, and redshift $z=0$, where $r_\mathrm{ms}$ is the marginally stable orbit radius. We include the three lines split by the Zeeman effect assuming $\Delta E_\mathrm{split}=24~\mathrm{eV}$, corresponding to $B\simeq2\times10^{9}~\mathrm{G}$; we also set $p=0$. As the figure shows, the three split lines would be distinguishable for $\Delta E_\mathrm{split}=24$~eV. Doppler broadening due to turbulent motions of the accreting flow also exists. The turbulence speed is characterized by the sound speed. The broadening by turbulence is expected to be at the level of $\sim10$~eV at a temperature of $\sim10^7$~K, which can be comparable to the expected Zeeman split (Eq.~\ref{eq:split}).
The three split lines would be more broadened by turbulence than shown in Figure~\ref{fig:FeLine}. In addition, the effect of the continuum is not included in Figure~\ref{fig:FeLine}, and the energy split would depend on radius if $p\neq0$. Further detailed spectral simulations, including the instrument response functions of {\it XRISM} and {\it Athena}, will be needed. Broad Fe~\Ka lines are commonly reported in various XRBs and AGNs \citep{Reynolds2021ARA&A..59..117R}. However, line broadening is known to depend on the spectral modeling (see, e.g., \cite{Done1999MNRAS.305..457D, Makishima2008PASJ...60..585M}), and disk winds would further disturb the ionization states (see e.g., \cite{Tomaru2019MNRAS.490.3098T}). If the reflecting iron atoms are located further away from the BH, the expected line split drops as $r^{-p}$. Determination of the location of the reflecting medium is therefore necessary to probe the MAD state through X-ray Zeeman effect measurements. In other words, the Zeeman effect on the Fe~\Ka line is realized only in the MAD state with a reflector near the BH; if we do not see the Zeeman effect even with sufficient energy resolution, it will imply that MAD is absent or the reflector is distant. With a strong magnetic field like that of MAD, we would also have the quadratic Zeeman effect, which produces displacements of lines toward shorter wavelengths \citep{Jenkins1939PhRv...55...52J, Schiff1939PhRv...55...59S, Preston1970ApJ...160L.143P, Loeb2003PhRvL..91g1103L}. The energy shift is given as \begin{eqnarray} &&\Delta E_\mathrm{shift} = \frac{e^2a_0^2}{8Z^2m_ec^2}n^4(1+M_L^2)B^2\\ &&\simeq 9.6\times10^{-3}~\mathrm{eV}~\biggl(\frac{\phi}{30}\biggr)^2\biggl(\frac{m}{10}\biggr)^{-1} \biggl(\frac{\dot{m}}{0.3}\biggr)\biggl(\frac{r}{1}\biggr)^{-2p} \label{eq:shift}, \end{eqnarray} where $n$ is the principal quantum number, $a_0$ is the Bohr radius, and $Z$ is the nuclear charge. We set $n=1$, $M_L=1$ and $Z=26$ in Eq.~\ref{eq:shift}.
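The smallness of this quadratic shift can be cross-checked by a direct CGS evaluation of the formula above at the fiducial field $B\simeq2.3\times10^{9}$~G (a sketch with $n=1$, $M_L=1$, $Z=26$, as in the text):

```python
import math

# CGS constants
e, a0 = 4.803e-10, 5.292e-9           # electron charge (esu), Bohr radius (cm)
me_c2, erg_per_eV = 8.187e-7, 1.602e-12  # electron rest energy (erg), eV in erg
B = 2.3e9                              # fiducial MAD field at r = 1, gauss
n, M_L, Z = 1, 1, 26                   # values adopted in the text

# Quadratic Zeeman shift: (e^2 a0^2 / 8 Z^2 m_e c^2) n^4 (1 + M_L^2) B^2
dE_shift = (e**2 * a0**2 / (8.0 * Z**2 * me_c2)) * n**4 * (1 + M_L**2) * B**2
print(dE_shift / erg_per_eV)           # ~1e-2 eV, far below the linear splitting
```

This reproduces the $\simeq9.6\times10^{-3}$~eV estimate.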
Thus, several orders of magnitude better energy resolution would be necessary to see the quadratic energy shift. \section{Summary} In this Letter, we consider the Zeeman effect in the MAD state. MAD is expected to be associated with the jet production \citep{Tchekhovskoy2010ApJ...711...50T, Tchekhovskoy2011MNRAS.418L..79T, McKinney2012MNRAS.423.3083M}. In black hole accretion systems, broad Fe~\Ka fluorescence lines have often been reported. A strong magnetic field environment by MAD would induce line splitting of the Fe~\Ka line by the Zeeman effect (Eq.~\ref{eq:split}). Next-generation X-ray telescopes such as {\it XRISM} and {\it Athena} will be able to see the Zeeman splitting of Fe lines in XRBs, if reflectors exist near BHs. The detection of the Zeeman effect would be clear evidence of the MAD in BH accretion systems. If the Zeeman effect does not appear even with sufficient energy resolution, it would imply that MAD is absent or the iron reflector is distant. \begin{ack} We would like to thank the anonymous referee for thoughtful and helpful comments. We would also like to thank Roger Blandford, Chris Done, Norita Kawanaka, Katsunori Kusakabe, Shin Mineshige, Hirokazu Odaka, and Shinsuke Takasao for useful discussions and comments. Y.I. is supported by JSPS KAKENHI Grant Numbers JP18H05458, JP19K14772, and JP22K18277. This work was supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. \end{ack}
\section{Introduction} Elliptic fibrations are a versatile tool for studying algebraic surfaces. One of their key advantages is that one can often compute the N\'eron-Severi\ lattice, and in particular the Picard number, in a systematic way. This has been carried out with great success in the study of K3 surfaces. There is one feature that singles out K3 surfaces among all algebraic surfaces admitting elliptic fibrations: a single K3 surface may admit several distinct elliptic fibrations. Several previous papers classify all jacobian elliptic fibrations on a given class of K3 surfaces (i.e.~elliptic fibrations with section). Oguiso determined all jacobian elliptic fibrations of a Kummer surface of two non-isogenous elliptic curves~\cite{Oguiso}. This classification was achieved by geometric means. Subsequently Nishiyama proved Oguiso's result again by a lattice theoretic technique~\cite{Nishi}. Equations and elliptic parameters were derived by Kuwata and Shioda \cite{KS}. Nishiyama also considered other Kummer surfaces of product type and certain singular K3 surfaces. Kumar recently determined all elliptic fibrations on the Kummer surface of the Jacobian of a generic curve of genus~$2$~\cite{Kumar}. All these classifications are \emph{a priori} only valid in characteristic zero. In this paper we present a classification that is specific to positive characteristic and does not miss any non-jacobian fibrations. Namely we consider the supersingular K3 surface $X$ in characteristic 2 with Artin invariant 1. In this setting we must deal with quasi-elliptic fibrations whose general fiber is a cuspidal rational curve. As a uniform notation, we shall refer to either an elliptic or a quasi-elliptic fibration as a \emph{genus~1 fibration}. \begin{Theorem} \label{thm} Let $X$ denote the supersingular K3 surface with Artin invariant~1 over an algebraically closed field of characteristic~2. Then $X$ admits exactly 18 genus~1 fibrations. 
\end{Theorem} A crucial ingredient of our main result is Theorem \ref{thm:jac} stating that any genus 1 fibration on $X$ admits a section. The classification of all possible fibrations is then achieved in Section \ref{s:class} by lattice theoretic means \emph{\`{a} la} Kneser-Nishiyama (cf.~Section \ref{s:Nishi}). We also determine whether the fibrations are elliptic or quasi-elliptic using a criterion developed in Section \ref{s:e-qe} (Theorem \ref{thm:e-qe}). The existence of these fibrations on~$X$ is established by exhibiting an explicit Weierstrass form over the prime field $\mathbb{F}_2$ for each of them. We shall furthermore connect all fibrations by explicit isomorphisms over $\mathbb{F}_4$ (usually even over $\mathbb{F}_2$, but we shall see that this is not always possible). Equations and isomorphisms are given in Section \ref{s:eq}. The uniqueness part of Theorem \ref{thm} is proven in Section \ref{s:unique} by working with explicit Weierstrass equations. Section \ref{s:config} shows that some specific $(-2)$-curves on $X$ generate the incidence graph of points and lines in $\mathbb{P}^2(\mathbb{F}_4)$. We derive some surprising consequences for configurations in $\mathbb{P}^2(\mathbb{F}_4)$ such as the absence of a $14$-cycle. The paper concludes with comments on the implications of our classification for reduction from characteristic zero. \section{The supersingular K3 surface in characteristic 2 with Artin invariant 1} \label{s:ss} On an algebraic surface $S$, we consider the N\'eron-Severi\ group $\mathop{\rm NS}\nolimits(S)$ consisting of divisors up to algebraic equivalence. The N\'eron-Severi\ group is finitely generated and abelian; its rank is called the Picard number of $S$ and denoted by $\rho(S)$. The intersection form endows $\mathop{\rm NS}\nolimits(S)$ with the structure of an integral lattice up to torsion. By the Hodge index theorem, this lattice has signature $(1,\rho(S)-1)$. 
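As a numerical sanity check of the lattice data used throughout, the Gram matrix of $U\oplus D_{20}$ (which will appear below as $\mathop{\rm NS}\nolimits(X)$, with $D_{20}$ taken negative definite) indeed has signature $(1,21)$ and determinant $-4=-2^2$; a sketch:

```python
import numpy as np

# Gram matrix of the hyperbolic plane U
U = np.array([[0, 1],
              [1, 0]])

def D(n):
    """Negative-definite Gram matrix of the root lattice D_n
    (roots of square -2; standard Dynkin diagram with a fork)."""
    G = -2 * np.eye(n, dtype=int)
    for i in range(n - 3):                   # chain of nodes 0,...,n-3
        G[i, i + 1] = G[i + 1, i] = 1
    G[n - 3, n - 2] = G[n - 2, n - 3] = 1    # both fork nodes attach
    G[n - 3, n - 1] = G[n - 1, n - 3] = 1    # to node n-3
    return G

# block-diagonal Gram matrix of U + D_20
G = np.block([[U, np.zeros((2, 20), dtype=int)],
              [np.zeros((20, 2), dtype=int), D(20)]])

eig = np.linalg.eigvalsh(G)
sig = (int((eig > 0).sum()), int((eig < 0).sum()))
det = round(np.linalg.det(G))
print("signature:", sig, " det:", det)  # expect (1, 21) and -4
```

The determinant $-4$ matches Artin's formula $\mbox{disc}=-p^{2\sigma}$ for $p=2$, $\sigma=1$.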
On a K3 surface, algebraic and numerical equivalence are the same. Hence $\mathop{\rm NS}\nolimits(S)$ is torsion-free and thus a lattice in the strict sense. In characteristic zero, Lefschetz' theorem bounds the Picard number by the central Hodge number: \begin{eqnarray} \label{eq:Lef} \rho(S) \leq h^{1,1}(S). \end{eqnarray} In positive characteristic, however, we have only Igusa's theorem which gives the weaker upper bound: \begin{eqnarray} \label{eq:Igusa} \rho(S)\leq b_2(S). \end{eqnarray} Surfaces attaining equality in the former bound (\ref{eq:Lef}) are sometimes called singular (in the sense of exceptional, as with elliptic curves said to be ``singular'' when they have complex multiplication). Equality in the latter bound~(\ref{eq:Igusa}) leads to Shioda's notion of supersingular surfaces. For K3 surfaces, one has $h^{1,1}(S)=20$ and $b_2(S)=22$. Supersingular K3 surfaces were studied by Artin in \cite{Artin}. In particular he proved that for a supersingular K3 surface in characteristic $p$, the N\'eron-Severi\ group $\mathop{\rm NS}\nolimits(S)$ has discriminant \begin{eqnarray} \label{eq:disc} \mbox{disc}(\mathop{\rm NS}\nolimits(S))=-p^{2\sigma}, \;\;\; 1\leq \sigma\leq 10. \end{eqnarray} Here $\sigma$ is usually called the Artin invariant of $S$. Artin also derived a stratification of the moduli space of supersingular K3 surfaces in terms of~$\sigma$. This classification was later complemented by Ogus who proved that there is a unique supersingular K3 surface with $\sigma=1$ over the algebraic closure of the base field \cite{Ogus} (see \cite{RS2} for characteristic $2$). From here on we specialize to characteristic $p=2$. There are several known models for the unique supersingular K3 surface $X$ with $\sigma=1$ (e.g.~\cite{DK}, \cite{KK}, \cite{RS2}, \cite{S-MJM}). For instance one can take the following~genus one fibration from \cite{DK} with affine parameter $t\in\mathbb{P}^1$: \[ X: y^2 = x^3 + t^3x^2+t. 
\] This fibration is quasi-elliptic, i.e.~all fibers are singular curves (see Section \ref{s:g=1}), but it has only one reducible fiber. The special fiber is located outside the affine chart on the base curve $\mathbb{P}^1$, at $t = \infty$, and has Kodaira type $I_{16}^*$. It follows that there can be no sections other than the zero section $O$, and that \[ \mathop{\rm NS}\nolimits(X) = U \oplus D_{20}. \] This fibration will reappear in our classification in Sections \ref{s:class}--\ref{s:eq} as \#18. Note that a singular fiber of type $I_{16}^*$ is impossible for a jacobian genus~$1$ fibration on any K3 surface outside characteristic two, for otherwise the surface would contradict either \eqref{eq:Lef} or \eqref{eq:disc}. In comparison, for an elliptic K3 surface in characteristic two, the maximal singular fiber types are $I_{13}^*$ and $I_{18}$ by \cite{S-max}. \section{Genus one fibrations} \label{s:g=1} A genus~$1$ fibration on a smooth projective surface $S$ is a surjective morphism onto a smooth curve $C$\/ such that the general fiber $F$ is a curve of arithmetic genus~$1$. If the characteristic is different from $2$ and $3$, then this already implies that $F$ is smooth. In the presence of a section, $F$\/ is an elliptic curve; hence these fibrations are called elliptic. In characteristics $2$ and $3$, however, $F$\/ need not be smooth; it may be a cuspidal rational curve. Such a fibration is called quasi-elliptic. For general properties of genus~$1$ fibrations (mostly elliptic), the reader is referred to the recent survey \cite{SSh} and the references therein, specifically \cite{CD}. We shall review a few more details about quasi-elliptic fibrations in Section \ref{s:unique}. Here we only recall two useful formulas. The first computes the Euler-Poincar\'e characteristic $e(S)$ through the (reducible) singular fibers. 
The sum includes a local correction term that accounts for the wild ramification $\delta_v$ in the case of an elliptic surface, and for the non-zero Euler-Poincar\'e characteristic of the general fiber in the case of a quasi-elliptic surface: \begin{itemize} \item $S$ elliptic:\phantom{-quasi}\;\;\, $ e(S) = \sum_{v\in C} (e(F_v) + \delta_v) $, \item $S$ quasi-elliptic:\;\; $ e(S) = e(C)e(F) + \sum_{v\in C} (e(F_v) - 2). $ \end{itemize} The Shioda-Tate formula concerns jacobian genus~$1$ fibrations. It asserts that the N\'eron-Severi\ group is generated by fiber components and sections. Outside the \MoW\ group, the only relation is that any two fibers are algebraically equivalent. In order to find a genus~$1$ fibration on a K3 surface, it suffices to find a divisor $D$\/ of zero self-intersection $D^2=0$ by \cite{PSS}. Then either $D$\/ or $-D$\/ is effective by Riemann-Roch, and the linear system $|D|$ or $|-D|$ induces a genus~$1$ fibration (usually elliptic). If the divisor $D$ has the shape of a singular fiber from Kodaira's list, then it in fact appears as a singular fiber of the given fibration. Moreover, any irreducible curve $C$ with $C\cdot D=1$ gives a section of the fibration. In the K3 case, any curve has even self-intersection by the adjunction formula; in particular $C^2$ is even. Hence $C$ and $D$ span the hyperbolic plane $U$. In summary, a jacobian elliptic fibration on a K3 surface is realized by identifying a copy of $U$\/ inside $\mathop{\rm NS}\nolimits$. (Warning: in general it might not be the copy of~$U$\/ we started with, because the sections of~$D$\/ may have a base locus. But it is always the image of the original copy of~$U$\/ under an isometry of $\mathop{\rm NS}\nolimits(S)$.) We now prove a result which implies that any genus one fibration on $X$ is jacobian: \begin{Theorem} \label{thm:jac} Any genus 1 fibration on a supersingular K3 surface of Artin invariant 1 admits a section. 
\end{Theorem} \begin{proof} Let $X$ denote the supersingular K3 surface of Artin invariant 1 in characteristic $p$. Given a genus 1 fibration, we denote the class of a fiber by $F$ and the multisection index by $m\in\N$. That is, \[ m\mathbb{Z} = \{D.F, \;\; D\in\mathop{\rm NS}\nolimits(X)\}. \] Then the fibration has a section if and only if $m=1$. Assume $m>1$. Then $F/m\in\mathop{\rm NS}\nolimits(X)^\vee$, and in fact \[ N:= \langle \mathop{\rm NS}\nolimits(X), F/m\rangle \;\; \text{is an even integral lattice,} \] since $F^2=0$. Here $F$ is indivisible in $\mathop{\rm NS}\nolimits(X)$ since there cannot be any multiple fibers by the canonical bundle formula (see \cite[Thm.~6.8]{SSh}). Hence $\mathop{\rm NS}\nolimits(X)$ has index $m$ in $N$, from which we infer \[ \mbox{disc} (N) = \mbox{disc}(\mathop{\rm NS}\nolimits(X))/m^2. \] Since the discriminant is an integer, it follows at once that $m=p$. But even then, $N$ would be an even unimodular lattice of signature $(1,21)$; this is impossible, since the signature of an even unimodular lattice is divisible by $8$. \end{proof} \begin{Remark} The above argument may be applied to any elliptic surface with indivisible fiber class. In fact, one may compare Keum's result for complex elliptic K3 surfaces \cite{Keum} which states in the analogous notation that $\mathop{\rm NS}\nolimits(\mbox{Jac}(X))=N$. \end{Remark} Throughout this paper we shall employ the following terminology. Kodaira's notation for singular fibers of type $I_n$ (and $III, IV$) will be used interchangeably with the corresponding extended Dynkin diagrams $\tilde A_{n-1}$ or the root lattices $A_{n-1}$, and likewise for $\tilde D_n, D_n (n\geq 4)$ and $\tilde E_n, E_n (n=6,7,8)$. In principle, there is an ambiguity for $A_1$ and $A_2$, but throughout this paper the root lattice will in fact determine the fiber type uniquely. The zero section will be denoted by $O$. The fiber component meeting $O$ is called the identity component. 
For other simple components, we use the self-explanatory terms far component ($\tilde D_n (n>4), \tilde E_6, \tilde E_7$), near component ($\tilde D_n (n>4)$) and opposite component as well as even and~odd components ($\tilde A_n, n$ odd). \section{Elliptic vs.~quasi-elliptic fibrations} \label{s:e-qe} We have already mentioned the subtlety in characteristics $p=2$ and $3$ that there are quasi-elliptic fibrations. This brings us to the question of how to detect from $\mathop{\rm NS}\nolimits=U+M$ whether the corresponding genus~$1$ fibration is elliptic or quasi-elliptic. In this section, we shall discuss a few criteria. A first criterion comes from the singular fibers: namely a quasi-elliptic fibration does not admit multiplicative fibers. The additive fiber types are also restricted: \begin{itemize} \item no $IV, IV^*, I_n^* ~(n>0$ odd$)$ in characteristic $2$, \item no $III, III^*$ or $I_n^* ~(n\geq 0)$ in characteristic $3$. \end{itemize} The Euler-Poincar\'e characteristic gives a second simple approach to distinguish elliptic and quasi-elliptic fibrations: on a quasi-elliptic fibration, only the reducible singular fibers contribute to $e(X)$ (which can also be computed as alternating sum of Betti numbers or with Noether's formula). If the sum over the fibers indeed returns the right number, then we can compare to the sum without the correction terms for the general fiber (plus possibly wild ramification which necessarily is non-zero for certain fiber types by \cite{SS2}). If the latter sum exceeds $e(X)$, then the fibration cannot be elliptic. This criterion can be very useful because the reducible singular fibers are visible in $\mathop{\rm NS}\nolimits(X)$ by the Shioda-Tate formula. Perhaps the most general approach relies on the fact that quasi-elliptic surfaces are always unirational, hence supersingular. 
On the other hand, the $\mbox{MW}$-group of a quasi-elliptic fibration is always finite and in fact $p$-elementary (i.e.~isomorphic to $(\mathbb{Z}/p\mathbb{Z})^r$ for some $r\in\N$). This leads to the following criterion: \begin{Theorem}[Rudakov-Shafarevich {\cite[\S4]{RS}}] \label{Thm:PSS} Consider a genus~1 fibration, not necessarily jacobian, on some algebraic surface $X$ with $\chi(\mathcal O_X)>1$ in characteristic $p$. This fibration is quasi-elliptic if and only if the following conditions are satisfied: \begin{enumerate}[(i)] \item $p=2,3$, \item the root lattice of each reducible fiber has $p$-elementary discriminant group, \item the fiber components generate a sublattice of $\mathop{\rm NS}\nolimits(X)$ of corank one. \end{enumerate} \end{Theorem} Specifically this implies for a jacobian quasi-elliptic fibration that the \MoW\ group is $p$-elementary because the fibers do not accommodate any higher torsion. We shall now discuss whether this last property already determines if the fibration is quasi-elliptic. If the quasi-elliptic fibration from Theorem \ref{Thm:PSS} is jacobian, then condition $(iii)$ requires that the fibration is extremal. In general this means that the Picard number is maximal (relative to the inequality \eqref{eq:Lef} or \eqref{eq:Igusa} depending on the characteristic) while the \MoW\ group is finite. Extremal elliptic surfaces are much more special in positive characteristic than in characteristic zero. In fact, Ito showed that in characteristic $p$ extremal elliptic surfaces always arise through a purely inseparable base change from rational elliptic surfaces \cite{Ito2}. (Thus they are again unirational.) Going through all extremal rational elliptic surfaces and their purely inseparable base changes, one can thus deduce the following solution to the above problem: \begin{Proposition} \label{Prop} \label{prop:qe} Let $X$ be a supersingular surface in characteristic~$2$ endowed with a jacobian genus~1 fibration. 
If the \MoW\ group of the fibration is \hbox{$2$-elementary} then $X$\/ is either a rational elliptic surface or quasi-elliptic. \end{Proposition} \begin{Remark} \label{Rem:3} In characteristic $3$, an analogous classification holds true with one series of surfaces added: elliptic surfaces with exactly two singular fibers, one of them of type $I_{3^e}$ for some $e\in\N$ and the other of type $II$ if $e$ is even, or $IV^*$ if $e$ is odd (with wild ramification of index one). These surfaces arise from the rational elliptic surface $y^2 +xy+tx=x^3$ through the purely inseparable base change $t\mapsto t^{3^e}$. Note that these elliptic fibrations are easy to distinguish from quasi-elliptic fibrations thanks to the multiplicative fiber at $t=0$. \end{Remark} \begin{Theorem} \label{thm:e-qe} Let $X$ be a K3 surface over an algebraically closed field of characteristic $p$. Then a given jacobian genus~1 fibration on $X$ is quasi-elliptic iff $p=2,3$, $X$ is supersingular and $\mbox{MW}=(\mathbb{Z}/p\mathbb{Z})^r$ for some $r\in\N$. \end{Theorem} \begin{proof} Quasi-elliptic fibrations only occur in the specified characteristics. For $p=2$, the theorem follows from Proposition \ref{prop:qe}. For $p=3$, we also have to take into account the extra case from Remark \ref{Rem:3}. But this series of surfaces avoids K3 surfaces by inspection of the Euler-Poincar\'e characteristic, so the claim follows. \end{proof} The theorem (as well as the preceding proposition) is useful from the lattice theoretic viewpoint for the following reason: As we have seen in the previous section, a jacobian genus~$1$ fibration on an algebraic surface $X$ corresponds to a decomposition of the N\'eron-Severi\ lattice $\mathop{\rm NS}\nolimits(X)=U+M$. Here $M$ is often called the essential lattice. If $\chi(\mathcal{O}_X)>1$, then $M$ together with its root type determines the structure of the singular fibers and the \MoW\ group \cite{ShMW}. 
Since a K3 surface has $\chi=2$, we can thus deduce from the essential lattice $M$ whether a given jacobian genus~$1$ fibration on a K3 surface in characteristic $2$ or $3$ is elliptic or quasi-elliptic. \section{Kneser-Nishiyama method} \label{s:Nishi} In \cite{Nishi}, Nishiyama introduced a lattice theoretic approach to classify all jacobian elliptic fibrations on a complex (elliptic) K3 surface. The method is based on gluing techniques of Kneser and Witt \cite{Kneser} and the classification of Niemeier lattices, i.e.~negative-definite unimodular lattices of rank $24$. By \cite{Nie}, there are 24 such lattices, and each is determined by its root type. In fact, except for the Leech lattice, the root lattice always has finite index in the unimodular lattice. For a complex K3 surface $X$, one has $\mathop{\rm NS}\nolimits(X)$ of rank $\rho(X)\leq 20$. The transcendental lattice $T(X)$ is defined as the orthogonal complement of $\mathop{\rm NS}\nolimits(X)$ in $H^2(X, \mathbb{Z})$ with respect to cup-product: \[ T(X) = \mathop{\rm NS}\nolimits(X)^\bot \subset H^2(X,\mathbb{Z}). \] Since $H^2(X,\mathbb{Z})$ has signature $(3,19)$, the signature of $T(X)$ is $(2, 20-\rho(X))$. The information on how to glue together $\mathop{\rm NS}\nolimits(X)$ and $T(X)$ in the unimodular lattice $H^2(X,\mathbb{Z})$ is encoded in the isomorphism of the discriminant forms: \[ q_{\mathop{\rm NS}\nolimits(X)}^{\phantom0} \cong -q_{T(X)}^{\phantom0}. \] One now looks for a partner lattice $L$ of $T(X)$ with rank $26-\rho(X)$ such that $L$ is negative definite of discriminant form $q_L=q_{T(X)}$. Such a lattice exists by lattice theory \emph{\`{a} la} Nikulin (cf.~\cite{Nishi-Saitama}). Then one determines all primitive embeddings of $L$ into Niemeier lattices $N$. For each embedding $L\hookrightarrow N$, the orthogonal complement $M=L^\bot\subset N$ is a candidate for the essential lattice of a jacobian elliptic fibration on $X$. 
To show that $X$ does indeed admit an elliptic fibration with essential lattice $M$, one notes that by construction the lattices $\mathop{\rm NS}\nolimits(X)$ and $U+M$ have the same signature and discriminant form. Thanks to the copy of the hyperbolic plane, these conditions imply that the lattices are isomorphic. But then the representation of $\mathop{\rm NS}\nolimits(X)$ as $U+M$ induces a jacobian elliptic fibration on $X$ with essential lattice $M$, as we explained in Section \ref{s:g=1}. Note that the same approach is not guaranteed to work in characteristic $p>0$. Indeed, consider supersingular K3 surfaces of Artin invariant $\sigma>2$. Here $\mathop{\rm NS}\nolimits(X)$ is $p$-elementary; hence its discriminant group has length $2\sigma$. Assume that $\mathop{\rm NS}\nolimits(X)=U+M$, and that $M$ is embedded primitively into some unimodular lattice $N$. Then the discriminant group $G_L$ of its orthogonal complement $L$ has the same length $2\sigma$. In particular we can estimate the rank of $N$ by \[ \mbox{rank}(N) = \mbox{rank}(M) + \mbox{rank}(L) \geq \mbox{rank}(M) + \mbox{length}(G_L) = 20 + 2\sigma >24. \] However, we can still try to pursue the same approach for supersingular K3 surfaces with Artin invariant $\sigma\leq 2$. This only requires finding a suitable partner lattice $L$ for $\mathop{\rm NS}\nolimits(X)$. In the present situation, we have already mentioned that one way to write $\mathop{\rm NS}\nolimits(X)$ is $\mathop{\rm NS}\nolimits(X) = U \oplus D_{20}$. Hence we can choose $L=D_4$. In fact, the Niemeier lattice with root system $D_{24}$ contains $D_4$ and $D_{20}$ as primitive orthogonal sublattices. With the partner lattice $D_4$, we can now classify all genus~$1$ fibrations on $X$ (automatically jacobian by Theorem \ref{thm:jac}) and decide whether they are elliptic or quasi-elliptic by Theorem \ref{thm:e-qe}. 
Note that by Theorem \ref{thm:e-qe} it will be immediately clear from the embedding of $D_4$ into the Niemeier lattice whether the resulting genus~$1$ fibration has non-torsion sections (and thus is elliptic). Namely $D_4$ embeds into all root lattices of type $D_n (n\geq 4), E_n (n=6,7,8)$, but not into any $A_n$. The orthogonal complement of this embedding is always a root lattice (and therefore corresponds to fiber components) unless the overlattice in question is $D_5$ or $E_6$. In the latter cases, the \MoW\ rank thus has to be positive, equaling one resp.~two. \section{Genus one fibrations on $X$} \label{s:class} This section gives the primitive embeddings of $L=D_4$ into Niemeier lattices. By the previous section, this describes all genus~$1$ fibrations on our K3 surface $X$. The following table lists the root type $R(N)$ that characterizes the corresponding Niemeier lattice $N$ uniquely. The next entry is the root type $R(M)$ of the orthogonal complement of the primitive embedding of $L=D_4$ into $N$. Since this will serve as essential lattice $M$ of an elliptic fibration, it encodes the reducible singular fibers. The difference of the ranks of $R(M)$ and $M$ (the latter being $20$) gives the $\mbox{MW}$-rank. As explained above, the $\mbox{MW}$-rank is positive if and only if $D_4$ is embedded into $D_5$ or $E_6$. By \cite{ShMW} we obtain the torsion subgroup of $\mbox{MW}$ from the primitive closure $R(M)'$ of $R(M)$ inside $\mathop{\rm NS}\nolimits$: \[ \mbox{MW}(X)_\text{tor} \cong R(M)'/R(M). \] Then Proposition \ref{Prop} tells us whether the fibration will be elliptic or quasi-elliptic, as indicated in the last column. \begin{table}[ht!] 
$$ \begin{array}{cccccc} \hline \# & R(N) & R(M) & \mbox{rk}(\mbox{MW}) & \mbox{Torsion} & \text{elliptic?}\\ \hline 1 & D_4 A_5^4 & A_5^4 & 0 & 3\times 6 & e\\ 2 & D_4^6 & D_4^5 & 0 & 2^4 & qe\\ 3 & D_5^2 A_7^2 & D_5 A_7^2 & 1 & 8 & e\\ 4 & D_6 A_9^2 & A_1^2 A_9^2 & 0 & 10 & e\\ 5 & D_6^4 & A_1^2 D_6^3 & 0 & 2^3 & qe\\ 6 & E_6 D_7 A_{11} & D_7 A_{11} & 2 & 4 & e\\ 7 & E_6 D_7 A_{11} & A_3 E_6 A_{11} & 0 & 6 & e\\ 8 & E_6^4 & E_6^3 & 2 & 3 & e\\ 9 & D_8^3 & D_4 D_8^2 & 0 & 2\times 2 & qe\\ 10 & D_9 A_{15} & D_5 A_{15} & 0 & 4 & e\\ 11 & E_7 A_{17} & A_1^3 A_{17} & 0 & 6 & e\\ 12 & E_7^2 D_{10} & A_1^3 E_7 D_{10} & 0 & 2\times 2 & qe\\ 13 & E_7^2 D_{10} & D_6 E_7^2 & 0 & 2 & qe\\ 14 & D_{12}^2 & D_8 D_{12} & 0 & 2 & qe\\ 15 & E_8 D_{16} & D_4 D_{16} & 0 & 2 & qe\\ 16 & E_8 D_{16} & D_{12} E_8 & 0 & 1 & qe\\ 17 & E_8^3 & D_4 E_8^2 & 0 & 1 & qe\\ 18 & D_{24} & D_{20} & 0 & 1 & qe\\ \hline \end{array} $$ \caption{Genus one fibrations on $X$} \label{Tab:fibr} \end{table} A priori there is one ambiguity in the table: the root lattice of type $A_1$ can correspond to singular fibers of type $I_2$ or $III$. In the present situation, this problem is solved as follows: If the fibration is quasi-elliptic, then all singular fibers are additive. Hence the above fibers have type $III$. If the fibration is elliptic, then in each case involving an $A_1$ there is torsion in $\mbox{MW}$ of order relatively prime to $2$. Since fibers of type $III$ do not accommodate $\ell$-torsion sections outside characteristic $\ell$ ($\ell\neq 2$), the fibers corresponding to $A_1$'s have type $I_2$. Table \ref{Tab:fibr} settles the classification statement of Theorem \ref{thm}. It remains to prove existence and uniqueness for each genus~$1$ fibration. This will be achieved in Section \ref{s:eq}, as outlined in the next section, and Section \ref{s:unique}. 
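As a cross-check of Table~\ref{Tab:fibr}, the Shioda-Tate formula requires that the rank of the root type $R(M)$ plus the \MoW\ rank equals $\mbox{rank}\,\mathop{\rm NS}\nolimits(X)-2=20$ for every fibration, and each Niemeier root type $R(N)$ must have rank $24$. A short script (our own bookkeeping sketch; the table is hard-coded from the text) verifies this:

```python
import re

def root_rank(s):
    """Rank of a root type written as e.g. 'D5 A7^2'."""
    return sum(int(n) * (int(m) if m else 1)
               for _, n, m in re.findall(r'([ADE])(\d+)(?:\^(\d+))?', s))

table = [  # (no., R(N), R(M), MW-rank)
    (1, 'D4 A5^4', 'A5^4', 0),        (2, 'D4^6', 'D4^5', 0),
    (3, 'D5^2 A7^2', 'D5 A7^2', 1),   (4, 'D6 A9^2', 'A1^2 A9^2', 0),
    (5, 'D6^4', 'A1^2 D6^3', 0),      (6, 'E6 D7 A11', 'D7 A11', 2),
    (7, 'E6 D7 A11', 'A3 E6 A11', 0), (8, 'E6^4', 'E6^3', 2),
    (9, 'D8^3', 'D4 D8^2', 0),        (10, 'D9 A15', 'D5 A15', 0),
    (11, 'E7 A17', 'A1^3 A17', 0),    (12, 'E7^2 D10', 'A1^3 E7 D10', 0),
    (13, 'E7^2 D10', 'D6 E7^2', 0),   (14, 'D12^2', 'D8 D12', 0),
    (15, 'E8 D16', 'D4 D16', 0),      (16, 'E8 D16', 'D12 E8', 0),
    (17, 'E8^3', 'D4 E8^2', 0),       (18, 'D24', 'D20', 0),
]

for no, RN, RM, mw in table:
    assert root_rank(RN) == 24, (no, RN)       # Niemeier root type
    assert root_rank(RM) + mw == 20, (no, RM)  # Shioda-Tate
print("Table is consistent for all 18 fibrations")
```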
\begin{Remark} In our concrete situation, we can also distinguish elliptic and quasi-elliptic fibrations, given a decomposition $\mathop{\rm NS}\nolimits(X)=U+M$, by computing the Euler-Poincar\'e characteristics of the singular fibers instead of appealing to Theorem \ref{thm:e-qe}. Since some additive fiber types on an elliptic fibration necessarily come with wild ramification by \cite{SS2}, this in fact suffices for all cases but \#18 which is implied by \cite{S-max} to be quasi-elliptic. \end{Remark} Several of the fibrations from Table \ref{Tab:fibr} have been studied by Dolgachev and Kond\=o in \cite{DK}, by Ito in \cite{Ito2}, and by one of us in \cite{S-MJM}, see also \cite{KK}, \cite[App.~2]{OS}, \cite{RS2}, \cite{Schroeer}, \cite[Ex.~4.1]{Sh77} as indicated in the following sections. Here we complement the previous considerations to derive equations and connections for all fibrations. We conclude this section with a remark about Picard numbers over finite fields. For each fibration, we will exhibit a model over $\mathbb{F}_2$ with Picard number $22$ over $\mathbb{F}_4$. However, the question of the Picard number over $\mathbb{F}_2$ is more subtle. We will see in the next section that the first two fibrations admit models $X$ with $\rho(X/\mathbb{F}_2)=15$. This cannot be improved because of the Galois action on the singular fibers and their components (or on the \MoW\ group). In contrast, for all other fibrations we will exhibit models with $\rho(X/\mathbb{F}_2)=21$. This is optimal for supersingular K3 surfaces by \cite[(6.8)]{Artin} (see also \cite[Thm.~4.4]{S-MJM}, \cite{Sss}). More precisely, we will show that all models with $\rho(X/\mathbb{F}_2)$ fixed (i.e.~$15$ or $21$) are isomorphic over $\mathbb{F}_2$. In order to move between these two groups, we will exhibit two different models of $\# 5$ which are isomorphic over $\mathbb{F}_4=\mathbb{F}_2(\varrho)$ with $\varrho^2+\varrho+1=0$. 
\section{Plan for connections} Let $S$ be a projective K3 surface. Recall that it suffices to identify a divisor $D$ on $S$ that has the shape of a singular fiber from Kodaira's list in order to find a genus~$1$ fibration on~$S$\/ with $D$\/ as singular fiber. The fibration is induced by the linear system $|D|$. Moreover, any irreducible curve $C$ with $C\cdot D=1$ gives a section of the fibration. With these tools at hand, it is in principle possible to derive all fibrations in Table \ref{Tab:fibr} from a single model of the surface $X$. In practice, however, it is often easier to pursue this aim in several steps, since one can usually find only a few linear systems without too much effort. The following diagram sketches how we will connect all fibrations. The numbers refer to the figures in the next section where the connections are derived (or in one case to a subsection which provides a further reference). $$ \begin{array}{ccccccccc} \#10 && \#3 &&&&&&\\ {\downarrow} {\scriptstyle\ref{Fig:10-6}} && {\downarrow} {\scriptstyle \ref{Fig:3-13}} &&&&&&\\ \#6 && \#13 && \#11 &&&&\\ {\downarrow} {\scriptstyle\ref{ss:6}} && {\uparrow} {\scriptstyle \ref{Fig:7-13}} & {\nearrow} {\scriptstyle \ref{Fig:7-11}} &&&&&\\ \#8 & \stackrel{\ref{Fig:7-8}}{\longleftarrow} & \#7 & \stackrel{\ref{Fig:7-14}}{\longrightarrow} & \#14 & \stackrel{\ref{Fig:14-16}}{\longrightarrow} & \#16 & \stackrel{\ref{Fig:16-18}}{\longrightarrow} & \#18\\ && {\downarrow} {\scriptstyle \ref{Fig:7-4}} &&&&&&\\ && \#4& \stackrel{\ref{Fig:4-5}}{\longrightarrow} & \#5& \stackrel{\ref{Fig:5-2}}{\longrightarrow} & \#2 & \stackrel{\ref{Fig:1-2}}{\longleftarrow} & \#1\\ && {\downarrow} {\scriptstyle \ref{Fig:4-12}} && {\downarrow} {\scriptstyle \ref{Fig:5-9}} &&&&\\ && \#12 && \#9 &&&&\\ && {\downarrow} {\scriptstyle \ref{Fig:12-15}} &&&&&&\\ \#17 & \stackrel{\ref{Fig:17-15}}{\longrightarrow} & \#15 &&&&&& \end{array} $$ \section{Equations \& Connections} \label{s:eq} Usually we shall use affine coordinates 
$x,y,t$ with $t$ as the parameter of the base curve $\mathbb{P}^1$ over $\mathbb{F}_2$. The new parameter will be denoted by $u$, i.e.~it exhibits a new genus~$1$ fibration on $X$ by the surjection \begin{eqnarray*} X & \to & \mathbb{P}^1\\ (x,y,t) & \mapsto & u(x,y,t) \end{eqnarray*} A 5-tuple $[a_1,a_2,a_3,a_4,a_6]$ refers to the usual short-hand notation for the elliptic curve \[ y^2 + a_1xy+a_3y = x^3 + a_2x^2+a_4x+a_6. \] This fibration is quasi-elliptic in characteristic $2$ if and only if $a_1\equiv a_3\equiv 0$ identically. \subsection{\#1: $R(M) = A_5^4$} This fibration arises as inseparable base change from the Hesse pencil (see \cite[Ex. 4.1]{Sh77}): \begin{eqnarray}\label{eq:Hesse} X:\;\; x^3+y^3+z^3 = t^2xyz. \end{eqnarray} A Weierstrass model can be found for instance in \cite{Ito2}. We have sections at the base points of the cubics (induced from the Hesse pencil) plus the likes of $[x,y,z]=[t,1,1]$. In total the sections are always given by $x^3=z^3$ or $y^3=z^3$ or $x^3=y^3$. \subsubsection*{Connection with \#2} We can extract $\tilde D_4$ divisors from sections and fiber components. We shall work affinely in the chart $z=1$. For instance by setting $u=y$, we visibly arrange for $\tilde D_4$ fibers at $u=0,\infty$. In the sequel we will draw figures with fiber components and sections to visualize the connections. We will distinguish as follows between old and new fibration: \begin{center} \begin{tabular}{rl} old fiber components & balls\\ old sections & small circles\\ &\\ new fibers & framed by boxes\\ new sections & big circles \end{tabular} \end{center} The center of the following figure sketches the components of the $I_6$ fiber at $t=\infty$. We identify the fiber components $C_x, C_y, C_z$ given by $x=0$ resp.~$y=0$ resp.~$z=0$ of the model (\ref{eq:Hesse}). The other three components arise as the exceptional divisors above the singular points at their intersections. 
The given sections come from the base points of the Hesse pencil with $y=0, x^3=z^3$ (LHS) or $x^3=y^3, z=0$ (RHS). The component $C_x$ serves as a section of the new fibration. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,3.2)(0,0) \thicklines \multiput(4,2)(2,0){2}{\circle*{.1}} \multiput(4,1)(2,0){2}{\circle*{.1}} \put(4,2){\line(0,-1){1}} \put(6,2){\line(0,-1){1}} \put(5,0.4){\circle*{.1}} \put(5,2.6){\circle*{.1}} \put(5,0.4){\circle{0.2}} \put(3,1.4){\line(5,3){2}} \put(5,0.4){\line(5,3){1}} \put(4,1){\line(5,-3){1}} \put(5,2.6){\line(5,-3){2}} \put(3,1.4){\circle{.1}} \put(3,2){\circle{.1}} \put(3,2.6){\circle{.1}} \put(3,2){\line(1,0){1}} \put(3,2.6){\line(5,-3){1}} \put(6,2){\line(5,3){1}} \put(7,1.4){\circle{.1}} \put(7,2){\circle{.1}} \put(7,2.6){\circle{.1}} \put(6,2){\line(1,0){1}} \put(2,0.3){$u=0$} \put(7.15,0.3){$u=\infty$} \put(5.05,0.1){$C_x$} \put(4.1,1.8){$C_y$} \put(5.55,1.8){$C_z$} \thinlines \put(2.5,0.7){\framebox(2.05,2.3){}} \put(5.45,0.7){\framebox(2.05,2.3){}} \end{picture} \caption{Two $\tilde D_4$ divisors supported on $\tilde A_5$ and sections} \label{Fig:1-2} \end{figure} This yields the quasi-elliptic fibration \[ X:\;\; t^2 = ux(x^3+u^3+1). \] This can be transformed into Weierstrass form as follows. First homogenize the RHS as a quartic polynomial with variable $z$. Setting $x=1$, we obtain a cubic in Weierstrass form up to some factors: \[ X:\;\; t^2 = u((u^3+1)z^3+1). \] The change of variables $(z,t)\mapsto (z/(u(u^3+1)), t/(u(u^3+1)))$ then returns the Weierstrass form \[ X:\;\; t^2 = z^3 + u^3(u^3+1)^2. \] One reads off singular fibers of type $\tilde D_4$ at $u=0, \infty$ (as seen above) and at the roots of $u^3+1$. \subsection{\#2: $R(M)=D_4^5$} \label{ss:2} This fibration admits several nice models, for instance $[0,0,0,0,(t^3+1)^3]$ with singular fibers at all points of $\mathbb{P}^1(\mathbb{F}_4)$ as seen above.
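Explicit membership claims such as the sections of \#1 above (for instance $[x,y,z]=[t,1,1]$ on the Hesse model \eqref{eq:Hesse}) are easy to double-check mechanically. The following Python sketch is purely a verification aid and not part of the argument; the helper names are ours. It encodes a polynomial in $\mathbb{F}_2[t]$ as an integer whose bit $i$ is the coefficient of $t^i$, so that addition is XOR and multiplication is carry-less:

```python
# F2[t] arithmetic: an int encodes a polynomial, bit i = coefficient of t^i.
def mul(a, b):
    """Carry-less multiplication, i.e. the product in F2[t]."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def cube(a):
    return mul(a, mul(a, a))

t = 0b10  # the polynomial t

def hesse(x, y, z, s):
    """x^3 + y^3 + z^3 - s^2*x*y*z over F2[t] (char 2: minus = plus)."""
    return cube(x) ^ cube(y) ^ cube(z) ^ mul(mul(s, s), mul(x, mul(y, z)))

# the section [x,y,z] = [t,1,1] and the base point [0,1,1] of the Hesse pencil
assert hesse(t, 1, 1, t) == 0
assert hesse(0, 1, 1, t) == 0
```

Both assertions reduce to $t^3+t^3=0$ resp.\ $1+1=0$ in characteristic $2$.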
There are plenty of automorphisms respecting the fibration, for instance \[ \alpha: (x,y,t)\mapsto (\varrho x,y,t) \] for $\varrho^3=1$ and those induced by M\"obius transformations of $\mathbb{P}^1$ that permute $\infty$ and third roots of unity such as \[ (x,y,t) \mapsto (x/(t+1)^4, y/(t+1)^6, t/(t+1)). \] $\mbox{MW}=(\mathbb{Z}/2\mathbb{Z})^4$ with sections $P=(t^3+1,0), Q=(t(t^3+1), (t^3+1)^2)$ plus their images under the above automorphisms. By way of example, we give two connections, but we shall not use them here, since they do not lead to models with maximal Picard number $\rho(X/\mathbb{F}_2)=21$ although the new fibrations admit such models (cf.~\ref{s:5}). In the sequel, we shall only give the connections needed for the proof of Theorem \ref{thm}. \subsubsection*{Connection with \#3} $u=y/((t^2+t+1)(x+t^3+1))$ extracts (independently at $u=0$ and $\infty$) two $\tilde A_7$ divisors from pairs of two $\tilde D_4$ fibers connected through two sections. \subsubsection*{Connection with \#8} $u=y/(t^3+1)^2$ extracts $\tilde E_6$ from $\tilde D_4$ at $\infty$ and two-torsion sections $P, \alpha P, \alpha^2 P$ at $u=0$. Same at $u=\infty$ from the zero section plus identity and double components of $\tilde D_4$ fibers at roots of $t^3+1$. The remaining simple components of the fibers at the roots of $t^3+1$ serve as sections. \subsection{\#3: $R(M) = D_5A_7^2$} From \#2, we can obtain the model of \#3 as a cubic pencil \[ X:\;\; (x^2+x+1)(y+1) = u^2 (y^2+y+1)(x+1). \] This fibration is a purely inseparable base change by $s=u^2$ from a rational elliptic surface $S$ with configuration $[4,4,III]$. Here the $III$-fiber comes with wild ramification of index one; since the ramification index stays constant under the base change, the special fiber is replaced by type $I_1^*$ as claimed. The base points of the pencil generate $\mbox{MW}(S) \cong \mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$.
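Before turning to $\mbox{MW}(X)$ for \#3, we note that the two-torsion sections of \#2 listed above can likewise be verified directly on the model $[0,0,0,0,(t^3+1)^3]$. The script below is again only a sanity check with ad-hoc helper names, encoding $\mathbb{F}_2[t]$-polynomials as bitmask integers:

```python
def mul(a, b):
    # product in F2[t]; ints encode polynomials, bit i = coefficient of t^i
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def power(a, n):
    r = 1
    for _ in range(n):
        r = mul(r, a)
    return r

t = 0b10
c = power(t, 3) ^ 1          # the polynomial t^3 + 1

def on_curve(x, y):
    # y^2 = x^3 + (t^3+1)^3, the model [0,0,0,0,(t^3+1)^3] of #2
    return mul(y, y) == (power(x, 3) ^ power(c, 3))

assert on_curve(c, 0)                    # P = (t^3+1, 0)
assert on_curve(mul(t, c), mul(c, c))    # Q = (t(t^3+1), (t^3+1)^2)
```

For $Q$ the identity used is $(t^3+1)^4 = t^3(t^3+1)^3 + (t^3+1)^3$ in characteristic $2$.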
We find generators of $\mbox{MW}(X)$ in terms of another model of this elliptic fibration which also has the advantage of maximal Picard number $\rho(X/\mathbb{F}_2)=21$. It arises from the extremal rational elliptic surface $[1,s^2,s^2,0,0]$ with singular fibers of type $I_8$ at $s=0$ and $III$ at $\infty$ through the base change $t\mapsto s=t^2+t$: \begin{eqnarray} \label{eq3} X:\;\; y^2 + xy + (t^2+t)^2 y = x^3 + (t^2+t)^2 x^2. \end{eqnarray} Besides the induced torsion sections $(s^2,0), (0,0), (0,s^2)$, there is an $8$-torsion section $P=(t^2(t+1), t^4 (t+1))$. Moreover there is an induced section $R=(t^2,\varrho t^4)$ of height $1$. By computing the discriminant of $\mathop{\rm NS}\nolimits(X)$, one verifies that these sections generate $\mbox{MW}(X)$. \subsubsection*{Connection with \#13} $u=(x+s^2)/s^4$ extracts an $\tilde E_7$ at $u=\infty$ from the $\tilde A_7$ at $s=0$ and the zero section. The non-identity components of the other $\tilde A_7$ together with the two-torsion section $Q=(s^2,0)$ form another $\tilde E_7$ at $u=1$. This leaves a root lattice $D_5$ ($\tilde D_5$ minus identity component) at $\infty$ disjoint; on the new fibration it results in a singular fiber of type $\tilde D_6$ at $u=0$. As a new section, one can take $P$. \begin{figure}[ht!]
\setlength{\unitlength}{.45in} \begin{picture}(10.5,5)(-0.25,0.5) \thicklines \put(5.5,3){\circle*{.1}} \put(6,4){\circle*{.1}} \put(7,4.5){\circle*{.1}} \put(8,4){\circle*{.1}} \put(8.5,3){\circle*{.1}} \put(5.5,3){\line(1,2){.5}} \put(6,4){\line(2,1){1}} \put(8.5,3){\line(-1,2){.5}} \put(8,4){\line(-2,1){1}} \put(6,2){\circle*{.1}} \put(7,1.5){\circle*{.1}} \put(8,2){\circle*{.1}} \put(5.5,3){\line(1,-2){.5}} \put(6,2){\line(2,-1){1}} \put(8.5,3){\line(-1,-2){.5}} \put(8,2){\line(-2,-1){1}} \put(0.5,3){\circle*{.1}} \put(1,4){\circle*{.1}} \put(2,4.5){\circle*{.1}} \put(3,4){\circle*{.1}} \put(8.5,3){\line(1,0){1.5}} \put(9.5,3){\circle{.1}} \put(9.15,3.15){$Q$} \put(3.5,3){\circle*{.1}} \put(.5,3){\line(1,2){.5}} \put(1,4){\line(2,1){1}} \put(3.5,3){\line(-1,2){.5}} \put(3,4){\line(-2,1){1}} \put(1,2){\circle*{.1}} \put(2,1.5){\circle*{.1}} \put(3,2){\circle*{.1}} \put(.5,3){\line(1,-2){.5}} \put(1,2){\line(2,-1){1}} \put(3.5,3){\line(-1,-2){.5}} \put(3,2){\line(-2,-1){1}} \put(4.5,3){\circle{.1}} \put(3.5,3){\line(1,0){2}} \put(4.15,3.15){$O$} \put(0,3){\line(1,0){.5}} \put(5.25,5){\circle{.1}} \put(5.25,5){\circle{.2}} \put(5.25,5){\line(3,-4){.75}} \put(5.4,5.1){$P$} \qbezier(1,4)(1,6)(5.25,5) \thinlines \put(.8,0.6){$u=\infty$} \put(8.8,.6){$u=1$} \put(0.75,1){\framebox(4,4){}} \put(5.75,1){\framebox(4,4){}} \end{picture} \caption{Two $\tilde E_7$ divisors supported on two $\tilde A_7$'s and sections} \label{Fig:3-13} \end{figure} We take this example as an opportunity to explain how one can derive the Weierstrass form of the new fibration explicitly. In general, it is often instructive to work with some resolution of singularities related to the new coordinate $u$. Here it concerns the $A_7$ singularity of the Weierstrass form \eqref{eq3} at $(x,y,s)=(0,0,0)$. We proceed in two steps, always choosing an appropriate affine chart. Blowing up twice yields affine coordinates \[ x=s^2x'', y=s^2y''. 
\] The Weierstrass form transforms as \begin{eqnarray} \label{eq3-1} X:\;\; y''^2 + x''y'' + (s+1)^2 y'' = s^2x''^3 + (s^2+s)^2 x''^2. \end{eqnarray} Here the section $P$ takes the shape $(x'', y'')=(s+1, s^2(s+1))$. The node of the above fibration in the fiber $s=0$ sits at $(x'',y'')=(1,0)$. Hence we translate $x''$ by $1$ and then blow-up two more times. This brings us exactly to the coordinate $u$ from above (and another coordinate $v$): \[ x''=s^2u+1, y''=s^2v. \] Here \eqref{eq3-1} transforms as \begin{eqnarray} \label{eq3-2} X:\;\; v^2+uv+v = s^4u^3+s^4u^2+u+1. \end{eqnarray} The section $P$ is expressed as $(u,v)=(1/s, s+1)$. Now we want to consider \eqref{eq3-2} as an elliptic fibration onto $u\in\mathbb{P}^1$. Then $P$ gives us the section $(s,v)=(1/u, 1+1/u)$. In order to obtain a Weierstrass form, we first translate $s$ and $v$ by the coordinates of the section. This gives \[ X:\;\; v^2+(u+1)v = u^2(u+1)s^4. \] We now modify $v\mapsto sv$, yielding the following plane cubic \[ X:\;\; sv^2+(u+1)v = u^2(u+1)s^3. \] Next we homogenize by the variable $w$ and set $v=1$ to obtain the following quasi-elliptic fibration: \[ X:\;\; (u+1)w^2 = u^2(u+1)s^3+s. \] Finally the variable change $(s,w)\mapsto(s/(u(u+1))^2, w/(u^2(u+1)^3))$ gives the Weierstrass form \[ X:\;\; w^2 = s^3+u^2(u+1)^3s. \] One immediately checks that this has singular fibers of type $\tilde D_6$ at $u=0$ and $\tilde E_7$ at $u=1,\infty$ as predicted. Similar computations apply to all other connections. \subsection{\#4: $R(M) = A_1^2 A_9^2$} This fibration arises from (the mod $2$ reduction of) the universal elliptic curve for $\Gamma_1(5)$ by a purely inseparable base change. A model can be given as $[t^2+1, t^2, t^2, 0, 0]$ with $\tilde A_9$'s at $0,\infty$ and $\tilde A_1$'s at the roots of $t^2+t+1$. $\mbox{MW}=\mathbb{Z}/10\mathbb{Z}$ with $5$-torsion section induced from the universal elliptic curve, generated by $(0,0)$ or $(t^2,0)$ for instance.
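The $5$-tuple convention fixed at the beginning of this section translates into a one-line membership test. As a sanity check (our own verification sketch; the helper names are ours), the sections of \#4 just listed satisfy the model $[t^2+1,t^2,t^2,0,0]$:

```python
def mul(a, b):
    # product in F2[t]; ints encode polynomials, bit i = coefficient of t^i
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def power(a, n):
    r = 1
    for _ in range(n):
        r = mul(r, a)
    return r

t = 0b10

def lies_on(a, x, y):
    # y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6 for a = [a1,a2,a3,a4,a6]
    a1, a2, a3, a4, a6 = a
    lhs = mul(y, y) ^ mul(a1, mul(x, y)) ^ mul(a3, y)
    rhs = power(x, 3) ^ mul(a2, mul(x, x)) ^ mul(a4, x) ^ a6
    return lhs == rhs

model4 = [power(t, 2) ^ 1, power(t, 2), power(t, 2), 0, 0]   # model of #4
assert lies_on(model4, 0, 0)            # torsion section (0,0)
assert lies_on(model4, power(t, 2), 0)  # torsion section (t^2, 0)
```

For $(t^2,0)$ both sides vanish, the right-hand side because $t^6+t^2\cdot t^4=0$ in characteristic $2$.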
As an extra feature there is a $2$-torsion section $(t^2/(t+1)^2, t^4/(t+1)^3)$ meeting the zero section. (This can only happen for $p^n$-torsion in characteristic $p$; Shioda calls such torsion sections peculiar in \cite{OS}). Sections of order ten are e.g.~$P=(t,t)$ and $(t^2+t^3, t^4)$. \subsubsection*{Connection with \#5} $u=x/t^2$ extracts $\tilde D_6$ from $\tilde A_9$'s and zero section. The remaining fiber components combine with sections $4P, 6P$ (at $t=0$) resp.~$2P, 8P$ (at $t=\infty$) for two further copies of $\tilde D_6$. $A_1$'s stay unchanged. The eight two-torsion sections of the new fibration come from the remaining four fiber components of the two $\tilde A_9$ fibers and the four ten-torsion sections $P, 3P, 7P, 9P$. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,5.5)(-0.25,0.5) \thicklines \put(6,3){\circle*{.1}} \put(6.25,4){\circle*{.1}} \put(7.25,4.5){\circle*{.1}} \put(7.25,4.5){\circle{.2}} \put(8.25,4.5){\circle*{.1}} \put(9.5,3){\circle*{.1}} \put(9.25,4){\circle*{.1}} \put(6,3){\line(1,4){.25}} \put(6.25,4){\line(2,1){1}} \put(7.25,4.5){\line(1,0){1}} \put(9.5,3){\line(-1,4){.25}} \put(9.25,4){\line(-2,1){1}} \put(6.25,2){\circle*{.1}} \put(7.25,1.5){\circle*{.1}} \put(7.25,1.5){\circle{.2}} \put(8.25,1.5){\circle*{.1}} \put(9.25,2){\circle*{.1}} \put(6,3){\line(1,-4){.25}} \put(6.25,2){\line(2,-1){1}} \put(7.25,1.5){\line(1,0){1}} \put(9.5,3){\line(-1,-4){.25}} \put(9.25,2){\line(-2,-1){1}} \put(2.75,4.5){\circle{.2}} \put(2.75,1.5){\circle{.2}} \put(0.5,3){\circle*{.1}} \put(0.75,4){\circle*{.1}} \put(1.75,4.5){\circle*{.1}} \put(2.75,4.5){\circle*{.1}} \put(.75,4){\line(0,1){.75}} \put(.74,4.75){\circle{.1}} \put(.9,4.75){$6P$} \qbezier(.75,4.75)(-3.5,-2)(7.25,1.5) \put(.75,2){\line(0,-1){.75}} \put(.75,1.25){\circle{.1}} \put(.9,1.05){$4P$} \qbezier(.75,1.25)(-3.5,8)(7.25,4.5) \put(4,3){\circle*{.1}} \put(3.75,4){\circle*{.1}} \put(.5,3){\line(1,4){.25}} \put(.75,4){\line(2,1){1}} \put(1.75,4.5){\line(1,0){1}} 
\put(4,3){\line(-1,4){.25}} \put(3.75,4){\line(-2,1){1}} \put(.75,2){\circle*{.1}} \put(1.75,1.5){\circle*{.1}} \put(2.75,1.5){\circle*{.1}} \put(3.75,2){\circle*{.1}} \put(.5,3){\line(1,-4){.25}} \put(.75,2){\line(2,-1){1}} \put(1.75,1.5){\line(1,0){1}} \put(4,3){\line(-1,-4){.25}} \put(3.75,2){\line(-2,-1){1}} \put(5,3){\circle{.1}} \put(4,3){\line(1,0){2}} \put(5.1,3.15){$O$} \put(8.25,1){\circle{.1}} \put(9.25,2){\line(-1,-1){1}} \put(8.4,0.7){$2P$} \qbezier(2.75,1.5)(7,0)(8.25,1) \put(8.25,5){\circle{.1}} \put(9.25,4){\line(-1,1){1}} \put(8.4,5){$8P$} \qbezier(2.75,4.5)(7,6)(8.25,5) \thinlines \put(5,1.2){$u=\infty$} \put(3.5,1.5){\framebox(3,3){}} \put(0,.75){\framebox(2.1,4.5){}} \put(8,.5){\framebox(2.1,5){}} \end{picture} \caption{Three $\tilde D_6$ divisors supported on two $\tilde A_9$'s and sections} \label{Fig:4-5} \end{figure} \subsubsection*{Connection with \#12} $u=x$ extracts $\tilde E_7$ from $\tilde A_9$ at $\infty$ and zero section. Non-identity components of $\tilde A_9$ at $t=0$ and sections $2P, 8P$ form $\tilde D_{10}$; the two $A_1$'s formed by the non-identity fiber components at roots of $t^2+t+1$ remain, and there is another $A_1$ given by the opposite component of the $\tilde A_9$ at $\infty$. The sections of \#12 are thus given by the two fiber components indicated in the figure, and by the old sections $P, 9P$. \begin{figure}[ht!] 
\setlength{\unitlength}{.45in} \begin{picture}(10,5)(-0.25,0.5) \thicklines \put(6,3){\circle*{.1}} \put(6.25,4){\circle*{.1}} \put(7.25,4.5){\circle*{.1}} \put(8.25,4.5){\circle*{.1}} \put(9.5,3){\circle*{.1}} \put(9.25,4){\circle*{.1}} \put(6,3){\line(1,4){.25}} \put(6.25,4){\line(2,1){1}} \put(7.25,4.5){\line(1,0){1}} \put(9.5,3){\line(-1,4){.25}} \put(9.25,4){\line(-2,1){1}} \put(6.25,2){\circle*{.1}} \put(7.25,1.5){\circle*{.1}} \put(8.25,1.5){\circle*{.1}} \put(9.25,2){\circle*{.1}} \put(9.25,2){\circle{.2}} \put(9.25,4){\circle{.2}} \put(6,3){\line(1,-4){.25}} \put(6.25,2){\line(2,-1){1}} \put(7.25,1.5){\line(1,0){1}} \put(9.5,3){\line(-1,-4){.25}} \put(9.25,2){\line(-2,-1){1}} \put(0.5,3){\circle*{.1}} \put(0.75,4){\circle*{.1}} \put(1.75,4.5){\circle*{.1}} \put(2.75,4.5){\circle*{.1}} \put(4,3){\circle*{.1}} \put(3.75,4){\circle*{.1}} \put(.5,3){\line(1,4){.25}} \put(.75,4){\line(2,1){1}} \put(1.75,4.5){\line(1,0){1}} \put(4,3){\line(-1,4){.25}} \put(3.75,4){\line(-2,1){1}} \put(.75,2){\circle*{.1}} \put(1.75,1.5){\circle*{.1}} \put(2.75,1.5){\circle*{.1}} \put(3.75,2){\circle*{.1}} \put(.5,3){\line(1,-4){.25}} \put(.75,2){\line(2,-1){1}} \put(1.75,1.5){\line(1,0){1}} \put(4,3){\line(-1,-4){.25}} \put(3.75,2){\line(-2,-1){1}} \put(5,3){\circle{.1}} \put(4,3){\line(1,0){2}} \put(5.1,3.15){$O$} \put(3.25,1){\circle{.1}} \put(3.25,1){\line(-1,1){.5}} \put(3.3,1.15){$2P$} \qbezier(3.25,1)(9,0)(9.25,2) \put(3.25,5){\circle{.1}} \put(3.25,5){\line(-1,-1){.5}} \put(3.3,4.7){$8P$} \qbezier(3.25,5)(9,6)(9.25,4) \thinlines \put(4.85,1.4){$u=\infty$} \put(.4,.9){$u=0$} \put(4.75,1.25){\framebox(3.75,3.5){}} \put(0.25,.75){\framebox(3.65,4.5){}} \end{picture} \caption{$\tilde E_7$ and $\tilde D_{10}$ divisors supported on two $\tilde A_9$'s and sections} \label{Fig:4-12} \end{figure} \subsection{\#5: $R(M) = A_1^2 D_6^3$} \label{s:5} For this quasi-elliptic fibration, we shall exhibit two models in order to transfer from the models with $\rho(X/\mathbb{F}_2)=15$ (\#'s 
1, 2) to all other fibrations with optimal models of $\rho(X/\mathbb{F}_2)=21$. We start with the quasi-elliptic fibration $[0,0,0,t(t^3+1)^2,0]$ with $\tilde D_6$'s at roots of $t^3+1$ and $\tilde A_1$'s at $t=0,\infty$. This model has $\rho(X/\mathbb{F}_2)=15$: from $\rho(X/\mathbb{F}_4)=22$, we first have to subtract $6$ divisors for the two $\tilde D_6$ that are conjugate over $\mathbb{F}_4$. By Tate's algorithm, the far components of the $\tilde D_6$ at $t=1$ are also conjugate over $\mathbb{F}_4$. This accounts for the seventh divisor which is not Galois invariant over $\mathbb{F}_2$. $\mbox{MW}\cong (\mathbb{Z}/2\mathbb{Z})^3$ with sections $P=(0,0), ((t+1)(t^3+1),(t^3+1)^2), Q=((t^2+t+1)t, (t^2+t+1)^2t)$ and their images under the automorphism $(x,y,t)\mapsto (\varrho x, y, \varrho^2 t)$. \subsubsection*{Connection with \#2} $u=x/(t^3+1)$ extracts two $\tilde D_4$'s from identity components of $\tilde D_6$'s and $\tilde A_1$ at $\infty$ plus zero section (at $u=\infty$) or from the section $P$ and the fiber components outside $t=\infty$ meeting it (at $u=0$). As new sections, we derive some double fiber components as depicted in the figure. Note that one of them is indeed defined over $\mathbb{F}_2$. \begin{figure}[ht!]
\setlength{\unitlength}{.45in} \begin{picture}(10,5.2)(0,-0.5) \thicklines \multiput(3,2)(4,0){2}{\circle{.1}} \multiput(4,2)(1,0){3}{\circle*{.1}} \multiput(4,1)(1,0){3}{\circle*{.1}} \multiput(4,3)(1,0){3}{\circle*{.1}} \multiput(4,4)(2,0){2}{\circle*{.1}} \multiput(4,0)(2,0){2}{\circle*{.1}} \multiput(5,3.3)(0,-1){3}{\circle*{.1}} \multiput(5,3.6)(0,-1){3}{\circle*{.1}} \multiput(5.3,3.6)(0,-1){3}{\circle*{.1}} \multiput(4.7,3.6)(0,-1){3}{\circle*{.1}} \multiput(5,3.6)(0,-1){3}{\line(0,-1){.6}} \multiput(4.7,3.6)(0,-1){3}{\line(1,0){.6}} \put(3,2){\line(1,2){1}} \put(3,2){\line(1,1){1}} \put(3,2){\line(1,-1){1}} \put(3,2){\line(1,-2){1}} \put(7,2){\line(-1,2){1}} \put(7,2){\line(-1,-2){1}} \put(7,2){\line(-1,1){1}} \put(7,2){\line(-1,-1){1}} \qbezier(4,0)(5,.3)(6,0) \qbezier(4,0)(5,-0.3)(6,0) \qbezier(4,4)(5,4.3)(6,4) \qbezier(4,4)(5,3.7)(6,4) \put(7.1,2.2){$P$} \put(2.7,2.1){$O$} \multiput(5,3)(0,-1){3}{\circle{.2}} \put(3,2){\line(1,0){4}} \put(4,3){\line(1,0){2}} \put(4,1){\line(1,0){2}} \thinlines \put(5.8,0.8){\framebox(2.4,3.4){}} \put(1.8,-0.2){\framebox(2.6,3.4){}} \put(2,0){$u=\infty$} \put(7.2,1){$u=0$} \end{picture} \caption{Two $\tilde D_4$ fibers supported on three $\tilde D_6$'s, two $\tilde A_1$'s and two sections} \label{Fig:5-2} \end{figure} In order to connect with \#9, we exhibit another model of this fibration that admits the maximal Picard number $\rho(X/\mathbb{F}_2)=21$. The coordinate change \begin{eqnarray} \label{eq:5} \;\;\;\;\;\; (x,y,t) \mapsto (\varrho^2 x/(t+1+\varrho^2)^2, y/(t+1+\varrho^2)^3, \varrho (t+1+\varrho)/(t+1+\varrho^2)) \end{eqnarray} yields the quasi-elliptic fibration $[0,0,0,t^2(t+1)^2(t^2+t+1),0]$. One easily verifies that the $\tilde D_6$ fibers have all components defined over $\mathbb{F}_2$, so $\rho(X/\mathbb{F}_2)=21$. \subsubsection*{Connection with \#9} $u=x/((t^2+t+1)t)$ extracts $\tilde D_4$ from zero section and identity components of $\tilde A_1$'s and $\tilde D_6$'s at $0$ and $\infty$.
There are two disjoint copies of $\tilde D_8$. One involves most of the $\tilde D_6$ at $t=1$ as in the figure; the other connects the two $\tilde D_6$ at $0$ and $\infty$ by the section $Q$ . In the new coordinates of \eqref{eq:5}, this section reads $Q=(t(t^2+t+1),t^2(t^2+t+1))$. As new torsion sections, we identify the two fiber components depicted in the figure, and the two old sections $((t+1)(t^2+t+1),(t+1)^2(t^2+t+1))$ and $(t(t+1)(t^2+t+1),t^2(t+1)^2(t^2+t+1))$. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,5.6)(0,-0.5) \thicklines \multiput(3,2)(4,0){2}{\circle{.1}} \multiput(4,2)(1,0){3}{\circle*{.1}} \multiput(4,4)(1,0){3}{\circle*{.1}} \multiput(4,3)(1,0){3}{\circle*{.1}} \multiput(4,1)(2,0){2}{\circle*{.1}} \multiput(4,0)(2,0){2}{\circle*{.1}} \multiput(5,4.3)(0,-1){3}{\circle*{.1}} \multiput(5,4.6)(0,-1){3}{\circle*{.1}} \multiput(5.3,4.6)(0,-1){3}{\circle*{.1}} \multiput(4.7,4.6)(0,-1){3}{\circle*{.1}} \multiput(5,4.6)(0,-1){3}{\line(0,-1){.6}} \multiput(4.7,4.6)(0,-1){3}{\line(1,0){.6}} \put(3,2){\line(1,2){1}} \put(3,2){\line(1,1){1}} \put(3,2){\line(1,-1){1}} \put(3,2){\line(1,-2){1}} \put(7,2){\line(-1,2){1}} \put(7,2){\line(-1,-2){1}} \put(7,2){\line(-1,1){1}} \put(7,2){\line(-1,-1){1}} \qbezier(4,0)(5,.3)(6,0) \qbezier(4,0)(5,-0.3)(6,0) \qbezier(4,1)(5,1.3)(6,1) \qbezier(4,1)(5,0.7)(6,1) \put(7.1,2.2){$P$} \put(2.7,2.1){$O$} \put(3,2){\line(1,0){4}} \put(4,3){\line(1,0){2}} \put(4,4){\line(1,0){2}} \multiput(5,3)(0,-1){2}{\circle{.2}} \thinlines \put(1.8,-0.2){\framebox(2.6,3.4){}} \put(5.8,1.8){\line(0,1){2}} \put(5.8,1.8){\line(1,0){2.4}} \put(8.2,1.8){\line(0,1){3}} \put(5.8,3.8){\line(-1,0){1.4}} \put(4.4,3.8){\line(0,1){1}} \put(4.4,4.8){\line(1,0){3.8}} \put(2,0){$u=\infty$} \put(7.2,4.4){$u=0$} \end{picture} \caption{$\tilde D_4$ and $\tilde D_8$ supported on three $\tilde D_6$'s, two $\tilde A_1$'s and two sections} \label{Fig:5-9} \end{figure} \subsection{\#6: $R(M) = D_7A_{11}$} \label{ss:6} Elliptic fibration 
given by $[1,t^3,t^3,0,0]$ with $\tilde A_{11}$ at $t=0$ and $\tilde D_7$ at $\infty$. It arises as a cubic base change $s=t^3$ from a rational elliptic surface.\\ $\mbox{MW} = \mathbb{Z}/4\mathbb{Z} \times A_2[2/3]$. Torsion generated by $(0,0)$; minimal sections $(t^3 + \varrho t^2, \varrho^2 t^4)$ for $\varrho^3=1$ and their negatives. Over $\Q$, the arithmetic and geometry of this fibration have been studied in detail in \cite{S-MJM}. In particular, the connection to \#8 has been worked out over $\Q$, and a divisor of type $\tilde D_{20}$ as in \#18 has been identified over $\mathbb{F}_4$, albeit without expressing its linear system in terms of the above Weierstrass form. \subsection{\#7: $R(M) = A_3E_6A_{11}$} Model for instance $[1,0,t^4,0,0]$.\\ Singular fibers $\tilde A_{11}$ at $t=0$, $\tilde A_3$ at $t=1$ and $\tilde E_6$ at $\infty$.\\ $\mbox{MW}=\mathbb{Z}/6\mathbb{Z}$, generated by $P=(t^2,t^2)$. 3-torsion: $4P=(0,0)$, 2-torsion: $3P=(t^4,t^6)$. \subsubsection*{Connection with \#4} $u =(y-x) / (t(x-t^2))$ extracts two divisors of type $\tilde A_9$ from $\tilde A_{11}$ and $\tilde E_6$ connected by zero section and 6-torsion section $5P=(t^2,t^4)$ on the one hand and by $P, 4P$ on the other hand. The odd components of $\tilde A_3$ are not met by any section and thus form two $A_1$'s. There are three new sections given by fiber components as shown in the figure plus $2P, 3P$ and the even components of $\tilde A_3$. \begin{figure}[ht!]
\setlength{\unitlength}{.45in} \begin{picture}(10,5.8)(-0.7,0) \thicklines \put(6,5){\circle*{.1}} \put(7,5){\circle*{.1}} \put(7,5){\circle{.2}} \put(8,4.5){\circle*{.1}} \put(8.5,3.5){\circle*{.1}} \put(8.5,2.5){\circle*{.1}} \put(8,1.5){\circle*{.1}} \put(7,1){\circle*{.1}} \put(6,1){\circle*{.1}} \put(5,4.5){\circle*{.1}} \put(4.5,3.5){\circle*{.1}} \put(4.5,2.5){\circle*{.1}} \put(5,1.5){\circle*{.1}} \put(6,5){\line(1,0){1}} \put(7,5){\line(2,-1){1}} \put(8,4.5){\line(1,-2){.5}} \put(8.5,3.5){\line(0,-1){1}} \put(8.5,2.5){\line(-1,-2){.5}} \put(8,1.5){\line(-2,-1){1}} \put(7,1){\line(-1,0){1}} \put(6,1){\line(-2,1){1}} \put(5,4.5){\line(2,1){1}} \put(4.5,2.5){\line(0,1){1}} \put(4.5,2.5){\circle{.2}} \put(4.5,3.5){\line(1,2){.5}} \put(5,1.5){\line(-1,2){.5}} \put(3.5,3.5){\circle{.1}} \put(3.55,3.65){$O$} \multiput(0.5,3.5)(1,0){3}{\circle*{.1}} \multiput(-1,5)(0.75,-0.75){2}{\circle*{.1}} \multiput(-1,2)(0.75,0.75){2}{\circle*{.1}} \put(-1,5){\line(1,-1){1.5}} \put(-1,2){\line(1,1){1.5}} \put(.5,3.5){\line(1,0){4}} \put(-0.25,2.75){\circle{.2}} \put(2.5,5){\circle{.1}} \put(-1,5){\line(1,0){7}} \put(2.55,4.7){$5P$} \put(2,2){\circle{.1}} \put(2,2){\line(6,-1){3}} \put(-1,2){\line(1,0){3}} \put(2.05,1.6){$P$} \put(-1,2){\line(4,-1){4}} \put(3,1){\circle{.1}} \qbezier(3,1)(13,-1)(8,4.5) \put(3.1,1.1){$4P$} \thinlines \put(-1.5,2.25){\line(0,-1){2}} \put(-1.5,2.25){\line(1,0){9}} \put(-1.5,0.25){\line(1,0){11.5}} \put(10,0.25){\line(0,1){5.25}} \put(7.5,2.25){\line(0,1){3.25}} \put(7.5,5.5){\line(1,0){2.5}} \put(-1.5,3){\framebox(8,2.5){}} \end{picture} \caption{Two $\tilde A_9$ divisors supported on $\tilde E_6, \tilde A_{11}$ and torsion sections} \label{Fig:7-4} \end{figure} \subsubsection*{Connection with \#8} $u = (x-t^3)/(t^4-t^3)$ extracts two $\tilde E_6$'s from $\tilde A_3$ and $\tilde A_{11}$ connected through $O$ and $3P$. The third copy of $\tilde E_6$ comes from the root lattice $E_6$ of non-identity components of the original $\tilde E_6$ fiber. 
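The torsion sections of \#7 entering these connections can be checked against the model $[1,0,t^4,0,0]$ by the same kind of $\mathbb{F}_2[t]$ bit-arithmetic; the following is a verification sketch with ad-hoc helper names, not part of the argument:

```python
def mul(a, b):
    # product in F2[t]; ints encode polynomials, bit i = coefficient of t^i
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def power(a, n):
    r = 1
    for _ in range(n):
        r = mul(r, a)
    return r

t = 0b10
a3 = power(t, 4)

def lies_on(x, y):
    # y^2 + x*y + t^4*y = x^3, the model [1,0,t^4,0,0] of #7
    return (mul(y, y) ^ mul(x, y) ^ mul(a3, y)) == power(x, 3)

sections = {
    'P':  (power(t, 2), power(t, 2)),
    '3P': (power(t, 4), power(t, 6)),
    '4P': (0, 0),
    '5P': (power(t, 2), power(t, 4)),
}
for x, y in sections.values():
    assert lies_on(x, y)
```

For $P=(t^2,t^2)$, for instance, the left-hand side collapses to $t^4+t^4+t^6=t^6=x^3$ in characteristic $2$.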
\begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,4.8)(0,.7) \thicklines \put(6,5){\circle*{.1}} \put(7,5){\circle*{.1}} \put(7,5){\circle{.2}} \put(8,4.5){\circle*{.1}} \put(8.5,3.5){\circle*{.1}} \put(8.5,2.5){\circle*{.1}} \put(8,1.5){\circle*{.1}} \put(7,1){\circle*{.1}} \put(6,1){\circle*{.1}} \put(5,4.5){\circle*{.1}} \put(4.5,3.5){\circle*{.1}} \put(4.5,2.5){\circle*{.1}} \put(5,1.5){\circle*{.1}} \put(6,5){\line(1,0){1}} \put(7,5){\line(2,-1){1}} \put(8,4.5){\line(1,-2){.5}} \put(8.5,3.5){\line(0,-1){1}} \put(8.5,2.5){\line(-1,-2){.5}} \put(8,1.5){\line(-2,-1){1}} \put(7,1){\line(-1,0){1}} \put(6,1){\line(-2,1){1}} \put(6,1){\circle{.2}} \put(5,4.5){\line(2,1){1}} \put(4.5,2.5){\line(0,1){1}} \put(4.5,3.5){\line(1,2){.5}} \put(5,1.5){\line(-1,2){.5}} \put(3.5,3.5){\circle{.1}} \put(3.55,3.65){$O$} \put(2.5,3.5){\line(1,0){2}} \multiput(1.5,4.5)(1,-1){2}{\line(-1,-1){1}} \multiput(1.5,4.5)(1,-1){2}{\circle*{.1}} \multiput(1.5,4.5)(-1,-1){2}{\line(1,-1){1}} \multiput(0.5,3.5)(1,-1){2}{\circle*{.1}} \multiput(1.5,2.5)(0,2){2}{\circle{.2}} \put(0.5,3.5){\line(-3,-1){1}} \put(8.5,2.5){\line(3,1){2}} \put(10,3){\circle*{.1}} \put(9.6,3.1){$3P$} \thinlines \put(2.1,1.4){$u=\infty$} \put(9.6,0.9){$u=0$} \put(2,1.25){\framebox(4.25,4){}} \put(6.75,0.75){\line(1,0){3.75}} \put(6.75,0.75){\line(0,1){4}} \put(6.75,4.75){\line(1,0){3.75}} \put(1,0.75){\line(-1,0){1.5}} \put(1,0.75){\line(0,1){4}} \put(1,4.75){\line(-1,0){1.5}} \end{picture} \caption{Two $\tilde E_6$ divisors supported on $\tilde A_3, \tilde A_{11}$ and 2-torsion sections} \label{Fig:7-8} \end{figure} \subsubsection*{Connection with \#11} $u = (y-t^2) / (t(x-t^2))$ extracts $\tilde A_{17}$ from $\tilde E_6, \tilde A_{11}$ connected through zero section and $P$. Contrary to the connection with \#4, we choose the long way around the $\tilde A_{11}$ fiber. This leaves three $A_1$'s comprising a far component of $\tilde E_6$ as shown in the figure and the odd components of $\tilde A_3$. 
On top of the indicated fiber component, we obtain new sections from the even components of $\tilde A_3$ and $2P, 5P$. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,5.8)(-1.2,0) \thicklines \put(6,5){\circle*{.1}} \put(7,5){\circle*{.1}} \put(8,4.5){\circle*{.1}} \put(8.5,3.5){\circle*{.1}} \put(8.5,2.5){\circle*{.1}} \put(8,1.5){\circle*{.1}} \put(7,1){\circle*{.1}} \put(6,1){\circle*{.1}} \put(5,4.5){\circle*{.1}} \put(4.5,3.5){\circle*{.1}} \put(4.5,2.5){\circle*{.1}} \put(5,1.5){\circle*{.1}} \put(6,5){\line(1,0){1}} \put(7,5){\line(2,-1){1}} \put(8,4.5){\line(1,-2){.5}} \put(8.5,3.5){\line(0,-1){1}} \put(8.5,2.5){\line(-1,-2){.5}} \put(8,1.5){\line(-2,-1){1}} \put(7,1){\line(-1,0){1}} \put(6,1){\line(-2,1){1}} \put(5,4.5){\line(2,1){1}} \put(4.5,2.5){\line(0,1){1}} \put(4.5,3.5){\line(1,2){.5}} \put(5,1.5){\line(-1,2){.5}} \put(3.5,3.5){\circle{.1}} \put(3.55,3.65){$O$} \multiput(0.5,3.5)(1,0){3}{\circle*{.1}} \multiput(-1,5)(0.75,-0.75){2}{\circle*{.1}} \multiput(-1,2)(0.75,0.75){2}{\circle*{.1}} \put(-1,5){\line(1,-1){1.5}} \put(-1,2){\line(1,1){1.5}} \put(.5,3.5){\line(1,0){4}} \put(-.25,2.75){\circle{.2}} \put(2.5,5){\circle{.1}} \put(-1,5){\line(1,0){7}} \put(2.55,4.7){$P$} \thinlines \put(-1.5,3){\line(0,1){2.5}} \put(-1.5,3){\line(1,0){5}} \put(-1.5,5.5){\line(1,0){10.5}} \put(9,5.5){\line(0,-1){5}} \put(3.5,3){\line(0,-1){2.5}} \put(3.5,0.5){\line(1,0){5.5}} \put(4.75,4.25){\framebox(0.5,0.5){}} \end{picture} \caption{$\tilde A_{17}$ divisor supported on $\tilde E_6, \tilde A_{11}$ and torsion sections} \label{Fig:7-11} \end{figure} \subsubsection*{Connection with \#13} $u = x/t^4$ extracts two $\tilde E_7$'s first from $\tilde A_{11}$ adjoined by the zero section and secondly from $\tilde E_6$ adjoined by $2P, 4P$. Remaining components of $\tilde A_{11}$ combine with $3P$ and $A_3$ ($\tilde A_3$ minus identity component) to $\tilde D_6$. Two sections given by fiber components as depicted. \begin{figure}[ht!] 
\setlength{\unitlength}{.45in} \begin{picture}(11,7)(-0.75,0.25) \thicklines \put(4,3){\circle*{.1}} \put(4.25,4){\circle*{.1}} \put(5,4.75){\circle*{.1}} \put(6,5){\circle*{.1}} \put(8,3){\circle*{.1}} \put(7.75,4){\circle*{.1}} \put(7,4.75){\circle*{.1}} \put(7,4.75){\circle{.2}} \put(7,1.25){\circle{.2}} \put(4,3){\line(1,4){0.25}} \put(4.25,4){\line(1,1){.75}} \put(5,4.75){\line(4,1){1}} \put(8,3){\line(-1,4){0.25}} \put(7.75,4){\line(-1,1){.75}} \put(7,4.75){\line(-4,1){1}} \put(4.25,2){\circle*{.1}} \put(5,1.25){\circle*{.1}} \put(6,1){\circle*{.1}} \put(7.75,2){\circle*{.1}} \put(7,1.25){\circle*{.1}} \put(4,3){\line(1,-4){0.25}} \put(4.25,2){\line(1,-1){.75}} \put(5,1.25){\line(4,-1){1}} \put(8,3){\line(-1,-4){0.25}} \put(7.75,2){\line(-1,-1){.75}} \put(7,1.25){\line(-4,-1){1}} \put(3,3){\circle{.1}} \put(3.05,3.15){$O$} \multiput(0,3)(1,0){3}{\circle*{.1}} \multiput(-1,4)(0,-2){2}{\circle*{.1}} \multiput(-.5,3.5)(0,-1){2}{\circle*{.1}} \put(-1,4){\line(1,-1){1}} \put(-1,2){\line(1,1){1}} \put(0,3){\line(1,0){4}} \put(9,3){\circle{.1}} \put(9.1,3.15){$3P$} \qbezier(2,3)(3,11)(9,3) \put(10,3){\circle*{.1}} \put(10.5,3.5){\circle*{.1}} \put(10.5,2.5){\circle*{.1}} \put(8,3){\line(1,0){2}} \put(10,3){\line(1,1){.5}} \put(10,3){\line(1,-1){.5}} \put(1,1){\circle{.1}} \put(1,1){\line(-2,1){2}} \put(1,1.2){$2P$} \qbezier(1,1)(6,-.5)(7,1.25) \put(1,5){\circle{.1}} \put(1,5){\line(-2,-1){2}} \put(1,4.6){$4P$} \qbezier(1,5)(6,6.5)(7,4.75) \thinlines \put(2.9,.9){$u=\infty$} \put(-1.1,0.9){$u=0$} \put(-1.25,0.75){\framebox(2.75,4.5){}} \put(2.75,.75){\framebox(3.5,4.5){}} \put(7.5,1.5){\framebox(3.25,3){}} \end{picture} \caption{Two $\tilde E_7$ and $\tilde D_6$ supported on $\tilde E_6, \tilde A_{11}, A_3$ and sections} \label{Fig:7-13} \end{figure} \subsubsection*{Connection with \#14} $u = (x-1)/(t-1)^2$ extracts $\tilde D_8$ from $\tilde E_6$ and $\tilde A_3$ connected through zero section. $\tilde D_{12}$ given by $A_{11}$ extended by sections $P, 5P$. 
Far components of $\tilde E_6$ serve as new sections. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(11,6.5)(-0.75,0) \thicklines \put(4,3){\circle*{.1}} \put(4.25,4){\circle*{.1}} \put(5,4.75){\circle*{.1}} \put(6,5){\circle*{.1}} \put(8,3){\circle*{.1}} \put(7.75,4){\circle*{.1}} \put(7,4.75){\circle*{.1}} \put(4,3){\line(1,4){0.25}} \put(4.25,4){\line(1,1){.75}} \put(5,4.75){\line(4,1){1}} \put(8,3){\line(-1,4){0.25}} \put(7.75,4){\line(-1,1){.75}} \put(7,4.75){\line(-4,1){1}} \put(4.25,2){\circle*{.1}} \put(5,1.25){\circle*{.1}} \put(6,1){\circle*{.1}} \put(7.75,2){\circle*{.1}} \put(7,1.25){\circle*{.1}} \put(4,3){\line(1,-4){0.25}} \put(4.25,2){\line(1,-1){.75}} \put(5,1.25){\line(4,-1){1}} \put(8,3){\line(-1,-4){0.25}} \put(7.75,2){\line(-1,-1){.75}} \put(7,1.25){\line(-4,-1){1}} \put(3,3){\circle{.1}} \put(2.95,3.2){$O$} \multiput(1.5,4.5)(.5,-.5){3}{\circle*{.1}} \multiput(0,4.5)(.75,0){2}{\circle*{.1}} \put(0,4.5){\line(1,0){1.5}} \put(0,4.5){\circle{.2}} \multiput(1.5,6)(0,-.75){2}{\circle*{.1}} \put(1.5,6){\line(0,-1){1.5}} \put(1.5,6){\circle{.2}} \put(1.5,4.5){\line(1,-1){1.5}} \put(3,3){\line(1,0){1}} \put(2.25,2.25){\circle*{.1}} \put(1.5,2.25){\circle*{.1}} \put(2.25,1.5){\circle*{.1}} \put(2.25,2.25){\line(1,1){.75}} \put(2.25,2.25){\line(0,-1){.75}} \put(2.25,2.25){\line(-1,0){.75}} \put(4.25,5.5){\circle{.1}} \put(4.25,5.5){\line(1,-1){.75}} \put(4.4,5.5){$P$} \qbezier(4.25,5.5)(3,6)(1.5,6) \put(4.25,0.5){\circle{.1}} \put(4.25,0.5){\line(1,1){.75}} \put(4.4,0.3){$5P$} \qbezier(4.25,0.5)(-1,-1)(0,4.5) \thinlines \put(.6,1.4){$u=\infty$} \put(7.3,0.15){$u=0$} \put(.5,1.25){\framebox(2.75,4.25){}} \put(4.125,0){\framebox(4.125,6){}} \end{picture} \caption{$\tilde D_8$ and $\tilde D_{12}$ supported on $\tilde A_3, \tilde E_6, \tilde A_{11}$ and sections} \label{Fig:7-14} \end{figure} \subsection{\#8: $R(M) = E_6^3$} Model for instance $[0, 0, t^2 (t+1)^2, 0, 0]$, as investigated in \cite{S-MJM}. 
Singular fibers at $t=0,1,\infty$. $\mbox{MW} = A_2[2/3] \times \mathbb{Z}/3\mathbb{Z}$. Torsion generated by $(0,0)$. Minimal sections $(\varrho t^2, t^2)$ and their negatives. \subsection{\#9: $R(M) = D_4D_8^2$} $[0,0,0,t^2(t^4+t^2+1),t^5(t^2+1)]$.\\ Singular fibers $\tilde D_8$ at $t=0,\infty$, $\tilde D_4$ at $t=1$.\\ $\mbox{MW}=(\mathbb{Z}/2\mathbb{Z})^2$ with sections $(t,0), (t^3,0), (t^3+t,0)$ \subsection{\#10: $R(M) = D_5 A_{15}$} $[t^2, 0, 0, 1, 0]$\\ Singular fibers $\tilde D_5$ at $t=0$, $\tilde A_{15}$ at $\infty$.\\ $\mbox{MW}=\mathbb{Z}/4\mathbb{Z}$, generated by $P=(1,0)$ with 2-torsion at $(0,0)$. \subsubsection*{Connection with \#6} $u = (x+t+1) / t^2$ extracts $\tilde D_7$ from $\tilde D_5, \tilde A_{15}$ connected through zero section. The disjoint components of $\tilde A_{15}$ form an $A_{11}$. New sections as depicted plus $P, 3P$. \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(10,6)(-0.75,0) \thicklines \put(4,3){\circle*{.1}} \put(4.25,4){\circle*{.1}} \put(4.75,5){\circle*{.1}} \put(5.75,5.5){\circle*{.1}} \put(6.75,5.75){\circle*{.1}} \put(9.5,3){\circle*{.1}} \put(9.25,4){\circle*{.1}} \put(8.75,5){\circle*{.1}} \put(7.75,5.5){\circle*{.1}} \put(4,3){\line(1,4){0.25}} \put(4.25,4){\line(1,2){.5}} \put(4.75,5){\line(2,1){1}} \put(5.75,5.5){\line(4,1){1}} \put(9.5,3){\line(-1,4){0.25}} \put(9.25,4){\line(-1,2){.5}} \put(8.75,5){\line(-2,1){1}} \put(7.75,5.5){\line(-4,1){1}} \put(4.25,2){\circle*{.1}} \put(4.75,1){\circle*{.1}} \put(5.75,0.5){\circle*{.1}} \put(6.75,0.25){\circle*{.1}} \put(9.25,2){\circle*{.1}} \put(8.75,1){\circle*{.1}} \put(7.75,0.5){\circle*{.1}} \put(4,3){\line(1,-4){0.25}} \put(4.25,2){\line(1,-2){.5}} \put(4.75,1){\line(2,-1){1}} \put(5.75,0.5){\line(4,-1){1}} \put(9.5,3){\line(-1,-4){0.25}} \put(9.25,2){\line(-1,-2){.5}} \put(8.75,1){\line(-2,-1){1}} \put(7.75,0.5){\line(-4,-1){1}} \put(3,3){\circle{.1}} \put(3.05,3.15){$O$} \multiput(0,3)(1,0){3}{\circle*{.1}} 
\multiput(-1,4)(0,-2){2}{\circle*{.1}} \put(1,4){\circle*{.1}} \put(-1,4){\line(1,-1){1}} \put(-1,2){\line(1,1){1}} \put(0,3){\line(1,0){4}} \put(1,4){\line(0,-1){1}} \multiput(4.75,5)(0,-4){2}{\circle{.2}} \multiput(-1,4)(0,-2){2}{\circle{.2}} \thinlines \put(0,2){$u=\infty$} \put(-0.25,1.75){\framebox(4.75,2.5){}} \put(5.25,0){\framebox(4.5,6){}} \end{picture} \caption{$\tilde D_7$ and $A_{11}$ supported on $\tilde D_5, \tilde A_{15}$ and zero section} \label{Fig:10-6} \end{figure} \subsection{\#11: $R(M) = A_1^3 A_{17}$} $[t^2, 0, 1, 0, 0]$\\ $\tilde A_1$'s at third roots of unity, $\tilde A_{17}$ at $\infty$.\\ $\mbox{MW}=\mathbb{Z}/6\mathbb{Z}$, generated by $(t,1)$. This fibration appears in \cite[App.~2]{OS} for the peculiar fact that it admits the 2-torsion section $(1/t^2,1/t^3)$ which is not disjoint from the zero section (this is impossible if order and characteristic are coprime). \subsection{\#12: $R(M) = A_1^3E_7D_{10}$} quasi-elliptic $[0,0,0,t^2(t^3+1),0]$.\\ Reducible fibers: $\tilde D_{10}$ at $t=0$, $\tilde E_7$ at $\infty$ and $\tilde A_1$'s at third roots of unity.\\ $\mbox{MW}=(\mathbb{Z}/2\mathbb{Z})^2$ with sections $P=(0,0), Q=(t,t^3), (t^4+t,t^6+t^3).$ \subsubsection*{Connection with \#15} $u = x/t^2$ extracts $\tilde D_{16}$ from $\tilde E_7$ and $D_{10}$ connected through zero section. Far component of $\tilde E_7$ combines with section $P$ and non-identity components of $\tilde A_1$'s to form $\tilde D_4$. \begin{figure}[ht!] 
\setlength{\unitlength}{.45in} \begin{picture}(12,4.2)(0,0.5) \thicklines \multiput(2,3)(1,0){9}{\circle*{.1}} \put(2,3){\line(1,0){8}} \multiput(2,1)(1,0){7}{\circle*{.1}} \put(2,1){\line(1,0){6}} \put(3,1){\circle{.2}} \put(3,4){\circle*{.1}} \put(3,3){\line(0,1){1}} \put(9,4){\circle*{.1}} \put(9,3){\line(0,1){1}} \put(5,2){\circle*{0.1}} \put(5,1){\line(0,1){1}} \put(9,2){\circle{.1}} \put(10,3){\line(-1,-1){2}} \put(8.6,2){$O$} \thinlines \put(1.5,3){\line(0,1){1.5}} \put(1.5,3){\line(1,-1){2.5}} \put(1.5,4.5){\line(1,0){6}} \put(10.5,3){\line(0,-1){2.5}} \put(10.5,0.5){\line(-1,0){6.5}} \put(10.5,3){\line(-2,1){3}} \end{picture} \caption{$\tilde D_{16}$ divisor supported on $\tilde E_7, \tilde D_{10}$ and zero section} \label{Fig:12-15} \end{figure} \subsection{\#13: $R(M) = D_6 E_7^2$} Quasielliptic $[0, 0, 0, t^5+t^3, 0]$\\ Reducible singular fibers $D_6, E_7, E_7$ at $t=1,0,\infty$. \\ $\mbox{MW}=\mathbb{Z}/2\mathbb{Z}$ generated by $P=(0,0)$. \subsection{\#14: $R(M) = D_8 D_{12}$} Quasielliptic $[0, t, 0, t^6, 0]$.\\ Reducible fibers $D_{12}$ at $t=0$ and $D_8$ at $t=\infty$. \\ $\mbox{MW}=\mathbb{Z}/2\mathbb{Z}$ generated by $P=(0,0)$. \subsubsection*{Connection with \#16} $u=x/t^4$ extracts $\tilde E_8$ from $\tilde D_{12}$ adjoined the zero section. $D_8$ then combines with $P$ and remaining components of $\tilde D_{12}$ to form a new copy of $\tilde D_{12}$. \begin{figure}[ht!] 
\setlength{\unitlength}{.45in} \begin{picture}(12,4.2)(-0.85,0.5) \thicklines \multiput(0,3)(1,0){11}{\circle*{.1}} \put(0,3){\line(1,0){10}} \multiput(0,1)(1,0){7}{\circle*{.1}} \put(0,1){\line(1,0){6}} \put(7,3){\circle{.2}} \put(1,4){\circle*{.1}} \put(1,3){\line(0,1){1}} \put(9,4){\circle*{.1}} \put(9,3){\line(0,1){1}} \put(6,1.5){\circle*{0.1}} \put(5,1){\line(2,1){1}} \put(2,1.5){\circle*{.1}} \put(1,1){\line(2,1){1}} \put(0,2){\circle{.1}} \put(0,1){\line(0,1){2}} \put(0.1,2.1){$O$} \put(5,2){\circle{.1}} \put(10,3){\line(-5,-1){5}} \put(5,2){\line(-6,-1){3}} \put(5.05,1.6){$P$} \thinlines \put(-0.5,4.5){\line(0,-1){2.916}} \put(-0.5,4.5){\line(1,0){7}} \put(6.5,4.5){\line(0,-1){1.7}} \put(6.5,2.75){\line(-6,-1){7}} \put(7.5,4.5){\line(0,-1){1.75}} \put(7.5,2.75){\line(-6,-1){7}} \put(7.5,4.5){\line(1,0){3}} \put(10.5,4.5){\line(0,-1){4}} \put(10.5,0.5){\line(-1,0){10}} \put(.5,.5){\line(0,1){1.083}} \put(-0.35,4.15){$u=\infty$} \put(9.5,0.7){$u=0$} \end{picture} \caption{$\tilde E_8$ and $\tilde D_{12}$ divisors supported on $\tilde D_8, \tilde D_{12}$ and sections} \label{Fig:14-16} \end{figure} \subsection{\#15: $R(M) = D_4 D_{16}$} Quasi-elliptic $[0,t^3,0,0,t^3]$.\\ Reducible singular fibers $\tilde D_4$ at $t=0$, $\tilde D_{16}$ at $\infty$.\\ $\mbox{MW}=\mathbb{Z}/2\mathbb{Z}$ with section $(1,1)$. \subsection{\#16: $R(M) = E_8D_{12}$} quasi-elliptic $[0,t^3,0,0,t^5]$.\\ Reducible singular fibers $\tilde E_8$ at $t=0, \tilde D_{12}$ at $\infty$. \subsubsection*{Connection with \#18} $u=(x+t^4)/t^3$ extracts $\tilde D_{20}$ from $\tilde E_8$ and $\tilde D_{12}$ connected by zero section. \begin{figure}[ht!] 
\setlength{\unitlength}{.45in} \begin{picture}(11,5.4)(-.4,-0.5) \thicklines \multiput(0,3)(1,0){11}{\circle*{.1}} \put(0,3){\line(1,0){10}} \multiput(1,1)(1,0){8}{\circle*{.1}} \put(1,1){\line(1,0){7}} \put(1,1){\circle{.2}} \put(1,4){\circle*{.1}} \put(1,3){\line(0,1){1}} \put(9,4){\circle*{.1}} \put(9,3){\line(0,1){1}} \put(3,0){\circle*{0.1}} \put(3,0){\line(0,1){1}} \put(9,2){\circle{.1}} \put(10,3){\line(-1,-1){2}} \put(8.6,2){$O$} \thinlines \put(-0.5,3){\line(0,1){1.5}} \put(-0.5,3){\line(1,-1){3.5}} \put(-0.5,4.5){\line(1,0){8}} \put(10.5,3){\line(0,-1){3.5}} \put(10.5,-0.5){\line(-1,0){7.5}} \put(10.5,3){\line(-2,1){3}} \end{picture} \caption{$\tilde D_{20}$ divisor supported on $\tilde E_8, \tilde D_{12}$ and zero section} \label{Fig:16-18} \end{figure} \subsection{\#17: $R(M) = D_4 E_8^2$} quasi-elliptic: $[0,0,0,0,t^5+t^7]$\\ Reducible fibers: $\tilde D_4$ at $t=1$, $\tilde E_8$ at $0, \infty$. This fibration also features in \cite{Schroeer}, for instance. \subsubsection*{Connection with \#15} $u = x/t^2$ extracts $\tilde D_{16}$ from the two $\tilde E_8$'s connected by the zero section. Far components of $\tilde E_8$ serve as zero and 2-torsion section. $D_4$ is preserved; the additional component needed to form a new $\tilde D_4$ consists of the curve \[ C=\{x=0,\; y^2=t^5(t+1)^2\}, \] which only meets the double component of $\tilde D_4$ and the far components of the two $\tilde E_8$'s. \begin{figure}[ht!]
\setlength{\unitlength}{.45in} \begin{picture}(7,5.2)(-1.75,-.5) \thicklines \multiput(0,3)(1,0){8}{\circle*{.1}} \put(0,3){\line(1,0){7}} \multiput(0,1)(1,0){8}{\circle*{.1}} \put(0,1){\line(1,0){7}} \put(7,1){\circle{.2}} \put(7,3){\circle{.2}} \put(5,4){\circle*{.1}} \put(5,3){\line(0,1){1}} \put(5,0){\circle*{.1}} \put(5,0){\line(0,1){1}} \put(0,2){\circle{.1}} \put(0,1){\line(0,1){2}} \put(0.1,2.1){$O$} \multiput(-3,2)(1,0){3}{\circle*{.1}} \multiput(-2,3)(0,-2){2}{\circle*{.1}} \put(0,2){\line(-1,0){3}} \put(-2,1){\line(0,1){2}} \thinlines \put(-0.4,-.35){$u=\infty$} \put(-3.4,0.65){$u=0$} \put(-0.5,-0.5){\framebox(7,5){}} \put(-3.5,0.5){\framebox(2,3){}} \end{picture} \caption{$\tilde D_{16}$ divisor supported on two $\tilde E_8$'s and zero section} \label{Fig:17-15} \end{figure} \subsection{\#18: $R(M) = D_{20}$} quasi-elliptic, e.g.~$[0,t^3,0,0,t]$ with $\tilde D_{20}$ at $\infty$. \section{Uniqueness of the genus~1 fibrations} \label{s:unique} In the previous section, we have proved that the supersingular K3 surface $X$\/ admits each genus~$1$ fibration from Table \ref{Tab:fibr}. The proof of Theorem \ref{thm} will thus be completed by showing the uniqueness of each fibration. Here it could be possible to argue with the automorphism group of $X$ or to pursue other lattice theoretic ideas. We decided to take a different approach, following \cite{RS2}, that illustrates how quasi-elliptic fibrations can be used to work out models and moduli of supersingular K3 surfaces. Namely the uniqueness problem is stated purely in terms of genus one fibrations: \begin{Proposition} \label{prop} Let $k$ be an algebraically closed field of characteristic two. For each genus~1 fibration from Table \ref{Tab:fibr}, there is exactly one model over $k$ up to isomorphism. \end{Proposition} \begin{Remark} In comparison, on a general Kummer surface of product type the configuration of singular fibers usually does not determine a unique elliptic fibration by \cite{Oguiso}.
This is visible from the $2$-torsion points, see the equations in \cite{KS}. \end{Remark} \begin{proof}[Proof of Proposition \ref{prop} for elliptic fibrations] Suppose $S\to\mathbb{P}^1$ is an elliptic fibration from Table \ref{Tab:fibr}. If the fibration is extremal, then it is a purely inseparable base change of an extremal rational elliptic surface by \cite{Ito2}. The uniqueness thus follows from the corresponding statement for rational elliptic surfaces (cf.~\cite{Ito2}). For \#11, an alternative proof can be found in \cite{SS2}. For the remaining three elliptic fibrations, we can still argue with extremal elliptic surfaces because there is either $3$- or $4$-torsion in $\mbox{MW}(S)$. This implies that they arise from some universal elliptic curves by base change. For $3$-torsion and $j$-invariant zero (\#8), this universal elliptic curve is \[ y^2 + sy = x^3. \] Locating the singular fibers of type $\tilde E_6$ at $0, 1$ and $\infty$, we deduce that the base change can only be $t\mapsto s=t^2(t-1)^2$. For $4$-torsion, we are dealing with the universal elliptic curve \begin{eqnarray} \label{eq:4} y^2 + xy + sy = x^3 + sx^2. \end{eqnarray} In any characteristic other than two, this has three singular fibers: type $I_4$ at $0$, $I_1$ at $s=1/16$ and $I_1^*$ at $\infty$. In characteristic two, however, the latter two are merged, but the fiber type $I_1^*$ stays the same with wild ramification of index one. That is, there are only two singular fibers, and each is reducible. Since fibration \#6 has only two reducible fibers as well, it arises from \eqref{eq:4} through a cyclic base change, i.e.~via $t\mapsto s=t^3$. Similarly, we also deduce that \#3 has no irreducible singular fibers. Locating the singular fibers at $0, 1$ and $\infty$, the fibration thus comes from the base change \[ t\mapsto s=t^2(t+1)^2. \] In particular, the elliptic fibration is unique, and we obtain the model for \#3 in \eqref{eq3}. 
\end{proof} In order to complete the proof of Proposition \ref{prop}, we need a few more general facts about quasi-elliptic fibrations. A good general reference would be the last chapter of \cite{CD}. We have already mentioned that an elliptic curve given by a $5$-tuple $[a_1,a_2,a_3,a_4,a_6]$ is quasi-elliptic in characteristic two if and only if $a_1\equiv a_3\equiv 0$. Completing the cube, we thus obtain the ``traditional'' Weierstrass form \begin{eqnarray} \label{eq:qe} S:\;\;\; y^2 = x^3 + a_4 x + a_6. \end{eqnarray} Contrary to the usual situation, however, this equation still admits the following automorphisms: \[ x\mapsto x+\alpha^2, \;\;\; y\mapsto y+\alpha x+\beta \] in addition to rescaling $x$ and $y$ by a second resp.~third power. Hence $a_4$ and $a_6$ are unique up to the corresponding scaling and up to adding fourth powers resp.~squares. Quasi-elliptic fibrations admit a discriminant that detects the reducible singular fibers: \[ \Delta = a_4 (a_4')^2 + (a_6')^2. \] Here the prime indicates the formal derivative with respect to the parameter of the base curve $\mathbb{P}^1$. As a general rule, the order of vanishing of $\Delta$ equals the rank of the Dynkin diagram associated to (the non-identity components of) the reducible singular fiber. It suffices to distinguish two cases to normalize \eqref{eq:qe}: \begin{enumerate}[(i)] \item If $\Delta$ is a square, then so is $a_4$. Thus we can set $a_6=t\sqrt\Delta$ and $a_4=\alpha^2$ where $\alpha$ does not contain any summand with even exponent. \item If there is a fiber of type $III$ or $III^*$, then $a_6\equiv 0$, and $a_4$ exactly encodes the singular fibers. \end{enumerate} We shall now prove the uniqueness for a few quasi-elliptic fibrations from Table \ref{Tab:fibr}. We choose some cases that illustrate the overall ideas. All other fibrations can be treated along the same lines.
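As a concrete sanity check of this discriminant, one can verify for fibration \#13 (where $a_4=t^5+t^3=t^3(t+1)^2$ and $a_6=0$) that $\Delta=t^7(t+1)^6$, so that the vanishing orders $7$ at $t=0$ and $6$ at $t=1$ match the ranks of the $E_7$ and $D_6$ fibers. A minimal, dependency-free Python sketch (the bitmask encoding of $\mathbb{F}_2[t]$, bit $i$ for the coefficient of $t^i$, is our choice):

```python
# F_2[t] encoded as integer bitmasks: bit i is the coefficient of t^i.

def mul(p, q):
    """Carry-less (characteristic 2) polynomial multiplication."""
    r = 0
    while q:
        if q & 1:
            r ^= p
        p <<= 1
        q >>= 1
    return r

def deriv(p):
    """Formal derivative over F_2: the term i*t^(i-1) survives iff i is odd."""
    r, i = 0, 1
    while p >> i:
        if (p >> i) & 1 and i % 2 == 1:
            r |= 1 << (i - 1)
        i += 1
    return r

a4 = (1 << 5) | (1 << 3)                     # a_4 = t^5 + t^3, a_6 = 0
delta = mul(a4, mul(deriv(a4), deriv(a4)))   # the (a_6')^2 term vanishes

# Delta = t^7 * (t+1)^6: order 7 at t=0 (E_7 fiber), order 6 at t=1 (D_6 fiber).
expected = mul(1 << 7, mul(mul(3, 3), mul(mul(3, 3), mul(3, 3))))
print(delta == expected)  # True
```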
\begin{proof}[Proof of Proposition \ref{prop} for \#13] Due to the singular fibers of type $\tilde E_7$, we are in case (ii) above, i.e.~$a_6=0$. Then fiber types $\tilde D_n$ and $\tilde E_7$ require exact vanishing order $2$ resp.~$3$ of $a_4$. By M\"obius transformation, we can thus normalize \eqref{eq:qe} uniquely as \[ S:\;\; y^2 = x^3 + t^3 (t+1)^2 x. \] The two-torsion section $(0,0)$ implies that $\sigma=1$ as required. \end{proof} \begin{proof}[Proof of Proposition \ref{prop} for \#9 and \#17] We locate the singular fiber of type $\tilde D_4$ at $t=1$ and the other two reducible fibers at $0$ and $\infty$. Then $\Delta=t^8(t-1)^4$. The above considerations reduce the Weierstrass form \eqref{eq:qe} to \[ S:\;\; y^2 = x^3 + (ut+vt^3)^2 x + t^7+t^5. \] Here the special fiber at $t=0$ has type $\tilde E_8$ if $u=0$ and $\tilde D_8$ otherwise; the analogous statement holds at $t=\infty$. We distinguish three cases. First, if $\boldsymbol{u=v=0}$, then we derive \#17 in a unique way. Secondly, if $\boldsymbol{uv=0}$ without both vanishing, then one fiber has type $\tilde E_8$ and the other $\tilde D_8$. Note that such a surface has $\mathop{\rm NS}\nolimits(S) = U \oplus D_4 \oplus D_8 \oplus E_8$ and thus Artin invariant $\sigma=2$, since the fiber type $\tilde E_8$ on a quasi-elliptic surface does not accommodate $2$-torsion sections. In other words, we derive a one-dimensional family of supersingular K3 surfaces such that each member except for \#17 has Artin invariant $\sigma=2$. Finally we consider the case $\boldsymbol{uv\neq 0}$. This yields a two-dimensional family of supersingular K3 surfaces, such that the general member has $\mathop{\rm NS}\nolimits(S)=U+D_4+2D_8$ and Artin invariant $\sigma=3$. Here the Artin invariant drops after either specializing to the previous family or imposing some two-torsion section. The fibration \#9 requires three non-trivial two-torsion sections.
Their intersection behavior with the reducible fibers can be predicted from the height pairing as follows: \begin{table}[ht!] \begin{tabular}{c|ccc} fiber & $\tilde D_4$ & $\tilde D_8$ & $\tilde D_8$\\ \hline fiber & id & far & far\\ comp & non-id & near & far\\ met & non-id & far & near \end{tabular} \end{table} We first investigate a two-torsion section $P=(X,Y)$ that fits into the first row. Here $X$ and $Y$ are polynomials in $t$ of degree at most $4$ resp.~$6$. At $t=0$, it is immediate that $t|X, t^2|Y$. This corresponds to blowing up the surface once at the point $(x,y,t)=(0,0,0)$ and then along the exceptional divisor. In the affine chart $x=tx', y=t^2y''$ this yields \begin{eqnarray} \label{eq:9-1} S:\;\; ty''^2 = x'^3 +(u+vt)^2 x' + t^4+t^2. \end{eqnarray} Here the near simple component of the $\tilde D_8$ fiber is given by $t=x'=0$. The section has to follow the double component $\{t=0, x'=u\}$ through the resolution, so $X=t(u+t\hdots)$. Successively this yields $t^3|Y$ and $X=t(u+t/\sqrt{u}+t^2\hdots)$. By symmetry, the same argument applies to the fiber at $\infty$. We deduce $\deg(Y)\leq 3$ and $X=t^3v+t^2/\sqrt{v}+\hdots$. Combining the information from $t=0$ and $t=\infty$, we deduce $u=v$ and find a unique section $P=(t(u+t/\sqrt{u}+ut^2), u^{3/2}t^3)$. Again we have thus found a family of supersingular K3 surfaces with Artin invariant $\sigma\leq 2$. We continue by imposing a torsion section $Q=(\mathcal X, \mathcal Y)$ of the second kind, say meeting the fiber at $\infty$ at a far component. As before, this implies $\deg(\mathcal Y)\leq 3$ and $\mathcal X=t^3u+t^2/\sqrt{u}+\hdots$. By \eqref{eq:9-1}, the near component of the fiber at $t=0$ is met if and only if $t^2|\mathcal X, \mathcal Y$, so $\mathcal X=t^3u+t^2/\sqrt{u}$. Finally the intersection of a non-identity component at $t=1$ requires $(t+1)|\mathcal X, \mathcal Y$. Hence $u=1/\sqrt{u}$, i.e.~$u^3=1$. The three possible choices are identified by scaling $x$ by third roots of unity.
Hence we can assume $u=1$ and find the section $Q=(t^2(t+1), t^2(t+1))$. This shows that the quasi-elliptic fibrations \#9 and \#17 are unique. \end{proof} For all other quasi-elliptic fibrations from Table \ref{Tab:fibr}, uniqueness can be proven along similar lines. The cases with five reducible fibers, which at first sight might look most complicated, are greatly simplified by the following easy observation: Any genus~$1$ fibration from Table \ref{Tab:fibr} has Artin invariant $\sigma=1$; thus it gives a model of our supersingular K3 surface $X$. Now $X$ has a model with all of $\mathop{\rm NS}\nolimits(X)$ defined over $\mathbb{F}_4$. By the argument in Section \ref{s:g=1}, it follows that any genus~$1$ fibration on $X$ admits such a model, too. For the genus~$1$ fibrations with five reducible fibers, this identifies the locus of reducible fibers on the base curve as $\mathbb{P}^1(\mathbb{F}_4)$, which essentially fixes the Weierstrass form \eqref{eq:qe}. Then it remains to check for precise fiber types and for fiber components to be defined over $\mathbb{F}_4$. For instance, for \#2 this means that we can work with a Weierstrass form \[ S:\;\; y^2 = x^3 + \alpha t^2 x + (t^3+1)^3\;\;\; (\alpha\in\mathbb{F}_4). \] Here the components of the fiber at $t=1$ are encoded in the roots of the polynomial $T^3+\alpha T + 1$. It is easily checked that this polynomial splits over $\mathbb{F}_4$ if and only if $\alpha=0$. We derive the model for \#2 in \ref{ss:2} with \MoW\ group as specified. The details for the remaining cases are left to the reader. \section{Points and lines in $\mathbb{P}^2(\mathbb{F}_4)$} \label{s:config} Consider the elliptic fibration \#1 with $R(L)=A_5^4$ and $\mbox{MW}\cong\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/6\mathbb{Z}$. There are 42 obvious $(-2)$ curves formed by the 24 components of the singular fibers and the 18 torsion sections.
It is easily verified that the configuration of these 42 rational curves is the incidence graph of the 21 points and 21 lines of $\mathbb{P}^2(\mathbb{F}_4)$ (cf.~\cite{DK}, \cite{KK}). This gives another way to see the large finite automorphism group $\mbox{PGL}_3(\mathbb{F}_4)\times\mathbb{Z}/2\mathbb{Z}$ acting on $X$. We remark that the 42 roots of $\mathop{\rm NS}\nolimits(X)$ under consideration are known as the first Vinberg batch of roots for $\mbox{I}_{1,21}$ (which contains $\mathop{\rm NS}\nolimits(X)$ as even sublattice, see \cite[p.~551]{CS}). Note also that fiber components and sections over $\mathbb{F}_2$ induce the incidence graph of $\mathbb{P}^2(\mathbb{F}_2)$, so our identification is compatible with the Galois action. For each of the other 17 fibrations in our list, most or all of the $(-2)$ curves from $R(L)$ and torsion sections can already be seen in the $\mathbb{P}^2(\mathbb{F}_4)$ picture. For example, for the quasi-elliptic fibration \#2 with $R(L)=D_4^6$ and $\mbox{MW}=(\mathbb{Z}/2\mathbb{Z})^4$, fiber components and sections give 41 rational curves which correspond to all but one of the 42 vertices of the incidence graph. For a few other cases, see the discussion below. From our classification of genus 1 fibrations on $X$ we can extract information about specific subgraphs of the incidence graph: \begin{Theorem} \label{thm:inc} The incidence graph of points and lines in $\mathbb{P}^2(\mathbb{F}_4)$ does not contain any cycle of length $14$ or $2n$ with $n\geq 10$. \end{Theorem} \begin{proof} If there were such a cycle, then we would find a corresponding effective divisor on $X$ via the elliptic fibration \#1. As explained in Section \ref{s:g=1}, this divisor would induce an elliptic fibration on $X$ with the cycle as singular fiber of type $I_{2n}$ (jacobian by Theorem \ref{thm:jac}). Then the classification of genus 1 fibrations on $X$ leads to the desired contradiction. 
\end{proof} \begin{Remark} Alternatively one can infer $n<11$ from the Shioda-Tate formula and $n\neq 10$ from \cite{S-max}, but we are not aware of an easy argument ruling out $n=7$. \end{Remark} \begin{Proposition} Let $n\in\N$. Assume that there are $n$ points $P_i\in \mathbb{P}^2(\mathbb{F}_4)\; (i\in\mathbb{Z}/n\mathbb{Z})$ such that $P_i, P_{i+1}, P_j$ are never collinear for distinct $i, i+1, j$. Then $n\in\{3,4,5,6,8,9\}$. Conversely for each such $n$, there is a $2n$-cycle in $\mathbb{P}^2(\mathbb{F}_4)$. \end{Proposition} \begin{proof} All other cases are ruled out by Theorem \ref{thm:inc}, so the first statement of the proposition follows. As for the existence part, all $2n$-cycles for $n<9$ can easily be realized in the affine plane $\mathbb{A}(\mathbb{F}_4)$ by way of horizontal and vertical lines and the diagonal, say. As for the $18$-cycle, one can connect, for instance, the affine points $(0,0), (\varrho^2,0), (\varrho,1),(\varrho^2,1),(\varrho,\varrho),(\varrho^2,\varrho),(\varrho,\varrho^2),(1,\varrho^2)$ and the infinite point $[0,1,0]$. \end{proof} We can be even more specific by analyzing the roots perpendicular to the given $2n$-cycle (thus forming fiber components of the induced elliptic fibration), and the points and lines giving rise to sections. In the counts, $a+b$ indicates the partition between points and lines in $\mathbb{P}^2(\mathbb{F}_4)$. \subsection{$\mathbf{\tilde A_5}$} There are $9+9$ disjoint roots, forming another three $\tilde A_5$ hexagons, plus $9+9$ sections (roots that meet exactly one of the $\tilde A_5$ vertices) comprising the full $\mbox{MW}$ group. Of course, this was expected since we started our current investigation exactly with this fibration. \subsection{$\mathbf{\tilde A_7}$} \label{ss:a7} $7+7$ disjoint roots, forming the remaining $\tilde A_7$ and $\tilde D_5$ fibers of \#3, and $8+8$ sections. Here $\mbox{MW}$ has rank 1, so the sections can only comprise part of it. 
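The $18$-cycle exhibited in the proof of the proposition above can also be checked by a short computation. A self-contained Python sketch (the integer encoding of $\mathbb{F}_4$, with $\varrho\mapsto 2$ and $\varrho^2\mapsto 3$, is our choice):

```python
# F_4 = {0, 1, r, r^2} encoded as integers 0..3: bit 0 is the constant
# coefficient, bit 1 the coefficient of r, where r^2 = r + 1. Addition is XOR.

def f4_mul(u, v):
    a, b = u & 1, u >> 1
    c, d = v & 1, v >> 1
    # (a + b r)(c + d r) = ac + (ad + bc) r + bd r^2, with r^2 = r + 1
    return ((a * c + b * d) % 2) | (((a * d + b * c + b * d) % 2) << 1)

def cross(P, Q):
    """Line through two projective points; no signs needed in characteristic 2."""
    (x1, y1, z1), (x2, y2, z2) = P, Q
    return (f4_mul(y1, z2) ^ f4_mul(y2, z1),
            f4_mul(z1, x2) ^ f4_mul(z2, x1),
            f4_mul(x1, y2) ^ f4_mul(x2, y1))

def on_line(L, P):
    return f4_mul(L[0], P[0]) ^ f4_mul(L[1], P[1]) ^ f4_mul(L[2], P[2]) == 0

# The 21 points of P^2(F_4): first nonzero coordinate normalized to 1.
pts = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)
       if x == 1 or (x == 0 and y == 1) or (x == 0 and y == 0 and z == 1)]

r, r2 = 2, 3
cycle = [(0, 0, 1), (r2, 0, 1), (r, 1, 1), (r2, 1, 1), (r, r, 1),
         (r2, r, 1), (r, r2, 1), (1, r2, 1), (0, 1, 0)]  # [0:1:0] at infinity

# No line through consecutive cycle points contains a third point of the cycle.
ok = all(not on_line(cross(cycle[i], cycle[(i + 1) % 9]), R)
         for i in range(9)
         for R in cycle if R not in (cycle[i], cycle[(i + 1) % 9]))
print(len(pts), ok)  # 21 True
```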
\subsection{$\mathbf{\tilde A_9}$} $6+6$ disjoint roots, forming the other $\tilde A_9$ of \#4 and two isolated $A_1$'s; $5+5$ sections, accounting for the full MW group. \subsection{$\mathbf{\tilde A_{11}}$} \label{ss:a11} There are two possibilities. In one case, the vertices of the same parity on both the hexagon and its dual are always collinear. Then there are $4+4$ disjoint roots, forming a $\tilde D_7$ system, so we have the case of \#6 with MW rank 2. There are $6+6$ sections. In the other case, either the hexagon or its dual is a ``hyperoval'', with no three points collinear (and the other has vertices of the same parity collinear). Here there are $6+4$ disjoint roots, forming $\tilde E_6$ and $A_3$ of \#7. There are $6+0$ sections, accounting for the full MW group. (The 0 was expected because no line meets a hyperoval in exactly one point). \subsection{$\mathbf{\tilde A_{15}}$} \label{ss:a15} Here if we look at points of the same parity on the octagon and its dual, three of the resulting four sets of 4 points are collinear and the last is in general linear position. There are $2+3$ disjoint roots, forming a $D_5$ root system, consistent with the case \#10. There are $4+0$ sections (none for the octagon with two 4-point lines), accounting for the full MW group. \subsection{$\mathbf{\tilde A_{17}}$} Just $1+1$ disjoint roots, so we see only part of the $A_1^3$ configuration of \#11. (Happily these disjoint roots are also disjoint from each other, as they must be to be part of $A_1^3$.) There are $3+3$ sections, again fully accounting for the MW group. \subsection*{$\mathbf{\tilde D_n}$ configurations} Along similar lines, we can study other configurations in the incidence graph of $\mathbb{P}^2(\mathbb{F}_4)$.
The $\tilde D_{2n}$ series is much like $\tilde A_{2n-1}$: instead of a polygon, we have a path whose first and last line contain three points each rather than two -- or dually where the first and last vertices have two terminal lines each instead of one. Here the lattices in our classification let us see everything up to $D_{20}$ except $D_{14}$ and $D_{18}$. Thus $\tilde D_{14}$ and $\tilde D_{18}$ are impossible. We will rule out $\tilde D_{20}$ separately below. Conversely, for all other $\tilde D_{2n}, 2\leq n\leq 8$, the existence is easily derived from our analysis of $\tilde A_{2n-1}$ configurations extended by sections. \begin{Example} $\tilde D_{16}$ is obtained from $\tilde A_{15}$ by attaching two sections (aka points in \ref{ss:a15}) that are not opposite while omitting the middle $(-2)$ curve (aka line) of the shorter path connecting them in the extended $\tilde A_{15}$ graph. \end{Example} We shall now disprove the existence of a configuration of type $\tilde D_{20}$ in $\mathbb{P}^2(\mathbb{F}_4)$. The configuration is sketched in the following figure: \begin{figure}[ht!] \setlength{\unitlength}{.45in} \begin{picture}(11,2)(-.4,.5) \thicklines \multiput(0,1)(1,0){5}{\circle*{.1}} \put(0,1){\line(1,0){4.5}} \multiput(6,1)(1,0){5}{\circle*{.1}} \put(5.5,1){\line(1,0){4.5}} \put(4.8,1){$\hdots$} \put(1,2){\circle*{.1}} \put(1,1){\line(0,1){1}} \put(9,2){\circle*{.1}} \put(9,1){\line(0,1){1}} \put(.8,.5){$P_1$} \put(2.8,.5){$P_2$} \put(6.8,.5){$P_8$} \put(8.8,.5){$P_9$} \end{picture} \caption{$\tilde D_{20}$ configuration in $\mathbb{P}^2(\mathbb{F}_4)$} \label{Fig:20} \end{figure} The configuration includes 3 lines through $P_1$, so there are 2 others which we label $\ell_1, \ell_2$. In fact these 2 lines have to contain all points $P_3,\hdots, P_9$ which are off the 3 lines through $P_1$ from the figure, but neither contains $P_2$.
We infer that the odd-indexed points $P_3, \hdots,P_9$ sit on $\ell_1$ and the even-indexed points $P_4,\hdots,P_8$ on $\ell_2$. The same argument applies to $P_9$ and leads to a line $\ell_3$ containing the even-indexed points $P_2,\hdots,P_6$. But then clearly $\ell_2=\ell_3$ containing both $P_2$ and $P_8$. This contradicts the choice of configuration which is thus impossible on $\mathbb{P}^2(\mathbb{F}_4)$. Similarly for $\tilde D_{2n-1}$ we have a path with an extra point on one side and an extra line on the other. From our classification we deduce that this is not possible past $\tilde D_7$ while we have already seen $\tilde D_5$ and $\tilde D_7$ in \ref{ss:a7} and \ref{ss:a11}. \section{Reduction from characteristic zero} \label{s:red} The classification of elliptic fibrations on $X$ enables us to determine all elliptic K3 surfaces in characteristic zero with good reduction at (a prime above) $2$ yielding $X$. Let us explain why we consider this an interesting question. The main reason is that we have plenty of possible candidates at hand. For instance, we could work with singular K3 surfaces (attaining the maximal Picard number $\rho=20$ over $\C$). Singular K3 surfaces always come with natural elliptic fibrations from the so-called Shioda-Inose structure. Namely there is Inose's pencil with two $II^*$ fibers and (in general) $\mbox{MW}$-rank two (cf.~\cite{Sandwich}). But those special fibers have wild ramification in characteristic $2$ and $3$ by \cite{SS2}, so there has to be some kind of degeneration. In fact, one can show that for any singular K3 surface the Inose pencil degenerates modulo (any prime above) $2$ to the quasi-elliptic fibration \#17 (so that the reduction is not smooth due to the $\tilde D_4$ fiber on the reduction). A similar pattern holds in general: \begin{Proposition} \label{prop:0} Let $k$ denote a field of characteristic zero with a fixed prime ideal above $2$. 
Then exactly the jacobian elliptic fibrations \#6 and \#8 reduce smoothly to $X$ up to isomorphism over $\bar k$. \end{Proposition} \begin{proof} Let $S\to \mathbb{P}^1$ be an elliptic surface over $k$. In order for this specific elliptic fibration to have good reduction, the singular fibers are only allowed to degenerate from multiplicative type to additive type, but never with additional fiber components (only irreducible fibers (nodal and cuspidal) and types $\tilde A_1, \tilde A_2$). In the present situation, $X$ is supersingular with $\rho(X)=22$, but in characteristic zero $\rho(S)\leq h^{1,1}(S)=20$. Hence in case of good reduction, the Picard number can only be increased by additional sections. In general this gives \[ \mbox{rank}(\mbox{MW}(X\to \mathbb{P}^1))\geq \rho(X)-\rho(S)\geq 2. \] But in the present situation, \#6 and \#8 are the only elliptic fibrations on $X$ with $\mbox{MW}$ rank at least two. In fact, we have equality, so any elliptic lift $S$ must have $\rho(S)=20$ and finite $\mbox{MW}$ (i.e.~it is extremal). In particular, this implies that the configurations of reducible singular fibers coincide in characteristic zero and $2$. (In characteristic zero, \#6 also has three singular fibers of type $I_1$; upon reduction mod $2$, these singular fibers are indeed merged with the $\tilde D_7$ fiber, but the degeneration only contributes to the wild ramification \cite{S-MJM}.) Over an algebraically closed field, each configuration determines a unique elliptic surface, and the equations from \#6, \#8 do in fact work in any characteristic other than $3$. \end{proof} \begin{Remark} Over non-algebraically closed fields (such as number fields, finite fields), there are cubic twists occurring. See~\cite{S-MJM} for an analysis over $\Q$ that generalizes directly to other fields. \end{Remark} \begin{Remark} A singular K3 surface with supersingular good reduction automatically leads to Artin invariant one by \cite[Proposition 1.0.1]{Shimada}. 
Thus we infer from Proposition \ref{prop:0} that \#6 and \#8 give the only jacobian elliptic singular K3 surfaces with supersingular good reduction at a prime above $2$. \end{Remark} \subsection*{Acknowledgements} We thank the referee for her or his comments. During the preparation of this manuscript, both authors enjoyed the hospitality of each other's home institution, which we thank for the support. This work was started when the second author held a position at Copenhagen University.
\section{Approach} \label{sec:approach} We are interested in using echoes for predicting the depth map of a scene. There is a spatio-temporal relationship between the received echoes and the depth of the scene, i.e\onedot the echoes received at different instants of time directly relate to the different depths in the scene. Let $x(t)$ be the original audio signal and $y(t)$ be the echo response of the scene. Assuming the scene contains materials $m_i$ at $k$ distinct depths $d_i$ (discretized and grouped for simplicity of discussion), the obtained echo response can be approximated by a summation of time-delayed copies of the original signal, reflecting off the materials at the different depths. The amplitudes of the delayed signals depend upon the respective materials they hit. Considering only first-order echoes, say every object at the distinct depth $d_i$ contributes a time delay $t_i$, and the corresponding material changes the amplitude of the signal by a factor of $a_i$ on average. The final response\footnote{The final response will include the original signal as well, as the sound emitter and recorder are both turned on together for a brief period of time} could then be approximated as \begin{align} y(t) &= x(t) + \sum_{i=1}^{k} a_i x(t-t_i). \label{eq:reconstruction} \end{align} With $v_s$ denoting the speed of sound in the medium, the time delays can be directly associated with depths as $t_i = \frac{2 d_i}{v_s}$. Further, the amplitude $a_i$ of each time-delayed signal depends on the acoustic absorption and reflection coefficients of the material. Hence, the goal of making the network learn depth from the received echoes is influenced by two factors: (i) the relationship between the echoes and the spatial depth variation in the scene, and (ii) the different acoustic properties of the scene objects. We propose a carefully tailored attention mechanism (\secref{sec:attn}) between the image and audio modalities for addressing the spatial variation aspect.
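A minimal simulation of this first-order echo model, with the sample rate, depths and attenuation factors chosen purely for illustration and a unit impulse standing in for the emitted signal $x(t)$:

```python
import numpy as np

fs = 44100.0                 # sample rate in Hz (illustrative)
v_s = 343.0                  # speed of sound in air, m/s
depths = [1.0, 2.5, 4.0]     # distinct scene depths d_i in meters (illustrative)
amps = [0.6, 0.3, 0.2]       # material-dependent attenuation factors a_i

x = np.zeros(1024)
x[0] = 1.0                   # unit impulse in place of the emitted chirp x(t)

max_delay = int(round(2 * max(depths) / v_s * fs))
y = np.zeros(len(x) + max_delay)
y[:len(x)] += x                           # the direct signal
for d, a in zip(depths, amps):
    n = int(round(2 * d / v_s * fs))      # delay in samples from t_i = 2 d_i / v_s
    y[n:n + len(x)] += a * x              # attenuated, delayed copy
```

With an impulse input, the response $y$ has peaks exactly at the delays $t_i = 2 d_i / v_s$, so the depths could be read off the peak positions; in the actual system a chirp is emitted and the network learns this association implicitly.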
In addition, we propose to incorporate material property estimation (\secref{sec:matnet}) as a proxy to account for the different properties of the scene elements, which inform sound and light reflection and absorption, and hence the final depth prediction. \subsection{Overall architecture} We show the block diagram of the proposed network in \figref{fig:block_diagram}. The network consists of the following components: (i) echo subnetwork, (ii) visual subnetwork, (iii) material properties subnetwork, (iv) the multimodal fusion module and finally (v) attention prediction subnetwork. The echo and visual subnetworks consist of encoder-decoder pairs which estimate depth maps of the scene independently. We input three feature maps, coming from the echo, visual and material property subnetworks respectively, to the multimodal fusion module. The multimodal fusion module produces the fused features, which we then feed to the attention network to predict two attention maps, one for each of the two depth maps obtained from the echo and visual decoder networks. We then combine the individual depth maps using these attention maps to output the final depth map of the scene. We now give the details of the different components below. Please also see the supplementary document for the detailed layer-wise architecture of the method. \subsection{Echo Net for Echo to Depth} \label{subsec:echonet} The echo net is an encoder-decoder network which predicts depth from binaural echo input. We convert the time-domain echo response into a frequency-domain spectrogram representation, $\mathbf{E} \in \mathbb{R}^{2\times P \times Q}$, where $P$ is the number of discrete time steps and $Q$ is the number of frequency bins. We input the spectrogram to the encoder part of the network to obtain the encoded representation $f_e \in \mathbb{R}^{N}$ of the echo, which is also one of the inputs to the multimodal fusion module.
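The conversion of the binaural echo into the spectrogram tensor $\mathbf{E}\in\mathbb{R}^{2\times P\times Q}$ can be sketched as follows (the window length and hop size here are illustrative assumptions):

```python
import numpy as np

def spectrogram(y, n_fft=64, hop=32):
    """Magnitude STFT of a 1-D signal; rows = time frames, cols = freq bins."""
    window = np.hanning(n_fft)
    frames = [y[s:s + n_fft] * window
              for s in range(0, len(y) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))  # shape (P, Q)

binaural = np.random.randn(2, 1024)                  # left/right echo channels
E = np.stack([spectrogram(ch) for ch in binaural])   # shape (2, P, Q)
print(E.shape)
```

Here $P$ is the number of overlapping frames and $Q = \texttt{n\_fft}//2 + 1$ frequency bins from the real-input FFT.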
We then reshape the encoded vector to $N \times 1 \times 1$ and feed it to a series of fractionally strided convolution layers to get the depth map, $\mathbf{D_e} \in \mathbb{R}^{W \times H}$, where $W$, $H$ are the width and the height of the input image. While the upsampling here starts from an extreme $1\times1$ feature, i.e\onedot with practically no spatial information (except what may be encoded in the feature vector itself), such depth prediction from audio has been reported with fair success in earlier work \cite{gao2020visualechoes}. \vspace{-0.25em} \subsection{Visual Net for Image to Depth}\label{subsec:visualnet} The visual net is also an encoder-decoder network, which predicts depth from a monocular RGB image. The architecture of this network is inspired by U-Net \cite{ronneberger2015u}, and consists of regular convolutional and fractionally strided convolutional layers with skip connections between them. We give the image, $\mathbf{I} \in \mathbb{R}^{3 \times W \times H}$, as input to the network, which predicts the depth map, $\mathbf{D_i} \in \mathbb{R}^{W \times H}$. We also use it to obtain the visual features from the intermediate layer (output of the last conv layer) of the network, denoted $f_i \in \mathbb{R}^{N \times w \times h}$. We use this feature as one of the inputs to the multimodal fusion module as well. \vspace{-0.5em} \subsection{Material Net for Material Properties}\label{subsec:matnet} \label{sec:matnet} We use this network to extract the material properties of the objects present in the scene. We use a ResNet-18 architecture \cite{he2016deep} and feed the RGB image, $\mathbf{I} \in \mathbb{R}^{3 \times W \times H}$, as the input. We obtain a feature map, $f_m \in \mathbb{R}^{N \times w \times h}$, which encodes the material properties over the spatial locations in the scene image. This feature is the third input to the multimodal fusion module.
We initialize the material network with pretraining on a large materials dataset \cite{bell2015material} with classes such as fabric, brick, asphalt, wood and metal, and then train it end to end with the rest of the network. We expect this initial material encoding capability of the network to be a proxy for encoding properties related to sound and light absorption and reflection, which affect depth prediction. Although the network evolves with the end to end training, the attention maps obtained qualitatively validate our assumptions (\secref{subsec:qual}). \vspace{-0.5em} \subsection{Multimodal Fusion Module} The multimodal fusion module combines features from the three sources discussed above, i.e\onedot echo $f_e \in \mathbb{R}^{N}$, visual $f_i \in \mathbb{R}^{N \times w \times h}$ and material $f_m \in \mathbb{R}^{N \times w \times h}$. Given the motivation, discussed in \secref{sec:intro}, that different objects might give different depth prediction performance with the audio and visual modalities respectively, the multimodal fusion module helps us combine the modalities to provide the input to the attention prediction network (\secref{sec:attn}). We perform two bilinear transforms on the features to obtain two fusion maps, $f_{img}^j$ and $f_{mat}^j$, where $j=1,2,\ldots,K$ indexes the output channels of the bilinear transformation, \begin{align} f_{img}^j(p,q) &= f_e^T\mathbf{A}_{img}^jf_i(p,q)+b_{img}^j, \forall p,q \\ f_{mat}^j(p,q) &= f_e^T\mathbf{A}_{mat}^jf_m(p,q)+b_{mat}^j, \forall p,q \end{align} where $(p,q)$ indexes the spatial coordinates, $\mathbf{A}_{img}^j$, $\mathbf{A}_{mat}^j$ are learnable weights of dimension $N \times N$, and $b_{img}^j$, $b_{mat}^j$ are scalar biases. We finally concatenate the fusion maps $f_{img} \in \mathbb{R}^{K \times w \times h}$ and $f_{mat}\in \mathbb{R}^{K \times w \times h}$ along the first dimension to get the final fusion map $f^*=concat(f_{img}, f_{mat})$ to be fed into the per-pixel attention network.
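The bilinear transforms above can be sketched as follows (a NumPy illustration with arbitrary sizes $N$, $K$, $w$, $h$ and random weights standing in for the learned parameters):

```python
import numpy as np

def bilinear_fuse(f_e, f_map, A, b):
    """f_out[j, p, q] = f_e^T @ A[j] @ f_map[:, p, q] + b[j].
    f_e: (N,), f_map: (N, w, h), A: (K, N, N), b: (K,) -> (K, w, h)."""
    # contract over the shared feature dimension N (einsum indices n, m)
    return np.einsum('n,knm,mpq->kpq', f_e, A, f_map) + b[:, None, None]

# illustrative sizes; random stand-ins for the learned weights and features
N, K, w, h = 8, 4, 5, 5
rng = np.random.default_rng(0)
f_e = rng.standard_normal(N)                   # echo feature vector
f_i = rng.standard_normal((N, w, h))           # visual feature map
f_m = rng.standard_normal((N, w, h))           # material feature map
A_img, b_img = rng.standard_normal((K, N, N)), rng.standard_normal(K)
A_mat, b_mat = rng.standard_normal((K, N, N)), rng.standard_normal(K)

f_img = bilinear_fuse(f_e, f_i, A_img, b_img)
f_mat = bilinear_fuse(f_e, f_m, A_mat, b_mat)
f_star = np.concatenate([f_img, f_mat], axis=0)  # fed to the attention net
```

The concatenated map `f_star` corresponds to $f^*$, the input of the per-pixel attention network.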
\subsection{Attention Network} \label{sec:attn} The attention network is the final part of the network, which we use to predict the per-pixel attention map given the concatenated fusion maps obtained in the previous step. The network consists of a series of fractionally strided convolutional layers with a final \texttt{Sigmoid} layer to normalize the values to the range $[0,1]$. The output of the network is an attention map $\alpha \in \mathbb{R}^{1 \times W \times H}$. We use the attention map $\alpha$ for weighting the echo predicted depth map $\mathbf{D}_e$ and $1-\alpha$ for the image predicted depth map $\mathbf{D}_i$. The final depth map $\hat{\mathbf{D}}$ is thus \begin{equation} \hat{\mathbf{D}} = \alpha \odot \mathbf{D}_e + (1-\alpha) \odot \mathbf{D}_i \end{equation} where $\odot$ denotes pointwise multiplication. \subsection{Loss Function and Training} We train the network following \cite{hu2019revisiting}, and use the logarithm of depth errors. The loss is given as \begin{equation} \mathcal{L}(\hat{\mathbf{D}}, \mathbf{D}) = \frac{1}{W H} \sum_{p=1}^{W}\sum_{q=1}^{H}\ln (1+\lVert \mathbf{D}(p,q) - \hat{\mathbf{D}}(p,q)\rVert_1), \end{equation} where $\mathbf{D}$ is the ground truth depth map. The full optimization problem is given by \begin{equation} \theta_e^*, \theta_i^*, \theta_a^*, \theta_f^*, \theta_m^* = \argmin_{\theta_e, \theta_i, \theta_a, \theta_f, \theta_m} \mathcal{L}(\hat{\mathbf{D}}, \mathbf{D}), \label{eq:final_loss} \end{equation} where $\theta_e$, $\theta_i$, $\theta_a$, $\theta_f$, $\theta_m$ are the parameters of the echo to depth network, the image to depth network, the attention network, the fusion module and the material property network respectively. We ignore the undefined regions in the ground truth depth maps, and therefore such regions do not contribute to the learning. Adding smoothness constraints~\cite{li2019learning} could potentially further improve the quality of the generated depth; however, we obtain good results without using them here.
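The attention based combination and the loss can be sketched as follows (a toy NumPy illustration with hypothetical constant maps; the actual maps are network outputs):

```python
import numpy as np

def combine_depths(alpha, D_e, D_i):
    """Final depth: per-pixel blend of the echo and image depth maps."""
    return alpha * D_e + (1.0 - alpha) * D_i

def log_l1_loss(D_hat, D, valid=None):
    """ln(1 + |D - D_hat|) averaged over pixels; undefined ground truth
    regions can be masked out via `valid`, as done during training."""
    err = np.log1p(np.abs(D - D_hat))
    return err.mean() if valid is None else err[valid].mean()

# toy example with hypothetical maps
W = H = 4
D_e = np.full((W, H), 2.0)        # echo-branch depth
D_i = np.full((W, H), 4.0)        # image-branch depth
alpha = np.full((W, H), 0.25)     # attention toward the echo branch
D_hat = combine_depths(alpha, D_e, D_i)       # 0.25*2 + 0.75*4 = 3.5 everywhere
loss = log_l1_loss(D_hat, np.full((W, H), 3.5))  # zero for a perfect match
```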
We train the full network in an end to end fashion using standard backpropagation. \section{Conclusion} \vspace{-1ex} \label{sec:conclusion} We presented a novel method for estimating depth by combining audio and visual modalities. We hypothesised that material properties play a significant role in the task, and proposed to use automatic material property estimation to predict spatial attention maps which modulate and combine the outputs of audio and image based depth prediction. We showed with quantitative experiments on challenging benchmark datasets that (i) explicitly adding material properties improves depth prediction over audio-only and visual-only prediction, and (ii) an attention based fusion method is better than other simpler existing approaches for audio-visual fusion. We also demonstrated qualitatively that the attention maps focus on interpretable areas for the two modalities. While the audio attention maps tended to ignore materials which would diffuse or absorb the audio wave, the image based attention included those areas. We also demonstrated qualitatively that the proposed method performs better than existing methods, especially near depth edges, and brings out the finer structures in the scene. We further showed experiments with reduced image resolution, where our method degraded gracefully while the compared methods lose performance significantly. We even compared our method with existing methods for sparse-to-dense depth prediction, and obtained encouraging competitive results, while not using sparse depth data as input to our method. We would like to explore such multimodal fusion with other modalities, like sparse point clouds, in the future to obtain even higher quality depth predictions. Further, geometric priors~\cite{srivastava2021} can also be leveraged to improve the results.
In conclusion, we believe that using echoes for depth prediction, particularly in combination with other modalities, is a promising direction, especially given the low cost and wide availability of audio sensors. \vspace{0.1em} \noindent \textbf{Acknowledgment.} Kranti Kumar Parida gratefully acknowledges support from the Visvesvaraya fellowship. \section{Dataset Details} \label{sec:dataset_details} We use two datasets, Replica \cite{straub2019replica} and Matterport3D \cite{Matterport3D}, for our experiments. Both datasets are rendered using an open source 3D simulator, Habitat \cite{savva2019habitat}. To obtain echoes on both datasets, we use the simulations from Soundspaces~\cite{chen2019soundspaces}. Soundspaces augments the simulator by providing realistic audio simulations for the scenes, taking into account the room geometry and the materials in the room. \subsection{Simulating Echoes} We use the procedure outlined below to obtain echoes on both the Replica and Matterport3D datasets. Soundspaces performs acoustic simulation in two steps as follows. \noindent\textbf{Step 1.} The visual scene from the respective dataset is subdivided into grids. The grids are placed along navigable points so that an agent can be positioned there. The Room Impulse Response (RIR) is then computed between each pair of points using audio ray tracing \cite{veach1995bidirectional}. Each pair denotes a combination of source and receiver, which emit the audio signal and receive the echoes respectively. \\ \noindent\textbf{Step 2.} The echoes are obtained by convolving the input audio signal with the RIR computed in the previous step. Following Soundspaces, we use the RIR between each pair of points at four orientations ($0^\circ$, $90^\circ$, $180^\circ$, $270^\circ$). For the proposed method, we place the source and receiver at the same point and use the resulting RIR.
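Step 2 can be sketched as follows (an illustration only: a short linear frequency sweep stands in for the source signal, and a three-tap toy RIR stands in for the ray-traced Soundspaces RIRs):

```python
import numpy as np

fs = 44100                       # sampling rate used for Replica
T = 0.003                        # 3 ms sweep duration
t = np.arange(int(T * fs)) / fs
f0, f1 = 20.0, 20000.0           # human hearing range
# linear sweep: phase(t) = 2*pi*(f0*t + (f1 - f0)*t**2 / (2*T)); the exact
# sweep shape is an assumption for this sketch
sweep = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * T)))

# toy RIR: direct path plus two attenuated, delayed reflections
rir = np.zeros(2000)
rir[0], rir[700], rir[1500] = 1.0, 0.5, 0.25

echo = np.convolve(sweep, rir)   # Step 2: received echo signal
```

Each RIR tap produces a delayed, attenuated copy of the source in the received signal, which is exactly the structure the echo net learns to invert.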
In addition, following \cite{gao2020visualechoes}, we use as the source audio signal a $3$ ms sweep signal spanning the human hearing range ($20$ Hz to $20$ kHz). We obtain the echo response by convolving the source audio signal with the RIRs obtained previously. Further, the sampling rates for the source and received audio (echoes) are $44.1$ kHz and $16$ kHz for Replica and Matterport3D respectively. \subsection{Visual Scenes} We now provide details on the scenes used from each dataset along with the train and test splits. \noindent\textbf{Replica dataset.} We use all $18$ scenes from Replica, having $6960$ points in total, from $1740$ images and $4$ orientations. Following \cite{gao2020visualechoes}, we use a train set consisting of $5496$ points from $15$ scenes. The test set consists of $1464$ points from $3$ scenes. As a validation set is not defined for Replica, we use a small subset of points from the train set for tuning the network parameters. The parameters are then fixed, and the entire train set is used for training the network. \noindent\textbf{Matterport3D dataset.} Matterport3D consists of $90$ scenes. Soundspaces provides RIRs for $85$ of these scenes. Further, we discard another $8$ scenes which have no or very few navigable points. This results in a dataset with $77$ scenes, which we use as our final dataset. These $77$ scenes contain $67,376$ points from $16,844$ images and $4$ orientations. The dataset is then split into train, validation and test sets. The train set consists of $40,176$ points from $59$ scenes. The validation set consists of $13,592$ points from $10$ scenes. The test set consists of $13,602$ points from $8$ scenes. \section{Implementation Details} \label{sec:implementation_details} \noindent\textbf{Input.} The input to the Visual Net and Material Net is a $128\times128$ RGB image. We also perform image augmentation by randomly jittering the color, contrast and brightness of the image.
The input to the Echo Net is a spectrogram of the simulated echoes. To obtain the spectrogram, we compute the Short-Time Fourier Transform of the time-domain audio signal using a Hanning window with fixed window length, hop length and number of frequency points. We use two-channel audio with a duration of $60$ ms. For Replica, we use an audio signal sampled at $44.1$ kHz and convert it to a $2\times257\times166$ spectrogram using a window length of $64$, a hop length of $16$ and $512$ frequency points. For Matterport3D, we use an audio signal sampled at $16$ kHz and convert it to a $2\times257\times121$ spectrogram using a window length of $32$, a hop length of $8$ and $512$ frequency points. \noindent\textbf{Additional Parameters.} We train the network on both datasets using the Adam optimizer with a learning rate of $10^{-4}$, momentum of $0.9$ and weight decay of $5\times10^{-4}$. We use a batch size of $128$ for Replica and $64$ for Matterport3D. \section{Network Architecture and Parameters} \label{sec:network_architecture} We now provide the detailed architecture of each subnetwork of the proposed method. \noindent\textbf{Echo Net.} It is an encoder-decoder network. The encoder is inspired by \cite{gao2020visualechoes} and consists of a convolutional neural network with $3$ layers of filter dimensions $8\times8$, $4\times4$, $3\times3$ and strides $4\times4$, $2\times2$ and $1\times1$ respectively. The numbers of output filters in the layers are $32$, $64$ and $8$ respectively. Finally, we use a $1 \times 1$ conv layer to convert the arbitrarily sized feature map into a $512$-dimensional feature vector. The decoder consists of $7$ fractionally strided convolutional layers with a filter size, stride and padding of $4$, $2$ and $1$ respectively. The numbers of output filters for the $7$ layers are $512, 256, 128, 64, 32, 16$ and $1$ respectively. We use BatchNorm and a ReLU non-linearity after each layer of the network.
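As a quick sanity check on the decoder geometry (an illustrative calculation, not code from our implementation), the standard output-size formula for fractionally strided convolutions shows how $7$ layers with kernel $4$, stride $2$ and padding $1$ grow the $1\times1$ echo feature map to the $128\times128$ output resolution:

```python
def deconv_out(size, kernel=4, stride=2, padding=1):
    # output size of a fractionally strided (transposed) convolution:
    # out = (in - 1) * stride - 2 * padding + kernel
    return (size - 1) * stride - 2 * padding + kernel

size = 1                  # the echo decoder starts from a 1x1 feature map
for _ in range(7):        # 7 layers, each doubling the spatial size
    size = deconv_out(size)
print(size)               # 1 -> 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
```

With these hyperparameters each layer exactly doubles the spatial size, matching the $128\times128$ input images.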
\noindent\textbf{Visual Net.} It is an encoder-decoder network. The encoder is a convolutional neural network with $5$ layers. For each layer, the filter size is $4$, the stride is $2$ and the padding is $1$. The $5$ layers have $64, 128, 256, 512$ and $512$ output filters respectively. We use LeakyReLU with a negative slope of $0.2$ and BatchNorm after each layer. Similarly, for the decoder we use $5$ fractionally strided convolutional layers with $512, 256, 128, 64$ and $1$ output filters respectively. We also use skip connections and concatenate the features from the corresponding encoder layer with the decoder output to get the final feature map from the decoder. We use BatchNorm and ReLU after each layer. \noindent\textbf{Material Net.} We use the first five convolution blocks of ResNet-18~\cite{he2016deep}. The first layer has a filter size of $7 \times 7$ and all subsequent layers have filters of size $3 \times 3$. The numbers of output filters at each layer are $64, 64, 128, 256$ and $512$ respectively. \noindent\textbf{Attention Net.} We use five fractionally strided convolutional layers with $512, 256, 128, 64$ and $1$ output filters respectively. We use a filter size, stride and padding of $4$, $2$ and $1$ respectively. \section{Evaluation Metrics} \label{sec:evaluation_metrics} We use the following metrics to evaluate our results. We denote the predicted depth and ground truth depth as $\hat{\mathbf{D}}(p)$ and $\mathbf{D}(p)$ for every point $p$. We use only those points that have a valid depth value, i.e\onedot missing values and points with zero depth in $\mathbf{D}$ are ignored; the number of such valid points is denoted $|{\mathbf{D}}|$.
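As a reference sketch, the metrics defined in the itemized list below can be implemented as follows (a NumPy illustration; REL follows the definition in the text, with the prediction in the denominator):

```python
import numpy as np

def depth_metrics(D_hat, D):
    """RMSE, REL, log10 and delta_t metrics over valid pixels (D > 0)."""
    valid = D > 0                       # zero / missing ground truth ignored
    d_hat, d = D_hat[valid], D[valid]
    rmse = np.sqrt(np.mean((d_hat - d) ** 2))
    rel = np.mean(np.abs(d_hat - d) / d_hat)   # as defined in the text
    log10 = np.mean(np.abs(np.log10(d_hat) - np.log10(d)))
    ratio = np.maximum(d_hat / d, d / d_hat)
    deltas = [np.mean(ratio < t) for t in (1.25, 1.25**2, 1.25**3)]
    return rmse, rel, log10, deltas

# sanity check on a toy map: the pixel with zero ground truth is ignored
D = np.array([[1.0, 2.0], [4.0, 0.0]])
rmse, rel, log10_err, deltas = depth_metrics(D, D)
```

A perfect prediction yields zero error under the first three metrics and $\delta_t = 1$ for all thresholds.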
\begin{itemize} \item Root mean square error: \begin{equation} \sqrt{\frac{1}{|{\mathbf{D}}|}\sum_{p \in {\mathbf{D}}}\left(\hat{\mathbf{D}}(p) - \mathbf{D}(p)\right)^2} \end{equation} \item Mean absolute relative error: \begin{equation} \frac{1}{|{\mathbf{D}}|}\sum_{p \in {\mathbf{D}}}\frac{\left|\hat{\mathbf{D}}(p) - \mathbf{D}(p)\right|}{\hat{\mathbf{D}}(p)} \end{equation} \item Mean $\log_{10}$ error: \begin{equation} \frac{1}{|{\mathbf{D}}|}\sum_{p \in {\mathbf{D}}}\left|\log_{10} (\hat{\mathbf{D}}(p)) - \log_{10} (\mathbf{D}(p))\right| \end{equation} \item $\delta_t$, the percentage of pixels within the error threshold $t$, i.e\onedot pixels satisfying \begin{equation}\label{eq:delta} \max\left(\frac{\hat{\mathbf{D}}(p)}{\mathbf{D}(p)}, \frac{\mathbf{D}(p)}{\hat{\mathbf{D}}(p)}\right) < t, \end{equation} where $t \in \{1.25, 1.25^2, 1.25^3\}$. \end{itemize} \section{More Qualitative Results} \label{sec:qual_res} We give more qualitative results of depth estimation using various techniques on the Replica and Matterport3D datasets in \figref{fig:depth_pred_replica} and \figref{fig:depth_pred_mp3d} respectively. The visualizations of the attention maps from the Echo Net and Visual Net are shown in \figref{fig:attention_replica} (Replica) and \figref{fig:attention_mp3d} (Matterport3D). \begin{figure*} \vspace{-1 em} \centering \includegraphics[width=0.8\textwidth]{fig/replica.pdf} \caption{\textbf{Qualitative results for depth estimation on Replica dataset.} From left to right - input image, depth estimation using only echoes, depth estimation using only image, depth estimation from Visual Echoes, depth estimation using the proposed method, ground truth depth map. The proposed method has better depth estimation in complicated scenes containing many objects causing frequent depth variations (e.g\onedot rows $1$ and $4$). It also provides robust depth estimation along boundaries of objects (e.g\onedot rows $3$,$7$,$8$).
When the individual depth estimations from image and echo are both poor, Visual Echoes also yields poor depth estimation (close to the image-only result), while the proposed method provides estimations closer to the ground truth, e.g\onedot the cabinets (row $4$) and the door (row $9$), which are completely missed by the other methods.} \label{fig:depth_pred_replica} \end{figure*} \begin{figure*} \vspace{-1 em} \centering \includegraphics[width=0.8\textwidth]{fig/matterport.pdf} \caption{\textbf{Qualitative comparisons for depth estimation on Matterport3D dataset.} From left to right - input image, depth estimation using only echoes, depth estimation using only image, depth estimation from Visual Echoes, depth estimation using the proposed method, ground truth depth map. We observe that the proposed method consistently provides better depth map estimation of smaller/farther objects (such as the chairs, cf\onedot other methods, in row $6$) and also at object boundaries (rows $1$,$4$,$5$). It also provides results closer to the ground truth under illumination changes (row $7$). We also observe that when the image and echo depth estimations individually yield poor results, Visual Echoes tends to perform poorly as well, while the proposed method is still able to estimate better depth (row $7$).} \label{fig:depth_pred_mp3d} \end{figure*} \begin{figure} \centering \includegraphics[width=0.44\textwidth]{fig/replica_attention.pdf} \caption{\textbf{Visualization of attention maps on Replica dataset.} From left to right - input image, attention map from Echo Net, attention map from Visual Net.
} \label{fig:attention_replica} \end{figure} \begin{figure} \centering \includegraphics[width=0.44\textwidth]{fig/mp3d_attention.pdf} \caption{\textbf{Visualization of attention maps on Matterport3D dataset.} From left to right - input image, attention map from Echo Net, attention map from Visual Net.} \label{fig:attention_mp3d} \end{figure} \section{Related Works} \label{sec:related_work} \noindent\textbf{Audio-visual learning.} Recently, there has been a surge of interest in audio-visual learning. In one line of work, the correspondence between the two modalities is used to learn representations in each individual modality in a self-supervised manner \cite{arandjelovic2017look, Arandjelovic_2018_ECCV, hu2020discriminative, morgado2020learning, owens2018audio}. In \cite{arandjelovic2017look, Arandjelovic_2018_ECCV}, the authors used an auxiliary task, predicting whether an audio and image pair correspond to each other, to learn representations in each of the modalities. In \cite{owens2018audio}, the authors predicted whether the audio and the video clip are temporally synchronized to learn representations. In \cite{hu2020discriminative}, the authors went a step further and localized sound-generating objects in the image by leveraging the correspondence between the audio and the image. In a recent approach, the authors of \cite{morgado2020learning} used the spatial correspondence between $360^{\circ}$ video and audio. In another line of work, both the audio and video modalities are integrated to improve performance.
Recently, a variety of tasks such as audio source separation \cite{zhao2018sound, zhao2019sound, gan2020music, gao2018learning, gao2019co}, zero-shot learning \cite{parida2020coordinated, mazumder2020avgzslnet}, saliency prediction \cite{tsiami2020stavis} and audio spatialization \cite{gao20192, morgado2018self} have used information from both the audio and video modalities to improve performance cf\onedot using a single modality only. \noindent\textbf{Depth estimation without echoes.} Depth estimation methods range from purely monocular image based methods to multimodal methods. Usually, the modalities are sparse depth maps, LiDAR point clouds, bird's eye views, and normal maps. Monocular depth estimation methods utilize a single RGB image to estimate dense depth~\cite{zhao2020monocular, bhoi2019monocular, tiwari2020pseudo}. Many methods directly utilize a single image~\cite{monodepth2, li2019learning, Ranftl2020} or estimate intermediate 3D representations such as point clouds~\cite{weng2019monocular,you2019pseudo} and bird's eye views~\cite{srivastava2019learning} to estimate dense depth maps. A few other methods combine RGB with sparse depth maps, normal maps etc\onedot~\cite{qiu2019deeplidar, ma2019self} to estimate dense depth maps. \noindent\textbf{Depth estimation with echoes.} In \cite{christensen2019batvision, christensen2020batvision}, the depth of the scene was estimated using only the echoes received from a single audio pulse. This approach completely ignored the visual modality while estimating the depth. On similar lines, the authors of \cite{vasudevan2020semantic} estimated the depth of the scene directly from the binaural audio of the objects themselves. They did not have ground truth depth maps, and instead used a vision network's predictions as supervision. Although this method used the direct audio from the object, the performance of the system was always upper bounded by the depth map predicted from the visual input.
In all of the methods above, the authors used one modality in isolation and did not fuse multimodal information to improve the performance of the system. In \cite{gao2020visualechoes}, the authors used echolocation as a pre-training task for learning a better visual representation. The authors also gave a case study where they showed that simply concatenating the audio features with the visual features improves depth prediction. We explore this idea further, improving depth prediction by adding binaural echoes to the image input, and propose a novel multimodal fusion method which additionally incorporates automatically estimated material properties. We give the comparison with existing approaches in \figref{fig:comparison}. \section{Introduction} \label{sec:intro} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/teaser.pdf} \caption{ We address the problem of depth prediction using multimodal audio (binaural echo) and visual (monocular RGB) inputs. We propose an attention based fusion mechanism, where the attention maps are influenced by automatically estimated material properties of the scene objects. We argue that capturing the material properties while fusing echoes with images is beneficial, as the light and sound reflection characteristics depend not only on the depth, but also on the material of the scene elements. } \label{fig:teaser} \end{figure} Humans perceive their surroundings using multiple sensory inputs such as sound, sight, smell and touch, with different tasks involving different combinations of such inputs. In computer vision, multimodal learning has also gained interest. As one popular stream, researchers have leveraged audio and visual inputs for addressing challenging problems.
These problems can be broadly divided into three categories: (i) using the audio modality only as the input to learn a seemingly visual task, e.g.\ using echoes for depth prediction~\cite{christensen2019batvision}, (ii) using the visual modality as \emph{auxiliary information} for an audio task, e.g.\ using videos to convert mono audio to binaural audio~\cite{gao20192}, and (iii) using both the audio and visual modalities together, e.g\onedot for depth prediction~\cite{gao2020visualechoes}. Here, we follow the third line of work, and address the problem of depth map prediction using both audio and visual inputs. Studies in psychology and perception indicate that sound and vision complement each other, i.e\onedot visual information helps calibrate auditory information~\cite{kolarik2016auditory}, while auditory grouping helps solve visual ambiguity~\cite{watanabe2001sound}. Many animals, like bats and dolphins, use echolocation to estimate the distances of objects from them. Visually impaired humans have also been reported to use echolocation \cite{HumanELWiki}. Motivated by such cases, Christensen et al.~\cite{christensen2019batvision, christensen2020batvision} recently showed that depth maps can be predicted directly from stereo sound. Gao et al.~\cite{gao2020visualechoes} showed that by fusing features from binaural echoes with monocular image features, depth estimation can be improved. Inspired by these findings, we work with similar reasoning, i.e\onedot sound contains useful information to predict depth, and echoes, used along with monocular images, improve depth estimation. Going beyond the current methods, which do simple combinations of features from echoes and images \cite{gao2020visualechoes}, we argue that the material properties of the objects in the scene significantly inform the spatial fidelity of the two streams. Some objects may lend better depth estimates with echoes, while others may favor the visual modality.
Deriving from this motivation, we propose a novel end-to-end learnable network with a multimodal fusion module. This novel module incorporates material properties of the scene and fuses the two modalities with spatial attention maps indicating the fidelity of the respective modality at different spatial locations. The material properties are automatically estimated using a sub-network initialized by training on auxiliary data on materials. For the final depth prediction, the method fuses the depth maps produced from the audio and visual inputs, modulated by the predicted attention maps. \figref{fig:teaser} illustrates the difference between a real output of an existing method and that of the proposed approach, showing qualitative improvements. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/comparison.pdf} \caption{Comparison of our method with the existing approaches.} \label{fig:comparison} \vspace{-1em} \end{figure} We demonstrate the advantages of the proposed method with experiments on the Replica \cite{straub2019replica} and Matterport3D \cite{Matterport3D} datasets. We outperform the previous state-of-the-art on the Replica dataset by $\sim28\%$ in RMSE. On Matterport3D, which is more complex and larger ($5\times$) than Replica, we provide results on the multimodal depth prediction task for the first time, and compare the proposed method with existing approaches and challenging baselines. We also show that the proposed network can estimate better depth from low resolution images. This is important for practical systems performing depth estimation from monocular images: sensors capturing echoes can be used alongside cameras, not only enhancing the performance of the existing setup but also reducing the degradation in depth prediction as image quality drops. Further, we present ablation experiments to systematically evaluate the different aspects of the proposed method.
In summary, we make the following contributions: \begin{itemize}[leftmargin=1.25em, itemsep=-0.25em] \item We propose a novel end-to-end learnable deep neural network to estimate depth from binaural audio and monocular images. \item We provide exhaustive quantitative and qualitative results on the Replica and Matterport3D datasets. On Replica, we outperform the previous state-of-the-art by $\sim 28\%$. On Matterport3D, we provide results benchmarking existing methods. The proposed method achieves state-of-the-art performance, outperforming the existing best method on Matterport3D by $\sim 4\%$. \item We provide exhaustive ablation experiments on the design choices in the network, and validate our intuitions with representative qualitative results. \end{itemize} \section{Experiments} \label{sec:experiments} \subsection{Implementation Details} \noindent\textbf{Datasets.} We perform experiments on the Replica \cite{straub2019replica} and Matterport3D~\cite{Matterport3D} datasets. Both datasets contain indoor scenes. Replica has a total of $18$ scenes covering hotels, apartments, rooms and offices. Matterport3D contains $90$ scenes. On Replica, we follow \cite{gao2020visualechoes} and use $15$ scenes for training and $3$ for testing. On Matterport3D, we use $77$ scenes for evaluation, of which $59$, $10$ and $8$ scenes are used for training, validation and testing respectively. We simulate echoes on these datasets using the precomputed room impulse responses (RIRs) provided by \cite{chen2019soundspaces} for the 3D simulator Habitat~\cite{savva2019habitat}, which take into consideration the scene geometry and the materials present in the scene. We obtain the echoes by convolving the input audio signal with the RIR. We use the material recognition dataset MINC~\cite{bell2015material} for pre-training the material net.
\noindent\textbf{Network Architecture.} We use the same architectures for the echo encoder and the image to depth encoder and decoder (Visual Net) as \cite{gao2020visualechoes}, for a fair comparison and to demonstrate the effectiveness of the proposed material and attention networks. We use the first four convolutional layers of ResNet-18 for the material property network. We initialize them with pretraining on the ImageNet and MINC datasets. \noindent\textbf{Input Representation.} The input to the Visual Net and Material Net is a $128 \times 128$ RGB image. For the input to the Echo Net, we use the spectrogram of a $60$ ms echo signal. For training on Replica, we use a sampling frequency of $44.1$ kHz, and for Matterport3D, we use a sampling frequency of $16$ kHz. We use Hanning windows of length $64$ and $32$ to compute the spectrograms for Replica and Matterport3D respectively. We use an FFT size of $512$ in both cases. \noindent\textbf{Metrics.} Following earlier works in depth estimation, we report results on root mean squared error (RMSE), mean relative error (REL), mean $\log_{10}$ error, and the percentage $\delta_t$ of pixels with both the relative error and its inverse under a threshold $t$, where $t \in \{1.25, 1.25^2, 1.25^3\}$. Due to space constraints, we provide more details on the datasets, network architecture, parameter settings and evaluation metrics in the supplementary material. \subsection{Experiment Design} We design the experiments below to demonstrate the following points. (i) Using the audio and visual modalities together improves performance over either of them alone. (ii) Using material properties in addition improves it further. (iii) Among the different ways to combine the three, i.e\onedot visual, audio and material properties, the proposed attention based fusion performs the best. We demonstrate the first two points with ablation experiments where we combine the inputs by simple concatenation, followed by a decoder to predict the depth map (\secref{sec:ablation} first part).
Further, we demonstrate the third point by comparing combination methods and showing that the attention based combination performs the best (\secref{sec:ablation} second part). We then compare our full method with existing state-of-the-art approaches (\secref{sec:soa}). We also show experiments with degraded image resolution (\secref{sec:res}). \subsection{Ablation Study} \label{sec:ablation} \noindent\textbf{Combination of echo, image and material properties.} We show the results of combining the three inputs with simple concatenation in \tabref{tab:material_property_with_echo}. With only binaural echoes as input, the RMSE is $0.995$, which improves to $0.673$ when the image is added as well. When material property features are used with echoes, the RMSE improves to $0.523$, i.e\onedot an improvement of $\sim47\%$ over the echo only input and $\sim22\%$ over the image+echo input. Lastly, when the image and material property features are concatenated with the echo features (all), the RMSE further improves to $0.491$, i.e\onedot $\sim50\%$ over the echo only input and $\sim27\%$ over the echo+image input. These experiments validate that, even with simple fusion, material properties improve the performance of the system. We attribute the improvement to our intuition that adding material properties explicitly allows the network to internally modulate the audio and visual features. In the following, we demonstrate that the proposed explicit multimodal fusion followed by attention based weighting performs much better than simple concatenation. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Modality & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}(\uparrow)$&$\delta_{1.25^2}(\uparrow)$&$\delta_{1.25^3}(\uparrow)$\\ \hline \hline echo & 0.995 & 0.638& 0.208& 0.388& 0.599&0.742 \\ \hline echo+img & 0.673& 0.451& 0.146 &0.534 &0.734 &0.845\\ echo+mat.
& 0.523& 0.282& 0.103& 0.652& 0.839& 0.920\\ all& \textbf{0.491}& \textbf{0.276}& \textbf{0.098}& \textbf{0.667}& \textbf{0.846}& \textbf{0.924}\\ \hline \end{tabular} } \caption{\textbf{Depth estimation by combining different modalities.} Using echoes only (echo), echoes with image features (echo+img.), echoes with material features (echo+mat.) and the combination of echo, image and material features (all). $\downarrow$ indicates lower is better and $\uparrow$ indicates higher is better. } \label{tab:material_property_with_echo} \vspace{-0.5 em} \end{table} \noindent\textbf{Impact of multimodal fusion and attention.} \label{sec:exp_fusion} We now validate the efficacy of our audio visual fusion method, which uses a multimodal fusion module to predict attention over the modalities to combine them. We compare the proposed fusion method, denoted \texttt{bilinear}, with two alternatives, i.e\onedot a simple concatenation of features denoted \texttt{concat}, and a dot product based fusion denoted \texttt{dot}. All these methods fuse the features and use them to estimate attention weights. We also compare with the fusion method of VisualEchoes \cite{gao2020visualechoes}, which fuses features with concatenation and uses them with a decoder to predict the depth map, i.e\onedot it has no attention based fusion. We show the results in \tabref{tab:fusion_ablation}. We observe that \texttt{bilinear}, with an RMSE of $0.249$, performs best among the compared methods, highlighting that the proposed fusion is better than the simple concatenation or dot product based fusion. We also observe that \texttt{concat} performs better than VisualEchoes i.e\onedot $0.259$ cf\onedot $0.346$ RMSE. This indicates that attention maps (which are present in \texttt{concat} but absent in VisualEchoes) are important for better performance. \figref{fig:loss_rmse_plot} further shows the training loss (left) and validation RMSE (right) plots.
We observe that VisualEchoes suffers from severe overfitting (much higher val RMSE), which is mitigated on adding the material features (i.e\onedot \texttt{concat}). This further reinforces the hypothesis that material properties play an important role in depth prediction. To conclude, we demonstrated from the ablation experiments that, (i) adding material properties explicitly is helpful for audio visual depth prediction, (ii) the proposed fusion strategy is better than simpler alternatives, and (iii) attention based combination of depth maps is better than simple concatenation as used in previous methods, e.g\onedot VisualEchoes. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}(\uparrow)$&$\delta_{1.25^2}(\uparrow)$&$\delta_{1.25^3}(\uparrow$)\\ \hline \hline VisualEchoes \cite{gao2020visualechoes} & 0.346 & 0.172 & 0.068 & 0.798 & 0.905 & 0.950\\ \hline \texttt{concat} &0.259 &0.122 &0.048 &0.867 &0.939 &0.968 \\ \texttt{dot}& 0.262& 0.133& 0.050& 0.853& 0.943& \textbf{0.974}\\ \texttt{bilinear}& \textbf{0.249}& \textbf{0.118}& \textbf{0.046}& \textbf{0.869}& \textbf{0.943}& \textbf{0.970}\\ \hline \end{tabular} } \caption{\textbf{Performance of different fusion strategies}. \texttt{concat} refers to the concatenation of all the inputs, \texttt{dot} to fusion by dot product, and \texttt{bilinear} to fusion by bilinear transformation (see \secref{sec:exp_fusion}).} \label{tab:fusion_ablation} \vspace{-1.5 em} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/plot.pdf} \caption{\textbf{Training loss and validation RMSE on Replica dataset.} In VisualEchoes \cite{gao2020visualechoes} depth prediction is performed directly from concatenated features. 
\texttt{dot}, \texttt{concat} and \texttt{bilinear} are the three different fusion strategies for the proposed attention based prediction.} \label{fig:loss_rmse_plot} \vspace{-1em} \end{figure} \vspace{-0.5em} \subsection{Comparison to state-of-the-art} \label{sec:soa} \noindent\textbf{Baselines.} We compare on Replica and Matterport3D against VisualEchoes~\cite{gao2020visualechoes} and competitive baselines. The baseline methods are AVERAGE, ECHO2DEPTH and RGB2DEPTH. AVERAGE refers to the average depth value of all the samples in the training set. ECHO2DEPTH refers to depth estimation using only the Echo Net (\secref{subsec:echonet}) and RGB2DEPTH refers to depth estimation using only the Visual Net (\secref{subsec:visualnet}). \noindent\textbf{Comparison on Replica dataset.} We report results in \tabref{tab:replica}. The proposed method outperforms all the compared methods on all the metrics. Specifically, it outperforms VisualEchoes by $\sim28\%$ on RMSE. We also observe that while the improvement of VisualEchoes w.r.t.\xspace RGB2DEPTH is marginal ($0.346$ cf\onedot $0.374$ i.e\onedot $7.4\%$), the proposed method is able to achieve an improvement of $\sim33\%$ ($0.249$ cf\onedot $0.374$ RMSE). Both the methods perform significantly better than the AVERAGE and ECHO2DEPTH baselines. \noindent\textbf{Comparison on Matterport3D dataset.} We report results in \tabref{tab:mp3d}. We outperform the echo only (ECHO2DEPTH), image only (RGB2DEPTH) and AVERAGE baselines on all the five metrics. Our method also outperforms the existing VisualEchoes method by $\sim4\%$ on RMSE and on all the metrics after training the method for $300$ epochs. Further, better results on $\delta$ indicate that the proposed method has lower pixel wise relative error cf\onedot VisualEchoes, which is manifested in the form of better depth estimation around edges (\secref{subsec:qual}).
Since Matterport3D is a popular benchmark for depth estimation, we also compare our method with the state-of-the-art methods on sparse to dense depth map estimation. These methods use sparse depth maps as inputs, while we have no explicit depth information in the inputs. We also use a slightly smaller subset of Matterport3D, i.e\onedot $77$ scenes cf\onedot $90$ used by the other methods. The results are shown in~\tabref{tab:mp3d_sparse_depth}, where we obtain better results than four out of the five compared methods. While the performances are not directly comparable, this supports the argument that echo can be a viable modality for estimating depth from RGB and can potentially provide additional information that is usually obtained from explicit 3D representations such as sparse depth maps. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}$ ($\uparrow$) & $\delta_{1.25^2}$ ($\uparrow$) & $\delta_{1.25^3}$ ($\uparrow$)\\ \hline \hline AVERAGE & 1.070 & 0.791 & 0.230 & 0.235 & 0.509 & 0.750 \\ ECHO2DEPTH & 0.713 & 0.347 & 0.134 & 0.580 & 0.772 & 0.868 \\ RGB2DEPTH & 0.374 & 0.202 & 0.076 & 0.749 & 0.883 & 0.945\\ VisualEchoes \cite{gao2020visualechoes} & 0.346 & 0.172 & 0.068 & 0.798 & 0.905 & 0.950\\ Proposed Method& \textbf{0.249}& \textbf{0.118}& \textbf{0.046}& \textbf{0.869}& \textbf{0.943}& \textbf{0.970}\\ \hline \end{tabular} } \caption{\textbf{Comparison with existing methods on Replica dataset}.
We report the results for the baseline and existing methods directly from \cite{gao2020visualechoes}.} \label{tab:replica} \end{table} \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}$ ($\uparrow$) & $\delta_{1.25^2}$ ($\uparrow$) & $\delta_{1.25^3}$ ($\uparrow$)\\ \hline \hline AVERAGE & 1.913& 0.714& 0.237& 0.264& 0.538& 0.697\\ ECHO2DEPTH & 1.778& 0.507& 0.192& 0.464& 0.642& 0.759\\ RGB2DEPTH & 1.090& 0.260& 0.111& 0.592& 0.802&0.910 \\ VisualEchoes \cite{gao2020visualechoes} & 0.998& 0.193& 0.083& 0.711& 0.878& 0.945\\ Proposed Method& \textbf{0.950}& \textbf{0.175}& \textbf{0.079}& \textbf{0.733}& \textbf{0.886}& \textbf{0.948}\\ \hline \end{tabular} } \caption{\textbf{Comparison with existing methods on Matterport3D dataset.}} \label{tab:mp3d} \end{table} \begin{SCtable} \centering \resizebox{0.6\columnwidth}{!} { \begin{tabular}{c|c|c} \hline Method & RMS ($\downarrow$) & MAE ($\downarrow$) \\ \hline \hline AD \cite{liu2013guided} &1.653 & 0.610\\ MRF \cite{harrison2010image} &1.675 & 0.618\\ Zhang et al. \cite{zhang2018deep} & 1.316 & 0.461 \\ Huang et al. \cite{huang2019indoor} & 1.092 & 0.342 \\ Xiong et al. \cite{xiong2020sparse} & 0.860 & 0.462\\ \hline \emph{Proposed Method$^*$} & \emph{1.008} & \emph{0.570} \\ \hline \end{tabular} } \caption{\textbf{Comparison on Matterport3D}. $^*$The compared methods use sparse depth maps as inputs, while we do not.} \label{tab:mp3d_sparse_depth} \end{SCtable} \vspace{-0.5em} \subsection{Qualitative Results}\label{subsec:qual} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/attention.pdf} \caption{\textbf{Visualizing attention map.} We show attention maps for echo and image. The first two columns show examples from the Replica dataset and the last two columns show examples from the Matterport3D dataset.
We observe that the echo modality, in general, produces high attention values for far away solid structures whereas the image modality attends more to nearby points (sofa in the first and third examples). See supplementary material for more qualitative results.} \label{fig:attention} \vspace{-1em} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{fig/depth_pred.pdf} \caption{\textbf{Qualitative comparisons for depth estimation on Replica (first two rows) and Matterport3D (last two rows) datasets.} We observe that the proposed approach is able to preserve fine structures, and has better depth estimation at the boundaries when compared to the existing approach. See supplementary material for more qualitative results.} \label{fig:depth_pred} \vspace{-1.5em} \end{figure*} We give qualitative results of both (i) depth prediction, and (ii) attention maps in this section. We first provide a few qualitative examples of the attention maps for echo and image respectively in Fig.~\ref{fig:attention}. We provide the attention maps for two examples each from Replica and Matterport3D. We make a general observation from these results that the echo attention map mostly attends to the far off regular regions but completely ignores the finer structures in them. It gives higher weight to the walls in the scenes but ignores any objects present on them (e.g.\ the monitor in the third example and the wall painting in the fourth). It also ignores the chairs and sofa which are present at relatively lower depth values. The image attention map, being complementary to the echo attention, gives higher weights to nearby objects such as the sofa, chair etc\onedot, but ignores the far off wall in all the examples.
This could be mainly due to two reasons: (i) the echoes from nearby regions arrive almost instantaneously, which makes them hard to distinguish from the original signal, and (ii) most of the nearby objects (e.g.\ sofa) are made up of sound absorbing materials which do not reflect audio signals strongly. These results suggest that our network leverages the best of both worlds to get the final depth map. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline \diagbox{Method}{Scale} & $1$x & $\frac{1}{2}$x & $\frac{1}{4}$x & $\frac{1}{8}$x & $\frac{1}{16}$x & $\frac{1}{32}$x\\ \hline \hline Img Only & 0.374 & 0.362 & 0.375 & 0.398 & 0.440 & 0.550 \\ VisualEchoes\cite{gao2020visualechoes}&0.346& 0.354& 0.357& 0.392& 0.471& 0.593\\ Proposed Method & 0.249 &0.244& 0.249& 0.281& 0.342& 0.446\\ \hline \end{tabular} } \caption{\textbf{Performance on varying input image resolution.} The metric is RMSE (lower is better).} \label{tab:low_res_comparison} \vspace{-1.5em} \end{table} We give a few examples of reconstructed depth using our approach and also compare it with an existing approach in Fig.~\ref{fig:depth_pred}. We observe that our approach recovers fine grained structures in the scene much better cf\onedot VisualEchoes \cite{gao2020visualechoes}. In the first example, VisualEchoes misses the small chairs in the scene while our method gives an idea of the structure. Again, the boundaries in the first example are not very clearly identified by VisualEchoes, but are estimated almost perfectly in our case. We observe similar trends in the other three examples as well. The results from the individual modalities (image and echo) are not satisfactory, but do capture the overall structure of the scene. These results suggest that the networks are not individually powerful, but their performance improves significantly when we effectively combine the information from both of them.
We encourage the readers to look at the supplementary material for more such qualitative results. \subsection{Experiments with varying image resolutions}\label{sec:res} As discussed in \secref{sec:ablation}, the network can better exploit the combination of echo and image due to the attention based integration. This motivates us to study the effect of resolution degradation. This is similar to the case where human vision degrades and the brain learns to compensate and adapt based on the auditory input~\cite{berry20143d, thaler2016echolocation}. We evaluate the proposed method by progressively degrading the image resolution. We give the results in~\tabref{tab:low_res_comparison}, gradually reducing the resolution to $\frac{1}{2}$, $\frac{1}{4}$, $\frac{1}{8}$, $\frac{1}{16}$ and $\frac{1}{32}$ times the original image. We observe that the proposed approach is more robust to reduction in image resolution than the compared methods. The performance of the image only method degrades significantly when the downscaling factor is $\frac{1}{8}$ of the original image size, while the proposed method still performs better than the image only method at the original resolution i.e\onedot $0.281$ RMSE for the proposed method at $\frac{1}{8}$x scale cf\onedot $0.374$ with image only at $1$x. Further, we observe that even with a very high downscaling of $\frac{1}{32}$x, we obtain a better RMSE as compared to VisualEchoes\ ($0.446$ cf\onedot $0.593$). In fact, VisualEchoes performs worse than even the image only method. Similar observations can be made at $\frac{1}{16}$x. We can also observe that the rate of increase in RMSE from $1$x to $\frac{1}{4}$x is higher for VisualEchoes cf\onedot the image only and the proposed methods. This further highlights the efficacy of the proposed method. \section{Approach} \label{sec:approach} We are interested in using echoes for predicting the depth map of a scene.
There is a spatio-temporal relationship between the received echoes and the depth of the scene, i.e\onedot the echoes received at different instants of time directly relate to the different depths in the scene. Let $x(t)$ be the original audio signal and $y(t)$ be the echo response of the scene. Assuming we have $d_i$ distinct depths of materials $m_i$ in the scene (discretized and grouped for simplicity of discussion), the obtained echo response can be approximated as a summation of time delayed copies of the original signal, reflected off the materials at different depths. The amplitude of each delayed signal will depend upon the material it reflects off. Considering only first order echoes, say every object at a distinct depth $d_i$ contributes a time delay of $t_i$, and the corresponding material changes the amplitude of the signal by a factor of $a_i$ on average. The final response\footnote{The final response will include the original signal as well, as the sound emitter and recorder are both turned on together for a brief period of time} could then be approximated as \begin{align} y(t) &= x(t) + \sum_{i=1}^{k} a_i x(t-t_i). \label{eq:reconstruction} \end{align} With $v_s$ denoting the speed of sound in the medium, the time delay can be directly associated with depth as $t_i = \frac{2 d_i}{v_s}$. Further, the amplitude $a_i$ of each time-delayed signal depends on the acoustic absorption and reflection coefficients of the material. Hence, the goal of making the network learn depth from the received echo is influenced by two factors: (i) the relationship between the echoes and the spatial depth variation in the scene, and (ii) the different acoustic properties of the scene objects. We propose a carefully tailored attention mechanism (\secref{sec:attn}) between the image and audio modalities for addressing the spatial variation aspect.
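As an illustration, the first-order model of Eq.~\eqref{eq:reconstruction} can be discretized in a few lines. This is a minimal, stdlib-only Python sketch under our own sampling assumptions (the function name, nearest-sample rounding of $t_i = 2 d_i / v_s$, and the default speed of sound are illustrative, not taken from the implementation):

```python
def simulate_echo(x, depths, amplitudes, fs, v_s=343.0):
    """First-order echo model: y(t) = x(t) + sum_i a_i * x(t - t_i),
    with t_i = 2 * d_i / v_s rounded to the nearest sample at rate fs (Hz).

    x: emitted signal as a list of samples; depths: surface depths d_i in
    metres; amplitudes: attenuation factors a_i of the reflecting materials.
    """
    delays = [round(2.0 * d / v_s * fs) for d in depths]
    # Output is long enough to hold the most delayed copy of x.
    y = list(x) + [0.0] * (max(delays) if delays else 0)
    for a_i, k in zip(amplitudes, delays):
        for n, sample in enumerate(x):
            y[n + k] += a_i * sample  # delayed, attenuated copy
    return y
```

A unit impulse reflected off a surface at depth $3.43$\,m ($v_s = 343$\,m/s, $f_s = 1$\,kHz) reappears $20$ samples later, scaled by the material's attenuation factor.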
In addition, we propose to incorporate material property estimation (\secref{sec:matnet}) as a proxy to account for the different properties of the scene elements which govern sound and light reflection and absorption, and hence inform the final depth prediction. \subsection{Overall architecture} We show the block diagram of the proposed network in \figref{fig:block_diagram}. The network consists of the following components: (i) echo subnetwork, (ii) visual subnetwork, (iii) material properties subnetwork, (iv) the multimodal fusion module and finally (v) attention prediction subnetwork. The echo and visual subnetworks consist of encoder-decoder pairs which estimate depth maps of the scenes independently. We input three feature maps coming from the echo, visual and material property subnetworks respectively to the multimodal fusion module. The multimodal fusion module produces the fused features which we then feed to the attention network. We use the attention network to predict two attention maps, one each for the two depth maps obtained from the echo and visual decoder networks respectively. We then combine the individual depth maps using these attention maps to output the final depth map of the scene. We give the details of the different components below. Please also see the supplementary document for the detailed layer-wise architecture of the method. \subsection{Echo Net for Echo to Depth} \label{subsec:echonet} The echo net is an encoder decoder network which predicts depth from binaural echo input. We convert the time-domain echo response into a frequency domain spectrogram representation, $\mathbf{E} \in \mathbb{R}^{2\times P \times Q}$, where $P$ is the number of discrete time steps and $Q$ is the number of frequency bins. We input the spectrogram to the encoder part of the network to obtain the encoded representation $f_e \in \mathbb{R}^{N}$ of the echo, which is also one of the inputs to the multimodal fusion module.
We then reshape the encoded vector to $N \times 1 \times 1$ and feed it to a series of fractionally strided convolution layers to get the depth map, $\mathbf{D_e} \in \mathbb{R}^{W \times H}$, where $W$, $H$ are the width and the height of the input image. While the upsampling here happens from an extreme $1\times1$ feature, i.e\onedot there is practically no spatial information (except what might get coded in the feature vector itself), such depth prediction from audio has been reported with fair success by earlier works as well \cite{gao2020visualechoes}. \vspace{-0.25em} \subsection{Visual Net for Image to Depth}\label{subsec:visualnet} The visual net is also an encoder decoder network which predicts depth from a monocular RGB image. The architecture of this network is inspired by U-Net \cite{ronneberger2015u}, and consists of regular convolutional and fractionally strided convolutional layers with skip connections between them. We give the image, $\mathbf{I} \in \mathbb{R}^{3 \times W \times H}$, as input to the network which predicts the depth map, $\mathbf{D_i} \in \mathbb{R}^{W \times H}$. We also use it to obtain the visual features from the intermediate layer (output of the last conv layer) of the network, denoted $f_i \in \mathbb{R}^{N \times w \times h}$. We use this feature as one of the inputs to the multimodal fusion module as well. \vspace{-0.5em} \subsection{Material Net for Material Properties}\label{subsec:matnet} \label{sec:matnet} We use this network to extract the material properties of the objects present in the scene. We use a ResNet-18 architecture \cite{he2016deep} and feed the RGB image, $\mathbf{I} \in \mathbb{R}^{3 \times W \times H}$, as the input. We obtain a feature map, $f_m \in \mathbb{R}^{N \times w \times h}$, which encodes the material properties over the spatial locations in the scene image. This feature is the third input to the multimodal fusion module.
We initialize the material network with pretraining on a large materials dataset \cite{bell2015material} with classes such as fabric, brick, asphalt, wood, metal, and then train it end to end with the rest of the network. We expect this initial material encoding capability of the network to be a proxy for encoding properties related to sound and light absorption and reflection, which affect depth prediction. Although the network evolves with the end to end training, the attention maps obtained qualitatively validate our assumptions (\secref{subsec:qual}). \vspace{-0.5em} \subsection{Multimodal Fusion Module} The multimodal fusion module combines features from the three sources discussed above, i.e\onedot echo $f_e \in \mathbb{R}^{N}$, visual $f_i \in \mathbb{R}^{N \times w \times h}$ and material $f_m \in \mathbb{R}^{N \times w \times h}$. Given the motivation, discussed in \secref{sec:intro}, that different objects might give different depth prediction performances with the audio or visual modalities respectively, the multimodal fusion module helps us combine the modalities to provide as input to the attention prediction network (\secref{sec:attn}). We perform two bilinear transforms on the features to obtain two fusion maps, $f_{img}^j$ and $f_{mat}^j$, where $j=1,2,\ldots,K$ indexes the $K$ output channels of the bilinear transformation, \begin{align} f_{img}^j(p,q) &= f_e^T\mathbf{A}_{img}^jf_i(p,q)+b_{img}^j, \forall p,q \\ f_{mat}^j(p,q) &= f_e^T\mathbf{A}_{mat}^jf_m(p,q)+b_{mat}^j, \forall p,q \end{align} where $(p,q)$ indexes the spatial coordinates, $\mathbf{A}_{img}^j$, $\mathbf{A}_{mat}^j$ are learnable weights of dimension $N \times N$ and $b_{img}^j$, $b_{mat}^j$ are scalar biases. We finally concatenate the fusion maps $f_{img} \in \mathbb{R}^{N \times w \times h}$ and $f_{mat}\in \mathbb{R}^{N \times w \times h}$ along the first dimension to get the final fusion map $f^*=concat(f_{img}, f_{mat})$ to be fed into the per-pixel attention network.
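To make the bilinear transform concrete, the following stdlib-only Python sketch computes one output channel $f^j(p,q) = f_e^T \mathbf{A}^j f(p,q) + b^j$ over a small feature grid (the function name and the tiny dimensions are illustrative; in practice this is a learned, batched tensor operation):

```python
def bilinear_fuse_channel(f_e, f_map, A, b):
    """One channel j of the fusion map:
    f^j(p,q) = f_e^T A^j f_map(p,q) + b^j at every spatial location (p,q).

    f_e: length-N echo feature; f_map: w x h grid of length-N feature
    vectors (visual or material); A: N x N weight matrix as nested lists;
    b: scalar bias. Returns a w x h grid of scalars.
    """
    N = len(f_e)
    return [[b + sum(f_e[m] * A[m][n] * cell[n]
                     for m in range(N) for n in range(N))
             for cell in row]
            for row in f_map]
```

With $\mathbf{A}^j$ the identity, the transform reduces to a per-location dot product $f_e^T f(p,q) + b^j$, which makes clear that the \texttt{dot} baseline in the experiments is a special case of the bilinear fusion.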
\subsection{Attention Network} \label{sec:attn} The attention network is the final component, which we use to predict the per-pixel attention map given the concatenated fusion maps obtained in the previous step. The network consists of a series of fractionally strided convolutional layers with a final \texttt{Sigmoid} layer to normalize the values in the range $[0,1]$. The output of the network is an attention map $\alpha \in \mathbb{R}^{1 \times W \times H}$. We use the attention map $\alpha$ for weighting the echo predicted depth map $\mathbf{D}_e$ and $1-\alpha$ for the image predicted depth map $\mathbf{D}_i$. The final depth map $\hat{\mathbf{D}}$ is thus, \begin{equation} \hat{\mathbf{D}} = \alpha \odot \mathbf{D}_e + (1-\alpha) \odot \mathbf{D}_i \end{equation} where $\odot$ denotes pointwise multiplication. \subsection{Loss Function and Training} We train the network following \cite{hu2019revisiting}, and use the logarithm of depth errors. The loss is given as \begin{equation} \mathcal{L}(\hat{\mathbf{D}}, \mathbf{D}) = \frac{1}{W H} \sum_{p=1}^{W}\sum_{q=1}^{H}\ln (1+\lVert \mathbf{D}(p,q) - \hat{\mathbf{D}}(p,q)\rVert_1), \end{equation} where $\mathbf{D}$ is the ground truth depth map. The full optimization problem is given by \begin{equation} \theta_e^*, \theta_i^*, \theta_a^*, \theta_f^*, \theta_m^* = \argmin_{\theta_e, \theta_i, \theta_a, \theta_f, \theta_m} \mathcal{L}(\hat{\mathbf{D}}, \mathbf{D}), \label{eq:final_loss} \end{equation} where $\theta_e$, $\theta_i$, $\theta_a$, $\theta_f$, $\theta_m$ are the parameters of the echo to depth network, image to depth network, attention network, fusion module and material property network respectively. We ignore the undefined regions in the ground truth depth maps, and therefore, such regions do not contribute to the learning. Adding smoothness constraints~\cite{li2019learning} can potentially further improve the quality of the generated depth; however, we obtain good results without using them here.
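The attention-weighted combination of the two depth maps and the log-L1 training loss can be sketched together in a few lines. This is a stdlib-only illustration on nested lists (the function name is ours; the actual implementation operates on tensors and masks out undefined ground-truth pixels, which this sketch omits):

```python
import math

def fuse_depths_and_loss(D_e, D_i, alpha, D_gt):
    """D_hat = alpha * D_e + (1 - alpha) * D_i (pointwise), and
    L = (1 / (W*H)) * sum over pixels of ln(1 + |D_gt - D_hat|).

    All four arguments are W x H nested lists; alpha holds per-pixel
    attention weights in [0, 1].
    """
    W, H = len(D_gt), len(D_gt[0])
    D_hat = [[alpha[p][q] * D_e[p][q] + (1.0 - alpha[p][q]) * D_i[p][q]
              for q in range(H)] for p in range(W)]
    loss = sum(math.log(1.0 + abs(D_gt[p][q] - D_hat[p][q]))
               for p in range(W) for q in range(H)) / (W * H)
    return D_hat, loss
```

The sketch makes the role of $\alpha$ explicit: at $\alpha = 1$ a pixel's prediction comes entirely from the echo branch, at $\alpha = 0$ entirely from the visual branch, and the loss vanishes exactly where the fused depth matches the ground truth.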
We train the full network in an end to end fashion using standard backpropagation. \section{Conclusion} \vspace{-1ex} \label{sec:conclusion} We presented a novel method for estimating depth by combining audio and visual modalities. We hypothesised that material properties play a significant role in the task, and proposed to use automatic material property estimation to predict spatial attention maps which modulate and combine the outputs of audio and image based depth prediction. We showed with quantitative experiments on challenging benchmark datasets that (i) adding material properties explicitly improves the depth prediction over audio only and visual only prediction, and (ii) an attention mechanism based fusion method is better than simpler existing approaches for audio visual fusion. We also demonstrated qualitatively that the attention maps focus on interpretable areas for the two modalities. While the audio attention maps tended to ignore materials which would diffuse or absorb the audio wave, the image based attention included those areas. We also demonstrated qualitatively that the proposed method performs better than the existing method, especially near the depth edges, and brings out the finer structures in the scene. We further showed experiments with reduced image resolution where our method degraded gracefully, while the compared methods lost performance significantly. We even compared our method with existing methods for sparse to dense depth prediction, and obtained encouraging competitive results, while not using sparse depth data as input for our method. We would like to explore such multimodal fusion with other modalities like sparse point clouds in the future to obtain even higher quality depth predictions. Further, geometric priors~\cite{srivastava2021} can also be leveraged to improve the results.
In conclusion, we believe that using echo for depth prediction, especially in combination with other modalities, is a promising direction, given the low cost and wide availability of audio sensors. \vspace{0.1em} \noindent \textbf{Acknowledgment.} Kranti Kumar Parida gratefully acknowledges support from the Visvesvaraya fellowship.
For input to Echo Net, we use the spectrogram of $60$ms echo signal. For training on Replica, we use a sampling frequency of $44.1$ kHz and for Matterport3D, we use a sampling frequency of $16$ kHz. We use Hanning window of length $64$ and $32$ to compute spectrogram for Replica and Matterport3D respectively. We use FFT size of 512 for both the cases. \noindent\textbf{Metrics.} Following earlier works in depth estimation, we report results on root mean squared error (RMSE), mean relative error (REL), mean $log_{10}$ error, and the percentage $\delta_t$ of pixels with both the relative error and its inverse under threshold $t$ where $t \in \{1.25, 1.25^2, 1.25^3\}$. Due to space constraints, we provide more details on the datasets, network architecture, parameter settings and evaluation metrics in the supplementary material. \subsection{Experiment Design} We design the experiments below to demonstrate the following points. (i) Using audio and visual modalities together improves performance over either of them. (ii) Using material properties in addition improves further. (ii) Among the different ways to combine the three, i.e\onedot visual, audio and material properties, the proposed attention based fusion performs the best. We demonstrate the first two points with ablation experiments where we combine the inputs by simple concatenation, followed by a decoder to predict the depth map (\secref{sec:ablation} first part). Further, we demonstrate the third point by comparing combination methods and showing that attention based combination performs the best (\secref{sec:ablation} second part). We then compare our full method with existing state of the art approaches (\secref{sec:soa}). We also show experiments on degrading resolution of image (\secref{sec:res}). 
\subsection{Ablation Study} \label{sec:ablation} \noindent\textbf{Combination of echo, image and material properties.} We show the results of combining the three inputs with simple concatenation in \tabref{tab:material_property_with_echo}. With only binaural echo as input, the RMSE is $0.995$, which improves to $0.673$ when image is added as well. When material property features are used with echo, the RMSE improves to $0.523$ i.e\onedot an improvement of $\sim47\%$ over echo only input and $\sim22\%$ over image+echo input. Lastly, when image and material property features are concatenated with echo features (all), the RMSE value further improves to $0.491$ i.e\onedot $\sim50\%$ over echo only input and $\sim27\%$ over echo+image input. These experiments validate that even with simple fusion, material properties improve the performance of the system. We attribute the improvement to our intuition that adding material properties explicitly allows the network to internally module the audio and visual features. In the following we demonstrate the proposed explicit multimodal fusion followed by attention based weighting performs much better than the simple concatenation. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Modality & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}(\uparrow)$&$\delta_{1.25^2}(\uparrow)$&$\delta_{1.25^3}(\uparrow)$\\ \hline \hline echo & 0.995 & 0.638& 0.208& 0.388& 0.599&0.742 \\ \hline echo+img & 0.673& 0.451& 0.146 &0.534 &0.734 &0.845\\ echo+mat. & 0.523& 0.282& 0.103& 0.652& 0.839& 0.920\\ all& \textbf{0.491}& \textbf{0.276}& \textbf{0.098}& \textbf{0.667}& \textbf{0.846}& \textbf{0.924}\\ \hline \end{tabular} } \caption{\textbf{Depth estimation by combining different modalities} Using echoes only (echo), echoes with image features (echo+img.), echoes with material features (echo+mat.) and combination of echo, image and material features (all). 
$\downarrow$ indicates lower is better and $\uparrow$ indicates higher is better. } \label{tab:material_property_with_echo} \vspace{-0.5 em} \end{table} \noindent\textbf{Impact of multimodal fusion and attention.} \label{sec:exp_fusion} We now validate the efficacy of our audio visual fusion method, which uses a multimodal fusion module to predict attention over the modalities to combine them. We compare the proposed fusion method, denoted \texttt{bilinear} with two alternatives, i.e\onedot a simple concatenation of features denoted \texttt{concat}, and a dot product based fusion denoted \texttt{dot}. All these methods fuse the features and use them to estimate attention weights. We also compare by the fusion method of VisualEchoes \cite{gao2020visualechoes}, which fuses features with concatenation and uses them with a decoder to predict depth map, i.e\onedot it has no attention based fusion. We show the results in \tabref{tab:fusion_ablation}. We observe that \texttt{bilinear}, with an RMSE of $0.249$, performs best among the compared methods, highlighting that the proposed fusion is better than the simple concatenation or dot product based fusion. We also observe that \texttt{concat} performs better than VisualEchoes i.e\onedot $0.259$ cf\onedot $0.346$ RMSE. This indicates that attention maps (which are present in \texttt{concat} but absent in VisualEchoes) are important for better performance. \figref{fig:loss_rmse_plot} further shows the training loss (left) and validation RMSE (right) plots. We observe that VisualEchoes suffers from severe overfitting (much higher val RMSE), which is mitigated on adding the material features (i.e\onedot \texttt{concat}). This further reinforces the hypothesis that material properties play an important role in depth prediction. 
To conclude, our ablation experiments demonstrated that (i) adding material properties explicitly is helpful for audio visual depth prediction, (ii) the proposed fusion strategy is better than simpler alternatives, and (iii) attention based combination of depth maps is better than the simple concatenation used in previous methods, e.g\onedot VisualEchoes. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}(\uparrow)$&$\delta_{1.25^2}(\uparrow)$&$\delta_{1.25^3}(\uparrow)$\\ \hline \hline VisualEchoes \cite{gao2020visualechoes} & 0.346 & 0.172 & 0.068 & 0.798 & 0.905 & 0.950\\ \hline \texttt{concat} &0.259 &0.122 &0.048 &0.867 &0.939 &0.968 \\ \texttt{dot}& 0.262& 0.133& 0.050& 0.853& 0.943& \textbf{0.974}\\ \texttt{bilinear}& \textbf{0.249}& \textbf{0.118}& \textbf{0.046}& \textbf{0.869}& \textbf{0.943}& \textbf{0.970}\\ \hline \end{tabular} } \caption{\textbf{Performance of different fusion strategies}. \texttt{concat} refers to the concatenation of all the inputs, \texttt{dot} to fusion by dot product, and \texttt{bilinear} to fusion by bilinear transformation (see \secref{sec:exp_fusion}).} \label{tab:fusion_ablation} \vspace{-1.5 em} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/plot.pdf} \caption{\textbf{Training loss and validation RMSE on Replica dataset.} In VisualEchoes \cite{gao2020visualechoes} depth prediction is performed directly from concatenated features. \texttt{dot}, \texttt{concat} and \texttt{bilinear} are the three different fusion strategies for the proposed attention based prediction.} \label{fig:loss_rmse_plot} \vspace{-1em} \end{figure} \vspace{-0.5em} \subsection{Comparison to state-of-the-art} \label{sec:soa} \noindent\textbf{Baselines.} We compare on Replica and Matterport3D against VisualEchoes~\cite{gao2020visualechoes} and competitive baselines. 
The baseline methods are AVERAGE, ECHO2DEPTH and RGB2DEPTH. AVERAGE refers to the average depth value of all the samples in the training set. ECHO2DEPTH refers to depth estimation using only the Echo Net (\secref{subsec:echonet}) and RGB2DEPTH refers to depth estimation using only the Visual Net (\secref{subsec:visualnet}). \noindent\textbf{Comparison on Replica dataset.} We report results in \tabref{tab:replica}. The proposed method outperforms all the compared methods on all the metrics. Specifically, it outperforms VisualEchoes by $\sim28\%$ on RMSE. We also observe that while the improvement of VisualEchoes w.r.t.\xspace RGB2DEPTH is marginal ($0.346$ cf\onedot $0.374$ i.e\onedot $7.4\%$), the proposed method is able to achieve an improvement of $\sim33\%$ ($0.249$ cf\onedot $0.374$ RMSE). Both methods perform significantly better than the AVERAGE and ECHO2DEPTH baselines. \noindent\textbf{Comparison on Matterport3D dataset.} We report results in \tabref{tab:mp3d}. We outperform the echo only (ECHO2DEPTH), image only (RGB2DEPTH) and AVERAGE baselines on all the five metrics. Our method also outperforms the existing VisualEchoes method by $\sim4\%$ on RMSE, and on all the metrics, after training the method for $300$ epochs. Further, better results on $\delta$ indicate that the proposed method has lower pixel wise relative error cf\onedot VisualEchoes, which manifests in the form of better depth estimation around edges (\secref{subsec:qual}). Since Matterport3D is a popular benchmark for depth estimation, we also compare our method with the state-of-the-art methods on sparse to dense depth map estimation. These methods use sparse depth maps as inputs, while we have no explicit depth information in the inputs. We also use a slightly smaller subset of Matterport3D, i.e\onedot $77$ scenes cf\onedot $90$ for the other methods. The results are shown in~\tabref{tab:mp3d_sparse_depth}, where we obtain better results than four out of five compared methods. 
While the performances are not directly comparable, this supports the argument that echo can be a viable modality for estimating depth from RGB and can potentially provide additional information that is usually obtained from explicit 3D representations such as sparse depth maps. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}$ ($\uparrow$) & $\delta_{1.25^2}$ ($\uparrow$) & $\delta_{1.25^3}$ ($\uparrow$)\\ \hline \hline AVERAGE & 1.070 & 0.791 & 0.230 & 0.235 & 0.509 & 0.750 \\ ECHO2DEPTH & 0.713 & 0.347 & 0.134 & 0.580 & 0.772 & 0.868 \\ RGB2DEPTH & 0.374 & 0.202 & 0.076 & 0.749 & 0.883 & 0.945\\ VisualEchoes \cite{gao2020visualechoes} & 0.346 & 0.172 & 0.068 & 0.798 & 0.905 & 0.950\\ Proposed Method& \textbf{0.249}& \textbf{0.118}& \textbf{0.046}& \textbf{0.869}& \textbf{0.943}& \textbf{0.970}\\ \hline \end{tabular} } \caption{\textbf{Comparison with existing methods on Replica dataset}. 
We report the results for the baseline and existing methods directly from \cite{gao2020visualechoes}.} \label{tab:replica} \end{table} \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline Method & RMSE ($\downarrow$) & REL ($\downarrow$) & log10 ($\downarrow$) & $\delta_{1.25}$ ($\uparrow$) & $\delta_{1.25^2}$ ($\uparrow$) & $\delta_{1.25^3}$ ($\uparrow$)\\ \hline \hline AVERAGE & 1.913& 0.714& 0.237& 0.264& 0.538& 0.697\\ ECHO2DEPTH & 1.778& 0.507& 0.192& 0.464& 0.642& 0.759\\ RGB2DEPTH & 1.090& 0.260& 0.111& 0.592& 0.802&0.910 \\ VisualEchoes \cite{gao2020visualechoes} & 0.998& 0.193& 0.083& 0.711& 0.878& 0.945\\ Proposed Method& \textbf{0.950}& \textbf{0.175}& \textbf{0.079}& \textbf{0.733}& \textbf{0.886}& \textbf{0.948}\\ \hline \end{tabular} } \caption{\textbf{Comparison with existing methods on Matterport3D dataset.}} \label{tab:mp3d} \end{table} \begin{SCtable} \centering \resizebox{0.6\columnwidth}{!} { \begin{tabular}{c|c|c} \hline Method & RMS ($\downarrow$) & MAE ($\downarrow$) \\ \hline \hline AD \cite{liu2013guided} &1.653 & 0.610\\ MRF \cite{harrison2010image} &1.675 & 0.618\\ Zhang et al. \cite{zhang2018deep} & 1.316 & 0.461 \\ Huang et al. \cite{huang2019indoor} & 1.092 & 0.342 \\ Xiong et al. \cite{xiong2020sparse} & 0.860 & 0.462\\ \hline \emph{Proposed Method$^*$} & \emph{1.008} & \emph{0.570} \\ \hline \end{tabular} } \caption{\textbf{Comparison on Matterport3D}. $^*$The compared methods use sparse depth maps as inputs, while we do not.} \label{tab:mp3d_sparse_depth} \end{SCtable} \vspace{-0.5em} \subsection{Qualitative Results}\label{subsec:qual} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/attention.pdf} \caption{\textbf{Visualizing attention maps.} We show attention maps for echo and image. The first two columns are examples from the Replica dataset and the last two columns are examples from the Matterport3D dataset. 
We observe that the echo modality, in general, produces high attention values for far away solid structures, whereas the image modality attends more to nearby points (the sofa in the first and third examples). See supplementary material for more qualitative results.} \label{fig:attention} \vspace{-1em} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{fig/depth_pred.pdf} \caption{\textbf{Qualitative comparisons for depth estimation on Replica (first two rows) and Matterport3D (last two rows) datasets.} We observe that the proposed approach is able to preserve fine structures, and has better depth estimation at the boundaries when compared to the existing approach. See supplementary material for more qualitative results.} \label{fig:depth_pred} \vspace{-1.5em} \end{figure*} We give qualitative results of both (i) depth prediction, and (ii) attention maps in this section. We first provide a few qualitative examples of the attention maps for echo and image in Fig.~\ref{fig:attention}. We provide the attention maps for two examples each from Replica and Matterport3D. We make a general observation from these results that the echo attention map mostly attends to the far off regular regions but completely ignores the finer structures in them. It gives higher weight to the walls in the scenes but ignores any objects present on them (e.g.\ the monitor in the third example and the wall painting in the fourth). It also ignores the chairs and sofa, which are at relatively lower depth values. The image attention map, being complementary to the echo attention, gives higher weights to nearby objects such as the sofa, chair etc\onedot, but ignores the far off wall in all the examples. 
This could be mainly due to two reasons: (i) the echoes from nearby regions are observed almost instantaneously, which makes them hard to distinguish from the original signal; (ii) most of the nearby objects (e.g.\ the sofa) are made up of sound absorbing materials which do not reflect audio signals strongly. These results suggest that our network tries to leverage the best of both worlds to get the final depth map. \begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c|c|c|c|c|c|c} \hline \diagbox{Method}{Scale} & $1$x & $\frac{1}{2}$x & $\frac{1}{4}$x & $\frac{1}{8}$x & $\frac{1}{16}$x & $\frac{1}{32}$x\\ \hline \hline Img Only & 0.374 & 0.362 & 0.375 & 0.398 & 0.440 & 0.550 \\ VisualEchoes\cite{gao2020visualechoes}&0.346& 0.354& 0.357& 0.392& 0.471& 0.593\\ Proposed Method & 0.249 &0.244& 0.249& 0.281& 0.342& 0.446\\ \hline \end{tabular} } \caption{\textbf{Performance on varying input image resolution.} The metric is RMSE (lower is better).} \label{tab:low_res_comparison} \vspace{-1.5em} \end{table} We give a few examples of reconstructed depth using our approach and also compare it with an existing approach in Fig.~\ref{fig:depth_pred}. We observe that our approach is quite robust to fine grained structure in the scene cf\onedot VisualEchoes \cite{gao2020visualechoes}. In the first example, VisualEchoes misses the small chairs in the scene while our method gives an idea of the structure. Again, the boundaries in the first example are not very clearly identified by VisualEchoes, but are estimated almost perfectly in our case. We observe similar trends in the other three examples as well. The results from the individual modalities (image and echo) are not satisfactory, but do capture the overall structure of the scene. These results suggest that the networks are not individually powerful, but their performance improves significantly when we effectively combine the information from both of them. 
We encourage the readers to look at the supplementary material for more such qualitative results. \subsection{Experiments with varying image resolutions}\label{sec:res} As discussed in \secref{sec:ablation}, the network can better exploit the combination of echo and image due to the attention based integration. This motivates us to study the effect of resolution degradation. This is similar to the case where human vision degrades and the brain learns to compensate and adapt based on the auditory input~\cite{berry20143d, thaler2016echolocation}. We evaluate the proposed method by progressively degrading the image resolution. We give the results in~\tabref{tab:low_res_comparison}, gradually reducing the resolution to $\frac{1}{2}$, $\frac{1}{4}$, $\frac{1}{8}$, $\frac{1}{16}$ and $\frac{1}{32}$ times the original image. We observe that the proposed approach is more robust to reduction in image resolution than the compared methods. The performance of the image only method degrades significantly when the downscaling factor is $\frac{1}{8}$ of the original image size, while the proposed method at that scale still performs better than the image only method at the original resolution, i.e\onedot $0.281$ RMSE for the proposed method at $\frac{1}{8}$x scale cf\onedot $0.374$ with image only at $1$x. Further, we observe that even with a very high downscaling of $\frac{1}{32}$x, we obtain a better RMSE as compared to VisualEchoes ($0.446$ cf\onedot $0.593$). In fact, VisualEchoes performs worse than even the image only method. Similar observations can be made at $\frac{1}{16}$x. We can also observe that the increase in RMSE from $1$x to $\frac{1}{4}$x is larger for VisualEchoes cf\onedot the image only and the proposed methods. This further highlights the efficacy of the proposed method. 
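The progressive resolution degradation used in this experiment can be emulated with simple average pooling; a minimal sketch (the downscaling factors follow \tabref{tab:low_res_comparison}; the helper name is ours):

```python
import numpy as np

def downscale(img, factor):
    # Average-pool a square image by an integer factor (illustrative
    # stand-in for the resolution degradation used in the experiment).
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
for f in (2, 4, 8, 16, 32):
    low = downscale(img, f)
    print(f, low.shape)
```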
\section{Introduction} \label{sec:intro} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/teaser.pdf} \caption{ We address the problem of depth prediction using multimodal audio (binaural echo) and visual (monocular RGB) inputs. We propose an attention based fusion mechanism, where the attention maps are influenced by automatically estimated material properties of the scene objects. We argue that capturing the material properties while fusing echo with images is beneficial as the light and sound reflection characteristics depend not only on the depth, but also on the material of the scene elements. } \label{fig:teaser} \end{figure} Humans perceive the surroundings using multiple sensory inputs such as sound, sight, smell and touch, with different tasks involving different combinations of such inputs. In computer vision, multimodal learning has also gained interest. As one popular stream, researchers have leveraged audio and visual inputs for addressing challenging problems. These problems can be broadly divided into three categories: (i) using the audio modality only as the input, to learn a seemingly visual task, e.g.\ using echo for depth prediction~\cite{christensen2019batvision}, (ii) using the visual modality as \emph{auxiliary information} for an audio task, e.g.\ using videos to convert mono audio to binaural audio~\cite{gao20192}, and (iii) using both audio and visual modalities together, e.g\onedot for depth prediction~\cite{gao2020visualechoes}. Here, we follow the third line of work, and address the problem of depth map prediction using both audio and visual inputs. Studies in psychology and perception indicate that both sound and vision complement each other, i.e\onedot visual information helps calibrate the auditory information~\cite{kolarik2016auditory} while auditory grouping helps solve visual ambiguity~\cite{watanabe2001sound}. Many animals, like bats and dolphins, use echolocation to estimate the distances of objects from them. 
Visually impaired humans have also been reported to use echolocation \cite{HumanELWiki}. Motivated by such cases, Christensen et~al.\ \cite{christensen2019batvision, christensen2020batvision} recently showed that depth maps can be predicted directly from stereo sound. Gao et~al.\ \cite{gao2020visualechoes} showed that by fusing features from binaural echoes with the monocular image features, depth estimation can be improved. Inspired by these findings, we work with similar reasoning, i.e\onedot that sound contains useful information to predict depth, and that echoes, used along with monocular images, improve depth estimation. Going beyond the current methods which do simple combinations of features from echoes and images \cite{gao2020visualechoes}, we argue that the material properties of the objects in the scene significantly inform the spatial fidelity of the two streams. Some objects may lend better depth estimates with echoes, while some may prefer the visual modality more. Deriving from this motivation, we propose a novel end-to-end learnable network with a multimodal fusion module. This novel module incorporates material properties of the scene and fuses the two modalities with spatial attention maps indicating the fidelity of the respective modality for different spatial locations. The material properties are automatically estimated using a sub-network initialized with training on auxiliary data on materials. As the final depth prediction, the method fuses the depth maps produced by the audio and visual inputs, modulated by the predicted attention maps. \figref{fig:teaser} illustrates the difference between a real output of an existing method and the proposed approach, showing qualitative improvements. 
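The final prediction step described above — fusing the per-modality depth maps with the predicted attention maps — can be sketched as a pixel-wise convex combination (a minimal sketch; shapes, values and the helper name are illustrative):

```python
import numpy as np

def combine_depths(d_echo, d_img, a_echo, a_img):
    # Pixel-wise softmax over the two attention logit maps, then a convex
    # combination of the corresponding depth maps (illustrative sketch).
    logits = np.stack([a_echo, a_img])
    w = np.exp(logits - logits.max(axis=0))
    w = w / w.sum(axis=0)
    return w[0] * d_echo + w[1] * d_img

h = w = 4
d_echo = np.full((h, w), 2.0)
d_img = np.full((h, w), 4.0)
# Equal logits give equal weights, i.e. the average of the two maps.
depth = combine_depths(d_echo, d_img, np.zeros((h, w)), np.zeros((h, w)))
print(depth[0, 0])  # 3.0
```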
\begin{figure} \centering \includegraphics[width=\columnwidth]{fig/comparison.pdf} \caption{Comparison of our method with the existing approaches} \label{fig:comparison} \vspace{-1em} \end{figure} We demonstrate the advantages of the proposed method with experiments on Replica \cite{straub2019replica} and Matterport3D \cite{Matterport3D} datasets. We outperform the previous state-of-the-art on the Replica dataset by $\sim28\%$ RMSE. On Matterport3D, which is more complex and larger ($5$x) than Replica, we provide results on the multimodal depth prediction task for the first time, and compare the proposed method with existing approaches and challenging baselines. We also show that the proposed network can estimate better depth with low resolution images. This is important in practical systems working on depth estimation from monocular images, as sensors capturing echoes can be used along with cameras, not only to enhance the performance of the existing setup but also to suffer less degradation in depth prediction when the quality of the images is reduced. Further, we give ablation experiments to systematically evaluate the different aspects of the proposed method. In summary, we make the following contributions: \begin{itemize}[leftmargin=1.25em, itemsep=-0.25em] \item We propose a novel end-to-end learnable deep neural network to estimate depth from binaural audio and monocular images. \item We provide exhaustive quantitative and qualitative results on the Replica and Matterport3D datasets. On Replica, we outperform the previous state-of-the-art by $\sim 28\%$. On Matterport3D, we provide results benchmarking existing methods. The proposed method achieves state-of-the-art performance, outperforming the existing best method on Matterport3D by $\sim 4\%$. \item We provide exhaustive ablation experiments on the design choices in the network, and validate our intuitions with representative qualitative results. 
\end{itemize} \section{Related Works} \label{sec:related_work} \noindent\textbf{Audio-visual learning.} Recently there has been a surge in interest in audio-visual learning. In one line of work, the correspondence between the two modalities is used to learn representations in each individual modality, in a self-supervised manner \cite{arandjelovic2017look, Arandjelovic_2018_ECCV, hu2020discriminative, morgado2020learning, owens2018audio}. In \cite{arandjelovic2017look, Arandjelovic_2018_ECCV}, the authors used an auxiliary task, of predicting whether an audio and image pair correspond to each other, to learn representations in each of the modalities. In \cite{owens2018audio}, the authors predicted if the audio and the video clip are temporally synchronized to learn representations. In \cite{hu2020discriminative}, the authors advanced a step further and localized sound generating objects in the image by leveraging the correspondence between the audio and the image. In a recent approach, the authors of \cite{morgado2020learning} used the spatial correspondence between $360^{\circ}$ video and audio. In another line of work, both audio and video modalities were integrated to increase the performance. Recently, a variety of tasks such as audio source separation \cite{zhao2018sound, zhao2019sound, gan2020music, gao2018learning, gao2019co}, zero-shot learning \cite{parida2020coordinated, mazumder2020avgzslnet}, saliency prediction \cite{tsiami2020stavis} and audio spatialization \cite{gao20192, morgado2018self} have used the information from both audio and video modalities to improve the performance cf.\ using a single modality only. \noindent\textbf{Depth estimation without echoes.} The depth estimation methods span from monocular image based methods to multimodal methods. Usually, the modalities are sparse depth maps, LiDAR point clouds, bird's eye views, and normal maps. 
Monocular depth estimation methods involve utilizing a single RGB image to estimate dense depth~\cite{zhao2020monocular, bhoi2019monocular, tiwari2020pseudo}. Many methods directly utilize a single image~\cite{monodepth2, li2019learning, Ranftl2020} or estimate intermediate 3D representations such as point clouds~\cite{weng2019monocular,you2019pseudo} and bird's eye views~\cite{srivastava2019learning} to estimate dense depth maps. A few other methods work on combining RGB with sparse depth maps, normal maps etc\onedot~\cite{qiu2019deeplidar, ma2019self} to estimate dense depth maps. \noindent\textbf{Depth estimation with echoes.} In \cite{christensen2019batvision, christensen2020batvision} the depth of the scene was estimated using only echoes received from a single audio pulse. This approach completely ignored the visual modality while estimating the depth. On similar lines, the authors in \cite{vasudevan2020semantic} estimated the depth of the scene directly from binaural audio of the object itself. They did not have the ground truth depth map, and instead used a vision network to predict the ground truth depth map. Although this method used the direct audio from the object, the performance of the system was always upper bounded by the depth map predicted from the visual input. In all of the methods above, the authors used one modality in isolation and did not fuse multi-modal information to improve the performance of the system. In \cite{gao2020visualechoes} the authors used echolocation as a pre-training task for learning a better visual representation. The authors also presented a case study showing that simply concatenating the audio features with the visual features improves depth prediction. We explore this idea further, improving depth prediction by adding binaural echoes to the image input, and propose a novel multimodal fusion method which additionally incorporates automatically estimated material properties. 
We give the comparison with existing approaches in \figref{fig:comparison}. \section{Dataset Details} \label{sec:dataset_details} We use two datasets, Replica \cite{straub2019replica} and Matterport3D \cite{Matterport3D}, for our experiments. Both the datasets are rendered using an open source 3D simulator, Habitat \cite{savva2019habitat}. To obtain echoes on both datasets, we use the simulations from Soundspaces~\cite{chen2019soundspaces}. Soundspaces augments the simulator by providing realistic audio simulations for the scenes by considering room geometry and materials in the room. \subsection{Simulating Echoes} We use the procedure outlined below to obtain echoes on both the Replica and Matterport3D datasets. Soundspaces performs acoustic simulation in two steps as follows. \noindent\textbf{Step 1.} The visual scene from the respective dataset is subdivided into grids. The grids are divided along navigable points so that an agent can be placed there. Then the Room Impulse Response (RIR) is computed between each pair of points using audio ray tracing \cite{veach1995bidirectional}. Each pair denotes a combination of source and receiver which send the audio signal and receive the echoes respectively. \\ \noindent\textbf{Step 2.} The echoes are obtained by convolving the input audio signal with the RIR computed in the previous step. Following Soundspaces, we use the RIR between each pair of points at four orientations ($0^\circ$, $90^\circ$, $180^\circ$, $270^\circ$). For the proposed method, we place the source and receiver at the same point and use the resulting RIR. In addition, following \cite{gao2020visualechoes}, we use the source audio signal as a $3$ ms sweep signal spanning the human hearing range ($20$Hz to $20$kHz). We obtain the echo response by convolving the corresponding source audio signal with the RIRs obtained previously. 
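Step 2 amounts to one convolution per channel; a minimal sketch with a toy chirp and hand-crafted impulse responses (all signal and RIR values here are illustrative, not taken from Soundspaces):

```python
import numpy as np

def simulate_echo(source, rir_left, rir_right):
    # Convolve the emitted signal with the left/right room impulse
    # responses to obtain the binaural echo (Step 2 above).
    return np.stack([np.convolve(source, rir_left),
                     np.convolve(source, rir_right)])

fs = 16000
t = np.arange(int(0.003 * fs)) / fs                  # 3 ms sweep, as above
source = np.sin(2 * np.pi * (20.0 + 3.3e6 * t) * t)  # toy linear chirp
rir_l = np.zeros(800); rir_l[0] = 1.0; rir_l[400] = 0.5  # direct path + echo
rir_r = np.zeros(800); rir_r[0] = 1.0; rir_r[500] = 0.3
echo = simulate_echo(source, rir_l, rir_r)
print(echo.shape)  # (2, 847)
```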
Further, the sampling rates for the source and received audio (echoes) are $44.1$ kHz and $16$ kHz for Replica and Matterport3D respectively. \subsection{Visual Scenes} We now provide details on the scenes used from each dataset along with the train and test details. \noindent\textbf{Replica dataset.} We use all the $18$ scenes from Replica having $6960$ points in total, from $1740$ images and $4$ orientations. Following \cite{gao2020visualechoes}, we use a train set consisting of $5496$ points and $15$ scenes. The test set consists of $1464$ points from $3$ scenes. As a validation set is not defined for Replica, we use a small subset of points from the train set for tuning the network parameters. Then the parameters are fixed, and the entire train set is used for training the network. \noindent\textbf{Matterport3D dataset.} Matterport3D consists of $90$ scenes. Soundspaces provides RIR for $85$ of these scenes. Further, we discard another $8$ scenes which have no or very few navigable points. This results in a dataset with $77$ scenes which we use as our final dataset. These $77$ scenes contain $67,376$ points from $16,844$ images and $4$ orientations. The dataset is then split into train, validation and test sets. The train set consists of $40,176$ points and $59$ scenes. The validation set consists of $13,592$ points and $10$ scenes. The test set consists of $13,602$ points and $8$ scenes. \section{Implementation Details} \label{sec:implementation_details} \noindent\textbf{Input.} The input to the Visual Net and Material Net is a $128\times128$ RGB image. We also perform image augmentation by randomly jittering color, contrast and brightness of the image. The input to the Echo Net is a spectrogram from the simulated echoes. For obtaining the spectrogram, we first convert the time domain audio signal into a Short Time Fourier Transform representation using a Hanning window with a fixed window length, hop length and number of frequency points. We use two channel audio with a duration of $60$ms. 
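The spectrogram computation just described can be sketched as follows (a single-channel, Hanning-windowed STFT magnitude; the helper name is ours and the parameter values are illustrative, so the exact frame count may differ from ours depending on padding):

```python
import numpy as np

def stft_magnitude(signal, win_len, hop, n_fft):
    # Hanning-windowed STFT magnitude for one channel.
    window = np.hanning(win_len)
    frames = [np.abs(np.fft.rfft(signal[s:s + win_len] * window, n=n_fft))
              for s in range(0, len(signal) - win_len + 1, hop)]
    return np.stack(frames, axis=1)  # shape: (n_fft // 2 + 1, n_frames)

fs = 16000
x = np.random.default_rng(0).standard_normal(int(0.060 * fs))  # 60 ms of audio
spec = stft_magnitude(x, win_len=32, hop=8, n_fft=512)
print(spec.shape)  # (257, 117)
```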
For Replica, we use an audio signal of $44.1$kHz and convert it to a $2\times257\times166$ spectrogram using a window length of $64$, hop length of $16$ and $512$ frequency points. For Matterport3D, we use an audio signal of $16$kHz and convert it to a $2\times257\times121$ spectrogram using a window length of $32$, hop length of $8$ and $512$ frequency points. \noindent\textbf{Additional Parameters.} We train the network on both the datasets using the Adam optimizer with a learning rate of $10^{-4}$, momentum of $0.9$ and weight decay of $5\times10^{-4}$. We use a batch size of $128$ for Replica and $64$ for Matterport3D. \section{Network Architecture and Parameters} \label{sec:network_architecture} We now provide the detailed architecture of each subnetwork from the proposed method. \noindent\textbf{Echo Net.} It is an encoder-decoder network. The encoder is inspired by \cite{gao2020visualechoes} and consists of a convolutional neural network having $3$ layers with filter dimensions $8\times8$, $4\times4$, $3\times3$ and strides $4\times4$, $2\times2$ and $1\times1$ respectively for each layer. The three layers have $32$, $64$ and $8$ output filters respectively. Finally, we use a $1 \times 1$ conv layer to convert the arbitrarily sized feature map into a $512$ dimensional feature vector. The decoder consists of $7$ fractionally strided convolutional layers with filter size, stride and padding of $4$, $2$ and $1$ respectively. The $7$ layers have $512$, $256$, $128$, $64$, $32$, $16$ and $1$ output filters respectively. We use BatchNorm and a ReLU non-linearity after each layer of the network. \noindent\textbf{Visual Net.} It consists of an encoder-decoder network. The encoder consists of a convolutional neural network with $5$ layers. For each layer, the filter size is $4$, the stride is $2$ and the padding is $1$. The $5$ layers have $64$, $128$, $256$, $512$ and $512$ output filters respectively. 
We use LeakyReLU with a negative slope of $0.2$ and BatchNorm after each layer. Similarly, for the decoder we use $5$ fractionally strided convolutional layers with $512$, $256$, $128$, $64$ and $1$ output filters respectively. We also use skip connections and concatenate the features from the corresponding encoder layer with the decoder output to get the final feature map from the decoder. We use BatchNorm and ReLU after each layer. \noindent\textbf{Material Net.} We use the first five convolution blocks of ResNet-18~\cite{he2016deep}. The first layer has a filter size of $7 \times 7$ and all subsequent layers have filters of size $3 \times 3$. The five blocks have $64$, $64$, $128$, $256$ and $512$ output filters respectively. \noindent\textbf{Attention Net.} We use five fractionally strided convolutional layers with $512$, $256$, $128$, $64$ and $1$ output filters respectively. The filter size, stride and padding are $4$, $2$ and $1$ respectively. \section{Evaluation Metrics} \label{sec:evaluation_metrics} We use the following metrics to evaluate our results. We denote the predicted depth and ground truth depth as $\hat{\mathbf{D}}(p)$ and $\mathbf{D}(p)$ for every point $p$. We further use only those points that have a valid depth value, i.e.\ missing values and points having zero depth value in $\mathbf{D}$ are ignored. We denote the number of such valid points as $|\mathbf{D}|$. \begin{itemize} \item Root Mean Square Error: \begin{equation} \sqrt{\frac{1}{|\mathbf{D}|}\sum_{p \in \mathbf{D}}\left(\hat{\mathbf{D}}(p) - \mathbf{D}(p)\right)^2} \end{equation} \item Mean absolute relative error: \begin{equation} \frac{1}{|\mathbf{D}|}\sum_{p \in \mathbf{D}}\frac{\left|\hat{\mathbf{D}}(p) - \mathbf{D}(p)\right|}{\hat{\mathbf{D}}(p)} \end{equation} \item Mean $\log_{10}$ error: \begin{equation} \frac{1}{|\mathbf{D}|}\sum_{p \in \mathbf{D}}\left|\log_{10} \hat{\mathbf{D}}(p) - \log_{10} \mathbf{D}(p)\right| \end{equation} \item $\delta_t$ is the percentage of pixels within the error range $t$. 
We define the error range as \begin{equation}\label{eq:delta} \max\left(\frac{\hat{\mathbf{D}}(p)}{\mathbf{D}(p)}, \frac{\mathbf{D}(p)}{\hat{\mathbf{D}}(p)}\right) < t \end{equation} where $t \in \{1.25, 1.25^2, 1.25^3\}$. \end{itemize} \section{More Qualitative Results} \label{sec:qual_res} We give more qualitative results of depth estimation using various techniques on the Replica and Matterport3D datasets in \figref{fig:depth_pred_replica} and \figref{fig:depth_pred_mp3d} respectively. The visualizations of the attention maps from the Echo Net and Visual Net are shown in \figref{fig:attention_replica} (Replica) and \figref{fig:attention_mp3d} (Matterport3D). \begin{figure*} \vspace{-1 em} \centering \includegraphics[width=0.8\textwidth]{fig/replica.pdf} \caption{\textbf{Qualitative results for depth estimation on Replica dataset.} From left to right - input image, depth estimation using only echoes, depth estimation using only image, depth estimation from Visual Echoes, depth estimation using proposed method, ground truth depth map. The proposed method has better depth estimation in complicated scenes containing many objects causing frequent depth variations (e.g\onedot row $1$, row $4$). It also provides robust depth estimation along boundaries of objects (e.g\onedot rows $3$,$7$,$8$). 
When the individual depth estimations from image and echo are poor, Visual Echoes also yields a poor estimate (closer to the image-only result), while the proposed method provides estimates closer to the ground truth, e.g\onedot the cabinets (row $4$) and the door (row $9$), which are completely missed by the other methods.} \label{fig:depth_pred_replica} \end{figure*} \begin{figure*} \vspace{-1 em} \centering \includegraphics[width=0.8\textwidth]{fig/matterport.pdf} \caption{\textbf{Qualitative comparisons for depth estimation on Matterport3D dataset.} From left to right - input image, depth estimation using only echoes, depth estimation using only image, depth estimation from Visual Echoes, depth estimation using proposed method, ground truth depth map. We observe that the proposed method consistently provides better depth map estimation of smaller/farther objects (such as chairs cf\onedot other methods in row $6$) and also at object boundaries (rows $1$,$4$,$5$). It also provides results closer to the ground truth under illumination changes (row $7$). We also observe that when image and echo depth estimations individually yield poor results, Visual Echoes tends to perform poorly as well while the proposed method is still able to estimate better depth (row $7$).} \label{fig:depth_pred_mp3d} \end{figure*} \begin{figure} \centering \includegraphics[width=0.44\textwidth]{fig/replica_attention.pdf} \caption{\textbf{Visualization of attention maps on Replica dataset.} From left to right - input image, attention map from Echo Net, attention map from Visual Net. } \label{fig:attention_replica} \end{figure} \begin{figure} \centering \includegraphics[width=0.44\textwidth]{fig/mp3d_attention.pdf} \caption{\textbf{Visualization of attention maps on Matterport3D dataset.} From left to right - input image, attention map from Echo Net, attention map from Visual Net.} \label{fig:attention_mp3d} \end{figure}
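For completeness, the evaluation metrics of \secref{sec:evaluation_metrics} can be computed as in the following sketch (following the text, only non-zero ground truth pixels are used and the relative error is normalised by the prediction; the function name and toy values are illustrative):

```python
import numpy as np

def depth_metrics(pred, gt, thresholds=(1.25, 1.25**2, 1.25**3)):
    # RMSE, mean absolute relative error, mean log10 error and delta_t
    # accuracies over valid (non-zero) ground truth pixels.
    mask = gt > 0
    p, g = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((p - g) ** 2))
    rel = np.mean(np.abs(p - g) / p)   # normalised by the prediction
    log10 = np.mean(np.abs(np.log10(p) - np.log10(g)))
    ratio = np.maximum(p / g, g / p)
    deltas = [float(np.mean(ratio < t)) for t in thresholds]
    return rmse, rel, log10, deltas

gt = np.array([[1.0, 2.0], [0.0, 4.0]])   # one invalid (zero-depth) pixel
pred = np.array([[1.1, 1.9], [3.0, 4.4]])
rmse, rel, log10, deltas = depth_metrics(pred, gt)
print(round(float(rmse), 3), deltas)  # 0.245 [1.0, 1.0, 1.0]
```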
\section{\label{sec:intro} Introduction} Bennett~{\em et~al.}~\cite{Bennett93} showed that an unknown quantum state (a qubit) could be {\em teleported} via two classical bits with the use of a maximally entangled Bell state shared between the sender and receiver. The significance of teleportation as a tool for quantum information was extended when Gottesman and Chuang~\cite{Gottesman99} showed that unitary gates could be performed using modified teleportation protocols, known as {\em gate teleportation}, where the task of applying a certain gate was effectively translated to the task of preparing a certain state. Since then teleportation has been an invaluable tool for the quantum information community, as gate teleportation was the basis for showing that linear optics with single photons and photo-detectors was sufficient for a scalable quantum computer~\cite{KLM}. Moreover, Zhou {\em et~al.}~\cite{Zhou00} demonstrated that all previously known fault-tolerant gate constructions were equivalent to {\em one-bit teleportations} of gates. Recently, the use of one-bit teleportations between a qubit and a continuous variable {\em quantum bus} (or {\em qubus}) has been shown to be important for fault-tolerance~\cite{StabPap}. Using one-bit teleportations to transfer between two different forms of quantum logic, a fault tolerant method to measure the syndromes for any stabiliser code with the qubus architecture was shown, allowing for a linear saving in resources compared to a general CNOT construction. In terms of optics, the two different types of quantum logic used were polarisation $\{\ket{0}=\ket{H},\ket{1}=\ket{V}\}$ and logical states corresponding to rotated coherent states $\{\ket{\alpha},\ket{e^{\pm i\theta}\alpha}\}$, although in general any two-level system (qubit) which can interact with a continuous variable mode (qubus) would suffice. 
The relative ease with which single qubit operations can generally be performed prompted the question of whether a universal set of gates can be constructed with this rotated coherent state logic. In this paper we describe one such construction, which we call {\em qubus logic}. The fault-tolerant error-correction scheme using a qubus~\cite{StabPap} exploits the fact that entanglement is easy to create with coherent cat states of the qubus, such as $\ket{\alpha}+\ket{\alpha e^{i\theta}}$, and single qubit operations are easily performed on a two-level system. We also describe how these cat states can be used as a resource to construct other large entangled states, such as cluster states~\cite{Raussendorf01,Nielsen04,Browne05}, using one-bit teleportations between a qubit and a qubus. Although the average fidelities of qubus logic and cluster state preparation depend on how strong the interaction between the qubit and the qubus can be made, and how large the amplitude $\alpha$ is, these fidelities can be increased arbitrarily close to $1$ through the use of post-selection during the one-bit teleportations, demonstrating the power and flexibility of teleportation in qubus computation for state preparation. The paper is organised as follows. First, in Section~\ref{sec:onebitteleportations} we revisit one-bit teleportations for the qubus scheme. Next, in Section~\ref{sec:coherentstatelogic} we present a technique to perform quantum computation using coherent states of the qubus as basis states. To do this we make use of controlled (phase-space) rotations and ancilla qubits. This coherent state computation scheme is the most efficient to date. In Section~\ref{sec:clusterstate} we show how we can efficiently prepare repetition encoded states using one-bit teleportations, and how such encoders can be used to prepare large cluster states.
\section{\label{sec:onebitteleportations}One-Bit Teleportations} In the original quantum teleportation protocol an arbitrary quantum state can be transferred between two parties that share a maximally entangled state by using only measurements and communication of measurement outcomes~\cite{Bennett93}. Modifications of the resource state allow for the application of unitaries to an arbitrary state in a similar manner, in what is known as {\em gate teleportation}~\cite{Gottesman99}. The main advantage of gate teleportation is the fact that it allows for the application of the unitary to be delegated to the state preparation stage. In some physical realisations of quantum devices, it may only be possible to prepare these states with some probability of success. In that case, the successful preparations can still be used for scalable quantum computation~\cite{Gottesman99}. When dealing with noisy quantum devices, it is important to encode the quantum state across multiple subsystems, at the cost of requiring more complex operations to implement encoded unitaries. In order to avoid the uncontrolled propagation of errors during these operations, one can also employ gate teleportation with the extra step of verifying the integrity of the resource state before use~\cite{Gottesman99,Zhou00,KLM,Knill,SDKO}. In the cases where the teleportation protocol is used only to separate the preparation of complex resource states from the rest of the computation, simpler protocols can be devised. These protocols are known as {\em one-bit teleportations}~\cite{Zhou00}. Unitaries implemented through one-bit gate teleportation can also be used for fault-tolerant quantum computation~\cite{Zhou00} as well as measurement based quantum computation~\cite{Raussendorf01}. The main difference between one-bit teleportation and the standard teleportation protocol is the lack of a maximally entangled state.
Instead, in order to perform a one-bit teleportation it is necessary that the two parties interact directly in a specified manner, and that the qubit which will receive the teleported state be prepared in a special state initially. Some unitary operations on coherent states can be difficult to implement deterministically, while the creation of entangled multimode coherent states is relatively easy. Single qubits, on the other hand, are usually relatively easy to manipulate, while interactions between them can be challenging. For this reason, we consider one-bit teleportation between states of a qubit and states of a field in a quantum bus, or {\em qubus}. The two types of one-bit teleportations for qubus computation are shown in Fig.~(\ref{one-teleWNL}), based on similar constructions proposed for qubits by Zhou {\em et al.}\cite{Zhou00}. \begin{figure}[ht] \includegraphics[width=8cm]{one-bit-teleportations-new.pdf} \caption{\footnotesize Approximate one-bit teleportation protocols~\cite{Zhou00} using controlled rotations. Here, the light grey lines correspond to qubits, and the thick red lines correspond to quantum bus modes.} \label{one-teleWNL} \end{figure} The one-bit teleportation of the qubit state $a\ket{0}+b\ket{1}$ into the state of the qubus, in the coherent state basis $\{\ket{\alpha},\ket{\alpha e^{i\theta}}\}$, is depicted in Fig.~(\ref{one-teleWNL}a). The qubit itself can be encoded, for example, in the polarisation of a photon, i.e. $\ket{0}=\ket{H}$ and $\ket{1}=\ket{V}$. The initial state, before any operation, is $\bigl(a\ket{0}+b\ket{1}\bigr)\ket{\alpha}$. The controlled phase-space rotation corresponds to the unitary which applies a phase shift of $\theta$ to the bus if the qubit state is $\ket{1}$, and does nothing otherwise~\footnote{This can be implemented by an interaction of the Jaynes-Cummings type between the qubit and the qubus, in the dispersive limit.}. 
After the controlled rotation by $\theta$ the state becomes $a\ket{0}\ket{\alpha}+b\ket{1}\ket{e^{i\theta}\alpha}$. Representing the qubit state in the Pauli $X$ eigenbasis, this is $\ket{+}\bigl(a\ket{\alpha}+b\ket{e^{i\theta}\alpha}\bigr)/\sqrt{2}+\ket{-}\bigl(a\ket{\alpha}-b\ket{e^{i\theta}\alpha}\bigr)/\sqrt{2}$. When we detect $\ket{+}$ we have successfully teleported our qubit into $\ket{\alpha}$, $\ket{e^{i\theta}\alpha}$ logic. When we detect $\ket{-}$ we have the state $a\ket{\alpha}-b\ket{e^{i\theta}\alpha}$. The relative phase discrepancy can be corrected by the operation $\tilde{Z}$, which approximates the Pauli $Z$ operation in the $\{\ket{\alpha},\ket{\alpha e^{i\theta}}\}$ basis. This correction can be delayed until the state is teleported back to a qubit, where it is more easily implemented. The one-bit teleportation of the state $a\ket{\alpha}+b\ket{\alpha e^{i\theta}}$ of the qubus to the state of the qubit can be performed by the circuit depicted in Fig.~(\ref{one-teleWNL}b). That is, we start with the state $\bigl(a\ket{\alpha}+b\ket{\alpha e^{i\theta}}\bigr)(\ket{0}+\ket{1})/\sqrt{2}$. After the controlled rotation by $-\theta$, the state becomes $\ket{\alpha}\bigl( a\ket{0}+b\ket{1}\bigr)/\sqrt{2}+\bigl(b\ket{e^{i\theta}\alpha}\ket{0}+a\ket{e^{-i\theta}\alpha}\ket{1}\bigr)/\sqrt{2}$. 
Projecting the qubus state into the $x$-quadrature eigenstate $\ket{x}$ via homodyne detection, which is the measurement we depict as ${\widetilde{Z}}$, we obtain the conditional unnormalised state $\ket{\psi(x)}$ \begin{multline}\label{busbitteleport} \ket{\psi(x)} = \frac{f(x,\alpha)}{\sqrt{2}}(a\ket{0}+b\ket{1}) \\ + \frac{f(x,\alpha\cos(\theta))}{\sqrt{2}}(e^{i\phi(x)}b\ket{0}+e^{-i\phi(x)}a\ket{1}) \end{multline} where \begin{gather} f(x,\beta) = \frac{1}{(2\pi)^{1/4}}\exp\left(\frac{-(x-2\beta)^2}{4}\right)\\ \phi(x) = \alpha x \sin(\theta)-\alpha^2\sin(2\theta), \end{gather} since $\langle x | \alpha e^{\pm i \theta}\rangle = e^{\pm i \phi(x)}f(x,\alpha\cos(\theta))$ and $\langle x | \alpha \rangle = f(x,\alpha)$ for real $\alpha$~\cite{Gardiner, Barret05}. The weights $f(x,\alpha)$ and $f(x,\alpha\cos(\theta))$ are Gaussian functions with the same variance but different means, given by $2\alpha$ and $2\alpha\cos(\theta)$, respectively. Given $x_0=\alpha(1+\cos(\theta))$, the midpoint between the means of $f(x,\alpha)$ and $f(x,\alpha\cos(\theta))$, one can maximise the fidelity of obtaining the desired state $a\ket{0}+b\ket{1}$ (averaged over all possible values of $x$) by simply doing nothing when $x>x_0$ (where $f(x,\alpha)>f(x,\alpha\cos(\theta))$), or applying $Z_{\phi(x)}=\exp(-i\phi(x)Z)$, a Pauli $Z$ rotation by $\phi(x)$, followed by a Pauli $X$, when $x\le x_0$. For simplicity, the teleportation corrections are not explicitly depicted in the circuit diagrams. \subsection{Average fidelities} In order to quantify the performance of the protocols just described, consider the {\em process fidelity}~\cite{Jamiolkowski72,Horodecki99,Gilchrist05}. The process fidelity between two quantum operations is obtained by computing the fidelity between states isomorphic to the processes under the Choi-Jamio{\l}kowski isomorphism.
For example, in order to compare a quantum process ${\mathcal{E}}$ acting on a $D$-dimensional system to another quantum process ${\mathcal{F}}$ acting on the same system, we compute the fidelity between the states \begin{gather} \ket{{\mathcal{E}}}={\openone}_{1}\otimes{\mathcal{E}}_{2}\left(\frac{1}{\sqrt{D}}\sum_{i=1}^D\ket{ii}_{12}\right)\\ \ket{{\mathcal{F}}}={\openone}_{1}\otimes{\mathcal{F}}_{2}\left(\frac{1}{\sqrt{D}}\sum_{i=1}^D\ket{ii}_{12}\right). \end{gather} In the case of single qubit processes, we just need to consider the action of the process on one of the qubits of the state $\frac{1}{\sqrt{2}}(\ket{00}\pm\ket{11})$. The operational meaning of the process fidelity is given by considering the projection of the first qubit into a particular state $a\ket{0}+b\ket{1}$. In this case the second qubit collapses into the state corresponding to the output of the process acting on the state $a\ket{0}+b\ket{1}$. Thus a high fidelity between $\ket{{\mathcal{E}}}$ and $\ket{{\mathcal{F}}}$ implies a high fidelity between the outputs of ${\mathcal{E}}$ and ${\mathcal{F}}$. Consider the state produced by the circuit in Fig.~(\ref{one-teleWNL}a) \begin{equation}\label{bitbusentangled} \ket{\psi_\pm}=\frac{1}{\sqrt{2}}(\ket{0,\alpha}\pm\ket{1,\alpha e^{i\theta}}), \end{equation} which depends on the qubit measurement outcome. As the relative phase is known, and the correction can be performed after the state is teleported back to a qubit, for each of the outcomes we can compare this state with the ideal state expected from the definition of the basis states for the qubus. This results in a process fidelity of $1$ for one-bit teleportation into the qubus. For the case where we teleport the state from the qubus back into the qubit, using the circuit in Fig.~(\ref{one-teleWNL}b), we consider the action of the process on the second mode of the state $\ket{\psi_+}$ from Eq.~\eqref{bitbusentangled}.
This is not, strictly speaking, the Choi-Jamio{\l}kowski isomorphism, but it gives the same operational meaning for the process fidelity as a precursor to the fidelity between the outputs of the different processes being compared, as any superposition of $\{\ket{\alpha},\ket{\alpha e^{i\theta}}\}$ can be prepared from $\ket{\psi_+}$ by projecting the qubit into some desired state. We expect the output state to be $\frac{1}{\sqrt{2}}\left(\ket{00}+\ket{11}\right)$ from the definition of the basis states, but we instead obtain the unnormalised states \begin{multline} \ket{\psi_E(x>x_0)} = \frac{f(x,\alpha)}{\sqrt{2}} \left(\frac{\ket{00}+\ket{11}}{\sqrt{2}}\right) + \\ \frac{f(x,\alpha\cos(\theta))}{\sqrt{2}} \left(\frac{e^{-i\phi(x)}\ket{01}+e^{i\phi(x)}\ket{10}}{\sqrt{2}}\right), \end{multline} \begin{multline} \ket{\psi_E(x<x_0)} = \frac{f(x,\alpha)}{\sqrt{2}} \left(\frac{e^{-i\phi(x)}\ket{01}+e^{i\phi(x)}\ket{10}}{\sqrt{2}}\right) + \\\frac{f(x,\alpha\cos(\theta))}{\sqrt{2}} \left(\frac{\ket{00}+\ket{11}}{\sqrt{2}}\right). \end{multline} The normalised output state, averaged over all $x$ outcomes, is \begin{multline} \rho = \int_{x_0}^\infty \ket{\psi_E(x>x_0)} \bra{\psi_E(x>x_0)} dx + \\ \int_{-\infty}^{x_0} \ket{\psi_E(x<x_0)} \bra{\psi_E(x<x_0)} dx, \end{multline} so that the average process fidelity for one-bit teleportation into a qubit is \begin{equation} F_p = \frac{1}{2} + \frac{1}{2}{\text{erf}}\left(\frac{x_d}{2\sqrt{2}}\right)\label{eqn:SingeTeleF}, \end{equation} where $x_d=2\alpha(1-\cos(\theta))\approx \alpha \theta^{2}$ for small $\theta$. Teleportation from the qubus into the qubit is not perfect, even in the ideal setting we consider, because the states $\ket{\alpha}$ and $\ket{e^{i \theta}\alpha}$ cannot be distinguished perfectly. However, $F_p$ can be made arbitrarily close to one by letting $x_d\to\infty$, or $\alpha\theta^2\to\infty$ if $\theta\ll1$, as seen in Fig.~(\ref{onebitfidelity}). 
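Eq.~(\ref{eqn:SingeTeleF}) can be checked by integrating the Gaussian homodyne weights numerically and comparing against the closed form. A minimal sketch (the function names and the chosen $\alpha$, $\theta$ are our own, purely illustrative; `f_sq` is $|f(x,\beta)|^2$, a unit-variance normal density centred at $2\beta$):

```python
import math

def f_sq(x, beta):
    # |f(x, beta)|^2: normal density with mean 2*beta and unit variance
    return math.exp(-(x - 2 * beta) ** 2 / 2) / math.sqrt(2 * math.pi)

def fidelity_numeric(alpha, theta, n=40000, lo=-20.0, hi=40.0):
    # Midpoint-rule quadrature: keep the |alpha> weight for x > x0 and the
    # (corrected) |alpha cos(theta)> weight for x <= x0; each branch carries 1/2
    x0 = alpha * (1 + math.cos(theta))
    dx = (hi - lo) / n
    acc = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        acc += (f_sq(x, alpha) if x > x0 else f_sq(x, alpha * math.cos(theta))) * dx
    return acc / 2

def fidelity_closed(alpha, theta):
    # Eq. (SingeTeleF): F_p = 1/2 + erf(x_d / (2 sqrt(2))) / 2
    x_d = 2 * alpha * (1 - math.cos(theta))
    return 0.5 + 0.5 * math.erf(x_d / (2 * math.sqrt(2)))

alpha, theta = 8.0, 0.4
print(fidelity_closed(alpha, theta), fidelity_numeric(alpha, theta))
```

Both evaluations agree to within quadrature error, and $F_p \to 1/2$ as $x_d \to 0$, reflecting the complete indistinguishability of the two coherent states in that limit.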
This corresponds to increasing the distinguishability of the coherent states $\ket{\alpha}$ and $\ket{e^{i \theta}\alpha}$. \begin{figure}[htb] \centering \includegraphics[width=8cm]{tel-fidelity-plot.pdf} \caption{\footnotesize Fidelity $F_p$ of one-bit teleportation from the qubus to a qubit, as a function of $x_d$.} \label{onebitfidelity} \end{figure} \subsection{\label{sec:postselectedteleport}Post-selected teleportation} In order to improve the average fidelity of the teleportations without changing the physical parameters $\alpha$ and $\theta$ of the basis states, one can post-select the outcomes of the $x$-quadrature measurements when teleporting states from the qubus mode to a qubit, as these outcomes essentially herald the fidelity of the output state with the desired state. Discarding the states with fidelity below a certain threshold allows for the average fidelity to be boosted, even in the case where $\alpha\theta^2\not\gg1$, at the cost of a certain probability of failure. This is particularly useful for the preparation of quantum states which are used as resources for some quantum information processing tasks. Instead of accepting all states corresponding to all $x$ outcomes of the homodyne measurement which implements ${\widetilde{Z}}$, we only accept states corresponding to outcomes which are far enough away from the midpoint $x_0$, since the state at $x_0$ has the lowest fidelity with the desired state. More explicitly, we only accept states corresponding to measurement outcomes which are smaller than $x_0-y$ or larger than $x_0+y$. 
This post-selection can only be performed for one-bit teleportation from the qubus to the qubit, yielding a probability of success given by \begin{multline} \Pr(|x-x_0|>y) =\\\frac{1}{2}\left[{\text{erfc}}\left(\frac{2y-x_d}{2\sqrt{2}}\right)+{\text{erfc}}\left(\frac{2y+x_d}{2\sqrt{2}}\right)\right]\label{eqn:SingeTelePostP}, \end{multline} and process fidelity conditioned on the successful outcome given by \begin{equation} F_{p,y} = \frac{{\text{erfc}}\left(\frac{2y-x_d}{2\sqrt{2}}\right)}{{\text{erfc}}\left(\frac{2y-x_d}{2\sqrt{2}}\right)+{\text{erfc}}\left(\frac{2y+x_d}{2\sqrt{2}}\right)}\label{eqn:SingeTelePostF}. \end{equation} The effect of discarding some of the states depending on the measurement outcome for the teleportation in Fig.~(\ref{one-teleWNL}b) is depicted in Fig.~(\ref{onebitpostselfidelity}). In particular, we see that the process fidelity can be made arbitrarily close to $1$ at the cost of lower probability of success, while $\alpha$ and $\theta$ are unchanged, since \begin{equation} \lim_{y\to\infty}F_{p,y}=1. \end{equation} As the probability mass is highly concentrated due to the Gaussian shape of the wave packets, the probability of success drops super-exponentially fast as a function of $y$. This is because for large $z$ we have~\cite{wolfram} \begin{equation} \frac{2}{\sqrt{\pi}} \frac{e^{-z^2}}{{z+\sqrt{z^2+2}}} < {\text{erfc}}(z) < \frac{2}{\sqrt{\pi}} \frac{e^{-z^2}}{{z+\sqrt{z^2+\frac{4}{\pi}}}}. \end{equation} This fast decay corresponds to the contour lines for decreasing probability of success getting closer and closer in Fig.~(\ref{onebitpostselfidelity}). Thus, while the fidelity can be increased arbitrarily via post-selection (by increasing $y$), this leads to a drop in the probability of obtaining the successful outcome for post-selection. 
Note that, despite this scaling, significant gains in fidelity can be obtained by post-selection while keeping physical resources such as $\alpha$ and $\theta$ fixed, and while maintaining a reasonable probability of success. In particular, if $x_d=2.5$, increasing $y$ from $0$ to $1.25$ takes the fidelity from $0.9$ to $0.99$ while the probability of success only drops from $1$ to $0.5$. If the probability of success is to be maintained constant, a linear increase in $x_d$ can bring the fidelity exponentially closer to unity, as is evident in Fig.~(\ref{onebitpostselfidelity}). As $x_d$ is proportional to the amplitude $\alpha$ of the coherent state, this can be achieved while maintaining $\theta$ constant. Since $\theta$ is usually the parameter which is hard to increase in an experimental setting, this is highly advantageous. \begin{figure}[htb] \centering \includegraphics{tel-plot-new.pdf} \caption{\footnotesize Contour lines for post-selected fidelity $F_{p,y}$ of one-bit teleportation from the qubus to a qubit (blue), and success probability for post-selection (red), as functions of $x_d$ and $y$.} \label{onebitpostselfidelity} \end{figure} Instead of discarding the outputs with unacceptable fidelity, one can also use the information that the failure is heralded to recover and continue the computation. In the case of the one-bit teleportations described here, such an approach would require active quantum error correction or quantum erasure codes -- the type of codes necessary for heralded errors -- which have much higher thresholds than general quantum error correcting codes~\cite{Knill}. We will not discuss such a possibility further in this paper, and will focus instead on post-selection for quantum gate construction and state preparation.
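The trade-off quoted above follows directly from Eqs.~(\ref{eqn:SingeTelePostP}) and (\ref{eqn:SingeTelePostF}); a short numerical sketch (function names are ours) reproduces the $\approx 0.9 \to 0.99$ fidelity gain at roughly half the success probability:

```python
import math

S = 2 * math.sqrt(2)

def success_prob(x_d, y):
    # Eq. (SingeTelePostP): probability of an outcome with |x - x0| > y
    return 0.5 * (math.erfc((2 * y - x_d) / S) + math.erfc((2 * y + x_d) / S))

def post_fidelity(x_d, y):
    # Eq. (SingeTelePostF): process fidelity conditioned on acceptance
    a = math.erfc((2 * y - x_d) / S)
    b = math.erfc((2 * y + x_d) / S)
    return a / (a + b)

x_d = 2.5
print(post_fidelity(x_d, 0.0), success_prob(x_d, 0.0))    # ~0.894, 1.0
print(post_fidelity(x_d, 1.25), success_prob(x_d, 1.25))  # ~0.988, ~0.506
```

At $y=0$ all outcomes are accepted, so the success probability is exactly $1$ and the conditional fidelity reduces to $F_p$ of Eq.~(\ref{eqn:SingeTeleF}).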
\section{\label{sec:coherentstatelogic}Universal Computation with Qubus Logic} Previous work by Ralph {\em et al.}~\cite{Ralph02,Ralph03} and Gilchrist {\em et al.}~\cite{Gilchrist04} illustrated the construction of a universal quantum computer using what we call {\em coherent state logic}. In these schemes a universal set of gates is applied to qubit basis states defined as $\ket{0}_{L}=\ket{-\alpha'}$ and $\ket{1}_{L}=\ket{\alpha'}$, using partial Bell state measurements and cat states of the form $\left(\ket{-\alpha'}+\ket{\alpha'}\right)/\sqrt{2}$ as resources. To perform a universal set of gates a total of sixteen ancilla cat states are necessary~\cite{Ralph03}. For $\alpha'\geq 2$ the qubits $\ket{-\alpha'}$ and $\ket{\alpha'}$ are approximately orthogonal since $|\bk{\alpha'}{-\alpha'}|^{2}=e^{-4\alpha'^{2}}\leq 10^{-6}$. Using the one-bit teleportations in Fig.~(\ref{one-teleWNL}) we can also perform a universal set of gates on a Hilbert space spanned by the states $\ket{\mathbf{0}}_{L}=\ket{\alpha}$ and $\ket{\mathbf{1}}_{L}=\ket{e^{\pm i\theta}\alpha}$, which we call {\em qubus logic}. As mentioned in the previous section, the two states defined for the logical $\ket{\mathbf{1}}_{L}$ are indistinguishable when we homodyne detect along the $x$-quadrature, a fact that will become important later. The overlap between these basis states $|\bk{\alpha}{e^{\pm i\theta}\alpha}|^{2}=e^{2|\alpha|^{2}(\cos\theta-1)}\approx e^{-|\alpha|^{2}\theta^{2}}$ (for small $\theta$) is close to 0 provided $\alpha\theta\gg1$, so that we may consider them orthogonal -- e.g. for $\alpha\theta\geq 3.8$, we have $|\bk{\alpha}{e^{i\theta}\alpha}|^{2}\leq 10^{-6}$. It can be seen that our basis states are equivalent to the basis states of coherent state logic given a displacement and a phase shifter.
That is, if we displace the arbitrary state $a\ket{\alpha}+b\ket{\alpha e^{i\theta}}$ by $D(-\alpha\cos\left(\theta/2\right)e^{i\theta/2})$ and apply the phase shifter $e^{i(\pi-\theta)\hat{n}/2}$ we have $a\ket{\alpha\sin\left(\theta/2\right)}+be^{i\alpha^{2}\sin(\theta)/2}\ket{-\alpha\sin\left(\theta/2\right)}$. If we now set $\alpha'=\alpha\sin\left(\theta/2\right)\approx \alpha\theta/2$, for small $\theta$, we see that our arbitrary qubus logical state is equivalent to an arbitrary coherent state qubit. The $e^{i\alpha^{2}\sin(\theta)/2}$ phase factor can be corrected once we use a single bit teleportation. If $\alpha'\geq2$ then $\alpha\theta\geq 4$, which is already satisfied by the approximate orthogonality condition $\alpha\theta\gg1$. It is important to note that, although the basis states are equivalent, the gate constructions we describe for qubus logic are very different from the gate constructions for coherent state logic. We compare qubus logic and coherent state logic based on resource usage, i.e. the number of ancilla states and controlled rotations necessary to perform each operation. Since the cat state ancillas needed in coherent state logic, $(\ket{-\alpha'}+\ket{\alpha'})/\sqrt{2}$, can be made using the circuit in Fig.~(\ref{one-teleWNL}a) with an incident photon in the state $(\ket{0}+\ket{1})/\sqrt{2}$ provided $\alpha'=\alpha\sqrt{(1-\cos(\theta))/2}\approx \alpha\theta/2$, we consider the sixteen ancilla cat states required in~\cite{Ralph03} for a universal set of gates to be equivalent to sixteen controlled rotations. In the next two sections, we describe how to construct arbitrarily good approximations to any single qubit unitary rotation as well as the unitary $\text{CSIGN}=\text{diag}(1,1,1,-1)$ in qubus logic, as this is sufficient for universal quantum computation~\cite{DiVincenzo95}.
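The equivalence of the two encodings can be checked on the overlaps of the basis states: with $\alpha'=\alpha\sin(\theta/2)$ one has $4\alpha'^2 = 2\alpha^2(1-\cos\theta)$, so the qubus-logic and coherent-state-logic overlaps agree exactly, not only for small $\theta$. A short sketch (the chosen $\alpha$, $\theta$ are illustrative, with $\alpha\theta=4$):

```python
import math

def overlap_sq_qubus(alpha, theta):
    # |<alpha | e^{i theta} alpha>|^2 = exp(-|alpha - alpha e^{i theta}|^2)
    return math.exp(-2 * alpha ** 2 * (1 - math.cos(theta)))

def overlap_sq_cat(alpha_p):
    # |<alpha' | -alpha'>|^2 = exp(-4 alpha'^2)
    return math.exp(-4 * alpha_p ** 2)

alpha, theta = 40.0, 0.1               # alpha * theta = 4 >> 1
alpha_p = alpha * math.sin(theta / 2)  # equivalent coherent-state amplitude ~ 2
print(alpha_p, overlap_sq_qubus(alpha, theta), overlap_sq_cat(alpha_p))
```

Both overlaps evaluate to $\approx e^{-16} \sim 10^{-7}$ here, so the basis states are orthogonal for all practical purposes.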
\subsection{Single Qubit Gates} An arbitrary single qubit unitary gate $U$ can be applied to the state $c_{0}\ket{\alpha}+c_{1}\ket{e^{i\theta}\alpha}$ by the circuit shown in Fig.~(\ref{SingleQubitGate}). We first teleport this state to the qubit using the circuit in Fig.~(\ref{one-teleWNL}b) and then perform the desired unitary $U$ on the qubit, giving $U\bigl(c_{0}\ket{0}+c_{1}\ket{1}\bigr)$. We can teleport this state back to the qubus mode with Fig.~(\ref{one-teleWNL}a), while the ${\widetilde{Z}}$ correction can be delayed until the next single qubit gate, where it can be implemented by applying a $Z$ in addition to the desired unitary. If it happens that this single qubit rotation is the last step of an algorithm, we know that this $\tilde{Z}$ error will not affect the outcome of a homodyne measurement (which is equivalent to a measurement in the Pauli $Z$ eigenbasis), so that this correction may be ignored. In total this process requires two controlled rotations. \begin{figure}[ht] \includegraphics[width=8cm]{one-qubit-gate.pdf} \caption{\footnotesize A single qubit gate performed on $c_{0}\ket{\alpha}+c_{1}\ket{e^{i\theta}\alpha}$.} \label{SingleQubitGate} \end{figure} Since arbitrary single qubit gates are implemented directly in the two level system, the only degradation in the performance comes from the teleportation of the state from the qubus to the qubit, resulting in the fidelity given in Eq.~(\ref{eqn:SingeTeleF}). In the case that we wish to perform a bit flip on the qubit $c_{0}\ket{\alpha}+c_{1}\ket{e^{i\theta}\alpha}$ we can simply apply the phase shifter $e^{-i\theta\hat{n}}$ to obtain $c_{0}\ket{e^{-i\theta}\alpha}+c_{1}\ket{\alpha}$, similarly to the bit flip gate in~\cite{Ralph03}. \subsubsection{Post-selected implementation of single qubit gates} The fidelity of single qubit gates in qubus logic can be improved simply by using post-selected teleportations.
For simplicity, if we disregard the second one-bit teleportation which transfers the state back to qubus logic, we obtain the probability of success given in Eq.~(\ref{eqn:SingeTelePostP}) and the conditional process fidelity given in Eq.~(\ref{eqn:SingeTelePostF}). \subsection{Two Qubit Gates} To implement the entangling CSIGN gate we teleport our qubus logical state onto the polarisation entangled state $\frac{1}{2}\bigl(\ket{00}+\ket{01}+\ket{10}-\ket{11}\bigr)$. The state $\frac{1}{2}\bigl(\ket{00}+\ket{01}+\ket{10}-\ket{11}\bigr) = ({\openone}\otimes H) (\ket{00}+\ket{11})/\sqrt{2}$, where $H$ represents a Hadamard gate, can be produced offline by any method that generates a maximally entangled pair of qubits. As described previously in the context of error correction, such a state can be produced with controlled rotations~\cite{StabPap}. If we start with the qubus coherent state $\ket{\sqrt{2}\alpha}$ and an eigenstate of the Pauli $X$ operator $(\ket{0}+\ket{1})/\sqrt{2}$ incident on Fig.~(\ref{one-teleWNL}a), we obtain $\ket{\sqrt{2}\alpha}+\ket{\sqrt{2}e^{i\theta}\alpha}$. Next we put this through a symmetric beam splitter to obtain $\frac{1}{\sqrt{2}}\bigl(\ket{\alpha,\alpha}+\ket{e^{i\theta}\alpha,e^{i\theta}\alpha}\bigr)$~\cite{Gilchrist04}. If we now teleport this state to polarisation logic with Fig.~(\ref{one-teleWNL}b) we have, to a good approximation, the Bell state $\bigl(\ket{00}+\ket{11}\bigr)/\sqrt{2}$, and with a local Hadamard gate we finally obtain $\frac{1}{2}\bigl(\ket{00}+\ket{01}+\ket{10}-\ket{11}\bigr)$. To make this state we have used three controlled rotations and one ancilla photon. Since we are only concerned with preparing a resource state which in principle can be stored, we can perform post-selection at the teleportations to ensure the state preparation is of high fidelity, as described in Section~\ref{sec:postselectedteleport}. 
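As a sanity check on the resource state just described, applying a Hadamard to the second qubit of $(\ket{00}+\ket{11})/\sqrt{2}$ does give $\frac{1}{2}\bigl(\ket{00}+\ket{01}+\ket{10}-\ket{11}\bigr)$. A small sketch in the computational basis $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$ (plain state-vector arithmetic, no quantum library assumed):

```python
import math

# Bell state (|00> + |11>)/sqrt(2) as an amplitude vector over {|00>,|01>,|10>,|11>}
s = math.sqrt(0.5)
bell = [s, 0.0, 0.0, s]

# Hadamard matrix acting on a single qubit
H = [[s, s], [s, -s]]

# Apply I (x) H: the Hadamard acts on the second-qubit index j
res = [0.0] * 4
for i in range(2):
    for j in range(2):
        for k in range(2):
            res[2 * i + j] += H[j][k] * bell[2 * i + k]

print([round(v, 3) for v in res])  # -> [0.5, 0.5, 0.5, -0.5]
```

The resulting amplitudes $(\tfrac12, \tfrac12, \tfrac12, -\tfrac12)$ are exactly the resource state teleported onto for the CSIGN gate.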
After this gate teleportation onto qubits, we teleport back to the qubus modes after a possible $X$ correction operation. The overall circuit is shown in Fig.~(\ref{TwoQubitGate}). This CSIGN gate requires four controlled rotations. As with the single qubit gates, $\tilde{Z}$ corrections may be necessary after the final teleportations of Fig.~(\ref{TwoQubitGate}), but these corrections can also be delayed until the next single qubit gate. \begin{figure}[ht] \includegraphics[width=8.7cm]{two-qubit-gate.pdf} \caption{\footnotesize Circuit used to perform a CSIGN between states in qubus logic.} \label{TwoQubitGate} \end{figure} We can see what effect the condition $\alpha\theta^{2}\not\gg1$ has on the operation of the gate in Fig.~(\ref{TwoQubitGate}) by looking at the process fidelity. As this gate operates on two qubits, the input state to the process we want to compare is \begin{multline} \frac{1}{2}\left(\ket{0,0}\ket{\alpha,\alpha}+\ket{0,1}\ket{\alpha,\alpha e^{i\theta}}\right.\\\left.+\ket{1,0}\ket{\alpha e^{i\theta},\alpha}+\ket{1,1}\ket{\alpha e^{i\theta},\alpha e^{i\theta}}\right). \end{multline} From the basis states we have defined, we expect the output \begin{multline} \ket{\psi_2}=\frac{1}{2}\left(\ket{0,0}\ket{\alpha,\alpha}+\ket{0,1}\ket{\alpha,\alpha e^{i\theta}}\right.\\\left.+\ket{1,0}\ket{\alpha e^{i\theta},\alpha}-\ket{1,1}\ket{\alpha e^{i\theta},\alpha e^{i\theta}}\right).
\end{multline} The unnormalised state output from Fig.~(\ref{TwoQubitGate}) is \begin{widetext} \begin{multline}\label{eqn:csignhomostate} \ket{\psi_{2,o}}=\frac{1}{4}\Bigl\{ f(x,\alpha)f(x',\alpha) \left[\ket{00}\ket{00}+\ket{01}\ket{01}+\ket{10}\ket{10}-\ket{11}\ket{11}\right] \\ +f(x,\alpha)f(x',\alpha\cos(\theta)) \left[e^{-i\phi(x')}(\ket{00}\ket{01}+\ket{10}\ket{11})+e^{i\phi(x')}(\ket{01}\ket{00}-\ket{11}\ket{10})\right]\\ +f(x,\alpha\cos(\theta))f(x',\alpha) \left[e^{-i\phi(x)}(\ket{00}\ket{10}+\ket{01}\ket{11})+e^{i\phi(x)}(\ket{10}\ket{00}-\ket{11}\ket{01})\right]\\ \left.+f(x,\alpha\cos(\theta))f(x',\alpha\cos(\theta)) \left[e^{-i(\phi(x)+\phi(x'))}\ket{00}\ket{11}+e^{i(\phi(x')-\phi(x))}\ket{01}\ket{10}+\right.\right.\\ \left.e^{i(\phi(x)-\phi(x'))}\ket{10}\ket{01}-e^{i(\phi(x)+\phi(x'))}\ket{11}\ket{00}\right]\Bigr\}, \end{multline} \end{widetext} where $x$ and $x'$ are the outcomes of the ${\widetilde{Z}}$ measurements (top and bottom in Fig.~(\ref{TwoQubitGate}), respectively). For simplicity, we disregard the final teleportations back to qubus modes, as we have already discussed how they affect the average fidelity of the state in Section~\ref{sec:onebitteleportations}. Since we have two homodyne measurements to consider, we need to look at the four cases: (i) $x$ greater than $x_0$ and $x'$ greater than $x_0$; (ii) $x$ greater than $x_0$ and $x'$ less than $x_0$; (iii) $x$ less than $x_0$ and $x'$ greater than $x_0$; (iv) $x$ less than $x_0$ and $x'$ less than $x_0$. The necessary corrections for each of these cases are (i) ${\openone}\otimes{\openone}$ (ii) ${\openone}\otimes Z_{\phi(x')}X$ (iii) $ Z_{\phi(x)}X \otimes{\openone} $ (iv) $Z_{\phi(x)}X\otimes Z_{\phi(x')}X$.
Integrating over $x$ and $x'$ for these four different regions, one finds the process fidelity to be \begin{equation} F_{\text{CSIGN}}=\frac{1}{4}\left(1+\text{erf}\left(\frac{x_{d}}{2\sqrt{2}}\right)\right)^{2}, \end{equation} which just corresponds to the square of the process fidelity for a one-bit teleportation into qubits, as the only source of failure is the indistinguishability of the basis states for qubus logic. A plot showing how this fidelity scales as a function of $x_{d}$ is shown in Fig.~(\ref{csignfidelity}). \begin{figure}[htb] \centering \includegraphics[width=8cm]{csign-fidelity-plot-new.pdf} \caption{\footnotesize Fidelity $F_{\text{CSIGN}}$ of one-bit CSIGN teleportation from the qubus to a qubit, as a function of $x_d$.} \label{csignfidelity} \end{figure} \subsubsection{Post-selected implementation of the entangling gate} We can counteract the reduction in fidelity shown in Fig.~(\ref{csignfidelity}) in a similar way to the single qubit gate case, by only accepting measurement outcomes less than $x_0-y$ and greater than $x_0+y$. We find the success probability and conditional fidelity to be \begin{gather} P_{\text{CSIGN}}=\frac{1}{4}\left(\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)+\text{erfc}\left(\frac{2y+x_{d}}{2\sqrt{2}}\right)\right)^{2}\\ F_{\text{CSIGN},y}=\left(\frac{\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)}{\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)+\text{erfc}\left(\frac{2y+x_{d}}{2\sqrt{2}}\right)}\right)^{2}, \end{gather} respectively. As before, we see that the process fidelity can be made arbitrarily close to $1$ at the cost of lower probability of success. It should also be immediately clear that as $y\to0$, we have $P_{\text{CSIGN}}\to1$ and $F_{\text{CSIGN},y}\to F_{\text{CSIGN}}$. We see the effect of ignoring some of the homodyne measurements in Fig.~(\ref{csignpostselfidelity}). 
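Since the two homodyne measurements fail independently, $F_{\text{CSIGN}}$ and its post-selected variant are simply the squares of the corresponding single-teleportation expressions, which is easy to confirm numerically (function names are ours, $x_d$ illustrative):

```python
import math

S = 2 * math.sqrt(2)

def f_single(x_d):
    # Eq. (SingeTeleF): single one-bit teleportation fidelity
    return 0.5 * (1 + math.erf(x_d / S))

def f_csign(x_d):
    # CSIGN process fidelity: square of the single-teleportation fidelity
    return 0.25 * (1 + math.erf(x_d / S)) ** 2

def f_csign_post(x_d, y):
    # Post-selected CSIGN fidelity: square of the conditional fidelity
    a = math.erfc((2 * y - x_d) / S)
    b = math.erfc((2 * y + x_d) / S)
    return (a / (a + b)) ** 2

x_d = 2.5
print(f_csign(x_d), f_single(x_d) ** 2)  # identical by construction
print(f_csign_post(x_d, 0.0))            # reduces to f_csign as y -> 0
```

Accepting all outcomes ($y=0$) recovers $F_{\text{CSIGN}}$ exactly, and the conditional fidelity grows monotonically with the acceptance window parameter $y$.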
Even though performance is degraded because of the use of two one-bit teleportations, the general scalings of the fidelity and probability of success with respect to $y$ and $x_d$ are similar to the one-bit teleportation. In particular, we see that the fidelity can be increased significantly by increasing $x_d$ (or equivalently, $\alpha$). \begin{figure}[htb] \centering \includegraphics{csign-plot-new.pdf} \caption{\footnotesize Contour lines for post-selected fidelity $F_{\text{CSIGN},y}$ of CSIGN teleportation from the qubus to a qubit (green), and success probability for post-selection (gold), as functions of $x_d$ and $y$.} \label{csignpostselfidelity} \end{figure} \subsection{Comparison between Qubus Logic and Coherent State Logic} The total number of controlled rotations necessary to construct our universal set of quantum gates on qubus logic, consisting of an arbitrary single qubit rotation and a CSIGN gate, is nine -- the construction of an arbitrary single qubit gate requires two controlled rotations and the construction of a CSIGN gate requires seven, three for the entanglement production and four for the gate operation. This is in contrast to the sixteen controlled rotations (where we assume each controlled rotation is equivalent to a cat state ancilla) necessary for a universal set of gates in coherent state logic~\cite{Ralph02,Ralph03, Gilchrist04}, where an arbitrary single qubit rotation is constructed via $\exp\left(-i \frac{\vartheta }{2}Z\right)\exp\left(-i \frac{\pi }{4}X\right)\exp\left(-i \frac{\varphi }{2}Z\right)\exp\left(i \frac{\pi }{4}X\right)$, with each rotation requiring two cat state ancilla, and a CNOT gate requiring eight cat state ancilla. We further compare the resource consumption of the qubus logic scheme with the recent extension to the coherent state logic scheme by Lund~{\em et al.}~\cite{Lund:PRL} that considers small amplitude coherent states.
In this scheme, gate construction is via unambiguous gate teleportation, where the failure rate for each teleportation depends on the size of the amplitude of the coherent state logical states. Each gate teleportation requires offline probabilistic entanglement generation. On average, an arbitrary rotation about the $Z$ axis would require three cat state ancilla and both the Hadamard and CSIGN gate would each require 27 cat state ancilla. The scheme proposed here yields significant savings compared to previous schemes in terms of the number of controlled rotations necessary to apply a universal set of gates on coherent states. \section{\label{sec:clusterstate}Construction of Cluster States} As we have pointed out in the previous section, the GHZ preparation scheme used for fault-tolerant error correction with strong coherent beams~\cite{StabPap} can be used to perform CSIGN gate teleportation. This approach can be generalised to aid in the construction of cluster states~\cite{Raussendorf01}, as GHZ states are locally equivalent to star graph states~\cite{Hein06, Campbell07}. Once we have GHZ states we can either use CNOT gates built with the aid of a qubus~\cite{Nemoto04,Munro05b} to deterministically join them to make a large cluster state, or use fusion gates~\cite{Browne05} to join them probabilistically. Recent work by Jin~{\em et al.}~\cite{Jin:PRA} presented a scheme to produce arbitrarily large cluster states with a single coherent probe beam. In this scheme, $N$ copies of the state $\left(\ket{H}+\ket{V}\right)/\sqrt{2}$ can be converted into the GHZ state $(\ket{H}^{\otimes N}+\ket{V}^{\otimes N})/\sqrt{2}$ with the use of $N$ controlled rotations and a single homodyne detection. However, the size of the controlled rotations necessary scales exponentially with the size of the desired GHZ state -- the $N$th controlled rotation would need to be $2^{N-1}-1$ times larger than the first controlled rotation applied to the probe beam.
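The exponential growth of the required interaction in this single-probe scheme can be tabulated directly (a minimal sketch; the function names are ours):

```python
# The N-th controlled rotation in the single-probe GHZ scheme must be
# 2^(N-1) - 1 times larger than the first one.

def rotation_scale_factor(n):
    """Factor by which the n-th controlled rotation exceeds the first."""
    return 2 ** (n - 1) - 1

def required_rotation(theta, n):
    """Largest rotation needed for an n-qubit GHZ state, given an
    elementary rotation theta (a hypothetical convenience wrapper)."""
    return rotation_scale_factor(n) * theta
```

The scale factor already reaches 511 at $N=10$, illustrating why the scheme becomes impractical for large GHZ states.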
For example, if we consider an optimistic controlled rotation $\theta$ of order $0.1$, once $N$ reaches 10 we would require a controlled rotation on the order of $\pi$, which is unfeasible for most physical implementations. In the next section we describe how to prepare GHZ states that only require large amplitude coherent states, while using the same fixed controlled rotations $\theta$ and $-\theta$. \subsection{GHZ State Preparation and Repetition Encoding} We mentioned a scheme in the previous section to construct the Bell state $\ket{00}+\ket{11}$, but this can be generalised to prepare GHZ states of any number of subsystems. We first start with the state $(\ket{0}+\ket{1})/\sqrt{2}$ and teleport it to a qubus initially in the larger amplitude $\ket{\sqrt{N}\alpha}$. This will give $(\ket{\sqrt{N}\alpha}+\ket{\sqrt{N}\alpha e^{i\theta}})/\sqrt{2}$. Sending this state through an $N$ port beam splitter with $N-1$ vacuum states in the other ports gives $(\ket{\alpha}^{\otimes N}+\ket{\alpha e^{i\theta}}^{\otimes N})/\sqrt{2}$. Each of these modes can then be teleported back to qubits, yielding $(\ket{0}^{\otimes N}+\ket{1}^{\otimes N})/\sqrt{2}$. The resources that we use to make a GHZ state of size $N$ are $N+1$ controlled rotations, $N+1$ single qubit ancillas, a single qubit measurement and $N$ homodyne detections. This circuit can also function as an encoder for a quantum repetition code, in which case we can allow any input qubit state $a\ket{0}+b\ket{1}$ and obtain an approximation to $a\ket{0}^{\otimes N}+b\ket{1}^{\otimes N}$. In order to evaluate the performance of this process, we once again calculate the process fidelity by using the input state $\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ and acting on the second subsystem. 
Using a generalisation of Eqn.~(\ref{eqn:csignhomostate}) we calculate the effect of $\alpha\theta^{2}\not\gg1$ on the production of a GHZ state of size $N$ to be \begin{equation}\label{repfidel} F_{\text{REP}}=\frac{1}{2^{N}}\left(1+\text{erf}\left(\frac{x_{d}}{2\sqrt{2}}\right)\right)^{N}. \end{equation} Again, this corresponds to the process fidelity of a single one-bit teleportation into a qubit raised to the $N$th power. The fidelity of preparing repetition encoded states drops exponentially in $N$. In Fig.~(\ref{repfidelity}) we show the fidelity as a function of $x_{d}$ for $N=3$ and for $N=9$. \begin{figure}[htb] \centering \includegraphics[width=8cm]{rep-fidelity-plot.pdf} \caption{\footnotesize Process fidelity $F_{\text{REP}}$ of repetition encoding as a function of $x_d$.} \label{repfidelity} \end{figure} \subsection{Post-selected Implementation of GHZ State Preparation and Repetition Encoding} The reduction in fidelity due to $\alpha\theta^{2}\not\gg1$ in Eq.~(\ref{repfidel}) can be counteracted, as before, by simply performing post-selection during the one-bit teleportations into the qubits. We find the success probability and conditional fidelity to be \begin{gather} P_{\text{REP}}=\frac{1}{2^{N}}\left(\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)+\text{erfc}\left(\frac{2y+x_{d}}{2\sqrt{2}}\right)\right)^{N}\\ F_{\text{REP},y}=\left(\frac{\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)}{\text{erfc}\left(\frac{2y-x_{d}}{2\sqrt{2}}\right)+\text{erfc}\left(\frac{2y+x_{d}}{2\sqrt{2}}\right)}\right)^{N} \end{gather} respectively. As $y\to0$ we see that $P_{\text{REP}}\to1$ and $F_{\text{REP},y}\to F_{\text{REP}}$. The effect of discarding some of the states corresponding to undesired homodyne measurement outcomes can be seen in Figs.~(\ref{reppostselfidelity}) and (\ref{reppostselfidelity9}).
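The reduction of these expressions to powers of the single one-bit teleportation fidelity can be checked numerically (a minimal sketch; function names are ours):

```python
import math

_S = 2.0 * math.sqrt(2.0)

def f_onebit(xd):
    """Process fidelity of a single one-bit teleportation into a qubit."""
    return 0.5 * (1.0 + math.erf(xd / _S))

def f_rep(xd, n):
    """Repetition-encoding fidelity F_REP = (1 + erf(xd/(2*sqrt(2))))^n / 2^n."""
    return (0.5 * (1.0 + math.erf(xd / _S))) ** n

def p_rep(xd, y, n):
    """Success probability of the post-selected encoder."""
    a = math.erfc((2.0 * y - xd) / _S)
    b = math.erfc((2.0 * y + xd) / _S)
    return (0.5 * (a + b)) ** n

def f_rep_y(xd, y, n):
    """Conditional fidelity of the post-selected encoder."""
    a = math.erfc((2.0 * y - xd) / _S)
    b = math.erfc((2.0 * y + xd) / _S)
    return (a / (a + b)) ** n
```

One confirms $F_{\text{REP}}=F_{\text{onebit}}^N$, the exponential decay with $N$, and the $y\to0$ limits.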
Thus, as discussed in Section~\ref{sec:postselectedteleport}, one can prepare a state encoded in the repetition code with an arbitrarily high process fidelity, regardless of what $\theta$ and $\alpha$ are. The expected degradation in performance due to the additional teleportations is also evident in the faster decay of the probability of success with larger $y$. \begin{figure}[htb] \centering \includegraphics[width=8cm]{rep-3-plot-new.pdf} \caption{\footnotesize Contour lines for post-selected process fidelity $F_{\text{REP},y}$ of 3-fold repetition encoding (blue), and success probability for post-selection (red), as functions of $\alpha\theta^2$ and $y$.} \label{reppostselfidelity} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=8cm]{rep-9-plot-new.pdf} \caption{\footnotesize Contour lines for post-selected process fidelity $F_{\text{REP},y}$ of 9-fold repetition encoding (green), and success probability for post-selection (gold), as functions of $\alpha\theta^2$ and $y$.} \label{reppostselfidelity9} \end{figure} \section{\label{sec:conclusions}Discussion} We have described in detail various uses for one-bit teleportations between a qubit and a qubus. Using these teleportations, we proposed a scheme for universal quantum computation, called qubus logic, which is a significant improvement over other proposals for quantum computation using coherent states. This scheme uses fewer interactions to perform the gates, and also allows for the use of post-selection to arbitrarily increase the fidelity of the gates given any interaction strength at the cost of lower success probabilities. The one-bit teleportations also allow for the preparation of highly entangled $N$ party states known as GHZ states, which can be used in the preparation of cluster states. Moreover, the same circuitry can be used to encode states in the repetition code, which is a building block for Shor's 9 qubit code.
In this case, where we are interested in preparing resource states, the power and flexibility of post-selected teleportations can be fully exploited, as the achievable fidelity of the state preparation is independent of the interaction strength available. The main property of the qubus exploited in the schemes described here is that entanglement can be easily created in the qubus through the use of a beam splitter. Local operations, on the other hand, are easier to perform on a qubit. The controlled rotations allow information to be transferred from one system to the other, allowing the strengths of each physical system to be exploited to maximal advantage. The fidelity suffers as the operations become more complex, as can be seen in Figs.~(\ref{FidelityAllSave}) and (\ref{AllContourPlotA}). This is because the imperfect one-bit teleportation from qubus to qubit is used multiple times. As the process fidelity is less than perfect, error correction would have to be used for scalable computation. However, as we have discussed, since the homodyne measurements essentially herald the fidelity of the operations, it is possible to use post-selection in conjunction with error heralding to optimise the use of physical resources.
\begin{figure}[htb] \centering \includegraphics[width=8cm]{FidelityAllSave.pdf} \caption{\footnotesize Process fidelity as a function of $x_{d}$ for (a) the qubus logic single qubit gate ($F_{p}$); (b) the CSIGN teleportation ($F_{\text{CSIGN}}$); (c) repetition encoding with $N=3$ shown in blue ($F_{\text{REP}}$); (d) repetition encoding with $N=9$ ($F_{\text{REP}}$).} \label{FidelityAllSave} \end{figure} \begin{figure}[htb] \centering \includegraphics{all-plots2-new.pdf} \caption{\footnotesize Contour plot showing the conditional process fidelity (solid curves) as a function of $x_{d}$ and $y$ for $F=0.9$ for qubus to qubit one-bit teleportation (red), CSIGN teleportation (gold), repetition encoding for $N=3$ (blue) and repetition encoding for $N=9$ (green). The dashed curves are contour curves for the probability of success for post-selection with $\Pr(|x-x_0|>y)=0.5$.} \label{AllContourPlotA} \end{figure} While the scheme presented has been abstracted from particular physical implementations, any physical realisation of a qubit and a continuous variable mode would suffice. The only requirements are controlled rotations, along with fast single qubit gates and homodyne detection, which are necessary to enable feed-forward of results for the implementation of the relevant corrections. \begin{acknowledgments} We would like to thank T.C.~Ralph, K.~Nemoto and W.J.~Munro for valuable discussions. We are supported in part by NSERC, ARO, CIAR, MITACS and MEXT in Japan. C.R.M. would like to thank Mike and Ophelia Lazaridis for financial support while a student at IQC. M.S. would like to thank the Bell family for financial support. \end{acknowledgments} \newpage
\section{Introduction} In this paper, we consider the feedback interconnection shown in Figure \ref{fig:StableNegativefeedback}, where a linear system $G$ is in feedback with a sector-bounded nonlinearity $\Phi$, and we are interested in conditions that guarantee closed-loop stability. Different forms of this problem have been a point of study for over 75 years since the early work of Lur'e \cite{lur1944theory}. Results typically take the form of fixing conditions on one of the systems and describing conditions on the other that guarantee stability of the closed loop. Depending on the orientation of the conic sector characterizing $\Phi$, we can obtain passivity~\cite{desoer2009feedback}, small-gain~\cite{desoer2009feedback,Zames_inputoutput1966_part1}, circle~\cite{khalil2002nonlinear}, conic sector~\cite{Zames_inputoutput1966_part1}, and extended conic sector~\cite{bridgeman2016extended} theorems. These results are \textit{sufficient} conditions for stability. When $G$ is linear and time-invariant (LTI) and $\Phi$ is sector-bounded and memoryless, we obtain the classical Lur'e formulation, where the circle criterion~\cite{brockett1966status} and passivity theorem \cite[Thm.~5.6.18]{vidyasagar2002nonlinear} are sufficient but not necessary for stability. However, if $\Phi$ is allowed to have dynamics, the circle criterion \cite[Thm.~6.6.126]{vidyasagar2002nonlinear} and passivity theorem \cite{khong2018converse} become both necessary and sufficient.
\tikzstyle{block} = [draw, fill=white, rectangle, minimum height=2.8em, minimum width=2.8em] \tikzstyle{sum} = [draw, fill=white, circle] \begin{figure}[!htb] \centering \begin{minipage}[T]{0.55\linewidth} \centering \begin{tikzpicture}[auto, >=latex, thick] \node [block] (G) {$G$}; \node [block] (Phi) at ($ (G) + (0,-1.3) $) {$\Phi$}; \node [sum] (sum1) at ($ (G) + (-1.4,0) $) {}; \node [sum] (sum2) at ($ (Phi) + (1.4,0) $) {}; \node [coordinate] (input) at ($ (sum1) + (-.8,0) $) {}; \node [coordinate] (output) at ($ (sum2) + (.8,0) $) {}; \draw [->] (input) -- node[pos=0.2]{$u_1$} (sum1); \draw [->] (sum1) -- node{$e_1$} (G); \draw [->] (G) -| node[pos=0.25]{$y_1$} node[pos=0.95]{$+$} (sum2); \draw [->] (output) -- node[pos=0.2]{$u_2$} (sum2); \draw [->] (sum2) -- node{$e_2$} (Phi); \draw [->] (Phi) -| node[pos=0.25]{$y_2$} node[pos=0.95]{$+$} (sum1); \end{tikzpicture} \end{minipage}% \begin{minipage}[T]{0.45\linewidth} \vspace{-1em} \begin{subequations}\label{interconnect} \begin{align} e_1 &= u_1 + y_2 \label{1a}\\ y_2 &= \Phi e_2 \label{1b}\\ e_2 &= u_2 + y_1 \label{1c}\\ y_1 &= G e_1 \label{1d} \end{align} \end{subequations} \end{minipage} \caption{Two interconnected systems. We assume in this work that $G$ is linear and $\Phi$ is a sector-bounded nonlinearity.\label{fig:StableNegativefeedback}} \vspace{-2mm} \end{figure} Sector bounds are usually defined as bounds on cumulative sums of inner products on $\Ltwoe$, but they can also be defined as holding pointwise in time~(see for example \cite[\S6.1]{khalil2002nonlinear} and \cite[\S1]{desoer2009feedback}). Results can also be formulated in discrete or continuous time. Finally, one can use a different notion of stability. For example, recent works have developed conditions that guarantee robust \textit{exponential} stability. Under mild assumptions, input-output stability automatically implies exponential stability~\cite{Rantzer97systemanalysis,jonsson1996systems}. 
However, constructing an exponential rate via a gain bound is conservative in general~\cite{boczar2017exponential}. Less conservative sufficient conditions appeared in~\cite{boczar2017exponential,BinITAC2016}, but it is not known whether these conditions are also necessary. These issues arose in the context of analyzing iterative algorithms~\cite{lessard2016analysis,Cyrus2018}, where it is desirable to have tight bounds on worst-case convergence rates. \paragraph{Main contribution.} Our main contribution is a robust stability theorem that unifies and generalizes many of the aforementioned classical results by distilling them down to their fundamental components (Theorem~\ref{thm:main}). We work in a general \textit{semi-inner product space} (see Section~\ref{sec:mainpre}), we define $G$ and $\Phi$ as \textit{relations} rather than operators, and our result is both necessary and sufficient. The added generality we provide leads to a clean result that avoids the usual technicalities associated with the extended spaces $\Ltwoe$ and $\ltwoe$. Indeed, our setting need not include a notion of time, so there is no need to worry about causality, boundedness, or even well-posedness. In Section~\ref{sec:Specialization}, we show how classical results in $\ltwoe$, including sufficient-only cases, follow directly from Theorem~\ref{thm:main}. We also clarify exactly how and when causality, boundedness, and well-posedness come into play. In Section~\ref{sec:Exponential}, we use Theorem~\ref{thm:main} to obtain a new weighted stability result that is both necessary and sufficient. \paragraph{Related work.} Several prior works have also provided unified versions of classical robust stability results. We cite two examples: the extended conic sector theorem, which can handle the case where $G$ is unstable~\cite{bridgeman2016extended}, and a loop-shifting transformation that relates passivity, small-gain, and circle theorems~\cite{anderson1972small}.
Nevertheless, the present work is unique in its use of semi-inner product spaces and its ability to address exponential stability. There are also generalizations to cases where $G$ is nonlinear or $\Phi$ is not sector-bounded. Examples include dissipativity theory~\cite{willems1972dissipative}, integral quadratic constraints~\cite{megretski1997system,pfifer2015integral}, and graph separation theorems~\cite{teel1996input,safonov1980stability}. These efforts lie beyond the scope of the present work. Finding necessary and sufficient stability guarantees has been a point of interest in different applications, including robotics \cite{stramigioli2015energy,colgate1988robust}, robust control \cite[p.~212]{zhou1996robust}, \cite[p.~158]{narendra1973frequency}, and in finding tight upper bounds for the convergence rate of iterative optimization algorithms \cite[\S7]{lessard2016analysis}. \section{Main result}\label{sec:mainpre} \paragraph{Semi-inner products.} A \textit{semi-inner product space} is a vector space $\mathcal{X}$ equipped with a semi-inner product $\ip{\cdot}{\cdot}$. This is identical to an inner product except that it lacks definiteness. In other words, the associated semi-norm $\norm{x}^2\defeq\langle x,x \rangle$ satisfies $\norm{x}\ge 0$ but $\|x\|=0$ need not imply that $x=0$. We say the semi-inner product space is \textit{\textbf{nontrivial}} if the set $\set{x\in\mathcal{X}}{\norm{x}>0}$ is nonempty. We refer the reader to~\cite{conway} for further details. \paragraph{Relations.} A \textit{relation} $R$ on $\mathcal{X}$ is a subset of the product space $R \subseteq \mathcal{X}\times \mathcal{X}$. We denote the set of all relations on $\mathcal{X}$ as $\mathcal{R}(\mathcal{X}) \defeq 2^{\mathcal{X}\times\mathcal{X}}$. We write $Rx$ to denote any $y\in\mathcal{X}$ such that $(x,y)\in R$. 
A relation $L\in\mathcal{R}(\mathcal{X})$ is \textit{\textbf{linear}} if it has the property that $(\lambda_1x_1+\lambda_2 x_2, \lambda_1 y_1+\lambda_2 y_2)\in L$ for all $(x_1,y_1)$, $(x_2,y_2)\in L$ and $\lambda_1$, $\lambda_2 \in \mathbb{R}$. We define $\mathcal{L}$ to be the set of all linear relations, so $L\in\mathcal{L}\subseteq \mathcal{R}(\mathcal{X})$. We define $\mathcal{X}^2$ to be the augmented vectors $u\eqdef\left(\begin{smallmatrix}u_1\\u_2\end{smallmatrix}\right)$ where $u_1,u_2\in\mathcal{X}$. We overload matrix multiplication to have an intuitive interpretation in~$\mathcal{X}^2$. Specifically, for any $\xi, \zeta\in\mathcal{X}^2$ and any matrix $N\in\R^{2\times 2}$, \[ N\xi = \bmat{N_{11} & N_{12} \\ N_{21} & N_{22} }\bmat{\xi_1 \\ \xi_2} \defeq \bmat{ N_{11} \xi_1 + N_{12} \xi_2 \\ N_{21} \xi_1 + N_{22} \xi_2} \in \mathcal{X}^2. \] Likewise, inner products in $\mathcal{X}^2$ have the interpretation \[ \ip{\xi}{\zeta} = \ip{\bmat{\xi_1\\ \xi_2}}{\bmat{\zeta_1\\\zeta_2}} \defeq \ip{\xi_1}{\zeta_1} + \ip{\xi_2}{\zeta_2}. \] % The closed-loop system of Figure \ref{fig:StableNegativefeedback} defines relations: \begin{subequations}\label{eq:relations} \begin{align*} R_{uy} &\defeq \left\{(u,y)\in \mathcal{X}^2\times\mathcal{X}^2 \,|\,\eqref{interconnect} \text{ holds for some }e\in\mathcal{X}^2 \right\}\\ R_{ue} &\defeq \left\{(u,e)\in \mathcal{X}^2\times\mathcal{X}^2\,|\,\eqref{interconnect} \text{ holds for some }y\in\mathcal{X}^2 \right\} \end{align*} \end{subequations} We call a set of relations $\mathcal{C}\subseteq \mathcal{R}(\mathcal{X})$ \textit{\textbf{feedback-invariant}} if $\set{(u_i,y_j)}{(u,y)\in R_{uy}} \in \mathcal{C}$ for all $G,\Phi\in \mathcal{C}$ and for all $i,j \in \{1,2\}$. % We call $\mathcal{C}$ \textit{\textbf{complete}} if given any $x,y \in \mathcal{X}$, there exists $\Phi\in\mathcal{C}$ such that $(x,y)\in\Phi$. 
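As a concrete toy instance of the overloaded operations, one can take $\mathcal{X}=\R^m$ with the ordinary dot product (a minimal sketch; the function names are ours):

```python
# Toy realisation of the overloaded operations on X^2 with X = R^m.
# An element of X^2 is a pair (xi1, xi2) of equal-length lists.

def mat_action(N, xi):
    """Apply a real 2x2 matrix N to xi = (xi1, xi2) componentwise."""
    (n11, n12), (n21, n22) = N
    xi1, xi2 = xi
    top = [n11 * a + n12 * b for a, b in zip(xi1, xi2)]
    bot = [n21 * a + n22 * b for a, b in zip(xi1, xi2)]
    return (top, bot)

def ip2(xi, zeta):
    """<xi, zeta> = <xi1, zeta1> + <xi2, zeta2> with the dot product on R^m."""
    return (sum(a * b for a, b in zip(xi[0], zeta[0]))
            + sum(a * b for a, b in zip(xi[1], zeta[1])))
```

This mirrors exactly the two displayed definitions: the matrix acts blockwise on the pair, and the inner product sums the componentwise inner products.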
\newpage \begin{fthm}[Main result]\label{thm:main} Let $\mathcal{X}$ be a nontrivial semi-inner product space, let $M =M^\tp \in \R^{2\times 2}$ be given and let $\mathcal{C}\subseteq \mathcal{R}(\mathcal{X})$ be complete and feedback-invariant. Suppose $G \in \mathcal{L}\cap\mathcal{C}$. The following are equivalent. \begin{enumerate}[label=(\roman*),itemindent=0pt,labelindent=0pt] \item\label{thm_it_i} There exists $N=N^\tp\! \in \R^{2\times 2}$ satisfying $M\!+\!N\!\prec \!0$ (that is, $-(M+N)$ is positive definite) such that $G$ satisfies \begin{equation}\label{G} \ip{ \bmat{G\xi\\ \xi} }{ N \bmat{G\xi\\ \xi} } \ge 0 \qquad\text{for all }\xi\in\mathcal{X}. \end{equation} \item\label{thm_it_ii} There exists $\gamma>0$ such that for all $\Phi\in\mathcal{C}$, if \begin{equation}\label{Phi} \ip{ \bmat{\xi\\\Phi \xi} }{ M \bmat{\xi\\\Phi \xi} } \ge 0 \qquad\text{for all }\xi\in\mathcal{X}, \end{equation} then for all $(u,y)\in R_{uy}$, the following bound holds \begin{equation}\label{norm} \norm{y} \le \gamma \norm{u}. \end{equation} \end{enumerate} \end{fthm} \begin{proof} See Appendix~\ref{sec:appendix1} for a detailed proof. \end{proof} \begin{rem} Equation~\eqref{norm} can be stated in terms of $(u,e)$ instead of $(u,y)$. Specifically, it is easy to show that \eqref{norm} holds for all $(u,y)\in R_{uy}$ if and only if there exists some $\bar\gamma > 0$ such that $\norm{e} \le \bar\gamma\norm{u}$ holds for all $(u,e)\in R_{ue}$. \end{rem} Theorem~\ref{thm:main} applies to a general semi-inner product space, which need not include a notion of time. Therefore, the notions of causality, boundedness, stability, and well-posedness do not come into play. \section{Specializing the main result}\label{sec:Specialization} In this section, we specialize Theorem~\ref{thm:main} to recover a variety of classical results. We restrict our attention to discrete-time results in the interest of space, though continuous-time extensions are straightforward.
Recall the extended space $\ltwoe$, which is the real vector space of semi-infinite sequences $\Z_+\to \R^m$. Also recall the square-summable subset $\ltwo \subset \ltwoe$. Specifically, \begin{align*} \ltwoe &\defeq \set{(x[0],x[1],\dots)}{\vphantom{\bl(} x[k] \in \R^m\text{ for }k=0,1,\dots}, \\ \ltwo &\defeq \set{ x \in \ltwoe }{ \norm{x} \defeq \bbbl(\sum_{k=0}^\infty \normm{x[k]}^2\bbbr)^{1/2} < \infty }. \end{align*} Here, the indices $[k]$ play the role of \textit{time}. We now recall some standard definitions. For any $x\in\ltwoe$, we define the truncated signal $x_T \in \ltwo$ as follows. \[ x_T[k]\defeq \left\{ \begin{array}{lr} x[k] & 0\le k \le T\\ 0 & k \ge T+1 \end{array} \right. \] An operator $G$ is said to be \textit{\textbf{causal}} if for any $T\ge 0$ and $f\in\ltwoe$, we have $(Gf)_T=(Gf_T)_T$. We will now apply Theorem~\ref{thm:main} to the $\ltwoe$ space equipped with a particular semi-inner product, defined below. \begin{rem}\label{rem:l2vsl2e} It is not fruitful to specialize Theorem~\ref{thm:main} to $\mathcal{X}=\ltwo$ because \eqref{G} and \eqref{Phi} would imply $y_1, y_2 \in \ltwo$; we would be assuming the very thing we are trying to prove. We will instead specialize Theorem~\ref{thm:main} to $\mathcal{X} = \ltwoe$. \end{rem} \paragraph{Causality and well-posedness.} A possible concern in assessing stability of an interconnected system in the form of Figure \ref{fig:StableNegativefeedback} is the existence and uniqueness of solutions $e$ and $y$ for all choices of $u$. One solution is to use relations \cite{Zames_inputoutput1966_part1,vidyasagar2002nonlinear,Schaft2017L2,khong2018converse,safonov1980stability,teel1996graphs}, which avoids the issue entirely since all maps are invertible when viewed as relations. This amounts to using $\mathcal{C}=\mathcal{R}(\mathcal{X})$.
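The truncation operator and the causality condition $(Gf)_T=(Gf_T)_T$ can be illustrated on finite prefixes of signals, with $G$ a causal convolution (a toy sketch; the names are ours):

```python
# Signals are represented by finite lists of samples x[0], x[1], ...

def truncate(x, T):
    """x_T: keep samples 0..T, zero out the rest."""
    return [v if k <= T else 0.0 for k, v in enumerate(x)]

def fir(h):
    """Causal convolution y[k] = sum_j h[j] x[k-j]: a causal linear operator."""
    def G(x):
        return [sum(h[j] * x[k - j] for j in range(len(h)) if k - j >= 0)
                for k in range(len(x))]
    return G
```

For any impulse response $h$, one can verify $(Gf)_T = (Gf_T)_T$ for every $T$, i.e.\ the operator is causal: the output up to time $T$ does not depend on the input after time $T$.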
Alternatively, one can assume that both $G$ and $\Phi$ are causal operators \cite{zhou1996robust,megretski1997system,Zames_inputoutput1966_part2,vidyasagar2002nonlinear,desoer2009feedback} rather than relations, which implies that if the closed-loop map exists, it must be causal as well~\cite[Prop.~1.2.14]{Schaft2017L2}. To work with causal operators, it is generally required to assume a notion of \textit{well-posedness}. In the $\ltwoe$ case, well-posedness requires the map $u \mapsto (e,y)$ to exist and be unique. Viewed through the lens of Theorem~\ref{thm:main}, if we let $\mathcal{C}$ be the set of causal operators, then our assumption of \textit{invariance} precisely corresponds to assuming well-posedness. Meanwhile, our assumption of \textit{completeness} is a technical condition that is automatically satisfied in $\ltwoe$. \begin{defn}[cumulative semi-inner product] Define $\ip{\cdot}{\cdot}_T$ to be the sum of the component-wise inner products up to time $T$. That is, $\ip{x}{y}_T\defeq\ip{x_T}{y_T}$. Also define the associated semi-norm $\norm{x}_T^2 \defeq \ip{x}{x}_T$. \end{defn} We now state a specialization of Theorem~\ref{thm:main} to the extended space $\ltwoe$, which we prove in Appendix~\ref{sec:appendix2}. \begin{cor}[$\ltwo$ stability]\label{cor:cumulative} Let $M=M^\tp\! \in \R^{2\times 2}$ with $M\npreceq 0$ be given\footnote{The condition $M\npreceq 0$ is only required to prove \ref{Cit_ii}$\implies$\ref{Cit_i}. The case $M\preceq 0$ also corresponds to~\eqref{phi_cum} being degenerate.} and suppose $G:\ltwoe\to\ltwoe$ is a causal linear operator. The following statements are equivalent: \begin{enumerate}[label=(\roman*),itemindent=0pt,labelindent=0pt] \item\label{Cit_i} There exists $N=N^\tp \in \R^{2\times 2}$ satisfying $M + N \prec 0$ such that for all $\xi\in\ltwoe$ and $T \ge 0$, $G$ satisfies \begin{equation}\label{G_cum} \ip{ \bmat{G\xi\\ \xi} }{ N \bmat{G\xi\\\xi} }_T \ge 0. 
\end{equation} \item\label{Cit_ii} There exists $\gamma>0$ such that for all causal $\Phi:\ltwoe\to\ltwoe$ where the interconnection of $G$ and $\Phi$ is well-posed, if the following statement holds for all $T\ge 0$: \begin{equation}\label{phi_cum} \ip{ \bmat{\xi \\\Phi \xi} }{ M \bmat{\xi\\\Phi \xi} }_T \ge 0 \qquad\text{for all }\xi\in\ltwoe, \end{equation} then for all $(u,y) \in R_{uy}$ with $u\in\ell_2$, \begin{equation}\label{norm_l2} \norm{y} \le \gamma \norm{u}. \end{equation} \end{enumerate} \end{cor} \subsection{Recovering necessary and sufficient results} Corollary~\ref{cor:cumulative} may now be applied to a variety of different scenarios by appropriately choosing $M,N\in\R^{2\times 2}$. \begin{rem}[sign convention]\label{rem:Ntilde} Although we used the positive feedback sign convention in Figure~\ref{fig:StableNegativefeedback}, using the negative feedback convention instead simply amounts to replacing $N$ by $\tilde N$ in Theorem~\ref{thm:main} and Corollary~\ref{cor:cumulative}, where \[ N \defeq \bmat{N_{11} & N_{12}\\ N_{21} & N_{22}} \quad\text{and}\quad \tilde N \defeq \bmat{N_{11} & -N_{12}\\ -N_{21} & N_{22}}. \]\vspace{0pt} \end{rem} \noindent Consider the classical passivity result by Vidyasagar (a sufficient-only result), which may be found in~\cite[Thm. 6.7.3.43]{vidyasagar2002nonlinear}. \begin{thm}[Vidyasagar]\label{thm:vidyasagar} Consider the system \[ \left\{ \begin{array}{lr} e_1 = u_1 - y_2, & y_1 = G e_1\\ e_2 = u_2 + y_1, & y_2 = \Phi e_2 \end{array} \right.
\] Suppose there exist constants $\epsilon_1$, $\epsilon_2$, $\delta_1$, $\delta_2$ such that for all $\xi\in\ltwoe$ and for all $T\ge 0$ \begin{subequations}\label{eq:passivityeq} \begin{align} \langle \xi, G\xi \rangle_T &\ge \epsilon_1 \| \xi\|_T^2 + \delta_1\| G\xi \|_T^2\\ \langle \xi, \Phi \xi \rangle_T &\ge \epsilon_2 \| \xi\|_T^2 + \delta_2\| \Phi \xi \|_T^2 \end{align} \end{subequations} Then the system is $\ltwo$-stable if $\delta_1+ \epsilon_2>0$ and $\delta_2+\epsilon_1>0$. \end{thm} To obtain a corresponding necessary and sufficient result using Corollary~\ref{cor:cumulative}, compare \eqref{G_cum}--\eqref{phi_cum} to~\eqref{eq:passivityeq}, which yields the following values of $\tilde{N}$, $N$, and $M$ (refer to Remark \ref{rem:Ntilde}). \[ \tilde{N} = \addtolength{\arraycolsep}{-2pt}\bmat{-\delta_1 & \frac{1}{2}\\\frac{1}{2} & -\epsilon_1}\!,\, N =\bmat{-\delta_1 & -\frac{1}{2}\\ -\frac{1}{2} & -\epsilon_1}\!,\, M = \bmat{-\epsilon_2& \frac{1}{2}\\ \frac{1}{2} & -\delta_2 }. \] Applying Corollary \ref{cor:cumulative}, we require $M+N\prec 0$; thus $\delta_1+ \epsilon_2>0$ and $\delta_2+\epsilon_1>0$, which recovers Theorem~\ref{thm:vidyasagar}. Similar specializations of Corollary~\ref{cor:cumulative} apply to the small-gain theorem \cite[Thm.~5.6]{khalil2002nonlinear}, extended conic sector theorem~\cite{bridgeman2016extended}, circle criterion~\cite{jonsson2001lecture}, and other versions of passivity such as Vidyasagar~\cite[Thm.~6.6.58]{vidyasagar2002nonlinear} and Khong~\& Van~der~Schaft~\cite{khong2018converse}. See Table~\ref{Tab:comparison} for a summary of these results. The conditions of Corollary \ref{cor:cumulative} such as~\eqref{G_cum} can be checked via semidefinite programming~\cite{willems1971least}. \begin{rem} Many results in the literature assume one of the systems is memoryless. As we will discuss in Section~\ref{sec:Existing}, this makes Corollary~\ref{cor:cumulative} sufficient-only. 
Nevertheless, $M$ and $N$ are the same in both cases. \end{rem} \subsection{Recovering sufficient-only results}\label{sec:Existing} Here we discuss sufficient-only results from the literature that can also be obtained from Corollary~\ref{cor:cumulative} via a suitable relaxation. We discuss two such relaxations. \paragraph{Memoryless systems.} If we restrict $\Phi:\ltwoe\to\ltwoe$ to be time-invariant and memoryless, it is equivalent to an operator $\phi:\R^m\to\R^m$ that operates pointwise in time. Consequently, if $\Phi$ satisfies a sector bound of the form \begin{equation}\label{phi_pointwise} \ip{\bmat{\xi\\\phi(\xi)}}{M\bmat{\xi\\\phi(\xi)}} \ge 0\quad\text{ for all }\xi\in \R^m, \end{equation} then $\Phi$ also satisfies the cumulative relationship~\eqref{phi_cum} for all $T$. Therefore, if we define condition \textit{(iii)} to be the same as condition \textit{\ref{Cit_ii}} from Corollary~\ref{cor:cumulative}, except~\eqref{phi_cum} is replaced by~\eqref{phi_pointwise}, then we have \textit{(i)}$\iff$\textit{(ii)}$\implies$\textit{(iii)}. So in general, if \textit{(i)} fails to hold, there must exist some $\Phi$ satisfying~\eqref{phi_cum} such that~\eqref{norm_l2} fails, but such a $\Phi$ need not be time-invariant or memoryless. Examples of this case in the classical literature include \cite{narendra1973frequency,brockett1966status}. \paragraph{Nested sector bounds.} Another possible relaxation of Corollary~\ref{cor:cumulative} is to consider \textit{nested sectors} for one of the systems. For example, define \textit{(i')} to be the same as \textit{(i)} except $N$ is replaced by some $\hat N \preceq N$. Similarly, define \textit{(ii')} to be the same as \textit{(ii)} except $M$ is replaced by some $\hat M \preceq M$. Then, we have the implications: \textit{(i')}$\implies$\textit{(i)}$\iff$\textit{(ii)}$\implies$\textit{(ii')}. The implication \textit{(i')}$\implies$\textit{(ii')} cannot be reversed in general, and is therefore a sufficient-only condition. 
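For the passivity and small-gain choices of $M$ and $N$ given above and in Table~\ref{Tab:comparison}, the condition $M+N\prec 0$ collapses to the familiar scalar inequalities, which can be checked numerically (a toy sketch; the function names are ours):

```python
# Toy check that M + N being negative definite reduces to the scalar
# conditions of the classical theorems.  Matrices are 2x2 nested lists.

def is_neg_def(A):
    """Negative definiteness of a symmetric 2x2 matrix via leading minors."""
    (a, b), (_, d) = A
    return a < 0 and a * d - b * b > 0

def passivity_MN(eps1, eps2, delta1, delta2):
    """M + N for the passivity case."""
    M = [[-eps2, 0.5], [0.5, -delta2]]
    N = [[-delta1, -0.5], [-0.5, -eps1]]
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def smallgain_MN(gamma1, gamma2):
    """M + N for the small-gain case."""
    M = [[gamma2, 0.0], [0.0, -1.0 / gamma2]]
    N = [[-1.0 / gamma1, 0.0], [0.0, gamma1]]
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]
```

In both cases $M+N$ is diagonal, so negative definiteness is equivalent to $\delta_1+\epsilon_2>0$ and $\delta_2+\epsilon_1>0$ for passivity, and to $\gamma_1\gamma_2<1$ for small gain.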
\begin{table*}[t] \caption{ Existing sufficient conditions for stability that can be transformed into necessary-and-sufficient conditions via Corollary~\ref{cor:cumulative}. We use a positive feedback convention; for negative feedback, replace $N$ by $\tilde N$ as in Remark~\ref{rem:Ntilde}.} \label{Tab:comparison} \centering \small \begin{tabular}{lccc} \toprule \leftbox{\textbf{Name of Theorem}} & \midbox{$\boldsymbol{M}$} & \midbox{$\boldsymbol{N}$} & \midboxx{$\boldsymbol{M+N\prec 0}$} \\ \midrule \leftbox{\textbf{Conic sector theorem}\\ \cite[Thm.~2a,~all three cases]{Zames_inputoutput1966_part1} or\\ \cite[Thm.~3.1,~all parts of Case~1]{bridgeman2016extended}}& \matbox{\frac{-(a+\Delta)(b-\Delta)}{b-a-2\Delta} & \frac{-a-b}{2(b-a-2\Delta)}\\\frac{-a-b}{2(b-a-2\Delta)} & \frac{-1}{b-a-2\Delta} }& \matbox{\frac{ab}{b-a+2ab\delta} & \frac{a+b}{2(b-a+2ab\delta)}\\ \frac{a+b}{2(b-a+2ab\delta)} & \frac{(1+a\delta)(1-b\delta)}{b-a+2ab\delta} } & \midbox{$a<b$, and either $\delta=0,\Delta>0$ or $\delta>0,\Delta=0$.}\\[4mm] \leftbox{\textbf{Extended conic sector thm.}\\ \cite[Thm. 
3.1, all parts of Case 2]{bridgeman2016extended}} & \matbox{\frac{(a-\Delta)(b+\Delta)}{b-a+2\Delta} & \frac{a+b}{2(b-a+2\Delta)}\\\frac{a+b}{2(b-a+2\Delta)} & \frac{1}{b-a+2\Delta} }& \matbox{\frac{-ab}{b-a-2ab\delta} & \frac{-a-b}{2(b-a-2ab\delta)}\\ \frac{-a-b}{2(b-a-2ab\delta)} & \frac{-(1-a\delta)(1+b\delta)}{b-a-2ab\delta} } & \midbox{Same as above}\\[4mm] \leftbox{\textbf{Extended passivity}\\% \cite[Thm.~6.6.58]{vidyasagar2002nonlinear}} & \matbox{-\epsilon_2 & \frac{1}{2} \\ \frac{1}{2} & -\delta_2} & \matbox{-\delta_1 & -\frac{1}{2}\\ -\frac{1}{2} &-\epsilon_1} & \midbox{$\delta_1+ \epsilon_2 >0$ and $\delta_2 + \epsilon_1 >0$} \\[4mm] \leftbox{\textbf{Small gain theorem}\\ \cite[Thm.~5.6]{khalil2002nonlinear}} & \matbox{\gamma_2 & 0\\0 &-1/\gamma_2 } & \matbox{-1/\gamma_1 & 0 \\ 0 & \gamma_1 } & \midbox{$\gamma_1 \gamma_2 < 1$} \\[2mm] \bottomrule \end{tabular} \end{table*} \section{Weighted stability result}\label{sec:Exponential} In this section, we present a specialization of Theorem~\ref{thm:main} that leads to a new necessary and sufficient condition for \textit{weighted stability}, which in turn is sufficient for exponential stability. For a fixed $\rho\in(0,1]$, define the set $\ltwo^\rho \subset \ltwo$ of sequences $\{x[k]\}$ such that $\sum_{k=0}^\infty \rho^{-2k}\normm{x[k]}^2<\infty$. This can be thought of as enforcing that $x$ converge to zero exponentially fast. Define the corresponding semi-inner products as $ \ip{x}{y}_{\rho,T} \defeq \sum^T_{k=0} \rho^{-2k}\ip{x[k]}{y[k]} $. Analogously to how we derived Corollary~\ref{cor:cumulative}, we have: \begin{cor}[Weighted stability]\label{cor:exponential} Let $M =M^\tp \in \R^{2\times 2}$ with $M\npreceq 0$ and $\rho \in (0,1]$ be given. Suppose $G:\ltwoe\to\ltwoe$ is causal and linear. The following are equivalent. 
\begin{enumerate}[label=(\roman*)] \item There exists $N=N^\tp \in \R^{2\times 2}$ satisfying $M + N \prec 0$ such that for all $\xi\in\ltwoe^\rho$ and $T \ge 0$, $G$ satisfies \begin{equation}\label{G_rho} \ip{ \bmat{G\xi\\ \xi} }{ N \bmat{G\xi\\\xi} }_{\rho,T} \ge 0. \end{equation} \item There exists $\gamma >0$ such that for all causal $\Phi:\ltwoe^\rho\to\ltwoe^\rho$ where the interconnection of $G$ and $\Phi$ is well-posed, if the following condition holds for all $T\ge 0$ \begin{equation}\label{phi_rho} \ip{ \bmat{\xi \\\Phi \xi} }{ M \bmat{\xi\\\Phi \xi} }_{\rho,T} \ge 0 \qquad\text{for all }\xi \in \ltwoe^\rho, \end{equation} then for all $(u,y) \in R_{uy}$ with $u\in\ltwo^\rho$, we have \begin{equation}\label{norm_expt2} \norm{y}_{\rho} \le \gamma \norm{u}_\rho. \end{equation} \end{enumerate} \end{cor} The weighted stability guarantee~\eqref{norm_expt2} in Corollary~\ref{cor:exponential} states that when inputs to the system tend to zero exponentially quickly (in the sense that $\lim_{k\to\infty} \rho^{-k} u_k = 0$ for some $\rho \in (0,1]$), then so do the outputs $y$. Under additional assumptions about $G$, this condition implies exponential stability, as detailed in Proposition~\ref{prop:ross}. \begin{prop}[\!\!{\cite[Prop. 5]{boczar2017exponential}}]\label{prop:ross} Suppose $G$ is a discrete-time LTI system and has a minimal realization with state $x[k]$. If the interconnection in Figure~\ref{fig:StableNegativefeedback} has weighted stability with weight $\rho \in (0,1]$, then there exists some $c>0$ such that for any initial $x[0]$ and with $u=0$, we have \[ \normm{ x[k] } \le c \rho^k \normm{ x[0] } \qquad \text{for }k=0,1,\dots \] \end{prop} The converse of Proposition~\ref{prop:ross}, that exponential stability implies weighted stability, does not hold in general. For example, if $G=0$ then we have exponential stability for any $\Phi$. 
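To make the weighted norm concrete, here is a small numeric sketch (weight $\rho=0.8$ and decay rate $0.5$ chosen arbitrarily): a sequence decaying strictly faster than $\rho^k$ has finite $\|\cdot\|_\rho$, and the norm is a geometric series.

```python
import numpy as np

rho, ratio = 0.8, 0.5            # weight and decay rate, chosen arbitrarily
k = np.arange(200)
x = ratio**k                     # x[k] = 0.5^k decays strictly faster than rho^k
norm_sq = np.sum(rho**(-2*k) * x**2)

# geometric series: sum_k (ratio/rho)^(2k) = 1 / (1 - (ratio/rho)^2)
assert np.isclose(norm_sq, 1/(1 - (ratio/rho)**2))
```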
Proving such a converse result typically requires stronger assumptions on $\Phi$ such as Lipschitz continuity~\cite[\S6.46]{vidyasagar2002nonlinear}. \section{Conclusion} In this paper, we introduced a robust stability theorem (Theorem~\ref{thm:main}) framed in a general semi-inner product space. Our result unifies many existing results, including passivity, small-gain, and circle theorems. This includes both necessary-and-sufficient as well as sufficient-only versions, and relation-based as well as operator-based notions of systems. Our theorem also leads to a new result on weighted stability (Corollary~\ref{cor:exponential}). \section{Acknowledgments} The authors would like to thank R.~Boczar, \mbox{L.~Bridgeman}, A.~Packard, A.~Rantzer, P.~Seiler, A.~van~der~Schaft, B.~Van~Scoy, S.~Z.~Khong and M.~Vidyasagar for helpful discussions and comments.
\section{Introduction} In this article we apply the Deift-Zhou nonlinear steepest descent method to the 5th order MKdV equation: \begin{equation}\label{eq:5th mkdv} q_t=30q^4q_x-10q^2q_{xxx}-40qq_{xx}q_x-10q_x^3+q_{xxxxx} \end{equation} which belongs to the AKNS hierarchy~\cite{AKNS1974}. As is well known in integrable systems theory, by means of the inverse scattering transform the solution to the equations of the AKNS hierarchy exists globally in time for any Schwartz initial data. However, there is in general no way to solve them explicitly. Over the past decades, asymptotic methods have played a crucial role in integrable systems theory. The first systematic method to study the long-time asymptotic behaviour is due to Deift and Zhou~\cite{deift_steepest_1993}: they consider an oscillatory Riemann-Hilbert problem (RHP) directly and deform it to a model RHP which can be solved in terms of solutions of Weber's parabolic cylinder equation. This can be considered a nonlinear generalization of the classical method of steepest descent. Later, in 1996, Varzugin~\cite{Varzugin1996} generalized the classical method of stationary phase to oscillatory RHPs and worked out the asymptotic expansions for the whole AKNS hierarchy with Schwartz initial data, with an error term of order $O(t^{-3/4}\log(t))$. Many works treat the KdV, NLS, mKdV and related equations, which are all of order three or less. Recently, studies of long-time asymptotics based on $3\times 3$ and even $4\times 4$ RHPs have appeared; see for example~\cite{MA2019,GENG2019151,Boutet2013}. In some sense, the long-time asymptotic behaviour of the 5th order MKdV with Schwartz initial data was implicitly included in Varzugin's work. The purpose of this article is to study the long-time asymptotics of the 5th order MKdV explicitly.
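As a quick symbolic sanity check of \eqref{eq:5th mkdv} (a sketch using a hypothetical Gaussian test profile): every term of the equation has scaling weight six, so if $q(x,t)$ is a solution then so is $\lambda q(\lambda x,\lambda^5 t)$; this scaling is what later produces the self-similar variable $z_0=(|x/(80t)|)^{1/4}$.

```python
import sympy as sp

x, t, lam = sp.symbols('x t lam', positive=True)

def residual(q):
    # N[q] = q_t - (30 q^4 q_x - 10 q^2 q_xxx - 40 q q_xx q_x - 10 q_x^3 + q_xxxxx)
    return (sp.diff(q, t) - 30*q**4*sp.diff(q, x) + 10*q**2*sp.diff(q, x, 3)
            + 40*q*sp.diff(q, x, 2)*sp.diff(q, x) + 10*sp.diff(q, x)**3
            - sp.diff(q, x, 5))

g = sp.exp(-(x - t)**2)                       # hypothetical smooth test profile
scaled = lam*g.subs({x: lam*x, t: lam**5*t})  # q -> lam q(lam x, lam^5 t)

# the residual of the scaled profile is lam^6 times the scaled residual,
# so the scaling maps solutions to solutions
delta = residual(scaled) - lam**6*residual(g).subs({x: lam*x, t: lam**5*t})
delta_simplified = sp.simplify(delta)
assert delta_simplified == 0
```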
In this work we use the Deift-Zhou method to study the long-time asymptotic behaviour of the 5th order MKdV, which gives a better error term ($O(t^{-1}\log(t))$) compared with the method of stationary phase. The difficulties come from the high order of the phase function, which in our case is a fifth order polynomial. We mainly follow Deift and Zhou's paper and make the necessary adjustments wherever the phase function is involved. The main theorem is as follows: \begin{theorem} Given $q(x,0)\in \mathcal{S}(\mathbb{R})$ and its associated reflection coefficient $r(z)$, in the linear-like oscillation~\cite{AS1977} region $-x=O(t)$, the long-time behaviour of the solution to the 5th order MKdV, i.e. the leading term of the solution $q(x,t)$ as $t\rightarrow \infty$, can be written as follows: \begin{equation} \label{main result} q(x,t)=-2\left(\frac{\nu}{640tz_0^3}\right)^{1/2}\cos{\left(-128tz_0^5+\nu\log{(2560tz_0^5)}+\phi(z_0)\right)}+O\left(\frac{\log(t)}{t}\right) \end{equation} where \begin{equation} \phi(z_0)=\frac{5\pi}{4}-\arg(\bar{r}(-z_0))-\arg{(\Gamma(-\nu i))}+\frac{1}{\pi}\int_{-z_0}^{z_0}\log\frac{1-|r(s)|^2}{1-|r(-z_0)|^2}\frac{ds}{s+z_0} \end{equation} and $z_0=(|\frac{x}{80t}|)^{1/4}$. \end{theorem} The outline of the article is as follows: in Section 2 we briefly formulate the inverse scattering transform for the 5th order MKdV and its connection to an oscillatory RHP. In Section 3 we introduce the solution method for RHPs due to Beals and Coifman~\cite{Beal1984}, which connects a singular integral equation with the RHP. In Section 4 a scalar RHP is introduced along with some estimates on its solution. This scalar RHP will be used to conjugate the matrix RHP in preparation for the contour deformation; some of the estimates will also be used when we reduce the original RHP to a model RHP. In Section 5, we give the fundamental decomposition lemma, which decomposes a Schwartz function into three parts: a non-analytic small term, an analytic term and a rational function.
Contour deformation and truncation will be based on this lemma. In section 6 and 7 we will perform contour deformation($\Sigma^\sharp$) and truncation ($\Sigma'$) of the RHP, and give the error terms. In section 8, we will reduce the phase function (5th order) to a second order phase. Then we will separate the contributions of the two crosses $\Sigma_{A'}$ and $\Sigma_{B'}$. In section 9, reduce the RHP on $\Sigma_{A'(B')}$ to a model RHP and solve it in terms of parabolic cylinder equation. \section{IST and RHP} \subsection{Inverse scattering problem formulaism} In this section, we will formula the scattering and inverse scattering for initial value $q(x,t=0)\in \mathcal{S}(\mathbb{R})$. First we consider the direct scattering problem and set $t=0 $: \begin{equation} \label{directscattering} \psi_x(x;z)=\left(iz\sigma_3+\begin{pmatrix} 0&q(x,t)\\ \overline{q(x,t)}&0\\ \end{pmatrix}\right)\psi(x;z) \equiv U\psi\end{equation} where $\sigma_3=\begin{pmatrix} 1&0\\ 0&-1\\ \end{pmatrix}$. Then following the standard scattering method\cite{Beal1984,AS1977}, set $\mu=\psi e^{-ixz\sigma_3}$, and rewrite equation\eqref{directscattering} as following: \begin{equation}\label{eq:mu equation} \partial_x\mu=iz[\sigma_3,\mu]+Q\mu,\quad Q=\begin{pmatrix} 0&q(x,t)\\ \overline{q(x,t)}&0\\ \end{pmatrix} \end{equation} In order to analysis the propeties of $\mu$, consider the following two integral equations($z\in \mathbb{R}$): \begin{equation} \mu_{\pm}(x;z)=I+\int_{\pm \infty}^x e^{i(x-y)\text{ad }\sigma_3}Q(y)\mu_{\pm}(y;z)dy \end{equation} By method of The Neumann series for the Volterra equations, we will see that these equations have unique bounded continuous solutions for $x,z\in \mathbb{R}$ provided that $q\in \mathcal{S}$. From ODE theory, any two solutions of equation\eqref{directscattering} are connected by a matrix independent of $x$, i.e. $\psi_+=\psi_-S(t;z)$, where $\psi_{\pm}=\mu_{\pm}e^{ixz\sigma_3}$. 
Since $\mu_{\pm}$ are normalized to the identity matrix as $x\rightarrow \pm \infty$, it is easy to check that the scattering matrix $S(z)$ has determinant 1, and by the symmetry of the potential matrix we have $S(z)=\begin{pmatrix} a&\bar{b}\\ b&\bar{a} \end{pmatrix}$, where $|a|^2-|b|^2=1$. By analysing the Wronskians of $\psi_{\pm}$, we have \begin{equation} \begin{split} a(z)&=1-\int_{\mathbb{R}}q(y)\mu^+_{21}(y;z)dy\\ b(z)&=-\int_{\mathbb{R}}e^{2iyz}q(y)\mu_{11}^-(y;z)dy \end{split} \end{equation} Then $a$ can be analytically continued to the upper half plane $\mathbb{C}_+$; moreover, $a$ is continuous in $\overline{\mathbb{C}_+}$, non-vanishing there, and $a(\infty)=1$. Define the reflection coefficient $r:=-\bar{b}/\bar{a}$. In the present paper we consider the solitonless case. Since $|a|^2=1+|b|^2$, we have $|a|\geq 1$ and $|r|^2=1-|a|^{-2}<1$; therefore $\|r\|_{L^\infty(\mathbb{R})}<1$. It is well known that the direct scattering can be considered as a bijective map $\mathcal{R}$ from the initial value $q(x,t=0)$ to the reflection coefficient $r(z)$. Moreover, from \eqref{eq:mu equation}, provided that $q$ is real-valued, $\bar{\mu}(-\bar{z})$ also satisfies equation \eqref{eq:mu equation}; then by uniqueness we obtain: \begin{equation} \label{eq:symmetry of scattering } \bar{S}(-\bar{z})=S(z) \end{equation} and \begin{equation} \label{eq:symmetry of reflection} \bar{r}(-\bar{z})=r(z) \end{equation} Also, from the analysis of the Neumann series of these two Volterra equations, the first column of $\mu_+$ and the second column of $\mu_-$, denoted by $\mu_{+1},\mu_{-2}$ respectively, can be extended analytically to $\mathbb{C}_+$.
Similarly, one sees that the first column of $\mu_-$ and the second column of $\mu_+$, denoted by $\mu_{-1},\mu_{+2}$ respectively, can be extended analytically to $\mathbb{C}_-$. Now define a new matrix \begin{equation} \label{rhp0matrix} m(x;z)=\begin{cases} (\frac{\mu_{+1}}{a(z)},\mu_{-2}),\quad z\in \mathbb{C}_+\\ (\mu_{-1},\frac{\mu_{+2}}{\overline{a(\bar{z})}}),\quad z\in \mathbb{C}_-\\ \end{cases} \end{equation} Also denote by $m_{\pm}$ the boundary values of $m$ from $\mathbb{C}_+$ and $\mathbb{C}_-$ respectively. Then, by uniqueness of solutions of equation \eqref{directscattering}, since $\psi_+=\psi_-S$ we have $\mu_{+}=\mu_{-}e^{ixz\text{ ad} \sigma_3}S$. By reordering the columns of $\mu_{\pm}$, one sees that there is a matrix $v$ such that $m_+=m_-v$; a direct calculation shows $v(z)=\begin{pmatrix} 1-|r|^2& r\\ -\overline{r}& 1\\ \end{pmatrix}$. One thing worth mentioning here is that the calculation naturally produces the factorization $v=\begin{pmatrix} 1 & r\\ 0&1\\ \end{pmatrix}\begin{pmatrix} 1&0\\ -\bar{r}& 1\\ \end{pmatrix}$, which will be used in later sections. We summarize the above direct scattering problem as the following Riemann-Hilbert problem: \begin{problem}{0} Given the jump condition $v(x,t=0;z)=e^{ixz\text{ad}\sigma_3}\begin{pmatrix} 1-|r|^2& r\\ -\overline{r}& 1\\ \end{pmatrix}=e^{ixz\text{ad}\sigma_3}v(z)$, $r\in \mathcal{S}$, on the real line $\mathbb{R}$, we seek a $2\times 2$ matrix-valued function $m(x;z)$ satisfying the following conditions: \begin{equation} \begin{cases} m(x;z)\text{ is analytic off the real line and continuous to the boundary}\\ m_+=m_-v(x,t=0;z)\text{ on }\mathbb{R}\\ m=I+\frac{m_1}{z}+o(z^{-1})\text{ as } z\rightarrow \infty \end{cases} \end{equation} \subsection{Time evolution and inverse scattering problem} In this section we briefly discuss the time evolution and the inverse scattering problem and formulate them as a time evolution Riemann-Hilbert problem.
Since the 5th order MKdV belongs to the AKNS hierarchy, the time-evolution part of the Lax pair corresponding to the scattering problem~\eqref{directscattering} can be calculated using a symbolic computation system. Here, following Ma's scheme~\cite{MA2013So3}, the stationary zero curvature equation $W_x=[U,W]$, where \[W=\sum_{i\geq 0}W_{0,i}\lambda^{-i},\quad W_{0,i}=\begin{pmatrix} a_i& b_i\\ c_i&-a_i\\ \end{pmatrix}\] leads to the following recursion relation: \begin{equation} \begin{cases} b_{i+1}=\frac{1}{2I}b_{i,x}-Iqa_i,\\ c_{i+1}=-\frac{1}{2I}c_{i,x}-I\bar{q}a_i\\ a_{i+1,x}=qc_{i+1}-\bar{q}b_{i+1} \end{cases} \end{equation} upon taking the initial values \begin{equation} a_0=16I,\quad b_0=c_0=0 \end{equation} and imposing the conditions of integration for the third recursion relation: \begin{equation} a_i|_{q=0}=b_i|_{q=0}=c_i|_{q=0}=0,\quad\forall i \geq 1 \end{equation} Now let \begin{equation} V^{[m]}=(\lambda^mW)_+ \end{equation} where $(\cdot)_+$ denotes the polynomial part in $\lambda$ of the Laurent expansion. Then the time-evolution problem reads \begin{equation} \Psi_t=V^{[m]}\Psi, \end{equation} and the zero curvature equation \begin{equation} U_t-V^{[m]}_x+[U,V^{[m]}]=0 \end{equation} leads to the equivalent non-linear integrable PDEs. For $m=2,3$, one obtains the NLS equation and the MKdV equation respectively.
In the current paper, setting $m=5$, we obtain the time-evolution part for the 5th order MKdV equation, which reads \begin{equation} \psi_t=(16Iz^5\sigma_3+V_0(q,\bar{q},z))\psi\equiv V\psi \end{equation} where \begin{equation} \begin{split} V_0&=16qz^4\sigma_1+z^3(-8Iq^2\sigma_3+8Iq_x\sigma_1\sigma_3)+z^2(8q^3-4q_{xx})\sigma_1\\ &+z\left((6Iq^4-4Iq_{xx}q+2Iq^2_{x})\sigma_3+(12Iq^2q_x-2Iq_{xxx})\sigma_1\sigma_3\right)\\ &+(6q^5-10q^2q_{xx}-10qq^2_x+q_{xxxx})\sigma_1 \end{split} \end{equation} provided that $q=\bar{q}.$ Then the time evolution of the reflection coefficient is given by \begin{equation} r(t)=r(t;z)=e^{-16itz^5}r(z) \end{equation} \end{problem} Now we formulate the time-evolution Riemann-Hilbert problem as follows: \begin{problem}{RHP1} Given $r(z)\in \mathcal{S}(\mathbb{R})$ and $v(x,t;z)=e^{(-16iz^5t+ixz)\text{ ad}\sigma_3}v(z)=e^{-it\theta(z;z_0)\text{ ad}\sigma_3}v(z)$, we seek a $2\times 2$ matrix-valued function satisfying \begin{equation} \begin{cases} m(x,t;z)\text{ is analytic off $\mathbb{R}$ and continuous to $\mathbb{R}$}\\ m_+(x,t;z)=m_-(x,t;z)v(x,t;z)\text{ on }\mathbb{R}\\ m(x,t;z)=I+\frac{m_1(x,t)}{z}+o(z^{-1})\text{ as } z\rightarrow \infty \end{cases} \label{RHP1} \end{equation} where the phase function is $\theta(z;z_0)=16z^5-80z^4_0z$, $z^4_0=-\frac{x}{80t}$, $z_0>0$. In this paper we only consider the region $x<0$, $t>0$ and $-x=O(t)$ as $t\rightarrow \infty$. \end{problem} Since $m$, by the definition \eqref{rhp0matrix}, also satisfies equation \eqref{eq:mu equation}, letting $z\rightarrow \infty$ on both sides we obtain that the solution to the Cauchy problem of the 5th order MKdV is: \begin{equation} \label{recoveringsolution} q(x,t)=-\lim_{z\rightarrow \infty}iz[\sigma_3,m]_{12}=-2i[m_1(x,t)]_{12} \end{equation} and the analysis of the long-time behaviour of solutions is reduced to the asymptotic analysis of RHP1.
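A short symbolic check of the phase function (a sketch): $\theta'(z)=80(z^4-z_0^4)$ vanishes at $z=\pm z_0$, and $\theta(\pm z_0)=\mp 64z_0^5$, which is the origin of the oscillation $-128tz_0^5=2t\,\theta(z_0)$ appearing in the main theorem.

```python
import sympy as sp

z, z0, t = sp.symbols('z z_0 t', positive=True)
theta = 16*z**5 - 80*z0**4*z

# stationary points: theta'(z) = 80 (z^4 - z0^4) vanishes at z = +- z0
assert sp.simplify(sp.diff(theta, z) - 80*(z**4 - z0**4)) == 0
assert theta.subs(z, z0) == -64*z0**5
assert theta.subs(z, -z0) == 64*z0**5
# value of the phase entering the cosine of the leading-order formula
assert sp.simplify(2*t*theta.subs(z, z0) + 128*t*z0**5) == 0
```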
\section{Solution method of the RHP by matrix factorization} In this section, we recall the so-called Beals-Coifman method, which solves the matrix RHP by factoring the jump matrix into two triangular matrices. First define the Cauchy operators $C_{\pm}$: given a function $f\in L^2(\mathbb{R})$, $$C_{\pm}f(z)=\lim_{\epsilon\downarrow 0}\int_\mathbb{R}\frac{f(s)}{s-(z\pm i\epsilon)}\frac{ds}{2\pi i}.$$ It is well known that these operators are bounded from $L^2$ to $L^2$. It is also worth noting that $C_+-C_-=1$. Now consider an RHP with jump $$v=v_-^{-1}v_+=(1-w_-)^{-1}(1+w_+)$$ on some contour in $\mathbb{R}$, and seek a function $m$ which is analytic in the upper half plane $\mathbb{C}_+$ and in the lower half plane $\mathbb{C}_-$, continuous to the boundary from $\mathbb{C}_+$ and $\mathbb{C}_-$ respectively, with boundary values denoted by $m_{\pm}$ satisfying $m_+=m_-v$ on the boundary. The method of Beals and Coifman says that if $\mu$ solves the following singular integral equation: \begin{equation} \mu=1+C_w\mu \end{equation} where $$C_w(f):=C_-(fw_+)+C_+(fw_-),$$ then the solution to the RHP is given by \begin{equation} \label{RHPsol} m(z)=1+\int_{\mathbb{R}}\frac{\mu(s)(w_-(s)+w_+(s))}{s-z}\ddbar s,\quad \ddbar s=\frac{ds}{2\pi i} \end{equation} The existence of a solution to the RHP is thus transformed to the solvability of the singular integral equation, i.e. the invertibility of the operator $I-C_w$; moreover, by the Fredholm theory, existence guarantees uniqueness. It is easy to show that $C_w$ is a bounded operator on $L^2$ provided that $w_{\pm}\in L^\infty$. One sufficient condition for $I-C_w$ to be invertible is \begin{equation*} \|w_+\|_{L^\infty}+\|w_-\|_{L^\infty}<1 \end{equation*} In the following sections, we factorize the jump matrix so that the $L^2$ operator norm of the corresponding $C_w$ is less than 1 for sufficiently large $t$.
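The sufficiency of this smallness condition comes from the Neumann series $(I-C_w)^{-1}=\sum_{k\geq 0}C_w^k$; the following finite-dimensional sketch (a random matrix standing in for $C_w$) illustrates the inversion and the bound $\|(I-C_w)^{-1}\|\leq (1-\|C_w\|)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 5))
C *= 0.5 / np.linalg.norm(C, 2)        # scale the stand-in for C_w to operator norm 1/2 < 1

inv = np.linalg.inv(np.eye(5) - C)
series, power = np.eye(5), np.eye(5)
for _ in range(200):                    # partial sums of sum_k C^k
    power = power @ C
    series += power

assert np.allclose(series, inv)
assert np.linalg.norm(inv, 2) <= 1/(1 - 0.5) + 1e-9
```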
Now suppose the RHP \eqref{RHP1} has a solution; combining \eqref{recoveringsolution} and \eqref{RHPsol}, the potential can be recovered by \begin{equation} q(x,t)=\left[\int_{\mathbb{R}}\mu(s)(w_-(s)+w_+(s))\frac{ds}{\pi}\right]_{12} \end{equation} \section{A scalar RHP} In this section we consider the following scalar RHP: given $r\in \mathcal{S}$, $|r|<1$, we seek an analytic function $\delta$ such that \begin{equation}\label{eq:scalar rhp} \begin{cases} \delta_+(z)&=\delta_-(z)[\chi_{D_-}(1-|r|^2)+\chi_{D_+}]\\ \delta(\infty)&=1 \end{cases} \end{equation} where $D_-=\{z:\theta'(z)<0\}$, $D_+=\{z:\theta'(z)>0\}$ and $\chi$ is the characteristic function. Then the solution to \eqref{eq:scalar rhp}, by the Plemelj formula, is \begin{equation}\label{eq:1} \delta(z)=e^{\int_{-z_0}^{z_0}\frac{\log(1-|r(s)|^2)}{s-z}\ddbar s} \end{equation} Also, since $\log(1-|r(s)|^2)$ is Lipschitz continuous, by the Plemelj-Privalov theorem $\delta(z)$ is also Lipschitz continuous on $\mathbb{R}$. More explicitly, set $\chi(z)=\int_{-z_0}^{z_0}\log\frac{1-|r(s)|^2}{1-|r(-z_0)|^2}\frac{\ddbar s}{s-z}$ and $\nu=-(2\pi)^{-1}\log(1-|r(-z_0)|^2)$; then we have the following formula for $z\in \mathbb{C}\backslash \mathbb{R}$: \begin{equation} \label{eq:2} \begin{split} \log(\delta(z))&=\int_{-z_0}^{z_0}\frac{\log(1-|r(s)|^2)}{s-z}\ddbar s\\ &=\int_{-z_0}^{z_0}\frac{\log(1-|r(s)|^2)-\log(1-|r(-z_0)|^2)}{s-z}+\frac{\log(1-|r(-z_0)|^2)}{s-z}\ddbar s\\ &=\chi(z)+\int_{-z_0}^{z_0}\frac{\log(1-|r(-z_0)|^2)}{s-z}\ddbar s\\ &=\chi(z)+i\nu\log\frac{z-z_0}{z+z_0} \end{split} \end{equation} with the cuts chosen so that: \begin{equation} |\arg(z\pm z_0)|<\pi.
\end{equation} Also, since $q(x,t)$ is a real function, $\bar{\mu}(-\bar{z})$ solves equation \eqref{eq:mu equation}, which further implies that the scattering matrix satisfies $S(z)=\bar{S}(-\bar{z})$; thus for the reflection coefficient we have: \begin{equation}\label{3} r(z)=\bar{r}(-\bar{z}) \end{equation} By uniqueness of the scalar RHP, \eqref{eq:2} and \eqref{3}, we obtain \[\delta(z)=\overline{\delta(-\bar{z})}=(\overline{\delta(\bar{z})})^{-1}\] and for real $z$, \begin{gather} |\delta_+(z)\delta_-(z)|=1,\label{scalarRHPidentity2.0}\\ \quad |\delta_{\pm}(z)|=1,\quad \text{if}\quad z\in D_+,\label{scalarRHPidentity2.1}\\ |\delta_+(z)|=|\delta_-^{-1}(z)|=(1-|r(z)|^2)^{1/2} \quad \text{for}\quad z\in D_-,\label{scalarRHPidentity3} \end{gather} Hence, by the maximum principle, $|\delta(z)|^{\pm 1}$ is bounded for all $z$. Now we conjugate the RHP \eqref{RHP1} to the following RHP: \begin{problem}{RHP1'} \begin{equation} \label{RHP1'} \begin{cases} m_+\delta_+^{-\sigma_3}=m_-\delta_-^{-\sigma_3}\delta_-^{\sigma_3}v(x,t;z)\delta_+^{-\sigma_3}\\ m\delta^{-\sigma_3}(\infty)=I \end{cases} \end{equation} \end{problem} \begin{remark} The normalization condition can be verified since $r\in \mathcal{S}(\mathbb{R})$ and $\delta\rightarrow 1$ as $z\rightarrow \infty$. \end{remark} \section{Decomposition of the Schwartz function} In this section we consider a fundamental decomposition of a Schwartz function in the spirit of the method of stationary phase, i.e. we decompose a Schwartz function on the intervals where the phase function is monotonic. This fundamental decomposition will then be applied to decompose the matrix RHP. \begin{lemma}Suppose $\rho \in \mathcal{S}(\mathbb{R})$, and let $\theta(z)=16z^5-80z_0^4z$ be the phase function, where $z_0$ is the positive stationary point of $\theta$.
Then there exists a decomposition $\rho=h_1+h_2+R$ such that, for $\epsilon>0$ and $\alpha\in (0,\pi/4]$, \begin{equation} \begin{split} |e^{-2it\theta(z)}h_1(z)|&\leq ct^{-k},\quad z(u)=z_0+uz_0e^{\pi i},u\in [0,1]\\ |e^{-2it\theta(z)}h_2(z)|&\leq ct^{-k},\quad z(u)=z_0+uz_0e^{i(\pi-\alpha)},u\in [0,1/\cos{\alpha}]\\ |e^{-2it\theta(z)}R(z)|&\leq Ce^{-4z_0^5\epsilon^2t},\quad z(u)=z_0+uz_0e^{i(\pi-\alpha)},u\in [\epsilon,1/\cos{\alpha}]\\ \end{split} \label{decopestimate} \end{equation} \end{lemma} \begin{proof} Since $\rho\in \mathcal{S}$, consider the Taylor polynomial $R(z)=\sum_{j=0}^nc_j(z-z_0)^j$ with $n>1$, and define $h=\rho-R$; then $h=O(|z-z_0|^{n+1})$. Set $a(z)=(z-z_0)^q$, $n>q\geq1$, $q\in\mathbb{N}$. Define a new function $f(\theta)=\{h/a\}(z(\theta))H(-|\theta|+64z_0^5)$, where $H$ is the Heaviside function. It is well defined since $\theta(z)$ is monotonic on $[-z_0,z_0]$, hence invertible there. Note that by the chain rule we have \begin{equation} \frac{df}{d\theta}=\frac{df}{dz}(\theta')^{-1} \end{equation} Each application of this process reduces the degree of $z-z_0$ by 2, since $d\theta=80(z^2+z_0^2)(z+z_0)(z-z_0)dz$. So near $z_0$, for any $0\leq j\leq \frac{n+1-q}{2} $, $\frac{d^jf}{d\theta^j}=O((z-z_0)^{n+1-q-2j})\in L^2(\mathbb{R})$; then by Plancherel's theorem we have $(1+s^2)^{j/2}|\hat{f}|\in L^2(\mathbb{R})$. Now consider $h(z)=a(z)\int_{\mathbb{R}}\hat{f}(s)e^{is\theta}ds=a(z)\int_{t}^{\infty}\hat{f}(s)e^{is\theta}ds+a(z)\int_{-\infty}^{t}\hat{f}(s)e^{is\theta}ds$, and set $h_1=a(z)\int_{t}^{\infty}\hat{f}(s)e^{is\theta}ds,h_2=a(z)\int_{-\infty}^{t}\hat{f}(s)e^{is\theta}ds$.
Then on the real line, \begin{equation} \begin{split} |e^{-2it\theta}h_1|&\leq c\int_t^\infty |\hat{f}|ds\\ &\leq c\|(1+s^2)^{-p}\|_{L^2([t,\infty))}\|(1+s^2)^{p}|\hat{f}|\|_{L^2([t,\infty))}\\ &\leq ct^{-p} \end{split} \end{equation} On the second segment of \eqref{decopestimate} we have \begin{equation} \begin{split} |e^{-2it\theta}h_2|&\leq c(z_0u)^qe^{-t\Re i\theta(z)}\int_{-\infty}^te^{(s-t)\Re i\theta(z)}|\hat{f}(s)|ds\\ &\leq cz_0^qu^qe^{-t\Re i\theta(z)}\|(1+s^2)^{-1}\|_{L^2(-\infty,t)}\|(1+s^2)|\hat{f}(s)|\|_{L^2(-\infty,t)}\\ &\leq cu^qe^{-t\Re i\theta(z)} \end{split} \end{equation} In fact, consider the identity \begin{equation} \theta(z)=-64z_0^5+160z_0^3(z-z_0)^2\left(1+\frac{z-z_0}{z_0}+\frac{(z-z_0)^2}{2z_0^2}+\frac{(z-z_0)^3}{10z_0^3}\right) \end{equation} Thus on the ray $z=z_0+uz_0e^{(\pi-\alpha)i}$, $u\leq 1/\cos{\alpha}$, noting that $\alpha$ is fixed and sufficiently small, we have \begin{equation} \begin{split} \Re{(i\theta)}&=16z_0^5u^2\left(10\sin{(2\alpha)}-10u\sin{(3\alpha)}+5u^2\sin{(4\alpha)}-u^3\sin{(5\alpha)}\right)\\ &\geq 16u^2z_0^5\left(10\sin{(2\alpha)}-10\sin{(3\alpha)}/\cos{(\alpha)}+5(\cos{(\alpha)})^{-2}\sin{(4\alpha)}-(\cos{(\alpha)})^{-3}\sin{(5\alpha)}\right)\\ &=16c(\alpha)z_0^5u^2 \end{split} \end{equation} Indeed, for small $\alpha$ the bracket $\left(10\sin{(2\alpha)}-10u\sin{(3\alpha)}+5u^2\sin{(4\alpha)}-u^3\sin{(5\alpha)}\right)$ is monotonically decreasing in $u$, and it is easy to check that its minimum is positive as long as $\alpha>0$. Now we have $\Re i\theta(z)\geq 16c(\alpha)z_0^5u^2$, and by taking the derivative of $u^qe^{-t\Re i\theta(z)}$ it is easy to see that $u^qe^{-t\Re i\theta(z)}$ is controlled by $ct^{-q/2}$. So we have \begin{equation} |e^{-2it\theta}h_2|\leq ct^{-q/2} \end{equation} Finally, we estimate $R$ on the third segment of \eqref{decopestimate}.
Since $R$ is a polynomial, it is controlled by the exponential decay; that is to say, \begin{equation} |e^{-2it\theta}R(z)|\leq ce^{-t\Re i\theta}\leq ce^{-16c(\alpha)z_0^5\epsilon^2t} \end{equation} Since $\rho \in \mathcal{S}$, the Taylor polynomial $R$ can be taken of any order, and hence so can $p$ and $q$. This completes the proof. \end{proof} \begin{remark} If we replace $a(z)$ by $a(z)/(z+i)^2$, then the first two bounds in \eqref{decopestimate} become $\frac{ct^{-k}}{1+|z|^2}$. \end{remark} For the part on $(z_0,\infty)$ the decomposition is slightly different: the polynomial $R$ is replaced by a rational function $R$ which decays at infinity. In the proof we consider $\rho_0$, the Taylor polynomial of $(z-i)^{10}\rho$, and set $R=\rho_0/(z-i)^{10}$ and $h=\rho-R$; by the same harmonic analysis technique we then obtain the following lemma: \begin{lemma} Suppose $\rho \in \mathcal{S}(\mathbb{R})$, and let $\theta(z)=16z^5-80z_0^4z$ be the phase function, where $z_0$ is the positive stationary point. Then there exists a decomposition $\rho=h_1+h_2+R$ such that, for $\epsilon>0$, \begin{equation} \begin{split} |e^{-2it\theta(z)}h_1(z)|&\leq ct^{-k}/(1+|z|^2),\quad z(u)=z_0+uz_0e^{\pi i},u\in (-\infty,0)\\ |e^{-2it\theta(z)}h_2(z)|&\leq ct^{-k}/(1+|z|^2),\quad z(u)=z_0+uz_0e^{(\pi-\alpha)i},u\in (-\infty,0)\\ |e^{-2it\theta(z)}R(z)|&\leq ce^{-16c(\alpha)z_0^5\epsilon^2t},\quad z(u)=z_0+uz_0e^{(\pi-\alpha)i},u\in (-\infty,-\epsilon)\\ \end{split} \label{decompest2} \end{equation} \end{lemma} For the phase function $-\theta(z)$ we have the corresponding counterparts. We summarize all the estimates in the following theorem: \begin{theorem} \label{fundamentaldecomp} Let $\rho$ be a real-valued function in $\mathcal{S}(\mathbb{R})$, and let $\theta(z)=16z^5-80z_0^4z$ be the phase function, where $z_0$ is the only positive stationary point.
Take $\epsilon\in (0,4z_0/5)$ and set \begin{equation} \begin{split} L&= \{z:z=z_0+uz_0e^{(\pi-\alpha)i},u\in (-\infty,1/\cos{\alpha}]\}\\ L_\epsilon&=\{z:z=z_0+uz_0e^{(\pi-\alpha)i},u\in (\epsilon,1/\cos{\alpha}]\}\\ \end{split} \end{equation} Then there exists a decomposition $\rho=h_1+h_2+R$ satisfying the following estimates: \begin{eqnarray} \begin{split} |h_1(z)e^{-2it\theta(z)}|&\leq \frac{ct^{-k}}{1+z^2},\quad z\in \mathbb{R},\forall k\in \mathbb{N}\\ |h_2(z)e^{-2it\theta(z)}|&\leq \frac{ct^{-k}}{1+|z|^2},\quad z\in L,\forall k\in \mathbb{N}\\ |R(z)e^{-2it\theta(z)}|&\leq ce^{-16c(\alpha)z_0^5\epsilon^2t},\quad z\in L_\epsilon \end{split} \end{eqnarray} Similarly, there exists a decomposition of $\bar{\rho}$ with respect to $e^{2it\theta}$ on the conjugates of $L$ and $L_\epsilon$, and for the stationary point $-z_0$ we have a similar decomposition along with the same estimates. \end{theorem} \begin{figure}[h] \centering \begin{tikzpicture} \draw (-5,0)--(5,0); \node [below] at (3,0) {$z_0$}; \node [below] at (-3,0) {$-z_0$}; \filldraw (3,0) circle (1pt); \filldraw (-3,0) circle (1pt); \draw [red] (0,1)--(2,1/3); \draw (2,1/3)--(5,-2/3); \draw [red] (0,1)--(-2,1/3); \draw (-2,1/3)--(-5,-2/3); \draw [red] (0,-1)--(2,-1/3); \draw (2,-1/3)--(5,2/3); \draw [red] (0,-1)--(-2,-1/3); \draw (-2,-1/3)--(-5,2/3); \draw (3.4,0) arc (0:22:0.4); \node [right] at (3.3,0.1) {$\tiny\alpha$}; \node [above] at (4.2,1/2) {$\bar{L}$}; \node [below] at (4.2,-1/2) {$L$}; \node [above,red] at (0,1) {$L_\epsilon$}; \node [below,red] at (0,-1) {$\bar{L}_\epsilon$}; \end{tikzpicture} \caption{Contours of $L$ and $L_\epsilon$ and their conjugations for both $z_0$ and $-z_0$}\label{deformedContour} \end{figure} \section{Contour Deformation of the RHP1} In this section, we deform the original RHP to the new contour $\Sigma=L\cup \bar{L}\cup \mathbb{R}$. For convenience we change the orientation for $|z|>z_0$, which is done by taking the inverse of the original jump matrix.
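The deformation relies on the decay of $e^{-2it\theta}$ along $L$; the following numeric sketch (assumed values $\alpha=\pi/12$, $z_0=1$ for illustration) checks the sign $\Re(i\theta)>0$ on the ray $z=z_0+uz_0e^{i(\pi-\alpha)}$, $0<u\le 1/\cos\alpha$, together with the expansion of $\Re(i\theta)$ used in the proof of the decomposition lemma.

```python
import numpy as np

alpha, z0 = np.pi/12, 1.0                  # assumed values for illustration
u = np.linspace(1e-3, 1/np.cos(alpha), 400)
z = z0 + u*z0*np.exp(1j*(np.pi - alpha))   # the ray of L emanating from z0

theta = 16*z**5 - 80*z0**4*z
re_itheta = np.real(1j*theta)

# expansion used in the proof of the decomposition lemma
bracket = (10*np.sin(2*alpha) - 10*u*np.sin(3*alpha)
           + 5*u**2*np.sin(4*alpha) - u**3*np.sin(5*alpha))
assert np.allclose(re_itheta, 16*z0**5*u**2*bracket)
assert np.all(re_itheta > 0)               # so e^{-2it theta} decays along L for t > 0
```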
Let $\Omega_j$, $j=1,2,\ldots,8$, together with the orientation of the contours, be defined as in Fig.~\ref{deformRigen}. \begin{figure}[h] \centering \begin{tikzpicture} \draw (-5,0)--(5,0); \node [below] at (3,0) {$z_0$}; \node [below] at (-3,0) {$-z_0$}; \filldraw (3,0) circle (1pt); \filldraw (-3,0) circle (1pt); \draw [red,->] (0,1)--(2,1/3); \draw (2,1/3)--(5,-2/3); \draw [red] (0,1)--(-2,1/3); \draw [<-](-2,1/3)--(-5,-2/3); \draw [red,->] (0,-1)--(2,-1/3); \draw (2,-1/3)--(5,2/3); \draw [red] (0,-1)--(-2,-1/3); \draw [<-](-2,-1/3)--(-5,2/3); \draw [<-] (4,1/3)--(4.5,1/2); \draw [<-] (4,-1/3)--(4.5,-1/2); \draw [->] (-4,1/3)--(-4.5,1/2); \draw [->] (-4,-1/3)--(-4.5,-1/2); \draw (3.4,0) arc (0:22:0.4); \node [right] at (3.3,0.1) {$\tiny\alpha$}; \node [above] at (4.2,1/2) {$\bar{L}$}; \node [below] at (4.2,-1/2) {$L$}; \node [above,red] at (0,1) {$L_\epsilon$}; \node [below,red] at (0,-1) {$\bar{L}_\epsilon$}; \node [above] at (5.1,0.1) {$\Omega_1$}; \node [below] at (0,-0.3) {$\Omega_2$}; \node [above] at (-5.1,0.1) {$\Omega_3$}; \node [below] at (5.1,-0.1) {$\Omega_4$}; \node [above] at (0,0.3) {$\Omega_5$}; \node [below] at (-5.1,-0.1) {$\Omega_6$}; \node [above] at (0.2,1.3) {$\Omega_7$}; \node [below] at (0.2,-1.5) {$\Omega_8$}; \end{tikzpicture} \caption{Contours $L$, $L_\epsilon$ and their conjugates for both $z_0$ and $-z_0$, together with the regions $\Omega_j$}\label{deformRigen} \end{figure} Now consider the factorization of the jump matrix of RHP1'. Combining with the decomposition lemma, let $\rho=-r(z)H(|z|>z_0)+\frac{r}{1-|r|^2}H(|z|<z_0)$; then the jump matrix $\delta_-^{\sigma_3}v\delta_+^{-\sigma_3}$ in \eqref{RHP1'} can be rewritten as \[ \begin{pmatrix} 1&0\\-\bar{\rho}\delta_-^{-2} & 1 \end{pmatrix} \begin{pmatrix} 1& \rho\delta_+^{2}\\ 0&1 \end{pmatrix}=:b_-^{-1}b_+.\] Due to the decomposition lemma $\rho=h_1+h_2+R$, denote \[b_-=b_-^ob_-^a=\begin{pmatrix} 1&0\\\bar{h_1}\delta_-^{-2} & 1 \end{pmatrix}\begin{pmatrix} 1&0\\\overline{h_2+R}\delta_-^{-2} & 1 \end{pmatrix}\] \[b_+=b_+^ob_+^a=\begin{pmatrix} 1&
h_1\delta_+^{2}\\ 0&1 \end{pmatrix}\begin{pmatrix} 1& (h_2+R)\delta_+^{2}\\ 0&1 \end{pmatrix}\] and define \[w_{\pm}:=\pm(b_{\pm}-I)\] \[w=w_++w_-.\] It is easy to show that for fixed $x,t$ we have $w_\pm,w\in L^2\cap L^1\cap L^\infty.$ \begin{problem}{RHP2} Setting \begin{equation} m^\sharp(z)=\begin{cases} m\delta^{-\sigma_3},&\quad z\in \Omega_7\cup \Omega_8\\ m\delta^{-\sigma_3}(b_+^a)^{-1},&\quad z\in \Omega_4\cup\Omega_5\cup\Omega_6\\ m\delta^{-\sigma_3}(b_-^a)^{-1},&\quad z\in \Omega_1\cup\Omega_2\cup\Omega_3\\ \end{cases} \end{equation} \begin{equation} v^\sharp(z)=\begin{cases} (b_-^o)^{-1}b_+^o,&\quad z\in \mathbb{R}\\ b_+^a,&\quad z\in L\\ (b_-^a)^{-1},&\quad z\in \bar{L}\\ \end{cases} \end{equation} \begin{equation} \label{RHP2} \text{RHP2}= \begin{cases} m^\sharp_+=m^\sharp_-e^{-it\theta\hat{\sigma_3}}v^\sharp\\ m^\sharp(\infty)=I\\ \end{cases} \end{equation} \end{problem} One thing we need to check is the normalization condition $m^\sharp(\infty)=I$, which follows directly from the estimates in the decomposition lemma and the estimates for the scalar RHP. \section{Truncation of the contours} Following the analysis of the Deift--Zhou method, especially the restriction lemma (\cite{deift_steepest_1993}, Lemma 2.56), and since we have a similar decomposition for $\rho$ ($\bar{\rho}$), we can easily estimate the errors generated by truncating the contours. For the reader's convenience, we list the lemmas, which follow directly from the decomposition lemma. Before that, we introduce some new notation. Set $w':\Sigma\rightarrow M(2,\mathbb{C})$ supported on $\Sigma'=\Sigma\backslash (L_\epsilon\cup\bar{L}_\epsilon\cup \mathbb{R})$ with contributions from $R$ ($\bar{R}$) only, and denote the difference of $w^\sharp$ and $w'$ by $w^e$; see Fig.~\ref{fig:truncate errors}, where $w^\sharp=w^\sharp_++w^\sharp_-$ and $w^\sharp_\pm=\pm(b^\sharp_\pm-I)$. In what follows, it takes two steps to reduce the RHP data $(w^\sharp,\Sigma)$ to $(w',\Sigma')$.
The following estimates show the uniform $L^2(dz)$ boundedness of $w'$, $w^\sharp$ and $w^e$, while their $L^1$ and $L^\infty$ boundedness follows directly from the decomposition lemma. \begin{figure}[h] \centering \begin{tikzpicture} \draw (-5,0)--(5,0); \node [below] at (3,0) {$z_0$}; \node [below] at (-3,0) {$-z_0$}; \filldraw (3,0) circle (1pt); \filldraw (-3,0) circle (1pt); \draw [red,->] (0,1)--(2,1/3); \draw (2,1/3)--(5,-2/3); \draw [red] (0,1)--(-2,1/3); \draw [<-](-2,1/3)--(-5,-2/3); \draw [red,->] (0,-1)--(2,-1/3); \draw (2,-1/3)--(5,2/3); \draw [red] (0,-1)--(-2,-1/3); \draw [<-](-2,-1/3)--(-5,2/3); \draw [<-] (4,1/3)--(4.5,1/2); \draw [<-] (4,-1/3)--(4.5,-1/2); \draw [->] (-4,1/3)--(-4.5,1/2); \draw [->] (-4,-1/3)--(-4.5,-1/2); \draw (3.4,0) arc (0:22:0.4); \node [right]at (1,0.85) {$w^e=R+h_2$}; \node[right ] at (1,-0.95) {$w^e=\overline{R+h_2}$}; \node [right ]at (4.5,-0.4) {$w^e=R$}; \node [right ]at (4.5,0.34) {$w^e=\bar{R}$}; \node [right] at (3.3,0.1) {$\tiny\alpha$}; \node [above] at (0,0) {$w^e=h_1(\bar{h}_1)$}; \node [above] at (4.2,1/2) {$\bar{L}$}; \node [below] at (4.2,-1/2) {$L$}; \node [above,red] at (0,1) {$L_\epsilon$}; \node [below,red] at (0,-1) {$\bar{L}_\epsilon$}; \end{tikzpicture} \caption{The errors from the truncation of the contours.} \label{fig:truncate errors} \end{figure} \begin{lemma} \label{truncatelemma} $\|w^\sharp\|_{L^2(\Sigma,dz)}\leq Ct^{-1/4}$,\quad $\|w^e\|_{L^2(\Sigma,dz)}\leq Ct^{-k}$ for every $k\in\mathbb{N}$, \quad$\|w'\|_{L^2(\Sigma,dz)}\leq Ct^{-1/4}$ \end{lemma} \begin{proof} The second estimate comes directly from the decomposition lemma.
For the last estimate, consider $w'_+$ first. Since $|R|\leq C(1+|z|^2)^{-1}$ for $z\in L$, and moreover \begin{equation*} \Re{(i\theta)}\geq 16c(\alpha)z_0^5u^2, \end{equation*} the estimate for the scalar RHP gives \begin{equation} \begin{split} \|w'_+\|_{L^2}&\leq (\int_{L}|\delta^2(z)R(z)|^2e^{-2t\Re{(i\theta)}}|dz|)^{1/2}\\ &\leq C(\int_{(-\infty,1/\cos{\alpha}]}e^{-32c(\alpha)z_0^5u^2t}du)^{1/2}\\ &\leq Ct^{-1/4} \end{split} \end{equation} A similar estimate holds for $\bar{R}$, so by the triangle inequality \begin{equation*} \|w'\|_{L^2}\leq \|w'_+\|_{L^2}+\|w'_-\|_{L^2}\leq Ct^{-1/4} \end{equation*} Finally, the first estimate follows from the triangle inequality. \end{proof} The first reduction is to reduce $(w^\sharp,\Sigma)$ to $(w',\Sigma)$. It is essential to first show the boundedness of $(1-C_{w'})^{-1}$ and $(1-C_{w^\sharp})^{-1}$. We will prove the following two propositions: \begin{prop} $(1-C_{w'})^{-1}$ is uniformly bounded from $L^{\infty}(\Sigma)+L^2(\Sigma)$ to $L^2(\Sigma)$ for $t$ sufficiently large. \end{prop} \begin{proof} It is equivalent to show that there exists $t_0$ such that for $t>t_0$ the norm of $C_{w'}$ is less than $1$; here $C_{w'}$ maps $L^\infty$ to $L^2$ since $w'$ ($w'_{\pm}$) $\in L^2$. In fact, taking $f\in L^\infty(\Sigma,dz)$, \begin{equation} \begin{split} \|C_{w'}f\|_{L^2}&\leq \|C_+(fw'_-)\|_{L^2}+\|C_-(fw'_+)\|_{L^2}\\ &\leq (\|w'_+\|_{L^2}+\|w'_-\|_{L^2})\|f\|_{L^\infty}\\ &\leq Ct^{-1/4}\|f\|_{L^\infty} \end{split} \end{equation} and by choosing $t_0=C^4+1$ we obtain $\|C_{w'}\|_{L^\infty\rightarrow L^2}<1$. For $f\in L^2(\Sigma,dz)$, \begin{equation} \begin{split} \|C_{w'}f\|_{L^2}&\leq \|C_+(fw'_-)\|_{L^2}+\|C_-(fw'_+)\|_{L^2}\\ &\leq \|w'_-\|_{L^\infty}\|C_+(f)\|_{L^2}+\|w'_+\|_{L^\infty}\|C_-(f)\|_{L^2}\\ &\leq Ct^{-k}\|f\|_{L^2} \end{split} \end{equation} and choosing $t_0=C^k+1$ we obtain $\|C_{w'}\|_{L^2\rightarrow L^2}<1$.
Hence there exists a $t_0$ such that for $t>t_0$ the Neumann series converges and $(1-C_{w'})^{-1}$ is uniformly bounded from $L^2+L^\infty$ to $L^2$. \end{proof} Similarly, we have the following proposition for $(w^\sharp,\Sigma)$: \begin{prop} $(1-C_{w^\sharp})^{-1}$ is uniformly bounded from $L^{\infty}(\Sigma)$ to $L^2(\Sigma)$ for $t$ sufficiently large. \end{prop} Then, by Lemma \ref{truncatelemma} and the resolvent identities, a direct computation shows the following estimate: \begin{lemma}\label{first truncate} \begin{equation} q(x,t)=2\left[-\int_{\Sigma}((1-C_{w'})^{-1}I)w'(s)ds\right]_{12}+O(t^{-k}),\quad\forall k\in \mathbb{N} \end{equation} \end{lemma} Recalling the restriction lemma in Deift--Zhou's paper (\cite{deift_steepest_1993}, Lemma 2.56), we can restrict the Cauchy operator on $\Sigma$ to $\Sigma'$ without error, i.e. $(1_{\Sigma'}-C_{w'}^{\Sigma'})^{-1}I=(1_{\Sigma}-C_{w'}^{\Sigma})^{-1}I$. We finally obtain the following proposition: \begin{prop} \begin{equation} q(x,t)=2\left[-\int_{\Sigma'}((1-C_{w'})^{-1}I)w'(s)ds\right]_{12}+O(t^{-k}),\quad\forall k\in \mathbb{N} \end{equation} \end{prop} The corresponding RHP reads as follows. Set \begin{equation*} L'=L\backslash L_\epsilon \end{equation*} and \begin{equation} \Sigma'=L'\cup \overline{L'}. \end{equation} Define the sectionally analytic function $m'(z)$, $z\notin \Sigma'$, as \begin{equation} m'(z)=I+\int_{\Sigma'}\frac{((1-C_{w'})^{-1}I)w'(s)}{s-z}\frac{ds}{2\pi i} \end{equation} On the boundary we have a new RHP: \begin{problem}{RHP3} \begin{equation} \begin{cases} m'_+=m'_-e^{-it\theta\hat{\sigma_3}}v'(z),\quad z\in \Sigma'\\ m'(\infty)=I \end{cases} \end{equation} where \begin{eqnarray} w'&=&w'_++w'_-,\\ b'_\pm&=&I\pm w'_\pm\\ v'&=&(b'_-)^{-1}b'_+ \end{eqnarray} From the definition of $w'$ we have \begin{equation} \begin{cases} b'_+=\begin{pmatrix} 1& R\delta_+^2\\ 0&1\\ \end{pmatrix},b'_-=\begin{pmatrix} 1& 0\\ 0&1\\ \end{pmatrix},z\in L'\\ b'_+=\begin{pmatrix} 1& 0\\ 0&1\\ \end{pmatrix},b'_-=\begin{pmatrix} 1& 0\\ \bar{R}\delta_-^{-2}&1\\
\end{pmatrix},z\in \overline{L'} \end{cases} \end{equation} \end{problem} \section{Reducing the phase function and separating the contributions} In this section we show how to reduce the order of the phase function and then separate the contributions from the different stationary phase points. First we introduce some new notation. Split $\Sigma'$ into a disjoint union of two crosses, $\Sigma'_A\cup \Sigma'_B$, see Figure \ref{splitedCrosses}, and decompose $w'=w'\chi_{\Sigma'_A}+w'\chi_{\Sigma'_B}=:w^{A'}+w^{B'}$. \begin{figure}[h] \centering \begin{tikzpicture} \node [below] at (3,0) {$z_0$}; \node [below] at (-3,0) {$-z_0$}; \filldraw (3,0) circle (1pt); \filldraw (-3,0) circle (1pt); \draw (2,1/3)--(5,-2/3); \draw [<-](-2,1/3)--(-5,-2/3); \draw (2,-1/3)--(5,2/3); \draw [<-](-2,-1/3)--(-5,2/3); \draw [<-] (4,1/3)--(4.5,1/2); \draw [<-] (4,-1/3)--(4.5,-1/2); \draw [->] (-4,1/3)--(-4.5,1/2); \draw [->] (-4,-1/3)--(-4.5,-1/2); \draw [->] (2,1/3)--(2.5,1/6); \draw [->] (2,-1/3)--(2.5,-1/6); \node [below] at (4,-1/2){$\Sigma'_B$}; \node [below] at (-4,-1/2){$\Sigma'_A$}; \node at (0,1) {$\Sigma'$}; \end{tikzpicture} \caption{Splitting $\Sigma'$ into $\Sigma'_A$ and $\Sigma'_B$}\label{splitedCrosses} \end{figure} Define the Cauchy operators $A',B'$ on $\Sigma'$ by \begin{equation} A':=C_{w^{A'}},\quad B':=C_{w^{B'}} \end{equation} Extend the contours $\Sigma'_A$ and $\Sigma'_B$ to \begin{eqnarray} \hat{\Sigma}_{A'}&=&\{z=-z_0+z_0ue^{\pm i \alpha}:u\in\mathbb{R}\}\\ \hat{\Sigma}_{B'}&=&\{z=z_0+z_0ue^{\pm i \alpha}:u\in\mathbb{R}\}\\ \end{eqnarray} by extending $\hat{w}^{A'}$, $\hat{w}^{B'}$ by zero. The associated operators on $\hat{\Sigma}_{A'}$, $\hat{\Sigma}_{B'}$ are denoted by $\hat{A}',\hat{B}'$. Denote the shifted contours by $\Sigma_A,\Sigma_B$, which are $\{z=z_0ue^{\pm i\alpha},u\in\mathbb{R}\}$ oriented as $\hat{\Sigma}_{A'}$, $\hat{\Sigma}_{B'}$ respectively.
Introduce the shift and scaling operators, with $a=\sqrt{640tz_0^3}$: \begin{eqnarray} N_A(f(z))&:=&f(z/a-z_0)\\ N_B(f(z))&:=&f(z/a+z_0)\\ \end{eqnarray} In the following analysis we focus on the contour $\Sigma_B$ and state the analogous results for the contour $\Sigma_A$ without proof. In fact, we have \begin{equation} N_B(\delta e^{-it\theta})(z)=\delta_B^0(z)\delta_B^1(z) \end{equation} where \begin{equation} \delta_B^0(z)=e^{\chi(z_0)}e^{-it\theta(z_0)}(2z_0a)^{-i\nu} \end{equation} and \begin{equation} \delta_B^1(z)=z^{i\nu}(\frac{2z_0}{z/a+2z_0})^{i\nu}e^{\chi(z/a+z_0)-\chi(z_0)}e^{-iz^2/4[1+z/(az_0)+z^2/(2a^2z_0^2)+z^3/(10z_0^3a^3)]} \end{equation} Since the only difference between our problem and that of Deift--Zhou \cite{deift_steepest_1993} is the phase function, and the estimates for the Cauchy operator after shifting and scaling are based entirely on the phase, we conjugate the matrices $\hat{w}^{B}_\pm$ on the contours $\bar{L}_B=\{z=uz_0ae^{i\alpha},-\epsilon<u<\infty\}$ and $L_B=\{z=uz_0ae^{-i\alpha},-\epsilon<u<\infty\}$, with identity jumps on the rest of the contour, i.e.
$\Sigma_B\backslash (L_B\cup \bar{L}_B)$; as a result we have \begin{eqnarray} N^0_B(\hat{w}_-^{B'})&:=&(\delta_B^0)^{\hat{\sigma_3}}N_B(\hat{w}_-^{B'})=\begin{pmatrix} 0 & 0 \\ -\bar{R}(z/a+z_0)(\delta_B^1)^{-2} &0\\ \end{pmatrix}\\ N^0_B(\hat{w}_+^{B'})&:=&(\delta_B^0)^{\hat{\sigma_3}}N_B(\hat{w}_+^{B'})=\begin{pmatrix} 0 & R(z/a+z_0)(\delta_B^1)^{2} \\ 0&0\\ \end{pmatrix}\\ \end{eqnarray} As $t\rightarrow \infty$, \begin{equation} \bar{R}(z/a+z_0)(\delta_B^1)^{-2}-\bar{R}({z_0}{\pm})z^{-2\nu i}e^{iz^2/2}\rightarrow 0 \end{equation} where \begin{equation} R(z_0+)=\lim_{z\rightarrow z_0+}\rho(z)=-r(z_0),\qquad R(z_0-)=\lim_{z\rightarrow z_0-}\rho(z)=\frac{r(z_0)}{1-|r(z_0)|^2}. \end{equation} More specifically, on the contour $\Sigma_{B}$ we have the following estimate for the rate of convergence: \begin{lemma}[Analogue of Lemma 3.35 in \cite{deift_steepest_1993}]\label{lemma:phasereduction} Let $\gamma$ be a small positive number with $\gamma<1/2$ and let $t_0$ be some large number. Then for $z\in \bar{L}_B$ and $t>t_0$, \begin{equation} \begin{split} \left|\bar{R}(z/a+z_0)(\delta_B^1)^{-2}-\bar{R}({z_0}{\pm})z^{-2\nu i}e^{iz^2/2}\right|\\ \leq C(z_0)e^{-\gamma \Im{z^2/2}}\left(t^{-1/2}+t^{-1/2}\log{(t)}\right) \end{split} \end{equation} \end{lemma} \begin{lemma} \label{lemma:ppt} Let $f(s)=\log{\frac{1-|r(s)|^2}{1-|r(z_0)|^2}} $ and $\chi{(z)}=\frac{1}{2\pi i}\int_{-z_0}^{z_0}\frac{f(s)}{s-z}ds$, where $r$ is the reflection coefficient, which lies in Schwartz space. Then for $z\in L_0= \{z=ue^{i\alpha},|u|<1\}$ we have \begin{equation} \begin{split} |\chi{(z_0+z)}-\chi{(z_0)}|&\leq c|z||\log{|z|}|\\ \end{split} \end{equation} \end{lemma} \begin{proof} Since $r$ is a Schwartz function, it is straightforward to show that $f(s)$ is Lipschitz. Moreover $f(z_0)=0$, so $|f(s)|=|f(s)-f(z_0)|\leq C|s-z_0|$, where $C$ is independent of $z_0$ and $s$.
Now write \begin{equation} \begin{split} |\chi(z+z_0)-\chi(z_0)|&\leq \frac{|z|}{2\pi}\int_{-z_0}^{z_0}\frac{|f(s)|ds}{|(s-z-z_0)(s-z_0)|}\\ &\leq \frac{C|z|}{2\pi}\int_{-z_0}^{z_0}\frac{ds}{|s-z-z_0|}\\ &\leq \frac{C|z|}{2\pi}\int_{-2z_0}^{0}\frac{ds}{|s-z|} \end{split} \end{equation} Since $|s-z|\geq 1/2(-s \sin{(\alpha)}+|z|\sin(\alpha)),\forall z\in L_0,s\in [-2z_0,0]$, we have \begin{equation} \begin{split} \int_{-2z_0}^0\frac{ds}{|s-z|}&\leq 2\int_{-2z_0}^0\frac{ds}{-s \sin{(\alpha)}+|z|\sin(\alpha) }\\ &=\frac{2}{\sin \alpha}\int_{-2z_0}^0\frac{ds}{-s+|z|}\\ &=\frac{2}{\sin \alpha}\int_{|z|}^{|z|+2z_0}\frac{ds}{s}\\ &\leq \frac{2}{\sin \alpha}\log{(1+\frac{2z_0}{|z|})}\\ &\leq C\left|\log|z|\right| \end{split} \end{equation} Combining the above estimates, the lemma is proved. \end{proof} \begin{remark} In fact the above lemma is a direct consequence of the Plemelj--Privalov theorem. \end{remark} \begin{proof}[Proof of Lemma \ref{lemma:phasereduction}] Write \begin{equation}\label{phasereduceinequality} \begin{split} \bar{R}(z/a+z_0)&(\delta_B^1)^{-2}-\bar{R}({z_0}{\pm})z^{-2\nu i}e^{iz^2/2}\\ =(e^{i\gamma z^2/2})&\left(e^{i\gamma z^2/2}\left[\bar{R}(z/a+z_0)(\frac{2z_0}{z/a+2z_0})^{-2i\nu}z^{-2i\nu}\right.\right.\\ &\left.\left.e^{i(1-2\gamma)z^2/2\xi}e^{-2[\chi{(z/a+z_0)}-\chi{(z_0)}]}-\bar{R}({z_0}{\pm})z^{-2\nu i}e^{i(1-2\gamma)z^2/2}\right]\right) \end{split} \end{equation} where $\xi=1+(1-2\gamma)^{-1}(z/(az_0)+z^2/(2a^2z_0^2)+z^3/(10z_0^3a^3)).$ Each term in \eqref{phasereduceinequality} is uniformly bounded with respect to $x<0,t>0$. Indeed, $|e^{i\gamma z^2/2}|=e^{-\gamma z_0^2u^2a^2 \sin(\alpha)}$ is trivially bounded provided that $\alpha<\pi/2$, as is $|e^{i(1-2\gamma )z^2/2}|$. Applying the decomposition lemma, $|\bar{R}(z/a+z_0)|\leq c/(1+z_0^2)$.
Also \begin{equation} \begin{split} \sup_{-\epsilon<u<\infty}&|(\frac{2z_0}{z/a+2z_0})^{-2\nu i}|\\ &=\sup_{-\epsilon<u<\infty} e^{-2\nu \arg(1+u/2e^{i\alpha})}\leq C \end{split} \end{equation} as $\arg(1+u/2e^{i\alpha})$ is positive when $u>0$ and $0<\nu\leq-\frac{1}{2\pi}\log(1-\eta^2)< \infty$ provided that $|r(z)|\leq \eta<1$. The term $e^{i(1-2\gamma)z^2/2\xi}$ is bounded since \begin{equation} \begin{split} \Re&{i(1-2\gamma)z^2/2\xi}\\ &=\Re i(1-2\gamma)1/2u^2a^2z_0^2e^{i2\alpha}(1+(1-2\gamma)^{-1}(ue^{i\alpha}+1/2u^2e^{2i\alpha}+1/10u^3e^{3i\alpha}))\\ &=-1/2u^2a^2z_0^2(\sin(2\alpha)+u \sin(3\alpha)+1/2u^2 \sin(4\alpha)+1/10u^3 \sin(5\alpha)) \end{split} \end{equation} is negative for $\alpha<\pi /5$ as $u$ goes to infinity, so $|e^{i(1-2\gamma)z^2/2\xi}|$ is bounded. Finally, by Lemma \ref{lemma:ppt}, $e^{-2\{\chi(z/a+z_0)-\chi(z_0)\}}$ is bounded. Now we have \begin{equation} \begin{split} |e^{i\gamma z^2/2}(\bar{R}(z/a+z_0)-\bar{R}(z_0\pm))|&\leq e^{\Re(i\gamma z^2/2)}\|\bar{R}'\|_{\infty}|z/a|\\ &\leq c(tz_0^3)^{-1/2}\\ \end{split} \end{equation} and \begin{equation} \begin{split} &\left|e^{i\gamma z^2/2}\left((\frac{2z_0}{z/a+2z_0})^{-2i\nu}-1\right)\right|\\ &=\left|e^{i\gamma z^2/2}\int_{1}^{1+z/(2az_0)}(2i\nu)u^{2i\nu-1}du\right|\\ &\leq |e^{i\gamma z^2/2}||z/(2az_0)|\sup_{u=1+\frac{sz}{2az_0},0\leq s\leq 1}|u^{2i\nu-1}| \end{split} \end{equation} Since \begin{equation} \begin{split} |u^{2i\nu-1}|&=|e^{(2i\nu-1)(\log|u|+i \arg(u))}|\\ &=e^{-\log|u|}e^{-2\nu \arg(u)}\\ &=\frac{e^{-2\nu \arg(u)}}{|u|} \end{split} \end{equation} and it is easy to check that $|u|\geq \sin(\alpha)$ while $\arg(u)$ is bounded, $$ \sup_{u=1+\frac{sz}{2az_0},0\leq s\leq 1}|u^{2i\nu-1}|$$ is uniformly bounded with respect to $x,t$, and thus \begin{equation} \left|e^{i\gamma z^2/2}\left((\frac{2z_0}{z/a+2z_0})^{-2i\nu}-1\right)\right|\leq C(tz_0^3)^{-1/2}.
\end{equation} Next write \begin{equation} \begin{split} &\left|e^{i\gamma z^2/2}\left(e^{-2(\chi(z/a+z_0)-\chi(z_0))}-1\right)\right|\\ &\leq \sup_{0\leq s\leq 1}|e^{-2s(\chi(z/a+z_0)-\chi(z_0))}||2e^{i\gamma z^2/2}(\chi(z/a+z_0)-\chi(z_0))|\\ &\leq C|e^{i\gamma z^2/2}||z/a||\log|z/a||\\ &\leq C\frac{\log(tz_0^3)}{(tz_0^3)^{1/2}} \end{split} \end{equation} Finally, we have \begin{equation} \begin{split} &|e^{i\gamma z^2/2}z^{-2\nu i}(e^{i(1-2\gamma)(z^2/2)\xi}-e^{i(1-2\gamma)z^2/2})|\\ &\leq c|e^{i\gamma z^2/2}||z/a|\sup_{0\leq s\leq 1}|\frac{d}{ds}e^{i(1-2\gamma)z^2/2\xi(z;s)}|\\ &\leq C(tz_0^3)^{-1/2} \end{split} \end{equation} Combining the above estimates yields the expected lemma; the rapid decay of $C(z_0)$ comes from the decomposition lemma. \end{proof} \begin{remark} By a similar analysis, we have \begin{equation} \begin{split} \left|R(z/a+z_0)(\delta_B^1)^{2}-R({z_0}{\pm})z^{2\nu i}e^{-iz^2/2}\right|\\ \leq C(z_0)e^{-\gamma \Im{z^2/2}}\left(t^{-1/2}+t^{-1/2}\log{(t)}\right) \end{split} \end{equation} on $L_B$.
\end{remark} Moreover, on the contour $\Sigma_A$ there are similar estimates for $\bar{L}_A$ and $L_A$. Namely, \begin{equation} (N_A\delta e^{-it\theta}) =\delta_A^0\delta_A^1 \end{equation} where \begin{equation} \begin{split} \delta_A^0(z)&= e^{\chi(-z_0)}e^{-it\theta(-z_0)}(2az_0)^{i\nu}\\ \delta_A^1(z)&= e^{iz^2/4(1-z/(az_0)+z^2/(2a^2z_0^2)-z^3/(10a^3z_0^3))}\\ &e^{\chi(z/a-z_0)-\chi(-z_0)}(-z)^{-i\nu}\left(\frac{-2z_0}{z/a-2z_0}\right)^{-i\nu}\\ \end{split} \end{equation} The analogues of Lemma \ref{lemma:phasereduction} are \begin{equation}\label{eq:lemma for A 1} \begin{split} \left|\bar{R}(z/a-z_0)(\delta_A^1)^{-2}-\bar{R}((-z_0){\pm})(-z)^{2\nu i}e^{-iz^2/2}\right|\\ \leq C(z_0)e^{-\gamma \Im{z^2/2}}\left(t^{-1/2}+t^{-1/2}\log{(t)}\right) \end{split} \end{equation} for $z\in \bar{L}_A$, and \begin{equation}\label{eq:lemma for A 2} \begin{split} \left|R(z/a-z_0)(\delta_A^1)^{2}-R((-z_0){\pm})(-z)^{-2\nu i}e^{iz^2/2}\right|\\ \leq C(z_0)e^{-\gamma \Im{z^2/2}}\left(t^{-1/2}+t^{-1/2}\log{(t)}\right) \end{split} \end{equation} for $z\in L_A$. Following the same analysis as in DZ93 \cite{deift_steepest_1993}, we arrive at the following proposition: \begin{prop} \label{prop:split contributions} \begin{equation} \begin{split} q(x,t)&=\left[-2\int_{\Sigma_{A'}}((1-C_{w^{A'}})^{-1}I)w^{A'}(s)\frac{ds}{\pi}\right]_{12}\\ &+\left[-2\int_{\Sigma_{B'}}((1-C_{w^{B'}})^{-1}I)w^{B'}(s)\frac{ds}{\pi}\right]_{12}\\ &+O(t^{-k})+O(\frac{c(z_0)}{t}),\quad\forall k\in \mathbb{N} \end{split} \end{equation} as $t\rightarrow \infty$. \end{prop} \section{Model RHP} In this section we transform the augmented RHP into an RHP on the real line whose jump does not depend on $z$. By a Liouville argument, we can then solve the RHP explicitly in terms of solutions of the parabolic-cylinder equation. First we introduce some new notation, following the Deift--Zhou method.
Let $\hat{A}'=C_{\hat{w}^{A'}}:L^2(\hat{\Sigma}_{A'})\rightarrow L^2(\hat{\Sigma}_{A'})$ and let $\tilde{\Delta}_A^0:L^2(\hat{\Sigma}_{A'})\rightarrow L^2(\hat{\Sigma}_{A'})$ denote right multiplication by $(\delta_A^0)^{\sigma_3}$. After shifting and rescaling, denote the new operator by $A:=C_{w^A}:L^2(\Sigma_{A})\rightarrow L^2(\Sigma_{A})$, where $w^A=(\Delta_A^0)^{-1}(N_A\hat{w}^{A'})\Delta_A^0$; it is related to $\hat{A}'$ by \begin{equation} \hat{A}'=N_A^{-1}(\tilde{\Delta}_A^0)^{-1}A\tilde{\Delta}_A^0N_A \end{equation} On the contour $\Sigma_A$ (see Figure \ref{fig:sigma_A}) the RHP data for $A$ are $w^A=w^A_++w^A_-$, where $w^A_+=\begin{pmatrix} 0 & (N_AR)(\delta_A^1)^2\\ 0& 0\\ \end{pmatrix}$ and $w^A_-=\begin{pmatrix} 0 & 0\\ -(N_A\bar{R})(\delta_A^1)^{-2}&0\\ \end{pmatrix}$. Then, based on \eqref{eq:lemma for A 1} and \eqref{eq:lemma for A 2}, we obtain the RHP data for $A^0$. Set $v^{A^0}=(b_-^{A^0})^{-1}b_+^{A^0}=(I-w_-^{A^0})^{-1}(I+w_+^{A^0})$, where, in accordance with \eqref{eq:lemma for A 1} and \eqref{eq:lemma for A 2}, we define $w^{A^0}=w_+^{A^0}+w_-^{A^0}$ with \begin{eqnarray} w_+^{A^0}&=&\begin{pmatrix} 0 & R((-z_0)+)(-z)^{-2\nu i}e^{iz^2/2}\\ 0 & 0\\ \end{pmatrix}\chi_{\{z\in \Sigma_A^2\}}\nonumber\\ &&+\begin{pmatrix} 0 & R((-z_0)-)(-z)^{-2\nu i}e^{iz^2/2}\\ 0 & 0\\ \end{pmatrix}\chi_{\{z\in \Sigma_A^4\}}\\ w^{A^0}_-&=&\begin{pmatrix} 0 & 0\\ -\bar{R}((-z_0)+)(-z)^{2\nu i}e^{-iz^2/2}&0\\ \end{pmatrix}\chi_{\{z\in \Sigma_A^1\}}\nonumber\\ &&+\begin{pmatrix} 0 & 0\\ -\bar{R}((-z_0)-)(-z)^{2\nu i}e^{-iz^2/2}&0\\ \end{pmatrix}\chi_{\{z\in \Sigma_A^3\}} \end{eqnarray} where \begin{eqnarray} R((-z_0)+)&=&\lim_{z\rightarrow (-z_0)+}\rho(z)=\frac{r(-z_0)}{1-|r(-z_0)|^2}\\ R((-z_0)-)&=&\lim_{z\rightarrow (-z_0)-}\rho(z)=-r(-z_0) \end{eqnarray} \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6] \draw [<->](-2,-2)--(2,2); \draw (-3,-3)--(3,3); \draw [<->](-2,2)--(2,-2); \draw (-3,3)--(3,-3); \node [below] at (0,0) {0}; \node [right] at (3,3) {$\Sigma_A^2$};
\node [right] at (3,-3) {$\Sigma_A^1$}; \node [left] at (-3,3) {$\Sigma_A^3$}; \node [left] at (-3,-3) {$\Sigma_A^4$}; \end{tikzpicture} \caption{Oriented contour $\Sigma_A$} \label{fig:sigma_A} \end{figure} Next we show how to approximate the RHP data $w^{A'}$ by the data $w^{A^0}$. In fact, applying the restriction lemma and changing variables, we can show that \begin{equation} \begin{split} \int_{\Sigma_{A'}}&((1-C_{w^{A'}})^{-1}I)w^{A'}(\xi)d\xi =\int_{\Sigma_{\hat{A}'}}((1-C_{\hat{w}^{A'}})^{-1}I)\hat{w}^{A'}(\xi)d\xi\\ &=\int_{\Sigma_{\hat{A}'}}(N_A^{-1}(\tilde{\Delta}_A^0)^{-1}(1-A)^{-1}\tilde{\Delta}_A^0 N_AI)(\xi)\hat{w}^{A'}(\xi)d\xi\\ &=\frac{1}{a}\int_{\Sigma_A}((1-A)^{-1}\Delta_A^0)(\xi)(\Delta_A^0)^{-1}(N_A\hat{w}^{A'})(\xi)\Delta_A^0(\Delta_A^0)^{-1}d\xi\\ &=\frac{1}{a}\Delta_A^0\int_{\Sigma_A}((1-A)^{-1}I)w^A(\xi)d\xi (\Delta_A^0)^{-1}\\ &=\frac{1}{a}\Delta_A^0\int_{\Sigma_A}((1-A^0)^{-1}I)w^{A^0}(\xi)d\xi (\Delta_A^0)^{-1}+\frac{1}{a}O(t^{-1/2}+\frac{\log(t)}{t^{1/2}}) \end{split} \end{equation} Combining this with Proposition \ref{prop:split contributions}, we have \begin{prop} \label{prop:reduce to model rhp} \begin{equation} \begin{split} q(x,t)=&[\frac{-2}{a}\Delta_A^0\int_{\Sigma_A}((1-A^0)^{-1}I)w^{A^0}(\xi)\frac{d\xi}{\pi} (\Delta_A^0)^{-1}]_{12}\\ &+[\frac{-2}{a}\Delta_B^0\int_{\Sigma_B}((1-B^0)^{-1}I)w^{B^0}(\xi)\frac{d\xi}{\pi} (\Delta_B^0)^{-1}]_{12}\\ &+O(t^{-k})+O(t^{-1}+\frac{\log(t)}{t}),\quad\forall k\in \mathbb{N},k>2 \end{split} \end{equation} as $t\rightarrow \infty$. \end{prop} Note that $\int_{\Sigma_A}((1-A^0)^{-1}I)w^{A^0}(\xi)d\xi$ is connected to the following RHP: let \begin{equation} m^{A^0}(z)=I+\int_{\Sigma_A}\frac{((1-A^0)^{-1}I)w^{A^0}(\xi)}{\xi-z}\frac{d\xi}{2\pi i},\quad z\in \mathbb{C}\backslash \Sigma_A \end{equation} Then the corresponding RHP reads \begin{equation} \begin{cases} m^{A^0}_+(z)=m^{A^0}_-(z)v^{A^0}(z),\quad z\in \Sigma_A\\ m^{A^0}(\infty)=I \end{cases} \end{equation} where \begin{equation}
v^{A^0}(z)=(1-w^{A^0}_-)^{-1}(1+w^{A^0}_+) \end{equation} We also obtain \begin{equation} m^{A^0}_1:=-Res(m^{A^0}(z),\infty)=\int_{\Sigma_A}((1-A^0)^{-1}I)w^{A^0}(\xi)\frac{d\xi}{2\pi i}. \end{equation} Similarly, we can compute for $\Sigma_B$; since the reflection coefficient has the symmetry $r(z)=-\bar{r}(-\bar{z})$ and all the jump matrices are triangular, we have the following relation: \begin{equation} \sigma_3\overline{v^{B^0}(-\bar{z})}\sigma_3=v^{A^0}(z) \end{equation} Moreover, by uniqueness of the RHP, \begin{equation} m^{A^0}(z)=\sigma_3\overline{m^{B^0}(-\bar{z})}\sigma_3 \end{equation} which implies that \begin{equation} m^{B^0}_1=-\sigma_3\overline{m^{A^0}_1}\sigma_3 \end{equation} Now from Proposition \ref{prop:reduce to model rhp} it follows that \begin{equation} \label{eq:finall asymptotics} q(x,t)=\frac{-2}{a}\left[(\delta_A^0)^2(m^{A^0}_1)_{12}+\overline{(\delta_A^0)^{2}(m^{A^0}_1)_{12}}\right]+O(\frac{\log(t)}{t}) \end{equation} as $t\rightarrow \infty$. In the rest of the section, we solve the model RHP in terms of solutions of the parabolic-cylinder equation. The basic idea is to ``close the lens'', which is the inverse process of the contour deformation (``opening the lens'').
\begin{figure}[h] \centering \begin{tikzpicture}[scale=0.6] \draw [<-<](-2,-2)--(2,2); \draw (-3,-3)--(3,3); \draw [<-<](-2,2)--(2,-2); \draw (-3,3)--(3,-3); \node [below] at (0,0) {0}; \draw [<-<] (-2,0)--(2,0); \draw [-](-4,0)--(4,0); \node [right] at (2,1) {$\Omega_1^e$}; \node [] at (0,2) {$\Omega_2^e$}; \node [left] at (-2,1) {$\Omega_3^e$}; \node [left] at (-2,-1) {$\Omega_4^e$}; \node [] at (0,-2) {$\Omega_5^e$}; \node [right] at (2,-1) {$\Omega_6^e$}; \end{tikzpicture} \caption{Oriented contour $\Sigma_e$} \label{fig:sigma_e} \end{figure} First we reorient the right half of $\Sigma_A$ and denote the new contour by $\Sigma_{A,r}$; the new RHP data on the right half plane become $w^{A,r}_{\pm}=-w^{A^0}_{\mp}$. Then extend the contour $\Sigma_{A,r}$ to $\Sigma_e=\Sigma_{A,r}\cup \mathbb{R}$ by assigning $0$ to the RHP data on $\mathbb{R}$, and mark the six regions as shown in Figure \ref{fig:sigma_e}. Define a matrix $\phi$ by \begin{equation} \phi(z)=(-z)^{\nu i\sigma_3}\times \begin{cases} 1 & z\in \Omega_2^e\cup \Omega_5^e\\ (b_+^{A^0})^{-1} & z\in \Omega_1^e\cup \Omega_4^e\\ (b_-^{A^0})^{-1} & z\in \Omega_3^e\cup \Omega_6^e\\ \end{cases} \end{equation} Conjugating $v^{A^0}$, i.e. setting $v^{A^0,\phi}=\phi_-(z)v^{A^0}\phi^{-1}_+(z)$, we obtain a new RHP which only has jumps on the real line.
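The matrix algebra behind this closing of the lens can be checked numerically: the lower-triangular factor built from $\bar{R}((-z_0)+)$, the diagonal jump $(1-|r(-z_0)|^2)^{\sigma_3}$ contributed by $(-z)^{\nu i\sigma_3}$ across the real line, and the upper-triangular factor built from $R((-z_0)+)$ multiply out to a constant matrix. A minimal sketch (the value of $r(-z_0)$ below is an arbitrary test number, not data from the text):

```python
import numpy as np

# arbitrary test value for the reflection coefficient at -z0, with |r| < 1
r = 0.6 * np.exp(0.3j)
d = 1 - abs(r) ** 2                               # 1 - |r(-z0)|^2

lower = np.array([[1, 0], [-np.conj(r) / d, 1]])  # factor from -conj(R((-z0)+))
diag = np.diag([d, 1 / d])                        # jump of (-z)^{i nu sigma_3}
upper = np.array([[1, r / d], [0, 1]])            # factor from R((-z0)+)

product = lower @ diag @ upper
target = np.array([[d, r], [-np.conj(r), 1]])     # the constant matrix v(-z0)
assert np.allclose(product, target)
```

The product no longer depends on $z$, which is what makes the resulting model RHP explicitly solvable.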
And the jump on the real line is \begin{equation} \begin{split} v^{A^0,\phi}&=\phi_-(z)\phi^{-1}_+(z)\\ &=(-z)^{i\nu \sigma_3}_-((b_-^{A^0})^{-1})b_+^{A^0}(-z)^{-i\nu \sigma_3}_+\\ &=e^{iz^2/4\hat{\sigma}_3}\begin{pmatrix} 1 & 0\\ -\frac{\bar{r}(-z_0)}{1-|r(-z_0)|^2}&1\\ \end{pmatrix}(-z)^{i\nu \sigma_3}_-(-z)^{-i\nu \sigma_3}_+ \begin{pmatrix} 1 & \frac{r(-z_0)}{1-|r(-z_0)|^2}\\ 0 & 1 \end{pmatrix}\\ &=e^{iz^2/4\hat{\sigma}_3}\begin{pmatrix} 1 & 0\\ -\frac{\bar{r}(-z_0)}{1-|r(-z_0)|^2}&1\\ \end{pmatrix}(1-|r(-z_0)|^2)^{\sigma_3} \begin{pmatrix} 1 & \frac{r(-z_0)}{1-|r(-z_0)|^2}\\ 0 & 1 \end{pmatrix}\\ &=e^{iz^2/4\hat{\sigma}_3}\begin{pmatrix} 1-|r(-z_0)|^2 & r(-z_0)\\ -\bar{r}(-z_0)&1\\ \end{pmatrix}\\ &=e^{iz^2/4\hat{\sigma}_3}v(-z_0) \end{split} \end{equation} Then $H(z)=m^{A^0}(z)\phi^{-1}(z)$ satisfies the following RHP: \begin{equation} \begin{cases} H_+(z)=H_-(z)e^{iz^2/4\hat{\sigma}_3}v(-z_0), \quad z\in \mathbb{R}\\ H(z)\sim(-z)^{\nu i \sigma_3},\quad z\rightarrow\infty \end{cases} \end{equation} Let $\Psi=He^{iz^2/4\sigma_3}$; then $\Psi_+=\Psi_-v(-z_0)$, which has a constant jump across the real line. It is then easy to check that $\frac{d\Psi}{dz}\Psi^{-1}$ has no jump on the real line and hence is entire, so by a Liouville argument we have \begin{equation} \begin{split} \frac{d\Psi}{dz}\Psi^{-1}&=\frac{dH}{dz}H^{-1}+H\sigma_3H^{-1}\frac{iz}{2}\\ &=\frac{iz}{2}\sigma_3+\frac{i}{2}[\sigma_3,m^{A^0}_1]+O(z^{-1})\\ &\equiv\frac{iz}{2}\sigma_3+\frac{i}{2}[\sigma_3,m^{A^0}_1] \end{split} \end{equation} Let $\beta=\frac{i}{2}[\sigma_3,m^{A^0}_1]=\begin{pmatrix} 0 & \beta_{12}\\ \beta_{21} & 0\\ \end{pmatrix}$; it follows that \begin{equation} \label{eq:pre weber's parabolic cylinder equation} \frac{d\Psi}{dz}=(\frac{iz}{2}\sigma_3+\beta)\Psi.
\end{equation} First consider $\Im z>0$. From equation \eqref{eq:pre weber's parabolic cylinder equation} we obtain two second order ODEs: \begin{eqnarray} \frac{d^2}{dz^2}\Psi_{11}^{+}&=&(i/2-z^2/4+\beta_{12}\beta_{21})\Psi_{11}^+\\ \frac{d^2}{dz^2}\Psi_{21}^{-}&=&(-i/2-z^2/4+\beta_{12}\beta_{21})\Psi_{21}^-\\ \end{eqnarray} By setting $\Psi_{11}^+(z)=g(e^{-\pi i/4}z)$, we have \begin{equation} \label{eq: standard equation} \frac{d^2}{dz^2}g(z)-(\frac{z^2}{4}+a)g(z)=0 \end{equation} where $a=-\frac{1}{2}+i\beta_{12}\beta_{21}.$ This is Weber's parabolic cylinder equation; the Digital Library of Mathematical Functions (DLMF) gives the asymptotics of its solutions as $z\rightarrow \infty$. For the reader's convenience, we copy the asymptotic expansions here: \begin{equation} \begin{split} U(a,z)&\sim e^{-\frac{1}{4}z^2}z^{-a-1/2}\sum_{s=0}^{\infty}(-1)^s\frac{(1/2+a)_s}{s!(2z^2)^s},\quad |\arg(z)|<\frac{3\pi }{4}\\ &\sim e^{-\frac{1}{4}z^2}z^{-a-1/2}\sum_{s=0}^{\infty}(-1)^s\frac{(1/2+a)_s}{s!(2z^2)^s}\\ &\pm i\frac{\sqrt{2\pi}}{\Gamma(1/2+a)}e^{\mp i\pi a}e^{\frac{1}{4}z^2}z^{a-1/2}\sum_{s=0}^{\infty}(-1)^s\frac{(1/2-a)_s}{s!(2z^2)^s},\quad \frac{1}{4}\pi<\pm \arg(z)<\frac{5}{4}\pi \end{split} \end{equation} From the DLMF we also know that the Wronskian $W\{U(a,z),U(a,-z)\}=\frac{\sqrt{2\pi}}{\Gamma(1/2+a)}$ is non-zero as long as $a+1/2$ is not a non-positive integer. For now, assume that this is the case; then the solution of equation \eqref{eq: standard equation} can be represented as \begin{equation} g(z)=c_1U(a,z)+c_2U(a,-z).
\end{equation} As $z=e^{\frac{1}{4}\pi i}\sigma \rightarrow \infty$, we know that $\Psi_{11}^{+}=(-e^{i\pi/4}\sigma)^{i\nu}e^{-\sigma^2/4}=e^{\nu i (\log\sigma-i\frac{3}{4}\pi)}e^{-\sigma^2/4}$; comparing this with the asymptotic expansion of $g$, we have \begin{equation} c_2=0,\quad a=-\nu i -1/2,\quad c_1=e^{\frac{3}{4}\pi\nu} \end{equation} so that \begin{equation} \Psi_{11}^{+}(z)=e^{\frac{3}{4}\pi\nu}U(a,e^{-\frac{\pi}{4}i}z),\quad \Im z>0 \end{equation} Similarly, for $\Im z<0$ we have \begin{equation} \Psi^-_{11}(z)=e^{-\frac{\pi\nu}{4}}U(a,e^{\frac{3\pi i}{4}}z) \end{equation} Meanwhile $\Psi_{21}=\beta_{12}^{-1}\left(\frac{d}{dz}\Psi_{11}-\frac{iz}{2}\Psi_{11}\right)$, so $\Psi_{21}^{\pm}$ is automatically represented by $\Psi_{11}^{\pm}$. Also, $$\Psi_-^{-1}\Psi_+=v(-z_0)=\begin{pmatrix} 1-|r(-z_0)|^2& r(-z_0)\\ -\bar{r}(-z_0)&1\\ \end{pmatrix},$$ and comparing both sides we obtain the following relation: \begin{equation} \begin{split} -\bar{r}(-z_0)&=\Psi_{11}^-\Psi_{21}^+-\Psi_{21}^-\Psi_{11}^{+}\\ &=\beta_{12}^{-1}[\Psi_{11}^-(\Psi_{11}^+)'-(\Psi_{11}^-)'\Psi_{11}^+]\\ &=\beta_{12}^{-1}e^{\pi \nu/2}W\{U(a,e^{3\pi i/4}z),U(a,e^{-\pi i/4}z)\}\\ &=\frac{e^{\pi \nu/2}e^{3\pi i/4}\sqrt{2\pi}}{\beta_{12}\Gamma(-\nu i)}\quad\quad (\text{see [DLMF] equation (12.2.11)}) \end{split} \end{equation} Thus, \begin{equation} \beta_{12}=\frac{e^{\pi \nu /2}\sqrt{2\pi}e^{3\pi i/4}}{-\bar{r}(-z_0)\Gamma(-\nu i)} \end{equation} and \begin{equation} \beta_{21}=-\nu/\beta_{12}=\frac{e^{\pi \nu /2}\sqrt{2\pi}e^{-3\pi i/4}}{r(-z_0)\Gamma(\nu i)}. \end{equation} As mentioned before, we assumed the Wronskian to be non-zero. This indeed holds: since $\nu=-\frac{1}{2\pi}\log(1-|r(-z_0)|^2)>0$, the number $-i\nu$ is not a non-positive integer, so $\frac{1}{\Gamma(1/2+a)}=\frac{1}{\Gamma(-i\nu)}$ is not zero.
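The Wronskian identity invoked above, $W\{U(a,z),U(a,-z)\}=\sqrt{2\pi}/\Gamma(1/2+a)$ (DLMF (12.2.11)), can be checked numerically through the relation $U(a,z)=D_{-a-1/2}(z)$ between Whittaker's $U$ and the parabolic cylinder function $D_\nu$ implemented in SciPy. A sketch for a real test value of $a$ (the chosen numbers are arbitrary):

```python
import numpy as np
from scipy.special import pbdv, gamma

a, z = 0.3, 0.7            # arbitrary real test values
v = -a - 0.5               # U(a,z) = D_v(z) with v = -a - 1/2

U_p, dU_p = pbdv(v, z)     # U(a, z) and its derivative in z
U_m, dU_m = pbdv(v, -z)    # U(a,-z); d/dz U(a,-z) = -dU_m by the chain rule

# W{U(a,z), U(a,-z)} = U(a,z) * d/dz U(a,-z) - U'(a,z) * U(a,-z)
wronskian = U_p * (-dU_m) - dU_p * U_m
assert np.isclose(wronskian, np.sqrt(2 * np.pi) / gamma(0.5 + a))
```

Since the Wronskian is independent of $z$, any evaluation point away from the zeros of the functions gives the same value.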
Note also that \begin{eqnarray} (m_1^{A^0})_{21}&=&i\beta_{21}\\ (m_1^{A^0})_{12}&=&-i\beta_{12} \end{eqnarray} Finally, substituting back into equation \eqref{eq:finall asymptotics}, we obtain \begin{equation} \begin{split} q(x,t)&=\frac{-2}{a}\left[(\delta_A^0)^2(m^{A^0}_1)_{12}+\overline{(\delta_A^0)^{2}(m^{A^0}_1)_{12}}\right]+O(\frac{\log(t)}{t})\\ &=\frac{-2}{a}[e^{2\chi(-z_0)}e^{-2it\theta(-z_0)}(2az_0)^{2i\nu}\frac{e^{\pi \nu /2}\sqrt{2\pi}e^{5\pi i/4}}{\bar{r}(-z_0)\Gamma(-\nu i)}\\ &+e^{-2\chi(-z_0)}e^{2it\theta(-z_0)}(2az_0)^{-2i\nu}\frac{e^{\pi \nu /2}\sqrt{2\pi}e^{3\pi i/4}}{r(-z_0)\Gamma(\nu i)}] \\&+O(\frac{\log(t)}{t})\\ \end{split} \end{equation} as $t\rightarrow \infty.$ Simplifying this, we obtain the result \eqref{main result} reported in the introduction.
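In carrying out such a simplification, the modulus of the coefficient is commonly evaluated with the classical identity $|\Gamma(i\nu)|^2=\pi/(\nu\sinh(\pi\nu))$, together with $|r(-z_0)|^2=1-e^{-2\pi\nu}$, which follows from the definition of $\nu$ above (this is a hedged aside, not a step stated in the text). A quick numerical check of the Gamma identity, with an arbitrary test value of $\nu$:

```python
import numpy as np
from scipy.special import gamma

nu = 0.4                                  # arbitrary positive test value
lhs = abs(gamma(1j * nu)) ** 2            # |Gamma(i nu)|^2
rhs = np.pi / (nu * np.sinh(np.pi * nu))  # pi / (nu sinh(pi nu))
assert np.isclose(lhs, rhs)
```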
\section{Introduction} \begin{quote} \emph{How do geometric constraints restrict the combinatorics of polytopes?} \end{quote} One instance of this question asks for the combinatorial types of $d$-polytopes that are \emph{inscribable}, that is, that have realizations with all vertices on a sphere. This question was raised in 1832 by Jacob Steiner \cite{Steiner}, who asked whether all $3$-dimensional polytopes are inscribable. The answer was given by Ernst Steinitz \cite{Steinitz} nearly 100 years later: No --- despite a claim to the contrary by Brückner \cite[p.~163]{Bruck} this is not even true for all simplicial types. Indeed, the polytope obtained by “stacking” a new vertex onto each facet of a tetrahedron is not inscribable (compare Grünbaum \cite[Sect.~13.5]{Gruenbaum}). General $3$-polytopes are inscribable if and only if a certain associated linear program is feasible: This was proved by Hodgson, Rivin \& Smith \cite{Rivin} using hyperbolic geometry; a complete combinatorial characterization for inscribability of $3$-polytopes is not available up to now (compare Dillencourt \& Smith~\cite{Dill96}). In this paper we embark on a study of inscribability for $d$-dimensional convex polytopes. One instance of our motivating question is \begin{quote} \emph{How does the condition of inscribability restrict the $f$-vectors of polytopes?} \end{quote} In this context, we observe (Section~\ref{subsec:f-inscribable_3poly}) that all $f$-vectors of $3$-polytopes (as characterized by Steinitz \cite{Steinitz} in 1906) also occur for inscribable polytopes. Also, as the cyclic polytopes are inscribable, we get that the Upper Bound Theorem of McMullen~\cite{McM} is sharp for the restricted class of inscribable polytopes (Section~\ref{subsec:f-inscribable_cyclic}).
Moreover, it is easy to see that there are stacked $d$-polytopes with $d+1+n$ vertices that are inscribable ($d\ge2$, $n\ge0$), so we find that also the Lower Bound Theorem of Barnette \cite{Barnette1} \cite{Barnette2} is sharp for the restricted class of inscribable polytopes. One is thus naturally led to ask whether all $f$-vectors of convex polytopes, or at least all $f$-vectors of simplicial convex polytopes can be obtained from inscribable polytopes. This will be further studied in~\cite{GonskaZiegler2}. One can then proceed and try to characterize inscribability for some of these classes. This seems out of reach for neighborly polytopes, as according to Shemer \cite{Shemer} there are huge numbers of combinatorial types, and no combinatorial classification in sight. However, as the main result of this paper we provide a combinatorial characterization of inscribable stacked polytopes. It refers to the dual tree of a stacked polytope, which will be formally defined in Section~\ref{subsec:stacked} below; for now, we refer to Figure~\ref{Pic:DualTree}. \begin{figure}[htb!] 
\centering \begin{tikzpicture}[ rotate around={60:(1,1)}, x = {(-1,0)},y = {(0cm,0.5cm)},z = {(0cm,.7cm)}, scale=1.5,rounded corners=.66pt] \coordinate (P1) at (0,0,5.5); \coordinate (P2) at (-.5,1,4); \coordinate (P3) at (1.5,0,5); \coordinate (P4) at (2,0,4); \coordinate (P5) at (1,1,2); \coordinate (P6) at (-1,0,2); \coordinate (P7) at (1,-1,2); \coordinate (P8) at (0,0,1); \coordinate (V1) at ($.25*(P1)+.25*(P2)+.25*(P5)+.25*(P6)$); \coordinate (V2) at ($.25*(P1)+.25*(P5)+.25*(P6)+.25*(P7)$); \coordinate (V3) at ($.25*(P1)+.25*(P3)+.25*(P5)+.25*(P7)$); \coordinate (V4) at ($.25*(P3)+.25*(P4)+.25*(P5)+.25*(P7)$); \coordinate (V5) at ($.25*(P5)+.25*(P6)+.25*(P7)+.25*(P8)$); \draw [thin] (P1) -- (P2) -- (P5) -- cycle; \draw [thin] (P1) -- (P3) -- (P5) -- cycle; \draw [thin] (P2) -- (P5) -- (P6) -- cycle; \draw [thin] (P3) -- (P4) -- (P5) -- cycle; \draw [thin] (P4) -- (P5) -- (P7) -- cycle; \draw [thin] (P5) -- (P6) -- (P8) -- cycle; \draw [thin] (P5) -- (P7) -- (P8) -- cycle; \draw (P1) -- (P5) -- (P6) -- cycle; \draw [red,ultra thick] (V2)--(V5)--(V2)--(V3)--(V4); \draw [blue,ultra thick] (V1)--(V2); \draw [fill=red,draw=red,ultra thick] (V2) circle(2pt) node [anchor=south west]{$r$}; \draw [fill=white,draw=red,ultra thick] (V3) circle(2pt) (V4) circle(2pt) (V5) circle(2pt); \draw [fill=white,draw=blue,ultra thick] (V1) circle(2pt) node [anchor=west]{$u$}; \draw [very thick] (P1) -- (P2) -- (P6) -- cycle; \draw [very thick] (P1) -- (P3) -- (P7) -- cycle; \draw [very thick] (P1) -- (P6) -- (P7) -- cycle; \draw [very thick] (P3) -- (P4) -- (P7) -- cycle; \draw [very thick] (P6) -- (P7) -- (P8) -- cycle; \end{tikzpicture} \caption{The dual tree of a stacked $3$-polytope.} \label{Pic:DualTree} \end{figure} \begin{theorem}\label{mainthm:polytopes} A stacked polytope is inscribable if and only if all nodes of its dual tree have degree at most~$3$. \end{theorem} Thus the requirement of inscribability does not restrict the possible $f$-vectors of stacked polytopes. 
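The criterion of the theorem is purely combinatorial and easy to check mechanically. The following sketch (the function name and the adjacency-dict encoding of the dual tree are our own conventions, not from the paper) tests the degree bound:

```python
# Sketch of the inscribability criterion for stacked polytopes: a stacked
# polytope is inscribable iff every node of its dual tree has degree <= 3.

def is_inscribable_stacked(dual_tree):
    """dual_tree: dict mapping each node to the list of its neighbors."""
    return all(len(neighbors) <= 3 for neighbors in dual_tree.values())

# Steinitz's example: stacking onto all four facets of a tetrahedron yields
# a dual tree whose central node has degree 4 -- not inscribable.
stacked_tetrahedron = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

# A path, as obtained by always stacking onto a facet of the newest simplex,
# satisfies the degree bound.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

print(is_inscribable_stacked(stacked_tetrahedron), is_inscribable_stacked(path))
# False True
```
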
However, other combinatorial parameters are restricted. For example, in any inscribable stacked $d$-polytope (other than a simplex, $d\ge3$) less than half of the vertices are simple, while for general stacked $d$-polytopes roughly $\frac{d-1}d$ of the vertices can be simple. \smallskip The study of inscribable convex $d$-polytopes is, via stereographic projection, equivalent to the study of $(d-1)$-dimensional Delaunay triangulations. (The importance of stereographic projection in this context was stressed in 1979 by Brown~\cite{Brown}.) Under this correspondence (which is detailed in Section~\ref{subsec:stereographic}), the stacked $d$-polytopes with $d+1+n$ vertices correspond to the Delaunay triangulations of a $(d-1)$-simplex generated by a sequence of $n$ stellar subdivisions of $(d-1)$-faces. (The rooted tree of a multiple stellar subdivision of a simplex is discussed in Section~\ref{subsec:Delaunay} below; for now, we refer to Figure~\ref{Pic:DualTree2}.) \begin{figure}[htb!] \centering \begin{tikzpicture}[x = {(1cm,0cm)},y = {(0cm,1cm)}, scale=1.3,rounded corners=.66pt] \coordinate (P1) at (0,0); \coordinate (P2) at (5,0); \coordinate (P3) at (2.5,3.75); \coordinate (V0) at ($.33*(P1)+.33*(P2)+.33*(P3)$); \coordinate (V2) at ($.33*(V0)+.36*(P1)+.27*(P3)$); \coordinate (V21) at ($.33*(V0)+.36*(V2)+.27*(P3)$); \coordinate (V3) at ($.3*(V0)+.4*(P3)+.3*(P2)$); \draw [ultra thick] (P1) -- (P2) -- (P3) -- cycle; \draw [thick] (P1) -- (V0) -- (P2); \draw [thick] (P1) -- (V0) -- (P3); \draw [thick] (P2) -- (V0) -- (P3); \draw [thin] (P3) -- (V3) -- (V0); \draw [thin] (P2) -- (V3) -- (V0); \draw [thin] (P3) -- (V3) -- (P2); \draw [thin] (P1) -- (V2) -- (V0); \draw [thin] (P3) -- (V2) -- (V0); \draw [thin] (P1) -- (V2) -- (P3); \draw [thin] (V2) -- (V21) -- (V0); \draw [thin] (P3) -- (V21) -- (V0); \draw [thin] (V2) -- (V21) -- (P3); \draw [ultra thick,red] (V3) -- (V0) -- (V2) -- (V21); \draw [fill=red,draw=red,ultra thick] (V0) circle(2pt) +(0,-0.04) node 
[anchor=north] {$r$}; \draw [fill=white,draw=red,ultra thick] (V3) circle(2pt) (V2) circle(2pt) (V21) circle(2pt); \coordinate (TU) at (8,3.5); \coordinate (T0) at (8,2.5); \coordinate (T3) at (7,1.5); \coordinate (T2) at (9,1.5); \coordinate (T21) at (9,.5); \draw [ultra thick,blue, dotted] (TU) -- (T0); \draw [ultra thick,red] (T3) -- (T0) -- (T2) -- (T21); \draw [fill=blue,draw=blue,ultra thick] (TU) circle(2pt) node [anchor=south west] {$u$}; \draw [fill=red,draw=red,ultra thick] (T0) circle(2pt) node [anchor=south west] {$r$}; \draw [fill=white,draw=red,ultra thick] (T3) circle(2pt) (T2) circle(2pt) (T21) circle(2pt); \end{tikzpicture} \caption{A stellar subdivision of a simplex and its dual rooted tree.} \label{Pic:DualTree2} \end{figure} \begin{theorem}\label{mainthm:Delaunay} A triangulation that is a multiple stellar subdivision of a $(d-1)$-simplex can be realized as a Delaunay triangulation if and only if at most two of the $(d-1)$-simplices generated in any single stellar subdivision are further subdivided. \end{theorem} \section{Stacked polytopes and Delaunay triangulations}\label{sec:} \subsection{Inscribable polytopes}\label{subsec:inscribable} \begin{definition}[inscribed polytope] A convex $d$-polytope is \emph{inscribed} if its vertices lie on a $(d-1)$-sphere. It is \emph{inscribable} if it is combinatorially equivalent to an inscribed polytope, that is, if it has a realization that is inscribed. \end{definition} \subsection{Stacked polytopes}\label{subsec:stacked} \begin{definition}[stacked polytope]\label{def:stacked} A polytope is \emph{stacked} if it can be built from a $d$-simplex by a sequence of {stacking operations}: A \emph{stacking operation} is performed onto a facet by taking the convex hull of the polytope with a new point that lies beyond the selected facet but beneath all other facets of the polytope. \end{definition} A stacking operation can also be imagined as gluing a simplex onto a facet. 
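Since gluing a simplex onto a facet adds one vertex, adds $\binom{d}{k}$ new $k$-faces for $k<d-1$, and trades the stacked facet for $d$ new facets, the $f$-vector of a stacked polytope is determined by $d$ and the number $n$ of stacking operations. A small computational sketch of this bookkeeping (the helper name is ours), with a sanity check against Euler's relation:

```python
from math import comb

def stacked_f_vector(d, n):
    """f-vector (f_0, ..., f_{d-1}) of a stacked d-polytope built from a
    d-simplex by n stacking operations.  Each stacking adds binom(d, k)
    new k-faces for k < d-1, and d new facets while destroying the stacked
    facet, for a net gain of d-1 facets."""
    f = [comb(d + 1, k + 1) for k in range(d)]  # the initial d-simplex
    for k in range(d - 1):
        f[k] += n * comb(d, k)
    f[d - 1] += n * (d - 1)
    return tuple(f)

# Sanity check via Euler's relation: sum_k (-1)^k f_k = 1 - (-1)^d.
d, n = 3, 5
f = stacked_f_vector(d, n)
euler = sum((-1) ** k * fk for k, fk in enumerate(f))
print(f, euler)  # (9, 21, 14) 2
```
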
A simplicial $d$-polytope $P$ is stacked if and only if it has a triangulation with only interior faces of dimension $d$ and $d-1$. In dimension at least $3$ such a triangulation is unique if it exists: Its simplices are given by the cliques (complete subgraphs) of the graph of~$P$. The “claim to fame” of stacked polytopes is the Lower Bound Theorem \cite{Barnette1} \cite{Barnette2} \cite{Bronsted}: Among all simplicial $d$-polytopes with $d+1+n$ vertices, the stacked polytopes have the minimal number of facets (and indeed, the minimal number of $k$-faces, for all $k$). Moreover, for $d\ge4$ the stacked polytopes are the only polytopes with these parameters. \begin{definition}[dual tree of a stacked polytope]\label{def:dualtree_polytope} For $d\ge3$, the \emph{dual tree} $T_P$ of a stacked $d$-polytope $P$ is the dual graph of its triangulation that has only interior $d$- and $(d-1)$-faces: Every $d$-face in the triangulation corresponds to a node and every interior $(d-1)$-face corresponds to an edge of the tree. \end{definition} The graph $T_P$ given by Definition~\ref{def:dualtree_polytope} is indeed a tree if $P$ is stacked. We choose any node of $T_P$ as a root and assign an order to the rest of the nodes such that a child is always greater than its parent. Any such order implies an iterative construction of $P$ via stackings in the following way: The root represents the initial simplex. Every child has one vertex that it does not share with its parent; this vertex is used to stack onto the $(d-1)$-face that child and parent share. Assuming that $T_P$ has at least two nodes, we see that the leaves of the tree account for the simple vertices of $P$; no further simple vertices are possible, except when the root has exactly one child: in that case there is one additional simple vertex, contained only in the root $d$-simplex. 
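The bookkeeping of simple vertices just described can be sketched as follows, assuming the rooted dual tree has at least two nodes; the encoding as a children dict and the function name are our own:

```python
def count_simple_vertices(children, root):
    """children: dict mapping each node of the rooted dual tree to the list
    of its children (assumed to have at least two nodes).  Counts simple
    vertices as described above: one per leaf of the rooted tree, plus one
    extra vertex when the root has exactly one child."""
    leaves = sum(1 for node in children if not children[node])
    extra = 1 if len(children[root]) == 1 else 0
    return leaves + extra

# Two glued d-simplices: root with a single child -> 2 simple vertices
# (the stacked vertex and the opposite vertex of the root simplex).
print(count_simple_vertices({0: [1], 1: []}, root=0))  # 2
# Root with three children, all leaves -> 3 simple vertices.
print(count_simple_vertices({0: [1, 2, 3], 1: [], 2: [], 3: []}, root=0))  # 3
```
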
Clearly the dual tree $T_P$ of a stacked $d$-polytope on $d+1+n$ vertices has maximal degree \[ \max\deg T_P\le \min\{d+1,n\} \] and stacked polytopes with these parameters exist for all $d\ge3$ and $n\ge0$. \subsection{Delaunay triangulations}\label{subsec:Delaunay} For any affinely spanning finite set of points $V\subset\R^{d-1}$, the \emph{Delaunay subdivision} $\mathcal D(V)$ is the unique subdivision of $\conv(V)$ into inscribed $(d-1)$-polytopes $P_i=\conv(V_i)$, $V_i\subset V$, that are given by the \emph{empty circumsphere condition}: There exists a $(d-2)$-sphere that passes through all vertices of $P_i$ while all other vertices of $\mathcal D(V)$ lie outside this sphere. If the points in $V$ are in sufficiently general position (which is satisfied in particular if no $d+1$ of the points lie on a common $(d-2)$-sphere), then the Delaunay subdivision is indeed a triangulation, known as \emph{the Delaunay triangulation} of~$V$. One way to construct the Delaunay subdivision/triangulation is to derive it from an inscribed $d$-polytope by using the (inverse) stereographic projection, as discussed below. We will employ the following very elegant criterion in order to test whether a given triangulation is the Delaunay triangulation. For this we call a face of a triangulation \emph{Delaunay} if there exists a \emph{supporting sphere} of the face, that is, a $(d-2)$-sphere passing through the vertices of the face such that all other vertices of the triangulation lie outside the sphere. Each interior $(d-2)$-face $F$ is contained in exactly two $(d-1)$-faces, $\conv(F\cup\{v_1\})$ and $\conv(F\cup\{v_2\})$; we call it \emph{locally Delaunay} if there exists a $(d-2)$-sphere passing through the vertices of $F$ such that the vertices $v_1$ and $v_2$ lie outside this sphere. \begin{lemma}[Delaunay Lemma]\label{lemma:localDelaunay} Let $V\subset\R^{d-1}$ be a finite, affinely spanning set of points. 
A triangulation $\mathcal T$ of $\conv(V)$ with vertex set~$V$ is the Delaunay triangulation if and only if one of the following equivalent statements holds: \begin{compactenum}[\rm(1)] \item All $(d-1)$-faces of $\mathcal T$ are {Delaunay}. \item All faces of $\mathcal T$ are {Delaunay}. \item All $(d-2)$-faces of $\mathcal T$ are {Delaunay}. \item All interior $(d-2)$-faces of $\mathcal T$ are {locally Delaunay}. \end{compactenum} \end{lemma} \begin{proof} The first statement is the definition of a Delaunay triangulation. It implies the second statement: For each face $F$ one can always perturb the supporting sphere of a $(d-1)$-face that contains $F$ slightly to obtain a supporting sphere of $F$. The second statement implies the third, and this in turn implies the last one. For more details, and also for a proof that the last statement implies the first, we refer to Edelsbrunner~\cite[pp.~7 and 99]{Edelsbrunner}. \end{proof} \begin{definition}[stellar subdivision] Let $p$ be a point inside a full-dimensional simplex $\sigma$ of a triangulation $\mathcal T$. A \emph{single stellar subdivision} of $\mathcal T$ at $\sigma$ (by $p$) is the triangulation that replaces $\sigma$ by the simplices spanned by $p$ together with the facets of $\sigma$. We call a triangulation a \emph{multiple stellar subdivision} of $\mathcal T$ at $\sigma$ if one or more single stellar subdivisions have been applied. \end{definition} In the following, we will discuss which Delaunay triangulations can be generated by multiple stellar subdivisions of a triangulation that has just one full-dimensional simplex. \begin{lemma}\label{lem:stellar} A single stellar subdivision of the triangulation that has just one full-dimensional simplex is always a Delaunay triangulation. \end{lemma} \begin{figure}[htb!] 
\centering \begin{tikzpicture}[scale=.7] \coordinate (A) at ($.8*(0,0)$); \coordinate (B) at ($.8*(7.7,.5)$); \coordinate (C) at ($.8*(3.9,6)$); \coordinate (Z) at ($.8*(4.1,2.6)$); \clip[] (-.7,-.5) rectangle (7,5.5); \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (A)--(Z)--(B)--(Z)--(C); \coordinate (AB) at ($(A)!.5!(B)$); \coordinate (AC) at ($(A)!.5!(C)$); \coordinate (AZ) at ($(A)!.5!(Z)$); \coordinate (BZ) at ($(B)!.5!(Z)$); \coordinate (ABx) at ($(AB)!1cm!90:(B)$); \coordinate (ACx) at ($(AC)!1cm!90:(C)$); \coordinate (AZx) at ($(AZ)!1cm!90:(Z)$); \coordinate (BZx) at ($(BZ)!1cm!90:(Z)$); \coordinate (ABZ) at (intersection of AZ--AZx and BZ--BZx); \coordinate (ABC) at (intersection of AC--ACx and AB--ABx); \node (Circ) [draw,red] at (ABZ) [circle through={(Z)}] {}; \node (Circ2) [draw] at (ABC) [circle through={(A)}] {}; \draw[thick,red] (C) circle(.1); \draw[style=thick] +(3.2,1) node {$\sigma$} +(C) node [anchor=south] {$v$} +(Z) node [anchor=north] {$c$}; \end{tikzpicture} \caption{The circumsphere of $\sigma$ cannot contain $v$.} \label{fig:stellar} \end{figure} \begin{proof} If the circumsphere of a new full-dimensional simplex $\sigma$ contained the vertex $v$ that does not lie in $\sigma$, then it would contain all points of the original simplex, and hence it would contain the new vertex $c$ in its interior. See Figure~\ref{fig:stellar}. \end{proof} \begin{definition}[dual tree of a stellar subdivision]\label{def:dualtree_triangulation} The \emph{rooted tree} $T_{\mathcal T}$ of a multiple stellar subdivision $\mathcal T$ of a $(d-1)$-simplex $\sigma$, for $d\ge3$, has one node for every vertex that is inserted by a single stellar subdivision, or equivalently for every $(d-1)$-face that it destroys. The root node $r$ corresponds to the (first) single stellar subdivision of $\sigma$. 
The node $v'$ is a child of the node $v$ if it corresponds to a single stellar subdivision of a $(d-1)$-face that was created in the single stellar subdivision corresponding to $v$. \end{definition} Figure~\ref{Pic:DualTree2} shows a multiple stellar subdivision of a $2$-simplex and the corresponding dual tree. \begin{example}\label{example:simplex_subdivided} Figure~\ref{Pic:LowerBound} illustrates the construction of a multiple stellar subdivision of a $(d-1)$-simplex (generalizing Lemma~\ref{lem:stellar}) where the dual tree is a path. \end{example} \begin{figure}[htb!] \centering \begin{tikzpicture}[scale=1] \clip[] (-.5,-.5) rectangle (7,5.5); \coordinate (A) at (0,0); \coordinate (B) at (5,0); \coordinate (C) at (2.5,4); \coordinate (Z) at (2.1,1.5); \coordinate (X1) at ($(C)!.2!(Z)$); \coordinate (X2) at ($(C)!.4!(Z)$); \coordinate (X3) at ($(C)!.7!(Z)$); \coordinate (X4) at ($(C)!.9!(Z)$); \coordinate (X5) at ($(C)!1.1!(Z)$); \coordinate (X6) at ($(C)!1.2!(Z)$); \coordinate (X7) at ($(C)!1.3!(Z)$); \coordinate (X8) at ($(C)!1.4!(Z)$); \draw[fill,pink] (B)--(X2)--(X3)--cycle; \draw[fill,pink] (A)--(B)--(X5)--cycle; \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (C)--(X1)--(X2)--(X3)--(X4)--(X5); \draw[] (A)--(X1)--(B); \draw[] (A)--(X2)--(B); \draw[] (A)--(X3)--(B); \draw[] (A)--(X4)--(B); \draw[] (A)--(X5)--(B); \draw[style=dashed] ($(C)!0!(Z)$)--($(C)!1.8!(Z)$); \coordinate (AX5) at ($(A)!.5!(X5)$); \coordinate (BX5) at ($(B)!.5!(X5)$); \coordinate (AX5x) at ($(AX5)!1cm!90:(X5)$); \coordinate (BX5x) at ($(BX5)!1cm!90:(X5)$); \coordinate (F) at (intersection of AX5--AX5x and BX5--BX5x); \node (Circ) [draw,red] at (F) [circle through={(X5)}] {}; \coordinate (X23) at ($(X2)!.5!(X3)$); \coordinate (BX3) at ($(B)!.5!(X3)$); \coordinate (X23x) at ($(X23)!1cm!90:(X3)$); \coordinate (BX3x) at ($(BX3)!1cm!90:(B)$); \coordinate (F) at (intersection of X23--X23x and BX3--BX3x); \node (Circ) [draw,red] at (F) [circle through={(X3)}] {}; \draw[fill] 
(A) circle (2pt); \draw[fill] (B) circle (2pt); \draw[fill] (C) circle (2pt); \draw[fill] (X1) circle (2pt); \draw[fill] (X2) circle (2pt); \draw[fill] (X3) circle (2pt); \draw[fill] (X4) circle (2pt); \draw[fill] (X5) circle (2pt); \draw[] (X6) circle (2pt); \draw[] (X7) circle (2pt); \draw[] (X8) circle (2pt); \end{tikzpicture} \caption{Take a full dimensional simplex and a ray from a vertex to the interior of the simplex. Apply $n$ single stellar subdivisions to the complex by introducing new vertices on the ray. The result is a Delaunay triangulation.} \label{Pic:LowerBound} \end{figure} \begin{corollary}[a stellar subdivision can be undone]\label{cor:can_be_undone} Let the triangulation $\mathcal T'$ be obtained from a triangulation $\mathcal T$ of an affinely spanning point set $V\subset\R^{d-1}$ by a single stellar subdivision. If $\mathcal T'$ is a Delaunay triangulation, then so is $\mathcal T$. \end{corollary} \begin{proof} A stellar subdivision does not destroy a $(d-2)$-face, thus among the supporting spheres for $(d-2)$-faces in $\mathcal T'$ we have the supporting spheres for all $(d-2)$-faces in $\mathcal T$. \end{proof} We end this section with a first example of a triangulation that \emph{cannot} be realized as a Delaunay triangulation. The last sentence of the following lemma yields a more precise statement that will turn out to be crucially important later. \begin{lemma}\label{lemma:doublestellartriangle} Apply a single stellar subdivision by a point $x$ to a triangle $\conv\{A,B,C\}$ and then single stellar subdivisions by points $a,b,c$ to the three new triangles. The resulting triangulation is not a Delaunay triangulation: At least one of the three edges $Ax$, $Bx$ and $Cx$ violates the locally Delaunay criterion. \end{lemma} \begin{proof} The nine angles at $a,b$ and $c$ sum up to $6\pi$. The three angles that lie in triangles that contain a boundary edge are each smaller than $\pi$, so the remaining six angles must sum to more than $3\pi$. 
However, each of the three edges $Ax$, $Bx$ and $Cx$ is locally Delaunay if and only if its two opposite angles sum to less than $\pi$. Hence not all three edges can be locally Delaunay. See Figure~\ref{Pic:Bound2D}. \end{proof} \begin{figure}[htb!] \centering \begin{tikzpicture} \coordinate (A) at (0,0); \coordinate (B) at (5,0); \coordinate (C) at (2.5,4); \coordinate (a) at (3.8,1.4); \coordinate (b) at (1.4,1.8); \coordinate (c) at (2.6,0.5); \coordinate (Z) at (2.5,1.5); \draw[style=thick] +(A) node [anchor=east] {$A$} +(B) node [anchor=west] {$B$} +(C) node [anchor=south] {$C$} +(a) node [anchor=north east] {$a$} +(b) node [anchor=north] {$b$} +(c) node [anchor=south west] {$c$} +(Z) node [anchor=south west] {$x$}; \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (A) --(Z)--(B)--(Z)--(C); \draw[] (A)--(c)--(B)--(a)--(C)--(b)-- cycle; \draw[ultra thick,red] (Z) -- (C); \coordinate (CZ) at ($(C)!.5!(Z)$); \coordinate (CZx) at ($(CZ)!3mm!90:(Z)$); \node (Circ) [draw,color=red,style=dashed] at (CZx) [circle through={(Z)}] {}; \draw[color=red,thick]($(b)!.3cm!(Z)$) arc(5:71:.3cm); \draw[color=red,thick]($(a)!.3cm!(Z)$) arc(170:113:.3cm); \draw[color=red,thick]($(c)!.3cm!(A)$) arc(190:349:.3cm); \draw[] (a)--(Z)--(b)--(Z)--(c); \end{tikzpicture} \caption{In any triangulation of this combinatorial type at least one of the three edges $Ax$, $Bx$ and $Cx$ is not locally Delaunay.} \label{Pic:Bound2D} \end{figure} \subsection{Stereographic projection}\label{subsec:stereographic} The \emph{stereographic projection} \[ \pi: S^{d-1}\setminus\{N\}\ \ \longrightarrow\ \ \R^{d-1}\times\{0\} \] is the bijective map that projects every point $x\neq N$ of the sphere $S^{d-1}$ along the ray through $x$ starting in the north pole $N$ to the equator hyperplane of the sphere, which we identify with $\R^{d-1}$. 
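Following the ray from $N$ until it meets the equator hyperplane gives the standard closed form $\pi(x)=\frac{1}{1-x_d}(x_1,\dots,x_{d-1})$ for points of the unit sphere with north pole $N=e_d$. A small numeric sketch (helper names are our own) together with the standard inverse map:

```python
# Stereographic projection from the north pole N = (0, ..., 0, 1) of the
# unit sphere and its inverse; the closed forms below are standard.

def stereographic(x):
    """Project a point x of the unit sphere (x != N) to R^{d-1} x {0}."""
    *head, last = x
    return [xi / (1.0 - last) for xi in head]

def inverse_stereographic(p):
    """Lift a point p of R^{d-1} back to the unit sphere minus N."""
    s = sum(pi * pi for pi in p)
    return [2.0 * pi / (s + 1.0) for pi in p] + [(s - 1.0) / (s + 1.0)]

x = [0.6, 0.0, -0.8]        # a point on the unit sphere in R^3
p = stereographic(x)         # its image on the equator plane
assert max(abs(a - b) for a, b in zip(inverse_stereographic(p), x)) < 1e-12
print(p)  # approximately [1/3, 0.0]
```
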
The inversion $x \mapsto N+2\frac{x-N}{\|x-N\|^2}$ in the sphere with center~$N$ and radius $\sqrt2$ extends this map to a bijection $\widehat\pi:\R^d\cup\{\infty\}\rightarrow\R^d\cup\{\infty\}$. This sphere inversion is a Möbius transformation: It maps spheres to spheres, where spheres through $N$ are mapped to hyperplanes (that is, spheres through $\infty$). The stereographic projection identifies (the vertex sets of) inscribed $d$-polytopes that have a vertex at the north pole with (the vertex sets of) Delaunay subdivisions in $\R^{d-1}$. \begin{proposition}[inscribed polytopes and Delaunay subdivisions]\label{prop:Stereographic} Let $S^{d-1}$ denote the standard unit sphere in $\R^d$ and let $P$ be an inscribed $d$-polytope whose vertex set $V\ensuremath{\mathaccent\cdot\cup}\{N\}\subset S^{d-1}$ includes the north pole $N$ of $S^{d-1}$. Then $\pi(V)$ is the vertex set of a Delaunay subdivision in $\R^{d-1}$ whose $(d-1)$-faces correspond to the facets of $P$ that do not contain the vertex $N$. Conversely, if $W$ is a finite set that affinely spans $\R^{d-1}\times\{0\}$, then $\pi^{-1}(W)\cup\{N\}$ is the vertex set of an inscribed $d$-polytope $P$ whose facets that miss $N$ are given by the $(d-1)$-faces of the Delaunay subdivision of~$W$, and the facets of $P$ that contain the north pole $N$ are exactly the convex hulls $\conv(\pi^{-1}(F\cap W)\cup\{N\})$ given by the facets $F$ of $\overline P:=\conv(W)$. \end{proposition} In the following, we will need this proposition specialized to the simplicial case. \begin{proposition}[inscribed simplicial polytopes and Delaunay triangulations]\label{prop:Stereographic_simplicial} Let $S^{d-1}$ denote the standard unit sphere in $\R^d$ and let $P$ be an inscribed simplicial $d$-polytope whose vertex set $V\ensuremath{\mathaccent\cdot\cup}\{N\}\subset S^{d-1}$ includes the north pole $N$ of $S^{d-1}$. 
Then $\pi(V)$ is the vertex set of a Delaunay triangulation in $\R^{d-1}$ whose $(d-1)$-simplices correspond to the facets of $P$ that do not contain the vertex $N$. Conversely, if $W\subset \R^{d-1}\times\{0\}$ is a finite set with a Delaunay triangulation $\mathcal T$ whose convex hull $\overline P:=\conv(W)$ is a simplicial $(d-1)$-polytope that has no points of $W$ on the boundary except for the vertices, then $\pi^{-1}(W)\cup\{N\}$ is the vertex set of a simplicial inscribed $d$-polytope $P=\conv(\pi^{-1}(W)\cup\{N\})$ whose facets that miss $N$ are given by the $(d-1)$-simplices of the Delaunay triangulation $\mathcal T$, and the facets of $P$ that contain the north pole $N$ are exactly the convex hulls $\conv(\pi^{-1}(F\cap W)\cup\{N\})$ given by the facets $F$ of $\overline P$. \end{proposition} As a corollary to this we obtain that the Main Theorems \ref{mainthm:polytopes} (for polytopes) and~\ref{mainthm:Delaunay} (for Delaunay triangulations) are equivalent. We will prove Theorem~\ref{mainthm:Delaunay} below. We also obtain that the Lower Bound Theorem is tight for inscribable polytopes, by applying Proposition~\ref{prop:Stereographic_simplicial} to Example~\ref{example:simplex_subdivided}: \begin{corollary}\label{cor:LBTtight} For all $d\ge2$, $n\ge0$, there is an inscribed stacked $d$-polytope on $d+1+n$ vertices. \end{corollary} We end this section with a proof of a very special (but crucial) case of Theorem~\ref{mainthm:Delaunay}. \begin{proof}[Proof of the “only if” part of Theorem~\ref{mainthm:Delaunay} for $d-1=2$.] By Lemma \ref{lemma:localDelaunay} and Corollary \ref{cor:can_be_undone} it suffices to show that the configuration of Figure~\ref{Pic:Bound2D} cannot be realized as a Delaunay triangulation. (This was proved in Lemma~\ref{lemma:doublestellartriangle}. 
Via Proposition~\ref{prop:Stereographic_simplicial} it is equivalent to the fact that the polytope obtained by stacking onto all four facets of a tetrahedron is not inscribable, which was first proved by Steinitz \cite{Steinitz} \cite[Sect.~13.5]{Gruenbaum}.) \end{proof} \subsection{Some inscribable polytopes}\label{sec:f-inscribable} \subsubsection{3-dimensional polytopes}\label{subsec:f-inscribable_3poly} \begin{proposition} All $f$-vectors of $3$-polytopes occur for inscribable $3$-polytopes. \end{proposition} \begin{proof} According to Steinitz \cite{Steinitz} \cite[Sect.~10.3]{Gruenbaum}, the set of all $f$-vectors of convex $3$-polytopes is \[ \{(f_0,f_1,f_2)\in\Z^3: f_2\le 2f_0-4,\ f_0\le 2f_2-4,\ f_1=f_0+f_2-2\}. \] In Figure~\ref{Pic:Wedge} one can see three types of inscribed $3$-polytopes. These can be described as wedges over an $n$-gon that have been stacked $k$ times. By performing these constructions for arbitrary $n\ge3$ and $k\ge0$ we get three families of inscribable $3$-polytopes. Their $f$-vectors are \begin{eqnarray*} f_{\textrm{left}} &=&(2n-2+k, 3n-3+3k,n+1+2k)=(f_0,f_1,\tfrac{f_0}2+2+\tfrac32k )\\ f_{\textrm{middle}}&=&(2n-1+k, 3n-1+3k,n+2+2k)=(f_0,f_1,\tfrac{f_0}2+2+\tfrac32k+\tfrac12)\\ f_{\textrm{right}} &=&(2n+k,\hspace{7mm}3n+1+3k,n+3+2k)=(f_0,f_1,\tfrac{f_0}2+2+\tfrac32k+1 ) \end{eqnarray*} \begin{figure}[htb!] 
\centering \begin{tikzpicture}[scale=.75] \begin{scope}[yshift=-.16cm,xshift=3.1cm,rotate=-15,xshift=-3cm,yscale=.2,xscale=.97] \coordinate (Z) at (25:3cm); \coordinate (A) at (-25:3cm); \coordinate (B) at (280:3cm); \coordinate (C) at (250:3cm); \coordinate (D) at (225:3cm); \coordinate (E) at (180:3cm); \coordinate (F) at (135:3cm); \coordinate (G) at (90:3cm); \coordinate (H) at (60:3cm); \end{scope} \begin{scope}[yscale=.2] \coordinate (R1) at (-45:3cm); \coordinate (R2) at (-56:3cm); \coordinate (R3) at (-67:3cm); \end{scope} \coordinate (b) at ($(B)+(0,-1.1)$); \coordinate (c) at ($(C)+(0,-1.85)$); \coordinate (d) at ($(D)+(0,-2.35)$); \coordinate (e) at ($(E)+(0,-2.85)$); \coordinate (f) at ($(F)+(0,-2.23)$); \coordinate (g) at ($(G)+(0,-1.2)$); \coordinate (h) at ($(H)+(0,-.5)$); \draw[pink,fill] (B)--(C)--(c)--(b) ; \draw[] (0,0) circle (3); \draw[dotted,red,xshift=3cm,rotate=15] (-2.9,.08) ellipse (2.9 and .6); \draw[fill,color=black!10] (B)--(R3)--(b); \draw[fill,color=black!5] (A)--(B)--(R3)--(R2)--(R1); \draw[fill,color=black!20] (b)--(R3)--(R2)--(R1)--(A) ; \draw[ultra thick] (A)--(b)--(c)--(d)--(e) (B)--(b) (C)--(c) (D)--(d) (E)--(e); \draw[ultra thick] (Z)--(A)--(B)--(C)--(D)--(E)--(F)--(G)--(H)--cycle; \draw[dashed] (e)--(f)--(g)--(h)--(Z) (F)--(f) (G)--(g) (H)--(h); \draw[thick] (A)--(R1)--(R2)--(R3) (B)--(R1)--(b) (B)--(R2)--(b) (B)--(R3)--(b); \draw[dotted,red,xshift=3cm,rotate=-15] (-2.9,-.1) ellipse (2.9 and .6); \draw[thick,red] (-.35,-.6) ellipse (1.03 and 1.08); \draw[dotted] (0,0) ellipse (3 and .6); \draw[draw=black,fill=white] +(R1) circle (.1) +(R2) circle (.1) +(R3) circle (.1); \draw[red,fill] +(B) circle(.1) +(C) circle(.1) +(c)circle(.1) +(b)circle(.1) ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=.75] \begin{scope}[xshift=3.1cm,rotate=-15,xshift=-3cm,yscale=.2,xscale=.95] \coordinate (A) at (0:3cm); \coordinate (B) at (280:3cm); \coordinate (C) at (250:3cm); \coordinate (D) at (225:3cm); \coordinate (E) at (180:3cm); 
\coordinate (F) at (135:3cm); \coordinate (G) at (90:3cm); \coordinate (H) at (55:3cm); \end{scope} \begin{scope}[yscale=.2] \coordinate (R1) at (-30:3cm); \coordinate (R2) at (-52:3cm); \coordinate (R3) at (-68:3cm); \end{scope} \coordinate (b) at ($(B)+(0,-1.35)$); \coordinate (c) at ($(C)+(0,-2.05)$); \coordinate (d) at ($(D)+(0,-2.55)$); \coordinate (e) at ($(E)+(0,-3.1)$); \coordinate (f) at ($(F)+(0,-2.45)$); \coordinate (g) at ($(G)+(0,-1.45)$); \coordinate (h) at ($(H)+(0,-.65)$); \draw[pink,fill] (B)--(C)--(c)--(b) ; \draw[] (0,0) circle (3); \draw[dotted,red,xshift=3cm,rotate=15] (-2.9,0) ellipse (2.9 and .6); \draw[fill,color=black!10] (B)--(R3)--(b); \draw[fill,color=black!5] (A)--(B)--(R3)--(R2)--(R1); \draw[fill,color=black!20] (b)--(R3)--(R2)--(R1)--(b) ; \draw[ultra thick] (b)--(c)--(d)--(e) (B)--(b) (C)--(c) (D)--(d) (E)--(e); \draw[ultra thick] (A)--(B)--(C)--(D)--(E)--(F)--(G)--(H)--cycle; \draw[dashed] (e)--(f)--(g)--(h)--(A) (F)--(f) (G)--(g) (H)--(h); \draw[thick] (A)--(R1)--(R2)--(R3) (B)--(R1)--(b) (B)--(R2)--(b) (B)--(R3)--(b); \draw[dotted,red,xshift=3cm,rotate=-15] (-2.9,0) ellipse (2.9 and .6); \draw[thick,red] (-.39,-.55) ellipse (1.1 and 1.14); \draw[dotted] (0,0) ellipse (3 and .6); \draw[dashed] (A)--(b); \draw[draw=black,fill=white] +(R1) circle (.1) +(R2) circle (.1) +(R3) circle (.1); \draw[red,fill] +(B) circle(.1) +(C) circle(.1) +(c)circle(.1) +(b)circle(.1) ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=.75] \begin{scope}[xshift=3.1cm,rotate=-15,xshift=-3cm,yscale=.2,xscale=.95] \coordinate (A) at (0:3cm); \coordinate (B) at (280:3cm); \coordinate (C) at (250:3cm); \coordinate (D) at (225:3cm); \coordinate (E) at (180:3cm); \coordinate (F) at (135:3cm); \coordinate (G) at (90:3cm); \coordinate (H) at (55:3cm); \end{scope} \begin{scope}[yscale=.2] \coordinate (R1) at (-25:3cm); \coordinate (R2) at (-46:3cm); \coordinate (R3) at (-77:3cm); \end{scope} \coordinate (b) at ($(B)+(0,-1.35)$); \coordinate (c) at 
($(C)+(0,-2.05)$); \coordinate (d) at ($(D)+(0,-2.55)$); \coordinate (e) at ($(E)+(0,-3.1)$); \coordinate (f) at ($(F)+(0,-2.45)$); \coordinate (g) at ($(G)+(0,-1.45)$); \coordinate (h) at ($(H)+(0,-.65)$); \draw[pink,fill] (B)--(C)--(c)--(b)--(R3) ; \draw[] (0,0) circle (3); \draw[dotted,red,xshift=3cm,rotate=15] (-2.9,0) ellipse (2.9 and .6); \draw[fill,color=black!5] (A)--(B)--(R3)--(R2)--(R1); \draw[fill,color=black!20] (b)--(R3)--(R2)--(R1)--(b) ; \draw[ultra thick] (b)--(c)--(d)--(e) (C)--(c) (D)--(d) (E)--(e); \draw[ultra thick] (A)--(B)--(C)--(D)--(E)--(F)--(G)--(H)--cycle; \draw[dashed] (e)--(f)--(g)--(h)--(A) (F)--(f) (G)--(g) (H)--(h); \draw[dashed] (A)--(b); \draw[thick] (A)--(R1)--(R2)--(R3) (B)--(R1)--(b) (B)--(R2)--(b) (B)--(R3)--(b); \draw[dotted,red,xshift=3cm,rotate=-15] (-2.9,0) ellipse (2.9 and .6); \draw[thick,red] (-.39,-.55) ellipse (1.1 and 1.14); \draw[dotted] (0,0) ellipse (3 and .6); \draw[draw=black,fill=white] +(R1) circle (.1) +(R2) circle (.1); \draw[red,fill] +(B) circle(.1) +(C) circle(.1) +(c)circle(.1) +(b)circle(.1) ; \draw[draw=black,fill=red](R3) circle (.1); \end{tikzpicture} \caption{Three constructions for inscribed $3$-polytopes that produce all possible $f$-vectors.} \label{Pic:Wedge} \end{figure} \noindent For $k=0$ the first type produces the $f$-vectors of simple polytopes; the first two types provide all $f$-vectors with the minimal number of facets for any given number of vertices. For $n=3$, the first type produces inscribable stacked $3$-polytopes with arbitrary number of vertices. These are simplicial and hence give the maximal number of facets for any given number of vertices. It is easy to see that for any number of vertices all permissible numbers of facets can be obtained by choosing the right $n$, $k$, and type. 
\end{proof} \subsubsection{Neighborly polytopes}\label{subsec:f-inscribable_cyclic} \begin{proposition} The $d$-dimensional cyclic polytope $C_d(n)$ with $n$ vertices is inscribable for all $d\ge2$ and $n\ge d+1$. Thus all $f$-vectors of neighborly polytopes occur for inscribable polytopes. \end{proposition} We will sketch three simple proofs for this. \begin{proof}[Proof~1] The \emph{standard moment curve} in $\R^{d-1}$ is given by \[\gamma(t):=(t,t^2,\dots,t^{d-1}).\] This is a curve of order $d$ by Vandermonde's determinant formula. If the sequence of parameters $t_1<t_2<\dots< t_{n-1}$ grows fast enough, then for each $i>d$ the point $\gamma(t_i)$ lies outside all circumspheres of the Delaunay triangulation of $\{\gamma(t_1),\dots,\gamma(t_{i-1})\}$. Thus the Delaunay triangulation of $\{\gamma(t_1),\dots,\gamma(t_i)\}$ is obtained by induction on $i$, where for suitably large $t_i$ the new facets are given by the ``upper'' facets of $\conv\{\gamma(t_1),\dots,\gamma(t_{i-1})\}$ joined to the new vertex $\gamma(t_i)$. One checks, using Gale's evenness criterion, that thus the facets of the Delaunay triangulation of the point set $\{\gamma(t_1),\gamma(t_2),\dots,\gamma(t_{n-1})\}$ correspond exactly to the facets of $C_d(n)$ that do not contain the last vertex. The proof is finished via Proposition~\ref{prop:Stereographic_simplicial}. \end{proof} \begin{proof}[Proof~2 {\rm(Seidel \cite[p.~521]{Seidel})}] The \emph{spherical moment curve} is given by \[c:\R^+\rightarrow\R^d, \qquad c(t):=\frac1{1+t^2+t^4+\dots+t^{2(d-1)}}(1,t,t^2,\dots,t^{d-1}). \] This curve lies on the image of the hyperplane $x_1=1$ under inversion in the origin, that is, on the sphere with center $\frac12e_1$ and radius $\frac12$. Using Descartes' rule of signs one gets that this curve (restricted to the domain $t>0$!) is of order $d$, and thus the convex hull of any $n$ distinct points on this curve is an inscribed realization of~$C_d(n)$. 
\end{proof} \begin{proof}[Proof~3 {\rm(Grünbaum \cite[p.~67]{Gruenbaum})}] For even $d\ge2$, we consider the \emph{trigonometric moment curve} \[c:(-\pi,\pi]\rightarrow \R^{d}, \qquad c(t):=\left(\sin(t),\cos(t),~\sin(2t),\cos(2t),~\dots,~\sin(\tfrac d2t),\cos(\tfrac d2t)\right)\] Obviously its image lies on a sphere. We verify that this is a curve of order $d$ using the fact that any nonzero trigonometric polynomial of degree $\tfrac d2$ has at most $d$ zeros per period (see e.g.\ Polya \& Szeg\H{o} \cite[pp.~72-73]{PolyaSzegoeII}). Thus we get that the convex hull of any $n$ points on this curve yields an inscribed realization of $C_d(n)$. (Compare \cite[pp.~75-76]{Ziegler}.) For odd $d\ge3$, we check using Gale's evenness criterion that any ``vertex splitting'' on $C_{d-1}(n-1)$ results in a realization of $C_d(n)$; this yields inscribed realizations of $C_d(n)$ where all vertices except for those labeled $1$ and $n$ lie on a hyperplane. (See e.g.\ Seidel \cite[p.~528]{Seidel}, where the ``vertex splitting'' is called ``pseudo-bipyramid''.) \end{proof} \section{Stacked polytopes of dual degree at most 3 are inscribable}\label{sec:sufficiency} The following proposition establishes the “if” part of Main Theorem \ref{mainthm:Delaunay} (and thus also of Main Theorem \ref{mainthm:polytopes}). \begin{proposition}\label{prop:2possible} Let $\mathcal T$ be a Delaunay triangulation in $\R^{d-1}$, let $c$ be an interior vertex of degree $d$. Then one can perform single stellar subdivisions on two arbitrary $(d-1)$-faces $F_1$ and~$F_2$ of $\mathcal T$ that contain $c$ such that the resulting triangulation is again Delaunay. \end{proposition} \begin{proof} Let $F_1,\dots,F_d$ be the $(d-1)$-faces of $\mathcal T$ that contain $c$, and let $\mathcal R$ be the set of all other $(d-1)$-faces of $\mathcal T$. Let $v_1,\dots,v_d$ be the vertices of $F_1,\dots,F_d$ such that $v_i$ is not contained in $F_i$. The circumspheres of $F_3,\dots,F_d$ contain $c$. 
The intersection of the tangent hyperplanes to these $d-2$ spheres in the point $c$ contains a line $t$ through $c$ that lies tangent to all those spheres. Let $U$ be a small open ball around $c$ that, like $c$, lies outside the circumspheres of all cells in $\mathcal R$. Then $U\cap t\setminus \{c\}$ consists of two disjoint, open line segments. Choose two points, $x_1,x_2$, one in each line segment, and use them for single stellar subdivisions of $F_1$ resp.~$F_2$ (cf.~Figure~\ref{Pic:Construct2}). \begin{figure}[htb!] \centering \begin{tikzpicture} \coordinate (A) at ($.8*(0,0)$); \coordinate (B) at ($.8*(7.7,.5)$); \coordinate (C) at ($.8*(3.9,6)$); \coordinate (Z) at ($.8*(4.1,2.6)$); \clip[] (-.5,-.5) rectangle (7,5.2); \fill[pink] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[white,style=fill] (Z) circle (1.1cm); \draw[style=dotted] (Z) circle (1.1cm); \draw[style=thick] (A)--(Z)--(B)--(Z)--(C); \coordinate (AZ) at ($(A)!.5!(Z)$); \coordinate (BZ) at ($(B)!.5!(Z)$); \coordinate (AZx) at ($(AZ)!1cm!90:(Z)$); \coordinate (BZx) at ($(BZ)!1cm!90:(Z)$); \coordinate (F) at (intersection of AZ--AZx and BZ--BZx); \node (Circ) [draw] at (F) [circle through={(Z)}] {}; \coordinate (V2) at ($(Z)!.9cm!90:(F)$); \coordinate (V1) at ($(Z)!-.9cm!90:(F)$); \draw[style=dashed] ($(V1)!-8cm!(V2)$)--($(V1)!8cm!(V2)$); \draw[style=fill] (V1) circle (.08cm); \draw[style=fill] (V2) circle (.08cm); \draw[style=thick] +(3.2,.5) node [anchor=south east] {$F_3$} +(6.5,-.5) node [anchor=south east] {$C$} +($(V1)!4cm!(V2)$) node [above] {$t$} +(Z) node [below=9pt] {$U$} +(A) node [anchor=south east] {$v_1$} +(B) node [anchor=south west] {$v_2$} +(C) node [anchor=south] {$v_3$} +(V1) node [anchor=south west] {$x_1$} +(V2) node [anchor=south east] {$x_2$} +(Z) node [anchor=north] {$c$}; \end{tikzpicture} \caption{Choice of the subdivision points.} \label{Pic:Construct2} \end{figure} We claim that the resulting triangulation $\mathcal T'$ is again 
Delaunay. First we check that $x_1$ and $x_2$ lie inside $F_1$ resp.~$F_2$: They lie outside all facets $F_3,\dots,F_d$ but inside $\conv\{v_1,\dots,v_d\}$, hence they lie in ${F_1}\cup{F_2}$. Because $t$ contains $c$, which is a vertex of $F_1$ and $F_2$, only one component of $t\setminus\{c\}$ can be contained in $F_1$ and only one can be contained in $F_2$. Hence we can assume that $x_1$ lies in the relative interior of $F_1$ and $x_2$ lies in the relative interior of $F_2$. Now we need to show that all interior $(d-2)$-faces of $\mathcal T'$ are locally Delaunay. The cells in $\mathcal R\cup\{F_3,\dots,F_d\}$ lie in both triangulations $\mathcal T$ and $\mathcal T'$. They have empty circumspheres in $\mathcal T$ by assumption, and in $\mathcal T'$ by construction. Let $\mathcal I$ be the set of faces of $\mathcal T'$ that are not faces of~$\mathcal T$. It remains to show that all $(d-2)$-faces in $\mathcal T'$ that are contained in two facets of $\mathcal I$ are locally Delaunay. The first type lies in two $(d-1)$-faces that both contain $x_1$ or both contain $x_2$. In this case the locally Delaunay condition is given by Lemma~\ref{lem:stellar}. The second type lies in a $(d-1)$-face that contains $x_1$ and another $(d-1)$-face that contains $x_2$. There is only one such $(d-2)$-face, namely the intersection of $F_1$ and $F_2$; we call this face $K$. The circumsphere of $\conv(K\cup\{x_2\})$ does not contain $x_1$, because $x_1,c,x_2$ are collinear and $c$ lies between $x_1$ and $x_2$ (see Figure~\ref{Pic:Construct3}). Hence $K$ is also locally Delaunay, and thus all interior $(d-2)$-faces of $\mathcal T'$ are locally Delaunay. Hence $\mathcal T'$ is a Delaunay triangulation. \end{proof} \begin{figure}[htb!]
\centering \begin{tikzpicture} \coordinate (A) at ($.8*(0,0)$); \coordinate (B) at ($.8*(7.7,.5)$); \coordinate (C) at ($.8*(3.9,6)$); \coordinate (Z) at ($.8*(4.1,2.6)$); \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (A)--(Z)--(B); \draw[style=thick,red] (Z)--(C); \coordinate (AZ) at ($(A)!.5!(Z)$); \coordinate (BZ) at ($(B)!.5!(Z)$); \coordinate (AZx) at ($(AZ)!1cm!90:(Z)$); \coordinate (BZx) at ($(BZ)!1cm!90:(Z)$); \coordinate (F) at (intersection of AZ--AZx and BZ--BZx); \coordinate (V2) at ($(Z)!.7cm!90:(F)$); \coordinate (V1) at ($(Z)!-.7cm!90:(F)$); \coordinate (CZ) at ($(C)!.5!(Z)$); \coordinate (V2Z) at ($(V2)!.5!(Z)$); \coordinate (CZx) at ($(CZ)!1cm!90:(Z)$); \coordinate (V2Zx) at ($(V2Z)!1cm!90:(Z)$); \coordinate (M) at (intersection of CZ--CZx and V2Z--V2Zx); \node [draw,red,dashed] at (M) [circle through={(Z)}] {}; \draw[style=dashed] ($(V1)!-2cm!(V2)$)--($(V1)!3cm!(V2)$); \draw[] (A)--(V1)--(C)--(V2)--(B)--(V2)--(Z)--(V1); \draw[style=fill] (V1) circle (.08cm); \draw[style=fill] (V2) circle (.08cm); \draw[style=fill] (Z) circle (.08cm); \draw[] +($(V1)!3cm!(V2)$) node [anchor=south] {$t$} +(A) node [anchor=east] {$v_1$} +(B) node [anchor=west] {$v_2$} +(C) node [anchor=south] {$v_3$} +(V1) node [anchor=south east] {$x_1$} +($(V2)+(.15,.3)$) node [] {$x_2$} +(3.45,2.8) node [] {$K$} +(Z) node [anchor=north] {$c$}; \end{tikzpicture} \caption{The circumsphere does not contain $x_1$ because $x_1$, $c$ and $x_2$ are collinear.} \label{Pic:Construct3} \end{figure} Using this result, we also obtain examples of stacked polytopes that go beyond the rather special construction given by Corollary~\ref{cor:LBTtight}. \begin{corollary}[inscribable stacked polytopes with bounded vertex degree]\label{prop:BoundDeg} For all $d\ge2$ and $n\ge0$ there exists a stacked inscribed polytope of dimension $d$ that has $d+1+n$ vertices such that no vertex has degree more than~$2d$. 
\end{corollary} \begin{proof} We may assume $d>2$ (where the inductive steps discussed in the following do not destroy edges). We start with an arbitrary $d$-simplex, which is inscribed. All its vertices are simple; we label them $1,\dots,d+1$. Now for $k=1,\dots,n$ we refer to Proposition~\ref{prop:2possible} in order to stack a new vertex $d+1+k$ onto the facet $\{k+1,k+2,\dots,k+d\}$ at the simple vertex $d+1+(k-1)=d+k$. This in particular destroys the facet $\{k+1,k+2,\dots,k+d\}$ (which contains the vertex labeled $k+1$, which will not be touched again) and creates the new facet $\{k+2,\dots,k+1+d\}$, adjacent to the new simple vertex $d+1+k$. In the stacked inscribed polytope created this way, vertices $i$ and $j$ are adjacent exactly if $|i-j| \le d$. \end{proof} \section{Three stellar subdivisions are impossible}\label{sec:necessity} The following establishes the “only if” part of Main Theorem \ref{mainthm:Delaunay} (and thus also of Main Theorem \ref{mainthm:polytopes}): If multiple stellar subdivisions are performed on three facets $F_1,F_2$ and $F_3$ at a simple interior vertex of an arbitrary triangulation, then the resulting triangulation is not a Delaunay triangulation. For this, it suffices to consider the complex $\Delta$ that arises by a single stellar subdivision of a $(d-1)$-simplex $\sigma\subset\R^{d-1}$ using an {arbitrary} interior point $c\in\sigma$. This complex $\Delta$ with $(d-1)$-faces $F_1,\dots,F_d$ is Delaunay by Lemma~\ref{lem:stellar}. Now for $d\ge3$ we apply single stellar subdivisions to the cells $F_1,F_2,F_3$ by arbitrary interior points $r_1,r_2,r_3$. Our claim is that the resulting triangulation $\mathcal T$ cannot be Delaunay. In order to prove this claim, we first construct a point $x$ that depends only on~$\Delta$. Its position with respect to $\Delta$ is established in Lemma~\ref{lemma:1}. Then Lemma~\ref{lemma:2} records the properties of~$x$ with respect to the subdivision $\mathcal T$. 
Finally, we establish in Proposition~\ref{prop:3impossible} that $\mathcal T$ cannot be Delaunay: For that we use an inversion in a sphere centered at $x$ in order to simplify the situation so that a projection argument reduces the claim to the case $d=3$, which was established in Lemma~\ref{lemma:doublestellartriangle}. \medskip Let $\sigma=\conv\{v_1,\dots,v_d\}$ be a $(d-1)$-simplex in~$\R^{d-1}$, let $c\in\sigma$ be an interior point, and let $\Delta$ be the single stellar subdivision of $\sigma$ by $c$, with $(d-1)$-faces $F_1,\dots,F_d$, labeled such that $v_i\notin F_i$. For some $k$ $(1\le k<d)$ let $\mathcal F:=\{F_{k+1},\dots,F_d\}$ and $\mathcal G:=\{F_1,\dots,F_k\}$. Then $V_{\mathcal F}:=\{v_{1},\dots,v_k\}$ is the set of vertices of~$\sigma$ that lie in all cells of~$\mathcal F$, while $V_{\mathcal G}:=\{v_{k+1},\dots,v_d\}$ is the set of vertices of~$\sigma$ that lie in all cells of~$\mathcal G$. Now $E_{\mathcal F}:=\aff(V_{\mathcal F}\cup\{c\})$ is an affine subspace of dimension $k$, while $E_{\mathcal G}:=\aff(V_{\mathcal G}\cup\{c\})$ has dimension~$d-k$. The two spaces together affinely span~$\R^{d-1}$, so by dimension reasons they intersect in a line~$\ell$. This line intersects the two complementary faces $\conv(V_{\mathcal F})$ and $\conv(V_{\mathcal G})$ of $\sigma$ in relatively interior points $\bar x$ resp.~$\bar y$. Let $C_{\mathcal F}$ denote the unique $(k-1)$-sphere that contains $V_{\mathcal F}\cup\{c\}$, that is, the circumsphere of the $k$-simplex $\conv(V_{\mathcal F}\cup\{c\})$, which is also the intersection of the circumspheres of $F_{k+1},\dots,F_d$. The point $c$ lies in the intersection $\ell\cap C_{\mathcal F}$. 
The line $\ell$ also contains the point $\ell\cap\conv(V_{\mathcal F})=\{\bar x\}$, which is a relative-interior point of $\conv(V_{\mathcal F})$ and thus for $k>1$ lies in the interior of the circumspheres of $F_{k+1},\dots,F_d$ and thus in the interior of the sphere $C_{\mathcal F}$ relative to the subspace $E_{\mathcal F}$. Thus $C_{\mathcal F}\cap\ell=\{c,x\}$, where the second intersection point $x$ is distinct from~$c$, and lies outside $\sigma$ for $k>1$. As for $C_{\mathcal F}$ and $x,\bar x$, we define $C_{\mathcal G}$ and $y,\bar y$ for ${\mathcal G}$: See Figure~\ref{fig:lemma:1}. \begin{figure}[htb!] \centering \begin{tikzpicture}[scale=.8] \draw (-1,-4) rectangle (7,6); \coordinate (A) at (0,0); \coordinate (B) at (5,0); \coordinate (C) at (2.5,4); \coordinate (Z) at (2,1.7); \coordinate (r) at (3,2); \coordinate (AZ) at ($(A)!.5!(Z)$); \coordinate (BZ) at ($(B)!.5!(Z)$); \coordinate (CZ) at ($(C)!.5!(Z)$); \coordinate (AZx) at ($(AZ)!1cm!90:(Z)$); \coordinate (BZx) at ($(BZ)!1cm!90:(Z)$); \coordinate (CZx) at ($(CZ)!1cm!90:(Z)$); \coordinate (ABZ) at (intersection of AZ--AZx and BZ--BZx); \coordinate (BCZ) at (intersection of BZ--BZx and CZ--CZx); \coordinate (CAZ) at (intersection of AZ--AZx and CZ--CZx); \draw [fill,pink] (A) -- (B) -- (Z) -- cycle; \node (Circ) [draw,color=red,ultra thick] at (ABZ) [circle through={(Z)}] {}; \coordinate (X) at (intersection of C--Z and Circ); \coordinate (Xbar) at (intersection of A--B and C--Z); \draw[style=thick] +(3.3,-.35) node {$\conv \!V_{\mathcal F}$} +(4.2,-2.5) node {$C_{\mathcal F}$} +(2.1,-1.7) node {$\ell\!\!=\!\!E_{\mathcal G}$} +(-.25,.15) node {$v_1$} +(5.3,.14) node {$v_2$} +($(C)+(1.9,0)$) node {$v_3\!=\!y\!=\!\bar{y}\!=V_{\mathcal G}$} +(2.6,.5) node {$F_3\!\!=\!\!\ {\mathcal F}$} +(2.25,1.9) node {$c$} +(0,5) node {$E_{\mathcal F}$} +(X) node [anchor=south west] {$x$} +(Xbar) node [anchor=north west] {$\bar{x}$}; \draw (A) -- (B) -- (C) -- cycle; \draw[ultra thick] (A) -- (B); \draw (A) 
--(Z)--(B)--(Z)--(C); \draw[dashed,color=red] ($(C)!-.5!(Z)$)--($(C)!3.3!(Z)$); \draw[style=thick] (X) circle (.1); \draw[style=thick] (Xbar) circle (.1); \draw[style=thick] (Z) circle (.1); \draw[style=thick] (C) circle (.1); \end{tikzpicture} \begin{tikzpicture}[scale=.8] \coordinate (A) at (0,0); \coordinate (B) at (6,0); \coordinate (C) at (2.5,4); \coordinate (D) at (1.7,-1.4); \coordinate (Z) at (2.4,.9); \coordinate (X) at (-.42,-1.9); \coordinate (Xbar) at (.83,-.68); \coordinate (Ybar) at (3.9,2.4); \coordinate (Y) at (6.1,4.55); \draw [fill, pink] (A) -- (D) -- (B) -- (Z) -- (C) -- cycle; \draw [thick,red,rotate=41] (.42,-1.08) ellipse (2cm and 1.1cm); \draw [thick,red] (4.54,2.3) ellipse (2.5cm and 2.85cm); \draw [red,dashed] (-1.3,-2.8) --(6.8,5.27); \draw (-1.3,-3) -- (-1.3,6) -- (7.5,5.2) -- (7.5,-1) -- cycle; \draw (-.8,-4) -- (-1.7,-1.7) -- (5.3,5.3) -- (8.2,5.25) -- cycle; \draw (A) -- (B) -- (C) -- cycle; \draw (A) --(D)--(B)--(D)--(C); \draw[ultra thick] (A) -- (D); \draw (A) --(Z)--(B)--(Z)--(C)--(Z)--(D); \draw [thick] (X) circle (.1); \draw [thick] (Y) circle (.1); \draw [thick] (Ybar) circle (.1); \draw [thick] (Xbar) circle (.1); \draw [thick] (Z) circle (.1); \draw[style=thick] +($(A)+(-.2,.2)$) node {$v_1$} +($(B)+(.4,-.1)$) node {$v_3$} +($(C)+(-.1,.3)$) node {$v_4$} +($(D)+(.3,-.2)$) node {$v_2$} +($(Z)+(.4,.1)$) node {$c$} +($(X)+(-.4,0)$) node {$x$} +($(Xbar)+(-.45,0)$) node {$\bar{x}$} +($(Ybar)+(.4,0)$) node {$\bar{y}$} +($(Y)+(.4,0)$) node {$y$} +(-.3,5) node {$E_{\mathcal G}$} +(.55,-1.75) node {$C_{\mathcal F}$} +(6.7,2) node {$C_{\mathcal G}$} +(-.6,-3.4) node {$E_{\mathcal F}$} +(5,3.8) node [anchor=north west] {$\ell$}; \end{tikzpicture} \caption{The situation of Lemma~\ref{lemma:1}. 
The left figure illustrates $d=3$, $k=2$, the right one $d=4$, $k=2$.} \label{fig:lemma:1} \end{figure} \begin{lemma}\label{lemma:1} In the situation just described, the point $x$ lies outside the circumspheres of $F_1,\dots,F_k$ and on the circumspheres of $F_{k+1},\dots,F_d$. \end{lemma} \begin{proof} The five points $x,\bar x, c,\bar y,y$ lie in this order along the line $\ell$, where the first two points coincide in the case $k=1$, while the last two coincide for~$d-k=1$. The circumspheres of $F_1,\dots,F_k$ intersect the line $\ell$ in $\{c,y\}$, and thus the point $x$ lies outside these spheres, while the circumspheres of $F_{k+1},\dots,F_d$ intersect the line $\ell$ in $\{c,x\}$. \end{proof} \begin{lemma}\label{lemma:2} If in the above situation the stellar subdivision of some or all of the facets $F_1,\dots,F_k$ results in a Delaunay triangulation $\mathcal T$, then the point $x$ lies outside all of the circumspheres of the newly created $(d-1)$-faces. \end{lemma} \begin{proof} Without loss of generality, let us assume that $\mathcal T$ is a single stellar subdivision of $\Delta$ at $F_1$ by a new vertex $r$ inside $F_1$. This will result in $d$ new facets $F'_1,\dots,F'_d$, whose vertex set consists of~$r$ together with all-but-one of the vertices of $F_1$, which are $c,v_2,\dots,v_d$. \begin{figure}[htb!] 
\centering \begin{tikzpicture}[scale=1.4] \clip [] (-1.7,-3.1) rectangle (6,5); \coordinate (A) at (0,0); \coordinate (B) at (5,0); \coordinate (C) at (2.5,4); \coordinate (Z) at (1.2,1.1); \coordinate (r) at (2.8,1.7); \coordinate (AZ) at ($(A)!.5!(Z)$); \coordinate (BZ) at ($(B)!.5!(Z)$); \coordinate (rZ) at ($(r)!.5!(Z)$); \coordinate (AZx) at ($(AZ)!1cm!90:(Z)$); \coordinate (BZx) at ($(BZ)!1cm!90:(Z)$); \coordinate (rZx) at ($(rZ)!1cm!90:(Z)$); \coordinate (ABZ) at (intersection of AZ--AZx and BZ--BZx); \coordinate (rBZ) at (intersection of rZ--rZx and BZ--BZx); \draw [fill,pink] (Z) -- (B) -- (C) -- cycle; \draw [ultra thick,red] (B) -- (Z); \draw[dashed,color=red] ($(B)!-1!(Z)$)--($(B)!3!(Z)$); \node (Circ) [draw,color=gray] at (ABZ) [circle through={(Z)}] {}; \node [draw,color=red] at (rBZ) [circle through={(Z)}] {}; \coordinate (X) at (intersection of C--Z and Circ); \coordinate (Xbar) at (intersection of A--B and C--Z); \draw[style=thick] +(2.7,-.28) node {$\conv Q$} +($(A)+(-.2,.1)$) node {$v_1$} +($(B)+(.25,.1)$) node {$v_2$} +($(C)+(.6,0)$) node {$v_3=y$} +($(r)+(.2,.1)$) node {$r$} +($(r)+(.3,.6)$) node {$F'_1$} +($(r)+(-.6,.4)$) node {$F'_2$} +($(r)+(0,-.6)$) node {$F'_d$} +(1.4,.4) node {$F_d$} +(-.9,2) node {aff$(K)$} +(5.4,-2.6) node {$C$} +(2.75,-2.7) node {circ($F'_d$)} +(-.75,-2.8) node {$\ell$} +($(Z)+(-.1,.2)$) node {$c$} +($(X)+(.3,0)$) node {$x$}; \draw (A) -- (B) -- (C) -- cycle; \draw[ultra thick] (A) -- (B); \draw (A) --(Z)--(B)--(Z)--(C); \draw[dashed,color=red] ($(C)!-.5!(Z)$)--($(C)!3.3!(Z)$); \draw[style=thick] (X) circle (.07); \draw[style=thick] (Z) circle (.07); \draw[style=thick] (C) circle (.07); \draw[fill] (r) circle (.07); \draw[] (B)--(r)--(C)--(r)--(Z); \end{tikzpicture} \caption{The three cases of Lemma~\ref{lemma:2}, for $d=3$, $k=2$.} \label{fig:lemma:2} \end{figure} We discuss them in three different cases (see Figure~\ref{fig:lemma:2}): \begin{compactenum}[(I)] \item One new facet, say $F'_1$, does not contain $c$. 
Then $c$ lies outside the circumsphere of $F'_1$, while all the vertices in $V_{\mathcal G}:=\{v_{k+1},\dots,v_d\}$ are vertices of $F'_1$, so $\bar y$ lies inside the circumsphere, or on its boundary (in the case $d-k=1$). In either case we conclude that $x$ lies outside the circumsphere from the ordering on the line $\ell$ described in the proof of Lemma~\ref{lemma:1}. \item $k-1$ new facets $F'_2,\dots,F'_k$ do not contain a vertex $v_j$, $2\le j\le k$. In this case we argue as in Case~(I). \item $d-k$ new facets $F'_{k+1},\dots,F'_d$ miss a vertex $v_j$, $(k+1\le j\le d)$ from~$V_{\mathcal G}$. Then $F'_j$ is adjacent to the facet $F_j$ because both share the $(d-2)$-face $K:=\conv(\{c,v_2,\dots,v_d\}{\setminus}v_j)$. Their circumspheres intersect in $\aff(K)$. The line $\ell$ intersects $\aff(K)$ in $c$, hence $x$ and $\bar x$ lie on the same side of $\aff(K)$, as well as $v_1$, because $\bar x$ is a convex combination of $v_1$ and $\aff(K)$. So, the circumsphere of $F_j$ passes through $x$ and $v_1$ on the same side of $\aff(K)$. Because $\mathcal T$ is Delaunay, the circumsphere of $F'_j$ does not contain $v_1$ and hence also not $x$.\vskip-9.5mm\mbox{} \end{compactenum} \end{proof} \begin{proposition}\label{prop:3impossible} Let $\Delta$ be a single stellar subdivision of a $(d-1)$-simplex $\sigma=\conv\{v_1,\dots,v_d\}$ in $\R^{d-1}$ by an interior point $c\in\sigma$, so the facets of $\Delta$ are $F_i=\conv(\{c,v_1,\dots,v_d\}{\setminus}v_j)$. Let $\mathcal T$ arise from this Delaunay triangulation $\Delta$ by single stellar subdivisions of $F_1,\dots,F_k$ by interior points $r_i\in F_i$ $(1\le i\le k)$. If $\mathcal T$ is a Delaunay triangulation, then $k<3$. \end{proposition} \begin{proof} For $d=3$ this was established in Lemma~\ref{lemma:doublestellartriangle}, so we assume $d>3$. 
As a single stellar subdivision can be undone without destroying the Delaunay property (Corollary~\ref{cor:can_be_undone}), it is enough to show that $\mathcal T$ cannot be a Delaunay triangulation if $k=3$. For the sake of contradiction we assume that such a $\mathcal T$ is a Delaunay triangulation. Then we are in the situation discussed above, where we find the point~$x$ on the line $\ell$, which by Lemmas~\ref{lemma:1} and~\ref{lemma:2} lies on the circumspheres of the facets $F_4,\dots,F_d$, but outside the circumspheres of all other facets of~$\mathcal T$. Let $\mathcal R$ denote the set of these other facets. The inversion of $\R^{d-1}$ in the unit sphere centered at $x$ sends all the vertices of~$\mathcal T$ to new points in~$\R^{d-1}$. This inversion induces a simplicial map to a new triangulation~$\mathcal T'$. \[ \Psi:\mathcal R\ \ \longrightarrow\ \ \mathcal T'. \] As an abbreviation, we denote the images under $\Psi$ by a prime $()'$; for example, $\Psi(v_1)=v'_1$. Note that if we apply this to the images of simplices $\sigma,F_1,\dots,F_3$, then we refer to the simplices obtained by applying $\Psi$ to the vertices. The simplicial complex $\mathcal T'$ is a part of the unique Delaunay subdivision of its vertex set, because for all cells in $\mathcal R$, an empty circumsphere is mapped to an empty circumsphere. This in particular shows that $\mathcal T'$ is a simplicial complex. Let $r'_1,r'_2,r'_3$ be the three images of the vertices that were used to perform single stellar subdivisions of $F_1,F_2,F_3$. Then these vertices are also interior vertices of $\mathcal T'$ and hence $\mathcal T'$ is the result of single stellar subdivisions of $F'_1,F'_2$ and $F'_3$ by $r'_1,r'_2,r'_3$. The inversion centered at $x$ also implies that $c',v'_1,v'_2$ and $v'_3$ lie in a common $2$-plane, as their preimages lie on a $2$-sphere that passes through $x$. Note that no three of these four vertices lie on a line.
By checking some vertex incidences, we figure out that the structure of $\mathcal T'$ can be described as follows: Take a $(d-1)$-simplex, split it into three simplices by inserting a vertex $c'$ in the interior of a $2$-face and then apply single stellar subdivisions to each of those three simplices by points $r'_1,r'_2$ and $r'_3$. In particular, the support of $\mathcal T'$ is convex, so $\mathcal T'$ is the Delaunay triangulation of the set $\{c',r'_1,r'_2,r'_3,v'_1,\dots,v'_d\}$ of $d+4$ points in $\R^{d-1}$. (See the left part of Figure~\ref{fig:Rd}.) \begin{figure}[htb!] \centering \begin{tikzpicture}[scale=.8,rounded corners=.66pt] \coordinate (A) at (0,0); \coordinate (B) at (6,0); \coordinate (C) at (2.5,4); \coordinate (D) at (1.5,-1.4); \coordinate (Z) at (2.7,-.5); \coordinate (r1) at (1.9,1.6); \coordinate (r2) at (2.75,1.7); \coordinate (r3) at (3.2,1.6); \draw [fill,pink] ($(Z)+(-.12,+.2)$) --(C)--(B)--cycle; \draw [red,dashed] (4.3,1.85) circle(2.76); \draw [red,dotted] (4.3,1.85) ellipse(2.76 and 1.25); \draw [fill, pink,rotate=0] ($(Z)!.5!(B)$) ellipse (1.76cm and .8cm); \draw (-2.3,-2) -- (-.5,1) -- (6.5,1) -- (8.3,-2) -- cycle; \draw [ultra thick] (A) -- (D) -- (B) -- cycle; \draw [ultra thick] (A) --(Z)--(D); \draw [draw=red,rotate=0] ($(Z)!.5!(B)$) ellipse (1.76cm and .8cm); \draw (C) -- (r2) -- (B) -- (r2) -- (A) -- (r2) -- (Z); \draw [ultra thick,red] (Z) --(C)--(B)--cycle; \draw [ultra thick] (C) -- (A) -- (C) -- (D); \draw (C) -- (r1) -- (A) -- (r1) -- (D) -- (r1) -- (Z); \draw (C) -- (r3) -- (D) -- (r3) -- (B) -- (r3) -- (Z); \draw[fill=white] (r1) circle (.1); \draw[fill=pink] (r2) circle (.1); \draw[fill=white] (r3) circle (.1); \draw[style=thick] +($(A)+(-.3,0)$) node {$v'_1$} +($(D)+(-.4,-.1)$) node {$v'_2$} +($(B)+(.2,.3)$) node {$v'_3$} +($(C)+(0,.3)$) node {$v'_4$} +($(r1)+(-.3,.2)$) node {$r'_3$} +($(r2)+(-.37,.2)$) node {$r'_2$} +($(r3)+(.37,.2)$) node {$r'_1$} +(7.5,-1.8) node [anchor=south] {$K$} +(6.1,-1.2) node [anchor=south] {$C'$}
+(7.4,1.5) node [anchor=south] {$C$} +($(Z)+(.1,-.2)$) node {$c'$}; \end{tikzpicture} \begin{tikzpicture} \coordinate (A) at ($.7*(0,0)$); \coordinate (B) at ($.7*(8,0)$); \coordinate (C) at ($.7*(4.1,6)$); \coordinate (a) at ($.7*(5.2,2.1)$); \coordinate (b) at ($.7*(3,2.1)$); \coordinate (c) at ($.7*(4,1)$); \coordinate (Z) at ($.7*(4.1,2)$); \draw[style=thick] +($.7*(6.2,5.3)$) node {$C'$} +(A) node [anchor=east] {$v'_1$} +(B) node [anchor=west] {$v'_2$} +(C) node [anchor=south] {$v'_3$} +($(a)+(.55,-.2)$) node {$\pi(r'_1)$} +($(b)+(-.55,-.2)$) node {$\pi(r'_2)$} +($(c)+(.1,-.4)$) node {$\pi(r'_3)$} +($(Z)+(-.2,.3)$) node {$c'$}; \draw[style=thick] (A) -- (B) -- (C) -- cycle; \draw[style=thick] (A)--(Z)--(B); \draw[ultra thick,color=red] (Z)--(C); \draw[dotted] (A)--(c)--(B)--(a)--(C)--(b)-- cycle; \draw[dotted] (a)--(Z)--(b)--(Z)--(c); \node[draw,color=red] at ($(C)!.5!(Z)$) [circle through={(Z)}] {}; \draw (a)[fill] circle (.1); \draw (b)[fill] circle (.1); \draw (c)[fill] circle (.1); \end{tikzpicture} \caption{Left: An example for $d=4$. Right: The projection image in the $2$-plane $K$.} \label{fig:Rd} \end{figure} Let $K$ be the $2$-plane containing $c',v'_1,v'_2$ and $v'_3$ and let $\mathcal T'_K$ denote the subcomplex of~$\mathcal T'$ that lies in $K$. We define barycentric coordinates by the points $v'_1,\dots,v'_d$ and let $\pi$ be the corresponding coordinate projection \[ \pi:\relint (\mathcal T')\rightarrow \relint (\mathcal T'_K)=\relint (\conv\{v_1,\dots,v_3\}). \] As $F_1$ and $F_2$ share a $(d-2)$-face, we know that $F'_1$ and $F'_2$ also share the same $(d-2)$-face in~$\mathcal T'$. It has vertex set $\{c',v'_3,\dots,v'_d\}$ and it must have a supporting sphere. We pick one and call it~$C$. The intersection of~$C$ with~$K$ is a supporting sphere for the edge $(c',v'_3)$ in $\mathcal T'_K$. We call this $1$-sphere $C'$ and notice that the preimage $\conv(C')$ under $\pi$, which is contained in $\conv\! 
\left(C'\cup\{v'_4,\dots,v'_d\}\right)$, lies completely inside $C$. (This crucial fact is illustrated in red in the right part of Figure~\ref{fig:Rd}: The sphere that contains $C'$ as well as $v'_4$ must enclose the whole truncated cone.) This implies that the images of $r'_1,r'_2$ and $r'_3$ under the projection~$\pi$ lie outside $C'$, but in the interior of $\mathcal T'_K$. From this we derive that we can apply single stellar subdivisions to the three $2$-faces of $\mathcal T'_K$ by the vertices $\pi(r'_1),\pi(r'_2)$ and $\pi(r'_3)$ such that the edges $(c',v'_1),(c',v'_2)$ and $(c',v'_3)$ would still be locally Delaunay, as indicated in the right part of Figure~\ref{fig:Rd}. However, in Lemma~\ref{lemma:doublestellartriangle} we have already proved that this is impossible. \end{proof} \begin{small}
\section{Introduction} \label{sec-intro} \subsection{Overview} We study a family of probability measures on spanning-tree-decorated rooted planar maps, which we define in Section~\ref{sec-burger}, using a generalization of the Sheffield hamburger-cheeseburger model~\cite{shef-burger}. This family includes as special cases maps decorated by a uniform spanning tree \cite{mullin-maps}, planar maps together with a critical Fortuin--Kasteleyn (FK) configuration~\cite{shef-burger}, and maps decorated by an active spanning tree~\cite{kassel-wilson-active}. These models converge in a certain sense (described below) to Liouville quantum gravity (LQG) surfaces decorated by Schramm--Loewner evolution ($\operatorname{SLE}_\kappa$)~\cite{schramm0}, and any value of~$\kappa>4$ corresponds to some measure in the family. Although our results are motivated by SLE and LQG, our proofs are entirely self-contained, requiring no knowledge beyond elementary probability theory. Consider a spanning-tree-decorated rooted planar map $(M, e_0, T)$, where $M$ is a planar map, $e_0$ is an oriented root edge for $M$, and $T$ is a spanning tree of $M$. Let $M^*$ be the dual map of $M$ and let $T^*$ be the dual spanning tree, which consists of the edges of $M^*$ which do not cross edges of $T$. Let~$Q$ be the quadrangulation whose vertex set is the union of the vertex sets of $M$ and $M^*$, obtained by identifying each vertex of $M^*$ with a point in the corresponding face of $M$, then connecting it by an edge (in $Q$) to each vertex of $M$ on the boundary of that face. Each face of~$Q$ is bisected by either an edge of $T$ or an edge of $T^*$ (but not both). Let $\mathbb e_0$ be the oriented edge of~$Q$ with the same initial endpoint as~$e_0$ and which is the first edge in the clockwise direction from $e_0$ among all such edges. 
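The encoding of $(M,e_0,T)$ by lattice walks (recalled in the caption of Figure~\ref{fig:map} below) implies Mullin's formula~\cite{mullin-maps}: the number of spanning-tree-decorated rooted planar maps with $n$ edges equals the number of two-dimensional simple-walk excursions of length $2n$ in the first quadrant, which is the Catalan product $\mathrm{Cat}(n)\,\mathrm{Cat}(n+1)$. A short sketch verifying this count for small $n$ (the dynamic program below is an added illustration, not taken from the paper):

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def quadrant_excursions(n):
    """Count NSEW walks of length 2n that start and end at the origin
    and stay in the first quadrant (x >= 0, y >= 0)."""
    # paths[(x, y)] = number of admissible walks of the current length ending at (x, y)
    paths = {(0, 0): 1}
    for _ in range(2 * n):
        nxt = {}
        for (x, y), cnt in paths.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if x + dx >= 0 and y + dy >= 0:
                    key = (x + dx, y + dy)
                    nxt[key] = nxt.get(key, 0) + cnt
        paths = nxt
    return paths.get((0, 0), 0)

# Mullin's formula: excursion count = Cat(n) * Cat(n+1)
for n in range(1, 7):
    assert quadrant_excursions(n) == catalan(n) * catalan(n + 1)
```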
As explained in, e.g.,~\cite[\S~4.1]{shef-burger}, there is a path $\lambda$ consisting of edges of (the dual of) $Q$ which snakes between the primal tree $T$ and dual tree $T^*$, starts and ends at $\mathbb e_0$, and hits each edge of $Q$ exactly once. This path $\lambda$ is called the \textit{Peano curve\/} of $(M,e_0,T)$. See Figure~\ref{fig:map} for an illustration. For Euclidean lattices, Lawler, Schramm, and Werner \cite{lsw-lerw-ust} showed that the uniform spanning tree Peano curve converges to $\operatorname{SLE}_8$. For random tree-decorated planar maps with suitable weights coming from the critical Fortuin--Kasteleyn model, Sheffield \cite{shef-burger} proved a convergence result which, when combined with the continuum results of \cite{wedges}, implies that the Peano curve converges in a certain sense to a space-filling version of $\operatorname{SLE}_\kappa$ with $4<\kappa\leq 8$ on an LQG surface. The measures on tree-decorated planar maps we consider generalize these, and converge in this same sense to $\operatorname{SLE}_\kappa$ with $4<\kappa<\infty$. \begin{figure}[htb!] \centering \vspace{-9pt} \includegraphics[width=.9\textwidth]{map-trees} \caption{Top left: a rooted map $(M, e_0)$ (in blue) with a spanning tree $T$ (heavier blue lines). Top right: the dual map $M^*$ (dashed red) with the dual spanning tree $T^*$ (heavier dashed red lines). Bottom left: the quadrangulation $Q$ (in white) whose vertices are the vertices of $M$ and $M^*$. Bottom right: the Peano curve $\lambda$ (in green), exploring clockwise. Formally, $\lambda$ is a cyclic ordering of the edges of $Q$ with the property that successive edges share an endpoint. The triple $(M, e_0, T)$ can be encoded by means of a two-dimensional simple walk excursion in the first quadrant with $2n$ steps, equivalently a word consisting of elements of the set $\Theta_0$ defined below which reduces to the empty word; see Figure~\ref{fig:word}. 
}\label{fig:map} \end{figure} For the measures on tree-decorated planar maps which we consider in this paper, the conjectured scaling limit of the Peano curve $\lambda$ is a \textit{whole-plane space-filling $\operatorname{SLE}_\kappa$ from $\infty$ to $\infty$\/} for an appropriate value of $\kappa > 4$. In the case when $\kappa \geq 8$, $\operatorname{SLE}_\kappa$ is space-filling \cite{schramm-sle}, and whole-plane space-filling $\operatorname{SLE}_\kappa$ from $\infty$ to $\infty$ is just a whole-plane variant of chordal $\operatorname{SLE}_\kappa$ (see~\cite[footnote~9]{wedges}). It is characterized by the property that for any stopping time $\tau$ for the curve, the conditional law of the part of the curve which has not yet been traced is that of a chordal SLE$_{\kappa}$ from the tip of the curve to $\infty$. Ordinary $\operatorname{SLE}_\kappa$ for $\kappa \in (4,8)$ is not space-filling \cite{schramm-sle}. In this case, whole-plane space-filling $\operatorname{SLE}_\kappa$ from $\infty$ to $\infty$ is obtained from a whole-plane variant of ordinary chordal $\operatorname{SLE}_\kappa$ by iteratively filling in the ``bubbles'' disconnected from $\infty$ by the curve. The construction of space-filling $\operatorname{SLE}_\kappa$ in this case is explained in~\cite[\S~1.2.3 and 4.3]{ig4}. For $\kappa > 4$, whole-plane space-filling $\operatorname{SLE}_\kappa$ is the Peano curve of a certain tree of $\operatorname{SLE}_{16/\kappa}$-type curves, namely the set of all flow lines (in the sense of~\cite{ig1,ig2,ig3,ig4}) of a whole-plane Gaussian free field (GFF) started from different points but with a common angle. There are various ways to formulate the convergence of spanning-tree-decorated planar maps toward space-filling $\operatorname{SLE}_\kappa$-decorated LQG surfaces. 
One can embed the map $M$ into~$\BB C$ (e.g.\ via circle packing or Riemann uniformization) and conjecture that the Peano curve of $T$ (resp.\ the measure which assigns mass $1/n$ to each vertex of $M$) converges in the Skorokhod metric (resp.\ the weak topology) to the space-filling $\operatorname{SLE}_\kappa$ (resp.\ the volume measure associated with the $\gamma$-LQG surface). Alternatively, one can first try to define a metric on an LQG surface (which has so far been accomplished only in the case when $\gamma = \sqrt{8/3}$~\cite{qle,sphere-constructions,tbm-characterization,lqg-tbm1,lqg-tbm2,lqg-tbm3}, in which case it is isometric to some variant of the Brownian map~\cite{legall-sphere-survey,miermont-survey}), and then try to show that the graph metric on $M$ (suitably rescaled) converges in the Gromov--Hausdorff sense to an LQG surface. Convergence in the former sense has only recently been shown for ``mated-CRT maps'' using the Tutte (harmonic or barycentric) embedding \cite{gms-tutte}. It has not yet been proved for any other random planar map model, and convergence in the latter (metric) sense has been established only for uniform planar maps and slight variants thereof (which correspond to $\gamma=\sqrt{8/3}$)~\cite{legall-uniqueness,miermont-brownian-map}. \begin{figure}[ht!]
\begin{center} \vspace{-30pt} \begin{minipage}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{mating-small} \end{minipage} \hspace{-8pt} \begin{minipage}[t]{0.12\textwidth} \vspace{-0.13\textheight} \includegraphics[scale=0.9,trim=0mm 0mm 10mm 0mm]{peanosphere-scaling-limit} \end{minipage} \hspace{1pt} \begin{minipage}[t]{0.43\textwidth} \includegraphics[width=\textwidth]{mating-big} \end{minipage} \begin{minipage}[t]{0.39\textwidth} \includegraphics[scale=0.85,page=1]{tree-gluing} \end{minipage} \begin{minipage}[t]{0.12\textwidth} \phantom{\includegraphics[scale=0.9,trim=0mm 0mm 10mm 0mm]{peanosphere-scaling-limit}} \end{minipage} \begin{minipage}[t]{0.39\textwidth} \hspace{-0.1\textwidth} \includegraphics[scale=0.85,page=2]{tree-gluing} \end{minipage} \end{center} \caption{ Shown on the top left are the contour functions for the discrete primal tree (blue) and dual tree (red) for the tree-decorated planar map on the bottom left using Sheffield's hamburger-cheeseburger bijection \cite{shef-burger}. Vertices of the tree and dual tree correspond to blue and red horizontal segments in the contour representation; edges of the tree and dual tree correspond to matching up and down steps. The white boundaries between quadrangles in the quadrangulation correspond to the white vertical segments between the blue and red contour functions; the bold white boundary in the quadrangulation, which marks the starting point of the Peano curve, corresponds to the left and right edges in the contour diagram. The main contribution of the current paper is to establish an infinite volume version of the scaling limit result indicated by the orange horizontal arrow on the top. 
That is, if one first takes a limit as the size of the map tends to infinity, then the contour functions for the infinite discrete pair of trees converge to a two-dimensional correlated Brownian motion which encodes a pair of infinite continuum random trees (CRTs) --- this is convergence in the so-called \textit{peanosphere sense}. The main result of \cite{wedges} implies that these two infinite CRTs glued together as shown (i.e.\ contracting the vertical white segments in addition to gluing along the horizontal arrows) determine their embedding into an $\operatorname{SLE}$-decorated LQG surface. That is, if one observes the two contour functions on the top right, then one can measurably recover the LQG surface decorated with an $\operatorname{SLE}$ indicated on the bottom right, and conversely, if one observes the $\operatorname{SLE}$-decorated LQG surface on the bottom right, then one can measurably recover the contour functions on the top right. This allows us to interpret our scaling limit result as a convergence result to $\operatorname{SLE}$-decorated LQG.} \label{fig:peanosphere} \end{figure} Here we consider a different notion of convergence, called convergence in the \textit{peanosphere sense}, which we now describe (see Figure~\ref{fig:peanosphere}). This notion of convergence is based on the work~\cite{wedges}, which shows how to encode a $\gamma$-quantum cone (a certain type of LQG surface parametrized by $\BB C$, obtained by zooming in near a point sampled from the $\gamma$-LQG measure induced by a GFF~\cite[\S~4.3]{wedges}) decorated by an independent whole-plane space-filling $\operatorname{SLE}_\kappa$ curve $\eta$ with $\kappa = 16/\gamma^2$ in terms of a correlated two-dimensional Brownian motion $Z$, with correlation $-\cos(4\pi/\kappa)$.
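As a quick numerical illustration, the pair $Z$ is straightforward to simulate. The Python sketch below (the function name and the Euler discretization are our own choices) generates a discretized two-dimensional Brownian motion whose coordinate increments have correlation $-\cos(4\pi/\kappa)$; for instance, the correlation is $-1/2$ when $\kappa = 12$ and $0$ when $\kappa = 8$.

```python
import math
import random

def correlated_bm(kappa, n_steps, dt=1e-3, seed=0):
    """Discretized sample of Z = (L, R): two standard Brownian motions
    whose increments have correlation rho = -cos(4*pi/kappa)."""
    rho = -math.cos(4 * math.pi / kappa)
    rng = random.Random(seed)
    L, R = [0.0], [0.0]
    s = math.sqrt(dt)
    for _ in range(n_steps):
        g1 = rng.gauss(0.0, 1.0)
        g2 = rng.gauss(0.0, 1.0)
        L.append(L[-1] + s * g1)
        # rho*g1 + sqrt(1 - rho^2)*g2 is standard Gaussian with
        # correlation rho with g1 (Cholesky factorization in 2d)
        R.append(R[-1] + s * (rho * g1 + math.sqrt(1.0 - rho * rho) * g2))
    return L, R
```

Estimating the empirical correlation of the increments of a long sample path recovers $-\cos(4\pi/\kappa)$ up to Monte Carlo error.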
Recall that the contour function of a discrete, rooted plane tree is the function one obtains by tracing along the boundary of the tree starting at the root, proceeding in a clockwise manner, and recording the distance from the current vertex to the root. The two coordinates of the Brownian motion $Z$ are the contour functions of the $\operatorname{SLE}_{16/\kappa}$ tree (whose Peano curve is $\eta$) and that of the corresponding dual tree (consisting of GFF flow lines whose angles differ from the angles of the flow lines in the original tree by $\pi$). Here, the distance to the root is measured using $\gamma$-LQG length. On the discrete side, the entire random planar map is determined by the pair of trees. One non-obvious fact established in \cite{wedges} is that the corresponding statement is true in the continuum: the entire $\gamma$-quantum cone and space-filling $\operatorname{SLE}$ turn out to be almost surely determined by the Brownian motion $Z$. We say that the triple $(M, e_0, T)$ converges in the scaling limit (in the peanosphere sense) to a $\gamma$-quantum cone decorated by an independent whole-plane space-filling $\operatorname{SLE}_\kappa$ if the joint law of the contour functions (or some slight variant thereof) of the primal and dual trees~$T$ and~$T^*$ converges in the scaling limit to the joint law of the two coordinates of $Z$. The present paper is a generalization of~\cite{shef-burger}, which was the first work to study peanosphere convergence. The paper~\cite{shef-burger} considered rooted critical FK planar maps. For $n\in\BB N$ and $q\geq 0$, a \textit{rooted critical FK planar map with parameter $q$ and size $n$\/} is a triple $(M,e_0,S)$ consisting of a planar map $M$ with~$n$ edges, a distinguished oriented root edge $e_0$ for $M$, and a set $S$ of edges of $M$, sampled with probability proportional to $q^{K(S)/2}$, where $K(S)$ is the number of connected components of $S$ plus the number of complementary connected components of $S$.
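Before continuing with the FK model, here is a minimal Python sketch of the contour-function encoding recalled above (the nested-list representation of a plane tree is our own convention, and children are traversed in list order rather than via a fixed clockwise embedding); a tree with $n$ edges yields $2n+1$ values, beginning and ending at $0$.

```python
def contour_function(tree):
    """Contour function of a rooted plane tree.

    A plane tree is encoded as the list of the root's subtrees (in
    planar order); a leaf is the empty list.  Tracing the boundary of
    the tree, we record the distance to the root after each unit step
    along an edge."""
    values = [0]

    def explore(subtree, depth):
        for child in subtree:
            values.append(depth + 1)  # traverse the edge down to the child
            explore(child, depth + 1)
            values.append(depth)      # traverse the same edge back up
    explore(tree, 0)
    return values

# Root with two children, the second of which has one child (3 edges):
# contour_function([[], [[]]]) == [0, 1, 0, 1, 2, 1, 0].
```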
The conditional law of $S$ given $M$ is that of the self-dual FK model on~$M$~\cite{fk-cluster}. An \textit{infinite-volume rooted critical FK planar map with parameter~$q$\/} is the infinite-volume limit of rooted critical FK planar maps of size $n$ in the sense of Benjamini--Schramm~\cite{benjamini-schramm-topology}. There is a natural (but not bijective) means of obtaining a spanning tree $T$ of $M$ from the FK edge set~$S$, which depends on the choice of $e_0$; see~\cite{bernardi-sandpile,shef-burger}. It is conjectured~\cite{shef-burger,wedges} that the triple $(M, e_0, T)$ converges in the scaling limit to an LQG sphere with parameter $\gamma$ decorated by an independent whole-plane space-filling $\operatorname{SLE}_\kappa$ with parameters satisfying \begin{equation} \label{eqn-kappa-q} \sqrt q = - 2\cos\left(\frac{4\pi}{\kappa} \right),\qquad \gamma = \frac{4}{\sqrt\kappa} \,. \end{equation} In~\cite[Thm.~2.5]{shef-burger}, this convergence is proven in the peanosphere sense in the case of infinite-volume FK planar maps. This is accomplished by means of a bijection, called the \textit{Sheffield hamburger-cheeseburger bijection}, between triples $(M, e_0, S)$ consisting of a rooted planar map of size $n$ and a distinguished edge set~$S$; and certain words in an alphabet of five symbols (representing two types of ``burgers'' and three types of ``orders''). For a fixed choice of $M$, this bijection is essentially equivalent to the bijection of~\cite{bernardi-sandpile}. The word associated with a triple $(M, e_0, S)$ gives rise to a walk on $\BB Z^2$ whose coordinates are (roughly speaking) the contour function of the spanning tree $T$ of $M$ naturally associated with $S$ (under the mapping mentioned in the previous paragraph) and the contour function of the dual spanning tree $T^*$ of the dual map $M^*$. There is also an infinite-volume version of Sheffield's bijection which is a.s.\ well defined for infinite-volume FK planar maps.
See~\cite{chen-fk} for a detailed exposition of this version of the bijection. Various strengthenings of Sheffield's scaling limit result (including an analogous scaling limit result for finite-volume FK planar maps) are proven in~\cite{gms-burger-cone,gms-burger-local,gms-burger-finite}. See also~\cite{chen-fk,blr-exponents} for additional results on FK planar maps and~\cite{gwynne-miller-cle} for a scaling limit result in a stronger topology which is proven using the above peanosphere scaling limit results. In~\cite{kassel-wilson-active}, a new family of probability measures on spanning trees of (deterministic) rooted planar maps, which generalizes the law arising from the self-dual FK model, was introduced. As explained in that paper, the law on trees $T$ of a rooted map $(M,e_0)$ arising from a self-dual FK model is given by the distribution on all spanning trees of $M$ weighted by~$y^{\operatorname{a}(T)}$, where $y = \sqrt q +1$ and $\operatorname{a}(T) = \operatorname{a}(T,e_0) \in \BB N$ is the ``embedding activity'' of $T$ (which depends on the choice of root $e_0$; we will remind the reader of the definition later). It also makes sense to consider the probability measure on trees $T$ weighted by~$y^{\operatorname{a}(T)}$ for $y \in (0,1)$, so that trees with a lower embedding activity are more likely. The unified discrete model, defined for any $y\ge 0$, is called a \textit{$y$-active spanning tree}. In the context of the current paper, it is natural to look at a joint law on the triple $(M,e_0,T)$ such that the marginal on $(M,e_0)$ is the measure which weights a rooted planar map by the partition function of active spanning trees. Indeed, as we explain later, with this choice of law, exploring the tree respects the Markovian structure of the map. We call a random triple sampled from this law a \textit{random rooted active-tree-decorated planar map with parameter $y\geq 0$ and size $n\in\BB N$}.
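As a toy numerical illustration of the weighting $y^{\operatorname{a}(T)}$ (with hypothetical activity values, since computing $\operatorname{a}(T)$ requires the exploration recalled later), the following Python sketch normalizes the weights by the partition function of active spanning trees:

```python
def active_tree_distribution(activities, y):
    """Probability of each spanning tree T under the weight y^{a(T)},
    given a (hypothetical) list of embedding activities a(T) for the
    spanning trees of a fixed rooted map."""
    weights = [y ** a for a in activities]
    Z = sum(weights)  # partition function of active spanning trees
    return [w / Z for w in weights]

# With activities (1, 1, 2) and y = 1/2 the distribution is
# (0.4, 0.4, 0.2); as y decreases toward 0, the mass concentrates on
# the trees of minimal embedding activity.
```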
The limiting case $y = 0$ corresponds to a spanning tree conditioned to have the minimum possible embedding activity, which is equivalent to a bipolar orientation on~$M$ for which the source and sink are adjacent~\cite{bernardi-polynomial} (see~\cite{kmsw-bipolar} for more on random bipolar-oriented planar maps). It is conjectured in~\cite{kassel-wilson-active} that for $y \in [0,1)$ the scaling limit of a random spanning tree~$T$ on large subgraphs of a two-dimensional lattice sampled with probability proportional to~$y^{\operatorname{a}(T)}$ is an $\operatorname{SLE}_\kappa$ with $\kappa \in (8,12]$ determined by \begin{equation} \label{eqn-kappa-y} \frac{y-1}{2} = -\cos\left(\frac{4\pi}{\kappa} \right) \,. \end{equation} It is therefore natural to expect that the scaling limit of a rooted active-tree-decorated planar map is a $\gamma$-LQG surface decorated by an independent space-filling $\operatorname{SLE}_\kappa$ with $\kappa \in (8,12] $ as in~\eqref{eqn-kappa-y} and $\gamma = 4/\sqrt\kappa$. We introduce in Section~\ref{sec-burger} a two-parameter family of probability measures on words in an alphabet of 8 symbols which generalizes the hamburger-cheeseburger model of~\cite{shef-burger}. Under the bijection of~\cite{shef-burger}, each of these models corresponds to a probability measure on spanning-tree-decorated planar maps. One parameter in our model corresponds to the parameter $y$ of the active spanning tree, and the other, which we call~$z$, controls the extent to which the tree $T$ and its corresponding dual tree $T^*$ are ``tangled together''. This second parameter can also be interpreted in terms of some form of \textit{bending energy\/} of the Peano curve which separates the two trees, in the sense of~\cite{bbg-bending,DiFrancesco}; see Remark~\ref{remark-bending}. 
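Both~\eqref{eqn-kappa-y} and~\eqref{eqn-kappa-q} are easy to invert numerically, as in the Python sketch below (function names are ours). The endpoint checks are reassuring: $y = 0$ gives $\kappa = 12$ and $\gamma = 4/\sqrt{12} = \sqrt{4/3}$, while $y \to 1$ gives $\kappa \to 8$; moreover the two relations agree under the substitution $y = \sqrt q + 1$.

```python
import math

def kappa_from_y(y):
    """Solve (y - 1)/2 = -cos(4*pi/kappa) for kappa.

    Here cos(4*pi/kappa) = (1 - y)/2, which makes sense for
    0 <= y <= 3; the range y in [0, 1) gives kappa in (8, 12],
    with kappa = 12 at y = 0."""
    return 4.0 * math.pi / math.acos((1.0 - y) / 2.0)

def kappa_from_q(q):
    """Solve sqrt(q) = -2*cos(4*pi/kappa) for kappa, given 0 <= q <= 4,
    so that kappa lies in [4, 8]."""
    return 4.0 * math.pi / math.acos(-math.sqrt(q) / 2.0)

# kappa_from_y(0) == 12 and 4/sqrt(12) == sqrt(4/3); kappa_from_q(0) == 8
# (uniform spanning tree) and kappa_from_q(1) == 6.  The two relations
# agree under y = sqrt(q) + 1.
```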
We prove an analogue of~\cite[Thm.~2.5]{shef-burger} for our model which in particular implies that active-tree-decorated planar maps for $0 \leq y < 1$ converge in the scaling limit to $\gamma$-quantum cones decorated by $\operatorname{SLE}_\kappa$ in the peanosphere sense for $\kappa \in (8,12]$ as in~\eqref{eqn-kappa-y} and $\gamma = 4/\sqrt\kappa$. If we also vary~$z$, the other parameter of our model, we obtain tree-decorated random planar maps which converge in the peanosphere sense to $4/\sqrt\kappa$-quantum cones decorated by space-filling $\operatorname{SLE}_\kappa$ for any value of $\kappa > 8$. \begin{remark} \label{remark-bipolar} When $y = 0$, an active-tree-decorated planar map is equivalent to a uniformly random bipolar-oriented planar map~\cite{bernardi-sandpile}. In~\cite{kmsw-bipolar}, the authors use a bijective encoding of bipolar-oriented planar maps, which is not equivalent to the one used in this paper, to show that random bipolar-oriented planar maps with certain face degree distributions converge in the peanosphere sense to an $\operatorname{SLE}_{12}$-decorated $\sqrt{4/3}$-LQG surface, both in the finite-volume and infinite-volume cases (see also~\cite{ghs-bipolar} for a stronger convergence result). In the special case when $z = 1$, our Theorem~\ref{thm-all-S} implies convergence of infinite-volume uniform bipolar-oriented planar maps in the peanosphere sense, but with respect to a different encoding of the map than the one used in~\cite{kmsw-bipolar}. More precisely, bipolar-oriented maps are encoded in~\cite{kmsw-bipolar} by a random walk in $\BB Z^2$ with a certain step distribution. The encoding of bipolar-oriented maps by the generalized hamburger-cheeseburger bijection corresponds to a random walk in $\BB Z^2 \times \{0,1\}$ with a certain step distribution. 
Both of these walks converge in law to a correlated Brownian motion (ignoring the extra bit in the hamburger-cheeseburger bijection), and the correlations are the same, so we say that they both converge in the peanosphere sense. \end{remark} \subsection{Basic notation} \label{sec-basic} We write $\BB N$ for the set of positive integers. \vspace{6pt} \noindent For $a < b \in \BB R$, we define the discrete intervals $[a,b]_\BB Z \colonequals [a, b]\cap \BB Z$ and $(a,b)_\BB Z \colonequals (a,b)\cap \BB Z$. \vspace{6pt} \noindent If $a$ and $b$ are two quantities, we write $a\preceq b$ (resp.\ $a \succeq b$) if there is a constant $C$ (independent of the parameters of interest) such that $a \leq C b$ (resp.\ $a \geq C b$). We write $a \asymp b$ if $a\preceq b$ and $a \succeq b$. \subsection{Generalized burger model} \label{sec-burger} We now describe the family of words of interest to us in this paper. These are (finite or infinite) words which we read from left to right and which consist of letters representing burgers and orders which are matched to one another following certain rules. Several basic properties of this model are proved in Appendix~\ref{sec-prelim}. Let \begin{equation} \Theta_0 \colonequals \left\{\hb,\cb,\ho,\co\right\}, \end{equation} and let $\mathcal W(\Theta_0)$ be the set of all finite words consisting of elements of $\Theta_0$. The alphabet $\Theta_0$ generates a semigroup whose elements are words in $\mathcal W(\Theta_0)$ modulo the relations \begin{equation} \label{eqn-theta-relations} \begin{split} &\cb \co = \hb \ho = \emptyset \quad\quad\quad\text{(order fulfillment)} \\ &\cb \ho = \ho \cb,\quad \hb \co = \co \hb . \end{split} \end{equation} Following Sheffield~\cite{shef-burger}, we think of $\hb,\cb,\ho,\co$ as representing a hamburger, a cheeseburger, a hamburger order, and a cheeseburger order, respectively. 
A hamburger order is fulfilled by the freshest available hamburger (i.e., the rightmost hamburger which has not already fulfilled an order), and similarly for cheeseburger orders. We say that an order and a burger which cancel out via the first relation of~\eqref{eqn-theta-relations} have been \textit{matched}, and that the order has \textit{consumed\/} the burger. See Fig.~\ref{fig:word}~(a) for a diagram representing matchings in an example. We enlarge the alphabet by defining \begin{equation} \Theta \colonequals \Theta_0 \cup \left\{ \db, \eb, \fo, \so \right\}, \end{equation} and let $\mathcal W(\Theta)$ be the set of all finite words consisting of elements of $\Theta$. The alphabet $\Theta$ generates a semigroup whose elements are finite words consisting of elements of $\Theta$ modulo the relations~\eqref{eqn-theta-relations} and the additional relations \begin{equation} \label{eqn-theta-relations'} \begin{aligned} \hb \fo & = \hb \ho = \emptyset & \quad \cb \fo & = \cb \co = \emptyset \\ \hb \so &= \hb \co& \cb \so &= \cb \ho \\ \hb \db &= \hb \hb& \cb \db &= \cb \cb \\ \hb \eb &= \hb \cb& \cb \eb &= \cb \hb. \end{aligned} \end{equation} In the language of burgers, the symbol $\fo$ represents a ``flexible order'' which requests the freshest available burger. The symbol $\so$ represents a ``stale order'' which requests the freshest available burger of the type \textit{opposite\/} to that of the freshest available burger. The symbol $\db$ represents a ``duplicate burger'' which acts like a burger of the same type as the freshest available burger. The symbol $\eb$ represents an ``opposite burger'' which acts like a burger of the type opposite to that of the freshest available burger. The model of~\cite{shef-burger} includes the flexible order~$\fo$ but no other elements of $\Theta \setminus \Theta_0$.
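Restricted to the original alphabet $\Theta_0$, the reduction operation is simple to implement. In the Python sketch below (the single-character encoding of $\hb,\cb,\ho,\co$ as \texttt{H}, \texttt{C}, \texttt{h}, \texttt{c} is our own convention), the reduced word is maintained as a list of unfulfilled orders followed by a stack of unconsumed burgers.

```python
def reduce_word(word):
    """Reduction of a word over Theta_0, encoded with 'H', 'C' (burgers)
    and 'h', 'c' (the corresponding orders).

    An order consumes the freshest (rightmost) unconsumed burger of its
    type, commuting to the left past burgers of the other type; a
    reduced word is unmatched orders followed by unmatched burgers."""
    orders = []   # unfulfilled orders, in arrival order
    burgers = []  # unconsumed burgers, freshest last
    for s in word:
        if s in 'HC':
            burgers.append(s)
        else:
            want = s.upper()
            if want in burgers:
                # fulfil the order with the freshest burger of its type
                del burgers[len(burgers) - 1 - burgers[::-1].index(want)]
            else:
                orders.append(s)
    return ''.join(orders + burgers)

# reduce_word('Hh') == '' (order fulfillment); reduce_word('Ch') == 'hC'
# (the unfulfilled order commutes to the left past the cheeseburger).
```

The word of Figure~\ref{fig:word}, which encodes a tree-decorated map, reduces to the empty word under this procedure.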
If a symbol in $\left\{ \db, \eb, \fo, \so \right\}$ has been replaced by a symbol in $\Theta_0$ via one of the relations in~\eqref{eqn-theta-relations'}, we say that this symbol is \textit{identified by\/} the earlier symbol in the relation; and \textit{identified as\/} the symbol in $\Theta_0$ with which it has been replaced. Given a word $x = x_1 \cdots x_n \in \mathcal W(\Theta)$, we write $|x| = n$ for the number of symbols in~$x$. \begin{defn}\hypertop{def-reduce} \label{def-reduce} A word in $\mathcal W(\Theta)$ is called \textit{reduced\/} if all of its orders, $\db$'s, and $\eb$'s lie to the left of all of its $\hb$'s and $\cb$'s. In Lemma~\ref{prop-reduction} we show that for any finite word $x$, there is a unique reduced word which can be obtained from $x$ by applying the relations~\eqref{eqn-theta-relations} and~\eqref{eqn-theta-relations'}, which we call the \textit{reduction\/} of $x$, and denote by $\protect\hyperlink{def-reduce}{\mathcal R}(x)$. \end{defn} An important property of the reduction operation (proved in Lemma~\ref{prop-associative}) is \[\protect\hyperlink{def-reduce}{\mathcal R}(xy) =\protect\hyperlink{def-reduce}{\mathcal R}(\protect\hyperlink{def-reduce}{\mathcal R}(x) \protect\hyperlink{def-reduce}{\mathcal R}(y)).\] Note that for any $x\in \mathcal W(\Theta)$, we have $|\protect\hyperlink{def-reduce}{\mathcal R}(x)|\le |x|$. \begin{defn}\hypertop{def-identification} \label{def-identification} We write $x'=\protect\hyperlink{def-identification}{\mathcal I}(x)$ (the \textit{identification\/} of $x$) for the word with $|x'| = |x|$ obtained from $x$ as follows. For each $i\in \{1,\ldots,|x|\}$, if $x_i \in \Theta_0$, we set $x_i' = x_i$. If $x_i \in \{\fo, \so\}$ and $x_i$ is replaced by a hamburger order (resp.\ cheeseburger order) via~\eqref{eqn-theta-relations'} when we pass to the reduced word $\protect\hyperlink{def-reduce}{\mathcal R}(x)$, we set $x_i' = \ho$ (resp.\ $x_i' = \co$).
If $x_i \in \{\db, \eb\}$ and $x_i$ is replaced with a hamburger (resp.\ cheeseburger) via~\eqref{eqn-theta-relations'} when we pass to the reduced word, we set $x_i' = \hb$ (resp.\ $x_i' = \cb$). Otherwise, we set $x_i'=x_i$. We say that a symbol $x_i$ is \textit{identified in the word $x$\/} if $x_i'$ is an element of $\Theta_0$, and \textit{unidentified in the word $x$\/} otherwise. \end{defn} For example, \begin{equation*} \begin{aligned} \protect\hyperlink{def-reduce}{\mathcal R}\left(\cb \fo \db \hb \so \right) &= \db \co \hb \\ \protect\hyperlink{def-identification}{\mathcal I}\left( \cb \fo \db \hb \so \right) &= \cb \co \db \hb \co . \end{aligned} \end{equation*} Note that $\protect\hyperlink{def-reduce}{\mathcal R}(\protect\hyperlink{def-identification}{\mathcal I}(x)) = \protect\hyperlink{def-reduce}{\mathcal R}(x)$. Note also that any symbol $x_i$ which has a match when we pass to $\protect\hyperlink{def-reduce}{\mathcal R}(x)$ is necessarily identified, but identified symbols are not necessarily matched. Indeed, symbols in $\Theta_0$ are always identified, and there may be $\so$, $\db$, and/or $\eb$ symbols in $x$ which are identified, but do not have a match. 
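The identification and reduction maps over the full alphabet $\Theta$ can be implemented in the same spirit. The Python sketch below (with $\hb,\cb,\ho,\co$ encoded as \texttt{H}, \texttt{C}, \texttt{h}, \texttt{c} and $\fo,\so,\db,\eb$ as \texttt{F}, \texttt{S}, \texttt{D}, \texttt{E}, our own convention) reproduces the worked example above.

```python
def identify_and_reduce(word):
    """Identification and reduction over the full alphabet, encoded as
    'H','C' (burgers), 'h','c' (orders), 'F' (flexible order), 'S'
    (stale order), 'D' (duplicate burger), 'E' (opposite burger).

    Returns the pair (identification I(x), reduction R(x)).  A special
    symbol arriving when the burger stack is empty stays unidentified."""
    opposite = {'H': 'C', 'C': 'H'}
    identified = []
    left = []   # unmatched orders and unidentified special symbols
    stack = []  # unconsumed burgers, freshest last
    for s in word:
        if s in 'HC':
            identified.append(s)
            stack.append(s)
        elif s in 'DE':
            if stack:
                # identified as a copy of / the opposite of the freshest burger
                b = stack[-1] if s == 'D' else opposite[stack[-1]]
                identified.append(b)
                stack.append(b)
            else:
                identified.append(s)
                left.append(s)
        else:  # an order: 'h', 'c', 'F' or 'S'
            if s in 'FS':
                if not stack:
                    identified.append(s)
                    left.append(s)
                    continue
                # F orders the freshest burger's type, S the opposite type
                o = stack[-1].lower() if s == 'F' else opposite[stack[-1]].lower()
            else:
                o = s
            identified.append(o)
            want = o.upper()
            if want in stack:
                del stack[len(stack) - 1 - stack[::-1].index(want)]
            else:
                left.append(o)
    return ''.join(identified), ''.join(left + stack)

# identify_and_reduce('CFDHS') == ('CcDHc', 'DcH'), matching the example
# above: I(cb fo db hb so) = cb co db hb co, R(cb fo db hb so) = db co hb.
```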
\begin{defn} \label{def-theta-count} \hypertop{def-theta-count} For $\theta\in \Theta$ and a finite word $x$ consisting of elements of $\Theta$, we write \begin{align*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\theta}(x) &\colonequals \text{number of $\theta$-symbols in $x$}\\ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\theta_1|\cdots|\theta_k}(x) &\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\theta_1}(x)+\cdots+\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\theta_k}(x)\\ \intertext{We also define} \protect\hyperlink{def-theta-count}{\mathcal B}(x)&\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb|\db|\eb}(x)=\text{number of burgers in $x$}\\ \protect\hyperlink{def-theta-count}{\mathcal O}(x)&\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co|\fo|\so}(x)=\text{number of orders in $x$}\\ \protect\hyperlink{def-theta-count}{\mathcal C}(x)&\colonequals \protect\hyperlink{def-theta-count}{\mathcal B}(x)-\protect\hyperlink{def-theta-count}{\mathcal O}(x) \intertext{and} \protect\hyperlink{def-theta-count}{d}(x) &\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(x) - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(x) \\ \protect\hyperlink{def-theta-count}{d^*}(x) &\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\cb}(x) - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\co}(x)\\ \protect\hyperlink{def-theta-count}{\vec{d}}(x) &\colonequals \left(\protect\hyperlink{def-theta-count}{d}(x),\, \protect\hyperlink{def-theta-count}{d^*}(x)\right) \\ \protect\hyperlink{def-theta-count}{\mathcal D}(x) &\colonequals \protect\hyperlink{def-theta-count}{d}(x)-\protect\hyperlink{def-theta-count}{d^*}(x)\,. 
\end{align*} \end{defn} The reason for the notation $\protect\hyperlink{def-theta-count}{d}$ and $\protect\hyperlink{def-theta-count}{d^*}$ is that these quantities represent distances to the root edge in the primal and dual trees, respectively, in the construction of~\cite[\S~4.1]{shef-burger} (see the discussion just below). Note that these quantities are still defined even if $x$ has some symbols in $\left\{ \db, \eb, \fo, \so \right\}$. Fig.~\ref{fig:word}~(b) shows a random-walk representation of $\protect\hyperlink{def-theta-count}{\vec{d}}$ computed on increasing prefixes of a finite (identified) word. This process will later be our main object of study. If $x$ is a finite word consisting of elements of $\Theta$ with $\protect\hyperlink{def-reduce}{\mathcal R}(x) = \emptyset$, then the bijection described in~\cite[\S~4.1]{shef-burger} applied to $\protect\hyperlink{def-identification}{\mathcal I}(x)$ uniquely determines a rooted spanning-tree-decorated map $(M, e_0, T)$ associated with $x$. \begin{figure}[b!] \captionsetup[subfigure]{position=below,justification=justified,singlelinecheck=false,labelfont=bf} \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=.9\textwidth]{word-match} \caption{The word associated to the decorated map of Fig.~\ref{fig:map}. The chords represent the matchings between orders and burgers that fulfill them.} \end{subfigure} \hfill \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=.68\textwidth]{word-walk} \caption{The trace of the walk $(\protect\hyperlink{def-theta-count}{\vec{d}}_i)_{0\le i \le |x|}$ corresponding to the $\protect\hyperlink{def-theta-count}{\vec{d}}$ vector of increasing prefixes of the word $x=\hb\cb\hb\hb\ho\cb\cb\ho\ho\co\cb\hb\hb\hb\co\ho\co\ho\co\ho$. 
The walk gives the number of available hamburgers and cheeseburgers as a function of time.} \end{subfigure} \caption{}\label{fig:word} \end{figure} We now describe the probability measure on words which gives rise to the law on spanning-tree-decorated planar maps which we are interested in. Let \alb \mathcal P \colonequals \left\{ (p_\fo, p_\so, p_\db, p_\eb ) \in [0,1]^4 \,:\, p_\fo + p_\so \leq 1 \quad \operatorname{and} \quad p_\db + p_\eb < 1 \right\}. \ale For a vector $\vec p = (p_\fo, p_\so, p_\db,p_\eb ) \in \mathcal P$, we define a probability measure $\PP = \PP_{\vec p}$ on $\Theta$ by \begin{equation} \label{eqn-theta-prob} \begin{aligned} \PP\!\left(\fo\right) &= \frac{p_\fo}{2}, & \PP\!\left(\so\right) &= \frac{p_\so}{2}, & \PP\!\left(\ho \right) = \PP\!\left(\co\right) &= \frac{1-p_\fo-p_\so}{4} \\ \PP\!\left(\db\right) &= \frac{p_\db}{2}, & \PP\!\left(\eb \right) &= \frac{p_\eb}{2}, & \PP\!\left(\hb \right) = \PP\!\left(\cb\right) &= \frac{1-p_\db - p_\eb}{4}. \end{aligned} \end{equation} Let $X= \cdots X_{-1} X_0 X_1 \cdots$ be a bi-infinite word whose symbols are i.i.d.\ samples from the probability measure~\eqref{eqn-theta-prob}. The identification procedure extends naturally to bi-infinite words, and we show in Appendix~\ref{sec-prelim} that a.s.\ the bi-infinite identified word $X' = \protect\hyperlink{def-identification}{\mathcal I}(X)$ exists and contains only elements of $\Theta_0$. Furthermore, a.s.\ each order in $X$ consumes a burger and each burger in $X$ is consumed by an order. That is, each symbol $X_i$ in $X$ has a match~$X_{\phi(i)}$ which cancels it out, so that in effect the reduced bi-infinite word $\protect\hyperlink{def-reduce}{\mathcal R}(X)$ is a.s.\ empty. \begin{defn} \label{def-X-identification} We write $X' = \cdots X_{-1}' X_0' X_1' \cdots$ for the identification of the bi-infinite word~$X$. 
\end{defn} \begin{defn} \label{def-match} For $i\in \BB Z$, we write $\phi(i) \in \BB Z$ for the index of the symbol matched to $X_i$ in the word $X$. (From the above property, a.s.\ $\phi$ is an involution of $\BB Z$.) \end{defn} For $a < b \in \BB R$, we write \begin{equation} \label{eqn-X(a,b)} X(a,b) \colonequals \protect\hyperlink{def-reduce}{\mathcal R}(X_{\lfloor a \rfloor} \cdots X_{\lfloor b \rfloor}) \quad \operatorname{and} \quad X'(a,b) \colonequals \protect\hyperlink{def-reduce}{\mathcal R}(X_{\lfloor a \rfloor}' \cdots X_{\lfloor b \rfloor}'). \end{equation} The aforementioned results of Appendix~\ref{sec-prelim} allow us to use the infinite-volume version of Sheffield's bijection~\cite{shef-burger} (which is described in full detail in~\cite{chen-fk}) to construct an infinite-volume rooted spanning-tree-decorated planar map $(M^\infty, e_0, T^\infty)$ from the identified word $X'$ of Definition~\ref{def-X-identification}. The set $\mathcal P$ describes a four-parameter family of probability measures on $\Theta$, and hence a four-parameter family of probability measures on triples $(M^\infty, e_0, T^\infty )$. However, as we will see in Corollary~\ref{prop-identification-law} below, the law of $X'$ (and hence also the law of $(M^\infty, e_0, T^\infty )$) depends only on the two parameters $p_\fo - p_\so$ and $p_\db - p_\eb$ (equivalently the parameters $y$ and $z$ defined in~\eqref{eqn-y-z}). \begin{remark} \label{remark-generality} The model described above includes three special symbols which are natural generalizations of the special order $\fo$ included in~\cite{shef-burger}: the order $\so$ has the opposite behavior to the order $\fo$, and the burgers $\db$ and $\eb$ behave in the same way as the orders $\fo$ and $\so$, respectively, but with burgers in place of orders.
As we will see in Section~\ref{sec-peano-model}, each of these symbols has a natural topological interpretation in terms of the spanning-tree-decorated rooted planar maps encoded by words consisting of elements of $\Theta$. \end{remark} \begin{remark} \label{remark-difference} As we will see, the words we consider in this paper can behave in very different ways from the words considered in~\cite{shef-burger}, which do not include the symbols $\so,\db,$ or $\eb$. For example, in the setting of Section~\ref{sec-variable-SD}, where we allow $\so$'s and $\db$'s but not $\fo$'s or $\eb$'s, the net hamburger/cheeseburger counts $\protect\hyperlink{def-theta-count}{d}(X(1,n))$ and $\protect\hyperlink{def-theta-count}{d^*}(X(1,n)) $ in a reduced word tend to be \emph{negatively} correlated (Theorem~\ref{thm-variable-SD}) and the reduced word $X(1,n)$ tends to have \emph{more} symbols than the corresponding reduced word in the case when $p_\fo = p_\so =p_\db = p_\eb = 0$ (Lemma~\ref{prop-mean-mono}). The opposite is true in the setting of~\cite{shef-burger}. As another example, in the setting of Section~\ref{sec-variable-SD} we expect, but do not prove, that the infinite reduced word $X(1,\infty)$ a.s.\ contains only finitely many unidentified $\so$'s and $\db$'s, whereas $X(1,\infty)$ a.s.\ contains infinitely many unidentified $\fo$'s in the setting of~\cite{shef-burger} (Remark~\ref{remark-I-infinite}). \end{remark} \subsection{Active spanning trees with bending energy} \label{sec-peano-model} Let $(M, e_0)$ be a (deterministic) planar map with $n$ edges with oriented root edge $e_0$. Let $M^*$ be the dual map of $M$ and let $(Q, \mathbb e_0)$ be the associated rooted quadrangulation (as described at the beginning of the introduction). In this subsection we introduce a probability measure on spanning trees of $M$ which is encoded by the model of Section~\ref{sec-burger}. 
There is a bijection between spanning trees on $M$ and \textit{noncrossing Eulerian cycles\/} on the \textit{medial graph\/} of $M$, which is the planar dual graph of $Q$. (An Eulerian cycle is a cycle which traverses each edge exactly once; vertices may be repeated.) To describe this bijection, let $\lambda$ be a noncrossing Eulerian cycle on the dual of $Q$ starting and ending at $\mathbb e_0$. By identifying an edge of $Q^*$ with the edge of $Q$ which crosses it, we view $\lambda$ as a function from $[1,2n]_\BB Z$ to the edge set of $Q$. Each quadrilateral of $Q$ is bisected by one edge of $M$ and one edge of $M^*$, and $\lambda$ crosses each such quadrilateral exactly twice (one such quadrilateral is shown in gray in Figure~\ref{fig:active-sketch}). Hence $\lambda$ crosses each edge of $M$ and each edge of $M^*$ either 0 or 2 times. The set $T$ of edges of $M$ which are not crossed by $\lambda$ is a spanning tree of $M$ whose discrete Peano curve is $\lambda$ and the set $T^*$ of edges of $M^*$ not crossed by $\lambda$ is the corresponding dual spanning tree of $M^*$. Each quadrilateral of $Q$ is bisected by an edge of either $T$ or $T^*$ (but not both). This establishes a one-to-one correspondence between noncrossing Eulerian cycles on the dual of $Q$ starting and ending at $\mathbb e_0$ and spanning trees of $M$. Now fix a noncrossing Eulerian cycle $\lambda$ as above. For $i\in [1,2n]_\BB Z$ we let $\overline e_i$ be the edge of $T \cup T^*$ which bisects the last quadrilateral of $Q$ crossed by $\lambda$ exactly once at or before time $i$, if such a quadrilateral exists. Let $e$ be an edge of $T\cup T^*$, and let $j,k\in[1,2n]_\BB Z$ be the first and second times, respectively, at which $\lambda$ crosses the quadrilateral of $Q$ bisected by $e$. Observe that if $e$ and $\overline e_{k-1}$ both belong to $M$ or both belong to $M^*$, then in fact $e = \overline e_{k-1}$.
In this case, we say that $e$ is of \textit{active type}; this definition coincides with ``embedding activity'', as illustrated in Figure~\ref{fig:active-sketch}. If $\overline e_{j-1}$ exists and $e$ and $\overline e_{j-1}$ either both belong to $M$ or both belong to $M^*$, then we say that $e$ is of \textit{duplicate type}; duplicate edges are illustrated in Figure~\ref{fig-duplicate}, and Remark~\ref{remark-bending} below discusses their relevance. Figure~\ref{fig:active-duplicate-map} shows the active and duplicate edges from Figure~\ref{fig:map}. An edge can be of both active and duplicate type, or of neither active nor duplicate type. \begin{figure}[h!] \centering \hfill\includegraphics[width=.35\textwidth]{explore-active-before}\hfill\raisebox{60pt}{$\rightarrow$}\hfill \includegraphics[width=.35\textwidth]{explore-active-after}\hfill{} \caption{ The Peano exploration process with the Peano path $\lambda$ in green, primal tree $T$ in blue, and dual tree $T^*$ in red. When the gray quadrilateral is first encountered (left panel), the dual edge $e$ is forced to be present (otherwise there would be a primal cycle). This means that $e$ is ``embedding active'', in the sense of~\cite{bernardi-sandpile} (see also~\cite{courtiel-activity}). The Peano curve then explores the map in the region enclosed by the blue near-cycle and exits through the same (gray) quadrilateral (right panel). Just before the second time the gray quadrilateral is encountered, the most recent quadrilateral encountered exactly once is the gray quadrilateral, so $\overline e_{k-1}=e$ and hence $e$ is of active type as defined above. This characterization of the embedding activity was explained in \cite{shef-burger}.
} \label{fig:active-sketch} \end{figure} Following~\cite{bernardi-sandpile,shef-burger}, a noncrossing Eulerian cycle $\lambda$ based at $\mathbb e_0$ can be encoded by means of a word $x$ of length $2n$ consisting of elements of $\Theta_0$ with reduced word $\protect\hyperlink{def-reduce}{\mathcal R}(x) = \emptyset$. The symbol $\hb$ (resp.\ $\ho$) corresponds to the first (resp.\ second) time that $\lambda$ crosses an edge of $M$, and the symbol $\cb$ (resp.\ $\co$) corresponds to the first (resp.\ second) time that $\lambda$ crosses an edge of $M^*$. The two times that $\lambda$ crosses a given quadrilateral of $Q$ correspond to a burger and the order which consumes it. With $\overline e_i$ as above, the burger corresponding to the quadrilateral bisected by $\overline e_i$ is the same as the rightmost burger in the reduced word $\protect\hyperlink{def-reduce}{\mathcal R}(x_1\cdots x_{i})$; the edge $\overline e_i$ is undefined if and only if this reduced word is empty. Therefore edges of active type correspond to orders which consume the most recently added burger that has not yet been consumed, and edges of duplicate type correspond to burgers which are the same type as the most recently added burger that has not yet been consumed. \begin{figure}[t!] \centering \includegraphics[scale=.8]{duplicate-or-not} \caption{Left: the two trees $T$ and $T^*$ and the Peano curve $\lambda$ (in green) run up until step $i-1$. The pink quadrilateral is the most recent one which has been crossed exactly once by $\lambda$ by time $i-1$, and $\overline e_{i-1}$ is the red edge which bisects this quadrilateral. The vertices $v_{i-1}^0$ and $v_{i-1}^1$ discussed in Remark~\ref{remark-bending} are shown in red and blue, respectively. At step $i$, $\lambda$ will either bend away from the red vertex (middle) or toward the red vertex (right).
In the former case, the edge which bisects the gray quadrilateral belongs to the same tree as $\overline e_{i-1}$, so the edge $\lambda(i)$ is of duplicate type. }\label{fig-duplicate} \end{figure} \begin{figure}[b!] \centering \includegraphics[width=0.55\textwidth]{map-active-duplicate} \caption{The quadrangulation $Q$, the trees $T$ and $T^*$, and the Peano curve $\lambda$ constructed from the triple $(M, e_0, T)$ of Figure~\ref{fig:map} with active (resp.\ duplicate) edges of $T \cup T^*$ indicated with an $a$ (resp.\ a $d$). Edges can be both active and duplicate. The root edge $\mathbb e_0$ is indicated by a thicker white line. If we allow symbols in~$\Theta$ (rather than just $\Theta_0$), the triple $(M, e_0, T)$ can be encoded by many different words of length~$2n$; more precisely, it can be encoded by any word whose identification is the word shown in Figure~\ref{fig:word}. The word corresponding to $(M, e_0, T)$ with the smallest possible number of elements of $\Theta_0$ is $\hb\eb\eb\db\fo\eb\db\so\so\fo\cb\eb\db\db\so\fo\so\fo\so\fo$. In this word, $\fo$ (resp.\ $\so$) symbols correspond to the second time $\lambda$ crosses a quadrilateral of $Q$ bisected by an active (resp.\ inactive) edge, and $\db$ (resp.\ $\eb$) symbols correspond to the first time $\lambda$ crosses a quadrilateral of $Q$ bisected by a duplicate (resp.\ non-duplicate) edge. The $\hb$ and $\cb$ symbols correspond to times $i$ for which the edge $\overline e_i$ is not defined. }\label{fig:active-duplicate-map} \end{figure} For a spanning tree $T$ of $M$ rooted at $e_0$, we let $\operatorname{a}(T)$ be the number of active edges and~$\operatorname{d}(T)$ the number of duplicate edges of its Peano curve $\lambda$. These quantities depend on the choice of $e_0$.
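The symbol conventions just described can be checked concretely. The following sketch is purely illustrative and is not used anywhere in the proofs; it implements the reduction of a word over $\Theta$, writing the ASCII letters H, C, h, c, F, S, D, E for $\hb,\cb,\ho,\co,\fo,\so,\db,\eb$, and assuming the conventions described above: $\fo$ consumes the freshest available burger, $\so$ consumes the freshest burger whose type is opposite that of the freshest burger, and $\db$ (resp.\ $\eb$) is a burger of the same (resp.\ opposite) type as the freshest burger. Symbols that cannot be identified or matched are simply retained, and the function name \texttt{reduce\_word} is ours.

```python
def reduce_word(word):
    """Reduce a word over {H, C, h, c, F, S, D, E}.

    Burgers: H, C; D/E are identified with the same/opposite type as
    the freshest burger on the stack.  Orders: h/c consume the freshest
    ham/cheeseburger, F the freshest burger, S the freshest burger of
    the type opposite the freshest burger.  Unmatched or unidentified
    symbols are kept, and the reduced word preserves left-to-right order.
    """
    burgers = []  # stack of (position, type); the freshest burger is last
    orders = []   # unmatched orders, as (position, symbol)
    for i, s in enumerate(word):
        if s in 'HC':
            burgers.append((i, s))
        elif s in 'DE':
            if burgers and burgers[-1][1] in 'HC':
                top = burgers[-1][1]
                t = top if s == 'D' else ('H' if top == 'C' else 'C')
                burgers.append((i, t))
            else:
                burgers.append((i, s))  # unidentified D or E
        elif s == 'F':
            if burgers:
                burgers.pop()           # consume the freshest burger
            else:
                orders.append((i, s))
        else:
            want = {'h': 'H', 'c': 'C'}.get(s)
            if s == 'S':
                # S wants the type opposite the freshest burger
                if burgers and burgers[-1][1] in 'HC':
                    want = 'H' if burgers[-1][1] == 'C' else 'C'
                else:
                    want = None
            for j in range(len(burgers) - 1, -1, -1):
                if want is not None and burgers[j][1] == want:
                    del burgers[j]      # consume freshest burger of that type
                    break
            else:
                orders.append((i, s))   # no matching burger: order unmatched
    return ''.join(sym for _, sym in sorted(burgers + orders))

print(reduce_word('HHHCSSS'))  # prints C: each S eats the freshest hamburger
print(len(reduce_word('HHHHSSS')))  # prints 7: no opposite-type burger for S
```

In particular, one can check that \texttt{reduce\_word} applied to the word $\hb\eb\eb\db\fo\eb\db\so\so\fo\cb\eb\db\db\so\fo\so\fo\so\fo$ from the caption of Figure~\ref{fig:active-duplicate-map} (i.e., \texttt{HEEDFEDSSFCEDDSFSFSF}) returns the empty word, as it must.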
We define the partition function \begin{equation} \mathcal{Z}(M,e_0,y,z)=\sum_{\text{spanning tree}\, T}y^{\operatorname{a}(T)}z^{\operatorname{d}(T)}\,, \end{equation} which gives rise, when $y,z\ge 0$, to a probability measure \begin{equation}\label{eqn-spanning-tree-law} \PP[T]=\frac{y^{\operatorname{a}(T)}z^{\operatorname{d}(T)}}{\mathcal{Z}(M,e_0,y,z)}\,, \end{equation} on the set of spanning trees $T$ of $M$. This distribution on spanning trees satisfies a domain Markov property: for $i\in [1,2n]_\BB Z$, the conditional law of $\lambda|_{[i+1,2n]_\BB Z}$ given $\lambda|_{[1,i]_\BB Z}$ depends only on the set of quadrilaterals and half-quadrilaterals not yet visited by $\lambda$ together with the starting and ending points of the path $\lambda([1,i]_\BB Z)$. See Figure~\ref{fig:markov} for an illustration of the Markov property of the random decorated map. We call a spanning tree sampled from the above distribution an \textit{active spanning tree with bending energy}, for reasons which are explained in the remarks below. \begin{figure}[t] \centering \includegraphics[width=\textwidth/2]{markov} \caption{Given an initial portion of the exploration process $\lambda$, the set of active and duplicate edges in the remainder of the graph depends on the initial segment only through its boundary. Consequently, the law of the decorated random map conditional on the part already drawn only depends on the white region, whose boundary components consist of the red and blue curves visited on only one side by the green curve.
}\label{fig:markov} \end{figure} \begin{remark} \label{remark-active-def} There are other notions of ``active edge'', each of which gives rise to the same Tutte polynomial \[T_M(x,y) = \sum_{\text{spanning trees $t$ of $M$}} x^{\text{\# internally active edges of $t$}}\,y^{\text{\# externally active edges of $t$}}\,.\] The embedding activity illustrated in Figure~\ref{fig:active-sketch} differs from Tutte's original definition, but is more natural in this context because it has the domain Markov property, and has a simple characterization in terms of the hamburger-cheeseburger model. The embedding activity is similar to Bernardi's definition \cite[\S~3.1, Def.~3]{bernardi-sandpile}, but with ``maximal'' in place of ``minimal''. The partition function $\mathcal{Z}(M,e_0,y,1)=T_M(y,y)$ is the Tutte polynomial of $M$ evaluated at $(y,y)$. In this case ($z=1$), the partition function is that of the active spanning tree model of~\cite{kassel-wilson-active}, which, when $y \geq 1$, coincides with the partition function of the self-dual Fortuin--Kasteleyn (FK) model with parameter $q = (y-1)^2$. \end{remark} \begin{remark} \label{remark-bending} To our knowledge, the notion of edges of duplicate type does not appear elsewhere in the literature. However, this notion can be viewed as a variant of the notion of \textit{bending energies\/} studied in~\cite{bbg-bending} and initially introduced in a different guise in~\cite{DiFrancesco}. Suppose $(\mathcal T, \mathbb v)$ is a rooted triangulation and $\ell$ is a non-self-crossing oriented loop in the dual of $\mathcal T$, viewed as a cyclically ordered sequence of distinct triangles in $\mathcal T$. For each triangle $t$ hit by the loop $\ell$, there is a single edge of $t$ which is not shared by the triangles hit by $\ell$ immediately before and after~$t$. We say that $t$ points outward (resp.\ inward) if this edge is on the same (resp.\ opposite) side of the loop $\ell$ as the root vertex $\mathbb v$.
The \textit{bending\/} of $\ell$ is the number of pairs of consecutive triangles which either both point outward or both point inward. Such a pair of triangles corresponds to a time when the loop $\ell$ ``bends around'' a vertex. If we view the Peano curve $\lambda$ considered above as a loop in the triangulation whose edges are the union of the edges of the quadrangulation $Q$ and the trees $T$ and $T^*$, then the bending of $\lambda$ in the sense of~\cite{bbg-bending} is the number of consecutive pairs of symbols of one of the forms $\hb \hb$, $\ho\ho$, $\hb\ho$, $\ho\hb$, $\cb \cb$, $\co\co$, $\cb\co$, or $\co \cb$ in the identified word which encodes the triple $(M, e_0, T)$ under Sheffield's bijection. The loops considered in~\cite{bbg-bending} are those arising from variants of the $O(n)$ model, and so are expected to be non-space-filling in the limit (in fact they are conjectured to converge to CLE$_\kappa$ loops for $\kappa \in (8/3,8)$~\cite{shef-cle}). For space-filling loops (such as the Peano curve $\lambda$), it is natural to keep track of times when the loop returns to a triangle which shares a vertex with one it has hit previously, and then bends toward the set of triangles which it has hit more recently. Let us now be more precise about what this means. It is easy to see from Sheffield's bijection (and is explained in~\cite[\S~4.2]{chen-fk}) that two edges $\lambda(i)$ and $\lambda(j)$ for $i,j \in [1,2n]_\BB Z$ share a primal (resp.\ dual) endpoint if and only if the rightmost hamburger (resp.\ cheeseburger) in the reduced words $\protect\hyperlink{def-reduce}{\mathcal R}(x_1\cdots x_i)$ and $\protect\hyperlink{def-reduce}{\mathcal R}(x_1\cdots x_j)$ both correspond to the same burger in the original word $x$, or if these reduced words both have no hamburgers (resp.\ cheeseburgers).
Consequently, an edge of duplicate type can be equivalently defined as an edge $\lambda(i)$ such that $\lambda$ crosses a quadrilateral of $Q$ for the first time at time $i$ and the following is true. Let $v_{i-1}^0$ and $v_{i-1}^1$ be the endpoints of $\lambda(i-1)$, enumerated in such a way that $\lambda$ hits an edge which shares the endpoint $v_{i-1}^0$ for the first time before it hits an edge which shares the endpoint $v_{i-1}^1$ for the first time. Then $\lambda$ turns toward $v_{i-1}^1$ at time $i$ (\textit{cf.}\ Figure~\ref{fig-duplicate}). From this perspective, a time when $\lambda$ crosses a quadrilateral bisected by an edge of duplicate type can be naturally interpreted as a time when $\lambda$ ``bends away from the set of triangles which it has hit more recently''. Hence our model is a probability measure on planar maps decorated by an active spanning tree (in the sense of~\cite{kassel-wilson-active}), weighted by an appropriate notion of the bending of the corresponding Peano curve. \end{remark} The generalized burger model of Section~\ref{sec-burger} encodes a random planar map decorated by an active spanning tree with bending energy. The correspondence between the probability vector $\vec p = (p_\fo, p_\so, p_\db, p_\eb) \in \mathcal P$ and the pair of parameters $(y,z)$ is given by \begin{equation} \label{eqn-y-z} y=\frac{1+p_\fo-p_\so}{1-p_\fo+p_\so}\quad\text{and}\quad z=\frac{1+p_\db-p_\eb}{1-p_\db+p_\eb}\,, \end{equation} i.e. \begin{equation} p_\fo - p_\so =\frac{y-1}{1+y}\quad\text{and}\quad p_\db-p_\eb=\frac{z-1}{1+z}\,. \end{equation} To see why this is the case, let $\dot X$ be a random word of length $2n$ sampled from the conditional law of $X_1 \cdots X_{2n}$ given $\{X(1,2n) = \emptyset\}$, where $X$ is the bi-infinite word from Section~\ref{sec-burger} (in the case when $p_\so = 1$, we allow the last letter of $\dot X$ to be a flexible order, since a word whose orders are all $\so$'s cannot reduce to the empty word). 
Let $\dot X' \colonequals \protect\hyperlink{def-identification}{\mathcal I}(\dot X)$ and let $(M, e_0, T)$ be the rooted spanning-tree-decorated planar map associated with $\dot X'$ under the bijection of~\cite[\S~4.1]{shef-burger}. \begin{lem} \label{prop-activity} \begin{enumerate} \item The law of $(M, e_0, T)$ is that of the uniform measure on edge-rooted, spanning-tree-decorated planar maps weighted by $y^{\operatorname{a}(T)} z^{\operatorname{d}(T)}$, with $y$ and $z$ as in~\eqref{eqn-y-z}. \label{item-activity-law} \item The conditional law of $T$ given $(M,e_0)$ is given by the law~\eqref{eqn-spanning-tree-law}; and when $z = 1$, the law of $(M, e_0, T)$ is that of an active-tree-decorated planar map (as defined in the introduction). \label{item-activity-cond} \item If $(M^\infty, e_0^\infty, T^\infty)$ is the infinite-volume rooted spanning-tree-decorated planar map associated with $X$ (by the infinite-volume version of Sheffield's bijection; see the discussion just after~\eqref{eqn-X(a,b)}), then $(M^\infty, e_0^\infty, T^\infty)$ has the law of the Benjamini--Schramm limit~\cite{benjamini-schramm-topology} of the law of $(M,e_0,T)$ as $n\rightarrow\infty$. \label{item-activity-infinite} \end{enumerate} \end{lem} \begin{proof} Throughout the proof we write $a \propto b$ if $a/b$ is a constant depending only on $n$ and $\vec p$. Let $x \in \mathcal W(\Theta )$ be a word of length $2n$ which satisfies $\protect\hyperlink{def-reduce}{\mathcal R}(x) = \emptyset$. Note that $x$ must contain $n$ burgers and $n$ orders.
Then in the notation of Definition~\ref{def-theta-count}, \begin{multline} \label{eqn-tree-law1} \PP\!\left(\dot X = x \right) \propto \\ \left(\frac{2p_\fo}{1-p_\fo-p_\so} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\fo}(x)} \left( \frac{2p_\so}{1-p_\fo-p_\so} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\so}(x)} \left( \frac{2p_\db}{1-p_\db-p_\eb} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}(x)} \left(\frac{2p_\eb}{1-p_\db-p_\eb} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\eb}(x)} . \end{multline} Let $A^\ho$ (resp.\ $\widetilde A^\ho$) be the set of $i\in [1,2n]_\BB Z$ for which $\dot X_i'$ is a hamburger order matched to a hamburger which is (resp.\ is not) the rightmost burger in $\dot X'(1,i-1)$ (notation as in~\eqref{eqn-X(a,b)}). Let $D^\ho$ (resp.\ $\widetilde D^\ho$) be the set of $i\in [2,2n]_\BB Z$ for which $\dot X_i'$ is a hamburger, $\dot X'(1, i-1) \neq\emptyset$, and the rightmost burger in $\dot X'(1,i-1)$ is a hamburger (resp.\ cheeseburger). Define $A^\co$, $\widetilde A^\co$, $D^\co$, and $\widetilde D^\co$ similarly but with hamburgers and cheeseburgers interchanged. Then \begin{equation*} \operatorname{a}(T) = \# A^\ho + \# A^\co \quad \operatorname{and} \quad \operatorname{d}(T) = \# D^\ho +\# D^\co. \end{equation*} If we condition on $\dot X' $, then we can re-sample $\dot X$ as follows. For each $i \in A^\ho$, independently sample $\dot X_i \in \{\ho, \fo\}$ from the probability measure $\PP( \ho) = (1-p_\fo - p_\so)/(1+ p_\fo -p_\so )$, $\PP(\fo) = 2 p_\fo/(1 + p_\fo-p_\so )$. For each $i \in \widetilde A^\ho$, independently sample $\dot X_i \in \{\ho, \so\}$ from the probability measure $\PP( \ho) = (1-p_\fo - p_\so)/(1 -p_\fo+p_\so)$, $\PP(\so) = 2 p_\so/(1 -p_\fo+p_\so )$. For each $i \in D^\ho$, independently sample $\dot X_i \in \{\hb, \db\}$ from the probability measure $\PP( \hb) = (1-p_\db - p_\eb)/(1+ p_\db - p_\eb)$, $\PP(\db) = 2 p_\db/(1+p_\db - p_\eb )$. 
For each $i \in \widetilde D^\ho$, independently sample $\dot X_i \in \{\hb, \eb\}$ from the probability measure $\PP( \hb) = (1-p_\db - p_\eb)/(1-p_\db +p_\eb)$, $\PP(\eb) = 2 p_\eb/(1- p_\db +p_\eb )$. Then do the same for $A^\co$, $\widetilde A^\co$, $D^\co$, and $\widetilde D^\co$ but with hamburgers and cheeseburgers interchanged. The above resampling rule implies that with $x$ as above, \begin{align} \label{eqn-tree-law2} \PP\!\left(\dot X = x \,|\, \dot X' = \protect\hyperlink{def-identification}{\mathcal I}(x) \right) &\propto \left(\frac{1 - p_\fo + p_\so}{1 +p_\fo - p_\so} \right)^{\operatorname{a}(T)} \left(\frac{1 - p_\db + p_\eb}{1 + p_\db - p_\eb} \right)^{\operatorname{d}(T)} \left(\frac{2p_\fo}{1-p_\fo-p_\so} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\fo}(x)} \notag \\ &\qquad \times \left( \frac{2p_\so}{1-p_\fo-p_\so} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\so}(x)} \left( \frac{2p_\db}{1-p_\db-p_\eb} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}(x)} \left(\frac{2p_\eb}{1-p_\db-p_\eb} \right)^{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\eb}(x)}. \end{align} By dividing~\eqref{eqn-tree-law1} by~\eqref{eqn-tree-law2}, we obtain \begin{equation*} \PP\!\left( \dot X' = \protect\hyperlink{def-identification}{\mathcal I}(x) \right) \propto y^{\operatorname{a}(T)} z^{\operatorname{d}(T)}. \end{equation*} Therefore, the probability of any given realization of $(M, e_0, T)$ is proportional to $y^{\operatorname{a}(T)} z^{\operatorname{d}(T)} $, which gives assertion~\ref{item-activity-law}. Assertion~\ref{item-activity-cond} is an immediate consequence of assertion~\ref{item-activity-law}. Assertion~\ref{item-activity-infinite} follows from the same argument used in~\cite[\S~4.2]{shef-burger} together with the results of Appendix~\ref{sec-prelim}. 
\end{proof} \begin{remark} \label{remark-duality} The model described in Lemma~\ref{prop-activity} is self-dual in the sense that the law of $(M , e_0 , T)$ is the same as the law of $(M^* , e_0^* , T^*)$, where $M^*$ is the dual map of $M$, $e_0^*$ is the edge of $M^*$ which crosses $e_0$, and $T^*$ is the dual spanning tree (consisting of edges of $M^*$ which do not cross edges of $T$). This duality corresponds to the fact that the law of the inventory accumulation model of Section~\ref{sec-burger} is invariant under the replacements $\hb \leftrightarrow \cb$ and $\co \leftrightarrow \ho$. It may be possible to treat non-self-dual variants of this model in our framework by relaxing the requirement that $\PP(\hb) = \PP(\cb)$ and $\PP(\co) = \PP(\ho)$ in~\eqref{eqn-theta-prob}, but we do not investigate this. We remark that there are bijections and Brownian motion scaling limit results analogous to the ones in this paper for other random spanning-tree-decorated map models which do not possess this self-duality; see, e.g.,~\cite{kmsw-bipolar,lsw-schnyder-wood}. \end{remark} We end by recording the following corollary of Lemma~\ref{prop-activity}, which says that the law of the identification of the word $X$ (and therefore the law of the associated tree-decorated map) depends on the parameter $\vec p$ only via the quantities $y$ and $z$ of~\eqref{eqn-y-z}. \begin{cor} \label{prop-identification-law} Suppose $\vec p = (p_\fo,p_\so,p_\db,p_\eb)$ and $\widetilde{\vec p} = (\widetilde p_\fo,\widetilde p_\so, \widetilde p_\db,\widetilde p_\eb)$ are two vectors in $\mathcal P$ which satisfy $p_\fo-p_\so = \widetilde p_\fo - \widetilde p_\so$ and $p_\db - p_\eb = \widetilde p_\db - \widetilde p_\eb $.
Let $X = \cdots X_{-1} X_0 X_1 \cdots$ (resp.\ $\widetilde X = \cdots \widetilde X_{-1} \widetilde X_0 \widetilde X_1 \cdots $) be a bi-infinite word such that $\{X_i\}_{i\in\BB N}$ (resp.\ $\{\widetilde X_i\}_{i\in\BB N}$) is a collection of i.i.d.\ samples from the probability measure~\eqref{eqn-theta-prob} with probabilities $\vec p$ (resp.\ with $\widetilde{\vec p}$). Then the identifications $\protect\hyperlink{def-identification}{\mathcal I}(X)$ and $\protect\hyperlink{def-identification}{\mathcal I}(\widetilde X)$ agree in law. \end{cor} \begin{proof} It follows from Lemma~\ref{prop-activity} that the infinite-volume tree-decorated planar maps $(M^\infty, e_0^\infty, T^\infty)$ and $(\widetilde M^\infty, \widetilde e_0^\infty, \widetilde T^\infty)$ associated with $\protect\hyperlink{def-identification}{\mathcal I}(X)$ and $\protect\hyperlink{def-identification}{\mathcal I}(\widetilde X)$ agree in law. Since these maps uniquely determine $\protect\hyperlink{def-identification}{\mathcal I}(X)$ and $\protect\hyperlink{def-identification}{\mathcal I}(\widetilde X)$, respectively, via the same deterministic procedure, we infer that $\protect\hyperlink{def-identification}{\mathcal I}(X) \overset{d}{=} \protect\hyperlink{def-identification}{\mathcal I}(\widetilde X)$. \end{proof} \subsection{Statement of main results} \label{sec-result} Fix $\vec p = (p_\fo,p_\so,p_\db,p_\eb) \in \mathcal P$ and let $X$ be the bi-infinite word from Section~\ref{sec-burger}, whose symbols are i.i.d.\ samples from the probability measure~\eqref{eqn-theta-prob}. Also let $X' = \cdots X_{-1}' X_0' X_1' \cdots = \protect\hyperlink{def-identification}{\mathcal I}(X)$ be the identification of $X$, as in Definition~\ref{def-X-identification}, and recall the notation~\eqref{eqn-X(a,b)}.
For $i \in \BB Z$, define (in the notation of Definition~\ref{def-theta-count}) \begin{equation*}\hypertop{def-d-z} \protect\hyperlink{def-d-z}{d}(i) \colonequals \begin{cases} \protect\hyperlink{def-theta-count}{d}( X'(1,i)) \quad &i \geq 1 \\ 0 \quad &i = 0 \\ \protect\hyperlink{def-theta-count}{d}( X'(i+1,0)) \quad &i \leq -1 \end{cases} \quad \operatorname{and} \quad \protect\hyperlink{def-d-z}{d^*}(i) \colonequals \begin{cases} \protect\hyperlink{def-theta-count}{d^*}(X'(1,i)) \quad &i \geq 1 \\ 0 \quad &i = 0 \\ \protect\hyperlink{def-theta-count}{d^*}( X'(i+1,0)) \quad &i \leq -1. \end{cases} \end{equation*} We extend $\protect\hyperlink{def-d-z}{d}$ and $\protect\hyperlink{def-d-z}{d^*}$ to $\BB R$ by linear interpolation, and define $\protect\hyperlink{def-d-z}{\vec{d}}(t) \colonequals (\protect\hyperlink{def-d-z}{d}(t), \protect\hyperlink{def-d-z}{d^*}(t))$. For $n\in\BB N$ and $t\in \BB R$, let \begin{equation} \label{eqn-Z^n-def} U^n(t) \colonequals n^{-1/2} \protect\hyperlink{def-d-z}{d} (n t),\quad V^n(t) \colonequals n^{-1/2} \protect\hyperlink{def-d-z}{d^*}(n t), \quad Z^n(t) \colonequals (U^n(t), V^n(t) ). \end{equation} It is an immediate consequence of~\cite[Thm.~2.5]{shef-burger} that in the case where $p_\db=p_\eb=p_\so = 0$ and $p_\fo \in (0,1/2)$, the random path $Z^n$ converges in law as $n\rightarrow \infty$ in the topology of uniform convergence on compact intervals to a two-sided two-dimensional correlated Brownian motion $Z = (U,V)$ with $Z(0) =0$ and \begin{equation} \label{eqn-bm-cov-all-F} \operatorname{Var}(U(t) ) = \operatorname{Var}(V(t)) = \frac{1}{y+1}|t| \quad \operatorname{and} \quad \operatorname{Cov}(U(t), V(t) ) = \frac{y-1}{2(y+1)} |t|, \quad \forall t\in\BB R \end{equation} with $y$ as in~\eqref{eqn-y-z}. In the case when $p_\db=p_\eb=p_\so = 0$ and $p_\fo \in [1/2,1]$, the coordinates of~$Z^n$ instead converge in law to two identical two-sided Brownian motions with variance $1/4$. 
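As an elementary consistency check of the normalization in~\eqref{eqn-Z^n-def} and the covariance~\eqref{eqn-bm-cov-all-F}, consider the case $p_\fo = p_\so = p_\db = p_\eb = 0$ (so $y = z = 1$), in which every symbol lies in $\Theta_0$ and an order always cancels against a burger of its own type when one is available, so the count discrepancies of the reduced word are simply the net symbol counts and $\protect\hyperlink{def-d-z}{\vec{d}}$ is a simple random walk on $\BB Z^2$. The following sketch (illustrative only; the ASCII letters H, C, h, c stand for $\hb,\cb,\ho,\co$ and the function name is ours) verifies by exhaustive enumeration that $\operatorname{Var}(\protect\hyperlink{def-d-z}{d}(n)) = n/2$ and $\operatorname{Cov}(\protect\hyperlink{def-d-z}{d}(n), \protect\hyperlink{def-d-z}{d^*}(n)) = 0$, in agreement with~\eqref{eqn-bm-cov-all-F} at $y = 1$.

```python
from itertools import product

def theta0_moments(n):
    """Exhaustively average d(n)^2 and d(n)*d*(n) over the uniform
    measure on Theta_0^n (the case p_F = p_S = p_D = p_E = 0, i.e.
    y = z = 1).  Without F/S/D/E symbols, the count discrepancies of
    the reduced word are just the net counts of the full word, so
    each step of (d, d*) is one of (+-1, 0), (0, +-1), each with
    probability 1/4."""
    var_d, cov = 0, 0
    for w in product('HChc', repeat=n):
        d = w.count('H') - w.count('h')    # net hamburger count
        ds = w.count('C') - w.count('c')   # net cheeseburger count
        var_d += d * d
        cov += d * ds
    return var_d / 4 ** n, cov / 4 ** n

# matches Var(U(t)) = |t|/(y+1) = |t|/2 and Cov = 0 at y = 1:
print(theta0_moments(8))  # prints (4.0, 0.0), i.e. n/2 and 0 for n = 8
```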
In light of Corollary~\ref{prop-identification-law}, the above implies that if $(p_\fo, p_\so, p_\db,p_\eb) \in \mathcal P$ with $p_\db = p_\eb = 0$ and $p_\fo - p_\so \geq 0$ (equivalently $y\geq 1$ and $z=1$), then $Z^n$ converges in law as $n\rightarrow\infty$ to a Brownian motion as in~\eqref{eqn-bm-cov-all-F} (resp.\ a pair of identical Brownian motions with variance $1/4$) if $1 \leq y < 3$ (resp.\ $y \geq 3$). Our main contribution is to prove that the path $Z^n$ converges to a correlated Brownian motion for additional values of $y$ and $z$. \begin{thm} \label{thm-all-S} Let $\vec p = (p_\fo,p_\so,p_\db,p_\eb) \in \mathcal P$ with $p_\fo =0$ and $p_\so = 1$ (equivalently, in the notation~\eqref{eqn-y-z}, $y=0$ and $z\ge 0$ is arbitrary). Then with $Z^n$ as in~\eqref{eqn-Z^n-def}, we have $Z^n\rightarrow Z$ in law in the topology of uniform convergence on compacts, where $Z = (U,V)$ is a two-sided correlated Brownian motion with $Z(0) = 0$ and \begin{equation} \label{eqn-bm-cov-all-S} \operatorname{Var}( U(t)) = \operatorname{Var}( V(t)) = \frac{(1+z)|t|}{2} \quad \operatorname{and} \quad \operatorname{Cov}( U(t), V(t)) = -\frac{ z |t|}{2 },\quad \forall t \in \BB R. \end{equation} \end{thm} We prove Theorem~\ref{thm-all-S} in Section~\ref{sec-all-S}. \begin{thm} \label{thm-variable-SD} Let $\vec p =(p_\fo,p_\so,p_\db,p_\eb) \in \mathcal P$ with $p_\fo -p_\so \leq 0$ and $p_\eb - p_\db \leq 0$ (equivalently, with $y$ and $z$ as in~\eqref{eqn-y-z}, we have $y \in [0,1]$ and $z\in [1, \infty)$).
There is a parameter $\protect\hyperlink{def-J}{\chi} \in (1,\infty)$, depending only on $y$ and $z$, such that with $Z^n$ as in~\eqref{eqn-Z^n-def}, $Z^n$ converges in law (in the topology of uniform convergence on compacts) to a two-sided correlated Brownian motion $Z = (U,V)$ with $Z(0) = 0$ and \begin{equation} \label{eqn-bm-cov-variable-SD} \begin{aligned} \operatorname{Var}( U(t)) = \operatorname{Var}( V(t)) &= \frac12 \left( 1 + \frac{(z-y)\protect\hyperlink{def-J}{\chi} }{ (y+1)(z+1)} \right) |t| \quad \operatorname{and} \\ \qquad \operatorname{Cov}( U(t), V(t)) &= - \frac{(z-y)\protect\hyperlink{def-J}{\chi} }{2(y+1)(z+1)} |t|,\quad\quad \forall t \in \BB R. \end{aligned} \end{equation} In the case when $z = 1$, we have $\protect\hyperlink{def-J}{\chi} = 2$. When $y=0$, we have $\protect\hyperlink{def-J}{\chi} = z+1$. \end{thm} \begin{figure}[htb!] \begin{center} \includegraphics[scale=.7]{parameter-graph-yz} \caption{A graph of the range of parameter values for which peanosphere scaling limit results for spanning-tree-decorated random planar maps are known, along with the corresponding values of $\kappa$. On the red and orange segments, the path $Z^n$ converges to a non-negatively correlated Brownian motion~\cite{shef-burger}. On the orange segment the correlation is $1$ and the maps are not conjectured to converge to $\operatorname{SLE}_\kappa$-decorated LQG for any $\kappa > 4$. On the red segment, which corresponds to critical FK planar maps for $q\in [0,4)$, peanosphere scaling limit results are known both in the infinite-volume and finite-volume cases~\cite{shef-burger,gms-burger-cone,gms-burger-local,gms-burger-finite}, and several additional results are known~\cite{chen-fk,blr-exponents,gwynne-miller-cle}. The blue and light blue regions are treated in this paper, and give negatively correlated Brownian motions in the scaling limit. 
The blue line segments are values for which an infinite-volume peanosphere scaling limit result is known and the exact correlation of the limiting Brownian motion (equivalently the limiting values of $\gamma$ and $\kappa$) is known. The horizontal blue segment corresponds to active-tree-decorated planar maps with parameter $y \in [0,1]$ and the vertical segment corresponds to various laws on bipolar-oriented planar maps. The light blue region is the set of parameter values for which the path $Z^n$ is known to converge to a negatively correlated Brownian motion but the exact correlation is unknown. Special parameter values are shown with dots. The case when $(y,z) = (0,1)$ corresponds to a uniform bipolar-oriented random planar map, as studied in~\cite{kmsw-bipolar}. The case when $(y,z) = (1,1)$ corresponds to a random planar map decorated by a uniform spanning tree. The case $(y,z) = (2,1)$ corresponds to the uniform distribution on the underlying planar map $M$, and is the only case where metric scaling limit results are known. The case $(y,z) = (1 + \sqrt 2, 1)$ corresponds to the FK Ising model. }\label{fig-parameter-graph} \end{center} \end{figure} Figure~\ref{fig-parameter-graph} illustrates the range of parameter values for which Theorems~\ref{thm-all-S} and~\ref{thm-variable-SD} (and their analogues elsewhere in the literature) apply. The value of $\protect\hyperlink{def-J}{\chi}$ when $y=0$ follows from Theorem~\ref{thm-all-S}. The value of $\protect\hyperlink{def-J}{\chi}$ when $z=1$ will be obtained in the course of proving Theorem~\ref{thm-variable-SD}. It remains an open problem to compute $\protect\hyperlink{def-J}{\chi}$ in the case when $z \neq 1$ and $y\neq0$ or to obtain any scaling limit result at all in the case when $z \in [0,1)$ and $y>0$ or when $z \neq1$ and $y \geq 1$. 
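The values of $\kappa$ at the special points on the $z=1$ segment can be recovered numerically from the correspondence $(y-1)/2 = -\cos(4\pi/\kappa)$ of Corollary~\ref{prop-active-conv} below. The following sketch (illustrative only; the function name is ours) inverts this relation for $y \in [0,3)$, which corresponds to $\kappa \in (4,12]$.

```python
import math

def kappa_from_y(y):
    """Solve (y - 1)/2 = -cos(4*pi/kappa) for kappa > 4, the peanosphere
    correspondence on the z = 1 (active spanning tree) line.  For y in
    [0, 3) the right side lies in [-1/2, 1), giving kappa in (4, 12]."""
    # cos(4*pi/kappa) = (1 - y)/2 with 4*pi/kappa in (0, pi)
    return 4 * math.pi / math.acos((1 - y) / 2)

# the special points on the z = 1 segment of the phase diagram:
print(round(kappa_from_y(0), 6))          # prints 12.0  (point (y,z) = (0,1))
print(round(kappa_from_y(1), 6))          # prints 8.0   (uniform spanning tree)
print(round(kappa_from_y(2), 6))          # prints 6.0   (uniform planar map)
print(round(kappa_from_y(1 + math.sqrt(2)), 6))  # prints 5.333333 (FK Ising, 16/3)
```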
Theorem~\ref{thm-variable-SD} combined with~\cite[Thm.~1.13]{wedges} and~\cite[Thm.~1.1]{kappa8-cov} tells us that the infinite-volume rooted spanning-tree-decorated random planar map $(M^\infty, e_0^\infty, T^\infty)$ converges in the peanosphere sense, upon rescaling, to a $\gamma$-quantum cone decorated by an independent whole-plane space-filling $\operatorname{SLE}_\kappa$ with $\gamma = 4/\sqrt\kappa$ for some $\kappa \geq 8$. Furthermore, since we know the value of $\protect\hyperlink{def-J}{\chi}$ when $z = 1$, Theorem~\ref{thm-variable-SD} together with~\cite[Thm.~2.5]{shef-burger} and Lemma~\ref{prop-activity} above implies the following. \begin{cor} \label{prop-active-conv} Suppose $\vec p \in \mathcal P$ is such that $z = 1$ and $y \in [0,3)$. Then $Z^n$ converges in law to a correlated Brownian motion $Z = (U,V)$ with \begin{equation*} \operatorname{Var}( U(t)) = \operatorname{Var}( V(t)) = \frac{1}{ y+1 } |t| \quad \operatorname{and} \quad \operatorname{Cov}( U(t), V(t)) = \frac{y-1}{2(y+1) } |t|,\quad \forall t \in \BB R. \end{equation*} Hence the scaling limit of an infinite-volume active-tree-decorated planar map with parameter $y\in [0,3)$ in the peanosphere sense is a $\gamma$-quantum cone decorated by an independent whole-plane space-filling $\operatorname{SLE}_\kappa$ with \begin{equation*} \frac{y-1}{2} =-\cos\left(\frac{4\pi}{\kappa} \right),\quad \gamma = \frac{4}{\sqrt\kappa},\quad \kappa > 4\,. \end{equation*} \end{cor} \subsection{Outline} \label{sec-outline} The remainder of this paper is structured as follows. In Section~\ref{sec-all-S}, we prove Theorem~\ref{thm-all-S}. The key observation in the proof is that if every order in $X$ is an $\so$, then the most recently added burger which has not yet been consumed is the same as the most recently added burger.
This allows us to break up the word $X$ into i.i.d.\ blocks of geometric size corresponding to increments of $X$ between the times when the type of the most recently added burger changes. Donsker's theorem applied to the change of $\protect\hyperlink{def-d-z}{\vec{d}}$ over each of the blocks then concludes the proof. The proof of Theorem~\ref{thm-variable-SD}, which is given in Section~\ref{sec-variable-SD}, is much more involved than that of Theorem~\ref{thm-all-S}. Section~\ref{sec-variable-SD} is independent of Section~\ref{sec-all-S}. The proof of Theorem~\ref{thm-variable-SD} uses many of the same ideas as the proof of~\cite[Thm.~2.5]{shef-burger}. However, the argument used in~\cite{shef-burger} does not suffice for our purposes. One of the key inputs in the proof of~\cite[Thm.~2.5]{shef-burger} is a tail bound for the law of the length of the reduced word $|X(1,n)|$ (see~\cite[Lem.~3.13]{shef-burger}). This tail bound is deduced from the fact that changing a single symbol in the word $X_1\cdots X_n$ changes the value of $\protect\hyperlink{def-theta-count}{\mathcal D}(X(1,n))$, defined as in Definition~\ref{def-theta-count}, by at most 2 (this fact implies that a certain martingale has bounded increments and allows one to apply Azuma's inequality). When we consider words with stale orders and/or duplicate burgers, the above Lipschitz property does not hold. For example, the reduction of the word $\hb \hb \hb \cb \so \so \so$ consists of a single $\cb$, but if we change the $\cb$ to an $\hb$, the reduced word has length 7. We still obtain an analogue of~\cite[Lem.~3.13]{shef-burger} in the setting of Theorem~\ref{thm-variable-SD} (see Proposition~\ref{prop-length-sup} below), but our proof of this result requires analogues of most of the other lemmas in~\cite[\S~3]{shef-burger} as well as some additional estimates. Section~\ref{sec-variable-SD} is structured as follows.
In Section~\ref{sec-chen-lemma}, we prove a monotonicity result (Lemma~\ref{prop-mean-mono}) which says that for a general choice of $p_\so$ and $p_\db$, the expected number of burgers and the expected number of orders in the reduced word $X(1,n)$ are greater than or equal to the corresponding expectations under the law where $p_\so = p_\db = p_\fo = p_\eb = 0$. Under this latter law, the process $\protect\hyperlink{def-d-z}{\vec{d}}$ of Definition~\ref{def-theta-count} is a simple random walk on $\BB Z^2$. In fact, this monotonicity holds even if we condition on an event $E$ which depends only on the one-dimensional simple random walk $i\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(1,i))$ for $i\in [1,n]_{\BB Z}$ (Definition~\ref{def-identification}). The proof proceeds by way of a careful analysis of how the length of the reduction of a finite word changes when we replace the rightmost symbol among all of the $\so$ and $\db$ symbols by an element of $\Theta_0$. In Section~\ref{sec-few-SD}, we prove a result to the effect that the number of unidentified $\db$'s and $\so$'s in $X(1,n)$ is typically negligible in comparison to the number of unmatched $\ho$'s or $\co$'s (Lemma~\ref{prop-few-SD}). Since the $\db$'s and $\so$'s in the reduced word are the only things which prevent the walk $\protect\hyperlink{def-d-z}{\vec{d}}$ of Definition~\ref{def-theta-count} from having independent increments, this result tells us that macroscopic increments of $\protect\hyperlink{def-d-z}{\vec{d}}$ are in some sense ``close'' to being independent. This fact will be used frequently in the later subsections. To prove Lemma~\ref{prop-few-SD}, we use the monotonicity lemma from Section~\ref{sec-chen-lemma} to show that the expected number of unmatched $\ho$'s added to the word between successive times that unidentified $\db$'s and $\so$'s are added is infinite.
In Section~\ref{sec-J-basic}, we study the time $\protect\hyperlink{def-J}{J}$, which is the smallest $j\in \mathbb N$ such that $X(-j,-1)$ contains an $\hb$ or $\cb$. The analogue of the time $\protect\hyperlink{def-J}{J}$ also plays a key role in~\cite{shef-burger,gms-burger-cone,gms-burger-local,gms-burger-finite}. The importance of $\protect\hyperlink{def-J}{J}$ in our setting is that the burger $X_{-\protect\hyperlink{def-J}{J}}$ determines the identification of the symbol $X_0$. We will prove a number of facts about $\protect\hyperlink{def-J}{J}$, the most important of which are Proposition~\ref{prop-J-finite} (which shows that $\protect\hyperlink{def-J}{\chi} := \mathbb E(|X(-\protect\hyperlink{def-J}{J},-1)| ) < \infty$), Lemma~\ref{prop-J-count-mean} (which shows that the expected number of burgers and the expected number of orders in $X(-\protect\hyperlink{def-J}{J},-1)$ are the same), and Lemma~\ref{prop-J-limit} (a uniform integrability result for $|X(-n,-1)|$ on the event $\{\protect\hyperlink{def-J}{J}> n\}$). Section~\ref{sec-var-bound} contains the calculation which leads to the formula for the variances and covariances of the limiting Brownian motions in Theorem~\ref{thm-variable-SD}. This calculation is based on the results of Section~\ref{sec-J-basic} and is similar to~\cite[\S~3.1]{shef-burger}. Section~\ref{sec-moment-bound} shows that $\mathbb E(|X(1,n)|) \asymp n^{1/2}$. The upper bound follows from an analysis of the times at which burgers of a given type are added when we read the word backwards. The upper bound for the number of $\db$'s and $\so$'s in $X(1,n)$ from Lemma~\ref{prop-few-SD} plays an important role in the proof of this estimate since it allows us to avoid worrying about such unmatched symbols. The proof of the corresponding lower bound uses a comparison to a simple random walk on $\BB Z^2$ based on Lemma~\ref{prop-mean-mono}.
In Section~\ref{sec-word-length}, we build on the results of Section~\ref{sec-moment-bound} to prove an exponential upper tail bound for $n^{-1/2} |X(1,n)|$ analogous to~\cite[Lem.~3.13]{shef-burger} (Proposition~\ref{prop-length-sup}). In Section~\ref{sec-variable-SD-proof}, we use this tail bound to deduce tightness of the law of the re-scaled random walk $Z^n$ in the local uniform topology, then conclude the proof of Theorem~\ref{thm-variable-SD} by using our upper bound for the number of $\db$'s and $\so$'s in $X(1,n)$ to show that any subsequential limiting law must have independent, stationary increments. Section~\ref{sec-open-problems} contains some open problems related to the model studied in this paper. Appendix~\ref{sec-prelim} proves some basic facts about the reduction operation $\protect\hyperlink{def-reduce}{\mathcal R}$ and the bi-infinite word $X$. \bigskip \noindent \textbf{Acknowledgements.} We thank the Isaac Newton Institute in Cambridge, UK, where this work was started, for its hospitality. Part of this work was completed while E.G.\ was an intern with the Microsoft Research Theory group. E.G.\ was partially supported by the U.S.\ Department of Defense via an NDSEG fellowship. When this project was completed, A.K. was supported by ETH Z\"urich and was part of NCCR SwissMAP of the Swiss National Science Foundation. J.M.\ was supported by NSF grant DMS-1204894. We thank two anonymous referees for helpful comments on an earlier version of this paper. \section{Scaling limit when all orders are stale} \label{sec-all-S} In this section we prove Theorem~\ref{thm-all-S}, which yields the scaling limit of the law of the walk $Z^n$ when all orders are $\so$. Throughout this section we use the notation of Sections~\ref{sec-burger} and~\ref{sec-result} with $p_\fo = 0$ and $p_\so = 1$; to lighten notation, we set \begin{equation*} p \colonequals p_\db\quad \operatorname{and}\quad q \colonequals p_\eb.
\end{equation*} We recall in particular the bi-infinite word $X$ and its identification $X' = \protect\hyperlink{def-identification}{\mathcal I}(X)$. The idea of the proof of Theorem~\ref{thm-all-S} is to break up the word $X$ into independent and (almost) identically distributed blocks of random size such that, within each block, the identifications of the symbols $\db$, $\eb$, and $\so$ are determined. We then apply Donsker's invariance principle to a certain random walk obtained by summing over the blocks. Let $\iota_0$ be the smallest $i \geq 0$ for which $X_{i } = \hb$. Inductively, if $k\in\BB N$ and $\iota_{k-1}$ has been defined, let $\iota_k$ be the smallest $i \geq \iota_{k-1}+1$ for which \begin{equation*} \begin{dcases} X_i \in \left\{\eb, \cb\right\} \quad &\text{$k$ odd}, \\ X_i \in \left\{\eb, \hb \right\} \quad &\text{$k$ even}. \end{dcases} \end{equation*} (In other words, the sequence $(\iota_k)_{k\ge 0}$ is the sequence of nonnegative indices which correspond to alternation in the type of burger produced.) Let \begin{equation} \label{eqn-inc-def} \xi_k = \left(\xi_k^\ho,\, \xi_k^\co \right) \colonequals \begin{dcases} \left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\db|\eb}\left(X_{\iota_{k-1} } \cdots X_{\iota_k-1 } \right),\, - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\so}\left(X_{\iota_{k-1} } \cdots X_{\iota_k-1 }\right) \right),\quad &\text{$k$ odd}\\ \left( - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\so}\left(X_{\iota_{k-1} } \cdots X_{\iota_k-1 }\right),\, \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\cb|\db|\eb}\left(X_{\iota_{k-1} } \cdots X_{\iota_k-1 }\right)\right),\quad &\text{$k$ even}. \end{dcases} \end{equation} (There are no $\eb$ symbols in the subword $X_{\iota_{k-1} } \cdots X_{\iota_k-1 }$ except possibly for $X_{\iota_{k-1}}$.) Let \begin{equation} \label{eqn-inc-sum-def} \Xi_k = (\Xi_k^\ho,\Xi_k^\co) \colonequals \sum_{j=1}^k \xi_j \,. 
\end{equation} \begin{lem} \label{prop-inc-walk} In the setting described just above, we have the following. \begin{enumerate} \item For each $k\in\BB N$, we have $\protect\hyperlink{def-d-z}{\vec{d}}\left( \iota_k-1 \right) - \protect\hyperlink{def-d-z}{\vec{d}}(\iota_0-1) = \Xi_k$. \label{item-inc-walk-sum} \item For each odd (resp.\ even) $k \in\BB N$, we have $\iota_k - \iota_{k-1} = \xi_k^\ho - \xi_k^\co$ (resp.\ $\iota_k - \iota_{k-1} = \xi_k^\co - \xi_k^\ho $). \label{item-inc-walk-time}\\[-\baselineskip] \item The random variables $\xi_k$ for $k\in\BB N$ are independent. \label{item-inc-walk-ind} \item For each $k\in\BB N$, the law of $\iota_k - \iota_{k-1}$ is geometric with success probability $(1-p + q)/4$. If $k$ is odd (resp.\ even), then given $\iota_k-\iota_{k-1}$ the symbols of $X_{\iota_{k-1}+1}\cdots X_{\iota_{k}-1}$ are i.i.d., and each is a burger with probability $(1+p-q)/(3+p-q)$. In particular, the conditional law of $\xi_k^\ho -1$ (resp.\ $\xi_k^\co -1$) given $\iota_k - \iota_{k-1}$ is the binomial distribution with parameters $\iota_k - \iota_{k-1}-1$ and $(1+p-q)/(3+p-q)$. \label{item-inc-walk-law} \end{enumerate} \end{lem} \begin{proof} Since the only orders are of type $\so$, for any $i\in\BB Z$ the most recently added burger which hasn't yet been consumed is the same as the most recently added burger. By the definition of the times $\iota_*$, if $\iota_{k-1}\leq i <\iota_k$, then the top burger is of type $\hb$ if $k$ is odd and of type $\cb$ if $k$ is even. For simplicity we assume throughout the rest of the proof that $k$ is odd; the case when $k$ is even is symmetric. For $\iota_{k-1}\leq i <\iota_k$, if $X_i$ is a burger, then $X'_i=\hb$, and if $X_i$ is an order, then $X'_i=\co$, which implies $\protect\hyperlink{def-d-z}{\vec{d}}(\iota_k-1)-\protect\hyperlink{def-d-z}{\vec{d}}(\iota_{k-1}-1)=\xi_k$. Summing this relation and the analogous relation in the case when $k$ is even gives assertion~\ref{item-inc-walk-sum}. 
Since $k$ is assumed to be odd, the total number of burgers and orders in $X_{\iota_{k-1}}\cdots X_{\iota_k}$ is $\xi^\ho_k-\xi^\co_k$, which implies assertion~\ref{item-inc-walk-time}. Since $\iota_{k-1}$ for $k\in\BB N$ is a stopping time for the filtration generated by $X_1\cdots X_n$ for $n\in\BB N$, the strong Markov property implies $X_{\iota_{k-1} +1} \cdots X_{\iota_k}$ is independent of $X_1\cdots X_{\iota_{k-1}}$, which implies assertion~\ref{item-inc-walk-ind}. In view of the strong Markov property (and again recalling that $k$ is assumed to be odd), we see that $X_{\iota_{k-1}+1}\cdots X_{\iota_{k}}$ is a string of i.i.d.\ symbols terminated at the first $\cb$ or $\eb$. By~\eqref{eqn-theta-prob}, the terminating symbol occurs with probability $\PP(\eb) + \PP(\cb) = p_\eb/2 +(1-p_\eb-p_\db)/4 =(1-p+q)/4$, which implies the geometric law for $\iota_k-\iota_{k-1}$. Given the length of the string $X_{\iota_{k-1}+1}\cdots X_{\iota_{k}}$, each symbol except the last is a burger independently with probability \begin{equation*} \frac{ \PP(\hb) + \PP(\db) }{ \PP(\hb) + \PP(\db) + \PP(\so) } = \frac{(1+p-q)/4}{(3+p-q)/4}, \end{equation*} which finishes proving assertion~\ref{item-inc-walk-law}. \end{proof} \begin{prop} For odd $k\in\BB N$, \begin{equation} \label{eqn-inc-walk-var} \begin{aligned} \BB E\left(\iota_k - \iota_{k-1} \right) &= \frac{4}{1-p+q} \,,& \operatorname{Var}\left(\iota_k - \iota_{k-1} \right) &= \frac{4(3+p-q)}{(1-p+q)^2} \,, \\ \BB E\left(\xi_k^\ho \right) &= \frac{2}{1-p+q}\,, & \operatorname{Var}\left(\xi_k^\ho \right) &= \frac{2 (1 + p - q)}{(1 - p + q)^2} \,,\\ \BB E\left(\xi_k^\co \right) &= -\frac{2}{1-p+q}\,,\quad& \operatorname{Var}\left( \xi_k^\co \right) &= \frac{2(3-p+q)}{(1-p+q)^2} \,,\\ &&\operatorname{Cov}\left(\xi_k^\ho, \xi_k^\co\right) &= -\frac{2(1+p-q)}{ (1-p+q)^2} \,. \end{aligned} \end{equation} For even $k\in\BB N$, the same holds with $\xi_k^\ho$ and $\xi_k^\co$ interchanged. 
\end{prop} \begin{proof} Let $Z_i$ be the indicator random variable for the word $X_{\iota_{k-1}+1}\cdots X_{\iota_k-1}$ having length at least $i$ and having a burger in position~$i$, and let $Z^\so_i$ be the indicator variable for the word having length at least $i$ and having an order in position~$i$. For odd $k$, $\xi_k^\ho=1+\sum_{i=1}^\infty Z_i$ and $\xi_k^\co=-\sum_{i=1}^\infty Z^\so_i$, and vice versa for even $k$. Assertions~\ref{item-inc-walk-time} and~\ref{item-inc-walk-law} of Lemma~\ref{prop-inc-walk} yield $\BB E[Z_i]$, $\BB E[Z^\so_i]$, $\BB E[Z_i Z_j]$, $\BB E[Z^\so_i Z^\so_j]$, and $\BB E[Z_i Z^\so_j]$, from which \eqref{eqn-inc-walk-var} follows by a short calculation. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm-all-S}] For $k \in \BB N \cup \{0\}$ let $\Xi_k^\ho$ and $\Xi_k^\co$ be as in~\eqref{eqn-inc-sum-def}. Extend $\Xi^\ho$ and $\Xi^\co$ from $\BB N \cup \{0\}$ to $[0,\infty)$ by linear interpolation. For $t \geq 0$ and $n\in\BB N$, let \begin{equation*} \widehat U^n(t) \colonequals n^{-1/2}\, \Xi_{2n t}^\ho, \quad \widehat V^n(t) \colonequals n^{-1/2}\, \Xi_{2n t}^\co, \quad \operatorname{and} \quad \widehat Z^n(t) \colonequals \left(\widehat U^n(t), \widehat V^n(t) \right). \end{equation*} It follows from~\eqref{eqn-inc-walk-var} that for each $k\in\BB N$, \begin{align*} &\BB E\left(\xi_{2k-1}^\ho + \xi_{2k}^\ho \right) = \BB E\left(\xi_{2k-1}^\co + \xi_{2k}^\co \right) = 0 \,,\\ & \operatorname{Var}\left( \xi_{2k-1}^\ho + \xi_{2k}^\ho\right) = \operatorname{Var}\left( \xi_{2k-1}^\co + \xi_{2k}^\co \right) = \frac{8}{(1-p+q)^2}\,, \\ &\operatorname{Cov}\left( \xi_{2k-1}^\ho + \xi_{2k}^\ho, \xi_{2k-1}^\co + \xi_{2k}^\co\right) = -\frac{4(1+p-q)}{ (1-p+q)^2} \,. \end{align*} By Lemma~\ref{prop-inc-walk}, the pairs $(\xi_{2k-1}^\ho + \xi_{2k}^\ho, \xi_{2k-1}^\co + \xi_{2k}^\co)$ for each $k \in \BB N$ are i.i.d.
By Donsker's invariance principle (see~\cite[Thm.~4.3.5]{whitt-limits-book} for a statement in general dimension), $\widehat Z^n$ converges in law as $n\rightarrow \infty$ in the topology of uniform convergence on compacts to a pair $\widehat Z = (\widehat U, \widehat V)$ of correlated Brownian motions with $\widehat Z(0) = 0$ and \begin{equation} \label{eqn-hat-bm-cov-all-S} \operatorname{Var}(\widehat U(t)) = \operatorname{Var}(\widehat V(t)) = \frac{8t }{(1-p+q)^2} \quad \operatorname{and} \quad \operatorname{Cov}(\widehat U(t), \widehat V(t)) = -\frac{4(1+p-q) t}{ (1-p+q)^2},\quad \forall t \geq 0. \end{equation} By the law of large numbers, a.s.\ \begin{equation} \label{eqn-time-convergence} \lim_{k\rightarrow\infty} k^{-1} \iota_{\lfloor t k \rfloor} = \frac{4 t}{1-p+q},\quad \forall t\in \BB Q. \end{equation} By the Skorokhod representation theorem, we can find a coupling of a sequence of words $(X^n)$, each with the same law as $X$, with the correlated Brownian motion $\widehat Z$ such that (with $\widehat Z^n$ and $\iota_k^n$ defined with respect to the word $X^n$) we a.s.\ have $\widehat Z^n \rightarrow \widehat Z$ and $k^{-1} \iota_{\lfloor t k \rfloor}^n \rightarrow 4t/(1-p+q)$ for each $t\in\BB Q$. Combining~\eqref{eqn-time-convergence} with the fact that each coordinate of $Z^n$ is monotone between consecutive renewal times, and the continuity of Brownian motion, we obtain that \begin{equation*} (t \mapsto Z^n(t)) \xrightarrow{n\rightarrow\infty} \left( t \mapsto \widehat Z\left(\frac{1-p+q}{8} t \right) \right) \end{equation*} in the topology of uniform convergence on compacts of $[0,\infty)$. By~\eqref{eqn-hat-bm-cov-all-S}, $t \mapsto \widehat Z\left(\frac{1-p+q}{8} t \right)$ is a Brownian motion with variances and covariances as in~\eqref{eqn-bm-cov-all-S}. We thus obtain $Z^n|_{[0,\infty)} \rightarrow Z|_{[0,\infty)}$ in law, with $Z$ as in the theorem statement.
Since the law of the bi-infinite word $X$ is translation invariant, we also have that $\left(Z^n - Z^n(s_0)\right)|_{[s_0,\infty)} \rightarrow \left(Z - Z (s_0)\right)|_{[s_0,\infty)}$ in law for each $s_0\in\BB R$. Since $Z^n(0) = Z(0) = 0$ for each $n\in\BB N$, for $s_0 < 0$ and $t\geq s_0$, \begin{equation*} Z^n(t) = \left( Z^n(t) - Z^n(s_0) \right) - \left( Z^n(0) - Z^n(s_0) \right) \end{equation*} and the analogous relation holds for $Z$. From this we infer that $Z^n|_{[s_0, \infty)} \rightarrow Z|_{[s_0, \infty)}$ in law for each $s_0 \in \BB R$. Since $s_0$ can be made arbitrarily negative, we conclude that $Z^n \rightarrow Z$ in law in the topology of uniform convergence on compact subsets of $\mathbb R$. \end{proof} \section{Scaling limit with stale orders and duplicate burgers} \label{sec-variable-SD} In this section we prove Theorem~\ref{thm-variable-SD}. Since the paths $Z^n$ are deterministic functions of the identified word $X' = \protect\hyperlink{def-identification}{\mathcal I}(X)$, Corollary~\ref{prop-identification-law} implies that we only need to prove Theorem~\ref{thm-variable-SD} in the case when $p_\fo = p_\eb = 0$. Throughout this section, we fix $p\in [0,1)$ and $q\in [0,1)$ and let $\PP^{p,q}$ denote the law of the bi-infinite word $X$ whose symbols are i.i.d.\ samples from the law~\eqref{eqn-theta-prob} with $p_\fo = p_\eb = 0$, $p_\so = p$, and $p_\db=q$. Let $\BB E^{p,q}$ denote the corresponding expectation. When there is no ambiguity (i.e.\ only one pair $(p,q)$ is under consideration) we write $\PP = \PP^{p,q}$ and $\BB E = \BB E^{p,q}$. Since we think of $p_\so$ and $p_\db$ as being fixed, we abuse notation and allow ``constants'' to depend on $p_\so$ and $p_\db$, including the implicit constants in asymptotic notation. \subsection{Comparison of expected lengths of reduced words} \label{sec-chen-lemma} The following lemma is one of our main tools for estimating expectations of quantities associated with the word $X$.
\begin{lem} \label{prop-mean-mono} Suppose we are in the setting described at the beginning of this section. Let $n\in\BB N$ and let $E$ be an event which is measurable with respect to the $\sigma$-algebra generated by $\left\{\protect\hyperlink{def-theta-count}{\mathcal C}(X(1,i))\right\}_{i \in [1,n]_\BB Z}$ (where here $\protect\hyperlink{def-theta-count}{\mathcal C}$ is as in Definition~\ref{def-theta-count}, i.e., $\protect\hyperlink{def-theta-count}{\mathcal C}=\protect\hyperlink{def-theta-count}{\mathcal B}-\protect\hyperlink{def-theta-count}{\mathcal O}$). For each $(p,q) \in [0,1] \times [0,1)$, we have (in the notation of Definition~\ref{def-theta-count}) \begin{equation} \label{eqn-mean-mono-B} \BB E^{p,q}\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left(X(1,n)\right) \BB 1_E \right) \geq \BB E^{0,0}\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left(X(1,n)\right) \BB 1_E \right) \end{equation} and \begin{equation} \label{eqn-mean-mono-O} \BB E^{p,q}\left( \protect\hyperlink{def-theta-count}{\mathcal O}\left(X(1,n)\right) \BB 1_E \right) \geq \BB E^{0,0}\left( \protect\hyperlink{def-theta-count}{\mathcal O}\left(X(1,n)\right) \BB 1_E \right). \end{equation} \end{lem} The intuitive reason why we expect Lemma~\ref{prop-mean-mono} to be true is that it is ``harder'' for a $\db$ or $\so$ to find a match than it is for an element of $\Theta_0$ to find a match, since the $\db$ or $\so$ has to be identified, then matched. So, replacing $\db$'s and $\so$'s by elements of $\Theta_0$ should tend to reduce the number of burgers and orders in the word. To prove the lemma, we will iteratively replace the rightmost symbol amongst all of the $\db$'s and $\so$'s in $X_1\dots X_n$ by an $\hb$ or $\cb$ with equal probability (if it is a $\db$) or by an $\ho$ or $\co$ with equal probability (if it is an $\so$) and argue that each of these replacements reduces the expected number of burgers and orders in $X(1,n)$.
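The sensitivity of the reduced word to such replacements can be checked concretely in a short simulation. The sketch below uses a hypothetical single-letter encoding (\texttt{H}/\texttt{C} for $\hb$/$\cb$, \texttt{h}/\texttt{c} for $\ho$/$\co$, \texttt{D} for $\db$, \texttt{S} for $\so$) and the identification convention consistent with the proof of Lemma~\ref{prop-inc-walk}: an $\so$ is identified as the order type opposite to the freshest unconsumed burger, a $\db$ as a burger of the same type as the freshest unconsumed burger, and symbols with no preceding unconsumed burger remain unidentified.

```python
def reduce_word(word):
    """Sketch of the reduction R with identification of 'D' and 'S'.

    Unmatched orders (and unidentified 'D'/'S') accumulate on the left,
    unconsumed burgers on the right, matching the (orders)(burgers) form
    of reduced words used in the paper.
    """
    burgers = []  # stack of unconsumed burgers, oldest first
    left = []     # unmatched orders and unidentified 'D'/'S' symbols
    for sym in word:
        # Identify 'D' (same type as freshest burger) and
        # 'S' (order type opposite to the freshest burger).
        if sym == 'D' and burgers:
            sym = burgers[-1]
        elif sym == 'S' and burgers:
            sym = 'h' if burgers[-1] == 'C' else 'c'
        if sym in ('H', 'C'):
            burgers.append(sym)
        elif sym in ('h', 'c'):
            want = 'H' if sym == 'h' else 'C'
            for i in range(len(burgers) - 1, -1, -1):
                if burgers[i] == want:  # consume the freshest burger of that type
                    del burgers[i]
                    break
            else:
                left.append(sym)  # no matching burger: the order stays unmatched
        else:
            left.append(sym)  # unidentified 'D' or 'S'
    return left + burgers
```

For instance, \texttt{reduce\_word("HHHCSSS")} returns \texttt{['C']}, while \texttt{reduce\_word("HHHHSSS")} has length 7, reproducing the example from the outline in which changing a single symbol changes the reduced length by more than 2.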
The key tool in the proof is Lemma~\ref{prop-word-flip} below. \begin{remark} The proof of Lemma~\ref{prop-mean-mono} is based on an argument of Linxiao Chen which appears in the proof of Lemma~5 in the original arXiv version of~\cite{chen-fk}. Chen's argument does not in fact yield the stochastic domination statement claimed in his Lemma~5, but does prove the analogue of Lemma~\ref{prop-mean-mono} in the setting where $p_\so=p_\db=p_\eb=0$ and $p_\fo \in [0,1]$. \end{remark} Define an involution $\theta\mapsto \theta^\dagger$ on $\Theta$ by \begin{equation} \label{eqn-involution} \begin{split} &\hb^\dagger = \cb \quad \cb^\dagger = \hb \quad \ho^\dagger = \co \quad \co^\dagger = \ho \\ &\db^\dagger = \db \quad \eb^\dagger = \eb \quad \fo^\dagger =\fo \quad \so^\dagger =\so. \end{split} \end{equation} For a word $x = x_1 \cdots x_{|x|}$ consisting of elements of $\Theta $, we write $x^\dagger = x_1^\dagger \cdots x_{|x|}^\dagger$. For such a word $x$, we write $s(x)$ for the index of the rightmost symbol among all of the $\db$ or $\so$ symbols in $x$ (or $s(x) = 0$ if no such symbol exists). We define \[ x^\Hc = \begin{cases} \text{word obtained from $x$ by replacing $x_{s(x)}$ with $\ho$} & \text{$s(x)>0$ and $x_{s(x)} = \so$}\\ \text{word obtained from $x$ by replacing $x_{s(x)}$ with $\cb$} & \text{$s(x)>0$ and $x_{s(x)} = \db$}\\ x & s(x) = 0\,, \end{cases} \] and we define $x^\Ch$ similarly but with $\co$ and $\hb$ in place of $\ho$ and $\cb$. We write $r(x)$ for the largest $k \in [1,s(x)-1]_\BB Z$ for which $x_k=\hb$ or $x_k=\cb$ and $x_k$ has no match in $x_{1} \cdots x_{s(x)-1}$ (or $r(x) = 0$ if no such $k$ exists). We define an involution \begin{equation*} \Psi(x) = \begin{cases} \left( x_1 \cdots x_{r(x)-1} \right)^\dagger x_{r(x)} \cdots x_{s(x)} \left( x_{s(x)+1} \cdots x_{|x|} \right)^\dagger & r(x)>0\\ x_1 \cdots x_{s(x)} \left( x_{s(x)+1} \cdots x_{|x|} \right)^\dagger & r(x)=0.
\end{cases} \end{equation*} We make the following elementary observations about the above operations. \begin{enumerate} \item Involution commutes with reduction, i.e.\ $\protect\hyperlink{def-reduce}{\mathcal R}(x^\dagger) = \protect\hyperlink{def-reduce}{\mathcal R}(x)^\dagger$ for all words $x$. \item $s(\Psi(x)) = s(x)$ and $r(\Psi(x)) = r(x)$ for all words $x$, and hence $\Psi(\Psi(x)) = x$. \end{enumerate} \begin{lem} \label{prop-word-flip} Let $x = x_1\cdots x_{|x|}$ be a word consisting of elements of $\Theta_0 \cup \left\{\db, \so \right\}$. If \begin{equation} \label{eqn-word-flip-hyp} \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( x^\Hc )\right) > \protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( x) ) \end{equation} then \begin{equation} \label{eqn-word-flip-conc} \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( x^\Hc) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( x ) ) + 1 \quad \operatorname{and} \quad \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(x)^\Hc ) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(x) ) ) - 1. \end{equation} \end{lem} To prove Lemma~\ref{prop-word-flip}, we first explain why~\eqref{eqn-word-flip-hyp} implies that $x_{s(x)}$ is identified in $x$ (i.e., $r(x) > 0$) and that the word $\protect\hyperlink{def-reduce}{\mathcal R}(x_{r(x)} \dots x_{s(x)-1})$ must take the form $\co^n \hb^m$ for some $m\geq 1$ and $n\geq 0$ (where here $\co^n$ denotes the word which is a concatenation of $n$ $\co$'s, etc.); see~\eqref{eqn-only-hC}. 
By means of~\eqref{eqn-reduced-decomp}, we then reduce to the case when $\protect\hyperlink{def-reduce}{\mathcal R}( x_1\dots x_{r(x)-1}) $ (resp.\ $\protect\hyperlink{def-reduce}{\mathcal R}( x_{s(x)} \dots x_{|x|})$) contains only $\hb$'s and $\cb$'s (resp.\ $\ho$'s and $\co$'s). This reduction together with~\eqref{eqn-only-hC} will allow us to write down explicit expressions for the quantities in~\eqref{eqn-word-flip-conc} in terms of $n$ and $m$. Comparing these expressions will yield~\eqref{eqn-word-flip-conc}. \begin{proof}[Proof of Lemma~\ref{prop-word-flip}] If \eqref{eqn-word-flip-hyp} holds, then $x^\Hc \neq x$, so the word $x$ contains at least one $\db$ or $\so$. Since $x_{s(x)}$ is the rightmost $\db$ or $\so$ in the word $x$, replacing $x_{s(x)}$ by $\cb$ or $\ho$ does not change the identification of any symbol in $x_{s(x)+1} \cdots x_{|x|}$. We first argue that~\eqref{eqn-word-flip-hyp} implies that $x_{s(x)}$ is identified in $x$, and hence that $r(x)>0$. Indeed, suppose $x_{s(x)}$ is not identified in the word $x$. Then the reduced word $x(1,s(x)-1)$ contains no $\hb$ or $\cb$ symbols, since the presence of any such symbol would identify $x_{s(x)}$ (recall~\eqref{eqn-theta-relations'}). In this case, the reduced words $\protect\hyperlink{def-reduce}{\mathcal R}(x)$ and $\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)$ would have the same set of symbols except the symbol coming from position $s(x)$, and possibly an order in $x_{s(x)+1} \cdots x_{|x|}$ which may consume $x^\Hc_{s(x)}$ if it is a burger. But then $\protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)\right) \leq \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x))$, contradicting~\eqref{eqn-word-flip-hyp}. Henceforth assume that~\eqref{eqn-word-flip-hyp} holds, which implies (by the preceding paragraph) that $x_{s(x)}$ is identified in $x$. 
If $r(x)<k<s(x)$ and $x_k$ is a burger, then by definition of $r(x)$, either $x_k\in\{\hb,\cb\}$ but is consumed by an order in $x_{r(x)}\cdots x_{s(x)}$, or else $x_k=\db$. If $x_k=\db$ and is identified as $x_{r(x)}^\dagger$, consider the first such $k$. By definition of $r(x)$, the burger that identifies $x_k$ is consumed in $x_{r(x)}\cdots x_{s(x)}$, at a time by which $x_k$ must therefore also have been consumed. Thus the only burgers in $\protect\hyperlink{def-reduce}{\mathcal R}(x_{r(x)}\cdots x_{s(x)})$ are $\db$'s that are identified as $x_{r(x)}$ and the burger $x_{r(x)}$ itself. If $x_{s(x)} = \so$, then since $\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc) \neq \protect\hyperlink{def-reduce}{\mathcal R}(x)$, it must be that $x_{s(x)}$ corresponds to a $\co$ symbol in the identification $\protect\hyperlink{def-identification}{\mathcal I}(x)$, which in turn implies $x_{r(x)} = \hb$. If on the other hand $x_{s(x) } = \db$, then $x_{s(x)}$ must be identified by an $\hb$ in the word $x$, which again implies $x_{r(x)} = \hb$, since from the previous paragraph we know that all potential intermediate burgers would be $\db$'s. Since the burger $x_{r(x)}=\hb$ is not consumed in $\protect\hyperlink{def-reduce}{\mathcal R}(x_{r(x)}\cdots x_{s(x)})$, any order in this reduced word is identified and must be of type $\co$. Regardless of $x_{s(x)}$, \begin{equation} \label{eqn-only-hC} \protect\hyperlink{def-reduce}{\mathcal R}\left(x_{r(x)}\cdots x_{s(x)-1}\right) = \co^{n} \hb^{m} \quad\quad\quad\text{with $m\geq1$ and $n\geq0$.} \end{equation} Write $x(1, r(x)-1) = Uu$ and $x(s(x)+1,|x|) = Vv$, where $U$ and $V$ are words consisting of only orders and $\db$'s, and $u$ and $v$ are words consisting of only $\hb$'s and $\cb$'s. By definition of $s(x)$, $V$ contains no $\so$ or $\db$. Let $\alpha$ denote the identification of $x_{s(x)}$ in $x_{r(x)}\cdots x_{s(x)}$, which is either $\co$ or $\hb$. 
By the relation $\mathcal R(\mathcal R(x)\mathcal R(y)) = \mathcal R(xy)$ (Lemma~\ref{prop-associative}) and the commutativity of $\hb$ with $\co$, \begin{equation} \label{eqn-reduced-decomp} \begin{aligned} \protect\hyperlink{def-reduce}{\mathcal R}(x) &= U \protect\hyperlink{def-reduce}{\mathcal R}\left( u \hb^m \co^n \alpha V \right) v, & \protect\hyperlink{def-reduce}{\mathcal R}\left(x^\Hc \right) &= U \protect\hyperlink{def-reduce}{\mathcal R}\left( u \hb^m \co^n \alpha^\dagger V \right) v\,, \\ \protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x)) &= U^\dagger \protect\hyperlink{def-reduce}{\mathcal R}\left( u^\dagger \hb^m \co^{n} \alpha V^\dagger \right) v^\dagger,\quad & \protect\hyperlink{def-reduce}{\mathcal R}\left( \Psi(x)^\Hc \right) &= U^\dagger \protect\hyperlink{def-reduce}{\mathcal R}\left( u^\dagger \hb^m \co^n \alpha^\dagger V^\dagger \right) v^\dagger \,. \end{aligned} \end{equation} From~\eqref{eqn-reduced-decomp} we see that changing $U$ and $v$ while leaving the other words fixed does not change $\protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( x^\Hc) \right) - \protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( x ) )$ or $\protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(x)^\Hc ) \right) - \protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(x) ) )$, so we assume without loss of generality that $U = v = \emptyset$. Under this assumption, the words $\protect\hyperlink{def-reduce}{\mathcal R}(x)$ and $\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x))$ both take the form $\protect\hyperlink{def-reduce}{\mathcal R}(y Y)$, where $y$ is a word with only hamburgers and cheeseburgers and $Y$ is a word with only hamburger orders and cheeseburger orders. 
If $\alpha$ is an order, then $\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)$ and $\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x)^\Hc)$ also take the form $\protect\hyperlink{def-reduce}{\mathcal R}(y Y)$, but if $\alpha$ is a burger and $n>0$, then $\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)$ and $\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x)^\Hc)$ take the form $\protect\hyperlink{def-reduce}{\mathcal R}(y \co^n \cb Y)$ (where in both cases, as above, $y$ denotes a word with only $\cb$'s and $\hb$'s, and $Y$ a word with only $\co$'s and $\ho$'s). For convenience we define \begin{align*} \Delta_\hb &\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left(u \right) - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}\left(V \right) \\ \Delta_\cb &\colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\cb}\left(u \right) - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\co}\left(V \right) \,. \end{align*} Suppose first $\alpha=\co$. From \eqref{eqn-reduced-decomp} we see \begin{equation*} \begin{split} \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x)) &= \left( \Delta_\hb + m \right) \vee 0 + \left(\Delta_\cb - n-1 \right) \vee 0 \\ \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) &= \left( \Delta_\hb + m -1\right) \vee 0 + \left(\Delta_\cb - n \right) \vee 0 \end{split} \end{equation*} Since $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) > \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x))$ it follows that $\Delta_\hb \leq -m$ and $\Delta_\cb \geq n+1$, and hence $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x))+1$, as claimed. 
From \eqref{eqn-reduced-decomp} together with $\Delta_\hb\leq -1$ and $\Delta_\cb\geq 1$, we see \begin{equation*} \begin{split} \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x))) &= \left( \Delta_\cb + m \right) \vee 0 + \left(\Delta_\hb - n-1 \right) \vee 0 = (\Delta_\cb+m)+0\\ \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x^\Hc))) &= \left( \Delta_\cb + m -1\right) \vee 0 + \left(\Delta_\hb - n \right) \vee 0 = (\Delta_\cb+m-1)+0 \end{split} \end{equation*} so $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x^\Hc))) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x)))-1$, as claimed. Suppose next $\alpha=\hb$. From \eqref{eqn-reduced-decomp} we see \begin{equation*} \begin{split} \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x)) &= \left( \Delta_\hb + m + 1 \right) \vee 0 + \left(\Delta_\cb - n \right) \vee 0 \\ \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) &= \left( \Delta_\hb + m \right) \vee 0 + [((\protect\hyperlink{def-theta-count}{\mathcal N\!}_\cb(u)-n)\vee 0)+1-\protect\hyperlink{def-theta-count}{\mathcal N\!}_\co(V)]\vee 0 \end{split} \end{equation*} The nested-$\vee$ expression arises because $\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)$ takes the form $\protect\hyperlink{def-reduce}{\mathcal R}(y \co^n \cb Y)$. 
Since $\Delta_\cb=\protect\hyperlink{def-theta-count}{\mathcal N\!}_\cb(u)-\protect\hyperlink{def-theta-count}{\mathcal N\!}_\co(V)$ and $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) > \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x))$, it follows (by a short contradiction argument, which is needed because of the nested-$\vee$ expression) that $\Delta_\hb\leq-m-1$ and \[ (\protect\hyperlink{def-theta-count}{\mathcal N\!}_\cb(u)-n)\vee 0 \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_\co(V), \] which in turn implies either $\protect\hyperlink{def-theta-count}{\mathcal N\!}_\co(V)=0$ or $\Delta_\cb\geq n$. In either case, $\Delta_\cb\geq 0$. We also see $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x^\Hc)) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(x))+1$, as claimed. Referring to \eqref{eqn-reduced-decomp} again, and using from above that $\Delta_\hb\le -2$ and $\Delta_\cb\ge 0$, we see \begin{align*} \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x))) &= \left( \Delta_\cb + m + 1 \right) \vee 0 + \left(\Delta_\hb - n \right) \vee 0 = (\Delta_\cb+m+1)+0\\ \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x^\Hc))) &= \left( \Delta_\cb + m \right) \vee 0 + [((\protect\hyperlink{def-theta-count}{\mathcal N\!}_\hb(u)-n)\vee 0)+1-\protect\hyperlink{def-theta-count}{\mathcal N\!}_\ho(V)]\vee 0 \\ & = (\Delta_\cb+m)+[((\Delta_\hb-n)\vee (-\protect\hyperlink{def-theta-count}{\mathcal N\!}_\ho(V)))+1]\vee 0 \intertext{Since $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left(u \right) - \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}\left(V \right)=\Delta_\hb\leq-m-1\leq -2$, it follows that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_\ho(V) \geq 2$, and so}
\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x^\Hc))) & = \Delta_\cb+m \,, \end{align*} so in this case as well $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x^\Hc))) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(\Psi(x)))-1$, as claimed. \end{proof} \begin{proof}[Proof of Lemma~\ref{prop-mean-mono}] The law of $n\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(1,n))$ is that of a one-dimensional simple random walk, regardless of $p$ and $q$. Therefore $\PP^{p,q}(E) = \PP^{0,0}(E)$, so to prove~\eqref{eqn-mean-mono-B} it suffices to show \begin{equation} \label{eqn-mean-mono} \BB E^{p,q} \left( \protect\hyperlink{def-theta-count}{\mathcal B}(X(1,n )) \,|\, E \right) \geq \BB E^{0,0}\left( \protect\hyperlink{def-theta-count}{\mathcal B}(X(1,n )) \,|\, E \right),\quad \forall n \in \BB N. \end{equation} To this end, let $X^0 = X_{1}^0 \cdots X_{n}^0$ be a word whose law is that of $X_{1} \cdots X_{n}$ under $\PP^{p,q}$. Let $\{\xi_k\}_{k\in [1,n]_\BB Z}$ be i.i.d.\ Bernoulli random variables with parameter $1/2$, independent from $X^0$. For $k\in [1,n]_\BB Z$ inductively define \begin{equation*} X^k = \begin{dcases} (X^{k-1} )^\Hc \quad &\operatorname{if} \: \xi_k = 0\\ (X^{k-1})^\Ch \quad &\operatorname{if} \: \xi_k = 1. \end{dcases} \end{equation*} Since $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X^k)=0\vee(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X^{k-1})-1)$, and the word $X^n$ is obtained from $X^0$ by replacing each $\db$ symbol in $X^0$ with an independent random symbol which is uniformly distributed on $\left\{\hb,\cb\right\}$ and each $\so$ symbol in $X^0$ with an independent random symbol which is uniformly distributed on $\left\{\ho,\co\right\}$, the law of $X^n$ is that of $X_{1} \cdots X_{n}$ under $\PP^{0,0}$.
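The resampling coupling just described can be sketched in a few lines. This is only an illustration of the mechanism, not the paper's construction: the character encoding, the helper names, and in particular the way $p$ and $q$ enter `sample_word` are our own stand-ins for the law $\PP^{p,q}$ fixed earlier, and the one-symbol-at-a-time operations $x\mapsto x^\Hc$, $x\mapsto x^\Ch$ are collapsed into a single pass. The point being checked is that replacing unidentified symbols by burgers or orders of a definite type leaves the walk $i\mapsto\protect\hyperlink{def-theta-count}{\mathcal C}(x(1,i))$ unchanged.

```python
import random

BURGERS = {"H", "C", "D"}  # 'D' stands in for an unidentified duplicate burger
ORDERS = {"h", "c", "S"}   # 'S' stands in for an unidentified duplicate order

def sample_word(n, p, q, rng):
    # Each symbol is a burger or an order with probability 1/2; a burger is
    # 'D' with probability q and an order is 'S' with probability p. This
    # parametrization is only a guessed stand-in for the law P^{p,q}.
    word = []
    for _ in range(n):
        if rng.random() < 0.5:
            word.append("D" if rng.random() < q else rng.choice("HC"))
        else:
            word.append("S" if rng.random() < p else rng.choice("hc"))
    return word

def resample_unidentified(word, rng):
    # The coupling step: replace each 'D' by a uniform element of {H,C} and
    # each 'S' by a uniform element of {h,c}, yielding the p = q = 0 law.
    return [rng.choice("HC") if s == "D" else
            rng.choice("hc") if s == "S" else s
            for s in word]

def count_walk(word):
    # The walk i -> C(X(1,i)): +1 for each burger symbol, -1 for each order.
    walk, c = [], 0
    for s in word:
        c += 1 if s in BURGERS else -1
        walk.append(c)
    return walk
```

Since the resampling sends burgers to burgers and orders to orders, `count_walk` returns the same sequence before and after, which is exactly why any event determined by this walk is unaffected by the coupling.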
We next argue that \begin{equation} \label{eqn-flip-law} \Psi(X^k) \overset{d}{=} X^k \quad \forall \: k\in [0,n]_\BB Z. \end{equation} To see this, let $k\in[1,n]_\BB Z$ and let $j_k$ be the $k^{\text{th}}$ largest $j\in [1,n]_\BB Z$ for which $X^0_{j} \in \left\{\db,\so\right\}$, or $j_k = 0$ if no such $j$ exists. Also let $j_k'$ be the largest $j \in [1, j_k-1]_\BB Z$ for which the reduced word $X^0(j, j_k-1)$ contains an $\hb$ or $\cb$, or $j_k' = 0$ if no such $j$ exists. Then $j_k$ and $j_k'$ are stopping times for the filtration generated by $X^0$, read from right to left. By the strong Markov property, the conditional law of $X^0_{1} \cdots X^0_{ j_k'-1}$ given $X^0_{ j_k'}\cdots X^0_{n}$ is a string of $(j_k'-1)\vee 0$ i.i.d.\ symbols sampled from the law $\PP^{p,q}$. Hence given $j_k'$, $X^0_{1} \cdots X^0_{ j_k'-1}$ is conditionally independent from $X^0_{ j_k + 1} \cdots X^0_{n}$ and $X^0_{ j_k'} \cdots X^0_{ j_k}$. By the above description of the conditional law of $X^0_{1} \cdots X^0_{ j_k'-1}$ given $j_k'$ and $X^0_{ j_k'}\cdots X^0_{n}$ and the symmetry between hamburgers and cheeseburgers, we infer that this conditional law is invariant under involution. Since the definition of $j_k$ is invariant under involution, we infer that also the conditional law of $X^0_{ j_k+1} \cdots X^0_{n}$ given $j_k$ is invariant under involution. Since $j_k$ is a stopping time for $X^0$, read backwards, it follows that the joint conditional law of $X^0_{1} \cdots X^0_{ j_k'-1}$ and $X^0_{ j_k+1} \cdots X^0_{n}$ given $j_k$, $j_k'$ and $X^0_{ j_k'} \cdots X^0_{ j_k}$ is invariant under involution. In particular, \begin{equation} \label{eqn-flip-law-whole} X^0\overset{d}{=} \left( X^0_{1} \cdots X^0_{ j_k'-1} \right)^\dagger X^0_{ j_k'}\cdots X^0_{j_k} \left( X^0_{ j_k+1} \cdots X^0_{n} \right)^\dagger. 
\end{equation} The word $X^k$ (resp.\ $\Psi(X^k)$) is obtained from the word on the left (resp.\ right) side of~\eqref{eqn-flip-law-whole} by replacing its $k$ rightmost $\db$ or $\so$ symbols with independent random symbols sampled uniformly from $\left\{\hb,\cb\right\}$ or $\left\{\ho,\co\right\}$ respectively. We thus obtain~\eqref{eqn-flip-law}. Now let $E$ be an event as in the statement of the lemma, defined with the word $X^0_1 \dots X^0_n$ in place of the word $X_1\dots X_n$. The operations $x\mapsto x^\Hc$, $x\mapsto x^\Ch$, and $x\mapsto \Psi(x)$ replace burgers with burgers and orders with orders in the word $x$, so the sequence $\protect\hyperlink{def-theta-count}{\mathcal C}(x(1,i))_{i=1,\dots,n}$ is the same for each $x \in \left\{ X^k,\Psi(X^k)\right\}$ and $k \in [0,n]_\BB Z$. Since the event $E$ is determined by $\protect\hyperlink{def-theta-count}{\mathcal C}(X^0(1,i))_{i=1,\dots,n}$, we see that the definition of $ E$ is unaffected if we replace $X^0$ with $X^k$ or $\Psi(X^k)$ for any $k\in [0,n]_\BB Z$. From this observation, we deduce the following: \begin{enumerate} \item The conditional law of $X^0$ given $E$ is the same as the conditional law of $X_1\cdots X_n$ given $E$ under $\PP^{p,q}$. \label{item-E-start-law} \item The conditional law of $X^n$ given $E$ is the same as the conditional law of $X_1\cdots X_n$ given $E$ under $\PP^{0,0}$. \label{item-E-end-law} \item $E$ is independent from the Bernoulli random variables $\{\xi_k\}_{k\in [1,n]_\BB Z}$. \label{item-E-ind} \item By~\eqref{eqn-flip-law}, for each $k\in [1,n]_\BB Z$, the conditional laws of $X^k$ and $\Psi(X^k)$ given $E$ agree.
\label{item-E-flip} \end{enumerate} By combining these observations with Lemma~\ref{prop-word-flip}, we find that for each $k\in [1,n]_\BB Z$, \begin{align} \label{eqn-diff-compare} &\PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) > \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}(X^{k-1} ) \right) \,|\, E \right) \notag \\ &\qquad = \tfrac12 \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( (X^{k-1})^\Hc ) \right) > \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}(X^{k-1}) \right) \,|\, E \right) + \tfrac12 \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( (X^{k-1})^\Ch ) \right) > \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1} ) \right) \,|\, E \right) \notag \\ &\qquad\leq \tfrac12 \PP\!\left(\protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(X^{k-1})^\Hc ) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(X^{k-1}) ) \right) - 1 \,|\, E \right) \notag\\ &\qquad\quad+\tfrac12\PP\!\left(\protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(X^{k-1})^\Ch ) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( \Psi(X^{k-1}) ) \right) - 1 \,|\, E \right) \notag \\ &\qquad= \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1} ) \right) - 1 \,|\, E \right). 
\end{align} We used observation~\ref{item-E-ind} above in the first equality and observation~\ref{item-E-flip} in the last equality. Lemma~\ref{prop-word-flip} implies that $\protect\hyperlink{def-theta-count}{\mathcal B}( \protect\hyperlink{def-reduce}{\mathcal R}( X^k ) ) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1} ) ) + 1$ whenever $\protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}( X^k ) ) > \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1} ) )$, so~\eqref{eqn-diff-compare} implies \alb &\BB E\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) - \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1}) \right) \,|\, E \right) \notag \\ &\qquad \leq \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) > \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}(X^{k-1} ) \right) \,|\, E \right) - \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) = \protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1} ) \right) - 1 \,|\, E \right) \leq 0 , \ale whence \begin{equation*} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^k ) \right) \,|\, E \right) \leq \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^{k-1}) \right) \,|\, E \right) \quad \forall k\in [1,n]_\BB Z. 
\end{equation*} Therefore \begin{equation} \label{BRXn<=BRX0} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left( \protect\hyperlink{def-reduce}{\mathcal R}( X^n ) \right) \,|\, E \right) \leq \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal B}\left(\protect\hyperlink{def-reduce}{\mathcal R}( X^0 ) \right) \,|\, E \right). \end{equation} By observations~\ref{item-E-start-law} and~\ref{item-E-end-law} above, we obtain~\eqref{eqn-mean-mono-B}. The bound~\eqref{eqn-mean-mono-O} follows from observations~\ref{item-E-start-law} and~\ref{item-E-end-law} above, \eqref{BRXn<=BRX0}, and \[ \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(X^0))-\protect\hyperlink{def-theta-count}{\mathcal O}(\protect\hyperlink{def-reduce}{\mathcal R}(X^0)) = \protect\hyperlink{def-theta-count}{\mathcal B}(\protect\hyperlink{def-reduce}{\mathcal R}(X^n))-\protect\hyperlink{def-theta-count}{\mathcal O}(\protect\hyperlink{def-reduce}{\mathcal R}(X^n)) \,. \qedhere \] \end{proof} \subsection{Bound on the number of unidentified symbols} \label{sec-few-SD} In the next three subsections we prove analogues of various results found in~\cite[\S~3]{shef-burger} in the setting of Theorem~\ref{thm-variable-SD}. Throughout, we assume we are in the setting described just above the statement of Theorem~\ref{thm-variable-SD} for fixed $(p,q) \in [0,1] \times [0,1)$. The main purpose of this section is to prove the following more quantitative analogue of~\cite[Lem.~3.7]{shef-burger}.
\begin{lem} \label{prop-few-SD} For each $\varepsilon>0$, there are positive numbers $c_0,c_1>0$ such that, for each $n\in\BB N$ and $A>0$, the event \begin{equation} \label{eqn-few-SD-event} F_n(\varepsilon,A) \colonequals \left\{ \frac{\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n))}{ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,n)) \vee A } \geq \varepsilon \right\} \end{equation} occurs with probability \begin{equation} \label{eqn-few-SD} \PP\left( F_n(\varepsilon,A) \right) \leq c_0 e^{-c_1 A }. \end{equation} \end{lem} Lemma~\ref{prop-few-SD} will be an important tool in what follows since it allows us in many cases to ignore the (potentially quite complicated) manner in which the $\db$'s and $\so$'s are identified. When we apply the lemma, we will typically take $\varepsilon$ to be a small fixed parameter and $A$ to be a small positive power of $n$ (so that $\PP(F_n(\varepsilon,A) )$ decays faster than any negative power of $n$). We expect that an even stronger statement than Lemma~\ref{prop-few-SD} is true, namely, that $ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,\infty)) < \infty$ a.s.\ and that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,\infty))$ is stochastically dominated by a geometric distribution. The reason for this is explained in Remark~\ref{remark-I-infinite}. To prove Lemma~\ref{prop-few-SD}, we first observe that if $i \in [1,n]_{\BB Z}$ is such that $X_i$ is a $\db$ or $\so$ which is not identified in $X_1\dots X_n$, then the word $X(1,i-1)$ must contain no hamburgers or cheeseburgers (such a hamburger or cheeseburger would identify $X_i$). We will prove that the expected number of unmatched $\ho$'s added to the word between the successive times when $X(1,j)$ contains no burgers is infinite (Lemma~\ref{prop-time-to-SD-infty}). 
By Hoeffding's inequality and the fact that the increments of the word $X$ between these successive times are i.i.d., this will tell us that the number of $\db$'s and $\so$'s in $X(1,n)$ is typically negligible compared to the number of $\ho$'s. To start off, we consider the time \begin{equation} K = \min\left\{i \in\BB N : \protect\hyperlink{def-theta-count}{\mathcal C}(X(1,i)) = -1\right\} \end{equation} (here $i\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(1,i))$ is the simple random walk as in Definition~\ref{def-theta-count}). \begin{lem} \label{prop-mean-infty} We have \begin{equation} \label{eqn-mean-infty} \BB E \left( \left| X(1,K)\right| \right) = \infty. \end{equation} Furthermore, if we let $P$ be the smallest $j\in \BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1)) =1$, then \begin{equation} \label{eqn-mean-infty'} \BB E \left( \left| X(-P,-1)\right| \right) = \infty. \end{equation} \end{lem} \begin{proof} For each $n\in\BB N$, the event $\{K=n\}$ depends only on $\protect\hyperlink{def-theta-count}{\mathcal C}(X(1,i))$ for $i\in [1,n]_\BB Z$. By Lemma~\ref{prop-mean-mono}, we find \begin{equation*} \BB E \left( \left| X(1,K) \right| \times \BB 1_{(K=n)} \right) \geq \BB E^{0,0}\left( \left| X(1,K) \right| \times \BB 1_{(K=n)} \right),\quad \forall n \in \BB N \end{equation*} where $\BB E^{0,0}$ denotes expectation with respect to the law of $X$ with $p = q= 0$. By summing over all $n$, we obtain \begin{equation} \label{eqn-mean-infty-compare} \BB E \left( \left| X(1,K) \right| \right) \geq \BB E^{0,0}\left( \left| X(1,K) \right| \right). \end{equation} By standard estimates for one-dimensional simple random walk, $\PP^{0,0}\left( K = n \right) \asymp n^{-3/2}$.
Under $\PP^{0,0}$, if we condition on $\{K = n\}$, then the conditional law of the walk $\protect\hyperlink{def-d-z}{\vec{d}} = (\protect\hyperlink{def-d-z}{d}, \protect\hyperlink{def-d-z}{d^*})$ restricted to $[0,n ]_{\BB Z}$ is that of a two-dimensional simple random walk conditioned to first exit the diagonal half plane $\{x + y \geq 0\}$ at time $n $. With uniformly positive probability under this conditioning, it holds that \begin{equation*} \protect\hyperlink{def-d-z}{d}(n) - \inf_{i \in [0,n]_{\BB Z}} \protect\hyperlink{def-d-z}{d}(i) \geq n^{1/2}, \end{equation*} in which case $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(1,n)) \geq n^{1/2}$. Therefore, \begin{equation*} \BB E^{0,0}\left( \left| X(1,K) \right| \times \BB 1_{(K=n)} \right) \succeq n^{-1 }. \end{equation*} By summing over all $n\in\BB N$ we obtain $\BB E^{0,0}\left( \left| X(1,K) \right| \right) = \infty$, and hence~\eqref{eqn-mean-infty} follows from~\eqref{eqn-mean-infty-compare}. We similarly obtain~\eqref{eqn-mean-infty'}. \end{proof} \begin{lem} \label{prop-time-to-SD-infty} Let $I_1$ be the smallest $i\in\BB N$ for which $X(1,i)$ contains no hamburgers or cheeseburgers. Then $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,I_1))\right) = \infty$ (here we take $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_1))= \infty$ if $I_1 =\infty$). \end{lem} \begin{remark} \label{remark-I-infinite} It is possible that $I_1=\infty$ with positive probability.
In fact, we expect (but do not prove) that this is the case since the coordinates of the re-scaled walk $Z^n$ in~\eqref{eqn-Z^n-def} should be close to attaining a simultaneous running infimum at time $I_1$; and the coordinates of the negatively correlated Brownian motion $Z$ in Theorem~\ref{thm-variable-SD} a.s.\ do not have any simultaneous running infima (this follows by applying a linear transformation and using that an uncorrelated two-dimensional Brownian motion a.s.\ has no $\theta$-cone times for $\theta < \pi/2$~\cite{shimura-cone,evans-cone}). Note that if $I_1 = \infty$ with positive probability, then a.s.\ there are only finitely many times in $\mathbb N$ for which $X(1,i)$ contains no $\hb$'s or $\cb$'s, and hence only finitely many unidentified $\db$'s and $\so$'s in $X(1,\infty)$. We note, by way of comparison, that in the setting when $p_\so=p_\db=p_\eb=0$ and $p_\fo \in (0,1]$, the word $X(1,\infty)$ a.s.\ contains infinitely many $\fo$'s; see~\cite[Lemma 3.7]{shef-burger} in the case $p_\fo > 1/2$ and~\cite[Proposition 3.5]{gms-burger-cone} in the case $p_\fo < 1/2$ (the same proof works for $p_\fo =1/2$). \end{remark} \begin{proof}[Proof of Lemma~\ref{prop-time-to-SD-infty}] The statement of the lemma is obvious if $I_1 = \infty$ with positive probability, so we can assume that $I_1 < \infty$ a.s. If $I_1 > 1$ and $X(1,I_1)$ contains a $\db$ or $\so$ symbol, then $X(1,i)$ would have to contain no hamburgers or cheeseburgers for some $i\leq I_1-1$ (corresponding to the index of the $\db$ or $\so$ in question), which contradicts the definition of $I_1$. Thus either $I_1=1$ or the word $X(1,I_1)$ contains no unidentified $\db$'s or $\so$'s. If $I_1>1$, since every burger in $X(1,I_1)$ is identified, by definition of $I_1$, it must be that $X(1,I_1)$ contains no burgers. Thus if $I_1>1$, the word $X(2,I_1)$ contains more orders than burgers. 
Now let $K_2$ be the smallest $i \geq 2$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}(X(2,i)) \leq -1$. Then $X_2 \cdots X_{K_2}$ is independent from $X_1$ and agrees in law with $X_1\cdots X_K$. On the event $\left\{X_1 = \hb\right\}$, we have $I_1 \geq K_2$. Therefore, every order appearing in $X(2, K_2)$ except possibly one also appears in $X(1, I_1)$. It follows that \begin{equation*} \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(1,I_1)) \right) \geq \frac{1-q}{4} \BB E\left( |X(1, K)| - 1 \right) =\infty. \end{equation*} By symmetry between $\ho$ and $\co$, we also have $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_1))\right) = \infty$. \end{proof} \begin{proof}[Proof of Lemma~\ref{prop-few-SD}] Let $I_0 = 0$ and for $m \in \BB N$, let $I_m$ be the $m^{\text{th}}$ smallest $i\in\BB N$ for which $X(1, i)$ contains no hamburgers or cheeseburgers. The definition of $I_1$ is the same as that given in Lemma~\ref{prop-time-to-SD-infty}. Furthermore, if $i\in\BB N$ and $X_i$ is a $\db$ or a $\so$ which is not identified in $X_1 X_2 \cdots$, then $i$ must be one of the times $I_m$ for $m\in\BB N$. For each $m \in\BB N$, the time $I_m$ is a stopping time for the filtration generated by $X$, read forward. Furthermore, for $m\in\BB N$ and $i\geq I_{m-1} + 1$, the word $X(1, i)$ contains no hamburgers or cheeseburgers if and only if $X(I_{m-1} + 1, i)$ contains no hamburgers or cheeseburgers. By the strong Markov property, the words $X_{I_{m-1}+1} \cdots X_{I_m}$ for $m\in\BB N$ are i.i.d. For $m\in\BB N$, let \[ \xi_m \colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(I_{m-1}+1,I_m)), \] so that the random variables $\xi_m$ for $m\in\BB N$ are i.i.d. None of the $\ho$'s in $X(I_{m-1} + 1, I_m)$ have a match in $X_1 X_2\cdots$, so for each $m\in\BB N$ \begin{equation*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,I_m)) = \sum_{k=1}^m \xi_k.
\end{equation*} By Lemma~\ref{prop-time-to-SD-infty}, for each $\varepsilon > 0$ we can find an $R > 0$ such that \begin{equation*} \BB E\left( \xi_1 \wedge R \right) \geq 2\varepsilon^{-1}. \end{equation*} By Hoeffding's inequality for sums of i.i.d.\ bounded random variables, for each $m\in\BB N$, \begin{align} \label{eqn-SD-increment-to-infty} \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_m)) \leq \varepsilon^{-1} m \right) &\leq \PP\!\left( \frac1m \sum_{k=1}^m (\xi_k \wedge R) \leq \varepsilon^{-1} \right) \notag \\ &\leq \exp\left(-\frac{2 m}{\varepsilon^2 R^2} \right). \end{align} Given $n\in\BB N$, let $M_n$ be the largest $m\in\BB N$ for which $I_m \leq n$. Then $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,n))\geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_{M_n}))$ and $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n))\leq M_n$. By~\eqref{eqn-SD-increment-to-infty}, \alb \PP(F_n(\varepsilon,A)) &\leq \PP\!\left( \frac{ M_n }{ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_{M_n})) } \geq \varepsilon,\; M_n \geq \varepsilon A \right)\\ & = \sum_{m = \lceil \varepsilon A \rceil}^\infty \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho }(X(1,I_m)) \leq \varepsilon^{-1} m,\; M_n=m\right) \\ & \leq \sum_{m = \lceil \varepsilon A \rceil}^\infty \exp\left(-\frac{2 m}{\varepsilon^2 R^2} \right) \ale so we take $c_1=2/(\varepsilon R^2)$ and $c_0=1/(1-e^{-c_1/\varepsilon})$. 
\end{proof} \subsection{Renewal times in the word} \label{sec-J-basic} For the bi-infinite word $X$, let $J$ be the age of the freshest (unconsumed) non-duplicate burger, as seen from the present: \begin{equation} \label{def-J}\hypertop{def-J} \protect\hyperlink{def-J}{J}\colonequals\min\big\{j\in\BB N: \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-j,-1))>0\big\}, \end{equation} and more generally we define a sequence of backward renewal times $\protect\hyperlink{def-J}{J}_m$ by \begin{equation} \label{def-J_m} \protect\hyperlink{def-J}{J}_m\colonequals\begin{cases} 0 & m=0 \\ \min\big\{j\in\BB N: \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-j,-J_{m-1}-1))>0\big\} & m\in\BB N. \end{cases} \end{equation} We also define \begin{equation} \label{eqn-chi-def}\hypertop{def-chi} \protect\hyperlink{def-J}{\chi} \colonequals \BB E\left(|X(-\protect\hyperlink{def-J}{J},-1)| \right). \end{equation} In the case $p=q=0$, $\BB E[\protect\hyperlink{def-J}{J}]=\infty$, so \textit{a priori\/} we could have $\protect\hyperlink{def-J}{\chi} = \infty$, but we will prove that $\protect\hyperlink{def-J}{\chi}$ is finite in Proposition~\ref{prop-J-finite} below. In this subsection we carry out a careful study of the time $\protect\hyperlink{def-J}{J}$ and related quantities. These results are needed for the variance calculation in the next subsection. We start by recording some basic properties of $\protect\hyperlink{def-J}{J}$ (which follow easily from the definition) in Lemma~\ref{prop-J-basic} and an alternative definition of $\protect\hyperlink{def-J}{J}_m$ in Lemma~\ref{prop-J-af}. In Lemma~\ref{prop-J-moment}, we show that $\protect\hyperlink{def-J}{J}$ has finite moments up to order $1/2$. The idea of the proof is to bound $\protect\hyperlink{def-J}{J}$ above by a time associated with the simple random walk $j\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))$.
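In the special case $p=q=0$ (no $\db$'s or $\so$'s), the time $J$ defined above can be computed by a simple backward scan, which we sketch here for illustration only; the character encoding ('H','C' for burgers, 'h','c' for orders) and the helper names are ours, not the paper's. The observation behind the sketch is that before the first surviving burger appears, the reduced word $X(-j,-1)$ consists of unmatched orders only, so it suffices to track how many orders of each type remain unmatched.

```python
import random

def backward_J(symbols):
    # symbols yields X_{-1}, X_{-2}, ... (the word read backward).
    # Before time J the reduced word X(-j,-1) contains no burgers, so it is
    # a string of unmatched orders; a newly prepended burger survives the
    # reduction iff no order of its own type is available to consume it.
    unmatched = {"h": 0, "c": 0}
    for j, s in enumerate(symbols, start=1):
        if s in "hc":                    # an order: joins the unmatched orders
            unmatched[s] += 1
        elif unmatched[s.lower()] > 0:   # a burger consumed by a later order
            unmatched[s.lower()] -= 1
        else:                            # a surviving burger: J = j
            return j
    return None  # no surviving burger in this finite word

def sample_J(rng):
    # One sample of J for the p = q = 0 word (symbols i.i.d. uniform on the
    # four-letter alphabet); terminates a.s. but has infinite mean.
    def stream():
        while True:
            yield rng.choice("HChc")
    return backward_J(stream())
```

Note that when the scan stops, all unmatched orders remaining are of the opposite type from the surviving burger, consistent with assertion 4 of Lemma~\ref{prop-J-basic} below.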
Using this and Lemma~\ref{prop-few-SD}, we prove in Proposition~\ref{prop-J-finite} that $\protect\hyperlink{def-J}{\chi} := \mathbb E( |X(-\protect\hyperlink{def-J}{J}, -1)|)$ is finite and that $\mathbb E( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}, -1)) )$ is non-negative. We then show that in fact this latter expectation is 0 using a generalization of the proof of~\cite[Lem.~3.5]{shef-burger}. Since $|X(-\protect\hyperlink{def-J}{J} , -1)| = 2 - \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}, -1))$ whenever $X(-\protect\hyperlink{def-J}{J},-1)$ contains no $\db$'s, this shows in particular that $\protect\hyperlink{def-J}{\chi} = 2$ when $q= 0$, i.e., $z=1$ (which is why we get an exact expression for the variances and covariances in Theorem~\ref{thm-variable-SD} in this case). The last main result of this subsection is Lemma~\ref{prop-J-limit}, which shows that $\mathbb E( |X(-n,-1)| \mathbb 1_{(\protect\hyperlink{def-J}{J} < n)} ) \rightarrow 0$ as $n\rightarrow \infty$, and is an easy consequence of the earlier results in this subsection and a dominated convergence argument. \begin{lem} \label{prop-J-basic} With $\protect\hyperlink{def-J}{J}$ as in \eqref{def-J}, \begin{enumerate} \item $\protect\hyperlink{def-J}{J}$ is a.s.\ finite. \label{item-J-finite} \item $X_{-\protect\hyperlink{def-J}{J}} \in \left\{\hb, \cb \right\}$. \label{item-J-burger} \item The symbol $X_{-\protect\hyperlink{def-J}{J}}$ does not have a match in $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$. \label{item-J-match} \item The reduced word $X(-\protect\hyperlink{def-J}{J}, -1)$ consists of only hamburgers and cheeseburger orders (if $X_{-\protect\hyperlink{def-J}{J}} = \hb$) or cheeseburgers and hamburger orders (if $X_{-\protect\hyperlink{def-J}{J}} = \cb$). \label{item-J-reduced} \end{enumerate} \end{lem} \begin{proof} Assertion~\ref{item-J-finite} follows from Lemma~\ref{prop-identification-exists}. 
By definition of $\protect\hyperlink{def-J}{J}$, the word $X(-\protect\hyperlink{def-J}{J}+1,-1)$ contains no $\hb$ or $\cb$ symbols, so assertion~\ref{item-J-burger} follows from Lemma~\ref{prop-associative} (applied with $x=X_{-\protect\hyperlink{def-J}{J}}$ and $y = X_{-\protect\hyperlink{def-J}{J}+1}\dots X_{-1}$). Suppose $k\in[1,\protect\hyperlink{def-J}{J}-1]_\BB Z$. By definition of $\protect\hyperlink{def-J}{J}$, the word $X(-k,-1)$ contains no $\hb$ or $\cb$. If $X(-\protect\hyperlink{def-J}{J},-k-1)$ contained no burger, then $\protect\hyperlink{def-reduce}{\mathcal R}(X(-\protect\hyperlink{def-J}{J},-k-1)X(-k,-1))=X(-\protect\hyperlink{def-J}{J},-1)$ would contain no $\hb$ or $\cb$, contrary to the definition of $\protect\hyperlink{def-J}{J}$. So $X(-\protect\hyperlink{def-J}{J},-k)$ contains a burger. We argue by induction on $\protect\hyperlink{def-J}{J}-k\in[0,\protect\hyperlink{def-J}{J}-1]_\BB Z$ that each symbol in $X_{-\protect\hyperlink{def-J}{J}}\cdots X_{-k}$ is identified in this word. Since $X_{-\protect\hyperlink{def-J}{J}}\in\{\hb,\cb\}$, this is true for $k=\protect\hyperlink{def-J}{J}$. If the claim is true for $k$, then since $X(-\protect\hyperlink{def-J}{J},-k)$ contains at least one burger, each of which is identified by the inductive hypothesis, it follows that $X_{-k+1}$ is identified in $X_{-\protect\hyperlink{def-J}{J}}\cdots X_{-k+1}$, completing the induction. Every burger in $X(-\protect\hyperlink{def-J}{J}+1, -1)$ is a $\db$. Since each burger in $X(-\protect\hyperlink{def-J}{J},-1)$ is identified, each must be identified to $X_{-\protect\hyperlink{def-J}{J}}$. Suppose that $X_{-\protect\hyperlink{def-J}{J}}$ is matched to an order $X_{\phi(-\protect\hyperlink{def-J}{J} )}$ for $\phi(-\protect\hyperlink{def-J}{J} ) \in [-\protect\hyperlink{def-J}{J} +1, -1]_\BB Z$. We assume without loss of generality that $X_{-\protect\hyperlink{def-J}{J}} = \hb$. Consequently, $X(-\protect\hyperlink{def-J}{J}, -1)$ contains no $\cb$.
Since $X_{-\protect\hyperlink{def-J}{J}}=\hb$ is consumed, the reduced word $X(-\protect\hyperlink{def-J}{J}, \phi(-\protect\hyperlink{def-J}{J}))$ consists of only $\cb$'s and $\co$'s. Since $X(\phi(-\protect\hyperlink{def-J}{J}) +1,-1)$ contains no $\hb$ or $\cb$, each $\db$ in $X(\phi(-\protect\hyperlink{def-J}{J}) +1,-1)$ is identified by a $\cb$ in $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$. Consequently, $X(-\protect\hyperlink{def-J}{J},-1)$ contains no $\hb$. We have already shown above that $X(-\protect\hyperlink{def-J}{J},-1)$ contains no $\cb$, so we contradict the definition of $\protect\hyperlink{def-J}{J}$. We thus obtain assertion~\ref{item-J-match}. Since each burger in $X(-\protect\hyperlink{def-J}{J},-1)$ is identified to $X_{-\protect\hyperlink{def-J}{J}}$, and $X_{-\protect\hyperlink{def-J}{J}}$ is not consumed, it must be that each order in $X(-\protect\hyperlink{def-J}{J},-1)$ is for the opposite burger type, which proves assertion~\ref{item-J-reduced}. \end{proof} Our next lemma is an analogue of~\cite[Lem.~A.7]{gms-burger-cone} in the setting where we read the word backward, rather than forward, and is proven in a similar manner. \begin{lem} \label{prop-J-af} The time $ \protect\hyperlink{def-J}{J}_m$ from~\eqref{def-J_m} is the $m^{\text{th}}$ smallest $j\in \BB N$ such that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-j, -k))>0$ for all $k\in [1,j]_\BB Z$. \end{lem} \begin{proof} Let $\widetilde J_0 = 0$ and for $m\in\BB N$, let $\widetilde J_m$ be the $m^{\text{th}}$ smallest $j\in\BB N$ such that $X(-j, -k)$ contains a hamburger or a cheeseburger for each $k\in [1,j]_\BB Z$. We show by induction that $\widetilde J_m = \protect\hyperlink{def-J}{J}_m$ for each $m\in \BB N$. The base case $m=0$ is trivial. Suppose $m\in\BB N$ and $\widetilde J_{m-1} = \protect\hyperlink{def-J}{J}_{m-1}$. 
By assertion~\ref{item-J-match} of Lemma~\ref{prop-J-basic}, the word $X(-\protect\hyperlink{def-J}{J}_m, - k)$ contains a hamburger or a cheeseburger (namely $X_{-\protect\hyperlink{def-J}{J}_m}$) for each $k\in [\widetilde J_{m-1}+1, \protect\hyperlink{def-J}{J}_m]_\BB Z$. By definition of $\widetilde J_{m-1}$, the word $X(-\widetilde J_{m-1}, -k)$ (and hence the word $X(-\protect\hyperlink{def-J}{J}_m,-k)$) contains a hamburger or a cheeseburger for each $k\in [ 1, \widetilde J_{m-1}]_\BB Z$. Thus $\protect\hyperlink{def-J}{J}_m$ is one of the $\widetilde J_{m'}$'s, and hence $\protect\hyperlink{def-J}{J}_m \geq \widetilde J_m$. On the other hand, the word $X(-\widetilde J_m, -\protect\hyperlink{def-J}{J}_{m-1}-1)$ contains a hamburger or cheeseburger by the inductive hypothesis and the definition of $\widetilde J_m$, so $\protect\hyperlink{def-J}{J}_m \leq\widetilde J_m$, so in fact $\widetilde J_m = \protect\hyperlink{def-J}{J}_m$. \end{proof} We next prove that $\protect\hyperlink{def-J}{J}$ has finite moments up to order $1/2$ (actually we prove something a little stronger, which will be needed for technical reasons below). \begin{lem} \label{prop-J-moment} Let $M$ be the smallest $m\in\BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_m,-1)) \geq 1$. Almost surely $M<\infty$, and for each $\zeta \in (0,1/2)$, we have $\BB E(\protect\hyperlink{def-J}{J}_M^\zeta) < \infty$. \end{lem} \begin{proof} Let $P_0 = 0$ and for $m\in\BB N$, let $P_m$ be the smallest $j \in\BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1)) = m$, as in Lemma~\ref{prop-backward-burger}. Also let $\widetilde M$ be the smallest $m\in\BB N$ for which $X_{-P_m} \in \left\{\hb, \cb\right\}$. By Lemma~\ref{prop-backward-burger}, the word $X(-P_{\widetilde M},-n)$ contains either a hamburger or a cheeseburger for each $n\in [1,P_{\widetilde M}]_\BB Z$.
Therefore, Lemma~\ref{prop-J-af} implies that $P_{\widetilde M} = \protect\hyperlink{def-J}{J}_{\widetilde m}$ for some $\widetilde m\in\BB N$. Since $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-P_{\widetilde M}, -1)) = \widetilde M \geq 1$, we have $M \leq \widetilde m$. Therefore $\protect\hyperlink{def-J}{J}_M \leq P_{\widetilde M}$. For $\zeta\in (0,1/2)$, the function $t\mapsto t^\zeta$ is concave, hence subadditive. Thus, for $m\in\BB N$ \begin{equation*} P_m^\zeta \leq \sum_{k=1}^m (P_k - P_{k-1})^\zeta. \end{equation*} Since $j\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))$ is a simple random walk, $\BB E\left( P_1^\zeta \right) < \infty$ for $\zeta \in (0,1/2)$. By the strong Markov property, for each $m \in \BB N$, it holds with conditional probability $1-q$ given $X_{-P_{m-1}} \cdots X_{-1}$ that $X_{-P_m} \in \left\{\hb, \cb\right\}$. Therefore, the law of $\widetilde M$ is geometric with success probability $1-q$, and in particular $\BB E(\widetilde M) < \infty$. By Wald's equation, it holds for each $\zeta \in (0,1/2)$ that $\BB E(P_{\widetilde M}^\zeta) < \infty$, and hence also $\BB E(\protect\hyperlink{def-J}{J}_M^\zeta ) < \infty$. \end{proof} We are now ready to prove that the quantity $\protect\hyperlink{def-J}{\chi}$ of~\eqref{eqn-chi-def} is finite. \begin{prop} \label{prop-J-finite} \begin{equation} \label{eqn-J-mean-finite} \protect\hyperlink{def-J}{\chi} = \BB E\left(|X(-\protect\hyperlink{def-J}{J},-1)|\right) < \infty \end{equation} and \begin{equation} \label{eqn-count-mean-pos} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}, -1) ) \right) \geq 0. \end{equation} \end{prop} \begin{proof} Fix $\varepsilon, \zeta \in (0,1/2)$ and for $n\in\BB N$, let $F_n=F_n(\varepsilon,n^\zeta)$ be defined as in~\eqref{eqn-few-SD-event} but with $X(-n,-1)$ in place of $X(1,n)$. Let \begin{equation} \label{eqn-no-F-sum} \Xi \colonequals \sum_{n=1}^\infty n \BB 1_{F_n}. 
\end{equation} By Lemma~\ref{prop-few-SD} and translation invariance, $\BB E(\Xi) < \infty$. For $n\in\BB N$, if $F_n$ occurs, then \begin{equation}\label{C<=Xi} \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1)) \leq |X(-n,-1)| \leq n \leq \Xi. \end{equation} For $n\in\BB N$, if $n<\protect\hyperlink{def-J}{J}$ then every burger in $X(-n,-1)$ is a $\db$. If $n<\protect\hyperlink{def-J}{J}$ and furthermore $F_n$ does not occur, then $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(-n,-1)) \leq \varepsilon \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(-n,-1)) + \varepsilon n^\zeta$ since $\varepsilon <1$, so \begin{align} \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1))&=\protect\hyperlink{def-theta-count}{\mathcal N\!}_\db(X(-n,-1))-\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co|\so}(X(-n,-1))\notag\\ &\leq\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(-n,-1))-\varepsilon\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(-n,-1))\leq\varepsilon n^\zeta\,. \label{C<=nzeta} \end{align} For $n\in\BB N\cup\{0\}$, let \[Y_n = \protect\hyperlink{def-theta-count}{\mathcal C}(X(-(\protect\hyperlink{def-J}{J}\wedge n), -1) )\,.\] Whether or not $F_{(\protect\hyperlink{def-J}{J}\wedge n)-1}$ occurs, from \eqref{C<=Xi} and \eqref{C<=nzeta} applied to $(\protect\hyperlink{def-J}{J}\wedge n)-1$, we have \begin{equation} \label{eqn-count-upper} Y_n \leq 1 + \protect\hyperlink{def-theta-count}{\mathcal C}(X(-(\protect\hyperlink{def-J}{J}\wedge n)+1, -1) )\leq 1 + \varepsilon \protect\hyperlink{def-J}{J}^\zeta + \Xi\,. \end{equation} Note that the $\Xi$ term accounts for the possibility that $F_{(\protect\hyperlink{def-J}{J}\wedge n)-1}$ occurs. Since $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1))$ is a martingale, the optional stopping theorem implies $\BB E[Y_n]=0$. Let $R=1+\protect\hyperlink{def-J}{J}^\zeta+\Xi$.
By Lemma~\ref{prop-J-moment} (note that $\protect\hyperlink{def-J}{J} \leq \protect\hyperlink{def-J}{J}_M$) and since $\BB E(\Xi) <\infty$, we have $\BB E(R) < \infty$. Since $0\leq R-Y_n$ and $Y_n\to \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1))$, Fatou's lemma implies \begin{equation} \label{eqn-count-mean} \BB E\left(R - \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}, -1) ) \right) \leq \liminf_n \BB E(R-Y_n)=\BB E(R). \end{equation} This in particular implies $\BB E(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)))\geq 0$, i.e.,~\eqref{eqn-count-mean-pos}. Since every burger in $X(-n,-1)$ is a $\db$ when $n < \protect\hyperlink{def-J}{J}$, \begin{equation} \label{eqn-J-word-split} |X(-n, -1)| \mathbb 1_{\{n < \protect\hyperlink{def-J}{J}\}} = - \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n, -1)) + 2\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-n, -1)\right). \end{equation} If $n<\protect\hyperlink{def-J}{J}$ and $F_n$ does not occur, then \begin{align*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_\db(X(-n,-1)) &\leq \varepsilon\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(-n,-1)) + \varepsilon n^\zeta \\ (1-\varepsilon)\protect\hyperlink{def-theta-count}{\mathcal N\!}_\db(X(-n,-1)) &\leq -\varepsilon\, \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1)) + \varepsilon n^\zeta . \end{align*} Note that in the second inequality, we use that $\varepsilon \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(-n,-1)) \leq \varepsilon \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co|\so}(X(-n,-1)) = \varepsilon \protect\hyperlink{def-theta-count}{\mathcal N\!}_\db(X(-n,-1)) - \varepsilon\, \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1))$, where the equality holds because every burger in $X(-n,-1)$ is a $\db$.
Combining the above inequalities with \eqref{eqn-J-word-split} gives \begin{equation} \label{eqn-length-on-F} |X(-n, -1)| \mathbb 1_{\{ n < \protect\hyperlink{def-J}{J} \} \cap F_n^c} \leq - \left(1 + \frac{2\varepsilon}{1-\varepsilon} \right) \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n, -1)) + \frac{2\varepsilon}{1-\varepsilon} n^\zeta\,. \end{equation} We combine~\eqref{C<=Xi} and \eqref{eqn-length-on-F}, applied to $n = \protect\hyperlink{def-J}{J}-1$, to obtain \begin{equation*} |X(-\protect\hyperlink{def-J}{J}, -1)| \leq 1 - \left(1 + \frac{2\varepsilon}{1-\varepsilon} \right) \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}, -1)) + \frac{2\varepsilon}{1-\varepsilon} \protect\hyperlink{def-J}{J}^\zeta + \Xi. \end{equation*} Since the expectation of each term on the right side of this last inequality is finite, we obtain~\eqref{eqn-J-mean-finite}. \end{proof} \begin{lem} \label{prop-overshoot-finite} With $M$ as in Lemma~\ref{prop-J-moment}, we have $\BB E(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1))) < \infty$. \end{lem} \begin{proof} By definition of $M$ and the times $\protect\hyperlink{def-J}{J}_m$, \[ 1\leq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1)) \leq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -\protect\hyperlink{def-J}{J}_{M-1}-1)) \leq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}(X(-\protect\hyperlink{def-J}{J}_M+1,-\protect\hyperlink{def-J}{J}_{M-1}-1)) + 1 \, ; \] in the second inequality, we use that $ \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_{M-1} , - 1)) \leq 0$. 
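To spell out the second of these inequalities: since reducing a word removes burgers and orders in pairs, the count $\protect\hyperlink{def-theta-count}{\mathcal C}$ is additive over concatenation of subwords, so that \begin{equation*} \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1)) = \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -\protect\hyperlink{def-J}{J}_{M-1}-1)) + \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_{M-1},-1)) \leq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -\protect\hyperlink{def-J}{J}_{M-1}-1))\,. \end{equation*}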
Since every burger in $X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1)$ is a $\db$, and \begin{equation*} \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M+1, -\protect\hyperlink{def-J}{J}_{M-1}-1))\geq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M+1, -1)) \geq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M , -1)) - 1 \geq 0 , \end{equation*} we have \begin{equation*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho |\co} \left(X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1) \right). \end{equation*} Now fix $\zeta \in (0,1/2)$, and for $m\in\BB N$ let \begin{equation*} E_m \colonequals \left\{ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_m + 1, -\protect\hyperlink{def-J}{J}_{m-1}-1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho |\co}\left(X(-\protect\hyperlink{def-J}{J}_m + 1, -\protect\hyperlink{def-J}{J}_{m-1}-1) \right) \vee m^\zeta \right\}. \end{equation*} Either $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1) \right) < M^\zeta$ or $E_M$ occurs. 
Therefore, \alb \protect\hyperlink{def-theta-count}{\mathcal C} \left(X(-\protect\hyperlink{def-J}{J}_M, -1 ) \right) &\leq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1) \right) + 1 \\ &\leq M^\zeta + \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_M + 1, -\protect\hyperlink{def-J}{J}_{M-1}-1) \right) \BB 1_{E_M} + 1 \\ &\leq \protect\hyperlink{def-J}{J}_M^\zeta + \sum_{m=1}^\infty \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_m + 1, -\protect\hyperlink{def-J}{J}_{m-1}-1) \right) \BB 1_{E_m} +1. \ale By Lemma~\ref{prop-J-moment} we know $\BB E(\protect\hyperlink{def-J}{J}_M^\zeta) <\infty$, so to complete the proof it suffices to show \begin{equation} \label{eqn-overshoot-tail-event} \sum_{m=1}^\infty \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_m + 1, -\protect\hyperlink{def-J}{J}_{m-1}-1) \right) \BB 1_{E_m} \right) <\infty. \end{equation} Recall that the words $X_{-\protect\hyperlink{def-J}{J}_m} \cdots X_{-\protect\hyperlink{def-J}{J}_{m-1}-1}$ are i.i.d.\ with the same law as $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$. For $B>0$, Lemma~\ref{prop-few-SD} and a union bound over all $n \in [1, B]_\BB Z$ yield \begin{equation*} \PP\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1,-1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}\left(X(-\protect\hyperlink{def-J}{J}+1,-1)\right) \vee A,\; \protect\hyperlink{def-J}{J} \leq B \right) \leq c_0 B e^{-c_1 A} \end{equation*} for constants $c_0, c_1 > 0$ depending only on $p$ and $q$.
Lemma~\ref{prop-J-moment} and the Chebyshev inequality together imply that $\PP(\protect\hyperlink{def-J}{J}>B) = \PP(\protect\hyperlink{def-J}{J}^\zeta>B^\zeta) \leq \BB E(\protect\hyperlink{def-J}{J}^\zeta) B^{-\zeta}$. Thus \begin{equation*} \PP\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1,-1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}\left(X(-\protect\hyperlink{def-J}{J}+1,-1)\right) \vee A \right) \leq c_0 B e^{-c_1 A} + \BB E(\protect\hyperlink{def-J}{J}^\zeta) B^{-\zeta}\,, \end{equation*} and since $B$ was arbitrary, we choose $B=\exp[c_1 A/(1+\zeta)]$. Then \begin{align*} \BB E&\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}_m + 1, -\protect\hyperlink{def-J}{J}_{m-1}-1) \right) \BB 1_{E_m} \right)\\ &= \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1, -1) \right) \BB 1\left\{ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1, -1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho |\co} \left(X(-\protect\hyperlink{def-J}{J}+1,-1) \right) \vee m^\zeta \right\} \right) \\ &= \sum_{k\geq m^\zeta} k\times\PP\left( k=\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1, -1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho |\co} \left(X(-\protect\hyperlink{def-J}{J}+1,-1) \right) \vee m^\zeta \right)\\ &\leq \sum_{k\geq m^\zeta} k\times\PP\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-\protect\hyperlink{def-J}{J}+1, -1) \right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho |\co} \left(X(-\protect\hyperlink{def-J}{J}+1,-1) \right) \vee k \right)\\ &\leq \sum_{k\geq m^\zeta} k\times(c_0+\BB E[\protect\hyperlink{def-J}{J}^\zeta]) \times\exp[-(c_1 \zeta/(1+\zeta)) k]\\ &\leq (m^\zeta+\text{const}) \times \text{const} 
\times \exp[-\text{const}\times m^\zeta]\,, \end{align*} which is summable in $m$, establishing \eqref{eqn-overshoot-tail-event}. \end{proof} The next two lemmas correspond to \cite[Lem.~3.5]{shef-burger}. However, slightly more work is needed to prove Lemma~\ref{prop-J-count-mean} below in our setting because the word $X(-\protect\hyperlink{def-J}{J},-1)$ can contain more than one burger, so with $\protect\hyperlink{def-J}{J}_M$ as in Lemma~\ref{prop-J-moment}, we might have $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1)) > 1$. \begin{lem} \label{E[M]} Let $M$ be the smallest $m\in\BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}\left(X(-\protect\hyperlink{def-J}{J}_m,-1) \right) \geq 1$, as in Lemma~\ref{prop-J-moment}. Then $\BB E[M]=\infty$. \end{lem} \begin{proof} With $P$ as in Lemma~\ref{prop-mean-infty}, i.e., the smallest $j\in\BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))=1$, \begin{equation} \label{eqn-X(-P,-1)} |X(-P,-1)| = 2\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb|\db}\left(X(-P,-1)\right) - 1\,. \end{equation} For $\varepsilon, \zeta \in (0,1/2)$ and the events $F_n = F_n(\varepsilon , n^\zeta)$ and the random variable $\Xi$ in~\eqref{eqn-no-F-sum}, \[ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}\left(X(-P,-1)\right) \leq \begin{cases} \varepsilon |X(-P,-1)| + \varepsilon P^\zeta & \text{if $F_P$ does not occur}\\ \Xi & \text{if $F_P$ occurs}\,.\end{cases} \] By this and~\eqref{eqn-X(-P,-1)}, \begin{equation} \label{eqn-X(-P,-1)-again} |X(-P,-1)| \leq 2 \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}\left(X(-P,-1)\right) + 2 \varepsilon P^\zeta + 2 \varepsilon |X(-P,-1)| + 2\Xi + 1 . \end{equation} Since $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -1)) \geq 1$, we have $P \leq \protect\hyperlink{def-J}{J}_M$.
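Explicitly, re-arranging~\eqref{eqn-X(-P,-1)-again} gives \begin{equation*} (1-2\varepsilon)\,|X(-P,-1)| \leq 2 \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}\left(X(-P,-1)\right) + 2 \varepsilon P^\zeta + 2\Xi + 1\,, \end{equation*} and the prefactor $1-2\varepsilon$ is positive since $\varepsilon < 1/2$.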
Since $\BB E(\Xi) < \infty$ and $\BB E(P^\zeta) \leq \BB E(\protect\hyperlink{def-J}{J}_M^\zeta) < \infty$, $\BB E(|X(-P,-1)|) = \infty$ by Lemma~\ref{prop-mean-infty}, and $\varepsilon <1/2$, we deduce from~\eqref{eqn-X(-P,-1)-again} that \[ \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}\left(X(-P,-1)\right)\right) = \infty\,. \] Since $P \leq \protect\hyperlink{def-J}{J}_M$, \[\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-P,-1)) \leq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-\protect\hyperlink{def-J}{J}_M,-1))\,.\] Since each symbol in $X(-\protect\hyperlink{def-J}{J}_m, -\protect\hyperlink{def-J}{J}_{m-1}-1)$ is identified, \[ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-\protect\hyperlink{def-J}{J}_M,-1)) \leq \sum_{m=1}^M \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-\protect\hyperlink{def-J}{J}_m, -\protect\hyperlink{def-J}{J}_{m-1}-1))\,.\] The summands are i.i.d., and have finite expectation by Proposition~\ref{prop-J-finite}. But the left hand side has infinite expectation, so by Wald's equation, $\BB E[M]=\infty$. \end{proof} \begin{lem} \label{prop-J-count-mean} \begin{equation*} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \right) = 0\,. \end{equation*} \end{lem} \begin{proof} Write $\alpha = \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \right)$. Observe that by Proposition~\ref{prop-J-finite}, \[ 0 \leq \alpha \leq \BB E\left(|\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1))| \right) \leq \BB E\left(|X(-\protect\hyperlink{def-J}{J},-1)| \right) < \infty\,. \] The strong Markov property implies that the words $X_{-\protect\hyperlink{def-J}{J}_m} \cdots X_{-\protect\hyperlink{def-J}{J}_{m-1}-1}$ for $m\in\BB N$ are i.i.d., and each has the same law as $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$.
By Lemma~\ref{prop-J-basic}, none of the reduced words $X(-\protect\hyperlink{def-J}{J}_m, -\protect\hyperlink{def-J}{J}_{m-1}-1)$ contains an unidentified $\db$ or $\so$. By definition of $\alpha$, we find that \begin{equation*} A_m \colonequals \protect\hyperlink{def-theta-count}{\mathcal C}\left(X(-\protect\hyperlink{def-J}{J}_m,-1) \right) - \alpha m = \sum_{k=1}^m \protect\hyperlink{def-theta-count}{\mathcal C}\left(X(-\protect\hyperlink{def-J}{J}_k, -\protect\hyperlink{def-J}{J}_{k-1}-1)\right) - \alpha m \end{equation*} is a martingale in $m$. Let $M$ be the smallest $m\in\BB N$ for which $\protect\hyperlink{def-theta-count}{\mathcal C}\left(X(-\protect\hyperlink{def-J}{J}_m,-1) \right) \geq 1$, as in Lemma~\ref{prop-J-moment}. By the optional stopping theorem, for each $n \in \BB N$ we have $\BB E\left(A_{M\wedge n} \right) = 0$. Since $A_{M\wedge n} \leq \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -1))$ and the latter quantity has finite expectation by Lemma~\ref{prop-overshoot-finite}, it follows from Fatou's lemma that \begin{equation*} 0 \leq \BB E(A_M) \leq \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M, -1) )\right). \end{equation*} In particular, $\BB E(A_M)\geq 0$ implies \begin{equation*} \alpha \BB E(M) \leq \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1)) \right). \end{equation*} By Lemma~\ref{prop-overshoot-finite} $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J}_M,-1)) \right)<\infty$ and by Lemma~\ref{E[M]} $\BB E(M) = \infty$, so $\alpha\leq 0$. We already showed in Proposition~\ref{prop-J-finite} that $\alpha\geq 0$, so in fact $\alpha=0$. \end{proof} The following corollary is what allows us to identify the variance and covariance of $Z$ in Theorem~\ref{thm-variable-SD} in the case when $z=1$. \begin{cor} If $q = 0$ then $\protect\hyperlink{def-J}{\chi} = 2$.
\end{cor} \begin{proof} When $q = 0$ the word $X(-\protect\hyperlink{def-J}{J},-1)$ contains exactly one burger. Hence in this case $|X(-\protect\hyperlink{def-J}{J},-1)| = 2- \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1))$. Therefore Lemma~\ref{prop-J-count-mean} implies $\protect\hyperlink{def-J}{\chi} = 2$ in this case. \end{proof} \begin{lem} \label{prop-J-limit} \begin{equation*} \lim_{n\rightarrow\infty} \BB E\left(|X(-n,-1)| \times \BB 1_{(\protect\hyperlink{def-J}{J}>n)} \right) = 0. \end{equation*} \end{lem} \begin{proof} By the optional stopping theorem, for each $n\in\BB N$, \begin{equation} \label{eqn-J-stop-decomp} 0 = \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J} \wedge n,- 1)) \right) = \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \BB 1_{(\protect\hyperlink{def-J}{J}\leq n)} \right) + \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1)) \BB 1_{(\protect\hyperlink{def-J}{J} > n)} \right). \end{equation} Since \[|\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \BB 1_{(\protect\hyperlink{def-J}{J}\leq n)}| \leq |\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1))| \leq |X(-J,-1)|\,, \] and by Proposition~\ref{prop-J-finite} $\BB E(|X(-J,-1)|)<\infty$, by dominated convergence (and Lemma~\ref{prop-J-count-mean}), \begin{equation*} \lim_{n\rightarrow\infty} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \BB 1_{(\protect\hyperlink{def-J}{J}\leq n)} \right) = \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal C}(X(-\protect\hyperlink{def-J}{J},-1)) \right) = 0\,. 
\end{equation*} It therefore follows from~\eqref{eqn-J-stop-decomp} that \begin{equation} \label{eqn-C-on-J} \lim_{n\rightarrow \infty} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1)) \BB 1_{(\protect\hyperlink{def-J}{J} > n)} \right) = 0. \end{equation} Now fix $\varepsilon,\zeta \in (0,1/2)$ and let $F_n = F_n(\varepsilon, n^\zeta)$ be as in Lemma~\ref{prop-few-SD} with $X_{-n}\cdots X_{-1}$ in place of $X_1\cdots X_n$, as in the proof of Proposition~\ref{prop-J-finite}. By~\eqref{eqn-length-on-F} and since $|X(-n,-1)| \leq n$, \begin{equation} \label{eqn-length-on-J-decomp} |X(-n,-1)| \BB 1_{(\protect\hyperlink{def-J}{J} > n)} \leq - \left(1 + \frac{2\varepsilon}{1-\varepsilon}\right) \protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1)) \BB 1_{(\protect\hyperlink{def-J}{J} > n)} + \frac{2\varepsilon}{1-\varepsilon} n^\zeta \BB 1_{(\protect\hyperlink{def-J}{J} > n)} + n \BB 1_{F_n}. \end{equation} By~\eqref{eqn-C-on-J}, the expectation of the first term on the right in~\eqref{eqn-length-on-J-decomp} tends to 0 as $n\rightarrow\infty$. By Lemma~\ref{prop-few-SD}, $\lim_{n\rightarrow\infty} n \PP(F_n) = 0$. By Lemma~\ref{prop-J-moment}, for each $\zeta'\in (\zeta,1/2)$ we have $\BB E(\protect\hyperlink{def-J}{J}^{\zeta'}) \leq \BB E(\protect\hyperlink{def-J}{J}_M^{\zeta'}) < \infty$, so by Chebyshev's inequality $\PP(\protect\hyperlink{def-J}{J} > n) \leq \BB E(\protect\hyperlink{def-J}{J}_M^{\zeta'})/n^{\zeta'}$. By combining these observations with~\eqref{eqn-length-on-J-decomp}, we obtain the statement of the lemma. \end{proof} \subsection{Variance of the discrepancy between burger types} \label{sec-var-bound} In this subsection we obtain an asymptotic formula for $\operatorname{Var}\protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n))$, where $\protect\hyperlink{def-theta-count}{\mathcal D}$ is as in Definition~\ref{def-theta-count} and $X'$ is as in Definition~\ref{def-X-identification}.
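As an aside, the backward exploration used throughout this section is straightforward to simulate, which gives a concrete sanity check on statements such as the corollary $\protect\hyperlink{def-J}{\chi} = 2$ when $q = 0$ and the structure of $X(-\protect\hyperlink{def-J}{J},-1)$ described in Lemma~\ref{prop-J-basic}. The following Python sketch is purely illustrative and is not used in any argument; it assumes the symbol frequencies $\PP(\hb)=\PP(\cb)=(1-q)/4$, $\PP(\ho)=\PP(\co)=(1-p)/4$, $\PP(\so)=p/2$, and it restricts to $q = 0$, in which case the reduced word $X(-j,-1)$ consists only of unmatched orders for $j < \protect\hyperlink{def-J}{J}$.

```python
import random

def backward_word(p, rng, max_steps=10_000):
    """Prepend i.i.d. symbols X_{-1}, X_{-2}, ... (with q = 0) until the
    reduced word X(-j,-1) contains a burger, and return that reduced word.

    Symbols: 'H', 'C' are fresh burgers; 'h', 'c' are type orders; 'F' is a
    flexible order.  With q = 0, before time J the reduced word is a list of
    unmatched orders (leftmost = most recently prepended), and a newly
    prepended burger is consumed by the leftmost order that matches it, if
    any.  Returns None if J exceeds max_steps (the caller discards these)."""
    orders = []
    for _ in range(max_steps):
        if rng.random() < 0.5:  # an order arrives and joins the left end
            u = rng.random()
            orders.insert(0, 'F' if u < p else ('h' if u < p + (1 - p) / 2 else 'c'))
        else:  # a fresh burger arrives (q = 0, so never a duplicate)
            b = 'H' if rng.random() < 0.5 else 'C'
            for i, o in enumerate(orders):
                if o == 'F' or o == b.lower():
                    del orders[i]  # consumed by the leftmost matching order
                    break
            else:
                return [b] + orders  # no matching order: this time is J
    return None

def estimate_chi(p, trials=2000, seed=0):
    """Crude Monte Carlo estimate of chi = E|X(-J,-1)| (downward-biased,
    since explorations longer than max_steps are discarded); by the
    corollary above it should be near 2 whenever q = 0."""
    rng = random.Random(seed)
    words = [w for w in (backward_word(p, rng) for _ in range(trials)) if w]
    return sum(len(w) for w in words) / len(words)
```

Every word returned by \texttt{backward\_word} consists of the burger $X_{-\protect\hyperlink{def-J}{J}}$ followed only by orders for the opposite burger type, in agreement with Lemma~\ref{prop-J-basic}; because $|X(-\protect\hyperlink{def-J}{J},-1)|$ is heavy-tailed, the empirical mean returned by \texttt{estimate\_chi} approaches $\protect\hyperlink{def-J}{\chi} = 2$ only slowly.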
This formula will be used to obtain the variance and covariance for the limiting Brownian motion in Theorem~\ref{thm-variable-SD}. In particular, we prove Proposition~\ref{prop-var-limit} below. The proof is similar to the argument found in~\cite[\S~3.1]{shef-burger}, but unlike in~\cite[\S~3.1]{shef-burger}, all of the assumptions needed to make the argument work have already been proven. Recall from Proposition~\ref{prop-J-finite} that $\protect\hyperlink{def-J}{\chi}$ is finite. \begin{prop} \label{prop-var-limit} Let $\protect\hyperlink{def-J}{\chi}$ be as in~\eqref{eqn-chi-def}. Then \begin{equation*} \lim_{n\rightarrow\infty} n^{-1} \operatorname{Var}\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n,-1) ) \right) = 1 + (p+q) \protect\hyperlink{def-J}{\chi}. \end{equation*} \end{prop} \begin{proof} By Lemma~\ref{prop-J-basic}, the word $X(-\protect\hyperlink{def-J}{J},-1)$ is equal to $X'(-\protect\hyperlink{def-J}{J},-1)$ and consists of either $\hb$'s and $\co$'s (if $X_{-\protect\hyperlink{def-J}{J}} = \hb$) or $\cb$'s and $\ho$'s (if $X_{-\protect\hyperlink{def-J}{J}} = \cb$). Therefore, \begin{equation} \label{eqn-J-discrep} \protect\hyperlink{def-theta-count}{\mathcal D}\left(X(-\protect\hyperlink{def-J}{J},-1)\right) = \protect\hyperlink{def-theta-count}{\mathcal D}\left(X'(-\protect\hyperlink{def-J}{J},-1)\right) = \pm |X(-\protect\hyperlink{def-J}{J},-1)| \end{equation} where the sign is positive if $X_{-\protect\hyperlink{def-J}{J}} = \hb$ and negative if $X_{-\protect\hyperlink{def-J}{J}} = \cb$. We observe that $X_0$ is independent from $X(-\protect\hyperlink{def-J}{J},-1)$, and that $X'_0$ is determined by $X_0$ on the event $\left\{X_0 \not\in\left\{\db, \so\right\} \right\}$. Therefore, \[ \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'_0) \protect\hyperlink{def-theta-count}{\mathcal D}(X(-\protect\hyperlink{def-J}{J},-1)) \BB 1_{\left( X_0 \not\in\left\{\db, \so\right\} \right)} \right) = 0. 
\] If on the other hand $X_0 \in\left\{\db, \so\right\}$, then $X_0'\in\{\hb,\co\}$ if $X_{-\protect\hyperlink{def-J}{J}} = \hb$, and $X_0'\in\{\cb,\ho\}$ if $X_{-\protect\hyperlink{def-J}{J}} = \cb$. Therefore, if $X_0 \in\left\{\db, \so\right\}$ then $\protect\hyperlink{def-theta-count}{\mathcal D}(X'_0)$ has the same sign as $\protect\hyperlink{def-theta-count}{\mathcal D}(X(-\protect\hyperlink{def-J}{J},-1))$, so \begin{align} \label{eqn-discrep-mean-J} \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'_0) \protect\hyperlink{def-theta-count}{\mathcal D}(X(-\protect\hyperlink{def-J}{J},-1)) \right) &=\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'_0) \protect\hyperlink{def-theta-count}{\mathcal D}(X(-\protect\hyperlink{def-J}{J},-1)) \BB 1_{(X_0\in\{\db,\so\})} \right) \notag\\ &= \PP\!\left( X_0 \in \left\{\db, \so\right\} \right) \BB E\left(|X(-\protect\hyperlink{def-J}{J},-1)| \right) = \frac{\protect\hyperlink{def-J}{\chi} (p+q)}{2}. \end{align} We next observe that $X_0'$ is determined by $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$ and $X_0$, so by the strong Markov property, for each $n\in\BB N$ it holds that $X_0'$ is conditionally independent from $X'_{-n} \cdots X'_{-\protect\hyperlink{def-J}{J}-1}$ given $X'_{-\protect\hyperlink{def-J}{J}}\cdots X'_{-1}$ (here we set $X(-n,-\protect\hyperlink{def-J}{J}-1) = \emptyset$ if $n \leq \protect\hyperlink{def-J}{J}$, so that the assertion holds vacuously in this case). By symmetry $\protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n,-\protect\hyperlink{def-J}{J}-1))$ has zero conditional mean given $X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}$, so \begin{equation} \label{eqn-after-J} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_0') \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n,-\protect\hyperlink{def-J}{J}-1)) \;|\; X_{-\protect\hyperlink{def-J}{J}} \cdots X_{-1}\right) = 0.
\end{equation} Therefore, \begin{equation} \label{eqn-discrep-mean-n} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_0') \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n, -1)) \right) = \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_0') \protect\hyperlink{def-theta-count}{\mathcal D}(X(-\protect\hyperlink{def-J}{J}, -1)) \BB 1_{\protect\hyperlink{def-J}{J}\leq n} \right) + \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_0') \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n, -1)) \BB 1_{\protect\hyperlink{def-J}{J} > n} \right). \end{equation} By~\eqref{eqn-J-discrep},~\eqref{eqn-discrep-mean-J}, and dominated convergence (with $|X(-\protect\hyperlink{def-J}{J},-1)|$ as the dominator; recall Proposition~\ref{prop-J-finite}) we find that the first term on the right in~\eqref{eqn-discrep-mean-n} tends to $\protect\hyperlink{def-J}{\chi}(p+q)/2$ as $n\rightarrow\infty$. The absolute value of the second term is at most $\BB E\left(|X(-n,-1)| \BB 1_{(\protect\hyperlink{def-J}{J} > n)} \right)$, which tends to 0 by Lemma~\ref{prop-J-limit}. 
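It follows from~\eqref{eqn-discrep-mean-n} that \begin{equation*} \lim_{n\rightarrow\infty} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_0')\, \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n, -1)) \right) = \frac{\protect\hyperlink{def-J}{\chi} (p+q)}{2}\,. \end{equation*}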
By translation invariance, we therefore have \alb \operatorname{Var}\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n)) \right) &= \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n,-1))^2 \right) \\ &= \sum_{i=1}^n \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X_i')^2 \right) + 2 \sum_{i=2}^n \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X_i') \protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,i-1))\right) \\ &= n + 2 \sum_{i=2}^n \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X_0') \protect\hyperlink{def-theta-count}{\mathcal D}(X'(-i+1, -1))\right) \\ &= \ \left(1 + \protect\hyperlink{def-J}{\chi} (p+q) \right) n + o(n).\qedhere \ale \end{proof} \subsection{Expected length of the reduced word} \label{sec-moment-bound} In this subsection we estimate the expectations of several quantities related to the reduced words $X(1,n)$ and $X'(1,n)$ for $n\in\BB N$ (recall~\eqref{eqn-X(a,b)}). As one might expect due to the diffusive scaling for $Z^n$ in~\eqref{eqn-Z^n-def}, these quantities will typically be of order $n^{1/2}$. We first prove in Lemma~\ref{prop-length-mean-upper'} an upper bound for the length of the latter word, which may be shorter than $|X(1,n)|$ since there could be $\db$'s in $X_1\dots X_n$ which are identified by burgers in $\dots X_{-1} X_0$ but matched to orders in $X_1\dots X_n$. In Lemma~\ref{prop-length-mean-upper}, we transfer this to an upper bound for $|X(1,n)|$ using Lemma~\ref{prop-few-SD}. We then use a comparison to simple random walk on $\BB Z^2$ (via Lemma~\ref{prop-mean-mono}) to prove a corresponding lower bound for the expected number of burgers and orders in $X(1,n)$ (Lemma~\ref{prop-H-mean}). \begin{lem} \label{prop-length-mean-upper'} For $n\in\BB N$, we have (using the notation $\preceq$ from Section~\ref{sec-basic}), \begin{equation*} \BB E\left(|X'(1,n)| \right) \preceq n^{1/2}\,. 
\end{equation*} \end{lem} \begin{proof} By the symmetry between hamburgers and cheeseburgers, $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n))\right)=0$, so by Proposition~\ref{prop-var-limit} and translation invariance, for each $n\in\BB N$ we have $\BB E\left( \protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n))^2\right)= \operatorname{Var}\left(\protect\hyperlink{def-theta-count}{\mathcal D}(X'(-n,-1) ) \right)\preceq n$. Since $n\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}( X'(1,n))$ is a simple random walk, $\BB E\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X'(1,n))^2 \right) = n$. With $\protect\hyperlink{def-theta-count}{d}(X'(1,n))$ as in Definition~\ref{def-theta-count}, $\protect\hyperlink{def-theta-count}{d}(X'(1,n)) = \frac12 \left( \protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n)) + \protect\hyperlink{def-theta-count}{\mathcal C}(X'(1,n))\right)$. By a union bound and the Chebyshev inequality, we infer \begin{equation} \label{eqn-d-tail} \PP\!\left(|\protect\hyperlink{def-theta-count}{d}(X'(1,n))| \geq k \right) \preceq n/k^2,\quad\quad \forall n,k \in \BB N. \end{equation} For $k\in\BB N$, let $K_k$ be the smallest $i \in \BB N$ for which $X(-i,-1)$ contains at least $k$ hamburgers. Then $X_{-K_k}$ is a $\hb$ without a match in $X_{-K_k} \dots X_{-1}$, so each $\db$ or $\so$ in $X(-K_k +1 , -1)$ must be identified and there are no hamburger orders in $X(-K_k+1,-1)$. Consequently, the word $X(-K_k, -1)$ contains at least $k$ hamburgers, no unidentified $\db$'s or $\so$'s, and no orders other than cheeseburger orders. Therefore, \begin{equation*} \protect\hyperlink{def-theta-count}{d}\left(X(-K_k, -1) \right) = \protect\hyperlink{def-theta-count}{d}\left(X'(-K_k, -1) \right) \geq k. 
\end{equation*} It follows that if $K_k \leq n$, then either \begin{equation*} \protect\hyperlink{def-theta-count}{d}\left(X'(-n, -K_k- 1) \right) \leq -k/2 \quad \operatorname{or}\quad \protect\hyperlink{def-theta-count}{d}\left(X'(-n,-1) \right) \geq k/2. \end{equation*} Since $K_k$ is a backward stopping time for the word $X$, we infer from the strong Markov property and translation invariance that the conditional law of $\protect\hyperlink{def-theta-count}{d}\left(X'(-n, -K_k- 1) \right)$ given $X_{-K_k} \cdots X_{-1}$ is the same as the law of $\protect\hyperlink{def-theta-count}{d}\left(X'(1, n - K_k) \right)$. By~\eqref{eqn-d-tail} and the union bound, \begin{equation*} \PP\!\left( K_k \leq n \right) \preceq n/k^2\,, \end{equation*} and hence \begin{equation} \label{eqn-H-tail} \PP\!\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left(X'(-n,-1) \right) \geq k \right) \preceq n/k^2,\quad \forall k,n \in \BB N. \end{equation} By combining~\eqref{eqn-d-tail} and~\eqref{eqn-H-tail} and noting that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\ho}(x) \leq 2 \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(x) + |\protect\hyperlink{def-theta-count}{d}( x )|$ for every word $x$, we get \begin{equation*} \PP\!\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\ho}\left(X'(-n,-1) \right) \geq k \right) \preceq n/k^2,\quad \forall k,n \in \BB N. \end{equation*} By symmetry, the analogous estimate holds with $\cb$ and $\co$ in place of $\hb$ and $\ho$. Since $|X'(-n,-1)|=\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb|\ho|\co}(X'(-n,-1))$, a union bound therefore implies \begin{equation*} \PP\!\left( \left| X'(-n,-1) \right| \geq k\right) \preceq n/k^2,\quad \forall k,n \in \BB N. 
\end{equation*} Hence \begin{equation*} \BB E\left( \left|X'(-n,-1)\right| \right) = \sum_{k=1}^\infty \PP\!\left( \left| X'(-n,-1) \right| \geq k\right) \preceq \int_1^\infty (1 \wedge (n/k^2)) \, dk \preceq n^{1/2}\,, \end{equation*} which finishes the proof in view of translation invariance. \end{proof} We now estimate the expectation of $|X(1,n)|$, which may be larger than the expectation of $|X'(1,n)|$ since some duplicate burgers with no match in $X_1 \cdots X_n$ may correspond to hamburgers or cheeseburgers in $X'$ which have a match in $X'_1 \cdots X'_n$. \begin{lem} \label{prop-length-mean-upper} \begin{equation} \label{eqn-length-mean-upper} \BB E\left(|X(1,n)|\right) \preceq n^{1/2}\,, \end{equation} and \begin{equation} \label{eqn-SD-mean-upper} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n))\right) = o(n^{1/2})\,, \end{equation} as $n\in\BB N$ tends to infinity. \end{lem} \begin{proof} If $i\in [1,n]_\BB Z$ is such that $X_i$ does not have a match in $X_1\cdots X_n$ but $X_i'$ has a match in $X_1'\cdots X_n'$, then either $X_i =\db$ or $X_i$ is matched to a $\db$ in the word $X_1\cdots X_n$. Therefore, \begin{equation} \label{eqn-reduced-compare} |X'(1,n)| \leq |X(1,n)| \leq |X'(1,n)| + 2 \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}(X(1,n))\,. \end{equation} Now fix $\varepsilon, \zeta \in (0,1/2)$ and for $n\in\BB N$ let $F_n = F_n(\varepsilon, n^\zeta)$ be the event defined in~\eqref{eqn-few-SD-event}. On the event $F_n^c$, we have \begin{equation*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n)) \leq \varepsilon |X(1,n)| + \varepsilon n^\zeta \leq \varepsilon |X'(1,n)| + 2 \varepsilon \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n)) + \varepsilon n^\zeta\,, \end{equation*} where we used~\eqref{eqn-reduced-compare} in the second inequality.
After re-arranging this inequality, and considering also the possibility that $F_n$ occurs, we get \begin{equation} \label{eqn-SD-length-compare} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n)) \leq \frac{\varepsilon}{1-2\varepsilon} |X'(1,n)| + \frac{\varepsilon}{1-2\varepsilon} n^\zeta + n \BB 1_{F_n}. \end{equation} Combining~\eqref{eqn-SD-length-compare}, the bound $\BB E(|X'(1,n)|)\preceq n^{1/2}$ from Lemma~\ref{prop-length-mean-upper'}, the exponential decay of $\BB E(n\BB 1_{F_n})$ from Lemma~\ref{prop-few-SD}, and the fact that $\varepsilon>0$ can be made arbitrarily small, we easily obtain~\eqref{eqn-SD-mean-upper}. We obtain~\eqref{eqn-length-mean-upper} from~\eqref{eqn-SD-mean-upper}, \eqref{eqn-reduced-compare} and Lemma~\ref{prop-length-mean-upper'}. \end{proof} \begin{lem} \label{prop-H-mean} For $n\in\BB N$, \begin{equation} \label{eqn-H-mean} \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left(X (1,n) \right) \right) \asymp \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}\left(X (1,n) \right) \right) \asymp n^{1/2}\,. \end{equation} \end{lem} \begin{proof} The upper bounds for both expectations in~\eqref{eqn-H-mean} follow from Lemma~\ref{prop-length-mean-upper}, so we only need to prove the lower bounds. Recall that $\PP^{0,0}$ denotes the law of $X$ with $p = q = 0$ and $\BB E^{0,0}$ is the corresponding expectation. By Lemma~\ref{prop-mean-mono}, \begin{equation} \label{eqn-burger-mean-compare} \begin{aligned} \BB E \left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb|\db}(X(1,n)) \right) &\geq \BB E^{0,0}\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(1,n))\right)\\ \BB E \left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co|\so}(X(1,n)) \right) &\geq \BB E^{0,0}\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho|\co}(X(1,n))\right)\,. 
\end{aligned} \end{equation} If all symbols in $X_{-n}\cdots X_{-1}$ are identified, then \begin{align*} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(-n,-1)) &= \max_{1\leq i\leq n} \protect\hyperlink{def-theta-count}{d}(X(-i,-1)) \\ \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(-n,-1)) &= \max_{1\leq i\leq n+1} -\protect\hyperlink{def-theta-count}{d}(X(-n,-i))\,. \end{align*} Under $\PP^{0,0}$, the maps $i\mapsto\protect\hyperlink{def-theta-count}{\vec{d}}(X(-i,-1))$ and $i\mapsto\protect\hyperlink{def-theta-count}{\vec{d}}(X(-n,-i))$ are two-dimensional simple random walks, so we deduce (using e.g., Donsker's invariance principle and Fatou's lemma together with the fact that Brownian motion has a well-defined running supremum process which is positive at any given time) \begin{equation} \label{E00>=n^1/2} \begin{aligned} \BB E^{0,0}(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(-n,-1))) &\succeq n^{1/2} \\ \BB E^{0,0}(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(-n,-1))) &\succeq n^{1/2}\,. \end{aligned} \end{equation} By symmetry $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(1,n))\right) = \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\cb}(X(1,n))\right)$ and $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,n))\right) = \BB E\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\co}(X(1,n))\right)$, and by \eqref{eqn-SD-mean-upper} of Lemma~\ref{prop-length-mean-upper} $\BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X(1,n))\right) = o(n^{1/2})$, which combined with \eqref{eqn-burger-mean-compare} and \eqref{E00>=n^1/2} gives the lower bounds $\BB E \left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(1,n)) \right) \succeq n^{1/2}$ and $\BB E \left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,n)) \right) \succeq n^{1/2}$. 
\end{proof} \subsection{Tail bound for the length of the reduced word} \label{sec-word-length} In this subsection we prove the following analogue of~\cite[Lem.~3.13]{shef-burger}, which will be used to prove tightness of the sequence of paths $Z^n$ defined in~\eqref{eqn-Z^n-def} in the proof of Theorem~\ref{thm-variable-SD}. \begin{prop} \label{prop-length-sup} There are constants $a_0,a_1> 0$ such that for each $n\in\BB N$ and $r > 0$, \begin{equation} \label{eqn-length-max-all} \PP\!\left( \max_{\substack{i,j\in [1,n]_\BB Z\\1\leq i\leq j\leq n}} |X(i,j)| > r n^{1/2} \right) \leq a_0 e^{-a_1 r} \,. \end{equation} \end{prop} To prove Proposition~\ref{prop-length-sup}, we will study the times at which unmatched hamburgers are added when we read the word backwards. The increments of $X$ between these times are i.i.d., and the number of $\db$'s which are identified at each of these times (some of which also correspond to unmatched hamburgers in our word) can be bounded using Lemma~\ref{prop-few-SD} (c.f.\ Lemma~\ref{prop-D-count-finite}). Using a lower bound for the probability that a reduced word of length $n$ contains no hamburgers (Lemma~\ref{prop-J^H-tail}) and Chernoff's inequality, we get an upper tail bound for the number of hamburgers in $X(-n,-1)$. By symmetry, we also have an analogous bound for the number of cheeseburgers in $X(-n,-1)$. Since the difference $\protect\hyperlink{def-theta-count}{\mathcal C}(X(-n,-1))$ between the number of burgers and the number of orders in $X(-n,-1)$ evolves as a simple random walk on $\BB Z$ and by another application of Lemma~\ref{prop-few-SD}, this will be enough to prove Proposition~\ref{prop-length-sup}. \begin{lem} \label{prop-J^H-tail} Let $J^\hb$ be the smallest $j\in\BB N$ for which $X(-j,-1)$ contains a hamburger. Then \begin{equation} \label{eqn-J^H-tail} \PP\!\left(J^\hb > n \right) \asymp n^{-1/2} \end{equation} with the implicit constant depending only on $p$. 
\end{lem} \begin{proof} For $n\in\BB N\cup\{0\}$, let $E_n$ be the event that $X(1,n)$ contains no hamburgers (recall that $X(1,0) = \emptyset$). By translation invariance, \begin{equation} \label{eqn-J^H-event-compare} \PP\!\left( E_n \right) = \PP\!\left( J^\hb > n \right)\,. \end{equation} In particular, $n\mapsto \PP(E_n)$ is non-increasing. Suppose $i \in [1,n]_\BB Z$. If $X_i$ identifies to $\ho$ in $X_1\cdots X_n$ and has no match in $X_1 \dots X_{i-1}$, then $E_{i-1}$ occurs and $X_i\in\{\ho,\so\}$. On the other hand, if $E_{i-1}$ occurs, then by independence of the symbols of $X$, it holds with conditional probability $\frac{1-p}{4}$ that $X_i = \ho$, in which case $X_i$ does not have a match in $X_1 \cdots X_i$. Therefore, \begin{equation*} \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,n)) \right) \leq \sum_{i=0}^{n-1} \PP\!\left( E_i \right) \leq \frac{4}{1-p} \BB E\left(\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}(X(1,n)) \right) . \end{equation*} By Lemma~\ref{prop-H-mean} we can find a constant $C>1$ such that for each $n\in\BB N$ \begin{equation} \label{eqn-J^H-sum} C^{-1} n^{1/2} \leq \sum_{i=0}^{n-1} \PP\!\left( E_i \right) \leq C n^{1/2}. \end{equation} By monotonicity of $\PP(E_n)$, we immediately obtain \begin{equation*} n \PP(E_n) \leq \sum_{i=0}^{n-1} \PP( E_i) \leq C n^{ 1/2} . \end{equation*} Furthermore, applying~\eqref{eqn-J^H-sum} with $\lceil 4 C^4 n \rceil$ in place of $n$ gives \begin{equation*} 4 C^4 n \PP(E_n) \geq \sum_{i=n}^{\lceil 4 C^4 n \rceil-1} \PP(E_i) \geq 2 C n^{1/2} - C n^{1/2} = C n^{1/2} . \end{equation*} Combining these two relations with~\eqref{eqn-J^H-event-compare} yields~\eqref{eqn-J^H-tail}. \end{proof} \begin{lem} \label{prop-D-count-finite} Let $J^\hb$ be the smallest $j \in \BB N$ for which $X(-j,-1)$ contains a hamburger.
There are constants $a_0 , a_1 > 0$ depending only on $p$ such that for each $m \in\BB N$, we have \begin{equation} \label{eqn-D-count-finite} \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}\left(X(-J^\hb+1,-1)\right) > m \right) \leq a_0 e^{-a_1 m} . \end{equation} \end{lem} \begin{proof} We observe that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-J^\hb+1,-1)\right) \geq \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\ho}\left(X(-J^\hb+1,-1)\right)$; indeed, otherwise it is not possible for all of the $\ho$'s in $X(-J^\hb+1,-1)$ to be fulfilled in $X_{-J^\hb} \dots X_{-1}$ while still leaving a leftover $\hb$. Now let $c_0 , c_1 > 0$ be as in Lemma~\ref{prop-few-SD} with $\varepsilon = 1$. By that lemma and a union bound, \begin{equation*} \PP\!\left( \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}\left(X(-J^\hb+1,-1)\right) \geq m , \, J^\hb \leq e^{c_1 m/2} \right) \leq c_0 e^{-c_1 m/2} . \end{equation*} On the other hand, by Lemma~\ref{prop-J^H-tail} we have \begin{equation*} \PP\! \left( J^\hb > e^{c_1 m/2} \right) \preceq e^{-c_1 m/4} . \end{equation*} Combining these estimates yields~\eqref{eqn-D-count-finite}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop-length-sup}] Let $J^\hb_0 = 0$ and for $m\in\BB N$ inductively let $J^\hb_m$ be the smallest $j \geq J^\hb_{m-1}$ for which $X(-j,-J^\hb_{m-1}-1)$ contains a hamburger. Then $J^\hb_1$ is the same as the time $J^\hb$ from Lemma~\ref{prop-J^H-tail} and by the strong Markov property the increments $X_{-J^\hb_m} \cdots X_{-J^\hb_{m-1}-1}$ for $m\in\BB N$ are i.i.d. For $m\in\BB N$, let \begin{equation*} H_m \colonequals \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left(X(-J^\hb_m, -J^\hb_{m-1}-1)\right) = 1 + \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-J^\hb_m +1, -J^\hb_{m-1}-1)\right). 
\end{equation*} Since none of the reduced words $X(-J^\hb_m,-J^\hb_{m-1}-1)$ contain $\ho$'s, \begin{equation} \label{eqn-H-sum} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left( X(-J^\hb_m, -1) \right) = \sum_{k=1}^m H_k \,. \end{equation} By Lemma~\ref{prop-D-count-finite}, for some positive number $\beta>0$ (depending only on $p$) $\BB E(e^{\beta H_k})<\infty$, and since the $H_k$'s are i.i.d., Chernoff's bound implies that there are positive numbers $\widetilde c_0, \widetilde c_1 > 0$ such that for each $m \in \BB N$, \begin{equation} \label{eqn-H-sum-tail} \PP\!\left( \sum_{k=1}^m H_k \geq \widetilde c_0 m \right) \leq e^{-\widetilde c_1 m}. \end{equation} By Lemma~\ref{prop-J^H-tail}, we can find a constant $c>0$ such that for each $n, m \in\BB N$, \begin{equation*} \PP\!\left( J^\hb_m - J^\hb_{m-1} > n \right) \geq c n^{-1/2}. \end{equation*} Since the increments $J^\hb_m - J^\hb_{m-1}$ are i.i.d., we infer that for each $n, m \in \BB N$, \begin{equation} \label{eqn-J-exp-bound} \PP\!\left( J^\hb_m \leq n \right) \leq \PP\!\left( J^\hb_k - J^\hb_{k-1} \leq n,\, \forall k \leq m \right) \leq \left(1 - c n^{-1/2} \right)^m \leq \exp[-c m/n^{1/2}]\,. \end{equation} Recall that $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(-j,-1))$ is monotone increasing in $j$. If $J^\hb_{m} \geq n$ and $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}\left( X(-J^\hb_{m}, -1) \right) \leq \widetilde c_0 m$, then $\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(-j,-1)) \leq \widetilde c_0 m$ for each $j \in [1,n]_\BB Z$. 
By taking $m = \lfloor r n^{1/2} / \widetilde c_0\rfloor$ and applying~\eqref{eqn-H-sum}, ~\eqref{eqn-H-sum-tail}, and~\eqref{eqn-J-exp-bound}, we find that for each $n\in\BB N$, \begin{equation} \label{eqn-burger-tail} \PP\!\left( \max_{j\in [1,n]_\BB Z} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb}(X(-j,-1)) > r n^{1/2} \right) \leq c_0 e^{-c_1 r} \end{equation} for appropriate $c_0, c_1 > 0$ independent of $r$ and $n$. By symmetry, the analogous estimate holds with $\cb$ in place of $\hb$. Since $j\mapsto \protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))$ is a simple random walk, we have (see e.g.~\cite[Prop.~2.1.2b]{lawler-limic-walks}) \begin{equation} \label{eqn-net-count-tail} \PP\!\left( \max_{j \in [1,n]_\BB Z} |\protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))| > r n^{1/2} \right) \leq b_0 e^{-b_1 r^2} \end{equation} for universal constants $b_0, b_1 > 0$. By Lemma~\ref{prop-few-SD} (applied with $\varepsilon = \frac12$ and $A = \text{const}\times r n^{1/2}$) and the union bound, except on an event of probability $\leq \exp(-\Theta(r))$, \begin{equation} \label{eqn-D-tail} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-j, -1) \right) \leq \frac12 \text{const} \times r n^{1/2} - \frac12 \protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1)) + \frac12 \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb|\db}(X(-j,-1))\,, \quad \forall j \in [1, n]_\BB Z . \end{equation} Re-arranging gives \begin{equation} \label{eqn-D-tail'} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}\left(X(-j, -1) \right) \leq \text{const}\times r n^{1/2} - \protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1)) + \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb }(X(-j,-1))\,, \quad \forall j \in [1, n]_\BB Z . 
\end{equation} By writing $|X(-j,-1)|= 2\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db}(X(-j,-1))+2\protect\hyperlink{def-theta-count}{\mathcal N\!}_{\hb|\cb}(X(-j,-1))-\protect\hyperlink{def-theta-count}{\mathcal C}(X(-j,-1))$, using the bound~\eqref{eqn-D-tail'}, and the bounds~\eqref{eqn-burger-tail} and~\eqref{eqn-net-count-tail}, we obtain \begin{equation} \label{eqn-length-sup-forward} \PP\!\left( \max_{j\in [1,n]_\BB Z} |X(-j,-1)| > r n^{1/2} \right) \leq \text{const} \times e^{-\text{const}\times r} \,. \end{equation} We now observe that for $1\leq i\leq j\leq n$, each order and each unidentified $\so$ or $\db$ in $X(i,j)$ also appears in $X(i,n)$; and each $\hb$ or $\cb$ in $X(i,j)$ either appears in $X(i,n)$ or is consumed by a unique order in $X(j+1,n)$. Thus \begin{equation} \label{eqn-reduced-length-compare} |X(i,j)| \leq |X(i,n)| + |X(j+1,n)|\,. \end{equation} The bound~\eqref{eqn-reduced-length-compare} together with~\eqref{eqn-length-sup-forward} implies~\eqref{eqn-length-max-all}. \end{proof} \subsection{Convergence to correlated Brownian motion} \label{sec-variable-SD-proof} We are now ready to conclude the proof of Theorem~\ref{thm-variable-SD}. We first establish tightness. \begin{lem} \label{prop-variable-SD-tight} Suppose we are in the setting of Theorem~\ref{thm-variable-SD}. The sequence of laws of the paths $Z^n$ for $n\in\BB N$ is tight in the topology of uniform convergence on compacts of $\BB R$. \end{lem} \begin{proof} Fix $T \geq 1$ and $\varepsilon > 0$. For $N\in\BB N$, we cover the time interval $[0,T]$ by $N$ blocks of the form $[k T/N,(k+2) T/N]$ for $k\in[0,N-1]_\BB Z$. Note that successive blocks overlap. Within each block, the path $Z^n$ has (up to rounding error) $2 n T/N$ steps. Any pair of times $s,t\in[0,T]$ with $|s-t|<T/N$ lies in some common block, and if $s,t\in\BB Z/n$ and $s<t$, then $\|Z^n(s)-Z^n(t)\|_1$ is bounded by $|X(ns,nt)|$.
Thus Proposition~\ref{prop-length-sup} together with the union bound implies that there exist constants $a_0, a_1 > 0$, such that for any $n\geq N$ (here we take $n\geq N$ to avoid worrying about rounding error), \begin{equation*} \PP\!\left(\sup_{\substack{s,t\in[0,T]\\|s-t|\leq T/N}} \|Z^n(t) - Z^n(s)\|_1 \geq 2^{-m} \right) \leq 2N a_0 \exp\left( - a_1 T^{-1/2} N^{1/2} 2^{-m} \right) \,. \end{equation*} By choosing $N=N_{T,\varepsilon,m}$ sufficiently large, depending on $T$, $\varepsilon$, and $m$, we can make this probability at most $\varepsilon 2^{-m}$ for all $n\geq N_{T,\varepsilon,m}$. By starting with $\delta_m = T/N_{T,\varepsilon,m}$, and then possibly shrinking $\delta_m$, we can arrange that \begin{equation*} \PP\!\left(\sup_{\substack{s,t\in[0,T]\\|s-t|\leq\delta_m}} \|Z^n(t) - Z^n(s)\|_1 \geq 2^{-m} \right) \leq \varepsilon 2^{-m} \end{equation*} for all $n\in\BB N$, not just $n\geq N_{T,\varepsilon,m}$. By the union bound, we obtain that for each $n\in\BB N$, it holds except on an event of probability at most $\varepsilon$ that, whenever $m\in\BB N$ and $s,t \in [0,T]$ with $|t-s| \leq \delta_m$, we have $\|Z^n(t) -Z^n(s)\|_1 < 2^{-m}$. By the Arzel\'a-Ascoli theorem, we obtain tightness of the paths $Z^n|_{[0,\infty)}$ in the topology of uniform convergence on compacts. Tightness of the sequence of the full processes (defined on $\BB R$) follows from translation invariance. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm-variable-SD}] By Lemma~\ref{prop-variable-SD-tight} and Prokhorov's theorem, for any sequence of $n$'s tending to infinity, there exists a subsequence $n_k$ and a random continuous path $Z = ( U, V) : \BB R \rightarrow \BB R^2$ such that, as $k$ tends to infinity, $Z^{n_k}|_{[0,\infty)}$ converges to $Z$ in law in the topology of uniform convergence on compacts. Next we show that the law of $Z$ is uniquely determined (independently of the subsequence). 
Consider any subsequence $n_k$ for which $Z^{n_k}|_{[0,\infty)}$ converges in law (in the topology of uniform convergence on compacts). By the Skorokhod representation theorem, we can find a coupling of a sequence of random words $(X^{n_k})$, each with the law of $X$, such that if we define $Z^{n_k}$ with $X^{n_k}$ in place of $X$, then a.s.\ as $k$ tends to infinity, $Z^{n_k}$ converges to $Z$ uniformly on compact subsets of $[0,\infty)$. Fix real numbers $t_0 < t_1 < \cdots < t_N$. For $j \in [1,N]_\BB Z$ and $k\in\BB N$, let \begin{equation*} \Upsilon_j^{n_k} \colonequals n_k^{-1/2} \protect\hyperlink{def-theta-count}{\vec{d}} \big( X^{n_k}(\lfloor t_{j-1} n_k \rfloor + 1, \lfloor t_j n_k \rfloor ) \big). \end{equation*} Observe that $\Upsilon_j^{n_k}$ differs from $Z^{n_k}(t_j) - Z^{n_k}(t_{j-1})$ in either coordinate by at most \[2 n_k^{-1/2} + n_k^{-1/2} \protect\hyperlink{def-theta-count}{\mathcal N\!}_{\db|\so}(X^{n_k}\left(\lfloor t_{j-1} n_k \rfloor + 1, \lfloor t_j n_k \rfloor \right)).\] By Lemma~\ref{prop-few-SD} and Proposition~\ref{prop-length-sup}, the latter quantity tends to $0$ in probability as $k$ tends to infinity, and since by the Skorokhod coupling $Z^{n_k}\rightarrow Z$, in fact $\Upsilon_j^{n_k} \rightarrow Z(t_j) - Z(t_{j-1})$ a.s.\ for each $j\in [1,N]_\BB Z$. The random variables $(\Upsilon_j^{n_k} : j\in [1,N]_\BB Z ) $ are independent, and by translation invariance of the law of $X$ together with our above observation about $\Upsilon_j^{n_k}$, the law of each $\Upsilon_j^{n_k}$ converges as $k$ tends to infinity to the law of $Z(t_j ) - Z(t_{j-1})$. Hence the increments $Z(t_j) - Z(t_{j-1})$ are independent and each has the same law as $Z(t_j - t_{j-1})$, i.e., $Z$ has independent stationary increments. By Proposition~\ref{prop-length-sup} and the Vitali convergence theorem, we find that for each $t\geq 0$, the first and second moments of the coordinates of $Z^{n_k}(t)$ converge to the corresponding quantities for $Z(t)$.
Convergence of the expectations implies that $\BB E( Z(t)) = 0$ for each $t \geq 0$, and convergence of the variances together with Proposition~\ref{prop-length-sup} implies that $Z(t)$ has finite variance. Thus $Z$ is a continuous L\'evy process with independent stationary mean-zero increments, so $Z$ must be a two-dimensional Brownian motion with $Z(0)= 0$, zero drift, and some choice of variances and covariance. Since $\protect\hyperlink{def-theta-count}{\mathcal C}(X'(1,n))$ is a simple random walk, \begin{equation*} \lim_{k \rightarrow \infty} \operatorname{Var}\left(n_k^{-1/2} \protect\hyperlink{def-theta-count}{\mathcal C}(X'(1,n_k)) \right) = 1\,, \end{equation*} and by Proposition~\ref{prop-var-limit}, \begin{equation*} \lim_{k \rightarrow \infty} \operatorname{Var}\left(n_k^{-1/2} \protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,n_k)) \right) = 1 + (p+q) \protect\hyperlink{def-J}{\chi}\,. \end{equation*} Furthermore, the conditional law of $X$ given $\protect\hyperlink{def-theta-count}{\mathcal C}(X(1,m))$ for all $m\in\BB N$ is invariant under the involution operation~\eqref{eqn-involution} and this operation changes the sign of $\protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,m))$, so \begin{equation*} \operatorname{Cov}\left( \protect\hyperlink{def-theta-count}{\mathcal C}(X'(1,m)), \protect\hyperlink{def-theta-count}{\mathcal D}(X'(1,m)) \right) = 0,\quad \forall m \in \BB N\,. \end{equation*} Equivalently, \alb \operatorname{Var}\left( U(t) + V(t) \right) &=1,\\ \operatorname{Var}\left( U(t) - V(t) \right) &= 1 + (p+ q) \protect\hyperlink{def-J}{\chi},\\ \operatorname{Cov}\left( U(t) + V(t), U(t) - V(t) \right) &= 0. \ale Recalling the formula~\eqref{eqn-y-z}, this implies that $Z$ must be as in~\eqref{eqn-bm-cov-variable-SD}. If the full sequence $\{Z^n\}_{n\in \BB N}$ failed to converge in law to $Z$ with respect to the topology of uniform convergence on compact subsets of $[0,\infty)$, then there would be a subsequence bounded away from the law of $Z$.
But by Prokhorov's theorem and the argument above, there would be a subsubsequence converging in law to $Z$, a contradiction. Hence the full sequence $\{Z^n\}_{n\in \BB N}$ converges in law with respect to uniform convergence on compact subsets of $[0,\infty)$, and thus on compact subsets of $\BB R$ by translation invariance. The statement that $\protect\hyperlink{def-J}{\chi} =2$ when $q = 0$ is established in Lemma~\ref{prop-J-count-mean}. We thus obtain the statement of the theorem when $p_\so = p$, $p_\db = q$, and $p_\fo = p_\eb=0$. By Corollary~\ref{prop-identification-law} we obtain the statement of the theorem in general. \end{proof} \section{Open problems} \label{sec-open-problems} Here we list some open problems related to the model studied in this paper, some of which were mentioned in the text. \begin{enumerate} \item Compute the value of the constant $\protect\hyperlink{def-J}{\chi}$ in Theorem~\ref{thm-variable-SD} when $p_\db - p_\eb \neq0$ ($z\neq1$). Figure~\ref{fig:chi-kappa} shows computer simulations of the value of $\protect\hyperlink{def-J}{\chi}$ and the corresponding value of $\kappa$ in terms of $y$ and $z$. \begin{figure}[h!] \begin{center} \includegraphics[width=\textwidth/3]{chi-y-z} \includegraphics[width=\textwidth/3]{kappa-y-z} \end{center} \caption{Experimental plots for $\protect\hyperlink{def-J}{\chi}$ and $\kappa$ as a function of $(y,z)\in[0,1]\times [1,2]$.} \label{fig:chi-kappa} \end{figure} \item Prove an infinite-volume peanosphere scaling limit result similar to Theorems~\ref{thm-all-S} and~\ref{thm-variable-SD} in the case when $p_\so \neq1$ and $p_\db-p_\eb < 0$ ($y > 0$ and $z \in (0,1)$) or when $p_\fo - p_\so > 0$ and $p_\db-p_\eb \neq0$ ($y >1$ and $z\neq1$). \item Prove a scaling limit result for the walk $Z^n|_{[0,2]}$ conditioned on the event that the reduced word $X(1,2n) =\emptyset$ (which encodes a finite-volume spanning-tree-decorated random planar map), possibly just in the case when $p_\db = p_\eb = p_\fo = 0$.
See~\cite[Thm.~1.8]{gms-burger-finite} for an analogous result in the case when $p_\db = p_\eb = p_\so= 0$ and $p_\fo \in [0,1/2)$. \item Prove a scaling limit result for the bending loop model of Remark~\ref{remark-bending}. In particular is there an encoding of this model in terms of a model on words analogous to the one studied in this paper? \item For many statistical mechanics models on random planar maps which converge in the scaling limit to $\operatorname{SLE}_\kappa$-decorated LQG for some value of $\kappa > 0$, it is expected that the same model on a Euclidean lattice converges in the scaling limit to $\operatorname{SLE}_\kappa$. Recall that for peanosphere scaling limit results, the correlation of the Brownian motion is given by $-\cos(4\pi/\kappa)$. In light of Lemma~\ref{prop-activity} and Theorems~\ref{thm-all-S} and~\ref{thm-variable-SD}, it is therefore natural to make the following conjecture, which expands the conjecture in~\cite{kassel-wilson-active}. \begin{conj} Let $\Lambda$ be either the triangular, hexagonal, or square lattice and suppose that either $y = 0$ and $z > 0$ or $y \in [0,1]$ and $z\in [1,\infty)$. Let $T$ be a spanning tree on $\Lambda$ sampled according to the law~\eqref{eqn-spanning-tree-law} (defined, e.g., by taking a limit of the law~\eqref{eqn-spanning-tree-law} on large finite sub-graphs of $\Lambda$) and let $\lambda$ be its associated Peano curve. Then $\lambda$ converges in law in the scaling limit to $\operatorname{SLE}_\kappa$, where $\kappa \geq 8$ is chosen so \begin{equation*} - \cos\left(\frac{4\pi}{\kappa} \right) = \begin{dcases} -\frac{z}{1+z},\quad &y = 0 \\ -\frac{(z-y) \protect\hyperlink{def-J}{\chi}}{(y+1)(z+1) + (z-y)\protect\hyperlink{def-J}{\chi}} \quad &(y,z) \in [0,1] \times [1,\infty), \end{dcases} \end{equation*} where $\protect\hyperlink{def-J}{\chi}$ (depending on $y$ and $z$) is as in Theorem~\ref{thm-variable-SD}. \end{conj} Prove this conjecture. 
The case when $y = z =1$ corresponds to the uniform spanning tree and has been treated in~\cite{lsw-lerw-ust}. The case $(y,z)=(1+\sqrt{2},1)$ corresponds to the FK--Ising model and has recently been addressed in~\cite{kemp-smirnov-fk-bdy}. \end{enumerate}
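A natural first step toward Monte Carlo estimates of the kind shown in Figure~\ref{fig:chi-kappa} is to simulate the word $X$ and its reduction. The sketch below treats only the basic case in which the four symbols $\hb$, $\cb$, $\ho$, $\co$ occur with equal probability (no $\so$, $\db$, $\fo$, or $\eb$ symbols), where it suffices to track burger counts, since an order always consumes the freshest unmatched burger of its type; the string encoding of symbols and the routine names are our own illustrative choices, not taken from this paper.

```python
import random

SYMBOLS = ["hb", "cb", "ho", "co"]  # hamburger, cheeseburger, and their orders

def reduce_counts(word):
    """Count unmatched symbols in a word of hb/cb/ho/co.

    An order consumes the freshest unmatched burger of its type, so in this
    four-symbol model it is enough to track how many burgers of each type
    remain and how many orders never found a match.
    """
    burgers = {"hb": 0, "cb": 0}
    unmatched_orders = 0
    for s in word:
        if s in burgers:
            burgers[s] += 1
        else:
            b = "hb" if s == "ho" else "cb"
            if burgers[b] > 0:
                burgers[b] -= 1        # order matched to an earlier burger
            else:
                unmatched_orders += 1  # order survives in the reduced word
    return burgers["hb"] + burgers["cb"], unmatched_orders

def reduced_length(n, rng):
    word = [rng.choice(SYMBOLS) for _ in range(n)]
    nb, no = reduce_counts(word)
    # sanity check: burgers minus orders in the reduced word equals the
    # simple random walk value C(X(1,n)), which reduction preserves
    assert nb - no == sum(1 if s in ("hb", "cb") else -1 for s in word)
    return nb + no  # |X(1, n)|

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (400, 1600, 6400):
        mean = sum(reduced_length(n, rng) for _ in range(200)) / 200
        print(n, mean / n ** 0.5)  # roughly constant: E|X(1,n)| is of order n^{1/2}
```

Handling general $(p_\so, p_\db, p_\fo, p_\eb)$ additionally requires identifying each $\so$ and $\db$ with the freshest burger on the stack at the time it is read, so the full stack, not just the counts, must then be maintained.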
\section{Introduction} All realistic quantum systems interact with an environment and a proper description of their dynamics calls for the toolbox of open quantum systems theory \cite{bp, weiss}. In this framework the simplest type of noise is described by a Markovian master equation in the Lindblad form, corresponding to a completely positive, trace-preserving dynamical map satisfying the semi-group property \cite{linblad}. The latter condition means that the map can be divided into infinitely many time-steps, each identical and independent of the past and future steps \cite{wolf}, and therefore a Markovian dynamical map has the intuitive interpretation of memoryless dynamics. Markovian processes successfully describe a plethora of physical processes, particularly in the field of quantum optics, but can fail if applied to more complex system-environment interactions where memory effects become important. In such situations one must resort to non-Markovian dynamical maps.\\ Recently the theory of non-Markovian dynamics has beautifully taken shape as a result of proposals for the definition of non-Markovian dynamics \cite{wolf, BLP, RHP, fisher, cv, luo}. Remarkably, until recently the whole concept lacked a simple, model-independent definition. The application of non-Markovianity quantifiers, constructed on the basis of these definitions, has led to a deeper understanding of the microscopic mechanisms underlying non-Markovian dynamics \cite{microscopic}, and to the clarification of hazy concepts \cite{hazy}. Moreover, it turns out that the quantifiers can be used to witness initial correlations in the composite system-environment state \cite{initialcorrelations}, and in the environment state, and to probe quantum phase transitions of the environment \cite{probeus}, to name just a few examples.
The quantifier put forward by Breuer {\it et al.}, equating non-Markovianity with bidirectional information flow, has also been studied in a linear optics set-up, thus establishing that non-Markovianity quantifiers are not merely a theoretical tool \cite{nmexperiment}.\\ In the spirit of these advances we recently conducted an investigation of the non-Markovian dynamics of a qubit coupled to an ultracold Bose-Einstein condensed (BEC) gas with a two-fold motivation \cite{us}. On the one hand, experimentalists have discovered astonishingly accurate means of controlling and manipulating ultracold gases \cite{ultracoldexperiment}. This raises the question of whether ultracold gases could provide a tailored environment for a quantum system such that its decohering effect on the system is minimized. Indeed, we discovered that simple and experimentally feasible manipulation of the ultracold reservoir leads to significant changes in the way the qubit dephases and enables perfect control of the Markovian to non-Markovian transition in the qubit dynamics. On the other hand, the way a qubit decoheres may reveal important information about the environment, leading to the concept of a probe qubit \cite{probequbit, dieter, dieter2}. Here the fundamental question is to what extent one may probe a large, complicated environment by looking at a simple and accessible quantum system that interacts with it. This work aims to dive deeper into these two aspects in the context of qubits embedded in ultracold gases.\\ More specifically, we study the dynamics of two different qubit models, each embedded in an identical environment, namely a BEC gas. The scattering length of the bosons forming the BEC can be controlled using Feshbach resonances and therefore we have access to an environment that can be chosen to consist of either free or interacting bosonic particles.
It is worth stressing that the latter regime is largely unexplored and prototypes of open system models are mainly built on the assumption of an environment of free particles. Intuitively, one may expect an interacting environment to have better memory-keeping properties than a non-interacting one, and this is exactly what we discovered in Ref. \cite{us}: non-Markovian effects take place when the environment is sufficiently strongly interacting. It was left as an open question, however, whether this phenomenon is specific to the model we studied in Ref. \cite{us} or if one can generally associate interacting environments with non-Markovian dynamics. By comparing and contrasting the reduced dynamics of two different qubit models we find the answer to be negative: an interacting environment can induce Markovian dynamics on one qubit architecture and non-Markovian dynamics on another.\\ We also address another unresolved question of non-Markovian open quantum systems, namely the connection between the emergence of memory effects and the form of the spectral density function characterizing the dynamical map. Non-Markovianity is often associated with structured spectra. In the case of the Jaynes-Cummings model, for example, a decrease in the width of the Lorentzian spectral density function always leads to a higher degree of non-Markovianity \cite{jaynescummings}. The connection is much more subtle for purely dephasing processes, as shown in Ref. \cite{discord}, where two of us unveil a condition on the form of the spectrum to create non-Markovian dynamics. In this article we study this connection for the rather complex dephasing dynamics induced by the BEC environment and show that, quite unexpectedly, purely Markovian dynamics can arise from spectral density functions with rich structure. \section{The two models} \begin{figure}[h] \includegraphics[width=0.9\linewidth]{fig1} \caption{(Color online) The two qubit architectures considered in this article.
Model I comprises an impurity atom trapped in a double well potential and model II assumes a single impurity atom, with an internal level structure, trapped in a deep harmonic trap. Each qubit interacts with a Bose-Einstein condensed ultracold atomic gas in a shallow harmonic trap.} \label{model} \end{figure} We compare two different qubit models composed of a trapped impurity atom interacting with an ultracold bosonic gas. In model I, originally introduced in Ref. \cite{massimo} and displayed in Fig. \ref{model}(a), the impurity atom is trapped in a deep double well potential and forms an effective qubit system where the two qubit states are represented by occupation of the impurity in the left $\ket{l}$ or the right well $\ket{r}$. In model II, see Refs. \cite{dieter, dieter2} and Fig. \ref{model}(b), the impurity is trapped in one site of an optical lattice and has two internal states, $\ket{e}$ and $\ket{g}$, representing the qubit states. The Hamiltonians for both models are composed of three parts: the Hamiltonian of the impurity, the Hamiltonian of the interacting background gas, and the interaction Hamiltonian, respectively, \eqa &&H_A=\int d\xvec \Psi^\dg(\xvec)\left[\frac{\mathbf{p}_A^2}{2m_A}+V_A(\xvec)\right]\Psi(\xvec),\nn\\ &&H_B=\int d\xvec \Phi^\dg(\xvec)\left[\frac{\mathbf{p}_B^2}{2m_B}+V_B(\xvec)+\frac{g_B}{2}\Phi^\dg(\xvec)\Phi(\xvec)\right]\Phi(\xvec),\nn\\ &&H_{AB}=\frac{g_{AB}}{2}\int d\xvec \Phi^\dg(\xvec)\Psi^\dg(\xvec)\Psi(\xvec)\Phi(\xvec). \eeqa Here $m_A$, $\Psi(\xvec)$ and $V_A(\xvec)$ are the mass, field operator and the trapping potential of the impurity atom, $m_B$, $\Phi(\xvec)$, $g_B=4\pi\hbar^2 a_B/m_B$ and $V_B(\xvec)$ are the mass, field operator, coupling constant and the trapping potential of a background gas atom, and $a_B$ is the scattering length of the boson-boson collisions.
Finally, $g_{AB}=4\pi\hbar^2 a_{AB}/m_{AB}$ is the coupling constant of the impurity-boson interaction where $m_{AB}=m_A m_B/(m_A+m_B)$ is the effective mass.\\ In both models we expand the impurity field operator in terms of Wannier functions $\{\phi_\kvec\}$ localized in the lattice sites/the two wells. Assuming that the lattice sites/the two wells are very deep, hopping and tunneling effects are both suppressed and the Wannier functions take a Gaussian form. We assume that the background gas is weakly interacting and can be treated in the Bogoliubov approximation, neglect all terms that are quadratic in the creation and annihilation operators of the Bogoliubov modes and assume that the background gas is homogeneous.\\ It turns out that when we focus on a single impurity the Hamiltonians $H_A$ and $H_B$ in both models are effectively the same. Any differences in these Hamiltonians will not have an effect on the dynamics of information flow characterising non-Markovian effects (they are all related to the phase of the evolving qubit) and can be safely neglected in this study. The interaction Hamiltonians, instead, have a small but crucial difference, arising from the different trapping potentials of the impurities: \eqa H_{AB}^{\text{Model I}}&=&\frac{g_{AB}\sqrt{n_0}}{\Omega}\sum_{\kvec,\;p=L,R}\;\hat{n}_p \hat{c}_k\sqrt{\frac{\epsilon_\kvec}{E_\kvec}}\int d\xvec|\phi(\xvec_p)|^2e^{i \kvec\cdot\xvec}\nn\\ &&\qquad+H.c.,\nn\\ H_{AB}^{\text{Model II}}&=&\frac{g_{AB}\sqrt{n_0}}{\Omega}\sum_\kvec\;\hat{n}\hat{c}_k\sqrt{\frac{\epsilon_\kvec}{E_\kvec}}\int d\xvec|\phi(\xvec)|^2e^{i \kvec\cdot\xvec}\nn\\ &&\qquad+H.c., \eeqa where $n_0$ is the condensate density, $\Omega$ is the quantization volume, $E_\kvec=\sqrt{\epsilon_\kvec(\epsilon_\kvec+2n_0g_B)}$ is the Bogoliubov dispersion relation, $\epsilon_\kvec=\hbar^2k^2/(2 m_B)$ is the dispersion relation of a non-interacting gas with $k=|\kvec|$ and $\hat{c}_k$ is the Bogoliubov excitation operator.
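The dispersion relations entering these Hamiltonians are straightforward to evaluate numerically. The following sketch computes $E_\kvec$ and its free-gas limit; all numerical values (a $^{87}$Rb-like mass and scattering length, the condensate density) are purely illustrative assumptions, not the parameters of the cited works:

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the cited references):
hbar = 1.054571817e-34    # J s
m_B = 1.44316e-25         # kg, roughly the mass of a 87Rb atom
a_Rb = 5.3e-9             # m, approximate 87Rb s-wave scattering length
n0 = 1.0e20               # m^-3, condensate density

def epsilon(k):
    """Free-particle dispersion eps_k = hbar^2 k^2 / (2 m_B)."""
    return hbar**2 * k**2 / (2.0 * m_B)

def bogoliubov(k, a_B):
    """Bogoliubov dispersion E_k = sqrt(eps_k (eps_k + 2 n0 g_B)),
    with g_B = 4 pi hbar^2 a_B / m_B."""
    g_B = 4.0 * np.pi * hbar**2 * a_B / m_B
    eps = epsilon(k)
    return np.sqrt(eps * (eps + 2.0 * n0 * g_B))

k = np.logspace(4, 8, 200)       # m^-1
E_free = bogoliubov(k, 0.0)      # free gas: E_k reduces to eps_k
E_int = bogoliubov(k, a_Rb)      # interacting gas: phonon-like at small k
```

For $a_B=0$ the branch is purely particle-like, while for $a_B>0$ the small-$k$ behaviour is linear, $E_\kvec\approx\hbar c k$ with $c=\sqrt{n_0 g_B/m_B}$; it is this phonon branch that governs the low-frequency physics discussed in the following sections.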
Operator $\hat{n}$ is the number operator of the impurities: For model I we assume that there is exactly one impurity atom in the double well system and therefore $\hat{n}_R=\frac{1}{2}(1+\sz)$ and $\hat{n}_L=\frac{1}{2}(1-\sz)$, where $\sz=\ket{l}\bra{l}-\ket{r}\bra{r}$. The two wells are spatially separated by a distance $\mathbf{L}$ so that $\mathbf{x}_R=\mathbf{x}_L-\mathbf{L}$. For model II we also assume one impurity in the lattice site. The atom has one internal state $\ket{g}$ which decouples from the environment and one $\ket{e}$ which does not, and therefore $\hat{n}=\ket{e}\bra{e}$.\\ We note that $H_{AB}^{\text{Model I}}$ effectively describes \emph{two} spatially separated qubits of Model II, albeit with a restricted state space $\{\ket{eg}, \ket{ge}\}$. Interestingly these states span the so-called subdecoherent state $\ket{eg}+\ket{ge}$, which is very robust against dephasing noise induced by the environment \cite{kalleantti, doll}. Therefore we can expect the qubit architecture of model I to be less affected by noise than model II.\\ \section{Reduced dynamics and information flow} The reduced dynamics of both models can be solved analytically \cite{kalleantti, kohler}. Each qubit dephases under the effect of the ultracold gas, i.e., the diagonal elements of the qubit density matrix remain constant while the off-diagonal elements decay as $\rho_{01}(t)=e^{-\Gamma(t)+i\theta(t)}\rho_{01}(0)$. The phase $\theta(t)$ has no effect on the information flow and therefore we do not consider it in this work. Instead we focus on the decoherence function $\Gamma(t)$. When $\Gamma'(t)>0$ information flows from the system to the environment and if there is an interval where $\Gamma'(t)<0$ then the flow of information is temporarily reversed. We associate this reversal with non-Markovian effects, adopting the proposal of Breuer {\it et al.} as our definition for non-Markovianity \cite{BLP}.
Indeed, this measure of non-Markovianity, applied to a dephasing model such as the two models considered here, is \eq \label{measure} \mathcal{N}=-\int_{\Gamma'(t)<0}ds\,\Gamma'(s). \eeq Recall that this measure captures the maximal amount of information that can flow back from the environment to the system. The decoherence functions for the two physical systems considered here are \eqa \Gamma(t)^{\text{Model I}}&=&\frac{g_{AB}^2n_0}{\Omega}\sum_\kvec e^{-k^2\sigma^2/2}\frac{\epsilon_\kvec}{E_\kvec}\frac{\sin^2(\frac{E_\kvec t}{2\hbar})}{E_\kvec^2}\sin^2(\kvec\cdot\mathbf{L}),\nn\\ \Gamma(t)^{\text{Model II}}&=&\frac{g_{AB}^2 n_0}{\Omega}\sum_\kvec e^{-k^2\sigma^2/2}\frac{\epsilon_\kvec}{E_\kvec}\frac{\sin^2(\frac{E_\kvec t}{2\hbar})}{E_\kvec^2}, \eeqa where $\sigma$ is the variance parameter. Interestingly the decoherence factor of Model I has \emph{exactly} the structure of the decoherence factor of two qubits of Model II with spatial separation $\mathbf{L}$ in a subdecoherent state \cite{massimo}, as we anticipated in the previous Section. Therefore we can expect some ``coherence trapping'' in Model I that we would not observe in Model II. \begin{figure}[h] \includegraphics[width=0.75\linewidth]{fig2} \caption{Decoherence functions $\Gamma(t)^{\text{Model I}}$ (solid line) and $\Gamma(t)^{\text{Model II}}$ (dashed line) for (a) one-dimensional, (b) two-dimensional and (c) three-dimensional environment for $a_B= 0.25 a_{Rb}$ (black lines) and $a_B= a_{Rb}$ (gray lines).} \label{decoherence} \end{figure} \subsection{Dynamics of the decoherence factor} We plot the decoherence factors of the two models in Fig. \ref{decoherence}, using the same values of the parameters as in Ref. \cite{us} and going to the limit of a continuum of modes, $\Omega^{-1}\sum_\kvec\rightarrow(2\pi)^{-D}\int d\kvec$, where $D$ is the dimension of the BEC. As anticipated, the qubit of Model I is much more robust against decoherence than the qubit of Model II.
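The continuum-limit integrals above are easy to evaluate numerically. The sketch below does so for $\Gamma(t)^{\text{Model II}}$ in the quasi-1D case, in dimensionless units $\hbar=m_B=\sigma=1$ and with the overall prefactor set to one (all values are illustrative); the sign of the numerical derivative of $\Gamma$ then flags possible information backflow:

```python
import numpy as np

def gamma_model2_1d(t, g_eff):
    """Model II decoherence function for a quasi-1D gas, continuum limit.
    Units: hbar = m_B = sigma = 1; overall prefactor set to 1.
    g_eff = 2 n0 g_B parametrizes the boson-boson interaction strength."""
    k = np.linspace(1e-4, 12.0, 4000)      # e^{-k^2/2} cuts off large k
    dk = k[1] - k[0]
    eps = k**2 / 2.0                        # free dispersion eps_k
    E = np.sqrt(eps * (eps + g_eff))        # Bogoliubov dispersion E_k
    f = np.exp(-k**2 / 2.0) * (eps / E) * np.sin(E * t / 2.0)**2 / E**2
    return dk * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

t = np.linspace(0.01, 30.0, 300)
gamma = np.array([gamma_model2_1d(ti, g_eff=1.0) for ti in t])

# Gamma'(t) < 0 on some interval would signal information backflow;
# for Model II in the quasi-1D case the decay is monotonic (Markovian).
backflow = np.diff(gamma) < 0.0
```

The same quadrature with the extra factor $\sin^2(kL)$ in the integrand reproduces the Model I curves.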
This difference is most striking in the case of a quasi-1D environment, where $\Gamma(t)^{\text{Model I}}$ saturates quickly to a small value, while $\Gamma(t)^{\text{Model II}}$ increases without bound over all considered time-scales. Hence Model I is almost unaffected by the environmental noise while Model II loses all coherence, and all initial states converge towards the maximally mixed state. The difference in the dynamics of the two models is less drastic when the environment is quasi-2D or 3D, where both decoherence factors saturate to a stationary value, although also in these two cases Model I is more robust against the noise than Model II. \\ Furthermore, the decoherence factor of Model II is monotonic in the case of 1D and 2D background gases, implying that the flow of information is always from the qubit to the environment and the dynamics is Markovian. This is at variance with the decoherence of Model I, where we find both Markovian and non-Markovian dynamics in the case of 1D and 2D background gases, depending on the value of the scattering length of the background gas \cite{us}. In the case of a 3D background gas, shown in Fig. \ref{decoherence}(c), both decoherence factors are non-monotonic for a large enough value of the scattering length. This signals non-Markovian effects. In the next Section we quantify these using the measure of Eq. \eqref{measure}. \subsection{Non-Markovianity} In all cases we have considered we only see one period of information backflow. This permits the use of a slightly modified non-Markovianity measure, which captures the maximal \emph{fraction} of information that can flow back from the environment to the system after an initial period of information flowing from the system to the environment \cite{us}. In Fig.~\ref{fig:measure} we show this modified non-Markovianity measure against the scattering length of the background gas in the case when the qubits are immersed in a 3D background gas. We observe a crossover from Markovian to non-Markovian dynamics for both models, although with different values of the crossover point: for a range of scattering lengths Model I decoheres in a non-Markovian way, while the dynamics of Model II is Markovian. We also consider the effect of temperature on both systems. For a small temperature $T=10$ nK we still observe a Markovian to non-Markovian crossover for Model I, although the crossover point is shifted slightly to a larger scattering length. Physically this means that, in order to induce non-Markovian dynamics, the boson-boson interaction has to be stronger to overcome the detrimental effects of thermal fluctuations. For Model II the measure is zero (the dynamics is Markovian) for all the considered values of the scattering length, demonstrating that the non-Markovianity of Model II is very fragile against thermal effects in the environment. We explain the differences in the dynamics of the two models in the following Section. \begin{figure}[h] \includegraphics[width=0.75\linewidth]{fig3} \caption{Non-Markovianity measure for Model I in a zero-T reservoir (solid black line) and a $T=10$ nK reservoir (solid gray line) and Model II in a zero-T reservoir (dashed black line).} \label{fig:measure} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{fig4} \caption{Spectral density functions $J(\omega)$ for Model I (solid lines in all figures) and Model II (dashed lines in all figures) in a (a) one-dimensional, (b) two-dimensional and (c) three-dimensional environment. Left-hand-side figures show the full spectrum, and the figures on the right show the low-frequency contribution.
In the latter we show the spectrum for a weakly interacting background gas with $a_B=10^{-3}a_{Rb}$ (black lines) and for a BEC with $a_B=a_{Rb}$ (gray lines).} \label{spectra} \end{figure} \section{Spectral density functions} The spectral density function $J(\omega)=\sum_\kvec|g_\kvec|^2\delta(\omega-\epsilon_\kvec)$ characterising the dephasing dynamics of an open quantum system is determined by the coupling constants $g_\kvec$ of the effective interaction Hamiltonian $H_{AB}=\sigma_z\sum_\kvec g_\kvec b_\kvec^\dagger+H.c.$ The spectrum of each qubit model considered in this article provides a framework for explaining the notable differences in their dynamics, namely the Markovian to non-Markovian crossover and the increased robustness against environmental noise of Model I compared to Model II. The spectral density, for small frequencies, shows a power-law behaviour $J(\omega)\propto \omega^s$. In the following we call the parameter $s>0$ the Ohmicity parameter, since it determines whether the spectral density is Ohmic with $s=1$, sub-Ohmic with $s<1$ or super-Ohmic with $s>1$.\\ We conjectured in Ref. \cite{us} that the existence of the crossover from Markovian to non-Markovian dynamics is closely related to the Ohmic class (sub-Ohmic, Ohmic or super-Ohmic) to which the spectrum belongs. In Ref. \cite{discord} we further quantified this claim, presenting a necessary condition for non-Markovian dephasing dynamics: Non-Markovian dynamics can only appear if the spectrum of the environment is super-Ohmic.
More specifically, we showed that for a very general dephasing model introduced by Palma, Suominen and Ekert, with a spectrum of the form $J(\omega)=\eta\;\omega_{ph}^{1-s}\omega^s\exp\{-\omega/\omega_c\}$ (where $\eta$ is a dimensionless coupling constant that we set to unity and $\omega_{ph}$ is a phononic reference frequency introduced to keep the dimension of the spectrum equal for all values of $s$) \cite{weiss}, non-Markovian effects take place only if $s>s_{crit}=2$, i.e., if the spectrum is strongly super-Ohmic. For the physical models considered here the form of the spectral density function is more complicated, but we show that the main results hold also for these two systems. \subsection{Spectra at low frequencies} We plot the spectral densities $J(\omega)$ for the two models in Fig. \ref{spectra}, focusing on the low-frequency part of the spectrum in the right hand side column. We observed in Ref. \cite{us} that for the ultracold environments the effective Ohmicity parameter depends on the scattering length of the environmental bosons, $s=s(a_B)$, and on the dimensionality of the BEC environment. Changing these two allows transitions from one Ohmic class to another. Here we confirm that for both models and for all three dimensions increasing the scattering length increases the effective value of $s$ for small frequencies $\omega$. This effect is especially pronounced in the case of Model II. When the environment is a quasi-1D free gas ($a_B/a_{Rb}\ll1$), the low-frequency part of the spectrum is sub-Ohmic, but as the scattering length is increased to $a_B/a_{Rb}\approx1$, the spectrum approaches an Ohmic form. However, even with a further increase in the strength of the boson-boson coupling in the environment the spectrum does not become super-Ohmic, and indeed we never observe non-Markovian effects in Model II in the 1D regime.
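The critical role of the Ohmicity parameter can be checked directly. At zero temperature the dephasing rate obeys $\Gamma'(t)=\int_0^\infty d\omega\, J(\omega)\sin(\omega t)/\omega$, so for the exponentially cut-off family above (in units $\eta=\omega_{ph}=\omega_c=1$) one finds analytically $\Gamma'(t)=\Gamma(s)\sin(s\arctan t)\,(1+t^2)^{-s/2}$, which can turn negative only when $s>2$. A numerical sketch of the same statement:

```python
import numpy as np

def gamma_dot(t, s):
    """Gamma'(t) = int_0^inf dw w^{s-1} e^{-w} sin(w t)  (units w_c = 1),
    evaluated by the trapezoidal rule on a truncated grid."""
    w = np.linspace(1e-6, 40.0, 20000)
    dw = w[1] - w[0]
    f = w**(s - 1.0) * np.exp(-w) * np.sin(w * t)
    return dw * (f.sum() - 0.5 * (f[0] + f[-1]))

t = np.linspace(0.05, 20.0, 400)
gdot_sub = np.array([gamma_dot(ti, s=1.5) for ti in t])  # s < 2: never negative
gdot_sup = np.array([gamma_dot(ti, s=3.0) for ti in t])  # s > 2: backflow sets in

# analytic zero crossing for s = 3: s * arctan(t) = pi  =>  t = tan(pi/3)
t_crit = np.tan(np.pi / 3.0)
```

For $s=3$ the rate changes sign at $t=\tan(\pi/3)\approx 1.73$ (in units of $\omega_c^{-1}$), i.e., information starts to flow back; for any $s\le 2$ one has $s\arctan(\omega_c t)<\pi$ for all $t$, and the rate stays non-negative.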
In the case of a quasi-2D environment the spectrum of the environment changes from sub-Ohmic to super-Ohmic when the scattering length value is increased, but even in the more strongly interacting case the spectrum is not sufficiently super-Ohmic to trigger non-Markovian effects. In the 3D case, instead, the free gas has an Ohmic spectrum which turns super-Ohmic as the interaction between the bosons is turned on and increased. In this case the spectrum can become so super-Ohmic that we observe non-Markovian effects.\\ Model I naturally has a more super-Ohmic spectrum than Model II, in the sense that when the ultracold gas is essentially non-interacting both the quasi-2D and the 3D environments have a super-Ohmic spectrum (the quasi-1D environment is roughly Ohmic), and the spectra become super-Ohmic in all dimensions when the scattering length is increased. Critically, in all dimensions the spectrum becomes super-Ohmic enough to create non-Markovian effects in the dynamics of the double-well qubit for some critical value of the scattering length, and therefore we observe a crossover from Markovian to non-Markovian dynamics for Model I in all three dimensions. \subsection{Full spectrum} Changes in the scattering length have a crucial effect on the spectral density for small frequencies but they are almost negligible when looking at the full spectra. Instead the full spectra, shown in the left-hand-side column of Fig. \ref{spectra}, exhibit another very interesting difference between the two models, namely that the spectrum of Model I oscillates as a function of $\omega$. Moreover, the spectral density function vanishes for some specific values of $\omega$ in the quasi-1D case, implying that some modes of the environment are completely decoupled from the qubit. We find that the larger the separation between the two wells, the more roots the spectral density has, i.e., the more modes decouple from the qubit.
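The decoupling of modes in the quasi-1D case can be made explicit. Using the free dispersion $\omega=k^2/2$ (units $\hbar=m_B=\sigma=1$, prefactors set to one) and keeping from $\Gamma(t)^{\text{Model I}}$ the geometric factor $\sin^2(kL)$, a sketch of the spectral density and of its roots reads:

```python
import numpy as np

# Quasi-1D sketch with a free background gas (hbar = m_B = sigma = 1).
# The double-well geometry contributes the factor sin^2(k L), so modes
# with k L = n pi decouple from the qubit. Prefactors are set to 1.
def J_model1(w, L):
    k = np.sqrt(2.0 * w)                 # free dispersion: w = k^2 / 2
    # 1/k is the one-dimensional density-of-states factor dk/dw
    return np.exp(-k**2 / 2.0) * np.sin(k * L)**2 / k

def decoupled_frequencies(L, w_max):
    """Frequencies of environmental modes that decouple from the qubit:
    k L = n pi  =>  w_n = (n pi / L)^2 / 2, for w_n <= w_max."""
    n_max = int(np.floor(L * np.sqrt(2.0 * w_max) / np.pi))
    n = np.arange(1, n_max + 1)
    return (n * np.pi / L)**2 / 2.0
```

Doubling the well separation $L$ doubles the number of roots below any fixed frequency, in line with the statement above that a larger separation decouples more modes.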
We observed numerically that increasing the well separation also increases the ``coherence trapping'', leading to higher stationary values of the off-diagonal elements of the qubit density matrix. The higher the dimension of the environment, the smaller the deviations of the spectrum of Model I from that of Model II. This phenomenon is also reflected in the differences in the decoherence factors of the two models; the differences are most pronounced in the case of a quasi-1D environment, and in the higher dimensions the decoherence factors are more similar in both value and dynamical behaviour. \section{Discussion and conclusions} We have studied the non-Markovian dynamics of two physically different realisations of a qubit interacting with a BEC environment. We discovered that the qubit architecture of Model I is much more sensitive to non-Markovian effects than the one used in Model II. This statement applies in three ways: (i) Model I has non-Markovian dynamics in all three dimensions of the BEC, unlike Model II, which is Markovian in 1D and 2D environments; (ii) it has larger values of the non-Markovianity measure in the cases when both qubits have non-Markovian dynamics, i.e., it regains previously lost information more easily; (iii) it is more robust against thermal fluctuations. We also discovered that the two qubit architectures can have extremely different sensitivity to environmental noise, especially in the case when the environment is effectively one-dimensional. This result highlights the importance of choosing suitable qubit systems when designing quantum simulations and quantum probe systems.\\ We also explored the connection between the form of the spectral density function and the ensuing qubit dynamics. While the form of the full spectral density function dictates the general dynamics of the qubit, only a very small part of it controls the Markovian to non-Markovian crossover.
We noted the importance of the low-frequency part of the spectrum in the emergence of non-Markovian phenomena in Ref. \cite{discord} in the case of a simple dephasing model, and the study presented in this article confirms that the statement holds also for more complex systems. It is nonetheless striking to notice the overwhelming importance of the low-frequency modes. The spectral density function of Model I in a quasi-1D BEC has a very rich structure over the frequency range of the order of the cut-off frequency $\sigma^{-1}$, yet this has no effect on the crossover of the qubit dynamics from Markovian to non-Markovian. The crossover is fully controlled by the behaviour of the spectral density function for low frequencies $\omega\ll\sigma^{-1}$, specifically by whether the spectrum is quadratically increasing or not. It is worth stressing that this connection seems to be quite specific to pure dephasing noise. In Ref. \cite{nori} the authors studied a dissipative model and found a direct connection between roots of the spectral density function and non-Markovian dissipationless dynamics; here we discovered that in the pure dephasing qubit model the roots of the spectra do not affect the (non-)Markovianity of the dynamics at all.\\ In conclusion, our results have twofold importance. On one hand they illustrate in a very clear way how the connection between non-Markovianity (or memory effects) and structured environments generally has to be taken with great care. On the other hand, and most importantly, they warn us of the misuse of the term ``non-Markovian environment''. In our study the environment is exactly the same for the two models, and in both cases it is interacting with a qubit probe. However, under certain conditions, perfectly identical environments induce Markovian dynamics on one qubit and non-Markovian dynamics on another.
\begin{acknowledgments} We acknowledge financial support from EPSRC (EP/J016349/1), the Finnish Cultural Foundation (Science Workshop on Entanglement), the Emil Aaltonen foundation (Non-Markovian Quantum Information), and the Magnus Ehrnroth foundation. We thank Gabriele De Chiara, Dieter Jaksch, Stephen Clark and Tomi Johnson for helpful discussions. \end{acknowledgments}
\section{Frequency-Dependent Cable Parameters} \label{app:cable-parameters} \subsection{Impact on the Branch Admittance of the Line Model} \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Figures/FreqDepParams_LineAdmittance.pdf} \caption{% Comparison of line branch admittances of the $\pi$-section equivalents with/without frequency-dependence of the cable parameters. For illustration, the element $(1,1)$ of the compound admittance matrices of the cable types (i.e., UG1 and UG3) is shown. The curves labelled with the suffix ``fd'' correspond to the cable models with frequency-dependent parameters. } \label{fig:line-model} \end{figure} The CIGR{\'E}\xspace report \cite{Rep:2014:CIGRE}, in which the benchmark microgrid is specified, does not provide any information on the frequency dependency of the cable parameters. Therefore, the cable parameters were calculated using EMTP-RV based on the available data on cable material and geometry. The behaviour of the branch admittances as a function of frequency is shown in \Cref{fig:line-model}. For illustration, the element $(1,1)$ of the compound admittance matrices is shown. As one can see, whether or not the frequency dependency of the parameters is considered has virtually no impact on the magnitude of the line admittance. In the phase, there is a slight difference between the two models at higher frequencies. \subsection{Impact on the Results of the Harmonic Power-Flow Study} \begin{figure}[ht] \centering \subfloat[] {% \centering \includegraphics[width=\linewidth]{Figures/FreqDepParams_Results_Voltage.pdf} \label{fig:results:voltage} } \subfloat[] {% \centering \includegraphics[width=\linewidth]{Figures/FreqDepParams_Results_Current.pdf} \label{fig:results:current} } \caption{% Impact of the frequency-dependent parameters on the results of the \HPF study. The results are compared at three nodes throughout the benchmark system.
The currents for Phase A are given in (\ref{fig:results:current}) and the voltages in (\ref{fig:results:voltage}). The results labelled with the suffix ``fd'' correspond to the line models with frequency-dependent parameters. } \label{fig:results} \end{figure} In order to assess the impact on the results of the \HPF study, analyses were conducted on the benchmark system using the line models with either frequency-invariant or frequency-dependent parameters. The obtained results are shown in \Cref{fig:results}. For illustration, the comparison is done at three nodes throughout the benchmark system. As one can see, the spectra obtained using the different line models are congruent. This is in line with the previously discussed analyses made in EMTP-RV. \section{Conclusions} \label{sec:concl} In Part~II of this paper, the \HPF method proposed in Part~I was validated. It was confirmed that common types of \CIDER[s] (i.e., power converters with LC and LCL filters) can be well represented using the proposed modelling framework. Moreover, the \HPF method was implemented in Matlab and validated against time-domain simulations with Simulink. For this validation, both individual resources as well as an entire system (i.e., the CIGR{\'E}\xspace low-voltage benchmark microgrid) were investigated. The largest observed errors are 1.33E-3~p.u. w.r.t. current magnitude, 6.33E-5~p.u. w.r.t. voltage magnitude, and 0.87~deg w.r.t. phase. If run on a standard laptop computer, the execution of the \HPF method for the benchmark system is up to five times faster than the \TDS (incl. the Fourier analysis). These results confirm that the proposed approach is indeed accurate and computationally efficient. \section{Introduction} \label{sec:intro} \IEEEPARstart{T}{he} validation of the proposed \emph{Harmonic Power-Flow} (\HPF) method involves two aspects.
Firstly, the suitability of the modeling framework to represent common types of \emph{Converter-Interfaced Distributed Energy Resources} (\CIDER[s]) with grid-forming or grid-following controls has to be confirmed. In this respect, power converters equipped with either LC or LCL filters are considered. Secondly, the accuracy of the modeling framework and the performance of the solution algorithm need to be assessed. To this end, the proposed method is applied to analyze individual resources as well as an entire system. More precisely, the \HPF algorithm is implemented in Matlab and compared against time-domain simulations carried out in Simulink. The remainder of Part~II is organized as follows: First, the models of standard components of \CIDER[s] (i.e., actuator, filters, and controllers) are developed in \cref{sec:lib-cmp}, including a thorough discussion of their properties and working principles. These component models are combined in \cref{sec:lib-rsc} to construct the complete models of a grid-forming and a grid-following \CIDER. Then, the proposed method is tested on individual resources in \cref{sec:val-rsc}, as well as on the CIGR{\'E}\xspace low-voltage benchmark microgrid in \cref{sec:val-sys}. Finally, the conclusions are drawn in \cref{sec:concl}. \section{Library of Component Models} \label{sec:lib-cmp} In this section, the models of actuators, filters, and controllers are developed. Note that, in order to obtain compact formulas, the time-dependency of the electrical quantities and signals is not explicitly stated each time. \subsection{Actuator} \label{sec:lib-cmp:act} \begin{figure} \centering \input{Figures/Actuator_Switching} \caption {% Schematic diagram of a three-phase two-level power converter, which is commonly used for \CIDER[s]. The fourth leg is optional: it is required only if the power converter has to be able to inject or absorb homopolar currents. 
} \label{fig:act} \end{figure} The actuator is the power converter which interfaces the \DC side of the \CIDER (i.e., a source or load) with its \AC side (i.e., the filter). It consists of an array of switches (i.e., power-transistor-type devices), which are controlled so that the output voltage of the actuator $\VT_{\alpha}$ follows the reference $\VT^{*}_{\alpha}$ (see \cref{fig:act}). The switching signals are typically generated by a \emph{Pulse-Width Modulator} (\PWM). This modulation creates distortions in the output voltage of the power converter. Indeed, this is why a filter is needed. Typically, these distortions occur at high frequencies (i.e., several kHz), which are far beyond the frequency range that is of interest for harmonic analysis (i.e., up to 1-2 kHz) \cite{Std:BSI-EN-50160:2000}. Hence, the actuator can be regarded as an ideal voltage source in the \HPF study: \begin{Hypothesis}\label{hyp:cmp:act} In the frequency range of interest for the \HPF study, switching effects are negligible. Therefore, the actuator is regarded as an ideal voltage source: \begin{alignat}{2} \VT_{\alpha,\pi} = \VT^{*}_{\alpha,\pi}&~\in~ \RealNum^{\dim(\pi)\times1} \end{alignat} \end{Hypothesis} \noindent Note that the size of this vector depends on the reference frame in which the power hardware is modelled% \footnote{% If phase coordinates are used, $\VT_{\alpha,\pi},\VT^{*}_{\alpha,\pi}\in\RealNum^{3\times1}$. }.% \subsection{Filter Stages} \label{sec:lib-cmp:fltr} \begin{figure}[t] \centering \subfloat[] {% \centering \input{Figures/Filter_Inductor} \label{fig:fltr:ind} } % \subfloat[] {% \centering \input{Figures/Filter_Capacitor} \label{fig:fltr:cap} } \caption {% Equivalent circuits of a filter stage $\lambda$ constructed from inductors (\ref{fig:fltr:ind}) or capacitors (\ref{fig:fltr:cap}), respectively. Observe that voltages, currents, and electrical parameters are expressed in the reference frame of the power hardware $\pi$.
} \label{fig:fltr} \end{figure} In order to attenuate the high-frequency distortions resulting from the switching in the actuator, \CIDER[s] are equipped with cascades of filter stages \cite{Jrn:PSE:PEC:2004:Blaabjerg}. Each stage consists of inductors or capacitors, which filter currents or voltages, respectively. Notably, the commonly used L-, LC-, and LCL-filters as well as higher-order filters are built in this way. Consider any stage $\lambda$ in the cascade of filters. An inductive filter stage is represented by the equivalent circuit in \cref{fig:fltr:ind}, which is described by the following differential equation: \begin{equation} \VT_{\lambda-1,\pi} - \VT_{\lambda+1,\pi} = \mathbf{R}_{\lambda,\pi}\IT_{\lambda,\pi} + \mathbf{L}_{\lambda,\pi}\frac{d}{dt}\IT_{\lambda,\pi} \label{eq:fltr:ind:diffeq} \end{equation} $\mathbf{R}_{\lambda,\pi},\mathbf{L}_{\lambda,\pi}\in\RealNum^{\dim(\pi)\times\dim(\pi)}$ are the compound electrical parameters of the inductive filter stage, $\IT_{\lambda,\pi}\in\RealNum^{\dim(\pi)\times1}$ is the current flowing through it, and $\VT_{\lambda-1,\pi},\VT_{\lambda+1,\pi}\in\RealNum^{\dim(\pi)\times1}$ are the voltages at the start and end node of the stage, respectively. Again, the sizes of these vectors and matrices depend on the reference frame in which the power hardware is modelled. 
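An inductive stage of this kind, combined with a capacitive one into the familiar LC filter, admits a simple per-phase state-space model built directly from \eqref{eq:fltr:ind:diffeq} and the analogous capacitive relation. The sketch below uses illustrative scalar parameters (assumed values, not the benchmark data):

```python
import numpy as np

# Per-phase LC filter stage (illustrative values, not the benchmark data):
# states x = [i_L, v_C], inputs u = [v_in, i_out].
#   L di/dt = v_in - v_C - R i      (inductive stage)
#   C dv/dt = i_L - i_out - G v_C   (capacitive stage)
R, L = 0.1, 2.0e-3      # ohm, H
G, C = 1.0e-4, 1.0e-5   # S, F

A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, -G / C]])
B = np.array([[1.0 / L, 0.0],
              [0.0, -1.0 / C]])

eigs = np.linalg.eigvals(A)
f_res = 1.0 / (2.0 * np.pi * np.sqrt(L * C))   # undamped resonance, ~1.1 kHz
```

The eigenvalues of $A$ have negative real parts (the stage is passively damped by $R$ and $G$) and an imaginary part close to the undamped resonance $1/\sqrt{LC}$, which is the peak the outer control loops must not excite.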
A capacitive filter stage is represented by the equivalent circuit in \cref{fig:fltr:cap}, which is described by \begin{equation} \IT_{\lambda-1,\pi} - \IT_{\lambda+1,\pi} = \mathbf{G}_{\lambda,\pi}\VT_{\lambda,\pi} + \mathbf{C}_{\lambda,\pi}\frac{d}{dt}\VT_{\lambda,\pi} \label{eq:fltr:cap:diffeq} \end{equation} $\mathbf{G}_{\lambda,\pi},\mathbf{C}_{\lambda,\pi}\in\RealNum^{\dim(\pi)\times\dim(\pi)}$ are the compound electrical parameters of the capacitive filter stage, $\VT_{\lambda,\pi}\in\RealNum^{\dim(\pi)\times1}$ is the voltage across it, and $\IT_{\lambda-1,\pi},\IT_{\lambda+1,\pi}\in\RealNum^{\dim(\pi)\times1}$ are the currents flowing into and out of the stage, respectively. In practice, the filter stages are built from identical discrete elements (i.e., one element per phase). Accordingly, the following hypothesis can be made: \begin{Hypothesis}\label{hyp:cmp:fltr} The compound electrical parameters of the filter stages are diagonal matrices with equal nonzero entries. That is, an inductive filter stage is characterized by \begin{equation} \mathbf{R}_{\lambda,\pi} = R_{\lambda}\operatorname{diag}(\mathbf{1}_{\pi}),~ \mathbf{L}_{\lambda,\pi} = L_{\lambda}\operatorname{diag}(\mathbf{1}_{\pi}) \label{eq:fltr:ind:param} \end{equation} and a capacitive filter stage by \begin{equation} \mathbf{G}_{\lambda,\pi} = G_{\lambda}\operatorname{diag}(\mathbf{1}_{\pi}),~ \mathbf{C}_{\lambda,\pi} = C_{\lambda}\operatorname{diag}(\mathbf{1}_{\pi}) \label{eq:fltr:cap:param} \end{equation} where $\operatorname{diag}(\mathbf{1}_{\pi})$ is the identity matrix w.r.t. reference frame of the power hardware. $R_{\lambda}$, $L_{\lambda}$ and $G_{\lambda}$, $C_{\lambda}$ are the parameters of the discrete elements. 
\end{Hypothesis} \subsection{Controller Stages} \label{sec:lib-cmp:ctrl} \begin{figure}[t] \centering % \hspace{-0.5cm} \subfloat[] {% \centering \input{Figures/Controller_Inductor} \label{fig:ctrl:ind} } % \hspace{-0.8cm} \subfloat[] {% \centering \input{Figures/Controller_Capacitor} \label{fig:ctrl:cap} } \caption {% Block diagrams of a controller stage $\lambda$ associated with an inductive (\ref{fig:ctrl:ind}) and capacitive (\ref{fig:ctrl:cap}) filter stage, respectively. In general, the control law includes \emph{Feed-Back} (\ctrlFB), \emph{Feed-Forward} (\ctrlFF), and \emph{Feed-Through} (\ctrlFT) terms. Observe that voltages and currents are expressed in the reference frame of the control software $\kappa$. } \label{fig:ctrl} \end{figure} Each filter stage can be coupled with a corresponding controller, which regulates either the current through or the voltage across the filter element, depending on whether the filter stage is inductive or capacitive.\footnote{% As already mentioned in Part~I of this paper, it is common practice for \CIDER[s] with LCL filter to implement one instead of two current control loops. Notably, this can easily be represented in the proposed framework. } As illustrated in \cref{fig:ctrl}, such a controller generally performs \emph{Feed-Back} (\ctrlFB), \emph{Feed-Forward} (\ctrlFF), and \emph{Feed-Through} (\ctrlFT) control. More precisely, as shown in \cref{fig:ctrl:ind,fig:ctrl:cap}, a stage $\lambda$ of the controller calculates the reference for the next-inner stage $\lambda-1$ from the deviation between its own state and the desired reference (i.e., via \ctrlFB and \ctrlFF control), as well as the state of the next-outer stage $\lambda+1$ (i.e., via \ctrlFT control). In principle, each block of a controller stage could be composed of multiple subblocks connected in series or in parallel (e.g., for the mitigation of specific harmonics).
In practice, simple \emph{Proportional-Integral-Derivative} (\textup{PID}\xspace) and \emph{Proportional-Resonant} (\textup{PR}\xspace) controllers are commonly used \cite{Jrn:Blaabjerg:2006}. For the sake of illustration, \textup{PI}\xspace-controllers are considered for \ctrlFB control, and \ctrlP-controllers for \ctrlFF and \ctrlFT control: \begin{Hypothesis}\label{hyp:cmp:ctrl} Each controller stage consists of a \textup{PI}\xspace controller for \ctrlFB control, and two \ctrlP controllers for \ctrlFF and \ctrlFT controls. \end{Hypothesis} \noindent Let $\mathbf{K}_{\ctrlFB,\lambda}$, $\mathbf{K}_{\ctrlFF,\lambda},\mathbf{K}_{\ctrlFT,\lambda}\in\RealNum^{\dim(\kappa)\times\dim(\kappa)}$ be the proportional gains and $T_{\ctrlFB,\lambda}$ the integration time, respectively. The control law for an inductive filter stage is given by (see \cref{fig:ctrl:ind}) \begin{align} \VT^{*}_{\lambda-1,\kappa} &= \left[~ \begin{aligned} &\mathbf{K}_{\ctrlFB,\lambda}\left(\Delta\IT_{\lambda,\kappa}+\frac{1}{T_{\ctrlFB,\lambda}}\int\Delta\IT_{\lambda,\kappa}\,dt\right)\\ + &\mathbf{K}_{\ctrlFT,\lambda}\VT_{\lambda+1,\kappa} + \mathbf{K}_{\ctrlFF,\lambda}\IT^{*}_{\lambda,\kappa} \end{aligned} \right. \label{eq:ctrl:ind:law}\\ \Delta\IT_{\lambda,\kappa} &\coloneqq \IT^{*}_{\lambda,\kappa}-\IT_{\lambda,\kappa} \label{eq:ctrl:ind:error} \end{align} $\VT^{*}_{\lambda-1,\kappa},\IT^{*}_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times1}$ are the reference voltage at the input of the controller stage and the reference current at its output, respectively. $\VT_{\lambda+1,\kappa},\IT_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times1}$ are the voltage at the output of the filter stage and the current through it, respectively, expressed in the reference frame of the controller. 
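For illustration, the inductive-stage control law \eqref{eq:ctrl:ind:law} can be realized in a few lines. The sketch below uses a forward-Euler discretization of the integral term; the function name, the time step, and the gain values in the usage example are illustrative and not taken from this paper.

```python
import numpy as np

def inductive_ctrl_step(i_ref, i_meas, v_outer, integ, dt,
                        K_fb, T_fb, K_ft, K_ff):
    """One discrete step of the control law (eq:ctrl:ind:law).

    i_ref, i_meas : reference and measured inductor current (DQ components)
    v_outer       : voltage of the next-outer stage (Feed-Through input)
    integ         : running integral of the current error
    K_fb, K_ft, K_ff : 2x2 gain matrices; T_fb : integration time
    """
    d_i = i_ref - i_meas                   # tracking error (eq:ctrl:ind:error)
    integ = integ + d_i * dt               # forward-Euler integration
    v_ref = (K_fb @ (d_i + integ / T_fb)   # Feed-Back (PI) term
             + K_ft @ v_outer              # Feed-Through term
             + K_ff @ i_ref)               # Feed-Forward term
    return v_ref, integ
```

The capacitive-stage law is obtained by exchanging the roles of voltages and currents.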
The control law for a capacitive filter stage is analogous (see \cref{fig:ctrl:cap}): \begin{align} \IT^{*}_{\lambda-1,\kappa} &= \left[~ \begin{aligned} &\mathbf{K}_{\ctrlFB,\lambda}\left(\Delta\VT_{\lambda,\kappa}+\frac{1}{T_{\ctrlFB,\lambda}}\int\Delta\VT_{\lambda,\kappa}\,dt\right)\\ + &\mathbf{K}_{\ctrlFT,\lambda}\IT_{\lambda+1,\kappa} + \mathbf{K}_{\ctrlFF,\lambda}\VT^{*}_{\lambda,\kappa} \end{aligned} \right. \label{eq:ctrl:cap:law} \\ \Delta\VT_{\lambda,\kappa} &\coloneqq \VT^{*}_{\lambda,\kappa}-\VT_{\lambda,\kappa} \label{eq:ctrl:cap:error} \end{align} $\IT^{*}_{\lambda-1,\kappa},\VT^{*}_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times1}$ are the reference current at the input of the controller stage and the reference voltage at its output, respectively. $\IT_{\lambda+1,\kappa},\VT_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times1}$ are the current at the output of the filter stage and the voltage across it, respectively, expressed in the reference frame of the controller. Typically, the \ctrlFB and \ctrlFT controllers treat each coordinate in the reference frame separately, and apply equal gains to them. In line with this fact, the following hypothesis is made: \begin{Hypothesis}\label{hyp:ctrl:fb-ft} The \ctrlFB and \ctrlFT gains are diagonal matrices with equal nonzero entries. That is \begin{align} \mathbf{K}_{\ctrlFB,\lambda} &= K_{\ctrlFB,\lambda}\operatorname{diag}(\mathbf{1}_{\kappa}) \\ \mathbf{K}_{\ctrlFT,\lambda} &= K_{\ctrlFT,\lambda}\operatorname{diag}(\mathbf{1}_{\kappa}) \end{align} where $\operatorname{diag}(\mathbf{1}_{\kappa})$ is the identity matrix w.r.t. the reference frame of the control software. \end{Hypothesis} The \ctrlFF controllers, by contrast, are obtained by restating the dynamical models of the filter stages, which are given in the reference frame of the power hardware, in the reference frame of the control software \cite{Jrn:Wang:2015}.
To this end, the transformation matrices $\mathbf{T}_{\kappa|\pi}\in\RealNum^{\dim(\kappa)\times\dim(\pi)}$ and $\mathbf{T}_{\pi|\kappa}\in\RealNum^{\dim(\pi)\times\dim(\kappa)}$ are substituted into the filter equations \eqref{eq:fltr:ind:diffeq}--\eqref{eq:fltr:cap:diffeq}. For the inductive filter stage, one obtains \begin{equation} \VT_{\lambda-1,\kappa} - \VT_{\lambda+1,\kappa} = \mathbf{R}_{\lambda,\kappa}\IT_{\lambda,\kappa} + \mathbf{L}_{\lambda,\kappa}\frac{d}{dt}\IT_{\lambda,\kappa} \label{eq:fltr:ind:diffeq:ctrl} \end{equation} where $\mathbf{R}_{\lambda,\kappa},\mathbf{L}_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times\dim(\kappa)}$ are given by \begin{align} \mathbf{R}_{\lambda,\kappa} &= \mathbf{T}_{\kappa|\pi}\mathbf{R}_{\lambda,\pi}\mathbf{T}_{\pi|\kappa} + \mathbf{T}_{\kappa|\pi}\mathbf{L}_{\lambda,\pi}\frac{d}{dt}\mathbf{T}_{\pi|\kappa} \label{eq:fltr:ind:rst:ctrl}\\ \mathbf{L}_{\lambda,\kappa} &= \mathbf{T}_{\kappa|\pi}\mathbf{L}_{\lambda,\pi}\mathbf{T}_{\pi|\kappa} \end{align} Analogously, for the capacitive filter stage, one finds \begin{equation} \IT_{\lambda-1,\kappa} - \IT_{\lambda+1,\kappa} = \mathbf{G}_{\lambda,\kappa}\VT_{\lambda,\kappa} + \mathbf{C}_{\lambda,\kappa}\frac{d}{dt}\VT_{\lambda,\kappa} \label{eq:fltr:cap:diffeq:ctrl} \end{equation} where $\mathbf{G}_{\lambda,\kappa},\mathbf{C}_{\lambda,\kappa}\in\RealNum^{\dim(\kappa)\times\dim(\kappa)}$ are given by \begin{align} \mathbf{G}_{\lambda,\kappa} &= \mathbf{T}_{\kappa|\pi}\mathbf{G}_{\lambda,\pi}\mathbf{T}_{\pi|\kappa} + \mathbf{T}_{\kappa|\pi}\mathbf{C}_{\lambda,\pi}\frac{d}{dt}\mathbf{T}_{\pi|\kappa} \label{eq:fltr:cap:cnd:ctrl}\\ \mathbf{C}_{\lambda,\kappa} &= \mathbf{T}_{\kappa|\pi}\mathbf{C}_{\lambda,\pi}\mathbf{T}_{\pi|\kappa} \end{align} Observe that the expressions for $\mathbf{R}_{\lambda,\kappa}$ and $\mathbf{G}_{\lambda,\kappa}$ include terms which result from the temporal derivatives of $\IT_{\lambda,\pi}=\mathbf{T}_{\pi|\kappa}\IT_{\lambda,\kappa}$ and 
$\VT_{\lambda,\pi}=\mathbf{T}_{\pi|\kappa}\VT_{\lambda,\kappa}$, respectively. These expressions often turn out to be time-invariant thanks to \cref{hyp:cmp:fltr}. For instance, as will be shown shortly, this is the case in the $\textup{DQ}\xspace$ frame. Using \eqref{eq:fltr:ind:diffeq:ctrl} and \eqref{eq:fltr:cap:diffeq:ctrl}, the \ctrlFF gains can be set in order to achieve zero error in steady state (e.g., \cite{Jrn:Wang:2015}) via the following additional hypothesis. \begin{Hypothesis}\label{hyp:cmp:ctrl:ff} The \ctrlFF gains are set to \begin{equation} \mathbf{K}_{\ctrlFF,\lambda} = \left\{ \begin{array}{cl} \mathbf{R}_{\lambda,\kappa} & \text{for inductive filter stages}\\ \mathbf{G}_{\lambda,\kappa} & \text{for capacitive filter stages} \end{array} \right. \end{equation} \end{Hypothesis} \section{Library of Resource Models} \label{sec:lib-rsc} In this section, the models of typical grid-forming and grid-following \CIDER[s] are constructed from the components presented in \cref{sec:lib-cmp}, based on the generic \LTP models defined in Part~I. \subsection{Circuit Configurations and Reference Frames} \label{sec:lib-rsc:frames} Usually, the grid and the power hardware are both modeled in phase coordinates, but their circuit configuration may not be the same. The grid is a four-wire system (i.e., three phase conductors plus one neutral conductor), whereas the power converters can be either four-leg or three-leg devices (i.e., with or without neutral conductor). If a \CIDER with a four-leg power converter is connected to a four-wire grid, all sequence components (i.e., positive, negative, and homopolar sequences) of voltage and current can pass in both directions.
This corresponds to \begin{equation} \mathbf{T}_{\pi|\gamma} = \mathbf{T}_{\gamma|\pi} = \operatorname{diag}(\mathbf{1}_{3}) \label{eq:trafo:grid:neutral} \end{equation} By contrast, if a \CIDER with a three-leg power converter is connected to a four-wire grid, homopolar sequences are blocked in both directions. This is represented by \begin{equation} \mathbf{T}_{\pi|\gamma} = \mathbf{T}_{\gamma|\pi} = \operatorname{diag}(\mathbf{1}_{3}) - \frac{1}{3} \mathbf{1}_{3\times3} \label{eq:trafo:grid:no_neutral} \end{equation} It is common practice to implement the control software in \emph{Direct-Quadrature} ($\textup{DQ}\xspace$) components \cite{Jrn:Blaabjerg:2006}. If the power hardware is modeled in phase ($\textup{ABC}\xspace$) coordinates, as previously mentioned, one obtains \begin{align} \mathbf{T}_{\kappa|\pi} &= \mathbf{T}_{\pi|\kappa}^{T} \label{eq:trafo:pwr-to-ctrl} \\ \mathbf{T}_{\pi|\kappa} &= \sqrt{\frac{2}{3}} \begin{bmatrix} \Cos{\theta} & -\Sin{\theta} \\ \Cos{\theta-\frac{2\pi}{3}} & -\Sin{\theta-\frac{2\pi}{3}} \\ \Cos{\theta+\frac{2\pi}{3}} & -\Sin{\theta+\frac{2\pi}{3}} \end{bmatrix} \label{eq:trafo:ctrl-to-pwr} \end{align} where $\theta$ is a given reference angle. How the reference angle is obtained depends on the type of \CIDER. Namely, it is calculated from a reference clock in grid-forming \CIDER[s], whereas a synchronization unit is needed for grid-following \CIDER[s]. In either case, it is assumed that $\theta$ is synchronized with the fundamental tone: \begin{Hypothesis}\label{hyp:src:frame} Irrespective of the type of \CIDER, the reference angle $\theta$, w.r.t. which the $\textup{DQ}\xspace$ frame is defined, is given by \begin{equation} \theta = 2\pi f_{1}t + \theta_{0} \label{eq:trafo:angle} \end{equation} where $\theta_{0}$ is a known offset.
\end{Hypothesis} \noindent If this hypothesis holds, the Fourier coefficients of the transformation matrices \eqref{eq:trafo:pwr-to-ctrl}--\eqref{eq:trafo:ctrl-to-pwr}, which are needed for the harmonic-domain model, are given as follows: \begin{align} \mathbf{T}_{\pi|\kappa,+1} &= \sqrt{\frac{2}{3}} \Exp{j \theta_0} \begin{bmatrix} \frac{1}{2} & -\frac{1}{2j} \\ \frac{1}{2} \alpha^{*} & -\frac{1}{2j} \alpha^{*} \\ \frac{1}{2} \alpha & -\frac{1}{2j} \alpha \end{bmatrix}\\ \mathbf{T}_{\pi|\kappa,-1} &= \mathbf{T}_{\pi|\kappa,+1}^{*} \end{align} where $\alpha = \Exp{j \frac{2\pi}{3}}$. As explained in Part~I, the Fourier coefficients of time-periodic matrices appear on the diagonals of the associated Toeplitz matrices in the harmonic domain. For example, the coefficients of order $h=\pm1$ appear on the first upper and lower diagonal, respectively. Accordingly, $\Hat{\mathbf{T}}_{\kappa|\pi}$ and $\Hat{\mathbf{T}}_{\pi|\kappa}$ have a block band structure, which introduces coupling between the harmonics. Having specified the reference frames, the \ctrlFF gain $\mathbf{K}_{\ctrlFF,\lambda}$ as given in \cref{hyp:cmp:ctrl:ff} can be evaluated. By substitution of \eqref{eq:trafo:pwr-to-ctrl}--\eqref{eq:trafo:ctrl-to-pwr} and \cref{hyp:src:frame} into \eqref{eq:fltr:ind:rst:ctrl} and \eqref{eq:fltr:cap:cnd:ctrl}, one can show that \begin{align} \mathbf{R}_{\lambda,\textup{DQ}\xspace} = \begin{bmatrix} R_\lambda &-2\pi f_1L_\lambda\\ 2\pi f_1 L_\lambda &R_\lambda \end{bmatrix}\\ \mathbf{G}_{\lambda,\textup{DQ}\xspace} = \begin{bmatrix} G_\lambda &-2\pi f_1 C_\lambda\\ 2\pi f_1 C_\lambda &G_\lambda \end{bmatrix} \end{align} The off-diagonal elements are also known as decoupling terms \cite{Jrn:Rocabert:2012}. \subsection{Grid-Forming Resource} \label{sec:lib-rsc:forming} \begin{figure}[t] \centering \input{Figures/Resource_Forming} \caption {% Schematic diagram of a grid-forming \CIDER with an LC filter.
} \label{fig:CIDER:forming} \end{figure} \cref{fig:CIDER:forming} shows the schematic diagram of a typical grid-forming \CIDER. Its power hardware consists of a \PWM actuator and an LC filter, and its control software of a two-stage \textup{PI}\xspace controller. The actuator is a four-leg power converter which can inject or absorb homopolar currents. This feature is of crucial importance for islanded operation, during which the grid-forming \CIDER takes the role of the slack. The state of the power hardware is given by the inductor current $\IT_{\alpha,\textup{ABC}\xspace}\in\RealNum^{3\times1}$ and the capacitor voltage $\VT_{\varphi,\textup{ABC}\xspace}\in\RealNum^{3\times1}$. The input and disturbance are the actuator voltage $\VT_{\alpha,\textup{ABC}\xspace}\in\RealNum^{3\times1}$ and grid current $\IT_{\gamma,\textup{ABC}\xspace}\in\RealNum^{3\times1}$, respectively. The output includes both state and disturbance. That is \begin{alignat}{2} \mathbf{x}_\pi(t) &= \begin{bmatrix} \IT_{\alpha,\textup{ABC}\xspace}(t)\\ \VT_{\varphi,\textup{ABC}\xspace}(t) \end{bmatrix} &~\in~\RealNum^{6\times1} \label{eq:vf:pwr:state} \\ \mathbf{u}_\pi(t) &= \VT_{\alpha,\textup{ABC}\xspace}(t) &~\in~\RealNum^{3\times1} \\ \mathbf{w}_\pi(t) &= \IT_{\gamma,\textup{ABC}\xspace}(t) &~\in~\RealNum^{3\times1} \\ \mathbf{y}_\pi(t) &= \begin{bmatrix} \mathbf{x}_\pi(t)\\ \mathbf{w}_\pi(t) \end{bmatrix} &~\in~\RealNum^{9\times1} \label{eq:vf:pwr:output} \end{alignat} The time-domain state-space model of the power hardware is obtained by combining the differential equations \eqref{eq:fltr:ind:diffeq} and \eqref{eq:fltr:cap:diffeq} of the filter stages. 
This yields \begin{align} \mathbf{A}_\pi(t) &= \begin{bmatrix} -\mathbf{L}_\alpha^{-1}\mathbf{R}_\alpha &-\mathbf{L}_\alpha^{-1}\\ \mathbf{C}_\varphi^{-1} &-\mathbf{C}_\varphi^{-1}\mathbf{G}_\varphi \end{bmatrix} \\ \mathbf{B}_\pi(t) &= \begin{bmatrix} \mathbf{L}_\alpha^{-1}\\ \mathbf{0}_{3\times3} \end{bmatrix} \\ \mathbf{E}_{\pi}(t) &= \begin{bmatrix} \mathbf{0}_{3\times3}\\ -\mathbf{C}_\varphi^{-1} \end{bmatrix} \\ \mathbf{C}_\pi(t) &= \begin{bmatrix} \operatorname{diag}(\mathbf{1}_6)\\ \mathbf{0}_{3\times6} \end{bmatrix} \\ \mathbf{D}_{\pi}(t) &= \mathbf{0}_{9\times3} \\ \mathbf{F}_{\pi}(t) &= \begin{bmatrix} \mathbf{0}_{6\times3}\\ \operatorname{diag}(\mathbf{1}_3) \end{bmatrix} \end{align} The sizes of these matrices follow directly from \eqref{eq:vf:pwr:state}--\eqref{eq:vf:pwr:output}. Note that these matrices are time-invariant, which means that only the Fourier coefficient for $h=0$ is nonzero. For instance: \begin{equation} \mathbf{A}_\pi(t) = \mathbf{A}_{\pi,0} \end{equation} The same holds for the other matrices of the state-space model. Accordingly, the power hardware is an \LTI system, which is a particular case of an \LTP system. Recall from \cref{hyp:cmp:ctrl} that each controller stage consists of one \textup{PI}\xspace controller (i.e., for \ctrlFB control) and two \ctrlP controllers (i.e., for \ctrlFF and \ctrlFT control). Since the control software is composed of \textup{PI}\xspace controllers, its state is given by the temporal integrals of the errors w.r.t. the inductor current $\Delta\IT_{\alpha,\textup{DQ}\xspace}\in\RealNum^{2\times1}$ and the capacitor voltage $\Delta\VT_{\varphi,\textup{DQ}\xspace}\in\RealNum^{2\times1}$. Its input and output are defined by the interconnection with the power hardware as shown in \cref{fig:CIDER:forming}. The disturbance is the reference voltage $\VT^*_{\varphi,\textup{DQ}\xspace}\in\RealNum^{2\times1}$ of the outer controller stage. 
Accordingly \begin{alignat}{2} \mathbf{x}_\kappa(t) &= \int \begin{bmatrix} \Delta\IT_{\alpha,\textup{DQ}\xspace}(t) \\ \Delta\VT_{\varphi,\textup{DQ}\xspace}(t) \end{bmatrix} dt &~\in~\RealNum^{4\times1} \label{eq:vf:ctrl:state} \\ \mathbf{u}_\kappa(t) &= \begin{bmatrix} \IT_{\alpha,\textup{DQ}\xspace}(t) \\ \VT_{\varphi,\textup{DQ}\xspace}(t) \\ \IT_{\gamma,\textup{DQ}\xspace}(t) \end{bmatrix} &~\in~\RealNum^{6\times1} \\ \mathbf{w}_\kappa(t) &= \VT^*_{\varphi,\textup{DQ}\xspace}(t) &~\in~\RealNum^{2\times1} \\ \mathbf{y}_\kappa(t) &= \VT_{\alpha,\textup{DQ}\xspace} (t) &~\in~\RealNum^{2\times1} \label{eq:vf:ctrl:output} \end{alignat} The time-domain state-space model of the control software is found by combining the differential equations \eqref{eq:ctrl:ind:law} and \eqref{eq:ctrl:cap:law} of the controller stages. This gives \begin{align} \mathbf{A}_\kappa(t) &= \begin{bmatrix} \zero{2}{2} & \FBoT{\varphi}\\ \zero{2}{2} & \zero{2}{2} \end{bmatrix} \\ \mathbf{B}_\kappa(t) &= \begin{bmatrix} -\eye{2} &-\mathbf{K}_{\ctrlFB,\varphi} & \mathbf{K}_{\ctrlFT,\varphi}\\ \zero{2}{2} &-\eye{2} & \zero{2}{2} \end{bmatrix} \\ \mathbf{E}_{\kappa}(t) &= \begin{bmatrix} \FFpFB{\varphi}\\ \eye{2} \end{bmatrix} \\ \mathbf{C}_\kappa(t) &= \begin{bmatrix} \FBoT{\alpha} & (\FFpFB{\alpha})\FBoT{\varphi} \end{bmatrix} \\ \mathbf{D}_{\kappa}(t) &= \begin{bmatrix} -\mathbf{K}_{\ctrlFB,\alpha} & (\mathbf{D}_{\kappa})_2 & (\mathbf{D}_{\kappa})_3 \end{bmatrix} \\ (\mathbf{D}_{\kappa})_2 &= \mathbf{K}_{\ctrlFT,\alpha}-(\FFpFB{\alpha})\mathbf{K}_{\ctrlFB,\varphi} \\ (\mathbf{D}_{\kappa})_3 &= (\FFpFB{\alpha})\mathbf{K}_{\ctrlFT,\varphi} \\ \mathbf{F}_{\kappa}(t) &= (\FFpFB{\alpha})(\FFpFB{\varphi}) \end{align} The sizes of these matrices follow directly from \eqref{eq:vf:ctrl:state}--\eqref{eq:vf:ctrl:output}. Evidently, the control software is an \LTI system, too. 
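As a numerical illustration, the state matrix $\mathbf{A}_\pi$ of the grid-forming power hardware can be assembled directly from the per-phase filter parameters of \cref{hyp:cmp:fltr}. The parameter values below are placeholders for illustration only, not the values used later in the validation.

```python
import numpy as np

# Placeholder per-phase LC-filter parameters (illustrative, not the paper's).
R_a, L_a = 0.1, 1.0e-3      # inductive stage alpha: resistance, inductance
G_f, C_f = 1.0e-4, 50e-6    # capacitive stage phi: conductance, capacitance

I3 = np.eye(3)
R, L = R_a * I3, L_a * I3   # Hypothesis 1: diagonal compound parameters
G, C = G_f * I3, C_f * I3
Li, Ci = np.linalg.inv(L), np.linalg.inv(C)

# State matrix of the power hardware (states: inductor current, cap. voltage).
A_pi = np.block([[-Li @ R, -Li],
                 [ Ci,     -Ci @ G]])

# A_pi is constant, so only its h = 0 Fourier coefficient is nonzero, and
# the open-loop filter is asymptotically stable for R_a, G_f > 0.
eig_real = np.linalg.eigvals(A_pi).real
```

Both the power-hardware and the control-software matrices are constant, i.e., the closed-loop model of the grid-forming \CIDER is \LTI in the $\textup{DQ}\xspace$ frame.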
Indeed, this is one of the reasons for the popularity of the \textup{DQ}\xspace frame since its invention almost a century ago \cite{Jrn:Park:1929}. In grid-forming \CIDER[s], the reference angle $\theta$ is computed from the frequency setpoint $f_{\sigma}$ through integration over time \begin{equation} \theta = 2\pi\int f_{\sigma}\,dt = 2\pi f_{\sigma}t \end{equation} Hence, in line with \cref{hyp:src:frame}, the following hypothesis is made. \begin{Hypothesis}\label{hyp:cmp:trafo:forming} In steady state, the frequency setpoints of all grid-forming \CIDER[s] are equal to the fundamental frequency: \begin{equation} f_{\sigma}=f_{1} \end{equation} \end{Hypothesis} \noindent Indeed, if the grid-forming \CIDER[s] attempted to impose incompatible frequencies on the power system, no steady-state equilibrium could exist. As discussed in Part~I, the setpoints for the resource-level controllers are determined by system-level controllers. Since system-level controllers act on a substantially slower timescale than resource-level ones, the steady-state values of the setpoints can be determined in an independent analysis (i.e., prior to the \HPF study). The reference voltage $\VT^*_{\varphi,\textup{DQ}\xspace}$ is calculated from the voltage setpoint $V_\sigma$ as follows: \begin{Hypothesis}\label{hyp:rsc:former:ref} The reference voltage for the grid-forming \CIDER[s] is calculated as follows \begin{equation} \VT^*_{\varphi,\textup{DQ}\xspace}(t) = \sqrt{\frac{3}{2}} \begin{bmatrix} V_\sigma\\ 0 \end{bmatrix} \end{equation} where $V_\sigma$ is the setpoint for the peak voltage. \end{Hypothesis} \subsection{Grid-Following Resource} \label{sec:lib-rsc:following} \begin{figure}[t] \centering \input{Figures/Resource_Following} \caption {% Schematic diagram of a grid-following \CIDER with an LCL filter. } \label{fig:follow:schem} \end{figure} \cref{fig:follow:schem} shows the schematic diagram of a typical grid-following \CIDER. 
Its power hardware consists of a \PWM actuator and an LCL filter, and its control software of a three-stage \textup{PI}\xspace controller. The actuator is a three-leg power converter, which is commonly used for grid-following \CIDER[s]. The state of the power hardware is described by the inductor currents $\IT_{\alpha,\textup{ABC}\xspace}\in\RealNum^{3\times1}$ and $\IT_{\gamma,\textup{ABC}\xspace}\in\RealNum^{3\times1}$ and the capacitor voltage $\VT_{\varphi,\textup{ABC}\xspace}\in\RealNum^{3\times1}$. The input is the actuator voltage $\VT_{\alpha,\textup{ABC}\xspace}\in\RealNum^{3\times1}$ and the disturbance is the grid voltage $\VT_{\gamma,\textup{ABC}\xspace}\in\RealNum^{3\times1}$. The output consists of the state and the disturbance. Formally \begin{alignat}{2} \mathbf{x}_\pi(t) &= \begin{bmatrix} \IT_{\alpha,\textup{ABC}\xspace}(t)\\ \VT_{\varphi,\textup{ABC}\xspace}(t)\\ \IT_{\gamma,\textup{ABC}\xspace}(t) \end{bmatrix} &~\in~\RealNum^{9\times1} \label{eq:pq:pwr:state} \\ \mathbf{u}_\pi(t) &= \VT_{\alpha,\textup{ABC}\xspace}(t) &~\in~\RealNum^{3\times1} \\ \mathbf{w}_\pi(t) &= \VT_{\gamma,\textup{ABC}\xspace}(t) &~\in~\RealNum^{3\times1} \\ \mathbf{y}_\pi(t) &= \begin{bmatrix} \mathbf{x}_\pi(t)\\ \mathbf{w}_\pi(t) \end{bmatrix} &~\in~\RealNum^{12\times1} \label{eq:pq:pwr:output} \end{alignat} The matrices of the state-space model are obtained as \begin{align} \mathbf{A}_\pi(t) &= \begin{bmatrix} -\mathbf{L}_\alpha^{-1}\mathbf{R}_\alpha &-\mathbf{L}_\alpha^{-1} &\zero{3}{3} \\ \mathbf{C}_\varphi^{-1} &-\mathbf{C}_\varphi^{-1}\mathbf{G}_\varphi &-\mathbf{C}_\varphi^{-1} \\ \zero{3}{3} &\mathbf{L}_\gamma^{-1} &-\mathbf{L}_\gamma^{-1}\mathbf{R}_\gamma \end{bmatrix} \\ \mathbf{B}_\pi(t) &= \begin{bmatrix} \mathbf{L}_\alpha^{-1}\\ \mathbf{0}_{6\times3} \end{bmatrix} \\ \mathbf{E}_{\pi}(t) &= \begin{bmatrix} \mathbf{0}_{6\times3}\\ -\mathbf{L}_\gamma^{-1} \end{bmatrix} \\ \mathbf{C}_\pi(t) &= \begin{bmatrix} \operatorname{diag}(\mathbf{1}_9)\\
\mathbf{0}_{3\times9} \end{bmatrix} \\ \mathbf{D}_{\pi}(t) &= \mathbf{0}_{12\times3} \\ \mathbf{F}_{\pi}(t) &= \begin{bmatrix} \mathbf{0}_{9\times3}\\ \operatorname{diag}(\mathbf{1}_3) \end{bmatrix} \end{align} Their sizes follow straightforwardly from \eqref{eq:pq:pwr:state}--\eqref{eq:pq:pwr:output}. Note that these matrices are time-invariant as in the grid-forming case. Analogously, the state-space variables of the control software are given by \begin{alignat}{2} \mathbf{x}_\kappa(t) &= \int \begin{bmatrix} \Delta\IT_{\alpha,\textup{DQ}\xspace}(t) \\ \Delta\VT_{\varphi,\textup{DQ}\xspace}(t) \\ \Delta\IT_{\gamma,\textup{DQ}\xspace}(t) \end{bmatrix} dt &~\in~\RealNum^{6\times1} \label{eq:pq:ctrl:state} \\ \mathbf{u}_\kappa(t) &= \begin{bmatrix} \IT_{\alpha,\textup{DQ}\xspace}(t) \\ \VT_{\varphi,\textup{DQ}\xspace}(t) \\ \IT_{\gamma,\textup{DQ}\xspace}(t) \\ \VT_{\gamma,\textup{DQ}\xspace}(t) \end{bmatrix} &~\in~\RealNum^{8\times1} \\ \mathbf{w}_\kappa(t) &= \IT^*_{\gamma,\textup{DQ}\xspace}(t) &~\in~\RealNum^{2\times1} \\ \mathbf{y}_\kappa(t) &= \VT_{\alpha,\textup{DQ}\xspace}(t) &~\in~\RealNum^{2\times1} \label{eq:pq:ctrl:output} \end{alignat} The matrices of the state-space model are obtained as \begin{align} \mathbf{A}_\kappa(t) &= \begin{bmatrix} \mathbf{0}_{2\times2} & \frac{\mathbf{K}_{\ctrlFB,\varphi}}{T_{\ctrlFB,\varphi}} & (\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi})\frac{\mathbf{K}_{\ctrlFB,\gamma}}{T_{\ctrlFB,\gamma}}\\ \mathbf{0}_{2\times2} & \mathbf{0}_{2\times2} & \frac{\mathbf{K}_{\ctrlFB,\gamma}}{T_{\ctrlFB,\gamma}}\\ \mathbf{0}_{2\times2} & \mathbf{0}_{2\times2} & \mathbf{0}_{2\times2} \end{bmatrix} \\ \mathbf{B}_\kappa(t) &= \begin{bmatrix} (\mathbf{B}_\kappa)_{11} & -\mathbf{K}_{\ctrlFB,\varphi} & (\mathbf{B}_\kappa)_{13} & (\mathbf{B}_\kappa)_{14}\\ \mathbf{0}_{2\times2} & (\mathbf{B}_\kappa)_{22} & -\mathbf{K}_{\ctrlFB,\gamma} & \mathbf{K}_{\ctrlFT,\gamma}\\ \mathbf{0}_{2\times2} & \mathbf{0}_{2\times2} &
(\mathbf{B}_\kappa)_{33} & \mathbf{0}_{2\times2} \end{bmatrix}\\ (\mathbf{B}_\kappa)_\mathit{ii} &= -\operatorname{diag}(\mathbf{1}_{2}) ~ \forall i\\ (\mathbf{B}_\kappa)_{13} &= \mathbf{K}_{\ctrlFT,\varphi} - (\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi})\mathbf{K}_{\ctrlFB,\gamma}\\ (\mathbf{B}_\kappa)_{14} &= (\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi})\mathbf{K}_{\ctrlFT,\gamma}\\ \mathbf{E}_{\kappa}(t) &= \begin{bmatrix} (\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi})(\mathbf{K}_{\ctrlFB,\gamma}+\mathbf{K}_{\ctrlFF,\gamma})\\ \mathbf{K}_{\ctrlFB,\gamma}+\mathbf{K}_{\ctrlFF,\gamma}\\ \operatorname{diag}(\mathbf{1}_2) \end{bmatrix} \\ \mathbf{C}_\kappa(t) &= \begin{bmatrix} \frac{\mathbf{K}_{\ctrlFB,\alpha}}{T_{\ctrlFB,\alpha}} & (\mathbf{K}_{\ctrlFB,\alpha}+\mathbf{K}_{\ctrlFF,\alpha})\frac{\mathbf{K}_{\ctrlFB,\varphi}}{T_{\ctrlFB,\varphi}} & (\mathbf{C}_\kappa)_3 \end{bmatrix}\\ (\mathbf{C}_\kappa)_3 &= (\mathbf{K}_{\ctrlFB,\alpha}+\mathbf{K}_{\ctrlFF,\alpha})(\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi}) \frac{\mathbf{K}_{\ctrlFB,\gamma}}{T_{\ctrlFB,\gamma}} \\ \mathbf{D}_\kappa(t) &= \begin{bmatrix} -\mathbf{K}_{\ctrlFB,\alpha} & (\mathbf{D}_\kappa)_2 & (\mathbf{D}_\kappa)_3 & (\mathbf{D}_\kappa)_4 \end{bmatrix}\\ (\mathbf{D}_\kappa)_2 &= \mathbf{K}_{\ctrlFT,\alpha}-(\mathbf{K}_{\ctrlFB,\alpha}+\mathbf{K}_{\ctrlFF,\alpha})\mathbf{K}_{\ctrlFB,\varphi}\\ (\mathbf{D}_\kappa)_3 &= (\mathbf{K}_{\ctrlFB,\alpha}+\mathbf{K}_{\ctrlFF,\alpha})\hspace{-1mm} \left(\hspace{-1.5mm} \begin{aligned} &\mathbf{K}_{\ctrlFT,\varphi}\\ - &(\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi})\mathbf{K}_{\ctrlFB,\gamma} \end{aligned}\hspace{-0.5mm} \right)\hspace{-2mm}\\ (\mathbf{D}_\kappa)_4 &= (\mathbf{K}_{\ctrlFB,\alpha}+\mathbf{K}_{\ctrlFF,\alpha})(\mathbf{K}_{\ctrlFB,\varphi}+\mathbf{K}_{\ctrlFF,\varphi}) \mathbf{K}_{\ctrlFB,\gamma} \\ \mathbf{F}_\kappa(t) &= 
\prod_{\lambda\in\{\alpha,\varphi,\gamma\}}\left\{\mathbf{K}_{\ctrlFB,\lambda}+\mathbf{K}_{\ctrlFF,\lambda}\right\} \end{align} Their sizes follow straightforwardly from \eqref{eq:pq:ctrl:state}--\eqref{eq:pq:ctrl:output}. Note that these matrices are also time-invariant. In grid-following \CIDER[s], the reference angle $\theta$ needed for the \textup{DQ}\xspace transform is provided by a synchronization unit, usually a \emph{Phase-Locked Loop} (\PLL). The reference current $\IT^*_{\gamma,\textup{DQ}\xspace}$ is computed in order to track the power setpoint $S_\sigma=P_\sigma+jQ_\sigma$ at the fundamental frequency. Without imposing any further conditions on $\theta$, $\IT^*_{\gamma,\textup{DQ}\xspace}$ is given by \begin{equation} \IT^*_{\gamma,\textup{DQ}\xspace}(t) = \frac{1}{v_{\gamma,\textup{D}\xspace}^2(t)+v_{\gamma,\textup{Q}\xspace}^2(t)}\hspace{-0.5mm} \begin{bmatrix} v_{\gamma,\textup{D}\xspace}(t) & -v_{\gamma,\textup{Q}\xspace}(t)\\ v_{\gamma,\textup{Q}\xspace}(t) & \phantom{+}v_{\gamma,\textup{D}\xspace}(t) \end{bmatrix}\hspace{-1.5mm} \begin{bmatrix} P_\sigma\\ Q_\sigma \end{bmatrix} \label{eq:ref:following:general} \end{equation} In the vast majority of cases, synchronization units in general and \PLL[s] in particular are designed to lock to the fundamental positive-sequence component of the grid voltage (e.g., \cite{Bk:Teodorescu:2011}). This working principle leads to the following hypothesis: \begin{Hypothesis} \label{hyp:rsc:follower:synch} The synchronization units of the grid-following \CIDER[s] lock to the fundamental positive-sequence component of the grid voltage. Therefore, in steady state it holds that \begin{equation} V_{\gamma,\textup{Q}\xspace,0} = \frac{1}{T}\int v_{\gamma,\textup{Q}\xspace}(t)\,dt = 0 \end{equation} \end{Hypothesis} \noindent For instance, this can be achieved by a closed-loop controller which adjusts $\theta$ in order to regulate $v_{\gamma,\textup{Q}\xspace}$ to 0 on average.
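A direct implementation of the reference-current law \eqref{eq:ref:following:general} is straightforward. The sketch below uses scalar DQ voltages and an illustrative function name.

```python
import numpy as np

def following_current_ref(v_d, v_q, p_set, q_set):
    """Reference current of eq. (ref:following:general) in the DQ frame."""
    rot = np.array([[v_d, -v_q],
                    [v_q,  v_d]])
    return rot @ np.array([p_set, q_set]) / (v_d**2 + v_q**2)
```

When the synchronization unit has locked (i.e., $v_{\gamma,\textup{Q}\xspace}\approx0$), this expression reduces to the simplified law \eqref{eq:ref:following:simplified}.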
As required by power quality standards (e.g., \cite{Std:BSI-EN-50160:2000}), the grid voltages have to be maintained balanced and sinusoidal within specified limits\footnote{By contrast, the grid currents may be subject to unbalances and harmonics.}. Under these conditions, it follows that \begin{Hypothesis}\label{hyp:rsc:follower:harms} The time-variant signal content of $v_{\gamma,\textup{D}\xspace}(t)$ and $v_{\gamma,\textup{Q}\xspace}(t)$, as given by $\xi_\textup{D}\xspace(t)$ and $\xi_\textup{Q}\xspace(t)$ below, is low: \begin{alignat}{2} v_{\gamma,\textup{D}\xspace}(t) &= V_{\gamma,\textup{D}\xspace,0}(1+\xi_\textup{D}\xspace(t)),~ & \Abs{\xi_\textup{D}\xspace(t)} &\ll 1\\ v_{\gamma,\textup{Q}\xspace}(t) &= V_{\gamma,\textup{Q}\xspace,0}(1+\xi_\textup{Q}\xspace(t)),~ & \Abs{\xi_\textup{Q}\xspace(t)} &\ll 1 \end{alignat} \end{Hypothesis} \noindent As a consequence of \cref{hyp:rsc:follower:synch,hyp:rsc:follower:harms}, $v_{\gamma,\textup{Q}\xspace}(t)$ can be neglected w.r.t. $v_{\gamma,\textup{D}\xspace}(t)$ in \eqref{eq:ref:following:general}: \begin{equation} \IT^*_{\gamma,\textup{DQ}\xspace}(t) = \frac{1}{v_{\gamma,\textup{D}\xspace}(t)} \begin{bmatrix} P_\sigma\\ Q_\sigma \end{bmatrix} \label{eq:ref:following:simplified} \end{equation} In order to calculate the harmonic-domain closed-loop transfer function of the \CIDER, the Fourier coefficients of $\IT^*_{\gamma,\textup{DQ}\xspace}$ are needed. Unfortunately, the exact expressions which relate the Fourier coefficients of the reciprocal $v^{-1}_{\gamma,\textup{D}\xspace}$ and those of $v_{\gamma,\textup{D}\xspace}$ are complicated \cite{Jrn:Edrei:1953}, and their evaluation is computationally intensive. 
However, taking advantage of \cref{hyp:rsc:follower:harms}, the following approximation is made: \begin{Hypothesis}\label{hyp:rsc:follower:approx} For the calculation of the reference current in the grid-following \CIDER[s], the reciprocal of the grid voltage is approximated by a second-order Taylor series \begin{equation} \frac{1}{v_{\gamma,\textup{D}\xspace}(t)} \approx \frac{1}{V_{\gamma,\textup{D}\xspace,0}}\left(1-\xi_\textup{D}\xspace(t)+\xi_\textup{D}\xspace^2(t)\right) \end{equation} \end{Hypothesis} \noindent Let $\Psi_{h}$ be the Fourier coefficients of the Taylor approximation of the reciprocal $v^{-1}_{\gamma,\textup{D}\xspace}(t)$: \begin{align} \frac{1}{v_{\gamma,\textup{D}\xspace}(t)} &\approx \sum_{h}\Psi_{h}\exp(jh2\pi f_1 t) \label{eq:ref:following:invVd} \end{align} These Fourier coefficients are obtained by substituting \begin{equation} \xi_{\textup{D}\xspace}(t) = \sum_{h\neq0}\frac{V_{\gamma,\textup{D}\xspace,h}}{V_{\gamma,\textup{D}\xspace,0}}\Exp{jh2\pi f_1 t} \end{equation} into the Taylor series, and expanding the second-order term. This yields \begin{align} \Psi_{h} &= \left\{ \begin{array}{cl} \begin{aligned} \frac{1}{V_{\gamma,\textup{D}\xspace,0}} + \sum_{i\neq0}\frac{\Abs{V_{\gamma,\textup{D}\xspace,i}}^{2}}{V_{\gamma,\textup{D}\xspace,0}^{3}} \end{aligned} & \text{$h=0$} \\ \begin{aligned} -\frac{V_{\gamma,\textup{D}\xspace,h}}{V_{\gamma,\textup{D}\xspace,0}^2} + \sum_{i\neq0}\frac{V_{\gamma,\textup{D}\xspace,i}V_{\gamma,\textup{D}\xspace,h-i}}{V_{\gamma,\textup{D}\xspace,0}^3} \end{aligned} & \text{otherwise} \end{array} \right. \label{eq:ref:following:invVd:fourier} \end{align} Note that, since the Taylor series in \cref{hyp:rsc:follower:approx} is of second order, the approximation of the $PQ$ control law \eqref{eq:ref:following:simplified} is a nonlinear function of the Fourier coefficients.
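The coefficients \eqref{eq:ref:following:invVd:fourier} are simple to evaluate numerically. The sketch below stores the $V_{\gamma,\textup{D}\xspace,h}$ in a dictionary keyed by the harmonic order $h$ (with $V_{-h}=V_{h}^{*}$ for a real signal); the function name is illustrative.

```python
import numpy as np

def psi_coeffs(V, H):
    """Fourier coefficients Psi_h of the second-order Taylor expansion
    of 1/v_D(t), cf. eq. (ref:following:invVd:fourier)."""
    V0 = V[0].real
    # h = 0: the quadratic term contributes the sum of |V_i|^2 over i != 0.
    psi = {0: 1 / V0 + sum(abs(V[i])**2 for i in V if i != 0) / V0**3}
    for h in range(-H, H + 1):
        if h == 0:
            continue
        # convolution of the distortion with itself (terms with xi_0 vanish)
        conv = sum(V[i] * V.get(h - i, 0) for i in V if i not in (0, h))
        psi[h] = -V.get(h, 0) / V0**2 + conv / V0**3
    return psi
```

For a weakly distorted voltage, the coefficients agree closely with the DFT of the exact reciprocal, as expected from \cref{hyp:rsc:follower:harms}.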
\subsection{Accordance with the Proposed Generic Model} The previously discussed models are perfectly in accordance with the generic model presented in Part~I of this paper, provided that the listed hypotheses hold. Therefore, the suitability of the generic model to represent different types of \CIDER[s] can be confirmed at this point. \section*{Nomenclature} \begin{center} \input{Tables/Nomenclature_Components} \end{center} \begin{center} \input{Tables/Nomenclature_Resources} \end{center} \begin{center} \input{Tables/Nomenclature_Validation} \end{center} \section{Validation of the Proposed Method\\ on Individual Resources} \label{sec:val-rsc} \subsection{Methodology and Key Performance Indicators} \label{sec:val-rsc:method} \begin{figure}[t] \centering \input{Figures/Setup_Resources} \caption {% Test setup for the validation of the \HPF method on individual \CIDER[s]. The resource is represented by a detailed state-space model (see \cref{sec:lib-rsc}), and the power system by a Th{\'e}venin equivalent (see \cref{tab:TE:parameters,tab:TE:harmonics}). } \label{fig:val-rsc:setup} \end{figure} \begin{table}[t] \centering \caption{Short-Circuit Parameters of the Th{\'e}venin Equivalent} \label{tab:TE:parameters} \input{Tables/Thevenin_Parameters} \end{table} \begin{table}[t] \centering \caption{Harmonic Voltages of the Th{\'e}venin Equivalent (see \cite{Std:BSI-EN-50160:2000}).} \label{tab:TE:harmonics} \input{Tables/Thevenin_Harmonics} \end{table} For the validation of the proposed modelling framework for \CIDER[s], the test setup shown in \cref{fig:val-rsc:setup} is used. It consists of two parts: a \emph{Th{\'e}venin Equivalent} (\TE) that represents the grid, and a detailed model of the \CIDER under investigation. The \TE impedance is characterized by typical short-circuit parameters of a power distribution grid, which are given in \cref{tab:TE:parameters}. The \TE voltage source includes harmonics, whose levels are given in \cref{tab:TE:harmonics}. 
These levels are set according to the limits specified in the standard BS-EN-50160:2000 \cite{Std:BSI-EN-50160:2000}. In line with this standard, harmonics up to order 25 (i.e., 1.25~kHz) are considered in the \HPF study. Note that a \TE voltage source with the maximum permissible distortion at each harmonic frequency corresponds to a stressed grid. This condition is deemed most suitable for the validation of the \HPF method, because it poses a challenge to the modelling framework. Moreover, it is crucial that the results are reliable when the system is under stress. The exemplary parameters for the grid-forming and grid-following \CIDER[s] are listed in \cref{tab:CIDER-forming:parameters,tab:CIDER-following:parameters}, respectively. The filter parameters were derived following the design rules proposed in \cite{Jrn:Liserre:2005}. The setpoints are $V_\sigma=241.5\,\text{V-\RMS}$ and $f_\sigma=50\,\text{Hz}$ for the grid-forming \CIDER, and $P_\sigma=50\,\text{kW}$ and $Q_\sigma=16.4\,\text{kVAr}$ for the grid-following one. The \HPF models and method discussed in Part~I of this paper were implemented in Matlab, and compared against \emph{Time-Domain Simulations} (\TDS) with averaged models of the \CIDER[s] in Simulink (recall \cref{hyp:cmp:act}). The \TDS are stopped once steady state is reached, and the spectra are calculated using the \emph{Discrete Fourier Transform} (\DFT) on a time window composed of the last 5 periods of the fundamental frequency of the obtained signals. All analyses are performed in normalized units w.r.t. the base power $P_{b}=10\,\text{kW}$ and the base voltage $V_{b}=230\,\text{V-\RMS}$.
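The spectrum-extraction step described above can be sketched in a few lines. The sampling rate and the synthetic test waveform below are illustrative, not taken from the actual simulations.

```python
import numpy as np

f1, fs = 50.0, 10.0e3            # fundamental and sampling frequency (example)
n_per = round(fs / f1)           # samples per fundamental period
t = np.arange(20 * n_per) / fs   # a steady-state record of 20 periods

# Synthetic waveform: fundamental plus a 5th harmonic of amplitude 0.05.
x = np.cos(2*np.pi*f1*t) + 0.05*np.cos(2*np.pi*5*f1*t + 0.3)

win = x[-5 * n_per:]             # last five fundamental periods
X = np.fft.fft(win) / len(win)   # two-sided DFT coefficients
# With five periods in the window, harmonic h falls into bin 5*h; the
# one-sided amplitude is twice the bin magnitude (coherent sampling).
amp_1 = 2 * abs(X[5 * 1])
amp_5 = 2 * abs(X[5 * 5])
```

Because the window spans an integer number of fundamental periods, the harmonic phasors are recovered without spectral leakage.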
\begin{table}[t] \centering \caption {% Parameters of the Grid-Forming Resource\linebreak (Rated Power $100\,\text{kVA}$) } \label{tab:CIDER-forming:parameters} \input{Tables/Resource_Forming_Parameters} \end{table} \begin{table}[t] \centering \caption {% Parameters of the Grid-Following Resource\linebreak (Rated Power $60\,\text{kVA}$) } \label{tab:CIDER-following:parameters} \input{Tables/Resource_Following_Parameters} \end{table} In order to assess the accuracy and performance of the proposed method, suitable \emph{Key Performance Indicators} (\KPI[s]) have to be defined. The accuracy is quantified by the errors of the harmonic phasors obtained using the \HPF method w.r.t. the \DFT spectra of the time-domain signals. Let $\mathbf{X}_{h}$ denote the Fourier coefficient of a polyphase electrical quantity (i.e., voltage or current). The \KPI[s] are defined as follows: \begin{align} e_{\textup{abs}}(\mathbf{X}_{h}) &\coloneqq \max_{p} \Abs{\Abs{X_{h,p,\HPF}}-\Abs{X_{h,p,\TDS}} }\\ e_{\textup{arg}}(\mathbf{X}_{h}) &\coloneqq \max_{p} \Abs{ \angle X_{h,p,\HPF}- \angle X_{h,p,\TDS} } \end{align} That is, $e_{\textup{abs}}(\mathbf{X}_{h})$ and $e_{\textup{arg}}(\mathbf{X}_{h})$ are the maximum absolute errors in magnitude and phase, respectively, over all phases $p\in\Set{P}$. \subsection{Results and Discussion} \label{sec:val-rsc:results} The results for the grid-forming and grid-following \CIDER[s] are shown in \cref{fig:resource:results:CIDER-forming,fig:resource:results:CIDER-following}, respectively. For the sake of simplicity, only the controlled quantities are shown: that is, the output voltage of the grid-forming \CIDER, and the output current of the grid-following \CIDER. The spectra on the left-hand sides of \cref{fig:resource:results:CIDER-forming,fig:resource:results:CIDER-following} show that the Fourier coefficients obtained using \HPF and \TDS are congruent in both magnitude and phase. This is confirmed by the error plots on the right-hand sides of the figures.
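A minimal sketch of the two \KPI[s] is given below; the three-phase phasors are synthetic placeholders standing in for \HPF and \TDS results, and the phase difference is additionally wrapped to $(-\pi,\pi]$, a practical detail that the definition above leaves implicit.

```python
import numpy as np

def e_abs(X_hpf, X_tds):
    """Maximum absolute magnitude error over all phases (p.u.)."""
    return float(np.max(np.abs(np.abs(X_hpf) - np.abs(X_tds))))

def e_arg(X_hpf, X_tds):
    """Maximum absolute phase error over all phases (rad), wrapped to (-pi, pi]."""
    d = np.angle(X_hpf) - np.angle(X_tds)
    return float(np.max(np.abs(np.angle(np.exp(1j * d)))))

# Synthetic three-phase phasors (phases A, B, C) of one harmonic, in p.u.
X_tds = np.array([1.00 * np.exp(-0.10j), 0.98 * np.exp(-2.19j), 1.01 * np.exp(2.09j)])
X_hpf = X_tds * (1 + 3e-5) * np.exp(1e-3j)   # small deviation, as a stand-in for HPF results

print(e_abs(X_hpf, X_tds))               # ~3e-5 p.u.
print(np.degrees(e_arg(X_hpf, X_tds)))   # ~0.057 deg
```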
The maximum errors are $e_{\textup{abs}}(\mathbf{V}_{5})=3.01$E-5~p.u. and $e_{\textup{arg}}(\mathbf{V}_{5})=0.06$~deg for the grid-forming resource, and $e_{\textup{abs}}(\mathbf{I}_{7})=4.95$E-4~p.u. and $e_{\textup{arg}}(\mathbf{I}_{25})=0.46$~deg for the grid-following one. These values are below the accuracy of standard measurement equipment (i.e., they are negligible in practice). \begin{figure}[t] \centering \subfloat[] {% \centering \includegraphics[width=1\linewidth]{Figures/Converter_Forming_Results} \label{fig:resource:results:CIDER-forming} } \subfloat[] {% \centering \includegraphics[width=1\linewidth]{Figures/Converter_Following_Results} \label{fig:resource:results:CIDER-following} } \caption {% Results of the validation on individual grid-forming (\ref{fig:resource:results:CIDER-forming}) and grid-following (\ref{fig:resource:results:CIDER-following}) \CIDER[s]. The plots on the left-hand side show the spectra (for phase A), and the ones on the right-hand side the errors. } \label{fig:resource:results} \end{figure} \section{Validation of the Proposed Method\\on an Entire System} \label{sec:val-sys} \subsection{Methodology and Key Performance Indicators} \label{sec:val-sys:method} Lastly, the proposed \HPF method is applied to the test system shown in \cref{fig:grid:schematic}, which is adapted from the CIGR{\'E}\xspace low-voltage benchmark microgrid \cite{Rep:2014:CIGRE}. That is, the \HPF problem is formulated for the complete system model, and solved numerically using the Newton-Raphson method (see Section~V in Part~I). The test system is characterized as follows. The substation is located in node N1. Its short-circuit parameters, which include both the substation transformer and the upstream grid, are listed in \cref{tab:TE:parameters}. The lines are built from underground cables, whose sequence parameters are given in \cref{tab:grid:parameters}.
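As an aside, the sequence parameters of a transposed, balanced line relate to its phase-domain impedance matrix via the Fortescue transform. The sketch below illustrates this conversion with made-up impedance values, not those of \cref{tab:grid:parameters}.

```python
import numpy as np

# Fortescue transform: sequence impedances (zero, positive, negative) to the
# 3x3 phase-domain impedance matrix. Numeric values are illustrative only.
a = np.exp(2j * np.pi / 3)
A = np.array([[1, 1,    1],
              [1, a**2, a],
              [1, a,    a**2]])                  # Fortescue matrix

Z_seq = np.diag([0.32 + 0.39j, 0.08 + 0.07j, 0.08 + 0.07j])  # ohm (z0, z1, z2)
Z_phase = A @ Z_seq @ np.linalg.inv(A)           # phase-domain impedance matrix

print(np.round(Z_phase, 4))
```

For a balanced line with $z_1 = z_2$, the resulting matrix has diagonal entries $(z_0 + 2z_1)/3$ and off-diagonal entries $(z_0 - z_1)/3$.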
Note that, while the proposed \HPF method can treat frequency-dependent cable parameters (see Section~III in Part~I), the parameters of the benchmark microgrid are considered to be frequency-independent. A preliminary analysis was conducted with \EMTP-RV in order to confirm that this approximation holds well over the frequency range under consideration (i.e., $\leqslant$1.25\,kHz). For further details, please see Appendix~\ref{app:cable-parameters}. Five \CIDER[s] are connected to the ends of the side feeders (i.e., in nodes N11 and N15-18): one grid-forming and four grid-following ones. Their parameters are the same as for the resource validation (see \cref{tab:CIDER-forming:parameters,tab:CIDER-following:parameters}). Additionally, unbalanced wye-connected constant-impedance loads are connected at nodes N19-22. The unbalance of a load is expressed by phase weights, which indicate the distribution of the load among the phases. The setpoints and parameters of the grid-following resources are given in \cref{tab:resources:references}, and the setpoints of the grid-forming resource are shown in \cref{fig:grid:schematic}. Notably, the load unbalance is set such that the resulting voltage unbalance does not exceed the limits specified in \cite{Std:BSI-EN-50160:2000} (i.e., $|\mathrm{V}_{1,-}|\leqslant 2\% \cdot |\mathrm{V}_{1,+}|$). \begin{figure}[t] % \centering \input{Figures/Benchmark_System} \caption {% Schematic diagram of the test system, which is based on the CIGR{\'E}\xspace low-voltage benchmark microgrid \cite{Rep:2014:CIGRE} (in black) and extended by unbalanced impedance loads (in grey). For the cable parameters see \cref{tab:grid:parameters}. The set of grid-following resources is composed of constant impedance loads (Z) and constant power loads (P/Q); their parameters are given in \cref{tab:resources:references}.
} \label{fig:grid:schematic} % \end{figure} \begin{table}[t] \centering \caption{Sequence Parameters of the Lines in the Test System.} \label{tab:grid:parameters} \input{Tables/System_Cable_Types} \end{table} \begin{table}[t] \centering \caption{Parameters of the Grid-Following Resources and Loads\newline in the Test System.} \label{tab:resources:references} \input{Tables/System_Load_Values} \end{table} As stated in Part~I of this paper, the \HPF problem is a system of nonlinear equations, which is solved numerically by means of the Newton-Raphson method. Due to its nonlinearity, the \HPF problem may have multiple solutions, one or several of which may not even be physically meaningful. As known from numerical analysis, the convergence of an iterative numerical solver to a particular solution can be affected by the choice of the initial point. Therefore, it is crucial to verify whether multiplicity of solutions occurs, and whether the convergence is robust w.r.t. the initial point. In order to assess the convergence behaviour of the proposed \HPF method, the initial spectra of voltages and currents are varied. More precisely, the initial spectra are obtained as a superposition of random positive, negative, and homopolar sequences at each frequency (i.e., fundamental and harmonics), whose magnitudes and phases are uniformly distributed in the intervals $[0,10]\,\text{p.u.}$ and $[0,2\pi]\,\text{rad}$, respectively. The accuracy of the \HPF method is assessed by means of the same \KPI[s] used for the individual resources: the magnitude and phase errors of the \HPF results w.r.t. \DFT spectra of time-domain waveforms. A detailed performance analysis of the \HPF study is conducted. To this end, the method's performance is quantified by the mean and standard deviation of the execution time of the \HPF study over $N=50$ simulations, and compared to the execution time of the \TDS (incl. the Fourier analysis). Moreover, a scalability analysis w.r.t.
the numbers of \CIDER[s] and w.r.t. the considered harmonic order in the \HPF is performed. In the first case, the grid-following \CIDER[s] at nodes N15-17 are consecutively replaced by wye-connected, balanced impedance loads, whose nominal power is equal to the setpoint of the associated \CIDER. In the second case, the timing analysis for the \HPF is repeated while increasing the considered maximum harmonic order $h_{max}$ consecutively from 11 to 25. It is important to note that the \TDS takes some time to reach steady state. The settling time of this transient analysis strongly depends on the initialization of the simulation. In order to ensure a fair comparison between the \HPF and the \TDS, the execution time of the latter is measured only for 5 periods in steady state (i.e., the window length required for the \DFT) plus the Fourier analysis (i.e., the \DFT). Note that this corresponds to the minimum simulation time that would be required even if the initialization of the \TDS were perfect. \subsection{Results and Discussion} \label{sec:val-sys:results} \Cref{tab:system:sequences} gives the voltage and current sequence components of the nodes where resources are connected. Indeed, the passive impedance loads at N19-22 introduce significant unbalances in the nodal phase-to-ground voltages and injected currents. The convergence of the method appears to be robust w.r.t. a random choice of the initial point (i.e., sequence components whose magnitudes and phases are uniformly distributed in the intervals $[0,10]\,\text{p.u.}$ and $[0,2\pi]\,\text{rad}$, respectively). In fact, the method always converged to the same solution irrespective of the initial point. That is, neither divergence of the algorithm nor multiplicity of solutions has been observed. Naturally, this empirical evidence does not provide a general guarantee.
Nevertheless, the fact that the convergence is not affected even by substantial variations of the initial point attests to the robustness of the proposed method. \Cref{fig:system:error:VI} shows the maximum absolute errors over all nodes and phases, separately for grid-forming and grid-following \CIDER[s]. The highest errors w.r.t. voltage magnitude and phase are $e_{\textup{abs}}(\mathbf{V}_{19})=6.33$E-5~p.u. and $e_{\textup{arg}}(\mathbf{V}_{23})=0.37$~deg, respectively. The highest errors w.r.t. current magnitude and phase are $e_{\textup{abs}}(\mathbf{I}_{1})=1.33$E-3~p.u. and $e_{\textup{arg}}(\mathbf{I}_{25})=0.87$~deg, respectively. Observe that the magnitude errors of the current harmonics are higher than those of the voltage harmonics, which is likely due to the Taylor approximation in the reference calculation of the grid-following \CIDER[s] (i.e., \cref{hyp:rsc:follower:approx}). Moreover, note that the phase error becomes slightly larger as the harmonic order increases. Nevertheless, the error levels are generally very low. Indeed, as was the case in the resource validation, the magnitude and phase errors are lower than the accuracy of standard measurement equipment. All simulations are run on the same laptop computer, namely a MacBook Pro 2019 with a 2.4 GHz Intel Core i9 CPU and 32 GB 2400 MHz DDR3 RAM. As shown in \cref{tab:system:timing}, the mean of the execution time of the \HPF method lies between 1.75 and 6.52~sec, with standard deviations between 0.03 and 0.1~sec, depending on the number of \CIDER[s] that are connected. By comparison, the execution time of the \TDS is between 28.33 and 33.39~sec, out of which ca. 0.6~sec are needed for the Fourier analysis (i.e., the \DFT). Clearly, the \HPF method is faster than the \TDS, while yielding accurate results. The computational complexity of the \HPF method as a function of the maximum harmonic order is illustrated in the upper subplot of \cref{fig:system:timing_hmax}.
Note that the execution time increases almost linearly, but a non-dominant higher-order component is clearly visible (as expected, given the involved matrix operations). The non-deterministic component of $T_{exc,\HPF}$ (i.e., the variation around the mean value) is illustrated in the lower subplot of \cref{fig:system:timing_hmax}. Observe that any deviation is small compared to the actual value of $T_{exc,\HPF}$. \begin{table}[t] \centering \caption{Ratio of Sequence Voltages and Currents\linebreak at the Nodes with Resources} \label{tab:system:sequences} \input{Tables/System_Sequence_Ratios} \end{table} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Figures/Error} \caption {% Results of the validation on the benchmark system. The plots show the maximum absolute errors over all nodes and phases, for voltages (left column) and currents (right column), in magnitude (top row) and phase (bottom row). } \label{fig:system:error:VI} \end{figure} \begin{table}[t] \centering \caption{Timing Performance (for $N=50$ Simulations of the \HPF)} \label{tab:system:timing} \input{Tables/System_Computational_Cost_CIDERs} \end{table} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{Figures/Timing_hmax.pdf} \caption {% Mean and distribution of the timing performance of the \HPF with the maximum number of \CIDER[s] and varying $h_{max}$, over $N=50$ simulations. The box-and-whisker plot visualizes the 25th and 75th percentiles of the sample; the whisker length is 1.5 times the interquartile range. } \label{fig:system:timing_hmax} \end{figure}
\section{Proof of Lemma~\ref{lemma:clique_sunflower}} Let $\mb U_{n,q} \subseteq [n]$ be a $q$-random subset of $[n]$ (independent of $\mb G(n,p)$). Let $c_1 \defeq \ln(1/\eps)$ and for $\ell \ge 2$, let $c_\ell \defeq 2\ln(1/\eps)\sum_{j=1}^{\ell-1}\binom{\ell}{j}c_j$. \begin{lemma} $c_\ell \le \ell!(2\log(1/\eps))^\ell$. \end{lemma} \begin{lemma} For all $\ell \in \{1,\dots,n\}$ and $S \subseteq \binom{[n]}{\ell}$, if $|S| \ge c_\ell(1/q)^\ell(1/p)^{\binom{\ell}{2}}$, then there exists $B \in \binom{[n]}{<\ell}$ such that \[ \Pr[\ \bigwedge_{A \in S \,:\, B \subseteq A} (K_A \nsubseteq \mb G(n,p) \cup K_B \text{ or } A \nsubseteq \mb U_{n,q} \cup B) \ ] \le \eps. \] \end{lemma} \begin{proof} By induction on $\ell$. In the base case $\ell=1$, we have $B = \emptyset$ and (by independence) \[ \Pr[\ \bigwedge_{A \in S} (K_A \nsubseteq \mb G(n,p) \text{ or } A \nsubseteq \mb U_{n,q})\ ] &= \Pr[\ \bigwedge_{A \in S} (A \nsubseteq \mb U_{n,q})\ ]\\ &= \prod_{A \in S} \Pr[\ A \nsubseteq \mb U_{n,q}\ ]\\ &= (1-q)^{|S|} \ \ \le\ \ (1-q)^{\ln(1/\eps)/q} \ \ \le\ \ e^{-\ln(1/\eps)} \ \ =\ \ \eps. \] Let $\ell \ge 2$. First, consider the case that there exists $j \in \{1,\dots,\ell-1\}$ and $B \in \binom{[n]}{j}$ such that $$|\{A \in S : B \subseteq A\}| \ge c_{\ell-j}(1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}.$$ Let $T = \{A \setminus B : A \in S \text{ such that } B \subseteq A\} \subseteq \binom{[n]}{\ell-j}$. By the induction hypothesis, there exists $D \in \binom{[n] \setminus B}{<\ell -j}$ such that \[ \Pr[\ \bigwedge_{C \in T \,:\, D \subseteq C} (K_C \nsubseteq \mb G(n,p) \cup K_D \text{ or } C \nsubseteq \mb U_{n,qp^j} \cup D) \ ] \le \eps. 
\] We have \[ \Pr[\ &\bigwedge_{A \in S \,:\, B \cup D\subseteq A} (K_A \nsubseteq \mb G(n,p) \cup K_{B \cup D} \text{ or } A \nsubseteq \mb U_{n,q} \cup B \cup D)\ ]\\ &= \Pr[\ \bigwedge_{C \in T \,:\, D \subseteq C} (K_{B \cup C} \nsubseteq \mb G(n,p) \cup K_{B \cup D} \text{ or } B \cup C \nsubseteq \mb U_{n,q} \cup B \cup D)\ ]\\ &= \Pr[\ \bigwedge_{C \in T \,:\, D \subseteq C} (K_{B \cup C} \nsubseteq \mb G(n,p) \cup K_{B \cup D} \text{ or } C \nsubseteq \mb U_{n,q} \cup D)\ ]\\ &= \Pr[\ \bigwedge_{C \in T \,:\, D \subseteq C} (K_C \nsubseteq \mb G(n,p) \cup K_D \text{ or } C \nsubseteq \Big\{v \in \mb U_{n,q} : \{v,w\} \in E(\mb G(n,p)) \text{ for all } w \in B\Big\} \cup D)\ ]\\ &\le \Pr[\ \bigwedge_{C \in T \,:\, D \subseteq C} (K_C \nsubseteq \mb G(n,p) \cup K_D \text{ or } C \nsubseteq \mb U_{n,qp^j} \cup D)\ ]\\ &\le \eps. \] Finally, assume that for all $j \in \{1,\dots,\ell-1\}$ and $B \in \binom{[n]}{j}$, we have $$|\{A \in S : B \subseteq A\}| \le c_{\ell-j}(1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}.$$ Let \[ \mu &\defeq |S| q^\ell p^{\binom\ell 2},\\ \Delta &\defeq \sum_{j=1}^{\ell-1} \sum_{(A,A') \in S^2 \,:\, |A \cap A'| = j} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}}. \] By Janson's inequality, \[ \Pr[\ \bigwedge_{A \in S} (K_A \nsubseteq \mb G(n,p) \text{ or } A \nsubseteq \mb U_{n,q})\ ] &\le \exp\left(- \frac{\mu^2}{\mu+\Delta} \right). \] This bound is $\le \exp(- \mu^2/2\Delta)$ when $\Delta \ge \mu$. (If instead $\Delta < \mu$, then $\mu^2/(\mu+\Delta) \ge \mu/2 \ge c_\ell/2 \ge \ln(1/\eps)$, and the desired bound follows directly; we may thus assume $\Delta \ge \mu$.) We have $\mu \ge c_\ell$, since $|S| \ge c_\ell (1/q)^\ell (1/p)^{\binom\ell 2}$.
We also have \[ \Delta &\le \sum_{j=1}^{\ell-1} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}} \sum_{B \in \binom{[n]}{j}} |\{A \in S : B \subseteq A\}|^2 \\ &\le \sum_{j=1}^{\ell-1} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}} \sum_{B \in \binom{[n]}{j}} |\{A \in S : B \subseteq A\}| \cdot c_{\ell-j}(1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}\\ &= q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} c_{\ell-j} \sum_{B \in \binom{[n]}{j}} |\{A \in S : B \subseteq A\}|\\ &= q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} c_{\ell-j} \sum_{A \in S} \sum_{B \in \binom{A}{j}} 1\\ &= |S| q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} \binom{\ell}{j} c_{\ell-j} \\ &= \mu \sum_{j=1}^{\ell-1} \binom{\ell}{j} c_j. \] We have \[ \frac{\mu^2}{2\Delta} &\ge \frac{\mu}{2\sum_{j=1}^{\ell-1} \binom{\ell}{j} c_{\ell-j}}\\ &= \frac{|S| q^\ell p^{\binom\ell 2}}{2\sum_{j=1}^{\ell-1} \binom{\ell}{j} c_{\ell-j} }\\ &\ge \frac{c_\ell}{2\sum_{j=1}^{\ell-1} \binom{\ell}{j} c_{\ell-j}}\\ &= \ln(1/\eps). \] Therefore, \[ \Pr[\ \bigwedge_{A \in S} (K_A \nsubseteq \mb G(n,p) \text{ or } A \nsubseteq \mb U_{n,q})\ ] &\le \exp\left(-\frac{\mu^2}{2\Delta}\right) \le \eps. \] \end{proof} \section{Introduction} A monotone Boolean circuit is a Boolean circuit with $\mathsf{AND}$ and $\mathsf{OR}$ gates but no negations ($\mathsf{NOT}$ gates). Although a restricted model of computation, monotone Boolean circuits seem a very natural model to work with when computing \emph{monotone} Boolean functions, i.e., Boolean functions $f : \{0,1\}^n \to \{0,1\}$ such that for all pairs of inputs $(a_1, a_2, \ldots, a_n) , (b_1, b_2, \ldots, b_n)\in \{0,1\}^n$ where $a_i \leq b_i$ for every $i$, we have $f(a_1, a_2, \ldots, a_n) \leq f(b_1, b_2, \ldots, b_n)$. Many natural and well-studied Boolean functions such as $\mathsf{Clique}$ and $\mathsf{Majority}$ are monotone.
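For small $n$, monotonicity can be checked by brute force: it suffices to verify that flipping any single input bit from $0$ to $1$ never flips the output from $1$ to $0$ (single-bit flips suffice by transitivity). A quick illustrative sketch:

```python
from itertools import product

# Brute-force monotonicity check for a small Boolean function:
# f is monotone iff raising any single input bit from 0 to 1
# never lowers the output from 1 to 0.
def is_monotone(f, n):
    for a in product((0, 1), repeat=n):
        for i in range(n):
            if a[i] == 0:
                b = a[:i] + (1,) + a[i + 1:]
                if f(*a) > f(*b):
                    return False
    return True

maj3 = lambda x, y, z: int(x + y + z >= 2)      # Majority: monotone
parity3 = lambda x, y, z: (x + y + z) % 2       # Parity: not monotone

print(is_monotone(maj3, 3), is_monotone(parity3, 3))  # True False
```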
Monotone Boolean circuits have been very well studied in Computational Complexity over the years, and remain one of the few large natural subclasses of Boolean circuits for which we have exponential lower bounds. This line of work started with an influential paper of Razborov \cite{razborov_boolean_85} from 1985, which proved an $n^{\Omega(k)}$ lower bound on the size of monotone circuits computing the $\kclq$ function on $n$-vertex graphs for $k \leq \log n$; this bound is super-polynomial for $k = \log n$. Prior to Razborov's result, super-linear lower bounds for monotone circuits were unknown, the best known bound being $4n$, due to Tiekenheinrich \cite{Tk84}. Further progress in this line of work included the results of Andreev \cite{andreev_monotone}, who proved an exponential lower bound for another explicit function. Alon and Boppana \cite{alon_boppana} extended Razborov's result by proving an $n^{\Omega(\sqrt k)}$ lower bound for $\smash{\kclq}$ for all $k \le n^{2/3 - o(1)}$. A second paper of Andreev \cite{andreev1987method} from the same time period proved a $2^{\Omega(n^{1/3}/\log n)}$ lower bound for an explicit $n$-variate monotone function. Using a different technique, Harnik and Raz \cite{HR00} proved a lower bound of $2^{\Omega((n/\log n)^{1/3})}$ for a family of explicit $n$-variate functions defined using a small probability space of random variables with bounded independence.
However, modulo improvements to the polylog factor in this exponent, the state-of-the-art monotone circuit lower bounds have been stuck at $2^{\Omega(n^{1/3 - o(1)})}$ since 1987.\footnote{Stasys Jukna (personal communication) observed that Andreev's bound \cite{andreev1987method} can be improved to $2^{\Omega((n/\sqrt{\log n})^{1/3})}$ using the lower bound criterion of \cite{jukna1999combinatorics}.} To this day, the question of proving truly exponential lower bounds for monotone circuits (of the form $2^{\Omega(n)}$ for an explicit $n$-variate function) remains open! (Truly exponential lower bounds for monotone {\em formulas} were obtained only recently \cite{pitassi2017strongly}.) In the present paper, we improve the best known lower bound for monotone circuits by proving a $2^{\Omega(n^{1/2}/\log n)}$ lower bound for an explicit $n$-variate monotone Boolean function (Section~\ref{sec:harnik_raz}). The function is based on the same construction first considered by Harnik and Raz, but our argument employs the approximation method of Razborov together with recent improvements on robust sunflower bounds~\cite{alweiss2019improved,rao2020}. By applying the same technique with a variant of robust sunflowers that we call clique-sunflowers, we are able to prove an $n^{\Omega(k)}$ lower bound for the $\kclq$ function when $k \leq n^{1/3-o(1)}$, thus improving the result of Alon and Boppana when $k$ is in this range (Section~\ref{sec:clique}). Finally, we are able to prove truly exponential lower bounds in the monotone arithmetic setting for a fairly general family of polynomials, which shares some similarities with the functions considered by Andreev and by Harnik and Raz (Section~\ref{sec:arithmetic}). \subsection{Monotone circuit lower bounds and sunflowers} The original lower bound for $\kclq$ due to Razborov employed a technique which came to be known as the \emph{approximation method}.
Given a monotone circuit $C$ of ``small'' size, it consists in constructing, gate by gate and in a bottom-up fashion, another circuit $\widetilde{C}$ that approximates $C$ on most inputs of interest. One then exploits the structure of this \emph{approximator circuit} to prove that it differs from $\kclq$ on most inputs of interest, thus implying that no ``small'' circuit can compute this function. This technique was leveraged to obtain lower bounds for a host of other monotone problems~\cite{alon_boppana}. A crucial step in Razborov's proof involved the sunflower lemma due to Erd\H{o}s and Rado. A family $\calf$ of subsets of $[n]$ is called a \emph{sunflower} if there exists a set $Y$ such that $F_1 \cap F_2 = Y$ for every two distinct $F_1, F_2 \in \calf$. The sets of $\calf$ are called \emph{petals} and the set $Y = \bigcap \calf$ is called the \emph{core}. We say that the family $\calf$ is $\ell$-uniform if every set in the family has size $\ell$. \begin{theorem}[Erd\H{o}s and Rado~\cite{erdos_rado_sunflower}] \label{thm:erdos_rado} Let $\calf$ be an $\ell$-uniform family of subsets of $[n]$. If $\abs{\calf} > \ell!(r-1)^\ell$, then $\calf$ contains a sunflower with $r$ petals. \end{theorem} Informally, the sunflower lemma allows one to prove that a monotone function can be approximated by one with fewer minterms by means of the ``plucking'' procedure: if the function has too many (more than $\ell!(r-1)^\ell$) minterms of size $\ell$, then its set of minterms contains a sunflower with $r$ petals; remove all the petals, replacing them with the core. One can then prove that this procedure does not introduce many errors.
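The standard proof of Theorem~\ref{thm:erdos_rado} is constructive, and can be phrased as a recursive procedure: either the family contains $r$ pairwise disjoint sets (a sunflower with empty core), or some element lies in many sets and one recurses on the family of sets containing it. The sketch below is an illustrative implementation of this idea, not code from any of the cited works.

```python
from collections import Counter

# Recursive sketch of the Erdos-Rado argument. Given an l-uniform family,
# either greedily collect r pairwise disjoint sets (a sunflower with empty
# core), or recurse on a popular element. If |F| > l! (r-1)^l, the classic
# counting argument guarantees success.
def find_sunflower(family, r):
    family = [frozenset(s) for s in family]
    if not family:
        return None
    # Greedy attempt: pick r pairwise disjoint sets.
    disjoint, used = [], frozenset()
    for s in family:
        if not (s & used):
            disjoint.append(s)
            used |= s
    if len(disjoint) >= r:
        return disjoint[:r]                     # sunflower with empty core
    # Otherwise every set meets `used`, so some element of `used` is popular:
    x, _ = Counter(e for s in family for e in s if e in used).most_common(1)[0]
    sub = find_sunflower([s - {x} for s in family if x in s], r)
    return None if sub is None else [s | {x} for s in sub]

petals = find_sunflower([{1, 2}, {1, 3}, {1, 4}, {2, 3}], 3)
print(petals)   # e.g. three petals pairwise intersecting exactly in {1}
```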
The notion of {\em robust sunflowers} was introduced by the third author in \cite{rossman_kclq}, to achieve better bounds via the approximation method on the monotone circuit size of $\kclq$ when the negative instances are Erd\H{o}s-R\'enyi random graphs $\mb G_{n,p}$ below the $k$-clique threshold.\footnote{Robust sunflowers were called {\em quasi-sunflowers} in \cite{rossman_kclq,gopalan2013dnf,li2018sunflowers,lovett2019fromdnf} and {\em approximate sunflowers} in \cite{lovett2019dnf}. Following Alweiss {\em et al.} \cite{alweiss2019improved}, we adopt the new name {\em robust sunflower}.} A family $\calf \sseq 2^{[n]}$ is called a $(p, \eps)$-\emph{robust sunflower} if \begin{equation*} \Pr_{\rndw \sseq_p [n]} \left[ \exists F \in \calf: F \sseq \rndw \cup Y \right] > 1-\eps, \end{equation*} where $Y := \bigcap \calf$ and $\rndw$ is a $p$-random subset of $[n]$ (i.e., every element of $[n]$ is contained in $\rndw$ independently with probability $p$). As remarked in~\cite{rossman_kclq}, every $\ell$-uniform sunflower with $r$ petals is a $(p, e^{-rp^{\ell}})$-robust sunflower. Moreover, as observed in~\cite{lovett2019fromdnf}, every $(1/r, 1/r)$-robust sunflower contains a sunflower with $r$ petals. A corresponding bound for the appearance of robust sunflowers in large families was also proved in~\cite{rossman_kclq}. \begin{theorem}[\cite{rossman_kclq}] \label{thm:approx_sunflower} Let $\calf$ be an $\ell$-uniform family such that $\abs{\calf} \geq \ell!(2\log(1/\eps)/p)^\ell$. Then $\calf$ contains a $(p,\eps)$-robust sunflower. \end{theorem} For many choices of the parameters $p$ and $\eps$, this bound is better than the one by Erd\H{o}s and Rado, thus leading to better approximation bounds. In a recent breakthrough, this result was significantly improved by Alweiss, Lovett, Wu and Zhang~\cite{alweiss2019improved}. Soon afterwards, alternative proofs with slightly improved bounds were given by Rao\footnote{ Rao's bound is also slightly stronger in the following sense.
He shows that, if the random set $\rndw$ is chosen uniformly at random among all sets of size $\floor{np}$, then we also have $ \Pr \left[ \exists F \in \calf: F \sseq \rndw \cup Y \right] > 1-\eps$. However, for our purposes, the $p$-biased case will suffice. }~\cite{rao2020} and Tao~\cite{tao2020}. A more detailed discussion can be found in a note by Bell, Chueluecha and Warnke~\cite{bcw21}. \begin{theorem}[\cite{alweiss2019improved,rao2020,tao2020,bcw21}] \label{thm:improved_sunflower} There exists a constant $B > 0$ such that the following holds for all $p, \eps \in (0,1/2]$. Let $\calf$ be an $\ell$-uniform family such that $\abs{\calf} \geq (B \log(\ell/\eps)/p)^\ell$. Then $\calf$ contains a $(p,\eps)$-robust sunflower. \end{theorem} Theorem~\ref{thm:improved_sunflower} can be verified by combining the basic structure of Rossman's original argument~\cite{rossman_kclq} with the main technical estimate of Rao~\cite{rao2020}. Since the proof does not appear explicitly in any of those papers, for completeness we give a proof in Appendix~\ref{sec:sf_proof}. \subsection{Preliminaries} We denote by $\blt^n_{= m} \sseq \blt^n$ the set of all $n$-bit binary vectors with Hamming weight exactly $m$. We extend the logical operators $\lor$ and $\land$ to binary strings $x, y \in \blt^n$, as follows: \begin{itemize} \item $(x \land y)_i = x_i \land y_i$, for every $i \in [n]$; \item $(x \lor y)_i = x_i \lor y_i$, for every $i \in [n]$. \end{itemize} We will say that a distribution $\rndx$ with support in $\blt^n$ is \emph{$p$-biased} or \emph{$p$-random} if the random variables $\rndx_1,\dots,\rndx_n$ are mutually independent and satisfy $\Pr[\rndx_i = 1] = p$ for all $i$. If a distribution $\rndU$ has support in $2^{[n]}$, we will say that $\rndU$ is \emph{$p$-biased} or \emph{$p$-random} if the random Boolean string $\rndx$ such that $\rndx_i = 1 \iff i \in \rndU$ is $p$-biased.
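Since the robust-sunflower condition is probabilistic, it can be estimated empirically for small families by sampling the $p$-random set just defined. The following Monte Carlo sketch uses illustrative parameters; for the particular sunflower below, whose petals each have a single element outside the core, the covering probability equals $1-(1-p)^{28}$ exactly.

```python
import random

# Monte Carlo estimate of Pr_W [ exists F in family with F subset of W u Y ],
# where Y is the common core and W is a p-random subset of [n].
def covering_probability(family, n, p, trials=20000, seed=0):
    rng = random.Random(seed)
    family = [set(F) for F in family]
    Y = set.intersection(*family)                  # the core
    hits = 0
    for _ in range(trials):
        W = {i for i in range(1, n + 1) if rng.random() < p}
        if any(F <= W | Y for F in family):
            hits += 1
    return hits / trials

# Sunflower with core {1} and 28 petals; robust already for moderate p.
fam = [{1, x} for x in range(2, 30)]
print(covering_probability(fam, n=30, p=0.2))      # close to 1 - 0.8**28
```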
We sometimes write $\rndU \sseq_p [n]$ to denote that $\rndU$ is a $p$-biased subset of $[n]$. We consistently write random objects using boldface symbols (such as $\rndw$, $\mb G_{n,p}$, etc.). Everything that is not written in boldface is not random. When taking probabilities or expectation, the underlying distribution is always the one referred to by the boldface symbol. For instance, when $i \in [n]$ and $\rndW$ is a $p$-biased subset of $[n]$, the event $\set{i \in \rndW}$ denotes that the \emph{non-random} element $i$ is contained in the \emph{random} set $\rndW$. For a Boolean function $f$ and a probability distribution $\mb \mu$ on the inputs of $f$, we write $f(\mb \mu)$ to denote the random variable which evaluates $f$ on a random instance of $\mb \mu$. In what follows, we will mostly ignore ceilings and floors for the sake of convenience, since these do not make any substantial difference in the final calculations. \section{Harnik-Raz function} \label{sec:harnik_raz} The strongest lower bound known for monotone circuits computing an explicit $n$-variate monotone Boolean function is $\exp\big(\Omega\big((n/\log n)^{1/3}\big)\big)$, and it was obtained by Harnik and Raz~\cite{HR00}. In this section, we will prove a lower bound of $\exp(\Omega(n^{1/2}/\log n))$ for the same Boolean function they considered. We apply the \emph{method of approximations}~\cite{razborov_boolean_85} and the new \emph{robust sunflower} bound~\cite{alweiss2019improved,rao2020}. We do not expect that a lower bound better than $\exp(n^{1/2-o(1)})$ can be obtained by the approximation method with robust sunflowers. This limitation is discussed in more detail in Section~\ref{sec:hr_discussion}. We start by giving a high-level outline of the proof.
We define the Harnik-Raz function $\fnhr : \blt^n \to \blt$ and find two distributions $\rndy$ and $\rndn$ with support in $\blt^n$ satisfying the following properties: \begin{itemize} \item $\fnhr$ outputs~1 on $\rndy$ with high probability (Lemma~\ref{claim:hr_result}); \item $\fnhr$ outputs~0 on $\rndn$ with high probability (Lemma~\ref{claim:negative_test}). \end{itemize} Because of these properties, the distribution $\rndy$ is called the \defx{positive test distribution}, and $\rndn$ is called the \defx{negative test distribution}. We also define a set of monotone Boolean functions called \emph{approximators}, and we show that: \begin{itemize} \item every approximator commits many mistakes on either $\rndy$ or $\rndn$ with high probability (Lemma~\ref{lemma:hr_approx_err}); \item every Boolean function computed by a ``small'' monotone circuit agrees with an approximator on both $\rndy$ and $\rndn$ with high probability (Lemma~\ref{lemma:hr_approx_correct}). \end{itemize} Together these suffice for proving that ``small'' circuits cannot compute $\fnhr$. The crucial part where the robust sunflower result comes into play is in the last two items. \subsection{Notation for this section} For $A \subseteq [n]$, let $x_A \in \blt^n$ be the binary vector with support in $A$. For a set $A \subseteq [n]$, let $\clqind{A}$ be the indicator function satisfying \begin{equation*} \clqind{A}(x) = 1 \iff x_A \leq x. \end{equation*} For a monotone Boolean function $f : \blt^n \to \blt$, let $\mintms(f)$ denote the set of minterms of $f$, and let $\mintms_\ell(f) := \mintms(f) \cap \blt_{=\ell}^n$. Elements of $\mintms_\ell(f)$ are called $\ell$-minterms of $f$. This notation is valid only in Section~\ref{sec:harnik_raz} and will be slightly tweaked in Section~\ref{sec:clique} (Lower Bound for $\kclq$) for the sake of uniformity of exposition. 
\subsection{The function} \label{sec:hr_the_function} We now describe the construction of the function $\fnhr : \blt^n \to \blt$ considered by Harnik and Raz~\cite{HR00}. First observe that, for every $n$-bit monotone Boolean function $f$, there exists a family $\cals \sseq 2^{[n]}$ such that \begin{equation*} f(x_1,\dots,x_n) = D_\cals(x_1,\dots,x_n) := \bigvee_{S \in \cals} \bigwedge_{j \in S} x_{j}. \end{equation*} Indeed, $\cals$ can be chosen to be the family of the coordinate-sets of minterms of $f$. Now, in order to construct the Harnik-Raz function, we will suppose $n$ is a prime number and let $\F_n = \set{0,1,\dots,n-1}$ be the field of $n$ elements. Moreover, we fix two positive integers $c$ and $k$ with $c < k < n$. For a polynomial $P \in \F_n[x]$, we let $S_P$ be the set of evaluations of $P$ at the points $1,2,\dots,k$ (in other words, $S_P = \set{P(1),\dots,P(k)}$). Observe that it is not necessarily the case that $\card{S_P}=k$, since it may happen that $P(i)=P(j)$ for some $i \neq j$. Finally, we consider the family $\hrfml$ defined as \begin{equation*} \hrfml := \set{S_P : P \in \F_n[x], \text{$P$ has degree at most $c-1$ and } \abs{S_P} \geq k/2 }. \end{equation*} We thus define $\fnhr$ as $\fnhr := D_{\hrfml}$. We now explain the choice of $\hrfml$. First, the choice of evaluation sets of polynomials of degree at most $c-1$ is explained by a fact observed in~\cite{alon_babai_itai}. They observed that, if a polynomial $\rndp \in \F_n[x]$ of degree at most $c-1$ is chosen uniformly at random, then the random variables $\rndp(1),\dots,\rndp(k)$ are $c$-wise independent, each uniformly distributed in $\F_n$. This allows us to define a distribution on the inputs (the positive test distribution) that has high agreement with $\fnhr$ and is easy to analyze. Observe further that, since $\abs{\hrfml} \leq n^c$, the monotone complexity of $\fnhr$ is at most $2^{O(c \log n)}$.
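For intuition, the construction can be carried out exhaustively for toy parameters; the sketch below uses $n=11$, $c=2$, $k=5$, far below the regime $c \approx n^{1/2}$ used later, and the parameter values and helper names are ours, for illustration only.

```python
from itertools import product

# Toy instance of the Harnik-Raz construction (n must be prime).
n, c, k = 11, 2, 5          # field size, degree bound, number of evaluation points

def S_P(coeffs):
    """Evaluation set S_P = {P(1), ..., P(k)} of the polynomial with the
    given coefficients (low degree first) over F_n."""
    return frozenset(sum(a * pow(x, i, n) for i, a in enumerate(coeffs)) % n
                     for x in range(1, k + 1))

# Family: evaluation sets of all degree <= c-1 polynomials, truncated to |S_P| >= k/2.
family = {S_P(coeffs) for coeffs in product(range(n), repeat=c)
          if len(S_P(coeffs)) >= k / 2}

def f_HR(x):
    """f_HR(x) = 1 iff some S_P is contained in the support of x."""
    support = {i for i, b in enumerate(x) if b}
    return int(any(S <= support for S in family))

print(len(family), f_HR([1] * n), f_HR([0] * n))
```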
Later we will choose $c$ to be roughly $n^{1/2}$, and prove that the monotone complexity of $\fnhr$ is $2^{\Omega(c)}$. Finally, the restriction $\abs{S_P} \geq k/2$ is a truncation made to ensure that no minterm of $\fnhr$ is very small. Otherwise, if $\fnhr$ had small minterms, it might be a function that almost always outputs~$1$. Such functions have very few maxterms and are therefore computed by a small CNF; since we desire $\fnhr$ to have high monotone complexity, this would be an undesirable property. The fact that $\fnhr$ does not have small minterms is important in the proof that $\fnhr$ almost surely outputs~$0$ on the negative test distribution (Lemma~\ref{claim:negative_test}). \begin{remark}[Parameters are now fixed] \label{rk:hr_function} Formally, the function $\fnhr$ depends on the choice of the parameters $c$ and $k$. In other words, for every choice of positive integers $c, k$ such that $c < k < n$, we obtain a different function $\fnhr^{(c,k)}$. For the rest of Section~\ref{sec:harnik_raz}, we will let $c$ and $k$ be fixed parameters, and we will refer to $\fnhr$ unambiguously, always with respect to the fixed parameters $c$ and $k$. We will make our choice of $c$ and $k$ explicit in Section~\ref{sec:hr_lower_bound}, but before then we will make no assumptions about $c$ and $k$ other than $c < k < n$. \end{remark} \subsection{Test distributions} \label{sec:hr_test} We now define the positive and negative test distributions. \begin{definition}[Test distributions] \label{def:hr_test} Let $\rndy \in \blt^n$ be the random variable which chooses a polynomial $\rndp \in \F_n[x]$ with degree at most $c-1$ uniformly at random, and maps it into the binary input $x_{S_{\rndp}} \in \blt^n$. Let also $\rndn$ be the $(1/2)$-biased distribution on $\blt^n$ (i.e., each bit is equal to~$1$ with probability $1/2$, independently of all the others). Equivalently, $\rndn$ is the uniform distribution on $\blt^n$.
\end{definition} Harnik and Raz proved that $\fnhr$ outputs~1 on $\rndy$ with high probability. For completeness, we include their proof. \begin{lemma}[Claim 4.1 in~\cite{HR00}] \label{claim:hr_result} We have $ \Pr[\fnhr(\rndy)=1] \geq 1-(k-1)/n. $ \end{lemma} \begin{proof} Let $\rndp$ be the polynomial randomly chosen by $\rndy$. Call a pair $\set{i,j} \sseq [k]$ with $i \neq j$ \defx{coinciding} if $\rndp(i) = \rndp(j)$. Because the random variables $\rndp(i)$ and $\rndp(j)$ are uniformly distributed in $[n]$ and independent for $i \neq j$, we have that $\Pr[\rndp(i) = \rndp(j)] = 1/n$ for $i \neq j$. Therefore, the expected value of the number $\mathsf{Num}(\rndp)$ of coinciding pairs is $\binom{k}{2}/n$. Observe now that $\fnhr(\rndy) = 0$ if and only if $\abs{\set{\rndp(1), \dots, \rndp(k)}} < k/2$, which occurs only if there exist more than $k/2$ coinciding pairs. Therefore, by Markov's inequality, we have \begin{equation*} \Pr \left[ \fnhr(\rndy) = 0 \right] \leq \Pr \left[ \mathsf{Num}(\rndp) > k/2 \right] \leq \frac{\binom{k}{2}/n}{k/2} = \frac{k-1}{n}. \qedhere \end{equation*} \end{proof} We now claim that $\fnhr$ also outputs~0 on $\rndn$ with high probability. \begin{lemma} \label{claim:negative_test} We have $ \Pr[\fnhr(\rndn)=0] \geq 1-2^{-(k/2-c\cdot\log_2 n)}. $ \end{lemma} \begin{proof} Let $x_{\rndA}$ be an input sampled from $\rndn$. Observe that $\fnhr(x_{\rndA})=1$ only if there exists a minterm $x$ of $\fnhr$ such that $x \leq x_{\rndA}$. Since all minterms of $\fnhr$ have Hamming weight at least $k/2$ and $\fnhr$ has at most $n^c$ minterms, we have \begin{equation*} \Pr[\fnhr(\rndn)=1] \leq n^c \cdot 2^{-k/2} = 2^{-(k/2-c\cdot\log_2 n)}. \end{equation*} \end{proof} We will also need the following property about the positive test distribution. \begin{lemma} \label{lemma:hr_trimmed_prob} For every $\ell \leq c$ and $A \sseq [n]$ such that $\abs{A}=\ell$, we have \begin{equation*} \Pr[x_A \leq \rndy] \leq \left( k/n \right)^{\ell}.
\end{equation*} \end{lemma} \begin{proof} Recall that the distribution $\rndy$ takes a polynomial $\rndp \in \F_n[x]$ with degree at most $c-1$ uniformly at random and returns the binary vector $x_{\set{\rndp(1), \rndp(2),\dots,\rndp(k)}} \in \blt^n$. Let $A \in \binom{[n]}{\ell}$ for $\ell \leq c$. Observe that $x_A \leq \rndy$ if and only if $A \sseq \set{\rndp(1), \rndp(2),\dots,\rndp(k)}$. Therefore, if $x_A \leq \rndy$, then there exist indices $j_1,\dots,j_\ell$ such that $\set{\rndp(j_1), \rndp(j_2),\dots,\rndp(j_{\ell})} = A$. Since $\ell \leq c$, we get by the $c$-wise independence of $\rndp(1),\dots,\rndp(k)$ that the random variables $\rndp({j_1}), \rndp({j_2}),\dots,\rndp({j_\ell})$ are independent. It follows that \begin{equation*} \Pr[ \set{\rndp(j_1), \rndp(j_2),\dots,\rndp(j_{\ell})} = A ] = \frac{\ell !}{n^\ell}. \end{equation*} Therefore, we have \begin{equation*} \Pr[ x_A \leq \rndy ] = \Pr[ A \sseq \set{\rndp(1), \rndp(2),\dots,\rndp(k)} ] \leq \binom{k}{\ell} \frac{\ell !}{n^\ell} \leq \left( \frac{k}{n} \right)^{\ell}. \qedhere \end{equation*} \end{proof} \subsection{A closure operator} \label{sec:hr_closure} In this section, we describe a closure operator in the lattice of monotone Boolean functions. We prove that the closure of a monotone Boolean function $f$ is a good approximation for $f$ on the negative test distribution~(Lemma~\ref{lemma:hr_error_closure}), and we give a bound on the size of the set of minterms of \emph{closed} monotone functions. This bound makes use of the robust sunflower lemma (Theorem~\ref{thm:improved_sunflower}), and is crucial to bounding errors of approximation (Lemma~\ref{lemma:hr_error_trimmed}). Finally, we observe that input functions are closed (Lemma~\ref{lemma:hr_input_approx}). From now on, we let \begin{equation} \label{eq:hr_eps} \eps := n^{-2c}.
\end{equation} \begin{definition}[Closed function] We say that a monotone function $f : \blt^n \to \blt$ is \emph{closed} if, for every $A \in \binom{[n]}{\le c}$, we have \[ \Pr[\ f(\rndn \vee x_A) = 1\ ] > 1 - \eps \ \Longrightarrow\ f(x_A) = 1. \] \end{definition} This means that, for a closed function, we always have $\Pr[f(\rndn \vee x_A) = 1] \notin (1-\eps,1)$ when $\abs{A} \leq c$. \begin{remark} [On the parametrization of closedness] \label{rk:hr_closed} We remark that the definition of a \emph{closed} function depends on two parameters: the parameter $\eps$, defined in~(\ref{eq:hr_eps}), and the parameter $c$, used in the construction of $\fnhr$ (see Remark~\ref{rk:hr_function}). Since both of these parameters are fixed throughout Section~\ref{sec:harnik_raz}, it is safe to omit them without risk of confusion. Therefore, we will henceforth say that some function is \emph{closed} without any further specification about the parameters. However, the reader must bear in mind that, whenever a function is said to be \emph{closed}, the \emph{fixed} parameters $c$ and $\eps$ are in view. \end{remark} \begin{definition}[Closure operator] \label{def:hr_closure} Let $f$ be a monotone Boolean function. We denote by $\cl(f)$ the unique minimal closed monotone Boolean function such that $f \leq \cl(f)$. In other words, the function $\cl(f)$ is the unique closed monotone function such that, whenever $f \leq g$ and $g$ is monotone and closed, we have $f \leq \cl(f) \leq g$. \end{definition} \begin{remark} [On closure] \label{rk:hr_closure} Note that $\cl(f)$ is well-defined, since the constant Boolean function that outputs~$1$ is closed and, if $f,g$ are both closed monotone Boolean functions, then so is $f \wedge g$. Furthermore, just as with the definition of closed functions (see Remark~\ref{rk:hr_closed}), the closure operator $\cl(\cdot)$ depends crucially on the parameters $\eps$ and $c$, which are fixed throughout Section~\ref{sec:harnik_raz}.
\end{remark} We now give a bound on the error of approximating $f$ by $\cl(f)$ under the distribution $\rndn$. \begin{lemma}[Approximation by closure] \label{lemma:hr_error_closure} For every monotone $f: \blt^n \to \blt$, we have \begin{equation*} \Pr \left[ f(\rndn) = 0 \text{ and } \cl(f)(\rndn) = 1 \right] \leq n^{-c}. \end{equation*} \end{lemma} \begin{proof} We first prove that there exists a positive integer $t$ and sets $A_1, \dots, A_t$ and monotone functions $h_0, h_1, \dots, h_t : \blt^n \to \blt$ such that \begin{enumerate} \item $h_0 = f$, \item $h_i = h_{i-1} \vee \clqind{A_i}$, \item $\Pr[h_{i-1}(\rndn \lor x_{A_i}) = 1] \geq 1-\eps$, \item $h_t = \cl(f)$. \end{enumerate} Indeed, if $h_{i-1}$ is not closed, there exists $A_i \in \binom{[n]}{\leq c}$ such that $\Pr[h_{i-1}(\rndn \lor x_{A_i}) = 1] \geq 1-\eps$ but $h_{i-1}(x_{A_i})=0$. We let $h_i := h_{i-1} \vee \clqind{A_i}$. Clearly, we have that $h_t$ is closed, and that the value of $t$ is at most the number of subsets of $[n]$ of size at most $c$. Therefore, we get $ t \leq \sum_{j=0}^{c} \binom{n}{j}. $ Moreover, by induction we obtain that $h_i \leq \cl(f)$ for every $i \in [t]$. It follows that $h_t = \cl(f)$. Now, observe that \begin{align*} \Pr \left[ f(\rndn) = 0 \text{ and } \cl(f)(\rndn) = 1 \right] &\leq \sum_{i=1}^t \Pr \left[ h_{i-1}(\rndn) = 0 \text{ and } h_i(\rndn) = 1 \right] \\&= \sum_{i=1}^t \Pr \left[ h_{i-1}(\rndn) = 0 \text{ and } x_{A_i} \leq \rndn \right] \\&\leq \sum_{i=1}^t \Pr \left[ h_{i-1}(\rndn \lor x_{A_i}) = 0 \right] \\&\leq \eps \sum_{j=0}^{c} \binom{n}{j} \leq n^{-c}. \qedhere \end{align*} \end{proof} We now bound the size of the set of $\ell$-minterms of a closed function. This bound depends on the robust sunflower theorem~(Theorem~\ref{thm:improved_sunflower}). \begin{lemma}[Closed functions have few minterms] \label{lemma:hr_bound_minterms} Let $B > 0$ be as in Theorem~\ref{thm:improved_sunflower}. 
If a monotone function $f : \blt^n \to \blt$ is closed, then, for all $\ell \in [c]$, we have \[ \abs{\mintms_\ell(f)} \le (6B c \log n)^\ell. \] \end{lemma} \begin{proof} Fix $\ell \in [c]$. For convenience, let $p=1/2$ and recall from~(\ref{eq:hr_eps}) that $\eps=n^{-2c}$. We will begin by proving that $ \abs{\mintms_\ell(f)} \leq (B \log(\ell/\eps)/p)^\ell. $ For a contradiction, suppose we have $ \abs{\mintms_\ell(f)} > (B \log(\ell/\eps)/p)^\ell. $ Consider the family $\calf := \set{A \in \binom{[n]}{\ell}: x_A \in \mintms_\ell(f)}$. Observe that $\abs{\calf} = \abs{\mintms_\ell(f)}$. By Theorem~\ref{thm:improved_sunflower}, there exists a $(p,\eps)$-robust sunflower $\calf' \sseq \calf$. Let $Y := \bigcap \calf'$ and let $\rndw \sseq_p [n]$. We have \begin{align*} \Pr[f(\rndn \vee x_Y)=1] &\geq \Pr[\exists x \in \mintms_\ell(f) : x \leq \rndn \vee x_Y] \\&= \Pr[\exists F \in \calf : F \sseq \rndw \cup Y ] \\&\geq \Pr[\exists F \in \calf' : F \sseq \rndw \cup Y ] \\&> 1-\eps. \end{align*} Therefore, since $f$ is closed, we get that $f(x_Y)=1$. However, since $Y = \bigcap \calf'$, there exists $F \in \calf'$ such that $Y \subsetneq F$. This contradicts the fact that $x_F$ is a minterm of $f$, since $x_Y < x_F$ and $f(x_Y) = 1$. We conclude that \begin{equation*} \abs{\mintms_\ell(f)} \leq (B \log(\ell/\eps)/p)^\ell \leq (2B \log(cn^{2c}))^\ell \leq (6B c \log n)^\ell. \qedhere \end{equation*} \end{proof} \begin{lemma}[Input functions are closed] \label{lemma:hr_input_approx} For all $i \in [n]$ and large enough $n$, the Boolean functions $\clqind{\set{i}}$ are closed. \end{lemma} \begin{proof} Fix $i \in [n]$. Let $A \sseq [n]$ be such that $\card{A} \leq c$ and suppose that $\clqind{\set{i}}(x_A) = 0$. Note that $\clqind{\set{i}}(x_A) = 0$ is equivalent to $(x_A)_i = 0$.
We have \begin{equation*} \Pr[\clqind{\set{i}}(\rndn \lor x_A)=1] = \Pr[(\rndn \lor x_A)_i = 1] = \Pr[\rndn_i = 1] = 1/2 \leq 1-n^{-2c} = 1-\eps, \end{equation*} since $\rndn$ is $(1/2)$-biased (Definition~\ref{def:hr_test}) and $\eps = n^{-2c}$ (as fixed in~(\ref{eq:hr_eps})). Therefore, $\clqind{\set{i}}$ is closed. \end{proof} \subsection{Trimmed monotone functions} \label{sec:hr_trimmed} In this section, we define a \emph{trimming} operation for Boolean functions. We will bound the probability that a \emph{trimmed} function gives the correct output on the distribution $\rndy$, and we will give a bound on the error of approximating a Boolean function $f$ by the trimming of $f$ on that same distribution. \begin{definition}[Trimmed functions] We say that a monotone function $f : \blt^n \to \blt$ is \emph{trimmed} if all the minterms of $f$ have size at most $c/2$. We define the trimming operation $\trim(f)$ as follows: \begin{equation*} \trim(f) := \bigvee_{\ell = 0}^{{c/2}} \bigvee_{A \in \mintms_{\ell}(f)} \clqind{A}. \end{equation*} \end{definition} That is, the $\trim$ operation removes from $f$ all the minterms of size larger than $c/2$, yielding a trimmed function. \begin{remark} [Parametrization of $\trim(\cdot)$ and other remarks] \label{rk:hr_trim} We remark that the definition of trimmed functions depends on the choice of the parameter $c$. As this parameter is fixed (see Remark~\ref{rk:hr_function}), the operator $\trim(\cdot)$ is well-defined. Moreover, if all minterms of $f$ have Hamming weight larger than $c/2$ (i.e., if $\mintms_{\ell}(f) = \emptyset$ for all $\ell \in \set{0,1,\dots,c/2}$), then $\trim(f)$ is the constant function that outputs 0. Finally, if $f$ is the constant function $\one$, then $\trim(f) = \one$, because the unique minterm of $\one$ (the all-zeros input) has Hamming weight 0.
\end{remark} We are now able to bound the probability that a trimmed Boolean function gives the correct output on distribution $\rndy$ and give a bound on the approximation error of the trimming operation. \begin{lemma} [Trimmed functions are inaccurate in the positive distribution] \label{lemma:hr_compute_error} If a monotone function $f : \blt^n \to \blt$ is trimmed and $f \neq \one$ (i.e.,\ $f$ is not identically $1$), then \begin{equation*} \Pr \left[ f(\rndy) = 1 \right] \leq \sum_{\ell = 1}^{{c/2}} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(f)}. \end{equation*} \end{lemma} \begin{proof} It suffices to see that, since $f$ is trimmed, if $f(\rndy) = 1$ and $f \neq \one$ then there exists a minterm $x$ of $f$ with Hamming weight between $1$ and $c/2$ such that $x \leq \rndy$. The result follows from Lemma~\ref{lemma:hr_trimmed_prob} and the union bound. \end{proof} \begin{lemma}[Approximation by trimming] \label{lemma:hr_error_trimmed} Let $f : \blt^n \to \blt$ be a monotone function, all of whose minterms have Hamming weight at most $c$. We have \begin{equation*} \Pr \left[ f(\rndy) = 1 \text{ and } \trim(f)(\rndy) = 0 \right] \leq \sum_{\ell = {c/2}}^{c} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(f)}. \end{equation*} \end{lemma} \begin{proof} If we have $f(\rndy) = 1$ and $\trim(f)(\rndy) = 0$, then there exists a minterm $x$ of $f$ with Hamming weight larger than $c/2$ (hence removed by the trimming operation) such that $x \leq \rndy$. Therefore, since $\abs{x} \leq c$ by assumption, the result follows from Lemma~\ref{lemma:hr_trimmed_prob} and the union bound. \end{proof} \subsection{The approximators} \label{sec:hr_approximators} Let $ \cala := \set{\trim(\cl(f)) : f : \blt^n \to \blt \text{ is monotone}}. $ Functions in $\cala$ will be called \emph{approximators}.
We define the \emph{approximating} operations $\sqcup, \sqcap: \cala \times \cala \to \cala$ as follows: for $f, g \in \cala$, let \begin{align*} f \sqcup g &:= \trim(\cl(f \vee g)), \\ f \sqcap g &:= \trim(\cl(f \wedge g)). \end{align*} We now observe that every input function is an approximator. Indeed, since every input $\clqind{\set{i}}$ is closed and trivially trimmed (Lemma~\ref{lemma:hr_input_approx}), we have $\trim(\cl(\clqind{\set{i}})) = \trim(\clqind{\set{i}}) = \clqind{\set{i}}$. Thus, $\clqind{\set{i}} \in \cala$ for all $i \in [n]$. Therefore, we can replace each gate of a monotone $\set{\vee, \wedge}$-circuit $C$ by its corresponding approximating gate, thus obtaining a $\set{\sqcup, \sqcap}$-circuit $C^\cala$ computing an approximator. The rationale for choosing this set of approximators is as follows. By letting approximators be the trimming of a closed function, we are able to plug the bound on the set of $\ell$-minterms given by the robust sunflower lemma (Lemma~\ref{lemma:hr_bound_minterms}) into Lemmas~\ref{lemma:hr_compute_error} and~\ref{lemma:hr_error_trimmed}, since the trimming operation can only \emph{reduce} the set of minterms. Moreover, since trimming can only make a function output $0$ more often, it can only help on the negative test distribution, and so we can safely apply Lemma~\ref{lemma:hr_error_closure} when bounding the errors of approximation. \subsection{The lower bound} \label{sec:hr_lower_bound} In this section, we prove that the function $\fnhr$ requires monotone circuits of size $2^{\Omega(c)}$. By properly choosing $c$ and $k$, this will imply the promised $\exp({\Omega(n^{1/2-o(1)})})$ lower bound for the Harnik-Raz function. First, we fix some parameters. Choose $B$ as in Lemma~\ref{lemma:hr_bound_minterms}. Let $T := 18B$. We also let \begin{equation*} k := n^{1/2}, \quad\quad c := \frac{1}{T} \cdot (k/\log n) = \frac{k}{18B \cdot \log n}. \end{equation*} For simplicity, we assume these values are integers. Note that $c = \Theta(k / \log n) \ll k$.
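These choices make the quantity $6Bc\log n$, which appears in Lemma~\ref{lemma:hr_bound_minterms}, equal to $k/3 = n/(3k)$, so that the error sums below are dominated by the geometric series $\sum_{\ell \geq 1} 3^{-\ell} \leq 1/2$. A quick numerical sanity check of this arithmetic (an illustrative sketch of ours, with arbitrary values of $B$ and integrality ignored):

```python
import math

def check_identity(n, B):
    """Check 6*B*c*log(n) == k/3 == n/(3k) for k = n^(1/2) and
    c = k/(18*B*log(n)); integrality of k and c is ignored."""
    k = math.sqrt(n)
    c = k / (18 * B * math.log2(n))
    lhs = 6 * B * c * math.log2(n)
    return math.isclose(lhs, k / 3) and math.isclose(lhs, n / (3 * k))

def geometric_tail(limit):
    """sum_{l=1}^{limit} 3^(-l), which stays below 1/2 for every limit."""
    return sum(3.0 ** (-l) for l in range(1, limit + 1))
```

The identity holds for any value of $B$, since $B$ cancels in the product $6B \cdot c \cdot \log n$.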
\begin{lemma}[Approximators make many errors] \label{lemma:hr_approx_err} For every approximator $f \in \cala$, we have \begin{equation*} \Pr[f(\rndy)=1] + \Pr[f(\rndn)=0] \leq 3/2. \end{equation*} \end{lemma} \begin{proof} Let $f \in \cala$. By definition, there exists a closed function $h$ such that $f = \trim(h)$. Observe that $\mintms_\ell(f) \sseq \mintms_\ell(h)$ for every $\ell \in [c]$. From Lemma~\ref{lemma:hr_bound_minterms}, we get \begin{equation*} \abs{\mintms_\ell(h)} \leq (6B c \log n)^\ell = (n/3k)^\ell. \end{equation*} Hence, applying Lemma~\ref{lemma:hr_compute_error}, we obtain that, if $f \neq \one$, we have \begin{equation*} \Pr[f(\rndy)=1] \leq \sum_{\ell=1}^{{c/2}} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(h)} \leq \sum_{\ell=1}^{{c/2}} 3^{-\ell} \leq 1/2. \end{equation*} Therefore, if $f \neq \one$, we have $ \Pr[f(\rndy) = 1] + \Pr[f(\rndn) = 0] \leq 1/2 + 1 \leq 3/2, $ and if $f = \one$, this sum is equal to $1 + 0 \leq 3/2$. In both cases the claim holds. \end{proof} \begin{lemma}[$C$ is well-approximated by $C^\cala$] \label{lemma:hr_approx_correct} Let $C$ be a monotone circuit. We have \begin{equation*} \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] + \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] \leq \size(C) \cdot 2^{-\Omega(c)}. \end{equation*} \end{lemma} \begin{proof} We begin by bounding the approximation errors under the distribution $\rndy$. We will show that, for two approximators $f,g \in \cala$, if $f \lor g$ accepts an input from $\rndy$, then $f \apor g$ rejects that input with probability at most $2^{-\Omega(c)}$, and that the same holds for the approximation $f \apand g$. First note that, if $f, g \in \cala$, then all the minterms of both $f \lor g$ and $f \land g$ have Hamming weight at most $c$, since $f$ and $g$ are trimmed. Let now $h = \cl(f \lor g)$. We have $(f \apor g)(x) < (f \lor g)(x)$ only if $\trim(h)(x) < h(x)$.
Since $h$ is closed, we get from Lemma~\ref{lemma:hr_bound_minterms} that, for all $\ell \in [c]$, we have \begin{equation*} \abs{\mintms_\ell(h)} \leq (6B c \log n)^\ell = (n/3k)^\ell. \end{equation*} We then obtain the following inequality by Lemma~\ref{lemma:hr_error_trimmed}: \begin{align*} \Pr \left[ (f \lor g)(\rndy) = 1 \text{ and } (f \apor g)(\rndy) = 0 \right] \leq \sum_{\ell = {c/2}}^c \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(h)} \le \sum_{\ell = {c/2}}^c 3^{-\ell} = 2^{-\Omega(c)}. \end{align*} The same argument shows $ \Pr \left[ (f \wedge g)(\rndy) = 1 \text{ and } (f \apand g)(\rndy) = 0 \right] = 2^{-\Omega(c)}. $ Since there are $\size(C)$ gates in $C$, this implies that $ \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] \leq \size(C) \cdot 2^{-\Omega(c)}. $ To bound the approximation errors under $\rndn$, note that $(f \lor g)(x)=0$ and $(f \apor g)(x) = 1$ only if $ \cl(f \lor g)(x) \neq (f \lor g)(x) $, since trimming a Boolean function cannot decrease the probability that it rejects an input. Therefore, by Lemma~\ref{lemma:hr_error_closure} we obtain \begin{align*} \Pr \left[ (f \lor g)(\rndn) = 0 \text{ and } (f \apor g)(\rndn) = 1 \right] \leq n^{-c} = 2^{-\Omega(c)}. \end{align*} The same argument shows $ \Pr \left[ (f \land g)(\rndn) = 0 \text{ and } (f \apand g)(\rndn) = 1 \right] = 2^{-\Omega(c)}. $ Once again, doing this approximation for every gate in $C$ allows us to conclude $ \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] \leq \size(C) \cdot 2^{-\Omega(c)}. $ This finishes the proof. \end{proof} \begin{theorem} \label{thm:hr_lower_bound} Any monotone circuit computing $\fnhr$ has size $2^{\Omega(c)} = 2^{\Omega (n^{1/2}/\log n)}$. \end{theorem} \begin{proof} Let $C$ be a monotone circuit computing $\fnhr$. 
Since $k/2 - c \log_2 n = \Omega(k)$ and $k \ll n$, for large enough $n$ we obtain from Lemmas~\ref{claim:hr_result} and \ref{claim:negative_test} that \begin{equation*} \Pr[\fnhr(\rndy)=1] + \Pr[\fnhr(\rndn)=0] \geq 2-(k-1)/n-2^{-(k/2-c\log_2 n)} \geq 9/5. \end{equation*} We then obtain from Lemmas~\ref{lemma:hr_approx_err} and~\ref{lemma:hr_approx_correct}: \begin{align*} 9/5 &\leq \Pr[\fnhr(\rndy)=1] + \Pr[\fnhr(\rndn)=0] \\& \leq \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] + \Pr[C^\cala(\rndy)=1] \\& + \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] + \Pr[C^\cala(\rndn)=0] \\& \leq 3/2 + \size(C)2^{-\Omega(c)}. \end{align*} This implies $\size(C) = 2^{\Omega(c)}$. \end{proof} \subsection{Are better lower bounds possible with robust sunflowers?} \label{sec:hr_discussion} In this section, we allow some degree of imprecision for the sake of brevity and clarity, in order to highlight the main technical ideas of the proof. A rough outline of how we just proved Theorem~\ref{thm:hr_lower_bound} is as follows. First, we noted that the minterms of $\fnhr$ are ``well-spread''. This is Lemma~\ref{lemma:hr_trimmed_prob}, which states that the probability that a fixed set $A \sseq [n]$ is contained in a random minterm\footnote{Here, ``random minterm'' means an input from the distribution $\rndy$, which correlates highly with the minterms of $\fnhr$.} of $\fnhr$ is at most $r^{\card{A}}$, where $r = k/n$. Moreover, we observed that $\fnhr$ outputs $0$ with high probability in a $p$-biased distribution (Lemma~\ref{claim:negative_test}), where $p=1/2$. In the rest of the proof, we roughly showed how this implies that DNFs of size approximately $s = c^{c/2}$ and width $w = c/2$ \emph{cannot} approximate $\fnhr$ (Lemma~\ref{lemma:hr_approx_err}).\footnote{ Formally, our approximators have at most $O(c \log n)^\ell$ terms of width $\ell$ (Lemma~\ref{lemma:hr_bound_minterms}), and no terms of width larger than $c/2$ (by trimming). 
} We also observed that we can approximate the $\lor$ and $\land$ of width-$w$, size-$s$ DNFs by another width-$w$, size-$s$ DNF, bounding the error of approximation by $r^{c/2} \cdot c^{c/2}$. This was proved by noting that conjunctions of width $c/2$ accept a positive input with probability at most $r^{c/2}$, and there are at most $c^{c/2}$ of them. When $c \approx k \approx \sqrt{n}$, we have $(rc)^{c/2} = 2^{-\Omega(c)}$, and thus \emph{we can} approximate circuits of size $2^{o(c)}$ with width-$w$, size-$s$ DNFs (Lemma~\ref{lemma:hr_approx_correct}). This yields the lower bound. There are two essential numerical components in the proof. First, the ``spreadness rate'' of the function $\fnhr$. A simple counting argument can show that the upper bound of $(k/n)^{\card{A}}$ to the probability $\Pr[x_A \leq \rndy]$ is nearly best possible when the support of $\rndy$ is contained in $\blt^n_{= k}$ and $k = o(n)$. So this can hardly be improved with the choice of another Boolean function. Secondly, the bounds for the size and width of the DNF approximators come from the robust sunflower lemma (Theorem~\ref{thm:improved_sunflower}), which was used to employ the approximation method on $p$-biased distributions. Since the bound of Theorem~\ref{thm:improved_sunflower} is essentially best possible as well, as observed in~\cite{alweiss2019improved}, we cannot hope to get better approximation bounds on a $p$-biased distribution from sunflowers. Therefore, there does not seem to be much room for getting better lower bounds for monotone circuits using the classical approximation method with sunflowers, if we use $p$-biased distributions. To get beyond $2^{\Omega(\sqrt{n})}$, another approach seems to be required. \section{Lower Bound for $\kclq$} \label{sec:clique} Recall that the Boolean function $\kclq : \{0,1\}^{\binom{n}{2}} \to \{0,1\}$ receives a graph on $n$ vertices as an input and outputs a $1$ if this graph contains a clique on $k$ vertices. 
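For concreteness, here is a brute-force reference evaluation of $\kclq$ (an illustrative sketch of ours, with graphs represented as sets of unordered pairs; the lower bound below concerns monotone circuit size, not the running time of this procedure):

```python
from itertools import combinations

def K(A):
    """The edge set of K_A: a clique on the vertex set A and no other edges."""
    return {frozenset(e) for e in combinations(sorted(A), 2)}

def k_clique(n, k, edges):
    """CLIQUE(n, k) by brute force: 1 iff the graph ([n], edges) has a k-clique."""
    return 1 if any(K(A) <= edges for A in combinations(range(n), k)) else 0
```

For example, $K(\{0,1,2\})$ is accepted by $k_clique(5, 3, \cdot)$, while a graph consisting of two disjoint edges is not.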
In this section, we prove an $n^{\Omega(\delta^2 k)}$ lower bound on the monotone circuit size of $\kclq$ for $k \leq n^{(1/3)-\delta}$. We note that the first superpolynomial lower bound for the monotone circuit complexity of $\kclq$ was given by Razborov \cite{razborov_boolean_85}, who proved an $n^{\Omega(k)}$ lower bound for $k \leq \log n$. Soon after, Alon and Boppana \cite{alon_boppana} proved an $n^{\Omega(\sqrt{k})}$ lower bound for $\kclq$ when $k \le n^{2/3 - o(1)}$. This exponential lower bound was better than Razborov's, as it could be applied to a larger range of $k$, but it was short of the obvious upper bound of $n^{O(k)}$. Our result finally closes that gap, by proving that the monotone complexity of $\kclq$ is $n^{\Theta(k)}$ even for large $k$. As in Section~\ref{sec:harnik_raz}, we will follow the approximation method. However, instead of using sunflowers as in \cite{razborov_boolean_85,alon_boppana} or robust sunflowers as in \cite{rossman_kclq}, we introduce a notion of \emph{clique-sunflowers} and employ it to bound the errors of approximation. \subsection{Notation for this section} In this section, we will often refer to graphs on $n$ vertices and Boolean strings in $\blt^{\binom{n}{2}}$ interchangeably. For $A \subseteq [n]$, let $K_A$ be the graph on $n$ vertices with a clique on $A$ and no other edges. When $\card{A} \leq 1$, the graph $K_A$ is the empty graph with $n$ vertices and 0 edges (corresponding to the Boolean string in which all $\binom{n}{2}$ entries are equal to~$0$). The \emph{size} of $K_A$ is $\card{A}$. Let also $\clqind{A} : \blt^{\binom{n}{2}} \to \blt$ denote the indicator function of containing $K_A$, which satisfies \begin{equation*} \clqind{A}(G) = 1 \iff K_A \sseq G. \end{equation*} Functions of the form $\clqind{A}$ are called \defx{clique-indicators}. Moreover, if $\abs{A} = \ell$, we say that $\clqind{A}$ is a clique-indicator of \emph{size} equal to $\ell$.
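A small identity implicit in this notation, and used later for clique-sunflowers, is $K_A \cap K_B = K_{A \cap B}$: an edge lies in both cliques exactly when both of its endpoints lie in $A \cap B$. A quick check (an illustrative sketch of ours):

```python
from itertools import combinations

def K(A):
    """The edge set of the clique K_A on the vertex set A."""
    return {frozenset(e) for e in combinations(sorted(A), 2)}

def intersection_identity(A, B):
    """Check that K_A intersected with K_B equals K_{A ∩ B}."""
    return K(A) & K(B) == K(A & B)
```

The identity also covers the degenerate cases: when $\abs{A \cap B} \leq 1$ both sides are the empty edge set.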
When $\card{A} \leq 1$, the function $\clqind{A}$ is the constant function $\one$. For $p \in (0,1)$, we denote by $\mb G_{n,p}$ the Erd\H{o}s-R\'enyi random graph, a random graph on $n$ vertices in which each edge appears independently with probability $p$. Let $f : \blt^{\binom{n}{2}} \to \blt$ be monotone and suppose $\ell \in \set{1,\dots, \delta k}$. We define \[ \mintms_\ell(f) \defeq \{A \in \textstyle\binom{[n]}{\ell} : f(K_A) = 1 \text{ and } f(K_{A\setminus\{a\}}) = 0\text{ for all } a \in A\}. \] Elements of $\mintms_\ell(f)$ are called \emph{$\ell$-clique-minterms} of $f$. \subsection{Clique-sunflowers} Here we introduce the notion of \emph{clique-sunflowers}, which is analogous to that of robust sunflowers for ``clique-shaped'' set systems. \begin{definition}[Clique-sunflowers] \label{def:clique_sunflower} Let $\eps, p \in (0,1)$. Let $\cals$ be a family of subsets of $[n]$ and let $Y := \bigcap \cals$. The family $\cals$ is called a $(p, \eps)$-\emph{clique-sunflower} if \begin{equation*} \Pr \left[ \exists A \in \cals: K_A \sseq \mb G_{n,p} \cup K_Y \right] > 1-\eps. \end{equation*} Equivalently, the family $\cals$ is a clique-sunflower if the family $\set{K_A : A \in \cals} \sseq 2^{\binom{[n]}{2}}$ is a $(p,\eps)$-robust sunflower, since $K_A \cap K_B = K_{A \cap B}$. \end{definition} Though clique-sunflowers may seem similar to regular sunflowers, the importance of this definition is that it allows us to exploit the ``clique-shaped'' structure of the sets of the family, and thus obtain an asymptotically better upper bound on the size of families that do not contain a clique-sunflower. \begin{lemma}[Clique-sunflower lemma] \label{lemma:clique_sunflower} Let $\eps < e^{-1/2}$ and let $\cals \sseq \binom{[n]}{\ell}$. If the family $\cals$ satisfies $\abs{\cals} > \ell!(2\ln(1/\eps))^\ell(1/p)^{\binom{\ell}{2}}$, then $\cals$ contains a $(p,\eps)$-clique-sunflower.
\end{lemma} Observe that, whereas the bounds for ``standard'' robust sunflowers (Theorems~\ref{thm:approx_sunflower} and \ref{thm:improved_sunflower}) would give us an exponent of $\binom{\ell}{2}$ on the $\log(1/\eps)$ factor, Lemma~\ref{lemma:clique_sunflower} gives us only an $\ell$ in the exponent. As we shall see, this is asymptotically better for our choice of parameters. We defer the proof of Lemma~\ref{lemma:clique_sunflower} to Section~\ref{sec:proof_clqsf}. The proof is based on an application of Janson's inequality \cite{janson1990poisson}, as in the original robust sunflower lemma of \cite{rossman_kclq} (Theorem~\ref{thm:approx_sunflower}). \subsection{Test distributions} We now define the positive and negative test distributions. First, we fix some parameters that will be used throughout the proof. Fix $\delta \in (0,1/3)$. Let \begin{equation} \label{eq:clq_p} k = n^{1/3-\delta} \quad\text{ and }\quad p := n^{-2/(k-1)}. \end{equation} For simplicity, we will assume from now on that $\delta k$ and $\delta k/2$ are integers. \begin{remark}[Parameters are now fixed] \label{rk:clq_parameters} From now on until the end of Section~\ref{sec:clq_lb}, the symbols $p, \delta$ and $k$ refer to fixed parameters, and will always unambiguously refer to the values just fixed. This will only change in Section~\ref{sec:proof_clqsf}, which is independent of the proof of the lower bound for $\kclq$, and in which we will permit ourselves to reuse some of these symbols for other purposes. This means that, whenever $p, \delta$ and $k$ appear in the following discussion, the reader must bear in mind that $p=n^{-2/(k-1)}$, $\delta$ is a fixed number inside $(0,1/3)$ and $k$ is fixed to be $k=n^{1/3-\delta}$. \end{remark} We observe that the probability that $\mb G_{n,p}$ has a $k$-clique is bounded away from~$1$. \begin{lemma} \label{lemma:clique_random_graph} We have $\Pr[\ \mb G_{n,p} \text{ contains a $k$-clique}\ ] \le 3/4$.
\end{lemma} \begin{proof} There are $\binom{n}{k} \le (en/k)^k$ potential $k$-cliques, each present in $\mb G_{n,p}$ with probability $p^{\binom{k}{2}} = n^{-k}$. By a union bound, we have $\Pr[\ \mb G_{n,p} \text{ contains a $k$-clique}\ ] \le (en/k)^k \cdot n^{-k} = (e/k)^k \le (e/3)^3 \le 3/4$. \end{proof} \begin{definition} \label{def:clq_test} Let $\rndy$ be the uniform random graph chosen from all possible $K_A$, where $\card{A} = k$. In other words, the distribution $\rndy$ samples a random minterm of $\kclq$. We call $\rndy$ the \emph{positive test distribution}. Let also $\rndn := \mb G_{n,p}$. We call $\rndn$ the \emph{negative test distribution}. \end{definition} From Lemma~\ref{lemma:clique_random_graph}, we easily obtain the following corollary. \begin{corollary} \label{cor:clique_random_graph} We have $ \Pr[\kclq(\rndy)=1] + \Pr[\kclq(\rndn)=0] \geq 5/4. $ \end{corollary} We now prove an analogous result to that of Lemma~\ref{lemma:hr_trimmed_prob}, which shows that the positive distribution $\rndy$ is unlikely to contain a large fixed clique. \begin{lemma} \label{lemma:clq_spread} For every $\ell \leq k$ and $A \sseq [n]$ such that $\abs{A}=\ell$, we have \begin{equation*} \Pr[K_A \leq \rndy] \leq \left( k/n \right)^{\ell}. \end{equation*} \end{lemma} \begin{proof} The distribution $\rndy$ samples a set $\rndB$ uniformly at random from $\binom{[n]}{k}$ and returns the graph $K_{\rndB}$. Note that $K_A \sseq K_{\rndB}$ if and only if $A \sseq \rndB$. We have \begin{equation*} \Pr[K_A \leq \rndy] = \Pr[A \sseq \rndB] = \frac{\binom{n-\ell}{k-\ell}}{\binom{n}{k}} \leq \left( \frac{k}{n} \right)^{\ell}. \qedhere \end{equation*} \end{proof} \subsection{A closure operator} \label{sec:clique_closed} As in Section~\ref{sec:hr_closure}, we define here a closure operator in the lattice of monotone Boolean functions. We will again prove that the closure of a function will be a good approximation for it on the negative test distribution.
However, unlike Section~\ref{sec:hr_closure}, instead of bounding the set of minterms, we will bound the set of ``clique-shaped'' minterms, as we shall see. Finally, we will observe that input functions are also closed. Henceforth, we fix the error parameter \begin{equation} \label{eq:clq_eps} \eps := n^{-k}. \end{equation} \begin{definition}[Closed functions] We say that $f : \blt^{\binom{n}{2}} \to \blt$ is \emph{closed} if, for every $A \sseq [n]$ such that $\abs{A} \in \set{2, \dots, \delta k}$, we have \[ \Pr[\ f(\rndn \lor K_A) = 1\ ] > 1 - \eps \ \Longrightarrow\ f(K_A) = 1. \] \end{definition} \begin{remark} [On the parametrization of closedness] \label{rk:clq_closed} Similarly to the Harnik-Raz case (see Remark~\ref{rk:hr_closed}), the definition of a \emph{closed} function depends on three parameters: the probability $p$, which controls the distribution $\rndn$ (as discussed in Definition~\ref{def:clq_test}), the parameter $\eps$, defined in~(\ref{eq:clq_eps}), and the parameter $k$. Since all of these three parameters are fixed until the end of Section~\ref{sec:clq_lb}~(see Remark~\ref{rk:clq_parameters}), and no other reference to closed functions will be made after that, it is safe to omit them without risk of confusion. Therefore, we will henceforth say that some function is \emph{closed} without any further specification about the parameters. However, the reader must bear in mind that, whenever a function is said to be \emph{closed}, the \emph{fixed} parameters $p, \eps$ and $k$ are in view. \end{remark} \begin{remark} [Definitions of closedness compared] Definition~\ref{def:clq_closure} bears great resemblance to Definition~\ref{def:hr_closure}, which also talks about a notion of \emph{closed monotone functions} in the context of lower bounds for the function of Harnik and Raz.
Apart from the different parametrizations, the main difference between those two definitions is that, whereas Definition~\ref{def:hr_closure} looks into \emph{all} inputs of Hamming weight at most $c$, here we only care about \emph{clique-shaped} inputs of size at most $\delta k$. \end{remark} As before, we can define the closure of a monotone Boolean function $f$. \begin{definition}[Closure operator] \label{def:clq_closure} Let $f$ be a monotone Boolean function. We denote by $\cl(f)$ the unique minimal closed monotone Boolean function such that $f \leq \cl(f)$. \end{definition} \begin{remark}[On closure] \label{rk:clq_closure} We note again that $\cl(f)$ is well-defined (the same arguments of Remark~\ref{rk:hr_closure} apply here) and remark that its definition also depends on the parameters $p, \eps$ and $k$ (see Remark~\ref{rk:clq_closed}), which are fixed throughout the proof, and therefore can be safely omitted. \end{remark} \begin{lemma}[Approximation by closure] \label{lemma:clique_error_closure} For every monotone $f : \blt^{\binom{n}{2}} \to \blt$, we have \begin{equation*} \Pr \left[ f(\rndn) = 0 \text{ and } \cl(f)(\rndn) = 1 \right] \le n^{-(2/3)k}. \end{equation*} \end{lemma} \begin{proof} We repeat the same argument as that of Lemma~\ref{lemma:hr_error_closure}. Since there are at most $n^{\delta k}$ graphs $K_A$ such that $\card{A} \leq \delta k$ and $\eps = n^{-k}$, the final bound then becomes $n^{-k} \cdot n^{\delta k} \leq n^{-(2/3)k}$. \end{proof} By employing the clique-sunflower lemma (Lemma~\ref{lemma:clique_sunflower}), we are able to bound the set of $\ell$-clique-minterms of closed monotone functions. \begin{lemma}[Closed functions have few minterms] \label{lemma:bound_minterms} If a monotone function $f : \blt^{\binom{n}{2}} \to \blt$ is closed, then, for all $\ell \in \set{2,\dots, \delta k}$, we have \[ \abs{\mintms_\ell(f)} \le n^{2\ell/3}.
\] \end{lemma} \begin{proof} Recall that $p = n^{-2/(k-1)}$ and $\eps = n^{-k}$ (see~(\ref{eq:clq_p}) and (\ref{eq:clq_eps})). Applying the same strategy of Lemma~\ref{lemma:hr_bound_minterms}, replacing the application of Theorem~\ref{thm:improved_sunflower} (robust sunflower theorem) by Lemma~\ref{lemma:clique_sunflower} (clique-sunflower lemma), we obtain \begin{align*} \abs{\mintms_\ell(f)} &\leq \ell!(2\log(1/\eps))^\ell(1/p)^{\binom{\ell}{2}} \leq (2\ell k \log n)^\ell \cdot p^{-\binom{\ell}{2}} \\ &\leq (2\delta k^2 \log n)^\ell \cdot n^{2\binom{\ell}{2}/(k-1)} \leq (n^{2/3-2\delta} \log n)^\ell \cdot n^{\delta \ell} \leq n^{2\ell/3}. \qedhere \end{align*} \end{proof} \begin{lemma}[Input functions are closed] \label{lemma:clq_input_approx} Let $i,j \in [n]$ be such that $i \neq j$. For large enough $n$, the Boolean function $\clqind{\set{i,j}}$ is closed. \end{lemma} \begin{proof} Fix $i,j \in [n]$ such that $i \neq j$. Let $A \sseq [n]$ be such that $\card{A} \leq \delta k$ and suppose that $\clqind{\set{i,j}}(K_A) = 0$. Note that $\clqind{\set{i,j}}(K_A) = 0$ is equivalent to $\set{i,j} \not\sseq A$. This implies that $\set{i,j}$ is an edge of $\rndn \lor K_A$ if and only if $\set{i,j}$ is an edge of $\rndn$. Therefore, we have \begin{align*} \Pr[\clqind{\set{i,j}}(\rndn \lor K_A)=1] &= \Pr[\clqind{\set{i,j}}(\rndn)=1] \\& = \Pr[\set{i,j} \text{ is an edge of } \mb G_{n,p}] \\& = n^{-2/(k-1)}, \end{align*} since $\rndn = \mb G_{n,p}$ and $p = n^{-2/(k-1)}$ (see (\ref{eq:clq_p}), Remark~\ref{rk:clq_parameters} and Definition~\ref{def:clq_test}). It now suffices to show that, for large enough $n$, we have $p \leq 1-\eps=1-n^{-k}$ (recall from~(\ref{eq:clq_eps}) that $\eps = n^{-k}$). For convenience, let $\alpha = 1/3-\delta$. Note that $k=n^{\alpha}$. For large enough $n$, we have \begin{equation*} \frac{2 \cdot \log n}{n^{\alpha}-1} \geq n^{-n^\alpha} + n^{-2n^\alpha}.
\end{equation*} Using the inequality $\log(1-x) \geq -x -x^2$ for $x \in [0,1/2]$, we get \begin{equation*} \frac{2 \cdot \log n}{k-1} = \frac{2 \cdot \log n}{n^{\alpha}-1} \geq n^{-n^\alpha} + n^{-2n^\alpha} \geq -\log(1-n^{-n^\alpha}) = -\log(1-n^{-k}). \end{equation*} Therefore, we have \begin{equation*} n^{-2/(k-1)} \leq 1-n^{-k}, \end{equation*} and we conclude that $\clqind{\set{i,j}}$ is closed. \end{proof} \subsection{Trimmed monotone functions} In this section, we again define a trimming operation for Boolean functions and prove bounds analogous to those of Section~\ref{sec:hr_trimmed}. \begin{definition}[Clique-shaped and trimmed functions] We say that a function $f : \blt^{\binom{n}{2}} \to \blt$ is \emph{clique-shaped} if, for every minterm $x$ of $f$, there exists $A \sseq [n]$ such that $x = K_A$. Moreover, we say that $f$ is \emph{trimmed} if $f$ is clique-shaped and all the clique-minterms of $f$ have size at most~$\delta k/2$. For a clique-shaped function $f$, we define the trimming operation $\trim(f)$ as follows: \begin{equation*} \trim(f) := \bigvee_{\ell = 1}^{\delta k/2} \bigvee_{A \in \mintms_{\ell}(f)} \clqind{A}. \end{equation*} That is, the $\trim$ operation takes out from $f$ all the clique-indicators of size larger than $\delta k/2$, yielding a trimmed function. \end{definition} \begin{remark}[Parametrization of $\trim(\cdot)$ and other remarks] \label{rk:clq_trim} Analogously to the Harnik-Raz case (see Remark~\ref{rk:hr_trim}), the definition of trimmed functions depends on the choice of the parameters $\delta$ and $k$. As these parameters are fixed (see Remark~\ref{rk:clq_parameters}), the operator $\trim(\cdot)$ is well-defined. Moreover, if all clique-minterms of $f$ have size larger than $\delta k/2$ (i.e., if $\mintms_{\ell}(f) = \emptyset$ for all $\ell \in [\delta k/2]$), then $\trim(f)$ is the constant function that outputs 0.
Finally, if $f$ is the constant function $\one$, then $\trim(f) = \one$, because $\one$ contains a clique-minterm of size equal to 1 (a clique containing one vertex and no edges). \end{remark} Imitating the proofs of Lemmas~\ref{lemma:hr_compute_error} and~\ref{lemma:hr_error_trimmed}, replacing Lemma~\ref{lemma:hr_trimmed_prob} by Lemma~\ref{lemma:clq_spread}, we may now obtain the following lemmas. \begin{lemma} [Trimmed functions are inaccurate in the positive distribution] \label{lemma:clique_shaped_error} If a monotone function $f : \blt^{\binom{n}{2}} \to \blt$ is a trimmed clique-shaped function such that $f \neq \one$, then \begin{equation*} \Pr \left[ f(\rndy) = 1 \right] \leq \sum_{\ell = 2}^{\delta k/2} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(f)}. \end{equation*} \end{lemma} \begin{lemma}[Approximation by trimming] \label{lemma:clique_error_trimmed} Let $f : \blt^{\binom{n}{2}} \to \blt$ be a clique-shaped monotone function, all of whose clique-minterms have size at most $\delta k$. We have \begin{equation*} \Pr \left[ f(\rndy) = 1 \text{ and } \trim(f)(\rndy) = 0 \right] \leq \sum_{\ell={\delta k/2}}^{\delta k} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(f)}. \end{equation*} \end{lemma} \subsection{Approximators} As in Section~\ref{sec:hr_approximators}, we will consider a set of \emph{approximators} $\cala$. Let $$\cala := \{\trim(\cl(f)) : f \in \blt^{\binom{n}{2}} \to \blt \text{ is monotone and clique-shaped}\}.$$ Functions in $\cala$ are called \emph{approximators}. Note that every function in $\cala$ is clique-shaped and is the trimming of a closed function. Moreover, observe that every edge-indicator $\clqind{\set{u,v}}$ belongs to $\cala$, since every edge-indicator is closed by Lemma~\ref{lemma:clq_input_approx}. Let $f, g \in \cala$ be such that $f = \bigvee_{i=1}^t \clqind{A_i}$ and $g = \bigvee_{j=1}^s \clqind{B_j}$. We define $\bigwedge(f,g) := \bigvee_{i=1}^t \bigvee_{j=1}^s \clqind{A_i \cup B_j}$.
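The behaviour of $\bigwedge(f,g)$ can be checked mechanically on toy instances. The following Python sketch is our own illustration (not part of the formal development): it represents a clique-shaped function by the list of vertex sets of its clique-indicators, and verifies on hypothetical small examples that $\bigwedge(f,g)$ agrees with $f \land g$ on all clique-shaped inputs $K_A$, while being strictly smaller on a non-clique-shaped input.

```python
from itertools import combinations

def K(A):
    """The graph K_A: the set of edges of a clique on vertex set A."""
    return {frozenset(e) for e in combinations(sorted(A), 2)}

def evaluate(minterms, graph):
    """Evaluate a clique-shaped function, given as a list of vertex sets:
    it accepts `graph` iff the graph contains K_A for some listed A."""
    return any(K(A) <= graph for A in minterms)

def wedge(f_minterms, g_minterms):
    """The operation \u22c0(f,g): replace each pair (A_i, B_j) by A_i \u222a B_j."""
    return [set(A) | set(B) for A in f_minterms for B in g_minterms]

# Hypothetical example: f = clqind{1,2,3} v clqind{4,5}, g = clqind{2,3,4}.
f = [{1, 2, 3}, {4, 5}]
g = [{2, 3, 4}]
fg = wedge(f, g)  # clique-indicators of {1,2,3,4} and {2,3,4,5}

# On clique-shaped inputs K_A, \u22c0(f,g) agrees with f ^ g.
for A in [{1, 2, 3, 4}, {2, 3, 4, 5}, {1, 2, 3}, {1, 2, 3, 4, 5}]:
    assert evaluate(fg, K(A)) == (evaluate(f, K(A)) and evaluate(g, K(A)))

# In general \u22c0(f,g) <= f ^ g, and the inequality can be strict on
# inputs that are not clique-shaped:
G = K({1, 2, 3}) | K({2, 3, 4})
assert evaluate(f, G) and evaluate(g, G) and not evaluate(fg, G)
```

The last assertion exhibits the point of the definition: on the union of two cliques (which is not of the form $K_A$), $f \land g$ accepts but $\bigwedge(f,g)$ does not, which is harmless because the positive test distribution only produces clique-shaped inputs.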
We also define operations $\sqcup, \sqcap: \cala \times \cala \to \cala$ as follows: \begin{align*} f \sqcup g &:= \trim(\cl(f \vee g)), \\ f \sqcap g &:= \trim\left( \cl \left( \bigwedge(f,g) \right) \right). \end{align*} It's easy to see that, if $f,g \in \cala$, then $f \sqcup g \in \cala$. To see that $f \sqcap g \in \cala$, note that $\bigwedge(f,g)$ is also a monotone clique-shaped function. \begin{remark}[Reason for definition of $\sqcap$] \label{rk:clq_approx_and} The reason for defining $\sqcap$ in that way is as follows. First observe that $f \land g = \bigvee_{i = 1}^t \bigvee_{j=1}^s (\clqind{A_i} \land \clqind{B_j})$. We simply replace each $\clqind{A_i} \land \clqind{B_j}$ with $\clqind{A_i \cup B_j}$, thus obtaining $f \sqcap g$. In general, since $\clqind{A_i \cup B_j}$ is a larger conjunction than $\clqind{A_i} \land \clqind{B_j}$, we have $\bigwedge(f,g) \leq f \land g$. However, note that, for every $A \sseq [n]$, we have $\bigwedge(f,g)(K_A) = (f \land g)(K_A)$. Thus, the transformation from $f \land g$ to $\bigwedge(f,g)$ incurs no mistakes in the positive distribution $\rndy$. \end{remark} If $C$ is a monotone $\set{\vee, \wedge}$-circuit, let $C^\cala$ be the corresponding $\set{\sqcup, \sqcap}$-circuit, obtained by replacing each $\lor$-gate by a $\sqcup$-gate, and each $\land$-gate by a $\sqcap$-gate. Note that $C^\cala$ computes an approximator. \subsection{The lower bound} \label{sec:clq_lb} In this section we obtain the lower bound for the clique function. Recall that $k = n^{1/3-\delta}$. We will prove that the monotone complexity of $\kclq$ is $n^{\Omega(\delta^2 k)}$. Repeating the same arguments of Lemmas~\ref{lemma:hr_approx_err} and~\ref{lemma:hr_approx_correct}, we obtain the following analogous lemmas. \begin{lemma}[Approximators make many errors] \label{lemma:clique_approx_err} For every $f \in \cala$, we have \begin{equation*} \Pr[f(\rndy)=1] + \Pr[f(\rndn)=0] \leq 1+o(1).
\end{equation*} \end{lemma} \begin{proof} Let $f \in \cala$. By definition, there exists a closed function $h$ such that $f = \trim(h)$. Observe that $\mintms_\ell(f) \sseq \mintms_\ell(h)$ for every $\ell \in \set{2,\dots,\delta k/2}$. By Lemmas~\ref{lemma:bound_minterms} and~\ref{lemma:clique_shaped_error}, if $f \in \cala$ is such that $f \neq \one$, then \begin{equation*} \Pr[f(\rndy)=1] \leq \sum_{\ell=2}^{\delta k/2} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(h)} \leq \sum_{\ell=2}^{\delta k/2} \left(\frac{k}{n^{1/3}}\right)^\ell \leq \sum_{\ell=2}^{\delta k/2} n^{-\delta \ell} = o(1). \end{equation*} If $f = \one$, then $\Pr[f(\rndn)=0] = 0$, and the desired inequality holds trivially. Therefore, for every $f \in \cala$ we have $ \Pr[f(\rndy) = 1] + \Pr[f(\rndn) = 0] \leq 1 + o(1). $ \end{proof} \begin{lemma}[$C$ is well-approximated by $C^\cala$] \label{lemma:clique_approx_correct} Let $C$ be a monotone circuit. We have \begin{equation*} \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] + \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] \leq \size(C)\cdot O(n^{-\delta^2 k / 2}). \end{equation*} \end{lemma} \begin{proof} To bound the approximation errors under the distribution $\rndy$, first note that, if $f, g \in \cala$, then all the clique-minterms of both $f \lor g$ and $f \land g$ have size at most $\delta k$. Moreover, if $(f \lor g)(x)=1$ but $(f \apor g)(x) = 0$, then $ \trim(\cl(f \lor g))(x) \neq \cl(f \lor g)(x) $. Therefore, we obtain by Lemmas~\ref{lemma:bound_minterms} and~\ref{lemma:clique_error_trimmed} that, for $f, g \in \cala$, we have \begin{align*} \Pr \left[ (f \lor g)(\rndy) = 1 \text{ and } (f \apor g)(\rndy) = 0 \right] &\leq \sum_{\ell = \delta k /2}^{\delta k} \left( \frac{k}{n} \right)^{\ell} \abs{\mintms_\ell(\cl(f \lor g))} \\ &\le \sum_{\ell = \delta k /2}^{\delta k} n^{-\delta \ell} = O(n^{-\delta^2 k / 2}). \end{align*} As observed in Remark~\ref{rk:clq_approx_and}, we have $\bigwedge(f,g)(\rndy) = (f \land g)(\rndy)$.
Thus, once again, the only approximation mistakes incurred by replacing a $\land$-gate by a $\sqcap$-gate come from the trimming operation. Again, we conclude $$ \Pr \left[ (f \wedge g)(\rndy) = 1 \text{ and } (f \apand g)(\rndy) = 0 \right] = O(n^{-\delta^2 k / 2}), $$ which implies \begin{equation*} \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] \leq \size(C) \cdot O(n^{-\delta^2 k / 2}). \end{equation*} Similarly, to bound the approximation errors under $\rndn$, note that $(f \lor g)(x)=0$ and $(f \apor g)(x) = 1$ only if $ \cl(f \lor g)(x) \neq (f \lor g)(x) $. Therefore, we obtain by Lemma~\ref{lemma:clique_error_closure} that, for $f, g \in \cala$, we have \begin{align*} \Pr \left[ (f \lor g)(\rndn) = 0 \text{ and } (f \apor g)(\rndn) = 1 \right] \leq n^{-(2/3)k}. \end{align*} Moreover, note that $\bigwedge(f,g) \leq f \land g$. As $f \sqcap g = \trim(\cl(\bigwedge(f,g)))$, we obtain that $(f \land g)(x)=0$ and $(f \apand g)(x) = 1$ only if $ \cl(\bigwedge(f,g))(x) > \bigwedge(f,g)(x) $. Therefore, we also have \begin{align*} \Pr \left[ (f \land g)(\rndn) = 0 \text{ and } (f \apand g)(\rndn) = 1 \right] \leq n^{-(2/3)k}. \end{align*} By the union bound, we conclude: \begin{equation*} \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] \leq \size(C)\cdot n^{-(2/3)k}. \end{equation*} This finishes the proof. \end{proof} We now prove the lower bound for the clique function. \begin{theorem} \label{thm:clique_lower_bound} Let $\delta \in (0,1/3)$ and $k=n^{1/3-\delta}$. The monotone circuit complexity of $\kclq$ is $\Omega(n^{\delta^2 k/2})$. \end{theorem} \begin{proof} Let $C$ be a monotone circuit computing $\kclq$.
For large $n$, we obtain from Corollary~\ref{cor:clique_random_graph} and Lemmas~\ref{lemma:clique_approx_err} and~\ref{lemma:clique_approx_correct} \begin{align*} 5/4 &\leq \Pr[\kclq(\rndy)=1] + \Pr[\kclq(\rndn)=0] \\& \leq \Pr[C(\rndy) = 1 \text{ and } C^\cala(\rndy) = 0] + \Pr[C^\cala(\rndy)=1] \\& \quad + \Pr[C(\rndn) = 0 \text{ and } C^\cala(\rndn) = 1] + \Pr[C^\cala(\rndn)=0] \\& \leq 1 + o(1) + \size(C) \cdot O(n^{-\delta^2 k / 2}). \end{align*} This implies $\size(C) = \Omega(n^{\delta^2 k/2})$. \end{proof} \subsection{Proof of Lemma~\ref{lemma:clique_sunflower} (Clique-sunflowers)} \label{sec:proof_clqsf} In this section, we give the proof of Lemma~\ref{lemma:clique_sunflower}. The proof is essentially the same as the one given by Rossman for Theorem~\ref{thm:approx_sunflower} in~\cite{rossman_kclq}. We will rely on an inequality due to Janson~\cite{janson1990poisson} (see also Theorem~2.18 in~\cite{random_graphs}). \begin{lemma}[Janson's inequality~\cite{janson1990poisson}] \label{lemma:janson_inequality} Let $\calf$ be a nonempty hypergraph on $[n]$ and let $\rndW \sseq_p [n]$. Define $\mu$ and $\Delta$ in the following way: \begin{align*} \mu &:= \quad\;\; \sum_{F \in \calf} \quad\;\; \Pr[F \sseq \rndW], \\ \Delta &:= \sum_{\substack{ F, H \in \calf \\ F \cap H \neq \emptyset }} \Pr[F \cup H \sseq \rndW]. \end{align*} Then we have \begin{equation*} \Pr[\forall F \in \calf : F \not\sseq \rndW] \leq \exp\{-\mu^2/\Delta\}. \end{equation*} \end{lemma} The following estimates appear in an unpublished note due to Rossman~\cite{rossman_note_sunflowers}, and a slightly weaker form appears implicitly in~\cite{rossman_kclq}. We reproduce the proof for completeness. \begin{lemma}[Lemma~8 of~\cite{rossman_note_sunflowers}] Let $s_0(t), s_1(t), \dots$ be the sequence of polynomials defined by \begin{equation*} s_0(t) := 1 \quad\text{and}\quad s_\ell(t) := t \sum_{j=0}^{\ell-1} \binom{\ell}{j} s_j(t).
\end{equation*} For all $t > 0$, we have $ s_\ell(t) \leq \ell!(t+1/2)^\ell. $ \end{lemma} \begin{proof} We first prove by induction on $\ell$ that $ s_\ell(t) \leq \ell!(\log(1/t+1))^{-\ell} $, as follows: \begin{align*} s_\ell(t) = t \sum_{j=0}^{\ell-1} \binom{\ell}{j} s_j(t) &\leq t \sum_{j=0}^{\ell-1} \binom{\ell}{j} j!(\log(1/t+1))^{-j} \\&= t \ell!(\log(1/t+1))^{-\ell} \sum_{j=0}^{\ell-1} \frac{(\log(1/t+1))^{\ell-j}}{(\ell-j)!} \\&\leq t \ell!(\log(1/t+1))^{-\ell} \left( -1+ \sum_{j=0}^{\infty} \frac{(\log(1/t+1))^{j}}{j!} \right) \\&= t\ell!(\log(1/t+1))^{-\ell} (-1+\exp(\log(1/t+1))) \\&= \ell!(\log(1/t+1))^{-\ell}. \end{align*} To conclude the proof, we apply the inequality $1/\log(1/t+1) < t+1/2$, valid for all $t > 0$. \end{proof} We will also need the following auxiliary definition. \begin{definition} \label{def:pq_clq_sf} Let $\eps,p,q \in (0,1)$. Let $\rndU_{n,q} \subseteq [n]$ be a $q$-random subset of $[n]$ independent of $\rndG_{n,p}$. Let $\cals$ be a family of subsets of $[n]$ and let $B := \bigcap \cals$. The family $\cals$ is called a $(p, q, \eps)$-\emph{clique-sunflower} if \[ \Pr \left[ \exists A \in \cals: \clq{A} \subseteq \rndG_{n,p} \cup \clq{B} \text{ \rm and } A \subseteq \rndU_{n,q} \cup B \right] > 1-\eps. \] The set $B$ is called the \defx{core}. \end{definition} Clearly, a $(p,1,\eps)$-clique sunflower is a $(p,\eps)$-clique sunflower. By taking $q=1$ in the following lemma, and observing that $s_\ell(\log(1/\eps)) \leq \ell!(\log(1/\eps)+1/2)^\ell \leq \ell!(2\log(1/\eps))^\ell$ for $\eps \leq e^{-1/2}$, we obtain Lemma~\ref{lemma:clique_sunflower}. \begin{lemma} For all $\ell \in \{1,\dots,n\}$ and $\cals \subseteq \binom{[n]}{\ell}$, if $\abs{\cals} > s_\ell(\log(1/\eps)) \cdot (1/q)^\ell(1/p)^{\binom{\ell}{2}}$, then $\cals$ contains a $(p,q,\eps)$-clique sunflower. \end{lemma} \begin{proof} By induction on $\ell$.
In the base case $\ell=1$, we have $\abs{\cals} > s_1(\log(1/\eps))/q = \log(1/\eps)/q$, and so, by independence, \begin{align*} \Pr[ \forall A \in \cals: K_A \nsubseteq \mb G_{n,p} \text{ or } A \nsubseteq \mb U_{n,q} ] &= \Pr[ \forall A \in \cals: A \nsubseteq \mb U_{n,q} ]\\ &= \prod_{A \in \cals} \Pr[ A \nsubseteq \mb U_{n,q} ]\\ &= (1-q)^{\abs{\cals}} < (1-q)^{\log(1/\eps)/q} \le e^{-\log(1/\eps)} = \eps. \end{align*} Thus $\cals$ is itself a $(p,q,\eps)$-clique sunflower. Let now $\ell \ge 2$ and assume that the claim holds for every $\ell' \in \set{1,\dots,\ell-1}$. For convenience, let $$c_j := s_j(\log(1/\eps)),$$ for every $j \in \set{0,1,\dots,\ell-1}$. \textit{Case 1.} There exists $j \in \{1,\dots,\ell-1\}$ and $B \in \binom{[n]}{j}$ such that $$|\{A \in \cals : B \subseteq A\}| > c_{\ell-j} (1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}.$$ Let $\calt = \{A \setminus B : A \in \cals \text{ such that } B \subseteq A\} \subseteq \binom{[n]}{\ell-j}$. By the induction hypothesis, there exists a $(p,qp^j,\eps)$-clique sunflower $\calt' \sseq \calt$ with core $D \in \binom{[n] \setminus B}{<\ell -j}$. We will now show that $\cals' := \set{B \cup C : C \in \calt'} \sseq \cals$ is a $(p,q,\eps)$-clique sunflower contained in $\cals$ with core ${B \cup D}$.
We have \begin{alignat*}{2} \Pr[ & \forall {A} \in \cals': \clq{A} \nsubseteq &&\rndG_{n,p} \cup K_{B \cup D} \text{ or } A \nsubseteq \rndU_{n,q} \cup B \cup D ]\\ &= \Pr[ \forall C \in \calt': &&K_{B \cup C} \nsubseteq \rndG_{n,p} \cup K_{B \cup D} \text{ or } B \cup C \nsubseteq \rndU_{n,q} \cup B \cup D ]\\ &= \Pr[ \forall C \in \calt': &&K_{B \cup C} \nsubseteq \rndG_{n,p} \cup K_{B \cup D} \text{ or } C \nsubseteq \rndU_{n,q} \cup D ]\\ &= \Pr[ \forall C \in \calt': &&K_C \nsubseteq \rndG_{n,p} \cup K_D \text{ or } \\ & \ && C \nsubseteq \big\{v \in \rndU_{n,q} : \{v,w\} \in E(\rndG_{n,p}) \text{ for all } w \in B\big\} \cup D ]\\ &\le \Pr[ \forall C \in \calt': &&K_C \nsubseteq \rndG_{n,p} \cup K_D \text{ or } C \nsubseteq \rndU_{n,qp^j} \cup D ]\\ &< \eps.&& \end{alignat*} Therefore, $\cals'$ is a $(p,q,\eps)$-clique sunflower contained in $\cals$. \textit{Case 2.} For all $j \in \{1,\dots,\ell-1\}$ and $B \in \binom{[n]}{j}$, we have $$|\{A \in \cals : B \subseteq A\}| \le c_{\ell-j}(1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}.$$ In this case, we show that $\cals$ is itself a $(p,q,\eps)$-clique sunflower, with core $B = \emptyset$. Let \begin{align*} \mu &\defeq \abs{\cals} q^\ell p^{\binom\ell 2} > c_\ell, \\ \ovl{\Delta} &\defeq \sum_{j=1}^{\ell-1} \sum_{(A,A') \in \cals^2 \,:\, |A \cap A'| = j} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}}. \end{align*} Note that $\ovl{\Delta}$ excludes from the sum the pairs $(A,A')$ such that $A = A'$ (i.e., $j = \ell$), whose terms sum to $\mu$. In other words, the number $\Delta$ of Janson's inequality~(Lemma~\ref{lemma:janson_inequality}) satisfies $\Delta = \mu + \ovl{\Delta}$. Janson's inequality now gives the following bound: \begin{equation}\label{eq:janson} \Pr[\forall A \in \cals: K_A \nsubseteq \mb G_{n,p} \text{ or } A \nsubseteq \mb U_{n,q} ] \le \exp\left(- \frac{\mu^2}{\mu+\ovl{\Delta}} \right).
\end{equation} We bound $\ovl{\Delta}$ as follows: \begin{align*} \ovl{\Delta} &\le \sum_{j=1}^{\ell-1} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}} \sum_{B \in \binom{[n]}{j}} |\{A \in \cals : B \subseteq A\}|^2 \\ &\le \sum_{j=1}^{\ell-1} q^{2\ell-j} p^{2\binom{\ell}{2} - \binom{j}{2}} \sum_{B \in \binom{[n]}{j}} |\{A \in \cals : B \subseteq A\}| \cdot c_{\ell-j}(1/qp^j)^{\ell-j}(1/p)^{\binom{\ell-j}{2}}\\ & \leq q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} c_{\ell-j} \sum_{B \in \binom{[n]}{j}} |\{A \in \cals : B \subseteq A\}|\\ &= q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} c_{\ell-j} \sum_{A \in \cals} \sum_{B \in \binom{A}{j}} 1\\ &= |\cals| q^{\ell}p^{\binom{\ell}{2}} \sum_{j=1}^{\ell-1} \binom{\ell}{j} c_{\ell-j} \\ &= \mu \sum_{j=1}^{\ell-1} \binom{\ell}{j} c_j = \mu \sum_{j=0}^{\ell-1} \binom{\ell}{j} c_j - \mu. \end{align*} Therefore, \[ \frac{\mu^2}{\mu + \ovl{\Delta}} \ge \frac{\mu}{\sum_{j=0}^{\ell-1} \binom{\ell}{j} c_j } = \frac{\mu}{c_\ell/(\log(1/\eps))} > \log(1/\eps). \] Finally, from (\ref{eq:janson}) we get \[ \Pr[\forall A \in \cals: K_A \nsubseteq \mb G_{n,p} \text{ or } A \nsubseteq \mb U_{n,q} ] \le \exp\left(- \frac{\mu^2}{\mu+\ovl{\Delta}} \right) < \eps. \] Therefore, the family $\cals$ is a $(p,q,\eps)$-clique sunflower with an empty core. \end{proof} \section{Monotone arithmetic circuits} \label{sec:arithmetic} In this section, we give a short and simple proof of a truly exponential ($\exp(\Omega(n))$) lower bound for real monotone arithmetic circuits computing a multilinear $n$-variate polynomial. Real monotone arithmetic circuits are arithmetic circuits over the reals that use only positive numbers as coefficients. As we shall see, the lower bound argument holds for a general family of multilinear polynomials constructed in a very natural way from error correcting codes, and the similarity to the hard function used by Harnik and Raz in the Boolean setting is quite evident (see Section~\ref{sec:hr_the_function}).
In particular, our lower bound just depends on the rate and relative distance of the underlying code. We note that exponential lower bounds for monotone arithmetic circuits are not new, and have been known since the 80's with various quantitative bounds. More precisely, Jerrum and Snir proved an $\exp(\Omega(\sqrt{n}))$ lower bound for an $n$-variate polynomial in \cite{JS82}. This bound was subsequently improved to a lower bound of $\exp(\Omega(n))$ by Raz and Yehudayoff in \cite{RY11}, via an extremely clever argument, which relied on deep and beautiful results on character sums over finite fields. A similar lower bound of $\exp(\Omega(n))$ was shown by Srinivasan \cite{S19} using more elementary techniques building on a work of Yehudayoff \cite{Y19}. In a recent personal communication, Igor Sergeev pointed out to us that truly exponential lower bounds for monotone arithmetic circuits had also been proved in the 1980's in the erstwhile Soviet Union by several authors, including the works of Kasim-Zade, Kuznetsov and Gashkov. We refer the reader to \cite{GSergeev2012} for a detailed discussion on this line of work. We show a similar lower bound of $\exp(\Omega(n))$ via a simple and short argument, which holds in a somewhat general setting. Our contribution is just the simplicity, the (lack of) length of the argument and the observation that it holds for families of polynomials that can be constructed from any sufficiently \emph{good} error correcting codes. \begin{definition} [Monotone, multilinear, homogeneous] \label{def:arithmetic_glossary} A real polynomial is said to be \emph{monotone} if all of its coefficients are positive. A real arithmetic circuit is said to be \emph{monotone} if it uses only positive numbers as coefficients. A polynomial $P$ is said to be \emph{multilinear} if the degree of each variable of $P$ is at most $1$ in all of the monomials of $P$. A polynomial $P$ is said to be \emph{homogeneous} if all the monomials of $P$ have the same degree.
An arithmetic circuit $C$ is said to be \emph{homogeneous} (\emph{multilinear}) if the polynomial computed in each of the gates of $C$ is homogeneous (multilinear). \end{definition} \begin{definition}[From sets of vectors to polynomials]\label{def:char poly} Let ${C} \subseteq \F_q^n$ be an arbitrary subset of $\F_q^n$. Then, the polynomial $P_{C}$ is a multilinear homogeneous polynomial of degree $n$ on $qn$ variables $\{x_{i, j} : i \in [q], j \in [n]\}$ and is defined as follows: \[ P_C = \sum_{c \in C} \prod_{j \in [n]} x_{c(j), j} \, . \] Here, $c(j)$ is the $j^{th}$ coordinate of $c$, an element of $\F_q$, which we bijectively identify with the set $[q]$. \end{definition} Here, we will be interested in the polynomial $P_C$ when the set $C$ is a \emph{good} code, i.e., it has high rate and high relative distance. The following observation summarizes the properties of $P_C$ and the relations between the properties of $C$ and $P_C$. \begin{observation}[Codes vs Polynomials]\label{obs:props of poly} Let $C$ be any subset of $\F_q^n$ and let $P_C$ be the polynomial as defined in Definition~\ref{def:char poly}. Then, the following statements are true: \begin{enumerate}[$\bullet$] \item $P_C$ is a multilinear homogeneous polynomial of degree equal to $n$ with every coefficient being either $0$~or~$1$. \item The number of monomials with non-zero coefficients in $P_C$ is equal to the cardinality of $C$. \item If any two distinct vectors in $C$ agree on at most $k$ coordinates (i.e. $C$ is a code of distance $n-k$), then the intersection of the support of any two monomials with non-zero coefficients in $P_C$ has size at most~$k$. \end{enumerate} \end{observation} The observation immediately follows from Definition~\ref{def:char poly}. We note that we will work with monotone arithmetic circuits here, and hence will interpret the polynomial $P_C$ as a polynomial over the field of real numbers.
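The correspondence between $C$ and $P_C$ can be made concrete on a toy instance. The following Python sketch is our own illustration (the tiny "code" below is a hypothetical example, not one of the codes used later): it represents each monomial of $P_C$ as the set of pairs $(c(j), j)$, and checks the three items of Observation~\ref{obs:props of poly} directly.

```python
from itertools import combinations

def poly_from_code(code, n):
    """Build P_C as a set of monomials: codeword c in [q]^n yields the
    monomial {x_{c(j), j} : j in [n]}, encoded as a frozenset of pairs."""
    return {frozenset((c[j], j) for j in range(n)) for c in code}

# A hypothetical toy "code": 4 words of length n = 6 over q = 3 symbols,
# any two of which agree in at most 2 coordinates (distance >= 4).
code = [
    (0, 0, 0, 0, 0, 0),
    (1, 1, 1, 1, 1, 1),
    (0, 1, 2, 0, 1, 2),
    (2, 2, 1, 1, 0, 0),
]
P = poly_from_code(code, n=6)

# Item 1: every monomial is multilinear of degree exactly n.
assert all(len(m) == 6 for m in P)
# Item 2: the number of monomials equals |C|.
assert len(P) == len(code)
# Item 3: supports of two distinct monomials intersect in exactly as many
# variables as the two codewords agree in coordinates.
agree = max(sum(a == b for a, b in zip(u, v))
            for u, v in combinations(code, 2))
overlap = max(len(m1 & m2) for m1, m2 in combinations(P, 2))
assert overlap == agree == 2
```

The last check makes the translation in the third item of the observation explicit: a shared variable $x_{i,j}$ between two monomials is exactly an agreement of the two codewords at coordinate $j$.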
We now prove the following theorem, which essentially shows that for every code $C$ with sufficiently good distance, any monotone arithmetic circuit computing $P_C$ must essentially compute it by computing each of its monomials separately, and taking their sum. \begin{theorem}\label{thm:monotone alg lower bound} If any two distinct vectors in $C$ agree on at most $n/3-1$ locations, then any monotone arithmetic circuit for $P_C$ has size at least $|C|$. \end{theorem} The proof of this theorem crucially uses the following well known structural lemma about arithmetic circuits. This lemma also plays a crucial role in the other proofs of exponential lower bounds for monotone arithmetic circuits (e.g. \cite{JS82,RY11,Y19,S19}). \begin{lemma}[See Lemma 3.3 in~\cite{RY11}]\label{lem:decomposing a size s circuit} Let $Q$ be a homogeneous multilinear polynomial of degree $d$ computable by a homogeneous arithmetic circuit of size $s$. Then, there are homogeneous polynomials $g_0, g_1, g_2, \ldots, g_s, h_0, h_1, h_2, \ldots, h_s$ of degree at least $d/3$ and at most $2d/3-1$ such that \[ Q = \sum_{i = 0}^{s} g_i\cdot h_i \, . \] Moreover, if the circuit for $Q$ is monotone, then each $g_i$ and $h_i$ is multilinear, $g_i$ and $h_i$ are variable-disjoint, and each of their non-zero coefficients is a positive real number. \end{lemma} We now use this lemma to prove Theorem~\ref{thm:monotone alg lower bound}. \begin{proof}[Proof of Theorem~\ref{thm:monotone alg lower bound}] Let $B$ be a monotone arithmetic circuit for $P_C$ of size $s$. We know from Observation~\ref{obs:props of poly} that $P_C$ is a multilinear homogeneous polynomial of degree equal to $n$. This along with the monotonicity of $B$ implies that $B$ must be homogeneous and multilinear since there can be no cancellations in $B$.
Thus, from (the moreover part of) Lemma~\ref{lem:decomposing a size s circuit} we know that $P_C$ has a monotone decomposition of the form \[ P_C = \sum_{i = 0}^s g_i\cdot h_i \, , \] where each $g_i$ and $h_i$ is multilinear and homogeneous with degree between $n/3$ and $2n/3-1$, and $g_i$ and $h_i$ are variable-disjoint. We now make the following claim. \begin{claim}\label{clm:single monomial} Each $g_i$ and $h_i$ has at most one non-zero monomial. \end{claim} We first observe that the claim immediately implies Theorem~\ref{thm:monotone alg lower bound}: since every $g_i$ and $h_i$ has at most one non-zero monomial, their product $g_ih_i$ is just a monomial. Thus, the number of summands $s$ needed in the decomposition above must be equal to the number of monomials in $P_C$, which is equal to $|C|$ from the second item in Observation~\ref{obs:props of poly}. \end{proof} We now prove the claim. \begin{proof}[Proof of Claim] The proof of the claim will be via contradiction. To this end, let us assume that there is an $i \in \{0, 1, 2, \ldots, s\}$ such that $g_i$ has at least two distinct monomials with non-zero coefficients and let $\alpha$ and $\beta$ be two of these monomials. Let $\gamma$ be a monomial with non-zero coefficient in $h_i$. Since $h_i$ is homogeneous with degree between $n/3$ and $2n/3-1$, we know that the degree of $\gamma$ is at least $n/3$. Since we are in the monotone setting, we also know that each non-zero coefficient in any of the $g_j$ and $h_j$ is a positive real number. Thus, the monomials $\alpha\cdot \gamma$ and $\beta\cdot \gamma$ which have non-zero coefficients in the product $g_i\cdot h_i$ must have non-zero coefficient in $P_C$ as well (since a monomial once computed cannot be cancelled out). But, the supports of $\alpha\gamma$ and $\beta\gamma$ overlap on $\gamma$ which has degree at least $n/3$.
This contradicts the fact that no two distinct monomials with non-zero coefficients in $P_C$ share a sub-monomial of degree at least $n/3$, which follows from the distance of $C$ and the third item in Observation~\ref{obs:props of poly}. \end{proof} Theorem~\ref{thm:monotone alg lower bound}, when instantiated with an appropriate choice of the code $C$, immediately implies an exponential lower bound on the size of monotone arithmetic circuits computing the polynomial $P_C$. Observe that the total number of variables in $P_C$ is $N = qn$ and therefore, for the lower bound for $P_C$ to be of the form $\exp(\Omega(N))$, we would require $q$, the underlying field size, to be a constant. In other words, for any code of relative distance at least $2/3$ over a constant size alphabet which has exponentially many code words, we have a truly exponential lower bound. The following theorem of Garcia and Stichtenoth~\cite{GS95} implies an explicit construction of such codes. The statement below is a restatement of their result by Cohen et al.~\cite{CHS18}. \begin{theorem}[\cite{GS95} and \cite{St09}] Let $p$ be a prime number and let $m\in \N$ be even. Then, for every $0 <\rho < 1$ and a large enough integer $n$, there exists an explicit rate $\rho$ linear error correcting block code $C: \F_{p^m}^n \to \F_{p^m}^{n/\rho}$ with distance \[ \delta \geq 1- \rho - \frac{1}{p^{m/2} - 1} \, . \] \end{theorem} The theorem has the following immediate corollary. \begin{corollary}\label{cor:good codes exist} For every large enough constant $q$ which is an even power of a prime, and for all large enough $n$, there exist explicit constructions of codes $C \subseteq \F_q^n$ which have relative distance at least $2/3$ and $|C| \geq \exp(\Omega(n))$. \end{corollary} By an explicit construction here, we mean that given a vector $v$ of length $n$ over $\F_q$, we can decide in deterministic polynomial time if $v \in C$.
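For linear codes, this notion of explicitness is particularly simple: membership can be decided by checking a system of parity-check equations. The following Python sketch is our own toy illustration of such a membership test over a small prime field (the matrix $H$ and field size below are hypothetical choices, far smaller than the Garcia--Stichtenoth codes, whose membership test is of course more involved):

```python
def in_code(v, H, q):
    """Membership test for a linear code over the prime field F_q given by a
    parity-check matrix H: v is a codeword iff H v = 0 (mod q).
    This runs in time polynomial in the dimensions of H."""
    return all(sum(h * x for h, x in zip(row, v)) % q == 0 for row in H)

# Toy example over F_5: the parity checks require v0+v1+v2+v3 = 0 and
# v0+2*v1+3*v2+4*v3 = 0 (mod 5).
H = [(1, 1, 1, 1), (1, 2, 3, 4)]
assert in_code((0, 0, 0, 0), H, q=5)
assert in_code((1, 3, 1, 0), H, q=5)      # 1+3+1+0 = 5 and 1+6+3+0 = 10, both 0 mod 5
assert not in_code((1, 0, 0, 0), H, q=5)
```

Deciding membership this way takes one matrix-vector product modulo $q$, so a parity-check description of a code immediately yields explicitness in the sense just defined.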
In the arithmetic complexity literature, a polynomial $P$ is said to be explicit if, given the exponent vector of a monomial, its coefficient in $P$ can be computed in deterministic polynomial time. Thus, if a code $C$ is explicit, then the corresponding polynomial $P_C$ is also explicit in the sense described above. Therefore, we have the following corollary of Corollary~\ref{cor:good codes exist} and Theorem~\ref{thm:monotone alg lower bound}. \begin{corollary} There exists an explicit family $\{P_n\}$ of homogeneous multilinear polynomials such that for every large enough $n$, any monotone arithmetic circuit computing the $n$-variate polynomial $P_n$ has size at least $\exp(\Omega(n))$. \end{corollary} \section{Further directions} In this paper, we obtained the first monotone circuit lower bound of the form $\exp(\Omega(n^{1/2}/\log n))$ for an explicit $n$-bit monotone Boolean function. It is natural to ask if we can do better. Ideally, we would like to achieve a truly exponential bound for Boolean monotone circuits, like the one achieved for arithmetic monotone circuits in Section~\ref{sec:arithmetic}. However, as discussed in Section~\ref{sec:hr_discussion}, the $\sqrt{n}$ exponent seems to be at the limit of what current techniques can achieve. An important open-ended direction is to develop sharper techniques for proving monotone circuit lower bounds. Sticking to the approximation method, it is not yet known whether there exists another ``sunflower-type'' notion which still allows for good approximation bounds and yet admits significantly better bounds than what is possible for robust sunflowers. One approach could be to weaken the requirement on the core, and ask only that the core of a ``sunflower-type'' set system $\calf$ is properly contained in one of the elements of $\calf$. 
A weaker notion of robust sunflowers with this weakened core could still be used successfully in the proof of the lower bound of Section~\ref{sec:harnik_raz}, but it is not yet clear whether this weaker notion admits stronger bounds or not. Moreover, perhaps developing specialised sunflowers for specific functions, as was done for $\kclq$ in Section~\ref{sec:clique}, could help here. One could also consider distributions which are not $p$-biased, as perhaps better bounds are possible in different regimes. Finally, as noted before, our proof of the clique-sunflower lemma follows the approach of Rossman in~\cite{rossman_kclq}. We expect that a proof along the lines of the work of Alweiss, Lovett, Wu and Zhang~\cite{alweiss2019improved} and Rao~\cite{rao2020} should give us an even better bound on the size of set systems without clique-sunflowers, removing the $\ell !$ factor. This would extend our $n^{\Omega(\delta^2 k)}$ lower bound to $k \le n^{1/2-\delta}$. \section*{Acknowledgements} We are grateful to Stasys Jukna for bringing the lower bound of Andreev~\cite{andreev1987method} to our attention and to the anonymous referees of LATIN 2020 for numerous helpful suggestions. We also thank Igor Sergeev for bringing \cite{GSergeev2012} and the references therein to our attention, which show that truly exponential lower bounds for monotone arithmetic circuits had already been proved in the 1980s. Finally, we thank the anonymous reviewers of Algorithmica for careful proofreading and many helpful suggestions and comments. Bruno Pasqualotto Cavalar was supported by S\~ao Paulo Research Foundation (FAPESP), grants \#2018/22257-7 and \#2018/05557-7, and he acknowledges CAPES (PROEX) for partial support of this work. A part of this work was done during a research internship of Bruno Pasqualotto Cavalar and a postdoctoral stay of Mrinal Kumar at the University of Toronto. Benjamin Rossman was supported by NSERC and a Sloan Research Fellowship. 
This version of the article has been accepted for publication after peer review, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: \url{https://doi.org/10.1007/s00453-022-01000-3}. \bibliographystyle{plain}%
\section{Probability distributions at longer times} As discussed in the main text, all states in the quasienergy spectrum contribute to the probability distribution of the walker. Since most of the states are non-critical, the probability distribution for static binary disorder follows $|\psi(q)|^2\sim e^{-|q|/\lambda}$, as shown in Fig.~\ref{Fig:append1} (left panel) for numerical simulations after $t=15, 23, 31, 39$ time steps. The $q$-dependence of $-\ln |\psi(q)|^2$ is plotted in Fig.~\ref{Fig:append1} (right-top panel), clearly showing a linear dependence on position. When the binary disorder also fluctuates in time, quantum phase coherence is destroyed, turning the dynamics into classical diffusion, $|\psi(q)|^2 \sim e^{-q^2/\sigma_t^2}$, with $\sigma_t^2\sim t$. The numerical simulation, Fig.~\ref{Fig:append1} (center and right-bottom panels), confirms the diffusive behavior and the corresponding scaling of $-\ln |\psi(q)|^2$. Notice that the boundary peaks in the probability distributions are due to a small fraction of quasi-ballistically propagating states. As shown in Fig.~\ref{Fig:append1}, their contribution disappears as longer times $t\gtrsim 20$ are probed. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth] {appendix_figure1.png} \vspace{.4cm} \caption{ Probability distributions after $t=15, 23, 31, 39$ time steps for static (left panel) and dephasing (center panel) disorder. The boundary peak visible at $t=15$ disappears for larger time steps. Right panels show $-\ln(|\psi(q)|^2)$ for static disorder (top) and dephasing disorder (bottom). \label{Fig:append1} \vspace{-.2cm} } \end{figure*} \section{Statistics of the power spectrum $S(\omega)$} The relatively small number of time steps measured in the experiment implies that the signature of critical states, viz. the time-staggered spin polarization, is subject to large statistical fluctuations. 
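The expected peak structure can be illustrated with a minimal numerical sketch (in Python; this is our own noise-free toy model, not the disorder-averaged quantum-walk simulation used for the figures): an idealized polarization with a constant part and a time-staggered part, $p_t = \tfrac{1}{2} + \tfrac{1}{2}(-1)^t$, concentrates all spectral weight at $\omega = 0$ and $\omega = \pi$, the two-peak structure that the disorder averaging is meant to reveal.

```python
import cmath

def power_spectrum(signal):
    """Discrete power spectrum S(omega_k) = |sum_t s_t exp(-i omega_k t)|^2 / T
    at the frequencies omega_k = 2*pi*k/T, k = 0, ..., T-1."""
    T = len(signal)
    return [abs(sum(s * cmath.exp(-2j * cmath.pi * k * t / T)
                    for t, s in enumerate(signal))) ** 2 / T
            for k in range(T)]

# Idealized time-staggered polarization p_t = 1/2 + (1/2)(-1)^t over T steps.
T = 16
polarization = [0.5 + 0.5 * (-1) ** t for t in range(T)]
S = power_spectrum(polarization)

# All spectral weight sits at omega = 0 and omega = pi
# (indices k = 0 and k = T/2 for even T).
peaks = [k for k in range(T) if S[k] > 1e-9]
assert peaks == [0, T // 2]
```

In the experiment, statistical fluctuations from the finite number of time steps and disorder realizations smear this ideal two-peak spectrum, which is why extensive disorder averaging is required.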
From our numerical simulations we find that it is necessary to average over many more than 500 disorder realizations to clearly observe the two-peak structure in the power spectrum. Figure~3 (bottom-right panel) in the main text shows the distribution of 1000 peak heights at frequencies $\omega=0, \pi$ in the power spectrum for static (purple) and dephasing disorder (blue). Each of the 1000 points shown is obtained by averaging the power spectrum over 500 random disorder realizations. The average and standard deviation of the entire power spectrum $S(\omega)$ after averaging over the total ensemble of $1000\times 500$ realizations are plotted in Fig.~\ref{Fig:append2} below. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth] {polar_fft_stat.png} \vspace{.4cm} \caption{ Average and standard deviation of the power spectrum of the spin polarization for an ensemble of $1000\times 500$ disorder realizations. \label{Fig:append2} \vspace{-.2cm} } \end{figure} \end{appendix} \end{document}