diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzziaos" "b/data_all_eng_slimpj/shuffled/split2/finalzziaos" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzziaos" @@ -0,0 +1,5 @@ +{"text":"\\section{\n\\setcounter{equation}{0} \n\\@startsection {section}{1}{\\z@}{-3.5ex plus -1ex minus \n -.2ex}{2.3ex plus .2ex}{\\large\\bf}}\n\\renewcommand{\\theequation}{\\arabic{section}.\\arabic{equation}}\n\n\\def\\subsection{\\@startsection{subsection}{2}{\\z@}{-3.25ex plus -1ex minus \n -.2ex}{1.5ex plus .2ex}{\\normalsize\\bf}}\n\n\\def\\subsubsection{\\@startsection{subsubsection}{3}{\\z@}{-3.25ex plus\n -1ex minus -.2ex}{1.5ex plus .2ex}{\\normalsize\\it}}\n\n\\makeatother \n\n\n\\begin{document}\n\n \n\\title{\\bf Bulk viscosity due to kaons\\\\ in color-flavor-locked quark matter}\n\n\n\\author{\nMark G.~Alford and Matt Braby\\\\\n\\normalsize Department of Physics \\\\ \\normalsize Washington University \\\\\n\\normalsize St.~Louis, MO~63130 \\\\ USA \\\\[2ex]\nSanjay Reddy \\\\\n\\normalsize Theoretical Division \\\\\n\\normalsize Los Alamos National Laboratory \\\\\n\\normalsize Los Alamos, NM~87545\\\\ USA \\\\[2ex]\nThomas Sch\\\"afer \\\\\n\\normalsize Physics Department \\\\\n\\normalsize North Carolina State University \\\\\n\\normalsize Raleigh, NC 27695\\\\ USA \\\\[2ex]\n}\n\n\\date{24 Jan 2007\\\\\nRevised 20 Aug 2007 \\\\[1ex]\nLA-UR-07-0429}\n\n\\begin{titlepage}\n\\maketitle\n\\renewcommand{\\thepage}{} \n\n\\begin{abstract}\nWe calculate the bulk viscosity of color-superconducting\nquark matter in the color-flavor-locked (CFL) phase. We assume that\nthe lightest bosons are the superfluid mode $H$ and the kaons\n$K^0$ and $K^+$, and that there is no kaon condensate.\nWe calculate the rate of strangeness-equilibrating processes\nthat convert kaons into superfluid modes, and the resultant\nbulk viscosity. We find that for oscillations with a\ntimescale of milliseconds, at temperatures $T\\ll 1$~MeV,\nthe CFL bulk viscosity is much less than that of unpaired quark matter, \nbut at higher temperatures the bulk viscosity of CFL \nmatter can become larger.\n\\end{abstract}\n\n\\end{titlepage}\n\n\n\\section{Introduction}\nIn this paper we calculate the bulk viscosity of\nmatter at high baryon-number density (well above nuclear density) and \nlow temperature (of order 10 MeV). On theoretical grounds\nit is expected that, at\nsufficiently high baryon-number density, \nthree-flavor matter will be accurately described as a\ndegenerate Fermi liquid of weakly-interacting quarks, with\nCooper pairing at the Fermi surface (color superconductivity \\cite{Reviews})\nin the color-flavor-locked (CFL) channel \\cite{CFL}. That is the\nphase that we will study---the \nphase diagram at lower densities remains uncertain.\nThe motivation for this calculation is that\nin nature the highest baryon-number densities are attained\nin the cores of compact stars, and it is speculated that\nquark matter, perhaps in the CFL phase, may occur there. This means that\nour best chance of learning about the high-density region of the phase\ndiagram of matter is to make some connection between the physical properties of\nthe various postulated phases of dense matter and the observable behavior\nof compact stars. Calculating the bulk viscosity of CFL quark matter\nis part of that enterprise.\n\nCurrent observations of compact stars are able to give us measurements of\nquantities such as the mass, approximate size, temperature, spin and\nspin-down rate of these objects. 
These estimates are steadily improving,\nand other quantities, such as X-ray emission spectra, are becoming\navailable. However, it is a challenge to connect these distantly-observable\nfeatures to properties of the inner core of the star, where quark\nmatter is most likely to occur\n(for reviews, see Ref.~\\cite{Weber:2004kj,PrakashReview}).\n\nOne possible connection is via oscillations of the compact star,\nwhich on the one hand are affected by the transport properties\nof the interior, and on the other hand may have observable\neffects on the behavior of the star. The bulk viscosity is\none of the relevant transport properties, and is expected to\nplay an important role in suppressing both vibrational and rotational\noscillations. One particularly interesting application\ninvolves $r$-modes\n\\cite{Friedman:2001as,Andersson:2002ch,Kokkotas:2001ze,Madsen:1999ci}.\nIf the viscosity of the star\nis too low, unstable $r$-mode bulk flows will take place, which\nquickly spin the star down, removing its angular momentum as gravitational\nradiation. The fact that we see quickly-spinning compact stars (millisecond\npulsars) puts limits on the internal viscosity. If we can calculate\nthe viscosity of the various phases of quark matter then the observations\ncan be used to rule out some of those phases.\nThere are various complications to this simple picture. Viscosity is\nvery temperature-dependent, so to obtain useful limits\nwe need good measurements of the temperatures\nof these stars. There are also some uncertainties about additional sources\nof damping that could help to quash $r$-modes. But the essential point is\nthat viscosity calculations of the various phases are of great\npotential phenomenological importance, and in this\npaper we report on the results of such a calculation.\n\nThe phase that we choose to study is the CFL phase of quark matter.\n(The bulk viscosity for unpaired, non-interacting quark matter has\nbeen calculated previously \\cite{Madsen:1992sx,Wang:1985tg}.) \nIt is known that the mass of the strange quark induces a stress\non the CFL phase that may lead to neutral kaon condensation \n\\cite{BedaqueSchaefer,Kaplan:2001qk}, producing a ``CFL-$K^0$''\nphase. It is not known whether such condensation occurs at\nphenomenologically interesting densities, because of large uncertainties\nabout instanton effects \\cite{Schafer:2002ty},\nand in this paper we will assume that kaon condensation has not occurred: our\nresults are only applicable to the CFL phase, where there is a\nthermal population of $K^0$ and other mesons, but no condensation.\n\nBulk viscosity arises from a lag in the response of the system to an\nexternally-imposed compression-rarefaction cycle. If there are some\ndegrees of freedom that equilibrate on the same timescale as the\nperiod of the cycle, then the response will be out of phase with\nthe applied compression, and work will be done. For astrophysical\napplications, such as $r$-modes of compact stars, we are interested\nin periods of order 1~ms, which is very long compared\nto typical timescales for particle interactions.\nIn quark matter there is an obvious example of a suitably\nslowly-equilibrating quantity: flavor. 
Flavor is conserved\nby the strong and electromagnetic interactions, \nand only equilibrates via weak interactions.\n\nIn unpaired quark matter, the lightest degrees of freedom are\nthe quark excitations around the Fermi surface, and their\nflavor-changing weak interactions produce a bulk viscosity \n\\cite{Madsen:1992sx}.\nHowever, in the CFL phase the quark excitations are gapped\nand their contribution to thermodynamic and transport properties\nat temperatures below the gap is irrelevant.\nIgnoring the quarks, then, the lightest degrees of freedom in CFL\nquark matter are the massless superfluid ``$H$'' modes, the electrons\nand neutrinos, the (rotated) photon, and the kaons. Of these, only the\nkaons carry flavor, so this paper will focus on their\ncontribution to flavor equilibration.\nIn order to\ncalculate the bulk viscosity of CFL matter at long timescales such as\n1~ms we must therefore calculate the production and decay rates of\nthermal kaons. We expect the dominant modes to be ones that involve the $H$,\nlike $K^0\\leftrightarrow H\\ H$, and ones that involve the leptons, like\n$K^\\pm \\leftrightarrow e^\\pm\\ \\nu$.\n\nThis paper is laid out as follows. Section~\\ref{sec:generalities}\ndescribes how the bulk viscosity is related to the production\nand decay rates of the kaons.\nSection~\\ref{sec:dynamics} describes the basic \nthermodynamics of the system including the kaons and superfluid modes. \nSection~\\ref{sec:rates}\ndescribes the calculation of the rates of the relevant\nprocesses. Section~\\ref{sec:results} presents the results and conclusions.\n\n\n\\section{Relating bulk viscosity to microscopic processes}\n\\label{sec:generalities}\n\nThe bulk viscosity is given by \\cite{Madsen:1992sx}\n\\begin{equation}\n\\zeta = \n\\frac{2 \\bar V^2}{\\omega^2 (\\delta V)^2} \\frac{dE}{dt} \\ ,\n\\label{zeta_def}\n\\end{equation}\nwhere the system is being driven through a small-amplitude\ncompression-rarefaction cycle with volume amplitude $\\delta V$\n(see \\eqn{epsilons} below) and the driving angular frequency is $\\omega$.\nThe average power dissipated per unit volume is\n\\begin{equation}\n\\frac{dE}{dt} = -\\frac{1}{\\tau \\bar V}\\int_0^\\tau p(t)\\frac{dV}{dt}dt \\ ,\n\\label{dEdt}\n\\end{equation}\nwhere $\\tau=2\\pi\/\\omega$.\nWe can parameterize the volume oscillation by an amplitude $\\delta V \\ll \\bar V$\n(chosen to be real by convention), and the resultant\npressure oscillation $p(t)$ by a complex amplitude\n$\\delta p$ which determines its strength and phase:\n\\begin{equation}\n\\begin{array}{rcl}\nV(t) &=& \\bar{V} + {\\rm Re}(\\delta V\\, e^{i\\omega t}) \\\\[1ex]\np(t) &=& \\bar{p} + {\\rm Re}(\\delta p\\, e^{i\\omega t})\n\\end{array}\n\\label{epsilons}\n\\end{equation}\nSubstituting these into \\eqn{dEdt} and \\eqn{zeta_def}, we find that\n\\begin{equation}\n\\begin{array}{rcl}\n\\displaystyle \\frac{dE}{dt} &=& \n \\displaystyle -\\frac{1}{2}\\displaystyle \\omega \\,{\\rm Im}(\\delta p)\\,\\frac{\\delta V}{\\bar V} \\\\[2ex]\n\\zeta &=&\\displaystyle -\\frac{{\\rm Im}(\\delta p)}{\\delta V} \\frac{\\bar V}{\\omega}\n\\end{array}\n\\label{zeta_general}\n\\end{equation}\nWe therefore expect that ${\\rm Im}(\\delta p)$ will turn out to be negative.\nTo determine this quantity, we note that\nthe pressure is a function of the temperature and the chemical potentials.\nWe assume that heat arising from dissipation is conducted away quickly,\nso the whole calculation is performed at constant $T$, and\nin order to find $\\delta p$ we only need to 
know how the chemical potentials\nvary in response to the driving oscillation. \nWe expect the bulk viscosity to be most strongly influenced by the lightest\nexcitations that carry flavor, and for the sake of definiteness \nwe will take those to be\nkaons. Our analysis could easily be modified to treat the case where\nthe lightest bosons were pions. At this point we do not have to specify \nwhether our kaons are $K^0$ or $K^+$.\nThe relevant chemical potentials are $\\mu_d-\\mu_s$ for the\n$K^0$ and $\\mu_u-\\mu_s$ for the $K^+$. For the following generic\nanalysis we will just write the equilibrating chemical potential\nas ``$\\mu_K$''.\n\nIn thermal equilibrium, the distribution of kaons is determined by their\ndispersion relations \\eqn{Kdisp} and the Bose-Einstein distribution.\nWhen the kaon population goes slightly out of equilibrium in response to\nthe applied perturbation, strong interaction processes are still\nproceeding quickly: only weak interactions are failing to keep up.\nThis means that we can always characterize our kaon population by\nthe flavor chemical potential $\\mu_K$, so the kaon \ndistribution has the form \n$n_K(p)\\propto p^2\/(\\exp((E_K(p)-\\mu_K)\/T)-1)$\n(see Eq.~\\eqn{kaon_distribution}), and nonzero $\\mu_K$ indicates deviation\nfrom equilibrium.\n\nWe express the variations in the chemical potentials for\nquark number and strangeness in terms\nof complex amplitudes $\\delta\\mu$ and $\\delta\\mu_K$,\n\\begin{equation}\n\\begin{array}{rcl}\n\\mu(t) &=& \\bar{\\mu} + {\\rm Re}(\\delta \\mu \\, e^{i\\omega t}) \\ , \\\\\n\\mu_K(t) &=&\\phantom{\\bar{\\mu}\\, +\\,} {\\rm Re}(\\delta\\mu_K e^{i\\omega t}) \\ .\n\\end{array}\n\\label{mu_epsilons}\n\\end{equation}\nNote that the term $-m_s^2\/(2\\mu)$, which is often \ndescribed as an ``effective chemical potential'', is already included \nin the kaon dispersion relation \\eqn{Kdisp},\nso in equilibrium, $\\mu_K$ is zero.\nThe pressure amplitude is then\n\\begin{equation}\n\\delta p =\n \\frac{\\partial p}{\\partial \\mu}\\Bigr|_{\\mu_K} \\delta \\mu\n +\\frac{\\partial p}{\\partial \\mu_K}\\Bigr|_{\\mu} \\delta \\mu_K\n = n_q \\delta \\mu + n_K \\delta \\mu_K\\ .\n\\label{dp_full}\n\\end{equation}\n(From now on all partial derivatives with respect to $\\mu$\nwill be assumed to be at constant $\\mu_K$, and vice versa.)\nIn principle one might worry that what we have called ``$n_K$'' \nis really $n_d-n_s$ (or $n_u-n_s$), but at temperatures \nbelow the gap, and in the absence of kaon condensation, \nthermal kaons make the dominant contribution to \n$n_d-n_s$ and $n_u-n_s$.\nFrom \\eqn{dp_full} and \\eqn{zeta_general} we find\n\\begin{equation}\n\\zeta = -\\frac{1}{\\omega}\\frac{\\bar V}{\\delta V}\\Bigl(\n \\bar n_q{\\rm Im}(\\delta\\mu) + \\bar n_K{\\rm Im}(\\delta\\mu_K ) \\Bigr) \\ .\n\\label{zeta}\n\\end{equation}\nTo obtain the imaginary parts of the chemical potential amplitudes,\nwe write down the rate of change of the corresponding conserved quantities,\n\\begin{equation}\n\\begin{array}{rclcl}\n\\displaystyle \\frac{dn_q}{dt} \n &=&\\displaystyle \\frac{\\partial n_q}{\\partial \\mu}\\frac{d\\mu}{dt}\n +\\frac{\\partial n_q}{\\partial \\mu_K}\\frac{d\\mu_K}{dt}\n &=&\\displaystyle -\\frac{n_q}{\\bar V} \\frac{dV}{dt} \\ , \\\\[2ex]\n\\displaystyle \\frac{dn_K}{dt} \n &=&\\displaystyle \\frac{\\partial n_K}{\\partial\\mu}\\frac{d\\mu}{dt}\n + \\frac{\\partial n_K}{\\partial \\mu_K}\\frac{d\\mu_K}{dt}\n &=&\\displaystyle -\\frac{n_K}{\\bar V} \\frac{dV}{dt} - \\Gamma_{\\rm total} \\ 
.\n\\end{array}\n\\label{ndots1}\n\\end{equation}\nAll the partial derivatives are evaluated at equilibrium, $\\mu=\\bar\\mu$ and\n$\\mu_K=0$.\nThe right-hand term on the first line expresses the fact that quark\nnumber is conserved, so when a volume is compressed, the quark density\nrises. On the second line, which gives the rate of change of kaon\nnumber, there is such a term from the compression of the existing kaon\npopulation, but there is also a net kaon annihilation rate of\n$\\Gamma_{\\rm total}$ kaons per unit volume\nper unit time, which\nreflects the fact that weak interactions will push the kaon density\ntowards its equilibrium value. \nThe annihilation rate will be calculated from the microscopic\nphysics in section \\ref{sec:rates}.\nFor small deviations from equilibrium\nwe expect $\\Gamma_{\\rm total}$ to be linear in $\\mu_K$, \nso it is convenient to write the rate in\nterms of an average kaon width $\\gamma_K$, which is defined in terms \nof the total rate by writing\n\\begin{equation}\n\\Gamma_{\\rm total}\n= \\gamma_K\\,\\frac{\\partial n_K}{\\partial \\mu_K}\\delta\\mu_K e^{i\\omega t}\n\\label{gamma_K}\n\\end{equation}\nwhere the derivative is evaluated at $\\mu_K=0$.\n\nWe now substitute the assumed oscillations \\eqn{epsilons} and\n\\eqn{mu_epsilons} into \\eqn{ndots1}, and solve to obtain\nthe amplitudes $\\delta\\mu$ and $\\delta\\mu_K$ in terms of the amplitude $\\delta V$\nand angular frequency $\\omega$ of the driving oscillation. Inserting their\nimaginary parts in \\eqn{zeta} we obtain the bulk viscosity\n\n\\begin{equation}\n\\zeta = C\\frac{ \\ga_{\\rm eff} }{\\omega^2 + \\ga_{\\rm eff}^2}\n\\label{zeta_K}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{array}{rcl}\n\\ga_{\\rm eff} &=&\\displaystyle \\gamma_K\\left(1 -\n \\frac{\\displaystyle\\Bigl(\\deriv{n_K}{\\mu}\\Bigr)^2}{\\displaystyle\\deriv{n_q}{\\mu}\\deriv{n_K}{\\mu_K}}\n \\right)^{-1} \\approx \\gamma_K \\\\[6ex]\nC &=&\\displaystyle \\Bigl(\\deriv{n_q}{\\mu}\\Bigr)^{-1}\\frac{\\displaystyle\n \\left( \\bar{n}_K \\deriv{n_q}{\\mu} - \\bar{n}_q \\deriv{n_K}{\\mu} \\right)^2}{\n \\displaystyle \\deriv{n_K}{\\mu_K}\\deriv{n_q}{\\mu}-\\Bigl(\\deriv{n_K}{\\mu}\\Bigr)^2}\n\\approx \n \\Bigl( \\deriv{n_K}{\\mu_K}\\Bigr)^{-1}\n \\left( \\bar{n}_K - \\bar{n}_q \\deriv{n_K}{\\mu}\n \\Bigl( \\deriv{n_q}{\\mu}\\Bigr)^{-1} \\right)^2\n\\end{array}\n\\label{C_gamma}\n\\end{equation}\nThe approximate forms on the right-hand side\nare valid for $T\\ll\\mu$.\nThey follow\nfrom the fact that\nall the derivatives of the kaon free energy go to zero as $T\\to 0$, so $n_K$\nand its derivatives are suppressed relative to $\\bar n_q$ and\n$\\partial n_q\/\\partial\\mu$, which are of order $\\mu^3$ and $\\mu^2$ respectively.\n\nTo evaluate $C$ and $\\ga_{\\rm eff}$\nwe need the particle densities and their derivatives,\nwhich follow from the full free energy of the system,\n$\\Omega = \\Omega_{\\rm CFL-quarks}(\\mu) + \\Omega_K(\\mu,\\mu_K)$, where\n$\\Omega_{\\rm CFL-quarks}$ is the CFL quark free energy at zero temperature\n\\cite{Alford:2002kj} and $\\Omega_K$ is the kaon free energy \n\\eqn{kaon_distribution}. Then $n_q$ and $dn_q\/d\\mu$ come dominantly from \n$\\Omega_{\\rm CFL-quarks}$, and all the other quantities in \\eqn{C_gamma}\ncome from $\\Omega_K$. We can then see that\nthe kaon free energy depends on $E_K(p)-\\mu_K$, and we choose $m_K$ to be\nindependent of $\\mu$, so from the kaon\ndispersion relation \\eqn{Kdisp} we see that\nthe kaon free energy is a function of\n$\\mu_K + m_s^2\/(2\\mu)$. 
This means that\n\\begin{equation}\n\\deriv{n_K}{\\mu} = -\\frac{m_s^2}{2\\mu^2} \\deriv{n_K}{\\mu_K}\n\\end{equation}\nso terms in \\eqn{C_gamma} involving $dn_K\/d\\mu$ are suppressed \nrelative to those involving $dn_K\/d\\mu_K$.\n\n\nFrom \\eqn{zeta_K} we can already see how the bulk viscosity depends\non the angular frequency $\\omega$ of the oscillation and the equilibration rate\n$\\gamma_K$. At fixed $\\gamma_K$, the bulk viscosity decreases as the\noscillation frequency rises; it is roughly constant for\n$\\omega\\lesssim\\gamma_K$, and then drops off quickly as $1\/\\omega^2$ for\n$\\omega\\gg\\gamma_K$. At fixed $\\omega$, the bulk viscosity is dominated by\nprocesses with rate $\\gamma_K\\sim\\omega$, and their contribution is\nproportional to $1\/\\gamma_K$. If we imagine varying the rate but keeping\nother quantities fixed (e.g.~by varying the coupling constant of the\nequilibrating interaction), then for $\\gamma_K\\ll\\omega$ or $\\gamma_K\\gg\\omega$\nthe bulk viscosity tends to zero. Thus very fast processes, such as\nstrong interactions, are not an important source of bulk\nviscosity. The limit of zero equilibration rate {\\em and} zero\nfrequency is singular, and depends on the order of limits.\n\nIn this paper we will also be concerned with the temperature\ndependence of the bulk viscosity. This cannot be straightforwardly\nread off from \\eqn{zeta_K} because the rates and particle densities \ndepend on the temperature in complicated ways; however, we \nexpect that as we go to higher temperatures\n($T \\gg m_K,\\,\\mu^{\\rm eff}_K$) the\nbulk viscosity will grow because the kaon density is rapidly increasing.\nIn the limit of low temperature we expect the viscosity to\nbe suppressed by $\\exp(-(m_K-\\mu^{\\rm eff}_K)\/T)$ \nas the thermal kaon population disappears.\n\n\\section{Dynamics of the light modes}\n\\label{sec:dynamics}\n\nIn this section we lay out the properties of the lightest modes of the\nsystem, since they will dominate the transport properties. There is\nan exactly massless scalar Goldstone boson associated with spontaneous\nbreaking of baryon number, and some light pseudoscalars associated\nwith the spontaneous breaking of the chiral symmetry. We will ignore\nthe $\\eta'$ mode associated with the breaking of $U(1)_A$, since $U(1)_A$\nis explicitly broken in QCD at moderate densities.\n\n\\subsection{The superfluid ``$H$'' mode}\n\\label{sec:H}\nThe CFL quark condensate breaks the exact $U(1)_B$ baryon number symmetry of\nthe QCD Lagrangian, creating a superfluid with\nan exactly massless Goldstone boson $H$.\nThe Lagrangian for the superfluid mode of the CFL phase is\n\\cite{Son:2002zn}\n\\begin{equation}\nL_{\\rm eff} = \\frac{N_c N_f}{12\\pi^2}\n \\bigg[(\\partial_0\\phi - \\mu)^2 - (\\partial_i\\phi)^2\\bigg]^2\n\\end{equation}\nThis Lagrangian is correct to leading (zeroth) order in $\\alpha_s$ and\nto leading order in the derivatives of the $\\phi$ field. 
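Before rescaling, it is instructive to display the quadratic part of this Lagrangian explicitly (a short check of the velocity quoted below): expanding the bracket and dropping the constant and the total time derivative $-4\\mu^3\\partial_0\\phi$ gives\n\\begin{equation}\nL_{\\rm eff} = \\frac{N_c N_f\\,\\mu^2}{12\\pi^2}\\Bigl( 6(\\partial_0\\phi)^2 - 2(\\partial_i\\phi)^2 \\Bigr) + \\ldots \\ ,\n\\end{equation}\nso the squared velocity of the mode is the ratio of the spatial and temporal coefficients, $v_H^2 = 1\/3$.\n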
This can be \nrescaled to give a conventionally normalized\nkinetic term, and the total time-derivative term can be dropped\n\\cite{Manuel:2004iv}, giving\n\\begin{equation}\nL_{\\rm eff} = \\half(\\partial_0\\phi)^2 - {\\txt \\frac{1}{6}} (\\partial_i\\phi)^2 \n - \\frac{\\pi}{9\\mu^2}\\partial_0\\phi(\\partial_{\\mu}\\phi)^2 \n + \\frac{\\pi^2}{108\\mu^4} (\\partial_{\\mu}\\phi \\partial^{\\mu}\\phi)^2\n\\end{equation}\nIgnoring the interaction terms for the moment,\nthe dispersion relation for the $H$ particle is\n\\begin{equation}\nE_H(p) = v_H\\,p\n\\end{equation}\nwhere $v_H^2 = 1\/3$ is the ratio of the \ncoefficients of the spatial and temporal derivative terms above.\nIn thermal equilibrium at temperature $T$, \nthe $H$ bosons have free energy\n\\begin{equation}\n\\Omega_H = \\frac{T}{2\\pi^2}\\int_0^\\infty dk\\, k^2 \n \\ln(1-\\exp(-v_H k\/T))\n = -\\frac{\\pi^2}{90 v_H^3} T^4\n\\end{equation}\nand number density\n\\begin{equation}\nn_H = \\int \\frac{d^3 k}{(2\\pi)^3} \\frac{1}{\\exp(E_H\/T)-1} \n = \\frac{\\zeta(3)}{\\pi^2 v_H^3} T^3 \\ .\n\\end{equation}\n\nThe cubic and higher-order terms allow a single $H$ to decay into multiple\n$H$ particles.\nFor energies far below $\\mu$ the dominant process is\n$H\\to HH$ \\cite{Manuel:2004iv,Manuel:2005hu},\nwhose rate can be calculated by taking the imaginary part of the\n1-loop $H$ self energy. Higher order corrections to\nthe self energy are ignored, both in Ref.~\\cite{Manuel:2004iv} and\nin this paper. These corrections could be calculated by taking\ninto account quantum mechanical interference, the Landau-Pomeranchuk-Migdal \n(LPM) effect. This should only introduce a difference of $O(1)$ in the\ncoefficient of the self-energy but would not change the parametric result. \nThe self energy is given by Eq.~(3.7) in Ref.~\\cite{Manuel:2004iv},\n\\begin{equation}\n\\Sigma_H(p_0,p) = -\\frac{4\\pi^2}{81 \\mu^4} \\sum_{s_1,s_2 = \\pm}\n \\int \\frac{d^3 k}{(2\\pi)^3} F(p_0,p,k) \\left(\\frac{s_1 s_2}{4 E_1 E_2}\n \\, \\frac{1+f(s_1 E_1) + f(s_2 E_2)}{i\\omega - s_1 E_1 - s_2 E_2}\\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{array}{c}\nF(p_0,p,k) \\equiv \\bigg[p_0^2 - v^2 p^2 - 2vk(p_0-vk)\\bigg]^2, \\\\[1ex]\nf(E) \\equiv 1\/(e^{E\/T}-1),\\qquad\nE_1 \\equiv vk,\\qquad E_2 \\equiv v|\\bm{p} - \\bm{k}|.\n\\end{array}\n\\end{equation}\nRef.~\\cite{Manuel:2004iv} showed that the real part of this self-energy\nis parametrically smaller than the imaginary part, so we will only\nconcern ourselves with the imaginary part, which we will call $\\Pi_H$.\nThere is no contribution to the imaginary part when $s_1 = s_2 = -1$, as\nthe denominator $i\\omega + E_1 + E_2$ never vanishes for positive energies, so\nthere is no pole in the integral. 
One can also show that the \ntwo terms where the signs of $s_1$ and $s_2$ are opposite are identical.\nWe can then rewrite this in a slightly simpler and more suggestive\nform that will be used in Section \\ref{sec:rates}.\n\\begin{equation}\n\\begin{array}{rcl}\n\\Pi_H(p_0,p) &=& \\displaystyle \\frac{2\\pi^3 p_0^2}{81 \\mu^4 v^4} \\frac{1}{1+f(p_0)} \n \\int \\frac{d^3 k}{2 k_0 (2\\pi)^3} F(p_0,p,k) G(p_0,p,k) \\ , \\\\[3ex]\n G(p_0,p,k) &=& \\displaystyle (1+f(E_1)) \n\\Bigg(\\frac{1+f(E_2)}{E_2}\\ \\delta(p_0 - E_1 - E_2)\n +\\ 2\\ \\frac{f(E_2)}{E_2}\\ \\delta(p_0 - E_1 + E_2)\\Bigg)\\ .\n\\end{array}\n\\label{PiH}\n\\end{equation}\nThe $H$ propagator can then be written as follows\n\\begin{equation}\nD_H(p_0,p) = \\frac{1}{p_0^2 - v_H^2p^2 + i\\Pi_H(p_0,p)}\n\\label{H_prop}\n\\end{equation}\nand we will use this expression in section~\\ref{sec:rates}.\nIt will be useful in the calculation of the decay rates to have the \n$H$ self-energy at momenta and energies close to mass shell ($p_0=vp$).\nThe self-energy is discontinuous at this point so there are\ntwo values $\\Pi_H^+$ and $\\Pi_H^-$ depending on whether\n$p_0$ tends to $vp$ from above or below,\n\\begin{equation}\n\\begin{array}{rclcl} \n\\Pi_H^+(p) &=&\\displaystyle \\lim_{\\varepsilon\\to 0} \\Pi_H(vp+\\varepsilon,p)\n &=& \\displaystyle-\\frac{\\pi p}{81 \\mu^4 v} \\frac{1}{1+f(vp)}\n\\int_0^p dk\\, I(p,k) \\ , \\\\[3ex]\n\\Pi_H^-(p) &=&\\displaystyle \\lim_{\\varepsilon\\to 0} \\Pi_H(vp-\\varepsilon,p)\n &=& \\displaystyle\\frac{2\\pi p}{81 \\mu^4 v} \\frac{1}{1+f(vp)}\n\\int_p^\\infty dk\\, I(p,k) \\ .\n\\end{array}\n\\label{Pi_H_pbar}\n\\end{equation}\nwhere $I(p,k) = k^2 (p-k)^2 (1+f(vk))f(vk-vp)$. \nAs $T\\to 0$, $\\Pi_H^+(p)\\propto p^6\/\\mu^4$, and $\\Pi_H^-(p)\\to 0$.\n\n\n\n\\subsection{Pions and Kaons}\n\\label{sec:pi_and_K}\nThe CFL quark condensate breaks the approximate $SU(3)$ chiral symmetry of the\nQCD Lagrangian, creating eight light pseudoscalar\npseudo-Goldstone mesons. This octet\nis just a high-density version of the pion\/kaon octet. 
It is described by\nan effective theory \\cite{Casalbuoni:1999zi,Son:1999cm}\n\\begin{equation}\nL_{\\rm eff} = {\\txt\\frac{1}{4}} f_\\pi^2 \\mbox{Tr}\\Bigl(\n \\nabla_0\\Sigma \\nabla_0\\Sigma^\\dagger - v_\\pi^2 \\partial_i\\Sigma \\partial_i\\Sigma^\\dagger\\Bigr)\n + \\cdots\n\\label{Sigma}\n\\end{equation}\nwhere $\\Sigma=\\exp(iP^a\\lambda^a\/f_\\pi)$, and the normalization of\nthe Gell-Mann matrices is $\\mbox{tr}(\\lambda^a\\lambda^b)=2\\delta^{ab}$, which yields\na conventionally normalized kinetic term for the \nGoldstone boson fields $P^a$.\nAt asymptotic densities, weak-coupling calculations give \\cite{Son:1999cm}\n\\begin{equation}\nf_\\pi^2 = \\frac{21-8\\log(2)}{18}\\left(\\frac{\\mu^2}{2\\pi^2}\\right) \n \\hspace{1cm} v_\\pi^2 = \\frac{1}{3} \\ .\n\\label{fpi}\n\\end{equation}\nWhen weak interactions have equilibrated, \nthe pseudoscalars $P=\\pi^\\pm,K^\\pm,K^0,\\overline{K^0}$\nhave dispersion relations \\cite{Son:1999cm,BedaqueSchaefer}\n\\begin{equation}\nE_{P} = -\\mu^{\\rm eff}_{P} + \\sqrt{v_\\pi^2 p^2 + m_{P}^2} \\ ,\n\\label{Kdisp}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{array}{rcl}\n\\mu^{\\rm eff}_{\\pi^\\pm} &=&\\displaystyle \\pm \\frac{m_d^2-m_u^2}{2\\mu} \\ , \\\\[1ex]\n\\mu^{\\rm eff}_{K^\\pm} &=&\\displaystyle \\pm \\frac{m_s^2-m_u^2}{2\\mu}\\ , \\\\[1ex]\n\\mu^{\\rm eff}_{K^0,\\overline{K^0}} &=&\\displaystyle \\pm \\frac{m_s^2-m_d^2}{2\\mu}\\ .\n\\end{array}\n\\label{mueff}\n\\end{equation}\nBecause $m_s\\gg m_u,m_d$, the $K^0$ and $K^+$ are expected to\nhave the smallest energy gap, and so we focus on their contribution\nto the bulk viscosity.\nWe are interested in studying small departures from equilibrium, where\neach meson has an additional chemical potential $\\delta\\mu_P$.\nIt will turn out that the $K^0$ makes the dominant contribution,\nso only $\\delta\\mu_{K^0}$ is relevant (Sec.~\\ref{sec:K0_rates}).\n\nThe expression for the bulk viscosity \\eqn{zeta_K} contains terms\nof the form $\\partial n_K\/\\partial\\mu$, which take into account the fact that\nthe meson distributions depend on the meson dispersion relations, which via \n\\eqn{mueff} depend on the quark chemical potential.\nIn this paper we treat the meson masses $m_{K^0}$, etc., as constants,\nbut in perturbative calculations they also depend on $\\mu$ and\nthe CFL pairing gap $\\Delta$ (see section~\\ref{sec:meson_mass}).\n\nIn thermal equilibrium at temperature $T$ and with chemical potential $\\mu_P$, \nthe free energy and number density of a meson $P$ are\n\\begin{eqnarray}\n\\Omega_{P} &=& \\frac{T}{2\\pi^2}\\int_0^\\infty dk\\, k^2\\, \n \\ln\\bigl(1-\\exp(-(E_{P}-\\delta\\mu_{P})\/T)\\bigr) \\\\\nn_{P} &=& -\\frac{\\partial \\Omega_{P}}{\\partial \\mu_{P}} \n = \\frac{1}{2\\pi^2} \\int_0^\\infty dk\\ k^2 \n \\frac{1}{\\exp((E_{P}-\\delta\\mu_{P})\/T)-1}\n\\label{kaon_distribution}\n\\end{eqnarray}\nWhen weak interactions have equilibrated, $\\delta\\mu_P=0$, but when\nweak interactions are out of equilibrium the mesons may have nonzero\nchemical potentials.\n\n\\subsection{Pseudo-Goldstone-boson masses}\n\\label{sec:meson_mass}\nAlthough we treat the masses as constants, they are predicted to have\ndensity dependence in high-density QCD \\cite{Schafer:2002ty}\n\\begin{equation}\n\\begin{array}{rcl}\nm_{\\pi^\\pm}^2 &=&\\displaystyle \\frac{1}{f_\\pi^2} (2A + 4Bm_s)(m_u+m_d) \\ , \\\\[2ex]\nm_{K^\\pm}^2 &=&\\displaystyle \\frac{1}{f_\\pi^2} (2A + 4Bm_d)(m_u+m_s) \\ , \\\\[2ex]\nm_{K^0,\\overline{K^0}}^2 &=&\\displaystyle \\frac{1}{f_\\pi^2} (2A + 4Bm_u)(m_d+m_s) \\ 
,\n\\end{array}\n\\label{meson_mass}\n\\end{equation}\nwhere $A$ is positive and related to instantons \n\\cite{Schafer:1999fe,Manuel:2000wm}. In the limit of asymptotically large \ndensity the coefficient $A$ can be computed reliably, but at moderate \ndensity its value is quite uncertain \\cite{Schafer:2002ty}.\nAsymptotic-density QCD calculations \\cite{Son:1999cm,BedaqueSchaefer} \nalso yield \n\\begin{equation}\nB =\\displaystyle \\frac{3\\Delta^2}{4\\pi^2} \\ ,\n\\end{equation}\nalthough it is not clear how well these expressions can be trusted\nat densities of phenomenological interest.\nWe will assume that $A$ and $B$ are such that there is\nno meson condensation at zero temperature, \nwhich means that all the meson masses\nare greater than their effective chemical potentials.\n\n\\subsection{Weak interactions between light bosons}\n\\label{sec:weak_int}\n\nWe have argued above that the bulk viscosity will arise from flavor\nviolation, which will be dominated by conversion between the lightest\npseudo-Goldstone modes (neutral kaons, typically), which carry flavor,\nand the superfluid $H$ modes, which are flavorless. The dominant effect \nof the weak interaction will be to introduce mixing between the $K^0$\nand the $H$, in the form of a $K^0 \\leftrightarrow H$ vertex \\eqn{KHmixing}\nin the effective theory. \nWe now calculate the strength of that coupling.\n\nThe Lagrangian density for the $H$ modes can be written in a\nnonlinear form analogous to \\eqn{Sigma} for the pseudoscalars,\nso the leading terms in the CFL effective theory become\n\\cite{Casalbuoni:1999zi},\n\\begin{equation}\n\\label{l_cheft}\n{\\cal L}_{\\rm eff} = \\frac{f_\\pi^2}{4} {\\rm Tr}\\left[\n \\nabla_0\\Sigma\\nabla_0\\Sigma^\\dagger - v_\\pi^2\n \\partial_i\\Sigma\\partial_i\\Sigma^\\dagger \\right]\n+12 f_H^2 \\left[\n \\nabla_0 Z\\nabla_0 Z^* - v^2_H\n \\partial_iZ \\partial_i Z^* \\right]\n\\end{equation}\nwhere $Z=\\exp(iH\/(2\\sqrt{6}f_H))$ is the \nfield related to the breaking of $U(1)_B$ and $f_H$ is the \ncorresponding decay constant. \nAt large density the coefficients of the CFL effective Lagrangian can \nbe determined in perturbation theory. 
At leading order $v_\\pi^2=v_H^2=1\/3$,\n$f_\\pi$ is given by \\eqn{fpi}, and from Ref.~\\cite{Son:1999cm} we obtain\n\\begin{equation}\nf_H^2 = \\frac{3}{4}\\left(\\frac{\\mu^2}{2\\pi^2}\\right) \\ .\n\\label{f_H}\n\\end{equation}\n\nUnder an element $(L,R,\\exp(i\\alpha))$ of the\nchiral flavor and baryon number symmetry group\n$SU(3)_L\\times SU(3)_R\\times U(1)_V$, the left-handed quarks,\nright-handed quarks, and bosons transform as follows:\n\\begin{equation}\n\\begin{array}{rcl}\nq_L &\\to& \\exp(i\\alpha)L\\, q_L \\ ,\\\\\nq_R &\\to& \\exp(i\\alpha)R \\,q_R \\ ,\\\\\n\\Sigma &\\to& L\\,\\Sigma \\, R^{-1} \\ ,\\\\\nZ &\\to& \\exp(i\\alpha)\\,Z \\ .\n\\end{array}\n\\label{transformation}\n\\end{equation}\nThe weak Hamiltonian breaks the approximate flavor symmetry of QCD,\nand only acts on left-handed fields.\nThe elementary process that is relevant to kaon decay is the\nconversion between strange quarks and down quarks via\nexchange of a $W^\\pm$, which can be treated at energy scales\nwell below 100 GeV as a four-fermion interaction (Ref.~\\cite{Donoghue:1992dd}, \nsect.~II-3 and II-4),\n\\begin{equation}\n{\\cal L}_{\\rm weak} = \\frac{G_F V_{ud} V_{us}}{\\sqrt{2}}\n (\\overline s \\gamma^{\\mu} u)_L(\\overline u \\gamma_{\\mu} d)_L + h.c.\n\\label{weak_H}\n\\end{equation}\nwhere $V_{ud}V_{us}\\approx 0.215(3)$.\nIn order to determine how this interaction is represented\nin the low energy effective theory of the CFL phase,\nwe introduce the spurion field $\\Lambda_{ds}$, which transforms\nas $\\Lambda_{ds}\\to L\\Lambda_{ds}L^\\dagger$. In the QCD \nvacuum we set $\\Lambda_{ds}=\\lambda_6$ using the usual\nnotation for the Gell-Mann matrices (Ref.~\\cite{Donoghue:1992dd}, sect.~II-2), \ni.e.~$(\\Lambda_{ds})_{\\alpha\\beta}= \\delta_{2\\alpha}\\delta_{3\\beta}+\\delta_{3\\alpha}\\delta_{2\\beta}$. \nThus each time this spurion field occurs in an\ninteraction, it mediates a conversion of downness into strangeness, or\nvice versa.\nThe lowest-order terms in the effective theory that involve\nsuch a conversion are obtained by writing down the\nlowest-order terms that contain $\\Lambda_{ds}$ and are invariant under\nspatial rotations and $SU(3)_L\\times SU(3)_R\\times U(1)_V$:\n\\begin{equation}\n\\label{cfl_wk}\n{\\cal L}_{\\rm weak} = \n f_\\pi^2 f_H^2 G_{ds} {\\rm Tr}\\left[\\Lambda_{ds} \\Bigl(\n \\Sigma\\partial_0\\Sigma^\\dag\\, Z\\partial_0 Z^* \n- v_{ds}^4 \\Sigma\\partial_i\\Sigma^\\dag\\, Z\\partial_i Z^* \\Bigr)\\right] \n\\end{equation}\nwhere $G_{ds}$ and $v_{ds}$ are new couplings in the effective action.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=10cm]{new_figs\/matching}\n\\caption{Leading contribution to the $X_\\mu^6 B_\\nu$ \npolarization function in the microscopic theory with gauged\nchiral and baryon number symmetries. The shaded bar corresponds\nto the vertex of \\eqn{weak_H}, which is the low energy limit of a\n$W$-boson-mediated interaction.}\n\\label{fig:polarization}\n\\end{center}\n\\end{figure}\n\nDimensional analysis suggests that $G_{ds}\\sim G_F$ and $v_{ds}\\sim 1$. \nIf the density is large we can be more precise and determine\nthe coupling constants using a simple matching argument.\nFor this purpose we gauge the $SU(3)_L$ and $U(1)_B$ symmetries. \nWe will denote the corresponding gauge fields $X_\\mu^A$ and \n$B_\\mu$. The flavor-violating term \\eqn{cfl_wk} in the effective\naction leads to mixing between the $X_\\mu^6$ and $B_\\mu$ gauge\nbosons. 
By matching to a calculation of the mixing term in the\nmicroscopic theory \\eqn{weak_H}, we now proceed to\ndetermine $G_{ds}$ in terms of $G_F$.\n\nIn the effective theory, the $\\Sigma$ field has one left-handed quark index\nso $\\partial_\\mu\\Sigma \\to (\\partial_\\mu + X^A_\\mu\\lambda_A)\\Sigma$.\nFor the superfluid mode, $\\partial_\\mu Z \\to (\\partial_\\mu + B_\\mu)Z$. Substituting into\n\\eqn{cfl_wk} and evaluating in the CFL vacuum ($\\Sigma=1$)\nwe find the mixing term is\n\\begin{equation} \n{\\cal L}_{\\rm mix} = 2G_{ds}f_\\pi^2 f_H^2 (X_0^6 B_0-v_{ds}^4 X_i^6B_i) \\ .\n\\label{mixing_micro}\n\\end{equation}\nIn the microscopic theory, the corresponding calculation is the\ncomputation of the\n$X_\\mu^6 B_\\nu$ polarization function.\nAt weak coupling the dominant contribution comes from the \ntwo-loop diagram shown in Fig.~\\ref{fig:polarization}. The evaluation of\nthe Feynman diagram is described in Appendix~\\ref{weak_matching}.\nThe result \\eqn{G_ds} is\n\\begin{equation}\n\\label{g_match}\nG_{ds}= \\sqrt{2} V_{ud}V_{us}\\, G_F \\hspace{1cm} v_{ds}^2 = v^2 = 1\/3.\n\\end{equation}\nIt is now straightforward to read off the $K^0\\to H$ amplitude. \nLinearizing \\eqn{cfl_wk}, we find\n\\begin{equation}\n{\\cal L} = G_{ds} f_\\pi f_H \\left( \\partial_0 K^0 \\partial_0 H \n - v_{ds}^4 \\partial_i K^0 \\partial_i H \\right)\n\\label{KHmixing}\n\\end{equation}\nwith $G_{ds}$ and $v_{ds}$ given in (\\ref{g_match}).\nThis leads to a vertex factor for the $K$-$H$ interaction given by\n\\begin{equation}\nA = G_{ds} f_\\pi f_H (p_0^2 - v_{ds}^4\\, p^2);\n\\label{KH_amp}\n\\end{equation}\nThis is the value of the $K^0$-$H$ vertex in Feynman diagrams\nsuch as Fig.~\\ref{fig:K_decay}.\nCombining this vertex factor and the Lagrangian for the $H$, we can \ncalculate the matrix element for conversion between a kaon with\n4-momentum $p$ and two $H$s with 4-momenta $k$ and $q$,\n\\begin{equation}\n{\\it M}^2_{K^0HH}(p,k,q) =\n \\frac{G_{ds}^2f_\\pi^2 \\ (p_0^2 - v_{ds}^4\\, p^2)^2}{144 f_H^2}\n {\\bigg(p_0(k\\cdot q) + k_0 (p \\cdot q) + q_0 (p \\cdot k)\\bigg)^2}\n |D_H(p_0,p)|^2 \\ ,\n\\label{KHH_amp}\n\\end{equation}\nwhere $D_H$ is the $H$-propagator \\eqn{H_prop}.\n\n\\section{Rates of strangeness re-equilibration processes}\n\\label{sec:rates}\n\n\\subsection{Neutral Kaon Rates}\n\\label{sec:K0_rates}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{new_figs\/kzero_decay}\n\\caption{The $K^0\\to H\\ H$ diagram, including the $K^0\\to H$\nvertex (square), a full $H$ propagator (thick line) and\nthe $H \\to H\\ H$ vertex (round). All external lines are amputated.}\n\\label{fig:K_decay}\n\\end{center}\n\\end{figure}\nIn principle, the correct way to calculate the $K^0$ annihilation rate\nis as follows. As noted above, the weak interaction introduces a small\nmixing between the $K^0$ and the $H$. We rediagonalize the kinetic\nterms in the effective action, in terms of new fields $E_K$ (which is\nthe kaon with a tiny admixture of $H$) and $E_H$ (which is the $H$\nwith a tiny admixture of $K^0$). We have already seen that the $H$\nhas a width, arising from the possibility of $H\\to H\\\nH$\\footnote{There are also decays involving three or more $H$\nparticles, but we expect these to be suppressed, since the $H$ is\nderivatively coupled, and the greater the number of $H$ particles\ninvolved, the smaller the momentum carried by each of them.}.\nWhen we diagonalize, this will induce\na width for the $E_K$. 
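Schematically, the quadratic action in the $(K^0,H)$ sector has the inverse-propagator matrix\n\\begin{equation}\n\\left(\\begin{array}{cc} D_K^{-1} & -A \\\\ -A & D_H^{-1} \\end{array}\\right) \\ ,\n\\end{equation}\nwhere $A$ is the mixing amplitude \\eqn{KH_amp} and $D_K$ is the kaon propagator, whose detailed form we suppress here: integrating out the $H$ gives the kaon an effective self-energy $A^2 D_H$, whose imaginary part, entering through $\\Pi_H$, supplies the induced width.\n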
Because the $E_K$ is almost the same state\nas the kaon, the $E_K$ width is a very good estimate of the $K^0$ width.\nIn terms of the original basis, this width\narises from the vertex shown in Fig.~\\ref{fig:K_decay}.\nIn the interests of brevity, we do not rediagonalize, but\nsimply calculate the contribution\nof the vertex shown in Fig.~\\ref{fig:K_decay}\nto the kaon annihilation rate. This comes via the processes\n$K^0\\leftrightarrow H\\ H$ and $H\\ K^0 \\leftrightarrow H$. \nThe net kaon annihilation rate is\n\\begin{equation}\n\\Gamma_{\\rm total} = \\Gamma_{\\rm forward} - \\Gamma_{\\rm backward}\n = (1-e^{-\\delta\\mu_{K^0}\/T}) \\Gamma_{\\rm forward}\\ \n \\approx \\frac{\\delta\\mu_{K^0}}{T} \\Gamma_{\\rm forward}\\ ,\n\\end{equation}\nwhere we have used the properties of the Bose-Einstein distributions.\nWe keep only the first order in $\\delta\\mu_{K^0}$, and obtain\nthe average kaon width $\\gamma_K$ \\eqn{gamma_K}, remembering that\n$\\delta\\mu_K$ in section \\ref{sec:generalities} was the amplitude of a complex\noscillation, so $\\delta\\mu_{K^0} = \\delta\\mu_K\\exp(i\\omega t)$,\n\\begin{equation}\n\\gamma_K = \\left(\\frac{\\partial n_K}{\\partial \\mu_K}\\right)^{-1} \n \\frac{\\Gamma_{\\rm forward}(\\delta\\mu_K = 0)}{T} \\ .\n\\label{gamma_micro}\n\\end{equation}\nWe can therefore obtain the average kaon width simply from the forward\nrates $K^0\\to H\\ H$ and $H\\ K^0 \\to H$.\nThe contribution from $K^0\\to H\\ H $ is\n\\begin{equation}\n\\Gamma_{K^0 \\to H H} = \\half \\int_p \\int_{q_1} \\int_{q_2}\\, |M|^2\\, (2\\pi)^3\\, \n \\delta(\\bm{p} - \\bm{q_1} - \\bm{q_2})\\, (2\\pi)\\, \\delta(p_0 - vq_1 - vq_2)\\,\n F_{BE}(p_0,q_1,q_2)\n\\end{equation}\nwith $|M|^2$ given in \\eqn{KHH_amp} and $F_{BE}(p_0,q_1,q_2) =\nf(p_0 - \\delta\\mu_{K^0})(1+f(vq_1))(1+f(vq_2))$.\nThe rate for $K^0\\ H \\to H$ can be obtained by multiplying by $2$ for\nthe symmetry factor difference, \nswitching $q_2 \\to -q_2$ in both delta functions and turning \n$(1+f(vq_2)) \\to f(vq_2)$ to make that $H$ an incoming particle. \nAdding the two contributions, and performing\nthe $q_2$ integral using the momentum-conserving delta-function,\nwe obtain the total forward rate\n\\begin{equation}\n\\Gamma_{\\rm forward} = G_{ds}^2 f_\\pi^2 f_H^2 \\int_p\\, f(E_K)\\, (1+f(E_K))\\, \n\\frac{(E_K^2 - v^4\\, p^2)^2\\, \\Pi_H(E_K,p)}{\n(E_K^2 - v^2\\, p^2)^2 + \\Pi_H(E_K,p)^2}\n\\label{K_decay_rate}\n\\end{equation}\nwhere $\\Pi_H(E_K,p)$ was defined in Eq.~\\eqn{PiH}.\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=10cm]{new_figs\/rate_int}\n\\caption{Plot of the integrand of \\eqn{K_decay_rate}\nas a function of momentum for $p<{\\bar{p}}$. In this plot $\\mu=400~\\MeV$,\n$m_s=120~\\MeV$, and $m_K=27.92~\\MeV$, so ${\\bar{p}} = 22.15~\\MeV$ and $T_a \n\\sim 0.14~\\MeV$.\nFor $T\\ll{\\bar{p}}$ the integral is dominated by low-momentum kaons,\ni.e.~$p\\ll{\\bar{p}}$ ($T=0.05~\\MeV$ line in plot). \nBut as $T$ gets closer to ${\\bar{p}}$,\nthe near-singularity at $p={\\bar{p}}$ becomes more important, and\nat $T=0.5~\\MeV$ the integral is actually dominated by the region\nwhere $p$ is very close to ${\\bar{p}}$.\n\\label{fig:integrand}\n}\n\\end{center}\n\\end{figure}\n\nIn general this can be evaluated numerically and then combined with \n\\eqn{gamma_micro}, \\eqn{zeta_K} and \\eqn{C_gamma} to obtain the bulk viscosity.\nHowever, for certain temperatures, the integrand is dominated by\nmomenta that make the denominator as small as possible, i.e. \nwhen $E_K = v\\,p$. 
\nThis corresponds to the $H\\leftrightarrow K^0$ resonance, at which\nthe kaon has a special momentum\n$\\bar p$ such that an $H$ with that momentum is also on shell,\n\\begin{equation}\n{\\bar{p}} = \\frac{m_K^2 - {\\mu^{\\rm eff}_{K^0}}^2}{2\\, v \\, {\\mu^{\\rm eff}_{K^0}}} = \\frac{\\delta m}{v}\\left(1 + \\frac{\\delta m}{2 {\\mu^{\\rm eff}_{K^0}}}\\right) \\ ,\n\\label{pbar}\n\\end{equation}\nwhere $\\delta m = m_K - {\\mu^{\\rm eff}_{K^0}}$.\nNear this momentum, the virtual $H$ in Fig.~\\ref{fig:K_decay} is\nalmost on shell, so momenta close to $\\bar p$ dominate the integral.\nAs long as the numerator is slowly varying, we can approximate the\nsharp peak as a delta-function, obtaining\n\\begin{equation}\n\\Gamma_{\\rm forward} \\approx \\frac{G_{ds}^2 f_\\pi^2 f_H^2}{18\\sqrt{3}\\pi} \n (1+m_K^2\/{\\mu^{\\rm eff}_{K^0}}^2) \\bar{p}^4 \n \\frac{e^{v\\bar{p}\/T}}{(e^{v\\bar{p}\/T}-1)^2} \\ .\n\\label{rate_onshell}\n\\end{equation}\nThis expression becomes invalid at very low temperature $T \\ll T_a$, \nwhen there are very few kaons with momentum ${\\bar{p}}$ so the main\ncontribution does not come from $p\\approx {\\bar{p}}$, and at very high temperature\n$T\\gg T_b$ when there are so many thermal kaons with $p>{\\bar{p}}$ that\nthey outweigh the contribution from the resonance. One determines\n$T_a$ and $T_b$ by setting the non-resonant contribution\nequal to the resonant\nvalue from Eq.~\\eqn{rate_onshell}, giving\n$T_{a,b}$ as a function of $\\delta m$.\n$T_b$ has to be determined numerically and we find that $T_b \\sim 9~\\MeV$ for\n$\\delta m = 0.1~\\MeV$ and monotonically increases as $\\delta m$ increases, so\n$T_b$ is almost always higher than the temperature range that is of\nphysical interest. $T_a$ is given by the following condition:\n\\begin{equation}\nT_a \\approx \\frac{\\delta m^2}{2{\\mu^{\\rm eff}_{K^0}}}\\left(\n \\ln\\Bigl(\\frac{\\mu^4}{\\delta m\\, (m_K\\, T_a)^{3\/2}}\\Bigr)\\right)^{-1} \\ ,\n\\label{temp_a}\n\\end{equation}\nwhere the $T_a$ dependence on the right side is logarithmically weak,\nso we expect that $T_a \\lesssim \\delta m^2\/(2{\\mu^{\\rm eff}_{K^0}})$.\n\nThe integrand of \\eqn{K_decay_rate} is plotted in Fig.~\\ref{fig:integrand},\nfor several different values of the temperature, showing that\nfor lower temperatures, $T < T_a$, the integrand is a smooth\nfunction, with a broad peak in the low-momentum region, but\nas the temperature rises, and the number of thermal\nkaons with momentum $\\bar p$ rises, the integrand develops a very\nsharp peak at $p=\\bar p$, where the intermediate $H$ is on shell.\n\n\\subsection{Charged Kaon Rates}\n\\label{sec:K+_rates}\n\nIn principle there is also a contribution to kaon number violation\nfrom the charged kaon modes, which necessarily involve charged leptons.\nWe will now show that this can be neglected compared to the contribution\nfrom the neutral kaons.\nThe lightest charged kaon is the $K^+$, and the relevant creation\/annihilation\nreactions are\n\\begin{eqnarray}\nK^+ \\leftrightarrow e^+ \\nu_e \\\\\nK^+ + e^- \\leftrightarrow \\nu_e \\\\\nK^+ + \\bar{\\nu_e} \\leftrightarrow e^+. 
\n\\end{eqnarray}\nThe matrix element for these processes has been calculated in \nRefs.~\\cite{Jaikumar:2002, Reddy:2003} and is given by\n\\begin{equation}\nA = G f_{\\pi} \\sin \\th_c p_{\\mu} \\bar{e}(k_1)\\gamma^{\\mu}(1-\\gamma_5)\\nu(k_2),\n\\end{equation}\nwhere $G$ is the appropriate coupling constant for the charged kaons in the \nmedium, which we expect to be of order $G_F$.\nSumming over the lepton spins, we find\n\\begin{equation}\nM^2 = G^2\\, f_\\pi^2\\, \\sin^2 \\th_c\\, m_e^2\\, (k_1 \\cdot k_2)\n\\end{equation}\nThe rate of the first reaction is \n\\begin{eqnarray}\n\\Gamma &=& \\int_0^{x_0} dx \\int_{y_1}^{y_2} dy x\\, y\\, \\frac{x+y+{\\mu^{\\rm eff}_{K^+}}\/T}{x+y}\n (1-z) F(x,y)\\\\\nz &=& \\frac{1}{2 v^2 x y}\\bigg((x^2+y^2)(1-v^2) + 2(x+y){\\mu^{\\rm eff}_{K^+}}\/T \n + 2xy + {({\\mu^{\\rm eff}_{K^+}}^2 - m_{K^+}^2)\/T^2}\\bigg) \\nonumber \\\\\nF(x,y) &=& \\frac{\\delta{\\mu_{K^+}}}{T}\\, \\frac{e^{x+y}}{(e^{x+y}-1)(e^x+1)(e^y+1)},\n \\nonumber\n\\end{eqnarray}\nwhere $F(x,y)$ is the product of the distribution\nfunctions for the kaon\nand two leptons to lowest order in $\\delta {\\mu_{K^+}}$. \nThe rates for the second and third reactions, which are identical\nfor a massless electron, can be derived from a simple \nchange of $x \\rightarrow -x$ and $y \\rightarrow -y$, respectively. \nThey can then be evaluated numerically. \nNote that these calculations are done keeping only the \nlowest order term in $m_e$ and setting $\\mu_e = 0$. \n\nWe can then compare this rate to the rate for neutral kaon decay and find\nthat \n\\begin{equation}\n\\frac{\\Gamma_{K^+}}{\\Gamma_{K^0}} \\sim \\frac{T^2}{\\mu^2},\n\\label{K+suppression}\n\\end{equation}\nfor $m_{K^0} \\approx m_{K^+}$, so the contribution from charged kaons\nis suppressed by a factor of $(T\/\\mu)^2$. This is to be expected, since\nthe phase space for quarks is of order $\\mu^2 T$,\nlocalized near the quark Fermi surface, whereas the phase space for\nelectrons is of order $T^3$, since in the CFL phase there is no \nFermi sea of electrons.\n\n\\section{Results}\n\\label{sec:results}\n\nOur result for the contribution\nof kaons to the bulk viscosity of CFL quark matter is given by\nequations \\eqn{zeta_K}, \\eqn{C_gamma}, \\eqn{gamma_micro} and \n\\eqn{K_decay_rate}.\nThe bulk viscosity is most sensitive to the temperature, and\nto the kaon energy gap\n\\begin{equation}\n\\delta m \\equiv m_K - {\\mu^{\\rm eff}_{K^0}} = m_K - \\frac{m_s^2-m_d^2}{2\\mu} \\ .\n\\label{energy_gap}\n\\end{equation}\nIt also depends on the orthogonal combination $m_K+{\\mu^{\\rm eff}_{K^0}}$, \nthe quark chemical potential $\\mu$ and the CFL\npairing gap $\\Delta$. 
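For orientation: with $\\mu=400~\\MeV$, $m_s=120~\\MeV$ and $m_d=8~\\MeV$ (the light-quark mass value that reproduces the ${\\mu^{\\rm eff}_{K^0}}$ used in our plots below),\n\\begin{equation}\n{\\mu^{\\rm eff}_{K^0}} = \\frac{m_s^2-m_d^2}{2\\mu} = \\frac{(120~\\MeV)^2-(8~\\MeV)^2}{800~\\MeV} = 17.92~\\MeV \\ ,\n\\end{equation}\nso kaon masses of order tens of MeV correspond to energy gaps $\\delta m$ ranging from a fraction of an MeV to tens of MeV.\n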
As discussed in section \n\\ref{sec:meson_mass}, we have no reliable way to calculate\n$m_K$ in the density range of interest for compact stars, so in\npresenting our results we will treat $\\delta m$ and $T$ as parameters.\n\nUsing the definitions of the densities of kaons and quarks, one\ncan derive the asymptotic versions of $C$ for temperatures\nfar above and far below the kaon energy gap (Table \\ref{tab:asymptotics}).\nOne can also derive the low-temperature version of the rate and,\ncombining it with $C$, the low-temperature version\nof the bulk viscosity (Table \\ref{tab:asymptotics2}).\nAs one would expect, most kaon-related quantities, including the\nbulk viscosity, are suppressed by $\\exp(-\\delta m\/T)$ at low temperatures.\nThis is because the energy gap $\\delta m$ is the minimum energy required to \ncreate a $K^0$, so the population of thermal kaons is suppressed by a \nBoltzmann factor. \n\n\n\\begin{table}\n\\def\\rule[-2ex]{0em}{5ex}{\\rule[-2ex]{0em}{5ex}}\n\\[\n\\begin{array}{c@{\\qquad}c@{\\qquad}c}\n\\hline\n \\mbox{quantity} & \\multicolumn{2}{c}{\\mbox{asymptotic form}} \\\\[1ex]\n & T \\ll \\delta m & m_K \\ll T \\ll \\mu \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} n_K & (m_K T)^{3\/2}e^{-\\delta m\/T} & T^3 \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\deriv{n_K}{{\\mu^{\\rm eff}_{K^0}}} & m_K (m_K T)^{1\/2}e^{-\\delta m\/T} & T^2 \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\deriv{n_K}{\\mu} & -\\frac{m_K^2}{\\mu} (m_K T)^{1\/2} e^{-\\delta m\/T} \n & -\\frac{m_K}{\\mu} T^2 \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} n_q & \\mu^3 & \\mu^3 \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\deriv{n_q}{\\mu} & \\mu^2 & \\mu^2 \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} C & m_K^3 (m_K T)^{1\/2} e^{-\\delta m\/T} & T^4\\\\\n\\hline\n\\end{array}\n\\]\n\\caption{\nAsymptotic forms for the densities and the coefficient $C$. Constant\nnumerical factors are not shown and it is implicitly assumed that\n$T<0.57\\Delta$ so that there is a CFL condensate, even\nwhen $T \\gg m_K$. \n}\n\\label{tab:asymptotics}\n\\end{table}\n\n\\begin{table}\n\\def\\rule[-2ex]{0em}{5ex}{\\rule[-2ex]{0em}{5ex}}\n\\[\n\\begin{array}{c@{\\qquad}c@{\\qquad}c}\n\\hline\n \\mbox{quantity} & \\multicolumn{2}{c}{\\mbox{approximate form}} \\\\[1ex]\n & T < T_a(\\delta m) \\ll \\delta m & T_a(\\delta m) < T \\lesssim \\delta m \\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\Gamma_{\\rm forward} & G_F^2\\, \\sqrt{m_K^3\\,T^3}\\, \\delta m^5\\, e^{-\\delta m\/T} \n & G_F^2\\,\\mu^4\\,{\\bar{p}}^4\\,e^{-v{\\bar{p}}\/T}\\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\ga_{\\rm eff} & G_F^2\\, \\delta m^5 \n & G_F^2\\,\\mu^4\\,{\\bar{p}}^4\\,(m_K\\,T)^{-3\/2}\\,e^{-\\delta m^2\/(2\\,{\\mu^{\\rm eff}_{K^0}}\\,T)}\\\\\n\\hline\n\\rule[-2ex]{0em}{5ex} \\zeta & G_F^2\\,\\delta m^5\\,m_K^{7\/2}T^{1\/2}\\,e^{-\\delta m\/T}\\,\\omega^{-2}\n & G_F^2\\,\\mu^4{\\bar{p}}^4\\,m_K^2\\,T^{-1}\\,e^{-v{\\bar{p}}\/T}\/{(\\ga_{\\rm eff}^2 + \\omega^2)}\\\\\n\\hline\n\\end{array}\n\\]\n\\caption{\nApproximate forms of the bulk viscosity and related quantities,\nfor small $T$. Constant numerical factors are not shown.\nThe rate has two separate ranges within the $T\\ll\\delta m$ region:\n$T < T_a\\ll \\delta m$ and $T_a < T \\ll \\delta m$, where\n$T_a\\lesssim \\delta m^2\/(2\\,{\\mu^{\\rm eff}_{K^0}})$ \\eqn{temp_a}.\nNote that ${\\bar{p}}$ is related to $\\delta m$ by \\eqn{pbar}. 
\nThe low temperature entry for $\\zeta$ is in general proportional\nto $(\\ga_{\\rm eff}^2 + \\omega^2)^{-1}$ rather than just $\\omega^{-2}$, but in this\ntemperature range $\\ga_{\\rm eff}$ is always much less than\nastrophysically relevant frequencies ($\\omega\\gtrsim 1$~Hz).\nThere is no third column for the higher end of the range of temperatures\nthat we study in this paper, $T\\sim m_K,\\,{\\mu^{\\rm eff}_{K^0}}$, because\nalthough we can still use\n\\eqn{rate_onshell} for $\\Gamma_{\\rm forward}$, there is no simple form for\n$\\deriv{n_K}{{\\mu^{\\rm eff}_{K^0}}}$ and hence for $\\ga_{\\rm eff}$ or $\\zeta$.\n}\n\\label{tab:asymptotics2}\n\\end{table}\n\n\nTo illustrate the likely contribution of kaons to the bulk viscosity\nof quark matter in compact stars, we now evaluate the bulk viscosity\nnumerically for a range of $\\delta m$ and $T$.\nOur calculations are performed at $\\mu=400~\\MeV$. \nWe vary $\\delta m$ by varying $m_K$ with ${\\mu^{\\rm eff}_{K^0}}$ fixed at $17.92~\\MeV$,\ncorresponding to $m_s=120~\\MeV$.\n\nCompact stars have internal temperatures\nin the MeV range immediately after the supernova, and then cool\nto temperatures in the keV range over millennia, so we explore the range\n$0.01~\\MeV \\lesssim T \\lesssim 10 ~\\MeV$.\nSince $m_K$ and\n${\\mu^{\\rm eff}_{K^0}}$ are both expected to be of order tens of MeV\n\\cite{Schafer:2002ty}, we expect $\\delta m$ to be generically of the same\norder, so we explore the range $0.1~\\MeV \\lesssim \\delta m \\lesssim 10~\\MeV$.\n\nThe bulk viscosity is determined by the kaon equilibration rate\n$\\gamma_K$ \\eqn{gamma_micro} and the coefficient $C$ \\eqn{C_gamma}, so\nwe plot these quantities separately before plotting the\nbulk viscosity.\n\n\\subsection{Proportionality constant $C$ (Fig.~\\ref{fig:c})}\n\n\\begin{figure}[htb]\n\\includegraphics[width=0.49\\textwidth]{new_figs\/c_mKmuK}\n\\hspace{0.02\\textwidth} \n\\includegraphics[width=0.49\\textwidth,angle=0]{new_figs\/c_varyT}\n\\caption{Coefficient $C$ \\eqn{C_gamma}\nas a function of $\\delta m$ (left panel) and temperature (right panel)}\n\\label{fig:c}\n\\end{figure}\n\nIn Fig.~\\ref{fig:c} we show how $C$ depends on $\\delta m$ and $T$.\nRoughly speaking, $C$ measures how sensitive the kaon and quark number\nare to changes in $\\mu_K$ and $\\mu$.\nAt low temperatures $T\\ll\\delta m$, $C$ is suppressed\nby an exponential factor $\\exp(-\\delta m\/T)$, so the curves drop rapidly\nin the high $\\delta m$ region of the left panel, and the low $T$ region\nof the right panel. We also see that the curves for different $\\delta m$\nstart to converge at high temperature (right panel). This is because\nat high enough temperature (beyond the range that we study)\n$C$ would become proportional to $T^4$, independent of $\\delta m$\n(see table \\ref{tab:asymptotics}).\n\n\n\\subsection{Kaon width $\\ga_{\\rm eff}$ (Fig.~\\ref{fig:width})}\n\n\\begin{figure}[hbt]\n\\includegraphics[width=0.49\\textwidth]{new_figs\/rates_mKmuK}\n\\hspace{0.02\\textwidth} \n\\includegraphics[width = 0.49\\textwidth]{new_figs\/rates_varyT}\n\\caption{Plot of average $K^0$ decay width $\\gamma_K$ \\eqn{gamma_micro},\n\\eqn{K_decay_rate} as a function of $\\delta m$ \n(left panel) and temperature (right panel). The horizontal dashed line \nshows where the width is 1 kHz (${\\omega\/2\\pi} = 1~{\\rm ms}^{-1}$),\nthe fastest rotation rate of compact stars. 
The charged kaon width\nis also shown (dotted line) to illustrate that it is a subleading\ncontribution to strangeness equilibration \\eqn{K+suppression}.\nThe transition that occurs\nat $\\delta m \\approx T$ is where the rate becomes dominated by the $H$\nresonance (Section \\ref{sec:K0_rates}).\n}\n\\label{fig:width}\n\\end{figure}\n\nIn Fig.~\\ref{fig:width} we show how the neutral kaon effective\nwidth $\\gamma_K$ depends on $T$ and $\\delta m$.\n(We also show one charged kaon width curve \nto illustrate that it is subleading \\eqn{K+suppression}.)\n\nThe $\\delta m$-dependence is shown in the left panel.\nFrom Table \\ref{tab:asymptotics2}, we expect that at a fixed\ntemperature $T$, for sufficiently\nlarge $\\delta m$, $T_a(\\delta m)$ will become greater than $T$, and\n$\\gamma_K$ will then rise as $\\delta m^5$. This is seen at the upper end of the\n$T = 0.01~\\MeV$ curve, and corresponds to the region where\nlow-momentum ($p<{\\bar{p}}$) kaons dominate the rate.\nFor the rest of the $T = 0.01~\\MeV$ curve, and\nfor all the other curves in the plot, the equilibration is dominated by\nkaons at the $H$-resonance, with momentum ${\\bar{p}}$.\nThe width shows a peak as a function of $\\delta m$, which follows from the\napproximate form for $\\ga_{\\rm eff}$ given in Table \\ref{tab:asymptotics2}\n(second column). Using \\eqn{pbar} to relate $\\delta m$ to ${\\bar{p}}$, one\ncan see $\\ga_{\\rm eff} \\sim \\delta m^4 \\exp(-\\delta m^2\/(2 {\\mu^{\\rm eff}_{K^0}} T))$, which is\npeaked at $\\delta m_{\\rm peak} = 2\\sqrt{ {\\mu^{\\rm eff}_{K^0}} T}$. For our plots\n$ {\\mu^{\\rm eff}_{K^0}} \\approx 18~\\MeV$, which gives the observed positions\nof the peaks.\n\nThe $T$-dependence is shown in the right panel.\nAt the very lowest temperatures $\\ga_{\\rm eff}$ will have \na constant value which depends on $\\delta m$ (Table \\ref{tab:asymptotics2}).\nThis is clear in the curves for $\\delta m=5~\\MeV$ and $\\delta m=1~\\MeV$.\nAs with the large $\\delta m$ region of the left panel, this is where\nlow-momentum kaons dominate the rate.\nIn the intermediate temperature region the width rises quickly\nand then peaks and drops off slowly. This comes from the competition \nin \\eqn{gamma_micro} between $\\Gamma_{\\rm forward}\/T$, which is\nmonotonically increasing with $T$ \\eqn{rate_onshell},\nand $(d{n_K}\/d{\\mu_K})^{-1}$, which is monotonically decreasing with $T$.\nAt high enough temperature, the expression \\eqn{rate_onshell}\nfor $\\Gamma_{\\rm forward}\/T$ rises as $T$,\nwhile $d{n_K}\/d{\\mu_K}$ rises more quickly, so the width drops.\n\nAt high enough $T$, the curve with $\\delta m = 0.1~\\MeV$ starts to bend upwards.\nThis feature actually corresponds to $T\\gtrsim T_b$ (from Section \n\\ref{sec:K0_rates}) where \\eqn{rate_onshell} becomes invalid and \nkaons with high momentum ($p>{\\bar{p}}$) dominate the rate. 
For the other\ncurves, $T_b$ is beyond the range that we study.\n\n\\subsection{Bulk viscosity $\\zeta$ (Fig.~\\ref{fig:bv} and \\ref{fig:bv2})}\n\\label{sec:bv}\n\n\\begin{figure}[htb]\n\\includegraphics[width=0.49\\textwidth]{new_figs\/bv_mKmuK}\n\\hspace{0.02\\textwidth}\n\\includegraphics[width=0.49\\textwidth]{new_figs\/bv_varyT}\n\\caption{Plot of bulk viscosity as a function of $\\delta m$\n(left panel) and temperature (right panel).}\n\\label{fig:bv}\n\\end{figure}\n\nIt is useful to compare the behavior of the bulk viscosity in CFL\nquark matter with its behavior in quark matter phases whose\nre-equilibration is dominated by ungapped fermionic modes such as the\n2SC or single-flavor phases (see solid black line in\nFig.~\\ref{fig:bv}). In such phases, the bulk viscosity shows a peak\nas a function of $T$, because $\\ga_{\\rm eff}$ varies monotonically with $T$,\nwhile $C$ is determined by the phase space at the Fermi surface, and\nhence is insensitive to $T$ \\cite{Madsen:1992sx,Sa'd:2006qv,Alford:2006gy}. \nThis produces a single peak when $\\ga_{\\rm eff}$ is equal to the \nangular frequency $\\omega$ of \nthe applied compression oscillation (see \\eqn{zeta_K}). As we will\ndescribe below, our results show that in the CFL phase, the situation\nis more complicated, because $\\ga_{\\rm eff}$ is no longer a monotonic\nfunction of $T$, and also because $C$ can vary rapidly as the control\nparameters $\\delta m$ and $T$ are varied.\n\nThe dependence of the bulk viscosity (at frequency\n$\\omega\/2\\pi= 1$~kHz) on the kaon energy gap $\\delta m$\nis shown in the left panel of Fig.~\\ref{fig:bv}.\nFrom consideration of the factor of $\\ga_{\\rm eff}\/(\\ga_{\\rm eff}^2+\\omega^2)$\nin \\eqn{zeta_K} we would have expected two peaks for $T=0.01~\\MeV$\nand $0.1~\\MeV$, because from Fig.~\\ref{fig:width} we see that at these\ntemperatures $\\ga_{\\rm eff}$ passes through $\\omega=1$~kHz at two different\nvalues of $\\delta m$.\nIn fact we get one peak, close to the lower value of $\\delta m$\nat which $\\ga_{\\rm eff}=\\omega$. The higher peak is washed out by rapid variation\nof $C$ with $\\delta m$, which occurs when $\\delta m> T$ (see Fig.~\\ref{fig:c}).\nEven outside the physically relevant range of $\\delta m$ shown in our\nplots, we do not find additional peaks in the bulk viscosity.\n\nThe dependence of the bulk viscosity (again at $\\omega\/2\\pi = 1$~kHz) on the\ntemperature $T$ is shown in the right panel of Fig.~\\ref{fig:bv}.\nIt is a monotonically increasing function of $T$ for all values of $\\delta m$. \nThis is because, as is clear from the right panel of Fig.~\\ref{fig:c}, \n$C$ varies rapidly with temperature for all physically\nrelevant values of $\\delta m$. \nIn fact, the temperature-dependence of $C$ dominates the bulk viscosity,\nso the right panel of Fig.~\\ref{fig:bv} looks similar to the\nright panel of Fig.~\\ref{fig:c}. \n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{new_figs\/bv_varyw}\n\\end{center}\n\\caption{Bulk viscosity as a function of temperature for\noscillations of different frequencies. The curves peaked on the left\nare for unpaired 3-flavor quark matter\\cite{Madsen:1992sx} \nand the rising curves on the right are our calculation of\nthe kaonic bulk viscosity of CFL quark matter, for $\\delta m=1~\\MeV$.\n}\n\\label{fig:bv2}\n\\end{figure}\n\nFinally, Fig.~\\ref{fig:bv2} shows a plot of bulk viscosity \n$\\zeta$ as a function\nof temperature for different oscillation timescales, $\\tau=2\\pi\/\\omega$. 
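In interpreting this figure it is useful to keep in mind the two limits of \\eqn{zeta_K},\n\\begin{equation}\n\\zeta \\approx \\frac{C}{\\ga_{\\rm eff}} \\quad (\\ga_{\\rm eff}\\gg\\omega) \\ , \\qquad\n\\zeta \\approx \\frac{C\\,\\ga_{\\rm eff}}{\\omega^2} \\quad (\\ga_{\\rm eff}\\ll\\omega) \\ .\n\\end{equation}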
\nWe see that for unpaired quark matter,\n$\zeta_{\rm unp}$ is independent of $\omega$ at high temperatures,\nbecause $\gamma_{\rm unp}$ then rises far above $\omega$, \nso, by \eqn{zeta_K}, $\zeta = C\/\gamma_{\rm unp}$.\nHowever, $\zeta_{\rm CFL}$ depends more strongly on $\tau$ at high\ntemperatures, because $\ga_{\rm eff}$ is not much greater than $\omega$\nat high temperature (Fig.~\ref{fig:width}), so the $\omega$-dependence in\n\eqn{zeta_K} is not suppressed. The CFL bulk viscosity becomes larger\nas the frequency drops.\n\n\section{Conclusions}\n\label{sec:conclusions}\n\nWe have calculated the contribution of the lightest pseudo-Goldstone\nbosons, the neutral kaons, to the bulk viscosity of CFL quark matter.\nOur results are given by equations \eqn{zeta_K}, \eqn{C_gamma},\n\eqn{gamma_micro}, \eqn{K_decay_rate}, and are displayed for\nreasonable parameter choices in Figs.~\ref{fig:bv} and \ref{fig:bv2}.\nThe bulk viscosity is most sensitive to the temperature, and to the\nkaon energy gap $\delta m$ \eqn{energy_gap}. We find that, as one would\nexpect, the kaonic bulk viscosity falls rapidly when the temperature\ndrops below the kaon energy gap, since the kaon population is then\nheavily Boltzmann-suppressed. It is clear from the right-hand panel of\nFig.~\ref{fig:bv} that once the temperature falls below the 10 MeV\nrange (which is expected to occur in the first minutes after the\nsupernova \cite{Burrows:1986me}) \nthe bulk viscosity of CFL quark matter at kHz frequencies is\nsuppressed by many orders of magnitude relative to that of unpaired\nquark matter.\n\nIt is noticeable that at low temperatures the suppression is less\nsevere for smaller kaon energy gaps. However, $\delta m$ is a poorly-known\nparameter of the effective theory of the pseudo-Goldstone bosons. It\nis the difference of the kaon mass and the kaon effective chemical\npotential, both of which are expected to be roughly of the order of\n10~MeV \cite{Schafer:2002ty}, so it is unnatural to assume that $\delta m$\nis much smaller than an MeV or so.\nFor astrophysical applications, it is clear that CFL quark matter\ncan be sharply distinguished from unpaired quark matter by its bulk viscosity\n(as by many other transport properties) after the very earliest\ntimes in the life of a compact star. Also, a rapidly vanishing\nbulk viscosity as the temperature drops below $10~\MeV$ could be a\npotential observable associated with core collapse supernovae.\n\nIn general, bulk viscosity arises from re-equilibration in response to\ncompression. We have calculated the dominant contribution\nto the re-equilibration of flavor in quark matter, and we believe that\nthis is the dominant contribution to the bulk viscosity as a whole\nin the range of frequencies that are of astrophysical interest,\nnamely zero to 1000 Hz. Any other contribution would have to\ncome from degrees of freedom that equilibrate on a similar timescale,\nand the only possibility that we can imagine is the thermalization\nof the low-momentum tail of the thermal distribution of $H$ particles.\n\nOur results highlight several interesting questions for future\nresearch. A natural next step would be to extend our calculation to\nthe kaon-condensed ``CFL-$K^0$'' phase, which corresponds to allowing \nthe kaon energy gap to drop to zero.\nThere are also some technical issues in our calculation of the \nflavor changing rate that remain to be addressed. 
The graph shown \nin Fig.~\ref{fig:K_decay} includes the width of the $H$ boson due to the \none-loop thermal self energy. This corresponds to the resummation \nof a class of diagrams with multiple $H$ boson radiation and \nabsorption \cite{Manuel:2004iv}. However, since the mean free path \nassociated with small angle two-body collisions is of the same order \nof magnitude as the radiation length \n(the mean free path between $H$-bremsstrahlung events)\nthis approximation is not \ncorrect. A more complete approach has to take into account \nquantum mechanical interference, the Landau-Pomeranchuk-Migdal (LPM)\neffect, between different diagrams that have the same final \nstate \cite{Arnold:2002zm}.\nAnother relevant improvement would be\nto include higher-order corrections to the $H$\ndispersion relation \cite{Zarembo:2000pj} which could have\na strong effect on the collinear splitting amplitude for\n$H$ particles.\nWe do not expect that these improvements will affect our results significantly,\nbut they would change the numerical prefactors in the rate.\nFinally, it should be\nnoted that we treated the kaon mass as a numerical parameter, but it\nis expected to be density-dependent, and its $\mu$-dependence\nwill feed into our expressions for the bulk viscosity. We expect\nthat this would only weakly affect our results, but we have not\nperformed an explicit check.\n\n\medskip\n\begin{center} {\bf Acknowledgements} \end{center}\n\n\noindent\nWe thank Andreas Schmitt and Tanmoy Bhattacharya\nfor helpful discussions. \nMGA acknowledges the financial support of a short-term fellowship\nfrom the Japan Society for the Promotion of Science and\nthe hospitality of the Hadronic Theory Group at Tokyo University,\nwhere this work was completed.\nMGA and TS thank the Yukawa Institute for Theoretical Physics at \nKyoto University, whose YKIS2006 workshop on ``New Frontiers in QCD'' \nprovided a valuable forum for discussions. This research was\nsupported in part by the Offices of Nuclear Physics and High\nEnergy Physics of the Office of\nScience of the U.S.~Department of Energy under contracts\n\#DE-FG02-91ER40628, \n\#DE-FG02-05ER41375 (OJI),\n\#DE-FG02-03ER41260,\nW-7405-ENG-36. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $\mathcal F = \{S_1, \dots, S_n \}$ be a set of sets. The \\emph{intersection graph} $G$ of $ \mathcal F $ is the graph whose vertex-set is $\mathcal{F}$ and two vertices $ S_i $ and $ S_j $ are adjacent in it if and only if $ i \neq j $ and $S_i \cap S_j \neq \varnothing $. If there exists a set $ S \subseteq \mathbb{R}^d $ such that all sets $ S_i $ are transformations of $ S $ obtained by translation and independent scaling in the directions of the axes, then we say that $ G $ is an $ S $-graph.\n\n\nA \emph{hereditary class} (or \emph{class}, for short) of graphs is a set of graphs closed under induced subgraph and isomorphism. The class \emph{spanned} by a set of graphs is the smallest class containing all the graphs in the set. Notice that the set of all $ S$-graphs is a hereditary class. \n\nThe \emph{chromatic number} of a graph $G$, denoted by $\chi(G)$, is the smallest integer $k$ such that we can\npartition the vertex-set of $G$ into $k$ stable sets. A \emph{clique} in a graph is a set of pairwise adjacent vertices, and the size of the biggest clique in $ G $ is denoted by $ \omega(G) $ and is called the \emph{clique number} of $ G $. 
It is clear from the definition that $ \chi(G) \geq \omega(G)$. \n\nAn interesting topic in the study of intersection graphs, in particular $ S $-graphs, is their chromatic number. Let $ S $ be a set in $ \mathbb{R}^d $ and let $\mathcal{C}_S$ be the class of $ S$-graphs. \nSince $ \mathcal C_S $ contains graphs with arbitrarily large cliques, the graphs in $ \mathcal C_S $ have arbitrarily large chromatic number. However, it is interesting to know whether big cliques are the only reason that those graphs have unbounded chromatic number. In particular, one can state the following (weaker) question for a fixed $ k \in \mathbb{N}$:\n\begin{question} \label{question:chi-bounded}\n\tIs there a number $ c \in \mathbb{N}$ such that for every $ G \in \mathcal C_S $ with $\omega(G) \leq k$, we have~$ \chi(G) \leq c $?\n\end{question}\n\nThe case of $ k=2 $ is in particular studied more than the other cases. We call a graph $ G $ \emph{triangle-free} if $ \omega(G) \leq 2 $, and we say that a class is \emph{triangle-free} if all graphs in the class are triangle-free. \n\nWe remark that one can state a more general question of whether the chromatic number of each graph in a class is bounded above by a function of its clique number. However, this is not the concern of this paper. So, we refer to~\cite{Scottsurvey} for more information regarding such studies in $\chi$-boundedness. \n\n\n\n-- An \emph{interval graph} is an intersection graph of intervals in $\mathbb{R} $. It is well-known that interval graphs are perfect graphs, meaning that for every graph $ G $ in the class, one has $ \chi(G) = \omega(G)$. So, the answer to Question~\ref{question:chi-bounded} is positive for interval graphs, and the constant $ c $ is equal to $k $.\n\n-- In 1960, Asplund and Gr\"{u}nbaum proved that this can be generalized to 2 dimensions as well. In~\cite{Asplind60}, they showed that \nan intersection graph of axis-aligned rectangles in $ \mathbb{R}^2 $ with clique number at most $ k $\nhas chromatic number at most $4k^2 - 3k $. \n\n-- Starting from the third dimension, however, the situation changes. \n\nIn 1965, in his Ph.D. thesis~\cite{Burling65}, Burling studied what we can describe in graph theoretical terms as the chromatic number of intersection graphs of polytopes in $ \mathbb{R}^d $ where there are $m $ fixed lines in $ \mathbb{R}^d $ such that the edges of the polytopes are parallel to at most $ m' $ lines out of those $ m $ lines. Among other results, he showed that for the case of $ d \geq 3 $, i.e.\ when we have at least three dimensions, for any $ m'$ and $m$, and for any $ k \in \mathbb{N}$, the answer to Question~\ref{question:chi-bounded} is negative, i.e. the graphs have unbounded chromatic number.\n\nTo prove the mentioned result, Burling first\nreduced the problem to the case of the triangle-free intersection graphs of \emph{axis-aligned boxes in $\mathbb{R}^3$} (box graphs, for short). Then, he found a sequence $\{G_k\}_{k \geq 1} $ of triangle-free box graphs such that $ \chi(G_k) \geq k $.\n\nThe sequence $\{G_k\}_{k\geq 1}$ is known as the \emph{sequence of Burling graphs}. The class of graphs spanned by $ \{G_k : k\geq 1 \} $ is the \emph{class of Burling graphs}. So, in particular, the class of Burling graphs is a subclass of triangle-free box graphs. 
\n\n-- In 2012\\footnote{even thought~\\cite{Pawlik2012} is published in 2014, the first version on arXiv is from 2012, and historically, it has appeared before their next paper~\\cite{Pawlik2013}}, in~\\cite{Pawlik2012}, Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak showed that the answer to \nQuestion~\\ref{question:chi-bounded} is negative for triangle-free line-segment graphs, answering a question of Erd\\H{o}s. To prove this result, they found a sequence $ \\{G_k\\}_{k \\geq 1}$ of triangle-free line-segment graphs such that $ \\chi(G_k) \\geq 1 $. Surprisingly, the $ k$-th graph in their sequence is isomorphic to the $k$-th graph is the sequence of Burling graphs. So, indeed in~\\cite{Pawlik2013}, Burling graphs are rediscovered, but this time, as a subclass of triangle-free line-segment graphs. \n\n-- Later, in~\\cite{Pawlik2013}, Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak extended their result from~\\cite{Pawlik2012}. They proved that not only for line segment graphs, but for every set $ S \\subseteq \\mathbb{R}^2 $ that is compact, path-connected, and different from an axis-aligned rectangle, the class of triangle-free $ S $-graphs have unbounded chromatic number. To do so, they introduce a sequence $\\{\\mathcal F_k \\}_{k \\geq 1}$ where each $ \\mathcal F_k $ is a collections of transformations of $ S $ (obtained by translation and independent scaling in the directions of axis), and they showed that the intersection graph of $ \\mathcal F_k $ is triangle-free and has chromatic number at least $ k$. It is easy to check that this once again, the intersection graph of $ \\mathcal F_k $ is isomorphic to $ G_k $, the $ k$-th graph in the sequence of Burling graphs. So, in other words, the class of Burling graphs is a subclass of triangle-free $ S $-graphs.\n\nSo, thanks to this result in~\\cite{Pawlik2013}, the answer to Question~\\ref{question:chi-bounded} for $ k = 2$ is known for any set $ S $ in $ \\mathbb{R}^2 $ that is compact and path-connected. \n\nIn~\\cite{Pawlik2012}, it is also explained how their result disproves a conjecture by Scott (Conjecture 8 in~\\cite{Scott97}) from 1997. This new application of Burling graph created new motivations to know this class of graphs better, in particular as intersection graphs. \n\n\n-- With this motivation, in 2016, Chalopin, Esperet, Li and Ossona de Mendez~\\cite{Chalopin2014} studied Burling graphs as \\emph{frame graphs} (a \\emph{frame} is the boundary of an axis-aligned rectangle). By setting a few restriction on how the frames can intersect, they defined the class of \\emph{restricted frame graphs}, a proper subclass of triangle-free frame graphs that contains all Burling graphs. Their work resulted in a better understanding of Burling graphs and more applications of them in solving $\\chi$-boundedness problems. \n\n-- In 2021, in~\\cite{BG1}, Trotignon and the author introduced the class of \\emph{strict frame graphs}, a subclass of triangle-free restricted frame graphs, by adding one more restriction to the set of restrictions defined in~\\cite{Chalopin2014}. They proved that the class of strict frame graphs is equal to the class of Burling graphs. In~\\cite{BG1}, they also define \\emph{strict line-segment graphs} and \\emph{strict box graphs}, subclasses of triangle-free line segment graphs and triangle-free box graphs, by setting a few restriction on how the sets can intersect. 
They proved that these two classes are also equal to the class of Burling graphs, thus finding Burling graphs not only as a subclass of intersection graphs, but as an exact class of intersection graphs with some restrictions. \n\n\smallskip\n\nIn this article, we extend the mentioned result from~\cite{BG1} to any set $ S \subseteq \mathbb{R}^2 $ that is compact, path-connected, and different from an axis-aligned rectangle. By setting constraints on how the sets can intersect, we define \emph{constrained $ S $-graphs} for any such set $ S $ and prove that the class of constrained $ S $-graphs is equal to the class of Burling graphs.\n\nIn Section~\ref{sec:paths}, we introduce some topological lemmas and notions that are used in the rest of the paper. In Section~\ref{sec:lyon-sets}, we introduce some notations concerning the sets that we work with. In Section~\ref{sec:constrained-graphs}, we define the class of constrained $ S $-graphs as well as the class of \emph{constrained graphs}. Finally, in Section~\ref{sec:equality}, we prove that these two classes are equal and that they are both equal to the class of Burling graphs. To do so, we use an equivalent definition of Burling graphs from~\cite{BG1}, called \emph{abstract Burling graphs}, which we present in Section~\ref{sec:Burling-Graphs}.\n\n\n\subsection*{Notation}\n\nWe use the standard notations from graph theory and topology. For any notation or term not defined in the article, we refer to~\cite{BondyMurty} (for graph theory) and~\cite{munkres} (for topology).\n\nAll graphs in this article are without multiple edges or loops. We denote the vertex-set and the edge-set of a graph $ G$ with $ V(G) $ and $E(G) $ respectively.\n\nFor a set $ A $ in $\mathbb{R}^d$, we denote the interior and the closure of $ A $ respectively by $ A^\circ$ and $ \bar{A}$. Moreover, we denote the boundary of $ A $ by $ \partial A $, i.e. $ \partial A = \bar{A} \setminus A^\circ $. \n\nWe consider $\mathbb{R}^d $, and in particular $\mathbb{R}^2$, with its usual topology. We denote the ball of radius~$ r $ and center~$ c $ in $ \mathbb{R}^d $ by $ D(c, r) $. \nWe denote the projections on the x-axis and y-axis in $ \mathbb{R}^2 $ respectively by $ \pi_1$ and $\pi_2$.\nWe denote the image of a function $ f $ by $\im{f} $, and the restriction of $ f $ to a set $ A $ in its domain by $ f|_{A}$.\n\nWe postpone the introduction of any other notation to the sections that they are used in.\n\n\n\section{Paths and crossings} \label{sec:paths}\n\nIn this section, we introduce a few notions and prove some lemmas about them. These lemmas will be useful in the proofs of the next sections. \nLet us start with a lemma.\n\n\n\begin{lemma} \label{lem:real-border-lemma}\n\tLet $ X $ be a topological space and let $ A, B \subseteq X$. If $ B $ is connected, $ B \cap A^\circ \neq \varnothing$, and $ B \cap [X \setminus \bar{A}] \neq \varnothing $,\n\tthen $ B \cap \partial A \neq \varnothing $. \n\end{lemma}\n\begin{proof}\n\tNotice that\n\t$$\n\tB = [B \cap A^\circ] \cup [B \cap (X\setminus \bar{A})] \cup [B \cap \partial A].\n\t$$\n\tThe sets $ B \cap A^\circ $ and $ B \cap (X\setminus \bar{A})$ are both open in $B $ and each is non-empty by the assumption. Moreover, their intersection is the empty set. So, if $ B \cap \partial A = \varnothing $, then $ B $ can be written as the union of two non-empty and non-intersecting sets that are open in $ B$, and thus $ B $ is not connected, a contradiction. 
\n\\end{proof}\n\nAn \\emph{axis-aligned rectangle} in $ \\mathbb{R}^2 $ is a set $ I_1 \\times I_2 \\in \\mathbb{R}^2 $ were $ I_1 $ and $I_2 $ are intervals in $ \\mathbb{R}$. Notice that vertical and horizontal line segments are axis-aligned rectangles. We often use the word \\emph{rectangle} to refer to axis-aligned rectangles.\nLet $ S $ be a subset of $ \\mathbb{R}^2 $. We define the following notions on $ S $:\n\\begin{align*}\n\t\\lset S &= \\inf \\{x : (x,y) \\in S \\}, \\\\\n\t\\rset S &= \\sup \\{x : (x,y) \\in S \\}, \\\\\n\t\\bset S &= \\inf \\{y : (x,y) \\in S \\}, \\\\\n\t\\tset S &= \\sup \\{y : (x,y) \\in S \\}. \n\\end{align*}\nThe letters $\\mathfrak{l} $, $ \\mathfrak{r} $, $\\mathfrak{b}$, and $ \\mathfrak{t} $ stand for \\emph{left}, \\emph{right}, \\emph{bottom}, and \\emph{top} respectively. If $ S $ is a compact set in $ \\mathbb{R}^2 $, then all the values above are finite and also, we can replace $\\inf$ and $\\sup$ by $ \\min $ and $\\max$ respectively; in other words, for each value, there exists a point in $ S $ that obtains the value. In this case, we also define $ \\wset S = \\rset S - \\lset S$ and $ \\hset{S} = \\tset{S} - \\bset{S} $. The letters $ \\mathfrak{w} $ and $\\mathfrak{h} $ stand for \\emph{width} and \\emph{height} respectively. Notice that if $ S' \\subseteq S $, we have $ \\lset{S'} \\geq \\lset{S} $, $ \\rset{S'} \\leq \\rset{S} $, $ \\bset{S'}\\geq \\bset{S} $, and $ \\tset{S'} \\leq \\tset{S}$. \n\n\n\n\nTo be clear, we recall the definitions of path and arc here. A \\emph{path} in a topological space $ X $ is a continuous function $ \\gamma: I \\rightarrow X $ where $ I $ is a closed interval in $\\mathbb{R}$. An \\emph{arc} in $ X $ is a homeomorphism $ \\delta : I \\rightarrow X $ where $ I $ is a closed interval in $ \\mathbb{R}$.\n\n\nA topological space $ X $ is \\emph{path-connected} (resp.\\ \\emph{arc-connected}) if for every $ x, y \\in X $, there exist a path (resp.\\ arc) $ \\gamma : [0,1] \\rightarrow X $ such that $ \\gamma(0) = x $ and $ \\gamma(1) = y $. By definition, an arc-connected space is also path-connected. The inverse is not true in general. However, as we will see, for the sets that we work with in this article the two notions are equivalent (see Theorem~\\ref{thm:bourbaki} and Lemma~\\ref{lem:path-connected-implies-arc-connected}).\n\n\n\nLet $ R $ be an axis-aligned rectangle. \nLet $ A \\subseteq \\mathbb{R}^2 $. We say that $ A $ \\emph{crosses $ R $} vertically (resp.\\ horizontally) if there exists a \n$ \\gamma: [0,1] \\rightarrow A \\cap R $ such that $ \\gamma(0) $ and $\\gamma(1)$ are respectively on the bottom-side and on the top-side (resp.\\ on the left-side and on the right-side) of $ R$. \n\n\n\n\\begin{lemma} \\label{lem:crossing-between-two-lines}\n\tLet $ y_0, y_1 \\in \\mathbb{R}$ such that $ y_0 \\leq y_1 $. For $ i \\in \\{0,1\\}$, let $ L_i $ denote the line $ y = y_i $ in $\\mathbb{R}^2$. Let $ \\gamma:[0,1] \\rightarrow \\mathbb{R}^2 $ be a continuous function such that for $ i \\in \\{0,1\\} $, we have $ \\gamma(i) \\in L_i$. Then, there exist $x_0, x_1 \\in \\mathbb{R} $ such that $ x_0 \\leq x_1 $ and the path $ \\gamma' = \\gamma|_{[x_0, x_1]} $ is \n\talways between or on the lines $ L_0 $ and $L_1$, i.e.\\ $\\im{\\gamma'} \\subseteq \\{ (x,y) : y_0 \\leq y \\leq y_1 \\} $. \n\\end{lemma}\n\\begin{proof}\t\n\tLet $ X_0 = \\gamma^{-1}(L_0) = \\{ x\\in [0,1] : \\gamma(x) \\in L_0 \\} $. 
Notice that $ X_0 $ is closed since it is the pre-image of a closed set under a continuous function, and is bounded. So, $ X_0 $ is compact. Moreover, $ 0 \in X_0 $, so $ X_0 \neq \varnothing $. Thus, we can set $ x_0 = \max X_0$. \n\t\n\tSet $ \gamma'' = \gamma|_{[x_0,1]} $, and let $ X_1 = \gamma''^{-1}(L_1) = \{x \in [x_0, 1]: \gamma''(x) \in L_1 \}$. Again, $ X_1 $ is compact, and it is non-empty since $ 1 \in X_1 $. So, we can set $ x_1 = \min X_1 $. \n\t\n\tSet $ \gamma' = \gamma''|_{[x_0, x_1]}$. We prove that $ \im{\gamma'} \subseteq \{ (x,y) : y_0 \leq y \leq y_1 \} $.\n\t\n\tAssume, for the sake of contradiction, that there exists a point $ t \in (x_0, x_1) $ such that $(\pi_2 \circ \gamma'') (t) \leq y_0 $ or $(\pi_2 \circ \gamma'') (t) \geq y_1 $. In the former case, by the intermediate value theorem, there exists $ t' \geq t > x_0 $ such that $ (\pi_2 \circ \gamma'') (t') = y_0 $. Thus $ t' \in X_0$, contradicting the choice of $x_0 $. In the latter case, there exists $ t' \leq t < x_1 $ such that $ (\pi_2 \circ \gamma'') (t') = y_1 $. Thus $ t' \in X_1$, contradicting the choice of $ x_1$. \n\end{proof}\n\n\begin{lemma} \label{prop:crossing-a-subrectangle}\n\tLet $ R $ and $ R' $ be two axis-aligned rectangles such that: \n\t\begin{itemize}\n\t\t\item $ \lset{R'} \leq \lset{R} \leq \rset{R} \leq \rset{R'}$, \n\t\t\item $\bset{R} \leq \bset{R'} \leq \tset{R'} \leq \tset{R} $.\n\t\end{itemize}\n\tIf a set $ A $ crosses $ R $ vertically, then it crosses $ R'$ vertically as well. \n\end{lemma}\n\begin{proof}\n\tLet $ \gamma: [0,1] \rightarrow R \cap A $ be the crossing path. By applying the intermediate value theorem twice to the function $\pi_2 \circ \gamma $, we conclude that there exist $ x_0 $ and $ x_1 $ with $ x_0 \leq x_1 $ such that $ \gamma(x_0) $ and $ \gamma(x_1) $ are respectively on the bottom-side and the top-side of $ R' $. Applying Lemma~\ref{lem:crossing-between-two-lines} to the path $ \gamma|_{[x_0, x_1]} $ completes the proof of the lemma. \n\end{proof}\n\n\nThe following theorem can be found in several classical topology text-books, in particular, in~\cite{bourbaki16} (Chapter 3, Section 2, Proposition 18).\n\n\begin{theorem} \label{thm:bourbaki}\n\tLet $ X $ be a Hausdorff topological space. If $ a $ and $ b $ are two points in the same path-connected component of $ X $, then there exists an injective path $\delta : [0,1] \rightarrow X$ such that $ \delta(0) = a $ and $\delta(1) = b $.\n\end{theorem} \n\nAs a result, we have the following lemma.\n\begin{lemma} \label{lem:path-connected-implies-arc-connected}\n\tIf $ \gamma: [0,1] \rightarrow \mathbb{R}^2 $ is a path in $ \mathbb{R}^2 $, then there exists an arc $ \delta: [0,1] \rightarrow \mathbb{R}^2 $ such that $ \delta(0) = \gamma(0) $, $\delta(1) = \gamma(1)$, and $\im{\delta} \subseteq \im{\gamma}$. \n\end{lemma}\n\begin{proof}\n\tSet $ X = \im{\gamma} $. With the induced topology, $ X $ is a Hausdorff space. Applying Theorem~\ref{thm:bourbaki} with $ a $ and $ b $ being $ \gamma(0)$ and $\gamma(1)$ implies that there exists an injective path $ \delta: [0,1] \rightarrow X $ from $ a $ to $ b $. It remains to show that $ \delta $ is indeed a homeomorphism onto its image. 
Since $[0,1]$ is compact, $\delta $ is a closed bijection, and hence a homeomorphism.\n\end{proof}\n\n\nIn the proof of the following lemma, we use the fact that $ K_5 $, the complete graph on~5 vertices, is not planar, i.e.\ it has no planar embedding. Recall that in a planar embedding, the edges are represented by curves. \n\nWe believe that the proof that we present here is folklore, but for the sake of clarity we include it. However, the lemma can also be deduced easily from Lemma 2 of~\cite{maehara84}.\n\n\begin{lemma}\label{prop:horizontal-and-vertical-crossing-intersect}\n\tLet $R$ be a rectangle in $\mathbb{R}^2 $. Let $ A$ and $ B $ be two path-connected sets crossing $R$ vertically and horizontally respectively. Then, $ A \cap B \neq \varnothing $. \n\end{lemma}\n\begin{proof}\n\tLet $ \alpha: [0,1] \rightarrow A \cap R $ and $ \beta: [0,1]\rightarrow B \cap R $ be the two crossing paths in the statement of the lemma. Assume, for the sake of contradiction, that $ \im{\alpha} \cap \im{\beta} = \varnothing $.\n\t\n\tIn this proof, we say that two paths (or arcs) $\gamma$ and $\delta $ are \emph{internally disjoint} if $ \im{\gamma} \cap \im{\delta} $ contains no point other than possibly their endpoints.\n\tSet $ a_0 = \alpha(0) $, $ a_1 = \alpha(1) $, $b_0 = \beta(0)$, and $b_1 = \beta(1)$. \n\tBy Lemma~\ref{lem:path-connected-implies-arc-connected}, there exist arcs \n\t$$ \hat \alpha: [0,1] \rightarrow \im{\alpha} \subseteq A \cap R \text{ and } \hat \beta : [0,1] \rightarrow \im{\beta} \subseteq B \cap R $$\nsuch that $ \hat{\alpha} (0) = a_0$, $ \hat{\alpha} (1) = a_1$, $\hat{\beta} (0) = b_0$, and $ \hat{\beta} (1) = b_1$.\n\t\n\tFix a real number $ \epsilon > 0 $. Let $\gamma_1 $, $ \gamma_2 $, $\gamma_3$, and $\gamma_4$ be paths that respectively join $ b_1 $ to $ a_1 $, $ a_1 $ to $b_0$, $ b_0 $ to $ a_0 $, and $ a_0 $ to $b_1 $ such that their images are disjoint except for their beginnings and ends, and that for each $ i \in \{1,2,3,4\}$, $\im{\gamma_i} $ is entirely outside $ R $ except for its beginning and end, and is entirely inside the rectangle\n\t$$ R' = [\lset{R} - \epsilon, \rset{R}+\epsilon] \times [\bset{R}-\epsilon, \tset{R} + \epsilon]. $$ \n\tSee Figure~\ref{fig:K5-planar}.\n\t\n\t\begin{figure}\n\t\t\centering\n\t\t\vspace*{-1cm}\n\t\t\includegraphics[width=9.5cm]{fig\/crossing_proof.pdf}\n\t\t\vspace{-1.2cm}\n\t\t\caption{Proof of Lemma~\ref{prop:horizontal-and-vertical-crossing-intersect}: a planar embedding of $K_5 $.} \label{fig:K5-planar}\n\t\end{figure}\n\t\n\tFinally, choose a point $ c $ outside $ R' $, and let $ \delta_1 $, $ \delta_2$, $ \delta_3$, and $ \delta_4 $ be four paths from $ c $ to $ b_1 $, $ a_1 $, $b_0 $, and $ a_0 $ respectively. Choose $\delta_i$'s so that their images do not intersect except at $ c $, and such that for each $ i, j \in \{1,2,3,4\} $ the sets $ \im{\gamma_i} $ and $ \im{\delta_j} $ do not intersect except possibly at the end-point of $\delta_j$. 
\n\t\n\tNow, set $ V = \{a_0,a_1, b_0, b_1 , c\}$ and $ E = \{ \hat{\alpha}, \hat{\beta}, \gamma_i, \delta_i : i \in \{1,2,3,4\} \} $, and notice that $ (V, E) $ forms a planar embedding of $ K_5 $, a contradiction.\n\end{proof}\n\n\n\section{Pouna sets and their territories} \label{sec:lyon-sets}\n\nLet $ S $ be a subset of $ \mathbb{R}^2 $.\nThe \emph{bounding box} of $ S $, denoted by $ \boxset S $, is defined as follows: \n$$\n\boxset S = [\lset S, \rset S] \times [\bset S, \tset S].\n$$\nSo, $ \lset{\boxset{S}} = \lset{S} $, $ \rset{\boxset{S}} = \rset{S} $, etc. \nFor a collection $ \mathcal F $ of subsets of $\mathbb{R}^2 $, by abuse of notation, we write $ \boxset{\mathcal F} $ for $ \boxset{\cup_{S \in \mathcal F} S}$.\n\nWe use the following property in some lemmas.\n\begin{property} \label{prop:int-box-min-S-non-empty}\n\tIf $ S $ is a compact and path-connected set which is not an axis-aligned rectangle, then $ \boxset{S}^\circ \setminus S \neq \varnothing$.\n\end{property}\n\begin{proof}\n\tFirst of all, $ S $ is not a subset of an axis-aligned line-segment (otherwise, being compact and path-connected, it would be a point or a line segment, hence an axis-aligned rectangle). So, the closure of $\boxset{S}^\circ $ is equal to $\boxset{S}$.\n\tNow, if $\boxset{S}^\circ \setminus S = \varnothing $, then $ \boxset{S}^\circ \subseteq S \subseteq \boxset{S} $, and since $ S $ is closed, we have $ S = \boxset{S} $, and $ S $ is an axis-aligned rectangle. \n\end{proof}\n\n\nIn this article, we only consider transformations $ T: \mathbb{R}^2 \rightarrow \mathbb{R}^2 $ which are of the following form: \n$$\nT(x,y) = (ax+c, by+d), \n$$ \nfor some $ a, b \in \mathbb{R}^* = \mathbb{R} \setminus \{0\} $ and $ c,d \in \mathbb{R}$. So, whenever we use the word \emph{transformation}, we are referring to just such functions. Notice that the composition of two transformations of the form above is of the same form. \n\nWe say that $ T $ is a \emph{positive} transformation if $ a > 0 $ and $ b >0 $. It is easy to see that positive transformations with composition form a group. In particular:\n\begin{itemize}\n\t\item the composition of two positive transformations is a positive transformation, \n\t\item every positive transformation has an inverse.\n\end{itemize}\n\nSeveral times, we will use the fact that if $ T:(x,y) \mapsto (ax+c, by+d) $ is a positive transformation and $ A \subseteq \mathbb{R}^2$, then setting $ A' = T(A) $, we have:\n\begin{multline*}\n\t\lset{A'} = a. \lset{A} + c, \ \rset{A'} = a. \rset{A} +c, \ \bset{A'} = b. \bset{A}+d, \ \text{and } \tset{A'} = b. \tset{A}+d. \n\end{multline*}\nIn particular, $ \boxset{T(A)} = T(\boxset{A})$. \n\n\nLet $ S \subseteq \mathbb{R}^2 $. In this paper, we call a \emph{transformed copy} of $ S $ any set $ S' $ of the form:\n\begin{align*}\n\tS' = T(S) = \{ T(x,y) : (x, y) \in S \},\n\end{align*} \nfor some transformation $ T : \mathbb{R}^2 \rightarrow \mathbb{R}^2$. \nWe say that $ S' $ is a \emph{positive transformed copy} of $ S $ if $ T $ is a positive transformation. The \emph{horizontal reflection} of $ S $ is $T(S) $ where~$ T $ is the transformation that maps~$(x,y)$ to~$(-x, y)$.\n\nIf $ \mathcal F $ is a collection of subsets of $ \mathbb{R}^2 $, we use the unconventional notation $ T(\mathcal F) $ for the collection $ \{T(S) : S \in \mathcal F \} $. It is easy to see that $ \boxset{T(\mathcal F)} = T(\boxset{\mathcal F})$. 
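\n\nFor instance, for the positive transformation $ T(x,y) = (2x+1, 3y) $ and $ A = [0,1] \times [0,1] $, the formulas above give $ \lset{T(A)} = 1 $, $ \rset{T(A)} = 3 $, $ \bset{T(A)} = 0 $, and $ \tset{T(A)} = 3 $, so $ \boxset{T(A)} = [1,3] \times [0,3] = T(\boxset{A}) $.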
\n\n\n\subsection{Pouna sets and their territories}\n\nA \emph{Pouna set} is a non-empty, compact, and path-connected subset of $ \mathbb{R}^2 $ which is not an axis-aligned rectangle. \nThe \emph{territory} of a Pouna set $ S $, denoted by $\Ter S $, is defined as follows:\n\begin{equation*}\n\t\Ter S = \{(x,y) \in \boxset S \setminus S : \exists x'\in \mathbb{R} \text{ s.t. } x' > x \text{ and } (x',y) \in S \}.\n\end{equation*}\n\nWe say that a Pouna set $ S $ is \emph{strong} if it has a non-empty territory.\nIn Figure~\ref{fig:territory_examples}, some examples of strong Pouna sets and their territories are represented. \n\n\begin{figure}\n\t\centering\n\t\vspace*{-1cm}\n\t\includegraphics[width=9.5cm]{fig\/TerritoryExamples.pdf}\n\t\vspace*{-1cm}\n\t\caption{Examples of strong Pouna sets and their territories. The Pouna sets are shown in black and their territories in gray.} \label{fig:territory_examples}\n\end{figure}\n\n\n\begin{lemma} \label{prop:strong-perkins-S-or-horizontal-reflection}\n\tFor every Pouna set $ S $, either $ S $ or its horizontal reflection is strong.\n\end{lemma}\n\begin{proof}\n\tLet $ S' = T(S) $ be the horizontal reflection of $ S $. Thus, $ T:(x,y) \mapsto (-x,y)$. \n\t\n\tBy Property~\ref{prop:int-box-min-S-non-empty}, we can choose a point $ p = (x,y) \in \boxset{S}^\circ \setminus S$. Let $ L $ be the horizontal line passing through $ p $, and set $ A $ to be the closed half-plane consisting of the points on $ L $ and under $L$. Notice that $ \bset{S} < y < \tset{S} $, so $ S $ has a point on the top-side of $ \boxset{S}$, thus outside $A = \bar{A} $ and a point on the bottom-side of $\boxset{S}$, thus inside $ A^\circ $. Setting $ B = S $ in the statement of Lemma~\ref{lem:real-border-lemma}, we conclude that $ S \cap L \neq \varnothing $. In other words, there is a point $ q = (x',y) \in S $. Since $ p \notin S $, we have $ x' \neq x $. \n\tIf $ x'> x $, then $ p \in \Ter{S} $, and $ S $ is strong. If $ x'< x $, then $ -x' > -x $. Notice that $ (-x', y) \in S' $ and $ (-x, y) \in \boxset{S'}\setminus S' $. So, $(-x, y) \in \Ter{S'} $, and $ S' $ is strong.\n\end{proof}\n\nIn the next section, we will define the class of constrained $S$-graphs for strong Pouna sets. Lemma~\ref{prop:strong-perkins-S-or-horizontal-reflection} assures that focusing on strong Pouna sets instead of Pouna sets does not reduce the generality of the definition.\n\n\nAs shown in the next Property, territories behave well under positive transformations. \n\n\begin{property} \label{prop:ter(T)=T(ter)}\n\tLet $ S $ be a strong Pouna set and $ T $ be a positive transformation. Then, $ \Ter{T(S)} = T(\Ter{S})$. In particular, $ T(S) $ is strong. \n\end{property}\n\begin{proof}\t\n\tLet $ T: (x,y) \mapsto (ax+c, by+d)$. Denote the inverse of $ T$ by $ T^{-1}$. \n\t\n\tIf $ (x,y) \in \boxset{S}\setminus S $, then \n\t$$T(x,y) \in T(\boxset{S}\setminus S) = T(\boxset{S}) \setminus T(S) = \boxset{T(S)} \setminus T(S). $$\n\t\n\tMoreover, $ x'> x $ implies $ ax'+c> ax+c$.\n\tTherefore, $(x,y) \in \Ter{S}$ implies $ T(x,y) \in \Ter{T(S)} $. Hence, $ T(\Ter{S}) \subseteq \Ter{T(S)}$.\n\t\n\tTo finish the proof, notice that $S = T^{-1}(T(S)) $ and $T^{-1} $ is also a positive transformation. Thus, by what precedes applied to $ T(S) $ and $ T^{-1} $, we have $ T^{-1}(\Ter{T(S)}) \subseteq \Ter{S}$, that is, $ \Ter{T(S)} \subseteq T(\Ter{S}) $. \n\end{proof}\n\n\n\subsection{Subterritories}\n\nThe notion of subterritory will be used in Section~\ref{sec:equality}. \n\nLet $ R $ and $ E $ be two rectangles such that $ E \subseteq R $. 
The right-extension of $ E $ in $ R $ is the rectangle $ E_r $ defined as follows:\n$$\nE_r = [\rset{E}, \rset{R}] \times [\bset{E}, \tset{E}].\n$$\nSee Figure~\ref{fig:right-extension}.\n\n\begin{figure}\n\t\centering\n\t\vspace*{-1cm}\n\t\includegraphics[width=6cm]{fig\/right_extension.pdf}\n\t\vspace*{-1cm}\n\t\caption{$E_r $ is the right extension of $ E $.} \label{fig:right-extension}\n\end{figure}\n\n\nA \emph{subterritory} for a strong Pouna set $ S $ is a non-empty closed rectangle $ E $ such that\n\begin{enumerate}\n\t\item $ E \subseteq \Ter{S} $,\n\t\item $ \lset{E} > \lset{S} $, $ \rset{E} < \rset{S}$, $\bset{E} > \bset{S} $, and $ \tset{E} < \tset{S}$,\n\t\item $ S $ crosses the right extension of $ E $. \n\end{enumerate}\n\n\nDespite looking restrictive, subterritories always exist for strong Pouna sets, as we prove in the following lemma. \n\n\n\n\begin{lemma} \label{lem:subterritory-exists}\n\tEvery strong Pouna set has a subterritory.\n\end{lemma}\n\begin{proof}\n\tLet $ S $ be a strong Pouna set and let $ B = \boxset{S} $. By Property~\ref{prop:int-box-min-S-non-empty}, there exists a point $ p = (x_p,y_p) \in B^\circ \setminus S $. So, there is $ \epsilon > 0 $ such that $ D(p,\epsilon) \subseteq B^\circ \setminus S $. By shrinking $ \epsilon $ if necessary, we may moreover assume that $ \bset{S} < y_p - \epsilon $ and $ y_p + \epsilon < \tset{S} $. \n\t\n\tLet $L_P $ be the ray $\{ (x,y) : y = y_p, x \geq x_p \}$. Notice that $ L_P \cap S $ is non-empty and compact. Let $ s = (x_s, y_s)$ be the point in $ L_P \cap S $ which obtains the value $ \lset{L_P \cap S} $. Notice that $ y_s = y_p$. Consider the following rectangle in $ B $:\n\t$$\n\tR = [x_s - \epsilon\/2, \rset{S}] \times [y_s - \epsilon , y_s + \epsilon].\n\t$$ \n\tSee Figure~\ref{fig:ptoof-subter-exists}. \n\t\n\t\begin{figure}\n\t\t\centering\n\t\t\vspace*{-1cm}\n\t\t\includegraphics[width=9cm]{fig\/proof_subterritory.pdf}\n\t\t\vspace*{-2cm}\n\t\t\caption{For proof of Lemma~\ref{lem:subterritory-exists}.} \label{fig:ptoof-subter-exists}\n\t\end{figure}\n\t\n\tIn particular, $ s \in R^\circ $, and $ R = \bar{R} $ intersects the border of $ B $ only on its right-side. On the other hand, there is a point $ s' $ of $ S $ on the top-side of $ B $; since $ \tset{R} = y_s + \epsilon < \tset{B} $, we have $ s' \notin R $. Since $ S $ is a path-connected set, we must have a path $\gamma$ from $ s $ to $ s' $ inside $ S $. By Lemma~\ref{lem:real-border-lemma}, the image of $\gamma$ must intersect $\partial R $. Moreover, $\gamma$ cannot leave $ R $ through its right-side: the points just beyond the right-side of $ R $ are outside $ B $, while $ \im{\gamma} \subseteq S \subseteq B $. So, $ \im{\gamma} $ contains a point of $ \partial R $ on the top-side, the bottom-side, or the left-side of $ R $. Such a point is in $ S $; moreover, since $ D(p,\epsilon) \cap S = \varnothing $, we have $ x_s \geq x_p + \epsilon $, so $ x_p \leq x_s - \epsilon\/2 < x_s $, and by the choice of $ s $, the point $(x_s - \epsilon\/2, y_s)$ is not in $ S $. So, $ \im{\gamma} \cap R $ is not contained in the horizontal line $ y = y_s $. In particular, there are $ y_0, y_1 \in \mathbb{R} $ such that $ y_s - \epsilon \leq y_0 \leq y_s \leq y_1 \leq y_s + \epsilon$, $ y_0 < y_1 $, and such that there is a subpath $\delta$ of $\gamma$ in $ R $ joining a point on the line $ y = y_0 $ to a point on the line $ y = y_1 $.\n\t\n\tSo, by Lemma~\ref{lem:crossing-between-two-lines}, applied to $\delta $, there is a path $\delta': [0,1] \rightarrow \mathbb{R}^2$ such that $ \pi_2(\delta'(0)) = y_0 $, $ \pi_2(\delta'(1)) = y_1 $, and $ \im{\delta'} \subseteq [x_s - \epsilon\/2, \rset{S}] \times [y_0 , y_1]$. Notice that $ \im{\delta'} \subseteq \im{\gamma} \subseteq S $.\n\t\n\tNow, let $ E $ be a rectangle entirely inside $ D(p,\epsilon)$ defined as follows:\n\t$$\n\tE = [x_p - \epsilon\/2, x_p + \epsilon\/2] \times [(y_p+y_0)\/2, (y_p+y_1)\/2].\n\t$$\n\t\n\tNotice that, by Lemma~\ref{prop:crossing-a-subrectangle}, $ \delta' $ crosses the right-extension of $ E $ vertically, and therefore so does $ S $. Clearly, $ E$ satisfies all other properties of subterritory as well. 
So, $ E $ is a subterritory of $ S $.\n\end{proof}\n\n\begin{property} \label{prop:T(E)-subter-of-T(S)}\n\tIf $ E $ is a subterritory of a strong Pouna set $ S $, then for every positive transformation $ T $, we have that $ T(E) $ is a subterritory of $ T(S) $. \n\end{property}\n\begin{proof}\n\tLet $ T: (x,y) \mapsto (ax+c, by+d) $ with $ a, b > 0 $, and set $ S' = T(S)$ and $E' = T(E)$. We prove that the three items of the definition hold, and thus $ E' $ is a subterritory of $ S' $. \n\t\n\t{\noindent \textbf{Claim}. \textit{Item (1) of the definition of subterritory holds.}} \n\t\n\tBy Property~\ref{prop:ter(T)=T(ter)}, we have that $ E'=T(E) \subseteq T(\Ter{S}) = \Ter{S'} $. \n\t\n\t{\noindent \textbf{Claim}. \textit{Item (2) of the definition of subterritory holds.}} \n\tSince $ a$ and $ b $ are positive, for every compact set $ A $ we have \n\t\begin{equation*}\n\t\t\begin{split}\n\t\t\t\lset{T(A)} &= \min\{x: (x,y)\in T(A) \} \\\n\t\t\t&= \min \{x: \big(\frac{x-c}{a}, \frac{y-d}{b}\big) \in A \} \\\n\t\t\t& = \min \{ au+c : (u,v) \in A \} = a . \lset{A} + c. \n\t\t\end{split}\n\t\end{equation*}\n\tIn the equations above we have used the change of variables $ u = \frac{x-c}{a} $ and $ v = \frac{y-d}{b} $. \n\tSo, \n\t$$ \lset{E'} = a. \lset{E} + c > a. \lset{S} + c = \lset{S'}. $$ \n\t\n\tThe proof of the rest of the inequalities is similar. \n\t\n\t{\noindent \textbf{Claim}. \textit{Item (3) of the definition of subterritory holds.}} \n\tLet $ P$ be the right extension of $ E $ in $ \boxset{S} $ and let $ \gamma: [0,1] \rightarrow S \cap P $ be a path connecting the bottom-side of $ P$ to the top-side of $ P$. Denote by $P'$ the right extension of $ E' $ in $ \boxset{S'} $. Notice that $ P'=T(P)$. So, $T(S \cap P)= T(S) \cap T(P) = S' \cap P' $. Thus, the function $ T\circ \gamma: [0,1] \rightarrow S' \cap P' $ is a path entirely inside $ S' \cap P' $. Moreover, since $ T$ sends the bottom-side (resp.\ top-side) of $ P$ to the bottom-side (resp.\ top-side) of $ P' $, we have that $ (T \circ \gamma) (0) $ is on the bottom-side of $ P' $ and $ (T\circ \gamma)(1) $ is on the top-side of $ P'$. So, $ S' $ crosses the right extension of $ E' $, and this finishes the proof.\n\end{proof}\n\n\section{Constrained graphs and constrained $ S $-graphs} \label{sec:constrained-graphs}\n\nLet $ A $ and $ B $ be two strong Pouna sets. We write \emph{$A \prec B$} if $ A \subseteq \Ter{B}$. Also, we write \emph{$ A \curvearrowright B $} if all the following happen: \n\begin{itemize}\n\t\item $\lset B \leq \lset A < \rset B < \rset A $,\n\t\item $\bset B < \bset A < \tset A < \tset B$,\n\t\item $ \{ (x,y) \in A : x = \lset A \} \subseteq \Ter B $. \n\end{itemize} \n\n\n\begin{definition} \label{def:constrained-graphs}\n\tLet $ \mathcal F $ be a non-empty and finite collection of strong Pouna sets satisfying the following constraints:\n\t\begin{enumerate}\t\n\t\t\item[\textbf{\emph{(C1)}}] For every $ A, B \in \mathcal F $, if $ A \neq B $ and $ A \cap B \neq \varnothing $, then \n\t\teither $ A \curvearrowright B $ or $ B \curvearrowright A $. \label{gcond:adjacent}\n\t\t\n\t\t\item[\textbf{\emph{(C2)}}] For every $ A, B \in \mathcal F $, if $ A \cap B = \varnothing $ and \n\t\t$ A \cap \Ter B \neq \varnothing$, \n\t\tthen \n\t\t$ A \prec B$. \label{gcond:prec}\n\t\t\n\t\t\item[\textbf{\emph{(C3)}}] For every $ A, B \in \mathcal F $, if $ A \neq B $ and $ A \cap B \neq \varnothing $, then there exists no $ C \in \mathcal F $ such that $ C \subseteq \Ter A \cap \Ter B $. 
\label{gcond:common-territory}\n\t\t\n\t\t\item[\textbf{\emph{(C4)}}] There exist no $ A, B, C \in \mathcal F $ such that $ A \prec B $, $ A \curvearrowright C $, and $ B \curvearrowright C $. \n\t\t\label{gcond:strict-condition}\n\t\t\n\t\t\item[\textbf{\emph{(C5)}}] The maximum number of pairwise intersecting and distinct elements in $ \mathcal F $ is at most two. \label{gcon:triangle-free}\n\t\end{enumerate}\n\tThe intersection graph of $ \mathcal F $ is a \emph{constrained graph}.\n\end{definition}\n\n\nLet $ S \subseteq \mathbb{R}^2$. We recall from the introduction that a graph is said to be an \emph{$ S $-graph} if it is the intersection graph of $ \{S_1, S_2, \dots, S_n\} $, with $ n \in \mathbb{N} $, where each $ S_i $ is a transformed copy of $ S $. For a Pouna set $ S$, we define a subclass of $ S$-graphs, \emph{the constrained $S $-graphs}, by setting some constraints on how the transformed copies of $ S $ can intersect, as follows. \n\n\begin{definition} \label{def:constrained-S-graphs}\n\tLet $ S $ be a Pouna set, and let $ \mathcal F $ be a non-empty and finite collection of transformed copies of $ S $ such that $ \mathcal F $ satisfies all 5 constraints {\emph{(C1)}}-{\emph{(C5)}} as well as the following constraint:\n\t\begin{enumerate} \n\t\t\item[\textbf{\emph{(C6)}}] if $ S $ is strong, then all elements of $ \mathcal F $ are positive transformed copies of $ S $, and otherwise, they are all positive transformed copies of the horizontal reflection of~$ S $.\n\t\end{enumerate}\n\tThe intersection graph of $ \mathcal F $ is a \emph{constrained $ S$-graph}.\n\end{definition} \n\nNotice that the set of all constrained graphs (resp.\ constrained $ S $-graphs), i.e.\ the set of all graphs which are isomorphic to the intersection graph of a collection $ \mathcal F $ as in Definition~\ref{def:constrained-graphs} (resp.\ Definition~\ref{def:constrained-S-graphs}), is a well-defined hereditary class of graphs, since, if $ \mathcal F $ satisfies Constraints (C1)-(C5) (resp.\ (C1)-(C6)), then so does every non-empty subset of it.\n\nBy definition, every constrained $ S $-graph is a constrained graph. We will see, however, that the two classes are indeed equal. See Corollary~\ref{cor:all-equal}.\n\n\begin{property} \label{prop:T(F)-satisfies-C1-C5}\n\tLet $ S $ be a strong Pouna set, and let $ \mathcal F $ be a finite collection of transformed copies of $ S $ satisfying Constraints \emph{(C1)-(C6)}. Then, for every positive transformation $ T $, the collection $ \{T(A): A \in \mathcal F\} $ also satisfies \emph{(C1)-(C6)}.\n\end{property}\n\n\begin{proof}\n\tSet $ \mathcal F' = \{T(A): A \in \mathcal F\} $. Suppose that $T: (x,y) \mapsto (ax+c, by+d) $ where $ a > 0 $ and $ b> 0$. \n\t\n\tFirst of all, notice that $ A \cap B \neq \varnothing $ if and only if $ T(A) \cap T(B) \neq \varnothing $. So, two sets $T(A)$ and $T(B)$ in $ \mathcal F' $ intersect if and only if $ A $ and $ B $ intersect in $ \mathcal F $. \n\t\n\tSecond, notice that for every set $ A $, $ \lset{T(A)} = a .\lset{A} +c $. So, since $ a> 0$, if $ \lset{A} \leq \lset B$, then $\lset{T(A)} \leq \lset{T(B)}$. \n\t\n\tThird, if $ A \subseteq B $, then $ T(A) \subseteq T(B) $, because if $ p \in T(A) $, then $ p = (ax+c,by+d) $ for some $ (x,y) \in A $. Now, since $ (x,y) \in B $, we have $ p \in T(B) $. \n\t\n\tFourth, notice that, by Property~\ref{prop:ter(T)=T(ter)}, $ \Ter{T(A)} = T(\Ter{A}) $. 
This, along with the third fact, implies that if $ A \subseteq \Ter{B} $, then $T(A) \subseteq \Ter{T(B)}$. \n\t\n\tWith the four facts above, it is easy to check that $ \mathcal F' $ satisfies Constraints (C1)-(C6).\n\end{proof}\n\n\nApplied to a specific set $ S$, the definition of constrained $ S $-graphs becomes rather intuitive. For example, when $ S $ is the boundary of a rectangle in $ \mathbb{R}^2 $, the class of constrained $ S $-graphs is the class of \emph{strict frame graphs}. Also, when $ S $ is a non-vertical and non-horizontal line segment, the class of constrained $ S $-graphs is the class of \emph{strict line-segment graphs}. The definitions of both classes are given in~\cite{BG1}, Section 6. \n\nSee Figure~\ref{fig:examples-constrained-S-graphs} for two more examples of constrained $ S $-graphs, where $ S $ is a circle and where $ S $ is a square that is not axis-aligned. In each row of the figure, from left to right, the pictures represent the following:\n\begin{itemize}\n\t\item The first picture shows the set $ S $ (in black) and its territory (in gray). \n\t\item The second picture shows the way that two sets can intersect, i.e. what is described by Constraint (C1).\n\t\item The third picture represents Constraint (C2). In other words, it shows how two sets must be placed if they do not intersect but one intersects the territory of the other. Notice that in the first row, there are two possibilities to place a circle in the territory of another circle with no intersection. \n\t\item The fourth picture shows the forbidden construction in Constraint (C3). \n\t\item The fifth picture shows the forbidden construction in Constraint (C4). \n\t\item Finally, we must keep in mind that there must not be three distinct sets that mutually intersect. \n\end{itemize}\n\n\begin{figure}\n\t\centering\n\t\vspace*{-1cm}\n\t\includegraphics[width=12cm]{fig\/constraint-S-example.pdf}\n\t\vspace*{-2.8cm}\n\t\caption{Examples of constrained $ S$-graphs} \label{fig:examples-constrained-S-graphs}\n\end{figure}\n\n\n\section{Burling graphs} \label{sec:Burling-Graphs}\n\n\n\subsection{Abstract Burling graphs}\n\nAs mentioned in the introduction, Burling~\cite{Burling65} defined Burling graphs in 1965. Here, we do not present the definition by Burling. Instead, we recall an equivalent definition of Burling graphs from~\cite{BG1}: \emph{abstract Burling graphs}.\n\nLet $R$ be a binary relation defined on a set $ S $. We say that $ R $ has a \emph{directed cycle} if there exist a positive integer $ k $ and elements $ x_1, \dots, x_k \in S $ such that $(x_1, x_2), (x_2, x_3), \dots, (x_k, x_1) \in R $. 
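\n\nFor instance, on $ S = \{x, y, z\} $, the relation $ \{(x,y), (y,z), (z,x)\} $ has a directed cycle, while the relation $ \{(x,y), (x,z)\} $ has no directed cycle.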
\n\n\\begin{definition}[\\cite{BG1}, Definition 5.1]\n\tA \\emph{Burling set}\n\tis a triple $ (\\mathcal F, \\prec, \\curvearrowright) $ where $ \\mathcal F $ is a non-empty\n\tfinite set, $\\prec$ is a strict partial order on $\\mathcal F$,\n\tand $ \\curvearrowright $ is a binary relation on $\\mathcal F$ with no\n\tdirected cycles such that the following axioms hold:\n\t\\begin{enumerate}\n\t\t\\item[\\textbf{\\emph{(A1)}}] \n\t\tif $ x \\prec y $ and $ x \\prec z $, then either\n\t\t$ y \\prec z $ or $ z \\prec y $, \\label{item:descdesc}\n\t\t\\item[\\textbf{\\emph{(A2)}}] \n\t\tif $ x \\curvearrowright y $ and $ x \\curvearrowright z $, then either\n\t\t$ y \\prec z $ or $ z \\prec y $, \\label{item:adjadj}\n\t\t\\item[\\textbf{\\emph{(A3)}}] \n\t\tif $ x \\curvearrowright y $ and $ x \\prec z $, then $ y \\prec z\n\t\t$, \\label{item:adjdesc}\n\t\t\\item[\\textbf{\\emph{(A4)}}] \n\t\tif $ x \\curvearrowright y $ and $ y \\prec z $, then either\n\t\t$ x \\curvearrowright z $ or $ x \\prec z $. \\label{item:transitiveboth}\n\t\\end{enumerate}\n\tA graph $ G $ is a \\emph{(non-oriented) abstract Burling graph} if it is obtained from a Burling set $ (\\mathcal F, \\prec, \\curvearrowright) $ by setting $V(G) = \\mathcal F $ and $ E(G) = \\{ \\{x,y\\} : x \\curvearrowright y \\} $. \n\\end{definition}\n\nEquivalently, we can say that a graph is an abstract Burling graph if it is the underlying graph of the oriented graph $ \\hat G $ obtained from a Burling set $ (\\mathcal F, \\prec, \\curvearrowright) $ by setting $V(\\hat G) = \\mathcal F $ and $ E(\\hat G) = \\{ xy : x \\curvearrowright y \\} $.\n\nFor the proof of equivalence of the two classes of abstract Burling graphs and Burling graphs as defined classically in the literature, see Theorem 5.7 of~\\cite{BG1}. \n\nThe axiomatic definition of abstract Burling graphs is useful in the proofs of the next section because, for proving that a graph is a Burling graph, we just need to define two appropriate relations $\\prec $ and $ \\curvearrowright $ on the vertex-set of the graph and prove that Axioms (A1)-(A4) hold. \n\n\n\\section{Equality of the three classes} \\label{sec:equality}\n\nIn this section, we prove that the class of Burling graphs is equal to the class of constrained graphs and to the class of constrained $ S $-graphs for every Pouna set $ S$. \n\n\n\\subsection{Constrained graphs are Burling graphs}\n\n\nWe first need a few lemmas.\n\n\\begin{lemma}\\label{lem:prec-properties}\n\tLet $ A $, $ B $ be two strong Pouna sets. If $ A \\prec B $, then \n\t\\begin{enumerate}\n\t\t\\item $\\rset A < \\rset B $,\n\t\t\\item $\\hset{A} \\leq \\hset{B} $,\n\t\t\\item $ \\Ter A \\subseteq \\Ter B $. \n\t\\end{enumerate} \n\\end{lemma}\n\\begin{proof}\n\tBy definition of $\\prec$, we have $ A \\subseteq \\Ter B $. \n\t\n\tTo prove (1), let $ r = \\rset A $. Because $ A$ is compact, there exists a point $ (r, y) $ in $ A $. Consequently, $(r,y) \\in \\Ter B $, so there exists $ r' $ such that $ r'> r $ and $ (r', y) \\in B $. Notice that, $ r' \\leq \\rset B$. Hence, $ \\rset A < \\rset B $.\n\t\n\t\n\tTo prove (2), notice that $ A \\subseteq \\Ter{B} \\subseteq \\boxset{B}$. So, $ \\bset{A} \\geq \\bset{\\boxset{B}} = \\bset{B}$ and $ \\tset{A} \\leq \\tset{\\boxset{B}} = \\tset{B} $. Therefore, $ \\hset{A} = \\tset{A} - \\bset{A} \\leq \\tset{B} - \\bset{B} = \\hset{B}$. \n\t\n\tTo prove (3), let $ p = (x,y) $ be a point in $ \\Ter A $. 
Notice that \n\t$$ \n\tx \geq \lset{\Ter{A}}\geq \lset{\boxset{A}} = \lset A.\n\t$$\n\tAlso,\n\t$$\n\tx \leq \rset{\Ter{A}} \leq \rset{\boxset{A}} = \rset{A}.\n\t$$\n\tSo, $ \lset{A} \leq x \leq \rset{A}$.\n\tThus,\n\t\begin{equation*}\n\t\t\begin{split}\n\t\t\t\lset{\boxset B} & \leq \lset{\Ter B} \leq \lset A \leq x \leq \rset A \leq \rset{\Ter B} \leq \rset{\boxset B}.\t\t\t\n\t\t\end{split}\n\t\end{equation*}\n\tSimilarly, $ \bset{\boxset B} \leq y \leq \tset{\boxset B} $.\n\tTherefore, $ p \in \boxset B $. \n\t\n\tNow, by definition of territory, there exists a point $a=(x', y) \in A $ with $ x' > x $. So, since $ A \subseteq \Ter B $, we have $ a \in \Ter B $. By definition of territory, there exists a point $ b=(x'', y) \in B $ with $ x''> x'$, and thus $ x''> x $. This, along with the fact that $ p \in \boxset{B} $ implies that $ p \in \Ter B $. Consequently $ \Ter A \subseteq \Ter B $. \n\end{proof}\n\n\n\nWe say that two strong Pouna sets $A$ and $ B $ are \emph{comparable} if one of the following happens: $ A \curvearrowright B $, $ B \curvearrowright A $, $ A \prec B $, or $ B \prec A $. \n\n\begin{lemma} \label{lem:comparable-sets}\n\tLet $ A $ and $ B $ be two strong Pouna sets in a collection $\mathcal F $ which satisfies Constraints \emph{(C1)} and \emph{(C2)}. If $ \Ter A \cap \Ter B \neq \varnothing $, then $ A$ and $ B $ are comparable. \n\end{lemma}\n\begin{proof}\n\tIf $ A \cap B \neq \varnothing $, then by Constraint (C1), either $ A \curvearrowright B $ or $ B \curvearrowright A $. So, we may assume $ A \cap B = \varnothing $. Choose a point $ p = (x,y) \in \Ter A \cap \Ter B $. There exist $ x', x'' \in \mathbb{R} $, both bigger than $ x $, such that $ p'=(x', y) \in A $ and $ p''=(x'', y) \in B $. Since $ A $ and $ B $ are disjoint, $ x' \neq x''$. First, assume that $ x''> x' $. Notice that $ p' \notin B $ and that $ p'$ is on the straight line joining $ p $ and $ p'' $, which are both points in $ \boxset B $. Therefore, $ p' \in \boxset B $. Consequently, $ p' \in \Ter B $. Therefore $ A \cap \Ter B \neq \varnothing $, and by Constraint (C2), we have $ A \prec B $. Second, assume that $ x'' < x' $. Then, a symmetric argument shows that $ B \prec A $. In both cases, $ A $ and $ B $ are comparable.\n\end{proof}\n\n\begin{lemma} \label{lem:same-line-in-territory}\n\tLet $ A $ be a strong Pouna set and let $ R $ be a rectangle such that $ A \cap R = \varnothing $. If a point $ p_0 = (x_0, y) $ of $ R $ is in $ \Ter{A} $, then every point $ p = (x, y) $ of $ R $ with $ x \geq x_0 $ is in $ \Ter{A} $.\n\end{lemma}\n\begin{proof}\n\tSince $ p_0 \in \Ter{A} $, there exists $ x' > x_0 $ such that $ p' = (x', y) \in A $. Since $ p' \in A $, we have $ p' \notin R $. So, in particular, $ x' \neq x $. \n\tIf $ x' < x $, then $ p' $ is on the straight line joining $ p_0 $ and $ p $. But $ p_0, p \in R $ and $ R $ is convex, so $ (x',y) \in R $, a contradiction. Hence $ x'> x $. Now, to show that $ p\in \Ter{A}$, it is enough to show that $ p \in \boxset{A} \setminus A $. But $ p $, being in $R $, is not in $ A $. On the other hand, $ p $ is on the straight line between $ p_0 $ and $ p' $. Now because $ p_0 \in \Ter{A} \subseteq \boxset{A} $ and $ p' \in A \subseteq \boxset{A} $, we have $p \in \boxset{A}$. This completes the proof. \n\end{proof}\n\n\nLet $ \mathcal F $ be a collection of subsets of $ \mathbb{R}^2 $, set $ B = \boxset{\mathcal F} $, and let $ E $ be a rectangle in $ B $. The prob \emph{defined by $ E $} in $ B $ is the prob $ P$ which is obtained by extending the right side of $ E $ to reach the border of $B $, i.e.\n$\nP = \{(x,y) \in B: \lset E \leq x \leq \rset B, \bset E \leq y \leq \tset E \}.\n$\nFor a prob $ P $, we denote by $ N_{\mathcal F}(P) $ the set of elements of $ \mathcal F $ that intersect $ P $. Notice that if $ E $ does not intersect any member of $ \mathcal F $, then it is a root for $ P $.
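\n\nFor instance, if $ B = [0,10] \times [0,10] $ and $ E = [2,3] \times [4,5] $, then the prob defined by $ E $ in $ B $ is the rectangle $ [2,10] \times [4,5] $.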
\n\n\n\\subsubsection*{The construction of Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak}\n\nNow, we explain the construction of Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak in~\\cite{Pawlik2013}. Our terminology is slightly different from the one of~\\cite{Pawlik2013}, but we have tried to keep the terminology as close as possible so the reader can refer to~\\cite{Pawlik2013} whenever needed. Fix a strong Pouna set $ S $ and a subterritory $ E $ of $ S $. From now on, for the transformed copy $ S'=T(S) $, we consider the subterritory $ T(E) $. \n\nLet $(\\mathcal F, \\mathcal P) $ be a tuple where $ \\mathcal F$ is a collection of transformed copies of $ S $ and $ \\mathcal P $ is a set of probs of $\\mathcal F$. We define an operation $\\Gamma$ where $ (\\mathcal F', \\mathcal P') = \\Gamma(\\mathcal F, \\mathcal P) $ is obtained as follows:\n\\begin{enumerate}\n\t\\item[\\textbf{(S$'$1)}] For every $ P \\in \\mathcal P $, let $P^\\uparrow $ and $ P^{\\downarrow} $ be respectively the top one-third and the bottom one-third of $ P $, i.e.\n\t$$\n\tP^\\uparrow = [\\lset P, \\rset P] \\times [\\frac{\\bset P + 2\\tset P}{3}, \\tset P]\n\t$$\n\tand\n\t$$\n\tP^\\downarrow = [\\lset P, \\rset P] \\times [\\bset P, \\frac{2\\bset P+ \\tset P}{3}].\n\t$$\n\t\\item[\\textbf{(S$'$2)}] Set $ S_P $ to be a transformed copy of $ S $ where we first match the boundary of $ \\boxset S $ on the boundary of $ P^\\uparrow $, and then we scale it horizontally by $ \\frac{2 \\wset{S}}{\\lset E - \\lset S} $ keeping the left-side of $\\boxset S $ fixed.\n\tFormally, the transformation described above is $ T_P = T_2 \\circ T_1 : \\mathbb{R}^2 \\rightarrow \\mathbb{R}^2 $, where\n\t$$\n\tT_1(x,y) = \\Big( \\frac{\\wset{P^\\uparrow}}{\\wset{S}}x + \\lset{P^\\uparrow} - \\frac{\\lset{S}\\wset{P^\\uparrow}}{\\wset{S}} , \\frac{\\hset{P^\\uparrow}}{\\hset{S}}y + \\bset{P^\\uparrow} - \\frac{\\bset{S}\\hset{P^\\uparrow}}{\\hset{S}} \\Big)\n\t$$\n\tand \n\t$$\n\tT_2(x,y) = \\Big( \\frac{2 \\wset S}{\\lset E - \\lset S}x + \\lset{P^\\uparrow}(1 - \\frac{2 \\wset{S}}{\\lset E - \\lset S}) , y \\Big).\n\t$$\n\tThis transformation ensures that the subterritory of $ S_P$, i.e.\\ $ T_P(E) $, is outside~$ \\boxset{\\mathcal F} $ (See Property~\\ref{prop:prop-after-def-Gamma}). Denote $ T_P(E) $ by $ E_P$.\n\t\\item[\\textbf{(S$'$3)}] Set $ \\mathcal F' = \\mathcal F \\cup \\big( \\cup_{P \\in \\mathcal P} S_P \\big) $. \n\t\\item[\\textbf{(S$'$4)}] For $ P \\in \\mathcal P $, denote by $ P_1 $ the prob for $\\mathcal{F'}$ defined by $ E_P $, and denote by $P_2 $ the prob for $ \\mathcal{F'} $ defined by $ P^\\downarrow $.\n\t\\item[\\textbf{(S$'$5)}] Set $ \\mathcal P' = \\{ P_1, P_2 : P \\in \\mathcal P \\}$. \n\\end{enumerate} \n\n\n\nNow, inductively, we define a sequence $\\{(\\mathcal F_k, \\mathcal P_k)\\}_{k \\geq 1} $ where $ \\mathcal F_k $ is a collection of positive transformed copies of $ S $, and $ \\mathcal P_k $ is a set of probs for $ \\mathcal F_k$. \n\nFor $ k =1 $, set $ \\mathcal F_1 = \\{S\\} $ and $ \\mathcal P_1 = \\{P\\} $ where $ P$ is the prob defined by $ E $. Now, let $ k \\geq 1 $ and assume that $(\\mathcal F_k, \\mathcal P_{k})$ is defined, we define $(\\mathcal F_{k+1}, \\mathcal P_{k+1})$ as follows:\n\\begin{enumerate}\n\t\\item[\\textbf{(S1)}] Set $ (\\mathcal F, \\mathcal P) = \\Gamma(\\mathcal F_k, \\mathcal P_k)$.\n\t\\item[\\textbf{(S2)}] For every $ P \\in \\mathcal P_k $, choose a root $ R_P$. 
(To see that $P$ has a root, see~\\cite{Pawlik2013} or Theorem~\\ref{thm:BG-is-const-S-graph}.) Create a transformed copy $ (\\mathcal F^P, \\mathcal P^P)$ of $(\\mathcal F, \\mathcal P)$ such that $ \\boxset{\\mathcal F^P} $ is matched to $ R_P $. Formally, apply the transformation:\n\t$$\n\tT'_P(x,y) = \\Big( \\frac{\\wset{R_P}}{\\wset{B_P}}x + \\lset{R_P} - \\frac{\\lset{B_P}\\wset{R_P}}{\\wset{B_P}} , \\frac{\\hset{R_P}}{\\hset{B_P}}y + \\bset{R_P} - \\frac{\\bset{B_P}\\hset{R_P}}{\\hset{B_P}} \\Big),\n\t$$\n\twhere $ B_P = \\boxset{\\mathcal F} $ is the box being mapped onto $ R_P $.\n\t\\item[\\textbf{(S3)}] Set $ \\mathcal F_{k+1} = \\mathcal F_k \\cup \\big(\\cup_{P \\in \\mathcal P_k} \\mathcal F^P \\big) $.\n\t\\item[\\textbf{(S4)}] Now, for $ P \\in \\mathcal P_k $ and for $ Q \\in \\mathcal P^P $, let $ P_Q $ be the prob for $ \\mathcal F_{k+1}$ defined by $ Q $. \n\t\\item[\\textbf{(S5)}] Set $ \\mathcal P_{k+1} = \\{P_Q: P \\in \\mathcal P_k, Q \\in \\mathcal P^P \\}. $\n\\end{enumerate}\n\nThe tuple $(\\mathcal F_{k+1}, \\mathcal P_{k+1})$ is the new element of the sequence.\n\n\nFor the rest of this section, let $ G_k $ denote the intersection graph of $ \\mathcal F_k $. \nWe recall that the class spanned by $ \\{G_k\\}_{k\\geq 1} $ is the class of Burling graphs. \n\n\nNow, we state and prove some lemmas and properties about the construction of Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak. \n\n\n\\begin{property} \\label{prop:prop-after-def-Gamma}\n\tAdopting the notation from the definition of $ \\Gamma $, for every $ P \\in \\mathcal P $, we have: \n\t\\begin{enumerate}\n\t\t\\item the transformation $T_P $ is positive. \n\t\t\\item $ \\lset{E_P} > \\rset{\\boxset{\\mathcal F}} $, so in particular, $ E_P \\cap \\boxset{\\mathcal F} = \\varnothing $.\n\t\\end{enumerate}\n\\end{property}\n\n\\begin{proof}\n\tThe proof of (1) is immediate from the definition of $ T_P $.\n\t\n\tTo prove (2), set $ T_P: (x,y) \\mapsto (ax+c, by+d)$. We have \n\t$$\n\ta = \\frac{2\\wset{S}}{\\lset{E} - \\lset{S}}.\\frac{\\wset{P^\\uparrow}}{\\wset{S}},\n\t$$\n\tand\n\t$$ \n\tc = \\frac{2 \\wset S}{\\lset E - \\lset S} \\big(\\lset{P^\\uparrow} - \\frac{\\lset{S}\\wset{P^\\uparrow}}{\\wset{S}}\\big) + \\lset{P^\\uparrow}(1 - \\frac{2 \\wset{S}}{\\lset E - \\lset S}).\n\t$$ \n\t\n\tNow, notice that \n\t\\begin{equation*}\n\t\t\\begin{split}\n\t\t\t\\lset{T_P(E)} &= a.\\lset{E} + c \\\\\n\t\t\t&= \\lset{E}.\\frac{2 \\wset{S} \\wset{P^\\uparrow}}{\\wset{S}(\\lset{E}-\\lset{S})} + \\frac{2\\wset{S}\\lset{P^\\uparrow}}{\\lset{E}-\\lset{S}} \\\\ & \\ \\ \\ - \\lset{S}.\\frac{2 \\wset{S} \\wset{P^\\uparrow}}{\\wset{S}(\\lset{E}-\\lset{S})} + \\lset{P^\\uparrow} - \\frac{2\\wset{S}\\lset{P^\\uparrow}}{\\lset{E}-\\lset{S}} \\\\\n\t\t\t&= \\lset{P^\\uparrow} + (\\lset{E}-\\lset{S})\\frac{2 \\wset{S} \\wset{P^\\uparrow}}{\\wset{S}(\\lset{E}-\\lset{S})} \\\\\n\t\t\t& = \\lset{P^\\uparrow} + 2 \\wset{P^\\uparrow} = \\rset{P^\\uparrow} + \\wset{P^\\uparrow} > \\rset{P^\\uparrow}.\n\t\t\\end{split}\n\t\\end{equation*}\n\tTo complete the proof, notice that $ \\rset{P^\\uparrow} = \\rset{\\boxset{\\mathcal F}} $. \n\\end{proof}\n\n\\begin{property} \\label{prop:more-on-Gamma-and-disjoint-probs}\n\tLet $ \\mathcal F $ be a collection of strong Pouna sets, and let $ \\mathcal P $ be a set of probs for $ \\mathcal F $ that are mutually disjoint. 
Setting $(\\mathcal F',\\mathcal P') = \\Gamma(\\mathcal F, \\mathcal P)$ and adopting the notation from the definition of $\\Gamma$, we have that for every $ P \\in \\mathcal P $:\n\t\\begin{enumerate}\n\t\t\\item if $Q \\in \\mathcal P \\setminus \\{P\\}$, then $S_P \\cap Q = \\varnothing $, $ S_P \\cap S_Q = \\varnothing $, and $ \\Ter{S_P} \\cap Q = \\varnothing $,\n\t\t\\item $N_{\\mathcal F'}(P_1) = \\{S_P\\} $,\n\t\t\\item $N_{\\mathcal F'}(P_2) \\subseteq N_{\\mathcal F}(P)$ and $ N_{\\mathcal F'}(P_2) \\subseteq \\mathcal F$.\n\t\\end{enumerate}\n\\end{property}\n\n\n\n\\begin{proof}\n\tItem (1) follows from the facts that $ \\boxset{S_P} \\subseteq P $, $ S_P \\subseteq P$, $ S_Q \\subseteq Q$, and $ P \\cap Q = \\varnothing $.\n\t\n\tTo prove (2), notice that by Property~\\ref{prop:prop-after-def-Gamma}, we have $ \\lset{E_P} > \\rset{\\boxset{\\mathcal F}}$. Since $ P_1 $ is the prob defined by $ E_P$, the prob $ P_1 $ is also outside $ \\boxset{\\mathcal F}$. So, for every $ A \\in \\mathcal F$, we have $ A \\notin N_{\\mathcal F'}(P_1)$. Moreover, by item (1) of this property, for every $ Q \\in \\mathcal P \\setminus \\{P\\}$, we have $ S_Q \\notin N_{\\mathcal F'}(P_1) $. Finally, since $ E_P $ is a subterritory of $ S_P $, by definition, $ S_P \\cap P_1 \\neq \\varnothing $. Therefore $ N_{\\mathcal F'}(P_1) = \\{S_P\\}$. \n\t\n\t\n\tTo prove (3), assume that $ A \\in \\mathcal F' $ is of the form $ A = S_Q $ for some $ Q $. Case 1, $ Q = P$, in which case the $y$-extent of $S_Q = S_P $ is that of $ P^\\uparrow $, while the $y$-extent of $ P_2 $ is that of $ P^\\downarrow $; since $ P^\\uparrow $ and $ P^\\downarrow $ are disjoint, we have $ A \\notin N_{\\mathcal F'}(P_2) $. Case 2, $ Q \\neq P $, and thus item (1) of this property implies that $ A \\notin N_{\\mathcal F'}(P_2)$. Therefore, $ N_{\\mathcal F'}(P_2) \\subseteq \\mathcal F$.\n\t\n\tHence, $ N_{\\mathcal F'}(P_2) = N_{\\mathcal F}(P_2)$. So, since $ P_2 \\cap \\boxset{\\mathcal F} \\subseteq P $, we have $ N_{\\mathcal F'}(P_2) \\subseteq N_{\\mathcal F}(P)$. \n\\end{proof}\n\n\n\n\\begin{lemma}\n\tLet $ S $ be a strong Pouna set. Let $ \\mathcal F $ be a collection of transformed copies of $ S $ that satisfies Constraints \\emph{(C1)-(C6)}. Let $ \\mathcal P$ be a set of mutually disjoint stable probs of $\\mathcal{F}$. If $ (\\mathcal F', \\mathcal P') = \\Gamma(\\mathcal F, \\mathcal P)$, then \n\t\\begin{enumerate}\n\t\t\\item elements of $\\mathcal P'$ are mutually disjoint,\n\t\t\\item every element of $\\mathcal P'$ is a stable prob for $ \\mathcal F'$,\n\t\t\\item $ \\mathcal F' $ satisfies Constraints \\emph{(C1)-(C6)}.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tWe adopt the notation from the definition of $\\Gamma$.\n\t\n\tSet $ \\mathfrak B = \\boxset{\\mathcal F}$ and $ \\mathfrak{B'} = \\boxset{\\mathcal F'}$. Notice that $ \\lset{\\mathfrak{B'}} = \\lset{\\mathfrak{B}}$, $ \\bset{\\mathfrak{B'}} = \\bset{\\mathfrak{B}}$, and $ \\tset{\\mathfrak{B'}} = \\tset{\\mathfrak{B}} $. However, $ \\rset{\\mathfrak{B'}} > \\rset{\\mathfrak{B}}$. \n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{Elements of $\\mathcal P' $ are mutually disjoint.}} \t\n\t\n\tCorresponding to every $ P \\in \\mathcal P $, there are two probs in $ \\mathcal P' $, that is, $P_1 $ and $ P_2 $. Notice that $ P_1 \\cap P_2 = \\varnothing $. So, the fact that the probs in $ \\mathcal P' $ are mutually disjoint is implied directly by the same fact about $ \\mathcal P$. \n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. 
\\textit{Elements of $\\mathcal P' $ are stable probs for $\\mathcal F' $.}}\n\t\n\tFix $ P \\in \\mathcal P $. We prove that both $ P_1 $ and $P_2 $ are stable probs, and thus every prob in $ \\mathcal P' $ is stable. \n\t\n\tThe prob $ P_1 $ is defined by a subterritory $ E_P $. By Property~\\ref{prop:prop-after-def-Gamma}, $E_P \\cap \\mathfrak{B} = \\varnothing $. Therefore, for every $ A \\in \\mathcal F$, we have $ E_P \\cap A = \\varnothing $. Moreover, by definition of subterritory, $E_P \\cap S_P = \\varnothing $. Finally, since $ E_P \\subseteq P $, by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, $ E_P \\cap S_Q = \\varnothing $ for every $ Q \\in \\mathcal P \\setminus \\{P\\} $ as well. Thus $ E_P $ does not intersect any element of $ \\mathcal F' $. So, $ E_P $ is a root for $P_1 $. \n\t\n\tNotice that by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we have $N(P_1) = \\{S_P\\} $, so item (2) of the definition of stable prob holds. Moreover,\n\tsince $ E_P $ is a subterritory of $ S_P$, we have \n\t\\begin{itemize}\n\t\t\\item $ E_P \\subseteq \\Ter{S_P}$, \n\t\t\\item $ \\bset{E_P} > \\bset{S_P}$ and $ \\tset{E_P} < \\tset{S_P}$,\n\t\t\\item $S_P $ crosses $ P_1$ vertically,\n\t\\end{itemize}\n\twhich proves items (1), (3), and (4) of the definition of stable prob, respectively. For item (3), we have used the facts that $ \\bset{P_1} = \\bset{E_P} $ and $ \\tset{P_1} = \\tset{E_P} $.\n\t\n\t\n\tBy the hypothesis, $ P $ has a root. Let $ R $ be a root of $ P $. Set $ R^\\downarrow = R \\cap P^\\downarrow $ and notice that $ R^\\downarrow $ is a root of $ P^\\downarrow $, as a prob for $\\mathcal F $. In particular, $ R^\\downarrow $ does not intersect any element of $ \\mathcal F $.\n\tNow, let $ A \\in N_{\\mathcal F'} (P_2) $.\n\tBy Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we have $ A \\in N_{\\mathcal F}(P) $. Therefore, by Property~\\ref{prop:every-root-stable}, $ R \\subseteq \\Ter A$. Consequently, $ R^\\downarrow \\subseteq \\Ter A $. This proves item (1) of the definition of stable prob. Moreover, since $ A \\in N_{\\mathcal F}(P) $ and $ P$ is stable, we have \n\t$$\n\t\\bset{A} < \\bset{P} = \\bset{P^\\downarrow} = \\bset{P_2},\n\t\\text{ and }\n\t\\tset{A} > \\tset{P} \\geq \\tset{P^\\downarrow} = \\tset{P_2},\n\t$$\n\twhich proves item (3) of the definition. Also, since $ A $ crosses $ P$ vertically, by Property~\\ref{prop:crossing-a-subrectangle}, it crosses $ P^\\downarrow $ vertically as well, which proves item (4) of the definition. \n\t\n\tNow, assume that $ A, B \\in N_{\\mathcal F'} (P_2) $ and $ A \\neq B $. Again, by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we have $ A, B \\in N_{\\mathcal F}(P) $. Thus, $ A\\cap B =\\varnothing $, proving item (2) of the definition. \n\tHence, $P_2 $ is a stable prob. \n\t\n\t\n\t\n\tNow, we prove that $\\mathcal F' $ satisfies Constraints (C1)-(C6).\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C1).}}} \n\t\n\tLet $ A, B \\in \\mathcal F'$ be two distinct and intersecting transformed copies of $ S $. Set $ L_A = \\{ (x,y) \\in A : x = \\lset A \\} $. Notice that $ \\lset A = \\lset{L_A}$. \n\t\n\tIf $ A, B \\in \\mathcal F $, then the result holds because $ \\mathcal F $ satisfies (C1). Furthermore, by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we cannot have $ A, B \\in \\mathcal F' \\setminus \\mathcal F$. So, without loss of generality, assume $ A \\in \\mathcal F' \\setminus \\mathcal F $, so $ A = S_P $ for some $ P \\in \\mathcal P $, and $ B \\in \\mathcal F$. 
In particular, $ B \\subseteq \\mathfrak B $, and by construction, $ A \\cap (\\mathfrak{B} \\setminus P^\\uparrow) = \\varnothing $. Hence, $ B \\cap P^\\uparrow \\neq \\varnothing $, and therefore $ B \\in N_{\\mathcal F}(P) $. Thus, by Property~\\ref{prop:every-root-stable}, for every root $ R $ of $ P $, we have $ R \\subseteq \\Ter B $. Moreover, we have $ \\bset{B}< \\bset{P} $ and $ \\tset{B}> \\tset{P} $. Also, notice that by construction, for every $ \\mathfrak{s} \\in \\{\\mathfrak{l}, \\mathfrak{b}, \\mathfrak{t} \\}$, we have $ \\mathfrak{s}(A) = \\mathfrak{s}(P^\\uparrow)$. Let $ p= (x,y) \\in L_A $. So, $ x = \\lset{A} = \\lset{P}$. Moreover, $ \\bset P \\leq \\bset A \\leq y \\leq \\tset A \\leq \\tset P $. Therefore, $ (x, y) \\in \\{ (x',y') \\in P : x' = \\lset P \\}$. Consequently, $ (x, y) \\in R $. So, $ L_A \\subseteq R \\subseteq \\Ter B $.\n\t\n\t\n\tMoreover, we have:\n\t\\begin{multline*}\n\t\t\\lset B = \\lset{\\boxset B } \\leq \\lset{\\Ter B} \\leq \\lset{L_A} \\\\ \n\t\t= \\lset A = \\lset{P^\\uparrow} = \\lset P = \\lset R < \\rset R \\\\ \n\t\t\\leq \\rset{\\Ter B} \\leq \\rset{\\boxset B} = \\rset B \\overset{(a)}{\\leq} \\rset{\\mathfrak{B}} \\overset{(b)}{<} \\rset{A},\n\t\\end{multline*}\t\n\twhere (a) is because $ B \\in \\mathcal F $, and (b) follows from Step (S$'$2) of the construction. Therefore $ \\lset B \\leq \\lset A < \\rset B < \\rset A $.\n\t\n\tOn the other hand,\n\t\\begin{equation*}\n\t\t\\bset B < \\bset P < \\bset{P^\\uparrow} = \\bset A \\overset{(c)}{<} \\tset A = \\tset P < \\tset B, \n\t\\end{equation*}\n\twhere (c) follows from the fact that $ A $, a strong Pouna set, cannot be a subset of a horizontal line segment. Therefore $ \\bset B < \\bset A < \\tset A < \\tset B$. \n\t\n\tHence, all the items in Constraint (C1) hold and $ A \\curvearrowright B $. \n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C2).}}} \n\t\n\tLet $ A $ and $ B $ be two disjoint sets in $ \\mathcal F' $ such that $ A \\cap \\Ter B \\neq \\varnothing $. We prove that $ A , B \\in \\mathcal F $. For the sake of contradiction, assume that $ \\{A, B\\} \\nsubseteq \\mathcal F$. There are three cases possible. \n\t\n\tCase 1: $ A, B \\in \\mathcal F' \\setminus \\mathcal F $. So, there exist $ P, Q \\in \\mathcal P $ such that $ P \\neq Q $ and $ A=S_P $ and $ B = S_Q $. But in that case, by construction, $ \\boxset B \\subseteq Q $, and $ A \\subseteq P$. So, from $ A \\cap \\Ter B \\neq \\varnothing $, we have $ P \\cap Q \\neq \\varnothing $, a contradiction. \n\t\n\tCase 2: $ A =S_P $ for some $P \\in \\mathcal P $, and $ B \\in \\mathcal F $. Since $ A \\subseteq P $, from $ A \\cap \\Ter B \\neq \\varnothing $ we deduce that $ P \\cap \\Ter B \\neq \\varnothing$. Choose $ p = (x,y) \\in P \\cap \\Ter B $. By definition of territory, there exists a point $p'= (x',y) \\in B $ with $ x' > x $. Now, because $ B \\in \\mathcal F $, we have $ p' \\in \\mathfrak B $ and therefore $ p' \\in P $. Hence $ P \\cap B \\neq \\varnothing $, i.e.\\ $ B \\in N(P) $. Therefore, $ B $ crosses $ P$ vertically. Moreover, $A = S_P $ crosses $ P_1 $ and therefore $P$ horizontally. So, by Property~\\ref{prop:horizontal-and-vertical-crossing-intersect}, we have $ A \\cap B \\neq \\varnothing $, a contradiction.\n\t\n\tCase 3: $ A \\in \\mathcal F $ and $ B =S_P $ for some $P \\in \\mathcal P $. In this case $ \\Ter B \\subseteq P $, and therefore $ A \\cap P \\neq \\varnothing $, i.e. 
$A \\in N(P) $. So, $ A $ crosses $P $ vertically. On the other hand, $ B $ crosses $ P_1 $ and thus $ P$ horizontally. Therefore, by Property~\\ref{prop:horizontal-and-vertical-crossing-intersect}, we have $ A \\cap B \\neq \\varnothing $, a contradiction.\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C3).}}} \n\t\n\tLet $ A , B \\in \\mathcal F' $ be two distinct sets with non-empty intersection. For the sake of contradiction, assume that there exists $ C \\in \\mathcal F' $ such that $ C \\subseteq \\Ter A \\cap \\Ter B $. We first show that $ C \\in \\mathcal F $. Suppose not, so $ C = S_P $ for some $P \\in \\mathcal P$. Since $ C \\subset P$, neither of $ A $ and $ B $ can be some set of the form $ S_Q $. Therefore $ A, B \\in \\mathcal F $. Now, notice that $ C \\subseteq \\Ter A \\subseteq \\boxset A$. On the other hand, $ \\boxset A \\subseteq \\mathfrak B $, but $ C \\nsubseteq \\mathfrak B$, a contradiction. \n\t\n\tNow we prove that both $ A $ and $ B $ are in $ \\mathcal F $. Suppose not. Without loss of generality, assume that $A= S_P $ for some $P \\in \\mathcal P $. Since $ C \\subseteq \\Ter A $, we must have $ C \\in N(P) $. Therefore $ \\bset C < \\bset P \\leq \\bset A $. On the other hand, because $ C \\subseteq \\Ter A \\subseteq \\boxset A $, we have $ \\bset C \\geq \\bset A $, a contradiction. So, $ A, B \\in \\mathcal F $ as well, and the result follows from the fact that $\\mathcal{F}$ satisfies~(C3). \n\t\n\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C4).}}} \n\t\n\tFix $P \\in \\mathcal P$. Let us first prove that there exists no $ A \\in \\mathcal F' $ such that $ A \\curvearrowright S_P $ or $ S_P \\prec A $. First, if $ A \\curvearrowright S_P$, then in particular $ A \\cap S_P \\neq \\varnothing $. Thus, by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we have $ A\\in \\mathcal F $. Therefore, $ \\rset{A} \\leq \\rset{\\boxset{\\mathcal F}} = \\rset{P^\\uparrow} < \\rset{S_P}$. But on the other hand, $ A \\curvearrowright S_P $ implies $ \\rset{A} > \\rset{S_P}$, a contradiction. Second, if $ S_P \\prec A $, then in particular $S_P \\subseteq \\Ter{A} $. Also, by construction $ S_P \\subseteq P $. Therefore, $ \\Ter{A} \\cap P \\neq \\varnothing $. Hence, by Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, we have $ A \\in \\mathcal F $. Therefore $ \\rset{A} \\leq \\rset{\\boxset{\\mathcal F}} = \\rset{P^\\uparrow} < \\rset{S_P}$.\n\tOn the other hand, by Lemma~\\ref{lem:prec-properties}, $ S_P \\prec A $ implies that $ \\rset{S_P} < \\rset{A}$, a contradiction.\n\t\n\tNow, for the sake of contradiction, assume that there exist $ A, B, C \\in \\mathcal F' $ such that $ A \\prec B $, $ A \\curvearrowright C $, and $ B \\curvearrowright C $. From what we proved above, we know that $ A, C \\in \\mathcal F$. Therefore, since $ \\mathcal F $ satisfies (C4), we cannot have $ B\\in \\mathcal F $. So, $ B = S_P $ for some $ P \\in \\mathcal P $. In particular $ \\Ter{B} \\subseteq \\boxset{B} \\subseteq P^\\uparrow$. \n\t\n\tFrom $A \\prec B $, we have\n\t$\n\tA \\subseteq \\Ter B \\subset P^\\uparrow \\subseteq P\n\t$.\n\tTherefore, $ A \\in N_{\\mathcal F}(P)$. \n\t\n\tOn the other hand, from $ B \\curvearrowright C $, we have $ B \\cap C \\neq \\varnothing $, therefore $ C \\cap P^\\uparrow \\neq \\varnothing $. So, $ C \\in N_{\\mathcal F}(P)$. \n\t\n\tSo, $ A $ and $ C $ are two sets in $ N_{\\mathcal F}(P) $ that are not disjoint, which contradicts the fact that $ P$ is stable. 
\n\t\n\t\n\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C5).}}} \n\t\n\tFor the sake of contradiction, assume that $ A $, $ B $, and $ C $ are three sets in $ \\mathcal F' $ that pairwise intersect. At least one of the three sets must be in $ \\mathcal F' \\setminus \\mathcal F $, because (C5) holds for $\\mathcal F $. Moreover, because of Property~\\ref{prop:more-on-Gamma-and-disjoint-probs}, at most one of the three sets is in $ \\mathcal F' \\setminus \\mathcal F $. So, without loss of generality, assume that $ A=S_P $ for some $P \\in \\mathcal P $, and that $ B, C \\in \\mathcal F $. But since $ B \\cap A \\neq \\varnothing $, we have $ B \\cap P \\neq \\varnothing$, i.e.\\ $B \\in N(P) $. Similarly, $ C \\in N(P)$. But $ B \\cap C \\neq \\varnothing $ contradicts the fact that $ P$ is stable for $ \\mathcal F $. Hence, (C5) holds for $ \\mathcal F' $. \n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F' $ satisfies \\emph{(C6).}}} \n\t\n\tBy assumption, $ S $ is strong. So, it is enough to show that $ T_P $ in Step (S$'$2) is a positive transformation for every $P \\in \\mathcal{P}$. This is shown in Property~\\ref{prop:prop-after-def-Gamma}. \n\t\n\tThis completes the proof of the lemma. \n\\end{proof}\n\n\n\\begin{theorem}\\label{thm:BG-is-const-S-graph}\n\tLet $ S $ be a Pouna set. Every Burling graph is a constrained $ S $-graph.\n\\end{theorem}\n\\begin{proof}\n\tFor this proof, we adopt the notation in the definition of the construction of Pawlik, Kozik, Krawczyk, Laso\\'n, Micek, Trotter, and Walczak. \n\t\n\tWe may assume that $ S $ is a strong Pouna set; otherwise, we replace every $ S $ in this proof by the horizontal reflection of $ S $.\n\t\n\tFix a subterritory $ E $ of $ S $ (which exists, by Lemma~\\ref{lem:subterritory-exists}), and apply the construction to it. For every $ k \\geq 1 $, we know that $ \\mathcal F_k $ is a collection of transformed copies of $ S $. We first prove that $ \\mathcal F_k$ satisfies Constraints (C1)-(C6). To do so, we prove the following stronger statement by induction on $ k$.\n\t\n\t\\begin{statement}\n\t\tFor every $ k \\geq 1 $, we have:\n\t\t\\begin{enumerate}\n\t\t\t\\item the elements of $ \\mathcal P_k $ are mutually disjoint,\n\t\t\t\\item $\\mathcal P_k $ is a collection of stable probs of $ \\mathcal F_k $, \n\t\t\t\\item $\\mathcal{F}_k$ satisfies Constraints (C1)-(C6).\n\t\t\\end{enumerate}\n\t\\end{statement}\n\t\n\tFirst of all, for $ k=1 $, the first item of the statement follows from the fact that $ E $ is a subterritory of $ S $. Items (2) and (3) hold trivially, as $ |\\mathcal{F}_1|=1 $.\n\t\n\t\n\tNow, assume that the statement holds for some $ k \\geq 1 $; we prove that it holds for $ k+1 $. \n\t\n\tNotice that for every $ P \\in \\mathcal P_k $, the transformation $ T'_P$ is positive, so the tuple $ (\\mathcal F^P, \\mathcal P^P) $ is a positive transformed copy of $ \\Gamma(\\mathcal F_{k}, \\mathcal P_{k}) $. 
So, by Property~\\ref{prop:T(F)-satisfies-C1-C5}, we know that \n\t\\begin{equation} \\label{eq:FP-sat-C1-6}\n\t\t\\text{for every $ P \\in \\mathcal P_k $, the collection $ \\mathcal F^P $ satisfies Constraints (C1)-(C6).}\n\t\\end{equation}\n\tMoreover, it is easy to check the following:\n\t\\begin{equation} \\label{eq:P-stable-mut-disj}\n\t\t\\text{for every $ P \\in \\mathcal P_k $, the elements of $ \\mathcal P^P $ are stable probs for $ \\mathcal F^P$ and are mutually disjoint.}\n\t\\end{equation}\n\t\n\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{The elements of $ \\mathcal P_{k+1} $ are mutually disjoint.}} \n\t\n\tLet $ P_Q $ and $P'_{Q'} $ be two probs in $ \\mathcal P_{k+1} $. In order to show that these two probs are disjoint, it is enough to show that $ (\\bset{Q}, \\tset{Q}) $ and $(\\bset{Q'}, \\tset{Q'}) $ are disjoint intervals. If $ P = P' $, then this follows from (\\ref{eq:P-stable-mut-disj}), and if $ P \\neq P' $, it follows from the fact that $ Q $ and $ Q' $ are inside the roots of $P$ and $ P'$ respectively, and $ P $ and $ P' $ are disjoint by induction hypothesis. \n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{Every $ P \\in \\mathcal P_{k+1} $ is a stable prob for $\\mathcal F_{k+1}$.}} \n\t\n\tLet $ P_Q \\in \\mathcal P_{k+1} $. Notice that $ Q \\in \\mathcal P^P$ is a prob for $\\mathcal F^P $. So, by (\\ref{eq:P-stable-mut-disj}), $ Q $ has a root $ R $ such that for every $ A \\in N_{\\mathcal F^P}(Q)$, we have $ R \\subseteq \\Ter{A}$. So, item (1) of the definition of stable prob holds. \n\t\n\tSet $ N_1 = N_{\\mathcal F^P}(Q) $ and $ N_2 = N_{\\mathcal F_{k+1}}(P) $. \n\t\n\tThe elements in $ N_{\\mathcal F_{k+1}}(P_Q) $ are either the neighbors of $ Q $ as a prob for $ \\mathcal F^P $, so they are in $N_1 $, or are outside $ R_P $ and thus are in $N_2 $. The elements in $N_1 $ are mutually disjoint by (\\ref{eq:P-stable-mut-disj}) and the elements in $ N_2 $ are mutually disjoint by induction hypothesis. Finally, one element in $ N_1 $ and one element in $N_2 $ are disjoint because the former is inside $ R_P $ and the latter does not intersect $ R_P $. So, item (2) of the definition holds as well.\n\t\n\tNow, fix $ A \\in N_{\\mathcal F_{k+1}}(P_Q)$. If $ A \\in N_1 $, then\n\t$$\n\t\\bset{A} < \\bset{Q} = \\bset{P_Q},\n\t\\text{ and }\n\t\\tset{A} > \\tset{Q} = \\tset{P_Q}. \n\t$$\n\tMoreover, there is a path in $ A $ crossing $ Q $. So, the same path crosses $P_Q $ as well. \n\t\n\tIf $ A \\in N_2 $, then\n\t$$\n\t\\bset{A} < \\bset{P} = \\bset{R_P} \\leq \\bset{Q} = \\bset{P_Q},\t\n\t$$\n\tand\n\t$$\n\t\\tset{A} > \\tset{P} = \\tset{R_P} \\geq \\tset{Q} = \\tset{P_Q}.\n\t$$\n\tMoreover, there is a path in $ A $ crossing $ P$, so by Property~\\ref{prop:crossing-a-subrectangle}, it crosses $P_Q $ as well. \n\t\n\tNow, we check that $\\mathcal{F}_{k+1}$ satisfies Constraints (C1)-(C6). In what follows, we use several times the fact that, by (\\ref{eq:FP-sat-C1-6}) and by the induction hypothesis, the conditions hold when all the elements are chosen inside $ \\mathcal F_k $ or inside $ \\mathcal F^P $ for some $P \\in \\mathcal P_k$. \n\t\n\tMoreover, notice that by induction hypothesis, elements of $ \\mathcal P_k $ are disjoint. Now, because every $A \\in \\mathcal F^P $ is entirely inside $ P $, we know that \n\t\\begin{equation} \\label{eq:FP-FQ-disjoint}\n\t\t\\text{if $ P \\neq Q $, then the elements of $ \\mathcal F^P $ are disjoint from the elements of $\\mathcal F^Q$.}\n\t\\end{equation}\n\t\n\tFurthermore, for every $ P \\in \\mathcal P_k$, the elements of $ \\mathcal F^P $ are all inside $ R_P$. 
Moreover, by definition of root, no element of $\\mathcal F_k $ intersects $ R_P $, so,\n\t\\begin{equation} \\label{eq:Fk-FP-disjoint}\n\t\t\\text{for every $ P\\in \\mathcal P_k $, the elements of $ \\mathcal F_k $ are disjoint from the elements of $ \\mathcal F^P $. }\n\t\\end{equation}\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1}$ satisfies \\emph{(C1).}}} \n\t\n\t\n\tLet $ A, B \\in \\mathcal F_{k+1} $ be two distinct elements such that $ A \\cap B \\neq \\varnothing $. By (\\ref{eq:FP-FQ-disjoint}) and (\\ref{eq:Fk-FP-disjoint}), either $ A, B \\in \\mathcal F_k $ or there exists $ P \\in \\mathcal P_k$ such that $ A, B \\in \\mathcal F^P $. In the former case, by induction hypothesis, we have $A \\curvearrowright B $ or $ B \\curvearrowright A $. In the latter case, by (\\ref{eq:FP-sat-C1-6}), we have $ A\\curvearrowright B $ or $ B \\curvearrowright A $.\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1}$ satisfies \\emph{(C2).}}} \n\t\n\tLet $ A, B \\in \\mathcal F_{k+1} $ be such that $ A \\cap B = \\varnothing $ and $ A \\cap \\Ter{B} \\neq \\varnothing $. There are four cases possible:\n\t\n\tCase 1: $ A, B \\in \\mathcal F_k $, in which case the result follows from the induction hypothesis. \n\t\n\tCase 2: $ A \\in \\mathcal F_k $ and $ B \\in \\mathcal F^P $ for some $P \\in \\mathcal P_k $. \n\t\n\tThis case is not possible, because $ \\Ter{B} \\subseteq \\boxset{B} \\subseteq R_P $. However, $ A \\in \\mathcal F_k $, so $ A $ does not intersect $ R_P $ as it is a root of a prob for $ \\mathcal F_k $. \n\t\n\t\n\tCase 3: $ A \\in \\mathcal F^P $ for some $P \\in \\mathcal P_k $ and $ B \\in \\mathcal F_k $.\n\t\n\tSince $ A \\subseteq R_P $, we have $ R_P \\cap \\Ter{B} \\neq \\varnothing $. Let $ p = (x,y) \\in R_P \\cap \\Ter{B}$. By the definition of territory, there exists $ x' > x $ such that $ p'=(x', y) \\in B $. Moreover, since $ R_P $ is a root of $ P $, we have $ p' \\in P $. So, $ p' \\in B \\cap P $. Therefore, $ B \\in N_{\\mathcal F_k}(P)$. Hence, since $ P $ is a stable prob for $ \\mathcal F_k $ by the induction hypothesis, Property~\\ref{prop:every-root-stable} implies that every root of $ P $ is inside the territory of $ B$. Hence,\n\t$\n\tA \\subseteq R_P \\subseteq \\Ter{B}\n\t$. \n\tSo, the result holds. \n\t\n\tCase 4: $ A \\in \\mathcal F^P $ and $ B \\in \\mathcal F^Q $ for $ P, Q \\in \\mathcal P_k $. Let $ p \\in A \\cap \\Ter{B} $. So, in particular $ p \\in A \\subseteq P $ and $ p \\in \\Ter{B} \\subseteq \\boxset{\\mathcal F^Q} \\subseteq Q $. Therefore $ P\\cap Q \\neq \\varnothing $. Hence, by the induction hypothesis, we must have $P = Q $. So, the result follows from (\\ref{eq:FP-sat-C1-6}). \n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1} $ satisfies \\emph{(C3).}}} \n\t\n\t\n\tLet $ A, B \\in \\mathcal F_{k+1} $ be two distinct sets such that $ A \\cap B \\neq \\varnothing $. For the sake of contradiction, assume that there exists $ C \\in \\mathcal F_{k+1} $ such that $ C \\subseteq \\Ter{A} \\cap \\Ter{B} $. \n\t\n\tFirst of all, by (\\ref{eq:FP-FQ-disjoint}) and (\\ref{eq:Fk-FP-disjoint}), there are only two possible cases for $ A $ and $ B $: either $ A, B \\in \\mathcal F_{k} $ or $ A, B \\in \\mathcal F^P $ for some $ P \\in \\mathcal P_k $. \n\t\n\tCase 1: $ A, B \\in \\mathcal F_{k} $. In this case, by induction hypothesis, we cannot have $ C \\in \\mathcal F_k $. So, $ C \\in \\mathcal F^P $ for some $ P \\in \\mathcal P_k $. Consequently, $ C \\subseteq R_P $.\n\tNow, let $ p = (x,y) \\in C $. 
Since $ C \\subseteq \\Ter{A} $, there exists $ x'>x $ such that $ p'=(x',y) \\in A $. But also, $ p' \\in P $. Therefore $ A \\in N_{\\mathcal F_{k}}(P)$. Similarly, we can show that $ B \\in N_{\\mathcal F_k}(P)$. \n\tThis contradicts the fact that the elements in $ N_{\\mathcal F_k}(P) $ are mutually disjoint, since $ A \\cap B \\neq \\varnothing $. \n\t\n\tCase 2: $ A, B \\in \\mathcal F^P $ for some $ P \\in \\mathcal P_k $. Notice that \n\t$$ \\Ter{A} \\subseteq \\boxset{A} \\subseteq \\boxset{\\mathcal F^P} \\subseteq R_P. $$\n\tSo, $ C \\subseteq R_P $. Therefore $ C \\in \\mathcal F^P$ as well, and the result follows from (\\ref{eq:FP-sat-C1-6}).\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1} $ satisfies \\emph{(C4).}}} \n\t\n\tAssume, for the sake of contradiction, that there exist $ A, B, C \\in \\mathcal F_{k+1} $ such that $ A\\prec B $, $ A \\curvearrowright C $, and $ B \\curvearrowright C $. By (\\ref{eq:FP-FQ-disjoint}) and (\\ref{eq:Fk-FP-disjoint}), since $ A \\cap C \\neq \\varnothing $ and $ B \\cap C \\neq \\varnothing $, either $ A, B, C \\in \\mathcal F_k $ or $ A, B, C \\in \\mathcal F^P $ for some $P \\in \\mathcal P_k $. The former is not possible because of the induction hypothesis, and the latter because of (\\ref{eq:FP-sat-C1-6}). So, no such triple exists. \n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1} $ satisfies \\emph{(C5).}}} \n\t\n\tFor the sake of contradiction, assume that there exist three distinct sets $ A, B, C \\in \\mathcal F_{k+1}$ that are mutually intersecting. By the induction hypothesis, such a triple does not exist in $ \\mathcal F_{k}$. So, at least one of the sets is in $ \\mathcal F^P $ for some $P \\in \\mathcal P_k $. But then, (\\ref{eq:FP-FQ-disjoint}) and (\\ref{eq:Fk-FP-disjoint}) imply that the three sets are all in $\\mathcal{F}^P$, a contradiction with (\\ref{eq:FP-sat-C1-6}).\n\t\n\t\n\t\n\t{\\noindent \\textbf{Claim}. \\textit{$ \\mathcal F_{k+1} $ satisfies \\emph{(C6).}}} \t\n\tBy assumption, $ S $ is strong. Thus, we only need to show that every element of $ \\mathcal F_{k+1}$ is a positive transformed copy of $ S $. This is true since the elements of $ \\mathcal F_k $ are positive transformed copies of $ S $ and the elements of each $ \\mathcal F^P $ are also positive transformed copies of $ S$, because by (\\ref{eq:FP-sat-C1-6}), the collection $\\mathcal F^P $ satisfies (C6).\n\t\n\t\n\tThis finishes the proof of the statement. \n\t\n\tTo complete the proof of the theorem, it is enough to notice that, by the statement above, the graphs in the Burling sequence are all constrained $ S $-graphs, and that the class of constrained $ S$-graphs is closed under taking induced subgraphs. \n\\end{proof}\n\n\\subsection{The equality of the three classes}\n\n\\begin{corollary} \\label{cor:all-equal}\n\tThe class of Burling graphs is equal to the class of constrained graphs and to the class of constrained $ S$-graphs for every compact path-connected subset $ S $ of $ \\mathbb{R}^2 $ that is not an axis-aligned rectangle. \n\\end{corollary}\n\\begin{proof}\n\tThis follows from Theorems~\\ref{thm:1-const-is-BG} and~\\ref{thm:BG-is-const-S-graph}.\n\\end{proof}\n\n\n\\section*{Acknowledgment}\n\nThe author thanks Paul Meunier for several useful discussions and his contributions to some proofs, in particular Property~\\ref{prop:horizontal-and-vertical-crossing-intersect} and Lemma~\\ref{lem:subterritory-exists}. 
\nThe author thanks Nicolas Trotignon for many insightful discussions.\nThe author also thanks Gael Gallot for useful discussions, in particular during his internship on this topic.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Description of the Project}\n\nGHaFaS is an improved version of the scanning Fabry-Perot system FaNTOmM\n\\citep{he2003}, which is a resident instrument on the\nObservatoire du mont M\\'egantic (OMM) 1.6m telescope and which has also\nbeen used on the CFH and ESO La Silla 3.6m telescopes. The complete \nsystem is composed of a focal reducer, a calibration unit, a filter wheel\nfor the order-sorting filters, an FP etalon and an IPCS camera. The IPCS\nis composed of a Hamamatsu intensifier MCP tube which intensifies\nevery generated electron coming from the photocathode by a factor of 10$^7$.\nEach photon event, recorded on a DALSA CCD, is then analysed by a centering \nalgorithm. With this amplification, the camera has essentially no readout noise.\nBecause of this, a zero-noise IPCS is to be preferred to CCDs at very low \nflux levels \\citep{ga2002}, even if the GaAs IPCS has only a DQE of 25\\%.\nMoreover, because of the fast scanning capability, it can average out the\nvariation of atmospheric transmission, which is not possible with the long\nintegration times needed per channel for the CCDs in order to beat the\nread-out noise.\n\nIn the last 3 years with the FaNTOmM system, around 150 galaxies were\nobserved on the OMM, CFH and ESO La Silla telescopes in the context of\n3 large surveys: the SINGS sample \\citep{da2006}, a\nsurvey of barred galaxies (the BHabar sample \\citep{he2005}), and\na sample of Virgo spirals \\citep{ch2006}. While the first scientific\njustification of the Montr\\'eal group was to derive high spatial resolution\noptical rotation curves for mass modeling purposes, the data was also used \nby IAC astronomers to constrain the role of gravitational perturbations,\nas well as of feedback from individual HII regions, on the evolution of structures\nin galaxies, and by a Berkeley-Munich group and the G\\'EPI group in Paris to\ncompare those local samples to high-$z$ galaxies.\n\nGHaFaS will come with its own custom-designed focal reducer developed to be\noptically and mechanically compatible with the Nasmyth focus of the WHT.\nIt has its own control and data-acquisition system. It will have \na 4x4 arcmin field with a 0.45 arcsec pixel and $\\sim$5 km\/s velocity\nresolution. Full acquisition and reduction software (mainly based on IDL\nroutines) will be provided by the Montr\\'eal group.\nThe project will be done in 3 phases. For Phase I (July 2007), the optical\nsystem (focal reducer, filter wheel \\& calibration unit) will be delivered \nto the WHT and used with the FaNTOmM camera for this first run. For Phase II\n(end of 2007), an improved GaAs IPCS will be added to the system. Phase III \n(early 2008) will provide an FP controller and possibly a monochromator to\ncalibrate the data at the observing wavelength.\n\n\\section*{Science to be done with GHaFaS}\n\nTwo-dimensional kinematics is a very powerful technique for studying the \nstructure and evolution of galaxies. 
The distribution of dark matter,\ncircumnuclear star formation and fuelling of active galactic nuclei, detection\nof counter-rotating and kinematically decoupled components, and the effects\nof interaction between massive stars and the interstellar medium are\namong the physical phenomena which can be studied with this technique: see e.g.\n\\citet{fa2007} and \\citet{re2007}.\n\nThe first large program for which we plan to use GHaFaS consists of observing \na sample of 46 carefully selected nearby galaxies which are all included in the\nSINGS, THINGS, GALEX, and other CO and optical archives. Due to the angular\nsize of some of the objects, 2-4 fields may be necessary to reach the 25th \nmagnitude radius. This totals 72 fields, which will require $\\sim$18 clear\nnights of observing with GHaFaS on the WHT. Priority will be given to enlarging our Virgo\nsample of galaxies. The full sample ranges from elliptical (with emission)\nto irregular galaxies, 2\/3 of which are intermediate-type objects, since\nthis is where highly star-forming regions will be observed. It will be\npossible to use this sample for many scientific projects, ranging from \nlarge-scale mass modeling using the velocity fields, in order to derive the dark \nmatter density profiles, to the study of the internal kinematics of\nindividual HII regions.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section*{Acknowledgements}\nWe thank Andrew Ilyas and Sam Park for helpful discussions.\n\nWork supported in part by the NSF grants CCF-1553428, CNS-1815221, the Google\nPhD Fellowship, and the Microsoft Corporation. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0015.\n\nResearch was sponsored by the United States Air Force Research Laboratory and\nwas accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views\nand conclusions contained in this document are those of the authors and should\nnot be interpreted as representing the official policies, either expressed or\nimplied, of the United States Air Force or the U.S. Government. The U.S.\nGovernment is authorized to reproduce and distribute reprints for Government\npurposes notwithstanding any copyright notation herein.\n\n\\section{Additional Experimental Results}\n\\label{app:res}\n\n\\subsection{Human Baselines for \\textsc{Breeds}{} Tasks}\n\\label{app:res_human}\nIn Section~\\ref{sec:humans}, we evaluate human performance on binary versions of \nour \\textsc{Breeds}{} tasks. Appendix Figures~\\ref{fig:human_pairwise_s} \nand~\\ref{fig:human_pairwise_t} show the distribution of annotator accuracy over \ndifferent pairs of superclasses for test data sampled from the source and target \ndomains, respectively.\n\n\\begin{figure}[!h]\n\t\n\t\\begin{subfigure}{1\\textwidth}\n\t\t\\centering\n\t\t\t\\includegraphics[width=0.82\\textwidth]{Figures\/eval\/acc_comparison_pairwise_app_S.pdf}\n\t\t\t\\caption{Source domain (no subpopulation shift)}\n\t\t\t\\label{fig:human_pairwise_s}\n\t\\end{subfigure}\n\\begin{subfigure}{1\\textwidth}\n\t\\centering\n\t\\includegraphics[width=0.82\\textwidth]{Figures\/eval\/acc_comparison_pairwise_app_T.pdf}\n\t\\caption{Target domain (with subpopulation shift)}\n\t\\label{fig:human_pairwise_t}\n\\end{subfigure}\n\t\\caption{Distribution of annotator accuracy over pairwise superclass classification \n\t\ttasks. 
We observe that human annotators consistently perform \n\t\tbetter \n\t\ton tasks \n\t\tconstructed using our modified ImageNet class \n\t\thierarchy (i.e., \\textsc{Breeds}{}) as opposed to those obtained directly from \n\t\tWordNet.}\n\n\\end{figure}\n\n\\clearpage\n\n\\subsection{Model Evaluation}\n\\label{app:res_eval}\nIn Figures~\\ref{fig:perclass_a}--\\ref{fig:perclass_d121}, we visualize model \nperformance over \\textsc{Breeds}{} superclasses for different model architectures. We observe in \ngeneral that models perform fairly uniformly over classes when the test data is drawn \nfrom the source domain. This indicates that the tasks are well-calibrated---the \nvarious superclasses are of comparable difficulty. At the same time, we see that \nmodel robustness to subpopulation shift, i.e., the drop in accuracy on the target domain, \nvaries widely over superclasses. This could be either due to some superclasses\nbeing broader by construction or due to models being more sensitive to\nsubpopulation shift for some classes.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/perclass_alexnet.pdf}\n\t\\caption{Per-class source and target accuracies for AlexNet on \\textsc{Breeds}{}\n tasks.}\n\t\\label{fig:perclass_a}\n\\end{figure}\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/perclass_resnet50.pdf}\n\t\\caption{Per-class source and target accuracies for ResNet-50 on \\textsc{Breeds}{}\n tasks.}\n\t\\label{fig:perclass_r50}\n\\end{figure}\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/perclass_densenet121.pdf}\n\t\\caption{Per-class source and target accuracies for DenseNet-121 on \\textsc{Breeds}{}\n tasks.}\n\t\\label{fig:perclass_d121}\n\\end{figure}\n\n\\clearpage\n\\subsubsection{Effect of different splits}\n\\label{app:res_splits}\n\\label{app:goodbad}\n\nAs described in Section~\\ref{sec:breeds}, to create \\textsc{Breeds}{} tasks, we first identify \na set of relevant superclasses (at the chosen depth in the hierarchy), and then \npartition their subpopulations between the source and target domains. For all the \ntasks listed in Table~\\ref{tab:benchmarks}, the superclasses are balanced---each of \nthem comprises the same number of subpopulations. To ensure this is the case, for \neach superclass, the desired number of subpopulations is chosen at random from among \nits subpopulations. These subpopulations are then randomly split between the source and target \ndomains.\n\nInstead of randomly partitioning the subpopulations (of a given superclass) between the \ntwo domains, we could craft partitions to be more\/less adversarial as \nillustrated in Figure~\\ref{fig:splits_diag}. Specifically, we could control how similar \nthe subpopulations in the target domain are to those in the source domain. For \ninstance, a split would be less adversarial (\\emph{good}) if subpopulations in the \nsource and target domain share a common parent. On the other hand, we could make \na split more adversarial (\\emph{bad}) by ensuring a greater degree of separation (in \nterms of distance in the hierarchy) between the source and target domain \nsubpopulations.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{Figures\/gen\/splits.pdf}\n\t\\caption{Different ways to partition the subpopulations of a given superclass into \n\t\tthe source and target domains. 
Depending on how closely related the \n\t\tsubpopulations in the two domains are, we can construct splits that are more\/less \n\t\tadversarial.}\n\t\\label{fig:splits_diag}\n\\end{figure}\n\n We now evaluate model performance under such variations in the nature of the splits \n themselves---see Figure~\\ref{fig:all_splits}. \nAs expected, models perform comparably well on test data from the source domain, \nindependent of how the subpopulations are partitioned into the two domains. \nHowever, model robustness to subpopulation \nshift varies considerably based on the nature of the split---it is lowest for the most \nadversarially chosen split. \nFinally, we observe that retraining the linear layer \non data from the target domain recovers a considerable fraction of the accuracy drop \nin all cases---indicating that even for the more adversarial splits, models do learn \nfeatures that transfer well to unknown subpopulations. \n\n\\begin{figure}[!h]\n\t\\begin{subfigure}{1.0\\textwidth}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/ENTITY-13_splits.pdf}\n\t\\caption{\\textsc{Entity-13}{} task}\n\t\\label{fig:all3_splits}\n\t\\end{subfigure}\n\t\\begin{subfigure}{1.0\\textwidth}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/ENTITY-30_splits.pdf}\n\t\\caption{\\textsc{Entity-30}{} task}\n\t\\label{fig:all4_splits}\n\t\\end{subfigure}\n\t\\caption{Model robustness as a function of the nature of subpopulation shift within \n\tspecific \\textsc{Breeds}{} tasks. We vary how the underlying \n\tsubpopulations of each superclass are split between the source and target \n\tdomain---we \n\tcompare random splits (used in the majority of our analysis) to ones that are more \n\t(\\emph{bad})\n\tor less (\\emph{good}) adversarial.\n\tWhen models are tested on samples from the source domain, they perform equally \n\twell across different splits, as one might expect.\n\tHowever, under subpopulation shift (i.e., on samples from the target domain), \n\tmodel robustness varies drastically, and is considerably worse when the split is \n\tmore adversarial.\n\tYet, for all the splits, models have comparable target accuracy \n after retraining their final layer.\n}\n\\label{fig:all_splits}\n\\end{figure}\n\n\n\\clearpage\n\\subsubsection{Robustness Interventions}\n\\label{app:res_int}\nIn Tables~\\ref{tab:adv_app} and~\\ref{tab:other_rob_app}, we present the raw \naccuracies of models trained using various train-time robustness interventions.\n\\begin{table}[!h]\n \\setlength{\\tabcolsep}{1.5em}\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.05}\n\t\\begin{tabular}{llccc}\n\t\t\\toprule\n\t\t\\multicolumn{5}{c}{ResNet-18} \\\\\n\t\t\\midrule\n \\multirow{2}{*}{Task} & \\multirow{2}{*}{$\\varepsilon$} & \\multicolumn{3}{c}{Accuracy (\\%)} \\\\\n & & Source & Target & Target-RT \\\\\n\t\t\\midrule\n \\multirow{3}{*}{\\textsc{Entity-13}} & 0 & \\textbf{90.91 $\\pm$ 0.73} &\n \\textbf{61.52 $\\pm$ 1.23} & \\textbf{76.71 $\\pm$ 1.09} \\\\\n & 0.5 & 89.23 $\\pm$ 0.80 & \\textbf{61.10 $\\pm$ 1.23} & 74.92 $\\pm$ 1.04 \\\\\n & 1.0 & 88.45 $\\pm$ 0.81 & 58.53 $\\pm$ 1.26 & 73.35 $\\pm$ 1.11 \\\\\n\t\t\\midrule\n \\multirow{3}{*}{\\textsc{Entity-30}} & 0 & \\textbf{87.88 $\\pm$ 0.89} &\n \\textbf{49.96 $\\pm$ 1.31} & \\textbf{73.05 $\\pm$ 1.17}\n \\\\\n & 0.5 & 85.68 $\\pm$ 0.91 & \\textbf{48.93 $\\pm$ 1.34} & 71.34 $\\pm$ 1.14 \n\\\\\n & 1.0 & 84.23 $\\pm$ 0.91 & 47.66 $\\pm$ 1.23 & 70.27 $\\pm$ 1.17 \\\\\n\t\t\\midrule\n \\multirow{3}{*}{\\textsc{Living-17}} & 0 & 
\\textbf{92.01 $\\pm$ 1.30} & \n \\textbf{58.21 $\\pm$ 2.32} & \\textbf{83.38 $\\pm$ 1.79} \\\\\n & 0.5 & 90.35 $\\pm$ 1.35 & 55.79 $\\pm$ 2.44 & \\textbf{83.00 $\\pm$ 1.89} \\\\\n & 1.0 & 88.56 $\\pm$ 1.50 & 53.89 $\\pm$ 2.36 & 80.90 $\\pm$ 1.92 \\\\\n\t\t\\midrule\n \\multirow{3}{*}{\\textsc{Non-living-26}} & 0 & \\textbf{88.09 $\\pm$ 1.28} & \n \\textbf{41.87 $\\pm$ 2.01} & \\textbf{73.52 $\\pm$ 1.71} \\\\\n & 0.5 & 86.28 $\\pm$ 1.32 & \\textbf{41.02 $\\pm$ 1.91} &\n \\textbf{72.41 $\\pm$ 1.71} \\\\\n & 1.0 & 85.19 $\\pm$ 1.38 & \\textbf{40.23 $\\pm$ 1.92} & 70.61 $\\pm$ \n1.73 \\\\\n\t\t\\bottomrule \n\t\\end{tabular}\n\t\\begin{tabular}{llccc}\n\t\t\\multicolumn{5}{c}{} \\\\\n\t\\toprule\n\t\t\\multicolumn{5}{c}{ResNet-50} \\\\\n\t\t\\midrule\n \\multirow{2}{*}{Task} & \\multirow{2}{*}{$\\varepsilon$} & \\multicolumn{3}{c}{Accuracy (\\%)} \\\\\n & & Source & Target & Target-RT \\\\\n\t\\midrule\n \\multirow{3}{*}{\\textsc{Entity-13}} & 0 & \\textbf{91.54 $\\pm$ 0.64} &\n \\textbf{62.48 $\\pm$ 1.16} & \\textbf{79.32 $\\pm$ 1.01 } \\\\\n & 0.5 & 89.87 $\\pm$ 0.80 & \\textbf{63.01 $\\pm$ 1.15} & \n \\textbf{80.14 $\\pm$ 1.00} \\\\\n & 1.0 & 89.71 $\\pm$ 0.74 & 61.21 $\\pm$ 1.22 & 78.58 $\\pm$ 0.98 \\\\\n\t\\midrule\n \\multirow{3}{*}{\\textsc{Entity-30}} & 0 & \\textbf{89.26 $\\pm$ 0.78} &\n \\textbf{51.18 $\\pm$ 1.24} & 77.60 $\\pm$ 1.17 \\\\\n\n & 0.5 & 87.51 $\\pm$ 0.88 & \\textbf{50.72 $\\pm$ 1.28} & \n \\textbf{78.92 $\\pm$ 1.06} \\\\\n & 1.0 & 86.63 $\\pm$ 0.88 & \\textbf{50.99 $\\pm$ 1.27} & \n \\textbf{78.63 $\\pm$ 1.03} \\\\\n\t\\midrule\n \\multirow{3}{*}{\\textsc{Living-17}} & 0 & \\textbf{92.40 $\\pm$ 1.28} &\n \\textbf{58.22 $\\pm$ 2.42} & \\textbf{85.96 $\\pm$ 1.72} \\\\\n & 0.5 & 90.79 $\\pm$ 1.55 & \\textbf{55.97 $\\pm$ 2.38} & \n \\textbf{87.22 $\\pm$ 1.66} \\\\\n & 1.0 & 89.64 $\\pm$ 1.47 & 54.64 $\\pm$ 2.48 & \\textbf{85.63 $\\pm$ 1.73} \\\\\n\t\\midrule\n \\multirow{3}{*}{\\textsc{Non-living-26}} & 0 & \\textbf{88.13 $\\pm$ 1.30} & \n \\textbf{41.82 $\\pm$ 1.86} & 76.58 $\\pm$ 1.69 \\\\\n & 0.5 & \\textbf{88.20 $\\pm$ 1.20} & \\textbf{42.57 $\\pm$ 2.03} &\n \\textbf{78.84 $\\pm$ 1.62} \\\\\n & 1.0 & 86.17 $\\pm$ 1.36 & \\textbf{41.69 $\\pm$ 1.96} & 76.16 $\\pm$ \n1.61 \\\\\n\t\\bottomrule\n\\end{tabular}\n\\vspace{1em}\n\t\\caption{Effect of adversarial training on model robustness to subpopulation \n\tshift. All models are trained on samples from the source domain---either \n\tusing standard \n\ttraining ($\\varepsilon=0.0$) or using adversarial training. Models are then \n\tevaluated in terms of: (a) source accuracy, (b) target accuracy and (c) target \n\taccuracy after retraining the linear layer of the model with data from the \n target domain. Confidence intervals (95\\%) obtained via bootstrapping. 
Maximum\n task accuracy over $\\varepsilon$ (taking into account confidence interval) shown in bold.}\n\t\\label{tab:adv_app}\n\\end{table}\n\n\n\\begin{table}[!h]\n\t\\centering\n\t\\renewcommand{\\arraystretch}{1.05}\n\t\\begin{tabular}{llccc}\n\t\t\\multicolumn{5}{c}{} \\\\\n\t\\toprule\n\t\t\\multicolumn{5}{c}{ResNet-18} \\\\\n\t\t\\midrule\n \\multirow{2}{*}{Task} & \\multirow{2}{*}{Intervention} & \\multicolumn{3}{c}{Accuracy (\\%)} \\\\\n & & Source & Target & Target-RT \\\\\n\t\t\\midrule\n \\multirow{4}{*}{\\textsc{Entity-13}} & Standard & \n \\textbf{90.91 $\\pm$ 0.73} & \\textbf{61.52 $\\pm$ 1.23 } &\n 76.71 $\\pm$ 1.09 \\\\\n & Erase Noise& \\textbf{91.01 $\\pm$ 0.68} & \\textbf{62.79 $\\pm$ 1.27}\n & \\textbf{78.10 $\\pm$ 1.09} \\\\\n & Gaussian Noise & 77.00 $\\pm$ 1.04 & 47.90 $\\pm$ 1.21 & 70.37 \n$\\pm$ 1.17 \\\\\n& Stylized ImageNet & 76.85 $\\pm$ 1.00 & 50.18 $\\pm$ 1.21 & 65.91 \n$\\pm$ \n1.17 \\\\\n\t\\midrule\n \\multirow{4}{*}{\\textsc{Entity-30}} & Standard & \\textbf{87.88 $\\pm$ 0.89} & \n \\textbf{49.96 $\\pm$ 1.31 } & 73.05 $\\pm$ 1.17 \\\\\n & Erase Noise & \\textbf{88.09 $\\pm$ 0.80} & \\textbf{49.98 $\\pm$ 1.31}\n & \\textbf{74.27 $\\pm$ 1.15} \\\\\n& Gaussian Noise& 74.12 $\\pm$ 1.16 & 35.79 $\\pm$ 1.21 & 65.62 \n$\\pm$ 1.28 \\\\\n & Stylized ImageNet & 70.96 $\\pm$ 1.16 & 37.67 $\\pm$ 1.21 & 60.45 \n$\\pm$ 1.22 \\\\\n\t\\midrule\n \\multirow{4}{*}{\\textsc{Living-17}} & Standard & \\textbf{92.01 $\\pm$ 1.30} &\n \\textbf{58.21 $\\pm$ 2.32} & \\textbf{83.38 $\\pm$ 1.79} \\\\\n & Erase Noise & \\textbf{93.09 $\\pm$ 1.27} & \\textbf{59.60 $\\pm$ 2.40}\n & \\textbf{85.12 $\\pm$ 1.71} \\\\\n& Gaussian Noise & 80.13 $\\pm$ 1.99 & 46.16 $\\pm$ 2.57 & 77.31 \n$\\pm$ \n2.08 \\\\\n & Stylized ImageNet & 79.21 $\\pm$ 1.85 & 43.96 $\\pm$ 2.38 & 72.74 \n$\\pm$ \n2.09 \\\\\n\t\\midrule\n\\multirow{4}{*}{\\textsc{Non-living-26}} & \n Standard & \\textbf{88.09 $\\pm$ 1.28} & \\textbf{41.87 $\\pm$ 2.01 }\n & \\textbf{73.52 $\\pm$ 1.71} \\\\\n & Erase Noise & \\textbf{88.68 $\\pm$ 1.18} & \\textbf{43.17 $\\pm$ 2.10}\n & \\textbf{73.91 $\\pm$ 1.78} \\\\\n& Gaussian Noise & 78.14 $\\pm$ 1.60 & 35.13 $\\pm$ 1.94 & 67.79 \n$\\pm$ 1.79 \\\\\n & Stylized ImageNet & 71.43 $\\pm$ 1.73 & 30.56 $\\pm$ 1.75 & 61.83 \n$\\pm$ 1.98 \\\\\n\t\t\\bottomrule \n\t\\end{tabular}\n\t\\begin{tabular}{llccc}\n\t\t\\multicolumn{5}{c}{} \\\\\n\t\\toprule\n\t\t\\multicolumn{5}{c}{ResNet-34} \\\\\n\t\t\\midrule\n \\multirow{2}{*}{Task} & \\multirow{2}{*}{Intervention} & \\multicolumn{3}{c}{Accuracy (\\%)} \\\\\n & & Source & Target & Target-RT \\\\\n\t\t\\midrule\n \\multirow{4}{*}{\\textsc{Entity-13}} & Standard & \\textbf{91.75 $\\pm$ 0.70}\n & \\textbf{ 63.45 $\\pm$ 1.13 } & \\textbf{ 78.07 $\\pm$ 1.02} \\\\\n & Erase Noise & \\textbf{91.76 $\\pm$ 0.70}\n & \\textbf{62.71 $\\pm$ 1.25} & \\textbf{77.43 $\\pm$ 1.06} \\\\\n & Gaussian Noise & 81.60 $\\pm$ 0.97 & 50.69 $\\pm$ 1.28 & 71.50 \n$\\pm$ 1.13 \\\\\n& Stylized ImageNet & 78.66 $\\pm$ 0.94 & 51.05 $\\pm$ 1.30 & 67.38 \n$\\pm$ 1.16 \\\\\n\\midrule\n \\multirow{4}{*}{\\textsc{Entity-30}} & Standard & \\textbf{88.81 $\\pm$ 0.81} & \n \\textbf{51.68 $\\pm$ 1.28 } & \\textbf{75.12 $\\pm$ 1.11} \\\\\n & Erase Noise & \\textbf{89.07 $\\pm$ 0.82} & \\textbf{51.04 $\\pm$\n 1.27} & \\textbf{74.88 $\\pm$ 1.08} \\\\\n& Gaussian Noise & 75.05 $\\pm$ 1.11 & 38.31 $\\pm$ 1.26 & 67.47 \n$\\pm$ 1.22 \\\\\n & Stylized ImageNet & 72.51 $\\pm$ 1.10 & 38.98 $\\pm$ 1.22 & 61.65 \n$\\pm$ 1.25 \\\\\n\\midrule\n \\multirow{4}{*}{\\textsc{Living-17}} & Standard & 
\\textbf{92.83 $\\pm$ 1.19}\n & 59.74 $\\pm$ 2.27 & \\textbf{85.46 $\\pm$ 1.83} \\\\\n & Erase Noise & \\textbf{92.96 $\\pm$ 1.32} & \\textbf{61.13 $\\pm$\n 2.30} & \\textbf{85.66 $\\pm$ 1.78} \\\\\n& Gaussian Noise & 84.06 $\\pm$ 1.71 & 48.38 $\\pm$ 2.44 & 78.79 \n$\\pm$ 1.91 \\\\\n & Stylized ImageNet & 80.94 $\\pm$ 2.00 & 44.16 $\\pm$ 2.43 & 72.77 \n$\\pm$ 2.18 \\\\\n\\midrule\n \\multirow{4}{*}{\\textsc{Non-living-26}} & Standard & \\textbf{89.64 $\\pm$ 1.17}\n & \\textbf{43.03 $\\pm$ 1.99 } & \\textbf{74.99 $\\pm$ 1.66} \\\\\n & Erase Noise & \\textbf{89.62 $\\pm$ 1.31} & \\textbf{43.53 $\\pm$ \n 1.89} & \\textbf{75.04 \n $\\pm$ 1.70} \\\\\n & Gaussian Noise & 79.26 $\\pm$ 1.61 & 34.89 $\\pm$ 1.91 & 68.07 \n $\\pm$ 1.78\n \\\\\n & Stylized ImageNet& 71.49 $\\pm$ 1.65 & 31.10 $\\pm$ 1.80 & 62.94 \n$\\pm$ 1.90 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{1em}\n\t\\caption{Effect of various train-time interventions on model robustness to \n\tsubpopulation \n\tshift. All models are trained on samples from the source domain. Models \n\tare then \n\tevaluated in terms of: (a) source accuracy, (b) target accuracy, and (c) target \n\taccuracy after retraining the linear layer of the model with data from the \n target domain. Confidence intervals (95\\%) obtained via bootstrapping. Maximum\n task accuracy over interventions (taking into account confidence interval) shown in bold.}\n\t\\label{tab:other_rob_app}\n\\end{table}\n\n\\section{The \\textsc{Breeds}{} Methodology}\n\\label{sec:breeds}\nIn this work, we focus on modeling a pertinent, yet relatively less studied, \nform of subpopulation shift: one wherein the target distribution (used for \ntesting) contains\nsubpopulations that are \\emph{entirely} absent from the source distribution \nthat the model was trained on.\nTo simulate such a shift, one needs to precisely control the data\nsubpopulations that comprise the source and target data distributions.\nOur procedure for doing this comprises two stages that are outlined\nbelow---see Figure~\\ref{fig:breeds} for an illustration.\n\n\\paragraph{Devising subpopulation structure.}\nTypical datasets do not contain \nannotations for individual subpopulations.\nSince collecting such annotations would be challenging, we take an alternative\napproach: we bootstrap the existing dataset labels to simulate \nsubpopulations.\nThat is, we group semantically similar classes into broader\nsuperclasses, which, in turn, allows us to re-purpose existing class labels as \nthe desired \nsubpopulation annotations.\nIn fact, we can group classes in a hierarchical manner, obtaining superclasses\nof different specificity.\nAs we will see in Section~\\ref{sec:hierarchy}, large-scale benchmarks often\nprovide class hierarchies~\\citep{everingham2010pascal,deng2009imagenet,\nkuznetsova2018open} that aid such semantic grouping.\n\n\\paragraph{Simulating subpopulation shifts.}\nGiven a set of superclasses, we can define a classification task over them:\nthe inputs of each superclass correspond to pooling together the inputs\nof its subclasses (i.e., the original dataset classes).\nWithin this setup, we can simulate subpopulation shift \nin a relatively straightforward manner.\nSpecifically, for each superclass, we split its subclasses into two\n\\emph{random} and \\emph{disjoint} sets, and assign one of them to the source\nand the other to the target domain.\nThen, we can evaluate model robustness under subpopulation shift \nby simply training on the source domain and testing on the target domain.\nNote
that the classification task remains identical between\ndomains---both domains contain the same (super)classes but the subpopulations\nthat comprise each (super)class differ.\n\\footnote{Note that this methodology can be extended to simulate milder\nsubpopulation shifts where the source and target distributions overlap but the\nrelative subpopulation frequencies vary, similar to the setting of\n\\citet{oren2019distributionally}.}\nIntuitively, this corresponds to using different dog breeds to represent the\nclass ``dog'' during training and testing---hence the name of our toolkit. \n\\newline\n\n\\begin{figure}[!t]\n\t\\centering\n \\includegraphics[width=0.9\\textwidth]{Figures\/gen\/pipeline.pdf}\n \\caption{Illustration of our pipeline to create subpopulation shift\n benchmarks. Given a dataset, we first define superclasses by \n grouping semantically similar classes together to form a hierarchy. This allows \n us to treat the dataset labels as subpopulation annotations. Then, we \n construct a \\textsc{Breeds}{} task of specified \n granularity (i.e., depth in the hierarchy) by posing the \n classification task \n in terms of superclasses at that depth and then partitioning their \n respective \n subpopulations into \n the source and target domains.\n \t}\n \\label{fig:breeds}\n\\end{figure}\n\n\\noindent\nThis methodology is quite general and can be applied to a variety of\nsettings to simulate realistic distribution shifts. \nMoreover, it has a number of additional benefits:\n\\begin{itemize}\n \\item \\textbf{Flexibility:} Different semantic groupings of a fixed set of\n classes lead to \\textsc{Breeds}{} tasks of varying granularity.\n For instance, by only grouping together classes that are quite similar, one\n can reduce the severity of the subpopulation shift.\n Alternatively, one can consider broad superclasses, each having multiple\n subclasses, resulting in a more challenging benchmark.\n \\item \\textbf{Precise characterization:} The exact subpopulation shift\n between the source and target distribution is known.\n Since both domains are constructed from the same dataset, the impact\n of any external factors (e.g., differences in data collection pipelines) is\n minimized. 
(Note that such external factors can significantly impact the\n difficulty of the task~\\cite{ponce2006dataset,\n torralba2011unbiased,engstrom2020identifying,tsipras2020from}.)\n \\item \\textbf{Symmetry:} Since subpopulations are split into the\n source and target domains randomly, we expect the resulting tasks to have\n comparable difficulty.\n \\item \\textbf{Reuse of existing datasets:} No additional data collection or\n annotation is required other than choosing the class grouping.\n This approach can thus be used to also re-purpose other existing\n large-scale datasets---even outside the image recognition context---with\n minimal effort and cost.\n\\end{itemize}\n\n\\section{Conclusion}\nIn this work, we develop a methodology for constructing large-scale \nsubpopulation shift benchmarks.\nThe motivation behind our \\textsc{Breeds}{} benchmarks is to\ntest whether models can generalize beyond the limited diversity\nof their training datasets---specifically, to novel data subpopulations.\nA major advantage of our approach is its generality.\nIt can be applied to any dataset with a \nmeaningful class structure---including tasks beyond classification \n(e.g., object detection) and domains other than computer vision \n(e.g., natural language processing).\nMoreover, the subpopulation shifts are induced in a manner that is both\ncontrolled and natural, without altering inputs synthetically or needing to \ncollect new data.\n\nWe apply this approach to the ImageNet dataset to construct\nbenchmarks of varying difficulty.\nWe then demonstrate how these benchmarks can be \nused to assess model robustness and the efficacy of various train-time \ninterventions.\nFurther, we obtain human baselines for these tasks to both put\nmodel performance in context and validate that the corresponding \nsubpopulation\nshifts do not significantly affect humans.\n\n\nOverall, our results indicate that existing models still have a long way to go\nbefore they can fully tackle the \\textsc{Breeds}{} subpopulation shifts, even with \nrobustness interventions.\nWe thus believe that our methodology provides a useful framework for studying\nmodel robustness to distribution shift---an increasingly pertinent topic for\nreal-world deployments of machine learning models.\n\n\\section{Evaluating Model Performance under Subpopulation Shift}\n\\label{sec:eval}\nWe can now use our suite of \\textsc{Breeds}{} tasks as a testbed for assessing model\nrobustness to subpopulation shift as well as gauging the effectiveness of \nvarious\ntrain-time interventions. \nSpecifics of the evaluation setup and additional experimental results \nare provided in Appendices~\\ref{app:eval_setup}\nand~\\ref{app:res_eval}.\n\n\\subsection{Standard training}\nWe start by evaluating the performance of various model architectures trained\nin the standard fashion: empirical risk minimization (ERM) on the source\ndistribution (cf. Appendix~\\ref{app:models}).\nWhile these models perform well on unseen inputs from the domain they are\ntrained on, i.e., they achieve high \\emph{source accuracy}, they suffer a \nconsiderable\ndrop in accuracy under these subpopulation shifts---more than 30\\% in most cases\n(cf. Figure~\\ref{fig:core}).\nAt the same time, models that are more \\emph{accurate} on the \nsource domain also appear to be more \\emph{robust} to distribution shift.\nSpecifically, the fraction of source accuracy that is preserved in the target\ndomain is typically increasing with source accuracy. 
\n(If this were not the case, i.e., the accuracy of all models dropped by a \nconstant fraction under distribution shift, the target accuracy would match \nthe baseline in Figure~\\ref{fig:core}.)\nThis indicates that, while models are quite brittle to subpopulation shift,\nimprovements in source accuracy \\emph{do} correlate with models \ngeneralizing better to variations in testing conditions.\nNote that model accuracies are not directly comparable across benchmarks, \ndue to the presence of multiple confounding factors.\nOn the one hand, more fine-grained tasks present a smaller subpopulation\nshift (subclasses are semantically closer).\nOn the other hand, the number of classes and training inputs\nper class changes significantly, making the task harder.\n\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/eval\/drop_standard.pdf}\n\t\\caption{Robustness of standard models to \\textsc{Breeds}{} \n\t\tsubpopulation shifts. \n\t\tFor each of the four tasks, we plot the accuracy of different (source \n domain-trained) model architectures (denoted by different symbols) on\n the target domain as a function of the source accuracy (which is\n typically high).\n\t\tWe find that model accuracy drops significantly between domains\n\t\t(\\emph{orange} vs.\\ \\emph{dashed} line). Still, models\n\t\tthat are more accurate on the source domain seem to also be more \n\t\trobust\n\t\t(the improvements exceed the baseline \n\t\t(\\emph{grey}), which would correspond to a constant accuracy drop\n across models, i.e., $\\frac{\\textrm{source acc}}{\\textrm{target acc}} = \\textrm{constant}$, based on AlexNet).\n\t\tMoreover, the drop in model performance on the target domain can \n\t\tbe reduced by retraining\n\t\tthe final model layer with data from that domain (\\emph{green}). However, \n a non-trivial drop persists compared to both the original source\n accuracy and the target accuracy of models trained directly (end-to-end)\n on the target domain (\\emph{blue}).\n\t}\n\t\\label{fig:core}\n\\end{figure}\n\n\\paragraph{Models vs.\\ Humans.} We compare \nthe best-performing model (DenseNet-121 in this case) to our previously \nobtained human baselines in Figure~\\ref{fig:acc_human}. To allow for a fair \ncomparison, model accuracy is measured on pairwise superclass \nclassification tasks (cf. Appendix~\\ref{app:eval_setup}). We observe \nthat models do exceedingly well on unseen samples from the source \ndomain---significantly \noutperforming annotators under our task setup. At the same time, models \nalso appear to be more brittle, performing worse than humans on the target\ndomain of these binary \\textsc{Breeds}{} tasks, despite their higher source accuracy.\n\n\n\\paragraph{Adapting models to the target domain.}\nFinally, we focus on the intermediate data representations learned by these \nmodels, aiming to assess how suitable they are\nfor distinguishing classes in the target domain.\nTo assess this, we retrain the last (fully-connected) layer of models\ntrained on the source domain with data from the target domain. 
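\nAs an illustration, the following is a minimal PyTorch sketch of this last-layer retraining step, assuming a \\texttt{torchvision} ResNet-18 as in Table~\\ref{tab:models}; the checkpoint path, the loader \\texttt{target\\_train\\_loader}, and the constant \\texttt{NUM\\_SUPERCLASSES} are placeholders rather than part of our released code.\n\\begin{verbatim}\n# Sketch of last-layer retraining; the checkpoint path, the data loader\n# and NUM_SUPERCLASSES are placeholders.\nimport torch, torchvision\n\nNUM_SUPERCLASSES = 13  # e.g., for Entity-13\nmodel = torchvision.models.resnet18(num_classes=NUM_SUPERCLASSES)\nmodel.load_state_dict(torch.load('source_model.pt'))  # source-trained\n\nfor p in model.parameters():  # freeze the learned representation\n    p.requires_grad = False\nmodel.fc = torch.nn.Linear(model.fc.in_features, NUM_SUPERCLASSES)\n\nopt = torch.optim.SGD(model.fc.parameters(), lr=0.1)\nloss_fn = torch.nn.CrossEntropyLoss()\nfor x, y in target_train_loader:  # images and labels, target domain\n    opt.zero_grad()\n    loss_fn(model(x), y).backward()\n    opt.step()\n\\end{verbatim}\n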
\nWe find that the target accuracy of these models increases significantly after\nretraining, indicating that the learned representations indeed generalize to\nthe target domain.\nHowever, we cannot match the accuracy of models trained directly (end-to-end)\non the target domain---see Figure~\\ref{fig:core}---demonstrating that there is\nsignificant room for improvement.\n\n\n\\input{intervene}\n\n\n\\section{Simulating Subpopulation Shifts Within ImageNet}\n\\label{sec:hierarchy}\nWe now describe in more detail how our methodology can be applied to\nImageNet~\\citep{deng2009imagenet}---specifically, the ILSVRC2012\nsubset~\\citep{russakovsky2015imagenet}---to create a suite of \\textsc{Breeds}{}\nbenchmarks.\nImageNet contains a large number of classes, making it particularly\nwell-suited for our purpose.\n\n\\subsection{Utilizing the ImageNet class hierarchy}\nRecall that creating \\textsc{Breeds}{} tasks requires grouping together \nsimilar classes.\nIn the context of ImageNet, such a semantic grouping already \nexists---ImageNet classes\nare part of the WordNet hierarchy~\\citep{miller1995wordnet}.\nHowever, WordNet is not a hierarchy of objects but rather one of \nword\nmeanings.\nTherefore, intermediate hierarchy nodes are not always well-suited\nfor object recognition due to:\n\n\\begin{itemize}\n \\item \\textbf{Abstract groupings:} WordNet nodes often correspond to\n abstract concepts, e.g., related to the functionality of an object.\n Children of such nodes might thus share little visual\n similarity---e.g., ``umbrella'' and ``roof'' are visually different,\n despite both being ``coverings''.\n \\item \\textbf{Non-uniform categorization:} The granularity of object\n categorization is vastly different across the WordNet hierarchy---e.g.,\n the subtree rooted at ``dog'' is 25 times larger than the one rooted at \n ``cat''. 
\n Hence, the depth of a node in this hierarchy does not always reflect\n the specificity of the corresponding object category.\n \\item \\textbf{Lack of tree structure:} Nodes in WordNet can have\n multiple parents\\footnote{In programming languages, this is known as ``the\n diamond problem'' or ``the Deadly Diamond of\n Death''~\\citep{martin1997java}.} and thus the resulting classification\n task would contain overlapping classes, making it inherently ambiguous.\n\\end{itemize}\n\n\\noindent\nDue to these issues, we cannot directly use WordNet to identify \nsuperclasses that correspond to a well-calibrated classification task.\nTo illustrate this, we present some of the superclasses\nconstructed by applying clustering algorithms directly to the WordNet \nhierarchy~\\cite{huh2016makes} in\nAppendix Table~\\ref{tab:problems}.\nEven putting the issue of overlapping classes aside, a \\textsc{Breeds}{} task based on\nthese superclasses would induce a very skewed subpopulation shift across\nclasses---e.g., varying the types of ``bread'' is very different than\ndoing the same for different ``mammal'' species.\n\n\\paragraph{Calibrating WordNet for Visual Object Recognition.}\nTo better align the WordNet hierarchy with the task of object\nrecognition in general, and \\textsc{Breeds}{} benchmarks in particular, we manually \nmodify it \naccording to the following two principles.\nFirst, nodes should be grouped together based on their visual characteristics,\nrather than abstract relationships like functionality---e.g., we eliminate\nnodes that do not convey visual information such as ``covering''.\nSecond, nodes of similar specificity should be at the same distance from the\nroot, irrespective of how detailed their categorization within WordNet is---for\ninstance, we place ``dog'' at the same level as ``cat'' and ``flower'', even\nthough the ``dog'' sub-tree in WordNet is much larger.\nFinally, we remove a number of ImageNet classes that do not naturally fit into\nthe hierarchy.\nThe resulting hierarchy, presented in Appendix~\\ref{app:manual}, contains \nnodes of comparable granularity at the same level.\nMoreover, as a result of this process, each node ends up having a single \nparent\nand thus the resulting hierarchy is a tree.\n\n\\subsection{Creating \\textsc{Breeds}{} tasks}\n\\label{sec:tasks}\nOnce the modified version of the WordNet hierarchy is in place, \\textsc{Breeds}{} \ntasks can be\ncreated in an automated manner.\nSpecifically, we first choose the desired granularity of the task by specifying \nthe distance from the root (``entity'') and retrieving all superclasses at \nthat distance in a top-down manner.\nEach resulting superclass corresponds to a subtree of our hierarchy, with\nImageNet classes as its leaves.\nNote that these superclasses are roughly of the same specificity, due to\nour hierarchy restructuring process.\nThen, we randomly sample a fixed number of subclasses for each \nsuperclass to produce a balanced dataset (omitting superclasses with an\ninsufficient number of subclasses).\nFinally, as described in Section~\\ref{sec:breeds}, we randomly split these\nsubclasses into the source and target domains.\\footnote{We also consider more benign or adversarial subpopulation\nsplits for these tasks in Appendix~\\ref{app:goodbad}.}\n\nFor our analysis, we create four tasks, presented in \nTable~\\ref{tab:benchmarks}, \nbased on different levels\/parts of the hierarchy.\nTo illustrate what the corresponding subpopulation shifts look like, we also \npresent (random) 
image samples for a subset of \nthe tasks in Figure~\\ref{fig:samples}.\nNote that while we focus on the tasks in Table~\\ref{tab:benchmarks} in our \nstudy, our methodology readily enables us to create other \nvariants of these tasks in an automated manner.\n\n\n\\begin{table}[!ht]\n\t\\centering\n\t\\begin{tabular}{lcccr}\n\t\t\\toprule\n\t\t\\textbf{Name} & \\textbf{Subtree} & \\textbf{Level} & \n\t\t\\textbf{Subpopulations} &\n\t\t\\textbf{Examples} \\\\\n\t\t\\midrule\n\t\t\\textsc{Entity-13} & ``entity'' (root) & 3 & 20 & ``mammal'', \n\t\t``appliance'' \\\\\n\t\t\\textsc{Entity-30} & ``entity'' (root) & 4 & 8 & ``fruit'', ``carnivore''\\\\\n\t\t\\textsc{Living-17} & ``living thing'' & 5 & 4 & ``ape'', ``bear'' \\\\\n\t\t\\textsc{Non-living-26} & ``non-living thing'' & 5 & 4 & ``fence'', ``ball''\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\vspace{1em}\n\t\\caption{\\textsc{Breeds}{} benchmarks constructed using ImageNet.\n\t``Level'' indicates the depth of the \n superclasses in the class hierarchy (task granularity). The number of\n\t``subpopulations'' (per superclass) is fixed across \n\tsuperclasses to ensure a balanced \n\tdataset.\n\tWe can also construct specialized tasks, by focusing on subtrees in \n\tthe hierarchy, e.g., only living (\\textsc{Living-17}{}) \n\tor non-living (\\textsc{Non-living-26}{}) objects. \n\tDatasets are named based on the root of the subtree and the resulting number\n of superclasses they end up containing.}\n\t\\label{tab:benchmarks}\n\\end{table}\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/gen\/samples.pdf}\n\t\\caption{Sample images from random object categories for the \\textsc{Entity-13}{} \n and \\textsc{Living-17}{} tasks. For each task, the top and bottom row\n correspond to the source and target distributions respectively.}\n\t\\label{fig:samples}\n\\end{figure}\n\n\\paragraph{\\textsc{Breeds}{} benchmarks beyond ImageNet.} \nIt is worth noting that the methodology we described is not restricted to ImageNet and\ncan be readily applied to other datasets as well.\nThe only requirement is that we have access to a semantic grouping of the\ndataset classes, which is the case for many popular \nvision datasets---e.g., \nCIFAR-100~\\cite{krizhevsky2009learning}, \nPascal-VOC~\\cite{everingham2010pascal}, \nOpenImages~\\cite{kuznetsova2018open}, \nCOCO-Stuff~\\cite{caesar2018cocostuff}.\nMoreover, even when a class hierarchy is entirely\nabsent, the needed semantic class grouping can be manually\nconstructed with relatively little effort (proportional \nto the number of classes, not the number of datapoints).\n\n\\input{human}\n\n\\subsection{Calibrating \\textsc{Breeds}{} benchmarks via human studies}\n\\label{sec:humans}\nFor a distribution shift benchmark to be meaningful, it is essential that the source \nand target domains capture the same high-level task---otherwise generalizing\nfrom one domain to the other would be impossible.\nTo ensure that this is the case for the \\textsc{Breeds}{} tasks, we assess how\nsignificant the resulting distribution shifts are for human annotators\n(crowd-sourced via MTurk).\n\n\\paragraph{Annotator task.} \nTo obtain meaningful performance estimates, it is crucial that\nannotators perform the task based \\emph{only on the visual content of the\nimages}, without leveraging prior knowledge of the visual world.\nTo achieve this, we design the following annotation task.\nFirst, annotators are shown images from the source domain, grouped by\nsuperclass, without being 
aware of the superclass name (i.e., the object \ngrouping it corresponds to).\nThen, they are presented with images from the target domain and are asked to\nassign each of them to one of the groups.\nFor simplicity, we only present two random superclasses at a time, effectively\nsimulating binary classification.\nAnnotator accuracy can be measured directly as the fraction of images that they\nassign to the superclass to which these images belong.\nWe perform this experiment for each of the \\textsc{Breeds}{} tasks constructed in\nSection~\\ref{sec:tasks}.\nAs a point of comparison, we repeat this experiment without subpopulation \nshift\n(test images are sampled from the source domain) and for the \nsuperclasses\nconstructed by~\\citet{huh2016makes} using the WordNet hierarchy directly \n(cf.\nAppendix~\\ref{app:mturk}).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{Figures\/eval\/acc_comparison.pdf}\n \\caption{Human performance on (binary) \\textsc{Breeds}{} tasks.\n\t\tAnnotators are provided with labeled \n images from the source\n\t\tdistribution for a \\emph{pair of (undisclosed) superclasses}, and asked \n\t\tto classify\n\t samples from the target domain (`T') into one of the two groups. \n\t\tAs a baseline we also measure annotator performance without \n\t\tsubpopulation shift (i.e., on test images drawn from the source \n domain, `S') and equivalent tasks created via the original WordNet\n hierarchy (cf. Appendix~\\ref{app:mturk}).\n\t\tWe can observe that across all tasks, annotators are fairly robust to \n\t\tsubpopulation shift. \n\t\tFurther, annotators consistently perform better on \\textsc{Breeds}{} tasks \n\t\tcompared to those based on WordNet directly---indicating that our \n\t\tmodified class hierarchy is indeed better calibrated for object \n\t\trecognition.\n\t\t(We discuss model performance in Section~\\ref{sec:eval}.)\n\t}\n\t\\label{fig:acc_human}\n\\end{figure}\n\n\n\\paragraph{Human performance.} \nWe find that, across all tasks, annotators perform well on unseen data from the\nsource domain, as expected.\nMore importantly, annotators also appear to be quite robust to subpopulation shift, \nexperiencing only a small accuracy drop between the source and target \ndomains (cf. Figure~\\ref{fig:acc_human}).\nThis is particularly prominent in the case of \\textsc{Entity-30}{} and \\textsc{Living-17}{} where the\ndifference in source and target accuracy is within the confidence interval.\nThis indicates that the source and target domains are indeed perceptually \nsimilar for humans, making these benchmarks suitable for studying model \nrobustness.\nFinally, across all benchmarks, we observe that annotators perform better on \n\\textsc{Breeds}{} tasks, as compared to their WordNet equivalents---even on \nsamples from the source domain.\nThis indicates that our modified ImageNet class hierarchy is indeed better \naligned with the underlying visual object recognition task.\n\n\\section*{Broader Impact}\nRobustness to testing conditions is a prerequisite for safe and reliable\ndeployment of machine learning models in the real world. 
\nIn this work, we expand the current model evaluation toolkit by providing\nbenchmarks for assessing robustness to subpopulation shifts.\nThis allows us to test if models generalize beyond the limited diversity of\ntheir training datasets.\n\nOn the positive side, this toolkit serves as a simple testbed that can\npinpoint ways in which our models fail to generalize.\nThis provides us with an opportunity to improve and debug these models \nbefore\ndeployment, hence avoiding catastrophic failures in real-world scenarios.\nOn the negative side, well-defined benchmarks can provide users with a false\nsense of security.\nA model that performs well on a range of benchmarks is likely to be trusted \nand deployed in the real world, despite having blind spots that these\nbenchmarks do not capture.\nIt is thus crucial to interpret benchmark progress properly, using it as a\nguide, without trusting it blindly.\n\n\n\\subsection{Robustness interventions}\n\\label{sec:intervene}\nWe now turn our attention to existing methods for decreasing \nmodel sensitivity to specific synthetic perturbations.\nOur goal is to assess if these methods enhance model robustness to\nsubpopulation shift too.\nConcretely, we consider the following families of \ninterventions (cf. Appendix~\\ref{app:robustness} for details):\n\\begin{itemize}\n \\item \\textbf{Adversarial training}: Enhances robustness to worst-case\n $\\ell_p$-bounded perturbations (in our case $\\ell_2$) by training models \n against a projected gradient descent (PGD) adversary~\\citep{madry2018towards}.\n \\item \\textbf{Stylized Training}: Encourages models to rely more on shape\n than on texture by training them on a stylized version of \n ImageNet~\\cite{geirhos2018imagenettrained}.\n \\item \\textbf{Random noise}: Improves model robustness to data \n corruptions by incorporating them as data augmentations during \n training---we focus on Gaussian noise and Erase\n noise~\\citep{zhong2020random}, i.e., randomly obfuscating a block of the\n image.\n\\end{itemize}\n\n\\noindent\nNote that these methods can be viewed as ways of imposing a prior on the\nfeatures that the model relies on~\\cite{heinze2017conditional,\n geirhos2018imagenettrained, engstrom2019learning}.\nThat is, by rendering certain features ineffective during training (e.g.,\ntexture), they incentivize the model to utilize alternative features\nfor its predictions (e.g., shape).\nSince different families of features may correlate differently with class labels\nin the target domain, the aforementioned interventions could significantly\nimpact model robustness to subpopulation shift.\n\n\\paragraph{Relative accuracy.}\nTo measure the impact of these interventions, we will focus on the\nmodels' \\emph{relative accuracy}---the ratio of target accuracy to source\naccuracy.\nThis metric accounts for the fact that train-time interventions can impact model\naccuracy on the source domain itself.\nBy measuring relative performance, we are able to compare different training\nmethods on an equal footing. 
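\nFor concreteness, the sketch below shows one way to compute relative accuracy together with a bootstrap confidence interval; the arrays \\texttt{src\\_correct} and \\texttt{tgt\\_correct} are hypothetical per-example 0\/1 correctness indicators, not part of our released code.\n\\begin{verbatim}\n# Sketch: relative accuracy with a bootstrap confidence interval.\n# src_correct and tgt_correct are placeholder 0\/1 arrays of per-example\n# correctness on the source and target test sets.\nimport numpy as np\n\ndef relative_accuracy(src_correct, tgt_correct):\n    return tgt_correct.mean() \/ src_correct.mean()\n\ndef bootstrap_ci(src_correct, tgt_correct, n_boot=1000, alpha=0.05):\n    rng = np.random.default_rng(0)\n    stats = []\n    for _ in range(n_boot):\n        s = rng.choice(src_correct, size=len(src_correct), replace=True)\n        t = rng.choice(tgt_correct, size=len(tgt_correct), replace=True)\n        stats.append(t.mean() \/ s.mean())\n    return np.quantile(stats, [alpha \/ 2, 1 - alpha \/ 2])\n\\end{verbatim}\n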
\n\nWe find that robustness interventions \\emph{do} have a small,\nyet non-trivial, impact on the robustness of a particular model architecture to\nsubpopulation shift---see Figure~\\ref{fig:intervene}.\nSpecifically, for the case of adversarial training and erase noise, models often\nretain a larger fraction of their accuracy on the target domain compared to\nstandard training, hence lying on the Pareto frontier of a robustness-accuracy\ntrade-off.\nIn fact, for some of the models trained with these interventions, the \ntarget accuracy is slightly higher than that of models obtained via standard training,\neven without adjusting for their lower source accuracy (raw\naccuracies for all methods are in Appendix~\\ref{app:res_int}).\nNonetheless, it is important to note that none of these methods offers\nsignificant subpopulation robustness---relative accuracy is not\nimproved by more than a few percentage points.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/eval\/interventions_all_rel_bs.pdf}\n\t\\caption{Effect of train-time interventions on model robustness to \n\t\tsubpopulation shift. We measure model performance in terms of \n\t\t\\emph{relative \n\t\t\taccuracy}---i.e., the ratio between a model's target and source \n\t\taccuracies. \n\t\tThis allows us to visualize the accuracy-robustness trade-off along with \n\t\tthe\n\t\tcorresponding Pareto frontier (\\emph{dashed}).\n\t\t(Also shown are 95\\% confidence intervals computed via \n\t\tbootstrapping.)\n\t\tWe observe that some of these interventions do improve \n model robustness to subpopulation shift by a small\n amount---specifically, erase noise and adversarial training---albeit\n sometimes at the cost of source accuracy.\n\t}\n\t\\label{fig:intervene}\n\\end{figure}\n\\paragraph{Adapting models to the target domain.}\nThe impact of these interventions is more pronounced if we consider \nthe target accuracy of these models after their last layer has been retrained on data from the target \ndomain---see \nFigure~\\ref{fig:intervene_ft}.\nIn particular, we observe that for adversarially robust models, retraining\nsignificantly boosts accuracy on the target domain---e.g., in the case of\n\\textsc{Living-17}{} it is almost comparable to the initial accuracy on the source domain.\nThis indicates that the feature priors imposed by these interventions\nincentivize models to learn representations that generalize better to\nsimilar domains---in line with recent results\nof~\\citet{utrera2020adversarially,salman2020adversarially}.\nMoreover, we observe that models trained on the stylized version of these\ndatasets perform consistently worse, suggesting that texture might be an\nimportant feature for these tasks, especially in the presence of subpopulation\nshift.\nFinally, note that we did not perform an exhaustive exploration of the\nhyper-parameters used for these interventions (e.g., $\\ell_2$-norm)---it is\npossible that these results can be improved by additional tuning.\nFor instance, we would expect that we can tune the magnitude of the Gaussian\nnoise to achieve performance that is comparable to that of $\\ell_2$-bounded\nadversarial training~\\citep{ford2019adversarial}.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/eval\/interventions_all_rel_ft_bs.pdf}\n\t\\caption{Target accuracy of models after\n\t\tthey have been retrained (only the final linear layer) on data from the \n\t\ttarget domain (with 95\\% bootstrap confidence intervals).\n\t\tModels trained 
with robustness interventions often\n\t have higher target accuracy than standard models after retraining.\n }\n\t\\label{fig:intervene_ft}\n\\end{figure}\n\n\\section{Introduction}\n\\label{sec:intro}\nRobustness to distribution shift has been the focus of a long line of\nwork in machine learning~\\citep{schlimmer1986beyond,widmer1993effective,\nkelly1999impact,shimodaira2000improving,sugiyama2007covariate,\nquionero2009dataset,moreno2012unifying,sugiyama2012machine}.\nAt a high level, the goal is to ensure that models perform well not only on \nunseen\nsamples from the datasets they are trained on, but also on the diverse set of\ninputs they are likely to encounter in the real world.\nHowever, building benchmarks for evaluating such robustness is \nchallenging---it requires modeling realistic data variations in a way that is \nwell-defined, controllable, and easy to simulate.\n\nPrior work in this context has focused on building benchmarks that capture \ndistribution shifts caused by\nnatural or adversarial input\ncorruptions~\\cite{szegedy2014intriguing,fawzi2015manitest,fawzi2016robustness,\n engstrom2019rotation,ford2019adversarial,hendrycks2019benchmarking,\n kang2019testing},\ndifferences in data sources~\\cite{saenko2010adapting,torralba2011unbiased,\n khosla2012undoing,tommasi2014testbed,recht2018imagenet},\nand changes in the frequencies of data \nsubpopulations~\\cite{oren2019distributionally,sagawa2019distributionally}.\nWhile each of these approaches captures a different source of real-world\ndistribution shift, we cannot expect any single benchmark to be \ncomprehensive.\nThus, to obtain a holistic understanding of model robustness, we need \nto keep expanding our testbed to encompass more natural modes of variation.\nIn this work, we take another step in that direction by studying the following\nquestion:\n\n\\begin{center}\n\t\\emph{How well do models generalize to data subpopulations they have \n\tnot seen during training?}\n\\end{center}\n\n\\noindent\nThe notion of \\emph{subpopulation shift} this question refers to is quite\npervasive.\nAfter all, our training datasets will inevitably fail to perfectly \ncapture the diversity of the real world.\nHence, during deployment, our models are bound to encounter unseen \nsubpopulations---for instance, unexpected weather conditions in the \nself-driving car context or different diagnostic setups in medical applications.\n\n\\subsection*{Our contributions}\nThe goal of our work is to create large-scale subpopulation shift benchmarks\nwherein the data subpopulations present during model training and evaluation\ndiffer.\nThese benchmarks aim to assess how effectively models generalize beyond the\nlimited diversity of their training datasets---e.g., whether models can\nrecognize Dalmatians as ``dogs'' even when their training data for ``dogs''\n comprises only Poodles and Terriers.\nWe show how one can simulate such shifts, fairly naturally, \\emph{within}\nexisting datasets, hence eliminating the need for (and the potential biases\nintroduced by) crafting synthetic transformations or collecting additional data.\n\n\\paragraph{\\textsc{Breeds}{} benchmarks.}\nThe crux of our approach is to leverage existing dataset labels and use them to\nidentify \\emph{superclasses}---i.e., groups of semantically similar classes.\nThis allows us to construct classification tasks over such superclasses, and\nrepurpose the original dataset classes to be the subpopulations of interest.\nThis, in turn, enables us to induce a subpopulation shift by 
directly making\nthe subpopulations present in the training and test distributions \ndisjoint.\nBy applying this methodology to the ImageNet \ndataset~\\citep{deng2009imagenet}, we create a suite of subpopulation shift \nbenchmarks of \nvarying difficulty.\nThis involves modifying the existing ImageNet class\nhierarchy---WordNet~\\citep{miller1995wordnet}---to ensure that \nsuperclasses comprise visually coherent subpopulations.\nWe then conduct human studies to validate that the resulting \\textsc{Breeds}{}\nbenchmarks indeed capture meaningful subpopulation shifts.\n\n\\paragraph{Model robustness to subpopulation shift.} \nIn order to demonstrate the utility of our benchmarks, we employ them \nto\nevaluate the robustness of standard models to subpopulation\nshift.\nIn general, we find that model performance drops significantly on the shifted\ndistribution---even when this shift does not significantly affect humans.\nStill, models that are more accurate on the original distribution tend to also \nbe more robust to these subpopulation shifts.\nMoreover, adapting models to the shifted domain, by\nretraining their last layer on data from this domain, only partially recovers the \noriginal \nmodel\nperformance.\n\n\\paragraph{Impact of robustness interventions.}\nFinally, we examine whether various train-time interventions, designed to\ndecrease model sensitivity to synthetic data corruptions (e.g., $\\ell_2$-bounded\nperturbations), make models more robust to subpopulation shift.\nWe find that many of these methods offer small, yet non-trivial, \nimprovements to\nmodel robustness along this axis---at times, at the expense of performance on the\noriginal distribution.\nOften, these improvements become more pronounced after\nretraining the last layer of the model on the shifted distribution.\nIn the context of adversarial training, our\nfindings are in line with recent work showing that the\nresulting robust models \noften exhibit improved robustness to other data\ncorruptions~\\citep{ford2019adversarial,kang2019testing,taori2020measuring}, and transfer\nbetter to downstream\ntasks~\\citep{utrera2020adversarially,salman2020adversarially}.\nNonetheless, none of these interventions significantly alleviate\nmodel sensitivity to subpopulation shift, indicating that the \\textsc{Breeds}{} \nbenchmarks pose a challenge to current methods.\n\n\n\n\\section{Designing Benchmarks for Distribution Shift}\n\\label{sec:prior}\nWhen constructing distribution shift benchmarks, the key design choice lies in \nspecifying the \\emph{target distribution} to be used during model evaluation.\nThis distribution is meant to be a realistic variation of the \n\\emph{source distribution} that was used for training.\nTypically, studies focus on variations due to:\n\\begin{itemize}\n \\item \\emph{Data corruptions}: The target distribution is obtained by\n modifying inputs from the source distribution via a family of\n transformations that mimic real-world corruptions.\n Examples include natural or \n adversarial forms of noise~\\cite{fawzi2015manitest,fawzi2016robustness,\n engstrom2019rotation,hendrycks2019benchmarking,ford2019adversarial,kang2019testing,\n shankar2019image}.\n \\item \\emph{Differences in data sources}: Here, the target distribution is an\n independently collected dataset for the same \n task~\\cite{saenko2010adapting,torralba2011unbiased,tommasi2014testbed,\n beery2018recognition,recht2018imagenet}---for\n instance, using PASCAL VOC~\\cite{everingham2010pascal} to evaluate\n ImageNet-trained 
classifiers~\\cite{russakovsky2015imagenet}. The goal is to\n test whether models are overly reliant on the idiosyncrasies of the datasets\n they are trained\n on~\\cite{ponce2006dataset,torralba2011unbiased}.\n \\item \\emph{Subpopulation shifts}: The source and target distributions \n differ in terms of how well-represented each subpopulation is.\n Work in this area typically studies whether models perform \n \\emph{equally well} across\n all subpopulations from the perspective of\n reliability~\\cite{meinshausen2015maximin, hu2016does,duchi2018learning,\n caldas2018leaf,oren2019distributionally,sagawa2019distributionally}\n or algorithmic \n fairness~\\citep{dwork2012fairness,kleinberg2017inherent,\n \tjurgens2017incorporating, \n \tbuolamwini2018gender,hashimoto2018fairness}.\n\\end{itemize} \n\nIn general, a major challenge lies in ensuring that the distribution\nshift between the source and target distributions (also referred to as \n\\emph{domains}) is caused \nsolely by the \nintended input variations.\nExternal factors---which may arise when crafting synthetic\ntransformations or collecting new \ndata---could skew the \ntarget distribution in different ways, making it hard to gauge model \nrobustness to the exact distribution shift of interest.\nFor instance, recent work~\\citep{engstrom2020identifying} demonstrates that\ncollecting a new dataset while aiming to match an existing one along a specific\nmetric (e.g., as in \\citet{recht2018imagenet}) might result in a miscalibrated\ndataset due to statistical bias.\nIn our study, we aim to limit such external influences by simulating\nshifts within existing datasets, thus avoiding any input modifications.\n\n\\section{Additional Related Work}\nIn Section~\\ref{sec:prior}, we discuss prior work that has directly focused on \nevaluating model robustness to distribution shift. 
We now provide an \noverview of other related work and its connections to our methodology.\n\n\\paragraph{Distributional robustness.}\nDistribution shifts that are small with respect to some $f$-divergence have been\nstudied in prior theoretical work~\\citep{ben2013robust,duchi2016statistics,\nesfahani2018data,namkoong2016stochastic}.\nHowever, this notion of robustness is typically too pessimistic to capture\nrealistic data variations~\\cite{hu2016does}.\nDistributional robustness has also been connected to\ncausality~\\citep{meinshausen2018causality}:\nhere, the typical approach is to inject spurious correlations into the dataset, \nand assess to what extent models rely on them for \ntheir predictions~\\citep{heinze2017conditional,\narjovsky2019invariant,sagawa2019distributionally}.\n\n\\paragraph{Domain adaptation and transfer learning.} \nThe goal here is to adapt models to the target domain with relatively few\nsamples from it~\\citep{ben2007analysis, saenko2010adapting,ganin2014unsupervised,\ncourty2016optimal,gong2016domain,donahue2014decaf,razavian2014cnn}.\nIn domain adaptation, the task is the same in both domains,\nwhile in transfer learning, the task itself could vary.\nIn a similar vein, the field of \\emph{domain generalization} aims to generalize\nto samples from a different domain (e.g., from ClipArt to photos) by training on\na number of explicitly annotated\ndomains~\\citep{muandet2013domain,li2017deeper,peng2019moment}.\n\n\\paragraph{Zero-shot learning.}\nWork in this domain focuses on learning to recognize previously unseen \nclasses~\\citep{lampert2009learning,xian2017zero}, typically described\nvia a semantic \nembedding~\\citep{lampert2009learning,mikolov2013distributed,socher2013zero,frome2013devise,romera2015embarrassingly}.\nThis differs from our setup, where the focus is on \ngeneralization to unseen subpopulations for the \\emph{same} set of classes.\n\n\\section{Experimental Setup}\n\n\\subsection{Dataset}\n\\label{app:datasets}\nWe perform our analysis on the ILSVRC2012\ndataset~\\citep{russakovsky2015imagenet}. This dataset contains a thousand\nclasses from the ImageNet dataset~\\cite{deng2009imagenet} with an independently\ncollected validation set. \nThe classes are part of the broader hierarchy, WordNet~\\citep{miller1995wordnet}, \nthrough which\nwords are organized based on their semantic meaning.\nWe use this hierarchy as a starting point of our investigation but modify it\nas described in Appendix~\\ref{app:manual}.\n\nFor all the \\textsc{Breeds}{} superclass classification tasks, the train and validation\nsets are obtained by aggregating the train and validation sets of the \ndescendant ImageNet classes (i.e., subpopulations). \nSpecifically, for a given subpopulation, the training and test splits from the \noriginal ImageNet dataset are used as is.\n\n\n\n\\subsection{WordNet issues}\n\\label{app:wordnet}\n\nAs discussed in Section~\\ref{sec:hierarchy}, WordNet is a semantic rather than a\nvisual hierarchy. That is, object classes are arranged based on their meaning\nrather than their visual appearance. Thus, using intermediate nodes for a\nvisual object recognition task is not straightforward. 
To illustrate this, we\nexamine a sample superclass grouping created by~\\citet{huh2016makes} via\nautomated bottom-up clustering in Table~\\ref{tab:problems}.\n\n\\begin{table}[htp]\n \\input{tables\/table_other_36}\n \\vspace{1em}\n \\caption{Superclasses constructed by~\\citet{huh2016makes} via\n bottom-up clustering of WordNet to obtain 36 superclasses---for\n brevity, we only show superclasses with at least 20 ImageNet classes each.}\n \\label{tab:problems}\n\\end{table}\n\nFirst, we notice that these superclasses have vastly different\ngranularities.\nFor instance, ``organism'' contains the entire animal kingdom, hence being much\nbroader than ``produce''.\nMoreover, ``covering'' is a rather abstract class, and hence its subclasses often\nshare little visual similarity (e.g., ``window shade'', ``pajama'').\nFinally, due to the abstract nature of these superclasses, a large number of\nsubclasses overlap---``covering'' and ``commodity'' share 49 ImageNet \ndescendants.\n\n\\clearpage\n\\subsection{Manual calibration}\n\\label{app:method}\nIn order to allow for efficient and automated creation of superclasses that are\nsuitable for visual recognition, we modified the WordNet hierarchy by applying\nthe following operations:\n\\begin{itemize}\n \\item \\emph{Collapse node}: Delete a node from the hierarchy and add edges\n from each parent to each child. Allows us to remove redundant or overly\n specific categorization while preserving the overall structure.\n \\item \\emph{Insert node above}: Add a dummy parent to push a node further\n down the hierarchy. Allows us to ensure that nodes of similar granularity are at \n the same level.\n \\item \\emph{Delete node}: Remove a node and all of its edges. Used to\n remove abstract nodes that do not reveal visual characteristics.\n \\item \\emph{Add edge}: Connect a node to a parent. 
Used to reassign the\n children of nodes deleted by the operation above.\n\\end{itemize}\nWe manually examined the hierarchy and implemented these actions in order to\nproduce superclasses that are calibrated for classification.\nThe principles we followed are outlined in Section~\\ref{sec:hierarchy} while\nthe full hierarchy can be explored using the notebooks provided with our\ncode.\\footnote{\\url{https:\/\/github.com\/MadryLab\/BREEDS-Benchmarks}}\n\n\\subsection{Resulting hierarchy}\n\\label{app:manual}\n\nThe parameters for constructing the \\textsc{Breeds}{} benchmarks (hierarchy level,\nnumber of subclasses, and tree root) are given in Table~\\ref{tab:benchmarks}.\nThe resulting tasks---obtained by sampling disjoint ImageNet classes (i.e., \nsubpopulations) for the\nsource and target domain---are shown in\nTables~\\ref{tab:three},~\\ref{tab:four},~\\ref{tab:living},\nand~\\ref{tab:nonliving}.\nRecall that we randomly sample a fixed number of \nsubclasses per superclass to ensure that the dataset is approximately \nbalanced.\n\\clearpage\n\\input{tables\/table_3_n00001740}\n\n\\clearpage\n\\input{tables\/table_4_n00001740}\n\n\\clearpage\n\\input{tables\/table_5_n00004258}\n\n\\clearpage\n\\input{tables\/table_5_n00021939}\n\n\n\\clearpage\n\\subsection{Annotator task}\n\\label{app:mturk}\nAs described in Section~\\ref{sec:humans}, the goal of our human studies is to\nunderstand whether humans can classify images into superclasses even without\nknowing the semantic grouping.\nThus, the task involved showing annotators two groups of images, each sampled\nfrom the source domain of a random superclass.\nThen, annotators were shown a new set of images from the target domain (or the\nsource domain in the case of control) and were asked to assign each of them into\none of the two groups. A screenshot of a (random) instance of our annotator\ntask is shown in Figure~\\ref{fig:screenshot}.\n\nEach task contained 20 images from the source domain of each superclass and 12\nimages for annotators to classify (the images were rescaled and center-cropped\nto size $224\\times 224$ to match the input size used for model predictions).\nThe two superclasses were randomly permuted at load time.\nTo ensure good concentration of our accuracy estimates, for every superclass, \nwe \nperformed binary classification tasks w.r.t. 3 other (randomly chosen) superclasses.\nFurther, we assigned 3 annotators to each task, 
and annotators were compensated \\$0.15 \nper task.\n\n\\paragraph{Comparing with the original hierarchy.} In order to compare our\nsuperclasses with those obtained by \\citet{huh2016makes} via WordNet\nclustering,\\footnote{\\url{https:\/\/github.com\/minyoungg\/wmigftl\/tree\/master\/label_sets\/hierarchy}}\nwe need to define a correspondence between them.\nTo do so, for each of our tasks, we selected the clustering (either top-down or\nbottom-up) that had the closest number of superclasses.\nFollowing the terminology from that work, this mapping is: \\textsc{Entity-13}{} $\\to$\n\\textsc{DownUp-36}, \\textsc{Entity-30}{} $\\to$ \\textsc{UpDown-127}, \\textsc{Living-17}{} $\\to$\n\\textsc{DownUp-753} (restricted to ``living'' nodes), and \\textsc{Non-living-26}{} $\\to$\n\\textsc{DownUp-345} (restricted to ``non-living'' nodes).\n\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=0.78\\textwidth]{Figures\/task}\n \\caption{Sample MTurk annotation task to obtain human baselines for \\textsc{Breeds}{} \n benchmarks.}\n \\label{fig:screenshot}\n\\end{figure}\n\n\n\\clearpage\n\\subsection{Evaluating model performance}\n\\label{app:eval_setup}\n\n\\subsubsection{Model architectures and training}\n\\label{app:models}\nThe model architectures used in our analysis are listed in\nTable~\\ref{tab:models}, for which we used standard implementations\nfrom the PyTorch library \n(\\url{https:\/\/pytorch.org\/docs\/stable\/torchvision\/models.html}).\nFor training, we use a batch size of 128, weight decay of $10^{-4}$, \nand learning rates listed in Table~\\ref{tab:models}.\nModels were trained until convergence.\nOn \\textsc{Entity-13}{} and \\textsc{Entity-30}{}, this required a total of 300 epochs, \nwith 10-fold drops in learning rate every 100 epochs, while on\n\\textsc{Living-17}{} and \\textsc{Non-living-26}{}, models required a total of 450 epochs, with 10-fold learning rate\ndrops every 150 epochs.\nFor adapting models, we retrained the last\n(fully-connected) layer on the train split of the target domain, starting from\nthe parameters of the source-trained model.\nWe trained that layer using SGD with a batch size of 128 for\n40,000 steps and chose the best learning rate out of \n$[0.01, 0.1, 0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 8.0, 10.0, 11.0, 12.0]$, based\non test accuracy.\n\n\n\\begin{table}[!h]\n\\begin{center}\n\t\\begin{tabular}{lcc}\n\t\t\\toprule\n\t\t\\textbf{Model} & \\phantom{x} & \\textbf{Learning Rate} \\\\\n\t\t\\midrule\n\t\t\\texttt{alexnet} && 0.01 \\\\ \n\t\t\\texttt{vgg11} && 0.01 \\\\ \n\t\t\\texttt{resnet18} && 0.1 \\\\ \n\t\t\\texttt{resnet34} && 0.1 \\\\ \n\t\t\\texttt{resnet50} && 0.1 \\\\ \n\t\t\\texttt{densenet121} && 0.1 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{center}\n\t\\caption{Models used in our analysis.} \n\t\\label{tab:models}\n\\end{table}\n\n\\subsubsection{Model pairwise accuracy}\n\\label{app:model_pairwise}\nIn order to make a fair comparison between the performance of models and human \nannotators on the \\textsc{Breeds}{} tasks, we evaluate model accuracy on\npairs of superclasses. On images from a given pair,\nwe determine the model prediction to be the superclass for \nwhich the model's predicted probability is higher. A prediction is deemed correct if it \nmatches the superclass label for the image. 
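\nAs an illustration, a minimal sketch of this pairwise evaluation for a single pair of superclasses is given below; the array names are hypothetical placeholders, not part of our released code.\n\\begin{verbatim}\n# Sketch: binary superclass accuracy from full model probabilities.\n# probs is a placeholder (N, C) array of predicted superclass\n# probabilities; labels holds the true superclass index of each image.\nimport numpy as np\n\ndef pairwise_accuracy(probs, labels, a, b):\n    mask = (labels == a) | (labels == b)  # keep images from the pair\n    p, y = probs[mask], labels[mask]\n    pred = np.where(p[:, a] > p[:, b], a, b)  # higher probability wins\n    return (pred == y).mean()\n\\end{verbatim}\n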
Repeating this process over random pairs \nof superclasses allows us to estimate model accuracy on the average-case binary\nclassification task.\n\n\n\\subsubsection{Robustness interventions}\n\\label{app:robustness}\nFor model training, we use the hyperparameters provided in \nAppendix~\\ref{app:models}.\nAdditional intervention-specific hyperparameters are listed in Appendix \nTable~\\ref{tab:ri_hyperparams}. Due to computational \nconstraints, we trained a restricted set of model architectures with robustness \ninterventions---ResNet-18 and ResNet-50 for adversarial training, and ResNet-18 \nand ResNet-34 for all others.\nAdversarial training was implemented using the \\texttt{robustness}\nlibrary,\\footnote{\\url{https:\/\/github.com\/MadryLab\/robustness}} while random\nerasing was implemented via the PyTorch\n\\texttt{transforms} module.\\footnote{\\url{https:\/\/pytorch.org\/docs\/stable\/torchvision\/transforms.html}}\n\n\\begin{table}[!h]\n\t\\begin{center}\n\t\\begin{minipage}{0.3\\textwidth}\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\toprule\n\t\t\t\\textbf{Eps} & \\textbf{Step size} & \\textbf{\\#Steps} \\\\\n\t\t\t\\midrule\n 0.5 & 0.4 & 3 \\\\ \n\t\t\t1 & 0.8 & 3 \\\\ \n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\n \\caption*{(a) PGD-training~\\citep{madry2018towards}}\n\\end{minipage}\t\n\\hfil\n\\begin{minipage}{0.2\\textwidth}\n\t\t\t\\begin{tabular}{cc}\n\t\t\\toprule\n\t\t\\textbf{Mean} &\t\\textbf{StdDev} \\\\\n\t\t\\midrule\n 0 & 0.2 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption*{(b) Gaussian noise} \n\\end{minipage}\t\n\\hfil\n\\begin{minipage}{0.4\\textwidth}\n\t\t\t\\begin{tabular}{ccc}\n\t\t\\toprule\n \\textbf{Probability} &\t\\textbf{Scale} & \\textbf{Ratio} \\\\\n\t\t\\midrule\n 0.5 & 0.02--0.33 & 0.3--3.3 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\caption*{(c) Random erasing} \n\\end{minipage}\t\n\t\\end{center}\n\t\\caption{Additional hyperparameters for robustness interventions.} \n\\label{tab:ri_hyperparams}\n\\end{table}\n\n\\section{Introduction}\n \nOne of the main challenges in statistics is the design of a \\textit{universal} estimation procedure. Given data, a universal procedure is an algorithm that provides an estimator of the generating distribution which is simultaneously statistically consistent when the true distribution belongs to the model, and robust otherwise. Typically, a universal estimator is consistent for any model, with minimax-optimal or fast rates of convergence, and is robust to small departures from the model assumptions \\cite{BickelRobust1976} such as sparse instead of dense effects or non-Gaussian errors in high dimensional linear regression. Unfortunately, most statistical procedures are based upon strong assumptions on the model or on the corresponding parameter set, and widely used estimation methods such as maximum likelihood estimation (MLE), method of moments or Bayesian posterior inference may fail even on simple problems when such assumptions do not hold. 
For instance, even though MLE is consistent and asymptotically normal with optimal rates of convergence in parametric estimation under suitable regularity assumptions \\cite{LeCamMLE70,VanderVaartMLE98} and in nonparametric estimation under entropy conditions, this method behaves poorly in case of misspecification, when the true generating distribution of the data does not belong to the chosen model.\n\nLet us investigate a simple example presented in \\cite{Birge2006ModelSelectionTesting} that illustrates the non-universal nature of MLE. We observe a collection of $n$ independent and identically distributed (i.i.d) random variables $X_1,...,X_n$ that are distributed according to the mixture distribution $P^0_n = (1-2n^{-1})\\mathcal{U}([0,1\/10]) + 2n^{-1} \\mathcal{U}([1\/10,9\/10])$ where $\\mathcal{U}([a,b])$ is the uniform distribution between $a$ and $b$. We consider the parametric model of independent uniform distributions $\\mathcal{U}([0,\\theta])$, $0 \\leq \\theta < 1$, and we choose the squared Hellinger distance $h^2(\\cdot,\\cdot)$ as the risk measure. Here the maximum likelihood estimator is the maximum of the observations $X_{(n)}:=\\max(X_1,...,X_n)$, and $\\mathcal{U}([0,1\/10])$ is a good approximation of the generating distribution $P^0_n$ as $h^2(P^0_n,\\mathcal{U}([0,1\/10])) < 5\/(4n)$ for $n\\geq4$. Hence, one would expect that $\\mathbb{E}[h^2(P^0_n,\\mathcal{U}([0,X_{(n)}]))]$ goes to $0$ as $n\\rightarrow +\\infty$, which is actually not the case. We do not even have consistency: $\\mathbb{E}[h^2(P^0_n,\\mathcal{U}([0,X_{(n)}]))]>0.38$. Hence, the MLE is not robust to this small deviation from the parametric assumption.\nThe same happens in Bayesian statistics: the regular posterior distribution is not always robust to model misspecification. Indeed, the authors of \\cite{Barronetal1999,grunwaldmisspecifiation} show pathological cases where the posterior does not concentrate around the true distribution.\n\nUniversal estimation is all the more important since it provides a generic approach to tackle the increasingly popular problem of robustness to outliers under the i.i.d assumption, although the definitions and goals involved in robust statistics are quite different from the universal estimation perspective. Huber introduced a framework that models situations where a small fraction $\\varepsilon$ of data is contaminated, and he assumes that the true generating distribution can be written $(1-\\varepsilon)P_{\\theta_0}+\\varepsilon Q$ where $Q$ is the contaminating distribution and $\\varepsilon$ is the proportion of corrupted observations \\cite{H\\\"uberRobustness}. The goal when using this approach is to estimate the true parameter $\\theta_0$ given a misspecified model $\\{P_{\\theta}\/\\theta\\in\\Theta\\}$ with $\\theta_0 \\in \\Theta$. A procedure is then said to be robust in this case if it leads to a good estimation of the true parameter $\\theta_0$. More generally, when a procedure is able to provide a good estimate of the generating distribution of i.i.d data when a small proportion of them is corrupted, whatever the values of these outliers, then such an estimator is considered robust.\n\nInterestingly enough, none of the aforementioned works questioned the independence assumption on the observations. We believe that a universal estimation procedure should still produce sensible estimates under small deviations from this assumption.\n\n\\subsection{Related work}\n\nSeveral authors attempted to design a general universal estimation method. 
Sture Holm \\cite{BickelRobust1976} suggested that Minimum Distance Estimators (MDE) were the most natural procedures to achieve robustness to misspecification. Motivated by \\cite{WolfowitzMDE1957,Parr80}, MDE consists of minimizing some probability distance $d$ between the empirical distribution and a distribution in the model. The MDE $\\hat{\\theta}_n$ is defined by:\n$$\nd(\\hat{P}_n,P_{\\hat{\\theta}_n}) = \\inf_{\\theta \\in \\Theta} d(\\hat{P}_n,P_{\\theta})\n$$\nwhere $\\hat{P}_n=n^{-1}\\sum_{i=1}^n\\delta_{\\{X_i\\}}$ is the empirical measure and $\\Theta$ the parameter set associated with the model. If the minimum does not exist, then one can consider an $\\varepsilon$-approximate solution. In fact, this minimum distance principle underlies many standard procedures. Indeed, the generalized method of moments \\cite{Hansen1982} is actually defined as minimizing the weighted Euclidean distance between moments of $\\hat{P}_n$ and $P_{\\theta}$, while the MLE minimizes the KL divergence, at least for discrete measures. When the distance $d$ is chosen wisely, e.g., when it is bounded, the MDE can be robust and consistent.\n\nA popular metric is the Total Variation (TV) distance \\cite{Yatracos1985,DevroyeL1}. \\cite{Yatracos1985} built an estimator that is uniformly consistent in TV distance and is robust to misspecification under the i.i.d assumption, but without any assumption on the true distribution of the data. The rate of convergence depends on the Kolmogorov entropy of the model. A few decades later, Devroye and Lugosi studied in detail the skeleton estimate, a variant of the estimator of \\cite{Yatracos1985} that is based on the TV-distance restricted to the so-called Yatracos sets, see \\cite{DevroyeL1}. Unfortunately, the skeleton estimate and the original Yatracos estimate are not computationally tractable.\n\nIn \\cite{BaraudBirge2016RhoEstimators} and \\cite{BaraudBirgeSart2017RhoEstimators}, Baraud, Birg\\'e and Sart introduced $\\rho$-estimation, a universal method that retains some appealing properties of the MLE such as efficiency under some regularity assumptions, while being robust to deviations, measured by the Hellinger distance. This $\\rho$-estimation procedure is inspired by T-estimation \\cite{Birge2006ModelSelectionTesting}, itself inspired by earlier works of Le Cam \\cite{LeCam73,LeCam75} and Birg\\'e \\cite{Birge83}, and goes beyond the classical compactness assumption used in T-estimation. In compact models, $\\rho$-estimators can be seen as variants of T-estimators also based on robust tests, but they can be extended to noncompact models such as linear regression with fixed or random design with various error distributions. Like T-estimators, they enjoy robustness properties, but involve other metric dimensions which lead to optimal rates of convergence with respect to the Hellinger distance even in cases where T-estimators cannot be defined. Moreover, when the sample size is large enough, $\\rho$-estimation recovers the usual MLE in density estimation when the model is parametric, well-specified and regular enough. Hence, $\\rho$-estimation can be seen as a robust version of the MLE. Unfortunately, this strategy is also intractable.\n\n\nMore recently, \\cite{Briol2019} showed that using the Maximum Mean Discrepancy (MMD) \\cite{Gretton2012} to build a minimum distance estimator leads to robust estimation in the i.i.d case, without any assumption on the model $\\{P_\\theta,\\theta\\in\\Theta\\}$. 
Moreover, this estimator is tractable as soon as the model is generative, that is, when one can sample efficiently from any $P_\\theta$. MMD, a metric based on embeddings of probability measures into a reproducing kernel Hilbert space, has been applied successfully in a wide range of problems such as kernel Bayesian inference \\cite{Song2011}, approximate Bayesian computation \\cite{Park2016}, two-sample \\cite{Gretton2012} and goodness-of-fit testing \\cite{Jit17}, and MMD GANs \\cite{Roy2015,li15} and autoencoders~\\cite{Zhao2017}, to name a few prominent examples. Such minimum MMD-based estimators are proven to be consistent, asymptotically normal and robust to model misspecification. The trade-off between statistical efficiency and robustness is made through the choice of the kernel. The authors investigated the geometry induced by the MMD on a finite-dimensional parameter space and introduced a (natural) gradient descent algorithm for efficient computation of the estimator. This algorithm is inspired by the stochastic gradient descent (SGD) used in the context of MMD GANs, where the usual discriminator is replaced with a two-sample test based on MMD \\cite{Roy2015}. These results were extended to the Bayesian framework by~\\cite{BECA2019}.\n\nFinally, a whole branch of probability and statistics studies limit theorems (LLN, CLT) under the assumption that the data are not exactly independent, but that, in some sense, the dependence between the observations is not strong. Since the seminal work of~\\cite{mixing0}, many mixing conditions, that is, restrictions on the dependence between observations, have been defined. These conditions lead to limit theorems useful to analyze the asymptotic behavior of estimators computed on time series~\\cite{mixing}. Nevertheless, checking mixing assumptions is difficult in practice, and many classes of processes that are of interest in statistics, such as elementary Markov chains, are sometimes not mixing. More recently, \\cite{DependenceDoukhan1999} proposed a new weak dependence condition for time series that is built on covariance-based coefficients, which are much easier to compute than mixing ones, and that is more general than mixing as it holds for most relevant classes of processes. We believe that it is important to study robust estimators in this setting, in order to check that they are also robust to small deviations from the independence assumption.\n\n\\subsection{Contributions}\n\nIn this paper, we further investigate the universality properties of minimum distance estimation based on the MMD distance \\cite{Briol2019}. Inspired by the related literature, our contributions in this paper are the following:\n\\begin{itemize}\n\\item We go beyond the classical i.i.d framework. Indeed, we prove that the estimator is robust to dependence between observations. To do so, we introduce a new dependence coefficient, expressed as a covariance in some reproducing kernel Hilbert space, which is very simple to use in practice.\n\\item We show that our oracle inequalities imply robust estimation under the i.i.d assumption in the Huber contamination model and in the case of adversarial contamination.\n\\item We propose a theoretical analysis of the SGD algorithm used to compute this estimator in \\cite{Briol2019} and \\cite{Roy2015} for some finite dimensional models. 
Thanks to this algorithm, we provide numerical simulations to illustrate our theoretical results.\n\\end{itemize}\n\n\n\n\nThe first result of this paper is a generalization bound in the non-i.i.d setting. It states that under a very general dependence assumption, the generalization error with respect to the MMD distance decreases as $n^{-1\/2}$ when $n\\rightarrow +\\infty$. This result extends the inequalities in \\cite{Briol2019} that are only available in the i.i.d framework, and is obtained using dependence concepts for stochastic processes. We introduce in this paper a new dependence coefficient in the wake of \\cite{DependenceDoukhan1999} which can be expressed as a covariance in some reproducing kernel Hilbert space associated with MMD. This coefficient can be easily computed in many situations, and it may be related to usual mixing coefficients such as the popular $\\beta$-mixing one. We show that a weak assumption on this new dependence coefficient can relax the i.i.d assumption of \\cite{Briol2019} and can lead to valid generalization bounds even in the dependent setting.\n\nRegarding robustness, we prove that our generalization bounds for the MMD estimator imply that this estimator is robust to the presence of outliers. Note that this includes Huber-type contamination, and adversarial contamination as well. In particular, we compare the rate of convergence of the MMD estimator to that of minimax estimators in the example of the estimation of the mean of a Gaussian.\n\nRegarding computational issues, we provide a Stochastic Gradient Descent (SGD) algorithm as in \\cite{Briol2019,Roy2015} involving a U-statistic approximation of the expectation in the formula of the MMD distance. We theoretically analyze this algorithm in parametric estimation using a convex parameter set. We also perform numerical simulations that illustrate the efficiency of our method, especially by testing the behavior of the algorithm in the presence of outliers.\n\nThe rest of the paper is organized as follows. Section \\ref{sec:notations} defines the MMD-based minimum distance estimator and our new dependence coefficient based on the kernel mean embedding. Section \\ref{sec:main-res} provides nonasymptotic bounds in the dependent and misspecified framework, with their implications in terms of robust parametric estimation. Section \\ref{sec:examples} illustrates the efficiency of our method in several different frameworks. We finally present an SGD algorithm with theoretical convergence guarantees in Section \\ref{sec:algo} and we perform numerical simulations in Section \\ref{sec:simu}. The proofs of the theorems of Section~\\ref{sec:main-res} are provided in Section \\ref{sec:proofs}. The supplementary material is dedicated to the remaining proofs.\n\n \\section{Background and definitions}\n\\label{sec:notations}\n \nIn this section, we first introduce some notations and present the statistical setting of the paper in Section 2.1. Then, we recall in Section 2.2 some theory on reproducing kernel Hilbert spaces (RKHS) and we define both the maximum mean discrepancy (MMD) and our minimum distance estimator based on the MMD. Finally, we introduce in Section 2.3 a new dependence coefficient expressed as a covariance in an RKHS.\n\n\\subsection{Statistical setting}\n\nWe shall consider a dependent setting throughout the paper. We observe in a measurable space $\\big( \\mathbb{X},\\mathcal{X} \\big)$ a collection of $n$ random variables $X_1$,...,$X_n$ generated from a stationary process. 
This implies that the $X_i$'s are identically distributed, and we will let $P^0$ denote their marginal distribution. Note that this includes, as a particular case, the setting where the $X_i$'s are i.i.d with generating distribution $P^0$. We introduce a statistical model $\\{ P_{\\theta}\/ \\theta \\in \\Theta \\}$ indexed by a parameter space $\\Theta$.\n\n\\subsection{Maximum Mean Discrepancy}\n\nWe consider a positive definite kernel function $k$, i.e., a symmetric function $k : \\mathbb{X} \\times \\mathbb{X} \\rightarrow \\mathbb{R}$ such that for any integer $n\\geq 1$, for any $x_1,...,x_n \\in \\mathbb{X}$ and for any $c_1,...,c_n \\in \\mathbb{R}$:\n$$\n\\sum_{i=1}^n \\sum_{j=1}^n c_i c_j k(x_i,x_j) \\geq 0.\n$$\nWe then consider the reproducing kernel Hilbert space (RKHS) $({\\mathcal{H}_{k}},\\langle\\cdot,\\cdot\\rangle_{\\mathcal{H}_{k}})$ associated with the kernel $k$, which satisfies the reproducing property $f(x)=\\langle f, k(x,\\cdot)\\rangle_{\\mathcal{H}_{k}}$ for any function $f \\in {\\mathcal{H}_{k}}$ and any $x \\in \\mathbb{X}$. From now on, we assume that the kernel is bounded by some positive constant, which will be taken equal to $1$ without loss of generality. That is, for any $x,y \\in \\mathbb{X}$, $|k(x,y)|\\leq1$. \n\n\nNow we introduce the notion of \\textit{kernel mean embedding}, a Hilbert space embedding of a probability measure that can be viewed as a generalization of the original feature map used in support vector machines and other kernel methods. Given a probability measure $P$, we define the mean embedding $\\mu_P \\in {\\mathcal{H}_{k}}$ as:\n$$\n\\mu_P(\\cdot) := \\mathbb{E}_{X\\sim P}[k(X,\\cdot)] \\in {\\mathcal{H}_{k}} .\n$$\nThe theoretical properties and the many applications of these embeddings have been well studied \\cite{Fuku2017}. In particular, the mean embedding $\\mu_P$ satisfies the relationship $\\mathbb{E}_{X\\sim P}[f(X)] = \\langle f, \\mu_P\\rangle_{\\mathcal{H}_{k}}$ for any function $f \\in {\\mathcal{H}_{k}}$, and induces a semi-metric \\footnote{ This means that $(P,Q) \\mapsto \\| \\mu_P - \\mu_Q \\|_{\\mathcal{H}_{k}}$ satisfies all the requirements of a metric, except that $\\| \\mu_P - \\mu_Q \\|_{\\mathcal{H}_{k}} = 0 $ only implies $\\mu_P=\\mu_Q$, which does not necessarily imply $P=Q$. } on measures called maximum mean discrepancy (MMD), defined for two measures $P$ and $Q$ as follows:\n$$\n\\mathbb{D}_k(P,Q) = \\| \\mu_P - \\mu_Q \\|_{\\mathcal{H}_{k}}\n$$\nor equivalently\n$$\n\\mathbb{D}_k^2(P,Q) = \\mathbb{E}_{X,X' \\sim P}[k(X,X')] - 2 \\mathbb{E}_{X\\sim P,Y\\sim Q}[k(X,Y)] + \\mathbb{E}_{Y,Y'\\sim Q}[k(Y,Y')] .\n$$\nA kernel $k$ is said to be characteristic if $P\\mapsto \\mu_P$ is injective. This ensures that $\\mathbb{D}_k$ is a metric, and not only a semi-metric. Subsection 3.3.1 of the thorough survey \\cite{Fuku2017} provides a wide range of conditions ensuring that $k$ is characteristic. They also provide many examples of characteristic kernels, see their Table 3.1. Among others, when $\\mathbb{X} \\subset \\mathbb{R}^d$ is equipped with the Euclidean norm $\\|\\cdot\\|$, the Gaussian kernel $k(x,y) = \\exp(-\\|x-y\\|^2\/\\gamma^2)$ and the Laplace kernel $k(x,y) = \\exp(-\\|x-y\\|\/\\gamma)$ are known to be characteristic. We mostly use these two kernels in our applications. From now on, we will assume that $k$ is characteristic.
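\n\nTo make the definition concrete, here is a minimal numerical sketch (in Python with NumPy; the function names are ours and do not come from any particular package) of the V-statistic estimate of $\\mathbb{D}_{k}^2(P,Q)$ computed from two samples with the Gaussian kernel:\n\\begin{verbatim}\nimport numpy as np\n\ndef gaussian_kernel(x, y, gamma):\n    # k(u, v) = exp(-||u - v||^2 \/ gamma^2), evaluated for all pairs\n    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)\n    return np.exp(-d2 \/ gamma ** 2)\n\ndef mmd2(x, y, gamma):\n    # V-statistic estimate of D_k^2(P, Q) from x ~ P and y ~ Q:\n    # E k(X, X') - 2 E k(X, Y) + E k(Y, Y')\n    return (gaussian_kernel(x, x, gamma).mean()\n            - 2.0 * gaussian_kernel(x, y, gamma).mean()\n            + gaussian_kernel(y, y, gamma).mean())\n\nrng = np.random.default_rng(0)\nx = rng.normal(0.0, 1.0, size=(500, 2))   # sample from P\ny = rng.normal(0.5, 1.0, size=(500, 2))   # sample from Q\nprint(mmd2(x, y, gamma=1.0))\n\\end{verbatim}\nReplacing the V-statistic by the corresponding U-statistic (removing the diagonal terms) gives an unbiased estimate of $\\mathbb{D}_k^2$.\n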
Note that there are many applications of the kernel mean embedding and MMD in statistics, such as two-sample testing \\cite{Gretton2012}, change-point detection \\cite{Arlot2012} and robust estimation \\cite{MONK19}; we also refer the reader to~\\cite{Vert2019} for a thorough introduction to the applications of kernels and MMD in computational biology.\n\nHere, we will focus on the estimation of parameters based on MMD. While this principle was used to train generative networks \\cite{Roy2015,li15}, it is only recently that it was studied as a general principle for estimation \\cite{Briol2019}. Following these papers, we define the MMD estimator $\\hat{\\theta}_n$ such that:\n$$\n\\mathbb{D}_k(P_{\\hat{\\theta}_n},\\hat{P}_n) = \\inf_{\\theta \\in \\Theta} \\mathbb{D}_k(P_{\\theta},\\hat{P}_n)\n$$\nwhere $\\hat{P}_n=(1\/n)\\sum_{i=1}^n \\delta_{X_i}$ is the empirical measure, i.e.:\n$$\n\\hat{\\theta}_n = \\underset{\\theta \\in \\Theta}{\\arg\\min} \\, \\bigg\\{\n\\mathbb{E}_{X,X' \\sim P_\\theta}[k(X,X')] - \\frac{2}{n} \\sum_{i=1}^n \\mathbb{E}_{X\\sim P_\\theta}[k(X,X_i)] \\bigg\\}.\n$$\nIt could be that there is no minimizer; see the discussion in Theorem 1, page 9, of~\\cite{Briol2019}. In this case, we can use an approximate minimizer. More precisely, for any $\\varepsilon>0$ we can always find a $\\hat{\\theta}_{n,\\varepsilon}$ such that:\n$$\n\\mathbb{D}_k(P_{\\hat{\\theta}_{n,\\varepsilon}},\\hat{P}_n) \\leq \\inf_{\\theta \\in \\Theta} \\mathbb{D}_k(P_{\\theta},\\hat{P}_n) + \\varepsilon.\n$$\nIn what follows, we will consider the case where the minimizer exists (that is, $\\varepsilon=0$), but when this is not the case, everything can be easily extended by considering $\\hat{\\theta}_{n,1\/n}$.\n\n\\subsection{Covariances in RKHS}\n\n\nIn this subsection, we introduce and discuss a new dependence coefficient based on the kernel mean embedding. This coefficient allows us to go beyond the i.i.d case in the study of the MMD estimator of \\cite{Briol2019}, and to show that it is actually robust to dependence.\n\n\\begin{dfn}\n \\label{dfn.varrho}\n We define, for any $t\\in\\mathbb{N}$,\n $$ \\varrho_t = \\left| \\mathbb{E}\\left< k(X_t,\\cdot)-\\mu_{P^0},k(X_0,\\cdot)-\\mu_{P^0} \\right>_{\\mathcal{H}_k} \\right| . $$\n\\end{dfn}\nIn the i.i.d case, note that $\\varrho_t = 0$ for any $t\\geq 1$. In general, the following assumption will ensure the consistency of our estimator:\n\\begin{asm}\n \\label{asm:our:mixing}\n There is a $\\Sigma < + \\infty$ such that, for any $n$, $\\sum_{t=1}^n \\varrho_t \\leq \\Sigma$.\n\\end{asm}\n\nOur mean embedding dependence coefficient may be seen as a covariance expressed in the RKHS $\\mathcal{H}_{k}$. We shall see throughout the paper that the kernel mean embedding coefficient $\\varrho_t$ can be easily computed in many situations (see also the sketch below), and that it is closely related to widely used mixing coefficients. In particular, we will show in Section 4.2 that our coefficient $\\varrho_t$ is upper-bounded by the popular $\\beta$-mixing coefficient. For the reader who would not be familiar with $\\beta$-mixing, we also show that any real-valued auto-regressive process $X_t = a X_{t-1} + \\varepsilon_t$ satisfies Assumption~\\ref{asm:our:mixing} as long as $|a|<1$, the $\\varepsilon_t$ are i.i.d and $\\mathbb{E}(|\\varepsilon_0|)<\\infty$. Also, we show that some special cases of such auto-regressive processes are not $\\beta$-mixing, which proves that Assumption~\\ref{asm:our:mixing} is more general than $\\beta$-mixing: an explicit example is given in Subsection~\\ref{subsec:ar}. Hence, Assumption \\ref{asm:our:mixing} may be referred to as a weak dependence condition, in the wake of the concept of weak dependence introduced in \\cite{DependenceDoukhan1999}. We will show in the next section that under Assumption \\ref{asm:our:mixing}, we can obtain a nonasymptotic generalization bound of the same order as in the i.i.d case.
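\n\nAs an illustration of this computability, here is a short sketch (ours; Python with NumPy) that simulates an AR(1) path and evaluates an empirical plug-in proxy for $\\varrho_t$, replacing $\\mu_{P^0}$ by the empirical mean embedding and using the reproducing property $\\left< k(x,\\cdot),k(y,\\cdot)\\right>_{\\mathcal{H}_k}=k(x,y)$:\n\\begin{verbatim}\nimport numpy as np\n\ndef ar1_path(n, a=0.8, seed=0):\n    # X_t = a X_{t-1} + eps_t with i.i.d standard normal innovations\n    rng = np.random.default_rng(seed)\n    x = np.zeros(n)\n    for t in range(1, n):\n        x[t] = a * x[t - 1] + rng.normal()\n    return x\n\ndef rho_hat(x, t, gamma=1.0):\n    # empirical proxy for rho_t, using <k(u,.), k(v,.)> = k(u, v)\n    K = np.exp(-(x[:, None] - x[None, :]) ** 2 \/ gamma ** 2)\n    row = K.mean(axis=1)   # <k(X_i,.), mu_hat>\n    tot = K.mean()         # <mu_hat, mu_hat>\n    i = np.arange(len(x) - t)\n    cov = K[i + t, i] - row[i + t] - row[i] + tot\n    return abs(cov.mean())\n\nx = ar1_path(2000)\nprint([round(rho_hat(x, t), 4) for t in (1, 2, 5, 10, 20)])\n\\end{verbatim}\nOn such a path, the estimated coefficients decay quickly with $t$, in line with the geometric decay one would expect for an AR(1) process with $|a|<1$.\n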
\n\n \\section{Nonasymptotic bounds in the dependent, misspecified case}\n\\label{sec:main-res}\n \nIn this section, we provide nonasymptotic generalization bounds in MMD distance for the minimum MMD estimator. In particular, we show in Subsection~\\ref{subsec:mmd} that under a weak dependence assumption, it is robust to both dependence and misspecification, and is consistent at the same $n^{-1\/2}$ rate as in the i.i.d case. We then give explicit bounds in the Huber contamination model and in a more general adversarial setting in Subsection~\\ref{subsec:robust}.\n\n\\subsection{Estimation with respect to the MMD distance}\n\\label{subsec:mmd}\n\n\nFirst, we begin with a theorem that gives an upper bound on the generalization error, i.e., the expectation of $\\mathbb{D}_k(P_{\\hat{\\theta}_n},P^0)$. The rate of convergence of this error is of order $n^{-1\/2}$, independently of the dimension of the parameter space $\\Theta$. Note that there is no assumption at all on the model $\\{P_{\\theta},\\theta\\in\\Theta\\}$ in this theorem.\n\\begin{thm}\n\\label{theorem:mmd:1}\n We have: $$ \\mathbb{E} \\left[ \\mathbb{D}_k\\left(P_{\\hat{\\theta}_n},P^0 \\right) \\right] \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left(P_{\\theta},P^0 \\right) + 2 \\sqrt{ \\frac{1+2\\sum_{t=1}^{n}\\varrho_t}{n}} . $$\n\\end{thm}\nAs a consequence, under Assumption~\\ref{asm:our:mixing}:\n $$ \\mathbb{E} \\left[ \\mathbb{D}_k\\left(P_{\\hat{\\theta}_n},P^0 \\right) \\right] \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left(P_{\\theta},P^0 \\right) + 2 \\sqrt{ \\frac{1+2\\Sigma }{n}} . $$\nWe recall that the proofs of the results in this section are deferred to Section~\\ref{sec:proofs}. It is also possible to provide a result that holds with large probability, as in \\cite{Briol2019,Roy2015}. Naturally, this requires stronger assumptions, and the conditions on the dependence become more intricate in this case. Here, we use a condition introduced in \\cite{McDoRIO,McDoRIO2} for generic metric spaces, which we adapt to the kernel embedding and to stationarity:\n\\begin{asm}\n\\label{asm:gamma:mixing}\nAssume that there is a family $(\\gamma_{\\ell})_\\ell$ of non-negative numbers such that, for any integer $n$, for any $\\ell\\in\\{1,\\dots,n-1\\} $ and any function $g:\\mathcal{H}_k^{n-\\ell} \\rightarrow \\mathbb{R} $ such that\n$$ |g(a_1,\\dots,a_{n-\\ell}) - g(b_1,\\dots,b_{n-\\ell})| \\leq \\sum_{i=1}^{n-\\ell} \\|a_i - b_i\\|_{\\mathcal{H}_k} , $$\nwe have: $ |\\mathbb{E}[g(\\mu_{\\delta_{X_{\\ell+1}}},\\dots,\\mu_{\\delta_{X_{n}}})|X_{1},\\dots,X_{\\ell}]-\\mathbb{E}[g(\\mu_{\\delta_{X_{\\ell+1}}},\\dots,\\mu_{\\delta_{X_{n}}})]| \\leq \\gamma_{1} + \\dots + \\gamma_{n-\\ell} $, almost surely. Assume that $\\Gamma:= \\sum_{\\ell\\geq 1} \\gamma_{\\ell} < \\infty $.\n\\end{asm}\nThis assumption is more technical than Assumption~\\ref{asm:our:mixing}. 
The idea is quite similar: the coefficient $\\gamma_s$ is a measure of the dependence between $X_t$ and $X_{t+s}$, and the assumption will be satisfied if $X_t$ and $X_{t+s}$ are ``almost independent'' when $s$ is large -- but the sense given to ``almost independent'' is not exactly the same as in Assumption~\\ref{asm:our:mixing}. For example, we show in Subsection~\\ref{subsec:ar} that auto-regressive processes $X_{t+1}=a X_t + \\varepsilon_{t+1}$ with $|a|<1$ and i.i.d $\\varepsilon_t$ satisfy this assumption under the additional condition that the $\\varepsilon_t$ are almost surely bounded. Again, note that in the case of independence, we can take all the $\\gamma_{i}=0$, and hence $\\Gamma=0$ in addition to $\\Sigma=0$. We can now state our result in probability:\n\\begin{thm}\n\\label{theorem:mmd:briol:improved}\nAssume that Assumptions~\\ref{asm:our:mixing} and~\\ref{asm:gamma:mixing} are satisfied. Then, for any $\\delta\\in(0,1)$,\n $$ \\mathbb{P} \\left[ \\mathbb{D}_k\\left( P_{\\hat{\\theta}_n},P^0 \\right) \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left( P_{\\theta},P^0 \\right)\n + 2 \\frac{\\sqrt{1+2\\Sigma} + (1+\\Gamma)\\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right] \\geq 1-\\delta.$$\n\\end{thm}\nAssumption \\ref{asm:gamma:mixing} is fundamental to obtain a result in probability. Indeed, the rate of convergence in Theorem \\ref{theorem:mmd:briol:improved} is driven by a concentration inequality upper bounding the MMD distance between the empirical and the true distribution, as done in \\cite{Briol2019}. Nevertheless, the proof of this inequality in \\cite{Briol2019} is based on a Hoeffding-type inequality known as McDiarmid's inequality \\cite{McDo}, which is only valid for independent variables (that is, all the $\\gamma_i=0$) and is therefore not applicable in our dependent setting. Hence, we use a version of McDiarmid's inequality for time series obtained by Rio \\cite{McDoRIO,McDoRIO2}, which is available under the assumption that $\\sum_{\\ell\\geq 1} \\gamma_{\\ell} < \\infty$ (Assumption \\ref{asm:gamma:mixing}).\n\n\\begin{rmk}[The i.i.d case]\n Note that when the $X_i$'s are i.i.d, Assumptions~\\ref{asm:our:mixing} and~\\ref{asm:gamma:mixing} are always satisfied with $\\Sigma=\\Gamma=0$ and thus Theorem~\\ref{theorem:mmd:1} gives simply\n $$ \\mathbb{E} \\left[ \\mathbb{D}_k\\left(P_{\\hat{\\theta}_n},P^0 \\right) \\right] \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left(P_{\\theta},P^0 \\right) +\\frac{2}{\\sqrt{n}} $$\n while Theorem~\\ref{theorem:mmd:briol:improved} gives\n $$ \\mathbb{P} \\left[ \\mathbb{D}_k\\left( P_{\\hat{\\theta}_n},P^0 \\right) \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left( P_{\\theta},P^0 \\right)\n + 2 \\frac{1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right] \\geq 1-\\delta.$$ \n\\end{rmk}
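\n\nThe $n^{-1\/2}$ rate in the i.i.d case is easy to check numerically. The following sketch (ours; Python with NumPy) draws i.i.d samples from $P^0=\\mathcal{N}(0,1)$ and computes $\\mathbb{D}_{k_\\gamma}(\\hat{P}_n,P^0)$ exactly, using the closed-form expectations of the Gaussian kernel under a Gaussian distribution; the last printed column, the average of $\\sqrt{n}\\,\\mathbb{D}_{k_\\gamma}(\\hat{P}_n,P^0)$ over replications, is roughly constant in $n$:\n\\begin{verbatim}\nimport numpy as np\n\ndef mmd_to_std_normal(x, gamma=1.0):\n    # D_k(P_n_hat, N(0,1)) with the Gaussian kernel; the terms\n    # E k(X, x_i) and E k(X, X') under N(0,1) have closed forms\n    kxx = np.exp(-(x[:, None] - x[None, :]) ** 2 \/ gamma ** 2).mean()\n    c1 = np.sqrt(gamma ** 2 \/ (gamma ** 2 + 2.0))\n    c2 = np.sqrt(gamma ** 2 \/ (gamma ** 2 + 4.0))\n    cross = (c1 * np.exp(-x ** 2 \/ (gamma ** 2 + 2.0))).mean()\n    return np.sqrt(max(kxx - 2.0 * cross + c2, 0.0))\n\nrng = np.random.default_rng(1)\nfor n in (100, 400, 1600, 6400):\n    err = np.mean([mmd_to_std_normal(rng.normal(size=n))\n                   for _ in range(50)])\n    print(n, err, err * np.sqrt(n))   # last column roughly constant\n\\end{verbatim}\n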
\n\n\\begin{rmk}[Connection between the MMD distance and the $L^2$ norm]\n\\label{rmk:L2}\nIn Section~\\ref{sec:examples}, we study the connection between the convergence of $P_{\\hat{\\theta}_n}$ in terms of the MMD distance and the convergence of $\\hat{\\theta}_n$ in some classical models. However, it is also worth mentioning a connection between the MMD distance and the quadratic distance on densities. Indeed, assume that $\\mathbb{X} = \\mathbb{R}^d$ and that $P$ and $Q$ have densities $p$ and $q$ respectively with respect to the Lebesgue measure. Using the Gaussian kernel $k_\\gamma(x,y) = \\exp(-\\|x-y\\|^2\/\\gamma^2)$, we expect that, when $\\gamma \\rightarrow 0$, under suitable assumptions,\n$$ \\mathbb{E}_{X\\sim P, Y\\sim Q}[k(X,Y)] \\sim \\pi^{\\frac{d}{2}} \\gamma^d \\int p(x) q(x) {\\rm d}x $$\nso that\n\\begin{equation}\n\\label{equa:lien:l2}\n\\mathbb{D}_{k_\\gamma}(P,Q) \\sim \\pi^{\\frac{d}{4}} \\gamma^{\\frac{d}{2}} \\|p-q\\|_{L^2}.\n\\end{equation}\nCorollary 4, page 1527, of~\\cite{L2} provides a formal statement of this claim. Thus, convergence in the MMD distance has connections with the convergence of the densities (when they exist) in $L^2$.\n\nNote that~\\cite{DevroyeL1,BaraudBirge2016RhoEstimators} argue that the $L^2$-norm is not suitable for universal estimation: indeed, in some models, $P_\\theta$ does not have a density with respect to the Lebesgue measure. But~\\eqref{equa:lien:l2} allows the interpretation of the MMD distance (with the Gaussian kernel) as an approximation of the $L^2$ distance that is, however, well defined for {\\it any} model $(P_\\theta)$.\n\\end{rmk}\n\n\\subsection{Robust parametric estimation}\n\\label{subsec:robust}\n\n\\subsubsection{Contamination models}\n\nAs explained in the introduction, when all observations but a small proportion of them are sampled independently from a generating distribution $P_{\\theta_0}$ ($\\theta_0 \\in \\Theta$), robust parametric estimation consists in finding estimators that are both rate optimal and resistant to outliers. Two of the most popular frameworks for studying robust estimation are the so-called Huber contamination model and the adversarial contamination model. \n\nHuber's contamination model is as follows. We observe a collection of random variables $X_1,...,X_n$. We consider a contamination rate $\\varepsilon\\in(0,1\/2)$, latent i.i.d random variables $Z_1,...,Z_n \\sim \\text{Ber}(\\varepsilon)$ and some noise distribution $Q$, such that the distribution of $X_i$ given $Z_i=0$ is $P_{\\theta_0}$, and the distribution of $X_i$ given $Z_i=1$ is $Q$. Hence, the observations $X_i$ are independent and sampled from the mixture $P^0=(1-\\varepsilon)P_{\\theta_0}+\\varepsilon Q$.\n\nThe adversarial model is more general. Contrary to Huber's contamination model, where all outliers are sampled from the contaminating distribution, we do not make any particular assumption on the outliers here. Hence, we shall adopt slightly different notations. We assume that $X_1,\\dots,X_n$ are identically distributed from $P_{\\theta_0} $ for some $\\theta_0\\in\\Theta$. However, the statistician only observes $\\tilde{X}_1,\\dots,\\tilde{X}_n$, where $\\tilde{X}_i$ can be any arbitrary value for $i\\in \\mathcal{O}$, $\\mathcal{O}$ being an arbitrary set subject to the constraint $|\\mathcal{O}| \\leq \\varepsilon n$, and $\\tilde{X}_i=X_i$ for $i\\notin \\mathcal{O}$. The estimators are built based on these observations $\\tilde{X}_1,\\dots,\\tilde{X}_n$. Both contamination schemes are straightforward to simulate, as in the sketch below.
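\n\nThe following minimal data-generation sketch (ours; Python with NumPy; the contaminating distribution $Q$ and the adversarial value are arbitrary illustrative choices) produces samples from both contamination models:\n\\begin{verbatim}\nimport numpy as np\n\ndef huber_sample(n, theta0, eps, contam, seed=0):\n    # X_i ~ (1 - eps) N(theta0, I_d) + eps Q, with Q given by contam\n    rng = np.random.default_rng(seed)\n    d = len(theta0)\n    x = theta0 + rng.normal(size=(n, d))\n    z = rng.random(n) < eps             # latent Bernoulli(eps) labels\n    x[z] = contam(rng, int(z.sum()), d)\n    return x\n\ndef adversarial_sample(x, eps, value):\n    # replace an arbitrary set of at most eps * n points by value\n    x = x.copy()\n    x[: int(eps * len(x))] = value      # the adversary may pick any set\n    return x\n\ncontam_Q = lambda rng, m, d: 5.0 + rng.normal(size=(m, d))  # Q = N(5, I_d)\nx_huber = huber_sample(1000, np.zeros(2), 0.05, contam_Q)\nx_adv = adversarial_sample(x_huber, 0.05, 20.0)\n\\end{verbatim}\n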
\n\n\\subsubsection{Literature}\n\nOne active research trend in robust statistics focuses on the search for statistically optimal and computationally tractable procedures for the Gaussian mean estimation problem $\\{P_\\theta=\\mathcal{N}(\\theta,I_d)\/\\theta \\in \\mathbb{R}^d\\}$ in the presence of outliers under the i.i.d assumption, which remains a major challenge. Usual robust estimators such as the coordinatewise median and the geometric median are known to be suboptimal in this case, and there is a need to look at more complex estimators such as Tukey's median, which achieves the minimax optimal rate of convergence $\\max(\\frac{d}{n},\\varepsilon^2)$ with respect to the squared Euclidean distance, where $d$ is the dimension, $n$ is the sample size and $\\varepsilon$ is the proportion of corrupted data. Unfortunately, the computation of Tukey's median is not tractable, and even approximate algorithms lead to an $\\mathcal{O}(n^d)$ complexity \\cite{ChanTukey2004,AmentaTukey2000}. This has led to the rise of recent studies in robust statistics that address how to build statistical procedures, in the wake of the works of \\cite{Tukey1975} and \\cite{H\\\"uberRobustness}, that are robust and optimal but also computationally efficient.\n\nThis research area started with two seminal works presenting two procedures for the normal mean estimation problem: \\textit{iterative filtering} \\cite{DiakonikolasRobust2016} and \\textit{dimension halving} \\cite{LaiRaoVempala2016}. These algorithms are based upon the idea of using higher moments in order to obtain a good robust moment estimation, and are minimax optimal up to a poly-logarithmic factor in polynomial time. This idea was then used in several other problems in robust statistics, for instance in sparse functional estimation \\cite{DuRobustFunctionals}, clustering \\cite{KothariRobustClustering}, learning mixtures of spherical Gaussians \\cite{DiakonikolasExtension1}, and robust linear regression \\cite{DiakonikolasExtension2}. In Huber's contamination model, \\cite{ColDal17a} achieves the minimax rate without any extra factor in the $\\varepsilon = \\mathcal{O}(\\min(d^{-1\/2},n^{-1\/4}))$ regime, with an improved overall complexity. Meanwhile, \\cite{ChaoGaoRobustGAN2019} offers a different perspective on robust estimation and connects the robust normal mean estimation problem with Generative Adversarial Networks (GANs) \\cite{GoodfellowGAN2014,BiauGAN2018}, which enables computing robust estimators using efficient tools developed for training GANs. Hence, the authors compute depth-like estimators that retain the same appealing robustness properties as Tukey's median and that can be trained using stochastic gradient descent (SGD) algorithms originally designed for GANs.\n\n\nAnother popular approach for the more general problem of mean estimation under the i.i.d assumption in the presence of outliers is the study of finite-sample sub-Gaussian deviation bounds. Indeed, designing estimators achieving sub-Gaussian performance under minimal assumptions ensures robustness to the outliers that are inevitably present when the generating distribution is heavy-tailed. In the univariate case, some estimators exhibit a sub-Gaussian behavior for all distributions satisfying first- and second-order moment conditions. A simple but powerful strategy, the Median-of-Means (MOM), dates back to \\cite{Nemi1983,Jer86,Alon1999}. This method consists in randomly splitting the data into several equal-size blocks, computing the empirical mean within each block, and finally taking the median of these block means; a short sketch is given below. Most MOM-based procedures lead to estimators that are simultaneously statistically optimal \\cite{Lugosi2016,MOM1,Lecue2018,MONK19,chinot2019} and computationally efficient \\cite{Hopkins2019,chera2019,depersin2019}. Moreover, this approach can be easily extended to the multivariate case \\cite{Minsker2015,Hsu2016}. An important advantage is that the MOM estimator has good performance even for distributions with infinite variance.
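\n\nFor completeness, here is a minimal univariate MOM sketch (ours; Python with NumPy), following exactly the split\/average\/median recipe described above; the heavy-tailed Student example has a finite mean but an infinite variance:\n\\begin{verbatim}\nimport numpy as np\n\ndef median_of_means(x, n_blocks, seed=0):\n    # randomly split the data into equal-size blocks, average within\n    # each block, and return the median of the block means\n    rng = np.random.default_rng(seed)\n    m = (len(x) \/\/ n_blocks) * n_blocks\n    idx = rng.permutation(len(x))[:m]\n    blocks = x[idx].reshape(n_blocks, -1)\n    return np.median(blocks.mean(axis=1))\n\nrng = np.random.default_rng(2)\nx = rng.standard_t(df=2, size=10000)   # mean 0, infinite variance\nprint(median_of_means(x, n_blocks=50), x.mean())\n\\end{verbatim}\n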
An elegant alternative to the MOM strategy is due to Catoni, whose estimator is based on PAC-Bayesian truncation in order to mitigate heavy tails \\cite{Catoni2012}. It has the same performance guarantees as the MOM method, but with sharper and near-optimal constants. In \\cite{CatoniGiulini2017}, Catoni and Giulini proposed a very simple, easy-to-compute multidimensional extension of Catoni's M-estimator, defined as an empirical average of the data in which the observations with large norm are shrunk towards zero, and which still satisfies a sub-Gaussian concentration bound thanks to PAC-Bayes inequalities. The influence function of Catoni and Giulini has been widely used since then, see \\cite{Ilaria2017,Ilaria2018,Holland2019a,Holland2019b,Haddouche2020}. We refer the reader to the beautiful review of \\cite{Lugosi2019mean} for more details on these mean estimation procedures.\n\n\\subsubsection{Robust MMD estimation}\n\nIn this section, we show the properties of our MMD-based estimator in robust parametric estimation with outliers, both in Huber's contamination model and in the adversarial case. Our bounds are obtained by working directly in the RKHS rather than in the parameter space. The consequences of these results in terms of the Euclidean distance in the parameter space will be explored in Section~\\ref{sec:examples}. \n\nFirst, we consider Huber's contamination model \\cite{H\\\"uberRobustness}. The objective is to estimate $P_{\\theta_0}$ by observing contaminated random variables $X_1$, ..., $X_n$ whose actual distribution is $P^0 = (1-\\alpha) P_{\\theta_0} + \\alpha Q $ for some $Q$ and some $0\\leq \\alpha \\leq \\varepsilon$. We state the following key lemma:\n\n\\begin{lemma}\n\\label{lemma:huber}\nWe have, for any $\\theta\\in\\Theta$, $ | \\mathbb{D}_k(P_{\\theta},P^0) - \\mathbb{D}_k (P_{\\theta},P_{\\theta_0})| \\leq 2 \\varepsilon$.\n\\end{lemma}\n\nApplying Lemma~\\ref{lemma:huber} to the left-hand side and to the right-hand side of Theorem~\\ref{theorem:mmd:1}, we obtain the following result.\n\n\\begin{cor}\nAssume that $X_1,\\dots,X_n$ are identically distributed from $P^0 = (1-\\alpha) P_{\\theta_0} + \\alpha Q $ for some $\\theta_0\\in\\Theta$, some $Q$, with $0\\leq \\alpha \\leq \\varepsilon$. Then:\n $$ \\mathbb{E} \\left[ \\mathbb{D}_k\\left(P_{\\hat{\\theta}_n},P_{\\theta_0} \\right) \\right] \\leq 4\\varepsilon + 2 \\sqrt{ \\frac{1+2\\sum_{t=1}^{n}\\varrho_t}{n}} . $$\nIf moreover we assume that Assumptions~\\ref{asm:our:mixing} and~\\ref{asm:gamma:mixing} are satisfied, then for any $\\delta\\in(0,1)$,\n $$ \\mathbb{P} \\left[ \\mathbb{D}_k\\left( P_{\\hat{\\theta}_n},P_{\\theta_0}\\right) \\leq 2 \\left( 2\\varepsilon \n + \\frac{\\sqrt{1+2\\Sigma} + (1+\\Gamma)\\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right) \\right] \\geq 1-\\delta.$$\n\\end{cor}\n\nWe obtain a rate $\\max(1\/\\sqrt{n},\\varepsilon)$ in MMD distance (note once again that the convergence rate with respect to more standard distances is studied in Section~\\ref{sec:examples}). When $\\varepsilon \\lesssim 1\/\\sqrt{n}$, we recover the rate of convergence without contamination, and when $1\/\\sqrt{n} \\lesssim \\varepsilon$, the rate is dominated by the contamination ratio $\\varepsilon$. Hence, the maximum number of outliers that can be tolerated without breaking down the rate is $n\\varepsilon \\asymp \\sqrt{n}$.
\n\nThis result can also be extended to the adversarial contamination setting, where no assumption is made on the outliers.\n\n\\begin{prop}\n \\label{prop:adversarial}\n Assume that $X_1,\\dots,X_n$ are identically distributed from $P^0 = P_{\\theta_0} $ for some $\\theta_0\\in\\Theta$. However, the statistician only observes $\\tilde{X}_1,\\dots,\\tilde{X}_n$, where $\\tilde{X}_i$ can be any arbitrary value for $i\\in \\mathcal{O}$, $\\mathcal{O}$ being an arbitrary set subject to the constraint $|\\mathcal{O}| \\leq \\varepsilon n$, and $\\tilde{X}_i=X_i$ for $i\\notin \\mathcal{O}$, and builds the estimator $\\tilde{\\theta}_n$ based on these observations:\n $$ \\mathbb{D}_k\\left(P_{\\tilde{\\theta}_n},\\frac{1}{n}\\sum_{i=1}^n \\delta_{\\tilde{X}_i}\\right) = \\inf_{\\theta \\in \\Theta} \\mathbb{D}_k\\left(P_{\\theta},\\frac{1}{n}\\sum_{i=1}^n \\delta_{\\tilde{X}_i}\\right). $$\n Then:\n $$ \\mathbb{D}_k\\left(P_{\\tilde{\\theta}_n},P_{\\theta_0} \\right) \\leq \n 4\\varepsilon + 2 \\mathbb{D}_k\\left(P_{\\hat{\\theta}_n},P_{\\theta_0} \\right) . $$\nThus\n $$\n \\mathbb{E} \\left[ \\mathbb{D}_k\\left(P_{\\tilde{\\theta}_n},P_{\\theta_0} \\right) \\right] \\leq 4\\varepsilon + 4 \\sqrt{ \\frac{1+2\\sum_{t=1}^{n}\\varrho_t}{n}}\n $$\n and, under Assumptions~\\ref{asm:our:mixing} and~\\ref{asm:gamma:mixing}, for any $\\delta\\in(0,1)$,\n $$ \\mathbb{P} \\left[ \\mathbb{D}_k\\left( P_{\\tilde{\\theta}_n},P_{\\theta_0}\\right) \\leq 4 \\left( \\varepsilon \n + \\frac{\\sqrt{1+2\\Sigma} + (1+\\Gamma)\\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right) \\right] \\geq 1-\\delta.$$\n\\end{prop}\n\nOne can see that the rate of convergence we obtain without making any assumption on the outliers is exactly the same as in Huber's contamination model. The only difference is that the constant on the right-hand side of the inequality is tighter in Huber's contamination model.\n\n \\section{Examples}\n\\label{sec:examples}\n \n\\subsection{Independent observations}\n\nIn the previous section, we studied the convergence of $P_{\\hat{\\theta}_n}$ with respect to the MMD distance. In this subsection, we show the consequences of these results in terms of the convergence of $\\hat{\\theta}_n$ in some classical models. For the sake of simplicity, we focus on i.i.d observations, that is, $\\varrho_t = 0$ for any $t\\geq 1$. Moreover, we will only use the Gaussian kernel $k_\\gamma(x,y) = \\exp(-\\|x-y\\|^2\/\\gamma^2)$.\n\n\\subsubsection{Estimation of the mean in a Gaussian model}\n\nHere, $\\mathbb{X}=\\mathbb{R}^d$ and we are interested in the estimation of the mean in a Gaussian model. For the sake of simplicity, we assume that the variance is known.\n\n\\begin{prop} \\label{prop:ex:gauss}\n Assume that $P_\\theta = \\mathcal{N}(\\theta,\\sigma^2 I_d)$ for $\\theta\\in\\Theta=\\mathbb{R}^d$. Moreover, assume that we are in an adversarial contamination model where a proportion at most $\\varepsilon$ of the observations is contaminated. 
Then, with probability $1-\\delta$,\n\\begin{equation} \\label{ex:gauss:3}\n\\|\\tilde{\\theta}_n-\\theta_0\\|^2\n\\leq - (4\\sigma^2 + \\gamma^2)\n \\log\\left\\{ 1-8 {\\rm e}^{\\frac{2\\sigma^2 d}{\\gamma^2}} \\left( \\varepsilon \n + \\frac{1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right)^2 \\right\\}.\n\\end{equation}\nIn particular, the choice $\\gamma = \\sigma \\sqrt{2d}$ leads to\n$$\n\\|\\tilde{\\theta}_n - \\theta_0 \\|^2 \\leq -2\\sigma^2(d+2) \\log\\left[1-8 {\\rm e} \\left( \\varepsilon \n + \\frac{1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right)^2\\right].\n$$\n\n\\end{prop}\nThe complete proof can be found in the supplementary material. Note that when $\\varepsilon$ is small and $n$ is large,\n\\begin{multline*}\n\\|\\tilde{\\theta}_n-\\theta_0\\|^2 \\leq -2\\sigma^2(d+2) \\log\\left[1-16 {\\rm e} \\left( \\varepsilon^2 \n + \\frac{\\left(1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)}\\right)^2 }{n} \\right)\\right]\n\\\\\n\\sim\n32 {\\rm e}\\sigma^2(d+2) \\left( \\varepsilon^2 \n + \\frac{\\left(1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)}\\right)^2 }{n} \\right).\n\\end{multline*}\n\nWe can see that our MMD estimator achieves a rate of convergence $d\\varepsilon^2 + d\/n$, which is the same as for several median-based estimators such as the geometric median or the coordinatewise median (see Proposition 2.1 in \\cite{ChenGao2018}). We have a quadratic dependence on $\\varepsilon$, contrary to many robust estimators, such as Median-of-Means, whose dependence on $\\varepsilon$ is linear. Hence, as soon as the dimension is no larger than the square root of the sample size, $d\\leq\\sqrt{n}$, the MMD method tolerates a larger number of outliers without affecting the usual rate of convergence (i.e. the rate with no contamination). A short implementation sketch of the MMD estimator in this model is given below.
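\n\nIn this Gaussian location model with the Gaussian kernel, the term $\\mathbb{E}_{X,X' \\sim P_\\theta}[k_\\gamma(X,X')]$ does not depend on $\\theta$, so minimizing $\\mathbb{D}_{k_\\gamma}(P_{\\theta},\\hat{P}_n)$ amounts to maximizing $\\frac{1}{n}\\sum_{i=1}^n \\exp(-\\|X_i-\\theta\\|^2\/(\\gamma^2+2\\sigma^2))$, as made explicit by the finite-sample program displayed below. The following gradient-ascent sketch (ours; Python with NumPy; the step size, iteration count and starting point are illustrative choices) implements the resulting estimator:\n\\begin{verbatim}\nimport numpy as np\n\ndef mmd_gaussian_mean(x, sigma=1.0, gamma=None, n_iter=200, lr=1.0):\n    # gradient ascent on F(theta) = mean_i exp(-||x_i - theta||^2 \/ s)\n    # with s = gamma^2 + 2 sigma^2; maximizing F minimizes the MMD\n    n, d = x.shape\n    if gamma is None:\n        gamma = sigma * np.sqrt(2 * d)  # the choice from the proposition\n    s = gamma ** 2 + 2 * sigma ** 2\n    theta = np.median(x, axis=0)        # any rough starting point works\n    for _ in range(n_iter):\n        w = np.exp(-((x - theta) ** 2).sum(axis=1) \/ s)\n        theta = theta + lr * (w[:, None] * (x - theta)).mean(axis=0)\n    return theta\n\nrng = np.random.default_rng(3)\nx = rng.normal(size=(500, 5))\nx[:25] += 8.0                           # 5 percent of outliers\nprint(mmd_gaussian_mean(x))             # close to 0 despite the outliers\n\\end{verbatim}\nDistant outliers receive an exponentially small weight $w_i$, which is exactly the mechanism behind the robustness discussed above.\n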
\n\nUnfortunately, it seems that our method performs poorly compared to such estimators in large dimension. Indeed, according to Theorems 2.1 and 2.2 in \\cite{ChenGao2018}, the minimax optimal rate with respect to $d$, $\\varepsilon$ and $n$ is $\\varepsilon^2 + d\/n$. Furthermore, numerical experiments and the investigation of the population limit case, when one has access to infinitely many samples, conducted in \\cite{MMDGANRobust} (which has been published since the first version of this paper) suggest that the MMD estimator cannot match the minimax rate of convergence. Nevertheless, this non-optimality in the minimax sense does not necessarily imply inaccurate mean estimation in general, and MMD can still lead to efficient estimation in most contamination scenarios.\n\nTo understand why the MMD estimator cannot match the minimax rate of convergence in high dimension, and why this is not necessarily a problem, we need to analyze the landscape of the optimization program. \n\nLet us first investigate the population limit case, where we do not work with the MMD distance to the empirical distribution $\\hat{P}_n$ but to the true distribution $(1-\\varepsilon)\\mathcal{N}(\\theta_0,\\sigma^2 I_d)+\\varepsilon Q$, as if we had access to infinitely many samples, and with a Dirac point-mass contamination $Q=\\delta_{\\{\\theta_c\\}}$. The optimization program\n$$\n\\min_{\\theta\\in\\mathbb{R}^d} \\mathbb{D}_{k_\\gamma}\\left(P_{\\theta},(1-\\varepsilon)\\mathcal{N}(\\theta_0,\\sigma^2 I_d)+\\varepsilon \\delta_{\\{\\theta_c\\}}\\right)\n$$\nis then equivalent, for any value of $\\gamma$, to\n$$\n\\max_{\\theta\\in\\mathbb{R}^d} \\bigg\\{\n(1-\\varepsilon) \\exp\\left(-\\frac{\\|\\theta -\\theta_0\\|^2}{\\gamma^2+4\\sigma^2}\\right) + \\varepsilon \\left(\\frac{\\gamma^2+4\\sigma^2}{\\gamma^2+2\\sigma^2}\\right)^{d\/2} \\exp\\left(-\\frac{\\|\\theta-\\theta_c\\|^2}{\\gamma^2+2\\sigma^2}\\right) \\bigg\\} .\n$$\n\nEven though the objective function is nonconvex in $\\theta$, it is easy to see that the solution belongs to the line segment between $\\theta_0$ and $\\theta_c$. More precisely, if $\\theta_0$ and $\\theta_c$ are far from each other, then the solution is simply $\\theta_0$. Conversely, if $\\theta_0$ and $\\theta_c$ are close, then the solution will be very close to $\\theta_0$. In the intermediate situation where $\\|\\theta_0-\\theta_c\\|^2\\approx d$, it is proven in \\cite{MMDGANRobust} that the solution is at least $\\varepsilon\\sqrt{d}$ away from the true parameter $\\theta_0$, which explains the term $d\\varepsilon^2$ in the rate of convergence of the MMD estimator. Hence, we understand that the worst-case rate of the MMD estimator does not correspond to cases where $\\theta_c$ is far from $\\theta_0$, but to cases where the distance is large in high dimension only (of order $\\sqrt{d}$).\n\nThe previous reasoning can be easily generalized to the MMD estimator with a finite sample. In this situation, with $Q=\\delta_{\\{\\theta_c\\}}$ and denoting by $\\mathcal{O}$ the set of outliers, the optimization program can be written \n$$\n\\max_{\\theta\\in\\mathbb{R}^d} \\bigg\\{\n\\sum_{i\\notin\\mathcal{O}} \\exp\\left(-\\frac{\\|\\theta -X_i\\|^2}{\\gamma^2+2\\sigma^2}\\right) + |\\mathcal{O}| \\exp\\left(-\\frac{\\|\\theta-\\theta_c\\|^2}{\\gamma^2+2\\sigma^2}\\right) \\bigg\\} , \n$$\nand the solution belongs to the convex hull of the set of points composed of the (random) inliers among the random variables $X_1,...,X_n$ and of the contamination point $\\theta_c$. A remarkable fact in high-dimensional probability is that samples from a multivariate standard Gaussian distribution concentrate on the sphere of radius $\\sqrt{d}$ centered at $\\theta_0$, which means that the typical distance $\\|X_i-\\theta_0\\|$ of a datapoint $X_i$ from the mean $\\theta_0$ is roughly $\\sqrt{d}$. Then, if the contamination is such that $\\|\\theta_0-\\theta_c\\|^2\\approx d$, the outliers lie at a distance $\\sqrt{d}$ from $\\theta_0$ without being detected; they thus look harmless but shift the mean by approximately $\\sqrt{d}\\varepsilon$, see Figure \\ref{shiftmean}.\n\\begin{figure}[h]\n\\caption{Illustration of the behaviour of the MMD estimator in the high-dimensional Gaussian mean estimation problem. The true parameter $\\theta_0$ and datapoints sampled from the true distribution $\\mathcal{N}(\\theta_0,I_d)$ are colored in blue. Outliers and the MMD estimator $\\hat{\\theta}_n$ are colored in red. We can see that outliers lying at a distance $\\sqrt{d}$ are not detected and shift the mean by $\\varepsilon\\sqrt{d}$.\n}\n\\label{shiftmean}\n\\includegraphics[width=7cm]{f1.pdf}\n\\centering\n\\end{figure}
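\n\nThis mean-shift phenomenon can be checked numerically. The sketch below (ours; Python with NumPy; all numerical values are illustrative) evaluates the population objective above along the segment $\\theta(t)=\\theta_0+t(\\theta_c-\\theta_0)$ with $\\theta_0=0$ and $\\|\\theta_c-\\theta_0\\|=\\sqrt{d}$, and locates its maximizer:\n\\begin{verbatim}\nimport numpy as np\n\n# population objective along theta(t) = t * theta_c, with theta0 = 0\n# and ||theta_c - theta0|| = sqrt(d)\nd, sigma, eps = 100, 1.0, 0.05\ngamma = sigma * np.sqrt(2 * d)\na, b = gamma ** 2 + 4 * sigma ** 2, gamma ** 2 + 2 * sigma ** 2\nts = np.linspace(0.0, 1.0, 2001)\nd0, dc = ts ** 2 * d, (1 - ts) ** 2 * d  # squared distances\nJ = (1 - eps) * np.exp(-d0 \/ a) + eps * (a \/ b) ** (d \/ 2) * np.exp(-dc \/ b)\nt_star = ts[np.argmax(J)]\nprint(t_star * np.sqrt(d))              # shift of the order eps * sqrt(d)\n\\end{verbatim}\nThe printed shift is of the order of $\\varepsilon\\sqrt{d}$, in agreement with the discussion above.\n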
\n\nHence, perhaps counter-intuitively at first sight, the worst contamination does not come from a value of $\\theta_c$ that is very far away from $\\theta_0$ (in which case the estimator will simply be the mean of the inliers), but from one that is only $\\sqrt{d}$ away from $\\theta_0$; hence, there is essentially one ``worst-case contamination'' that explains the non-optimality in the minimax sense. Figure 1a of \\cite{MMDGANRobust} even seems to show that the error of the MMD estimator when $\\gamma$ is of order $\\sqrt{d}$ increases with $\\|\\theta_0-\\theta_c\\|$ until the latter reaches $\\sqrt{d}$, and then decreases. The same applies to a Gaussian contamination with a small variance.\n\n\\subsubsection{Cauchy model}\n\nHere, $\\mathbb{X}=\\mathbb{R}$ and $P_\\theta=\\mathcal{C}(\\theta,1)$ where $\\mathcal{C}(\\theta,s)$ has density $1\/[\\pi s (1+(x-\\theta)^2 \/ s^2)]$.\n\n\\begin{prop} \\label{prop:ex:cauchy}\n Assume that $P_\\theta = \\mathcal{C}(\\theta,1)$ for $\\theta\\in\\Theta=\\mathbb{R}$. Moreover, assume that we are in an adversarial contamination model where a proportion at most $\\varepsilon$ of the observations is contaminated. Then, taking $\\gamma=2$ leads to, for any $\\delta>0$,\n$$\n(\\tilde{\\theta}_n - \\theta_0)^2 \\leq\n4\\left( \\frac{1}{1-96 \\pi \\left( \\varepsilon^2 + \\frac{2 + 4\\log(1\/\\delta) }{n} \\right) } - 1 \\right).\n$$\n\\end{prop}\nNote that\n$$ (\\tilde{\\theta}_n - \\theta_0)^2 \\leq\n4\\left( \\frac{1}{1-128 \\pi \\left( \\varepsilon^2 + \\frac{2 + 4\\log(1\/\\delta) }{n} \\right) } - 1 \\right) \\sim 512 \\pi \\left( \\varepsilon^2 + \\frac{2 + 4\\log(1\/\\delta) }{n} \\right) .$$\n\n\\subsubsection{Estimation with a dictionary}\n\\label{dictionary}\n\nWe consider here the estimation of $P^0$ by a linear combination of measures in a dictionary. This framework actually appears in various models:\n\\begin{itemize}\n \\item First, when the dictionary contains probability distributions, this is simply a mixture of known components. In this case, the linear combination is actually a convex combination. This context is for example studied in~\\cite{Dal2017}.\n \\item Second, assuming that $P^0$ has a density, this setting can be used for nonparametric density estimation, the dictionary being defined by a basis of $L_2$. This is for example the point of view in~\\cite{Alquier2008,BTW1,BTW2}.\n\\end{itemize}\nWe will here focus on the first setting, but the extension to the second one is quite straightforward. Let $\\{\\Phi_1,\\dots,\\Phi_D\\}$ be a family of probability measures over $\\mathbb{X}=\\mathbb{R}^d$. For $1\\leq i\\leq D$, recall that\n$$ \\mu_{\\Phi_i}(\\cdot) = \\int k(x,\\cdot) \\Phi_i({\\rm d}x). $$\nDefine the measure $P_\\theta=\\mathcal{D}(\\theta;\\Phi_1,\\dots,\\Phi_D)=\\sum_{i=1}^D \\theta_i \\Phi_i $, and define the model $\\{P_\\theta,\\theta\\in\\Theta\\}$ with $\\Theta \\subseteq \\mathcal{S}_D=\\{\\theta\\in\\mathbb{R}_+^D: \\sum_{i=1}^D \\theta_i = 1 \\}$. The estimator is then\n$$\n\\hat{\\theta}_n = \\underset{\\theta \\in \\Theta}{\\arg\\min} \\left\\| \\sum_{\\ell=1}^D \\theta_\\ell \\mu_{\\Phi_\\ell}(\\cdot) -\\mu_{\\hat{P}_n} \\right\\|^2_{\\mathcal{H}_k},\n$$\nwhich is a quadratic program in $\\theta$ over the simplex; a short numerical sketch is given below.
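\n\nIndeed, the objective expands as $\\theta^T G_\\gamma \\theta - 2 b^T \\theta + c$ where $G_\\gamma = (\\left<\\mu_{\\Phi_i},\\mu_{\\Phi_j}\\right>_{\\mathcal{H}_{k_\\gamma}})_{i,j}$, $b_i = \\frac{1}{n}\\sum_{j=1}^n \\mu_{\\Phi_i}(X_j)$ and $c$ does not depend on $\\theta$, so $\\hat{\\theta}_n$ can be computed by projected gradient descent on the simplex. Here is a self-contained sketch (ours; Python with NumPy; the Gaussian dictionary and the closed-form entries of $G_\\gamma$ and $b$, valid for Gaussian components with the Gaussian kernel, are illustrative choices):\n\\begin{verbatim}\nimport numpy as np\n\ndef proj_simplex(v):\n    # Euclidean projection onto {theta >= 0, sum(theta) = 1}\n    u = np.sort(v)[::-1]\n    css = np.cumsum(u) - 1.0\n    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]\n    return np.maximum(v - css[rho] \/ (rho + 1.0), 0.0)\n\ndef dictionary_weights(G, b, n_iter=500, lr=0.1):\n    # projected gradient descent on theta^T G theta - 2 b^T theta\n    theta = np.full(len(b), 1.0 \/ len(b))\n    for _ in range(n_iter):\n        theta = proj_simplex(theta - lr * 2.0 * (G @ theta - b))\n    return theta\n\n# toy dictionary: Phi_i = N(m_i, tau^2) on the real line, Gaussian kernel\nm, tau, gamma = np.array([-3.0, 0.0, 3.0]), 1.0, 1.0\nG = (np.sqrt(gamma ** 2 \/ (gamma ** 2 + 4 * tau ** 2))\n     * np.exp(-(m[:, None] - m[None, :]) ** 2 \/ (gamma ** 2 + 4 * tau ** 2)))\nrng = np.random.default_rng(4)\ntheta_true = np.array([0.2, 0.5, 0.3])\nz = rng.choice(3, size=5000, p=theta_true)\nx = m[z] + tau * rng.normal(size=5000)\nb = (np.sqrt(gamma ** 2 \/ (gamma ** 2 + 2 * tau ** 2))\n     * np.exp(-(x[:, None] - m[None, :]) ** 2 \/ (gamma ** 2 + 2 * tau ** 2))).mean(axis=0)\nprint(dictionary_weights(G, b))         # approximately theta_true\n\\end{verbatim}\n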
An application of Theorem~\\ref{theorem:mmd:briol:improved} leads to:\n\n\\begin{prop}\n Assume that $P_{\\theta} = \\sum_{i=1}^D \\theta_i \\Phi_i$ where each $\\Phi_i$ is a probability distribution. Define the matrix $G_\\gamma = (\\left<\\mu_{\\Phi_i},\\mu_{\\Phi_j}\\right>_{\\mathcal{H}_{k_\\gamma}} )_{1\\leq i,j\\leq D} $. Letting $\\lambda_{\\min}(\\cdot) $ denote the smallest eigenvalue of a symmetric matrix, we have:\n $$ \\mathbb{P} \\left[ \\mathbb{D}_k\\left( P_{\\hat{\\theta}_n},P^0 \\right) \\leq \\inf_{\\theta\\in\\Theta} \\mathbb{D}_k\\left( P_{\\theta},P^0 \\right)\n + 2 \\frac{1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)} }{\\sqrt{n}} \\right] \\geq 1-\\delta$$\n and, in the well-specified case where $P^0 = P_{\\theta_0}$,\n $$ \\mathbb{P} \\left[ \\|\\hat{\\theta}_n-\\theta_0\\|^2 \\leq \\frac{4\\left(1 + \\sqrt{2\\log\\left(\\frac{1}{\\delta}\\right)}\\right)^2 }{\\lambda_{\\min}(G_\\gamma)\\, n} \\right] \\geq 1-\\delta\n .$$\n\\end{prop}\n\n\\subsection{$\\beta$-mixing observations}\n\nWe now consider non-independent random variables: as in the general framework presented above, $(X_t)_{t\\in \\mathbb{Z}}$ is a strictly stationary time series, with stationary distribution $P^0$, and we observe $X_1,\\dots,X_n$. We will exhibit conditions on the dependence of the $X_i$'s ensuring that we can still estimate $P^0$ with the MMD method.\n\nThere is a very rich literature on limit theorems and exponential inequalities under conditions on various dependence coefficients. Mixing coefficients and their applications are detailed in the monographs~\\cite{mixing,mixing2}, and weak dependence coefficients in~\\cite{weakd}. In this subsection, we show that our coefficient $\\varrho_t$ can be upper-bounded by the $\\beta$-mixing coefficients. Hence, for any $\\beta$-mixing process, the estimation of $P^0$ using MMD remains possible. We also recall some examples of $\\beta$-mixing processes. Note that we will show in the next subsection that Theorem~\\ref{theorem:mmd:1} can be successfully applied to non-$\\beta$-mixing processes.\n\n\\subsubsection{$\\beta$-mixing and the coefficients $\\varrho_t$}\n\nWe start with a reminder of the definition of the $\\beta$-mixing coefficients, from page 4 (Chapter 1) in~\\cite{weakd}.\n\\begin{dfn}\n\\label{dfn:beta}\nGiven two $\\sigma$-algebras $\\mathcal{A}$ and $\\mathcal{B}$,\n $$ \\beta(\\mathcal{A},\\mathcal{B}) = \\frac{1}{2} \\sup_{\n \\begin{tiny}\n \\begin{array}{c}\n I,J \\geq 1\n \\\\ U_1,\\dots,U_I\n \\\\ V_1,\\dots,V_J\n \\end{array}\n \\end{tiny}\n } \\sum_{1\\leq i \\leq I} \\sum_{1\\leq j \\leq J} |\\mathbb{P}(U_i\\cap V_j) - \\mathbb{P}(U_i)\\mathbb{P}(V_j)| $$\n where $(U_1,\\dots,U_I)$ is any partition of the sample space with $U_i \\in \\mathcal{A}$, and $(V_1,\\dots,V_J)$ any partition with $V_j \\in \\mathcal{B}$. Put:\n $$\\beta_t^{(X)} = \\beta(\\sigma(X_0,X_{-1},\\dots),\\sigma(X_t,X_{t+1},\\dots)). $$\n\\end{dfn}\nSection 1.5 in~\\cite{mixing} provides summability conditions on the $\\beta_t^{(X)}$ leading to a law of large numbers and to a central limit theorem. Examples are also discussed.\n\\begin{exm}\nAssume in this example that $(X_t)$ is an homogeneous Markov chain given by its transition kernel $P$ and $X_0\\sim \\pi$ where $\\pi P = \\pi$. Assume that there is a $0