diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznjwc" "b/data_all_eng_slimpj/shuffled/split2/finalzznjwc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznjwc" @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n\nWith the successful launch of the \\emph{Neil Gehrels Swift Observatory} \\citep{Gehrels04}, early-time\nafterglows for gamma-ray bursts (GRBs) have been revealed. They are important for unveiling the open\nquestions for GRB physics, such as their progenitors and central engines \\citep[][for a review]{zhang18}.\nObservationally, a good fraction of the X-ray light curves of (long and short) GRBs were discovered\nto show a long-lasting plateau feature. Such a feature seems to be consistent with the prediction\nof millisecond magnetar (namely rapidly spinning, strongly magnetized neutron star) engine, which\nis thought to be formed from a violent event such as the massive star collapse (for long GRBs) or\ndouble neutron star merger (for short GRBs) \\citep{Paczynski86,Eichler89,Usov92,Woosley93,Thompson94,Dai98a,Dai98b,MacFadyen99,Zhang01,Metzger08,Zhang11a}.\n\n\nThe energy reservoir of a newly born millisecond magnetar is the total rotational energy, which could be\nestimated as\n\\begin{eqnarray}\nE_{\\rm rot} = \\frac{1}{2} I \\Omega^{2}\n\\label{Erot}\n\\end{eqnarray}\nwhere $I$ is the moment of inertia and $\\Omega=2\\pi\/P$ is the angular frequency of the nascent neutron star (NS).\nIn principle, during the spin-down process, the nascent magnetar loses its rotational energy via both magnetic dipole\nradiation and gravitational wave (GW) emission, so that the spin-down law can be written as \\citep{Shapiro83,Zhang01}\n\\begin{eqnarray}\n\\dot{E}&=-L_{\\rm dip}-L_{\\rm GW}&=-\\frac{B_p^2R^6\\Omega^4}{6c^3}-\\frac{32GI^2\\epsilon^2\\Omega^6}{5c^5}.\n\\label{dotE}\n\\end{eqnarray}\n$B_p$ is the dipolar field strength at the magnetic poles on the NS surface. $R$ and $\\epsilon$ are the radius\nand ellipticity of the NS, respectively. Depending on the NS properties, such as the values of $B_p$ and $\\epsilon$,\nthe spin-down process will be dominated by one or the other loss term.\n\n\nThe Poynting flux generated by the EM dipolar emission could undergo the magnetic energy dissipation processes with high\nefficiency \\citep{Zhang11b} and power the X-ray plateau feature in GRB afterglows \\citep{Rowlinson10,Rowlinson13,Gompertz13,Gompertz14,Lu14,Lu15}.\nIn principle, the physical parameters of the newly born magnetar could be estimated by fitting the observed GRB\nX-ray plateau data. On the other hand, one can also infer {the dominant energy loss term} (EM or GW radiation) by\nanalyzing the braking index of the magnetar, which can be obtained by fitting the\ndecay slopes of the X-ray plateau and its follow-up segment \\citep{Zhang01,Lasky16,Lasky17,Lu18,Lu19}. Studying the population of braking indices is another feasible way to look at this \\citep{Sarin20a}.\nSome GRBs were discovered to show an extended plateau, followed by a sharp decay with a decay index larger than 3 \\citep[called internal plateau;][]{Rowlinson10,Rowlinson13,Lu15,Sarin20a},\nwhich is usually interpreted as the magnetar collapses into a BH \\footnote{Alternatively, it has been proposed that the internal plateau might be an imprint of the r-process heating on fall-back accretion \\citep{Desai19}, or be the high-latitude emission signature from a structured jet \\citep{Ascenzi20}.}. 
For these cases, one can estimate the dipolar\nmagnetic field strength $B_p$ and the initial spin period $P_0$ of the magnetar from the observed X-ray plateau luminosity and its\nending time \\citep{Rowlinson10,Rowlinson13,Gompertz13,Gompertz14,Lu15,Gao16,Sarin20a}.\n\n\nIn previous studies, $R$ and $I$ of magnetars are usually assumed to be constant and assigned so-called fiducial\nvalues, such as $R=10^6~\\rm cm$ and $I=1.5\\times10^{45}~\\rm g~cm^2$. However, as the GRB central engine, the magnetars are supposed to be\nrapidly rotating (especially for short GRBs, where the magnetars are formed from a merger), in which case their radii and moments of inertia\nshould be related to the rotational speed \\citep{lattimer12}. Moreover, the specific values of $R$ and $I$ should also depend on the\nNS equation of state (EoS) and the NS mass.\n\nIn this paper, we intend to study in detail how $R$ and $I$ evolve as the magnetar spins down, how these evolution effects alter the\nmagnetic dipole radiation behavior, and to what extent the NS EoS and NS mass affect the value of the dipole radiation luminosity.\n\n\n\\section{$R$ and $I$ evolution}\n\nFor the purpose of this work, we first need to know how the radius $R$ and moment of inertia $I$ of a given rapidly\nrotating neutron star change with its rotational speed. Obviously, this depends on the EoS of the neutron star. Here we\nnumerically solve the equilibrium equations for a stationary, axially symmetric, rigidly rotating NS,\nwithin a fully general relativistic framework. In this case, the spacetime metric\ncan be written as\n\\begin{eqnarray}\nds^2 &=& -e^{2\\nu}\\, dt^2 + r^2 \\sin^2\\theta B^2 e^{-2\\nu}\n(d\\phi - \\omega\\, dt)^2\n\\nonumber \\\\ & & \\mbox{}\n+ e^{2\\alpha} (dr^2 + r^2\\, d\\theta^2),\n\\end{eqnarray}\nwhere the potentials $\\nu$, $B$, $\\omega$, and $\\alpha$ depend only on $r$ and $\\theta$. The potentials have the\nfollowing asymptotic behavior \\citep{Butterworth76}\n\\begin{eqnarray}\n\\nu &=& -\\frac{M}{r} + \\frac{B_0M}{3r^3} + \\nu_2 P_2(\\cos\\theta)+\\mathcal{O}\\left(\\frac{1}{r^{4}}\\right), \\nonumber \\\\\nB &=& 1 + \\frac{B_0}{r^2} +\\mathcal{O}\\left(\\frac{1}{r^{4}}\\right) , \\nonumber \\\\\n\\omega &=&\\frac{2I\\Omega}{r^3}+\\mathcal{O}\\left(\\frac{1}{r^{4}}\\right) ,\n\\end{eqnarray}\nwhere $M$ is the NS mass and $\\Omega$ is the angular frequency. $B_0$ and $\\nu_2$ are real constants.\nWhen describing the interior of the NS as a perfect fluid, its energy-momentum tensor becomes\n\\begin{eqnarray}\nT^{\\mu\\nu} = (\\rho + p)u^{\\mu}u^{\\nu} + p g^{\\mu\\nu},\n\\end{eqnarray}\nwhere $\\rho$ represents the energy density, $p$ denotes the pressure, and $u^{\\mu}$ is the\n$4$-velocity.\n\n\nFor a selected sample of EoSs [SLy \\citep{Douchin01}, ENG \\citep{Engvik96}, AP3 \\citep{Akmal97}, WFF2 \\citep{Wiringa88}] within\na range of maximum mass ($2.05M_{\\odot}$--$2.39M_{\\odot}$), we find that the sharp decays (with decay indices $>3$) following the X-ray plateaus \\citep{Rowlinson10,Rowlinson13,Lu15,Sarin20a} are not necessarily related to the collapse of the central magnetar. Instead, they could be caused by the $R$ and $I$ evolution effect, if the spin-down process of the nascent NS is dominated by GW radiation and the ellipticity satisfies $\\epsilon\\propto B_{p}^{2}$. If our interpretation is correct, this may explain why some GRBs that present the internal plateau feature still show late-time central engine activity, manifested through flares and second shallow plateaus \\citep{Troja07,Perley09,Margutti11,Gao15,Gao17b,Zhao20}. 
In the magnetar collapsing scenario, only flares or plateaus close to the internal plateau could be interpreted with the fall-back accretion model \\citep{chen17,Zhao20}.\n\n\n\n\n\\section{Analytical analysis for the numerical results}\nUnder certain approximations, we can explain the numerical results with an analytical method. By fitting the evolutionary behavior of $R$ and $I$ for different EoSs and baryonic masses, one can approximately obtain\n\\begin{eqnarray}\n\\label{R_evolution} R \\simeq \\left\\{ \\begin{array}{ll} R_0(\\frac{\\Omega}{\\Omega_{\\rm k}})^{m}, & \\Omega_1<\\Omega<\\Omega_{\\rm k};\\\\\nR_0(\\frac{\\Omega_1}{\\Omega_{\\rm k}})^{m}, &\n\\Omega\\leq\\Omega_1. \\\\\n\\end{array} \\right.\n\\end{eqnarray}\n\n\\begin{table*}\n\\begin{center}{\\scriptsize\n\\caption{The evolution indices of $R$ and $I$ for different EoSs and NS baryonic masses}\n\\begin{tabular}{ccccc}\n\\hline\n\\hline\n\\multicolumn{5}{c}{\\qquad\\qquad\\qquad\\qquad$m$}\\\\\n\\hline\n& AP3 & ENG & SLy & WFF2 \\\\\n\\hline\n$M_{b}=2.0M_{\\odot}$ & 0.34 & 0.35 & 0.21 & 0.33 \\\\\n$M_{b}=2.5M_{\\odot}$ & 0.27 & 0.28 & 0.37 & 0.27 \\\\\n$M_{b}=3.0M_{\\odot}$ & 0.38 & NO & NO & NO \\\\\n\\hline\n\\multicolumn{5}{c}{\\qquad\\qquad\\qquad\\qquad$k$}\\\\\n\\hline\n& AP3 & ENG & SLy & WFF2 \\\\\n\\hline\n$M_{b}=2.0M_{\\odot}$ & 0.14 & 0.13 & 0.13 & 0.12 \\\\\n$M_{b}=2.5M_{\\odot}$ & 0.15 & 0.24 & 0.25 & 0.22 \\\\\n$M_{b}=3.0M_{\\odot}$ & 0.31 & NO & NO & NO \\\\\n \\hline\n \\hline\n \\end{tabular}\n }\n\\end{center}\n\\end{table*}\n\n\\begin{eqnarray}\n\\label{I_evolution} I \\simeq \\left\\{ \\begin{array}{ll} I_0(\\frac{\\Omega}{\\Omega_{\\rm k}})^{k}, & \\Omega_1<\\Omega<\\Omega_{\\rm k};\\\\\nI_0(\\frac{\\Omega_1}{\\Omega_{\\rm k}})^{k}, &\n\\Omega\\leq\\Omega_1, \\\\\n\\end{array} \\right.\n\\end{eqnarray}\nwhere $R_{0}$, $I_{0}$, and $\\Omega_{\\rm k}$ are the initial radius, initial moment of inertia, and Keplerian angular velocity of the newly born NS, respectively.\nIt is worth noting that $\\Omega_{\\rm k}$ depends on the NS EoS and baryonic mass. $m$ and $k$ are the power-law indices for the $R$ and $I$ evolution with respect to the rotational speed of the NS. The best-fit values of $m$ and $k$ for different EoSs and $M_b$ are collected in Table 1. For our selected EoSs, $m$ ranges from 0.21 to 0.38 and $k$ ranges from 0.12 to 0.31. When the baryonic mass is large enough, the magnetar would collapse into a black hole after only slightly spinning down, in which case the values of $m$ and $k$ are denoted as ``NO\". 
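To make these fits concrete, the piecewise approximations above can be evaluated directly; the following minimal Python sketch does so for AP3 with $M_{b}=2.0M_{\\odot}$ ($m=0.34$, $k=0.14$ from Table 1). Note that the normalizations $R_0$, $I_0$ and the frequencies $\\Omega_{\\rm k}$, $\\Omega_1$ below are illustrative placeholders rather than EoS-derived values:\n\\begin{verbatim}\n# Piecewise power-law fits for R(Omega) and I(Omega)\n# (indices m, k from Table 1: AP3, M_b = 2.0 Msun)\nm, k = 0.34, 0.14\n\n# Illustrative normalizations (NOT derived from the EoS tables)\nR0 = 1.5e6        # initial radius [cm]\nI0 = 2.0e45       # initial moment of inertia [g cm^2]\nOmega_k = 8.0e3   # Keplerian angular velocity [rad\/s]\nOmega_1 = 2.0e3   # frequency below which R and I freeze out [rad\/s]\n\ndef radius(Omega):\n    # R = R0 (Omega\/Omega_k)^m for Omega_1 < Omega < Omega_k,\n    # constant below Omega_1\n    x = max(Omega, Omega_1)\n    return R0 * (x \/ Omega_k) ** m\n\ndef inertia(Omega):\n    # I = I0 (Omega\/Omega_k)^k, frozen below Omega_1\n    x = max(Omega, Omega_1)\n    return I0 * (x \/ Omega_k) ** k\n\nfor Omega in (8.0e3, 5.0e3, 2.0e3, 1.0e3):\n    print(Omega, radius(Omega), inertia(Omega))\n\\end{verbatim}\nAs the NS spins down from $\\Omega_{\\rm k}$ to $\\Omega_1$, the radius and moment of inertia shrink by factors of $(\\Omega_1\/\\Omega_{\\rm k})^{m}$ and $(\\Omega_1\/\\Omega_{\\rm k})^{k}$, respectively, which is the origin of the light-curve modifications derived below.\n\n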
With the assumption that the magnetic dipole moment $\\mu\\equiv B_pR^3$\nis conserved, the evolution of $L_{\\rm dip}$ can be derived from Equation \\ref{dotE}:\n\n(I) For the $L_{\\rm dip}$-dominated case, one has\n\\begin{eqnarray}\nL_{\\rm dip}\\simeq -\\frac{(2+k)I_0\\Omega^{k+1}}{2\\Omega_{\\rm k}^{k}}\\dot{\\Omega}=\\frac{B_{p,0}^2R_0^6\\Omega^{4}}{6c^3},\n\\label{EM_dominated}\n\\end{eqnarray}\nso that the complete solution of $\\Omega(t)$ in Equation \\ref{EM_dominated} could be written as\n\\begin{eqnarray}\n\\Omega=\\Omega_0\\left[1+\\frac{t}{T_{\\rm sd,em}}\\right]^\\frac{1}{k-2},\n\\label{Omega_EM}\n\\end{eqnarray}\nwhere $\\Omega_{0}$ is the initial angular frequency at $t=0$, and $T_{\\rm sd,em}$ is the corresponding characteristic spin-down timescale, which could be derived as\n\\begin{eqnarray}\nT_{\\rm sd,em}=\\frac{3(2+k)I_0c^3}{(2-k)B_{p,0}^2R_0^6\\Omega_{\\rm 0}^{2}}(\\frac{\\Omega_{\\rm 0}}{\\Omega_{\\rm k}})^{k}.\n\\label{Tsdem}\n\\end{eqnarray}\nHence, the evolution history of $L_{\\rm dip}$ could be expressed as\n\\begin{eqnarray}\nL_{\\rm dip}=L_{\\rm sd,em}\\left[1+\\frac{t}{T_{\\rm sd,em}}\\right]^\\frac{4}{k-2},\n\\label{Luminosity_EM}\n\\end{eqnarray}\nwhere $L_{\\rm sd,em}$ is the initial luminosity of the electromagnetic dipole emission at $t=0$, which could be calculated as\n\\begin{eqnarray}\nL_{\\rm sd,em}=\\frac{B_{p,0}^2R_0^6\\Omega_0^{4}}{6c^3}.\n\\label{Lsdem}\n\\end{eqnarray}\nIt is clear that both the magnitude and the evolution history of $L_{\\rm dip}$ depend on the realistic EoS and NS mass, as well as on the evolution history of the moment of inertia (with index $k$). Comparing Equation \\ref{L obs} with Equation \\ref{Luminosity_EM}, one can obtain the braking index in this scenario as\n\\begin{eqnarray}\nn=3-k.\n\\end{eqnarray}\nWhen $k>0$, we have $n<3$.\n\n(II) For the $L_{\\rm GW}$-dominated case with constant $\\epsilon$, one has\n\\begin{eqnarray}\nL_{\\rm GW}\\simeq -\\frac{(2+k)I_0\\Omega^{k+1}}{2\\Omega_{\\rm k}^{k}}\\dot{\\Omega}=\\frac{32GI_0^2\\epsilon^2\\Omega^{6+2k}}{5c^5\\Omega_{\\rm k}^{2k}},\n\\label{GW_dominated}\n\\end{eqnarray}\nso that the complete solution of $\\Omega(t)$ in Equation \\ref{GW_dominated} could be written as\n\\begin{eqnarray}\n\\Omega=\\Omega_0\\left[1+\\frac{t}{T_{\\rm sd,gw}}\\right]^{-\\frac{1}{k+4}},\n\\label{Omega_GW}\n\\end{eqnarray}\nwhere $T_{\\rm sd,gw}$ could be derived as\n\\begin{eqnarray}\nT_{\\rm sd,gw}=\\frac{5(k+2)c^5}{64(k+4)GI_0\\epsilon^2\\Omega_0^{4}}(\\frac{\\Omega_{\\rm 0}}{\\Omega_{\\rm k}})^{-k}.\n\\label{Tsdgw}\n\\end{eqnarray}\nThe evolution history of $L_{\\rm dip}$ could be expressed as\n\\begin{eqnarray}\nL_{\\rm dip}=L_{\\rm sd,em}\\left[1+\\frac{t}{T_{\\rm sd,gw}}\\right]^{-\\frac{4}{k+4}}.\n\\label{Luminosity_GW}\n\\end{eqnarray}\nSimilar to the EM-dominated case, here both the magnitude and the evolution history of $L_{\\rm dip}$ depend on the realistic EoS and NS mass, as well as on the evolution history of the moment of inertia (with index $k$). 
Comparing Equation \\ref{L obs} with Equation \\ref{Luminosity_GW}, one can derive the braking index in this scenario as\n\\begin{eqnarray}\nn=5+k.\n\\end{eqnarray}\nWhen $k>0$, we have $n>5$.\n\n(III) For the $L_{\\rm GW}$-dominated case with $\\epsilon=\\beta B_{p}^{2}$, one has\n\\begin{eqnarray}\nL_{\\rm GW}\\simeq -\\frac{(2+k)I_0\\Omega^{k+1}}{2\\Omega_{\\rm k}^{k}}\\dot{\\Omega}=\\frac{32GI_0^2\\beta^2B^{4}_{p}\\Omega^{6+2k}}{5c^5\\Omega_{\\rm k}^{2k}},\n\\label{GW2_dominated}\n\\end{eqnarray}\nso that the complete solution of $\\Omega(t)$ in Equation \\ref{GW2_dominated} could be written as\n\\begin{eqnarray}\n\\Omega=\\Omega_0\\left[1+\\frac{t}{T_{\\rm sd,gw}}\\right]^{\\frac{1}{12m-k-4}},\n\\label{Omega_GW2}\n\\end{eqnarray}\nwhere $T_{\\rm sd,gw}$ could be derived as\n\\begin{eqnarray}\nT_{\\rm sd,gw}=\\frac{5(k+2)R^{12}_{0}c^5}{64(k+4-12m)GI_0\\beta^2\\mu^{4}\\Omega_0^{4}}(\\frac{\\Omega_{\\rm 0}}{\\Omega_{\\rm k}})^{12m-k}.\n\\label{Tsdgw2}\n\\end{eqnarray}\nThe evolution history of $L_{\\rm dip}$ could be expressed as\n\\begin{eqnarray}\nL_{\\rm dip}=L_{\\rm sd,em}\\left[1+\\frac{t}{T_{\\rm sd,gw}}\\right]^{\\frac{4}{12m-k-4}}.\n\\label{Luminosity_GW2}\n\\end{eqnarray}\nIn this case, the evolution history of $L_{\\rm dip}$ depends not only on the evolution history of the moment of inertia $I$, but also on the evolution history of $R$. The magnitude of the decaying power-law index, $|4\/(12m-k-4)|$, could be around or larger than 3. Comparing Equation \\ref{L obs} with Equation \\ref{Luminosity_GW2}, one can derive the braking index in this scenario as\n\\begin{eqnarray}\nn=5+k-12m,\n\\end{eqnarray}\nwhich could be much smaller than 5 (even smaller than 3).\n\n\n\\section{Conclusion and Discussion}\n\nBy solving the field equations, we find that when a NS's rotational speed approaches the breakup limit, its radius and moment of inertia undergo a significant evolution as the NS spins down. In this case, the deceleration history of the NS becomes more complicated for a given initial dipole magnetic field and ellipticity. Our main results can be summarized as follows:\n\n\\begin{itemize}[leftmargin=*]\n\\item When realistic values of $R$ and $I$ for different EoSs and NS baryonic masses are considered, the magnetic dipole radiation luminosity could vary within one to two orders of magnitude.\n\n\\item With the consideration of $R$ and $I$ evolution effects for a rapidly spinning NS, its magnetic dipole radiation light curve would present new segments. For instance,\nwhen GW radiation dominates the spin-down power and $\\epsilon\\propto B_{p}^{2}$, the history of the magnetic dipole radiation luminosity would consist of four segments, i.e., $L_{\\rm dip}\\propto t^{0}$ followed by $L_{\\rm dip}\\propto t^{-\\gamma}$, and then followed by $L_{\\rm dip}\\propto t^{-1}$ and $L_{\\rm dip}\\propto t^{-2}$. The new segment $L_{\\rm dip}\\propto t^{-\\gamma}$ with $\\gamma$ larger than 3 is due to the evolution of $R$ and $I$. In this case, if one applies $L_{\\rm dip}=L_{\\rm sd}(1+t\/T_{\\rm sd})^{4\/(1-n)}$ to fit the dipole radiation luminosity, the obtained braking index could be much smaller than 3 \\footnote{Other reasons why the braking index could be smaller than 3 have been discussed in \\cite{Lasky17}.}.\n\n\\item In the case where the EM radiation power is comparable to the GW radiation power, the dipole radiation light curve would become even more complicated. 
Especially when the initial EM radiation power is slightly larger, in the $\\epsilon\\propto B_{p}^{2}$ scenario, there would be a transition from EM loss dominating to GW loss dominating, and then back again as the NS spins down. In this case, the history of the dipole radiation could consist of five segments, i.e., $L_{\\rm dip}\\propto t^{0}$ followed by $L_{\\rm dip}\\propto t^{-2}$, then followed by $L_{\\rm dip}\\propto t^{-\\gamma}$ (with $\\gamma$ larger than 3), then followed by $L_{\\rm dip}\\propto t^{-1}$, and finally followed by $L_{\\rm dip}\\propto t^{-2}$. For those complicated situations, Equation \\ref{L obs} may not be a good formula to fit the dipole light curve. But if one still applies Equation \\ref{L obs} to find the braking index, the result could vary from $n<3$ to $n>5$, depending on how many segments are covered by the observations.\n\\end{itemize}\n\nAccording to our results, 1) we suggest that when using EM observations such as the X-ray plateau data of GRBs to diagnose the properties of the nascent neutron stars, NS EoS and mass information should be invoked as simultaneously constrained parameters; 2) if the spin-down process of the nascent NS is dominated by GW radiation and the ellipticity satisfies $\\epsilon\\propto B_{p}^{2}$, the sharp decay following the X-ray plateau could be caused by the $R$ and $I$ evolution effect rather than by the NS collapsing. If this is true, it may explain why some GRBs that present the internal plateau feature still show late-time central engine activity, manifested through flares and second shallow plateaus. Future joint EM and GW detections could help distinguish these two scenarios.\n\nFinally, we would like to point out several caveats of our results. First, the physical conditions for a nascent NS could be very complicated, and some conditions may significantly alter the radiation light curve, so that the R\/I evolution effect we discussed here could be reduced or even completely suppressed. For instance, it has been proposed that the evolution of the inclination angle between the rotation and magnetic axes of the NS could markedly modify the X-ray emission \\citep{Cikintoglu20}. Moreover, the nascent NS is likely to undergo free precession in the early stages of its lifetime when the rotation and magnetic axes of the system are not orthogonal to each other, which would lead to fluctuations in the X-ray light curve \\citep{Suvorov20,Suvorov21}. On the other hand, the X-ray plateau emission luminosity could emerge from a plerion-like model of the nascent NS,\nin which the magnetized, relativistic wind from a millisecond magnetar injects shock-accelerated electrons into a cavity confined by\nthe GRB blastwave. The plerion model introduces an anticorrelation between the luminosity and duration of the plateau, and also shows a sudden\ndrop in the X-ray emission when the central magnetar collapses into a black hole \\citep{Strang19}. \n\n\nSecondly, even if the NS EoS and mass information were invoked as we suggest, one may still face many difficulties when using the X-ray plateau data of GRBs to diagnose the properties of the nascent NS. For instance, a precise estimate of the luminosity of X-ray plateaus is, in many cases, very difficult to obtain, because there are no redshift measurements for most GRBs, and the uncertainties introduced by the cosmological k-correction could also be significant \\citep{Bloom01}. 
On the other hand, a constant efficiency is usually assumed to convert the spin-down luminosity to the observed X-ray luminosity, which might be incorrect. It has been proposed that the efficiency might be strongly dependent on the energy injection luminosity, which leads to a larger braking index from afterglow fitting compared to the case with constant efficiency \\citep{Xiao19}. Furthermore, if one considers both energy injection and radiative energy loss of the blastwave, the X-ray light curve would be altered without changing the braking index, and this model could also fit the data well \\citep{Dallosso11,Sarin20b}. \n\n\nFinally, it is worth noting that besides the magnetar scenario, many alternative models have been proposed to interpret the X-ray plateau emission, such as the structured jet model \\citep{Beniamini20}, the high-latitude emission model \\citep{Oganesyan20}, and the late-time energy injection model \\citep{Matsumoto20}, etc. As suggested by \\cite{Sarin19}, before attempting to derive NS parameters, one should first ensure that the magnetar scenario is the preferred hypothesis. \n\n\n\n\n\n\n\n\n\\acknowledgments\nWe thank the anonymous referee for the helpful comments that have helped us to improve the presentation of the paper. LL, HG and SZL acknowledge the National Natural Science Foundation of China under Grants No. 11722324, 11690024, 11633001, the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDB23040100 and the Fundamental Research Funds for the Central Universities.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSelf-supervised methods have achieved success in a wide range of NLP tasks, and automatic summarization is no exception \\citep{liu-lapata-2019-text, lewis2019bart, zhang2019pegasus, shi-etal-2019-leafnats, fabbri-etal-2019-multi}. These state-of-the-art abstractive summarization models typically finetune pre-trained transformer-based models on a summarization dataset \\cite{vaswani2017attention}. \nDespite significant improvements over previous methods in terms of automatic evaluation scores such as ROUGE \\citep{lin-2004-rouge}, ensuring factual consistency of the generated summary with respect to the source remains challenging. For example, \\citet{cao2018faithful} claims that about 30\\% of summaries generated by abstractive models contain factual errors, which greatly limits their practicality. \n\nDifferent approaches have been proposed to detect or ensure the factual consistency of generated summaries, including using fact extraction or applying attention on fact triples \\cite{cao2018faithful, zhang2019optimizing, goodrich2019assessing}, applying natural language inference or question answering models for consistency checking \\cite{falke2019ranking, li2018ensure, wang2020asking} and training the model on artificial datasets \\cite{kryscinski2019evaluating}. Most of these approaches either require a high-quality fact extraction model or they only focus on factual consistency \\emph{evaluation}. Improving factuality \\emph{correction} by editing inconsistent parts in generated summaries is a direction that has not been explored much.\n\n\\begin{table}[t]\n\\small\n\\renewcommand{\\arraystretch}{1.3}\n\\setlength\\tabcolsep{2.5pt}\n\\centering\n\\begin{tabular}{|p{7.3cm}|}\n \\hline\n {\\bf Source}: \\\\\n Jerusalem (CNN)The flame of remembrance burns in Jerusalem, and a song of memory haunts Valerie Braham as it never has before. 
This year, Israel's Memorial Day commemoration is for bereaved family members such as Braham. ``Now I truly understand everyone who has lost a loved one,'' Braham said. (...) \\\\\n \\hline\n \\emph{Original}: {\\bf France's } memorial day commemoration is for bereaved family members as braham. \\emph{(inconsistent)} \\\\\n \\emph{After Correction}: {\\bf Israel's } memorial day commemoration is for bereaved family members as braham. \\emph{(consistent)}\n \\\\\n \\hline\n\\end{tabular}\n\\caption{\\label{table:correction_example} An example of an inconsistent system-generated summary and the output summary from our correction model. In this case, ``France'' is successfully corrected as ``Israel''.}\n\\end{table}\n\nIn this work, we propose a model to improve the factual consistency of system summaries with \\textit{post-editing correction} (Table \\ref{table:correction_example}). Our model takes a draft summary that is generated by an abstractive summarization model and produces a corrected final summary, conditioned on the source document. In addition, our trained corrector can be used as an evaluation model for factual consistency of abstractive summaries, with the assumption that a generated summary is inconsistent if our corrector decides to make edits. To teach the model to correct errors, we train it with artificial data that has factual errors introduced using heuristics proposed by \\citet{kryscinski2019evaluating}.\n\n\n\nThe empirical results based on automatic and human evaluations indicate that our model not only corrects factual errors in summaries, it is also a reliable factuality evaluation model.\nIn a downstream setting where we apply the corrector to the output of an abstractive summarizer, we find that our corrector is able to accurately correct errors in the generated summaries. However, the overall recall on correcting factual errors in real system summaries remains low, suggesting the errors introduced by heuristics have a different distribution than errors made by abstractive summarization systems.\n\n\n\n\n\n\\section{Background and Related Work}\nPrevious work on factual consistency in abstractive summarization can be divided into two categories: abstractive summarization models tailored towards factual consistency \\citep{cao2018faithful,zhang2019optimizing,li2018ensure}, and evaluation models for factual consistency in abstractive summarization \\citep{goodrich2019assessing,falke2019ranking,kryscinski2019evaluating,wang2020asking}. \n\n\\citet{cao2018faithful} proposed a dual attention module in an abstractive summarizer that attends to both the source document and to relation triples extracted from the document. \\citet{zhang2019optimizing} propose to improve their abstractive summarization model by optimizing fact scores defined in radiology reports with reinforcement learning methods. \\citet{li2018ensure} jointly train their model's encoder on summarization and NLI tasks. \\citet{guo-etal-2018-soft} train an abstractive summarization system with the auxiliary tasks of question and entailment generation and show that their generated summaries are less likely to produce extraneous facts. \\citet{kumar-cheung-2019-understanding} show that neural abstractive summarizers often assign higher posterior likelihood to perturbed contrastive summaries that are inconsistent with the source text than to human-written gold-standard ones. Concurrently to our work, \\citet{zhu2020boosting} recently proposed a fact-aware summarization model that uses a knowledge graph. 
They use a pre-trained corrector module to modify generated summaries. Concurrent to our work, \\citet{dong-2020-multifact} proposes factual correction models that leverage knowledge learned from question answering models via span selection. Their models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities.\n\n\n\nIn terms of evaluating abstractive summarization models for factual consistency,\n\\citet{goodrich2019assessing} proposed a metric to check factual consistency by checking the overlapping fact triples between a source document and generated text on Wikidata. \\citet{falke2019ranking} shows that factual error detection is a difficult task on its own and that adapting entailment models for factual error detection does not offer the desired performance. \\citet{kryscinski2019evaluating} finetune a BERT model on heuristically-created data with six types of rule-based text transformations for factual consistency checking. \\citet{wang2020asking} propose a framework for measuring inconsistencies in abstractive summarization by answering questions based on both generated summaries and documents. \n\n\n\n\n\n\n\n\\section{Proposed Approach}\nIn this section, we describe our procedure for introducing artificial errors into the training data and propose our end-to-end error corrector model. \n\\subsection{Dataset of Artificial Corruptions}\n\\label{sec:dataset}\n\nInspired by a recent study of error types made by state-of-the-art summarization systems, we artificially created a weakly-supervised training dataset based on the text transformations proposed by \\citet{kryscinski2019evaluating}. \n\nGiven a source text $d$ and the reference summary $s$, we corrupt the reference summary into an inconsistent summary $s'$ with a randomly sampled corruption rule (described below) with probability $\\alpha$; otherwise, we keep $s'=s$ with probability $1-\\alpha$. We set $\\alpha=0.3$ to match the factuality error rate in real abstractive summaries based on a recent study \\citep{cao2018faithful}. The training data consists of triplets $(s', s, d)$. \n\n\\definecolor{MyGreen}{rgb}{0.76,0.88,0.71}\n\\definecolor{MyBlue}{rgb}{0.18,0.33,0.80}\n\\definecolor{MyOrange}{rgb}{0.97,0.80,0.68}\n\n\\begin{table}[t]\n\\small\n\\renewcommand{\\arraystretch}{1.2}\n\\setlength\\tabcolsep{2.5pt}\n\\centering\n\\begin{tabular}{|p{7.3cm}|}\n \\hline\n {\\bf Source}: \\\\\n (CNN) Gastrointestinal illness has gripped 100 people on the cruise ship Celebrity Infinity, according to a report from the Centers for Disease Control. Of the ship's 2,117 passengers, 95 have suffered from vomiting, diarrhea and other symptoms, the CDC said. (...) \\\\\n \\hline\n {\\bf Reference Summary}: \\\\\n \\textcolor{MyBlue}{\\bf 100} passengers and crew members have been sickened on Celebrity Infinity. The ship, which is based on the West Coast, left San Diego in late March . \n \\\\\n {\\bf Corrupted Summary}: \\\\\n \\textcolor{MyBlue}{\\bf 95} passengers and crew members have been sickened on Celebrity Infinity. The ship, which is based on the West Coast, left San Diego in late March . \n \\\\\n \\hline\n\\end{tabular}\n\\caption{\\label{table:train_example} An example of a \\textit{Number} corruption in the training set. 
The incorrect number ``95'' also appears in the source document.}\n\\end{table}\n\n\\paragraph{Error Corruptions} \\label{sub-section:error_corruption}\nFour types of errors are used to create the inconsistent summaries: \\textit{Entity}, \\textit{Number}, \\textit{Date}, and \\textit{Pronoun} errors. They are the most common types of errors in abstractive summaries based on our manual inspection of 100 abstractive system-generated summaries that are sampled from the dataset of \\citet{kryscinski2019evaluating} (henceforth, the \\textsc{K2019} dataset). Unlike \\citet{kryscinski2019evaluating}, we corrupt the reference summary rather than sentences sampled from the source document. \n\nIn all four types of error construction, we utilize a swapping strategy to introduce errors. For \\textit{Entity}, \\textit{Number}, and \\textit{Date} swapping, one entity in the reference summary is selected and swapped with another random entity of the same type\\footnote{All the entities are extracted using a pre-trained NER model in spaCy \\url{https:\/\/spacy.io\/}. } in the source document. \nFor \\textit{Pronoun} swapping, one pronoun is extracted and swapped with another one of a matching syntactic case. Table~\\ref{table:train_example} shows one example of a corruption.\n\n\n\n\\subsection{Training Objective and Models}\n\\label{sec:objective}\nWith the artificial training data consisting of triplets $(s', s, d)$, the goal of the corrector is to generate the correct summary $s$ based on the inconsistent summary $s'$ and the source $d$. This can be expressed as the problem of maximizing the likelihood $P(s|s',d)$ with an encoder-decoder model. We concatenate $s'$ and $d$ as input to the encoder ($s'$ and $d$ are separated by a separation token) and train the decoder to generate $s$. \n\nWe use BART \\cite{lewis2019bart} as the basis of our summary corrector because of its demonstrated level of performance on conditional text generation tasks. BART is a sequence-to-sequence auto-regressive transformer model that is pre-trained as a denoising auto-encoder. One appealing aspect of BART is that it is pre-trained on a denoising task. Specifically, given an input sentence that is corrupted by text infilling, token deletion, and other text transformations, BART is trained to output the original sentence. This pre-training task is similar to our summary correction task, in which we can regard the corrupted or generated summary as the noisy input; in this case, the noise is the inconsistent content in the summary.\n\n\\section{Experiments}\n\\subsection{Evaluation Tasks and Measures}\n\\label{sec:eval_metrics}\nWe evaluate our model on two tasks: factual consistency checking and error correction.\n\n\\paragraph{Factual consistency checking} For this task, the model needs to classify each original input summary as \\emph{consistent} or \\emph{inconsistent} with respect to the source text. It is thus a binary classification task for which we report \\textbf{accuracy}, as well as \\textbf{precision}, \\textbf{recall}, and \\textbf{F1}.\n\nWe interpret the output of our corrector model as a classification decision as follows. If the corrector makes any change to the original input summary, we consider this to be a prediction of the \\emph{inconsistent} class. 
Otherwise, the corrector makes no change and we consider this a prediction of the \\emph{consistent} class.\n\n\n\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{center}\n\\begin{tabular}{c|c|ccc}\n\\toprule\n\\multirow{2}{*}{~} & \\bf \\multirow{2}{*}{\\makecell{Overall \\\\ Acc.}} & \\multicolumn{3}{c}{\\bf Consistency checking} \\\\ \n~ & \\bf ~ & Prec. & Recall & F1 \\\\ \n\\midrule\nCorrupted & \\multirow{2}{*}{84.38\\%} & 0.79 & 0.95 & 0.86 \\\\\n\nClean & ~ & 0.93 & 0.74 & 0.82 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{\\label{table:analysis1} Performance of our model on consistency checking on our test set of artificial corruptions. Corrupted and clean refer to the subsets of the test set that were artificially perturbed or not perturbed, respectively. \n}\n\\end{table}\n\n\\paragraph{Error correction} For this task, the model must correct inconsistencies in the original summary (if any) with respect to the source text.\n\nWe define \\textbf{correction accuracy} as the proportion of original summaries that are correctly changed by our corrector. On our artificial test set, an input summary is considered successfully corrected if the corrected summary matches the reference summary exactly.\nFor the \\textsc{K2019} dataset, no reference corrections are available. We instead conducted a human evaluation to check the consistency of the corrected output. We read the original and corrected summaries as well as the source document to determine whether a summary is successfully corrected by our model.\n\n\\subsection{Datasets}\nWe use two datasets for our experiments. The first is the dataset of artificial corruptions described in Section~\\ref{sec:dataset}, which we create by taking samples from the CNN\/DailyMail dataset. There are in total 287,227 samples in the training set, and we corrupted 30\\% of them (85,583). This results in 16,858\/35,113\/13,408\/20,204 date\/entity\/number\/pronoun corrupted samples, respectively. We refer to the other 201,644 training samples as clean samples. We also create artificial validation and test sets for model selection and evaluation. In the test set, there are 5,780 corrupted samples and 5,710 clean samples.\n\nThe second dataset we use is the \\textsc{K2019} test set of \\citet{kryscinski2019evaluating}.\nThis dataset contains 503 summaries generated by different recent neural abstractive summarizers, which have been manually labeled for whether they contain an inconsistency.\n\nWe evaluate our model on both datasets. We did not use baselines for the artificial test set since it is simply used as a check to demonstrate our model's performance in the artificial setting. The more meaningful evaluations are on K2019 consistency checking and error correction.\n\n\\subsection{Corrector Training Details}\nWe use the BART implementation from fairseq as the basis of our corrector.\\footnote{\\url{https:\/\/github.com\/pytorch\/fairseq\/blob\/master\/examples\/bart}}\nThe pre-trained BART model is fine-tuned on our training dataset for 10 epochs as described in Section \\ref{sec:objective}. The learning rate is set to 3e-5. All our experiments are run on 4 NVIDIA Tesla V100 GPUs. 
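For concreteness, the construction of the encoder input described in Section \\ref{sec:objective} can be sketched as follows (illustrative Python; the separator string and the word-level truncation are simplifying assumptions, since the actual implementation relies on fairseq's subword tokenization):\n\\begin{verbatim}\ndef make_training_pair(draft_summary, reference_summary, document,\n                       sep=\" <\/s> \", max_src_words=1024):\n    # Encoder input: draft summary, separator token, source document\n    words = (draft_summary + sep + document).split()[:max_src_words]\n    source = \" \".join(words)\n    # Decoder target: the uncorrupted reference summary s\n    return source, reference_summary\n\nsrc, tgt = make_training_pair(\n    \"95 passengers and crew members have been sickened ...\",\n    \"100 passengers and crew members have been sickened ...\",\n    \"(CNN) Gastrointestinal illness has gripped 100 people ...\")\n\\end{verbatim}\nThe model is then fine-tuned on such pairs to maximize $\\log P(s|s',d)$ with the standard sequence-to-sequence cross-entropy loss.\n\n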
The training process takes about 12 hours.\n\n\n\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{center}\n\\begin{tabular}{c|c|c}\n\\toprule\nModel & \\thead{\\bf Accuracy \\\\ (\\textit{weighted})} & \\bf F1-score \\\\ \n\\midrule\nBERT+MNLI & 51.39\\% & 0.86 \\\\ \nBERT+FEVER & 52.07\\% & 0.88 \\\\ \nFactCC & \\bf 72.65\\% & \\bf 0.86 \\\\ \nFactCCX & 72.88\\% & 0.87 \\\\ \n\\midrule\nOur model & 66.46\\% &0.83 \\\\ \n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{\\label{table:analysis2} Factual consistency checking performance on the \\textsc{K2019} test set. The F1-score reported from our model is the micro-average F1-score.}\n\\end{table}\n\n\\begin{table*}[t]\n\\small\n\\renewcommand{\\arraystretch}{1.2}\n\\setlength{\\belowcaptionskip}{-0.40cm}\n\\centering\n\\begin{tabular}{p{15.5cm}}\n \\toprule\n \\emph{Article}: Jerusalem (CNN)The flame of remembrance burns in Jerusalem, and a song of memory haunts Valerie Braham as it never has before. (...) ``Now I truly understand everyone who has lost a loved one,'' Braham said. \\textcolor{MyBlue}{Her husband, Philippe Braham, was one of 17 people killed in January's terror attacks in Paris.} He was in a kosher supermarket when a gunman stormed in, killing four people, all of them Jewish. (...) \\\\ \n \\emph{Original}: {\\bf Valerie} braham was one of 17 people killed in january's terror attacks in paris. \\emph{(inconsistent)} \\\\ \n \\emph{Corrected}: {\\bf Philippe} braham was one of 17 people killed in january's terror attacks in paris. \\emph{(consistent)} \\\\ \n \\hline\n \\emph{Article}: (...) Thursday's attack by al-Shabaab militants killed 147 people, including 142 students, three security officers and two university security personnel. \\textcolor{MyBlue}{The attack left 104 people injured, including 19 who are in critical condition, Nkaissery said.} (...) \\\\ \n \\emph{Original}: {\\bf 147} people, including 142 students, are in critical condition. \\emph{(inconsistent)} \\\\ \n \\emph{Corrected}: {\\bf 19} people, including 142 students, are in critical condition. \\emph{(inconsistent)} \\\\ \n \\hline\n \\emph{Article}: (CNN) Officer \\textcolor{MyBlue}{Michael Slager}'s five-year career with the North Charleston Police Department in South Carolina ended after he resorted to deadly force following a routine traffic stop. (...) His back is to Slager, who, from a few yards away, raises his gun and fires. \\textcolor{MyBlue}{Slager is now charged with murder.} The FBI is involved in the investigation of the slaying of the father of four. (...) \\\\ \n \\emph{Original}: {\\bf Slager} is now charged with murder. \\emph{(consistent)} \\\\ \n \\emph{Corrected}: {\\bf Michael Slager} is now charged with murder. \\emph{(consistent)} \\\\\n \\hline\n \\emph{Article}: \\textcolor{MyBlue}{(CNN)The announcement this year of a new, original Dr. Seuss book sent a wave of nostalgic giddiness across Twitter,} and months before publication, the number of pre-orders for ``What Pet Should I Get?'' continues to climb. (...) \\textcolor{MyBlue}{It features the spirited siblings from the beloved classic ``One Fish Two Fish Red Fish Blue Fish'' and is believed to have been written between 1958 and 1962.} (...) \\\\\n \\emph{Original}: {\\bf Seuss} book sent a wave of nostalgic giddiness across twitter. \\emph{(consistent)} \\\\ \n \\emph{Corrected}: {\\bf ``One Fish Two Fish Red Fish Blue Fish''} book sent a wave of nostalgic giddiness across twitter. 
\\emph{(inconsistent)} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{\\label{table:samples} Examples of applying our corrector to the output of a summarizer. In the first example, the original inconsistent summary is successfully corrected. In the second example, the summary remains false after correction. The third and fourth examples show changes made to consistent summaries. Colored content in the articles provides support for the summaries.\n}\n\\end{table*}\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.15}\n\\setlength{\\belowcaptionskip}{-0.46cm}\n\\begin{center}\n\\begin{tabular}{c|c|cc}\n\\toprule\n\\multirow{2}{*}{\\bf Input} & \\bf \\multirow{2}{*}{\\# Samples} & \\multicolumn{2}{c}{\\bf After Correction} \\\\ \n~ & \\bf ~ & \\emph{cons.} & \\emph{incons.} \\\\ \n\\midrule\n\\emph{consistent} & 441 & 436 & 5 \\\\\n\\midrule\n\\emph{inconsistent} & 62 & 11 & 51 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{\\label{tab:human-eval} Human evaluation results on error correction on the \\textsc{K2019} dataset.}\n\\end{table}\n\n\\section{Results}\n\\label{sec:results}\n\\paragraph{Artificial corruptions}\nTable~\\ref{table:analysis1} shows the consistency checking performance of our corrector model on our artificial test set. The high classification accuracy and F1 scores indicate that our model is able to identify these artificially injected errors.\n\nFor error correction, among the 5,780 corrupted summaries in the test set, 62.13\\% are corrected by the model to exactly match the reference summary. For the 5,710 clean summaries, the model made changes to 26.27\\% of them, which results in 73.73\\% correction accuracy on clean summaries. These results show that the model is able to correct the majority of the test samples even under our strict evaluation measure.\n\n\n\\paragraph{\\textsc{K2019}}\nTable~\\ref{table:analysis2} shows the consistency checking results on the \\textsc{K2019} test set. Our model is better than the BERT model and slightly worse than the FactCC model.\n\n\nAs for correction performance, Table~\\ref{tab:human-eval} shows the results of our human evaluation. Among the 62 inconsistent summaries in the test set, the corrector model made changes to 19 summaries, of which 11 were successfully corrected and the other 8 remained inconsistent.\nFor the remaining 441 consistent summaries in the test set, changes are made to 39 summaries and the model changed the meaning of 5 samples. In conclusion, our model successfully corrects an inconsistent summary with 17.74\\% probability, and corrupts a consistent one with only 1.13\\% probability. Compared with the correction rate of 62.13\\% on the artificial test set, the much lower correction rate on the real test set suggests that there is still a gap between the two settings. The error types in the training set are not able to represent the diverse errors made by summarization systems.\n\n\\paragraph{Output Analysis} Table~\\ref{table:samples} shows several input and output summaries of our corrector model together with the source document fragments. In the second example, the model correctly replaced 147 with 19, but was not able to correctly remove ``including 142 students'', which is a larger modification to the original summary. More examples can be found in the Appendix.\n\n\\section{Conclusions}\n\nIn this paper, we proposed a novel approach to correct inconsistent content in summaries generated by abstractive summarization models. 
We train an end-to-end correction model with artificial examples created by corrupting reference summaries. Our model achieved promising performance on our artificial test set and outperformed previous models on the manually annotated test set by wide margins. Our human evaluation indicates that our model is able to correct some factually inconsistent summaries generated by abstractive summarization model. However, low recall on the inconsistent summaries and false positive samples remain as challenges.\n\n\\section*{Acknowledgments}\nThis research was supported by the Canada CIFAR AI Chair program, the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Qu\\'{e}bec -- Nature et technologies (FRQNT). \nWe would also like to thank Compute Canada for providing us computing resources.\n\n\\bibliographystyle{acl_natbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\\emph{Caching is a mature idea from the domains of web caching, content delivery networks, and memory optimization in operating systems. Why is caching still an active topic of discussion?} \nIn the 90s, the traffic in the web exploded, leading its inventor Sir Tim Berners-Lee to declare the network congestion as one of the main challenges for the Internet of the future. The congestion was caused by the \\emph{dotcom boom} and specifically due to the client-server model of connectivity, whereby a webpage was downloaded from the same network server by every Internet user in the world. The challenge was ultimately resolved by the invention of Content Delivery Networks (CDNs), and the exploitation of web caching. The latter replicates popular content in many geographical areas and saves bandwidth by avoiding unnecessary multihop retransmissions. As a byproduct, it also decreases access time (latency) by decreasing the distance between the two communicating entities.\n\nToday, 30 years later, we are reviving the same challenge in the wireless domain. The latest report of Cisco \\cite{Cisco2015} predicts a massive increase of Internet devices connected through the wireless access, and warns of \na steep increase in mobile traffic which is expected to reach by 2018 roughly 60\\% of total network traffic, the majority of which will be video. The wireless system designers strive to fortify 5G wireless networks with higher access rates on the one hand and with increased densification of network infrastructure on the other. Over the last three decades, these two approaches are responsible for the majority of network capacity upgrade per unit area, successfully absorbing the wireless traffic growth. However, with the explosion of access rates and number of base stations, the backhaul of wireless networks will also become congested \\cite{Bastug2014LivingOnTheEdge, Wang2014Cache} which motivates further the use of caching: \n store popular reusable information at the base stations to reduce the load at the backhaul.\nFurthermore a recent technique \\cite{MaddahAli2014Fundamental} combined caching with coding and revolutionized how goodput scales in bandwidth-limited networks. \nTherefore, caching has the potential to become the third key technology for wireless systems sustainability. \n\nThe research community is converging to an enabling architecture as the one described in Figure~\\ref{fig:scenario}. In the network of the future, memory units can be installed in gateway routers between the wireless network and the Internet (e.g. 
in 4G this is called S-GW), in base stations of different sizes (small or regular-size cells), and in end-user devices (e.g. mobile phones, laptops, routers, etc.). \nIn this article, we discuss important topics such as (a) the characteristics of cacheable content and how this affects caching technologies in wireless networks, (b) where to install memory, and (c) the differences between wireless caching and legacy caching techniques. Lastly, we focus on business barriers that must be overcome for the successful adoption of wireless caching by the industry.\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{scenario.pdf}\n\t\\caption{An illustration of caching in future wireless networks. Contents available in the origin server are cached at the base stations and user devices, for offloading the backhaul and the wireless links.}\n\t\\label{fig:scenario}\n\\end{figure*} \n\n\\section{Dealing with Massive Content}\nNot all network traffic is cacheable. Interactive applications, gaming, voice calls, and remote control signals are examples of information objects that are not reusable and hence cannot be cached. Nevertheless, most network traffic today (estimated 60\\% \\cite{Cisco2015}) is deemed cacheable. We refer to cacheable information objects as \\emph{content} in this article. Since the performance of caching is inherently connected to the specifics of contents, this section is dedicated to the understanding of these specifics.\nIn particular, we focus on the following misconceptions: (i) the static IRM model is sufficient for experimentation, (ii) user information cannot be used for popularity estimation due to the vast number of users, and (iii) security issues preclude caching at the edge.\n\\subsection{Insufficiency of Static Popularity Models}\nThe standard approach to designing and analyzing caching systems involves a model for generating content requests to replace the actual request traces--this approach is often several orders of magnitude faster. \nThe de facto model for performance analysis of web caching is the Independent Reference Model (IRM): content $n$ is requested according to an independent Poisson process with rate $\\lambda p_n$, where $p_n$ refers to the content popularity, modelled by a power law, i.e., $p_n\\propto n^{-\\alpha}, ~\\alpha>0$. This well-established model thrives due to its simplicity; it only has two parameters, namely $\\lambda$ to control the rate of requests, and $\\alpha$ to control the skewness of the popularity. \n\nNumerous studies fit IRM to real traffic with satisfactory results \\cite{Zipf}, so why do we need to change it? \nThe IRM assumes that the content popularity is static, which is of course not true. Trending tweets, breaking news, and the next episode of Game of Thrones are examples of ephemeral content with rapidly changing popularity; they appear, they become increasingly popular, and they gradually become unpopular again. \n\nIn fact, \\cite{leonardi} considers large YouTube and Video on Demand (VoD) datasets and discovers that time-varying models are more accurate than IRM with respect to caching performance analysis--Figure~\\ref{fig:snm} reproduces the comparison when fitting YouTube data and shows the superiority of modeling the popularity as time-varying. 
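As a reference point for such comparisons, the static IRM baseline is straightforward to simulate: the following minimal Python sketch (with illustrative parameters) draws i.i.d. requests from a fixed power-law popularity and measures the hit probability of an LRU cache:\n\\begin{verbatim}\nimport bisect\nimport itertools\nimport random\n\ndef zipf_popularity(N, alpha):\n    w = [n ** -alpha for n in range(1, N + 1)]\n    total = sum(w)\n    return [x \/ total for x in w]\n\ndef lru_hit_rate(num_requests, N, M, alpha, seed=0):\n    rng = random.Random(seed)\n    cum = list(itertools.accumulate(zipf_popularity(N, alpha)))\n    cache, hits = [], 0   # front = least recently used\n    for _ in range(num_requests):\n        n = bisect.bisect(cum, rng.random())  # IRM: i.i.d. Zipf draw\n        if n in cache:\n            hits += 1\n            cache.remove(n)\n        elif len(cache) >= M:\n            cache.pop(0)  # evict the least recently used content\n        cache.append(n)\n    return hits \/ num_requests\n\n# e.g. a catalog of 10,000 contents, a cache of 100, alpha = 0.8\nprint(lru_hit_rate(100000, N=10000, M=100, alpha=0.8))\n\\end{verbatim}\nReplacing the static draws with requests generated from per-content pulses of finite lifespan yields the corresponding time-varying traces.\n\n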
In the inhomogeneous Poisson model proposed in \\cite{leonardi}, each content is associated with a ``pulse'' whose duration reflects the content lifespan and whose height denotes its instantaneous popularity--the model is called the \\emph{Shot Noise Model} (SNM), mirroring the Poisson noise from electronics. \n While the shape of the pulse is not important, the study observes strong correlations between popularity and duration; apparently, popular contents prosper longer. Finally, a class-based model \\cite{leonardi} can conveniently capture spatio-temporal correlations while allowing analytical tractability.\n Mobile users are especially keen on downloading ephemeral content, so it is expected that in the case of wireless content the improvement in modeling accuracy will be even greater. \n \n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.65\\linewidth]{snm-reduced.pdf}\n\t\\caption{(from \\cite{leonardi}) Hit probability comparison between best fit of IRM, SNM and the YouTube traces.}\n\t\\label{fig:snm}\n\\end{figure} \n\nTo optimize a cache, one needs to track the changes in content popularity. \nFor example, classical web caching systems adopt dynamic eviction policies like Least-Recently-Used (LRU) in order to combat time-varying content popularity in a heuristic manner.\nHowever, the joint consideration of popularity variations with wireless systems reveals a new challenge that renders LRU policies inefficient. \nWhile a typical CDN cache normally receives 50 requests\/content\/day, the corresponding figure for a base station cache may be as low as 0.1 requests\/content\/day. With such a small number of requests, fast variations of popularity become very difficult to track and classical LRU schemes fail. \n \nThis development motivates novel caching techniques that employ learning methodologies to accurately track the evolution of content popularity over time. A recent study \\cite{mathieu} analyzes the SNM and gives the optimal\\footnote{The optimality is established for the restricted case of homogeneous rectangularly-shaped pulses and asymptotically large content catalogs.} policy for joint caching and popularity estimation. Additionally, \\cite{Roberts2} proposes as an alternative solution the use of LRU with prefilters. \n\\subsection{How to Track Popularity Variations}\nSince content popularity is time-varying, the caching operations can only be optimized if a fresh view of the system is maintained. This requires massive data collection and processing, and statistical inference from this data, which by itself is a complex task to handle. Additionally, user privacy is a concern that can limit the potential of collecting such information. So, can we gather all this information in a wireless network in a timely manner? \n\nConsider $K$ users subscribed to the telecom operator and $L$ caches placed in the network (e.g., in base stations or other locations). Each of these caches has the capability of storing $M$ contents out of $N$ contents in the catalog. Then, let the matrix ${\\mathbf P} \\in \\mathbb{R}^{K \\times N}$ model the content access statistics, where rows are users and columns are contents. In other words, each entry (or rating) in this matrix quantifies how popular content $n$ is to user $k$. The popularity matrix ${\\mathbf P}$ is large, sparse, and only partially known in practice, and has to be continuously estimated in order to enable the correct cache decisions at the base stations. 
\nAt first, this seems to be an impossible feat.\n\nTo deal with the complexity of handling matrix ${\\mathbf P}$, it is possible to use machine learning tools to estimate the unknown entries. Such an estimation is particularly efficient when the matrix has low spectral dimension, and the system can be described by a small number of ``features''; fortunately, popularity correlation induces such behaviour in matrices obtained from real data. For instance, low-rank matrix factorization methods, i.e., ${\\mathbf P} \\approx {\\bf K}^T {\\bf N}$ where ${\\bf K} \\in \\mathbb{R}^{r \\times K}$ and ${\\bf N} \\in \\mathbb{R}^{r \\times N}$ are factor matrices, can be employed to construct the $r$-rank version of the matrix, using the fact that users' interests are correlated and predictable when $r$ is small. This additionally allows storing the collected statistics in a more compact way.\nAs a result, a big data platform installed in the operator network can provide efficient collection and processing of user access patterns from several locations, as evidenced in \\cite{Bastug2015BigData}. \nFurther development of novel machine learning tools, such as clustering techniques, is needed to improve the estimation of the time-evolving content popularity matrix, i.e., ${\\mathbf P}_l(t)$ for base station $l$, which may differ from base station to base station. \n\nIt is worth noting that a caching system has requirements similar to those of a recommendation system. For example, the well-known Netflix movie recommendation system exploits information about a user's past activity in order to predict which movie is likely to be rated highly by the user. Similarly, a caching system exploits the request sequence to predict which contents are popular enough to be cached. In this context, user privacy regulations may affect the collection of these valuable data.\n \nA key topic of research in this direction is privacy-preserving mechanisms that can enable sufficient sampling of the time-evolving and location-dependent popularity matrix ${\\mathbf P}_l(t)$ without compromising user privacy. Moreover, it is interesting to consider learning enhancement approaches that transfer information to the caching domain from other domains, as in \"transfer learning\" \\cite{Bastug2015TransferExtended}.\n\n\\subsection{Security Is A Kind of Death}\nA common anti-caching argument relates to the operation of caching in a secure environment. \nThe secure counterpart of the HTTP protocol, called HTTPS, was originally used to provide end-to-end (e2e) encryption for securing sensitive information like online banking transactions and authentication. Owing to its recent adoption by traffic giants Netflix and YouTube, the share of HTTPS traffic is growing and will soon exceed 50\\% of the total network traffic. Content encryption poses an insurmountable obstacle to in-network operations, including caching. \nSince encrypting the data makes them unique and not reusable, caching, or even statistical processing, of encrypted content is impossible. Ironically, Tennessee Williams' statement ``security is a kind of death'' seems to squarely apply to wireless caching.\n\nSecurity is definitely a precious good everyone welcomes. Although securing a video stream might seem an excessive measure, in some cases it may be well justified. 
Unfortunately, e2e encryption is clearly not in the Berners-Lee spirit, since it prevents operators from optimizing their networks and reanimates the server-client ghost of congestion, a reality that equally no one can overlook. In fact, modern CDN systems resolve this issue by having ``representatives'' of the content provider at the edge of the Internet. These representatives are trusted entities which hold the user keys and are able to decrypt the requests and perform standard caching operations. Ultimately, this methodology is neither entirely secure for the user \cite{Carnavalet2016Killed}, nor efficient for the network \cite{Maisonneuve2015Security, papagiannaki}. The need to make the system sustainable finally overrules the need for e2e encryption, which is an argument against HTTPS for video delivery. Given this situation, however, how can we realistically push caching deeper into the wireless access?

Currently, content providers install their own caching boxes in the operator network and intercept the related encrypted content requests deeper in the wireless access network. Examples include the Google Global Cache \cite{GoogleGlobalCache}, the Saguna solution \cite{Saguna}, and the CacheBOX of Appliansys \cite{CacheBox}. In this approach, the boxes are not controlled by the operator, which leads to several limitations: (a) the caching boxes cannot perform complex tasks; (b) it is difficult to apply learning techniques without context information from the operator; (c) the caching approach is similar to that of the CDNs and therefore does not exploit the performance opportunities specific to wireless caching, as we will discuss below.

New security protocols have been proposed to enable the operators to perform caching on encrypted requests \cite{papagiannaki}. This leads to an interesting research direction: combining user security and privacy with the facilitation of the network management operations that are crucial for the sustainability of future wireless systems.

\section{Towards a Unified Network Memory}
The proposition of Information Centric Networking (ICN) as a candidate for the future Internet has also raised the question of \emph{where to install network memory} \cite{Roberts}. The ICN approach proposed to equip routers with caches, and to allow content replication everywhere in the network.
A recent work \cite{Shenker} came to striking conclusions about the ICN approach: most of the caching benefits of ICN can be obtained by caching at the edges of the network using the existing CDNs, while any extra caching in the core network brings only negligible improvements at very high costs. In the wireless domain, however, the question remains relevant: does it make sense to cache even closer to the user than the CDN does?

It is commonly believed that caching is very inefficient near the user, and thus should be done at the CDN. Below we explain the main reasons for this inefficiency and argue that they can be overcome.
\subsection{Caching Deeper than CDN}
Mitigating backhaul and wireless link overload requires going beyond the CDN and caching at the base stations and the mobile users.
However, the efficient operation of such caches is very challenging.
In particular, there are two main challenges: (a) caches used in wireless networks are typically small compared to CDN caches, and (b) the popularity profile of the traffic is highly unpredictable when non-aggregated.

To understand point (a), consider that the effectiveness of a cache is measured by the \emph{hit probability}, i.e., the fraction of requests found in the cache. This can be upper bounded by the popularity sum $\sum_{n=1}^M p_n$, where $p_n$ is the ordered popularity distribution, with $p_1$ denoting the probability of requesting the most popular file.
For power-law popularity the sum can further be approximated by $(M/N)^{1-\alpha}$, where $\alpha<1$ is the power-law exponent. A very small ratio $M/N$ means that the hit probability becomes vanishingly small. For example, if we are caching Netflix (12.5PB) in a mobile phone (10GB), then $M/N\sim 10^{-6}$, $\alpha\sim 0.8$, and the hit probability is less than $10\%$ (a numerical sketch of this bound is given below). However, base stations equipped with a disk array (40TB) can be extremely effective when caching contents for a mobile VoD application. Table \ref{fig:table} provides some indicative numbers for the available memory types and the catalogue sizes of reasonable applications.

In this context there are three promising research directions: (i) choose a part of the catalog to cache while maintaining network neutrality \cite{Kocak2013network}, (ii) store only parts of each content using partial caching techniques \cite{maggi}, and (iii) install massive memory at the edge in the form of small-sized datacenters \cite{Bioglio2015Optimizing}. The third option will be realized by the \emph{fog computing} paradigm \cite{Bonomi2012Fog}.

\begin{table}
\centering
 \begin{tabular}{|lc|c|c|c|}
 \hline
 \multicolumn{2}{|l|}{ } & {\bf Netflix catalogue} ($12.5$PB)\tablefootnote{The entire catalogue was anecdotally measured to contain $3.14$PB of content in 2013, which we multiply by $4$ since the same video is available in multiple formats.} & {\bf Torrents} ($1.5$PB) & {\bf Wireless VoD catalogue} ($1$TB) \\ \hline \hline
 {\bf Disk} 		& $2$TB 	& $\sim 0.01\%$ & $\sim0.1\%$ & $100\%$ \\ \hline
 {\bf Disk Array} & $40$TB 	& $\sim 0.3\%$ & $\sim 2\%$ & $100\%$ \\ \hline
 {\bf Data Center} & $150$PB & $50\%$ & $100\%$ & $100\%$ \\
 \hline
 \end{tabular}
 \caption{Typical values of the normalized cache size $M/N$, taken from the study of \cite{Roberts2}. In practice it is anticipated that wireless traffic will be an 80--20 mix of torrent-like traffic and live VoD traffic tailored to wireless device capabilities.}
 \label{fig:table}
\end{table}

To understand the unpredictable nature of sparse requests (formulated as challenge (b) above), consider as an example the delivery of breaking e-news in a city served by a single CDN node. Most users will download the news only once. The CDN system can quickly detect the rising popularity of the news, since it receives many requests in a short time frame. From the point of view of a mobile user, however, detecting the popularity of the trending news becomes very difficult, because the news is requested only once by any given user. This example shows that the detection efficiency depends on the number of requests aggregated at the popularity learner.
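Returning to challenge (a), the following minimal Python sketch (all numbers are illustrative assumptions) evaluates the exact popularity-sum bound for a Zipf catalogue and compares it with the $(M/N)^{1-\alpha}$ rule of thumb:
\begin{verbatim}
import numpy as np

def zipf_hit_bound(N, M, alpha):
    """Upper bound on the hit probability: total popularity of the
    M most popular contents under a Zipf(alpha) law over N contents."""
    p = np.arange(1, N + 1, dtype=float) ** (-alpha)
    p /= p.sum()
    return p[:M].sum()

N, alpha = 10**6, 0.8     # illustrative catalogue size and skewness
for M in (10, 10**3, 10**5):
    exact = zipf_hit_bound(N, M, alpha)
    approx = (M / N) ** (1 - alpha)   # the (M/N)^(1-alpha) rule of thumb
    print(f"M/N={M/N:.0e}: bound={exact:.3f}, approx={approx:.3f}")
\end{verbatim}
The two values agree except for very small $M$, where the discrete head of the distribution matters.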
To illustrate the aggregation effect, Figure~\ref{fig:global} shows the optimal hit probability in a hierarchy of $L$ base stations. Learning at the global CDN cache is shown to detect variations that are $L$ times faster than those detectable at the local caches. To remedy the situation it is possible to use an architecture that combines information obtained at different aggregation layers \cite{mathieu}.
 \begin{figure*}[ht!]
	\centering
	\includegraphics[width=0.95\linewidth]{mathieu.pdf}
	\caption{Optimal hit probability comparison between observing the aggregate request process at the CDN level (global) and observing the individual request processes at each base station cache (local), when refreshing the catalogue. The hit probability performance depends on how fast the time-varying popularities can be learnt: global is faster than local.}
	\label{fig:global}
\end{figure*}
\subsection{Memory Is Cheap But Not Free}
Although the cost of a small cache is dwarfed by the base station cost, the total amount of memory installed in a mobile network can be considerable; therefore, the decision to install wireless caching requires a careful cost analysis \cite{Roberts2}. To compute the optimal size of memory to install at each location, one needs to know (a) the cost coefficients, (b) the skewness of the content popularity, and (c) the local traffic distribution in the cells. Predicting how (a)-(b) will evolve is quite challenging, but, as in \cite{Roberts2}, a survey may determine a good set of parameters at any given time.

For (c), the literature is largely based on grid models, which in the case of future wireless networks might be off by a significant factor. More accurate models have recently been introduced from the field of stochastic geometry, where the cache-enabled base stations are distributed according to a spatial point process (often chosen to be 2D Poisson), which makes the problem analytically tractable. The validity of such models compared to regular cellular models has been verified by extensive simulations.
Additional insights for the deployment of cache-enabled base stations can be obtained by analytically characterizing performance metrics, such as the outage probability and the average delivery rate, for a given set of parameters such as the number of base stations, storage size, skewness of the popularity distribution, transmit power, and target \ac{SINR} \cite{Bastug2014CacheEnabledExtended}. In addition to the initially studied single-tier networks \cite{Bastug2014CacheEnabledExtended}, more detailed modeling and analysis of heterogeneous networks \cite{Yang2015Analysis, Chen2016Cooperative}, online caching policies \cite{Zaidi2015Information}, uniform caching \cite{Blaszczyszyn2014Geographic}, power consumption \cite{Perabathini2015CachingGreen}, and Markovian mobility \cite{Poularakis2013Exploiting} are some recent examples in this direction. Therefore, although storage units are becoming increasingly cheap, the question of how much storage to place at each location should be studied jointly with topological models from stochastic geometry, which are both tractable and realistic. An example of such a model and the corresponding analysis of a network with cache-enabled base stations is provided in Figure \ref{fig:ppp}.
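To make the stochastic-geometry picture concrete, the following minimal sketch (all densities, ranges, and cache parameters are illustrative assumptions, unrelated to the exact models of the cited works) draws base stations and users as 2D Poisson point processes and estimates the probability that a request is served from the nearest edge cache:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Snapshot: base stations and users as 2D PPPs on a 10 x 10 window
# (intensities per unit area are illustrative assumptions).
side, lam_bs, lam_u = 10.0, 1.0, 5.0
n_bs = rng.poisson(lam_bs * side**2)
n_u = rng.poisson(lam_u * side**2)
bs = rng.uniform(0, side, size=(n_bs, 2))
users = rng.uniform(0, side, size=(n_u, 2))

# Each station caches the M most popular of N Zipf(alpha) contents.
N, M, alpha, radius = 1000, 50, 0.8, 1.5
p = np.arange(1, N + 1, dtype=float) ** (-alpha)
p /= p.sum()
hit_if_covered = p[:M].sum()

# A request is served from the edge if the nearest station is in range
# AND the requested content happens to be cached there.
d = np.linalg.norm(users[:, None, :] - bs[None, :, :], axis=2).min(axis=1)
coverage = (d <= radius).mean()
print(f"coverage={coverage:.2f}, "
      f"edge hit probability ~ {coverage * hit_if_covered:.2f}")
\end{verbatim}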
\begin{figure*}[ht!]
	\centering
	\includegraphics[width=0.95\linewidth]{ejder.pdf}
	\caption{An example of base station deployment with caching capabilities \cite{Bastug2014CacheEnabledExtended}: (a) illustrates a snapshot of a 2D topology in which users and cache-enabled base stations are distributed according to Poisson Point Processes (PPPs); (b) shows the theoretical performance gains of such a deployment, with the results validated via simulations.}
	\label{fig:ppp}
\end{figure*}
\section{Wireless $\neq$ Wired}
Web caching has traditionally been studied by the networking community. A common misconception is that caching is a network layer technique, and hence that web caching approaches are sufficient for wireless caching as well.

However, following the fundamental work of Maddah-Ali and Niesen \cite{MaddahAli2014Fundamental}, the idea of caching has penetrated the information theory community with a new twist termed \emph{coded caching}, which promises unprecedented gains. In the following, we discuss the differences between wired and wireless caching.
\subsection{Wireless Caching Lies Both at Network and PHY Layers}
Suppose that a base station wants to deliver information to $K$ users at a rate of 1Mbps each, e.g., for streaming a video. If the video is the same for all users (broadcast video), then this might be possible for an arbitrarily large number of users: the base station could use an omni-directional antenna, exploit the broadcast nature of the wireless medium, and transmit at 1Mbps to all users simultaneously. When the videos are different, this is clearly not possible: the base station needs to multiplex the users over frequency, time, or codes, where each such \emph{resource block} is then associated with a single user. Since there are finitely many resource blocks, the base station can ultimately serve 1Mbps videos up to a maximum number of users $K_{\max}$, after which the resources are exhausted. To increase $K_{\max}$, physical layer researchers propose ways to increase the number of resource blocks in a given spectrum, i.e., increase the spectral efficiency, or to install more base stations so that there are more resource blocks per unit area, referred to as network densification.

The novel paradigm of \cite{MaddahAli2014Fundamental} shows a surprising fact: by exploiting caching in a smart way, an unbounded number of users watching different videos can be accommodated.
How is this made possible? During off-peak operation of the network, the users can cheaply populate their caches with \emph{parts} of popular contents. This is a perfectly reasonable assumption, since the sustainability question that caching tries to tackle concerns the hours of the day during which the network experiences its traffic peak. The content parts are appropriately chosen according to a caching code which ensures symmetric properties.
Then, at request time, a coding technique called ``index coding'' is employed to minimize the number of transmissions needed to satisfy all users.\footnote{In fact, finding the optimal index code is a very difficult problem, and hence the proposed approach resorts to efficient heuristics.} The combination of these schemes is shown in \cite{MaddahAli2014Fundamental} to require a number of resource blocks equal to
$K (1 - M/N) / (1 + K M/N),$
where $K$ is the number of users, $M$ the cache size, and $N$ the catalog size.
Hence, if the cacheable fraction of the catalog $M/N$ is kept fixed, the required number of resource blocks does not increase with the number of users $K$; this can be verified by taking the limit $K\to\infty$, whereby the above quantity converges to the constant $N/M-1$. The result is pictorially summarized in Figure~\ref{fig:scaling}, and a small numerical illustration is given at the end of this subsection.
 \begin{figure}[ht!]
 	\centering
	\includegraphics[width=0.65\linewidth]{scaling.pdf}
	\caption{Required resource blocks for $K$ mobile users with unicast demands, when the caches fit $30\%$ of the catalog.
	Coded caching can serve an arbitrarily large population of users with a fixed number of resource blocks.}
	\label{fig:scaling}
\end{figure}

This promising idea has spawned a wealth of research efforts, some of them in parallel or shortly after, such as device-to-device (D2D) networks \cite{Ji2014Fundamental, Jeon2015Capacity}, non-uniform content popularities \cite{Ji2015Order}, online caching policies \cite{Pedarsani2013Online}, multi-servers \cite{Shariatpanahi2015Multi}, multi-library \cite{Sahraei2016Multi}, and multihop wireless networks with cooperative caching \cite{Gitzenis2013Asymptotic}. More recently, it has been shown that in order to achieve such an ``order of $K$'' gain over conventional unicast (with possible legacy uncoded caching) systems, the content objects must be split into $O(\exp(K))$ subpackets; for networks of practical size this gain is not achievable. The optimal tradeoff between the coded caching gain and the content object size is a very interesting topic of current research.

From the implementation point of view, promising research directions include extensions that capture system aspects such as (i) popularity skewness, (ii) asynchronous requests, (iii) content objects of finite size, and (iv) cache sizes that scale slower than $N$. Assuming that these practical challenges are resolved, caching for wireless systems will become intertwined with the physical layer techniques employed at the base station and the handheld.
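As a quick numerical check of the scaling shown in Figure~\ref{fig:scaling}, the following sketch evaluates the resource-block expression above for a cacheable fraction of $30\%$; the uncoded column is the illustrative baseline $K(1-M/N)$ of unicast delivery with legacy caching:
\begin{verbatim}
def coded_blocks(K, m):
    """Resource blocks K(1-m)/(1+K*m) of coded caching, with m = M/N."""
    return K * (1 - m) / (1 + K * m)

m = 0.3                        # caches fit 30% of the catalog
for K in (10, 100, 1000, 10**6):
    print(f"K={K:>7}: coded={coded_blocks(K, m):6.2f}, "
          f"uncoded={K * (1 - m):9.1f}")
# The coded value converges to N/M - 1 = (1-m)/m = 2.33 as K grows,
# while the uncoded baseline grows linearly in K.
\end{verbatim}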
\subsection{One Cache Analysis Is Not Sufficient}
A contemporary mobile phone receives the signals of more than 10 base stations simultaneously. In future densified cellular networks, the mobile will be connected to several femto-, pico-, or nano-cells. The phenomenon of wireless multi-access opens a new horizon in the exploitation of caching \cite{femtocaching}. Since a user can retrieve the requested content from many network endpoints, neighboring caches should cooperate and avoid storing the same objects multiple times.

Content placement optimization for wireless caching typically boils down to a set cover problem on a bipartite graph connecting the users to the reachable caches. Therefore, finding what contents to store at each cache is a difficult problem even if the popularities are assumed known \cite{femtocaching}. It is possible to relax the problem into a convex optimization by using distributed storage codes, where each cache stores coded combinations of contents \cite{femtocaching}, or by obtaining a fractional placement through time sharing of different integral placements. These ideas have led to several interesting algorithms in the literature on cooperative caching \cite{Poularakis2014Approximation, Naveen2015Interaction, Bioglio2015Optimizing, Zhang2015Fundamental, Dehghan2015Complexity}.

What is the gain from these approaches? Cooperative caching typically saves cache space by avoiding the caching of the same popular contents in neighboring caches. Equivalently, we may think of it as multiplying the cache size $M$ by a small number, at best a gain of 3--5. In terms of hit probability, this can correspond to very different levels of gain, depending on the value of $M/N$. Due to the skewness of the popularity distribution, the marginal hit probability gain\footnote{The gain in hit probability obtained when increasing $M$ slightly.} is high when $M/N$ is small, and very small when $M/N$ is large. Since in wireless we expect the former, high gains are expected from cooperative wireless caching.

The current proposals on cooperative caching assume static popularity; a promising direction of research along these lines is therefore to design caching schemes that combine cooperation with learning of the time-varying popularity. The time needed to search for and retrieve the content from a nearby cache may also be significant, hence intelligent hash-based filtering and routing schemes are required \cite{Tao2015Content}.
\section{A Stakeholder Analysis for Wireless Caching}
\label{sec:grand}
The business of wireless caching involves three key stakeholders that together form a complex ecosystem.

\textbf{The users} of telecommunication services are primarily the customers and the consumers of the content, but in the case of wireless caching they are also active stakeholders. The users might be asked to help by contributing their own resources (for coded caching, memory and processing; for D2D caching, also the relaying of transmissions), and they will end up spending energy in exchange for better performance. On the other hand, one could envision the users employing D2D technology to enable caching without the participation of other stakeholders. Due to the complexities mentioned above, however, efficient wireless caching will require heavy coordination and extensive monitoring and processing, hence D2D-only approaches will be limited to restricted environments.

\textbf{The operators} of telecommunication networks are well placed for wireless caching. Due to the particularities of coded caching and multi-access caching, the operators are in a unique position to implement new protocols in base stations, affect the standards for new mobile devices, and develop the big data processing infrastructure that can realize wireless caching. Nevertheless, for reasons related to encryption, privacy, and global popularity estimation, the operators might not be able to install these technologies without the cooperation of the other two stakeholders.

\textbf{The providers} of Internet content are champions of trust from the user community. Apart from the security keys, they also hold extensive expertise in implementing caching techniques in the core networks.
From this advantageous position, they can positively affect the progressive evolution of caching in wireless networks. On the other hand, content provider-only solutions cannot unleash the full potential of wireless caching, since they are limited to isolated boxes in the operator network that can perform caching only with legacy CDN techniques. The deeper the caches go into the wireless network, the less efficient they will be if they stick to legacy CDN techniques.

\begin{figure}[ht!]
	\centering
	\includegraphics[width=0.65\linewidth]{stakeholders.pdf}
	\caption{A stakeholder analysis.}
	\label{fig:stackeholders}
\end{figure}

We summarize what each stakeholder \emph{offers} and \emph{needs} in Figure~\ref{fig:stackeholders}.
What are the prospects of the required collaboration among the stakeholders? Operators and content providers are seeking a ``best friends forever'' union \cite{Ft2012Content, Openet2014Content} in order to mutually harvest benefits in the digital value chain while keeping their users happy. This is a favorable environment for the scenario of wireless caching.
In fact, if the telecom operators enable caching capabilities at the edge of their networks, their infrastructure will become sustainable and they will gain access to new business models. On the other hand, content providers can benefit from a caching collaboration since (a) traffic will be intercepted earlier and the content transport cost will be reduced, (b) the user demand will not be held back by sustainability issues, (c) the costs associated with deploying large memory units will be avoided, and (d) they will be able to reach closer to their users and extend their computing infrastructures towards the fog paradigm. Last, it is foreseeable that in some situations the roles of the content provider and the wireless operator may converge.
\bibliographystyle{IEEEtran}

\section{Introduction}
\label{sec:intro}
In recent years there has been an increasing focus on adaptive (or co-evolving) networks~\cite{GrossBlasius,GrossSayama,SzaboFath}.
The essence of adaptive networks is that node-state dynamics influences the network topology, and the topology in turn influences the node dynamics.
Several adaptive network models have been phrased in the context of epidemics~\cite{GrossDLimaBlasius,ShawSchwartz},
game theory~\cite{PachecoTraulsenNowak,christoly2,ZimmermannEguiluzSanMiguel},
socio-dynamics~\cite{christoly1},
self-organized criticality~\cite{BornholdtRohlf,LevinaHerrmannGeisel},
financial markets~\cite{PolednaThurner},
and evolution~\cite{JainKrishna2}, just to name a few.
These models are understood either by simulations or by appropriate approximations, such as mean-field approximations and moment closure~\cite{DemirelVazquezBoehmeGross,christoly1,Gleeson1,KuehnMC}.

Adaptive networks show bifurcations or phase transitions, which means that they exist in at least two phases: one characterized by well-connected networks, and another with a drastically reduced link density (``collapsed'' phase).
The corresponding critical parameters separate the phases and can be computed explicitly for several models.
It is known that for bifurcation-induced critical transitions, in the vicinity of these critical parameters (tipping points), so-called early-warning signs (precursor signals) exist that are linked to the phenomenon of critical slowing down, see e.g.~\cite{Wiesenfeld1}.
For stochastic systems, slowing down can often be quantified by the autocorrelation and variance of the process.
In the context of adaptive networks, critical slowing down has been observed in terms of node properties \cite{KuehnZschalerGross,KuehnCT2}.

A classic model for adaptive networks is the \emph{co-evolving} SIS model, where the term co-evolving means that the links and the node states, $S$ (susceptible) and $I$ (infected), do not evolve independently.
In the \emph{static} SIS network model, where the network does not change over time, each node is in either the $S$ or the $I$ state. Each infected node recovers from the infection at a rate $r$.
An infected node can transmit the disease to connected susceptible nodes at a rate $\lambda$.
In \cite{GrossDLimaBlasius} rewiring was introduced, whereby susceptible nodes may rewire a link from an infected node to a susceptible node at a rate $w$. This adaptive SIS model shows a richer phase diagram, including a disease-free phase (almost all nodes $S$), an epidemic phase (almost all nodes $I$), a bi-stable phase, and an oscillatory phase~\cite{GrossDLimaBlasius,ShawSchwartz,GrossKevrekidis}.
In the following, we focus on the phase transition \emph{from} the epidemic/bi-stable phase \emph{to} the disease-free state. Depending on the context (infection, opinions, information, etc.) the disease-free state can have a positive, a negative, or a neutral connotation.
The transition happens at a critical infection rate, the so-called \emph{persistence threshold} $\lambda_c$.

The adaptive SIS model can be described with ``macroscopic equations'', where the stochastic node and link update dynamics is reduced to a system of ordinary differential equations (ODEs) that governs the fraction of infected nodes and the densities of the various link types in the population.
The equations are derived in the so-called heterogeneous pair approximation (PA).
It is possible to estimate the critical infection rate at the persistence threshold.
We denote the fraction of infected nodes by $\rho=[I]/N$, where $N$ is the number of nodes.
The per-node densities of $SS$ links, $SI$ links, and $II$ links are denoted by
$\rho_{SS}=[SS]/N$, $\rho_{SI}=[SI]/N$, and $\rho_{II}=[II]/N$, respectively.
We also consider the densities of the motifs
$\rho_{SSI}= [SSI]/N$ and $\rho_{ISI}=[ISI]/N$,
which denote the respective triplet densities per node.
These densities are random variables; however, we denote their expectation values by the same symbols.
The evolution equations for the expectation values (up to second order) are given by~\cite{GrossDLimaBlasius},
\begin{subequations}
\label{eq:PAs}
\begin{align}
\frac{d\rho}{dt}&=\lambda \rho_{SI}-r\rho\label{eq:PA1}\\
\frac{d\rho_{II}}{dt}&=\lambda \rho_{SI}+\lambda \rho_{ISI}-2r\rho_{II}\label{eq:PA2}\\
\frac{d\rho_{SS}}{dt}&=(r+w)\rho_{SI}-\lambda \rho_{SSI}\label{eq:PA3} \quad .
\end{align}
\end{subequations}
Let $\langle k \rangle$ denote the average degree and note that, since the total link density,
\begin{equation}
\label{eq:masscon}
\rho_{SS}+\rho_{SI}+\rho_{II}=\frac{\langle k \rangle}{2} \quad ,
\end{equation}
is conserved under rewiring, the seemingly missing $\rho_{SI}$-equation can be eliminated.
Equations~\eqref{eq:PA2}-\eqref{eq:PA3} are not closed because they depend on the triplet densities.
To close them, one can use e.g.
the homogeneous pair approximation\footnote{The quality of this approximation can be checked in simulations, see Appendix \ref{ssc:PA}.} that neglects correlations between links,
\begin{equation}
\label{eq:tripletapprox}
\rho_{SSI}\approx 2\frac{\rho_{SI}\rho_{SS}}{1-\rho},\qquad
\rho_{ISI}\approx \frac{\rho_{SI}\rho_{SI}}{1-\rho} \quad .
\end{equation}
One can now solve for the stationary solution of the PA.
The disease-free state is always a steady state, but it loses stability at the so-called invasion threshold, for which the PA yields
$\lambda^{\text{invasion}}=(r+w)/\langle k\rangle$.
In the PA the endemic state becomes stable at the persistence threshold,
\begin{equation}
\lambda_c =\frac{2r}{\mu^2}\left(\sqrt{1+\frac{w\mu^2}{r}}-1\right) \quad ,
\label{eq:persistence_threshold}
\end{equation}
where we define $\mu:=\langle k \rangle -1$ as the approximate average excess degree.
For $r\ll w\mu^2$, we have $\lambda_c \approx 2\sqrt{wr}/\mu$.
\begin{figure}
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{TimeSeries_wlegend.pdf}};
\node[inner sep=0pt] (Fig2) at (4.15,0)
{\includegraphics[height=4.15cm]{Prevalence.pdf}};
\end{tikzpicture}
 \caption{(a) Disease prevalence in the adaptive SIS model. Two time series are shown with
	an initial prevalence of 60\% (red) and 50\% (blue). The network
	is initialized as an Erd\"os-R\'enyi graph of size $N=400$ with an average
	degree of $\langle k \rangle=20$, and the remaining parameters are $r=0.002$, $\lambda=0.001$, and $w=0.01$. The dashed line indicates the stationary value in the epidemic state.
	(b) Stationary endemic prevalence as a function of the infection rate (red),
	together with its standard deviation (blue), for the same parameters. The numerical (green, dotted) and pair-approximation (black, dashed) values of the persistence threshold are also indicated; their values for these parameters are $\lambda_c=4.2\times 10^{-4} (\pm 0.2\times 10^{-4})$ and $\lambda_c^{PA}=4.6\times 10^{-4}$ (c.f. Equation \eqref{eq:persistence_threshold}), respectively.
\label{fg:Prevalence}
}
\end{figure}
Figure~\ref{fg:Prevalence}(a) shows two time series of the prevalence $\rho$ in a simulation of the adaptive SIS model in the bistable
regime. For the specific initial conditions shown, the dynamics either enters the stationary endemic state or
the disease-free state. The smaller the initial disease prevalence, the higher the probability of ending
up in the disease-free state. If the system is in the endemic state, it explores its phase space stochastically.
In Fig.~\ref{fg:Prevalence}(b) we show the average prevalence as a function of the infection rate $\lambda$.
It asymptotically approaches $1$ for large $\lambda$.
Close to $\lambda_c$, the prevalence decreases sharply and eventually the endemic state ceases to exist.
The standard deviation of $\rho$ increases as the infection rate approaches $\lambda_c$ from above~\cite{KuehnCT2}.
This reflects the fact that around the critical point fluctuations become larger--the chance of an extinction event increases.
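For concreteness, the following minimal (and deliberately unoptimized) Python sketch implements a Gillespie-type simulation of the adaptive SIS dynamics with the parameters of Fig.~\ref{fg:Prevalence}; it is an illustrative reimplementation, not the code used for the figures.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, k_mean = 400, 20
r, lam, w = 0.002, 0.001, 0.01       # recovery, infection, rewiring rates

# Erdos-Renyi initial graph stored as a set of sorted edge tuples.
edges = {(i, j) for i in range(N) for j in range(i + 1, N)
         if rng.random() < k_mean / (N - 1)}
infected = np.zeros(N, dtype=bool)
infected[rng.choice(N, N * 6 // 10, replace=False)] = True  # 60% prevalence

t, t_end = 0.0, 200.0
while t < t_end and infected.any():
    si = [e for e in edges if infected[e[0]] != infected[e[1]]]
    rates = (r * infected.sum(), lam * len(si), w * len(si))
    t += rng.exponential(1.0 / sum(rates))
    u = rng.random() * sum(rates)
    if u < rates[0]:                 # recovery of a random infected node
        infected[rng.choice(np.flatnonzero(infected))] = False
    elif u < rates[0] + rates[1]:    # infection along a random SI link
        i, j = si[rng.integers(len(si))]
        infected[i] = infected[j] = True
    else:                            # rewiring: S endpoint drops the I end
        i, j = si[rng.integers(len(si))]
        s = i if not infected[i] else j
        new = rng.integers(N)        # rejection sampling of a new S partner
        # (sketch: assumes at least one further non-adjacent S node exists)
        while infected[new] or new == s or tuple(sorted((s, new))) in edges:
            new = rng.integers(N)
        edges.remove((i, j))
        edges.add(tuple(sorted((s, new))))
print(f"prevalence at t={t:.0f}: {infected.mean():.2f}")
\end{verbatim}
Close to the threshold such a direct simulation frequently dies out, which is why the quasi-stationary method mentioned below is used there.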
Setting the right-hand side of \eqref{eq:PAs} to zero and solving for $\rho$ yields the equilibrium curve of the prevalence in the PA, which shows the leading-order behaviour $\rho\propto (\lambda-\lambda_c)^{\frac{1}{2}}$,
\begin{equation}
 \rho=
 1-\frac{\lambda\mu}{2(w-\lambda)}+ \frac{\sqrt{\lambda^2\mu^2-4r(w-\lambda)}}
 {2(w-\lambda)} \quad .
\end{equation}
Note that the singularity at $\lambda=w$ is removable by assigning $\rho(\lambda=w)=1-r/(w\mu)$.

\begin{figure*}
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{LinkDensities_with_PairApprox_FirstLegend.pdf}};
\node[inner sep=0pt] (Fig2) at (4.5,0)
{\includegraphics[height=4.15cm]{linkexpfitexponeplottogether_gimped.pdf}};
\node[inner sep=0pt] (Fig3) at (8.9,0)
{\includegraphics[height=4.15cm]{SI_distancesx_diff_gimped.pdf}};
\node[inner sep=0pt] (Fig4) at (13.5,0)
{\includegraphics[height=4.15cm]{SI_distancesy_diff_gimped.pdf}};
\end{tikzpicture}
 \caption{(a) Critical curves of the link densities for numerical simulations (dashed) and in the PA (dotted):
$SI$ links (red), $SS$ links (blue), and $II$ links (black). $r=0.002$, $w=0.01$, $\langle k \rangle=20$, and $N=400$.
	(b) Exponents of the tails, fitted to $\sim \lambda^{\beta}$, for $SI$ links (red) and $SS$ links (blue) as functions of the
	rewiring rate $w$ and the system size.
	(c) Location and
	(d) size of the maxima of the $SI$ link density as functions of $w$ and the system size; for reference, their values in the PA are also shown.
\label{fg:linkdensities}
}
\end{figure*}

At this level the adaptive SIS network model is well understood.
In this paper we ask how critical transitions in adaptive network dynamics are reflected in the topology of the underlying network(s).
The practical motivation behind this question is whether it is possible to monitor networks in order to infer their closeness to the critical point of adaptive systems.
We are interested in the extent to which early-warning signals can be derived from possible rearrangements of the network structure close to the critical transition at $\lambda_c$.
In particular, we ask whether the networks' structural changes follow certain scaling laws and whether those can be used for predicting the upcoming transition.
We take a first step in this direction by studying the adaptive network model proposed in \cite{GrossDLimaBlasius}.
Our main results are:

\begin{itemize}
\item[(R1)] For several network-related quantities ($SI$ link densities, triplet densities, clustering, assortativity, and the eigenvalue gap)
there exists a specific crossover of two scaling laws near criticality.
As a consequence, these quantities show local extrema close to the persistence threshold.
This effect can be explained within the PA framework.
These extrema might indeed serve as potential candidates for network-based early-warning signs.
The eigenvalue gap might be an especially practical measure.
\item[(R2)] Some network-related quantities, such as the degree, the effective branching ratio, and the harmonic mean distance,
behave as if there existed a critical point with a singularity located at the origin, $\lambda^{\rm network}_c \sim 0$.
Their critical curves end abruptly at the threshold $\lambda_c$, so that their role as early-warning signs is limited.
 \item[(R3)] Fluctuations and correlations of topological measures increase near $\lambda_c$, which might be an additional signal when approaching the tipping point.
\end{itemize}

In summary, we show that topological changes in adaptive networks close to the critical point carry information with the potential to improve the predictability of critical transitions through early-warning signs.
In this sense, the information that is neglected in the coarse-graining approach does indeed contain a crucial layer of structural information.


\section{The critical network}
\label{sc:critical}

We present the results of a numerical study of network properties near the persistence threshold.
We employ the Gillespie algorithm for the simulations, which samples the stochastic process in an unbiased way \cite{Gillespie1}.
For infection rates close to the threshold, we use the quasi-stationary method that was introduced in \cite{DeOliveiraDickman}
and applied to epidemic networks in \cite{FerreiraFerreiraPastor,FerreiraCastellanoPastor}.
In this paper we focus on the densities of the various link types, the triplet densities, the effective
branching ratio, the clustering coefficient, the degree distribution, the degree assortativity, compactness, and finally,
spectral properties of the adjacency matrix.

\subsection{Link densities}
\label{ssc:linkdensities}

The densities of $SS$, $SI$, and $II$ links reveal a detailed picture of the
mechanisms at work near the critical persistence threshold.
In Fig.~\ref{fg:linkdensities}(a) we show the average per-node densities of $SS$, $SI$, and $II$ links in the endemic
stationary state for a range of infection rates near the persistence threshold.
The $SS$ and $SI$ link densities approach $0$ asymptotically for large infection rates because rewiring cannot keep up with the infections.
Hence $II$ links dominate that regime.

Close to the persistence threshold the density of $SS$ links ($II$ links) increases (decreases).
For the $SI$ links, however, there is a distinctive maximum that deserves attention.
One can express this observation in terms of the derivatives
with respect to the infection rate. Using Eq.~\eqref{eq:masscon}, we have
$\rho_{SI}^{\prime}=-\rho_{SS}^{\prime}-\rho_{II}^{\prime}$,
where $\rho_{AB}^{\prime}$ denotes the rate of change of the $AB$ link density.
Thus, for infection rates near the threshold the $SS$ link density must decrease faster
than the $II$ link density increases; for slightly higher infection rates the roles interchange.
We conclude that $SS$ and $II$ links scale differently near the threshold, as can be seen in
Fig.~\ref{fg:linkdensities}(b). The tail of $\rho_{SS}$ scales roughly as $\lambda^{-2}$, and the tails of $\rho_{SI}$ and,
by link conservation, $\rho_{II}$ scale as $\lambda^{-1}$.
The exponent for $\rho_{SI}$ is systematically slightly overestimated,
because the square-root behavior interferes slightly. This behavior is robust with respect to the system size.

Using the PA in Eq.~\eqref{eq:PAs} we get the following estimate,
\begin{equation}
\rho_{SI}=
\frac{r\mu}{2(w-\lambda)}
\left[\sqrt{1-\frac{4r(w-\lambda)}{\lambda^2\mu^2}}-
1\right]+\frac{r}{\lambda} \quad .
\label{eq:pa_silinks}
\end{equation}
Note that the singularity at $\lambda=w$ is again removable.
The functional form close to the critical point is given by
\begin{equation}
\rho_{SI}
= \frac{r}{\lambda_c} + \frac{ (\lambda_c-2) r\mu \sqrt{\frac{\lambda_c}{2} + \frac{r}{\mu^2}}}{2 \lambda_c (w-\lambda_c)}
\sqrt{\Delta \lambda}
+\mathcal O(\Delta \lambda) \quad ,
\label{eq:pa_silinks_crit}
\end{equation}
where $\Delta \lambda=\lambda-\lambda_c$. Obviously, the density of $SI$ links
follows a square-root behavior with a positive slope near the critical point,
as expected from the universal behaviour of the fold bifurcation~\cite{GH} that is present at this point~\cite{GrossDLimaBlasius}.
For larger infection rates, $\lambda\gg\lambda_c$, the PA predicts a decay that is dominated by $\lambda^{-1}$.
Therefore a maximum must occur in between.
There are two (critical) exponents of the $SI$ link density.
In the vicinity of the threshold we expect a square-root behavior, $\rho_{SI} \sim \Delta \lambda^{\frac12}$;
for larger $\lambda$ we get a power-law decay with exponent $-1$, $\rho_{SI} \sim (\lambda-0)^{-1}$.
Effectively, we can write the result obtained in Eq.~\eqref{eq:pa_silinks} in the functional form
\begin{equation}
f(\lambda) = \alpha \lambda^{-\frac{3}{2}} (\Delta\lambda)^{\frac{1}{2}}+ \beta \lambda^{-1} + f_0 \quad .
\label{eq:model}
\end{equation}
The equation has three regimes. For $\Delta \lambda$ much smaller than $\lambda_c$,
$f(\lambda) \approx \gamma(\Delta \lambda)^{1/2}+\delta$, with $\gamma=\alpha\lambda_c^{-3/2}$
and $\delta=\beta / \lambda_c+f_0$.
When we identify $\rho_{SI}$ with $f$, we can solve for $\alpha$, $\beta$, and $f_0$ using Eq.~\eqref{eq:pa_silinks_crit}.
For $\lambda \gg \lambda_c$ it behaves as $f (\lambda) \approx \zeta \lambda^{-1}-\xi \lambda^{-2}+f_0$,
with $\zeta=\alpha+\beta$ and $\xi=\alpha \lambda_c / 2$, which can be checked by a Taylor expansion of Eq.~\eqref{eq:model}.
When we identify $\rho_{SI}$ with $f$, we obtain $\zeta=r$ and $\xi=r^2 / \mu$, again by expanding Eq.~\eqref{eq:pa_silinks}
for large $\lambda$. One can then solve for $\alpha$ and $\beta$.
The intermediate regime contains the maximum.
In Figs.~\ref{fg:linkdensities}(c) and (d) we investigate the location and height of the maximum of $\rho_{SI}$ as functions of the rewiring rate $w$.
Both the distance of the maximum from the threshold and its size seem to follow power laws, $w^{-1/2}$ and $w^{-3/2}$ respectively.
This behavior is also seen in the PA, which is shown for comparison.
Since the PA becomes less accurate towards the threshold, it is not surprising that the PA estimates differ in absolute terms but not qualitatively (slope). The behavior is again robust with respect to the system size.
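The crossover described above can be reproduced directly from Eq.~\eqref{eq:pa_silinks}; the following short Python sketch (parameter values taken from the figures, the numerical scan itself being purely illustrative) evaluates the PA expression and locates the maximum:
\begin{verbatim}
import numpy as np

def rho_si(lam, r=0.002, w=0.01, mu=19.0):
    """SI link density in the pair approximation, Eq. (pa_silinks)."""
    lam = np.asarray(lam, dtype=float)
    disc = 1.0 - 4.0 * r * (w - lam) / (lam**2 * mu**2)
    return r * mu / (2 * (w - lam)) * (np.sqrt(disc) - 1.0) + r / lam

r, w, mu = 0.002, 0.01, 19.0
lam_c = 2 * r / mu**2 * (np.sqrt(1 + w * mu**2 / r) - 1)  # Eq. (4)
lam = np.linspace(1.001 * lam_c, 0.02, 20000)
lam = lam[np.abs(lam - w) > 1e-6]   # skip the removable singularity
rho = rho_si(lam)
print(f"lambda_c = {lam_c:.2e}, "
      f"maximum of rho_SI at lambda = {lam[rho.argmax()]:.2e}")
\end{verbatim}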
\subsection{Triplet densities}
\label{ssc:triplets}

\begin{figure}
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{SSIsmaller.pdf}};
\node[inner sep=0pt] (Fig2) at (4.3,0)
{\includegraphics[height=4.15cm]{ISIsmaller.pdf}};
\end{tikzpicture}
 \caption{(a) Density of $SSI$ triplets.
	The inset shows the critical exponents of the tail for a range of rewiring rates and system sizes; it is about $-2$.
	(b) $ISI$ triplet density. The inset shows a critical exponent of $-1$, irrespective of $w$ and $N$. Parameters as before.
	\label{fg:triplets}
}
\end{figure}

We now focus on triplets where the central node is susceptible.
In particular, we study the $SSI$ and $ISI$ motifs that are crucial in the PA closure, Eq.~\eqref{eq:tripletapprox}.
Figure \ref{fg:triplets} shows the per-node densities of the $SSI$ and $ISI$ triplets.
As for the $SI$ links, one can distinguish three regimes: the asymptotic regime of large infection rates,
the critical regime of infection rates very close to $\lambda_c$, and a mid-range regime containing a maximum.
The maximum of the $ISI$ triplets is further away from the threshold than in the $SI$ link case.
This can be understood in the PA, Eq.~\eqref{eq:tripletapprox}.
The density of $ISI$ triplets is approximately the square of the $SI$ link density, divided by the fraction
of susceptible nodes. Squaring a function does not change the position of its maximum; the division, however, does.
Consider
\begin{equation*}
\left(\frac{\rho^2_{SI}}{\rho_S}\right)^{\prime}=
\frac{\left(\rho_{SI}^2\right)^{\prime}}{\rho_S}-
\left(\frac{\rho_{SI}}{\rho_S}\right)^2(\rho_S)^{\prime} \quad .
\end{equation*}
This expression is positive at the infection rate where $\rho_{SI}$ becomes maximal,
since the first term vanishes and the second is positive (because the susceptible density is decreasing).
Therefore the maximum of $\rho_{ISI}$ has not yet been attained at this rate.
A similar analysis can be done for the $SSI$ triplets.

\subsection{Effective branching ratio}
\label{ssc:ebr}

The effective branching ratio is defined as
\begin{equation}
\kappa=\frac{[SSI]}{[SI]}=\frac{\rho_{SSI}}{\rho_{SI}} \quad ,
\end{equation}
and quantifies the number of potential secondary infections for a given primary infection.
Figure~\ref{fg:EBR}(a) shows $\kappa$ on a log-log scale.
The effective branching ratio does not have a maximum but follows a power law with exponent $\beta_\kappa \approx -1$.
The power law is clearly of the form $\kappa\propto (\lambda-\lambda^{\rm ebr}_c)^{\beta_\kappa}$,
where $\lambda^{\rm ebr}_c \approx 0$.
The critical transition at $\lambda_c$ is not detected by the effective branching ratio.

\begin{figure}
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{ebr_2_gimped.pdf}};
\node[inner sep=0pt] (Fig2) at (4.5,0)
{\includegraphics[height=4.15cm]{ebrfits_2.pdf}};
\end{tikzpicture}
 \caption{(a) Effective branching ratio $\kappa=SSI/SI$ (red).
 		The inset is a log-log plot with a fitted slope $\beta_\kappa = -1.01$. Parameters as before.
		(b) Exponent $\beta_\kappa$ for a range of rewiring rates and system sizes.
		The inset shows the fitted threshold $\lambda_c^{\text{ebr}}$.
}
\label{fg:EBR}
\end{figure}

\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{global_clustering_gimped.pdf}};
\node[inner sep=0pt] (Fig2) at (4.5,0)
{\includegraphics[height=4.15cm]{cl_distances_2y_difftogether_gimped.pdf}};
\end{tikzpicture}
 \caption{(a) Clustering coefficient (red), its standard deviation around the equilibrium state (blue),
 	and the clustering coefficient of the Erd\"os-R\'enyi graph (dotted) for $N=400$, $\langle k\rangle=20$, $r=0.002$, and $w=0.01$.
	(b) Distance and relative size (inset) of the local maximum for various $w$ and $N$.
	The infection rate at which the critical curve becomes maximal is $\lambda_{\text{max}}$.
	Since the clustering coefficient vanishes as $N\to \infty$ at constant link density,
	we rescale by $C_{ER}=\langle k \rangle / N$. Parameters are otherwise the same.
\label{fg:globalclust}
	}
\end{figure}

We measure the exponent $\beta_\kappa$ for various rewiring rates $w$ and system sizes $N$ in Fig.~\ref{fg:EBR}(b).
For finite system sizes, we find that $\beta_\kappa$ decreases roughly linearly with the rewiring rate.
With larger system sizes this dependence becomes weaker.
Hence, we infer that the density of $SSI$ triplets (Fig.~\ref{fg:triplets}(a)) is just the product of the
$SI$ link density and a power law with exponent $-1$. For secondary infections we
conclude that the risk is highest right at the threshold,
even though the risk of an initial infection--indicated by the $SI$ link density--is not maximal at the critical point.


\subsection{Clustering coefficient}
\label{ssc:cc}

The clustering coefficient $C$ measures the fraction of connected triples in the network that are closed into triangles.
It is given in terms of the adjacency matrix $A$ of the network by
\begin{equation}
C=\frac{Tr A^3}{\sum_{ij} (A^2)_{ij} - Tr (A^2) }\;. \label{eq:clcoeff}
\end{equation}
Figure~\ref{fg:globalclust}(a) shows the clustering coefficient
near $\lambda_c$. The qualitative behavior is again similar to that of the $SI$ link density or the $SSI$ and $ISI$ triplet densities.
Since there are almost no susceptible nodes in the regime of large infection rates,
the stationary network behaves like an Erd\"os-R\'enyi graph, whose clustering coefficient is given by
$C_{\rm ER}=\langle k \rangle/N$,
where $\langle k\rangle =1/N \sum_{i}k_i=2L / N$ and $L$ is the total number of links.
$C_{ER}$ is the limiting value for large infection rates (dotted horizontal line).
Rewiring creates and destroys triangles, and the clustering coefficient depends on the average net effect.
The appearance of the maximum can be explained by this net effect in the three regimes.
For high infection rates, a rewiring event has a much higher chance of closing an open triangle than of destroying a closed one,
due to the high connectivity of the susceptible subgraph.
However, the number of rewirable links is very low, which results in an asymptotically vanishing net effect.
For infection rates very close to $\lambda_c$, there are many more susceptible nodes, and
the chance of creating a closed triangle is only slightly larger than the chance of destroying one.
The average net effect is nevertheless present.
The largest effect occurs approximately where the number of $SI$ links, and hence the total rewiring rate, is maximal.

Fitting a power law to the tail of $C$ is sensitive to the choice of the fitting interval and to whether the ER limiting value is enforced or not.
The fitted exponent varies depending on these choices. Since the clustering coefficient is a nonlinear function of graph motifs, it is likely that multiple power laws of the respective motifs interfere, which leads to the aforementioned sensitivity.

We denote by $\lambda_{\text{max}}$ the infection rate at which the critical curve becomes maximal.
In Fig.~\ref{fg:globalclust}(b) we show the distance of $\lambda_{\text{max}}$ from the threshold $\lambda_c$ as a function of the rewiring rate.
This distance does not decline for all system sizes $N$, as is the case for the maxima of $\rho_{SI}$.
The trend is rather that the distance rises for larger $N$.
The size of the maximum, $\Delta C=C(\lambda_{\text{max}})-C(\lambda_c)$, does not decline as a function of $w$ but levels out.
However, $\Delta C\propto N^{-1}$, as can be seen from the inset of Fig.~\ref{fg:globalclust}(b),
where we rescale $\Delta C$ by the Erd\"os-R\'enyi value $C_{ER}=\langle k \rangle / N$.
This $N$-dependence is not surprising, because the clustering coefficient itself vanishes at constant link density as $N\to \infty$.
In summary, the maximum of the clustering coefficient is a possible robust warning sign for the upcoming persistence threshold.

\subsection{Degree distribution}
\label{ssc:degr_distr}

\begin{figure}[t]
 \centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{degreedistribution.pdf}};
\node[inner sep=0pt] (Fig2) at (4.5,0)
{\includegraphics[height=4.15cm]{standard_dev_2_gimped.pdf}};
\end{tikzpicture}
 \caption{(a) Degree distribution of the entire graph (red), the susceptible nodes (blue), and the infected nodes (black) for $N=1000$, $\langle k\rangle=20$, $r=0.002$, $\lambda=0.001$, and $w=0.02$.
 	The distributions are close to Poisson distributions, as one would expect for ER networks.
	(b) Standard deviation of the stationary degree distribution. The inset shows a power-law fit;
	the exponent is roughly $\beta_\sigma\approx -0.46$ and $\lambda_c^{\sigma}\approx 0.00022$ for $N=400$,
	$\langle k \rangle=20$, $r=0.002$, $w=0.01$.
	}
 \label{fg:degree_distribution}
\end{figure}

The degree distribution $p_k$ is the fraction of nodes in the network with degree $k$. The average degree is
$\langle k\rangle =1/N \sum_{i}k_i=2L / N$. Note that $\langle k \rangle$ is constant
due to the conservation of links during rewiring. The $n$-th raw moment is given by
\begin{equation}
 \langle k^n\rangle =\frac{1}{N}\sum_i k_i^n=\rho\langle k^n
\rangle_I+\rho_S \langle k^n\rangle_S \quad,
\end{equation}
where $\langle k^n \rangle_{I(S)}$ are the raw moments
of the degree distribution of the infected (susceptible) nodes.
The degree distribution in the endemic state has been studied, for instance, in~\cite{GrossDLimaBlasius,Marceauetal}.
The degrees of the infected and the susceptible nodes both follow a Poisson distribution
(an indication of ER random graphs), however with different mean values.
The behavior in the vicinity of the phase transition at $\lambda_c$ has not been studied before.

Figure~\ref{fg:degree_distribution}(a) shows the stationary degree distribution for a set of parameters close to the transition.
The overall distribution is a superposition of the susceptible and the infected contributions, and
the respective Poisson distributions are clearly visible.
In Fig.~\ref{fg:degree_distribution}(b) we show the critical curve of the standard deviation $\sigma$ at equilibrium.
We observe a rise of the standard deviation close to the critical point and the absence of a local maximum.
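This broadening is what one expects from a mixture of two Poisson degree distributions, as made precise in the following paragraph; the small synthetic check below (with purely illustrative group means and prevalence) compares the sampled variance with the mixture formula:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

# Mixture of two Poisson degree distributions (susceptible vs infected).
# All numbers are illustrative; they only satisfy <k> = 20 by design.
rho, k_I, k_S = 0.4, 14.0, 24.0       # prevalence and group mean degrees
k_mean = rho * k_I + (1 - rho) * k_S
n = 100_000
deg = np.where(rng.random(n) < rho,
               rng.poisson(k_I, n),
               rng.poisson(k_S, n))

# Mixture variance: within-group part <k> plus between-group part.
var_pred = k_mean + rho * k_I**2 + (1 - rho) * k_S**2 - k_mean**2
print(f"<k>={k_mean:.1f}, sample var={deg.var():.1f}, "
      f"predicted={var_pred:.1f}")   # both exceed Var_ER = <k>
\end{verbatim}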
For a Poisson distribution the mean $\langle k \rangle$ and the variance $\sigma^2=\langle k^2\rangle-\langle k\rangle^2$ coincide. Under the assumption that the infected and the susceptible nodes both have Poisson degree distributions, the overall variance is
\begin{equation}
{\rm Var} =\langle k \rangle + \left[ \rho \langle k \rangle_I^2+ (1-\rho)\langle k\rangle_S^2
- \langle k\rangle^2 \right] \quad .
\end{equation}
Here we used $\langle k^2\rangle = \rho\langle k^2 \rangle_I+(1-\rho)\langle k^2 \rangle_S$
 and $\langle k^2\rangle_{I(S)}=\langle k\rangle^2_{I(S)}+\langle k\rangle_{I(S)}$.
By Jensen's inequality the term in square brackets is positive, and we obtain the inequality
${\rm Var} \geq {\rm Var}_{\rm ER}= \langle k \rangle$.
In Fig.~\ref{fg:degree_distribution}(b) the equilibrium standard deviation can be seen to be bounded from below by $\sigma_{\rm ER}=\sqrt{\rm Var_{\rm ER}}$, which is indicated by the dotted horizontal line.

The inset of Fig.~\ref{fg:degree_distribution}(b) shows a power-law fit.
The curve is well described by a power law close to the transition; however, its critical point is not located \emph{at} the transition.
As for the effective branching ratio (Fig.~\ref{fg:EBR}), the true critical point is not sensed by $\sigma$.
The fitted critical points $\lambda_c^\sigma$ for various parameters and choices of intervals
all share the feature that they are far away from $\lambda_c$ and close, or equal, to zero.

We conclude that the broadening of the degree distribution captures the approach towards the critical point,
but the true location of the critical point $\lambda_c$ cannot be inferred from the scaling behavior of $\sigma$.


\subsection{Degree assortativity}
\label{ssc:degass}

Assortativity measures the correlations between the degrees of adjacent nodes.
In terms of the adjacency matrix $A$ and the degree vector $k_i=\sum_{j}A_{ij}$ it reads
\begin{equation}
 \mathcal A=\frac{\sum_{ij}\left(A_{ij}-\frac{k_ik_j}{N\langle k\rangle}\right)
k_ik_j}{\sum_{ij}\left(k_i\delta_{ij}-\frac{k_ik_j}{N\langle k\rangle}\right)k_ik_j} \quad .
\end{equation}
It takes values between $-1$ and $1$. For $\mathcal A=0$ the network has no
degree correlations, for $\mathcal A=1$ it is maximally degree-correlated, and for
$\mathcal A=-1$ it is maximally anti-correlated.

\begin{figure}[t]
 \centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.4cm]{assortativity_gimped.pdf}};
\node[inner sep=0pt] (Fig2) at (4.5,0.05)
{\includegraphics[height=4.0cm]{assortativity_max_scany_difftogether_gimped.pdf}};
\end{tikzpicture}
 \caption{
	(a) Assortativity coefficient for the entire graph (red) and for the susceptible subgraph (blue).
 	For reference we also show the density of $SS$ links (black), which approximates the size of the susceptible subgraph.
	$N=400$, $\langle k \rangle = 20$, $r=0.002$, and $w=0.01$.
	(b) The distance $\lambda_{\text{max}}$ and the size $\Delta A$ (inset) of the local maxima, shown for various $w$ and $N$.
	}
 \label{fg:assort_coeff}
\end{figure}

\begin{figure*}
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (Fig1) at (0,0)
{\includegraphics[height=4.15cm]{harmonicmeandistance_gimped.pdf}};
\node[inner sep=0pt] (Fig2) at (4.3,0)
{\includegraphics[height=4.15cm]{hmd_fitted_parameters.pdf}};
\node[inner sep=0pt] (Fig3) at (8.8,-0.15)
{\includegraphics[height=4.5cm]{hmdvsN.pdf}};
\node[inner sep=0pt] (Fig4) at (13.5,0)
{\includegraphics[height=4.15cm]{harmonicmean_susc_gimped.pdf}};
\end{tikzpicture}
 \caption{(a) Harmonic mean distance ${\rm HMD}$ of the entire graph, with a power-law fit on a log-log plot (inset).
 (b) Power-law exponent $\beta^{\rm{HMD}}_c$ fitted to the tail, and the critical point $\lambda_c^{\rm{HMD}}$ as a fraction of $\lambda_c$ (inset).
 (c) Fitted asymptotic value ${\rm HMD}_\infty$ as a function of the system size $N$, with the numerical values of the Erd\"os-R\'enyi ensemble average
 ${\rm HMD}_{\rm ER}(N)$ as a reference.
 (d) Harmonic mean distance of the susceptible subgraph, ${\rm HMD}(S)$, measured relative to the ER
 value for a graph of size $S=N (1-\rho)$. $N=400$, $\langle k\rangle=20$, $r=0.002$, and $w=0.01$.
	}
 \label{fg:harmonic_mean_distance}
\end{figure*}

Figure~\ref{fg:assort_coeff}(a) shows the critical curve of the assortativity.
As for the $SI$ link density and the clustering coefficient, the degree assortativity exhibits a maximum.
For large infection rates the network becomes non-assortative, which is the expected Erd\"os-R\'enyi limit.
At medium-range infection rates the maximum occurs.
Towards the threshold, the assortativity decreases again.

It is instructive to decompose the assortativity coefficient into its constituent parts.
We denote by $\langle k k' \rangle_{AB}$ the expected product of the degrees $k$ and $k'$ along links of type $AB$.
In this notation the coefficient reads
\begin{equation}
 \frac{2\rho_{SS}\langle kk'\rangle_{SS}+2\rho_{SI}
\langle kk'\rangle_{SI}+2\rho_{II}\langle kk'\rangle_{II}
-\frac{1}{\langle k \rangle}\langle k^2 \rangle^2}{ \langle
 k^3 \rangle-\frac{1}{\langle k \rangle}\langle k^2 \rangle^2} \quad .
\end{equation}
The important contribution to the overall assortativity results from the first three terms in the numerator.
The assortativity of the susceptible subgraph rises to high values above $0.8$, as can be seen in Fig.~\ref{fg:assort_coeff}(a).
This means that the most important contribution of the three terms comes from the susceptible subgraph.
There is, however, a trade-off between the abundance of $SS$ links and the expected degree correlation
$\langle kk' \rangle_{SS}$: while the latter increases, the former decreases (Fig.~\ref{fg:assort_coeff}(a)).
The degree correlations, however, increase faster than the $SS$ link density decreases, thus giving rise to the maximum.

The interference of at least two scaling laws brings about the maximum.
Its location is studied in Fig.~\ref{fg:assort_coeff}(b). With respect to the distance of the maximum from the threshold, $\lambda_{\text{max}}-\lambda_c$, one observes two small trends, namely a slight increase of the distance towards higher rewiring rates
and a non-significant decrease with respect to the system size.
\nThe height of the maximum $\\Delta \\mathcal A=\\mathcal A(\\lambda_{\\text{max}})-\\mathcal A(\\lambda_c)$ (inset of Fig.~\\ref{fg:assort_coeff}(b)), \non the other hand, has a strong size dependence. \nIt decreases with increasing $w$, but does so at a higher rate as the system becomes larger. \nThe decrease might follow a power law.\n\nThe degree assortativity is a purely global quantity that is not easy to measure locally. \nIt does, however, bear potential as an early-warning sign because the distance to the threshold is \nsizable and not strongly dependent on the rewiring rate or system size.\n\n\n\n\\subsection{Harmonic mean distance}\n\\label{ssc:cmptness}\n\nThe most natural way of measuring distances on a graph is in terms of the geodesic distance. \nFor two nodes $i$ and $j\\neq i$ the geodesic distance $d_{ij}$ is the length of the shortest path between them, \nand is infinite if $i$ and $j$ are not connected by any path. \nThe ``farness'' of a node $i$ is given by $1\/(N-1)\\sum_{j\\neq i}d_{ij}$, the ``closeness'' by its reciprocal. \nFor graphs with multiple connected components the farness is infinite and the closeness vanishes. \nTo remedy this deficiency one may look at the harmonic geodesic distance $1\/d_{ij}$. The harmonic mean geodesic \ndistance is then given by\n\\begin{equation}\n{\\rm HMD} =\\frac{1}{\\langle \\frac{1}{d}\\rangle}=\\frac{N(N-1)}{\\sum_{i,j\\neq i}\\frac{1}{d_{ij}}} \\quad .\n\\end{equation}\nIt is $1$ for a complete graph, infinite for a set of points without links, and finite otherwise.\n\n\n\\begin{figure*}\n\\centering\n\\begin{tikzpicture}\n\\node[inner sep=0pt] (Fig1) at (0,0)\n{\\includegraphics[height=4.15cm]{eigenvalues_adj.pdf}};\n\\node[inner sep=0pt] (Fig2) at (4.5,0)\n{\\includegraphics[height=4.15cm]{eigenvalue_gap.pdf}};\n\\node[inner sep=0pt] (Fig3) at (9,0)\n{\\includegraphics[height=4.15cm]{adjdistribution.pdf}};\n\\node[inner sep=0pt] (Fig4) at (13.5,0)\n{\\includegraphics[height=4.15cm]{lapdistribution.pdf}};\n\\end{tikzpicture}\n \\caption{\n (a) The largest eigenvalues of the adjacency matrix. \n (b) The eigenvalue gap and the distances from the threshold to the minimum for a range of rewiring rates and system sizes (inset). \n (c) Distribution of eigenvalues of the adjacency matrix, and \n (d) the network Laplacian.\n\tBlue curves correspond to high values of the infection rate, $\\lambda=0.03$, and show Wigner's semicircle.\n\tRed lines correspond to the distributions for infection rates close to the critical threshold, $\\lambda_c=0.00043$. \n\tGrey lines show distributions for intermediate infection rates. \n\t$N=400$, $\\langle k\\rangle=20$, $r=0.002$, and $w=0.01$. \n\t}\n \\label{fg:eig_adj}\n\\end{figure*}\n\n\nFigure~\\ref{fg:harmonic_mean_distance}(a) shows the ${\\rm HMD}$.\nFor infection rates close to the critical point, the ${\\rm HMD}$ rises: the network becomes less compact. \nA possible explanation is that a large number of paths in the network lead through the susceptible subgraph, \nespecially for infected nodes. \nThis is supported by the high branching ratio $\\kappa=[SSI]\/[SI]$ close to the critical point. \nTherefore, the overall distances become larger in the vicinity of the critical threshold. \nFor large infection rates the equilibrium distances approach the ensemble average of the Erd\\\"os-R\\'enyi \nharmonic mean distance $\\rm{HMD}_{ER}\\sim \\log(N)$, as seen in Fig.~\\ref{fg:harmonic_mean_distance}(c).
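\n\nThat ${\\rm HMD}_{\\rm ER}$ grows only logarithmically follows from the familiar small-world heuristic (a standard back-of-the-envelope argument, not specific to our model): in an ER graph with mean degree $\\langle k\\rangle$ the number of nodes within geodesic distance $\\ell$ of a given node grows like $\\langle k\\rangle^\\ell$, so typical distances, and with them the harmonic mean, satisfy\n\\begin{equation}\n\\langle k\\rangle^{\\ell}\\approx N \\quad\\Rightarrow\\quad {\\rm HMD}_{\\rm ER}(N)\\approx\\frac{\\log N}{\\log \\langle k\\rangle} \\quad .\n\\end{equation}\nFor $N=400$ and $\\langle k\\rangle=20$ this estimate gives typical distances of about $2$. 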
\nThe susceptible subgraph itself is at its densest near the threshold and its ${\\rm HMD}$ in comparison to that of an \nER graph of the same size grows linearly (Fig.~\\ref{fg:harmonic_mean_distance}(d)). \nAn explanation is that the number of $SS$ links falls faster than the number of susceptible nodes $S=N(1-\\rho)$, by a factor of $\\lambda^{-1}$. \nSo the average distances of the susceptible subgraph scale by the reciprocal factor with respect to a baseline ER graph of size $S$. \n\nAs for the effective branching ratio $\\kappa$ and for the standard deviation $\\sigma$ (Subsections \\ref{ssc:ebr} and\n\\ref{ssc:degr_distr}), the ${\\rm HMD}$ does not sense the actual critical point $\\lambda_c$. \nA fit to a power law reveals that the exponent is close to $-1$ for a range of rewiring rates and system sizes (Fig.~\\ref{fg:harmonic_mean_distance}(b)). The fitted critical point $\\lambda_c^{\\rm HMD}$ is both bigger than $0$ and strictly smaller than $\\lambda_c$.\n\n\\subsection{Spectral properties}\n\nThe network Laplacian is defined in terms of the adjacency matrix $A$ by $L=D-A$, with $D_{ij}=\\delta_{ij} \\sum_k A_{jk}$.\nThe largest eigenvalue of the adjacency matrix, $\\lambda_{1}$, is sometimes referred to as the ``capacity'' of the graph. \nIt carries information about the connectivity and the number of paths in the network. \nIt is bounded from below by the average degree $\\langle k\\rangle$, and from above by the maximal degree. We denote the gap between the first two eigenvalues by $g=\\lambda_1-\\lambda_2$. In the context of Markov chains the gap measures the speed of convergence in $\\ell^p$ to the stationary distribution under the condition of irreducibility and aperiodicity, \nsee \\cite{LevinPeres,Chung,ChungLu} for more details. \n\nFigure~\\ref{fg:eig_adj}(a) shows the largest eigenvalues of the adjacency matrix. \nThe largest eigenvalue, unlike the remaining ones, attains a maximum shortly after the threshold and then decreases towards an asymptotic value. \nIn Fig.~\\ref{fg:eig_adj}(b) we see that the eigenvalue gap is large when close to the threshold, drops steeply to a local minimum, \nand then relaxes back to its asymptotic value. \nThe larger the gap the more difficult it is to dissect the graph \\cite{ChungLu} and the faster infections would spread. \nThe distance of the local minimum from the threshold scales linearly with the rewiring rate, as can be seen from the inset of Fig.~\\ref{fg:eig_adj}(b).\nIt is therefore a very reliable indicator of the transition, the more so for higher rewiring rates.\n\nAnother important spectral characteristic of the network is the distribution of eigenvalues. \nIt is known~\\cite{FurediKomlos} that the empirical eigenvalue distribution \nof the adjacency matrix converges to the Wigner semi-circle law for ER graphs. \nThe Laplacian, however, converges to the convolution of a Gaussian with the semi-circle distribution~\\cite{DingJiang}, \nafter appropriate normalisation. \nFigure~\\ref{fg:eig_adj} shows the empirical eigenvalue distribution of both the adjacency matrix (c) and the graph Laplacian (d). \nFar from the threshold, the eigenvalue distribution of the adjacency matrix approaches the semi-circle around the origin, as expected. \nClose to the critical threshold the distribution changes drastically. \nIt remains symmetric around the origin, but develops a narrow peak producing a cusp at the center. \nThis behavior is known from eigenvalue distributions of several scale-free networks~\\cite{ChungLu}.
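\n\nFor reference, the ER baseline that the adjacency spectrum approaches far from the threshold can be written explicitly. For $G(N,p)$ with edge variance $s^2=p(1-p)$ the bulk density is, in our normalisation,\n\\begin{equation}\n\\rho(\\lambda)=\\frac{\\sqrt{4Ns^2-\\lambda^2}}{2\\pi Ns^2}, \\qquad |\\lambda|\\leq 2s\\sqrt{N} \\quad ,\n\\end{equation}\nwhile the Perron eigenvalue $\\lambda_1\\approx Np$ detaches from the bulk~\\cite{FurediKomlos}, consistent with the lower bound $\\lambda_1\\geq \\langle k\\rangle$ mentioned above. 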
\nFor the empirical eigenvalue distribution of the Laplacian the situation is similar: an\nER limit exists at high infection rates. Drastic changes occur near the critical threshold.\n\n\n\\section{Discussion -- usability of network measures as early-warning signs}\n\\label{sc:powerlove}\n\n\nIn summary, we formulated the general question of the feasibility of finding network-based precursor signals in adaptive network dynamics \nin the context of a specific epidemic model, the co-evolving SIS model. \nWe find that several network measures show no sensitivity whatsoever to the critical transition.\nThese are the effective branching ratio, the degree distribution and the harmonic mean distance. \nAs a function of the infection rate, these measures show scaling laws, often characterized by an exponent of $-1$, \nwith the singularity located at zero or close to it, but {\\em not} at $\\lambda_c$. \nThis means that these measures behave as if the transition were at $\\lambda^{\\rm ebr \/ degree \/ HMD}_c \\approx 0$, \n$\\lambda^{\\rm ebr \/ degree \/ HMD}_c \\ll \\lambda_c$,\nrather than at $\\lambda_c$. This has severe consequences for their use as an early warning sign, \nbecause the fold bifurcation point is suddenly reached without any warning. \nThese measures can in no way anticipate the true position of the critical point $\\lambda_c$.\n\nWe have shown, however, that a number of other network measures do carry potential for being used as early-warning signs. \nThey are able to detect the critical transition at $\\lambda_c$, when approached from above ($\\lambda > \\lambda_c$).\nIn particular, these measures, which include the\n$SI$ link densities, \ntriplet densities, \nclustering, and \nassortativity, \nshow a crossover of two scaling laws of the functional form of Eq.~\\eqref{eq:model}. \nThe first scaling law shows an increase of the respective measure as $(\\lambda-\\lambda_c)^{1\/2}$, \nslightly above the transition ($\\lambda > \\lambda_c$). \nThe other is an asymptotic scaling law ($\\lambda \\gg \\lambda_c$), which is characterized by negative integer exponents. \nBetween these two scaling regimes a local maximum exists, which is indeed visible in the corresponding network measures. \nThe maxima are located slightly above the critical point, $\\lambda^{\\rm max} > \\lambda_c$.\n\nBoth the double scaling and the maximum are also seen in the largest eigenvalue of the adjacency matrix, \nwhen plotted against $\\lambda$. \nThe eigenvalue gap shows a very clear minimum, well before the critical transition. \nIn practical terms this means that, when approaching the critical transition point, an increase of the \neigenvalue gap signals the immediate vicinity of the transition. \nGiven a sufficiently robust eigenvalue estimate from data, the eigenvalue gap is a very clear and practical early-warning sign. \n\nWe tested the effects of all parameters in the co-evolving SIS model and found that our results are relatively robust. \nThe dependence on rewiring rate and system size has been investigated especially carefully. \nThe recovery rate sets the time scale and can therefore be fixed arbitrarily; we took the choice used in \\cite{GrossDLimaBlasius}. \nThe connectivity determines the location of the threshold.
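\n\nAs a point of orientation (textbook homogeneous mean-field reasoning, which deliberately ignores rewiring and all correlations): at the threshold new infections balance recoveries, $\\lambda_c^{\\rm MF}\\langle k\\rangle\\rho\\approx r\\rho$ as $\\rho\\to 0$, so that\n\\begin{equation}\n\\lambda_c^{\\rm MF}\\approx \\frac{r}{\\langle k \\rangle} \\quad .\n\\end{equation}\nWith $r=0.002$ and $\\langle k\\rangle=20$ this gives $10^{-4}$; adaptive rewiring shifts the actual threshold upwards (here $\\lambda_c=0.00043$), but the scaling makes the role of the connectivity explicit. 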
\nThe homogeneous pair approximation becomes unreliable for very low values of $\\langle k \\rangle$ \\cite{Marceauetal}.\n\nWe conclude by mentioning that the network information that is neglected in the classical \ncoarse-graining approach does contain a layer of structural information that can indeed detect the critical transition point. \nThe next steps to take would be to actually test the performance of the different network measures as precursor signals \nin agent-based simulations, where infection rates are exogenously varied slowly. \nAn observer can monitor the networks and the infection and rewiring rates, but would not know anything about the location of the critical point. \nIt would be interesting to see to what extent such an observer could predict the collapse of the \nsystem several time steps in advance. \n\nSupported by the Austrian Science Foundation FWF under the project P29252. \n\n\n\n\\bibliographystyle{plain}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Busemann-Petty problem was posed in \\cite{Busemann-Petty-1956},\nfirst in a list of ten problems concerning central sections of symmetric convex bodies in ${\\mathbb R}^n$\nand coming from questions in Minkowski geometry. It was originally formulated as\nfollows:\n\\begin{quote}{\\sl Assume that $K$ and $D$ are origin-symmetric convex bodies in ${\\mathbb R}^n$ and satisfy\n\\begin{equation}\\label{eq:intro-1}|K\\cap\\xi^{\\perp }|\\leqslant |D\\cap\\xi^{\\perp }|\\end{equation}\nfor all $\\xi\\in S^{n-1}$. Does it follow that $|K|\\leqslant |D|$?}\n\\end{quote}\nHere $\\xi^\\perp$ is the central hyperplane perpendicular to $\\xi $.\nThe answer is affirmative if $n\\leqslant 4$ and negative if $n\\geqslant 5$ (for the history and the solution to this problem,\nsee the monographs \\cite{Gardner-book} and \\cite{Koldobsky-book}). The isomorphic version of the Busemann-Petty problem\nasks if there exists an absolute constant $C_1>0$ such that whenever $K$ and $D$ satisfy \\eqref{eq:intro-1} we have $|K|\\leqslant C_1|D|$.\nThis question is equivalent to the slicing problem and to the isotropic constant conjecture asking if\n\\begin{equation}\\label{eq:intro-2}L_n:= \\max\\{ L_K:K\\ \\hbox{is isotropic in}\\ {\\mathbb R}^n\\}\\end{equation} is a bounded sequence.\nMore precisely, it is known that if $K$ and $D$ are two centered convex bodies\nin ${\\mathbb R}^n$ such that \\eqref{eq:intro-1} holds true for all $\\xi\\in S^{n-1}$, then\n\\begin{equation}\\label{eq:intro-3}|K|^{\\frac{n-1}{n}}\\leqslant c_1L_n\\,|D|^{\\frac{n-1}{n}},\\end{equation}\nwhere $c_1>0$ is an absolute constant. Regarding $L_n$, Bourgain proved in \\cite{Bourgain-1991} that $L_n\\leqslant\nc\\sqrt[4]{n}\\log\\! n$, and Klartag \\cite{Klartag-2006} improved this bound to $L_n\\leqslant c\\sqrt[4]{n}$. A second proof of Klartag's bound\nappears in \\cite{Klartag-EMilman-2012}. For more information on isotropic convex bodies and log-concave measures see \\cite{BGVV-book}.\n\nShephard's problem (see \\cite{Shephard-1964}) is dual to the Busemann-Petty problem.\n\\begin{quote}{\\sl Let $K$ and $D$ be two centrally symmetric convex bodies in ${\\mathbb R}^n$. Suppose that\n\\begin{equation}\\label{eq:intro-4}|P_{\\xi^{\\perp } }(K)|\\leqslant |P_{\\xi^{\\perp} }(D)|\\end{equation}\nfor every $\\xi \\in S^{n-1}$, where $P_{\\xi^{\\perp }}(A)$ is the orthogonal projection of $A\\subset {\\mathbb R}^n$\nonto $\\xi^{\\perp }$. 
Does it follow that $|K|\\leqslant |D|$?}\n\\end{quote}\n\nThe answer is affirmative if $n=2$, but shortly after it was posed, Shephard's question was answered in the negative for all $n\\geqslant 3$.\nThis was done independently by Petty in \\cite{Petty-1967} who gave an explicit counterexample in ${\\mathbb R}^3$, and by Schneider\nin \\cite{Schneider-1967} for all $n\\geqslant 3$. After these counterexamples, one might try to relax the question, asking for the smallest constant\n$C_n$ (or the order of growth of this constant $C_n$ as $n\\to \\infty $) for which: if $K,D$ are centrally symmetric convex bodies in\n${\\mathbb R}^n$ and $|P_{\\xi^{\\perp }}(K)|\\leqslant |P_{\\xi^{\\perp }}(D)|$ for all $\\xi \\in S^{n-1}$ then $|K|\\leqslant C_n|D|$.\n\nSuch a constant $C_n$ does exist, and a simple argument, based on John's theorem, shows that $C_n\\leqslant c\\sqrt{n}$,\nwhere $c>0$ is an absolute constant. On the other hand, K. Ball has proved in \\cite{Ball-1991} that this simple\nestimate is optimal: one has $C_n\\simeq\\sqrt{n}$.\n\n\\smallskip\n\nIn the first part of this note we discuss a variant of the two problems, proposed by V.~Milman at the Oberwolfach meeting on Convex Geometry\nand its Applications (December 2015):\n\n\\begin{question}[V. Milman]\\label{question:vitali}Assume that $K$ and $D$ are origin-symmetric convex bodies in ${\\mathbb R}^n$ and satisfy\n\\begin{equation}\\label{eq:intro-5}|P_{\\xi^{\\perp }}(K)|\\leqslant |D\\cap\\xi^{\\perp }|\\end{equation}\nfor all $\\xi\\in S^{n-1}$. Does it follow that $|K|\\leqslant |D|$?\n\\end{question}\n\nIn Section \\ref{sect2} we show that the answer to this question is affirmative. In fact, the lower dimensional analogue of the problem has an affirmative\nanswer. Moreover, one can drop the symmetry assumptions and even the assumption of convexity for $D$.\n\n\\begin{theorem}\\label{th:intro-1}Let $K$ be a convex body in ${\\mathbb R}^n$ and let $D$ be a compact subset of ${\\mathbb R}^n$ such that,\nfor some $1\\leqslant k\\leqslant n-1$,\n\\begin{equation}\\label{eq:intro-6}|P_F(K)|\\leqslant |D\\cap F|\\end{equation}\nfor all $F\\in G_{n,n-k}$. Then,\n\\begin{equation}\\label{eq:intro-7}|K|\\leqslant |D|.\\end{equation}\n\\end{theorem}\nWe also prove stability and separation results for Theorem \\ref{th:intro-1}.\nIn the hyperplane case, and assuming that $K$ and $D$ are centered convex bodies, i.e. their center of mass is at the origin,\nwe can provide a more precise answer in terms of the isotropic constant $L_D$ of $D$.\n\n\\begin{theorem}\\label{th:intro-2}Let $K$ and $D$ be two centered convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}\\label{eq:intro-8}|P_{\\xi^{\\perp }}(K)|\\leqslant |D\\cap\\xi^{\\perp }|\\end{equation}\nfor all $\\xi\\in S^{n-1}$. Then,\n\\begin{equation}\\label{eq:intro-9}|K|\\leqslant \\frac{c}{L_D}\\,|D|,\\end{equation}\nwhere $c>0$ is an absolute constant. \\end{theorem}\n\nThis means that if the hyperplane conjecture is not true then one can even have ``pathologically good'' (with respect to Question \\ref{question:vitali})\npairs of convex bodies. The proof of Theorem \\ref{th:intro-2} carries over to higher codimensions but the dependence on $L_D$ becomes more\ncomplicated and we prefer not to include the full statement of this version.\n\n\\bigbreak\n\nIn Section \\ref{sect3} we collect some estimates on the lower dimensional Busemann-Petty problem.
Let $1\\leqslant k\\leqslant n-1$\nand let $\\beta_{n,k}$ be the smallest constant $\\beta >0$ with the following property: For\nevery pair of centered convex bodies $K$ and $D$ in ${\\mathbb R}^n$ that satisfy\n\\begin{equation}\\label{eq:intro-10}|K\\cap F|\\leqslant |D\\cap F|\\end{equation}\nfor all $F\\in G_{n,n-k}$, one has\n\\begin{equation}\\label{eq:intro-11}|K|^{\\frac{n-k}{n}}\\leqslant \\beta^k\\,|D|^{\\frac{n-k}{n}}.\\end{equation}\nThe following question is open:\n\n\\begin{question}\\label{question:low-dim-BP}{\\sl Is it true that there exists an absolute constant $C_2>0$ such that $\\beta_{n,k}\\leqslant C_2$ for all $n$ and $k$?}\n\\end{question}\n\nBourgain and Zhang \\cite{BZ} showed that $\\beta_{n,k}>1$ if $n-k>3.$ It is not known whether $\\beta_{n,k}$\nhas to be greater than 1 when $n\\geqslant 5$ and $n-k=2$ or $n-k=3.$\nIt was proved in \\cite{Koldobsky-4} and by a different method in \\cite{Chasapis-Giannopoulos-Liakopoulos-2015}\nthat $\\beta_{n,k}\\leqslant C\\sqrt{n\/k}(\\log(en\/k))^{3\/2},$ where $C$ is an absolute constant. In this note, we observe that the answer to Question \\ref{question:low-dim-BP} is affirmative if the convex body $K$ has bounded isotropic constant, as follows.\n\n\\begin{theorem}\\label{th:intro-3}Let $1\\leqslant k\\leqslant n-1$ and let $K$ be a centered convex body in ${\\mathbb R}^n$ and $D$ a compact subset\nof ${\\mathbb R}^n$ such that\n\\begin{equation}\\label{eq:intro-12}|K\\cap F|\\leqslant |D\\cap F|\\end{equation}\nfor all $F\\in G_{n,n-k}$. Then,\n\\begin{equation}\\label{eq:intro-13}|K|^{\\frac{n-k}{n}}\\leqslant (c_0L_K)^k\\,|D|^{\\frac{n-k}{n}},\\end{equation}\nwhere $c_0>0$ is an absolute constant. \\end{theorem}\n\nTheorem \\ref{th:intro-3} is a refinement of the estimate $\\beta_{n,k}\\leqslant cL_n$, which was shown in \\cite{Chasapis-Giannopoulos-Liakopoulos-2015}.\nThe proof is based on estimates from \\cite{Dafnis-Paouris-2012} and on Grinberg's inequality (see \\eqref{eq:main-3} in Section 2).\n\nWe also discuss the lower dimensional Shephard problem. Let $1\\leqslant k\\leqslant n-1$\nand let $S_{n,k}$ be the smallest constant $S >0$ with the following property: For\nevery pair of convex bodies $K$ and $D$ in ${\\mathbb R}^n$ that satisfy\n\\begin{equation}\\label{eq:intro-S-1}|P_F(K)|\\leqslant |P_F(D)|\\end{equation}\nfor all $F\\in G_{n,n-k}$, one has\n\\begin{equation}\\label{eq:intro-S-2}|K|^{\\frac{1}{n}}\\leqslant S\\,|D|^{\\frac{1}{n}}.\\end{equation}\n\n\\begin{question}\\label{question:low-dim-S}{\\sl Is it true that there exists an absolute constant $C_3>0$ such that $S_{n,k}\\leqslant C_3$ for all $n$ and $k$?}\n\\end{question}\nGoodey and Zhang \\cite{GZ} proved that $S_{n,k}>1$ if $n-k>1.$\nIn Section \\ref{sect4} we prove the following result.\n\n\\begin{theorem}\\label{th:intro-S}Let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\end{equation}\nfor every $F\\in G_{n,n-k}$. Then,\n\\begin{equation}|K|^{\\frac{1}{n}}\\leqslant c_1\\sqrt{\\frac{n}{n-k}}\\log\\left (\\frac{en}{n-k}\\right )\\,|D|^{\\frac{1}{n}},\\end{equation}\nwhere $c_1>0$ is an absolute constant. It follows that $S_{n,k}$ is bounded by an absolute constant if $\\frac{k}{n-k}$ is bounded.\n\\end{theorem}\n\nWe also prove a general estimate, which is logarithmic in $n$ and valid for all $k$. 
The proof is based on estimates from \\cite{Paouris-Pivovarov-2013}.\n\n\\begin{theorem}Let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\end{equation}\nfor every $F\\in G_{n,n-k}$. Then,\n\\begin{equation}|K|^{\\frac{1}{n}}\\leqslant \\frac{c_1\\,\\min w(\\tilde{D})}{\\sqrt{n}}\\,|D|^{\\frac{1}{n}}\\leqslant c_2(\\log n)|D|^{\\frac{1}{n}},\\end{equation}\nwhere $c_1,c_2>0$ are absolute constants, $w(A)$ is the mean width of a centered convex body $A$, and the minimum is over all linear\nimages $\\tilde{D}$ of $D$ that have volume $1$.\n\\end{theorem}\n\nLutwak \\cite{Lutwak-1988} proved that the answer to the Busemann-Petty problem is affirmative if the body $K$ with smaller sections belongs to the special class\nof intersection bodies; see the definition below. In Section \\ref{sect5} we prove separation in the Busemann-Petty\nproblem, which can be considered as a refinement of Lutwak's result.\n\n\\begin{theorem} \\label{main-int} Suppose that $\\varepsilon>0$, that $K$ and $D$ are origin-symmetric\nstar bodies in ${\\mathbb R}^n,$ and that $K$ is an intersection body. If\n\\begin{eqnarray}\\label{sect1}\n|K\\cap \\xi^\\bot| \\leqslant |D\\cap \\xi^\\bot| - \\varepsilon,\n\\end{eqnarray}\nfor every $\\xi\\in S^{n-1}$, then\n$$|K|^{\\frac{n-1}n} \\leqslant |D|^{\\frac{n-1}n} - c\\varepsilon \\frac{1}{\\sqrt{n}M(\\overline{K})},$$\nwhere $c>0$ is an absolute constant and $\\overline{K}=|K|^{-\\frac{1}{n}}K$.\n\\end{theorem}\n\nNote that if $\\overline{K}$ is convex and isotropic then \\begin{equation}\\frac{1}{M(\\overline{K})} \\geqslant\\ c_1 \\frac{n^{1\/10}L_K}{\\log^{2\/5}(e+n)}\\geqslant c_2\\frac{n^{1\/10}}{\\log^{2\/5}(e+n)}\\end{equation}\nand if $\\overline{K}$ is convex and is in the minimal mean width position then we have\n\\begin{equation}\\frac{1}{M(\\overline{K})} \\geqslant\\ c_3\\frac{\\sqrt{n}}{\\log (e+n)},\\end{equation}\nso the constant in Theorem \\ref{main-int} does not depend\non the bodies. This is an improvement of a previously known result from \\cite{K??}. Also note that\nstability in the Busemann-Petty problem is easier and was proved in \\cite{K??}, as follows.\nIf $K$ is an intersection body in ${\\mathbb R}^n,$ $D$ is an origin-symmetric star\nbody in ${\\mathbb R}^n$ and $\\varepsilon>0$ so that\n\\begin{equation}|K\\cap \\xi^\\bot|\\leqslant |D\\cap \\xi^\\bot|+\\varepsilon \\end{equation}\nfor every $\\xi\\in S^{n-1}$, then\n\\begin{equation}|K|^{\\frac{n-1}n}\\leqslant |D|^{\\frac{n-1}n} + c_n\\varepsilon,\\end{equation}\nwhere $c_n=|B_2^{n-1}|\/|B_2^n|^{\\frac{n-1}n} < 1.$\nThe constant is optimal. For more results on stability and separation in volume comparison\nproblems and for applications of such results, see \\cite{Koldobsky-3}.\n\n\n\n\n\\section{Milman's variant of the two problems}\\label{sect2}\n\nWe work in ${\\mathbb R}^n$, which is equipped with a Euclidean structure $\\langle\\cdot ,\\cdot\\rangle $. We denote the corresponding\nEuclidean norm by $\\|\\cdot \\|_2$, and write $B_2^n$ for the Euclidean unit ball, and $S^{n-1}$ for the unit sphere. Volume is\ndenoted by $|\\cdot |$. We write $\\omega_n$ for the volume of $B_2^n$ and $\\sigma $ for the rotationally invariant probability measure on\n$S^{n-1}$. We also denote the Haar measure on $O(n)$ by $\\nu $. The Grassmann manifold $G_{n,m}$ of $m$-dimensional subspaces of\n${\\mathbb R}^n$ is equipped with the Haar probability measure $\\nu_{n,m}$. Let $1\\leqslant m\\leqslant n-1$ and $F\\in G_{n,m}$. 
We will denote the\northogonal projection from $\\mathbb R^{n}$ onto $F$ by $P_F$. We also define $B_F=B_2^n\\cap F$ and $S_F=S^{n-1}\\cap F$.\n\nThe letters $c,c^{\\prime }, c_1, c_2$ etc. denote absolute positive constants whose value may change from line to line. Whenever we\nwrite $a\\simeq b$, we mean that there exist absolute constants $c_1,c_2>0$ such that $c_1a\\leqslant b\\leqslant c_2a$. Also if $K,L\\subseteq\n\\mathbb R^n$ we will write $K\\simeq L$ if there exist absolute constants $c_1, c_2>0$ such that $c_{1}K\\subseteq L \\subseteq\nc_{2}K$.\n\nA convex body in ${\\mathbb R}^n$ is a compact convex subset $K$ of\n${\\mathbb R}^n$ with nonempty interior. We say that $K$ is origin-symmetric if $K=-K$. We say that $K$ is centered if\nthe center of mass of $K$ is at the origin, i.e.~$\\int_K\\langle\nx,\\theta\\rangle \\,d x=0$ for every $\\theta\\in S^{n-1}$. We denote by ${\\cal K}_n$ the class of centered\nconvex bodies in ${\\mathbb R}^n$. The support\nfunction of $K$ is defined by $h_K(y):=\\max \\bigl\\{\\langle x,y\\rangle :x\\in K\\bigr\\}$, and\nthe mean width of $K$ is the average\n\\begin{equation}\\label{eq:not-2}w(K):=\\int_{S^{n-1}}h_K(\\theta )\\,d\\sigma (\\theta )\\end{equation}\nof $h_K$ on $S^{n-1}$. For basic facts from the Brunn-Minkowski theory and\nthe asymptotic theory of convex bodies we refer to the books \\cite{Schneider-book} and \\cite{AGA-book} respectively.\n\n\\smallskip\n\nThe proof of Theorem \\ref{th:intro-1} is based on two classical results:\n\\begin{enumerate}\n\\item {\\sl Aleksandrov's inequalities.} If $K$ is a convex body in ${\\mathbb R}^n$ then the sequence\n\\begin{equation}\\label{eq:main-1}Q_k(K)=\\left\n(\\frac{1}{\\omega_k}\\int_{G_{n,k}}|P_F(K)|\\,d\\nu_{n,k}(F)\\right\n)^{1\/k}\\end{equation}is decreasing in $k$. This is a consequence of the Aleksandrov-Fenchel inequality (see \\cite{Burago-Zalgaller-book}\nand \\cite{Schneider-book}). In particular, for every $1\\leqslant k\\leqslant n-1$ we have\n\\begin{equation}\\label{eq:main-2}\\left (\\frac{|K|}{\\omega_n}\\right )^{\\frac{1}{n}}\\leqslant \\left (\\frac{1}{\\omega_k}\\int_{G_{n,k}}|P_F(K)|\\,d\\nu_{n,k}(F)\\right )^{\\frac{1}{k}}\\leqslant w(K),\\end{equation}\nwhere $w(K)$ is the mean width of $K$.\n\\item {\\sl Grinberg's inequality.} If $D$ is a compact set in ${\\mathbb R}^n$ then, for any $1\\leqslant k\\leqslant n-1$,\n\\begin{equation}\\label{eq:main-3}\\tilde{R}_k(D):=\\frac{1}{|D|^{n-k}}\\int_{G_{n,n-k}}|D\\cap F|^n\\,d\\nu_{n,n-k}(F)\\leqslant \\frac{1}{|B_2^n|^{n-k}}\\int_{G_{n,n-k}}|B_2^n\\cap F|^n\\,d\\nu_{n,n-k}(F),\\end{equation}\nwhere $B_2^m$ is the Euclidean ball in ${\\mathbb R}^m$ and $\\omega_m=|B_2^m|$. This fact was proved by Grinberg in \\cite{Grinberg-1990}.\nIt is useful to note that\n\\begin{equation}\\label{eq:main-4}\\tilde{R}_k(B_2^n):=\\frac{\\omega_{n-k}^n}{\\omega_n^{n-k}}\\leqslant e^{\\frac{kn}{2}}.\\end{equation}\nMoreover, Grinberg proved that the quantity $\\tilde{R}_k(D)$ on the left hand side of \\eqref{eq:main-3} is invariant under $T\\in GL(n)$: one has\n\\begin{equation}\\label{eq:main-5}\\tilde{R}_k(T(D))=\\tilde{R}_k(D)\\end{equation}\nfor every $T\\in GL(n)$.\n\\end{enumerate}\n\n\n\\smallskip\n\n\\noindent {\\bf Proof of Theorem \\ref{th:intro-1}.} Let $K$ be a convex body in ${\\mathbb R}^n$ and $D$ be a compact subset of ${\\mathbb R}^n$. Assume that for some\n$1\\leqslant k\\leqslant n-1$ we have\n\\begin{equation}\\label{eq:main-6}|P_F(K)|\\leqslant |D\\cap F|\\end{equation}\nfor all $F\\in G_{n,n-k}$. 
From \\eqref{eq:main-2} we get\n\\begin{equation}\\label{eq:main-7}\\left (\\frac{|K|}{\\omega_n}\\right )^{\\frac{n-k}{n}}\\leqslant \\frac{1}{\\omega_{n-k}}\\int_{G_{n,n-k}}|P_F(K)|\\,d\\nu_{n,n-k}(F).\\end{equation}\nOur assumption, H\\\"{o}lder's inequality and Grinberg's inequality give\n\\begin{align}\\label{eq:main-8}&\\frac{1}{\\omega_{n-k}}\\int_{G_{n,n-k}}|P_F(K)|\\,d\\nu_{n,n-k}(F) \\leqslant \\frac{1}{\\omega_{n-k}}\\int_{G_{n,n-k}}|D\\cap F|\\,d\\nu_{n,n-k}(F)\\\\\n\\nonumber &\\hspace*{1.5cm} \\leqslant \\frac{1}{\\omega_{n-k}}\\left (\\int_{G_{n,n-k}}|D\\cap F|^n\\,d\\nu_{n,n-k}(F)\\right )^{\\frac{1}{n}}\\\\\n\\nonumber &\\hspace*{1.5cm}\\leqslant \\frac{1}{\\omega_{n-k}}\\,\\frac{\\omega_{n-k}}{\\omega_n^{\\frac{n-k}{n}}}|D|^{\\frac{n-k}{n}}= \\left (\\frac{|D|}{\\omega_n}\\right )^{\\frac{n-k}{n}}.\n\\end{align}\nTherefore, $|K|\\leqslant |D|$. $\\quad \\hfill \\Box$\n\n\\medskip\n\n\\begin{remark}\\label{stability}\\rm Slightly modifying the proof of Theorem \\ref{th:intro-1} one can get\nstability and separation results, as follows. Let $\\varepsilon>0,$ and let $K$ and $D$ be as in Theorem \\ref{th:intro-1}.\nSuppose that for every $F\\in G_{n,n-k}$\n$$|P_F(K)| \\le |D\\cap F| \\pm \\varepsilon.$$\nThen\n$$|K|^{\\frac{n-k}n} \\le |D|^{\\frac{n-k}n} \\pm \\gamma_{n,k}\\varepsilon,$$\nwhere\n$\\gamma_{n,k} = \\frac{\\omega_n^{\\frac{n-k}{n}}}{\\omega_{n-k}}\\in (e^{-k\/2}, 1).$ The plus sign corresponds to stability,\nthe minus sign to separation. Assuming that $\\varepsilon =\\max_F (|P_F(K)| - |D\\cap F|)$ in the stability result, we get\n$$|K|^{\\frac{n-k}n} - |D|^{\\frac{n-k}n} \\le \\gamma_{n,k} \\max_F (|P_F(K)| - |D\\cap F|).$$\nOn the other hand, if $\\varepsilon =\\min_F (|D\\cap F| - |P_F(K)|)$ in the separation result, then\n$$|D|^{\\frac{n-k}n} - |K|^{\\frac{n-k}n} \\ge \\gamma_{n,k} \\min_F (|D\\cap F| - |P_F(K)|).$$\n\\end{remark}\n\n\\medskip\n
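\nEuclidean balls show that Theorem \\ref{th:intro-1} is sharp (an immediate example): if $K=D=rB_2^n$, then for every $F\\in G_{n,n-k}$\n\\begin{equation}|P_F(K)|=r^{n-k}\\omega_{n-k}=|D\\cap F|,\\end{equation}\nso the hypothesis \\eqref{eq:intro-6} holds with equality, and the conclusion $|K|\\leqslant |D|$ is an equality as well.\n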
\n\n\n\n\\section{Estimates for the lower dimensional Busemann-Petty problem}\\label{sect3}\n\nIn this section we provide some estimates for the lower dimensional Busemann-Petty problem. We need the next lemma, in which we collect known estimates about the quantities\n\\begin{equation}G_{n,k}(A):=\\left (\\int_{G_{n,n-k}}|A\\cap F|^n\\,d\\nu_{n,n-k}(F)\\right )^{\\frac{1}{kn}},\\end{equation}\nwhere $A$ is a centered convex body in ${\\mathbb R}^n$. The proofs of \\eqref{eq:main-9} and \\eqref{eq:main-10} can be found\nin \\cite{Dafnis-Paouris-2012}, while \\eqref{eq:main-11} follows from \\eqref{eq:main-3} and \\eqref{eq:main-4}.\n\n\\begin{lemma}\\label{lem:main-2}Let $A$ be a centered convex body in ${\\mathbb R}^n$. Then,\n\\begin{equation}\\label{eq:main-9}\\frac{c_1}{L_A}|A|^{\\frac{n-k}{kn}}\\leqslant G_{n,k}(A)\\leqslant \\frac{c_2L_k}{L_A}|A|^{\\frac{n-k}{kn}}\\leqslant \\frac{c_3\\sqrt[4]{k}}{L_A}|A|^{\\frac{n-k}{kn}}.\\end{equation}\nMoreover,\n\\begin{equation}\\label{eq:main-10}G_{n,k}(A)\\leqslant c_4\\sqrt{n\/k}\\,(\\log (en\/k))^{\\frac{3}{2}}|A|^{\\frac{n-k}{kn}}.\\end{equation}\nFinally, for every compact subset $D$ of ${\\mathbb R}^n$ we have\n\\begin{equation}\\label{eq:main-11}G_{n,k}(D)\\leqslant\\sqrt{e}|D|^{\\frac{n-k}{kn}}.\\end{equation}\n\\end{lemma}\n\nUsing Lemma \\ref{lem:main-2} we show that the lower dimensional Busemann-Petty problem (Question \\ref{question:low-dim-BP})\nhas an affirmative answer if the body $K$ has bounded isotropic constant.\n\n\\medskip\n\n\\noindent {\\bf Proof of Theorem \\ref{th:intro-3}.} Since $|K\\cap F|\\leqslant |D\\cap F|$ for all $F\\in G_{n,n-k}$, we know that\n\\begin{equation}\\label{eq:LBP-1}G_{n,k}(K)\\leqslant G_{n,k}(D).\\end{equation}\nUsing \\eqref{eq:main-9} and \\eqref{eq:main-11} we write\n\\begin{equation}\\label{eq:LBP-2}\\frac{c_1}{L_K}|K|^{\\frac{n-k}{kn}} \\leqslant G_{n,k}(K)\\leqslant G_{n,k}(D)\n\\leqslant\\sqrt{e}|D|^{\\frac{n-k}{kn}},\\end{equation}\nand the result follows. $\\quad \\hfill \\Box$\n\n\\begin{remark}\\label{rem:LBP-1}\\rm Theorem \\ref{th:intro-3} shows that if $K$ belongs to the class\n\\begin{equation}\\label{eq:class-alpha}{\\cal K}_n(\\alpha ):=\\{K\\in {\\cal K}_n: L_K\\leqslant\\alpha\\}\\end{equation}\nfor some $\\alpha >0$, then for every compact set $D$ in ${\\mathbb R}^n$ which satisfies $|K\\cap F|\\leqslant |D\\cap F|$ for all $F\\in G_{n,n-k}$ we have\n\\begin{equation}\\label{eq:LBP-5}|K|^{\\frac{n-k}{n}}\\leqslant (c_0\\alpha )^k\\,|D|^{\\frac{n-k}{n}}.\\end{equation}\nClasses of convex bodies with uniformly bounded isotropic constant include: unconditional convex bodies,\nconvex bodies whose polar bodies contain large affine cubes, the\nunit balls of $2$-convex spaces with a given constant\n$\\alpha $, bodies with small diameter (in particular, the class of\nzonoids) and the unit balls of the Schatten classes (see \\cite[Chapter 4]{BGVV-book}).\\end{remark}\n\n\\begin{example}\\label{ex:beta-lower}\\rm K. Ball has proved in \\cite{Ball-1989} that for every $1\\leqslant k\\leqslant n-1$ and every $F\\in G_{n,n-k}$\nwe have\n\\begin{equation}|Q_n\\cap F|\\leqslant 2^{\\frac{k}{2}},\\end{equation}\nwhere $Q_n$ is the cube of volume $1$ in ${\\mathbb R}^n$. Consider the ball $B_{n,k}=r_{n,k}B_2^n$, where\n\\begin{equation}\\omega_{n-k}r_{n,k}^{n-k}=2^{\\frac{k}{2}}.\\end{equation}\nThen, for every $F\\in G_{n,n-k}$ we have\n\\begin{equation}|Q_n\\cap F|\\leqslant |B_{n,k}\\cap F|.\\end{equation}\nTherefore,\n\\begin{equation}1=|Q_n|^{\\frac{n-k}{n}}\\leqslant \\beta_{n,k}^k|B_{n,k}|^{\\frac{n-k}{n}}=\\beta_{n,k}^k\\omega_n^{\\frac{n-k}{n}}r_{n,k}^{n-k}=2^{\\frac{k}{2}}\\beta_{n,k}^k\\frac{\\omega_n^{\\frac{n-k}{n}}}{\\omega_{n-k}}.\\end{equation}\nThis proves that\n\\begin{equation}\\beta_{n,k}\\geqslant \\frac{1}{\\sqrt{2}}\\left (\\frac{\\omega_{n-k}}{\\omega_n^{\\frac{n-k}{n}}}\\right )^{\\frac{1}{k}}\\sim \\frac{1}{\\sqrt{2}}\\left (\\frac{n}{n-k}\\right )^{\\frac{n-k+1}{2k}}\\end{equation}\nas $n,k\\to\\infty $. Fix $d\\geqslant 2$ and consider $n$ and $k$ that satisfy $n=(d+1)k$. 
Then, we have the following:\n\\end{example}\n\n\\begin{proposition}\\label{prop:beta-lower-bound}For every $d\\geqslant 2$ there exists $k(d)\\in {\\mathbb N}$ such that\n\\begin{equation}\\beta_{(d+1)k,k}\\geqslant \\frac{1}{\\sqrt{2}}\\left (1+\\frac{1}{d}\\right )^{\\frac{d}{2}}>1\\end{equation}\nfor all $k\\geqslant k(d)$. $\\quad \\hfill \\Box$ \\end{proposition}\n\n\n\n\\medskip\n\nA variant of the proof of Theorem \\ref{th:intro-3} (based again on Lemma \\ref{lem:main-2}) establishes Theorem \\ref{th:intro-2}.\n\n\\medskip\n\n\\noindent {\\bf Proof of Theorem \\ref{th:intro-2}.} Let $K$ be a convex body in ${\\mathbb R}^n$ and $D$ be a compact subset of ${\\mathbb R}^n$\nsuch that $|P_{\\xi^{\\perp }}(K)|\\leqslant |D\\cap\\xi^{\\perp }|$ for every $\\xi\\in S^{n-1}$. From Lemma \\ref{lem:main-2} we know that\n\\begin{equation}\\label{eq:main-12}G_{n,1}(D)\\leqslant \\frac{c_1}{L_D}|D|^{\\frac{n-1}{n}},\\end{equation}\nwhere $c_1>0$ is an absolute constant. Then,\n\\begin{align}\\label{eq:main-13}\n\\left (\\frac{|K|}{\\omega_n}\\right )^{\\frac{n-1}{n}} &\\leqslant \\frac{1}{\\omega_{n-1}}\\int_{S^{n-1}}|P_{\\xi^{\\perp }}(K)|\\,d\\sigma (\\xi )\n\\leqslant \\frac{1}{\\omega_{n-1}}\\int_{S^{n-1}}|D\\cap \\xi^{\\perp }|\\,d\\sigma (\\xi )\\\\\n\\nonumber &\\leqslant \\frac{1}{\\omega_{n-1}}\\left (\\int_{S^{n-1}}|D\\cap \\xi^{\\perp }|^n\\,d\\sigma (\\xi )\\right )^{\\frac{1}{n}}\\\\\n\\nonumber &= \\frac{1}{\\omega_{n-1}}G_{n,1}(D)\\leqslant \\frac{c_1}{\\omega_{n-1}L_D}|D|^{\\frac{n-1}{n}},\n\\end{align}\nwhich implies that\n\\begin{equation}\\label{eq:main-14}|K|\\leqslant \\frac{c_2\\omega_n}{(\\omega_{n-1}L_D)^{\\frac{n}{n-1}}}\\,|D|\\leqslant \\frac{c_3}{L_D}\\,|D|,\\end{equation}\nwhere $c_2,c_3>0$ are absolute constants. $\\quad \\hfill \\Box$\n\n\\section{Estimates for the lower dimensional Shephard problem}\\label{sect4}\n\nIn this section we discuss the lower dimensional Shephard problem. First, we recall some facts for\nthe class of zonoids. A zonoid is a limit of Minkowski sums of line segments in the Hausdorff metric. Equivalently, a\nsymmetric convex body $Z$ is a zonoid if and only if its polar body is the unit ball of an $n$-dimensional subspace of an $L_1$-space;\ni.e. if there exists a positive measure $\\mu $ (the supporting measure of $Z$) on $S^{n-1}$ such that\n\\begin{equation*}h_Z(x)=\\| x\\|_{Z^{\\circ }}=\\frac{1}{2}\\int_{S^{n-1}}|\\langle x,y\\rangle |d\\mu (y).\\end{equation*}\nThe class of origin-symmetric zonoids coincides with the class of projection bodies. Recall that the projection body $\\Pi K$ of a convex body $K$ is the\nsymmetric convex body whose support function is defined by\n\\begin{equation*}h_{\\Pi K} (\\xi )=|P_{\\xi^{\\perp } }(K)|, \\qquad \\xi\\in S^{n-1}.\\end{equation*}\nFrom Cauchy's formula\n\\begin{equation*}|P_{\\xi^{\\perp } }(K)|={\\frac{1}{2}}\\int_{S^{n-1}}|\\langle u,\\xi \\rangle |\\;d\\sigma_K(u),\\end{equation*}\nwhere $\\sigma_K$ is the surface area measure of $K$, it follows that the projection body of $K$ is a zonoid whose\nsupporting measure is $\\sigma_K$. Minkowski's existence theorem implies that, conversely, every zonoid\nis the projection body of some symmetric convex body in ${\\mathbb R}^n$.\n
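\nA concrete example to keep in mind: the cube $[-1,1]^n=\\sum_{i=1}^n[-e_i,e_i]$ is a zonoid, since\n\\begin{equation*}h_{[-1,1]^n}(x)=\\sum_{i=1}^n|\\langle x,e_i\\rangle |,\\end{equation*}\nwhich is of the above form with supporting measure $\\mu =\\sum_{i=1}^n(\\delta_{e_i}+\\delta_{-e_i})$; similarly, the Euclidean ball is a zonoid whose supporting measure is a multiple of $\\sigma $.\n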
\nZonoids play a central role in the study of the original Shephard problem: suppose that $K$ is a convex body in ${\\mathbb R}^n$\nand $Z$ is a zonoid in ${\\mathbb R}^n$, and that $|P_{\\xi^{\\perp }}(K)|\\leqslant |P_{\\xi^{\\perp }}(Z)|$ for all $\\xi \\in S^{n-1}$. Then,\n\\begin{equation}|K|\\leqslant |Z|.\\end{equation}\nThe proof involves writing $Z=\\Pi D$ for some convex body $D$, using the identity $V_1(K,\\Pi D)=V_1(D,\\Pi K)$ (where $V_1(A,B)$ is the\nmixed volume $V(A,\\ldots ,A,B)$), the hypothesis in the form $\\Pi (K)\\subseteq \\Pi (Z)$, and the\nmonotonicity of $V_1(D,\\cdot )$, to write\n\\begin{equation*}|Z|=V_1(Z,Z)=V_1(Z,\\Pi D)=V_1(D,\\Pi Z)\\geqslant V_1(D,\\Pi (K))=\nV_1(K,\\Pi D)=V_1(K,Z)\\geqslant |K|^{{\\frac{n-1}{n}}}|Z|^{\\frac{1}{n}},\\end{equation*}\nwhere in the last step we also employ Minkowski's first inequality. This shows that $|Z|\\geqslant |K|$.\n\nSince any projection of a zonoid is a zonoid, using an inductive argument we can prove the following (for a detailed account\non this topic, see \\cite[Chapter 4]{Gardner-book}).\n\n\\begin{theorem}\\label{th:SZ-1}Let $K$ be a convex body and let $Z$ be a zonoid in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(Z)|\\end{equation}\nfor every $F\\in G_{n,n-k}$. Then,\n\\begin{equation}\\label{eq:SZ-1}|K|\\leqslant |Z|.\\end{equation}\n\\end{theorem}\n\nUsing Theorem \\ref{th:SZ-1} and the fact that every ellipsoid is a zonoid, we can give a simple bound for the constants $S_{n,k}$.\n\n\\begin{proposition}\\label{prop:SZ-simple}For all $n$ and $1\\leqslant k\\leqslant n-1$ we have $S_{n,k}\\leqslant c_0\\sqrt{n}$, where $c_0>0$ is an absolute constant.\\end{proposition}\n\n\\noindent {\\it Proof.} Let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that $|P_F(K)|\\leqslant |P_F(D)|$ for every $F\\in G_{n,n-k}$. There\nexists an ellipsoid ${\\cal E}$ in ${\\mathbb R}^n$ such that $D\\subseteq {\\cal E}$ and $|{\\cal E}|^{\\frac{1}{n}}\\leqslant c_0\\sqrt{n}\\,|D|^{\\frac{1}{n}}$,\nwhere $c_0>0$ is an absolute constant (for example, see \\cite{Ball-1991c} where a sharp estimate for $c_0$ is also given). Since $D\\subseteq {\\cal E}$,\nwe have\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\leqslant |P_F({\\cal E})|\\end{equation}\nfor all $F\\in G_{n,n-k}$. Since ${\\cal E}$ is a zonoid, Theorem \\ref{th:SZ-1} implies that\n\\begin{equation}|K|^{\\frac{1}{n}}\\leqslant |{\\cal E}|^{\\frac{1}{n}}\\leqslant c_0\\sqrt{n}\\,|D|^{\\frac{1}{n}}.\\end{equation}\nThis shows that $S_{n,k}\\leqslant c_0\\sqrt{n}$. $\\quad \\hfill \\Box$\n\n\\medskip\n\nWe can elaborate on this argument if we use Pisier's theorem from \\cite{Pisier-1989} on the existence of $\\alpha $-regular $M$-ellipsoids for\nsymmetric convex bodies in ${\\mathbb R}^n$ (see \\cite[Theorem 1.13.3]{BGVV-book}):\n\n\\begin{theorem}[Pisier]\\label{th:general-pisier-alpha-regular}\nFor every $0<\\alpha <2$ and every symmetric convex body $A$ in ${\\mathbb\nR}^n$, there exists an ellipsoid ${\\cal{E}_{\\alpha }}$ such that\n\\begin{equation*}\\max\\{ N(A,t{\\cal{E}_{\\alpha }}),N({\\cal{E}_{\\alpha }},tA)\\}\\leqslant\\exp\n\\left (\\frac{c(\\alpha )n}{t^{\\alpha }}\\right )\\end{equation*} for\nevery $t\\geqslant 1$, where $c(\\alpha )$ is a constant depending only on\n$\\alpha $ and satisfies $c(\\alpha )=O\\big ((2-\\alpha )^{-\\alpha\/2}\\big )$ as\n$\\alpha\\to 2$.\n\\end{theorem}\n\n
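\nIn the proof below we will use twice the elementary volumetric consequence of covering numbers (recorded here for the reader's convenience): if $A$ can be covered by $N(A,B)$ translates of $B$ then\n\\begin{equation*}|A|\\leqslant N(A,B)\\,|B|,\\end{equation*}\nand the same estimate applies to the projections $P_F(A)$ and $P_F(B)$ inside any subspace $F$.\n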
\n\\begin{theorem}\\label{th:SZ-3}Let $1\\leqslant m\\leqslant n-1$ and let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\end{equation}\nfor every $F\\in G_{n,m}$. Then,\n\\begin{equation}|K|^{\\frac{1}{n}}\\leqslant c_1\\sqrt{\\frac{n}{m}}\\log\\left(\\frac{en}{m}\\right )\\,|D|^{\\frac{1}{n}},\\end{equation}\nwhere $c_1>0$ is an absolute constant.\n\\end{theorem}\n\n\\noindent {\\it Proof.} Consider the difference body $D-D$ of $D$, and the ellipsoid ${\\cal E}_{\\alpha }$ from Theorem \\ref{th:general-pisier-alpha-regular} that corresponds to $A=D-D$, where $\\alpha\\in (0,2)$\nwill be chosen at the end of the proof. Note that\n\\begin{equation}N({\\cal E}_{\\alpha },c(\\alpha )^{1\/\\alpha }(D-D))\\leqslant e^n,\\end{equation}\ntherefore\n\\begin{equation}\\label{eq:basic}|{\\cal E}_{\\alpha }|^{\\frac{1}{n}}\\leqslant ec(\\alpha )^{1\/\\alpha }|D-D|^{\\frac{1}{n}}.\\end{equation}\nSince\n\\begin{equation}N(P_F(D-D), P_F(t{\\cal E}_{\\alpha }))\\leqslant N(D-D,t{\\cal{E}_{\\alpha }})\\leqslant \\exp\n\\left (\\frac{c(\\alpha )n}{t^{\\alpha }}\\right )\\end{equation}\nfor every $F\\in G_{n,m}$, we have\n\\begin{equation}|P_F(D-D)|\\leqslant \\exp\n\\left (\\frac{c(\\alpha )n}{t_{n,m,\\alpha }^{\\alpha }}\\right )|P_F(t_{n,m,\\alpha }{\\cal E}_{\\alpha })|=|P_F(et_{n,m,\\alpha }{\\cal E}_{\\alpha })|\\end{equation}\nif we choose\n\\begin{equation}t_{n,m,\\alpha }=\\left (\\frac{c(\\alpha )n}{m}\\right )^{\\frac{1}{\\alpha }}.\\end{equation}\nNow, if we set ${\\cal E}:=et_{n,m,\\alpha }{\\cal E}_{\\alpha }$, we have\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\leqslant |P_F(D-D)|\\leqslant |P_F({\\cal E})|\\end{equation}\nfor every $F\\in G_{n,m}$, and since ${\\cal E}$ is a zonoid, Theorem \\ref{th:SZ-1} shows that $|K|\\leqslant |{\\cal E}|$.\nUsing also \\eqref{eq:basic} and the fact that $c(\\alpha )=O\\big ((2-\\alpha )^{-\\alpha\/2}\\big )$, we get\n\\begin{equation}|K|^{\\frac{1}{n}}\\leqslant et_{n,m,\\alpha }|{\\cal E}_{\\alpha }|^{\\frac{1}{n}}\\leqslant e^2t_{n,m,\\alpha }c(\\alpha )^{1\/\\alpha }|D-D|^{\\frac{1}{n}}\n\\leqslant \\frac{c_1}{2-\\alpha }\\left (\\frac{n}{m}\\right )^{\\frac{1}{\\alpha }}|D|^{\\frac{1}{n}}, \\end{equation}\nwhere $c_1>0$ is an absolute constant (we have also used the fact that $|D-D|^{\\frac{1}{n}}\\leqslant 4|D|^{\\frac{1}{n}}$ by the Rogers-Shephard\ninequality). Choosing $\\alpha =2-\\frac{1}{\\log\\left(\\frac{en}{m}\\right )}$ we get the result. $\\quad \\hfill \\Box$\n\n\\begin{remark}\\label{rem:lutwak-conj}\\rm The lower dimensional Shephard problem is related to Lutwak's conjectures about\nthe affine quermassintegrals: for every convex body $K$ in ${\\mathbb R}^n$ and every $1\\leqslant m\\leqslant\nn-1$, the quantities\n\\begin{equation}\\Phi_{n-m}(K)=\\frac{\\omega_n}{\\omega_m}\\left\n(\\int_{G_{n,m}}|P_F(K)|^{-n}d\\nu_{n,m}(F)\\right )^{-1\/n}\\end{equation}\nwere introduced by Lutwak in \\cite{Lutwak-1984} (and Grinberg proved in \\cite{Grinberg-1990}\nthat these quantities are invariant under volume preserving affine transformations).\nLutwak conjectured in \\cite{Lutwak-1988b} that the affine quermassintegrals satisfy the inequalities\n\\begin{equation}\\omega_n^j\\Phi_i(K)^{n-j}\\leqslant \\omega_n^i\\Phi_j(K)^{n-i}\\end{equation}\nfor all $0\\leqslant i<j<n$. In particular, it is conjectured that there exist absolute constants $c_1,c_2>0$ such that for every convex body $K$\nin $\\mathbb R^n$ and every $1\\leqslant m \\leqslant n-1$,\n\\begin{equation}\\label{eq:phi-conj}c_1\\sqrt{n\/m}\\,|K|^{\\frac{1}{n}}\\leqslant \\left (\\int_{G_{n,m}}|P_F(K)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\\leqslant c_2\\sqrt{n\/m}\\,|K|^{\\frac{1}{n}}.\\end{equation}\nAssuming \\eqref{eq:phi-conj} we can give an affirmative answer to Question \\ref{question:low-dim-S}. 
Indeed, let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n$|P_F(K)|\\leqslant |P_F(D)|$ for every $F\\in G_{n,n-k}$. We write\n\\begin{align}c_1\\sqrt{n\/(n-k)}\\,|K|^{\\frac{1}{n}}&\\leqslant \\left (\\int_{G_{n,n-k}}|P_F(K)|^{-n}\\,d\\nu_{n,n-k}(F)\\right )^{-\\frac{1}{(n-k)n}}\\\\\n\\nonumber &\\leqslant \\left (\\int_{G_{n,n-k}}|P_F(D)|^{-n}\\,d\\nu_{n,n-k}(F)\\right )^{-\\frac{1}{(n-k)n}}\\leqslant c_2\\sqrt{n\/(n-k)}\\,|D|^{\\frac{1}{n}},\\end{align}\nand this shows that $|K|^{\\frac{1}{n}}\\leqslant (c_2\/c_1)\\,|D|^{\\frac{1}{n}}$.\n\\end{remark}\n\nThe left hand side of \\eqref{eq:phi-conj} was proved by Paouris and Pivovarov in \\cite{Paouris-Pivovarov-2013}:\n\n\\begin{theorem}[Paouris-Pivovarov]\\label{th:main-3}Let $A$ be a convex body in ${\\mathbb R}^n$ and let $1\\leqslant m\\leqslant n-1$. Then,\n\\begin{equation}\\label{eq:main-17}\\left (\\int_{G_{n,m}}|P_F(A)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\\geqslant c\\sqrt{n\/m}|A|^{\\frac{1}{n}}.\\end{equation}\n\\end{theorem}\n\nUsing this fact one can obtain the following.\n\n\\begin{proposition}\\label{prop:SP-1}Let $1\\leqslant m\\leqslant n-1$ and let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\end{equation} for every $F\\in G_{n,m}$. Then,\n\\begin{equation}\\label{eq:SP-1}|K|^{\\frac{1}{n}}\\leqslant \\frac{c\\,\\min w(\\tilde{D})}{\\sqrt{n}}\\,|D|^{\\frac{1}{n}},\\end{equation}\nwhere $c>0$ is an absolute constant, $w(A)$ is the mean width of a centered convex body $A$, and the minimum is over all linear\nimages $\\tilde{D}$ of $D$ that have volume $1$.\n\\end{proposition}\n\n\\noindent {\\it Proof.} Our assumption implies that\n\\begin{equation}\\left (\\int_{G_{n,m}}|P_F(K)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\\leqslant\n\\left (\\int_{G_{n,m}}|P_F(D)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}.\\end{equation}\nBy the linear invariance of $\\Phi_{n-m}(D)$, for any $\\tilde{D}=T(D)$ where $T\\in GL(n)$ and $|\\tilde{D}|=1$, we have\n\\begin{equation}\\left (\\int_{G_{n,m}}|P_F(D)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}=\n|D|^{\\frac{1}{n}}\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}.\\end{equation}\nNow, using H\\\"{o}lder's inequality we write\n\\begin{equation}\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\\leqslant\n\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|\\,d\\nu_{n,m}(F)\\right )^{\\frac{1}{m}}.\\end{equation}\nFrom Aleksandrov's inequalities we have \n\\begin{equation}\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|\\,d\\nu_{n,m}(F)\\right )^{\\frac{1}{m}}\\leqslant \\omega_m^{\\frac{1}{m}}w(\\tilde{D})\\leqslant c_2\\sqrt{n\/m}\\frac{w(\\tilde{D})}{\\sqrt{n}}.\\end{equation}\nTaking into account Theorem \\ref{th:main-3} we get\n\\begin{align}c\\sqrt{n\/m}|K|^{\\frac{1}{n}}&\\leqslant \\left (\\int_{G_{n,m}}|P_F(K)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\n\\leqslant \\left (\\int_{G_{n,m}}|P_F(D)|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\\\\\n\\nonumber &= |D|^{\\frac{1}{n}}\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|^{-n}\\,d\\nu_{n,m}(F)\\right )^{-\\frac{1}{mn}}\n\\leqslant |D|^{\\frac{1}{n}}\\left (\\int_{G_{n,m}}|P_F(\\tilde{D})|\\,d\\nu_{n,m}(F)\\right )^{\\frac{1}{m}}\n\\leqslant c_2\\sqrt{n\/m}\\frac{w(\\tilde{D})}{\\sqrt{n}}|D|^{\\frac{1}{n}},\\end{align}\nand the result follows. $\\quad \\hfill \\Box$
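\n\nFor orientation: when $D$ is an ellipsoid the bound \\eqref{eq:SP-1} already has the optimal order, since the volume-one image $\\tilde{D}=\\omega_n^{-1\/n}B_2^n$ has $w(\\tilde{D})=2\\omega_n^{-1\/n}\\simeq \\sqrt{n}$, and then the right hand side of \\eqref{eq:SP-1} is $\\simeq |D|^{\\frac{1}{n}}$.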
\n\n\\medskip\n\nAs a corollary we have:\n\n\\begin{theorem}\\label{th:SP-2}Let $1\\leqslant m\\leqslant n-1$ and let $K$ and $D$ be two convex bodies in ${\\mathbb R}^n$ such that\n\\begin{equation}|P_F(K)|\\leqslant |P_F(D)|\\end{equation} for every $F\\in G_{n,m}$. Then,\n\\begin{equation}\\label{eq:SP-2}|K|^{\\frac{1}{n}}\\leqslant c(\\log n)\\,|D|^{\\frac{1}{n}},\\end{equation}\nwhere $c>0$ is an absolute constant.\n\\end{theorem}\n\n\\noindent {\\it Proof.} If $\\tilde{D}$ is a volume-one linear image of $D$ which is in the minimal mean width position, we have (see \\cite[Chapter 6]{AGA-book}) \n\\begin{equation}w(\\tilde{D})\\leqslant c_1\\sqrt{n}\\log n.\\end{equation}\nThe result follows from Proposition \\ref{prop:SP-1}. $\\quad \\hfill \\Box$\n\n\n\n\\section{Separation in the Busemann-Petty problem}\\label{sect5}\n\nFor the proof of Theorem \\ref{main-int} we need several definitions from convex geometry. A closed bounded set $K$ in ${\\mathbb R}^n$ is called a star body if\nevery straight line passing through the origin crosses the boundary of $K$ at exactly two points different from the origin,\nthe origin is an interior point of $K$, and the Minkowski functional of $K$ defined by\n\\begin{equation}\\label{eq:sep-2}\\|x\\|_K = \\min\\{a\\geqslant 0:\\ x\\in aK\\}\\end{equation}\nis a continuous function on ${\\mathbb R}^n$.\n\nThe radial function of a star body $K$ is defined by\n\\begin{equation}\\label{eq:sep-3}\\rho_K(x) = \\|x\\|_K^{-1}, \\qquad x\\in {\\mathbb R}^n,\\ x\\neq 0.\\end{equation}\nIf $x\\in S^{n-1}$ then $\\rho_K(x)$ is the radius of $K$ in the direction of $x$.\n\nWe use the polar formula for the volume of a star body\n\\begin{equation}\\label{polar-volume}|K|=\\frac{1}{n}\\int_{S^{n-1}} \\|\\theta\\|_K^{-n} d\\theta,\n\\end{equation}\nwhere $d\\theta$ stands for the uniform measure on the sphere with density 1.\n\nThe class of intersection bodies was introduced by Lutwak in \\cite{Lutwak-1988}. Let $K, D$ be origin-symmetric star bodies in ${\\mathbb R}^n.$ We say that $K$ is the\nintersection body of $D$ and write $K=ID$ if the radius of $K$ in every direction is equal to the $(n-1)$-dimensional volume of the section of $D$ by the central\nhyperplane orthogonal to this direction, i.e. for every $\\xi\\in S^{n-1},$\n\\begin{align}\\label{eq:sep-4}\n\\rho_K(\\xi) &= \\|\\xi\\|_K^{-1} = |D\\cap \\xi^\\bot|= \\frac 1{n-1} \\int_{S^{n-1}\\cap \\xi^\\bot} \\|\\theta\\|_D^{-n+1}d\\theta \\\\\n\\nonumber &=\\frac 1{n-1} R\\left(\\|\\cdot\\|_D^{-n+1}\\right)(\\xi),\n\\end{align}\nwhere $R:C(S^{n-1})\\to C(S^{n-1})$ is the {\\it spherical Radon transform}\n\\begin{equation}\\label{eq:sep-5}Rf(\\xi)=\\int_{S^{n-1}\\cap \\xi^\\bot} f(x) dx,\\qquad \\hbox{for all}\\; f\\in C(S^{n-1}).\\end{equation}\nAll bodies $K$ that appear as intersection bodies of different star bodies\nform {\\it the class of intersection bodies of star bodies}.
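\nThe simplest example is the Euclidean ball: for $D=B_2^n$ the section function is constant, $|B_2^n\\cap \\xi^\\bot|=\\omega_{n-1}$ for every $\\xi\\in S^{n-1}$, and therefore\n\\begin{equation*}I(B_2^n)=\\omega_{n-1}B_2^n.\\end{equation*}\n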
A more general class of {\\it intersection bodies} is defined as follows. If $\\mu$ is a finite Borel measure on $S^{n-1},$ then the spherical Radon transform\n$R\\mu$ of $\\mu$ is defined as a functional on $C(S^{n-1})$ acting by\n\\begin{equation}\\label{eq:sep-6}(R\\mu, f)=(\\mu, Rf)=\\int_{S^{n-1}} Rf(x) d\\mu(x),\\qquad \\hbox{for all}\\; f\\in C(S^{n-1}).\\end{equation}\nA star body $K$ in ${\\mathbb R}^n$ is called an {\\it intersection body} if $\\|\\cdot\\|_K^{-1}=R\\mu$\nfor some measure $\\mu,$ as functionals on $C(S^{n-1}),$ i.e.\n\\begin{equation}\\label{def-int}\n\\int_{S^{n-1}} \\|x\\|_K^{-1} f(x) dx = \\int_{S^{n-1}} Rf(x)d\\mu(x),\\qquad \\hbox{for all}\\; f\\in C(S^{n-1}).\n\\end{equation}\nIntersection bodies played the key role in the solution of the Busemann-Petty problem.\n\\smallbreak\nRecall that $d\\sigma(x)=dx\/|S^{n-1}|$ is the normalized uniform measure on the sphere, and set\n$$M(K)=\\int_{S^{n-1}} \\|x\\|_K d\\sigma(x).$$\n\\bigbreak\n\n\\noindent {\\bf Proof of Theorem \\ref{main-int}.} By \\eqref{eq:sep-4}, the condition \\eqref{sect1} can be written as\n\\begin{equation} \\label{radon-buspetty}\nR(\\|\\cdot\\|_K^{-n+1})(\\xi) \\leqslant R(\\|\\cdot\\|_D^{-n+1})(\\xi) - (n-1)\\varepsilon,\n\\ \\hbox{for all}\\; \\xi\\in S^{n-1}.\n\\end{equation}\nSince $K$ is an intersection body, there exists a finite Borel measure\n$\\mu$ on $S^{n-1}$ such that $\\|\\cdot\\|_K^{-1}= R\\mu$ as functionals on $C(S^{n-1}).$\nTogether with \\eqref{polar-volume}, \\eqref{radon-buspetty} and the definition of $R\\mu$, this implies that\n\\begin{align}\\label{eq:sep-7} n|K| &= \\int_{S^{n-1}} \\|x\\|_K^{-1}\\|x\\|_K^{-n+1}\\ dx\n= \\int_{S^{n-1}} R\\left(\\|\\cdot\\|_K^{-n+1}\\right)(\\xi)\\ d\\mu(\\xi)\\\\\n\\nonumber &\\leqslant \\int_{S^{n-1}} R\\left(\\|\\cdot\\|_D^{-n+1}\\right)(\\xi)\\ d\\mu(\\xi) - (n-1)\\varepsilon \\int_{S^{n-1}} d\\mu(\\xi)\\\\\n\\label{eq:sep-8} &= \\int_{S^{n-1}} \\|x\\|_K^{-1}\\|x\\|_D^{-n+1}\\ dx - (n-1)\\varepsilon \\int_{S^{n-1}} d\\mu(x).\n\\end{align}\nWe estimate the first term in \\eqref{eq:sep-8} using H\\\"older's inequality:\n\\begin{equation}\\label{eq:sep-9}\\int_{S^{n-1}} \\|x\\|_K^{-1}\\|x\\|_D^{-n+1}\\ dx \\leqslant \\left(\\int_{S^{n-1}} \\|x\\|_K^{-n}\\ dx\\right)^{\\frac1n}\n\\left( \\int_{S^{n-1}} \\|x\\|_D^{-n}\\ dx\\right)^{\\frac{n-1}n} = n |K|^{\\frac1n} |D|^{\\frac{n-1}n}.\n\\end{equation}\nWe estimate the second term in \\eqref{eq:sep-8} by inserting the Radon transform of the constant function ${\\bf 1}$ under the integral ($R{\\bf 1}(x)=\\left|S^{n-2}\\right|$ for every $x\\in S^{n-1}$),\nand using again the fact that $\\|\\cdot\\|_K^{-1}=R\\mu$:\n\\begin{align} \\label{eq:sep-10}\n(n-1)\\varepsilon \\int_{S^{n-1}} d\\mu(x) &= \\frac{(n-1)\\varepsilon}{\\left|S^{n-2}\\right|} \\int_{S^{n-1}} R{\\bf 1}(x)\\ d\\mu(x)\n=\\frac{(n-1)\\varepsilon}{\\left| S^{n-2} \\right| } \\int_{S^{n-1}} \\|x\\|_K^{-1}\\ dx\\\\\n\\nonumber &\\geqslant c_1\\varepsilon \\frac{(n-1)|S^{n-1}|}{\\left| S^{n-2} \\right|} \\frac{1}{M(\\overline{K})}|K|^{\\frac 1n}\\\\\n\\label{eq:sep-11} &\\geqslant c_2\\varepsilon \\sqrt{n} \\frac{1}{M(\\overline{K})} |K|^{\\frac 1n},\n\\end{align}\nsince\n\\begin{equation}\\int_{S^{n-1}}\\|x\\|_K^{-1}\\,d\\sigma(x)\\ge \\left (\\int_{S^{n-1}}\\|x\\|_Kd\\sigma(x)\\right )^{-1}=\\frac{1}{M(\\overline{K})}|K|^{\\frac{1}{n}},\\end{equation}\nby Jensen's inequality, homogeneity, and\n\\begin{equation}\\label{eq:sep-12}\\left|S^{n-2}\\right|= \\frac{2\\pi^{\\frac{n-1}2}}{\\Gamma(\\frac{n-1}2)} \\qquad {\\rm and}\\qquad\n\\left|S^{n-1}\\right|= 
\\frac{2\\pi^{\\frac{n}2}}{\\Gamma(\\frac{n}2)}.\\end{equation}\nCombining \\eqref{eq:sep-11} with \\eqref{eq:sep-8} and \\eqref{eq:sep-9}, we get\n\\begin{equation}\\label{eq:sep-13}n |K| \\leqslant n |K|^{\\frac1n} |D|^{\\frac{n-1}n} - c_2\\varepsilon \\sqrt{n} \\frac{1}{M(\\overline{K})} |K|^{\\frac 1n}\n\\end{equation}\nand, after dividing by $n|K|^{1\/n},$ the proof is complete. $\\quad \\hfill \\Box$\n\n\\bigbreak\n\nSeparation implies a volume difference inequality.\n\n\\begin{corollary}\\label{cor:sep-2}Let $L$ be any origin-symmetric star body in ${\\mathbb R}^n,$ and let $K$ be an origin-symmetric\nintersection body, which is a dilate of an isotropic body. Suppose that\n$$\\min_{\\xi\\in S^{n-1}} \\left(|L\\cap \\xi^\\bot|-|K\\cap \\xi^\\bot|\\right) > 0.$$\nThen\n$$|L|^{\\frac{n-1}n} - |K|^{\\frac{n-1}n} \\geqslant c_2 \\frac{1}{\\sqrt{n}M(\\overline{K})}\n\\min_{\\xi\\in S^{n-1}} \\left(|L\\cap \\xi^\\bot|-|K\\cap \\xi^\\bot|\\right).$$\n\\end{corollary}\n\n\n\\begin{remark}\\label{rem:sep-3}\\rm It was proved in \\cite{GM} that there exist constants $c_1,c_2>0$ such that\nfor any $n\\in {\\mathbb N}$ and any origin-symmetric isotropic convex body $K$ in ${\\mathbb R}^n$\n\\begin{equation}\\label{eq:sep-1}\\frac{1}{M(K)} \\geqslant\\ c_1 \\frac{n^{1\/10}L_K}{\\log^{2\/5}(e+n)}\\geqslant c_2\\frac{n^{1\/10}}{\\log^{2\/5}(e+n)}.\\end{equation}\nAlso, if $K$ is convex, has volume $1$ and is in the minimal mean width position then we have\n\\begin{equation}\\label{eq:sep-22}\\frac{1}{M(K)} \\geqslant\\ c_3\\frac{\\sqrt{n}}{\\log (e+n)}.\\end{equation}\nInserting these estimates into Theorem \\ref{main-int} and Corollary \\ref{cor:sep-2} we obtain estimates independent\nof the bodies.\n\\end{remark}\n\n\\bigbreak\n\n\\bigskip\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgements.} We would like to thank Silouanos Brazitikos for useful suggestions that helped us to simplify\nthe proof and improve the estimate in Theorem \\ref{th:intro-S}. The second named author\nwas partially supported by the US National Science Foundation grant DMS-1265155.\n\n\\bigskip\n\n\\bigskip\n\n\n\n\\footnotesize\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}