diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzelgi" "b/data_all_eng_slimpj/shuffled/split2/finalzzelgi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzelgi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nA famous open problem in Gabor analysis is the so-called \\textit{HRT conjecture}, concerning the linear independence of finitely many time-frequency shifts of a non-trivial square-integrable function \\cite{hrt}. To be precise, for $x,\\omega \\in \\bR^d$ consider the translation and modulation operators acting on $f \\in L^2(\\bR^d)$:\n\\[ T_x f(t) = f(t-x), \\quad M_{\\omega}f(t) = e^{2\\pi i t\\cdot \\omega}f(t). \\] For $z=(x,\\omega)\\in \\bR^{2d}$ we say that $\\pi(z)f = M_\\omega T_x f$ is a time-frequency shift of $f$ along $z$. The HRT conjecture can thus be stated as follows:\n\\begin{conj} Given $g \\in L^2(\\bR^d)\\setminus\\{0\\}$ and a set $\\Lambda$ of finitely many distinct points $z_1,\\ldots,z_N \\in \\bR^{2d}$, the set $G(g,\\Lambda)= \\{\\pi(z_k)g \\}_{k=1}^N$ is a linearly independent set of functions in $L^2(\\bR^d)$.\n\\end{conj}\nAs of today this somewhat basic question is still unanswered. Nevertheless, the conjecture has been proved for certain classes of functions or for special arrangements of points. We address the reader to the surveys \\cite{heil speegle,heil survey}, \\cite[Section 11.9]{heil book} and the paper \\cite{okoudjou} for a detailed and updated state of the art on the issue. As a general remark we mention that the difficulty of the problem is witnessed by the variety of techniques involved in the known partial results, and also the surprising gap between the latter and the contexts for which nothing is known. For example, a celebrated result by Linnell \\cite{linnell} states that the conjecture is true for arbitrary $g \\in L^2(\\bR^d)$ and for $\\Lambda$ being a finite subset of a full-rank lattice in $\\bR^{2d}$ and the proof is based on von Neumann algebras arguments. In spite of the wide range of this partial result, a solution is still lacking for smooth functions with fast decay (e.g., $g \\in \\mathcal{S}(\\mathbb{R}^d)$) or for general configurations of just four points. The problem is further complicated by numerical evidence in conflict with analytic conclusions \\cite{gro hrt}. \n\nA recent contribution by Kreisel \\cite{kreisel} proves the HRT conjecture under the assumption that the distance between points in $\\Lambda$ is large compared to the decay of $g$. The class of functions $g$ which are best suited for this perspective include functions with sharp descent near the origin or having a singularity away from which $g$ is bounded. \n\nKreisel's paper ends with a question on the short-time Fourier transform (STFT). Recall that this is defined as \\[ V_g f(x,\\omega) = \\langle f,\\pi(z)g \\rangle = \\int_{\\bR^d} e^{-2\\pi i t \\cdot \\omega} f(t)\\overline{g(t-x)}dt, \\quad z=(x,\\omega)\\in \\bR^{2d}, \\] for given $f,g \\in L^2(\\bR^d)$, where $\\langle \\cdot,\\cdot \\rangle$ denotes the inner product on $L^2(\\bR^d)$. The STFT plays a central role in modern time-frequency analysis \\cite{gro book}.\n\\begin{quest}\\label{quest ft}\n\tGiven $f \\in L^2(\\bR^d)$ and $R,N>0$, is there a way to design a window $g \\in L^2(\\bR^d)$ such that the ``bump with fat tail'' condition\n\t\\begin{equation}\\label{fat tail sph}\n\t|V_g f(z)| < \\frac{|\\langle f,g \\rangle|}{N}, \\quad |z|>R,\n\t\\end{equation} holds? 
\n\\end{quest}\n\nFrom a heuristic point of view this would amount to determining a window $g$ such that $V_gf$ shows a bump near the origin and a mild decay at infinity; that is, the energy of the signal accumulates a little near the origin and then spreads over the tail (hence a \\textit{fat tail}). This balance is unavoidable in view of the uncertainty principle, which forbids an arbitrary accumulation near the origin. \n\nA positive answer to Question \\ref{quest ft} would prove the HRT conjecture by \\cite[Theorem 3]{kreisel}. In fact we prove that the answer is negative as a consequence of the following result, which can be interpreted as a form of the uncertainty principle for the STFT \\cite{bonami,fernandez,gro up,lieb}.\n\n\\begin{theorem}\\label{maint}\n\tLet $g(t) = e^{-\\pi t^2}$ and assume that there exist $R >0$, $N>1$ and $f \\in L^2(\\bR^d)\\setminus\\{0\\}$ such that \n\t\\begin{equation}\\label{fat tail cyl}\n\t|V_g f(x,\\omega)| \\le \\frac{|\\langle f,g \\rangle|}{N}, \\quad |\\omega|=R.\n\t\\end{equation}\n\tThen\n\t\\begin{equation}\\label{R est cyl} R > \\sqrt{\\frac{\\log N}{\\pi}}. \\end{equation}\n\\end{theorem}\n\nThis result is indeed a negative answer to Question \\ref{quest ft} since $|V_gf (x,\\omega)| = |V_f g(-x,-\\omega)|$. In fact, a stronger result can be proved in the case where the cylinder in \\eqref{fat tail cyl} is replaced by a sphere.\n\n\\begin{theorem}\\label{ft ball}\n\tLet $g(t) = e^{-\\pi t^2}$ and assume that there exist $R>0$, $N>1$ and $f \\in L^2(\\bR^d)\\setminus\\{0\\}$ such that \n\t\\begin{equation}\\label{fat tail ball}\n\t|V_g f(z)| \\le \\frac{|\\langle f,g \\rangle|}{N}, \\quad |z|=R.\n\t\\end{equation}\n\tThen\n\t\\begin{equation}\\label{R est ball} R \\ge \\sqrt{\\frac{2\\log N}{\\pi}}. \\end{equation}\n\tMoreover, \\eqref{fat tail ball} holds with $R=\\sqrt{2\\log N \/ \\pi}$ if and only if $f(t)= ce^{-\\pi t^2}$ for some $c \\in \\mathbb{C}\\setminus\\{0\\}$. \n\\end{theorem} \n\n\\section{Proof of the main results and remarks}\n\\begin{proof}[Proof of Theorem \\ref{maint}] An explicit computation shows that\n\t\\[ |V_g f(x,-\\omega)| = \\left| \\int_{\\bR^d} e^{2\\pi i t \\cdot \\omega}e^{-\\pi(t-x)^2}f(t)dt \\right| = e^{-\\pi\\omega^2}|\\Phi f(z)|, \\]\n\twhere we set \n\t\\[ \\Phi f(z) = \\int_{\\bR^d} e^{-\\pi(t-z)^2}f(t)dt, \\quad z=x+i\\omega \\in \\mathbb{C}^d. \\]\n\tNotice that $\\Phi f$ is an entire function on $\\mathbb{C}^d$, since differentiation under the integral sign is allowed. Define \n\t\\[ M_{a,R} = \\sup_{z \\in Q_{a,R}} |\\Phi f(z)|, \\quad Q_{a,R}=\\{z=x+i\\omega \\in \\mathbb{C}^d : |x|\\le a, |\\omega| \\le R \\}, \\] where $a>0$ will be fixed in a moment. The maximum principle \\cite{narasi} implies that $|\\Phi f|$ takes the value $M_{a,R}$ at some point of the boundary of $Q_{a,R}$. Since $f,g\\in L^2(\\mathbb{R}^d)$, $V_gf$ vanishes at infinity (e.g.\\ \\cite[Corollary 3.10]{ct}), so that $V_g f(x,-\\omega) \\to 0$ for $|x|\\to + \\infty$, uniformly with respect to $\\omega\\in \\mathbb{R}^d$. Therefore $\\Phi f(x+i\\omega) \\to 0$ for $|x| \\to +\\infty$, uniformly with respect to $\\omega$ over compact subsets of $\\bR^d$. This shows that for sufficiently large $a>0$ we have $|\\Phi f(z_0)|=M_{a,R}$ for some point $z_0=(x_0,\\omega_0)$ with $|\\omega_0|= R$. 
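\n\t\n\tFor illustration, consider the model case $f=g$, where everything is explicit (a sketch, not needed for the argument): completing the square in the integral defining $\\Phi g$ gives\n\t\\[ \\Phi g(z) = \\int_{\\bR^d} e^{-\\pi(t-z)^2-\\pi t^2}dt = 2^{-d\/2}e^{-\\pi z^2\/2}, \\]\n\tso that $|V_g g(x,-\\omega)| = e^{-\\pi\\omega^2}|\\Phi g(z)| = 2^{-d\/2}e^{-\\pi(x^2+\\omega^2)\/2}$; on the cylinder $|\\omega|=R$ the supremum is $2^{-d\/2}e^{-\\pi R^2\/2}$, attained at $x=0$, consistent with the case $\\lambda=1$ of the sharpness discussion below.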
\n\t\n\tIn view of assumption \\eqref{fat tail cyl} the following estimate holds:\n\t\\[ M_{a,R} e^{-\\pi R^2} = |V_gf(z_0)| \\le \\frac{|\\Phi f(0)|}{N}, \\] where we used the identity $\\langle f,g\\rangle = V_gf(0) = \\Phi f(0)$; therefore \n\t\\[ M_{a,R} \\le \\frac{e^{\\pi R^2}}{N}|\\Phi f(0)|. \\] \n\tAssume now that $R \\le \\sqrt{\\log N \/ \\pi}$; this would imply $M_{a,R} \\le |\\Phi f(0)|$ and thus $\\Phi f$ would be constant on $Q_{a,R}$, hence on $\\mathbb{C}^d$ by analytic continuation \\cite{narasi}. Since $\\Phi f(x+i\\omega) \\to 0$ for $|x| \\to +\\infty$ as already shown above, we could conclude that $\\Phi f \\equiv 0$, hence $V_gf \\equiv 0$ and then $f\\equiv 0$, which is a contradiction. \n\\end{proof}\n\n\\begin{remark}\n\tNotice that Theorem \\ref{maint} still holds in the case where the cylinder in \\eqref{fat tail cyl} is replaced by any other cylinder obtained from the previous one by a symplectic rotation (cf.\\ \\cite[Sec. 2.3.2]{dg}). Indeed, if $\\widehat{S}$ denotes a metaplectic operator \\cite{dg} corresponding to $S \\in \\mathrm{Sp}(d,\\mathbb{R}) \\cap \\mathrm{O}(2d,\\mathbb{R})$, condition \\eqref{fat tail cyl} with $z=(x,\\omega)$ replaced by $S^{-1}z$ is equivalent to\n\t\\[ |V_g(\\widehat{S}f)(x,\\omega)| \\le \\frac{|\\langle \\widehat{S}f,g \\rangle|}{N}. \\]\n\tThis can be easily seen by using the covariance property \\cite[Lemma 9.4.3]{gro book}\n\t\\[\n|V_g f(S^{-1}z)|=|V_{\\widehat{S}g} \\widehat{S}f(z)|,\n\t\\]\nthe fact that $\\widehat{S}$ is unitary on $L^2(\\bR^d)$ and that $\\widehat{S}g = cg$ for some $c \\in \\mathbb{C}$, $|c|=1$, if $g(t)=e^{-\\pi t^2}$ \\cite[Prop. 252]{dg}.\n\\end{remark}\n\n\\begin{remark} The estimate for $R$ in \\eqref{R est cyl} is sharp. Consider indeed a dilated Gaussian function $f_\\lambda(t) = e^{-\\pi\\lambda^2 t^2}$, $0 < \\lambda \\le 1$; a straightforward computation (see for instance \\cite[Lemma 3.1]{cn}) shows that \n\t\\[ V_g f_\\lambda (x,\\omega) = (1+\\lambda^2)^{-d\/2}e^{-2\\pi i \\frac{x\\cdot \\omega}{1+\\lambda^2}} e^{-\\pi \\frac{\\lambda^2 x^2}{1+\\lambda^2}} e^{-\\pi \\frac{\\omega^2}{1+\\lambda^2}}. \\] \nCondition \\eqref{fat tail cyl} is thus satisfied if and only if \n\\[ R \\ge \\sqrt{(1+\\lambda^2) \\frac{\\log N}{\\pi}}, \\] and letting $\\lambda \\to 0^+$ yields the bound in \\eqref{R est cyl}. \n\nIt is worth emphasizing that there is no non-zero $f \\in L^2(\\bR^d)$ such that the optimal bound in \\eqref{R est cyl} can be attained, in contrast to other uncertainty principles for the STFT. \n\\end{remark} \n\n\\begin{proof}[Proof of Theorem \\ref{ft ball}]\nRecall the connection between the STFT and the \\textit{Bargmann transform} of a function $f \\in L^2(\\bR^d)$ \\cite[Prop. 3.4.1]{gro book}:\n\\begin{equation}\\label{barg stft} V_g f (x,-\\omega) = 2^{-d\/4}e^{\\pi i x\\cdot \\omega} \\mathcal{B}f(z) e^{-\\pi |z|^2\/2}, \\quad z=x+i\\omega \\in \\mathbb{C}^d, \\end{equation} where the Bargmann transform is defined by\n\\[ \\mathcal{B}f(z) = 2^{d\/4} \\int_{\\bR^d} f(t) e^{2\\pi t\\cdot z - \\pi t^2 - \\pi z^2 \/2}dt; \\]\n(here $g(t) = e^{-\\pi t^2}$ as in the statement). This correspondence is indeed a unitary operator from $L^2(\\bR^d)$ onto the \\textit{Bargmann-Fock space} $\\mathcal{F}^2(\\mathbb{C}^d)$, i.e.\\ the Hilbert space of all entire functions $F$ on $\\mathbb{C}^d$ such that $e^{-\\pi |\\cdot|^2\/2}F \\in L^2(\\mathbb{C}^d)$, cf.\\ \\cite[Sec. 3.4]{gro book} (see also \\cite{toft1,toft2}). \n\nWe now argue as in the proof of Theorem \\ref{maint}. 
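For the reader's convenience we first record the elementary Gaussian computation used in the last step of this proof (completing the square, as before):\n\\[ \\mathcal{B}\\big(2^{d\/4}e^{-\\pi t^2}\\big)(z) = 2^{d\/2}e^{-\\pi z^2\/2}\\int_{\\bR^d} e^{-2\\pi t^2+2\\pi t\\cdot z}dt = 2^{d\/2}e^{-\\pi z^2\/2}\\,2^{-d\/2}e^{\\pi z^2\/2} = 1. \\]\n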
After setting \n\\[ M_R = \\sup_{z\\in B_R(0)} |\\mathcal{B}f(z)|, \\quad B_R(0) = \\{z \\in \\mathbb{C}^d : |z|\\le R \\}, \\] the maximum principle implies that $|\\mathcal{B}f|$ takes the value $M_R$ on some point $z$ with $|z|=R$ and moreover $M_R>0$ (otherwise by analytic continuation we would have $\\mathcal{B}f=0$ and therefore $f=0$). Condition \\eqref{fat tail ball} then implies \\[ M_R \\le \\frac{e^{\\pi R^2\/2}}{N} |\\mathcal{B}f(0)|. \n\\]\nIf $R < \\sqrt{2 \\log N\/\\pi}$ we obtain $M_R < |\\mathcal{B}f(0)|$, which is a contradiction. If $R = \\sqrt{2 \\log N\/\\pi}$ then $M_R = |\\mathcal{B}f(0)|$ and therefore\n $\\mathcal{B}f(z)= C$, $z \\in \\mathbb{C}^d$, again by the maximum principle and analytic continuation, with $C\\ne0$. On the other hand, a direct computation and the injectivity of the Bargmann transform show that $\\mathcal{B}f(z)=1$ (hence $|V_g f (z)| = 2^{-d\/4} e^{-\\pi |z|^2\/2}$) if and only if $f(t)= 2^{d\/4}e^{-\\pi t^2}$. This gives the last part of the claim. \n\\end{proof}\n\n\\section*{Acknowledgments} The authors wish to thank Professor Elena Cordero for fruitful discussions. \\\\ The present research was partially supported by MIUR grant \"Dipartimenti di Eccellenza\" 2018\u20132022, CUP: E11G18000350001, DISMA, Politecnico di Torino.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{s:intro}\nPlanetesimal belts appear to be a common feature of planetary systems.\nThere are two main belts in the solar system: the asteroid belt and\nthe Kuiper belt.\nThese belts inhabit the regions of the solar system where planetesimal\norbits can remain stable over the 4.5 Gyr age of our system (Lecar et al. 2001).\nThe larger planetesimals in the belts are continually grinding\ndown feeding the smaller bodies in a process known as a collisional cascade\nwhich is slowly eroding the belts (Bottke et al. 2005).\nThe smallest dust in the asteroid belt is acted on by radiation forces;\nP-R drag makes the dust spiral in toward the Sun making a disk known as\nthe zodiacal cloud that the Earth sits in the middle of (Leinert \\& Gr\\\"{u}n 1990).\nA dust cloud is also predicted to arise from collisions amongst Kuiper belt\nobjects (Liou \\& Zook 1999), although our information\non this population is sparse (Landgraf et al. 2002)\nbecause its emission is masked by the zodiacal emission\n(Backman, Dasgupta \\& Stencel 1995) and few dust grains make it into the\ninner solar system (Moro-Mart\\'{\\i}n \\& Malhotra 2003).\n\nMany extrasolar systems also have such planetesimal belts, known as\ndebris disks.\nThese have been detected from their dust content (Aumann et al. 
1984) from which\nit has been inferred that larger planetesimals must exist to replenish\nthe dust disks because of the short lifetime of this dust (Backman \\& Paresce 1993).\nThe collisional cascade scenario is supported by modeling of the\nemission spectrum of the dust, which shows a size distribution similar\nto that expected for dust coming from a collisional cascade (Wyatt \\& Dent 2002, hereafter\nWD02).\nHowever, the issue of how these disks evolve has recently come under\nclose scrutiny.\n\nFrom a theoretical point of view, Dominik \\& Decin (2003; hereafter DD03)\nshowed that if P-R drag is not important then a planetesimal belt\nevolving in quasi-steady state would lose mass due to collisional grinding\ndown giving a disk mass (and dust luminosity) that falls off $\\propto t^{-1}$.\nThis is in broad agreement with the observed properties of debris disks:\nthe mean dust luminosity at a given age falls off $\\propto t^{-1.8}$\n(Spangler et al. 2001);\nthe mass inferred from detection statistics falls off $\\propto t^{-0.5}$\n(Greaves \\& Wyatt 2003), while the mass of the detected disks falls off\n$\\propto t^{-1}$ (Najita \\& Williams 2005);\nthe upper limit in luminosity of the detected disks also falls off $\\propto t^{-1}$\n(Rieke et al. 2005).\nWhile these trends can be viewed as a success of the steady-state model,\nit has yet to be proved that a steady-state evolution model fits the data\nin more than just general terms (Meyer et al. 2006).\nSeveral puzzling observations also remain to be explained.\n\nDecin et al. (2003) noted that the maximum fractional luminosity of debris disks remains\nconstant at $f=L_{\\rm{ir}}\/L_\\star \\approx 10^{-3}$ up to the oldest stars,\nwhere $L_{\\rm{ir}}$ and $L_\\star$ are the disk and stellar luminosities respectively\n(see also Table \\ref{tab:symb} for definitions of the parameters used in the text), and this\nwas explained by DD03 as a consequence of delayed stirring.\nA delay in the ignition of a collisional cascade is expected if \nit is the formation of Pluto-sized objects which triggers the cascade, since\nsuch massive bodies take longer, up to several Gyr, to form\nfurther from the star (Kenyon \\& Bromley 2002).\nHowever, that interpretation predicts that the radius of the belts should increase with stellar \nage, and this is not observed (Najita \\& Williams 2005).\nThere is also recent evidence that the dust content of some systems is transient.\nThe discovery of a population of dust grains around Vega in the process of removal by \nradiation pressure indicates that this system cannot have remained in steady state for the full \n350 Myr age of the star (Su et al. 2005).\nRieke et al. (2005) used their statistics on A stars, which showed a wide variety of properties \namong the debris disks, to suggest that much of the dust we see is produced episodically in \ncollisions between large planetesimals.\nThere is also an emerging population of debris disks detected around sun-like\nstars with dust at a few AU (Gaidos 1999; Beichman et al. 2005; Song et al. 2005;\nSmith, Wyatt \\& Dent in prep.).\nThere is debate over whether these are atypically massive asteroid belts or the\nconsequence of a rare transient event (e.g., Beichman et al. 
2005).\n\nA stochastic element to the evolution of debris disks would fit with \nour understanding of the evolution of the dust content of the inner \nsolar system.\nThis is believed to have been significantly enhanced for timescales of a few\nMyr following collisions between objects $\\sim 100$ km in size in the asteroid \nbelt (Nesvorn\\'{y} et al. 2003; Farley et al. 2006).\nHowever, it is not known whether the aftermath of individual collisions would be\ndetectable in a debris disk, or indeed whether such events would happen frequently\nenough to explain the statistics (WD02; Telesco et al. 2005).\nSuch events have a dramatic effect on the amount of dust in the solar system\nbecause there is relatively little around during the quiescent periods.\nPlanetesimal belts of equivalent mass to those in the solar system would\nnot have been detected in the current debris disk surveys.\nHowever, there is evidence to suggest that both belts were $\\sim 200$ times more\nmassive in the past (e.g., Stern 1996; Bottke et al. 2005).\nPeriods analogous to the heavy bombardment experienced in the solar system up to\n$\\sim 700$ Myr after its formation have also been invoked to explain the fact that debris\ndisks are most often detected around stars $<400$ Myr old (Habing et al. 1999).\n\nIn the light of this controversy we revisit a simple analytical model\nfor the steady state collisional evolution of planetesimal belts which was\noriginally explored in DD03.\nThe model we derive for that evolution is given in \\S \\ref{s:model},\nand differs in a subtle but important way from that of DD03, since it affects\nthe dust production as a function of collision velocity.\nThis model shows that there is a maximum possible\ndisk mass (and dust luminosity) at any given age.\nIn \\S \\ref{s:hot} confrontation with the few hot planetesimal belts discovered recently\nshows that the majority of these cannot be explained as massive asteroid belts,\nrather these must be systems undergoing a transient event.\nThe possibility that these are caused by a recent collision within a planetesimal\nbelt is also discussed, as is the possibility that the dust originates in a planetesimal\nbelt in the terrestrial planet region.\nThe implications of these results are discussed in \\S \\ref{s:conc}.\nApplication of the model to the statistics of detected debris disks will be\nconsidered in a later paper (Wyatt et al., in prep.).\n\n\\section{Analytical collisional evolution model}\n\\label{s:model}\nIn this section a simple analytical model is developed for the\nevolution of a planetesimal belt due to collisions amongst\nits members.\nThe parameters used in this model are summarized in the table \\ref{tab:symb}\nwhich also gives the units assumed for these parameters throughout the paper.\n\n\\subsection{The planetesimal belt size distribution}\n\\label{ss:pb}\nThe planetesimal belt is assumed to be in collisional\nequilibrium with a size distribution defined by:\n\\begin{equation}\n n(D) = K D^{2-3q}, \\label{eq:nd}\n\\end{equation}\nwhere $q=11\/6$ in an infinite collisional cascade (Dohnanyi 1969)\nand the scaling parameter $K$ is called $f_a$ by DD03.\nThat distribution is assumed to hold from the largest planetesimal\nin the disk, of diameter $D_{\\rm{c}}$, down to the size below which particles\nare blown out by radiation pressure as soon as they are created,\n$D_{\\rm{bl}}$.\nIf we assume that $q$ is in the range 5\/3 to 2 then most of the\nmass is in the largest planetesimals while the cross-sectional area is\nin the 
smallest particles such that:\n\\begin{eqnarray}\n \\sigma_{\\rm{tot}} & = & 3.5 \\times 10^{-17} K(3q-5)^{-1} (10^{-9}D_{\\rm{bl}})^{5-3q} \\label{eq:stot} \\\\\n M_{\\rm{tot}} & = & 8.8 \\times 10^{-17} K \\rho (6-3q)^{-1} D_{\\rm{c}}^{6-3q}, \\label{eq:mtot1} \\\\\n & = & 2.5 \\times 10^{-9} \\left( \\frac{3q-5}{6-3q} \\right)\n \\rho \\sigma_{\\rm{tot}}D_{\\rm{bl}}\n \\left( \\frac{10^9 D_{\\rm{c}}}{D_{\\rm{bl}}} \\right)^{6-3q}, \n \\label{eq:mtot2}\n\\end{eqnarray}\nwhere spherical particles of density $\\rho$ have been assumed and $M_{\\rm{tot}}$ is\nin $M_\\oplus$ if the units of table \\ref{tab:symb} are used for the other parameters.\n\nThe planetesimal belt is assumed to be at a radius $r$, and to have a width $dr$ (in AU).\nOne of the observable properties of a planetesimal belt is its fractional luminosity,\n$f=L_{\\rm{ir}}\/L_\\star$, i.e., the infrared luminosity from the disk divided by the stellar\nluminosity.\nAssuming that the grains act like black bodies and so absorb all the radiation\nthey intercept we can write:\n\\begin{equation}\n f = \\sigma_{\\rm{tot}}\/(4\\pi r^2). \\label{eq:f}\n\\end{equation}\nIn other words, in this model $\\sigma_{\\rm{tot}}$, $M_{\\rm{tot}}$ and $f$ \nare all proportional to each other and just one is needed to define the scaling\nfactor $K$ in equation (\\ref{eq:nd}).\nAssuming the particles act like black bodies also allows us to derive\nthe following relation:\n\\begin{equation}\n D_{\\rm{bl}} = 0.8(L_\\star\/M_\\star)(2700\/\\rho), \\label{eq:dbl}\n\\end{equation}\nwhere $D_{\\rm{bl}}$ is in $\\mu$m, $L_\\star$ and $M_\\star$ are in solar units, and\n$\\rho$ is in kg m$^{-3}$.\n\nRelaxing the black body assumption is easily achieved (e.g., WD02).\nHowever, this would result in relatively small changes in the way $f$ \nscales with $M_{\\rm{tot}}$, and so for its heuristic simplicity we keep this assumption\nthroughout this paper.\nProbably the most important simplification within this model is that of the\ncontinuous size distribution.\nFor example, we know that the cut-off in the size distribution at $D_{\\rm{bl}}$ would\ncause a wave in the size distribution at sizes just larger than this (Th\\'{e}bault,\nAugereau \\& Beust 2003), that large quantities of blow-out grains can also affect\nthe distribution of small size particles (Krivov, Mann, \\& Krivova 2000), and that\nthe dependence of planetesimal strength on size can result in $q \\ne 11\/6$ as well\nas a wave in the distribution at large sizes (Durda et al. 
1998;\nO'Brien \\& Greenberg 2003).\nAlso, since the largest planetesimals would not be in collisional equilibrium\nat the start of the evolution, their initial distribution may not be the same\nas that of a collisional cascade, although distributions with $q \\approx 11\/6$\nhave been reported from planet formation models (e.g., Stern \\& Colwell 1997;\nDavis \\& Farinella 1997; Kenyon \\& Luu 1999) meaning this is a reasonable starting\nassumption.\nDespite these simplifications, we believe this model is adequate to explore to \nfirst order the evolution of planetesimal belts which can later be studied in\nmore depth.\n\n\\subsection{Collisional evolution}\n\\label{ss:ce}\nIn a collisional cascade material in a bin with a given size range $D$ to $D+dD$ is replaced\nby fragments from the destruction of larger objects at the same rate that it is destroyed\nin collisions with other members of the cascade.\nThe long-timescale evolution is thus determined by the removal of mass from the top end of\nthe cascade.\nIn this model the scaling factor $K$ (and so the total mass and fractional luminosity\netc) decreases as the number of planetesimals of size $D_{\\rm{c}}$ decreases.\nThe loss rate of such planetesimals is determined by their collisional lifetime, which in\nthe terminology of WD02 is given by:\n\\begin{equation}\n t_{\\rm{c}} = \\sqrt{r^3\/M_\\star} (r dr\/\\sigma_{\\rm{tot}}) [2I\/f(e,I)] \/ f_{\\rm{cc}}, \n\\label{eq:tc1}\n\\end{equation}\nwhere maintaining the units used previously gives $t_{\\rm{c}}$ in years, $I$ is the mean \ninclination\nof the particles' orbits (which determines the torus height), $f(e,I)$ is the ratio of\nthe relative velocity of collisions to the Keplerian velocity ($=v_{\\rm{rel}}\/v_{\\rm{k}}$, \nalso called $\\nu$ by DD03),\nand $f_{\\rm{cc}}$ is the fraction of the total cross-sectional area in the belt which\nis seen by planetesimals of size $D_{\\rm{c}}$ as potentially causing a catastrophic \ncollision.\n\nFrom hereon we will use the assumption that $f(e,I) = \\sqrt{1.25e^2+I^2}$, where $e$\nis the mean eccentricity of the particles, which is valid for Rayleigh distributions\nof $e$ and $I$ (Lissauer \\& Stewart 1993; Wetherill \\& Stewart 1993).\nAn expression for $f_{\\rm{cc}}$ was given in WD02, however, here we will\nignore the gravitational focussing effect, which is important in the\naccumulation phase but not during the destruction\nphase of a planetesimal belt (see \\S \\ref{ss:pc}),\nand so derive an expression that is the\nsame as that given in Wyatt et al. (1999):\n\\begin{equation}\n f_{\\rm{cc}} = (10^{-9} D_{\\rm{bl}}\/D_{\\rm{c}})^{3q-5}G(q,X_{\\rm{c}}), \\label{eq:fcc}\n\\end{equation}\nwhere $X_{\\rm{c}}=D_{\\rm{cc}}\/D_{\\rm{c}}$, $D_{\\rm{cc}}$ is the smallest planetesimal \nthat has enough energy to catastrophically destroy a planetesimal of size $D_{\\rm{c}}$ (which \nis\ncalled $\\epsilon$ in DD03), and:\n\\begin{eqnarray}\n G(q,X_{\\rm{c}}) & = & [(X_{\\rm{c}}^{5-3q}-1)+ (6q-10)(3q-4)^{-1}(X_{\\rm{c}}^{4-3q}-1) \n\\nonumber \\\\\n & & + (3q-5)(3q-3)^{-1}(X_{\\rm{c}}^{3-3q}-1)]. 
\\label{eq:qgxc}\n\\end{eqnarray}\n\nThe factor $X_{\\rm{c}}$ can be worked out from the dispersal threshold, $Q_{\\rm{D}}^\\star$, \ndefined\nas the specific incident energy required to catastrophically destroy a particle such \nthat (WD02):\n\\begin{eqnarray}\n X_{\\rm{c}} & = & (2Q_{\\rm{D}}^\\star\/v_{\\rm{rel}}^2)^{1\/3}, \\label{eq:xc1} \\\\\n & = & 1.3 \\times 10^{-3} [Q_{\\rm{D}}^\\star r M_\\star^{-1} f(e,I)^{-2} ]^{1\/3}, \n\\label{eq:xc2}\n\\end{eqnarray}\nwhere $Q_{\\rm{D}}^\\star$ is in J kg$^{-1}$ (called $S$ in DD03\\footnote{Equation 25 in DD03 differs \nfrom our\nequation (\\ref{eq:xc1}) because we define $Q_{\\rm{D}}^\\star$ to be the specific incident \nkinetic\nenergy so that $0.5M_2v_{\\rm{rel}}^2=M_1 Q_{\\rm{D}}^\\star$ whereas DD03 define $S$ to \nbe the specific binding energy of the two objects (giving their equation 24).\nIn the limit of $S \\ll v_{\\rm{rel}}^2\/8$ the two equations are the same, since $X_{\\rm{c}} \n\\ll \n1$.})\n\nCombining the above equations gives for the collisional lifetime of the planetesimals\nof size $D_{\\rm{c}}$:\n\\begin{eqnarray}\n t_{\\rm{c}} & = & \\left( \\frac{r^{2.5} dr}{M_\\star^{0.5} \\sigma_{\\rm{tot}}} \\right) \n \\left( \\frac{2[1+1.25(e\/I)^2]^{-0.5}}{G(q,X_{\\rm{c}})} \\right)\n \\left( \\frac{10^{-9}D_{\\rm{bl}}}{D_{\\rm{c}}} \\right)^{5-3q}, \n\\label{eq:tcstot} \\\\\n & = & \\left( \\frac{3.8\\rho r^{2.5} dr \n D_{\\rm{c}}}{M_\\star^{0.5} M_{\\rm{tot}}} \\right) \n \\left( \\frac{(12q-20)[1+1.25(e\/I)^2]^{-0.5}}{(18-9q)G(q,X_{\\rm{c}})} \\right).\n \\label{eq:tcmtot}\n\\end{eqnarray}\nAssuming that collisions are the only cause of mass loss in the belt, the evolution of\nthe disk mass $M_{\\rm{tot}}(t)$ (or equivalently of $K$, $\\sigma_{\\rm{tot}}$, or $f$)\ncan be worked out by solving $dM_{\\rm{tot}}\/dt = -M_{\\rm{tot}}\/t_{\\rm{c}}$ to give:\n\\begin{equation}\n M_{\\rm{tot}}(t) = M_{\\rm{tot}}(0)\/[1+t\/t_{\\rm{c}}(0)], \\label{eq:mtott}\n\\end{equation}\nwhere $M_{\\rm{tot}}(0)$ is the initial disk mass and $t_{\\rm{c}}(0)$ is the collisional \nlifetime at that initial epoch;\nthis solution is valid as long as mass is the only parameter of the planetesimal belt\nthat changes with time.\nThis results in a disk mass which is constant at $M_{\\rm{tot}}(0)$ for $t \\ll t_{\\rm{c}}(0)$, \nbut which falls off $\\propto 1\/t$ for $t \\gg t_{\\rm{c}}(0)$ (as noted, e.g., in DD03).\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\hspace{-0.15in} \n \\includegraphics[width=3.2in]{f1a.ps} &\n \\hspace{-0.15in} \n \\includegraphics[width=3.2in]{f1b.ps}\n \\end{tabular}\n \\caption{The dependence of \\textbf{(left)} $G(11\/6,X_{\\rm{c}})$ and \\textbf{(right)}\n $X_{\\rm{c}}$ on planetesimal eccentricity ($e$) for planetesimals of different\n strengths ($Q_{\\rm{D}}^\\star$) and at different distances from the star ($r$).}\n\\label{fig:gvse}\n\\end{figure*}\n\nHowever, another interesting property of this evolution is that, since the\nexpression for $t_{\\rm{c}}(0)$ includes a dependence on $M_{\\rm{tot}}(0)$, the disk\nmass at late times is independent of initial disk mass.\nThis is because more massive disks process their mass faster.\nThis means that for any given age, $t_{\\rm{age}}$, there is a maximum disk mass \n$M_{\\rm{max}}$\n(and also infrared luminosity, $f_{\\rm{max}}$) that can remain due to collisional processing:\n\\begin{eqnarray}\n M_{\\rm{max}} & = & \\left( \\frac{3.8 \\times 10^{-6} \\rho r^{3.5} (dr\/r) \n D_{\\rm{c}}}{M_\\star^{0.5}t_{\\rm{age}}} \\right) \\times \\nonumber \\\\\n & & 
\\left( \\frac{(12q-20)[1+1.25(e\/I)^2]^{-0.5}}{(18-9q)G(q,X_{\\rm{c}})} \\right), \n \\label{eq:mmax1} \\\\\n f_{\\rm{max}} & = & \\left( \\frac{10^{-6} r^{1.5}(dr\/r)}{4\\pi M_\\star^{0.5} t_{\\rm{age}}} \\right)\n \\left( \\frac{10^{-9} D_{\\rm{bl}}}{D_{\\rm{c}}} \\right)^{5-3q}\n \\times \\nonumber \\\\ \n & & \\left( \\frac{2[1+1.25(e\/I)^2]^{-0.5}}{G(q,X_{\\rm{c}})} \\right).\n \\label{eq:fmax1} \n\\end{eqnarray}\nIn this model, the present day disk mass (or luminosity) is expected to\nbe equal to this \"maximum\" disk mass (or luminosity) for disks in which the\nlargest planetesimals are in collisional equilibrium.\nThis corresponds to disks around stars that are older than the collisional\nlifetime of those planetesimals given in equation (\\ref{eq:tcmtot}).\n\nFor example, with the further assumptions that $q=11\/6$, $e \\approx I$, and\n$\\rho = 2700$ kg m$^{-3}$, we find:\n\\begin{eqnarray}\n M_{\\rm{max}} & = & 0.009 r^{3.5} (dr\/r)\n D_{\\rm{c}} M_\\star^{-0.5} t_{\\rm{age}}^{-1}\/G(11\/6,X_{\\rm{c}}), \\label{eq:mmax2} \\\\\n f_{\\rm{max}} & = & 0.004 r^{1.5} (dr\/r)\n D_{\\rm{c}}^{0.5} L_\\star^{-0.5} t_{\\rm{age}}^{-1}\/G(11\/6,X_{\\rm{c}}), \\label{eq:fmax2}\n\\end{eqnarray}\nwhere $M_{\\rm{max}}$ is in $M_\\oplus$, $r$ in AU, $D_{\\rm{c}}$ in km, $t_{\\rm{age}}$ in Myr, \nand\n$G(11\/6,X_{\\rm{c}})=X_{\\rm{c}}^{-0.5}+0.67X_{\\rm{c}}^{-1.5}+0.2X_{\\rm{c}}^{-2.5}-1.87$,\nwith $X_{\\rm{c}}=10^{-3}(rQ_{\\rm{D}}^\\star\/e^2)^{1\/3}$ ($Q_{\\rm{D}}^\\star$ is in J kg$^{-1}$).\n\nPlots of $G(11\/6,X_{\\rm{c}})$ and $X_{\\rm{c}}$ for typical planetesimal belts are\nshown in Fig.~\\ref{fig:gvse}.\nHowever, for many disks the approximation that $X_{\\rm{c}} \\ll 1$ is valid, and\nso $G(11\/6,X_{\\rm{c}}) \\approx 0.2X_{\\rm{c}}^{-2.5} =\n6.3 \\times 10^6 r^{-5\/6}{Q_{\\rm{D}}^\\star}^{-5\/6}e^{5\/3}M_\\star^{5\/6}$, giving:\n\\begin{eqnarray}\n M_{\\rm{max}} & = & 1.4 \\times 10^{-9} r^{13\/3} (dr\/r)\n D_{\\rm{c}} {Q_{\\rm{D}}^\\star}^{5\/6} \\times \\nonumber \\\\\n & & e^{-5\/3} M_\\star^{-4\/3} t_{\\rm{age}}^{-1}, \\label{eq:mmax3} \\\\\n f_{\\rm{max}} & = & 0.58 \\times 10^{-9} r^{7\/3} (dr\/r)\n D_{\\rm{c}}^{0.5} {Q_{\\rm{D}}^\\star}^{5\/6}\\times \\nonumber \\\\\n & & e^{-5\/3} M_\\star^{-5\/6} L_\\star^{-0.5} t_{\\rm{age}}^{-1}.\n \\label{eq:fmax3}\n\\end{eqnarray}\n\n\n\\subsection{Comparison with DD03}\n\\label{ss:dd03}\nSince DD03 produced a very similar analytical model, our results were\ncompared with those of DD03.\nThe results of disk evolution for a planetesimal belt close to their nominal\nmodel were computed using the parameters:\n$r=43$ AU, $dr=15$ AU, $D_{\\rm{c}}=2$ km, $\\rho=2700$ kg m$^{-3}$, $f(e,I)=0.1$, $e\/I=1$, \n$Q_{\\rm{D}}^\\star=200$ J kg$^{-1}$, $M_{\\rm{tot}}(0)=10M_\\oplus$, A0 star (for which $L_\\star=54L_\\odot$,\n$M_\\star=2.9M_\\odot$, $D_{\\rm{bl}}=15$ $\\mu$m).\nEach of the parameters $M_{\\rm{tot}}(0)$, $r$, $f(e,I)$, $D_{\\rm{c}}$ and spectral type \nwere also varied to make the plots shown in Fig.~\\ref{fig:dd03} which are\nequivalent to Figs 1b-1f of DD03.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f2a.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f2b.ps} \\\\[-0.0in]\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f2c.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f2d.ps} \\\\[-0.0in]\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f2e.ps} &\n \\end{tabular}\n \\caption{The collisional evolution of a planetesimal belt with parameters similar\n 
to the nominal model of DD03 \n [$r=43$ AU, $dr=15$ AU, $D_{\\rm{c}}=2$ km, $\\rho=2700$ kg m$^{-3}$, $f(e,I)=0.1$,\n $e\/I=1$, $Q_{\\rm{D}}^\\star=200$ J kg$^{-1}$, $M_{\\rm{tot}}(0)=10M_\\oplus$, A0 star]\n showing the effect of changing:\n \\textbf{(top left)} starting disk mass $M_{\\rm{tot}}(0)$,\n \\textbf{(top right)} disk radius $r$,\n \\textbf{(middle left)} collision velocity $v_{\\rm{rel}}\/v_{\\rm{k}}$,\n \\textbf{(middle right)} maximum planetesimal size $D_{\\rm{c}}$, and\n \\textbf{(bottom left)} stellar spectral type.\n These plots can be directly compared to figs. 1b-f of DD03.}\n \\label{fig:dd03}\n\\end{figure*}\n\nThe results are very similar in most regards:\nmore massive disks start out with higher $f$, but the turnover from constant to $1\/t$\nevolution is later for lower mass disks meaning that at late times all disks converge\nto the same maximum value (Fig.~\\ref{fig:dd03}a);\nputting the same mass at larger distances reduces the initial dust luminosity $f$, but the\nresulting lower surface density and longer orbital timescales there combine to make the\nturnover happen later which means that at late times more distant belts are more \nmassive (Fig.~\\ref{fig:dd03}b);\nputting the same mass into larger planetesimals reduces the cross-sectional area\nof dust (equation \\ref{eq:mtot2}) and so the initial dust luminosity $f$, but increases\nthe collisional lifetime of those planetesimals (equation \\ref{eq:tcmtot}) which means\nthat at late times belts with larger planetesimals retain their mass for longer\n(Fig.~\\ref{fig:dd03}d);\nlater spectral types have higher starting dust luminosities because the cascade extends\ndown to smaller sizes (equation \\ref{eq:dbl}), and the longer orbital times mean that\nthey keep their mass for longer (Fig.~\\ref{fig:dd03}e).\n\nWhere the models differ is in the exact way $M_{\\rm{tot}}$ is used to get $f$ and \n$t_{\\rm{c}}$, and\nin the way the evolution is affected by changing $v_{\\rm{rel}}\/v_{\\rm{k}}$ \n(Fig.~\\ref{fig:dd03}c).\nThis is because the models make different assumptions.\nHere we assume that the size distribution is continuous between $D_{\\rm{c}}$ and \n$D_{\\rm{bl}}$,\nwhereas in DD03 the large planetesimals feeding the cascade are seen as separate from\nthe cascade.\nThis means that for us $M_{\\rm{tot}}$ gives a direct estimate of $K$ (equation \n\\ref{eq:mtot1})\nand so the amount of dust $f$, while for DD03 they equate the mass flow through the cascade\nwith the mass input from the break-up of planetesimals meaning that while their scaling\nparameter is proportional to $M_{\\rm{tot}}$ (as is ours), it also includes a dependence on \nthe\nparameter we call $X_{\\rm{c}}$ which affects the mass flow rate in the cascade.\nThis explains all of the differences:\nthe details of the scaling explain the slightly different initial $f$ values in all\nthe figures,\nand the fact that for us planetesimals of size $D_{\\rm{c}}$ are destroyed by planetesimals\ndown to size $X_{\\rm{c}}D_{\\rm{c}}$ means that our collisional lifetimes are always shorter \nthan \nthose in\nDD03, since they assume that planetesimals only collide with same size planetesimals.\nFor us changing $v_{\\rm{rel}}\/v_{\\rm{k}}$ does not affect the initial $f$ parameter as \ndescribed\nabove, but it does affect the collisional lifetime of the largest planetesimals\nwhich can survive longer if $v_{\\rm{rel}}\/v_{\\rm{k}}$ is reduced (since this means that fewer\nplanetesimals in the cascade cause destruction on impact).\nThe opposite is the case for 
the DD03 model: changing $v_{\\rm{rel}}\/v_{\\rm{k}}$ does not\naffect the collisional lifetime of the largest planetesimals, since they only collide\nwith each other, but a lower collision velocity does increase the initial dust\nluminosity because the cascade must have more mass in it to result in a mass flow\nrate sufficient to remove mass introduced by the large planetesimals.\nWhile the difference is subtle, it is important, since $v_{\\rm{rel}}\/v_{\\rm{k}}$ may be \nimportant\nin determining the presence of dust at late times (DD03; section \\ref{s:hot}).\n\nOn the face of it, it seems that our model provides a more accurate description of the disk.\nThe reason is that in a collisional cascade the mass flow does not need to be taken into\naccount, since it results in the $q=11\/6$ size distribution (Tanaka et al. 1996).\nIn other words the dependence of the scaling of the cascade with $X_{\\rm{c}}$ found by\nDD03 should have been removed if the largest planetesimals had been allowed to\ncollide with smaller planetesimals (since increasing $X_{\\rm{c}}$ would have both\nrestricted mass flow within the cascade and slowed down the mass input from\nthe destruction of large planetesimals).\nHowever, it is also true that the $q=11\/6$ distribution only applies in an\ninfinite cascade, and since both models have truncated the size distribution\nat $D_{\\rm{c}}$, this would affect the evolution.\nAlso, the effect of the variation of $Q_{\\rm{D}}^\\star$ with $D$ on the size distribution\nand its evolution are not yet clear, and neither is the evolution of the size \ndistribution while the collisional cascade is being set up.\nThese issues will be discussed only briefly in this paper, in which the simple\nevolution model described above is applied to some of the latest observational\nresults on debris disks.\n\n\\section{Application to rare systems with hot dust}\n\\label{s:hot}\n\n\\begin{deluxetable*}{cccccccc}\n \\tabletypesize{\\scriptsize}\n \\tablecaption{Main sequence sun-like (F, G and K) stars in the literature\n with evidence for hot dust at $<10$ AU.\n \\label{tab:hot} }\n \\tablewidth{0pt}\n \\tablehead{\n \\colhead{Star name} & \\colhead{Sp. Type} & \\colhead{Age, Myr} & \\colhead{Radius, AU} &\n \\colhead{$f_{\\rm{obs}} = L_{\\rm{ir}} \/ L_\\star $} & \\colhead{$f_{\\rm{max}}$} &\n \\colhead{Transient?} & \\colhead{Reference} }\n \\startdata\n HD98800\\tablenotemark{c} & K4\/5V & $\\sim$ 10 & 2.2 & $220\\times10^{-3}$ & \n $270 \\times 10^{-6}$ & Not req & Low et al. (2005) \\\\\n HD113766\\tablenotemark{ac} & F3V & 16 & 3 & $2.1\\times10^{-3}$ & \n $45 \\times 10^{-6}$ & Not req & Chen et al. (2005) \\\\\n HD12039 & G3\/5V & 30 & 4-6 & $0.1\\times10^{-3}$ &\n $200 \\times 10^{-6}$ & Not req & Hines et al. (2006) \\\\ \n BD+20307\\tablenotemark{a} & G0V & 300 & 1 & $40\\times10^{-3}$ &\n $0.36 \\times 10^{-6}$ & Yes & Song et al. (2005) \\\\ \n HD72905\\tablenotemark{a} & G1.5V & 400 & 0.23\\tablenotemark{b} & $0.1\\times10^{-3}$ & \n $0.011 \\times 10^{-6}$ & Yes & Beichman et al. (2006a) \\\\ \n $\\eta$ Corvi\\tablenotemark{a} & F2V & 1000 & 1-2\\tablenotemark{b} & $0.5\\times10^{-3}$ & \n $0.15 \\times 10^{-6}$ & Yes & Wyatt et al. (2005) \\\\ \n HD69830\\tablenotemark{a} & K0V & 2000 & 1 & $0.2\\times10^{-3}$ &\n $0.13 \\times 10^{-6}$ & Yes & Beichman et al. 
(2005)\n \\enddata\n \\tablenotetext{a}{infrared silicate feature}\n \\tablenotetext{b}{also has cool dust component at $>10$ AU}\n \\tablenotetext{c}{binary star}\n\\end{deluxetable*}\n\nVery few main sequence stars exhibit hot dust within $\\sim 10$ AU,\ni.e., in the region where we expect planets may have formed.\nFour surveys have searched for hot dust around sun-like stars\n(main sequence F, G or K stars) by looking for a 25 $\\mu$m flux in\nexcess of photospheric levels using IRAS (Gaidos 1999), ISO (Laureijs\net al. 2002) and Spitzer (Hines et al. 2006; Bryden et al. 2006).\nAll concluded that only $2 \\pm 2$\\% of these stars have hot dust with\ninfrared luminosities $f=L_{\\rm{ir}}\/L_\\star > 10^{-4}$, finding a total\nof 3 candidates.\nOther hot dust candidates exist in the literature, however some IRAS\nexcess fluxes have turned out to arise from chance alignments with background \nobjects (e.g., Lisse et al. 2002), including the candidate HD128400 from\nthe hot dust survey of Gaidos (1999) (Zuckerman, priv. comm.).\nThus confirmation of the presence of dust centred on the star using\nground- and space-based mid-IR imaging is vitally important (Smith, Wyatt\n\\& Dent, in prep.).\nThe tally of confirmed hot dust sources now stands at seven, and these\nare summarized in Table \\ref{tab:hot} which also gives the estimated\nradial location of the dust based on fitting of the spectral energy\ndistribution of the excess emission;\nfor all stars the dust is predicted to lie at $<10$ AU.\n\nWhile the frequency of the presence of such emission is low, there is\nas yet no adequate explanation for its origin and why it occurs in so few \nsystems.\nAnalogy with the solar system suggests that these are systems in which\nwe are witnessing the collisional grinding down of atypically massive\nasteroid belts.\nHowever, other scenarios have also been proposed in which the dust\nis transient, having been produced in some stochastic process.\nSuch a process could be a recent collision between two massive\nprotoplanets in an asteroid belt (Song et al. 2005), the sublimation\nof one supercomet (Beichman et al. 2005), or the sublimation of a swarm\nof comets, possibly scattered in from several tens of AU in an\nepisode analogous to the period of Late Heavy Bombardment in the\nsolar system (Gomes et al. 
2005).\n\n\\subsection{Are these massive asteroid belts?}\n\\label{ss:qe}\n\nHere we consider the possibility that these are atypically massive\nasteroid belts, and show that for the majority of the known systems this\nis unlikely to be the case.\nThe reason is that given in \\S \\ref{ss:ce}, which is that more massive\nasteroid belts are not necessarily more dusty at late times, and there\nis a maximum dust luminosity we can expect for a belt of a given age,\ngiven its radial location (equations \\ref{eq:mmax1}-\\ref{eq:fmax3}).\nTo arrive at a rough estimate of the maximum possible $f_{\\rm{max}}$\nwe assume the following parameters:\nthe largest possible planetesimal is $D_{\\rm{c}} = 2000$ km, since this is above the\nlargest members of the asteroid and Kuiper belts, and fits with the\nexpectation that planetesimal growth is halted once the largest planetesimals\nreach this size due to the resulting gravitational perturbations (Kenyon \\& Bromley \n2002);\nbelt width is $dr = 0.5r$;\nplanetesimal strength is $Q_{\\rm{D}}^\\star = 200$ J kg$^{-1}$, the canonical\nvalue used in DD03, although gravity strengthening can give rise to higher\nvalues for planetesimals larger than $\\sim 1$ km (see \\S \\ref{ss:pc});\nplanetesimal eccentricity is $e = 0.05$, typical for planetesimal belts\nlike the asteroid belt that are undergoing a collisional cascade, and close\nto that expected from stirring by 2000 km planetesimals within such belt.\n\\footnote{Equating the velocity dispersion in the belt with the escape\nvelocity of a planetesimal of size $D_{\\rm{c}}$ gives\n$e \\approx 2.6 \\times 10^{-7} \\rho^{0.5} r^{0.5} M_\\star^{-0.5} D_{\\rm{c}}$. }\nSubstituting in these nominal values into equation (\\ref{eq:fmax3}) and\napproximating $M_\\star=L_\\star=1$ gives:\n\\begin{equation}\n f_{\\rm{max}} = 0.16 \\times 10^{-3} r^{7\/3} t_{\\rm{age}}^{-1}. 
\\label{eq:fmax4}\n\\end{equation}\nPlots analogous to those in Fig.~\\ref{fig:dd03} are presented in Fig.~\\ref{fig:hotevol}\nwhich shows the evolution for a planetesimal belt with the nominal parameters\ndescribed above (and with a nominal starting mass of $M_{\\rm{tot}}(0)=1M_\\oplus$) along\nwith the consequence for the evolution of changing any of those parameters.\nNote that it is most appropriate to refer to Fig.~\\ref{fig:hotevol}, rather than\nFig.~\\ref{fig:dd03}, when considering the evolution of planetesimal belts close to\nsun-like stars.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3a.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3b.ps} \\\\[-0.0in]\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3c.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3d.ps} \\\\[-0.0in]\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3e.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f3f.ps}\n \\end{tabular}\n \\caption{The collisional evolution of a planetesimal belt at $r=1$ AU around a sun-like\n star ($L_\\star=M_\\star=1$) of initial mass $M_{\\rm{tot}}(0)=1M_\\oplus$ assuming that belt\n can be described by the parameters used in \\S \\ref{ss:qe} (i.e., $dr\/r=0.5$,\n $D_{\\rm{c}}=2000$ km, $\\rho=2700$ kg m$^{-3}$, $e=0.05$, $e\/I=1$,\n $Q_{\\rm{D}}^\\star=200$ J kg$^{-1}$).\n All panels show dust luminosity $f=L_{\\rm{ir}}\/L_\\star$ as a function of time, and\n the evolution with the above nominal parameters is shown with a solid line.\n The different panels show the effect of changing the following parameters:\n \\textbf{(top left)} starting disk mass $M_{\\rm{tot}}(0)$,\n \\textbf{(top right)} disk radius $r$,\n \\textbf{(middle left)} planetesimal eccentricity $e$,\n \\textbf{(middle right)} maximum planetesimal size $D_{\\rm{c}}$,\n \\textbf{(bottom left)} planetesimal strength $Q_{\\rm{D}}^\\star$, and\n \\textbf{(bottom right)} stellar spectral type.}\n \\label{fig:hotevol}\n\\end{figure*}\n\nThe value of $f_{\\rm{max}}$ is quoted in Table \\ref{tab:hot} under the assumption\nthat the planetesimal belt has the same age as the star.\nThe quoted value for each star is that from equation (\\ref{eq:fmax2}) for its\nspectral type, but is within a factor of three of that given in equation\n(\\ref{eq:fmax4}), indicating that this equation may be readily applied to\nobserved belts in the future.\nThe four oldest systems (BD+20307, HD72905, $\\eta$ Corvi and HD69830)\nhave $f_{\\rm{obs}} \\gg 10^{3}f_{\\rm{max}}$.\nWe show in \\S \\ref{ss:pc} that even with a change in parameters it is not\npossible to devise asteroid belts in these systems that could survive to the age of\nthe stars giving rise to the observed dust luminosities.\nThus we conclude that this period of high dust luminosity started\nrelatively recently.\nThe timescale over which a belt can last above a given\nluminosity, $f_{\\rm{obs}}$, is $t_{\\rm{age}}f_{\\rm{max}}\/f_{\\rm{obs}}$,\nsince collisions would grind a belt down to this level on such a timescale.\nThis implies that belts this luminous only last between a few thousand years\n(BD+20307 and HD72905) and a few Myr ($\\eta$ Corvi and HD69830).\nHowever, the true duration of this level of dust luminosity depends\non the details of the process causing it, and moreover there is still\nup to two orders of magnitude uncertainty in $f_{\\rm{max}}$ (see \\S \\ref{ss:pc}).\nThus this calculation should not yet be used to infer from the $\\sim 2$\\% of\nsystems with hot dust 
that, e.g., every sun-like star must undergo 10-1000 such\nevents in its lifetime (or fewer systems must undergo even more events).\nFor now the conclusion is that these systems cannot be planetesimal\nbelts that have been evolving in a collisional cascade for the full age of the\nstar.\n\nThis leaves open the possibility that the collisional cascade in these systems\nwas initiated much more recently, perhaps because a long timescale was required\nto form the 2000-3000 km sized planetesimals necessary to stir the planetesimal\nbelt and cause the switch from accretion to collisional cascade (Kenyon \\& Bromley\n2004).\nHowever, we consider this to be unlikely, because the timescale for the formation\nof objects of this size at 1 AU from a solar mass star was given in\nKenyon \\& Bromley (2004) to be $\\sim 0.6 dr\/M_{\\rm{tot}}$ Myr, where\n$M_{\\rm{tot}}$ is the mass of material in an annulus of width\n$dr$, just as in the rest of the paper.\nThis means that the cascade can only be delayed for 100-1000 Myr at 1 AU for planetesimal\nbelts of very low mass, which would also be expected to have low dust luminosities when\nthe cascade was eventually ignited.\nFor example, a delay of $>500$ Myr would require $<0.6 \\times 10^{-3}M_\\oplus$ in the\nannulus at 1 AU of 0.5 AU width, a mass which corresponds to a fractional luminosity\nof $<5 \\times 10^{-6}$ (equations \\ref{eq:mtot2} and \\ref{eq:f} with $\\rho = 2700$ kg \nm$^{-3}$ and $q=11\/6$), much lower than that observed in all systems.\nOne can also consider the same argument in the following way:\nthe observed luminosity $f_{\\rm{obs}}$ implies a planetesimal belt mass which current\nplanet formation theories indicate would result in the growth of 2000 km planetesimals\nwhich would ignite a collisional cascade on a timescale of\n$3 \\times 10^{-3}(dr\/r)\/f_{\\rm{obs}}$ Myr if this was placed at 1 AU from a solar mass star.\nThe conclusion at the end of the last paragraph also considers the collisional cascade to evolve \nin quasi-steady state, and it is possible that collisions between large members of the cascade \nmay have recently introduced large quantitites of small dust;\nthat possibility is discussed in \\S \\ref{ss:singlecoll}.\n\nFor the three youngest systems the conclusions are less clear.\nThe dust luminosities of HD12039 and HD113766 are, respectively, close to\nand fifty times higher than the maximum allowed value for collisionally evolved \nplanetesimal belts.\nHowever, given the uncertainties in the parameters in the model (described in \\S \n\\ref{ss:pc}), we conclude that it is not possible to say that these could not be massive \nasteroid belts.\nThe main reason that firm conclusions cannot be drawn is the large radial location\nof the dust at $>2$ AU.\nThe strong dependence of $f_{\\rm{max}}$ on $r$ means that it is easiest to\nconstrain the nature of belts within a few AU which evolve very rapidly.\nFor the youngest system (HD98800), while its dust luminosity lies a factor of \n800 above the maximum for the age of the star, we do not infer\nthat this must be transient, since the high dust luminosity and low age imply\nthat this system is in a transitional phase and the collisional cascade in\nthis debris disk is likely to have only recently been ignited.\nRather we note that this model implies that due to collisional processing\nthis debris disk cannot maintain this level of dust emission beyond the\nnext $\\sim 10,000$ years (albeit with an additional two orders of magnitude\nuncertainty, \\S 
\\ref{ss:pc}).\n\n\\subsection{Possible caveats}\n\\label{ss:pc}\nGiven the large number of assumptions that went into the estimate for\n$f_{\\rm{max}}$, it is worth pointing out that this model is in excellent\nagreement with the properties of the asteroid belt in the solar system,\nsince for a 4500 Myr belt at 3 AU the model predicts\n$M_{\\rm{max}}=0.4 \\times 10^{-3}M_\\oplus$, which is close to the inferred\nmass of the asteroid belt of $0.6 \\times 10^{-3}M_\\oplus$ (Krasinsky et al. 2002).\nThe model also predicts $f_{\\rm{max}}=5 \\times 10^{-7}$, which is consistent\nwith the estimate for the zodiacal cloud of\n$L_{\\rm{ir}}\/L_\\star = 0.8 \\times 10^{-7}$ (Backman \\& Paresce 1993).\n\\footnote{In planetesimal belts as tenuous as the asteroid belt, the\neffect of P-R drag is important (Wyatt 2005) meaning that the cross-sectional\narea of dust in the zodiacal cloud is dominated by $\\sim 100$ $\\mu$m sized grains\nrather than grains of size $D_{\\rm{bl}}$ as assumed in the simple model of\n\\S \\ref{ss:pb}.\nTaking this into account would reduce the fractional luminosity predicted by the model\nby an order of magnitude.}\nIt is also necessary to explore whether there is any way in which the parameters\nof the model could be relaxed to increase $f_{\\rm{max}}$ and so change the\nconclusions about the transience of the hot dust systems.\nEquation (\\ref{eq:fmax3}) indicates one way in which\n$f_{\\rm{max}}$ could be increased, which is by either reducing the eccentricities of the \nplanetesimals, $e$, or increasing their strength, $Q_{\\rm{D}}^\\star$, both of which could \nincrease $X_{\\rm{c}}$ and so decrease the rate at which mass is lost from the cascade\n(e.g., Fig.~\\ref{fig:hotevol}).\nThe other way is to change the size distribution so that a given disk mass results\nin a significantly larger dust luminosity, e.g., by increasing $q$.\n\nIn fact Benz \\& Asphaug (1999) found a value of $Q_{\\rm{D}}^\\star$ that is higher \nthan $2 \\times 10^{5}$ J kg$^{-1}$ for planetesimals as large as 2000 km for both ice\nand basalt compositions.\nThis would result in an increase in $f_{\\rm{max}}$ by a factor of $\\sim 170$\n(e.g., Fig.~\\ref{fig:hotevol}).\nHowever, such a high value of $Q_{\\rm{D}}^\\star$ is possible only due to gravity \nstrengthening of large planetesimals, and the dependence in this regime of\n$Q_{\\rm{D}}^\\star \\propto D^{1.3}$ \n(Benz \\& Asphaug 1999) would result in an equilibrium size distribution with\n$q_{\\rm{g}} \\approx 1.68$, since when $Q_{\\rm{D}}^\\star \\propto D^s$ then $q = (11+s)\/(6+s)$\n(O'Brien \\& Greenberg 2003).\nIf such a distribution were to hold down to the smallest dust grains the net result\nwould be a decrease in $f_{\\rm{max}}$ by $\\sim 200$.\nThis is not the case, however, since objects in the size\nrange $D \\lesssim 1$ km lie in the strength-scaling regime, where $Q_{\\rm{D}}^\\star$ is much\nlower and decreases with size (Benz \\& Asphaug 1999), so that such a shallow distribution\ncannot extend down to the smallest dust grains.\nThe eccentricity, on the other hand, cannot be decreased arbitrarily, since sufficiently\nlow values of $e$ result in $X_{\\rm{c}}>1$ (e.g., Fig.~\\ref{fig:gvse}b).\nIn such a regime mutual collisions do not result in the destruction of \nplanetesimals, rather in their merger and growth.\nAt this point $G(11\/6,X_{\\rm{c}})<0$, i.e., $f_{\\rm{max}}$ is infinite since, in this\nsimple model, whatever the starting conditions there is no evolution (although\nin practice the size distribution would evolve due to planetesimal growth).\nAt $\\sim 1$ AU, this means $e$ must be larger than\n0.0005 (for $Q_{\\rm{D}}^\\star = 200$ J kg$^{-1}$, appropriate for $D_{\\rm{c}}=0.15$ km)\nor 0.014 (for $Q_{\\rm{D}}^\\star = 2 \\times 10^5$ J kg$^{-1}$, appropriate for $D_{\\rm{c}}=2000$ km)\nto initiate a collisional cascade, values which are consistent with those quoted by\nmore detailed planet formation models (e.g., Kenyon \\& Bromley 2002).
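\nAs a consistency check, these thresholds follow directly from equation (\\ref{eq:xc2}): setting $X_{\\rm{c}}=1$ and $f(e,I)=1.5e$ (valid for $e=I$) and solving for the eccentricity gives $e = (1.3 \\times 10^{-3})^{3\/2}(Q_{\\rm{D}}^\\star r\/M_\\star)^{1\/2}\/1.5$, which evaluates to $e \\approx 0.0005$ and $e \\approx 0.014$ for the two values of $Q_{\\rm{D}}^\\star$ above with $r=1$ AU and $M_\\star=1M_\\odot$.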
\nSuch eccentricities would be expected through stirring either by $>1000$ km\nplanetesimals which formed within the belt, or by more massive perturbers which formed \noutside the belt, both of which can be expected to occur within 10-100 Myr\n(Kenyon \\& Bromley 2006).\nThis was considered in \\S \\ref{ss:qe} where it was shown that the cascade would\nbe initiated following the growth of $\\sim 2000$ km planetesimals on timescales\nthat are much shorter than the age of the system for the disk masses required to\nproduce a dust luminosity at the observed level.\n\nThe only route which could plausibly maintain the hot dust systems in Table \\ref{tab:hot}\nin collisional equilibrium over the age of the stars might be to invoke some mechanism\nwhich maintains the eccentricity at a level at which the cascade is only just being eroded.\nHowever, Fig.~\\ref{fig:gvse}a shows that $G(11\/6,X_{\\rm{c}})$ is a strong function\nof $e$ when $G(11\/6,X_{\\rm{c}}) < 1$, since the range $G(11\/6,X_{\\rm{c}})=0-1$ is\ncovered by a factor of less than two in eccentricity.\nThus we consider it reasonable to assume that the best possible combination of \n$Q_{\\rm{D}}^\\star$ and $e$ in this regard would result in \n$G(11\/6,X_{\\rm{c}}) \\approx 1$ (corresponding to $X_{\\rm{c}}=0.69$);\nlower values of $G(11\/6,X_{\\rm{c}})$ are possible, but only within a very narrow\nrange of eccentricity.\nSince in the above example with a realistic $Q_{\\rm{D}}^\\star$ prescription extending\nup to 2000 km we assumed $e=0.05$, which already resulted in $G(q_{\\rm{g}},X_{\\rm{c}}) < 1$, \nwe consider that it is not reasonable to fine-tune the eccentricity further \nto increase $f_{\\rm{max}}$;\ne.g., decreasing to $e=0.03$ results in some disks not evolving and the rest\nwith $f_{\\rm{max}}$ higher than that quoted in Table \\ref{tab:hot} by a factor $\\sim\n150$.\nThus we conclude that the estimate given in Table \\ref{tab:hot} (and e.g., equation\n\\ref{eq:fmax1}) underestimates $f_{\\rm{max}}$ by at most a factor of $\\sim 100$, unless\nthe eccentricity happens to lie within $\\pm 10\\%$ of a critical value.\n\nIt is also worth noting that low levels of eccentricity would result in large \ngravitational focussing factors for large planetesimals which would enhance\n$f_{\\rm{cc}}$ and so decrease the time for these planetesimals to be catastrophically\ndestroyed, something which is compounded by the higher collision velocity\nin gravitationally focussed collisions which reduces $X_{\\rm{c}}$ because collisions\nwith smaller planetesimals can cause catastrophic disruption (e.g., equation\n\\ref{eq:xc2}).\nHowever, we do not need to account for this here, since gravitational focussing\nbecomes important when $v_{\\rm{rel}} < v_{\\rm{esc}} \\approx \\sqrt{(2\/3)\\pi \\rho G}(10^{-3}D)$\nand so when $e<4\\times 10^{-7}\\sqrt{\\rho r M_\\star^{-1}[1.25+(I\/e)^2]^{-1}}D$\n(where $D$ is in km);\ni.e., when $e<2 \\times 10^{-6}$ for $D=0.15$ km and $e<0.027$ for $D=2000$ km\nat 1 AU from a $1M_\\odot$ star, both of which occur close to or below the level\nat which collisions result in accumulation rather than destruction.\n\n\n\\subsection{Are these the products of single collisions?}\n\\label{ss:singlecoll}\nOne possible origin quoted in the literature for the hot dust is that\nit is the product of a single collision (Song et al. 2005).
2005).\nOur model can be used to make further predictions for the likelihood of massive collisions\noccurring within an asteroid belt.\nThe maximum number of parent bodies (i.e., planetesimals) larger than $D_{\\rm{pb}}$ \nremaining at late times occurs when $M_{\\rm{tot}}=M_{\\rm{max}}$ and so is given by:\n\\begin{eqnarray}\n n(D>D_{\\rm{pb}}) & = & \\left( \\frac{5.6 \\times 10^{10}r^{3.5} (dr\/r)}\n {M_\\star^{0.5} D_{\\rm{c}}^2 t_{\\rm{age}}} \\right)\n \\left[ \\left( \\frac{D_{\\rm{c}}}{D_{\\rm{pb}}} \\right)^{3q-3}-1 \\right]\n \\nonumber \\\\\n & & \\times \\left( \\frac{(3q-5)[1+1.25(e\/I)^2]^{-0.5}}{(3-3q)G(q,X_{\\rm{c}})} \\right).\n \\label{eq:ndgtdpb}\n\\end{eqnarray}\nThe collision timescale for planetesimals of size $D_{\\rm{pb}}$ is\n\\begin{eqnarray}\n t_{\\rm{c}}(D_{\\rm{pb}}) & = & t_{\\rm{c}}(D_{\\rm{c}}) \n f_{\\rm{cc}}(D_{\\rm{c}})\/f_{\\rm{cc}}(D_{\\rm{pb}}) \\nonumber \\\\ \n & = & 10^6 t_{\\rm{age}}(D_{\\rm{pb}}\/D_{\\rm{c}})^{3q-5}, \n\\label{eq:tcd}\n\\end{eqnarray}\nnoting that the collisional lifetime of the largest planetesimals, $t_{\\rm{c}}(D_{\\rm{c}})$, \nis the age of the star for a planetesimal belt at maximum luminosity for this\nage.\nThese can be combined to give the destructive collision rate for planetesimals larger\nthan $D_{\\rm{pb}}$:\n\\begin{eqnarray}\n dN_{\\rm{c}}(D>D_{\\rm{pb}})\/dt & = & 1000r^{13\/3}(dr\/r)t_{\\rm{age}}^{-2}D_{\\rm{c}}\n D_{\\rm{pb}}^{-3}\\times \\nonumber \\\\ \n & & M_\\star^{-4\/3}{Q_{\\rm{D}}^\\star}^{5\/6}e^{-5\/3},\n \\label{eq:dncdt}\n\\end{eqnarray}\nin Myr$^{-1}$, where the assumptions that $q=11\/6$, $e=I$ and $X_{\\rm{c}} \\ll 1$ have been\nused in deriving this equation.\n\nWe now assume that we are considering collisions capable of reproducing the\nobserved dust level, $f_{\\rm{obs}}$, so that the lifetime of the resulting collision\nproducts can be estimated from the collisional lifetime of that dust, assumed to be\nof size $D_{\\rm{bl}}$ (WD02):\n\\begin{equation}\n t_{\\rm{c}}(D_{\\rm{bl}}) = 0.04 r^{1.5}M_\\star^{-0.5}(dr\/r)f_{\\rm{obs}}^{-1}, \\label{eq:tcdust}\n\\end{equation}\nin years, noting that collisions would remove the dust on a faster timescale than P-R drag\n(Wyatt 2005; Beichman et al. 2005).\nCombining equations (\\ref{eq:dncdt}) and (\\ref{eq:tcdust}) gives the fraction\nof time that collisions are expected to result in dust above a given level of\n$f_{\\rm{obs}}$:\n\\begin{eqnarray}\n P(f>f_{\\rm{obs}}) & = & 4 \\times 10^{-5}r^{35\/6}(dr\/r)^2t_{\\rm{age}}^{-2}\n D_{\\rm{c}}D_{\\rm{pb}}^{-3} \\times \\nonumber \\\\\n & & M_\\star^{-11\/6}f_{\\rm{obs}}^{-1}\n {Q_{\\rm{D}}^\\star}^{5\/6}e^{-5\/3}.\n \\label{eq:pffobs}\n\\end{eqnarray}\n\nTo estimate the minimum size of the parent body, $D_{\\rm{pb}}$, responsible for this\ndust, we consider how large a planetesimal must be to reproduce $f_{\\rm{obs}}$ if a\ndestructive collision resulted in one fragment with half the mass of the original\nplanetesimal (i.e., the definition of a destructive collision), with the remaining\nmass in particles of size $D_{\\rm{bl}}$:\n\\begin{equation}\n D_{\\rm{pb}} = 890[D_{\\rm{bl}}r^2f_{\\rm{obs}}]^{1\/3}. 
\\label{eq:dpb}\n\\end{equation}\n\n\\begin{deluxetable*}{ccccccc}\n \\tabletypesize{\\scriptsize}\n \\tablecaption{Parameters in the model for the hot dust systems of Table \\ref{tab:hot}\n used to determine whether the observed dust can be the outcome of a single\n collision in a massive asteroid belt which is itself not normally bright\n enough to be detected.\n \\label{tab:hot2} }\n \\tablewidth{0pt}\n \\tablehead{ \\colhead{Star name} & \\colhead{$D_{\\rm{pb}}$, km} & \\colhead{$N(D>D_{\\rm{pb}})$} & \n \\colhead{$dN_{\\rm{c}}(D>D_{\\rm{pb}})\/dt$, Myr$^{-1}$} &\n \\colhead{$t_{\\rm{c}}(D_{\\rm{bl}})$, yr} & \n \\colhead{$P(f>f_{\\rm{obs}})$} & \\colhead{Single collision?} }\n \\startdata\n HD98800 & 530 & 200 & 41 & 0.36 & $15 \\times 10^{-6}$ & No \\\\\n HD113766 & 280 & 890 & 150 & 41 & $6100 \\times 10^{-6}$ & Not impossible \\\\\n HD12039 & 110 & 77,000 & 12,000 & 2300 & 27\\tablenotemark{*} & Not impossible \\\\\n BD+20307 & 320 & 0.47\\tablenotemark{*} & 0.0039 & 0.49 & $0.0019 \\times 10^{-6}$ & No \\\\\n HD72905 & 15 & 1.4 & 0.036 & 22 & $0.79 \\times 10^{-6}$ & No \\\\\n $\\eta$ Corvi & 110 & 7.8 & 0.033 & 59 & $2.0 \\times 10^{-6}$ & No \\\\\n HD69830 & 39 & 19 & 0.068 & 110 & $7.7 \\times 10^{-6}$ & No\n \\enddata\n \\tablenotetext{*}{For disks with $P(f>f_{\\rm{obs}})>1$, this value indicates the number of\n collisions at that level we can expect to see in the disk at any one time.\n Likewise, for disks with $N(D>D_{\\rm{pb}})<1$, this value indicates the probability\n that there is an object of this size remaining in the disk.}\n\\end{deluxetable*}\n\nTable \\ref{tab:hot2} lists the parameters for the hot dust systems assuming the\ncanonical parameters of $Q_{\\rm{D}}^\\star=200$ J kg$^{-1}$, $D_{\\rm{c}}=2000$ km and \n$e=0.05$.\nTo determine whether a system could have been reproduced by a single collision, the\nfinal value of $P(f>f_{\\rm{obs}})$ was compared with the statistic that 2\\% of systems\nexhibit hot dust (which therefore considers the optimistic case where all stars\nhave planetesimal belts at a few AU).\nFor the systems which were inferred in Table \\ref{tab:hot} to be transient,\nall are extremely unlikely ($<0.001$\\%) to have been caused by a single collision amongst \nplanetesimals in a planetesimal belt which has undergone a collisional cascade since the star\nwas born.\n\nWhile this statistic is subject to the uncertainties in the model parameters described\nin \\S \\ref{ss:pc}, and so could be in error by around two orders of magnitude,\nit must also be remembered that the most optimistic assumptions were used to arrive at\nthis figure.\nFor example, it is unlikely that the destruction of planetesimals of size $D_{\\rm{pb}}$ would\nrelease half of the mass of the planetesimal into dust $D_{\\rm{bl}}$ in size.\n\\footnote{Such an optimistic assumption should not be dismissed out of hand, however, since \nthe large amount of collisional processing that must have taken place means that\nplanetesimals more than a few km in size would be rubble piles.\nThese would have undergone \nshattering and reaccumulation numerous times, meaning that they could have deep dusty\nregolith layers which could be preferentially ejected in a collision.}\nOn the other hand, one might consider that the lifetime of the observed dust, \n$t_{\\rm{c}}(D_{\\rm{bl}})$, is an\nunderestimate of the duration of dust at the level of $f>f_{\\rm{obs}}$, since the dust\ncould be replenished from the destruction of larger particles.\nIndeed Farley et al. 
(2006) modeled the destruction of a 150 km planetesimal in the\nasteroid belt and inferred a dust peak that lasted $\\sim 1$ Myr, precisely because\nlarge fragments produced in the collision replenished the dust population.\nHowever, it should be cautioned that the dust peak inferred by Farley et al. (2006)\nwould not have been detectable as an infrared excess since it only caused a factor\n$\\sim 10$ enhancement in the luminosity of the zodiacal cloud (i.e., to\n$f \\approx 0.8 \\times 10^{-6}$), and that in the context of our model, invoking a\npopulation of larger grains that result from the collision would lead to a \nlarger parent body (i.e., a larger $D_{\\rm{pb}}$) required to reproduce the observed luminosity \n$f_{\\rm{obs}}$ and so less frequent collisions;\ni.e., it may be possible (even desirable) to increase $t_{\\rm{c}}(D_{\\rm{bl}})$, but only at the\nexpense of decreasing $dN_{\\rm{c}}(D>D_{\\rm{pb}})\/dt$, leading to little change in \n$P(f>f_{\\rm{obs}})$.\nWe note that $t_{\\rm{c}}(D_{\\rm{bl}})$ given in Table \\ref{tab:hot2} is sufficiently short\nthat a measurement of the variability of the infrared excess on realistic (few year)\ntimescales could lead to constraints on the size of the grains feeding the\nobserved phenomenon, since if a population of larger grains existed then the\nluminosity would fade on much longer timescales.\n\nA further argument against the transient disks being caused by single collisions\nis the fact that the probability of seeing the outcome of a collision, $P(f>f_{\\rm{obs}})$,\nfalls off $\\propto t_{\\rm{age}}^{-2}$, which means that we would expect to see more\ntransient disks around younger stars than around older stars\n(because young stars have more massive disks with more large\nplanetesimals and so more frequent collisions).\nThere is some evidence from Table \\ref{tab:hot} that transience is \nmore common around young systems, since none of the transient systems is older than\n2 Gyr, whereas sun-like stars in the solar neighborhood would be expected to\nhave a mean age of $\\sim 5$ Gyr.\nHowever, while the statistics are poor, a $t_{\\rm{age}}^{-2}$ dependence does seem\nto be ruled out;\ne.g., we would have expected to have detected 10 times more transient disks\ncaused by single collisions in the age range 50-500 Myr\\footnote{It is not reasonable\nto extend the age range to younger systems, since, as noted in \\S \\ref{ss:qe}, it is\nhard to discern whether or not dust detected in such systems is transient.}\nthan in the age range 0.5-5 Gyr, whereas 2 transient disks are known in the younger age bin, \nand 2 in the older age bin, which is more consistent with a $t_{\\rm{age}}^{-1}$ dependence.\n\n\\begin{deluxetable*}{ccccc}\n \\tabletypesize{\\scriptsize}\n \\tablecaption{Parameters in the model for the transient hot dust systems of Table \\ref{tab:hot}\n used to determine whether the observed dust could originate in the destruction of a\n planetesimal belt coincident with the dust:\n $dM_{\\rm{loss}}\/dt$ is the observed mass loss rate,\n $M_{\\rm{max}}$ is the maximum mass of a planetesimal belt that is coincident with the dust given\n the age of the star,\n $t(f>f_{\\rm{obs}})$ is the length of time such a planetesimal belt could sustain the observed\n dust luminosity,\n and $r_{\\rm{out}}(\\rm{100Myr})$ is the radius of a planetesimal belt which \n would still have enough mass\n to sustain the observed dust luminosity for 100 Myr. 
\\label{tab:hot3} }\n \\tablewidth{0pt}\n \\tablehead{ \\colhead{Star name} &\n \\colhead{$dM_{\\rm{loss}}\/dt$, $10^{-6}M_\\oplus$Myr$^{-1}$} &\n \\colhead{$M_{\\rm{max}}$, $10^{-6}M_\\oplus$} &\n \\colhead{$t(f>f_{\\rm{obs}})$, Myr} & \n \\colhead{$r_{\\rm{out}}(\\rm{100Myr})$, AU} }\n \\startdata\n BD+20307 & $8.0 \\times 10^6$ & 53 & $6.7 \\times 10^{-6}$ & 45 \\\\\n HD72905 & 19 & 0.072 & $3.7 \\times 10^{-3}$ & 2.4 \\\\\n $\\eta$ Corvi & 2500 & 57 & 0.023 & 9.6 \\\\\n HD69830 & 64 & 12 & 0.18 & 4.5\n \\enddata\n\\end{deluxetable*}\n\n\n\nIn fact, within the context of this model, all of the disks which we infer to be\ntransient would also be inferred not to be the product of single\ncollisions.\nThis can be seen by substituting $D_{\\rm{pb}}$ from equation (\\ref{eq:dpb}) and \n$f_{\\rm{max}}$ from equation (\\ref{eq:fmax2}) into equation (\\ref{eq:pffobs}) to get:\n\\begin{equation}\n P(f>f_{\\rm{obs}}) = 0.2 \\times 10^6 (f_{\\rm{max}}\/f_{\\rm{obs}})^2\n (M_\\star e^2 r^{-1} {Q_{\\rm{D}}^\\star}^{-1})^{5\/6},\n \\label{eq:pff2}\n\\end{equation}\nwhich reduces to $P(f>f_{\\rm{obs}})=16(f_{\\rm{max}}\/f_{\\rm{obs}})^2(M_\\star r^{-1})^{5\/6}$\nfor the canonical parameters used before.\nSince transient disks are defined by $f_{\\rm{obs}}\/f_{\\rm{max}} \\gg 1000$, this means\nthey cannot also have a high probability of having their origin in single collisions.\nIt would only be inferred that disks with $f_{\\rm{obs}}\/f_{\\rm{max}} \\ll 1000$ could\nhave their origin in single collisions, but since it is also possible that these disks\nare the result of steady state collisional evolution, there is no need to invoke\na single collision to explain their presence, which is why Table \\ref{tab:hot2}\nsimply concluded that it is ``not impossible'' that the disks of\nHD113766 and HD12039 are the product of single collisions.\nWhat equation (\\ref{eq:pff2}) does indicate, however, is that it is possible for single \ncollisions to cause disks to spend some fraction of their time at a\nluminosity enhanced above the nominal maximum value $f_{\\rm{max}}$, and\nthat this occurs more readily for disks at smaller radii and around\nhigher mass stars.\nHowever, whether single collisions really do achieve an observable increase in\nluminosity depends on the size distribution of the collisional fragments, for\nwhich it must be remembered that equation (\\ref{eq:pff2}) used an unrealistically\noptimistic estimate. 
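\n\nAs an illustrative check of this scaling (using round numbers rather than the exact parameters of any system in Table \\ref{tab:hot2}), consider a hypothetical disk with $f_{\\rm{obs}}\/f_{\\rm{max}} = 1000$ at $r=1$ AU around a $1M_\\odot$ star; the reduced form of equation (\\ref{eq:pff2}) then gives\n\\[\n P(f>f_{\\rm{obs}}) = 16 \\times (10^{-3})^{2} \\times 1^{5\/6} = 1.6 \\times 10^{-5},\n\\]\nfar below the 2\\% incidence of hot dust, consistent with the conclusion above that the transient disks cannot owe their origin to single collisions.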
\n\n\n\\subsection{Are parent planetesimals coincident with dust?}\n\\label{ss:coincident}\n\nFor similar reasons to those in \\S \\ref{ss:singlecoll} it is also possible to\nshow that the parent planetesimals of the dust are extremely unlikely to\noriginate in a planetesimal belt that is coincident with the dust.\nThe reason is that the mass remaining in such a belt would be insufficient to\nreplenish the dust for a length of time commensurate with the statistic that\n2\\% of stars show this phenomenon.\nThe observed dust luminosity, assuming this is comprised of\ndust of size $D_{\\rm{bl}}$ which has a lifetime of $t_{\\rm{c}}(D_{\\rm{bl}})$\n(equation \\ref{eq:tcdust}), implies a mass loss rate due to mutual collisions\nbetween the dust grains of:\n\\begin{equation}\n dM_{\\rm{loss}}\/dt = 1700 f_{\\rm{obs}}^2 r^{0.5} L_\\star M_\\star^{-0.5} (r\/dr),\n \\label{eq:mloss}\n\\end{equation}\nin $M_\\oplus$\/Myr, and this is independent of the collisional evolution\nmodel of \\S \\ref{s:model}.\nHowever, due to the collisional evolution of a planetesimal belt's largest\nmembers, there is a maximum mass that can remain in a belt at the same radius\nas the dust at this age, and this is given in equation (\\ref{eq:mmax1}).\nThis means that if the observed dust originates in an event which, for\nwhatever reason, is causing planetesimals in a belt at the same radius as\nthe dust to be converted into dust, then this can last a maximum time of\n$t(f>f_{\\rm{obs}})=M_{\\rm{max}}\/dM_{\\rm{loss}}\/dt$ before the planetesimal\nbelt is completely exhausted.\nThese figures are given in Table \\ref{tab:hot3} which shows that the longest\nthe type of transient event observed could be sustained in these systems is\nunder 1 Myr, under the assumptions about the planetesimal belts employed in the\nrest of the paper.\n\nA maximum duration of 1 Myr is not sufficient to explain the statistic that\n2\\% of sun-like stars exhibit this phenomenon, since the median age of such\nstars is 5 Gyr, indicating a typical duration (even if this occurs\nin multiple, shorter, events) of around 100 Myr.\nClearly a reservoir of mass is required in excess of that which it\nis possible to retain so close to the star.\n\n\\subsection{Constraints on parent planetesimal belt}\n\\label{ss:constraints}\nIf we assume that the observed mass of hot dust originates in planetesimals that\nwere initially in a belt at a radius $r_{\\rm{out}}$ which has properties like those\nassumed in the rest of the paper,\nand a fractional luminosity of $f_{\\rm{out}}$,\nthen there are two main constraints on that belt.\nFirst, assuming that this belt has been collisionally evolving for the age of\nthe star, then this belt cannot have more mass (or luminosity) than the maximum\nthat could possibly remain due to collisional processing, i.e., $f_{\\rm{out}} < f_{\\rm{max}}$\n(equation \\ref{eq:fmax1}).\nSecond, it must have sufficient mass remaining to feed the observed mass loss rate\nfor long enough to reproduce the statistic that 2\\% of stars exhibit this phenomenon\nwhich implies a total duration of $>100$ Myr.\nFor a belt to have enough mass to feed the observed hot dust\nluminosity of $f_{\\rm{obs}}$ at a radius $r$ for a total time of $t_{\\rm{hot}}$ in Myr\nrequires the belt to have a luminosity of:\n\\begin{equation}\n f_{\\rm{out}} > 710 t_{\\rm{hot}} f_{\\rm{obs}}^2 r_{\\rm{out}}^{-2} r^{0.5}\n D_{\\rm{c}}^{-0.5} L_\\star^{0.5} (dr\/r)^{-1},\n \\label{eq:fout}\n\\end{equation}\nor rather, this is the luminosity it must have had before it was 
depleted.\n\n\\begin{figure*}\n \\centering\n \\begin{tabular}{cc}\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f4a.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f4b.ps} \\\\[-0.0in]\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f4c.ps} &\n \\hspace{-0.15in} \\includegraphics[width=3.2in]{f4d.ps} \\\\[-0.0in]\n \\end{tabular}\n \\caption{Constraints on the fractional luminosity and radius of the\n planetesimal belt feeding the observed transient hot dust (shaded region)\n for the systems: \\textbf{(top left)} $\\eta$ Corvi, \\textbf{(top right)} HD72905,\n \\textbf{(bottom left)} HD69830 and \\textbf{(bottom right)} BD+20307.\n The solid lines are the constraints imposed by the far-IR detection limits (assuming\n black body emission), the maximum luminosity possible in a belt at this radius due to\n erosion by collisional processing, and the luminosity from a belt of sufficient mass\n to feed the observed mass loss rate for 100 Myr.\n The properties of the hot dust in these systems are shown with a diamond, and those\n of the cold dust, where known (the top two panels), are shown with a triangle.}\n \\label{fig:transsum}\n\\end{figure*}\n\nComparing this with the maximum mass possible at this age indicates that the parent\nbelt must have a minimum radius of:\n\\begin{eqnarray}\n r_{\\rm{out}}(t_{\\rm{hot}}) & > & 615t_{\\rm{hot}}^{3\/13}t_{\\rm{age}}^{3\/13}f_{\\rm{obs}}^{6\/13}\n r^{3\/26} (dr\/r)^{-6\/13} \\times \\nonumber \\\\\n & & D_{\\rm{c}}^{-3\/13}{Q_{\\rm{D}}^\\star}^{-5\/26}e^{5\/13}L_\\star^{3\/13}M_\\star^{5\/26}.\n \\label{eq:rout}\n\\end{eqnarray}\nTable \\ref{tab:hot3} gives an estimate of the minimum radial location of such a\nplanetesimal belt, under the assumption that the event (or multiple events) of\nhigh hot dust luminosity lasts $t_{\\rm{hot}} = 100$ Myr.\nThese values indicate that the planetesimal belts must be at least a few AU from the star.\nIt must be cautioned that this conclusion is relatively weak in the case of HD69830,\nsince the uncertainty in the properties of the planetesimals still leaves two orders\nof magnitude uncertainty in the maximum luminosity, $f_{\\rm{max}}$,\nand so also in the maximum mass $M_{\\rm{max}}$ (see \\S \\ref{ss:pc}).\nThis means that, with suitable planetesimal belt properties, a belt in this\nsystem that is coincident with the dust at 1 AU may be able to replenish the\nobserved phenomenon for 20 Myr.\nHowever, we still consider this to be an unlikely scenario, since it would require\nthat the mass of the planetesimal belt is depleted at a constant rate for the\nfull 100 Myr, whereas most conceivable scenarios would result in a mass loss rate\nwhich decreases with time as the planetesimal population is depleted, thus requiring\nan even larger starting mass.\n\nThese two constraints are summarized for the 4 systems with transient hot dust in\nFig.~\\ref{fig:transsum}, which shows the shaded region of parameter space in\n$f_{\\rm{out}}$ and $r_{\\rm{out}}$ where the parent planetesimal belt can lie.\nThis figure also shows the location of the hot dust at $f_{\\rm{obs}}$ and $r$,\nillustrating the conclusion of \\S \\ref{ss:qe} that this lies significantly\nabove $f_{\\rm{max}}$, the maximum fractional luminosity expected for a planetesimal\nbelt at the age of the parent star.\nNote that the value $r_{\\rm{out}}(100\\rm{Myr})$ given in Table \\ref{tab:hot3}\ndenotes the intersection of the limits from $f_{\\rm{max}}$ and from equation\n(\\ref{eq:fout}).\n\nA third constraint for the parent planetesimal belt comes 
from far-IR observations\nof these systems.\nFor 2\/4 of the transient dust systems, a colder dust component has already been detected:\n$\\eta$ Corvi has a planetesimal belt with a resolved radius of $\\sim 100$ AU (Wyatt et al. 2005),\nand HD72905 has one inferred to be at $\\sim 14$ AU (Beichman et al. 2006a).\nIn both cases these outer planetesimal belts have been inferred to be at a\ndifferent spatial location from the hot dust either because of imaging constraints\n(Wyatt et al. 2005) or from analysis of the SED (Beichman et al. 2006a).\nThe properties inferred for these planetesimal belts are indicated on Fig.~\\ref{fig:transsum}\nand lie within the shaded region, implying that these planetesimal belts\ndo not have to be transiently regenerated, and also provide a plausible source\npopulation for the hot dust found closer in.\nHowever, no such excess emission has been seen toward HD69830 at either 70 $\\mu$m\n(Beichman et al. 2005) or 850 $\\mu$m (Sheret, Dent \\& Wyatt 2004), indicating a \nplanetesimal belt with a mass at most 5-50 times greater than that of our own Kuiper belt.\nLikewise, BD+20307 does not have a detectable excess in IRAS 60 $\\mu$m\nobservations (Song et al. 2005).\n\nA low mass reservoir of planetesimals does not necessarily rule out the presence\nof an outer planetesimal belt which is feeding the hot dust for two reasons.\nFirst, the shaded region of Fig.~\\ref{fig:transsum} actually constrains the properties of\nthe planetesimal belt at the time at which depletion started;\ni.e., this population may have already been severely depleted by the same event\nwhich is producing the dust, and we are now nearing the end of the hot dust episode.\nSecond, the constraints imposed by a non-detection in the far-IR do not eliminate\nthe whole of the parameter space in which an outer planetesimal belt can lie.\nFig.~\\ref{fig:transsum} includes the constraints on the outer planetesimal belt\nimposed by the non-detection of excess in the far-IR, assuming that the\ndust emits like a black body.\nThe resulting detection limit is then given by:\n\\begin{equation}\n f_{\\rm{det}} = 3.4 \\times 10^9 F_{\\rm{det}}(\\lambda)\n d^2 r_{\\rm{out}}^{-2}\/B_\\nu(\\lambda,T_{\\rm{bb}}), \n \\label{eq:fdet}\n\\end{equation}\nwhere $F_{\\rm{det}}$ is the detection limit in Jy, $d$ is the distance to the star in pc,\nand $B_\\nu(\\lambda,T_{\\rm{bb}})$ is in Jy\/sr.\nFor BD+20307 the non-detection is limited by the sensitivity of\nIRAS, and so deeper limits should be achievable with Spitzer.\nFor the two systems with non-detections, the shaded region already takes\nthe far-IR constraint into account.\n\nThe simplification that the emission comes from black body-type grains\nmeans that equation (\\ref{eq:fdet}) underestimates the upper limit\nfrom the far-IR fluxes.\nThis is because the majority of the luminosity comes from \nsmall grains which emit inefficiently at long wavelengths.\nIndeed, the black body assumption would require the hot dust\nof HD69830 and BD+20307 to have been detected in the far-IR,\nwhereas this is not the case.\nWe modeled the emission from non-porous silicate-organic refractory\ngrains in a collisional cascade size distribution at 1 AU from these\nstars to find that the black body assumption used in equation (\\ref{eq:fdet})\nunderestimates the limit by a factor of $3-5$, meaning that non-detection\nof the hot dust in these systems in the far-IR is to be expected.\nThis also means that slightly more luminous outer planetesimal belts\nthan those indicated by the shaded region on 
Fig.~\\ref{fig:transsum}\nmay still have escaped detection in the far-IR.\n\nUntil now we have not proposed a mechanism which converts the planetesimals\ninto dust.\nWhereas Beichman et al. (2005) invoke sublimation of comets as the origin of\nthe hot dust, and use this to estimate the mass of the parent planetesimal\nbelt, we consider a scenario in which a significant fraction of material\nof all sizes in the parent planetesimal belt is placed on orbits either\nentirely coincident with the hot dust, or with pericenters at that location.\nIn this scenario the dust is replenished through collisions and the material\nmaintains a collisional cascade size distribution.\nSimply moving material from $r_{\\rm{out}}$ to $r$ would result in an\nincrease in fractional luminosity from $f_{\\rm{out}}$ to\n$f_{\\rm{out}}(r_{\\rm{out}}\/r)^2$.\nThis indicates that the parent planetesimal belt responsible for\nthe hot dust could have originally been on the line on Fig.~\\ref{fig:transsum}\ntraced by $f_{\\rm{out}} = f_{\\rm{obs}} (r\/r_{\\rm{out}})^2$.\nSince this is parallel to the mass loss limit line (equation \\ref{eq:fout}),\nand for all but HD69830 the observed hot dust component lies below this line,\nthis indicates that parent planetesimal belts in the shaded region\ncould be responsible for the hot dust observed, as long as a large fraction\nof their mass is scattered into the inner regions.\nHowever, it is to be expected that only a fraction of the outer planetesimal\nbelt ends up in the hot dust region, and so it is more likely that the\nparent planetesimal belt started on a line which falls off less steeply\nthan $\\propto r_{\\rm{out}}^{-2}$, and this is consistent with the ratio of the\nhot and cold components of $\\eta$ Corvi and HD72905, which indicate a\ndependence of $f_{\\rm{out}} = f_{\\rm{obs}} (r\/r_{\\rm{out}})^{0.5 \\pm 0.2}$;\nit is also interesting to note that both have $r_{\\rm{out}}\/r = 60-70$.\nWe defer further consideration of the expected properties of the\nparent planetesimal belt to a more detailed model of the dynamics\nof the types of events which could cause such a perturbation,\nbut simply note here that the existence of an outer planetesimal belt\nis not ruled out by the current observational constraints in any\nof the systems.\n\n\n\\section{Discussion}\n\\label{s:conc}\nA simple model for the steady state evolution of dust luminosity for planetesimal\nbelts evolving due to collisions was described in \\S \\ref{s:model}.\nThis showed how at late times the remaining planetesimal belt mass, and so the\ndust luminosity, is independent of the initial mass of the belt.\nThis has important implications for the interpretation of the properties of detected disks.\nThis paper discussed the implications for the population of sun-like stars with hot dust\nat $<10$ AU;\nthe implications for the statistics will be discussed in a forthcoming paper (Wyatt\net al., in prep.).\n\nIt was shown in \\S \\ref{ss:qe} that for 4\/7 of the systems with hot dust their radius and age \nare incompatible with a planetesimal belt which has been evolving in quasi-steady state over the \nfull age of the star, and in \\S \\ref{ss:pc} it was shown that this is the case\neven when uncertainties in the model are taken into account.\nThis implies either that the cascade was started recently (within the last Myr or so), or that\nthe dust arises from some other transient event.\nRecent ignition of the collisional cascade seems unlikely, since the mass required\nto feed the observed luminosity would result in the 
growth of 2000 km planetesimals\nwhich would stir the belt and ignite the cascade on timescales much shorter than\nthe age of the stars.\nPossible origins for the transient event that have been proposed in the literature are:\na recent collision between massive planetesimals in a planetesimal belt which introduces\ndust with a size distribution $q \\gg 11\/6$ and so can be detected above a collisional\ncascade which is too faint to detect;\none supercomet $\\sim 2000$ km in diameter that was captured into a circular orbit\nin the inner system, replenishing the dust through sublimation (Beichman et al. 2005);\na swarm of comets scattered in from the outer reaches of the system (Beichman et al. 2005).\nIn \\S \\ref{ss:singlecoll} the collisional model was used to show that the transient\ndisks are very unlikely ($<0.001$\\% for the most optimistic estimate for any of the\nstars compared with a detection probability of 2\\% for transient hot dust) to\nhave their origin in a recent collision;\nsuch collisions occur too infrequently.\nIn \\S \\ref{ss:coincident} it was also shown that the parent planetesimals of the\nobserved dust must originate in a planetesimal belt much further from the star\nthan the observed dust, typically at $\\gg 2$ AU.\nThis is because collisional processing means that the mass that can remain so\nclose to the star at late times is insufficient to feed the observed phenomenon.\n\nThe most likely scenario is thus a recent event which provoked one or more planetesimals to\nbe scattered in from further out in the disk (Beichman et al. 2005).\nThe observed dust could have been produced from such a scattered planetesimal population\nthrough their grinding down in mutual collisions (\\S \\ref{ss:constraints}), although \nsublimation close to the pericenters of the planetesimals' orbits is a further possible source \nof dust.\nMore detailed study of the scattering and consequent dust production processes is\nrequired to assess these possibilities.\nHowever, this scenario is supported by the presence of far-IR emission originating from\na colder outer planetesimal belt component in 2\/4 of the transient dust systems.\nThe constraints on the outer planetesimal belt which is feeding the\nphenomenon are discussed in \\S \\ref{ss:constraints}, showing that\nthe outer planetesimal belts already found in $\\eta$ Corvi and HD72905 provide a\nplausible source population for the hot dust found closer in, and that the current\nnon-detection of cold dust around the remaining two systems does not rule out the\npresence of an outer planetesimal belt capable of feeding the observed hot dust luminosity.\n\nOne clue to the origin of the parent planetesimals of the dust may be the composition\nof that dust.\nSilicate features have been detected in the mid-IR spectrum of all of the transient\nhot dust stars (Song et al. 2005; Beichman et al. 2005; Beichman et al. 2006a;\nChen et al. 2006).\nDetailed modeling of the spectrum of HD69830 indicates that the mineralogical\ncomposition of its dust is substantially different from that of comets; rather,\nthere is a close match to the composition of P- or D-type asteroids found mainly in\nthe 3-5 AU region of the solar system (Lisse et al. 
2006).\nWhile the radial location at which planetesimals of this composition form in the\nHD69830 system will depend on the properties of its protostellar nebula, which may be \nsignificantly different to that of the protosolar nebula, as well as on the structure\nand evolution of its planetary system, evidence for water ice in the dust spectrum\nindicates that the parent body formed beyond the ice-line in this system\n(Lisse et al. 2006), i.e., beyond $2-5.5$ AU (Lecar et al. 2006; Alibert et al. 2006).\nThus the compositional data supports the conclusion that the dust is not produced by a \nplanetesimal that formed in situ.\nHowever, it is worth noting that the same compositional data also\nfinds evidence for differentiation in the parent body (inferred from\nabundance differences between the dust and the star) and for heating of\nits rocky material to $>900$ K (inferred from the absence of amorphous\npyroxene), which would also have to be explained in the context of an outer\nplanetesimal belt origin for the dust.\n\nAn analogous transient event is thought to have happened in the solar system, resulting in\nthe period known as the Late Heavy Bombardment (LHB), when the terrestrial planets\nwere subjected to an abnormally high impact rate from asteroids and comets.\nThis is believed to have been triggered by a dynamical instability in the\nplanetary system resulting from Jupiter and Saturn crossing the 1:2 resonance\nduring their slow migration (inwards for Jupiter, outwards for Saturn)\ndue to angular momentum exchange with the primordial Kuiper belt\n(Gomes et al. 2005).\nIn this scenario both the asteroid and Kuiper belts were depleted, with\na large fraction of these objects being scattered into the terrestrial planet\nregion during an event which lasted $10-150$ Myr (Gomes et al. 2005), i.e.,\nexactly the type of event required to explain the observed hot dust in the scenario\nproposed here (\\S \\ref{ss:constraints}).\nDynamical instabilities in extrasolar planetary systems can also arise from\nmutual gravitational perturbations between giant planets which formed close\ntogether (Lin \\& Ida 1997; Thommes, Duncan \\& Levison 1999).\nIn both scenarios slow diffusion of the orbits of the planets\nmeans that the dynamical instability can occur up to several Gyr after\nthe formation of the planetary system.\nThe delay to the onset of the instability is determined by the separation\nof the outer planet from the outer planetesimal belt (Gomes et al. 2005), or\nfrom the separation between the planets (Lin \\& Ida 1997), with larger separations\nresulting in longer timescales.\n\nLittle is known about the planetary systems of four of the hot dust systems.\nHowever, three Neptune-mass (or Jupiter-mass if the system is seen face-on)\nplanets have recently been discovered orbiting the star HD69830 at $<1$ AU\non nearly circular orbits (Lovis et al. 
2006).\nDynamical simulations showed that the detected planetary system is stable on\ntimescales of 1 Gyr.\nThis does not, however, rule out the possibility of a dynamical instability having occurred.\nWhile no mean motion or secular resonances are immediately identifiable within\nthe detected planetary system which could have been crossed recently, provoking\nsuch a catastrophic event, it is possible that the instability arose with another\nplanet further out, which has yet to be detected with longer timescale observations.\nIt is also possible that a fourth planet which existed in the region 0.19-0.63 AU\nbetween the planets HD69830c and HD69830d has recently been scattered out due to a\ndynamical instability (e.g., Thommes et al. 1999).\nThe region 0.3-0.5 AU was identified in Lovis et al. (2006) as being marginally stable,\nand to encompass several mean motion resonances with the outer planet, including\nthe 1:2 resonance at 0.4 AU;\ni.e., a putative fourth planet could have remained in this region for the past 2 Gyr\nuntil the slow migration\/diffusion of the outer planet (HD69830d) caused the 1:2\nresonance to coincide with the orbit of the putative planet, which was then scattered\noutward, thus promoting the depletion of an outer planetesimal belt, much of which\nwas scattered into the inner regions of the system.\nAlibert et al. (2006) considered that the most plausible formation scenario for the\nplanetary system of HD69830 included the inward migration of the outer planets\nfrom beyond the ice-line at a few AU.\nThis would put a substantial distance between the outer planet (HD69830d) and any\nouter planetesimal belt, which favors a delay of 2 Gyr before the onset of the\ninstability.\nSearches for further planetary companions in this system, and for the \nrelic of its outer planetesimal belt, are clearly necessary to constrain the\nevolutionary history of this system.\n\nIn conclusion, $\\sim 2$\\% of sun-like stars exhibit transient hot\ndust in the terrestrial planet region;\nthis dust must originate in a planetesimal belt located further from\nthe star than the dust, typically at $\\gg 2$ AU.\nJust four members of this class are currently known, although it seems\nreasonable to assume that our own solar system would have been placed\nin this class during the LHB.\nThe frequency of this phenomenon indicates that either all stars are\nsubjected to an epoch of similar duration (lasting $\\sim 100$ Myr assuming\na typical age of 5 Gyr) or that a smaller fraction of stars undergo much\nlonger (or multiple) events.\nThe distribution of the ages of the stars in this class indicates that the likelihood of\nthese events occurring falls off roughly in inverse proportion to the\nage of the stars.\nAn origin for these events in a dynamical instability as proposed for the LHB\nin the solar system is supported by the recent discovery of a multiple\nplanet system coincident with the dust in one of the systems currently in\nthis class.\nHowever, since the LHB in the solar system is thought to have lasted just\n$\\sim 100$ Myr, it remains to be seen whether we are to infer that dynamically\nunstable planetary systems form around all stars, or that the LHB event in\nother systems lasted much longer than in our own, or perhaps that there is in\nfact more than one mechanism causing this hot dust signature.\nObservations that further constrain the planet, planetesimal and dust\ncomplements of the transient hot dust systems are needed to ascertain the \nsimilarities and dissimilarities within this 
population.\n\n\n\\acknowledgements\nWe are grateful for support provided by the Royal Society (MCW) and PPARC (RS).\nWe are also grateful to Ben Zuckerman, Joseph Rhee and Inseok Song for pointing\nout that there is a strong (unrelated) infrared source in the vicinity of\nHD128400 which is causing the excess identified by Gaidos (1999).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nSolid-state scintillators are widely used as particle detectors at room temperature, in fields as diverse as national security, medicine, and particle physics. \nThe light they emit when a particle interacts in them is a proxy for the energy deposited by the particle. \nCompared to other types of particle detectors, scintillators offer a wide range of target nuclei, and for some materials, very fast response. \nIn addition, in certain cases, the quantity of emitted light (light yield, or LY) or its timing, can also provide information on the nature of the interacting particle. \nCsI, pure or doped, is a frequently used scintillator, for instance in neutrino physics~\\cite{Akimov:2017}, or in searches for hypothetical dark matter particles~\\cite{Kim:2012rza} that could account for most of the matter in our Universe~\\cite{schnee_introduction_2011}.\nThere has been recent interest in the use of CsI as a cryogenic scintillating calorimeter for the detection of direct dark matter interactions with regular matter~\\cite{Derenzo:2016fse,mikhailik_2014,cerdeno_scintillating_2014,Nadeau:2014kta,Zhang:2016}. \nCsI is an appealing target for such a detector because of its high LY at low temperature and the possibility of probing new WIMP interaction parameter space~\\cite{cerdeno_scintillating_2014}. \nAs an alkali halide material, it could also help to shed some light on the long-standing DAMA\/LIBRA detection claim~\\cite{Bernabei:2008ec,Bernabei:2013ax}, by performing a dark matter experiment with a similar material.\n\nIn comparison to the simple scintillation detectors used by DAMA, which are\nincapable of event-by-event background discrimination, scintillating calorimeters can provide more insight into the nature of the interacting particle. \nA scintillating calorimeter is a particle detector consisting of a scintillating crystal held at cryogenic temperatures ($\\lesssim$~50~mK) as the target medium, read-out by a light detector and thermal sensors, such that, for a given particle interaction, both scintillation and phonon signals can be observed~\\cite{schnee_introduction_2011}. \nThe phonon signal is a good proxy for the energy deposited by the particle.\nFor a given energy deposit, ionizing radiation in the form of alphas ($\\alpha$), gammas ($\\gamma$), and betas ($\\beta$) will produce different amounts of scintillation light compared to the nuclear recoils expected to be caused by dark matter particles and neutrons, and possibly compared to one another.\nThis process, known as quenching, allows for very powerful discrimination against background events. The CRESST experiment, for example, uses an array of scintillating calorimeters to detect nuclear recoils from dark matter particles~\\cite{Angloher:2012tx}.\nThe possible sensitivity of such an alkali halide detector has been explored in simulations~\\cite{nadeau_cryogenic_2015} and by the COSINUS experiment~\\cite{Angloher:2016CsI}, which is planning on using cryogenic undoped NaI to evaluate the DAMA\/LIBRA claim~\\cite{Angloher:2016}. 
\nThis approach is complementary to larger, room-temperature searches, \nincluding \nANAIS~\\cite{amare_preliminary_2014},\nCOSINE~\\cite{Adhikari:2017,Cherwinka:2014xta},\nPICO-LON~\\cite{Fushimi:2016},\nand SABRE~\\cite{Xu:2014}.\n\nThe scintillation of CsI (with and without doping) has been studied over a range of temperatures~\\cite{lamatsch1971kinetics,Kubota:1988,Schotanus:1990tk,Nishimura:1995,Amsler:2002kq,Moszynski:2005hu,mikhailik_2014}. \nThe LY tends to increase as the temperature of the scintillator decreases. \nThe ratio between $\\alpha$ and $\\gamma$ light emission is an interesting value to probe as it can allow for particle discrimination between $\\alpha$ and $\\gamma$ interactions. \nThe difference in the ionization density between these two particles can affect the portion of deposited energy that is converted to light in the scintillator.\n\nIn the following, we report on our time-resolved measurements of the light yield of CsI between 300 and 3.4~K, under both $\\alpha$ and $\\gamma$ excitations for the first time, and over an unexplored time window~\\cite{nadeau_cryogenic_2015}.\nWe use a novel zero-suppression data-acquisition technique enabling a large acquisition window of 1 millisecond to fully resolve the decays and reduce bias from losing light from the end of the window.\nOur simultaneous measurement of $\\alpha$ and $\\gamma$ light from undoped CsI provides a determination of the ratio of light emission between $\\alpha$ and $\\gamma$ excitations, hereafter referred to as the $\\alpha\/\\gamma$ ratio ($R_{\\alpha\/\\gamma}$), over this temperature range.\nOur measurements are motivated by the possible use of CsI in rare-event searches at cryogenic temperatures.\n\n\n\\section{Experimental setup and sample preparation}\nAn optical cryostat produced by ColdEdge Technologies\\footnote{Cold Edge Technologies, Allentown PA, www.coldedgetech.com} is used to cool the samples to any temperature down to 3.4~K, as has been described in previous publications~\\cite{verdier_2.8_2009,Verdier:2011pb,Verdier:2012ie,Veloce:2015slj}. \nThe crystal sample is mounted inside the cryostat and excited by $\\alpha$ particles and 60~keV $\\gamma$ quanta from an internal collimated $^{241}$Am source mounted to the sample holder. An external $^{57}$Co source can also be used to test $\\gamma$ energies of 122~keV.\nThe $^{241}$Am $\\alpha$ source used in this work was adapted from a common smoke detector where a protective film covers the radioactive material. Using a silicon detector, we have measured the degraded mean energy of the $\\alpha$ particles coming from the source to be 4.7~MeV~\\cite{vonSivers:2015}.\n\nIn order to facilitate our study of hygroscopic crystals, we have built a glove box around the cryostat that was constantly flushed with compressed ``extra dry'' air ($<10$~ppm H$_2$O) and contained several bags of desiccant.\nA photograph of the cryostat and glove box setup is shown in Figure~\\ref{fig:setuppic}.\nThe nominally pure CsI crystal samples studied in this work were purchased from Hilger Crystals\\footnote{www.hilgercrystals.co.uk} and have square cuboid geometries with dimensions $5\\times5\\times2$~mm$^3$ and all sides polished. \nThe samples were adhered to a custom-made sample holder using silver paint instead of being mechanically held in place since the alkali halides are slightly brittle. 
\nThrough trial and error on other samples, we determined the optimal amount of adhesive needed to avoid cracking the crystal, as there is differential contraction between the sample and holder during thermal cycling. \nThe sample and holder were then moved into the glove box, installed onto the cold finger of the cryostat, and put under vacuum as quickly as possible. \nNo changes in the optical quality of the sample were observed by eye during this procedure.\n\nAs illustrated in Fig.~\\ref{fig:setup}, the crystal sample was at an angle of {30\\textdegree} with respect to the $^{241}$Am source so that the emitted $\\alpha$ particles are incident on one of the smooth, large faces of the sample. \nAt the same time, this face is well exposed to a 28~mm diameter PMT (Hamamatsu R7056) through a fused quartz window, with a second exposed to the rear face, partially obstructed by the sample holder. 
\nSignals from the PMTs pass through a series of Phillips Scientific Nuclear Instrumentation Modules (NIM) to produce a trigger pulse for the digitizer: first, a fan-out module to copy the signal; then, a discriminator to discard any low amplitude events below a given threshold, producing a flat logic pulse with length equal to the desired coincidence time window; and finally, a logic module that performs an AND operation on the incoming logic pulses on both channels and accepts any events where the pulses overlap, thus creating the coincidence condition. \nThe logic module provides the trigger pulse to an 8-bit National Instruments digitizer (PXI-5154), which then records the incoming pulses from the PMTs at a 1~GHz sampling rate.\nThe full setup is described in Fig.~\\ref{fig:setup}.\n\nThe coincidence window was set to 30~ns and an acquisition window of 1~ms (50~$\\mu$s pre-trigger and 950~$\\mu$s post-trigger) was used for all events at all temperatures. \nA long acquisition window was chosen so that we are sure that the full scintillation pulse could be observed at all temperatures.\nTo deal with the large amounts of data, a custom-made LabVIEW interface records only samples of the incoming signals that exceed a certain threshold to save hard-disk space and analysis time.\nThis introduces a light-yield and time-constant dependent bias on the estimation of the light yield which has been corrected. This correction is described in detail in Section~\\ref{sec:data_anal}.\nThis is particularly important to ensure an unbiased comparison between $\\alpha$ and $\\gamma$ interactions.\n\nThe scintillation pulses from CsI have been characterized to have at least one very prominent fast decay time ($\\sim$10-100~ns) \\cite{Kubota:1988,Schotanus:1990tk} and other slower decay times ($>$1~$\\mu$s)~\\cite{Schotanus:1990tk}. \nAt the start of a high light-yield scintillation event, many photons arrive at the PMTs faster than can be resolved individually; as a result, the PMTs output a high amplitude spike several $\\mu$s wide (``pile-up'' or analog regime). \nBy contrast, the slow component is slow enough such that single photons can be resolved easily by the PMTs (counting regime). \nDue to the vertical resolution of our digitizer, we therefore cannot use the technique we have developed based on a streaming time-to-digital converter to measure these pulses, since it depends on being able to resolve individual photons~\\cite{di_stefano_counting_2013}. \nIn order to measure the full pulse (fast $+$ slow components) with our digitizer, two copies of the signal from the primary PMT (facing the unobstructed side of the sample) are obtained from the Fan-Out module, as shown in Figure~\\ref{fig:setup}. \nThe signals are then sent to separate channels on the digitizer, where one channel is set to the maximum vertical range (5~V) so that the full amplitude of the initial fast spike can be measured without saturation, and the other channel is set to a low vertical range (0.5~V) in order to resolve the slow, individual photoelectrons that follow.\n\nData from both digitizer channels are recorded and then combined off-line to reconstruct the full pulse by replacing all of the saturated samples on the low range channel with their corresponding unsaturated samples on the high range channel after applying the scaling factor and offset. \nThis results in a single channel of data with complete, unsaturated, reconstructed pulses with a larger dynamic range than possible with a single channel. 
\nAn example of a single pulse reconstruction is shown in Figure~\\ref{fig:pulse}. \nThe reconstructed data are saved in an ASCII file as arrays of time and amplitude for each digitizer sample above the LabVIEW threshold in each triggered scintillation event.\n\n\\begin{figure}\n \\centering\n\t\\includegraphics[width=\\columnwidth]{fig3}\n\t\\caption{\\label{fig:pulse} Demonstration of pulse reconstruction. Two channels with different ranges are used to record the same pulse, and are later combined through software (red line). Saturation in the low-range channel (blue dot) is fixed using properly scaled values from the high-range channel (black cross), and pulses below precision in the high-range are still caught in the low-range channel. Pulse begins after a 50~$\\mu$s pre-trigger and continues until 1~ms, though only the first 5~$\\mu$s are shown here.}\n\\end{figure}\n\n\\section{Data analysis methods}\n\\label{sec:data_anal}\nThe standard MPCC technique determines the light yield and the decay time components based on the number of photons in a given event and the known arrival time of each photon~\\cite{Kraus:2005vj}. \nA set of cuts are applied to the data in order to reject spurious events such as events where multiple scintillations occur in the acquisition window, and events where photons from a previous event straggle into the pre-trigger. \nEvents with more than a single scintillation will have more photons in the later part of the time window, and thus can be identified by comparing the mean arrival time of the photons in a given scintillation event to the distribution of the mean arrival time of all the events. \nEvents with photons in the pre-trigger region can be rejected using the distribution of the first photon in each event.\n\nIn the standard MPCC technique, the spectrum from a given source is obtained from the histogram of the number of photons in each event; after cutting the spectrum on a particle type and energy, the histogram of all photons with respect to their arrival time gives an average pulse shape~\\cite{Kraus:2005vj}. \nWe have adapted this technique to account for the early burst of unresolvable photons in the alkali halide pulses by building a histogram of the total charge in each event, instead of the arrival time of each photon, to produce the spectra shown in Figure~\\ref{fig:spectrum}. \nA fit to the peak in the spectrum from a given particle at a given energy provides a measurement of the light yield (LY) of the scintillator as described in Section~\\ref{sec:data_anal_ly}.\nA secondary cut is applied to the data set on the total charge to separate the events that are a result of $\\alpha$ or $\\gamma$ interactions.\nWe use $\\pm$ 1-$\\sigma$ from the mean of each spectrum to determine where these cuts will be placed.\n\nWe can also reconstruct the average pulse for an $\\alpha$ or $\\gamma$ interaction by combining the charge vs. time of all pulses that are within the corresponding spectrum. \nThis allows us to determine the scintillation decay time constants separately for excitations from the different particles. 
\nThe combined pulse is then fit with a series of exponentials to extract the expected exponential decay time constants.\nThis process is described in Section~\\ref{sec:data_anal_ts}.\n\n\\begin{figure}\n \\centering\n\t\\includegraphics[width=\\columnwidth]{fig4}\n\t\\caption{\\label{fig:spectrum}Example of a spectrum of detected photons, here obtained at 77~K from the $\\alpha$s, $\\gamma$s and X-rays of $^{241}$Am and $^{57}$Co.}\n\\end{figure}\n\n\n\\subsection{Light Yield}\n\\label{sec:data_anal_ly}\nThanks to our cryostat and our new data treatment, we are simultaneously sensitive to both $\\alpha$ and $\\gamma$ interactions at all temperatures, even with their large energy difference as illustrated in Figure~\\ref{fig:spectrum}.\nIt allows us to measure charge over a large spread of particles (60~keV--4700~keV) and different temperatures.\nThe charge detected for a given particle varies by two orders of magnitude over the temperature range.\n\nWe note that at temperatures for which the LY enables sufficient resolution, an asymmetry in the $\\alpha$ line becomes evident as a secondary peak that appears 15\\% lower than the main peak.\nWe have verified with a Si detector that it does not come from the source, and we attribute the shape to small scratches on the polished surface, visible under microscope, changing the energy deposition of the $\\alpha$ particles, which interact primarily with the surface of the crystal (interaction depth 24~$\\mu$m~\\cite{ASTAR}). \nThe $\\gamma$ LY is unaffected by the surface condition, as 60~keV $\\gamma$s have a mean free path of 280~$\\mu$m~\\cite{XCOM}\nWe have therefore modeled the $\\alpha$ line by two gaussians with fixed ratios of amplitudes, means and widths based on a free fit at the temperature with the greatest light yield (60~K).\nFor the $\\alpha$-event selection cuts, we consider the higher peak to be representative of the $\\alpha$ LY.\n\nAs mentioned in Section~\\ref{sec:data_acq}, we use an on-line threshold to reduce the amount of data that we are required to read and save to disk by excluding all voltage data below the threshold. \nWe found, however, that doing so affected our measurement efficiency differently for different LYs.\nFigure~\\ref{fig:thresh}a-i shows the single photon response of the PMTs used, determined from a dedicated experiment without the use of the threshold.\nWhen the threshold is applied, we lose a small amount of charge on each side of the distribution.\nIf photons arrive sufficiently spaced out in time as in Figure~\\ref{fig:thresh}a-ii, the same portion of the response will be lost for each photon.\nHowever, if several photons arrive in quick succession the responses can pile-up on each other, allowing the charge to exceed the threshold and be measured (as shown in Figure~\\ref{fig:thresh}a-iii), which would increase the efficiency of that measurement.\nThe likelihood that photons will pile-up on each other depends on both the LY of the interaction as well as the decay constant of the scintillation, both of which change as a function of temperature. \nWe correct for this effect for each particle interaction and each temperature point to be able to globally relate the LY of these interactions.\n \nWe define $Q_{total}$ as the real total amount of charge that the scintillation event has induced in the PMT. 
\nThis is the value that we want to have access to in order to compare the LY under $\\alpha$s and $\\gamma$s at all temperatures.\n$Q_{meas}$ is defined as our actual measured charge.\nThe efficiency of a measurement $\\epsilon = Q_{meas}\/Q_{total}$ will change at each temperature and for each type of particle interaction. \nSince $Q_{total}$ is inaccessible, we define $\\rho=Q_{above}\/Q_{meas}$, the ratio of the integral above the threshold $Q_{above}$ to the total measured integral $Q_{meas}$, as a proxy for the efficiency $\\epsilon$.\nIn Figure~\\ref{fig:thresh}b, we plot the variable $\\rho$ against $Q_{meas}$ which is representative of the emitted light, for an example temperature. \n$\\alpha$ interactions can be seen in the top right of the plot, with high emitted light and high $\\rho$. \n$\\gamma$ interactions are seen in the middle of the plot, with a lower $\\rho$ and lower emitted light. \nFrom this plot, we can see that the efficiencies for $\\alpha$ and $\\gamma$ interactions are systematically different because of the very different LY causing differing amounts of pile-up between photons.\n\nWe carried out simulations to relate $\\rho$ to the efficiency $\\epsilon$. \nIn the simulations, we used different numbers of photons distributed in time by an exponential distribution with varying decay time constants to produce different values of $\\rho$. \nDetermining the relationship between these two variables in simulated data allows us to make an estimate on the efficiency in our recorded data. \nThe relationship between $\\rho$ and $\\epsilon$ is shown in Figure~\\ref{fig:thresh}c, allowing us to make corrections to our LY and ratio $R_{\\alpha\/\\gamma}$. \n\n\\begin{figure}\n \\centering\n\t\\includegraphics[width=\\columnwidth]{fig5}\n\t\\caption{\\label{fig:thresh}Effect of threshold on measured LY. Fig.~\\ref{fig:thresh}a-i shows that a portion of a single photoelectron charge is lost because the amplitudes are below the detection threshold. \n $Q_{meas}$, shaded in blue, is the amount of charge that we record for a given event.\n $Q_{above}$, vertical cross-hatched, is the area under the curve but above the threshold. \n This allows us to create the variable $\\rho=Q_{above}\/Q_{meas}$, the ratio of charge above threshold, to estimate the fraction of lost charge, as described in Section~\\ref{sec:data_anal_ly}.\n If multiple photons are resolved in time as in Fig.~\\ref{fig:thresh}a-ii, the fraction of lost charge is the same as for the single photoelectron. \n If there is pileup of the photoelectrons as in Fig.~\\ref{fig:thresh}a-iii, the fraction of lost charge is smaller. \n At a given temperature, the data presented in Fig.~\\ref{fig:thresh}b of $\\rho$ vs. $Q_{meas}$ show that the effect is detrimental to the pulses from $\\gamma$s compared to $\\alpha$s, since the former emit fewer photons. \n Fig.~\\ref{fig:thresh}c shows simulations (blue spots) allowing to reconstruct the actual LY from the measured one using the median (black line); error bars are taken from spread of points.}\n\\end{figure}\n\n\\subsection{Time Structure}\n\\label{sec:data_anal_ts}\n\nOur data acquisition system is able to sample the charge induced in the PMTs at a sampling rate of 1~ns.\nUsing this time-resolved data, we are able to create an ``average pulse'' by combining the charge vs. 
time data for a set of individual traces that correspond to the $\alpha$ or $\gamma$ population.\nWe chose to use the coincidence trigger time as the beginning of each pulse; because we are only recording the signal from one PMT, and the trigger is set by a 30~ns coincidence window, the actual first pulse recorded may be anywhere from 0--30~ns before the trigger time.\nThus, we do not expect to be sensitive to time structure below this order of magnitude.\n\nFor each population, the average pulse histogram is fit to five exponential decays plus a constant background to model the scintillation decay and background noise. \nWe use the same functional fit for all temperatures to fit all pulses in the same unbiased way.\nWe expect at least three exponential decay components~\cite{mikhailik_2014} at low temperature, and additional exponentials allow for a good fit of extra-slow components, though we do not attempt to interpret them.\n\nAs mentioned in Section~\ref{sec:data_acq}, the application of a threshold changes the efficiency of detection for individual photons versus pile-up photons. \nThis has an effect on the average pulse shape separate from the measurement of LY.\nThe effect can be seen in Figure~\ref{fig:avepulse}a as a kink in the pulse, at a time where the likelihood of a photon arriving at the PMT decreases to a point where pile-up is no longer likely (see Figure~\ref{fig:thresh}). \nWe have confirmed through simulation, shown in Figure~\ref{fig:avepulse}b, that the change in efficiency due to photons piling up only impacts the pulse in this transition region. \nBefore and after, the shape of the pulse follows the underlying distribution.\nTo account for this in the exponential fit, the time section around the kink is excluded, and the fit keeps the same time constants before and after it. \nLastly, since the pulse shape spans multiple orders of magnitude, non-uniform binning has been utilized to reduce bin-to-bin statistical fluctuations in the late portion of the pulse.\n\n\begin{figure}\n \centering\n\t\includegraphics[width=\columnwidth]{fig6}\n\t\caption{\label{fig:avepulse}a) Example of average $\alpha$ pulses for different temperatures (note irregular binning). At least two exponential decays are evident for each temperature, spanning nearly 6 orders of magnitude in intensity. Kink in pulse, caused by variable charge-reconstruction efficiency, is visible at 60~K between 52--56~$\mu$s.\n %\n b) Simulations of the kink (Sec.~\ref{sec:data_anal_ts}). Simulated data for a single time constant with no threshold applied (blue circles); the threshold applied to the total pulse, with photons piling up on each other, as done in the real data (green triangles); and the threshold applied to every arriving photon as if they arrived separately (red squares). Kink occurs in transition between regions where photoelectrons pile up and where they do not, but does not affect time constant on either side of transition. }\n\end{figure}
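\nFor illustration, the sketch below performs a simplified version of this fit on a synthetic pulse: it uses two exponential components instead of our five, masks out a kink region as described above, and every number in it is a placeholder rather than a measured value.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef model(t, a1, tau1, a2, tau2, bkg):\n    # Sum of exponential decays plus a constant background.\n    return a1 * np.exp(-t \/ tau1) + a2 * np.exp(-t \/ tau2) + bkg\n\nrng = np.random.default_rng(1)\nt = np.logspace(0.0, 3.0, 400)     # non-uniform (logarithmic) binning\ntruth = (100.0, 5.0, 10.0, 80.0, 0.05)\ny = model(t, *truth) * rng.normal(1.0, 0.02, t.size)\n\n# Exclude the kink region from the fit, as done for the real pulses;\n# the same time constants describe the pulse before and after it.\nkeep = (t < 40.0) | (t > 60.0)\npopt, _ = curve_fit(model, t[keep], y[keep], p0=(50, 2, 5, 50, 0.1))\nprint('fitted decay time constants:', popt[1], popt[3])\n\end{verbatim}\nThe same idea, with five components and the pulse histograms of Figure~\ref{fig:avepulse}, yields the time constants discussed in Section~\ref{sec:disc_TS}.\n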
\section{Results and discussions}\n\label{sec:disc}\n\n\subsection{Light yield and $\alpha\/\gamma$ ratio}\n\label{sec:disc:LYQ}\nThe temperature response curves for the LY of CsI under $\alpha$ and $\gamma$ excitation were measured for several stabilized temperatures while cooling from 300~K to 3.4~K.\nOur method allows us to measure LY without the use of a shaping time, making the measurement independent of any changes in decay time constant vs. temperature.\nAs a proxy for the number of emitted photons, we take the number of photons detected in the front PMT.\nThis number of photons is determined from the charge detected by the PMT thanks to a separate measurement with a low-light source providing the average charge induced in the PMT by a single photon.\n\nIdeally, these results would take into account possible temperature-dependent shifts of the emission spectra relative to the fixed quantum efficiency of the PMTs.\nComparing emission spectra at low temperatures with the quantum efficiency of our PMT, we estimate that, relative to room temperature, our detection efficiency changes by at most 10\% over the entire temperature range~\cite{Nishimura:1995,mikhailik_2014}.\n\nThe number of detected photons for the various particles and energies is shown as a function of temperature in Figure~\ref{fig:LYQF}.\nExcept for $\gamma$s at temperatures above 200~K, at least 10~photons are detected per event. \nThe temperature dependence of the number of detected photons over energy, a proxy for LY, is also shown in Figure~\ref{fig:LYQF}b. \nThis LY has been corrected for efficiency as described in Section~\ref{sec:data_anal_ly}.\nWe see the $\alpha$ LY increase by a factor of 100 from 300~K to 30~K, and then decrease to a factor of 30 above the 300~K value at 3.4~K, our lowest temperature. \nThe $\gamma$ LY for both 60~keV and 122~keV does not increase as much as for the $\alpha$s, rising below 100~K to a factor of 20--30 above its 300~K value.\nThe increase in LY is consistent between the two different $\gamma$ energies, within uncertainty.\nOverall, the LY is relatively constant at temperatures below 7~K. We assume it remains so at even lower temperatures.\n\nThough comparison is difficult because of uncertainties in our light collection efficiency and the quantum efficiency of the PMTs, previous results of the absolute LY of CsI by Amsler et al.~\cite{Amsler:2002kq} give $50\pm5$~ph\/keV at 80~K under $\gamma$ excitation.\nAnother measurement under $\gamma$ excitation by Moszynski et al.~\cite{Moszynski:2003} gives $107\pm10$~ph\/keV for undoped CsI at 77~K.\nAt 77~K, we detect $83\pm5$ photons from a 60~keV $\gamma$-excitation. \nAssuming an efficiency of $\sim 10 \%$ due to numerical aperture and reflection of windows~\cite{verdier_2.8_2009}, and a PMT quantum efficiency of $\sim 15\%$ at the mean emission of CsI, this gives a rough estimate of $90$~ph\/keV for the LY, which is compatible with the result from Amsler et al.\nMore recently, Sch\"affner et al.~\cite{Schaffner:2012ei} quote an absolute detected energy of 7.1\% from undoped CsI at 10~mK.\nUsing the mean energy of an emitted photon from CsI (3.9~eV~\cite{nadeau_cryogenic_2015}), and taking the light collection efficiency of the CRESST type light detectors to be 31\%~\cite{Kiefer:2012va}, this would imply an absolute LY of $60$~ph\/keV.\nAt our lowest temperature, using the same method as above, we estimate the LY of CsI to be roughly $65$~ph\/keV at 3.4~K, which appears to be compatible with the result from Sch\"affner et al.\nThe agreement of estimated LY values with previous measurements indicates that the corrections described in Section~\ref{sec:data_anal_ly} are performing correctly.\n\nTo compare with recent results from Mikhailik et al.~\cite{mikhailik_2014} and Gridin et al.~\cite{Gridin:2015}, we have scaled their reported LY values to ours at 77~K. \nAlthough Gridin et al.
do not report time-resolved LY, we believe that our results are comparable because of our long acquisition window capturing full scintillation events.\nWe see the same general trend of LY increase as previous measurements, but different relative LY at high temperatures; this could be due to a difference in incidental Tl impurities in the crystals.\nThe light emission from Tl can be seen in the room temperature spectra of Mikhailik et al., but disappears at lower temperature; this could increase the LY preferentially at high temperatures in a crystal with a larger amount of Tl.\n\nAs our setup allows the measurement of the lines from $\alpha$ particles and $\gamma$ quanta emitted by $^{241}$Am, we can determine the $\alpha$\/$\gamma$ ratio, $R_{\alpha\/\gamma}$, defined as the ratio of the LY for $\alpha$s to the LY for $\gamma$s at a given energy $E$:\n\begin{equation}\label{eq:qf}R_{\alpha\/\gamma} (E) = \frac{{\rm LY}_{\alpha}(E)}{{\rm LY}_{\gamma}(E)} = \frac{\mu_\alpha (E) \/ E}{\mu_\gamma (E) \/E} = \frac{\mu_\alpha (E) }{\mu_\gamma (E)} ,\n\end{equation}\nwhere $\mu_i (E)$ is the measured response to particles of type $i$ depositing energy $E$.\n$R_{\alpha\/\gamma}$ quantifies the relative difference in light produced by the two types of interactions for an equivalent energy deposit, and thus is independent of our overall light collection efficiency, barring effects related to the interaction depth of the particle.\n\nThe temperature dependence of $R_{\alpha\/\gamma}$ is shown in Figure~\ref{fig:LYQF}c, as determined using the 60~keV $\gamma$ and 4.7~MeV degraded $\alpha$ from $^{241}$Am.\nRemarkably, in our data $R_{\alpha\/\gamma}$ becomes greater than 1 for temperatures between 10~K and 100~K.\nA value of $R_{\alpha\/\gamma}$ greater than 1 is unexpected in general because of the higher ionization density of $\alpha$ particles leading to a higher probability of non-radiative recombination of electrons and holes before formation of self-trapped excitons~\cite{Sysoeva:1998}, though this behavior has previously been observed in alkali halide crystals grown by the Kyropoulos method~\cite{Birks:1964-11}, and in pure or doped ZnSe~\cite{Arnaboldi:2011ce,Sysoeva:1998,Klamra:2002bg,Nagorny:2017}.\nWatts et al.~\cite{Watts:1962} observed an anomalous value of the $\alpha\/\beta$ ratio (which should be equivalent to $R_{\alpha\/\gamma}$) of $3.5\pm0.3$ at 77~K and 4~K for a single CsI crystal, which was not reproduced in other crystals, though they presented no explanation.\n\nBirks suggests that a large temperature dependence in $R_{\alpha\/\gamma}$ is associated with a high defect density, as the lower ionization density of $\gamma$-excitation allows electrons and holes to drift and become trapped in defects instead of forming self-trapped excitons~\cite{Birks:1964-11}.\nThere is evidence for this in ZnSe, where thermal treatment has been seen to affect $R_{\alpha\/\gamma}$~\cite{Nagorny:2017}.\nAnother hypothesis is that there are significant excitation-dependent changes in the emission spectra, causing our detection efficiency to change between particles at various temperatures.\nLastly, there are three effects that could be considered, though they are unlikely because they would have to be strongly temperature dependent to explain the data: the first being that the light collection efficiency differs significantly between the surface and the bulk of the sample, the second being that the scintillation properties of the surface differ from
those of the bulk, and the third being the non-proportionality of the LY.\n\nSince in practice our measurement involves $\alpha$ and $\gamma$ interactions of different energies (4.7~MeV and 60~keV respectively), the non-proportionality ($NP$) of the scintillation response could also have an effect on our measured light ratio:\n\begin{equation}\n\label{eq:qf_e}\nR_{\alpha\/\gamma} (4.7 \mbox{ MeV}) =\frac{{\rm LY}_{\alpha}(4.7 \mbox{ MeV})}{{\rm LY}_{\gamma}(4.7 \mbox{ MeV})}= \underbrace{ \frac{{\rm LY}_{\gamma}(60 \mbox{ keV})}{{\rm LY}_{\gamma}(4.7 \mbox{ MeV})} }_{NP} \underbrace{ \frac{{\rm LY}_{\alpha}(4.7 \mbox{ MeV})}{{\rm LY}_{\gamma}(60 \mbox{ keV})}}_{\text{measurement}} .\n\end{equation}\nThere is some evidence that the non-proportionality of the LY in pure CsI depends on temperature.\nLu et al.~\cite{Lu:2015} report that at 295~K, the LY of a 60~keV $\gamma$-interaction is greater than that of a 1~MeV $\gamma$ by approximately 15\%, whereas at 100~K there is almost no difference. \nIt is unlikely that the observed changes in $R_{\alpha\/\gamma}$ can be explained by this effect alone, however, as $R_{\alpha\/\gamma}$ varies by a factor of 4 over the measured temperature range. \nAdditionally, to be consistent with a value of $R_{\alpha\/\gamma} < 1$ at all temperatures, the non-proportionality would have to decrease by an additional 20\% between 100~K and 30~K. \nWe are not aware of any experimental data at other temperatures for pure CsI, or for $\gamma$-interactions higher than $\sim 2$~MeV.\n\n\begin{figure}\n \begin{center}\n \includegraphics[width=0.6\columnwidth]{fig7}\n \end{center}\n \caption{\label{fig:LYQF}a) Detected photons as a function of temperature, for 4.7~MeV $\alpha$s (circles) and $\gamma$s (60~keV: squares; 122~keV: triangles). A correction has been applied (original: light; corrected: solid) to compensate for the loss in charge detected from our threshold (see Sec.~\ref{sec:data_anal} for details).\n b) Detected photons (corrected) per unit deposited energy as a function of temperature for $\alpha$s and $\gamma$s, showing that the results for 60~keV and 122~keV $\gamma$s are consistent within error bars. Results of $\alpha$-excitation from Mikhailik et al.~\cite{mikhailik_2014} are shown by the dashed red line and of $\gamma$-excitation from Gridin et al.~\cite{Gridin:2015} by the solid green line, scaled to be equal to our result at 77~K.\n c) $\alpha$\/$\gamma$ ratio (corrected) as a function of temperature, showing a factor 4 variation and reaching values greater than 1.
In a) and b), conversion of detected photons to emitted ones requires a multiplication by roughly 70, though this does not affect the ratio in c).} \n\end{figure}\n\n\subsection{Time Structure}\n\label{sec:disc_TS}\nThe results of the time constant fits mentioned in Section~\ref{sec:data_anal_ts} are summarized in Figure~\ref{fig:tau}.\nTime constants from the fits that contribute at least 10\% of the total LY at that temperature are plotted in Figure~\ref{fig:tau}a for $\alpha$ excitation, and Figure~\ref{fig:tau}b for $\gamma$ excitation.\nThe area of the marker represents the contribution of that time constant to the total integral of the pulse, the largest points showing the most prominent time constant at a given temperature.\n\nFirst, for $\alpha$ excitation, above 100~K there appear to be two prominent time constants, both shorter than 1~$\mu$s.\nAs the temperature decreases, the main time constant increases from 0.4~$\mu$s at 300~K to 1~$\mu$s at 100~K, where it becomes the only important time constant.\nWe also see a fast component that begins at 0.01~$\mu$s at 300~K, increasing in length to 0.1~$\mu$s at 150~K.\nThis behaviour is consistent with the measurements of Amsler et al.~\cite{Amsler:2002kq} from 100--300~K.\n\nFrom 100~K to 10~K, there remains only one time constant, at around 1~$\mu$s.\nThis component is consistent with previous measurements of undoped CsI scintillation at low temperature~\cite{mikhailik_2014,Watts:1962}.\nBelow 10~K, the structure becomes more complicated, with several components contributing equal amounts of light to the pulse.\nWe see a short component of the order of 100~ns from both $\alpha$s and $\gamma$s, consistent with the 290~nm emission,\nand a long component that is consistent with the 338~nm emission, which are described by previous studies as a system of three excitonic absorption bands~\cite{lamatsch1971kinetics,Gridin:2015}.\nThere are several very long time constants present at high temperatures that seem to hold constant values of 10~$\mu$s and 100~$\mu$s, which could be attributed to coincidences of $\alpha$ and $\gamma$ events within our acquisition window.\nAlternatively, these could be attributed to the presence of shallow traps or defects.\n\nFor the $\gamma$ excitation, the main time constants show a very similar pattern to the $\alpha$ excitation data.\nThis indicates that the same light production mechanism is being excited by the two different radiation sources, which has also been seen previously with $\alpha$s and X-rays~\cite{mikhailik_2014}.\nLong time constants visible under $\alpha$ excitation are absent in the $\gamma$ excitation data, which could indicate a larger concentration of defects near the surface of the crystal, as expected~\cite{Nishimura:1995}.\n\nWe have compared our results in Figure~\ref{fig:tau} to the fits of a model of various STE decays by Nishimura et al.~\cite{Nishimura:1995}. For both $\alpha$ and $\gamma$ excitation, the data correspond well with the model at low temperatures. At high temperatures, two exponential components are seen in $\alpha$ excitation, corresponding to the singlet and triplet STE states~\cite{Nishimura:1995}, seen as the solid red lines in Figure~\ref{fig:tau}. At low temperature, the evolution of the two decay rates of the off-center STE, seen as the dotted black lines, follows the same trend in our data.
The on-center STE decay rate can also be seen as the dashed black line in Figure~\ref{fig:tau}, which we also see because of our lack of spectral discrimination between the two states.\n\n\begin{figure}\n \centering\t\includegraphics[width=0.9\columnwidth]{fig8}\n\t\caption{\label{fig:tau} \n\tEvolution of time constants with temperature. At a given temperature, only time constants composing at least 10\% of the LY are shown. Marker size at a given temperature is proportional to the contribution of that time constant to the total LY at that temperature. Models of STE decay from Nishimura et al.~\cite{Nishimura:1995} are shown for the off-center STE state (dotted black line), the on-center STE state (dashed black line), and the singlet and triplet STE states at high temperatures (solid red line).\n a) Time constants from $\alpha$ excitation. \n b) Time constants from $\gamma$ excitation.\n }\n\end{figure}\n\n\section{Conclusion}\nWe present the first measurement of the scintillation properties of undoped CsI over a wide range of temperatures under both $\alpha$ and $\gamma$ excitation using a time-resolved zero-suppression measurement technique.\nFor the first time to our knowledge in CsI, a long measurement time window was used to capture the full scintillation event at each trigger, removing any bias from long or short scintillation times.\n\nThe LY under $\alpha$ excitation increases by nearly two orders of magnitude from 300--30~K, to a maximum value of roughly 120~ph\/keV.\nAt the lowest measured temperature of 3.4~K, the LY remains a factor of 30 above the level at room temperature.\nUnder $\gamma$ excitation, a factor 20 increase in LY is observed below 100~K when compared with the room temperature yield, with a maximum at roughly 90~ph\/keV.\nThe ratio between $\alpha$ and $\gamma$ excitations varies significantly over the temperature range, and was found to be greater than one for temperatures between 10--100~K, which could be an indicator of a large number of defects in the crystal.\nIt may also be an artifact of particle-specific emission changes as the temperature decreases.\nMeasurements of the time constants for both $\alpha$s and $\gamma$s agree with previous measurements, and follow previous models of STE decay~\cite{Nishimura:1995}.\n\nThese measurements further establish undoped CsI as an effective cryogenic scintillator with a high light yield at low temperatures. \nWith such a light yield, above that of most other cryogenic scintillators~\cite{Mikhailik:2010}, low thresholds can be reached in experimental applications that require it, such as searches for dark matter. In such searches, the high $\alpha$\/$\gamma$ ratio at low temperatures indicates that the $\alpha$ background should remain separated from the nuclear recoil signal region due to the low nuclear recoil quenching factor of 0.1 for Cs and I~\cite{Angloher:2016CsI}.
More generally, light yield and $\alpha\/\gamma$ ratio are high at liquid nitrogen temperatures, conditions that lend themselves well to practical applications.\n\n\section{Acknowledgements}\nThis work has been funded in Canada by NSERC (Grant SAPIN 386432), CFI-LOF and ORF-SIF (project 24536).\n\n\section{References}\n\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nThe computation of a measure of similarity (or dissimilarity) between pairs of objects is a crucial subproblem \nin several applications in Computer Vision \cite{Rubner98,Rubner2000,Pele2009}, Computational Statistics \cite{Bickel2001}, Probability \cite{Bassetti2006,Bassetti2006b},\nand Machine Learning \cite{Solomon2014,Cuturi2014,Frogner2015,Arjovsky2017}.\nIn mathematical terms, in order to compute the similarity between a pair of objects, we want to compute a {\it distance}.\nIf the distance is equal to zero, the two objects are considered equal; the more the two objects differ, the greater their distance.\nFor instance, the Euclidean norm is the most used distance function to compare a pair of points in $\mathbb{R}^d$.\nNote that the Euclidean distance requires only $O(d)$ operations to be computed. \nWhen computing the distance between complex discrete objects, \nsuch as for instance a pair of discrete measures, a pair of images,\na pair of $d$-dimensional histograms, or a pair of clouds of points, the Kantorovich-Wasserstein distance \cite{Villani2008,Vershik2013} has proved to be \na relevant distance function \cite{Rubner98}, which has both nice mathematical properties and useful practical implications.\nUnfortunately, computing the Kantorovich-Wasserstein distance requires the solution of an optimization problem.\nEven if the optimization problem is polynomially solvable, the size of practical instances to be solved is very large,\nand hence the computation of Kantorovich-Wasserstein distances entails a significant computational burden.\n\nThe optimization problem that yields the Kantorovich-Wasserstein distance can be solved with different methods.\nNowadays, the most popular methods are based on (i) Sinkhorn's algorithm \cite{Cuturi2013,Solomon2015,Altschuler2017}, \nwhich solves (heuristically) a regularized version of the basic optimal transport problem,\nand (ii) Linear Programming-based algorithms \cite{Flood1953,Goldberg1989,Orlin}, which exactly solve the basic optimal transport problem\nby formulating and solving an equivalent uncapacitated minimum cost flow problem.
For a nice overview of both computational approaches,\nwe refer the reader to Chapters 2 and 3 in \cite{Peyre2018}, and the references contained therein.\n\nIn this paper, we propose a Linear Programming-based method to speed up the computation of Kantorovich-Wasserstein distances of order 2,\nwhich exploits the structure of the ground distance to formulate an uncapacitated minimum cost flow problem.\nThe flow problem is then solved with a state-of-the-art implementation of the well-known Network Simplex algorithm \cite{Kovacs2015}.\n\nOur approach is along the line of research initiated in \cite{LingOkada2007}, where the authors proposed a very efficient method\nto compute Kantorovich-Wasserstein distances of order 1 (i.e., the so-called {\it Earth Mover Distance}), \nwhenever the ground distance between a pair of points is the $\ell_1$ norm.\nIn \cite{LingOkada2007}, the structure of the $\ell_1$ ground distance and of regular $d$-dimensional histograms is exploited to \ndefine a very small flow network. More recently, this approach has been successfully generalized in \cite{Bassetti2018} to the case of \n$\ell_\infty$ and $\ell_2$ norms, providing both exact and approximation algorithms, which are able to compute distances\nbetween pairs of $512 \times 512$ gray scale images. The idea of speeding up the computation of Kantorovich-Wasserstein distances by defining a minimum\ncost flow on smaller structured flow networks is also used in \cite{Pele2009}, where a truncated distance is used as ground distance in place of a $\ell_p$ norm.\n\nThe outline of this paper is as follows. Section 2 reviews the basic notion of discrete optimal transport and fixes the notation.\nSection 3 contains our main contribution, that is, Theorem 1 and Corollary 2, which permit us to speed up the computation\nof Kantorovich-Wasserstein distances of order 2 under quite general assumptions.\nSection 4 presents numerical results of our approaches, compared with Sinkhorn's algorithm as implemented in \cite{Cuturi2013} \nand a standard Linear Programming formulation on a complete bipartite graph \cite{Rubner98}. \nFinally, Section 5 concludes the paper.\n\n\n\section{Discrete Optimal Transport: an Overview}\nLet $X$ and $Y$ be two discrete spaces. \nGiven two probability vectors $\mu$ and $\nu$ defined on $X$ and $Y$, respectively, and a cost $c : X \times Y \to \mathbb{R}_+$, \nthe {\it Kantorovich-Rubinshtein functional} between $\mu$ and $\nu$ is defined as\n\begin{equation}\n\label{eq:kantorovich}\n\t\t \mathcal{W}_c(\mu,\nu)= \inf_{ \pi \in \Pi(\mu,\nu)} \sum_{ (x,y) \in X\times Y} c(x,y) \pi(x,y),\n\end{equation}\nwhere $\Pi(\mu,\nu)$ is the set of all the probability measures on $X \times Y$ with marginals $\mu$ and $\nu$, i.e., \nthe probability measures $\pi$ such that $\sum_{y \in Y} \pi(x,y)=\mu(x)$ and $\sum_{x \in X} \pi(x,y)=\nu(y)$,\nfor every $(x,y)$ in $X \times Y$.\nSuch probability measures are sometimes called transport plans or couplings for $\mu$ and $\nu$. \nAn important special case is when $X=Y$ and the cost function $c$ is a distance on $X$. In this case \n$\mathcal{W}_c$ is a distance on the simplex of probability vectors on $X$, also known as the {\it Kantorovich-Wasserstein distance} of order $1$.
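\n\nTo fix ideas, the discrete problem \eqref{eq:kantorovich} can be solved directly as a linear program. The following minimal Python sketch does so with SciPy on a small random instance; the sizes and the cost matrix are arbitrary placeholders, and this dense formulation is meant only as an illustration, not as a method for the large instances discussed later.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\n# Toy instance: two probability vectors on n and m points with a\n# random nonnegative cost matrix c (placeholders, not real data).\nrng = np.random.default_rng(0)\nn, m = 5, 7\nmu = rng.random(n); mu \/= mu.sum()\nnu = rng.random(m); nu \/= nu.sum()\nc = rng.random((n, m))\n\n# Marginal constraints: sum_y pi(x,y) = mu(x), sum_x pi(x,y) = nu(y),\n# with pi flattened row-major, i.e. variable index x*m + y.\nA_eq = np.zeros((n + m, n * m))\nfor x in range(n):\n    A_eq[x, x * m:(x + 1) * m] = 1.0      # row marginals\nfor y in range(m):\n    A_eq[n + y, y::m] = 1.0               # column marginals\nb_eq = np.concatenate([mu, nu])\n\nres = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None),\n              method='highs')\nprint('W_c(mu, nu) =', res.fun)\n\end{verbatim}\nAlready for two $32\times 32$ images this dense linear program has $2^{20}$ variables, which is precisely the scalability issue addressed in the next sections.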
\n\nWe remark that {\bf the Kantorovich-Wasserstein distance of order $p$} can be defined, more in general, \nfor arbitrary probability measures on a metric space $(X,\delta)$ by \n\begin{equation}\label{wpgeneral}\n W_p(\mu,\nu):=\left(\inf_{ \pi \in \Pi(\mu,\nu)} \int_{ X\times X} \delta^p(x,y) \pi(dx dy)\right)^{\min(1\/p,1)}\n\end{equation}\nwhere now $\Pi(\mu,\nu)$ is the set of all probability measures on the Borel sets of $X \times X$ that have marginals $\mu$ and $\nu$, see, e.g., \cite{AGS}.\nThe infimum in \eqref{wpgeneral} is attained, and any probability $\pi$ which realizes the minimum is called an {\it optimal transport plan}.\n\nThe Kantorovich-Rubinshtein transport problem in the discrete setting can be seen as a special case of \nthe following Linear Programming problem, where we now assume that $\mu$ and $\nu$ are \ngeneric vectors of dimension $n$, with positive components:\n\begin{align}\n\label{p1:funobj} (P) \quad \min \quad & \sum_{x \in X}\sum_{y \in Y} c(x,y) \pi(x,y) \\\n\mbox{s.t.} \quad \n\label{p1:supply} & \sum_{y \in Y} \pi(x,y) \leq \mu(x) & \forall x \in X \\\n\label{p1:demand} & \sum_{x \in X} \pi(x,y) \geq \nu(y) & \forall y \in Y \\\n\label{p1:posvar} & \pi(x,y) \geq 0.\n\end{align}\n\noindent If $\sum_{x } \mu(x) = \sum_{y} \nu(y)$ we have the so-called {\it balanced} transportation problem, \notherwise the transportation problem is said to be {\it unbalanced} \cite{Liero2018,Chizat2016}.\nFor balanced optimal transport problems, constraints \eqref{p1:supply} and \eqref{p1:demand} must be satisfied with equality, and the problem \nreduces to the Kantorovich transport problem (up to normalization of the vectors $\mu$ and $\nu$). \n\n\nProblem (P) is related to the so-called {\it Earth Mover's distance}.\nIn this case, $X,Y \subset \mathbb{R}^d$, $x$ and $y$ are the centers of two data clusters, and\n$\mu(x)$ and $\nu(y)$ give the number of points in the respective cluster. \nFinally, $c(x,y)$ is some measure of dissimilarity between the two clusters $x$ and $y$.\nOnce the optimal transport $\pi^*$ is determined, the Earth Mover's distance\nbetween $\mu$ and $\nu$ is defined as (e.g., see \cite{Rubner98})\n\[\nEMD(\mu,\nu)= \frac{\sum_{x \in X}\sum_{y \in Y} c(x,y) \pi^*(x,y)}{\sum_{x \in X}\sum_{y \in Y} \pi^*(x,y)}.\n\]\n\nProblem (P) can be formulated as an uncapacitated minimum cost flow problem on a bipartite graph defined as follows \cite{Ahuja}.\nThe bipartite graph has two partitions of nodes: the first partition has a node for each point $x$ of $X$, and the second partition has a node for each point $y$ of $Y$.\nEach node $x$ of the first partition has a supply of mass equal to $\mu(x)$,\nwhile each node $y$ of the second partition has a demand of $\nu(y)$ units of mass.\nThe bipartite graph has an (uncapacitated) arc for each element in the Cartesian product $X \times Y$, having cost equal to $c(x,y)$.\nThe minimum cost flow problem defined on this graph yields the optimal transport plan $\pi^*(x,y)$, which indeed is an optimal solution of problem \eqref{p1:funobj}--\eqref{p1:posvar}.\nFor instance, in the case of a regular 2-dimensional histogram of size $N \times N$, that is, having $n=N^2$ bins, \nwe get a bipartite graph with $2N^2$ nodes and $N^4$ arcs (or $2n$ nodes and $n^2$ arcs).
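\nTo make the growth of the two formulations concrete, a few lines of Python suffice to tabulate their node and arc counts; the $(d+1)$-partite counts, $(d+1)n$ nodes and $d\,n^{(d+1)\/d}$ arcs, anticipate the construction of Section 3.\n\begin{verbatim}\n# Nodes and arcs of the two formulations for d-dimensional N^d grids;\n# n = N**d is the number of bins (see Sections 2 and 3).\nfor N, d in [(32, 2), (64, 2), (128, 2), (512, 2), (16, 3)]:\n    n = N ** d\n    print(f'N={N} d={d}: bipartite {2*n} nodes, {n**2:.2e} arcs; '\n          f'{d+1}-partite {(d+1)*n} nodes, {d*n**((d+1)\/d):.2e} arcs')\n\end{verbatim}\nFor $128\times 128$ images the bipartite graph already has more than $2.6\times 10^8$ arcs, which explains the memory figures reported in Section 4.\n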
Figure \\ref{fig1}--\\subref{fig1a} shows an example for a $3 \\times 3$ histogram,\nand Figure \\ref{fig1}--\\subref{fig1b} gives the corresponding complete bipartite graph.\n\n\\begin{figure*}[t!]\n \\centering\n \\subfigure[]{\\label{fig1a}\\includegraphics[height=6cm]{.\/fig1a.PNG}}\\qquad \\qquad\n \\subfigure[]{\\label{fig1b}\\includegraphics[height=6cm]{.\/fig1b.PNG}}\\qquad \\qquad\n \\subfigure[]{\\label{fig1c}\\includegraphics[height=6cm]{.\/fig1c.PNG}}\n \\caption{\\subref{fig1a} Two given 2-dimensional histograms of size $N\\times N$, with $N=3$; \\subref{fig1b} Complete bipartite graph with $N^4$ arcs; \\subref{fig1c}: 3-partite graph with $(d+1)N^3$ arcs.\\label{fig1}}\n\\end{figure*}\n\nIn this paper, we focus on the case $p=2$ in equation \\eqref{wpgeneral} and the ground distance function $\\delta$ is the Euclidean norm $\\ell_2$, that is\nthe Kantorovich-Wasserstein distance of order $2$, which is denoted by $W_2$. We provide, in the next section,\nan equivalent formulation on a smaller $(d+1)$-partite graph.\n\n\n\t\n\\section{Formulation on $(d+1)$-partite Graphs}\t\nFor the sake of clarity, but without loss of generality, we present first our construction considering 2-dimensional histograms and the $\\ell_2$ Euclidean ground distance.\nThen, we discuss how our construction can be generalized to any pair of $d$-dimensional histograms.\n\nLet us consider the following flow problem: let $\\mu$ and $\\nu$ be two probability measures over a $N \\times N$ regular grid denoted by $G$.\nIn the following paragraphs, we use the notation sketched in Figure \\ref{fig2}. In addition, we define the set $U:=\\{1,\\dots,N\\}$.\n\n\\begin{figure}[t!]\n\\floatbox[{\\capbeside\\thisfloatsetup{capbesideposition={right,center},capbesidewidth=0.55\\textwidth}}]{figure}[\\FBwidth]\n{\\caption{Basic notation used in Section 3: in order to send a unit of flow from point $(a,j)$ to point $(i,b)$, we either send\n a unit of flow directly along arc $((a,j),(i,b))$ of cost $c((a,j),(i,b))=(a-i)^2 + (j-b)^2$, or, we first send a unit of flow\n from $(a,j)$ to $(i,j)$, and then from $(i,j)$ to $(i,b)$, having total cost $c((a,j),(i,j)) + c((i,j),(i,b)) = (a-i)^2 + (j-j)^2 + (i-i)^2 + (j-b)^2 = (a-i)^2 + (j-b)^2 = c((a,j),(i,b))$. 
Indeed, the costs of the two different paths are exactly the same.}\n \label{fig2}}\n{\includegraphics[width=0.35\textwidth]{.\/images\/fig2.PNG}}\n\end{figure}\n\nSince we are considering the squared $\ell_2$ norm as ground cost, we minimize the functional\t\n\begin{equation}\n\label{formulationflow}\n{R}:(F_1,F_2)\rightarrow \sum_{i,j=1}^N \left[\sum_{a=1}^N (a-i)^2f^{(1)}_{a,i,j}+ \sum_{b=1}^N (j-b)^2f^{(2)}_{i,j,b}\right]\n\end{equation}\t\t\namong all $F_i = \{f^{(i)}_{a,b,c}\}$, with $a,b,c \in \{1,...,N\}$, of real numbers (i.e., flow variables) satisfying the following constraints:\n\begin{eqnarray}\n\label{condizionecompmu}\t\sum_{i=1}^N f^{(1)}_{a,i,j}&=&\mu_{a,j}, \qquad\qquad \forall (a,j) \in U \times U \\\n\label{condizionecompnu}\t\sum_{j=1}^N f^{(2)}_{i,j,b}&=&\nu_{i,b}, \qquad\qquad \forall (i,b) \in U \times U \\\n\label{incollamento} \t\sum_{a}f^{(1)}_{a,i,j}&=&\sum_{b}f^{(2)}_{i,j,b}, \qquad\forall (i,j) \in U \times U.\n\end{eqnarray}\n\noindent Constraints \eqref{condizionecompmu} impose that the mass $\mu_{a,j}$ at the point $(a,j)$ is moved to the points $(k,j)_{k=1,...,N}$.\nConstraints \eqref{condizionecompnu} force the point $(i,b)$ to receive from the points $(i,l)_{l=1,...,N}$ a total mass of $\nu_{i,b}$.\nConstraints \eqref{incollamento} require that all the mass that goes from the points $(a,j)_{a=1,...,N}$ to the point $(i,j)$ is moved to the points $(i,b)_{b=1,...,N}$.\nWe call a pair $(F_1,F_2)$ satisfying the constraints \eqref{condizionecompmu}--\eqref{incollamento} a {\it feasible flow} between $\mu$ and $\nu$. \nWe denote by $\mathcal{F}(\mu,\nu)$ the set of all feasible flows between $\mu$ and $\nu$.\n\t\t\t\nIndeed, we can formulate the minimization problem defined by \eqref{formulationflow}--\eqref{incollamento} \nas an uncapacitated minimum cost flow problem on a tripartite graph $T=(V,A)$. \nThe set of nodes of $T$ is $V:=V^{(1)}\cup V^{(2)} \cup V^{(3)}$, where $V^{(1)}, V^{(2)}$ and $V^{(3)}$ are the nodes corresponding to three $N\times N$ regular grids. \nWe denote by $(i,j)^{(l)}$ the node of coordinates $(i,j)$ in the grid $V^{(l)}$. \nWe define the two disjoint sets of arcs between the successive pairs of node partitions as\n\begin{eqnarray}\n\tA^{(1)}&:=& \{ ((a,j)^{(1)},(i,j)^{(2)}) \mid i,a,j \in U \}, \\\n\tA^{(2)}&:=& \{ ((i,j)^{(2)},(i,b)^{(3)}) \mid i,b,j \in U \},\n\end{eqnarray}\n\noindent and, hence, the arcs of $T$ are $A:=A^{(1)} \cup A^{(2)}$.\nNote that in this case the graph $T$ has $3N^2$ nodes and $2N^3$ arcs.\nWhenever $(F_1,F_2)$ is a feasible flow between $\mu$ and $\nu$, we can think of the values $f^{(1)}_{a,i,j}$ as the quantity of \nmass that travels from $(a,j)$ to $(i,j)$ or, equivalently, that moves along the arc $((a,j),(i,j))$ of the tripartite graph, \nwhile the values $f^{(2)}_{i,j,b}$ are the mass moving along the arc $((i,j),(i,b))$\n(e.g., see Figures \ref{fig1}--\subref{fig1c} and \ref{fig2}).\n\t\nNow we can clarify the roles of the sets $V^{(1)}$, $V^{(2)}$ and $V^{(3)}$: $V^{(1)}$ is the node set on which the initial distribution $\mu$ is drawn, \nwhile the final configuration of the mass, $\nu$, is drawn on $V^{(3)}$. The node set $V^{(2)}$ is an auxiliary grid that hosts an intermediate \nconfiguration between $\mu$ and $\nu$.
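\n\nA direct way to validate this construction numerically is to instantiate the tripartite network and hand it to an off-the-shelf Network Simplex solver. The following sketch does so in Python with the networkx package, using integer masses on a small grid; it is a toy stand-in for the optimized C++ implementation used in Section 4.\n\begin{verbatim}\nimport networkx as nx\nimport numpy as np\n\n# Small example: integer masses mu, nu on an N x N grid, equal totals.\nN = 4\nrng = np.random.default_rng(0)\nmu = rng.integers(1, 10, (N, N))\nnu = rng.permutation(mu.ravel()).reshape(N, N)  # same total mass as mu\n\nG = nx.DiGraph()\nfor a in range(N):\n    for j in range(N):\n        G.add_node(('s', a, j), demand=-int(mu[a, j]))  # V^(1), supplies\n        G.add_node(('m', a, j), demand=0)               # V^(2), auxiliary\n        G.add_node(('t', a, j), demand=int(nu[a, j]))   # V^(3), demands\nfor j in range(N):                  # horizontal moves, arc set A^(1)\n    for a in range(N):\n        for i in range(N):\n            G.add_edge(('s', a, j), ('m', i, j), weight=(a - i) ** 2)\nfor i in range(N):                  # vertical moves, arc set A^(2)\n    for j in range(N):\n        for b in range(N):\n            G.add_edge(('m', i, j), ('t', i, b), weight=(j - b) ** 2)\n\ncost, flow = nx.network_simplex(G)\nprint('squared W2 between mu and nu:', cost)\n\end{verbatim}\nOn such toy instances, the returned optimum can be checked to coincide with that of the bipartite formulation of Section 2; this equivalence is precisely the content of the results that follow.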
\n\nWe are now ready to state our main contribution.\t\n\begin{theorem}\n\t\label{teoremaequivalenza}\n\tFor each measure $\pi$ on $G\times G$ that transports $\mu$ into $\nu$, we can find a feasible flow $(F_1,F_2)$ such that\n\t\begin{equation}\n\t\label{result1}\n\t{R}(F_1,F_2)=\sum_{((a,j),(i,b))} ((a-i)^2+(b-j)^2)\pi_{((a,j),(i,b))}.\n\t\end{equation}\n\end{theorem}\t\n\begin{proof}\textit{(Sketch).} \n\tWe will only show how to build a feasible flow starting from a transport plan; \n\tthe converse construction uses a more technical lemma (the so-called {\it gluing lemma} \cite{AGS,Villani2008}) and can be found in the Additional Material.\t\t\t\t\n\tLet $\pi$ be a transport plan; if we write explicitly the squared ground distance $\ell_2^2((a,j),(i,b))$, we find that\n\t\begin{eqnarray*}\n\t\t\sum_{((a,j),(i,b))} \ell_2^2((a,j),(i,b))\pi_{((a,j),(i,b))}\hspace{-0.3cm}&=&\hspace{-0.3cm}\sum_{((a,j),(i,b))} ((a-i)^2+(j-b)^2)\pi_{((a,j),(i,b))}\\\n\t\t\hspace{-0.3cm}&=&\hspace{-0.3cm} \sum_{j,i} \left[\sum_{a,b}(a-i)^2\pi_{((a,j),(i,b))} + \sum_{a,b} (j-b)^2 \pi_{((a,j),(i,b))} \right].\n\t\end{eqnarray*}\n\tIf we set $f^{(1)}_{a,i,j}=\sum_b\pi_{((a,j),(i,b))}$ and $f^{(2)}_{i,j,b}=\sum_a \pi_{((a,j),(i,b))}$, we find\n\t\begin{equation*}\n\t\t\sum_{((a,j),(i,b))} \ell_2^2((a,j),(i,b))\pi_{((a,j),(i,b))}=\sum_{i,j=1}^N \left[\sum_{a=1}^N (a-i)^2f^{(1)}_{a,i,j}+ \sum_{b=1}^N (j-b)^2 f^{(2)}_{i,j,b}\right].\n\t\end{equation*}\n\tIn order to conclude, we have to prove that these $f^{(1)}_{a,i,j}$ and $f^{(2)}_{i,j,b}$ satisfy the constraints \eqref{condizionecompmu}--\eqref{incollamento}.\t\t\n\t\n\tBy definition we have \n\t\begin{equation*}\n\t\t\sum_i f^{(1)}_{a,i,j}=\sum_i \sum_b\pi_{((a,j),(i,b))}=\mu_{a,j},\n\t\end{equation*}\n\tthus proving (\ref{condizionecompmu}); similarly, it is possible to check constraint \eqref{condizionecompnu}.\n\tThe constraint \eqref{incollamento} also follows easily, since\n\t\begin{equation*}\n\t\t\sum_{a}f^{(1)}_{a,i,j}= \sum_a \sum_b\pi_{((a,j),(i,b))} = \sum_{b}f^{(2)}_{i,j,b}.\n\t\end{equation*}\n\end{proof}\n\nAs a straightforward, yet fundamental, consequence we have the following result.\n\t\n\begin{corollary} \nIf we set $c((a,j),(i,b))=(a-i)^2+(j-b)^2$ then, for any discrete measures $\mu$ and $\nu$, we have that \n\begin{equation}\nW^2_2(\mu,\nu)=\min_{\mathcal{F}(\mu,\nu)}R(F_1,F_2).\n\end{equation}\n\end{corollary}\n\nIndeed, we can compute the Kantorovich-Wasserstein distance of order 2 between a pair of discrete measures $\mu, \nu$ by solving an \nuncapacitated minimum cost flow problem on the given tripartite graph $T:=(V^{(1)} \cup V^{(2)} \cup V^{(3)}, A^{(1)} \cup A^{(2)})$.\n\nWe remark that our approach is very general and it can be directly extended to deal with the following generalizations.\n\n\paragraph{More general cost functions.} The structure of the Euclidean distance $\ell_2$ that we have exploited is present in any\ncost function $c: G \times G \rightarrow [0,\infty]$ that is separable, i.e., has the form\n\t\begin{equation*}\n\t\tc(x,y)= c^{(1)}(x_1,y_1) + c^{(2)}(x_2,y_2),\n\t\end{equation*}\n\twhere both $c^{(1)}$ and $c^{(2)}$ are positive real-valued functions defined over $G$.
\n\tWe remark that the whole class of costs $c_p(x,y)=(x_1-y_1)^p+(x_2-y_2)^p$ is of that kind, \n\tso we can compute any of the Kantorovich-Wasserstein distances related to each $c_p$.\n\t\n\paragraph{Higher dimensional grids.} Our approach can handle discrete measures in spaces of any dimension $d$, that is, for instance, any $d$-dimensional histogram. \nIn dimension $d=2$, we get a tripartite graph because we decomposed the transport along the two main directions.\nIf we have a problem in dimension $d$, we need $(d+1)$ grids connected by arcs oriented along the $d$ fundamental directions, yielding a $(d+1)$-partite graph.\nAs the dimension $d$ grows, our approach gets faster and more memory-efficient than the standard formulation given on a bipartite graph.\n\nIn the Additional Material, we present a generalization of Theorem 1 to any dimension $d$ and to {\it separable} cost functions $c(x,y)$.\n\n\n\section{Computational Results}\nIn this section, we report the results obtained on two different sets of instances. \nThe goal of our experiments is to show how our approach scales with the size of the histogram $N$ and with the dimension of the histogram $d$.\nAs ground cost $c(x,y)$, with $x,y \in \mathbb{R}^d$, we use the squared $\ell_2$ norm.\nAs problem instances, we use the gray scale images (i.e., 2-dimensional histograms) proposed by the DOTMark benchmark \cite{Dotmark}, \nand a set of $d$-dimensional histograms obtained from biomedical data measured by flow cytometers \cite{Bernas2008}.\n\n\n\paragraph{Implementation details.}\nWe run our experiments using the Network Simplex as implemented in the Lemon C++ graph library\footnote{\url{http:\/\/lemon.cs.elte.hu} (last visited on October, 26th, 2018)},\nsince it provides the fastest implementation of the Network Simplex algorithm for solving uncapacitated minimum cost flow problems \cite{Kovacs2015}.\nWe did try other state-of-the-art implementations of combinatorial algorithms for solving min cost flow problems, but the Network Simplex of\nthe Lemon graph library was the fastest by a large margin.\nThe tests are executed on a gaming laptop with Windows 10 (64 bit), equipped with an Intel i7-6700HQ CPU and 16 GB of RAM. \nThe code was compiled with MS Visual Studio 2017, using the ANSI standard C++17. The code execution is single-threaded.\nThe Matlab implementation of Sinkhorn's algorithm \cite{Cuturi2013} runs in parallel on the CPU cores, but we do not use any GPU in our tests.\nThe C++ and Matlab code we used for this paper is freely available at \url{http:\/\/stegua.github.io\/dpartion-nips2018}.\n\n\paragraph{Results for the DOTmark benchmark.} \nThe DOTmark benchmark contains 10 classes of gray scale images related to randomly generated images, classical images, \nand real data from microscopy images of mitochondria \cite{Dotmark}.
In each class there are 10 different images.\nEvery image is given in the data set at the following pixel resolutions: $32\times32$, $64\times64$, $128\times128$, $256\times256$, and $512\times512$.\nThe images in Figure \ref{fig:dot} are respectively the {\it ClassicImages}, {\it Microscopy}, and {\it Shapes} images (one class for each row), shown at the highest resolution.\n\n\begin{figure}[!t]\n\centering\n{\renewcommand{\arraystretch}{0.7}\n\setlength{\tabcolsep}{0.1em}\n\begin{tabular}{cccccccccc}\n \includegraphics[width=0.095\linewidth]{images\/classic512_1001} & \includegraphics[width=0.095\linewidth]{images\/classic512_1002} &\n \includegraphics[width=0.095\linewidth]{images\/classic512_1003} & \includegraphics[width=0.095\linewidth]{images\/classic512_1004} &\n \includegraphics[width=0.095\linewidth]{images\/classic512_1005} &\n \includegraphics[width=0.095\linewidth]{images\/classic512_1006} & \includegraphics[width=0.095\linewidth]{images\/classic512_1007} &\n \includegraphics[width=0.095\linewidth]{images\/classic512_1008} & \includegraphics[width=0.095\linewidth]{images\/classic512_1009} &\n \includegraphics[width=0.095\linewidth]{images\/classic512_1010} \\ \n \includegraphics[width=0.095\linewidth]{images\/micro512_1001} & \includegraphics[width=0.095\linewidth]{images\/micro512_1002} &\n \includegraphics[width=0.095\linewidth]{images\/micro512_1003} & \includegraphics[width=0.095\linewidth]{images\/micro512_1004} &\n \includegraphics[width=0.095\linewidth]{images\/micro512_1005} &\n \includegraphics[width=0.095\linewidth]{images\/micro512_1006} & \includegraphics[width=0.095\linewidth]{images\/micro512_1007} &\n \includegraphics[width=0.095\linewidth]{images\/micro512_1008} & \includegraphics[width=0.095\linewidth]{images\/micro512_1009} &\n \includegraphics[width=0.095\linewidth]{images\/micro512_1010} \\\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1001} & \includegraphics[width=0.095\linewidth]{images\/shapes512_1002} &\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1003} & \includegraphics[width=0.095\linewidth]{images\/shapes512_1004} &\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1005} &\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1006} & \includegraphics[width=0.095\linewidth]{images\/shapes512_1007} &\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1008} & \includegraphics[width=0.095\linewidth]{images\/shapes512_1009} &\n \includegraphics[width=0.095\linewidth]{images\/shapes512_1010} \\\n \end{tabular}}\n\caption{DOTmark benchmark: Classic, Microscopy, and Shapes images. \label{fig:dot}}\n\end{figure}\n\nIn our tests, we first compared five approaches to compute the Kantorovich-Wasserstein distances on images of size $32\times32$:\n\begin{enumerate}\n\item {\bf EMD}: The implementation of the Transportation Simplex provided by \cite{Rubner98}, known in the literature as the EMD code, which is an exact general method to solve the optimal transport problem.
We used the implementation in the programming language C, as provided by the authors,\nand compiled with all the compiler optimization flags active.\n\n\item {\bf Sinkhorn}: The Matlab implementation of Sinkhorn's algorithm\footnote{\url{http:\/\/marcocuturi.net\/SI.html} (last visited on October, 26th, 2018)} \n\cite{Cuturi2013}, which is an approximate\napproach whose performance in terms of speed and numerical accuracy depends on a parameter $\lambda$: for smaller values of $\lambda$, the algorithm\nis faster, but the solution value has a large gap with respect to the optimal value of the transportation problem; \nfor larger values of $\lambda$, the algorithm is more accurate (i.e., has a smaller gap), but it becomes slower.\nUnfortunately, for very large values of $\lambda$ the method becomes numerically unstable.\nThe best value of $\lambda$ is very problem dependent. In our tests, we used $\lambda=1$ and $\lambda = 1.5$. The second value, $\lambda=1.5$, \nis the largest value we found for which the algorithm computes the distances for all the instances considered without facing numerical issues.\n\n\item {\bf Improved Sinkhorn}: We implemented in Matlab an improved version of Sinkhorn's algorithm, \nspecialized to compute distances over regular 2-dimensional grids \cite{Solomon2015,Solomon2018}.\nThe main idea is to improve the matrix-vector operations that are the true computational bottleneck of Sinkhorn's algorithm, by exploiting the structure of the cost matrix; a sketch of this idea is given right after this list.\nIndeed, there is a parallelism between our approach and the method presented in \cite{Solomon2015}, since\nboth exploit the geometric cost structure. In \cite{Solomon2015}, the authors propose a general method that exploits a heat kernel to speed up\nthe matrix-vector products.\nWhen the discrete measures are defined over a regular 2-dimensional grid, the cost matrix used by Sinkhorn's algorithm can be obtained as a Kronecker\nproduct of two smaller matrices. Hence, instead of performing a matrix-vector product with a matrix of dimension $n \times n$, where $n$ is the number of bins, we perform\ntwo matrix-matrix products with matrices of dimension $\sqrt{n}\times \sqrt{n}$, yielding a significant runtime improvement.\nIn addition, since the smaller matrices are Toeplitz matrices, they can be embedded into circulant matrices, and, as a consequence, it is possible\nto employ a Fast Fourier Transform approach to further speed up the computation. Unfortunately, the Fast Fourier Transform makes the approach\neven more numerically unstable, and we did not use it in our final implementation.\n\n\item {\bf Bipartite}: The bipartite formulation presented in Figure \ref{fig1}--\subref{fig1b}, which is the same as in \cite{Rubner98}, but solved\nwith the Network Simplex implemented in the Lemon Graph library \cite{Kovacs2015}.\n\n\item {\bf $3$-partite}: The $3$-partite formulation proposed in this paper, which for 2-dimensional histograms is represented in Figure~\ref{fig1}--\subref{fig1c}.\nAgain, we use the Network Simplex of the Lemon Graph Library to solve the corresponding uncapacitated minimum cost flow problem.\n\end{enumerate}
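\nThe following minimal Python sketch illustrates the Kronecker-product trick mentioned in the Improved Sinkhorn item above; the grid side, the value of $\lambda$ and the random marginals are placeholders, and our actual implementation is in Matlab. The kernel is never formed explicitly: each kernel-vector product is carried out as two small matrix-matrix products, which we verify against the explicit kernel on a tiny instance.\n\begin{verbatim}\nimport numpy as np\n\n# Separable squared-l2 cost on an s x s grid (n = s*s bins); the Gibbs\n# kernel K = exp(-lam*C) factors as a Kronecker product K1 (x) K2.\ns, lam = 8, 1.0\nrng = np.random.default_rng(0)\nmu = rng.random((s, s)); mu \/= mu.sum()\nnu = rng.random((s, s)); nu \/= nu.sum()\n\nx = np.arange(s)\nK1 = np.exp(-lam * (x[:, None] - x[None, :]) ** 2)  # s x s factor\nK2 = K1                        # same factor along the second axis\n\ndef apply_K(V):\n    # (K1 kron K2) @ vec(V), computed as K1 @ V @ K2 (both symmetric)\n    return K1 @ V @ K2\n\nU, V = np.ones((s, s)), np.ones((s, s))\nfor _ in range(500):           # Sinkhorn updates on s x s arrays only\n    U = mu \/ apply_K(V)\n    V = nu \/ apply_K(U)\n\nK = np.kron(K1, K2)            # explicit n x n kernel, for checking\nassert np.allclose(apply_K(V).ravel(), K @ V.ravel())\n\end{verbatim}\nOn large grids the explicit kernel check is of course skipped, and only the structured products are used.\n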
Tables \ref{tab:1}(a) and \ref{tab:1}(b) report the averages of our computational results over different classes of images of the DOTMark benchmark. \nEach class of gray scale image contains 10 instances, and we compute the distance between every possible pair of images within the same class:\nthe first image plays the role of the source distribution $\mu$, and the second image gives the target distribution $\nu$. \nConsidering all pairs within a class, this gives 45 instances for each class.\nWe report the means and the standard deviations (between brackets) of the runtime, measured in seconds.\nTable \ref{tab:1}(a) shows in the second column the runtime for EMD \cite{Rubner98}. The third and fourth columns give the runtime and the optimality gap\nfor Sinkhorn's algorithm with $\lambda=1$; the fifth and sixth columns do the same for $\lambda=1.5$.\nThe percentage gap is computed as $\mbox{Gap}=\frac{UB-opt}{opt}\cdot 100$, where $UB$ is the upper bound computed by Sinkhorn's algorithm, \nand $opt$ is the optimal value computed by EMD. The last two columns report the runtime for the bipartite and $3$-partite approaches presented in this paper.\n\nTable \ref{tab:1}(b) compares our $3$-partite formulation with the Improved Sinkhorn's algorithm \cite{Solomon2015,Solomon2018}, reporting the same statistics as the previous table.\nIn this case, we run the Improved Sinkhorn using three values of the parameter $\lambda$, namely 1.0, 1.25, and 1.5. While the Improved Sinkhorn is \nindeed much faster than the general algorithm as presented in \cite{Cuturi2013}, it suffers from the same numerical stability issues, and\nit can yield a very poor percentage gap to the optimal solution, as happens for the GRFrough and the WhiteNoise classes, where the optimality gaps\nare on average 31.0\% and 39.2\%, respectively.\n\nAs shown in Tables \ref{tab:1}(a) and \ref{tab:1}(b), the $3$-partite approach is clearly faster than any of the alternatives considered here, despite being an exact method.\nIn addition, we remark that, even on the bipartite formulation, the Network Simplex implementation of the Lemon Graph library is orders of magnitude faster than EMD,\nand hence it should be the best choice for this particular type of instance. We remark that it might be unfair to compare an algorithm implemented in C++ with\nan algorithm implemented in Matlab, but still, the true comparison is on the solution quality more than on the runtime. \nMoreover, when implemented on modern GPUs that can fully exploit parallel matrix-vector operations, Sinkhorn's algorithm can run much faster,\nbut this cannot improve the optimality gap.\n\n\begin{table}[t!]\n\caption{Comparison of different approaches on $32 \times 32$ images.
The runtime (in seconds) is given as ``Mean (StdDev)''.\nThe gap to the optimal value {\it opt} is computed as $\frac{UB-opt}{opt}\cdot 100$, where $UB$ is the upper bound computed by Sinkhorn's algorithm.\nEach row reports the averages over 45 instances.\n\label{tab:1}}\n\centering\n{\renewcommand{\arraystretch}{1.2}\n\begin{tabular}{lccrcrcc}\n & \multicolumn{1}{c}{EMD \cite{Rubner98}} & \multicolumn{4}{c}{Sinkhorn \cite{Cuturi2013}} & \multicolumn{1}{c}{Bipartite} & \multicolumn{1}{c}{$3$-partite} \\\n\t & & \multicolumn{2}{c}{$\lambda=1$} & \multicolumn{2}{c}{$\lambda=1.5$} & & \\\nImage Class & Runtime & Runtime & Gap & Runtime & Gap & Runtime & Runtime \\\n\hline\nClassic & 24.0 (3.3) & 6.0 (0.5) & 17.3\% & 8.9 (0.7) & 9.1\% & 0.54 (0.05) & 0.07 (0.01)\\\nMicroscopy & 35.0 (3.3) & 3.5 (1.0) & 2.4\% & 5.3 (1.4) & 1.2\% & 0.55 (0.03) & 0.08 (0.01)\\\nShapes & 25.2 (5.3) & 1.6 (1.1) & 5.6\% & 2.5 (1.6) & 3.0\% & 0.50 (0.07) & 0.05 (0.01)\\\n \hline\noalign{\smallskip}\n \multicolumn{8}{c}{(a)} \\\n \multicolumn{8}{c}{} \\\n \end{tabular}}\n\centering\n{\renewcommand{\arraystretch}{1.2}\n\setlength{\tabcolsep}{5pt}\n\begin{tabular}{lcrcrcrc}\n & \multicolumn{6}{c}{Improved Sinkhorn \cite{Solomon2015,Solomon2018}} & \multicolumn{1}{c}{$3$-partite} \\\n\t & \multicolumn{2}{c}{$\lambda=1$} & \multicolumn{2}{c}{$\lambda=1.25$} & \multicolumn{2}{c}{$\lambda=1.5$} & \\\nImage Class & Runtime & Gap & Runtime & Gap & Runtime & Gap & Runtime \\\n\hline\n CauchyDensity\t&\t0.22\t(0.15)\t&\t2.8\%\t&\t0.33\t(0.23)\t&\t2.0\%\t&\t0.41\t(0.28)\t&\t1.5\%\t&\t0.07\t(0.01)\t\\\n Classic\t\t&\t0.20\t(0.01)\t&\t17.3\%\t&\t0.31\t(0.02)\t&\t12.4\%\t&\t0.39\t(0.03)\t&\t9.1\%\t&\t0.07\t(0.01)\t\\\n GRFmoderate\t&\t0.19\t(0.01)\t&\t12.6\%\t&\t0.29\t(0.02)\t&\t9.0\%\t&\t0.37\t(0.03)\t&\t6.6\%\t&\t0.07\t(0.01)\t\\\n GRFrough\t\t&\t0.19\t(0.01)\t&\t58.7\%\t&\t0.29\t(0.01)\t&\t42.1\%\t&\t0.38\t(0.02)\t&\t31.0\%\t&\t0.05\t(0.01)\t\\\n GRFsmooth\t\t&\t0.20\t(0.02)\t&\t4.3\%\t&\t0.30\t(0.04)\t&\t3.1\%\t&\t0.38\t(0.04)\t&\t2.2\%\t&\t0.08\t(0.01)\t\\\n LogGRF\t\t\t&\t0.22\t(0.05)\t&\t1.3\%\t&\t0.32\t(0.08)\t&\t0.9\%\t&\t0.40\t(0.13)\t&\t0.7\%\t&\t0.08\t(0.01)\t\\\n LogitGRF\t\t&\t0.22\t(0.02)\t&\t4.7\%\t&\t0.33\t(0.03)\t&\t3.3\%\t&\t0.42\t(0.04)\t&\t2.5\%\t&\t0.07\t(0.02)\t\\\n Microscopy \t&\t0.18\t(0.03)\t&\t2.4\%\t&\t0.27\t(0.04)\t&\t1.7\%\t&\t0.34\t(0.05)\t&\t1.2\%\t&\t0.08\t(0.02)\t\\\n Shapes\t\t\t&\t0.11\t(0.04)\t&\t5.6\%\t&\t0.16\t(0.06)\t&\t4.0\%\t&\t0.20\t(0.07)\t&\t3.0\%\t&\t0.05\t(0.01)\t\\\n WhiteNoise\t\t&\t0.18\t(0.01)\t&\t76.3\%\t&\t0.28\t(0.01)\t&\t53.8\%\t&\t0.37\t(0.02)\t&\t39.2\%\t&\t0.04\t(0.00)\t\\\n\hline\noalign{\smallskip}\n \multicolumn{8}{c}{(b)} \\\n \end{tabular}}\n\end{table}\n\nIn order to evaluate how our approach scales with the size of the images, we run additional tests using images of size $64\times64$ and $128\times128$.\nTable \ref{tab:2} reports the results for the bipartite and $3$-partite approaches for increasing sizes of the 2-dimensional histograms.\nThe table reports, for each of the two approaches, the number of vertices $|V|$ and of arcs $|A|$, and the means and standard deviations of the runtime.\nAs before, each row gives the averages over 45 instances.
Table \\ref{tab:2} shows that the $3$-partite approach is clearly better (i) in terms of memory,\nsince the 3-partite graph has a fraction of the number of arcs, and (ii) of runtime, since it is at least an order of magnitude faster in computation time.\nIndeed, the 3-partite formulation is better essentially because it exploits the structure of the ground distance $c(x,y)$ used, that is, the squared $\\ell_2$ norm.\n\\begin{table}[t!]\n\\caption{Comparison of the bipartite and the $3$-partite approaches on 2-dimensional histograms.\\label{tab:2}}\n\\centering\n{\\renewcommand{\\arraystretch}{1.2}\n\\setlength{\\tabcolsep}{0.5em}\n\\begin{tabular}{clrrrrrr}\n & & \\multicolumn{3}{c}{Bipartite} & \\multicolumn{3}{c}{$3$-partite} \\\\\nSize & Image Class & $|V|$ & $|A|$ & Runtime & $|V|$ & $|A|$ & Runtime \\\\\n\\hline\n$64 \\times 64$ &Classic & 8\\,193& 16\\,777\\,216 & 16.3 (3.6) & 12\\,288 & 524\\,288 & 2.2 (0.2) \\\\\n&Microscopy & & & 11.7 (1.4) & & & 1.0 (0.2) \\\\\n&Shape & & & 13.0 (3.9) & & & 1.1 (0.3) \\\\\n\\hline\\noalign{\\smallskip}\n$128\\times128$ & Classic & 32\\,768 & 268\\,435\\,456& 1\\,368 (545) & 49\\,152 & 4\\,194\\,304& 36.2 (5.4) \\\\\n&Microscopy & & & 959 (181) & & & 23.0 (4.8) \\\\\n&Shape & & & 983 (230) & & & 17.8 (5.2) \\\\\n \\hline\\noalign{\\smallskip}\n \\end{tabular}}\n\\end{table}\n\n\n\\paragraph{Flow Cytometry biomedical data.}\nFlow cytometry is a laser-based biophysical technology used to study human health disorders. Flow cytometry experiments produce huge set of data, which are very hard to analyze with standard statistics methods and algorithms \\cite{Bernas2008}. Currently, such data is used to study the correlations of only two factors (e.g., biomarkers) at the time, by visualizing 2-dimensional histograms and by measuring the (dis-)similarity between pairs of histograms \\cite{Orlova2016}. However, during a flow cytometry experiment up to hundreds of factors (biomarkers) are measured and stored in digital format. \nHence, we can use such data to build $d$-dimensional histograms that consider up to $d$ biomarkers at the time, and then comparing the similarity among different\nindividuals by measuring the distance between the corresponding histograms.\nIn this work, we used the flow cytometry data related to {\\it Acute Myeloid Leukemia (AML)}, available at \\url{http:\/\/flowrepository.org\/id\/FR-FCM-ZZYA}, \nwhich contains cytometry data for 359 patients, classified as ``normal'' or affected by AML. \nThis dataset has been used by the bioinformatics community to run clustering algorithms,\nwhich should predict whether a new patient is affected by AML \\cite{Aghaeepour2013}.\n\nTable \\ref{tab:3} reports the results of computing the distance between pairs of $d$-dimensional histograms, with $d$ ranging in the set $\\{2,3,4\\}$,\nobtained using the AML biomedical data. Again, the first $d$-dimensional histogram plays the role of the source distribution $\\mu$, while the second\nhistogram gives the target distribution $\\nu$.\nFor simplicity, we considered regular histograms of size $n=N^d$ (i.e., $n$ is the total number of bins), using $N=16$ and $N=32$. \nTable \\ref{tab:3} compares the results obtained by the bipartite\nand $(d+1)$-partite approach, in terms of graph size and runtime. Again, the $(d+1)$-partite approach, by exploiting the structure of the ground distance,\noutperforms the standard formulation of the optimal transport problem. 
We remark that for $N=32$ and $d=3$, we pass from running out of memory\nwith the bipartite formulation to computing the distance in around 5 seconds with the $4$-partite formulation.\n\n\begin{table}\n\caption{Comparison between the bipartite and the $(d+1)$-partite approaches on Flow Cytometry data.}\n\t\label{tab:3}\n\centering\n{\renewcommand{\arraystretch}{1.2}\n\begin{tabular}{llrrrrrrr}\n & & & \multicolumn{3}{c}{Bipartite Graph} & \multicolumn{3}{c}{$(d+1)$-partite Graph} \\\nN & $d$ & $n$ & $|V|$ & $|A|$ & Runtime & $|V|$ & $|A|$ & Runtime \\\n\hline\n$16$ & 2 & 256 & 512 & 65\,536 & 0.024 (0.01) & 768 & 8\,192 & 0.003 (0.00) \\\n& 3 & 4\,096 & 8\,192 & 16\,777\,216& 38.2 (14.0) & 16\,384 & 196\,608& 0.12 (0.02) \\\n& 4 & 65\,536 & \multicolumn{3}{c}{{\it out-of-memory}} & 327\,680 & 4\,194\,304& 4.8 (0.84) \\\n \hline\noalign{\smallskip}\n$32$ & 2 & 1\,024 & 2\,048 & 1\,048\,576 & 0.71 (0.14) & 3072 & 65\,536& 0.04 (0.01) \\\n& 3 & 32\,768& \multicolumn{3}{c}{{\it out-of-memory}} & 131\,072 & 3\,145\,728& 5.23 (0.69) \\\n \hline\noalign{\smallskip}\n \end{tabular}}\n\end{table}\n\n\n\n\n\n\section{Conclusions}\nIn this paper, we have presented a new network flow formulation on $(d+1)$-partite graphs that can speed up the optimal solution of transportation problems\nwhenever the ground cost function $c(x,y)$ (see objective function \eqref{p1:funobj}) has a separable structure along the main $d$ directions, such as, for instance, \nthe squared $\ell_2$ norm used in the computation of the Kantorovich-Wasserstein distance of order 2.\n\nOur computational results on two different datasets show how our approach scales with the size of the histograms $N$ and with the dimension of the histograms $d$.\nIndeed, by exploiting the cost structure, the proposed approach is better in terms of memory consumption, since it has only $dn^{\frac{d+1}{d}}$ arcs instead of $n^2$.\nIn addition, it is much faster, since it has to solve an uncapacitated minimum cost flow problem on a much smaller flow network.\n\n\subsubsection*{Acknowledgments}\nWe are deeply indebted to Giuseppe Savar\'e for introducing us to optimal transport and for many stimulating discussions and suggestions. \nWe thank Mattia Tani for a useful discussion concerning the Improved Sinkhorn's algorithm.\n\nThis research was partially supported by the Italian Ministry of Education, University and Research (MIUR): \nDipartimenti di Eccellenza Program (2018--2022) - Dept. of Mathematics ``F. Casorati'', University of Pavia.\n\nThe last author's research is partially supported by ``PRIN 2015. 2015SNS29B-002. Modern Bayesian nonparametric methods''.\n\n\section*{Additional Material}\n\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section*{Introduction}\nThe problem of the restoration in dense or hot matter of the chiral\nsymmetry of the strong interactions, which is spontaneously violated in\nthe QCD vacuum, has been extensively addressed. The interest has largely
The interest has largely focused on the quark condensate, considered as the order parameter.\nFor independent particles the evolution of the quark condensate with density or temperature is governed by the sigma commutators of the particles present in the system, according to the following simple expression:\n\\begin{equation}\n\\frac{<\\overline{q} q(\\rho)>}{<\\overline{q} q(0)>} = 1 -\n\\sum_n\\frac{\\rho_n^s\\;\\Sigma_n}{f_{\\pi}^2m_{\\pi}^2}\n\\label{sigma}\n\\end{equation}\nwhere the sum extends over the species present in the medium, $\\rho_n^s$ is their scalar density and $\\Sigma_n$ their sigma commutator.\nPions play a crucial role in the restoration process, especially in the heat bath, where they enter as the lightest particles created by the thermal fluctuations. In the nuclear medium the main ingredients are the nucleons, with some corrections from the exchanged pions. At normal nuclear density the magnitude of the condensate has dropped by about 1\/3 (with the standard values $\\Sigma_N \\simeq 45$~MeV and $\\rho_0 \\simeq 0.17$~fm$^{-3}$, eq.~(\\ref{sigma}) gives a relative drop $\\rho_0\\Sigma_N\/f_{\\pi}^2m_{\\pi}^2 \\simeq 0.35$), a large amount of restoration. It essentially results from the nucleons adding their effects independently, the corrections due to the interaction being small.\nSuch a large amount of restoration raises the question of manifestations directly linked to the symmetry. If there is no spontaneous violation of the symmetry, {\\it i.e.}, if it is realized in the Wigner mode, the hadron masses vanish or there exist parity doublets, each hadronic state being degenerate with its chiral partner. It is therefore legitimate to believe that the large amount of restoration at normal density manifests itself either by a decrease of the hadron masses, or by a mixing between opposite parities. A link between the evolution of the hadron masses and the amount of restoration has been suggested~\\cite{BR}. But it cannot be a straightforward one. Indeed the density or temperature evolution of the masses cannot have a direct relation to that of the condensate, as follows from the works of several authors~\\cite{LS,EI,BIR}.\nOn the other hand the significance of chiral symmetry restoration for the parity mixing was first established by Dey et al.~\\cite{IOF} for the thermal case. They showed that in a pion gas a mixing occurs between the vector and axial correlators. It arises from the emission or absorption of s-wave thermal pions, which changes the parity of the system. The mixing goes along with a quenching effect of the correlators, which, to first order in the pion density, equals 4\/3 of the quenching of the quark condensate. These points were also made by Steele et al.~\\cite{ZAH}. The extension of the formalism of Dey et al. to finite densities has been attempted by Krippa~\\cite{KRI}.\n\nThe aim of this work is the discussion of the implications of chiral symmetry restoration in the nuclear medium, in a world restricted here to nucleons and pions. The only transitions allowed in the nucleus are then nuclear transitions or pion production. We give the explicit expressions of the axial and vector currents in a formalism based on chiral Lagrangians. We will show that the nuclear pions renormalize the coupling constants of the axial current and that the renormalization can be expressed in terms of the pion scalar density. This last quantity also enters the quark condensate evolution. However the complexity of the nuclear interactions bars a simple link between this evolution, which is an average concept, and the renormalizations. 
For instance, for the axial coupling constant $g_A$ the detailed spatial structure of the pion scalar density is needed.\n\nOur article is organized as follows. In section 1 we derive the expressions of the axial and vector currents from the chiral Lagrangians. In section 2 we use these expressions to study the renormalization of the pion decay constant in the hot pion gas and in the nuclear medium. The thermal case is only introduced as an illustration of the method since the results are already known. In section 3 we apply the same technique to the axial coupling constant. To account for the nucleon-nucleon correlations, we express the renormalization in the traditional language of meson exchange currents. We give an estimate of the quenching of the axial coupling constant. We also discuss the renormalization of the Kroll-Ruderman matrix element of pion photoproduction.\n\n\\bigskip\n\\section{The Lagrangian and the currents} \nOur starting point is the chiral Lagrangian in the form introduced by Weinberg. We use, as in our previous work of ref.~\\cite{DCE}, the version of Lynn~\\cite{LYN}, which allows one to obtain the nucleon sigma commutator in the tree approximation. The Lagrangian reads:\n\\begin{eqnarray}\n{\\cal L}& = & -\\frac{1}{2} m_{\\pi}^2 \\frac{\\displaystyle{\\hbox{\\boldmath$\n\\phi $\\unboldmath}}^2}{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi $\\unboldmath}}\n ^2\/4f_{\\pi}^2} + \\frac{1}{2}\\frac{\\displaystyle \\partial_{\\mu}\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}\\cdot \n\\partial^{\\mu}{\\hbox{\\boldmath$\\phi $\\unboldmath}}}\n{\\displaystyle(1 + {\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2)^2}\n \\nonumber \\\\\n& & + 2\\sigma_N \\overline{\\psi}\\psi\n\\frac{\\displaystyle{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2} +\n\\overline{\\psi}(i\\gamma_{\\mu}\\partial^{\\mu}-M)\\psi \\nonumber \\\\\n& & - \\frac{1}{4f_{\\pi}^2}\n\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}{\\hbox{\\boldmath$(\\tau \\times\n\\phi)$\\unboldmath}} \\cdot \\partial^{\\mu}{\\hbox{\\boldmath$\\phi $\\unboldmath}}\n\\psi}{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n+\n\\frac{g_A}{2f_{\\pi}}\n\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\n{\\hbox{\\boldmath$\\tau $\\unboldmath}}\\cdot\\partial^{\\mu}\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}\\psi}{\\displaystyle 1\n+{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2} \\; .\n\\label{lag}\n\\end{eqnarray}\n\nWe have to specify the quantity $\\sigma_N$ associated with the nucleon density in eq.~(\\ref{lag}). The free nucleon sigma commutator $\\Sigma_N$ cannot be entirely attributed to the pion cloud. We define $\\sigma_N$ to be the difference between the total and pionic contributions:\n\\begin{equation}\n\\Sigma_N = \\sigma_N + \\frac{1}{2} m_{\\pi}^2 \\int\n d{\\hbox{\\boldmath$x $\\unboldmath}}\\langle N \\vert \n{\\hbox{\\boldmath$\\phi^2(x) $\\unboldmath}}\\vert N \\rangle .\n\\label{sig}\n\\end{equation}\n\nFor instance, in a description of the nucleon in terms of valence quarks and pions, the pionic contribution is approximately 1\/2 to 2\/3 of the total value~\\cite{JCT,BMG}. 
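For later reference it is useful to record the expansion of the Lagrangian~(\\ref{lag}) to fourth order in the pion field (this check is ours; the dots stand for the nucleon kinetic term, the derivative quartic terms coming from the pion kinetic energy, and higher orders):\n\\[ {\\cal L} = \\frac{1}{2}\\partial_{\\mu}{\\hbox{\\boldmath$\\phi$\\unboldmath}}\\cdot\\partial^{\\mu}{\\hbox{\\boldmath$\\phi$\\unboldmath}} - \\frac{1}{2}m_{\\pi}^2{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2 + \\frac{m_{\\pi}^2}{8f_{\\pi}^2}({\\hbox{\\boldmath$\\phi$\\unboldmath}}^2)^2 + \\frac{\\sigma_N}{2f_{\\pi}^2}\\overline{\\psi}\\psi\\,{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2 + \\dots \\]\nThe quartic term in ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$ displays the s-wave $\\pi$-$\\pi$ interaction of this representation, while the $\\sigma_N$ term, quadratic in the pion field, generates the non-pionic piece of the nucleon sigma commutator at tree level, in line with eq.~(\\ref{sig}).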
\n\nFrom the Lagrangian of eq.~(\\ref{lag}) we derive the expressions of the axial and isovector vector currents:\n\\begin{eqnarray}\n{\\hbox{\\boldmath${\\cal A} $\\unboldmath}}_{\\mu} & = & \nf_{\\pi}\\frac\n{\\displaystyle\\partial_{\\mu}{\\hbox{\\boldmath$\\phi $\\unboldmath}}}\n{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n-\\frac{1}{2f_{\\pi}}\\frac{\\displaystyle\\big[({\\hbox{\\boldmath$\\phi\n \\times $\\unboldmath}}\\partial_{\\mu}{\\hbox{\\boldmath$\\phi)\\times\\phi\n $\\unboldmath}}\\big]}{\\displaystyle (1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2)^2} \\nonumber \\\\ \n & & + \\frac{g_A}{2}\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\n{\\hbox{\\boldmath$\\tau $\\unboldmath}}\\psi\n+\\frac{g_A}{4f_{\\pi}^2}\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\n\\big[{\\hbox{\\boldmath$(\\tau\\times\\phi)\\times\\phi $\\unboldmath}}\\big]\\psi}\n{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n-\\frac{1}{2f_{\\pi}}\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}\n{\\hbox{\\boldmath$(\\tau\\times\\phi) $\\unboldmath}}\\psi}\n{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n\\label{acur} \\\\\n& & \\nonumber \\\\\n{\\hbox{\\boldmath${\\cal V} $\\unboldmath}}_{\\mu} & = &\n\\frac{\\displaystyle({\\hbox{\\boldmath$\\phi\n \\times $\\unboldmath}}\\partial_{\\mu}{\\hbox{\\boldmath$\\phi)\n $\\unboldmath}}}{\\displaystyle( 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2)^2} \\nonumber \\\\\n& & +\\frac{1}{2}\\overline{\\psi}\\gamma_{\\mu}\n{\\hbox{\\boldmath$\\tau $\\unboldmath}}\\psi\n+\\frac{1}{4f_{\\pi}^2}\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}\n\\big[{\\hbox{\\boldmath$(\\tau\\times\\phi)\\times\\phi $\\unboldmath}}\\big]\\psi}\n{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\n-\\frac{g_A}{2f_{\\pi}}\\frac{\\displaystyle\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\n{\\hbox{\\boldmath$(\\tau\\times\\phi) $\\unboldmath}}\\psi}\n{\\displaystyle 1 +\n{\\hbox{\\boldmath$\\phi $\\unboldmath}}^2\/4f_{\\pi}^2}\\; .\n\\label{vcur}\n\\end{eqnarray}\n\nThe conservation of the vector current can be verified using the equations of motion for the nucleon and pion fields. The divergence of the axial current instead satisfies the following relation:\n\\begin{equation}\n\\partial^{\\mu}{\\hbox{\\boldmath${\\cal A}$\\unboldmath}}_{\\mu} = -\nf_{\\pi}m_{\\pi}^2\\;\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}}\n{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\/4f_{\\pi}^2}\n\\;(1-\\sigma_N\\frac{\\overline{\\psi}\\psi}{f_{\\pi}^2m_{\\pi}^2}) \\; .\n\\label{div}\n\\end{equation}\n\nSome comments on the expressions~(\\ref{acur}) and~(\\ref{vcur}) are in order. Let us first discuss the free case. We recognize in some of the terms the usual expressions for the vector or axial current coupled to a free nucleon or pion. In addition the axial current can create one or more pions, either in free space (first terms of eq.~(\\ref{acur})), or when it acts on the nucleon via a term (the last one of eq.~(\\ref{acur})) which is, for the axial current, the equivalent of the Weinberg-Tomozawa term of $\\pi$-N scattering. Similarly the vector current acting on the nucleon can create one (or more) pion via the Kroll-Ruderman term, {\\it i.e.} the contact piece of photoproduction (last term of eq.~(\\ref{vcur})).\n\nLet us now turn to the case of a hadronic medium. 
\nThe expressions~(\\ref{acur}) and (\\ref{vcur}) illustrate in a striking fashion the way in which the axial and vector current mixing occurs. Indeed, in the heat bath any of the pions can be a thermal one. As an example, consider the Kroll-Ruderman term of the vector current, ignoring at this stage the denominator. The creation or annihilation of a thermal pion of momentum $q$ in this term takes care of the pion field, leaving a factor $e^{\\pm iqx}$, and we are left with a current of opposite parity, to be taken at the momentum transfer $k\\pm q$ where $k$ is the photon momentum, as in the formalism of ref.~\\cite{ZAH}. Similarly the pion production or annihilation by the Weinberg term of the axial current introduces the vector current nuclear matrix element. To the extent that the Weinberg term is mediated by the rho meson and the Kroll-Ruderman one by the $A_1$ meson, these expressions include the effects, at low momenta, of the $\\rho-A_1$ mixing.\n\nIt is interesting to observe in expressions~(\\ref{acur}) and (\\ref{vcur}) that the Kroll-Ruderman term itself can be obtained from the fourth term of the axial current by suppression of one of the pion fields (representing creation or annihilation of a thermal pion). Thus the three terms containing $g_A$ in eqs.~(\\ref{acur}) and (\\ref{vcur}) are linked together by suppression or addition of one pion field. The same is true for the three purely pionic terms and for the three terms in $\\gamma_{\\mu}$ as well. Thus a grouping three by three of the various terms naturally emerges from our expressions.\n \nIn the nuclear medium the virtual pions can be seen as a pion bath and similar considerations about the mixing might apply. However the pions do not come from an external reservoir but fully belong to the nucleus. Strictly speaking there is no mixing. Nevertheless the mixing terms of the currents can pick a pion from the cloud of a nucleon, introducing a similarity with the heat bath, as displayed in fig.~\\ref{fig-figc} in the case of the Kroll-Ruderman term. The corresponding process is the excitation of high-lying nuclear states (2p-2h). In the case of the third isospin component of the current, it is part of the well-known quasi-deuteron photoabsorption cross-section. Another example is the influence of the Weinberg-Tomozawa term on the time part of the axial current, which enters via the Pauli correlations and sizeably increases the time-like axial coupling constant~\\cite{KDR}. The present approach puts these effects, where the mixing terms of the currents pick a pion from the cloud, in a perspective linked to chiral symmetry.\n\nThe mixing goes along with a renormalization of certain coupling constants, such as the axial one, that will now be discussed.\n \n\\bigskip\n\\section{The pion decay constant}\nWe start with the case of the hot pion gas. This is meant as an illustration of our method, since no new result is obtained. It serves to introduce quantities such as the residue $\\gamma$ ({\\it i.e.}, the wave function renormalization) that will be used later. The production of a pion by the axial current is governed by the first two terms of the expression~(\\ref{acur}). 
Limiting the expansion to first order in the squared pion field we obtain: \n\\begin{equation}\n{\\hbox{\\boldmath${\\cal A}$\\unboldmath}}_{\\mu} = \n f_{\\pi}\\partial_{\\mu}{\\hbox{\\boldmath$\\phi$\\unboldmath}}\\;(1 -\n\\frac{7}{12}\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}) \\; .\n\\end{equation}\n(The coefficient $7\/12 = 1\/4 + 1\/3$ combines the first-order expansion of the denominator of the first term with the isospin average $\\langle\\phi_a\\phi_b\\rangle = \\delta_{ab}\\langle{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\\rangle\/3$ applied to the double cross product of the second term.)\nThe pion field is expanded in terms of creation and annihilation operators $B$ and $B^{\\dag}$ for a quasi-pion in the medium:\n\\begin{equation}\n{\\hbox{\\boldmath$\\phi$\\unboldmath}}(x) =\n\\gamma^{1\/2}\\int\\frac{d{\\hbox{\\boldmath$k$\\unboldmath}}}{(2\\pi)^3}\n\\frac{1}{(2\\omega_k^*)^{1\/2}}\n\\big( {\\hbox{\\boldmath$B_k$\\unboldmath}} +\n {\\hbox{\\boldmath$B_{-k}$\\unboldmath}}^{\\dag}\\big)\n e^{i({\\hbox{\\boldmath$k\\cdot x$\\unboldmath}}-\\omega_k^* t)}\\; ,\n\\label{pifi} \n\\end{equation}\nwhere $\\omega_k^*$ is the energy of a quasi-pion of momentum {\\bf k}, $\\omega_k^* = \\sqrt{\\displaystyle {\\hbox{\\boldmath$k$\\unboldmath}}^2 + m_{\\pi}^{*2}}$ with $m_{\\pi}^*$ the effective pion mass. The quantity $\\gamma$ is the residue of the pion pole. Since the derivative of the pion field gives no contribution when it acts on the pions of the bath, the matrix element for production of a quasi-pion by the axial current reduces to: \n\\begin{equation}\n\\langle 0\\vert{\\hbox{\\boldmath${\\cal A}$\\unboldmath}}_{\\mu}(0)\\vert\n \\tilde{\\pi}\\rangle\n=\\frac{\\gamma^{1\/2}}{(2\\omega_k^*)^{1\/2}} if_{\\pi} k_{\\mu}\\big(1\n-\\frac{7}{12}\\langle\\displaystyle\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}\n{f_{\\pi}^2}\\rangle\\big) = \\frac{1}{(2\\omega_k^*)^{1\/2}} if_{\\pi}^* k_{\\mu}\\;\n\\end{equation}\nwhere the second equality defines the renormalized pion decay constant $f_{\\pi}^*$. In a pion gas the residue $\\gamma$ has been derived by Chanfray et al.~\\cite{CEW}. To first order in the quantity ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$, equivalently the pion density, it reads, in the Weinberg representation: \n\\begin{equation} \n\\gamma = \\big(1 -\n\\frac{1}{2}\\langle\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}\n{f_{\\pi}^2}\\rangle\\big)^{-1}\\; .\n\\label{res}\n\\end{equation}\n The renormalized pion decay constant then reads:\n\\begin{equation}\nf_{\\pi}^* = f_{\\pi} \\gamma^{1\/2}\\big(1-\\frac{7}{12}\\langle\\displaystyle\n\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big)\n\\approx f_{\\pi} \\big(1-\\frac{1}{3}\\langle\\displaystyle\n\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big) \\; .\n\\label{fpist}\n\\end{equation}\nOn the other hand, the temperature evolution of the condensate in a hot pion gas is, to first order in the quantity ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$, as given in ref.~\\cite{CEW}:\n\\begin{equation}\n\\frac{<\\overline{q} q>_T}{<\\overline{q} q>_0} = 1 - \\frac{1}{2}\\langle\n\\displaystyle\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}\n{f_{\\pi}^2}\\rangle_T \\;.\n\\label{condT}\n\\end{equation}\nThus to first order in $\\langle{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\\rangle$ the renormalization of $f_\\pi$ follows the evolution of the condensate but with the coefficient 2\/3, in agreement with chiral perturbation theory results and other works~\\cite{IOF,CEW,GL,GeL}. Note that this renormalization applies to both space and time components of the axial current. This agrees with the findings of ref.~\\cite{EEK} where it is shown that to order $T^2$ Lorentz invariance is preserved.\n\n We now turn to the dense medium. 
Formally we can follow the same procedure. The presence in the nuclear medium of a pion scalar density, in the form of an expectation value of the quantity ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$, renormalizes the pion decay constant. The expression is formally the same as before, $f_{\\pi}^* = f_{\\pi} \\gamma^{1\/2}\\big(1-\\frac{7}{12}\\langle\\displaystyle\n\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big)$.\nIf we treat the nuclear medium as a pion gas, the residue $\\gamma$ entirely arises from $\\pi$-$\\pi$ interactions and is the same as given previously in eq.~(\\ref{res}). In this simplified treatment $f_{\\pi}^*$ is given by eq.~(\\ref{fpist}). It is linked to the pion scalar density, {\\it i.e.}, the expectation value of ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$. Even with the simple form~(\\ref{fpist}), the renormalization of $f_{\\pi}$ does not follow 2\/3 of the condensate one. The reason is that the condensate evolution in the nuclear medium is governed by the full nucleon sigma commutator $\\Sigma_N$, which is not entirely due to ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$. There exists also the non-pionic contribution embodied in $\\sigma_N$, as discussed previously. Thus the two renormalizations do not follow each other. This result is general and applies as well to the axial coupling constant $g_A$. \n\nThis is not the only restriction which prevents a simple link to the condensate in the nuclear medium. The residue $\\gamma$ itself is not entirely due to $\\pi$-$\\pi$ scattering. There exist other sources for the energy dependence of the s-wave $\\pi$-N interaction, such as the $\\Delta$ excitation. The medium renormalization of $f_{\\pi}$ then cannot be written in the simple form~(\\ref{fpist}). This illustrates the complexity of the dense medium as compared to the hot pion gas. A more phenomenological approach has been followed by Chanfray et al.~\\cite{CEK}, who linked the in-medium pion decay constant, through the nuclear Gell-Mann--Oakes--Renner relation, to the evolution of the pion mass, itself obtained empirically from the s-wave pion-nucleus optical potential. \n\n\\bigskip\n\\section{The axial coupling constant}\nWe now turn to the axial coupling constant. 
Its renormalization is governed by the fourth term of eq.~(\\ref{acur}). After rearrangement with the Gamow-Teller current (third term), we get:\n\\begin{equation}\n\\frac{1}{2}g_A\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\\big({\\hbox{\\boldmath$\\tau$\n\\unboldmath}} +\n\\frac{1}{2f_{\\pi}^2}\\frac{{\\hbox{\\boldmath$\\phi\\tau \\cdot\\phi -\\tau\\phi^2$\n\\unboldmath}}}\n{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\/4f_{\\pi}^2}\\big)\\psi\n=\\frac{1}{2}g_A\\overline{\\psi}\\gamma_{\\mu}\\gamma_5{\\hbox{\\boldmath$\\tau$\n\\unboldmath}}\\psi\\big(1-\\frac{1}{3}\n\\langle\\frac{{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\/f_{\\pi}^2}\n{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\/4f_{\\pi}^2}\n\\rangle_T\\big)\\; ,\n\\label{axialr}\n\\end{equation}\nwhere on the right-hand side the average is taken over the heat bath. On the other hand the condensate is obtained from the chiral symmetry breaking Lagrangian ${\\cal L}_{sb} = -\\frac{1}{2}m_{\\pi}^2{\\hbox{\\boldmath$\n\\phi $\\unboldmath}}^2\/( 1 + {\\hbox{\\boldmath$\\phi $\\unboldmath}}\n ^2\/4f_{\\pi}^2)$. Therefore the condensate evolution follows~\\cite{CEW}:\n\\begin{equation}\n\\frac{<\\overline{q} q>_{T,\\rho}}{<\\overline{q} q>_0} - 1 = \n-\\frac{1}{2}\\langle\\frac{\\displaystyle{\\hbox{\\boldmath$\n\\phi $\\unboldmath}}^2\/f_{\\pi}^2}{\\displaystyle 1 + {\\hbox{\\boldmath$\\phi $\\unboldmath}}\n ^2\/4f_{\\pi}^2}\\rangle_{T,\\rho} \\; . \n\\label{condF}\n\\end{equation}\n Hence the axial coupling constant renormalized by the pion loops (fig.~\\ref{fig-figb}a) can be written: \n\\begin{equation}\ng_A^*\/g_A = 1 - \\frac{2}{3}\\big(1 - \\frac{<\\overline{q} q>_{T}}\n{<\\overline{q} q>_0}\\big) \\; .\n\\label{gar}\n\\end{equation}\nThus with this chiral Lagrangian, in a hot medium the axial coupling constant follows, to all orders in the pion density, 2\/3 of the quark condensate evolution (as long as it is pion dominated). The factor 2\/3 is easily understood here: only two charges out of three contribute to the renormalization while all three charge states participate in the condensate evolution. The quenching of $g_A$ is in agreement with the universal behaviour of ref.~\\cite{IOF} and with the former result of ref.~\\cite{EK}. We have checked the expected independence of our results from the particular representation of the non-linear Lagrangian.\n\nWe now turn to the case of finite density. The starting expression is the same as the left-hand side of eq.~(\\ref{axialr}). In the nuclear medium the pions originate from the other nucleons, so that the nucleon-nucleon correlations cannot be ignored. Here it is useful to make the link between this renormalization and the traditional picture of meson exchange currents. We keep only the two-body terms, which are the dominant ones, and work to lowest order in the pion field. \nThe corresponding graph is that of fig.~\\ref{fig-figb}b. This type of exchange graph with two pions is not usually considered in nuclear physics. It is dictated here solely by chiral symmetry considerations. \nWe have to express the triangular graph of the figure as an effective two-body operator to be evaluated between correlated two-nucleon wave functions. A simplification occurs in the static approximation where the pions do not transfer energy to the nucleon line. 
We are left with an integral over the squared pion propagator which leads to a simple form in $x$-space for the two-body operator:\n\n\\begin{equation} \n O_{12} = - \\frac{1}{6f_{\\pi}^2} g_A (\\gamma_{\\mu} \\gamma_5)_1\n{\\hbox{\\boldmath$ \\big(\\tau_1 -\\frac{i}{2}(\\tau_1\\times\\tau_2)\\big)\n$\\unboldmath}}\n\\varphi^2{\\hbox{\\boldmath$ (x_1,x_2)$\\unboldmath}}\\; , \n\\label{Otb}\n\\end{equation}\nwhere $\\varphi{\\hbox{\\boldmath$(x_1,x_2)$\\unboldmath}}$ is the Yukawa field, taken at the point \\boldmath$x_1$\\unboldmath, emitted by the nucleon located at the point \\boldmath$x_2$\\unboldmath, and we have made explicit the dependence on the isospin operator \\boldmath$\\tau_2$\\unboldmath\\ of the emitting nucleon. The operator $O_{12}$ has a direct and an exchange contribution. The latter contribution vanishes for the second piece of the two-body operator in the limit of zero momentum of the current. We will furthermore ignore the exchange term of the first piece ({\\it i.e.}, the one in \\boldmath$\\tau_1$\\unboldmath) and consider only the short-range correlations. We now focus on the direct terms and specialize to the charged currents. The isospin factors in eq.~(\\ref{Otb}) reduce to the expression $3\\tau_1^{\\pm} - 2\\tau_1^{\\pm}\\tau_2^{\\pm}\\tau_2^{\\mp} =\n 2\\tau_1^{\\pm}(1\\mp\\tau_2^0\/2)$. The resulting contributions depend on the relative number of protons and neutrons. In symmetric nuclear matter where they are equal, the factor, once summed over all the pion emitters (the nucleons with index 2), gives $2\\tau_1^{\\pm}$ multiplied by the nuclear density $\\rho$. In the neutron gas instead, depending on whether we consider neutron decay (the $+$ component) or proton decay (the $-$ one), we would get a factor $3\\tau_1^+$ or $\\tau_1^-$ multiplied by the neutron density $\\rho_n$.\nWe can summarize these results by introducing an effective density, which depends on the charge of the current and on the neutron excess number:\n\\begin{equation}\n \\rho_{eff}^+ = \\frac{3N+Z}{2A}\\rho \\qquad \n\\rho_{eff}^- = \\frac{3Z+N}{2A}\\rho \\, \n\\label{rhoe}\n\\end{equation} \nfrom which we recover the previous results.\nSandwiching the whole operator $O_{12}$ between two-nucleon wave functions, we thus obtain:\n \n\\begin{equation}\n\\delta g_A^{ex}\/g_A = -\\frac{1}{3f_{\\pi}^2}\\int\nd{\\hbox{\\boldmath$x_2$\\unboldmath}}\\rho_{eff}^{\\pm}(x_2)[1+\nG({\\hbox{\\boldmath$x_1,x_2$\n\\unboldmath}})] \\varphi^2\n({\\hbox{\\boldmath$x_1,x_2$\\unboldmath}})\\; ,\n\\label{gad}\n\\end{equation}\n where $ G({\\hbox{\\boldmath$x_1,x_2$\\unboldmath}})$ is the short-range nucleon-nucleon correlation function. In symmetric nuclear matter $\\rho_{eff} = \\rho$, whereas a similar formula would hold in the neutron gas, with the obvious replacement of $\\rho$ by the neutron density $\\rho_n$ and of the factor 1\/3 in front by 1\/2 and 1\/6 for neutron and proton decay respectively. As is apparent from the expression~(\\ref{gad}), it is not the full pion field squared which acts in the renormalization of $g_A$, but only the part which extends beyond the range of the correlation hole. No such distinction occurred for the pion decay constant, since the pion produced by the axial current can be anywhere in the nucleus. Thus the universality of the quenching which exists in the heat bath is lost. \n\nIn order to obtain an estimate for $g_A^*$ in symmetric matter, we assume a total exclusion of other nucleons in a sphere of radius $r_0 = 0.6$~fm. 
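For the explicit evaluation we use the classical pion cloud of a point-like nucleon (a standard result, which we record here for definiteness; it is the quantity whose integral beyond $r_0$ is taken below). With the pseudovector coupling constant $f = g_A m_{\\pi}\/2f_{\\pi}$, the squared Yukawa field at a distance $r$ from the emitter, summed over spin and isospin, is\n\\[ {\\hbox{\\boldmath$\\varphi$\\unboldmath}}^2(r) = 3\\Big(\\frac{f}{m_{\\pi}}\\Big)^2\\Big(\\frac{1}{4\\pi}\\Big)^2\\Big(m_{\\pi}+\\frac{1}{r}\\Big)^2\\frac{e^{-2m_{\\pi}r}}{r^2}\\; . \\]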
In order to facilitate the comparison of the quenching effect of $g_A$ to that of the condensate, which is governed by the nucleon sigma term, we introduce a quantity $(\\Sigma_N)_{eff}$:\n\\begin{equation}\n(\\Sigma_N)_{eff} = \\frac{1}{2}m_\\pi^2\\int d{\\hbox{\\boldmath$x$\\unboldmath}}\n\\theta(x-r_0) {\\hbox{\\boldmath$\\varphi^2(x)$\\unboldmath}} \\; .\n\\label{sigeff}\n\\end{equation}\nNumerically, for point-like pion emitters, we find an effective value $(\\Sigma_N)_{eff}\\approx 21$~MeV. It is interesting to compare this value with a model calculation in the quark picture. We have used the results of Wakamatsu~\\cite{WA} in a chiral soliton model. The quantity ${\\hbox{\\boldmath$\\varphi$\\unboldmath}}^2({\\hbox{\\boldmath$x$\\unboldmath}})$ is replaced by the sea quark density distribution according to: $\\frac{1}{2}m_\\pi^2 {\\hbox{\\boldmath$\\varphi^2(x)$\\unboldmath}} \\to\n 2m_q\\overline{q} q{\\hbox{\\boldmath$(x)$\\unboldmath}} $. This gives a very similar value, $(\\Sigma_N)_{eff}\\approx 19$~MeV.\n \nHowever these numbers do not include the Pauli blocking effect, which removes the occupied states in the process of pion emission. This effect has been calculated in refs.~\\cite{ERC,CE}, but for the whole space integral ({\\it i.e.} without a cut-off) of the quantity ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$. Expressed in terms of a modification of the sigma commutator, it amounts to a reduction $(\\Delta\\Sigma_N)_{Pauli} = -2.6$~MeV. The blocking effect, which is moderate, should be even less pronounced with the cut-off. We ignore it in the following. \n\nComing back to the renormalized axial coupling constant, we have:\n\\begin{equation}\ng_A^*\/g_A = 1 - \\frac{2}{3}\\frac{\\rho(\\Sigma_N)_{eff}}{f_{\\pi}^2m_{\\pi}^2}\\; .\n\\label{gaq}\n\\end{equation} \nThis represents a 10\\% quenching at normal nuclear density in symmetric matter (15\\% for neutron decay in a neutron gas of the same density), while the condensate has dropped by 35\\%. Notice that the evolution of $g_A$ is sizeably slower. This quenching applies to all the components, space or time, of the axial current. Other renormalization effects have to be added. They are known to act differently on the different components. For instance the Weinberg-Tomozawa term acts on the time component alone, producing a sizeable enhancement~\\cite{KDR}. In the case of the space component the nucleon polarization under the influence of the pion field $N \\to \\Delta$ leads to the Lorentz-Lorenz quenching~\\cite{EFT}. In the latter case the two renormalizations go in the same direction, that of a quenching. The extra reduction that we have introduced in this work could help to explain the large amount of quenching observed in Gamow-Teller transitions. To get an idea, we fictitiously translate the reduction by chiral symmetry into an equivalent Lorentz-Lorenz effect. We introduce an effective Landau-Migdal parameter $\\delta g_{N\\Delta}'$, to be added to the genuine one, so as to reproduce the 10\\% quenching. This corresponds to $\\delta g_{N\\Delta}' \\approx 0.16$, a significant increase. Indeed the quenching of the Gamow-Teller sum rule requires, if attributed entirely to the Lorentz-Lorenz effect, $g_{N\\Delta}'$ to be as large as 0.6--0.7, while the favoured theoretical value is around 0.4~\\cite{DIC}. Hence the chiral-induced quenching would help to fill the gap. 
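As a numerical cross-check (ours, not part of the original estimate), the integral~(\\ref{sigeff}) over the classical cloud recorded above can be evaluated directly; it assumes the standard values $f^2\/4\\pi \\simeq 0.079$, $f_{\\pi} = 92.4$~MeV, $m_{\\pi} = 139.57$~MeV and $\\rho_0 = 0.17$~fm$^{-3}$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nhbarc = 197.327               # MeV fm\nm_pi = 139.57 \/ hbarc         # pion mass in 1\/fm\nf2 = 4 * np.pi * 0.079        # pseudovector coupling, f^2\/4pi ~ 0.079\nr0 = 0.6                      # correlation-hole radius in fm\n\n# (Sigma_N)_eff = (3 f^2 \/ 8 pi) int_{r0}^inf (m_pi + 1\/r)^2 e^{-2 m_pi r} dr\nI, _ = quad(lambda r: (m_pi + 1\/r)**2 * np.exp(-2*m_pi*r), r0, np.inf)\nsigma_eff = 3 * f2 \/ (8 * np.pi) * I * hbarc      # in MeV\n\nrho0 = 0.17 * hbarc**3        # nuclear density in MeV^3\nquench = (2\/3) * rho0 * sigma_eff \/ (92.4 * 139.57)**2\nprint(sigma_eff, quench)      # ~ 20 MeV and ~ 0.10\n\\end{verbatim}\nThe output, close to the 21~MeV and 10\\% quoted in the text, depends slightly on the choice of coupling constant and pion mass.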
\n\n\nClosely related to the Gamow-Teller transition is the pion photoproduction at\nthreshold through the Kroll-Ruderman term. To lowest order the nuclear\ntransition is governed by the axial current. We want now to discuss how it is\nrenormalized in the medium following chiral symmetry requirements. Expanding to\nfirst order in ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2\/f_{\\pi}^2$\n and applying Wick theorem, the relevant current writes:\n\\begin{equation}\n({\\hbox{\\boldmath${\\cal V}$\\unboldmath}}_{\\mu})_{KR} =\n-\\frac{g_A}{2f_{\\pi}}\\overline{\\psi}\\gamma_{\\mu}\\gamma_5\n({\\hbox{\\boldmath$\\tau\\times\\phi$\\unboldmath}})\n\\psi\\big(1 - \\frac{5}{12}\\langle\\displaystyle\\frac{\n{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big) \\; .\n\\label{KR}\n\\end{equation}\nFor the production of a quasi-pion in the medium, the renormalization $r_{KR}$\n of the amplitude involves again the residue $\\gamma$:\n\\begin{equation}\nr_{KR} = \\gamma^{1\/2}\\big(1 - \\frac{5}{12}\\langle\\displaystyle\\frac{\n{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big) \\; .\n\\end{equation}\n \nFor illustrating the complexity of the situation we first assume that the\n residue is entirely given by $\\pi-\\pi$ scattering and\ntake the value of eq.~(\\ref{res}). Moreover we ignore the correlation\n complications. We obtain then: \n\\begin{equation}\nr_{KR} = \\big(1 - \\frac{1}{6}\\langle\\displaystyle\\frac{\n{\\hbox{\\boldmath$\\phi$\\unboldmath}}^2}{f_{\\pi}^2}\\rangle\\big) \\; ,\n\\label{rKR}\n\\end{equation}\nwhich is 1\/3 of the variation of the condensate, in contradistinction to the\naxial transitions where the factor is 2\/3. This result does not contradict the\ngeneral expressions of Dey et al.~\\cite{IOF} as the Kroll-Ruderman term\nrepresents already a mixing of the axial current into the vector one. This\nreduction factor could apply to other mixing amplitudes, but we have not\nestablished it. \nThe evolution as 1\/3 of the condensate one would apply in the hot pion gas\nsituation. In the nuclear medium all the complications mentioned previously\noccur: the role of the correlations, the link between the condensate evolution\nand the expectation value of ${\\hbox{\\boldmath$\\phi$\\unboldmath}}^2$ and\n the problem with the residue $\\gamma$.\nThis case cumulates all of the difficulties of the dense medium. In all\ninstances the overall renormalization of the Kroll-Ruderman matrix element\nin the nuclear medium should be small.\n\n\\section{Conclusion}\nIn conclusion we have investigated the behaviour of the nuclear medium in\nrelation with chiral symmetry restoration. We have\nfocused\non the extension of the parity mixing concept between the axial and vector\ncorrelators, which exists in the hot pion gas. In the nuclear medium there is\n no mixing {\\it stricto sensu}. Indeed the pions, which induce the \nmixing, are not part of an\nexternal system, as in the thermal case, but they belong to the virtual pion\ncloud which is an integral part of the nucleus. We have shown that nevertheless \ncertain consequences of the mixing survive. The nucleus behaves in certain\nrespects as a pion reservoir. The virtual pion emitted by a nucleon acts, as\nillustrated in fig.~\\ref{fig-figa}, on the\nremainder of the nucleus, {\\it i.e.} on the system of (A-1) nucleons, as the\npion of the heat bath. The mixing which does not exist at the level of the\nwhole nucleus is present only at the sublevel of the (A-1) nucleon system. 
This translates into the fact that, in the ``mixing'' cross-sections (such as the quasi-deuteron photoabsorption one), at least one nucleon has to be ejected: the emitter or absorber of the pion. As for the heat bath, this pion reservoir produces a quenching of the axial coupling constants. Since the pion originates from a neighbouring nucleon, this renormalization is nothing but a meson exchange contribution. It involves the exchange of two pions and has not, to our knowledge, been considered so far. We have expressed the renormalizations in terms of the pion scalar density. The same quantity also enters the quark condensate evolution. One can therefore think of a link between the two quantities, as occurs in the heat bath where the link is simple. There is however an important difference between the nuclear medium and the heat bath: the renormalizations are not described by a universal quenching factor expressed in terms of the average squared pion field. The nucleonic observables such as $g_A$ are renormalized differently, owing to their sensitivity to short-range nucleon-nucleon correlations. In this case only that part of the pionic field which lies beyond the correlation hole enters the renormalization. This prevents the link to the condensate evolution, which instead involves the average scalar density. This is an illustration of the point made by T. Ericson~\\cite{TE} about the possible importance of the spatial fluctuations of the condensate. We have given an estimate for the quenching of the axial coupling constant arising from the requirements of chiral symmetry. Although it is not very large (about 10\\%), this additional quenching is significant and may help explain the large observed quenching of the Gamow-Teller sum rule.\nWe have also discussed the photoproduction amplitude arising from the Kroll-Ruderman term. It represents a mixing term of the axial current into the vector one. We have shown that its evolution is slower than that of the axial coupling constant.\n\nThis work can be extended by enlarging the space of states. The first step is to include the $\\Delta$ excitation. Another extension concerns the explicit introduction of the rho and the $A_1$ mesons, which, through the mixing of the axial and vector correlators, can be excited either by the vector or by the axial current.\n\n\\bigskip\nWe thank Prof. T. Ericson for useful comments. We are very grateful to Prof. M. Wakamatsu for communicating his detailed results on the quark scalar density.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}