diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzoidm" "b/data_all_eng_slimpj/shuffled/split2/finalzzoidm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzoidm" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\n\\section{Introduction}\n\\label{Sec:Introduction}\n\nAs numerical simulations of black-hole binaries improve, the criterion\nfor success moves past the ability of a code to merely persist through\nmany orbits of inspiral, merger, and ringdown. Accuracy becomes the\ngoal, as related work in astrophysics and analysis of data from\ngravitational-wave detectors begins to rely more heavily on results\nfrom numerical relativity. One of the most important challenges in\nthe field today is to find and eliminate systematic errors that could\npollute results built on numerics. Though there are many possible\nsources of such error, one stands out as being particularly easy to\nmanage and---as we show---a particularly large effect: the error made\nby extracting gravitational waveforms from a simulation at finite\nradius, and treating these waveforms as though they were the\nasymptotic form of the radiation.\n\nThe desired waveform is the one to which post-Newtonian approximations\naspire, and the one sought by gravitational-wave observatories: the\nasymptotic waveform. This is the waveform as it is at distances of\nover $10^{14}\\,M$ from the system generating the waves. In typical\nnumerical simulations, data extraction takes place at a distance of\norder $100\\,M$ from the black holes. At this radius, the waves are\nstill rapidly changing because of real physical effects. Near-field\neffects~\\cite{Teukolsky1982, Boyle2008, Boyle2009} are plainly\nevident, scaling with powers of the ratio of the reduced wavelength to\nthe radius, $(\\lambdabar\/r)^k$.\\footnote{We use the standard notation\n $\\lambdabar \\equiv \\lambda \/ 2\\pi$.} %\nExtraction methods aiming to eliminate the influence of gauge effects\nalone (\\foreign{e.g.}\\xspace, improved Regge--Wheeler--Zerilli or quasi-Kinnersley\ntechniques) will not be able to account for these physical changes.\n\nEven using a rather naive, gauge-dependent extraction method,\nnear-field effects dominate the error in extracted waves throughout\nthe inspiral for the data presented in this paper~\\cite{Boyle2008}.\nFor extraction at $r=50\\,M$, these effects can account for a\ncumulative error of roughly $50\\%$ in amplitude or a phase difference\nof more than one radian, from beginning to end of a 16-orbit\nequal-mass binary merger. Note that near-field effects should be\nproportional to---at leading order---the ratio of $\\lambdabar\/r$ in\nphase and $(\\lambdabar\/r)^2$ in amplitude, as has been observed\npreviously \\cite{HannamEtAl2008, Boyle2008}. Crucially, because the\nwavelength changes most rapidly during the merger, the amplitude and\nphase differences due to near-field effects also change most rapidly\nduring merger. This means that coherence is lost between the inspiral\nand merger\/ringdown segments of the waveform.\n\nWe can see the importance of this decoherence by looking at its effect\non the matched-filtering technique frequently used to analyze data\nfrom gravitational-wave detectors. Matched filtering~\\cite{Finn1992,\n FinnChernoff1993, BoyleEtAl2009a} compares two signals, $s_{1}(t)$\nand $s_{2}(t)$. 
It does this by Fourier transforming each into the
frequency domain, taking the product of the signals, weighting each
inversely by the noise---which is a function of frequency---and
integrating over all frequencies.  This match is optimized over the
time and phase offsets of the input waveforms.  For appropriately
normalized waveforms, the result is a number between 0 and 1, denoted
$\Braket{ s_{1} | s_{2}}$, with 0 representing no match, and 1
representing a perfect match.  If we take the extrapolated waveform as
$s_{1}$ and the waveform extracted at finite radius as $s_{2}$, we can
evaluate the match between them.  If the extrapolated waveform
accurately represents the ``true'' physical waveform, the mismatch
(defined as $1-\Braket{s_{1}|s_{2}}$) shows us the loss of signal in
data analysis if we were to use the finite-radius waveforms to search
for physical waveforms in detector data.

The waveforms have a simple scaling with the total mass of the system,
which sets their frequency scale relative to the noise present in the
detector.  In Figs.~\ref{fig:MismatchInitial}
and~\ref{fig:MismatchAdvanced}, we show mismatches between
finite-radius and extrapolated data from the Caltech--Cornell group
for a range of masses of interest to LIGO data analysis, using the
Initial- and Advanced-LIGO noise curves, respectively, to weight the
matches.  The value of $R$ denotes the coordinate radius of extraction
for the finite-radius waveform.

\begin{figure}
  \includegraphics[width=\linewidth]{Fig1}
  \caption{\CapName{Data-analysis mismatch between finite-radius
      waveforms and the extrapolated waveform for Initial LIGO} This
    plot shows the mismatch between extrapolated waveforms and
    waveforms extracted at several finite radii, scaled to various
    values of the total mass of the binary system, using the
    Initial-LIGO noise curve.  The waveforms are shifted in time and
    phase to find the optimal match.  Note that the data used here is
    solely numerical, with no direct post-Newtonian contribution.
    Thus, for masses below $40\,\Sun$, this data represents only a
    portion of the physical waveform.}
  \label{fig:MismatchInitial}
\end{figure}

\begin{figure}
  \includegraphics[width=\linewidth]{Fig2}
  \caption{\CapName{Data-analysis mismatch between finite-radius
      waveforms and the extrapolated waveform for Advanced LIGO} This
    plot shows the mismatch between extrapolated waveforms and
    waveforms extracted at several finite radii, scaled to various
    values of the total mass of the binary system, using the
    Advanced-LIGO noise curve.  The waveforms are shifted in time and
    phase to find the optimal match.  Note that the data used here is
    solely numerical, with no direct post-Newtonian contribution.
    Thus, for masses below $110\,\Sun$, this data represents only a
    portion of the physical waveform.}
  \label{fig:MismatchAdvanced}
\end{figure}

The data in these figures is exclusively numerical data from the
simulation used throughout this paper, with no direct contributions
from post-Newtonian (PN) waveforms.  However, to reduce ``turn-on''
artifacts in the Fourier transforms, we have simply attached PN
waveforms to the earliest parts of the time-domain waveforms,
performed the Fourier transform, and set to zero all data at
frequencies for which PN data are used.  The match integrals are
performed over the intersection of the frequencies present in each
waveform, as in Ref.~\cite{HannamEtAl2009a}.
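For concreteness, the following Python sketch shows how such a
mismatch can be computed for uniformly sampled time-domain waveforms.
It is a minimal illustration rather than the code used to produce the
figures; the callable \texttt{Sn}, the sampling step \texttt{dt}, and
all variable names are assumptions of the sketch, and a one-sided
noise power spectral density must be supplied for the detector in
question (returning infinity outside the sensitive band).

\begin{verbatim}
import numpy as np

def mismatch(s1, s2, dt, Sn):
    """1 - <s1|s2>, maximized over time and phase offsets.

    s1, s2 : real, uniformly sampled time-domain waveforms.
    dt     : sampling step.
    Sn     : callable giving the one-sided noise PSD at frequency f
             (should return infinity outside the sensitive band).
    """
    n = len(s1)
    f = np.fft.rfftfreq(n, dt)
    h1 = np.fft.rfft(s1) * dt
    h2 = np.fft.rfft(s2) * dt
    weight = 1.0 / Sn(f)
    weight[0] = 0.0                 # exclude the DC term
    df = f[1] - f[0]

    def norm(h):
        return np.sqrt(4.0 * df * np.sum(np.abs(h)**2 * weight))

    # The overlap as a function of time offset is an inverse FFT;
    # taking its modulus also maximizes over the phase offset.
    integrand = h1 * np.conj(h2) * weight
    overlap = 4.0 * df * n * np.abs(np.fft.irfft(integrand, n))
    return 1.0 - overlap.max() / (norm(h1) * norm(h2))
\end{verbatim}
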
This means that the data used here is not truly complete for masses
below $40\,\Sun$ in Initial LIGO and $110\,\Sun$ in Advanced LIGO,
and that a detected signal would actually be dominated by data at
lower frequencies than are present in this data for masses below about
$10\,\Sun$.  These masses are correspondingly larger for shorter
waveforms, which begin at higher frequencies.  It is important to
remember that this type of comparison can only show that a given
waveform (of a given length) is as good as it needs to be for a
detector.  If the waveform does not cover the sensitive band of the
detector, the detection signal-to-noise ratio would presumably improve
given a comparably accurate waveform of greater duration.  Thus, the
bar is raised for longer waveforms, and for lower masses.

These figures demonstrate that the mismatch can be of order $1\%$ when
extracting at a radius of $R=50\,M$.  For extraction at $R=225\,M$,
the mismatch is never more than about $0.2\%$.  The loss in event rate
would be---assuming homogeneous distribution of events in
space---roughly 3 times the mismatch when using a template bank based
on imperfect waveforms~\cite{Brown2004}.  Lindblom
\foreign{et~al.}\xspace~\cite{LindblomEtAl2008} cite a target mismatch of less than
$0.5\%$ between the physical waveform and a class of model templates
to be used for \emph{detection} of events in current LIGO detector
data.\footnote{This number of 0.5\% results from assumptions about
  typical event magnitude, template bank parameters, and requirements
  on the maximum frequency of missed events.  The parameters used to
  arrive at this number are typical for Initial LIGO.}  Thus, for
example, if these numerical waveforms were to be used in construction
of template banks,\footnote{We emphasize that these waveforms do not
  cover the sensitive band of current detectors, and thus would not
  likely be used to construct template banks without the aid of
  post-Newtonian extensions of the data.  Longer templates effectively
  have more stringent accuracy requirements, so the suitability of
  these extraction radii would change for waveforms of different
  lengths.  In particular, our results are consistent with those of
  Ref.~\cite{HannamEtAl2009a}, which included non-extrapolated data of
  shorter duration.} the waveform extracted at $R=50\,M$ would not be
entirely sufficient, in the sense that a template bank built on
waveforms with this level of inaccuracy would lead to an unacceptably
high reduction of event rate.  The waveforms extracted at $R=100\,M$
and $225\,M$, on the other hand, may be acceptable for Initial LIGO.
For the loudest signals expected to be seen by Advanced LIGO, the
required mismatch may be roughly $10^{-4}$~\cite{LindblomEtAl2008}.
In this case, even extraction at $R=225\,M$ would be insufficient;
some method must be used to obtain the asymptotic waveform.  For both
Initial and Advanced LIGO, estimating the parameters of the
waveform---masses and spins of the black holes, for
instance---requires still greater accuracy.

Extrapolation of certain quantities has been used for some time in
numerical relativity.
Even papers announcing the first successful
black-hole binary evolutions \cite{BuonannoEtAl2007,
  CampanelliEtAl2006, BakerEtAl2006} showed radial extrapolation of
scalar physical quantities---radiated energy and angular momentum.
But waveforms reported in the literature have not always been
extrapolated.  For certain purposes, this is
acceptable---extrapolation simply removes one of many errors.  If the
precision required for a given purpose allows it, extrapolation is
unnecessary.  However, for the purposes of LIGO data analysis, we see
that extrapolation of the waveform may be very important.

We can identify three main obstacles to obtaining the asymptotic form
of gravitational-wave data from numerical simulations:
\begin{enumerate}
  \item Getting the ``right'' data at any given point, independent of
    gauge effects (\foreign{e.g.}\xspace, using quasi-Kinnersley techniques and improved
    Regge--Wheeler--Zerilli techniques);
  \item Removing near-field effects;
  \item Extracting data along a physically relevant path.
\end{enumerate}
Many groups have attempted to deal with the first of these
problems.\footnote{See~\cite{NerozziEtAl2005}
  and~\cite{SarbachTiglio2001} and references therein for descriptions
  of quasi-Kinnersley and RWZ methods, respectively.  Also, an
  interesting discussion of RWZ methods, and the possibility of
  finding the ``exact'' waveform at finite distances is found
  in~\cite{PazosEtAl2007}.}  While this is, no doubt, an important
objective, even the best extraction technique to date is imperfect at
finite radii.  Moreover, at finite distances from the source,
gravitational waves continue to undergo real physical changes as they
move away from the system~\cite{Thorne1980}, which are frequently
ignored in the literature.  Some extraction techniques have been
introduced that attempt to incorporate corrections for these physical
near-field effects~\cite{AbrahamsEvans1988, AbrahamsEvans1990,
  DeadmanStewart2009}.  However, these require assumptions about the
form of those corrections, which we prefer not to impose.  Finally,
even if we have the optimal data at each point in our spacetime, it is
easy to see that extraction along an arbitrary (timelike) path through
that spacetime could produce a nearly arbitrary waveform, bearing no
resemblance to a waveform that could be observed in a nearly inertial
detector.  In particular, if our extraction point is chosen at a
specific coordinate location, gauge effects could make that extraction
point correspond to a physical path which would not represent any real
detector's motion.  It is not clear how to estimate the uncertainty
this effect would introduce to the waveforms, except by removing the
effect entirely.

We propose a simple method using existing data-extraction techniques
which should be able to overcome each of these three obstacles, given
certain very basic assumptions.  The data are to be extracted at a
series of radii---either on a series of concentric spheres, or at
various radii along an outgoing null ray.  These data can then be
expressed as functions of extraction radius and retarded time using
either of two simple methods we describe.  For each value of retarded
time, the waveforms can then be fit to a polynomial in inverse powers
of the extraction radius.  The asymptotic waveform is simply the first
nonzero term in the polynomial.
Though this method also incorporates
certain assumptions, they amount to assuming that the data behave as
radially propagating waves, and that the metric itself is
asymptotically Minkowski in the coordinates chosen for the simulation.

Extrapolation is, by its very nature, a dangerous procedure.  The
final result may be numerically unstable, in the sense that it will
fail to converge as the order of the extrapolating polynomial is
increased.  This is to be expected, as the size of the effects to be
removed eventually falls below the size of noise in the waveform data.
There are likely better methods of determining the asymptotic form of
gravitational waves produced by numerical simulations.  For example,
characteristic evolution is a promising technique that may become
common in the near future~\cite{BishopEtAl1996, Huebner2001,
  BabiucEtAl2005, BabiucEtAl2008}.  Nonetheless, extrapolation does
provide a rough-and-ready technique which can easily be implemented by
numerical-relativity groups using existing frameworks.

This paper presents a simple method for implementing the extrapolation
of gravitational-wave data from numerical simulations, and the
motivation for doing so.  In Sec.~\ref{sec:TortoiseExtrapolation}, we
begin by introducing an extrapolation method that uses approximate
tortoise coordinates, which is the basic method used to extrapolate
data in various papers~\cite{BoyleEtAl2007b, BoyleEtAl2008a,
  ScheelEtAl2009, BoyleEtAl2009a, BuonannoEtAl2009a} by the
Caltech--Cornell collaboration.  The method is tested on the inspiral,
merger, and ringdown waveform data of the equal-mass, nonspinning,
quasicircular 15-orbit binary simulation of the Caltech--Cornell
collaboration.  We present the convergence of the wave phase and
amplitude as the extrapolation order increases, and we also compare
data extrapolated using various extraction radii.  In
Sec.~\ref{sec:PhaseExtrapolation}, we propose a different
extrapolation method using the wave phase---similar to the method
introduced in Ref.~\cite{HannamEtAl2008}---to independently check our
results, again demonstrating the convergence properties of the method.
In Sec.~\ref{sec:Comparison}, we compare the extrapolated waveforms of
both methods at various extrapolation orders, showing that they agree
to well within the error estimates of the two methods.  A brief
discussion of the pitfalls and future of extrapolation is found in
Sec.~\ref{sec:Conclusions}.  Finally, we include a brief appendix on
techniques for filtering noisy data, which is particularly relevant
here because extrapolation amplifies noise.


\section{Extrapolation using approximate tortoise coordinates}
\label{sec:TortoiseExtrapolation} %

There are many types of data that can be extracted from a numerical
simulation of an isolated source of gravitational waves.  The two most
common methods of extracting gravitational waveforms involve using the
Newman--Penrose $\Psi_{4}$ quantity, or the metric perturbation $\ensuremath{h}$
extracted using Regge--Wheeler--Zerilli techniques.  Even if we focus
on a particular type of waveform, the data can be extracted at a
series of points along the $z$ axis, for example, or decomposed into
multipole components and extracted on a series of spheres around the
source.  To simplify this introductory discussion of extrapolation, we
ignore the variety of particular types of waveform data.
Rather, we
generalize to some abstract quantity $f$, which encapsulates the
quantity to be extrapolated and behaves roughly as a radially outgoing
wave.

We assume that $f$ travels along outgoing null cones, which we
parametrize by a retarded time $\ensuremath{t_{\text{ret}}}$.  Along each of these null cones,
we further assume that $f$ can be expressed as a convergent (or at
least asymptotic) series in $1/r$---where $r$ is some radial
coordinate---for all radii of interest.  That is, we assume
\begin{equation}
  \label{eq:FormOfExtrapolatedFunction}
  f(\ensuremath{t_{\text{ret}}}, r) = \sum_{k=0}^{\infty}\, \frac{f_{(k)}(\ensuremath{t_{\text{ret}}})} {r^{k}}\ ,
\end{equation}
for some functions $f_{(k)}$.  The asymptotic behavior of $f$ is given
by the lowest nonzero $f_{(k)}$.\footnote{For example, if
  $f=r\Psi_{4}$, then $f_{(0)}$ gives the asymptotic behavior; if
  $f=\Psi_{4}$, then $f_{(1)}$ gives the asymptotic behavior.}

Given data for such an $f$ at a set of retarded times, and a set of
radii $\{r_{i}\}$, it is a simple matter to fit the data for each
value of $\ensuremath{t_{\text{ret}}}$ to a polynomial in $1/r$.  That is, for each value of
$\ensuremath{t_{\text{ret}}}$, we take the set of data $\left\{ f(\ensuremath{t_{\text{ret}}}, r_{i}) \right\}$ and
fit it to a finite polynomial so that
\begin{equation}
  \label{eq:FittingPolynomial}
  f(\ensuremath{t_{\text{ret}}}, r_{i}) \simeq \sum_{k=0}^{N}\, \frac{f_{(k)}(\ensuremath{t_{\text{ret}}})}
  {r^{k}_{i}}\ .
\end{equation}
Standard algorithms~\cite{PressEtAl2007} can be used to accomplish
this fitting; here we use the least-squares method.  Of course,
because we are truncating the series of
Eq.~\eqref{eq:FormOfExtrapolatedFunction} at $k=N$, some of the
effects from $k>N$ terms will appear at lower orders.  We will need to
choose $N$ appropriately, checking that the extrapolated quantity has
converged sufficiently with respect to this order.
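In practice, this fit is an ordinary least-squares polynomial fit in
the variable $1/r$, performed independently at each retarded time.
The following Python sketch illustrates the idea; it is a minimal
illustration rather than production code, and the array layout
(\texttt{f\_data[i,j]} holding $f$ at the $i$th retarded time and
$j$th radius, already interpolated onto a common grid of retarded
times) is an assumption of the sketch.

\begin{verbatim}
import numpy as np

def extrapolate(f_data, radii, N):
    """Fit f(t_ret, r_i) to a polynomial in 1/r at each
    retarded time.

    f_data : array of shape (n_times, n_radii); row i holds
             f at retarded time t_ret[i] for each radius r_j.
    radii  : the n_radii extraction radii.
    N      : order of the extrapolating polynomial.

    Returns an array of shape (n_times, N+1) whose column k is
    f_(k)(t_ret); column 0 is the asymptotic waveform f_(0).
    """
    x = 1.0 / np.asarray(radii)
    # np.polyfit performs a least-squares fit for every column
    # of its second argument, returning the highest power first;
    # reverse so column k is the coefficient of (1/r)^k.
    coeffs = np.polyfit(x, f_data.T, N)
    return coeffs[::-1].T
\end{verbatim}

In the application below, the (real) amplitude and phase of the wave
are fitted separately in exactly this way.
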
\subsection{Radial parameter}
\label{sec:ChoiceOfR}
One subtlety to be considered is the choice of $r$ parameter to be
used in the extraction and fitting.  For numerical simulation of an
isolated system, one simple and obvious choice is the coordinate
radius $\ensuremath{r_{\text{coord}}}$ used in the simulation.  Alternatively, if the data is
measured on some spheroidal surface, it is possible to define an areal
radius $\ensuremath{r_{\text{areal}}}$ by measuring the area of the sphere along with $f$, and
setting $\ensuremath{r_{\text{areal}}} \equiv \sqrt{\text{area}/4\pi}$.  Still other choices are
certainly possible.

One objective in choosing a particular $r$ parameter is to ensure the
physical relevance of the final extrapolated quantity.  If we try to
detect the wave, for example, we may want to think of the detector as
being located at some constant value of $r$.  Or, we may want $r$ to
asymptotically represent the luminosity distance.  These conditions
may be checked by inspecting the asymptotic behavior of the metric
components in the given coordinates.  For example, if the metric
components in a coordinate system including $r$ asymptotically
approach those of the standard Minkowski metric, it is not hard to see
that an inertial detector could follow a path of constant $r$
parameter.

Suppose we have two different parameters $r$ and $\tilde{r}$ which can
be related by a series expansion
\begin{equation}
  \label{eq:rRelation}
  r = \tilde{r}\, \left[ 1 + a/\tilde{r} + \ldots \right]\ .
\end{equation}
For the data presented in this paper, we can show that the coordinate
radius $\ensuremath{r_{\text{coord}}}$ and areal radius $\ensuremath{r_{\text{areal}}}$ are related in this way.
Introducing the expansion coefficients $\tilde{f}_{(k)}$, we can write
\begin{equation}
  \label{eq:FormOfExtrapolatedFunctionExpanded}
  f(\ensuremath{t_{\text{ret}}}, r) = \sum_{k=0}^{\infty}\, \frac{f_{(k)}(\ensuremath{t_{\text{ret}}})} {r^{k}} =
  \sum_{k=0}^{\infty}\, \frac{\tilde{f}_{(k)}(\ensuremath{t_{\text{ret}}})} {\tilde{r}^{k}}\ .
\end{equation}
Inserting Eq.~\eqref{eq:rRelation} into this formula, Taylor
expanding, and equating terms of equal order $k$, shows that $f_{(0)}
= \tilde{f}_{(0)}$ and $f_{(1)} = \tilde{f}_{(1)}$.  Thus, if the
asymptotic behavior of $f$ is given by $f_{(0)}$ or $f_{(1)}$, the
final extrapolated data should not depend on whether $r$ or
$\tilde{r}$ is used.  On the other hand, in practice we truncate these
series at finite order.  This means that higher-order terms could
``pollute'' $f_{(0)}$ or $f_{(1)}$.  The second objective in choosing
an $r$ parameter, then, is to ensure fast convergence of the series in
Eq.~\eqref{eq:FittingPolynomial}.  If the extrapolated quantity does
not converge quickly as the order of the extrapolating polynomial $N$
is increased, it may be due to a poor choice of $r$ parameter.

The coordinate radius used in a simulation may be subject to large
gauge variations that are physically irrelevant, and hence are not
reflected in the wave's behavior.  That is, the wave may not fall off
nicely in inverse powers of that coordinate radius.  For the data
discussed later in this paper, we find that using the coordinate
radius of extraction spheres is indeed a poor choice, while using the
areal radius of those extraction spheres improves the convergence of
the extrapolation.

\subsection{Retarded-time parameter}
\label{sec:ChoiceOfRetardedTime} %

Similar considerations must be made for the choice of retarded-time
parameter $\ensuremath{t_{\text{ret}}}$ to be used in extrapolation.  It may be possible to
evolve null geodesics in numerical simulations, and use these to
define the null curves on which data is to be extracted.  While this
is an interesting possibility that deserves investigation, we propose
two simpler methods here based on an approximate retarded time
constructed using the coordinates of the numerical simulation and the
phase of the waves measured in that coordinate system.

Again, we have two criteria for choosing a retarded-time parameter.
First is the physical suitability in the asymptotic limit.  For
example, we might want the asymptotic $\ensuremath{t_{\text{ret}}}$ to be (up to an additive
term constant in time) the proper time along the path of a detector
located at constant $r$.  Again, checking the asymptotic behavior of
the metric components with respect to $\ensuremath{t_{\text{ret}}}$ and $r$ should be a
sufficient test of the physical relevance of the parameters.
Second,
we wish to have rapid convergence of the extrapolation series using
the chosen parameter, which also needs to be checked.

As before, we can also show the equivalence of different choices for
the $\ensuremath{t_{\text{ret}}}$ parameter.  Suppose we have two different approximations
$\ensuremath{t_{\text{ret}}}$ and $\ensuremath{\accentset{\smile}{t}_{\text{ret}}}$ that can be related by a series expansion
\begin{equation}
  \label{eq:trRelation}
  \ensuremath{t_{\text{ret}}} = \ensuremath{\accentset{\smile}{t}_{\text{ret}}}\, \left[ 1 + b/r + \ldots \right]\ .
\end{equation}
Using the new expansion coefficients $\accentset{\smile}{f}_{(k)}$, we
can write
\begin{equation}
  \label{eq:FormOfExtrapolatedFunctionExpandedForTR}
  f(\ensuremath{t_{\text{ret}}}, r) = \sum_{k=0}^{\infty}\, \frac{f_{(k)}(\ensuremath{t_{\text{ret}}})} {r^{k}} =
  \sum_{k=0}^{\infty}\, \frac{\accentset{\smile}{f}_{(k)}(\ensuremath{\accentset{\smile}{t}_{\text{ret}}})}
  {r^{k}}\ .
\end{equation}
Now, however, we need to assume that the functions $f_{(k)}$ can be
well-approximated by Taylor series.  If this is true, we can again
show that $f_{(0)} = \accentset{\smile}{f}_{(0)}$ or, \emph{if} we
have $f_{(0)}=\accentset{\smile}{f}_{(0)}=0$, that $f_{(1)} =
\accentset{\smile}{f}_{(1)}$.  The condition that $f$ be
well-approximated by a Taylor series is nontrivial, and can help to
inform the choice of $f$.  Similarly, the speed of convergence of the
extrapolation can help to inform the choice of a particular $\ensuremath{t_{\text{ret}}}$
parameter.  While it has been shown \cite{DamourEtAl2008} that a
retarded-time parameter as simple as $\ensuremath{t_{\text{ret}}}=T-R$ is sufficient for some
purposes, we find that convergence during and after merger is
drastically improved when using a somewhat more careful choice.

Since we will be considering radiation from an isolated compact
source, our basic model for $\ensuremath{t_{\text{ret}}}$ comes from the Schwarzschild
spacetime; we assume that the system in question approaches this
spacetime at increasing distance.  In analogy with the
time-retardation effect on outgoing null rays in a Schwarzschild
spacetime~\cite{Chandrasekhar1992}, we define a ``tortoise
coordinate'' $\ensuremath{r_{\ast}}$ by
\begin{equation}
  \label{eq:TortoiseCoordinate}
  \ensuremath{r_{\ast}} \equiv r + 2 \Eadm \ln\left(\frac{r}{2\Eadm} - 1 \right)\ ,
\end{equation}
where $\Eadm$ is the ADM mass of the initial data.\footnote{Kocsis
  and Loeb~\cite{KocsisLoeb2007} pointed out that the propagation of a
  roughly spherical gravitational wave should be affected primarily by
  the amount of mass \emph{interior to} the wave.  Because the waves
  from a merging binary can carry off a significant fraction
  (typically a few percent) of the binary's mass, this suggests that
  we should allow the mass in this formula to vary in time, falling by
  perhaps a few percent over the duration of the waveform.  However,
  this is a small correction of a small correction; we have not found
  it necessary.  Perhaps with more refined methods, this additional
  correction would be relevant.} %
In standard Schwarzschild coordinates, the appropriate retarded time
would be given by $\ensuremath{t_{\text{ret}}} = t - \ensuremath{r_{\ast}}$.
It is not hard to see that the
exterior derivative $\d \ensuremath{t_{\text{ret}}}$ is null with respect to the Schwarzschild
metric.

Taking inspiration from this, we can attempt to account for certain
differences from a Schwarzschild background.  Let $T$ and $R$ denote
the simulation's coordinates, and suppose that we extract the metric
components $g^{TT}$, $g^{TR}$, and $g^{RR}$ from the simulation.  We
seek a $\ensuremath{t_{\text{ret}}}(T,R)$ such that
\begin{equation}
  \label{eq:dTRetarded}
  \d\ensuremath{t_{\text{ret}}} = \frac{\partial \ensuremath{t_{\text{ret}}}}{\partial T}\, \d T + \frac{\partial
    \ensuremath{t_{\text{ret}}}}{\partial R}\, \d R
\end{equation}
is null with respect to these metric components.  That is, we seek a
$\ensuremath{t_{\text{ret}}}$ such that
\begin{multline}
  \label{eq:NullCondition}
  g^{TT}\, \left( \frac{\partial \ensuremath{t_{\text{ret}}}}{\partial T} \right)^{2} + 2
  g^{TR}\, \left( \frac{\partial \ensuremath{t_{\text{ret}}}}{\partial T} \right)\, \left(
    \frac{\partial \ensuremath{t_{\text{ret}}}}{\partial R} \right) \\ + g^{RR}\, \left(
    \frac{\partial \ensuremath{t_{\text{ret}}}}{\partial R} \right)^{2} = 0\ .
\end{multline}
We introduce the ansatz $\ensuremath{t_{\text{ret}}} = t - \ensuremath{r_{\ast}}$, where $t$ is assumed to be a
slowly varying function of $R$,\footnote{More specifically, we need
  $\lvert \partial t/\partial R \rvert \ll \lvert \partial \ensuremath{r_{\ast}}
  / \partial R \rvert$.  This condition needs to be checked for all
  radii used, at all times in the simulation.  For the data presented
  below, we have checked this, and shown it to be a valid assumption,
  at the radii used for extrapolation.} %
and $\ensuremath{r_{\ast}}$ is given by Eq.~\eqref{eq:TortoiseCoordinate} with $R$ in
place of $r$ on the right side.  If we ignore $\partial t / \partial
R$ and insert our ansatz into Eq.~\eqref{eq:NullCondition}, we have
\begin{multline}
  \label{eq:NullConditionB}
  g^{TT}\, \left( \frac{\partial t}{\partial T} \right)^{2} - 2
  g^{TR}\, \left( \frac{\partial t}{\partial T} \right)\, \left(
    \frac{1}{1-2\Eadm/R} \right) \\ + g^{RR}\, \left(
    \frac{1}{1-2\Eadm/R} \right)^{2} = 0\ .
\end{multline}
We can solve this for $\partial t / \partial T$:
\begin{equation}
  \label{eq:RetardedTimeSolutionA}
  \frac{\partial t} {\partial T} = \frac{1}{1-2\Eadm/R}\,
  \frac{g^{TR} \pm \sqrt{(g^{TR})^{2} - g^{TT}\, g^{RR}}} {g^{TT}}\ .
\end{equation}
Substituting the Schwarzschild metric components shows that we should
choose the negative sign in the numerator of the second factor.
Finally, we can integrate (numerically) to find
\begin{equation}
  \label{eq:FullRetardedTimeSolution}
  t = \int_{0}^{T}\, \frac{1}{g^{TT}}\, \frac{g^{TR} -
    \sqrt{(g^{TR})^{2} - g^{TT}\, g^{RR}}} {1-2\Eadm/R}\, \d T'\ .
\end{equation}
Now, in the case where $g^{TR}$ is small compared to 1, we may wish to
ignore it, in which case we have
\begin{equation}
  \label{eq:RetardedTimeSolutionB}
  t = \int_{0}^{T}\, \frac{\sqrt{-g^{RR} / g^{TT}}} {1-2\Eadm/R}\,
  \d T'\ .
\end{equation}
It is not hard to see that this correctly reduces to $t=T$ in the
Schwarzschild case.

For the data discussed later in this paper, we make further
assumptions that $g^{RR} = 1-2\Eadm/R$, and that $R=\ensuremath{r_{\text{areal}}}$.
That is,
we define the corrected time
\begin{subequations}
  \label{eq:DynamicLapseCorrection}
  \begin{gather}
    \ensuremath{t_{\text{corr}}} \equiv \int_{0}^{T}\, \sqrt{\frac{-1/g^{TT}}{1 -
        2\Eadm/\ensuremath{r_{\text{areal}}}}} \, \d T' \intertext{and the retarded time} \ensuremath{t_{\text{ret}}}
    \equiv \ensuremath{t_{\text{corr}}} - \ensuremath{r_{\ast}}\ .
  \end{gather}
\end{subequations}
We find that this corrected time leads to a significant improvement
over the naive choice of $t(T)=T$, while no improvement results from
using Eq.~\eqref{eq:FullRetardedTimeSolution}.

\subsection{Application to a binary inspiral}
\label{sec:Application}
To begin the extrapolation procedure, we extract the (spin-weight
$s=-2$) $(l,m)=(2,2)$ component of $\Psi_{4}$ data on a set of spheres
at constant coordinate radius in the simulation.\footnote{See
  Ref.~\cite{ScheelEtAl2009} for details of the extraction procedure.
  We use $\Psi_{4}$ data here, rather than Regge--Wheeler--Zerilli
  data because the $\Psi_{4}$ data from this simulation is of higher
  quality; it appears that the RWZ data is more sensitive to changes
  in gauge conditions after the merger.  This problem is still under
  investigation.} %
In the black-hole binary simulations used here (the same as those
discussed in Refs.~\cite{BoyleEtAl2007b, Boyle2008, BoyleEtAl2008a,
  ScheelEtAl2009}), these spheres are located roughly\footnote{
  Explicitly, the extraction spheres are at radii $\ensuremath{r_{\text{coord}}}/\ensuremath{M_{\text{irr}}} = \{75,
  85, 100, 110, 120, \ldots, 190, 200, 210, 225\}$, though we find
  that the final result is not sensitive to the exact placement of the
  extraction spheres.} %
every $\Delta\ensuremath{r_{\text{coord}}} \approx 10\Mirr$ from an inner radius of
$\ensuremath{r_{\text{coord}}}=75\Mirr$ to an outer radius of $\ensuremath{r_{\text{coord}}}=225\Mirr$, where $\Mirr$
denotes the total apparent-horizon mass (the sum of the irreducible
masses) of the two holes at the beginning of the simulation.  This
extraction occurs at time steps of $\Delta \ensuremath{t_{\text{coord}}} \approx 0.5\Mirr$
throughout the simulation.  We also measure the areal radius, $\ensuremath{r_{\text{areal}}}$,
of these spheres by integrating the induced area element over the
sphere to find the area, and defining $\ensuremath{r_{\text{areal}}} \equiv
\sqrt{\text{area}/4\pi}$.  This typically differs from the coordinate
radius $\ensuremath{r_{\text{coord}}}$ by roughly $\Mirr/\ensuremath{r_{\text{coord}}}$.  Because of gauge effects, the
areal radius of a coordinate sphere changes as a function of time, so
we measure this as a function of time.  Finally, we measure the
average value of $g^{TT}$ as a function of coordinate time on the
extraction spheres to correct for the dynamic lapse function.  The
areal radius and $g^{TT}$ are then used to compute the retarded time
$\ensuremath{t_{\text{ret}}}$ defined in Eq.~\eqref{eq:DynamicLapseCorrection}.

The gravitational-wave data $\Psi_{4}$, the areal radius $\ensuremath{r_{\text{areal}}}$, and
the lapse $N$ are all measured as functions of the code coordinates
$\ensuremath{t_{\text{coord}}}$ and $\ensuremath{r_{\text{coord}}}$.  We can use these to construct the retarded time
defined in Eq.~\eqref{eq:DynamicLapseCorrection}, using $\ensuremath{r_{\text{areal}}}$ in place
of $r$.  This, then, will also be a function of the code coordinates.
The mapping between $(\ensuremath{t_{\text{ret}}},\ensuremath{r_{\text{areal}}})$ and $(\ensuremath{t_{\text{coord}}},\ensuremath{r_{\text{coord}}})$ is invertible, so we
can rewrite $\Psi_{4}$ as a function of $\ensuremath{t_{\text{ret}}}$ and $\ensuremath{r_{\text{areal}}}$.
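Concretely, the retarded time of Eq.~\eqref{eq:DynamicLapseCorrection}
involves only a tortoise coordinate and a one-dimensional time
integration for each extraction sphere.  The following Python sketch
shows one way to compute it; it is a minimal illustration with
hypothetical variable names, and the inputs are assumed to be sampled
at the extraction times of a single sphere.  (In older versions of
SciPy the cumulative trapezoid rule is called \texttt{cumtrapz}.)

\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def retarded_time(T, gTT, r_areal, E_adm):
    """Compute t_ret = t_corr - r_* for one extraction sphere.

    T       : coordinate times of extraction.
    gTT     : average of g^{TT} over the sphere at each time.
    r_areal : areal radius of the sphere at each time.
    E_adm   : ADM mass of the initial data.
    """
    # Tortoise coordinate, with the areal radius in place of r.
    r_star = r_areal + 2.0 * E_adm * np.log(
        r_areal / (2.0 * E_adm) - 1.0)
    # Corrected time: integrate sqrt[(-1/g^TT)/(1 - 2E/r)] dT.
    integrand = np.sqrt((-1.0 / gTT) /
                        (1.0 - 2.0 * E_adm / r_areal))
    t_corr = cumulative_trapezoid(integrand, T, initial=0.0)
    return t_corr - r_star
\end{verbatim}
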
As noted in Sec.~\ref{sec:ChoiceOfRetardedTime}, we need to assume
that the extrapolated functions are well approximated by Taylor
series.  Because the real and imaginary parts of $\Psi_{4}$ are
rapidly oscillating in the data presented here, we prefer to use the
same data in smoother form.  We define the complex amplitude $A$ and
phase $\phi$ of the wave:
\begin{equation}
  \label{eq:AmplitudeAndPhaseDefinition}
  \ensuremath{r_{\text{areal}}}\, \Mirr\, \Psi_4 \equiv A\, \ensuremath{\mathrm{e}}^{\i \phi}\ ,
\end{equation}
where $A$ and $\phi$ are functions of $\ensuremath{t_{\text{ret}}}$ and $\ensuremath{r_{\text{areal}}}$.  Note that this
definition factors out the dominant $1/r$ behavior of the amplitude.
This equation defines the phase with an ambiguity of multiples of
$2\pi$.  In practice, we ensure that the phase is continuous as a
function of time by adding suitable multiples of $2\pi$.  The
continuous phase is easier to work with for practical reasons, and is
certainly much better approximated by a Taylor series, as required by
the argument surrounding
Eq.~\eqref{eq:FormOfExtrapolatedFunctionExpandedForTR}.

A slight complication arises in the relative phase offset between
successive radii.  Noise in the early parts of the waveform makes the
overall phase offset go through multiples of $2\pi$ essentially
randomly.  We choose some fairly noise-free (retarded) time and ensure
that phases corresponding to successive extraction spheres are matched
at that time, by simply adding multiples of $2\pi$ to the phase of the
entire waveform---that is, we add a multiple of $2\pi$ to the phase at
all times.

Extrapolation of the waveform, then, basically consists of finding the
asymptotic forms of these functions, $A$ and $\phi$, as functions of
time.  We apply the general technique discussed above to $A$ and
$\phi$.  Explicitly, we fit the data to polynomials in $1/\ensuremath{r_{\text{areal}}}$ for
each value of retarded time:
\begin{subequations}
  \label{eq:ExtrapolationFormula}
  \begin{align}
    \label{eq:AmplitudeExtrapolation}
    A(\ensuremath{t_{\text{ret}}},\ensuremath{r_{\text{areal}}}) &\simeq \sum_{k=0}^N\,\frac{A_{(k)}(\ensuremath{t_{\text{ret}}})}{\ensuremath{r_{\text{areal}}}^k}\ , \\
    \label{eq:PhaseExtrapolation}
    \phi(\ensuremath{t_{\text{ret}}},\ensuremath{r_{\text{areal}}}) &\simeq \sum_{k=0}^N\,\frac{\phi_{(k)}(\ensuremath{t_{\text{ret}}})}{\ensuremath{r_{\text{areal}}}^k}\ .
  \end{align}
\end{subequations}
The asymptotic waveform is fully described by $A_{(0)}$ and
$\phi_{(0)}$.  When the order of the approximating polynomials is
important, we will denote by $A_{N}$ and $\phi_{N}$ the asymptotic
waveforms resulting from approximations using polynomials of order
$N$.
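These steps---unwrapping the phase in time and fixing the relative
multiples of $2\pi$ between spheres at a chosen matching time---can be
sketched in a few lines of Python (again with hypothetical names;
\texttt{psi4} is assumed to hold the complex $(2,2)$ mode from each
sphere, interpolated onto a common retarded-time grid):

\begin{verbatim}
import numpy as np

def amplitude_and_phase(psi4, r_areal, M_irr, i_match):
    """Split r*M*Psi_4 into amplitude and continuous phase,
    with the overall multiples of 2*pi matched across
    extraction spheres at the noise-free time index i_match."""
    A, phi = [], []
    for r, psi in zip(r_areal, psi4):
        f = r * M_irr * psi
        A.append(np.abs(f))
        phi.append(np.unwrap(np.angle(f)))  # continuous in time
    for j in range(1, len(phi)):
        n = np.round((phi[0][i_match] - phi[j][i_match])
                     / (2 * np.pi))
        phi[j] = phi[j] + 2 * np.pi * n  # shift whole waveform
    return np.array(A), np.array(phi)
\end{verbatim}

Each column of the returned arrays (one retarded time, all radii) is
then fitted to the polynomials of Eq.~\eqref{eq:ExtrapolationFormula}.
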
We show the results of these extrapolations in the figures below.
Figs.~\ref{fig:LapseCorrectionComparison_Corr_Amp}
through~\ref{fig:LapseCorrectionComparison_NoCorrMatched} show
convergence plots for extrapolations using orders $N=1$--$5$.  The
first two figures show the relative amplitude and phase difference
between successive orders of extrapolation, using the corrected time
of Eq.~\eqref{eq:DynamicLapseCorrection}.  Here, we define
\begin{subequations}
  \begin{gather}
    \label{eq:RelativeAmplitudeDifferenceDefinition}
    \frac{\delta A}{A} \equiv \frac{A_{N_{a}} - A_{N_{b}}} {A_{N_{b}}}
    \\ \intertext{and}
    \label{eq:PhaseDifferenceDefinition}
    \delta \phi \equiv \phi_{N_{a}} - \phi_{N_{b}}\ .
  \end{gather}
\end{subequations}
When comparing waveforms extrapolated by polynomials of different
orders, we use $N_{b}=N_{a}+1$.  Note that the broad trend is toward
convergence, though high-frequency noise is more evident as the order
increases, as we discuss further in the next subsection.  The peak
amplitude of the waves occurs at time $\ensuremath{t_{\text{ret}}}/\ensuremath{M_{\text{irr}}} \approx 3954$.  Note
that the scale of the horizontal axis changes just before this time to
better show the merger/ringdown portion.  After this time, we see that
the extrapolation is no longer convergent, with differences increasing
slightly as the order of the extrapolating polynomial is increased.
The oscillations we see in these convergence plots have a frequency
equal to the frequency of the waves themselves.  Their origin is not
clear, but may be due to numerics, gauge, or other effects that
violate our assumptions about the outgoing-wave nature of the data.
It is also possible that there are simply no higher-order effects to
be extrapolated, so low-order extrapolation suffices.
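Given waveforms extrapolated at successive orders on a common
retarded-time grid, these diagnostics are immediate to compute; a
minimal sketch:

\begin{verbatim}
import numpy as np

def convergence_diffs(A, phi):
    """Differences between successive extrapolation orders.

    A, phi : arrays of shape (n_orders, n_times); row N-1
             holds the waveform extrapolated at order N.
    Returns (A_N - A_{N+1})/A_{N+1} and phi_N - phi_{N+1}.
    """
    dA_over_A = (A[:-1] - A[1:]) / A[1:]
    dphi = phi[:-1] - phi[1:]
    return dA_over_A, dphi
\end{verbatim}
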
Figure~\ref{fig:LapseCorrectionComparison_NoCorrMatched} shows the
same data as in Fig.~\ref{fig:LapseCorrectionComparison_Corr}, except
that no correction is used for dynamic lapse.  That is, for this
figure (and only this figure), we use $\ensuremath{t_{\text{ret}}} \equiv T - \ensuremath{r_{\ast}}$, where $T$
is simply the coordinate time.  This demonstrates the need for
improved time-retardation methods after merger.  Note that the
extrapolated data during the long inspiral is virtually unchanged
(note the different vertical axes).  After the merger---occurring at
roughly $\ensuremath{t_{\text{ret}}}/\ensuremath{M_{\text{irr}}} = 3954$---there is no convergence when no
correction is made for dynamic lapse.  It is precisely the merger and
ringdown segment during which extreme gauge changes are present in the
data used here~\cite{ScheelEtAl2009}.  On the other hand, the fair
convergence of the corrected waveforms indicates that it is possible
to successfully remove these gauge effects.

\begin{figure}
  \includegraphics[width=\linewidth]{Fig3}
  \caption{\CapName{Convergence of the amplitude of the extrapolated
      $\Psi_{4}$, with increasing order of the extrapolating
      polynomial, $N$} This figure shows the convergence of the
    relative amplitude of the extrapolated Newman--Penrose waveform,
    as the order $N$ of the extrapolating polynomial is increased.
    (See Eq.~\eqref{eq:ExtrapolationFormula}.)  That is, we subtract
    the amplitudes of the two waveforms, and normalize at each time by
    the amplitude of the second waveform.  We see that increasing the
    order tends to amplify the apparent noise during the early and
    late parts of the waveform.  Nonetheless, the broad
    (low-frequency) trend is towards convergence.  Note that the
    differences decrease as the system nears merger; this is a first
    indication that the extrapolated effects are due to near-field
    influences.  Also note that the horizontal axis changes in the
    right part of the figure, which shows the point of merger, and the
    ringdown portion of the waveform.  After the merger, the
    extrapolation is nonconvergent, though the differences grow slowly
    with the order of extrapolation.}
  \label{fig:LapseCorrectionComparison_Corr_Amp}
\end{figure}

\begin{figure}
  \includegraphics[width=\linewidth]{Fig4}
  \caption{\CapName{Convergence of the phase of the extrapolated
      $\Psi_{4}$, with increasing order of the extrapolating
      polynomial, $N$} This figure is the same as
    Fig.~\ref{fig:LapseCorrectionComparison_Corr_Amp}, except that it
    shows the convergence of phase.  Again, increasing the
    extrapolation order tends to amplify the noise during the early
    and late parts of the waveform, though the broad (low-frequency)
    trend is towards convergence.  The horizontal-axis scale changes
    just before merger.}
  \label{fig:LapseCorrectionComparison_Corr}
\end{figure}

\begin{figure}
  \includegraphics[width=\linewidth]{Fig5}
  \caption{\CapName{Convergence of the phase of $\Psi_{4}$,
      extrapolated with no correction for the dynamic lapse} This
    figure is the same as
    Fig.~\ref{fig:LapseCorrectionComparison_Corr}, except that no
    correction is made to account for the dynamic lapse.  (See
    Eq.~\eqref{eq:DynamicLapseCorrection} and surrounding discussion.)
    Observe that the convergence is very poor after merger (at roughly
    $\ensuremath{t_{\text{ret}}}/\ensuremath{M_{\text{irr}}} = 3954$).  This corresponds to the time after which
    sharp features in the lapse are observed.  We conclude from this
    graph and comparison with the previous graph that the correction
    is crucial to convergence of $\Psi_{4}$ extrapolation through
    merger and ringdown.}
  \label{fig:LapseCorrectionComparison_NoCorrMatched}
\end{figure}


\subsection{Choosing the order of extrapolation}
\label{sec:ExtrapolationOrder} %
Deciding on an appropriate order of extrapolation to be used for a
given purpose requires balancing competing effects.  As we see in
Fig.~\ref{fig:LapseCorrectionComparison_Corr_Amp}, for example, there
is evidently some benefit to be gained from using higher-order
extrapolation during the inspiral; there is clearly some convergence
during inspiral for each of the orders shown.  On the other hand,
higher-order methods amplify the apparent noise in the
waveform.\footnote{So-called ``junk radiation'' is a ubiquitous
  feature of initial data for current numerical simulations of binary
  black-hole systems.  It is clearly evident in simulations as
  large-amplitude, high-frequency waves that die out as the simulation
  progresses.  While it is astrophysically extraneous, it is
  nevertheless a correct result of evolution from the initial data.
  Better initial data would, presumably, decrease its magnitude.  This
  is the source of what looks like noise in the waveforms at early
  times.  It is less apparent in $\ensuremath{h}$ data than in $\Psi_{4}$ data
  because $\Psi_{4}$ effectively amplifies high-frequency components,
  owing to the relation $\Psi_{4} \approx -\ddot{h}$.} %
Moreover, late in the inspiral, and on into the merger and ringdown,
the effects being extrapolated may be present only at low orders;
increasing the extrapolation order would be useless as higher-order
terms would simply be fitting to noise.

The optimal order depends on the accuracy needed, and on the size of
effects that need to be eliminated from the data.
For some
applications, little accuracy is needed, so a low-order extrapolation
(or even no extrapolation) is preferable.\footnote{We note that---as
  expected from investigations of near-field effects
  \cite{Teukolsky1982, Boyle2008, Boyle2009}---the second-order
  behavior of the amplitude greatly dominates its first-order behavior
  \cite{HannamEtAl2008}.  Thus, there is no improvement to the
  accuracy \emph{of the amplitude} when extrapolating with $N=1$; it
  would be better to simply use the data from the largest extraction
  radius.}  If high-frequency noise is not considered a problem, then
simple high-order extrapolation should suffice.  Of course, if both
high accuracy and low noise are required, data may easily be filtered,
mitigating the problem of noise amplification.  (See the appendix for
more discussion.)  There is some concern that this may introduce
subtle inaccuracies: filtering is more art than science, and it is
difficult to establish precise error bars for filtered data.


\subsection{Choosing extraction radii}
\label{sec:ExtractionRadii} %

Another decision needs to be made regarding the number and location of
extraction surfaces.  Choosing the number of surfaces is fairly easy,
because there is typically little cost in increasing the number of
extraction radii (especially relative to the cost of---say---running a
simulation).  The only restriction is that the number of data points
needs to be significantly larger than the order of the extrapolating
polynomial; more can hardly hurt.  More careful consideration needs to
be given to the \emph{location} of the extraction surfaces.

For the extrapolations shown in
Figs.~\ref{fig:LapseCorrectionComparison_Corr_Amp}
and~\ref{fig:LapseCorrectionComparison_Corr}, data was extracted on
spheres spaced by $10$ to $15\Mirr$, from $R=75\Mirr$ to
$R=225\Mirr$.  The outer radius of $225\Mirr$ was chosen simply
because this is the largest radius at which data exists throughout the
simulation; presumably, we always want the outermost radii at which
the data are resolved.  In choosing the inner radius, there are two
competing considerations.

\begin{figure}
  \includegraphics[width=\linewidth]{Fig6}
  \caption{\CapName{Comparison of extrapolation of $\Psi_{4}$ using
      different sets of extraction radii} This figure compares the
    phase of waveforms extrapolated with various sets of radii.  All
    comparisons are with respect to the data set used elsewhere in
    this paper, which uses extraction radii $R/\Mirr = \{75, 85,
    100, 110, 120, \ldots, 200, 210, 225\}$.  The order of the
    extrapolating polynomial is $N=3$ in all cases.}
  \label{fig:RadiiComparison}
\end{figure}

On one hand, we want the largest spread possible between the inner and
outer extraction radii to stabilize the extrapolation.  A very rough
rule of thumb seems to be that the distance to be extrapolated should
be no greater than the distance covered by the data.
Because the
extrapolating polynomial is a function of $1/R$, the distance to be
extrapolated is $1/R_{\text{outer}} - 1/\infty = 1/R_{\text{outer}}$.
The distance covered by the data is $1/R_{\text{inner}} -
1/R_{\text{outer}}$, so if the rule of thumb is to be satisfied, the
inner extraction radius should be no more than half of the outer
extraction radius, $R_{\text{inner}} \lesssim R_{\text{outer}}/2$
(noting, of course, that this is a \emph{very} rough rule of thumb).

On the other hand, we would like the inner extraction radius to be as
far out as possible.  Extracting data near the violent center of the
simulation is a bad idea for many reasons.  Coordinate ambiguity,
tetrad errors, near-field effects---all are more severe near the
center of the simulation.  The larger these errors are, the more work
the extrapolation needs to do.  This effectively means that
higher-order extrapolation is needed if data are extracted at small
radii.  The exact inner radius needed for extrapolation depends on the
desired accuracy and, again, the portion of the simulation from which
the waveform is needed.

We can compare data extrapolated using different sets of radii.
Figure~\ref{fig:RadiiComparison} shows a variety, compared to the data
used elsewhere in this paper.  The extrapolation order is $N=3$ in all
cases.  Note that the waveforms labeled $R/\Mirr = \{50, \ldots,
100\}$ and $R/\Mirr = \{100, \ldots, 225\}$ both satisfy the rule
of thumb that the inner radius should be at most half of the outer
radius, while the other two waveforms do not; it appears that
violation of the rule of thumb leads to greater sensitivity to noise.
One waveform is extrapolated using only data from small radii,
$R/\Mirr = \{50, \ldots, 100\}$.  It is clearly not converged, and
would require higher-order extrapolation if greater accuracy is
needed.  The source of the difference is presumably the near-field
effect~\cite{Boyle2008}, which is proportionally larger at small
radii.

Clearly, there is a nontrivial interplay between the radii used for
extraction and the order of extrapolation.  Indeed, because of the
time-dependence of the various elements of these choices, it may be
advisable to use different radii and orders of extrapolation for
different time portions of the waveform.  The different portions could
then be joined together using any of various
methods~\cite{AjithEtAl2008, BoyleEtAl2009a}.


\section{Extrapolation using the phase of the waveform}
\label{sec:PhaseExtrapolation}

While the tortoise-coordinate method just described attempts to
compensate for nontrivial gauge perturbations, it is possible that it
does not take account of all effects adequately.  As an independent
check, we discuss what is essentially a second---very
different---formulation of the retarded-time parameter, similar to one
first introduced in Ref.~\cite{HannamEtAl2008}.  If waves extrapolated
with the two different methods agree, then we can be reasonably
confident that unmodeled gauge effects are not diminishing the
accuracy of the final result.  As we will explain below, the method in
this section cannot be used naively with general data (\foreign{e.g.}\xspace, data on
the equatorial plane).  In particular, we must assume that the data to
be extrapolated consists of a strictly monotonic phase.
It is,
however, frequently possible to employ a simple technique to make
purely real, oscillating data into complex data with strictly
monotonic phase, as we describe below.  The results of this technique
agree with those of the tortoise-coordinate extrapolation, as we show
in Sec.~\ref{sec:Comparison}.

Instead of extrapolating the wave phase $\phi$ and amplitude $A$ as
functions of time and radius, we extrapolate the time $\ensuremath{t_{\text{ret}}}$ and the
amplitude $A$ as functions of wave phase $\phi$ and radius $\ensuremath{r_{\text{areal}}}$.  In
other words, we measure the amplitude and the arrival time at some
radius $\ensuremath{r_{\text{areal}}}$ of a fixed phase point in the waveform.  This is the
origin of the requirement that the data to be extrapolated consist of
a strictly monotonic phase $\phi(\ensuremath{t_{\text{ret}}}, \ensuremath{r_{\text{areal}}})$ (\foreign{i.e.}\xspace, it must be
invertible).  For the data presented here, the presence of radiation
in the initial data---junk radiation---and numerical noise cause the
extracted waveforms to fail to satisfy this requirement at early
times.  In this case, the extrapolation is performed separately for
each invertible portion of the data.  That is, the data are divided
into invertible segments, each segment is extrapolated separately, and
the final products are joined together as a single waveform.


\subsection{Description of the method}
\label{sec:PhaseDescription} %
This extrapolation technique consists of extrapolating the retarded
time and the amplitude as functions of the wave phase $\phi$ and the
radius $\ensuremath{r_{\text{areal}}}$.  In other words, when extrapolating the waveform, we are
estimating the amplitude and the arrival time of a fixed phase point
at infinity.  Here, we extract the same $\Psi_{4}$, $g^{TT}$, and
areal-radius data used in the previous section.  As in the previous
method, we first shift each waveform in time using $\ensuremath{t_{\text{ret}}} = \ensuremath{t_{\text{corr}}} -
\ensuremath{r_{\ast}}$, where $\ensuremath{t_{\text{corr}}}$ is defined in
Eq.~\eqref{eq:DynamicLapseCorrection} and the basic tortoise
coordinate $\ensuremath{r_{\ast}}$ is defined in Eq.~\eqref{eq:TortoiseCoordinate} with
areal radius as the radial parameter.  The amplitude and wave phase
are again defined using Eq.~\eqref{eq:AmplitudeAndPhaseDefinition},
and the phase is made continuous as in Sec.~\ref{sec:Application}.
Thus, we begin with the same data, shifted as with the
tortoise-coordinate method.

Now, however, we change the method, in an attempt to allow for
unmodeled effects.  Instead of extrapolating $\phi(\ensuremath{t_{\text{ret}}}, \ensuremath{r_{\text{areal}}})$ and
$A(\ensuremath{t_{\text{ret}}}, \ensuremath{r_{\text{areal}}})$, as with the previous method, we invert these functions
to get $\ensuremath{t_{\text{ret}}}(\phi,\ensuremath{r_{\text{areal}}})$ and $A(\phi,\ensuremath{r_{\text{areal}}})$ as functions of the wave
phase $\phi$.  In other words, we extrapolate the arrival time and the
amplitude of a signal to a coordinate radius $R$ for each wave phase
value.
This is done by fitting the retarded time $\ensuremath{t_{\text{ret}}}$ and the
amplitude $A$ data to polynomials in $1/\ensuremath{r_{\text{areal}}}$ for each value of the
wave phase:
\begin{subequations}
  \label{eq:ExtrapolationFormulaMethod2}
  \begin{align}
    \label{eq:AmplitudeExtrapolationMethod2}
    A(\ensuremath{r_{\text{areal}}}, \phi) &\simeq \sum^N_{k=0}\frac{A_{(k)}(\phi)}{\ensuremath{r_{\text{areal}}}^{k}}
    \ , \\
    \label{eq:PhaseExtrapolationMethod2}
    t(\ensuremath{r_{\text{areal}}}, \phi) &\simeq \ensuremath{r_{\ast}} + \sum^N_{k=0}\frac{t_{(k)}(\phi)}{\ensuremath{r_{\text{areal}}}^k}
    \ ,
  \end{align}
\end{subequations}
where the asymptotic waveform is fully described by $A_{(0)}(\phi)$
and $t_{(0)}(\phi)$.

With this data in hand, we can produce the asymptotic amplitude and
phase as functions of time by plotting curves in the \mbox{$t$--$A$}
and \mbox{$t$--$\phi$} planes parametrized by the phase.  In order to
be true, single-valued functions, we again need monotonicity of the
$t_{(0)}(\phi)$ data, which may be violated by extrapolation.  The
usable data can be obtained simply by removing data from times before
which this condition holds.

Choosing the extraction radii and extrapolation order for this method
follows the same rough recommendations described in
Secs.~\ref{sec:ExtrapolationOrder} and~\ref{sec:ExtractionRadii}.
Note also that the restriction that the data have an invertible phase
as a function of time is not insurmountable.  For example, data for
$\Psi_{4}$ in the equatorial plane is purely real, hence has a phase
that simply jumps from $0$ to $\pi$ discontinuously.  However, we can
define a new quantity
\begin{equation}
  w(t) \equiv \Psi_{4}(t) + \i \dot{\Psi}_{4}(t)\ .
\end{equation}
This is simply an auxiliary quantity used for the extrapolation, with
a smoothly varying, invertible phase.  The imaginary part is discarded
after extrapolation.
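A minimal Python sketch of this procedure (under the same assumptions
as before, with strictly increasing phase on each sphere and
hypothetical variable names) inverts each phase function by
interpolation onto a common phase grid and then fits in $1/\ensuremath{r_{\text{areal}}}$
exactly as in the previous method:

\begin{verbatim}
import numpy as np

def extrapolate_vs_phase(t_ret, A, phi, r_areal, N, n_phi=4096):
    """Extrapolate arrival time and amplitude at fixed phase.

    t_ret, A, phi : lists over extraction spheres of time
                    series; phi must be strictly increasing.
    r_areal       : representative areal radius of each sphere.
    N             : order of the extrapolating polynomial.
    """
    # Common phase grid covered by every sphere.
    phi_grid = np.linspace(max(p[0] for p in phi),
                           min(p[-1] for p in phi), n_phi)
    # Invert phi(t_ret) by interpolation: t_ret(phi), A(phi).
    t_of_phi = [np.interp(phi_grid, p, t)
                for p, t in zip(phi, t_ret)]
    A_of_phi = [np.interp(phi_grid, p, a)
                for p, a in zip(phi, A)]
    # Least-squares fits in 1/r at each phase value; the
    # constant terms are the asymptotic t_(0) and A_(0).
    x = 1.0 / np.asarray(r_areal)
    t0 = np.polyfit(x, np.array(t_of_phi), N)[-1]
    A0 = np.polyfit(x, np.array(A_of_phi), N)[-1]
    return phi_grid, t0, A0
\end{verbatim}

The pair $(t_{(0)}(\phi), A_{(0)}(\phi))$ then traces out the
asymptotic waveform parametrically, as described above.  Note that
fitting $\ensuremath{t_{\text{ret}}} = t - \ensuremath{r_{\ast}}$ directly, as in this sketch, is equivalent
to the fit of Eq.~\eqref{eq:PhaseExtrapolationMethod2}.
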
However, the noise is amplified significantly at large orders of extrapolation: compared to the tortoise-coordinate method, this method amplifies high-frequency noise considerably.

\begin{figure}
  
  \includegraphics[width=\linewidth]{Fig8}
  \caption{\CapName{Convergence of the phase of $\Psi_{4}$ as a function of time extrapolated using the wave phase, with increasing order $N$ of the extrapolating polynomial} Again, increasing the extrapolation order tends to amplify the apparent noise during the early and late parts of the waveform, though convergence is improved significantly.}
  \label{fig:PhaseComparisonMethod2}
\end{figure}

In the inspiral portion, the extrapolation error in the phase and the amplitude decreases as the wavelength of the gravitational waves decreases. In the merger/ringdown portion, a more careful choice of the radii and order of extrapolation needs to be made. Since near-field effects are less significant in the data extracted at larger radii, extrapolation at low order ($N=2,3$) seems sufficient. Data extrapolated at large order ($N=4,5$) have a larger error in the phase and amplitude after merger than data extrapolated at order $N=2$ or $3$. Moreover, the outermost extraction radius could be reduced, say to $R_{\text{outer}} / \ensuremath{M_{\text{irr}}} = 165$ instead of $R_{\text{outer}} / \ensuremath{M_{\text{irr}}} = 225$, without incurring large extrapolation error at late times. Using the radius range $R/\ensuremath{M_{\text{irr}}}={75,\ldots,160}$ instead of the range $R/ \ensuremath{M_{\text{irr}}} = {75,\ldots,225}$ would leave the extrapolation error during the merger/ringdown almost unchanged, while the extrapolation error during the inspiral would increase by about 70\%.

We note that this method allows easy extrapolation of various portions of the waveform using different extraction radii and orders since---by construction---the wave phase is an independent variable. For example, one could solve for the phase value of the merger $\phi_{\text{merger}}$ (defined as the phase at which the amplitude is a maximum), then use the radius range $R / \ensuremath{M_{\text{irr}}} = {75,\ldots,225}$ for all phase values less than $\phi_{\text{merger}}$ and the range $R/\ensuremath{M_{\text{irr}}}={75,\ldots,160}$ for all larger phase values.

This method has also been tested using the coordinate radius $R$ and the naive time coordinate $T$, in place of areal radius and corrected time. We found results similar to those discussed in Sec.~\ref{sec:TortoiseExtrapolation}. Using the new time coordinate $\ensuremath{t_{\text{corr}}}$ instead of the naive time coordinate $T$ improved the extrapolation during the merger/ringdown, as found in Sec.~\ref{sec:TortoiseExtrapolation}.

As with the previous extrapolation method, increasing the extrapolation order gives a faster convergence rate of waveform phase and amplitude, but it amplifies noise in the extrapolated waveform. To improve convergence without increasing the noise, we need a good filtering technique for the inspiral data. The junk-radiation noise decreases significantly as a function of time, disappearing several orbits before merger. In addition, this noise could be reduced by using more extraction radii in the extrapolation process, or by running the data through a low-pass filter.
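As an illustration of the kind of filtering we have in mind (and not the procedure used to produce the results in this paper), the following Python sketch applies a zero-phase low-pass Butterworth filter to a toy complex waveform; the sample spacing, cutoff frequency, and signal are arbitrary stand-ins.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

dt = 0.1                                  # sample spacing, in units of M
t = np.arange(0.0, 500.0, dt)
phase = 0.02 * t + 1.0e-5 * t**2          # slowly chirping toy phase
psi4 = np.exp(1j * phase) \
       + 0.01 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

def lowpass(x, dt, f_cut, order=4):
    # filtfilt runs the filter forward and backward, so the smoothing
    # introduces no spurious time (i.e., phase) shift in the waveform
    b, a = butter(order, 2.0 * f_cut * dt)  # cutoff as fraction of Nyquist
    return filtfilt(b, a, x)

# filter real and imaginary parts separately, at each extraction radius,
# with the cutoff placed well above the highest instantaneous frequency
f_cut = 0.05
psi4_smooth = lowpass(psi4.real, dt, f_cut) \
              + 1j * lowpass(psi4.imag, dt, f_cut)
\end{verbatim}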
See the appendix for further discussion of filtering.

\section{Comparison of the two methods}
\label{sec:Comparison}

Both methods show good convergence of the amplitude and the phase of the waveform as $N$ increases in the inspiral portion. (See Figs.~\ref{fig:LapseCorrectionComparison_Corr_Amp} and~\ref{fig:AmplitudeComparisonMethod2} for the amplitude, and Figs.~\ref{fig:LapseCorrectionComparison_Corr} and~\ref{fig:PhaseComparisonMethod2} for the phase.) The wave-phase extrapolation method is more sensitive to noise. In the merger/ringdown portion, both methods have similar convergence as $N$ increases, especially when the correction for the dynamic lapse is taken into account. The use of the time parameter $\ensuremath{t_{\text{corr}}}$ improved the agreement between the methods significantly in the merger/ringdown portion for all extrapolation orders. Extrapolating at order $N=2$ or $3$ seems the best choice, as the noise and phase differences are smallest for these values.

In Fig.~\ref{fig:MroueBoyleRelativeAmplitudeDifferenceByOrder}, we show the relative amplitude difference between data extrapolated with the two methods at various orders ($N=1,\ldots,5$). There is no additional time or phase offset used in the comparison. Ignoring high-frequency components, the difference in the relative amplitude is always less than 0.3\% for all extrapolation orders. Even including high-frequency components, the differences between the two methods are always smaller than the error in each method, as judged by convergence plots. In Fig.~\ref{fig:MroueBoylePhaseDifferenceByOrder}, we show the \emph{phase} difference between the data extrapolated using both methods. As in the relative amplitude-difference plots, the best agreement is achieved during the inspiral portion. Ignoring high-frequency components, the difference is less than 0.02 radians for all orders. In the merger/ringdown portion, the best agreement between extrapolated waveforms is at order $N=2$ or $3$, where the phase difference is less than 0.01 radians.

\begin{figure}
  
  \includegraphics[width=\linewidth]{Fig9}
  \caption{\CapName{Relative difference in the amplitude of the two extrapolation methods as we increase the order of extrapolation} The best agreement between both methods is at low orders of extrapolation, for which the relative difference in the amplitude is less than 0.1\% during most of the evolution.}
  \label{fig:MroueBoyleRelativeAmplitudeDifferenceByOrder}
\end{figure}

\begin{figure}
  
  \includegraphics[width=\linewidth]{Fig10}
  \caption{\CapName{Phase difference of the two extrapolation methods as we increase the order of extrapolation} This figure shows the phase difference between waveforms extrapolated using each of the two methods. The best agreement between the methods is at orders $N=2$ and $3$. The relative difference in the phase is less than 0.02 radians during most of the evolution.}
  \label{fig:MroueBoylePhaseDifferenceByOrder}
\end{figure}

\section{Conclusions}
\label{sec:Conclusions}

We have demonstrated two simple techniques for extrapolating gravitational-wave data from numerical-relativity simulations. We took certain basic gauge information into account to improve convergence of the extrapolation during times of particularly dynamic gauge, and showed that the two methods agree to within rough error estimates.
We have determined that the first method presented here is less sensitive to noise and more immediately applicable to arbitrary wavelike data; this method has become the basic standard in use by the Caltech--Cornell collaboration. In both cases, there were problems with convergence after merger. The source of these problems is still unclear, but will be a topic for further investigation.

As with any type of extrapolation, a note of caution is in order. It is entirely possible that the ``true'' function being extrapolated bears little resemblance to the approximating function we choose, outside of the domain on which we have data. We may, however, have reason to believe that the true function takes a certain form. If the data in question are generated by a homogeneous wave equation, for instance, we know that well-behaved solutions fall off in powers of $1/r$. In any case, there is a certain element of faith in assuming that extrapolation is a reasonable thing to do. While that faith may be misplaced, there are methods of checking whether or not it is: goodness-of-fit statistics, error estimates, and convergence tests. To be of greatest use, goodness-of-fit statistics and error estimates for the output waveform require error estimates for the input waveforms. We leave this for future work.

We still do not know the correct answers to the questions numerical relativity considers. We have no analytic solutions to deliver the waveform that Einstein's equations---solved perfectly---predict would come from a black-hole binary merger; or the precise amount of energy radiated from any given binary; or the exact kick or spin of the final black hole. Without being able to compare numerical relativity to exact solutions, we may be leaving large systematic errors hidden in plain view. To eliminate them, we need to use multiple, independent methods for our calculations. For example, we might extract $\Psi_4$ directly by calculating the Riemann tensor and contracting appropriately with our naive coordinate tetrad, and extract the metric perturbation using the formalism of Regge--Wheeler--Zerilli and Moncrief. By differentiating the latter result twice and comparing to $\Psi_4$, we could verify that details of the extraction methods are not producing systematic errors. (Just such a comparison was done in Ref.~\cite{BuonannoEtAl2009a} for waveforms extrapolated using the technique in this paper.) Nonetheless, it is possible that the infrastructure used to compute both could be leading to errors.

In the same way, simulations need to be performed using different gauge conditions, numerical techniques, code infrastructures, boundary conditions, and even different extrapolation methods. Only when multiple schemes arrive at the same result can we be truly confident in any of them. But to arrive at the same result, the waveforms from each scheme need to be processed as carefully as possible. We have shown that extrapolation is crucial for highly accurate gravitational waveforms, and for optimized detection of mergers in detector data.

\begin{acknowledgments}
  We thank Emanuele Berti, Duncan Brown, Luisa Buchman, Alessandra Buonanno, Yanbei Chen, \'{E}anna Flanagan, Mark Hannam, Sascha Husa, Luis Lehner, Geoffrey Lovelace, Andrea Nerozzi, Rob Owen, Larne Pekowsky, Harald Pfeiffer, Oliver Rinne, Uli Sperhake, B\'{e}la Szil\'{a}gyi, Kip Thorne, Manuel Tiglio, and Alan Weinstein for helpful discussions.
We especially thank Larry Kidder, Lee Lindblom, Mark Scheel, and Saul Teukolsky for careful readings of this paper in various draft forms and helpful comments. This research has been supported in part by a grant from the Sherman Fairchild Foundation to Caltech and Cornell; by NSF Grants PHY-0652952, PHY-0652929 and DMS-0553677 and NASA Grant NNX09AF96G to Cornell; and by NSF grants PHY-0601459, PHY-0652995, DMS-0553302 and NASA grant NNG05GG52G to Caltech.
\end{acknowledgments}

\section{introduction}
When we perform a quantum measurement and extract information from a system, the system is disturbed due to the backaction of the measurement. The more information we extract by a quantum measurement, the more strongly the system is disturbed. Such a tradeoff relation has been recognized since Heisenberg pointed out the uncertainty relation~\cite{He27} between error and disturbance. Since the 1990s, with the tremendous development of quantum information theory, various methods of quantifying the information and the disturbance have been proposed, and tradeoff relations have been shown based on these definitions~\cite{Fu96,Da03,Sa06,BS06,BHH08,CL12}. In most of these studies, the information and the disturbance are defined in an information-theoretic setting, i.e., decoding a message encoded in quantum states by a measurement.

In this paper, we formulate the information and the disturbance in the setting of estimating an unknown quantum state by quantum measurements, with an emphasis on the estimation accuracy. We derive an inequality that shows the tradeoff relation between them, and give a sufficient condition to achieve the equality. We also discuss the condition for divergences, which measure the distinguishability between two quantum states, to satisfy a similar inequality.

\section{information-disturbance relation based on estimation theory}
Suppose that we estimate an unknown quantum state corresponding to a density operator $\h\rho_{\bm\theta}$ by performing a quantum measurement. Here, $\bm\theta\in\Theta\subset\mathbb R^m$ represents $m$ real parameters that characterize the unknown state, so that estimating the state is equivalent to estimating the parameters. Such a parameterized family of states $\{\h\rho_{\bm\theta}\}_{\bm\theta\in\Theta}$ is called a quantum statistical model. A quantum measurement is characterized by a mapping from quantum states to the probability distribution of outcomes and the post-measurement state corresponding to each outcome. If the measurement outcome is discrete, the probability $p_{\bm\theta,i}$ of obtaining the outcome $i\in I$ and the post-measurement state $\h\rho_{\bm\theta,i}$ are respectively given by
\e{
p_{\bm\theta,i}&=\sum_j \tr{\h K_{ij}\h\rho_{\bm\theta}\h K_{ij}^\dagger}\\
\h\rho_{\bm\theta,i}&=\frac{1}{p_{\bm\theta,i}}\sum_j \h K_{ij}\h\rho_{\bm\theta}\h K_{ij}^\dagger,
}
where the measurement operators $\{\h K_{ij}\}$ satisfy the normalization condition $\sum_{i,j}\h K_{ij}^\dagger\h K_{ij}=\h I$.

What is the natural quantification of the information in the setting of estimating an unknown state from the outcome of a quantum measurement? The estimation process is characterized by a function $\bm\theta^{\rm est}:I\rightarrow \Theta$, which is called an estimator. Since the outcome $i\in I$ is a random variable, the estimator, which is calculated from the
outcome, is also a random variable, and should be distributed around the true parameter $\bm\theta$. According to the classical Cram\'er-Rao inequality, the variance-covariance matrix of a locally unbiased estimator has a lower bound that is determined only by the family of probability distributions $\{p_{\bm\theta}\}_{\bm\theta\in\Theta}$:
\e{
{\rm Var}_{\bm\theta}[\bm\theta^{\rm est}]\ge (J^C_{\bm\theta})^{-1}.
}
Here, $J^C_{\bm\theta}$ is an $m\times m$ matrix called the classical Fisher information, whose elements are defined as
\e{
[J^C_{\bm\theta}]_{ab}=\sum_i p_{\bm\theta,i} \pd{1}{\log p_{\bm\theta,i}}{\theta_a} \pd{1}{\log p_{\bm\theta,i}}{\theta_b}. \label{info}
}
Therefore, a measurement with a larger classical Fisher information allows us to estimate the state more accurately. We define the classical Fisher information as the information obtained by the quantum measurement.

The disturbance can be evaluated as the loss of the information on the parameter $\bm\theta$ that can be extracted from the quantum state. The information on the parameter $\bm\theta$ that the quantum state potentially possesses can be quantified by the quantum Fisher information, which is defined as
\e{
[J^Q_{\bm\theta}]_{ab}:=\tr{\pd{1}{\h\rho_{\bm\theta}}{\theta_a}\bm K^{-1}_{\h\rho_{\bm\theta}} \pd{1}{\h\rho_{\bm\theta}}{\theta_b}}.
}
Here, the superoperator $\bm K_{\h\rho}$ is defined as
\e{
\bm K_{\h\rho}=\bm R_{\h\rho}f(\bm L_{\h\rho}\bm R_{\h\rho}^{-1}),
}
where $f:(0,\infty)\rightarrow (0,\infty)$ is an operator monotone function satisfying $f(1)=1$, and $\bm R_{\h\rho}$ ($\bm L_{\h\rho}$) is the right (left) multiplication of $\h\rho$:
\e{
\bm R_{\h\rho}(\h A)=\h A\h\rho, \ \bm L_{\h\rho}(\h A)=\h\rho\h A.
}
The quantum Fisher information metrics are the only metrics on the parameter space that monotonically decrease under arbitrary completely positive and trace-preserving (CPTP) mappings $\calE$~\cite{Pe96}:
\e{
J^Q_{\bm\theta}(\{\h\rho_{\bm\theta}\})\ge J^Q_{\bm\theta}(\{\calE(\h\rho_{\bm\theta})\}).
}
Owing to the monotonicity, the quantum Fisher information gives an upper bound on the classical Fisher information obtainable by any measurement, and the symmetric logarithmic derivative (SLD) Fisher information~\cite{He68}, which corresponds to $f(x)=\frac{1+x}{2}$, is known to be the least upper bound. Therefore, the quantum Fisher information, especially the SLD Fisher information, can be interpreted as the information on the parameter $\bm\theta$ that can be extracted from the quantum state. We define the disturbance of the measurement as
\e{
\Delta J^Q_{\bm \theta}:=J_{\bm \theta}^{Q}-\sum_{i}p_{\bm\theta,i} J_{i,\bm \theta}'^{Q},\label{dist}
}
where $J_{\bm \theta}^{Q}$ and $J_{i,\bm \theta}'^{Q}$ are the quantum Fisher information of the quantum statistical models $\{\h\rho_{\bm\theta}\}$ and $\{\h\rho_{\bm\theta,i}\}$, respectively. For the sake of generality, we consider the disturbance defined using a general quantum Fisher information.

The information~\eqref{info} and the disturbance~\eqref{dist} satisfy the following inequality, which shows a tradeoff relation between them:
\e{
J^C_{\bm\theta} \le \Delta J^Q_{\bm\theta}. \label{i-d}
}
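Before proceeding, it is instructive to check the inequality numerically. The following Python sketch does so for a one-parameter qubit model and a two-outcome pure measurement, with the disturbance evaluated using the SLD Fisher information; the model, the Bloch-vector length, and the measurement angles are arbitrary choices made for illustration.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(th, r0=0.8):
    # one-parameter qubit model: Bloch vector of length r0, rotated by th
    return 0.5 * (np.eye(2) + r0 * (np.sin(th) * sx + np.cos(th) * sz))

def sld_qfi(r, dr):
    # SLD Fisher information: solve dr = (r L + L r)/2, return tr(dr L)
    lam, U = np.linalg.eigh(r)
    d = U.conj().T @ dr @ U
    L = 2.0 * d / (lam[:, None] + lam[None, :])
    return float(np.real(np.trace(d @ L)))

def ddth(f, th, h=1.0e-6):
    # central finite difference in the parameter
    return (f(th + h) - f(th - h)) / (2.0 * h)

# pure two-outcome measurement with K1^dag K1 + K2^dag K2 = identity
K = [np.diag([np.cos(0.4), np.cos(0.9)]),
     np.diag([np.sin(0.4), np.sin(0.9)])]

th0 = 0.7
Jq = sld_qfi(rho(th0), ddth(rho, th0))

Jc, Jpost = 0.0, 0.0
for Ki in K:
    p = lambda th: float(np.real(np.trace(Ki @ rho(th) @ Ki.conj().T)))
    post = lambda th: Ki @ rho(th) @ Ki.conj().T / p(th)
    Jc += ddth(p, th0) ** 2 / p(th0)
    Jpost += p(th0) * sld_qfi(post(th0), ddth(post, th0))

print(Jc, "<=", Jq - Jpost)  # classical information <= disturbance
\end{verbatim}
The printed left-hand side is indeed smaller than the right-hand side; the measurement used here is in fact pure and reversible, a class to which we return below.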
This inequality means that if we perform a quantum measurement on an unknown state and extract information on the state, the state loses at least that much intrinsic information. Since the inequality~\eqref{i-d} is valid independently of the choice of the quantum Fisher information used to define the disturbance, we obtain
\e{
J^C_{\bm\theta} \le \inf_Q \Delta J^Q_{\bm\theta},
}
where the infimum is taken over all kinds of quantum Fisher information used to define the disturbance. We note that it is nontrivial which quantum Fisher information gives the minimum disturbance, though the minimum and the maximum of the quantum Fisher information are known to be the SLD Fisher information and the real right logarithmic derivative (real RLD) Fisher information, which corresponds to $f(x)=\frac{2x}{x+1}$, respectively.

The proof of the inequality~\eqref{i-d} is based on the monotonicity and the separating property of the quantum Fisher information. For a given measurement $\{\h K_{ij}\}$, we define a CPTP mapping $\calE^{\rm meas}$ as
\e{
\calE^{\rm meas}(\h\rho)=\bigoplus_i \left( \sum_j \h K_{ij}\h\rho\h K_{ij}^\dagger \right).
}
Then the quantum Fisher information of the quantum statistical model $\{\calE^{\rm meas}(\h\rho_{\bm\theta})\}$ is equal to the sum of the classical Fisher information obtained by the measurement and the average quantum Fisher information of the post-measurement states,
\e{
J^Q_{\bm\theta}(\{\calE^{\rm meas}(\h\rho_{\bm\theta})\})=J^C_{\bm\theta}(\{p_{\bm\theta}\})+\sum_{i}p_{\bm\theta,i} J_{i,\bm \theta}'^{Q}, \label{separating}
}
which we call the separating property (see Appendix~\ref{proofA} for the proof). By applying the monotonicity under the CPTP mapping $\calE^{\rm meas}$, we obtain
\e{
J^Q_{\bm\theta}(\{\h\rho_{\bm\theta}\})&\ge J^Q_{\bm\theta}(\{\calE^{\rm meas}(\h\rho_{\bm\theta})\}) \nonumber \\
&=J^C_{\bm\theta}(\{p_{\bm\theta}\})+\sum_{i}p_{\bm\theta,i} J_{i,\bm \theta}'^{Q},
}
which proves the inequality~\eqref{i-d}.

\section{condition for equality}
Measurements that achieve the equality in the inequality~\eqref{i-d} are efficient in the sense that they cause the minimum disturbance among those that extract a given amount of information. When we adopt the right logarithmic derivative (RLD) Fisher information~\cite{YL73}, which corresponds to $f(x)=x$, to define the disturbance, the class of pure and reversible measurements achieves the equality in the inequality~\eqref{i-d} (see Appendix~\ref{proofB} for the proof):
\e{
J^C_{\bm\theta}=\Delta J^{\rm RLD}_{\bm\theta}. \label{RLDequality}
}
Here, a measurement is called pure if the number of measurement operators is one for each measurement outcome, so that the measurement operators can be written as $\{\h K_i\}$, and is called reversible if each $\h K_i$ has an inverse operator $\h K_i^{-1}$~\cite{UIN96}.

As an example, a measurement on a spin-1/2 system proposed by Royer~\cite{Ro94} is pure, with measurement operators
\e{
\h K_1&=
\begin{pmatrix}
\cos(\theta/2-\sigma/4)&0\\
0&\cos(\theta/2+\sigma/4)
\end{pmatrix},\\
\h K_2&=
\begin{pmatrix}
\sin(\theta/2-\sigma/4)&0\\
0&\sin(\theta/2+\sigma/4)
\end{pmatrix}.
}
This measurement is reversible if $\theta/2\pm\sigma/4\ne n\pi/2$.

In fact, pure measurements are the least disturbing measurements in the following sense. Suppose that two
measurements $\\{\\h K'_{ij}\\}$ and $\\{\\h K_i\\}$ give the same positive operator-valued measure (POVM)\n\\e{\n\\sum_j \\h K_{ij}'^\\dagger\\h K'_{ij}=\\h K_i^\\dagger \\h K_i, \\quad\\forall i\\in I,\n}\nand hence give the same probability distribution.\nThen, the pure measurement causes less disturbance, and gives the same amount of information:\n\\e{\n\\Delta J^Q_{\\bm\\theta}(\\{\\h K_i\\})&\\le \\Delta J^Q_{\\bm\\theta} (\\{\\h K'_{ij}\\}),\\\\\nJ^C_{\\bm\\theta} (\\{\\h K_i\\})&=J^C_{\\bm\\theta}(\\{\\h K'_{ij}\\}).\n}\n\n\n\\section{information-disturbance relation based on distinguishability}\nIn Ref.~\\cite{BL05}, by using the classical and quantum relative entropies\n\\e{\nS^C(p\\|q)&=\\sum_i p_i\\log\\left( \\frac{p_i}{q_i} \\right),\\\\\nS^Q(\\h\\rho\\|\\h\\sigma)&=\\tr{\\h\\rho(\\log\\h\\rho-\\log\\h\\sigma)},\\label{q.rel.}\n}\na similar inequality to~\\eqref{i-d} was derived:\n\\e{\nS^C(p\\|q)\\le S^Q(\\h\\rho\\|\\h\\sigma)-\\sum_i p_i S^Q(\\h\\rho_i\\|\\h\\sigma_i),\\label{rel.ent}\n}\nwhere $p,q$ and $\\h\\rho_i,\\h\\sigma_i$ are the probability distributions and the post-measurement states of a quantum measurement performed on the quantum states $\\h\\rho, \\h\\sigma$, respectively.\nSince the quantum relative entropy is a measure of the distinguishability of two quantum states~\\cite{HP91,ON02}, Eq.~\\eqref{rel.ent} can also be interpreted as a tradeoff relation between information and disturbance.\nIn particular, if we choose two similar states $\\h\\rho_{\\bm\\theta}$ and $\\h\\rho_{{\\bm\\theta}+{\\rm d}{\\bm\\theta}}$ as the arguments of the relative entropy, Eq.~\\eqref{rel.ent} reproduces the inequality~\\eqref{i-d} with the disturbance defined by the Bogoliubov-Kubo-Mori (BKM) Fisher information, which corresponds to $f(x)=\\frac{x-1}{\\log x}$.\n\nIn the following, we discuss the extension of the inequality~\\eqref{rel.ent} to general divergences.\nLet $D^C(\\cdot\\|\\cdot)$ be a divergence between two probability distributions, and $D^Q(\\cdot\\|\\cdot)$ be its quantum extension, i.e.,\nif two quantum states $\\h\\rho,\\h\\sigma$ commute and therefore are simultaneously diagonalizable as $\\h\\rho=\\sum_ip_i\\ket i\\bra i, \\h\\sigma=\\sum_iq_i\\ket i\\bra i$, we obtain\n\\e{\nD^Q(\\h\\rho\\|\\h\\sigma)=D^C(p\\|q).\n}\nWe note that quantum extensions of a divergence is not unique in general.\n\nLet us consider a condition for divergences $D^C(\\cdot\\|\\cdot),D^Q(\\cdot\\|\\cdot)$ to satisfy the information-disturbance tradeoff relation\n\\e{\nD^C(p\\|q)\\le D^Q(\\h\\rho\\|\\h\\sigma)-\\sum_i p_i D^Q(\\h\\rho_i\\|\\h\\sigma_i).\\label{divergence}\n}\nThe essential properties for the proof of the inequality~\\eqref{i-d} are the monotonicity and the separating property of the quantum Fisher information.\nIn the same way, we require these two properties for divergences:\n\\e{\nD^Q(\\h\\rho\\|\\h\\sigma)&\\ge D^Q(\\calE(\\h\\rho)\\|\\calE(\\h\\sigma)),\\\\\nD^Q(\\calE^{\\rm meas}(\\h\\rho)\\|\\calE^{\\rm meas}(\\h\\sigma))&=D^C(p\\|q)+\\sum_i p_iD^Q(\\h\\rho_i\\|\\h\\sigma_i).\n}\nIf we also require a continuity of $D^C(p\\| q)$ with respect to $p,q$, then it satisfies Hobson's five conditions that axiomatically characterize the classical relative entropy~\\cite{Ho69}.\nTherefore, the divergence with the monotonicity, the separating property, and the continuity must be consistent with the relative entropy at least for classical probability distributions:\n\\e{\nD^C(p\\|q)=S^C(p\\|q).\n}\nAs is shown in~\\cite{BL05}, the well-known quantum relative entropy 
As is shown in~\cite{BL05}, the well-known quantum relative entropy satisfies the tradeoff relation~\eqref{divergence} because it has the monotonicity and the separating property. Another quantum extension of the relative entropy, proposed by Belavkin and Staszewski~\cite{BS82},
\e{
S^{\rm BS}(\h\rho\|\h\sigma)=\tr{\h\rho\log(\h\rho^{1/2}\h\sigma^{-1}\h\rho^{1/2})}, \label{BS.rel.}
}
also satisfies the inequality~\eqref{divergence}. Here, $S^{\rm BS}(\cdot\|\cdot)$ is known to be maximal among all the possible quantum extensions of the classical relative entropy~\cite{Ma13}. By substituting $\h\rho_{\bm\theta}$ and $\h\rho_{{\bm\theta}+{\rm d}{\bm\theta}}$ into the inequality~\eqref{divergence}, we again obtain the inequality~\eqref{i-d} with the disturbance defined by the real RLD Fisher information.

\section{conclusion}
In this paper, we have formulated the tradeoff relation between information and disturbance in quantum measurement in view of estimating parameters that characterize an unknown quantum state. The information is defined as the classical Fisher information of the probability distributions of measurement outcomes, and the disturbance is defined as the average loss of the quantum Fisher information due to the backaction of the measurement. We have shown the tradeoff relation~\eqref{i-d} between them. When we use the RLD Fisher information, the equality in the inequality~\eqref{i-d} is achieved by pure and reversible measurements. In fact, pure measurements are the least disturbing among those that provide us with a given amount of information.

We have also discussed a necessary condition for divergences between two quantum states to satisfy a similar tradeoff relation~\eqref{divergence}. It is necessary for such divergences to coincide with the relative entropy at least for classical probability distributions. In addition to the well-known relative entropy, the Belavkin-Staszewski relative entropy, which is the maximal quantum extension, also satisfies the tradeoff relation~\eqref{divergence} and reproduces the inequality~\eqref{i-d} for the disturbance defined by the real RLD Fisher information. If there exist quantum extensions of the relative entropy that reproduce the other quantum Fisher information metrics, another systematic derivation of the inequality~\eqref{i-d} should be possible.
\section*{acknowledgement}
This work was supported by KAKENHI Grant No. 26287088 from the Japan Society for the Promotion of Science, a Grant-in-Aid for Scientific Research on Innovative Areas ``Topological Quantum Phenomena'' (KAKENHI Grant No. 22103005), the Photon Frontier Network Program from MEXT of Japan, and the Mitsubishi Foundation. T. S. was supported by the Japan Society for the Promotion of Science through the Program for Leading Graduate Schools (ALPS). Y. K. acknowledges support from the Japan Society for the Promotion of Science (KAKENHI Grant No. 269905).

\section{INTRODUCTION}

Health anxiety is a significant problem in our modern medical system \cite{asmundson, taylor}. The belief that one's common and benign symptoms are explained by serious conditions may have several adverse effects, such as quality-of-life reduction, incorrect medical treatment, and inefficient allocation of medical resources. The Web has been shown to be a significant factor in fostering such attitudes \cite{whitehorvitzcyber, whitehorvitzhypo}. A recent study found that 35\% of U.S.
adults had used the Web to perform diagnosis of medical conditions either for themselves or on behalf of another person, and many ($>$50\%) pursued professional medical attention concerning their online diagnosis \cite{foxduggan}. Motivated by the popularity of online health search, we investigated how search engines might improve their health information offerings. We hypothesize that searchers are less likely to develop unrealistic beliefs when they are given unbiased and well-presented information about their medical state. Thus, we believe an ideal search engine should use queries, and available search histories, to extract medically relevant information about the individual as well as detect any health anxieties.

Let us give a (negative) example of a possible medical search session. A user experiencing anxiety about a headache might first spend some time searching for information on a serious condition such as ``brain tumor'' and then switch to a symptom query about headache (a type of transition that prior work shows occurs frequently \cite{cartright}). The user might choose the query wording ``severe headache explanations'' because of the subjective concern they experience at query time. The engine, registering the words ``severe'' and ``explanations'' as well as the phrase ``brain tumor'' present in the user's search history, might compile a search engine result page (SERP) that is biased towards serious conditions. The user, viewing the SERP through the lens of their current health anxiety, may be attracted towards serious conditions in captions \cite{whitehorvitzbias} and hence select a concerning page, heightening their anxiety further.

In this paper, we highlight a range of challenges and opportunities in working towards improving exploratory health search and thus hope to outline an agenda that frames this problem. Achieving this requires an understanding of both user and search engine behavior. Users play a role in the search process in two ways: their choice of query formulation as well as their subjective consumption of information on the SERP and pages clicked on. This naturally led us to formulate three research questions:
\begin{itemize*}
\item [{\bf Q1}] How does a user's subjective medical concern shape his or her choice of wording for medical queries?
\item [{\bf Q2}] How does the search engine interpret the subjective medical concern and objective medical information expressed in the query as well as other measurable characteristics (such as medical search history) when compiling the SERP?
\item [{\bf Q3}] How do users respond to the SERP, both in terms of the consumption of the information on the SERP as well as changes in future behavior caused by viewing the SERP and pages clicked on?
\end{itemize*}

Results pertaining to these questions are found in Sections 4, 5, and 6 respectively. To answer them, we studied the search logs of 190 thousand consenting users of a major commercial Web search engine. Search logs are a valuable resource for studying information seeking in a naturalistic setting, and such data has been used by several studies to explore how searchers obtain medical information \cite{whitehorvitzcyber, cartright}. The pipeline used to process this data and the features extracted for analysis are further described in Section 3.

The main contributions from our analysis are:

\begin{itemize*}
\item Revealing how certain users have specific preferences for certain query formulations (e.g., ``i have muscle pain'' vs.
``muscle pain'') which also has a significant effect on search results, and potentially health outcomes.
\item Finding evidence that users might not be swayed by concerning content appearing on SERPs as much as we might expect based on prior studies.
\item Quantifying the extent to which users with prior medical concern receive more concerning SERPs in response to health queries and choose the most concerning pages to click on, potentially leading to a vicious cycle of concern reinforcement.
\item Determining how users directly and indirectly influence the level of concern expressed in the SERP they receive, both through query choice and other factors (e.g., personalization).
\end{itemize*}

We discuss these findings and their implications in Section 8 and conclude in Section 9.

\section{RELATED WORK}

Related research in this area falls into three main categories: health search behavior; the quality of online health content; and health anxiety and the impact of reviewing health content on the Web.

There continues to be interest in search and retrieval studies on expert and consumer populations in a variety of domains, typically conducted as laboratory studies of search behavior \cite{bhavnani, hershhickam}. Benigeri and Pluye \cite{benigeripluye} showed that exposing novices to complex medical terminology puts them at risk of harm from self-diagnosis and self-treatment. It is such consumer searching (rather than expert searching) that we focus on in the remainder of this section.

Search engine log data can complement laboratory studies, allowing search behavior to be analyzed at scale in a naturalistic setting and mined for a variety of purposes. Logs have been used to study how people search \cite{jansen}, predict future search and browsing activity \cite{lauhorvitz}, model future interests \cite{dupretpiwowarski}, improve search engine quality \cite{joachims}, and learn about the world \cite{richardson}. Focusing on how people perform exploratory health searching, Cartright et al. \cite{cartright} studied differences in search behaviors associated with diagnosis versus more general health-related information seeking. Ayers and Kronenfeld \cite{ayerskronenfeld} explored changes in health behavior associated with Web usage, and found a positive correlation between such usage and the likelihood that a user will change their health behavior based on the content viewed.

The reliability of the information in search results is important in our study; unreliable information can drive anxiety. The quality of online healthcare information has been subject to recent scrutiny. Lewis \cite{lewis} discussed the trend toward accessing information about health matters online and showed that young people are often skeptical consumers of Web-based health content. Eysenbach and Kohler \cite{eysenbachkohler} studied users engaged in assigned Web search tasks. They found that the credibility of Web sites was important in the focus group setting, but that in practice, participants largely ignored the information source. Sillence and colleagues \cite{sillence} studied the influence of design and content on the trust and mistrust of health sites. They found that aspects of design engendered mistrust, whereas the credibility of information and personalization of content engendered trust.

The medical community has studied the effects of health anxiety, including hypochondriasis \cite{asmundson}, but not in Web search.
Health anxiety is often maladaptive (i.e., out of proportion with the degree of medical risk) and amplified by a lack of attention to the source of the medical information consumed \cite{taylor, kring}. Such anxiety usually persists even after an evaluation by a physician and reassurance that concerns lack medical basis. A recent study showed that those who self-identified as hypochondriacs searched more often for health information than average Web searchers \cite{whitehorvitzhypo}. By estimating the level of health concern via long-term modeling of online behavior, search engines can better account for the effect that results may have and help mitigate health concerns. Our research makes progress in this area.

\begin{table*}
\centering
\begin{tabular}{>{\centering\arraybackslash}p{\textwidth}}
headache, headaches, severe headache what do I do, which remedy for headache, headache top of head with back pain \\ \hline
headache do I have a {\it tumor}, {\it headache rack}, my {\it job} gives me a headache, headache {\it national parks of california} \\ \hline
a, low, be, he, sick, she, helps, standing, black, speech, male, between, acute, shaking, sensitive, bending, an, testing \\ \hline
\end{tabular}
\captionsetup{justification=centering}
\caption{Examples of landmark queries (top), of non-landmark queries (middle; secondary topics and medical conditions are italicized) and of admissible words we used to find potential landmark queries (bottom)}
\end{table*}

Searchers may feel too overwhelmed by the information online to make an informed decision about their care~\cite{hart}. Performing self-diagnosis using search engines may expose users to potentially alarming content that can unduly raise their levels of health concern. White and Horvitz \cite{whitehorvitzcyber} employed a log-based methodology to study escalations in medical concerns, a behavior they termed cyberchondria. Their work highlighted the potential influence of several biases of judgment demonstrated by people and search engines themselves, including base-rate neglect and availability. In a follow-up study \cite{whitehorvitzesc}, the same authors showed a link between the nature and structure of Web page content and the likelihood that users' concerns would escalate. They built a classifier to predict escalations associated with the review of content on Web pages (and we obtained that classifier for the research described in this paper). Others have also examined the effect of health search on users' affective state, showing that the frequency and placement of serious illnesses in captions for symptom searches increases the likelihood of negative emotional outcomes \cite{lauckner}. Other research has shown that health-related Web usage has been linked with increased depression \cite{bessiere}.

Moving beyond the psychological impact of health search, researchers have also explored the connection between health concerns and healthcare utilization. In one study \cite{whitehorvitzhui}, the authors estimated whether users sought medical attention by identifying queries containing healthcare utilization intentions (HUIs) (e.g., [physicians in san jose 95113]). Eastin and Guinsler \cite{eastinguinsler} showed that health anxiety moderated the relationship between health seeking and healthcare utilization. Baker and colleagues \cite{baker} examined the prevalence of Web use for healthcare, and found that the influence of the Web on the utilization of healthcare is uncertain.
The role of the Web in informing decisions about professional treatment needs to be better understood. One of our contributions in this paper is to demonstrate the potential effect of health-related result pages on future healthcare utilization.

Our research extends previous work in a number of ways. By focusing on the first query pertaining to a particular symptom observed in a user history, we show that small differences in query formulation can reflect significant differences amongst health searchers and their health-related search outcomes. To date, no research has demonstrated the impact and insight afforded from analyzing such landmark queries and the behavior around them. To our knowledge, we are also the first to use the search logs to devise statistical experiments which allow us to quantify effects such as user response to medical search results amongst real user populations and provide evidence for causal relationships where possible. We believe that such an approach is necessary to formulate definitive implications for search engine design as well as measuring search engine performance.

\section{STUDY}
We describe various aspects of the study that we performed, including the data, the features extracted, and the statistical methods used for analysis.

\subsection{Log Data}

To perform our study, we used the anonymized search logs of a popular search engine. Users of this search engine give consent to having information about their queries stored via the terms of use of the engine. During this study, we focused on medical queries related to headache, as it is among the most common health concerns \cite{nielsen}. We use the phrase {\it headache query} to refer to queries that contain the substring ``headache'' and occurred during the six-month period from September 2012 to February 2013. Amongst those, we call a {\it landmark query} a query that shows an intent to explore the symptom ``headache'', that does not already contain a possible explanation for headache (e.g., migraine, tumor) and that is not otherwise off-topic. We found these landmark queries by manually assessing frequent headache queries and creating a list of 682 ``admissible'' terms that we believed could occur in landmark queries. We then compiled all queries that exclusively contain terms from that list into a dataset. Examples of landmark queries, non-landmark queries, and admissible words can be found in Table 1. From manual inspection, we concluded with confidence that the dataset captured over 50\% of all landmark queries present in the logs and that over 95\% of captured queries are proper landmark queries, the rest being headache queries which contain a significant secondary topic. We then excluded all but the first landmark query instance for each user, ensuring that each user only appears once in the dataset. Overall, our dataset contains over 50,000 unique queries and over 190,000 query instances / users.

We focus on headache since it is a common medical symptom (e.g., over 95\% of adults report experiencing headaches in their lifetime \cite{rasmussen}) and there are a variety of serious and benign explanations (from caffeine withdrawal to cerebral aneurism), facilitating a rich analysis of content, behavior, and concern. While we believe that headache searching is sufficiently rich and frequent to warrant its own study, investigating queries related to symptoms other than headache could solidify our findings.
A large-scale analysis similar to that reported here, but focused on multiple symptoms, is an interesting and important area for future work.

\subsection{Features}

For each landmark query in our dataset, we generated features. To frame our three research questions, we modeled the search process around a landmark query as five separate stages: (i) the user's search behavior prior to the landmark query, (ii) the user's choice of wording of the landmark query, (iii) the SERP returned to the user by the search engine, (iv) the user's decision which pages to click on (if any), and (v) the user's search behavior after the landmark query. Our research questions ask about the relationships among these five stages. Hence, the features we extracted come in five groups.

\begin{table}
\centering
\begin{tabular}{lm{2.3in}}
Name&Description \\ \hline
pasttopserious&Number of medical queries containing the most frequent serious condition amongst all serious conditions present \\ \hline
pasttopbenign&Number of medical queries containing the most frequent benign condition amongst all benign conditions present \\ \hline
pastdiffserious&Number of distinct serious conditions in medical queries \\ \hline
pastdiffbenign&Number of distinct benign conditions in medical queries\\ \hline
pastmedical&Number of medical queries \\ \hline
pastheadache&Number of medical queries containing ``headache'' or a condition that is an explanation for headache \\ \hline
pasthui&Number of HUI queries \\ \hline
pasthuiclicked&Number of HUI queries where at least one page on the SERP was clicked \\ \hline
\end{tabular}
\caption{User Search Behavior Features}
\end{table}

\begin{table*}
\centering
\begin{tabular}{lllc}
Name & Description (query ..) & Example & Frequency\\ \hline
audience & .. is about specific population & ``headache in adults'' & 3.6\%\\
filler & .. contains a filler such as 'a' or 'and' & ``headache and cough'' & 3.6\%\\
goal & .. contains a specific search goal & ``definition of headache'' & 26.3\%\\
goal:condition & .. indicates the goal of diagnosis & ``reasons for headache'' & 8.2\%\\
goal:symptom & .. indicates the goal of related symptoms & ``headache symptoms'' & 2.1\%\\
goal:treatment & .. indicates the goal of treatment & ``headache cure'' & 14.7\%\\
goal:medication & .. indicates the goal of treatment through medication & ``headache pills'' & 2.4\%\\
goal:alternative & .. indicates the goal of alternative treatment & ``natural headache remedy'' & 5.4\%\\
symptomhypothesis & .. states another symptom as cause & ``headache caused by back pain'' & 1.7\%\\
eventhypothesis & .. relates the headache to a life event & ``headache after hitting head'' & 3.3\%\\
duration & .. specifies a duration for the headache & ``chronic headache'' & 10.9\%\\
intensity & .. specifies that the headache is strong & ``severe headache'' & 5.0\%\\
location & .. specifies the location of the headache & ``headache left side of head'' & 17.3\%\\
pronoun & .. contains a pronoun & ``i have headache'' & 5.8\%\\
kindofheadache & .. specifies the kind of pain & ``stabbing headache'' & 2.5\%\\
othersymptom & .. specifies an additional symptom & ``headache reflux'' & 27.9\%\\
triggered & .. indicates a headache trigger & ``headache when bending over'' & 1.4\%\\
timeofday & .. indicates a daily pattern & ``headache in afternoon'' & 3.2\%\\
openquestion & .. is phrased as an open question & ``what to do about headache'' & 11.1\%\\
\end{tabular}
\caption{High-level {\it QueryFormulation} features. Note that queries may have multiple features, such as ``severe headache on top of head and cough'', so the percentages in the far-right column do not sum to 100\%.}
\end{table*}

{\it BeforeSearching}. This group of features describes the level of medical searching before the landmark query (stage (i)). For each query in the user's search log, we first extracted whether that query was of a medical nature. For this, we used a proprietary classifier. From manual inspection, we concluded with confidence that its Type I and II errors are $< 0.1$. Queries so classified as medical will be called {\it medical queries}. Secondly, we extracted phrases present in the query that describe medical conditions, such as ``common cold'' or ``cerebral aneurism''. The list of phrases we considered was based on the International Classification of Diseases 10th Edition (ICD-10) published by the World Health Organization as well as the U.S. National Library of Medicine's PubMed service and other Web-based medical resources. We also used manually curated synonyms from online dictionaries and standard grammatical inflections to increase coverage. For more information, see the approach used in \cite{whitehorvitzcyber}. The list was also separated into benign and serious medical conditions, and we determined which conditions are possible explanations for headache. Thirdly, we extracted phrases that indicate the query is linked to an intention of real-world healthcare utilization (HUI), such as ``emergency care'' or ``hepatologist''. We call those queries {\it HUI queries}. Finally, the individual-query-level features were aggregated over a time window just before the landmark query to form 8 distinct features, which are shown in Table 2. In our experiments, we considered five different aggregation windows: 1 hour, 1 day, 1 week, 30 days and 90 days. All experiments involving {\it BeforeSearching} features were replicated for each aggregation window.

We believe that intense medical searching may be a sign of health concerns or anxieties. White and Horvitz \cite{whitehorvitzhypo} extensively demonstrated that users believing their symptoms may be explained by a serious condition conduct longer medical search sessions and do so more frequently. Of course, there are many potential reasons for increased medical search such as different Web search habits, random noise or even a recent visit to a physician \cite{whitehorvitzhui}. Overall, we believe that it makes sense to view {\it BeforeSearching} in light of possible medical concern experienced. However, even if there are significant other factors at play, we believe that the phenomenon of health search intensity remains interesting.

{\it QueryFormulation}. The choice of wording for the query (stage (ii)) was modeled, firstly, using 19 high-level features. We arrived at these by manually inspecting admissible words and landmark queries (such as in Table 1) and noting the most important high-level ideas expressed through them. We believe that those 19 features capture a significant portion of the semantic variation within queries. The features are shown in Table 3. They offer a useful characterization of the broad range of different types of search intent associated with headache-related queries.
Four of these features ({\it othersymptom}, {\it duration}, {\it intensity} and {\it location}) were further divided by which key phrase was used to express this feature, yielding an additional 117 low-level features (examples are shown in the graph annotations of Figures 1.2-1.5). All feature extraction functions are based on substring matches joined by logical operators. From manual inspection, we concluded with confidence that the Type I and II errors of the extractors of all these features were low. All features in this category are binary.
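To make the flavor of these extractors concrete, the following Python sketch shows how a few of the binary features could be computed from a query string; the term lists are short illustrative stand-ins, not the actual 682-term vocabulary used in the study.
\begin{verbatim}
def has_any(query, terms):
    # substring matches, joined by a logical OR
    return any(term in query for term in terms)

def extract_features(query):
    q = " " + query.lower() + " "
    return {
        "pronoun":      has_any(q, [" i ", " my ", " me ", " he ", " she "]),
        "intensity":    has_any(q, [" severe ", " terrible ", " bad "]),
        "duration":     has_any(q, [" chronic ", " constant ", " daily ",
                                    " everyday "]),
        "openquestion": has_any(q, [" what ", " why ", " how "]),
    }

print(extract_features("severe headache what do I do"))
# {'pronoun': True, 'intensity': True, 'duration': False,
#  'openquestion': True}
\end{verbatim}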
\begin{figure*}
\centering
\includegraphics[scale=1.8]{F1p1.pdf}
\includegraphics[scale=1.8]{F1p2.pdf}
\includegraphics[scale=1.8]{F1p3.pdf}
\caption{Association between query formulation and search behavior features. From left to right: {\bf {\color{red} Red}}: {\it pastmedical} (aggregation window: 90 days); {\bf {\color{blue} Blue}}: {\it pastmedical} (window: 1 hour); {\bf {\color{Brown} Brown}}: {\it futuremedical} (window: 1 hour); {\bf Black}: {\it futuremedical} (window: 90 days). All bars show the relative change of medical search intensity of users whose landmark query has a certain formulation relative to the global mean. Error bars are 95\% confidence intervals.}
\end{figure*}

{\it SERPConcern}. We scored the level of medical concern expressed in each of the Top 8 pages on the SERP (stage (iii)) using a logistic regression classifier designed to predict searching for serious conditions and shown in \cite{whitehorvitzesc} to have significant predictive power. The classifier was graciously provided to us by the authors for use in our study. Note that during the time period analyzed in this study, the search engine only returned eight results on a large number of SERPs, so we disregarded possible further results. It has been shown that users rarely click below position 8, including for health queries \cite{whitehorvitzbias}. The classifier is based on page features from the URL and HTML content similar to those shown in Table 2. It also includes features that attempt to measure the ``respectability'' of the page (e.g., expressed through the {\it Health on the Net Foundation} certificate \cite{hon}) to capture the effect of this on the user. It is then trained to discriminate between pages that lead to serious condition searches (concerning) and pages that lead to benign condition searches (non-concerning). The concern score of an arbitrary page is then the inner product between its feature vector and the learnt weight vector. Finally, we take the weighted sum of the 8 scores for the individual pages on any SERP to produce a single feature value for the full SERP. Note that even though we consider pages leading to benign condition searches less concerning than those leading to serious condition searches, throughout this study, we still consider benign condition searching as an indicator for medical concern experienced by the user, albeit less strong than serious condition searching.

{\it ClickFeatures}. To measure the user's click behavior (stage (iv)), we record whether the user clicked a page on the SERP. If the user did, we also record the concern score of the page clicked (as described in the last paragraph) as well as the position of the clicked page on the SERP. We call these three features {\it hasclicked}, {\it clickconcern} and {\it clickposition}. (Users that did not click are excluded from analysis involving features {\it clickconcern} and {\it clickposition}.)

{\it AfterSearching}. The same 8 features as {\it BeforeSearching}, but aggregated over a window just after the landmark query (stage (v)). In the name, we replace ``past'' with ``future'' (e.g. one feature is called {\it futuremedical}).

\subsection{Statistical Methodology}

Let $X$ be a dataset where each entry corresponds to a user / landmark query (either our full dataset of 190,000 entries or a subset of it). Write $x_1, .., x_N$ for the data points and $x^1_n, .., x^d_n$ for the components / feature values of data point $x_n$. Throughout our analysis, we wish to measure whether there is a significant association between two feature values $i$ and $j$, say between {\it pastmedical} and {\it SERPConcern}. By this we mean that the p-value of a suitable independence test on the two variables is low. To measure this simply and robustly, we will choose a threshold $t^i$ and split the data set into two subpopulations $X_<$ and $X_>$ such that for all $x_n$, $n \in \{1, .., N\}$, we have $x_n \in X_< \iff x_n^i < t^i$. We also call the two subpopulations the {\it lower bucket} and {\it upper bucket} respectively. Then, we either perform a two-sample t-test on the population means of $X_<$ and $X_>$ (Figure 3.3) or we consider a 95\%-confidence interval around the mean of the smaller bucket if it is significantly smaller than the other bucket (Figures 1 and 2). (In practice, all our subpopulations are large enough to warrant the use of Gaussian confidence intervals / tests.) A statistical association between features $i$ and $j$ is a symmetric relation; we may choose to split on either feature and compare the means of the other, based on convenience of presentation.

Several times, we will encounter a more challenging case where we want to answer the question of whether two feature values $i$ and $j$ are associated while controlling for a third feature value $k$. We do this by first dividing the dataset into many subpopulations according to the exact value of feature $k$ to obtain $X^{v^k_1}$, .., $X^{v^k_{N^k}}$, where $v^k_1, .., v^k_{N^k}$ are the values $x_n^k$ can take. Then, we split each of these subpopulations further according to thresholds on feature $i$ as before. Hence, we obtain $X_<^{v^k_1}$, .., $X_<^{v^k_{N^k}}$ and $X_>^{v^k_1}$, .., $X_>^{v^k_{N^k}}$. Because it is nontrivial to jointly compare this potentially large number of buckets, we adopt the following 2-step procedure.

First, we take the union of all lower and upper buckets respectively and perform a two-sample t-test as before. For this to control for feature $k$, each lower bucket must be of the same size as its corresponding upper bucket. If, for some $X^{v^k_n}$, there is no threshold that achieves this, we randomly remove data points whose feature value is equal to the median across $X^{v^k_n}$ until this is possible. While this significantly reduces the number of data points entering the analysis, we do not believe this threatens validity or generalizability.

In the second step, we first individually compare each lower bucket to its corresponding upper bucket. Because many of these buckets are small (e.g. of size 1), we use the Mann-Whitney U statistic to obtain a p-value for each pair of buckets. Then, we aggregate all of those p-values by Stouffer's Z-score method, where each p-value is weighted according to the size of its respective buckets.
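A compact sketch of this bucketed testing procedure, using standard SciPy routines, is given below; the input arrays are synthetic, and the equal-size bucket construction is simplified relative to the median-trimming described above (per-bucket p-values are also treated as undirected for brevity).
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, norm

def controlled_test(x_i, x_j, x_k):
    # split on feature i within each exact value of the control feature k,
    # then compare feature j across the lower/upper buckets
    lower, upper, z_num, z_den = [], [], 0.0, 0.0
    for v in np.unique(x_k):
        vals = x_j[x_k == v][np.argsort(x_i[x_k == v])]
        half = vals.size // 2
        if half == 0:
            continue
        lo, hi = vals[:half], vals[vals.size - half:]  # equal-sized buckets
        lower.append(lo)
        upper.append(hi)
        p = mannwhitneyu(lo, hi, alternative="two-sided").pvalue
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the normal quantile
        w = 2.0 * half                       # weight by bucket size
        z_num += w * norm.ppf(1.0 - p)
        z_den += w * w
    # step 1: pooled two-sample t-test on the unions of the buckets
    p1 = ttest_ind(np.concatenate(lower), np.concatenate(upper)).pvalue
    # step 2: Stouffer aggregation of the per-bucket p-values
    p2 = 1.0 - norm.cdf(z_num / np.sqrt(z_den))
    return p1, p2  # combined downstream as described in the text

rng = np.random.default_rng(0)
x_k = rng.integers(0, 50, 2000)          # control feature (e.g., query)
x_i = rng.normal(size=2000)              # split feature
x_j = rng.normal(size=2000) + 0.1 * x_i  # compared feature
print(controlled_test(x_i, x_j, x_k))
\end{verbatim}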
Finally, we wish to combine p-values obtained in both steps to either accept or reject the null hypothesis at a given significance level. This is achieved by taking the minimum of both values and multiplying by 2. The cumulative distribution function of that combined quantity is below that of the uniform distribution under the null hypothesis and is thus at least as conservative in rejecting the null as each individual p-value, while gaining a lot of the statistical power of both tests. We quote this value in Figures 3.1-3.2 and 3.4-3.7.

\section{QUERY FORMULATION}

\begin{figure*}
\centering
\includegraphics[scale=1.8]{F2v2.pdf}
\caption{Association between query formulation features and {\it SERPConcern}. All bars show the difference in {\it SERPConcern} of users whose landmark query has a certain formulation relative to the global mean, measured in standard deviations of the global distribution.}
\end{figure*}

We began our analysis by investigating how users with different levels of {\it BeforeSearching} phrase their queries. Figure 1.1 shows the mean value of {\it pastmedical} for each high-level {\it QueryFormulation} feature relative to the population mean, both aggregated over 1 hour and 90 days. Error bars indicate 95\% confidence intervals. We chose the feature {\it pastmedical} to represent {\it BeforeSearching} because it is the least sparse and hence has the smallest confidence interval.

To our surprise, we found that most {\it QueryFormulation} features are highly discriminative, i.e., users choosing queries with certain features have conducted significantly more medical searching than users choosing queries that have different features. In fact, every one of these 19 features is associated with a significant change in prior medical search activity during the hour before the landmark query. This effect could be explained by users near the beginning of their medical search process choosing different query patterns compared to users near the end of their medical search process. However, most {\it QueryFormulation} features are still discriminative when aggregated over a 90-day window (red bars), and we found that the majority of medical searches in that window do not occur immediately prior to the landmark query. Figure 1.1 also shows the feature {\it futuremedical}. We find that most {\it QueryFormulation} features have the same association with past and future search activity, even over a 90-day window. This is evidence that these {\it QueryFormulation} features characterize users and that heavy medical searchers prefer certain formulations compared to other users in a consistent fashion over the long term.

Over the 1-hour window, users entering an additional symptom in their landmark query show the highest level of search activity. This might be because experiencing a larger number of symptoms causes the user to want to find information about each individual symptom, thus increasing the search need. Surprisingly, {\it intensity} is not one of the strongest predictors of increased searching even though it appears to be the most intuitive indicator of increased concern. Also, the features {\it openquestion} and {\it pronoun} indicate less prior searching. Hence, the presence of sentence-like structures in queries is potentially associated with lower user health concern. It is difficult to intuitively interpret most of the {\it QueryFormulation} features.
Figures 1.2-1.5 show low-level {\\it QueryFormulation} features relating to the high-level features {\\it duration}, {\\it intensity} and {\\it othersymptom}. Unfortunately, the sparseness of {\\it pastmedical} leads to large margins of error, the exact sizes of which are also difficult to determine. Nonetheless, certain features that are more common, and thus have lower margins of error, such as ``constant'' or ``terrible'', are as discriminative as high-level features, suggesting they are useful for making inferences about the user. For example, our analysis suggests the very counterintuitive conclusion that users searching for ``headache everyday'' experience a different level of concern than users searching for ``daily headache'' (see the difference in feature {\\it pastmedical} between {\\it daily} and {\\it everyday} (blue bar) in Figure 1.2). Similarly, different symptoms are associated with their own level of past search activity. We chose to display 10 relatively concerning symptoms (Figure 1.4) and 10 relatively benign symptoms (Figure 1.5). The choice was made using the findings of a separate crowdsourcing study that we omit from the paper for space reasons. In that study, many participants were asked to estimate the level of medical concern associated with a set of symptoms. Even though we do not present this study, the difference between the two groups of symptoms is intuitively clear. Surprisingly, we do not see a clear trend that users in Figure 1.4 have searched more, which weakens the hypothesis that objective medical state is linked to search activity.\n\nIn summary, we find that both high-level {\\it QueryFormulation} features and individual word choices reveal information about the searcher, which is not necessarily expected. While some {\\it QueryFormulation} features are interpretable, more work is necessary to understand their precise meaning. Also, more data is needed to better study their effect on rarer search events such as HUI queries. \n\n\\section{EFFECT ON SERP}\n\n\n\nWhen personalization is employed by search engines, a ``filter bubble'' can be created whereby only supporting information is retrieved \\cite{pariser}. As highlighted in the earlier example, this can be problematic in the case of health searching. In this section, we investigate in what ways the user influences the content of the SERP and the medical concern expressed therein. This question can be separated into two aspects: how do SERPs vary for a given query, and how do SERPs vary between queries? \n\n\\subsection{Same Query, Different SERPs}\nTo determine the impact of the user on SERPs within each query, we first investigated how diverse those SERPs were to begin with. We analyzed the composition of SERPs of the 29 most frequent queries from our data set. (Each of these occurred at least 500 times.) We found that, on average, 92\\% of SERPs contain the same top result. For example, the page ``www.thefreedictionary.com\/headache'' appears as the top result for the query ``headache definition'' for almost every user. The three most common top results together covered over 99\\% of SERPs. If we consider the top 8 results on the SERP, we find that eight specific pages are enough to account for 61\\% of all results. Hence, most SERPs returned for a given query are highly similar. We do see considerably more diversity when we consider the ordering of pages on the SERP. 
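\n\nA minimal sketch of how these concentration statistics can be computed is given below (a hypothetical illustration, assuming each SERP is represented simply as a ranked list of result URLs):\n\\begin{verbatim}\nfrom collections import Counter\n\ndef serp_concentration(serps):\n    # serps: all SERPs observed for one query, each a ranked URL list.\n    n = len(serps)\n    top = Counter(s[0] for s in serps)\n    modal_share = top.most_common(1)[0][1] \/ n               # cf. 92%\n    top3_share = sum(c for _, c in top.most_common(3)) \/ n   # cf. 99%\n    # Coverage of the eight most frequent pages among top-8 slots.\n    slots = Counter(u for s in serps for u in s[:8])\n    top8_share = sum(c for _, c in slots.most_common(8)) \/ sum(slots.values())\n    return modal_share, top3_share, top8_share\n\\end{verbatim}\n\n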
Hence, we conclude that factors such as time of day, user location, and user personalization may shuffle the ordering of pages, especially in the lower half of the SERP, but do not have the power to promote completely different pages to the top the majority of the time (one notable exception to this is personal navigation \\cite{teevan}, which aggressively promotes pages that an individual visits multiple times, but the coverage of this approach is small). Since user outcomes are driven chiefly by top results, we expect the impact of non-query factors on user outcomes to be limited.\n\n\n\nNonetheless, we investigated the impact of prior search activity on the SERP by measuring the association between {\\it BeforeSearching} features and {\\it SERPConcern} while controlling for query choice. We measured significance as described in Section 3.3. We split on {\\it SERPConcern} and compared the empirical means of the {\\it BeforeSearching} features. Figure 3.1 shows the percentage difference between the mean of the union of the upper buckets and the mean of the union of the lower buckets, aggregated over a 24-hour window before the landmark query (the largest window where significance was obtained). The p-value is shown above the bars. We only show {\\it BeforeSearching} features that were significant. \n\nWe see that users who received concerning SERPs relative to other users entering the same query searched for 3\\% more serious conditions and 2\\% more for the most frequent serious conditions in the 24 hours before the landmark query than those receiving less concerning SERPs. So, there is an impact of prior searching on the SERP, albeit, as expected, a small one. The difference between the buckets becomes much larger when we consider only users who receive the 10\\% most and least concerning SERPs within each query. Figure 3.2 shows that users receiving especially concerning SERPs search for 12\\% more serious conditions, search 18\\% more for their most frequent serious condition and conduct 10\\% more medical searches overall in the 24 hours before the landmark query. If it were the case that prior searching indicates heightened concern, then search engines would be presenting significantly more concerning search results to already concerned users, which may be undesirable.\n\n\\subsection{Different Queries, Different SERPs}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=1.8]{F3p1.pdf}\n\\includegraphics[scale=1.8]{F3p2.pdf}\n\\caption{Association between user search features and features related to the landmark query. Analysis was conducted as described in Section 3.3. In each case, we show the relative difference between the search behavior feature value across all upper buckets and across all lower buckets. Hence, positive values imply a positive association between the features, and vice versa. The p-value is noted next to the bar. We do not show results that were not significant, except in Figure 3.7, where only {\\it futurehuiclicked} was significant. The effective sample size refers to the total number of individuals \/ data points included in the analysis. This varies for three reasons. (1) Some features are unavailable for some users. For example, in Figure 3.6, we cannot include users who did not click on the SERP. (2) In some figures, we only include a specific subset of users. For example, in Figure 3.2, we only include users receiving the 10\\% most \/ least concerning SERPs for each query. 
(3) We have to subsample for statistical reasons (see Section 3.3).}\n\\end{figure*}\n\nNow, we turn our attention to how SERPs vary across queries. As a starting point, we tested the association between {\\it BeforeSearching} and {\\it SERPConcern} (without controlling for query choice). Results are shown in Figure 3.3. The {\\it BeforeSearching} values shown were aggregated over a one-hour window. This window size yielded the most significant results, but significance was preserved over all window sizes up to 90 days. Interestingly, users receiving concerning SERPs are now significantly less likely to engage in medical searching before the landmark query. They search 14\\% less for benign conditions and 17\\% less for their most frequent benign condition. Since these values (and p-values) are much larger than in Figure 3.1, we conclude that the choice of query accounts for the majority of this effect. At face value, this effect may be either desirable or counterproductive depending on whether {\\it BeforeSearching} is an indicator of subjective concern or objective health state.\n\nIn Figure 2.1, we break down {\\it SERPConcern} by high-level {\\it QueryFormulation} features. Error bars are much smaller than those for {\\it pastmedical} because the SERP scoring function is not sparse. We see that including an additional symptom in the landmark query (feature: {\\it othersymptom}) significantly lowers the level of concern in the SERP. This might be because most of these symptoms are not indicative of a serious brain condition (e.g., cough, stomach pain, or hot flashes), thus providing evidence to the search engine that such a condition might not be the underlying cause of the user's health state. We note that this association is the opposite of the association between {\\it othersymptom} and {\\it pastmedical}. We hypothesize that this effect might be responsible for the negative association in Figure 3.3. Indeed, if we exclude searches that include additional symptoms, all negative associations in Figure 3.3 disappear and we obtain significant positive associations between {\\it SERPConcern} and prior serious-condition searching. \n\nIn contrast, searching specifically for conditions that explain headache (feature: {\\it goal:condition}) yields the most concerning pages. This is logical, given that what makes a page most concerning is content about (serious) conditions. Again, it is difficult to interpret the meaning of most of the {\\it QueryFormulation} features intuitively, but each feature is highly discriminative with respect to {\\it SERPConcern} and thus warrants further study.\n\nFigures 2.2-2.5 break down {\\it SERPConcern} by low-level {\\it QueryFormulation} feature. It appears that search engines do a decent job of ranking different symptoms based on how medically serious they are. {\\it SERPConcern} is overall much higher for relatively concerning symptoms than for relatively benign symptoms, showing that the search engine does respond to this factor. Furthermore, we see a form of consistency in the search engine in that queries with additional symptoms consistently receive below-average concern scores. We compared this against different key phrases describing the feature {\\it location} (see Table 3) such as ``above eye'', ``temporal'', and ``base of skull''. 
If we model the {\\it SERPConcern} value associated with each additional symptom as well as each location descriptor as normally distributed, then the difference in means between the two distributions is highly significant ($p < 10^{-6}$).\n\nNonetheless, we still see that there is a considerable amount of unexplained variation in {\\it SERPConcern} values. For example, a user entering ``bad headache'' will receive a less concerning SERP than a user entering ``terrible headache''. We consider it unlikely that this difference is always due to hidden semantics; it may often be attributable to random noise. This suggests that there is still significant scope for search engines to better reflect the medical content of queries and become more consistent. One caveat is that some of this noise might be caused by the classifier used to assign {\\it SERPConcern} values. The presence of this noise implies that different user populations that prefer certain query formulations may be inadvertently led down completely different page trails to different health outcomes.\n\n\\subsection{Summary}\n\nIn summary, we find that search engines do seem able to return less concerning SERPs for queries containing benign symptoms as opposed to more serious symptoms. However, there is significant work to be done to achieve a state where search results reflect the medical information given in the query while being robust to irrelevant nuances. Also, we find that past user searching does have a direct impact in making SERPs more concerning, which might not be desirable. Further analysis might discover the exact cause.\n\n\\section{RESPONSE OF USER TO SERP}\n\nIn this section, we investigate how users respond to the SERP returned in response to the landmark query. We phrase this as two sub-problems: how do users with different levels of prior searching interpret the SERP through their click choices, and what impact does the SERP have in altering the behavior of the user? \n\n\\subsection{Impact on Click Behavior}\n\nFirst, we looked at the impact of {\\it BeforeSearching} on click decisions. For this, we measured the association between {\\it BeforeSearching} and {\\it ClickFeatures}. However, to capture the impact of user predisposition, we can only compare users who not only entered the same query, but also received identical SERPs, thereby controlling for those two confounders. Unfortunately, this means we cannot include users in our study who received a SERP that no one else received, which significantly reduces the size of our effective dataset to between 20,000 and 50,000 users. We split each subpopulation based on high \/ low values of {\\it hasclicked}, {\\it clickconcern}, and {\\it clickposition} and compared the means of {\\it BeforeSearching} features. Results are shown in Figures 3.4 (window: 30 days), 3.5 (window: 30 days) and 3.6 (window: 24 hours), respectively. \n\nWe found that users who select more concerning pages on any given SERP are significantly more likely to have conducted medical searches in the 30 days before the landmark query. They have searched for 19\\% more serious conditions over this time period. This is quite large given the size of the aggregation window and obscuring factors such as the limited amount of information available about a page in a SERP caption, the ad hoc nature of click decisions in general, and the overall preference for pages ranked near the top. 
This suggests that people are selectively seeking information and that concerned users reinforce their opinions by focusing on concerning content on the SERP. Selective exposure to information has been studied in detail in the psychology community \\cite{frey}. The fact that the significance of these results is highest for a large aggregation window illustrates that a user's page preference is formed over the medium term, suggesting it is an attitude rather than a momentary state. Furthermore, this result points to past medical search activity as a good proxy for level of health concern, making our previous analyses more meaningful.\n\nWe found that users with more prior health searching are more likely to click on lower positions on the SERP and are less likely to click overall. This might be because users who have searched about similar topics before are likely to look for specific kinds of information and are thus more likely to reject pages as unsuitable, leading them further down the SERP and ultimately to abandon more of their queries. Connections between topic familiarity and search behavior have been noted in previous work \\cite{kelly}. Interestingly, {\\it huiclicked} breaks that trend and is positively associated with clicking on the SERP. This might be because {\\it huiclicked} is by definition associated with a user's general disposition to click on a SERP, which in turn affects the probability of the user clicking in response to the landmark query. \n\n\\subsection{Impact on Future Behavior}\n\nNow we turn to measuring the impact that the SERP has on the state of the user as measured by {\\it AfterSearching}. Previous research (e.g., survey responses in \\cite{whitehorvitzcyber,foxduggan}) showed that online content can have a direct impact on people's healthcare utilization decisions. It has also been shown that the content of pages viewed can be used to predict future serious-condition searches \\cite{whitehorvitzesc}. We believe that finding evidence in logs that different SERPs cause users to respond differently would strengthen the motivation for improving search engine performance during health searches. \n\nBefore proceeding, we must point out that this task is quite difficult. We have already shown that users have a myriad of significant predispositions that shape the SERP. Hence, it is impossible to tell whether any change in user behavior after the landmark query was really caused by the SERP or is the result of a predisposition.\n\nOne way to mitigate this is, again, to control for query choice. Unfortunately, the group of users receiving concerning SERPs within each query is very different from the group of users receiving concerning SERPs overall, as within-query variation of {\\it SERPConcern} is much smaller than between-query variation (see Section 5). Hence, the true impact of the SERP on users is likely much larger than is measurable in this experiment.\n\nSplitting on {\\it SERPConcern} and comparing the means of {\\it AfterSearching} features yielded no significant differences over any aggregation window. However, looking at the top 10\\% vs. bottom 10\\% of SERPs within each query yielded surprising results. They are shown in Figure 3.7 (window: 1 hour). Users receiving especially concerning SERPs perform 20\\% fewer HUI queries in the hour following the landmark query when compared to their peers who entered the same query but received an especially unconcerning SERP. 
Additionally, these users had a larger empirical probability of expressing HUI in the preceding hour (non-significant), amplifying the drop. The relative difference in HUI queries between users receiving concerning vs. non-concerning SERPs changes from +20\\% to -20\\% from past to future. To solidify this result, we reran our experiment excluding users who had performed any medical searching over the preceding 1 hour, 24 hours, and so on. We found that the negative association of {\\it futurehui} and {\\it futurehuiclicked} with {\\it SERPConcern} remained significant even when excluding all users who had performed any medical searching in the preceding 30 days. (Note that the more users we exclude, the less data we have and the more difficult it is to achieve significance.) This is a surprising result that warrants further investigation.\n\nEven though there is no causal argument to be made, we were still interested in the outright association of {\\it SERPConcern} and {\\it AfterSearching}. We made the interesting observation that the association of {\\it topserious} and {\\it diffserious} with {\\it SERPConcern} actually decreased from past to future. For example, users receiving concerning SERPs issued 2\\% more searches for their most frequent serious condition during the hour before the landmark query, but 1\\% fewer afterwards (both non-significant). On the contrary, users receiving concerning SERPs searched 17\\% less for their most frequent benign condition in the hour before the landmark query and only 8\\% less in the hour after. These results are difficult to interpret. However, they do provide some evidence that the impact of the SERP on the user's concern might be more subtle than expected.\n\n\\subsection{Summary}\nIn summary, we found that concerned users are significantly more likely to click on concerning pages, which may be a serious problem. Additionally, we found only weak evidence that concerning SERPs cause an increase in users' health concerns. On the contrary, users receiving concerning SERPs may be less likely to seek HUI in the short term. Both of these effects need to be investigated further.\n\n\\section{DISCUSSION AND IMPLICATIONS}\n\nIn this section, we summarize our findings and discuss implications, limitations and opportunities for future research. Our main findings are as follows:\n\n\\begin{itemize*}\n\\item Innocuous details in query formulation can hold characterizing information about the search engine user.\n\\item While the search engine does respond sensibly to general trends in query formulation, there is substantial variation in SERPs across different but similar word choices. \n\\item While users with a history of medical searching are significantly more likely to pick out especially concerning content, the search engine also serves those users more concerning SERPs to begin with.\n\\item There was only weak evidence for the intensification of health concerns through concerning SERPs, but some evidence that concerning SERPs might reduce HUI queries and hence real-world health seeking in the short term.\n\\end{itemize*}\n\nConsidering this, we believe that the most important implication of this study for improving medical search results is the need for further, rigorous quantification of the extent to which concern in SERPs influences users' levels of health concern. 
Ultimately, through methodologies to study consenting user cohorts in detail, we can move beyond our focus on health concerns to target health anxieties directly.\n\nWe also believe there is significant potential in refining our understanding of the meaning of both landmark query formulation and general search behavior for extracting hidden information about the user. In this paper, we have shown how these might be used to better infer levels of health concern. In this way, we might be able to improve search engine performance, for example by promoting more medically trusted pages that objectively discuss a wide variety of possible causes if we suspect health anxiety in the searcher. However, much more can be done. For example, word choices might reveal the age of the user \\cite{torres} or their level of domain expertise \\cite{hembrooke}, which in turn would have implications for the medical meaning of symptoms. We also have not yet considered formulation nuances of queries other than the landmark query, which might hold much richer information. The need for such analysis to adjust for anxiety is exemplified by our finding that users who conduct more medical searching have a preference for concerning content, which might indicate a cycle of self-reinforcing concern.\n\nOther questions for future research include: To what extent is frequent medical searching indicative of the user's actual state of health vs. subjective concern? How can we effectively integrate the results of a possible algorithm that learns searchers' true medical situation into a standard retrieval model without ``overmanaging'' or hurting the overall robustness of the engine? How can we find the right balance between preventing health anxiety and avoiding the delay of important medical treatment that may result when users who are actually sick are confronted with only benign explanations?\n\nAlthough we studied the behavior of many searchers, one limitation is that we used a single search engine, meaning our findings pertaining to user behavior might not generalize. Another limitation is the imperfect measurement of {\\it SERPConcern}. Because some queries occur very frequently (e.g., ``headache''), any error incurred by our scoring function on the top results for those queries might have a big impact on the outcome. As mentioned earlier, another limitation is that we only used data on queries that were related to headache. Finally, query logs offer only a limited view of health concerns, and we need to work with searchers directly to understand the motivations behind their search behaviors.\n\n\\section{CONCLUSIONS}\n\nWe presented a log-based longitudinal study of medical search behavior on the web. We investigated the use of medical query formulation as a lens onto searchers' health concerns and found that those features were predictive for this task. We evaluated how a major search engine responds to changes in medical query formulation and saw that there were some clear trends but also a great deal of variance, which might have adverse effects by way of misinformation. We showed that a significant tendency for medically concerned users to view concerning content makes it important for engines to manage this effect (e.g., by considering estimated level of health concern as a personalization feature). Finally, we raised the need for detailed study of the impact of SERP composition on users' future behavior. 
We believe that our results can function as an initial guide for developing practical tools for better health search, as well as inform deeper investigations into health concerns and anxieties on the Web and in general.\n\n\n\n\n\\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{sloppypar}\nMarkov chain Monte Carlo (MCMC) methods enable importance sampling from complicated, concentrated probability distributions.\nThe Metropolis-Hastings (MH) algorithm~\\cite{Metropolis1953Equation, Hastings1970Monte} is the prototype for a class of MCMC algorithms where transition proposals are accepted or rejected to generate each sample.\nIts applicability to distributions where probability ratios, but not absolute probabilities, are easily computed motivates its broad usage.\nSince the complexity and scale of these applications routinely push against computational resource limits, practical techniques for minimizing storage requirements and execution time are important aspects of implementations.\nIdeally, the full sequence of samples is recorded in a compact format that facilitates analysis and interpretation.\n\\end{sloppypar}\n\nCurrent implementations of Metropolis-Hastings-class algorithms typically reduce the information content of the full sample sequence in two ways before it is recorded.\nIn the first, a small fraction of samples are recorded in complete detail, but the rest are discarded.\nThis enables arbitrary post-analysis because all degrees of freedom are available for the retained samples.\nHowever, when sampling systems with many degrees of freedom, retaining even a small fraction of samples requires considerable storage capacity.\nEven if capacity is abundant, disk throughput would limit the fraction of samples that could be recorded without having disk write operations dominate the execution time.\nIn the second pattern, averages, moments, histograms, or other quantities are accumulated during simulation and recorded upon completion.\nThis is parsimonious with respect to demands on storage bandwidth and capacity, but decisions about which quantities to accumulate, their frequencies of accumulation, and the number of samples to discard due to starting configuration bias must be made in advance.\nAltering these choices requires repeating the entire simulation.\n\nThese tradeoffs and losses of information complicate the usage of MH-class algorithms in real applications.\nA typical experience from our lab provides an illustrative example in the area of biomolecular simulation.\nA study of conformational and dimerization equilibria~\\cite{Williamson2010Modulation} involved a set of $\\sim$$10$ proteins, each consisting of $\\sim$$10^3$ interacting atoms and modelled at $\\sim$$10$ different temperatures either individually or in pairs.\nAt least three independent replicate simulations were performed for each condition, with each replicate generating $\\sim$$10^8$ samples using MH.\nWhile some quantities of interest had their expectation values accumulated during the initial simulations, only 1 in $\\sim$$10^4$ samples had their full set of atomic coordinates recorded.\nDespite using Gromacs XTC compression with limited precision, these sparse recordings consumed a total of $\\sim$$10^{11}$ bytes of storage.\nSubsequent analyses could only be performed by reanalyzing these recorded samples or repeating the entire set of simulations while accumulating expectations for the new quantities of interest.\nGiven that the 
simulations consumed $\\sim$$10^5$ total hours of CPU time, both options were suboptimal in terms of efficient usage of computational resources.\n\nThese problems can be completely avoided.\nHere, we describe a simple method for recording the full sample sequence from an MH-class simulation that stores one bit per sample.\nThe method operates independently of any details of the system and the transition proposals, and is therefore generally applicable.\n\n\\section{Description of the method}\n\nRecording a bit string of 1's for acceptance and 0's for rejection, with one bit per transition, suffices to preserve the complete information content of all samples generated during one run of an MH-class algorithm.\nThis method follows naturally from an information theoretic perspective: since all influence from the underlying distribution is reduced to a binary accept-or-reject decision, MH-class algorithms can be viewed as communication channels with capacity of one bit per sample~\\cite[chapter 30.5]{MacKay2002Information}.\nAn existing implementation of an MH-class algorithm can be easily modified to perform this recording operation during each iteration.\nBuffered output is necessary because file systems do not allow writing of individual bits, and also helps to reduce the frequency of disk writes.\n\nRegenerating the full sequence of samples for subsequent reanalysis requires the recorded bit string, the starting state, and any pseudorandom number generator seed(s) from the original simulation.\nThe original simulation code should be reused to guarantee that the original sequence of transition proposals is recapitulated, but modified to use the recorded bit string for deciding whether to accept or reject proposals.\nSince the acceptance criterion no longer needs to be evaluated, all calculations involved in computing ratios of sample weights or proposal probabilities can be skipped during reanalysis.\nHowever, as with the original simulation run, all degrees of freedom for every sample are available for computing and accumulating distributions of any quantity of interest.\n\nCare must be taken to avoid corrupting the sequence of samples during reanalysis.\nThe stream of pseudorandom numbers generated during reanalysis must be identical to the original simulation's stream.\nIn particular, if a single pseudorandom number stream is used for generating transition proposals as well as evaluating the acceptance criterion, the pseudorandom number that would have been used for the acceptance criterion must be generated and discarded during each iteration.\nIf analysis routines themselves make use of pseudorandom numbers, they must generate their own independent streams.\nInsidious platform dependencies are another source of potential corruption when recordings generated on one computer are reanalyzed on another.\nFor example, the implementation of floating point arithmetic differs between processor architectures and compilers.\nThis source of error should be eliminated by adhering to best practices in the coding of floating point operations~\\cite{Goldberg1991What} or writing unit tests that assert platform-specific assumptions are valid during both original simulation and reanalysis runs.
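\n\nThe essence of the record-and-replay pattern can be sketched as follows (a minimal, hypothetical Python illustration assuming a generic log-density \\texttt{logp} and a symmetric proposal; it is not the Java implementation used for the demonstration in the next section):\n\\begin{verbatim}\nimport math, random\n\ndef mh(logp, propose, x0, n, seed, bits=None):\n    # bits=None: normal run, recording one accept\/reject bit per step.\n    # bits=<recording>: replay run; all logp evaluations are skipped.\n    rng = random.Random(seed)\n    x = x0\n    lp = logp(x0) if bits is None else 0.0\n    samples, record = [], []\n    for i in range(n):\n        y = propose(x, rng)\n        u = rng.random()  # drawn in both modes to keep streams aligned\n        if bits is None:\n            lq = logp(y)\n            accept = math.log(u) < lq - lp\n            record.append(1 if accept else 0)\n        else:\n            accept = bool(bits[i])\n            lq = 0.0\n        if accept:\n            x, lp = y, lq\n        samples.append(x)\n    return samples, record\n\n# The original run records the bits; a replay with the same seed\n# regenerates the identical sample sequence without any logp calls.\nlogp = lambda x: -0.5 * x * x                  # standard normal target\nprop = lambda x, r: x + r.uniform(-1.0, 1.0)   # symmetric proposal\ns1, bits = mh(logp, prop, 0.0, 10000, seed=42)\ns2, _ = mh(logp, prop, 0.0, 10000, seed=42, bits=bits)\nassert s1 == s2\n\\end{verbatim}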
\n\n\\section{Demonstration of the method}\n\nTo create a nontrivial demonstration of this recording method, we implemented an MH simulation of a colloidal suspension of charged spherical particles.\nThis three-dimensional off-lattice system consists of $10^3$ particles confined within a spherical droplet.\nEach particle has two charge sites that freely diffuse on the particle's surface.\nThe potential energy is the sum of a pairwise Lennard-Jones potential between particle centers and a pairwise Debye-H\\\"{u}ckel screened electrostatic potential between charge sites.\nIntra-particle and inter-particle electrostatic interactions are screened using different dielectric constants.\nTransition proposals consist of local or full randomization of one particle's center coordinates or charge site positions.\nAn executable Java Archive file containing source code and compiled class files for this demonstration is available as supplementary material.\nTo run it on any computer where Java Runtime Environment version 6 is installed, execute \\texttt{java~-jar~RecordingDemonstration.jar} at a command line.\n\nA comparison with typical alternatives highlights the efficiency of the proposed recording method.\nA straightforward format would be a recording of all seven degrees of freedom $(x, y, z, \\theta_1, \\phi_1, \\theta_2, \\phi_2)$ for all particles.\nWhile this is obviously inefficient, it is simple and easy to parse, facilitating interoperability with other software.\nStoring the updated values, if any, for only the changed degrees of freedom during a transition would be significantly more efficient.\nHowever, this method would tie the recording format to the choice of transition proposals; modifying the simulation to use more sophisticated transitions that simultaneously perturb multiple particles would require redesigning the format.\nAn even more efficient method would record the potential energy (or more generally, the weight) of each sample.\nAs with the proposed method, this would require preserving the simulation code, but would enable full reconstruction of the original samples without calculating any weights.\nTable~\\ref{table:storagetable} compares the storage efficiency of these recording schemes to the proposed one.\n\\texttt{RecordingDemonstration.jar} can derive any of the other recordings starting from the bit string recording, proving that the bit string (along with simulation code and starting state) retains all information about the sample sequence.\n\n\\begin{table}[htbp]\n\\input{storagetable}\n\\label{table:storagetable}\n\\end{table}\n\n\\begin{sloppypar}\nSince the bit string recording obviates all sample weight and proposal probability ratio calculations, iterating over the sample sequence takes significantly less time than generating it.\nTo enable benchmarking, \\texttt{RecordingDemonstration.jar} provides two analyses for the demonstration system.\nThe first is a histogram of the central angle between intra-particle charge sites, and the second is a pairwise distance histogram between particle centers.\nThe histograms are accumulated once every 100 and 1000 steps, respectively.\nTable~\\ref{table:timingtable} compares the execution time for performing both analyses during the original simulation versus using the bit string recording.\nNote that energy evaluations are efficiently implemented such that only changing terms are computed; each iteration after the first of the original simulation performs a number of computations that is linear, rather than quadratic, in the number of particles.\nHowever, once the bit string is recorded, energy evaluations are skipped; each iteration only needs to update the degrees of freedom for a single particle and therefore executes in constant 
time.\n\\end{sloppypar}\n\n\\begin{table}[htbp]\n\\input{timingtable}\n\\label{table:timingtable}\n\\end{table}\n\n\\section{Discussion}\n\nThis method enables efficient reanalysis as long as the time spent generating and executing transition proposals is small compared to the time spent computing ratios of sample weights and proposal probabilities.\nFortunately, this condition is naturally satisfied because sample weights are generally determined by an energy function or other interaction between degrees of freedom that is more expensive to update than the degrees of freedom themselves.\nBy recording the results of the most computationally expensive component of MH-class algorithm implementations, the method proposed here approaches the minimum achievable limits on both storage consumption and execution time.\n\nA closely related alternative method would be to record the full sequence of sample weights in addition to the accept \/ reject decision.\nIn some applications, the sample weights themselves are important subjects of analysis, and rederiving them from the samples during reanalysis would be computationally expensive.\nFor instance, in a simulation of a physical system in the canonical ensemble, one may wish to calculate the heat capacity and other properties of the energy distribution.\nAssuming that weights are represented as IEEE 754 double precision floating point numbers, as in Table~\\ref{table:storagetable}, each sample would require an additional 64 bits of storage.\nWhile much less compact than the bit string recording alone, it would still be far more efficient than most alternatives.\n\nNote that the size of the recording might be further reduced by compressing it using a lossless algorithm.\nOne bit per sample is an \\emph{upper} bound on the entropy rate~\\cite[chapter 30.5]{MacKay2002Information}; if the acceptance rate of the simulation is not exactly 1\/2, the recorded bit string will contain biases that a compression algorithm can exploit.\nFor an acceptance rate $r$, the attainable limit is the binary entropy $H(r) = -r \\log_2 r - (1-r) \\log_2 (1-r)$ bits per sample; an acceptance rate of $0.25$, for example, corresponds to $H \\approx 0.81$ bits per sample.\n\nThe compactness of this recording method makes it susceptible to corruption: during reanalysis, a single incorrectly recorded bit contaminates all subsequent samples.\nAn effective solution would be to encode the raw bit string using error correcting codes (ECC) that provide robustness at the cost of increased storage consumption~\\cite[chapter 11]{MacKay2002Information}.\nIn fact, employment of Reed-Solomon~\\cite{Reed1960Polynomial} and low-density parity-check (LDPC)~\\cite{Gallager1962Lowdensity} codes at the hardware level is already widespread among designers of random access memory, persistent storage devices, and networking interfaces.\nTherefore, ECC at the software level would constitute a redundant layer of protection against error.\nNon-redundant detection of error can be achieved by recording all degrees of freedom for a small number of checkpoint samples, and verifying equality for the corresponding samples generated during reanalysis.\nAlternatively, the common practice of comparing cryptographic hash function outputs can be applied to verifying the integrity of recorded bit strings.\n\nThis method is compatible with descendants of the MH algorithm that retain its general accept-or-reject architecture.\nExamples include expanded ensemble techniques~\\cite{Lyubartsev1992New}, replica exchange Monte Carlo~\\cite{Mitsutake2001Generalizedensemble}, multiple-try Metropolis~\\cite{Liu2000MultipleTry}, simulated annealing~\\cite{Kirkpatrick1983Optimization}, and simulated tempering~\\cite{Marinari1992Simulated}.\nFor 
some algorithms, such as Wang-Landau sampling~\\cite{Wang2001Efficient}, efficient post-analysis would require recording the system energy at every iteration as described above.\n\n\\section{Conclusion}\n\nThe method described here is a simple and effective approach to data storage and representation in the design of MH-class algorithm implementations.\nIt facilitates decoupling of simulation design from the constraints of data analysis.\nThe information theoretic perspective through which this method was conceived deserves broader appreciation in the development of Monte Carlo algorithms.\n\n\\section{Acknowledgements}\n\nThis work was supported by National Science Foundation MCB 0718924 and National Institutes of Health \u2013 National Institute of General Medical Sciences 5T32GM008802.\n\n\n\n\n\n\n\\bibliographystyle{model1-num-names}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nObservations of high-energy radiation from a variety of astrophysical\nsources imply the presence of significant populations of nonthermal\n(often relativistic) particles. Much of the observational data is\ncharacterized by strong variability on very short timescales. Nonthermal\ndistributions are naturally produced via the Fermi process, in which\nparticles interact with scattering centers moving systematically and\/or\nrandomly. The first-order Fermi mechanism treats particle acceleration\nin converging flows, such as shocks, and is thought to be important at\nthe Earth's bow shock, in certain classes of solar flares, for cosmic-ray\nacceleration by supernova remnants, and in sources with relativistic\noutflows, including blazars and $\\gamma$-ray bursts \\citep[for reviews,\nsee][]{be87,kir94}. The second-order process, as it was originally\nconceived by Fermi, involved the stochastic acceleration of particles\nscattering with randomly moving magnetized clouds \\citep{fer49}. Later\nrefinements of this idea replaced the magnetized clouds with\nmagnetohydrodynamic (MHD) waves \\citep[e.g.,][]{mel80}. The\nsecond-order, stochastic Fermi process now finds application in a wide\nrange of astrophysical settings, including solar flares\n\\citep{mgr90,lpm04a}, clusters of galaxies \\citep{sst87,pet01,bru04},\nthe Galactic center \\citep{lpm04b,ad04}, and $\\gamma$-ray bursts\n\\citep{wax95,dh01}.\n\nThe standard approach to modeling the acceleration of nonthermal\nparticles via interactions with MHD waves involves obtaining the\nsolution to a steady-state transport equation that incorporates\ntreatments of systematic and stochastic Fermi acceleration, radiative\nlosses, and particle escape \\citep[e.g.,][]{sch84,ss89,lmp06}. However,\nthe prevalence of variability on short timescales in many sources calls\ninto question the validity of the steady-state interpretation.\n\\citet{pp95} provided a comprehensive review of the various\ntime-dependent solutions that have been derived in the past 30 years. In\nmost cases, it is assumed that the momentum diffusion coefficient,\n$D(p)$, and the mean particle escape timescale, $t_{\\rm esc}(p)$, have\npower-law dependences on the particle momentum $p$. Although the set of\nexisting solutions covers a broad range of possible values for the\nassociated power-law indices, there are certain physically interesting\ncases for which no analytical solution is currently available. 
For\nexample, \\citet{dml96} have shown that for the stochastic acceleration\nof relativistic particles due to resonant interactions with plasma waves\nin black-hole magnetospheres, one obtains $D(p) \\propto p^q$ and $t_{\\rm\nesc}(p) \\propto p^{q-2}$, where $q$ is the index of the wavenumber\ndistribution (see \\S~2). The analytical solution for the time-dependent\nGreen's function in this situation is of particular physical interest,\nbut it has not appeared in the previous literature. This has motivated\nus to reexamine the associated Fokker-Planck transport equation for this\ncase, and to obtain a new family of closed-form solutions for the\nsecular Green's function. The resulting expression, describing the\nevolution of an initially monoenergetic particle distribution,\ncomplements the set of solutions discussed by \\citet{pp95}. Our primary\ngoal in this paper is to present a detailed derivation of the exact\nanalytical solution and to demonstrate some of its key properties.\nDetailed applications of our results to the modeling of particle\nacceleration in black-hole accretion coronae and other astrophysical\nenvironments will be presented in a separate paper.\n\nThe remainder of the paper is organized as follows. In \\S~2 we review\nthe fundamental equations governing the acceleration of charged\nparticles interacting with plasma waves. The transport equation is\nsolved in \\S~3 to obtain the time-dependent Green's function, and\nillustrative results are presented in \\S~4. The astrophysical implications\nof our work are summarized in \\S~5, and additional mathematical details are\nprovided in the Appendix.\n\n\\section{FUNDAMENTAL EQUATIONS}\n\nCharged particles in turbulent astrophysical plasmas are expected to be\naccelerated via interactions with whistler, fast-mode, and Alfv\\'en\nwaves propagating in the magnetized gas. Here we consider a simplified\nisotropic description of the wave energy distribution, denoted by\n$W(k)$, where $W(k) dk$ represents the energy density due to waves with\nwavenumber between $k$ and $k+dk$. The transport formalism we consider\nassumes a power-law distribution for the wave-turbulence spectrum, which\nimplies definite relations between the momentum diffusion coefficient\nand the momentum-dependent escape timescale\n\\citep[e.g.,][]{mel74,sch89a,sch89b,dml96}. MHD waves injected over a\nnarrow range of wavenumber cascade to larger wavenumbers, forming a\nKolmogorov or Kraichnan power spectrum over the inertial range with\n$W(k) \\propto k^{-q}$, where $q=5\/3$ and $q=3\/2$ for the Kolmogorov and\nKraichnan cases, respectively \\citep[e.g.,][]{zm90}. The specific forms\nwe will adopt for the transport coefficients in \\S~2.2 are based on the\nphysics of the resonant scattering processes governing the wave-particle\ninteractions \\citep[see][]{dml96}. We assume that the nonthermal\nparticle distribution is isotropic, and focus on a detailed treatment of\nthe propagation of particles in momentum space due to wave-particle\ninteractions. 
The spatial aspects of the transport (i.e., the\nconfinement of the particles in the acceleration region) will be treated\nin an approximate manner using a momentum-dependent escape term.\n\n\\subsection{Transport Equation}\n\nThe fundamental transport equation describing the propagation of\nparticles in momentum space can be written in the flux-conservation\nform \\citep[e.g.,][]{bec92,sch89a,sch89b}\n\\begin{equation}\n{\\partial f \\over \\partial t}\n= - {1 \\over p^2} {\\partial \\over \\partial p}\\left\\{\np^2 \\left[\nA(p) \\, f - D(p) {\\partial f \\over \\partial p}\\right]\n\\right\\}\n- {f \\over t_{\\rm esc}(p)}\n+ {S(p,t) \\over 4 \\pi p^2}\n\\ ,\n\\label{eq1}\n\\end{equation}\nwhere $p$ is the particle momentum, $f(p,t)$ is the particle\ndistribution function, $D(p)$ denotes the momentum diffusion\ncoefficient, $t_{\\rm esc}(p)$ is the mean escape time, $S(p,t)$\nrepresents particle sources, and $A(p)$ describes any additional,\nsystematic acceleration or loss processes, such as shock acceleration or\nsynchrotron\/inverse-Compton emission. The quantity in square brackets in\nequation~(\\ref{eq1}) describes the flux of particles through\nmomentum space \\citep{tnj71}, and the source term is defined so that\n$S(p,t) \\, dp \\, dt$ gives the number of particles injected into the\nplasma per unit volume between times $t$ and $t + dt$ with momenta\nbetween $p$ and $p + dp$. The total particle number density $n(t)$ and\nenergy density $U(t)$ of the distribution $f(p,t)$ are computed using\n\\begin{equation}\nn(t) = \\int_0^\\infty 4 \\pi \\, p^2 \\, f(p,t) \\, dp\n\\ , \\ \\ \\ \\ \\ \nU(t) = \\int_0^\\infty 4 \\pi \\, \\epsilon \\, p^2 \\, f(p,t)\n\\, dp \\ ,\n\\label{eq2}\n\\end{equation}\nwhere the particle kinetic energy $\\epsilon$ is related to the Lorentz factor\n$\\gamma$ and the particle momentum $p$ by\n\\begin{equation}\n\\epsilon = (\\gamma-1) \\, m c^2 \\ , \\ \\ \\ \\ \\ \n\\gamma=\\left({p^2 \\over m^2 c^2} + 1\\right)^{1\/2} \\ ,\n\\label{eq3}\n\\end{equation}\nand $m$ and $c$ denote the particle rest mass and the speed of light,\nrespectively.\n\nRather than solve equation~(\\ref{eq1}) directly to determine $f(p,t)$\nfor a given source term $S(p,t)$, it is more instructive to first solve\nfor the Green's function, $f_{_{\\rm G}}(p_0,p,t_0,t)$, which satisfies the\nequation\n\\begin{equation}\n{\\partial f_{_{\\rm G}} \\over \\partial t}\n= - {1 \\over p^2} {\\partial \\over \\partial p}\\left\\{\np^2 \\left[A(p) \\, f_{_{\\rm G}}\n- D(p) {\\partial f_{_{\\rm G}} \\over \\partial p}\n\\right]\\right\\}\n- {f_{_{\\rm G}} \\over t_{\\rm esc}(p)}\n+ {\\delta(p-p_0) \\, \\delta(t-t_0) \\over 4 \\pi p_0^2}\n\\ ,\n\\label{eq4}\n\\end{equation}\nwhere $p_0$ is the initial momentum and $t_0$ is the initial time.\nThe source term in this equation corresponds to the injection of a single\nparticle per unit volume with momentum $p_0$ at time $t_0$. 
The particle\nnumber and energy densities associated with the Green's function are\ngiven by\n\\begin{equation}\nn_{_{\\rm G}}(t) = \\int_0^\\infty 4 \\pi \\, p^2 \\,\nf_{_{\\rm G}}(p_0,p,t_0,t) \\, dp\n\\ , \\ \\ \\ \\ \\ \nU_{_{\\rm G}}(t) = \\int_0^\\infty 4 \\pi \\, \\epsilon \\, p^2 \\,\nf_{_{\\rm G}}(p_0,p,t_0,t) \\, dp\n\\ .\n\\label{eq5}\n\\end{equation}\nOnce the Green's function solution has been determined, the {\\it\nparticular solution} associated with an arbitrary source distribution\n$S(p,t)$ can be computed using the integral convolution\n\\citep[e.g.,][]{bec03}\n\\begin{equation}\nf(p,t) = \\int_0^t \\int_0^\\infty\nf_{_{\\rm G}}(p_0,p,t_0,t) \\,\nS(p_0,t_0) \\, dp_0 \\, dt_0\n\\ ,\n\\label{eq6}\n\\end{equation}\nwhere we have assumed that the particle injection begins at time $t=0$\nand no particles are present in the plasma for $t < 0$.\n\n\\subsection{Transport Coefficients}\n\nIn Appendix~A.1, we demonstrate that for\narbitrary particle energies, the mean rate of change of the\nparticle momentum due to stochastic acceleration is related to the\nmomentum diffusion coefficient $D(p)$ via\n\\begin{equation}\n\\Big\\langle {dp \\over dt} \\Big\\rangle \\Big|_{\\rm stoch} = {1 \\over p^2}\n{d \\over dp}\\left(p^2 D\\right)\n\\ .\n\\label{eq8}\n\\end{equation}\nThe corresponding result for the mean rate of change of the kinetic energy\nis \\citep[see Appendix~A.2 and][]{mr89}\n\\begin{equation}\n\\Big\\langle {d\\epsilon \\over dt} \\Big\\rangle \\Big|_{\\rm stoch}\n= {1 \\over p^2} {d \\over dp} \\left(p^2 v D \\right) \\ ,\n\\label{eq8b}\n\\end{equation}\nwhere $v$ is the particle speed. If the MHD wave spectrum has the\npower-law form $W \\propto k^{-q}$ associated with Alfv\\'en and fast-mode\nwaves, then the momentum dependences of the diffusion coefficient $D(p)$\nand the mean escape time $t_{\\rm esc}(p)$ describing the resonant\npitch-angle scattering of relativistic particles are given by\n\\citep[e.g.,][]{dml96,mr89}\n\\begin{equation}\nD(p) = D_* \\, m^2 c^2 \\left({p \\over m c}\n\\right)^q \\ , \\ \\ \\ \\ \\ \\ \\\nt_{\\rm esc}(p) = t_* \\left({p \\over m c}\n\\right)^{q-2} \\ ,\n\\label{eq7}\n\\end{equation}\nwhere $D_* \\propto {\\rm s}^{-1}$ and $t_* \\propto {\\rm s}$ are\nconstants. We shall focus on transport scenarios with $q \\le 2$, so that\n$t_{\\rm esc}$ is either a decreasing or constant function of the\nmomentum $p$. In order to maintain the physical validity of the\nescape-timescale approach used here, we must require that $t_{\\rm esc}$\nexceed the light-crossing time $L\/c$ for a source with size $L$. This\nimplies a fundamental upper limit to the particle momentum when $q < 2$.\n\nBy combining equations~(\\ref{eq8}) and (\\ref{eq7}), we find that the\nmean rate of change of the momentum for relativistic particles\naccelerated stochastically by MHD waves is given by\n\\begin{equation}\n\\Big\\langle {dp \\over dt} \\Big\\rangle\\Big|_{\\rm stoch} = (q+2) \\, D_*\n\\, m c \\left({p \\over m c}\\right)^{q-1}\n\\ .\n\\label{eq9}\n\\end{equation}\nFor simplicity, we shall assume that the momentum dependence of the\nadditional, systematic loss\/acceleration processes appearing in the transport\nequation (\\ref{eq1}), described by the coefficient $A(p)$, mimics that of the\nstochastic acceleration (eq.~[\\ref{eq9}]). 
We therefore write\n\\begin{equation}\nA(p) = A_* \\, m c \\left({p \\over m c}\\right)^{q-1}\n\\ ,\n\\label{eq10}\n\\end{equation}\nwhere the constant $A_* \\propto {\\rm s}^{-1}$ determines the positive\n(negative) rate of systematic acceleration (losses). Note that this\nformulation precludes the treatment of loss processes with a quadratic\nenergy dependence (e.g., inverse-Compton or synchrotron) since that\nwould imply $q=3$, which is outside the range considered here. However,\nfirst-order Fermi acceleration at a shock or energy losses due to\nCoulomb collisions can be treated by setting $q=2$ with $A_*$ either\npositive or negative, respectively. This suggests that the results\nobtained here are relevant primarily for the transport of energetic\nions. However, even in this application one needs to bear in mind that\nsynchrotron and inverse-Compton losses will become dominant at\nsufficiently high energies \\citep[e.g.,][]{sch84,ss89,lmp06}.\n\nIt is convenient to transform to the dimensionless momentum and time variables\n$x$ and $y$, defined by\n\\begin{equation}\nx \\equiv {p \\over m c} \\ , \\ \\ \\ \\ \\ \\ \\ \ny \\equiv D_* \\, t\n\\ ,\n\\label{eq11}\n\\end{equation}\nin which case the transport equation~(\\ref{eq4}) for the Green's function becomes\n\\begin{equation}\n{\\partial f_{_{\\rm G}} \\over \\partial y}\n= {1 \\over x^2} {\\partial \\over \\partial x}\\left(\nx^{2+q} \\, {\\partial f_{_{\\rm G}} \\over \\partial x}\\right)\n- {a \\over x^2} {\\partial \\over \\partial x} \\left(\nx^{1+q} \\, f_{_{\\rm G}}\\right) - \\theta \\, x^{2-q} \\, f_{_{\\rm G}}\n+ {\\delta(x-x_0) \\, \\delta(y-y_0) \\over 4 \\pi m^3 c^3 x_0^2}\n\\ ,\n\\label{eq12}\n\\end{equation}\nwhere\n\\begin{equation}\nx_0 \\equiv {p_0 \\over m c} \\ , \\ \\ \\ \\ \\ \\ \\ \ny_0 \\equiv D_* \\, t_0\n\\ ,\n\\label{eq13}\n\\end{equation}\nand we have introduced the dimensionless constants\n\\begin{equation}\na \\equiv {A_* \\over D_*} \\ , \\ \\ \\ \\ \\ \\ \\ \n\\theta \\equiv {1 \\over D_* \\, t_*}\n\\ .\n\\label{eq14}\n\\end{equation}\nNote that $x$ equals the particle Lorentz factor in applications involving\nultrarelativistic particles. 
The constant $a$ describes the relative\nimportance of systematic gains or losses compared with the stochastic\nprocess.\n\n\\subsection{Fokker-Planck Equation}\n\nAdditional physical insight can be obtained by reorganizing equation~(\\ref{eq12})\nin the Fokker-Planck form\n\\begin{equation}\n{\\partial N_{_{\\rm G}} \\over \\partial y}\n= {\\partial^2 \\over \\partial x^2} \\left(\nx^q \\, N_{_{\\rm G}}\\right)\n- {\\partial \\over \\partial x} \\left[(q+2+a) \\, x^{q-1}\n\\, N_{_{\\rm G}}\\right] - \\theta \\, x^{2-q} \\, N_{_{\\rm G}}\n+ {\\delta(x-x_0) \\, \\delta(y-y_0)}\n\\ ,\n\\label{eq15}\n\\end{equation}\nwhere we have defined the Green's function number distribution $N_{_{\\rm G}}$\nusing\n\\begin{equation}\nN_{_{\\rm G}}(x_0,x,y_0,y) \\equiv 4 \\pi \\, m^3 c^3 \\, x^2 \\, f_{_{\\rm G}}(x_0,x,y_0,y)\n\\ .\n\\label{eq16}\n\\end{equation}\nThe Fokker-Planck coefficients appearing in equation~(\\ref{eq15}), which\ndescribe the evolution of the particle distribution due to the influence\nof stochastic and systematic processes, are given by \\citep{rei65}\n\\begin{equation}\n{1 \\over 2}{d\\sigma^2 \\over dy} = x^q \\ , \\ \\ \\ \\ \\ \n\\Big\\langle{dx\\over dy}\\Big\\rangle = (q+2+a) \\, x^{q-1} \\ ,\n\\label{eq17}\n\\end{equation}\nwhere the first coefficient describes the ``broadening'' of the distribution\ndue to momentum space diffusion, and the second represents the mean ``drift''\nof the particles (i.e., the average acceleration rate).\n\nEquation~(\\ref{eq15}) is equivalent to equation~(49) from \\citet{pp95}\nif we set their parameters $a_{\\rm pp} = 2+a$ and $s_{\\rm pp} = q-2$,\nwhere $s_{\\rm pp}$ denotes the power-law index describing the momentum\ndependence of the escape timescale. Our particular choice for $s_{\\rm\npp}$ reflects the physics of the resonant wave-particle interactions, as\nrepresented by equations~(\\ref{eq7}). The Fokker-Planck form of\nequation~(\\ref{eq15}) clearly reveals the fundamental nature of the\ntransport process. In particular, we note that in the limit $a \\to 0$,\nthe drift coefficient $\\langle dx\/dy\\rangle$ reduces to the purely\nstochastic result (eq.~[\\ref{eq9}]), as expected when systematic\ngains\/losses are excluded from the problem. The total number and energy\ndensities are computed in terms of $N_{_{\\rm G}}$ using (cf. eq.~[\\ref{eq5}])\n\\begin{equation}\nn_{_{\\rm G}}(y) = \\int_0^\\infty N_{_{\\rm G}}(x_0,x,y_0,y) \\, dx\n\\ , \\ \\ \\ \\ \\ \nU_{_{\\rm G}}(y) = \\int_0^\\infty \\epsilon \\, N_{_{\\rm G}}(x_0,x,y_0,y) \\, dx\n\\ ,\n\\label{eq18}\n\\end{equation}\nwhere (see eq.~[\\ref{eq3}])\n\\begin{equation}\n\\epsilon = m c^2 \\left[\\left(x^2 + 1\\right)^{1\/2}-1\\right]\n\\ .\n\\label{eq19}\n\\end{equation}\nSince the physical situation considered here corresponds to the injection\nof a single particle per unit volume at ``time'' $y=y_0$, it follows that\n$n_{_{\\rm G}}(y_0)=1$.
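\n\nAlthough the remainder of this paper is analytical, equation~(\\ref{eq15}) is also straightforward to explore by direct simulation. In the It\\^o interpretation, it corresponds to the stochastic differential equation $dx = (q+2+a) \\, x^{q-1} \\, dy + \\sqrt{2 \\, x^q} \\, dW$, with particles removed at the rate $\\theta \\, x^{2-q}$. A minimal Euler-Maruyama sketch (in Python, with illustrative, hypothetical parameter values) that can later be compared against the closed-form Green's function is:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nq, a, theta = 1.5, 0.0, 0.1      # illustrative parameter values only\nx0, y_end, dy = 1.0, 0.5, 1.0e-4\nx = np.full(200000, x0)          # ensemble injected at x = x0, y = 0\n\nfor _ in range(int(y_end \/ dy)):\n    drift = (q + 2 + a) * x**(q - 1)\n    noise = np.sqrt(2.0 * x**q * dy) * rng.standard_normal(x.size)\n    x_new = x + drift * dy + noise\n    # Escape with probability theta * x**(2 - q) * dy per step;\n    # discard unphysical excursions below zero as well.\n    keep = (rng.random(x.size) >= theta * x**(2.0 - q) * dy) & (x_new > 0)\n    x = x_new[keep]\n\n# A histogram of the survivors approximates N_G(x_0, x, 0, y_end),\n# and the surviving fraction approximates n_G(y_end), with n_G(0) = 1.\nhist, edges = np.histogram(x, bins=100)\n\\end{verbatim}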
\n\n\\section{SOLUTION FOR THE GREEN'S FUNCTION}\n\nThe result obtained for the Green's function number distribution\n$N_{_{\\rm G}}$ has two important special cases depending on whether $q = 2$\nor $q < 2$. \\citet{pp95} obtained the exact solution to\nequation~(\\ref{eq15}) for the hard sphere case \\citep[][]{ram79} with\n$q=2$ and their parameter $s_{\\rm pp}=0$, corresponding to a\nmomentum-independent escape timescale. In this section we derive the\nexact solution to the time-dependent problem with $q < 2$ and $s_{\\rm\npp}=q-2$, which describes the physics of the wave-particle interactions\n(see eqs.~[\\ref{eq7}]).\n\n\\subsection{Laplace Transformation}\n\nWe define the Laplace transformation of $N_{_{\\rm G}}$ using\n\\begin{equation}\nL(x_0,x,s) \\equiv \\int_0^\\infty e^{-s y} \\, N_{_{\\rm G}}(x_0,x,y_0,y) \\, dy\n\\ .\n\\label{eq20}\n\\end{equation}\nBy operating on the Fokker-Planck equation~(\\ref{eq15}) with\n$\\int_0^\\infty e^{-s y} \\, dy$, we obtain\n\\begin{equation}\n{d^2 \\over d x^2} \\left(x^q \\, L \\right)\n- {d \\over d x} \\left[(q+2+a) \\, x^{q-1}\n\\, L \\right] - \\theta \\, x^{2-q} \\, L - s L\n= - e^{-s y_0} \\, \\delta(x-x_0)\n\\ ,\n\\label{eq21}\n\\end{equation}\nor, equivalently,\n\\begin{equation}\n{d^2 L \\over d x^2} + \\left({q - 2 - a \\over x} \\right)\n{d L \\over d x}\n+ \\left[{(1-q)(2+a) \\over x^2} - {\\theta \\over x^{2 q-2}}\n- {s \\over x^q} \\right] \\, L = - {e^{-s y_0} \\, \\delta(x-x_0) \\over x^q}\n\\ .\n\\label{eq22}\n\\end{equation}\nThis equation can be transformed into standard form by introducing the\nnew momentum variables\n\\begin{equation}\nz(x) \\equiv {2 \\sqrt{\\theta} \\over 2 - q} \\ x^{2-q} \\ , \\ \\ \\ \\ \\ \nz_0(x_0) \\equiv {2 \\sqrt{\\theta} \\over 2 - q} \\ x_0^{2-q} \\ .\n\\label{eq23}\n\\end{equation}\nAfter some algebra, we find that the equation for $L$ now becomes\n\\begin{equation}\nz^2 \\, {d^2 L \\over d z^2} + {a+1 \\over q-2} \\, z \\,\n{d L \\over d z} + \\left[{(1-q)(2+a) \\over (2-q)^2}\n- {z^2 \\over 4} - {s z \\over c_0 (2-q)^2}\\right] \\, L\n= - {c_0 \\, e^{-s y_0} \\, \\delta(z-z_0) \\over 2-q} \\, \\left({z \\over c_0}\n\\right)^{(3-2q)\/(2-q)}\n\\ ,\n\\label{eq24}\n\\end{equation}\nwhere\n\\begin{equation}\nc_0 \\equiv {2 \\sqrt{\\theta} \\over 2-q} \\ .\n\\label{eq25}\n\\end{equation}\nThe solutions to equation~(\\ref{eq24}) obtained for $z \\ne z_0$ that satisfy\nthe high- and low-energy boundary conditions are given by\n\\begin{equation}\nL(z_0,z,s) = e^{-z\/2} \\, z^{(a+2)\/(2-q)} \\,\n\\cases{\nA \\, U(\\alpha,\\beta,z) \\ , & $z \\ge z_0$ \\ , \\cr\nB \\, M(\\alpha,\\beta,z) \\ , & $z \\le z_0$ \\ , \\cr\n}\n\\label{eq26}\n\\end{equation}\nwhere $M$ and $U$ denote the confluent hypergeometric functions\n\\citep{as70}, and\n\\begin{equation}\n\\alpha \\equiv {s + (a+3) \\sqrt{\\theta} \\over 2 (2-q) \\sqrt{\\theta}}\n\\ , \\ \\ \\ \\ \\ \\ \\\n\\beta \\equiv {a+3 \\over 2-q} \\ .\n\\label{eq27}\n\\end{equation}\nThe constants $A$ and $B$ appearing in equation~(\\ref{eq26}) are determined\nby ensuring that the function $L$ is continuous at $z=z_0$, and that it also\nsatisfies the derivative jump condition\n\\begin{equation}\n\\lim_{\\varepsilon \\to 0} {dL \\over dz} \\bigg|_{z_0-\\varepsilon}\n^{z_0+\\varepsilon} = - {c_0 \\, e^{-s y_0} \\over (2-q) \\, z_0^2} \\,\n\\left({z_0 \\over c_0} \\right)^{(3-2q)\/(2-q)}\n= {- e^{-s y_0} \\over 2 \\, x_0 \\sqrt{\\theta}}\n\\ ,\n\\label{eq28}\n\\end{equation}\nobtained by integrating the transport equation~(\\ref{eq24}) with respect\nto $z$ in a small range around the source momentum $z_0$.\n\nThe constant $B$ can be eliminated by combining the continuity and derivative\njump conditions. 
After some algebra, the solution obtained for $A$ is\n\\begin{equation}\nA = - \\, {e^{-s y_0} \\, e^{z_0\/2} \\, z_0^{(a+2)\/(q-2)} \\, M(\\alpha,\\beta,z_0)\n\\over 2 \\, x_0 \\sqrt{\\theta} \\, W(z_0)}\n\\ ,\n\\label{eq29}\n\\end{equation}\nwhere $W(z)$ denotes the Wronskian, defined by\n\\begin{equation}\nW(z) \\equiv\nM(\\alpha,\\beta,z) {d \\over dz} U(\\alpha,\\beta,z)\n- U(\\alpha,\\beta,z) {d \\over dz} M(\\alpha,\\beta,z)\n\\ .\n\\label{eq30}\n\\end{equation}\nUsing equation~(13.1.22) from \\citet{as70}, we find that $W$ is given\nby the exact expression\n\\begin{equation}\nW(z) = - \\, {\\Gamma(\\beta) \\, z^{-\\beta} \\, e^z \\over\n\\Gamma(\\alpha)}\n\\ ,\n\\label{eq31}\n\\end{equation}\nwhich can be combined with equation~(\\ref{eq29}) and the continuity\nrelation to obtain\n\\begin{equation}\nA = {\\Gamma(\\alpha) \\, z_0^{\\beta+(a+2)\/(q-2)} \\, e^{-s y_0} \\, e^{-z_0\/2}\n\\, M(\\alpha,\\beta,z_0) \\over 2 \\, x_0 \\sqrt{\\theta} \\, \\Gamma(\\beta)}\n\\ ,\n\\label{eq32}\n\\end{equation}\n\\begin{equation}\nB = {\\Gamma(\\alpha) \\, z_0^{\\beta+(a+2)\/(q-2)} \\, e^{-s y_0} \\, e^{-z_0\/2}\n\\, U(\\alpha,\\beta,z_0) \\over 2 \\, x_0 \\sqrt{\\theta} \\, \\Gamma(\\beta)}\n\\ .\n\\label{eq33}\n\\end{equation}\nThe final solution for the Laplace transformation $L$ can therefore\nbe written as\n\\begin{equation}\nL(z_0,z,s) = {\\Gamma(\\alpha) \\, z_0^\\beta \\over \\Gamma(\\beta) \\,\n2 \\, x_0 \\sqrt{\\theta}} \\, \\left({z \\over z_0} \\right)^{(a+2)\/(2-q)}\n\\, e^{-s y_0} \\, e^{-(z+z_0)\/2} \\,\nM(\\alpha,\\beta,z_{\\rm min}) \\, U(\\alpha,\\beta,z_{\\rm max})\n\\ ,\n\\label{eq34}\n\\end{equation}\nwhere\n\\begin{equation}\nz_{\\rm min} \\equiv \\min(z,z_0) \\ , \\ \\ \\ \\ \\ \\ \\\nz_{\\rm max} \\equiv \\max(z,z_0)\n\\ .\n\\label{eq35}\n\\end{equation}\n\n\\subsection{Inverse Transformation}\n\n\\begin{figure}[t]\n\\hspace{35mm}\n\\includegraphics[width=85mm]{f1.eps}\n\\caption{Integration contour $C$ used to evaluate equation~(\\ref{eq36}).}\n\\label{fig1}\n\\end{figure}\n\nThe solution for the Green's function $N_{_{\\rm G}}$ can be found using the\ncomplex Mellin inversion integral, which states that\n\\begin{equation}\nN_{_{\\rm G}}(z_0,z,y_0,y) = {1 \\over 2 \\pi i} \\int_{\\gamma-i \\infty}\n^{\\gamma + i \\infty} e^{s y} \\, L(z_0,z,s) \\, ds\n\\ ,\n\\label{eq36}\n\\end{equation}\nwhere $\\gamma$ is chosen so that the line ${\\rm Re} \\, s = \\gamma$ lies\nto the right of any singularities in the integrand. 
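
Incidentally, the Wronskian identity (eq.~[\ref{eq31}]) used above is easily
confirmed numerically before proceeding; e.g., with mpmath (an assumption
about the available toolchain; the test values are arbitrary):
\begin{verbatim}
import mpmath as mp

alpha, beta, z = mp.mpf('0.8'), mp.mpf('2.4'), mp.mpf('1.3')
M = lambda t: mp.hyp1f1(alpha, beta, t)
U = lambda t: mp.hyperu(alpha, beta, t)

wronskian = M(z)*mp.diff(U, z) - U(z)*mp.diff(M, z)           # eq. (30)
exact = -mp.gamma(beta)*z**(-beta)*mp.exp(z)/mp.gamma(alpha)  # eq. (31)
print(wronskian, exact)    # the two values agree
\end{verbatim}
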
The singularities
are simple poles located where $|\Gamma(\alpha)| \to \infty$, which
corresponds to $\alpha = -n$, with $n = 0,1,2,\ldots$.
Equation~(\ref{eq27}) therefore implies that there are an infinite
number of poles situated along the real axis at $s = s_n$, where
\begin{equation}
s_n = \left[2 \, (q-2) \, n - a - 3\right] \sqrt{\theta}
\ , \ \ \ n = 0,1,2,\ldots
\label{eq37}
\end{equation}
We can therefore employ the residue theorem to write
\begin{equation}
\oint_C e^{s y} \, L(z_0,z,s) \, ds
= 2 \pi i \sum_{n=0}^\infty \ {\rm Res}(s_n)
\ ,
\label{eq38}
\end{equation}
where $C$ is the closed integration contour indicated in Figure~1 and
${\rm Res}(s_n)$ denotes the residue associated with the pole at $s=s_n$.
Based on asymptotic analysis, we conclude that the contribution to the integral
due to the curved portion of the contour vanishes in the limit $R \to \infty$,
and consequently we can combine equations~(\ref{eq36}) and (\ref{eq38}) to obtain
\begin{equation}
N_{_{\rm G}}(z_0,z,y_0,y) = \sum_{n=0}^\infty \ {\rm Res}(s_n)
\ .
\label{eq39}
\end{equation}
Hence we need only evaluate the residues in order to determine the solution
for the Green's function.

\subsection{Evaluation of the Residues}

The residues associated with the simple poles at $s=s_n$ are evaluated
using the formula \citep[e.g.,][]{but68}
\begin{equation}
{\rm Res}(s_n) = \lim_{s \to s_n}
(s-s_n) \ e^{s y} \, L(z_0,z,s)
\ .
\label{eq40}
\end{equation}
Since the poles are associated with the function $\Gamma(\alpha)$
in equation~(\ref{eq34}), we will need to make use of the identity
\begin{equation}
\lim_{s \to s_n} (s-s_n) \ \Gamma(\alpha)
= {(-1)^n \over n!} \, 2 \, (2-q) \, \sqrt{\theta}
\ ,
\label{eq41}
\end{equation}
which follows from equations~(\ref{eq27}) and (\ref{eq37}). Combining
equations~(\ref{eq34}), (\ref{eq40}), and (\ref{eq41}), we find that the
residues are given by
\begin{equation}
{\rm Res}(s_n) = {(-1)^n \, (2-q) \, e^{s_n (y-y_0)} \, z_0^\beta \over
n! \, \Gamma(\beta) \, x_0} \, \left({z \over z_0}\right)^{(a+2)/(2-q)}
\, e^{-(z+z_0)/2} \, M(-n,\beta,z_{\rm min}) \, U(-n,\beta,z_{\rm max})
\ .
\label{eq42}
\end{equation}
Based on equations~(13.6.9) and (13.6.27) from \citet{as70}, we note
that the confluent hypergeometric functions appearing in this
expression reduce to Laguerre polynomials, and therefore our result
for the residue can be rewritten after some simplification as
\begin{equation}
{\rm Res}(s_n) = {n! \, e^{s_n (y-y_0)} \, z_0^\beta \, (2-q)
\over \Gamma(\beta+n) \, x_0} \, \left({z \over z_0}\right)^{(a+2)/(2-q)}
\, e^{-(z+z_0)/2} \, P_n^{(\beta-1)}(z) \, P_n^{(\beta-1)}(z_0)
\ ,
\label{eq43}
\end{equation}
where $P_n^{(\beta-1)}(z)$ denotes the generalized Laguerre polynomial.

\subsection{Closed-Form Expression for the Green's Function}

The result for the Green's function number distribution $N_{_{\rm G}}$ is obtained
by summing the residues (see eq.~[\ref{eq39}]), which yields
\begin{equation}
N_{_{\rm G}}(z_0,z,y_0,y) = \sum_{n=0}^\infty
\ {n!
\, e^{s_n (y-y_0)} \, z_0^\beta \, (2-q)
\over \Gamma(\beta+n) \, x_0} \, \left({z \over z_0}\right)^{(a+2)/(2-q)}
\, e^{-(z+z_0)/2} \, P_n^{(\beta-1)}(z) \, P_n^{(\beta-1)}(z_0)
\ .
\label{eq44}
\end{equation}
This convergent sum is a useful expression for the Green's function.
However, further progress can be made by employing the bilinear
generating function for the Laguerre polynomials, given by
equation~(8.976) from \citet{gr80}. After some algebra, we find that the
general closed-form solution can be written in the form
\begin{equation}
N_{_{\rm G}}(x_0,x,y_0,y) = {2-q \over x_0} \left({x \over x_0}\right)^{(a+1)/2}
\, {\sqrt{z z_0 \xi} \over 1 - \xi} \ \exp\left[-{(z + z_0)(1+\xi) \over 2
\, (1-\xi)} \right] \, I_{\beta-1}\left({2 \sqrt{z z_0 \xi} \over 1 - \xi}\right)
\ ,
\label{eq45}
\end{equation}
where
\begin{equation}
z(x) \equiv {2 \sqrt{\theta} \over 2 - q} \ x^{2-q} \ , \ \ \ \ \ 
z_0(x_0) \equiv {2 \sqrt{\theta} \over 2 - q} \ x_0^{2-q} \ , \ \ \ \ \ 
\beta \equiv {a+3 \over 2 - q} \ , \ \ \ \ \ 
\xi(y,y_0) \equiv e^{2(q-2)(y-y_0)\sqrt{\theta}}
\ .
\label{eq46}
\end{equation}
Note that the solution for $N_{_{\rm G}}$ depends on the time parameters $y$
and $y_0$ only through the ``age'' of the injected particles, $y-y_0$, as
indicated by the form for $\xi$. In the limit $\theta \to 0$,
corresponding to an infinite escape time, the Green's function number
distribution reduces to
\begin{equation}
N_{_{\rm G}}(x_0,x,y_0,y) \Big|_{\theta=0} \! = {(x x_0)^{(3-q)/2} \over (2-q)
\, x_0^2 \, (y-y_0)}
\left(x \over x_0 \right)^{a/2} \!\! \exp\left[-{(x^{2-q} + x_0^{2-q}) \over
(2-q)^2 (y-y_0)}\right]
I_{\beta-1}\!\left[{2 (x x_0)^{(2-q)/2} \over (2-q)^2 (y-y_0)}\right]
\ .
\label{eq47}
\end{equation}

The exact solution for the time-dependent Green's function describing the
evolution of a monoenergetic initial spectrum with $q < 2$, given
by equation~(\ref{eq45}), represents an interesting new contribution to
particle transport theory. The corresponding solution for the hard-sphere
case with $q = 2$, given by equation~(43) of \citet{pp95}, can be stated
in our notation as
\begin{equation}
N_{_{\rm G}}(x_0,x,y_0,y) \Big|_{q=2} = {e^{-\lambda (y-y_0)} \over 2 x_0
\sqrt{\pi (y-y_0)}} \left({x \over x_0}\right)^{(a+1)/2} \, \exp
\left[{-(\ln x - \ln x_0)^2 \over 4 \, (y-y_0)}\right]
\ ,
\label{eq48}
\end{equation}
where
\begin{equation}
\lambda \equiv {(a+1)^2 \over 4} + 2 + a + \theta
\ .
\label{eq49}
\end{equation}
We note that the general solution for $N_{_{\rm G}}$ given by
equation~(\ref{eq45}) agrees with equation~(\ref{eq48}) in the
limit $q \to 2$, as required. Furthermore, equation~(\ref{eq45})
allows $q$ to take on {\it negative} values if desired, and it is also
applicable over a broad range of both positive and negative values for
$a$. Recall that when $a=0$, there are no systematic acceleration or
loss processes included in the model. Positive (negative) values for $a$
imply additional systematic acceleration (losses).

\subsection{Transition to the Stationary Solution}

The analytical solutions we have obtained for the Green's function
provide a complete description of the response of the system to the
impulsive injection of monoenergetic particles at any desired time.
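
As a practical consistency check, the truncated residue sum
(eq.~[\ref{eq44}]) converges rapidly to the closed form (eq.~[\ref{eq45}]).
The comparison below evaluates the $P_n^{(\beta-1)}$ with SciPy's generalized
Laguerre polynomials; the Python stack is assumed, the parameter values are
illustrative, and the helper name zf is ours:
\begin{verbatim}
import numpy as np
from scipy.special import eval_genlaguerre, gammaln, iv

q, a, theta = 5.0/3.0, 0.0, 1.0        # illustrative parameters
x0, x, dy = 1.0, 2.0, 0.3              # dy = y - y0
beta = (a + 3)/(2 - q)
zf = lambda xx: 2*np.sqrt(theta)/(2 - q)*xx**(2 - q)
z, z0 = zf(x), zf(x0)

series = 0.0                           # residue sum, eq. (44)
for n in range(60):
    s_n = (2*(q - 2)*n - a - 3)*np.sqrt(theta)        # eq. (37)
    series += (np.exp(gammaln(n + 1) - gammaln(beta + n) + s_n*dy)
               * z0**beta*(2 - q)/x0*(z/z0)**((a + 2)/(2 - q))
               * np.exp(-(z + z0)/2)
               * eval_genlaguerre(n, beta - 1, z)
               * eval_genlaguerre(n, beta - 1, z0))

xi = np.exp(2*(q - 2)*dy*np.sqrt(theta))              # eq. (46)
closed = ((2 - q)/x0*(x/x0)**((a + 1)/2)*np.sqrt(z*z0*xi)/(1 - xi)
          * np.exp(-(z + z0)*(1 + xi)/(2*(1 - xi)))
          * iv(beta - 1, 2*np.sqrt(z*z0*xi)/(1 - xi)))  # eq. (45)

print(series, closed)                  # agree to several digits
\end{verbatim}
The ratio $n!/\Gamma(\beta+n)$ is computed via gammaln to avoid overflow at
large $n$.
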
The\ngenerality of these expressions allows one to obtain the particular\nsolution for the distribution function $f$ associated with any\narbitrary momentum-time source function $S$ using the convolution\ngiven by equation~(\\ref{eq6}). One case of special interest is the\nspectrum resulting from the {\\it continual injection} of monoenergetic\nparticles beginning at time $t=0$, described by the source term\n\\begin{equation}\nS(p,t) = \\cases{\n\\dot N_0 \\, \\delta(p-p_0) \\ , & $t \\ge 0$ \\ , \\cr\n0 \\ , & $t < 0$ \\ , \\cr\n}\n\\label{eq50}\n\\end{equation}\nwhere $\\dot N_0$ denotes the rate of injection of particles with\nmomentum $p_0$ per unit volume per unit time. We assume that no\nparticles are present for $ t < 0$. Combining equations~(\\ref{eq6})\nand (\\ref{eq50}), we find that the time-dependent distribution function\nresulting from monoenergetic particle injection is given by\n\\begin{equation}\nf(p,t) = \\dot N_0 \\int_0^t f_{_{\\rm G}}(p_0,p,t_0,t) \\, dt_0 \\ .\n\\label{eq51}\n\\end{equation}\nBy transforming to the dimensionless variables $x$ and $y$ and employing\nequation~(\\ref{eq16}), we conclude that the particular solution for the\nnumber distribution associated with continual monoenergetic particle injection\ncan be written as\n\\begin{equation}\nN(x,y) \\equiv 4 \\pi \\, m^3 c^3 \\, x^2 \\, f(x,y)\n= {\\dot N_0 \\over D_*} \\int_0^y N_{_{\\rm G}}(x_0,x,y_0,y) \\, dy_0 \\ ,\n\\label{eq52}\n\\end{equation}\nwhere we have used equation~(\\ref{eq13}) to make the substitution $dt_0\n= dy_0 \/ D_*$. Since $N_{_{\\rm G}}$ depends on the time parameters $y$ and\n$y_0$ only through the combination $y-y_0$ (see eqs.~[\\ref{eq45}] and\n[\\ref{eq48}]), it follows that\n\\begin{equation}\nN(x,y) = {\\dot N_0 \\over D_*} \\int_0^y N_{_{\\rm G}}(x_0,x,0,y') \\, dy' \\ .\n\\label{eq53}\n\\end{equation}\nThis is a more convenient form for $N(x,y)$ because $y$ now appears only\nin the upper integration bound.\n\nFor general, finite values of $y$, the time-dependent particular\nsolution for $N(x,y)$ given by equation~(\\ref{eq53}) must be computed\nnumerically by substituting for $N_{_{\\rm G}}$ using either\nequation~(\\ref{eq45}) or (\\ref{eq48}), depending on the value of $q$.\nHowever, as $y \\to \\infty$, the solution rapidly approaches a stationary\nresult representing a balance between injection, acceleration, and\nparticle escape. The form of the stationary solution, called the {\\it\nsteady-state Green's function}, $N^{\\rm G}_{\\rm ss}$, can be obtained by\ndirectly solving the transport equation~(\\ref{eq1}) with $\\partial\nf\/\\partial t=0$ and $S(p,t) = \\dot N_0 \\, \\delta(p-p_0)$. 
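
As a practical aside, the integral in equation~(\ref{eq53}) is well behaved
and is easily evaluated by adaptive quadrature. The sketch below (SciPy
assumed; parameters illustrative; the helper names NG and zf are ours) uses
the exponentially scaled Bessel function ive to avoid overflow at small
$y'$:
\begin{verbatim}
import numpy as np
from scipy.special import ive
from scipy.integrate import quad

q, a, theta = 5.0/3.0, 0.0, 1.0          # illustrative parameters
x0, Ndot0, Dstar = 1.0, 1.0, 1.0
beta = (a + 3)/(2 - q)
zf = lambda xx: 2*np.sqrt(theta)/(2 - q)*xx**(2 - q)

def NG(x, dy):                           # closed form, eq. (45)
    z, z0 = zf(x), zf(x0)
    xi = np.exp(2*(q - 2)*dy*np.sqrt(theta))
    w = 2*np.sqrt(z*z0*xi)/(1 - xi)
    expo = -(z + z0)*(1 + xi)/(2*(1 - xi)) + w   # ive(v,w) = iv(v,w)*exp(-w)
    return ((2 - q)/x0*(x/x0)**((a + 1)/2)*np.sqrt(z*z0*xi)/(1 - xi)
            * np.exp(expo)*ive(beta - 1, w))

def N(x, y):                             # continual injection, eq. (53)
    return Ndot0/Dstar*quad(lambda yp: NG(x, yp), 0.0, y)[0]

for y in (1.0, 5.0, 20.0):
    print(y, N(2.0, y))
\end{verbatim}
The printed values rise toward a constant as $y$ increases, illustrating the
approach to the stationary state considered next.
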
Alternatively,\nthe steady-state Green's function can also be computed by taking the\nlimit of the time-dependent solution, which yields\n\\begin{equation}\nN^{\\rm G}_{\\rm ss}(x) \\equiv \\lim_{y \\to \\infty}\nN(x,y) = {\\dot N_0 \\over D_*} \\int_0^\\infty N_{_{\\rm G}}(x_0,x,0,y')\n\\, dy' \\ .\n\\label{eq54}\n\\end{equation}\nIn the $q < 2$ case, we can substitute for $N_{_{\\rm G}}$ using equation~(\\ref{eq45})\nand evaluate the resulting integral using equation~(6.669.4) from \\citet{gr80}.\nAfter some algebra, we obtain\n\\begin{equation}\nN^{\\rm G}_{\\rm ss}(x) = {\\dot N_0 \\over (2-q) \\, x_0 D_*}\n\\left(x \\over x_0\\right)^{(a+1)\/2} (x x_0)^{(2-q)\/2}\n\\, I_{\\beta-1 \\over 2} \\left(\\sqrt{\\theta} \\, x_{\\rm min}^{2-q} \\over 2-q\\right)\n\\, K_{\\beta-1 \\over 2} \\left(\\sqrt{\\theta} \\, x_{\\rm max}^{2-q} \\over 2-q\\right)\n\\ ,\n\\label{eq55}\n\\end{equation}\nwhere $\\beta = (a+3)\/(2-q)$, and\n\\begin{equation}\nx_{\\rm min} \\equiv \\min(x,x_0) \\ , \\ \\ \\ \\ \\ \\ \\\nx_{\\rm max} \\equiv \\max(x,x_0)\n\\ .\n\\label{eq56}\n\\end{equation}\nLikewise, for the case with $q=2$, we can substitute for $N_{_{\\rm G}}$ using\nequation~(\\ref{eq48}) and then utilize equation~(3.471.9) from \\citet{gr80}\nto conclude that the steady-state solution is given by\n\\begin{equation}\nN^{\\rm G}_{\\rm ss}(x) \\Big|_{q=2} = {\\dot N_0 \\over 2 \\, x_0 D_* \\sqrt{\\lambda}}\n\\left(x \\over x_0\\right)^{(a+1)\/2} \\left(x_{\\rm max} \\over x_{\\rm min}\\right)^{-\\sqrt{\\lambda}}\n\\ ,\n\\label{eq57}\n\\end{equation}\nwhere $\\lambda$ is defined by equation~(\\ref{eq49}). The steady-state\nsolutions given by equations~(\\ref{eq55}) and (\\ref{eq57}) agree with\nthe results obtained by directly solving the transport equation. Due to\nthe asymptotic behavior of the Bessel $K_\\nu(z)$ function for large $z$,\nequation~(\\ref{eq55}) indicates that $N^{\\rm G}_{\\rm ss}$ exhibits an\nexponential cutoff at high energies when $q < 2$ \\citep{as70}. This\ncorresponds physically to the fact that the escape timescale decreases\nwith increasing particle momentum in this case. However, when $q=2$\n(eq.~[\\ref{eq57}]), the spectrum displays a pure power-law behavior at\nhigh energies because the escape timescale is independent of the\nparticle momentum. Specific examples illustrating these behaviors will be\npresented in \\S~4.\n\n\\subsection{Escaping Particle Distribution}\n\nThe various expressions we have obtained for the Green's function\n$N_{_{\\rm G}}$ describe the momentum distribution of the particles remaining\nin the plasma at time $t$ after injection occurring at time $t_0$.\nHowever, since our model incorporates a physically realistic,\nmomentum-dependent escape timescale $t_{\\rm esc}(p)$ given by\nequation~(\\ref{eq7}), it is also quite interesting to compute the\nspectrum of the {\\it escaping} particles, which may form an energetic\noutflow capable of producing observable radiation. In general, the\nnumber distribution of the escaping particles, $\\dot N^{\\rm esc}(x,y)$,\nis related to the current distribution of particles in the plasma, $N(x,y)$,\nvia\n\\begin{equation}\n\\dot N^{\\rm esc}(x,y) \\equiv t_{\\rm esc}^{-1} \\, N(x,y)\n= \\theta D_* \\ x^{2-q} \\, N(x,y)\n\\ ,\n\\label{eq58}\n\\end{equation}\nwhere we have used equations~(\\ref{eq7}) and (\\ref{eq14}) to obtain the\nfinal result. 
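
Equations~(\ref{eq55}) and (\ref{eq57}) are straightforward to implement with
standard Bessel routines, and such an implementation is also convenient for
evaluating the escaping-particle expressions that follow. A minimal sketch
(SciPy assumed; parameters illustrative; the helper name Nss is ours):
\begin{verbatim}
import numpy as np
from scipy.special import iv, kv

q, a, theta = 5.0/3.0, 0.0, 1.0      # illustrative parameters
x0, Ndot0, Dstar = 1.0, 1.0, 1.0
beta = (a + 3)/(2 - q)

def Nss(x):                          # steady-state Green's function, eq. (55)
    xmin, xmax = min(x, x0), max(x, x0)
    nu = (beta - 1)/2
    return (Ndot0/((2 - q)*x0*Dstar)*(x/x0)**((a + 1)/2)*(x*x0)**((2 - q)/2)
            * iv(nu, np.sqrt(theta)*xmin**(2 - q)/(2 - q))
            * kv(nu, np.sqrt(theta)*xmax**(2 - q)/(2 - q)))

for x in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(x, Nss(x))   # exponential cutoff at large x when q < 2
\end{verbatim}
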
The quantity $\\dot N^{\\rm esc}(x,y)\\,dx$ represents the number\nof particles escaping per unit volume per unit time with dimensionless\nmomenta between $x$ and $x+dx$.\n\nAn important special case is the evolution of the escaping particle\nspectrum resulting from {\\it impulsive monoenergetic injection} at\ndimensionless time $y=y_0$. In this application, equation~(\\ref{eq58})\ngives the Green's function number spectrum for the escaping particles,\ndefined by\n\\begin{equation}\n\\dotN_{_{\\rm G}}^{\\rm esc}(x_0,x,y_0,y)\n\\equiv \\theta D_* \\ x^{2-q} \\, N_{_{\\rm G}}(x_0,x,y_0,y)\n\\ ,\n\\label{eq59}\n\\end{equation}\nwhere $N_{_{\\rm G}}$ is evaluated using either equation~(\\ref{eq45}) or\n(\\ref{eq48}) depending on the value of $q$. We can use this expression\nfor $\\dotN_{_{\\rm G}}^{\\rm esc}$ to compute the {\\it total} escaping number\ndistribution resulting from an impulsive flare occurring at $t=0$ by\nintegrating the escaping distribution with respect to time, which yields\n\\begin{equation}\n\\DeltaN_{_{\\rm G}}^{\\rm esc}(x) \\equiv \\int_0^\\infty \\dotN_{_{\\rm G}}^{\\rm esc} \\, dt\n= \\theta \\, x^{2-q} \\int_0^\\infty N_{_{\\rm G}}(x_0,x,0,y') \\, dy'\n\\ ,\n\\label{eq60}\n\\end{equation}\nwhere we have used equation~(\\ref{eq59}) and taken advantage of the fact\nthat $N_{_{\\rm G}}$ depends on $y$ and $y_0$ only through the difference $y-y_0$\n(cf. eq.~[\\ref{eq53}]).\nBy comparing equations~(\\ref{eq54}) and\n(\\ref{eq60}), we deduce that\n\\begin{equation}\n\\DeltaN_{_{\\rm G}}^{\\rm esc}(x) = {\\theta D_* \\over \\dot N_0}\n\\ x^{2-q} \\, N^{\\rm G}_{\\rm ss}(x)\n\\ ,\n\\label{eq61}\n\\end{equation}\nso that the total escaping spectrum is simply proportional to the steady-state\nGreen's function resulting from continual injection, as expected. The process\nconsidered here corresponds to the injection of a single particle at time $t=0$,\nand therefore we find that the normalization of $\\DeltaN_{_{\\rm G}}^{\\rm esc}$ is\ngiven by\n\\begin{equation}\n\\int_0^\\infty \\DeltaN_{_{\\rm G}}^{\\rm esc}(x) \\, dx = 1\n\\ ,\n\\label{eq62}\n\\end{equation}\nwhich provides a useful check on the numerical results.\n\nAnother interesting example is the case of continual monoenergetic\nparticle injection commencing at time $t=0$, described by the source\nterm given by equation~(\\ref{eq50}). 
The time-dependent buildup of the
escaping particle spectrum $\dot N^{\rm esc}(x,y)$ in this scenario can
be analyzed by using equation~(\ref{eq53}) to substitute for $N(x,y)$ in
equation~(\ref{eq58}), which yields
\begin{equation}
\dot N^{\rm esc}(x,y)
= \dot N_0 \, \theta \ x^{2-q} \int_0^y N_{_{\rm G}}(x_0,x,0,y') \, dy'
\ .
\label{eq63}
\end{equation}
In the limit $y \to \infty$, the escaping particle spectrum approaches the
steady-state result
\begin{equation}
\dot N_{\rm ss}^{\rm esc}(x) \equiv \lim_{y \to \infty} \dot N^{\rm esc}(x,y)
= \dot N_0 \, \theta \ x^{2-q} \int_0^\infty N_{_{\rm G}}(x_0,x,0,y') \, dy'
\ .
\label{eq64}
\end{equation}
By comparing equations~(\ref{eq54}), (\ref{eq61}), and (\ref{eq64}), we
conclude that
\begin{equation}
\dot N_{\rm ss}^{\rm esc}(x)
= \theta D_* \ x^{2-q} \, N^{\rm G}_{\rm ss}(x)
= \dot N_0 \ \DeltaN_{_{\rm G}}^{\rm esc}(x)
\ .
\label{eq65}
\end{equation}
The final result can be combined with equation~(\ref{eq62}) to show that
the escaping spectrum satisfies the normalization relation
\begin{equation}
\int_0^\infty \dot N_{\rm ss}^{\rm esc}(x) \, dx = \dot N_0 \ ,
\label{eq66}
\end{equation}
as expected for the case of continual steady-state injection.

Note that analytical expressions for the steady-state escaping spectrum $\dot
N_{\rm ss}^{\rm esc}(x)$ can be obtained by substituting for $N^{\rm G}_{\rm
ss}(x)$ in equation~(\ref{eq65}) using either equation~(\ref{eq55}) or
(\ref{eq57}), depending on the value of $q$. We obtain
\begin{equation}
\dot N^{\rm esc}_{\rm ss}(x) = {\dot N_0 \, \theta \, x^{2-q} \over (2-q) \, x_0}
\left(x \over x_0\right)^{(a+1)/2} (x x_0)^{(2-q)/2}
\, I_{\beta-1 \over 2} \left(\sqrt{\theta} \, x_{\rm min}^{2-q} \over 2-q\right)
\, K_{\beta-1 \over 2} \left(\sqrt{\theta} \, x_{\rm max}^{2-q} \over 2-q\right)
\ ,
\label{eq67}
\end{equation}
for $q < 2$, and
\begin{equation}
\dot N^{\rm esc}_{\rm ss}(x) = {\dot N_0 \, \theta \, x^{2-q} \over
2 \, x_0 \sqrt{\lambda}}
\left(x \over x_0\right)^{(a+1)/2} \left(x_{\rm max} \over x_{\rm min}\right)^{-\sqrt{\lambda}}
\ ,
\label{eq68}
\end{equation}
for $q = 2$.

\section{RESULTS}

The new result we have derived for the time-dependent Green's function
(eq.~[\ref{eq45}]) displays a rich behavior through its complex
dependence on momentum, time, and the dimensionless parameters $q$, $a$,
and $\theta$, which are related to the fundamental physical transport
coefficients $D_*$, $A_*$, and $t_*$ via equations~(\ref{eq14}). Here we
present several example calculations in order to illustrate the utility
of the new solution. Detailed applications to astrophysical situations,
including active galaxies and $\gamma$-ray bursts, will be presented in
subsequent papers.

\begin{figure}[t]
\includegraphics[width=6in]{f2.eps}
\vskip-0.6in
\caption{Green's function solutions to the time-dependent stochastic
particle acceleration equation for $x_0=1$. The left panels depict the
impulsive-injection solution, $N_{_{\rm G}}$ (eqs.~[\ref{eq45}] and
[\ref{eq48}]), and the right panels illustrate the response to uniform,
continuous injection, $N$ (eq.~[\ref{eq53}]), with $\dot N_0=D_*=1$.
Note that $N$ approaches the corresponding steady-state
solution (eqs.~[\ref{eq55}] and [\ref{eq57}]) as $y$ increases.
The indices of the wave turbulence spectra are indicated, with $q = 2$ for
hard-sphere scattering, $q = 5/3$ for a Kolmogorov cascade, and $q =
3/2$ for a Kraichnan cascade.}
\label{fig2}
\end{figure}

The panels on the left-hand side of Figure~2 depict the time-dependent
Green's function solution, $N_{_{\rm G}}$, describing the evolution of the
particle distribution in the plasma resulting from {\it impulsive}
monoenergetic injection at $y=0$. Results are presented for the
hard-sphere scattering case ($q = 2$), computed using
equation~(\ref{eq48}), and for the Kolmogorov ($q = 5/3$) and Kraichnan
($q = 3/2$) cases, evaluated using equation~(\ref{eq45}). The only
acceleration mechanism considered here is the stochastic acceleration
associated with the second-order Fermi process, and therefore we set $a
= 0$. The escape parameter $\theta$ is set equal to unity, so that the
timescale for escape is comparable to the diffusion timescale. As the
wave turbulence spectrum becomes steeper (i.e., as $q$ increases), a
larger fraction of the turbulence energy is contained in waves with long
wavelengths, which interact resonantly with higher-energy particles.
Steeper turbulence spectra therefore produce harder particle
distributions, as can be confirmed in the plots. Consequently, we
conclude that an ensemble of hard-sphere scattering centers is more
effective at accelerating nonthermal particles than a Kolmogorov wave
spectrum, which in turn is more effective than a Kraichnan spectrum.

The panels on the right-hand side of Figure~2 illustrate the buildup of
the particle spectrum in the plasma, $N(x,y)$, due to {\it continual}
monoenergetic injection beginning at $y=0$, computed by evaluating
numerically the integral in equation~(\ref{eq53}). We have set $a=0$, and
therefore the acceleration is purely stochastic. As $y \to \infty$, the
spectrum approaches the steady-state form given by equation~(\ref{eq55})
for $q<2$ or by equation~(\ref{eq57}) for $q=2$. In the hard-sphere case
($q=2$), the particle spectrum displays a power-law shape at high
energies, in agreement with equation~(\ref{eq57}). However, when $q < 2$,
particle escape dominates over acceleration at high energies, and
therefore the steady-state distribution is truncated, even in the
absence of radiative losses. This effect produces the quasi-exponential
turnovers exhibited by the stationary spectra when $q = 5/3$ and $q =
3/2$. Particle escape in these cases mimics energy losses due to, for
example, synchrotron emission from electrons.

\begin{figure}[t]
\hskip0.2in\plottwo{f3a.eps}{f3b.eps}
\hskip0.5truein
\caption{Evolution of the particle distribution in the plasma resulting
from monoenergetic injection with $q = 5/3$, $a = 0$, $\theta = 0.1$,
and $x_0=1$. Panel~({\it a}) depicts the Green's function, $N_{_{\rm G}}$,
resulting from impulsive monoenergetic injection (eq.~[\ref{eq45}]), and
panel ({\it b}) illustrates the response to continual monoenergetic
injection (eq.~[\ref{eq53}]) and the corresponding steady-state solution
(eq.~[\ref{eq55}]) for $\dot N_0=D_*=1$.}
\label{fig3}
\end{figure}

Figures~3 and 4 illustrate the effects of varying the values of the
escape parameter $\theta$ and the systematic acceleration/loss parameter
$a$ for the $q = 5/3$ case.
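
These trends can be previewed quantitatively from equation~(\ref{eq55})
before examining the figures in detail. The rough scan below (illustrative
only; SciPy assumed; helper names ours) estimates the momentum at which the
local logarithmic slope of $N^{\rm G}_{\rm ss}$ steepens sharply, a simple
proxy for the position of the quasi-exponential cutoff:
\begin{verbatim}
import numpy as np
from scipy.special import iv, kv

q, x0 = 5.0/3.0, 1.0

def Nss(x, a, theta):                    # eq. (55), up to Ndot0/Dstar
    beta = (a + 3)/(2 - q)
    xmin, xmax = np.minimum(x, x0), np.maximum(x, x0)
    return ((x/x0)**((a + 1)/2)*(x*x0)**((2 - q)/2)/((2 - q)*x0)
            * iv((beta - 1)/2, np.sqrt(theta)*xmin**(2 - q)/(2 - q))
            * kv((beta - 1)/2, np.sqrt(theta)*xmax**(2 - q)/(2 - q)))

x = np.logspace(0.0, 6.0, 601)
for a, theta in [(0.0, 1.0), (0.0, 0.1), (2.0, 0.1)]:
    slope = np.gradient(np.log(Nss(x, a, theta)), np.log(x))
    xc = x[np.argmax(slope < -3.0)]      # crude turnover estimate
    print("a =", a, " theta =", theta, " turnover near x ~", round(xc, 1))
\end{verbatim}
Smaller values of $\theta$ and larger values of $a$ push the turnover to
higher momenta, in accord with the behavior seen in the figures.
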
In Figure~3, we set $a=0$ and $\theta = 0.1$, which corresponds to a
particle escape timescale an order of magnitude longer than in the
case depicted in Figure~2. The
longer escape time allows the particles to be accelerated to higher mean
energies before diffusing out of the plasma, and the decay of the
particle density at late times takes place much more slowly than for
larger values of $\theta$. The hardening of the spectra causes the
quasi-exponential cutoffs to move to higher energies, and the same
effect can also be noted in the corresponding stationary solutions. The
calculations represented in Figure~4 include additional systematic
acceleration processes that are modeled by setting $a = 2$. The enhanced
particle acceleration further hardens the spectrum and shifts the cutoff
to even higher energies. Although the slope of the low-energy particle
distribution ($x < x_0$) is not altered much when only $\theta$ is
varied (see Figs.~2 and 3), this slope becomes significantly steeper
when $a$ is increased, as indicated in Figure~4.

In the left-hand panels of Figures~5 and 6 we plot the time-dependent
solution for the Green's function describing the escaping particles,
$\dotN_{_{\rm G}}^{\rm esc}$, resulting from {\it impulsive} monoenergetic
particle injection (eq.~[\ref{eq59}]) when $q = 5/3$. The right-hand
panels illustrate the buildup of the escaping spectrum, $\dot N^{\rm
esc}$ (eq.~[\ref{eq63}]), resulting from {\it continual} injection
beginning at $y=0$. Note the transition to the steady-state solution,
$\dot N_{\rm ss}^{\rm esc}$ (eq.~[\ref{eq64}]), as $y \to \infty$. In
Figure~5 we set $\theta=0.1$ and $a=0$, and in Figure~6 we set
$\theta=0.1$ and $a=2$. Included for comparison are the corresponding
spectra describing the particle distributions in the plasma at the same
values of $y$. We point out that the escaping particle spectra are
significantly harder than the in situ distributions, which reflects the
preferential escape of the high-energy particles resulting from the
momentum dependence of the escape timescale when $q<2$ (see
eq.~[\ref{eq7}]).

\begin{figure}[t]
\hskip0.2in\plottwo{f4a.eps}{f4b.eps}
\caption{Same as Fig.~3, except $a = 2$.}
\label{fig4}
\end{figure}

\section{DISCUSSION AND SUMMARY}

We have derived new, closed-form solutions (eqs.~[\ref{eq45}] and
[\ref{eq67}]) for the time-dependent Green's function representing the
stochastic acceleration of relativistic ions interacting with MHD waves.
The analytical results we have obtained describe the time-dependent
distributions for both the accelerated (in situ) and the escaping
particles. The Fokker-Planck transport equation considered here includes
momentum diffusion with coefficient $D(p) \propto p^q$, particle escape
with mean timescale $t_{\rm esc} \propto p^{q-2}$, and additional
systematic acceleration/losses with a rate proportional to $p^{q-1}$,
where $p$ is the particle momentum and $q$ is the index of the wave
turbulence spectrum.
This specific scenario describes the resonant\ninteraction of particles with fast-mode and Alfv\\'en waves, which is one\nof the fundamental acceleration processes in high-energy astrophysics\n\\citep[e.g.,][]{dml96}.\n\nThe new analytical result complements the work of \\citet{pp95} since it\nis applicable for any $q < 2$ provided $s_{\\rm pp} = q-2$, where $s_{\\rm\npp}$ is the power-law index used by these authors to describe the\nmomentum dependence of the escape timescale. The most closely related\nprevious result is given by their equation~(59), which treats the case\nwith $q \\ne 2$ and $s_{\\rm pp}=0$, corresponding to a\nmomentum-independent escape timescale. Our analytical solution\n(eq.~[\\ref{eq45}]) agrees with theirs in the limit $q \\to 2$, as\nexpected. The general features of our new solution were discussed in\n\\S~4, where it was demonstrated that increasingly hard particle spectra\nresult from larger values of the wave index $q$, smaller values of the\nescape parameter $\\theta$, and larger values of the systematic\nacceleration parameter $a$. The rich behavior of the solution as a\nfunction of momentum, time, and the parameters $q$, $\\theta$, and $a$\nprovides useful physical insight into the nature of the coupled\nenergetic\/spatial particle transport in astrophysical plasmas.\n\n\\begin{figure}[t]\n\\hskip0.2in\\plottwo{f5a.eps}{f5b.eps}\n\\caption{Evolution of the particle distribution resulting from\nmonoenergetic injection with $q = 5\/3$, $a = 0$, $\\theta = 0.1$, $x_0 =\n1$, $\\dot N_0=1$, and $D_*=1$. Panel~({\\it a}) treats the case of\nimpulsive injection, with the thin lines representing the particle\ndistribution in the plasma (eq.~[\\ref{eq45}]) and the heavy lines denoting\nthe escaping particle spectrum (eq.~[\\ref{eq59}]) in units of $\\theta$.\nPanel~({\\it b}) illustrates the response to continual monoenergetic\ninjection for the particle distribution in the plasma (eqs.~[\\ref{eq53}] and\n[\\ref{eq55}]; thin lines) and for the escaping particle spectrum\n(eqs.~[\\ref{eq63}] and [\\ref{eq65}]; heavy lines).\nSee the discussion in the text.}\n\\label{fig5}\n\\end{figure}\n\n\\begin{figure}[t]\n\\hskip0.2in\\plottwo{f6a.eps}{f6b.eps}\n\\caption{Same as Fig.~5, except $a = 2$.}\n\\label{fig6}\n\\end{figure}\n\nThe solutions presented here can be used to describe the acceleration\nand transport of relativistic ions in astrophysical environments in\nwhich the turbulence spectrum is very poorly known and can be\napproximated by a power law, such as $\\gamma$-ray bursts, active\ngalaxies, magnetized coronae around black holes, and the intergalactic\nmedium in clusters of galaxies. For example, the hard X-ray emission\nfrom black-hole jet sources such as Cygnus X-1 \\citep{mal06} and the\nmicroquasar LS 5039 \\citep{aha05} could be powered by the stochastic\nacceleration of ions in a black-hole accretion disk corona that\nsubsequently escape and interact with surrounding material. 
In this
scenario, persistent acceleration of monoenergetic particles injected
into the corona, or flaring episodes averaged over a sufficiently long
time, would produce a time-averaged escaping distribution of
relativistic protons given by equation~(\ref{eq67}) for $q < 2$.
Assuming only stochastic acceleration, so that $a = 0$, the number
distribution of escaping particles with $x \geq x_0$ takes the form
\begin{equation}
\dot N_{\rm ss}^{\rm esc}(x) 
\propto x^{(7-3q)/2} K_{{\beta - 1\over 2}}\left( {\sqrt{\theta} 
x^{2-q} \over 2-q}\right) \propto \cases{
x^{3-2q} \ , & $x \ll \left( {2-q\over \sqrt{\theta}}\right)^{1/(2-q)}$ \ \cr
x^{(5-2q)/2}\exp\left(-{\sqrt{\theta}x^{2-q}\over 2-q}\right) \ , & 
$x \gg \left({2-q\over \sqrt{\theta}}\right)^{1/(2-q)}$ \ \cr
}.
\label{eq69}
\end{equation}
When the escaping hadrons collide with ambient gas or stellar wind
material, they would generate X-rays and $\gamma$-rays via a pion
production cascade with a very hard spectrum leading up to a
quasi-exponential cutoff. The {\it Gamma-ray Large Area Space Telescope}
(GLAST) will be able to provide detailed spectra from Galactic
black-hole sources and unidentified EGRET $\gamma$-ray sources to test
for the existence of this hard component. Our new analytical solution
can also be used to model the variability of radiation from ions
accelerated in the accretion-disk coronae of Seyfert galaxies by
changing the level of turbulence. Flaring $\sim 100\,$MeV -- GeV
radiation produced by this process could be weakly detectable by GLAST.
Additional applications of our work include studies of the
stochastic acceleration of relativistic cosmic rays in $\gamma$-ray
bursts \citep{dh01}.

The results we have obtained describe both the momentum distribution of
the particles in the plasma, and the momentum distribution of the
particles that escape to form energetic outflows. Since the solutions do
not include inverse-Compton or synchrotron losses, which are usually
important for energetic electrons, they are primarily applicable to
cases involving ion acceleration. In order to treat the acceleration of
relativistic electrons, a generalized calculation including both
stochastic particle acceleration and radiative losses is desirable, and
we are currently working to extend the analytical model presented here
to incorporate these effects. Beyond the direct utility of the new
analytical solutions for probing the nature of particle acceleration in
astrophysical sources, we note that they are also useful for
benchmarking numerical simulations.

\acknowledgements
T.~L. is funded through NASA {\it GLAST} Science Investigation No.~DPR-S-1563-Y.
The work of C.~D.~D. and P.~A.~B. is supported by the Office of Naval Research.
The authors also acknowledge the useful comments provided by the anonymous
referee.