diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzancl" "b/data_all_eng_slimpj/shuffled/split2/finalzzancl" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzancl" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe idea of cosmological inflation is capable to address some\nproblems of the standard big bang theory, such as the horizon,\nflatness and monopole problems. Also, it can provide a reliable\nmechanism for generation of density perturbations responsible for\nstructure formation and therefore temperature anisotropies in Cosmic\nMicrowave Background (CMB)spectrum [1-8]. There are a wide variety\nof cosmological inflation models where viability of their\npredictions in comparison with observations makes them to be\nacceptable or unacceptable (see for instance [9] for this purpose).\nThe simplest inflationary model is a single scalar field scenario in\nwhich inflation is driven by a scalar field called the inflaton that\npredicts adiabatic, Gaussian and scale-invariant fluctuations [10].\nBut, recently observational data have revealed some degrees of\nscale-dependence in the primordial density perturbations. Also,\nPlanck team have obtained some constraints on the primordial\nnon-Gaussianity [11-13]. Therefore, it seems that extended models of\ninflation which can explain or address this scale-dependence and\nnon-Gaussianity of perturbations are more desirable. There are a lot\nof studies in this respect, some of which can be seen in Refs.\n[14-19] with references therein. Among various inflationary models,\nthe non-minimal models have attracted much attention. Non-minimal\ncoupling of the inflaton field and gravitational sector is\ninevitable from the renormalizability of the corresponding field\ntheory (see for instance [20]). Cosmological inflation driven by a\nscalar field non-minimally coupled to gravity are studied, for\ninstance, in Refs. [21-28]. There were some issues on the unitarity\nviolation with non-minimal coupling (see for instance, Refs.\n[29-31]) which have forced researchers to consider possible coupling\nof the derivatives of the scalar field with geometry [32]. In fact,\nit has been shown that a model with nonminimal coupling between the\nkinetic terms of the inflaton (derivatives of the scalar field) and\nthe Einstein tensor preserves the unitary bound during inflation\n[33]. Also, the presence of nonminimal derivative coupling is a\npowerful tool to increase the friction of an inflaton rolling down\nits own potential [33]. Some authors have considered the model with\nthis coupling term and have studied the early time accelerating\nexpansion of the universe as well as the late time dynamics [34-36].\nIn this paper we extend the non-minimal inflation models to the case\nthat a canonical inflaton field is coupled non-minimally to the\ngravitational sector and in the same time the derivatives of the\nfield are also coupled to the background geometry (Einstein's\ntensor). This model provides a more realistic framework for treating\ncosmological inflation in essence. We study in details the\ncosmological perturbations and possible non-Gaussianities in the\ndistribution of these perturbations in this non-minimal inflation.\nWe expand the action of the model up to the third order and compare\nour results with observational data from Planck2015 to see the\nviability of this extended model. 
In this manner we are able to\nconstrain the parameter space of the model in comparison with\nobservation.\n\n\n\\section{Field Equations}\n\nWe consider an inflationary model where both a canonical scalar field and its\nderivatives are coupled non-minimally to gravity. The four-dimensional action for\nthis model is given by the following expression:\n\n\\begin{equation}\nS=\\frac{1}{2}\\int\nd^{4}x\\sqrt{-g}\\Bigg[M_{p}^{2}f(\\phi)R+\\frac{1}{\\widetilde{M}^{2}}G_{\\mu\\nu}\\partial^{\\mu}\\phi\\partial^{\\nu}\\phi-2V(\\phi)\\Bigg]\\,,\n\\end{equation}\nwhere $M_{p}$ is the reduced Planck mass, $\\phi$ is a canonical scalar field,\n$f(\\phi)$ is a general function of the scalar field and\n$\\widetilde{M}$ is a mass parameter. The energy-momentum tensor is obtained from action (1) as follows\n\n\\vspace{0.5cm}\n\n$T_{\\mu\\nu}=\\frac{1}{2\\widetilde{M}^{2}}\\bigg[\\nabla_{\\mu}\\nabla_{\\nu}(\\nabla^{\\alpha}\\phi\\nabla_{\\alpha}\\phi)\n-g_{\\mu\\nu}\\Box(\\nabla^{\\alpha}\\phi\\nabla_{\\alpha}\\phi)\n+g_{\\mu\\nu}g^{\\alpha\\rho}g^{\\beta\\lambda}\\nabla_{\\rho}\\nabla_{\\lambda}(\\nabla_{\\alpha}\\phi\\nabla_{\\beta}\\phi)$\n\\begin{equation}\n+\\Box(\\nabla_{\\mu}\\phi\n\\nabla_{\\nu}\\phi)\\bigg]-\\frac{g^{\\alpha\\beta}}{\\widetilde{M}^{2}}\n\\nabla_{\\beta}\\nabla_{\\mu}(\\nabla_{\\alpha}\\phi\n\\nabla_{\\nu}\\phi)-M_{p}^{2}\\nabla_{\\mu}\\nabla_{\\nu}f(\\phi)+M_{p}^{2}g_{\\mu\\nu}\\Box\nf(\\phi)+g_{\\mu\\nu}V(\\phi)\\,.\n\\end{equation}\n\nOn the other hand, variation of the action (1) with respect to the\nscalar field gives the scalar field equation of motion as\n\n\\begin{equation}\n\\frac{1}{2}M_{p}^{2}Rf'(\\phi)-\\frac{1}{\\widetilde{M}^{2}}G^{\\mu\\nu}\\nabla_{\\mu}\\nabla_{\\nu}\\phi-V'(\\phi)=0\\,,\n\\end{equation}\nwhere a prime denotes a derivative with respect to the scalar field. We consider a spatially flat\nFriedmann-Robertson-Walker (FRW) line element as\n\n\\begin{equation}\nds^{2}=-dt^{2}+a^{2}(t)\\delta_{ij}dx^{i}dx^{j}\\,,\n\\end{equation}\nwhere $a(t)$ is the scale factor. Now, let's assume that $f(\\phi)=\\frac{1}{2}\\phi^{2}$. In this\nframework, $T_{\\mu\\nu}$ leads to the following energy density and\npressure for this model, respectively\n\n\\begin{equation}\n\\rho=\\frac{9H^{2}}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2}-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)+V(\\phi)\n\\end{equation}\n\n$$p=-\\frac{3}{2}\\frac{H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-\\frac{\\dot{\\phi}^{2}\\dot{H}}{\\widetilde{M}^{2}}\n-\\frac{2H}{\\widetilde{M}^{2}}\\dot{\\phi}\\ddot{\\phi}$$\n\\begin{equation}\n+\\frac{1}{2}M_{p}^{2}\\Bigg[2\\dot{H}\\phi^{2}+3H^{2}\\phi^{2}\n+4H\\phi\\dot{\\phi}+2\\phi\\ddot{\\phi}+2\\dot{\\phi}^{2}\\Bigg]-V(\\phi)\\,,\n\\end{equation}\nwhere a dot refers to a derivative with respect to the cosmic time. 
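\n\nBefore writing the background equations explicitly, we note that the geometric ingredients entering Eq. (3) on the FRW background can be cross-checked symbolically. The following is a minimal sketch (our own illustration, not part of the original derivation) that verifies $R=6(\\dot{H}+2H^{2})$ and $G^{\\mu\\nu}\\nabla_{\\mu}\\nabla_{\\nu}\\phi=3H^{2}\\ddot{\\phi}+3H(3H^{2}+2\\dot{H})\\dot{\\phi}$, the combination that appears (divided by $\\widetilde{M}^{2}$) in the field equation below:\n\n\\begin{verbatim}\n# Hedged sketch (ours): symbolic cross-check of FRW quantities with sympy.\nimport sympy as sp\n\nt = sp.symbols('t')\na = sp.Function('a', positive=True)(t)\nphi = sp.Function('phi')(t)\nH = sp.diff(a, t) / a\n\n# FRW curvature in the (-,+,+,+) signature\nR = 6 * (sp.diff(H, t) + 2 * H**2)              # Ricci scalar\nG00 = 3 * H**2                                  # G^{00}\nGij = -(2 * sp.diff(H, t) + 3 * H**2) / a**2    # G^{ij} = Gij * delta^{ij}\n\n# Covariant second derivatives of a homogeneous phi(t):\n# nabla_0 nabla_0 phi = phi'' and\n# nabla_i nabla_j phi = -Gamma^0_{ij} phi' = -a^2 H phi' delta_{ij}\nn00 = sp.diff(phi, t, 2)\nnij = -a**2 * H * sp.diff(phi, t)\n\n# Contraction over the three spatial directions\ncontraction = G00 * n00 + 3 * Gij * nij\nexpected = (3 * H**2 * sp.diff(phi, t, 2)\n            + 3 * H * (3 * H**2 + 2 * sp.diff(H, t)) * sp.diff(phi, t))\nprint(sp.simplify(contraction - expected))      # prints 0\n\\end{verbatim}\n\nWith these ingredients, Eq. (3) reduces on the FRW background to the scalar field equation given below.\n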
The equations of motion following from action (1) are\n\n\\begin{equation}\nH^{2}=\\frac{1}{3M_{p}^{2}}\\Bigg[-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)+\\frac{9H^{2}}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2}+V(\\phi)\\Bigg]\\,,\n\\end{equation}\n\n$$\\dot{H}=-\\frac{1}{2M_{p}^{2}}\\Bigg[\\dot{\\phi}^{2}\\bigg(\\frac{3H^{2}}{\\widetilde{M}^{2}}-\\frac{\\dot{H}}{\\widetilde{M}^{2}}\\bigg)\n-\\frac{2H}{\\widetilde{M}^{2}}\\dot{\\phi}\\ddot{\\phi}-\\frac{3}{2}M_{p}^{2}H\\phi(2\\dot{\\phi}+H\\phi)$$\n\\begin{equation}+\\frac{1}{2}M_{p}^{2}\\bigg((2\\dot{H}+3H^{2})\\phi^{2}+4H\\phi\\dot{\\phi}+2\\phi\\ddot{\\phi}+2\\dot{\\phi}^{2}\\bigg)\\Bigg]\n\\end{equation}\n\n\\begin{equation}\n-3M_{p}^{2}(2H^{2}+\\dot{H})\\phi+\\frac{3H^{2}}{\\widetilde{M}^{2}}\\ddot{\\phi}+3H\\bigg(\\frac{3H^{2}}{\\widetilde{M}^{2}}\n+\\frac{2\\dot{H}}{\\widetilde{M}^{2}}\\bigg)\\dot{\\phi}+V'(\\phi)=0\\,.\n\\end{equation}\n\nThe slow-roll parameters in this model are defined as\n\\begin{equation}\n\\epsilon\\equiv-\\frac{\\dot{H}}{H^{2}}\\,\\,\\,\\,,\\,\\,\\,\\,\\eta\\equiv-\\frac{1}{H}\\frac{\\ddot{H}}{\\dot{H}}\\,.\n\\end{equation}\nTo have an inflationary phase, $\\epsilon$ and $\\eta$ should satisfy the slow-roll conditions ($\\epsilon\\ll1$, $\\eta\\ll1$). In our setup, we\nfind the following result\n\\begin{equation}\n\\epsilon=\\bigg[1+\\frac{\\phi^{2}}{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}M_{p}^{2}}\\bigg]^{-1}\n\\bigg[\\frac{3\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}M_{p}^{2}}+\\frac{\\phi\\dot{\\phi}}{2H}\n+\\frac{\\ddot{\\phi}}{H\\dot{\\phi}}\\bigg(\\frac{\\phi\\dot{\\phi}}{2H}\n-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}M_{p}^{2}}\\bigg)\\bigg]\n\\end{equation}\nand\n\\begin{equation}\n\\eta=-2\\epsilon-\\frac{\\dot{\\epsilon}}{H\\epsilon}\\,.\n\\end{equation}\n\nWithin the slow-roll approximation, equations (7), (8) and (9) can be\nwritten respectively as\n\\begin{equation}\nH^{2}\\simeq\\frac{1}{3M_{p}^{2}}\\Bigg[-\\frac{3}{2}M_{p}^{2}H^{2}\\phi^{2}+V(\\phi)\\Bigg]\\,,\n\\end{equation}\n\n\\begin{equation}\n\\dot{H}\\simeq-\\frac{1}{2M_{p}^{2}}\\Bigg[\\frac{3H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-M_{p}^{2}H\\phi\\dot{\\phi}+M_{p}^{2}\\dot{H}\\phi^{2}\\Bigg]\\,,\n\\end{equation}\nand\n\\begin{equation}\n-6M_{p}^{2}H^{2}\\phi+\\frac{9H^{3}\\dot{\\phi}}{\\widetilde{M}^{2}}+V'(\\phi)\\simeq0\\,.\n\\end{equation}\nThe number of e-folds during inflation is defined as\n\\begin{equation}\n{\\cal N}=\\int_{t_{hc}}^{t_{e}}H\\,dt\\,,\n\\end{equation}\nwhere $t_{hc}$ and $t_{e}$ are the time of horizon crossing and the end of\ninflation, respectively. The number of e-folds in the slow-roll approximation in our setup can be\nexpressed as follows\n\n\\begin{equation}\n{\\cal N}\\simeq\\int_{\\phi_{hc}}^{\\phi_{e}}\\frac{V(\\phi)d\\phi}{M_{p}^{2}\\bigg(1\n+\\frac{1}{2}\\phi^{2}\\bigg)\\Bigg[2M_{p}^{2}\\widetilde{M}^{2}\\phi\n-M_{p}^{2}\\widetilde{M}^{2}\\frac{V'(\\phi)}{V(\\phi)}\\bigg(1+\\frac{1}{2}\\phi^{2}\\bigg)\\Bigg]}\\,.\n\\end{equation}\nHaving provided the basic setup of the model, we now study the perturbations and compare\nthem with observations in order to test the cosmological viability of this extended model.\n\n\n\\section{Second-Order Action: Linear Perturbations}\n\nIn this section, we study linear perturbations around the\nhomogeneous background solution. To this end, the first step is\nexpanding the action (1) up to the second order in small fluctuations. 
It\nis convenient to work in the ADM formalism given by [37]\n\\begin{equation}\nds^{2}=-N^{2}dt^{2}+h_{ij}(N^{i}dt+dx^{i})(N^{j}dt+dx^{j})\\,,\n\\end{equation}\nwhere $N^{i}$ is the shift vector and $N$ is the lapse function.\nWe expand the lapse function and shift vector as $N=1+2\\Phi$ and\n$N^{i}=\\delta^{ij}\\partial_{j}\\Upsilon$, respectively, where $\\Phi$\nand $\\Upsilon$ are three-scalars. Also,\n$h_{ij}=a^{2}(t)[(1+2\\Psi)\\delta_{ij}+\\gamma_{ij}]$, where $\\Psi$ is\nthe spatial curvature perturbation and $\\gamma_{ij}$ is a shear\nthree-tensor which is traceless and symmetric. In the rest of our study, we\nchoose $\\delta\\phi=0$ and $\\gamma_{ij}=0$. By taking into account\nthe scalar perturbations at linear order, the metric (18) is written\nas (see for instance [38])\n\\begin{equation}\nds^{2}=-(1+2\\Phi)dt^{2}+2\\partial_{i}\\Upsilon\ndtdx^{i}+a^{2}(t)(1+2\\Psi)\\delta_{ij}dx^{i}dx^{j}\\,.\n\\end{equation}\n\nNow, by replacing metric (19) in action (1) and expanding the action up to\nsecond order in the perturbations, we find (see for instance [39,40])\n\n$$S^{(2)}=\\int dt dx^{3}a^{3}\\Bigg[-\\frac{3}{2}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}^{2}\n+\\frac{1}{a^{2}}((M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}$$\n$$-(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi)\\partial^{2}\\Upsilon\n-\\frac{1}{a^{2}}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi\\partial^{2}\\Psi$$\n$$+3(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi\\dot{\\Psi}\n+3H(-\\frac{1}{2}M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}$$\n\\begin{equation}\n+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Phi^{2}\n+\\frac{1}{2a^{2}}(M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})(\\partial\\Psi)^{2}\\Bigg]\\,.\n\\end{equation}\n\nBy variation of action (20) with respect to $N$ and $N^{i}$ we find\n\\begin{equation}\n\\Phi=\\frac{M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}{M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}\\dot{\\Psi}\\,,\n\\end{equation}\n\n$$\\partial^{2}\\Upsilon=\\frac{2a^{2}}{3}\\frac{(-\\frac{9}{2}M_{p}^{2}H^{2}\\phi^{2}-9M_{p}^{2}H\\phi\\dot{\\phi}\n+\\frac{27H^{2}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})}{(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})}$$\n\\begin{equation}\n+3\\dot{\\Psi}a^{2}-\\frac{M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}{M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}}\\dot{\\Psi}\\,. 
\\end{equation}\nFinally, the second-order action can be rewritten as follows\n\\begin{equation}\nS^{(2)}=\\int dt\ndx^{3}a^{3}\\vartheta_{s}\\bigg[\\dot{\\Psi}^{2}-\\frac{c_{s}^{2}}{a^{2}}(\\partial\\Psi)^{2}\\bigg]\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\vartheta_{s}\\equiv6\\frac{(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(-\\frac{1}{2}M_{p}^{2}H^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}\nH^{2}\\dot{\\phi}^{2})}{(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})^{2}}+\n3(\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{1}{2\\widetilde{M}^{2}}\\dot{\\phi}^{2})\n\\end{equation}\nand\n$$c_{s}^{2}\\equiv\\frac{3}{2}\\bigg\\{(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}\n(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})H$$\n$$-(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})$$\n$$+4(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})(M_{p}^{2}\\phi\\dot{\\phi}-\\frac{\\dot{\\phi}\\ddot{\\phi}}{\\widetilde{M}^{2}})\n(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})$$\n$$-(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}(M_{p}^{2}\\dot{H}\\phi^{2}+2M_{p}^{2}H\\phi\\dot{\\phi}+M_{p}^{2}\\dot{\\phi}^{2}+M_{p}^{2}\\phi\\ddot{\\phi}\n-\\frac{3\\dot{H}\\dot{\\phi}^{2}}{\\widetilde{M}^{2}}-\\frac{6}{\\widetilde{M}^{2}}H\\dot{\\phi}\\ddot{\\phi})\\bigg\\}$$\n$$\\bigg\\{9[\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}}][4(\\frac{1}{2}M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{2\\widetilde{M}^{2}})\n(-\\frac{1}{2}M_{p}^{2}H^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}H^{2}\\dot{\\phi}^{2})$$\n\\begin{equation}\n+(M_{p}^{2}H\\phi^{2}+M_{p}^{2}\\phi\\dot{\\phi}-\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})^{2}]\\bigg\\}^{-1}\\,.\n\\end{equation}\n\nIn order to obtain the quantum perturbation $\\Psi$, we find the\nequation of motion of the curvature perturbation by varying the action (23),\nwhich gives\n\\begin{equation}\n\\ddot{\\Psi}+\\bigg(3H+\\frac{\\dot{\\vartheta}_{s}}{\\vartheta_{s}}\\bigg)\\dot{\\Psi}+\\frac{c_{s}^{2}k^{2}}{a^{2}}\\Psi=0\\,.\n\\end{equation}\nBy solving the above equation up to the lowest order in the slow-roll approximation,\nwe find\n\\begin{equation}\n\\Psi=\\frac{iH\\exp(-ic_{s}k\\tau)}{2c_{s}^{\\frac{3}{2}}\\sqrt{k^{3}\\vartheta_{s}}}(1+ic_{s}k\\tau)\\,.\n\\end{equation}\nBy using the two-point correlation functions we can study the power\nspectrum of the curvature perturbation in this setup. We find the two-point correlation\nfunction by obtaining the vacuum expectation value at the end of inflation.\nWe define the power spectrum $P_{s}$, as\n\\begin{equation}\n\\langle0|\\Psi(0,\\textbf{k}_{1})\\Psi(0,\\textbf{k}_{2})|0\\rangle=\\frac{2\\pi^{2}}{k^{3}}P_{s}(2\\pi)^{3}\\delta^{3}(\\textbf{k}_{1}+\\textbf{k}_{2})\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nP_{s}=\\frac{H^{2}}{8\\pi^{2}\\vartheta_{s} c_{s}^{3}}\\,.\n\\end{equation}\n\nThe spectral index of scalar perturbations is given by (see Refs. 
[41-43] for more details on the cosmological perturbations in generalized gravity theories and also the inflationary spectral index in these theories)\n\n\\begin{equation}\nn_{s}-1=\\frac{d\\ln P_{s}}{d\\ln\nk}|_{c_{s}k=aH}=-2\\epsilon-\\delta_{F}-\\eta_{s}-S\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\delta_{F}=\\frac{\\dot{f}}{H(1+f)}\\,\\,\\,\\,,\\,\\,\\,\\,\\eta_{s}=\\frac{\\dot{\\epsilon_{s}}}{H\\epsilon_{s}}\\,\\,\\,\\,,\\,\\,\\,\\,S=\\frac{\\dot{c_{s}}}{Hc_{s}}\n\\end{equation}\nand\n\\begin{equation}\n\\epsilon_{s}=\\frac{\\vartheta_{s}c_{s}^{2}}{M_{p}^{2}(1+f)}.\n\\end{equation}\n\n\nFinally, we obtain\n\n\\begin{equation}\nn_{s}-1=-2\\epsilon-\\frac{1}{H}\\frac{d\\ln c_{s}}{dt}\n-\\frac{1}{H}\\frac{d\\ln[2H(1+\\frac{\\phi^{2}}{2})\\epsilon+\\phi\\dot{\\phi}]}{dt}\\,,\n\\end{equation}\nwhich shows the scale dependence of the perturbations through the deviation of $n_{s}$\nfrom unity.\n\nNow we study tensor perturbations in this setup. To this end, we write the metric as follows\n\\begin{equation}\nds^{2}=-dt^{2}+a(t)^{2}(\\delta_{ij}+T_{ij})dx^{i}dx^{j}\\,,\n\\end{equation}\nwhere $T_{ij}$ is a spatial shear 3-tensor which is transverse and\ntraceless. It is convenient to write $T_{ij}$ in terms of two\npolarization modes, as follows\n\\begin{equation}\nT_{ij}=T_{+}e^{+}_{ij}+T_{\\times}e^{\\times}_{ij}\\,,\n\\end{equation}\nwhere $e^{+}_{ij}$ and $e^{\\times}_{ij}$ are the polarization tensors. In this case the second-order action for the tensor modes can be\nwritten as\n\\begin{equation}\nS_{T}=\\int dt dx^{3}\na^{3}\\vartheta_{T}\\bigg[\\dot{T}_{(+,\\times)}^{2}-\\frac{c_{T}^{2}}{a^{2}}(\\partial\nT_{(+,\\times)})^{2}\\bigg]\\,,\n\\end{equation}\nwhere by definition\n\\begin{equation}\n\\vartheta_{T}\\equiv\\frac{1}{8}(M_{p}^{2}\\phi^{2}-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\end{equation}\nand\n\\begin{equation}\nc_{T}^{2}\\equiv\\frac{\\widetilde{M}^{2}M_{p}^{2}\\phi^{2}+\\dot{\\phi}^{2}}{\\widetilde{M}^{2}M_{p}^{2}\\phi^{2}-\\dot{\\phi}^{2}}\\,.\n\\end{equation}\n\nNow, the amplitude of tensor perturbations is given by\n\\begin{equation}\nP_{T}=\\frac{H^{2}}{2\\pi^{2}\\vartheta_{T}c_{T}^{3}}\\,,\n\\end{equation}\nwhere the tensor spectral index is defined as\n\\begin{equation}\nn_{T}\\equiv\\frac{d\\ln P_{T}}{d\\ln\nk}|_{c_{T}k=aH}\\,=-2\\epsilon-\\delta_{F}.\n\\end{equation}\nUsing the above equations, we finally get\n\\begin{equation}\nn_{T}=-2\\epsilon-\\frac{\\phi\\dot{\\phi}}{H(1+\\frac{\\phi^{2}}{2})}\\,.\n\\end{equation}\n\nThe tensor-to-scalar ratio, an important observational quantity in our setup, is given by\n\\begin{equation}\nr=\\frac{P_{T}}{P_{s}}=16c_{s}\\bigg(\\epsilon+\\frac{\\phi\\dot{\\phi}}{2H(1+\\frac{\\phi^{2}}{2})}+O(\\epsilon^{2})\\bigg)\\simeq-8c_{s}n_{T}\n\\end{equation}\nwhich generalizes the standard consistency relation.\n\n\\section{Third-Order Action: Non-Gaussianity}\n\nSince a two-point correlation function of the scalar perturbations\ngives no information about possible non-Gaussian features of the distribution, we study\nhigher-order correlation functions. A three-point correlation function is capable of giving the required information. For this\npurpose, we should expand the action (1) up to the third order in small\nfluctuations around the homogeneous background solutions. 
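\n\nBefore carrying out this expansion, we note that the consistency relation (42) can be checked numerically in a transparent way. The following minimal sketch (our own illustration, with purely illustrative background values rather than fitted ones) evaluates both sides of $r=-8c_{s}n_{T}$ from Eqs. (40)-(42):\n\n\\begin{verbatim}\n# Hedged numerical check (ours) of r = -8 c_s n_T, Eqs. (40)-(42).\ndef n_tensor(eps, phi, phidot, H):\n    return -2.0 * eps - phi * phidot / (H * (1.0 + phi**2 / 2.0))\n\ndef tensor_to_scalar(eps, phi, phidot, H, cs):\n    return 16.0 * cs * (eps\n        + phi * phidot / (2.0 * H * (1.0 + phi**2 / 2.0)))\n\n# illustrative (not fitted) background values\neps, phi, phidot, H, cs = 0.01, 0.1, 1e-3, 1.0, 0.9\nprint(tensor_to_scalar(eps, phi, phidot, H, cs),\n      -8.0 * cs * n_tensor(eps, phi, phidot, H))  # agree at first order\n\\end{verbatim}\n\nSince $r=16c_{s}(\\epsilon+x/2)$ and $n_{T}=-2\\epsilon-x$ with $x\\equiv\\phi\\dot{\\phi}/[H(1+\\phi^{2}/2)]$, the two printed numbers coincide identically at this order.\n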
Expanding to third order, we obtain\n\n\\vspace{0.5cm} $S^{(3)}=\\int\ndtdx^{3}a^{3}\\bigg\\{3\\Phi^{3}[M_{p}^{2}H^{2}(1+\\frac{\\phi^{2}}{2})+M_{p}^{2}H\\phi\\dot{\\phi}-\\frac{5}{\\widetilde{M}^{2}}H^{2}\\dot{\\phi}^{2}]\n+\\Phi^{2}[9\\Psi(-\\frac{1}{2}M_{p}^{2}H^{2}\\phi^{2}-M_{p}^{2}H\\phi\\dot{\\phi}$\n\\vspace{0.5cm} $+\\frac{3}{\\widetilde{M}^{2}}H^{2}\\dot{\\phi}^{2})\n+6\\dot{\\Psi}(-M_{p}^{2}H(1+\\frac{\\phi^{2}}{2})-\\frac{1}{2}M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})\n-\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}a^{2}}\\partial^{2}\\Psi\n-\\frac{2}{a^{2}}\\partial^{2}\\Upsilon(-M_{p}^{2}H$\\vspace{0.05cm}\n$(1+\\frac{\\phi^{2}}{2})-\\frac{1}{2}M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3}{\\widetilde{M}^{2}}H\\dot{\\phi}^{2})]\n+\\Phi[\\frac{1}{a^{2}}(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\partial_{i}\\Psi\\partial_{i}\\Upsilon\n-9(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+$ \\vspace{0.5cm}\n$\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}\\Psi+\\frac{1}{2a^{4}}(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n(\\partial_{i}\\partial_{j}\\Upsilon\\partial_{i}\\partial_{j}\\Upsilon-\\partial^{2}\\Upsilon\\partial^{2}\\Upsilon)\n+\\frac{1}{a^{2}}(-M_{p}^{2}H\\phi^{2}-M_{p}^{2}\\phi\\dot{\\phi}+\\frac{3H\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\Psi\\partial^{2}\\Upsilon\n$ \\vspace{0.5cm}\n$+\\frac{2}{a^{2}}(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\dot{\\Psi}\\partial^{2}\\Upsilon+\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\Psi\\partial^{2}\\Psi\n+\\frac{1}{2a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})(\\partial\\Psi)^{2}\n-6(M_{p}^{2}(1+\\frac{\\phi^{2}}{2})+$ \\vspace{0.5cm}\n$\\frac{3}{2}\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}^{2}]+\\frac{1}{2a^{2}}(M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n\\Psi(\\partial\\Psi)^{2}\n+\\frac{9}{2}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}^{2}\\Psi\n-\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\dot{\\Psi}\\partial_{i}\\Psi\\partial_{i}\\Upsilon\n-\\frac{1}{a^{2}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})$\n\\begin{equation}\n\\dot{\\Psi}\\Psi\\partial^{2}\\Upsilon-\\frac{3}{4a^{4}}\\Psi(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\n(\\partial_{i}\\partial_{j}\\Upsilon\\partial_{i}\\partial_{j}\\Upsilon-\\partial^{2}\\Upsilon\\partial^{2}\\Upsilon)\n+\\frac{1}{a^{4}}(-M_{p}^{2}\\phi^{2}+\\frac{\\dot{\\phi}^{2}}{\\widetilde{M}^{2}})\\partial_{i}\\Psi\\partial_{i}\\Upsilon\\partial^{2}\\Upsilon\\bigg\\}\n\\end{equation}\n\nWe use Eqs. (21) and (22) to eliminate $\\Phi$ and $\\Upsilon$ from this relation. 
To this end, we introduce the quantity $\\chi$ as follows\n\n\\begin{equation}\n\\Upsilon=\\frac{M_{p}^{2}\\widetilde{M}^{2}\\phi^{2}-\\dot{\\phi}^{2}}{\\widetilde{M}^{2}M_{p}^{2}(H\\phi^{2}+\\phi\\dot{\\phi})-3H\\dot{\\phi}^{2}}\\Psi\n+\\frac{2\\widetilde{M}^{2}a^{2}\\chi}{M_{p}^{2}\\widetilde{M}^{2}\\phi^{2}-\\dot{\\phi}^{2}}\\,,\n\\end{equation}\nwhere\n\n\\begin{equation}\n\\partial^{2}\\chi=\\vartheta_{s}\\dot{\\Psi}\\,.\n\\end{equation}\n\nNow the third-order action (43) takes the following form\n\n\\vspace{0.5cm}\n\n$S^{(3)}=\\int dt\\,\ndx^{3}a^{3}\\bigg\\{[-3M_{p}^{2}c_{s}^{-2}\\Psi\\dot{\\Psi}^{2}\n+M_{p}^{2}a^{-2}\\Psi(\\partial\\Psi)^{2}+M_{p}^{2}c_{s}^{-2}H^{-1}\\dot{\\Psi}^{3}]$\n\\begin{equation}\n\\bigg[(1+\\frac{1}{4}\\phi^{2})\\epsilon+\\frac{5}{8}\\frac{\\phi\\dot{\\phi}}{H}\\bigg]\n-2(1+\\frac{1}{4}\\phi^{2})^{-1}(\\frac{5}{8}\\frac{\\phi\\dot{\\phi}}{c_{s}^{2}H})\\dot{\\Psi}\\partial_{i}\\Psi\\partial_{i}\\chi\\bigg\\}\\,.\n\\end{equation}\n\nBy calculating the three-point correlation function we can study the\nnon-Gaussian features of the primordial perturbations. For the\npresent model, we use the interaction picture, in which the\ninteraction Hamiltonian, $H_{int}$, is determined by the third-order\nLagrangian. The vacuum expectation value of curvature\nperturbations at $\\tau=\\tau_{f}$ is\n\n\\begin{equation}\n\\langle\\Psi(\\textbf{k}_{1})\\Psi(\\textbf{k}_{2})\\Psi(\\textbf{k}_{3})\\rangle=-i\\int_{\\tau_{i}}^{\\tau_{f}}d\\tau\n\\langle0|[\\Psi(\\tau_{f},\\textbf{k}_{1})\\Psi(\\tau_{f},\\textbf{k}_{2})\\Psi(\\tau_{f},\\textbf{k}_{3}),H_{int}(\\tau)]|0\\rangle\\,.\n\\end{equation}\n\nBy solving the above integral in Fourier space, we find\n\\begin{equation}\n\\langle\\Psi(\\textbf{k}_{1})\\Psi(\\textbf{k}_{2})\\Psi(\\textbf{k}_{3})\\rangle=(2\\pi)^{3}\\delta^{3}(\\textbf{k}_{1}+\\textbf{k}_{2}+\\textbf{k}_{3})\nP_{s}^{2}F_{\\Psi}(\\textbf{k}_{1},\\textbf{k}_{2},\\textbf{k}_{3})\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nF_{\\Psi}(\\textbf{k}_{1},\\textbf{k}_{2},\\textbf{k}_{3})=\\frac{(2\\pi)^{2}}{\\prod_{i=1}^{3}k_{i}^{3}}G_{\\Psi}\\,,\n\\end{equation}\n\n\\vspace{0.5cm}\n\n$G_{\\Psi}=\\bigg[\\frac{3}{4}\\bigg(\\frac{2}{K}\\Sigma_{i>j}k_{i}^{2}k_{j}^{2}-\\frac{1}{K^{2}}\\Sigma_{i\\neq\nj}k_{i}^{2}k_{j}^{3}\\bigg)+\\frac{1}{4}\\bigg(\\frac{1}{2}\\Sigma_{i}k_{i}^{3}+\\frac{2}{K}\\Sigma_{i>j}k_{i}^{2}k_{j}^{2}\n-\\frac{1}{K^{2}}\\Sigma_{i\\neq j} k_{i}^{2}k_{j}^{3}\\bigg)$\n\\begin{equation}\n-\\frac{3}{2}\\bigg(\\frac{(k_{1}k_{2}k_{3})^{2}}{K^{3}}\\bigg)\\bigg]\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\,,\n\\end{equation}\n\nand $K=\\sum_{i}k_{i}$. Finally, the non-linear parameter $f_{NL}$ is defined as follows\n\n\\begin{equation}\nf_{NL}=\\frac{10}{3}\\frac{G_{\\Psi}}{\\sum_{i=1}^{3}k_{i}^{3}}\\,.\n\\end{equation}\n\nHere we study non-Gaussianity in the orthogonal and the equilateral\nconfigurations [44,45]. First, we should evaluate $G_{\\Psi}$ in\nthese configurations. To this end, we follow Refs. [46-48] to\nintroduce a shape $\\zeta_{\\ast}^{equi}$ as\n$\\zeta_{\\ast}^{equi}=-\\frac{12}{13}(3\\zeta_{1}-\\zeta_{2})$. In this\nmanner we define the following shape which is orthogonal to\n$\\zeta_{\\ast}^{equi}$\n\\begin{equation}\n\\zeta_{\\ast}^{ortho}=-\\frac{12}{14-13\\beta}[\\beta(3\\zeta_{1}-\\zeta_{2})+3\\zeta_{1}-\\zeta_{2}]\\,,\n\\end{equation}\nwhere $\\beta\\simeq1.1967996$. 
Finally, the bispectrum (48) can be\nwritten in terms of $\\zeta_{\\ast}^{equi}$ and $\\zeta_{\\ast}^{ortho}$\nas follows\n\\begin{equation}\nG_{\\Psi}=G_{1}\\zeta_{\\ast}^{equi}+G_{2}\\zeta_{\\ast}^{ortho}\\,,\n\\end{equation}\nwhere\n\\begin{equation}\nG_{1}=\\frac{13}{12}\\bigg[\\frac{1}{24}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg](2+3\\beta)\n\\end{equation}\nand\n\\begin{equation}\nG_{2}=\\frac{14-13\\beta}{12}\\bigg[\\frac{1}{8}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg]\\,.\n\\end{equation}\n\nNow, by using equations (50-55) we obtain the amplitude of\nnon-Gaussianity in the equilateral and orthogonal configurations,\nrespectively, as\n\\begin{equation}\nf_{NL}^{equi}=\\frac{130}{36\\sum_{i=1}^{3}k_{i}^{3}}\\bigg[\\frac{1}{24}\n\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg](2+3\\beta)\\zeta_{\\ast}^{equi}\\,,\n\\end{equation}\nand\n\\begin{equation}\nf_{NL}^{ortho}=\\frac{140-130\\beta}{36\\sum_{i=1}^{3}k_{i}^{3}}\\bigg[\\frac{1}{8}\n\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg]\\zeta_{\\ast}^{ortho}\\,.\n\\end{equation}\n\nThe equilateral and the orthogonal shapes have a negative and a positive peak in the $k_{1}=k_{2}=k_{3}$ limit, respectively [49].\nThus, we can rewrite the above equations in this limit as\n\\begin{equation}\nf_{NL}^{equi}=\\frac{325}{18}\\bigg[\\frac{1}{24}\\bigg(\\frac{1}{c_{s}^{2}}-1\\bigg)\\bigg](2+3\\beta)\\,,\n\\end{equation}\nand\n\\begin{equation}\nf_{NL}^{ortho}=\\frac{10}{9}\\bigg[\\frac{1}{8}\\bigg(1-\\frac{1}{c_{s}^{2}}\\bigg)\\bigg](\\frac{7}{6}+\\frac{65}{4}\\beta)\\,,\n\\end{equation}\nrespectively.\n\n\n\\section{Confronting with Observation}\n\nThe previous sections were devoted to the theoretical framework of\nthis extended model. In this section we compare our model with\nobservational data to find some observational constraints on the\nmodel parameter space. In this regard, we introduce a suitable\ncandidate for the potential term in the action. We adopt\\footnote{Note that in general\n$\\lambda$ has a dimension related to the Planck mass. This can be seen easily by considering the normalization of $\\phi$ via\n$V(\\phi)=\\frac{1}{n}\\lambda(\\frac{\\phi}{\\phi_{0}})^{n}$ which indicates that $\\lambda$ cannot be dimensionless in general. When we consider some numerical values for $\\lambda$ in our numerical analysis, these values are in \\emph{``appropriate units''}.}\n$V(\\phi)=\\frac{1}{n}\\lambda\\phi^{n}$, which contains some interesting\ninflation models such as chaotic inflation. To be more specific, we\nconsider a quartic potential with $n=4$. First, we substitute this\npotential into equation (11) and, by setting $\\epsilon=1$, we\nfind the value of the inflaton field at the end of inflation. Then, by\nsolving the integral (17), we find the value of the inflaton field at\nhorizon crossing in terms of the number of e-folds, $N$. We then\nsubstitute $\\phi_{hc}$ into Eqs. (33), (42), (58) and (59). The\nresulting relations are the basis of our numerical analysis of the\nparameter space of the model at hand. To proceed with the numerical\nanalysis, we study the behavior of the tensor-to-scalar ratio versus\nthe scalar spectral index. In figure (1), we have plotted the\ntensor-to-scalar ratio versus the scalar spectral index for $N=60$\nin the background of Planck2015 data. 
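\n\nWe note that once the sound speed along the slow-roll trajectory is known, the non-Gaussianity amplitudes follow immediately from Eqs. (58) and (59); a minimal numerical sketch (our own illustration, not the analysis code used for the figures) is:\n\n\\begin{verbatim}\n# Hedged sketch (ours): f_NL amplitudes of Eqs. (58)-(59) as functions\n# of the squared sound speed cs2 alone.\nBETA = 1.1967996\n\ndef f_nl_equilateral(cs2):\n    # Eq. (58), equilateral configuration\n    return (325.0 / 18.0) * (1.0 / 24.0) * (1.0 / cs2 - 1.0) \\\n        * (2.0 + 3.0 * BETA)\n\ndef f_nl_orthogonal(cs2):\n    # Eq. (59), orthogonal configuration\n    return (10.0 / 9.0) * (1.0 / 8.0) * (1.0 - 1.0 / cs2) \\\n        * (7.0 / 6.0 + 65.0 * BETA / 4.0)\n\nfor cs2 in (0.5, 0.8, 1.0):\n    print(cs2, f_nl_equilateral(cs2), f_nl_orthogonal(cs2))\n\\end{verbatim}\n\nBoth amplitudes vanish as $c_{s}^{2}\\rightarrow1$, recovering the Gaussian limit.\n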
As figure 1 shows, the trajectory of the result in\nthis extended non-minimal inflationary model lies well within the confidence levels\nof the Planck2015 observational data for viable values of the spectral index and $r$.\nThe amplitude of the orthogonal configuration of non-Gaussianity versus\nthe amplitude of the equilateral configuration is depicted in figure 2\nfor $N=60$. We see that this extended non-minimal model, in some\nranges of the parameter $\\lambda$, is consistent with observation.\nIf we restrict the spectral index to the observationally viable\ninterval $0.95 ''\n\\end{quote}\n\\end{mdframed}\n\n\n\\descrit{2) News Reporting:} it reports the headline of a misleading news article or another tweet, without any additional commentary or text from the tweet's author.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``BREAKING: Unofficial: Trump trailing Biden by only 4,202! There is a ballot count upload glitch in Arizona. Reports saying over 6,000 False Biden Votes Discovered .''\n\n\n\\end{quote}\n\\end{mdframed}\n\n\\descrit{3) Counter Claim:} it attempts to question and\/or debunk the misleading information.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``Misleading claims that Trump ballots in Arizona were thrown out because Sharpie pens were provided to voters are untrue. A ballot that cannot be read by the machine would be re-examined by hand and not invalidated if it was marked with a Sharpie.\\#Election2020 .''\n\n\\end{quote}\n\\end{mdframed}\n\n\n\\descrit{4) Satire:} it discusses the false claim in a satirical way.\n\n\\smallskip\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``@\\textless USER\\textgreater He was allegedly slain by Soros, who then had Chavez's personal army of false voters cram him inside a Dominion voting machine before loading him into an RV with Hunter Biden's second laptop and Hillary's server.''\n\n\\end{quote}\n\\end{mdframed}\n\\descrit{5) Discussion:} it prompts discussion of the details of the misleading claim by adding commentary.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``@\\textless USER\\textgreater What happened to all the votes cast for Trump that were destroyed? How are those tallied? Detroit-based Democratic Party activist a local: boasts On FB: I threw out every Trump ballot I saw while working for Wayne County, Michigan. They number in the tens of thousands, as did all of my coworkers.''\n\n\\end{quote}\n\\end{mdframed}\n\\descrit{6) Inquiry:} it inquires about the details of events related to the misleading claim and does not attempt to either support or deny the claim under question. \n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``Has anyone got a compelling justification for this? In accordance with a tweet I saw from @\\textless USER\\textgreater: A \"James Bradley\" born in 1900 has recently been entered into the Michigan Voter Information Center. James apparently submitted an absentee ballot on October 25. For a 120-year-old, not bad! ''\n\\end{quote}\n\\end{mdframed}\n\n\\descrit{7) Irrelevant:} it is irrelevant to the misleading claim. These are considered false positives.\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n``\\#LOSANGELES: Our truck will be at the @HollywoodBowl voting location till 7pm. 
Use your voting rights and reward yourself with some.''\n\\end{quote}\n\\end{mdframed}\n\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\t\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{lrr}\n\\toprule\n\\textbf{Category} & \\textbf{Candidates} & \\textbf{Moderated (\\%)} \\\\ \\midrule\nAmplifying & 1,198 & 241 (20.11\\%) \\\\\nReporting & 922 & 222 (24.07\\%) \\\\\nCounter & 122 & 4 (3.27\\%) \\\\\nSatire & 15 & 0 (0.00\\%) \\\\\nDiscussion & 646 & 83 (12.84\\%) \\\\\nInquiry & 84 & 22 (26.19\\%) \\\\\nIrrelevant & 59 & 1 (1.69\\%) \\\\ \\bottomrule\n\\end{tabular}\n\t%\n\\caption{Categories of candidate tweets and number\/percentage receiving soft moderation by Twitter.}\n\\label{tab:candidate_category}\n\\end{table}\n\n\nTable~\\ref{tab:candidate_category} reports the number of candidate tweets in each category; note that a tweet can fall into more than one category. %\nThe vast majority falls into the amplification category with 1,198 out of 1,500 tweets (79.86\\%), followed by tweets reporting about the false claim with 922 tweets (61.46\\%).\n43\\% of the tweets add further discussion to the misleading claim under question rather than simply sharing the headline of a news article, and 8\\% of them try to debunk it.\nFinally, 59 (3.93\\%) of the tweets flagged by \\textsc{Lambretta}\\xspace are irrelevant to the claim under study and can therefore be considered false positives. \nAs mentioned, the goal of \\textsc{Lambretta}\\xspace is to flag tweets that are related to a claim that the platform wants to moderate, but human moderators should still make the final decision about applying labels to the candidates flagged by our system. \nWe further discuss the implications of running \\textsc{Lambretta}\\xspace in the wild in Section~\\ref{sec:discussion}.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{Plots\/coverage.pdf}\n\\caption{Moderation coverage of misleading tweets flagged by \\textsc{Lambretta}\\xspace per claim.}\n\\label{fig:coverage}\n\\end{figure}\n\n\\descr{False Negatives.} \nTo evaluate the false negatives of \\textsc{Lambretta}\\xspace, we first evaluate the false negatives of each of its two phases separately, using \\done~ for the first phase and \\dthree~ for the second phase.\nIn the claim structure extraction module, the Proposition Extractor component fails to extract 2.77\\% of the propositions that are claim spans.\nAfter the propositions are extracted, \\textsc{Lambretta}\\xspace misclassifies 3.46\\% of the propositions that contain a claim, implying that the missed claim structures are not processed further in the second phase.\nIn the second phase, we quantify the proportion of tweets missed by the keywords identified through the LTR component of \\textsc{Lambretta}\\xspace. \nThe keywords produced by LTR identify 8,748 of the 10,776 tweets in the ground truth; this yields an 18.81\\% false negative rate from \\textsc{Lambretta}\\xspace's keyword extraction phase.\nThis is much lower than the false negative rate of the second best state-of-the-art approach, YAKE, which is 32.45\\%.\n\n\n\\noindent{\\bf Comparison to Twitter's soft moderation.}\nAfter determining that the recommendations made by \\textsc{Lambretta}\\xspace are accurate, we check if the tweets recommended by our approach were also soft-moderated by Twitter.\nFor every claim from the Claim Extraction Module, we retrieve the relevant set of tweets guided by the best set of keywords from our LTR component. 
\nWe then follow~\\cite{zannettou2021won} and extract metadata of soft moderation interventions for each tweet (i.e., whether the tweet received soft moderation and the corresponding warning label).\nWe perform this experiment on \\dthree~.\n\nOut of the 101,353 tweets flagged by \\textsc{Lambretta}\\xspace as candidates for moderation, we find that only 4,330 (4.31\\%) were soft moderated by Twitter.\nNote that we could not check the existence of warning labels for 993 tweets as they were inaccessible, either because the tweets had been deleted or because the accounts that posted them were deleted or suspended.\nThis experiment highlights the limitations of Twitter's soft moderation approach, suggesting that the platform would benefit from an automated system like \\textsc{Lambretta}\\xspace to aid content moderation.\nIn Section~\\ref{sec:twitter}, we further investigate whether we can identify a specific strategy followed by Twitter in moderating content.\n\n %\n\n\n\n\\subsection{What drives Twitter moderation?}\n\\label{sec:twitter}\n\nThe analysis from the previous sections shows that Twitter only moderates a small fraction of the tweets that should be moderated.\nIn this section, we aim to better understand how these moderation decisions are made.\n\nWe start by examining whether certain claims are moderated more aggressively than others and whether the type of message in a tweet affects its chances of being moderated. \nWe then analyze the text and the URLs in moderated and unmoderated tweets, aiming to ascertain: 1) whether Twitter uses text similarity to identify moderation candidates and 2) whether Twitter automatically moderates all tweets linking to a known misleading news article.\nNext, we look at the account characteristics of the users who posted moderated and unmoderated tweets, and engagement metrics (i.e., likes and retweets), aiming to understand if Twitter prioritizes moderating tweets by popular accounts or viral content.\n\n\n\\descr{Coverage by claim.}\nIn Figure~\\ref{fig:coverage}, we plot the Cumulative Distribution Function (CDF) of the percentage of tweets moderated by Twitter for each of our 900 claims, out of the total candidate set flagged by \\textsc{Lambretta}\\xspace.\nApproximately 80\\% of the claims have less than 10\\% of their tweets moderated, and for 95\\% of the claims the moderated fraction stays below roughly 20\\%.\nVery few claims (5) have at least half of their tweets moderated.\nThe misleading claim with the highest coverage is ``\\textbf{\\textit{Russ Ramsland file affidavit showing physical impossibility of election result in Michigan}}'' with 159 out of 309 (51\\%) candidate tweets receiving moderation labels by Twitter.\nOn the other hand, the claim ``\\textbf{\\textit{Chinese Communists Used Computer Fraud and Mail Ballot Fraud to Interfere with Our National Election}}'' only has 1 out of 236 tweets (0.42\\%) with warning labels.\nThis shows that, while the fraction of pertinent tweets moderated by Twitter is generally low, the platform seems to moderate certain claims more aggressively than others. 
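\n\nThe per-claim coverage underlying Figure~\\ref{fig:coverage} is straightforward to reproduce. The following is a hedged sketch (ours, not the exact analysis code), assuming a list of (claim id, was moderated) pairs for all candidate tweets flagged by \\textsc{Lambretta}\\xspace:\n\n\\begin{verbatim}\n# Hedged sketch (ours): per-claim moderation coverage and its\n# empirical CDF, from (claim_id, was_moderated) pairs.\nfrom collections import defaultdict\n\ndef coverage_by_claim(candidates):\n    # candidates: iterable of (claim_id, was_moderated) tuples\n    flagged, moderated = defaultdict(int), defaultdict(int)\n    for claim_id, was_moderated in candidates:\n        flagged[claim_id] += 1\n        moderated[claim_id] += int(was_moderated)\n    return {c: moderated[c] / flagged[c] for c in flagged}\n\ndef empirical_cdf(values):\n    # returns sorted (value, fraction of claims <= value) pairs\n    xs = sorted(values)\n    n = len(xs)\n    return [(x, (i + 1) / n) for i, x in enumerate(xs)]\n\n# usage: cdf = empirical_cdf(coverage_by_claim(pairs).values())\n\\end{verbatim}\n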
\n\n\\descr{Coverage by tweet type.}\nIn Section~\\ref{sec:validation}, we list seven categories of tweets discussing misleading claims.\nWe now set out to understand whether Twitter moderates certain types of tweets more than others.\nTable~\\ref{tab:candidate_category} shows the fraction of tweets in our sample set of 1,500 manually analyzed tweets that did receive soft moderation by Twitter, broken down by category.\nTweets raising questions, reporting, or amplifying false claims are more likely to be moderated (with 26.19\\%, 24.07\\%, and 20.11\\% of their tweets being moderated, respectively).\nSatire tweets never received moderation labels, while tweets debunking false claims were only moderated in 3.27\\% of the cases.\nThis indicates that Twitter considers the stance of a tweet mentioning a false claim, perhaps as part of a manual moderation effort.\n\n\\descr{Content analysis.}\nNext, we investigate whether Twitter looks at near-identical tweets when applying soft moderation decisions.\nWe take all tweets flagged as candidates by \\textsc{Lambretta}\\xspace, and group together those with a high Jaccard similarity of their words.\nWe remove all links and user mentions, and lemmatize the tweet tokens using the \\textbf{\\textit{ekphrasis}} tokenizer~\\cite{ekphrasis}.\nWe consider two tweets to be near-identical if their Jaccard similarity is in the range 0.75--0.9 (out of 1.0).\nWe do so to extract tweet pairs that are not exactly the same, but have some variation in the content while discussing the same misleading claim.\nWe exclude retweets, and only consider the tweets originally authored by the users.\n\n\\begin{figure*}[t]\n\\centering\n \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_followings.pdf}\n \\caption{Following} \n\t\\end{subfigure}\n\t~\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_followers.pdf} \n \\caption{Followers} \n\t\\end{subfigure}\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_tweets.pdf}\n \\caption{Tweet Counts} \n\t\\end{subfigure}\n\t~\n\t \\begin{subfigure}{0.375\\linewidth}\\includegraphics[width=\\linewidth]{Plots\/ccdf_accountlife.pdf}\n \\caption{Account Age} \n\t\\end{subfigure}\n\t \n\\caption{Cumulative Distribution Functions (CDF) of various user metrics for moderated and unmoderated tweets.}\n\\label{fig:ccdf_useranalysis}\n\\end{figure*}\n\n\nWe extract 17,241 pairs of tweets (out of 438,986 possible pairs), where at least one of the two was moderated by Twitter.\nOnly 3,857 pairs have both tweets moderated.\nNote that \\textsc{Lambretta}\\xspace effectively identifies {\\em all} the 17,241 pairs of tweets as moderation candidates. %\nHere is an example of a very similar pair of tweets, for which Twitter did not add labels to one of them:\n\\smallskip\n\\begin{mdframed}[style=MyFrame,nobreak=true]\n\\begin{quote}\n\\small\n\t\\textbf{Moderated}: ``RudyGiuliani in Trump campaign news conference: \"\"Joe Biden said a few weeks ago that his voting fraud crew was the best in the world. 
They were excellent, but we got them!\"''\n\t\n\t\\textbf{Unmoderated}: ``Joe Biden said a few weeks ago that his crew was the greatest in the world at catching voter fraud, but we caught them.''\n\\end{quote}\n\\end{mdframed}\n\nThese findings indicate that the decision by Twitter to add soft moderation to a tweet does not seem to be driven by the lexical similarity of tweets.\n\n\\descr{URL analysis.}\nAnother potential indicator used by Twitter when deciding which tweets to moderate is whether they include links to known news disinformation articles.\nFirst, we expand all the links in the body of candidate tweets identified by \\textsc{Lambretta}\\xspace to get rid of URL shorteners~\\cite{maggi2013two}.\nThis yields 13,108 distinct URLs.\nNext, we group candidate tweets by URLs and check what fraction of tweets sharing the URL are moderated by Twitter.\n\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\t\\setlength{\\tabcolsep}{4pt}\n\\begin{tabular}{lrr}\n\\toprule\n\\textbf{URL news story} & \\textbf{Candidates} & \\textbf{Moderated} \\\\ \\midrule\nUSPS whistleblower & 315 & 7 (2.2\\%) \\\\\nChina manipulating election & 252 & 4 (1.5\\%) \\\\\nMichigan ballot dump & 215 & 44 (20\\%) \\\\\n\\#Suitcasegate related FB video & 208 & 3 (1.4\\%) \\\\\nDominion remote machine control & 135 & 15 (11\\%) \\\\ \\bottomrule\n\\end{tabular}%\n\\caption{Examples of URLs in candidate tweets and those being moderated by Twitter.}\n\\label{tab:urlmoderation}\n\\end{table}\n\n\n\nTable~\\ref{tab:urlmoderation} shows the five most common URLs (abstracted to the topic of the news articles) in our dataset, with the fraction of tweets including those URLs moderated by Twitter.\nAll these news stories, excluding one Facebook video, originate from known low-credibility websites like TheGatewayPundit and DC Dirty Laundry, which promote election misinformation.\nTwitter moderates tweets containing those URLs in an inconsistent manner. %\nAlso note that \\textsc{Lambretta}\\xspace can help identify 4,598 additional moderation candidate tweets compared to those on which Twitter intervened.\n\n\\descr{User analysis.}\nWe examine the differences in the social capital (e.g., number of followers) of the authors of tweets moderated by Twitter, compared to those our system recommends for moderation but for which Twitter did not intervene.\nFigure~\\ref{fig:ccdf_useranalysis} reports the CDF of followers, followings, tweet count, and account age of accounts that posted moderated and unmoderated tweets.\nWe find that the authors of tweets with warning labels have fewer followers and followings, lower account activity, and younger accounts than the authors of tweets without warning labels.\nWe also conduct two-sample Kolmogorov-Smirnov tests for each user metric, finding that the differences are statistically significant for followers and account age ($p < 0.01$) as well as following count and status count ($p < 0.05$).\nThis goes against the notion that popular accounts are more likely to have their content moderated.\n\nWe also check if the accounts with moderated tweets were suspended for violating Twitter Rules~\\cite{twitter_rules}. %\nWe find that only 33 out of 3,397 users were suspended by Twitter; this gives us strong grounds to rule out the possibility that tweet moderation is driven by the ``legitimacy'' of the accounts themselves.\n\n\\descr{Engagement analysis.}\nFinally, we analyze engagement metrics. 
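\nAs for the user metrics above, we compare the distributions for moderated and unmoderated tweets with two-sample Kolmogorov-Smirnov tests; a minimal sketch of this comparison (ours, using scipy, not the exact analysis code) is:\n\n\\begin{verbatim}\n# Hedged sketch (ours): the two-sample KS comparison applied to both\n# the user metrics above and the engagement metrics below.\nfrom scipy.stats import ks_2samp\n\ndef compare_metric(moderated_values, unmoderated_values, alpha=0.01):\n    # returns the KS statistic, the p-value, and significance at alpha\n    stat, p_value = ks_2samp(moderated_values, unmoderated_values)\n    return stat, p_value, p_value < alpha\n\n# usage: stat, p, sig = compare_metric(retweets_mod, retweets_unmod)\n\\end{verbatim}\n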
%\nFigure~\\ref{fig:ccdf_tweetanalysis} reports the CDF of retweets and likes, categorized by moderation status, of the 101,353 candidate tweets \\textsc{Lambretta}\\xspace flags for moderation from \\dthree~, compared to the ones flagged by Twitter.\nSimilar to the user analysis, we find that unmoderated tweets have more engagement. %\nWhen we check the difference in the distributions of retweet counts using Kolmogorov-Smirnov tests, we find that it is statistically significant ($p < 0.01$), while we cannot reject the null hypothesis for likes.\nNote, however, that these results have to be taken with a grain of salt, as we do not have a timeline of when exactly moderation was applied, and whether the soft interventions hampered the virality of online content. \n\n\\begin{figure}\n\\centering\n \\begin{subfigure}{0.375\\textwidth}\\includegraphics[width=\\linewidth]{Plots\/tweets_rt.pdf} \n \\caption{Retweets} \n\t\\end{subfigure}\n\t \\begin{subfigure}{0.375\\textwidth}\\includegraphics[width=\\linewidth]{Plots\/tweets_favs.pdf} \n \\caption{Likes} \n\t\\end{subfigure}\n\t \n\\caption{CDFs of engagement metrics for moderated and unmoderated tweets.}\n\\label{fig:ccdf_tweetanalysis}\n\\end{figure}\n\n\\descr{Takeaways.} Our analysis paints a puzzling picture of soft moderation on Twitter. \nWe find that certain claims are moderated more aggressively.\nStill, Twitter does not seem to have a system in place to identify similar tweets discussing the same false narrative, nor to flag tweets that link to the same debunked news article.\nWe also find that Twitter does not appear to focus on the tweets posted by popular accounts for moderation, but rather that tweets posted by accounts with more followers, friends, activity, and a longer lifespan are more likely to go unmoderated.\nThis confirms the need for a system like \\textsc{Lambretta}\\xspace. %\n\n\\section{Related Work}\nIn this section, we review relevant work on soft moderation, security warnings, and keyword extraction in the context of disinformation.\n\n\\descr{Soft Moderation during the 2020 Elections.}\nAs part of the Civic Integrity Policy efforts surrounding the 2020 US elections, Twitter applied warning labels on ``misleading information.''\nEmpirical analysis~\\cite{zannettou2021won} reports 12 different types of warning messages occurring on a sample of 2,244 tweets with warning labels.\nA statistical assessment of the impact of Twitter labels on Donald Trump's false claims during the 2020 US Presidential election finds that warning labels did not result in any statistically significant increase or decrease in the spread of misinformation~\\cite{papakyriakopoulos2022impact}.\nTwitter later reported that approximately 74\\% of the tweet viewership happened post-moderation and, more importantly, that the warnings yielded an estimated 29\\% decrease in users quoting the labeled tweets~\\cite{twitter_update_2020}. 
%\n\n\n\\descr{Security Warnings for Disinformation.}\nThe warning labels adopted by Twitter as a soft moderation intervention can be broadly categorized as a type of security warning.\nSecurity warnings can be classified into two types: contextual and interstitial.\nThe former passively inform users about misinformation through UI elements that appear alongside social media posts.\nThe latter prompt the user to engage before taking action on a potential piece of disinformation (e.g., retweeting or sharing).\nA recent study~\\cite{kaiser2021adapting} shows that interstitial warnings may be more effective, with a lower clickthrough rate on misleading articles.\nAdditionally, interstitial warnings are more effective design-wise because they capture attention and provide a window of opportunity for users to think about their actions.\nEfforts to study warning labels for countering disinformation have thus far been mostly focused on Facebook~\\cite{pennycook2018prior,ross2018fake,pennycook2020implied}, where warning labels were limited to ``disputed'' or ``rated false'', and the approach was deemed to be of limited utility by Facebook~\\cite{smith2017designing}.\nRecently, other platforms like Twitter~\\cite{alba2020twitter}, Google~\\cite{googlelabel}, and Bing~\\cite{bing_label} also used some form of fact-check warnings to counter disinformation.\n\n\\descr{Tools for automated content moderation.}\nThe sheer scale of content being produced on modern social media platforms (Facebook, Reddit, YouTube, etc.) has motivated the need to adopt tools for automated content moderation~\\cite{fb_guidelines,youtube_guidelines}.\nHowever, due to the nuanced and context-sensitive nature of content moderation, it is a complex socio-technical problem~\\cite{jhaver2019human,seering2019moderator}.\nMost of the work in this space of automated content moderation is focused on Reddit, aiming to identify submissions that violate community-specific policies and norms ranging from hate speech to other types of problematic content~\\cite{seering2020reconsidering}.\nThe most popular solution for automated content moderation on Reddit, AutoModerator~\\cite{jhaver2019human}, allows community moderators to set up simple rules based on regular expressions and metadata of users for automated moderation. 
\nOn YouTube, FilterBuddy~\\cite{jhaver2022designing} is available as a tool for creator-led content moderation, letting creators design filters for keywords and key phrases.\nSimilarly, Twitch offers an automated moderation tool called Automod to allow creators to moderate four categories of content (discriminations and slurs, sexual content, hostility, and profanity) on the platform~\\cite{twitch_automod}.\nAnother tool, called CrossMod~\\cite{chandrasekharan2019crossmod}, uses an ensemble of models learned via cross-community learning from empirical moderation decisions made on two subreddits of over 10M subscribers each.\n\n\n\\descr{Keyword Extraction for Disinformation.}\nResearchers have used an array of methods to detect disinformation, ranging from modeling user interactions~\\cite{shu2019role,tschiatschek2018fake,qian2018neural} to leveraging semantic content~\\cite{ciampaglia2015computational,zhang2017constraint,esmaeilzadeh2019neural,pan2018content} and graph-based representations~\\cite{nguyen2020fang,gangireddy2020unsupervised,lu2020gcan}.\nThe foundation of our system lies in keyword detection, which has been used before to study disinformation on social media.\nDisInfoNet, a toolbox presented in~\\cite{guarino2019beyond}, uses keyword-based queries to track news stories and reconstruct the prevalence of disinformation over time and space.\nSimilarly, the work in~\\cite{gaglani2020unsupervised} uses keyword extraction techniques as the base for semantic search to detect fake news on WhatsApp.\nThe work in~\\cite{choudhary2018neural} focuses on credibility assessment of textual claims in news articles with potentially false information, also using keyword extraction as part of a multi-component module.\n\n\\descr{Learning To Rank for Keyword extraction.}\nThe closest applications of LTR to our work are the proposals in~\\citep{jiang2009ranking} for keyphrase extraction and in~\\citep{cai2017keyword} for keyword extraction in Chinese news articles.\nThe foundational work by~\\citep{jiang2009ranking} motivates the necessity of framing the problem of keyphrase extraction as a ranking task rather than a classification task, improving results on extracting keyphrases from academic research papers and social tagging data.\nThe LTR approach for keyword extraction utilized by \\textsc{Lambretta}\\xspace is motivated by the premise set up by this work.\nSimilarly,~\\citep{cai2017keyword} use Learning To Rank to identify keywords from 1,800 public Chinese news articles using TF-IDF, TextRank, and Latent Dirichlet Allocation (LDA) as the set of features for the ranking model.
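\n\nTo make the ranking formulation concrete, the following is a hedged pointwise sketch (our own illustration; \\textsc{Lambretta}\\xspace's actual LTR model and feature set may differ) in which candidate keywords, each described by a feature vector in the spirit of the TF-IDF and TextRank features above, are scored by a learned regressor and sorted:\n\n\\begin{verbatim}\n# Hedged sketch (ours, not Lambretta's implementation): a pointwise\n# Learning-To-Rank baseline for keyword extraction.\nimport numpy as np\nfrom sklearn.ensemble import GradientBoostingRegressor\n\ndef train_ranker(features, relevance):\n    # features: (n_candidates, n_features); relevance: graded labels\n    model = GradientBoostingRegressor(n_estimators=200, max_depth=3)\n    model.fit(features, relevance)\n    return model\n\ndef rank_keywords(model, candidates, features):\n    # return candidate keywords sorted by predicted relevance\n    scores = model.predict(features)\n    order = np.argsort(scores)[::-1]\n    return [(candidates[i], float(scores[i])) for i in order]\n\\end{verbatim}\n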
\n\\section{Discussion and Conclusion}\n\\label{sec:discussion}\n\nThis paper presented \\textsc{Lambretta}\\xspace, a system geared to automatically flag candidate tweets for soft moderation interventions.\nOur experiments demonstrate that Learning to Rank (LTR) techniques are effective, that \\textsc{Lambretta}\\xspace outperforms other approaches, produces accurate recommendations, and can increase the set of tweets that receive soft moderation by over 20 times compared to those flagged by Twitter during the 2020 US Presidential Election.\n\n\n\\descr{Implications for social media platforms.}\nAs discussed in Section~\\ref{sec:twitter}, soft moderation interventions applied by Twitter appear to be spotty and not following precise criteria.\nThis might be due to moderation being conducted mainly in an ad-hoc fashion, relying on user reports and the judgment of moderators.\n\\textsc{Lambretta}\\xspace can assist this human effort, working upstream of the content moderation process and presenting moderators with an optimal set of tweets that are candidates for moderation. \nBecause of the nuances of moderating false and misleading content, we envision \\textsc{Lambretta}\\xspace being deployed as an aid to human moderation rather than as an automated detection tool.\n\nNonetheless, the claim-specific design of \\textsc{Lambretta}\\xspace can also be used by moderators for other actions as per their policies, e.g., asking users to remove a given tweet or performing hard moderation by removing the tweets.\nThe choice between soft moderation and hard moderation can be made by moderators contextually after the moderation candidates are retrieved through \\textsc{Lambretta}\\xspace, either based on the underlying claims or on a case-by-case basis.\nE.g., platforms may decide to soft moderate posts that push a certain false narrative but not add warnings to posts that inform users about the falsehood.\nAlternatively, they might add warnings to posts about the false narrative, providing additional context to users and allowing them to make up their minds about it.\nPlatforms could also craft warning messages depending on the context in which a false claim is discussed or design these messages to be more effective based on the audience and risk levels of specific false claims.\nFor example, different types of warning messages can be applied by platforms to distinguish between different levels of risk associated with the misleading claims (e.g., high and low-level risks associated with COVID-19 misinformation)~\\cite{ling2022learn}.\nWe are confident that Human-Computer Interaction researchers will be able to address these challenges, which go beyond the scope of this paper.\n\n\\descr{Human effort required for adopting \\textsc{Lambretta}\\xspace.}\nWhen setting up \\textsc{Lambretta}\\xspace to work in a new context, platform moderators need to follow the steps highlighted in Sections~\\ref{sec:claimextraction} and~\\ref{sec:keyword}.\nFirst, they need a set of tweets with claims they identified as containing misleading information, together with a tuning dataset like \\dthree~.\nModerators can create a tuning dataset like \\dthree~ by using a broad set of keywords associated with the event or topic and querying the Twitter API to which they have full access (e.g. 
``COVID-19,'' ``coronavirus,'' etc., in the case of the pandemic).\nThey then need to tune the threshold for the Claim Stopper API in the Claim Extraction component (see Section~\\ref{sec:claimextraction}).\nIn our experiments, this phase took us, on average, two minutes per claim.\nFinally, they need to create the training set for the LTR model by following the iterative process discussed in Section~\\ref{sec:keyword}.\nWhen performed by a single annotator, this process took, on average, 15 minutes per claim for the experiments discussed in this paper.\nTwitter could speed up these steps further by having multiple annotators work on the same task.\nAdditionally, the work required on each claim is independent of other claims; therefore, this process can be easily parallelized within the organization or even through crowdsourcing campaigns~\\cite{founta2018large,lease2012crowdsourcing,oleson2011programmatic}.\n\n\n\\descr{Resilience to evasion.}\nAs with any adversarial problem, malicious actors are likely to try to evade being flagged by \\textsc{Lambretta}\\xspace.\nE.g., they might avoid certain words to escape detection and use synonyms or dog whistles instead~\\cite{gerrard2018beyond,tahmasbi2021go,zannettou2020quantitative,zhu2021self}.\nHowever, this would make the false messaging less accessible to the general public, who would first need to understand the alternative words used; it would ultimately be counterproductive for malicious actors by limiting the reach of false narratives.\n\n\n\n\n\\descr{Limitations.} \\textsc{Lambretta}\\xspace requires a seed of tweets to be moderated, making it inherently reactive.\nHowever, this is a problem common to all moderation approaches, including the work conducted by fact-checking organizations.\nAnother limitation is that we could only test \\textsc{Lambretta}\\xspace on one dataset related to the same major event (the 2020 US Presidential Election), as this is the only reliable dataset with soft moderation labels available to the research community.\n\nEven though Twitter applied warning labels on misinformation about COVID-19, previous research reported that these were unreliable and inconsistent~\\cite{lange_2020,lyons_2020}, which we independently confirmed in our preliminary analysis.\nMore recently, Twitter started applying warning labels to tweets in the context of the Russian invasion of Ukraine~\\cite{benson_twitter_russia}, but these labels are applied based on the account posting them (i.e., if the account belongs to Russian or Belarusian state-affiliated media) instead of being claim-specific as required by \\textsc{Lambretta}\\xspace.\nWhile the LTR model used by \\textsc{Lambretta}\\xspace is not specific to the actual keywords being searched, and we therefore expect it to generalize across the entirety of Twitter, platform moderators using the tool should take further steps to validate it when used in contexts other than politics and elections.\n\n\n\\descr{Future work.} \nWe plan to extend \\textsc{Lambretta}\\xspace to additional platforms.\nSince our system only needs the text of posts as input, we expect it to generalize to other platforms, e.g., Facebook, Reddit, etc.\nWe will also investigate how claims automatically built by \\textsc{Lambretta}\\xspace can be incorporated into warning messages to provide more context to users and allow them to be better protected against disinformation.\n\n\\descr{Acknowledgments.} \nWe thank the anonymous reviewers for their comments that helped us improve the 
paper.\nOur work was supported by the NSF under grants CNS-1942610, IIS-2046590, CNS-2114407, IIP-1827700, and CNS-2114411, and by the UK's National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (REPHRAIN, UKRI grant: EP\/V011189\/1).\n\n\small\n\bibliographystyle{abbrv}\n\input{no_comments.bbl}\n\n\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe occurrence of important Solar Energetic Particle (SEP) events is one of the\nprominent planning considerations for manned and unmanned lunar and planetary\nmissions \cite{posner2007up}.\nA high exposure to large solar particle events can deliver critical doses to\nhuman organs and may damage the instruments on board satellites and the\nglobal positioning system (GPS) due to the risk of saturation. SEP\nevents usually happen 30 minutes after the occurrence of the X-ray flare, which\nleaves very little time for astronauts performing extra-vehicular activity on\nthe International Space Station or planetary surfaces to\ntake evasive actions \cite{2016l}. Earlier warning of SEP events would be a valuable tool\nfor mission controllers who need to take prompt decisions concerning the\nastronauts' safety and the mission completion.\nWhen a solar flare or a CME happens, the magnetic force that is exerted is\nmanifested through different effects. Some of the effects are listed in their\norder of occurrence: light, thermal, particle acceleration, and matter ejection\nin the case of CMEs. The first effect of a solar flare is a flash of increased\nbrightness observed near the Sun's surface, which is due to the X-rays and UV radiation.\nThen, part of the magnetic energy is converted into thermal energy in the area\nwhere the eruption happened.\nSolar particles in the atmosphere are then accelerated with different speed\nspectra, which can reach up to 80\% of the speed of light, depending on the\nintensity of the parent eruptive event.\n \begin{figure}[h!]\n \centering\n \includegraphics[width=1\linewidth]{figs\/SolarParticleEvent.png}\n \caption{Example of a Sun-Earth magnetic connection and accelerated\n particle movement following the Parker spiral before reaching\n Earth. (Drawing courtesy of Space Weather) \cite{greendale}}\n \label{fig1}\n\end{figure}\n\nFinally, in the case of a CME, plasma and\nmagnetic field from the solar corona are released into the solar wind.\nThough most of the solar particles have the same composition, they are labeled\ndifferently depending on their energies, starting from 1 keV, in the solar wind,\nto more than 500 MeV. SEP events are composed of particles, predominantly\nprotons and electrons, with at least 1 MeV energy, that last between 2 and 20 days\nand have a range of fluxes of 5-6 orders of magnitude \cite{gabriel1996power}.\nOnly $>$100 MeV particles are discussed herein. It is generally accepted that\nthere are two types of SEP events: one associated with CMEs and the other\nassociated with flares, called gradual and impulsive respectively\n\cite{reames1999particle}.\n\nIn this paper, we propose a novel method for predicting $>$100 MeV SEP events\nbased on correlations between the proton and X-ray time series, using interpretable\ndecision tree models.\nPredicting impulsive events is considered to be a more challenging problem than\npredicting the gradual events that happen progressively and leave a large window\nfor issuing SEP warnings.
While we are mainly concerned with impulsive events,\nwe used the gradual events as well to test our model. The accelerated\nimpulsive events may or may not reach Earth, depending on the location of their\nparent event, because their motion is confined by the magnetic field. More\nspecifically, in order for the accelerated particles to reach Earth, a Sun-Earth\nmagnetic connection needs to exist that allows the particles to flow to Earth\nvia the Parker spiral. Fig.~\ref{fig1} shows a cartoon of a solar eruption that\nhappened on the western limb of the Sun, at a location that happens to be magnetically\nconnected to Earth.\n\nSince SEP events are also part of solar activity, it may seem that\ntheir occurrence is dependent on the solar cycle, and therefore on the number of\nSunspots on the Sun's surface, as is the case for other solar eruptions.\nHowever, according to \cite{gabriel1996power}, there is no correlation between the solar\ncycle and SEP event occurrence and fluences. In addition, there is no\nevidence of a dependence of SEP events on the number of Sunspots\nthat are present during that snapshot in time \cite{gabriel1996power}.\n\par\n The rest\nof the paper is organized as follows. In Section 2 we provide background\nmaterial on SEP predictive models and related works. Section 3\ndescribes the dataset used in this study, and in Section 4 we lay out our\nmethodology. Finally, Section 5 contains our experimental results, and we finish\nwith conclusions and future work in Section 6.\n\n\n\section{Related Works}\nThere are a number of predictive models of SEP events, which can be categorized\ninto two classes: physics-based models \cite{p0, p1} and precursor-based\nmodels \cite{c0, c1}.\nThe first category of models includes the SOLar Particle Engineering Code\n(SOLPENCO), which can predict the flux and fluence of gradual SEP events\noriginating from CMEs \cite{aran2006solpenco}. However, such efforts mainly\nfocus on gradual events.\nOn the other hand, there are models that rely on historical observations to find\nprecursors associated with SEP events. One example of such systems is the Proton\nPrediction System (PPS), a program developed at the Air Force Research\nLaboratory (AFRL) that predicts low-energy SEP events E$>$\{5, 10, 50\} MeV,\ntheir composition, and intensities. PPS assumes that there is a relationship between\nthe proton flux and the parent flare event. PPS takes\nadvantage of the correlation between large SEP events observed by the\nInterplanetary Monitoring Platform (IMP) satellites as well as their correlated\nflare signatures captured by GOES proton and X-ray flux and the H$\alpha$ flare\nlocation \cite{PPS}. Also, the Relativistic Electron Alert System for\nExploration (RELEASE) predicts the intensity of SEP events using relativistic,\nnear-light-speed electrons \cite{posner2007up}.\nRELEASE uses electron flux data from the SOHO\/COSTEP sensor in the range of\n0.3-1.2 MeV to forecast the proton intensity in the energy range 30-50 MeV.\nAnother example of a precursor-based model appears in\n\cite{laurenza2009technique}, which bases its study on the ``big flare\nsyndrome''. This theory states that SEP event occurrence at 1 AU is\nhighly probable when the intensity of the parent flare is high. Following this\nassumption, the authors in \cite{laurenza2009technique} issue SEP forecasts for\nimportant flares greater than M2. To this end, their method uses type III radio burst\ndata, H$\alpha$ data, and GOES soft X-ray data.
In addition, {\it GLE Alert Plus} is an\noperational system that uses ground-based neutron\nmonitors (NMDB, www.nmdb.com) to issue alerts of SEP events of energies E$>$433\nMeV. Finally, the University of Malaga Solar Energetic Particles Prediction\n(UMASEP) is another system, which first predicts whether a $>$10 MeV or $>$100\nMeV SEP event will happen or not. To do so, it computes the correlation between the\nsoft X-ray and proton channels to assess whether there is a magnetic connection\nbetween the Sun and Earth at the time of the X-ray event.\n Then, if a magnetic connection exists, UMASEP gives an\nestimate of the time when the proton flux is expected to surpass the SWPC\nthreshold of J(E $>$ 10 MeV) = 10 $pfu$ and J(E $>$ 100 MeV) = 1 $pfu$ (1 $pfu$ = 1 $pr\;\ncm^{-2} sr^{-1} s^{-1}$), and, for the case of UMASEP-100, the intensity during the\nfirst three hours after the event onset time.\n\n \begin{figure} \n \centering\n \includegraphics[width=0.85\linewidth]{figs\/GOESchart.pdf}\n \caption{Primary (bold lines) and secondary (thin lines) GOES satellites\n for XRS data since 1986 (the primary and secondary satellite designation is\n unknown prior to 1986). (Figure from NOAA instrument report)}\n \label{fig2}\n\end{figure} \n\n \begin{figure} \n \centering\n \includegraphics[width=1\linewidth]{figs\/ChartGOES.pdf}\n \caption{Catalogs used to make the X-ray--parent event mapping. The X-ray and\n CME catalogs are used to detect the parent event report for flares and CMEs,\n respectively.}\n \label{cats}\n\end{figure} \n\nMost SEP predictive systems, with the exception of {\it GLE Alert Plus} and\nUMASEP, focus either on CME-associated events or on low-energy SEP events.\nIn the present work, we focus on higher-energy SEP events, which can be\nmore disruptive than lower-energy events. In this work, we study the\nGOES cross-channel correlations, which can give an early insight into whether there\nexists a magnetic connection or not.\n\nWe aim to provide interpretable decision tree models using a\nbalanced dataset of SEP and non-SEP events.\nThe highest SEP energy band, $>$500 MeV,\nwhich is measurable from the ground, is outside the scope of this study.\nSimilarly, the lower SEP energy band of $<$100 MeV is not considered in this\nstudy.\n\n\section{Data}\nOur dataset is composed of multivariate time series of X-ray and integral proton flux and fluence spectra that were measured by the\nSpace Environment Monitor (SEM) instrument package on board the Geostationary\nOperational Environmental Satellites (GOES).\nIn particular, we consider both the short and long X-ray channel data recorded\nby the X-ray Sensor (XRS).
For the proton channels, we consider channels P6 and\nP7, recorded by the Energetic Particle Sensor (EPS), and proton channels P8, P9,\nP10, and P11, recorded by the High Energy Proton and Alpha Detector (HEPAD).\nTable~\ref{goesinstruments} summarizes the instruments onboard the GOES satellites\nand the corresponding data channels that we used.\n\n\begin{figure*}\n\centering \n\begin{tabular}{ |c|c|c|}\nFast-Rising & Slow-Rising & Lack of SEP\\\n\hline \n\includegraphics[width=0.315\linewidth]{figs\/g1.pdf}&\n\includegraphics[width=0.32\linewidth]{figs\/g2.pdf}&\n\includegraphics[width=0.32\linewidth]{figs\/g3.pdf}\n\\\n\hline\n\includegraphics[width=0.175\linewidth]{figs\/impulsive.pdf} &\n\includegraphics[width=0.17\linewidth]{figs\/CME1.PNG}&\n\includegraphics[width=0.17\linewidth]{figs\/FL3.PNG}\n\\\n \hline \n \end{tabular}\n \caption{Example of (a) an impulsive SEP event\n that started on 2001-04-15 14:05:00 as a result of a flare\n that occurred on 2001-04-15 13:15:00, shown in the SOHO EIT instrument;\n (b) a gradual SEP event whose nearest temporal flare happened on 2001-04-02 21:30:03A and which occurred as a\n result of a CME on 2001-04-02 22:06:07, shown in the SOHO LASCO instrument;\n and (c) a flare that happened on 1999-01-16 12:00:00 that did not lead to any\n $>$100 MeV SEP event, shown in the SOHO EIT\n instrument.\label{table1impulsive}}\n\end{figure*}\n\n The data we collected is made publicly available by NOAA at the\nfollowing link:\n(\href{https:\/\/satdat.ngdc.noaa.gov\/sem\/goes\/data\/new\\_avg\/}{\textit{https:\/\/satdat.ngdc.noaa.gov\/sem\/goes\/data\/new\\_avg\/}}).\nThe data is available in three different cadences. The full-resolution data is captured\nevery three seconds by the GOES satellites, and is aggregated and made\navailable at one- and five-minute cadences. In this paper we use the\naggregated five-minute data, which is the one usually used in the literature\n\cite{neal2001predicting,nunez2015real,nunez2011predicting}.\nIn most cases there are several co-existing GOES satellites, so the data is\ncaptured by more than one GOES satellite at a time. In this study, we always\nconsider the data reported by the primary GOES satellite designated by\nNOAA, as illustrated in Fig.~\ref{fig2}.
The latter figure shows the primary GOES\nsatellite with a bold line and the other co-existing GOES for every year.\nGOES-13 measurements were unstable for many years, but have been stable since\n2014.\n\n\n\\begin{table*} \n\\centering\n\\caption{GOES X-ray and Proton instuments and Channels.\\label{goesinstruments}}\n\\begin{tabular}{ |c|c|c|c|}\n\\hline\nInstrument & Channels & Description \\\\\n\\hline\n\\multirow{2}{*}{XRS}\n & xs &Short wavelength channel irradiance (0.5 - 0.3 nm)\\\\\n & xl &Long wavelength channel irradiance (0.1-0.8 nm)\\\\\n \\hline\n\\multirow{4}{*}{HEPAD}\n & p8\\_flux &Proton Channel 350.0 - 420.0 MeV \\\\\n & p9\\_flux &Proton Channel 420.0 - 510.0 MeV \\\\\n & p10\\_flux &Proton Channel 510.0 - 700.0 MeV \\\\\n & p11\\_flux &Proton Channel $>$ 700.0 MeV \\\\\n \\hline\n \\multirow{2}{*}{EPS}\n\n & p6\\_flux &Proton Channel 80.0 - 165.0 MeV \\\\\n & p7\\_flux &Proton Channel 165.0 - 500.0 MeV \\\\\n \\hline\n \n \\end{tabular}\n\\end{table*}\n\n\n\\begin{table}\n \\caption{$>$ 100 MeV SEP Event List with their Parent Events\n(CME\/Flare)\\label{sepevents}}\n\\centering \n\\begin{threeparttable}\n\\begin{tabular}{ |c|c|c|}\n\\hline \nSEP Event ID & Onset Time of SEP Event & Parent X-ray Event \\\\ \\hline\n1 & 1997-11-04 05:52:00 & 1997-11-04 05:52:00 \\\\ \\hline\n2 & 1997-11-06 11:49:00 & 1997-11-06 11:49:00 \\\\ \\hline\n3\\tnote{*} & 1998-04-20 09:38:00 & 1998-04-20 09:38:00 \\\\ \\hline\n4 & 1998-05-02 13:31:00 & 1998-05-02 13:31:00 \\\\ \\hline\n5 & 1998-05-06 07:58:00 & 1998-05-06 07:58:00 \\\\ \\hline\n6 & 1998-08-24 21:50:00 & 1998-08-24 21:50:00 \\\\ \\hline\n7 & 1998-09-30 13:50:00 & 1998-09-30 13:50:00 \\\\ \\hline\n8 & 1998-11-14 05:15:00 & 1998-11-14 06:05:00 \\\\ \\hline\n9 & 2000-06-10 16:40:00 & 2000-06-10 16:40:00 \\\\ \\hline\n10 & 2000-07-14 10:03:00 & 2000-07-14 10:03:00 \\\\ \\hline\n11 & 2000-11-08 22:42:00 & 2000-11-08 22:42:00 \\\\ \\hline\n12 & 2000-11-24 14:51:00 & 2000-11-24 14:51:00 \\\\ \\hline\n13\\tnote{*} & 2000-11-26 16:34:00 & 2000-11-26 16:34:00 \\\\ \\hline\n14\\tnote{*} & 2001-04-02 21:32:00 & 2001-04-02 21:32:00 \\\\ \\hline\n15 & 2001-04-12 09:39:00 & 2001-04-12 09:39:00 \\\\ \\hline\n16 & 2001-04-15 13:19:00 & 2001-04-15 13:19:00 \\\\ \\hline\n17 & 2001-04-17 21:18:00 & 2001-04-18 02:05:00 \\\\ \\hline\n18 & 2001-08-15 12:38:00 & 2001-08-16 23:30:00 \\\\ \\hline\n19\\tnote{*} & 2001-09-24 09:32:00 & 2001-09-24 09:32:00 \\\\ \\hline\n20 & 2001-11-04 16:03:00 & 2001-11-04 16:03:00 \\\\ \\hline\n21 & 2001-11-22 22:32:00 & 2001-11-22 19:45:00 \\\\ \\hline\n22 & 2001-12-26 04:32:00 & 2001-12-26 04:32:00 \\\\ \\hline\n23 & 2002-04-21 00:43:00 & 2002-04-21 00:43:00 \\\\ \\hline\n24 & 2002-08-22 01:47:00 & 2002-08-22 01:47:00 \\\\ \\hline\n25 & 2002-08-24 00:49:00 & 2002-08-24 00:49:00 \\\\ \\hline\n26 & 2003-10-28 09:51:00 & 2003-10-28 09:51:00 \\\\ \\hline\n27 & 2003-11-02 17:03:00 & 2003-11-02 17:03:00 \\\\ \\hline\n28\\tnote{*} & 2003-11-05 02:37:00 & 2003-11-05 02:37:00 \\\\ \\hline\n29\\tnote{+} & 2004-11-01 03:04:00 & 2004-11-01 03:04:00 \\\\ \\hline\n30\\tnote{+} & 2004-11-10 01:59:00 & 2004-11-10 01:59:00 \\\\ \\hline\n31\\tnote{+} & 2005-01-16 21:55:00 & 2005-01-17 08:00:00 \\\\ \\hline\n32\\tnote{+} & 2005-01-20 06:36:00 & 2005-01-20 06:36:00 \\\\ \\hline\n33\\tnote{+} & 2005-06-16 20:01:00 & 2005-06-16 20:01:00 \\\\ \\hline\n34\\tnote{+} \\tnote{*} & 2005-09-07 17:17:00 & 2005-09-07 17:17:00 \\\\ \\hline\n35\\tnote{+} \\tnote{*} & 2006-12-06 18:29:00 & 2006-12-06 18:29:00 \\\\ \\hline\n36\\tnote{+} & 2006-12-13 
02:14:00 & 2006-12-13 02:14:00 \\\\ \\hline\n37\\tnote{+} & 2006-12-14 21:07:00 & 2006-12-14 21:07:00 \\\\ \\hline\n38 & 2011-06-07 06:16:00 & 2011-06-07 06:16:00 \\\\ \\hline\n39 & 2011-08-04 03:41:00 & 2011-08-04 03:41:00 \\\\ \\hline\n40 & 2011-08-09 07:48:00 & 2011-08-09 07:48:00 \\\\ \\hline\n41 & 2012-01-23 03:38:00 & 2012-01-23 03:38:00 \\\\ \\hline\n42\\tnote{*} & 2012-01-27 17:37:00 & 2012-01-27 17:37:00 \\\\ \\hline\n43 & 2012-03-07 01:05:00 & 2012-03-07 01:05:00 \\\\ \\hline\n44 & 2012-03-13 17:12:00 & 2012-03-13 17:12:00 \\\\ \\hline\n45 & 2012-05-17 01:25:00 & 2012-05-17 01:25:00 \\\\ \\hline\n46\\tnote{*} & 2013-04-11 06:55:00 & 2013-04-11 06:55:00 \\\\ \\hline\n47 & 2013-05-22 13:08:00 & 2013-05-22 13:08:00 \\\\ \\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\item[*] Gradual Events.\n \\item[+] Missing Data in P6 and P7.\n \\end{tablenotes}\n \\end{threeparttable} \n\\end{table}\n\n\n\\begin{table}\n \\caption{Non SEP Event List \\label{ns}}\n\\centering \n\\begin{threeparttable}\n\\begin{tabular}{|c|c|c|}\n\\hline\nNon SEP Event ID & X-ray Event & Class \\\\ \\hline\n1 & 1997-09-24 02:43:00 & M59 \\\\ \\hline\n2 & 1997-11-27 12:59:00 & X26 \\\\ \\hline\n3 & 1997-11-28 04:53:00 & M68 \\\\ \\hline\n4 & 1997-11-29 22:28:00 & M64 \\\\ \\hline\n5 & 1998-07-14 12:51:00 & M46 \\\\ \\hline\n6 & 1998-08-18 08:14:00 & X28 \\\\ \\hline\n7 & 1998-08-18 22:10:00 & X49 \\\\ \\hline\n8 & 1998-08-19 21:35:00 & X39 \\\\ \\hline\n9 & 1998-11-28 04:54:00 & X33 \\\\ \\hline\n10 & 1999-01-16 12:02:00 & M36 \\\\ \\hline\n11 & 1999-04-03 22:56:00 & M43 \\\\ \\hline\n12 & 1999-04-04 05:15:00 & M54 \\\\ \\hline\n13 & 1999-05-03 05:36:00 & M44 \\\\ \\hline\n14 & 1999-07-19 08:16:00 & M58 \\\\ \\hline\n15 & 1999-07-29 19:31:00 & M51 \\\\ \\hline\n16 & 1999-08-20 23:03:00 & M98 \\\\ \\hline\n17 & 1999-08-21 16:30:00 & M37 \\\\ \\hline\n18 & 1999-08-21 22:10:00 & M59 \\\\ \\hline\n19 & 1999-08-25 01:32:00 & M36 \\\\ \\hline\n20 & 1999-10-14 08:54:00 & X18 \\\\ \\hline\n21 & 1999-11-14 07:54:00 & M80 \\\\ \\hline\n22 & 1999-11-16 02:36:00 & M38 \\\\ \\hline\n23 & 1999-11-17 09:47:00 & M74 \\\\ \\hline\n24 & 1999-12-22 18:52:00 & M53 \\\\ \\hline\n25 & 2000-01-18 17:07:00 & M39 \\\\ \\hline\n26 & 2000-02-05 19:17:00 & X12 \\\\ \\hline\n27 & 2000-03-12 23:30:00 & M36 \\\\ \\hline\n28 & 2000-03-31 10:13:00 & M41 \\\\ \\hline\n29 & 2000-04-15 10:09:00 & M43 \\\\ \\hline\n30 & 2000-06-02 06:52:00 & M41 \\\\ \\hline\n31 & 2000-06-02 18:48:00 & M76 \\\\ \\hline\n32 & 2000-10-29 01:28:00 & M44 \\\\ \\hline\n33 & 2000-12-27 15:30:00 & M43 \\\\ \\hline\n34 & 2001-01-20 21:06:00 & M77 \\\\ \\hline\n35 & 2001-03-28 11:21:00 & M43 \\\\ \\hline\n36 & 2001-06-13 11:22:00 & M78 \\\\ \\hline\n37 & 2001-06-23 00:10:00 & M56 \\\\ \\hline\n38 & 2001-06-23 04:02:00 & X12 \\\\ \\hline\n39\\tnote{+} & 2004-12-30 22:02:00 & M42 \\\\ \\hline\n40\\tnote{+} & 2004-01-07 03:43:00 & M45 \\\\ \\hline\n41\\tnote{+} & 2004-09-12 00:04:00 & M48 \\\\ \\hline\n42\\tnote{+} & 2004-01-17 17:35:00 & M50 \\\\ \\hline\n43\\tnote{+} & 2005-07-27 04:33:00 & M37 \\\\ \\hline\n44\\tnote{+} & 2005-11-14 14:16:00 & M39 \\\\ \\hline\n45\\tnote{+} & 2005-08-02 18:22:00 & M42 \\\\ \\hline\n46\\tnote{+} & 2005-07-28 21:39:00 & M48 \\\\ \\hline\n47\\tnote{+} & 2006-04-27 15:22:00 & M79 \\\\ \\hline\n \\end{tabular}\n \\begin{tablenotes}\n \\item[+] Missing Data in P6 and P7.\n \\end{tablenotes}\n \\end{threeparttable} \n\\end{table}\n\n\n\n\nOnly a portion of the collected data is used to train and test our\nclassifier. 
The positive class in this study is composed of X-ray and proton\nchannel time series that led to $>$100 MeV SEP impulsive or gradual events. On\nthe other hand, the negative class is composed of X-ray and proton channel\ntime series that did not lead to any $>$100 MeV SEP events. In order to\nselect such events we used a number of catalogs. For the positive class events\nwe used the same catalog of $>$100 MeV SEP events as in\n\cite{nunez2011predicting}, which covers the events that happened between 1997 and\n2013.\n\n\par \n\nOur positive class is composed of the 47 X-ray parent events of the\ncorresponding $>$100 MeV SEP events that appear in \cite{nunez2011predicting},\nas shown in Table~\ref{sepevents}.\nWe use the X-ray catalog\n(\href{https:\/\/www.ngdc.noaa.gov\/stp\/space-weather\/solar-data\/solar-features\/solar-flares\/x-rays\/goes\/xrs\/}{\textit{https:\/\/www.ngdc.noaa.gov\/stp\/space-weather\/solar-data\/solar-features\/solar-flares\/x-rays\/goes\/xrs\/}})\nas well as the CME catalog\n(\href{https:\/\/cdaw.gsfc.nasa.gov\/CME\\_list\/}{\textit{https:\/\/cdaw.gsfc.nasa.gov\/CME\\_list\/}})\nfrom the SOlar and Heliospheric Observatory (SOHO) to derive the parent\nevents of the $>$100 MeV SEP events. Two SEP events\nthat happened in August and September 1998 are an exception: we believe they are gradual events,\nbut we could not map them to any CME report due to the missing data during the SOHO mission\ninterruption. The interruption happened because of the major loss of attitude\nexperienced by the spacecraft, due to the failure to adequately monitor the\nspacecraft status and an erroneous decision which disabled part of the on-board\nautonomous failure detection \cite{nunez2011predicting}.\nIt is worth noting that we consulted the NOAA-prepared\nSEP event catalog along with the parent flare\/CME events\n(\href{ftp:\/\/ftp.swpc.noaa.gov\/pub\/indices\/SPE.txt}{\textit{ftp:\/\/ftp.swpc.noaa.gov\/pub\/indices\/SPE.txt}}).\nFor events that are missing from the NOAA catalog, we made our own\nflare\/CME--SEP event mapping. Fig.~\ref{cats} shows the three external catalogs\nthat we used to produce our own catalog, from which we generate our SEP\ndataset. \nTo obtain a balanced dataset, we selected another 47 X-ray events that did not\nproduce any SEP events, shown in Table~\ref{ns}. We noticed\nthat there are nine SEP events (see Table~\ref{sepevents},\nIDs 29-37) that happened during the period when only GOES-12\nwas operational, as can be seen in Fig.~\ref{fig2}. During that period, channels P6\nand P7 failed and there was no secondary GOES.\nTo make sure not to create a biased classifier that relies on the missing data to\nmake its predictions, we also chose nine events from the negative class\nfrom that same period (see Table~\ref{ns},\nIDs 39-47).\n\nIn this paper we make a clear distinction\nbetween the two different classes of SEP events: gradual and impulsive. We\nassume that an SEP event is flare-accelerated, and therefore impulsive, if the lag\nbetween the flare occurrence and the SEP onset time is very small and the\nflux intensity reaches a global peak a few minutes to an hour after the onset\ntime. On the other hand, a gradual event shows a progressive increase in the\nproton flux trend that does not reach a global peak; instead, the peak is\nmaintained steadily before dropping again progressively.
Finally, a non-SEP\nevent happens when there is an X-ray event of minimum intensity $M3.5$ that is\nnot followed by any significant proton flux increase in one of the P6-P11\nchannels. \n\n\section{Methodology}\n\nThis section introduces a novel approach to predicting the occurrence of $>$100\nMeV SEP events based on interpretable decision tree models. We considered the\nX-ray and proton channels as multivariate time series that entail some\ncorrelations which may be precursors to the occurrence of an event. While\n\cite{nunez2011predicting} considers the correlation between the X-ray and proton\nchannels only, we extended the correlation study to all the channels,\nincluding correlations that happen across different proton channels. We\napproached the problem from a multivariate time series classification\nperspective, the classification task being whether the observed time series\nwindows will lead to an SEP event or not. There are two ways of performing\ntime series classification. The first approach, which first appeared in\n\cite{xi2006fast}, is to use the raw time series data and find the\nk-nearest neighbors with similarity measures such as the Euclidean distance\nand dynamic time warping. This approach is effective when time series of\nthe same label show a distinguishably similar shape pattern. In this problem,\nthe time series that we are working with are direct instrument readings that\nshow a jitter effect, which is common in electromechanical device readings\n\cite{scargle1982studies}.\nAn example of the jitter effect is shown in P10 and P11 in\nFigures~\ref{table1impulsive}(b) and \ref{table1impulsive}(c). Time series\njitter makes it hard for distance measures, including elastic measures, to\ncapture similar shape patterns.\nTherefore, we explored the second time series classification approach, which\nrelies on extracting features from the raw time series before feeding them to a\nmodel. In the next subsections, we describe the time series data\nextraction, the feature generation and the data pre-processing.\n\n\subsection{Data Extraction}\nOur approach starts from the assumption that a $>$100 MeV impulsive event may\noccur if the parent X-ray event peak is at least $M3.5$, as was suggested\nin \cite{nunez2011predicting}. Therefore, for the negative class, we carefully picked\nX-ray events whose peak intensity is at least $M3.5$ but which did not lead to any\nSEP event (see column 3 in Table~\ref{ns}). We extracted\nobservation windows of data of different lengths, which we call spans. A span is defined as the number\nof hours that constitute the observation period prior to an X-ray event. A total\nof 94 (47*2) X-ray events (shown in column 3 and column 2 of\nTables~\ref{sepevents} and \ref{ns} respectively) were extracted\nwith different span windows. The span concept is illustrated by the yellow\nshaded area in Figure~\ref{table1impulsive}. The span window in this case is\n10 hours and stops exactly at the start time of the X-ray event. Since the\ncadence between reports is five minutes, a 10-hour span\nwindow corresponds to a multivariate X-ray and proton time series of length 120.\n\n\subsection{Feature Generation}\n\nTo express the X-ray and proton cross-channel correlations we used a Vector\nAutoregression Model (VAR), which is a stochastic process model used to capture\nthe linear interdependencies among multiple time series. VAR is the extension of\nthe univariate autoregressive model to multivariate time series.
The VAR model\nis useful for describing the behavior of economic and financial time series and for\nforecasting \cite{zivot2006vector}. The VAR model permits us to express each\ntime series window as a linear function of past lags (past values) of\nitself and of past lags of the other time series. The lag $l$ specifies how many\ntime steps into the past of each series\nenter the regression.\nTheoretically, if there exists a magnetic connection between the Sun and Earth\nthrough the Parker spiral, the X-ray fluctuation precedes its corresponding\nproton fluctuation. Therefore, we do not express the X-ray channels in terms of\nthe other time series, but focus on expressing the proton channels with\nrespect to the past lags of\nthemselves and past lags of the X-ray channels (xs and xl). The VAR model\nof order one, denoted as VAR(1), can in our setting be expressed\nby Equations~(\ref{eq1})-(\ref{eq6}).\n \nThere are eight time series in total: the two X-ray channels and the six proton channels. Every\nequation highlights the relationship between the dependent variable (a proton channel) and the\nother proton and X-ray variables, which are independent variables. The higher\nthe dependence of a proton channel on an independent variable, the higher is\nthe magnitude of the coefficient $||\phi_{dependent\_{independent}}||$.\nWe used the coefficients of the proton equations\nas a feature vector representing a data sample. The feature vector representing\na data point using the VAR($n$) model is expressed in Equation~(\ref{vec}).\n \nSince the lag parameter $l$ determines the number of coefficients involved in each\nequation, the number of features in the feature vector varies. More\nspecifically, the total number of features is 8 (independent variables) $\times$ 6 \n(dependent variables) $\times$ $l$ (lags), plus the 6 intercept terms; a code sketch\nof this feature-extraction step is given after Equation~(\ref{vec}).\n\begin{table*}\n\begin{equation}\label{eq1}\nP6_{t} = \phi_{P6\_{xs,1}}\,xs_{t-1} + \phi_{P6\_{xl,1}}\,xl_{t-1} +\n\phi_{P6\_{P6,1}}\,P6_{t-1} + \ldots +\n\phi_{P6\_{P11,1}}\,P11_{t-1} + \alpha_{P6}\n \end{equation}\n \begin{equation}\nP7_{t} = \phi_{P7\_{xs,1}}\,xs_{t-1} + \phi_{P7\_{xl,1}}\,xl_{t-1} +\n\phi_{P7\_{P6,1}}\,P6_{t-1} + \ldots +\n\phi_{P7\_{P11,1}}\,P11_{t-1} + \alpha_{P7}\n \end{equation}\n \begin{equation}\nP8_{t} = \phi_{P8\_{xs,1}}\,xs_{t-1} + \phi_{P8\_{xl,1}}\,xl_{t-1} +\n\phi_{P8\_{P6,1}}\,P6_{t-1} + \ldots +\n\phi_{P8\_{P11,1}}\,P11_{t-1} + \alpha_{P8}\n\end{equation}\n\centering \vdots\n\begin{equation}\tag{6}\label{eq6}\nP11_{t} = \phi_{P11\_{xs,1}}\,xs_{t-1} + \phi_{P11\_{xl,1}}\,xl_{t-1} +\n\phi_{P11\_{P6,1}}\,P6_{t-1} + \ldots +\n\phi_{P11\_{P11,1}}\,P11_{t-1} + \alpha_{P11}\n\end{equation}\n\end{table*}\n\n\begin{align}\tag{7}\n x &= \begin{bmatrix}\n \phi_{P6\_{xs,1}} \\\n \phi_{P6\_{xl,1}} \\\n \phi_{P6\_{P6,1}} \\\n \phi_{P6\_{P7,1}} \\\n \vdots \\\n \phi_{P11\_{P8,n}}\\ \n \phi_{P11\_{P9,n}}\\ \n \phi_{P11\_{P10,n}}\\ \n \phi_{P11\_{P11,n}}\\ \n \end{bmatrix} \label{vec}\n \end{align} 
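\n\nTo make the feature-generation step concrete, the following is a minimal\nsketch of how the VAR coefficients of a single observation window can be\nextracted. The variable names are hypothetical, and the {\tt VAR} class of the\n{\tt statsmodels} Python package is used here as a stand-in for our fitting\ncode (it fits all eight equations, of which we keep only the six proton ones):\n\begin{verbatim}\nimport numpy as np\nfrom statsmodels.tsa.api import VAR\n\ndef var_features(window, lag):\n    # window: (length, 8) array with columns\n    # [xs, xl, p6, p7, p8, p9, p10, p11]\n    res = VAR(window).fit(maxlags=lag, trend='c')\n    # res.params has shape (1 + 8*lag, 8); column k holds the\n    # intercept and lag coefficients of equation k. We keep the\n    # six proton equations (columns 2..7), which yields\n    # 6*(8*lag + 1) coefficients per window.\n    return res.params[:, 2:].flatten(order='F')\n\end{verbatim}\n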
\n\n\subsection{Data Preprocessing}\nBefore feeding the data to a classifier we cleaned the data from empty values\nthat appear in the generated features. To do so, we used a 3-nearest-neighbors,\nclass-level imputation technique. The method finds the 3 nearest neighbors\nthat have the same label as the sample with the missing feature. Nearest-neighbors\nimputation weights the samples using the mean squared difference\ncomputed on the other, non-missing features. Then it imputes the missing\nvalue with the value of the nearest-neighbor sample. The reason why\nthe imputation is done on a class-level basis is that features may behave\ndifferently across the two classes (SEP and non-SEP); therefore, it is\nimportant to impute the missing data with values from the same class.
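\n\nA minimal sketch of this class-level imputation step follows; the names are\nhypothetical, and scikit-learn's {\tt KNNImputer} is used as a stand-in,\nfitted separately per class so that the neighbors always share the label of\nthe sample being imputed:\n\begin{verbatim}\nimport numpy as np\nfrom sklearn.impute import KNNImputer\n\ndef impute_by_class(X, y):\n    # X: (n_samples, n_features) feature matrix with NaN gaps,\n    # y: binary labels (SEP / non-SEP); assumes no feature is\n    # missing for an entire class.\n    X_out = X.copy()\n    for label in np.unique(y):\n        mask = (y == label)\n        # impute using neighbors of the same class only\n        X_out[mask] = KNNImputer(n_neighbors=3).fit_transform(X[mask])\n    return X_out\n\end{verbatim}\n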
\n\n\section{Experimental Evaluation}\n\nIn this section we explain the decision tree model that we use, as well\nas the sampling methodology. We also provide a rationale for the choice of the\nparameters ($l$ and $span$). Finally, we zoom in on the models\nwith the most promising performance levels.\n\n\subsection{Decision Tree Model}\n\nA decision tree is a hierarchical tree structure used to determine classes based\non a series of rules\/questions about the attribute values of the data points\n\cite{safavian1991survey}.\nEvery non-leaf node represents an attribute split (question) and all the leaf\nnodes represent the classification result. In short, given a set of features with\ntheir corresponding classes, a decision tree produces a sequence of\nquestions that can be used to recognize the class of a data sample.\nIn this paper, the data attributes are the VAR($l$) coefficients\n$[\phi_{P6\_{xs,1}}, \phi_{P6\_{xl,1}}, \ldots, \phi_{P11\_{P11,l}}]$ and the\nclasses are binary: SEP and non-SEP.\n\nThe decision tree classification model starts by finding the variable that\nmaximizes the separation between classes.\nDifferent algorithms use different metrics, also called purity measures, for\nselecting the feature that yields the best split. Some splitting criteria include\nGini impurity, information gain, and variance reduction. The commonality between\nthese metrics is that they all measure the homogeneity of a given feature with\nrespect to the classes. The metric is applied to each candidate feature to\nmeasure the quality of the split, and the best feature is used. In this paper we\nused the CART decision tree algorithm, as described in\n\cite{loh2011classification} and \cite{steinberg2009cart}, with Gini and\ninformation gain as the splitting criteria.\n\n\subsection{Parameter Choice}\n\n \begin{figure*} \n \centering\n \includegraphics[width=0.8\linewidth]{figs\/SpanAcc.pdf}\n \caption{Decision tree accuracy with respect to the span window and the\n lag parameters using Gini and information gain splitting criteria. The\n dotted line shows a linear fit to the accuracy curve. }\n \label{spanacc}\n\end{figure*} \n\nOur approach relies heavily on the choice of parameters, namely, the span window\nand the VAR model lag parameter. The span is the number of observation hours\nthat precede the occurrence of an X-ray event. It determines the length\nof the multivariate time series to be extracted. On the other hand, the lag\n($l$) determines the size of the feature space that will be used, as well as\nhow far into the past the time series depend on each other.\nAs mentioned previously, the size of the feature space grows substantially with\neach one-step increment of the lag parameter:\n$n_{features}$ = 8 (independent variables) $\times$ 6\n(equations) $\times$ $l$ + 6 (intercepts), i.e., 48 additional coefficients per\nunit of lag. In order to determine the optimal parameters to\nbe used, we ran a decision tree model on a set of values for both the span and\nlag parameters. More specifically, we used the range [3-30] for the span window\nand the set \{1,3,5,7,9\} for $l$. Since we have a balanced dataset, we used\nstratified ten-fold cross-validation as the sampling methodology. Stratified\nsampling ensures a balance in the number of positive and negative samples in\nboth the training and testing splits. Ten-fold cross-validation randomly\nsplits the data into 10 subsets; models are trained on nine of the\nfolds (90\% of the dataset) and tested on the remaining fold (10\% of the dataset).\nEvery fold is used once for testing and nine times for training. In our\nexperiments, we report the average accuracy over the 10 folds; a sketch of this\nevaluation loop over the parameter grid is given below.
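\n\nThe following is a minimal sketch of the parameter-grid evaluation\n(hypothetical names; scikit-learn's CART implementation and cross-validation\nutilities serve as stand-ins for our code):\n\begin{verbatim}\nfrom sklearn.model_selection import StratifiedKFold, cross_val_score\nfrom sklearn.tree import DecisionTreeClassifier\n\ncv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)\nresults = {}\nfor criterion in ('gini', 'entropy'):  # Gini / information gain\n    for span in range(3, 31):          # span window in hours\n        for lag in (1, 3, 5, 7, 9):\n            X, y = build_dataset(span, lag)  # VAR features, labels\n            clf = DecisionTreeClassifier(criterion=criterion)\n            scores = cross_val_score(clf, X, y, cv=cv)\n            results[(criterion, span, lag)] = scores.mean()\n\end{verbatim}\nHere {\tt build\_dataset} is a hypothetical helper that assembles the\nVAR-coefficient feature matrix and the SEP\/non-SEP labels for a given\nparameter pair.\n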
\n Fig.~\ref{spanacc} illustrates the accuracy curves with respect to the span\n windows for the five lags that we considered. We report the accuracies of the\n decision tree model using both the Gini and information gain splitting criteria. In\n order to better capture the model behavior with increasing span, we\n plotted a linear fit to the accuracy curves of each lag. The first observation\n that can be made is that the slopes of the linear fit for $l$=1 and $l$=3 are\n relatively small in comparison to the other lags ($l>$3). This signifies that\n the model does not show any increasing or decreasing accuracy trend with the\n increase of the span window. Therefore we conclude that $l$=1 and $l$=3 are\n too small to discover any relationship between the proton and X-ray channels.\n Setting the lag parameter to $l$=1 or $l$=3 corresponds to expressing the\n time series going back in time by at most five minutes and 15\n minutes respectively. These times are short, especially for $l$=1 (5\n minutes), which theoretically cannot capture a causal connection, since the\n protons can at most approach the speed of light, which corresponds to a\n Sun-Earth travel time of at least 8.5 minutes. For the other lags ($l>3$)\n there is a noticeable increase in the steepness of\n the accuracy linear fit, which suggests that the accuracy increases with\n the increasing span window. The second observation is that for all the $l>3$\n datasets the best accuracy was achieved in the last four span windows (i.e., span\n $\in$ \{27,28,29,30\}). Therefore, we filtered the initial range of\n parameter values to \{5,7,9\} for $l$ and \{27,28,29,30\} for the span. In the\n next subsection we zoom in on every classifier within this parameter\n grid.\n \n \subsection{Learning Curves}\nTo be able to discriminate between decision tree models that show similar\naccuracies we use the model learning curves, also called experience curves, to\ngain insight into how the accuracy changes as we feed the model with more\ntraining examples.\nLearning curves are particularly useful for comparing different\nalgorithms \cite{madhavan1997new} and choosing optimal model parameters during\nthe design \cite{pedregosa2011scikit}. They are also a good tool for visually\ninspecting the sanity of the model in case of overtraining or undertraining.\nFigs.~\ref{lcgini} and \ref{lcentropy} show the learning curves of the\ndecision tree model using Gini and information gain as the splitting criteria,\nrespectively. The red line represents the training accuracy, which evaluates the\nmodel on the newly trained data. The green line shows the testing\naccuracy, which evaluates the model on the never-seen testing data. The\nshaded area represents the standard deviation of the accuracies after running\nthe model multiple times with the same amount of training data. It is noticeable\nthat the standard deviation becomes higher as the lag is increased. Also, it can\nbe seen that the best average accuracies that appear in Fig.~\ref{spanacc} are\nnot always the ones that have the best learning curves. For example, in\nFig.~\ref{spanacc} the best accuracy appears to be reached at\n$l$=7 and $span=27,29$; however, the learning curves corresponding to those settings\nshow a standard deviation band that is less smooth than for\n$l$=5. The experiments thus show that using $l$=5 results in relatively\nstable models with lower variance. Consequently, we\nzoom in on $l$=5 for all the spans $\in \{27,28,29,30\}$ that we previously\nfiltered.\n\begin{figure*} \n \centering \n \begin{tabular}{ c c c c c }\n\n & Span 27 & Span 28 & Span 29 & Span 30 \\\n \n \begin{turn}{90} \hspace{10mm} Lag 5 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/5s27_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/5s28_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/5s29_gini_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/5s30_gini_m.png}\n \\\n \begin{turn}{90} \hspace{10mm} Lag 7 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/7s27_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/7s28_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/7s29_gini_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/7s30_gini_m.png}\n \\\n \n \begin{turn}{90} \hspace{10mm} Lag 9 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/9s27_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/9s28_gini_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/9s29_gini_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/9s30_gini_m.png}\n \\\n \end{tabular}\n\n \t\caption{Learning curves of CART decision tree models with the Gini splitting\n criterion, spans $\in$ \{27,28,29,30\} and lag $\in$ \{5,7,9\}\label{lcgini}}\n \n\end{figure*}\n\n\begin{figure*} \n\n \centering \n \begin{tabular}{ c c c c c }\n\n & Span 27 & Span 28 & Span 29 & Span 30 \\\n \n \begin{turn}{90} \hspace{10mm} Lag 5 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/5s27_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/5s28_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/5s29_entropy_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/5s30_entropy_m.png}\n \\\n \begin{turn}{90} \hspace{10mm} Lag 7 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/7s27_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/7s28_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/7s29_entropy_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/7s30_entropy_m.png}\n \\\n \n \begin{turn}{90} \hspace{10mm} Lag 9 \end{turn}&\n \includegraphics[width=0.22\linewidth]{figs\/9s27_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/9s28_entropy_m.png} &\n \includegraphics[width=0.22\linewidth]{figs\/9s29_entropy_m.png}&\n \includegraphics[width=0.22\linewidth]{figs\/9s30_entropy_m.png}\n \\\n \n \end{tabular}\n \caption{Learning curves of CART decision tree models with the information gain\n splitting criterion, spans $\in$ \{27,28,29,30\} and lag $\in$ \{5,7,9\}\label{lcentropy} }\n\end{figure*}
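\n\nThe curves in Figs.~\ref{lcgini} and \ref{lcentropy} can be generated along\nthe following lines (a sketch assuming scikit-learn's {\tt learning\_curve}\nutility; plotting of the mean and standard deviation bands is omitted):\n\begin{verbatim}\nimport numpy as np\nfrom sklearn.model_selection import StratifiedKFold, learning_curve\nfrom sklearn.tree import DecisionTreeClassifier\n\nX, y = build_dataset(span=30, lag=5)  # hypothetical helper, as above\nsizes, train_sc, test_sc = learning_curve(\n    DecisionTreeClassifier(criterion='gini'), X, y,\n    train_sizes=np.linspace(0.1, 1.0, 8),\n    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))\n# train_sc / test_sc: accuracies per training size and fold;\n# plot their means and standard deviations against sizes.\n\end{verbatim}\n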
splitting\ncriteria\\label{eval}}\n\\label{my-label}\n\\begin{tabular}{l|l|l|l|l|l|l|l|l|}\n\\cline{2-9}\n & \\multicolumn{4}{c|}{Gini} & \\multicolumn{4}{c|}{Information Gain} \\\\ \\hline\n\\multicolumn{1}{|l|}{Span} & 27 & 28 & 29 & 30 & 27 & 28 & 29 & 30 \\\\ \\hline\n\\multicolumn{1}{|l|}{Accuracy} & 0.64 & 0.74 & 0.73 & \\textbf{0.74} & 0.77 &0.70 & 0.67 & \\textbf{0.78} \\\\ \\hline\n\\multicolumn{1}{|l|}{Recall} & 0.69 & 0.70 & 0.74 & \\textbf{0.70} & 0.76 &0.70 & 0.70 & \\textbf{0.73} \\\\ \\hline\n \\multicolumn{1}{|l|}{Precision} & 0.62 & 0.75 & 0.75 & \\textbf{0.76} & 0.78 & 0.72 & 0.72 & \\textbf{0.86} \\\\ \\hline\n \\multicolumn{1}{|l|}{F1} & 0.65 & 0.75 & 0.75 & \\textbf{0.74} & 0.79 &0.71 & 0.69 & \\textbf{0.82} \\\\ \\hline\n \\multicolumn{1}{|l|}{AUC} & 0.65& 0.72 & 0.74 & \\textbf{0.73} & 0.76 & 0.70 & 0.69 & \\textbf{0.77} \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\n\n \n\n \\begin{figure*} \n \\centering\n \\includegraphics[width=0.8\\linewidth]{figs\/PCA.pdf}\n \\caption{First 3 PCA components derived from (a) all the original 254 features, (b)\n the data sub-space containing only 4 parameters selected as the most\n relevant by the Gini index (as shown in the tree presented in Fig. 9),\n and (c) another data sub-space containing 4 different parameters (with 1\n repetition) selected as the most relevant by the Entropy measure. The\n PCA-based visualizations represent (sub-)spaces of the same data set (as\n shown in the tree presented in Fig. 10), with lag=5, and span=30.}\n \n \\label{pca}\n\\end{figure*} \n\n\n \\begin{figure} \n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/GiniDT.pdf}\n \\caption{Decision Tree with Gini splitting criteria ($span$=30, $l$=5) }\n \\label{giniDT}\n\\end{figure} \n\n \\begin{figure} \n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/EntropyDT.pdf}\n \\caption{Decision Tree with information gain splitting criteria\n ($span$=30, $l$=5) }\n \\label{entropyDT}\n\\end{figure}\n\nTo determine the best behaving model we choose six evaluation metrics that will\nassess the models' performance from different aspects. Accuracy is the most\nstandard evaluation measure used to assess the quality of a classifier by\ncounting the ratio of correct classification over all the classifications.\nIn this context the accuracy measure is particularly useful because our\ntraining and testing data is balanced. The data balance ensures that if the\nclassifier is highly biased toward a given class it will be reflected on the\naccuracy measure. Recall is the second evaluation measure we considered, also\nknown as the probability of detection, which characterizes the\nability of the classifier to find all of the positive cases. Precision is\nused to evaluate the model with respect to the false alarms. In fact, precision\nis 1 - false alarm ratio. Precision and recall are usually anti-correlated;\ntherefore, a useful quantity to compute is their harmonic mean, the F1 score.\nThe last evaluation measure that we consider in the Area Under Curve (AUC) of the Receiver\nOperating Characteristic curve (ROC) curve. The intuition behind this measure is\nthat AUC equals the probability that a randomly chosen positive example ranks\nabove (is deemed to have a higher probability of being positive than) a randomly\nchosen negative example. It has been claimed in \\cite{auccp} that the AUC is\nstatistically consistent and more discriminating than accuracy.\n\n\n\nTable~.\\ref{eval} shows the aforementioned evaluations on the $l$=5 datasets. 
It\nis noticeable that span=30 achieves the best performance levels for both\nsplitting criteria. The decision tree models corresponding to those settings\nusing Gini and information gain are shown in Fig.~\ref{giniDT} and\nFig.~\ref{entropyDT} respectively. For the purpose of visualization we used the PCA\ndimensionality reduction technique \cite{PCA} to plot the full feature space with the 254 dimensions of\nthe lag 5 and span 30 dataset in Fig.~\ref{pca}(a), as well as the reduced feature space\nwith only the features selected by the Gini measure in Fig.~\ref{pca}(b) and\nby the entropy measure in Fig.~\ref{pca}(c).\nIt is clearly visible that the SEP and non-SEP classes are almost\nindistinguishable when all the dimensions are used. When the decision tree\nfeature selection is applied, the data points become more scattered in space and\ntherefore easier for the classifier to distinguish. We also note that both\ndecision tree classifiers have as their root a proton--X-ray correlation parameter\n($P6\_xl\_l2$). Some of the intermediate and leaf nodes have features that show\ncorrelations between proton channels in their conditions. This suggests that\ncross-correlations between proton channels are as important as the X-ray--proton\nchannel correlations that appeared in \cite{nunez2011predicting}. Our best\nmodel shows a decent accuracy that is comparable to (3\% better than) the UMASEP\nsystem, which uses the same catalog. We also made sure that our model is not\nbiased towards the missing data of the lower-energy channels P6 and P7 of\nGOES-12 by choosing the same number of positive- and negative-class samples\nfrom the GOES-12 coverage period.\n\n\n\section{Conclusion}\nIn this paper we designed a new model to predict $>$100 MeV SEP events based on\nGOES satellite X-ray and proton data. This is the first effort that explores\nnot only the dependencies between the X-ray and proton channels but also the\nauto-correlations and cross-correlations within the proton channels. We have\nfound that proton channel cross-correlations based on a lag time (prior point\nin time) can be an important precursor of whether an SEP event may happen or\nnot. In particular, we started finding patterns from lag 5 onwards, and our best\nmodels show that the correlation between proton channel $P6$ and X-ray\nchannel $xl$ is an important precursor to SEP events.\nBecause of the missing data due to the failure of the P6 and P7 proton\nchannels onboard GOES-12, we made sure that our dataset uses the same number of\npositive and negative examples coming from GOES-12. To our knowledge this is\nthe first study that explores proton cross-channel correlations in order to\npredict SEP events. As a future extension of this study we are interested in\nperforming ternary classification by further splitting the SEP class into impulsive\nand gradual. We are also interested in real-time SEP event predictions for\npractical applications of this research.\n\n\n\n\n\section*{Acknowledgment}\nWe thank all those involved with the GOES missions as well as the SOHO\nmission.\nWe also acknowledge all the efforts of NOAA in making the catalogs and X-ray\nreports available.\nThis work was supported in part by two NASA Grant Awards (No.\nNNX11AM13A, and No. NNX15AF39G), and one NSF Grant (No. AC1443061).
The\nNSF Grant has been supported by funding from the Division of Advanced\nCyberinfrastructure within the Directorate for Computer and Information Science\nand Engineering, the Division of Astronomical Sciences within the Directorate\nfor Mathematical and Physical Sciences, and the Division of Atmospheric and\nGeospace Sciences within the Directorate for Geosciences.\n\n\n\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFor studies of the hadronic final state in high-energy collisions, \nversatile programs for the calculation of QCD corrections are required.\nThe extraction of scale-dependent \nphysical quantities such as the running strong coupling\nconstant $\alpha_s\left(Q^2\right)$ and parton densities\n$f_i\left(\xi,Q^2\right)$ requires precise predictions \nin next-to-leading order of QCD perturbation theory.\nAt the electron--proton collider HERA at DESY in Hamburg, \nthe strong coupling constant has been measured via jet rates \n\cite{1,2}. There is also a direct fit of the gluon density \n$f_g(\xi, Q^2)$ \cite{3} based on a Mellin transform\nmethod \cite{4,5}. Calculations for jet cross sections \nin deeply inelastic scattering for the case of the modified JADE \nscheme\nhave been performed \cite{6,7,8,9,10}\footnote{\nIn these calculations, terms of the form $c\log c$, \n$c$ being the jet cut, have been neglected. This implies in particular\na certain insensitivity with respect to the jet recombination scheme.\nThe set-up of the calculations \cite{6,7,9} is such that\na jet consisting of two partons is always mapped onto a massless jet. \nTherefore the jet definition scheme which is used on experimental data\nshould be a ``massless'' scheme (this excludes, for example, the E-scheme).\nThe variation of jet cross sections within the possible massless schemes\ncannot be modelled by that calculation.}\nand implemented in the \ntwo programs {\tt PROJET} \cite{11} and {\tt DISJET} \cite{12}.\n\nIn the meantime,\ncalculations for arbitrary infrared-safe observables in \ndeeply inelastic scattering have become available \cite{13,14}.\nIn the last few years, the technology for the calculation\nof QCD corrections in next-to-leading order\nhas developed considerably. \nThe main problem in higher-order QCD calculations is the occurrence of\nsevere\ninfrared singularities (they ultimately cancel for infrared-safe\nobservables, or are absorbed into process-independent, physical\ndistribution functions such as parton densities and fragmentation functions).\nThere are explicit algorithms available\nwhich permit the calculation to be done in a ``universal'' way: the \ninfrared singularities are subtracted such that arbitrary \ninfrared-safe observables can be calculated numerically. In principle, \nall existing algorithms are variations on a common theme, namely the\ninterplay of the factorization theorems of perturbative QCD and the \ninfrared-safety of the observables under consideration.\nThere are two different ways to achieve the separation of\ndivergent and finite contributions:\nthe phase-space-slicing method \cite{15} and\nthe subtraction method \cite{16}. \nBoth methods have their merits and drawbacks.\n\begin{description}\n\item[{\rm\unboldmath ($\alpha$)}] \nThe phase-space-slicing method relies on a separation of\nsingular phase-space regions from non-singular ones by means of a \nsmall slicing parameter $s\rightarrow 0$.
The divergent parts are evaluated \nunder the assumption that terms of ${\\cal O}(s (\\log s)^n)$ can be dropped.\nThe analytically evaluated phase-space integrals yield terms of the form\n$(\\log s)^m$, which explicitly cancel against equivalent terms of opposite\nsign from a numerically performed phase-space integration.\nThe simplicity of this scheme is obvious.\nThe main problem is the residual dependence on the technical cut \nparameter~$s$\n(in practice the limit $s\\rightarrow 0$ is not checked for every observable, \nbut it is assumed that a fixed small value will be sufficient).\nMoreover, the numerical cancellation of the logarithmic terms by means\nof a Monte-Carlo integration is delicate.\nThere is a calculational scheme available for the determination of \nthe explicit phase space integrals over the singular regions \\cite{17}. \nFor initial and final-state hadrons this scheme moreover \nrequires the introduction\nof so-called {\\it crossing functions} \\cite{18}, \nto be evaluated for every parton density parametrization.\nFor deeply-inelastic lepton--nucleon scattering, an implementation \nof this calculational scheme is provided by Mirkes and Zeppenfeld\nin {\\tt MEPJET} \\cite{19}.\n\\item[{\\rm\\unboldmath($\\beta$)}] \nThe subtraction method is technically more involved, \nsince the infrared singularities are cancelled point-by-point in \nphase space. The subtraction terms have, owing to the factorization\ntheorems of perturbative QCD, a simple form. The problem is \nto arrange the subtractions in such a way that in the numerical evaluation\nno spurious singularities appear. A general framework, using a specific\nphase space mapping besides the factorization theorems, is given by Catani\nand Seymour in Ref.~\\cite{20}, and implemented in {\\tt DISENT} \n\\cite{21}.\n\nThe approach of the present \npaper is to use a generalized partial fractions\nformula to separate the singularities \\cite{22}. The method is\nbriefly explained in Section~\\ref{algorithm}. We will describe in some\ndetail the implementation {\\tt DISASTER++}\\footnote{\nThis is an acronym for ``Deeply Inelastic Scattering: All Subtraction Through\nEvaluated Residues''.}\nin the form of a {\\tt C++} class library.\n\nThere are two reasons for a new calculation.\n(a) The existing\nprograms have the restriction that the number of flavours is fixed \n($N_f=5$ in the case of {\\tt MEPJET}\nand $N_f$ fixed, but arbitrary for {\\tt DISENT}).\nFor studies of the scale-dependence it is\nnecessary to\nhave a variable number of flavours,\nin order to be consistent with the scale evolution\nof the strong coupling constant and the parton densities.\n{\\tt DISASTER++} makes the $N_f$ dependence explicit in the ``user routine''\non an event-by-event basis,\nand thus results for several renormalization and factorization scales\ncan be calculated simultaneously.\n(b) {\\tt DISASTER++}\nis already set up such that the extension to one-particle-inclusive \nprocesses will be possible without the necessity of re-coding\nthe contributions which are already present for\nthe jet-type observables. This option will be made available\nin future versions of the program, as soon as the remaining contributions\nfor one-particle-inclusive processes are implemented.\n\n\\end{description}\n\nThe outline of this paper is as follows. In Section~\\ref{algorithm}\nwe briefly review the algorithm employed in the present calculation.\nIn Section~\\ref{structure} the {\\tt FORTRAN} interface\nto the {\\tt C++} class library is described. 
\nSome remarks concerning the installation of the package are made\nin Section~\\ref{installation}.\nA comparison of the available universal programs \n{\\tt DISASTER++} (Version 1.0), {\\tt MEPJET} (Version 2.0) \nand {\\tt DISENT} (Version 0.1)\nis presented in \nSection~\\ref{comparison}.\nIn a previous version of this paper, we have drawn \nthe conclusion that we find an overall, but not completely satisfactory\nagreement of {\\tt DISASTER++} and {\\tt MEPJET}, and \nthat there are large deviations when comparing\n{\\tt DISASTER++} and {\\tt DISENT}.\nOne of the purposes of this paper is to present the results of a comparison\nof {\\tt DISASTER++} and a new, corrected version (0.1) \nof {\\tt DISENT}. We now find\ngood agreement of the two programs.\nWe also give a few more results for {\\tt MEPJET}, in particular\nfor the dependence on the technical cut~$s$. It turns out that\neven for very small values of~$s$ \nwe do not achieve agreement with the\n{\\tt DISASTER++}~\/ {\\tt DISENT} results\nfor several cases under\nconsideration\\footnote{\nIn a very recent paper \\cite{23}, E.~Mirkes quotes the results of the\ncomparison of the three programs as performed in the \nprevious version of this paper \\cite{24} as resulting in \na ``so far satisfactory agreement''. This is a \nmisquotation. The formulation in Ref.~\\cite{24} was that \nfor {\\tt MEPJET} and {\\tt DISASTER++} we find an ``overall, though not \ncompletely satisfactory agreement'', and that the results of {\\tt DISENT}\n(Version 0.0) ``differ considerably''. Moreover, in the summary\nof Ref.~\\cite{24} we mention that a few deviations of {\\tt MEPJET} and\n{\\tt DISASTER++} are present. We wish to stress that there is a certain \nsemantic\ngap between the expression \n``satisfactory agreement'' and the results just quoted.\n}.\nThe paper closes with a summary.\nThe contents of this paper are mainly technical. The details of the calculation\nand phenomenological applications will be described in a forthcoming \npublication.\n\n\\section{The Algorithm}\n\\label{algorithm}\nThe calculation is based on the subtraction method. A simple example\nto illustrate this method in general, and a comparison \nwith the phase-space-slicing\nmethod, is given in Ref.~\\cite{25}.\nFor a more detailed exposition of the contents of this section, \nsee Ref.~\\cite{22}.\n\nThe subtraction method is one of the solutions for the problem of \nhow to \ncalculate \nnumerically \ninfrared-safe observables without having \nto modify the calculation for every observable under consideration.\nIn QCD calculations, infrared singularities cancel for sufficiently \ninclusive observables. 
\nThe factorization theorems of perturbative\nQCD (see Ref.~\cite{26} and references therein) \ntogether with the infrared-safety of the observable under consideration\nguarantee that \nthe structure of the limit of the convolution of the parton-level cross \nsection with the observable in soft and collinear regions of phase space\nis well-defined and factorizes in the form of a product of a kernel \nand the Born term.\nThis property allows, for the real corrections, the definition of a subtraction \nterm for every phase-space point.\nFormally:\n\begin{eqnarray}\n\int\mbox{dPS}^{(n)}\,\sigma\,{\cal O}\n&=& \sum_A \int \mbox{dPS}_{i_A} \,k_A \left(\n\int \mbox{dPS}^{(n-1)} \tau_A -\n \left[\n \int \mbox{dPS}^{(n-1)} \tau_A\n \right]_{\mbox{\scriptsize soft\/coll.~limit}}\n\right)\nonumber\\\n&+& \sum_A \int \mbox{dPS}_{i_A} \,k_A \left[\n \int \mbox{dPS}^{(n-1)} \tau_A\n \right]_{\mbox{\scriptsize soft\/coll.~limit}},\n\end{eqnarray}\nwhere $\sigma$ is the parton-level cross section, $\cal O$ is the\ninfrared-safe observable, $k_A$ is a singular kernel, \nand $\tau_A$ is the non-singular part of the product $\sigma\,\cal O$.\nThe index~$A$ runs over all possible soft, collinear and \nsimultaneously soft and collinear singularities of~$\sigma$.\nThe first integral is finite and can be calculated numerically. The second\nintegral contains all infrared singularities. The term in the square bracket\nhas a simple structure\nbecause of the factorization theorems of QCD, and the one-particle\nintegral over the kernel $k_A$ and the factorization contribution from the\nterm in the square brackets can be performed easily.\nThis subtraction formula works only if the subtraction terms do not\nintroduce spurious singularities for the individual terms that eventually\ncancel in the sum. This is achieved by a separation of all singularities\nby means of a general partial fractions formula\n\begin{equation}\n\label{pfid}\n\frac{1}{x_1\,x_2\cdots x_n}\n=\sum_{\sigma\in S_n}\n\frac{1}{x_{\sigma_1}\,(x_{\sigma_1}+x_{\sigma_2})\cdots\n (x_{\sigma_1}+\ldots+x_{\sigma_n})},\n\end{equation}\nwhere the sum runs over all $n!$ permutations of $n$~objects.\nFor $n=2$, Eq.~(\ref{pfid}) reads\n$\frac{1}{x_1 x_2}=\frac{1}{x_1(x_1+x_2)}+\frac{1}{x_2(x_1+x_2)}$:\nfor non-vanishing $x_2$ the first term is singular only for \n$x_1\rightarrow 0$, and vice versa, so that the individual\nsingularities are disentangled.\n\nIn {\tt DISASTER++}, the processes for (1+1) and (2+1)-jet production \nvia one-photon exchange are implemented. The program itself, however, \nis set up in a much more general way. The implemented subtraction procedure \ncan handle an arbitrary number of final-state partons, and zero or one incoming \npartons (an extension to two incoming partons is possible). The {\tt C++}\nclass library is intended to provide a very general framework for \nnext-to-leading-order QCD calculations for arbitrary \ninfrared-safe observables. Of course, the explicit matrix \nelements (Born terms, virtual corrections and factorized real corrections)\nhave to be provided for every additional process to be included.
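\n\nTo close this section, the subtraction idea may be illustrated by a\none-dimensional toy example of the kind referred to in Ref.~\cite{25}:\nfor a regularized singular integral with an infrared-safe ``observable'' $f$,\n\begin{displaymath}\n\int_0^1 dx\, x^{-1+\epsilon} f(x)\n= \frac{f(0)}{\epsilon}\n+ \int_0^1 dx\, \frac{f(x)-f(0)}{x} + {\cal O}(\epsilon),\n\end{displaymath}\nthe subtracted integrand is integrable at $x=0$ and can therefore be\nevaluated by Monte-Carlo methods, while the pole term cancels against the\nvirtual correction. A minimal numerical sketch in Python (the integrand\nfunction {\tt f} is a placeholder):\n\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\ndef f(x):               # placeholder observable; f(0) is finite\n    return 1.0 + x + x**2\n\nx = rng.random(1_000_000)\nfinite_part = np.mean((f(x) - f(0.0)) / x)\n# The subtracted piece converges to the exact value 1.5\n# for this choice of f.\nprint(finite_part)\n\end{verbatim}\n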
Phase spaces $\\cal P$\nprovide a set of four-vectors of initial and final-state\nparticles, which are used to calculate observables\n${\\cal O}({\\cal P})$.\nContributions ${\\cal C}_{ni}$ consist of a list of \n{\\bf weights} $w_{nij}$, $j=1\\ldots K_{ni}$ (here: \n$K_{ni}=11$) which have to be multiplied\nby certain {\\bf flavour factors} $F_{nij}$. \nEvery contribution ${\\cal C}_{ni}$ has an associated\nphase space ${\\cal P}_{nr_{ni}}$; \nit is generally the case that particular phase spaces are \nused for different contributions. Flavour factors are products\nof parton densities, quark charges, powers of the strong coupling constant, \nand powers of the electromagnetic coupling constant.\n\nThe expectation value $\\langle {\\cal O} \\rangle$ \nof a particular observable is given by the following \nsum:\n\\begin{equation}\n\\label{exval}\n\\langle {\\cal O} \\rangle = \n \\sum_{n=1}^N\n \\sum_{i=1}^{L_n}\n {\\cal O}({\\cal P}_{nr_{ni}}) \n \\sum_{j=1}^{K_{ni}}\n w_{nij} F_{nij}.\n\\end{equation}\nThe first sum is the main loop of the Monte Carlo integration.\n\n\\noindent\nThe user has to provide a subroutine\n{\\tt user1} and\na function \n{\\tt user2}.\nThe subroutine\n{\\tt user1(iaction)} is called from {\\tt DISASTER++} in the following cases:\n\\begin{description}\n\\item{\\quad{\\tt iaction=1}:} {\\ }\\\\after start-up of {\\tt DISASTER++}\n\\item{\\quad{\\tt iaction=2}:} {\\ }\\\\before the end of {\\tt DISASTER++}\n\\item{\\quad{\\tt iaction=3}:} {\\ }\\\\before the start of the grid-definition \nrun of the adaptive Monte-Carlo routine, or before the final run of\nthe adaptive integration, in case that there is no grid-definition run\n\\item{\\quad{\\tt iaction=4}:} {\\ }\\\\before the start of the final\nrun of the adaptive Monte-Carlo routine\n\\item{\\quad{\\tt iaction=5}:} {\\ }\\\\after the final\nrun of the adaptive Monte-Carlo routine\n\\item{\\quad{\\tt iaction=6}:} {\\ }\\\\once for every event (to initialize intermediate \nweight sums, etc.)\n\\item{\\quad{\\tt iaction=7}:} {\\ }\\\\signals that the event has to be dropped\nfor technical reasons\n\\end{description}\n\n\\noindent\nThe function {\\tt user2(...)} is called from {\\tt DISASTER++}\nafter an event has been constructed.\nIt has the following arguments (in an obvious\nnotation):\n\\begin{verbatim}\n double precision function\n & user2(\n & integer nr, \n & integer nl,\n & double precision fvect(0..3, -10..10, 1..30),\n & integer npartons(1..30),\n & double precision xb(1..30),\n & double precision q2(1..30),\n & double precision xi(1..30),\n & double precision weight(1..11, 1..50),\n & integer irps(1..50),\n & integer ialphas(1..50),\n & integer ialphaem(1..50),\n & integer lognf(1..50)\n & )\n\\end{verbatim}\n\nHere {\\tt nr} stands for $R_n$, {\\tt nl} stands for $L_n$, \n{\\tt fvect(mu, iparticle, ir)} is the {\\tt mu}$^{\\mbox{th}}$ component\nof the four-vector of the particle \nwith label {\\tt iparticle} ({\\tt mu}=0 corresponds to the energy component)\nin units of [GeV]\nin the Breit frame \nfor the phase space {\\tt ir};\n{\\tt npartons(ir)} is the number of final-state partons,\n{\\tt q2(ir)} is the value of $Q^2$, and {\\tt xi(ir)} is the momentum fraction\nof the incident parton.\nThe particle labels {\\tt iparticle} are given by\n\\begin{description}\n\\item[\\quad{\\tt iparticle=-8:}] proton remnant\n\\item[\\quad{\\tt iparticle=-7:}] incident proton\n\\item[\\quad{\\tt iparticle=-5:}] outgoing electron\n\\item[\\quad{\\tt iparticle=-4:}] incident electron\n\\item[\\quad{\\tt iparticle=-1:}] 
incident parton\n\\item[\\quad{\\tt iparticle=0..(npartons-1):}] outgoing partons\n\\end{description}\n\nThe array {\\tt weight(j, i)} is a list of the weights for contribution\n{\\tt i} in units of [pb], \n{\\tt irps(i)} gives the index of the phase space for this particular \ncontribution,\n{\\tt ialphas(i)} and {\\tt ialphaem(i)} are the powers of the strong \nand electromagnetic coupling constant, respectively, and {\\tt lognf(i)}\nis an index that specifies whether the weights have to be multiplied \nby a factor $\\lambda$ consisting of a product of\na logarithm of a scale and\/or a factor of $N_f$:\n\\begin{description}\n\\item[\\quad{\\tt lognf=0}:] $\\lambda=1$\n\\item[\\quad{\\tt lognf=1}:] $\\lambda=\\ln\\left(\\mu_r^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=2}:] $\\lambda=N_f \\ln\\left(\\mu_r^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=3}:] $\\lambda=\\ln\\left(\\mu_f^2\/Q^2\\right)$\n\\item[\\quad{\\tt lognf=4}:] $\\lambda=N_f \\ln\\left(\\mu_f^2\/Q^2\\right)$\n\\end{description}\nHere $\\mu_r$ and $\\mu_f$ are the renormalization and factorization scales, \nrespectively.\nThe total flavour factor for contribution $i$ is given by \n\\begin{equation}\nF_{nij} = \n \\lambda \\, \n \\alpha_s^{\\mbox{\\tt ialphas($i$)}} \\,\n \\alpha^{\\mbox{\\tt ialphaem($i$)}} \\,\n \\rho_{ij},\n\\end{equation}\nwhere \nthe quantity $\\rho_{ij}$ is a product of squares of quark charges $Q_\\alpha$ \nin units of $e$ and parton densities.\nIn particular:\n\\begin{description}\n\\item \\quad$\\rho_{i1}\n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, f_\\alpha$\n\\item \\quad$\\rho_{i2} \n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, \n f_{\\overline{\\alpha}}$\n\\item \\quad$\\rho_{i3} \n = \\sum\\limits_{\\alpha=1}^{N_f} Q_\\alpha^2 \\, f_g$\n\\item \\quad$\\rho_{i4} = \\rho_{i1}$\n\\item \\quad$\\rho_{i5} = \\rho_{i2}$\n\\item \\quad$\\rho_{i6} = \\rho_{i1}\\,(N_f-1)$\n\\item \\quad$\\rho_{i7} = \\rho_{i2}\\,(N_f-1)$\n\\item \\quad$\\rho_{i8}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta^2$\n\\item \\quad$\\rho_{i9}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_{\\overline{\\alpha}} \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta^2$\n\\item \\quad$\\rho_{i10}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_\\alpha Q_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta$\n\\item \\quad$\\rho_{i11}\n = \\sum\\limits_{\\alpha=1}^{N_f} f_{\\overline{\\alpha}} Q_\\alpha \\,\n \\sum\\limits_{\\beta=1,\\, \\beta \\neq \\alpha}^{N_f} Q_\\beta$\n\\end{description}\nThe $f_\\alpha$ are parton densities evaluated for \nmomentum fractions $\\mbox{\\tt xi(irps($i$))}$ and factorization scale\n$\\mu_f$,\nand $f_{\\overline{\\alpha}}$ stands for the parton density of the anti-flavour\nof the flavour $\\alpha$. Both the renormalization and the factorization\nscheme are $\\overline{\\mbox{MS}}$. The correction terms for the \nDIS factorization \nscheme will be implemented in the near future.\n\nWe wish to note that the product of the weights, the flavour factors and the \nvalues of the observable is normalized in such a way that\nthe sum yields the expectation value in units of [pb].
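As an illustration, the evaluation of Eq.~\\ref{exval} in {\\tt user2} for a\nsingle observable may be organized as follows (a sketch only: the observable\nroutine {\\tt myobs} is a hypothetical, user-supplied function; the\n{\\tt PDFLIB} parametrization and set numbers and the value of\n$\\Lambda_{\\mbox{\\scriptsize QCD}}^{(4)}$ are merely illustrative; the scales\nare chosen as $\\mu_r=\\mu_f=Q$, so that all contributions with\n{\\tt lognf}$\\,\\neq 0$ drop out; error handling and histogramming are\nomitted; the convenience routine {\\tt disaster\\_cff} is described below):\n\\begin{verbatim}\n      double precision function user2(nr, nl, fvect, npartons,\n     &   xb, q2, xi, weight, irps, ialphas, ialphaem, lognf)\n      implicit none\n      integer nr, nl\n      double precision fvect(0:3, -10:10, 30)\n      integer npartons(30)\n      double precision xb(30), q2(30), xi(30)\n      double precision weight(11, 50)\n      integer irps(50), ialphas(50), ialphaem(50), lognf(50)\n      double precision obs(30), ffactin(4), ffactout(13)\n      double precision myobs, wsum, lam\n      external myobs\n      integer ir, i, j\nc --- step 1: evaluate the observable once for every phase space\n      do ir = 1, nr\n         obs(ir) = myobs(fvect, npartons(ir), ir)\n      enddo\nc --- step 2: loop over contributions, innermost loop over weights\n      wsum = 0.d0\n      do i = 1, nl\n         ffactin(1) = xi(irps(i))\n         ffactin(2) = sqrt(q2(irps(i)))\n         ffactin(3) = sqrt(q2(irps(i)))\n         ffactin(4) = sqrt(q2(irps(i)))\n         call disaster_cff(1, 1, 1, 1, 2, 0.2d0, 2, 5,\n     &      ffactin, ffactout)\nc ------ mu_r = mu_f = Q: all scale logarithms vanish\n         lam = 1.d0\n         if (lognf(i) .ne. 0) lam = 0.d0\n         do j = 1, 11\n            wsum = wsum + obs(irps(i)) * weight(j, i) * lam\n     &         * ffactout(12)**ialphas(i)\n     &         * ffactout(13)**ialphaem(i)\n     &         * ffactout(j)\n         enddo\n      enddo\n      user2 = wsum\n      end\n\\end{verbatim}\nThe return value can directly serve as the integrand handed over to\n{\\tt VEGAS}; cf.\\ the discussion of the integration modes below.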
No additional \nfactor such as $1\/N$, $N$ being the total number of generated events,\nhas to be applied\nin Eq.~\\ref{exval}.\n\nSince phase spaces are used several times for different contributions,\nit is a good strategy to first evaluate the observable(s) under consideration\nfor every phase space and to store the corresponding results.\nThe loop over the various contributions (the second sum) comes next;\nthe innermost loop is the one over the flavour factors.\n\nThe Monte Carlo integration itself employs the program {\\tt VEGAS}\n\\cite{27,28}. \n{\\tt VEGAS} is an adaptive multi-dimensional integration routine.\nIntegrations proceed in two steps. \nThe first step is an adaptation step in order\nto set up a grid in the integration variables\nwhich then steers the final integration step.\nThe adaptation step itself refines\nthe grid in a sequence of several iterations.\n{\\tt VEGAS} requires, as parameters, the number of Monte Carlo points \nto be used in the first and second step, respectively, \nand the number of iterations to refine the grid. \nIn the framework of {\\tt DISASTER++}, {\\tt VEGAS} can be used in three different\nways: \n\\begin{itemize}\n\\item As an adaptive integration routine.\nThe routine {\\tt user2} should return a value. This value is handed over\nto {\\tt VEGAS} as \nthe value of the integrand at the particular phase space point, \nand summed up. The final integral quoted by {\\tt VEGAS}\nis the sum of these weights\nfor the final integration.\nThis is the best choice if just one observable, \nfor example a jet cross section, is to be evaluated.\n\\item As a routine that merely supplies random numbers for \nthe events.\nIf the number of iterations is set to zero, then {\\tt VEGAS} just performs\nthe final integration run. The user is then responsible for the correct\nsummation of the weights, and for the determination of the \nstatistical error. It should be noted that, since all weights are\navailable individually in the user routine, an arbitrary number of \nobservables can be evaluated in a single run. In particular, since the\ndependence on the renormalization and factorization scales and on $N_f$\nis fully explicit, the study of the scale dependence of observables\ncan be done in a very convenient way. For example, all plots from \nRef.~\\cite{22} \nhave been obtained in a single run of {\\tt DISASTER++}.\n\\item As a combination of the two preceding alternatives. Here the adaptation \nsteps are included. A ``typical'' infrared-safe observable, \nin the following called the {\\it adaptation variable}, is evaluated, and\nits value is returned to {\\tt VEGAS}. This observable serves to optimize the\ndistribution of points over phase space. A convenient observable of this\nkind is provided by {\\tt DISASTER++} (see below).\nThe ``real'' observables under consideration are evaluated as in the \nsecond alternative in the final integration step.\n\\end{itemize}\n\n\\noindent\n{\\tt DISASTER++} is initialized by a call of \nthe subroutine {\\tt disaster\\_ca()}.
It is recommended \nto end a {\\tt DISASTER++}\nrun by a call of the subroutine \n{\\tt disaster\\_cb()} in order to free\ndynamically allocated memory.\n\n\\noindent\nParameters can be set or commands be executed by means of three routines:\n\\begin{description}\n\\item {\\quad\\tt disaster\\_ci(str, i)} {\\ }\\\\ \n sets the integer parameter denoted by \n the character string {\\tt str} to the value {\\tt i}\n\\item {\\quad\\tt disaster\\_cd(str, d)} {\\ }\\\\\n sets the double precision parameter denoted by \n the character string {\\tt str} to the value {\\tt d}\n\\item {\\quad\\tt disaster\\_cc(str)} {\\ }\\\\ executes the command \ngiven by the character string {\\tt str}\n\\end{description}\nThe following parameters are available (there are a few more to optimize the\ngeneration of the phase space points; they will be documented in forthcoming\nversions of this manual):\n\\begin{description}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt ECM:}}]{\\ }\\\\\n the centre-of-mass energy in [GeV]\n\n\\item[\\quad{\\tt LEPTON\\_INTEGRATION}:]{\\ }\\\\\n {\\tt 1:} integration over $x_B$ and $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt XBMIN:}}]{\\ }\\\\\n minimum value of $x_B$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt XBMAX:}}]{\\ }\\\\\n maximum value of $x_B$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt YMIN:}}]{\\ }\\\\\n minimum value of $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt YMAX:}}]{\\ }\\\\\n maximum value of $y$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt QMIN:}}]{\\ }\\\\\n minimum value of $Q$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt QMAX:}}]{\\ }\\\\\n maximum value of $Q$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt WMIN:}}]{\\ }\\\\\n minimum value of $W$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt WMAX:}}]{\\ }\\\\\n maximum value of $W$\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt PROCESS\\_INDEX:}}]{\\ }\\\\\n {\\tt 1:} leading order\\\\\n {\\tt 2:} next-to-leading order\n \n\\item[\\quad{\\tt NUMBER\\_OF\\_FINAL\\_STATE\\_PARTONS\\_IN\\_BORN\\_TERM:}]{\\ }\\\\\n {\\tt 1}, {\\tt 2}, {\\tt 3} for the process under consideration;\\\\\n {\\tt 1:} (1+1)-jet-type observables (leading and next-to-leading order)\\\\\n {\\tt 2:} (2+1)-jet-type observables (leading and next-to-leading order)\\\\\n {\\tt 3:} (3+1)-jet-type observables (leading order only)\n \n\\item[\\quad{\\makebox[3cm][l]{\\tt POINTS:}}]{\\ }\\\\\n {\\tt POINTS * (FACT\\_PREP + FACT\\_FINAL)} is the \n number of generated points in the Monte Carlo integration\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt FACT\\_PREP:}}]{\\ }\\\\\n the number of points for the grid-definition run is given by\n {\\tt FACT\\_PREP * POINTS}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt FACT\\_FINAL:}}]{\\ }\\\\\n the number of points for the final integration step is given by\n {\\tt FACT\\_FINAL * POINTS}\n\n\\item[\\quad{\\makebox[3cm][l]{\\tt RUN\\_MC:}}]{\\ }\\\\\n to start the Monte-Carlo integration\n\n\\end{description}\n\n\\noindent\nA convenient adaptation observable can be evaluated by a call of\nthe following function:\n\\begin{verbatim}\n double precision function disaster_cao(\n & integer ipdf_collection,\n & integer ipdf_parametrization,\n & integer ipdf_set,\n & integer ialphas_variant,\n & integer ialphas_order,\n & double precision dalphas_lambdaqcd4,\n & integer ialphaem_variant\n & )\n\\end{verbatim}\nThe arguments of the function call are:\n\\begin{description}\n\n\\item[\\quad{\\tt ipdf\\_collection:}]{\\ }\\\\\nthe collection of parton densities; \\\\\n{\\tt 1:} {\\tt PDFLIB} \\cite{29}\n\n\\item[\\quad{\\tt ipdf\\_parametrization:}]{\\ 
}\\\\\nparametrization of parton densities (cf.\\ {\\tt PDFLIB})\n\n\\item[\\quad{\\tt ipdf\\_set:}]{\\ }\\\\\nset of parton densities (cf.\\ {\\tt PDFLIB})\n\n\\item[\\quad{\\tt ialphas\\_variant:}]{\\ }\\\\\nfunction which is used to evaluate the strong coupling constant;\\\\\n{\\tt 1:} running coupling $\\alpha_s(Q^2)$ with \nflavour thresholds at the single heavy quark masses\n\n\\item[\\quad{\\tt ialphas\\_order:}]{\\ }\\\\\n{\\tt 1:} one-loop formula\\\\\n{\\tt 2:} two-loop formula\\\\\nfor the running strong\ncoupling constant\n\n\\item[\\quad{\\tt dalphas\\_lambdaqcd4:}]{\\ }\\\\\nthe QCD parameter $\\Lambda_{\\mbox{\\scriptsize QCD}}^{(4)}$\nfor four flavours\n\n\\item[\\quad{\\tt ialphaem\\_variant:}]{\\ }\\\\\nfunction which is used to evaluate the electromagnetic coupling constant;\\\\\n{\\tt 1:} fine structure constant \\\\\n{\\tt 2:} 1\/137\\\\\n(an implementation of the running electromagnetic \ncoupling constant is in preparation)\n\n\\end{description}\n\n\\noindent\nTo simplify the calculation of the flavour factors, \na {\\tt DISASTER++} routine can be called which returns the\nrequired coupling constants and the combinations of parton densities\nand quark charges:\n\\begin{verbatim}\n subroutine disaster_cff(\n & integer ipdf_collection,\n & integer ipdf_parametrization,\n & integer ipdf_set,\n & integer ialphas_variant,\n & integer ialphas_order,\n & double precision dalphas_lambdaqcd4,\n & integer ialphaem_variant,\n & integer nf,\n & double precision ffactin(4),\n & double precision ffactout(13)\n & )\n\\end{verbatim}\nThe arguments of the function call are the same as in the case of the\nroutine {\\tt disaster\\_cao} (see above), except for the following:\n\\begin{description}\n\n\\item[\\quad{\\tt nf:}]{\\ }\\\\\nthe number of flavours $N_f$\n\n\\item[\\quad{\\tt ffactin:}]{\\ }\\\\\ninput parameters;\n\\begin{description}\n\\item {\\tt ffactin(1):} the momentum fraction variable $\\xi$\n\\item {\\tt ffactin(2):} the factorization scale in [GeV] \n(i.e.\\ the scale argument of the parton densities)\n\\item {\\tt ffactin(3):} the renormalization scale in [GeV] \n(i.e.\\ the scale argument of the running strong coupling constant)\n\\item {\\tt ffactin(4):} the scale argument of the running electromagnetic\ncoupling constant\n\\end{description}\n\n\\item[\\quad{\\tt ffactout:}]{\\ }\\\\\noutput parameters;\n\\begin{description}\n\\item {\\tt ffactout(1..11):} the quantities $\\rho_{i1}$ \\ldots\n$\\rho_{i11}$,\n\\item {\\tt ffactout(12):} the running strong coupling constant\n\\item {\\tt ffactout(13):} the electromagnetic\ncoupling constant\n\\end{description}\n\n\\end{description}\n\nIt is strongly recommended to use this routine, since it uses \na cache that stores a few of the most recent values temporarily, such that\nthe sums $\\rho_{ij}$ and the parton densities do not have to be reevaluated.\nThis routine is supplied for the convenience of the user. The weights\nand events generated by {\\tt DISASTER++} do not depend on this routine.\n\nThe description of the program structure just given may sound\ncomplicated. 
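To give an impression of the basic call sequence, a minimal driver\nprogram might look as follows (a sketch only: the parameter values are\nillustrative and correspond to bin~1 of the comparison in\nSection~\\ref{comparison}, and the routines {\\tt user1} and {\\tt user2}\nhave to be supplied as described above):\n\\begin{verbatim}\n      program example\nc --- minimal DISASTER++ driver (illustrative parameter values)\n      call disaster_ca()\n      call disaster_cd('ECM', 300.0d0)\n      call disaster_ci('LEPTON_INTEGRATION', 1)\n      call disaster_cd('XBMIN', 0.005d0)\n      call disaster_cd('XBMAX', 0.01d0)\n      call disaster_cd('YMIN', 0.01d0)\n      call disaster_cd('YMAX', 0.03d0)\n      call disaster_ci('PROCESS_INDEX', 2)\n      call disaster_ci(\n     &   'NUMBER_OF_FINAL_STATE_PARTONS_IN_BORN_TERM', 2)\n      call disaster_ci('POINTS', 1000000)\n      call disaster_cc('RUN_MC')\n      call disaster_cb()\n      end\n\\end{verbatim}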
It is actually quite simple to use the program; a complete example \nfor the calculation of the (2+1)-jet cross section for the JADE algorithm\nin the E-scheme is given in the files {\\tt disaster\\_f.f}\nand {\\tt clust.f}, as described in Section~\\ref{installation}.\n\n\\section{Program Installation}\n\\label{installation}\n\n\\begin{description}\n\\item[Source code:]\nThe source code of the class library is available on the World Wide Web:\n\\begin{verbatim}\n http:\/\/wwwcn.cern.ch\/~graudenz\/disaster.html\n\\end{verbatim}\n\n\\item[Files:]\nThe package consists of a number of files. To facilitate the installation,\nand to enable the {\\tt C++} compiler to perform certain optimizations,\nthe complete {\\tt C++} part of the program is provided as one file\n{\\tt onefile\\_n.cc} (the individual files are available on request). \nAn example for the {\\tt FORTRAN} interface is\ngiven in the file {\\tt disaster\\_f.f} (calculation of the (2+1) jet \ncross section for the JADE algorithm in the E-scheme), \ntogether with a simple cluster\nroutine in the file {\\tt clust.f}. \nThe number of Monte Carlo events in the example is set to \na tiny number (100) in order to terminate the program after a few seconds.\nRealistic values for the parameter {\\tt POINTS} are of the order of \n$10^6$.\nAn example ``make file'' is given in {\\tt makedisaster}. \n\\item[Mixed Language Programming:]\n{\\tt DISASTER++} is mainly written in the {\\tt C++} programming language.\nThe reasons for the choice of this language are twofold:\nObject-oriented programming allows for programs that are easily \nmaintained and extended\\footnote{It could even be said that object-oriented\nprogramming is a kind of applied ontology: the central categories of this \napproach are given by {\\it objects} and {\\it methods} that define their\nrelationships.\n}, and in high-energy physics there is a trend in the experimental \ndomain for a transition from {\\tt FORTRAN} to {\\tt C++}.\nAlthough the goal has been to write a self-contained {\\tt C++}\npackage, \na few parts of the program are still coded in \n{\\tt FORTRAN}. Moreover, the standard parton density parametrizations\nare only \navailable as {\\tt FORTRAN} libraries. This means that the {\\tt DISASTER++}\npackage cannot be run as a stand-alone {\\tt C++} program. In most cases,\nusers may wish to interface the program to their existing {\\tt FORTRAN}\nroutines. An elegant and machine-independent \nway of {\\it mixed language programming} for the case\nof {\\tt C}, {\\tt C++} and {\\tt FORTRAN} is supported by the \n{\\tt cfortran.h} package described in Ref.~\\cite{30}. \nFor every {\\tt FORTRAN} routine to be called by a {\\tt C++} method, \nan {\\tt extern \"C\"} routine has to be defined as an interface, \nand vice versa. The explicit calls are then generated by means of macros \nfrom {\\tt cfortran.h}. The most convenient way is, after compilation, \nto link the {\\tt FORTRAN} and {\\tt C++} parts via the standard\n\\begin{verbatim}\n f77 -o disaster onefile_n.o ...\n\\end{verbatim}\ncommand\\footnote{The procedure is described here for the {\\tt UNIX}\noperating system.}, \nsuch that the {\\tt FORTRAN} part supplies the entry point.\nThe required {\\tt C++} libraries have to be stated explicitly\nvia the {\\tt -L} and {\\tt -l} options.
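For the {\\tt GNU} compiler, the link step might look as follows (an\nillustrative sketch only; the library path is machine-dependent and has to\nbe adapted):\n\\begin{verbatim}\n f77 -o disaster disaster_f.o clust.o onefile_n.o \\\n     -L\/usr\/local\/lib\/gcc-lib\/i486-linux\/2.7.2 -lstdc++ -lm\n\\end{verbatim}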
The library paths can be obtained\nby compiling and linking a trivial program {\\tt hw.cc} of the type\n\\begin{verbatim}\n #include <stdio.h>\n main() { printf(\"Hello world!\\n\"); }\n\\end{verbatim}\nwith\n\\begin{verbatim}\n gcc -v hw.cc\n\\end{verbatim}\n(for the {\\tt GNU C++} compiler). \nAn example for the required libraries can be found in the \nprototype ``make file'' {\\tt makedisaster}. Some machine-specific information\nis mentioned in the manual of {\\tt cfortran.h}.\n\nIn the {\\tt DISASTER++} package, the explicit {\\tt FORTRAN} interface, \nas described in Section~\\ref{structure},\nis already provided. Thus\nfrom the outside the {\\tt C++}\nkernel is transparent and hidden behind {\\tt FORTRAN} subroutines\nand functions.\n\n\\item[Template instantiation:]\nIn {\\tt DISASTER++}, heavy use is made of {\\it templates}. At present, there\nis not yet a universally accepted scheme for template instantiations.\nThe solution adopted here is the explicit instantiation\nof all templates. This requires\nthat the compiler itself does not instantiate templates automatically.\nThis is achieved for the {\\tt GNU} compiler by means of the switch\n\\begin{verbatim}\n -fno-external-templates\n\\end{verbatim}\n\n\\item[Output files:]\nThere is a small problem with the output from the {\\tt C++} and {\\tt FORTRAN}\nparts of {\\tt DISASTER++}. It seems that, in general, {\\tt C++} \n({\\tt FILE* stdout} and {\\tt ostream cout}) \nand {\\tt FORTRAN} ({\\tt UNIT=6}) keep different\nfile buffers. This is not a problem when the output is written to a terminal, \nsince then the file buffers are immediately flushed\nafter each line-feed character. When writing to \na file (as is usually the case for batch jobs), the file buffers are not \nimmediately flushed, and this leads to the problem that the output \non the file is mixed in non-chronological order. This problem will be solved\nby the introduction of a particular stream class which hands over the output\nto a {\\tt FORTRAN} routine.\n\n\\item[Miscellaneous:]\n{\\tt DISASTER++} employs the {\\tt ANSI C} {\\tt signal} facility to \ncatch interrupts caused by floating point arithmetic. \nIf the signal {\\tt SIGFPE} is raised, a flag in {\\tt DISASTER++} is set, \nwhich eventually leads to the requirement that the event has to be \ndropped (via a call of {\\tt user1(7)}). A non-zero value of\nthe variable {\\tt errno} of the {\\tt ANSI C} {\\tt errno} facility\nis treated similarly. The signal handler is also active \nwhen the user routine is executed, which leads to the effect that\nin the case of a floating point exception the program does not crash, but\ncontinues under the assumption that the event has been dropped.\nForthcoming versions of {\\tt DISASTER++} will make a flag available\nthat can be used to access the status of the signal handler in \nthe user routines. \nMoreover, it is checked whether the weight returned to {\\tt DISASTER++} via\n{\\tt user2} fulfills the criterion for {\\tt IEEE} {\\tt NaN} (``not a number''). \nIf this is the case, it is also requested that the event be dropped.\n\n\\end{description}\n\n\\section{Comparison of Available Programs}\n\\label{comparison}\n\nIn this section, we compare the three available programs {\\tt MEPJET}\n(Version 2.0)\\footnote{\nFor the very-high statistics runs the default random number\ngenerator (generating a Sobol sequence of quasi-random numbers) \nof {\\tt MEPJET} ran out of random numbers.
We therefore had to modify the\nprogram such that it uses another generator which is also part of\nthe {\\tt MEPJET} package. --- The crossing functions for the ``artificial''\nparton densities have been obtained by means of a modification of the program\n{\\tt make\\_str\\_pdf1.f}.\n}, \n{\\tt DISENT} (Version 0.1)\\footnote{\nAn earlier version of this paper \\cite{24}\nreported results of a comparison \nwith {\\tt DISENT} Version 0.0. We found large discrepancies for \nsome choices of the parton density parametrization. In the meantime,\nan error in {\\tt DISENT} has been fixed, and the results \nof {\\tt DISENT} and {\\tt DISASTER++} are now\nin good agreement, see below.\n} and {\\tt DISASTER++}\n(Version 1.0) numerically for various bins of \n$x_B$ and $y$ as defined in Table~\\ref{tab1}, \nand for various choices of the parton density parametrizations.\n\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|}\n\\cline{2-4}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\makebox[4.1cm]{$0.01 < y < 0.03 $} \n & \\makebox[4.1cm]{$0.03 < y < 0.1 $} \n & \\makebox[4.1cm]{$0.1 < y < 0.3 $} \n\\\\ \\hline\n$0.005 < x_B < 0.01$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 1}($Q^2 > 4.6\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 2}($Q^2 > 13.5\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 3}($Q^2 > 45.0\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n$0.05 < x_B < 0.1$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 4} ($Q^2 > 45\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 5} ($Q^2 > 135\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 6} ($Q^2 > 450\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n$0.2 < x_B < 0.4$ \\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[1.2cm]{Bin 7} ($Q^2 > 180\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 8} ($Q^2 > 540\\,\\mbox{GeV}^2$)\n & \\makebox[1.2cm]{Bin 9} ($Q^2 > 1800\\,\\mbox{GeV}^2$)\n\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption[tab1]\n{\n\\label{tab1}\n{\\it\nBins in $x_B$ and $y$. The values in parentheses give the resulting \nlower bounds on $Q^2$.\n}\n}\n\\end{table}\n\nThe centre-of-mass energy is set to 300\\,GeV. To facilitate the\ncomparison, the strong coupling constant is set to a fixed value of\n$\\alpha_s=0.1$, \nand the number of flavours is set to $N_f=5$, even below the bottom \nthreshold ($N_f=5$ is hard-wired into {\\tt MEPJET}). \nThe electromagnetic coupling constant \nis chosen to be $\\alpha=1\/137$ (the value\nis hard-wired into {\\tt DISENT}, but this could be changed trivially, \nin principle). 
The factorization and renormalization schemes of the\nhard scattering cross sections are $\\overline{\\mbox{MS}}$, and the\nfactorization and renormalization scales $\\mu_f$ and $\\mu_r$, respectively, \nare set to $Q$.\n\nThe quantity under consideration is the (2+1)-jet cross section, \nshown in Tables~2--8 in Appendix~\\ref{capp}.\nFor simplicity we consider the modified JADE clustering scheme\nwith resolution criterion $S_{ij} > c\\,W^2$ (pairs of objects with\n$S_{ij} < c\\,W^2$ are recombined) and the E~recombination scheme, \nwhere\n$S_{ij} = (p_i + p_j)^2$, $W$ is the total hadronic energy,\nand $c=0.02$ is the jet resolution parameter.\nWe require, in the laboratory frame ($E_e=27.439$\\,GeV, \n$E_P=820$\\,GeV), a minimum transverse momentum of 1\\,GeV and a pseudo-rapidity\nof $-3.5<\\eta<3.5$ for all jets\\footnote{These cuts in $p_T$ and $\\eta$ \nare employed in order to facilitate event generation with {\\tt MEPJET}; \nthe phase space generator implemented in that program is reminiscent of a \ngenerator for pp~collider physics where $p_T$ and $\\eta$ cuts\nin the laboratory frame are\na standard experimental procedure. It is thus complicated to generate \nevents with {\\tt MEPJET} \nin the full phase space of the laboratory system, as usually required \nfor eP scattering, where ``natural'' cuts in transverse momentum and\npseudo-rapidity would be performed in the hadronic centre-of-mass frame\nor in the Breit frame.}.\n\nThe parton density parametrizations employed in the comparison are:\n\\begin{description}\n\\item[{\\makebox[1cm][l]{(a)}}] \n \\makebox[6cm][l]{the MRSD$_-^\\prime$ parton densities \n \\cite{31}} (Table 2),\n\\item[{\\makebox[1cm][l]{(b)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)^5$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 3),\n\\item[{\\makebox[1cm][l]{(c)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)^5$} \n (Table 4),\n\\item[{\\makebox[1cm][l]{(d)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)^2$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 5),\n\\item[{\\makebox[1cm][l]{(e)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)^2$} \n (Table 6),\n\\item[{\\makebox[1cm][l]{(f)}}] \n \\makebox[3cm][l]{$q(\\xi)=(1-\\xi)$,}\\makebox[3cm][l]{$g(\\xi)=0$}\n (Table 7),\n\\item[{\\makebox[1cm][l]{(g)}}] \n \\makebox[3cm][l]{$q(\\xi)=0$,}\\makebox[3cm][l]{$g(\\xi)=(1-\\xi)$} \n (Table 8).\n\\end{description}\nHere $q(\\xi)$ generically stands for valence and sea distributions\\footnote{\nThis means that $u_v(\\xi)$, $d_v(\\xi)$, $u_s(\\xi)$, $d_s(\\xi)$, \n$s_s(\\xi)$, $c_s(\\xi)$, \n$b_s(\\xi)$ have been set to $q(\\xi)$.\n}, \nand $g(\\xi)$ is the gluon distribution.\nWe wish to point out that the comparison involving the ``artificial''\nparton densities is not just of academic interest. On the contrary, \nfor the extraction \nof, for instance, the gluon density\nfrom jet data \nit is convenient\nto replace the parton densities by simple functions \nwith special properties (such as powers\nof the momentum fraction variable $\\xi$ or functions of an orthonormal \nbasis system), \nin order to achieve a fast fit. These functions usually do not have\nthe shape of physical parton densities; in particular, they do not\nhave to fall off rapidly for $\\xi\\rightarrow 1$.\nMoreover, next-to-leading-order calculations yield unique and well-defined\nresults for the hard scattering cross sections to be convoluted with \nobservables and parton densities.
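As an illustration, parametrization~(b) amounts to a density routine of\nthe following type (a sketch only; the actual hooks into the three programs\ndiffer, cf.\\ the crossing functions mentioned above):\n\\begin{verbatim}\n      subroutine artpdf(xi, q, g)\nc --- ``artificial'' parton densities of type (b): all valence and\nc     sea distributions set to (1-xi)**5, gluon density set to zero\n      implicit none\n      double precision xi, q, g\n      q = (1.d0 - xi)**5\n      g = 0.d0\n      end\n\\end{verbatim}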
We also employ the ``artificial''\nparton densities in order to have a stricter test of the \nhard scattering cross sections.\n\nThe leading-order results of all three programs are in excellent agreement. \nThe next-to-leading-order results of {\\tt DISASTER++} and {\\tt DISENT} are\nin good agreement within about two to (sometimes) \nthree standard deviations\\footnote{\nWe wish to note that the error estimates quoted by the programs are usually not\nrigorous estimates because of the non-Gaussian distribution of the\nMonte-Carlo weights. Therefore, in principle, it is not possible to \ninfer probabilities for the consistency of data samples produced by two\nprograms based on these estimates. \nA more precise, but in general unfeasible way to obtain an estimate of the\nMonte Carlo error would be to run the programs a number of times \nwith different random number seeds and to analyze the spread of the \nquoted results around their central value.\nSuch a study has recently been done by M.~Seymour\nfor {\\tt DISENT} with the result that the \nindividual error estimates are quite reliable \\cite{32}.\n}\nof the larger of the two errors quoted by the two programs. An exception\nis bin~7 for $g(\\xi) = (1-\\xi)^2$. A run of {\\tt DISENT} with higher statistics \nyields a value of $0.1836 \\pm 0.0025$, which is within two standard deviations\nof the {\\tt DISASTER++} result, indicating that there was indeed a statistical\nfluctuation in the original result.\n\nThe comparison of the next-to-leading-order results \nof {\\tt MEPJET} and {\\tt DISASTER++} requires a more detailed discussion:\n\\begin{itemize}\n\n\\item For the MRSD$_-^\\prime$ parton densities, the results for\nbins 3--9 are\ncompatible within about two standard deviations of the statistical error\nof the Monte-Carlo integrations.\nThe results for bins~1 and~2 differ considerably. \nRuns with a smaller value\nof the internal {\\tt MEPJET} cut-off variable~$s$, which is set by default\nto $s=0.1\\,$GeV$^2$, yield\nthe following results for bin 1:\n$580.6 \\pm 6.7$\\,pb ($s=0.01\\,$GeV$^2$), \n$564.8 \\pm 10.5$\\,pb ($s=0.001\\,$GeV$^2$) and\n$575.4 \\pm 13.0$\\,pb ($s=0.0001\\,$GeV$^2$).\nThe statistical error is increased for decreased~$s$ because the integration\nvolume\nof the (3+1) parton contributions is extended into the singular domain.\nBecause of the increased statistical error, we also performed \nhigh-statistics runs with $\\sim 4\\cdot10^9$ (!) Monte Carlo events\nof {\\tt MEPJET} \nfor this bin. \nFor $s=0.001\\,$GeV$^2$ we obtain \n$576.3 \\pm 6.7$\\,pb\nand for $s=0.0001\\,$GeV$^2$\nthe result is\n$583.2 \\pm 7.4$\\,pb.\nThese values from {\\tt MEPJET} \nare compatible with the {\\tt DISASTER++} and {\\tt DISENT} results\\footnote{\nThese results underline that, for the phase space slicing method, results\ngenerally have to be validated {\\it ex post} by a cross-check with a \nsmaller technical cut~$s$ and much higher statistics. It may be argued that\nthere are jet algorithms (the $k_T$~algorithm, for example)\nwhich show a better convergence for $s\\rightarrow 0$.\nHowever, the point here is that one does not know in advance whether this\nis the case for the observable under consideration. --- In Ref.~\\cite{23}\nwe find the statement that $s$-independence in {\\tt MEPJET} is achieved for \n$s=0.1\\,$GeV$^2$.
Our study shows that this is generally not the case, \nand that extremely small values of~$s$, possibly of the order of\n$s=0.0001\\,$GeV$^2$, might be necessary.\n}.\n\\item For the parton density parametrization (b) (quarks only, with a steeply\nfalling distribution $q(\\xi)$ for $\\xi \\rightarrow 1$), \n{\\tt DISASTER++} and {\\tt MEPJET}\nare in good agreement.\n\n\\item The results for parametrization (c) (steeply falling\ngluon parametrization)\nare in good agreement, except for bin 1.\n\n\\item For parametrization (d), \n{\\tt DISASTER++} and {\\tt MEPJET} are in agreement except for bins 1 and 4.\nRuns with a smaller value\nof the {\\tt MEPJET} cut-off variable~$s$\nyield\nthe following results for bin 1:\n$59.6 \\pm 1.8$\\,pb ($s=0.01\\,$GeV$^2$), \n$56.7 \\pm 5.8$\\,pb ($s=0.001\\,$GeV$^2$) and\n$54.9 \\pm 10.4$\\,pb ($s=0.0001\\,$GeV$^2$).\nA high-statistics run ($\\sim 4\\cdot10^9$ events) of {\\tt MEPJET} \nfor bin 1 with $s=0.0001\\,$GeV$^2$ gives the \ncross section $60.0 \\pm 1.9$\\,pb.\nContrary to the observation in case (a), for small~$s$ \nwe do not get agreement of \nthe {\\tt MEPJET} result with the {\\tt DISASTER++} \/ {\\tt DISENT} result\nof about $48$--$49$\\,pb.\n\n\\item The {\\tt MEPJET} results for parametrization (e) \n($g(\\xi) = (1-\\xi)^2$)\ndeviate considerably from the {\\tt DISASTER++}\nresults in bins~1, 2, 4 and 7.\n\n\\item For parametrization (f),\n{\\tt DISASTER++} and {\\tt MEPJET} are incompatible\nfor bins 1, 2, 4, 6 and 7.\n\n\\item For parametrization (g), \n{\\tt MEPJET} and {\\tt DISASTER++} are compatible in bins\n3, 5, 8 and 9 only.\nA high-statistics run ($\\sim 4\\cdot10^9$ events) of {\\tt MEPJET} \nfor bin 4 with $s=0.0001\\,$GeV$^2$ yields the \ncross section $1.29 \\pm 0.02$\\,pb.\nThis value is different from the result for $s=0.1\\,$GeV$^2$, \nbut still inconsistent\nwith the {\\tt DISASTER++} \/ {\\tt DISENT} result of about $0.69$\\,pb.\n\n\\end{itemize}\n\nThe overall picture is thus: Out of the three programs, {\\tt DISASTER++} \nand {\\tt DISENT} (Version 0.1) are in good agreement within about\ntwo, sometimes three standard deviations of the quoted integration errors, \nboth for ``physical'' and ``artificial'' parton densities. This agreement\nis very encouraging, but not yet perfect, and much more detailed studies\ninvolving different sets of observables and differential distributions\nare required. For the two programs, a direct comparison of the\n``jet structure functions'' should also be feasible.\n\nFor several bins, in particular for the ``artificial'' parton distribution \nfunctions, the {\\tt MEPJET}\nresults for the default setting of the \ninternal parameters deviate considerably from the {\\tt DISASTER++}\nand {\\tt DISENT} results. \nFor one particular bin, studied in more detail for\nthe MRSD$_-^\\prime$ parton densities,\nthe\ndiscrepancy disappears for an extremely small internal technical \ncut~$s$ of {\\tt MEPJET}, combined with a substantial increase of the\nnumber of generated events to obtain a meaningful Monte Carlo error. \nA few {\\tt MEPJET} results employing ``artificial'' \nparton densities have been studied in more detail. We observed that \nin these cases a reduction of the~$s$ parameter does not lead to an\nimprovement of the situation. For lack of computer time, we could not study \nall bins with a smaller $s$~cut. The overall situation \nis thus still inconclusive and unclear.
An independent cross check of the\n{\\tt MEPJET} results, in particular of those using the \nimplementation of the crossing functions for the ``artificial'' parton \ndensities, is highly desirable.\n\n\\section{Miscellaneous}\n\\begin{itemize}\n\n\\item If you intend to install and use {\\tt DISASTER++}, please send me \na short e-mail message, and I will put your name on a mailing list\nso that I can inform you when there is a new version of the package.\n\n\\item Suggestions for improvements and bug reports are welcome.\n\n\\item In case there are problems with the installation of the program, \nplease send me an e-mail message.\n\n\\end{itemize}\n\n\\section{Summary}\n\nWe have presented the {\\tt C++} class library \n{\\tt DISASTER++} for the calculation \nof (1+1) and (2+1)-jet type observables in deeply inelastic scattering.\nThe program is based on the subtraction formalism and thus does not require\na technical cut-off for the separation of the infrared-singular from the\ninfrared-finite phase-space regions. \nA {\\tt FORTRAN} interface to the {\\tt C++} class library is available.\n{\\tt DISASTER++} is actually intended to be a general object-oriented\nframework for next-to-leading order QCD calculations. In particular, \nthe subtraction formalism is implemented in a very general way.\n\nWe have performed a comparison of the three available programs\n{\\tt MEPJET}, {\\tt DISENT} and {\\tt DISASTER++}\nover a wide range of the parameters for the lepton phase space.\nWe find good agreement of {\\tt DISASTER++} and the Catani-Seymour\nprogram {\\tt DISENT} (Version 0.1).\nThe comparison of {\\tt DISASTER++} and the Mirkes-Zeppenfeld program\n{\\tt MEPJET} (for the {\\tt MEPJET} \ndefault parameters) leads to several\ndiscrepancies, both for physical and for ``artificial'' parton densities.\nFor the MRSD$_-^\\prime$ parton densities a \nreduction of the internal {\\tt MEPJET} phase-space slicing cut-off \nvariable~$s$, with the number of Monte Carlo events kept fixed, leads to a certain \nimprovement of the central values of the results, \naccompanied by a substantially increased statistical error and fluctuating\ncentral values. A considerable increase of the number of generated events\n(up to the order of several billion events) \neventually leads to an agreement of the {\\tt MEPJET} results with the\n{\\tt DISASTER++} \/ {\\tt DISENT} results for a particular bin of the lepton \nvariables which has been studied in detail.\nFor ``artificial'' parton densities and a selected set of bins of\nthe lepton variables, a reduction of the internal cut~$s$\ndoes not resolve the discrepancies.\nOther bins have not been considered,\nfor lack of computer time for very-high statistics runs.\nIt should be stressed that the present study is still limited in scope.\nAn independent cross check of the {\\tt MEPJET} results for the ``artificial''\nparton densities has to be done before a firm conclusion can be reached.\nMoreover, \nthe study has to be repeated for a wider range of observables and much higher\nMonte Carlo statistics. The $s$~dependence of the {\\tt MEPJET} results\nshould also be studied in more detail.\n\nCompared to the other two programs {\\tt MEPJET} and {\\tt DISENT},\n{\\tt DISASTER++} makes the full $N_f$ dependence and the dependence\non the renormalization and factorization scales available in the \nuser routine.
This is required for consistent studies of effects\nsuch as the scale dependence when the bottom threshold is crossed.\n\n\\section{Acknowledgements}\nI wish to thank M.~Seymour for sending me the numerical results for the new \n{\\tt DISENT} version. D.~Zeppenfeld made a few cross\nchecks of the results for the MRSD$_-^\\prime$ parton densities.\nJ.~Collins has provided me with the {\\tt FORTRAN} \nroutine to test the {\\tt IEEE NaN} condition.\nI am also grateful to Th.~Hadig for a few comments on the first version \nof this paper, and for suggestions for improvements of the program.\n\n\\clearpage\n\n\\begin{appendix}\n\n\\section{Numerical Results}\n\\label{capp}\n\nThis appendix contains the numerical results which are discussed in \nSection~\\ref{comparison}. The entries in the tables are the (2+1)-jet \ncross sections\nin units of [pb].\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{402.1}{1.13}\n & \\pmdg{399.9}{0.53}\n & \\pmdg{399.6}{1.1}\n & \\pmdg{585.0}{2.6}\n & \\pmdg{564.1}{1.9}\n & \\pmdg{578.4}{7.1}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{207.6}{0.59}\n & \\pmdg{207.5}{0.34}\n & \\pmdg{207.4}{0.15}\n & \\pmdg{364.8}{1.5}\n & \\pmdg{347.3}{2.4}\n & \\pmdg{361.1}{3.5}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{60.0}{0.16}\n & \\pmdg{59.9}{0.14}\n & \\pmdg{59.9}{0.15}\n & \\pmdg{119.1}{1.71}\n & \\pmdg{118.0}{1.05}\n & \\pmdg{120.1}{0.94}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{82.9}{0.16}\n & \\pmdg{82.9}{0.10}\n & \\pmdg{82.6}{0.21}\n & \\pmdg{98.1}{1.11}\n & \\pmdg{95.1}{0.61}\n & \\pmdg{95.4}{0.87}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{42.9}{0.08}\n & \\pmdg{42.9}{0.06}\n & \\pmdg{42.6}{0.28}\n & \\pmdg{55.3}{0.46}\n & \\pmdg{54.4}{0.49}\n & \\pmdg{54.9}{0.40}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{11.9}{0.02}\n & \\pmdg{11.9}{0.02}\n & \\pmdg{11.9}{0.08}\n & \\pmdg{17.5}{0.06}\n & \\pmdg{16.8}{0.22}\n & \\pmdg{17.3}{0.13}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{9.60}{0.03}\n & \\pmdg{9.58}{0.01}\n & \\pmdg{9.59}{0.04}\n & \\pmdg{12.1}{0.50}\n & \\pmdg{12.7}{0.07}\n & \\pmdg{12.3}{0.15}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.24}{0.01}\n & \\pmdg{6.23}{0.01}\n & \\pmdg{6.24}{0.02}\n & \\pmdg{8.61}{0.12}\n & \\pmdg{8.55}{0.15}\n & \\pmdg{8.52}{0.08}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.78}{0.003}\n & \\pmdg{1.78}{0.003}\n & \\pmdg{1.78}{0.06}\n & \\pmdg{2.65}{0.03}\n & \\pmdg{2.57}{0.06}\n & \\pmdg{2.63}{0.02}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 2: {\\it\nComparison for MRSD$_-^{\\,\\prime}$ parton densities.\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & 
\\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{36.2}{0.09}\n & \\pmdg{36.3}{0.05}\n & \\pmdg{36.3}{0.12}\n & \\pmdg{39.1}{0.33}\n & \\pmdg{40.9}{0.89}\n & \\pmdg{38.2}{0.53}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{17.8}{0.04}\n & \\pmdg{17.8}{0.03}\n & \\pmdg{17.7}{0.05}\n & \\pmdg{23.2}{0.37}\n & \\pmdg{22.7}{0.41}\n & \\pmdg{22.6}{0.22}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{5.21}{0.01}\n & \\pmdg{5.21}{0.01}\n & \\pmdg{5.21}{0.02}\n & \\pmdg{8.24}{0.22}\n & \\pmdg{7.86}{0.12}\n & \\pmdg{8.14}{0.06}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{27.3}{0.06}\n & \\pmdg{27.3}{0.03}\n & \\pmdg{27.2}{0.09}\n & \\pmdg{28.0}{0.52}\n & \\pmdg{29.2}{0.18}\n & \\pmdg{30.0}{0.21}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{14.8}{0.03}\n & \\pmdg{14.8}{0.02}\n & \\pmdg{14.7}{0.04}\n & \\pmdg{17.4}{0.29}\n & \\pmdg{16.9}{0.10}\n & \\pmdg{17.0}{0.11}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.33}{0.008}\n & \\pmdg{4.32}{0.006}\n & \\pmdg{4.31}{0.01}\n & \\pmdg{5.62}{0.10}\n & \\pmdg{5.44}{0.05}\n & \\pmdg{5.54}{0.03}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.38}{0.02}\n & \\pmdg{6.37}{0.01}\n & \\pmdg{6.38}{0.03}\n & \\pmdg{8.49}{0.17}\n & \\pmdg{8.59}{0.10}\n & \\pmdg{8.37}{0.11}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.44}{0.01}\n & \\pmdg{4.43}{0.007}\n & \\pmdg{4.44}{0.02}\n & \\pmdg{6.11}{0.08}\n & \\pmdg{6.05}{0.07}\n & \\pmdg{6.07}{0.06}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.36}{0.002}\n & \\pmdg{1.36}{0.002}\n & \\pmdg{1.36}{0.05}\n & \\pmdg{2.02}{0.02}\n & \\pmdg{2.00}{0.05}\n & \\pmdg{2.01}{0.01}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 3: {\\it\nComparison for $q(\\xi) = (1-\\xi)^5$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.89}{0.017}\n & \\pmdg{4.89}{0.007}\n & \\pmdg{4.87}{0.01}\n & \\pmdg{5.38}{0.07}\n & \\pmdg{6.03}{0.06}\n & \\pmdg{5.22}{0.13}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.66}{0.009}\n & \\pmdg{2.66}{0.007}\n & \\pmdg{2.65}{0.007}\n & \\pmdg{3.67}{0.08}\n & \\pmdg{3.66}{0.04}\n & \\pmdg{3.58}{0.05}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.825}{0.003}\n & \\pmdg{0.826}{0.002}\n & \\pmdg{0.826}{0.002}\n & \\pmdg{1.44}{0.07}\n & \\pmdg{1.37}{0.03}\n & \\pmdg{1.39}{0.02}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.60}{0.005}\n & \\pmdg{1.60}{0.003}\n & \\pmdg{1.60}{0.003}\n & \\pmdg{1.20}{0.05}\n & \\pmdg{1.30}{0.01}\n & \\pmdg{1.12}{0.04}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.904}{0.003}\n & \\pmdg{0.900}{0.001}\n & \\pmdg{0.899}{0.002}\n & \\pmdg{0.833}{0.027}\n & \\pmdg{0.801}{0.008}\n & \\pmdg{0.764}{0.019}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.279}{0.001}\n & \\pmdg{0.278}{0.001}\n & \\pmdg{0.278}{0.001}\n & \\pmdg{0.314}{0.007}\n & \\pmdg{0.287}{0.004}\n & \\pmdg{0.299}{0.006}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.130}{0.001}\n & \\pmdg{0.131}{0.001}\n & \\pmdg{0.130}{0.001}\n & \\pmdg{0.119}{0.005}\n & \\pmdg{0.118}{0.002}\n & 
\\pmdg{0.110}{0.006}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.0981}{0.001}\n & \\pmdg{0.0980}{0.001}\n & \\pmdg{0.0981}{0.001}\n & \\pmdg{0.105}{0.002}\n & \\pmdg{0.096}{0.001}\n & \\pmdg{0.099}{0.004}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.0313}{0.0001}\n & \\pmdg{0.0310}{0.001}\n & \\pmdg{0.0313}{0.001}\n & \\pmdg{0.0396}{0.001}\n & \\pmdg{0.034}{0.001}\n & \\pmdg{0.0386}{0.001}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 4: {\\it\nComparison for $g(\\xi) = (1-\\xi)^5$\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{46.1}{0.11}\n & \\pmdg{46.2}{0.07}\n & \\pmdg{46.2}{0.14}\n & \\pmdg{49.4}{0.67}\n & \\pmdg{58.8}{0.65}\n & \\pmdg{47.8}{1.2}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{23.8}{0.05}\n & \\pmdg{23.8}{0.09}\n & \\pmdg{23.8}{0.07}\n & \\pmdg{30.6}{0.33}\n & \\pmdg{31.4}{0.71}\n & \\pmdg{29.0}{0.54}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.28}{0.02}\n & \\pmdg{7.28}{0.02}\n & \\pmdg{7.29}{0.02}\n & \\pmdg{11.2}{0.21}\n & \\pmdg{11.0}{0.24}\n & \\pmdg{11.4}{0.14}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{42.4}{0.09}\n & \\pmdg{42.3}{0.06}\n & \\pmdg{42.3}{0.12}\n & \\pmdg{38.4}{0.30}\n & \\pmdg{41.9}{0.26}\n & \\pmdg{38.4}{0.31}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{23.9}{0.04}\n & \\pmdg{23.9}{0.03}\n & \\pmdg{23.8}{0.06}\n & \\pmdg{24.8}{0.46}\n & \\pmdg{24.2}{0.19}\n & \\pmdg{23.9}{0.16}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.31}{0.01}\n & \\pmdg{7.30}{0.01}\n & \\pmdg{7.27}{0.02}\n & \\pmdg{8.11}{0.19}\n & \\pmdg{8.04}{0.41}\n & \\pmdg{8.24}{0.05}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{20.3}{0.05}\n & \\pmdg{20.3}{0.08}\n & \\pmdg{20.3}{0.08}\n & \\pmdg{23.3}{0.64}\n & \\pmdg{25.1}{0.18}\n & \\pmdg{22.4}{0.24}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{15.4}{0.03}\n & \\pmdg{15.4}{0.02}\n & \\pmdg{15.4}{0.01}\n & \\pmdg{18.6}{0.36}\n & \\pmdg{18.3}{0.47}\n & \\pmdg{18.4}{0.15}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.87}{0.01}\n & \\pmdg{4.86}{0.01}\n & \\pmdg{4.87}{0.04}\n & \\pmdg{6.47}{0.08}\n & \\pmdg{6.38}{0.07}\n & \\pmdg{6.41}{0.05}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 5: {\\it\nComparison for $q(\\xi) = (1-\\xi)^2$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.24}{0.02}\n & \\pmdg{6.22}{0.01}\n & \\pmdg{6.21}{0.02}\n & \\pmdg{6.73}{0.13}\n & \\pmdg{8.94}{0.12}\n & \\pmdg{6.67}{0.24}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{3.59}{0.01}\n & \\pmdg{3.58}{0.01}\n & \\pmdg{3.57}{0.01}\n & \\pmdg{4.77}{0.06}\n & 
\\pmdg{5.24}{0.09}\n & \\pmdg{4.43}{0.11}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.18}{0.004}\n & \\pmdg{1.18}{0.004}\n & \\pmdg{1.18}{0.003}\n & \\pmdg{1.93}{0.04}\n & \\pmdg{1.89}{0.04}\n & \\pmdg{1.86}{0.03}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.65}{0.007}\n & \\pmdg{2.65}{0.003}\n & \\pmdg{2.65}{0.006}\n & \\pmdg{1.13}{0.03}\n & \\pmdg{1.66}{0.02}\n & \\pmdg{0.94}{0.07}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.62}{0.004}\n & \\pmdg{1.61}{0.002}\n & \\pmdg{1.61}{0.003}\n & \\pmdg{1.04}{0.04}\n & \\pmdg{1.09}{0.02}\n & \\pmdg{0.993}{0.03}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.535}{0.001}\n & \\pmdg{0.534}{0.001}\n & \\pmdg{0.533}{0.001}\n & \\pmdg{0.433}{0.018}\n & \\pmdg{0.412}{0.009}\n & \\pmdg{0.430}{0.010}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.452}{0.002}\n & \\pmdg{0.452}{0.001}\n & \\pmdg{0.451}{0.001}\n & \\pmdg{0.221}{0.026}\n & \\pmdg{0.292}{0.010}\n & \\pmdg{0.129}{0.02}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.398}{0.001}\n & \\pmdg{0.398}{0.001}\n & \\pmdg{0.397}{0.001}\n & \\pmdg{0.298}{0.01}\n & \\pmdg{0.271}{0.005}\n & \\pmdg{0.237}{0.01}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.136}{0.001}\n & \\pmdg{0.135}{0.001}\n & \\pmdg{0.135}{0.001}\n & \\pmdg{0.130}{0.003}\n & \\pmdg{0.109}{0.002}\n & \\pmdg{0.120}{0.004}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 6: {\\it\nComparison for $g(\\xi) = (1-\\xi)^2$\n}\n\\end{center}\n\n\\clearpage\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{50.6}{0.12}\n & \\pmdg{50.7}{0.13}\n & \\pmdg{50.7}{0.15}\n & \\pmdg{58.6}{1.29}\n & \\pmdg{72.9}{1.56}\n & \\pmdg{54.7}{2.1}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{27.1}{0.05}\n & \\pmdg{27.1}{0.16}\n & \\pmdg{27.0}{0.07}\n & \\pmdg{36.4}{0.57}\n & \\pmdg{40.0}{0.84}\n & \\pmdg{34.9}{1.0}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{8.51}{0.02}\n & \\pmdg{8.51}{0.02}\n & \\pmdg{8.52}{0.02}\n & \\pmdg{13.8}{0.35}\n & \\pmdg{13.3}{0.43}\n & \\pmdg{13.9}{0.2}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{49.8}{0.10}\n & \\pmdg{49.7}{0.05}\n & \\pmdg{49.6}{0.14}\n & \\pmdg{41.2}{0.55}\n & \\pmdg{47.2}{0.91}\n & \\pmdg{41.9}{0.38}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{29.0}{0.05}\n & \\pmdg{29.0}{0.03}\n & \\pmdg{28.8}{0.07}\n & \\pmdg{27.3}{0.52}\n & \\pmdg{28.2}{0.42}\n & \\pmdg{26.4}{0.19}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{9.09}{0.01}\n & \\pmdg{9.07}{0.01}\n & \\pmdg{9.04}{0.02}\n & \\pmdg{9.58}{0.06}\n & \\pmdg{9.16}{0.15}\n & \\pmdg{9.54}{0.06}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{30.6}{0.08}\n & \\pmdg{30.5}{0.04}\n & \\pmdg{30.5}{0.12}\n & \\pmdg{32.0}{0.34}\n & \\pmdg{36.3}{0.59}\n & \\pmdg{32.4}{0.52}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{24.3}{0.04}\n & \\pmdg{24.3}{0.03}\n & \\pmdg{24.3}{0.07}\n & \\pmdg{27.6}{0.56}\n & \\pmdg{28.4}{0.35}\n & \\pmdg{27.6}{0.21}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{7.88}{0.01}\n & \\pmdg{7.86}{0.01}\n & \\pmdg{7.87}{0.02}\n & \\pmdg{9.63}{0.21}\n & \\pmdg{9.50}{0.15}\n & 
\\pmdg{9.47}{0.06}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 7: {\\it\nComparison for $q(\\xi) = (1-\\xi)$\n}\n\\end{center}\n\n\\begin{center}\n\\begin{tabular}[h]{|c|c|c|c|c|c|c|}\n\\cline{2-7}\n \\multicolumn{1}{c|}{\\rule[-2.5mm]{0mm}{8mm}}\n & \\multicolumn{3}{|c|}{leading order}\n & \\multicolumn{3}{|c|}{next-to-leading order}\n\\\\ \\hline\n bin\\rule[-2.5mm]{0mm}{8mm}\n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n & \\makebox[2.2cm]{\\tt DISASTER++} \n & \\makebox[2.2cm]{\\tt MEPJET} \n & \\makebox[2.2cm]{\\tt DISENT} \n\\\\ \\hline\\hline\n1\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{6.84}{0.02}\n & \\pmdg{6.84}{0.01}\n & \\pmdg{6.82}{0.02}\n & \\pmdg{8.20}{0.25}\n & \\pmdg{11.6}{0.14}\n & \\pmdg{8.26}{0.45}\n\\\\ \\hline\n2\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{4.09}{0.01}\n & \\pmdg{4.07}{0.01}\n & \\pmdg{4.07}{0.01}\n & \\pmdg{5.70}{0.11}\n & \\pmdg{6.69}{0.16}\n & \\pmdg{5.68}{0.17}\n\\\\ \\hline\n3\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{1.39}{0.004}\n & \\pmdg{1.39}{0.005}\n & \\pmdg{1.39}{0.003}\n & \\pmdg{2.41}{0.07}\n & \\pmdg{2.33}{0.05}\n & \\pmdg{2.34}{0.05}\n\\\\ \\hline\n4\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{3.19}{0.01}\n & \\pmdg{0.686}{0.09}\n & \\pmdg{1.65}{0.03}\n & \\pmdg{0.691}{0.10}\n\\\\ \\hline\n5\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{2.06}{0.005}\n & \\pmdg{2.06}{0.002}\n & \\pmdg{2.05}{0.003}\n & \\pmdg{1.00}{0.08}\n & \\pmdg{1.14}{0.03}\n & \\pmdg{0.866}{0.05}\n\\\\ \\hline\n6\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.711}{0.001}\n & \\pmdg{0.710}{0.001}\n & \\pmdg{0.709}{0.001}\n & \\pmdg{0.500}{0.006}\n & \\pmdg{0.471}{0.01}\n & \\pmdg{0.442}{0.017}\n\\\\ \\hline\n7\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.712}{0.003}\n & \\pmdg{0.711}{0.001}\n & \\pmdg{0.710}{0.002}\n & \\pmdg{0.157}{0.026}\n & \\pmdg{0.373}{0.008}\n & \\pmdg{0.082}{0.038}\n\\\\ \\hline\n8\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.692}{0.002}\n & \\pmdg{0.690}{0.001}\n & \\pmdg{0.690}{0.001}\n & \\pmdg{0.411}{0.020}\n & \\pmdg{0.408}{0.022}\n & \\pmdg{0.340}{0.023}\n\\\\ \\hline\n9\\rule[-2.5mm]{0mm}{8mm}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.245}{0.001}\n & \\pmdg{0.194}{0.012}\n & \\pmdg{0.172}{0.007}\n & \\pmdg{0.161}{0.008}\n\\\\ \\hline\n\\end{tabular}\n\n\\vspace{0.5cm}\nTable 8: {\\it\nComparison for $g(\\xi) = (1-\\xi)$\n}\n\\end{center}\n\n\\end{appendix}\n\n\\clearpage\n\n\\newcommand{\\bibitema}[1]{\\bibitem{#1}}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\n\\\n\n\n\n\n\nThe AdS\/conformal field theory (CFT) correspondence~\\cite{MaldacenaOriginal,GKP,Witten,WittenThermal}\nhas yielded many important insights into the dynamics of strongly\ncoupled gauge theories. Among numerous results obtained so far,\none of the most striking is the universality of the ratio of the\nshear viscosity $\\eta$ to the entropy density\n$s$~\\cite{Policastro:2001yc,Kovtun:2003wp,Buchel:2003tz,KSSbound}\n\\begin{equation}\n\\label{bound} \\frac{\\eta}{s} = \\frac{1}{4\\pi}\n\\end{equation}\nfor all gauge theories with an Einstein gravity dual in the limit\n$N \\to \\infty$ and ${\\lambda} \\to \\infty$. Here, $N$ is the number of\ncolors and ${\\lambda}$ is the 't Hooft coupling. It was further\nconjectured in~\\cite{KSSbound} that~(\\ref{bound}) is a universal\nlower bound [the Kovtun-Starinets-Son (KSS) bound] for all materials. 
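In physical units the conjectured bound reads\n\\begin{equation}\n\\frac{\\eta}{s} \\geq \\frac{\\hbar}{4\\pi k_B} \\ ,\n\\end{equation}\nwhich makes the comparison with ordinary, nonrelativistic fluids meaningful.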
So far, all known\nsubstances including water and liquid helium satisfy the bound.\nThe systems coming closest to the bound include the quark-gluon\nplasma created at the Relativistic Heavy Ion Collider (RHIC)~\\cite{Teaney:2003kp,rr1,songHeinz,r2,Dusling:2007gi, Adare:2006nq}\nand certain cold atomic gases in the unitarity limit (see\ne.g.~\\cite{Schafer:2007ib}). $\\eta\/s$ for pure gluon QCD\nslightly above the deconfinement temperature has also been\ncalculated on the lattice recently~\\cite{Meyer:2007ic} and is\nabout $30 \\%$ larger than~(\\ref{bound}). See\nalso~\\cite{sakai}. See~\\cite{Cohen:2007qr,Cherman:2007fj,Chen:2007jq,Son:2007xw,Fouxon:2007pz}\nfor other discussions of the bound.\n\nNow, as stated above, the ratio~(\\ref{bound}) was obtained for a class of gauge theories whose holographic duals are dictated by classical Einstein gravity (coupled to matter). More generally, string theory (or any\nquantum theory of gravity) contains higher derivative corrections from stringy or quantum effects,\ninclusion of which will modify the ratio. In terms of gauge theories,\nsuch modifications correspond to $1\/{\\lambda}$ or $1\/N$ corrections. As a concrete example, let us take ${\\cal N}=4$ super-Yang-Mills theory, whose dual corresponds to type IIB\nstring theory on $AdS_5 \\times S^5$. The leading order correction in $1\/{\\lambda}$ arises from stringy corrections to the low-energy effective action of type IIB supergravity, schematically of the form ${\\alpha'}^3 R^4$.\nThe correction to $\\eta\/s$ due to such a term was calculated\nin~\\cite{BLS,Benincasa:2005qc}. It was found that the correction is positive, consistent with the\nconjectured bound.\n\nIn this paper, instead of limiting ourselves to\nspecific known string theory corrections, we explore the modification of $\\eta\/s$ due to\ngeneric higher derivative terms in the holographic gravity dual. The reason is partly pragmatic: other\nthan in a few maximally supersymmetric circumstances, very little\nis known about the forms of higher derivative corrections generated in string theory. Given the vastness of the string\nlandscape~\\cite{landscape}, one expects that generic corrections do\noccur. Restricting to the gravity sector in $AdS_5$, the leading order\nhigher derivative corrections can be written as\\footnote{Our\nconventions are those of \\cite{Carroll}. In this section we suppress Gibbons-Hawking surface terms.}\n \\begin{equation} \\label{epr}\nI= {1 \\over 16 \\pi G_N} \\int d^5 x \\, \\sqrt{- g} \\left(R - 2 \\Lambda +\n\\lad^2 \\left({\\alpha}_1 R^2+ {\\alpha}_2 R_{\\mu \\nu} R^{\\mu \\nu}+{\\alpha}_3 R^{\\mu\n\\nu\\rho \\sigma} R_{\\mu \\nu \\rho\\sigma} \\right)\\right) \\ ,\n \\end{equation}\nwhere ${\\Lambda} = -{6 \\over \\lad^2}$ and for now we assume that ${\\alpha}_i \\sim\n{{\\alpha'} \\over \\lad^2} \\ll 1$. Other terms with additional derivatives\nor factors of $R$ are naturally suppressed by higher powers of ${{\\alpha'} \\over\n\\lad^2}$. String loop (quantum) corrections can also generate such terms,\nbut they are suppressed by powers of $g_s$ and we will consistently neglect them by taking the $g_s \\rightarrow 0$ limit.\\footnote{Note that to calculate $g_s$ corrections, all the light fields must be taken into account. 
In addition, the calculation of $\\eta\/s$ could be more subtle once we begin to include quantum effects.} To lowest order\nin ${\\alpha}_i$ the correction to $\\eta\/s$ will be a linear\ncombination of the ${\\alpha}_i$'s, and the viscosity bound\nis then violated on one side of a plane in the ${\\alpha}_i$ parameter space.\nSpecifically, we will find\n \\begin{equation}\n {\\eta \\over s} = {1 \\over 4 \\pi} \\left(1 - 8 {\\alpha}_3 \\right) + O({\\alpha}_i^2)\n \\end{equation}\nand hence the bound is violated for ${\\alpha}_3>0$. Note that the above\nexpression is independent of ${\\alpha}_1$ and ${\\alpha}_2$. This can be\ninferred from a field redefinition argument (see\nSec.\\ref{ap:fR}).\n\nHow do we interpret these violations? Possible scenarios are:\n\n\\begin{enumerate}\n\n\\item The bound can be violated. For example, this scenario would be realized if one explicitly finds a well-defined string theory on $AdS_5$ which generates a stringy correction with ${\\alpha}_3>0$. (See~\\cite{new} for a plausible counterexample to the KSS bound.)\n\n\\item The bound is correct (for example, if one can prove it using a field\ntheoretical method), and a bulk gravity theory with ${\\alpha}_3>0$ cannot have a well-defined boundary CFT dual.\n\n \\begin{enumerate}\n\n\\item The bulk theory is manifestly\ninconsistent as an effective theory. For example, it could\nviolate bulk causality or unitarity.\n\n\n\\item It is impossible to generate such a low-energy effective\nclassical action from a consistent quantum theory of gravity. In\nmodern language we say that the theory lies in the swampland of\nstring theory.\n\n \\end{enumerate}\n\n\\end{enumerate}\n\nAny of these alternatives, if realized, is interesting; needless\nto say, possibility 1 would be especially so.\nGiven that recent analyses of RHIC\ndata~\\cite{rr1,songHeinz,r2,Dusling:2007gi,Adare:2006nq} indicate\nthat $\\eta\/s$ is close to (and could be even smaller than) the bound,\nthis further motivates us\nto investigate the universality of the KSS bound in holographic\nmodels.\n\nPossibility 2(a) should help clarify the physical origin of the\nbound by correlating bulk pathologies and the violation of the\nbound. Possibility 2(b) could provide powerful tools for\nconstraining possible higher derivative corrections in the string\nlandscape. Note that while there are some nice no-go theorems\nwhich rule out classes of nongravitational effective field\ntheories \\cite{AADNR} (also see \\cite{AKS}), the generalization of\nthe arguments of~\\cite{AADNR} to gravitational theories is subtle\nand difficult. Thus, constraints from AdS\/CFT based on the\nconsistency of the boundary theory would be valuable.\n\nIn investigating the scenarios above, Gauss-Bonnet (GB) gravity will\nprovide a useful model. Gauss-Bonnet gravity, defined by the\nclassical action of the form~\\cite{Zwiebach}\n\\begin{equation}\n\\label{action} I = \\frac{1}{16\\pi G_N} \\mathop\\int{d^{5}x \\,\n\\sqrt{-g} \\, \\left[R-2\\Lambda+ {{\\lambda}_{GB} \\over 2} \\lad^2\n(R^2-4R_{\\mu\\nu}R^{\\mu\\nu}+R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma})\n\\right]} \\ ,\n\\end{equation}\nhas many nice properties that are absent for theories with more general ratios of the ${\\alpha}_i$'s. For example, expanding around flat Minkowski space, the metric fluctuations\nhave exactly the same quadratic kinetic terms as those in\nEinstein gravity. All higher derivative terms\ncancel~\\cite{Zwiebach}. 
Similarly, expanding around the\nAdS black brane geometry, which will be the main focus of the\npaper, the equations of motion again contain only second derivatives of the\nmetric fluctuations. Thus small metric fluctuations can be\nquantized for finite values of the parameter\n${\\lambda}_{GB}$.\\footnote{Generic theories in~(\\ref{epr}) contain four\nderivatives and a consistent quantization is not possible other\nthan treating higher derivative terms as perturbations.}\nFurthermore, crucial for our investigation is its remarkable feature of solvability: sets of\nexact solutions to the classical equation of motion have been\nobtained \\cite{BD,Cai} and the exact form of the Gibbons-Hawking\nsurface term is known \\cite{Myers}.\n\nGiven these nice features of Gauss-Bonnet gravity, we will venture outside the regime of perturbatively corrected Einstein gravity and study the theory with finite values of ${\\lambda}_{GB}$.\nTo physically motivate this, one could envision that somewhere in the string landscape ${\\lambda}_{GB}$ is large but all the other higher derivative corrections are small.\nOne of the main results of the paper is a value of $\\eta\/s$ for the CFT dual of Gauss-Bonnet gravity, \\emph{nonperturbative} in ${\\lambda}_{GB}$:\\footnote{We have also computed the value of $\\eta\/s$ for Gauss-Bonnet gravity\nfor any spacetime dimension $D$ and the expression is given\nin~(\\ref{final}).}\n\\begin{equation} \\label{advertise}\n\\frac{\\eta}{s}=\\frac{1}{4\\pi}[1- 4 {\\lambda}_{GB}].\n\\end{equation}\nWe emphasize that this is not just a linearly corrected value.\nIn particular, the viscosity bound is badly violated as ${\\lambda}_{GB} \\rightarrow \\frac{1}{4}$.\nAs we will discuss shortly, ${\\lambda}_{GB}$ is bounded above by $\\frac{1}{4}$ for the theory to have a boundary CFT, and $\\eta\/s$ never becomes negative.\n\nGiven the result~(\\ref{advertise}) for Gauss-Bonnet, if possibility 2(a) were correct, we would expect that pathologies would become easier to discern in the limit where $\\eta\/s\\rightarrow 0$. We will investigate this line of thought in Sec.\\ref{gravitoncone}. On the other hand, thinking along the lines of possibility 1, the Gauss-Bonnet theory with ${\\lambda}_{GB}$ arbitrarily close to $\\frac{1}{4}$ may have a concrete realization in the string landscape. In this case, there exists no lower bound for $\\eta\/s$, and investigating the CFT dual of Gauss-Bonnet theory should clarify how to evade the heuristic mean free path argument for the existence of the lower bound (presented in, e.g., \\cite{KSSbound}).\n\nThe plan of the paper is as follows. In Sec.\\ref{pre}, we review\nvarious properties of two-point correlation functions and outline\nthe real-time AdS\/CFT calculation of the shear viscosity. We then\nexplicitly calculate the shear viscosity for Gauss-Bonnet theory\nin Sec.\\ref{shear}. In Sec.\\ref{gravitoncone}, we seek possible\npathologies associated with theories violating the viscosity\nbound. There, we will find a curious new metastable state for\nlarge enough ${\\lambda}_{GB}$. Finally, in Sec.\\ref{discussion}, we conclude\nwith various remarks and speculations. To make the paper fairly\nself-contained, various appendices are added. 
In particular,\nquasinormal mode calculations of the shear viscosity are\npresented in Appendix~\\ref{ap:so} and one using the membrane\nparadigm in Appendix~\\ref{junk}.\n\n\n\n\\section{Shear viscosity in $R^2$ theories: preliminaries} \\label{pre}\n\n\n\\subsection{Two-point correlation functions and viscosity} \\label{meds}\n\nLet us begin by collecting various properties of two-point\ncorrelation functions,\nfollowing~\\cite{Policastro:2002se,Policastro:2002tn,KovtunEV} (see\nalso~\\cite{Son:2007vk}). Consider retarded two-point correlation\nfunctions of the stress energy tensor $T_{\\mu \\nu}$ of a CFT in\n$3+1$-dimensional Minkowski space at a finite temperature $T$\n \\begin{equation} \\label{ttC}\n G_{\\mu\\nu,{\\alpha} \\beta} (\\omega,\\vec q) = - i\\int dt d\\vec x e^{ i\n\\omega t- i \\vec q\\cdot \\vec x} \\theta( t) \\vev{\\left[T_{\\mu\n\\nu}(t,\\vec x) , T_{{\\alpha} \\beta} (0,0)\\right]}.\n \\end{equation}\nThey describe linear responses of the system to small disturbances.\nIt turns out that various components of~(\\ref{ttC}) can be expressed in terms of\nthree independent scalar functions. For example, if we take spatial\nmomentum to be $\\vec q = (0,0 ,q)$, then\n \\begin{equation}\n G_{12,12}= {1\\over 2} G_3 (\\omega, q) , \\ \\ \\ \\ \\ G_{13,13} = {1\\over 2}{\\omega^2 \\over \\omega^2 -q^2}\n G_1 (\\omega, q), \\ \\ \\ \\ \\ G_{33,33} = {2 \\over 3} {\\omega^4 \\over (\\omega^2 -q^2)^2} G_2 (\\omega,\n q),\n \\end{equation}\nand so on. At $\\vec q=0$ all three functions $G_{1,2,3} (\\omega,0)$\nare equal to one another as a consequence of rotational\nsymmetry.\n\nWhen $\\omega, |\\vec q| \\ll T$ one expects the CFT plasma to be described by hydrodynamics.\nThe scalar functions $G_{1,2,3}$ encode the hydrodynamic behavior of shear, sound, and transverse modes, respectively.\nMore explicitly, they have the following properties:\n \\begin{itemize}\n \\item $G_1$ has a simple diffusion pole at $\\omega= - i D q^2$, where\n \\begin{equation} \\label{shearA}\n D= {\\eta \\over {\\epsilon} + P} = {1 \\over T} { \\eta \\over s}\n \\end{equation}\n with ${\\epsilon}$ and $s$ being the energy and entropy density, and $P$ the pressure of the\n gauge theory plasma.\n \\item $G_2$ has a simple pole at $\\omega= \\pm c_s q - i \\Gamma_{s} q^2$, where $c_s$ is\n the speed of sound and $\\Gamma_{s}$ is the sound damping\n constant, given by (for conformal theories)\n \\begin{equation} \\label{sound}\n \\Gamma_s = {2 \\over 3T}{\\eta \\over s}\n \\end{equation}\n \\item $\\eta$ can also be obtained from $G_{1,2,3}$ at zero spatial momentum by\nthe Kubo formula, e.g.,\n \\begin{equation} \\label{rrp}\n \\eta= \\lim_{\\omega\\rightarrow 0} {1\\over \\omega} {\\rm Im} G_{12,12} (\\omega,0)\n \\end{equation}\n\n \\end{itemize}\nEquations (\\ref{shearA})--(\\ref{rrp}) provide three independent ways of extracting $\\eta\/s$.\nWe provide calculations utilizing the first two in Appendix~\\ref{ap:so}.\nA calculation utilizing the Kubo formula (\\ref{rrp}) is easier, and we will explicitly implement it for Gauss-Bonnet theory in Sec.\\ref{shear}.\nIn the next subsection, we outline how to obtain retarded two-point functions within the framework of the real-time AdS\/CFT correspondence.\n\n\n\\subsection{AdS\/CFT calculation of shear viscosity: Outline} \\label{adscft}\n\nThe stress tensor correlators for a boundary CFT described by\n(\\ref{epr}) or (\\ref{action}) can be computed from gravity as\nfollows. One first finds a black brane solution (i.e. 
a black hole\nwhose horizon is ${\\bf R}^3$) to the equations of motion of\n(\\ref{epr}) or (\\ref{action}). Such a solution describes the\nboundary theory on ${\\bf R}^{3,1}$ at a temperature $T$, which can\nbe identified with the Hawking temperature of the black brane. The\nentropy and energy density of the boundary theory are given by the\ncorresponding quantities of the black brane. The fluctuations of\nthe boundary theory stress tensor are described in the gravity\nlanguage by small metric fluctuations $h_{\\mu \\nu}$ around the\nblack brane solution. In particular, after taking into account\nvarious symmetries and gauge degrees of freedom, the metric\nfluctuations can be combined into three independent scalar fields\n$\\phi_a, a=1,2,3$, which are dual to the three functions $G_a$ of the\nboundary theory.\n\nTo find $G_a$, one could first work out the bulk two-point\nretarded function for $\\phi_a$ and then take both points to the\nboundary of the black brane geometry. In practice it is often more\nconvenient to use the prescription proposed in~\\cite{SS}, which\ncan be derived from the real-time AdS\/CFT\ncorrespondence~\\cite{HS}. Let us briefly review it here:\n\n\\begin{enumerate}\n\n\\item Solve the linearized equation of motion for $\\phi_a (r; k)$ with the\nfollowing boundary conditions:\n\n\\begin{enumerate}\n\n\\item Impose the infalling boundary condition at the horizon. In other words, modes with timelike momenta should be\nfalling into the horizon and modes with spacelike momenta should\nbe regular.\n\n\\item Take $r$ to be the radial direction of the black brane geometry\nwith the boundary at $r=\\infty$. Require\n \\begin{equation} \\label{bdw}\n \\phi_a (r; k)|_{r= {1 \\over {\\epsilon}}} = J_a (k), \\qquad k = (\\omega, q),\n \\end{equation}\nwhere ${\\epsilon} \\to 0$ imposes an infrared cutoff near the infinity\nof the spacetime and $J_a (k)$ is an infinitesimal boundary source for the\nbulk field $\\phi_a(r; k)$.\n\n\\end{enumerate}\n\n\\item Plug the above solution into the action, expanded to quadratic order in $\\phi_a (r; k)$.\nIt will reduce to pure surface contributions.\nThe prescription instructs us to pick up only the contribution from the boundary at $r={1 \\over {\\epsilon}}$.\nThe resulting action can be written as\n\\begin{equation}\\label{Sbd}\nS = - {1\\over 2} \\int\\!{d^4k\\over (2\\pi )^4}\\,\n J_a (-k) {\\cal F}_a (k,r) J_a (k) \\Big|_{r={1 \\over {\\epsilon}}}\\ .\n\\end{equation}\nFinally, the retarded function $G_a (k)$ in momentum space for the\nboundary field dual to $\\phi_a$ is given by\n \\begin{equation} \\label{eo}\n G_a (k) = \\lim_{{\\epsilon} \\to 0} {\\cal F}_a (k,r)\\Big|_{r={1 \\over {\\epsilon}}} \\\n .\n \\end{equation}\n\n\\end{enumerate}\nUsing the Kubo formula~(\\ref{rrp}), we can get the shear viscosity by studying a mode $\\phi_3$ with $\\vec q=0$ in the low-frequency limit $\\omega \\rightarrow 0$. We will do so in the next section. Alternatively, using (\\ref{shearA}) or (\\ref{sound}), we can read off the viscosity from pole structures of retarded two-point functions. Such a calculation is a bit more involved and will be performed in Appendix~\\ref{ap:so}.\n\nThe above prescription for computing retarded functions in AdS\/CFT\nworks well if the bulk scalar field has only two derivatives as in the\nGauss-Bonnet case~(\\ref{action}). If the bulk action contains more\nthan two derivatives, complications could arise even if one treats\nthe higher derivative parts as perturbations. 
For example, one\nneeds to add Gibbons-Hawking surface terms to ensure a\nwell-defined variational problem. A systematic prescription for\ndoing so is, however, not available at the moment beyond the\nlinear order. Thus there are potential ambiguities in implementing\n(\\ref{eo}).\\footnote{In~\\cite{BLS}, such additional terms do not\nappear to affect the calculation at the order under discussion\nthere.} Clearly these are important questions which should be\nexplored more systematically. At the $R^2$ level, as we describe\nbelow in Sec.\\ref{ap:fR}, all of our calculations can be reduced\nto the Gauss-Bonnet case in which these potential complications do\nnot arise.\n\n\n\\subsection{Field redefinitions in $R^2$ theories} \\label{ap:fR}\n\nWe now show that to linear order in ${\\alpha}_i$, $\\eta\/s$\nfor~(\\ref{epr}) is independent of ${\\alpha}_1$ and ${\\alpha}_2$. It is well\nknown that to linear order in ${\\alpha}_i$, one can make a field\nredefinition to remove the $R^2$ and $R_{\\mu \\nu}R^{\\mu\\nu}$ terms\nin~(\\ref{epr}). More explicitly, in~(\\ref{epr}) set ${\\alpha}_3 =0$ and\ntake\n \\begin{equation} \\label{eep}\n g_{\\mu \\nu} = \\tilde g_{\\mu \\nu} + {\\alpha}_2 \\lad^2 \\tilde R_{\\mu \\nu} - {\\lad^2 \\over 3}\n ({\\alpha}_2\n+ 2 {\\alpha}_1 ) \\tilde g_{\\mu \\nu} \\tilde R,\n \\end{equation}\nwhere $\\tilde R$ denotes the Ricci scalar for $\\tilde g_{\\mu\\nu}$\nand so on. Then (\\ref{epr}) becomes\n \\begin{equation} \\label{newz}\n I = {1 \\over 16 \\pi G_N} \\int d^5 x \\, \\sqrt{- \\tilde g} ((1+ {\\cal K})\n \\tilde R - 2 \\Lambda ) + O({\\alpha}^2)\n = {1 + {\\cal K}\\over 16 \\pi G_N} \\int d^5 x \\, \\sqrt{- \\tilde g} ( \\tilde R - 2 \\tilde \\Lambda ) +\n O({\\alpha}^2)\n \\end{equation}\nwith \\begin{equation} {\\cal K}= {2 {\\Lambda} \\lad^2 \\over 3} \\left(5 {\\alpha}_1 + {\\alpha}_2 \\right) , \\qquad\n \\tilde {\\Lambda} = {{\\Lambda} \\over 1 + {\\cal K}} \\ .\n\\end{equation} It follows from (\\ref{eep}) that a background solution\n$g^{(0)}$ to (\\ref{epr}) (with ${\\alpha}_3=0$) is related to a solution\n$\\tilde g^{(0)}$ to (\\ref{newz}) by\n \\begin{equation} \\label{sba}\n ds^2_0 = A^2 \\tilde{ds}^2_0, \\qquad A = 1- { {\\cal K} \\over 3} \\ .\n \\end{equation}\nThe scaling in (\\ref{sba}) does not change the background Hawking\ntemperature. The diffusion pole~(\\ref{shearA}) calculated\nusing~(\\ref{newz}) around $\\tilde g^{(0)}$ then gives the standard\nresult $D = {1 \\over 4 \\pi T}$~\\cite{Policastro:2002se}.\n Thus we conclude that $\\eta\/s = {1 \\over\n4 \\pi}$ for (\\ref{epr}) with ${\\alpha}_3=0$. Then to linear order\nin ${\\alpha}_i$, $\\eta\/s$ can only\ndepend on ${\\alpha}_3$. To find this dependence, it is convenient to\nwork with the Gauss-Bonnet theory~(\\ref{action}). 
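Spelling out the map between the two parametrizations explicitly (a small consistency check added here for orientation), comparing~(\\ref{action}) with~(\\ref{epr}) identifies\n \\begin{equation}\n {\\alpha}_1 = {{\\lambda}_{GB} \\over 2} \\ , \\qquad {\\alpha}_2 = -2 {\\lambda}_{GB} \\ , \\qquad {\\alpha}_3 = {{\\lambda}_{GB} \\over 2} \\ ,\n \\end{equation}\nso that the linear-order result ${\\eta \\over s} = {1 \\over 4 \\pi} \\left(1- 8 {\\alpha}_3 \\right)$ reduces to ${1 \\over 4 \\pi} \\left(1- 4 {\\lambda}_{GB} \\right)$, in agreement with the nonperturbative answer~(\\ref{advertise}) derived below.\n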
Gauss-Bonnet\ngravity is not only much simpler than~(\\ref{epr}) with generic\n${\\alpha}_3 \\neq 0$, but also contains only second derivative terms in\nthe equations of motion for $h_{\\mu \\nu}$, making the extraction\nof boundary correlators unambiguous.\n\n\n\n\\section{Shear Viscosity for Gauss-Bonnet Gravity}\n\\label{shear}\n\nIn this section, after briefly reviewing the thermodynamic properties of the black brane solution, we compute the shear viscosity for Gauss-Bonnet gravity~(\\ref{action}) nonperturbatively in ${\\lambda}_{GB}$.\nHere, we follow the outline presented in the previous section, with the Kubo formula (\\ref{rrp}) in mind.\nIn Appendix~\\ref{ap:so}, we extract $\\eta\/s$ from the shear channel~(\\ref{shearA}) and the sound channel~(\\ref{sound}) (perturbatively in ${\\lambda}_{GB}$).\nThere we also find that the sound velocity remains at the conformal value $c_s^2 = {1 \\over 3}$ as it should.\nIn Appendix~\\ref{junk}, we provide a membrane paradigm calculation, again nonperturbatively in ${\\lambda}_{GB}$.\nAll four methods give the same result.\n\n\n\n\\subsection{Black brane geometry and thermodynamics}\n\nExact solutions and thermodynamic properties of black objects in\nGauss-Bonnet gravity~(\\ref{action}) were discussed\nin~\\cite{Cai} (see also \\cite{Nojiri:2001aj, Cho:2002hq,Neupane:2002bf,Neupane:2003vz}). Here we summarize some features relevant for our\ndiscussion below. The black brane solution can be written as\n\\begin{equation}\n\\label{bba} ds^2=-f(r)N_{\\sharp}^2dt^2\n+\\frac{1}{f(r)}dr^2+\\frac{r^2}{\\lad^2}\n \\left(\\mathop\\sum_{i=1}^{3}dx_i^2 \\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{perturb} f(r)=\\frac{r^2}{\\lad^2}\\frac{1}{2{\\lambda}_{GB}}\n\\left[1-\\sqrt{1-4{\\lambda}_{GB}\\left(1-\\frac{r_{+}^{\\,4}}{r^4}\\right)} \\right] \\ .\n\\end{equation}\nIn (\\ref{bba}), $N_{\\sharp}$ is an arbitrary constant which specifies the\nspeed of light of the boundary theory. Note that as $r \\to \\infty$,\n\\begin{equation} \\label{Ade}\n f(r) \\to {r^2 \\over a^2 \\lad^2}, \\qquad {\\rm with} \\qquad\n a^2 \\equiv\n {1\\over 2}\\left(1+\\sqrt{1-4{\\lambda}_{GB}}\\right)\\ .\n \\end{equation}\nIt is straightforward to see that the AdS curvature scale of these\ngeometries is $a \\lad$.\\footnote{Here we note that the Gauss-Bonnet\ntheory also admits another background with the curvature scale\n$\\tilde{a}\\,\\lad$ where $\\tilde{a}^2={1\\over 2}\\left(1-\\sqrt{1-4{\\lambda}_{GB}}\\right)$.\nEven though this remains an asymptotically AdS solution for ${\\lambda}_{GB}>0$,\nwe do not consider it here because this background is unstable and\ncontains ghosts \\cite{BD}.} If we choose $N_{\\sharp} = a$, then the boundary\nspeed of light is unity. However, we will leave it unspecified in\nthe following. We assume that ${\\lambda}_{GB}\\leq\\frac{1}{4}$. Beyond this\npoint, (\\ref{action}) does not admit a vacuum AdS solution, and\ncannot have a boundary CFT dual. In passing, we note that while the\ncurvature singularity occurs at $r=0$ for ${\\lambda}_{GB} \\geq 0$, it shifts to\n$r =r_+ \\left(1-{1 \\over 4 {\\lambda}_{GB}}\\right)^{-\\frac{1}{4}}$ for ${\\lambda}_{GB}<0$.\n\n\nThe horizon is located at $r=r_{+}$ and the Hawking temperature,\nentropy density, and energy density of the black brane are\n\\footnote{Note that for {\\it planar} black branes in Gauss-Bonnet\ntheory, the area law for entropy still holds \\cite{oldy}. 
This is\nnot the case for more general higher-derivative-corrected black\nobjects.}\n\\begin{equation}\n\\label{temperature} T =N_{\\sharp} \\frac{r_{+}}{\\pi \\lad^2},\n\\end{equation}\n\\begin{equation} \\label{entr}\ns =\\frac{1}{4G_{N}}\\left(\\frac{r_{+}}{\\lad}\\right)^3\n=\\frac{(\\pi \\lad)^3}{4G_{N}}\\frac{(T)^3}{N_{\\sharp}^3},\n\\qquad {{\\epsilon}} = {3 \\over 4} T s \\ .\n\\end{equation}\nIf we fix the boundary theory temperature $T$ and the speed of light to be unity (taking $N_{\\sharp}=a$), the entropy and energy density are monotonically\nincreasing functions of ${\\lambda}_{GB}$, reaching a maximum at ${\\lambda}_{GB}={1 \\over 4}$\nand going to zero as ${\\lambda}_{GB} \\to -\\infty$.\n\nTo make our discussion self-contained, in Appendix~\\ref{ap:ther}, we compute the free\nenergy of the black brane and derive the entropy density. In\nparticular, we show that the contribution from the Gibbons-Hawking\nsurface term to the free energy vanishes.\n\n\\subsection{Action and equation of motion for the scalar channel} \\label{scalar}\n\nTo compute the shear viscosity, we now study small metric fluctuations $\\phi = h^1_{\\ 2}$ around the black brane background of the form\n\\begin{equation}\nds^2=-f(r)N_{\\sharp}^2dt^2+\\frac{1}{f(r)}dr^2+\\frac{r^2}{\\lad^2}\n\\left(\\mathop\\sum_{i=1}^{3}dx_i^2 + 2 \\phi(t,\\vec x, r)dx_1 d x_2 \\right) \\ .\n\\end{equation}\nWe will take $\\phi$ to be independent of $x_1$ and $x_2$ and write\n\\begin{equation}\n\\phi(t, \\vec x, r)=\\mathop\\int\\frac{d\\omega dq}{(2\\pi)^{2}}\\, \\phi(r;k)\n\\, e^{-i\\omega t + i q x_3}, \\quad k = (\\omega, 0, 0, q) ,\\ \\ \\phi\n(r; -k)=\\phi^*(r; k) \\ .\n\\end{equation}\nFor notational convenience, let us introduce\n\\begin{equation} \\label{sDef}\nz=\\frac{r}{r_{+}},\\ \\ \\tilde{\\omega}=\\frac{\\lad^2}{r_{+}}\\omega,\\\n\\quad \\tilde{q}=\\frac{\\lad^2}{r_{+}}q, \\qquad\n\\tilde{f}=\\frac{\\lad^2}{r_{+}^2}f = {z^2 \\over 2 {\\lambda}_{GB}} \\left(1 -\n\\sqrt{1-4 {\\lambda}_{GB} + {4 {\\lambda}_{GB} \\over z^4}} \\right).\n\\end{equation}\nThen, at quadratic order, the action for $\\phi$ can be written as\n \\begin{eqnarray} \\label{pee}\n S&=&\\int{dk_1 dk_2 \\over (2 \\pi)^2}S(k_1, k_2) \\ \\ \\ {\\rm with} \\cr\n S(k_1=0, k_2=0)&=&-{1\\over 2} C \\int dz {\\frac{d\\omega dq}{(2\\pi)^{2}}} \\, \\left( K (\\partial_z \\phi)^2 - K_2 \\phi^2 +\n \\partial_z (K_3 \\phi^2) \\right),\n \\end{eqnarray}\nwhere\n \\begin{equation} \\label{vds}\n C = {1 \\over 16 \\pi G_N} \\left(N_{\\sharp} r_+^4\\over \\lad^5\\right), \\ \\ K= z^2 \\tilde{f} (z - {\\lambda}_{GB}\n \\partial_z\\tilde{f}), \\ \\ \\ K_2 = K {\\tilde \\omega^2 \\over N_{\\sharp}^2 \\tilde{f}^2} - \\tilde\n q^2 z \\left(1- {\\lambda}_{GB} \\partial_z^2\\tilde{f} \\right) \\ ,\n \\end{equation}\nand $\\phi^2$ should be understood as a shorthand notation for\n$\\phi(z;k) \\phi (z,-k)$.\nHere, $S$ is the sum of the bulk action (\\ref{action}) and the associated Gibbons-Hawking surface term \\cite{Myers}.\nThe explicit expression for $K_3$ will not be important for our subsequent discussion.\n\nThe equation of motion following from (\\ref{pee}) is\\footnote{An easy way to get the quadratic action~(\\ref{pee}) is to first obtain the linearized equation of motion and then read off $K$ and $K_2$ from it.}\n \\begin{equation} \\label{eom}\n K \\phi'' + K' \\phi' + K_2 \\phi =0 \\ ,\n \\end{equation}\nwhere primes indicate partial derivatives with respect to $z$.\nUsing the equation of motion, the action~(\\ref{pee}) reduces to the surface contributions 
as advertised in Sec.\\ref{adscft},\n \\begin{equation} \\label{rrk}\n S(k_1=0, k_2=0) = -{1\\over 2} C \\int {\\frac{d\\omega dq}{(2\\pi)^{2}}} \\, \\left(K \\phi' \\phi\n + K_3 \\phi^2 \\right)|_{{\\rm surface}} \\ .\n \\end{equation}\nThe prescription described in Sec.\\ref{adscft} instructs us to\npick up the contribution from the boundary at\n$z\\rightarrow\\infty$. Here, the term proportional to $K_3$ will\ngive rise to a real divergent contact term, which is discarded.\n\nA curious thing about (\\ref{pee}) is that for all values of $z$,\nboth $K$ and $K_2$ (but not $K_3$) are proportional to ${1 \\over 4}\n- {\\lambda}_{GB}$.\\footnote{This can be seen by using the following relation in $K$ and $K_2$:\n\\begin{equation} \\tilde{f}'(z) = {2 z (2z^2 -\\tilde{f}) \\over z^2 - 2 {\\lambda}_{GB}\n\\tilde{f}} \\ . \\end{equation} } Thus, other than the boundary term, the whole action\n(\\ref{pee}) vanishes identically at ${\\lambda}_{GB} = {1 \\over 4}$.\nNevertheless, the equation of motion (\\ref{eom}) remains\nnontrivial in the limit ${\\lambda}_{GB} \\to {1 \\over 4}$ as the ${1 \\over 4} - {\\lambda}_{GB}$ factor cancels\nout. Note that the correlation function does not necessarily go to\nzero in this limit since it also depends on\nthe behavior of the solution to~(\\ref{eom}) and the limiting\nprocedure~(\\ref{rrk}). As we will see momentarily, at least in the\nsmall frequency limit it does become zero with a vanishing shear\nviscosity.\n\n\n\\subsection{Low-frequency expansion and the viscosity}\n\\label{solution}\n\n\nGeneral solutions to the equation of motion~(\\ref{eom}) can be written as\n\\begin{equation}\n\\phi(z; k)=a_{in}(k)\\phi_{in}(z; k)+a_{out}(k)\\phi_{out}(z; k) \\ ,\n\\end{equation}\nwhere $\\phi_{in}$ and $\\phi_{out}$ satisfy infalling and outgoing boundary conditions at the horizon, respectively.\nThey are complex conjugates of each other, and we normalize them by requiring them to approach $1$ as $z\\to \\infty$.\nThen, the prescription of Sec.\\ref{adscft} corresponds to setting\n\\begin{equation} \\label{explicitBC}\na_{in}(k)=J(k)\\ , \\qquad a_{out}(k)=0 \\ ,\n\\end{equation}\nwhere $J(k)$ is an infinitesimal boundary source for the bulk field $\\phi$.\n\nMore explicitly, as $z \\to 1$, various functions in (\\ref{eom}) have the following behavior\n \\begin{equation}\n{K_2 \\over K} \\approx {\\tilde{\\omega}^2 \\over 16 N_{\\sharp}^2 (z-1)^2 } + O((z-1)^{-1})+O(\\tilde{q}^2),\n\\qquad {K' \\over K} = {1 \\over z-1} + O(1) \\ .\n \\end{equation}\nIt follows that near the horizon $z=1$, equation (\\ref{eom}) can\nbe solved by (for $\\vec q =0$)\n \\begin{equation}\n\\phi (z) \\sim (z-1)^{\\pm {i \\tilde{\\omega} \\over 4 N_{\\sharp}}} \\sim (z-1)^{\\pm {i \\omega\n\\over 4 \\pi T}}\n \\end{equation}\nwith the infalling boundary condition corresponding to the\nnegative sign. 
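Before constructing the analytic low-frequency solution, we remark that the infalling prescription can also be implemented numerically. The short Python sketch below (added for illustration and not part of the original derivation; it assumes $\\tilde{q}=0$, uses $\\tilde{f}$ from~(\\ref{sDef}), $K$ and $K_2$ from~(\\ref{vds}) together with the relation for $\\tilde{f}'$ quoted in the footnote above, sets $N_{\\sharp}=a$, and evaluates $K'$ by finite differences) integrates~(\\ref{eom}) out from the horizon and extracts the coefficient of $z^{-4}$ in $\\phi$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nlam = 0.1                                # Gauss-Bonnet coupling\na2  = 0.5*(1 + np.sqrt(1 - 4*lam))\nN   = np.sqrt(a2)                        # N_sharp = a\nw   = 0.01                               # small dimensionless frequency\n\ndef f(z):  return z**2\/(2*lam)*(1 - np.sqrt(1 - 4*lam*(1 - z**-4)))\ndef fp(z): return 2*z*(2*z**2 - f(z))\/(z**2 - 2*lam*f(z))\ndef K(z):  return z**2*f(z)*(z - lam*fp(z))\ndef Kp(z, h=1e-7): return (K(z+h) - K(z-h))\/(2*h)\n\ndef rhs(z, y):                           # y = (phi, phi'), complex\n    K2 = K(z)*w**2\/(N*f(z))**2           # K_2 at q = 0\n    return [y[1], -(Kp(z)*y[1] + K2*y[0])\/K(z)]\n\n# infalling behavior phi ~ (z-1)^(-i w\/(4N)) near the horizon\neps, nu = 1e-6, w\/(4*N)\ny0 = [eps**(-1j*nu), -1j*nu*eps**(-1j*nu - 1)]\nsol = solve_ivp(rhs, [1 + eps, 10.0], y0, rtol=1e-10, atol=1e-12,\n                dense_output=True)\n\n# fit phi = A + B\/z^4; expect B\/A ~ i*w*a^2*sqrt(1-4*lam)\/(4*N)\nz1, z2 = 6.0, 8.0\np1, p2 = sol.sol(z1)[0], sol.sol(z2)[0]\nB = (p1 - p2)\/(z1**-4 - z2**-4)\nA = p1 - B*z1**-4\nprint('numerical Im(B\/A):', (B\/A).imag)\nprint('analytic         :', w*a2*np.sqrt(1 - 4*lam)\/(4*N))\n\\end{verbatim}\nFor small $\\tilde{\\omega}$ the two printed numbers agree up to small $O(\\tilde{\\omega}^2)$ and discretization corrections, matching the asymptotics~(\\ref{asu}) derived below.\n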
To solve (\\ref{eom}) in the small frequency limit,\nit is convenient to write\n\\begin{equation} \\label{anse}\n\\phi_{in} (z; k)=e^{-i \\left({\\tilde{\\omega} \\over 4 N_{\\sharp}}\\right) {\\rm ln}\\left(\\frac{a^2\n\\tilde{f}}{z^2}\\right)} \\left(1-i\\frac{\\tilde{\\omega}} {4\nN_{\\sharp}}g_1(z)+O(\\tilde{\\omega}^2, \\tilde{q}^2)\\right),\n\\end{equation}\nwhere we require $g_1 (z)$ to be nonsingular at the horizon $z=1$.\nWe show in Appendix~\\ref{ap:solo} that $g_1$ is a nonsingular function with the large $z$\nexpansion\n \\begin{equation} \\label{lowEatCFT}\n g_1 (z) = {4 {\\lambda}_{GB} \\over \\sqrt{1-4 {\\lambda}_{GB}}} {a^2 \\over z^4} + O(z^{-8}) \\\n .\n \\end{equation}\nTherefore, with our boundary conditions (\\ref{explicitBC}), we find\n \\begin{equation} \\label{asu}\n \\phi (z;k) =J(k)\\left[ 1 + {i \\tilde{\\omega} \\over 4 N_{\\sharp}} a^2 \\sqrt{1-4 {\\lambda}_{GB}} \\left({1 \\over z^4}\n + O(z^{-8}) \\right) + O(\\tilde{\\omega}^2, \\tilde{q}^2)\\right].\n \\end{equation}\nThis is the right asymptotic behavior for the bulk field $\\phi$\ndescribing metric fluctuations since the CFT stress tensor has\nconformal dimension 4.\n\nPlugging~(\\ref{asu}) into (\\ref{rrk}) and using the expressions for\n$C$ and $K$ in (\\ref{vds}), the prescription described in Sec.\\ref{adscft} gives\n \\begin{equation}\n {\\rm Im} G_{12,12} (\\omega,0)=\\omega{1 \\over 16 \\pi G_N} \\left(r_+^3\\over \\lad^3\\right) (1-4\n {\\lambda}_{GB}) +O(\\omega^2).\n \\end{equation}\nThen, the Kubo formula~(\\ref{rrp}) yields\n \\begin{equation} \\label{ets}\n \\eta = {1 \\over 16 \\pi G_N} \\left(r_+^3\\over \\lad^3\\right) (1-4\n {\\lambda}_{GB}).\n \\end{equation}\nFinally, taking the ratio of (\\ref{ets}) and (\\ref{entr}) we find that\n \\begin{equation} \\label{ror}\n {\\eta \\over s} = {1 \\over 4 \\pi} (1-4\n {\\lambda}_{GB}).\n \\end{equation}\nThis is \\emph{nonperturbative} in ${\\lambda}_{GB}$. In particular,\nthe linear term is the only nonvanishing correction.\\footnote{It would be interesting to find an explanation for the vanishing of the higher order corrections.}\n\nWe now conclude this section with various remarks:\n\n\\begin{enumerate}\n\n\\item Based on the field redefinition argument presented in\nSec.\\ref{ap:fR}, one finds from (\\ref{ror}) that for (\\ref{epr}),\n \\begin{equation} \\label{oror}\n {\\eta \\over s} = {1 \\over 4 \\pi} \\left(1 - 8 {\\alpha}_3 \\right) + O({\\alpha}_i^2).\n \\end{equation}\nWe have also performed an independent calculation of $\\eta\/s$\n(without using field redefinitions) for~(\\ref{epr}) using all\nthree methods outlined in Sec.\\ref{meds} and\nconfirmed~(\\ref{oror}).\n\n\\item The ratio $\\eta\/s$ dips below the viscosity bound for ${\\lambda}_{GB}\n> 0$ in Gauss-Bonnet gravity and for ${\\alpha}_3 > 0$ in~(\\ref{epr}).\nIn particular, the shear viscosity approaches zero as ${\\lambda}_{GB} \\to {1 \\over 4}$ for Gauss-Bonnet.\n Note that the whole off-shell action becomes zero in this limit. It is\nlikely the on-shell action also vanishes, implying that the\ncorrelation function could become identically zero in this limit.\n\n\n\\item Fixing the temperature $T$ and the boundary speed of light\nto be unity, as we take ${\\lambda}_{GB} \\to -\\infty$, $\\eta \\sim (-{\\lambda}_{GB})^{1\n\\over 4} \\to \\infty$. 
In contrast, the entropy density decreases as\n$s \\sim (-{\\lambda}_{GB})^{-{3 \\over 4}} \\to 0$.\n\n\\item The shear viscosity of the boundary conformal field theory\nis associated with absorption of transverse modes by the black\nbrane in the bulk. This is a natural picture since the shear\nviscosity measures the dissipation rate of those fluctuations: the\nquicker the black brane absorbs them, the higher the dissipation\nrate will be.\nFor example, as ${\\lambda}_{GB} \\rightarrow-\\infty$, $\\eta\/s$ approaches\ninfinity; this describes a situation where every bit of the black\nbrane horizon devours the transverse fluctuations very quickly.\nIn this limit the curvature singularity at $z = \\left(1-{1 \\over 4 {\\lambda}_{GB}}\\right)^{-\\frac{1}{4}}$ approaches the horizon and the tidal force near the horizon becomes strong.\nOn the other hand, as ${\\lambda}_{GB} \\to {1 \\over 4}$,\n$\\eta\/s\\rightarrow0$ and the black brane very slowly absorbs transverse modes.\\footnote{We note that for\n${\\lambda}_{GB}=\\frac{1}{4}$ in $4+1$ spacetime dimensions, the radial direction of the background\ngeometry resembles that of a Ba\\~{n}ados-Teitelboim-Zanelli (BTZ) black brane.}\n\n\\item The calculation leading to (\\ref{ror}) can be generalized to\ngeneral $D$ spacetime dimensions and one finds for $D\\geq4+1$\\footnote{For general\ndimensions we use the convention\n \\begin{equation}\nS = \\frac{1}{16\\pi G_N} \\mathop\\int{d^{D}x \\, \\sqrt{-g} \\,\n\\left[R-2\\Lambda+ \\alpha_{GB} \\lad^2\n(R^2-4R_{\\mu\\nu}R^{\\mu\\nu}+R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma})\n\\right]} \\\n\\end{equation}\nwith ${\\Lambda} = - {(D-1) (D-2) \\over 2 \\lad^2}$ and ${\\lambda}_{GB} = (D-3)(D-4)\n\\alpha_{GB}$.}\n \\begin{equation}\n\\label{final} \\frac{\\eta}{s}=\\frac{1}{4\\pi}\\left[1-2\\frac{(D-1)}{(D-3)}{\\lambda}_{GB} \\right] \\ .\n\\end{equation}\nHere again ${\\lambda}_{GB}$ is bounded above by ${1 \\over 4}$. Thus for $D >\n4+1$, $\\eta$ never approaches zero within Gauss-Bonnet theory. For\n$D=3+1$ or $2+1$, in which case the Gauss-Bonnet term is\ntopological, there is no correction to $\\eta\/s$.\n\n\\item In Appendix~\\ref{junk}, we obtain the same result\n(\\ref{ror}) using the membrane paradigm \\cite{Kovtun:2003wp}. Thus\nwhen embedded into the AdS\/CFT correspondence, the membrane paradigm\ncorrectly captures the infrared (hydrodynamic) sector of the\nboundary thermal field theory. Further, we see something interesting\nin its derivation. There, the diffusion constant is expressed as the\nproduct of a factor evaluated at the horizon (\\ref{one}) and an\nintegral from the horizon to infinity (\\ref{two}). In the limit\n${\\lambda}_{GB}\\to{1 \\over 4}$, it is the former that approaches zero.\n\n\\end{enumerate}\n\n\\section{Causality in Bulk and on Boundary}\n\\label{gravitoncone}\n\nIn this section we investigate whether there are causality\nproblems in the bound-violating theories discussed above. First we will discuss the bulk causal structure.\nThen we discuss a curious high-momentum metastable state in the\nbulk graviton wave equation that may have consequences for\nboundary causality. The analysis in this section is refined in~\\cite{newBLMSY} where we indeed see a precise signal of causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\\subsection{Graviton cone tipping}\n\\label{conetip}\n\nAs a consequence of higher derivative terms in the gravity action,\ngraviton wave packets in general do not propagate on the\nlight cone of a given background geometry. 
For example, when ${\\lambda}_{GB}\n\\neq 0$, the equation (\\ref{eom}) for the propagation of a\ntransverse graviton differs from that of a minimally coupled\nmassless scalar field propagating in the same background geometry\n(\\ref{bba}). To make the discussion precise, let us write (we will\nconsider only $x_{1, 2}$-independent waves) \\begin{equation} \\label{envelope}\n\\phi(t, r, x_3)=e^{-i\\omega t +i k_r r+i q x_3}\\phi_{en}(t, r,\nx_3). \\end{equation} Here, $\\phi_{en}$ is a slowly varying envelope function,\nand we take the limit $k=(\\omega, k_r, 0, 0, q)\\to \\infty$. In\nthis limit, the equation of motion (\\ref{eom}) reduces to \\begin{equation}\n\\label{eikonal} k^{\\mu}k^{\\nu}g^{\\rm eff}_{\\mu\\nu}\\approx 0, \\ \\end{equation}\nwhere\n \\begin{equation}\n\\label{effgeo} ds_{\\rm eff}^2=g^{\\rm eff}_{\\mu\\nu}dx^{\\mu}dx^{\\nu}\n=f(r)N_{\\sharp}^2 \\left(-dt^2 + {1 \\over c_g^2} dx_3^2 \\right)\n+\\frac{1}{f(r)}dr^2.\n \\end{equation}\nIn (\\ref{effgeo})\n \\begin{equation} \\label{Nse}\nc_g^2 (z) = {N_{\\sharp}^2 \\tilde f(z) \\over z^2} {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 -\n{{\\lambda}_{GB} \\tilde{f}' \\over z}} \\equiv c_b^2 {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 - {{\\lambda}_{GB} \\tilde{f}'\n\\over z}}\n \\end{equation}\ncan be interpreted as the local ``speed of graviton'' on a\nconstant $r$-hypersurface. $c_b^2 \\equiv {N_{\\sharp}^2 \\tilde f(z) \\over\nz^2} $ introduced in the second equality in~(\\ref{Nse}) is the\nlocal speed of light as defined by the background\nmetric~(\\ref{bba}). Thus the graviton cone in general does not coincide with the\nstandard null cone or light cone defined by the background metric.\\footnote{Note that\n \\begin{equation} \\label{Nsee}\n {c_g^2 \\over c_b^2} = {1-{\\lambda}_{GB} \\tilde{f}'' \\over 1 - {{\\lambda}_{GB} \\tilde{f}' \\over z}} =\n{1 - 4 {\\lambda}_{GB} + 12 {{\\lambda}_{GB} \\over z^4} \\over 1 - 4 {\\lambda}_{GB} + 4 {{\\lambda}_{GB} \\over z^4} }\n\\ , \\end{equation} and in particular the ratio is greater than $1$ for ${\\lambda}_{GB} >\n0$. Note that bulk causality and the existence of a well-posed\nCauchy problem do not crucially depend on reference metric\nlight cones and such tipping is not a definitive sign of\ncausality problems. Also for any value of ${\\lambda}_{GB}$, the graviton cone coincides with the\nlight cone in the radial direction. If not, we could have argued for\nthe violation of the second law of thermodynamics following\n\\cite{Dubovsky:2006vk,Eling:2007qd}. Further note that for ${\\lambda}_{GB} <\n-{1 \\over 8}$, there exists a region outside the horizon where $c_g^2\n< 0$, which will lead to the appearance of tachyonic modes, following\n\\cite{spectre}. We have not explored the full significance of this\ninstability here since it is not correlated with the viscosity bound.} A few more comments about the graviton cone can be found at the end of Appendix~\\ref{junk}.\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.7,angle=0]{1Velocity.eps}\n\\caption{$c_g^2 (z)$ (vertical axis) as a function of $z$\n(horizontal axis) for ${\\lambda}_{GB} =0.08$ (left panel) and ${\\lambda}_{GB} =0.1$\n(right panel). For ${\\lambda}_{GB} < {9 \\over 100}$, $c_g^2$ is a monotonically\nincreasing function of $z$. When ${\\lambda}_{GB} > {9 \\over 100}$, as one\ndecreases $z$ from infinity, $c_g^2$ increases from $1$ to a\nmaximum value at some $z>1$ and then decreases to $0$ as $z \\to 1$\n(horizon). 
}\n \\label{velo}\n\\end{figure}\n\nIn the nongravitational boundary theory there is an invariant\nnotion of light cone and causality.\nAt a heuristic level, a graviton wave packet moving at speed $c_g (z)$ in the\nbulk should translate into disturbances of the stress tensor propagating with the same velocity in the boundary theory.\nIt is\nthus instructive to compare $c_g$ and $c_b$ with the boundary\nspeed of light, which we now set to unity by taking $N_{\\sharp} = a$~($a$\nwas defined in~(\\ref{Ade})). At the boundary ($z= \\infty$) one finds\nthat $c_g (z)= c_b (z)= 1$. In the bulk, the background local\nspeed of light $c_b$ is always smaller than $1$, which is related\nto the redshift of the black hole geometry. The local speed of graviton $c_g (z)$, however, can be greater than $1$ for a\ncertain range of $z$ if ${\\lambda}_{GB}$ is sufficiently large. To see this,\nwe can examine the behavior of $c_g^2$ near $z = \\infty$,\n \\begin{equation} \\label{veS}\n c_g^2 (z) - 1 = { b_1 \\over z^4} + O(z^{-8}) , \\quad z\n \\to \\infty,\n \\qquad b_1({\\lambda}_{GB}) = - {1 + \\sqrt{1 - 4 {\\lambda}_{GB}} - 20 {\\lambda}_{GB} \\over 2 (1 - 4\n {\\lambda}_{GB})} \\ .\n \\end{equation}\n $b_1 ({\\lambda}_{GB})$ becomes positive and thus $c_g^2$ increases above $1$\n if ${\\lambda}_{GB} > {9 \\over 100}$. For such a ${\\lambda}_{GB}$, as we decrease $z$ from\n infinity, $c_g^2$ will increase from $1$ to a maximum at some value of\n $z$ and then decrease to zero at the horizon. See Fig.~\\ref{velo}\n for the plot of $c_g^2 (z)$ as a function of $z$ for two values of ${\\lambda}_{GB}$.\n When ${\\lambda}_{GB} = {9 \\over 100}$ one finds that\n the next order term in (\\ref{veS}) is negative and thus $c_g^2$\n does not go above $1$.\n Also note that as ${\\lambda}_{GB} \\to {1 \\over 4}$,\n $b_1 ({\\lambda}_{GB})$ goes to plus infinity.\\footnote{In fact, the coefficients of\nall higher order terms in the $1\/z$ expansion become divergent in this limit.}\nThus heuristically, in the boundary theory there is a potential for superluminal propagation of disturbances of the stress tensor.\n\nIn~\\cite{newBLMSY} we explore whether such bulk graviton cone\nbehavior can lead to boundary causality violation by studying the\nbehavior of graviton null geodesics in the effective geometry.\nThere, we indeed see causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\n\\subsection{New metastable states at high momenta (${\\lambda}_{GB} > {9 \\over 100}$) }\n\nWe now study the behavior of the full graviton wave equation. Let us recast equation~(\\ref{eom}) in Schr\\\"{o}dinger form. For this purpose, we introduce\n \\begin{equation}\n{dy \\over dz} = {1 \\over N_{\\sharp} \\tilde{f}(z)} , \\qquad \\psi = B \\phi , \\qquad B\n= \\sqrt{K \\over \\tilde{f}} \\ .\n \\end{equation}\nThen (\\ref{eom}) becomes\n \\begin{equation} \\label{enr}\n - \\partial_y^2 \\psi + V(y) \\psi = \\tilde{\\omega}^2 \\psi\n \\end{equation}\nwith\n \\begin{equation} \\label{potential}\n V (y) = \\tilde{q}^2 c_g^2 (z) + V_1, \\qquad V_1 (y) = {\\partial_y^2 B\n\\over B} = {N_{\\sharp}^2 \\tilde{f}^2 \\over B} \\left(B'' + {\\tilde{f}' \\over \\tilde{f}} B' \\right) \\ ,\n \\end{equation}\n where $c_g^2 (z)$ was defined in~(\\ref{Nse}).\nThe advantage of using (\\ref{enr}) is that qualitative features of\nthe full graviton propagation (including the radial direction) can be\ninferred from the potential $V(y)$, since we have intuition for\nsolutions of the Schr\\\"{o}dinger equation. 
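Both the graviton cone behavior discussed above and the shape of $V$ analyzed below are easy to examine numerically. The following Python sketch (added for illustration; it evaluates $c_g^2$ from~(\\ref{Nse}) and $V$ from~(\\ref{potential}) with finite-difference derivatives, again using the relation for $\\tilde{f}'$ quoted earlier and setting $N_{\\sharp}=a$) locates the maximum graviton speed and any interior minimum of the potential:\n\\begin{verbatim}\nimport numpy as np\n\ndef analyse(lam, qt=500.0):\n    a2 = 0.5*(1 + np.sqrt(1 - 4*lam))\n    N = np.sqrt(a2)                      # N_sharp = a\n\n    def f(z):  return z**2\/(2*lam)*(1 - np.sqrt(1 - 4*lam*(1 - z**-4)))\n    def fp(z): return 2*z*(2*z**2 - f(z))\/(z**2 - 2*lam*f(z))\n    def d(g, z, h=1e-4):  return (g(z+h) - g(z-h))\/(2*h)\n    def d2(g, z, h=1e-4): return (g(z+h) - 2*g(z) + g(z-h))\/h**2\n\n    def cg2(z):                          # local graviton speed squared\n        return N**2*f(z)\/z**2*(1 - lam*d2(f, z))\/(1 - lam*fp(z)\/z)\n\n    def B(z):                            # B = sqrt(K\/f) = z*sqrt(z - lam*f')\n        return z*np.sqrt(z - lam*fp(z))\n\n    def V(z):                            # V = qt^2 cg2 + V1\n        V1 = N**2*f(z)**2*(d2(B, z) + fp(z)\/f(z)*d(B, z))\/B(z)\n        return qt**2*cg2(z) + V1\n\n    z = np.linspace(1.05, 10.0, 4000)\n    v = np.array([V(zz) for zz in z])\n    c = np.array([cg2(zz) for zz in z])\n    mins = [round(zz, 2) for i, zz in enumerate(z[1:-1], 1)\n            if v[i] < v[i-1] and v[i] < v[i+1]]\n    print(f'lam={lam}: max c_g^2 = {c.max():.6f}, minima of V at z = {mins}')\n\nanalyse(0.08)   # c_g^2 <= 1 everywhere and V has no interior minimum\nanalyse(0.10)   # c_g^2 exceeds 1 and V develops a local minimum\n\\end{verbatim}\nIn accordance with the discussion above (and with Figs.~\\ref{velo} and~\\ref{pote}), the sketch finds $c_g^2 \\leq 1$ and a monotonic potential for ${\\lambda}_{GB}=0.08$, while for ${\\lambda}_{GB}=0.1$ the graviton speed exceeds $1$ and, at large $\\tilde{q}$, an interior minimum of $V$ appears.\n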
Since $y$ is\na monotonic function of $z$, below we will use the two coordinates\ninterchangeably in describing the qualitative behavior of $V(y)$.\n\n\nOne can check that $V_1 (z)$ is a monotonically increasing\nfunction for any ${\\lambda}_{GB} > 0$ (note $V_1 (z) \\to + \\infty$ as $z \\to\n\\infty$). For ${\\lambda}_{GB} \\leq {9 \\over 100}$, $c_g^2 (z)$ is also a\nmonotonically increasing function as we discussed in the last\nsubsection and the whole $V(z)$ is monotonic. When ${\\lambda}_{GB} > {9 \\over\n100}$, there exists a range of $z$ where $c_g^2 (z)$ decreases with\nincreasing $z$ for sufficiently large $z$. Thus $V(z)$ can now\nhave a local minimum for sufficiently large $\\tilde{q}$. For\nillustration, see Fig.~\\ref{pote}\n for the plot of $V (z)$ as a function of $z$ for two values of ${\\lambda}_{GB}$.\n\n\n\n\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.65,angle=0]{2potential.eps}\n\\caption{$V(z)-\\tilde{q}^2$ (vertical axis) as a function of $z$ (horizontal\naxis) for ${\\lambda}_{GB}=0.08$ and $\\tilde{q} =500$ (left panel) and for ${\\lambda}_{GB}=0.1$\nand $\\tilde{q} =500$ (right panel). $V(z)$ is a monotonically increasing function of\n$z$ for ${\\lambda}_{GB} \\leq {9 \\over 100}$, but develops a local minimum for\n${\\lambda}_{GB} > {9 \\over 100}$ with large enough $\\tilde{q}$.}\n \\label{pote}\n\\end{figure}\n\n\nGenerically, a graviton wave packet will fall into the black brane\nvery quickly, within the time scale of the inverse temperature ${1\n\\over T}$ (since this is the only scale in the boundary theory).\nHere, however, precisely when the local speed of graviton $c_g$ can exceed $1$ (i.e. for ${\\lambda}_{GB} > {9 \\over 100}$), $V(z)$\ndevelops a local minimum for large enough $\\tilde{q}$ and the\nSchr\\\"{o}dinger equation~(\\ref{enr}) can have metastable states\nliving around the minimum. Their lifetime is determined by the\ntunneling rate through the barrier which separates the minimum\nfrom the horizon. For very large $\\tilde{q}$ this barrier becomes very\nhigh and an associated metastable state has a lifetime\nparametrically larger than the timescale set by the temperature.\nIn the boundary theory, these metastable states translate into\npoles of the retarded Green function for $T_{xy}$ in the lower\nhalf-plane. The imaginary part of such a pole is given by the\ntunneling rate of the corresponding metastable state. Thus for\n${\\lambda}_{GB} > {9 \\over 100}$, in the boundary theory we find new\nquasiparticles at high momenta with a small imaginary\npart.\\footnote{Similar long-lived quasiparticles exist\nfor ${\\cal N}=4$ SYM theory on $S^3$~\\cite{guido}, but not on ${\\bf\nR^3}$.}\n\nIn~\\cite{newBLMSY}, we confirm that those long-lived quasiparticles give rise to causality violation for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\n\\section{Discussion}\n\\label{discussion}\n\nIn this paper we have computed $\\eta\/s $ for Gauss-Bonnet gravity using a variety of techniques. We have found that the viscosity bound\nis violated for ${\\lambda}_{GB} >0$ and have looked for pathologies correlated to this violation. For small positive ${\\lambda}_{GB}$ we have not found any.\nThe violation of the bound becomes extreme as ${\\lambda}_{GB} \\rightarrow {1 \\over 4}$ where $\\eta$ vanishes. We\nhave focused our attention on this region to find what unusual properties of the boundary theory could yield a violation not only of the bound but also of the qualitative intuitions suggesting a lower bound on $\\eta\/s$. 
Above, we have also discussed a novel quasiparticle excitation. In~\\cite{newBLMSY}, causality violation is firmly established for ${\\lambda}_{GB}>\\frac{9}{100}$.\n\nIt is also instructive to examine the\nbehavior of the zero temperature theory as ${\\lambda}_{GB} \\rightarrow {1 \\over 4}$. Basic parameters describing the boundary CFT are the coefficients of\nthe 4D Euler and Weyl densities called $a$ and $c$, respectively. These were first computed in \\cite{Henningson:1998gx}, and for Gauss-Bonnet gravity in \\cite{Nojiri:1999mh}. Their results indicate that\n\\begin{eqnarray}\n\\label{ac} c &\\sim& (1 -4{\\lambda}_{GB})^{{1\\over 2}}, \\cr a &\\sim& (3\n(1-4{\\lambda}_{GB})^{{1\\over 2}}-2).\n\\end{eqnarray}\nThe parameter $c$ is related to the two-point function of the\nboundary stress tensor, which is forced by unitarity to be\npositive. Equation~(\\ref{ac}) shows that $c$ vanishes at ${\\lambda}_{GB} ={1 \\over 4}$,\ndemonstrating the sickness of this point.\\footnote{This can also\nbe seen from the derivations in Sec.\\ref{shear}.} For ${\\lambda}_{GB}$ a bit\nless than ${1 \\over 4}$ the stress tensor couples very weakly in a system with a large number of degrees of freedom. This is peculiar\nindeed. In the bulk, gravity appears to become strongly coupled at this point.\n\nThe coefficient $a$ vanishes at ${\\lambda}_{GB}={5 \\over 36}$. The significance of this is unclear.\n\nMore generally, we believe it would be valuable to explore how\ngeneric higher derivative corrections modify various gauge theory\nobservables. This is important not only for seeing how reliable\nit is to use the infinite 't Hooft coupling approximation for\nquestions relevant to QCD, but also for achieving a more balanced\nconceptual picture of the strong coupling dynamics. Furthermore, this may\ngenerate new effective tools for separating the\nswampland from the landscape.\n\nAs a cautionary note, we should mention that pathologies in the boundary theory in regions that violate the viscosity bound\nmay not be visible in gravitational correlators, at least when $g_s = 0$. As an example, consider the ${\\alpha}'^3 R^4$ terms discussed\nin \\cite{BLS}. For positive ${\\alpha}'$, the physical case, the viscosity bound is preserved. But the bulk effective action can equally be\nstudied for ${\\alpha}'$ negative. Here gravitational correlators can be computed and will violate the viscosity bound. The only indication\nof trouble in the boundary theory at $g_s=0$ will come from correlators of string scale massive states, whose mass and CFT conformal weight\nscale as $1\/({\\alpha}')^{{1\\over 2}}$, an imaginary number!\n\n\\begin{acknowledgments}\nWe thank A.~Adams, N.~Arkani-Hamed, R-G.~Cai, A.~Dymarsky, Q.~Ejaz, T.~Faulkner, H.~Jockers,\nP.~Kovtun, J.~Liu, D.~Mateos, H.~Meyer, K.~Rajagopal, D.~T.~Son,\nA.~Starinets, L.~Susskind, B.~Zwiebach for discussions. HL also wishes to thank\nJ.~Liu for collaboration at the initial stages of the work. We would also like to thank Yevgeny Katz and Pavel Petrov for sharing a draft of their work \\cite{new}.\n\n\nMB and HL are partly supported by the U.S. Department of Energy\n(D.O.E) under cooperative research agreement \\#DE-FG02-05ER41360.\nHL is also supported in part by the A. P. Sloan Foundation and the\nU.S. Department of Energy (DOE) OJI program. HL is also supported\nin part by the Project of Knowledge Innovation Program (PKIP) of\nChinese Academy of Sciences. 
HL would like to thank KITPC\n(Beijing) for hospitality during the last stage of this project.\nResearch at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research \\& Innovation. RCM also acknowledges\nsupport from an NSERC Discovery grant and from the Canadian\nInstitute for Advanced Research. SS is supported by NSF grant\n9870115 and the Stanford Institute for Theoretical Physics. SY is\nsupported by an Albion Walter Hewlett Stanford Graduate Fellowship\nand the Stanford Institute for Theoretical Physics.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}