\section{Introduction}\n\nAn outstanding problem in stellar astrophysics concerns\nthe identification of the physical processes responsible for the heating\nof outer stellar atmospheres and the acceleration of stellar winds\n(see reviews by \\citealt{nara90,nara96} and \\citealt{gued07}).\nFor the Sun and other types of stars with surface convection zones,\nacoustic heating has been identified as most likely responsible for\nbalancing the ``basal\" flux emission \\citep[e.g.,][]{buch98,cunt07}.\nOn the other hand, it is well known that most,\nif not all, stars also exhibit a large amount of magnetic activity.\nThus, the chromospheres of main-sequence stars, including the Sun, are\nexpected to be significantly shaped by magnetically heated structures\n\\citep[e.g.,][]{saar94,schri96}.\n\nThere is a large body of previous work devoted to the description of\nthe two-component structure of stellar chromospheres. In these models\nthe magnetic component of the chromosphere is typically heated by\nenergy dissipation of longitudinal flux tube waves. \\cite{cunt99}\ncomputed two-component theoretical chromosphere\nmodels for K2~V stars with different levels of magnetic activity,\nwith the filling factor for the magnetic component determined\nfrom an observational relationship between the measured magnetic\narea coverage and the stellar rotation period. For stars with\nvery slow rotation, they were able to reproduce the basal flux\nlimit of chromospheric emission previously identified with\nnon-magnetic regions. 
Most notably, however, \cite{cunt99}\ndeduced a relationship between the Ca~II~H+K emission\nand the stellar rotation rate that is consistent with the\nrelationship previously obtained by observations; see also\n\cite{cunt98} for earlier results.\n\nFurther studies for a large range of spectral types were given\nby \cite{fawz02} based on specified values for the\nmagnetic filling factor. They concluded that heating by\nacoustic and longitudinal flux tube waves is able to explain\nmost of the observed range of chromospheric activity as gauged\nby the Ca~II and Mg~II lines. On the other hand, indirect\nevidence for non-wave (i.e., reconnective) heating was also\ndeduced, which is needed to explain the structure of the\nhighest layers of stellar chromospheres.\n\nThese types of models, as well as envisioned future models of\nchromospheric heating and emission, partially motivated by\nthe quest of investigating the effects of UV and EUV emission\non planetary atmospheres and (potentially) the evolution of life\n\citep[e.g.,][]{guin03,lamm03,gued07,cunt10}, require the\ncontinuation of detailed simulations of magnetic wave energy\ngeneration, including studies on longitudinal tube waves in\ndifferent types of stars, particularly main-sequence stars.\nThis latter goal is the focus of the present paper.\n\nPrevious work on the calculation of longitudinal tube waves\nhas been based on progress made by \cite{musi94}, who\ncorrected the Lighthill-Stein theory by incorporating an\nimproved description of the spatial and temporal spectrum\nof the turbulent convection and utilized the corrected theory\nfor calculating revised stellar acoustic wave energy fluxes\n\citep{ulms96,ulms99}. This type of work focused\non the generation of acoustic waves; however, it did not consider \nstellar magnetic fields. 
Considering the fundamental\nimportance of magnetic heating in most, if not all stars,\na set of papers focused on the study of longitudinal and\ntransverse tube wave generation has been pursued\n\\citep[e.g.][]{musi89,musi95,ulms98}. \nIn subsequent work, \\cite{ulms01} used the\napproach developed by \\cite{ulms98} to compute\nthe wave energy fluxes carried by longitudinal tube waves\npropagating along thin and vertically oriented magnetic flux\ntubes that are embedded in atmospheres of late-type stars.\nThis numerical approach supplemented previous work by\n\\cite{musi00}, who analytically calculated the\nlongitudinal wave energy fluxes generated in\nstellar convective zones.\n\n\\begin{figure}\n \\begin{minipage}[t]{0.50 \\textwidth}\n \\includegraphics[width=0.999 \\textwidth]{FC_fig1.eps}\n \\end{minipage}\n\\caption{Diagram of a flux tube embedded into a stellar convective zone.\nThe squeezing point of the tube is assumed to be located at optical depth\n$\\tau_{5000} = 1$, coinciding with the ``stellar surface\". Credit: P. Ulmschneider.\n\\label{fig1}}\n\\end{figure}\n\nIn the numerical approach by \\cite{ulms98},\nlongitudinal tube waves are generated as a result of the\nsqueezing of a thin, vertically oriented magnetic \nflux tube by external pressure fluctuations produced \nby the turbulent motions in a stellar photosphere and convection \nzone, which correspond to associated velocity fluctuations.\nHence, to compute the pressure fluctuations imposed \non the tube, it is required to know the external turbulent \nmotions. 
The motions are modeled by specifying the rms \nvelocity amplitude and using an extended Kolmogorov turbulent\nenergy spectrum with a modified Gaussian frequency factor\n\citep{musi94}.\n\nThe main advantage of this approach is\nthat it is not restricted to linear waves and that it\nallows for occasionally large-amplitude waves observed\non the Sun at the photospheric level\n\citep[e.g.,][]{mull85,komm91,nesi93,mull94} and also seen in detailed\ntime-dependent simulations of solar and stellar convection\n\citep[e.g.,][]{nord90,nord91,catt91,stef93,nord97}.\nHorizontal flow patterns are a notable candidate process \nfor the initiation of wave modes with respect to flux tubes\n(see Fig. 1); see \cite{stei09a,stei09b} for recent models of\nconvective flows for the Sun based on up-to-date simulations\nextending toward the scale of supergranules.\n\nThe code used by \cite{ulms98}\nwas originally developed by \cite{herb85}, who treated \nmagnetic flux tubes in the so-called thin flux tube \napproximation and described them mathematically by using a set \nof one-dimensional, time-dependent and nonlinear MHD equations.\nIt allows one to compute the instantaneous and time-averaged \nlongitudinal tube wave energy fluxes and the corresponding wave \nenergy spectra. 
It requires specifying the strength of the \nmagnetic field inside the flux tube and the height in the stellar \natmosphere where the squeezing is assumed to take place.\nThe code has previously been used to calculate \nwave energy fluxes and spectra for longitudinal tube waves \npropagating in the solar atmosphere (see \\citealt{fawz98}\nfor models of different spreading factors), and to investigate the \ndependence of these fluxes on the magnetic field strength, the rms \nvelocity amplitude of turbulent motions, and the location of \nthe squeezing in the atmosphere; for recent models for\nother stars with non-solar metallicities see \\cite{fawz10}.\n\nThe reason for reinvestigating the generation of\nlongitudinal flux tube waves in main-sequence stars\nis three-fold. First, we would like to use realistic\ncombinations of ($T_{\\rm eff}$, $\\log g$), with $T_{\\rm eff}$\nas stellar effective temperature and $\\log g$ as surface\ngravity, for main-sequence stars guided by up-to-date\nstudies by R.~L. Kurucz and D.~F. Gray. Note that\n$\\log g$ is typically close to 4.5 (see Table 1).\nPrevious models by \\cite{ulms01} and others\nhave been pursued for either $\\log g = 4$ or $5$, thus\nresulting in unnecessary interpolation errors.\nSecondly, we would like to investigate the amount of\nupward propagating longitudinal wave energy flux\nfor a wider range of stellar convective and magnetic\nparameters, notably the mixing length $\\alpha$ amid\nrecent progress made through models by \\cite{stei09a,stei09b}\nand others. Thirdly, we would like to deduce a\nfitting formula for the wave energy flux that\nallows insight into the role of the relevant parameters\nconcerning that flux and, furthermore, offers a\nmore universal use.\n\nOur paper is structured as follows: In Sect. 
2, we comment\non the parameters of theoretical main-sequence stars.\nAdditionally, we summarize the method for the computation\nof longitudinal tube waves as well as the construction of\nstellar flux tube models. Our results are given in Sect. 3.\nFinally, in Sect. 4 we present the summary and conclusions.\n\n\\begin{table}\n\\caption{Theoretical main-sequence stars}\n\\centering\n\\vspace{0.05in}\n\\vspace{0.05in}\n\\begin{tabular}{l c c c c}\n\\hline\n\\hline\n\\noalign{\\vspace{0.03in}}\nSp. Type & $T_{\\rm eff}$ & $R$ & $M$ & $\\log~g$ \\\\\n... & (K) & ($R_\\odot$) & ($M_\\odot$) & ... \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\n F0~V & 7178 & 1.620 & 1.600 & 4.223 \\\\\n F1~V & 7042 & 1.541 & 1.560 & 4.255 \\\\\n F2~V & 6909 & 1.480 & 1.520 & 4.279 \\\\ \n F3~V & 6780 & 1.453 & 1.480 & 4.283 \\\\ \n F4~V & 6653 & 1.427 & 1.440 & 4.287 \\\\ \n F5~V & 6528 & 1.400 & 1.400 & 4.292 \\\\ \n F6~V & 6403 & 1.333 & 1.330 & 4.312 \\\\ \n F7~V & 6280 & 1.267 & 1.260 & 4.333 \\\\ \n F8~V & 6160 & 1.200 & 1.190 & 4.355 \\\\ \n F9~V & 6047 & 1.155 & 1.120 & 4.362 \\\\ \n G0~V & 5943 & 1.120 & 1.050 & 4.360 \\\\ \n G1~V & 5872 & 1.100 & 1.022 & 4.364 \\\\ \n G2~V & 5811 & 1.080 & 0.994 & 4.368 \\\\ \n G3~V & 5760 & 1.037 & 0.967 & 4.392 \\\\ \n G4~V & 5708 & 0.993 & 0.940 & 4.417 \\\\ \n G5~V & 5657 & 0.950 & 0.914 & 4.443 \\\\ \n G6~V & 5603 & 0.937 & 0.888 & 4.443 \\\\ \n G7~V & 5546 & 0.923 & 0.863 & 4.443 \\\\ \n G8~V & 5486 & 0.910 & 0.838 & 4.443 \\\\ \n G9~V & 5388 & 0.870 & 0.814 & 4.469 \\\\ \n K0~V & 5282 & 0.830 & 0.790 & 4.497 \\\\ \n K1~V & 5169 & 0.790 & 0.766 & 4.527 \\\\ \n K2~V & 5055 & 0.750 & 0.742 & 4.558 \\\\ \n K3~V & 4973 & 0.730 & 0.718 & 4.567 \\\\ \n K4~V & 4730 & 0.685 & 0.694 & 4.608 \\\\ \n K5~V & 4487 & 0.640 & 0.670 & 4.651 \\\\ \n K6~V & 4294 & 0.601 & 0.643 & 4.689 \\\\ \n K7~V & 4133 & 0.565 & 0.614 & 4.722 \\\\ \n K8~V & 4006 & 0.533 & 0.582 & 4.749 \\\\ \n K9~V & 3911 & 0.505 & 0.547 & 4.770 \\\\ \n M0~V & 
3850 & 0.480 & 0.510 & 4.783 \\\\ \n\noalign{\vspace{0.03in}}\n\hline\n\end{tabular}\n\end{table}\n\n\section{Methods}\n\n\subsection{Comments on the theoretical main-sequence stars}\n\nStellar parameters for theoretical main-sequence stars have been\ndeduced by \cite{gray05}; see his Table B.1. His values,\nnotably $T_{\rm eff}$ and $\log g$, serve as the basis for the\npresent study. We also improved the accuracy of the $\log g$\nvalues if more accurate values for the stellar masses and\nstellar radii were given. For stellar spectral types with\nno data given, we calculated those data using biparabolic\ninterpolation. The stellar data are summarized in Table~1.\n\nAnother set of spectral models has been constructed by\nR.~L.~Kurucz and collaborators. They take into account millions\nor hundreds of millions of lines for a large array of atoms and molecules;\nsee, e.g., \cite{cast04} and \cite{kuru05} for technical details.\nThese models indicate very similar effective temperatures compared\nto the models by \cite{gray05} for most types of stars. However, for\nstellar spectral types of K5~V and later, the effective\ntemperatures indicated by R.~L.~Kurucz are consistently lower; the\ndifference amounts to nearly 300~K for spectral type M0~V.\nTherefore, we assumed average values between the models by\nD.~F. 
Gray and R.~L.~Kurucz for stars of spectral types\nK5~V to M0~V in the following.\n\n\begin{figure}\n \begin{minipage}[t]{0.50 \textwidth}\n \includegraphics[width=0.999 \textwidth]{FC_fig2.eps}\n \end{minipage}\n\caption{Root mean square turbulent velocities at the squeezing point ($\tau_{5000}=1$)\nfor the set of theoretical main-sequence stars for different values of the\nmixing-length parameter $\alpha$.\n\label{fig2}}\n\end{figure}\n\n\begin{figure}\n \begin{minipage}[t]{0.50 \textwidth}\n \includegraphics[width=0.999 \textwidth]{FC_fig3.eps}\n \end{minipage}\n\caption{\nPower spectra of the instantaneous wave energy flux in\nflux tubes with $B\/B_{\rm eq} = 0.85$ for\nF5~V, G5~V, K5~V, and M0~V stars (from top to bottom)\nas a function of circular frequency $\omega$. The\nmixing-length parameter is assumed as $\alpha = 2$.\n\label{fig3}}\n\end{figure}\n\n\begin{table}\n\caption{Wave energy flux$^a$ for different parameters $\alpha$ and $\eta$}\n\centering\n\vspace{0.05in}\n\vspace{0.05in}\n\begin{tabular}{l c c c c}\n\hline\n\hline\n\noalign{\vspace{0.03in}}\nSp.Type & $T_{\rm eff}$ & $F_{\rm LTW}$ & $F_{\rm LTW}$ & $F_{\rm LTW}$ \\\\\n... & (K) & ... & ... & ... \\\\\n\noalign{\vspace{0.03in}}\n\hline\n\noalign{\vspace{0.03in}}\n... & ... 
& $\\eta = 0.75$ & $\\eta = 0.85$ & $\\eta = 0.95$ \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\n\\multicolumn{2}{c}{$\\alpha = 1.5$} & & \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\nF5~V\t& 6528 &\t6.03E8 & 4.05E8 & 9.41E7 \\\\\nF8~V & 6160 & 5.59E8 & 3.10E8 & 9.14E7 \\\\\nG0~V\t& 5943 &\t4.75E8 & 2.94E8 & 7.91E7 \\\\\nG2~V\t& 5811 &\t4.50E8 & 2.62E8 & 7.91E7 \\\\\nG5~V\t& 5657 &\t3.56E8 & 2.20E8 & 6.53E7 \\\\\nG8~V\t& 5486 & 3.27E8 & 1.98E8 & 5.85E7 \\\\\nK0~V & 5282 &\t2.54E8 & 1.56E8 & 4.77E7 \\\\\nK2~V\t& 5055 &\t1.92E8 & 1.10E8 & 3.42E7 \\\\\nK5~V\t& 4487 &\t1.14E8 & 6.83E7 & 2.12E7 \\\\\nK8~V\t& 4006 &\t4.47E6 & 2.87E6 & 9.82E5 \\\\\nM0~V\t& 3850 &\t2.26E6 & 1.33E6 & 5.36E5 \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\n\\multicolumn{2}{c}{$\\alpha = 1.8$} & & \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\nF5~V\t& 6528 &\t8.99E8 & 5.00E8 & 1.25E8 \\\\\nF8~V & 6160 & 7.51E8 & 4.62E8 & 1.15E8 \\\\\nG0~V\t& 5943 &\t5.69E8 & 3.40E8 & 9.73E7 \\\\\nG2~V\t& 5811 &\t5.63E8 & 3.02E8 & 9.05E7 \\\\\nG5~V\t& 5657 &\t5.29E8 & 2.78E8 & 7.96E7 \\\\\nG8~V\t& 5486 & 3.98E8 & 2.37E8 & 7.32E7 \\\\\nK0~V & 5282 &\t2.93E8 & 1.82E8 & 5.60E7 \\\\\nK2~V\t& 5055 &\t2.69E8 & 1.63E8 & 4.72E7 \\\\\nK5~V\t& 4487 &\t1.30E8 & 8.15E7 & 2.50E7 \\\\\nK8~V\t& 4006 &\t5.36E6 & 3.50E6 & 1.24E6 \\\\\nM0~V\t& 3850 &\t2.28E6 & 1.44E6 & 6.49E5 \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\n\\multicolumn{2}{c}{$\\alpha = 2.0$} & & \\\\\n\\noalign{\\vspace{0.03in}}\n\\hline\n\\noalign{\\vspace{0.03in}}\nF5~V\t& 6528 &\t1.12E9 & 6.07E8 & 1.47E8 \\\\\nF8~V & 6160 &\t8.51E8 & 4.92E8 & 1.34E8 \\\\\nG0~V\t& 5943 &\t6.77E8 & 3.96E8 & 1.11E8 \\\\\nG2~V\t& 5811 &\t6.35E8 & 3.65E8 & 1.06E8 \\\\\nG5~V\t& 5657 &\t5.81E8 & 3.35E8 & 1.01E8 \\\\\nG8~V\t& 5486 &\t4.79E8 & 2.86E8 & 8.05E7 \\\\\nK0~V & 5282 &\t3.39E8 & 1.95E8 & 6.39E7 \\\\\nK2~V\t& 5055 &\t2.98E8 & 1.86E8 & 5.81E7 
\\\\\nK5~V\t& 4487 &\t1.44E8 & 9.19E7 & 2.86E7 \\\\\nK8~V\t& 4006 &\t6.79E6 & 4.37E6 & 1.57E6 \\\\\nM0~V\t& 3850 &\t3.32E6 & 2.11E6 & 7.62E5 \\\\\n\noalign{\vspace{0.03in}}\n\hline\n\end{tabular}\n\vspace{0.05in}\n\begin{list}{}{}\n\item[]$^a$The unit of $F_{\rm LTW}$ is erg~cm$^{-2}$~s$^{-1}$.\n\end{list}\n\end{table}\n\n\subsection{Convective zone models and turbulent velocities}\n\nThe method for calculating wave energy fluxes carried by \nlongitudinal tube waves adopted in the present paper has been\ndescribed in detail by \cite{ulms01}.\nThus, only a brief discussion is necessary in\nthe following. In the solar application, it is possible to\nselect many model parameters and characteristic values directly\nfrom observations. However, for stars other than the Sun such\ndata are mostly unavailable. Therefore, we need to discuss in\nsome detail the physical reasoning behind our choice of relevant\nparameters used in our calculations.\n\nIn the current approach, the magnetic flux tubes are embedded\nin nonmagnetized photospheric convection zones (see Fig.~1).\nConsidering that the\ninteraction between the flux tubes and the convective\nturbulence is the driving mechanism for the generation of\nlongitudinal tube waves, among other waves, models of the\nstellar convection zones are required. Guided by previous\nstudies, it is assumed that the squeezing of the tube\nis symmetric with respect to the tube axis. \nThe computed pressure fluctuations are subsequently\ntranslated into gas pressure and magnetic field fluctuations\ninside the tube assuming horizontal pressure balance.\nFinally, the internal velocity perturbation resulting\nfrom the internal pressure fluctuation is calculated. \nThis internal velocity serves as a boundary condition \nin the numerical simulation of the generation of the\nlongitudinal tube waves. 
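\nTo make the pressure translation step concrete, a minimal sketch of the underlying relations (standard thin flux tube theory, written in Gaussian units) reads as follows. Horizontal pressure balance between the tube interior and exterior requires\n\begin{equation*}\np_i + \frac{B^2}{8\pi} = p_e ,\n\end{equation*}\nand linearizing in the fluctuations gives\n\begin{equation*}\n\delta p_i + \frac{B \, \delta B}{4\pi} = \delta p_e ,\n\end{equation*}\nso that an external squeezing $\delta p_e > 0$ is shared between a compression of the tube gas and an enhancement of the field strength. The equipartition field strength $B_{\rm eq}$, defined by $B_{\rm eq}^2\/(8\pi) = p_e$, thereby sets the natural scale for the tube field strength, which enters through the ratio $B\/B_{\rm eq}$.\n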
\n\nBoth numerical simulations of stellar convection and mixing length \nmodels show that the maximum convective velocities occur at optical \ndepths of $\\tau_{5000}\\approx 10$ to 100. For example, \\cite{stef93}\nfound in his time-dependent solar numerical convection calculations \nthat maximum convective velocities $v_{\\rm CMax} \\simeq 2.8$ km~s$^{-1}$\nare reached at $\\tau_{5000} \\approx 50$ and that these values can be \nreproduced via mixing length theory with a mixing length parameter \nof $\\alpha \\simeq 2$. The value $\\alpha = 2$ is furthermore\nindicated by time-dependent hydrodynamic simulations of stellar convection\nfor stars other than the Sun \\citep{tram97} as well as by a \ncareful fitting of evolutionary tracks of the Sun with its present \nluminosity, effective temperature and age \\citep{schro96}. \n\nNevertheless, there is still some debate about the most appropriate\nvalue of $\\alpha$. \\cite{nord90} originally pursued\ndetailed numerical simulations based on 3-D hydrodynamics coupled with\n3-D non-grey radiative transfer for stars similar to Procyon (F5~IV-V),\n$\\alpha$ Cen~A (G2~V), $\\beta$~Hyi (G2~IV), and $\\alpha$ Cen~B (K1~V).\nThey concluded a mixing-length parameter of $\\alpha = 1.5$ (or\nslightly higher), although the mixing length concept appeared to be\nproblematic at photospheric heights. A mixing length of 1.5 was also\nused by \\cite{cunt99} in their two-component theoretical chromosphere\nmodels for K2~V stars with different levels of magnetic activity.\nEven though the deduced relationship between the Ca~II~H+K emission\nand the stellar rotation rate was found to be largely consistent with the\nobserved relationship, the agreement could probably be improved if\na somewhat higher longitudinal wave energy flux, corresponding to a\nslightly larger mixing length parameter, was adopted. 
Recently,\n\cite{stei09a,stei09b} pursued updated state-of-the-art simulations\nof the solar convection zone, extending toward the scale of supergranules,\nwhich indicate a mixing-length parameter of $\alpha \simeq 1.8$.\nFor these reasons, we will calculate a set of models concerning\nwave energy generation of longitudinal tube waves for a set of\n$\alpha$ values, namely $\alpha$ = 1.5, 1.8, and 2.0.\n\n\subsection{Computation of stellar magnetic flux tube models}\n\nOur treatment of stellar convection associated with the construction\nof stellar flux tube models is akin to that described by\n\cite{ulms96}. In this approach, information is needed\nabout velocities of turbulent motions in the overshooting layer near\nthe stellar surface, where the squeezing of the magnetic flux tube\nis assumed to occur. \nSteffen's numerical calculations show that the rms velocities decrease \ntoward the solar surface and reach a plateau in the overshooting layer. \nBetween $\tau_{5000}=1$ and $10^{-4}$, he finds values of $v_{\rm rms} = \n1.4$ km~s$^{-1}$, which are essentially independent of height.\n\cite{ulms98} adopted for the Sun a variety of observed rms velocity \namplitudes $u_t$ in the range $0.9$ \n\nFor $t > 0$,\nthe integer part~$\floor{t} \in \mathbb{N}_0$ and the fractional part~$\sawtooth{t} \in [0, 1)$ of~$t$ \nare uniquely determined by $t = \floor{t} + \sawtooth{t}$.\nFor a multi-index $\alpha \in \mathbb{N}_0^n$ we set $|\alpha| := \sum_{i=1}^n \alpha_i$.\n\nA subset $D \subset \mathbb{R}^n$ is called a \emph{domain}\nif it is nonempty, open, and connected.\nIf $D$ is a domain, we define for $t \in \mathbb{N}_0$\n \begin{equation*}\n \|f\|_{\bar{C}^t(D)}\n := \sum_{|\alpha|\le t} \sup_{x \in D} |\partial^\alpha f(x)|,\n \end{equation*}\nwhere $\partial^\alpha := \partial^{|\alpha|}\/(\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n})$\nis the classical partial derivative,\nand for noninteger $t > 0$\n \begin{equation*}\n 
\\norm{f}_{\\bar{C}^t(D)}\n := \n \\norm{f}_{\\bar{C}^{\\floor{t}}(D)}\n\t+ \n\t\\sum_{|\\alpha| \\le \\floor{t}}\n\t\\sup_{\\substack{x,y \\in D \\\\ x \\neq y}}\n\t\\frac{|\\partial^\\alpha f(x) - \\partial^\\alpha f(y)|}{|x-y|^{\\sawtooth{t}}}.\n \\end{equation*}\n %\n %\nFor $t > 0$ we define the \\emph{H\\\"older spaces}\n \\begin{equation*}\n \\bar{C}^t(D)\n := \n \\left\\{ \n f : D \\rightarrow \\mathbb{R} ;\n f\n \\text{ is $\\floor{t}$ times continuously differentiable and }\n \\|f\\|_{\\bar{C}^t(D)} < + \\infty\n \\right\\}\n .\n \\end{equation*}\n\nThe \\emph{Lebesgue space~$L^p(D)$}, $p \\in [1, \\infty)$, comprises \nall measurable functions $u:D \\rightarrow \\mathbb{R}$ \nfor which\n$\n\t\\|u\\|_{L^p(D)}^p\n\t:= \\int_D |u(x)|^p \\, dx\n$\nis finite.\nFunctions that are equal for almost every $x \\in D$ are identified.\nFor $p \\in [1, \\infty)$ and $k \\in \\mathbb{N}$,\nthe \\emph{Sobolev space $W^k_p(D)$} is defined by\n \\begin{equation*}\n W^k_p(D)\n := \\{ u \\in L^p(D) : \\partial^\\alpha u \\in L^p(D) \\text{ for all } 0 \\le |\\alpha| \\le k\\},\n \\end{equation*}\n where $\\partial^\\alpha$ denotes the distributional partial derivative.\n %\n Equipped with the norm $\\|\\cdot\\|_{W^k_p(D)}$ given by\n \\begin{equation*}\n \\|u\\|_{W^k_p(D)}^p\n := \\sum_{0 \\le |\\alpha| \\le k} \\|\\partial^\\alpha u\\|_{L^p(D)}^p,\n \\end{equation*}\nit becomes a Banach space (see e.g.~\\cite[Thm.~3.3]{Adams2003}).\n\nFinally, we extend the definition of Sobolev spaces to nonintegers for bounded domains $D$ of cone type\nfollowing\n\\cite[Def.~4.2.3]{Triebel1978}.\nAs noted there,\nexamples of bounded domains of cone type \ninclude\nopen cubes\nand\nbounded domains with a smooth (or $C^1$) boundary.\n\n\\begin{definition}\n\tA bounded domain~$D$ is said to be of \\emph{cone type}\n\tif there exist \n\tdomains $U_1, \\ldots, U_m$,\n\tand cones $C_1, \\ldots, C_m$,\n\twhich may be carried over by rotations into the cone of height~$h$\n\t\\[\n\t\tK_h := \\{ 
x = (x',x_n) \in \mathbb{R}^n : 0 < x_n < h, |x'| < a x_n\}\n\t\]\n\twith fixed $a>0$ such that \n\t$\partial D \subset \bigcup_{j=1}^m U_j$\n\tand \n\t$(U_j \cap D) + C_j \subset D$ for all $j = 1, \ldots, m$.\n\end{definition}\n\n\nThe Sobolev spaces of fractional smoothness \nare obtained by setting\n \begin{equation*}\n W^s_p(D) :=\n \{\n\t\t\tu \in W^{\floor{s}}_p(D)\n\t\t\t:\n\t\t\t\|u\|_{W^s_p(D)} < \infty\n\t\t\},\n \end{equation*}\nwhere\n\begin{equation*}\n \|u\|^p_{W^s_p(D)}\n := \|u\|_{L^p(D)}^p\n + \sum_{|\alpha| = \floor{s}} \int_{D \times D} \frac{|\partial^\alpha u(x) - \partial^\alpha u(y)|^p}{|x-y|^{n+\sawtooth{s}p}} \, dx \, dy.\n\end{equation*}\nThe defined norm~$\|\cdot\|_{W^s_p(D)}$ is equivalent to the norm induced by the real method of interpolation by Remark~4.4.2\/2 in~\cite{Triebel1978}. Together with \cite[Theorem~4.6.1(e)]{Triebel1978}, we deduce the following theorem.\n\begin{theorem} \label{t:Wsp-Ct}\n\tLet $D \subset \mathbb{R}^n$ be a bounded domain of cone type, \n\t$1 < p < \infty$, and $t \ge 0$. 
\n\t%\n\tThen for all $s > t + n\/p$ one has a continuous embedding\n\t$\n\t\tW^s_p(D) \\hookrightarrow \\bar{C}^t(D).\n\t$\n\tThe embedding is still valid for $s = t + n\/p$ if $t \\notin \\mathbb{N}_0$.\n\\end{theorem}\n\n\n\nWe generalize the spaces of H\\\"older continuous and differentiable functions~$\\bar{C}^t$ to manifolds by imposing these properties on charts.\nBefore doing so, \nwe recapitulate the necessary geometric definitions and properties.\nFor more details on manifolds, we refer the reader to \ne.g.~\\cite{Klingenberg1995,Lang1999,Lee2009,W87}.\n\n\nIf $A \\subset \\mathbb{R}^n$ is any subset, $m \\in \\mathbb{N}$, and $k \\in \\mathbb{N} \\cup \\{ 0, \\infty \\}$,\nthen $f : A \\to \\mathbb{R}^m$\nis \\emph{$k$ times continuously differentiable}\nor \\emph{of class $C^k$}\nif for every $x \\in A$\nthere exists an open $\\mathcal{O}_x \\subset \\mathbb{R}^n$ containing $x$\nand\n$g : \\mathcal{O}_x \\to \\mathbb{R}^m$ of class $C^k$\nthat coincides with $f$ on $A$.\nSuch $f$ are collected in $C^k(A; \\mathbb{R}^m)$.\nLooking ahead,\nin order to avoid technicalities\nwe will only consider \nmanifolds \n\\emph{without manifold boundary},\nsuch as the Euclidean space or a sphere therein.\nThe first step \nis the definition of an atlas.\n\n\\begin{definition}\n\tLet $M$ be a set, $r \\in \\mathbb{N} \\cup \\{ 0, \\infty \\}$, and $n \\in \\mathbb{N}$.\n\t%\n\tA \\emph{$C^r$ $n$-atlas~$\\mathcal{A}$ on $M$}\n\tis a collection of \n\t\\emph{charts} $(U_i, \\varphi_i)$, $i \\in I$,\n\tindexed by an arbitrary set $I$,\n\tsatisfying the following:\n\t\\begin{enumerate}\n\t\\item\n\t\t$U_i \\subset M$ and $\\bigcup_{i \\in I} U_i = M$,\n\t\\item\n\t\t$\\varphi_i : U_i \\to \\varphi_i(U_i) \\subset \\mathbb{R}^n$\n\t\tis a bijection\n\t\tand for any $i, j \\in I$,\n\t\t$\\varphi_i(U_i \\cap U_j)$ is open in $\\mathbb{R}^n$,\n\t\\item\n\t\t$\n\t\t\t\\varphi_i \\circ \\varphi_j^{-1} : \n\t\t\t\\varphi_j(U_i \\cap U_j) \\to \\varphi_i(U_i \\cap 
U_j)\n\t\t$\n\t\tis a $C^r$ diffeomorphism\n\t\tfor any $i, j \\in I$.\n\t\\end{enumerate}\n\\end{definition}\n\nIn the following, \nwe omit the dimension~$n$ \nin reference to an atlas.\nTwo $C^r$ atlases on a set~$M$\nare called equivalent\nif \ntheir union\nis again a $C^r$ atlas on $M$.\nThis indeed defines an equivalence relation\non the $C^r$ atlases on $M$.\nThe union\nof all atlases in\nsuch an equivalence class\nis again an atlas in the equivalence class,\ncalled the \\emph{maximal $C^r$ atlas}.\nThe topology on~$M$ induced by any maximal $C^r$ atlas\nis the empty set together with arbitrary unions of \nits chart domains.\n\n\t\n\\begin{definition}\n\tLet $n \\in \\mathbb{N}$ and $r \\in \\mathbb{N} \\cup \\{ 0, \\infty \\}$.\n\t%\n\tA \\emph{$C^r$ $n$-manifold $M$}\n\tis a set $M \\neq \\emptyset$ \n\ttogether with a maximal $C^r$ atlas $\\mathcal{A}(M)$\n\tsuch that the induced topology\n\tis Hausdorff and paracompact.\n\\end{definition}\n\nRecall that in a Hausdorff topological space\ndistinct points have disjoint open neighborhoods,\nand a topological space\nis called paracompact\nif\nevery open cover admits\na locally finite open cover \n(i.e., for any point, there is an open neighborhood \nwhich intersects only finitely many members of the collection)\nwhich\nrefines the original cover.\nUsually, \nthe maximal atlas is not mentioned explicitly\nand \n$M$ is understood to be equipped with the induced topology.\nWe will say ``a chart on $M$'' to refer to a chart\nin $\\mathcal{A}(M)$.\nFurther, we say that an atlas $\\mathcal{A}$ on $M$\nis an ``atlas for $M$''\nif it is equivalent to $\\mathcal{A}(M)$.\nAny open subset $U \\subset M$ canonically inherits\nthe manifold structure of $M$.\n\n\n\\begin{definition}\n\tLet $M$ be a $C^r$ $n$-manifold\n\tand\n\t$k \\geq 0$ an integer.\n\t%\n\tA function $f : M \\to \\mathbb{R}$\n\tis said to be of class $C^k$,\n\tdenoted by $f \\in C^k(M)$,\n\tif $f \\circ \\varphi^{-1} \\in C^k( \\varphi(U) )$\n\tfor every 
chart $(U, \\varphi)$ on~$M$.\n\t%\n\tThe \\emph{support} $\\mathop{\\mathrm{supp}} f$ of $f \\in C^0(M)$ is\n\tthe closure of the set $\\{ x \\in M : f(x) \\neq 0 \\}$.\n\n\tFor any $t \\geq 0$,\n\ta function $f : M \\to \\mathbb{R}$ is said to be \n\t\\emph{continuous}\n\t(\\emph{locally of class $\\bar{C}^t$})\n\tif\n\tfor any $x \\in M$\n\tthere exists an open connected subset $V \\subset M$,\n\t$x \\in V$,\n\tsuch that\n\tfor any chart $(U, \\varphi)$\n\twith\n\t$U \\subset V$,\n\tthe composite function\n\t$f \\circ \\varphi^{-1} : \\varphi(U) \\to \\mathbb{R}$\nis continuous\n(of class $\\bar{C}^t$).\n\n\\end{definition}\n\nA useful technical device\nis the partition of unity\ndefined next.\n\n\\begin{definition}\n\tLet $M$ be a $C^r$ $n$-manifold\n\t%\n\tand let $\\mathcal{U} = \\{ U_i \\}_{i \\in I}$ be an open cover of $M$.\n\t%\n\tA \\emph{$C^r$ partition of unity subordinate to $\\mathcal{U}$}\n\tis \n\ta collection\n\t$\\{ \\psi_i \\}_{i \\in I} \\subset C^r(M)$\n\tsuch that\n\t\\begin{enumerate}\n\t\\item\n\t\t$0 \\leq \\psi_i(x) \\leq 1$ for all $i \\in I$ and $x \\in M$,\n\t\\item\n\t\tthere exists\n\t\ta locally finite open cover\n\t\t$\\{ V_i \\}_{i \\in I}$ of~$M$\n\t\twith\n\t\t$\\mathop{\\mathrm{supp}} \\psi_i \\subset V_i \\cap U_i$,\n\t\t%\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\item\n\t\t$\\sum_{i \\in I} \\psi_i(x) = 1$ for all $x \\in M$\n\t\t(where the sum is finite by the previous assertion).\n\t\\end{enumerate}\n\\end{definition}\n\nThe assumed paracompactness of $M$\nimplies\nthe existence of such partitions of unity,\nsee \n\\cite[Chapter II, Corollary 3.8]{Lang1999}\nor\n\\cite[Theorem 1.73]{Lee2009}, which is stated in the following proposition.\n\n\\begin{proposition}\\label{p:partof1}\n\tLet $M$ be a $C^r$ $n$-manifold.\n\t%\n\tLet $\\mathcal{U} = \\{ U_i \\}_{i \\in I}$ be an open cover of $M$.\n\t%\n\tThen there exists a $C^r$ partition of unity\n\tsubordinate to $\\mathcal{U}$.\n\\end{proposition}\n\n\n\n\n\n\nWe close the preparatory 
section\nby introducing\nrandom fields on manifolds.\nRandom fields on domains are defined accordingly. \nIn what follows, let $(\\Omega, \\mathcal{F}, P)$ be a probability space.\n\\begin{definition}\nLet $M$ be a $C^r$ $n$-manifold and let $\\mathcal{B}(M)$ denote its Borel $\\sigma$-algebra.\nA mapping $X:\\Omega \\times M \\rightarrow \\mathbb{R}$ \nthat is $(\\mathcal{F} \\otimes \\mathcal{B}(M))$-measurable \nis called a \\emph{(real-valued) random field} on the manifold~$M$.\nA random field $Y$ is \na \\emph{modification} of \na random field $X$ if $P(X(x) = Y(x)) = 1$ for all $x \\in M$.\nFor any $t \\geq 0$,\na random field $X$ on $M$ is said to be \n\\emph{continuous} (\\emph{locally of class $\\bar{C}^t$})\nif\n$X(\\omega)$ is continuous (locally of class $\\bar{C}^t$)\nfor all $\\omega \\in \\Omega$.\n\\end{definition}\nWe note that \nif $M$ is endowed with a metric \n(say, given by a Riemannian metric),\nand the resulting metric space is separable and locally compact,\nthen\nmeasurability of $X(x)$ for all $x \\in M$\nand\ncontinuity in probability of $X$\nimply\n$(\\mathcal{F} \\otimes \\mathcal{B}(M))$-measurability of $X$\n(cf.~\\cite{P09_1}).\n\n\\section{H\\\"older continuity and differentiability of random fields}\\label{s:main}\n\nThis section\ncontains our main results\non \nH\\\"older continuity and differentiability of random fields.\nWe begin by considering random fields on domains of cone type. \nAs indicated in the introduction,\nresults on sample H\\\"older continuity \non different types of domains\nare well-known\n(see, e.g., \\cite{A10,DPZ92,P09,K02}), \nbut\nsample differentiability has not been of main interest so far\n(see, however, \\cite{AT07,P10} for the available results). 
\nWe prove sample \nH\\\"older continuity and differentiability properties\nin Theorem~\\ref{t:KC-D}\nby revisiting\nthe approach of~\\cite[Proof of Theorem 3.4]{DPZ92}\nvia \nthe Sobolev embedding theorem.\nWe then address\nsample H\\\"older and differentiability properties \nof random fields on manifolds\nin Theorem~\\ref{t:KC-M}.\n\n\n\nWe now state our \nversion of the Kolmogorov--Chentsov\\xspace theorem on domains of cone type.\n\n\\begin{theorem}\n\\label{t:KC-D}\n Let $D \\subset \\mathbb{R}^n$ be a bounded domain of cone type \n and\n let\n $X : \\Omega \\times D \\to \\mathbb{R}$ be a random field on~$D$.\n %\n Assume that there exist $d \\in \\mathbb{N}_0$, $p>1$, $\\epsilon \\in (0, p]$, and $C>0$ such that\n the weak derivatives $\\partial^\\alpha X$ are in $L^p(\\Omega \\times D)$\n and\n \\begin{equation}\n \\label{e:t:KC-D:E}\n \\E( |\\partial^\\alpha X(x) - \\partial^\\alpha X(y)|^p)\n \\le C \\, |x-y|^{n + \\epsilon}\n \\end{equation}\n for all $x,y \\in D$ and\n any multi-index $\\alpha \\in \\mathbb{N}_0^n$ with $|\\alpha| \\leq d$.\n Then $X$ has a modification that is locally of class $\\bar{C}^t$ for all $t < d + \\min\\{\\epsilon\/p,1-n\/p\\}$.\n\\end{theorem}\n\n\nWe remark that \n\\eqref{e:t:KC-D:E} with $\\epsilon > p$ for $\\alpha = 0$ would imply \nthat \nalmost every sample of \nthe random field is actually a constant function\n(cf.~\\cite[Proposition~2]{Br02}).\n\nThe proof is given in two steps.\nIn the following lemma\nwe first obtain a continuous modification of~$X$\nbased on~\\cite[Theorem~2.3.1]{K02},\nwhich we again denote by~$X$.\nIn the second step,\nwe prove Theorem~\\ref{t:KC-D}\nby\ninvoking the Sobolev embedding on~$X(\\omega)$\nfor all $\\omega \\in \\Omega$.\nSince $X(\\omega)$ is continuous for all $\\omega \\in \\Omega$,\nthis does not modify the random field,\nand there is no need to prove measurability\nof a modified field and that it is actually a modification.\nAlternatively,\none could use the last step of 
\\cite[Proof of Theorem 2]{Schilling2000-ExpoMath}. \n\nLet us start by showing the existence of a continuous modification.\n\n\\begin{lemma} \\label{l:K-D}\n\tUnder the assumptions of Theorem~\\ref{t:KC-D},\n\t$X$ admits a continuous modification.\n\\end{lemma}\n\n\\begin{proof}\n Observe that \n the\n domain~$D$ \n equipped with the usual Euclidean metric $|\\cdot - \\cdot|$\n is a totally bounded pseudometric space in the sense of~\\cite{K02}; \n indeed, its metric entropy \n $\\mathsf{D}(\\delta) := \\mathsf{D}(\\delta; {D}, |\\cdot - \\cdot|)$ \n is bounded by $\\mathsf{D}(\\delta) \\le \\tilde{C} \\delta^{-n}$ \n for all $\\delta > 0$ and some constant $\\tilde{C}>0$, \n since the domain ${D}$ \n can be embedded into an $n$-dimensional closed cube of finite diameter.\n %\n We set $\\Psi(r) := C r^{n+\\epsilon}$, \n where the constant~$C$ is provided by~\\eqref{e:t:KC-D:E},\n and $f(r) := r^{\\epsilon\/p}$ for $r \\ge 0$. \n %\n The integrals\n $\\int_0^1 r^{-1} f(r) \\, dr\n = p\/\\epsilon\n $\n as well as\n $\\int_0^1 \\mathsf{D}(r) \\Psi(2r) f(r)^{-p} \\, dr\n \\le C \\tilde{C} 2^{n+\\epsilon}\n $\n are finite.\n %\n Therefore, \\cite[Theorem~2.3.1]{K02} shows the existence of a continuous modification of~$X$.\n\\end{proof}\n\n\nHaving obtained a continuous modification, we are set to continue with the proof of Theorem~\\ref{t:KC-D}.\n\n\\begin{proof}[Proof of Theorem~\\ref{t:KC-D}]\n Assume without loss of generality\n that the random field $X$ is continuous\n (otherwise apply Lemma~\\ref{l:K-D}).\n Consider arbitrary $0<\\nu<\\min\\{(n+\\epsilon)\/p,1\\}$ and $\\alpha \\in \\mathbb{N}_0^n$ with $|\\alpha|=d$.\n Since\n \\begin{equation*}\n (\\omega,x,y) \\mapsto\n \\frac{|\\partial^\\alpha X(\\omega,x) - \\partial^\\alpha X(\\omega,y)|^p}{|x-y|^{n + \\nu p}}\n \\end{equation*}\n is $(\\mathcal{F} \\otimes \\mathcal{B}(D\\times D))$-measurable, \n we apply Fubini's theorem and hypothesis \\eqref{e:t:KC-D:E} to obtain\n \\begin{align*}\n\t \\E \\left(\n\t \\int_{D
\\times D} \\frac{|\\partial^\\alpha X(x)-\\partial^\\alpha X(y)|^p}{|x-y|^{n + \\nu p}} \\, dx \\, dy\n\t \\right)\n & = \\int_{D \\times D} \\frac{\\E(|\\partial^\\alpha X(x)-\\partial^\\alpha X(y)|^p)}{|x-y|^{n + \\nu p}} \\, dx \\, dy\n \\\\\n & \\le C \\int_{D \\times D} |x-y|^{n + \\epsilon - (n + \\nu p)} \\, dx \\, dy\n .\n \\end{align*}\n The last integral is finite due to $\\epsilon - \\nu p > -n$.\n %\n With the $L^p(\\Omega\\times D)$ integrability assumptions\n on $X$ and its derivatives of order $d$,\n this implies \n that\n \\begin{align*}\n \\E ( \\|X\\|_{W^{d+\\nu}_p(D)}^p )\n & = \\E( \\|X\\|_{L^p(D)}^p)\n + \n \\sum_{|\\alpha| = d}\n\t \\E \\left(\n\t \\int_{D \\times D} \n\t \\frac\n\t {|\\partial^\\alpha X(x)-\\partial^\\alpha X(y)|^p}\n\t {|x-y|^{n + \\nu p}}\n\t \\, dx \\, dy\n\t \\right)\n \\end{align*}\n is finite,\n and therefore there exists $\\Omega' \\in \\mathcal{F}$ with $P(\\Omega') = 1$ \nsuch that $X(\\omega) \\in W^{d+\\nu}_p(D)$ for all $\\omega \\in \\Omega'$. \nAs a continuous modification of~$X$, consider the random field\n$\\tilde{X} := \\mathds{1}_{\\Omega'} X$, where $\\mathds{1}_{\\Omega'}$ is the indicator function of~$\\Omega'$.\nBy the Sobolev embedding theorem~\\ref{t:Wsp-Ct}, \nwe get that $\\tilde{X}(\\omega) \\in \\bar{C}^t(D)$ for all \n$\\omega \\in \\Omega$ and all\n$t < d + \\nu - n\/p$.\nSince $0<\\nu<\\min\\{(n+\\epsilon)\/p,1\\}$\nwas arbitrary, the claim follows.\n\\end{proof}\n\nWe remark that $1-n\/p$ is positive only for $p > n$.\nIn the case that $n \\geq p$,\nTheorem~\\ref{t:KC-D} thus yields\na sample differentiability order $\\floor{t}$\nthat is strictly smaller than the assumed weak differentiability order $d$.\n\n\\begin{rem}\n\\label{r:weaker_differentiability_X}\n The assumptions in Theorem~\\ref{t:KC-D} can be weakened.
If $d \\neq 0$, it is sufficient that~\\eqref{e:t:KC-D:E} holds for $|\\alpha| = d$, and that $X$ has a continuous modification as provided by Theorem~2.3.1 in~\\cite{K02} under the assumption that \\eqref{e:t:KC-D:E} holds for $\\alpha = 0$ and some $\\epsilon > 0$.\n\\end{rem}\n\nWe apply Theorem~\\ref{t:KC-D} in the following example to a Brownian motion on the interval,\nrecovering\nthe classical \nproperty of H\\\"older continuity with exponent $\\gamma < 1\/2$.\n\n\\begin{example}\n\tIf $X$ is a Brownian motion on the interval~$[0,T] \\subset \\mathbb{R}^1$, $T< + \\infty$,\n\tthen Assumption~\\eqref{e:t:KC-D:E} is satisfied\n\tfor $\\alpha = 0$,\n\tany $p > 2$, and $\\epsilon = p\/2 - 1$.\n\t%\n\tThus\n\t$X$ admits a \n\tmodification that is locally of class $\\bar{C}^t$\n\tfor any \n\t$0 < t < \\sup_{p > 2} (p\/2 - 1) \/ p = 1\/2$,\n\twhich is the well-known result.\n\\end{example}\n\n\n\nWe are now ready to generalize Theorem~\\ref{t:KC-D}\nto random fields on manifolds.\n\n\\begin{theorem}\n\\label{t:KC-M}\n Let $M$ be a $C^r$ $n$-manifold, $r > 0$,\n and let $X : \\Omega \\times M \\to \\mathbb{R}$ be a random field on $M$.\n %\n Assume that there exist $d \\in \\mathbb{N}_0$, $p>1$, and $\\epsilon \\in (0,p]$ \n such that\n for any chart $(U, \\varphi)$ on $M$\n with bounded $\\varphi(U) \\subset \\mathbb{R}^n$,\n there exists $C_\\varphi > 0$\n such that\n the weak derivatives of\n $X_\\varphi := X \\circ \\varphi^{-1}$ satisfy \n $\\partial^\\alpha X_\\varphi \\in L^p(\\Omega \\times \\varphi(U))$ and\n \\begin{equation*}\n \\E( |\\partial^\\alpha X_\\varphi(x) - \\partial^\\alpha X_\\varphi(y)|^p)\n \\le\n C_\\varphi \\, |x-y|^{n + \\epsilon}\n \\end{equation*}\n for all $x,y \\in \\varphi(U)$ and\n any multi-index $\\alpha \\in \\mathbb{N}_0^n$ with $|\\alpha| \\leq d$.\n %\n %\n %\n Then $X$ has a modification that is locally of class $\\bar{C}^t$ for all $t < d + \\min\\{\\epsilon\/p,1-n\/p\\}$ with $t \\le
r$.\n\\end{theorem}\n\n\\begin{proof}\n\tTo obtain the continuous modification\n\twe first construct a locally finite atlas with \n\tcoordinate domains that are bounded and of cone type.\n\t%\n\tOn each of these charts, a modification of $X$\n\tis provided by Theorem~\\ref{t:KC-D}.\n\t%\n\tUsing a partition of unity we then \n\tpatch together a modification of~$X$\n\twith the desired properties.\n\t\n\tFor each $x \\in M$, let $(\\tilde{U}_x, \\tilde\\varphi_x)$ be \n\ta chart on $M$ with $x \\in \\tilde{U}_x$.\n\t%\n\tLet $D_x \\subset \\tilde\\varphi_x(\\tilde{U}_x)$ be \n\tan open ball of positive radius centered at $\\tilde\\varphi_x(x)$.\n\t%\n\tDefine $U_x := \\tilde\\varphi_x^{-1}(D_x)$ and $\\varphi_x := \\tilde\\varphi_x|_{U_x}$.\n\t%\n\tLet $\\mathcal{A} := \\{ (U_x, \\varphi_x) : x \\in M \\}$ be the resulting atlas for $M$,\n\twhich we will index\n\tby\n\t$\\Phi := \\{ \\varphi : (U, \\varphi) \\in \\mathcal{A} \\}$.\n\t\n\tNow, for each $(U_\\varphi, \\varphi) \\in \\mathcal{A}$,\n\tthe coordinate domain $\\varphi(U_\\varphi)$\n\tis a bounded domain with smooth boundary,\n\tin particular of cone type.\n\t%\n\tWith our assumptions on $X_\\varphi$,\n\tTheorem~\\ref{t:KC-D}\n\tprovides a modification\n\t$Y^\\varphi$ of the random field\n\t$X_\\varphi : \\Omega \\times \\varphi(U_\\varphi) \\to \\mathbb{R}$\n\ton $\\varphi(U_\\varphi)$,\n\twhich is \n\tlocally of class $\\bar{C}^t$\n\tfor any fixed $t < d + \\min\\{\\epsilon\/p,1-n\/p\\}$,\n\tfor each $\\varphi \\in \\Phi$.\n\t\n\tLet $\\{ \\psi_\\varphi \\}_{ \\varphi \\in \\Phi }$ \n\tbe a $C^r$ partition of unity\n\tsubordinate to $\\{ U_\\varphi \\}_{\\varphi \\in \\Phi}$,\n\twhich exists by Proposition~\\ref{p:partof1}.\n\t%\n\tDefine $Y : \\Omega \\times M \\to \\mathbb{R}$\n\tby\n\t$Y := \\sum_{\\varphi \\in \\Phi} \\psi_\\varphi Y^\\varphi \\circ \\varphi$.\n\t%\n\tSince the covering $\\{ \\mathop{\\mathrm{supp}} \\psi_\\varphi \\}_{ \\varphi \\in \\Phi }$\n\tis locally finite,\n\tthe sum is 
well-defined on a neighborhood of any $x \\in M$.\n\t%\n\tFurthermore, $Y$ is a random field on~$M$ because all $\\varphi \\in \\Phi$ are $C^r$ diffeomorphisms and therefore at least continuous.\n\t%\n\tMoreover, it is a modification of~$X$\n\tby the properties of the partition of unity.\n\t%\n\tOwing to the fact that $r \\geq t$,\n\tthe random field\n\t$Y$\n\tis locally of class $\\bar{C}^t$. \n\\end{proof}\n\nWe finish this section with two comments.\nFirst,\nsince we only used the assumptions on the random field on the charts to apply Theorem~\\ref{t:KC-D}, it is clear that Remark~\\ref{r:weaker_differentiability_X} carries over to Theorem~\\ref{t:KC-M}.\nSecond,\nfor an example of random fields on manifolds, we refer the reader to \\cite{LS13}.\nTherein,\nisotropic Gaussian random fields\non\nthe unit sphere in~$\\mathbb{R}^3$\nare considered\nand\nsample regularity is obtained by direct calculations.\n\n\\section*{Acknowledgment}\n\n\nThe work was supported in part by ERC AdG no.~247277. \nThe authors thank Sonja Cox, Sebastian Klein, Markus Knopf, J\\\"urgen Potthoff, and Christoph Schwab\nfor fruitful discussions and helpful comments,\nas well as Ren\\'e Schilling for pointing out reference~\\cite{Schilling2000-ExpoMath}. \nThe first author acknowledges the hospitality of \nthe Seminar for Applied Mathematics\nduring summer 2013.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nGalaxy clustering has traditionally been one of the most important ways to extract \ncosmological information. Galaxies are \nnot a faithful tracer of dark matter, as their clustering strength is biased relative to the dark matter. \nHowever, they are expected to follow the same gravitational \npotential as the dark matter and hence have the same velocities. This is not observable through \nangular clustering, which is only sensitive to correlations transverse to the line of sight. 
\nIt is however detectable in redshift surveys, because the redshift of the galaxy does not \nprovide information only on the radial distance, but also on the radial velocity through the Doppler shift. \nThis induces anisotropies in the clustering, which are generically called redshift space distortions (RSD) \n\\cite{1998ASSL..231..185H}. \nThey provide an opportunity to extract information on the dark matter clustering directly. \nOn large scales clustering of galaxies along the line of sight is enhanced\nrelative to the transverse direction due to peculiar motions and this allows\none to determine the ratio of logarithmic rate of growth $f$ to bias $b$ \\cite{1987MNRAS.227....1K}. Combining \nthe statistics from different lines of sight \none can eliminate the unknown bias and measure \ndirectly the logarithmic rate of growth times the amplitude. \n\nIt has been argued that using RSD information could greatly increase our knowledge of cosmological \nmodels, including tests of dark energy and general relativity \\cite{2009MNRAS.397.1348W,2009JCAP...10..007M,2011arXiv1104.3862B}. \nGalaxy clustering has clear \nadvantages over the alternatives such as weak lensing: it is intrinsically 3-dimensional, thus providing \nbetter statistics, and it has high signal to noise. \nWhile most of the predictions in the literature are model dependent, a generic statement can be made that if systematic\neffects were perfectly understood RSD would be one of the most powerful techniques for such studies. \nThe main problem with RSD is that nonlinear velocity effects extend to rather large scales and \ngive rise to a scale dependent and angular dependent clustering signal. 
It is easy to see these effects in any \nreal redshift survey: one sees elongated features along the line of sight, \ncalled the fingers-of-god (FoG) effect, \nwhich are caused by random velocities inside virialized objects such as \nclusters, which scatter galaxies \nalong the radial direction in redshift space, even if they have a localized \nspatial position in real space. \nThis is just an extreme example and other related effects, such as nonlinear infall streaming \nmotions, also cause nonlinear corrections. \nThis means that one needs to understand these and separate them from the nonlinear evolution of the dark \nmatter and from the nonlinear relation between the galaxies and the dark matter, both of which also give rise to \na scale dependent bias \\cite{1998MNRAS.301..797H}. \n\nSeveral recent studies have investigated these nonlinear effects \n\\cite{2004PhRvD..70h3007S,2007MNRAS.374..477T,2010PhRvD..82f3522T,2011MNRAS.410.2081J,2011arXiv1103.3614T,2011arXiv1105.4165R,2011arXiv1105.5007S}, \nsome limiting the analysis \nto dark matter only and some also including galaxies or halos. The common denominator of these studies is \nthat they are based on various ansatzes for the scale and angular dependence of RSD, typically \ncombined with some perturbation theory analysis. This has the advantage of having just a few free parameters, so \nthat if the ansatz is accurate one can model the effects accurately. The reverse is also true and the problem is \nthat it is difficult to make general statements regarding the range of validity for any given model. \nThere is another problem connected to perturbation theory, in that the usual perturbation theory makes a single stream\napproximation, which we know breaks down on small scales inside the virialized halos \n(indeed, FoG are a manifestation of multi-streaming on small scales). 
\n\nIn this paper we \npresent a different approach to RSD: we use a distribution function approach to \nshow that one can make a series expansion of RSD, which is convergent on \nsufficiently large scales \nand we derive the most general form of RSD correlator allowed by the\nsymmetries. In this paper we present the formal derivations and conceptual implications, \nreserving all the applications to future work. \nThe structure of this paper is as follows: in section 2 we develop the distribution function approach\nto RSD and derive the helicity decomposition. In section 3 we discuss the power spectra, and use rotation symmetries to derive the most general \nform of the RSD correlator. We also discuss the lowest order contributions\nand connect them to physical quantities such as density, momentum, stress energy tensor etc. \nThis is followed by a discussion in section 4. \n\n\n\\section{Redshift-space distortions from the distribution function}\n\\label{sec:theory}\nThe exact evolution of collisionless particles is described by the\nVlasov equation \\cite{1980lssu.book.....P}. Following the discussion by\n\\cite{2011JCAP...04..032M}, we start from the distribution function of\nparticles $f(\\mathbf{x},\\mathbf{q},\\tau)$ at a phase-space position $(\\mathbf{x},\\mathbf{q})$ and \nat conformal time $\\tau$ in order\nto derive the perturbative redshift-space distortions. Here $\\mathbf{x}$ is the \ncomoving position and $\\mathbf{q}=\\mathbf{p}\/a=m\\mathbf{u}$ is the comoving momentum, where \n$\\mathbf{u}=d\\mathbf{x}\/d\\tau$. \nIn the following we will omit the time dependence, i.e we will \nwrite $f(\\mathbf{x},\\mathbf{q})$. 
\nThe density\nfield in real space is obtained by averaging the distribution function\nover momentum:\n\\begin{equation}\n \\rho\\left(\\mathbf{x}\\right)\\equiv m_p \\int d^3 \\mathbf{q} ~f\\left(\\mathbf{x},\\mathbf{q}\\right),\n\\end{equation}\nwhere $m_p$ is the particle mass\nand $a=1\/(1+z)$ is the scale factor ($z$ is the redshift).\nIn redshift space the position is distorted by peculiar\nvelocities, thus the comoving redshift-space coordinate for a particle is given\nby $\\mathbf{s}=\\mathbf{x}+\\hat{r}~u_\\parallel\/ \\mathcal{H}$, where $\\hat{r}$ is\nthe unit vector pointing along the observer's line of sight, $u_\\parallel$ is \nthe radial velocity, \n$m_pu_\\parallel=q_\\parallel = \\mathbf{q}\\cdot \\hat{r}$, and $\\mathcal{H}=aH$, where $H$ is the Hubble\nparameter. Then the mass density in redshift space is given by\n\\begin{equation}\n \\rho_s\\left(\\mathbf{s}\\right)=\n m_p~ \\int d^3\\mathbf{x}~ d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)\n \\delta^D\\left(\\mathbf{s}-\\mathbf{x}-\\hat{r}\\frac{u_\\parallel}{ \\mathcal{H}}\\right)=\n m_p~ \\int d^3\\mathbf{q}~ f\\left(\\mathbf{s}-\\hat{r}\n \\frac{u_\\parallel}{ \\mathcal{H}},\n \\mathbf{q}\\right)~. 
\\label{eq:rhos}\n\\end{equation}\nBy Fourier transforming equation \\ref{eq:rhos}, we find\n\\begin{eqnarray}\n \\rho_s\\left(\\mathbf{k}\\right)&=&\n m_p~ \\int d^3\\mathbf{x}~ d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)\n e^{i \\mathbf{k}\\cdot \\mathbf{x} + i k_\\parallel u_\\parallel\/ \n \\mathcal{H}} \\nonumber \\\\ &=&\n m_p~ \\int d^3\\mathbf{x}~ e^{i \\mathbf{k}\\cdot \\mathbf{x} } \n \\int d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)\n e^{i k_\\parallel u_\\parallel\/ \\mathcal{H}} ~, \\label{eq:rhok}\n\\end{eqnarray}\nwhere $\\mathbf{k}$ is the wavevector in redshift space, corresponding to the\nredshift-space coordinate $\\mathbf{s}$.\n\nNow we expand the second integral in equation \\ref{eq:rhok} as a Taylor series \nin $k_\\parallel u_\\parallel\/ \\mathcal{H}$,\n\\begin{eqnarray}\n m_p~ \\int d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)\n e^{i k_\\parallel u_\\parallel\/ \\mathcal{H}} \n &=&m_p~ \\int d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)\n \\sum_{L=0}\\frac{1}{L!} \\left(i k_\\parallel u_\\parallel\/ \\mathcal{H}\\right)^L\n \\nonumber \\\\&=&\\bar{\\rho}\\left[\\sum_{L=0}\\frac{1}{L!}\n \\left(\\frac{i k_\\parallel}{\\mathcal{H}}\\right)^L T_\\parallel^L(\\mathbf{x})\\right] ~,\n\\end{eqnarray}\nwhere $\\bar{\\rho}$ is the mean mass density and\n\\begin{equation}\n T_\\parallel^L(\\mathbf{x})={m_p \\over \\bar{\\rho}} \n ~ \\int d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right) \n u_\\parallel^L\n, \\label{eq:q_def}\n\\end{equation}\nwhere in the last expression the integral over phase space assures that the quantity \nis defined in terms of a\nsum over all particles at position $\\mathbf{x}$. 
\nFor a single stream at $\\mathbf{x}$ this is just \n$(1+\\delta(\\mathbf{x})) u_\\parallel^L(\\mathbf{x})$.\nThese\nare thus radial components of the moments of the distribution function and\nthe distribution function description allows for inclusion of both bulk velocities and\nmulti-streamed velocities. \nNote that these quantities are\nmass-weighted, and so well-defined for any system: one just sums \nover all the particles in the system weighting each one by the appropriate \npower of their radial velocities. If the field needs to be defined on a grid, \na simple assignment scheme of particles to the grid \nsuffices, and empty grid cells are assigned a value of 0. \nWe note this to contrast it with volume-weighted quantities, which need to \nbe \ndefined even if there are no particles assigned to a given grid cell, which is \noften impossible \nfor sparse biased tracers, especially in underdense regions. We return to this issue later. \n\n The Fourier component of the\ndensity fluctuation in redshift space is\n\\begin{equation}\n \\delta_s(\\mathbf{k})= \n \\sum_{L=0}^{\\infty}\\frac{1}{L!}\n \\left(\\frac{i k_\\parallel}{\\mathcal{H}}\\right)^L T_\\parallel^L(\\mathbf{k}) ~,\n \\label{eq:deltak}\n\\end{equation} \nwhere $T_\\parallel^L(\\mathbf{k})$ is the Fourier transform of $T_\\parallel^L(\\mathbf{x})$.\n\\begin{equation}\nT_\\parallel^L(\\mathbf{k})=\\int d^3\\mathbf{x}~ T_\\parallel^L(\\mathbf{x})\n e^{i \\mathbf{k}\\cdot \\mathbf{x}}.\n\\end{equation}\nFor $L=0$ we have\n$T_\\parallel^0(\\mathbf{k})=\\delta(\\mathbf{k})$, \nthe density fluctuation in real space.
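The mass-weighted moments above are straightforward to estimate from a particle set. The following is a minimal numerical sketch (illustrative only, with made-up inputs; it is not code from this work): particles are assigned to a grid with nearest-grid-point weighting and each contributes $u_\parallel^L$ to its cell, with empty cells left at 0, as described in the text.

```python
import numpy as np

# Illustrative sketch (not from this work): grid estimate of the
# mass-weighted radial-velocity moments T_parallel^L.  Equal particle
# masses are assumed, so dividing by the mean count per cell normalizes
# T^0 to 1 + delta; empty cells simply stay 0.

def radial_moments(pos, u_par, n_grid, box, l_max):
    """pos: (N, 3) positions in [0, box); u_par: (N,) radial velocities.
    Returns {L: T_parallel^L on an n_grid^3 mesh} for L = 0..l_max."""
    cells = np.floor(pos / box * n_grid).astype(int) % n_grid
    idx = np.ravel_multi_index((cells[:, 0], cells[:, 1], cells[:, 2]),
                               (n_grid,) * 3)
    n_mean = len(u_par) / n_grid**3          # mean particles per cell
    moments = {}
    for L in range(l_max + 1):
        T = np.zeros(n_grid**3)
        np.add.at(T, idx, u_par**L)          # sum u_parallel^L over particles
        moments[L] = (T / n_mean).reshape((n_grid,) * 3)
    return moments
```

For a single stream per cell this reproduces $(1+\delta)u_\parallel^L$, while in multi-stream regions the sum over particles automatically includes the velocity dispersion.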
\n\n\\subsection{Angular decomposition of moments of the distribution function}\n\nThe objects $T_\\parallel^L(\\mathbf{x})$ introduced in equation \\ref{eq:q_def}\nare radial components of moments of the distribution function, which are rank $L$ tensors,\n\\begin{equation}\nT^L_{i_1,i_2,..i_L}={m_p \\over \\bar{\\rho}}\n ~ \\int d^3\\mathbf{q}~ f\\left(\\mathbf{x},\\mathbf{q}\\right)u_{i_1}u_{i_2}...u_{i_L}.\n\\label{eqTfulldef}\n\\end{equation}\nThe real-space density field corresponds to\n$L=0$, i.e.\\ the zeroth moment, \nthe $L=1$ moment corresponds to the momentum density, \n$L=2$ gives the stress energy density tensor etc. \nThese objects are symmetric under exchange of any two indices and \nhave $(L+1)(L+2)\/2$ independent components. They can be decomposed into \nhelicity eigenstates under rotation around $\\mathbf{k}$, as we do next. \n\nSince translational symmetry guarantees that each Fourier mode is only correlated \nwith itself, we can work with each Fourier mode separately, and add them appropriately\nin the end when we discuss the power spectra. By symmetry we may take $\\mathbf{k}$ to be along the $z$-axis. \nWe can decompose the distribution function into spherical harmonics, \n\\begin{equation}\nf(\\mathbf{k},q,\\theta,\\phi)=\\sum_{l=0}^{\\infty}\\sum_{m=-l}^{m=l}f_l^m(\\mathbf{k},q)Y_{lm}(\\theta,\\phi),\n\\label{flm}\n\\end{equation}\nwhere $q$ is the amplitude of the momentum (often a term $(-i)^{l}\\sqrt{{4 \\pi \\over 2l+1}}$ is inserted into this \nexpansion, but we will drop all such terms here). \nThe components $f_l^m(\\mathbf{k},q)$ are helicity eigenmodes (i.e., eigenmodes of the angular momentum component in the $z$-direction, \n$L_z=-i\\partial\/\\partial \\phi$) and under rotation by angle $\\psi$ around the $z$-axis \nthey transform as \n\\begin{equation}\nf_l^m(\\mathbf{k},q)'=e^{im\\psi}f_l^m(\\mathbf{k},q).\n\\end{equation}\nThis follows from the transformation properties of spherical harmonics.
\nA quantity which transforms under rotation according to this equation is said to \nhave helicity $m$. A quantity with helicity $0$ is called a scalar, that with helicity \n$m=\\pm 1$ is called a vector and that with $m=\\pm 2$ a tensor, but the expansion goes to \narbitrary values of $m$. \n\nMoments of the distribution function are defined in terms of integrals of powers of the velocity \nover the distribution function. \nWe can define helicity eigenstates of moments of the distribution function as \n\\begin{equation}\nT_l^{L,m}(\\mathbf{k})={4\\pi m_p \\over \\bar{\\rho}}~\\int q^2dq u^L f_l^m(\\mathbf{k},q).\n\\label{tllm}\n\\end{equation}\nNote that each term $T_{l}^{L,m}$ contains $L$ powers of the velocity \n$u$ ($u\\equiv\\left|\\mathbf{u}\\right|$, and recall that $\\mathbf{q} = m_p \\mathbf{u}$). \n\nA general rank $L$ tensor $T^L_{i_1,i_2,..i_L}$ can be decomposed into \n$2L+1$ helicity eigenmodes $T_L^{L,m}$ ($m=-L,..,L$) and \nadditional components \nformed \nby a product of a scalar $u^2$ with rank $L-2$ tensors. This gives \nadditional $2(L-2)+1$ \nhelicity eigenmodes \n$T_{L-2}^{L,m}$ ($m=-(L-2)..(L-2)$), and additional components formed \nagain by $u^2$ and a rank $L-4$ tensor, which gives additional\n$2(L-4)+1$ helicity eigenmodes\n$T_{L-4}^{L,m}$ ($m=-(L-4)..(L-4)$) etc. \n\nFor the lowest terms we have\nthe rank 1 tensor $T_i^1$, the momentum density, which is a 3-vector in the usual geometrical context, \nand can be decomposed into an $m=0$ helicity scalar component $T_1^{1,0}$ \nand two $m=\\pm 1$ helicity vector components $T_1^{1,\\pm 1}$. \nA general symmetric rank 2 tensor $T_{ij}^2$ has 6 independent components.
These \ncan be decomposed into an isotropic rank 0 helicity scalar term $T^{2,0}_0=(1+\\delta)u^2$, \nwhich corresponds to the energy density, and 5 $l=2$ components: one helicity scalar part of the anisotropic \nstress tensor, $T^{2,0}_2$, two helicity vector parts, $T^{2,\\pm 1}_2$,\nand two helicity tensor parts, $T^{2,\\pm 2}_2$.\nAt $L=3$ we have 10 independent components, of which 7 are $T^{3,m}_3$, i.e. $l=3$, $m=-3,..,3$, \nand 3 are $T^{3,m}_1$, $l=1$, $m=-1,0,1$ tensors, \nformed by taking isotropic $u^2$ and multiplying it with a 3-vector $u_i$, the latter of which can be \ndecomposed into $m=-1,0,1$ components. \n\nOne can show that $2L+1+2(L-2)+1+2(L-4)+1+...=(L+1)(L+2)\/2$, so this decomposition \ngives the required number of independent components of a general symmetric tensor of rank $L$. \nIn the analysis of \ngeneral relativity it is customary to expand the metric and stress energy tensor into scalar ($m=0$), vector ($m=\\pm 1$)\nand tensor ($m=\\pm 2$) helicity modes (SVT decomposition). No higher order helicity modes \nare needed, since only tensors of rank 0, 1 and 2 enter \ninto the description of the metric and energy momentum tensor. \nIn contrast the moments of the distribution function contain tensors of arbitrary rank and the \nexpansion in equations \\ref{flm},\\ref{tllm} is the appropriate generalization of the SVT decomposition.\n\nSo far we worked in the basis defined by $\\mathbf{k}$ pointing in the $z$ direction. \nIn general we are interested in computing the components of the moments in the radial direction $\\hat{r}$. \nIf $\\hat{r}$ is parallel to $\\mathbf{k}$ then only $m=0$ components contribute, while for a general direction all of them do.\nThe angular dependence of the moments is obtained\nby performing a rotation of the basis from $z||\\mathbf{k}$ to $z'||\\hat{r}$.
\nWe can achieve this by rotating by $\\phi$ around $z$ and then by $\\theta$ around the axis perpendicular to both $z$ and $z'$, \nso in terms of the general rotation by 3 Euler angles we have $T_{l}^{L,m'}=\\sum_{m=-l}^{l}D^l_{m,m'}(\\phi,\\theta,0)T_{l}^{L,m}$, \nwhere $D^l_{m,m'}(\\phi,\\theta,\\phi')$ is the general rotation matrix of spin $l$ associated with the 3 Euler angles $\\phi$, $\\theta$, $\\phi'$ \n(we do not need to perform the rotation around $z'$ by $\\phi'$). \nSince $u_{\\parallel}$ is invariant under rotation around $z'||\\hat{r}$ only $m'=0$ survives. \nThe rotation matrix is given by the spherical harmonics, $D^l_{0,m}(\\phi,\\theta,0)=\\sqrt{4\\pi\/(2l+1)}Y_{lm}(\\theta,\\phi)$. \nCombining all of this we find\n\\begin{equation}\nT_{\\parallel}^L(\\mathbf{k})=\\sum_{(l=L,L-2,..)}\\sum_{m=-l}^{m=l}N_l^LT_l^{L,m}(\\mathbf{k})Y_{lm}(\\theta,\\phi),\n\\label{tpl}\n\\end{equation}\nwhere $N_l^L$ is a constant independent of angle whose numerical value will not be needed. \n\n\n\\section{Power spectra}\\label{sec:power_th}\nWe will adopt a plane-parallel approximation, where \nonly the angle between the line of sight and the Fourier mode \nneeds to be specified. \nThe redshift-space power spectrum is defined as $\\langle \\delta_s(\\mathbf{k})\\delta_s^*(\\mathbf{k}')\\rangle=P^{ss}(\\mathbf{k})\\delta_D(\\mathbf{k}-\\mathbf{k}')$.\nEquation \\ref{eq:deltak} gives\n\\begin{equation}\n P^{ss}(\\mathbf{k})=\n \\sum_{L=0}^{\\infty}\\sum_{L'=0}^{\\infty}\\frac{\\left(-1\\right)^{L'}}{L!~L'!}\n \\left(\\frac{i k_\\parallel}{\\mathcal{H}}\\right)^{L+L'} P_{LL'}(\\mathbf{k}) \\label{eq:p_ss0} ~,\n\\end{equation}\nwhere $P_{LL'}(\\mathbf{k})\\delta(\\mathbf{k}-\\mathbf{k}')=\\langle T^{L}_\\parallel(\\mathbf{k}) T^{*L'}_\\parallel(\\mathbf{k}') \\rangle$. \nNote that $P_{LL'}(\\mathbf{k})=P_{L'L}(\\mathbf{k})^*$ so that the\ntotal result is real-valued, as expected.
Thus we only need to consider the terms $P_{LL'}(\\mathbf{k})$ with \n$L\\le L'$, each of which comes with a factor of 2 if $L \\ne L'$ and 1 if $L=L'$. \nWe can also write $k_{||}\/k=\\cos \\theta=\\mu$, \n\\begin{equation}\n P^{ss}(\\mathbf{k})=\\sum_{L=0}^{\\infty}\\frac{1}{L!^2}\\left(\\frac{ k\\mu}{\\mathcal{H}}\\right)^{2L} P_{LL}(\\mathbf{k}) +\n 2\\sum_{L=0}^{\\infty}\\sum_{L'>L}\\frac{\\left(-1\\right)^{L'}}{L!~L'!}\n \\left(\\frac{i k\\mu}{\\mathcal{H}}\\right)^{L+L'} P_{LL'}(\\mathbf{k}) \\label{eq:p_ss} ~.\n\\end{equation}\n\n\nNext we want to insert the helicity decomposition of equation \\ref{tpl} and \nconsider the implications of rotational symmetry on the power spectrum. \nEach term $P_{LL'}(\\mathbf{k})$ contains products of multipole moments \n\\begin{equation}\nT_l^{L,m}Y_{lm}(\\theta,\\phi)[T_{l'}^{L',m'}Y_{l'm'}(\\theta,\\phi)]^* \\propto e^{i(m-m')\\phi}.\n\\end{equation}\nUpon averaging over the azimuthal angle $\\phi$ of Fourier modes all the terms \nwith $m\\ne m'$ vanish. Another way to state this is that upon rotation by angle $\\Psi$ the correlators\npick up a term $e^{i(m-m')\\Psi}$, and in order for the power spectrum to be rotationally invariant we \nrequire $m=m'$. 
Putting it all together we find\n\\begin{equation}\nP_{LL'}(\\mathbf{k})=\\sum_{(l=L,L-2,..)}\\sum_{(l'=L',L'-2,..;\\; l'\\ge l)}\\sum_{m=0}^{l}P^{L,L',m}_{l,l'}(k)P_l^m(\\mu)P_{l'}^m(\\mu),\n\\label{pll}\n\\end{equation}\nwhere $P_l^m(\\mu=\\cos \\theta)$ are the associated Legendre polynomials, which determine the $\\theta$ angular dependence\nof the spherical harmonics, $Y_{lm}(\\theta,\\phi)=\\sqrt{(2l+1)(l-m)!\/[4\\pi(l+m)!]}P_l^m(\\cos \\theta) e^{im\\phi}$.\nWe \nabsorbed all of the terms that depend on $l$ and $m$ and various constants into the definition of the power spectra $P^{L,L',m}_{l,l'}(k)$, \nand replaced the two helicity states $\\pm m$ by a single one with $m>0$, since their $\\theta$ angular dependencies are the same, \nabsorbing the resulting factor of 2 into the definition of $P^{L,L',m}_{l,l'}(k)$. We also require $l'\\ge l$ and \nlikewise absorb the corresponding factor of 2 into the definition of $P^{L,L',m}_{l,l'}(k)$, since the two terms have the same\nangular structure. \nNote that due to statistical isotropy the spectra $P^{L,L',m}_{l,l'}(k)$ depend only on the amplitude of $k$, \ni.e. we have \n\\begin{equation}\nP^{L,L',m}_{l,l'}(k) \\propto \\langle T_l^{L,m}(\\mathbf{k})(T_{l'}^{L',m}(\\mathbf{k}))^* \\rangle .\n\\end{equation}\nAll the angular structure is thus in the associated Legendre polynomials \n$P_l^m(\\mu)$. \n\nEquations \\ref{eq:p_ss} and \\ref{pll} are the main result of this paper. \nThey show that there exists a well-defined expansion in terms of cross and auto-power spectra of velocity \nmoments. The expansion parameter is roughly defined as $k\\mu u\/\\mathcal{H}$, where \n$u$ is related to a typical gravitational velocity of the system (which should be of the order of hundreds of km\/s, \nbut note that we take higher and higher powers of these velocities in the series). \nThe expansion is convergent if the expansion parameter is less than unity.
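Two quick numerical sanity checks may be helpful here (illustrative only; the velocity scale and Hubble value below are representative round numbers, not results from this work): the helicity counting $2L+1+2(L-2)+1+\dots=(L+1)(L+2)/2$ quoted earlier, and the scale at which the expansion parameter $k\mu u/\mathcal{H}$ reaches unity for $u\sim 300$ km/s.

```python
# Check the helicity counting: summing 2l+1 over l = L, L-2, ... must give
# (L+1)(L+2)/2, the number of independent components of a symmetric
# rank-L tensor in three dimensions.
def helicity_count(L):
    return sum(2 * l + 1 for l in range(L % 2, L + 1, 2))

for L in range(12):
    assert helicity_count(L) == (L + 1) * (L + 2) // 2

# Rough convergence scale of the expansion (assumed numbers, mu = 1):
# with u ~ 300 km/s and H_0 = 100 h km/s/Mpc, the parameter k mu u / H
# reaches unity at
k_conv = 100.0 / 300.0     # ~0.33 h/Mpc
```

With these representative numbers the expansion is under control for $k$ well below a few tenths of $h/{\rm Mpc}$, consistent with the qualitative statement in the text.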
\n\nIn terms of perturbation theory there is a close, but not one-to-one, relation between the lowest order of perturbation \ntheory and the order of the moment expansion. \nAssuming $\\delta$ and $ku\/\\mathcal{H}$ are of the same order, \nthe lowest order of the contribution in terms of powers of the power spectrum (i.e., quadratic in $\\delta$) is \n$(L+L')\/2$ if \n$L+L'$ is even and $L>0$, and $(L+L'+1)\/2$ if odd and $L>0$, while for $L=0$ it is $L'\/2+1$ if $L'$ even \nor $(L'+1)\/2$ if $L'$ odd, but of course all \nhigher order terms also enter. \n\nThese equations also show that there is a close relation between the order of the moments and their \nangular dependence. To understand the angular dependence we first note that \nthe associated Legendre polynomial $P_l^m(\\mu)$ contains powers of $\\mu$ from 1 to $\\mu^{l-m}$ for even $l-m$, and from\n$\\mu$ to $\\mu^{l-m}$ for odd $l-m$, \nand is always multiplied with a power of $(1-\\mu^2)^{m\/2}$. \nThus $P_{l,l'}^{L,L',m}(k)$ gets multiplied with powers ranging from $(1-\\mu^2)^{m}$ or $\\mu (1-\\mu^2)^{m}$ to \n$\\mu^{l+l'-2m}(1-\\mu^2)^{m}$, so the highest order is $\\mu^{l+l'}$. In addition we have the $\\mu^{L+L'}$ dependence in \nequation \\ref{eq:p_ss}, so the lowest contribution in \npowers of $\\mu$ to \n$P^{ss}(k)$ is $\\mu^{L+L'}$ if $L+L'$ is even or $\\mu^{L+L'+1}$ if $L+L'$ is odd, \nand the highest is $\\mu^{2(L+L')}$. Thus for $P_{00}(\\mathbf{k})$ the only angular term is isotropic, \nfor $P_{01}(\\mathbf{k})$ the only angular term is $\\mu^2$, $P_{11}(\\mathbf{k})$ and $P_{02}(\\mathbf{k})$ contain both $\\mu^2$ and $\\mu^4$ etc. \nNote that only even powers of $\\mu$ enter in the final expression, as required by the symmetry. \nWe now proceed to look in more detail at the lowest order terms. \n\n\\subsection{$P_{00}(\\mathbf{k})$: the isotropic term}\n\nAt the lowest order in the expansion we have the correlation of the real-space density $T_{||}^0(\\mathbf{k})=\\delta(\\mathbf{k})$ with itself. \nThe density is a scalar of rank 0, $P_0(\\mu)=1$.
The power spectrum is isotropic,\n$P_{00}(\\mathbf{k})=P^{0,0,0}_{0,0}(k)$. \nThis term is just the real-space power spectrum and of \ncourse does not have any $\\mu$ dependence since it is independent of redshift-space distortions. \nFor small values of $\\mu$ this term always dominates, and in the limit $\\mu=0$ the transverse \npower spectrum becomes the real-space power spectrum $P_{00}(k)$. The real-space power spectrum agrees with the\nlinear one on large scales, $P_{00}(k)=P_{\\rm lin}(k)$, slightly dips below the linear one around $k \\sim 0.1h\/Mpc$, \nwhile on even smaller scales the nonlinear corrections cause it to increase over the linear one. \n\n\\subsection{$P_{01}(\\mathbf{k})$}\n\nAt the next order in our expansion (though not in perturbation theory, PT; see below) we have correlations between the density \n$T_{||}^0(\\mathbf{k}) =\\delta(\\mathbf{k})$ and the radial component of the momentum density \n$T_{||}^1(\\mathbf{k})=[(1+\\delta)u_{||}](\\mathbf{k})$. \nThe momentum density can be decomposed into a scalar ($m=0$) $T_1^{1,0}$\nand two vector ($m=\\pm 1$) components $T_1^{1,\\pm 1}$, but only the scalar part correlates with the density $T_0^{0,0}$, \nwhich is a scalar. \nThus the only contribution comes from $P^{0,1,0}_{0,1}(k) \\propto \\langle T_0^{0,0}(\\mathbf{k})(T_1^{1,0}(\\mathbf{k}))^* \\rangle$, \n\\begin{equation}\nP_{01}(\\mathbf{k})=P^{0,1,0}_{0,1}(k)\\mu,\n\\end{equation}\nwhere we used $P_1^0(\\mu)=\\mu$. \n\nThe scalar mode of the momentum can be obtained from the divergence of the momentum \nand related to $\\dot{\\delta}$ using the continuity equation,\nwhich in terms of our quantities is\n\\begin{equation}\n\\dot{T}_0^{0,0}+ikT_1^{1,0}=0.\n\\label{ce}\n\\end{equation}\nThis is an exact relation (for conserved quantities), in the sense that the \nvector part of the momentum does \nnot contribute to it, since \nit vanishes upon taking the divergence (i.e., vector components are orthogonal to $\\mathbf{k}$ and the dot product is zero).

From this we get
\begin{equation}
P_{01}(\mathbf{k})=-ik^{-1}\mu P_{\delta,\dot{\delta}}(k)=-{i\mu \over 2k} {dP_{00}(k) \over dt}.
\end{equation}
The total contribution from this term to $P^{ss}(k)$ is
\begin{equation}
P^{ss}_{01}(\mathbf{k})=\mu^2 \mathcal{H}^{-1} {dP_{00}(k) \over d\tau}=\mu^2 {d P_{00}(k) \over d\ln a}.
\end{equation}
This is an exact relation for dark matter, valid also in the nonlinear regime.
It shows that this term can be obtained directly from the redshift evolution of
the dark matter power spectrum $P_{00}(k)$. On large scales it agrees with the linear theory predictions.
If we write
$P_{00}(k)=D(a)^2P_{\rm lin}(k)$, with $D(a)$ the linear growth factor and $f=d\ln D/d\ln a$ the growth rate, then
we find $P^{ss}_{01}(\mathbf{k})=2f\mu^2P_{\rm lin}(k)$.
We thus see that this term is of the same order in PT as the leading order $P_{00}(k)$; this is the well known
Kaiser result \cite{1987MNRAS.227....1K}.
On smaller scales we expect this term to deviate from
the linear prediction, just as for $P_{00}(k)$.

\subsection{$P_{11}(\mathbf{k})$}

The next term is the correlation of the momentum density $T_{||}^1(\mathbf{k})$ with itself.
In this case the scalar ($m=0$) component $T_1^{1,0}(k)$
correlates with itself and the vector ($m=\pm 1$) components $T_1^{1,\pm 1}(k)$ also
correlate with themselves, so both components of the
momentum contribute,
\begin{equation}
P_{11}(\mathbf{k})=P^{1,1,0}_{1,1}(k)[P_{1}^{0}(\mu)]^2+P_{1,1}^{1,1,1}(k)[P_{1}^{1}(\mu)]^2.
\end{equation}
In terms of the contribution to the redshift space power spectrum this gives
\begin{equation}
P^{ss}_{11}(\mathbf{k})=\mathcal{H}^{-2}k^2\mu^2[P^{1,1,0}_{1,1}(k)\mu^2+P^{1,1,1}_{1,1}(k)(1-\mu^2)].
\label{pss11}
\end{equation}
The scalar part of the momentum is the one that contributes to the continuity equation \ref{ce}.
In linear perturbation theory only the scalar contribution is non-zero, with $P^{1,1,0}_{1,1}(k)=f^2P_{\rm lin}(k)$.
This term is also of linear order, and collecting all terms at this order we obtain the usual expression \cite{1987MNRAS.227....1K}
\begin{equation}
P^{ss}_{\rm lin}(\mathbf{k})=(1+f\mu^2)^2P_{\rm lin}(k).
\end{equation}

However, we see from the above that there will be another contribution to both the $\mu^2$ and $\mu^4$
terms from the vector part of the momentum correlator $P_{1,1}^{1,1,1}(k) \propto \langle |T_1^{11}(k)|^2\rangle$,
which comes in at second order in the power spectrum.
This vector part is often called the vorticity part of the momentum.
In general this term is non-zero because the vorticity of the momentum does not vanish,
even if the vorticity of the velocity vanishes for a single streamed fluid
\cite{2002PhR...367....1B}.
As seen from equation \ref{pss11}, this term always {\it adds} power to the $\mu^2$ term and subtracts it from the $\mu^4$ term
(where it is combined with a positive contribution from the scalar part).

\subsection{$P_{02}(\mathbf{k})$}

At orders higher than $P_{11}(\mathbf{k})$ we no longer have any linear contributions, hence these terms are usually
not of interest for extracting cosmological information.
However, these terms, including what is sometimes called the Fingers-of-God
(FoG) effect, are known to be important on
fairly large scales.
Here we will limit the discussion to some general statements about their $k$ and
$\mu$ dependence,
leaving more precise calculations to future work.

Two different correlators contribute to this term,
\begin{equation}
P_{02}(\mathbf{k})=P^{0,2,0}_{0,0}(k)[P_{0}^{0}(\mu)]^2+P^{0,2,0}_{0,2}(k)P_0^0(\mu)P_{2}^{0}(\mu).
\end{equation}
In terms of the contribution to the redshift space power spectrum this gives
\begin{equation}
P^{ss}_{02}(\mathbf{k})=-\left({k \mu \over \mathcal{H}}\right)^2\left[P^{0,2,0}_{0,0}(k)+{1 \over 2}P^{0,2,0}_{0,2}(k)(3\mu^2-1)\right].
\end{equation}
The first term is the correlation between the isotropic part of the mass weighted square of the velocity, i.e., the energy density
$T_0^{2,0}=(1+\delta)u^2$, and the density field $T_0^{0,0}=\delta$. The second term comes from the scalar part
of the anisotropic stress $T_2^{2,0}$ correlated with the density $T_0^{0,0}=\delta$.

On physical grounds we expect the first term
to be large in systems with a large rms velocity, resulting in a term scaling as
$P^{0,2,0}_{0,0}(k) \sim P_{00}(k)\sigma^2$, where $\sigma^2$ has units of velocity squared, but is not
simply the volume averaged velocity squared (see below). The contribution of this term to $P^{ss}$
goes as $-(k\mu/\mathcal{H})^2\sigma^2 P_{00}(k)$, i.e., it is a damping term suppressing the linear power spectrum,
with the effect increasing towards higher $k$ (smaller scales).
This is the lowest order FoG term, which contributes with a $(k\mu)^2$ dependence and so affects the $\mu^2$ term.
It is a damping term that is always negative, while the corresponding $\mu^2$ term from $P_{11}$ always adds power.
The scalar anisotropic stress-density correlator $P_{0,2}^{0,2,0}(k)$
also contributes to the $\mu^2$ angular term, as well as to the $\mu^4$ angular term,
and is formally of the same order in perturbation theory as
$P^{0,2,0}_{0,0}(k)$,
but is likely to be smaller,
on the physical grounds that the velocity dispersion in virialized objects
is isotropic and hence produces only a small anisotropic stress.
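
As an illustration of how this connects to common phenomenological treatments (the exponential form below is a standard FoG ansatz from the literature, quoted here for comparison rather than derived from our expansion), a Gaussian damping factor reproduces the same leading correction when expanded for small $k\mu\sigma/\mathcal{H}$,
\begin{equation}
e^{-(k\mu\sigma/\mathcal{H})^2} P_{00}(k) \approx \left[1-\left({k\mu\sigma \over \mathcal{H}}\right)^2+\ldots\right]P_{00}(k),
\end{equation}
so the damping term $-(k\mu/\mathcal{H})^2\sigma^2 P_{00}(k)$ above can be viewed as the lowest order expansion of such a damping function, with the important difference that here $\sigma^2$ is defined through the energy density-density correlator rather than being a free fitting parameter.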

\subsection{$P_{03}(\mathbf{k})$, $P_{12}(\mathbf{k})$, $P_{04}(\mathbf{k})$, $P_{13}(\mathbf{k})$
and $P_{22}(\mathbf{k})$}

Since the lowest order in $\mu$ is $\mu^{L+L'}$ or $\mu^{L+L'+1}$,
the terms of order higher than $P_{02}(\mathbf{k})$ do not contribute to the $\mu^2$
term.
At the next order in $\mu$ we have terms of order $\mu^4$, to which
7 terms contribute. We have already discussed $P_{11}(\mathbf{k})$ and $P_{02}(\mathbf{k})$:
while $P_{11}(\mathbf{k})$ has a linear order term and
is expected to dominate on large scales, $P_{02}(\mathbf{k})$ is second order in the power
spectrum. They both come with a prefactor of $k^2$.
At the next order we have $P_{03}(\mathbf{k})$ and $P_{12}(\mathbf{k})$, both second order in the power spectrum
and each multiplied by $k^3$, followed by $P_{13}(\mathbf{k})$ and $P_{22}(\mathbf{k})$, also second order in the power spectrum, but
each multiplied by $k^4$, and by $P_{04}(\mathbf{k})$, which is third order in the power spectrum.
All of these terms also
contribute to terms of higher order in $\mu^{2j}$, up to
$\mu^6$ or $\mu^8$ for these terms.

One can see from this discussion that the angular structure of higher order
terms is considerably more
complex than that of lower order terms and that all even powers of
$\mu$ are generated by
RSD. However, there is a connection between the angular order, powers of $k$,
and the lowest order in perturbation
theory, such that only low powers of $k$ and low orders of PT
contribute to the lowest powers of $\mu$.
Thus, at low values of $k\mu$, the series is convergent. To make these
statements more quantitative
a numerical or perturbative analysis is required, which will be presented
elsewhere \cite{Okumura2011}.

\subsection{Shot noise and connections between the correlators}

The correlators at the same order in powers of velocity, i.e.
equal $L+L'$,
contain nontrivial cancellations among them.
To see this, assume the velocity is constant over a region of space of size
$r \sim k_0^{-1}$.
For example, large scale bulk flows lead to correlated velocities on small
scales, giving rise to
nearly equal velocities between nearby particles. On scales smaller than this
velocity coherence scale, $k>k_0$, we
can pull these constant velocity terms out of the correlators, to obtain
\begin{equation}
P_{LL'}(k)\delta_D(\mathbf{k}-\mathbf{k}')=\langle [(1+\delta)u_{\parallel}^L](\mathbf{k})[(1+\delta)u_{\parallel}^{L'}]^*(\mathbf{k}')\rangle \sim P_{00}(k) \langle u_{\parallel}^{L+L'}
\rangle \delta_D(\mathbf{k}-\mathbf{k}'),
\end{equation}
where $\langle u_{\parallel}^{L+L'} \rangle$ is just a number corresponding to
the spatial average of this term.
So these terms are all equal as long as $L+L'$ is the same.

These terms enter into the sum in equation \ref{eq:p_ss0} with different
prefactors and opposite signs, leading to
cancellations between them. The lowest order example is that of $P_{11}$ and
$P_{02}$, which enter with equal prefactors but
opposite signs, canceling any such contributions from each other.
This is not surprising: bulk flows lead to ``rigid body'' displacements
of particles but do not contribute to FoG effects, so their contribution to
$P_{02}$ must be canceled.
As a result, only velocity dispersion type contributions lead to FoG effects.

In the extreme case this argument can be applied to the shot noise
of these correlators, which is the
contribution to the power spectrum caused by the discreteness
of tracers. It is well known that the shot noise of a density field sampled by
tracers of number density $\bar{n}$ is
given by $P_{00}(k)=\bar{n}^{-1}$.
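As a toy numerical check of this shot noise (our own illustration, under simplifying assumptions: a 1D periodic box with Poisson distributed particles and velocities uncorrelated with positions), one can verify that the density correlator gives $\bar{n}^{-1}$ and that moment correlators with equal $L+L'$ coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
N, box = 5000, 1.0
x = rng.random(N) * box            # Poisson-sampled positions in a unit periodic box
u = rng.normal(0.0, 1.0, N)        # line-of-sight velocities, uncorrelated with x

# Fourier modes k = 2*pi*n/box with n = 1..400 (k != 0)
ks = 2.0 * np.pi * np.arange(1, 401) / box
phase = np.exp(-1j * np.outer(ks, x))

def T(L):
    """Weighted moment T_L(k) = sum_i u_i^L exp(-i k x_i)."""
    return phase @ (u ** L)

nbar = N / box
# Mode-averaged correlator estimates, normalized by (nbar * volume)^2
P00 = np.mean(np.abs(T(0)) ** 2) / N**2
P11 = np.mean(np.abs(T(1)) ** 2) / N**2
P02 = np.mean((T(0) * np.conj(T(2))).real) / N**2

print(P00 * nbar)                    # ~1: density shot noise is 1/nbar
print(P11 * nbar / np.mean(u**2))    # ~1: P_11 shot noise is <u^2>/nbar
print(P02 / P11)                     # ~1: equal L+L' correlators agree
```

In the combination $P_{11}-P_{02}$ these discreteness contributions therefore cancel, as discussed above.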
An analogous calculation for the moments gives
\begin{equation}
P_{LL'}(k)=\bar{n}^{-1}\langle u_{\parallel}^{L+L'}\rangle.
\end{equation}
This expression is exact, since by definition a discrete tracer population only has a single value of the velocity at any given position,
so $\langle u_{\parallel}^{L+L'} \rangle$ will be the same for any pair of $L,L'$ such that $L+L'$ is the same.
These shot noise terms can be large if the tracer is sparse, i.e., if $\bar{n}$ is small.
However, the argument above shows that these terms enter with opposite signs in the final result, and so these shot
noise contributions cancel in the total sum of equation \ref{eq:p_ss0}. This is expected: the only shot noise contribution
to the total RSD power spectrum $P^{ss}(k)$ should be $\bar{n}^{-1}$.
These examples show that these velocity moments are connected, and
it is more natural to consider
them together, such as $P_{11}(k)-P_{02}(k)$, where the shot noise and the bulk flow terms cancel out.

\subsection{Relation to Legendre moments}

In RSD analyses it is customary to integrate $P^{ss}(\mathbf{k})$ over the lowest order Legendre polynomials to obtain
moments $P^{ss}_l(k)$,
\begin{equation}
P^{ss}_{l}(k)=(2l+1)\int_0^1 P^{ss}(\mathbf{k})P_l(\mu)d\mu,
\end{equation}
where $P_l(\mu)$ are the ordinary Legendre polynomials, $P_0(\mu)=1$,
$P_2(\mu)=(3\mu^2-1)/2$ and $P_4(\mu)=(35\mu^4-30\mu^2+3)/8$.
Only the lowest 3 orders contain contributions from linear terms, so the analysis is usually limited to $l=0,2,4$.
The $l=0$ moment is just the spherical average of the power spectrum in redshift space.
The advantage of this expansion is that in a typical survey the moments are uncorrelated on scales small compared to
the size of the survey.

Moments in even $l$ can be viewed as an alternative way to expand in terms of even powers of $\mu$.
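
For illustration, inserting the linear Kaiser spectrum $(1+f\mu^2)^2P_{\rm lin}(k)$ into this definition gives the standard non-vanishing multipoles,
\begin{equation}
P^{ss}_0(k)=\left(1+{2f \over 3}+{f^2 \over 5}\right)P_{\rm lin}(k), \quad
P^{ss}_2(k)=\left({4f \over 3}+{4f^2 \over 7}\right)P_{\rm lin}(k), \quad
P^{ss}_4(k)={8f^2 \over 35}P_{\rm lin}(k),
\end{equation}
while all higher multipoles vanish in linear theory; beyond linear theory the higher order terms discussed above populate them as well.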
However,
the expansion given in equation \ref{pll} is {\it not} an expansion in Legendre polynomials, since it contains
products of associated Legendre polynomials (including squares of ordinary Legendre polynomials of both even and odd
orders). Hence there is no orthogonality between the moments of the distribution function and the Legendre moments $P^{ss}_{l}(k)$. So
if we expand the angular dependence of any given term $P^{ss}_{LL'}(k)$ into Legendre polynomials, we will
generate all orders up to $l=2(L+L')$. This means, for example, that all terms
will contribute to the monopole $l=0$, all except $P^{ss}_{00}(k)$ to the quadrupole
$l=2$, and all but $P^{ss}_{00}(k)$ and $P^{ss}_{01}(k)$ to the hexadecapole $l=4$.
As a result we always have an infinite number of terms $P_{LL'}(\mathbf{k})$ contributing to any given
Legendre series term $P^{ss}_l(k)$, with
higher and higher powers of $k$. This expansion is thus considerably more
complex than the expansion in powers of $\mu^{2j}$,
which has a finite number of terms for any given value of $j$.

The discussion above suggests it may be more beneficial to fit for powers of $\mu^{2j}$ rather than
for Legendre moments, for example by fitting for the $\mu^0$, $\mu^2$ and $\mu^4$ terms, which contain
linear order contributions, together with the higher order terms $\mu^6$, $\mu^8$, etc., which we are not interested in and
can marginalize over at the end.
However, Legendre moments are uncorrelated while powers of $\mu^{2j}$ are strongly
correlated, so a marginalization over higher order terms will lead to a large increase in errors at higher $k$, given that these
terms become very large at high $k$ compared to lower order terms. So
this can only work if sufficiently strong priors are adopted for the higher order terms $\mu^6$, $\mu^8$, etc. Such priors could
come from simulations extracting individual higher order terms or from a parametrized model.
This is pursued further in \cite{Okumura2011}.

\subsection{Applications to galaxies and issues of bias}

The relation to other tracers such as galaxies is a rich subject worth exploring further with this method.
In this paper we focus primarily on the dark matter, but all the derivations remain unchanged if the dark matter
particles are replaced with some other tracer, such as galaxies or halos.
In large scale structure we usually define bias as the ratio of the galaxy power spectrum (shot noise subtracted) to
the matter power spectrum,
$b^2(k)=P_{00}^{gg}(k)/P_{00}^{mm}(k)$.
We can generalize the concept of bias to
\begin{equation}
b_{LL'}(\mathbf{k})={P_{LL'}^{gg}(\mathbf{k}) \over P_{LL'}^{mm}(\mathbf{k})},
\label{bll}
\end{equation}
where $P_{LL'}^{gg}(\mathbf{k})$ is the galaxy correlator and $P_{LL'}^{mm}(\mathbf{k})$ is the corresponding dark matter term.
In linear theory we have $b_{00}=b_1^2$, $b_{01}=b_1$ and $b_{11}=1$, independent of scale or angle, where
the linear bias is defined by $\delta_g=b_1\delta_m$.
Two ways to extract cosmological information from RSD are either to combine $P_{00}$ and $P_{01}$ to eliminate $b_1$, or
to use $P_{11}$ directly.

Before discussing RSD further it is useful to draw a comparison to weak lensing. In the case of weak lensing
we can measure both the projected dark matter density and the galaxy density, so we can perform
a joint correlation analysis of galaxy clustering and weak lensing, where the
galaxy auto-correlation is proportional to $b^2$ times the matter correlations, the cross-correlation between galaxies and the
weak lensing signal around them induced by the dark matter is proportional to $b$ (the so-called galaxy-galaxy lensing),
while the weak lensing auto-correlation is independent of bias.

Two ways to extract the signal are either using just the shear-shear correlations tracing the matter-matter correlations,
or combining the galaxy auto-correlation with galaxy-galaxy lensing to eliminate bias.
The latter has a higher signal to noise ratio but is complicated by the fact that bias is
scale dependent, with a scale dependence that depends on the
galaxy properties \cite{2010PhRvD..81f3531B}. To understand when this happens it is useful to expand the galaxy density perturbation
to second order in the matter density, $\delta_g=b_1\delta_m+b_2\delta_m^2$.
The second order terms become important when they cannot be neglected against the first order terms,
so the expansion parameter is $(b_2/b_1)\delta_m$. Since $\delta_m^{\rm rms}$ increases on small scales, this scale
dependent bias increases towards small scales. Typically we have $|b_2/b_1|<0.4$ \cite{2010PhRvD..81f3531B}
and the corrections become
important at $k \sim 0.1h/{\rm Mpc}$, where $\delta_m^{\rm rms} \sim 0.5$.

Returning to RSD, our formalism is directly applicable to galaxies, except that
all the velocity moments are mass weighted for the dark matter, $T_{\parallel}^{L,m}=
(1+\delta_m)u_{\parallel}^L$, and number density weighted for the galaxies, $T_{\parallel}^{L,g}=
(1+\delta_g)u_{\parallel}^L$.
This shows that if the density distribution of galaxies differs from that of the
dark matter, then all the correlators of the velocity moments will differ as well, even those that appear independent of bias,
such as $P_{11}$.
Thus, in reality the predictions of the linear bias model will be modified,
because even if galaxies are faithful tracers of the dark matter velocities at a given position, the weighting of
the velocity moments differs: in one case they are weighted by the dark matter mass, in the other by the number of
galaxies, and the two
differ in their spatial distribution.
This will result in a scale dependence of the higher order
bias terms $b_{LL'}$, just as it does for $b_{00}$ itself \cite{2010PhRvD..82d3515H}.

To quantify this further, for the lowest order momentum density term and for linear bias
we must compare the correlation of $(1+b_1\delta_m)u_{\parallel}$ with itself, which gives $P_{11}^{gg}(\mathbf{k})$,
or with $b_1\delta_m$, which gives $P_{01}^{gg}(\mathbf{k})$. The auto-correlation will give a result that agrees with the dark matter
only for $b_1=1$, or if $b_1\delta_m \ll 1$. In the same limit the cross-correlation will give the linear bias $b_1$.
So the momentum density becomes the velocity in the limit $b_1\delta_m \ll 1$,
while requiring it to be scale independent relative to the dark matter
requires something like $(b_1-1)\delta_m \ll 1$, which for typical LRG galaxies ($b_1 \sim 2$)
is in fact a more stringent requirement than the scale independent bias condition
discussed above
($b_2/b_1$ versus $b_1-1$). This suggests
that the scale dependence of the momentum density bias terms $b_{01}$ and $b_{11}$
defined in equation \ref{bll} extends to larger scales
than the scale dependent bias of the density, $b_{00}$.

The conclusion from this discussion is that the scale dependence of bias terms
involving the momentum density is a real concern in RSD and likely
extends to relatively large scales ($k<0.1h/{\rm Mpc}$).
In terms
of the angular decomposition in powers of $\mu$,
the discussion of the scale dependent bias of RSD can be divided into the $\mu^2$ term, which depends
entirely on $b_{01}$ in $P_{01}$, and the $\mu^4$ term, for which
the scale dependence of the $b_{11}$ term above is applicable,
since that is the term that does not vanish on large scales in linear theory.
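The effect of the weighting can be illustrated with a toy numerical sketch (our own illustration, with deliberately simplified assumptions: deterministic linear bias $\delta_g=b_1\delta_m$, a shared velocity field drawn independently of the density, and no spatial structure):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def momentum_power_ratio(b1, sigma_delta):
    """Ratio <p_g^2>/<p_m^2> of number weighted to mass weighted momentum,
    p = (1 + delta) * u, for a toy tracer with delta_g = b1 * delta_m that
    shares the matter velocity field u."""
    delta = rng.normal(0.0, sigma_delta, n)   # toy matter overdensity
    u = rng.normal(0.0, 1.0, n)               # shared velocities (toy: independent of delta)
    p_m = (1.0 + delta) * u                   # mass weighted momentum
    p_g = (1.0 + b1 * delta) * u              # number weighted momentum
    return np.mean(p_g ** 2) / np.mean(p_m ** 2)

print(momentum_power_ratio(1.0, 0.5))    # exactly 1: unbiased tracer recovers the matter result
print(momentum_power_ratio(2.0, 0.05))   # ~1: the b1*delta << 1 limit
print(momentum_power_ratio(2.0, 0.5))    # noticeably > 1: weighting matters once b1*delta is not small
```

Only in the limit where the weighting fluctuations are negligible does the momentum statistic become bias independent, in line with the $(b_1-1)\delta_m \ll 1$ condition above.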

In this sense an RSD analysis is not the equivalent of a joint galaxy-weak lensing analysis,
since the weak lensing auto-correlation truly traces the dark matter directly,
while in RSD
this limit is achieved only on relatively large scales where
$\delta_g^{\rm rms} \ll 1$.

The discussion so far has completely ignored FoG effects: for the $\mu^2$ term
these are encoded in $P_{02}$ and in the vector part of $P_{11}$, which,
unlike $P_{02}$, adds power rather than removing it, and these terms
have their own physical interpretation and scale dependence,
unrelated to the scale dependent bias discussion above.
While they can partially cancel the effects discussed above, they are unlikely to
do so exactly.
In most of the literature so far only the scale dependence induced by FoG effects has been discussed
(although see \cite{2004PhRvD..70h3007S,2011arXiv1105.4165R}).
The simple linear bias model predicts that FoG effects scale with bias squared: the leading order term scales as $b_1^2$
both in $P_{02}^{gg} \propto
\langle [b_1\delta v_{\parallel}^2](\mathbf{k})b_1\delta(-\mathbf{k})\rangle$ and in $P_{11}^{gg}\propto [b_1\delta v_{\parallel}]^2$.
If we write $P_{02}^{gg}(k)-P_{11}^{gg}(k)=P_{00}^{gg}(k)\sigma^2$, then $\sigma$ is independent of bias,
since $P_{00}^{gg}(k) \propto b_1^2P_{00}^{mm}(k)$.

\section{Discussion}

In this paper we present a distribution function approach to redshift space distortions.
We show that the redshift space density can be expressed in terms of a
sum over velocity moments, and that the redshift
space power spectrum can be expressed in terms of correlators between the Fourier
components of these moments.
These moments are simple objects to calculate in any system: they are obtained by simply taking appropriate powers
of the radial velocity and summing over all particles. The lowest order moments are the density, momentum density, stress
energy density, etc.
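To make this concrete, a minimal sketch of such a moment measurement (our own illustration; the function and its arguments are hypothetical, not from a released code) assigns powers of each particle's radial velocity to a grid:

```python
import numpy as np

def velocity_moment(positions, u_radial, L, nbins, box):
    """Gridded estimate of the L-th moment: in each cell, sum u_radial^L over
    the particles it contains; L=0 gives the density, L=1 the momentum density,
    L=2 the energy density, and so on. Normalizing by the mean particle count
    per cell makes the L=0 moment equal to 1 + delta."""
    hist, _ = np.histogram(positions, bins=nbins, range=(0.0, box),
                           weights=u_radial ** L)
    return hist / (len(positions) / nbins)

# Four particles in a unit box, two cells:
pos = np.array([0.1, 0.2, 0.6, 0.7])
u = np.array([1.0, -1.0, 2.0, 0.5])
dens = velocity_moment(pos, u, 0, 2, 1.0)   # [1.0, 1.0]
mom = velocity_moment(pos, u, 1, 2, 1.0)    # [0.0, 1.25]
```

Note that empty cells simply give zero, so these weighted moments remain well defined even in voids.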

We have decomposed the moments into helicity
eigenstates based on their transformation properties under rotation around the direction of the Fourier mode,
a generalization of the SVT decomposition in cosmological perturbation theory.
We use rotational invariance to derive all of the allowed correlator terms, showing that only terms with
the same helicity can contribute to the correlators.
The moments of the distribution function are complicated objects with many terms allowed by the symmetries, especially at higher orders,
leading to a complicated angular and scale dependence and
suggesting that treatments of RSD cannot be fully successful with simple ansatzes, such as the popular FoG velocity
dispersion model with one free parameter \cite{2011MNRAS.410.2081J,2011arXiv1103.3614T}.

Despite the complexity of the general RSD description some general statements can be made.
The lower order terms generally only
contribute to low orders of the expansion in $\mu^2$, where $\mu$ is the cosine of the angle between the Fourier mode and the line of sight.
As an example, we have shown that only the scalar part of the momentum density correlates with the density, and
this term can be written in terms of a time derivative of the power spectrum. This term only contributes to the $\mu^2$
angular dependence and contains a linear order term. But
there is also the vector part of the momentum density-momentum density correlation,
the (scalar) energy density-density correlation, and the scalar part of the anisotropic stress density-density correlation,
all of which also contribute to the $\mu^2$ term.
They are all nonlinear and cannot dominate on very large scales, but likely
dominate on small scales.
The energy density-density correlation term is the term most closely related to the
FoG velocity dispersion effect and is always negative, suppressing the power,
but the other terms are formally of the same order in perturbation theory.

We have shown that the vorticity part of the momentum always adds to the RSD power in the $\mu^2$ term, and hence acts in the opposite direction to
the FoG term. Our analysis cannot address which term has a larger amplitude, but it would be interesting to see
if there are any systems where the terms that add power dominate over those that suppress it.
The next angular term has a $\mu^4$ dependence and we identified 7 terms that contribute to it, of which one, the
scalar part of $P_{11}$, contains a linear contribution that does not vanish on large scales.

The fact that there is a finite number of velocity moment terms at each order of the $\mu^{2j}$ expansion should be contrasted with
the popular Legendre multipole expansion (where the monopole, quadrupole and hexadecapole contain cosmological information),
whose moments receive contributions from all orders of the moments of the distribution function.
This suggests that a better behaved analysis may be possible if, instead of a multipole analysis, the analysis is performed in
terms of a $\mu^{2j}$ expansion, with the lowest 3 orders containing cosmological information and the rest treated as nuisance
parameters.

It is important to emphasize that these moments are mass weighted quantities, and
no volume averaged quantities ever enter into our expressions.
This relates to one of the long standing issues in the treatment of RSD: many of the past treatments \cite{2004ApJ...606..702T,2011MNRAS.410.2081J,2011arXiv1103.3614T} have
assumed that RSD trace correlations between velocities and dark matter and that the FoG effects multiply these
density-velocity and velocity-velocity correlations, where the FoG quantities are also
defined as volume weighted quantities, such as the velocity dispersion $\sigma^2=\langle u^2 \rangle$.
But these volume weighted quantities
are not well defined, especially for sparse biased systems such as galaxies or clusters.

For a biased tracer with $b>1$ one finds that voids with no tracers in them are enlarged,
since, for $\delta<0$, $1+b\delta$ is closer to 0 than $1+\delta$.
This has forced some workers to use the dark matter velocity field instead,
with unpredictable results \cite{2011arXiv1103.3614T}.
Our expansion shows that it is more natural to define RSD in terms of mass or number weighted quantities, such as the momentum density or the
energy density, the former replacing the velocity and the latter replacing the velocity dispersion.
Mass and number weighted moments such as the momentum or energy density
are well defined even in voids (where they are simply zero). In this paper we show that there is a consistent
expansion using mass weighted moments, and that the expansion is convergent on large scales.

The fact that all RSD quantities are density weighted also suggests that RSD effects will differ
if the galaxy number density distribution differs from the mass density distribution. We have shown that even a linear
bias model induces a scale dependent bias of the momentum density correlators, and that this scale dependence is
likely to show up on relatively large scales, $k<0.1h/{\rm Mpc}$.
The success of RSD in extracting cosmological information depends entirely on our ability to model
these various bias terms and relate them to each other. Similarly,
the success of the approach presented here in modeling RSD depends on our ability to extract these moments from simulations and
data and on our ability to model them with analytic models, such as perturbation theory.

Providing a physical interpretation of the terms, as done here, could enable one to develop more effective modeling,
or provide a better physical understanding of the limitations of RSD in extracting cosmological information.
For example, it is relatively straightforward to include the bias induced scale dependence
effect at the lowest order of PT, and we will present the results elsewhere \cite{Vlah2011}.
In this paper we have focused on theory, conceptual issues and general symmetries, while applications to simulations
and perturbation theory will be presented in
upcoming work \cite{Okumura2011,Vlah2011}.

\begin{acknowledgments}
We thank Teppei Okumura and Zvonimir Vlah for helpful discussions.
This work is supported by the DOE, the Swiss National Foundation under contract 200021-116696/1 and WCU grant R32-10130.
\end{acknowledgments}

\section{Introduction}

For certain problems and domains, collecting real-world data can be difficult and expensive.
Simulations are used to mimic real-world stimuli and can provide a clear, valid, easily reproducible digital representation of a system, often founded on principles of mathematics and physics.
However, oftentimes simulations may not be able to capture the full complexity of real-world data due to poor assumptions or bias built into the model.
This leads to a gap between the synthetic and real data distributions, and ultimately to a machine learning solution biased towards the larger synthetic dataset.
This is a common problem in the medical industry: if we want to study heart disease, collecting electrocardiogram (ECG) heartbeat signals from real subjects can be a time-consuming and tedious process, as medical data needs to be properly classified and anonymized, often requiring a trained professional.
There are many existing simulations that mimic a healthy heartbeat \citep{PhysioBank, simple_ecg_sim, advanced_ecg_sim, NeuroKit2}, but they poorly capture the noise seen in real data.
The complementary unhealthy heartbeat data has yet to be simulated consistently well.

Generative Adversarial Networks (GANs) \citep{GAN} have seen great success in generating synthetic data samples of different types, such as images, speech, and text, by having two neural networks compete against one another.
Furthermore, researchers have modified GANs to be used in combination with simulators to take advantage of simulated and unsupervised learning, in the SimGAN learning method \citep{SimGAN}.
Rather than starting from random noise as the input, as is traditionally done with GANs, a simulator's output is fed to the GAN's input. This change in procedure allows the GAN to take advantage of the simulator's annotations while producing more realistic samples than the original simulator.
However, GANs are predominantly used for image generation and other 2D data types, and 1D GAN data synthesis has not seen as many real-world applications.
Despite the successes of these models and methods, deep generative models are still difficult to train and evaluate. This difficulty stems from generative models' lack of objective truth.
Without truth, there are not always easy quantitative metrics for determining how realistic the models' outputs are.
Creating a successful network architecture and finding the optimal hyperparameters for it are also challenges, as they often require a researcher to manually tune the models through trial and error.
On top of that, a variety of loss functions \citep{GAN, WGAN}, network architectures \citep{DCGAN, styleGAN}, and training techniques \citep{mbd_feature_matching_loss, WGAN-GP, DRAGAN} have all been proposed, so it is difficult to select which ones would work best for a specific domain.

While evolutionary computation (EC) has been utilized before to help optimize and search for successful GAN architectures \citep{E-GAN} and loss functions \citep{TaylorGAN}, we could not find an evolutionary system that can perform neural architecture search for a SimGAN, adjust custom loss functions, train and evaluate models across multiple objectives, and optimize hyperparameters simultaneously, while being customizable to fit any data problem, including 1D data.

Thus, in this paper, we pose the following research questions:
\begin{itemize}
\item \textbf{RQ1:} How effective are SimGANs for generating realistic 1D signals from simulated inputs? Are SimGANs able to shift the synthetic data distribution closer to the real data distribution? Are they applicable to real-world problems like ECG heartbeat synthesis?
\item \textbf{RQ2:} What are examples of effective quantitative metrics for evaluating SimGAN outputs during and after training?
\item \textbf{RQ3:} How can we utilize evolutionary computation to search for novel SimGAN architectures and optimize existing models? Can we automate this process so that training, evaluation, optimization, and selection of models can be done simultaneously, while being flexible enough to be applied to a multitude of problems?
\end{itemize}

The main contributions of this paper are:
\begin{enumerate}
 \item Improvements to the SimGAN learning approach and architecture for 1D data
 \item New feature-based quantitative metrics for evaluating generated outputs
 \item An open source software implementation of ezCGP, an evolutionary computation framework for optimizing complex machine learning pipelines
 \item Examples of SimGAN outputs mimicking abnormal heart conditions from the MIT-BIH arrhythmia database \citep{MIT-BIH} and the effects of using the outputs for heartbeat classification
\end{enumerate}

The rest of the paper is organized as follows: Section \ref{Background} presents a brief overview of important concepts and related work.
Section \ref{Dataset} describes the dataset and simulators we used.
Section \ref{Methods} describes our SimGAN improvements and ezCGP's evolutionary process.
The results and discussion are in Section \ref{ResultsAndDiscussion}. Finally, conclusions are drawn and future work is outlined in Section \ref{ConclusionAndFutureWork}.

\section{Background}
\label{Background}
This section introduces some basic concepts in GAN training, the SimGAN learning method, and our ezCGP framework, and summarizes relevant studies related to this research.
\subsection{General GAN training}
GANs \citep{GAN} typically train two deep neural networks (DNNs): a generator ($G$), which generates synthetic samples, and a discriminator ($D$), which predicts whether the synthetic samples are real or fake.
These two networks are placed in an adversarial setup, which can be seen as a two-player minimax game in which the networks compete against each other.
A GAN uses two loss functions: one for generator training and one for discriminator training.
The generator loss is calculated by how likely the generator is to fool the discriminator, while the discriminator loss is calculated by how well the discriminator predicts correctly.
In other words, $D$ and $G$ try to find parameters to optimize the objective function $V(D, G)$:\n\\begin{equation}\n \\underset{G}{\\min} \\text{ } \\underset{D}{\\max} \\text{ } V(D, G) = \\mathbb{E}_{x}[\\log D(x)] + \\mathbb{E}_{z}[\\log(1 - D(G(z)))], \n\\end{equation}\nwhere $\\mathbb{E}_{x}$ is the expected value over all real data instances, $D(x)$ is the discriminator's prediction that real data instance $x$ is real, $G(z)$ is the generator's output when given input $z$, $D(G(z))$ is the discriminator's prediction that a fake instance is real, and $\\mathbb{E}_{z}$ is the expected value over all generated fake instances $G(z)$.\n\n\\subsection{SimGAN Overview}\nWith the explosion of deep learning models in recent years, it has become more important to collect as much data as possible.\nHowever, this is not always feasible, and learning from synthetic data may not achieve the desired performance due to a gap between synthetic and real data distributions.\nTo address limited access to real-world data, as well as the poor assumptions and overly simplified simulations behind synthetic data, researchers proposed a modification to the GAN framework named SimGAN \\citep{SimGAN} that utilizes simulated images and unsupervised learning, such that synthetic images from a simulator are used as inputs instead of random vectors.\nThe model tries to improve the realism of the simulator's output by learning features from the real data and then ``refining'' the simulated data to better match what is found in the real world. Hence, the generative model is named the refiner ($R$).\n\nAs such, SimGANs have two new loss functions.\nThe refiner uses a self-regularization loss to minimize the difference between the simulated input and the resultant refined output, with the underlying idea being that the refined outputs should be a transformation of the original synthetic data and should not deviate too far from the original simulated conditions. 
The formula is defined below: \n\\begin{equation}\n V(\\theta) = \\underset{i}{\\sum} \\left[ - \\log(1 - D(R_{\\theta}(x_{i}))) + \\lambda || R_{\\theta} (x_{i}) - x_{i}||_{1} \\right].\n\\end{equation}\nThe first term inside the sum is the same as the GAN generator loss.\nIn the second term, $\\lambda$ is a regularization weight constant, $R_{\\theta} (x_{i}) - x_{i}$ is the difference between the refiner output and the original input, and $||\\cdot||_{1}$ is the L1 norm.\n\nThe other loss is a local adversarial loss used by the discriminator network. \nAny local patch sampled from the refined image should have similar statistics to a real image patch.\nTherefore, rather than defining a discriminator network that was trained on the entire global image, the authors used a discriminator that classifies all local image patches separately and then aggregates them to predict whether the sample is real or fake.\n\nAdditionally, they introduced the idea of using a history buffer, where refined images generated by the previous steps of the refiner are sampled during each iteration of discriminator\ntraining, and the discriminator loss function is computed by summing the predictions on the current batch of refined images and the sampled history of images. This is meant to improve the stability of adversarial training, as the discriminator network tends to focus only on the latest refined\nimages and the refiner network\nmay reintroduce previously seen artifacts that the discriminator has\nforgotten about. \n\n\\subsection{Easy Cartesian Genetic Programming (ezCGP)}\nezCGP is an end-to-end evolutionary Cartesian Genetic Programming framework designed to be highly flexible and customizable to any researcher's task.\nThe graph representation of a genome, instead of a traditional tree representation, lends itself better to representing the graph architecture of a neural network. 
\nThe framework also introduces the novel idea of compartmentalizing the genome into a sequence of ``blocks'', each with its own evolutionary rules: a unique set of operators and hyperparameters, mating and mutation strategies, number of genes, and evaluation method. Each block can be thought of as a conceptual component of an algorithm; by segmenting, we allow the algorithm to be evolved without mixing components from one concept to another. This allows for full representations of complex machine learning pipelines in evolutionary computation.\nA more in-depth explanation of how it is applied in this work is included in Section \\ref{Methods}. We share our code online for further research and experimentation.\\footnote{\\url{https:\/\/github.com\/ezCGP\/ezCGP}}\n\n\\subsection{Related Work}\nMedical data and ECG synthesis are challenging since biological systems are dynamic in how the various parts of the body interact. Studies have attempted to use GANs to generate sufficiently realistic medical data \\citep{TSTR}, and one study has shown a successful example of using SimGANs to generate biologically plausible ECG signals \\citep{SimGAN_ECG}.\nThe resulting synthetic data was shown to improve ECG classification when used for training ECG classifiers.\nIn this example, they use the ECGSYN ECG simulator \\citep{advanced_ecg_sim} and tailor a specific self-regularization loss based on the simulator's system of ordinary differential equations during the training process.\nUnfortunately, the approach only generates examples of single heartbeats, which are extremely short in nature and are not always useful when diagnosing heart disease. 
\n\nIntegrating evolutionary techniques into traditional GAN approaches is not a novel idea.\nEvolutionary GAN (E-GAN) \\citep{E-GAN} evolves a population of generators, where the mutation method selects among different loss functions to train the generators, which then compete against a single discriminator. \nCOEGAN \\citep{COEGAN} utilizes neuroevolution to coevolve discriminator and generator architectures.\nMulti-objective E-GAN (MO-EGAN) \\citep{multi-EGAN} instead treats E-GAN training as a multi-objective optimization problem, and uses Pareto dominance (one solution dominates another when it is at least as good in every objective and strictly better in at least one) to select the best solutions across the selected objectives that measure diversity and quality.\nOther approaches propose improving GANs by discovering customized loss functions for each of a GAN's networks.\nFor example, TaylorGAN \\citep{TaylorGAN} treats the GAN losses as Taylor expansions and optimizes custom definitions through multi-objective evolution. These losses were meant to act as an alternative to traditional GAN losses such as Wasserstein loss \\citep{WGAN} or minimax loss. \nLipizzaner \\citep{Lipizzaner} uses spatial coevolution to evolve and train a grid of GANs simultaneously, and Mustangs \\citep{Mustangs} builds upon E-GAN and Lipizzaner by mutating the loss function across the coevolution grid.\n\n\\section{Dataset}\n\\label{Dataset}\n\n\\subsection{ECG Dataset}\nFor a real-world example of 1D signals that are difficult to collect, we applied our approach to ECG heartbeat data to help generate realistic samples of various arrhythmias. We want our approach to be generic with respect to the simulated source of data, i.e. 
it should work for multiple simulators, and we want our generated signals to be longer, which better represents what doctors would actually use to diagnose heart disease.\nWe use ECG recordings taken from the MIT-BIH Arrhythmia Database \\citep{MIT-BIH} for real heartbeat data, as the database is a public, well-established source of data for ECG heartbeat classification tasks.\nThe database contains 48 half-hour ECG records obtained from patients, where each record contains two 30-minute ECG lead signals collected with a sampling rate of 360 samples per second per channel. The database has annotations for heartbeat class information verified by independent experts.\nHowever, a single heartbeat alone is not always useful for true heart disease classification; longer recorded signals often prove more useful.\nFor this purpose, we have segmented the dataset samples into 10-second-long intervals, where an entire segment is flagged as ``abnormal'' if any beats are not annotated as normal by the experts. In particular, for training our SimGANs, we selected 32 samples of uncommon abnormal signals, an example of which is shown in Figure \\ref{abnormal_example}, but the whole dataset of both abnormal and normal heartbeats, roughly 2000 and 5000 signals respectively, was later used for our ECG classifiers described in Section \\ref{ECG_Classifier}.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[trim=0 20 0 50, clip, width=\\linewidth]{figures\/abnormal_example.png}\n \\caption{Example of an ECG heartbeat sample with an abnormal condition from the MIT-BIH Arrhythmia Database. 
Specifically, this sample contains an instance of a left bundle branch block beat around the 2500 mark.}\n \\Description{Abnormal heartbeat example}\n \\label{abnormal_example}\n\\end{figure}\n\nFor our simulated dataset, we use the NeuroKit2 \\citep{NeuroKit2} package as our ECG simulator to collect samples.\nNeuroKit2 is an open-source Python package meant to provide easy access to advanced bio-signal processing routines based on existing cited works.\nFor our heartbeat ECG use case, NeuroKit2 can simulate healthy heartbeats and uses either a simple simulation based on Daubechies wavelets \\citep{simple_ecg_sim}, which roughly approximate a cardiac cycle, or a more complex simulation based on ECGSYN \\citep{advanced_ecg_sim}. \nWe utilize both simulators, and our simulated dataset contains an equal number of samples from each. \nBoth simulators can specify conditions like duration, sampling rate, and heart rate, but ECGSYN can add synthetic noise drawn from a Laplacian distribution and even mimic a heart's random fluctuations between some beats.\nTo match our real ECG data from the MIT-BIH database, we use the same duration and sampling rate of 10 seconds and 360 samples per second, respectively. Roughly 2500 samples were generated from each simulator, with heart rates ranging from 50 to 100 beats per minute and up to 10 percent added noise, for a total of 5000 samples. An example of a signal generated by each simulator is shown in Figure \\ref{ECGSim_examples}. 
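The dataset-assembly procedure just described can be sketched as follows. This is a minimal illustration: the `simulate_ecg` helper is a hypothetical stand-in for an actual simulator call such as NeuroKit2's `ecg_simulate`, and the crude spike-train signal it produces is for demonstration only.

```python
import numpy as np

def simulate_ecg(duration_s, sampling_rate, heart_rate, noise, rng):
    """Hypothetical stand-in for a simulator call (e.g. NeuroKit2's
    ecg_simulate); synthesizes a crude train of Gaussian bumps plus
    Laplacian noise purely for illustration."""
    n = duration_s * sampling_rate
    t = np.arange(n) / sampling_rate
    beat_period = 60.0 / heart_rate          # seconds per beat
    phase = (t % beat_period) / beat_period  # position within each beat
    signal = np.exp(-((phase - 0.5) ** 2) / 0.001)  # stand-in QRS spikes
    return signal + rng.laplace(scale=noise + 1e-12, size=n)

def build_simulated_dataset(n_samples, rng):
    """10-second signals at 360 Hz, heart rates 50-100 bpm, up to 10% noise,
    matching the settings described in the text."""
    data = []
    for _ in range(n_samples):
        heart_rate = rng.uniform(50, 100)
        noise = rng.uniform(0, 0.1)
        data.append(simulate_ecg(10, 360, heart_rate, noise, rng))
    return np.stack(data)

rng = np.random.default_rng(0)
dataset = build_simulated_dataset(16, rng)  # 16 signals of shape (3600,)
```

In practice the two halves of the dataset would come from the wavelet-based and ECGSYN-based simulators rather than a single stand-in generator.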
\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[trim=0 20 0 50, clip, width=\\linewidth]{figures\/ecg_sim_examples.png}\n \\caption{Generated ECG heartbeat samples from NeuroKit2; the top plot uses the Daubechies wavelets simulator and the bottom plot uses ECGSYN}\n \\Description{Generated samples from NeuroKit2}\n \\label{ECGSim_examples}\n\\end{figure}\n\nFrom these examples, we can see that the real samples often include noise from the collection sensors and are often more complex and variable in their sequences of heartbeats.\n\n\n\\section{Research Methods}\n\\label{Methods}\nSimGANs are notoriously difficult to train and evaluate, as during training we must balance the learning process between the refiner and discriminator. There is often no objective quantitative measure of how good a generated output is. Our training process is largely inspired by the original SimGAN framework. However, we propose new additions to the framework based on current research and apply the approaches to 1D data during training.\n\nWe add specialized network architectures, generalizable feature-based quantitative evaluation metrics, and an evolutionary framework to help search for and optimize potential neural architectures and hyperparameters. \n\n\\subsection{Training Configuration}\n\nWe use the minimax loss described earlier as our adversarial loss between the refiner and discriminator, implemented as a simple binary cross-entropy loss. Our self-regularization loss in 1D space is actually simpler than the original, as we measure the absolute difference between the two signals' amplitudes at each point or bin. In terms of implementation, this is a simple L1 loss.\n\nThe original SimGAN authors noticed that the refiner network tends to over-emphasize certain features to fool the discriminator network, leading the refiner to produce artifacts, so they proposed using a local adversarial loss. 
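The combined refiner loss just described (the adversarial term plus the L1 self-regularization term) can be sketched framework-agnostically as follows; this is an illustrative NumPy version with assumed names, not our actual training code.

```python
import numpy as np

def refiner_loss(d_pred_refined, refined, simulated, lam=0.5, eps=1e-8):
    """Combined refiner loss: an adversarial term -log(1 - D(R(x))),
    where D outputs the probability that a signal is synthetic (the
    SimGAN convention), plus an L1 self-regularization term keeping
    the refined signal close to its simulated input.  lam and eps are
    illustrative choices."""
    adversarial = -np.log(1.0 - d_pred_refined + eps).sum()
    self_reg = lam * np.abs(refined - simulated).sum()
    return adversarial + self_reg
```

The L1 term grows as the refiner drifts away from the simulated input, so larger values of `lam` hold the refined waveform closer to the original simulated conditions.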
In our 1D use case, we quickly noticed that using just a local adversarial loss would lead to a collapse of the waveform that no longer resembles the true wave structure, as signals and other forms of 1D data are often sequential and meaning is derived from ordering. Therefore, to help prevent the formation of artifacts, we propose a Siamese dual discriminator network where we ensemble a global discriminator network and a local discriminator network together to capitalize on both a global and a local loss. The local discriminator network is fed segments of the signal of specified equal length, and the network predicts whether each segment is real or fake. The signal segment predictions are then averaged to obtain an aggregated prediction for the whole signal, which is then added to the prediction from the global discriminator network. Together they average their predictions to form the unified loss, which is backpropagated to update both networks' weights. Similarly, the local discriminator network also utilizes the history buffer and trains on previous iterations of refined waveforms.\n\nRecent GAN approaches try to enforce a soft Lipschitz constraint using gradient penalties \\citep{WGAN-GP} to help bound the losses to some extent and improve training stability; the hypothesis is that mode collapse (when the refiner learns a single example that fools the discriminator and uses it over and over) is the result of the competing game converging to bad local equilibria. As such, during training, we chose to use DRAGAN \\citep{DRAGAN} as our gradient penalty.\n\n\\subsection{Evaluation Metrics}\n\\label{sec:Evaluation}\nThese evaluation metrics are methods to measure a GAN's performance after it has completed training. Evaluating GANs is an open problem, as there is no objective truth in most cases. How does one classify if a generated signal is ``real'' enough? 
One way is simple visual qualitative analysis: ``does it look right?'' However, this often requires domain knowledge and is not scalable. The following metrics are a quantitative way to measure model performance. \n\n\\subsubsection{Feature Evaluation}\nOne method for evaluating the refiner outputs is extracting key features from the real data and comparing them to the features from the refined outputs. For every signal, we can effectively extract a feature vector. The following is a list of features we utilized that can generalize to multiple domains: \n\\begin{enumerate}\n \\item Amplitude\/height difference between peaks \n \\item Amplitude\/height ratio between peaks \n \\item Distance between peaks\n \\item Trough heights\n \\item Peak to trough amplitude\/height difference\n \\item Peak to trough distance\n \\item Area under the signal's curve\n \\item Number of times the signal changes direction\n \\item Roughness of signal measured by a difference between a rolling mean of values and the current value\n\\end{enumerate}\nNaturally, if the domain is deeply understood, more specific features can be used. We have ways to visualize the feature distributions so that we can qualitatively determine whether a refiner truly transforms the simulated data to better reflect the real data distribution, or whether one refiner is better than another based on a particular feature. \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[trim=0 5 0 25, clip, scale=0.4]{figures\/violin_plot_sample.png}\n \\caption{Example of a violin plot visualization of feature distributions of the number of times the signal changes direction. 
Note that we can qualitatively assess if the simulated distribution approaches the real distribution.}\n \\Description{Violinplot example }\n\\end{figure}\nWe use distance measurements like Kullback--Leibler divergence (KL-Div), Wasserstein distance, or Kolmogorov--Smirnov tests (KS-Stat) to quantitatively determine how far the refined output features are from our expected real features. We typically utilize KS-Stat as our distance metric, as it is a nonparametric test of the equality of two distributions. However, these distance metrics expect the distributions to be the same size and effectively ``map'' instances to one another. This is difficult for SimGANs, as the simulated data size is expected to be much larger than the real data size, and there is no true mapping between real and simulated data. Therefore, to use distance metrics, we repeatedly sample the distributions until we are confident every signal has been sampled, and then average the distances calculated by our metric.\n\nAnother metric is a statistical t-test for unequal distributions to measure how likely it is that the two samples are drawn from the same distribution. We use a one-tailed, unequal-variance Welch's t-test to compare the distributions of the extracted features from the refined and real signals. Unlike the traditional use of t-tests, where we look for the p-value to be small to show there is a significant difference between two distributions, we want to show that there is no significant difference between the feature distributions of the real and refined data. Therefore, we want our p-value to be close to $1$. 
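As an illustrative sketch of this feature-distribution comparison (not our actual evaluation code): in practice a library routine such as SciPy's `ttest_ind` with `equal_var=False` would compute Welch's test; here the t distribution is approximated by a standard normal, which is reasonable for large feature samples, and a two-sided p-value is returned for simplicity.

```python
import math
import numpy as np

def welch_p_value(a, b):
    """Welch's unequal-variance t statistic between two feature samples,
    with the t distribution approximated by a standard normal and a
    two-sided p-value returned.  Values near 1 indicate the two feature
    distributions are statistically indistinguishable."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    t = (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))
    return math.erfc(abs(t) / math.sqrt(2.0))  # 2 * (1 - Phi(|t|))
```

Identical feature distributions give a t statistic near zero and thus a p-value near 1, which is the desired outcome for a well-trained refiner.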
We can use the t-test on each feature individually, then keep track of the number of features where the simulated distribution was significantly different from the real distribution, and average the resultant p-values across all the features.\n\n\\subsubsection{FID Score}\nFr\\'echet Inception Distance \\citep{fidscore} was originally used to measure the quality of generated images.\nFID scores measure the difference between sets of generated and ground truth images by comparing the distributions (specifically the mean and covariance) of model activations of each set.\nAn oracle network, typically Inception \\citep{inception}, is used for computing the activations.\nUnfortunately, there is not enough data to train a custom oracle network for the ECG dataset. Therefore, we convert ECG waveforms to 2D images by stacking the waveform and zero-padding the width to the input shape of Inception. \n\n\\subsubsection{Tournament Evaluation}\nTournament evaluation is a peer-to-peer evaluation method that is domain-agnostic \\citep{tournament}. The underlying idea is that our SimGAN population competes with itself: every refiner and discriminator is paired with one another in every possible combination, and each refiner tries to fool the discriminator it is paired against. Each network is assigned an initial rating based on the Glicko \\citep{glicko} system, and as networks win or lose matches, their ratings go up or down. The idea is that good refiners will be able to fool many types of discriminators if their outputs are close to the real signals.\nHowever, as this method is peer-to-peer, it does not provide a direct measure of how well the refiners are performing, and it changes with the population. In fact, it is entirely possible that the entire population performs poorly. 
It should be noted that during our experiments, we noticed that the tournament would sometimes reward refiners that produced random noise, as the discriminators were not trained on random noise for very long. Thus, despite a high tournament ranking, the resulting refiner is not guaranteed to generate good data. However, since GANs do not converge easily, a model at an earlier training step can easily produce better outputs than ones at later steps. Therefore, we log each SimGAN at different intervals and use tournament evaluation to select the best point in time to represent the individual SimGAN as it is sent through ezCGP.\n\n\\subsection{Initial Seed}\nWith our new training improvements, we define a hand-designed SimGAN architecture that we hypothesized would be successful, and used it as an initial seed for our evolutionary process. Both the refiner and discriminator networks are inspired by DCGAN \\citep{DCGAN}, a deep convolutional GAN, with ResNet \\citep{resnet} blocks used as our convolutions. To further improve our discriminator so that it makes more accurate predictions while providing useful feedback to the refiner, we utilize the Siamese dual discriminator described earlier, a mini-batch discrimination \\citep{mbd_feature_matching_loss} layer to discriminate between whole mini-batches of samples rather than between individual samples, and a feature extractor layer which extracts the features described above. This seed is visualized in Figure \\ref{seed}.\n\n\n\\begin{figure*}[!htbp]\n \\centering\n \\includegraphics[width=.9\\linewidth]{figures\/SeededIndividual.pdf}\n \\caption{Architecture of our seeded individual for ezCGP. Notice the three distinct genome blocks. 
Note that only the active genes are shown.}\n \\Description{seeded example }\n \\label{seed}\n\\end{figure*}\n\n\\subsection{ezCGP Optimization}\nDue to the compartmentalization of a genome in ezCGP, we define our genome as the following sequence of blocks: one block to build the architecture of the refiner network, another block to build the architecture of the discriminator network, and a third block to set the network training hyperparameters. Each block has its own set of operators, genes, and rule set to fit its definition. For our problem, we defined operators to act as various neural network layer types, to better search for SimGAN improvements. A wide variety of operators and blocks allows for easy evolution and neural architecture search, especially if an initial seed is given. \nExample operators include: convolutional layers, concatenation layers, linear layers, pooling layers, dropout layers, batch normalization layers, ResNet blocks, activation layers, flatten layers, mini-batch discrimination layers, and custom feature extraction layers.\n\nEach operator has its own set of hyperparameters that can be tuned, but hyperparameters that guide the individual's learning process can also be tuned, such as optimizer types, learning rates, number of training steps, regularization weights, etc. Altogether, these operators allow us to build and define deep neural networks. Mating is defined to be strictly the exchange of whole blocks between parents; each block is set to mate with 33 percent probability. Mutation is defined as either a change of a gene's position in a block, a change of the operation used at a gene, or a change of a hyperparameter requested by the operation. The two neural network blocks have their genetic material mutated with a probability of 20 percent, and the remaining block is mutated with a probability of 10 percent. 
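The block-level mating and mutation scheme described above can be sketched as follows. This is a minimal illustration with hypothetical block names and a stand-in mutation; the real ezCGP genome operations act on genes, operators, and hyperparameters rather than on strings.

```python
import random

# Per-block probabilities from the text: each block mates with 33%;
# the two network blocks mutate with 20%, the hyperparameter block 10%.
BLOCKS = ["refiner", "discriminator", "hyperparams"]
MATE_PROB = {block: 0.33 for block in BLOCKS}
MUTATE_PROB = {"refiner": 0.20, "discriminator": 0.20, "hyperparams": 0.10}

def mate(parent1, parent2, rng):
    """Mating is strictly the exchange of whole blocks between parents."""
    child = dict(parent1)
    for block in BLOCKS:
        if rng.random() < MATE_PROB[block]:
            child[block] = parent2[block]
    return child

def mutate(individual, rng):
    """Each block is mutated with its own probability; the 'mutation'
    here is a stand-in marker appended to the block's genome."""
    for block in BLOCKS:
        if rng.random() < MUTATE_PROB[block]:
            individual[block] = individual[block] + "*"
    return individual
```

Because whole blocks are exchanged, a child never mixes genes from one conceptual component (e.g. the refiner) into another (e.g. the hyperparameters).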
After training, each individual was scored by the four objectives mentioned earlier: minimize the FID score, minimize the KS statistic, minimize the number of features where the simulated distributions were significantly different from the real distributions, and maximize the average p-value of the feature distributions. The initial population size was set to 4, as a successful genome can easily spawn many children and a low initial population helps avoid potential computational constraints such as GPU memory limits. We maintained a hall-of-fame with a max size of 40, and the next generation was selected using NSGA-II \\citep{NSGA-II} from a pool of individuals: the previous generation's offspring and the hall-of-fame. Algorithm \\ref{ezCGP_algo} shows the pseudo-code for the evolution process. We ran this for 48 hours on a single computer with one Tesla V100-PCIE 32GB GPU, 128 GB RAM, and a Xeon-Gold6126 CPU.\n\n\n\\begin{algorithm}\n\\caption{ezCGP Evolution}\n\\begin{algorithmic} \n\\STATE $init\\_population()$\n\\WHILE{not converged}\n\\STATE{$parents=get\\_parent\\_list()$}\n\\FOR{$p1,p2$ in $parents$}\n\\FOR{$i^{th}block$ in $range(num\\_blocks)$}\n\\STATE{$roll=random\\_number()$}\n\\IF{$roll < i^{th}block\\_prob\\_mate$}\n\\STATE{$population += mate(p1,p2)$}\n\\ENDIF\n\\ENDFOR\n\\ENDFOR\n\n\\FOR{$indiv$ in $population$}\n\\FOR{$i^{th}block$ in $range(num\\_blocks)$}\n\\STATE{$roll=random\\_number()$}\n\\IF{$roll < i^{th}block\\_prob\\_mutate$}\n\\STATE{$mutate(indiv)$}\n\\ENDIF\n\\ENDFOR\n\\ENDFOR\n\n\\FOR{$indiv$ in $population$}\n\\STATE{$train(indiv)$}\n\\STATE{$score(indiv)$}\n\\ENDFOR\n\n\\STATE{$population = select\\_population(population)$}\n\n\\ENDWHILE\n\n\\end{algorithmic}\n\\label{ezCGP_algo}\n\\end{algorithm}\n\n\\section{Results and Discussion}\n\\label{ResultsAndDiscussion}\n\n\\subsection{Novel Configurations}\nezCGP searches for neural architectures, so over time we should see improved configurations for SimGANs across the population. 
Figures \\ref{evolved1} and \\ref{evolved2} show examples of evolved networks. In total, over 40 individuals were created by the end of generation three. \n\n\\begin{figure*}[!htbp]\n \\centering\n \\includegraphics[width=.9\\linewidth]{figures\/Evolved_Individual_No_Feature.pdf}\n \\caption{Architecture of an evolved individual from ezCGP. Notice the discriminator network opts not to use the feature extractor and adds additional dropout layers, causing a stronger regularization effect to occur. The refiner network uses a different activation function for the first ResNet block.}\n \\Description{Evolved individual 1 example }\n \\label{evolved1}\n\\end{figure*}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.5]{figures\/Evolved_Individual_Concat.pdf}\n \\caption{Architecture of another evolved individual's discriminator network from ezCGP. Notice the network takes advantage of concatenate layers to use features from previous convolutions.}\n \\Description{Evolved individual 2 example }\n \\label{evolved2}\n\\end{figure}\n\nThe evolved refiner genomes did not have as many significant changes as the discriminators. We hypothesize this is because we implemented a ResNet block as a large single gene-operator with less evolvability; the seeded refiner thus generally shows good performance overall and overpowers the discriminators. \n\n\\subsection{Quality of Generated ECG Data}\nFigure \\ref{ECGSim_examples} shows examples where we can see that the real samples of ECG heartbeat signals are often rougher than the simulated examples, due to noise from the collection sensors. Moreover, more complex conditions like blockages cannot be generated with the given simulators. 
However, after training our seeded SimGAN model using the real and simulated data, we can qualitatively observe in Figure \\ref{ECG_Gen_0} that the SimGAN's refined waveforms include noise and features present in the real data, though they may not be as principled as the simulation.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[trim=0 65 0 65, clip, width=\\linewidth]{figures\/seeded_output.png}\n \\caption{Generated ECG heartbeat sample from our seeded SimGAN individual in generation 0.}\n \\Description{Generated sample from Gen 0 }\n \\label{ECG_Gen_0}\n\\end{figure}\n\nTo analyze the effects of evolution, we select a Pareto individual in our last generation from ezCGP and compare how the optimized refiner transforms the same wave. Figure \\ref{ECG_Gen_5} shows improvement, as the refined signal contains characteristics beyond learned noise.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[trim=0 65 0 65, clip, width=\\linewidth]{figures\/evolved_output.png}\n \\caption{Generated ECG heartbeat sample from our optimized SimGAN individual from generation 9.}\n \\Description{Generated sample from Gen 9}\n \\label{ECG_Gen_5}\n\\end{figure}\n\n\n\\subsection{Empirical Analysis}\n\\label{ECG_Classifier}\nSynthetic data is useful when it is sufficiently similar to real data. A TSTR (Train on Synthetic, Test on Real) score empirically shows that the data is similar enough to use: the synthetic data is used to train a model, which is then tested on real data \\citep{TSTR}. We extend the TSTR score by training the same type of classifier on a subset of real data, a combination of real and simulated data, and a combination of real and refined data. We test the trained classifiers on the same withheld set of real signals. \n\nAs Table \\ref{Tab:comp} shows, we train four types of binary classifier models on the ECG signals output from our trained SimGAN models to predict whether a signal contains normal or abnormal characteristics. 
We can see that there is a performance boost in F1 scores after the models are trained on the refined data, compared to just the real or the real and simulated data. This indicates that the refined training data is likely closer to the real test set than the simulated training data was. The refiner showcases how we can balance the training data with realistic abnormal heartbeats that were created from simulated healthy heartbeats. Similarly, classifiers trained on the data produced by our evolved SimGAN individual sometimes perform better than those trained on data from the initial seeded individual, which suggests the evolved refiner's signals are of higher quality. Our refined signals are also model-agnostic, as multiple models show similar improvement after being trained on the same data. \n\n\\begin{table}\n\\centering\n\\caption{F1-Scores of ECG classifiers with dataset variations}\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{lrrrr}\n\\toprule\n& AdaBoost\n& Gradient Boosted Tree \n& Multi-Layer Perceptron\n& 1D-SqueezeNet \\citep{SqueezeNet}\\\\ \n\\midrule\nReal Waveforms \n& 0.205 & 0.256 & 0.293 & 0.875 \\\\ \nReal + Simulated Normal \n& 0.176 & 0.243 & 0.313 & 0.854 \\\\\nReal + Seed-refined Abnormal \n& \\textbf{0.247} & 0.341 & \\textbf{0.491} & 0.884\\\\\nReal + Evolved-refined Abnormal \n& 0.227 & \\textbf{0.354} & 0.467 & \\textbf{0.916}\\\\ \\bottomrule\n\\end{tabular}%\n}\n\\label{Tab:comp}\n\\end{table}\n\n\\section{Conclusion and Future Work}\n\\label{ConclusionAndFutureWork}\nIn this paper we show a method for using genetic evolution in conjunction with SimGANs to refine simulated data to better fit the real distribution.\nWe applied this method to an electrocardiogram dataset to generate refined waveforms, and show that using the refined waveforms to train an ECG classifier improves its performance over using the unrefined waveforms. 
\n\nWe could potentially adapt this work to other forms of one-dimensional data, such as audio or electroencephalography data. Simulated data will help fill the datasets, but there is still a worry that simulated and real-world data are separable. The method could also be modified to work for unevenly-spaced time-series data.\n\nThere is ample room for improvements in the evolution process. We can perform efficiency updates to encourage more diverse architectures to be found. Other seeds could be tested with various architectures that work well on one-dimensional data (such as transformers). Data preprocessing blocks and methods can be added as gene-operators, allowing ezCGP to specify individuals that both change model architecture\/hyperparameters and the preprocessing for its inputs. Our dual discriminator network could be generalized for ensembled networks for both discriminator and refiner networks, and could lend itself to co-evolution solutions.\n\nGiven the unclear nature of evaluating GANs, multiple objectives are beneficial for evaluating them. However, too many objectives can make the selection process overly complex. While Section \\ref{sec:Evaluation} showed a multitude of objectives for evaluation, we only used four for selection. Future work could explore creating more representative objectives or utilizing a many-objective selection algorithm. \n\nSince we have described a multitude of different losses and configurations which can be used to train SimGAN models, we can see that it quickly becomes very complex and there is no definitive reason for choosing one loss over another. There were many other losses and training techniques that have shown success in other studies that we chose not to include, such as Wasserstein Loss \\citep{WGAN} and the original WGAN-GP gradient penalty \\citep{WGAN-GP}. 
Therefore, we propose to model the loss function as a symbolic regression problem, where all the losses and evaluations will be used, each weighted by an evolved constant that can be optimized.\n\n\n\\bibliographystyle{ACM-Reference-Format}\n