diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeise" "b/data_all_eng_slimpj/shuffled/split2/finalzzeise" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeise" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nWithin a fixed class of dark energy models, such as the cosmological constant\nor scalar field quintessence, various cosmological observables are all\ninterrelated by the properties of the class itself.\nThe narrower the class,\nthe higher the expected correlation between measurements of different\nobservables. Therefore, given a class of dark energy models, constraints from\none set of cosmic acceleration observables make predictions for other observables.\nFor example, it is well known that since the first release of WMAP data\n\\cite{Spergel_2003}, the Hubble constant in a spatially flat universe with a\ncosmological constant and cold dark matter ($\\Lambda$CDM) has been predicted\nto a precision better than it has yet been measured. Predictions like this one\ntherefore offer the opportunity for more precise measurements to falsify\nthe dark energy model (in this case, flat $\\Lambda$CDM)~\\cite{Hu_standards}.\n\nIn a previous paper (hereafter MHH) \\cite{PaperI}, we showed how this idea\ncan be generalized to additional acceleration observables and wider classes of\ndark energy models. Other observables include the expansion rate $H(z)$, the \ncomoving angular diameter distance $D(z)$, and the linear growth function\n$G(z)$. The model classes we considered include a cosmological constant, \nwith and without\nspatial curvature, and scalar field quintessence models, with and without \nearly dark energy and spatial curvature components. \nUsing forecasts for a Stage IV \\cite{DETF} SN sample and Planck CMB data, we found \nthat future data sets will provide numerous strong predictions that we \nmay use to attempt to falsify various acceleration paradigms.\n\n\\vspace{1cm}\nIn this paper, we evaluate the predictive power of {\\it current} measurements\nto constrain the expansion rate, distance, and growth as a function of\nredshift. Specifically, we consider current measurements of supernovae (SN),\nthe cosmic microwave background (CMB), baryon acoustic oscillations (BAO), and\nthe Hubble constant ($H_0$). These predictions \ntarget the redshift ranges and required precision for future measurements\nseeking to rule out whole classes of models for cosmic acceleration.\n\nOur approach complements studies that seek to constrain an ever expanding set\nof parameters of the dark energy. The most ambitious analyses currently\nutilize $\\sim 5$ parameters to describe the dark energy equation of state\n$w(z)$ \\cite{Huterer_Cooray,Wang_Tegmark_2005,Riess_2006,Zunckel_Trotta,\n Sullivan_Cooray_Holz,Zhao_Huterer_Zhang,Zhao_Zhang:2009,Serra:2009}. We\ntake these studies in a new direction: rather than constraining parameters\nassociated with the equation of state, we propagate constraints from the data\ninto allowed ranges for $H(z)$, $D(z)$, $G(z)$, and auxiliary observables that\ncan be constructed from them through a principal component representation of\n$w(z)$ that is complete in these observables for $z<1.7$. 
This work goes\nbeyond previous studies that are similar in spirit\n(e.g.~\\cite{Kujat,Sahlen05,Huterer_Peiris,Chongchitnan_Efstathiou,ZhaKnoTys08})\nby directly applying constraints from current data sets\n to complete representations of several dark energy model classes and making\n concrete predictions for a number of observable quantities.\n\nThis paper is organized as follows.\nWe begin in Sec.~\\ref{sec:methods} with a discussion of the methodology of\npredicting observables within classes of dark energy models, including \ndescriptions of each of the acceleration observables, cosmological data sets, \nand model classes. We present our\npredictions from current data in Sec.~\\ref{sec:predict} \nand discuss the results in Sec.~\\ref{sec:discussion}.\n\n\n\\section{Methodology}\n\\label{sec:methods}\n\n\n\n\\subsection{Acceleration Observables}\n\\label{sec:obs}\n\nThere are two general types of acceleration observables: those related\nto the expansion history and geometry of the universe, and those related to the growth of structure.\nIn terms of a general evolution for the dark energy \nequation of state $w(z)$, \nthe expansion history observables are the Hubble expansion rate\n\\begin{eqnarray}\nH(z) &=& H_0 \\left[ \\Omega_{\\rm m} (1+z)^3 + \\Omega_{\\rm DE} f(z) + \\Omega_{\\rm K} (1+z)^2 \\right]^{1\/2},\n\\nonumber \\\\\n&& f(z) = \\exp\\left[3 \\int_0^z dz' \\frac{1+w(z')}{1+z'}\\right], \\label{eq:hz} \n\\end{eqnarray}\nwhere $\\Omega_{\\rm m}$ and $\\Omega_{\\rm DE}$ are the present matter and dark energy densities, \nrespectively, as fractions of the critical density for flatness, \nspatial curvature is parametrized by $\\Omega_{\\rm K}\\equiv 1-\\Omega_{\\rm m}-\\Omega_{\\rm DE}$, \nand the small contribution of radiation at $z\\sim 1$ is neglected; \nand the comoving angular diameter distance\n\\begin{equation}\nD(z) = \\frac{1}{(|\\Omega_{\\rm K}|H_0^2)^{1\/2}} S_{\\rm K}\\left[(|\\Omega_{\\rm K}|H_0^2)^{1\/2} \n\\int_0^z \\frac{dz'}{H(z')} \\right],\n\\label{eq:dist}\n\\end{equation}\nwhere the function $S_{\\rm K}(x)$ is equal to $x$ in a flat universe ($\\Omega_{\\rm K}=0$), \n$\\sinh x$ in an open universe ($\\Omega_{\\rm K}>0$), and $\\sin x$ in a closed universe ($\\Omega_{\\rm K}<0$).\nThe growth of\nlinear density perturbations $\\delta \\propto G a$\nis given by\n\\begin{equation}\nG'' + \\left(4+\\frac{H'}{H}\\right)G' + \\left[\n3+\\frac{H'}{H}-\\frac{3}{2}\\Omega_{\\rm m}(z)\\right]G = 0,\n\\label{eq:growth}\n\\end{equation}\nwhere primes denote derivatives with respect to $\\ln a$ and $\\Omega_{\\rm m}(z) = \\Omega_{\\rm m}\nH_0^2(1+z)^3\/H^2(z)$. We assume scales for which the dark energy density is\nspatially smooth compared with the matter and normalize $G(z)=1$ at $z=10^3$.\n\n\nThere are several auxiliary quantities related to the growth function that are also\ninteresting to examine. 
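Before turning to these auxiliary quantities, we note for concreteness that the three observables above can be evaluated for an arbitrary $w(z)$ with a few lines of Python. The following minimal sketch (added purely for illustration; it is not the code used in this analysis, and the function and variable names are ours) integrates the expressions for $H(z)$, $D(z)$, and $G(z)$, initializing the growth equation deep in matter domination ($G=1$, $dG\/d\\ln a=0$ at $z=10^3$):
\\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid, solve_ivp

def expansion_and_distance(w_of_z, Om=0.27, OK=0.0, H0=70.0,
                           zmax=3.0, nz=3000):
    # H(z): evaluate f(z) by cumulative integration of 3(1+w)/(1+z)
    z = np.linspace(0.0, zmax, nz)
    ODE = 1.0 - Om - OK
    f = np.exp(cumulative_trapezoid(3.0 * (1.0 + w_of_z(z)) / (1.0 + z),
                                    z, initial=0.0))
    H = H0 * np.sqrt(Om * (1 + z)**3 + ODE * f + OK * (1 + z)**2)
    # comoving angular diameter distance; S_K is sinh, identity, or sin.
    # chi and D are in units of c (multiply by 299792.458 km/s for Mpc)
    chi = cumulative_trapezoid(1.0 / H, z, initial=0.0)
    if OK > 0.0:
        D = np.sinh(np.sqrt(OK) * H0 * chi) / (np.sqrt(OK) * H0)
    elif OK < 0.0:
        D = np.sin(np.sqrt(-OK) * H0 * chi) / (np.sqrt(-OK) * H0)
    else:
        D = chi
    return z, H, D

def growth(w_of_z, Om=0.27, OK=0.0, a_init=1e-3, n=4000):
    # growth equation in ln(a), normalized to G=1 in matter domination;
    # w_of_z must be defined up to z = 1/a_init - 1
    ODE = 1.0 - Om - OK
    lna = np.linspace(np.log(a_init), 0.0, n)
    a = np.exp(lna)
    z = 1.0 / a - 1.0
    g = 3.0 * (1.0 + w_of_z(z)) / (1.0 + z)
    f = np.exp(cumulative_trapezoid(g[::-1], z[::-1], initial=0.0))[::-1]
    E2 = Om * a**-3 + ODE * f + OK * a**-2       # (H/H0)^2
    dlnH = np.gradient(0.5 * np.log(E2), lna)    # d ln H / d ln a
    Om_a = Om * a**-3 / E2                       # Omega_m(z)
    def rhs(x, y):
        b, c = np.interp(x, lna, dlnH), np.interp(x, lna, Om_a)
        return [y[1], -(4.0 + b) * y[1] - (3.0 + b - 1.5 * c) * y[0]]
    sol = solve_ivp(rhs, (lna[0], lna[-1]), [1.0, 0.0],
                    t_eval=lna, rtol=1e-8)
    return z, sol.y[0]
\\end{verbatim}
The equation-of-state argument can be any callable, e.g.\\ a function returning the constant $-1$ for a cosmological constant.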
Since growth measurements like the evolution of the\ncluster abundance often compare the change in growth relative to the present, we\nalso consider a different normalization for the growth function,\n\\begin{equation}\nG_0(z)\\equiv \\frac{G(z)}{G(0)}.\n\\end{equation}\nVelocity field measurements, on the other hand, are sensitive to the growth {\\it rate}\n\\begin{equation}\nf(z) \\equiv 1 + \\frac{G'}{G}.\n\\end{equation}\nSpecifically, the amplitude of the velocity power spectrum can be measured\nfrom redshift space distortions and constrains $f(z)G(z)$ independently of galaxy bias (e.g.\\ see~\\cite{Percival_White}).\nFinally, given that the growth rate is approximately related to expansion\nhistory observables by $f(z) = [\\Omega_{\\rm m}(z)]^{\\gamma}$ where the growth index is\n$\\gamma \\approx 0.55$ for flat $\\Lambda$CDM\\ \\cite{WangSteinhardt,Linder_gamma} we\nalso consider predictions for\n\\begin{equation}\n\\gamma(z)\\equiv {\\ln[f(z)] \\over \\ln[\\Omega_{\\rm m}(z)]} \\,.\n\\label{eq:gamma_z}\n\\end{equation}\nNote however that $\\gamma(z)$ is not a direct observable but rather \nmust be inferred from a combination of measurements in a \nspecific dark energy context.\n\n\nWe ignore the influence of massive neutrinos throughout this study. \nThe effect of massive neutrinos on the growth of structure is\nsignificantly scale-dependent, \nbut on present linear scales well below the horizon, \n$k\\sim 0.01-0.1~h$~Mpc$^{-1}$, the growth suppression from a \nnormal neutrino mass hierarchy with $\\sum m_{\\nu}\\sim 0.05$~eV \\cite{nu_oscill}\nis $\\lesssim 1\\%$ in $G(z)$ and $f(z)G(z)$ and smaller for other \nobservables. The maximum decrement in growth from nearly-degenerate \nneutrinos with $\\sum m_{\\nu}\\sim 0.5$~eV (e.g.~\\cite{Reid_SDSSDR7}) \nis $\\sim 1-10\\%$ on these scales. In the predictions we present here, \nthese effects would appear as an additional ``early'' dark energy \ncomponent with $w\\approx 0$. Future precise measurements of $\\sum m_{\\nu}$ \nfrom independent data could be used to correct the growth predictions here \nby scaling them by the appropriate suppression factor.\n\n\n\n\n\\subsection{Constraints from Current Data}\n\\label{sec:data}\n\nThe main observational constraints we consider when making predictions for\nacceleration observables include relative distances at $z \\lesssim 1.5$ from\nType Ia SNe and absolute distances at $z_*=1090$ from the CMB, $z_{\\rm BAO}\\approx\n0.35$ from BAO, and $z_h\\approx 0.04$ from low-redshift SNe calibrated with\nmaser and Cepheid distances. Since low-$z$ distances mainly probe the Hubble\nconstant for smoothly varying $w(z)$, we refer to the low-$z$ SN\ncalibration as an $H_0$ constraint. The CMB data additionally constrain\nparameters that impact dark energy models such as the matter density $\\Omega_{\\rm m} h^2$\nand the fraction of dark energy density at recombination.\n\nIn the simplest classes of models, the SN and CMB data suffice to make\naccurate predictions for expansion and growth observables. In more complex\nclasses, BAO and $H_0$ constraints on distances are necessary. Even\nin these cases, predictive power is still retained in that measured distances\nto a few specific redshifts constrain $H(z)$, $D(z)$, and $G(z)$ at all redshifts.\nWe now describe each of these data sets in more detail.\n\nThe Type Ia SN sample we use is the Union compilation~\\cite{SCP_Union}. 
These\nSN observations measure relative distances, $D(z_1)\/D(z_2)$, over a range of\nredshifts spanning $0.015 \\leq z \\leq 1.551$, with most SNe at $z \\lesssim\n1$. We add the SN constraints using the likelihood code for the Union data\nsets \\cite{Union_like}, which includes estimated systematic errors for the SN\ndata~\\cite{SCP_Union}.\n\nFor the CMB, we use the most recent, 5-year release of data from the WMAP\nsatellite \\cite{Komatsu_2008,Nolta_2008,Dunkley_2008} employing the likelihood\ncode available at the LAMBDA web site \\cite{WMAP_like}. Unlike the CMB\ndistance priors on $D(z_*)$ and $\\Omega_{\\rm m} h^2$ used for the forecasts in MHH, the\nlikelihood used here contains the full information from the CMB angular power\nspectra; in particular this provides sensitivity to large fractions of\nearly dark energy at recombination as well as information about late-time dark\nenergy and spatial curvature from the ISW effect without necessitating\nadditional priors. We compute the CMB angular power spectra using the code\nCAMB \\cite{Lewis:1999bs,camb_url} modified with the parametrized\npost-Friedmann (PPF) dark energy module \\cite{PPF,ppf_url} to include \n models with general dark energy equation of state evolution\nwhere $w(z)$ may cross $w=-1$. Note that while our {\\it predictions} for\ngrowth observables apply to scales on which dark energy is smooth relative to\nmatter, the CAMB+PPF code self-consistently accounts for the effects of \nscale-dependent dark energy perturbations on the CMB anisotropies.\n\n\nThe BAO constraint we use is based on the measurement of the correlation\nfunction of SDSS Luminous Red Galaxies (LRGs) \\cite{Eisenstein}, which\ndetermines the distance and expansion rate at $z_{\\rm BAO}\\approx 0.35$ through the\ncombination $D_V(z) \\equiv [z D^2(z)\/H(z)]^{1\/3}$. We implement this\nconstraint by taking the volume average of this quantity, $\\langle D_V \\rangle$, over\nthe LRG redshifts, $0.16z_{\\rm max}$, we adopt a simple parametrization by \nassuming a constant equation of state, $w(z>z_{\\rm max})=w_{\\infty}$, \nrestricted to $-1\\leq w_{\\infty}\\leq 1$. 
The dark energy density at $z>z_{\\rm max}$ can be\nextrapolated from its value at $z_{\\rm max}$ as\n\\begin{equation}\n\\rho_{\\rm DE}(z) = \\rho_{\\rm DE}(z_{\\rm max})\\left(\\frac{1+z}{1+z_{\\rm max}}\\right)^{3(1+w_{\\infty})}.\n\\end{equation}\nFor more restricted model classes where we assume that there is \nno significant early dark energy, \nwe fix $w_{\\infty}=-1$ since a constant dark energy density \nrapidly becomes negligible relative to the matter density \nat increasing redshift.\nNote that the possibility of early dark energy is automatically \nincluded in the smooth $w_0-w_a$\nmodel class where the equation of state at high redshift is $w\\approx w_0+w_a$.\n\n\nIn addition to the dark energy parameters described above ($\\bm{\\theta}_{\\rm DE}$), we include\ncosmological parameters that affect the CMB angular power spectra but \nnot the acceleration observables ($\\bm{\\theta}_{\\rm nuis}$): the physical baryon density $\\Omega_{\\rm b} h^2$, \nthe normalization and tilt of the primordial curvature\nspectrum $\\Delta_{\\cal R}^2 = A_s (k\/k_0)^{n_s-1}$ with $k_0 = 0.05$~Mpc$^{-1}$,\nand the optical depth to reionization $\\tau$.\nThis brings our full set of parameters for $\\Lambda$CDM\\ to $\\bm{\\theta}_{\\Lambda}=\\bm{\\theta}_{{\\rm DE},\\Lambda}+\\bm{\\theta}_{\\rm nuis}$,\nand for quintessence and smooth $w_0-w_a$ dark energy\nwe define the analogous parameter sets with\n\\begin{eqnarray}\n\\bm{\\theta}_{{\\rm DE},\\Lambda} &=&\\{\\Omega_{\\rm m} h^2, \\Omega_{\\rm m}, \\Omega_{\\rm K}\\} \\,,\\nonumber\\\\\n\\bm{\\theta}_{\\rm DE,Q} &=& \\bm{\\theta}_{{\\rm DE},\\Lambda} + \\{\\alpha_1,\\ldots, \\alpha_{N_{\\rm max}}, w_{\\infty}\\} \\,,\\nonumber\\\\\n\\bm{\\theta}_{\\rm DE,S} &=& \\bm{\\theta}_{{\\rm DE},\\Lambda} + \\{w_0, w_a\\} \\,,\\nonumber\\\\\n\\bm{\\theta}_{\\rm nuis} &=& \\{ \\Omega_{\\rm b} h^2, n_s, A_s, \\tau \\} \\,,\n\\label{eq:parametersfull}\n\\end{eqnarray}\nwhere we count $\\Omega_{\\rm m}$ and $\\Omega_{\\rm K}$ as dark energy parameters\nsince $\\Omega_{\\rm DE}= 1-\\Omega_{\\rm m} - \\Omega_{\\rm K}$.\nNote that the Hubble constant is a derived parameter, $h=H_0\/(100~{\\rm km~s}^{-1}{\\rm\n Mpc}^{-1}) = (\\Omega_{\\rm m} h^2\/\\Omega_{\\rm m})^{1\/2}$. \nAlthough the observable predictions mainly depend on constraints on the \ndark energy parameters $\\bm{\\theta}_{\\rm DE}$, we include the additional ``nuisance'' \nparameters $\\bm{\\theta}_{\\rm nuis}$ due to degeneracies between \n$\\bm{\\theta}_{\\rm DE}$ and $\\bm{\\theta}_{\\rm nuis}$ parameters in current CMB data; \nthese nuisance parameters are marginalized over in our predictions \nfor acceleration observables.\nThe parameter sets and priors on the parameters for each model \nclass are summarized in Table~\\ref{tab:modelclasses}.\n\n\n\n\n\\subsection{MCMC Predictions}\n\\label{sec:mcmc}\n\n\nTo make predictions for the acceleration observables using constraints from current data, we\nuse a Markov Chain Monte Carlo (MCMC) likelihood analysis. Given a dark energy\nmodel class parametrized by $\\bm{\\theta}_{\\Lambda}$, $\\bm{\\theta}_{\\rm Q}$, or $\\bm{\\theta}_{\\rm S}$, the MCMC algorithm\nestimates the joint posterior distribution of cosmological parameters and\npredicted observables by sampling the parameter space and evaluating the\nlikelihood of each proposed model compared with the data described in\nSec.~\\ref{sec:data}\n(e.g.\\ see~\\cite{Christensen:2001gj,Kosowsky:2002zt,Dunetal05}). 
We use the\ncode CosmoMC \\cite{Lewis:2002ah,cosmomc_url} for the MCMC analysis.\n\nThe posterior distribution is obtained using Bayes' Theorem,\n\\begin{equation}\n{\\cal P}(\\bm{\\theta}|{\\bf x})=\n\\frac{{\\cal L}({\\bf x}|\\bm{\\theta}){\\cal P}(\\bm{\\theta})}{\\int d\\bm{\\theta}~\n{\\cal L}({\\bf x}|\\bm{\\theta}){\\cal P}(\\bm{\\theta})},\n\\label{eq:bayes}\n\\end{equation}\nwhere ${\\cal L}({\\bf x}|\\bm{\\theta})$ is the likelihood of the data ${\\bf x}$\ngiven the model parameters $\\bm{\\theta}$ and ${\\cal P}(\\bm{\\theta})$ is the\nprior probability density. The MCMC algorithm generates random draws from the\nposterior distribution that are fair samples of the likelihood surface.\nWe test convergence of the samples to a stationary distribution that\napproximates the joint posterior density ${\\cal P}(\\bm{\\theta}|{\\bf x})$ \nby applying a conservative Gelman-Rubin criterion \\cite{gelman\/rubin}\nof $R-1\\lesssim 0.01$ across a minimum of four chains for each model class.\n\nAs described in MHH, the MCMC approach allows us to straightforwardly\ncalculate confidence regions for the acceleration observables by computing\n$H(z)$, $D(z)$, $G(z)$ and the auxiliary observables \n$G_0(z)$, $f(z)G(z)$, and $\\gamma(z)$ for each MCMC sample\nusing Eqs.~(\\ref{eq:hz})$-$(\\ref{eq:gamma_z}). The posterior distribution of\nthe model parameters $\\bm{\\theta}$ thus maps onto a distribution of each\nacceleration observable at each redshift. These redshift-dependent\ndistributions of the expansion and growth observables form the predictions\nthat we describe in the next section.\n\n\n\\section{Dark Energy Model Predictions}\n\\label{sec:predict}\n\n\nIn this section, we show the predictions for growth and expansion\nobservables from the combined current CMB, SN, BAO, and $H_0$\nconstraints. Since plotting full distributions for the six observables \ndefine in Sec.~\\ref{sec:obs} at\nseveral different redshifts is impractical, we instead plot only the regions\nenclosing 68\\% and 95\\% of the models at each redshift, defined such that the number density of\nmodels is equal at the upper and lower limit of each region. \n(When describing the predictions, we will typically quote the 68\\% CL limits.)\nTo provide examples of\nfeatures of individual models that may not be apparent from the 68\\% and 95\\% CL limits,\nwe also plot the evolution of observables for the maximum\nlikelihood (ML) MCMC model within each model class. We caution, however, that the\nMCMC algorithm is designed to approximate the overall shape of the likelihood\nand is not optimized for precisely computing the ML\nparameters, so the ``best fit'' models shown here may be slightly displaced\nfrom the true ML points.\n\nIn most figures in this section, we compare the predictions for two model\nclasses, one of which is a subclass of the second, more general class (for\nexample, $\\Lambda$CDM\\ and quintessence). 
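Before turning to the individual model classes, we sketch how such redshift-by-redshift confidence regions can be extracted from the chains. The snippet below is an illustration only (it is not the original analysis code and assumes a unimodal distribution of the observable at each redshift); it returns limits that enclose a given fraction of the models with approximately equal number density at the upper and lower boundary:
\\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def equal_density_band(samples, level=0.68):
    # keep the most densely populated fraction `level` of the samples;
    # its extrema approximate the equal-density limits described above
    dens = gaussian_kde(samples)(samples)
    keep = np.argsort(dens)[::-1][:int(np.ceil(level * samples.size))]
    return samples[keep].min(), samples[keep].max()

# pred[s, k] = observable (e.g. H(z_k)) evaluated for MCMC sample s:
# band68 = [equal_density_band(pred[:, k]) for k in range(pred.shape[1])]
\\end{verbatim}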
The potential to falsify the simpler class\nin favor of the more complex one is greatest where the two sets of predictions\ndiffer most, i.e.~where one class gives strong predictions and the other does\nnot.\n\n\n\\subsection{$\\Lambda$CDM}\n\nWe begin with the simplest and most predictive model class: flat $\\Lambda$CDM.\nSince $\\Omega_{\\rm K}=0$, this model has only two free dark energy parameters in\nEq.~(\\ref{eq:parametersfull}), $\\Omega_m$ and $\\Omega_{\\rm m} h^2$ (or $H_0$), \nproviding very little freedom to alter the acceleration observables at {\\it any} \nredshift as shown in Fig.~\\ref{fig:flcdm}:\n$H(z)$, $D(z)$, and $G(z)$ are currently predicted with a precision of $\\sim 2\\%$\n(68\\% CL) or better everywhere. The velocity observable $f(z)G(z)$ is predicted to better than 5\\% and the growth index $\\gamma$ to $0.1\\%$.\nThese predictions are more precise than\ncurrent measurements of the acceleration observables at any redshift.\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=flcdmpredict4.eps, width=2.5in}}\n\\caption{Flat $\\Lambda$CDM\\ predictions for growth and \nexpansion observables, showing the 68\\% CL (shading) and 95\\% CL (curves) \nregions allowed by current CMB, SN, BAO, and $H_0$ data. \nObservables include the linear growth function normalized in two \ndifferent ways, $G(z)$ equal to unity at high redshift and \n$G_0(z)=G(z)\/G(0)$; the product of the differential growth rate \nand the growth function $f(z)G(z)$; \nthe growth index $\\gamma(z)$ which relates $f(z)$ and $\\Omega_{\\rm m}(z)$; \nthe expansion rate $H(z)$; and the comoving distance $D(z)$ \n(scaled by a factor of 1\/10 in the lower panel).\nNote that the separation between the 68\\% and 95\\% CL regions \nis not visible where the observables are extremely well predicted, \ne.g. in the $\\gamma(z)$ predictions in the middle panel.\n}\n\\label{fig:flcdm}\n\\end{figure}\n\n\nThe strong predictions for flat $\\Lambda$CDM\\ arise largely due to CMB constraints: the two parameters\n$\\Omega_{\\rm m}$ and $H_0$ are tied together by the measurement of $\\Omega_{\\rm m} h^2$,\nand the remaining freedom in $H_0$ or the extragalactic distance scale is\nfixed by the measurement of the distance to $z_*$. \nHowever, given the present uncertainties in $\\Omega_{\\rm m} h^2$ and $D(z_*)$, \nthe addition of the other data (SN, BAO, and $H_0$) increases the \nprecision of the predictions by almost a factor of 2 \nrelative to WMAP constraints alone.\n\n \nThe flat $\\Lambda$CDM model is therefore highly falsifiable in that future measurements\nmay find that these quantities deviate substantially from the predictions. \nFor example, an $H_0$ measurement with $\\lesssim 2\\%$ accuracy would match the precision of the\npredictions and hence provide a sharp test of flat $\\Lambda$CDM. \nThese predictions are only a factor of $2-3$ weaker\nthan the Stage IV SN and CMB forecasts from MHH. Since flat $\\Lambda$CDM is the current\nstandard model of the cosmic expansion history and structure formation, falsifying it would represent\nthe most important observational breakthrough since the discovery of cosmic acceleration and would require revision of basic assumptions about the nature \nof dark energy, spatial curvature, or the theory of gravity. 
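The way in which the CMB alone ties down flat $\\Lambda$CDM, as described above, can be made concrete with a short numerical sketch. The snippet below is illustrative only (it is not the code used for our analysis, the numbers are placeholders, and radiation is neglected as elsewhere in this paper): given measured values of $\\Omega_{\\rm m} h^2$ and of the comoving distance to recombination, $h$ follows from a one-dimensional root find, and with it every other flat $\\Lambda$CDM\\ observable.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def D_star(h, om_h2, zstar=1090.0):
    # comoving distance to z_* in Mpc for flat LCDM (radiation neglected)
    Om = om_h2 / h**2
    E = lambda z: np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
    chi, _ = quad(lambda z: 1.0 / E(z), 0.0, zstar, limit=200)
    return 2997.92 / h * chi             # c/H0 = 2997.92/h Mpc

# placeholder CMB constraints on Omega_m h^2 and D(z_*):
# h = brentq(lambda hh: D_star(hh, om_h2=0.133) - 14300.0, 0.5, 0.9)
# Om = 0.133 / h**2
\\end{verbatim}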
\n\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_lcdm_fg2.eps, width=3.5in}}\n\\caption{Predicted growth and expansion observables for \nnon-flat (dark blue) and flat (light gray) $\\Lambda$CDM, plotted \nrelative to the reference cosmology (the best fit model for flat $\\Lambda$CDM).\nHere and in subsequent figures, \n68\\% CL regions are marked by shading, 95\\% CL regions are bounded \nby solid curves, and red curves outlined in white show the \nbest fit model of the more general (dark blue) model class \n(in this case, non-flat $\\Lambda$CDM). \n}\n\\label{fig:lcdm}\n\\end{figure}\n\n\nGeneralizing the model to $\\Lambda$CDM with curvature increases the range of\npredictions by less than a factor of~2. In Fig.~\\ref{fig:lcdm}, we plot the\npredictions for flat and non-flat $\\Lambda$CDM\\ relative to the ML flat $\\Lambda$CDM\nmodel with $\\Omega_{\\rm m}=0.268$, $h=0.711$. Curvature opens up the ability to free the\nextragalactic distance scale from the constraints imposed by the CMB acoustic\npeak measurements. The tight constraints on SN, $H_0$, and BAO distances\nlimit this freedom. Since the forecasts from MHH used only the current BAO\nmeasurement and a weaker $H_0$ constraint as priors, the relative impact of\ncurvature here is substantially smaller. In particular, predictions of the\ngrowth function are nearly unchanged by curvature and still vary by less than\n$2\\%$.\nLikewise, $fG$ is nearly unaffected by curvature. Although\nthe growth index, $\\gamma(z)$, is not as perfectly determined for non-flat\n$\\Lambda$CDM, especially at high redshift, it is still predicted to better than 1\\%\nat $z \\lesssim 3$, and both $D(z)$ and $H(z)$ are predicted to better than\n$3\\%$. Any measurement that deviates by significantly more than these amounts\nwould prove that the dark energy is not a \ncosmological constant.\\footnote{A substantial decrement in \n growth from high redshifts,\n which in the context of our treatment would be interpreted as \n evidence for early dark energy thus falsifying $\\Lambda$CDM,\n could alternately indicate neutrinos with more than the minimal \n allowed masses.}\n\n\n\n\\subsection{Quintessence}\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_q_f_fg2.eps, width=3.5in}}\n\\caption{Flat quintessence models without early dark energy (dark blue) \nvs.\\ flat $\\Lambda$CDM\\ (light gray). Other aspects here and in later figures follow\nFig.~\\ref{fig:lcdm}.\n}\n\\label{fig:quint}\n\\end{figure}\n\nIf $\\Lambda$CDM is falsified, then in the context of dark energy \nwe must consider models with $w(z) \\ne -1$. Our\nnext class of models are therefore flat quintessence models \nwith $w(z)$ parametrized by 10 principal components at $z<1.7$, assuming\nno early dark energy (``$w_{\\infty}=-1$''). The predictions for acceleration observables \nwithin this model class are compared with the flat $\\Lambda$CDM\\ predictions\nin Fig.~\\ref{fig:quint}. \n\nInterestingly, the quintessence predictions are no longer centered on the\nflat $\\Lambda$CDM ML model. 
From the $H(z)$ predictions which mainly reflect\nvariation in evolution of the dark energy density, we see that on average the\ndata favor a smaller low-redshift ($z\\lesssim 0.5$) and larger\nintermediate-redshift ($0.5\\lesssim z\\lesssim 2$) dark energy density.\nCorrespondingly, the best fit growth function $G(z)$ of $\\Lambda$CDM is higher\nthan that of $\\sim 85\\%$ of the quintessence models in the chain.\nTherefore a measurement of the growth relative to high redshift that is\nsmaller than the $\\Lambda$CDM prediction by more than a few percent not only\nrules out a cosmological constant but actually favors these quintessence models.\nThe additional freedom in growth opens up predictions for $\\gamma$\nto include $2-3\\%$ deviations at $z\\lesssim 1$.\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=data_bestfit_hd.eps, width=3.5in}}\n\\caption{Upper panel: Comparison of distance constraints from SN data and best\n fit models, plotted relative to the best fit $H_0D(z)$ for flat $\\Lambda$CDM\\ (dotted\n line). Blue points with error bars show the Union SN data in redshift\n bins of width $\\Delta \\log z = 0.05$. \nThe best fit model for flat quintessence without\n early dark energy is plotted as a dashed curve, and the solid curve \nshows how the relative distances are affected by smoothing $w(z)$ \nfor this model by a Gaussian of width $\\sigma_z=0.1$. \n The full distribution of relative distance predictions for this quintessence \n model class is also\n shown with light gray shading (68\\% CL) and curves (95\\% CL). \nLower panel: $w(z)$ for each of the models from the upper panel. }\n\\label{fig:databf}\n\\end{figure}\n\n\n\nMany of the shifts in the predictions relative to flat $\\Lambda$CDM\\ are \nreflected in the evolution of $w(z)$ in the maximum likelihood model for \nflat quintessence without early dark energy. The ML model in this class marginally \nimproves the fit to the current data sets relative to the \n$\\Lambda$CDM\\ ML model, largely due to\nvariations in the SN data with redshift that are fit \nmarginally better by dynamical dark energy than by a cosmological constant.\nFigure~\\ref{fig:databf} compares ML models, quintessence \npredictions, and relative distance constraints from the Union SN data sets \nat $z\\lesssim 1$.\nFreedom in $w(z)$ at these redshifts allows changes in the dark energy \ndensity to improve the fit to SN distances by $-2\\Delta\\ln\\mathcal{L}\\sim 4.5$.\nHowever, some of this improvement is due to the large oscillations in \nthe equation of state at $z\\sim 0.1$, which are allowed to violate \nthe $-1\\leq w\\leq 1$ bound due to the conservative implementation \nof the quintessence prior on PC amplitudes described in Sec.~\\ref{sec:pcs}.\nSmoothing the ML $w(z)$ by a Gaussian with width $\\sigma_z\\sim 0.1$ \nor requiring $w(z)$ to satisfy stricter quintessence bounds\nreduces the improvement relative to $\\Lambda$CDM\\ to $-2\\Delta\\ln\\mathcal{L}\\sim 2$, \nbut has little effect on the overall distributions of the predicted \nobservables.\n\nAlthough differences in the ML models cause quintessence to not be centered\naround $\\Lambda$CDM, the allowed {\\it width} of quintessence predictions around\nthe maximum likelihood relative to $\\Lambda$CDM\\ follows the expectations of the Stage IV predictions\nfrom MHH except for being weaker by a factor of $2-3$. 
The PCs allow for\noscillatory variations in $H(z)$, $f(z)G(z)$, and $\\gamma(z)$ at $z<1$ that would\nnot be readily observable with expansion history or growth measures due to\nlimited resolution in redshift. On the other hand, $G(z)$, $G_0(z)$, and\n$D(z)$ are still predicted with $\\sim 2-3\\%$ precision, so the class of flat\nquintessence models without early dark energy remains highly falsifiable.\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_q_fe_fg2.eps, width=3.5in}}\n\\caption{Flat quintessence models with (dark blue) and without (light gray) \nearly dark energy.\n}\n\\label{fig:ede}\n\\end{figure}\n\nAdding early dark energy to flat quintessence (Fig.~\\ref{fig:ede}) \nhas very little impact on the 68\\% CL predictions of most observables due to the\nrestriction that $w\\ge -1$ for a canonical scalar field. To satisfy CMB\ndistance constraints, any increase in the expansion rate due to early dark\nenergy must be compensated by a lower expansion rate at intermediate redshift relative to\n$z=0$, i.e. a dark energy density that decreases with increasing redshift\nrequiring $w<-1$. While adding early dark energy does allow a larger\nsuppression of growth at high redshift (which is also a possible sign of \nmassive neutrinos given current upper limits), a measurement of a $\\gtrsim 10-15\\%$\ndecrement or $\\gtrsim 2\\%$ increment in the growth relative to high redshift\nwould still suggest that a broader class of models is necessary. This freedom in\ngrowth leaves the amplitude relative to $z=0$ practically unchanged as the\n$G_{0}(z)$ predictions show. The only qualitative change with early dark energy\nis to open up the allowed range in $\\gamma(z)$ so that the high redshift end has as much\nfreedom as the low redshift end. All of these trends for early dark energy\nwithout curvature reflect those of the forecasts in MHH.\n\n\n\nIncluding curvature in the quintessence class, but not early dark energy, opens up more\nfreedom as shown in Fig.~\\ref{fig:curv}. Now $z>2$ deviations in $D(z)$ are\nallowed at the $\\sim 5\\%$ level relative to $\\Lambda$CDM. \nThus a BAO distance measurement at $z>2$\n could falsify flat quintessence in favor of quintessence with curvature.\nAs discussed in MHH, because of the $w \\ge -1$ quintessence bound, this additional freedom skews\n to smaller distances and lower growth relative to high redshift.\n \n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_q_c_fg2.eps, width=3.5in}}\n\\caption{Non-flat (dark blue) and flat (light gray) \nquintessence models without early dark energy.\n}\n\\label{fig:curv}\n\\end{figure}\n\n\nPredictions from the most general quintessence class which includes \nboth curvature and early dark energy, shown in Fig.~\\ref{fig:curvede}, \ncombine features of the previous quintessence classes in ways that \nare similar to the Stage IV predictions in MHH. 
\nThe ML model in this class improves the fit to the combined data by \n$-2\\Delta\\ln\\mathcal{L}\\sim 4$, mostly due to changing the SN likelihood by $-2\\Delta\\ln\\mathcal{L}\\sim 5$;\nhowever, removing the large low-$z$ oscillations by smoothing $w(z)$ reduces \nthe improvement in the SN fit to $-2\\Delta\\ln\\mathcal{L}\\sim 2-3$.\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_q_ce_fg2.eps, width=3.5in}}\n\\caption{Non-flat quintessence models with (dark blue)\nand without (light gray) early dark energy.\n}\n\\label{fig:curvede}\n\\end{figure}\n\n\nThe predictions for $G_0(z)$, $D(z)$, and $H(z)$, which \nwere affected little by early dark energy alone, \nare nearly the same as those for non-flat \nquintessence without early dark energy.\nThe other observables show a mixture of the effects of\ncurvature at low $z$ and early dark energy at high $z$. Large suppression\n($\\gtrsim 20\\%$) of $G(z)$ (and similarly $fG$) relative to $\\Lambda$CDM\\ is allowed, but enhancement of\nthe growth function over the $\\Lambda$CDM\\ best fit is still limited at the $\\sim\n2\\%$ level. Note that this upper limit on $G(z)$ is robust to neutrino mass \nuncertainties. Likewise, low-redshift distances (including $z_h\nH_0^{-1}$) cannot be smaller than in $\\Lambda$CDM by substantially more than\n$\\sim 2\\%$. As in Fig.~\\ref{fig:ede}, \nthe high redshift predictions for $\\gamma(z)$ in Fig.~\\ref{fig:curvede} weaken\nsubstantially but only in the positive direction. Indeed, all of the\nobservables display similar asymmetric weakening of the predictions with the\naddition of curvature and early dark energy, which can be understood in terms\nof the $w\\geq -1$ quintessence bound.\n\n\nThe existence of an upper or lower bound on each observable that is robust to\nfreedom in curvature and early dark energy provides the possibility of\nfalsifying the entire quintessence model class. In fact, in this most general\nclass, the statistical predictions from current SN and CMB bounds are already\ncomparable to those that can be achieved by a Stage IV version of these\nprobes, which can be understood from the fact\n that the forecasts from MHH used current BAO and\n$H_0$ measurements.\n\n\nThe comparable predictions in large part reflect the fact that curvature is\nalready well constrained through the BAO and $H_0$ measurements. \nThe constraint in this most\ngeneral class of quintessence models is $-0.006<\\Omega_K<0.033$ (95\\%~CL), \na factor of $\\sim 2$ weaker than for non-flat $\\Lambda$CDM\\ and skewed\ntoward open models due to the quintessence prior on $w(z)$.\n\n\nFinally, as an example of the use of the asymmetric quintessence predictions,\nwe consider the application of these results \nto observables which measure some combination of $\\sigma_8$ and $\\Omega_{\\rm m}$. \nTo compute predictions for $\\sigma_8$ given our predictions for the raw acceleration observables, \nwe use the fitting formula \\cite{Hu_Jain}\n\\begin{eqnarray}\n\\sigma_8 &=& \\frac{G(z=0)}{0.76}\\left[\\frac{A_s(k=0.05~{\\rm Mpc}^{-1})}{3.12\\times 10^{-9}}\\right]^{1\/2} \\left(\\frac{\\Omega_{\\rm b} h^2}{0.024}\\right)^{-1\/3} \\nonumber \\\\\n&& \\times \\left(\\frac{\\Omega_{\\rm m} h^2}{0.14}\\right)^{0.563}\n\\left(\\frac{h}{0.72}\\right)^{0.693}(3.123h)^{(n_s-1)\/2}\n\\label{eq:sigma8}\n\\end{eqnarray}\nfor each model sampled in the MCMC likelihood analysis. 
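For reference, this fitting formula translates directly into a few lines of Python (a transcription added for illustration; the variable names are ours):
\\begin{verbatim}
def sigma8_fit(G0, As, obh2, omh2, h, ns):
    # direct transcription of the sigma_8 fitting formula above;
    # G0 = G(z=0), As = A_s at k = 0.05/Mpc
    return ((G0 / 0.76) * (As / 3.12e-9)**0.5 * (obh2 / 0.024)**(-1.0 / 3.0)
            * (omh2 / 0.14)**0.563 * (h / 0.72)**0.693
            * (3.123 * h)**(0.5 * (ns - 1.0)))
\\end{verbatim}
Evaluating this for every sample then yields the joint $\\sigma_8$--$\\Omega_{\\rm m}$ predictions discussed below.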
\nNote that on top of allowed\nvariations in $G(z=0)$, $\\sigma_{8}$ predictions include uncertainties in the\nreionization optical depth $\\tau$ through its covariance with $A_{s}$. While\nthis analysis assumes instantaneous reionization, the uncertainty introduced\nby more general ionization histories is small \\cite{MorHu08}.\nWe have checked that the $\\sigma_8$ distributions obtained using \nEq.~(\\ref{eq:sigma8}) closely match those \nfrom the more accurate computation of $\\sigma_8$ using CAMB.\nThe joint predictions for $\\sigma_8$ and $\\Omega_{\\rm m}$ from the current SN, CMB, \nBAO, and $H_0$ constraints are shown in Fig.~\\ref{fig:sigma8} for \nflat $\\Lambda$CDM\\ and two quintessence model classes.\n\n\nIn particular, in the context of flat $\\Lambda$CDM\\ the current SN, CMB, BAO, and\n$H_0$ data predict the combination best measured by the local abundance of\nmassive galaxy clusters to be $0.394 < \\sigma_8 \\Omega_{\\rm m}^{0.5} < 0.441$ (68\\% CL).\nFlat quintessence without early dark energy weakens the lower end somewhat but\nleaves the upper limit nearly unchanged: $0.358 < \\sigma_8 \\Omega_{\\rm m}^{0.5} < 0.419$.\nQuintessence with both early dark energy and curvature yields $0.306 < \\sigma_8\n\\Omega_{\\rm m}^{0.5} < 0.396$. Therefore a measurement of a local cluster abundance in\nsignificant excess of the flat $\\Lambda$CDM\\ predictions rules out the whole quintessence class,\nwhereas a measurement that is substantially lower would remain consistent with\nquintessence but would rule out a cosmological constant (see also \\cite{Kunz_sigma8}).\nA measurement below the flat $\\Lambda$CDM\\ prediction by $\\lesssim 10\\%$ could \nalso indicate large neutrino masses, but an excess cluster abundance could \nnot be alternately explained by massive neutrinos.\nCurrent cluster surveys, with $\\sim 5\\%$ measurements of similar \ncombinations of $\\sigma_8$ and $\\Omega_{\\rm m}$ \\cite{Vikhlinin,Rozo,Mantz},\nare beginning to reach the precision necessary to test these predictions. \nIn fact, the\nlack of an observed excess already places strong constraints on modified\ngravity explanations of cosmic acceleration \\cite{Schmidt:2009am}.\n\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=sigma8_om_2.eps, width=3.5in}}\n\\caption{Predictions for $\\sigma_8$ and $\\Omega_{\\rm m}$ for flat $\\Lambda$CDM\\ \n(gray contours, top), flat quintessence without early dark energy\n(red contours, middle), \n and non-flat quintessence with early dark energy \n(blue contours, bottom), showing 68\\% CL (light) and 95\\% CL (dark) regions.\n}\n\\label{fig:sigma8}\n\\end{figure}\n\n\n\n\n\\subsection{Smooth $w_0-w_a$ Dark Energy}\n\nAs a final case we consider the class of models defined by an equation of\nstate $w(z)= w_{0} +(1-a)w_{a}$ \\cite{Chevallier_Polarski,Linder_wa} under the\nassumption that dark energy is smooth relative to matter.\nUnlike our previous cases, this class does not define a physical candidate for\ndark energy such as the cosmological constant or a scalar field but rather\nrepresents a simple but illustrative phenomenological parametrization. Note\nthat early dark energy is included in this parametrization since \n$\\lim_{z\\to\\infty} w(z) = w_0 + w_a$.\n \nThe predictions for the $w_0-w_a$ model class serve two purposes. First, the\ncomparison of predictions for smooth, monotonic $w_0-w_a$ models with \nthose for PC quintessence models test the dependence of the\npredictions on rapid transitions and non-monotonic evolution of the\nequation of state. 
The second use of the $w_0-w_a$ predictions is to\nillustrate how predictions are affected by the $-1\\leq w(z) \\leq 1$\nquintessence bound. Unlike the model classes where $w(z)$ is parametrized by\nprincipal components, it is simple to impose a strict \nquintessence prior on $w_0-w_a$ models\nby requiring $-1\\leq w_0 \\leq 1$ and $-1\\leq w_0+w_a\\leq 1$. We compare\npredictions using this prior with the more general case, where the priors are\nweak enough that constraints on $w_0$ and $w_a$ are determined solely \nby the data (``no $w$ prior'').\n\n\nA fair comparison can be made between the predictions for flat and non-flat\n$w_{0}-w_{a}$ models with the $-1\\leq w\\leq 1$ prior (light gray contours in\nFigs.~\\ref{fig:w0waf} and \\ref{fig:w0wac}) and PC quintessence models with\nearly dark energy (dark blue contours in Figs.~\\ref{fig:ede} and\n\\ref{fig:curvede}). In particular, observables relatively insensitive to both\nthe amount of early dark energy and large changes in the PC equation of\nstate at low redshift, such as $G_0(z)$ and $D(z)$, \nare generally in good agreement.\nThe expansion rate and growth rate are more sensitive to sudden changes in\n$w(z)$ than the distances and the integrated growth function. Therefore, the\nimpact of large, low-$z$ oscillations in the PCs is greatest for $H(z)$, \n$f(z)G(z)$, and $\\gamma(z)$ at $z\\lesssim 1$, increasing the width of those predictions\nrelative to the corresponding predictions for the smooth $w_0-w_a$ models. The\nPC quintessence models also have more freedom in early dark energy than\n$w_0-w_a$ models since $w_{\\infty}$, unlike $w_0+w_a$, is completely free from the\nlow-redshift SN, BAO, and $H_0$ constraints. As a result, $w_0-w_a$\npredictions for $G(z)$ and the high-redshift values of $\\gamma(z)$ \nand $f(z)G(z)$ are stronger than, but still qualitatively similar to, \nthose for PC quintessence with early dark energy.\n\n\nLike the PC quintessence predictions, the predictions for $w_0-w_a$ models \nbounded by $-1\\leq w\\leq 1$ are shifted relative to flat $\\Lambda$CDM\\ \ndue to marginal improvements in the fit to SN data ($-2\\Delta\\ln\\mathcal{L}\\sim 0.5$ for \nthe ML model) enabled by an evolving equation of state.\nThis is a somewhat smaller change in the likelihood than \nfor PC quintessence models, but the \nmagnitude of the ML model shift in the observables is similar for \n$w_0-w_a$ and PC quintessence, at least for those observables that \ndepend little on early dark energy.\n\n\nComparing the two sets of predictions in Figs.~\\ref{fig:w0waf}\nand~\\ref{fig:w0wac} (no $w$ prior vs.\\ the $-1\\leq w\\leq 1$ prior) shows the\neffect on the $w_0-w_a$ predictions of allowing freedom in $w(z)$ beyond that\nallowed by the quintessence bounds. As discussed in MHH, eliminating these\nbounds makes the range in predictions for observables such as growth more\nsymmetric around the best fit for flat $\\Lambda$CDM\\ since $w(z)$ is allowed to cross\nbelow $w=-1$. In particular, growth in excess of flat\n $\\Lambda$CDM is now allowed. Based on the analysis of MHH, we expect the\n amount of the remaining skewness in the predictions around flat $\\Lambda$CDM\n to be affected by the available volume of parameter space as determined by\n how priors on dark energy parameters weight models with $w<-1$ relative to\n those with $w>-1$.\n\nRemoving the quintessence bounds also allows models with greater amounts of\nearly dark energy, and (for non-flat $w_0-w_a$) more closed models, to fit the\ndata. 
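The connection to early dark energy can be made explicit by carrying out the integral that defines $f(z)$ in Eq.~(\\ref{eq:hz}) for $w(a)=w_0+(1-a)w_a$ (a short worked step added here for clarity):
\\begin{equation}
f(z) = (1+z)^{3(1+w_0+w_a)}\\exp\\left[-\\frac{3w_a z}{1+z}\\right],
\\end{equation}
so the high-redshift dark energy density is controlled entirely by the combination $w_0+w_a$. Without the quintessence bounds, a larger $w_0+w_a$ can be compensated at low redshift by $w(z)<-1$, which is what permits the larger early dark energy fractions noted above.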
A notable consequence for models with nonzero curvature is that the\npredictions for $\\gamma(z)$ at 95\\% CL diverge at $z>1$. \nThis is the same effect noted in MHH for $\\gamma(z)$ forecasts\nin the non-flat smooth dark energy model class.\nThe divergence in the tails of the high-redshift $\\gamma(z)$ distribution \nis caused by the appearance of\na singularity in $\\gamma(z)$ for closed models where $\\Omega_{\\rm K}$ is sufficiently\nnegative so that $\\Omega_{\\rm m}(z)$ crosses unity at some redshift; when $\\Omega_{\\rm m}(z)=1$,\n$\\gamma(z)$ is no longer well defined by Eq.~(\\ref{eq:gamma_z}). Such caveats\nmust be kept in mind when using $\\gamma$ as a test of not only quintessence\nbut of all smooth dark energy models.\n\n\n\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_w0wa_f_fg2.eps, width=3.5in}}\n\\caption{Flat $w_0-w_a$ without priors on $w(z)$ (dark blue) and with\n quintessence priors ($-1\\leq w_0\\leq 1$, $-1\\leq w_0+w_a\\leq 1$; light\n gray). \n}\n\\vskip 0.25cm\n\\label{fig:w0waf}\n\\end{figure}\n\n\n\\begin{figure}[tp]\n\\centerline{\\psfig{file=rp_w0wa_c_fg2.eps, width=3.5in}}\n\\caption{Non-flat $w_0-w_a$ without priors on $w(z)$ (dark blue) and \nwith quintessence priors ($-1\\leq w_0\\leq 1$, $-1\\leq w_0+w_a\\leq 1$; \nlight gray).\n}\n\\vskip 0.25cm\n\\label{fig:w0wac}\n\\end{figure}\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nAny given class of dark energy models makes concrete predictions for the\nrelationship between the expansion history, geometry, and growth \nof structure as a function\nof redshift. Therefore, current distance-based measurements, though limited in\nredshift, make predictions for other dark energy observables that can be used\nto test and potentially rule out whole classes of dark energy models.\n\n\nIn this paper we present the allowed ranges for the expansion rate $H(z)$,\ndistances $D(z)$, the linear growth rate $G(z)$, \nand several auxiliary growth observables from the current\ncombination of cosmological measurements of supernovae, the cosmic microwave\nbackground, baryon acoustic oscillations, and the Hubble constant. In\nparticular, growth at any redshift or a Hubble constant in significant excess\nof $2\\%$ ($68\\%$ CL range) of the current best fit $\\Lambda$CDM model would\nfalsify both a cosmological constant and more general quintessence \nmodels with or without curvature and early dark energy. On the\nother hand, comparable measurements of a decrement in these quantities would\nrule out a cosmological constant but would be fully consistent\nwith quintessence. Alternately, a substantial reduction in growth \nrelative to the expectation for $\\Lambda$CDM\\ could indicate neutrinos with \nlarge masses ($\\sum m_{\\nu}>0.05$~eV).\n\nRemarkably, predictions for the main acceleration observables, $H(z)$, $D(z)$,\nand $G(z)$, are only weaker than Stage IV SN and CMB predictions (MHH) by a\nfactor of $\\sim 2-3$. However, this improvement applies across a wide range\nof redshifts, indicating that multiple phenomenological parameters may each be\nimproved by this factor. For example, parameter-based figures of merit\neffectively involve products of individual parameters (e.g.\\ area in the\n$w_0-w_a$ plane \\cite{Huterer_Turner,DETF} or volume of the principal\ncomponent parameter error ellipsoid \\cite{Albrecht_Bernstein,FoMSWG}), and in\nsuch figures of merit the total improvement with future data can be\nsignificant. 
\nIf novel dark energy physics affects small pockets of these high-dimensional\nparameter spaces --- that is, if only specific dark energy parameter combinations\nare sensitive to new physics --- then these multiparameter figures of merit\nwill justly indicate a much more significant improvement with future cosmological\ndata.\n\nIn this work we have considered only known and quantifiable sources of error\nin the current data. Recent analyses of supernova data (e.g.~\\cite{Constitution,SDSS_SN,Kelly09}) indicate that unknown systematic\nerrors remain and can significantly affect cosmological constraints.\nFurthermore, the systematic error estimates used here for the SN data were\noptimized for models with a cosmological constant and therefore may be\nunderestimated for dynamical dark energy \\cite{SCP_Union}. We intend to\nexplore the implications of SN systematics for dark energy predictions in\nfuture work. Our predictive methodology can alternately be viewed as a means\nof ferreting out unknown systematics by looking for inconsistencies between\nthe predictions from one set of observations and data from another.\n\n\nOver the course of this study, new data have become available that could\nimprove the predictions for acceleration observables or begin to test\npredictions within the various classes. In particular, BAO measurements from\nSDSS DR7 and 2dFGRS provide a 2.7\\% constraint on $D_V(z=0.275)$ and a 3.7\\%\nconstraint on $D_V(z=0.35)\/D_V(z=0.2)$ \\cite{Percival09}. We have estimated the\nimpact of these new measurements on our predictions by using the updated BAO\nlikelihood to modify the weighting of MCMC samples for each model class. For\nall quintessence model classes, the effect of updating the BAO data is\nnegligible for most observables except for $D(z\\lesssim 1)$ and (to a lesser\nextent) $H(z\\lesssim 0.5)$, reflecting the improved BAO constraint on\nlow-redshift $D$ and $H$.\n\nThe impact of the newer BAO measurements on $\\Lambda$CDM\\ models is greater than\nfor quintessence since the reduced freedom in dark energy evolution ties\nlow-redshift measurements to high-redshift predictions. The updated BAO\nconstraints exclude models on one side of the predicted observable\ndistributions in Fig.~\\ref{fig:lcdm}, reducing their width by $10-30\\%$ and\nshifting the distributions by an equal amount. However, these changes appear\nto be mainly due to a slight tension between the new BAO constraints and the\nother data sets used for $\\Lambda$CDM\\ predictions. Note that the BAO constraints of\nRef.~\\cite{Percival09} are still less precise than the flat $\\Lambda$CDM\\ predictions\nin Fig.~\\ref{fig:lcdm} and comparable to the non-flat $\\Lambda$CDM\\ predictions, so\nthey do not yet represent a significant additional test of the cosmological\nconstant.\n\n\nFalsifiable predictions from current data reveal many opportunities for sharp\nobservational tests of paradigms for cosmic acceleration by requiring \nconsistency within a given theoretical framework between observables \nthat depend on the expansion history, geometry, and growth of structure \nin the universe. 
These predictions can be used to \ninform future surveys as to the optimal choice of observables, redshifts, and\nrequired measurement accuracies for testing whole\nclasses of dark energy models.\nFalsification of even the simplest model, flat $\\Lambda$CDM, would have \nrevolutionary consequences for cosmology and fundamental physics.\n\n\n\\vspace{1cm}\n{\\it Acknowledgments:} We thank David Weinberg for useful conversations about this\nwork. MM and WH were supported by the KICP under NSF contract PHY-0114422. MM\nwas additionally supported by the NSF GRFP and CCAPP at Ohio State; WH by DOE\ncontract DE-FG02-90ER-40560 and the Packard Foundation; DH by the DOE OJI\ngrant under contract DE-FG02-95ER40899, NSF under contract AST-0807564, and\nNASA under contract NNX09AC89G.\n\n\n\\vfill\n\\bibliographystyle{arxiv_physrev}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nData classification -- e.g. object recognition -- is a fundamental computing problem in machine learning and artificial intelligence. Large-scale classification competitions such as the annual ImageNet challenge \\cite{krizhevsky2012imagenet, russakovsky2015imagenet}, where a super-human accuracy of 95\\% has been achieved within about 5 years of steady progress, have contributed greatly to the general popularity of machine learning. Understandably, ImageNet is mostly discussed in the context of technical improvements regarding the classification methods which enabled this drastic boost of performance. But it also illustrates some fundamental problems that arise when computers are to create models of human-defined data categories:\nFor example, the fact that classification accuracies are typically leveling off at values below 100\\% does not necessarily reflect a limitation of the algorithms, but instead may reveal the classification limits of the humans who provided the ground truth data. Indeed, in the case of ImageNet, the massive work of annotating millions of images had been crowd-sourced using Amazon Mechanical Turk, and so a large number of individuals were involved in the labeling process, individuals who may place certain ambiguous images into different categories. This problem of ambiguity due to non-rigorously defined object categories is most pronounced in biological and medical data, where sample-to-sample variations are notoriously large. \n\nIn this work, we use artificially generated surrogate data, as well as real-world bio-medical data, to explore the implications of this inevitable data ambiguity. We demonstrate that the overlap of data classes leads to a theoretical upper limit of classification accuracy, a limit that can be mathematically computed in low-dimensional examples and which depends in a systematic way on the statistical properties of the data set. We find that sufficiently powerful classifier models of different kinds all perform at this same upper limit of accuracy, even if they are based on completely different operating principles. Interestingly, this accuracy limit is not affected by applying certain non-linear transformations to the data, even if these transformations are non-reversible and drastically reduce the information content (entropy) of the input data. \n\nIn a next step, the same three models that reached the common classification limit for artificial data are now applied to human EEG data measured during sleep. 
In a pre-processing step, two kinds of features are extracted from raw EEG signals, yielding different marginal distributions and mutual correlations. It turns out that a more complex Bayesian model, based on correlated multi-variate Gaussian likelihoods (CMVG), performs worse than two other models (naive Bayes, perceptron), because the statistical properties of the pre-processed features do not match those of the likelihoods. In contrast, the perceptron and the naive Bayes model still show very similar classification accuracies, indicating that both reach the theoretical accuracy limit for sleep stage classification.\n\nFinally, we address the question whether typical human-defined object categories can also be considered as 'natural kinds', that is, whether the data vectors in input space have a built-in cluster structure that can be detected by objective machine-learning models even in non-labeled data. For this purpose, we use as real-world examples the MNIST data set \\cite{deng2012mnist}, as well as the above EEG sleep data. We find that a simple visualization by multi-dimensional scaling (MDS) \\cite{torgerson1952multidimensional, kruskal1964nonmetric,kruskal1978multidimensional,cox2008multidimensional} already reveals an inherent cluster structure of the data in both cases. Interestingly, the degree of clustering, quantified by the general discrimination value (GDV) \\cite{krauss2018statistical, schilling2021quantifying}, can be enhanced by a step-wise dimensionality reduction of the data, using an autoencoder that is trained in an unsupervised manner. A perceptron classifier with a layer design comparable to the autoencoder, trained on the same data in a supervised fashion, achieves as expected a much stronger cluster separation. However, the enhancement of clustering by unsupervised data compression, combined with automatic labeling methods, could be a promising way to automatically detect 'natural kinds' in non-labeled data. \n\n\n\\clearpage\n\\section{Methods}\n\n\\subsection*{Part 1: Accuracy limit}\n\n\\subsubsection*{Derivation of theoretical accuracy limit}\n\nClassification is the general problem of assigning a discrete class label $i=1\\ldots K$ to each given input data $\\vec{x}$, where the latter is considered as a vector with $N$ real-valued components $x_{n=1\\ldots N}$. Such a discrimination is possible when the conditional probability distributions $p_{gen}(\\vec{x}\\;|\\;i)$ of data vectors, here called {\\bf 'generation densities'}, are different for each of the possible data classes $i$. In the simple case of a two- or three-dimensional data space, each data class can be visualized as a 'point cloud' (See Fig.\\ref{figure_1}(a,b) for examples), and either the shapes or the center positions of these point clouds must vary sufficiently in order to facilitate a reliable classification. However, since the data generation process typically involves not only the system of interest (which might indeed have $K$ well-distinguished modes of operation), but also some measurement or data transmission equipment (which introduces noise into the data), a certain 'overlap' of the different data classes is usually not avoidable.\n\n\\vspace{0.2cm}\\noindent A classifier is receiving the data vectors $\\vec{x}$ as input and computes a set of $K$ {\\bf 'classification probabilities'} $q_{cla}(j\\;|\\;\\vec{x})$, quantifying the belief that $\\vec{x}$ belongs to $j$. 
They are normalized to one over all possible classes, so that $\\sum_{j=1}^K q_{cla}(j\\;|\\;\\vec{x}) = 1\\; \\forall\\; \\vec{x}$.\n\n\\vspace{0.2cm}\\noindent We can now define a {\\bf 'confusion density'} as the product\n\\begin{equation}\nC_{ji}(\\vec{x}) = q_{cla}(j\\;|\\;\\vec{x})\\;p_{gen}(\\vec{x}\\;|\\;i).\n\\end{equation}\nIt can be interpreted as the probability density that the generator is producing data vector $\\vec{x}$ under class $i$, which is then assigned to class $j$ by the classifier. Because there is usually a very small but non-zero probability density that {\\em any} vector $\\vec{x}$ can occur under {\\em any} class $i$, we expect that the non-diagonal elements $C_{j\\!\\neq\\!i}(\\vec{x})$ are larger than zero as well. These non-diagonal confusion densities will have their largest values in regions of data space where the classes $i$ and $j$ overlap (See Fig.\\ref{figure_1}(e,f) for examples).\n\n\\vspace{0.2cm}\\noindent By integrating the confusion density over all possible data vectors $\\vec{x}$,\n\\begin{equation}\nC_{ji} = \\int C_{ji}(\\vec{x})\\; d\\vec{x},\n\\label{cij}\n\\end{equation}\nwe obtain the {\\bf 'confusion matrix'} of the classifier, which comes out properly normalized, so that $\\sum_{j=1}^K C_{ji} = 1\\; \\forall\\; i$. The confusion matrix $C_{ji}$ therefore is the probability that a data point originating from class $i$ is assigned to class $j$.\n\n\\vspace{0.2cm}\\noindent Assuming for simplicity that all data classes appear equally often, we can compute the {\\bf accuracy} $A$ of the classifier as the average over all diagonal elements of the confusion matrix:\n\\begin{equation}\nA = \\frac{1}{K}\\sum_{i=1}^K C_{ii}.\n\\end{equation}\n\n\\vspace{0.2cm}\\noindent In the following, we are particularly interested in the {\\bf theoretical limit of the classification accuracy}, denoted by $A_{max}$. We therefore consider an ideal classifier that has learned the exact generation densities $p_{gen}(\\vec{x}\\;|\\;i)$. In this case, the {\\bf 'ideal classification probability'} corresponds to the Bayesian posterior\n\\begin{equation}\nq_{cla}(\\;j\\;|\\;\\vec{x}\\;) = \\frac{p_{gen}(\\;\\vec{x} \\;|\\; j\\;)}{\\sum_k p_{gen}(\\;\\vec{x} \\;|\\; k\\;)}. \n\\end{equation}\n\n\\vspace{0.2cm}\\noindent In our numerical experiments, we will use classifiers that output a definite class label $j$ for each given data vector $\\vec{x}$, corresponding to most probably class. To compute the theoretical accuracy maximum for such a model, we replace $q_{cla}$ by the binary {\\bf 'class indicator function'}\n\\begin{equation}\n\\hat{q}_{cla}(\\;j\\;|\\;\\vec{x}\\;) = \\delta_{jk}\\;\\;\\mbox{with}\\;\\;k = \\mbox{argmax}_c\\; q_{cla}(\\;c\\;|\\;\\vec{x}\\;).\n\\end{equation}\nIt has the value $1$ for all data points $\\vec{x}$ assigned to class $j$, and the value $0$ for all other data points (See Fig.\\ref{figure_1}(c,d) for examples). When the ideal accuracy $A$ is evaluated using $\\hat{q}_{cla}$ instead of $q_{cla}$, the result can be directly compared with numerical accuracies based on one-hot classifier outputs.\n\n\\subsubsection*{Numerical evaluation of $A_{max}$}\n\nIn Fig.\\ref{figure_1}, the above quantities have been numerically evaluated for a simple Gaussian test data set. For this purpose, the two-dimensional integral \\ref{cij} has been evaluated numerically on a regular grid of linear spacing 0.01, ranging from -8 to +8 in each feature dimension. 
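A minimal Python sketch of this grid evaluation for two two-dimensional Gaussian classes reads as follows (the means and covariances are placeholders chosen for illustration, not the parameters used for the figure):
\\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

x = np.linspace(-8.0, 8.0, 1601)       # grid spacing 0.01, range -8..+8
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x)
grid = np.dstack([X, Y])

# generation densities p_gen(x|i) of the two classes (placeholder parameters)
p = np.array([
    multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]).pdf(grid),
    multivariate_normal([1.5, 0.5], [[1.0, 0.6], [0.6, 1.0]]).pdf(grid)])

winner = np.argmax(p, axis=0)          # ideal (Bayes) class assignment
C = np.array([[np.sum(p[i][winner == j]) * dx**2 for i in range(2)]
              for j in range(2)])      # confusion matrix C_ji
A_max = np.mean(np.diag(C))            # theoretical accuracy limit
\\end{verbatim}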
\n\n\n\\subsubsection*{Classifiers and input data}\n\n\\vspace{0.2cm}\\noindent In the following subsections, we provide the implementation details for the different classifier models that are compared in this work. The input data for these models are given as lists of $D$-dimensional feature vectors $\\vec{u} = (u_1,u_2,\\ldots,u_f,\\ldots,u_D)$, each belonging to one of $K$ possible classes $c$. In the case of artificially generated data, these lists contain 10000 feature vectors distributed equally over the data classes. They are split randomly into training (80\\%) and test (20\\%) data sets.\n\n\\subsubsection*{Perceptron model}\n\nThe perceptron model is implemented using Keras\/Tensorflow. It has one hidden layer, containing $N_{neu}=100$ neurons with ReLU activation function. The output layer has $N_{out}$ neurons with softmax activation function, where $N_{out}=K$ corresponds to the number of data classes. The loss function is categorical crossentropy. We optimize the perceptron on each training data set using the Adam optimizer over at least 10 epochs with a batch size of 128 and a validation split of 0.2. After training, the accuracy of the perceptron is evaluated with the independent test data set.\n\n\\subsubsection*{Naive Bayesian model}\n\nThe naive Bayesian model is implemented using the Python libraries Numpy and Scipy.\n\n\\vspace{0.2cm}\\noindent In the training phase, the training data set is sorted according to the $K$ class labels $c$. Then an individual Gaussian kernel density estimate (KDE, using Scott's method) is computed for each feature $f$ and class label $c$, corresponding to the empirical marginalized probability densities $p_{f,c}(u_f)$. \n\n\\vspace{0.2cm}\\noindent In the testing phase, the accuracy of the model is evaluated with the independent test data set as follows: According to the naive Bayes approach, the global likelihood $L(\\vec{u}\\;|\\;c)$ of a data vector $\\vec{u}=(u_1,u_2,\\ldots,u_D)$ under class $c$ is approximated by a product of the marginalized probabilities, so that\n\\begin{equation}\nL(\\vec{u}\\;|\\;c) = \\prod_{f=1\\ldots D} p_{f,c}(u_f).\n\\end{equation}\nSince we assume a flat prior probability ($P_{prior}(c)=1\/K$) over the data classes, the posterior probability of data class $c$, given the input data vector $\\vec{u}$, is given by\n\\begin{equation}\nP_{post}(c\\;|\\;\\vec{u}) = \\frac{L(\\vec{u}\\;|\\;c)}{\\sum_{i\\!=\\!1}^K\\;L(\\vec{u}\\;|\\;i)}.\n\\end{equation}\n\n\\subsubsection*{Naive Bayesian model with Random Dimensionality Expansion (RDE)}\n\nSince the naive Bayesian model takes into account only the marginal feature distributions $p_{f,c}(u_f)$, it cannot distinguish data classes which accidentally have identical $p_{f,c}(u_f)$ distributions, but differ in the correlations between the features. In principle, this problem can be fixed by multiplying the $D$-dimensional input vectors $\\vec{u}$ by a random $D_2 \\times D$ matrix $\\textbf{M}$, for example with normally distributed entries $M_{ij}\\sim N(\\mu=0,\\sigma=1)$, which yields transformed vectors $\\vec{v} = \\textbf{M} \\vec{u}$. Provided that $D_2\\gg D$, at least some of the new feature linear combinations $v_f$ will have marginal distributions that vary between the data classes. \n\n\\subsubsection*{CMVG Bayesian model}\n\nThe Correlated Multi-Variate Gaussian (CMVG) Bayesian model is also implemented using the Python libraries Numpy and Scipy.
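\n\n\\vspace{0.2cm}\\noindent As an overview, a minimal sketch of such a model could look as follows (a simplified illustration assuming the multivariate normal density provided by Scipy; the training and testing steps are detailed below):
\\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def train_cmvg(U_train, c_train, classes):
    # estimate mean vector and covariance matrix for each class
    params = {}
    for c in classes:
        Uc = U_train[c_train == c]
        params[c] = (Uc.mean(axis=0), np.cov(Uc, rowvar=False))
    return params

def predict_cmvg(U_test, params):
    # likelihood L(u|c) from the correlated multi-variate Gaussian density,
    # posterior with a flat prior, and most probable class per test vector
    classes = sorted(params.keys())
    L = np.stack([multivariate_normal(mean=mu, cov=Sigma,
                                      allow_singular=True).pdf(U_test)
                  for mu, Sigma in (params[c] for c in classes)], axis=1)
    P_post = L / L.sum(axis=1, keepdims=True)
    return np.array(classes)[np.argmax(P_post, axis=1)], P_post
\\end{verbatim}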
\n\n\\vspace{0.2cm}\\noindent In the training phase, the training data set is sorted according to the two class labels $c$. Then, for each class label $c$, we compute the mean values $\\mu_f^{(c)}$ of the features $f=1\\ldots D$, as well as the covariances $\\Sigma_{fg}^{(c)}$ between features $f$ and $g$. These quantities are packed as one vector ${\\bf \\mu}^{(c)}$ and one matrix ${\\bf \\Sigma}^{(c)}$ for each class $c$.\n\n\\vspace{0.2cm}\\noindent In the testing phase, the global likelihood $L(\\vec{u}\\;|\\;c)$ of a data vector $\\vec{u}=(u_1,u_2,\\ldots,u_D)$ under class $c$ is computed as the correlated, multi-variate Gaussian probability density\n\n\\begin{equation}\nL(\\vec{u}\\;|\\;c) = p_{cmvg}\\left( \\vec{u} \\;,\\; {\\bf\\mu}\\!=\\!{\\bf\\mu}^{(c)} \\;,\\;\n{\\bf \\Sigma}\\!=\\!{\\bf \\Sigma}^{(c)}\n\\right).\n\\end{equation}\nSince we assume a flat prior probability ($P_{prior}(c)=1\/2$) for the two data classes, the posterior probability of data class $c$, given the input data vector $\\vec{u}$, is given by\n\\begin{equation}\nP_{post}(c\\;|\\;\\vec{u}) = \\frac{L(\\vec{u}\\;|\\;c)}{\\sum_{i\\!=\\!1}^K\\;L(\\vec{u}\\;|\\;i)}\n\\end{equation}\n\n\n\\subsection*{Part 2: The DSC data model}\n\nWe consider an artificial classification problem with two multivariate Gaussian data classes $c\\in\\left\\{0,1\\right\\}$ and with statistical properties that can be tuned by {\\bf three control quantities}: the {\\bf dimensionality} $D$ of the feature space, the {\\bf separation} $S$ between the centers of the point clouds, and the {\\bf correlation} $C$ between features (within the same class), which is associated with the shape of the point cloud. The generation of artificial data within this DSC model works as follows:\n\n\\vspace{0.2cm}\\noindent Starting from a given triple $D,S,C$ of control quantities, we first generate $N_{rep}$ independent {\\bf parameter sets} $[\\mu_f^{(c)}, \\Sigma_{fg}^{(c)}]$ that describe the statistical properties of the two classes $c\\in\\left\\{0,1\\right\\}$. Here, $\\mu_f^{(c)}$ is the mean value of feature $f$ in class $c$, and $\\Sigma_{fg}^{(c)}$ is the covariance of features $f$ and $g$ in class $c$.\n\n\\vspace{0.2cm}\\noindent The mean values $\\mu_f^{(c=0)}$ in class $0$ are always set to zero, whereas the mean values $\\mu_f^{(c=1)}$ in class $1$ are random numbers, drawn from a uniform distribution with values in the range from 0 to $S$. The separation quantity $S$ is therefore the maximum distance between corresponding feature mean values in each dimension $f$.\n\n\\vspace{0.2cm}\\noindent The diagonal elements $\\Sigma_{ff}^{(c)}$ of the symmetric covariance matrix are set to 1 in both classes. The off-diagonal elements $\\Sigma_{f\\neq g}$ are assigned independent, continuous random numbers $x$, drawn from a box-shaped probability density distribution $q(x,C)$ that depends on the correlation quantity $C$ as follows:\n\n\\begin{eqnarray}\n q(x,C) &=& \\mbox{uniform}[0\\;,\\;C]\\;\\;\\;\\;\\;\\;\\mbox{for}\\;C\\le1 \\nonumber\\\\\n q(x,C) &=& \\mbox{uniform}[C\\!-\\!1\\;,\\;1]\\;\\mbox{for}\\;C>1\n\\end{eqnarray}\n\n\\vspace{0.2cm}\\noindent For $C=0$, the distribution $q(x,C)$ peaks at $x=0$, so that $\\Sigma_{ij}$ becomes a diagonal unit matrix. For $C=1$, the distribution $q(x,C)$ is uniform in the range $[0,1]$, and for $C=2$ it peaks at $x=1$, leading to $\\Sigma_{ij}=1$. 
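\n\n\\vspace{0.2cm}\\noindent A minimal sketch of how one such parameter set $[\\mu_f^{(c)},\\Sigma_{fg}^{(c)}]$ could be drawn for a given triple $D,S,C$ (a simplified illustration, not the exact generator used here) is:
\\begin{verbatim}
import numpy as np

def draw_dsc_parameters(D, S, C, rng=None):
    # draw one random DSC parameter set [mu, Sigma] for the two classes c = 0, 1
    rng = np.random.default_rng() if rng is None else rng
    # class means: zeros for class 0, uniform in [0, S] for class 1
    mu = {0: np.zeros(D), 1: rng.uniform(0.0, S, size=D)}
    # bounds of the box distribution q(x, C) for the off-diagonal elements
    lo, hi = (0.0, C) if C <= 1 else (C - 1.0, 1.0)
    Sigma = {}
    for c in (0, 1):
        off = np.triu(rng.uniform(lo, hi, size=(D, D)), k=1)
        # symmetric matrix with unit diagonal; for strong correlations it may have
        # to be projected onto the nearest positive semi-definite matrix before
        # Gaussian sampling
        Sigma[c] = off + off.T + np.eye(D)
    return mu, Sigma
\\end{verbatim}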
A plot of the distribution $q(x,C)$ is shown in Fig.\\ref{figure_2}(b).\n\n\\vspace{0.2cm}\\noindent According to the parameter set $[\\mu_f^{(c)},\\Sigma_{fg}^{(c)}]$, we then generate for each of the two classes $c$ a number $N_{vec}\/2$ of random, Gaussian data vectors $\\vec{u}(t)$, in which the $D$ components (features) are correlated to a degree controlled by quantity $C$. In the limiting case $C=0$, the $D$ time series $u_{f}(t)$ become statistically independent, whereas for $C=2$, the time series become fully correlated and thus identical. The $N_{vec}$ data vectors are then combined into a complete data set, in which vectors from the two classes (with corresponding labels $c$) appear in random order. In this way, we obtain for each triple $D,S,C$ of control quantities a total number of $N_{rep}$ independent data sets, each consisting of $N_{vec}$ data vectors. Since each data set has its own random parameters $[\\mu_f^{(c)},\\Sigma_{fg}^{(c)}]$, the DSC model reflects some of the heterogeneity of typical real world data. Finally, we split each data set into a training set (80\\%) and a test set (20\\%).\n\n\\vspace{0.2cm}\\noindent Before applying different types of classifiers to the DSC data sets, we test that the feature correlations and the class separation can be controlled reliably and over a sufficiently large range, using the quantities $C$ and $S$ (Fig.\\ref{figure_2}(d)).\n\n\\subsubsection*{Control of feature correlations by quantity $C$}\n\nTo evaluate correlation control, we fix the quantities $D=10$ and $S=1.0$ (Note that the separation has no effect on the correlations) and vary $C$ over the complete available range of supported values from 0 to 2. For each $C$, we generate $N_{rep}=100$ independent data sets, each consisting of $N_{vec}=10000$ data vectors. For each data set, we estimate the empirical covariance matrix $\\Sigma_{ij}^{(0)}$ of class 0. Because the matrix is symmetric, we compute the root-mean-square (RMS) average of all matrix elements above the diagonal. The blue line in Fig.\\ref{figure_2}(d) shows for each $C$ the mean RMS, averaged over the $N_{rep}=100$ repetitions (The latter are shown as gray dots). We find an almost linear relation between $C$ and the mean RMS. In particular, we can realize the full range of correlations, including the limiting cases of independently fluctuating features (for $C=0$) and identically fluctuating features (for $C=2$).\n\n\\subsubsection*{Control of class separation by quantity $S$}\n\nTo evaluate separation control, we fix the quantities $D=10$ and $C=0.5$ and vary $S$ between 0 and 10. For each $S$, we generate $N_{rep}=100$ independent data sets, each consisting of $N_{vec}=10000$ data vectors. For each (labeled) data set, we compute the general discrimination value (GDV), a quantity that has been specifically designed to quantify the separation between classes in high-dimensional data sets \\cite{krauss2018statistical, schilling2021quantifying}. The orange line in Fig.\\ref{figure_2}(d) is the mean negative GDV, averaged over the $N_{rep}=100$ repetitions (The latter are shown as gray dots). \n\n\\vspace{0.2cm}\\noindent The GDV is computed as follows: We consider $N$ points $\\mathbf{x_{n=1..N}}=(x_{n,1},\\cdots,x_{n,D})$, distributed within $D$-dimensional space. A label $l_n$ assigns each point to one of $L$ distinct classes $C_{l=1..L}$.
To make the measure invariant against scaling and translation, each dimension is separately z-scored and, for later convenience, multiplied by $\\frac{1}{2}$:\n\\begin{align}\ns_{n,d}=\\frac{1}{2}\\cdot\\frac{x_{n,d}-\\mu_d}{\\sigma_d}.\n\\end{align}\nHere, $\\mu_d=\\frac{1}{N}\\sum_{n=1}^{N}x_{n,d}\\;$ denotes the mean, and $\\sigma_d=\\sqrt{\\frac{1}{N}\\sum_{n=1}^{N}(x_{n,d}-\\mu_d)^2}$ the standard deviation of dimension $d$.\nBased on the re-scaled data points $\\mathbf{s_n}=(s_{n,1},\\cdots,s_{n,D})$, we calculate the {\\em mean intra-class distances} for each class $C_l$ \n\\begin{align}\n\\bar{d}(C_l)=\\frac{2}{N_l (N_l\\!-\\!1)}\\sum_{i=1}^{N_l-1}\\sum_{j=i+1}^{N_l}{d(\\textbf{s}_{i}^{(l)},\\textbf{s}_{j}^{(l)})},\n\\end{align}\nand the {\\em mean inter-class distances} for each pair of classes $C_l$ and $C_m$\n\\begin{align}\n\\bar{d}(C_l,C_m)=\\frac{1}{N_l N_m}\\sum_{i=1}^{N_l}\\sum_{j=1}^{N_m}{d(\\textbf{s}_{i}^{(l)},\\textbf{s}_{j}^{(m)})}.\n\\end{align}\nHere, $N_k$ is the number of points in class $k$, and $\\textbf{s}_{i}^{(k)}$ is the $i^{th}$ point of class $k$.\nThe quantity $d(\\textbf{a},\\textbf{b})$ is the Euclidean distance between $\\textbf{a}$ and $\\textbf{b}$. Finally, the Generalized Discrimination Value (GDV) is calculated from the mean intra-class and inter-class distances as follows:\n\\begin{align}\n\\mbox{GDV}=\\frac{1}{\\sqrt{D}}\\left[\\frac{1}{L}\\sum_{l=1}^L{\\bar{d}(C_l)}\\;-\\;\\frac{2}{L(L\\!-\\!1)}\\sum_{l=1}^{L-1}\\sum_{m=l+1}^{L}\\bar{d}(C_l,C_m)\\right],\n \\label{GDVEq}\n\\end{align}\n\n\\noindent where the factor $\\frac{1}{\\sqrt{D}}$, with $D$ the number of dimensions, is introduced to make the GDV invariant with respect to dimensionality.\nIn the case of two Gaussian distributed point clusters, the resulting discrimination value becomes $-1.0$ if the clusters are located such that the mean inter-cluster distance is two times the standard deviation of the clusters.\n\n\\subsection*{Part 3: Comparing classifiers}\n\nIn Fig.\\ref{figure_3}, we determine the average accuracy of the three classifier types (See part 1 of the Methods section) for different combinations of the DSC control parameters. For each parameter combination, 100 data sets are sampled from the superstatistical distribution. For every data set, consisting of 8000 training vectors and 2000 test vectors, the three classifiers are trained from scratch and then evaluated. This results in 100 accuracies for each classifier and each parameter combination. We then compute the mean value of these 100 accuracies, and this is the average accuracy plotted as colored lines in Fig.\\ref{figure_3}(c-f). The individual, non-averaged accuracies are plotted as gray points.\n\n\\subsection*{Part 4: Feature transformations}\n\nIn Fig.\\ref{figure_4}, we return to a much simpler test data set, consisting of two 'spherical' Gaussian data clusters in a two-dimensional feature space, which are centered at $\\vec{x}=(-\\frac{1}{2},0)$ and $\\vec{x}=(+\\frac{1}{2},0)$, respectively. All three classifier types reach the theoretical accuracy limit of about 0.69 in this case.\n\n\\vspace{0.2cm}\\noindent In this part, we explore how certain non-linear transformations of the original features (that is, $(x_1,x_2) \\longrightarrow (f(x_1),f(x_2))$) affect the classification accuracy. In particular, we investigate the cases $f(x)=\\sin(x)$, $f(x)=\\cos(x)$ and $f(x)=\\mbox{sgn}(x)$. The signum function yields -1 for negative arguments and +1 for positive arguments.
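\n\n\\vspace{0.2cm}\\noindent As an illustration, such transformed data sets could be produced as follows (a minimal sketch with placeholder sample sizes, using the sign function of Numpy for the signum transformation):
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# two 'spherical' Gaussian classes in 2D, centered at (-1/2, 0) and (+1/2, 0)
X0 = rng.normal(loc=(-0.5, 0.0), scale=1.0, size=(5000, 2))
X1 = rng.normal(loc=(+0.5, 0.0), scale=1.0, size=(5000, 2))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 5000)

# element-wise feature transformations (x1, x2) -> (f(x1), f(x2))
transforms = {'sin': np.sin, 'cos': np.cos, 'sgn': np.sign}
transformed = {name: f(X) for name, f in transforms.items()}
\\end{verbatim}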
For the special case $x=0$, the signum function would return zero, but this practically never happens, as the features $x$ are continuously distributed random variables. \n\n\\subsection*{Part 5: Sleep EEG data}\n\nFor a real-world evaluation of classifier performance, we are using 68 multi-channel EEG data sets from our sleep laboratory, each corresponding to a full-night recording of brain signals from a different human subject. The data were recorded with a sampling rate of 256 Hz, using three separate channels F4-M1, C4-M1, O2-M1. In this work, however, the signals from these channels are pooled, effectively treating them as data sets of their own.\n\n\\vspace{0.2cm}\\noindent The participants of the study included 46 males and 22 females, with an age range between 21 and 80 years. Exclusion criteria were a positive history of misuse of sedatives, alcohol or addictive drugs, as well as untreated sleep disorders. The study was conducted in the Department of Otorhinolaryngology, Head Neck Surgery, of the Friedrich-Alexander University Erlangen-Nürnberg (FAU), following approval by the local Ethics Committee (323–16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory polysomnography (PSG). \n\n\\vspace{0.2cm}\\noindent After recording, the raw EEG data were analyzed by a sleep specialist accredited by the German Sleep Society (DGSM), who removed typical artifacts\n\\cite{tatum2011artifact} from the data and visually identified the sleep stages in subsequent 30-second epochs, according to the AASM criteria (Version 2.1, 2014) \\cite{iber2007aasm,american2012aasm}. The resulting, labeled raw data were then used as a ground truth for testing the accuracy of the different classifier types.\n\n\\vspace{0.2cm}\\noindent In this work, we are primarily testing the ability of the classifiers to assign the correct sleep label $s$\n(Wake, REM, N1, N2, N3) independently to each epoch, without providing further context information. Such a single-channel epoch consists of $30\\times256=7680$ subsequent raw EEG amplitudes $x_{d,e}(t_n)$, where $d$ is the data set, $e$ the number of the epoch within the data set, and $t_n$ the $n$th recording time within the epoch. \n\n\\vspace{0.2cm}\\noindent In order to facilitate classification of these 7680-dimensional input vectors $\\vec{x}_{d,e}$ by a simple Bayesian model, or by a flat two-layer perceptron with relatively few neurons, the vectors have to be suitably pre-processed and compressed down to feature vectors $\\vec{u}_{d,e}$ of much smaller dimensionality $D\\ll 7680$. \n\n\\vspace{0.2cm}\\noindent Instead of relying on self-organized (and thus 'black-box') features, we are using mathematically well-defined features with a simple interpretation. In particular, we are interested in the case where all $D$ components $u_f$ of a feature vector $\\vec{u}$ are fundamentally of the same kind and only differ by some tunable parameter. \n\n\\subsubsection*{Fourier features}\n\nOur first type of feature estimates the momentary Fourier component of the raw EEG signal $x_{d,e}(t_n)$ at a certain, tunable frequency $\\nu_f$: \n\n\\begin{equation}\nu_f = \n\\sqrt{\\left(\\;\n\\sum_{n=1}^{7680} x_{d,e}(t_n)\\cdot \\cos(2\\pi \\nu_f t_n)\n\\right)^2 + \n\\left(\n\\sum_{n=1}^{7680} x_{d,e}(t_n)\\cdot \\sin(2\\pi \\nu_f t_n)\n\\right)^2}.
\n\\end{equation}\n\n\\vspace{0.2cm}\\noindent The set of frequencies $\\nu_{f=1}\\ldots\\nu_{f=D}$ is in our case chosen as an equidistant grid between 0 Hz and 30 Hz, because our EEG system filters out the higher-frequency components of the raw signals above about 30 Hz. \n\n\\subsubsection*{Correlation features}\n\nOur second type of feature is the normalized auto-correlation coefficient of the raw EEG signal $x_{d,e}(t_n)$ at a certain, tunable lag-time $\\Delta t_f$: \n\n\\begin{equation}\nu_f = \\frac{\\left\\langle \\left( x_{d,e}(t_n) - \\overline{x}_{d,e} \\right)\\cdot\\left( x_{d,e}(t_n\\!+\\!\\Delta t_f) - \\overline{x}_{d,e} \\right) \\right\\rangle_n}{\\sigma_{d,e}^2}.\n\\end{equation}\n\n\\vspace{0.2cm}\\noindent Here, $ \\overline{x}_{d,e}$ is the mean and $\\sigma_{d,e}$ the standard deviation of the raw EEG signal within the epoch. The symbol $\\left\\langle \\right\\rangle_n$ stands for averaging over all time steps within the epoch. The set of lag-times $\\Delta t_{f=1} \\ldots \\Delta t_{f=D}$ must be integer multiples of the recording time interval $\\delta t = 1\/256$ sec.\n\n\\subsection*{Part 6: Sleep stage detection}\n\nIn Fig.\\ref{figure_6}, we investigate the performance of the three classifier types described in part 1 in the real-world scenario of personalized sleep-stage detection. For this purpose, the classifiers are trained and tested individually on each of our 68 full-night sleep recordings, using as inputs the same 6-dimensional Fourier- or correlation features as in Fig.\\ref{figure_5} (Note that the aggregated distribution functions and covariance matrices in Fig.\\ref{figure_5} have been computed by pooling over all data sets and therefore show a much more regular behavior than the individual ones).\n\n\\vspace{0.2cm}\\noindent As a result, we obtain 68 accuracies for each combination of classifier type (Fig.\\ref{figure_6}, rows) and input feature type (Fig.\\ref{figure_6}, columns). The distributions of these accuracies are presented as histograms in the figure.\n\n\\subsection*{Part 7: Natural data clustering}\n\nIn Fig.\\ref{figure_7}, we address the question whether typical real-world data sets have a built-in clustering structure that can be detected (and possibly enhanced) by unsupervised methods of data analysis. For this purpose, we visualize the clustering structure.\nA frequently used method to generate low-dimensional embeddings of high-dimensional data is t-distributed stochastic neighbor embedding (t-SNE) \\cite{van2008visualizing}. However, in t-SNE the resulting low-dimensional projections can be highly dependent on the detailed parameter settings \\cite{wattenberg2016use}, sensitive to noise, and may not preserve, but rather often scramble the global structure in data \\cite{vallejos2019exploring, moon2019visualizing}.\nIn contrast, multi-dimensional scaling (MDS) \\cite{torgerson1952multidimensional, kruskal1964nonmetric,kruskal1978multidimensional,cox2008multidimensional} is an efficient embedding technique to visualize high-dimensional point clouds by projecting them onto a 2-dimensional plane. Furthermore, MDS has the decisive advantage that it is parameter-free and all mutual distances of the points are preserved, thereby conserving both the global and local structure of the underlying data. \nWhen interpreting patterns as points in high-dimensional space and dissimilarities between patterns as distances between corresponding points, MDS is an elegant method to visualize high-dimensional data.
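\n\n\\vspace{0.2cm}\\noindent As a minimal sketch (assuming the MDS implementation of scikit-learn, which is not necessarily the implementation used here), a two-dimensional embedding of a set of feature vectors could be computed as:
\\begin{verbatim}
import numpy as np
from sklearn.manifold import MDS

# X: array of shape (n_samples, 784) holding pre-processed feature vectors;
# random placeholder data are used here for illustration
X = np.random.rand(500, 784)

# metric MDS projection onto a 2-dimensional plane, based on Euclidean distances
embedding = MDS(n_components=2, dissimilarity='euclidean').fit_transform(X)
print(embedding.shape)   # (500, 2)
\\end{verbatim}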
By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora \\cite{schilling2021analysis}, hidden layer representations (embeddings) of artificial neural networks \\cite{schilling2021quantifying,krauss2021analysis}, structure and dynamics of recurrent neural networks \\cite{krauss2019analysis, krauss2019recurrence, krauss2019weight}, or brain activity patterns assessed during e.g. pure tone or speech perception \\cite{krauss2018statistical,schilling2021analysis}, or even during sleep \\cite{krauss2018analysis,traxdorf2019microstructure}. \nIn all these cases, the apparent compactness and mutual overlap of the point clusters permit a qualitative assessment of how well the different classes separate.\n\nIn addition, we measure the degree of clustering objectively by calculating the general discrimination value (GDV) \\cite{krauss2018statistical,schilling2021quantifying}, described in part 2.\n\n\\vspace{0.2cm}\\noindent For the clustering analysis, we consider two examples of 'natural data': One is the MNIST data set \\cite{deng2012mnist} with 10 classes of handwritten digits, in which the input vectors are 784-dimensional (28x28 pixels) and have continuous positive values (between 0 and 1 after normalization). \n\n\\vspace{0.2cm}\\noindent As the second example we use, again, our full-night EEG recordings with the 5 data classes corresponding to the sleep stages Wake, REM, N1, N2, and N3. In order to reduce setup-differences between measurements, we first perform a z-transform over each individual full-night EEG recording, so that the one-channel EEG signal of each participant now has zero mean and unit variance. Next, in order to make the EEG data more comparable with MNIST, we produce one 784-dimensional input vector from each 30-second epoch of the EEG recordings in the following way: The 7680 subsequent one-channel EEG values of the epoch are first transformed to the frequency domain using the Fast Fourier Transform (FFT), yielding 3840 complex amplitudes. Since the phases of the amplitudes change in a highly irregular way between epochs, we discard this information by computing (the square roots of) the magnitudes of the amplitudes. We keep only the first 784 values of the resulting real-valued frequency spectrum, corresponding to the lowest frequencies. By pooling over all epochs and participants, we obtain a long list of these 784-dimensional input vectors. They are globally normalized, so that the components in the list range between 0 and 1, just as in the MNIST case. Finally, the list is randomly split into train (fraction 0.8) and test (fraction 0.2) data sets. \n\n\\vspace{0.2cm}\\noindent It is possible to directly compute the MDS projection of the uncompressed 784-dimensional test data vectors into two dimensions, and also to calculate the corresponding GDV value that quantifies the degree of class separation (using the known sleep stage labeling). In Fig.\\ref{figure_7}, these uncompressed data distributions are always shown in the left upper scatter plot of each two-by-two block.\n\n\\vspace{0.2cm}\\noindent In this context, we also test if step-wise dimensionality reduction in an autoencoder leads to an enhanced clustering.
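\n\n\\vspace{0.2cm}\\noindent A minimal Keras sketch of such an autoencoder (matching the layer sizes specified in the following sentences; not the exact training script used here) could look like:
\\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

# symmetric autoencoder: 784 -> 128 -> 64 -> 16 -> 64 -> 128 -> 784
autoencoder = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(16, activation='relu'),   # bottleneck layer
    layers.Dense(64, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(784, activation='relu'),
])
autoencoder.compile(optimizer='adam', loss='mse')

# unsupervised training: the inputs serve as their own reconstruction targets
# autoencoder.fit(X_train, X_train, epochs=10, batch_size=128, validation_split=0.2)
\\end{verbatim}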
More specifically, the autoencoder has ReLU activation functions and 7 fully connected layers with the following numbers of neurons: 784, 128, 64, 16, 64, 128, 784. The mean squared error between input vectors and reconstructed vectors is minimized using the Adam optimizer. We also compute the MDS projections and GDV values for layers 2, 3 and 4 (the 16-dimensional bottleneck) of the autoencoder. In Fig.\\ref{figure_7}, these three compressed data distributions are shown within the two-by-two blocks of scatter plots.\n\n\\vspace{0.2cm}\\noindent As a reference for the resulting MDS projections and GDV values in the unsupervised autoencoder, we also process the two kinds of natural data with a perceptron that is trained in a supervised manner, so that it separates the known classes as far as possible. To make the perceptron comparable to the autoencoder, the first 4 layers (from the input to the bottleneck) are identical: Fully connected, ReLU activations, and layer sizes 784, 128, 64, 16. However, the decoder part of the autoencoder is replaced by a softmax layer in the perceptron, which has either 10 (MNIST) or 5 (sleep) neurons. The perceptron is trained by back-propagation to minimize categorical cross-entropy between the true and predicted labels, using the Adam optimizer. Just as in the autoencoder, we compute MDS projections and GDV values for the first 4 perceptron layers. \n\n\n\n\n\n\\clearpage\n\\section{Results}\n\n\\subsection*{Part 1: Accuracy limit}\n\nIn order to demonstrate the existence of an accuracy limit in classification tasks, we assume that a statistical process generates data vectors $\\vec{x}$ which are distributed in the input space (subsequently also called feature space) according to given {\\bf generation densities} $p_{gen}(\\vec{x}\\;|\\;i)$ that depend on the class $i$. For reasons of mathematical tractability and visual clarity, we start with a simple problem of two Gaussian data classes in a two-dimensional feature-space. We assume that class $i=0$ is centered at $\\vec{x}=(x_1,x_2)=(0,0)$, whereas class $i=1$ is centered a distance $d$ away, at $\\vec{x}=(d,0)$. As another discriminating property, the two class-dependent distributions are assumed to have different correlations between the features $x_1$ and $x_2$ (Compare Fig.\\ref{figure_1}(a,b)). \n\n\\vspace{0.2cm}\\noindent As derived in the Methods section, an ideal classifier would divide the feature-space $\\left\\{\\vec{x}\\right\\}$ among the two classes in a way that is perfectly consistent with the true generation densities $p_{gen}(\\vec{x}\\;|\\;i)$. The resulting ideal assignment of a discrete class $j=0$ or $j=1$ to each data vector $\\vec{x}$ can be described by binary {\\bf class indicator functions} $\\hat{q}_{cla}(\\;j\\;|\\;\\vec{x}\\;)$ (Compare Fig.\\ref{figure_1}(c,d)). \n\n\\vspace{0.2cm}\\noindent The latter two quantities can be combined into the {\\bf confusion densities} $\\hat{q}_{cla}(j|\\vec{x}) \\; p_{gen}(\\vec{x}|i)$, which give the probability density that data point $\\vec{x}$ is generated in class $i$ but assigned to class $j$ by the ideal classifier (Compare Fig.\\ref{figure_1}(e,f)).
The parts of feature space where the confusion density is large for $i\\neq j$ correspond to the overlap regions of the data classes, and it is this overlap that makes the theoretical limit of the classification accuracy smaller than one.\n\n\\vspace{0.2cm}\\noindent It is possible to compute the {\\bf confusion matrix} of the ideal classifier by integrating the confusion densities over the entire feature space, which is feasible only in very low-dimension spaces. The confusion matrix, in turn, yields the {\\bf theoretical accuracy limit} $A_{max}$ of the ideal classifier.\nIn our simple example, $A_{max}$ is expected to increase with the distance $d$ between the two data classes, as this separation reduces the class overlap. By numerically computing the integral over the two-dimensional feature space of our Gaussian test example, we indeed find a monotonous increase of $A_{max}=A_{max}(d)$ from about 0.62 at $d=0$ to nearly one at $d=5$ (Compare Fig.\\ref{figure_1}(g, black line)).\n\n\\vspace{0.2cm}\\noindent Our next goal is to apply different types of classifier models to data drawn from the generation densities $p_{gen}(\\vec{x}\\;|\\;i)$ of the Gaussian test example above.\n\n\\vspace{0.2cm}\\noindent As an example for a 'black box' classifier, we consider a {\\bf perceptron} with one hidden layer (See Methods section for details). In the training phase, the connection weights of this neural network are optimized using the back-propagation algorithm. \n\n\\vspace{0.2cm}\\noindent As an example of a mathematically transparent, but simple classifier type, we consider a {\\bf Naive Bayesian} model. Here, correlations between the input features are neglected, and so the global likelihood $L(\\vec{u}\\;|\\;c)$ of a data vector $\\vec{u}$, given the data class $c$, is approximated as the product of the marginal likelihood factors for each individual feature $f$ (See Methods section for details). In the 'training phase', the naive Bayesian classifier is simply estimating the distribution functions of these marginal likelihood factors, using Kernel Density Approximation (KDE). \n\n\\vspace{0.2cm}\\noindent Finally, we consider a {\\bf Correlated Multi-Variate Gaussian (CMVG) Bayesian} model as an example of a mathematically transparent classifier that can also account for correlations in the data, but which assumes that all features are normally distributed (See Methods section for details). In the training phase, the CMVG Bayesian classifier has to estimate the mean values and covariances of the data vectors.\n\n\\vspace{0.2cm}\\noindent When applying these three classifiers to the Gaussian test data, we indeed find that {\\bf all models reach the same theoretical classification limit, even though their operating principles are very different} (Compare Fig.\\ref{figure_1}(g)). The only exception is the Naive Bayes classifier at small class distances $d$ (Compare Fig.\\ref{figure_1}(g, orange line)). This model fails because it can only use the marginal feature distributions, which happen to be identical for both classes in the case $d=0$. However, the problem can be easily fixed by multiplying the original two-dimensional feature vectors with a random, non-quadratic matrix (See Methods section for details) and thereby creating many new linear feature combinations, some of which usually have significantly different marginal distributions. Such a {\\bf Random Dimensionality Expansion (RDE)}, as proposed in Yang et al. 
\\cite{yang2021neural}, allows even the Naive Bayes model to reach the accuracy limit in strongly overlapping data classes (Compare Fig.\\ref{figure_1}(g, olive line)).\n\n\n\\subsection*{Part 2: The DSC data model}\n\nIn order to investigate how the performance of different classifiers depends on the statistical properties of the data, we generate large numbers of artificial data sets with two labeled classes $c \\in \\left\\{0,1\\right\\}$, in which the dimensionality $D$ of the individual data vectors $\\vec{u}$, the degree of correlations $C$ between their components $u_{f=1\\ldots D}$ (here also called features), and the separation $S$ between the two classes in feature space can be independently adjusted (See Figs.\\ref{figure_2}(b,c) for an illustration of $C$ and $S$). To replicate some of the heterogeneity of real world data, we design our data generator as a two-level superstatistical model \\cite{metzner2015superstatistical,mark2018bayesian}: The mean values $\\bf \\mu^{(c)}$ and covariances $\\bf \\Sigma^{(c)}$ of the multi-variate probability distributions $p_c(\\vec{u})$ in each of the data classes $c$ are themselves random variables. They are drawn from certain meta-distributions, which are in turn controlled by the three quantities $D,S,C$ (See Methods for details, as well as Fig.\\ref{figure_2}(a)).\n\n\\vspace{0.2cm}\\noindent Using the General Discrimination Value (GDV), a measure designed to quantify the separability of labeled point sets (data classes) in high-dimensional spaces \\cite{schilling2021quantifying}, we show that the mean separability of data classes in the DSC-model is indeed monotonously increasing with the control quantity $S$ (Orange line in Fig.\\ref{figure_2}(d)), whereas the separability of individual data sets is fluctuating heavily around this mean value (Grey dots in Fig.\\ref{figure_2}(d)).\n\n\\vspace{0.2cm}\\noindent Moreover, we quantify the degree of correlation between the $D$ features of the data vectors in each class $c$ by the root-mean-square average of the upper triangular matrix elements in the covariance matrix $\\bf \\Sigma^{(c)}$. We show that this RMS-average is an almost linear function of $C$ (Blue line in Fig.\\ref{figure_2}(d)) and can be varied between zero (Corresponding to independently fluctuating features, or statistical independence) and one (Corresponding to identically fluctuating features, or perfect correlations).\n\n\\subsection*{Part 3: Comparing classifiers}\n\nNext, we apply the three classifier types to artificial data, with statistical properties controlled by the quantities $D$, $S$ and $C$. We first investigate the {\\bf accuracy of the classifiers as a function of data dimensionality $D$} (Fig.\\ref{figure_3}(c,d)), considering correlated data ($C=1.0$). \n\n\\vspace{0.2cm}\\noindent When the separation of the data classes in feature space is small ($S=0.1$, panel (c)), the classification accuracy for one-dimensional data ($D=1$) is very close to the minimum possible value of 0.5 (corresponding to a purely random assignment of the two class labels) in all three models. As data dimensionality $D$ increases, all three models monotonically increase their average accuracies (colored lines), whereas the accuracies of individual cases show a large fluctuation (gray dots). However, the Naive Bayes classifier (orange line) does not perform well even for large data dimensionality, because the point clouds corresponding to the two classes are strongly overlapping in feature space. 
By contrast, the CMVG Bayes classifier (red line) and the Perceptron (blue line) eventually achieve a very good performance, because they can exploit the correlations in the data. The similarity of the latter two accuracy-versus-$D$ plots is remarkable, considering that these two classifiers work in completely different ways (the Bayesian model performing theory-based mathematical operations with estimated probability distributions, the neural network computing quite arbitrary non-linear transformations of weighted sums). We therefore conclude that the latter two models approach the theoretical optimum of accuracy for each combination of the control quantities $D,S,C$. \n\n\\vspace{0.2cm}\\noindent As the separation of the data classes in feature space gets larger ($S=1.0$, panel (d)), the accuracy-versus-$D$ plots are qualitatively similar to panel (c), but for one-dimensional data ($D=1$) the common accuracy is now slightly above the random baseline, at 0.6. By comparing panels (c) and (d), we note that Naive Bayes profits from the larger class separation, but the other two classifiers reach the theoretical performance maximum even without this extra separation.\n\n\\vspace{0.2cm}\\noindent Next, we investigate the {\\bf accuracy of the classifiers as a function of class separation $S$} (Fig.\\ref{figure_3}(e,f)), considering five-dimensional data ($D=5$). Without correlations ($C=0$, panel (e)), all three models show exactly the same monotonic increase of accuracy with separation $S$, starting at the random baseline of 0.5 and finally approaching perfect accuracy of 1.0. \n\n\\vspace{0.2cm}\\noindent With feature correlations present ($C=1.0$, panel (f)), the Naive Bayes classifier shows the same behavior as in panel (e), whereas the other two correlation-sensitive models now already start with a respectable accuracy of 0.8 at zero class separation.\n\n\\vspace{0.2cm}\\noindent Finally, we investigate the {\\bf accuracy of the classifiers as a function of the feature correlations $C$} (Fig.\\ref{figure_3}(g,h)), considering again five-dimensional data ($D=5$). For strongly overlapping data classes ($S=0.1$, panel (g)), Naive Bayes cannot exceed an accuracy of about 0.55, whereas the two correlation-sensitive models show a super-linear increase of accuracy with increasing feature correlations. However, this increase ends rather abruptly at about $C\\approx 0.7$. Above this transition point, both models stay at a plateau accuracy of about 0.8, independent of the correlation quantity. Note that this discontinuity of the slope of the accuracy-versus-$C$ plots is likely not an artifact of the DSC data, since the RMS-average of empirical correlations versus $C$ (Fig.\\ref{figure_2}(d)) did not show such an effect at $C\\approx 0.7$. Moreover, the fact that functionally distinct classifiers such as CMVG Bayes and Perceptron produce an almost identical behavior here suggests that the accuracy plateau in the strong correlation regime indeed reflects the theoretical performance maximum.\n\n\\vspace{0.2cm}\\noindent As the class separation is increased ($S=1.0$, panel (h)), all three models start at a larger accuracy of about 0.75 in the uncorrelated case. Now the performance of Naive Bayes is even declining with increasing $C$, because this model wrongly assumes uncorrelated data. The other two models show again the super-linear increase up to $C\\approx 0.7$.
However, now a further improvement of performance is possible with increasing correlations.\n\n\\subsection*{Part 4: Feature transformations}\n\nThe accuracy limit is determined by the overlap of data classes, that is, by the possibility that different classes $i \\neq j$ produce exactly the same data vector $\\vec{x}^{\\ast}$. Transformations $\\vec{x} \\rightarrow \\vec{f}(\\vec{x})$ of the input features can drastically change the distributions of data points (As an example, compare the rows in Fig.\\ref{figure_4}). However, they cannot be expected to reduce the fundamental amount of class overlap, because transformations are just redirecting the common points $\\vec{x}^{\\ast}$ to new locations in feature space. In particular, invertible transformations can be viewed as variable substitutions in the integral Eq.\\ref{cij} for the confusion matrix. They do not affect the resulting matrix values and thus leave the accuracy invariant.\n\n\\vspace{0.2cm}\\noindent In order to test this expectation, we start with two overlapping Gaussian data classes in a two-dimensional feature space (Fig.\\ref{figure_4}, top row), resulting in an accuracy limit of $\\approx 0.69$. All three classifiers actually reach this limit with the original data as input.\n\n\\vspace{0.2cm}\\noindent Next we perform simple non-linear transformations on the input data, by replacing each of the two features $x_1$ and $x_2$ with a function of themselves (in particular: $\\sin$, $\\mbox{sgn}$, and $\\cos$). We find that the application of the $\\sin$-transformation (second row in Fig.\\ref{figure_4}) has indeed no effect on the accuracy of the three classifiers, even though the joint (first column) and marginal distributions (second and third column) are now strongly distorted. Even the application of the $\\mbox{sgn}$-transformation (third row), which collapses all data onto just 4 possible points in feature space, leaves the accuracies invariant. This works because the two classes in our simple example can be distinguished by the sign of the $x_1$-feature, and both the $\\sin$- as well as the $\\mbox{sgn}$-transformation leave this information intact. By contrast, the application of the $\\cos$-transformation destroys this crucial information, and consequently all accuracies drop to the random baseline of 0.5.\n\n\\vspace{0.2cm}\\noindent The above numerical experiments illustrate that transformations of the input-data can reduce (by destroying information that is essential for class-discrimination), but never increase the theoretical accuracy limit, which is an inherent property of the data. Of course, the subsequent data transformations which are taking place in the layers of deep neural networks are still useful, because they re-shape data distributions until classes can be linearly separated in the final layer of the network. \n\n\n\n\\subsection*{Part 5: Sleep EEG data}\n\nIn our artificial data sets, all feature distributions were normally distributed. Moreover, it was possible to introduce extremely strong correlations between these features, which could then be exploited by two of the three classifier models. It is however unclear if the ability of a classifier to detect correlations is always crucial in real-world problems.\n\n\\vspace{0.2cm}\\noindent We therefore turn in a next step to actually measured EEG data, recorded over-night from 68 different sleeping human subjects. 
In this case, our final goal is to assign to each 30-second epoch of a raw one-channel EEG signal one of the five sleep stages (Wake, REM, N1, N2, N3). \n\n\\vspace{0.2cm}\\noindent At our sample rate, a single epoch of EEG data corresponds to 7680 subsequent amplitudes. Such high-dimensional data vectors $\\vec{x}$ are however not suitable as direct input for a Bayesian classifier, nor for a flat neural network with only $\\approx 100$ neurons. For this reason, we first compress the raw data vectors $\\vec{x}=(x_1,\\ldots x_{7680})$ into suitable feature vectors $\\vec{u}=(u_1,\\ldots u_D)$ of strongly reduced dimensionality $D\\approx 10$. Since we aim to develop a fully transparent classifier system, we use mathematically well-defined, human-interpretable features $u_f = G(\\vec{x},\\alpha_f)$, which depend on a freely tunable parameter $\\alpha$. The dimensionality $D$ of the feature space is then determined by how many of these parameters $\\alpha_{f=1\\ldots D}$ are chosen.\n\n\\vspace{0.2cm}\\noindent The huge literature on brain waves suggests that the momentary {\\bf Fourier components} of the EEG signal are suitable features for the classification of sleep stages. The parameter $\\alpha$ is then naturally given by the frequency $\\nu$ of the Fourier component (For details see methods). In a first experiment, we use a set of six equally spaced frequencies ($\\nu_1=$5 Hz, $\\nu_2=$10 Hz,$\\ldots$ $\\nu_6=$30 Hz). Based on training data sets that have been manually labeled by a sleep specialist, we then compute the marginal probability density functions of these Fourier features, as well as their covariance matrices, for each of the 5 sleep stages $s$ (Fig.\\ref{figure_5}, left two columns). We find that within each sleep stage, the Fourier features have unimodal distributions, with peak positions and widths depending quite systematically on the frequency $\\nu$. There are characteristic differences between the sleep stages (in particular the distributions are wider in the wake stage), but they are not very pronounced. In the covariance matrices, we find that the off-diagonal elements are significantly smaller than the diagonal elements (The latter have been set to zero in Fig.\\ref{figure_5} to emphasize the actual inter-feature correlations), with the exception of the wake state. Also the N1 state has slightly larger inter-feature correlations compared to the REM, N2 and N3 states. \n\n\\vspace{0.2cm}\\noindent As an alternative or complement to the Fourier features, we also consider the normalized (Pearson) {\\bf auto-correlation coefficients} of the raw EEG signal (Fig.\\ref{figure_5}, right two columns. For details see methods). The feature parameter $\\alpha$ is in this case given by the lag-time $\\Delta t$, for which we choose six equally spaced values ($1,3,\\ldots,11$ in units of the EEG sampling period). Since these correlation features cannot exceed the value of one by definition, the marginal distributions are highly non-Gaussian with pronounced tails towards small values. These tails show relatively strong differences between some of the sleep stages, but also surprising similarities, in particular for REM and N2. In the covariance matrices, we find the strongest inter-feature correlations in the wake and N1 stages. Again, the covariance matrices are very similar in REM and N2.\n\n\\subsection*{Part 6: Sleep stage detection}\n\nNext, we apply our three classifier models to the above sleep EEG data. 
However, while the feature distributions and correlations in Fig.\\ref{figure_5} were based on the global data, pooled over all 68 full-night EEG recordings, we are considering here the task of personalized sleep-stage detection. That is, the classifiers are trained and evaluated individually on each of the 68 data sets. Because the amount of training data is severely limited in this task, classification accuracies are expected to be rather low and strongly dependent on the participant. We therefore compute the distributions of accuracies over the 68 personalized data sets (histograms in Fig.\\ref{figure_6}) for all three classifiers and for the two types of pre-processed features. \n\n\\vspace{0.2cm}\\noindent We find that the CMVG Bayes model performs very poorly in this task, presumably because the feature distributions are non-Gaussian and only weakly correlated except in the wake stage. In particular, for some participants the classification accuracy is less than the random baseline of about 0.2, corresponding to consistent misclassifications. This can happen in Bayesian classifiers when the likelihood distributions learned from the training data set do not match the actual distributions in the test data set. \n\n\\vspace{0.2cm}\\noindent By contrast, the Naive Bayes model can properly represent the non-Gaussian feature distributions by KDE approximations, and it furthermore profits from the lack of correlations. The performance of the Perceptron is comparable to that of the Naive Bayes model. For both Fourier and correlation features, these two models show accuracies well above the baseline, roughly in the range from 0.3 to 0.6.\n\n\\subsection*{Part 7: Natural data clustering}\n\nBoth the ten digits in MNIST and the five sleep stages in overnight EEG recordings are human-defined classes. It is therefore unclear whether these classes can also be considered as 'natural kinds'. \n\n\\vspace{0.2cm}\\noindent After a suitable pre-processing that brings both data sets into the same format of 784-dimensional, normalized feature vectors (for details see Methods section), we address this question by computing two-dimensional MDS projections, coloring the data points according to the known, human-assigned labels (In Fig.\\ref{figure_7}, see the upper left scatter plot in each 2-by-2 block). Indeed, the projected data distributions show a small degree of clustering, which is also quantitatively confirmed by the corresponding GDV values (-0.061 for MNIST and -0.035 for sleep EEG data). Note that in the sleep data, a large number of extreme outliers are found which might not correspond to any of the standard classes.\n\n\\vspace{0.2cm}\\noindent The purpose of classifiers is to transform and re-shape the data distribution in such a way that the final network layer (often a softmax layer with one neuron for each data class) can separate the classes easily from each other. Although, as we have shown above, these re-shaping transformations cannot reduce the natural overlap of classes (which would push the accuracy beyond the data-inherent limit), they might as a side-effect lead to a larger 'centrality' of the clusters associated with each class. This would show up quantitatively as a decrease of the General Discrimination Value (GDV) in the higher network layers of the classifier, as compared to the original input data.
In order to test this hypothesis, we have trained a four-layer perceptron (see Methods section for details) in a supervised manner on both the MNIST and sleep EEG data. In the case of MNIST, we indeed observe a systematic decrease of the GDV in subsequent network layers: GDV(L0)=-0.061, GDV(L1)=-0.174, GDV(L2)=-0.250, and GDV(L3)=-0.300 (See Fig.\\ref{figure_7}(b)). An analogous layer-wise decrease is found for the sleep EEG data: GDV(L0)=-0.035, GDV(L1)=-0.096, GDV(L2)=-0.122, and GDV(L3)=-0.181 (See Fig.\\ref{figure_7}(d)).\n\n\\vspace{0.2cm}\\noindent We finally address the question whether a natural clustering in novel, unlabeled data sets can be automatically detected, and possibly enhanced, in an unsupervised manner. For this purpose, we consider an autoencoder that performs a layer-wise dimensionality reduction of the data, and then re-expands these low-dimensional embeddings back to the original number of dimensions. During this process of 'compression' and 're-expansion', fine details of the data have to be discarded, and it appears reasonable that this might go hand in hand with a 'sharpening' of the clusters. Again, in our test case where the labels of the data points are actually known, this enhancement of cluster centrality can be quantitatively measured by the GDV. For comparability, we have used an autoencoder that has the same design as the perceptron for the first four network layers. In the case of MNIST, we indeed find that the unsupervised compression enhances cluster centrality: GDV(L0)=-0.061, GDV(L1)=-0.115, GDV(L2)=-0.122, and GDV(L3)=-0.137 (See Fig.\\ref{figure_7}(a)). The behavior is similar with the sleep EEG data, except for the last layer: GDV(L0)=-0.035, GDV(L1)=-0.037, GDV(L2)=-0.041, and GDV(L3)=-0.036 (See Fig.\\ref{figure_7}(c)).\n\n\n\\vspace{0.2cm}\\noindent \n\n\n\\clearpage\n\\section{Discussion and Outlook}\n\nIn this work, we have addressed various aspects of data ambiguity: the fact that multi-dimensional data spaces usually contain vectors that cannot be unequivocally assigned to any particular class. \nThe probability of encountering such ambiguous vectors is easily underestimated in machine learning, because the data sets used to train classifiers - rather than being sampled randomly from the entire space of possible data - typically represent just a tiny, pre-selected subset of 'reasonable' examples. For instance, the space of monochrome images with full HD resolution and 256 gray values contains $256^{1920 \\times 1080} \\approx 10^{4993726}$ possible vectors. The fraction of these images that resemble any human-recognizable objects is virtually zero, whereas the largest part would be described as noise by human observers. One may argue that these 'structure-less' images should not play any role in real-world applications. However, it is conceivable that sensors in autonomous intelligent systems, such as self-driving cars, can produce untypical data under severe environmental conditions, such as snow storms. How to deal with data ambiguity is therefore a practically relevant problem. Moreover, as we have tried to illustrate in this paper, data ambiguity has interesting consequences from a theoretical point of view.\n\n\\vspace{0.2cm}\\noindent In part one, we have derived the theoretical limit $A_{max}$ of accuracy that can be achieved by a perfect classifier, given a data set with partially overlapping classes. 
By generating artificial data classes with Gaussian probability distributions in a two-dimensional feature space and with a controllable distance $d$ between the maxima, we verified that different types of classifiers (The CMVG Bayesian model with multi-variate Gaussian likelihoods and a perceptron) exactly follow the predicted accuracy limit $A_{max}(d)$ (Fig.\\ref{figure_1}(g)). The naive Bayesian model, which cannot exploit correlations to distinguish between data classes, originally yields sub-optimal accuracies for small distances $d$, but this problem can be fixed by applying a random dimensionality expansion to the data as a trivial pre-processing step \\cite{yang2021neural}. We have restricted ourselves to only two features (dimensions) for this test, because predicting the accuracy limit involves the exact computation of the confusion matrix, which in turn is an integral over the entire data space. Note, however, that for high-dimensional data with known class-dependent generation densities $p_{gen}(\\vec{x}\\;|\\;i)$, the integral could be approximated by Monte Carlo sampling. In this case, the element $C_{ji}$ of the confusion matrix would be computed by drawing random vectors $\\vec{x}$ from class $i$. The class indicator function $\\hat{q}_{cla}(\\;k\\;|\\;\\vec{x}\\;)$ of the perfect classifier, which is fully determined by the generation densities, yields the corresponding predicted classes $k$ for these data vectors. The matrix element $C_{ji}$ is then given by the fraction of cases where $k=j$. \n\n\\vspace{0.2cm}\\noindent In part two, we have constructed a two-level model to generate artificial test data (Fig.\\ref{figure_2}). The model has high-level parameters $D$, $S$ and $C$ which control the number of dimensions (features), the average separation of the two classes in feature space, as well as the average correlation between the features. For each triple of high-level parameters $D,S,C$, a large number of low-level parameters $\\mu, \\Sigma$ are randomly drawn according to specified distributions, which are in turn used to generate the final test data sets. The super-statistical nature of the model allows us to prescribe the essential statistical features of dimensionality, separation and correlation, while at the same time ensuring a large variability of the test data. By using the General Discrimination Value (GDV), a quantitative measure of class separability (centrality), we have confirmed that the high-level parameter $S$ controls the class separability as intended. Moreover, the proper action of parameter $C$ was confirmed by computing the root-mean-square average over the elements of the data's covariance matrix.\n\n\\vspace{0.2cm}\\noindent In part three, we have applied our three types of classifiers to the test data generated with the DSC-model. Without intra-class feature correlations ($C=0$), we find that all three models show with growing separation parameter $S$ exactly the same monotonically increasing average accuracy (Fig.\\ref{figure_3}(e)). Although the exact computation of $A_{max}$ is not possible in this five-dimensional data space, the perfect agreement of the three different classifiers indicates that they all have reached the accuracy limit. 
When intra-class feature correlations are present ($C\\neq 0$), we find by systematically varying the parameters $D$, $S$ and $C$ that the resulting accuracies of the CMVG-Bayes classifier and of the perceptron are extremely similar in all considered cases, indicating again that they have reached the theoretical accuracy limit. As expected, the naive Bayesian classifier shows sub-optimal accuracies in all cases where feature correlations are required to distinguish between the classes. In general, this analysis shows that the accuracy of classification can be systematically enhanced by providing more features (larger data dimensionality $D$) as input. Extra features that do not provide additional useful information are 'automatically ignored' by the classifiers and never reduce the achievable accuracy. Moreover, accuracy can be enhanced by providing features that are correlated with each other (larger parameter $C$), but differently in each data class. Such class-specific feature correlations can be exploited for discrimination by models such as CMVG Bayes and the perceptron, but not by the naive Bayes model. Moreover, we find that the theoretical accuracy maximum as a function of the correlation parameter $C$ shows an interesting abrupt change of slope at around $C\\approx 0.8$ (Fig.\\ref{figure_3}(g,h)). The origin of this effect is at present unclear, but will be explored in follow-up studies.\n\n\\vspace{0.2cm}\\noindent In part four, we have investigated the effect of non-linear feature transformations, applied as a pre-processing step, on classification accuracy (Fig.\\ref{figure_4}). Since the achievable accuracy in a classification task is limited by the degree of overlap between the data classes, feature transformations can certainly reduce the accuracy to below the limit $A_{max}$ (when they destroy information that is essential for discrimination), but they can never push the accuracy to above $A_{max}$. This is indeed confirmed in a simple test case where all three classifier types perform at the accuracy maximum with the non-transformed data: Applying a feature-wise sine-transformation drastically changes the data distributions $p_{gen}(\\vec{x}\\;|\\;i)$, but leaves the accuracies unchanged at $A_{max}$. The accuracy remains invariant even under a signum-transformation, although this non-invertible operation reduces the data distributions to only four possible points in feature space. In this extreme case, most of the detailed information about the input data vectors is lost, but the part that is essential for class discrimination, namely the sign of the feature $x_1$, is retained. This example demonstrates that classification is a type of lossy data processing where irrelevant information can be safely discarded. For this reason, neural-network based classifiers usually project the input data vectors into spaces of ever smaller dimensions, up to the final discrimination layer which needs only as many neural units as there are data classes. In this context, it is interesting that biological organisms with nervous systems, relying on an efficient classification of objects in their environment for survival, have probably evolved sensory organs and filters that only transmit the small class-discriminating part of the available information to the higher stages of the neural processing chain. 
As a consequence, our human perception is almost certainly not a veridical representation of the world \cite{mark2010natural,hoffman2014objects, hoffman2018interface}.\n\n\vspace{0.2cm}\noindent In part five, we have analyzed full-night EEG recordings of sleeping humans, divided into epochs of 30 seconds that have been labeled by a specialist according to the five sleep stages. Such recordings can be used as training data for automatic sleep stage classifiers, an application of machine learning that could in the future remove a large workload from clinical sleep laboratories. In our context of data ambiguity, sleep EEG is an interesting case because different human specialists agree about individual sleep-label assignments in only 70\%--80\% of the cases, even if multiple EEG channels and other bio-signals (such as electro-oculograms or electro-myograms) are provided \cite{fiorillo2019automated}. This low inter-rater reliability suggests that a considerable fraction of the 30-second epochs is actually ambiguous with respect to sleep stage classification, in particular when only the time-dependent signal of a single EEG channel is available as input data. Our first goal is a suitable dimensionality reduction of the raw data, which (at a sample rate of 256 Hz) consist of 7680 subsequent EEG values in each epoch. As a pre-processing step, we map each 7680-dimensional raw data vector onto a 6-dimensional feature vector, so that our Bayesian classifiers (Naive and CMVG) can be efficiently used. We consider as features the real-valued Fourier amplitudes at different frequencies, as well as the auto-correlation coefficients at different lag-times (Fig.\ref{figure_5}). The Fourier features are expected to be particularly useful, as it is well-known that the activity in different EEG frequency bands varies in characteristic ways over the five sleep stages. The correlation features have been successfully applied for Bayesian classification in a previous study \cite{metzner2021sleep}. In our present study, we use either Fourier or correlation features, but not combinations of the two. By performing a statistical analysis of the features, we find that within a given sleep stage, the six features have significantly different marginal probability distributions. However, for each feature these distributions are quite similar across the sleep stages, so that their value for the classification task is limited. Moreover, the correlations between features, which could be exploited by the CMVG Bayes classifier and by the perceptron, turn out to be very weak, except for the Fourier features in the wake stage. Another problem is the strongly non-Gaussian shape of the marginal probability distributions in the case of the correlation features, which cannot be properly represented by the CMVG Bayes model. \n\n\vspace{0.2cm}\noindent In part six, we have used our three classifier models, based on the above Fourier- and correlation features, for personalized sleep stage detection. In this very hard task, the classifiers are trained and tested, independently, on the full-night EEG data set of a single individual only. Since an individual data set typically contains less than 1000 epochs (each corresponding to one feature vector), random deviations from the 'typical' sleeping patterns are likely to be picked up during the training phase. We consequently find that the accuracies vary widely between the individual data sets.
As expected, the CMVG Bayes model performs badly in this task, because there are almost no inter-feature correlations present that could be exploited for sleep stage discrimination, and because the feature distributions are non-Gaussian. Interestingly, both the Naive Bayesian classifier and the perceptron achieve relatively good accuracies, mainly in the range from 0.3 to 0.6. However, these accuracies may be further increased by using more sophisticated neural network architectures \cite{stephansen2018neural, krauss2021analysis}, and hence do not represent the accuracy limit.\n\n\vspace{0.2cm}\noindent In the final part seven, we have started to explore whether the distinct classes in typical real-world data sets are defined arbitrarily (and therefore can only be detected after supervised learning), or if the differences between these classes are so prominent that even unsupervised machine learning methods can recognize them as distinct clusters in feature space. Besides the (pooled) sleep EEG data, we have used the MNIST data set to test for any inherent clustering structure. For this investigation, the individual data points, each corresponding to one epoch of EEG signal or to one handwritten digit, have been brought into the same format of 784-dimensional, normalized vectors. Directly computing the General Discrimination Value (GDV) of the MNIST data, based on the known labels, has indeed revealed a small amount of 'natural clustering', even in this raw data distribution. This quantitative result was qualitatively confirmed by a two-dimensional visualization using multi-dimensional scaling (MDS); however, the cluster structure would hardly be visible without the class-specific coloring (left upper scatter plots in Fig.\ref{figure_7}(a,b)). By contrast, no natural clustering was found for the raw sleep EEG data when the 7680 values in each epoch were simply down-sampled in the time-domain to 784 values (data not shown). This presumably fails because the relevant class-specific signatures appear randomly at different temporal positions within each epoch, and so the Euclidean distance between two data vectors is not a good measure of their dissimilarity. However, when we instead used as data vectors the magnitudes of the 784 Fourier amplitudes with the lowest frequencies, a weak natural clustering was also found in the sleep data (left upper scatter plots in Fig.\ref{figure_7}(c,d)). We have furthermore demonstrated that the degree of clustering (for both data sets) increases systematically in the higher layers of a perceptron that has been trained to discriminate the classes in a supervised manner (Fig.\ref{figure_7}, right column). Finally, we have used a multi-layer autoencoder to produce embeddings of the data distributions with reduced dimensionality in an unsupervised setting. It turned out that the degree of clustering (with respect to the known data classes) tends to increase systematically with the degree of dimensional compression (Fig.\ref{figure_7}, left column). This interesting finding, previously reported in Schilling et al. \cite{schilling2021quantifying}, suggests that unsupervised dimensionality reduction could be used to automatically detect and enhance natural clustering in unlabeled data. In combination with automatic labeling methods, such as Gaussian Mixture Models, this may provide an objective way to define 'natural kinds' in arbitrary data sets.
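\vspace{0.2cm}\noindent As a purely illustrative sketch of such a 'compress, then label' pipeline (using PCA as a simple stand-in for the autoencoder bottleneck and scikit-learn's small 8x8 digits data set as a stand-in for MNIST; all function choices and parameter values here are assumptions, not the exact setup used in this work):
\begin{verbatim}
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# small MNIST-like data set: 1797 images with 64 features (8x8 pixels)
X, y = load_digits(return_X_y=True)
X = X / 16.0                               # normalize pixel values to [0, 1]

# unsupervised compression (stand-in for the autoencoder bottleneck)
Z = PCA(n_components=16).fit_transform(X)

# automatic labeling of the compressed data by a Gaussian Mixture Model
labels = GaussianMixture(n_components=10, random_state=0).fit_predict(Z)

# agreement between the automatic clusters and the human-defined digit classes
print("adjusted Rand index:", adjusted_rand_score(y, labels))
\end{verbatim}
A high agreement score in such a pipeline would indicate that the human-defined classes indeed correspond to 'natural' clusters in the data.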
\n\n\n\n\\clearpage\n\\section{Additional information}\n\n\\noindent{\\bf Author contributions statement:}\nCM has conceived of the project, implemented the methods, evaluated the data, and wrote the paper, PK co-designed the study, discussed the results and wrote the paper, AS discussed the results, MT provided access to resources and wrote the paper, HS provided access to resources and wrote the paper.\n \\vspace{0.5cm} \n\n\\noindent{\\bf Funding:}\nThis work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): grant SCHU\\,1272\/16-1 (AOBJ 675050) to HS, grant TR\\,1793\/2-1 (AOBJ 675049) to MT, grant SCHI\\,1482\/3-1 (project number 451810794) to AS, and grant KR\\,5148\/2-1 (project number 436456810) to PK. \\vspace{0.5cm}\n\n\\noindent{\\bf Competing interests statement:}\nThe authors declare no competing interests. \\vspace{0.5cm}\n\n\\noindent{\\bf Data availability statement:}\nData and analysis programs will be made available upon reasonable request.\n\\vspace{0.5cm}\n\n\\noindent{\\bf Ethical approval and informed consent:} The study was conducted in the Department of Otorhinolaryngology, Head Neck Surgery, of the Friedrich-Alexander University Erlangen-N\u00fcrnberg (FAU), following approval by the local Ethics Committee (323 \u2013 16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory poly-somnography (PSG).\n\n\\vspace{0.5cm}\n\n\\noindent{\\bf Third party rights:}\nAll material used in the paper are the intellectual property of the authors. \\vspace{0.5cm}\n\n\\clearpage\n\\bibliographystyle{unsrt}\n\n\\section{Introduction}\n\nData classification -- e.g. object recognition -- is a fundamental computing problem in machine learning and artificial intelligence. Large-scale classification competitions such as the annual ImageNet challenge \\cite{krizhevsky2012imagenet, russakovsky2015imagenet}, where a super-human accuracy of 95\\% has been achieved within about 5 years of steady progress, have contributed greatly to the general popularity of machine learning. Understandably, ImageNet is mostly discussed in the context of technical improvements regarding the classification methods which enabled this drastic boost of performance. But it also illustrates some fundamental problems that arise when computers are to create models of human-defined data categories:\nFor example, the fact that classification accuracies are typically leveling off at values below 100\\% does not necessarily reflect a limitation of the algorithms, but instead may reveal the classification limits of the humans who provided the ground truth data. Indeed, in the case of ImageNet, the massive work of annotating millions of images had been crowd-sourced using Amazon Mechanical Turk, and so a large number of individuals were involved in the labeling process, individuals who may place certain ambiguous images into different categories. This problem of ambiguity due to non-rigorously defined object categories is most pronounced in biological and medical data, where sample-to-sample variations are notoriously large. \n\nIn this work, we use artificially generated surrogate data, as well as real-world bio-medical data, to explore the implications of this inevitable data ambiguity. 
We demonstrate that the overlap of data classes leads to a theoretical upper limit of classification accuracy, a limit that can be mathematically computed in low-dimensional examples and which depends in a systematic way on the statistical properties of the data set. We find that sufficiently powerful classifier models of different kinds all perform at this same upper limit of accuracy, even if they are based on completely different operating principles. Interestingly, this accuracy limit is not affected by applying certain non-linear transformations to the data, even if these transformations are non-reversible and drastically reduce the information content (entropy) of the input data. \n\nIn the next step, the same three models that reached the common classification limit for artificial data are applied to human EEG data measured during sleep. In a pre-processing step, two kinds of features are extracted from the raw EEG signals, yielding different marginal distributions and mutual correlations. It turns out that a more complex Bayesian model, based on correlated multi-variate Gaussian likelihoods (CMVG), performs worse than two other models (naive Bayes, perceptron), because the statistical properties of the pre-processed features do not match the assumed likelihoods. In contrast, the perceptron and the naive Bayes model still show very similar classification accuracies, indicating that both reach the theoretical accuracy limit for sleep stage classification.\n\nFinally, we address the question whether typical human-defined object categories can also be considered as 'natural kinds', that is, whether the data vectors in input space have a built-in cluster structure that can be detected by objective machine-learning models even in non-labeled data. For this purpose, we use as real-world examples the MNIST data set \cite{deng2012mnist}, as well as the above EEG sleep data. We find that a simple visualization by multi-dimensional scaling (MDS) \cite{torgerson1952multidimensional, kruskal1964nonmetric,kruskal1978multidimensional,cox2008multidimensional} already reveals an inherent cluster structure of the data in both cases. Interestingly, the degree of clustering, quantified by the general discrimination value (GDV) \cite{krauss2018statistical, schilling2021quantifying}, can be enhanced by a step-wise dimensionality reduction of the data, using an autoencoder that is trained in an unsupervised manner. A perceptron classifier with a layer design comparable to the autoencoder, trained on the same data in a supervised fashion, achieves, as expected, a much stronger cluster separation. However, the enhancement of clustering by unsupervised data compression, combined with automatic labeling methods, could be a promising way to automatically detect 'natural kinds' in non-labeled data. \n\n\clearpage\n\section{Methods}\n\n\subsection*{Part 1: Accuracy limit}\n\n\subsubsection*{Derivation of theoretical accuracy limit}\n\nClassification is the general problem of assigning a discrete class label $i=1\ldots K$ to each given input $\vec{x}$, where the latter is considered as a vector with $N$ real-valued components $x_{n=1\ldots N}$. Such a discrimination is possible when the conditional probability distributions $p_{gen}(\vec{x}\;|\;i)$ of data vectors, here called {\bf 'generation densities'}, are different for each of the possible data classes $i$.
In the simple case of a two- or three-dimensional data space, each data class can be visualized as a 'point cloud' (See Fig.\ref{figure_1}(a,b) for examples), and either the shapes or the center positions of these point clouds must vary sufficiently in order to facilitate a reliable classification. However, since the data generation process typically involves not only the system of interest (which might indeed have $K$ well-distinguished modes of operation), but also some measurement or data transmission equipment (which introduces noise into the data), a certain 'overlap' of the different data classes is usually unavoidable.\n\n\vspace{0.2cm}\noindent A classifier receives the data vectors $\vec{x}$ as input and computes a set of $K$ {\bf 'classification probabilities'} $q_{cla}(j\;|\;\vec{x})$, quantifying the belief that $\vec{x}$ belongs to class $j$. They are normalized to one over all possible classes, so that $\sum_{j=1}^K q_{cla}(j\;|\;\vec{x}) = 1\; \forall\; \vec{x}$.\n\n\vspace{0.2cm}\noindent We can now define a {\bf 'confusion density'} as the product\n\begin{equation}\nC_{ji}(\vec{x}) = q_{cla}(j\;|\;\vec{x})\;p_{gen}(\vec{x}\;|\;i).\n\end{equation}\nIt can be interpreted as the probability density that the generator produces data vector $\vec{x}$ under class $i$, which is then assigned to class $j$ by the classifier. Because there is usually a very small but non-zero probability density that {\em any} vector $\vec{x}$ can occur under {\em any} class $i$, we expect that the non-diagonal elements $C_{j\!\neq\!i}(\vec{x})$ are larger than zero as well. These non-diagonal confusion densities will have their largest values in regions of data space where the classes $i$ and $j$ overlap (See Fig.\ref{figure_1}(e,f) for examples).\n\n\vspace{0.2cm}\noindent By integrating the confusion density over all possible data vectors $\vec{x}$,\n\begin{equation}\nC_{ji} = \int C_{ji}(\vec{x})\; d\vec{x},\n\label{cij}\n\end{equation}\nwe obtain the {\bf 'confusion matrix'} of the classifier, which comes out properly normalized, so that $\sum_{j=1}^K C_{ji} = 1\; \forall\; i$. The matrix element $C_{ji}$ is therefore the probability that a data point originating from class $i$ is assigned to class $j$.\n\n\vspace{0.2cm}\noindent Assuming for simplicity that all data classes appear equally often, we can compute the {\bf accuracy} $A$ of the classifier as the average over all diagonal elements of the confusion matrix:\n\begin{equation}\nA = \frac{1}{K}\sum_{i=1}^K C_{ii}.\n\end{equation}\n\n\vspace{0.2cm}\noindent In the following, we are particularly interested in the {\bf theoretical limit of the classification accuracy}, denoted by $A_{max}$. We therefore consider an ideal classifier that has learned the exact generation densities $p_{gen}(\vec{x}\;|\;i)$. In this case, the {\bf 'ideal classification probability'} corresponds to the Bayesian posterior\n\begin{equation}\nq_{cla}(\;j\;|\;\vec{x}\;) = \frac{p_{gen}(\;\vec{x} \;|\; j\;)}{\sum_k p_{gen}(\;\vec{x} \;|\; k\;)}. \n\end{equation}\n\n\vspace{0.2cm}\noindent In our numerical experiments, we will use classifiers that output a definite class label $j$ for each given data vector $\vec{x}$, corresponding to the most probable class.
To compute the theoretical accuracy maximum for such a model, we replace $q_{cla}$ by the binary {\bf 'class indicator function'}\n\begin{equation}\n\hat{q}_{cla}(\;j\;|\;\vec{x}\;) = \delta_{jk}\;\;\mbox{with}\;\;k = \mbox{argmax}_c\; q_{cla}(\;c\;|\;\vec{x}\;).\n\end{equation}\nIt has the value $1$ for all data points $\vec{x}$ assigned to class $j$, and the value $0$ for all other data points (See Fig.\ref{figure_1}(c,d) for examples). When the ideal accuracy $A$ is evaluated using $\hat{q}_{cla}$ instead of $q_{cla}$, the result can be directly compared with numerical accuracies based on one-hot classifier outputs.\n\n\subsubsection*{Numerical evaluation of $A_{max}$}\n\nIn Fig.\ref{figure_1}, the above quantities have been numerically evaluated for a simple Gaussian test data set. For this purpose, the two-dimensional integral Eq.\ref{cij} has been computed numerically on a regular grid of linear spacing 0.01, ranging from -8 to +8 in each feature dimension. \n\n\n\subsubsection*{Classifiers and input data}\n\n\vspace{0.2cm}\noindent In the following subsections, we provide the implementation details for the different classifier models that are compared in this work. The input data for these models are given as lists of $D$-dimensional feature vectors $\vec{u} = (u_1,u_2,\ldots,u_f,\ldots,u_D)$, each belonging to one of $K$ possible classes $c$. In the case of artificially generated data, these lists contain 10000 feature vectors distributed equally over the data classes. They are split randomly into training (80\%) and test (20\%) data sets.\n\n\subsubsection*{Perceptron model}\n\nThe perceptron model is implemented using Keras/TensorFlow. It has one hidden layer, containing $N_{neu}=100$ neurons with ReLU activation functions. The output layer has $N_{out}$ neurons with softmax activation function, where $N_{out}$ corresponds to the number of data classes $K$. The loss function is categorical cross-entropy. We optimize the perceptron on each training data set using the Adam optimizer over at least 10 epochs with a batch size of 128 and a validation split of 0.2. After training, the accuracy of the perceptron is evaluated with the independent test data set.\n\n\subsubsection*{Naive Bayesian model}\n\nThe naive Bayesian model is implemented using the Python libraries NumPy and SciPy.\n\n\vspace{0.2cm}\noindent In the training phase, the training data set is sorted according to the $K$ class labels $c$. Then an individual Gaussian kernel density estimate (KDE, Scott's bandwidth rule) is computed for each feature $f$ and class label $c$, corresponding to the empirical marginalized probability densities $p_{f,c}(u_f)$.
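\vspace{0.2cm}\noindent As a concrete illustration, the following minimal sketch (using NumPy and SciPy, with illustrative variable names) implements this per-feature, per-class KDE training step, together with the product-of-marginals prediction rule that is detailed in the next paragraph. It is a simplified stand-in, not the exact code used for our experiments:
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def train_naive_bayes(U_train, c_train, n_classes):
    """One Gaussian KDE (Scott's rule, the SciPy default) per feature f and class c."""
    D = U_train.shape[1]
    return {(f, c): gaussian_kde(U_train[c_train == c, f])
            for c in range(n_classes) for f in range(D)}

def predict_naive_bayes(kdes, U_test, n_classes):
    """Sum of log marginal densities (equivalent to the product, but numerically
    safer); with a flat prior, the most probable class maximizes the likelihood."""
    N, D = U_test.shape
    logL = np.zeros((N, n_classes))
    for c in range(n_classes):
        for f in range(D):
            logL[:, c] += np.log(kdes[(f, c)](U_test[:, f]) + 1e-300)
    return np.argmax(logL, axis=1)

# toy usage with two overlapping two-dimensional Gaussian classes
rng = np.random.default_rng(0)
U = np.vstack([rng.normal(0.0, 1.0, (500, 2)), rng.normal(1.0, 1.0, (500, 2))])
c = np.repeat([0, 1], 500)
kdes = train_naive_bayes(U, c, n_classes=2)
print("training accuracy:", np.mean(predict_naive_bayes(kdes, U, 2) == c))
\end{verbatim}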
\n\n\\vspace{0.2cm}\\noindent In the testing phase, the accuracy of the model is evaluated with the independent test data set as follows: According to the naive Bayes approach, the global likelihood $L(\\vec{u}\\;|\\;c)$ of a data vector $\\vec{u}=(u_1,u_2,\\ldots,u_D)$ under class $c$ is approximated by a product of the marginalized probabilities, so that\n\\begin{equation}\nL(\\vec{u}\\;|\\;c) = \\prod_{f=1\\ldots D} p_{f,c}(u_f).\n\\end{equation}\nSince we assume a flat prior probability ($P_{prior}(c)=1\/K$) over the data classes, the posterior probability of data class $c$, given the input data vector $\\vec{u}$, is given by\n\\begin{equation}\nP_{post}(c\\;|\\;\\vec{u}) = \\frac{L(\\vec{u}\\;|\\;c)}{\\sum_{i\\!=\\!1}^K\\;L(\\vec{u}\\;|\\;i)}\n\\end{equation}\n\n\\subsubsection*{Naive Bayesian model with Random Dimensionality Expansion (RDE)}\n\nSince the naive Bayesian model takes into account only the marginal feature distributions $p_{f,c}(u_f)$, it cannot distinguish data classes which accidentally have identical $p_{f,c}(u_f)$ distributions, but differ in the correlations between the features. In principle, this problem can be fixed by multiplying the $D$-dimensional input vectors $\\vec{u}$ by a random $D_2 \\times D$ matrix $\\textbf{M}$, for example with normally distributed entries $M_{ij}\\propto N(\\mu=0,\\sigma=1)$, which yields transformed vectors $\\vec{v} = \\textbf{M} \\vec{u}$. Provided that $D_2\\gg D$, at least some of the new feature linear combinations $v_f$ will have marginal distributions that vary between the data classes. \n\n\\subsubsection*{CMVG Bayesian model}\n\nThe Correlated Multi-Variate Gaussian (CMVG) Bayesian model is also implemented using the Python libraries Numpy and Scipy. \n\n\\vspace{0.2cm}\\noindent In the training phase, the training data set is sorted according to the two class labels $c$. Then, for each class label $c$, we compute the mean values $\\mu_f^{(c)}$ of the features $f=1\\ldots D$, as well as the covariances $\\Sigma_{fg}^{(c)}$ between features $f$ and $g$. These quantities are packed as one vector ${\\bf \\mu}^{(c)}$ and one matrix ${\\bf \\Sigma}^{(c)}$ for each class $c$.\n\n\\vspace{0.2cm}\\noindent In the testing phase, the global likelihood $L(\\vec{u}\\;|\\;c)$ of a data vector $\\vec{u}=(u_1,u_2,\\ldots,u_D)$ under class $c$ is computed as the correlated, multi-variate Gaussian probability density\n\n\\begin{equation}\nL(\\vec{u}\\;|\\;c) = p_{cmvg}\\left( \\vec{u} \\;,\\; {\\bf\\mu}\\!=\\!{\\bf\\mu}^{(c)} \\;,\\;\n{\\bf \\Sigma}\\!=\\!{\\bf \\Sigma}^{(c)}\n\\right).\n\\end{equation}\nSince we assume a flat prior probability ($P_{prior}(c)=1\/2$) for the two data classes, the posterior probability of data class $c$, given the input data vector $\\vec{u}$, is given by\n\\begin{equation}\nP_{post}(c\\;|\\;\\vec{u}) = \\frac{L(\\vec{u}\\;|\\;c)}{\\sum_{i\\!=\\!1}^K\\;L(\\vec{u}\\;|\\;i)}\n\\end{equation}\n\n\n\\subsection*{Part 2: The DSC data model}\n\nWe consider an artificial classification problem with two multivariate Gaussian data classes $c\\in\\left\\{0,1\\right\\}$ and with statistical properties that can be tuned by {\\bf three control quantities}: the {\\bf dimensionality} $D$ of the feature space, the {\\bf separation} $S$ between the centers of the point clouds, and the {\\bf correlation} $C$ between features (within the same class), which is associated with the shape of the point cloud. 
The generation of artificial data within this DSC model works as follows:\n\n\vspace{0.2cm}\noindent Starting from a given triple $D,S,C$ of control quantities, we first generate $N_{rep}$ independent {\bf parameter sets} $[\mu_f^{(c)}, \Sigma_{fg}^{(c)}]$ that describe the statistical properties of the two classes $c\in\left\{0,1\right\}$. Here, $\mu_f^{(c)}$ is the mean value of feature $f$ in class $c$, and $\Sigma_{fg}^{(c)}$ is the covariance of features $f$ and $g$ in class $c$.\n\n\vspace{0.2cm}\noindent The mean values $\mu_f^{(c=0)}$ in class $0$ are always set to zero, whereas the mean values $\mu_f^{(c=1)}$ in class $1$ are random numbers, drawn from a uniform distribution with values in the range from 0 to $S$. The separation quantity $S$ is therefore the maximum distance between corresponding feature mean values in each dimension $f$.\n\n\vspace{0.2cm}\noindent The diagonal elements $\Sigma_{ff}^{(c)}$ of the symmetric covariance matrix are set to 1 in both classes. The off-diagonal elements $\Sigma_{f\neq g}$ are assigned independent, continuous random numbers $x$, drawn from a box-shaped probability density distribution $q(x,C)$ that depends on the correlation quantity $C$ as follows:\n\n\begin{eqnarray}\n q(x,C) &=& \mbox{uniform}[0\;,\;C]\;\;\;\;\;\;\mbox{for}\;C\le1 \nonumber\\\n q(x,C) &=& \mbox{uniform}[C\!-\!1\;,\;1]\;\mbox{for}\;C>1\n\end{eqnarray}\n\n\vspace{0.2cm}\noindent For $C=0$, the distribution $q(x,C)$ collapses to $x=0$, so that $\Sigma_{ij}$ becomes a diagonal unit matrix. For $C=1$, the distribution $q(x,C)$ is uniform in the range $[0,1]$, and for $C=2$ it collapses to $x=1$, so that all matrix elements become equal to one. A plot of the distribution is shown in Fig.\ref{figure_2}(b).\n\n\vspace{0.2cm}\noindent According to the parameter set $[\mu_f^{(c)},\Sigma_{fg}^{(c)}]$, we then generate for each of the two classes $c$ a number $N_{vec}/2$ of random, Gaussian data vectors $\vec{u}(t)$, in which the $D$ components (features) are correlated to a degree controlled by quantity $C$. In the limiting case $C=0$, the $D$ time series $u_{f}(t)$ become statistically independent, whereas for $C=2$, the time series become fully correlated and thus identical. The $N_{vec}$ data vectors are then combined into a complete data set, in which vectors from the two classes (with corresponding labels $c$) appear in random order. In this way, we obtain for each triple $D,S,C$ of control quantities a total number of $N_{rep}$ independent data sets, each consisting of $N_{vec}$ data vectors. Since each data set is governed by its own random parameters $[\mu_f^{(c)},\Sigma_{fg}^{(c)}]$, the DSC model reflects some of the heterogeneity of typical real world data. Finally, we split each data set into a training set (80\%) and a test set (20\%).\n\n\vspace{0.2cm}\noindent Before applying different types of classifiers to the DSC data sets, we verify that the feature correlations and the class separation can be controlled reliably and over a sufficiently large range, using the quantities $C$ and $S$ (Fig.\ref{figure_2}(d)).\n\n\subsubsection*{Control of feature correlations by quantity $C$}\n\nTo evaluate correlation control, we fix the quantities $D=10$ and $S=1.0$ (Note that the separation has no effect on the correlations) and vary $C$ over the full supported range from 0 to 2. For each $C$, we generate $N_{rep}=100$ independent data sets, each consisting of $N_{vec}=10000$ data vectors.
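\vspace{0.2cm}\noindent A minimal sketch of how a single data set could be drawn from this DSC model is given below (following the recipe above; function and variable names are illustrative, and the eigenvalue clipping that keeps the covariance matrix positive semi-definite is an added safeguard not specified in the text):
\begin{verbatim}
import numpy as np

def make_dsc_dataset(D, S, C, n_vec=10000, seed=None):
    """Draw one DSC data set: random low-level parameters (mu, Sigma) per class,
    then n_vec Gaussian feature vectors, half per class, in random order."""
    rng = np.random.default_rng(seed)

    def random_cov():
        # off-diagonals from uniform[0,C] (C<=1) or uniform[C-1,1] (C>1)
        lo, hi = (0.0, C) if C <= 1 else (C - 1.0, 1.0)
        off = np.triu(rng.uniform(lo, hi, size=(D, D)), k=1)
        Sigma = np.eye(D) + off + off.T
        # safeguard (assumption): clip negative eigenvalues to keep Sigma valid
        w, V = np.linalg.eigh(Sigma)
        return (V * np.clip(w, 1e-9, None)) @ V.T

    mu = {0: np.zeros(D), 1: rng.uniform(0.0, S, size=D)}
    Sigma = {0: random_cov(), 1: random_cov()}

    U = np.vstack([rng.multivariate_normal(mu[c], Sigma[c], n_vec // 2)
                   for c in (0, 1)])
    labels = np.repeat([0, 1], n_vec // 2)
    perm = rng.permutation(len(U))
    return U[perm], labels[perm]

U, labels = make_dsc_dataset(D=10, S=1.0, C=0.5, seed=0)
U_train, c_train, U_test, c_test = U[:8000], labels[:8000], U[8000:], labels[8000:]
\end{verbatim}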
For each data set, we estimate the empirical covariance matrix $\Sigma_{ij}^{(0)}$ of class 0. Because the matrix is symmetric, we compute the root-mean-square (RMS) average of all matrix elements above the diagonal. The blue line in Fig.\ref{figure_2}(d) shows for each $C$ the mean RMS, averaged over the $N_{rep}=100$ repetitions (The latter are shown as gray dots). We find an almost linear relation between $C$ and the mean RMS. In particular, we can realize the full range of correlations, including the limiting cases of independently fluctuating features (for $C=0$) and identically fluctuating features (for $C=2$).\n\n\subsubsection*{Control of class separation by quantity $S$}\n\nTo evaluate separation control, we fix the quantities $D=10$ and $C=0.5$ and vary $S$ between 0 and 10. For each $S$, we generate $N_{rep}=100$ independent data sets, each consisting of $N_{vec}=10000$ data vectors. For each (labeled) data set, we compute the general discrimination value (GDV), a quantity that has been specifically designed to quantify the separation between classes in high-dimensional data sets \cite{krauss2018statistical, schilling2021quantifying}. The orange line in Fig.\ref{figure_2}(d) is the mean negative GDV, averaged over the $N_{rep}=100$ repetitions (The latter are shown as gray dots). \n\n\vspace{0.2cm}\noindent The GDV is computed as follows: We consider $N$ points $\mathbf{x_{n=1..N}}=(x_{n,1},\cdots,x_{n,D})$, distributed within $D$-dimensional space. A label $l_n$ assigns each point to one of $L$ distinct classes $C_{l=1..L}$. In order to become invariant under scaling and translation, each dimension is separately z-scored and, for later convenience, multiplied by $\frac{1}{2}$:\n\begin{align}\ns_{n,d}=\frac{1}{2}\cdot\frac{x_{n,d}-\mu_d}{\sigma_d}.\n\end{align}\nHere, $\mu_d=\frac{1}{N}\sum_{n=1}^{N}x_{n,d}\;$ denotes the mean, and $\sigma_d=\sqrt{\frac{1}{N}\sum_{n=1}^{N}(x_{n,d}-\mu_d)^2}$ the standard deviation of dimension $d$.\nBased on the re-scaled data points $\mathbf{s_n}=(s_{n,1},\cdots,s_{n,D})$, we calculate the {\em mean intra-class distances} for each class $C_l$ \n\begin{align}\n\bar{d}(C_l)=\frac{2}{N_l (N_l\!-\!1)}\sum_{i=1}^{N_l-1}\sum_{j=i+1}^{N_l}{d(\textbf{s}_{i}^{(l)},\textbf{s}_{j}^{(l)})},\n\end{align}\nand the {\em mean inter-class distances} for each pair of classes $C_l$ and $C_m$\n\begin{align}\n\bar{d}(C_l,C_m)=\frac{1}{N_l N_m}\sum_{i=1}^{N_l}\sum_{j=1}^{N_m}{d(\textbf{s}_{i}^{(l)},\textbf{s}_{j}^{(m)})}.\n\end{align}\nHere, $N_k$ is the number of points in class $k$, and $\textbf{s}_{i}^{(k)}$ is the $i^{th}$ point of class $k$.\nThe quantity $d(\textbf{a},\textbf{b})$ is the Euclidean distance between $\textbf{a}$ and $\textbf{b}$.
Finally, the Generalized Discrimination Value (GDV) is calculated from the mean intra-class and inter-class distances as follows:\n\begin{align}\n\mbox{GDV}=\frac{1}{\sqrt{D}}\left[\frac{1}{L}\sum_{l=1}^L{\bar{d}(C_l)}\;-\;\frac{2}{L(L\!-\!1)}\sum_{l=1}^{L-1}\sum_{m=l+1}^{L}\bar{d}(C_l,C_m)\right]\n \label{GDVEq}\n\end{align}\n\n\noindent where the factor $\frac{1}{\sqrt{D}}$ is introduced for dimensionality invariance of the GDV, with $D$ as the number of dimensions.\nIn the case of two Gaussian distributed point clusters, the resulting discrimination value becomes $-1.0$ if the clusters are located such that the mean inter-cluster distance is two times the standard deviation of the clusters.\n\n\subsection*{Part 3: Comparing classifiers}\n\nIn Fig.\ref{figure_3}, we determine the average accuracy of the three classifier types (See part 1 of the Methods section) for different combinations of the DSC control parameters. For each parameter combination, 100 data sets are sampled from the superstatistical distribution. For every data set, consisting of 8000 training vectors and 2000 test vectors, the three classifiers are trained from scratch and then evaluated. This results in 100 accuracies for each classifier and each parameter combination. We then compute the mean value of these 100 accuracies, and this is the average accuracy plotted as colored lines in Fig.\ref{figure_3}(c-f). The individual, non-averaged accuracies are plotted as gray points.\n\n\subsection*{Part 4: Feature transformations}\n\nIn Fig.\ref{figure_4}, we return to a much simpler test data set, consisting of two 'spherical' Gaussian data clusters in a two-dimensional feature space, which are centered at $\vec{x}=(-\frac{1}{2},0)$ and $\vec{x}=(+\frac{1}{2},0)$, respectively. All three classifier types reach the theoretical accuracy limit of about 0.69 in this case.\n\n\vspace{0.2cm}\noindent In this part we explore how certain non-linear transformations of the original features (that is, $(x_1,x_2) \longrightarrow (f(x_1),f(x_2))$) affect the classification accuracy. In particular, we investigate the cases $f(x)=\sin(x)$, $f(x)=\cos(x)$ and $f(x)=\mbox{sgn}(x)$. The signum function yields -1 for negative arguments and +1 for positive arguments. For the special case $x=0$ it would return zero, but in practice this does not happen, as the features $x$ are continuously distributed random variables. \n\n\subsection*{Part 5: Sleep EEG data}\n\nFor a real-world evaluation of classifier performance, we use 68 multi-channel EEG data sets from our sleep laboratory, each corresponding to a full-night recording of brain signals from a different human subject. The data were recorded with a sampling rate of 256 Hz, using three separate channels F4-M1, C4-M1, O2-M1. In this work, however, the signals from these channels are pooled, effectively treating them as data sets of their own.\n\n\vspace{0.2cm}\noindent The participants of the study included 46 males and 22 females, with an age range between 21 and 80 years. Exclusion criteria were a positive history of misuse of sedatives, alcohol or addictive drugs, as well as untreated sleep disorders. The study was conducted in the Department of Otorhinolaryngology, Head Neck Surgery, of the Friedrich-Alexander University Erlangen-Nürnberg (FAU), following approval by the local Ethics Committee (323--16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory polysomnography (PSG).
\n\n\\vspace{0.2cm}\\noindent After recording, the raw EEG data were analyzed by a sleep specialist accredited by the German Sleep Society (DGSM), who removed typical artifacts\n\\cite{tatum2011artifact} from the data and visually identified the sleep stages in subsequent 30-second epochs, according to the AASM criteria (Version 2.1, 2014) \\cite{iber2007aasm,american2012aasm}. The resulting, labeled raw data were then used as a ground truth for testing the accuracy of the different classifier types.\n\n\\vspace{0.2cm}\\noindent In this work, we are primarily testing the ability of the classifiers to assign the correct sleep label $s$\n(Wake, REM, N1, N2, N3) independently to each epoch, without providing further context information. Such a single-channel epoch consists of $30\\times256=7680$ subsequent raw EEG amplitudes $x_{d,e}(t_n)$, where $d$ is the data set, $e$ the number of the epoch within the data set, and $t_n$ the $n$th recording time within the epoch. \n\n\\vspace{0.2cm}\\noindent In order to facilitate classification of these 7680-dimensional input vectors $\\vec{x}_{d,e}$ by a simple Bayesian model, or by a flat two-layer perceptron with relatively few neurons, the vectors have to be suitably pre-processed and compressed down to feature vectors $\\vec{u}_{d,e}$ of much smaller dimensionality $D\\ll 7680$. \n\\vspace{0.2cm}\\noindent Instead of relying on self-organized (and thus 'black-box') features, we are using mathematically well-defined features with a simple interpretation. In particular, we are interested in the case where all $D$ components $u_f$ of a feature vector $\\vec{u}$ are fundamentally of the same kind and only differ by some tunable parameter. \n\n\\subsubsection*{Fourier features}\n\nOur first type of feature estimates the momentary Fourier component of the raw EEG signal $x_{d,e}(t_n)$ at a certain, tunable frequency $\\nu_f$: \n\n\\begin{equation}\nu_f = \n\\sqrt{\\left(\\;\n\\sum_{n=1}^{7680} x_{d,e}(t_n)\\cdot \\cos(2\\pi \\nu_f t_n)\n\\right)^2 + \n\\left(\n\\sum_{n=1}^{7680} x_{d,e}(t_n)\\cdot \\sin(2\\pi \\nu_f t_n)\n\\right)^2}. \n\\end{equation}\n\n\\vspace{0.2cm}\\noindent The set of frequencies $\\nu_{f=1}\\ldots\\nu_{f=D}$ is in our case chosen as an equidistant grid between 0 Hz and 30 Hz, because our EEG system is filtering out the higher-frequency components of the raw signals above about 30 Hz. \n\n\\subsubsection*{Correlation features}\n\nOur second type of feature is the normalized auto-correlation coefficient of the raw EEG signal $x_{d,e}(t_n)$ at a certain, tunable lag-time $\\Delta t_f$: \n\n\\begin{equation}\nu_f = \\frac{\\left\\langle \\left( x_{d,e}(t_n) - \\overline{x}_{d,e} \\right)\\cdot\\left( x_{d,e}(t_n\\!+\\!\\Delta t_f) - \\overline{x}_{d,e} \\right) \\right\\rangle_n}{\\sigma_{d,e}^2}.\n\\end{equation}\n\n\\vspace{0.2cm}\\noindent Here, $ \\overline{x}_{d,e}$ is the mean and $\\sigma_{d,e}$ the standard deviation of the raw EEG signal within the epoch. The symbol $\\left\\langle \\right\\rangle_n$ stands for averaging over all time steps within the epoch. The set of lag-times $\\Delta t_{f=1} \\ldots \\Delta t_{f=D}$ must be integer multiples of the recording time interval $\\delta t = 1\/256$ sec.\n\n\\subsection*{Part 6: Sleep stage detection}\n\nIn Fig.\\ref{figure_6}, we investigate the performance of the three classifier types described in part 1 in the real-world scenario of personalized sleep-stage detection. 
For this purpose, the classifiers are trained and tested individually on each of our 68 full-night sleep recordings, using as inputs the same 6-dimensional Fourier- or correlation features as in Fig.\ref{figure_5} (Note that the aggregated distribution functions and covariance matrices in Fig.\ref{figure_5} have been computed by pooling over all data sets and therefore show a much more regular behavior than the individual ones).\n\n\vspace{0.2cm}\noindent As a result, we obtain 68 accuracies for each combination of classifier type (Fig.\ref{figure_6}, rows) and input feature type (Fig.\ref{figure_6}, columns). The distributions of these accuracies are presented as histograms in the figure.\n\n\subsection*{Part 7: Natural data clustering}\n\nIn Fig.\ref{figure_7}, we address the question whether typical real-world data sets have a built-in clustering structure that can be detected (and possibly enhanced) by unsupervised methods of data analysis. For this purpose, we first visualize the clustering structure.\nA frequently used method to generate low-dimensional embeddings of high-dimensional data is t-distributed stochastic neighbor embedding (t-SNE) \cite{van2008visualizing}. However, in t-SNE the resulting low-dimensional projections can be highly dependent on the detailed parameter settings \cite{wattenberg2016use}, sensitive to noise, and may not preserve, but rather scramble, the global structure of the data \cite{vallejos2019exploring, moon2019visualizing}.\nIn contrast, Multi-Dimensional Scaling (MDS) \cite{torgerson1952multidimensional, kruskal1964nonmetric,kruskal1978multidimensional,cox2008multidimensional} is an efficient embedding technique to visualize high-dimensional point clouds by projecting them onto a 2-dimensional plane. Furthermore, MDS has the decisive advantage that it is parameter-free and all mutual distances of the points are preserved, thereby conserving both the global and local structure of the underlying data. \nWhen interpreting patterns as points in high-dimensional space and dissimilarities between patterns as distances between corresponding points, MDS is an elegant method to visualize high-dimensional data. By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora \cite{schilling2021analysis}, hidden layer representations (embeddings) of artificial neural networks \cite{schilling2021quantifying,krauss2021analysis}, structure and dynamics of recurrent neural networks \cite{krauss2019analysis, krauss2019recurrence, krauss2019weight}, or brain activity patterns assessed during e.g. pure tone or speech perception \cite{krauss2018statistical,schilling2021analysis}, or even during sleep \cite{krauss2018analysis,traxdorf2019microstructure}.
\nIn all these cases the apparent compactness and mutual overlap of the point clusters permit a qualitative assessment of how well the different classes separate.\n\nIn addition, we measure the degree of clustering objectively by calculating the general discrimination value (GDV) \cite{krauss2018statistical,schilling2021quantifying}, described in part 2.\n\n\vspace{0.2cm}\noindent For the clustering analysis we analyze two examples of 'natural data': One is the MNIST data set \cite{deng2012mnist} with 10 classes of handwritten digits, in which the input vectors are 784-dimensional (28x28 pixels) and have continuous positive values (between 0 and 1 after normalization). \n\n\vspace{0.2cm}\noindent As the second example we use, again, our full-night EEG recordings with the 5 data classes corresponding to the sleep stages Wake, REM, N1, N2, and N3. In order to reduce setup-differences between measurements, we first perform a z-transform over each individual full-night EEG recording, so that the one-channel EEG signal of each participant now has zero mean and unit variance. Next, in order to make the EEG data more comparable with MNIST, we produce one 784-dimensional input vector from each 30-second epoch of the EEG recordings in the following way: The 7680 subsequent one-channel EEG values of the epoch are first transformed to the frequency domain using the Fast Fourier Transform (FFT), yielding 3840 complex amplitudes. Since the phases of the amplitudes change in a highly irregular way between epochs, we discard this information by computing (the square roots of) the magnitudes of the amplitudes. We keep only the first 784 values of the resulting real-valued frequency spectrum, corresponding to the lowest frequencies. By pooling over all epochs and participants, we obtain a long list of these 784-dimensional input vectors. They are globally normalized, so that the components in the list range between 0 and 1, just as in the MNIST case. Finally, the list is randomly split into train (fraction 0.8) and test (fraction 0.2) data sets. \n\n\vspace{0.2cm}\noindent It is possible to directly compute the MDS projection of the uncompressed 784-dimensional test data vectors into two dimensions, and also to calculate the corresponding GDV value that quantifies the degree of class separation (using the known sleep stage labeling). In Fig.\ref{figure_7}, these uncompressed data distributions are always shown in the left upper scatter plot of each two-by-two block.\n\n\vspace{0.2cm}\noindent In this context, we also test whether step-wise dimensionality reduction in an autoencoder leads to an enhanced clustering. The autoencoder used here has ReLU activation functions and 7 fully connected layers with the following numbers of neurons: 784, 128, 64, 16, 64, 128, 784. The mean squared error between input vectors and reconstructed vectors is minimized using the Adam optimizer. We also compute the MDS projections and GDV values for layers 2, 3 and 4 (the 16-dimensional bottleneck) of the autoencoder. In Fig.\ref{figure_7}, these three compressed data distributions are shown within the two-by-two blocks of scatter plots.\n\n\vspace{0.2cm}\noindent As a reference for the resulting MDS projections and GDV values in the unsupervised autoencoder, we also process the two kinds of natural data with a perceptron that is trained in a supervised manner, so that it separates the known classes as far as possible.
To make the perceptron comparable to the autoencoder, the first 4 layers (from the input to the bottleneck) are identical: fully connected, ReLU activations, and layer sizes 784, 128, 64, 16. However, the decoder part of the autoencoder is replaced in the perceptron by a softmax layer, which has either 10 (MNIST) or 5 (sleep) neurons. The perceptron is trained by back-propagation to minimize the categorical cross-entropy between the true and predicted labels, using the Adam optimizer. Just as in the autoencoder, we compute MDS projections and GDV values for the first 4 perceptron layers. \n\n\n\n\n\clearpage\n\section{Results}\n\n\subsection*{Part 1: Accuracy limit}\n\nIn order to demonstrate the existence of an accuracy limit in classification tasks, we assume that a statistical process generates data vectors $\vec{x}$ which are distributed in the input space (subsequently also called feature space) according to given {\bf generation densities} $p_{gen}(\vec{x}\;|\;i)$ that depend on the class $i$. For reasons of mathematical tractability and visual clarity, we start with a simple problem of two Gaussian data classes in a two-dimensional feature-space. We assume that class $i=0$ is centered at $\vec{x}=(x_1,x_2)=(0,0)$, whereas class $i=1$ is centered a distance $d$ away, at $\vec{x}=(d,0)$. As another discriminating property, the two class-dependent distributions are assumed to have different correlations between the features $x_1$ and $x_2$ (Compare Fig.\ref{figure_1}(a,b)). \n\n\vspace{0.2cm}\noindent As derived in the Methods section, an ideal classifier would divide the feature-space $\left\{\vec{x}\right\}$ among the two classes in a way that is perfectly consistent with the true generation densities $p_{gen}(\vec{x}\;|\;i)$. The resulting ideal assignment of a discrete class $j=0$ or $j=1$ to each data vector $\vec{x}$ can be described by binary {\bf class indicator functions} $\hat{q}_{cla}(\;j\;|\;\vec{x}\;)$ (Compare Fig.\ref{figure_1}(c,d)). \n\n\vspace{0.2cm}\noindent The latter two quantities can be combined into the {\bf confusion densities} $\hat{q}_{cla}(j|\vec{x}) \; p_{gen}(\vec{x}|i)$, which give the probability density that data point $\vec{x}$ is generated in class $i$ but assigned to class $j$ by the ideal classifier (Compare Fig.\ref{figure_1}(e,f)). The parts of feature space where the confusion density is large for $i\neq j$ correspond to the overlap regions of the data classes, and it is this overlap that makes the theoretical limit of the classification accuracy smaller than one.\n\n\vspace{0.2cm}\noindent It is possible to compute the {\bf confusion matrix} of the ideal classifier by integrating the confusion densities over the entire feature space, which is feasible only in very low-dimensional spaces. The confusion matrix, in turn, yields the {\bf theoretical accuracy limit} $A_{max}$ of the ideal classifier.\nIn our simple example, $A_{max}$ is expected to increase with the distance $d$ between the two data classes, as this separation reduces the class overlap.
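\vspace{0.2cm}\noindent To make this integration concrete, the following minimal sketch reproduces the type of grid computation described in the Methods section (spacing 0.01, range -8 to +8) for two correlated two-dimensional Gaussians; the specific correlation values chosen here are illustrative assumptions and need not coincide with those used for Fig.\ref{figure_1}:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def accuracy_limit(d, rho0=0.5, rho1=-0.5, lo=-8.0, hi=8.0, dx=0.01):
    """Grid-based estimate of A_max for two 2D Gaussian classes: class 0 centered
    at (0,0), class 1 at (d,0), with assumed feature correlations rho0 and rho1."""
    x = np.arange(lo, hi, dx)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    grid = np.dstack([X1, X2])

    p0 = multivariate_normal([0.0, 0.0], [[1.0, rho0], [rho0, 1.0]]).pdf(grid)
    p1 = multivariate_normal([d,   0.0], [[1.0, rho1], [rho1, 1.0]]).pdf(grid)

    # ideal classifier: assign each grid point to the class with the larger
    # generation density (binary class indicator function)
    assign0 = p0 >= p1

    # diagonal confusion-matrix elements, approximated by summing the
    # confusion densities over the grid cells
    C00 = np.sum(p0[assign0]) * dx * dx
    C11 = np.sum(p1[~assign0]) * dx * dx
    return 0.5 * (C00 + C11)

for d in (0.0, 1.0, 3.0, 5.0):
    print(d, accuracy_limit(d))
\end{verbatim}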
By numerically computing the integral over the two-dimensional feature space of our Gaussian test example, we indeed find a monotonic increase of $A_{max}=A_{max}(d)$ from about 0.62 at $d=0$ to nearly one at $d=5$ (Compare Fig.\ref{figure_1}(g, black line)).\n\n\vspace{0.2cm}\noindent Our next goal is to apply different types of classifier models to data drawn from the generation densities $p_{gen}(\vec{x}\;|\;i)$ of the Gaussian test example above.\n\n\vspace{0.2cm}\noindent As an example of a 'black box' classifier, we consider a {\bf perceptron} with one hidden layer (See Methods section for details). In the training phase, the connection weights of this neural network are optimized using the back-propagation algorithm. \n\n\vspace{0.2cm}\noindent As an example of a mathematically transparent, but simple classifier type, we consider a {\bf Naive Bayesian} model. Here, correlations between the input features are neglected, and so the global likelihood $L(\vec{u}\;|\;c)$ of a data vector $\vec{u}$, given the data class $c$, is approximated as the product of the marginal likelihood factors for each individual feature $f$ (See Methods section for details). In the 'training phase', the naive Bayesian classifier simply estimates the distribution functions of these marginal likelihood factors, using kernel density estimation (KDE). \n\n\vspace{0.2cm}\noindent Finally, we consider a {\bf Correlated Multi-Variate Gaussian (CMVG) Bayesian} model as an example of a mathematically transparent classifier that can also account for correlations in the data, but which assumes that all features are normally distributed (See Methods section for details). In the training phase, the CMVG Bayesian classifier has to estimate the mean values and covariances of the data vectors.\n\n\vspace{0.2cm}\noindent When applying these three classifiers to the Gaussian test data, we indeed find that {\bf all models reach the same theoretical classification limit, even though their operating principles are very different} (Compare Fig.\ref{figure_1}(g)). The only exception is the Naive Bayes classifier at small class distances $d$ (Compare Fig.\ref{figure_1}(g, orange line)). This model fails because it can only use the marginal feature distributions, which happen to be identical for both classes in the case $d=0$. However, the problem can be easily fixed by multiplying the original two-dimensional feature vectors by a random, non-square matrix (See Methods section for details) and thereby creating many new linear feature combinations, some of which usually have significantly different marginal distributions. Such a {\bf Random Dimensionality Expansion (RDE)}, as proposed in Yang et al. \cite{yang2021neural}, allows even the Naive Bayes model to reach the accuracy limit for strongly overlapping data classes (Compare Fig.\ref{figure_1}(g, olive line)).\n\n\n\subsection*{Part 2: The DSC data model}\n\nIn order to investigate how the performance of different classifiers depends on the statistical properties of the data, we generate large numbers of artificial data sets with two labeled classes $c \in \left\{0,1\right\}$, in which the dimensionality $D$ of the individual data vectors $\vec{u}$, the degree of correlations $C$ between their components $u_{f=1\ldots D}$ (here also called features), and the separation $S$ between the two classes in feature space can be independently adjusted (See Figs.\ref{figure_2}(b,c) for an illustration of $C$ and $S$).
To replicate some of the heterogeneity of real world data, we design our data generator as a two-level superstatistical model \cite{metzner2015superstatistical,mark2018bayesian}: The mean values $\bf \mu^{(c)}$ and covariances $\bf \Sigma^{(c)}$ of the multi-variate probability distributions $p_c(\vec{u})$ in each of the data classes $c$ are themselves random variables. They are drawn from certain meta-distributions, which are in turn controlled by the three quantities $D,S,C$ (See Methods for details, as well as Fig.\ref{figure_2}(a)).\n\n\vspace{0.2cm}\noindent Using the General Discrimination Value (GDV), a measure designed to quantify the separability of labeled point sets (data classes) in high-dimensional spaces \cite{schilling2021quantifying}, we show that the mean separability of data classes in the DSC-model indeed increases monotonically with the control quantity $S$ (Orange line in Fig.\ref{figure_2}(d)), whereas the separability of individual data sets fluctuates heavily around this mean value (Grey dots in Fig.\ref{figure_2}(d)).\n\n\vspace{0.2cm}\noindent Moreover, we quantify the degree of correlation between the $D$ features of the data vectors in each class $c$ by the root-mean-square average of the upper triangular matrix elements in the covariance matrix $\bf \Sigma^{(c)}$. We show that this RMS-average is an almost linear function of $C$ (Blue line in Fig.\ref{figure_2}(d)) and can be varied between zero (Corresponding to independently fluctuating features, or statistical independence) and one (Corresponding to identically fluctuating features, or perfect correlations).\n\n\subsection*{Part 3: Comparing classifiers}\n\nNext, we apply the three classifier types to artificial data, with statistical properties controlled by the quantities $D$, $S$ and $C$. We first investigate the {\bf accuracy of the classifiers as a function of data dimensionality $D$} (Fig.\ref{figure_3}(c,d)), considering correlated data ($C=1.0$). \n\n\vspace{0.2cm}\noindent When the separation of the data classes in feature space is small ($S=0.1$, panel (c)), the classification accuracy for one-dimensional data ($D=1$) is very close to the minimum possible value of 0.5 (corresponding to a purely random assignment of the two class labels) in all three models. As data dimensionality $D$ increases, all three models monotonically increase their average accuracies (colored lines), whereas the accuracies of individual cases show large fluctuations (gray dots). However, the Naive Bayes classifier (orange line) does not perform well even for large data dimensionality, because the point clouds corresponding to the two classes are strongly overlapping in feature space. By contrast, the CMVG Bayes classifier (red line) and the Perceptron (blue line) eventually achieve a very good performance, because they can exploit the correlations in the data. The similarity of the latter two accuracy-versus-$D$ plots is remarkable, considering that these two classifiers work in completely different ways (the Bayesian model performing theory-based mathematical operations with estimated probability distributions, the neural network computing quite arbitrary non-linear transformations of weighted sums). We therefore conclude that the latter two models approach the theoretical optimum of accuracy for each combination of the control quantities $D,S,C$.
\n\n\\vspace{0.2cm}\\noindent As the separation of the data classes in feature space gets larger ($S=1.0$, panel (d)), the accuracy-versus-$D$ plots are qualitatively similar to panel (c), but for one-dimensional data ($D=1$) the common accuracy is now slightly above the random baseline, at 0.6. By comparing panels (c) and (d) we note that Naive Bayes is profiting from the larger class separation, but the other two classifiers reach the theoretical performance maximum even without this extra separation.\n\n\\vspace{0.2cm}\\noindent Next, we investigate the {\\bf accuracy of the classifiers as a function of class separation $S$} (Fig.\\ref{figure_3}(e,f)), considering five-dimensional data ($D=5$). Without correlations ($C=0$, panel (e)), all three models show exactly the same monotonous increase of accuracy with separation $S$, starting at the random baseline of 0.5 and finally approaching perfect accuracy of 1.0. \n\n\\vspace{0.2cm}\\noindent With feature correlations present ($C=1.0$, panel (f)), the Naive Bayes classifier shows the same behavior as in panel (e), whereas the other two correlation-sensitive models now already start with a respectable accuracy of 0.8 at zero class separation.\n\n\\vspace{0.2cm}\\noindent Finally, we investigate the {\\bf accuracy of the classifiers as a function of the feature correlations $C$} (Fig.\\ref{figure_3}(g,h)), considering again five-dimensional data ($D=5$). For strongly overlapping data classes ($S=0.1$, panel (g)), Naive Bayes cannot exceed an accuracy of about 0.55, whereas the two correlation-sensitive models show a super-linear increase of accuracy with increasing feature correlations. However, this decrease is ending rather abruptly at about $C\\approx 0.7$. Above this transition point, both models stay at a plateau accuracy of about 0.8, independent of the correlation quantity. Note that this discontinuity of the slope of the accuracy-versus-$D$ plots is likely not an artifact of the DSC data, since the RMS-average of empirical correlations versus $C$ (Fig.\\ref{figure_3}(d)) did not show such an effect at $C\\approx 0.7$. Moreover, the fact that functionally distinct classifiers such as CMVG Bayes and Perceptron produce an almost identical behaviour here suggests that the accuracy plateau in the strong correlation regime indeed reflects the theoretical performance maximum.\n\n\\vspace{0.2cm}\\noindent As the class separation is increased ($S=1.0$, panel (h)), all three models start at a larger accuracy of about 0.75 in the uncorrelated case. Now the performance of Naive Bayes is even declining with increasing $C$, because this model wrongly assumes uncorrelated data. The other two models show again the super-linear increase up to $C\\approx 0.7$. However, now a further improvement of performance is possible with increasing correlations.\n\n\\subsection*{Part 4: Feature transformations}\n\nThe accuracy limit is determined by the overlap of data classes, that is, by the possibility that different classes $i \\neq j$ produce exactly the same data vector $\\vec{x}^{\\ast}$. Transformations $\\vec{x} \\rightarrow \\vec{f}(\\vec{x})$ of the input features can drastically change the distributions of data points (As an example, compare the rows in Fig.\\ref{figure_4}). However, they cannot be expected to reduce the fundamental amount of class overlap, because transformations are just redirecting the common points $\\vec{x}^{\\ast}$ to new locations in feature space. 
In particular, invertible transformations can be viewed as variable substitutions in the integral Eq.\ref{cij} for the confusion matrix. They do not affect the resulting matrix values and thus leave the accuracy invariant.\n\n\vspace{0.2cm}\noindent In order to test this expectation, we start with two overlapping Gaussian data classes in a two-dimensional feature space (Fig.\ref{figure_4}, top row), resulting in an accuracy limit of $\approx 0.69$. All three classifiers actually reach this limit with the original data as input.\n\n\vspace{0.2cm}\noindent Next we perform simple non-linear transformations on the input data, by replacing each of the two features $x_1$ and $x_2$ with a function of itself (in particular: $\sin$, $\mbox{sgn}$, and $\cos$). We find that the application of the $\sin$-transformation (second row in Fig.\ref{figure_4}) indeed has no effect on the accuracy of the three classifiers, even though the joint (first column) and marginal distributions (second and third column) are now strongly distorted. Even the application of the $\mbox{sgn}$-transformation (third row), which collapses all data onto just 4 possible points in feature space, leaves the accuracies invariant. This works because the two classes in our simple example can be distinguished by the sign of the $x_1$-feature, and both the $\sin$- and the $\mbox{sgn}$-transformation leave this information intact. By contrast, the application of the $\cos$-transformation destroys this crucial information, and consequently all accuracies drop to the random baseline of 0.5.\n\n\vspace{0.2cm}\noindent The above numerical experiments illustrate that transformations of the input data can reduce (by destroying information that is essential for class discrimination), but never increase, the theoretical accuracy limit, which is an inherent property of the data. Of course, the subsequent data transformations which are taking place in the layers of deep neural networks are still useful, because they re-shape data distributions until classes can be linearly separated in the final layer of the network. \n\n\n\n\subsection*{Part 5: Sleep EEG data}\n\nIn our artificial data sets, all features were normally distributed. Moreover, it was possible to introduce extremely strong correlations between these features, which could then be exploited by two of the three classifier models. It is, however, unclear whether the ability of a classifier to detect correlations is always crucial in real-world problems.\n\n\vspace{0.2cm}\noindent We therefore turn in a next step to actually measured EEG data, recorded over-night from 68 different sleeping human subjects. In this case, our final goal is to assign to each 30-second epoch of a raw one-channel EEG signal one of the five sleep stages (Wake, REM, N1, N2, N3). \n\n\vspace{0.2cm}\noindent At our sample rate, a single epoch of EEG data corresponds to 7680 subsequent amplitudes. Such high-dimensional data vectors $\vec{x}$ are, however, not suitable as direct input for a Bayesian classifier or for a flat neural network with only $\approx 100$ neurons. For this reason, we first compress the raw data vectors $\vec{x}=(x_1,\ldots x_{7680})$ into suitable feature vectors $\vec{u}=(u_1,\ldots u_D)$ of strongly reduced dimensionality $D\approx 10$.
Since we aim to develop a fully transparent classifier system, we use mathematically well-defined, human-interpretable features $u_f = G(\\vec{x},\\alpha_f)$, which depend on a freely tunable parameter $\\alpha$. The dimensionality $D$ of the feature space is then determined by how many of these parameters $\\alpha_{f=1\\ldots D}$ are chosen.\n\n\\vspace{0.2cm}\\noindent The huge literature on brain waves suggests that the momentary {\\bf Fourier components} of the EEG signal are suitable features for the classification of sleep stages. The parameter $\\alpha$ is then naturally given by the frequency $\\nu$ of the Fourier component (For details see methods). In a first experiment, we use a set of six equally spaced frequencies ($\\nu_1=$5 Hz, $\\nu_2=$10 Hz,$\\ldots$ $\\nu_6=$30 Hz). Based on training data sets that have been manually labeled by a sleep specialist, we then compute the marginal probability density functions of these Fourier features, as well as their covariance matrices, for each of the 5 sleep stages $s$ (Fig.\\ref{figure_5}, left two columns). We find that within each sleep stage, the Fourier features have unimodal distributions, with peak positions and widths depending quite systematically on the frequency $\\nu$. There are characteristic differences between the sleep stages (in particular the distributions are wider in the wake stage), but they are not very pronounced. In the covariance matrices, we find that the off-diagonal elements are significantly smaller than the diagonal elements (The latter have been set to zero in Fig.\\ref{figure_5} to emphasize the actual inter-feature correlations), with the exception of the wake state. Also the N1 state has slightly larger inter-feature correlations compared to the REM, N2 and N3 states. \n\n\\vspace{0.2cm}\\noindent As an alternative or complement to the Fourier features, we also consider the normalized (Pearson) {\\bf auto-correlation coefficients} of the raw EEG signal (Fig.\\ref{figure_5}, right two columns. For details see methods). The feature parameter $\\alpha$ is in this case given by the lag-time $\\Delta t$, for which we choose six equally spaced values ($1,3,\\ldots,11$ in units of the EEG sampling period). Since these correlation features cannot exceed the value of one by definition, the marginal distributions are highly non-Gaussian with pronounced tails towards small values. These tails show relatively strong differences between some of the sleep stages, but also surprising similarities, in particular for REM and N2. In the covariance matrices, we find the strongest inter-feature correlations in the wake and N1 stages. Again, the covariance matrices are very similar in REM and N2.\n\n\\subsection*{Part 6: Sleep stage detection}\n\nNext, we apply our three classifier models to the above sleep EEG data. However, while the feature distributions and correlations in Fig.\\ref{figure_5} were based on the global data, pooled over all 68 full-night EEG recordings, we are considering here the task of personalized sleep-stage detection. That is, the classifiers are trained and evaluated individually on each of the 68 data sets. Because the amount of training data is severely limited in this task, classification accuracies are expected to be rather low and strongly dependent on the participant. We therefore compute the distributions of accuracies over the 68 personalized data sets (histograms in Fig.\\ref{figure_6}) for all three classifiers and for the two types of pre-processed features. 
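\n\n\\vspace{0.2cm}\\noindent Before turning to the results, the two feature maps of Part 5 can be summarized in a brief, self-contained sketch. It is illustrative only: the exact definitions and normalizations are given in the methods section, and the conventions chosen below are merely assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nFS = 256   # sampling rate in Hz; a 30 s epoch has 7680 samples\n\ndef fourier_features(epoch, freqs_hz=(5, 10, 15, 20, 25, 30)):\n    # magnitude of the discrete Fourier component closest to each frequency\n    spectrum = np.abs(np.fft.rfft(epoch))\n    bins = np.fft.rfftfreq(len(epoch), d=1.0 / FS)\n    return np.array([spectrum[np.argmin(np.abs(bins - f))] for f in freqs_hz])\n\ndef autocorr_features(epoch, lags=(1, 3, 5, 7, 9, 11)):\n    # Pearson auto-correlation of the epoch with itself at the given lags\n    x = epoch - epoch.mean()\n    return np.array([np.corrcoef(x[:-k], x[k:])[0, 1] for k in lags])\n\n# usage: u = fourier_features(np.random.randn(7680))\n\\end{verbatim}\nEither function maps a raw epoch onto a six-dimensional feature vector $\\vec{u}$ of the kind used as classifier input below.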
\n\n\\vspace{0.2cm}\\noindent We find that the CMVG Bayes model is performing very poorly in this task, presumably because the feature distributions are non-Gaussian and only weakly correlated except in the wake stage. In particular, for some participants the classification accuracy is less than the random baseline of about 0.2, corresponding to consistent misclassifications. This can happen in Bayesian classifiers when the likelihood distributions learned from the training data set do not match the actual distributions in the test data set. \n\n\\vspace{0.2cm}\\noindent By contrast, the Naive Bayes model can properly represent the non-Gaussian feature distributions by KDE approximations, and it furthermore profits from the lack of correlations. The performance of the Perceptron is comparable to that of the Naive Bayes model. Both for Fourier- and correlation-features, these two models show accuracies well above the baseline, roughly in the range from 0.3 to 0.6.\n\n\\subsection*{Part 7: Natural data clustering}\n\nBoth the ten digits in MNIST and the five sleep stages in overnight EEG recordings are human-defined classes. It is therefore unclear whether these classes can also be considered as 'natural kinds'. \n\n\\vspace{0.2cm}\\noindent After a suitable pre-processing that brings both data sets into the same format of 784-dimensional, normalized feature vectors (for details see Methods sections), we address this question by computing two-dimensional MDS projections, coloring the data points according to the known, human-assigned labels (In Fig.\\ref{figure_7}, see the upper left scatter plot in each 2-by-2 block). Indeed, the projected data distributions show a small degree of clustering, which is also quantitatively confirmed by the corresponding GDV values (-0.061 for MNIST and -0.035 for sleep EEG data). Note that in the sleep data, a large number of extreme outliers are found which might not correspond to any of the standard classes.\n\n\\vspace{0.2cm}\\noindent The purpose of classifiers is to transform and re-shape the data distribution in such a way that the final network layer (often a softmax layer with one neuron for each data class) can separate the classes easily from each other. Although, as we have shown above, these re-shaping transformations cannot reduce the natural overlap of classes (which would push the accuracy beyond the data-inherent limit), they might as a side-effect lead to a larger 'centrality' of the clusters associated with each class. This would show up quantitatively as a decrease of the General Discrimination Value (GDV) in the higher network layers of the classifier, as compared to the original input data. In order to test this hypothesis, we have trained a four-layer perceptron (see Methods section for details) in a supervised manner on both the MNIST and sleep EEG data. In the case of MNIST, we indeed observe a systematic decrease of the GDV in subsequent network layers: GDV(L0)=-0.061, GDV(L1)=-0.174, GDV(L2)=-0.250, and GDV(L3)=-0.300 (See Fig.\\ref{figure_7}(b)). An analogous layer-wise decrease is found for the sleep EEG data: GDV(L0)=-0.035, GDV(L1)=-0.096, GDV(L2)=-0.122, and GDV(L3)=-0.181 (See Fig.\\ref{figure_7}(d)).\n\n\\vspace{0.2cm}\\noindent We finally address the question whether a natural clustering in novel, unlabeled data sets can be automatically detected, and possibly enhanced, in an unsupervised manner. 
For this purpose, we consider an autoencoder that performs a layer-wise dimensionality reduction of the data, and then re-expands these low-dimensional embeddings back to the original number of dimensions. During this process of 'compression' and 're-expansion', fine details of the data have to be discarded, and it appears reasonable that this might go hand in hand with a 'sharpening' of the clusters. Again, in our test case where the labels of the data points are actually known, this enhancement of cluster centrality can be quantitatively measured by the GDV. For comparability, we have used an autoencoder that has the same design as the perceptron for the first four network layers. In the case of MNIST, we indeed find that the unsupervised compression enhances cluster centrality: GDV(L0)=-0.061, GDV(L1)=-0.115, GDV(L2)=-0.122, and GDV(L3)=-0.137 (See Fig.\\ref{figure_7}(a)). The behavior is similar with the sleep EEG data, except for the last layer: GDV(L0)=-0.035, GDV(L1)=-0.037, GDV(L2)=-0.041, and GDV(L3)=-0.036 (See Fig.\\ref{figure_7}(c)).\n\n\n\\vspace{0.2cm}\\noindent \n\n\n\\clearpage\n\\section{Discussion and Outlook}\n\nIn this work, we have addressed various aspects of data ambiguity: the fact that multi-dimensional data spaces usually contain vectors that cannot be unequivocally assigned to any particular class. \nThe probability of encountering such ambiguous vectors is easily underestimated in machine learning, because the data sets used to train classifiers - rather than being sampled randomly from the entire space of possible data - typically represent just a tiny, pre-selected subset of 'reasonable' examples. For instance, the space of monochrome images with full HD resolution and 256 gray values contains $256^{1920 \\times 1080} \\approx 10^{4993726}$ possible vectors. The fraction of these images that resemble any human-recognizable objects is virtually zero, whereas the largest part would be described as noise by human observers. One may argue that these 'structure-less' images should not play any role in real-world applications. However, it is conceivable that sensors in autonomous intelligent systems, such as self-driving cars, can produce untypical data under severe environmental conditions, such as snow storms. How to deal with data ambiguity is therefore a practically relevant problem. Moreover, as we have tried to illustrate in this paper, data ambiguity has interesting consequences from a theoretical point of view.\n\n\\vspace{0.2cm}\\noindent In part one, we have derived the theoretical limit $A_{max}$ of accuracy that can be achieved by a perfect classifier, given a data set with partially overlapping classes. By generating artificial data classes with Gaussian probability distributions in a two-dimensional feature space and with a controllable distance $d$ between the maxima, we verified that different types of classifiers (The CMVG Bayesian model with multi-variate Gaussian likelihoods and a perceptron) exactly follow the predicted accuracy limit $A_{max}(d)$ (Fig.\\ref{figure_1}(g)). The naive Bayesian model, which cannot exploit correlations to distinguish between data classes, originally yields sub-optimal accuracies for small distances $d$, but this problem can be fixed by applying a random dimensionality expansion to the data as a trivial pre-processing step \\cite{yang2021neural}. 
We have restricted ourselves to only two features (dimensions) for this test, because predicting the accuracy limit involves the exact computation of the confusion matrix, which in turn is an integral over the entire data space. Note, however, that for high-dimensional data with known class-dependent generation densities $p_{gen}(\\vec{x}\\;|\\;i)$, the integral could be approximated by Monte Carlo sampling. In this case, the element $C_{ji}$ of the confusion matrix would be computed by drawing random vectors $\\vec{x}$ from class $i$. The class indicator function $\\hat{q}_{cla}(\\;k\\;|\\;\\vec{x}\\;)$ of the perfect classifier, which is fully determined by the generation densities, yields the corresponding predicted classes $k$ for these data vectors. The matrix element $C_{ji}$ is then given by the fraction of cases where $k=j$. \n\n\\vspace{0.2cm}\\noindent In part two, we have constructed a two-level model to generate artificial test data (Fig.\\ref{figure_2}). The model has high-level parameters $D$, $S$ and $C$ which control the number of dimensions (features), the average separation of the two classes in feature space, as well as the average correlation between the features. For each triple of high-level parameters $D,S,C$, a large number of low-level parameters $\\mu, \\Sigma$ are randomly drawn according to specified distributions, which are in turn used to generate the final test data sets. The super-statistical nature of the model allows us to prescribe the essential statistical features of dimensionality, separation and correlation, while at the same time ensuring a large variability of the test data. By using the General Discrimination Value (GDV), a quantitative measure of class separability (centrality), we have confirmed that the high-level parameter $S$ controls the class separability as intended. Moreover, the proper action of parameter $C$ was confirmed by computing the root-mean-square average over the elements of the data's covariance matrix.\n\n\\vspace{0.2cm}\\noindent In part three, we have applied our three types of classifiers to the test data generated with the DSC-model. Without intra-class feature correlations ($C=0$), we find that all three models show with growing separation parameter $S$ exactly the same monotonically increasing average accuracy (Fig.\\ref{figure_3}(e)). Although the exact computation of $A_{max}$ is not possible in this five-dimensional data space, the perfect agreement of the three different classifiers indicates that they all have reached the accuracy limit. When intra-class feature correlations are present ($C\\neq 0$), we find by systematically varying the parameters $D$, $S$ and $C$ that the resulting accuracies of the CMVG-Bayes classifier and of the perceptron are extremely similar in all considered cases, indicating again that they have reached the theoretical accuracy limit. As expected, the naive Bayesian classifier shows sub-optimal accuracies in all cases where feature correlations are required to distinguish between the classes. In general, this analysis shows that the accuracy of classification can be systematically enhanced by providing more features (larger data dimensionality $D$) as input. Extra features that do not provide additional useful information are 'automatically ignored' by the classifiers and never reduce the achievable accuracy. Moreover, accuracy can be enhanced by providing features that are correlated with each other (larger parameter $C$), but differently in each data class. 
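\n\n\\vspace{0.2cm}\\noindent This point, as well as the Monte Carlo estimation of the accuracy limit outlined in the discussion of part one, can be illustrated with a small toy example. It is our own construction with off-the-shelf classifiers, not the DSC model or the classifier implementations used in this work: two zero-mean Gaussian classes that differ only in the sign of their feature correlation.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import multivariate_normal\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\n\nrng = np.random.default_rng(0)\ncov0 = np.array([[1.0, 0.9], [0.9, 1.0]])    # positively correlated\ncov1 = np.array([[1.0, -0.9], [-0.9, 1.0]])  # negatively correlated\nn = 20000\nX = np.vstack([rng.multivariate_normal([0, 0], cov0, n),\n               rng.multivariate_normal([0, 0], cov1, n)])\ny = np.repeat([0, 1], n)\n\n# Monte Carlo estimate of A_max: assign each sample to the larger density\np0 = multivariate_normal([0, 0], cov0).pdf(X)\np1 = multivariate_normal([0, 0], cov1).pdf(X)\nprint('A_max approx', np.mean((p1 > p0) == y))\n\nfor model in (GaussianNB(), QuadraticDiscriminantAnalysis()):\n    print(type(model).__name__, model.fit(X, y).score(X, y))\n\\end{verbatim}\nIn this toy case the naive model stays near the random baseline of 0.5, because the per-feature marginal distributions of the two classes are identical, while the full-covariance model comes close to the estimated limit.\n\n\\vspace{0.2cm}\\noindent 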
Such class-specific feature correlations can be exploited for discrimination by models such as CMVG Bayes and the perceptron, but not by the naive Bayes model. Moreover, we find that the theoretical accuracy maximum as a function of the correlation parameter $C$ shows an interesting abrupt change of slope at around $C\\approx 0.8$ (Fig.\\ref{figure_3}(g,h)). The origin of this effect is at present unclear, but will be explored in follow-up studies.\n\n\\vspace{0.2cm}\\noindent In part four, we have investigated the effect of non-linear feature transformations, applied as a pre-processing step, on classification accuracy (Fig.\\ref{figure_4}). Since the achievable accuracy in a classification task is limited by the degree of overlap between the data classes, feature transformations can certainly reduce the accuracy to below the limit $A_{max}$ (when they destroy information that is essential for discrimination), but they can never push the accuracy to above $A_{max}$. This is indeed confirmed in a simple test case where all three classifier types perform at the accuracy maximum with the non-transformed data: Applying a feature-wise sine-transformation drastically changes the data distributions $p_{gen}(\\vec{x}\\;|\\;i)$, but leaves the accuracies unchanged at $A_{max}$. The accuracy remains invariant even under a signum-transformation, although this non-invertible operation reduces the data distributions to only four possible points in feature space. In this extreme case, most of the detailed information about the input data vectors is lost, but the part that is essential for class discrimination, namely the sign of the feature $x_1$, is retained. This example demonstrates that classification is a type of lossy data processing where irrelevant information can be safely discarded. For this reason, neural-network based classifiers usually project the input data vectors into spaces of ever smaller dimensions, up to the final discrimination layer which needs only as many neural units as there are data classes. In this context, it is interesting that biological organisms with nervous systems, relying on an efficient classification of objects in their environment for survival, have probably evolved sensory organs and filters that only transmit the small class-discriminating part of the available information to the higher stages of the neural processing chain. As a consequence, our human perception is almost certainly not a veridical representation of the world \\cite{mark2010natural,hoffman2014objects, hoffman2018interface}.\n\n\\vspace{0.2cm}\\noindent In part five, we have analyzed full-night EEG recordings of sleeping humans, divided into epochs of 30 seconds that have been labeled by a specialist according to the five sleep stages. Such recordings can be used as training data for automatic sleep stage classifiers - an application of machine learning that could in the future remove a large workload from clinical sleep laboratories. In our context of data ambiguity, sleep EEG is an interesting case because different human specialists agree about individual sleep-label assignments only in 70\\%--80\\% of the cases, even if multiple EEG channels and other bio-signals (such as electro-oculograms or electro-myograms) are provided \\cite{fiorillo2019automated}. 
This low inter-rater reliability suggests that a considerable fraction of the 30-second epochs is actually ambiguous with respect to sleep stage classification, in particular when only the time-dependent signal of a single EEG channel is available as input-data. Our first goal is a suitable dimensionality reduction of the raw data, which (at a sample rate of 256 Hz) consist of 7680 subsequent EEG values in each epoch. As a pre-processing step, we map each 7680-dimensional raw data vector onto an only 6-dimensional feature vector, so that our Bayesian classifiers (Naive and CMVG) can be efficiently used. We consider as features the real-valued Fourier amplitudes at different frequencies, as well as the auto-correlation coefficients at different lag-times (Fig.\\ref{figure_5}). The Fourier features are expected to be particularly useful, as it is well-known that the activity in different EEG frequency bands varies in characteristic ways over the five sleep stages. The correlation features have been successfully applied for Bayesian classification in a former study \\cite{metzner2021sleep}. In our present study, we are using either Fourier or correlation features, but no combinations of those. By performing a statistical analysis of the features, we find that within the same sleep stage, the six features have significantly different marginal probability distributions. However, these distributions are quite similar in all sleep stages, so that their value for the classification task is limited. Moreover, the correlations between features, which could be exploited by the CMVG Bayes classifier and by the perceptron, turn out to be very weak, except for the Fourier features in the wake stage. Another problem is the strongly non-Gaussian shape of the marginal probability distributions in the case of the correlation features, which cannot be properly represented by the CMVG Bayes model. \n\n\\vspace{0.2cm}\\noindent In part six, we have used our three classifier models, based on the above Fourier- and correlation features, for personalized sleep stage detection. In this very hard task, the classifiers are trained and tested, independently, on the full-night EEG data set of a single individual only. Since an individual data set contains typically less than 1000 epochs (each corresponding to one feature vector), random deviations from the 'typical' sleeping patterns are likely to be picked up during the training phase. We consequently find that the accuracies vary widely between the individual data sets. As expected, the CMVG Bayes model performs badly in this task, because there are almost no inter-feature correlations present that could be exploited for sleep stage discrimination, and because feature distributions are non-Gaussian. Interestingly, both the Naive Bayesian classifier and the perceptron achieve relatively good accuracies, mainly in the range from 0.3 to 0.6. However, these accuracies may be further increased by using more sophisticated neural network architectures \\cite{stephansen2018neural, krauss2021analysis}, and hence do not represent the accuracy limit.\n\n\\vspace{0.2cm}\\noindent In the final part seven, we have started to explore whether the distinct classes in typical real-world data sets are defined arbitrarily (and therefore can only be detected after supervised learning), or if the differences between these classes are so prominent that even unsupervised machine learning methods can recognize them as distinct clusters in feature space. 
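\n\n\\vspace{0.2cm}\\noindent The question can already be explored with a few lines of standard tooling. The following sketch is purely illustrative: it uses the small built-in 8x8 digits of scikit-learn as a stand-in for MNIST, and the silhouette score as a generic cluster-separation measure rather than the GDV employed in this work.\n\\begin{verbatim}\nfrom sklearn.datasets import load_digits\nfrom sklearn.manifold import MDS\nfrom sklearn.metrics import silhouette_score\nfrom sklearn.preprocessing import scale\n\nX, y = load_digits(return_X_y=True)\nX = scale(X.astype(float))                 # z-scored feature vectors\nprint('silhouette in raw feature space:', silhouette_score(X, y))\n\n# two-dimensional MDS embedding of a subset, to be scatter-plotted\n# and colored by the known labels y[:500]\nemb = MDS(n_components=2, random_state=0).fit_transform(X[:500])\n\\end{verbatim}\n\n\\vspace{0.2cm}\\noindent 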
Besides the (pooled) sleep EEG data, we have used the MNIST data set to test for any inherent clustering structure. For this investigation, the individual data points, corresponding to respectively one epoch of EEG signal or one handwritten digit, have been brought into the same format of 784-dimensional, normalized vectors. Computing directly the General Discrimination Value (GDV) of the MNIST data, based on the known labels, has indeed revealed a small amount of 'natural clustering', even in this raw data distribution. This quantitative result was qualitatively confirmed by a two-dimensional visualization using multi-dimensional scaling (MDS), however the cluster structure would hardly be visible without the class-specific coloring (left upper scatter plots in Fig.\\ref{figure_7}(a,b)). By contrast, no natural clustering was found for the raw sleep EEG data when the 7680 values in each epoch were simply down-sampled in the time-domain to 784 values (data not shown). This presumably fails because the relevant class-specific signatures appear randomly at different temporal positions within each epoch, and so the Euclidean distance between two data vectors is not a good measure of their dissimilarity. However, when we instead used as data vectors the magnitudes of the 784 Fourier amplitudes with lowest frequencies, a weak natural clustering was found also in the sleep data (left upper scatter plots in Fig.\\ref{figure_7}(c,d)). We have furthermore demonstrated that the degree of clustering (for both data sets) is systematically increasing in the higher layers of a perceptron that has been trained to discriminate the classes in a supervised manner (Fig.\\ref{figure_7}, right column). Finally, we have used a multi-layer autoencoder to produce embeddings of the data distributions with reduced dimensionality in an unsupervised setting. It has turned out that the degree of clustering (with respect to the known data classes) tends to increase systematically with the degree of dimensional compression (Fig.\\ref{figure_7}, left column). This interesting finding, previously reported in Schilling et al. \\cite{schilling2021quantifying}, suggests that unsupervised dimensionality reduction could be used to automatically detect and enhance natural clustering in unlabeled data. In combination with automatic labeling methods, such as Gaussian Mixture Models, this may provide an objective way to define 'natural kinds' in arbitrary data sets. \n\n\n\n\\clearpage\n\\section{Additional information}\n\n\\noindent{\\bf Author contributions statement:}\nCM has conceived of the project, implemented the methods, evaluated the data, and wrote the paper, PK co-designed the study, discussed the results and wrote the paper, AS discussed the results, MT provided access to resources and wrote the paper, HS provided access to resources and wrote the paper.\n \\vspace{0.5cm} \n\n\\noindent{\\bf Funding:}\nThis work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): grant SCHU\\,1272\/16-1 (AOBJ 675050) to HS, grant TR\\,1793\/2-1 (AOBJ 675049) to MT, grant SCHI\\,1482\/3-1 (project number 451810794) to AS, and grant KR\\,5148\/2-1 (project number 436456810) to PK. \\vspace{0.5cm}\n\n\\noindent{\\bf Competing interests statement:}\nThe authors declare no competing interests. 
\\vspace{0.5cm}\n\n\\noindent{\\bf Data availability statement:}\nData and analysis programs will be made available upon reasonable request.\n\\vspace{0.5cm}\n\n\\noindent{\\bf Ethical approval and informed consent:} The study was conducted in the Department of Otorhinolaryngology, Head Neck Surgery, of the Friedrich-Alexander University Erlangen-N\u00fcrnberg (FAU), following approval by the local Ethics Committee (323 \u2013 16 Bc). Written informed consent was obtained from the participants before the cardiorespiratory poly-somnography (PSG).\n\n\\vspace{0.5cm}\n\n\\noindent{\\bf Third party rights:}\nAll material used in the paper are the intellectual property of the authors. \\vspace{0.5cm}\n\n\\clearpage\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\@startsection{section}{1}{\\z@{INTRODUCTION}\n\nTwo Galactic X-ray sources known to produce relativistic radio jets\nare GRS 1915+105 \\cite{Mir94} and GRO J1655-40\n\\cite{Tin95,Hje95}. Optical observations of GRO J1655-40 have provided\ndynamical evidence for a 7 $M_{\\odot}$ black hole \\cite{Oro97} in a 2.6 day\nbinary orbit with a $\\sim$F4 IV companion star. GRS1915+105 is\npresumed to be a black hole binary, based on its X-ray high luminosity\nand similarities with GRO J1655-40. However, a direct measurement of\nthe motion of its companion star has been prevented by interstellar\nextinction, which limits optical\/IR studies of GRS1915+105 to\nwavelengths $> 1$ micron \\cite{Mir94}. While each source was active\nat radio frequencies, H I absorption measurements were combined with\nGalactic rotation models to derive distance measurements of 12.5 kpc\nand 3.2 kpc for GRS 1915+105 \\cite{Mir94} and GRO J1655-40\n\\cite{Tin95}, respectively.\n\nGRS 1915+105 is a transient X-ray source, and the BATSE light curve\n(20--100 keV) indicates that bright X-ray emission began during May\n1992 \\cite{Har97}. Before the launch of the $Rossi ~X$-$ray ~Timing\n~Explorer$ ($RXTE$), observations in soft X-rays were sporadic, and\nGRS1915+105 may have persisted as a bright source in soft X-rays since\n1992. When the All Sky Monitor (ASM) on $RXTE$ established regular\ncoverage on 1996 Feb 22, the source was bright and highly variable,\nand it has remained so throughout 1996 and 1997. The ASM light curve,\nwhich is shown in Figure~\\ref{fig:asm19}, illustrates both the extent\nof the intensity variations and also the repetitive character of\nparticular variability patterns. The early ASM light curve was used to\ninitiate $RXTE$ pointed observations (PCA and HEXTE instruments),\nwhich began on 1996 April 6. Since then the source has been observed\nonce or twice per week, and most of the data are available in a public\narchive. At the higher time resolution provided by PCA light curves,\nthere are again dramatic and repetitive patterns of variations\n\\cite{Gre96}. These results are one of the extraordinary chapters in\nthe history of high-energy astronomy.\n\n\\begin{figure*}\n\\centerline{\\psfig{figure=asm1915.ps,width=16cm,height=16cm} }\n\\caption{ASM light curve (2--12 keV) of GRS1915+105 for 1996 and\n1997. The Crab Nebula, for reference, yields 75.5 ASM c\/s. The ASM\nhardness ratio, $HR2$ is defined as the count rate in the 5--12 keV\nband relative to the rate in the 3--5 keV band. 
The time intervals\nthat correspond with our groups of combined X-ray power spectra (see\nTable 1) are shown above the light curve.}\n\\label{fig:asm19}\n\\end{figure*}\n\nFourier analyses of the first 31 PCA observations \\cite{Mor97} of\nGRS1915+105 revealed 3 different types of oscillations: a\nquasi-periodic oscillation (QPO) with a constant frequency of 67 Hz;\ndynamic, low-frequency (0.05 to 10 Hz) QPO with a large variety\nof amplitudes and widths; and complex, high-amplitude dip cycles\n($10^{-3}$ to $10^{-1}$ Hz) that are related to the extreme X-ray\nvariations noted above. The combined characteristics of the power\nspectra, light curves, and energy spectra were interpreted as\nrepresenting four different emission states \\cite{Mor97}, none of\nwhich resemble the canonical states of black hole binaries\n\\cite{Van95}.\n\nThe other microquasar, GRO J1655-40, was first detected with BATSE on\n1994 July 27, and the correlation between hard X-ray activity and the\nejections of relativistic radio jets \\cite{Har95} was an important\nstep in establishing the relationship between accretion changes and\nthe formation of jets. During late 1995 and early 1996, GRO J1655-40\nentered a quiescent accretion state, permitting optical spectroscopy\nof the companion star, which led to our knowledge of the binary\nconstituents and mass of the black hole~\\cite{Oro97}, as noted above.\n\nThe ASM recorded a renewed outburst from GRO J1655-40 \\cite{Lev96}\nthat began on 1996 April 25. The ASM light curve is shown in Figure\n~\\ref{fig:asm16}. With great fortune a concurrent optical campaign\nwas in progress, and it was determined that optical brightening\npreceded the X-ray turn-on by 6 days, beginning first in the I band\nand then accelerating rapidly in the B and V bands. These results\nprovide concrete evidence favoring the accretion disk instability as\nthe cause of the X-ray nova episode.\n\n\\begin{figure*}\n\\centerline{\\psfig{figure=asm1655.ps,width=16cm,height=16cm} }\n\\caption{(top) ASM light curve (1.5--12 keV) of GRO J1655-40 for 1996\nand 1997. The tick marks above the light curve show the times of RXTE\npointed observations, either from the public archive (1997) or our\nguest observer program (1996). (bottom) The ASM hardness ratio, $HR2$\nas defined previously.}\n\\label{fig:asm16}\n\\end{figure*}\n\nThe $RXTE$ observations of GRO J1655-40 indicate a more stable form of\naccretion. X-ray spectral variations (see Fig.~\\ref{fig:asm16})\nresemble the canonical ``soft\/high'' and ``very high'' states in black\nhole binaries \\cite{Rem98,Van95}. There are X-ray QPOs in the range of\n8--22 Hz, and there is also a transient, high-frequency QPO at 300\nHz~\\cite{Rem98}. This QPO is detected only when the X-ray power-law\ncomponent reaches its maximum strength.\n\nThe efforts to explain the 67 Hz QPO in GRS1915+105 and the 300 Hz QPO\nin GRO J1655-40 commonly invoke effects rooted in General Relativity\n(GR). There are at least 4 proposed mechanisms that relate the QPO\nfrequency to a natural time scale of the inner accretion disk in a\nblack hole binary. These are: the last stable orbit\n\\cite{Sha83,Mor97}, diskoseismic oscillations \\cite{Per97,Now97},\nframe dragging \\cite{Cui98}, and an oscillation in the centrifugal\nbarrier \\cite{Tit98}. The physics of all of these phenomena invokes GR\neffects in the inner accretion disk. 
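\nAs a simple point of reference (this estimate neglects black hole spin, which can raise the frequency considerably, and is included here only as an illustrative scaling), the orbital frequency at the last stable orbit of a Schwarzschild black hole of mass $M$ is\n\\[\nf_{\\rm ISCO} = \\frac{1}{2\\pi}\\left(\\frac{G M}{r_{\\rm ISCO}^{3}}\\right)^{1\/2}\n= \\frac{c^{3}}{2\\pi\\, 6^{3\/2} G M}\n\\simeq 2.2 \\left(\\frac{M}{M_{\\odot}}\\right)^{-1}~{\\rm kHz},\n\\]\nso that 67 Hz and 300 Hz correspond to roughly 33 and 7 $M_{\\odot}$, respectively.\n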
It has also been proposed that\nthe high frequency QPOs may be caused by an inertial-acoustic\ninstability in the disk \\cite{Che95} (with non-GR origin), although\nthe oscillation in GRO J1655-40 would extend this application to\nhigher frequencies than had been argued previously.\n\nIn this paper we advertise some recent work that associates jet\nformation in GRS1915+105 with features in the X-ray light curve. We\nthen turn to the topic of X-ray QPOs. New results are presented on the\nreappearance of 67 Hz oscillations in GRS1915+105. Finally we\ndescribe the various QPO tracks that appear in GRO J1655-40, and we\nexplain how they behave in response to the strength of the power-law\ncomponent in the X-ray spectrum.\n\n\\@startsection{section}{1}{\\z@{CLUES FOR THE ORIGIN OF JETS IN GRS1915+105}\n\nSeveral groups have combined X-ray, radio, and\/or infrared\nobservations of GRS 1915+105 to probe the properties of jet formation\nand relate the ejection events to features in the X-ray light curves.\nInfrared jets were discovered \\cite{Sam96}, and infrared flares were\nseen to occur after radio flares\\cite{Fen97,Mir97}. These\ninvestigations provide solid evidence that the infrared flares\nrepresent synchrotron emission from rapidly evolving jets.\n\nIt has been further demonstrated that the radio, infrared, and X-ray\nbands occasionally show strong oscillations with a quasiperiodic time\nscale of 20--40 min \\cite{Rod97,Fen97,Eik98,Poo98}. In perhaps the\nmost impressive of these studies to date, there were a series of\ninfrared flares (with 20 min recurrence time), and in six of six\npossible cases the flares were seen to follow dramatic dipping cycles\nin the X-ray light curve. Since these dips have been analyzed as\nrepresenting the disappearance of the thermal X-ray emission from the\ninner disk \\cite{Bel97a,Bel97b}, the infrared\/X-ray correlation shows\nthat the jet material originates in the inner accretion\ndisk\\cite{Eik98}. Another conclusion drawn from the recent\nX-ray\/radio\/infrared studies is that there is a wide distribution of\n``baby jets'' in which quantized impulses appear at $\\sim30$ min\nintervals. The radio strength of these events is one to three orders\nof magnitude below the levels of the superluminal outbursts of 1994\n\\cite{Poo98,Mir94}.\n\nWe expect that $RXTE$ will continue to support multifrequency\nobservations of GRS1915+105 during 1998. There are opportunities for\nfurther analysis to characterize the distribution and expansion times\nof the jets, analyze the infrared and radio spectra of these events,\nand study the details of the X-ray light curve in the effort to\nconstrain the physics of the trigger mechanism.\n\n\\@startsection{section}{1}{\\z@{67 HZ OSCILLATIONS IN GRS1915+105}\n\nThere have been many observations of GRS1915+105 with $RXTE$ since the\nsix (1996 April 6--June 11) that provided detections of QPO at 67 Hz\n\\cite{Mor97}. Given the importance of this QPO and also the variety of\nemission states recorded for GRS1915+105 (see Figure~\\ref{fig:asm19}),\nwe investigated the data archive for new detections of this QPO. We\nadopted a global perspective, and we divided the $RXTE$ observations\ninto a sequence of X-ray state intervals, which we label as groups\n``g1'' through ``g10'' in Figure~\\ref{fig:asm19}. 
The groups were\nselected with consideration of both the ASM light curve and the\ncharacteristics of the PCA power spectra, and some observations\nbetween the group boundaries were ignored as representing transition\nstates.\n\nIn Table~\\ref{tab:67hz} we list the time intervals (cols. 2, 3), the\nnumber of observations (col. 4), the X-ray state (col. 5), and the\naverage X-ray flux (in Crab units) for each group. The typical\nobservation has an exposure time of 10 ks. The X-ray state\ndescription follows the convention of Morgan et al. \\cite{Mor97},\nwhich describes GRS1915+105 as being relatively steady and bright (B),\nflaring (FL), chaotic (CH), or low-hard (LH).\n\n\\begin{table*}\n\n\n\\newlength{\\digitwidth} \\settowidth{\\digitwidth}{\\rm 0}\n\\catcode`?=\\active \\def?{\\kern\\digitwidth}\n\\caption{The 67 Hz QPO in GRS1915+105}\n\\label{tab:67hz}\n\\begin{tabular*}{\\textwidth}{@{}l@{\\extracolsep{\\fill}}llrccccc}\n\\hline\ngroup & start & end & obs & state & flux & freq. & FWHM & ampl. \\\\\n\\hline\n\n1 & 1996 Apr 06 & 1996 May 14 & 7 & B & 1.06 & 64.5 & 4.0 & 0.0069 \\\\\n2 & 1996 May 21 & 1996 Jul 06 & 14 & FL & 1.00 & 65.7 & 2.3 & 0.0022 \\\\\n3 & 1996 Jul 14 & 1996 Aug 10 & 6 & LH & 0.58 & & & \\\\\n4 & 1996 Sep 16 & 1996 Oct 15 & 8 & B & 1.01 & 67.6 & 1.5 & 0.0016 \\\\\n5 & 1996 Nov 28 & 1997 May 08 & 28 & LH & 0.31 & 68.3 & 2.3 & 0.0023 \\\\\n6 & 1997 May 13 & 1997 Jun 30 & 18 & CH\/B & 0.64 & & & \\\\\n7 & 1997 Jul 07 & 1997 Aug 21 & 17 & B & 1.33 & 66.9 & 4.3 & 0.0039 \\\\\n8 & 1997 Aug 24 & 1997 Sep 29 & 15 & CH\/FL & 1.17 & & & \\\\\n9 & 1997 Oct 09 & 1997 Oct 25 & 4 & LH & 0.47 & & & \\\\\n10 & 1997 Oct 30 & 1997 Dec 22 & 15 & FL & 1.41 & 67.4 & 4.2 & 0.0035 \\\\\n\\hline\n\\end{tabular*}\n\\end{table*}\n\nWe then combined the power spectra in each group, using the full\nenergy coverage of the PCA instrument. We fit the results for a power\ncontinuum (with a power-law function) and a QPO feature (with a\nLorentzian profile) over the range of 40--120 Hz. We emphasize that the\nlocation of the central QPO frequency is free to wander within this\nfrequency interval. A schematic version of this fit is sketched below.\nThe average power spectra for the 10 groups\n(linear units) and the QPO fits for 6 cases are shown in\nFigure~\\ref{fig:fit67hz}.\n\n\\begin{figure*}\n\\centerline{\\psfig{figure=hmult67.ps,width=16cm,height=16cm} }\n\\caption{Average power density spectra in the range of 20--120 Hz for RXTE PCA observations of 1996 and 1997, combined in 10 groups. For the 6 cases in which a QPO is detected (see Table 1), the QPO fit is shown with a solid line.}\n\\label{fig:fit67hz}\n\\end{figure*}\n\nThe results derived from this analysis are listed in\nTable~\\ref{tab:67hz}. The central QPO frequency is given in col. 7,\nand there is a narrow distribution of $66.7 \\pm 1.4$ Hz. The QPO FWHM\nvalues (col. 8) have a mean value of $3.4 \\pm 1.0$ Hz. Comparing these\nobserving intervals, we conclude that the average X-ray luminosity of\nGRS1915+105 may vary by a factor of 4 with no significant change in\nthe characteristics of the 67 Hz QPO.\n\nThe integrated QPO amplitude is given in col. 9. The amplitudes (like\nthe power spectra in Figure~\\ref{fig:fit67hz}) are normalized by the\nmean X-ray count rate for GRS1915+105. The integrated power in the 67\nHz QPO is in the range of 0.2\\%--0.7\\% of the mean X-ray flux.\n\nThe results for group 5 are particularly noteworthy. During this\nperiod the source was in the low-hard state for a long time (see\nFigure~\\ref{fig:asm19}). 
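\n\nThe fit described above can be written schematically as follows (a toy\nsketch on synthetic data, not the pipeline used for Table~\\ref{tab:67hz};\nthe Lorentzian is parametrized here by its peak value rather than by the\nintegrated amplitude quoted in the table):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef model(f, a, b, amp, f0, fwhm):\n    # power-law continuum plus a Lorentzian QPO profile\n    return a * f**(-b) + amp * (fwhm / 2)**2 / ((f - f0)**2 + (fwhm / 2)**2)\n\nrng = np.random.default_rng(1)\nfreq = np.linspace(40.0, 120.0, 400)                 # Hz\ntruth = (2e-3, 0.8, 4e-3, 67.0, 3.4)                 # synthetic spectrum\npower = model(freq, *truth) * (1 + 0.05 * rng.standard_normal(freq.size))\n\npopt, pcov = curve_fit(model, freq, power, p0=[1e-3, 1.0, 1e-3, 65.0, 3.0])\nprint('QPO centroid and FWHM:', popt[3], popt[4])\n\\end{verbatim}\nAs in the analysis above, the central frequency is left free within the\nfitting window.\n\n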
The PCA light curves in 1 s time bins show\nvariations limited to moderate flickering, with rms variations $\\sim\n10$\\%. However, the continuum power at 40--120 Hz is relatively high\nduring this interval (see Figure~\\ref{fig:fit67hz}). The large number\nof observations in group 5 partially compensates for the losses in\nstatistical sensitivity to QPO detection due to lower count rate and\nelevated continuum power. Nevertheless the QPO search does find a\nsmall feature that is consistent in frequency (68.3 Hz), width (2.3\nHz), and amplitude (0.23\\%) with the other detections. We estimate that\nthe uncertainty in the amplitude is 0.09\\%, so that the detection of\nthe 67 Hz QPO in group 5 has a significance of 2.6 $\\sigma$. For the\n4 groups that do not yield QPOs in the range of 40--120 Hz, the\nuncertainties are slightly larger, and we cannot exclude the\npossibility that GRS1915+105 is $always$ emitting X-ray QPOs at 67 Hz\nwith amplitudes in the range of 0.1\\% or larger.\n\nThere are yet many avenues for further investigation of this QPO,\ne.g. time lags at 67 Hz, analysis of the energy spectrum for the\ngroups with positive QPO detection, and segregation of data with\nalternative schemes such as the phases of jet-related dipping cycles.\nAll of these topics will be pursued during the next several months.\n\n\\@startsection{section}{1}{\\z@{QPOs in GRO J1655-40}\n\nWe have conducted similar analyses of PCA power spectra for individual\nobservations of GRO J1655-40. As reported previously \\cite{Rem98},\nthere are transient QPOs in the range of 8--30 Hz and there is a high\nfrequency QPO near 300 Hz. All of these QPOs are associated with the\nstrength of the power-law component. With respect to\nFigure~\\ref{fig:asm16}, the QPOs at 8--30 Hz appear when observations have\nhard spectra that correspond with ASM HR2 values above 0.8, while\nthe 300 Hz QPO is significant only when we combine the power spectra\nfor the 7 ``hardest'' observations made with the PCA (1996 August and\nOctober). \n\nWe fit the individual PCA power spectra for power continuum and QPOs,\nas described above, using frequency windows of 0.02--2 Hz and 5--50\nHz. In Figure~\\ref{fig:qpo16} we show the central QPO frequencies as a\nfunction of the source count rate in the PCA energy channels above 13\nkeV (or above channel 35). We use an open triangle for narrow\nQPOs ($\\nu \/ \\delta\\nu > 5$) and the ``*'' symbol for broad QPOs\n($\\nu \/ \\delta\\nu < 4$). In some observations, both narrow and broad\nQPOs appear in the same power spectrum (i.e. one 10 ks observation).\nThe ``x'' symbol shows a narrow and weak QPO derived from the average\npower spectrum obtained during the 1997 PCA observations (MJD interval\n50500--50650).\n\n\\begin{figure*}\n\\centerline{\\psfig{figure=pub_fxqpo.ps,width=10cm,height=5.5cm} }\n\\caption{The central frequency of X-ray QPOs in GRO J1655-40 as a function of the PCA count rate above 13 keV. The open triangles represent broad QPOs, while the solid triangles represent narrow ones.}\n\\label{fig:qpo16}\n\\end{figure*}\n\nThe results in Figure~\\ref{fig:qpo16} show that the low-frequency QPOs\nin GRO J1655-40 are organized in three tracks. A broad QPO appears to\nbe stationary near 8 Hz, while the narrow QPOs shift to lower\nfrequency as the hard X-ray flux increases. The QPO derived from the\nsum of 1997 observations appears to be a simple extension of this\nnarrow QPO track, occurring when the X-ray flux above 13 keV is nearly\nzero. 
Very low frequency QPOs (0.085 and 0.11 Hz) are seen on two\noccasions when the hard X-ray flux is near maximum. These QPOs coexist\nwith the 300 Hz QPO, and they are reminiscent of the 0.067 Hz QPOs in\nGRS1915+105. We speculate that the 0.1 Hz QPOs appear near the\nthreshold of the chaotic light curves manifest in GRS1915+105.\nGRO J1655-40 approaches this threshold but does not cross the line into\nunstable light curves during the 1996--1997 outburst.\n\nIn Figure~\\ref{fig:asm16} we see that GRO J1655-40 fades below 20\nmCrab on 1997 Aug 17. Whether there will be a renewed outburst in\n1998 is anyone's guess, but the ASM will surely be monitoring this\nsource for any signs of X-ray activity.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $M \\subset \\R^n$ be an open, bounded and connected set with a smooth boundary, and consider the wave equation\non $M$,\n\\begin{align}\n\\label{eq:wave_isotropic}\n&\\p_t^2 u(t,x) - c(x)^2 \\Delta u(t,x) = 0, \n\\quad &(t,x) \\in (0, \\infty) \\times M,\n\\\\&u(0,x) = 0,\\ \\p_t u(0,x) = 0,\n\\quad &x \\in M, \\nonumber\n\\\\&\\p_\\nu u(t,x) = f(t,x), \n\\quad &(t,x) \\in (0, \\infty) \\times \\p M, \\nonumber\n\\end{align}\nwhere $c$ is a smooth strictly positive function on $\\bar M$, and $\\p_\\nu$ is the normal derivative on \nthe boundary $\\p M$.\n\nDenote the solution of (\\ref{eq:wave_isotropic}) by $u^f(t,x) = u(t,x)$, let $T > 0$, and define the operator \n\\begin{equation}\n\\label{eq:dtn}\n\\Lambda_{2T} : f \\mapsto u^f|_{(0,2T) \\times \\p M}.\n\\end{equation}\nOperator $\\Lambda_{2T}$ models boundary measurements and is called the Neumann-to-Dirichlet operator. \nLet us assume that $c|_{\\p M}$ is known but $c|_M$ is unknown.\nThe inverse problem for the wave equation is to reconstruct the wave speed $c(x)$, $x \\in M$,\nusing the operator $\\Lambda_{2T}$.\n\nLet $\\Gamma \\subset \\p M$ be open, $\\tau \\in C(\\bar \\Gamma)$, and\nconsider a wave source \n$f$ in $L^2((0,\\infty) \\times \\p M)$ \nsatisfying the support condition\n\\begin{equation}\n\\label{eq:source_supp_condition}\n\\supp(f) \\subset \\{ (t, y) \\in [0, T] \\times \\bar \\Gamma;\\ t \\in [T - \\tau(y), T] \\}.\n\\end{equation}\nBy the finite speed of propagation for the wave equation \\cite{Ga, Mi},\nthe solution $u^f$ then satisfies the support condition, \n\\begin{equation}\n\\label{eq:sol_supp_condition}\n\\supp(u^f(T)) \\subset \\{x \\in M;\\ \\text{there is $y \\in \\bar \\Gamma$ such that $d(x,y) \\le \\tau(y)$}\\},\n\\end{equation}\nwhere $d(x,y)$ is the travel time between points $x$ and $y$, see (\\ref{domain_of_influence}) below. Let us denote the set in (\\ref{eq:sol_supp_condition}) by $M(\\Gamma, \\tau)$ and call it the domain of influence. \n\nThe contribution of this paper is twofold.\nFirst, we present a method to compute the volume of $M(\\Gamma, \\min(\\tau, T))$ using the operator $\\Lambda_{2T}$.\nThe method works even when the wave speed is anisotropic, that is, when the wave speed is given by a Riemannian metric tensor $g(x) = (g_{jk}(x))_{j,k=1}^n$, $x \\in \\bar M$. 
\nIn the case of the isotropic wave equation (\\ref{eq:wave_isotropic}) we have $g(x) = (c(x)^{-2} \\delta_{jk})_{j,k}^n$.\n\nSecond, assuming that the Riemannian manifold $(\\bar M, g)$ is simple, we show that the volumes of $M(\\Gamma, \\tau)$\nfor $\\tau \\in C(\\p M)$ contain enough information to \ndetermine the metric tensor $g$ up to a change of coordinates in $M$.\nWe recall the definition of a simple compact manifold below,\nsee Definition \\ref{def:simple}.\nIn the case of the isotropic wave equation (\\ref{eq:wave_isotropic}) we can\ndetermine the wave speed $c$\nin the Cartesian coordinates of $M$. \n\nOur method to compute the volume of \nthe domain of influence\nis a quadratic minimization scheme in $L^2((0, 2T) \\times \\p M)$ for the source \n$f$ satisfying the support condition (\\ref{eq:source_supp_condition}).\nAfter a finite dimensional discretization,\nan approximate minimizer \ncan be computed by solving\na positive definite system of linear equations. \nWe show that the system can be solved very efficiently if \nwe use an iterative method, such as the conjugate gradient method,\nand intertwine measurements with computation.\nIn particular, instead of solving the equation (\\ref{eq:wave_isotropic}) computationally in an iteration step, we measure $\\Lambda_{2T} f$ \nfor two sources $f$:\none is the approximate minimizer given by the previous iteration step and the other is related to the time-reversed version of the approximate minimizer, see (\\ref{eq:cg_step_measurements}) below.\nWe believe that our intertwined algorithm is more robust against noise than an algorithm where noise is propagated by simulation of the wave equation.\n\nLet us consider next the problem to determine the metric tensor $g$ given the volumes of $M(\\p M, \\tau)$ for all \n$\\tau \\in C(\\p M)$. Our approach exploits the fact that\n$C(\\p M)$ is a lattice with the natural partial order \n\\begin{equation}\n\\tau \\le \\sigma \n\\quad \\text{if and only if} \\quad \n\\tau(y) \\le \\sigma(y)\\ \\text{for all $y \\in \\p M$}.\n\\end{equation}\nLet us define the greatest lower bound of $\\tau$ and $\\sigma$ in $C(M)$ as their pointwise minimum and\ndenote it by $\\tau \\wedge \\sigma$.\nWe recall that a subset of $C(M)$ is a meet-semilattice if it is closed under the binary operation $\\wedge$.\n\nLet us define the {\\em boundary distance functions},\n\\begin{equation}\n\\label{eq:boundary_distance_functions}\nr_x : \\p M \\to [0, \\infty), \\quad r_x(y) := d(x,y),\n\\end{equation}\nfor $x \\in \\overline M$.\nWe show that the volumes of \n$M(\\p M, \\tau)$, $\\tau \\in C(\\p M)$, determine the meet-semilattice, \n\\begin{equation}\n\\label{eq:semilattice_QM_intro}\n\\overline{Q(M)} = \\bigcup_{x \\in \\overline M}\n\\{ \\tau \\in C(M);\\ \\tau \\le r_x \\}.\n\\end{equation}\nMoreover, we show that if $(\\bar M, g)$ is simple \nthen the boundary distance functions are the maximal elements of $\\overline{Q(M)}$.\nThe set of boundary distance functions\ndetermines the Riemannian manifold $(M,g)$ \\cite{Ku_proc, KKL}. Thus the volumes of $M(\\p M, \\tau)$, $\\tau \\in C(\\p M)$, determine $(M,g)$ if it is simple.\n\n\nOur results give a new uniqueness proof for the \ninverse problem for the wave equation in the case of\na simple geometry. Belishev and Kurylev have proved the uniqueness even when the geometry is not simple \\cite{BeKu}. \nTheir proof is based on the boundary control method\n\\cite{AKKLT, Be3, KK, KKLima, Pestov}, originally developed for the isotropic wave equation \\cite{Be}. 
\nOur uniqueness proof might be the first systematic use of lattice structures in the context of inverse boundary value problems.\n\nIn previous literature, $M(\\Gamma, \\tau)$ has been defined in the case of a constant function $\\tau$, see e.g. \\cite{KKL} and the references therein.\nIn this paper we establish some properties of $M(\\Gamma, \\tau)$ when $\\Gamma \\subset \\p M$ is open and $\\tau \\in C(\\bar \\Gamma)$.\nIn particular, we show that its boundary is of measure zero. \nThis important detail seems to be neglected in previous literature also in the case of a constant function $\\tau$.\n\nOur method to compute the volume of a domain of influence is related to the iterative time-reversal control method \nby Bingham, Kurylev, Lassas and Siltanen \\cite{ITRC}.\nTheir method produces a certain kind of focused wave, \nand they also prove uniqueness for the \ninverse problem for the wave equation using these waves. Moreover, they give a review of \nmethods that use time-reversed measurements\n\\cite{Bardos, Bardos2, Papa1, CIL, FinkD, FinkMain, Kliba}. \nA modification of the iterative time-reversal control method is presented in \\cite{DKL}. \n\n\\section{Main results}\n\nLet $(M, g)$ be a $C^\\infty$-smooth, \ncompact and connected Riemannian manifold \nof dimension $n \\ge 2$ with nonempty boundary $\\p M$.\nWe consider the wave equation \n\\begin{align}\\label{eq:wave}\n&\\p_t^2 u(t,x) + a(x,D_x) u(t, x) = 0, \\quad (t,x) \\in (0,\\infty) \\times M,\n\\\\\\nonumber& u|_{t=0} = 0, \\quad \\p_t u|_{t=0}=0, \n\\\\\\nonumber& b(x, D_x) u(t,x) = f(t,x), \\quad (t,x) \\in (0,\\infty) \\times \\p M,\n\\end{align}\nwhere $a(x, D_x)$ is a weighted Laplace-Beltrami operator and $b(x, D_x)$ is the \ncorresponding normal derivative. \nIn coordinates, $(g^{jk}(x))_{j,k=1}^n$ denotes the inverse of $g(x)$ and $|g(x)|$ the determinant of $g(x)$.\nThen\n\\begin{align*}\na(x,D_x) u\n&:= -\\sum_{j,k=1}^n \\mu(x)^{-1}|g(x)|^{-\\frac 12}\\frac {\\p}{\\p x^j} \n\\ll( \\mu(x)|g(x)|^{\\frac 12}g^{jk}(x)\\frac {\\p u}{\\p x^k} \\rr),\n\\\\b(x, D_x) u \n&:= \\sum_{j,k=1}^n \\mu(x)g^{jk}(x) \\nu_k(x) \\frac{\\p u}{\\p x^j},\n\\end{align*}\nwhere $\\mu$ is a $C^\\infty$-smooth strictly positive weight function and \n$\\nu = (\\nu_1, \\dots, \\nu_n)$ is the exterior co-normal vector of $\\p M$\nnormalized with respect to $g$, that is $\\sum_{j,k=1}^n g^{jk}\\nu_j\\nu_k=1$.\nThe isotropic wave equation (\\ref{eq:wave_isotropic}) is a special case of (\\ref{eq:wave})\nwith $g(x) := (c(x)^{-2} \\delta_{jk})_{j,k=1}^n$ and $\\mu(x) = c(x)^{n-2}$; indeed, with this choice $\\mu|g|^{\\frac 12}g^{jk} = \\delta^{jk}$ and $\\mu^{-1}|g|^{-\\frac 12} = c^2$, so that $a(x,D_x)u = -c^2 \\Delta u$.\n\nWe denote the indicator function of a set $A$ by $1_A$,\nthat is, $1_A(x) = 1$ if $x \\in A$ and $1_A(x) = 0$\notherwise. 
\nMoreover, we denote\n\\begin{equation} \\label{eq:integration_triangle_L}\nL := \\{ (t,s) \\in \\R^2; t + s \\le 2 T,\\ s > t > 0 \\},\n\\end{equation}\nand define the operators\n\\begin{align*}\n&J f(t) := \\frac{1}{2} \\int_0^{2 T} 1_L(t,s) f(s) ds, \\quad\nR f(t) := f(2 T - t), \\quad\n\\\\&K := J \\Lambda_{2T} - R \\Lambda_{2T} R J, \\quad\nI f (t) := 1_{(0,T)}(t) \\int_0^t f(s) ds,\n\\end{align*}\nwhere $\\Lambda_{2T}$ is the operator defined by (\\ref{eq:dtn}) $u^f$ being the solution of (\\ref{eq:wave}).\nWe denote by $dS_g$ the Riemannian volume measure of the manifold $(\\p M, g|_{\\p M})$.\nFurthermore, we denote by $(\\cdot, \\cdot)$ and $\\norm{\\cdot}$ \nthe inner product and the norm of $L^2((0, 2T) \\times \\p M; dt \\otimes dS_g)$.\nWe study the regularized minimization problem\n\\begin{equation}\n\\label{eq:minimization_regularized}\n\\argmin_{f \\in S} \\ll ((f, K f) - 2(I f, 1) + \\alpha \\norm{f}^2 \\rr),\n\\end{equation}\nwhere the regularization parameter $\\alpha$ is strictly positive and $S$\nis a closed subspace of $L^2((0, 2T) \\times \\p M)$.\n\nOperator $a(x, D_x)$ with the domain $H^2(M) \\cap H^1_0(M)$ is self-adjoint on\nthe space $L^2(M; dV_\\mu)$, where $dV_\\mu = \\mu |g|^{1\/2} dx$ in coordinates.\nThus we call $dV_\\mu$ the natural measure corresponding to $a(x, D_x)$\nand denote it also by $m$.\nIn \\cite{ITRC} it is shown that\n\\begin{equation}\n\\label{eq:inner_products}\n(u^f(T), u^h(T))_{L^2(M; dV_\\mu)} \n= (f, K h).\n\\end{equation}\nThis is a reformulation of the Blagovestchenskii identity \\cite{Bl}.\nIn Lemma \\ref{lem:cross_term} we show the following identity\n\\begin{equation}\n\\label{eq:inner_product_with_1}\n(u^f(T), 1)_{L^2(M; dV_\\mu)} = (I f, 1).\n\\end{equation}\nThis is well known at least in the isotropic case, see e.g. 
\\cite{Be2}.\nThe equations (\\ref{eq:inner_products}) and (\\ref{eq:inner_product_with_1}) imply that\n\\begin{align}\n\\label{eq:minimization}\n(f, K f) - 2(I f, 1)\n&= \\norm{u^{f}(T)}_{L^2(M; dV_\\mu)}^2 - 2 (u^{f}(T), 1)_{L^2(M; dV_\\mu)} \n\\\\\\nonumber&=\n\\norm{u^{f}(T) - 1}_{L^2(M; dV_\\mu)}^2 + C,\n\\end{align}\nwhere $C = -\\norm{1}_{L^2(M; dV_\\mu)}^2$ does not depend on the source $f$.\nThus the minimization problem (\\ref{eq:minimization_regularized}) is equivalent with the \nminimization problem \n\\begin{equation*}\n\\argmin_{f \\in S} \\ll( \\norm{u^{f}(T) - 1}_{L^2(M; dV_\\mu)}^2 + \\alpha \\norm{f}^2 \\rr).\n\\end{equation*}\n\nFor $\\Gamma \\subset M$ and $\\tau : \\bar \\Gamma \\to \\R$, we define {\\em the domain of influence},\n\\begin{equation}\n\\label{domain_of_influence}\nM(\\Gamma, \\tau) := \\{x \\in M;\\ \\text{there is $y \\in \\bar \\Gamma$ such that $d(x,y) \\le \\tau(y)$}\\},\n\\end{equation}\nwhere $d$ is the distance on the Riemannian manifold $(M,g)$.\nIn Section \\ref{sec:regularization} we show the following two theorems.\n\\begin{theorem}\n\\label{thm:minimization_on_subspace}\nLet $\\alpha > 0$ and let $S \\subset L^2( (0,2T) \\times \\p M)$ be a closed subspace.\nDenote by $P$ the orthogonal projection\n\\begin{equation*}\nP : L^2( (0,2T) \\times \\p M) \\to S.\n\\end{equation*}\nThen the regularized minimization (\\ref{eq:minimization_regularized}) has unique minimizer $f_\\alpha \\in S$, \nand $f_\\alpha$ is the unique $f \\in S$ solving\n\\begin{equation}\n\\label{eq:normal}\n(PKP + \\alpha) f = P I^+ 1,\n\\end{equation}\nwhere $I^+$ is the adjoint of $I$ in $L^2( (0,2T) \\times \\p M)$.\nMoreover, $PKP + \\alpha$ is positive definite on $S$.\n\\end{theorem}\n\\begin{theorem}\n\\label{thm:indicator_functions}\nLet $\\Gamma \\subset \\p M$ be open, $\\tau \\in C(\\bar \\Gamma)$ and define\n\\begin{equation*}\nS = \\{f \\in L^2((0, 2T) \\times \\p M);\\ \\text{$\\supp(f)$ satisfies (\\ref{eq:source_supp_condition})}\\}.\n\\end{equation*}\nLet $f_\\alpha$, $\\alpha > 0$, \nbe the minimizer in Theorem \\ref{thm:minimization_on_subspace}.\nThen in $L^2(M)$\n\\begin{equation*}\n\\lim_{\\alpha \\to 0} u^{f_\\alpha}(T) \n= 1_{M(\\Gamma, \\tau \\wedge T)}.\n\\end{equation*}\n\\end{theorem}\n\nWe denote $m_\\tau := m(M(\\p M, \\tau))$, for $\\tau \\in C(\\p M)$, and $m_\\infty := m(M)$.\nMoreover, we define\n\\begin{align}\n Q(M) &:= \\{ \\tau \\in C(\\p M);\\ m_\\tau < m_\\infty \\}, \\label{eq:meet_semilattice_QM}\n\\\\R(M) &:= \\{ r_x \\in C(\\p M);\\ x \\in M \\}, \\nonumber\n\\end{align}\nwhere $r_x$ is the boundary distance function defined by \n(\\ref{eq:boundary_distance_functions}) $d$ being the distance on the Riemannian manifold $(M,g)$.\nWe denote by $\\overline{Q(M)}$ the closure of $Q(M)$ in $C(M)$.\nIn Section \\ref{sec:maximal_elements} we prove the \nequation (\\ref{eq:semilattice_QM_intro}) and show\nthe following theorem.\n\\begin{theorem}\n\\label{thm:maximal_elements}\nIf $(M,g)$ satisfies the condition \n\\begin{itemize}\n\\item[(G)] $x_1, x_2 \\in M$ and $r_{x_1} \\le r_{x_2}$ imply $x_1 = x_2$,\n\\end{itemize}\nthen $R(M)$ is the set of maximal elements of $\\overline{Q(M)}$.\n\\end{theorem}\n\nLet $T \\ge \\max\\{ d(x, y) ;\\ x \\in M,\\ y \\in \\p M \\}$.\nThen the set of volumes,\n\\begin{equation}\n\\label{eq:volumes_for_uniqueness}\n\\mathcal V := \\{m_\\tau;\\ \\tau \\in C(\\p M)\\ \\text{and $0 \\le \\tau \\le T$} \\},\n\\end{equation}\ndetermines the set $Q(M)$. 
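\n\nFrom a computational point of view, Theorem \\ref{thm:minimization_on_subspace} translates, after discretization, into the following structural sketch. All dimensions, the matrix $B$ and the regularization parameter below are illustrative assumptions; in practice the operator $K$ is applied through the measurements $\\Lambda_{2T} f$ and $\\Lambda_{2T} R J f$, as discussed below, and the map $f \\mapsto u^f(T)$, represented here by $B$ only for the sake of a runnable toy example, is not available.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.sparse.linalg import LinearOperator, cg\n\nrng = np.random.default_rng(0)\nn_source = 200                   # dimension of the discretized subspace S\nB = rng.standard_normal((300, n_source))   # toy stand-in for f -> u^f(T)\nalpha = 1e-2\n\ndef apply_K(f):\n    # stand-in for f -> J Lambda f - R Lambda R J f, which in practice\n    # is evaluated from two boundary measurements per iteration\n    return B.T @ (B @ f)\n\nA = LinearOperator((n_source, n_source),\n                   matvec=lambda f: apply_K(f) + alpha * f)\nb = B.T @ np.ones(300)           # stand-in for P I^+ 1\nf_alpha, info = cg(A, b)         # conjugate gradient iteration\n\\end{verbatim}\nSince $P K P + \\alpha$ is positive definite on $S$, the conjugate gradient iteration is applicable, and each step requires only one application of $K$.\n\n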
Note that $r_x(y) \\le T$, for all $x \\in M$ and all $y \\in \\p M$, and that $m_\\infty = \\max \\mathcal V$.\nMoreover, by Theorem \\ref{thm:indicator_functions} and equation (\\ref{eq:inner_products}) we can compute the volume $m_\\tau \\in \\mathcal V$ as the limit \n\\begin{equation}\n\\label{eq:volumes_via_minimizers}\nm_\\tau = \\lim_{\\alpha \\to 0} (f_\\alpha, K f_\\alpha).\n\\end{equation}\nThe set $R(M)$ determines the manifold $(M,g)$ up to an isometry \\cite{Ku_proc, KKL}.\nHence the volumes (\\ref{eq:volumes_for_uniqueness}) contain enough information to \ndetermine the manifold $(M,g)$ in the class of manifolds satisfying (G).\nIn section \\ref{sec:maximal_elements}, we show that simple manifolds satisfy (G).\n\\begin{definition}\n\\label{def:simple}\nA compact Riemannian manifold $(M, g)$ with boundary is {\\em simple}\nif it is simply connected, any geodesic has no conjugate points and\n$\\p M$ is strictly convex with respect to the metric $g$.\n\\end{definition}\n\nLet us discuss Theorem \\ref{thm:minimization_on_subspace} from the point of view of practical computations.\nWhen the subspace $S$ is finite-dimensional, the positive definite system of linear equations (\\ref{eq:normal})\ncan be solved using the conjugate gradient method.\nIn each iteration step of the conjugate gradient method we must evaluate one matrix-vector product. \nIn our case, the product can be realized by two measurements \n\\begin{equation}\n\\label{eq:cg_step_measurements}\n\\Lambda_{2 T}f, \\quad \\Lambda_{2 T} R J f, \n\\end{equation}\nwhere $f$ is the approximate solution given by the previous iteration step. \nThe remaining computational part of the iteration step consists of a few inexpensive vector-vector operations. \nThus if we intertwine computation of a conjugate gradient steps with measurements (\\ref{eq:cg_step_measurements}), \nthe computational cost of our method is very low. \n\n\n\\section{The open and the closed domain of influence}\n\\label{sec:domains_of_influence}\n\nLet us recall that the domain of influence $M(\\Gamma, \\tau)$ \nis defined in (\\ref{domain_of_influence}) for $\\Gamma \\subset M$ and $\\tau : \\bar \\Gamma \\to \\R$.\nWe call $M(\\Gamma, \\tau)$ also the {\\em closed} domain of influence and\ndefine the {\\em open} domain of influence\n\\begin{equation*}\nM^0(\\Gamma, \\tau) := \\{ x \\in M;\\ \\text{there is $y \\in \\Gamma$ s.t. $d(x,y) < \\tau(y)$} \\}.\n\\end{equation*}\n\nLet us consider the closed domain of influence $M(\\Gamma, \\tau)$ when $\\Gamma \\subset \\p M$ is open and $\\tau$ is a constant.\nFinite speed of propagation for the wave equation guarantees that the solution $u^f$ at time $T$\nis supported on $M(\\Gamma, \\tau)$ whenever the source $f$ satisfies the support condition \n(\\ref{eq:source_supp_condition}).\nMoreover, using Tataru's unique continuation result \\cite{Ta1, Ta2}, it is possible to show that the set of functions,\n\\begin{equation*}\n\\{ u^f(T);\\ \\text{$f \\in L^2((0,2T) \\times \\p M)$ and $\\supp(f)$ satisfies (\\ref{eq:source_supp_condition})}\\},\n\\end{equation*}\nis dense in $L^2(M^0(\\Gamma, \\tau))$, see e.g. 
the proof of Theorem 3.16 and the orthogonality argument of Theorem 3.10 in \\cite{KKL}.\nIt is easy to generalize this for $\\tau$ of the form\n\\begin{equation*}\n\\tau(y) = \\sum_{j=1}^N T_j 1_{\\Gamma_j}(y), \\quad y \\in \\p M,\n\\end{equation*}\nwhere $N \\in \\N$, $T_j \\in \\R$ and $\\Gamma_j \\subset \\p M$ are open, see \\cite{ITRC}.\nHowever, the fact that $M(\\Gamma, \\tau) \\setminus M^0(\\Gamma, \\tau)$ is of measure zero seems to go unproven in the literature.\nIn this section we show that this is indeed the case even for $\\tau \\in C(\\bar \\Gamma)$.\n\nTo our knowledge, this cannot be proven just by considering the boundaries of the balls $B(y, \\tau(y))$, \n$y \\in \\Gamma$.\nIn fact, we give below an example showing that the union of the boundaries $\\p B(y, \\tau(y))$, for $y \\in \\p \\Gamma$,\ncan have positive measure.\n\\begin{example}\nLet $\\mathcal C$ be the fat Cantor set, $M \\subset \\R^2$ be open, $g$ be the Euclidean metric, \n$(0,1) \\times \\{0\\} \\subset \\p M$ and $\\Gamma = \\ll( (0,1) \\setminus \\mathcal C \\rr) \\times \\{0\\}$.\nThen the union $B := \\bigcup_{y \\in \\p \\Gamma} \\p B(y, 1)$ has positive measure. \n\\end{example}\n\\begin{proof}\nThe fat Cantor set $\\mathcal C$ is an example of a closed subset of $[0,1]$ whose boundary has positive measure, see e.g. \\cite{Pugh}.\nThe map\n\\begin{equation*}\n\\Phi : (s, \\alpha) \\mapsto (s + \\cos \\alpha, \\sin \\alpha)\n\\end{equation*}\nis a diffeomorphism from $\\R \\times (0, \\pi\/2)$ onto its image in $\\R^2$.\nThe image of $H := \\p \\mathcal C \\times (0, \\pi\/2)$ under $\\Phi$\nlies in $B$.\nAs $H$ has positive measure, so does $B$.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:characterization_of_domi}\nLet $\\Gamma \\subset \\p M$ be open and let $\\tau \\in C(\\bar \\Gamma)$. \nThen the function \n\\begin{equation*}\nr_{\\Gamma, \\tau}(x) := \\inf_{y \\in \\Gamma} (d(x, y) - \\tau(y))\n\\end{equation*}\nis Lipschitz continuous and \n\\begin{align}\nM(\\Gamma, \\tau) &= \\{ x \\in M;\\ r_{\\Gamma, \\tau}(x) \\le 0 \\}, \\label{eq:closed_domi_r}\n\\\\M^0(\\Gamma, \\tau) &= \\{ x \\in M;\\ r_{\\Gamma, \\tau}(x) < 0 \\}. \\label{eq:open_domi_r}\n\\end{align}\nIn particular, $M(\\Gamma, \\tau)$ is closed and $M^0(\\Gamma, \\tau)$ is open.\n\\end{lemma}\n\\begin{proof}\nLet us define\n\\begin{equation*}\nr(x) := r_{\\Gamma, \\tau}(x), \\quad \\tilde r(x) := \\min_{y \\in \\bar \\Gamma} (d(x, y) - \\tau(y)),\n\\end{equation*}\nand show that $\\tilde r = r$. Clearly $\\tilde r\\le r$.\nLet $x \\in M$. The minimum in the definition of $\\tilde r(x)$ is attained at a point $y_0 \\in \\bar \\Gamma$.\nWe may choose a sequence $(y_j)_{j=1}^\\infty \\subset \\Gamma$ such that $y_j \\to y_0$ as $j \\to \\infty$.\nThen\n\\begin{equation*}\nr(x) \\le d(x, y_j) - \\tau(y_j) \\to \\tilde r(x)\n\\quad \\text{as $j \\to \\infty$}.\n\\end{equation*}\nHence $\\tilde r = r$.\n\nLet us show that $\\tilde r$ is Lipschitz. \nLet $x \\in M$, and let $y_0$ be as before. \nLet $x' \\in M$. Then \n\\begin{equation*}\n\\tilde r(x') - \\tilde r(x) \n\\le d(x', y_0) - \\tau(y_0) - \\ll( d(x, y_0) - \\tau(y_0) \\rr) \\le d(x, x').\n\\end{equation*}\nBy symmetry with respect to $x'$ and $x$, $\\tilde r$ is Lipschitz.\n\nLet us show (\\ref{eq:closed_domi_r}). \nClearly $r(x) \\le 0$ for $x \\in M(\\Gamma, \\tau)$.\nLet $x \\in M$ satisfy $r(x) \\le 0$, and let $y_0$ be as before.\nThen \n\\begin{equation*}\nd(x, y_0) - \\tau(y_0) = \\tilde r(x) = r(x) \\le 0,\n\\end{equation*}\nand $x \\in M(\\Gamma, \\tau)$.
Hence (\\ref{eq:closed_domi_r}) holds. \nThe equation (\\ref{eq:open_domi_r}) can be proven in a similar way.\n\\end{proof}\n\nIf $\\Gamma \\subset \\p M$ and $\\tau$ is a constant function, then\n\\begin{align*}\nM(\\Gamma, \\tau) &= \\{ x \\in M;\\ r_{\\Gamma, \\tau}(x) \\le 0 \\} \n= \\{ x \\in M;\\ \\inf_{y \\in \\Gamma} d(x,y) \\le \\tau \\}\n\\\\&= \\{ x \\in M;\\ d(x,\\Gamma) \\le \\tau \\}.\n\\end{align*}\nThus for a constant $\\tau$, our definition of $M(\\Gamma, \\tau)$ coincides with the definition \nof the domain of influence in \\cite{KKL}. \n\n\\begin{lemma}\n\\label{lem:level_set_is_null}\nLet $A \\subset M$ be compact and let $\\tau : A \\to \\R$ be continuous. \nWe define\n\\begin{equation*}\nr(x) := \\inf_{y \\in A} (d(x,y) - \\tau(y)), \\quad x \\in M.\n\\end{equation*}\nIf $\\tau$ is strictly positive on $A$ or $A$ is a null set, then $\\{x \\in M;\\ r(x) = 0\\}$\nis a null set.\nWe mean by a null set a set of measure zero with respect to the Riemannian volume measure.\n\\end{lemma}\n\\begin{proof}\nDenote by $V_g$ the volume measure of $M$ and define\n\\begin{equation*}\nZ := \\{p \\in M;\\ r(p) = 0\\}.\n\\end{equation*}\nLet us show that \n\\begin{equation} \\label{eq:A_does_not_matter}\nV_g(Z) = V_g(Z \\setminus A).\n\\end{equation}\nIf $V_g(A) = 0$, then (\\ref{eq:A_does_not_matter}) is immediate. \nIf $\\tau > 0$, then \n\\begin{equation*}\nr(q) \\le d(q,q) - \\tau(q) = -\\tau(q) < 0, \\quad q \\in A.\n\\end{equation*}\nHence $Z \\cap A = \\emptyset$ and (\\ref{eq:A_does_not_matter}) holds.\n\nLet $p \\in M^\\text{int}$. There is a chart $(U, \\phi)$ of $M^\\text{int}$ such that\n$\\phi(p) = 0$ and that the closure of the open Euclidean unit ball $B$ of $\\R^n$\nis contained in $\\phi(U)$.\nWe denote $U_p := \\phi^{-1}(B)$.\n\nThe sets $U_p$, $p \\in M^\\text{int}$, form an open cover for $M^\\text{int}$,\nand as $M^\\text{int}$ is second countable, \nthere is a countable cover $U_{p_j}$, $j = 1, 2, \\dots$, of $M^\\text{int}$.\nHence\n\\begin{equation*}\nV_g(Z) = V_g( (Z \\setminus A) \\cap (\\p M \\cup \\bigcup_{j=1}^\\infty U_{p_j}))\n\\le \\sum_{j=1}^\\infty V_g( (Z \\setminus A) \\cap U_{p_j}).\n\\end{equation*}\nIt is enough to show that $\\phi((Z \\setminus A) \\cap U_p)$\nis a null set with respect to the Lebesgue measure on $B$.\n\nWe define for $v = (v^1, \\dots, v^n) \\in \\R^n$ and $x \\in B$,\n\\begin{equation*}\n|v|_{g(x)}^2 := \\sum_{j,k = 1}^n v^j g_{jk}(x) v^k, \n\\quad |v|^2 := \\sum_{j = 1}^n (v^j)^2,\n\\end{equation*}\nwhere $(g_{jk})_{j,k=1}^n$ is the metric $g$ in the local coordinates on $\\phi(U)$.\nAs $\\overline B$ is compact in $\\phi(U)$, there is $c_p > 0$ such that\nfor all $v \\in \\R^n$ and $x \\in B$\n\\begin{equation*}\nc_p |v|_{g(x)} \\le |v| \\le \\frac{1}{c_p} |v|_{g(x)}.\n\\end{equation*}\n\nAs in the proof of Lemma \\ref{lem:characterization_of_domi}\nwe see that $r$ is Lipschitz continuous on $M$.\nThus by Rademacher's theorem there is a null set $N \\subset B$ such that \n$r$ is differentiable in the local coordinates in $B \\setminus N$.\nWe denote $Z_p := \\phi((Z \\setminus A) \\cap U_p) \\setminus N$.\n\nLet $x \\in Z_p$ and denote $p_x := \\phi^{-1}(x)$.\nAs $A$ is compact and $q \\mapsto d(p_x, q) - \\tau(q)$ \nis continuous, there is $q_x \\in A$ such that\n\\begin{equation*}\nd(p_x, q_x) - \\tau(q_x) = r(p_x) = 0.\n\\end{equation*}\nWe denote $s := d(p_x, q_x) = \\tau(q_x)$.\nAs $p_x \\notin A$ and $q_x \\in A$, we have that $0 < s$.\n\nAs $M$ is connected and complete as a metric space, Hopf-Rinow theorem gives\na 
shortest path $\\gamma : [0, s] \\to M$ \nparametrized by arclength and joining $q_x = \\gamma(0)$ and $p_x = \\gamma(s)$.\nFor a study of shortest paths on Riemannian manifolds with boundary\nsee \\cite{Ax2}.\nAs $p_x \\in U_p$, there is $a \\in (0, s)$ such that $\\gamma|_{[a , s]}$\nis a unit speed geodesic of $U_p \\subset M^\\text{int}$.\nAs $\\gamma$ is parametrized by arclength,\n\\begin{equation*}\nr(\\gamma(t)) \\le d(\\gamma(t), q_x) - \\tau(q_x) = t - s, \\quad t \\in [a, s],\n\\end{equation*}\nand as $r(\\gamma(s)) = r(p_x) = 0$,\n\\begin{equation*}\n\\frac{r(\\gamma(t)) - r(\\gamma(s))}{t - s} \\ge \\frac{t - s - 0}{t - s} = 1, \\quad t \\in (a,s).\n\\end{equation*}\n\nThe function $r \\circ \\gamma$ is differentiable at $s$ by the chain rule, and\n\\begin{equation*}\n\\p_t (r \\circ \\gamma)(s)\n= \\lim_{t \\to s^-} \\frac{r(\\gamma(t)) - r(\\gamma(s))}{t - s} \\ge 1.\n\\end{equation*}\nAs $\\gamma$ is a unit speed geodesic near $s$,\n\\begin{equation*}\nc_p = c_p |\\p_t \\gamma(s)|_{g(x)} \\le |\\p_t \\gamma(s)| \n\\le \\frac{1}{c_p}|\\p_t \\gamma(s)|_{g(x)} = \\frac{1}{c_p}.\n\\end{equation*}\nHence in the local coordinates in $B$\n\\begin{equation*}\n|D r(x)| \\ge D r(x) \\cdot \\frac{\\p_t \\gamma(s)}{|\\p_t \\gamma(s)|}\n\\ge c_p \\p_t (r \\circ \\gamma)(s) \\ge c_p.\n\\end{equation*}\n\nLet $\\epsilon > 0$. \nThere is $\\delta(x) > 0$ such that \n\\begin{equation}\n\\label{eq:r_derivative_approximation}\n|r(y) - r(x) - D r(x) \\cdot (y - x)| \\le c_p \\epsilon |y - x|, \\quad y \\in B(x, \\delta(x)),\n\\end{equation}\nand $B(x, \\delta(x)) \\subset B$.\nHere $B(x, \\delta)$ is the open Euclidean ball with center $x$ and radius $\\delta$.\n\nThe sets $B(x, \\delta(x)\/5)$, $x \\in Z_p$, form an open cover for $Z_p$,\nand as $Z_p$ is second countable, \nthere is $(x_j)_{j=1}^\\infty \\subset Z_p$ such that the sets \n\\begin{equation*}\nB_j' := B(x_j, \\delta(x_j)\/5)\n\\end{equation*}\nform an open cover for $Z_p$.\nBy Vitali covering lemma there is an index set $J \\subset \\N$ such that\nthe sets $B_j'$, $j \\in J$, are disjoint and the sets\n\\begin{equation*}\nB_j := B(x_j, \\delta(x_j)), \\quad j \\in J,\n\\end{equation*}\nform an open cover for $Z_p$.\n\nWe denote \n\\begin{equation*}\nv_j := \\frac{D r(x_j)}{|D r(x_j)|}, \\quad \\delta_j := \\delta(x_j).\n\\end{equation*}\nIf $y \\in Z_p \\cap B_j$, then $r(y) = 0 = r(x_j)$ and by (\\ref{eq:r_derivative_approximation})\n\\begin{equation*}\n|v_j \\cdot (y - x_j)| \\le \\frac{1}{|D r(x_j)|} c_p \\epsilon |y - x_j| \\le \\epsilon \\delta_j.\n\\end{equation*}\n\nWe denote by $\\alpha_m$ the volume of the open Euclidean unit ball in $\\R^m$\nand by $V$ the Lebesgue measure on $B$.\nLet $j \\in J$.\nUsing a translation and a rotation we get such \ncoordinates that $x_j = 0$ and $v_j = (1, 0, \\dots, 0)$.\nIn these coordinates \n\\begin{equation*}\nZ_p \\cap B_j \\subset \\{ (y^1, y') \\in \\R \\times \\R^{n-1};\\ |y^1| \\le \\epsilon \\delta_j, |y'| \\le \\delta_j \\}.\n\\end{equation*}\nHence $V(Z_p \\cap B_j) \\le 2 \\epsilon \\delta_j \\alpha_{n-1} \\delta_j^{n-1}$.\nParticularly,\n\\begin{equation*}\nV(Z_p \\cap B_j) \n\\le \\epsilon \\frac{2 \\alpha_{n-1}}{\\alpha_{n}} V(B_j) \n= \\epsilon c_n V(B_j'),\n\\end{equation*}\nwhere $c_n := 2 \\cdot 5^n \\alpha_{n-1} \/ \\alpha_{n}$.\nThen\n\\begin{align*}\nV(\\phi((Z \\setminus A) \\cap U_p))\n&= V(Z_p) = V(Z_p \\cap \\bigcup_{j \\in J} B_j) \n\\le \\sum_{j \\in J} V(Z_p \\cap B_j)\n\\\\&\\le \\epsilon c_n \\sum_{j \\in J} V(B_j') \n= \\epsilon c_n V(\\bigcup_{j \\in J} 
B_j') \n\\le \\epsilon c_n V(B).\n\\end{align*}\nAs $\\epsilon > 0$ is arbitrary, $V(\\phi((Z \\setminus A) \\cap U_p)) = 0$ and the claim is proved.\n\\end{proof}\n\n\\section{Approximately constant wave fields on a domain of influence}\n\\label{sec:regularization}\n\n\\begin{lemma}\n\\label{lem:cross_term}\nLet $f \\in L^2((0, 2T) \\times \\p M)$. Then the equation (\\ref{eq:inner_product_with_1}) holds.\n\\end{lemma}\n\\begin{proof}\nThe map $h \\mapsto u^h(T)$ is bounded $L^2((0, 2T) \\times \\p M) \\to L^2(M)$, see e.g. \\cite{LaTr}.\nThus it is enough to prove the equation (\\ref{eq:inner_product_with_1}) for $f \\in C_c^\\infty( (0, 2T) \\times \\p M)$.\nLet us denote\n\\begin{equation*}\nv(t) := (u^f(t), 1)_{L^2(M; dV_\\mu)}.\n\\end{equation*}\nAs $a(x,D_x) 1 = 0$ and $b(x,D_x)1 = 0$,\nwe may integrate by parts\n\\begin{align*}\n\\p_t^2 v(t) &= - (a(x, D_x) u^f(t), 1)_{L^2(M; dV_\\mu)}\n\\\\&= - \\ll( (a(x, D_x) u^f(t), 1)_{L^2(M; dV_\\mu)} - (u^f(t), a(x, D_x) 1)_{L^2(M; dV_\\mu)} \\rr)\n\\\\&= (b(x, D_x) u^f(t), 1)_{L^2(\\p M; dS_g)} - (u^f(t), b(x, D_x) 1)_{L^2(\\p M; dS_g)}\n\\\\&= (f(t), 1)_{L^2(\\p M; dS_g)}.\n\\end{align*}\nAs $\\p_t^j v(0) = 0$ for $j=0,1$,\n\\begin{align*}\nv(T) &= \\int_0^T \\int_0^t \\int_{\\p M} f(s,x) dS_g(x) ds dt\n\\\\&= \\int_0^{2T} \\int_{\\p M} 1_{(0,T)}(t) \\int_0^t f(s,x) ds dS_g(x) dt.\n\\end{align*}\n\\end{proof}\n\nThe proof of Theorem \\ref{thm:minimization_on_subspace} \nis similar to the proof of the corresponding result in \\cite{ITRC}.\nWe give the proof for the sake of completeness.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:minimization_on_subspace}.]\nWe define\n\\begin{equation*}\nE(f) := (f, Kf) - 2 (If, 1) + \\alpha \\norm{f}^2.\n\\end{equation*}\nThen \n\\begin{equation}\n\\label{eq:energy_functional}\nE(f) = \\norm{u^{f}(T) - 1}_{L^2(M; dV_\\mu)}^2 - \\norm{1}_{L^2(M; dV_\\mu)}^2 + \\alpha \\norm{f}^2.\n\\end{equation}\nLet $(f_j)_{j=1}^\\infty \\subset S$ be such that \n\\begin{equation*}\n\\lim_{j \\to \\infty} E(f_j) = \\inf_{f \\in S} E(f).\n\\end{equation*}\nThen \n\\begin{equation*}\n\\alpha \\norm{f_j}^2 \\le E(f_j) + \\norm{1}_{L^2(M; dV_\\mu)}^2,\n\\end{equation*}\nand $(f_j)_{j=1}^\\infty$ is bounded in $S$.\nAs $S$ is a Hilbert space,\nthere is a subsequence of $(f_j)_{j=1}^\\infty$ converging weakly in $S$.\nLet us denote the limit by $f_\\infty \\in S$ and the subsequence still by $(f_j)_{j=1}^\\infty$.\n\nThe map $h \\mapsto u^h(T)$ is bounded\n\\begin{equation*}\nL^2( (0, 2T) \\times \\p M) \\to H^{5\/6 - \\epsilon}(M)\n\\end{equation*}\nfor $\\epsilon > 0$, see \\cite{LaTr}.\nHence $h \\mapsto u^h(T)$ is a compact operator\n\\begin{equation*}\nL^2( (0, 2T) \\times \\p M) \\to L^2(M),\n\\end{equation*}\nand $u^{f_j}(T) \\to u^{f_\\infty}(T)$ in $L^2(M)$ as $j \\to \\infty$.\nMoreover, the weak convergence implies \n\\begin{equation*}\n\\norm{f_\\infty} \\le \\liminf_{j \\to \\infty} \\norm{f_j}.\n\\end{equation*}\nHence\n\\begin{align*}\nE(f_\\infty) &= \\lim_{j \\to \\infty} \\norm{u^{f_j}(T) - 1}_{L^2(M; dV_\\mu)}^2 - \\norm{1}_{L^2(M; dV_\\mu)}^2 + \\alpha \\norm{f_\\infty}^2\n\\\\&\\le \\lim_{j \\to \\infty} \\norm{u^{f_j}(T) - 1}_{L^2(M; dV_\\mu)}^2 - \\norm{1}_{L^2(M; dV_\\mu)}^2 + \\alpha \n\\liminf_{j \\to \\infty} \\norm{f_j}^2\n\\\\&= \\liminf_{j \\to \\infty} E(f_j) = \\inf_{f \\in S} E(f),\n\\end{align*}\nand $f_\\infty \\in S$ is a minimizer.\n\nLet $f_\\alpha$ be a minimizer and $h \\in S$. 
\nBy orthogonality of the projection $P$ and identity (\\ref{eq:inner_products}),\nit is clear that $PKP$ is self-adjoint and positive semidefinite.\nDenote by $D_h$ the Fr\\'echet derivative to direction $h$. \nAs $f_\\alpha = Pf_\\alpha$ and $h = Ph$\n\\begin{equation*}\n0 = D_h E(f_\\alpha) = 2 (h, PKP f_\\alpha) - 2 (h, PI^+ 1) + 2 \\alpha (h, f_\\alpha).\n\\end{equation*}\nHence $f_\\alpha$ satisfies (\\ref{eq:normal}).\nAs $PKP$ is positive semidefinite, $PKP + \\alpha$ is positive definite\nand solution of (\\ref{eq:normal}) is unique.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:approximation_by_simple}\nLet $\\Gamma \\subset \\p M$ be open, $\\tau \\in C(\\bar \\Gamma)$ and $\\epsilon > 0$.\nThen there is a simple function\n\\begin{equation*}\n\\tau_\\epsilon(y) = \\sum_{j=1}^N T_j 1_{\\Gamma_j}(y), \\quad y \\in \\p M,\n\\end{equation*}\nwhere $N \\in \\N$, $T_j \\in \\R$ and $\\Gamma_j \\subset \\Gamma$ are open,\nsuch that \n\\begin{align*}\n\\tau - \\epsilon &< \\tau_\\epsilon \\quad \\text{almost everywhere on $\\Gamma$ and}\n\\\\\\tau_\\epsilon &< \\tau \\quad \\text{on $\\bar \\Gamma$}.\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nAs $\\p M$ is compact, there is a finite set of coordinate charts covering $\\p M$.\nUsing partition of unity, we see that it is enough to prove the claim in the case \nwhen $\\Gamma \\subset \\R^{n-1}$ is an open set. \nBut then $\\tau$ is a continuous function on a compact set $\\overline \\Gamma \\subset \\R^{n-1}$,\nand it is clear that there is a simple function with the required properties. \n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:measures_by_approximation}\nLet $\\Gamma \\subset \\p M$ be open, $\\tau \\in C(\\bar \\Gamma)$ and let $\\tau_\\epsilon$, $\\epsilon > 0$, satisfy\n\\begin{align*}\n\\tau - \\epsilon &< \\tau_\\epsilon \\quad \\text{almost everywhere on $\\Gamma$ and}\n\\\\\\tau_\\epsilon &< \\tau \\quad \\text{on $\\bar \\Gamma$}.\n\\end{align*}\nThen\n\\begin{equation*}\n\\lim_{\\epsilon \\to 0} m(M(\\Gamma, \\tau_\\epsilon)) = m(M(\\Gamma, \\tau)).\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nLet $\\epsilon > 0$ and denote by $N \\subset \\Gamma$ \nthe set of measure zero where $\\tau - \\epsilon \\ge \\tau_\\epsilon$ as functions on $\\Gamma$.\nLet us show that $M^0(\\Gamma, \\tau - \\epsilon) \\subset M(\\Gamma, \\tau_\\epsilon)$.\nLet $x \\in M^0(\\Gamma, \\tau - \\epsilon)$. 
Then there is $y_0 \\in \\Gamma$\nsuch that \n\\begin{equation*}\nd(x,y_0) < \\tau(y_0) - \\epsilon.\n\\end{equation*}\nAs $\\tau$ and the function $y \\mapsto d(x,y)$ are continuous\nand $\\Gamma \\setminus N$ is dense in $\\Gamma$, there is $y \\in \\Gamma \\setminus N$ such that\n\\begin{equation*}\nd(x,y) < \\tau(y) - \\epsilon < \\tau_\\epsilon(y).\n\\end{equation*}\nHence $x \\in M(\\Gamma, \\tau_\\epsilon)$.\nA similar argument shows that $M(\\Gamma, \\tau_\\epsilon) \\subset M^0(\\Gamma, \\tau)$.\n\nClearly $M^0(\\Gamma, \\tau - \\epsilon_1) \\subset M^0(\\Gamma, \\tau - \\epsilon_2)$ for $\\epsilon_1 \\ge \\epsilon_2 > 0$,\nand\n\\begin{equation*}\n\\bigcup_{\\epsilon > 0} M^0(\\Gamma, \\tau - \\epsilon) = M^0(\\Gamma, \\tau).\n\\end{equation*}\nHence $m(M^0(\\Gamma, \\tau - \\epsilon)) \\to m(M^0(\\Gamma, \\tau))$ as $\\epsilon \\to 0$, and\n\\begin{align*}\n0 \\le m(M^0(\\Gamma, \\tau)) - m(M(\\Gamma, \\tau_\\epsilon)) &\\le m(M^0(\\Gamma, \\tau)) - m(M^0(\\Gamma, \\tau - \\epsilon))\n\\\\&\\to 0, \\quad \\text{as $\\epsilon \\to 0$}.\n\\end{align*}\nMoreover, by Lemmas \\ref{lem:characterization_of_domi} and \\ref{lem:level_set_is_null}\n\\begin{equation*}\nm(M(\\Gamma, \\tau)) = m(M^0(\\Gamma, \\tau)) = \\lim_{\\epsilon \\to 0} m(M(\\Gamma, \\tau_\\epsilon)).\n\\end{equation*}\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:indicator_functions}.]\nWe may assume without loss of generality that $\\tau \\le T$, as\nwe may replace $\\tau$ by $\\tau \\wedge T$ in what follows.\nLet us denote\n\\begin{equation*}\nS(\\Gamma, \\tau) := \\{ f \\in L^2((0,2T) \\times \\p M);\\ \\text{$\\supp(f)$ satisfies (\\ref{eq:source_supp_condition})}\\}.\n\\end{equation*}\nBy the finite speed of propagation for the wave equation,\nwe have that $\\supp(u^f(T)) \\subset M(\\Gamma, \\tau)$ whenever $f \\in S(\\Gamma, \\tau)$.\nHence for $f \\in S(\\Gamma, \\tau)$,\n\\begin{align}\n\\label{eq:splitting_the_difference_1}\n\\norm{u^f(T) - 1}_{L^2(M; dV_\\mu)}^2\n&= \\int_{M(\\Gamma, \\tau)} (u^f(T) - 1)^2 dV_\\mu + \\int_{M \\setminus M(\\Gamma, \\tau)} 1 dV_\\mu\n\\\\\\nonumber&= \\norm{u^f(T) - 1_{M(\\Gamma, \\tau)}}_{L^2(M; dV_\\mu)}^2 \n + m(M \\setminus M(\\Gamma, \\tau)).\n\\end{align}\n\nLet $\\epsilon > 0$.\nBy Lemmas \\ref{lem:approximation_by_simple} and \\ref{lem:measures_by_approximation}\nthere is a simple function $\\tau_\\delta$ satisfying\n\\begin{equation*}\n\\tau_\\delta < \\tau, \\quad m(M(\\Gamma, \\tau)) - m(M(\\Gamma, \\tau_\\delta)) < \\epsilon.\n\\end{equation*}\nBy the discussion in the beginning of Section \\ref{sec:domains_of_influence}, the set \n\\begin{equation*}\n\\{ u^f(T) \\in L^2(M(\\Gamma, \\tau_\\delta));\\ f \\in S(\\Gamma, \\tau_\\delta) \\}\n\\end{equation*}\nis dense in $L^2(M(\\Gamma, \\tau_\\delta))$.\nThus there is $f \\in S(\\Gamma, \\tau_\\delta) \\subset S(\\Gamma, \\tau)$ such that\n\\begin{equation*}\n\\norm{u^f(T) - 1_{M(\\Gamma, \\tau_\\delta)}}_{L^2(M; dV_\\mu)}^2 \\le \\epsilon.\n\\end{equation*}\nThen \n\\begin{equation*}\n\\norm{u^f(T) - 1_{M(\\Gamma, \\tau)}}_{L^2(M; dV_\\mu)}^2 \\le \n\\epsilon + \\norm{1_{M(\\Gamma, \\tau_\\delta)} - 1_{M(\\Gamma, \\tau)}}_{L^2(M; dV_\\mu)}^2\n\\le 2 \\epsilon.\n\\end{equation*}\n\nMoreover, $E(f_\\alpha) \\le E(f)$ and equations (\\ref{eq:splitting_the_difference_1}) and (\\ref{eq:energy_functional}) give\n\\begin{align*}\n&\\norm{u^{f_\\alpha}(T) - 1_{M(\\Gamma, \\tau)}}_{L^2(M; dV_\\mu)}^2 \n\\\\&\\quad= \\norm{u^{f_\\alpha}(T) - 1}_{L^2(M; dV_\\mu)}^2 - m(M \\setminus M(\\Gamma, 
\\tau))\n\\\\&\\quad\\le E(f_\\alpha) + \\norm{1}_{L^2(M; dV_\\mu)}^2 - m(M \\setminus M(\\Gamma, \\tau))\n\\\\&\\quad\\le \\norm{u^f(T) - 1}_{L^2(M; dV_\\mu)}^2 - m(M \\setminus M(\\Gamma, \\tau)) + \\alpha \\norm{f}^2\n\\\\&\\quad= \\norm{u^f(T) - 1_{M(\\Gamma, \\tau)}}_{L^2(M; dV_\\mu)}^2 + \\alpha \\norm{f}^2 \n\\le 2 \\epsilon + \\alpha \\norm{f}^2.\n\\end{align*}\nWe may choose first small $\\epsilon > 0$ and then small $\\alpha > 0$ to \nget $u^{f_\\alpha}(T)$ arbitrarily close to $1_{M(\\Gamma, \\tau)}$ in $L^2(M)$. \n\\end{proof}\n\n\\section{The boundary distance functions as maximal elements}\n\\label{sec:maximal_elements}\n\n\\def\\tilde{\\widetilde}\nWe denote $M^0(\\tau) := M^0(\\p M, \\tau)$, for $\\tau \\in C(\\p M)$, and define\n\\begin{equation*}\n\\tilde Q := \\{ \\tau \\in C(M);\\ M \\setminus M^0(\\tau) \\ne \\emptyset \\}.\n\\end{equation*}\n\n\\begin{lemma}\n\\label{lem:maximal_elements}\nIf $\\tau$ is a maximal element of $\\tilde Q$, then $\\tau = r_x$ \nfor some $x \\in M$.\nMoreover, if the manifold $(M,g)$ satisfies (G),\nthen $R(M)$ is the set of the maximal elements of $\\tilde Q$.\n\\end{lemma}\n\\begin{proof}\nLet $x \\in M$ and $\\tau \\in C(\\p M)$. Then $\\tau \\le r_x$ if and only if $x \\notin M^0(\\tau)$.\nIn other words,\n\\begin{equation}\n\\label{eq:characterization_of_Q}\n\\tilde Q = \\{ \\tau \\in C(\\p M);\\ \\text{there is $x \\in M$ such that $\\tau \\le r_x$} \\}.\n\\end{equation}\nMoreover, $r_x \\in \\tilde Q$ for all $x \\in M$.\nIndeed, $r_x$ is continuous and trivially $r_x \\le r_x$.\n\nSuppose that $\\tau$ is a maximal element of $\\tilde Q$. \nBy (\\ref{eq:characterization_of_Q}) there is $x \\in M$ such that $\\tau \\le r_x$,\nbut $r_x \\in \\tilde Q$ and maximality of $\\tau$ yields $\\tau = r_x$\n\nLet us now suppose that $(M,g)$ satisfies (G) and show that $r_x$ \nis a maximal element of $\\tilde Q$.\nSuppose that $\\tau \\in \\tilde Q$ satisfies $r_x \\le \\tau$.\nBy (\\ref{eq:characterization_of_Q}) there is $x' \\in M$ such that $\\tau \\le r_x'$.\nHence \n\\begin{equation*}\nr_x \\le \\tau \\le r_{x'},\n\\end{equation*}\nand (G) yields that $x = x'$. \nThus $\\tau = r_x$ and $r_x$ is a maximal element of $\\tilde Q$.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:closure}\nThe set $\\tilde Q$ is the closure of $Q(M)$ in $C(M)$.\n\\end{lemma}\n\\begin{proof}\nLet us first show that $\\tilde Q$ is closed.\nLet $(\\tau_j)_{j=1}^\\infty \\subset \\tilde Q$ satisfy $\\tau_j \\to \\tau$ in $C(\\p M)$ as $j \\to \\infty$.\nBy (\\ref{eq:characterization_of_Q}) there is $(x_j)_{j=1}^\\infty \\subset M$ such that $\\tau_j \\le r_{x_j}$.\nAs $M$ is compact there is a converging subsequence $(x_{j_k})_{k=1}^\\infty \\subset (x_j)_{j=1}^\\infty$.\nLet us denote the limit by $x$, that is, $x_{j_k} \\to x$ as $k \\to \\infty$.\nBy continuity of the distance function,\n\\begin{equation*}\n\\tau(y) = \\lim_{k \\to \\infty} \\tau_{j_k}(y) \\le \\lim_{k \\to \\infty} r_{x_{j_k}}(y) = r_x(y), \\quad y \\in \\p M.\n\\end{equation*}\nHence $\\tau \\in \\tilde Q$ and $\\tilde Q$ is closed.\n\nClearly $Q(M) \\subset \\tilde Q$ and it is enough to show that $Q(M)$ is dense in $\\tilde Q$.\nSuppose that $\\tau \\in \\tilde Q$. Then there is $x_0 \\in M$ such that $\\tau \\le r_{x_0}$.\nLet $\\epsilon > 0$. 
As $M \\times \\p M$ is compact and the distance function is continuous,\nthere is $r > 0$ such that\n\\begin{equation*}\n\\sup_{y \\in \\p M} |d(x, y) - d(x_0, y)| < \\epsilon, \\quad \\text{when $d(x, x_0) < r$ and $x \\in M$}.\n\\end{equation*}\nHence $\\tau(y) - \\epsilon \\le r_{x_0}(y) - \\epsilon < r_x(y)$ for all $y \\in \\p M$ and all $x \\in B(x_0, r)$.\nIn other words, \n\\begin{equation*}\nB(x_0, r) \\subset (M \\setminus M(\\tau - \\epsilon)),\n\\end{equation*}\nand this yields that $\\tau - \\epsilon \\in Q(M)$. \nFunctions $\\tau - \\epsilon$ converge to $\\tau$ in $C(\\p M)$ as $\\epsilon \\to 0$. Thus $\\tau$ is in the closure of $A$.\n\\end{proof}\n\nLemmas \\ref{lem:maximal_elements} and \\ref{lem:closure} together prove Theorem \\ref{thm:maximal_elements}.\nMoreover, (\\ref{eq:characterization_of_Q})\nyields the equation (\\ref{eq:semilattice_QM_intro})\nin the introduction.\n\n\\begin{lemma}\n\\label{lem:simple_manifold}\nIf $(M,g)$ is simple or the closed half sphere, then (G) holds.\n\\end{lemma}\n\\begin{proof}\nLet $x_1, x_2 \\in M$ satisfy $x_1 \\ne x_2$, and let us show that $r_{x_1} \\nleq r_{x_2}$.\nFirst, if $x_2 \\in \\p M$ then $r_{x_1}(x_2) > 0 = r_{x_2}(x_2)$.\nSecond, if $x_2 \\in M^{\\text{int}}$, then there\nis the unique unit speed geodesic $\\gamma$ and the unique point $y \\in \\p M$ such that \n$\\gamma(0) = x_1$, $\\gamma(s) = x_2$ and $\\gamma(s') = y$,\nwhere $0 < s < s'$.\nAs $\\gamma$ is a shortest path\nfrom $x_1$ to $y$ (the shortest path if $(M,g)$ is simple) and the shortest path from $x_2$ to $y$,\n\\begin{equation*}\nr_{x_1}(y) = s' > s = r_{x_2}(y).\n\\end{equation*}\n\\end{proof}\n\nThe closed half sphere is not simple, since for a point on the boundary the corresponding \nantipodal point is a conjugate point.\nHence the manifolds satisfying (G) form a strictly larger class than the simple manifolds.\n\n\\vspace{1cm}\n{\\em Acknowledgements.}\nThe author would like to thank Y. Kurylev for useful discussions. \nThe research was partly supported by Finnish Centre of Excellence in Inverse Problems Research,\nAcademy of Finland COE 213476,\nand partly by Finnish Graduate School in Computational Sciences.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAn old question of Lusin asked whether there exists a Borel\nfunction which cannot be decomposed into countably many\ncontinuous functions. By now several examples have been\ngiven, by Keldi\\v{s}, Adyan and Novikov among others. A\nparticularly simple example, the function\n$P:(\\omega+1)^\\omega\\rightarrow\\omega^\\omega$, has been found by\nPawlikowski (cf. \\cite{CMPS}). By definition,\n\\begin{displaymath}\n P(x)(n) = \\left\\{ \n \\begin{array}{ll}\n x(n)+1 & \\mbox{if}\\quad x(n)<\\omega,\\\\\n 0 & \\mbox{if}\\quad x(n)=\\omega.\n \\end{array} \\right.\n\\end{displaymath}\nIt is proved in \\cite{CMPS} that if $A\\subseteq(\\omega+1)^\\omega$ is such\nthat $P\\!\\!\\upharpoonright\\!\\! A$ is continuous then\n$P[A]\\subseteq\\omega^\\omega$ is nowhere dense. 
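\nTo illustrate the definition, note for instance that\n\\begin{displaymath}\n P((3, \\omega, 0, \\omega, \\omega, \\dots)) = (4, 0, 1, 0, 0, \\dots),\n\\end{displaymath}\nand that if $x(n_0)=\\omega$ while $x_k$ agrees with $x$ except that $x_k(n_0)=k$, then\n$x_k\\rightarrow x$ in $(\\omega+1)^\\omega$ but $P(x_k)(n_0)=k+1$ does not converge to $P(x)(n_0)=0$,\nso $P$ is discontinuous at every point attaining the value $\\omega$.\n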
Since $P$ is\na surjection, it is not $\\sigma$-continuous.\n\nIn \\cite{Sol} Solecki showed that the above function is, in\na sense, the only such example, at least among Baire class 1\nfunctions (in other words, it is the initial object in a\ncertain category).\n\n\\begin{theorem}[Solecki, \\cite{Sol}]\\label{pmin}\n For any Baire class 1 function $f:X\\rightarrow Y$, where\n $X,Y$ are Polish spaces, either $f$ is $\\sigma$-continuous\n or there exist topological embeddings $\\varphi$ and $\\psi$\n such that the following diagram commutes:\n $$\n \\begin{CD}\n \\omega^\\omega @ >\\psi >> Y\\\\\n @AAP A @AAfA\\\\\n (\\omega+1)^\\omega @> \\varphi >> X\n \\end{CD}\n $$\n\\end{theorem}\n\nIn \\cite{Zpl:DSTDF} Zapletal generalized Solecki's dichotomy\nto all Borel functions by proving the following theorem.\n\n\\begin{theorem}[Zapletal, \\cite{Zpl:DSTDF}]\n If $f:X\\rightarrow Y$ is a Borel function which is not\n $\\sigma$-continuous then there is a compact set\n $C\\subseteq X$ such that $f\\!\\!\\upharpoonright\\!\\! C$ is not\n $\\sigma$-continuous and of Baire class 1.\n\\end{theorem}\n\nIn this paper we give a new proof of the above dichotomy for\nall Borel functions, which is direct, shorter and more\ngeneral than the original proof from \\cite{Sol}.\n\n\\section{Notation}\n\nWe say that a Borel function $f:X\\rightarrow Y$, where $X,Y$\nare Polish spaces, is $\\sigma$-continuous if there exist a\ncountable cover of the space $X=\\bigcup_n X_n$ (with\narbitrary sets $X_n$) such that $f\\!\\!\\upharpoonright\\!\\! X_n$ is continuous\nfor each $n$. It follows from the Kuratowski extension\ntheorem that we may require that the sets $X_n$ be Borel. If\n$f$ is a Borel function which is not $\\sigma$-continuous\nthen the family of sets on which it is $\\sigma$-continuous\nis a proper $\\sigma$-ideal in $X$. We denote this\n$\\sigma$-ideal by $I_f$.\n\nIn a metric space $(X,d)$ for $A,B\\subseteq X$ let us denote\nby $h(A,B)$ the Hausdorff distance between $A$ and $B$.\n\nThe spaces $(\\omega+1)^\\omega$ and $\\oplus^n$ for $n<\\omega$ are endowed\nwith the product topology of order topologies on $\\omega+1$.\n\n\\section{The Zapletal's game}\n\nIn \\cite{Zpl:DSTDF} Zapletal introduced a two-player game,\nwhich turnes out to be very useful in examining\n$\\sigma$-continuity of Borel functions. Let\n$B\\subseteq\\omega^\\omega$ be a Borel set and $f:B\\rightarrow\n2^\\omega$ be a Borel function. Let\n$\\rho:\\omega\\rightarrow\\omega\\times 2^{<\\omega}\\times\\omega$\nbe a bijection. The game $G_f(B)$ is played by Adam and Eve.\nThey take turns playing natural numbers. In his $n$-th move,\nAdam picks $x_n\\in\\omega$. In her $n$-th move, Eve chooses\n$y_n\\in 2$. At the end of the game we have\n$x\\in\\omega^\\omega$ and $y\\in 2^\\omega$ formed by the\nnumbers picked by Adam and Eve, respectively. Next, $y\\in\n2^\\omega$ is used to define a sequence of partial continuous\nfunctions (with domains of type $G_\\delta$ in\n$\\omega^\\omega$) in the following way. For $n<\\omega$ let\n$f_n$ be a partial function from $\\omega^\\omega$ to\n$2^\\omega$ such that for $t\\in\\omega^\\omega$ and $\\sigma\\in\n2^{<\\omega}$\n$$ f_n(t)\\supseteq\\sigma\\quad\\iff\\quad\\exists\nk\\in\\omega\\,\\,\\, y(\\rho(n,\\sigma,k))=1$$ and\n$\\mbox{dom}(f_n)=\\{t\\in\\omega^\\omega: \\forall\nn<\\omega\\,\\exists!\\sigma\\in 2^n\\,\\,\nf_n(t)\\supseteq\\sigma\\}$. 
Eve wins the game $G_f(B)$ if\n$x\\not\\in B$ or $\\exists n\\, f(x)=f_n(x).$ Otherwise Adam\nwins the game.\n\nIt is easy to see that if $f$ is a Borel function then $G_f$\nis a Borel game. The key feature of the game $G_f$ is that\nit detects $\\sigma$-continuity of the function $f$.\n\n\\begin{theorem}[Zapletal,\\cite{Zpl:DSTDF}]\n For $B\\subseteq\\omega^\\omega$ and $f:B\\rightarrow\n 2^\\omega$ Eve has a winning strategy in the game $G_f(B)$\n if and only if $f$ is $\\sigma$-continuous on $B$.\n\\end{theorem}\n\nNote that if Adam has a winning strategy then the image of\nhis strategy (treated as a continuous function from\n$2^\\omega$ to $B$) is a compact set on which $f$ is also not\n$\\sigma$-continuous. This observation and the Borel\ndeterminacy gives the following corollary.\n\n\\begin{corollary}[Zapletal,\\cite{Zpl:DSTDF}]\\label{cldense}\n If $B$ is a Borel set and $f:B\\rightarrow 2^\\omega$ is a\n Borel function which is not $\\sigma$-continuous then there\n is a compact set $C\\subseteq B$ such that $f\\!\\!\\upharpoonright\\!\\! C$ is\n also not $\\sigma$-continuous.\n\\end{corollary}\n\n\n\\section{Proof of the dichotomy}\n\nIn the statement of Theorem \\ref{pmin} both functions\n$\\varphi$ and $\\psi$ are to be topological embeddings.\nHowever, as we will see below, for the dichotomy it is\nenough that they both are injective, $\\varphi$ continuous\nand $\\psi$ open. We are going to prove first this version of\nthe dichotomy.\n\n\\begin{theorem}\\label{dichotomy}\n Let $X$ be a Polish space and $f:X\\rightarrow 2^\\omega$ be\n a Borel function. Then precisely one of the following\n conditions holds:\n \\begin{enumerate}\n \\item either $f$ is $\\sigma$-continuous\n \\item\\label{fact} or there are an open injection $\\psi$ and\n a continuous injection $\\varphi$ such that the following\n diagram commutes:\n $$\n \\begin{CD}\n \\omega^\\omega @ >\\psi >> 2^\\omega\\\\\n @AAP A @AAf A\\\\\n (\\omega+1)^\\omega @> \\varphi >> X\n \\end{CD}\n $$ \n \\end{enumerate}\n\\end{theorem}\n\\noindent Notice that compactness of $(\\omega+1)^\\omega$ implies that the\n$\\psi$ above must be a topological embedding.\n\\begin{proof}\n \n It is straightforward that (\\ref{fact}) implies that $f$\n is not $\\sigma$-continuous. Let us assume that $f$ is not\n $\\sigma$-continuous and prove that (\\ref{fact}) holds. By\n Corollary \\ref{cldense} we may assume that $X$ is compact.\n\n\n \\subsection*{Notation.}\n\n First we introduce some notation. For a fixed $n$ and\n $0\\leq k\\leq n$ let $S^n_k$ be the set of points in\n $\\oplus^n$ of Cantor-Bendixson rank $\\geq n-k$. For each\n $n<\\omega$ and $1\\leq k\\leq n$ let us pick a function\n $\\pi^n_k:S^n_k\\rightarrow S^n_{k-1}$ such that\n \\begin{itemize}\n \\item on $S^n_{k-1}$ $\\pi^n_k$ is the identity,\n \\item if $\\tau\\in S^n_k\\setminus S^n_{k-1}$ then we pick\n one $i\\in n$ such that $\\tau(i)<\\omega$ and $\\tau(i)$ is\n maximal such and define\n $$\\pi^n_k(\\tau)(i)=\\omega, \\quad\n \\pi^n_k(\\tau)(j)=\\tau(j) \\ \\ \\ \\mbox{for}\\ j\\not= i.$$\n \\end{itemize} \n This definition clearly depends on the choice of the index\n $i$ above. Note, however, that we may pick the functions\n $\\pi^n_k$ so that they are coherent, in the sense that for\n $\\tau\\in\\oplus^{n+1}$, unless $\\tau(n)$ is the biggest\n finite value of $\\tau$, we have\n $\\pi^{n+1}_{k+1}(\\tau)=\\pi^n_k(\\tau\\!\\!\\upharpoonright\\!\\!\n n)^\\smallfrown\\tau(n)$. 
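\nFor concreteness, consider the case $n=2$: then $S^2_0=\\{(\\omega,\\omega)\\}$, $S^2_1$ consists of\nthe points with at least one coordinate equal to $\\omega$, and $S^2_2=\\oplus^2$; for example\n\\begin{displaymath}\n \\pi^2_2((5,3))=(\\omega,3) \\quad\\mbox{and}\\quad \\pi^2_1((\\omega,3))=(\\omega,\\omega),\n\\end{displaymath}\nand indeed $\\pi^2_2((5,3))=\\pi^1_1((5))^\\smallfrown 3$, in agreement with the coherence requirement.\n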
In particular\n $\\pi^{n+1}_{k+1}(\\sigma^\\smallfrown\\omega)=\\pi^n_k(\\sigma)^\\smallfrown\\omega$\n for any $\\sigma\\in\\oplus^n$. The functions $\\pi^n_k$ will\n be called projections.\n\n \\begin{lemma}\n For each $n$ and $1\\leq k\\leq n$ the projection\n $\\pi^n_k: S^n_k\\rightarrow S^n_{k-1}$ is continuous.\n \\end{lemma}\n \\begin{proof}\n Note that any point in $S^n_k$ except\n $(\\omega,\\ldots,\\omega)$ ($k$ times $\\omega$) has a\n neighborhood in which projection is unambigous and hence\n continuous. But it is easy to see that at the point\n $(\\omega,\\ldots,\\omega)$ any projection is continuous.\n \\end{proof}\n\n For each $n<\\omega$ let us also introduce the function\n $r_n:\\oplus^n\\rightarrow\\oplus^{n}$ defined as\n $r_n(\\tau^\\smallfrown a)=\\tau^\\smallfrown\\omega$.\n\n To make the above notation more readable we will usually\n drop subscripts and superscripts in $\\pi^n_k$ and $r_n$.\n\n We pick a well-ordering $\\leq$ of $(\\omega+1)^{<\\omega}$ into type $\\omega$\n such that for each point $\\tau\\in(\\omega+1)^{<\\omega}$ all elements of the\n transitive closure of $\\tau$ with respect to $\\pi$, $r$\n and restrictions (i.e. functions of the form\n $\\oplus^n\\ni\\tau\\mapsto\\tau\\!\\!\\upharpoonright\\!\\! m\\in\\oplus^m$ for $m\\tau$, as can be seen in\n the first condition above. The last condition, as we will\n see later, will be used to guarantee ``continuity'' of the\n family of sets $X_\\tau$. For technical reasons we will\n also make sure that $X^n_\\sigma=(X^n_\\sigma)^*$.\n\n We are going to ensure disjointness of $C_\\tau$'s by\n satisfying the following conditions:\n \\begin{itemize}\n \\item $C_{\\tau^\\smallfrown a}\\subseteq C_\\tau$,\n \\item $C_{\\tau^\\smallfrown a}\\cap C_{\\tau^\\smallfrown\n b}=\\emptyset$ for $a\\not=b$.\n \\end{itemize}\n\n The fact that $\\mbox{diam}(X_\\tau)<1\\slash|\\tau|$ will follow\n from the following inductive conditions (recall that\n $\\pi(\\tau)\\leq\\tau$ for any $\\tau$):\n \\begin{itemize}\n \\item $\\mbox{diam}(X_\\tau)< 3\\,\\mbox{diam}(X_{\\pi(\\tau)})$,\n \\item $\\mbox{diam}(X_{\\tau^\\smallfrown\\omega}) <\n 1\\slash(3^{|\\tau|+1}(|\\tau|+1))$,\n \\end{itemize}\n because iterating projections in $\\oplus^n$ stabilizes\n before $n+1$ steps.\n\n The crucial feature of the sets $X_\\tau$ is that this\n family should be ``continuous''. Namely, we will require\n that if $\\tau$ and $\\pi(\\tau)$ occur by the $n$-th step\n then\n \\begin{equation}\\label{ineq}\n h(X^n_\\tau,X^n_{\\pi(\\tau)})<3^{|\\tau|}\\,d(\\tau,\\pi(\\tau))\n \\end{equation}\n\n This condition is the most diffucult. To fulfill it we\n will construct yet another kind of objects. Notice first\n that if $h(A,B)<\\varepsilon$ for two nonempty sets in $X$\n then there are two finite families (we will refer to them\n as to ``anchors'') $A_i$ and $B_i$ ($i\\in I_0$) of subsets\n of $A$ and $B$ respectively such that for any\n $A_i'\\subseteq A_i$, $B_i'\\subseteq B_i$ still\n $h(\\bigcup_i A_i',\\bigcup_i B_i')<\\varepsilon$. 
Similarly,\n if $h(A,B)<\\varepsilon$ and $C\\subseteq A$ then there\n exist a finite family $D_i$ ($i\\in I_0$) of subsets of $B$\n such that for any $D_i'\\subseteq D_i$ $h(\\bigcup_i D_i',\n C)<\\varepsilon$.\n\n At each step $n$ if $\\tau$ is the $n$-the element of $(\\omega+1)^{<\\omega}$\n we will additionally construct anchors \n \\begin{itemize}\n \\item for each pair $X^n_\\sigma$ and $X^n_{\\pi(\\sigma)}$\n such that $\\sigma,\\pi(\\sigma)\\leq\\tau$\n \\item and for each tripple\n $X^n_\\sigma,X^n_{\\pi(\\sigma)},X^n_{\\sigma^\\smallfrown\n a}$ such that $a\\in\\omega+1$, $\\pi(\\sigma^\\smallfrown\n a)\\subseteq\\pi(\\sigma)$ and\n $\\sigma,\\pi(\\sigma),\\sigma^\\smallfrown a\\leq\\tau$.\n \\end{itemize}\n \n\n \\subsection*{Completing the diagram.}\n\n As we now have a clear picture of what should be constructed\n let us argue that this is enough to finish the proof. For\n each $t\\in(\\omega+1)^\\omega$ the intersection $\\bigcap_n\n X_{t\\upharpoonright n}$ has precisely one point so let us\n define $\\varphi(t)$ to be this point. The other function,\n $\\psi$ is defined as $f\\circ\\varphi\\circ P^{-1}$. Let us\n check that this works. Both functions $\\psi$ and $\\varphi$\n are injective thanks to the disjoitness of the sets\n $C_\\tau$ and to the fact that $X_\\tau\\subseteq\n f^{-1}[C_\\tau]$. The function $\\psi$ is open because\n $C_\\tau$ are clopens.\n\n To see continuity of $\\varphi$ notice first that since the\n sets $X_\\tau$ have diameters vanishing to $0$, it suffices\n to check that $\\varphi$ is continuous on each $\\oplus^n$\n (which are treated as subsets of $(\\omega+1)^\\omega$ via the embedding\n $e: \\tau\\mapsto\\tau^\\smallfrown(\\omega,\\omega,\\ldots)$).\n Continuity on $\\oplus^n$ is checked inductively on the\n sets $S^n_k$ for $0\\leq k\\leq n$.\n\n The set $S^n_0$ consists of one point, so there is nothing\n to check. Suppose that $\\tau_i\\rightarrow \\tau$,\n $\\tau,\\tau_i\\in S^n_k,i\\in\\omega$. Then either the\n sequence is eventually constant or $\\tau\\in S^n_{k-1}$.\n Let us assume the latter. By the inductive assumption and\n continuity of projection\n $\\varphi(\\pi(\\tau_i))\\rightarrow\\varphi(\\tau)$. Now pick\n any $\\varepsilon>0$. Let $m$ be such that\n $\\mbox{diam}(X_\\sigma)<\\varepsilon$ for $\\sigma\\in\\oplus^m$ and\n $j\\in\\omega$ such that\n $d(\\tau_j,\\pi(\\tau_j))<3^{-m}\\varepsilon$. Let us write\n $\\rho^\\smallfrown\\omega^l$ for $\\rho$ extended by $l$ many\n $\\omega$'s. By (\\ref{ineq}) and coherence of projections\n we have\n $$h(X_{{\\tau_j}^\\smallfrown\\omega^{m-n}},X_{\\pi(\\tau_j)^\\smallfrown\\omega^{m-n}})<\\varepsilon,$$\n which implies that $\\varphi(\\tau_j)$ and\n $\\varphi(\\pi(\\tau_j))$ are closer than $3\\varepsilon$.\n This shows that $\\varphi(\\tau_j)\\rightarrow\\varphi(\\tau)$\n and proves continuity of $\\varphi$.\n\n \\subsection*{Key lemma.}\n\n Now we state the key lemma, which will be used to\n guarantee ``continuity'' of the family of sets $X_\\tau$.\n\n \\begin{lemma}\\label{limit}\n Let $X$ be a Borel set, $f:X\\rightarrow \\omega^\\omega$ a\n Borel, not $\\sigma$-continuous function. There exist\n a basic clopen $C_\\omega\\subseteq f[X]$\n \n and a compact set $X_\\omega\\subseteq\n f^{-1}[C_\\omega]$ such that\n \\begin{itemize}\n \\item $f\\!\\!\\upharpoonright\\!\\! 
X_\\omega$ is not $\\sigma$-continuous,\n \\item\n $X_\\omega\\subseteq\\mbox{cl}\\big((f^{-1}[\\omega^\\omega\\setminus\n C_\\omega])^*\\big)$.\n \\end{itemize}\n The compact set $X_\\omega$ can be chosen of arbitrarily\n small diameter.\n \\end{lemma}\n \\begin{proof} Without loss of generality assume that\n $f^{-1}[C]=(f^{-1}[C])^*$ for all clopen sets\n $C\\subseteq\\omega^\\omega$. Let us consider the\n following tree of open sets, indexed by $\\omega^{<\\omega}$\n $$U_\\tau=\\int\\big(f^{-1}[[\\tau]]\\big).$$ Let\n $G=\\bigcap_n\\bigcup_{|\\tau|=n} U_\\tau$ and\n $Z_\\tau=f^{-1}[[\\tau]]\\setminus U_\\tau$. Notice that\n $f\\!\\!\\upharpoonright\\!\\! G$ is continuous and since $X=G\\cup\\bigcup_\\tau\n Z_\\tau$ there is $\\tau\\in\\omega^{<\\omega}$ such that $Z_\\tau\\not\\in\n I_f$. Observe that\n $Z_\\tau\\subseteq\\mbox{cl}\\big(\\bigcup_{\\tau'\\not=\\tau,|\\tau'|=|\\tau|}\n f^{-1}[[\\tau']]\\big)$ because if an open set $U\\subseteq\n f^{-1}[[\\tau]]$ is disjoint from\n $\\bigcup_{\\tau'\\not=\\tau,|\\tau'|=|\\tau|}\n f^{-1}[[\\tau']]$ then $U\\subseteq U_\\tau$. Now put\n $C_\\omega=[\\tau]$ and pick any compact set with small\n diameter $X_\\omega\\subseteq Z_\\tau$ such that\n $X_\\omega\\not\\in I_f$.\n \\end{proof}\n\n \\subsection*{The construction.}\n\n We begin with $X_\\emptyset=X^0_\\emptyset=X$ and\n $C_\\emptyset=\\omega^\\omega$. Without loss of generality\n assume that $X=X^*$. Suppose we have done $n-1$ steps of\n the inductive construction up $\\tau\\in(\\omega+1)^{<\\omega}$. Let $|\\tau|=l$\n and $\\sigma=\\tau\\!\\!\\upharpoonright\\!\\!(l-1)$. There are three cases.\n\n \\textbf{Case 1.} The four points $\\tau$, $\\pi(\\tau)$,\n $r(\\tau)$ and $r(\\pi(\\tau))$ are equal. So\n $\\tau=(\\omega,\\ldots,\\omega)$ and $C_{\\tau\\upharpoonright\n n-1}$ and $X^{n-1}_\\sigma$ are already constructed. In\n this case we use Lemma \\ref{limit} to find a clopen set\n $C_\\tau$ and a compact set $X_\\tau\\subseteq\n X^{n-1}_\\sigma$ of diameter $<|\\tau|\\slash 3^{n+1}$ small\n enough so that no element of the anchors constructed so\n far is contained in $X_\\tau$. We put $X^n_\\tau=X_\\tau^*$,\n $X^n_\\sigma=(X^{n-1}_\\sigma\\setminus f^{-1}[C_\\tau])^*$\n and $X^n_\\rho=X^{n-1}_\\rho$ for other $\\rho<\\tau$. By the\n assertion of Lemma \\ref{limit} we still have\n $X^n_\\tau\\subseteq\\mbox{cl}(X^n_\\sigma)$. In this case we do not\n need to construct any new anchors.\n\n \\textbf{Case 2.} The two points $\\pi(\\tau)$ and $r(\\tau)$\n are equal but distinct from $\\tau$. Let\n $\\delta=d(\\tau,r(\\tau))$. Since\n $X^{n-1}_{r(\\tau)}\\subseteq \\mbox{cl} (X^{n-1}_\\sigma)$ by the\n inductive assumption, we may find finitely many sets\n $B_i\\subseteq X^{n-1}_\\sigma,i\\leq k$ such that\n \\begin{itemize}\n \\item $h(\\bigcup_i B'_i, X_{r(\\tau)})<\\delta$ for any\n $B'_i\\subseteq B_i$,\n \\item $B_i\\not\\in I_f$.\n \\end{itemize}\n The second condition follows from\n $X^{n-1}_\\sigma=(X^{n-1}_\\sigma)^*$. We may assume that\n for each clopen set $C\\subseteq 2^\\omega$ the set $B_i\\cap\n f^{-1}[C]$ is either empty or outside of the ideal $I_f$.\n\n We are going to find clopens $C_i\\subseteq C_\\sigma$, for\n $i\\leq k$ such that $C_i\\cap C_{r(\\tau)}=\\emptyset$ and\n then put $C_\\tau=\\bigcup_{i\\leq k} C_i$,\n $X^n_\\sigma=(X^{n-1}_\\sigma\\setminus\\bigcup_i\n f^{-1}[C_i])^*$ and find $X_\\tau\\subseteq\\bigcup_{i\\leq k}\n B_i\\cap f^{-1}[C_i]$. 
We will have to carefully define\n $X^n_{r(\\tau)}$ so that $X^n_{r(\\tau)}\\subseteq\n \\mbox{cl}(X^n_\\sigma)$.\n\n It is easy to see that for any $A\\subseteq X^{n-1}_\\sigma$\n $$X^{n-1}_{r(\\tau)}=X^{n-1}_{r(\\tau)}\\cap\\mbox{cl}\\big((X^{n-1}_\\sigma\\cap A)^*\\big)\\ \\cup\\\n X^{n-1}_{r(\\tau)}\\cap\\mbox{cl}\\big((X^{n-1}_\\sigma\\cap\n A^{c})^*\\big)$$ so (putting\n $A=f^{-1}[C_\\sigma\\cap[(m,0)]]$ for $m<\\omega$) we may\n inductively on $m$ pick binary sequences $\\beta^m_i\\in\n 2^m,i\\leq k$ such that $f^{-1}[[\\beta^m_i]]\\cap\n B_i\\not=\\emptyset$ and\n \\begin{displaymath}\n X^{n-1}_{r(\\tau)}\\cap\\mbox{cl}\\big((X^{n-1}_\\sigma\\setminus f^{-1}[\\bigcup_{i\\leq k} [\\beta^m_i]])^*\\big)\\not\\in I_f.\n \\end{displaymath}\n\n We are going to carry on this construction up to some\n $m<\\omega$ and put\n $X^n_\\rho=\\big(X^{n-1}_\\rho\\setminus\\bigcup_{i\\leq k}\n f^{-1}[[\\beta^m_i]]\\big)^*$ for\n $\\rho<\\tau,\\rho\\not\\supseteq r(\\tau)$ and\n $X^n_{\\rho}=\\big(X^{n-1}_{\\rho}\\cap\\mbox{cl}(X^n_\\sigma)\\big)^*$\n for $\\rho<\\tau,\\rho\\supseteq r(\\tau)$. We must, however,\n take care that this does not destroy the existing anchors.\n\n Since $f^{-1}[\\{x\\}]\\in I_f$ for any $x\\in 2^\\omega$ and\n there are only finitely many elements of the existing\n anchors, we may pick $m<\\omega$ and construct the\n sequences $\\beta^m_i$ so that for any element $A$ of an\n anchor ``below'' $X^{n-1}_{r(\\tau)}$ it is the case that\n $A\\cap\\mbox{cl}(f^{-1}[C_\\sigma\\setminus\\bigcup_{i\\leq k}\n [\\beta^m_i]]\\cap X^{n-1}_\\sigma)\\not\\in I_f$ and for any\n element $A$ of other anchors\n $A\\setminus\\big(\\bigcup_{i\\leq k}\n f^{-1}[\\beta^m_i]\\big)\\not\\in I_f$.\n\n Once we have constructed the sequences $\\beta^m_i$ for\n $i\\leq k$ we put $C_i=[\\beta^m_i]$ and $C_\\tau=\\bigcup_i\n [\\beta^m_i]$. Next we find $I_f$-positive compact sets\n $X_i$ inside $B_i\\cap f^{-1}[[\\beta_i]]$, each of diameter\n $<1\\slash(3^{n+1}|\\tau|)$.\n\n If $\\delta>\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})$ then we can pick\n one $X_i$ as $X_\\tau$ and then\n $h(X_\\tau,X^{n-1}_{\\pi(\\tau)})\\leq 3\\,\n h(X^{n-1}_\\sigma,X^{n-1}_{\\pi(\\sigma)})<3^{|\\tau|}\\,\\delta$.\n Otherwise, let $X_\\tau=\\bigcup_{i\\leq k}X_i$ and then\n $\\mbox{diam}(X_\\tau)<3\\,\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})\\leq\n 3\\,\\mbox{diam}(X_{\\pi(\\tau)})$. Define $X^n_\\tau=X_\\tau^*$.\n\n At this step we create anchors for the pair $X^n_\\tau$ and\n $X^n_{r(\\tau)}$ as well as for the tripples $X^n_\\sigma$,\n $X^n_\\rho$, $X^n_\\tau$ for $\\rho<\\tau$.\n\n \\textbf{Case 3.} The two points $\\pi(\\tau)$, $r(\\tau)$ are\n distinct. Let $\\delta=d(\\tau,\\pi(\\tau))$. By coherence of\n the projections $\\pi(\\tau)\\supseteq\\pi(\\sigma)$. By the\n inductive assumption we have\n $h(X^{n-1}_\\sigma,X^{n-1}_{\\pi(\\sigma)})<3^{|\\sigma|}\\,\\delta$.\n Using the existing anchor for the tripple\n $X^{n-1}_\\sigma$, $X^{n-1}_{\\pi(\\sigma)}$,\n $X^{n-1}_{\\pi(\\tau)}$ let us find finitely many sets\n $B_i,i\\leq k$ in $X_\\sigma$ such that\n \\begin{itemize}\n \\item $h(\\bigcup_i B'_i,\n X_{\\pi(\\tau)})<3^{|\\sigma|}\\,\\delta$ for any\n $B'_i\\subseteq B_i$,\n \\item $B_i\\not\\in I_f$.\n \\end{itemize}\n As before, we assume assume that for each clopen set\n $C\\subseteq 2^\\omega$ if $B_i\\cap f^{-1}[C]\\in I_f$ then\n it is empty. We have now two subcases, in analogy to the\n two previous cases.\n\n \\textbf{Subcase 3.1.} Suppose $\\tau=r(\\tau)$. 
Similarly as\n in Case 1, we use Lemma \\ref{limit} to find $X_i\\subseteq\n B_i$ and $C_i$ for $i\\leq k$. Put $C_\\tau=\\bigcup_{i\\leq\n k} C_i$. If $\\delta>\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})$ then we\n can pick one $X_i$ as $X_\\tau$ and then\n $h(X_\\tau,X^{n-1}_{\\pi(\\tau)})\\leq 3\\,\n h(X^{n-1}_\\sigma,X^{n-1}_{\\pi(\\sigma)})<3^{|\\tau|}\\,\\delta$.\n Otherwise, let $X_\\tau=\\bigcup_{i\\leq k}X_i$ and then\n $\\mbox{diam}(X_\\tau)<3\\,\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})\\leq\n 3\\,\\mbox{diam}(X_{\\pi(\\tau)})$. Again, similarly as in Case 1,\n we put $X^n_\\tau=(X^n_\\tau)^*$,\n $X^n_\\sigma=(X^{n-1}_\\sigma\\setminus f^{-1}[C_\\tau])^*$,\n $X^n_\\rho=X^{n-1}_\\rho$ for other $\\rho<\\tau$.\n\n\n \\textbf{Subcase 3.2.} Suppose $\\tau\\not=r(\\tau)$.\n Similarly as in Case 2, we find clopens $C_i$ in\n $\\omega^\\omega$ such that $X^{n-1}_{r(\\tau)}\\cap\n \\mbox{cl}\\big((f^{-1}[C_\\sigma\\setminus\\bigcup_{i\\leq\n k}C_i])^*\\big)\\not\\in I_f$ and no existing anchor is\n destroyed when we put\n $X^n_\\rho=(X^{n-1}_\\rho\\setminus\\bigcup_{i\\leq\n k}f^{-1}[C_i])^*$ for $\\rho<\\tau,\\rho\\not\\supseteq\n r(\\tau)$ and $X^n_\\rho=X^{n-1}_\\rho\\cap\\mbox{cl}(X^n_\\sigma)$\n for $\\rho<\\tau,\\rho\\supseteq r(\\tau)$.\n\n Next we find $I_f$-positive compact sets $X_i\\subseteq\n B_i\\cap f^{-1}[C_i]$ each of diameter\n $<1\\slash(3^{|\\tau|+1}|\\tau|)$. As previously, if\n $\\delta>\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})$ then we can pick one\n $X_i$ as $X_\\tau$ and then\n $h(X_\\tau,X^{n-1}_{\\pi(\\tau)})\\leq 3\\,\n h(X^{n-1}_\\sigma,X^{n-1}_{\\pi(\\sigma)})<3^{|\\tau|}\\,\\delta$.\n Otherwise, let $X_\\tau=\\bigcup_{i\\leq k}X_i$ and then\n $\\mbox{diam}(X_\\tau)<3\\,\\mbox{diam}(X^{n-1}_{\\pi(\\tau)})\\leq\n 3\\,\\mbox{diam}(X_{\\pi(\\tau)})$. Again, we put\n $X^n_\\tau=(X^n_\\tau)^*$,\n\n In Case 3 we construct the same anchors as in Case 2.\n\n This ends the construction and the entire proof.\n\\end{proof}\n\n\\begin{theorem}\n If $f:X\\rightarrow\\omega^\\omega$ is not\n $\\sigma$-continuous then there exist topological\n embeddings $\\varphi$ and $\\psi$ such that the following\n diagram commutes:\n $$\n \\begin{CD}\n \\omega^\\omega @ >\\psi >> \\omega^\\omega\\\\\n @AAP A @AAf A\\\\\n (\\omega+1)^\\omega @> \\varphi >> X\n \\end{CD}\n $$ \n\\end{theorem}\n\\begin{proof}\n By Theorem \\ref{dichotomy} we have $\\psi$ and $\\varphi$\n such that $\\psi$ is $1$-$1$ open. But as a Borel function\n it continuous on a dense $G_\\delta$ set\n $G\\subseteq\\omega^\\omega$. On the other hand by the\n properties of the function $P$ $X\\in I_P$ implies $P[X]$\n is meager. So $P^{-1}[G]\\not\\in I_P$ and the problem\n reduces to the restriction of the function $P$. This,\n however, has been proved in \\cite{MS.for} (Corollary 2).\n So we get the following diagram:\n $$\n \\begin{CD}\n \\omega^\\omega @>\\psi'>> G @>\\psi>> \\omega^\\omega\\\\\n @AAP A @AAP\\upharpoonright G A @AAf A\\\\\n (\\omega+1)^\\omega@>\\varphi'>> P^{-1}[G] @>\\varphi>> X\n \\end{CD}\n $$ \n which ends the proof.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}