diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzorlf" "b/data_all_eng_slimpj/shuffled/split2/finalzzorlf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzorlf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLow-mass galaxies provide a unique testing ground for predictions of the \ncold dark matter (CDM) paradigm for structure formation, since they generally \nhave a lower fraction of baryons than massive galaxies. These galaxies \nallow for a more direct \nmeasurement of the underlying dark matter potential, as the complicated effects\nof baryons on the dark matter are less pronounced. A particularly testable\nprediction of CDM is that all galaxies share a universal dark matter density \nprofile, characterized by a cuspy inner power law $\\rho \\propto r^{-\\alpha}$\nwhere $\\alpha=1$ (\\citealt{nav96}, hereafter \\citetalias{nav96}). Many authors\nhave investigated low-mass spirals and found, in contrast to the predictions\nof CDM, dark matter density profiles with a flat inner core of slope\n$\\alpha=0$ \\citep{bur95,per96,deb01,bla01,sim05}. This has launched the \ndebate known as the core\/cusp controversy. \n\nA number of other studies have investigated the mass content of dwarf \nspheroidal galaxies (dSphs). \n\\citet{gil07} give a comprehensive review of recent attempts\nto constrain the inner slope of their dark matter profiles with Jeans modeling \n(\\citealt{jea19}; \\citealt{bt87}, chapter 4).\nWhen significant, cored \nprofiles are preferred for all dSphs modeled (\\citealt{gil07}, and references\ntherein).\n\nThese results, however, are subject to a major caveat of Jeans modeling; it is \ncomplicated by the effect of stellar velocity anisotropy. \nModels fit to the line-of-sight component of the velocity dispersion, but\nanisotropy can severely\naffect the modeling of enclosed mass. Therefore, additional assumptions must\nbe made. The studies presented in \\citet{gil07}\nassume spherical symmetry and isotropy. \\citet{eva09} show that \na weakness of Jeans modeling is that given these assumptions combined with \nthe cored light profiles observed in dSphs, the Jeans equations do not allow\nsolutions with anything other than a cored dark matter profile.\n\n\\citet{wal09b} construct more sophisticated models and attempt to parameterize\nand fit for the anisotropy. As a result, preference for cored profiles \nbecomes \nmodel-dependent. They therefore are unable to put significant constraints\non the slope of the dark matter profile. 
This highlights the main problem\nwith Jeans modeling---it is highly dependent on the assumptions made.\n\nDistribution function models are more general than Jeans models,\nand progress has been made applying them \nto a number of dSph systems \\citep{kle02,wu07,amo11}.\nNevertheless these models still make strong assumptions such as\nspherical symmetry or isotropy, and models that do fit for anisotropy do so\nwithout using the information about the stellar orbits contained in the \nline-of-sight velocity distributions (LOSVDs).\n\nWe employ a fundamentally different modeling technique, known as \nSchwarzschild modeling, that allows us to use this information to \nself-consistently calculate both the enclosed mass and orbital anisotropy.\nSchwarzschild modeling is a mature industry, but one that has seldom\nbeen applied to the study of dSph galaxies (see \\citealt{val05}).\n\nIn addition to being well-suited for measuring dark matter profiles, \nSchwarzschild modeling has often been used to search\nfor black holes at the centers of galaxies.\nAnother unresolved issue relevant to the study of dSphs is whether they host\nan intermediate-mass black hole (IMBH). In a hierarchical merging scenario, \nsmaller galaxies are thought to be the\nbuilding blocks of larger galaxies. It is thought that all massive galaxies\nhost a supermassive black hole (SMBH) at their center, therefore it is logical\nto believe that their building blocks host smaller IMBHs. Evidence for these \nIMBHs is scarce, however, and dynamical detections are even scarcer. \nThe closest and lowest mass example of a dynamical \nmeasurement is an upper limit on the local group dSph NGC~205 of\n$M_{\\bullet}$$\\leq 2.2 \\times 10^4 \\, M_{\\odot}$ \\citep{val05}. Black holes in this\nmass range can provide constraints on theories of black hole growth and\nformation. The two most prominent competing theories of nuclear black hole\nformation are direct collapse of primordial gas \\citep{ume93,eis95,beg06}\nor accretion onto and mergers of seed black holes resulting from the collapse\nof the first stars \\citep{vol05}.\n\nIn this paper we present \naxisymmetric, three-integral Schwarzschild models in an effort to determine\nthe inner slope of the dark matter density profile as well as the orbit \nstructure\nof the Fornax dSph. We also investigate the possibility of a central\nIMBH. We assume a distance of 135 kpc to Fornax \\citep{ber00}.\n\n\n\\section{Data}\n\nTo construct dynamical models, we require a stellar light profile as well as\nstellar kinematics in the form of LOSVDs. We use published data for both\nthe photometry and kinematics, and describe the steps taken to convert this\ndata into useful input for our models.\n\n\\begin{figure}[t]\n\\includegraphics[width=9cm]{losvd.eps}\n\\caption{Line-of-sight velocity distributions of four bins.\nOpen circles with error bars are the data. Over-plotted are the model values\nfor the best-fitting cored model (red) and NFW model (blue). Bins are located\nat: (a) $R=297\\arcsec$, $\\theta=18^{\\circ}$ (b) $R=550\\arcsec$,\n$\\theta=18^{\\circ}$ (c) $R=1008\\arcsec$, $\\theta=45^{\\circ}$\n(d) $R=2484\\arcsec$, $\\theta=45^{\\circ}$. Quoted $\\chi^2$ values are \nun-reduced.\n\\label{losvd}}\n\\end{figure}\n\n\n\\subsection{Stellar Density}\n\nTo determine the stellar density, we use a number density profile from\n\\citet{col05} extending to $4590\\arcsec$. We linearly \nextrapolate the profile out to $6000\\arcsec$---a physical radius of 3.9 kpc at\nour assumed distance. 
We also extrapolate the profile inwards at constant\ndensity from $90\\arcsec$ to $1\\arcsec$. \n\nTo convert to a more familiar surface brightness\nprofile we apply an arbitrary zero-point shift in log space, adjusting this \nnumber so that\nthe integrated profile returns a luminosity consistent with the value listed\nin \\citet{mat98}. \nAdopting an ellipticity of $e=0.3$ \\citep{mat98}, we deproject under\nthe assumption that surfaces of constant luminosity are coaxial spheroids \n\\citep{geb96}, and for an assumed inclination of $i=90^{\\circ}$.\n\n\\subsection{Stellar Kinematics}\n\nWe derive LOSVDs from individual stellar velocities published in \n\\citet{wal09}. The data contain heliocentric radial velocities and \nuncertainties with a membership probability for 2,633\nFornax stars. Most of these are single-epoch observations, however some are\nmulti-epoch. Stars that have more than one observation are averaged, weighted\nby their uncertainties. After making a cut in membership probability at \n90\\%, we are left with 2,244 stars.\nAlthough a significant number of stars observed may be in binary or multiple\nsystems, simulations have shown that such systems are unlikely to affect \nmeasured dispersions \\citep{har96,ols96,mat98}.\n\nWe adopt a position angle $PA=41^{\\circ}$ \\citep{wal06}. We assume symmetry\nwith respect to both the major and minor axes and fold the data along each\naxis. To preserve any possible rotation, we switch the sign of the velocity\nwhenever a star is flipped about the minor axis.\n\n\\begin{figure}[t]\n\\includegraphics[width=9cm]{moments.eps}\n\\caption{Gauss-Hermite moments for stars near the major axis (blue),\nminor axis (red), and averaged over all angles (green). Solid lines \ncorrespond to the best-fit model with a cored dark matter halo,\ndashed lines are for the best-fit model with a NFW halo.\n\\label{moments}}\n\\end{figure}\n\n\n\nThe transverse motion of Fornax contributes a non-negligible line-of-sight\nvelocity to stars, particularly those at large galactocentric radius. \nUsing the equations in Appendix A of \\citet{wal08}, we correct for this effect.\nWe adopt values for the proper motion of \n$(\\mu_{\\alpha},\\mu_{\\delta})=(47.6, -36.0)\\text{ mas century}^{-1}$\n\\citep{pia07} and assume the heliocentric radial velocity of Fornax is\n$53.3\\text{ km s}^{-1}$ \\citep{pia02}.\n\nWe divide our meridional grid into 20 radial bins, equally spaced in\napproximately $\\log \\,r$ \nfrom 1\\arcsec to 5000\\arcsec. There are 5 angular bins spaced equally\nin sin $\\theta$ over 90$^{\\circ}$ from the major to the minor axis\n\\citep{geb00,sio09}. From the positions of the folded stellar velocity data,\nwe determine the best binning scheme so that each grid cell contains at least \n25 stars from which to recover the LOSVD. Our first bin with enough stars to\nmeet this criterion is centered at 47\\arcsec, and the last bin is centered \nat 2500\\arcsec. We therefore have two-dimensional kinematics coverage over \nthe radial range 47\\arcsec-2500\\arcsec (30pc - 1.6 kpc). 
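For concreteness, the grid spacing just described can be sketched in a few lines of Python. This is only an illustration of the stated bin spacing (it does not reproduce the merging of sparsely populated cells), and the 135 kpc distance is the value adopted in the Introduction.

```python
import numpy as np

# 20 radial bins, (approximately) equally spaced in log r between 1" and 5000"
r_edges = np.logspace(np.log10(1.0), np.log10(5000.0), 21)      # arcsec; 21 edges -> 20 bins

# 5 angular bins, equally spaced in sin(theta) from the major (0 deg) to the minor (90 deg) axis
theta_edges = np.degrees(np.arcsin(np.linspace(0.0, 1.0, 6)))   # degrees; 6 edges -> 5 bins

def arcsec_to_pc(r_arcsec, distance_kpc=135.0):
    """Convert an angular radius to a physical one at the adopted Fornax distance."""
    return r_arcsec * np.pi / (180.0 * 3600.0) * distance_kpc * 1e3

print(arcsec_to_pc(47.0), arcsec_to_pc(2500.0))   # ~31 pc and ~1636 pc, i.e. the quoted 30 pc - 1.6 kpc
```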
At small radii the \nnumber density of stars with velocity measurements is low, thus our \ncentral LOSVDs have higher uncertainty compared to those at larger radii.\n\n\\begin{deluxetable*}{llllllllll}\n\\centering\n\\tablecaption{Best-Fit Model Parameters}\n\\tablewidth{0pt}\n\\tablehead{\n\t\\colhead{DM Profile} & \\colhead{$\\chi^2$} & \n\t\\colhead{$\\frac{M}{L_V}$} & \n\t\\colhead{c} & \\colhead{$r_s$ (kpc)} & \n\t\\colhead{$\\rho_c$ ($M_{\\odot} pc^{-3}$)} & \n\t\\colhead{$M_{\\bullet}$($M_{\\odot}$)} & \\colhead{$N_{model}$}}\n\\startdata\nNFW & 239.8 & $1.3 \\pm 0.6$ & $4.1 \\pm 0.26$ & $11.7 \\pm 1.4$ \n & --- & --- & 3124 \\\\\nLog & 162.6 & $1.5 \\pm 0.5$ & --- & --- \n& $1.6 \\pm 0.1 \\times 10^{-2}$ & --- & 4319 \\\\\nLog & 162.6 & $1.6 \\pm 0.2$ & --- & --- &\n\t $1.6 \\pm 0.1 \\times 10^{-2}$ & $\\leq 3.2 \\times 10^4$ & 3423 \\\\\n\\enddata\n\\tablecomments{Best-fit parameters for NFW, and cored logarithmic\ndark matter halos. $\\chi^2$ is un-reduced, the number of degrees of freedom \nare the same for each model. Model parameters and $1$-$\\sigma$ uncertainties \nare quoted. $N_{model}$ lists the number of models run\nfor the corresponding parameterization.}\n\n\n\\label{restab}\n\\end{deluxetable*}\n\\vskip 20pt\n\nWithin each grid cell, we calculate the LOSVD from discrete stellar velocities\nby using an adaptive kernel density estimate adapted from \\citet{sil86} and\nexplained in \\citet{geb96}. We estimate the $1-\\sigma$ uncertainties in the\nLOSVDs through bootstrap resamplings of the data \\citep{geb96,geb09}.\nThe bootstrap generates a new sample from the data itself by randomly picking $N$\ndata points, where $N$ is the total number of stars in a given bin, allowing\nthe same point to be chosen more than once.\nWe then estimate the LOSVD from that realization and repeat the procedure $300$\ntimes. The $68\\%$ confidence band on the LOSVDs corresponds to the \n$68\\%$ range of the realizations.\nWe compare the velocity dispersion as measured by the LOSVDs with the \nbiweight scale (i.e., a robust estimate of the standard deviation, see \n\\citealt{bee90}) of the individual velocities and note good \nagreement.\n\nFigure \\ref{losvd} plots the LOSVDs of four bins. Rather than parameterizing\nthese LOSVDs with Gauss-Hermite moments, our models instead fit directly to\nthe LOSVDs to constrain the kinematics of the galaxy. However, we do fit\nGauss-Hermite moments for plotting purposes only. These data are presented in\nFigure \\ref{moments} for stars that have been grouped into bins near the\nmajor axis (blue) and minor axis (red). Near the center of the galaxy \nthe density of stars with kinematics is sparse, so we therefore\ngroup stars into annular bins covering all angles (green). \nWe estimate the $1$-$\\sigma$ uncertainties of the Gauss-Hermite moments\nby fitting to each of the 300 realizations calculated during the \nbootstrap discussed above. The error bars plotted contain 68\\% of the \n300 realizations.\n\n\n\n\\section{Dynamical Models}\n\nThe modeling code we use is described in detail in\n\\citet{geb03},\\citet{tho04,tho05}, and \\citet{sio09} and is based on the \ntechnique of orbit superposition \\citep{sch79}. Similar axisymmetric codes \nare described in \\citet{rix97,vdm98,cre99,val04} while \\citet{vdb08} present\na fully triaxial Schwarzschild code. Our code begins by choosing a\ntrial potential that is a combination of the stellar density, dark matter \ndensity, and \npossibly a central black hole. 
We then launch $\\sim 15,000$ orbits carefully\nchosen to uniformly sample the isolating integrals of motion. In an\naxisymmetric potential, orbits are restricted by three isolating integrals\nof motion, $E$, $L_z$, and the non-classical ``third integral'' $I_3$.\nAs it is not possible to calculate $I_3$ a priori, we use a carefully designed\nscheme to systematically sample $I_3$ for each pair of $E$ and $L_z$\n \\citep{tho04,sio09}. Orbits are integrated for many dynamical times, and each \norbit is given a weight $w_i$. We find the combination of\n$w_i$ that best reproduces the observed LOSVDs and light profile via a \n$\\chi^2$ minimization subject to the constraint of maximum entropy \n\\citep{sio09}.\n\nWe run models by varying 3 parameters---the stellar $M\/L_V$ and two \nparameters specifying the dark matter density profile. Some models are also \nrun with a central black hole whose mass is varied in addition to the other \n3 model parameters. Each model is assigned a value of $\\chi^2$ and we \nidentify the best-fitting model as that with the lowest $\\chi^2$. We \ndetermine the\n$68\\%$ confidence range on parameters by identifying the portion of their \nmarginalized $\\chi^2$ curves that lie within $\\Delta \\chi^2=1$ of the overall\nminimum.\n\n\n\n\\subsection{Model Assumptions}\n\nOur trial potential is determined by solving Poisson's equation for an assumed\ntrial density distribution. On our two-dimensional polar grid, this takes the \nform:\n\n\\begin{equation}\n\\rho(r,\\theta)= \\frac{M}{L} \\nu(r,\\theta) + \\rho_{DM}(r)\n\\label{denseq}\n\\end{equation}\n\n\\noindent where $M\/L$ is the stellar mass-to-light ratio, assumed constant\nwith radius, and $\\nu(r,\\theta)$ is the\nunprojected luminosity density. The assumed dark matter profile $\\rho_{DM}(r)$\nis discussed below. For simplicity, we assume Fornax is edge-on in all our\nmodels.\n\n\n\n\\subsection{Dark Matter Density Profiles}\n\nWe parameterize the dark matter halo density\nwith a number of spherical density profiles. We use NFW halos:\n\n\\begin{equation}\n\\rho_{DM}(r)=\\frac{200}{3} \\frac{A(c)\\rho_{crit}}{(r\/r_s)(1+r\/r_s)^2} \n\\end{equation}\n\n\\noindent where\n\\begin{equation*}\nA(c)=\\frac{c^3}{\\ln(1+c)-c\/(1+c)}\n\\end{equation*}\n\n\\noindent and $\\rho_{crit}$ is the present critical density for a closed universe. \nThe two\nparameters we fit for are the concentration $c$ and scale radius $r_s$. \nWe also use halos derived from the logarithmic potential: \n\n\\begin{equation}\n\\rho_{DM}(r)=\\frac{V_c^2}{4 \\pi G} \\frac{3r_c^2+r^2}{(r_c^2+r^2)^2}\n\\end{equation}\n\n\\noindent These models feature a flat central core of density \n$\\rho_c = 3 V_c^2 \/ 4 \\pi G r_c^2$ for $r \\lesssim r_c$ and an \n$r^{-2}$ profile for $r>r_c$. We fit for\n$V_c$ and $r_c$, the asymptotic circular speed at $r=\\infty$ and core radius\nrespectively. We run over 10,000 models with only three distinct\nparameterizations: NFW halos, and logarithmic models with and without \nan IMBH.\n\n\\section{Results}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=15cm]{plotall7p.eps}\n\\caption{$\\chi^2$ curves for all parameterizations of the mass\nprofile. NFW halos (blue) are parameterized by concentration $c$ and scale \nradius $r_s$. Logarithmic halos with an IMBH (green) and without (red)\nare specified by $V_c$ and $r_c$. We also plot core density \n$\\rho_c=3V_c^2\/4\\pi G r_c^2$ as it is the controlling parameter over the radial\nrange of our models. 
We fit for stellar \n$M\/L_V$ in all models (upper left panel). NFW models have much higher \n$\\chi^2$ and are scaled down by 75 to fit on the same axis. Black hole mass\nfor logarithmic halos with an IMBH (green) is plotted in the upper right panel.\nNote the apparent minimum in $r_c$ for logarithmic halos with an IMBH is due \nto incomplete parameter sampling.\n\\label{chi2res}}\n\\end{figure*}\n\n\nWe find significant evidence for cored logarithmic dark matter density profiles.\nThese models are preferred at the $\\Delta \\chi^2=77$ level when compared to\nmodels with an NFW halo, a highly significant result. \nPerhaps more convincingly, the values for the concentration preferred by our\nmodels are around $c=4$. Only relatively recently formed structures\nlike galaxy clusters are expected to have concentrations this low\n\\citepalias{nav96}.\n\nTable \\ref{restab} summarizes the results of our models, \nwhile Figures \\ref{losvd} and \\ref{moments} illustrate the preference for\ncored models\nover models with an NFW halo in fitting to the kinematics. We stress again \nthat\nLOSVDs like those plotted in Figure \\ref{losvd} are the kinematic constraint,\nand not the Gauss-Hermite moments of Figure \\ref{moments}.\n\nWhile we fit\nfor $V_c$ and $r_c$ in the cored models, these parameters are strongly \ndegenerate. Our model grid extends to $3.3$ kpc, thus any model with\n$r_c > 3.3$ kpc has a uniform density $\\rho_c=3V_c^2\/4\\pi Gr_c^2$ over the\nentire range of our model. Furthermore, we have no velocity information from\nstars past \n$R \\geq 1.6 \\text{ kpc}$ and therefore cannot constrain the kinematics \nin the outer parts of the galaxy. Thus, for models with \n$r_c \\mathrel{\\rlap{\\lower4pt\\hbox{\\hskip1pt$\\sim$} 1.6 \\text{ kpc}$,\n$\\rho_c$ is now the only parameter\nthat differentiates between models. As $\\rho_c$ is dependent on both\n$V_c$ and $r_c$, the latter two parameters are completely degenerate. \n\n\n\n\n\nFigure \\ref{chi2res} illustrates this effect. Plotted are the $\\chi^2$\ncurves for each model parameter. Lines of the same color indicate a common \nparameterization of the mass profile (e.g. cored + IMBH). While the \n$\\chi^2$ for both $V_c$ and $r_c$ asymptotes to large values, $\\rho_c$\nis tightly constrained. Note that the behavior of $r_c$ for logarithmic\nprofiles with an IMBH (green line) is a result of incomplete parameter\nsampling. With a more densely-sampled parameter space, the $\\chi^2$ curve for\n$r_c$ for cored models with an IMBH would likely asymptote to large $r_c$ \nin a similar fashion as models without an IMBH (red curve).\n\nThe addition of a central black hole to the mass profile does not\nmake a noticeable difference to the overall $\\chi^2$ for most values of \n$M_{\\bullet}$. We therefore place a $1$-$\\sigma$ upper limit on\n$M_{\\bullet} \\leq 3.2 \\times 10^4 \\, M_{\\odot}$.\n\n\n\nWe plot the mass profile for our best-fit model in Figure \\ref{massfig}\n(solid black line with surrounding $68\\%$ confidence region).\nThis is a cored logarithmic dark matter profile without a central black hole.\nThe mass profile of our best-fit dark halo is plotted as the dashed line,\nand the stellar mass profile is plotted in red. 
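As a numerical illustration of the $V_c$--$r_c$ degeneracy discussed above, the sketch below compares the enclosed dark matter mass of the spherical logarithmic halo, $M(<r) = v_c^2(r)\,r/G$ with $v_c^2(r) = V_c^2 r^2/(r_c^2+r^2)$, for two hypothetical halos that share a core density close to the best-fit $\rho_c \approx 1.6\times10^{-2}\,M_{\odot}\,{\rm pc}^{-3}$ but have different $(V_c, r_c)$; the specific parameter values are chosen only for illustration.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def m_enclosed(r_pc, Vc_kms, rc_pc):
    """Enclosed mass of a spherical logarithmic halo: M(<r) = vc^2(r) r / G."""
    vc2 = Vc_kms**2 * r_pc**2 / (rc_pc**2 + r_pc**2)
    return vc2 * r_pc / G

def core_density(Vc_kms, rc_pc):
    """Central core density rho_c = 3 Vc^2 / (4 pi G rc^2)."""
    return 3.0 * Vc_kms**2 / (4.0 * np.pi * G * rc_pc**2)

# two hypothetical halos with core radii far beyond the kinematic range (1.6 kpc),
# tuned so that both have rho_c ~ 1.6e-2 Msun / pc^3
halo_a = dict(Vc_kms=170.0, rc_pc=10000.0)
halo_b = dict(Vc_kms=340.0, rc_pc=20000.0)

r = np.linspace(100.0, 1600.0, 4)  # pc, radii with kinematic constraints
print(core_density(**halo_a), core_density(**halo_b))    # both ~1.6e-2
print(m_enclosed(r, **halo_a) / m_enclosed(r, **halo_b))  # ratios within ~2 per cent of unity
```

For $r \ll r_c$ the enclosed mass reduces to $(4\pi/3)\rho_c r^3$, which is why only $\rho_c$ is constrained once $r_c$ exceeds the radial range of the data.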
The contribution of dark \nmatter to the total mass increases with radius as the local dynamical \nmass-to-light ratio rises from approximately $\\sim 2$ to greater than\n100 in the outermost bin of our model.\n\n\\subsection{Orbit Structure}\n\nWe construct a distribution function for the galaxy from the set of orbital\nweights $w_i$ resulting from the $\\chi^2$ minimization of our best-fit model.\nTo explore the orbit structure, we determine the internal (unprojected) \nmoments of the distribution function in spherical coordinates. Streaming \nmotions in the $\\mathbf{r}$ and $\\pmb \\theta$ directions are assumed \nto be zero. In this coordinate system, cross-terms of the velocity dispersion\ntensor are zero.\n\n\\begin{figure}[t]\n\\includegraphics[width=9cm]{mr.eps}\n\\caption{Total enclosed mass for our best-fit model (black line with \nsurrounding confidence region). Red line is the enclosed stellar mass.\nDashed line is our best-fit dark matter halo.\n\\label{massfig}}\n\\end{figure}\n\n\nFigure \\ref{aniso} plots the anisotropy in the diagonal components of the\ndispersion tensor. While some panels show an average value near \nunity, there are regions in every panel where the ratio plotted is different\nfrom one. Additionally, we define the tangential velocity dispersion \n$\\sigma_t \\equiv \\sqrt{\\frac{1}{2}(\\langle v^2_{\\phi} \\rangle +\n \\sigma^2_{\\theta})}$\nwhere $\\langle v^2_{\\phi}\\rangle$ is the second moment \n$\\langle v^2_{\\phi} \\rangle = \\sigma^2_{\\phi} + V^2_{\\phi}$, and $V^2_{\\phi}$ \nis the mean rotation velocity. With this definition, we plot the ratio\n$\\sigma_r\/\\sigma_t$ in the bottom panels of Figure \\ref{aniso} \nto investigate whether orbits are radially or tangentially biased.\nFrom these plots it is clear that\nthe common assumptions of Jeans modeling---constant or zero anisotropy---are\nunrealistic. We find that at most radii in the galaxy, orbits are \nradially biased. The uncertainty in the anisotropy is largest at small\nradii, as evidenced by the size of the 68\\% confidence regions in\nFigure \\ref{aniso}. This is likely due to the sparsity of kinematics in the\ninner part of the galaxy (there are limits to how closely target fibers can\nbe spaced in multi-fiber spectroscopy).\n\nIn a recent\npaper, \\citet{kaz11} simulated the effects of tidal stirring on a number\nof dSph progenitors around a Milky Way sized halo. They found radial anisotropy\nin all of the final remnants, and our models are consistent with these\nfindings. \n\n\n\n\n\n\n\\section{Discussion}\n\n\\subsection{Cores and Cusps}\n\nOur analysis shows that for the Fornax dwarf an NFW dark matter halo with \ninner slope $\\alpha = 1$ is\nrejected with high confidence. We have\nkinematics from $30$ pc-$1.6$ kpc, and over this range the models prefer an\n$\\alpha = 0$ uniform density core with \n$\\rho_c = 1.6 \\times 10^{-2} \\, M_{\\odot} \\, \\mathrm{pc}^{-3}$. We do not \nattempt to fit for models with an intermediate value of the slope\n$0 \\le \\alpha \\le 1$. Further investigation is necessary before we can\nconclude that the best fitting dark matter profile is the logarithmic \nmodel. The steep $\\alpha=1$ cusp of the NFW profile is, however, robustly\nruled out. \n\nThe models, in general, seem to prefer less mass in the areas over which\nwe have kinematic constraints. In NFW models, the concentration $c$ sets the \nnormalization (or y-intercept) of the density profile. 
Because $c$ cannot be \nlowered below an astrophysically reasonable limit, NFW models enclose more \nmass than cored models. This difference is reflected in the $\\chi^2$ difference\nbetween cored and NFW models, as the kinematics are best fit by models with\nless mass. Figure \\ref{moments} hints at this as the best\nfit NFW model (dashed line) typically has higher values for $\\sigma$ than\neither the data or best-fitting cored model (solid line).\n\n\nSeveral groups have approached the core\/cusp issue in dSphs by\ntaking advantage of the fact that some dSphs host multiple\npopulations of tracer stars that are chemically and dynamically distinct. \nBy fitting models to each component, the underlying dark matter profile can\nbe modeled more accurately. \\citet{amo11b} fit two-component distribution \nfunction models to Sculptor, while \\citet{wal11} apply\na convenient mass estimator (discussed below) to each stellar component\nin Sculptor and Fornax. It is believed that this mass estimator is unaffected\nby orbital anisotropy, thus their method \nyields a robust determination of the dynamical mass at two locations in the\ngalaxy---allowing for the slope of the dark matter profile to be measured.\nEach of these studies finds models with a cored dark matter halo \npreferable to the predicted cuspy NFW profile.\n\nIt must be noted, however, that we are not observing the pristine \ninitial dark matter distribution in this galaxy. Rather, it has likely been \nmodified by \ncomplex baryonic processes over the lifetime of the galaxy. These processes\nmay include: adiabatic compression \\citep{blu86}, halo rebounding following \nbaryonic mass loss from supernovae \\citep{nav96b},\nor possibly dynamical friction acting on clumps of baryons\n(\\citealt{elz01}; but see also \\citealt{jar09}). Although we chose this galaxy\nbecause these effects were likely to be small, they are nevertheless not \nwell understood and our result must be taken in that context.\n\n\\subsection{Central IMBH}\n\nWe are unable to place a significant constraint on the mass of a central\nIMBH. Figure \\ref{chi2res} (upper right) shows the marginalized $\\chi^2$ curve\nagainst IMBH mass for cored dark matter density profiles. The curve\nasymptotes to low values of IMBH, thus we are only capable of placing an upper\nlimit on the mass of any potential IMBH. Furthermore, our best-fit cored model\nwith and without an IMBH have the same $\\chi^2$. We therefore impose a \n$1$-$\\sigma$ upper limit on $M_{\\bullet} \\leq 3.2 \\times 10^4 \\, M_{\\odot}$.\nIt is unfortunate that we are not able to place a lower limit on $M_{\\bullet}$ \nbecause measurements of black holes in the range $M_{\\bullet}$$\\mathrel{\\rlap{\\lower4pt\\hbox{\\hskip1pt$\\sim$} 10^4 \\, M_{\\odot}$\nplace direct constraints on SMBH formation mechanisms (van Wassenhove et al. 2010 ). Our models, however,\ndo robustly rule out a black hole of larger mass.\n\n\\begin{figure}[t]\n\\includegraphics[width=9cm]{vplot.eps}\n\\caption{Anisotropy in various components of the velocity dispersion tensor. \nShaded\nregions correspond to the 68\\% confidence regions, solid lines plot\nthe best fit model. 
Left and right hand panels plot stars near the major and \nminor axes, respectively.\n\\label{aniso}}\n\\end{figure}\n\n\nIn massive galaxies it is thought that the radius of influence,\n$R_{\\mathrm{inf}} \\sim G M_{\\bullet}\/\\sigma^2$, must be resolved in order to\ndetect and precisely measure a black hole \\citep{geb03, kor04, fer05, gul09}.\nUsing our upper limit on $M_{\\bullet}$ we can calculate the maximum radius of \ninfluence of a potential black hole. Estimating the central velocity \ndispersion at $\\sigma \\sim 10 \\text{ km s}^{-1}$\ngives an upper limit for $R_{\\mathrm{inf}} \\lesssim 14$ pc. Our kinematics start at\n$R = 26$ pc, so it is not surprising that the minimum black hole mass we were\nable to detect has $R_{\\mathrm{inf}}$ close to $26$ pc. \nTo detect smaller black holes,\nwe require kinematics of stars closer to the center of the galaxy.\n\nWe are able to detect the dynamical influence of a black hole with a similar\nmass as \\citet{val05} detect in NGC~205, albeit with kinematics of much lower\nresolution. Our\ninnermost model bin is centered around $30 \\mathrm{~pc}$ whereas they use \nhigh-resolution kinematics from the \\emph{Hubble Space Telescope} \nto resolve spatial scales less than $1 \\mathrm{~pc}$. The advantage we have\nis that the central velocity dispersion is much smaller in Fornax, which \nmakes $R_{\\mathrm{inf}}$ larger for fixed $M_{\\bullet}$. NGC~205 is also more than\nfive times as distant as Fornax.\n\n\\subsection{Mass Estimators}\n\nSeveral authors have come up with convenient estimators of total mass within\na given radius for local group dSphs. \\citet{stri08} use the mass enclosed\nwithin $300$ pc while \\citet{wal09b} and \\citet{wol10} find a similar expression\nfor the mass contained within the projected and un-projected half-light radii,\nrespectively.\nThese estimators bear striking resemblance to a result obtained by \\citet{cap06}\nderived from integral field kinematics of massive elliptical galaxies, and they\nall hint at an easy way to determine dynamical masses without expensive \nmodeling. They are believed to be insensitive to velocity anisotropy\nbased on the derivation in \\citet{wol10}, and we compare their estimates\nto our models as a check on this.\n\nFor the mass contained within $300$ pc we measure\n$M_{300}=3.5^{+0.77}_{-0.11} \\times 10^6 M_{\\odot}$, roughly a factor of three\nsmaller than \\citet{stri08} who measure $M_{300} = 1.14^{+0.09}_{-0.12} \\times\n10^7 M_{\\odot}$ using Jeans models with parameterized \nanisotropy. \n\nThe \\citet{cap06}, \\citet{wal09b}, and \\citet{wol10}\nmass estimators are all of the form:\n\n\\begin{equation}\nM(r_{\\mathrm{est}}) = k \\langle\\sigma^2_{LOS}\\rangle R_e\n\\label{esteq}\n\\end{equation}\n\n\n\\noindent where $r_{\\mathrm{est}}$ is the radius at which the estimator is valid.\nFor \\citet{cap06} and \\citet{wal09b} $r_{\\mathrm{est}} = R_e$\n(the projected half-light radius), while for \\citet{wol10} \n$r_{\\mathrm{est}} = r_e$ (the un-projected half-light radius).\nOther than the projected\/un-projected difference, each estimator \ndiffers only by the value of the\nconstant $k$. 
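The arithmetic behind the comparison that follows can be reproduced with a few lines of Python. The constants $k$ used here ($5/(2G)$, $3/G$, and $5/G$ for the \citet{wal09b}, \citet{wol10}, and \citet{cap06} estimators, respectively) are assumptions of this sketch, based on the commonly quoted forms of those estimators rather than on values stated in this section; the dispersion and half-light radii are the values given in the next paragraph.

```python
G = 4.301e-3              # gravitational constant in pc (km/s)^2 / Msun

sigma_los = 11.3          # km/s; quoted luminosity-weighted line-of-sight dispersion (interpreted as rms)
R_e, r_e = 689.0, 900.0   # pc; projected and un-projected half-light radii quoted below

def mass_estimate(k_over_G, radius_pc):
    """Evaluate M = k <sigma_LOS^2> r with k expressed in units of 1/G."""
    return k_over_G * sigma_los**2 * radius_pc / G   # Msun

print("Walker et al. (2009b): M(R_e) ~ %.1e Msun" % mass_estimate(2.5, R_e))  # ~5.1e7
print("Wolf et al. (2010):    M(r_e) ~ %.1e Msun" % mass_estimate(3.0, r_e))  # ~8.0e7
print("Cappellari et al.:     M(R_e) ~ %.1e Msun" % mass_estimate(5.0, R_e))  # ~1.0e8
```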
In order to more fairly compare between these estimators and\nour models, we use the values for the luminosity-weighted line-of-sight \nvelocity dispersion $\\langle\\sigma^2_{LOS}\\rangle=11.3^{+1.0}_{-1.8} \\, \n\\mathrm{ km~s}^{-1}$\nprojected half-light radius $R_e=689 \\, \\mathrm{pc}$, and un-projected \nhalf-light radius $r_e=900 \\, \\mathrm{pc}$ that we calculate from the data \nused in our models.\n\n\nOur best-fitting model has \n$M(R_e)=3.9^{+0.46}_{-0.11} \\times 10^7 \\, M_{\\odot}$\nand $M(r_e)=5.8^{+1.0}_{-0.2} \\times 10^7 \\, M_{\\odot}$.\nWith each group's value for $k$ and our kinematics, \nthe mass estimates are: \n$M(R_e) \\approx 5.1^{+1.0}_{-1.5} \\times 10^7 \\, M_{\\odot}$ \\citep{wal09b}, \n$M(r_e)\\approx 8.1^{+1.6}_{-2.4} \\times 10^7 \\, M_{\\odot}$ \\citep{wol10}, and\n$M(R_e)\\approx 1.0^{+0.3}_{-0.2} \\times 10^8 \\, M_{\\odot}$ \\citep{cap06}.\nOur model is broadly consistent with both the \\citet{wal09b} and \\citet{wol10} \nestimators.\n\nThe evidence that mass estimators are anisotropy-independent\ncomes largely from comparison to spherical Jeans models (except\n\\citealt{cap06}). The weakness of\nthese models is that the anisotropy must be parameterized and is restricted\nto be a function of radius only. Our models are not subject to these \nconstraints since the anisotropy is calculated non-parametrically and is\nfree to vary\nwith position angle. We suggest that the best way to prove the accuracy of\nmass estimators is to compare with models that can self-consistently \ncalculate both mass and anisotropy for realistic potentials.\n\nFor bright elliptical galaxies, \\citet{cap06} and \\citet{tho11} have done\njust that. In these cases, the mass estimates are checked against \nmasses derived from axisymmetric Schwarzschild modeling and good agreement\nis found. Ours is the first study to perform a similar test with dSphs, and\nthere is no reason to assume that success with bright ellipticals guarantees\naccuracy in the dSph regime. The results from our comparison above are\nnevertheless reassuring.\n\n\n\\subsection{Tidal Effects}\n\nThe principle of orbit superposition, and hence our entire modeling procedure,\nrelies on the assumption that the galaxy is bound and in a steady state.\nThe amount of tidal stripping in Fornax due to the effect of its orbit through\nthe Milky Way's halo is not well-known. For reasonable values of Fornax\ntotal mass $m$, Milky Way mass $M$, and Galactocentric radius $R_0$, \nthe tidal radius of Fornax is $r_t \\sim (m\/3M)^{1\/3} R_0 \\sim 13.5 \n\\text{ kpc}$. This estimate of $r_t$ is sufficiently larger than our model\ngrid that we would not expect tidal effects to be important over the radial\nrange of our models. If Fornax is on an eccentric orbit about the Milky Way,\nhowever, the above equation for\n$r_t$ is not valid and estimation of the tidal radius is not as\nstraightforward. Fortunately, studies investigating its transverse motion\nsuggest the orbit of Fornax is roughly circular \\citep{pia07,wal08}.\n\n\n\\begin{acknowledgements}\n\nKG acknowledges support from NSF-0908639. We thank the Texas Advanced \nComputing Center (TACC) for providing state-of-the-art computing resources.\nWe are grateful to the Magellan\/MMFS Survey collaboration\nfor making the stellar velocity data publicly available. 
Additionally,\nwe thank Matthew Walker, Mario Mateo, Joe Wolf, and the anonymous referee\nfor helpful comments on an earlier draft of the paper.\n\n\\end{acknowledgements}\n\n\n\n\\bibliographystyle{apj}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\section{Introduction}\n\nFrom scientific research to industrial applications, practitioners often face the challenge to rank features for a prediction task. Among the ranking tasks performed by scientists and practitioners, a large proportion belongs to marginal ranking; that is, rank features based on the relation between the response variable and one feature at a time, ignoring other available features. For example, to predict cancer driver genes, biomedical researchers need to first extract predictive features from patients' data. Then they decide whether an extracted feature is informative by examining its marginal distributions in tumor and normal tissues, usually by boxplots and histograms. This practice is common in high-profile biomedical papers, such as in \\cite{davoli2013cumulative, vogelstein2013cancer}.\n\nThis common practice is suboptimal from a statistical point of view, as multiple features usually have dependence and therefore jointly influence the response variable beyond a simple additive manner. However, the popularity of marginal feature ranking roots not only in the education background and convention, but also in the strong desire for simple interpretation and visualization in the trial-and-error scientific discovery process. As such, marginal feature ranking has been an indispensable data-analysis step in the scientific community, and it will likely stay popular. \n\nIn practice, statistical tests (e.g., two-sample $t$ test and two-sample Wilcoxon rank-sum test) are often used to rank features marginally. However, these tests do not reflect the objective of a prediction task. For example, if the classification error is of concern, the connection between the significance of these tests and the classification error is unclear. This misalignment of ranking criterion and prediction objective is undesirable: the resulting feature rank list does not reflect the marginal importance of each feature for the prediction objective. Hence, scientists and practitioners call for a marginal ranking approach that meets the prediction objective.\n\n\n\n\n\n\n\n\n\nIn this work, we focus on marginal ranking for binary prediction, which can be formulated as binary classification in machine learning. Binary classification has multiple prediction objectives, \\jl{which we refer to as paradigms here. \\jjl{These paradigms include} (1) the \\textit{classical} paradigm that minimizes the classification error,} a weighted sum of the type I and type II errors, \\jl{whose weights are} the class priors \\citep{Hastie.Tibshirani.ea.2009, james2013introduction}, (2) the \\textit{cost-sensitive learning} paradigm that replaces the \\jl{two error weights by pre-determined constant costs} \\citep{Elkan01, ZadLanAbe03}, (3) the \\textit{Neyman-Pearson (NP)} paradigm that minimizes the type II error subject to a type I error upper bound \\citep{cannon2002learning, scott2005neyman, tong2013plug, tong2016neyman}, and (4) the \\textit{global} paradigm that focuses on the overall prediction accuracy under all possible thresholds: the area under the receiver-operating-characteristic curve (AUROC) or precision-recall curve (AUPRC). 
Here we consider marginal ranking of features under the classical and NP paradigms, \\jl{and we define the corresponding ranking criteria as the classical criterion (CC) and the Neyman-Pearson criterion (NPC). The idea behind these two criteria is easily generalizable to the cost-sensitive learning paradigm} and the global paradigm. \n\nIt is worth \\jl{mentioning that NPC} is robust against sampling bias; that is, even when the class \\jl{proportions in a sample \\jjl{deviate} from those in the population, NPC still achieves feature} ranking consistency between sample and population with high probability. This \\jl{nice property makes NPC particularly useful for disease diagnosis, where a long-standing obstacle is that the proportions of diseased patients and healthy people in medical records do not reflect the proportions in the population.} To implement CC and NPC, we take a model-free approach by using nonparametric estimates of class-conditional feature densities. This approach makes CC and NPC more adaptive to diverse feature distributions than existing criteria for marginal feature ranking. \n\n\n\n\n\n\nThe rest of the paper is organized as follows. In Section \\ref{sec:background}, we \\jl{define CC and NPC on the population level, as the oracle criteria under the classical and NP paradigms}. In Section \\ref{sec:methods}, we \\jl{define the sample-level CC and NPC and develop model-free algorithms to implement them}. In Section \\ref{sec:theoretical properties}, we \\jl{derive theoretical results regarding the ranking consistency of the sample-level CC and NPC in relation to their population counterparts. In Section \\ref{sec:simulation}, we use simulation studies to demonstrate the performance of sample-level CC and NPC in ranking low-dimensional and high-dimensional features. We also demonstrate that commonly-used ranking criteria, including the Pearson correlation, the distance correlation \\citep{szekely2009brownian}\\footnote{In binary classification, the response variable is encoded as $0$ and $1$ and treated as a numerical variable in the calculation of of the Pearson and distance correlations.}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, might give feature ranking misaligned with the prediction objective. In Section \\ref{simu:realdata}, we apply CC and NPC to rank features in two real datasets. Using the first dataset regarding breast cancer diagnosis, we show that both criteria can identify informative features, many of which have been previously reported; we also provide a Supplementary Excel File for literature evidence. Using the second dataset for prostate cancer diagnosis from urine samples, we demonstrate that NPC is robust to sampling bias.} We conclude with a discussion in Section \\ref{sec:conclusions}. All the proofs of lemmas, propositions, and theorems are relegated to the Appendix.\\par \n\n\n\\section{Population-level ranking criteria}\\label{sec:background}\n\nIn this section, we introduce two objective-based marginal feature ranking criteria, \\jjl{on the population level,} under the classical paradigm and the Neyman-Pearson (NP) paradigm. As argued previously, when \\jjl{one has} a learning\/prediction objective, the feature ranking criterion should be in line with that. Concretely, the $j$-th ranked feature should be the one that achieves the $j$-th best performance based on that objective. \nThis objective-based feature ranking perspective is extendable to ranking feature subsets (e.g., feature pairs). 
Although we focus on marginal feature ranking in this work, to cope with this future extension, our notations in the methodology and theory development are compatible with ranking of feature subsets . \n\n\n\n\n\\subsection{Notations and classification paradigms}\n\nWe first introduce essential mathematical notations to facilitate our discussion. Let $\\left(\\bd X,Y\\right)$ be a pair of random observations where $\\bd X \\in \\mathcal{X} \\subseteq {{\\rm I}\\kern-0.18em{\\rm R}}^d$ is a vector of features and $Y\\in \\left\\{ 0,1 \\right\\}$ indicates the class label of $\\bd X$. A \\textit{classifier} $\\phi:\\mathcal{X}\\rightarrow \\left\\{ 0,1 \\right\\}$ maps from the feature space to the label space. A \\textit{loss function} assigns a cost to each misclassified instance $\\phi(\\bd X) \\neq Y$, and the \\textit{risk} is defined as the expectation of this loss function with respect to the joint distribution of $\\left( \\bd X,Y\\right)$. We adopt in this work a commonly used loss function, the $0$-$1$ loss: $\\mathds{1}\\left(\\phi(\\bd X)\\neq Y \\right)$, where $\\mathds{1}(\\cdot)$ denotes the indicator function. Let ${\\rm I}\\kern-0.18em{\\rm P}$ and ${\\rm I}\\kern-0.18em{\\rm E}$ denote the generic probability distribution and expectation, whose meaning depends on specific contexts. With the choice of the indicator loss function, the risk is the classification error: $R(\\phi) = {\\rm I}\\kern-0.18em{\\rm E} \\left[ \\mathds{1}\\left( \\phi(\\bd X)\\neq Y\\right) \\right] = {\\rm I}\\kern-0.18em{\\rm P} \\left( \\phi(\\bd X)\\neq Y\\right)$. While $R(\\cdot)$ \\jjl{is a natural} objective to evaluate the performance of a classifier, for different theoretical and practical reasons, one might consider different objectives for \\jjl{evaluating classifiers}. \n\nIn this paper, we call the learning objective of minimizing $R(\\cdot)$ the \\textit{classical paradigm}. Under \\jjl{this} paradigm, one aims to mimic the \\textit{classical oracle classifier} $\\varphi^{*}$ that minimizes the population-level classification error, \n$$\n\\varphi^{*}=\\argmin \\limits_{\\varphi: {\\rm I}\\kern-0.18em{\\rm R}^d\\rightarrow \\{0, 1\\}} R\\left( \\varphi\\right)\\,.\n$$ \nIt is well known in literature that the classical oracle $\\varphi^*(\\cdot) = \\mathds{1} (\\eta (\\cdot) > 1\/2)$, where $\\eta(\\bd x) = {\\rm I}\\kern-0.18em{\\rm E} (Y|\\bd X=\\bd x)$ is the regression function \\citep{koltchinskii2011introduction}. Alternatively, we can show that $\\varphi^*(\\cdot) = \\mathds{1}(p_1(\\cdot)\/p_0(\\cdot)>\\pi_0\/\\pi_1)$, where $\\pi_0 ={\\rm I}\\kern-0.18em{\\rm P}(Y=0)$, $\\pi_1 ={\\rm I}\\kern-0.18em{\\rm P}(Y=1)$, $p_0$ is the probability density function of $\\bd X|(Y=0)$, and $p_1$ is the probability density function of $\\bd X|(Y=1)$. Note that the risk can be decomposed as follows:\n\\begin{align*}\n \tR(\\phi) &= {\\rm I}\\kern-0.18em{\\rm P}(Y=0)\\cdot{\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=0\\right) + {\\rm I}\\kern-0.18em{\\rm P}(Y=1)\\cdot {\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=1\\right)\\\\\n \t &= \\pi_0 R_0\\left(\\phi\\right)+ \\pi_1 R_1\\left(\\phi\\right)\\,,\n \\end{align*} where $R_j\\left(\\phi\\right) = {\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=j\\right)$, for $j= 0 \\text{ and } 1$. The notations $R_0(\\cdot)$ and $R_1(\\cdot)$ denote the population-level type I and type II errors respectively. 
Note that minimizing $R(\\cdot)$ implicitly \\jjl{imposes} a weighting of $R_0$ and $R_1$ by $\\pi_0$ and $\\pi_1$. This is not always desirable. For example, when people know the explicit costs for type I and type II errors: $c_0$ and $c_1$, one might want to optimize the criterion $c_0R_0(\\cdot) + c_1 R_1(\\cdot)$, which is often referred to as \\textit{the cost-sensitive learning paradigm}. \n \n\\jjl{In parallel to the classical paradigm, we consider the \\textit{Neyman-Pearson (NP) paradigm}, which} aims to mimic the \\textit{level-$\\alpha$ NP oracle classifier} \\jjl{that minimizes the type II error while constraining the type I error under $\\alpha$, a user-specified type I error upper bound,} \n \\begin{align}\\label{eq:NP_oracle}\n \\varphi^{*}_{\\alpha} = \\argmin \\limits_{\\varphi: R_0(\\varphi) \\leq \\alpha} R_1(\\varphi)\\,.\n\\end{align} \nUsually, \\jjl{$\\alpha$ is a small value (e.g., $5\\%$ or $10\\%$), reflecting a user's conservative attitudes towards the type I error.} As the development of classification methods under the NP paradigm is relatively new, \\jjl{here we review the development of the NP oracle classifier} $\\varphi^*_{\\alpha}(\\cdot)$. Essentially, \\jjl{due to the famous Neyman-Pearson Lemma (Appendix \\ref{sec::np lamma}) and a correspondence between classification and statistical hypothesis testing,} $\\varphi^*_{\\alpha}$ in \\eqref{eq:NP_oracle} can be constructed by thresholding $p_{1}(\\cdot)\/p_{0}(\\cdot)$ at a proper level $C^*_{\\alpha}$ \\citep{tong2013plug}:\n \\begin{equation}\\label{equ: neyman_pearson}\n \t\\varphi_{\\alpha}^*(\\bd x) = \\mathds{1}\\left(p_1(\\bd x)\/p_0(\\bd x) > C_\\alpha^*\\right)\\,. \\end{equation}\n\nIn addition to the above three paradigms, a common practice is to evaluate a classification algorithm by its AUROC or AUPRC, which we refer to as the \\textit{global paradigm}. In contrast to the above three paradigms that lead to a single classifier, which has its corresponding type I and II errors, the global paradigm evaluates a classification algorithm by aggregating its all possible classifiers with type I errors ranging from zero to one. For example, the oracle AUROC is the area under the curve\n\\[ \\left\\{ \\left(R_0(\\varphi_\\alpha^*),\\, 1-R_1(\\varphi_\\alpha^*)\\right): \\alpha \\in [0,1]\n\t\\right\\}\\,.\n\\]\n\n\n\\subsection{Classical and Neyman-Pearson criteria on the population level}\\label{sec:NPC_population}\nDifferent learning\/prediction objectives in classification induce distinct feature ranking criteria. \\jjl{We first define the population-level CC and NPC. Then we show that these two criteria lead to different rankings of features in general, and that NPC may rank features differently at different $\\alpha$ values. $\\varphi^*_{A}$ and $\\varphi^*_{\\alpha A}$ denote, respectively,} the classical oracle classifier and the level-$\\alpha$ NP oracle classifier that only use features indexed by $A \\subseteq \\{1,\\ldots, d \\}$. This paper focuses on the case when $|A| = 1$. 
\nConcretely, under the classical paradigm, the classical oracle \\jjl{classifier on index set $A$, $\\varphi^*_{A}$,} achieves \n\\begin{equation*\n\tR \\left(\\varphi^*_{A}\\right) = \\min_{\\varphi_A} R \\left(\\varphi_{A}\\right) = \\min_{\\varphi_A} {\\rm I}\\kern-0.18em{\\rm P} (\\varphi_{A}(\\bd X)\\neq Y)\\,,\n\\end{equation*} \nin which $\\varphi_A: \\mathcal X \\subseteq {\\rm I}\\kern-0.18em{\\rm R}^d \\rightarrow \\{0, 1\\}$ is any mapping that first projects $\\bd X\\in {\\rm I}\\kern-0.18em{\\rm R}^d$ to its $|A|$-dimensional sub-vector $\\bd X_A$, which comprises of the coordinates of $\\bd X$ corresponding to the index set $A$, and then maps from $\\bd X_A\\in {\\rm I}\\kern-0.18em{\\rm R}^{|A|}$ to $\\{0, 1\\}$. Analogous to $\\varphi^*(\\cdot)$, we know \n\\begin{align}\\label{eqn:classical oracle}\n\\varphi^*_{A}(\\bd x) = \\mathds{1}(\\eta_A(\\bd x_A) > 1\/2) = \\mathds{1}(p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A) > \\pi_0 \/ \\pi_1)\\,, \n\\end{align}\nwhere $\\eta_A(\\bd x_A) = {\\rm I}\\kern-0.18em{\\rm E} (Y|\\bd X_A=\\bd x_A)$ is the regression function using only features in the index set $A$, and $p_{1A}$ and $p_{0A}$ denote the class-conditional probability density functions of the features $\\bd X_A$. Suppose that statisticians are given candidate feature subsets denoted by $A_1, \\ldots, A_J$, which might arise from some domain expertise of the clients. \\jjl{We define the \\textit{population-level classical criterion} (p-CC) of $A_i$ as its \\textit{optimal} risk $R\\left(\\varphi^*_{A_i}\\right)$; i.e., $A_1, \\ldots, A_J$ will be ranked based on $\\left\\{R \\left(\\varphi^*_{A_1}\\right), \\ldots, R \\left(\\varphi^*_{A_J}\\right) \\right\\}$, with the smallest being ranked the top}. The prefix ``p\" in p-CC indicates ``population-level.\"\n Note that \\jjl{$R(\\varphi^*_{A_i})$ represents} $A_i$'s best achievable performance measure under the classical paradigm and \\jjl{does} not depend on any specific models \\jjl{assumed for} the distribution of $(\\bd X, Y)$. \n\n\nUnder the NP paradigm, the NP oracle \\jjl{classifier} on index set $A$, $\\varphi^*_{\\alpha A}$, achieves \n\\begin{equation}\\label{ideaL_sormulation_np}\n\tR_1 \\left(\\varphi^*_{\\alpha A}\\right) = \\min_{\\substack{\\varphi_{A} \\\\ R_0 \\left(\\varphi_{\\alpha A}\\right)\\leq\\alpha}} R_1 \\left(\\varphi_{\\alpha A}\\right) = \\min_{\\substack{\\varphi_{A} \\\\ {\\rm I}\\kern-0.18em{\\rm P}(\\varphi_{A} (\\bd X) \\neq Y | Y=0)\\leq\\alpha}} {\\rm I}\\kern-0.18em{\\rm P}(\\varphi_{A} (\\bd X) \\neq Y | Y=1)\\,.\n\\end{equation} \nBy the Neyman-Pearson Lemma, for some proper constant $C^*_{\\alpha A}$, \n\\begin{equation}\\label{eqn: np oracle}\n\\varphi^*_{\\alpha A}(\\bd x) = \\mathds{1} \\left(p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A) > C^*_{\\alpha A}\\right)\\,.\n\\end{equation}\nFor a given level $\\alpha$, \\jjl{we define the \\textit{population-level Neyman-Pearson criterion} (p-NPC) of $A_i$ as its \\textit{optimal} type II error $R_1 \\left(\\varphi^*_{\\alpha A_i}\\right)$; i.e., $A_1, \\ldots, A_J$ will be ranked based on $\\left\\{R_1 \\left(\\varphi^*_{\\alpha A_1}\\right), \\ldots, R_1 \\left(\\varphi^*_{\\alpha A_J}\\right) \\right\\}$, with the smallest being ranked the top}. 
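For univariate Gaussian class-conditional densities, both criteria reduce to Gaussian tail probabilities; the minimal Python sketch below reproduces the numbers in the illustration that follows. The type II error routine assumes that thresholding the density ratio is (effectively) equivalent to thresholding the feature itself, which is exact for equal class-conditional variances and an excellent approximation in this example.

```python
import numpy as np
from scipy.stats import norm

def classical_risk(mu0, sd0, mu1, sd1, pi0=0.5):
    """p-CC for one feature: risk of 1{pi1 p1(x) > pi0 p0(x)}, computed on a fine grid."""
    x = np.linspace(min(mu0, mu1) - 10 * max(sd0, sd1), max(mu0, mu1) + 10 * max(sd0, sd1), 200001)
    dx = x[1] - x[0]
    predict1 = (1 - pi0) * norm.pdf(x, mu1, sd1) > pi0 * norm.pdf(x, mu0, sd0)
    r0 = np.sum(predict1 * norm.pdf(x, mu0, sd0)) * dx       # type I error of the oracle
    r1 = np.sum((~predict1) * norm.pdf(x, mu1, sd1)) * dx    # type II error of the oracle
    return pi0 * r0 + (1 - pi0) * r1

def np_type2(mu0, sd0, mu1, sd1, alpha):
    """p-NPC when the density ratio is (effectively) increasing in x: threshold at the class-0 (1-alpha) quantile."""
    t = norm.ppf(1 - alpha, mu0, sd0)
    return norm.cdf(t, mu1, sd1)

print(classical_risk(-5, 2, 0, 2), classical_risk(-5, 2, 1.5, 3.5))    # ~0.106 and ~0.113
print(np_type2(-5, 2, 0, 2, 0.01), np_type2(-5, 2, 1.5, 3.5, 0.01))    # ~0.431 and ~0.299
print(np_type2(-5, 2, 0, 2, 0.20), np_type2(-5, 2, 1.5, 3.5, 0.20))    # ~0.049 and ~0.084
```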
\n\n\nAs a concrete illustration of p-CC and p-NPC, suppose that we want to compare two features $\\bd X_{\\{1\\}}, \\bd X_{\\{2\\}} \\in {\\rm I}\\kern-0.18em{\\rm R}$\n\\footnote{Usually, we use $X_1$ and $X_2$, but we opt to use $\\bd X_{\\{1\\}}$ and $\\bd X_{\\{2\\}}$ to be consistent with the notation $\\bd X_{A}$.}, whose class-conditional distributions are \\jjl{the following Gaussians}: \n\\begin{align}\\label{eq:toy_example}\n\t\\bd X_{\\{1\\}} \\given (Y=0) &\\sim \\mathcal{N}(-5, 2^2)\\,, & \\bd X_{\\{1\\}}\\given (Y=1) &\\sim \\mathcal{N}(0, 2^2)\\,,\\\\\n\t\\bd X_{\\{2\\}} \\given (Y=0) &\\sim \\mathcal{N}(-5, 2^2)\\,, & \\bd X_{\\{2\\}} \\given (Y=1) &\\sim \\mathcal{N}(1.5, 3.5^2)\\,, \\notag\n\\end{align}\nand the class priors are equal, i.e., $\\pi_0 = \\pi_1 = .5$. \nIt can be calculated that $R \\left(\\varphi^*_{{\\{1\\}}}\\right) = .106$ and $R \\left(\\varphi^*_{{\\{2\\}}}\\right)= .113$. Therefore, $R \\left(\\varphi^*_{{\\{1\\}}}\\right) < R \\left(\\varphi^*_{{\\{2\\}}}\\right)$, and \\jjl{p-CC ranks feature $1$ higher than feature $2$}. \\jjl{The comparison is more subtle for p-NPC}. If we set $\\alpha =.01$, $R_1 \\left(\\varphi^*_{\\alpha \\{1\\}}\\right) = .431$ is \\textit{larger} than $R_1 \\left(\\varphi^*_{\\alpha \\{2\\}}\\right) = .299$. However, if we set $\\alpha = .20$, $R_1 \\left(\\varphi^*_{\\alpha \\{1\\}}\\right) = .049$ is \\textit{smaller} than $R_1 \\left(\\varphi^*_{\\alpha \\{2\\}}\\right)= .084$. Figure \\ref{fig:toy example 1} illustrates the NP oracle classifiers for \\jjl{these $\\alpha$'s.} \n\n\n\n\n\\begin{figure}[h!]\n \\centering\n \\makebox{\\includegraphics[width = 0.75\\textwidth]{plots\/toy_example_w_alpha.pdf}}\n \\caption{\\small{A toy example in which feature ranking under p-NPC changes as $\\alpha$ varies. \\textbf{Panel a}: $\\alpha=.01$. The NP oracle classifier based on feature $1$ (or feature $2$) has the type II error $.431$ (or $.299$). \\textbf{Panel b}: $\\alpha=.20$. The NP oracle classifier based on feature $1$ (or feature $2$) has the type II error $.049$ (or $.084$).}}\\label{fig:toy example 1}\n\\end{figure}\n\n\nThis example suggests a general phenomenon that feature ranking \\jjl{depends} on the user-chosen criteria. For some \\jjl{$\\alpha$} values (e.g., $\\alpha =.20$ in the example), p-NPC and p-CC agree on the ranking, while for others (e.g., $\\alpha = .01$ in the example), they disagree. Under special cases, however, we can derive conditions under which p-NPC gives an $\\alpha$-invariant feature ranking \\jjl{that always agrees with the ranking by} p-CC. In the following, we derive such a condition under Gaussian distributions.\n\n\\begin{lemma}\\label{lem: toy example1}\nSuppose that two features $\\bd X_{\\{1\\}}$ and $\\bd X_{\\{2\\}}$ have class-conditional densities\n\\begin{align*}\n\t\\bd X_{\\{1\\}} | (Y=0) &\\sim \\mathcal{N}\\left(\\mu_1^0, (\\sigma_1^0)^2\\right)\\,, & \\bd X_{\\{1\\}} | (Y=1) &\\sim \\mathcal{N}\\left(\\mu_1^1, (\\sigma_1^1)^2\\right)\\,,\\\\\n\t\\bd X_{\\{2\\}} | (Y=0) &\\sim \\mathcal{N}\\left(\\mu_2^0, (\\sigma_2^0)^2\\right)\\,, & \\bd X_{\\{2\\}}| (Y=1) &\\sim \\mathcal{N}\\left(\\mu_2^1, (\\sigma_2^1)^2\\right)\\,.\n\\end{align*}\nFor $\\alpha\\in(0,1)$\\,, let $\\varphi^*_{\\alpha\\{1\\}}$ or $\\varphi^*_{\\alpha\\{2\\}}$ be the level-$\\alpha$ NP oracle classifier using only the feature $\\bd X_{\\{1\\}}$ or $\\bd X_{\\{2\\}}$ respectively, and let $\\varphi^*_{\\{1\\}}$ or $\\varphi^*_{\\{2\\}}$ be the corresponding classical oracle classifier. 
Then if and only if\n$\n\\sigma_1^0 \/ \\sigma_1^1 = \\sigma_2^0 \/ \\sigma_2^1,\n$\nwe have simultaneously for all $\\alpha$, \n\\begin{align*}\n\t\\text{\\rm{sign}}\\left\\{R_1\\left(\\varphi^*_{\\alpha \\{2\\}}\\right) - R_1\\left({\\varphi}^*_{\\alpha \\{1\\}}\\right)\\right\\} = &\\text{\\rm{sign}}\\left\\{ R\\left(\\varphi^*_{\\{2\\}} \\right) -R\\left(\\varphi^*_{\\{1\\}} \\right) \\right\\} = \\text{\\rm{sign}}\\left\\{\\frac{|\\mu_1^1 -\\mu_1^0| }{\\sigma_1^1} - \\frac{|\\mu_2^1 -\\mu_2^0 | }{\\sigma_2^1}\\right\\}\\,,\n\\end{align*}\nwhere $\\rm{sign}(\\cdot)$ is the sign function. \n\n\n\\end{lemma}\\par\n\n\n\n\n\n\n\n\n\n\n\n\nLemma \\ref{lem: toy example1} suggests that on the population level, \\jjl{ranking agreement between CC and NPC is an exception} rather than the norm. This observation calls for development of the sample-level criteria under different objectives. \n\n\n\n\n\n\n\n\n\\section{Sample-level ranking criteria} \\label{sec:methods}\n\nIn \\jjl{the following text, we refer to sample-level CC and NPC as} ``s-CC\" and ``s-NPC\" respectively. In the same model-free spirit of the p-CC and p-NPC definitions, we use model-free nonparametric techniques to construct s-CC and s-NPC. Admittedly, such construction would be impractical when the feature subsets to be ranked have large cardinality. But since we are mainly interested in marginal feature ranking, with intended extension to small subsets such as feature pairs, model-free nonparametric techniques are appropriate. \n\n\n\n\nIn the methodology and theory sections, we assume the following sampling scheme. Suppose we have a training dataset $\\mathcal{S} = \\mathcal{S}^0 \\cup \\mathcal{S}^1 $, where $\\mathcal{S}^0= \\left\\{\\bd {X}_{1}^{0}, \\dots, \\bd {X}_{m}^{0} \\right\\}$ are \\jjl{independent and identically distributed (i.i.d.)} class $0$ observations, $\\mathcal{S}^1= \\left\\{\\bd {X}_{1}^{1}, \\dots, \\bd {X}_{n}^{1} \\right\\}$ are i.i.d. class $1$ observations, and $\\mathcal{S}^0$ is independent of $\\mathcal{S}^1$. The sample sizes $m$ and $n$ are considered as \\jjl{fixed positive integers}. \\jjl{The construction of both s-CC and s-NPC involves} splitting the class $0$ and class $1$ observations. To increase stability, \\jjl{we perform multiple random splits. In detail,} we randomly divide $\\mathcal{S}^0$ for $B$ times into two halves $\\mathcal{S}_{\\rm ts}^{0(b)} = \\left\\{ \\bd X_{1}^{0(b)}, \\dots, \\bd X_{m_1}^{0(b)} \\right\\}$ and ${\\mathcal{S}}_{\\rm lo}^{0(b)} = \\left\\{ \\bd {X}_{m_1+ 1}^{0(b)}, \\dots, \\bd {X}_{m_1+m_2}^{0(b)} \\right\\}$, where $m_1 + m_2 = m$, the subscripts ``ts\" and ``lo\" stand for \\textit{train-scoring} and \\textit{left-out} respectively, and the superscript $b\\in\\{1,\\ldots, B\\}$ indicates the $b$-th random split. \\jjl{We also randomly split} $\\mathcal{S}^1$ \\jjl{for $B$} times into $\\mathcal{S}_{\\rm ts}^{1(b)} = \\left\\{ \\bd X_1^{1(b)}, \\dots, \\bd X_{n_1}^{1(b)} \\right\\}$ and $\\mathcal{S}_{\\rm lo}^{1(b)} = \\left\\{\\bd {X}_{n_1 + 1}^{1(b)}, \\dots, \\bd {X}_{n_1+n_2}^{1(b)} \\right\\}\\,$, where $n_1+n_2=n$ and $b\\in\\{1, \\ldots, B\\}$. \\jjl{In this work, we take an equal-sized split: $m_1 = \\lfloor m\/2 \\rfloor$ and $n_1 = \\lfloor n\/2 \\rfloor$. We leave the possibility of doing a data-adaptive split to future work.}\n\n\n\n\n\nJust like in the definition of population-level criteria, we write our notations more generally to allow \\jjl{for extension to ranking} feature subsets. 
For $A\\subseteq\\{1, \\ldots, d\\}$ with $|A| = l$, recall that the classical oracle restricted to $A$, $\\varphi^*_A(\\bd x)$, is defined in \\eqref{eqn:classical oracle} and that the NP oracle restricted to $A$, $\\varphi^*_{\\alpha A}(\\bd x)$, is defined in \\eqref{eqn: np oracle}. Although these two oracles have different thresholds, $\\pi_0 \/ \\pi_1$ vs. $C^*_{\\alpha A}$, the class-conditional density ratio $p_{1A}(\\cdot)\/ p_{0A}(\\cdot)$ \\jjl{is involved in} in both oracles. The densities $p_{0A}$ and $p_{1A}$ can be estimated respectively from $\\mathcal{S}^{0(b)}_{\\rm ts}$ and $\\mathcal{S}^{1(b)}_{\\rm ts}$ by kernel density estimators,\n\\begin{align}\\label{eqn:kernel density estimates b}\n\\hat{p}_{0A}^{(b)}(\\bd x_A)=\\frac{1}{m_1h_{m_1}^l}\\sum_{i=1}^{m_1} K\\left(\\frac{\\bd X^{0(b)}_{iA}-\\bd x_A}{h_{m_1}}\\right) \\quad \\text{ and } \\quad \\hat{p}_{1A}^{(b)}(\\bd x_A)=\\frac{1}{n_1h_{n_1}^l}\\sum_{i=1}^{n_1} K\\left(\\frac{\\bd X_{iA}^{1(b)}-\\bd x_A}{h_{n_1}}\\right)\\,,\n\\end{align}\nwhere $h_{m_1}$ and $h_{n_1}$ denote the bandwidths, and $K(\\cdot)$ is a kernel in ${\\rm I}\\kern-0.18em{\\rm R}^l$.\n\n\n\n\n\n\\subsection{Sample-level classical ranking criterion}\n\nTo define s-CC, we first construct plug-in classifiers $\\hat\\phi_A^{(b)}(\\bd x) = \\mathds{1}\\left( \\hat{p}_{1A}^{(b)}(\\bd x_A)\/ \\hat{p}_{0A}^{(b)}(\\bd x_A) > m_1\/n_1\\right)$ for $b\\in\\{1, \\ldots, B\\}$, where the threshold level $m_1\/n_1$ is to mimic $\\pi_0 \/ \\pi_1$. If the sample size ratio of the two classes is the same as that in the population, then classifiers $\\hat\\phi_A^{(b)}(\\bd x)$'s would be \\jjl{a good plug-in estimate} of $\\varphi^*_A(\\bd x)$. However, under sampling bias, we cannot correct the threshold estimate without additional information. Armed with the \\jjl{classifier} $\\hat\\phi_A^{(b)}(\\cdot)$ trained on $\\mathcal S_{\\rm ts}^{0(b)} \\cup \\mathcal S_{\\rm ts}^{1(b)}$, we define the \\textit{sample-level classical criterion} \\jjl{of} index set $A$ as\n\\begin{align}\\label{CC}\n\t\\mathrm{CC}_A &:= \\frac{1}{B} \\sum_{b=1}^B \\mathrm{CC}_A^{(b)}\\,,\\\\\\notag\n\t\\text{with } \\mathrm{CC}_A^{(b)} &:= \\frac{1}{m_2+n_2}\\left\\{ \\sum_{i=n_1+1}^{n_1+n_2} \\left[ 1-\\hat{\\phi}^{(b)}_{A}\\left(\\bd X_i^{1(b)}\\right) \\right] + \\sum_{i'=m_1+1}^{m_1+m_2} \\hat{\\phi}_A^{(b)}\\left(\\bd X_{i'}^{0(b)}\\right) \\right\\}\\,.\n\\end{align}\nThe $\\text{CC}_A$ is the average performance of $\\hat\\phi_A^{(b)}(\\cdot)$ over the $B$ random splits on the left-out observations $\\mathcal S_{\\rm lo}^{0(b)} \\cup \\mathcal S_{\\rm lo}^{1(b)}$ for $b\\in\\{1, \\ldots, B\\}$. \n\n\n\n\n\n\\subsection{Sample-level Neyman-Pearson ranking criterion}\\label{sec: construction of NP}\n\n\n\n\nTo define s-NPC, we use the same kernel density estimates to \\jjl{plug in} $p_{1A}(\\cdot)\/ p_{0A}(\\cdot)$, as in s-CC. To \\jjl{estimate} the oracle threshold $C^*_{\\alpha A}$, we use the NP umbrella algorithm \\citep{tong2016neyman}. \\jjl{Unlike s-CC, in which both $\\mathcal S_{\\rm lo}^{0(b)}$ and $\\mathcal S_{\\rm lo}^{1(b)}$ are used to evaluate the constructed classifier, for s-NPC we use $\\mathcal S_{\\rm lo}^{0(b)}$ to estimate the threshold and only $\\mathcal S_{\\rm lo}^{1(b)}$ to evaluate the classifier}. 
\n\nThe NP umbrella algorithm finds proper thresholds for all \textit{scoring-type classification methods} (e.g., nonparametric density ratio plug-in, logistic regression and random forest) so that the resulting classifiers achieve high-probability control of the type I error at the pre-specified level $\alpha$. \jjl{A scoring-type classification method outputs a scoring function that maps the feature space $\mathcal X$ to ${\rm I}\kern-0.18em{\rm R}$, and a classifier is constructed by combining the scoring function with a threshold.} To construct an NP classifier given a scoring-type classification method, the NP umbrella algorithm first trains a scoring function $\hat{s}^{(b)}_A(\cdot)$ on $\mathcal{S}^{0(b)}_{\rm ts} \cup \mathcal{S}^{1(b)}_{\rm ts}\,$. In this work, we specifically use $\hat{s}^{(b)}_A(\cdot) = \hat{p}_{1A}^{(b)}(\cdot)\/ \hat{p}_{0A}^{(b)}(\cdot)$, in which the numerator and the denominator are defined in \eqref{eqn:kernel density estimates b}. Second, the algorithm applies $\hat{s}^{(b)}_A(\cdot)$ to $\mathcal{S}^{0(b)}_{\rm lo}$ to obtain scores $\left\{T_i^{(b)} = \hat{s}^{(b)}_A\left(\bd X^{0(b)}_{m_1+i}\right), i=1,\dots, m_2\right\}$, which are \jjl{then} sorted in increasing order and denoted by $\left\{T_{(i)}^{(b)}, i=1,\dots, m_2\right\}$. Third, for a user-specified type I error upper bound $\alpha \in (0,1)$ and a violation rate $\delta_1 \in(0,1)$\jjl{, which refers to the probability that the type I error of the trained classifier exceeds} $\alpha$, the algorithm chooses the order \n\begin{align*}\n\tk^* = \min \limits_{k=1,\dots, m_2} \left\{k:\sum_{j=k}^{m_2} {m_2\choose j} (1-\alpha)^j \alpha^{m_2-j}\leq \delta_1\right\}\,.\n\end{align*} \nWhen $m_2 \geq \frac{\log \delta_1}{\log(1-\alpha)}\,,$ a finite $k^*$ exists\footnote{If one were to assume a parametric model, one can get rid of the minimum sample size requirement on $m_2$ \citep{Tong.Xia.Wang.Feng.2020}. However, we adopt the non-parametric NP umbrella algorithm \citep{tong2016neyman} to achieve the desirable model-free property of our feature ranking framework.}, and the umbrella algorithm chooses the threshold of the estimated scoring function as \n$$\n\t\widehat{C}_{\alpha A}^{(b)} = T_{(k^*)}^{(b)}\,. \n$$\nThus, the resulting NP classifier is\n\begin{align}\label{eq:NP_classifier}\n\t\hat{\phi}_{\alpha A}^{(b)}(\cdot) = \mathds{1} \left(\hat{s}^{(b)}_A (\cdot) > \widehat{C}_{\alpha A}^{(b)} \right)\,.\n\end{align}\n\n\n\nProposition 1 in \cite{tong2016neyman} proves that the probability that the type I error of the classifier $\hat{\phi}_{\alpha A}^{(b)}(\cdot)$ in \eqref{eq:NP_classifier} exceeds $\alpha$ is no more than $\delta_1$: \n\begin{equation}\n{\rm I}\kern-0.18em{\rm P} \left(R_0 (\hat{\phi}_{\alpha A}^{(b)}) > \alpha\right) \leq\sum_{j=k^*}^{m_2} {m_2\choose j} (1-\alpha)^j \alpha^{m_2-j}\leq \delta_1\,, \label{ineq:npc}\n\end{equation} \nfor every $b = 1,\ldots, B$. We evaluate the type II error of the $B$ NP classifiers $\hat{\phi}^{(1)}_{\alpha A}, \ldots, \hat{\phi}^{(B)}_{\alpha A}$ on the left-out class $1$ sets $\mathcal S_{\rm lo}^{1(1)},\ldots,\mathcal S_{\rm lo}^{1(B)}$ respectively. A computational sketch of the threshold selection and this type II error evaluation is given below.
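Concretely, the threshold selection and the subsequent type II error evaluation can be sketched as follows (function names are ours; the binomial tail in the definition of $k^*$ is evaluated with the survival function of \texttt{scipy.stats.binom}).

\begin{verbatim}
import numpy as np
from scipy.stats import binom

def np_umbrella_threshold(scores_lo0, alpha, delta1):
    """Return C_hat = T_(k*): the k*-th smallest left-out class-0 score,
    where k* is the smallest k such that
    sum_{j>=k} C(m2, j) (1 - alpha)^j alpha^(m2 - j) <= delta1."""
    T = np.sort(np.asarray(scores_lo0))
    m2 = T.size
    # tail[k-1] = P(Binomial(m2, 1 - alpha) >= k) for k = 1, ..., m2
    tail = binom.sf(np.arange(m2), m2, 1.0 - alpha)
    feasible = np.flatnonzero(tail <= delta1)
    if feasible.size == 0:  # occurs when m2 < log(delta1) / log(1 - alpha)
        raise ValueError("m2 is too small for this (alpha, delta1) pair")
    return T[feasible[0]]   # k* = feasible[0] + 1, so T_(k*) = T[feasible[0]]

def npc_one_split(score_fn, X1_lo, scores_lo0, alpha, delta1):
    """Empirical type II error of the NP classifier on the left-out
    class-1 half: the fraction of class-1 scores at or below C_hat."""
    C_hat = np_umbrella_threshold(scores_lo0, alpha, delta1)
    return float(np.mean(score_fn(X1_lo) <= C_hat))
\end{verbatim}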
Our \\textit{sample-level NPC} for index set $A$ at level $\\alpha$, denoted by $\\rm{NPC}_{\\alpha A}$, computes the average of these type II errors: \n\\begin{align}\\label{Npscore}\n \t \\mathrm{NPC}_{\\alpha A} &:=\\frac{1}{B} \\sum_{b=1}^{B} \\mathrm{NPC}_{\\alpha A}^{(b)}\\,,\\\\\\notag\n \t \\text{with } \\mathrm{NPC}_{\\alpha A}^{(b)} &:= \\frac{1}{n_2} \\sum_{i= n_1 + 1}^{n_1+n_2} \\left[ 1-\\hat{\\phi}^{(b)}_{\\alpha A}\\left(\\bd X_i^{1(b)}\\right) \\right] = \\frac{1}{n_2} \\sum_{i=n_1 +1}^{n_1+n_2}\\mathds{1}\\left( \\hat{s}^{(b)}_{A}\\left(\\bd{X}_{iA}^{1(b)}\\right) \\le \\widehat{C}^{(b)}_{\\alpha A}\\right)\\,,\n \\end{align} \n where $\\hat{s}^{(b)}_{A}(\\cdot) = \\hat{p}_{1A}^{(b)}(\\cdot)\/ \\hat{p}_{0A}^{(b)}(\\cdot)$ is the kernel density ratios constructed on $\\mathcal{S}_{\\rm ts}^{0(b)} \\cup\\mathcal{S}_{\\rm ts}^{1(b)}$ using only the features indexed by $A$, and $\\widehat{C}^{(b)}_{\\alpha A} = T_{(k^*)}^{(b)}$ is given by the NP umbrella algorithm. \n \n \n\n\n\n\n\n\n\n\n\n\n\\section{Theoretical properties}\\label{sec:theoretical properties}\n\nThis section investigates the ranking properties of s-CC and s-NPC. Concretely, we wish to address this question: among \\jjl{$J$} candidate feature index sets $A_1, \\ldots, A_J$ of size $l$, is it guaranteed that the s-CC and s-NPC have ranking agreements with the p-CC and p-NPC respectively, with high probability? We consider $J$ as a fixed number in the theory development. We also assume in this section that the number of random splits $B = 1$ in s-CC and s-NPC, and for then simplicity we suppress the super index $(b)$ in all notations in this section and in the Appendix proofs. \n\nIn addition to investigation on ranking consistency, we discover a property unique to s-NPC: the robustness against sampling bias. Concretely, as long as the absolute sample sizes are large enough, s-NPC gives ranking consistent with p-NPC even if class size ratio in the sample is far from that in the population. In contrast, s-CC is not robust against sampling bias, except in the scenario that the population class size ratio $\\pi_0 \/ \\pi_1$ is known and we replace the threshold in the plug-in classifiers for s-CC by this \\jjl{ratio}. \n\n\n\n\n\n\n\n\n\\subsection{Definitions and key assumptions}\nWe assume that the size of candidate index sets $l$ $(\\ll d)$ is moderate. \n Following \\cite{Audibert05fastlearning}, for any multi-index $\\bd t=\\left(t_1, \\ldots, t_l \\right)^{\\mkern-1.5mu\\mathsf{T}} \\in {\\rm I}\\kern-0.18em{\\rm N}^l$ and $\\bd x= \\left( x_1, \\ldots, x_l\\right)^{\\mkern-1.5mu\\mathsf{T}} \\in {\\rm I}\\kern-0.18em{\\rm R}^l$, we define $|\\bd t| = \\sum_{i=1}^{l}t_i$, $\\bd t! = t_1!\\cdots t_l!$, $\\bd x^{\\bd t}=x_1^{t_1} \\cdots x_l^{t_l}$, $\\left\\| \\bd x\\right\\| = \\left( x_1^2 + \\ldots + x_l^2 \\right)^{1\/2}$, and the differential operator $D^{\\bd t} = \\frac{\\partial^{t_1 + \\cdots + t_l}}{\\partial {x_1^{t_1}} \\cdots \\partial {x_l^{t_l}}}$. For all the theoretical discussions, we assume the domain of $p_{0A}$ and $p_{1A}$, \\jjl{i.e.,} the class-conditional densities of $\\bd X_A|(Y=0)$ and $\\bd X_A|(Y=1)$, is $[-1,1]^l$, where $l = |A|$. We denote the distributions of $\\bd X_A|(Y=0)$ and $\\bd X_A|(Y=1)$ by $P_{0A}$ and $P_{1A}$ respectively. \n \n\\begin{definition}[H\\\"{o}lder function class]\\label{def:holder_function_class}\n\tLet $\\beta>0$. Denote by $\\floor*{\\beta}$ the largest integer strictly less than $\\beta$. 
For a $\\floor*{\\beta}$-times continuously differentiable function $g: {\\rm I}\\kern-0.18em{\\rm R}^l \\rightarrow {\\rm I}\\kern-0.18em{\\rm R}$, we denote by $g_{\\bd x}$ its Taylor polynomial of degree $\\floor*{\\beta}$ at a value $\\bd x \\in {\\rm I}\\kern-0.18em{\\rm R}^l$:\n$$g_{\\bd x}^{(\\beta)}(\\cdot) = \\sum_{{\\left| {\\bd t}\\right|}\\leq \\floor*{\\beta}} \\frac{\\left(\\cdot - {\\bd x}\\right)^{\\bd t}}{{\\bd t}!} D^{\\bd t}g\\left({\\bd x}\\right).$$ \\par\nFor $L >0 $, the $\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$-H\\\"{o}lder function class, denoted by $\\Sigma\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$, is the set of $\\floor*{\\beta}$-times continuously differentiable functions $g: {\\rm I}\\kern-0.18em{\\rm R}^l \\rightarrow {\\rm I}\\kern-0.18em{\\rm R}$ that satisfy the following inequality:\n$$\\left| g\\left( {\\bd x}\\right) -g_{\\bd x}^{(\\beta)}\\left( {\\bd x}^{\\prime}\\right) \\right| \\leq L\\left\\| {\\bd x}- {\\bd x}^{\\prime} \\right\\|^{\\beta}\\,, \\quad \\text{ for all } {\\bd x}, {\\bd x}^{\\prime} \\in \\left[-1, 1\\right]^l\\,.$$\n\\end{definition}\n\n\\begin{definition}[H\\\"{o}lder density class]\\label{def:holder_density_class}\n\tThe $\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$-H\\\"{o}lder density class is defined as $$\\mathcal{P}_{\\Sigma} \\left( \\beta, L, \\left[-1, 1\\right]^l\\right)= \\left\\{ p: p \\geq 0, \\int p=1, p \\in \\Sigma\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)\\right\\}\\,.$$ \n\\end{definition}\n\n\nThe following $\\beta$-valid kernels are multi-dimensional analog of univariate higher order kernels.\n\\begin{definition}[$\\beta$-valid kernel]\\label{definition1}\nLet $K(\\cdot)$ be a real-valued kernel function on ${\\rm I}\\kern-0.18em{\\rm R}^l$ with the support $[-1,1]^l$\\,. For a fixed $\\beta>0$\\,, the function $K(\\cdot)$ is a $\\beta$-valid kernel if it satisfies (1) $\\int |K|^q <\\infty$ for any $q\\geq 1$, (2) $\\int \\|\\bd u \\|^\\beta|K(\\bd u)|d\\bd u <\\infty$, and (3) in the case $\\floor* \\beta \\geq 1$\\,, $\\int \\bd u^{\\bd t} K(\\bd u)d\\bd u = 0 $ for any $\\bd t =(t_1, \\dots, t_l) \\in \\mathbb N^l$ such that $1\\le |\\bd t| \\le\\floor* \\beta$\\,.\n\\end{definition}\n\nOne example of $\\beta$-valid kernels is the product kernel whose ingredients are kernels of order $\\beta$ in $1$ dimension:\n$$\n\\widetilde K (\\bd x) = K(x_1)K(x_2)\\cdots K(x_l)\\mathds{1}(\\bd x\\in[-1,1]^l)\\,,\n$$\nwhere $K$ is a 1-dimensional $\\beta$-valid kernel and is constructed based on Legendre polynomials. Such kernels have been considered in \\cite{RigVer09}. When a $\\beta$-valid kernel is constructed out of Legendre polynomials, it is also Lipschitz and bounded. 
For simplicity, we assume that all the $\\beta$-valid kernels considered in the theory discussion are constructed from Legendre polynomials.\n\n\n\\begin{definition}[Margin assumption]\\label{def: margin_assumpion}\n\tA function $f(\\cdot)$ satisfies the margin assumption of the order $\\bar{\\gamma}$ at the level $C$, with respect to the probability distribution $P$ of a random vector $\\bd X$, if there exist positive constants $\\bar{C}$ and $\\bar{\\gamma}$, such that for all $\\delta \\geq 0$,\n$$P \\left(\\left| f\\left(\\bd X\\right) - C \\right| \\leq \\delta\\right) \\leq \\bar C \\delta^{\\bar{\\gamma}}\\,.$$\n\\end{definition}\n\nThe above condition for densities was first introduced in \\citet{polonik1995measuring}, and its counterpart in the classical binary classification was called margin condition \\citep{MamTsy99}, which is a low noise condition. \nRecall that the set $\\{\\bd x: \\eta(\\bd x)=1\/2\\}$ is the decision boundary of the classical oracle classifier, and the margin condition in the classical paradigm is a special case of Definition \\ref{def: margin_assumpion} by taking $f = \\eta$ and $C=1\/2$. Unlike the classical paradigm where the optimal threshold $1\/2$ on regression function $\\eta$ is known, the optimal threshold level in the NP paradigm is unknown and needs to be estimated, suggesting the necessity of having sufficient data around the decision boundary to detect it. This concern motivated \\cite{tong2013plug} to formulate a detection condition that works as an opposite force to the margin assumption, and \\cite{zhao2016neyman} improved upon it and proved its necessity in bounding the excess type II error of an NP classifier. To establish ranking consistency properties of s-NPC, a bound on the excess type II error is an intermediate result, so we also need this \\jjl{detection condition} for our current work. \n\n\n\n\n\n\n\n\n\\begin{definition}[Detection condition \\citep{zhao2016neyman}]\\label{def:detection_assumption}\n\tA function $f(\\cdot)$ satisfies the detection condition of the order $\\underaccent{\\bar}{\\gamma}$ at the level $(C, \\delta^*)$ with respect to the probability distribution $P$ of a random vector $\\bd X$, if there exists a positive constant $\\underaccent{\\bar}C$, such that for all $\\delta\\in\\left(0, \\delta^*\\right) $,\n$$P\\left( C \\leq f\\left(\\bd X\\right) \\leq C + \\delta \\right) \\geq \\underaccent{\\bar}C \\delta^{\\underaccent\\bar{\\gamma}} \\,.$$\n\\end{definition}\n\n\n\n\\subsection{A uniform deviation result of the scoring function}\n\nFor $A\\subseteq\\{1, \\ldots, d\\}$ and $|A| = l$, recall that we estimate $p_{0A}$ and $p_{1A}$ respectively from $\\mathcal{S}^0_{\\rm ts}$ and $\\mathcal{S}^1_{\\rm ts}$ by kernel density estimators,\n\\begin{align}\\label{eqn:kernel density estimates}\n\\hat{p}_{0A}(\\bd x_A)=\\frac{1}{m_1h_{m_1}^l}\\sum_{i=1}^{m_1} K\\left(\\frac{\\bd X^0_{iA}-\\bd x_A}{h_{m_1}}\\right) \\quad \\text{ and } \\quad \\hat{p}_{1A}(\\bd x_A)=\\frac{1}{n_1h_{n_1}^l}\\sum_{i=1}^{n_1} K\\left(\\frac{\\bd X_{iA}^1-\\bd x_A}{h_{n_1}}\\right)\\,,\n\\end{align}\nwhere $h_{m_1}$ and $h_{n_1}$ denote the bandwidths, and $K(\\cdot)$ is a $\\beta$-valid kernel in ${\\rm I}\\kern-0.18em{\\rm R}^l$. 
We are interested in deriving a high probability bound for $\\left\\| \\hat p_{1A}(\\bd x_A)\/\\hat p_{0A}(\\bd x_A) - p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A)\\right\\|_{\\infty}$.\n\n\n\\begin{condition}\\label{condition: 1}\nSuppose that the densities satisfy\n\\begin{itemize}\n\\item[(i)] There exist positive constants $\\mu_{\\min}$ and $\\mu_{\\max}$ such that $\\mu_{\\max}\\geq p_{0A}\\geq \\mu_{\\min}$ and $\\mu_{\\max}\\geq p_{1A}\\geq \\mu_{\\min}$ for all $A\\subset\\{1 \\ldots, d\\}$ satisfying $|A|=l$.\n\\item[(ii)] There is a positive constant $L$ such that $p_{0A}, p_{1A}\\in\\mathcal{P}_{\\Sigma}(\\beta, L, [-1, 1]^{l})$ for all $A\\subset\\{1 \\ldots, d\\}$ satisfying $|A| = l$. \n\\end{itemize}\n\n\n\\end{condition}\n\n\n \\begin{proposition}\\label{lem:bound_s_shat_for_plugin}\nAssume Condition \\ref{condition: 1} and let the kernel $K$ be $\\beta$-valid and $L^\\prime$-Lipschitz. Let $A \\subseteq\\{1, \\ldots, d\\}$ and $|A| = l$. Let $\\hat p_{0A}(\\cdot)$ and $\\hat p_{1A}(\\cdot)$ \\jjl{be} kernel density estimates defined in \\eqref{eqn:kernel density estimates}. Take the bandwidths $h_{m_1}=\\left(\\frac{\\log m_1}{m_1}\\right)^{\\frac{1}{2\\beta+l}}$ and $h_{n_1}=\\left(\\frac{\\log n_1}{n_1}\\right)^{\\frac{1}{2\\beta+l}}$. For any $\\delta_3 \\in (0,1)$, if sample \\jjl{sizes} $m_1 = |\\mathcal{S}_{\\rm ts}^0|$ and $n_1 = |\\mathcal{S}_{\\rm ts}^1|$ satisfy \\[\n \t\\sqrt{\\frac{\\log\\left(2m_1\/\\delta_3\\right)}{m_1h_{m_1}^{l}}} < 1\\wedge \\frac{\\mu_{\\min}}{2 C_0} \\,, \\quad \\sqrt{\\frac{\\log\\left(2n_1\/\\delta_3\\right)}{n_1h_{n_1}^{l}}}< 1, \\quad n_1 \\wedge m_1 \\geq 2\/\\delta_3\\,,\\quad \n \t\\] \nwhere $C_{0}=\\sqrt{48c_{1}} + 32c_{2}+2Lc_{3}+L'+L+C\\sum_{1\\leq|\\bd q|\\leq\\lfloor\\beta\\rfloor}\\frac{1}{\\bd q!}$, in which $c_{1}=\\mu_{\\max}\\|K\\|^2$, $c_{2}=\\|K\\|_{\\infty}+\\mu_{\\max}+\\int|K||\\bd t|^{\\beta}d\\bd t$, $c_{3}=\\int |K||\\bd t|^{\\beta}d\\bd t$ and $C$ is such that\\\\ $C \\geq \\sup_{1\\leq|\\bd q|\\leq\\lfloor \\beta\\rfloor}\\sup_{\\bd x_A\\in[-1, 1]^l}|D^{\\bd q}p_{0A}(\\bd x_A)|$. Then there exists a positive constant $\\widetilde{C}$ that does not depend on $A$, such that we have with probability at least $1-\\delta_3$, \\[\n \t\\left\\| \\hat p_{1A}(\\bd x_A)\/\\hat p_{0A}(\\bd x_A) - p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A)\\right\\|_{\\infty} \\leq \\widetilde{C}\\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\beta\/(2\\beta+l)} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\beta\/(2\\beta+l)} \\right]\\,.\n \t\\]\n\n\n \\end{proposition}\n\n\n\n\\subsection{Ranking property of s-CC}\\label{sec:theoretic_plug-in-CC}\n\nTo study the ranking agreement between s-CC and p-CC, an essential step is to develop a concentration result between $\\text{CC}_A$ and $R(\\varphi^*_A)$, where $\\varphi^*_{A}$ was defined in \\eqref{eqn:classical oracle}. \n\n\n\\begin{proposition}\\label{prop: CC1}\nLet $\\delta_3, \\delta_4, \\delta_5\\in (0, 1)$. 
In addition to the assumptions of Proposition \ref{lem:bound_s_shat_for_plugin}, assume that the density ratio $s_A(\cdot) = p_{1A}(\cdot)\/p_{0A}(\cdot)$ satisfies the margin assumption of order $\bar\gamma$ at level $\pi_0 \/ \pi_1$ (with constant $\bar C$) with respect to both $P_{0A}$ and $P_{1A}$, that $m_2 \geq (\log\frac{2}{\delta_5})^2$ and $n_2 \geq (\log\frac{2}{\delta_4})^2$, and that $m \/ n = m_1 \/ n_1 = \pi_0 \/ \pi_1$. \nThen we have with probability at least $1-\delta_3-\delta_4-\delta_5$, \n$$\n\left| \mathrm{CC}_{A} - R \left( {\varphi}^*_{A} \right)\right|\leq \widetilde C \left[\left( \frac{\log m_1}{m_1}\right)^{\frac{\beta\bar\gamma}{2\beta+l}} + \left( \frac{\log n_1}{n_1}\right)^{\frac{\beta\bar\gamma}{2\beta+l}} + m_2^{-\frac{1}{4}} + n_2^{-\frac{1}{4}} \right]\,,\n$$\t \nfor some positive constant $\widetilde C$ that does not depend on $A$. \n\end{proposition}\n\n\nProposition \ref{lem:bound_s_shat_for_plugin} is essential to establish Proposition \ref{prop: CC1}, which in turn leads to the ranking consistency of s-CC. \n\n\n\n\begin{theorem}\label{thm:selection_consistency_cc}\nLet $\delta_3$, $\delta_4$, $\delta_5\in (0,1)\,,$ $A_1, \ldots, A_J \subseteq\left\{1,\ldots, d \right\}$ and $|A_1| = |A_2|=\ldots = |A_J| = l$. We consider both $J$ and $l$ to be constants that do not diverge with the sample sizes. In addition to the assumptions in Proposition \ref{prop: CC1}, assume that the \jjl{p-CC's} of these feature index sets are separated by some margin $g>0$; in other words, \n$$\n\t \min \limits_{i \in \{1,\dots, J-1\}}\left\{ R\left( {\varphi}^*_{A_{i+1}}\right) - R\left( {\varphi}^*_{A_i}\right) \right\} > g\,. \n$$ \nIn addition, assume $m_1, m_2, n_1, n_2$ satisfy that \n\begin{equation}\label{eqn:sample size requirement}\n\widetilde C \left[\left( \frac{\log m_1}{m_1}\right)^{\frac{\beta\bar\gamma}{2\beta+l}} + \left( \frac{\log n_1}{n_1}\right)^{\frac{\beta\bar\gamma}{2\beta+l}} + m_2^{-\frac{1}{4}} + n_2^{-\frac{1}{4}} \right] < \frac{g}{2}\,, \n\end{equation}\nwhere $\widetilde C$ is the generic constant in Proposition \ref{prop: CC1}. \nThen with probability at least $1 - J(\delta_3+\delta_4+\delta_5)$, $\mathrm{CC}_{A_i} < \mathrm{CC}_{A_{i+1}}$ for all $i = 1, \ldots, J-1$. That is, the \jjl{s-CC} ranks $A_1, \ldots, A_J$ the same as the \jjl{p-CC}. \n\end{theorem}\n\n\n\begin{remark}\nIf the sample size ratio $m\/n$ is far from $\pi_0\/\pi_1$, we cannot expect a concentration result on $\left| \mathrm{CC}_{A} - R \left( {\varphi}^*_{A} \right)\right|$, such as Proposition \ref{prop: CC1}, to hold. As such a concentration result is a cornerstone of ranking consistency between s-CC and p-CC, we conclude that the classical criterion is not robust \jjl{to} sampling bias. \t\n\end{remark}\n\n\n\n\n\n\subsection{Ranking property of s-NPC}\label{sec:theoretic_plug-in}\n\nTo establish ranking agreement between s-NPC and p-NPC, an essential step is to develop a concentration result of $\mathrm{NPC}_{\alpha A}$ around $R_1(\varphi^*_{\alpha A})$, where $\varphi^*_{\alpha A}$ was defined in \eqref{eqn: np oracle}. Recall that $\hat \phi_{\alpha A}(\bd x) = \mathds{1}(\hat s_A(\bd x_A) > \widehat C_{\alpha A}) = \mathds{1}(\hat p_{1A}(\bd x_A)\/\hat p_{0A}(\bd x_A) > \widehat C_{\alpha A})$, where $\widehat C_{\alpha A}$ is determined by the NP umbrella classification algorithm.
We always assume that the cumulative distribution function of $\\hat s_{A} (\\bd X_A), \\text{ where } \\bd X\\sim P_0$, is continuous. \n\n\n\\begin{lemma} \\label{lem:kprime} \nLet $\\alpha, \\delta_1,\\delta_2 \\in (0,1)\\,.$\nIf $m_2 = \\left| \\mathcal{S}_{\\rm lo}^0 \\right| \\geq \\frac{4}{\\alpha\\delta_1}\\,$, then the classifier $\\hat{\\phi}_{\\alpha A}$ satisfies with probability at least $1-\\delta_1-\\delta_2 \\,,$ \n\\begin{align} \\label{eq: R0_concentration} \n\t\\left|R_0(\\hat{\\phi}_{\\alpha A}) - R_0(\\varphi^*_{\\alpha A}) \\right|\\leq \\xi\\,,\n\\end{align}\nwhere\n\\[\n\t\\xi = \\sqrt{\\frac{\\ceil*{ d_{\\alpha,\\delta_1,m_2} \\left(m_2+1\\right)}\\left(m_2+1-\\ceil*{ d_{\\alpha,\\delta_1,m_2} \\left(m_2+1\\right)}\\right)}{(m_2+2)(m_2+1)^2\\,\\delta_2}} + d_{\\alpha,\\delta_1,m_2} + \\frac{1}{m_2+1} - (1-\\alpha)\\,,\n\\]\n\\[\n\t d_{\\alpha,\\delta_1,m_2} = \\frac{1+ 2\\delta_1 (m_2+2) (1-\\alpha) + \\sqrt{1+ 4\\delta_1(m_2+2)(1-\\alpha)\\alpha}}{2\\left\\{ \\delta_1(m_2+2)+1\\right\\}}\\,,\n\\]\nand $\\ceil*{z}$ denotes the smallest integer larger than or equal to $z$. Moreover, if $m_2 \\geq \\max(\\delta_1^{-2}, \\delta_2^{-2})$, we have \n$\n\\xi \\leq ({5}\/{2}){m_2^{-1\/4}}.\n$\t\\end{lemma}\n\nLemma \\ref{lem:kprime} and a minor modification of proof for Proposition 2.4 in \\cite{zhao2016neyman} lead to the next proposition. We can prove the same upper bound for $\\left|R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})\\right|$ as that for the excess type II error $R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})$ in \\cite{zhao2016neyman}. \n\n\n\n\n\n\n\n\n \n\n\n\n\n\\begin{proposition}\\label{prop:2}\nLet $\\alpha, \\delta_1, \\delta_2 \\in (0,1)$. Assume that the density ratio $s_A(\\cdot) = p_{1A}(\\cdot)\/p_{0A}(\\cdot)$ satisfies the margin assumption of order $\\bar\\gamma$ at level $C^*_{\\alpha A}$ (with constant $\\bar C$) and detection condition of order $\\underaccent{\\bar}\\gamma$ at \nlevel $(C^*_{\\alpha A}, \\delta^*)$ (with constant $\\underaccent{\\bar} C$), both with respect to distribution $P_{0A}$. \n\\noindent\nIf $m_2 \\geq \\max\\{\\frac{4}{\\alpha \\delta_1}, \\delta_1^{-2}, \\delta_2^{-2}, (\\frac{2}{5}\\underaccent{\\bar}C{\\delta^*}^{\\uderbar\\gamma})^{-4}\\}$, the excess type II error of the classifier $\\hat{\\phi}_{\\alpha A}$ satisfies with probability at least $1-\\delta_1-\\delta_2$,\n\\begin{align*}\n&\\left|R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})\\right|\\\\\n&\\leq\\, \n2\\bar C \\left[\\left\\{\\frac{|R_0( \\hat{\\phi}_{\\alpha A}) - R_0( \\varphi^*_{\\alpha A})|}{\\underaccent{\\bar}C}\\right\\}^{1\/\\uderbar{\\gamma}} + 2 \\| \\hat s_A - s_A \\|_{\\infty} \\right]^{1 + \\bar\\gamma} \n+ C^*_{\\alpha A} |R_0( \\hat{\\phi}_{\\alpha A}) - R_0( \\varphi^*_{\\alpha A})|\\\\\n&\\leq\\,\n2\\bar C \\left[\\left(\\frac{2}{5}m_2^{1\/4}\\underaccent{\\bar}C\\right)^{-1\/\\uderbar{\\gamma}} + 2 \\| \\hat s_A - s_A \\|_{\\infty} \\right]^{1 + \\bar\\gamma} \n+ C^*_{\\alpha A} \\left(\\frac{2}{5} m_2^{1\/4}\\right)^{-1}\\,.\n\\end{align*}\n\\end{proposition}\n\n\n\n\n\t\n\n\n\n\nPropositions \\ref{lem:bound_s_shat_for_plugin} and \\ref{prop:2} lead to the following result. \n\n\n\n\n\n\\begin{theorem}\\label{thm:1}\nLet $\\alpha$, $\\delta_1$, $\\delta_2$, $\\delta_3$, $\\delta_4$ $\\in (0,1)$, and $l = |A|$. 
In addition to the assumptions of Propositions \\ref{lem:bound_s_shat_for_plugin} and \\ref{prop:2}, assume $n_2 \\geq \\left(\\log\\frac{2}{\\delta_4}\\right)^2$,\nthen we have with probability at least $1-(\\delta_1+\\delta_2+\\delta_3 +\\delta_4),$ \n$$\n\\left| \\mathrm{NPC}_{\\alpha A} - R_1 \\left( {\\varphi}^*_{\\alpha A} \\right)\\right|\\leq \\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + m_2^{-(\\frac{1}{4}\\wedge \\frac{1+\\bar\\gamma}{\\underaccent{\\bar}\\gamma})} + n_2^{-\\frac{1}{4}} \\right]\\,,\n$$\t \nfor some positive constant $\\widetilde C$ that does not depend on $A$. \n\\end{theorem}\n\n\n\n\n\n\nUnder smoothness and regularity conditions and sample size requirements, Theorem \\ref{thm:1} shows the concentration of $\\mathrm{NPC}_{\\alpha A}$ around $R_1 \\left( {\\varphi}^*_{\\alpha A}\\right)$ with probability at least $1-(\\delta_1+\\delta_2+\\delta_3+\\delta_4)$. The user-specified violation rate $\\delta_1$ represents the uncertainty that the type I error of an NP classifier $\\hat \\phi_{\\alpha A}$ exceeds $\\alpha$, leading to the underestimation of $R_1 ( {\\varphi}^*_{\\alpha A} )$; $\\delta_2$ accounts for possibility of unnecessarily stringent control on the type I error, which results in the overestimation of $R_1 ( {\\varphi}^*_{\\alpha A} )$; $\\delta_3$ accounts for the uncertainty in training scoring function $\\hat s_A(\\cdot)$ on a finite sample; and $\\delta_4$ represents the uncertainty of using leave-out class $1$ observations $\\mathcal{S}^1_{\\rm lo}$ to estimate $R_1(\\hat\\phi_{\\alpha A})$. Note that while the $\\delta_1$ parameter serves both as the input of the construction of s-NPC and as a restriction to the sample sizes, other parameters $\\delta_2$, $\\delta_3$ and $\\delta_4$ only have the latter role. Like the constant $C_0$ in Proposition \\ref{lem:bound_s_shat_for_plugin}, the generic constant $\\widetilde C$ in Theorem \\ref{thm:1} can be provided more explicitly, but it would be too cumbersome to do so. \n\n\n\n\n\n \n \n \n\n\n\n\\begin{theorem}\\label{thm:selection_consistency_plugin}\nLet $\\alpha$, $\\delta_1$, $\\delta_2$, $\\delta_3$, $\\delta_4 \\in (0,1)\\,,$ $A_1, \\ldots, A_J \\subseteq\\left\\{1,\\ldots, d \\right\\}$ and $|A_1| = |A_2|=\\ldots = |A_J| = l$. We consider both $J$ and $l$ to be constants that do not diverge with the sample sizes. In addition to the assumptions in Theorem \\ref{thm:1}, assume that the p-NPC's of these feature index sets are separated by some margin $g>0$; in other words, \n$$\n\t \\min \\limits_{i \\in \\{1,\\dots, J-1\\}}\\left\\{ R_1\\left( {\\varphi}^*_{\\alpha A_{i+1}}\\right) - R_1\\left( {\\varphi}^*_{\\alpha A_i}\\right) \\right\\} > g\\,. \n$$ \nIn addition, assume $m_1, m_2, n_1, n_2$ satisfy that \n\\begin{equation}\\label{eqn:sample size requirement}\n\\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + m_2^{-(\\frac{1}{4}\\wedge \\frac{1+\\bar\\gamma}{\\underaccent{\\bar}\\gamma})} + n_2^{-\\frac{1}{4}} \\right] < \\frac{g}{2}\\,, \n\\end{equation}\nwhere $\\widetilde C$ is the generic constant in Theorem \\ref{thm:1}. \nThen with probability at least $1 - J(\\delta_1+\\delta_2+\\delta_3+\\delta_4)$, $\\mathrm{NPC}_{\\alpha A_i} < \\mathrm{NPC}_{\\alpha A_{i+1}}$ for all $i = 1, \\ldots, J-1$. 
In other words, the s-NPC ranks $A_1, \ldots, A_J$ the same as the p-NPC. \n\end{theorem}\n\n\begin{remark}\nThe conclusion in Theorem \ref{thm:selection_consistency_plugin} also holds under sampling bias, i.e., when the sample sizes $n$ (of class $1$) and $m$ (of class $0$) do not reflect the population proportions $\pi_0$ and $\pi_1$. \t\n\end{remark}\n\n\nHere we offer some intuition about the robustness of NPC against sampling bias. Note that the objective and constraint of the NP paradigm only involve the class-conditional feature distributions, not the class proportions. Hence, the p-NPC does not rely on the class proportions. Furthermore, in s-NPC the class-conditional densities are estimated separately within each class, not involving the class proportions either. It is also worth noting that the proof of Theorem \ref{thm:selection_consistency_plugin} (in Appendix) does not use the relation between the ratio of sample class sizes and that of the population class sizes. \n\n\n\n \n\section{Simulation studies} \label{sec:simulation}\n\nThis section contains simulation studies regarding the practical performance of s-CC and s-NPC in ranking features. We first demonstrate that s-CC and s-NPC rank the two features differently in the toy example (Figure \ref{fig:toy example 1}), and that their ranks are consistent with their population-level counterparts with high probability. Next we show the performance of s-CC and s-NPC in ranking features under both low-dimensional and high-dimensional settings. Lastly, we compare s-CC and s-NPC with four approaches: the Pearson correlation, the distance correlation \citep{szekely2009brownian}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, which have been commonly used for marginal feature ranking in practice. \jjl{In all the simulation studies, we set the number of random splits $B=11$ for s-CC and s-NPC, so that we can achieve reasonably stable criteria and meanwhile finish thousands of simulation runs in a reasonable time.} \n\n\subsection{Revisiting the toy example at the sample level} \n\nWe simulate $1000$ samples, each of size $n=2000$, from the two-feature distribution defined in (\ref{eq:toy_example}).\nWe apply s-CC (\ref{CC}) and s-NPC with $\delta_1 = .05$ (\ref{Npscore}) to each sample to rank the two features, and we calculate the frequency of each feature being ranked the top among the $1000$ ranking results. \nTable \ref{tab:toy_example} shows that s-NPC ($\alpha = .01$) ranks feature $2$ the top with high probability ($98.4\%$ frequency), while s-CC and s-NPC ($\alpha = .20$) prefer feature $1$ with high probability. This is consistent with our population-level result: p-NPC ($\alpha=.01$) prefers feature $2$, while p-CC and p-NPC ($\alpha=.20$) find feature $1$ better, as we calculate using closed-form formulas in Section \ref{sec:NPC_population}.
Hence, this provides a numerical support to Theorems \\ref{thm:selection_consistency_cc} and \\ref{thm:selection_consistency_plugin}.\n\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:toy_example}The frequency of each feature being ranked the top by each criterion among $1,000$ samples in the toy example (Figure \\ref{fig:toy example 1}).}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\nCriterion & Feature $1$ & Feature $2$\\\\\n\\hline\ns-CC & $78.0\\%$ & $22.0\\%$ \\\\\ns-NPC ($\\alpha = .01$) & $1.6\\%$ & $98.4\\%$ \\\\\ns-NPC ($\\alpha = .20$) & $99.0\\%$ & $1.0\\%$\\\\\n\\hline\t\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Ranking low-dimensional features at the sample level}\\label{sec:sim_low_dim}\nWe next demonstrate the performance of s-CC and s-NPC in ranking features when $d$, the number of features, is much smaller than $n$. Two simulation studies are designed to support our theoretical results in Theorems \\ref{thm:selection_consistency_cc} and \\ref{thm:selection_consistency_plugin}. \n\nFirst, we generate data from the following two-class Gaussian model with $d=30$ features, among which we set the first $s=10$ features to be informative (a feature is informative if and only if it has different marginal distributions in the two classes). \n\\begin{align}\\label{eq:best_subset}\n\t\\bd X \\given (Y=0) &\\sim \\mathcal{N}(\\bd\\mu^0, \\bd\\Sigma)\\,, & \\bd X \\given (Y=1) &\\sim \\mathcal{N}(\\bd\\mu^1, \\bd\\Sigma)\\,, & {\\rm I}\\kern-0.18em{\\rm P}(Y=1) = .5\\,,\n\\end{align}\nwhere $\\bd\\mu^0 = (\\underbrace{-1.5,\\ldots,-1.5}_{10}, \\mu_{11}, \\ldots, \\mu_{30})^{\\mkern-1.5mu\\mathsf{T}}$, $\\bd\\mu^1 = (\\underbrace{1,.9,\\ldots,.2,.1}_{10}, \\mu_{11}, \\ldots, \\mu_{30})^{\\mkern-1.5mu\\mathsf{T}}$, with $\\mu_{11}, \\ldots, \\mu_{30}$ independently and identically drawn from $\\mathcal N(0,1)$ and then held fixed, and $\\bd\\Sigma = 4 \\, \\mathbf{I}_{30}$. In terms of population-level criteria p-CC and p-NPC, a clear gap exists between the first $10$ informative features and the rest features, yet the $10$ features themselves have increasing criterion values but no obvious gaps. That is, the first 10 features have true ranks going down from 1 to 10, and the rest features are tied in true ranks. \n\nWe simulate $1000$ samples of size $n=400$\\footnote{The minimum sample size required for $m_2$, class $0$ sample size reserved for estimating the threshold, in the NP umbrella algorithm is $59$ when $\\alpha = \\delta_1 = .05$. We set the overall sample size to $400$, so that the expected $m_2$ is $100$; then the realized $m_2$ is larger than $59$ with high probability. } or $1000$ from the above model. We apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}), five criteria in total, to each sample to rank the $30$ features. That is, for each feature, we obtain $1000$ ranks by each criterion. We summarize the average rank of each feature by each criterion in Tables \\ref{tab:avg_rank_d30_n400} and \\ref{tab:avg_rank_d30_n1000}, and we plot the distribution of ranks of each feature in Figures \\ref{fig:avg_rank_d30_n400} and \\ref{fig:avg_rank_d30_n1000}. The results show that all criteria clearly distinguish the first 10 informative features from the rest. For s-NPC, we observe that its ranking is more variable for a smaller $\\alpha$ (e.g., $0.05$). 
This is expected because, when $\\alpha$ becomes smaller, the threshold in the NP classifiers would have an inevitably larger variance and lead to a more variable type II error estimate, i.e., s-NPC. As the sample size increases from $400$ (Table \\ref{tab:avg_rank_d30_n400}) to $1000$ (Table \\ref{tab:avg_rank_d30_n1000}), all criteria achieve greater agreement with the true ranks. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d30_n400}Average ranks of the first $20$ features by each criterion with $d=30$ and $n=400$ under the Gaussian setting.}\n\\centering\n\\small\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 2.19 & 2.03 & 3.45 & 4.94 & 5.60 & 6.28 & 5.80 & 7.05 & 8.84 & 8.82 \\\\ \n s-NPC ($\\alpha = .05$) & 2.17 & 3.73 & 4.04 & 6.43 & 5.37 & 5.11 & 6.21 & 9.35 & 8.97 & 8.54 \\\\ \n s-NPC ($\\alpha = .10$) & 1.91 & 4.43 & 4.34 & 3.26 & 5.99 & 6.93 & 6.39 & 7.17 & 6.89 & 7.85 \\\\ \n s-NPC ($\\alpha = .20$) & 2.39 & 3.67 & 3.50 & 3.51 & 6.35 & 4.70 & 5.91 & 7.82 & 8.84 & 8.32 \\\\ \n s-NPC ($\\alpha = .30$) & 1.96 & 2.54 & 3.86 & 4.40 & 5.65 & 5.21 & 6.53 & 7.14 & 8.67 & 9.04 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 19.80 & 21.75 & 21.36 & 16.34 & 18.79 & 21.53 & 22.60 & 18.89 & 17.26 & 23.31 \\\\ \n s-NPC ($\\alpha = .05$) & 15.38 & 21.58 & 22.65 & 21.47 & 17.09 & 21.30 & 20.79 & 21.65 & 20.96 & 18.15 \\\\ \n s-NPC ($\\alpha = .10$) & 20.66 & 23.62 & 18.73 & 23.01 & 21.69 & 19.03 & 23.05 & 18.83 & 20.77 & 20.33 \\\\ \n s-NPC ($\\alpha = .20$) & 20.81 & 17.65 & 21.73 & 21.67 & 17.50 & 21.30 & 20.30 & 22.75 & 18.18 & 23.84 \\\\ \n s-NPC ($\\alpha = .30$) & 16.72 & 22.23 & 19.93 & 19.27 & 19.80 & 21.97 & 19.29 & 19.92 & 18.95 & 19.75 \\\\ \n \\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d30_n1000}Average ranks of the first $20$ features by each criterion with $d=30$ and $n=1,000$ under the Gaussian setting.}\n\\centering\n\\small\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 2.21 & 2.28 & 2.73 & 4.09 & 4.64 & 6.14 & 6.93 & 7.93 & 8.71 & 9.34 \\\\ \n s-NPC ($\\alpha$ = .05) & 2.55 & 2.60 & 4.21 & 4.44 & 4.28 & 6.43 & 6.48 & 6.99 & 8.22 & 8.80 \\\\ \n s-NPC ($\\alpha$ = .10) & 1.97 & 2.76 & 2.72 & 4.49 & 4.26 & 6.63 & 6.74 & 7.67 & 8.72 & 9.04 \\\\ \n s-NPC ($\\alpha$ = .20) & 1.36 & 2.35 & 3.23 & 4.19 & 4.67 & 5.93 & 7.02 & 8.24 & 8.75 & 9.24 \\\\ \n s-NPC ($\\alpha$ = .30) & 1.85 & 2.73 & 2.71 & 3.58 & 5.18 & 6.11 & 6.80 & 8.04 & 9.01 & 8.99 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 18.65 & 18.19 & 20.78 & 19.92 & 23.99 & 18.60 & 19.87 & 22.16 & 21.70 & 21.61 \\\\ \n s-NPC ($\\alpha$ = .05) & 22.07 & 20.25 & 21.63 & 18.63 & 17.00 & 22.16 & 19.80 & 23.05 & 19.68 & 20.84 \\\\ \n s-NPC ($\\alpha$ = .10) & 20.37 & 19.67 & 22.67 & 20.15 & 19.31 & 19.58 & 21.61 & 18.53 & 20.51 & 22.49 \\\\ \n s-NPC ($\\alpha$ = .20) & 19.10 & 20.26 & 18.08 & 20.69 & 22.15 & 22.65 & 18.19 & 21.55 & 23.79 & 20.48 \\\\ \n s-NPC ($\\alpha$ = .30) & 18.19 & 19.32 & 20.80 & 16.88 & 22.97 & 21.70 & 19.81 & 23.49 & 19.24 & 20.95 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nSecond, we generate data from the following two-class Chi-squared distributions of $d=30$ features, among which we still set the first $s=10$ features to be informative.\n\\begin{align}\\label{eq:chisq}\n\t\\bd X_{\\{j\\}} \\given (Y=0) &\\sim \\chi^2_1\\,, \\; 
j=1,\\ldots,30 \\\\\n\t\\bd X_{\\{1\\}} \\given (Y=1) &\\sim \\chi^2_{11}\\,, \\; \\bd X_{\\{2\\}} \\given (Y=1) \\sim \\chi^2_{10}\\,, \\cdots \\,, \\bd X_{\\{10\\}} \\given (Y=1) \\sim \\chi^2_{2} \\notag \\\\\n\t\\bd X_{\\{j\\}} \\given (Y=1) &\\sim \\chi^2_1\\,, \\; j=11,\\ldots,30 \\notag\n\\end{align}\nSimilar to the previous Gaussian setting, the first $10$ features have true ranks going down from $1$ to $10$, and the rest features are tied in true ranks. We simulate $1000$ samples of size $n=400$ or $1000$ from this model, and we apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}), five criteria in total, to each sample to rank the $30$ features. We summarize the average rank of each feature by each criterion in Tables \\ref{tab:avg_rank_d30_n400_chisq} and \\ref{tab:avg_rank_d30_n1000_chisq} (in Appendix), and we plot the distribution of ranks of each feature in Figures \\ref{fig:avg_rank_d30_n400_chisq} and \\ref{fig:avg_rank_d30_n1000_chisq} (in Appendix). The results and conclusions are consistent with those under the Gaussian setting. \n\n\n\n\\subsection{Ranking high-dimensional features at the sample level}\nWe also test the performance of s-CC and s-NPC when $d > n$. We set $d=500$ and $n=400$. The generative model is the same as \\eqref{eq:best_subset}, where $\\bd\\mu^0 = (\\underbrace{-1.5,\\ldots,-1.5}_{10}, \\mu_{11}, \\ldots, \\mu_{500})^{\\mkern-1.5mu\\mathsf{T}}$, $\\bd\\mu^1 = (\\underbrace{1,.9,\\ldots,.2,.1}_{10}, \\mu_{11}, \\ldots, \\mu_{500})^{\\mkern-1.5mu\\mathsf{T}}$, with $\\mu_{11}, \\ldots, \\mu_{500}$ independently and identically drawn from $\\mathcal N(0,1)$ and then held fixed, and $\\bd\\Sigma^0 = \\bd\\Sigma^1 = 4 \\, \\mathbf{I}_{30}$. Same as in the low-dimensional setting (Section \\ref{sec:sim_low_dim}), p-CC and p-NPC have a clear gap between the first $10$ informative features and the rest features but no obvious gaps among the informative features. In terms of both p-CC and p-NPC, the first 10 features have true ranks going down from 1 to 10, and the rest features are tied in true ranks. \n\nWe simulate $1000$ samples of size $n=400$ and apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}) to each sample to rank the $500$ features. We summarize the average rank of each feature by each criterion in Table \\ref{tab:avg_rank_d500_n400}, and we plot the distribution of ranks of each feature in Figure \\ref{fig:avg_rank_d500_n400}. The results show that ranking under this high-dimensional setting is more difficult than the low-dimensional setting. However, s-CC and s-NPC with $\\alpha = 0.2$ or $0.3$ still clearly distinguish the first 10 informative features from the rest, while s-NPC with $\\alpha = 0.05$ or $0.1$ have worse performance on features 8--10, demonstrating again that ranking becomes more difficult for s-NPC when $\\alpha$ is small. 
\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d500_n400}Average ranks of the first $20$ features by each criterion with $d=500$ and $n=400$ under the Gaussian setting.}\n\\centering\n\\scriptsize\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 1.51 & 3.39 & 3.25 & 4.82 & 4.43 & 6.47 & 6.59 & 6.80 & 8.53 & 9.86 \\\\ \n s-NPC ($\\alpha$ = .05) & 2.48 & 3.14 & 3.81 & 4.57 & 4.88 & 33.75 & 87.81 & 177.79 & 136.12 & 183.96 \\\\ \n s-NPC ($\\alpha$ = .10) & 2.21 & 2.34 & 3.84 & 4.08 & 5.56 & 6.70 & 6.61 & 19.97 & 116.98 & 51.27 \\\\ \n s-NPC ($\\alpha$ = .20) & 1.87 & 2.55 & 3.60 & 3.76 & 5.41 & 6.35 & 6.67 & 7.51 & 8.61 & 46.10 \\\\ \n s-NPC ($\\alpha$ = .30) & 1.43 & 3.29 & 3.44 & 4.54 & 5.52 & 6.25 & 6.86 & 5.91 & 8.34 & 11.48 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 234.07 & 244.32 & 213.54 & 213.01 & 183.60 & 249.73 & 292.85 & 269.15 & 328.63 & 240.94 \\\\ \n s-NPC ($\\alpha$ = .05) & 270.19 & 252.46 & 174.22 & 211.67 & 125.66 & 241.64 & 317.62 & 340.59 & 231.31 & 205.63 \\\\ \n s-NPC ($\\alpha$ = .10) & 254.37 & 300.12 & 317.98 & 213.02 & 263.69 & 223.81 & 296.64 & 279.72 & 288.77 & 234.69 \\\\ \n s-NPC ($\\alpha$ = .20) & 223.00 & 253.27 & 287.14 & 205.65 & 249.97 & 187.17 & 312.73 & 224.19 & 265.96 & 238.16 \\\\ \n s-NPC ($\\alpha$ = .30) & 209.82 & 192.70 & 206.62 & 271.58 & 236.41 & 263.22 & 189.90 & 299.44 & 238.57 & 269.64 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_lowdim_n400_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=30$ and $n=400$ under the Gaussian setting.\\label{fig:avg_rank_d30_n400}}\t\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_lowdim_n1000_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=30$ and $n=1000$ under the Gaussian setting.\\label{fig:avg_rank_d30_n1000}}\t\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_highdim_n400_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=500$ and $n=400$ under the Gaussian setting.\\label{fig:avg_rank_d500_n400}}\t\n\\end{figure}\n\n\n\n\n\n\n\n\\subsection{Comparison with other marginal feature ranking approaches} \n\n We compare s-CC and s-NPC with four approaches that have been widely used to rank features marginally: the Pearson correlation, the distance correlation \\citep{szekely2009brownian}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test. None of these existing approaches rank features based on a prediction objective; as a result, the feature ranking they give may not reflect the prediction performance of features under a particular objective. Here we use an example to demonstrate this phenomenon. We generate data with $d=2$ features from the following model:\n \\begin{align}\\label{eq:gauss_mixture}\n\tX_1 \\given (Y=0) &\\sim \\mathcal{N}(0, 1)\\,, & X_1 \\given (Y=1) &\\sim \\mathcal{N}(1, 1)\\,, & {\\rm I}\\kern-0.18em{\\rm P}(Y=1) = .5\\,,\t\\notag\\\\\n\tX_2 \\given (Y=0) &\\sim \\mathcal{N}(0, 1)\\,, & X_2 \\given (Y=1) &\\sim .5\\,\\mathcal{N}(-2, 1) + .5\\,\\mathcal{N}(2, 1)\\,. 
& \t\n \\end{align}\n To calculate p-CC and p-NPC with $\\delta_1=.05$ at four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ on these two features, we use a large sample with size $10^6$ for approximation, and the results in Table~\\ref{tab:gauss_mixture_pop} show that all the five population-level criteria rank feature 2 as the top feature.\n \n\\begin{table}[htbp]\n\\caption{\\label{tab:gauss_mixture_pop}Values of p-CC and p-NPC of the two features in \\eqref{eq:gauss_mixture}.}\n\\centering\n\\small\n\\begin{tabular}{rrrrrr}\n \\hline\nFeature & p-CC & p-NPC ($\\alpha$ = .05) & p-NPC ($\\alpha$ = .10) & p-NPC ($\\alpha$ = .20) & p-NPC ($\\alpha$ = .30) \\\\ \n \\hline\n1 & .31 & .74 & .61 & .44 & .32 \\\\ \n 2 & .22 & .49 & .36 & .24 & .17 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nThen we simulate $1000$ samples of size $n=400$ from the above model and apply nine ranking approaches: s-CC, s-NPC with $\\delta_1=.05$ at four $\\alpha$ levels ($.05$, $.10$, $.20$, and $.30$), the Pearson correlation, the distance correlation, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, to each sample to rank the two features. From this we obtain $1000$ rank lists for each ranking approach, and we calculate the frequency that each approach correctly finds the true rank order. The frequencies are summarized in Table~\\ref{tab:gauss_mixture_freq}, which shows that none of the four common approaches identifies feature 2 as the better feature for prediction. In other words, if users wish to rank features based on a prediction objective under the classical or NP paradigm, these approaches are not suitable ranking criteria. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:gauss_mixture_freq}The frequency that each ranking approach identifies the true rank order.}\n\\centering\n\\small\n\\begin{tabular}{rrrrr}\n \\hline\ns-CC & s-NPC ($\\alpha$ = .05) & s-NPC ($\\alpha$ = .10) & s-NPC ($\\alpha$ = .20) & s-NPC ($\\alpha$ = .30) \\\\ \n100\\% & 99.9\\% & 99.3\\% & 99.7\\% & 100\\% \\\\ \n \\hline\nPearson cor & distance cor & two-sample $t$ & two-sample Wilcoxon &\\\\\n0\\% & 0.5\\% & 0\\% & 0\\% &\\\\\n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\section{Real data applications}\\label{simu:realdata}\nWe apply s-CC and s-NPC to two real datasets to demonstrate their wide application potential in biomedical research. \\jjl{Here we set the number of random splits $B=1000$ for s-CC and s-NPC for stability consideration.} First, we use a dataset containing genome-wide DNA methylation profiles of $285$ breast tissues measured by the Illumina HumanMethylation450 microarray technology. This dataset includes $46$ normal tissues and $239$ breast cancer tissues. Methylation levels are measured at $468,424$ CpG probes in every tissue \\citep{fleischer2014genome}. We download the preprocessed and normalized dataset from the Gene Expression Omnibus (GEO) \\citep{edgar2002gene} with the accession number GSE60185. The preprocessing and normalization steps are described in detail in \\cite{fleischer2014genome}. To facilitate the interpretation of our analysis results, we further process the data as follows. First, we discard a CpG probe if it is mapped to no gene or more than one genes. Second, if a gene contains multiple CpG probes, we calculate its methylation level as the average methylation level of these probes. This procedure leaves us with $19,363$ genes with distinct methylation levels in every tissue. 
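A sketch of this probe-to-gene aggregation is given below; the column names and annotation format are hypothetical (the actual processing of GSE60185 follows \cite{fleischer2014genome}), but the filtering and averaging steps mirror the description above.

\begin{verbatim}
import pandas as pd

def collapse_probes_to_genes(beta_values, annotation):
    """beta_values: DataFrame of methylation levels, indexed by probe ID,
    one column per tissue. annotation: DataFrame with columns 'probe' and
    'genes' ('genes' is a semicolon-separated list of mapped genes)."""
    ann = annotation.copy()
    gene_lists = ann["genes"].fillna("").str.split(";")
    ann["n_genes"] = gene_lists.apply(lambda g: sum(1 for x in g if x))
    ann = ann[ann["n_genes"] == 1]                 # step 1: uniquely mapped probes
    gene_of_probe = ann.set_index("probe")["genes"].rename("gene")
    merged = beta_values.join(gene_of_probe, how="inner")
    return merged.groupby("gene").mean()           # step 2: average within a gene
\end{verbatim}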
We consider the tissues as data points and the genes as features, so we have a sample with size $n=285$ and number of features $d=19,363$. Since misclassifying a patient with cancer to be healthy leads to more severe consequences than the other way around, we code the $239$ breast cancer tissues as the class $0$ and the $46$ normal tissues as the class $1$ to be aligned with the NP paradigm. After applying s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels ($.05$, $.10$, $.20$, and $.30$) (\\ref{Npscore}) to this sample, we summarize the top $10$ genes found by each criterion in Table \\ref{tab:bc_rank}. Most of these top ranked genes have been reported associated with breast cancer, suggesting that our proposed criteria can indeed help researchers find meaningful features. Meanwhile, although other top ranked genes do not yet have experimental validation, they have weak literature indication and may serve as potentially interesting targets for cancer researchers. For a detailed list of literature evidence, please see \\textit{the Supplementary Excel File}. The fact that these five criteria find distinct sets of top genes is in line with our rationale that feature importance depends on prediction objective. By exploring top features found by each criterion, researchers will obtain a comprehensive collection of features that might be scientifically interesting. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:bc_rank}Top 10 genes found by each criterion in breast cancer methylation data \\citep{fleischer2014genome}. Genes with strong literature evidence to be breast-cancer-associated are marked in bold; see the Supplementary Excel File. }\n\\centering\n\\small\n\\begin{tabular}{rccccc}\n \\hline\nRank & s-CC & s-NPC ($\\alpha$ = .05) & s-NPC ($\\alpha$ = .10) & s-NPC ($\\alpha$ = .20) & s-NPC ($\\alpha$ = .30) \\\\ \n \\hline\n1 & \\textbf{HMGB2} & \\textbf{HMGB2} & \\textbf{HMGB2} & \\textbf{ABHD14A} & \\textbf{ABHD14A} \\\\ \n 2 & \\textbf{MIR195} & MICALCL & \\textbf{ABHD14A} & \\textbf{ABL1} & \\textbf{ABL1} \\\\ \n 3 & MICALCL & NR1H2 & ZFPL1 & \\textbf{BAT2} & \\textbf{ACTN1} \\\\ \n 4 & \\textbf{AIM2} & \\textbf{AGER} & \\textbf{AGER} & \\textbf{BATF} & AKAP8 \\\\ \n 5 & AGER & \\textbf{BATF} & RILPL1 & \\textbf{CCL8} & AP4M1 \\\\ \n 6 & KCNJ14 & ZFP106 & SKIV2L & \\textbf{COG8} & \\textbf{ARHGAP1} \\\\ \n 7 & \\textbf{HYAL1} & CTNNAL1 & \\textbf{TP53} & FAM180B & \\textbf{ATG4B} \\\\ \n 8 & SKIV2L & \\textbf{MIR195} & \\textbf{RELA} & \\textbf{HMGB2} & \\textbf{BAT2} \\\\ \n 9 & \\textbf{RUSC2} & \\textbf{AIM2} & \\textbf{MIR195} & \\textbf{HSF1} & BAT5 \\\\ \n 10 & DYNC1H1 & ZFPL1 & \\textbf{CCL8} & KIAA0913 & \\textbf{BATF} \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nSecond, we apply s-CC and s-NPC to a dataset of microRNA (miRNA) expression levels in urine samples of prostate cancer patients, downloaded from the GEO with accession number GSE86474 \\citep{jeon2019temporal}. This dataset is composed of $78$ high-risk and $61$ low-risk patients. To align with the NP paradigm, we code the high-risk and low-risk patients as class $0$ and $1$, respectively, so $m\/n=78\/61$. In our data pre-processing, we retain miRNAs that have at least $60\\%$ non-zero expression levels across the $n=139$ patients, resulting in $d=112$ features. We use this dataset to demonstrate that s-NPC is robust to sampling bias that results in disproportional training data; that is, training data have different class proportions from those of the population. 
We create two new datasets by randomly removing one half of the data points in class $0$ or $1$, so that one dataset has $m\/n=39\/61$ and the other has $m\/n=78\/31$. We apply s-CC and s-NPC with $\delta_1 = .05$ to each dataset to rank features. To evaluate each criterion's robustness to disproportional data, we compare its rank lists from the two datasets with different $m\/n$ ratios. For this comparison, we define \n\[ \text{consistency}(j) = \frac{|A_j \cap B_j|}{j}\,, \;j=1,\ldots,d\,,\n\]\nwhere $A_j$ and $B_j$ are the top $j$ features from the two rank lists. Given $j$, the higher the consistency, the more robust a criterion is to disproportional data. We illustrate the consistency of s-CC and s-NPC in Figure~\ref{fig:consistency}, which shows that s-NPC is much more robust than s-CC. \n\n\begin{figure}\n\includegraphics[width=\textwidth]{plots\/analysis_consistency_combined.pdf}\n\caption{Consistency of s-CC and s-NPC in ranking features in miRNA urine data \citep{jeon2019temporal}.\label{fig:consistency}}\t\n\end{figure}\n\n\n\section{Discussion}\label{sec:conclusions}\n\nThis work introduces a model-free, objective-based marginal feature ranking approach for the purpose of binary decision-making. The explicit use of a prediction objective to rank features is demonstrated to outperform existing practices, which rank features based on an association measure tied to neither the prediction objective nor the distributional characteristics. In addition to the illustrated classical and NP paradigms, the same marginal ranking idea extends to other prediction objectives such as the cost-sensitive learning and global paradigms. Another extension direction is to rank feature pairs in the same model-free fashion. In addition to the biomedical examples we show in this paper, model-free objective-based marginal feature ranking is also useful for finance applications, among others. For example, a loan company has successful business in region A and would like to establish new business in region B. To build a loan-eligibility model for region B, which has a much smaller fraction of eligible applicants than region A, the company may use the features top-ranked by s-NPC in region A, thanks to the robustness of s-NPC to sampling bias. \n\nBoth s-CC and s-NPC involve sample splitting. The default option is a half-half split for both class $0$ and class $1$ observations. It remains an open question whether a refined splitting strategy may lead to a better ranking agreement between the sample-level and population-level criteria. Intuitively, there is a trade-off between classifier training and objective evaluation: using more data for training can result in a classifier closer to the oracle, while saving more data to evaluate the objective can lead to a less variable criterion. \n\n\n\section{Introduction}\n\nFrom scientific research to industrial applications, practitioners often face the challenge of ranking features for a prediction task. Among the ranking tasks performed by scientists and practitioners, a large proportion belongs to marginal ranking; that is, ranking features based on the relation between the response variable and one feature at a time, ignoring other available features. For example, to predict cancer driver genes, biomedical researchers need to first extract predictive features from patients' data.
Then they decide whether an extracted feature is informative by examining its marginal distributions in tumor and normal tissues, usually by boxplots and histograms. This practice is common in high-profile biomedical papers, such as in \\cite{davoli2013cumulative, vogelstein2013cancer}.\n\nThis common practice is suboptimal from a statistical point of view, as multiple features usually have dependence and therefore jointly influence the response variable beyond a simple additive manner. However, the popularity of marginal feature ranking roots not only in the education background and convention, but also in the strong desire for simple interpretation and visualization in the trial-and-error scientific discovery process. As such, marginal feature ranking has been an indispensable data-analysis step in the scientific community, and it will likely stay popular. \n\nIn practice, statistical tests (e.g., two-sample $t$ test and two-sample Wilcoxon rank-sum test) are often used to rank features marginally. However, these tests do not reflect the objective of a prediction task. For example, if the classification error is of concern, the connection between the significance of these tests and the classification error is unclear. This misalignment of ranking criterion and prediction objective is undesirable: the resulting feature rank list does not reflect the marginal importance of each feature for the prediction objective. Hence, scientists and practitioners call for a marginal ranking approach that meets the prediction objective.\n\n\n\n\n\n\n\n\n\nIn this work, we focus on marginal ranking for binary prediction, which can be formulated as binary classification in machine learning. Binary classification has multiple prediction objectives, \\jl{which we refer to as paradigms here. \\jjl{These paradigms include} (1) the \\textit{classical} paradigm that minimizes the classification error,} a weighted sum of the type I and type II errors, \\jl{whose weights are} the class priors \\citep{Hastie.Tibshirani.ea.2009, james2013introduction}, (2) the \\textit{cost-sensitive learning} paradigm that replaces the \\jl{two error weights by pre-determined constant costs} \\citep{Elkan01, ZadLanAbe03}, (3) the \\textit{Neyman-Pearson (NP)} paradigm that minimizes the type II error subject to a type I error upper bound \\citep{cannon2002learning, scott2005neyman, tong2013plug, tong2016neyman}, and (4) the \\textit{global} paradigm that focuses on the overall prediction accuracy under all possible thresholds: the area under the receiver-operating-characteristic curve (AUROC) or precision-recall curve (AUPRC). Here we consider marginal ranking of features under the classical and NP paradigms, \\jl{and we define the corresponding ranking criteria as the classical criterion (CC) and the Neyman-Pearson criterion (NPC). The idea behind these two criteria is easily generalizable to the cost-sensitive learning paradigm} and the global paradigm. \n\nIt is worth \\jl{mentioning that NPC} is robust against sampling bias; that is, even when the class \\jl{proportions in a sample \\jjl{deviate} from those in the population, NPC still achieves feature} ranking consistency between sample and population with high probability. 
This \\jl{nice property makes NPC particularly useful for disease diagnosis, where a long-standing obstacle is that the proportions of diseased patients and healthy people in medical records do not reflect the proportions in the population.} To implement CC and NPC, we take a model-free approach by using nonparametric estimates of class-conditional feature densities. This approach makes CC and NPC more adaptive to diverse feature distributions than existing criteria for marginal feature ranking. \n\n\n\n\n\n\nThe rest of the paper is organized as follows. In Section \\ref{sec:background}, we \\jl{define CC and NPC on the population level, as the oracle criteria under the classical and NP paradigms}. In Section \\ref{sec:methods}, we \\jl{define the sample-level CC and NPC and develop model-free algorithms to implement them}. In Section \\ref{sec:theoretical properties}, we \\jl{derive theoretical results regarding the ranking consistency of the sample-level CC and NPC in relation to their population counterparts. In Section \\ref{sec:simulation}, we use simulation studies to demonstrate the performance of sample-level CC and NPC in ranking low-dimensional and high-dimensional features. We also demonstrate that commonly-used ranking criteria, including the Pearson correlation, the distance correlation \\citep{szekely2009brownian}\\footnote{In binary classification, the response variable is encoded as $0$ and $1$ and treated as a numerical variable in the calculation of of the Pearson and distance correlations.}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, might give feature ranking misaligned with the prediction objective. In Section \\ref{simu:realdata}, we apply CC and NPC to rank features in two real datasets. Using the first dataset regarding breast cancer diagnosis, we show that both criteria can identify informative features, many of which have been previously reported; we also provide a Supplementary Excel File for literature evidence. Using the second dataset for prostate cancer diagnosis from urine samples, we demonstrate that NPC is robust to sampling bias.} We conclude with a discussion in Section \\ref{sec:conclusions}. All the proofs of lemmas, propositions, and theorems are relegated to the Appendix.\\par \n\n\n\\section{Population-level ranking criteria}\\label{sec:background}\n\nIn this section, we introduce two objective-based marginal feature ranking criteria, \\jjl{on the population level,} under the classical paradigm and the Neyman-Pearson (NP) paradigm. As argued previously, when \\jjl{one has} a learning\/prediction objective, the feature ranking criterion should be in line with that. Concretely, the $j$-th ranked feature should be the one that achieves the $j$-th best performance based on that objective. \nThis objective-based feature ranking perspective is extendable to ranking feature subsets (e.g., feature pairs). Although we focus on marginal feature ranking in this work, to cope with this future extension, our notations in the methodology and theory development are compatible with ranking of feature subsets . \n\n\n\n\n\\subsection{Notations and classification paradigms}\n\nWe first introduce essential mathematical notations to facilitate our discussion. Let $\\left(\\bd X,Y\\right)$ be a pair of random observations where $\\bd X \\in \\mathcal{X} \\subseteq {{\\rm I}\\kern-0.18em{\\rm R}}^d$ is a vector of features and $Y\\in \\left\\{ 0,1 \\right\\}$ indicates the class label of $\\bd X$. 
A \\textit{classifier} $\\phi:\\mathcal{X}\\rightarrow \\left\\{ 0,1 \\right\\}$ maps from the feature space to the label space. A \\textit{loss function} assigns a cost to each misclassified instance $\\phi(\\bd X) \\neq Y$, and the \\textit{risk} is defined as the expectation of this loss function with respect to the joint distribution of $\\left( \\bd X,Y\\right)$. We adopt in this work a commonly used loss function, the $0$-$1$ loss: $\\mathds{1}\\left(\\phi(\\bd X)\\neq Y \\right)$, where $\\mathds{1}(\\cdot)$ denotes the indicator function. Let ${\\rm I}\\kern-0.18em{\\rm P}$ and ${\\rm I}\\kern-0.18em{\\rm E}$ denote the generic probability distribution and expectation, whose meaning depends on specific contexts. With the choice of the indicator loss function, the risk is the classification error: $R(\\phi) = {\\rm I}\\kern-0.18em{\\rm E} \\left[ \\mathds{1}\\left( \\phi(\\bd X)\\neq Y\\right) \\right] = {\\rm I}\\kern-0.18em{\\rm P} \\left( \\phi(\\bd X)\\neq Y\\right)$. While $R(\\cdot)$ \\jjl{is a natural} objective to evaluate the performance of a classifier, for different theoretical and practical reasons, one might consider different objectives for \\jjl{evaluating classifiers}. \n\nIn this paper, we call the learning objective of minimizing $R(\\cdot)$ the \\textit{classical paradigm}. Under \\jjl{this} paradigm, one aims to mimic the \\textit{classical oracle classifier} $\\varphi^{*}$ that minimizes the population-level classification error, \n$$\n\\varphi^{*}=\\argmin \\limits_{\\varphi: {\\rm I}\\kern-0.18em{\\rm R}^d\\rightarrow \\{0, 1\\}} R\\left( \\varphi\\right)\\,.\n$$ \nIt is well known in literature that the classical oracle $\\varphi^*(\\cdot) = \\mathds{1} (\\eta (\\cdot) > 1\/2)$, where $\\eta(\\bd x) = {\\rm I}\\kern-0.18em{\\rm E} (Y|\\bd X=\\bd x)$ is the regression function \\citep{koltchinskii2011introduction}. Alternatively, we can show that $\\varphi^*(\\cdot) = \\mathds{1}(p_1(\\cdot)\/p_0(\\cdot)>\\pi_0\/\\pi_1)$, where $\\pi_0 ={\\rm I}\\kern-0.18em{\\rm P}(Y=0)$, $\\pi_1 ={\\rm I}\\kern-0.18em{\\rm P}(Y=1)$, $p_0$ is the probability density function of $\\bd X|(Y=0)$, and $p_1$ is the probability density function of $\\bd X|(Y=1)$. Note that the risk can be decomposed as follows:\n\\begin{align*}\n \tR(\\phi) &= {\\rm I}\\kern-0.18em{\\rm P}(Y=0)\\cdot{\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=0\\right) + {\\rm I}\\kern-0.18em{\\rm P}(Y=1)\\cdot {\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=1\\right)\\\\\n \t &= \\pi_0 R_0\\left(\\phi\\right)+ \\pi_1 R_1\\left(\\phi\\right)\\,,\n \\end{align*} where $R_j\\left(\\phi\\right) = {\\rm I}\\kern-0.18em{\\rm P}\\left( \\phi(\\bd X) \\neq Y \\given Y=j\\right)$, for $j= 0 \\text{ and } 1$. The notations $R_0(\\cdot)$ and $R_1(\\cdot)$ denote the population-level type I and type II errors respectively. Note that minimizing $R(\\cdot)$ implicitly \\jjl{imposes} a weighting of $R_0$ and $R_1$ by $\\pi_0$ and $\\pi_1$. This is not always desirable. For example, when people know the explicit costs for type I and type II errors: $c_0$ and $c_1$, one might want to optimize the criterion $c_0R_0(\\cdot) + c_1 R_1(\\cdot)$, which is often referred to as \\textit{the cost-sensitive learning paradigm}. 
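Returning to the classical oracle, the equivalence between its two expressions follows from a one-line Bayes-rule calculation, which we record here for completeness: since $\\eta(\\bd x) = \\pi_1 p_1(\\bd x)\/\\left[\\pi_0 p_0(\\bd x) + \\pi_1 p_1(\\bd x)\\right]$,\n\\begin{align*}\n\\eta(\\bd x) > 1\/2 \\;\\Longleftrightarrow\\; \\pi_1 p_1(\\bd x) > \\pi_0 p_0(\\bd x) \\;\\Longleftrightarrow\\; p_1(\\bd x)\/p_0(\\bd x) > \\pi_0\/\\pi_1\\,,\n\\end{align*}\nso thresholding the regression function at $1\/2$ and thresholding the class-conditional density ratio at $\\pi_0\/\\pi_1$ yield the same classical oracle classifier.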
\n \n\\jjl{In parallel to the classical paradigm, we consider the \\textit{Neyman-Pearson (NP) paradigm}, which} aims to mimic the \\textit{level-$\\alpha$ NP oracle classifier} \\jjl{that minimizes the type II error while constraining the type I error under $\\alpha$, a user-specified type I error upper bound,} \n \\begin{align}\\label{eq:NP_oracle}\n \\varphi^{*}_{\\alpha} = \\argmin \\limits_{\\varphi: R_0(\\varphi) \\leq \\alpha} R_1(\\varphi)\\,.\n\\end{align} \nUsually, \\jjl{$\\alpha$ is a small value (e.g., $5\\%$ or $10\\%$), reflecting a user's conservative attitude towards the type I error.} As the development of classification methods under the NP paradigm is relatively new, \\jjl{here we review the construction of the NP oracle classifier} $\\varphi^*_{\\alpha}(\\cdot)$. Essentially, \\jjl{due to the famous Neyman-Pearson Lemma (Appendix \\ref{sec::np lamma}) and a correspondence between classification and statistical hypothesis testing,} $\\varphi^*_{\\alpha}$ in \\eqref{eq:NP_oracle} can be constructed by thresholding $p_{1}(\\cdot)\/p_{0}(\\cdot)$ at a proper level $C^*_{\\alpha}$ \\citep{tong2013plug}:\n \\begin{equation}\\label{equ: neyman_pearson}\n \t\\varphi_{\\alpha}^*(\\bd x) = \\mathds{1}\\left(p_1(\\bd x)\/p_0(\\bd x) > C_\\alpha^*\\right)\\,. \\end{equation}\n\nIn addition to the above three paradigms, a common practice is to evaluate a classification algorithm by its AUROC or AUPRC, which we refer to as the \\textit{global paradigm}. In contrast to the above three paradigms that lead to a single classifier, which has its corresponding type I and II errors, the global paradigm evaluates a classification algorithm by aggregating all of its possible classifiers with type I errors ranging from zero to one. For example, the oracle AUROC is the area under the curve\n\\[ \\left\\{ \\left(R_0(\\varphi_\\alpha^*),\\, 1-R_1(\\varphi_\\alpha^*)\\right): \\alpha \\in [0,1]\n\t\\right\\}\\,.\n\\]\n\n\n\\subsection{Classical and Neyman-Pearson criteria on the population level}\\label{sec:NPC_population}\nDifferent learning\/prediction objectives in classification induce distinct feature ranking criteria. \\jjl{We first define the population-level CC and NPC. Then we show that these two criteria lead to different rankings of features in general, and that NPC may rank features differently at different $\\alpha$ values. Let $\\varphi^*_{A}$ and $\\varphi^*_{\\alpha A}$ denote, respectively,} the classical oracle classifier and the level-$\\alpha$ NP oracle classifier that only use features indexed by $A \\subseteq \\{1,\\ldots, d \\}$. This paper focuses on the case when $|A| = 1$. \nConcretely, under the classical paradigm, the classical oracle \\jjl{classifier on index set $A$, $\\varphi^*_{A}$,} achieves \n\\begin{equation*}\n\tR \\left(\\varphi^*_{A}\\right) = \\min_{\\varphi_A} R \\left(\\varphi_{A}\\right) = \\min_{\\varphi_A} {\\rm I}\\kern-0.18em{\\rm P} (\\varphi_{A}(\\bd X)\\neq Y)\\,,\n\\end{equation*} \nin which $\\varphi_A: \\mathcal X \\subseteq {\\rm I}\\kern-0.18em{\\rm R}^d \\rightarrow \\{0, 1\\}$ is any mapping that first projects $\\bd X\\in {\\rm I}\\kern-0.18em{\\rm R}^d$ to its $|A|$-dimensional sub-vector $\\bd X_A$, which comprises the coordinates of $\\bd X$ corresponding to the index set $A$, and then maps from $\\bd X_A\\in {\\rm I}\\kern-0.18em{\\rm R}^{|A|}$ to $\\{0, 1\\}$. 
Analogous to $\\varphi^*(\\cdot)$, we know \n\\begin{align}\\label{eqn:classical oracle}\n\\varphi^*_{A}(\\bd x) = \\mathds{1}(\\eta_A(\\bd x_A) > 1\/2) = \\mathds{1}(p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A) > \\pi_0 \/ \\pi_1)\\,, \n\\end{align}\nwhere $\\eta_A(\\bd x_A) = {\\rm I}\\kern-0.18em{\\rm E} (Y|\\bd X_A=\\bd x_A)$ is the regression function using only features in the index set $A$, and $p_{1A}$ and $p_{0A}$ denote the class-conditional probability density functions of the features $\\bd X_A$. Suppose that statisticians are given candidate feature subsets denoted by $A_1, \\ldots, A_J$, which might arise from some domain expertise of the clients. \\jjl{We define the \\textit{population-level classical criterion} (p-CC) of $A_i$ as its \\textit{optimal} risk $R\\left(\\varphi^*_{A_i}\\right)$; i.e., $A_1, \\ldots, A_J$ will be ranked based on $\\left\\{R \\left(\\varphi^*_{A_1}\\right), \\ldots, R \\left(\\varphi^*_{A_J}\\right) \\right\\}$, with the smallest being ranked the top}. The prefix ``p\" in p-CC indicates ``population-level.\"\n Note that \\jjl{$R(\\varphi^*_{A_i})$ represents} $A_i$'s best achievable performance measure under the classical paradigm and \\jjl{does} not depend on any specific models \\jjl{assumed for} the distribution of $(\\bd X, Y)$. \n\n\nUnder the NP paradigm, the NP oracle \\jjl{classifier} on index set $A$, $\\varphi^*_{\\alpha A}$, achieves \n\\begin{equation}\\label{ideaL_sormulation_np}\n\tR_1 \\left(\\varphi^*_{\\alpha A}\\right) = \\min_{\\substack{\\varphi_{A} \\\\ R_0 \\left(\\varphi_{A}\\right)\\leq\\alpha}} R_1 \\left(\\varphi_{A}\\right) = \\min_{\\substack{\\varphi_{A} \\\\ {\\rm I}\\kern-0.18em{\\rm P}(\\varphi_{A} (\\bd X) \\neq Y | Y=0)\\leq\\alpha}} {\\rm I}\\kern-0.18em{\\rm P}(\\varphi_{A} (\\bd X) \\neq Y | Y=1)\\,.\n\\end{equation} \nBy the Neyman-Pearson Lemma, for some proper constant $C^*_{\\alpha A}$, \n\\begin{equation}\\label{eqn: np oracle}\n\\varphi^*_{\\alpha A}(\\bd x) = \\mathds{1} \\left(p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A) > C^*_{\\alpha A}\\right)\\,.\n\\end{equation}\nFor a given level $\\alpha$, \\jjl{we define the \\textit{population-level Neyman-Pearson criterion} (p-NPC) of $A_i$ as its \\textit{optimal} type II error $R_1 \\left(\\varphi^*_{\\alpha A_i}\\right)$; i.e., $A_1, \\ldots, A_J$ will be ranked based on $\\left\\{R_1 \\left(\\varphi^*_{\\alpha A_1}\\right), \\ldots, R_1 \\left(\\varphi^*_{\\alpha A_J}\\right) \\right\\}$, with the smallest being ranked the top}. \n\n\nAs a concrete illustration of p-CC and p-NPC, suppose that we want to compare two features $\\bd X_{\\{1\\}}, \\bd X_{\\{2\\}} \\in {\\rm I}\\kern-0.18em{\\rm R}$\n\\footnote{Usually, we use $X_1$ and $X_2$, but we opt to use $\\bd X_{\\{1\\}}$ and $\\bd X_{\\{2\\}}$ to be consistent with the notation $\\bd X_{A}$.}, whose class-conditional distributions are \\jjl{the following Gaussians}: \n\\begin{align}\\label{eq:toy_example}\n\t\\bd X_{\\{1\\}} \\given (Y=0) &\\sim \\mathcal{N}(-5, 2^2)\\,, & \\bd X_{\\{1\\}}\\given (Y=1) &\\sim \\mathcal{N}(0, 2^2)\\,,\\\\\n\t\\bd X_{\\{2\\}} \\given (Y=0) &\\sim \\mathcal{N}(-5, 2^2)\\,, & \\bd X_{\\{2\\}} \\given (Y=1) &\\sim \\mathcal{N}(1.5, 3.5^2)\\,, \\notag\n\\end{align}\nand the class priors are equal, i.e., $\\pi_0 = \\pi_1 = .5$. \nIt can be calculated that $R \\left(\\varphi^*_{{\\{1\\}}}\\right) = .106$ and $R \\left(\\varphi^*_{{\\{2\\}}}\\right)= .113$. 
Therefore, $R \\left(\\varphi^*_{{\\{1\\}}}\\right) < R \\left(\\varphi^*_{{\\{2\\}}}\\right)$, and \\jjl{p-CC ranks feature $1$ higher than feature $2$}. \\jjl{The comparison is more subtle for p-NPC}. If we set $\\alpha =.01$, $R_1 \\left(\\varphi^*_{\\alpha \\{1\\}}\\right) = .431$ is \\textit{larger} than $R_1 \\left(\\varphi^*_{\\alpha \\{2\\}}\\right) = .299$. However, if we set $\\alpha = .20$, $R_1 \\left(\\varphi^*_{\\alpha \\{1\\}}\\right) = .049$ is \\textit{smaller} than $R_1 \\left(\\varphi^*_{\\alpha \\{2\\}}\\right)= .084$. Figure \\ref{fig:toy example 1} illustrates the NP oracle classifiers for \\jjl{these $\\alpha$'s.} \n\n\n\n\n\\begin{figure}[h!]\n \\centering\n \\makebox{\\includegraphics[width = 0.75\\textwidth]{plots\/toy_example_w_alpha.pdf}}\n \\caption{\\small{A toy example in which feature ranking under p-NPC changes as $\\alpha$ varies. \\textbf{Panel a}: $\\alpha=.01$. The NP oracle classifier based on feature $1$ (or feature $2$) has the type II error $.431$ (or $.299$). \\textbf{Panel b}: $\\alpha=.20$. The NP oracle classifier based on feature $1$ (or feature $2$) has the type II error $.049$ (or $.084$).}}\\label{fig:toy example 1}\n\\end{figure}\n\n\nThis example suggests a general phenomenon that feature ranking \\jjl{depends} on the user-chosen criteria. For some \\jjl{$\\alpha$} values (e.g., $\\alpha =.20$ in the example), p-NPC and p-CC agree on the ranking, while for others (e.g., $\\alpha = .01$ in the example), they disagree. Under special cases, however, we can derive conditions under which p-NPC gives an $\\alpha$-invariant feature ranking \\jjl{that always agrees with the ranking by} p-CC. In the following, we derive such a condition under Gaussian distributions.\n\n\\begin{lemma}\\label{lem: toy example1}\nSuppose that two features $\\bd X_{\\{1\\}}$ and $\\bd X_{\\{2\\}}$ have class-conditional densities\n\\begin{align*}\n\t\\bd X_{\\{1\\}} | (Y=0) &\\sim \\mathcal{N}\\left(\\mu_1^0, (\\sigma_1^0)^2\\right)\\,, & \\bd X_{\\{1\\}} | (Y=1) &\\sim \\mathcal{N}\\left(\\mu_1^1, (\\sigma_1^1)^2\\right)\\,,\\\\\n\t\\bd X_{\\{2\\}} | (Y=0) &\\sim \\mathcal{N}\\left(\\mu_2^0, (\\sigma_2^0)^2\\right)\\,, & \\bd X_{\\{2\\}}| (Y=1) &\\sim \\mathcal{N}\\left(\\mu_2^1, (\\sigma_2^1)^2\\right)\\,.\n\\end{align*}\nFor $\\alpha\\in(0,1)$\\,, let $\\varphi^*_{\\alpha\\{1\\}}$ or $\\varphi^*_{\\alpha\\{2\\}}$ be the level-$\\alpha$ NP oracle classifier using only the feature $\\bd X_{\\{1\\}}$ or $\\bd X_{\\{2\\}}$ respectively, and let $\\varphi^*_{\\{1\\}}$ or $\\varphi^*_{\\{2\\}}$ be the corresponding classical oracle classifier. Then if and only if\n$\n\\sigma_1^0 \/ \\sigma_1^1 = \\sigma_2^0 \/ \\sigma_2^1,\n$\nwe have simultaneously for all $\\alpha$, \n\\begin{align*}\n\t\\text{\\rm{sign}}\\left\\{R_1\\left(\\varphi^*_{\\alpha \\{2\\}}\\right) - R_1\\left({\\varphi}^*_{\\alpha \\{1\\}}\\right)\\right\\} = &\\text{\\rm{sign}}\\left\\{ R\\left(\\varphi^*_{\\{2\\}} \\right) -R\\left(\\varphi^*_{\\{1\\}} \\right) \\right\\} = \\text{\\rm{sign}}\\left\\{\\frac{|\\mu_1^1 -\\mu_1^0| }{\\sigma_1^1} - \\frac{|\\mu_2^1 -\\mu_2^0 | }{\\sigma_2^1}\\right\\}\\,,\n\\end{align*}\nwhere $\\rm{sign}(\\cdot)$ is the sign function. \n\n\n\\end{lemma}\\par\n\n\n\n\n\n\n\n\n\n\n\n\nLemma \\ref{lem: toy example1} suggests that on the population level, \\jjl{ranking agreement between CC and NPC is an exception} rather than the norm. This observation calls for development of the sample-level criteria under different objectives. 
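The NP oracle type II errors quoted in this toy example can be checked numerically. The following sketch (ours, not code accompanying the paper; the Monte Carlo size and function names are illustrative) thresholds the exact Gaussian density ratio at the $(1-\\alpha)$-quantile of its class-$0$ distribution, which approximates $C^*_{\\alpha}$, and then evaluates the resulting class-$1$ error:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\nN = 2_000_000                      # Monte Carlo sample size per class\n\ndef np_oracle_type2(p0, p1, alpha):\n    # Threshold the density ratio p1\/p0 so that the class-0 error is\n    # approximately alpha, then return the resulting type II error.\n    x0 = p0.rvs(N, random_state=rng)\n    x1 = p1.rvs(N, random_state=rng)\n    lr = lambda x: p1.pdf(x) \/ p0.pdf(x)\n    c = np.quantile(lr(x0), 1 - alpha)   # P0(LR > c) ~ alpha\n    return np.mean(lr(x1) <= c)          # P1(LR <= c)\n\nfeat1 = (stats.norm(-5, 2), stats.norm(0, 2))\nfeat2 = (stats.norm(-5, 2), stats.norm(1.5, 3.5))\nfor alpha in (0.01, 0.20):\n    print(alpha,\n          round(np_oracle_type2(*feat1, alpha), 3),\n          round(np_oracle_type2(*feat2, alpha), 3))\n# expected output, up to Monte Carlo error:\n# 0.01 0.431 0.299\n# 0.2  0.049 0.084\n\\end{verbatim}\nUp to Monte Carlo error, the output matches the values reported above, including the reversal of the two features' ranking between $\\alpha=.01$ and $\\alpha=.20$.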
\n\n\n\n\n\n\n\n\n\\section{Sample-level ranking criteria} \\label{sec:methods}\n\nIn \\jjl{the following text, we refer to sample-level CC and NPC as} ``s-CC\" and ``s-NPC\" respectively. In the same model-free spirit as the p-CC and p-NPC definitions, we use nonparametric techniques to construct s-CC and s-NPC. Admittedly, such construction would be impractical when the feature subsets to be ranked have large cardinality. But since we are mainly interested in marginal feature ranking, with intended extension to small subsets such as feature pairs, model-free nonparametric techniques are appropriate. \n\n\n\nIn the methodology and theory sections, we assume the following sampling scheme. Suppose we have a training dataset $\\mathcal{S} = \\mathcal{S}^0 \\cup \\mathcal{S}^1 $, where $\\mathcal{S}^0= \\left\\{\\bd {X}_{1}^{0}, \\dots, \\bd {X}_{m}^{0} \\right\\}$ are \\jjl{independent and identically distributed (i.i.d.)} class $0$ observations, $\\mathcal{S}^1= \\left\\{\\bd {X}_{1}^{1}, \\dots, \\bd {X}_{n}^{1} \\right\\}$ are i.i.d. class $1$ observations, and $\\mathcal{S}^0$ is independent of $\\mathcal{S}^1$. The sample sizes $m$ and $n$ are considered as \\jjl{fixed positive integers}. \\jjl{The construction of both s-CC and s-NPC involves} splitting the class $0$ and class $1$ observations. To increase stability, \\jjl{we perform multiple random splits. In detail,} we randomly divide $\\mathcal{S}^0$ $B$ times into two halves $\\mathcal{S}_{\\rm ts}^{0(b)} = \\left\\{ \\bd X_{1}^{0(b)}, \\dots, \\bd X_{m_1}^{0(b)} \\right\\}$ and ${\\mathcal{S}}_{\\rm lo}^{0(b)} = \\left\\{ \\bd {X}_{m_1+ 1}^{0(b)}, \\dots, \\bd {X}_{m_1+m_2}^{0(b)} \\right\\}$, where $m_1 + m_2 = m$, the subscripts ``ts\" and ``lo\" stand for \\textit{train-scoring} and \\textit{left-out} respectively, and the superscript $b\\in\\{1,\\ldots, B\\}$ indicates the $b$-th random split. \\jjl{We also randomly split} $\\mathcal{S}^1$ \\jjl{$B$} times into $\\mathcal{S}_{\\rm ts}^{1(b)} = \\left\\{ \\bd X_1^{1(b)}, \\dots, \\bd X_{n_1}^{1(b)} \\right\\}$ and $\\mathcal{S}_{\\rm lo}^{1(b)} = \\left\\{\\bd {X}_{n_1 + 1}^{1(b)}, \\dots, \\bd {X}_{n_1+n_2}^{1(b)} \\right\\}\\,$, where $n_1+n_2=n$ and $b\\in\\{1, \\ldots, B\\}$. \\jjl{In this work, we take an equal-sized split: $m_1 = \\lfloor m\/2 \\rfloor$ and $n_1 = \\lfloor n\/2 \\rfloor$. We leave the possibility of doing a data-adaptive split to future work.}\n\n\n\n\nJust like in the definition of population-level criteria, we write our notations more generally to allow \\jjl{for extension to ranking} feature subsets. For $A\\subseteq\\{1, \\ldots, d\\}$ with $|A| = l$, recall that the classical oracle restricted to $A$, $\\varphi^*_A(\\bd x)$, is defined in \\eqref{eqn:classical oracle} and that the NP oracle restricted to $A$, $\\varphi^*_{\\alpha A}(\\bd x)$, is defined in \\eqref{eqn: np oracle}. Although these two oracles have different thresholds, $\\pi_0 \/ \\pi_1$ vs. $C^*_{\\alpha A}$, the class-conditional density ratio $p_{1A}(\\cdot)\/ p_{0A}(\\cdot)$ \\jjl{is involved in} both oracles. 
The densities $p_{0A}$ and $p_{1A}$ can be estimated respectively from $\\mathcal{S}^{0(b)}_{\\rm ts}$ and $\\mathcal{S}^{1(b)}_{\\rm ts}$ by kernel density estimators,\n\\begin{align}\\label{eqn:kernel density estimates b}\n\\hat{p}_{0A}^{(b)}(\\bd x_A)=\\frac{1}{m_1h_{m_1}^l}\\sum_{i=1}^{m_1} K\\left(\\frac{\\bd X^{0(b)}_{iA}-\\bd x_A}{h_{m_1}}\\right) \\quad \\text{ and } \\quad \\hat{p}_{1A}^{(b)}(\\bd x_A)=\\frac{1}{n_1h_{n_1}^l}\\sum_{i=1}^{n_1} K\\left(\\frac{\\bd X_{iA}^{1(b)}-\\bd x_A}{h_{n_1}}\\right)\\,,\n\\end{align}\nwhere $h_{m_1}$ and $h_{n_1}$ denote the bandwidths, and $K(\\cdot)$ is a kernel in ${\\rm I}\\kern-0.18em{\\rm R}^l$.\n\n\n\n\n\\subsection{Sample-level classical ranking criterion}\n\nTo define s-CC, we first construct plug-in classifiers $\\hat\\phi_A^{(b)}(\\bd x) = \\mathds{1}\\left( \\hat{p}_{1A}^{(b)}(\\bd x_A)\/ \\hat{p}_{0A}^{(b)}(\\bd x_A) > m_1\/n_1\\right)$ for $b\\in\\{1, \\ldots, B\\}$, where the threshold level $m_1\/n_1$ is to mimic $\\pi_0 \/ \\pi_1$. If the sample size ratio of the two classes is the same as that in the population, then the classifiers $\\hat\\phi_A^{(b)}(\\bd x)$ would be \\jjl{good plug-in estimates} of $\\varphi^*_A(\\bd x)$. However, under sampling bias, we cannot correct the threshold estimate without additional information. Armed with the \\jjl{classifier} $\\hat\\phi_A^{(b)}(\\cdot)$ trained on $\\mathcal S_{\\rm ts}^{0(b)} \\cup \\mathcal S_{\\rm ts}^{1(b)}$, we define the \\textit{sample-level classical criterion} \\jjl{of} index set $A$ as\n\\begin{align}\\label{CC}\n\t\\mathrm{CC}_A &:= \\frac{1}{B} \\sum_{b=1}^B \\mathrm{CC}_A^{(b)}\\,,\\\\\\notag\n\t\\text{with } \\mathrm{CC}_A^{(b)} &:= \\frac{1}{m_2+n_2}\\left\\{ \\sum_{i=n_1+1}^{n_1+n_2} \\left[ 1-\\hat{\\phi}^{(b)}_{A}\\left(\\bd X_i^{1(b)}\\right) \\right] + \\sum_{i'=m_1+1}^{m_1+m_2} \\hat{\\phi}_A^{(b)}\\left(\\bd X_{i'}^{0(b)}\\right) \\right\\}\\,.\n\\end{align}\n$\\text{CC}_A$ is the average performance of $\\hat\\phi_A^{(b)}(\\cdot)$ over the $B$ random splits on the left-out observations $\\mathcal S_{\\rm lo}^{0(b)} \\cup \\mathcal S_{\\rm lo}^{1(b)}$ for $b\\in\\{1, \\ldots, B\\}$. \n\n\n\n\n\\subsection{Sample-level Neyman-Pearson ranking criterion}\\label{sec: construction of NP}\n\n\n\nTo define s-NPC, we use the same kernel density estimates to \\jjl{plug in} $p_{1A}(\\cdot)\/ p_{0A}(\\cdot)$, as in s-CC. To \\jjl{estimate} the oracle threshold $C^*_{\\alpha A}$, we use the NP umbrella algorithm \\citep{tong2016neyman}. \\jjl{Unlike s-CC, in which both $\\mathcal S_{\\rm lo}^{0(b)}$ and $\\mathcal S_{\\rm lo}^{1(b)}$ are used to evaluate the constructed classifier, for s-NPC we use $\\mathcal S_{\\rm lo}^{0(b)}$ to estimate the threshold and only $\\mathcal S_{\\rm lo}^{1(b)}$ to evaluate the classifier}. \n\n\n\n\n\n\n\n\nThe NP umbrella algorithm finds proper thresholds for all \\textit{scoring-type classification methods} (e.g., nonparametric density ratio plug-in, logistic regression and random forest) so that the resulting classifiers achieve a high probability control on the type I error under the pre-specified level $\\alpha$. 
\\jjl{A scoring-type classification method outputs a scoring function that maps the feature space $\\mathcal X$ to ${\\rm I}\\kern-0.18em{\\rm R}$, and a classifier is constructed by combining the scoring function with a threshold.} To construct an NP classifier given a scoring-type classification method, the NP umbrella algorithm first trains a scoring function $\\hat{s}^{(b)}_A(\\cdot)$ on $\\mathcal{S}^{0(b)}_{\\rm ts} \\cup \\mathcal{S}^{1(b)}_{\\rm ts}\\,$. In this work, we specifically use $\\hat{s}^{(b)}_A(\\cdot) = \\hat{p}_{1A}^{(b)}(\\cdot)\/ \\hat{p}_{0A}^{(b)}(\\cdot)$, in which the numerator and the denominator are defined in \\eqref{eqn:kernel density estimates b}. Second, the algorithm applies $\\hat{s}^{(b)}_A(\\cdot)$ to $\\mathcal{S}^{0(b)}_{\\rm lo}$ to obtain scores $\\left\\{T_i^{(b)} = \\hat{s}^{(b)}_A\\left(\\bd X^{0(b)}_{m_1+i}\\right), i=1,\\dots, m_2\\right\\}$, which are \\jjl{then} sorted in increasing order and denoted by $\\left\\{T_{(i)}^{(b)}, i=1,\\dots, m_2\\right\\}$. Third, for a user-specified type I error upper bound $\\alpha \\in (0,1)$ and a violation rate $\\delta_1 \\in(0,1)$\\jjl{, which refers to the probability that the type I error of the trained classifier exceeds} $\\alpha$, the algorithm chooses the order \n\\begin{align*}\n\tk^* = \\min \\limits_{k=1,\\dots, m_2} \\left\\{k:\\sum_{j=k}^{m_2} {m_2\\choose j} (1-\\alpha)^j \\alpha^{m_2-j}\\leq \\delta_1\\right\\}\\,.\n\\end{align*} \nWhen $m_2 \\geq \\frac{\\log \\delta_1}{\\log(1-\\alpha)}\\,,$ a finite $k^*$ exists\\footnote{If one were to assume a parametric model, one can get rid of the minimum sample size requirement on $m_2$ \\citep{Tong.Xia.Wang.Feng.2020}. However, we adopt the non-parametric NP umbrella algorithm \\citep{tong2016neyman} to achieve the desirable model-free property of our feature ranking framework.}, and the umbrella algorithm chooses the threshold of the estimated scoring function as $\\widehat{C}_{\\alpha A}^{(b)} = T_{(k^*)}^{(b)}$. \nThus, the resulting NP classifier is\n\\begin{align}\\label{eq:NP_classifier}\n\t\\hat{\\phi}_{\\alpha A}^{(b)}(\\cdot) = \\mathds{1} \\left(\\hat{s}^{(b)}_A (\\cdot) > \\widehat{C}_{\\alpha A}^{(b)} \\right)\\,.\n\\end{align}\n\n\n\nProposition 1 in \\cite{tong2016neyman} proves that the probability that the type I error of the classifier $\\hat{\\phi}_{\\alpha A}^{(b)}(\\cdot)$ in \\eqref{eq:NP_classifier} exceeds $\\alpha$ is no more than $\\delta_1$: \n\\begin{equation}\n{\\rm I}\\kern-0.18em{\\rm P} \\left(R_0 (\\hat{\\phi}_{\\alpha A}^{(b)}) > \\alpha\\right) \\leq\\sum_{j=k^*}^{m_2} {m_2\\choose j} (1-\\alpha)^j \\alpha^{m_2-j}\\leq \\delta_1\\,, \\label{ineq:npc}\n\\end{equation} \nfor every $b = 1,\\ldots, B$. We evaluate the type II error of the $B$ NP classifiers $\\hat{\\phi}^{(1)}_{\\alpha A}, \\ldots, \\hat{\\phi}^{(B)}_{\\alpha A}$ on the left-out class $1$ sets $\\mathcal S_{\\rm lo}^{1(1)},\\ldots,\\mathcal S_{\\rm lo}^{1(B)}$ respectively. 
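The choice of $k^*$ above is straightforward to implement. The sketch below (ours, not the authors' code) computes $\\widehat{C}_{\\alpha A}^{(b)} = T_{(k^*)}^{(b)}$ from the left-out class-$0$ scores by evaluating the binomial tail sum with \\texttt{scipy}; the type II error of the resulting classifier on the left-out class-$1$ scores is then computed exactly as described above, and these type II errors are what s-NPC averages next.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import binom\n\ndef np_umbrella_threshold(scores0_lo, alpha=0.05, delta1=0.05):\n    # scores0_lo: scores s_hat(X) of the left-out class-0 observations\n    t = np.sort(np.asarray(scores0_lo))       # T_(1) <= ... <= T_(m2)\n    m2 = t.size\n    # tail[k-1] = sum_{j=k}^{m2} C(m2,j) (1-alpha)^j alpha^(m2-j)\n    #           = P( Binomial(m2, 1-alpha) >= k )\n    tail = binom.sf(np.arange(m2), m2, 1 - alpha)\n    feasible = np.nonzero(tail <= delta1)[0]\n    if feasible.size == 0:                    # m2 < log(delta1)\/log(1-alpha)\n        raise ValueError('m2 too small for this (alpha, delta1)')\n    k_star = feasible[0] + 1                  # smallest feasible order\n    return t[k_star - 1]                      # threshold T_(k*)\n\\end{verbatim}\nFor instance, one can check that with $m_2 = 100$ and $\\alpha = \\delta_1 = .05$, the rule gives $k^* = 99$, i.e., the threshold is the second largest left-out class-$0$ score.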
Our \\textit{sample-level NPC} for index set $A$ at level $\\alpha$, denoted by $\\rm{NPC}_{\\alpha A}$, computes the average of these type II errors: \n\\begin{align}\\label{Npscore}\n \t \\mathrm{NPC}_{\\alpha A} &:=\\frac{1}{B} \\sum_{b=1}^{B} \\mathrm{NPC}_{\\alpha A}^{(b)}\\,,\\\\\\notag\n \t \\text{with } \\mathrm{NPC}_{\\alpha A}^{(b)} &:= \\frac{1}{n_2} \\sum_{i= n_1 + 1}^{n_1+n_2} \\left[ 1-\\hat{\\phi}^{(b)}_{\\alpha A}\\left(\\bd X_i^{1(b)}\\right) \\right] = \\frac{1}{n_2} \\sum_{i=n_1 +1}^{n_1+n_2}\\mathds{1}\\left( \\hat{s}^{(b)}_{A}\\left(\\bd{X}_{iA}^{1(b)}\\right) \\le \\widehat{C}^{(b)}_{\\alpha A}\\right)\\,,\n \\end{align} \n where $\\hat{s}^{(b)}_{A}(\\cdot) = \\hat{p}_{1A}^{(b)}(\\cdot)\/ \\hat{p}_{0A}^{(b)}(\\cdot)$ is the kernel density ratio constructed on $\\mathcal{S}_{\\rm ts}^{0(b)} \\cup\\mathcal{S}_{\\rm ts}^{1(b)}$ using only the features indexed by $A$, and $\\widehat{C}^{(b)}_{\\alpha A} = T_{(k^*)}^{(b)}$ is given by the NP umbrella algorithm. \n \n \n\n\n\n\n\n\n\n\n\\section{Theoretical properties}\\label{sec:theoretical properties}\n\nThis section investigates the ranking properties of s-CC and s-NPC. Concretely, we wish to address this question: among \\jjl{$J$} candidate feature index sets $A_1, \\ldots, A_J$ of size $l$, is it guaranteed that the s-CC and s-NPC have ranking agreements with the p-CC and p-NPC respectively, with high probability? We consider $J$ as a fixed number in the theory development. We also assume in this section that the number of random splits $B = 1$ in s-CC and s-NPC, and for simplicity we then suppress the superscript $(b)$ in all notations in this section and in the Appendix proofs. \n\nIn addition to investigating ranking consistency, we discover a property unique to s-NPC: the robustness against sampling bias. Concretely, as long as the absolute sample sizes are large enough, s-NPC gives a ranking consistent with p-NPC even if the class size ratio in the sample is far from that in the population. In contrast, s-CC is not robust against sampling bias, except in the scenario that the population class size ratio $\\pi_0 \/ \\pi_1$ is known and we replace the threshold in the plug-in classifiers for s-CC by this \\jjl{ratio}. \n\n\n\n\n\n\n\\subsection{Definitions and key assumptions}\nWe assume that the size of candidate index sets $l$ $(\\ll d)$ is moderate. \n Following \\cite{Audibert05fastlearning}, for any multi-index $\\bd t=\\left(t_1, \\ldots, t_l \\right)^{\\mkern-1.5mu\\mathsf{T}} \\in {\\rm I}\\kern-0.18em{\\rm N}^l$ and $\\bd x= \\left( x_1, \\ldots, x_l\\right)^{\\mkern-1.5mu\\mathsf{T}} \\in {\\rm I}\\kern-0.18em{\\rm R}^l$, we define $|\\bd t| = \\sum_{i=1}^{l}t_i$, $\\bd t! = t_1!\\cdots t_l!$, $\\bd x^{\\bd t}=x_1^{t_1} \\cdots x_l^{t_l}$, $\\left\\| \\bd x\\right\\| = \\left( x_1^2 + \\ldots + x_l^2 \\right)^{1\/2}$, and the differential operator $D^{\\bd t} = \\frac{\\partial^{t_1 + \\cdots + t_l}}{\\partial {x_1^{t_1}} \\cdots \\partial {x_l^{t_l}}}$. For all the theoretical discussions, we assume the domain of $p_{0A}$ and $p_{1A}$, \\jjl{i.e.,} the class-conditional densities of $\\bd X_A|(Y=0)$ and $\\bd X_A|(Y=1)$, is $[-1,1]^l$, where $l = |A|$. We denote the distributions of $\\bd X_A|(Y=0)$ and $\\bd X_A|(Y=1)$ by $P_{0A}$ and $P_{1A}$ respectively. \n \n\\begin{definition}[H\\\"{o}lder function class]\\label{def:holder_function_class}\n\tLet $\\beta>0$. Denote by $\\floor*{\\beta}$ the largest integer strictly less than $\\beta$. 
For a $\\floor*{\\beta}$-times continuously differentiable function $g: {\\rm I}\\kern-0.18em{\\rm R}^l \\rightarrow {\\rm I}\\kern-0.18em{\\rm R}$, we denote by $g_{\\bd x}^{(\\beta)}$ its Taylor polynomial of degree $\\floor*{\\beta}$ at a value $\\bd x \\in {\\rm I}\\kern-0.18em{\\rm R}^l$:\n$$g_{\\bd x}^{(\\beta)}(\\cdot) = \\sum_{{\\left| {\\bd t}\\right|}\\leq \\floor*{\\beta}} \\frac{\\left(\\cdot - {\\bd x}\\right)^{\\bd t}}{{\\bd t}!} D^{\\bd t}g\\left({\\bd x}\\right).$$ \\par\nFor $L >0 $, the $\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$-H\\\"{o}lder function class, denoted by $\\Sigma\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$, is the set of $\\floor*{\\beta}$-times continuously differentiable functions $g: {\\rm I}\\kern-0.18em{\\rm R}^l \\rightarrow {\\rm I}\\kern-0.18em{\\rm R}$ that satisfy the following inequality:\n$$\\left| g\\left( {\\bd x}\\right) -g_{\\bd x}^{(\\beta)}\\left( {\\bd x}^{\\prime}\\right) \\right| \\leq L\\left\\| {\\bd x}- {\\bd x}^{\\prime} \\right\\|^{\\beta}\\,, \\quad \\text{ for all } {\\bd x}, {\\bd x}^{\\prime} \\in \\left[-1, 1\\right]^l\\,.$$\n\\end{definition}\n\n\\begin{definition}[H\\\"{o}lder density class]\\label{def:holder_density_class}\n\tThe $\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)$-H\\\"{o}lder density class is defined as $$\\mathcal{P}_{\\Sigma} \\left( \\beta, L, \\left[-1, 1\\right]^l\\right)= \\left\\{ p: p \\geq 0, \\int p=1, p \\in \\Sigma\\left( \\beta, L, \\left[-1, 1\\right]^l\\right)\\right\\}\\,.$$ \n\\end{definition}\n\n\nThe following $\\beta$-valid kernels are multi-dimensional analogs of univariate higher-order kernels.\n\\begin{definition}[$\\beta$-valid kernel]\\label{definition1}\nLet $K(\\cdot)$ be a real-valued kernel function on ${\\rm I}\\kern-0.18em{\\rm R}^l$ with the support $[-1,1]^l$\\,. For a fixed $\\beta>0$\\,, the function $K(\\cdot)$ is a $\\beta$-valid kernel if it satisfies (1) $\\int |K|^q <\\infty$ for any $q\\geq 1$, (2) $\\int \\|\\bd u \\|^\\beta|K(\\bd u)|d\\bd u <\\infty$, and (3) in the case $\\floor* \\beta \\geq 1$\\,, $\\int \\bd u^{\\bd t} K(\\bd u)d\\bd u = 0 $ for any $\\bd t =(t_1, \\dots, t_l) \\in \\mathbb N^l$ such that $1\\le |\\bd t| \\le\\floor* \\beta$\\,.\n\\end{definition}\n\nOne example of a $\\beta$-valid kernel is the product kernel whose factors are univariate kernels of order $\\beta$:\n$$\n\\widetilde K (\\bd x) = K(x_1)K(x_2)\\cdots K(x_l)\\mathds{1}(\\bd x\\in[-1,1]^l)\\,,\n$$\nwhere $K$ is a 1-dimensional $\\beta$-valid kernel and is constructed based on Legendre polynomials. Such kernels have been considered in \\cite{RigVer09}. When a $\\beta$-valid kernel is constructed out of Legendre polynomials, it is also Lipschitz and bounded. 
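As an illustration of this construction, the following sketch (ours; a numerical illustration rather than the exact kernel used in the authors' experiments) builds the one-dimensional Legendre-based kernel of order $\\floor*{\\beta}$ and the corresponding product kernel on $[-1,1]^l$, and checks the moment conditions of Definition \\ref{definition1} numerically.\n\\begin{verbatim}\nimport numpy as np\nfrom numpy.polynomial import legendre\n\ndef legendre_kernel(order):\n    # K(u) = sum_{m=0}^{order} phi_m(0) phi_m(u) on [-1,1], where phi_m is\n    # the orthonormal Legendre polynomial of degree m; K integrates to 1\n    # and its moments of orders 1,...,order vanish.\n    def K(u):\n        u = np.asarray(u, dtype=float)\n        out = np.zeros_like(u)\n        for m in range(order + 1):\n            c = np.zeros(m + 1)\n            c[m] = np.sqrt((2 * m + 1) \/ 2.0)   # orthonormalization\n            out += legendre.legval(0.0, c) * legendre.legval(u, c)\n        return out * (np.abs(u) <= 1)\n    return K\n\ndef product_kernel(order, l):\n    K1 = legendre_kernel(order)\n    return lambda x: np.prod(K1(np.asarray(x)), axis=-1)   # x of shape (..., l)\n\n# numerical check of the moment conditions for floor(beta) = 3\nK = legendre_kernel(3)\nu = np.linspace(-1, 1, 200001)\ndu = u[1] - u[0]\nprint([round(float(np.sum(u**j * K(u)) * du), 4) for j in range(4)])\n# -> approximately [1.0, 0.0, 0.0, 0.0]\n\\end{verbatim}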
For simplicity, we assume that all the $\\beta$-valid kernels considered in the theory discussion are constructed from Legendre polynomials.\n\n\n\\begin{definition}[Margin assumption]\\label{def: margin_assumpion}\n\tA function $f(\\cdot)$ satisfies the margin assumption of the order $\\bar{\\gamma}$ at the level $C$, with respect to the probability distribution $P$ of a random vector $\\bd X$, if there exist positive constants $\\bar{C}$ and $\\bar{\\gamma}$, such that for all $\\delta \\geq 0$,\n$$P \\left(\\left| f\\left(\\bd X\\right) - C \\right| \\leq \\delta\\right) \\leq \\bar C \\delta^{\\bar{\\gamma}}\\,.$$\n\\end{definition}\n\nThe above condition for densities was first introduced in \\citet{polonik1995measuring}, and its counterpart in the classical binary classification was called margin condition \\citep{MamTsy99}, which is a low noise condition. \nRecall that the set $\\{\\bd x: \\eta(\\bd x)=1\/2\\}$ is the decision boundary of the classical oracle classifier, and the margin condition in the classical paradigm is a special case of Definition \\ref{def: margin_assumpion} by taking $f = \\eta$ and $C=1\/2$. Unlike the classical paradigm where the optimal threshold $1\/2$ on regression function $\\eta$ is known, the optimal threshold level in the NP paradigm is unknown and needs to be estimated, suggesting the necessity of having sufficient data around the decision boundary to detect it. This concern motivated \\cite{tong2013plug} to formulate a detection condition that works as an opposite force to the margin assumption, and \\cite{zhao2016neyman} improved upon it and proved its necessity in bounding the excess type II error of an NP classifier. To establish ranking consistency properties of s-NPC, a bound on the excess type II error is an intermediate result, so we also need this \\jjl{detection condition} for our current work. \n\n\n\n\n\n\n\n\n\\begin{definition}[Detection condition \\citep{zhao2016neyman}]\\label{def:detection_assumption}\n\tA function $f(\\cdot)$ satisfies the detection condition of the order $\\underaccent{\\bar}{\\gamma}$ at the level $(C, \\delta^*)$ with respect to the probability distribution $P$ of a random vector $\\bd X$, if there exists a positive constant $\\underaccent{\\bar}C$, such that for all $\\delta\\in\\left(0, \\delta^*\\right) $,\n$$P\\left( C \\leq f\\left(\\bd X\\right) \\leq C + \\delta \\right) \\geq \\underaccent{\\bar}C \\delta^{\\underaccent\\bar{\\gamma}} \\,.$$\n\\end{definition}\n\n\n\n\\subsection{A uniform deviation result of the scoring function}\n\nFor $A\\subseteq\\{1, \\ldots, d\\}$ and $|A| = l$, recall that we estimate $p_{0A}$ and $p_{1A}$ respectively from $\\mathcal{S}^0_{\\rm ts}$ and $\\mathcal{S}^1_{\\rm ts}$ by kernel density estimators,\n\\begin{align}\\label{eqn:kernel density estimates}\n\\hat{p}_{0A}(\\bd x_A)=\\frac{1}{m_1h_{m_1}^l}\\sum_{i=1}^{m_1} K\\left(\\frac{\\bd X^0_{iA}-\\bd x_A}{h_{m_1}}\\right) \\quad \\text{ and } \\quad \\hat{p}_{1A}(\\bd x_A)=\\frac{1}{n_1h_{n_1}^l}\\sum_{i=1}^{n_1} K\\left(\\frac{\\bd X_{iA}^1-\\bd x_A}{h_{n_1}}\\right)\\,,\n\\end{align}\nwhere $h_{m_1}$ and $h_{n_1}$ denote the bandwidths, and $K(\\cdot)$ is a $\\beta$-valid kernel in ${\\rm I}\\kern-0.18em{\\rm R}^l$. 
We are interested in deriving a high probability bound for $\\left\\| \\hat p_{1A}(\\bd x_A)\/\\hat p_{0A}(\\bd x_A) - p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A)\\right\\|_{\\infty}$.\n\n\n\\begin{condition}\\label{condition: 1}\nSuppose that the densities satisfy\n\\begin{itemize}\n\\item[(i)] There exist positive constants $\\mu_{\\min}$ and $\\mu_{\\max}$ such that $\\mu_{\\max}\\geq p_{0A}\\geq \\mu_{\\min}$ and $\\mu_{\\max}\\geq p_{1A}\\geq \\mu_{\\min}$ for all $A\\subset\\{1 \\ldots, d\\}$ satisfying $|A|=l$.\n\\item[(ii)] There is a positive constant $L$ such that $p_{0A}, p_{1A}\\in\\mathcal{P}_{\\Sigma}(\\beta, L, [-1, 1]^{l})$ for all $A\\subset\\{1 \\ldots, d\\}$ satisfying $|A| = l$. \n\\end{itemize}\n\n\n\\end{condition}\n\n\n \\begin{proposition}\\label{lem:bound_s_shat_for_plugin}\nAssume Condition \\ref{condition: 1} and let the kernel $K$ be $\\beta$-valid and $L^\\prime$-Lipschitz. Let $A \\subseteq\\{1, \\ldots, d\\}$ and $|A| = l$. Let $\\hat p_{0A}(\\cdot)$ and $\\hat p_{1A}(\\cdot)$ \\jjl{be} kernel density estimates defined in \\eqref{eqn:kernel density estimates}. Take the bandwidths $h_{m_1}=\\left(\\frac{\\log m_1}{m_1}\\right)^{\\frac{1}{2\\beta+l}}$ and $h_{n_1}=\\left(\\frac{\\log n_1}{n_1}\\right)^{\\frac{1}{2\\beta+l}}$. For any $\\delta_3 \\in (0,1)$, if sample \\jjl{sizes} $m_1 = |\\mathcal{S}_{\\rm ts}^0|$ and $n_1 = |\\mathcal{S}_{\\rm ts}^1|$ satisfy \\[\n \t\\sqrt{\\frac{\\log\\left(2m_1\/\\delta_3\\right)}{m_1h_{m_1}^{l}}} < 1\\wedge \\frac{\\mu_{\\min}}{2 C_0} \\,, \\quad \\sqrt{\\frac{\\log\\left(2n_1\/\\delta_3\\right)}{n_1h_{n_1}^{l}}}< 1, \\quad n_1 \\wedge m_1 \\geq 2\/\\delta_3\\,,\\quad \n \t\\] \nwhere $C_{0}=\\sqrt{48c_{1}} + 32c_{2}+2Lc_{3}+L'+L+C\\sum_{1\\leq|\\bd q|\\leq\\lfloor\\beta\\rfloor}\\frac{1}{\\bd q!}$, in which $c_{1}=\\mu_{\\max}\\|K\\|^2$, $c_{2}=\\|K\\|_{\\infty}+\\mu_{\\max}+\\int|K||\\bd t|^{\\beta}d\\bd t$, $c_{3}=\\int |K||\\bd t|^{\\beta}d\\bd t$ and $C$ is such that\\\\ $C \\geq \\sup_{1\\leq|\\bd q|\\leq\\lfloor \\beta\\rfloor}\\sup_{\\bd x_A\\in[-1, 1]^l}|D^{\\bd q}p_{0A}(\\bd x_A)|$. Then there exists a positive constant $\\widetilde{C}$ that does not depend on $A$, such that we have with probability at least $1-\\delta_3$, \\[\n \t\\left\\| \\hat p_{1A}(\\bd x_A)\/\\hat p_{0A}(\\bd x_A) - p_{1A}(\\bd x_A)\/p_{0A}(\\bd x_A)\\right\\|_{\\infty} \\leq \\widetilde{C}\\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\beta\/(2\\beta+l)} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\beta\/(2\\beta+l)} \\right]\\,.\n \t\\]\n\n\n \\end{proposition}\n\n\n\n\\subsection{Ranking property of s-CC}\\label{sec:theoretic_plug-in-CC}\n\nTo study the ranking agreement between s-CC and p-CC, an essential step is to develop a concentration result between $\\text{CC}_A$ and $R(\\varphi^*_A)$, where $\\varphi^*_{A}$ was defined in \\eqref{eqn:classical oracle}. \n\n\n\\begin{proposition}\\label{prop: CC1}\nLet $\\delta_3, \\delta_4, \\delta_5\\in (0, 1)$. 
In addition to the assumptions of Proposition \\ref{lem:bound_s_shat_for_plugin}, assume that the density ratio $s_A(\\cdot) = p_{1A}(\\cdot)\/p_{0A}(\\cdot)$ satisfies the margin assumption of order $\\bar\\gamma$ at level $\\pi_0 \/ \\pi_1$ (with constant $\\bar C$) with respect to both $P_{0A}$ and $P_{1A}$, that $m_2 \\geq (\\log\\frac{2}{\\delta_5})^2$ and $n_2 \\geq (\\log\\frac{2}{\\delta_4})^2$, and that $m \/ n = m_1 \/ n_1 = \\pi_0 \/ \\pi_1$; \nthen we have with probability at least $1-\\delta_3-\\delta_4-\\delta_5$, \n$$\n\\left| \\mathrm{CC}_{A} - R \\left( {\\varphi}^*_{A} \\right)\\right|\\leq \\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta\\bar\\gamma}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta\\bar\\gamma}{2\\beta+l}} + m_2^{-\\frac{1}{4}} + n_2^{-\\frac{1}{4}} \\right]\\,,\n$$\t \nfor some positive constant $\\widetilde C$ that does not depend on $A$. \n\\end{proposition}\n\n\nProposition \\ref{lem:bound_s_shat_for_plugin} is essential to establish Proposition \\ref{prop: CC1}, which in turn leads to the ranking consistency of s-CC. \n\n\n\n\\begin{theorem}\\label{thm:selection_consistency_cc}\nLet $\\delta_3$, $\\delta_4$, $\\delta_5\\in (0,1)\\,,$ $A_1, \\ldots, A_J \\subseteq\\left\\{1,\\ldots, d \\right\\}$ and $|A_1| = |A_2|=\\ldots = |A_J| = l$. We consider both $J$ and $l$ to be constants that do not diverge with the sample sizes. In addition to the assumptions in Proposition \\ref{prop: CC1}, assume that the \\jjl{p-CC's} of these feature index sets are separated by some margin $g>0$; in other words, \n$$\n\t \\min \\limits_{i \\in \\{1,\\dots, J-1\\}}\\left\\{ R\\left( {\\varphi}^*_{A_{i+1}}\\right) - R\\left( {\\varphi}^*_{A_i}\\right) \\right\\} > g\\,. \n$$ \nIn addition, assume $m_1, m_2, n_1, n_2$ satisfy that \n\\begin{equation}\\label{eqn:sample size requirement}\n\\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta\\bar\\gamma}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta\\bar\\gamma}{2\\beta+l}} + m_2^{-\\frac{1}{4}} + n_2^{-\\frac{1}{4}} \\right] < \\frac{g}{2}\\,, \n\\end{equation}\nwhere $\\widetilde C$ is the generic constant in Proposition \\ref{prop: CC1}. \nThen with probability at least $1 - J(\\delta_3+\\delta_4+\\delta_5)$, $\\mathrm{CC}_{A_i} < \\mathrm{CC}_{A_{i+1}}$ for all $i = 1, \\ldots, J-1$. That is, the \\jjl{s-CC} ranks $A_1, \\ldots, A_J$ the same as the \\jjl{p-CC}. \n\\end{theorem}\n\n\n\\begin{remark}\nIf the sample size ratio $m\/n$ is far from $\\pi_0\/\\pi_1$, we cannot expect a concentration result on $\\left| \\mathrm{CC}_{A} - R \\left( {\\varphi}^*_{A} \\right)\\right|$, such as Proposition \\ref{prop: CC1}, to hold. As such a concentration result is a cornerstone of ranking consistency between s-CC and p-CC, we conclude that the classical criterion is not robust \\jjl{to} sampling bias. \t\n\\end{remark}\n\n\n\n\n\\subsection{Ranking property of s-NPC}\\label{sec:theoretic_plug-in}\n\nTo establish ranking agreement between s-NPC and p-NPC, an essential step is to develop a concentration result of $\\mathrm{NPC}_{\\alpha A}$ around $R_1(\\varphi^*_{\\alpha A})$, where $\\varphi^*_{\\alpha A}$ was defined in \\eqref{ideaL_sormulation_np}. Recall that $\\hat \\phi_{\\alpha A}(\\bd x) = \\mathds{1}(\\hat s_A(\\bd x_A) > \\widehat C_{\\alpha A}) = \\mathds{1}(\\hat p_{1A}(\\bd x_A)\/\\hat p_{0A}(\\bd x_A) > \\widehat C_{\\alpha A})$, where $\\widehat C_{\\alpha A}$ is determined by the NP umbrella classification algorithm. 
We always assume that the cumulative distribution function of $\\hat s_{A} (\\bd X_A), \\text{ where } \\bd X\\sim P_0$, is continuous. \n\n\n\\begin{lemma} \\label{lem:kprime} \nLet $\\alpha, \\delta_1,\\delta_2 \\in (0,1)\\,.$\nIf $m_2 = \\left| \\mathcal{S}_{\\rm lo}^0 \\right| \\geq \\frac{4}{\\alpha\\delta_1}\\,$, then the classifier $\\hat{\\phi}_{\\alpha A}$ satisfies with probability at least $1-\\delta_1-\\delta_2 \\,,$ \n\\begin{align} \\label{eq: R0_concentration} \n\t\\left|R_0(\\hat{\\phi}_{\\alpha A}) - R_0(\\varphi^*_{\\alpha A}) \\right|\\leq \\xi\\,,\n\\end{align}\nwhere\n\\[\n\t\\xi = \\sqrt{\\frac{\\ceil*{ d_{\\alpha,\\delta_1,m_2} \\left(m_2+1\\right)}\\left(m_2+1-\\ceil*{ d_{\\alpha,\\delta_1,m_2} \\left(m_2+1\\right)}\\right)}{(m_2+2)(m_2+1)^2\\,\\delta_2}} + d_{\\alpha,\\delta_1,m_2} + \\frac{1}{m_2+1} - (1-\\alpha)\\,,\n\\]\n\\[\n\t d_{\\alpha,\\delta_1,m_2} = \\frac{1+ 2\\delta_1 (m_2+2) (1-\\alpha) + \\sqrt{1+ 4\\delta_1(m_2+2)(1-\\alpha)\\alpha}}{2\\left\\{ \\delta_1(m_2+2)+1\\right\\}}\\,,\n\\]\nand $\\ceil*{z}$ denotes the smallest integer larger than or equal to $z$. Moreover, if $m_2 \\geq \\max(\\delta_1^{-2}, \\delta_2^{-2})$, we have \n$\n\\xi \\leq ({5}\/{2}){m_2^{-1\/4}}.\n$\t\\end{lemma}\n\nLemma \\ref{lem:kprime} and a minor modification of proof for Proposition 2.4 in \\cite{zhao2016neyman} lead to the next proposition. We can prove the same upper bound for $\\left|R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})\\right|$ as that for the excess type II error $R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})$ in \\cite{zhao2016neyman}. \n\n\n\n\n\n\n\n\n \n\n\n\n\n\\begin{proposition}\\label{prop:2}\nLet $\\alpha, \\delta_1, \\delta_2 \\in (0,1)$. Assume that the density ratio $s_A(\\cdot) = p_{1A}(\\cdot)\/p_{0A}(\\cdot)$ satisfies the margin assumption of order $\\bar\\gamma$ at level $C^*_{\\alpha A}$ (with constant $\\bar C$) and detection condition of order $\\underaccent{\\bar}\\gamma$ at \nlevel $(C^*_{\\alpha A}, \\delta^*)$ (with constant $\\underaccent{\\bar} C$), both with respect to distribution $P_{0A}$. \n\\noindent\nIf $m_2 \\geq \\max\\{\\frac{4}{\\alpha \\delta_1}, \\delta_1^{-2}, \\delta_2^{-2}, (\\frac{2}{5}\\underaccent{\\bar}C{\\delta^*}^{\\uderbar\\gamma})^{-4}\\}$, the excess type II error of the classifier $\\hat{\\phi}_{\\alpha A}$ satisfies with probability at least $1-\\delta_1-\\delta_2$,\n\\begin{align*}\n&\\left|R_1(\\hat{\\phi}_{\\alpha A}) - R_1({\\varphi}^*_{\\alpha A})\\right|\\\\\n&\\leq\\, \n2\\bar C \\left[\\left\\{\\frac{|R_0( \\hat{\\phi}_{\\alpha A}) - R_0( \\varphi^*_{\\alpha A})|}{\\underaccent{\\bar}C}\\right\\}^{1\/\\uderbar{\\gamma}} + 2 \\| \\hat s_A - s_A \\|_{\\infty} \\right]^{1 + \\bar\\gamma} \n+ C^*_{\\alpha A} |R_0( \\hat{\\phi}_{\\alpha A}) - R_0( \\varphi^*_{\\alpha A})|\\\\\n&\\leq\\,\n2\\bar C \\left[\\left(\\frac{2}{5}m_2^{1\/4}\\underaccent{\\bar}C\\right)^{-1\/\\uderbar{\\gamma}} + 2 \\| \\hat s_A - s_A \\|_{\\infty} \\right]^{1 + \\bar\\gamma} \n+ C^*_{\\alpha A} \\left(\\frac{2}{5} m_2^{1\/4}\\right)^{-1}\\,.\n\\end{align*}\n\\end{proposition}\n\n\n\n\n\t\n\n\n\n\nPropositions \\ref{lem:bound_s_shat_for_plugin} and \\ref{prop:2} lead to the following result. \n\n\n\n\n\n\\begin{theorem}\\label{thm:1}\nLet $\\alpha$, $\\delta_1$, $\\delta_2$, $\\delta_3$, $\\delta_4$ $\\in (0,1)$, and $l = |A|$. 
In addition to the assumptions of Propositions \\ref{lem:bound_s_shat_for_plugin} and \\ref{prop:2}, assume $n_2 \\geq \\left(\\log\\frac{2}{\\delta_4}\\right)^2$,\nthen we have with probability at least $1-(\\delta_1+\\delta_2+\\delta_3 +\\delta_4),$ \n$$\n\\left| \\mathrm{NPC}_{\\alpha A} - R_1 \\left( {\\varphi}^*_{\\alpha A} \\right)\\right|\\leq \\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + m_2^{-(\\frac{1}{4}\\wedge \\frac{1+\\bar\\gamma}{\\underaccent{\\bar}\\gamma})} + n_2^{-\\frac{1}{4}} \\right]\\,,\n$$\t \nfor some positive constant $\\widetilde C$ that does not depend on $A$. \n\\end{theorem}\n\n\n\n\n\n\nUnder smoothness and regularity conditions and sample size requirements, Theorem \\ref{thm:1} shows the concentration of $\\mathrm{NPC}_{\\alpha A}$ around $R_1 \\left( {\\varphi}^*_{\\alpha A}\\right)$ with probability at least $1-(\\delta_1+\\delta_2+\\delta_3+\\delta_4)$. The user-specified violation rate $\\delta_1$ represents the uncertainty that the type I error of an NP classifier $\\hat \\phi_{\\alpha A}$ exceeds $\\alpha$, leading to the underestimation of $R_1 ( {\\varphi}^*_{\\alpha A} )$; $\\delta_2$ accounts for possibility of unnecessarily stringent control on the type I error, which results in the overestimation of $R_1 ( {\\varphi}^*_{\\alpha A} )$; $\\delta_3$ accounts for the uncertainty in training scoring function $\\hat s_A(\\cdot)$ on a finite sample; and $\\delta_4$ represents the uncertainty of using leave-out class $1$ observations $\\mathcal{S}^1_{\\rm lo}$ to estimate $R_1(\\hat\\phi_{\\alpha A})$. Note that while the $\\delta_1$ parameter serves both as the input of the construction of s-NPC and as a restriction to the sample sizes, other parameters $\\delta_2$, $\\delta_3$ and $\\delta_4$ only have the latter role. Like the constant $C_0$ in Proposition \\ref{lem:bound_s_shat_for_plugin}, the generic constant $\\widetilde C$ in Theorem \\ref{thm:1} can be provided more explicitly, but it would be too cumbersome to do so. \n\n\n\n\n\n \n \n \n\n\n\n\\begin{theorem}\\label{thm:selection_consistency_plugin}\nLet $\\alpha$, $\\delta_1$, $\\delta_2$, $\\delta_3$, $\\delta_4 \\in (0,1)\\,,$ $A_1, \\ldots, A_J \\subseteq\\left\\{1,\\ldots, d \\right\\}$ and $|A_1| = |A_2|=\\ldots = |A_J| = l$. We consider both $J$ and $l$ to be constants that do not diverge with the sample sizes. In addition to the assumptions in Theorem \\ref{thm:1}, assume that the p-NPC's of these feature index sets are separated by some margin $g>0$; in other words, \n$$\n\t \\min \\limits_{i \\in \\{1,\\dots, J-1\\}}\\left\\{ R_1\\left( {\\varphi}^*_{\\alpha A_{i+1}}\\right) - R_1\\left( {\\varphi}^*_{\\alpha A_i}\\right) \\right\\} > g\\,. \n$$ \nIn addition, assume $m_1, m_2, n_1, n_2$ satisfy that \n\\begin{equation}\\label{eqn:sample size requirement}\n\\widetilde C \\left[\\left( \\frac{\\log m_1}{m_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + \\left( \\frac{\\log n_1}{n_1}\\right)^{\\frac{\\beta(1+\\bar\\gamma)}{2\\beta+l}} + m_2^{-(\\frac{1}{4}\\wedge \\frac{1+\\bar\\gamma}{\\underaccent{\\bar}\\gamma})} + n_2^{-\\frac{1}{4}} \\right] < \\frac{g}{2}\\,, \n\\end{equation}\nwhere $\\widetilde C$ is the generic constant in Theorem \\ref{thm:1}. \nThen with probability at least $1 - J(\\delta_1+\\delta_2+\\delta_3+\\delta_4)$, $\\mathrm{NPC}_{\\alpha A_i} < \\mathrm{NPC}_{\\alpha A_{i+1}}$ for all $i = 1, \\ldots, J-1$. 
In other words, the s-NPC ranks $A_1, \\ldots, A_J$ the same as the p-NPC. \n\\end{theorem}\n\n\\begin{remark}\nThe conclusion in Theorem \\ref{thm:selection_consistency_plugin} also holds under sampling bias, i.e., when the sample sizes $n$ (of class $1$) and $m$ (of class $0$) do not reflect the population proportions $\\pi_0$ and $\\pi_1$. \t\n\\end{remark}\n\n\nHere we offer some intuition about the robustness of NPC against sampling bias. Note that the objective and constraint of the NP paradigm only involve the class-conditional feature distributions, not the class proportions. Hence, the p-NPC does not rely on the class proportions. Furthermore, in s-NPC the class-conditional densities are estimated separately within each class, not involving the class proportions either. It is also worth noting that the proof of Theorem \\ref{thm:selection_consistency_plugin} (in Appendix) does not use the relation between the ratio of sample class sizes and that of the population class sizes. \n\n\n\n \n\\section{Simulation studies} \\label{sec:simulation}\n\nThis section contains simulation studies regarding the practical performance of s-CC and s-NPC in ranking features. We first demonstrate that s-CC and s-NPC rank the two features differently in the toy example (Figure \\ref{fig:toy example 1}), and their ranks are consistent with their population-level counterparts with high probability. Next we show the performance of s-CC and s-NPC in ranking features under both low-dimensional and high-dimensional settings. Lastly, we compare s-CC and s-NPC with four approaches: the Pearson correlation, the distance correlation \\citep{szekely2009brownian}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, which have been commonly used for marginal feature ranking in practice. \\jjl{In all the simulation studies, we set the number of random splits $B=11$ for s-CC and s-NPC, so that we can achieve reasonably stable criteria and meanwhile finish thousands of simulation runs in a reasonable time.} \n\n\\subsection{Revisiting the toy example at the sample level} \n\nWe simulate $1000$ samples, each of size $n=2000$, from the two-feature distribution defined in (\\ref{eq:toy_example}).\nWe apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ (\\ref{Npscore}) to each sample to rank the two features, and we calculate the frequency of each feature being ranked the top among the $1000$ ranking results. \nTable \\ref{tab:toy_example} shows that s-NPC ($\\alpha = .01$) ranks feature $2$ the top with high probability ($98.4\\%$ frequency), while s-CC and s-NPC ($\\alpha = .20$) prefer feature $1$ with high probability. This is consistent with our population-level result: p-NPC ($\\alpha=.01$) prefers feature $2$, while p-CC and p-NPC ($\\alpha=.20$) find feature $1$ better, as we calculate using closed-form formulas in Section \\ref{sec:NPC_population}. 
Hence, this provides a numerical support to Theorems \\ref{thm:selection_consistency_cc} and \\ref{thm:selection_consistency_plugin}.\n\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:toy_example}The frequency of each feature being ranked the top by each criterion among $1,000$ samples in the toy example (Figure \\ref{fig:toy example 1}).}\n\\centering\n\\begin{tabular}{lrr}\n\\hline\nCriterion & Feature $1$ & Feature $2$\\\\\n\\hline\ns-CC & $78.0\\%$ & $22.0\\%$ \\\\\ns-NPC ($\\alpha = .01$) & $1.6\\%$ & $98.4\\%$ \\\\\ns-NPC ($\\alpha = .20$) & $99.0\\%$ & $1.0\\%$\\\\\n\\hline\t\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Ranking low-dimensional features at the sample level}\\label{sec:sim_low_dim}\nWe next demonstrate the performance of s-CC and s-NPC in ranking features when $d$, the number of features, is much smaller than $n$. Two simulation studies are designed to support our theoretical results in Theorems \\ref{thm:selection_consistency_cc} and \\ref{thm:selection_consistency_plugin}. \n\nFirst, we generate data from the following two-class Gaussian model with $d=30$ features, among which we set the first $s=10$ features to be informative (a feature is informative if and only if it has different marginal distributions in the two classes). \n\\begin{align}\\label{eq:best_subset}\n\t\\bd X \\given (Y=0) &\\sim \\mathcal{N}(\\bd\\mu^0, \\bd\\Sigma)\\,, & \\bd X \\given (Y=1) &\\sim \\mathcal{N}(\\bd\\mu^1, \\bd\\Sigma)\\,, & {\\rm I}\\kern-0.18em{\\rm P}(Y=1) = .5\\,,\n\\end{align}\nwhere $\\bd\\mu^0 = (\\underbrace{-1.5,\\ldots,-1.5}_{10}, \\mu_{11}, \\ldots, \\mu_{30})^{\\mkern-1.5mu\\mathsf{T}}$, $\\bd\\mu^1 = (\\underbrace{1,.9,\\ldots,.2,.1}_{10}, \\mu_{11}, \\ldots, \\mu_{30})^{\\mkern-1.5mu\\mathsf{T}}$, with $\\mu_{11}, \\ldots, \\mu_{30}$ independently and identically drawn from $\\mathcal N(0,1)$ and then held fixed, and $\\bd\\Sigma = 4 \\, \\mathbf{I}_{30}$. In terms of population-level criteria p-CC and p-NPC, a clear gap exists between the first $10$ informative features and the rest features, yet the $10$ features themselves have increasing criterion values but no obvious gaps. That is, the first 10 features have true ranks going down from 1 to 10, and the rest features are tied in true ranks. \n\nWe simulate $1000$ samples of size $n=400$\\footnote{The minimum sample size required for $m_2$, class $0$ sample size reserved for estimating the threshold, in the NP umbrella algorithm is $59$ when $\\alpha = \\delta_1 = .05$. We set the overall sample size to $400$, so that the expected $m_2$ is $100$; then the realized $m_2$ is larger than $59$ with high probability. } or $1000$ from the above model. We apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}), five criteria in total, to each sample to rank the $30$ features. That is, for each feature, we obtain $1000$ ranks by each criterion. We summarize the average rank of each feature by each criterion in Tables \\ref{tab:avg_rank_d30_n400} and \\ref{tab:avg_rank_d30_n1000}, and we plot the distribution of ranks of each feature in Figures \\ref{fig:avg_rank_d30_n400} and \\ref{fig:avg_rank_d30_n1000}. The results show that all criteria clearly distinguish the first 10 informative features from the rest. For s-NPC, we observe that its ranking is more variable for a smaller $\\alpha$ (e.g., $0.05$). 
This is expected because, when $\\alpha$ becomes smaller, the threshold in the NP classifiers would have an inevitably larger variance and lead to a more variable type II error estimate, i.e., s-NPC. As the sample size increases from $400$ (Table \\ref{tab:avg_rank_d30_n400}) to $1000$ (Table \\ref{tab:avg_rank_d30_n1000}), all criteria achieve greater agreement with the true ranks. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d30_n400}Average ranks of the first $20$ features by each criterion with $d=30$ and $n=400$ under the Gaussian setting.}\n\\centering\n\\small\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 2.19 & 2.03 & 3.45 & 4.94 & 5.60 & 6.28 & 5.80 & 7.05 & 8.84 & 8.82 \\\\ \n s-NPC ($\\alpha = .05$) & 2.17 & 3.73 & 4.04 & 6.43 & 5.37 & 5.11 & 6.21 & 9.35 & 8.97 & 8.54 \\\\ \n s-NPC ($\\alpha = .10$) & 1.91 & 4.43 & 4.34 & 3.26 & 5.99 & 6.93 & 6.39 & 7.17 & 6.89 & 7.85 \\\\ \n s-NPC ($\\alpha = .20$) & 2.39 & 3.67 & 3.50 & 3.51 & 6.35 & 4.70 & 5.91 & 7.82 & 8.84 & 8.32 \\\\ \n s-NPC ($\\alpha = .30$) & 1.96 & 2.54 & 3.86 & 4.40 & 5.65 & 5.21 & 6.53 & 7.14 & 8.67 & 9.04 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 19.80 & 21.75 & 21.36 & 16.34 & 18.79 & 21.53 & 22.60 & 18.89 & 17.26 & 23.31 \\\\ \n s-NPC ($\\alpha = .05$) & 15.38 & 21.58 & 22.65 & 21.47 & 17.09 & 21.30 & 20.79 & 21.65 & 20.96 & 18.15 \\\\ \n s-NPC ($\\alpha = .10$) & 20.66 & 23.62 & 18.73 & 23.01 & 21.69 & 19.03 & 23.05 & 18.83 & 20.77 & 20.33 \\\\ \n s-NPC ($\\alpha = .20$) & 20.81 & 17.65 & 21.73 & 21.67 & 17.50 & 21.30 & 20.30 & 22.75 & 18.18 & 23.84 \\\\ \n s-NPC ($\\alpha = .30$) & 16.72 & 22.23 & 19.93 & 19.27 & 19.80 & 21.97 & 19.29 & 19.92 & 18.95 & 19.75 \\\\ \n \\hline \n\\end{tabular}\n\\end{table}\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d30_n1000}Average ranks of the first $20$ features by each criterion with $d=30$ and $n=1,000$ under the Gaussian setting.}\n\\centering\n\\small\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 2.21 & 2.28 & 2.73 & 4.09 & 4.64 & 6.14 & 6.93 & 7.93 & 8.71 & 9.34 \\\\ \n s-NPC ($\\alpha$ = .05) & 2.55 & 2.60 & 4.21 & 4.44 & 4.28 & 6.43 & 6.48 & 6.99 & 8.22 & 8.80 \\\\ \n s-NPC ($\\alpha$ = .10) & 1.97 & 2.76 & 2.72 & 4.49 & 4.26 & 6.63 & 6.74 & 7.67 & 8.72 & 9.04 \\\\ \n s-NPC ($\\alpha$ = .20) & 1.36 & 2.35 & 3.23 & 4.19 & 4.67 & 5.93 & 7.02 & 8.24 & 8.75 & 9.24 \\\\ \n s-NPC ($\\alpha$ = .30) & 1.85 & 2.73 & 2.71 & 3.58 & 5.18 & 6.11 & 6.80 & 8.04 & 9.01 & 8.99 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 18.65 & 18.19 & 20.78 & 19.92 & 23.99 & 18.60 & 19.87 & 22.16 & 21.70 & 21.61 \\\\ \n s-NPC ($\\alpha$ = .05) & 22.07 & 20.25 & 21.63 & 18.63 & 17.00 & 22.16 & 19.80 & 23.05 & 19.68 & 20.84 \\\\ \n s-NPC ($\\alpha$ = .10) & 20.37 & 19.67 & 22.67 & 20.15 & 19.31 & 19.58 & 21.61 & 18.53 & 20.51 & 22.49 \\\\ \n s-NPC ($\\alpha$ = .20) & 19.10 & 20.26 & 18.08 & 20.69 & 22.15 & 22.65 & 18.19 & 21.55 & 23.79 & 20.48 \\\\ \n s-NPC ($\\alpha$ = .30) & 18.19 & 19.32 & 20.80 & 16.88 & 22.97 & 21.70 & 19.81 & 23.49 & 19.24 & 20.95 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nSecond, we generate data from the following two-class Chi-squared distributions of $d=30$ features, among which we still set the first $s=10$ features to be informative.\n\\begin{align}\\label{eq:chisq}\n\t\\bd X_{\\{j\\}} \\given (Y=0) &\\sim \\chi^2_1\\,, \\; 
j=1,\\ldots,30 \\\\\n\t\\bd X_{\\{1\\}} \\given (Y=1) &\\sim \\chi^2_{11}\\,, \\; \\bd X_{\\{2\\}} \\given (Y=1) \\sim \\chi^2_{10}\\,, \\cdots \\,, \\bd X_{\\{10\\}} \\given (Y=1) \\sim \\chi^2_{2} \\notag \\\\\n\t\\bd X_{\\{j\\}} \\given (Y=1) &\\sim \\chi^2_1\\,, \\; j=11,\\ldots,30 \\notag\n\\end{align}\nSimilar to the previous Gaussian setting, the first $10$ features have true ranks going down from $1$ to $10$, and the rest features are tied in true ranks. We simulate $1000$ samples of size $n=400$ or $1000$ from this model, and we apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}), five criteria in total, to each sample to rank the $30$ features. We summarize the average rank of each feature by each criterion in Tables \\ref{tab:avg_rank_d30_n400_chisq} and \\ref{tab:avg_rank_d30_n1000_chisq} (in Appendix), and we plot the distribution of ranks of each feature in Figures \\ref{fig:avg_rank_d30_n400_chisq} and \\ref{fig:avg_rank_d30_n1000_chisq} (in Appendix). The results and conclusions are consistent with those under the Gaussian setting. \n\n\n\n\\subsection{Ranking high-dimensional features at the sample level}\nWe also test the performance of s-CC and s-NPC when $d > n$. We set $d=500$ and $n=400$. The generative model is the same as \\eqref{eq:best_subset}, where $\\bd\\mu^0 = (\\underbrace{-1.5,\\ldots,-1.5}_{10}, \\mu_{11}, \\ldots, \\mu_{500})^{\\mkern-1.5mu\\mathsf{T}}$, $\\bd\\mu^1 = (\\underbrace{1,.9,\\ldots,.2,.1}_{10}, \\mu_{11}, \\ldots, \\mu_{500})^{\\mkern-1.5mu\\mathsf{T}}$, with $\\mu_{11}, \\ldots, \\mu_{500}$ independently and identically drawn from $\\mathcal N(0,1)$ and then held fixed, and $\\bd\\Sigma = 4 \\, \\mathbf{I}_{500}$. Same as in the low-dimensional setting (Section \\ref{sec:sim_low_dim}), p-CC and p-NPC have a clear gap between the first $10$ informative features and the rest features but no obvious gaps among the informative features. In terms of both p-CC and p-NPC, the first 10 features have true ranks going down from 1 to 10, and the rest features are tied in true ranks. \n\nWe simulate $1000$ samples of size $n=400$ and apply s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ (\\ref{Npscore}) to each sample to rank the $500$ features. We summarize the average rank of each feature by each criterion in Table \\ref{tab:avg_rank_d500_n400}, and we plot the distribution of ranks of each feature in Figure \\ref{fig:avg_rank_d500_n400}. The results show that ranking under this high-dimensional setting is more difficult than in the low-dimensional setting. However, s-CC and s-NPC with $\\alpha = .20$ or $.30$ still clearly distinguish the first 10 informative features from the rest, while s-NPC with $\\alpha = .05$ or $.10$ has worse performance on features 8--10, demonstrating again that ranking becomes more difficult for s-NPC when $\\alpha$ is small. 
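To make the computations behind these simulation results concrete, the following self-contained sketch (ours, not the authors' implementation) computes s-CC and s-NPC for a single feature from one sample; it uses \\texttt{scipy}'s Gaussian kernel density estimator as a convenient stand-in for the $\\beta$-valid kernel estimator in \\eqref{eqn:kernel density estimates b}, so it illustrates the procedure rather than the exact estimator analyzed in Section \\ref{sec:theoretical properties}.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import gaussian_kde, binom\n\ndef umbrella_threshold(scores0_lo, alpha, delta1):\n    # order-statistic threshold of the NP umbrella algorithm (Section 3.2)\n    t = np.sort(scores0_lo)\n    tail = binom.sf(np.arange(t.size), t.size, 1 - alpha)\n    return t[np.nonzero(tail <= delta1)[0][0]]     # T_(k*)\n\ndef s_cc_and_s_npc(x0, x1, alpha=0.05, delta1=0.05, B=11, seed=0):\n    # x0, x1: one-dimensional feature values for class 0 and class 1\n    x0, x1 = np.asarray(x0), np.asarray(x1)\n    rng = np.random.default_rng(seed)\n    cc, npc = [], []\n    for _ in range(B):\n        x0_b, x1_b = rng.permutation(x0), rng.permutation(x1)\n        m1, n1 = x0.size \/\/ 2, x1.size \/\/ 2\n        x0_ts, x0_lo = x0_b[:m1], x0_b[m1:]\n        x1_ts, x1_lo = x1_b[:n1], x1_b[n1:]\n        p0_hat, p1_hat = gaussian_kde(x0_ts), gaussian_kde(x1_ts)\n        s = lambda x: p1_hat(x) \/ p0_hat(x)        # estimated density ratio\n        # s-CC^(b): error of the plug-in classifier on both left-out sets,\n        # with threshold m1\/n1 mimicking pi0\/pi1\n        err = np.sum(s(x1_lo) <= m1 \/ n1) + np.sum(s(x0_lo) > m1 \/ n1)\n        cc.append(err \/ (x0_lo.size + x1_lo.size))\n        # s-NPC^(b): umbrella threshold from left-out class-0 scores,\n        # type II error evaluated on left-out class-1 scores\n        c_hat = umbrella_threshold(s(x0_lo), alpha, delta1)\n        npc.append(np.mean(s(x1_lo) <= c_hat))\n    return float(np.mean(cc)), float(np.mean(npc))\n\\end{verbatim}\nRanking $d$ features then amounts to applying this function to each feature column and sorting features by the returned criterion values, with smaller values ranked higher.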
\n\n\\begin{table}[htbp]\n\\caption{\\label{tab:avg_rank_d500_n400}Average ranks of the first $20$ features by each criterion with $d=500$ and $n=400$ under the Gaussian setting.}\n\\centering\n\\scriptsize\n\\begin{tabular}{lrrrrrrrrrr}\n \\hline\n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\\ \n \\hline\ns-CC & 1.51 & 3.39 & 3.25 & 4.82 & 4.43 & 6.47 & 6.59 & 6.80 & 8.53 & 9.86 \\\\ \n s-NPC ($\\alpha$ = .05) & 2.48 & 3.14 & 3.81 & 4.57 & 4.88 & 33.75 & 87.81 & 177.79 & 136.12 & 183.96 \\\\ \n s-NPC ($\\alpha$ = .10) & 2.21 & 2.34 & 3.84 & 4.08 & 5.56 & 6.70 & 6.61 & 19.97 & 116.98 & 51.27 \\\\ \n s-NPC ($\\alpha$ = .20) & 1.87 & 2.55 & 3.60 & 3.76 & 5.41 & 6.35 & 6.67 & 7.51 & 8.61 & 46.10 \\\\ \n s-NPC ($\\alpha$ = .30) & 1.43 & 3.29 & 3.44 & 4.54 & 5.52 & 6.25 & 6.86 & 5.91 & 8.34 & 11.48 \\\\ \n \\hline\n & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\\ \n \\hline\ns-CC & 234.07 & 244.32 & 213.54 & 213.01 & 183.60 & 249.73 & 292.85 & 269.15 & 328.63 & 240.94 \\\\ \n s-NPC ($\\alpha$ = .05) & 270.19 & 252.46 & 174.22 & 211.67 & 125.66 & 241.64 & 317.62 & 340.59 & 231.31 & 205.63 \\\\ \n s-NPC ($\\alpha$ = .10) & 254.37 & 300.12 & 317.98 & 213.02 & 263.69 & 223.81 & 296.64 & 279.72 & 288.77 & 234.69 \\\\ \n s-NPC ($\\alpha$ = .20) & 223.00 & 253.27 & 287.14 & 205.65 & 249.97 & 187.17 & 312.73 & 224.19 & 265.96 & 238.16 \\\\ \n s-NPC ($\\alpha$ = .30) & 209.82 & 192.70 & 206.62 & 271.58 & 236.41 & 263.22 & 189.90 & 299.44 & 238.57 & 269.64 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_lowdim_n400_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=30$ and $n=400$ under the Gaussian setting.\\label{fig:avg_rank_d30_n400}}\t\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_lowdim_n1000_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=30$ and $n=1000$ under the Gaussian setting.\\label{fig:avg_rank_d30_n1000}}\t\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\includegraphics[width=\\textwidth]{plots\/sim_highdim_n400_rankdist.pdf}\n\\caption{Rank distributions of the first $20$ features by each criterion with $d=500$ and $n=400$ under the Gaussian setting.\\label{fig:avg_rank_d500_n400}}\t\n\\end{figure}\n\n\n\n\n\n\n\n\\subsection{Comparison with other marginal feature ranking approaches} \n\n We compare s-CC and s-NPC with four approaches that have been widely used to rank features marginally: the Pearson correlation, the distance correlation \\citep{szekely2009brownian}, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test. None of these existing approaches rank features based on a prediction objective; as a result, the feature ranking they give may not reflect the prediction performance of features under a particular objective. Here we use an example to demonstrate this phenomenon. We generate data with $d=2$ features from the following model:\n \\begin{align}\\label{eq:gauss_mixture}\n\tX_1 \\given (Y=0) &\\sim \\mathcal{N}(0, 1)\\,, & X_1 \\given (Y=1) &\\sim \\mathcal{N}(1, 1)\\,, & {\\rm I}\\kern-0.18em{\\rm P}(Y=1) = .5\\,,\t\\notag\\\\\n\tX_2 \\given (Y=0) &\\sim \\mathcal{N}(0, 1)\\,, & X_2 \\given (Y=1) &\\sim .5\\,\\mathcal{N}(-2, 1) + .5\\,\\mathcal{N}(2, 1)\\,. 
& \t\n \\end{align}\n To calculate p-CC and p-NPC with $\\delta_1=.05$ at four $\\alpha$ levels $.05$, $.10$, $.20$, and $.30$ on these two features, we use a large sample with size $10^6$ for approximation, and the results in Table~\\ref{tab:gauss_mixture_pop} show that all the five population-level criteria rank feature 2 as the top feature.\n \n\\begin{table}[htbp]\n\\caption{\\label{tab:gauss_mixture_pop}Values of p-CC and p-NPC of the two features in \\eqref{eq:gauss_mixture}.}\n\\centering\n\\small\n\\begin{tabular}{rrrrrr}\n \\hline\nFeature & p-CC & p-NPC ($\\alpha$ = .05) & p-NPC ($\\alpha$ = .10) & p-NPC ($\\alpha$ = .20) & p-NPC ($\\alpha$ = .30) \\\\ \n \\hline\n1 & .31 & .74 & .61 & .44 & .32 \\\\ \n 2 & .22 & .49 & .36 & .24 & .17 \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nThen we simulate $1000$ samples of size $n=400$ from the above model and apply nine ranking approaches: s-CC, s-NPC with $\\delta_1=.05$ at four $\\alpha$ levels ($.05$, $.10$, $.20$, and $.30$), the Pearson correlation, the distance correlation, the two-sample $t$ test, and the two-sample Wilcoxon rank-sum test, to each sample to rank the two features. From this we obtain $1000$ rank lists for each ranking approach, and we calculate the frequency that each approach correctly finds the true rank order. The frequencies are summarized in Table~\\ref{tab:gauss_mixture_freq}, which shows that none of the four common approaches identifies feature 2 as the better feature for prediction. In other words, if users wish to rank features based on a prediction objective under the classical or NP paradigm, these approaches are not suitable ranking criteria. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:gauss_mixture_freq}The frequency that each ranking approach identifies the true rank order.}\n\\centering\n\\small\n\\begin{tabular}{rrrrr}\n \\hline\ns-CC & s-NPC ($\\alpha$ = .05) & s-NPC ($\\alpha$ = .10) & s-NPC ($\\alpha$ = .20) & s-NPC ($\\alpha$ = .30) \\\\ \n100\\% & 99.9\\% & 99.3\\% & 99.7\\% & 100\\% \\\\ \n \\hline\nPearson cor & distance cor & two-sample $t$ & two-sample Wilcoxon &\\\\\n0\\% & 0.5\\% & 0\\% & 0\\% &\\\\\n\t\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\section{Real data applications}\\label{simu:realdata}\nWe apply s-CC and s-NPC to two real datasets to demonstrate their wide application potential in biomedical research. \\jjl{Here we set the number of random splits $B=1000$ for s-CC and s-NPC for stability consideration.} First, we use a dataset containing genome-wide DNA methylation profiles of $285$ breast tissues measured by the Illumina HumanMethylation450 microarray technology. This dataset includes $46$ normal tissues and $239$ breast cancer tissues. Methylation levels are measured at $468,424$ CpG probes in every tissue \\citep{fleischer2014genome}. We download the preprocessed and normalized dataset from the Gene Expression Omnibus (GEO) \\citep{edgar2002gene} with the accession number GSE60185. The preprocessing and normalization steps are described in detail in \\cite{fleischer2014genome}. To facilitate the interpretation of our analysis results, we further process the data as follows. First, we discard a CpG probe if it is mapped to no gene or more than one genes. Second, if a gene contains multiple CpG probes, we calculate its methylation level as the average methylation level of these probes. This procedure leaves us with $19,363$ genes with distinct methylation levels in every tissue. 
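As a concrete illustration of this preprocessing step, a minimal pandas sketch is given below. The file names and column names (probe identifier and gene symbol) are hypothetical placeholders rather than the actual GEO field names, and probes mapped to several genes are assumed to be annotated with semicolon-separated symbols.
\\begin{verbatim}
import pandas as pd

# beta: methylation matrix (CpG probes x tissues); annot maps each probe to a gene symbol.
# File and column names are illustrative placeholders, not the original GEO identifiers.
beta = pd.read_csv('GSE60185_beta_values.csv', index_col=0)
annot = pd.read_csv('probe_annotation.csv')   # columns: probe_id, gene

# 1) discard probes mapped to no gene or to more than one gene
annot = annot.dropna(subset=['gene'])
annot = annot[~annot['gene'].str.contains(';')]

# 2) average the probes within each gene to obtain a gene-level methylation matrix
beta = beta.loc[beta.index.intersection(annot['probe_id'])]
gene_of = annot.set_index('probe_id')['gene']
gene_level = beta.groupby(gene_of.reindex(beta.index)).mean()
print(gene_level.shape)   # expected to be roughly (19363, 285) for this dataset
\\end{verbatim}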
We consider the tissues as data points and the genes as features, so we have a sample with size $n=285$ and number of features $d=19,363$. Since misclassifying a patient with cancer to be healthy leads to more severe consequences than the other way around, we code the $239$ breast cancer tissues as the class $0$ and the $46$ normal tissues as the class $1$ to be aligned with the NP paradigm. After applying s-CC (\\ref{CC}) and s-NPC with $\\delta_1 = .05$ and four $\\alpha$ levels ($.05$, $.10$, $.20$, and $.30$) (\\ref{Npscore}) to this sample, we summarize the top $10$ genes found by each criterion in Table \\ref{tab:bc_rank}. Most of these top ranked genes have been reported associated with breast cancer, suggesting that our proposed criteria can indeed help researchers find meaningful features. Meanwhile, although other top ranked genes do not yet have experimental validation, they have weak literature indication and may serve as potentially interesting targets for cancer researchers. For a detailed list of literature evidence, please see \\textit{the Supplementary Excel File}. The fact that these five criteria find distinct sets of top genes is in line with our rationale that feature importance depends on prediction objective. By exploring top features found by each criterion, researchers will obtain a comprehensive collection of features that might be scientifically interesting. \n\n\\begin{table}[htbp]\n\\caption{\\label{tab:bc_rank}Top 10 genes found by each criterion in breast cancer methylation data \\citep{fleischer2014genome}. Genes with strong literature evidence to be breast-cancer-associated are marked in bold; see the Supplementary Excel File. }\n\\centering\n\\small\n\\begin{tabular}{rccccc}\n \\hline\nRank & s-CC & s-NPC ($\\alpha$ = .05) & s-NPC ($\\alpha$ = .10) & s-NPC ($\\alpha$ = .20) & s-NPC ($\\alpha$ = .30) \\\\ \n \\hline\n1 & \\textbf{HMGB2} & \\textbf{HMGB2} & \\textbf{HMGB2} & \\textbf{ABHD14A} & \\textbf{ABHD14A} \\\\ \n 2 & \\textbf{MIR195} & MICALCL & \\textbf{ABHD14A} & \\textbf{ABL1} & \\textbf{ABL1} \\\\ \n 3 & MICALCL & NR1H2 & ZFPL1 & \\textbf{BAT2} & \\textbf{ACTN1} \\\\ \n 4 & \\textbf{AIM2} & \\textbf{AGER} & \\textbf{AGER} & \\textbf{BATF} & AKAP8 \\\\ \n 5 & AGER & \\textbf{BATF} & RILPL1 & \\textbf{CCL8} & AP4M1 \\\\ \n 6 & KCNJ14 & ZFP106 & SKIV2L & \\textbf{COG8} & \\textbf{ARHGAP1} \\\\ \n 7 & \\textbf{HYAL1} & CTNNAL1 & \\textbf{TP53} & FAM180B & \\textbf{ATG4B} \\\\ \n 8 & SKIV2L & \\textbf{MIR195} & \\textbf{RELA} & \\textbf{HMGB2} & \\textbf{BAT2} \\\\ \n 9 & \\textbf{RUSC2} & \\textbf{AIM2} & \\textbf{MIR195} & \\textbf{HSF1} & BAT5 \\\\ \n 10 & DYNC1H1 & ZFPL1 & \\textbf{CCL8} & KIAA0913 & \\textbf{BATF} \\\\ \n \\hline\n\\end{tabular}\n\\end{table}\n\nSecond, we apply s-CC and s-NPC to a dataset of microRNA (miRNA) expression levels in urine samples of prostate cancer patients, downloaded from the GEO with accession number GSE86474 \\citep{jeon2019temporal}. This dataset is composed of $78$ high-risk and $61$ low-risk patients. To align with the NP paradigm, we code the high-risk and low-risk patients as class $0$ and $1$, respectively, so $m\/n=78\/61$. In our data pre-processing, we retain miRNAs that have at least $60\\%$ non-zero expression levels across the $n=139$ patients, resulting in $d=112$ features. We use this dataset to demonstrate that s-NPC is robust to sampling bias that results in disproportional training data; that is, training data have different class proportions from those of the population. 
We create two new datasets by randomly removing one half of the data points in class $0$ or $1$, so that one dataset has $m\/n=39\/61$ and the other has $m\/n=78\/31$. We apply s-CC and s-NPC with $\\delta_1 = .05$ to each dataset to rank features. To evaluate each criterion's robustness to disproportional data, we compare its rank lists from two datasets with different $m\/n$ ratios. For this comparison, we define \n\\[ \\text{consistency}(j) = \\frac{|A_j \\cap B_j|}{j}\\,, \\;j=1,\\ldots,d\\,,\n\\]\nwhere $A_j$ and $B_j$ are the top $j$ features from two rank lists. Given $j$, the higher the consistency, the more robust a criterion is to disproportional data. We illustrate the consistency of s-CC and s-NPC in Figure~\\ref{fig:consistency}, which shows that s-NPC is much more robust than s-CC. \n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{plots\/analysis_consistency_combined.pdf}\n\\caption{Consistency of s-CC and s-NPC in ranking features in miRNA urine data \\citep{jeon2019temporal}.\\label{fig:consistency}}\t\n\\end{figure}\n\n\n\\section{Discussion}\\label{sec:conclusions}\n\nThis work introduces a model-free, objective-based marginal feature ranking approach for the purpose of binary decision-making. The explicit use of a prediction objective to rank features is demonstrated to outperform existing practices, which rank features based on an association measure that reflects neither the prediction objective nor the distributional characteristics. In addition to the illustrated CC and NP paradigms, the same marginal ranking idea extends to other prediction objectives such as the cost-sensitive learning and global paradigms. Another extension direction is to rank feature pairs in the same model-free fashion. In addition to the biomedical examples we show in this paper, model-free, objective-based marginal feature ranking is also useful for finance applications, among others. For example, a loan company has successful business in region A and would like to establish new business in region B. To build a loan-eligibility model for region B, which has a much smaller fraction of eligible applicants than region A, the company may use the top features ranked by s-NPC in region A, thanks to the robustness of s-NPC to sampling bias. \n\nBoth s-CC and s-NPC involve sample splitting. The default option is a half-half split for both class $0$ and class $1$ observations. It remains an open question whether a refined splitting strategy may lead to a better ranking agreement between the sample-level and population-level criteria. Intuitively, there is a trade-off between classifier training and objective evaluation: using more data for training can result in a classifier closer to the oracle, while saving more data to evaluate the objective can lead to a less variable criterion. \n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n{Planet formation starts with sticking collisions of dust in protoplanetary disks \\citep{blumwurm2008}. Relative velocities between the solids are provided by sedimentation to the midplane, radial and transversal drifts, and turbulence \\citep{Birnstiel2016}. Collisional grain growth might proceed at least to millimeter grain size before bouncing dominates the outcome of a collision \\citep{zsom2010, Demirci2017}.}\n\n{It is the interaction or coupling between the solid grains and the gas that sets the collision velocities. 
Beyond collisions, gas-grain coupling also determines the dust scale height or how far particles sediment to the midplane in balance with turbulent mixing \\citep{Birnstiel2016, Pignatale2017}. Gas-grain coupling is also important for the radial inward drift, especially of decimeter- to meter-sized bodies \\citep{Weidenschilling1977}, and it is a major part in trapping particles in pressure bumps \\citep{whipple1972}.} \nWhile for some aspects particles can be considered as tracer particles -- individual grains with no influence on the gas motion itself -- dense particle clouds require a more complex treatment. \n\n{In recent years, particle-gas feedback was suggested to promote particle concentration that eventually lead to gravitational collapse into planetesimals \\citep{Youdin2005, JohansenYoudin2007, gonzalez2017, Dipierro2018, Squire2018SI}. Concentration mechanisms depend strongly on the particles' Stokes numbers, the metallicity, and the solid-to-gas ratio (e.g. \\cite{bai2010c, carrera2015, yang2017, Squire2018SI}). In any case, these mechanisms might take over from collisional growth at pebble size to form planetesimals.}\n \nAssisting this numerical work relying on particle-gas feedback mechanisms, we investigate the motion of dense particle clouds in a thin gas in laboratory experiments here.\n\nA basic concept in a simple system of one particle in an unlimited reservoir of gas is that the grain needs a certain gas-grain friction time $\\tau_f$ to follow any change in gas motion or react to any external force and reach equilibrium between external force and friction. \nThe flow around the particle can be divided in molecular flow ($\\rm Kn \\gg 1$) {determined by} Epstein drag and continuum flow ($\\rm Kn \\ll 1$) {determined by} Stokes drag where Kn is the Knudsen number with the mean free path $\\lambda$ and particle radius $r$, \n\\begin{equation}\n\\mathrm{Kn} = \\frac{\\lambda}{r}.\n\\label{Knudsen}\n\\end{equation}\nThe stationary sedimentation speed of a grain is given by \n\\begin{equation}\nv_0 = \\tau_f \\cdot g, \n\\label{tau_f}\n\\end{equation}\nwhere $g$ is the gravitational acceleration, i.e. the vertical component of a star's gravity in a protoplanetary disk. \n\n\nThe motion of an individual particle in a cloud of many particles can only be treated this way to a certain limit. It requires that {back-reaction} on the gas and feedback from this {back-reaction} to the other grains can be neglected. This is only true for an isolated particle, i.e. for low {solid-}to-gas mass ratios and low volume filling factors. In protoplanetary disks, the canonical {solid-}to-gas ratio of 0.01 can change by sedimentation and other concentration mechanisms by several orders of magnitude, while the volume filling factor remains low ($< 10^{-6}$) \\citep{Klahr2018}.\n\n\n \nGrains in a dense cloud might just effectively behave like a larger particle, moving faster like the individual grains \\citep{JohansenYoudin2007}. {Eventually}, collective behavior might lead to planetesimal formation \\citep{Johansen2007, Chiang2010, Klahr2018}.\n\nIn \\cite{schneider2019}, we studied the transition from test particle to collective behavior in {a} levitation experiment, analyzing the {free-fall} velocity of grains in a cloud. 
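For orientation, the single-particle quantities introduced above can be evaluated with the short Python sketch below, which computes the Knudsen number of eq. (\\ref{Knudsen}) and the friction time implied by eq. (\\ref{tau_f}) for a measured settling speed. All numerical values are illustrative assumptions and are not parameters of the experiments described in this work.
\\begin{verbatim}
# Single-particle quantities: Kn = lambda / r and tau_f = v_0 / g.
# The numbers below are illustrative assumptions, not measured experiment parameters.

def knudsen(mean_free_path, particle_radius):
    # Kn >> 1: Epstein (molecular) drag; Kn << 1: Stokes (continuum) drag
    return mean_free_path / particle_radius

def friction_time(settling_speed, g=9.81):
    # gas-grain friction time obtained by solving v_0 = tau_f * g for tau_f
    return settling_speed / g

lam = 1.0e-4    # assumed mean free path in m at the low working pressure
r_p = 25.0e-6   # assumed particle radius in m
v_0 = 0.05      # assumed undisturbed sedimentation speed in m per second
print(knudsen(lam, r_p), friction_time(v_0))   # -> 4.0 (Epstein regime) and about 5.1e-3 s
\\end{verbatim}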
\nWe {empirically} found that the sedimentation velocity depends on what we call sensitivity factor $F_S$ and the closeness $C$ of the individual particle as\n\\begin{equation}\n v_s = v_0 + F_S \\cdot C.\n \\label{sedvel}\n\\end{equation}\n{The closeness $C$ of a particle is constructed from the interparticle distances $r_j-r$ between the grain and all other particles $j$ as}\n\\begin{equation}\n C = \\sum_{j=1}^N \\frac{1}{|r_j-r|}\n \\label{closeness}\n\\end{equation}\n{$N$ is the total number of particles. The sensitivity factor $F_S$ in eq. \\ref{fs}\ndepends on the average solid-to-gas ratio $\\epsilon$ of the system:\n\\begin{equation}\nF_s = \\alpha (\\epsilon - \\epsilon_{\\rm crit}) \\quad \\texttt{for} \\quad \\epsilon > \\epsilon_{\\rm crit}\n\\label{fs}\n\\end{equation} \nThe solid-to-gas ratio $\\epsilon$ is defined as the ratio between the total dust mass and the average gas mass,}\n\\begin{equation}\n \\epsilon = \\frac{N \\cdot m_p}{V \\cdot \\rho_g} = \\frac{1}{6} \\pi s^3 \\frac{N}{V} \\frac{\\rho_p}{\\rho_g},\n \\label{epsilon}\n\\end{equation}\n{where $m_p$ is the mass of a single particle, $V$ is the total volume covered by particles, $\\rho_g$ is the gas density in the chamber, and $\\rho_p$ is the bulk density of the individual grains, and $s$ is the particle diameter.}\n\n{As seen in eq. \\ref{fs}, \\citet{schneider2019} also empirically found that} particles {are only influenced by} the other particles in a cloud if the average {solid-}to-gas ratio $\\epsilon$ is above a threshold value $\\epsilon_{\\rm crit}${; otherwise, $F_s=0$ and particles sediment with $v_0$}. \n\n{The sensitivity $\\alpha$ connecting the solid-to-gas ratio to the sensitivity factor in eq. \\ref{fs} was just a constant in \\citet{schneider2019}}.\n\n{This description is purely empirical and was deduced from a single experiment so far.}\nHere, we present a systematic analysis, where we varied the gas pressure, particle size, {and} rotation frequency of the chamber and improved the setup and data acquisition.\n\n\\section{Levitation Experiment}\n\n\\subsection{Setup}\nThe setup of the experiment (see fig. \\ref{fig:setup}) follows the principle used in aggregation experiments by \\cite{PoppeBlum1997} and \\cite{blum1998}, but is especially based on earlier experiments on dense clouds by \\citet{schneider2019}. \n\nParticles -- in this study, hollow glass spheres of different sizes and densities -- are dispersed {once at the beginning of the experiment,} within a rotating vacuum chamber with low ambient pressure. Particles are injected using a vibrating sieve in an extension of the vacuum chamber. The gas inside follows the rigid rotation of the chamber.\nThe vacuum chamber has a diameter of 320~mm. Inside the chamber, a ring of LEDs is used to illuminate the particles. The scattered light of the particles is detected by two non-rotating cameras.\n\\begin{figure\n \\centering\n \\includegraphics[width=\\columnwidth]{Setup.pdf}\n \\caption{Experimental setup without auxiliary parts. The vacuum chamber is evacuated to a preset pressure. Two cameras observe the particles from the front. Illumination is provided by LED modules.}\n \\label{fig:setup}\n\\end{figure}\n\n{These} cameras observe the particles from the front at a distance of 40~cm on a 1'', 5~megapixel sensor with a spatial resolution in the order of 10 $ \\rm \\mu m $. The frame rate is 40 fps with an exposure time of 6 ms. The field of view is $19 \\times 15 $cm. 
Imaging and synchronization of both cameras {are} controlled by a machine vision computer. Spatial calibration was realized with a calibration matrix with $\\sim 10,000$ data points for each camera.\n\n\nThe experimental parameters are {the} particle radius of the sample $r_{\\rm P}$, {the} particle bulk density $\\rho_{\\rm P} $, {the} gas pressure $p$, and {the} rotation frequency $f$. We define the Stokes number of the experiment as\n\\begin{equation}\n \\mathrm{St}=\\tau_f \\cdot f.\n \\label{eq.St}\n\\end{equation}\nThe friction time is calculated with equation \\ref{tau_f}.\nThe experiments were carried out similar to the ones described in \\cite{schneider2019}. \nIn short, the chamber was evacuated to a preset pressure and then disconnected from the vacuum pump. The injection process was then started and the chamber was set to a predefined rotation frequency before image acquisition for both cameras was started.\n\n\n\n\n\n\n\n\n\\subsection{Data analysis}\n\nIn principle, data were processed as in \\cite{schneider2019}. We refer the reader to that paper for details. {All} particle positions were extracted for all times\nwith Trackmate \\citep{trackmate}{,} using a Laplacian of Gaussian {(LoG)} particle detector and {a} Linear Motion LAP tracker for particle track assignment.\nFor data analysis, all particle positions and every particle track with track length $>100$ frames were taken into account.\n\n\n\n{Since} we used two parallel cameras, the 3d position was reconstructed as {a} new feature here. The stereoscopic reconstruction was carried out by an algorithm that maps the expected particle position of the first on the second camera image and then finds matches by minimizing the difference between the projected position and detected particle positions of the second camera {image}. \n\nThe error in the $z$-position of each particle is $\\sim 2\\% $. \nFrom these data, individual sedimentation velocities, individual closenesses, and average {solid-}to-gas ratios were determined. \n\nFurthermore, in this study, the sedimentation velocity was normalized to the undisturbed, individual sedimentation velocity of the grains used in the experiment $v_0$. The closeness was normalized by multiplication with the particle diameter $s$ of the glass beads used in the corresponding experiment (table \\ref{tab:exp}). \n\n\nAccording to equation \\ref{sedvel} and \\ref{fs}, the sedimentation velocity depends on the closeness $C$ and the {solid-}to-gas ratio $\\epsilon$. Due to particle loss $\\epsilon$ decreases with time. We {group} the measured particle positions and velocities in full revolutions of the experiment chamber. Fig. \\ref{fig:sedimentationovercloseness} shows an example of the sedimentation velocity over closeness. This confirms the linear dependence found in \\cite{schneider2019}.\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{SedVel-Clo.pdf}\n \\caption{Normalized sedimentation velocity over normalized closeness for 19 {revolution}s of experiment 8 (table \\ref{tab:exp}). The color of the data points refers to the {revolution} of the experiment chamber during the measurement starting with {revolution} 1 in blue (top) and {revolution} 19 in red (bottom). Data points are average values for at least 1000 particle positions with an equidistant spacing of the binned values in closeness space. 
The total number of examined single sedimentation velocity data points is about 1,025,000; the total number of examined particle positions is about 1,600,000.\\\\\n {The top left inset shows the solid-to-gas ratio of each revolution as a function of time.}}\n \\label{fig:sedimentationovercloseness}\n\\end{figure}\nThe slope varies with every revolution or average $\\epsilon$.\nAccording to equation \\ref{sedvel} and \\ref{fs}{,} this slope is equal to $F_s = \\alpha (\\epsilon-\\epsilon_{\\rm crit})$.\n \nFig. \\ref{fig:sensitivityoverepsilon} confirms the linear trend of the sensitivity factor on $\\epsilon$ \\citep{schneider2019}. \n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{SensFac-Eps.pdf}\n \\caption{Sensitivity factor $F_S$ over solid-to-gas ratio $\\epsilon$ of experiment 8 (table \\ref{tab:exp}). The color of the data points refers to the data points shown in fig. \\ref{fig:sedimentationovercloseness}. The linear fit is $F_S (\\epsilon) = -0.12 + 3.6 \\cdot \\epsilon$}\n \\label{fig:sensitivityoverepsilon}\n\\end{figure}\nFrom the linear fit $F_S = a \\cdot \\epsilon + b$, we can then deduce the sensitivity $\\alpha = a$ and the critical {solid-}to-gas ratio $\\epsilon_{\\rm crit}$ as\n\\begin{equation}\n \\label{epscrit}\n \\epsilon_{\\rm crit} = -\\frac{b}{a}.\n\\end{equation}\n\nWe define a system to be collective when individual sedimentation velocities deviate from $v_0$ or if $F_S >0$. We {define} a system as non-collective if all particles behave like test particles, sedimenting independently of local closeness variations. \n\n\n\n\n\n\\section{Discussion}\n\nAfter data analysis, two {main} quantities are given: $\\epsilon_{\\rm crit}$ and $\\alpha$. The critical {solid-}to-gas ratio varied for the different experiments carried out. As we also changed {several} parameters between individual experiments it is \\textit{a priori} not clear {whether} these two parameters follow systematic trends. {Therefore, we} considered $\\epsilon_{\\rm crit}$ to depend on a number of individual variables, including Knudsen number, pressure, and particle size. However, the only systematic dependence found was {concerning} the experiment's Stokes number $\\rm St$, mainly influenced by $\\tau_f$. This is shown in fig. \\ref{fig:epsilonoverstokes}.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{epscrit-st.pdf}\n \\caption{Critical solid-to-gas ratio of all performed experiments in dependence on the Stokes number. The shape and color of data points correspond to the experiments shown in tab. \\ref{tab:exp}. Fit: $\\epsilon_{\\rm crit} (\\mathrm{St}) = -0.018 + 5.9 \\cdot \\mathrm{St}$}\n \\label{fig:epsilonoverstokes}\n\\end{figure}\n\n\\begin{figure} [h]\n \\centering\n \\includegraphics[width=\\columnwidth]{Sens-St.pdf}\n \\caption{Sensitivity of all performed experiments in dependence on the Stokes number. The shape and color of data points correspond to the experiments shown in tab. \\ref{tab:exp}. Fit: $\\alpha (\\mathrm{St}) = 0.016 \\cdot \\mathrm{St}^{1.2 }$}\n \\label{fig:sensitivityfactoroverstokes}\n\\end{figure}\n\n\nThere is a clear linear trend in the data. \n{Interestingly, below a Stokes number of $\\rm St \\leq 0.003$, the deduced $\\epsilon_{\\rm crit}$ formally becomes negative. Negative $\\epsilon_{\\rm crit}$ refers to systems that are \\textit{always} collective. 
Since $F_S$ always has to be larger or equal to 0, negative $\\epsilon_{\\rm crit}$ are set to 0}.\nParticles with $\\mathrm{St} \\leq 0.003$ always back-react to the gas flow in such a manner that other particles are influenced by this.\n\nThe sensitivity $\\alpha$\nalso depends on the Stokes number as shown in fig. \\ref{fig:sensitivityfactoroverstokes}. We fitted a power law to the data as one possible functional dependence.\nSmall St particles have a higher impact on the gas flow than large St particles for the same average {solid-}to-gas mass ratio.\n\nThe linear dependence of the sedimentation velocity on the closeness and on the solid-to-gas ratio is found for all parameter combinations. {Therefore, we consider this a robust, general finding.} \n\n\\section{{Application to protoplanetary disks}}\n\n\n{The Stokes number in protoplanetary disks is defined as $\\mathrm{St} = \\tau \\cdot \\Omega_{\\rm K}$, where $\\Omega_{\\rm K}$ is the Kepler frequency. The Stokes number below which grains always behave collectively of 0.003 corresponds to particle sizes of about 1~cm at 1~AU for a particle density of 1~$\\rm g cm^{-3}$ in a typical disk \\citep{Johansen2014}.} \n\n\n{For larger grains,} the system becomes increasingly insensitive to high {solid-}to-gas ratios and only turns collective for higher values of $\\epsilon$.\n{It seems more than plausible that drag instabilities can only occur if the cloud becomes collective.\nTherefore, this study suggests that grains {larger than 1~cm} require larger $\\epsilon$ to trigger drag instabilities {at 1 AU or grains larger than 1~mm at 10~AU}.\nFor smaller grains{,} the clouds are always collective and very sensitive to changes in $\\epsilon$. {Drag instabilities might therefore regularly occur for small grains rather than large grains. As grain growth proceeds in disks, pristine bodies might preferentially consist of entities of the threshold size, especially not of larger grains. This is in agreement {with} observations of comets \\citep{Blum2017}.}}\n\n\n\n\\section{Conclusions}\n\nProtoplanetary disks are regions with a wide range of solid--gas interactions, ranging from single test particle behavior of a dust grain in regions depleted of dust to solid dominated motion in {gravitationally} unstable particle subclouds. The {solid-}to-gas mass ratio $\\epsilon$ can vary from lower than interstellar $\\epsilon \\leq 0.01$ to larger than $\\epsilon \\geq 100$, while the volumetric filling factor $\\Phi$ remains below $10^{-6}$. In our experiment, we confirm that the transition from test particle to collective behavior in comparably low-$\\Phi$ environments is characterized by a threshold for the average {solid-}to-gas ratio. Above the threshold, particle feedback on the gas is high enough to influence other particles.\n\nThis threshold depends on the Stokes number of the particles. The larger the Stokes number, the higher the {solid-}to-gas ratio that still allows test particle behavior.\nOn the lower Stokes number end, our experiments \\textit{always} come with collective behavior. Applied specifically to particle motion in protoplanetary disks, we would like to highlight two aspects of this work. \n\nFirst, in a simple cloud of small particles, their motion can be collective already at low {solid-}to-gas ratios if the Stokes number is small, e.g. if grains are still dust and not yet pebbles. This, e.g., leads to increased sedimentation velocities. {As the maximum dust height is a balance between upward gas motion, e.g. 
as turbulent diffusion or convection, and sedimentation, faster settling corresponds to a reduced dust height for the same upward gas flow in parts behaving collectively. If this also changes the scale height observed astronomically depends on the local conditions at the respective height, i.e. if the top of the particle layer would be collective or non-collective. Collective sedimentation might also lead to a detachment of the surface layer and the midplane particles, but that is only a guess and further details are beyond the scope of this Letter.} \n\n{Also, other motions will change accordingly, e.g. the radial inward drift velocity for a given grain size in a collective ensemble will change}. \n\nSecond, for large grains or rather at higher Stokes numbers, ever higher {solid-}to-gas ratios are needed to get the cloud collective. \n{A threshold grain size of millimeter to centimeter marks the transition between always collective and solid-to-gas ratio dependence. Drag instabilities leading to planetesimal formation will {favor} this particle size supporting observations of comets. }\n\n\\section*{Acknowledgments}\n\nThis project is supported by DFG grant \\mbox{WU 321\/16-1}. We thank the two referees for a very constructive review of the paper. \n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nPortrait and self-portrait sketches have an important role in art. From an art historical perspective, self-portraits serve as historical records of what the artists looked like. From the perspective of an artist, self-portraits can be seen as a way to practice and improve one's skills without the need for a model to pose. Portraits of others further serve as memorabilia and a record of the person in the portrait. Artists most often are able to easily capture recognizable features of a person in their sketches. Therefore, hand-drawn sketches of people have further applications in law enforcement. Sketches of suspects drawn based on eye-witness accounts are used to identify suspects, either in person or from catalogues of mugshots.\n\nPrior work related to face sketches in computer vision has been mostly limited to synthesis of highly controlled (i.e. having neutral expression, frontal pose, with normal lighting and without any occlusions) sketches from photographs~\\cite{Tang2003,Liu2005,Gao2012,Wang2013b,Zhang2015} (sketch synthesis) and photographs from sketches~\\cite{Liu2007,Xiao2009,Wang2009,Gao2012,Wang2013b} (sketch inversion). Sketch inversion studies with controlled inputs utilized patch-based approaches and used Bayesian tensor inference~\\cite{Liu2007}, an embedded hidden Markov model~\\cite{Xiao2009}, a multiscale Markov random field model~\\cite{Wang2009}, sparse representations~\\cite{Gao2012} and transductive learning with a probabilistic graph model~\\cite{Wang2013b}. \n\nFew studies developed methods of sketch synthesis to handle more variation in one or more variables at a time, such as lighting~\\cite{Li2006}, and lighting and pose~\\cite{Zhang2010}. In a recent study, Zhang et al.~\\cite{Zhang2016} showed that sketch synthesis by transferring the style of a single sketch could be used also in uncontrolled conditions. In~\\cite{Zhang2016}, first an initial sketch by a sparse representation-based greedy search strategy was estimated, then candidate patches were selected from a template style sketch and the estimated initial sketch. 
Finally, the candidate patches were refined by a multi-feature-based optimization model and the patches were assembled to produce the final synthesized sketch. \n\nRecently, the use of deep convolutional neural networks (DNNs) in image transformation tasks, in which one type of image is transformed into another, has gained tremendous traction. In the context of sketch analysis, DNNs were used to tackle the problems of sketch synthesis and sketch simplification. For example,~\\cite{Zhang2015} has used a DNN to convert photographs to sketches. They developed a DNN with six convolutional layers and a discriminative regularization term for enhancing the discriminability of the generated sketch against other sketches. Furthermore,~\\cite{SimoSerra2016} has used a DNN to simplify rough sketches. They have shown that users prefer sketches simplified by the DNN more than they do those by other applications 97\\% of the time.\n\nSome other notable image transformation problems include colorization, style transfer and super-resolution. In colorization, the task is to transform a grayscale image to a color image that accurately captures the color information. In style transfer, the task is to transform one image to another image that captures the style of a third image. In super-resolution, the task is to transform a low-resolution image to a high-resolution image with maximum quality. DNNs have been used to tackle all of these problems with state-of-the art results~\\cite{Cheng2015,Iizuka2016,Gatys2015,Dong2014,Dong2016,Johnson2016}.\n\nHowever, a challenging task that remains is photorealistic face image synthesis from face sketches in uncontrolled conditions. That is, at present, there exist no sketch inversion models that are able to perform in realistic conditions. These conditions are characterized by changes in expression, pose, lighting condition and image quality, as well as the presence of varying amounts of background clutter and occlusions. \n\nHere, we use DNNs to tackle the problem of inverting face sketches to synthesize photorealistic face images from different sketch styles in uncontrolled conditions. We developed three different models to handle three different types of sketch styles by training DNNs on datasets that we constructed by extending a well-known large-scale face dataset, obtained in uncontrolled conditions~\\cite{Liu2015}. We test the models on another similar large-scale dataset~\\cite{LearnedMiller2016}, a hand-drawn sketch database~\\cite{Wang2009} as well as on self-portrait sketches of famous Dutch artists. We show that our approach, which we refer to as {\\em Convolutional Sketch Inversion} (CSI) can be used to achieve state-of-the-art results and discuss possible applications in fine arts, art history and forensics.\n\n\\section{Semi-simulated datasets}\n\nFor training and testing our CSI model, we made use of the following datasets:\n\\begin{itemize}\n\\item \\textit{Large-scale CelebFaces Attributes (CelebA) dataset}~\\cite{Liu2015}.\nThe CelebA dataset contains 202,599 celebrity face images and 10,177 identities. The images were obtained from the internet and vary extensively in terms of pose, expression, lighting, image quality, background clutter and occlusion. Each image in the dataset has five landmark positions and 40 attributes. These images were used for training the networks.\n\n\\item \\textit{Labeled Faces in the Wild (LFW) dataset}~\\cite{LearnedMiller2016}.\nThe LFW dataset contains 13,233 face images and 5749 identities. 
Similar to the CelebA dataset, images were obtained from the internet and vary extensively in terms of pose, expression, lighting, image quality, background clutter and occlusion. A subset of these images (11,990) were used for testing the networks.\n\n\\item \\textit{CUHK Face Sketch (CUFS) database}~\\cite{Wang2009}.\nThe CUFS database contains photographs and their corresponding hand-drawn sketches of 606 individuals. The dataset was formed by combining face photographs from three other databases and producing hand-drawn sketches of these photographs. Concretely, it consists of 188 face photographs from the Chinese University of Hong Kong (CUHK) student database~\\cite{Wang2009} and their corresponding sketches, 123 face photographs from the AR Face Database~\\cite{Martinez1998} and their corresponding sketches, and 295 face photographs from the XM2VTS database~\\cite{Messer1999} and their corresponding sketches. Only 18 of the sketches (six from each sub-database) were used in the current study. These images were used for testing the networks.\n\n\\item \\textit{Sketches of famous Dutch artists}.\nWe also used the following sketches: i) Self-Portrait with Beret, Wide-Eyed by Rembrandt, 1630, etching, ii) Two Self-portraits and Several Details by Vincent van Gogh, 1886, pencil on paper and iii) Self-Portrait by M.C. Escher, 1929, lithograph on gray paper. These images were used for testing the networks. \n\\end{itemize}\n\n\\subsection{Preprocessing}\n\nSimilar to~\\cite{Cowen2014}, each image was cropped and resized to 96 pixels $\\times$ 96 pixels such that:\n\\begin{itemize}\n\\item The distance between the top of the image and the vertical center of the eyes was 38 pixels.\n\\item The distance between the vertical center of the eyes and the vertical center of the mouth was 32 pixels.\n\\item The distance between the vertical center of the mouth and the bottom of the image was 26 pixels.\n\\item The horizontal center of the eyes and the mouth was at the horizontal center of the image.\n\\end{itemize}\n\n\\subsection{Sketching}\n\nEach image in the CelebA and LFW datasets was automatically transformed to a line sketch, a grayscale sketch and a color sketch. Sketches in the CUFS database and those by the famous Dutch artists were further transformed to line sketches by using the same procedure.\n\nColor and grayscale sketch types are produced by the same stylization algorithm~\\cite{Gastal2011}. To obtain the sketch images, the input image is first filtered by an edge-aware filter. This filtered image is then blended with the magnitude of the gradient of the filtered image. Then, each pixel is scaled by a normalization factor resulting in the final sketch-like image.\n\nLine sketches which resemble pencil sketches were generated based on~\\cite{Beyeler2015}. Line sketch conversion works by first converting the color image to grayscale. This is followed by inverting the grayscale image to obtain a negative image. Next, a Gaussian blur is applied. Finally, using color dodge, the resulting image is blended with the grayscale version of the original image.\n\nIt should be noted that synthesizing face images from color or grayscale sketches is a more difficult problem than doing so from line sketches since many details of the faces are preserved by line sketches while they are lost for other sketch types.\n\n\\section{Models}\n\nWe developed one DNN for each of the three sketch styles based on the style transfer architecture in~\\cite{Johnson2016}. 
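A compact sketch of this generator is given below, written in PyTorch form purely for illustration; the actual models were implemented in Chainer, as noted below, and the output padding of the deconvolution layers is an assumption made here so that the network reproduces the $96 \\times 96$ pixel input resolution. The layer specification follows Table~\\ref{table_1}.
\\begin{verbatim}
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # two 3x3 convolution + batch normalization layers; ReLU after the first,
    # identity shortcut added after the second (the '+x' entries of Table 1)
    def __init__(self, ch=128):
        super().__init__()
        self.c1, self.b1 = nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch)
        self.c2, self.b2 = nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch)
    def forward(self, x):
        h = torch.relu(self.b1(self.c1(x)))
        return x + self.b2(self.c2(h))

class SketchInversion(nn.Module):
    # in_ch = 1 for line and grayscale sketches, 3 for color sketches
    def __init__(self, in_ch=3):
        super().__init__()
        def cbr(i, o, k, s, p):   # convolution + batch normalization + ReLU
            return nn.Sequential(nn.Conv2d(i, o, k, s, p), nn.BatchNorm2d(o), nn.ReLU())
        self.net = nn.Sequential(
            cbr(in_ch, 32, 9, 1, 4),
            cbr(32, 64, 3, 2, 1),
            cbr(64, 128, 3, 2, 1),
            *[ResidualBlock(128) for _ in range(5)],
            # output_padding=1 is assumed so that the two stride-2 deconvolutions
            # bring the 24x24 feature maps back to the 96x96 input resolution
            nn.ConvTranspose2d(128, 64, 3, 2, 1, output_padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 3, 2, 1, output_padding=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 3, 9, 1, 4), nn.BatchNorm2d(3), nn.Tanh(),
        )
    def forward(self, x):
        # the tanh output in [-1, 1] is rescaled to [0, 255] as in Table 1
        return 127.5 * (self.net(x) + 1.0)
\\end{verbatim}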
Each of the three DNNs was based on the same architecture except for the first layer where the number of input channels were either one or three depending on the number of color channels of the sketches. The architecture comprised four convolutional layers, five residual blocks~\\cite{He2015}, two deconvolutional layers and another convolutional layer. Each of the five residual blocks comprised two convolutional layers. All of the layers except for the last layer were followed by batch normalization~\\cite{Ioffe2015} and rectified linear units. The last layer was followed by batch normalization and hyperbolic tangent units. All models were implemented in the Chainer framework~\\cite{Tokui2015}. Table~\\ref{table_1} shows the details of the architecture.\n\n\\begin{table}[]\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}lllllllll@{}}\n\\toprule\nLayer & Type & in\\_channels & out\\_channels & ksize & stride & pad & normalization & activation \\\\ \\midrule\n1 & con. & 1 or 3 & 32 & 9 & 1 & 4 & BN & ReLU \\\\\n2 & con. & 32 & 64 & 3 & 2 & 1 & BN & ReLU \\\\\n3 & con. & 64 & 128 & 3 & 2 & 1 & BN & ReLU \\\\\n4 & res. & 128\/128 & 128\/128 & 3\/3 & 1\/1 & 1\/1 & BN\/BN & ReLU \\\\\n5 & res. & 128\/128 & 128\/128 & 3\/3 & 1\/1 & 1\/1 & BN\/BN & ReLU\/+x \\\\\n6 & res. & 128\/128 & 128\/128 & 3\/3 & 1\/1 & 1\/1 & BN\/BN & ReLU\/+x \\\\\n7 & res. & 128\/128 & 128\/128 & 3\/3 & 1\/1 & 1\/1 & BN\/BN & ReLU\/+x \\\\\n8 & res. & 128\/128 & 128\/128 & 3\/3 & 1\/1 & 1\/1 & BN\/BN & ReLU\/+x \\\\\n9 & dec. & 128 & 64 & 3 & 2 & 1 & BN & ReLU \\\\\n10 & dec. & 64 & 32 & 3 & 2 & 1 & BN & ReLU \\\\\n11 & con. & 32 & 3 & 9 & 1 & 4 & BN & tanh \\\\ \\bottomrule\n\\end{tabular}%\n}\n\\caption{Deep neural network architectures. BN; batch normalization with decay = 0.9, $\\epsilon = 1e-5$, ReLU; rectified linear unit, con.; convolution, dec.; deconvolution, res.; residual block, tanh; hyperbolic tangent unit. Outputs of the hyperbolic tangent units are scaled to \\lbrack0, 255\\rbrack. x\/y indicates the parameters of the first and second layers of a residual block. +x indicates that the input and output of a block are summed and no activation function is used.}\n\\label{table_1}\n\\end{table}\n\n\\subsection{Estimation}\n\nFor model optimization we used Adam~\\cite{Kingma2014} with parameters $\\alpha = 0.001$, $\\beta_1 = 0.9$, $\\beta_2 = 0.999$, $\\epsilon = 10^{-8}$ and mini-batch size = 4. We trained the models by iteratively minimizing the loss function for 200,000 iterations. The loss function comprised three components. The first component is the standard Euclidean loss for the targets and the predictions (pixel loss; $\\ell_p$). The second component is the Euclidean loss for the feature-transformed targets and the feature-transformed predictions (feature loss)~\\cite{Johnson2016}:\n\\begin{equation}\n\\ell_{f} = \\frac{1}{n}\\sum_{i, j, k}\\left(\\phi\\left(t\\right)_{i, j, k} - \\phi\\left(y\\right)_{i, j, k}\\right) ^ 2\n\\end{equation}\nwhere $n$ is the total number of features, $\\phi(t)_{i, j, k}$ is a feature of the targets and $\\phi(y)_{i, j, k}$ is a feature of the predictions. Similar to~\\cite{Johnson2016}, we used the outputs of the fourth layer of a 16-layer DNN (relu\\_2\\_2 outputs of the VGG-16 pretrained model)~\\cite{Simonyan2014} to feature transform the targets and the predictions. 
The third component is the total variation loss for the predictions:\n\\begin{equation}\n\\ell_{tv} = \\sum_{i, j}\\left(\\left(y_{i + 1, j} - y_{i, j}\\right) ^ 2 + \\left(y_{i, j + 1} - y_{i, j}\\right) ^ 2\\right) ^ {0.5}\n\\end{equation}\nwhere $y_{i, j}$ is a pixel of the predictions. A weighted combination of these components resulted in the following loss function:\n\\begin{equation}\n\\ell = \\lambda_p \\ell_p + \\lambda_f \\ell_f + \\lambda_{tv} \\ell_{tv}\n\\end{equation}\nwhere we set $\\lambda_p = \\lambda_f = 1$ and $\\lambda_{tv} = 0.00001$.\n\nThe use of the feature loss to train models for image transformation tasks was recently proposed by~\\cite{Johnson2016}. In the context of super-resolution,~\\cite{Johnson2016} found that replacing pixel loss with feature loss gives visually pleasing results at the expanse of image quality because of the artefacts introduced by the feature loss.\n\nIn the context of sketch inversion, our preliminary experiments showed that combining feature loss and pixel loss increases image quality while maintaining visual pleasantness. Furthermore, we observed that a small amount of total variation loss further removes the artefacts that are introduced by the feature loss. Therefore, we used the combination of the three losses in the final experiments. The quantitative results of the preliminary experiments in which the models were trained by using only the feature loss are provided in the Appendix.\n\n\\subsection{Validation}\n\nFirst, we qualitatively tested the models by visual inspection of the synthesized face images (Figure~\\ref{figure_2}). Synthesized face images matched the ground truth photographs closely and persons in the images were easily recognizable in most cases. Among the three styles of sketch models, the line sketch model (Figure~\\ref{figure_2}, first column) captured the highest level of detail in terms of the face structure, whereas the synthesized inverse sketches of the color sketch model (Figure~\\ref{figure_2}, third column) had less structural detail but was able to better reproduce the color information in the ground truth images compared to the inverted sketches of the line sketch model. Sketches synthesized by the grayscale model (Figure~\\ref{figure_2}, second column) were less detailed than those synthesized by the line sketch model. Furthermore, the color content was less accurate in sketches synthesized by the grayscale model than those synthesized by both the color sketch and the line sketch models. We found that the line model performed impressively in terms of matching the hair and skin color of the individuals even when the line sketches did not contain any color information. This may indicate that along with taking advantage of the luminance differences in the sketches to infer coloring, the model was able to learn color properties often associated with high-level face features of different ethnicities.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figure_2}\n\\caption{Examples of the synthesized inverse sketches from the LFW dataset. Each distinct column shows examples from different sketch styles models, i.e. line sketch model (column 1), grayscale sketch model (column 2) and colour sketch model (column 3). 
First image in each column is the ground truth, the second image is the generated sketch and the third one is the synthesized inverse sketch.}\n\\label{figure_2}\n\\end{figure}\n\nThen, we quantitatively tested the models by comparison of the peak signal to noise ratio (PSNR), structural similarity (SSIM) and standard Pearson product-moment correlation coefficient R of the synthesized face images~\\cite{Wang2004} (Table~\\ref{table_2}). PSNR measures the physical quality of an image. It is defined as the ratio between the peak power of the image and the power of the noise in the image (Euclidean distance between the image and the reference image):\n\n\\begin{equation}\n\\PSNR = \\frac{1}{3} \\sum_{k} 10 \\log_{10}\\frac{\\max \\DR ^ 2}{\\frac{1}{m}\\sum_{i, j}\\left(t_{i, j, k} - y_{i, j, k}\\right) ^ 2}\n\\end{equation}\nwhere $\\DR$ is the dynamic range, and $m$ is the total number of pixels in each of the three color channels. SSIM measures the perceptual quality of an image. It is defined as the multiplicative combination of the similarities between the image and the reference image in terms of contrast, luminance and structure: \n\n\\begin{equation}\n\\SSIM = \\frac{1}{3} \\sum_{k} \\frac{1}{m} \\sum_{i, j} \\frac{\\left(2 \\mu\\left(t_{i, j, k}\\right) \\mu\\left(y_{i, j, k}\\right) + C_1\\right) \\left(2 \\sigma\\left(t_{i, j, k}, y_{i, j, k}\\right) C_2\\right)}{\\left(\\mu\\left(t_{i, j, k}\\right) ^ 2 \\mu\\left(y_{i, j, k}\\right) ^ 2 + C_1\\right) \\left(2 \\sigma\\left(t_{i, j, k}\\right) ^ 2 \\sigma\\left(y_{i, j, k}\\right) ^ 2 C_2\\right)}\n\\end{equation}\nwhere $\\mu\\left(t_{i, j, k}\\right)$, $\\mu\\left(y_{i, j, k}\\right)$, $\\sigma\\left(t_{i, j, k}\\right)$, $\\sigma\\left(y_{i, j, k}\\right)$ and $\\sigma\\left(t_{i, j, k}, y_{i, j, k}\\right)$ are means, standard deviations and cross-covariances of windows centered around $i$ and $j$. Furthermore, $C_1 = (0.01 \\max \\DR) ^ 2$ and $C_2 = (0.03 \\max \\DR) ^ 2$. Quality of a dataset is defined as the mean quality over the images in the dataset.\n\nThe inversion of the line sketches resulted in the highest quality face images for all three measures (20.12 for PSNR, 0.86 for SSIM and 0.93 for R). In contrast the inversion of the grayscale sketches resulted in the lowest quality face images for all measures (17.65 for PSNR, 0.65 for SSIM and 0.75 for R). This shows that both the physical and the perceptual quality of the inverted sketch images produced by the line sketch network was superior than those by the other sketch styles.\n\n\\begin{table}[]\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}lllllllllllll@{}}\n\\toprule\n & & & & \\multicolumn{1}{c}{\\textit{PSNR}} & & & & \\multicolumn{1}{c}{\\textit{SSIM}} & & & & \\multicolumn{1}{c}{\\textit{R}} \\\\ \\midrule\n\\textit{Line} & & & & \\textbf{20.1158 $\\pm$ 0.0231} & & & & \\textbf{0.8583 $\\pm$ 0.0003} & & & & \\textbf{0.9298 $\\pm$ 0.0005} \\\\\n\\textit{Grayscale} & & & & 17.6567 $\\pm$ 0.0263 & & & & 0.6529 $\\pm$ 0.0008 & & & & 0.7458 $\\pm$ 0.0020 \\\\\nColor & & & & 19.2029 $\\pm$ 0.0293 & & & & 0.7154 $\\pm$ 0.0008 & & & & 0.8087 $\\pm$ 0.0017 \\\\ \\bottomrule\n\\end{tabular}\n}\n\\caption{Comparison of physical (PSNR), perceptual (SSIM) and correlational (R) quality measures for the inverse sketches synthesized by the line, grayscale and color sketch-style models. 
$x \\pm m$ shows the mean $\\pm$ the bootstrap estimate of the standard error of the mean.}\n\\label{table_2}\n\\end{table}\n\nFinally, we tested how well the line sketch inversion model can be transferred to the task of synthesizing face images from sketches that are hand-drawn and not generated using the same methods that were used to train the model. We considered only the line sketch model since the contents of the hand-drawn sketch database that we used~\\cite{Wang2009} were most similar to the line sketches.\n\nWe found that the line sketch inversion model can solve this inductive transfer task almost as good as it can solve the task that it was trained on (Figure~\\ref{figure_3}). Once again, the model synthesized photorealistic face images. While color was not always synthesized accurately, other elements such as form, shape, line, space and texture were often synthesized well. Furthermore hair texture and style, which posed a problem in most previous studies, was very well handled by our CSI model. We observed that the dark-edged pencil strokes in the hand-drawn sketches that were not accompanied by shading resulted in less realistic inversions (compare e.g nose areas of sketches in the first and second rows with those in the third row in Figure~\\ref{figure_3}). This can be explained by the lack of such features in the training data of the line sketch model, and can be easily overcome by including training examples more closely resembling the drawing style of the sketch artists. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figure_3}\n\\caption{Examples of the synthesized inverse sketches from the CUFS database. First image in each column is the ground truth, the second image is the sketch hand-drawn by an artist and the third one is the inverse sketch that was synthesized by the line sketch model.}\n\\label{figure_3}\n\\end{figure}\n\n\\begin{table}[]\n\\centering\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}lllllllllllll@{}}\n\\toprule\n & & & & \\multicolumn{1}{c}{\\textit{PSNR}} & & & & \\multicolumn{1}{c}{\\textit{SSIM}} & & & & \\multicolumn{1}{c}{\\textit{R}} \\\\ \\midrule\n\\textit{CUHK (6)} & & & & \\textbf{15.0675 $\\pm$ 0.3958} & & & & 0.5658 $\\pm$ 0.0099 & & & & \\textbf{0.8264 $\\pm$ 0.0269} \\\\\n\\textit{AR (6)} & & & & 13.8687 $\\pm$ 0.7009 & & & & \\textbf{0.5684 $\\pm$ 0.0277} & & & & 0.7667 $\\pm$ 0.0314 \\\\\nXM2GTS (6) & & & & 11.3293 $\\pm$ 1.2156 & & & & 0.4231 $\\pm$ 0.0272 & & & & 0.4138$\\pm$ 0.1130 \\\\\n\\textit{All (18)} & & & & 13.4218 $\\pm$ 0.6123 & & & & 0.5191 $\\pm$ 0.0207 & & & & 0.6690 $\\pm$ 0.0591 \\\\ \\bottomrule\n\\end{tabular}\n}\n\\caption{Comparison of physical (PSNR), perceptual (SSIM) and correlational (R) quality measures for the inverse sketches synthesized from the sketches in the CUFS database and its sub-databases. $x \\pm m$ shows the mean $\\pm$ the bootstrap estimate of the standard error of the mean.}\n\\label{table_3}\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figure_4}\n\\caption{Self-portrait sketches and synthesized inverse sketches along with a reference painting or photograph of famous Dutch artists: Rembrandt (top), Vincent van Gogh (middle) and M. C. Escher (bottom). Sketches: i) Self-Portrait with Beret, Wide-Eyed by Rembrandt, 1630, etching. ii) Two Self-portraits and Several Details by Vincent van Gogh, 1886, pencil on paper. iii) Self-Portrait by M.C. Escher, 1929, lithograph on gray paper. 
Reference paintings: i) Self-Portrait by Rembrandt, 1630, oil painting on copper. ii) Self-Portrait with Straw Hat by Vincent van Gogh, 1887, oil painting on canvas.}\n\\label{figure_4}\n\\end{figure}\n\nFor all the samples from the CUFS database, the PSNR, the SSIM index and the R of the synthesized face images were 13.42, 0.52, and 0.67, respectively (Table~\\ref{table_3}). Among the three sub-databases of the CUFS database, the quality of the synthesized images from the CUHK dataset was the highest in terms of the PSNR (15.07) and R (0.83). While the PSNR and R values for the AR dataset were lower than those of the CUHK dataset, SSIM did not differ between the two datasets. The lowest quality inverted sketches were produced from the sample sketches of the XM2GTS database (with 11.33 for PSNR, 0.42 for SSIM and 0.41 for R).\n\n\\section{Applications}\n\\subsection{Fine arts}\n\nIn many cases self-portrait studies allow us a glimpse of what famous artists looked like through the artists' own perspective. Since there are no photographic records of many artists (in particular of those who lived before the 19th century, during which photography was invented and became widespread) self-portrait sketches and paintings are the only visual records that we have of many artists. Converting the sketches of the artists into photographs using a DNN that was trained on tens of thousands of face sketch-photograph pairs results in very interesting end-products.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figure_5}\n\\caption{Identification accuracies for line, grayscale and color sketches, and for inverse sketches synthesized by the corresponding models. Error bars show the bootstrap estimates of the standard errors.}\n\\label{figure_5}\n\\end{figure}\n\nHere we used our DNN-based approach to synthesize photographs of the famous Dutch artists Rembrandt, Vincent van Gogh and M. C. Escher from their self-portrait sketches\\footnote{For simplicity, although different methods were used to produce these artworks, we refer to them as sketches.} (Figure~\\ref{figure_4}). To the best of our knowledge, the synthesized photorealistic images of these artists are the first of their kind.\n\nOur qualitative assessment revealed that the inverted sketch of Rembrandt synthesized from his 1630 sketch indeed resembles him as depicted in his paintings (particularly his self-portrait painting from 1630), and that the inverted sketch of Escher resembles his photographs. We found that the inverted sketch of van Gogh synthesized from his 1886 sketch was the most realistic synthesized photograph among those of the three artists, albeit not closely matching his self-portrait paintings of a distinct post-impressionist style.\n\nAlthough we do not have a quantitative way to measure the accuracy of the results in this case, the results demonstrate that the artistic style of the input sketches influences the quality of the produced photorealistic images. Generating new training sketch data that more closely matches the sketch style of a specific artist of interest (e.g. by using the method proposed by~\\cite{Zhang2016}), and training the network with these sketches, would overcome this limitation.\n\nSketching is one of the most important training methods that artists use to develop their skills. Converting sketches into photorealistic images would allow artists in training to see and evaluate the accuracy of their sketches clearly and easily, which can in turn become an efficient training tool. 
Furthermore, sketching is often much faster than producing a painting. When, for example, the sketch is based on imagination rather than a photograph, deep sketch inversion can provide a photorealistic guideline (or even an end-product, if digital art is being produced) and can speed up the production process of artists. Figure~\\ref{figure_3}, which shows inverted versions of the sketches that contemporary artists produced for the CUFS database, further demonstrates this type of application. The current method can be developed into a smartphone\/tablet or computer application for common use. \n\n\\subsection{Forensic arts}\n\nIn cases where no other representation of a suspect exists, sketches drawn by forensic artists based on eye-witness accounts are frequently used by law enforcement. However, direct use of sketches for automatically identifying suspects from databases containing photographs does not work well because these two face representations are too different to allow a direct comparison~\\cite{Wang2013a}. Inverting a sketch to a photograph makes this task much easier by reducing the difference between these two alternative representations, enabling a direct automated comparison~\\cite{Wang2009}.\n\nTo evaluate the potential use of our system for forensic applications, we performed an identification analysis (Figure~\\ref{figure_5}). In this analysis, we evaluated the accuracy of identifying a target face image in a very large set of candidate face images (the LFW dataset, containing over 11,000 images) from an (inverse) face sketch. The identification accuracies for the synthesized faces were always significantly higher than those for the corresponding sketched faces ($p \\ll 0.05$, binomial test). While the identification accuracies for the color and grayscale sketches were very low (2.38\\% and 1.42\\%, respectively), those for the synthesized color and grayscale inverse sketches were relatively high (82.29\\% and 73.81\\%, respectively). On the other hand, the identification accuracy of line sketches was already high, at 81.14\\% before inversion. Synthesizing inverse sketches from line sketches raised the identification accuracy to an almost perfect level (99.79\\%).\n\n\\section{Conclusions}\n\nIn this study we developed sketch datasets, complementing well-known unconstrained benchmarking datasets~\\cite{Liu2015, LearnedMiller2016}, developed DNN models that can synthesize face images from sketches with state-of-the-art performance, and proposed applications of our CSI model in fine arts, art history and forensics. We foresee further computer vision applications of the developed methods for non-face images and various other sketch-like representations, as well as cognitive neuroscience applications for the study of cognitive phenomena such as perceptual filling-in~\\cite{Vergeer2015, Anstis2012} and the neural representation of complex stimuli~\\cite{Guclu2015a, Guclu2015b}.\n\n\\clearpage\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{THEORETICAL ASPECTS.}\n\n\\subsection{Renormalon basics.}\n\nThe image of renormalons is invariably produced by the renormalon chain,\nwhich is an insertion of $n$ vacuum-polarization bubbles, with $n$ large,\ninto a photon (gluon) line.\nDenote, furthermore, by $k$ the 4-momentum flowing through the\ndressed line and by $Q$ a large external parameter (like the\ntotal energy in $e^+e^-$-annihilation). 
If we expand in $\\alpha (Q^2)$, then in the $n$-th order one readily obtains\nthe following estimates for the coefficient $a_n$ of the $n$-th order term,\nin terms of the first coefficient of the $\\beta$-function, $b_0$:\n\\begin{equation}\n(a_n)_{IR}~\\sim~\\int d^4k\\left(b_0ln Q^2\/k^2\\right)^n~\\sim \n~n!b_0^n2^{-n}\\label{series}\n\\end{equation}\nin the case of $k^2\\ll Q^2$ and \n\\begin{equation}\n(a_n)_{UV}~\\sim~\\int d^2k\\left(b_0ln Q^2\/k^2\\right)^n\n~\\sim~n!(-b_0)^n\n\\end{equation}\nin the case of $k^2\\gg Q^2$.\nThe behaviour (\\ref{series}) is process independent\nprovided there is a single soft gauge-boson line with\n$k^2 \\ll Q^2$ and $Q$ is a Euclidean momentum.\nIn this way renormalons indicate the asymptotic nature of perturbative\nexpansions and hence bring an uncertainty to perturbative calculations.\nEstimating this uncertainty in a standard way one gets:\n\\begin{eqnarray}\n\\delta_{IR}~\\sim~exp\\left(-{2\\over b_0\\alpha_s(Q^2)}\\right)~\\sim~(\\Lambda_{QCD}\/Q)^4,\\\\ \\nonumber\n\\delta_{UV}~\\sim~exp\\left(-{1\\over b_0\\alpha_s(Q^2)}\\right)~\\sim~(\\Lambda_{QCD}\/Q)^2\n\\end{eqnarray}\nwhere \n$\\alpha_s(Q^2)\\sim~(b_0ln Q^2\/\\Lambda_{QCD}^2)^{-1}$. \n\nThus, renormalons, although arising within a purely perturbative framework,\nrealize the idea of dimensional transmutation.\nAlso, renormalons indicate the presence of non-perturbative \npower corrections of the same order in\n$(\\Lambda_{QCD}\/Q)$, which are then\nneeded to render the theory uniquely determined despite the\nuncertainties of perturbative expansions. \nSince renormalons always introduce two different\nmass scales, that is, $k^2\\gg Q^2$ or $k^2\\ll Q^2$,\nit is natural to invoke operator product expansions (OPE)\nto evaluate their contribution. In the case of infrared renormalons\nit is the standard OPE, when applicable. In particular,\nthe series (\\ref{series}) above, for $n\\gg 1$,\ncan be considered as a perturbative\ncontribution to the matrix element of \n$\\langle 0|\\alpha_s (G_{\\mu\\nu}^a)^2|0\\rangle$ \\cite{mueller,vz}:\n\\begin{equation}\n{\\langle 0|\\alpha_s(G_{\\mu\\nu}^a)^2|0\\rangle_{ren}\n\\over 24\\pi Q^4}\n=\\sum_{\\scriptstyle{n~large}}\n{3\\alpha_s(Q^2)^{n+1}b_0^n\\over 2^{n+1}\\pi^2}n!\\label{cond}\n.\\end{equation}\nThe non-perturbative counterpart was in fact \nintroduced first via QCD sum rules\n\\cite{svz}.\nIn the case of UV renormalons one can utilize \\cite{vainshtein} a reverse OPE,\nwhich is an expansion in $Q^2\/k^2$. \nThe use of an OPE allows us to formulate the renormalon\ncontribution in terms of the running coupling, without direct\nuse of the renormalon chains. The use of the OPE also brings a\nchallenge to theory \\cite{vainshtein,fsp}.\nNamely, it turns out that a single renormalon chain does not in fact\ndominate over two or more chains. Thus, there is no closed set\nof graphs producing the same $n!$.\n\nIn short, renormalons\nare a simple and systematic way to\nparametrize the IR contributions to various observables.\n\n\\subsection{Limitations of renormalons.}\n\nAt the one-loop level \nrenormalons are neither a unique nor necessarily the simplest way\nto probe IR regions perturbatively. Another possibility is the introduction\nof a finite gluon mass $\\lambda$. The gluon mass was tried as a fit parameter\nabout 15 years ago \\cite{cornwall}. 
In particular there is an \ninfrared-sensitive perturbative \ngluon condensate \\cite{chet}:\n\\begin{equation}\n\\langle 0|\\alpha_s(Q^2)(G_{\\mu\\nu}^a)^2|0\\rangle~=~-{3\\alpha_s\\over\\pi^2}\\lambda^4ln\\lambda^2\n\\label{con}\\end{equation}\nwhich is a substitute for the renormalon contribution \n(\\ref{cond}) in the case of massless\ngluons. In recent times, the use of a finite mass $\\lambda\\neq 0$\nhas become very common.\nIn what follows we shall not always distinguish between one-loop calculations\nwith finite $\\lambda$ and a single renormalon chain, labeling both\ntechniques generically as renormalons.\n\nIt might be worth emphasizing, \nhowever, that nowadays the finite gluon mass is \nused mostly not as a fit parameter\nbut rather as a probe of the infrared region. \nNamely, terms non-analytic in $\\lambda^2$ \ncome exclusively from infrared gluons. The power of $\\lambda$ then characterizes\nthe strength of the IR-sensitive contributions.\nGenerically, the translation between one-loop calculations with a finite gluon\nmass $\\lambda$ and those with IR renormalons looks as follows \\cite{bbz}:\n\\begin{eqnarray}\n\\alpha a_0 ln\\lambda^2+\\alpha a_1{\\sqrt{\\lambda^2}\\over Q}+\\alpha a_2\n{\\lambda^2ln\\lambda^2\\over Q^2}+...\\rightarrow\\\\ \\nonumber\nb_0 ln\\Lambda_{QCD}^2+b_1{\\Lambda_{QCD}\\over Q}+b_2{\\Lambda_{QCD}\\over Q^2}+...\\label{par}\n\\end{eqnarray}\nwhere we keep only the infrared-sensitive contributions\nand $a_i,b_i$ are coefficients.\n\nAmong the limitations of the renormalon \ntechnique let us mention the following\npoints:\\\\\n(i) renormalons respect the symmetries of the Lagrangian and cannot,\nfor example, produce a nonvanishing quark condensate\n$\\langle \\bar{q}q\\rangle \\neq 0$;\\\\\n(ii) renormalons are \"target-blind\", e.g.,\n$\\langle p|G^2|p\\rangle_{renorm}=\n\\langle 0|G^2|0\\rangle_{renorm}$;\\\\\n(iii) renormalons give no direct indication of confinement,\nsay, of a string configuration.\n\nAn interesting problem is brought out\nby renormalons \\cite{blok} in supersymmetric gluodynamics.\nTo render the theory supersymmetric one adds to gluons an equal number\nof gluinos $\\lambda^a$. The gluinos affect the value of $b_0$ in Eq.\n(\\ref{cond}) but this seems to be the only change.\nOn the other hand, one might argue that $\\langle (G_{\\mu\\nu}^a)^2\\rangle$\nnow vanishes. Indeed, the vacuum expectation \nvalue of the Lagrangian is zero \\cite{voloshin}:\n\\begin{equation}\n\\langle 0|-{1\\over 4}(G_{\\mu\\nu}^a)^2+\\bar{\\lambda}^a\n\\slashchar{D}\\lambda^a|0\\rangle_{SUSY}~=~0\\label{zero}\n\\end{equation}\nsince it is an F-component of a superfield.\nSince $\\slashchar{D}\\lambda^a=0$ \nby virtue of the equation of motion, one is inclined to think\nthat (\\ref{zero}) implies the vanishing of the gluon condensate in\nsupersymmetric theories. Calculationally, this is not true, however:\nit is the vacuum expectation value\nof the equation of motion, $\\langle 0|\\bar{\\lambda}^a\n\\slashchar{D}\\lambda^a|0\\rangle_{SUSY}$, which does not vanish \nin the renormalon approximation and cancels the gluon\ncondensate induced by renormalons. \nThe reason is that in SUSY gluodynamics\nthe gluino wave function renormalization\nis related to that of the gauge coupling,\nwhile in ordinary QCD it is gauge dependent and in this sense\narbitrary. This might be an indication that\nthe dynamics of supersymmetric gauge theories is in fact very different\nfrom that of QCD. 
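\n\nBefore closing this section, it may be useful to spell out, for orientation,\nthe standard estimate which stands behind the uncertainties $\\delta_{IR}$ and\n$\\delta_{UV}$ quoted at the beginning of this section; this is textbook\nmaterial rather than a new result. The terms of the series generated by\n(\\ref{series}), $\\sim n!\\left(b_0\\alpha_s(Q^2)\/2\\right)^n$, decrease only up to\n$n_*\\simeq 2\/(b_0\\alpha_s(Q^2))$, and the size of the minimal term sets the\nintrinsic uncertainty of the expansion:\n\\begin{equation}\n\\delta_{IR}~\\sim~n_*!\\left({b_0\\alpha_s(Q^2)\\over 2}\\right)^{n_*}~\\sim~\nexp(-n_*)~=~exp\\left(-{2\\over b_0\\alpha_s(Q^2)}\\right)~\\sim~\n\\left({\\Lambda_{QCD}\\over Q}\\right)^4,\n\\end{equation}\nwhere Stirling's formula and the one-loop form of $\\alpha_s(Q^2)$ quoted above\nhave been used; the UV estimate is obtained in the same way with the factor\nof 2 in the exponent replaced by 1.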
\n\nIn conclusion,\nrenormalons provide us with a systematic, although incomplete, way\nto guess at non-perturbative physics in QCD. Theoretically,\nthere are important questions yet to be answered.\n\n\\section{PHENOMENOLOGY. GENERAL.}\n\n\\subsection{Could-be phenomenology.}\n\nThe main use of renormalons is in cases when there is no OPE.\nHowever, one can try to gauge possible renormalon-based phenomenologies\nagainst the case when an OPE is valid. \nAs is mentioned above,\nevaluating, for example, the T-product of two\nelectromagnetic currents at large Euclidean momenta $Q^2$,\none finds that perturbation theory is unreliable in\nthe infrared as far as terms of order $Q^{-4}$ are concerned.\nThen the polarization operator $\\Pi (Q^2)$ could be represented\nas\n\\begin{eqnarray}\nQ^2{d\\Pi (Q^2)\\over dQ^2}=~(parton~model)\\cdot~~~~~~~~~~~~~~~\\\\ \\nonumber\n(1+a_1\\alpha_s(Q^2)+a_2\\alpha_s(Q^2)^2+...+\na_{ren}C{\\Lambda_{QCD}^4\\over Q^4})\\label{wb}\n\\end{eqnarray}\nwhere $C$ is a constant related to the procedure of defining \nthe uncertainty associated with an asymptotic expansion\nand $a_{ren}$ varies with the choice of the external current, i.e. is\nchannel dependent. Fitting the data in various channels with a single unknown\nconstant $C$, one could try to develop a phenomenology\nsimilar to that of the QCD sum rules \\cite{svz}.\n\nThis kind of phenomenology, however, would run into apparent difficulties.\nIndeed, the $1\/Q^4$ contribution in (\\ref{wb}) is a tiny piece against\nthe background of the first terms in the \n$\\alpha_s(Q^2)$ expansion. In particular, any\nredefinition of the coupling would reshuffle the whole series and\nthe $1\/Q^4$ piece could well depend on such a redefinition.\nThus, it is inconsistent, generally speaking, to keep\nthe renormalon contributions without keeping many orders in\n$\\alpha_s(Q^2)$. The phenomenology is painstaking and its \nprincipal features are outlined in Refs.\n\\cite{mueller2,ms1}. For the lattice implementation\nof this approach see \\cite{ms}. \n\nOn the other hand, the success of the QCD sum rules is\nbased on the simplifying assumption that the non-perturbative\nterms matching the renormalon \nambiguity are in fact large. \nIt is natural to adopt\nthis approach in other applications of renormalons,\nwhere we have no OPE, as well\n\\cite{az}.\n\n\\subsection{Renormalons and power corrections.}\n\nRecent considerations of renormalons have\nbrought to light various power corrections.\nCommon to all the examples which we list below is that \nthey go beyond the higher-twist effects indicated\nby the standard OPE.\n\n({\\it i}) In the case of the total cross section, a new type of correction\nappears due to UV renormalons \\cite{vz}:\n\\begin{equation}\n{\\sigma_{tot}(\\gamma^{*}\n\\rightarrow X)\\over\\sigma_{tot,parton}(\\gamma^{*}\\rightarrow X)}\n=1+a_1\\alpha_s+...+C_{UV}{\\Lambda_{QCD}^2\\over Q^2} \n.\\end{equation}\nCombined with the idea of enhancement, these terms could\nsolve certain problems with QCD \nsum rules and provide a link to NJL models \\cite{vz2}.\nIn this review we will concentrate\non IR renormalons and\nonly note in passing that \nalready the consideration of\nthe UV-induced $\\Lambda_{QCD}^2\/Q^2$ terms revealed the problem that overshadows\nall applications of renormalons. Namely, in the absence of an OPE it\nis much more difficult to relate different channels.\nIn particular, the $\\Lambda_{QCD}^2\/Q^2$ corrections are welcome on\nphenomenological grounds in the $\\pi$-meson channel but not in\nthe $\\rho$-meson channel. 
It is not known whether UV\nrenormalons produce such a pattern of $1\/Q^2$ corrections. \n\n({\\it ii}) Infrared renormalons induce\n$1\/Q$ corrections to many observables. The first\nindications of these corrections \nwere found in the cross section of the Drell-Yan process \\cite{conto}\n\\begin{equation}\nh_1+h_2~\\rightarrow~(\\mu^+\\mu^-)+X\\label{dy}\n.\\end{equation} \nShape variables, like the thrust $T$, also receive $1\/Q$ corrections.\nIn the language of a finite gluon\nmass \\cite{webber}:\n\\begin{equation}\n 1-T~\\sim~\\lambda\/Q\n.\\end{equation}\nIn all these cases the corrections are due to soft gluons\nwith 3-momenta of order $\\lambda$. They are most easily visualized\nin the example of a heavy quark mass \\cite{bigi,bb}.\nThe infrared correction to a heavy mass $M_H$ due to the Coulomb-like field\nis of order\n\\begin{equation}\n{(\\delta M_H)_{IR}\\over M_H}~\\sim~{1\\over 8\\pi M_H}\\int_{\\scriptstyle{IR}}\n|{\\bf E}^a|^2d^3{\\bf r}~\\sim~\\alpha_s {\\lambda \\over M_H}\\label{irm}\\label{cc}\n\\end{equation}\nwhere ${\\bf E}^a$ is the electric field and by the infrared-sensitive piece\nof the mass one understands the difference between the mass renormalizations\nin the cases $\\lambda=0$ and $\\lambda\\neq 0$.\nThis contribution is then well defined.\n\n({\\it iii}) Renormalons may also bring new predictions in cases\nwhere the power corrections can be treated within the standard\nOPE, like deep-inelastic scattering or inclusive decays\nof heavy particles. The reason is that, \nin terms of the standard procedures, the renormalon calculus unifies \nthe evaluation of the coefficient functions and of the \ncorresponding matrix elements.\nAs a result new relations may arise.\nThe simplest relation of this kind has in fact already been mentioned:\nnamely, there is no dependence on the target.\nWe shall discuss further examples in the next section.\n\nThus, we conclude this section\nwith the remark that, at least potentially, renormalons \nmay provide us with a new dimension in studies of power corrections.\nThese corrections, in turn, may be important, for\nexample, for the extraction of numerical values of $\\alpha_s$\nfrom measurements of event shape variables. For an initial\nattempt see \\cite{hamacher}.\n\n\\section{RENORMALON \"ZEROS\".}\n\n\\subsection{Heavy quark decays.}\n\nRenormalon-based predictions naturally fall into two categories,\ndepending on whether one gets a vanishing or a nonvanishing\ncontribution. If we get a zero \nin a particular calculation, then it is natural to look\nfor a more general explanation, such as a symmetry.\nThis indeed turns out to be true, at least for the examples known\nso far.\n\nAs a first example consider inclusive leptonic decays of heavy particles\n\\cite{bigi,bbz}. Confining ourselves to one-loop radiative corrections\nand keeping $\\lambda\\neq 0$, we can, generally speaking, parametrize \nthe infrared sensitivity in terms of the coefficients $a_i$\n(see also Eq. (\\ref{par})):\n\\begin{equation}\n\\Gamma_{tot}=\\Gamma_{tot}^0(1+\\alpha a_0ln\\lambda^2+\\alpha a_1\\sqrt{\\lambda^2}+...) \n\\end{equation}\nwhere $\\Gamma_{tot}^0$ is the partonic width, with inclusion of corrections\nof order $\\alpha$.\nThe results of a straightforward calculation are \\cite{bbz}\n\\begin{equation}\na_0~=~a_1~=~a_2~=~0.\n\\end{equation}\n\nNow, these zeros in fact have a different status. 
The vanishing of\n$a_0$ is the well-known Bloch-Nordsieck cancellation.\nThe vanishing of $a_1$ was first claimed \\cite{bigi} \non the basis of the OPE\nfor heavy quark decays (for a review and further references see\n\\cite{vainshtein2}). This cancellation holds provided that the bare \nwidth is proportional to the fifth power of a short-distance mass\n$M_{sh.d.}$ instead of the physical, or pole, mass $M_{pole}$:\n\\begin{equation}\nM_{sh.d.}~\\approx~M_{pole}-{\\alpha_s\\over 2}\\lambda\n\\end{equation}\nwhere we keep only the infrared-sensitive contribution.\nThe physical meaning of this procedure is simple. \nIndeed, the total decay width is sensitive to the\ninstantaneous energy release. \nThe Coulomb field, on the other hand, is \"shaken off\" \nas a result of a fast decay\nand the Coulomb correction to the mass (\\ref{cc}) does not \naffect the total width. More elaborate calculations confirm this\nintuition.\n\nAs for the vanishing of the coefficient $a_2$, there are no obvious\ngeneral reasons for it. Moreover, within the OPE one can show\n\\cite{vainshtein2} that the quadratic corrections are generally related\nto matrix elements of the operators $O_{1,2}$:\n\\begin{equation}\nO_1~=~{1\\over M_H^2}\\bar{Q}\\sigma_{\\mu\\nu}G_{\\mu\\nu}Q, ~\nO_2~=~{1\\over M_H^2}\\bar{Q}{\\bf D}^2Q\\label{oper}\n\\end{equation}\nwhere $Q$ is the field operator of the heavy particle,\n$G_{\\mu\\nu}$ is the gluonic field strength tensor (with color indices\nsuppressed), and ${\\bf D}$ is the covariant derivative. \nIt is worth emphasizing that the use of the OPE does not assume\nthat the matrix elements of the operators (\\ref{oper})\nover a free-particle state are normalized to zero. Moreover,\nthe infrared-sensitive parts of the matrix elements\nare uniquely determined and are not subject to redefinitions.\nIt just happens that in the renormalon approximation \nthe matrix elements of (\\ref{oper}) vanish. This is an \nexample of what we mean in point ({\\it iii}) of the preceding subsection\nand we shall return to discuss it in more detail below.\n\n\\subsection{KLN-vacuum.}\n\nIn the case of the heavy quark decays reviewed above one \nexpects the vanishing of the leading $1\/Q$ corrections on the basis of the OPE.\nIn the case of the Drell-Yan process (\\ref{dy}) there is no\nOPE and one could expect the appearance of $1\/Q$ corrections.\nHowever, a straightforward calculation demonstrated \\cite{bb2}\nthat terms linear in $\\lambda$ in fact cancel at one loop.\nIn more detail, one evaluates moments $M_n$ of the cross section:\n\\begin{equation}\n\\int d\\tau\\tau^{n-1}{d\\sigma (Q^2,\\tau)\\over dQ^2}~=~M_n(1+\\alpha_s a_1\\sqrt{\\lambda^2}+...)\n\\label{moments}\\end{equation}\nwhere $Q$ is the invariant mass of the lepton\npair produced, $\\tau=Q^2\/s$ and $\\sqrt{s}$ is \nthe invariant mass of the $q\\bar{q}$ pair from the initial \nhadrons $h_{1,2}$. The result \\cite{bb2} is $a_1=0$\nprovided $n$ is not very large:\n\\begin{equation}\nn\\cdot \\Lambda_{QCD}\/\\sqrt{s}~\\ll~1\\label{ln}.\n\\end{equation}\n\nAs argued in \\cite{az2}, the reason for this cancellation is\nagain general: it is a manifestation of the inclusive nature \nof the moments (\\ref{moments}). If, on the other hand, one considers\nvery large $n$ (see (\\ref{ln})), then the integral is practically\nsaturated by an exclusive channel. 
Moreover, the cancellation\nof the linear terms in (at least $U(1)$) gauge theories\nappears to be the same general phenomenon \nas the Bloch-Nordsieck cancellation.\n\nOne starts with the Kinoshita-Lee-Nauenberg \ntheorem \\cite{ln} as the most general statement on\ninfrared cancellations. Moreover, one can argue \\cite{asz} that the\nKLN summation over\ninitial and final states eliminates not\nonly the $ln\\lambda$ terms, as is emphasized in the original papers,\nbut the linear terms as well:\n\\begin{equation}\n\\sum_{i,f}|S_{i\\rightarrow f}|^2~\\sim~0\\cdot ln\\lambda^2+0\\cdot\\sqrt{\\lambda^2}\\label{canc}.\n\\end{equation}\nHere $S_{i\\rightarrow f}$ are elements of the $S$-matrix and relation\n(\\ref{canc}) holds in each order of the perturbative expansion.\nThe rationale behind (\\ref{canc}) is simple:\nthe KLN summation cancels the singular $1\/\\omega$ terms (with $\\omega$ the\nsoft-gluon energy) at the level of the amplitudes, which implies the\nelimination of both the $1\/\\omega^2$ and the $1\/\\omega$ terms in\n$\\sum |S_{i\\rightarrow f}|^2$.\n\nNote that, to visualize the cancellations due to the summation over\nthe degenerate initial states, one may think in terms of a \"KLN vacuum\"\nwhich is populated by soft gluons.\nTo account for these particles in the initial state\nthe original KLN summation invokes both connected and disconnected graphs.\nTo prove Eqs. (\\ref{canc}) and (\\ref{fold}),\non the technical side it is crucial\nthat, instead of summing over disconnected graphs, one can systematically\nadd to ordinary Feynman graphs those with the\npropagators of soft particles changed into their complex\nconjugates \\cite{az2}:\n\\begin{equation}\n\\left({-i\\over k^2+i\\epsilon}\\right)~\\rightarrow~\n\\left({-i\\over k^2+i\\epsilon}\\right)^{*}.\\label{mod}\n\\end{equation}\nAdding graphs with the modified propagator (\\ref{mod})\nis equivalent to using the KLN vacuum and is technically simple.\nIt also seems plausible that the KLN vacuum\ncould be reduced to a finite-temperature vacuum, but this\nanalogy has not been elaborated so far.\n\nThe next step is to reduce the KLN sum, which extends over\ninitial and final states, to a summation over\nthe final states alone.\nIt is well known that, as far as the most singular terms are concerned,\nthis is indeed possible, and the KLN sum, so to say, folds into\ntwice the Bloch-Nordsieck sum over the final states:\n\\begin{equation}\n\\sum_{\\scriptstyle{i,f}}|S_{i\\rightarrow f}|^2~\\rightarrow_{\\scriptstyle{soft}}2\\cdot\n\\sum_{\\scriptstyle{f}}|S_{i\\rightarrow f}|^2\\label{fold}\n\\end{equation}\nwhere we have indicated that this is true for soft\nbut not collinear gluons. The new development is to show\nthat Eq. (\\ref{fold}) holds for the linear terms as well.\nThe proof \\cite{az2} utilizes the Low theorem and is made explicit for\nthe Drell-Yan process. However, the reasoning appears general enough\nto apply to other processes as well. \n\nOne may also wonder how far the use of the KLN vacuum \n(or, equivalently, of the propagator (\\ref{mod})) extends the infrared\ncancellations.\nThe general answer \\cite{asz} is that\nthe cancellations continue until one reaches the condensate terms.\nIn particular, in the case of gauge theories the\nuse of the propagator (\\ref{mod})\ndoubles the effect of the perturbative gluon condensate (\\ref{con}).\nVery recently this function of the modified propagator (\\ref{mod})\nwas emphasized in Ref. 
\\cite{hoyer}.\n\nTo summarize: at the one-loop level, \nthe linear terms cancel from inclusive \ncross sections in the same way as the logarithmic terms do.\nThe basic step in the proof is the use of the KLN vacuum\npopulated with soft particles or, equivalently, the addition\nof graphs with the modified propagator (\\ref{mod}).\nIn the case of $U(1)$ gauge theories the cancellation holds \nat higher loops as well.\n\n\\subsection{Vanishing matrix elements.}\n\nA specific feature of the renormalon calculus is that \nthe power corrections get universally expressed \nin terms of $\\Lambda_{QCD}$ or $\\lambda$ and do not depend on the target.\nOn the other hand, if the same observable can be treated \nwithin the OPE, the power corrections are routinely related\nto matrix elements of various operators. Thus, renormalons\nfix the matrix elements. Whether this fixing provides\nsatisfactory results is a different issue, which, to our knowledge,\nhas not been addressed systematically.\nWe therefore confine ourselves to a few casual remarks.\n\nAs we have already mentioned, in the case of heavy\nquarks renormalons imply the suppression \n\\cite{bbz} of the matrix elements\nof the operators (\\ref{oper}):\n\\begin{equation}\n\\langle free~particle|O_{1,2}|free~particle\\rangle~=~O(\\lambda^3)\\label{lc}\n\\end{equation}\nwhile on dimensional grounds one would expect terms\nof order $\\lambda^2ln\\lambda^2$.\n\nTechnically, the vanishing of the leading terms is due to\nsimple dynamical features of gauge interactions.\nIn particular, one observes \\cite{az3}\nthat the matrix element of the\noperator of the kinetic \nenergy, $\\slashchar{{\\bf D}}^2$, immediately reduces to the matrix element\nof a local operator which is nothing else but the vacuum\nexpectation value\nof the vector potential squared:\n\\begin{equation}\n\\langle free~particle|\\bar{Q}\\slashchar{{\\bf D}}^2\nQ|free~particle\\rangle\n\\sim C\\cdot\\langle{\\bf A}^2\\rangle\\label{asq}\n\\end{equation}\nIt is only natural then that the constant $C$ turns out to be zero\nbecause of gauge invariance. As for the matrix element\nof the magnetic energy, $\\bar{Q}\\sigma_{\\mu\\nu}G_{\\mu\\nu}Q$,\nits vanishing is due to the fact that transverse gluons do not\ninteract with a charged particle at rest.\nWhile the matrix elements in point, (\\ref{lc}),\nwere calculated directly only at the one-loop level, the reason for \ntheir suppression remains true in higher orders as well \\cite{az3}.\n\nIt is difficult to comment on the significance of (\\ref{lc}).\nOn the one hand, the theory of heavy quark decays (for a review see\n\\cite{vainshtein2}) assumes that the matrix elements in point\nare determined by the atom-like structure of hadrons\nand tacitly assumes that for free quarks they are zero.\nThe latter is not obvious (especially in the case of confinement).\nOne may then say that this is supported by \nrenormalons (see (\\ref{lc})).\nOn the other hand, the very idea that the matrix elements can be\ntarget independent looks very foreign to the whole OPE approach\nto heavy hadron decays. It appears more reasonable to apply\nrenormalons only to free-particle decays.\n\nIn the case of deep inelastic scattering one can evaluate power\ncorrections to the moments of structure functions.\nTo be specific, consider \\cite{mueller2}\nthe first moment of $F_3(x)$,\n$\\int dxF_3(x)$, relevant to the Gross-Llewellyn Smith sum rule. 
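\nFor orientation we recall (with the usual normalization conventions for the\nneutrino-nucleon structure function $F_3$) that this sum rule counts the\nnumber of valence quarks,\n\\begin{equation}\n\\int_0^1 dx~F_3(x,Q^2)~=~3\\left(1-{\\alpha_s(Q^2)\\over\\pi}+...\\right)+(power~corrections),\n\\end{equation}\nso that both the perturbative series and the $1\/Q^2$ term discussed below\nare corrections to the parton-model value of 3.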
\nThen the leading-twist contribution and the first power correction\nare determined by the matrix elements of the \nfollowing operator \\cite{shuryak}:\n\\begin{equation}\nO_{\\mu\\nu}={2i \\over q^2}\\epsilon_{\\mu\\nu\\alpha\\beta}q_{\\alpha}(\n\\bar{q}\\gamma_{\\beta}q+{4g\\over9q^2}\\bar{q}\n\\tilde{G}_{\\beta\\delta}\\gamma_{\\delta}\\gamma_5q)\\label{omn}\n\\end{equation}\nApplying the renormalon idea means that one evaluates\nthe power correction in terms of the matrix element of the\nleading-twist operator. In terms of the IR parameters\nentering the Feynman graphs, this matrix element is a function\nof the gluon mass $\\lambda$, the quark mass $m$ and the quark virtuality\n$p^2-m^2\\equiv \\epsilon^2$ \\cite{az3}. In more detail:\n\\begin{equation}\n\\langle \n{8g\\over9}\\bar{q}\n\\tilde{G}_{\\alpha\\beta}\\gamma_{\\beta}\\gamma_5q\\rangle=\nf(\\lambda^2,m^2,\\epsilon^2){C_F \\over 2\\pi}{4\\alpha_s\\over 3}\\langle \n\\bar{q}\\gamma_{\\alpha}q\\rangle\n\\end{equation}\nwhere $f(\\lambda^2,m^2,\\epsilon^2)$ is \n\\begin{eqnarray}\nf(\\lambda^2,m^2,\\epsilon^2)=\\int_{\\scriptstyle{0}}^1dyX(y)lnX(y); \\nonumber \\\\\nX=\\epsilon^2y(y-1)\n+m^2y^2+\\lambda^2(1-y).\\label{ambig}\n\\end{eqnarray}\nAs one would expect, the $\\lambda^2ln\\lambda^2$ term disappears\nif $m^2\\gg\\lambda^2$, for the same reason as above (see Eq. (\\ref{asq})),\nand its role is taken over by the quark mass (for $\\epsilon=0$) as an\ninfrared parameter. On the other hand, the $\\lambda^2ln\\lambda^2$\nterm does represent the power correction if the other\ninfrared-sensitive parameters are set to zero.\nIn fact, much more detailed calculations, representing \nthe whole $x$-dependence of the quadratic power correction, are\navailable in this case, or an equivalent thereof \\cite{dmw,stein}.\n\nAt the next step one has to account for the anomalous dimension\nof the operator governing the $Q^{-2}$ correction \n(see Eq. (\\ref{omn})). \nIn the Minkowski-space approach the effect of the anomalous dimension\ncorresponds to the emission\nof soft gluons by energetic gluons. This has not been considered\nso far and it is not clear that $\\lambda\\neq 0$ can be consistently\nkept at this stage.\n \nSummarizing this section, the\nvanishing of certain power corrections revealed so far\nthrough the use of renormalons \ncan be understood each time within a broader\ntheoretical framework. The development of the corresponding\nframework was sometimes initiated by renormalons, and its completion,\nby including non-abelian theories, still represents a challenge.\n\n\\section{RENORMALONS AND EVENT SHAPES.}\n\nRenormalon and renormalon-related techniques have turned out to be\ninstrumental in providing a theoretical basis for the\nexistence of $\\Lambda_{QCD}\/Q$ corrections in shape variables\nin $e^+e^-$ annihilation. The phenomenology \nof these terms is of special interest\nsince, on the one hand, they represent the leading power\ncorrections and, on the other hand, there does not exist an\nalternative, more general framework to treat these\ncorrections. There are experimental fits to $1\/Q$ corrections\n\\cite{barreiro,webber} and \na careful experimental study of the $1\/Q$ terms has been made in \\cite{hamacher}.\n\nThe very existence of the $1\/Q$ corrections has been demonstrated\nby various techniques. Let us mention the finite gluon mass \\cite{webber},\nthe single renormalon chain \\cite{az,ks}, and the dispersive approach to the \nrunning coupling \\cite{dmw}.\nIt can also be seen from simple estimates. 
Consider, for example,\nthe thrust $T$:\n\\begin{equation}\nT~=~max_{\\scriptstyle{{\\bf n}}}{\\sum_{\\scriptstyle{i}}|{\\bf p_i\n\\cdot n}|\\over \\sum_{\\scriptstyle{i}}|{\\bf p}_i|}\\label{thrust}\n\\end{equation}\nwhere ${\\bf p}_i$ are the momenta of the particles produced \nwhile ${\\bf n}$ is a unit vector. \nPerturbatively, $T\\neq 1$ arises because of the emission of\ngluons from quarks. Consider then the contribution\nto $T$ due to soft gluon emission:\n\\begin{equation}\n\\langle 1-T\\rangle_{soft}\\sim\\int_0^{\\Lambda_{QCD}}\n{\\omega\\over Q}{d\\omega\\over \\omega}\\alpha_s(\\Lambda_{QCD})\\sim{\\Lambda_{QCD}\\over Q}\n\\end{equation}\nwhere the first factor in the integrand comes from \nthe definition of the thrust, $d\\omega\/\\omega$\nis the standard factor for the emission of a soft gluon, and \nthe running coupling $\\alpha_s(\\Lambda_{QCD})$ is of order unity.\nNote that, unlike the inclusive Drell-Yan cross section,\nthe evaluation of the thrust assumes that the momenta of the final\nparticles are resolved on the infrared-sensitive scale,\nand there is no reason, therefore, to expect cancellation of\nthese terms.\n\nOnce the existence of the $1\/Q$ corrections is established, the effort\nto create a phenomenology shifts to deriving relations\namong various observables, and such relations were claimed\nin all the approaches mentioned above.\nIn particular, in the one-renormalon approximation \\cite{az}\none gets for the standard shape variables:\n\\begin{eqnarray}\n{1\\over 2}\\langle 1-T\\rangle_{1\/Q}={1\\over 3\\pi}\\langle C\\rangle_{1\/Q}=\n\\\\ \\nonumber\n={2\\over \\pi}\\langle{\\sigma_L\\over \\sigma_{T}}\\rangle_{1\/Q} =\n{1\\over \\pi}\\langle Esin^2\\delta\\rangle_{1\/Q}=U\\label{univ}\n\\end{eqnarray}\nwhere $Q$ is now the total c.m. energy,\nthe subscript $1\/Q$ means that only linear power corrections are kept\nand $U$ is a universal factor:\n\\begin{equation}\nU~=~ {C_F\\over \\pi Q}\\int_0^{\\sim Q^2}{dk^2_{{\\perp}}\\over k_{{\\perp}}}\n\\alpha_s (k^2_{\\perp})~\\sim~{\\Lambda_{QCD}\\over Q}\\label{u}\n.\\end{equation}\nMoreover, according to the rules of the renormalon calculus\nonly the contribution of the Landau pole in $\\alpha_s(k^2_{\\perp})$, \nparametrized in a certain\nway, is retained in (\\ref{u}). As a result, indeed $U\\sim\\Lambda_{QCD}\/Q$.\nSimilar, although not identical, relations \nhave been obtained within other approaches.\nThe earliest derivation \\cite{webber} used \nthe finite gluon mass technique. Comparisons with existing data,\nin general,\nlook favourable \\cite{webber,az,dmw}.\n\nHaving said this, we have to make \nnumerous reservations as to the status\nof relations of the type (\\ref{univ}).\nThe point is that there are uncertainties in the derivations\nwhich can be removed only at the price of further assumptions.\nIn different approaches these uncertainties arise in different\nways but reflect the same difficulty: namely, perturbative\ncalculations are reliable when the coupling is small.\nNow we are trying to relate infrared contributions to various\nobservables. 
This is possible only if a certain\nextrapolation procedure is accepted,\nand any procedure of this kind is speculative.\n\nIn the renormalon language, the problem is that\nall orders of the perturbative expansion,\nwhich is an expansion in a small parameter in the UV region,\ncollapse to \nthe same order of magnitude in the IR region.\nIndeed, since\n\\begin{equation}\n\\alpha_s^2(k^2_{\\perp})~\\sim~\\Lambda_{QCD}{d\\alpha_s(k^2_{\\perp})\\over d\\Lambda_{QCD}}\n\\end{equation}\nwe have\n\\begin{equation}\n\\int_{\\scriptstyle{IR}}\n{dk^2_{\\perp}\\over k^2_{\\perp}}\\alpha_s^2(k^2_{\\perp}){\nk_{\\perp}\\over Q}~\\sim~U~\\sim~{\\Lambda_{QCD}\\over Q}\n.\\end{equation}\nThus, one is invited to address the problem in higher\norders as well.\n\nThere is a hope that the universality relations (\\ref{univ}) hold\nin higher orders as well. Namely, it is known that all\nthe log terms which dominate in the perturbative region are\nuniversally related to the so-called cusp anomalous dimension\n$\\gamma_{eik}$. If one retains only these terms in the IR as well,\nthen the universal factor $U$ in Eq. (\\ref{univ}) becomes\n\\cite{az,ks}:\n\\begin{equation}\nU~=~\\int_0^{\\sim Q^2}{dk^2_{{\\perp}}\\over Q\\cdot k_{{\\perp}}}\n\\gamma_{eik}(\\alpha_s (k^2_{\\perp})).\n\\end{equation}\nThe reservation is that the terms which\ndominate in the UV region do not necessarily dominate upon\nthe continuation into the IR region.\n\nAn attractive possibility is to relate\nthe factor $U$ in Eq. (\\ref{u})\nto parameters of hadronization models \\cite{az}. \nIndeed, the renormalon technique parametrizes the contribution\nof the region where the running coupling\n$\\alpha_s$ blows up. Since in the perturbative regime the coupling runs\nwith $k^2_{\\perp}$, renormalons, at least intuitively, correspond\nto introducing an intrinsic transverse momentum for hadrons\nin a quark jet. In the two-jet limit this relation can be \nmade quantitative \\cite{az}.\nNamely, let $\\tilde{\\rho}(z,p_{\\perp})$ \ndenote the appropriately normalized distribution\nof hadrons \nin a jet with longitudinal momentum fraction $z$ and \nperpendicular component $p_{\\perp}$. Then\n\\begin{equation}\nU~\\rightarrow~\\int d^2p_{\\perp}\\rho(p_{\\perp}){p_{\\perp}\\over Q}\\label{tube}\n\\end{equation}\nwhere $\\rho(p_{\\perp})\\equiv\\tilde{\\rho}(0,p_{\\perp})$.\nThe numerical value of (\\ref{tube}) can be obtained from\nfits to jet masses within the tube model (for a review see Ref.\n\\cite{webber3}),\nwhich identifies $\\rho(p_{\\perp})$ with the $p_{\\perp}$ distribution\nof hadrons in a rapidity-$p_{\\perp}$ \"tube\".\nUsing the experimental data one then gets\n\\begin{equation}\nQ\\cdot U~\\approx~0.5~GeV\n,\\end{equation} \na value which also fits well the data on the \n$1\/Q$ terms in shape variables.\n\nThus, Eq. (\\ref{tube}) can be considered as an attempt to\nformulate the enhancement hypothesis (see subsection 2.2) in purely\nphenomenological terms. Theoretically, it would be very attractive\nto formulate this hypothesis in terms of matrix elements of some\noperators. Note, therefore, the attempts to develop\na kind of OPE valid for jet physics \\cite{ks}.\n\nWe have spelled out in some detail \nthe difficulties of a phenomenology\nbased on renormalon chains. It is worthwhile to mention that\nother approaches suffer from uncertainties as well. For example,\nthe prediction for the thrust $T$ depends\non whether one keeps the gluon mass $\\lambda\\neq 0$ in the \ndenominator of Eq. (\\ref{thrust})\nor not. 
The prediction closest to the renormalon chain arises\nif this kinematical effect is neglected \\cite{dmw}.\n\nIn view of the model dependence of the prediction for the\n$1\/Q$ corrections in shape variables, it would be important\nto list predictions which could distinguish between various\nmodels. This has not been done, however, and we confine ourselves\nto a single remark of this kind \\cite{az}.\nNamely, the renormalon-chain predictions outlined above\neasily allow for an enhancement hypothesis. That is, if\ntwo-jet events are observed, \nthe $1\/Q$ corrections to the heavy jet mass $M_h$\nand to the light jet mass $M_l$ could be comparable. \nThe only relation which is expected to hold is\n\\begin{equation}\n\\langle 1-T\\rangle_{1\/Q}=\\langle{M_h^2\\over Q^2}\\rangle_{1\/Q}+\n\\langle {M_l^2\\over Q^2}\\rangle_{1\/Q}\n.\\end{equation}\nThis relation is simply an expression of the fact\nthat the $1\/Q$ corrections arise due to soft gluons.\nOn the other hand, the models with a finite gluon mass\nor the frozen coupling\ndo not allow for such an enhancement.\n\nData at relatively low energies \\cite{barreiro} do\nindicate \n\\begin{equation}\n\\langle{M_h^2\\over Q^2}\\rangle_{1\/Q}~\\approx~\n\\langle {M_l^2\\over Q^2}\\rangle_{1\/Q}\n\\end{equation}\nwhich can be considered as support for the particular enhancement\nmechanism described above.\n\nTo comprehend the significance of the data at higher energies,\nmore theoretical work is needed. The point is that\nthe $1\/Q$ form of the leading power corrections has been\nestablished in the two-jet limit.\nAt high energies, however, the two-jet events themselves are suppressed\nby a Sudakov form factor.\nIt is for this reason that, from the very\nbeginning \\cite{conto}, the $1\/Q$ corrections were claimed for resummed\ncross sections.\nTo ensure the two-jet dominance one could\nintroduce a corresponding weight factor. In the case of the thrust,\nfor example, one can consider \\cite{ks} the following average\nas far as the $1\/Q$ terms are concerned:\n\\begin{equation}\n\\langle 1-T\\rangle_{1\/Q}~\\rightarrow~\\langle \nexp(-\\nu (1-T))\\rangle_{1\/Q}\n\\end{equation}\nwhere $\\nu$ is a new parameter which is to be large\nenough\nto ensure the dominance of the region $(1-T)\\ll 1$.\n\nTo avoid \na special weighting function one would have to develop the theory\nof $1\/Q$ corrections for three-jet events and so on. This has\nnot been done. For a discussion of \nthe effect of intrinsic $k_{\\perp}$ near three-jet configurations\nsee Ref. \\cite{ellis}. \n\nSummarizing this section,\nrelations among the $1\/Q$ terms in various observables are model\ndependent. It looks plausible at this point \nthat the renormalon-based\nmodel will merge with the old-fashioned hadronization models.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}