\section{Introduction}

Variable radio emission is a hallmark of energetic objects such as coronally active stars, supernovae, neutron stars, black holes, and active galactic nuclei (AGN). Indeed, radio variability is often indicative of high-energy processes and, in principle, can be valuable for finding examples of relatively rare objects. However, surveys for variability are themselves quite rare --- blind sky surveys are almost never repeated owing to the scarcity of telescope time. The exceptions are mostly in the optical regime. Comparisons between POSS1, POSS2, and SDSS have been useful for studying variability (de~Vries et al.\ 2005). Gravitational microlensing studies (e.g., Alcock et al.\ 1997) and supernova searches (e.g., Astier et al.\ 2006, Miknaitis et al.\ 2007) have produced a wealth of data on optical variability from targeted sky regions, and several upcoming experiments such as Pan-STARRS (Kaiser et al.\ 2002), the Palomar Transient Factory (Rau et al.\ 2009), and LSST (Tyson 2002) will make the coming decade one in which time-domain astronomy plays a prominent role.

Variability studies in the radio band have typically targeted bright extragalactic sources (see de~Vries et al.\ 2004 for a review of searches for, and mechanisms of, radio variability). Comparisons between blind radio surveys are often hampered by differences in angular resolution and the confusing presence of interferometric sidelobe patterns. For example, there has been no systematic search for radio variability between the two largest radio sky surveys, FIRST (Becker et al.\ 1995) and NVSS (Condon et al.\ 1998). The FIRST survey did observe one area twice at 1400 MHz, an equatorial strip $\sim1.5$ degrees wide in the range $21^{\mathrm h}20^{\mathrm m} < RA < 03^{\mathrm h} 20^{\mathrm m}$. A search for variable sources in this area was reported in de~Vries et al.\ (2004). The search covered $\sim120$~deg$^2$ of extragalactic sky with a sensitivity similar to the Galactic plane search reported here; it thus serves as a useful control from which to estimate how many of the sources we find are background extragalactic radio sources.

The most systematic search for radio variability in the Galactic plane used the NRAO 91-m telescope in Green Bank, WV, operating at a frequency of 5~GHz (Gregory \& Taylor 1986). Over a five-year period the plane was observed 16 times, leading to the detection of 32 variable radio sources. The survey had a flux density threshold of $\sim20$~mJy and an angular resolution of 3\arcmin. Using the Very Large Array\footnote{The Very Large Array is an instrument of the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} (VLA), the Galactic plane has been surveyed at this same frequency but with much higher sensitivity and angular resolution, although with minimal repetition (Becker et al.\ 1994).

Recently, a new Galactic plane survey at 6~cm (4.86~GHz) has begun at the VLA.
The new survey (CORNISH\footnote{The Co-Ordinated Radio 'N' Infrared Survey for High-mass star formation; see \url{http://www.ast.leeds.ac.uk/Cornish}}; Purcell et al.\ 2008) has substantial overlap with our previous survey; both surveys have a flux density threshold of $\sim1$~mJy. Comparison between these two data sets allows for a search for Galactic radio sources that exhibit variability over the fifteen-year interval between the surveys. The search is complicated because the two surveys use different VLA configurations and hence have different angular resolutions (5\arcsec\ versus 1.5\arcsec). Nonetheless, it is possible to identify strongly varying sources. In this paper we will compare results between the original survey and two epochs of data from the new survey. In section 2 we discuss the parameters of the two surveys, while in section 3 we present the results from a comparison of the two samples and adduce evidence for variability. We describe the properties of the variable sources including their spatial distribution, spectral indices, and counterparts at other wavelengths (\S4) and end with a discussion of our limited knowledge of the nature of these objects (\S5).

\begin{figure*}
\epsscale{1.0}
\plotone{f1.eps}
\caption{
Sky coverage for overlapping regions from the 1990${+}$ (red), 2005 (blue), and 2006 (gray) 6~cm survey epochs. Darker regions have higher rms noise values, while white areas are outside the survey. The noise is higher at the edges of the coverage and in fields with bright or complex extended sources. Typical rms values are $\sim 0.1$~mJy in both surveys, but the old 6~cm data were acquired with a more widely spaced pointing grid and so display greater variation with position. The areas of overlap are given in Table~1.
}
\label{fig-coverage}
\end{figure*}

\section{The 6~cm Surveys}

The original VLA 6~cm Galactic plane survey was carried out between 1989 and 1991 in the C and BnC configurations (Becker et al.\ 1994). It covered a longitude range $-10^{\circ} < l < 42^{\circ}$ within $\pm 0.4^{\circ}$ of the plane for a total of 43~deg$^2$. The data were re-reduced in 2005 using much improved data processing algorithms and some additional data (White et al.\ 2005). The new catalog reaches a flux density limit of $\sim1$~mJy and contains over 2700 radio sources. Since the data were taken in C and BnC configurations, the angular resolution is $\sim5\arcsec$.

The new CORNISH survey (Purcell et al.\ 2008) is meant to complement the {\it Spitzer} GLIMPSE Legacy program (Benjamin et al.\ 2003). When completed, it will cover a longitude range $10^{\circ} < l < 65^{\circ}$ within $\pm 1^{\circ}$ of the plane. The data are being taken in the B configuration and hence will have an angular resolution of $\sim1.5\arcsec$. The new survey will also achieve a flux density sensitivity of $\sim1$~mJy. The ultimate areal coverage will be 110~deg$^2$. A pilot study of 10~deg$^2$ near $l = 30^{\circ}$ was carried out in the spring of 2005. The first 64~deg$^2$ of the full survey (including repeated observations of the pilot area) was observed in summer 2006. We retrieved these data from the VLA archive and reduced them using the AIPS procedures we developed for the FIRST survey (White et al.\ 1997). Our source detection algorithm HAPPY was run on the final co-added images.

We henceforth refer to the three epochs as~I (1990${+}$), II~(2005) and III~(2006).
Note that while the epoch~II and~III data were taken over short periods of time (spanning about 2 months in each case), the epoch~I data were taken over a much greater time period (hence our choice of the label ``1990${+}$''). For the overlapping area used for this paper, 70\% of the epoch~I observations were taken between June 1989 and December 1990, and 30\% were taken between February and April of 2004. Consequently the time span between epoch~I and the later two epochs varies by a large factor depending on the source location. We will report the mean observational epoch of the flux density measurements for individual objects in the following discussion.

For this paper we restrict our attention to sources in sky regions with coverage at two or three epochs. Table~1 describes the areas of overlap between the various epochs and the number of sources from each catalog included in those areas. To ensure source reliability, we restrict our sample to sources that are strongly detected ($>8.5\sigma$) in one of the epochs or that are confirmed by detections at multiple epochs. We also check for detections at 20~cm, either from our MAGPIS survey (Helfand et al.\ 2006) or in the catalog of White et al.\ (2005); a 20~cm detection is required as confirmation for sources detected in only one 6~cm epoch.

In Figure~\ref{fig-coverage} we show the sky coverage for the three epochs in the vicinity of the overlap region. Note that the 2005 pilot area is entirely covered by the 2006 data, so all of the sky area covered by 1990${+}$ and 2005 observations also has 2006 observations.

\begin{deluxetable}{ccccc}
\label{table-area}
\tablecolumns{5}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{Sky Regions with Multiple Epochs of Observations}
\tablehead{
\colhead{Epochs} & \colhead{Area Covered} & \multicolumn{3}{c}{Number of Sources} \\
& \colhead{(deg$^2$)} & 1990${+}$ & 2005 & 2006 \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}
}
\startdata
1990${+}$, 2006 & 13.4 & 541 & \nodata & 347 \\
2005, 2006 & \phn5.9 & \nodata & 144 & 142 \\
1990${+}$, 2005, 2006 & \phn3.9 & 161 & 168 & 133 \\
\enddata
\end{deluxetable}

The other significant difference between the two surveys is their angular resolution. For unresolved radio sources, the flux densities from the two surveys are directly comparable; the difficulty comes in knowing which sources are true point sources. For sources that are partially resolved by the new survey, the flux density will be lower than that measured a decade and a half ago, even in the absence of variability. Hence, partially resolved sources will give a false-positive variability signal. By the same token, however, any source significantly brighter in the newer survey is almost certainly variable. All images from both surveys can be found at the MAGPIS website (\url{http://third.ucllnl.org/gps}).

\section{Search for Variability}

A match among the three 6~cm data sets resulted in 503 distinct sources detected at two or more epochs. To ensure reliability, we restrict our sample to sources that are detected in at least two 6~cm epochs or that have confirming detections at 20~cm.
Sources detected only in a single 6~cm epoch and not at 20~cm are excluded. Sources are considered a match if their positions agree to within 1.5\arcsec\ for the epoch~II and~III catalogs or to within 5\arcsec\ between the epoch~I and later-epoch catalogs. These relatively large match radii are chosen to include extended sources, which can have larger positional offsets. For the higher resolution epoch~II and~III data, the median position difference is 0.2\arcsec, and 80\% of the sources have positions that agree to within 0.4\arcsec. For comparisons between the low-resolution epoch~I data and the more recent observations, the median separation is 0.7\arcsec, and 80\% of the sources have positions that differ by 1.6\arcsec\ or less. To avoid potential confusion, we have removed from the match list sources that have ambiguous matches owing to multiple components within the matching radius.

A comparison of the flux densities determined from the old and new data is plotted in Figure~\ref{fig-flux}; sources that fall along the diagonal have comparable flux densities from the two measurements. There is a clear bias for sources to be weaker in the newer observations, a direct consequence of the higher angular resolution, which results in slightly extended sources having some of their flux resolved out in the newer data. Certainly some of these sources could be variable, but it is difficult to distinguish between a decrease due to variability and a decrease due to resolution effects. Happily, the reverse is not true; sources that brighten between the two epochs are likely to be truly variable.

\begin{figure*}
\epsscale{0.8}
\plotone{f2.eps}
\caption{
Comparison of 6~cm integrated flux densities for old (epoch I/1990${+}$) and new (epochs II/2005 and III/2006) catalogs. Sources detected in all three epochs are plotted twice to show both the 2005 and 2006 flux densities. Red symbols indicate the variable sources, with upper limits shown for variables detected at only one epoch. Most sources have similar flux densities in the two epochs, but extended sources tend to have lower flux densities in the newer survey because those data were taken in a higher-resolution VLA configuration that resolves out some of the extended radio emission. Consequently, in our variable source search there is a bias toward objects that are brighter in the 2005/2006 epoch.
}
\label{fig-flux}
\end{figure*}

Figure~\ref{fig-flux2} displays a comparison of the flux densities measured in the two high-resolution epochs (II and~III). The area of overlap is smaller (5.9~deg$^2$ versus 17.3~deg$^2$ for Fig.~\ref{fig-flux}), but it is clear that the scatter is considerably reduced. This is expected because the observations are taken in the same VLA configuration and so have the same resolution. (The shorter time baseline for variability also presumably contributes slightly to the reduced scatter.)

\begin{figure*}
\epsscale{0.8}
\plotone{f3.eps}
\caption{
Comparison of the integrated flux densities for the epoch~II and~III catalogs. These observations were taken in the same VLA configuration, which makes the flux densities directly comparable. The two epochs are also much closer in time, reducing the amplitude of the expected variability signal.
The symbols are the same as in Fig.~2.
}
\label{fig-flux2}
\end{figure*}

The sources falling in a region covered by at least two of the three catalogs yielded a list of potential variable sources using a $5\sigma$ variability threshold\footnote{The difference in the peak flux densities was required to be greater than $5 \times (\sigma_{\rm old}^2 + \sigma_{\rm new}^2)^{1/2}$ if the source was detected at both epochs; if the object was detected at only one epoch, the flux density at the undetected epoch was conservatively taken to be twice the rms value at that epoch.}. A visual inspection of the pairs of images led us to reject many as suspect owing either to source confusion or to clear angular extent in the higher resolution observations. There remained 39 sources regarded as having a high likelihood of being true radio variables (Table~2). We were cautious about including sources that were brighter in epoch~I due to the resolution difference discussed above. Only 5 of the candidates rely on a bright epoch~I measurement to establish variability; the other 34 either are brighter in the high-resolution data or show variability between epochs~II and~III. We have retained the five sources that fade from their epoch~I flux measurements because they appear to be point-like in all epochs at both 6~cm and 20~cm, but we cannot categorically exclude a small source extent being responsible for the lower flux density observed in the more recent data. The variability significance in column 16 of Table~2 is shown in bold italic type for these less reliable sources.

In examining candidate variables, we were alert to the possibility of calibration errors or bad data causing systematic flux density differences. Consequently we looked carefully at the close pair of sources, G37.7347${-}$0.1126 and G37.7596${-}$0.1001, both of which were undetected in epoch~I and were bright ($>10$~mJy) in epoch~III. To confirm the reality of these sources, we examined the individual grid images that contributed to the coadded images for each source. The grid images confirmed the variation: in epoch~I each source would have been detected at more than $5\sigma$ significance in two different grid images if the source was as bright as in epoch~III, but neither showed any evidence for emission. And in epoch~III the sources were detected in two or more independent observations at flux densities consistent with the coadded image detection.

Of the five single-epoch detections in the list, one appears only in epoch~I (and so is one of the less reliable sources), one appears only in epoch~II, and three appear only in epoch~III. Note that single-epoch sources both must be relatively bright in the detected epoch for the variability to be considered and also must have confirming detections at 20~cm. In fact, all of the multi-epoch sources are also detected at 20~cm; we use the spectral indices derived from the 6 and 20~cm flux densities below, although caution is warranted since the 20~cm and 6~cm observations are not contemporaneous, so the variability for which we are selecting will also affect the spectral index estimates. In fact, roughly half of the MAGPIS 20~cm flux densities are inconsistent with our original compact source survey at this wavelength undertaken in the 1980s (see catalogs in White et al.\ 2005), underscoring the case for variability. The degree of variability at 6~cm ranges from 20\% to a factor of 18; the sketch below illustrates the threshold applied in this selection.
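To make the selection criterion concrete, the $5\sigma$ test described in the footnote above can be written in a few lines of Python. This is a minimal sketch with hypothetical flux densities and rms noise values, not the code actually used in our pipeline:
\begin{verbatim}
import math

def is_variable(s_old, rms_old, s_new, rms_new,
                detected_old=True, detected_new=True):
    # 5-sigma test from the footnote: the peak flux density
    # difference must exceed 5*sqrt(rms_old^2 + rms_new^2).
    if not detected_old:           # non-detection: conservatively take
        s_old = 2.0 * rms_old      # twice the rms value at that epoch
    if not detected_new:
        s_new = 2.0 * rms_new
    sigma = math.sqrt(rms_old**2 + rms_new**2)
    return abs(s_new - s_old) > 5.0 * sigma

# Hypothetical source: 1.2 mJy (rms 0.1 mJy) in 1990+,
# 10.5 mJy (rms 0.1 mJy) in 2006 -- flagged as a candidate.
print(is_variable(1.2, 0.1, 10.5, 0.1))    # True
\end{verbatim}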
The flux density distribution ranges from $<1$ to 65~mJy with a median of $\sim8$~mJy in the second-epoch data.

\section{Characteristics of the Variable Sources}

Only a few papers have reported variability results for centimetric radio sources from extragalactic surveys on time scales of years and with sensitivities in the mJy range. Bower et al.\ (2007) examined archival VLA data from a frequently observed calibration field (944 observations over 22 years) to search for transient radio sources at 6~cm to a flux density limit of 370~$\mu$Jy; a transient source was defined as one that only appeared at a single epoch (or over a short range of contiguous epochs $<2$ months in length). They measured a two-epoch rate of $1.5\pm0.4$ transients per square degree, and estimated that the number of transients scales with flux density as $S^{-1.5}$. Our faintest variable source has a 6~cm flux density of 2.8~mJy (at the brighter epoch; fainter objects would not have passed the $5\sigma$ variability threshold). Thus, we should expect $\sim1$ true extragalactic transient in our survey area of 23.2~deg$^2$ (the total pair-wise area covered -- see Table~1). While there are five sources detected at one epoch only in our sample, all five are detected independently at 20~cm (at one or more different epochs), and thus cannot be considered true transients. The detection of zero transients when one is expected is construed as consistent with the results of Bower et al.\ (2007) while setting a weak limit on the number of Galactic transient sources.

Examining data collected for a deep survey of the Lockman Hole spaced at intervals of 19 days and 17 months, Carilli et al.\ (2003) concluded that only 2\% of sources between 50 and 100~$\mu$Jy at 1.4~GHz are highly variable ($>50$\%). These observations did, however, yield nine variable objects in the flux density range 1 to 25~mJy in a field with a FWHM of 32\arcmin; only one of these sources varied by more than 50\%.

The survey by de~Vries et al.\ (2004) mentioned in the introduction (\S1) offers the best comparison sample against which to assess the fraction of our variable sources likely to be extragalactic. They find 123 sources variable at $>4\sigma$ significance in 120.2~deg$^2$ of high-latitude sky, or roughly 1 variable source per square degree. The median flux density of the extragalactic sample is 13.5~mJy at $\lambda=20$~cm.
As noted above, we have 20~cm flux densities (albeit non-contemporaneous ones) for all members of our sample; the median flux density is 12.9~mJy, very similar to that of the de~Vries sample.

\begin{deluxetable}{ccccc}
\label{table-vardist}
\tablenum{3}
\tablecolumns{5}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{Distribution of Variability}
\tablehead{
\colhead{Fractional} & \colhead{de~Vries} & \colhead{This Paper} & \colhead{Predicted} & \colhead{Galactic}\\
\colhead{Variability $f$} & \colhead{Counts} & \colhead{Counts} & \colhead{Extragalactic} & \colhead{Excess} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)}
}
\startdata
 $<$ 1.25 & 51 & 1 & 7.4 & \nodata \\
1.25 -- 1.50 & 39 & 5 & 5.7 & 0.0 \\
1.50 -- 1.75 & 19 & 5 & 2.8 & 2.2 \\
1.75 -- 2.00 & \phn4 & 4 & 0.6 & 3.4 \\
2.00 -- 2.25 & \phn4 & 2 & 0.6 & 1.4 \\
2.25 -- 2.50 & \phn2 & 3 & 0.3 & 2.7 \\
2.50 -- 2.75 & \phn1 & 2 & 0.1 & 1.9 \\
2.75 -- 3.00 & \phn1 & 0 & 0.1 & 0.0 \\
 $>$ 3.0 & \phn2 & 17\phn & 0.3 & 16.7\phn \\[\smallskipamount]
Total & 123\phn & 39\phn & 17.9\phn & 28
\enddata
\tablecomments{
Col.~(1): Ratio of brightest to faintest flux density measurements.
Col.~(2): Variable source counts from de~Vries et al.\ (2004).
Col.~(3): Variable source counts from this paper.
Col.~(4): de~Vries counts scaled to match area covered in this paper.
Col.~(5): Net excess of variables in this paper compared with de~Vries counts.
}
\end{deluxetable}

However, an examination of the fractional variability of the two samples (defined as $f$, the highest flux density recorded over the lowest) reveals drastic differences. Table~3 displays the distribution of fractional flux density variation for the de~Vries extragalactic sample and for our Galactic plane catalog of variables. A total of $73\pm4$\% of the extragalactic sample has a fractional variability of $f<1.5$, while only 6 of the 39 Galactic variables (15\%) vary this little. At the other end of the distribution, only 2/123 extragalactic objects vary by as much as a factor of 3, while fully 17/39 (44\%) of the Galactic plane sources are this variable.

We can use the de~Vries sample to estimate the number of extragalactic variables present in our survey area. The ratio of areas is 23.2 deg$^2$/120.2 deg$^2$ or 0.193. We cannot simply scale by area, however, because of the resolution bias, discussed above, that discriminates against sources which faded between epoch~I and epochs~II and~III. Of the 30 sources whose variability was established on the basis of a change between epoch~I and a later epoch, five sources faded and 25 sources brightened in the later epochs. Since the distribution should be inherently symmetrical, we can assume we eliminated roughly 20 fading sources to protect against resolution effects.
Thus, the total number of true variables is reduced by 20/50 or 40\%; note that this correction factor applies only to the area in which 1990${+}$ data are compared to later data (19.3 deg$^2$). The effective sky area covered by our survey when this inefficiency is taken into account is $A_{\rm eff} = 0.6 \times 19.3 + 5.9 = 17.5\,\hbox{deg}^2$, and the expected number of extragalactic variables is therefore $123 \times A_{\rm eff}/120.2 = 18$.

If we distribute these 18 sources with the fractional variability of the extragalactic sample (column 4 of Table~3), we expect $\sim13$ at $f<1.5$, 3 with $1.5<f<1.75$, and only $\sim2$ with $f>1.75$; the large observed excess of high-amplitude variables (column 5 of Table~3) must therefore be overwhelmingly Galactic. The non-variable sources in the overlap regions are predominantly detected at high significance ($>8.5\sigma$). Such sources could have been detected as variables. The latitude distribution shows a bias toward negative latitudes, consistent with earlier studies that show that $b=0.0$ lies above the Galactic plane in the first quadrant. In Figure~\ref{fig-longhist}, we display the longitude distribution for all sources (which is distorted by differential coverage) and for the variable sources. The fraction of variables, displayed in the lower panel, shows a clear rise toward the Galactic center.

\begin{figure*}
\epsscale{0.6}
\plotone{f6.eps}
\caption{
Variability fraction as a function of the spectral index $\alpha$ ($F_\nu \sim \nu^\alpha$) between 6~cm and 20~cm. The comparison sample includes the sources from Figs.~4 and 5 that have 20~cm flux measurements. The spectral index was computed from the lowest 6~cm flux density at any epoch in order to reduce the selection bias toward flatter spectra in variable sources.
}
\label{fig-spindhist}
\end{figure*}

The spectral index distribution (Figure~\ref{fig-spindhist}) also suggests that the variable sources represent a distinct population. There is a trend toward greater variability as the radio spectrum becomes flatter (increasing spectral index). For any individual source, the spectral index calculated between our 6~cm and 20~cm catalogs is unreliable, as the measurements were 1) obtained with different spatial resolutions, and 2) far from contemporaneous. Since the 20~cm observations have lower resolution, they detect more flux in extended sources and so tend to produce spectral indices that are too steep. On the other hand, the observed variability in the 6~cm flux density tends to bias the index toward flatter values, since sources that brighten at 6~cm are more likely to be recognized as variable. To compensate partially for the latter effect, the spectral index in Figure~\ref{fig-spindhist} is computed using the smallest 6~cm flux measured at any of the three epochs. That may also be responsible for the variable sources with very steep spectral indices ($\alpha < -2$), which may have been in a bright phase when measured at 20~cm.

The interpretation of the spectral index distribution is therefore not straightforward. The apparent increase in variability for flatter spectrum sources could result from either a Galactic population (optically thin or thick thermal emission) or an extragalactic population (beamed emission from AGN/blazars).

Another line of evidence that many of the variable sources are Galactic derives from their counterparts at other wavelengths. We examined images from the {\it Spitzer} GLIMPSE survey (3.6, 4.5, 5.8 and 8.0$\mu$m; Benjamin et al.\ 2003), the {\it Spitzer} MIPSGAL 24$\mu$m survey (Carey et al.\ 2009), and the 1.1~mm Bolocam Galactic Plane Survey (BGPS; Aguirre et al.\ 2009).
Of the 39 variable sources, 7 are MIPSGAL sources, with 6 of those also found to be GLIMPSE sources, and 4 are detected in the BGPS millimeter observations; all are described in greater detail below. None of the counterparts are expected to be the result of chance coincidences, implying that all of these objects must be in the Galaxy. Infrared/mm counterparts are significantly more common among the variable sources than among the non-variable 6~cm sources. We examined a sample of 40 non-variable radio sources, selected as unresolved sources detected in at least 2 epochs with 6~cm flux densities that are consistent within $2\sigma$ at all epochs. Only 2 of the non-variable objects were found to have MIPSGAL counterparts, and none had BGPS matches. We conclude that the existence of these counterparts is related to the nature of the variable sources.

\section{Discussion}

\subsection{Source Identification -- What They Are Not}

Having established the existence of a population of highly variable Galactic sources, the obvious question is, What are they? Three classes of Galactic variable radio emitters can be easily eliminated from consideration: coronally active radio-emitting stars, pulsars, and masers. We justify our exclusion of these source classes in turn.

In a survey of 122 RS CVn and related active binary systems, which are among the most luminous stellar radio sources, Drake et al.\ (1989) found only 18 detected above a quiescent flux density of 1~mJy at 6~cm; the faintest optical counterpart was $V=10.0$. Even assuming an extreme flare of a factor of 100 (Osten 2008), the faintest possible counterpart would have $V=15$; none of our variables has a counterpart this bright. As for dMe flare stars, the other main class of variable stellar radio emitters, the most luminous quiescent emission is $\sim10^{14.2}$ erg s$^{-1}$ Hz$^{-1}$ (G\"udel et al.\ 1993) corresponding to a flux density of $\sim1$~mJy at a distance of 13~pc. Even an extreme flare with an increase of a factor of 500 over the quiescent level (Osten 2008 and references therein) would fall below our flux density threshold for a distance $>290$~pc. Stars with spectral types later than M6 would have counterparts fainter than 20th magnitude and could be represented in our sample. However, statistically, M-stars cannot be a significant contributor; Helfand et al.\ (1999) found only $\sim5$ M stars in 5000 deg$^2$ of the {\sl FIRST} survey to a flux density limit of 0.7~mJy, whereas our variables have a surface density of 1.6~deg$^{-2}$.

While nearby radio pulsars scintillate strongly in the ISM leading to large-amplitude variability, pulsars have very steep radio spectra, and most have not been detected at 6~cm (none of our objects are coincident with one of the 1827 known pulsars; Manchester et al.\ 2005). For a typical spectral index of $-1.5$, our weakest source would be a $\sim100$~mJy pulsar at 400~MHz, and most unlikely to have been missed in pulsar surveys. The small duty cycle of the recently discovered RRATs (Rotating RAdio Transients -- McLaughlin et al.\ 2006) makes them equally unlikely to explain our variable sources.

Finally, radio masers are known to be highly variable, but no known maser transitions fall within our bandpass.
As noted below, however, three of our variables are coincident with methanol masers.

Two classes of extragalactic radio transients --- supernovae and GRB afterglows --- are also highly improbable counterparts for our events. Both have rise times of at most tens of days and cannot, in the absence of a steady underlying source of radio emission, account for the bulk of our sources which show a flux density increase over many years. In addition, their rarity makes them statistically unlikely counterparts. The one extragalactic population that does show variability on the time scales we probe, AGN, are shown above to have variability amplitudes which exclude them from explaining all but a handful of our events.

The remaining known classes of variable radio sources include microquasars (accreting, high-mass X-ray binaries that produce relativistic jets: e.g., SS433, Cyg X-3 and GRS 1915${+}$105), radio magnetars (Camilo et al.\ 2006; Camilo et al.\ 2007), and the recently described Galactic Center Transient sources (Hyman et al.\ 2009 and references therein). The first two of these have signatures at other wavelengths; we explore below the fragmentary data outside the radio band that are available for our variable objects.

\subsection{Source Identification -- Multiwavelength Data}

Counterparts at other wavelengths can be useful in suggesting the origin of radio variability. At our MAGPIS website (Helfand et al.\ 2006), we have collected the following Galactic plane data in addition to the three-epoch 6~cm data described herein: two epochs of 20~cm observations for these same fields including the principal MAGPIS survey, 90~cm observations of the same regions, the 3.6, 4.5, 5.8, and 8.0 $\mu$m data from the Spitzer {\sl GLIMPSE} survey (Benjamin et al.\ 2003), 24 $\mu$m images from MIPSGAL (Carey et al.\ 2009), 20 $\mu$m data from the MSX survey (Price et al.\ 2001) and the 1.1~mm Bolocam Galactic plane survey (Aguirre et al.\ 2009). In addition, we have queried the SIMBAD database for each of our sources and have examined the Digitized Sky Survey images; in one case, we have obtained optical observations of a source. We report the results of this multi-wavelength inquiry here.

\subsubsection{Mid-IR and mm observations}

Seven of our variables are detected at 24 $\mu$m in the MIPSGAL survey, and six of these are also detected in at least one GLIMPSE mid-IR band. Four of the objects are also detected at 1.1~mm in the Bolocam survey. In all seven cases at least two bands are available, and in all seven cases the sources are red; i.e., they are faintest in the short-wavelength bands and brightest in the long-wavelength bands. In two cases (G31.1595${+}$0.0449 and G37.7347${-}$0.1126) multiple components with different IR spectral shapes are present, with the radio source identified with the brighter component in the first case, and the redder component in the second. Three of the IR-detected objects have associated methanol masers; this, coupled with their IR spectra, demonstrates that they represent activity associated with star formation in compact or ultracompact \ion{H}{2} regions.

For one IR-detected source, G29.5779${-}$0.2685, we have obtained followup observations at the MDM Observatory (J.~Halpern, private communication). R-band and H$\alpha$ images were obtained on 23~August 2009, and show a barely resolved ($\sim1\arcsec$) object, brighter in H$\alpha$ and coincident with the radio source.
A spectrum obtained the same night shows no continuum, but very strong nebular emission lines. The object appears to be a very compact planetary nebula. Its radio flux history is thus perplexing: 6.9~mJy at 6~cm in $\sim1990$, rising to 10.5~mJy in 2005 and falling again to 5.8~mJy in 2006. The 20~cm flux density in the MAGPIS survey (epoch 2001--2004) is only 1.3~mJy, suggesting the source may be optically thick. Further simultaneous multi-frequency observations are required to measure the radio spectrum and derive clues as to the nature of the source's variability.

\begin{figure*}
\epsscale{1.0}
\plotone{f7.eps}
\caption{
MAGPIS 20~cm image of supernova remnants W41 (G23.3${-}$0.3) and G22.7${-}$0.2. The boxes mark the positions of three variable 6~cm sources (G22.7194${-}$0.1939, G22.9116${-}$0.2878, and G22.9743${-}$0.3920).
}
\label{fig-snrs}
\end{figure*}

\subsubsection{X-ray observations}

The brightest variable, G21.6552${-}$0.3611, is coincident with a point-like X-ray source catalogued in the XMM Galactic Plane survey (Hands et al.\ 2004). It has a hard-band (2--6~keV) count rate of 0.0051 ct s$^{-1}$ and is undetected in the soft (0.4--2.0~keV) band. For an intrinsic power-law spectrum with spectral index $\Gamma = 1.9$, the expected absorption column density through the Galactic plane of $\sim10^{23}$ cm$^{-2}$ is consistent with the non-detection in the soft band; the inferred intrinsic flux in the 0.2--10~keV band would be $7 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$. For an extragalactic AGN at 1 Gpc, this corresponds to a luminosity of $8 \times 10^{42}$ erg s$^{-1}$, while for a Galactic object at 5~kpc, the X-ray luminosity would be a modest $2 \times 10^{33}$ erg s$^{-1}$; for a column density of only $10^{22}$ cm$^{-2}$, the luminosity estimates are lower by a factor of 3. While this source is the brightest of our variables, it has one of the lowest modulation factors (decreasing by just $\sim50\%$ over 16 years). The (non-contemporaneous) 20~cm flux density is lower than either of the 6~cm values, suggesting a mildly inverted spectrum source. It is not detected at any other wavelength. The most likely explanation of this object is a flat-spectrum extragalactic radio source, one of a handful we expect in our sample.

One other source, G30.4460${-}$0.2148, lies 27\arcsec\ from the position of an ASCA Galactic Plane Survey catalog entry (Sugizaki et al.\ 2001). The uncertainty in the X-ray position is 1\arcmin; one other (brighter) radio source lies within the X-ray error circle although at twice the distance from its centroid. The X-ray source is a marginal detection ($4.6 \sigma$) with a 0.7--7.0~keV unabsorbed flux of $2.6 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$ for an intrinsic power law index of $\Gamma = 1.9$ and an absorption column density of $10^{23}$ cm$^{-2}$; the flux is roughly four times lower for $N_H = 10^{22}$ cm$^{-2}$. Assuming the identification is correct, the X-ray to radio flux ratio is thus $\sim20$ times greater than our other X-ray detection, although still within the X-ray to radio luminosity ratios characteristic of AGN.
The primary distinguishing feature, however, is that the radio source is coincident with a very bright mid-IR source (saturated in all but the 3.6 $\mu$m band) which is also detected at 1.1~mm.

The ASCA Galactic Plane Survey covered an area encompassing all but six of our variables to a flux density level of approximately $10^{-12.5}$ erg cm$^{-2}$ s$^{-1}$; no other X-ray sources are coincident to within 1\arcmin. The higher-resolution coverage of the Einstein, ROSAT, XMM, and Chandra observatories is much spottier; no further matches are found within the 10\arcsec\ error circles of these other catalogs.

\subsubsection{Low-frequency radio detections}

Three of the variable sources are detected at 90~cm. G22.9116${-}$0.2878 (Fig.~\ref{fig-snrs}) has a 90~cm flux density of $\sim180$~mJy; this is consistent with a nonthermal spectral index of $\sim-0.9$ if one takes the most recent (but far from contemporaneous) 6, 20, and 90~cm measurements. The 6~cm flux density increased by more than a factor of three since 1990, making it one of the higher amplitude variables, but no other information is available on this source. G30.6724${+}$0.9637 (the highest latitude source detected) has a 90~cm flux density of $\sim70$~mJy, below the 20~cm flux density (90~mJy), possible additional evidence for variability, as the 20:6~cm flux density ratio is 3:1 (again, all non-contemporaneous). This is the smallest amplitude variable in our sample and, given its distance from the Galactic plane, an extragalactic counterpart is the most likely explanation.

The third 90~cm detection is G22.7194${-}$0.1939, perhaps the most intriguing source in our sample. An image of the region surrounding this source is given in Figure~\ref{fig-snrs}. The source lies very near to the geometric center of a 30\arcmin-diameter supernova remnant, G22.7${-}$0.2 (Green 2004 and references therein) and 4\arcmin\ from a fairly bright \ion{H}{2} region. There is no counterpart detected at mm, IR, or optical wavelengths. The source brightened by a factor of four between 2003 and 2006 at 6~cm; its 20~cm flux density is 12~mJy, three times higher than the higher of the two 6~cm measurements. The distance to the remnant is unknown, although its large angular diameter would suggest it is not very remote (its diameter would be $\sim45$~pc at 5~kpc). X-ray observations could reveal whether or not this source is likely to be a compact object associated with the supernova remnant.

\subsection{Summary}

We have discovered a relatively high surface density (2 deg$^{-2}$) of variable radio sources in the Galactic plane and have argued that the large majority of these ($\sim80$\%) are Galactic objects. While a few are associated with young star formation activity, the identity of the majority is unknown. Follow-up radio observations are required to confirm the variability in these sources, establish the variability time scale(s), and obtain contemporaneous spectral indices. Observations at optical, infrared, and X-ray wavelengths could help establish counterparts and identify the origin of the variable radio emission.

\acknowledgments

RHB and DJH acknowledge the support of the National Science Foundation under grants AST-05-07598 and AST-02-6-55. RHB's work was supported in part under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under contract W-7405-ENG-48. DJH was also supported in this work by NASA grant NAG5-13062.
RLW acknowledges the support of the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS5-26555. The first three authors are grateful for the hospitality of Quest University Canada (\url{http://www.questu.ca}), an innovative new undergraduate university in British Columbia, where this manuscript was completed.

\section{Introduction}
\label{Introduction}
From a purely theoretical point of view, given the underlying conditional probability distribution of a dependent variable $C$ and a set of features $\mathbf{X}$, the Bayes decision rule can be applied to construct the optimum induction algorithm. However, in practice learning machines are not given access to this distribution, $Pr(C|\mathbf{X})$. Therefore, given a feature vector of variables $\mathbf{X}\in R^N$, the aim of most machine learning algorithms is to approximate this underlying distribution or estimate some of its characteristics. Unfortunately, in most practically relevant data mining applications, the dimensionality of the feature vector is quite high, making it prohibitive to learn the underlying distribution. For instance, gene expression data or images may easily have more than tens of thousands of features. While, at least in theory, having more features should result in a more discriminative classifier, this is not the case in practice because of the computational burden and the curse of dimensionality.

High-dimensional data poses different challenges on induction and prediction algorithms. Essentially, the amount of data needed to sustain the spatial density of the underlying distribution increases exponentially with the dimensionality of the feature vector, or alternatively, the sparsity increases exponentially given a constant amount of data. Normally in real-world applications, a limited amount of data is available and obtaining a sufficiently good estimate of the underlying high-dimensional probability distribution is almost impossible except for some special data structures or under some assumptions (independent features, etc.).

Thus, dimensionality reduction techniques, particularly feature extraction and feature selection methods, have to be employed to reconcile idealistic learning algorithms with real-world applications.

In the context of feature selection, two main issues can be distinguished. The first one is to define an appropriate measure function to assign a score to a set of features. The second issue is to develop a search strategy that can find the optimal (in a sense of optimizing the value of the measure function) subset of features among all feasible subsets in a reasonable amount of time.

Different approaches to address these two problems can roughly be categorized into three groups: wrapper methods, embedded methods and filter methods.

Wrapper methods \cite{kohavi:96} use the performance of an induction algorithm (for instance a classifier) as the measure function. Given an inducer $\mathcal{I}$, wrapper approaches search through the space of all possible feature subsets and select the one that maximizes the induction accuracy. Most of the methods of this type require checking all the possible $2^N$ subsets of features and thus may rapidly become prohibitive due to the so-called combinatorial explosion (a toy sketch of such an exhaustive search is given below).
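To illustrate the scaling problem, the following minimal Python sketch performs such an exhaustive wrapper search; the logistic-regression inducer and the cross-validation scoring are placeholder choices of ours (any scikit-learn classifier could be substituted), not a prescription:
\begin{verbatim}
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def wrapper_search(X, y, cv=5):
    # Exhaustive wrapper search: score every non-empty feature
    # subset with a cross-validated inducer and keep the best.
    n = X.shape[1]
    best_score, best_subset = -np.inf, None
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):  # 2^n - 1 subsets
            clf = LogisticRegression(max_iter=1000)
            score = cross_val_score(clf, X[:, list(subset)],
                                    y, cv=cv).mean()
            if score > best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score

# Feasible only for small n: already n = 30 features would
# require scoring roughly 10^9 candidate subsets.
\end{verbatim}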
Since the measure function is a machine learning (ML) algorithm, the selected feature subset is only optimal with respect to that particular algorithm, and may show poor generalization performance over other inducers.

The second group of feature selection methods are called embedded methods \cite{neumann:04} and are based on some internal parameters of the ML algorithm. Embedded approaches rank features during the training process and thus simultaneously determine both the optimal features and the parameters of the ML algorithm. Since accessing the internal parameters may not be possible in all ML algorithms, this approach cannot be seen as a general solution to the feature selection problem. In contrast to wrapper methods, embedded strategies do not require running an exhaustive search over all subsets, since they mostly evaluate each feature individually based on a score calculated from the internal parameters. However, similar to wrapper methods, embedded methods are dependent on the induction model and thus the selected subset is somewhat tuned to a particular induction algorithm.

Filter methods, as the third group of selection algorithms, focus on filtering out irrelevant and redundant features, in which irrelevancy is defined according to a predetermined measure function. Unlike the first two groups, filter methods do not incorporate the learning part and thus show better generalization power over a wider range of induction algorithms. They rely on finding an optimal feature subset through the optimization of a suitable measure function. Since the measure function is selected independently of the induction algorithm, this approach decouples the feature selection problem from the following ML algorithm.

The first contribution of this work is to analyze the popular mutual information measure in the context of the feature selection problem. We will expand the mutual information function in two different series and show that most of the previously suggested information-theoretic criteria are the first or second order truncation-approximations of these expansions. The first expansion is based on a generalization of mutual information and has already appeared in the literature, while the second one is new, to the best of our knowledge. The well-known minimal Redundancy Maximal Relevance (mRMR) score function can be immediately derived from the second expansion. We will discuss the consistency and accuracy of these approximations and experimentally investigate the conditions in which these truncation-approximations may lead to high estimation errors.

Alternatively, feature selection methods can be categorized based on the search strategies they employ. Popular search approaches can be divided into four categories: exhaustive search, greedy search, projection and heuristic. A trivial approach is to exhaustively search the subset space, as is done in wrapper methods. However, as the number of features increases, this rapidly becomes infeasible. Hence, many popular search approaches use greedy hill climbing as an approximation to this NP-hard combinatorial problem. Greedy algorithms iteratively evaluate a candidate subset of features, then modify the subset and evaluate if the new subset is an improvement over the old one. This can be done in a forward selection setup which starts with an empty set and adds one feature at a time, or with a backward elimination process which starts with the full set of features and removes one feature at each step.
The third group of search algorithms is based on targeted projection pursuit, a linear mapping technique that pursues an optimum projection of the data onto a low-dimensional manifold that scores highly with respect to a measure function \cite{friedman:74}. In heuristic methods, for instance genetic algorithms, the search is started with an initial subset of features which gradually evolves toward better solutions.

Recently, two convex quadratic programming based methods, QPFS in \cite{rod:10} and SOSS in \cite{naghibi:13}, have been suggested to address the search problem. QPFS is a deterministic algorithm and utilizes the Nystr\"{o}m method to approximate large matrices for efficiency purposes. SOSS, on the other hand, has a randomized rounding step which injects a degree of randomness into the algorithm in order to generate more diverse feature sets.

Developing a new search strategy is another contribution of this paper. Here, we introduce a new class of search algorithms based on Semi-Definite Programming (SDP) relaxation. We reformulate the feature selection problem as a (0-1)-quadratic integer programming problem and will show that it can be relaxed to an SDP problem, which is convex and hence can be solved with efficient algorithms \cite{boyd:04}. Moreover, there is a discussion about the approximation ratio of the proposed algorithm in subsection 3.2. We show that it usually gives better solutions than greedy algorithms in the sense that its approximate solution is more likely to be close to the optimal point of the criterion.

\section{Mutual Information Pros and Cons}
\label{mutualinformation}
Let us consider an $N$ dimensional feature vector $\mathbf{X}=[X_1,X_2,...,X_N]$ and a dependent variable $C$ which can be either a class label in case of classification or a target variable in case of regression. The mutual information function is defined as a distance from independence between $\mathbf{X}$ and $C$ measured by the Kullback-Leibler divergence \cite{cover:91}. Basically, mutual information measures the amount of information shared between $\mathbf{X}$ and $C$ by measuring their dependency level. Denote the joint pdf of $\mathbf{X}$ and $C$ and its marginal distributions by $Pr(\mathbf{X},C)$, $Pr(\mathbf{X})$ and $Pr(C)$, respectively. The mutual information between the feature vector and the class label can be defined as follows:
\begin{align} 
\label{eq_1} 
I(X_1,X_2,\dots,& X_N;C)\! = I(\mathbf{X};C) = \notag \\
& \! \int \!Pr(\mathbf{X},C)\! \log{\frac{Pr(\mathbf{X},C)}{Pr(\mathbf{X})Pr(C)}}\,\mathrm{d} \mathbf{X}\,\mathrm{d}C
\end{align}
It reaches its maximum value when the dependent variable is perfectly described by the feature set. In this case mutual information is equal to $H(C)$, where $H(C)$ is the Shannon entropy of $C$.

Mutual information can also be considered a measure of set intersection \cite{reza:61}. Namely, let $\mathbb{A}$ and $\mathbb{B}$ be event sets corresponding to random variables $A$ and $B$, respectively. It is not difficult to verify that a function $\mu$ defined as:
\begin{equation} 
\label{eq_2}
\mu({\mathbb{A} \cap \mathbb{B}}) = I(A;B)
\end{equation}
satisfies all three properties of a formal measure over sets \cite{yeung:91} \cite{bog:07}, i.e., non-negativity, assigning zero to the empty set, and countable additivity. For discrete variables, the plug-in estimate of \eref{eq_1} is straightforward to compute; a minimal sketch is given below.
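The following Python sketch computes this plug-in (empirical-frequency) estimate of the mutual information between two discrete sample vectors; the routine name is ours, and the sketch ignores the small-sample bias of this estimator:
\begin{verbatim}
import numpy as np
from collections import Counter

def mutual_information(x, c):
    # Plug-in estimate of I(X;C) in bits for two equal-length
    # sequences of discrete observations.
    n = len(x)
    n_xc = Counter(zip(x, c))            # joint counts
    n_x, n_c = Counter(x), Counter(c)    # marginal counts
    mi = 0.0
    for (xi, ci), nxc in n_xc.items():
        mi += (nxc / n) * np.log2(n * nxc / (n_x[xi] * n_c[ci]))
    return mi
\end{verbatim}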
However, as we see later, the generalization of the mutual information measure to more than two sets will no longer satisfy the \textit{non-negativity} property and thus can be seen as a signed measure, which is the generalization of the concept of measure obtained by allowing it to have negative values.

There are at least three reasons for the popularity of the use of mutual information in feature selection algorithms.

1. Most of the suggested non information-theoretic score functions are not formal set measures (for instance the correlation function). Therefore, they cannot assign a score to a set of features but rather to individual features. However, mutual information as a formal set measure is able to evaluate all possible informative interactions and complex functional relations between features and, as a result, fully extract the information contained in a set of features.

2. The relevance of the mutual information measure to misclassification error is supported by the existence of bounds relating the probability of misclassification of the Bayes classifier, $P_e$, to the mutual information. More specifically, Fano's weak lower bound \cite{fano:61} on $P_e$,
\begin{equation} 
\label{eq_3}
1+P_e\text{log}_2(n_y{-}1)\ge H(C)-I(\mathbf{X};C)
\end{equation}
where $n_y$ is the number of classes, and the Hellman-Raviv \cite{hellman:70} upper bound,
\begin{equation} 
\label{eq_4}
P_e\le \frac{1}{2}(H(C)-I(\mathbf{X};C))
\end{equation}
on $P_e$, provide somewhat of a performance guarantee.

As can be seen in \eref{eq_3} and \eref{eq_4}, maximizing the mutual information between $\mathbf{X}$ and $C$ decreases both the upper and lower bounds on the misclassification error and guarantees the goodness of the selected feature set. However, there is somewhat of a misunderstanding of this fact in the literature. It is sometimes wrongly claimed that maximizing the mutual information results in minimizing the $P_e$ of the optimal Bayes classifier. This is an unfounded claim since $P_e$ is not a monotonic function of the mutual information. Namely, it is possible that a feature vector $\mathbf{A}$ with less relevant information-content about the class label $C$ than a feature vector $\mathbf{B}$ yields a lower classification error rate than $\mathbf{B}$. The following example may clarify this point.

\textbf{Example 1}: Consider a binary classification problem with an equal number of positive and negative training samples and two binary features $X_1$ and $X_2$. The goal is to select the optimum feature for the classification task. Suppose the first feature $X_1$ is positive if the outcome is positive. However, when the outcome is negative, $X_1$ can take both positive and negative values with equal probability. Namely, $Pr(X_1{=}1|C{=}1) = 1$ and $Pr(X_1{=} -1 | C {=} -1) = 0.5$. In the same manner, the likelihood of $X_2$ is defined as $Pr(X_2 {=} 1 | C {=}1) = 0.9$ and $Pr(X_2 {=} -1 | C{=} -1) = 0.7$. Then, the Bayes classifier with feature $X_1$ yields the classification error:
\begin{align} 
\label{ex_1}
P_{e1}= & Pr(C{=}{-}1)Pr(X_1{=}1|C{=}{-}1) \notag \\ 
 & +Pr(C{=}1)Pr(X_1{=}{-}1|C{=}1)=0.25 
\end{align}
Similarly, the Bayes classifier with $X_2$ yields $P_{e2}=0.2$, meaning that $X_2$ is a better feature than $X_1$ in the sense of minimizing the probability of misclassification. However, unlike their error probabilities, $I(X_1;C) = 0.31$ is greater than $I(X_2;C) = 0.29$.
That is, $X_1$ conveys more information about the class label in the sense of Shannon mutual information than $X_2$.

A more detailed discussion can be found in \cite{ben:12}. However, it is worthwhile to mention that although using mutual information may not necessarily result in the highest classification accuracy, it is guaranteed to reveal a salient feature subset by reducing the upper and lower bounds of $P_e$.

3. By adopting classification error as a criterion, most standard classification algorithms fail to correctly classify the instances from minority classes in imbalanced datasets. Common approaches to address this issue are to either assign higher misclassification costs to minority classes or replace the classification accuracy criterion with the area under the ROC curve, which is a more relevant criterion when dealing with imbalanced datasets. Either way, the features should also be selected by an algorithm which is insensitive (robust) with respect to class distributions (otherwise the selected features may not be informative about minority classes in the first place). Interestingly, by internally applying unequal class-dependent costs, mutual information provides some robustness with respect to class distributions. Thus, even in an imbalanced case, a mutual information based feature selection algorithm is likely (though not guaranteed) not to overlook the features that represent the minority classes. In \cite{bao:11}, the concept of the mutual information classifier is investigated. Specifically, the internal cost matrix of the mutual information classifier is derived to show that it applies unequal misclassification costs when dealing with imbalanced data, and it is shown that the mutual information classifier is an optimal classifier in the sense of maximizing a weighted classification accuracy rate. The following example shows this robustness.

\textbf{Example 2}: Assume an imbalanced binary classification task where $Pr(C{=}1)=0.9$. As in Example 1, there are two binary features $X_1$ and $X_2$ and the goal is to select the optimum feature. Suppose $Pr(X_1{=}1|C{=}1) = 1$ and $Pr(X_1 {=} -1 | C {=} -1) = 0.5$. Unlike the first feature, $X_2$ can much better classify the minority class: $Pr(X_2 {=} {-}1 | C {=}{-}1) = 1$ and $Pr(X_2 {=}1 | C {=} 1) = 0.8$. It can be seen that the classifier using $X_1$ results in a 100\% classification rate for the majority class but only 50\% correct classification for the minority. On the other hand, using $X_2$ (assigning each value of $X_2$ to the class under which it is more likely) leads to 100\% correct classification for the minority class and 80\% for the majority. Based on the probability of error, $X_1$ should be preferred since its probability of error is $P_{e1} = 0.05$ while $P_{e2} = 0.18$. However, by using $X_1$ the classifier cannot learn the rare event (50\% classification rate) and thus randomly classifies the minority class, which is the class of interest in many applications. Interestingly, unlike the Bayesian error probabilities, mutual information prefers $X_2$ over $X_1$, since $I(X_2;C) = 0.20$ is greater than $I(X_1;C) = 0.18$. That is, mutual information is to some extent robust against imbalanced data.

Unfortunately, despite the theoretical appeal of the mutual information measure, given a limited amount of data, an accurate estimate of the mutual information would be impossible.
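The numbers quoted in Examples 1 and 2, by contrast, follow exactly from the stated distributions; the short Python sketch below (ours, for illustration) recomputes the mutual information values of Example 1 from the $2\times2$ joint probability tables:
\begin{verbatim}
import numpy as np

def h(p):                          # entropy in bits of a pmf array
    p = np.asarray(p, dtype=float).ravel()
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

def mi_from_joint(p_joint):        # I(X;C) = H(X) + H(C) - H(X,C)
    return (h(p_joint.sum(axis=1)) + h(p_joint.sum(axis=0))
            - h(p_joint))

# Rows index the feature value {+1,-1}; columns index C in {+1,-1}.
p_x1 = np.array([[0.50, 0.25],     # Example 1, feature X1
                 [0.00, 0.25]])
p_x2 = np.array([[0.45, 0.15],     # Example 1, feature X2
                 [0.05, 0.35]])
print(mi_from_joint(p_x1))         # ~0.311 bits
print(mi_from_joint(p_x2))         # ~0.296 bits
\end{verbatim}
Replacing the two tables with the joint distributions of Example 2 reproduces the corresponding values quoted there.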
To calculate the mutual information from data, however, one must estimate the high-dimensional joint probability $Pr(\mathbf{X},C)$, which is, in turn, known to be an NP-hard problem \cite{karger:01}.

As mutual information is hard to evaluate, several alternatives have been suggested \cite{battiti:94}, \cite{peng:05}, \cite{kwak:02}. For instance, the Max-Relevance criterion approximates \eref{eq_1} with the sum of the mutual information values between individual features $X_i$ and $C$:
\begin{equation} 
\label{eq_5}
\text{Max-Relevance} = \sum_{i=1}^{N} I(X_i;C)
\end{equation}
Since it implicitly assumes that features are independent, it is likely that the selected features are highly redundant. To overcome this problem, several heuristic corrective terms have been introduced to remove the redundant information and select mutually exclusive features. Here, it is shown that most of these heuristics are derived from the following expansions of mutual information with respect to $X_i$.

\subsection{First Expansion: Multi-way Mutual Information Expansion}

The first expansion of mutual information that is used here relies on the natural extension of mutual information to more than two random variables proposed by McGill \cite{mcgill:54} and Abramson \cite{abramson:63}. According to their proposal, the three-way mutual information between random variables $Y_i$ is defined by:
\begin{align} 
\label{eq_6}
I(Y_1;Y_2;Y_3) = & I(Y_1;Y_3)+I(Y_2;Y_3)-I(Y_1,Y_2;Y_3) \notag\\
 = & I(Y_1;Y_2) - I(Y_1;Y_2|Y_3)
\end{align}
where ``,'' between variables denotes joint variables. Note that, similar to two-way mutual information, it is symmetric with respect to the $Y_i$ variables, i.e., $I(Y_1;Y_2;Y_3) = I(Y_2;Y_3;Y_1)$. Generalizing over $N$ variables:
\begin{align} 
\label{eq_7}
I(Y_1;Y_2;\dots;Y_N) =\, & I(Y_1;\dots;Y_{N-1}) \notag \\
 &- I(Y_1;\dots;Y_{N-1}|Y_N)
\end{align}
Unlike 2-way mutual information, the generalized mutual information is not necessarily nonnegative and hence can be interpreted as a signed measure of set intersection \cite{han:80}. Consider \eref{eq_6} and assume $Y_3$ is the class label $C$; then positive $I(Y_1;Y_2;C)$ implies that $Y_1$ and $Y_2$ are redundant with respect to $C$ since $I(Y_1,Y_2;C) \le I(Y_1;C)+I(Y_2;C)$. However, the more interesting case is when $I(Y_1;Y_2;C)$ is negative, i.e., $I(Y_1,Y_2;C) \ge I(Y_1;C)+I(Y_2;C)$. This means that the information contained in the interactions of the variables is greater than the sum of the information of the individual variables \cite{gurban:09}.

An artificial example of this situation is the binary classification problem depicted in Figure \ref{fig1}, where the classification task is to discriminate between the ellipse class (class samples depicted by circles) and the line class (star samples) by using two features: the values of the $x$ axis and the values of the $y$ axis. As can be seen, since $I(x;C)\! \approx \! 0$ and $I(y;C)\! \approx\! 0$, there is no way to distinguish between these two classes by just using one of the features. However, it is obvious that employing both features results in almost perfect classification, i.e., $I(x,y;C)\! \approx\! H(C)$.
\begin{figure}
\centering
\vspace{-0mm}
\includegraphics[scale=.45]{fig1.ps}
\vspace{0mm}
\caption{Synergy between $x$ and $y$ features.
While the information of each individual feature about the class label (ellipse or line) is almost zero, their joint information can almost completely remove the class label ambiguity.}
\label{fig1}
\vspace{-6mm}
\end{figure}
The mutual information in \eref{eq_1} can be expanded in terms of generalized mutual information between the features and the class label as:
\begin{align} 
\label{eq_8}
I(\mathbf{X};C) =& \sum_{i_1=1}^{N} I(X_{i_1};C) - \sum_{i_1=1}^{N-1} \sum_{i_2=i_1+1}^{N} I(X_{i_1};X_{i_2};C) \notag \\
 & +\dots + (-1)^{N-1} I(X_1;\dots;X_N;C) 
\end{align}
From the definition in \eref{eq_7} it is straightforward to infer this expansion. However, a more intuitive proof is to use the fact that mutual information is a measure of set intersection, i.e., $I(Y_1;Y_2;Y_3) = \mu(\mathbb{Y_1}\cap \mathbb{Y_2}\cap\mathbb{Y_3})$, where $\mathbb{Y_i}$ is the corresponding event set of the $Y_i$ variable. Now, expanding the $N$-variable measure function results in:
\begin{align} 
\label{eq_9}
I(\mathbf{X};C&) = \mu((\bigcup_{i=1}^{N} \mathbb{X}_i) \cap \mathbb{C}) = \mu(\bigcup_{i=1}^{N} (\mathbb{X}_i \cap \mathbb{C})) \\
 &= \sum_{i=1}^{N} \mu(\mathbb{X}_i \cap \mathbb{C}) - \sum_{i_1=1}^{N-1} \sum_{i_2=i_1+1}^{N} \mu(\mathbb{X}_{i_1}\cap \mathbb{X}_{i_2} \cap \mathbb{C}) \notag \\ 
 & +\dots+ (-1)^{N-1}\mu(\mathbb{X}_1\cap \mathbb{X}_2 \dots \cap \mathbb{X}_N \cap \mathbb{C}) \notag
\end{align}
where the last equation follows directly from the inclusion--exclusion principle (the addition law of set theory). The proof is complete by recalling that all measure functions with set intersection arguments in the last equation can be replaced by mutual information functions according to the definition of mutual information in \eref{eq_2}.
\subsection{Second Expansion: Chain Rule of Information}

The second expansion of mutual information is based on the \textit{chain rule of information} \cite{cover:91}:
\begin{align} 
\label{eq_10}
I(\mathbf{X};C) = \sum_{i=1}^{N} I(X_i;C|X_{i-1},\dots,X_1)
\end{align}
The chain rule of information leaves the choice of ordering quite flexible. For example, the right side can be written in the order $(X_1,X_2,\dots,X_N)$ or $(X_N,X_{N-1},\dots,X_1)$. In general, it can be expanded over the $N!$ different permutations of the feature set $\{X_1,\dots,X_N\}$. Taking the sum over all possible expansions yields
\begin{align} 
\label{eq_10.5}
(N!)&I(\mathbf{X};C) = (N{-}1)! \sum_{i=1}^{N} I(X_i;C) \\ 
 & + (N{-}2)! \sum_{i_1=1}^{N} \sum_{i_2{\in}\{1,\dots,N\}\backslash \{i_1\}} I(X_{i_2};C|X_{i_1}) \notag \\
 & + \cdots + (N{-}1)! \sum_{i=1}^{N} I(X_{i};C|\{X_1,\dots,X_N\}_{\backslash{X_{i}}}) \notag
\end{align}
Dividing both sides by $2(N{-}1)!$ and using the identity $I(X_{i_1};C|X_{i_2})= I(X_{i_1};C)-I(X_{i_1};X_{i_2};C)$ to replace the $I(X_{i_1};C|X_{i_2})$ terms, our second expansion can be expressed as
\begin{align} 
\label{eq_11}
\frac{N}{2}& I( \mathbf{X};C) = \sum_{i=1}^{N} I(X_{i};C) \\
& -\frac{1}{N-1} \sum_{i_1=1}^{N-1} \sum_{i_2=i_1+1}^{N} I(X_{i_1};X_{i_2};C) \notag \\
 &+ \dots + \frac{1}{2} \sum_{i=1}^{N} I(X_{i};C|\{X_1,\dots,X_N\}_{\backslash{X_{i}}}) \notag
 \end{align}
Ignoring the multiplicative constant $N/2$ on the left side of \eref{eq_11}, the right side can be seen as a series expansion of mutual information (up to a known constant factor).
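The synergy example of Figure \ref{fig1} also makes the interaction terms of these expansions tangible. The following minimal sketch (illustrative Python, using a noiseless XOR relation as an extreme version of the figure's two-feature synergy) evaluates the three-way interaction of the first expansion and verifies the $N{=}2$ case of the multi-way expansion exactly:

\begin{verbatim}
import numpy as np

def H(p):
    """Entropy in bits of a probability table."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def I2(pab):
    """I(A;B) from a joint table p[a, b]."""
    return H(pab.sum(1)) + H(pab.sum(0)) - H(pab)

# x1, x2 uniform bits, c = x1 XOR x2; build p[x1, x2, c].
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for b in (0, 1):
        p[a, b, a ^ b] = 0.25

I_x1_c  = I2(p.sum(axis=1))     # I(X1;C)    = 0
I_x2_c  = I2(p.sum(axis=0))     # I(X2;C)    = 0
I_x12_c = I2(p.reshape(4, 2))   # I(X1,X2;C) = 1 bit = H(C)

# Three-way interaction (first expansion): negative value = synergy.
I_x1_x2_c = I_x1_c + I_x2_c - I_x12_c     # = -1
# The multi-way expansion with N = 2 recovers the joint term exactly.
assert abs(I_x1_c + I_x2_c - I_x1_x2_c - I_x12_c) < 1e-12
\end{verbatim}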
\subsection{Truncation of the Expansions}
In both of the proposed expansions \eref{eq_8} and \eref{eq_11}, mutual information terms with more than two features represent higher-order interaction properties. Neglecting the higher-order terms yields the so-called truncated approximation of the mutual information function. If we ignore the constant coefficient in \eref{eq_11}, the truncated forms of the suggested expansions can be written as:
\begin{align} 
\label{eq_12}
D_1 = & \sum_{i=1}^{N} I(X_{i};C) - \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} I(X_{i};X_{j};C)\notag \\ 
D_2 = & \sum_{i=1}^{N} I(X_{i};C) -\frac{1}{N-1} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} I(X_{i};X_{j};C) 
\end{align}
where $D_1$ is the truncated approximation of \eref{eq_8} and $D_2$ that of \eref{eq_11}. Interestingly, despite the very similar structure of the expressions in \eref{eq_12}, they have intrinsically different behaviors. This difference seems to be rooted in the different functional forms they employ to approximate the underlying high-order pdf with lower-order distributions (i.e., how they combine these lower-order terms). For instance, the functional form that MIFS employs to approximate $Pr(\mathbf{X})$ is shown in \eref{kirk}. While $D_1$ is not necessarily positive, $D_2$ is guaranteed to be a nonnegative approximation since all terms in \eref{eq_10.5} are nonnegative. However, $D_2$ may severely underestimate the mutual information value since it may violate the fact that \eref{eq_1} is always greater than or equal to $\max_{i} {I(X_i;C)}$.
\subsubsection{JMI, mRMR \& MIFS Criteria}
Several known criteria, including Joint Mutual Information (JMI) \cite{meyer:06}, minimal Redundancy Maximal Relevance (mRMR) \cite{peng:05} and Mutual Information Feature Selection (MIFS) \cite{battiti:94}, can immediately be derived from $D_1$ and $D_2$.

Using the identity $I(X_i;X_j;C) = I(X_i;C)+I(X_j;C)-I(X_i,X_j;C)$ in $D_2$ reveals that $D_2$ is equivalent to JMI up to the constant factor $\frac{1}{N-1}$:
\begin{align} 
\label{jmi}
\text{JMI} \!= (N{-}1)D_2 = \sum_{i=1}^{N-1}\! \sum_{j=i+1}^{N} \!I(X_{i},X_{j};C) 
\end{align}

Using $I(X_{i};X_{j};C) =I(X_{i};X_{j}) -I(X_{i};X_{j}|C)$ and ignoring the terms containing more than two variables, i.e., $I(X_{i};X_{j}|C)$, in the second approximation $D_2$, one may immediately recognize the popular score function
\begin{align} 
\label{eq_13}
\text{mRMR} \!= \!\sum_{i=1}^{N} I(X_{i};C)\! -\!\frac{1}{N-1}\! \sum_{i=1}^{N-1}\! \sum_{j=i+1}^{N} \!I(X_{i};X_{j}) 
\end{align}
introduced by Peng et al. in \cite{peng:05}. That is, mRMR is a truncated approximation of mutual information and not a heuristic approximation as suggested in \cite{brown:12}. 

The same line of reasoning as for mRMR can be applied to $D_1$ to obtain MIFS with $\beta = 1$:
\begin{align} 
\label{eq_13.5}
\text{MIFS} \!= \!\sum_{i=1}^{N} I(X_{i};C)\! - \sum_{i=1}^{N-1}\! \sum_{j=i+1}^{N} \!I(X_{i};X_{j}) 
\end{align} 

\textbf{Observation}: A constant feature is a potential danger for the above measures. While adding an informative but correlated feature may reduce the score value (since $I(X_i;X_j|C)-I(X_i;X_j)$ can be negative), adding a non-informative constant feature $Z$ to a feature set does not reduce its score value, since both the $I(Z;C)$ and $I(Z;X_i;C)$ terms are zero. That is, constant features may be preferred over informative but correlated features. Therefore, it is essential to remove constant features in a preprocessing step before using the above criteria for feature selection.
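This observation is easy to demonstrate on synthetic data. The sketch below (illustrative Python with a plug-in mutual information estimate; the data-generating choices are arbitrary) scores two candidate sets with the mRMR criterion \eref{eq_13}: an informative-but-correlated pair $\{X_1,X_2\}$, and $\{X_1,Z\}$ with $Z$ constant. The useless constant receives the higher set score:

\begin{verbatim}
import numpy as np
from itertools import combinations

def mi(a, b):
    """Plug-in estimate of I(A;B) in bits for discrete sequences."""
    va, vb = np.unique(a), np.unique(b)
    p = np.array([[np.mean((a == x) & (b == y)) for y in vb] for x in va])
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (pa * pb)[nz])))

def mrmr(feats, c):
    """mRMR set score: sum_i I(Xi;C) - 1/(N-1) sum_{i<j} I(Xi;Xj)."""
    rel = sum(mi(f, c) for f in feats)
    red = sum(mi(fi, fj) for fi, fj in combinations(feats, 2))
    return rel - red / (len(feats) - 1)

rng = np.random.default_rng(0)
c  = rng.integers(0, 2, 5000)
x1 = c ^ (rng.random(5000) < 0.10)   # informative feature
x2 = x1 ^ (rng.random(5000) < 0.05)  # informative, but correlated with x1
z  = np.zeros(5000, dtype=int)       # constant, carries no information

print(mrmr([x1, x2], c))  # redundancy penalty: roughly 0.23
print(mrmr([x1, z], c))   # no penalty at all: roughly 0.53
\end{verbatim}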
\subsubsection{Implicitly Assumed Distribution}
A natural question arising in this context with respect to the proposed truncated approximations is: under what probabilistic assumptions do the proposed approximations become valid mutual information functions? That is, what structure should a joint pdf admit to yield mutual information in the form of $D_1$ or $D_2$?

For instance, if we assume features are mutually and class conditionally independent, i.e., $Pr(\mathbf{X}) = \prod_{i=1}^{N}Pr(X_i)$ and $Pr(\mathbf{X},C) = Pr(C)\prod_{i=1}^{N}Pr(X_i|C)$, then it is easy to verify that mutual information has the form of Max-Relevance introduced in \eref{eq_5}. These two assumptions define the adopted \textit{independence-map} of $Pr(\mathbf{X},C)$, where the independence-map of a joint probability distribution is defined as follows.

\textbf{Definition 1}: \textit{An independence-map (i-map) is a look-up table or a set of rules that denotes all the conditional and unconditional independencies between random variables. Moreover, an i-map is consistent if it leads to a valid factorized probability distribution}.

That is, given a consistent i-map, a high-order joint probability distribution is approximated with a product of low-order pdfs, and the obtained approximation is a valid pdf itself (e.g., $\prod_{i=1}^{N}Pr(X_i)$ is an approximation of the high-order pdf $Pr(\mathbf{X})$ and it is also a valid probability distribution).

The question regarding the implicit consistent i-map that MIFS adopts has been investigated in \cite{balagani:10}. However, the assumption set (i-map) suggested in their work is inconsistent and leads to the incorrect conclusion that MIFS upper bounds the Bayesian classification error via the inequality \eref{eq_4}. As we show in the following theorem, unlike the Max-Relevance case, there is no i-map that can produce mutual information in the forms of mRMR or MIFS (ignoring the trivial solution that reduces mRMR or MIFS to Max-Relevance).

\textbf{Theorem 1.} \textit{Ignoring the trivial solution, i.e., the i-map indicating that random variables are mutually and class conditionally independent, there is no consistent i-map that can produce mutual information functions in the forms of mRMR \eref{eq_13} or MIFS \eref{eq_13.5} for an arbitrary number of features}.

\textbf{Proof}: The proof is by contradiction. Suppose there is a consistent i-map whose corresponding joint pdf $\hat{Pr}(\mathbf{X},C)$ (the approximation of $Pr(\mathbf{X},C)$) can generate mutual information in the forms of \eref{eq_13} or \eref{eq_13.5}. That is, if this i-map is adopted, by replacing $\hat{Pr}(\mathbf{X},C)$ in \eref{eq_1} we get mRMR or MIFS. This implies that mRMR and MIFS are \textit{always} valid set measures for all datasets, regardless of their true underlying joint probability distributions. Now, if we show (by any example) that they are not valid mutual information measures, i.e., they are not always nonnegative and monotonic, then we have contradicted our assumption that $\hat{Pr}(\mathbf{X},C)$ exists and is a valid pdf. It is not difficult to construct an example in which mRMR or MIFS takes negative values. Consider the case where the features are independent of the class label, $I(X_i;C)=0$, while they have nonzero dependencies among themselves, $I(X_i;X_j)\neq0$. In this case, both mRMR and MIFS generate negative values, which is not allowed for a valid set measure. This contradicts our assumption that they are generated by a valid distribution, so we are forced to conclude that there is no consistent i-map that results in mutual information in the mRMR or MIFS forms.$\blacksquare$
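For concreteness, here is a hypothetical numerical instance of the counterexample: let $X_1$ be uniform on $\{0,1\}$, let $X_2 = X_1$, and let $C$ be independent of both features. Then $I(X_1;C)=I(X_2;C)=0$ while $I(X_1;X_2)=1$ bit, so both \eref{eq_13} and \eref{eq_13.5} evaluate to $0+0-1=-1$, a negative value that no valid measure can produce.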
The same line of reasoning can be used to show that $D_1$ and $D_2$ are also not valid measures.

However, despite the fact that no valid pdf can produce mutual information of those forms, it is still valid to ask for which low-order approximations of the underlying high-order pdfs mutual information reduces to a truncated approximation form. That is, we no longer restrict an approximation to be a valid distribution. Any functional form of low-order pdfs may be seen as an approximation of the high-order pdfs and may give rise to MIFS or mRMR. In the next subsection we reveal these assumptions for the MIFS criterion.
\subsubsection{MIFS Derivation from Kirkwood Approximation}

It is shown in \cite{killian:07} that truncation of the joint entropy $H(\mathbf{X})$ at the $r$th order is equivalent to approximating the full-dimensional pdf $Pr(\mathbf{X})$ using joint pdfs of dimensionality $r$ or smaller. This approximation is called the $r$th-order Kirkwood approximation. The truncation order that we choose partially determines our belief about the structure of the function with which we are going to estimate the exact $Pr(\mathbf{X})$.

The 2nd-order Kirkwood approximation of $Pr(\mathbf{X})$ can be written as follows \cite{killian:07}:
\begin{equation} 
\label{kirk}
\hat{Pr}(\mathbf{X})= \frac{\prod_{i=1}^{N-1}\prod_{j=i+1}^{N}Pr(X_i,X_j)}{\big[\prod_{i=1}^{N}Pr(X_i)\big]^{N-2}}
\end{equation} 
Now assume the following two assumptions hold:

\textbf{Assumption 1}: Features are class conditionally independent, that is, $Pr(\mathbf{X}|C)=\prod_{i=1}^{N}Pr(X_i|C)$.

\textbf{Assumption 2}: $Pr(\mathbf{X})$ is well approximated by the 2nd-order Kirkwood superposition approximation in \eref{kirk}.

Then, writing the definition of mutual information and applying the above assumptions yields the MIFS criterion:
\begin{align} 
\label{identity}
I(\mathbf{X};C) &= H(\mathbf{X})-H(\mathbf{X}|C) \\
		& \stackrel{(a)}{\approx} \sum_{i=1}^{N} H(X_i) - \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} I(X_i;X_j) -H(\mathbf{X}|C) \notag \\
		& \stackrel{(b)}{=} \sum_{i=1}^{N} I(X_i;C) - \sum_{i=1}^{N-1}\sum_{j=i+1}^{N} I(X_i;X_j) \notag
\end{align} 
In the above equation, (a) follows from the second assumption by substituting the 2nd-order Kirkwood approximation \eref{kirk} inside the logarithm of the entropy integral, and (b) is an immediate consequence of the first assumption.

The first assumption has already appeared in previous works \cite{brown:12}, \cite{balagani:10}. However, the second assumption is novel and, to the best of our knowledge, the connection between the Kirkwood approximation and the MIFS criterion has not been explored before.

It is worth mentioning that, in reality, both assumptions can be violated. Specifically, the Kirkwood approximation may not precisely reproduce the dependencies we might observe in real-world datasets. Moreover, it is important to remember that the Kirkwood approximation is not, in general, a valid probability distribution.
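The latter point is easy to check numerically. The following minimal sketch (a constructed toy distribution, purely for illustration) evaluates the 2nd-order Kirkwood approximation \eref{kirk} for three perfectly correlated fair bits and shows that it does not even normalize to one:

\begin{verbatim}
import numpy as np
from itertools import combinations

def kirkwood2(joint):
    """2nd-order Kirkwood approximation of a joint pmf p[x1, ..., xn]."""
    n = joint.ndim
    approx = np.ones_like(joint)
    for i, j in combinations(range(n), 2):
        pij = joint.sum(axis=tuple(k for k in range(n) if k not in (i, j)))
        shape = [1] * n
        shape[i], shape[j] = joint.shape[i], joint.shape[j]
        approx = approx * pij.reshape(shape)            # pairwise marginals
    for i in range(n):
        pi = joint.sum(axis=tuple(k for k in range(n) if k != i))
        shape = [1] * n
        shape[i] = joint.shape[i]
        approx = approx / pi.reshape(shape) ** (n - 2)  # single marginals
    return approx

# Three perfectly correlated fair bits: mass 1/2 on (0,0,0) and (1,1,1).
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(kirkwood2(p).sum())   # 2.0 -- the "pmf" sums to 2, not 1
\end{verbatim}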
\vspace{-1mm}
\subsection{$\mathbf{D}_2$ Approximation}
From our experiments, which we omit because of space constraints, $D_2$ tends to underestimate the mutual information, while $D_1$ shows a large overestimation for independent features and a large underestimation (even becoming negative) in the presence of dependent features. In general, $D_2$ is more robust than $D_1$. The same behavior can be observed for mRMR, which is derived from $D_2$, and MIFS, which is derived from $D_1$. Previous work also arrived at the same conclusion and reported that mRMR performs better and more robustly than MIFS, especially when the feature set is large. Therefore, in the following sections we use $D_2$ as the truncated approximation. For simplicity, its subscript is dropped and it is rewritten as follows:
\begin{align} 
\label{eq_14}
D(\{X_1,\dots,X_N\}) = &\sum_{i=1}^{N} I(X_{i};C) \\
- & \frac{1}{N-1} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} I(X_{i};X_{j};C)\notag
\end{align}
Note that although $D$ in \eref{eq_14} is no longer a formal set measure, it can still be seen as a score function for sets. It is noteworthy, however, that unlike formal measures, the suggested approximations are no longer monotonic, where monotonicity merely means that a subset of features should not score higher than any larger set that contains that very subset. Therefore, as explained in \cite{narendra:77}, branch-and-bound search strategies cannot be applied to them.

A very similar approach has been applied in \cite{brown:09} (using the $D_1$ approximation) to derive several known criteria such as MIFS \cite{battiti:94} and mRMR \cite{peng:05}. However, in \cite{brown:09}, and in most other previous works, the set score function in \eref{eq_14} is immediately reduced to an individual-feature score function by fixing $N{-}1$ features in the feature set. This allows a greedy selection method to be run over the feature set, which is essentially a one-feature-at-a-time selection strategy. It is clearly a naive approximation of the NP-hard optimal search and may perform poorly under some conditions. In the following, we investigate a convex approximation of the binary objective function appearing in feature selection, inspired by the Goemans--Williamson maximum-cut approximation approach \cite{goemans:95}.

\section{Search Strategies}
\label{search}
Given a measure function\footnote{By some abuse of terminology, we refer to any set function in this section as a measure, no matter whether it satisfies the formal measure properties.} $D$, the Subset Selection Problem (SSP) can be defined as follows:

\textbf{Definition 2}: Given $N$ features $X_i$ and a dependent variable $C$, select a subset of $P \! \ll \! N$ features that maximizes the measure function. Here it is assumed that the cardinality $P$ of the optimal feature subset is known.

In practice, the exact value of $P$ can be obtained by evaluating subsets for different values of the cardinality $P$ with the final induction algorithm. Note that this is intrinsically different from wrapper methods: while in wrapper methods $2^N$ subsets have to be tested, here at most $N$ runs of the learning algorithm are needed to evaluate all possible values of $P$.
A search strategy is an algorithm that tries to find, in the feature subset space with $2^N$ members\footnote{Given a $P$, the size of the feature subset space reduces to ${N\choose P}$.}, a subset that optimizes the measure function. The wide range of search strategies proposed in the literature can be divided into three categories: 1- exponential-complexity methods, including exhaustive search \cite{kohavi:96} and branch-and-bound algorithms \cite{narendra:77}; 2- sequential selection strategies, with two very popular members, the forward selection and backward elimination methods; 3- stochastic methods such as simulated annealing and genetic algorithms \cite{vafai:93}, \cite{doak:92}.

Here, we introduce a fourth class of search strategies, based on a convex relaxation of the 0-1 integer program, and explore its approximation ratio by establishing a link between SSP and an instance of the maximum-cut problem in graph theory. In the following, we briefly discuss the two popular sequential search methods and continue with the proposed solution: a close-to-optimal search algorithm of polynomial-time complexity and its evaluation on different datasets.
\vspace{-1mm}
\subsection{Convex Based Search}
\label{sdp}
The forward selection (FS) algorithm selects a set $\mathbb{S}$ of size $P$ iteratively as follows:
\begin{enumerate}
\item Initialize $\mathbb{S}_0 = \emptyset$. 
\item In each iteration $i$, select the feature $X_m$ maximizing $D(\mathbb{S}_{i{-}1}\cup \{X_m\})$, and set $\mathbb{S}_{i} = \mathbb{S}_{i{-}1} \cup \{X_m\}$. 
\item Output $\mathbb{S}_P$. 
\end{enumerate}
Similarly, backward elimination (BE) can be described as: 
\begin{enumerate}
\item Start with the full set of features $\mathbb{S}_N$. 
\item Iteratively remove the variable $X_m$ maximizing $D(\mathbb{S}_i\backslash{X_m})$, and set $\mathbb{S}_{i-1} =\mathbb{S}_i\backslash{X_m}$, where removing $X$ from $\mathbb{S}$ is denoted by $\mathbb{S}\backslash{X}$.
\item Output $\mathbb{S}_P$.
\end{enumerate}
An experimental comparison of several variants of these two algorithms has been conducted in \cite{aha:94}. From an information-theoretic standpoint, the main disadvantage of the forward selection method is that it can only evaluate the utility of a single feature in the limited context of the previously selected features. The artificial binary classification problem in Figure \ref{fig1} illustrates this issue: since the information content of each feature ($x$ and $y$) is almost zero, it is highly probable that the forward selection method fails to select them in the presence of other, more informative features.

Contrary to forward selection, backward elimination can evaluate the contribution of a given feature in the context of all other features. Perhaps this is why it has frequently been reported to show performance superior to forward selection. However, its overemphasis on feature interactions is a double-edged sword and may lead to a sub-optimal solution, as the sketch and example below illustrate.
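For reference, both greedy procedures can be summarized in a few lines. The following sketch is illustrative Python (not the paper's actual implementation); it assumes a set score function \texttt{D(subset)}, e.g., \eref{eq_14}, is available:

\begin{verbatim}
def forward_selection(D, features, P):
    """Greedy FS: grow the set one feature at a time, maximizing D."""
    selected = []
    while len(selected) < P:
        best = max((f for f in features if f not in selected),
                   key=lambda f: D(selected + [f]))
        selected.append(best)
    return selected

def backward_elimination(D, features, P):
    """Greedy BE: shrink from the full set, keeping D as large as possible."""
    selected = list(features)
    while len(selected) > P:
        drop = max(selected,
                   key=lambda f: D([g for g in selected if g != f]))
        selected.remove(drop)
    return selected
\end{verbatim}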
\textbf{Example 3}: Imagine a four-dimensional feature selection problem where $X_1$ and $X_2$ are class conditionally and mutually independent of $X_3$ and $X_4$, i.e., $Pr(X_1,X_2,X_3,X_4)=Pr(X_1,X_2)Pr(X_3,X_4)$ and $Pr(X_1,X_2,X_3,X_4|C)=Pr(X_1,X_2|C)Pr(X_3,X_4|C)$. Suppose $I(X_1;C)$ and $I(X_2;C)$ are equal to zero while their interaction is informative, that is, $I(X_1,X_2;C) = 0.4$. Moreover, assume $I(X_3;C) = 0.2$, $I(X_4;C)=0.25$ and $I(X_3,X_4;C) = 0.45$. The goal is to select only two features out of four. Here, backward elimination will select $\{X_1,X_2\}$ rather than the optimal subset $\{X_3,X_4\}$ because removing either $X_1$ or $X_2$ reduces the mutual information value $I(X_1,\dots,X_4;C)$ by $0.4$, while eliminating $X_3$ or $X_4$ reduces it by at most $0.25$. One may draw the conclusion that backward elimination tends to sacrifice individually informative features in favor of merely cooperatively informative features. As a remedy, several hybrid forward-backward sequential search methods have been proposed. However, they all fail in one way or another and, more importantly, cannot guarantee the quality of the solution.

Alternatively, a sequential search method can be seen as an approximation of the combinatorial subset selection problem. To propose a new approximation method, the underlying combinatorial problem has to be studied. To this end, we may formulate the SSP defined at the beginning of this section as:
\begin{align}
\label{eq_15}
 &\max_{\mathbf{x}} \, {\mathbf{x}^T\mathbf{Q}\mathbf{x}} \notag \\
 & \sum_{i=1}^{N} x_i = P \\ 
 & x_i \in \{0,1\} \text { for } i=1,\dots,N \notag
\end{align}
where $\mathbf{Q}$ is a symmetric mutual information matrix constructed from the mutual information terms in \eref{eq_14}:
\begin{equation} 
\label{eq_16}
\mathbf{Q} = \begin{pmatrix}
 I(X_1;C) & \cdots & -\frac{\lambda}{2}I(X_1;X_N;C)\\
 -\frac{\lambda}{2}I(X_1;X_2;C) & \cdots & -\frac{\lambda}{2}I(X_2;X_N;C)\\
 \vdots & \ddots & \vdots \\
 -\frac{\lambda}{2}I(X_1;X_N;C) & \cdots & I(X_N;C)\\
\end{pmatrix}
\end{equation}
with $\lambda =\frac{1}{P-1}$, and $\mathbf{x} = [x_1,\dots,x_N]$ is a binary vector whose entries $x_i$ indicate the presence of the corresponding features $X_i$ in the feature subset. It is straightforward to verify that for any binary vector $\mathbf{x}$, the objective function in \eref{eq_15} is equal to the score function $D(\mathbb{X}_{nz})$, where $\mathbb{X}_{nz} =\{X_i | x_i =1;i=1,\dots,N \}$. Note that for mRMR, the $I(X_i;X_j;C)$ terms have to be replaced with $I(X_i;X_j)$.

The (0,1)-quadratic programming problem \eref{eq_15} has attracted a great deal of theoretical study because of its importance in combinatorial problems \cite[and references therein]{poljak:95}. This problem can simply be transformed into a $({-}1,1)$-quadratic programming problem,
\begin{align} 
\label{eq_17}
 &\max_{\mathbf{y}} {\frac{1}{4}\mathbf{y}^T\mathbf{Q}\mathbf{y} + \frac{1}{2}\mathbf{y}^T\mathbf{Q}\mathbf{e} + c} \notag \\
 & \sum_{i=1}^{N} y_i = 2P-N \\ 
 & y_i \in \{-1,1\} \text { for } i=1,\dots,N \notag
\end{align}
via the transformation $\mathbf{y} = 2\mathbf{x}-\mathbf{e}$, where $\mathbf{e}$ is the all-ones vector. The constant $c = \frac{1}{4}\mathbf{e}^T\mathbf{Q}\mathbf{e}$ in the above formulation can be ignored since it is independent of $\mathbf{y}$.
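Before continuing with the relaxation, the equivalence between \eref{eq_15} and the set score can be sanity-checked numerically. The sketch below (illustrative Python; the relevance and interaction values are random stand-ins for estimated $I(X_i;C)$ and $I(X_i;X_j;C)$ terms) builds $\mathbf{Q}$ as in \eref{eq_16} and confirms that $\mathbf{x}^T\mathbf{Q}\mathbf{x}$ matches $D$ with the coefficient $\lambda = \frac{1}{P-1}$ on a subset of cardinality $P$:

\begin{verbatim}
import numpy as np
from itertools import combinations

def build_Q(relevance, interaction, P):
    """Diagonal I(Xi;C); off-diagonal -lambda/2 * I(Xi;Xj;C)."""
    lam = 1.0 / (P - 1)
    Q = np.diag(relevance).astype(float)
    for i, j in combinations(range(len(relevance)), 2):
        Q[i, j] = Q[j, i] = -0.5 * lam * interaction[i, j]
    return Q

def D_score(subset, relevance, interaction):
    """Set score with the 1/(|subset|-1) redundancy coefficient."""
    rel = sum(relevance[i] for i in subset)
    red = sum(interaction[i, j] for i, j in combinations(sorted(subset), 2))
    return rel - red / (len(subset) - 1)

rng = np.random.default_rng(0)
N, P = 6, 3
relevance = rng.random(N)                        # stand-ins for I(Xi;C)
interaction = rng.random((N, N))
interaction = (interaction + interaction.T) / 2  # symmetric interaction terms

Q = build_Q(relevance, interaction, P)
subset = [0, 2, 5]                               # any subset of size P
x = np.zeros(N); x[subset] = 1
print(x @ Q @ x, D_score(subset, relevance, interaction))  # identical
\end{verbatim}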
In order to homogenize the objective function in \eref{eq_17}, define an $(N{+}1)\!\times\!(N{+}1)$ matrix $\mathbf{Q}^u$ by adding a 0-th row and column to $\mathbf{Q}$ so that:
\begin{align} 
\label{eq_18}
\mathbf{Q}^u = \begin{pmatrix}
 0 & \mathbf{e}^T\mathbf{Q}\\
 \mathbf{Q}^T\mathbf{e} & \mathbf{Q}
\end{pmatrix} 
\end{align}
Ignoring the constant factor $\frac{1}{4}$ in \eref{eq_17}, the equivalent homogeneous form of \eref{eq_15} can be written as:
\begin{align} 
\label{eq_19}
 & S_{\text{SSP}} = \max_{\mathbf{y}} {\mathbf{y}^T\mathbf{Q}^u\mathbf{y}} \notag \\
 \text{$\langle$SSP$\rangle$}\quad \quad & \sum_{i=1}^{N} y_iy_0 = 2P-N\\ 
 & y_i \in \{-1,1\} \text { for } i=0,\dots,N \notag
\end{align}
Note that $\mathbf{y}$ is now an $(N{+}1)$-dimensional vector with the first element $y_0 = \pm 1$ acting as a reference variable. Given the solution $\mathbf{y}$ of the problem above, the optimal feature subset is obtained as $\mathbb{X}_{op} =\{X_i|y_i=y_0\}$. 

The optimization problem in \eref{eq_19} can be seen as an instance of the maximum-cut problem \cite{goemans:95} with an additional cardinality constraint, also known as the k-heaviest subgraph or maximum partitioning graph problem. The two main approaches to solving this combinatorial problem are either to use the linear programming relaxation obtained by linearizing the product of two binary variables \cite{frieze:83}, or the semidefinite programming (SDP) relaxation suggested in \cite{goemans:95}. The SDP relaxation has proved to perform exceptionally well and achieves an approximation ratio of 0.878 for the original maximum-cut problem. The SDP relaxation of \eref{eq_19} is:
\begin{align} 
\label{eq_20}
 &S_{\text{SDP}} = \max_{\mathbf{Y}} {\text{tr}\{\mathbf{Q}^u\mathbf{Y}\}} \notag \\
 & \sum_{i,j=1}^{N} Y_{ij} = (2P-N)^2 \notag \\
\text{$\langle$SDP$\rangle$}\quad \quad & \sum_{i=1}^{N} Y_{i0} = (2P-N) \\
 & \text{diag}(\mathbf{Y}) =\mathbf{e} \notag \\
 & \mathbf{Y}\succeq 0 \notag
\end{align}
where $\mathbf{Y}$ is an unknown $(N+1)\times(N+1)$ positive semidefinite matrix and $\text{tr}\{\mathbf{Y}\}$ denotes its trace. Obviously, any feasible solution $\mathbf{y}$ for $\langle$SSP$\rangle$ is also feasible for its SDP relaxation via $\mathbf{Y} = \mathbf{y}\mathbf{y}^T$. Furthermore, it is not difficult to see that any rank-one solution, $\text{rank}(\mathbf{Y})=1$, of $\langle$SDP$\rangle$ is a solution of $\langle$SSP$\rangle$. 

The $\langle$SDP$\rangle$ problem can be solved to within an additive error $\gamma$ of the optimum by, for example, interior-point methods \cite{boyd:04}, whose computational complexity is polynomial in the size of the input and in $\log(\frac{1}{\gamma})$. However, since its solution is not necessarily a rank-one matrix, a few more steps are needed to obtain a feasible solution for $\langle$SSP$\rangle$. The following three steps summarize the approximation algorithm for $\langle$SSP$\rangle$, which will henceforth be referred to as the convex based relaxation approximation (COBRA) algorithm.

\vspace{2mm}
\noindent\textbf{COBRA Algorithm}:
\vspace{1.5mm}
\begin{enumerate}
\item SDP: Solve $\langle$SDP$\rangle$ and obtain $\mathbf{Y}_{sdp}$.
Repeat the following two steps many times and output the best solution.

\item Randomized rounding: Sample $\mathbf{u}$ from the multivariate normal distribution $\mathcal{N}(\mathbf{0},\mathbf{R})$ with zero mean and covariance matrix $\mathbf{R} = \mathbf{Y}_{sdp}$, and construct $\hat{\mathbf{x}} = \text{sign}(\mathbf{u})$. Select $\mathbb{X} = \{X_i | \hat{x}_i = \hat{x}_0\}$.

\item Size adjusting: Using the greedy forward or backward algorithm, adjust the cardinality of $\mathbb{X}$ to $P$. 
\end{enumerate}

The randomized rounding step is a standard procedure for producing a binary solution from the real-valued solution of $\langle$SDP$\rangle$ and is widely used for designing and analyzing approximation algorithms \cite{ragh:88}. The third step constructs a feasible solution that satisfies the cardinality constraint. It can generally be skipped, since in feature selection problems exact satisfaction of the cardinality constraint is not required.

We use the SDP-NAL solver \cite{zhao:10} with the Yalmip interface \cite{lof:04} to implement this algorithm in Matlab. SDP-NAL uses the Newton-CG augmented Lagrangian method to efficiently solve SDP problems. It can solve large-scale problems ($N$ up to a few thousand) in an hour on a PC with an Intel Core i7 CPU. Even more efficient algorithms for low-rank SDPs have been suggested, with the claim that they can solve problems of size up to $N{=}30000$ in a reasonable amount of time \cite{grippo:12}. Here we only use SDP-NAL for our experiments.
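To make steps 2 and 3 concrete, the following sketch (illustrative Python, not the Matlab implementation used in our experiments) performs the randomized rounding and greedy size adjustment, assuming the SDP solution $\mathbf{Y}_{sdp}$ is already available from an external solver and that a set score function \texttt{D(subset)} such as \eref{eq_14} is given:

\begin{verbatim}
import numpy as np

def cobra_round(Y_sdp, D, P, trials=100, seed=0):
    """Steps 2-3 of COBRA: randomized rounding of the (N+1)x(N+1) SDP
    solution, followed by greedy size adjustment to cardinality P."""
    rng = np.random.default_rng(seed)
    n = Y_sdp.shape[0] - 1
    best, best_score = None, -np.inf
    for _ in range(trials):
        u = rng.multivariate_normal(np.zeros(n + 1), Y_sdp)
        xhat = np.sign(u)
        # Keep the features that agree with the reference variable y_0.
        subset = [i for i in range(n) if xhat[i + 1] == xhat[0]]
        while len(subset) < P:     # grow greedily if too small
            rest = [i for i in range(n) if i not in subset]
            subset.append(max(rest, key=lambda i: D(subset + [i])))
        while len(subset) > P:     # shrink greedily if too large
            drop = max(subset, key=lambda i: D([j for j in subset if j != i]))
            subset.remove(drop)
        if D(subset) > best_score:
            best, best_score = subset, D(subset)
    return best
\end{verbatim}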
\vspace{-3mm}
\subsection{Approximation Analysis}
\begin{center}
 \begin{table*}
 \addtolength{\tabcolsep}{5mm}
 \centering
 \begin{tabular}{l|| c c c c c c c} 
\hline\rule[-0.0ex]{0ex}{2.5ex}
\bf{Values of $P$} & $N/2$ & $N/3$ & $N/4$ & $N/6$ &$N/8$& $N/10$ &$N/20$
\\
\hline 
\hline
\ \bf{BE} & 0.4 & 0.25 & 0.16 & 0.10 & 0.071 & 0.055 & 0.026 \rule[-0.0ex]{0ex}{2.5ex} \\

\ \bf{COBRA} & 0.48 & 0.33 & 0.24 &0.13 & 0.082 & 0.056 & 0.015 \\ 
\hline
\end{tabular}
\caption{Approximation ratios of BE and COBRA for different $N/P$ values.}
\vspace{-2mm}
\label{app-ratio}
\end{table*} 
\end{center}
\vspace{-3mm}
In order to gain more insight into the quality of a measure function, it is essential to be able to examine it directly. However, since estimating the exact mutual information value in real data is not feasible, it is not possible to evaluate the measure function directly. Its quality can only be examined indirectly through the final classification performance (or other measurable criteria). However, the quality of the measure function is not the only contributor to the classification rate. Since SSP is an NP-hard problem, the search strategy can only find a locally optimal solution. That is, besides the quality of the measure function, the inaccuracy of the search strategy also contributes to the final classification error. Thus, in order to draw a conclusion concerning the quality of a measure function, it is essential to have insight into the accuracy of the search strategy in use. In this section, we compare the accuracy of the proposed method with the traditional backward elimination approach.

A standard approach to investigating the accuracy of an optimization algorithm is to analyze how close it gets to the optimal solution. Unfortunately, feature selection is an NP-hard problem, and thus obtaining the optimal solution to use as a reference is only feasible for small problems. For larger problems, one instead relies on provable guarantees on the solution quality, such as the algorithm's approximation ratio. Given a maximization problem, an algorithm is called a $\rho$-approximation algorithm if the approximate solution is at least $\rho$ times the optimal value; that is, in our case $\rho S_{\text{SSP}}\le S_{\text{COBRA}}$, where $S_{\text{COBRA}}=D(\mathbb{X}_{\text{COBRA}})$. The factor $\rho$ is usually referred to as the approximation ratio in the literature. 

The approximation ratios of BE and COBRA can be found by linking the SSP to the $k$-heaviest subgraph problem (k-HSP) in graph theory. k-HSP is an instance of the max-cut problem with a cardinality constraint on the selected subset: determine a subset $S$ of $k$ vertices such that the weight of the subgraph induced by $S$ is maximized \cite{sriva:98}. From the definition of k-HSP, it is clear that SSP with the criterion \eref{eq_14} is equivalent to the $P$-heaviest subgraph problem, since it selects the heaviest subset of features with cardinality $P$, where the heaviness of a set is the score assigned to it by $D$.

An SDP-based algorithm for k-HSP has been suggested in \cite{sriva:98} and its approximation ratio has been analyzed. Those results are directly applicable to COBRA, since both algorithms use the same randomization method (step 2 of COBRA) and the randomization is the main ingredient of the approximation analysis. The approximation ratio of BE for k-HSP has been investigated in \cite{asahiro:00}. That analysis is deterministic and its results are also valid for our case, i.e., using BE to maximize $D$. 

The approximation ratios of both algorithms for different values of $P$, as a function of $N$ (the total number of features), are listed in Table \ref{app-ratio} (values are calculated from the formulas in \cite{asahiro:00}). As can be seen, as $P$ becomes smaller, the approximation ratio approaches zero, yielding the trivial lower bound of 0 on the approximate solution. However, for larger values of $P$, the approximation ratio is nontrivial since it is bounded away from zero. For all cases shown in the table except the last one, COBRA gives a better guaranteed bound than BE. Thus, we may conclude that COBRA is more likely to achieve a better approximate solution than BE.

In the following section, we focus on comparing our search algorithm with the sequential search methods in conjunction with different measure functions, over different classifiers and datasets.

\section{Experiments}

The evaluation of a feature selection algorithm is an intrinsically difficult task since there is no direct way to evaluate the goodness of a \textit{selection process} in general. Thus, a selection algorithm is usually scored based on the performance of its output, i.e., the selected feature subset, in some specific classification (regression) system. This kind of evaluation can be referred to as goal-dependent evaluation. However, this method obviously cannot evaluate the generalization power of the selection process across different induction algorithms. To evaluate the generalization strength of a feature selection algorithm, we need a goal-independent evaluation. Thus, to evaluate the feature selection algorithms, we propose to compare them over different datasets with multiple classifiers.
This method leads to a more classifier-independent evaluation process.

Some properties of the eight datasets used in the experiments are listed in Table \ref{datasets}. All datasets are available from the UCI machine learning archive \cite{frank:10}, except the NCI data, which can be found on the website of Peng et al. \cite{peng:05}. These datasets have been widely used in previous feature selection studies \cite{peng:05}, \cite{ciarelli:10}. The goodness of each feature set is evaluated with five classifiers: Support Vector Machine (SVM), Random Forest (RF), Classification and Regression Tree (CART), Neural Network (NN) and Linear Discriminant Analysis (LDA). To derive the classification accuracies, 10-fold cross-validation is performed, except for the NCI, DBW and LNG datasets, where leave-one-out cross-validation is used.

As explained before, filter-based methods consist of two components: a measure function and a search strategy. The measure functions we use for our experiments are mRMR and JMI, defined in \eref{eq_13} and \eref{jmi}, respectively. To unambiguously refer to an algorithm, it is denoted by the measure function + the search method used in that algorithm, e.g., mRMR+FS. 
\vspace{-20mm}
\begin{center}
\begin{table*}
\addtolength{\tabcolsep}{1mm}
\centering
\begin{tabular}{l ||c c c c c c c c } 
\hline \rule[-0.0ex]{0ex}{2.5ex}
 \bf{Dataset Name} & Arrhythmia & NCI & DBWorld e-mails & CNAE-9 & Internet Adv. & Madelon & Lung Cancer & Dexter \\ 
 \bf{Mnemonic} & ARR & NCI & DBW & CNA & IAD & MAD &LNG &DEX \\ \hline\hline
 \# \bf{Features} & 278 & 9703 & 4702 & 856 & 1558 & 500 &56 &20000 \\ 
 \# \bf{Samples} & 370 & 60 & 64 & 1080 & 3279 & 2000 & 32 &300 \\ 
 \# \bf{Classes} & 2 & 9 & 2 & 9 & 2 & 2 &3 &2 \\ 
 \hline
\end{tabular}
\caption{Dataset descriptions}
\label{datasets}
\end{table*}
\end{center}

\begin{table}
\def\\[1.6mm]{\\[1.6mm]}
\framebox{
\parbox{80mm}{
{\normalsize 
Set $P$: $\mathbb{P}\!=\!\{P_1,\dots,P_L\}$.
 \begin{algorithmic}
\ForAll {$P$ in $\mathbb{P}$}
 \State Run the COBRA algorithm and output the solution $\mathbb{X}$.
 \State Derive the classifier error rate by applying K-fold cross-validation and save it in $CL(P)$.
\n \n\\EndFor \n\\end{algorithmic}\n Output: $ P_{opt} = \\underset{P}{\\operatorname{argmin}}\\, CL(P)$\n }}\n }\n \\vspace{-1mm}\n \\caption{ Estimating $P$ by searching over an admissible set that minimizes the classification error-rate.}\n \\label{algo}\n\\end{table} \n\n\\begin{center}\n \\begin{table*}[!ht]\n \\addtolength{\\tabcolsep}{3.5mm}\n \\hfill{}\n \\begin{tabular}{l|| c@{\\hspace{3pt}}c c@{\\hspace{3pt}}c c@{\\hspace{3pt}}c c@{\\hspace{3pt}}c c@{\\hspace{3pt}}c c } \n \\hline \\rule[-0.0ex]{0ex}{2.5ex} \n \\textbf{Classifiers} & \\multicolumn{2}{|c|}{\\bf{SVM}} & \\multicolumn{2}{|c|}{\\bf{LDA}} & \\multicolumn{2}{|c|}{\\bf{CART}} &\\multicolumn{2}{|c|}{\\bf{RF}} & \\multicolumn{2}{|c|}{\\bf{NN}} & \\bf{Average} \\\\ \\hline \\hline \n & \\multicolumn{11}{c}{\\bf{NCI Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (54) & 81.7 & (95) & 78.3 & (20) & 45.0 & (71)& 88.3 & (60) & 75.0 & \\textbf{73.67} \\\\ \n \\bf{mRMR+FS} & (32) & 78.3 \t & (11) & 68.3 & (2) & 45.0 & (12) & 83.3 & (99) & 70.0 & 69.00 \\\\ \n \\bf{mRMR+BE} & (26) & 76.6 & (11) & 68.3 & (2) & 45.0 & (13) & 85.0 & (31) & 71.7 & 69.33 \\\\ \n \n \\bf{JMI+COBRA} & (72)& 85.0 & (70)& 75.0 & (28)& 45.0 & (45)& 90.0 & (93)& 75.0 & 74.00 \\\\ \n \\bf{JMI+FS} & (27)& 75.0 & (17)& 68.3 & (82)& 45.0 & (17)& 86.6 & (78)& 70.0 & 69.00 \\\\ \n \\bf{JMI+BE} & (23)& 76.6 & (20)& 76.6 & (7)& 33.3 & (19)& 86.6 & (89)& 76.6 & 70.00 \\\\ \\hline\\hline\n \n & \\multicolumn{11}{c}{ \\bf{DBW Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (38)& 96.9 & (152)& 92.2 & (38)& 86.0 & (33)& 92.2 & (33)& 98.4 & \\textbf{93.12} \\\\ \n \\bf{mRMR+FS} & (31)& 93.7 & (4)& 89.0 & (4)& 86.0 & (7)& 90.6 & (9)& 92.2 & 90.31 \\\\ \n \\bf{mRMR+BE} & (110) & 93.7 & (6) & 89.0 & (4) & 82.8 & (29) & 92.2 & (9) & 92.2 & 90.00 \n \\\\ \n \\bf{JMI+COBRA} & (35) & 93.7 & (14) & 89.0 & (8) & 82.8 & (24) & 92.2 & (108) & 93.7 & 90.31 \\\\ \n \\bf{JMI+FS} & (23) &93.7 & (6) & 89.0 & (5) & 82.8 & (34) & 92.2 & (96) & 92.2 & 90.00 \\\\ \n \\bf{JMI+BE} & (24) & 93.7 & (6) & 89.0 & (5) & 82.8 & (23) & 92.2 & (149) & 92.2 & 90.00 \\\\ \\hline\\hline\n \n & \\multicolumn{11}{c}{ \\bf{CNA Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (200) & 94.0 & (183) & 92.7 & (63) & 75.0 & (183) & 90.8 & (187) & 92.0 & 88.91 \\\\ \n \\bf{mRMR+FS} & (149) & 90.6 & (142) & 90.4 & (7) & 70.2 & (138) & 87.7 & (78) & 85.5 & 84.88 \\\\ \n \\bf{mRMR+BE} & (199) & 94.0 & (165) & 92.5 & (47) & 75.0 & (176) & 90.8 & (84) & 92.2 & 88.90 \\\\ \n \n \\bf{JMI+COBRA} & (140) & 92.6 & (146) & 92.2 & (47) & 75.0 & (148) & 90.4 & (148) & 91.4 & 88.30 \\\\ \n \\bf{JMI+FS} & (150) & 92.7 & (142) & 92.1 & (48) & 75.3 & (148) & 90.7 & (145) & 91.3 & 88.40 \\\\ \n \\bf{JMI+BE} & (150) & 92.7 & (142) & 92.1 & (48) & 75.0 & (144) & 90.4 & (134) & 91.2 & 88.30 \\\\ \\hline\\hline\n \n & \\multicolumn{11}{c}{ \\bf{IAD Dataset}} \\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\\n \\bf{mRMR+COBRA} & (165) & 96.5 & (140) & 96.1 & (28) & 96.4 & (160) & 97.2 & (68) & 97.1 \t & 96.64 \\\\ \n \\bf{mRMR+FS} & (109) & 96.2 & (127) & 95.8 & (127) & 96.7 & (25) & 97.0 & (52) & 97.2\t & 96.58 \\\\ \n \\bf{mRMR+BE} & (22) & 96.3 & (163) & 95.9 & (121) & 96.1 & (109) & 97.2 & (148) & 97.4 \t & 96.58 \\\\ \n \n \\bf{JMI+COBRA} & (112) & 96.3 & (4) & 96.3 & (9) & 96.3 & (57) & 97.3 & (140) & 100& 97.24 \\\\ \n \\bf{JMI+FS} & (9) & 96.2 & (4) & 96.2 & (52) & 96.4 & (7) & 96.8 & (7) & 97.8 & 96.68 \\\\ \n \\bf{JMI+BE} & (4) & 96.6 & (17) & 95.8 & (79) & 96.3 & (13) & 96.5 
& (10) & 97.2 & 96.48 \\\\ \\hline\\hline\n \n & \\multicolumn{11}{c}{ \\bf{MAD Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (12) & 83.2 & (13) & 60.4 & (26) & 80.5 & (12) & 88.0 & (11) & 62.2 & \\textbf{74.81} \\\\ \n \\bf{mRMR+FS} & (32) & 55.3 & (5) & 55.5 & (12) & 58.2 & (49) & 57.3 & (5) & 52.7 & \\textbf{55.82} \\\\ \n \\bf{mRMR+BE} & (14) & 55.3 & (11) & 54.8 & (31) & 57.3 & (26) & 56.4 & (115) & 48.6 & 54.50 \\\\ \n \n \\bf{JMI+COBRA} & (13) & 82.5 & (12) & 60.7 & (40) & 80.7 & (13) & 87.6 & (4) & 61.1 & 74.54 \\\\ \n \\bf{JMI+FS} & (13) & 82.5 & (12) & 60.7 & (58) & 80.5 & (13) & 87.9 & (19) & 59.2 & 74.20 \\\\ \n \\bf{JMI+BE} & (13) & 82.5 & (12) & 60.7 & (58) & 80.5 & (13) & 87.3 & (20) & 60.1 & 74.25 \\\\ \\hline\\hline\n \n\t\t\t\t\t\t\t\t & \\multicolumn{11}{c}{ \\bf{LNG Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (23) & 75.0 & (28) & 96.9 & (13) & 71.8 & (28) & 68.7 & (27) & 71.8 & \\textbf{76.87} \\\\ \n \\bf{mRMR+FS} & (7) & 81.2 & (5) & 68.7 & (5) & 71.8 & (5) & 75.0 & (6) & 71.8 & 73.75 \\\\ \n \\bf{mRMR+BE} & (7) & 81.2 & (4) & 68.7 & (4) & 71.8 & (4) & 75.0 & (4) & 75.0 & 74.37 \\\\ \n \n \\bf{JMI+COBRA} & (7) & 78.1 & (6) & 71.8 & (5) & 71.8 & (5) & 75.0 & (5) & 68.7 & 73.12 \\\\ \n \\bf{JMI+FS} & (7) & 78.1 & (4) & 71.8 & (4) & 71.8 & (8) & 78.1 & (5) & 68.7 & 73.75 \\\\ \n \\bf{JMI+BE} & (7) & 78.1 & (6) & 71.8 & (5) & 71.8 & (6) & 78.1 & (6) & 71.8 & 74.37 \\\\ \\hline\\hline\n \n\t\t\t\t\t\t\t\t & \\multicolumn{11}{c}{ \\bf{ARR Dataset}} \t\\rule[-0.0ex]{0ex}{2.5ex}\t\t \\\\ \n \\bf{mRMR+COBRA} & (45) & 81.9 & (48) & 76.3 & (30) & 75.4 & (43) & 82.2 & (57) & 72.9 & 77.75 \\\\\n \\bf{mRMR+FS} & (34) & 81.3 & (43) & 76.1 & (7) & 78.3 & (34) & 81.3 & (5) & 75.7 & 78.56 \\\\\n \\bf{mRMR+BE} & (36) & 81.6 & (43) & 76.3 & (22) & 78.0 & (25) & 82.9 & (8) & 76.1 & 79.02 \\\\ \n \n \\bf{JMI+COBRA} & (26) & 80.6 & (51) & 74.7 & (15) & 78.3 & (51) & 81.5 & (13) & 71.9 & 77.41 \\\\ \n \\bf{JMI+FS} & (47) & 74.3 & (38) & 73.5 & (26) & 76.9 & (37) & 79.2 & (54) & 70.0 & 74.80 \\\\ \n \\bf{JMI+BE} & (47) & 74.3 & (38) & 73.5 & (26) & 76.9 & (25) & 80.0 & (29) & 68.6 & 74.66 \\\\ \\hline\\hline\n \n \t\t\t\t\t\t\t\t & \\multicolumn{11}{c}{ \\bf{DEX Dataset}} \\rule[-0.0ex]{0ex}{2.5ex}\t\t\t \\\\ \n \\bf{mRMR+COBRA} & (3) & 92.0 & (131) & 86.3 & (24) & 80.7 & (3) & 93.0 &(3) & 81.3 & 86.66 \\\\ \n \\bf{mRMR+FS} & (3) & 90.3 & (56) & 87.0 & (94) & 80.3 & (3) & 92.0 &(3) & 80.0 & 86.00 \\\\ \n \\bf{mRMR+BE} & (3) & 90.0 & (131) & 87.3 & (18) & 80.3 & (3) & 91.6 &(99) & 78.6 & 85.53 \\\\ \n \n \\bf{JMI+COBRA} & (88) & 91.6 & (13) & 83.0 & (12) & 80.3 & (3) & 94.0 & (3) & 81.0 & 86.00 \\\\ \n \\bf{JMI+FS} & (149) & 91.0 & (129) & 87.6 & (95) & 80.3 & (119) & 92.3 & (94) & 80.6 & 86.40 \\\\ \n \\bf{JMI+BE} & (149) & 90.0 & (128) & 87.3 & (22) & 81.0 & (146) & 92.0 & (138) & 78.0 & 85.60 \\\\ \\hline\n \n \\end{tabular}\n \\hfill{}\n \\caption{Comparison of COBRA with the greedy search methods over different datasets. For each classifier and combination of search method and measure function, the values in parentheses is the number of selected features and the second value is the classification accuracy. 
The last column reports the average of the classification accuracies for each algorithm.}
\label{ssp-be}
\end{table*} 
\end{center}

A simple algorithm, listed in Table \ref{algo}, is employed to search for the optimal value of the subset cardinality $P$, where $P$ ranges over a set $\mathbb{P}$ of admissible values. In the worst case, $\mathbb{P} =\{1,\dots,N\}$. 

Table \ref{ssp-be} shows the results obtained for the 8 datasets and 5 classifiers. The Friedman test with the corresponding Wilcoxon-Nemenyi post-hoc analysis was used to compare the different algorithms. However, looking at the classification rates even before running the Friedman tests reveals a few interesting points, which are marked in bold font.

First, on the small-sized datasets (NCI, DBW and LNG), mRMR+COBRA consistently shows higher performance than the other algorithms. The reason lies in the fact that the \textit{similarity ratio} of the feature sets selected by COBRA is lower than that of the BE or FS feature sets. The similarity ratio $S_i$ is defined as the number of features in the intersection of the $i$th and $(i{+}1)$th feature sets divided by the cardinality of the $i$th feature set. From this definition it is clear that for BE and FS the ratio is always equal to 1. However, because of the randomization step, it may vary widely for COBRA. That is, COBRA generates quite diverse feature sets. Some of these feature sets have relatively low scores compared with the BE or FS sets. However, since for small datasets the estimated mutual information terms are highly inaccurate, features that rank low with our noisy measure function may in fact be better for classification. The average of the similarity ratios of 50 subsequent feature sets ($\frac{1}{50}\sum_{i=5}^{54} S_i$) is reported for 4 datasets in Table \ref{s-ratio}.
As seen, for NCI the averaged similarity ratio is significantly smaller than 1 while for CNA which is a relatively larger dataset, it is almost constant and equal to 1.\n \n\\begin{table}\n \\addtolength{\\tabcolsep}{2mm}\n \\hfill{}\n \\begin{tabular}{l|| c c c c} \n \\hline \\rule[-0.0ex]{0ex}{2.5ex} \n \\textbf{Datasets} & \\bf{NCI} & \\bf{DBW} &\\bf{IAD} & \\bf{CNA} \\\\ \\hline \\hline \\rule[-0.0ex]{0ex}{2.5ex} \n \\bf{S-ratio} & 0.7717 & 0.8929 & 0.9266 & 0.9976 \\\\ \\hline \n \\end{tabular}\n \\hfill{}\n \\caption{The average (over 50 similarity ratios) similarity ratio for 4 datasets.}\n \\label{s-ratio}\n\\end{table} \n\n \\begin{figure}\n\\resizebox{8cm}{!}{\n \n\\begin{tikzpicture}[x=1pt,y=1pt]\n\\definecolor[named]{fillColor}{rgb}{1.00,1.00,1.00}\n\\path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (505.89,505.89);\n\\begin{scope}\n\\path[clip] ( 49.20, 61.20) rectangle (480.69,456.69);\n\\definecolor[named]{fillColor}{rgb}{0.00,1.00,0.00}\n\n\\path[fill=fillColor] ( 78.50,267.80) --\n\t(185.04,267.80) --\n\t(185.04,423.22) --\n\t( 78.50,423.22) --\n\tcycle;\n\\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}\n\n\\path[draw=drawColor,line width= 1.2pt,line join=round] ( 78.50,346.62) -- (185.04,346.62);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (131.77,210.02) -- (131.77,267.80);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (131.77,442.04) -- (131.77,423.22);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (105.13,210.02) -- (158.40,210.02);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (105.13,442.04) -- (158.40,442.04);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 78.50,267.80) --\n\t(185.04,267.80) --\n\t(185.04,423.22) --\n\t( 78.50,423.22) --\n\t( 78.50,267.80);\n\\definecolor[named]{fillColor}{rgb}{0.75,0.75,0.75}\n\n\\path[fill=fillColor] (211.67,242.34) --\n\t(318.22,242.34) --\n\t(318.22,283.52) --\n\t(211.67,283.52) --\n\tcycle;\n\n\\path[draw=drawColor,line width= 1.2pt,line join=round] (211.67,258.95) -- (318.22,258.95);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (264.94,238.80) -- (264.94,242.34);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (264.94,324.70) -- (264.94,283.52);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (238.31,238.80) -- (291.58,238.80);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (238.31,324.70) -- (291.58,324.70);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (211.67,242.34) --\n\t(318.22,242.34) --\n\t(318.22,283.52) --\n\t(211.67,283.52) --\n\t(211.67,242.34);\n\\definecolor[named]{fillColor}{rgb}{0.00,1.00,0.00}\n\n\\path[fill=fillColor] (344.85, 81.83) --\n\t(451.39, 81.83) --\n\t(451.39,250.31) --\n\t(344.85,250.31) --\n\tcycle;\n\n\\path[draw=drawColor,line width= 1.2pt,line join=round] (344.85,134.96) -- (451.39,134.96);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (398.12, 75.85) -- (398.12, 81.83);\n\n\\path[draw=drawColor,line width= 0.4pt,dash pattern=on 4pt off 4pt ,line join=round,line cap=round] (398.12,302.12) -- (398.12,250.31);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (371.49, 75.85) -- (424.76, 
75.85);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (371.49,302.12) -- (424.76,302.12);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (344.85, 81.83) --\n\t(451.39, 81.83) --\n\t(451.39,250.31) --\n\t(344.85,250.31) --\n\t(344.85, 81.83);\n\\end{scope}\n\\begin{scope}\n\\path[clip] ( 0.00, 0.00) rectangle (505.89,505.89);\n\\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (131.77, 61.20) -- (398.12, 61.20);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (131.77, 61.20) -- (131.77, 55.20);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (264.94, 61.20) -- (264.94, 55.20);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (398.12, 61.20) -- (398.12, 55.20);\n\n\\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at (131.77, 39.60) {CO - BE};\n\n\\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at (264.94, 39.60) {FS - BE};\n\n\\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at (398.12, 39.60) {FS - CO};\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20, 89.13) -- ( 49.20,443.37);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20, 89.13) -- ( 43.20, 89.13);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,177.69) -- ( 43.20,177.69);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,266.25) -- ( 43.20,266.25);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,354.81) -- ( 43.20,354.81);\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20,443.37) -- ( 43.20,443.37);\n\n\\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at ( 34.80, 89.13) {-4};\n\n\\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at ( 34.80,177.69) {-2};\n\n\\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at ( 34.80,266.25) {0};\n\n\\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at ( 34.80,354.81) {2};\n\n\\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.80] at ( 34.80,443.37) {4};\n\\end{scope}\n\\begin{scope}\n\\path[clip] ( 0.00, 0.00) rectangle (505.89,505.89);\n\\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}\n\n\\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 1.70] at (264.94,475.42) {\\bfseries Friedman Test on Mean Accuracies for mRMR};\n\\end{scope}\n\\begin{scope}\n\\path[clip] ( 0.00, 0.00) rectangle (505.89,505.89);\n\\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.20, 61.20) --\n\t(480.69, 61.20) --\n\t(480.69,456.69) --\n\t( 49.20,456.69) --\n\t( 49.20, 61.20);\n\\end{scope}\n\\begin{scope}\n\\path[clip] ( 49.20, 61.20) rectangle (480.69,456.69);\n\\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (273.03,456.69) rectangle (480.69,399.09);\n\\definecolor[named]{fillColor}{rgb}{0.00,1.00,0.00}\n\n\\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round,fill=fillColor] (283.83,445.89) rectangle 
[legend boxes omitted: CO - BE PostHoc P.value: 0.078; FS - BE PostHoc P.value: 0.965; FS - CO PostHoc P.value: 0.042]\n\end{tikzpicture}\n\n }\n\caption{Comparing the search strategies for the mRMR measure with the Friedman test and its corresponding post-hoc analysis. The y-axis is the classification accuracy difference and the x-axis indicates the names of the compared algorithms.}\n\label{mrmr-mean}\n\end{figure}\n\n\begin{figure}\n\resizebox{85mm}{!}{\n [box plots omitted: pairwise accuracy differences CO - BE, FS - BE, and FS - CO for each classifier: SVM, LDA, CART, RF, and NN]\n }\n\caption{Comparing the search strategies for mRMR. Results of the post-hoc tests for each classifier.}\n\label{mrmr-cl}\n\end{figure}\n The second interesting point concerns the Madelon dataset. As can be seen, mRMR with greedy search algorithms performs poorly on this dataset. Several authors have used this dataset to compare their proposed criteria with mRMR and have concluded that mRMR cannot handle highly correlated features, such as those in Madelon. Surprisingly, however, the performance of mRMR+COBRA is as good as that of JMI on this dataset, which means that it is not the criterion but the search method that has difficulty dealing with highly correlated features. 
Thus, any conclusion with respect to the quality of a measure has to be drawn carefully since, as in this case, the effect of a non-optimal search method can be decisive.\n \n To discover the statistically meaningful differences between the algorithms, we applied the Friedman test, followed by a Wilcoxon-Nemenyi post-hoc analysis, as suggested in \cite{hollander:99}, to the average accuracies (the last column of Table \ref{ssp-be}). Note that since we have 8 datasets, there are 8 independent measurements available for each algorithm. The results of this test for the mRMR-based algorithms are depicted in Figure \ref{mrmr-mean}. In all box plots, CO stands for the COBRA algorithm. Each box plot compares a pair of the algorithms. The green box plots indicate a significant difference between the corresponding algorithms. The adjusted p-values for each pair of algorithms are also reported in Figure \ref{mrmr-mean}. The smaller the p-value, the stronger the evidence against the null hypothesis. As can be seen, COBRA shows meaningful superiority over both greedy algorithms. However, if we set the significance level at $p=0.05$, only FS rejects the null \nhypothesis and shows a meaningful difference from COBRA.\n \n The same test was run for each classifier and its results can be found in Figure \ref{mrmr-cl}. While three of the classifiers show some differences between FS and COBRA, none of them reveals any meaningful difference between BE and COBRA. At this point, the least we can conclude is that, independent of the classification algorithm we choose, there is a good chance that FS performs worse than COBRA. \n \n For JMI, however, the performances of all algorithms are comparable, and with only 8 datasets it is difficult to draw any conclusion. Thus, the Wilcoxon-Nemenyi test results for JMI are not shown here for lack of space.\n \vspace{-10mm}\n \n\begin{center}\n \begin{table*}\n \addtolength{\tabcolsep}{4.4mm}\n \hfill{}\n \begin{tabular}{l|| c c c c c} \n \hline \rule[-0.0ex]{0ex}{2.5ex}\n \textbf{Datasets} & \bf{MAD} & \bf{NCI} & \bf{IAD} & \bf{ARR} & \bf{CNA} \\ \hline\hline \Ib \n \bf{mRMR+COBRA} & 74.81$\pmS$0.65 & 73.67$\pmS$2.41 & 96.64$\pmS$0.16 & 77.75$\pmS$1.03 & 88.91$\pmS$0.31 \\ \n \bf{mRMR+QPFS} & 71.44$\pmS$0.57 & 71.00$\pmS$1.84 & 95.02$\pmS$0.21 & 78.73$\pmS$0.84 & 86.93$\pmS$0.45 \\ \n \bf{mRMR+SOSS} & 71.36$\pmS$0.53 & 72.65$\pmS$2.13 & 96.64$\pmS$0.28 & 79.86$\pmS$1.18 & 85.43$\pmS$0.49 \\ \hline \hline \rule[-0.0ex]{0ex}{2.5ex}\n \n \bf{Time COBRA} & 175${\,+\,}$24 & 368${\,+\,}$341 & 540${\,+\,}$121 & 6${\,+\,}$14 & 120${\,+\,}$50 \\ \n \bf{Time QPFS} & 11 & 180 & 202 & 1 & 25 \\ \n \bf{Time SOSS} & 175${\,+\,}$5 & 368${\,+\,}$27 & 540${\,+\,}$12 & 6${\,+\,}$4 & 120${\,+\,}$7 \\ \hline\n \n \end{tabular}\n \hfill{}\n \caption{Comparison of COBRA with QPFS and SOSS over 5 datasets. Average classification rates and their standard deviations are reported in the top three rows of the table. In the next three rows, the computational times in seconds are shown, where the first value for COBRA and SOSS is the time for calculating the mutual information matrix and the second value is the time needed to solve the optimization problems.}\n \label{qpfs}\n \end{table*} \n \end{center}\n In the next experiment, COBRA is compared with two other convex-programming-based feature selection algorithms, SOSS \cite{naghibi:13} and QPFS \cite{rod:10}. 
Both SOSS and QPFS employ quadratic programming techniques to maximize a score function. SOSS, however, uses an instance of randomized rounding to generate the set-membership binary values, while QPFS ranks the features based on their scores (obtained from solving the convex problem) and therefore sidesteps the difficulties of generating binary values. Note that both COBRA and SOSS first need to calculate the mutual information matrix $\mathbf{Q}$. Once it is calculated, they can solve their corresponding convex optimization problems for different values of $P$. The first 3 rows of Table \ref{qpfs} report the average (over 5 classifiers) classification accuracies of these three algorithms and the standard deviation of these mean accuracies (calculated over the cross-validation folds). In the next three rows of the table, the computational times of each algorithm for a single run (in seconds) are shown, i.e., the amount of time needed to select a feature set with (given) $P$ features. The reported times for COBRA and SOSS consist of two values. The first value is the time needed to calculate the mutual information matrix $\mathbf{Q}$ and the second value is the amount of time needed to solve the corresponding convex optimization problem. All the values were measured on a PC with an Intel Core i7 CPU. As can be seen, QPFS is significantly faster than COBRA and SOSS. This computational superiority, however, seems to come at the expense of lower classification accuracy. For large datasets such as IAD, CNA and MAD, the Nystr\\\"{o}m approximation used in QPFS to cast the problem into a lower dimensional subspace does not yield a precise enough approximation and results in lower classification accuracies. An important remark for interpreting these results is that, for the NCI dataset (in all the experiments), we first filtered out the features with low mutual information with the class label and kept only the 2000 most informative features (similarly for the DEX and DBW datasets). Thus, the dimension is 2000 and not 9703 as mentioned in Table \ref{datasets}. \n\n The generalization power of the COBRA algorithm over different classifiers is another important issue to test. As can be observed in Table \ref{ssp-be}, the number of selected features varies quite markedly from one classifier to another. However, based on our experiments, the optimal feature set of any of the classifiers usually (for large enough datasets) achieves a near-optimal accuracy in conjunction with the other classifiers as well. This is shown in Table \ref{general} for 4 classifiers and 3 datasets. The COBRA features of the LDA classifier in Table \ref{ssp-be} are used here to train the other classifiers. Table \ref{general} lists the accuracies obtained by using the LDA features and the optimal features, repeated from Table \ref{ssp-be}. Unlike for the CNA and IAD datasets, a significant accuracy reduction can be observed in the case of the ARR data, which has substantially less training data than CNA and IAD. This suggests that for small datasets, a feature selection scheme should take the induction algorithm into account, since the learning algorithm is sensitive to small changes in the feature set. \n\begin{table}[ht]\n \addtolength{\tabcolsep}{1.3mm}\n\n\begin{tabular}{l|c||c c c c } \n\hline \n \multicolumn{2}{c||}{\bf{Classifiers}} & \textbf{SVM} & \textbf{CART} & \textbf{RF} & \textbf{NN} \rule[-0.0ex]{0ex}{2.5ex} \\ \hline\hline\n \n \multirow{2}{*}{\textbf{ARR}} & LDA feat. 
&78.4 & 73.7 & 77.1 & 68.00 \\ \cline{2-2}\n & Optimum &81.9 & 75.4 & 82.2 & 72.9 \\ \hline\hline\n \multirow{2}{*}{\textbf{CNA}} & LDA feat. & 92.6 & 75.0 & 90.5 & 91.1 \\ \cline{2-2}\n & Optimum & 94.0 & 75.0 & 90.8 & 92.0 \\ \hline\hline\n \n \multirow{2}{*}{\textbf{IAD}} & LDA feat. & 95.8 & 96.0 & 97.2 & 96.3 \\ \cline{2-2}\n & Optimum & 96.5 & 96.4 & 97.2 & 97.1 \\ \hline\n\n \end{tabular}\n\caption{The performance of the classification algorithms when trained with COBRA features optimized for the LDA classifier. This table shows the generalization power of the COBRA features across the classifiers.}\n\label{general}\n\end{table}\n\vspace{-5mm}\n\section{Conclusion}\n\label{con}\nA convex-programming-based parallel search strategy for feature selection, COBRA, was proposed in this work. Its approximation ratio was derived and compared with the approximation ratio of the backward elimination method. It was experimentally shown that COBRA outperforms sequential search methods, especially in the case of sparse data. Moreover, we presented two series expansions for mutual information, and showed that most mutual-information-based score functions in the literature, including mRMR and MIFS, are truncated approximations of these expansions. Furthermore, the underlying connection between MIFS and the Kirkwood approximation was explored, and it was shown that by adopting the class-conditional independence assumption and the Kirkwood approximation for $Pr(\mathbf{X})$, mutual information reduces to the MIFS criterion. \n\section{Acknowledgments}\nThis work has been partly supported by the Swiss National Science Foundation (SNSF).\n\n\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $f=(f_1,\ldots,f_p)$ be a holomorphic mapping from the unit ball $\mathbb{B} \subset \mathbb{C}^n$ to $\mathbb{C}^p$.\nIf $p=1$ and $f$ is a monomial, it is elementary to show, e.g., by integration by parts or by a Taylor expansion,\nthat the principal value current\n$\varphi \mapsto \lim_{\epsilon \to 0}\int_{|f|^2>\epsilon}\varphi/f$, $\varphi \in \mathscr{D}_{n,n}(\mathbb{B})$, exists and defines a \n$(0,0)$-current $1/f$. From Hironaka's theorem it then follows that such limits exist \nin general for $p=1$ and also that $\mathbb{B}$ may \nbe replaced by a complex space, \cite{HL}. \nThe $\bar{\partial}$-image, $\bar{\partial} (1/f)$, is the residue current of $f$. It has the useful property that its \nannihilator ideal is equal to the principal ideal $\langle f \rangle$ and by Stokes' theorem it is \ngiven by $\varphi \mapsto \lim_{\epsilon \to 0}\int_{|f|^2=\epsilon}\varphi/f$, $\varphi \in \mathscr{D}_{n,n-1}(\mathbb{B})$. \nFor $p>1$, Coleff-Herrera, \cite{CH}, proposed the following generalization. Define the residue integral\n\begin{equation}\label{CHresint}\nI_f^{\varphi}(\epsilon)=\int_{T(\epsilon)}\varphi/(f_1\cdots f_p), \quad \varphi \in \mathscr{D}_{n,n-p},\n\end{equation} \nwhere $T(\epsilon)=\cap_1^p\{|f_j|^2=\epsilon_j\}$ is oriented as the distinguished boundary of the corresponding \npolyhedron. 
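To fix ideas, consider the simplest case $n=p=1$ and $f_1=z$: writing $\varphi=\phi\, dz$ with $\phi$ a test function and Taylor expanding $\phi$ at the origin, only the constant term survives in the limit, and\n\begin{equation*}\nI_f^{\varphi}(\epsilon)=\int_{|z|^2=\epsilon}\frac{\phi\, dz}{z}\longrightarrow 2\pi i\, \phi(0), \quad \epsilon\to 0,\n\end{equation*}\nso that $\bar{\partial} (1/z)$ is $2\pi i$ times the point mass at the origin; this one-variable computation is the model case behind the constructions below. 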
Coleff-Herrera showed that if $\epsilon \to 0$ {\em along an admissible path}, which means that\n$\epsilon\to 0$ inside $(0,\infty)^p$\nin such a way that $\epsilon_j/\epsilon_{j+1}^k\to 0$\nfor all $k\in \mathbb{N}$, then the limit of $I_f^{\varphi}(\epsilon)$ exists and defines a $(0,p)$-current.\nWe call this current the Coleff-Herrera product associated to $f$. \n\nIf $f$ defines a complete intersection, Coleff-Herrera\nshowed that the Coleff-Herrera product associated to $f$ depends only in an alternating fashion on the \nordering of $f$ (see \cite{JebHs} and \cite{HasamArkiv} for stronger results implying this).\nMoreover, in the complete intersection case, \nit has turned out that the Coleff-Herrera product is a good notion of a multivariable residue\nof $f$. In particular, its annihilator ideal is equal to $\langle f \rangle$ (\cite{DS}, \cite{Pdr}).\nMoreover, the Coleff-Herrera product is the ``minimal'' extension to a current of Grothendieck's cohomological residue\n(see, e.g., \cite{Pdr} for the definition)\nin the sense that it is annihilated by anti-holomorphic functions vanishing on $\{f=0\}$. \nThis is also related to the fact that the Coleff-Herrera product \nhas the so-called {\em Standard Extension Property}, SEP, which means that\nit has no mass concentrated on the singular part of $\{f=0\}$ (see, e.g., \cite{JebAbel} and \cite{CH}).\n\nThe Coleff-Herrera product in the complete intersection case has also found \napplications, e.g., to explicit division-interpolation formulas and Brian\c con-Skoda type results \n(\cite{MatsAM}, \cite{BGVY}), explicit versions\nof the fundamental principle (\cite{BePa}), the $\bar{\partial}$-equation on complex spaces (\cite{AS}, \cite{HePo}), \nexplicit Green currents in arithmetic intersection theory \cite{BYJAM}, etc. \nHowever, if $f$ does not define a complete intersection, then the Coleff-Herrera product does not depend in\nany simple way on the ordering of $f$. For example, the Coleff-Herrera product associated to \n$(zw,z)$ is zero while the Coleff-Herrera product associated to \n$(z,zw)$ is equal to $\bar{\partial} (1/z^2)\wedge \bar{\partial} (1/w)$, which is to be interpreted simply as a tensor product. \nNevertheless, it has turned out that the Coleff-Herrera product indeed describes interesting phenomena \nalso in the non-complete intersection case. \nFor instance, the St\\\"{u}ckrad-Vogel intersection algorithm in excess intersection theory can \nbe described by the Coleff-Herrera method of multiplying currents; \nthis is shown in a forthcoming paper by M.\ Andersson, the second author, E.\ Wulcan, and A.\ Yger. \n\n\smallskip\n\nIn this paper, we describe various approaches to Coleff-Herrera type products, both in general and in the \ncomplete intersection case. More precisely, we study\nto what extent (exterior) products of natural regularizations of the individual \ncurrents $1/f_j$ and $\bar{\partial} (1/f_j)$ \nyield regularizations of the corresponding Coleff-Herrera products. Moreover, we do this globally on a \ncomplex space and we also consider products of Cauchy-Fantappi\`e-Leray type currents.\n\n\smallskip\n\nLet $Z$ be a complex space of pure dimension $n$, let $E_1^*,\ldots,E_p^*$ be hermitian holomorphic\nline bundles over $Z$, and let $f_j$ be a holomorphic section of $E_j^*$. 
\nThen $1\/f_j$ is a meromorphic section of the dual bundle $E_j$ and we define it as a current on $Z$ by\n\\begin{equation*}\n\\frac{1}{f_j}:= \\frac{|f_j|^{2\\lambda_j}}{f_j} \\, \\Big|_{\\lambda_j = 0}.\n\\end{equation*}\nThe right-hand side is a well-defined and analytic current-valued function\nfor $\\mathfrak{Re}\\, \\lambda_j \\gg 1$ and \nwe will see in Section~\\ref{formulering}\nthat it has a current-valued analytic continuation to $\\lambda_j = 0$; it is well-known and easy to show\nthat this definition of the current $1\/f$ indeed coincides with the principal value definition of \nHerrera-Lieberman described above, (see, e.g.,\nLemma \\ref{pmmultlemma} below).\nThe residue current of $f_j$ is then defined as the $\\bar{\\partial}$-image of $1\/f_j$, i.e.,\n\\begin{equation*}\n\\bar{\\partial} \\frac{1}{f_j} = \\frac{\\bar{\\partial} |f_j|^{2\\lambda_j}}{f_j} \\, \\Big|_{\\lambda_j = 0}.\n\\end{equation*}\nIt follows that $\\bar{\\partial} (1\/f_j)$ coincides with \nthe limit of the residue integral associated to $f_j$. A conceptual reason for this equality is that \n$\\bar{\\partial} |f_j|^{2\\lambda_j}\/f_j$ in fact is the Mellin transform of the residue integral. \nThe technique of using analytic continuation in \nresidue current theory has its roots in the work of Atiyah, \\cite{At},\nand Bernstein-Gel'fand, \\cite{BG}, and has turned out to be very useful. In the context of residue currents \nit has been developed by several authors, e.g., Barlet-Maire, \\cite{BaMa}, Yger, \\cite{Y},\nBerenstein-Gay-Yger,\\cite{BGY}, Passare-Tsikh, \\cite{PTCanad},\nand recently by the second author in \\cite{HasamArkiv}.\n\nWe use this technique to define products of the residue currents \n$\\bar{\\partial} (1\/f_j)$ by defining recursively\n\\begin{equation} \\label{eqproddef}\n\\bar{\\partial} \\frac{1}{f_k} \\wedge \\dots \\wedge \\bar{\\partial} \\frac{1}{f_1} := \\frac{\\bar{\\partial} |f_k|^{2\\lambda_k}}{f_k} \n\\wedge \\bar{\\partial} \\frac{1}{f_{k-1}} \\wedge \\dots \\wedge \\bar{\\partial} \\frac{1}{f_1} \\, \\Big|_{\\lambda_k = 0}.\n\\end{equation}\nThe existence of the right-hand side of \\eqref{eqproddef} follows from the fact that this type of products\nof residue currents are pseudomeromorphic, see Section \\ref{formulering} for details.\n\n\\smallskip\n\nA natural way of regularizing the current $\\bar{\\partial} (1\/f_j)$ inspired by Passare, \\cite{PCrelle}, is as\n$\\bar{\\partial} \\chi(|f_j|^2\/\\epsilon)\/f_j$, where $\\chi$ is a smooth approximation of ${\\bf 1}_{[1,\\infty)}$,\n(the characteristic function of $[1,\\infty)$). This regularization corresponds to a mild average of the \nresidue integral $I_{f_j}^{\\varphi}(\\epsilon)$ and again, it is well-known and easy to show that \n$\\lim_{\\epsilon\\to 0}\\bar{\\partial} \\chi(|f_j|^2\/\\epsilon) \/f_j =\\bar{\\partial} (1\/f_j)$, (see, e.g., Lemma \\ref{pmmultlemma}). \nWe define the regularized residue integral associated to $f$ by\n\\begin{equation}\\label{resint}\n\\mathcal{I}_f^{\\varphi}(\\epsilon)=\n\\int_Z \\frac{\\bar{\\partial} \\chi_p^{\\epsilon}}{f_p}\\wedge \\cdots \\wedge \\frac{\\bar{\\partial} \\chi_1^{\\epsilon}}{f_1} \\wedge \\varphi,\n\\end{equation}\nwhere $\\chi_j^{\\epsilon}=\\chi(|f_j|^2\/\\epsilon_j)$ and $\\varphi$ is a test form with values in \n$\\Lambda (E_1^*\\oplus\\cdots \\oplus E_p^*)$. 
\nNotice that if $\\chi={\\bf 1}_{[1,\\infty)}$ (and the $E_j$ are trivial), then \\eqref{resint} becomes \\eqref{CHresint}.\n\n\\begin{theorem}\\label{sats1}\nWith the notation of Definition \\ref{limitdef}, we have\n\\begin{equation*}\n\\bar{\\partial} \\frac{1}{f_p}\\wedge \\cdots \\wedge \\bar{\\partial} \\frac{1}{f_1}.\\, \\varphi = \\lim_{\\epsilon_1 \\ll \\cdots \\ll \\epsilon_p\\to 0}\n\\mathcal{I}_f^{\\varphi}(\\epsilon).\n\\end{equation*}\nMoreover, if we allow $\\chi={\\bf 1}_{[1,\\infty)}$ in \\eqref{resint}, then the limit of \\eqref{resint}\nalong any admissible path also equals\n$\\bar{\\partial} (1\/f_p)\\wedge \\cdots \\wedge \\bar{\\partial} (1\/f_1).\\, \\varphi$.\n\\end{theorem}\n\n\\begin{remark}\nThe requirement that $\\epsilon \\to 0$ along an admissible path if $\\chi={\\bf 1}_{[1,\\infty)}$ is not \nreally necessary. However,\nsince it is not completely obvious what, e.g., $(\\bar{\\partial} \\chi(|f_2|^2\/\\epsilon_2))\/f_2 \\wedge \\bar{\\partial} (1\/f_1) $ means if \n$\\chi={\\bf 1}_{[1,\\infty)}$ we prefer to add the requirement.\n\\end{remark}\n\nTheorem~\\ref{sats1} thus says that the Coleff-Herrera product associated to $f$ equals the successively defined \ncurrent in \\eqref{eqproddef} and also that it can be smoothly regularized by \\eqref{resint}.\nIt also follows that \n$\\bar{\\partial} (1\/f_p)\\wedge \\cdots \\wedge \\bar{\\partial} (1\/f_1)=\\lim_{\\epsilon\\to 0}(\\bar{\\partial}\\chi(|f_p|^2\/\\epsilon)\/f_p)\\wedge \\bar{\\partial} (1\/f_{p-1})\\wedge \\cdots \\wedge \\bar{\\partial} (1\/f_1)$. \n\nTheorem~\\ref{sats1} is a special case of Theorem~\\ref{main} below, where we show a similar result for \nproducts of Cauchy-Fantappi\\`e-Leray type currents, which can be thought of as analogues of the currents\n$1\/f_j$ and $\\bar{\\partial} (1\/f_j)$ in the case when the bundles $E_j$ have ranks $>1$. 
Products of such currents \nwere first defined in \cite{WArkiv}, but the definition of the products given there is in general not the same as ours.\nThe proof of Theorem~\ref{sats1} (and Theorem~\ref{main})\nis very similar to the proof of Proposition 1 in \cite{PCrelle} but it needs to be modified in our case\nsince extra technical difficulties arise when the metrics of the bundles $E_j$ are not supposed to be trivial.\n\n\smallskip\n\nTo give some intuition for Theorem~\ref{sats1}, we recall Bj\\\"{o}rk's realization of the Coleff-Herrera \nproduct; see, e.g., \cite{JebAbel}, \cite{MatsAAFS}, or \cite{JebHs} for proofs.\nGiven a holomorphic function $f_1$ in $\mathbb{B}\subset \mathbb{C}^n$, \nthere exists a holomorphic differential operator $Q$, a holomorphic function $h$, and a holomorphic $(n-1)$-form\n$dX$ such that\n\begin{equation}\label{DLrep}\n\bar{\partial} \frac{1}{f_1}.\, \varphi\wedge dz=\lim_{\epsilon\to 0}\int_{\{f_1=0\}} \chi\left(\frac{|h|^2}{\epsilon}\right)\n\frac{Q(\varphi)\wedge dX}{h}, \quad \n\varphi \in \mathscr{D}_{0,n-1}(\mathbb{B}),\n\end{equation} \nwhere $\chi={\bf 1}_{[1,\infty)}$ or a smooth approximation thereof.\nThis representation makes it possible to define the principal value of $1/f_2$ {\em on} the current $\bar{\partial} (1/f_1)$.\nIn fact, \n$\lim_{\epsilon \to 0}\int_{\{f_1=0\}}\chi(|hf_2|^2/\epsilon)Q(\varphi/f_2)\wedge dX/h$ exists and defines a current\n$(1/f_2)\bar{\partial} (1/f_1)$.\nThe $\bar{\partial}$-image of this current is then well-defined and, e.g., by Theorem~\ref{sats1}, it\nequals $\bar{\partial}(1/f_2)\wedge \bar{\partial}(1/f_1)$.\nBut $\bar{\partial}(1/f_2)\wedge \bar{\partial}(1/f_1)$ has a representation similar to \eqref{DLrep} and one can\nthus define the principal value of $1/f_3$ on $\bar{\partial}(1/f_2)\wedge \bar{\partial}(1/f_1)$, and so on. \nIntuitively, this procedure corresponds to first letting $\epsilon_1\to 0$ in \eqref{CHresint} (or \eqref{resint}),\nthen letting $\epsilon_2\to 0$ etc.\n\n\smallskip\n\nWe now turn to the case where the sections $f_j$ define a complete intersection on $Z$. \nThen we know that the Coleff-Herrera product\nis anti-commutative but we have in fact the following result generalizing Theorem 1 in\n\cite{JebHs}.\n\n\begin{theorem}\label{jebhs+}\nAssume that $f_1,\ldots,f_p$ define a complete intersection. Then\n\begin{equation*}\n\big|\mathcal{I}_f^{\varphi}(\epsilon)-\bar{\partial} \frac{1}{f_p}\wedge \cdots \wedge \bar{\partial} \frac{1}{f_1}.\ \varphi \big|\leq\nC\|\varphi\|_{C^M}(\epsilon_1^{\omega_1}+\cdots +\epsilon_p^{\omega_p}),\n\end{equation*}\nwhere the positive constants $M$ and $\omega_j$ only depend on $f$, $Z$, and $\supp \varphi$,\nwhile $C$ also depends on the $C^M$-norm of the $\chi$-functions appearing in the regularized residue integral\n$\mathcal{I}_f^{\varphi}$, \eqref{resint}. 
\n\\end{theorem}\n\nWe also have a similar statement for products of Cauchy-Fantappi\\`e-Leray currents, Theorem \\ref{epsilon-main} below.\nNotice that it is necessary that the $\\chi$-functions are smooth; if $p\\geq 2$ and $\\chi={\\bf 1}_{[1,\\infty)}$\nin \\eqref{resint}, \nthen the corresponding statement is false in view of the examples \nby Passare-Tsikh, \\cite{PTex}, and Bj\\\"{o}rk, \\cite{JebAbel}.\n\nWe also have a generalization of Theorem 1 in \\cite{HasamArkiv} to products of Cauchy-Fantappi\\`e-Leray\ncurrents, namely our Theorem~\\ref{lambda-main} in Section~\\ref{formulering}. \nIn the special case of line bundles discussed here,\nTheorem \\ref{lambda-main} becomes the following Theorem~\\ref{lambda-budget}. \nHowever, Theorem \\ref{lambda-budget} also\nfollows from the results in \\cite{HasamArkiv}; the presence of non-trivial metrics does not cause any additional \nproblems.\n\\begin{theorem}\\label{lambda-budget}\nAssume that $f_1,\\dots,f_p$ define a complete intersection. If $\\varphi$ is a test form, then\n\\begin{equation*}\n \\Gamma^\\varphi(\\lambda) := \\int \\frac{\\bar{\\partial} |f_p|^{2\\lambda_p}}{f_p} \\wedge \\dots \n\\wedge \\frac{\\bar{\\partial} |f_1|^{2\\lambda_1}}{f_1} \\wedge \\varphi\n\\end{equation*}\nhas an analytic continuation to a neighborhood of the half space $\\{ \\mathfrak{Re}\\, \\lambda_j \\geq 0 \\}$.\n\\end{theorem} \n\nIn the classical case, $\\Gamma^{\\varphi}(\\lambda)$ is the iterated Mellin transform of the residue integral \n\\eqref{CHresint} and it \nis well known that it has a meromorphic continuation to $\\mathbb{C}^p$ that is analytic in\n$\\cap_1^p\\{\\mathfrak{Re}\\, \\lambda_j>0\\}$; (this is also true in the non-complete intersection case). \nThe analyticity of $\\Gamma^{\\varphi}(\\lambda)$ in a neighborhood of \n$0$ when $p=2$ was proved by Berenstein-Yger (see, e.g., \\cite{BGVY}). \n\n\\smallskip \n\nIn Section~\\ref{formulering}, we give the necessary background and the general formulations of our results.\nSection \\ref{bevis} contains the proof of Theorems \\ref{sats1} and \\ref{main}.\nThe proof of Theorems \\ref{jebhs+}, \\ref{epsilon-main}, and \\ref{lambda-main} is the content of \nSection~\\ref{bevis2}; the crucial part is Lemma~\\ref{divlemma} which enables us to effectively use the assumption \nabout complete intersection. \n\n\\section{Formulation of the general results}\\label{formulering}\nLet $E_1^*,\\ldots,E_q^*$ be holomorphic hermitian vector bundles \nover a reduced complex space $Z$ of pure dimension $n$. \nThe metrics are supposed to be smooth in the following sense.\nWe say that $\\varphi$ is a smooth $(p,q)$-form on $Z$ if $\\varphi$ is smooth on $Z_{reg}$, and\nfor a neighborhood of any $p\\in Z$, there is a smooth \n$(p,q)$-form $\\tilde{\\varphi}$ in an ambient complex manifold such that the pullback of $\\tilde{\\varphi}$\nto $Z_{reg}$ coincides with $\\varphi\\lvert_{Z_{reg}}$ close to $p$.\nThe $(p,q)$-test forms on $Z$, $\\mathscr{D}_{p,q}(Z)$, are defined as the smooth compactly supported \n$(p,q)$-forms (with a suitable topology) and the $(p,q)$-currents on $Z$, $\\mathscr{D}'_{p,q}(Z)$, is the\ndual of $\\mathscr{D}_{n-p,n-q}(Z)$; see, e.g., \\cite{larkang} for a more thorough discussion.\n\nWe recall from \\cite{AWCrelle} the definition of {\\em pseudomeromorphic} currents, $\\mathcal{PM}$.\nA current is pseudomeromorphic if it is a (locally finite) sum of push-forwards of elementary currents \nunder modifications of $Z$. 
A current, $T$, is elementary if it is a current on $\\mathbb{C}^n_x$\nof the form\n\\begin{equation}\\label{T}\nT=\\frac{1}{x^{\\alpha}}\\bigwedge_{\\beta_j\\neq 0}\\bar{\\partial} \\frac{1}{x_j^{\\beta_j}}\\wedge \\vartheta,\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ are multiindices with disjoint supports and $\\vartheta$ is a smooth compactly \nsupported (possibly bundle valued) form. \n(We are abusing notation slightly; $\\Lambda_{\\beta_j\\neq 0}\\bar{\\partial} (1\/x_j^{\\beta_j})$\nis only defined up to a sign.) Elementary currents are thus merely tensor products of one-variable\nprincipal value currents $1\/x_i^{\\alpha_i}$ and $\\bar{\\partial}$-images of such (modulo smooth forms).\n\n\\begin{lemma} \\label{pmmultlemma}\n Let $f$ be a holomorphic function, and let $T \\in \\mathcal{PM}(Z)$.\n If $\\tilde{f}$ is a holomorphic function such that $\\{ \\tilde{f} = 0 \\} = \\{ f = 0 \\}$\n and $v$ is a smooth non-zero function, then $(|\\tilde{f} v|^{2\\lambda}\/f) T$ and $(\\bar{\\partial} |\\tilde{f}v|^{2\\lambda}\/f)\\wedge T$ have\n current-valued analytic continuations to $\\lambda = 0$ and the values at $\\lambda = 0$ are pseudomeromorphic\n and independent of the choices of $\\tilde{f}$ and $v$.\n Moreover, if $\\chi={\\bf 1}_{[1,\\infty)}$, or a smooth approximation thereof, then\n \\begin{equation} \\label{eqepsilonlambda}\n \\left.\\frac{|\\tilde{f} v|^{2\\lambda}}{f} T\\right|_{\\lambda = 0} = \\lim_{\\epsilon \\to 0^+} \\frac{\\chi^\\epsilon}{f} T\n \\quad\\text{and}\\quad \n \\left.\\frac{\\bar{\\partial} |\\tilde{f} v|^{2\\lambda}}{f} \\wedge T\\right|_{\\lambda = 0} = \\lim_{\\epsilon \\to 0^+} \\frac{\\bar{\\partial}\\chi^\\epsilon}{f}\\wedge T,\n \\end{equation}\nwhere $\\chi^\\epsilon = \\chi(|\\tilde{f}v|^2\/\\epsilon)$.\n\\end{lemma}\n\n\\begin{proof}\n The first part is essentially Proposition 2.1 in \\cite{AWCrelle}, except that there, $Z$ is a complex manifold, $\\tilde{f} = f$\n and $v \\equiv 1$. 
However, with suitable resolutions of singularities, the proof in \\cite{AWCrelle} goes through in the\n same way in our situation, as long as we observe that in $\\mathbb{C}$\n \\begin{equation*}\n \\frac{|x^{\\alpha'}v|^{2\\lambda}}{x^\\alpha} \\frac{1}{x^\\beta} \\quad \\text{and} \\quad\n \\frac{|x^{\\alpha'}v|^{2\\lambda}}{x^\\alpha} \\bar{\\partial} \\frac{1}{x^\\beta}\n \\end{equation*}\n have analytic continuations to $\\lambda = 0$, and the values at $\\lambda = 0$ are $1\/x^{\\alpha + \\beta}$\n and $0$ respectively, independently of $\\alpha'$ and $v$, as long as $\\alpha' > 0$ and $v \\neq 0$\n (and similarly with $\\bar{\\partial} |x^{\\alpha'}v|^{2\\lambda}\/x^\\alpha$).\n\n By Leibniz rule, it is enough to consider the first equality in \\eqref{eqepsilonlambda}, since if we have proved the first equality, then\n \\begin{align*}\n & \\lim_{\\epsilon \\to 0} \\frac{\\bar{\\partial} \\chi^\\epsilon}{f}\\wedge T = \\lim_{\\epsilon \\to 0} \\bar{\\partial} \\left( \\frac{\\chi^\\epsilon}{f} T \\right)\n - \\frac{\\chi^\\epsilon}{f} \\bar{\\partial} T \\\\\n &= \\left.\\left(\\bar{\\partial}\\left(\\frac{|\\tilde{f}v|^{2\\lambda}}{f} T\\right) -\n \\frac{|\\tilde{f}v|^{2\\lambda}}{f} \\bar{\\partial} T\\right)\\right|_{\\lambda = 0}\n = \\left.\\frac{\\bar{\\partial} |\\tilde{f}v|^{2\\lambda}}{f} \\wedge T \\right|_{\\lambda = 0}.\n \\end{align*}\n To prove the first equality in \\eqref{eqepsilonlambda}, we observe first that in the same way as in the first part, we\n can assume that $f = x^{\\gamma} u$ and $\\tilde{f} = x^{\\tilde{\\gamma}} \\tilde{u}$,\n where $u$ and $\\tilde{u}$ are non-zero holomorphic functions.\n Since $T$ is a sum of push-forwards of elementary currents,\n we can assume that $T$ is of the form \\eqref{T}. \n Note that if $\\supp \\gamma \\cap \\supp \\beta \\neq \\emptyset$, then\n $(|\\tilde{f} v|^{2\\lambda}\/f) T = 0$ for $\\mathfrak{Re}\\, \\lambda \\gg 1$ and $(\\chi(|\\tilde{f}v|^2\/\\epsilon) \/f) T = 0$\n for $\\epsilon > 0$, since $\\supp T \\subseteq \\{ x_i = 0, i \\in \\supp \\beta \\}$.\n Thus, we can assume that $\\supp \\gamma \\cap \\supp \\beta = \\emptyset$. By a smooth (but non-holomorphic) change of variables,\n as in Section~\\ref{bevis} (equations \\eqref{varbyte}), we can assume that $|\\tilde{u} v|^2 \\equiv 1$.\n Thus, since $(|x^{\\tilde{\\gamma}}|^{2\\lambda} \/ x^\\gamma) (1\/x^\\alpha)$, $(\\chi(|x^{\\tilde{\\gamma}}|^2\/\\epsilon)\/x^\\gamma) (1\/x^\\alpha)$\n depend on variables disjoint from the ones that $\\wedge_{\\beta_i \\neq 0} \\bar{\\partial} (1\/x_i^{\\beta_i})$ depends on,\n it is enough to prove that\n \\begin{equation*}\n \\left.\\frac{|x^{\\tilde{\\gamma}}|^{2\\lambda}}{x^\\gamma} \\frac{1}{x^\\alpha}\\right|_{\\lambda = 0} =\n \\lim_{\\epsilon \\to 0} \\frac{\\chi(|x^{\\tilde{\\gamma}}|^2\/\\epsilon)}{x^\\gamma} \\frac{1}{x^\\alpha},\n \\end{equation*}\n which is Lemma 2 in \\cite{JebHs}.\n\\end{proof}\n\nLet $f_j$ be a holomorphic section of $E_j^*$, $j=1,\\ldots,q$, and let $s_j$ be the section of $E_j$\nwith pointwise minimal norm such that $f_j \\cdot s_j=|f_j|^2$. 
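Concretely, if $\{\mathfrak{e}_{j,k}\}$ is a local frame for $E_j$ that is orthonormal with respect to the given metric, and $f_j=\sum_k f_{j,k}\, \mathfrak{e}_{j,k}^*$ in the dual frame, then\n\begin{equation*}\ns_j=\sum_k \overline{f_{j,k}}\, \mathfrak{e}_{j,k},\n\end{equation*}\nsince then $f_j\cdot s_j=\sum_k |f_{j,k}|^2=|f_j|^2$ and $s_j$ is orthogonal to the kernel of $\delta_{f_j}$; for a general hermitian metric, $s_j$ is the image of $f_j$ under the conjugate-linear isomorphism $E_j^*\to E_j$ induced by the metric. 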
Outside $\\{f_j=0\\}$, define\n\\begin{equation*}\n u^j_k = \\frac{s_j\\wedge (\\bar{\\partial} s_j)^{k-1}}{|f_j|^{2k}}.\n\\end{equation*}\nIt is easily seen that if $f_j = f_j^0 f_j'$, where $f_j^0$ is a holomorphic function and $f_j'$ is a non-vanishing section,\nthen $u^j_k = (1\/f_j^0)^k (u')^j_k$, where $(u')^j_k$ is smooth across\n$\\{ f_j = 0 \\}$.\nWe let\n\\begin{equation}\\label{Udef}\nU^j=\\sum_{k=1}^{\\infty} \\left.|\\tilde{f}_j|^{2\\lambda} u^j_k \\right|_{\\lambda=0},\n\\end{equation}\nwhere $\\tilde{f}_j$ is any holomorphic section of $E_j^*$ such that $\\{\\tilde{f}_j=0\\}=\\{f_j=0\\}$.\nThe existence of the analytic continuation is a local statement, so we can assume that \n$f_j = \\sum f_{j,k} \\mathfrak{e}_{j,k}^*$,\nwhere $\\mathfrak{e}_{j,k}^*$ is a local holomorphic frame for $E_j^*$. After principalization\nwe can assume that the ideal $\\langle f_{j,1},\\dots,f_{j,k_j} \\rangle$ is generated by, e.g., $f_{j,0}$.\nBy the representation $u^j_k = (1\/f_{j,0})^k (u')^j_k$,\nthe existence of the analytic continuation of $U^j$ in \\eqref{Udef} then follows from Lemma \\ref{pmmultlemma}.\nLet $U^j_k$ denote the term of $U^j$ that takes values in $\\Lambda^kE_j$; $U^j_k$ is thus\na $(0,k-1)$-current with values in $\\Lambda^kE_j$. Let $\\delta_{f_j}$ denote interior multiplication\nwith $f_j$ and put $\\nabla_{f_j}=\\delta_{f_j}-\\bar{\\partial}$; it is not hard to verify that \n$\\nabla_{f_j}U=1$ outside $f_j=0$. \nWe define the Cauchy-Fantappi\\`e-Leray type residue current, $R^j$, of $f_j$ by $R^j=1-\\nabla_{f_j}U^j$.\nOne readily checks that \n\\begin{eqnarray}\\label{residydef}\nR^j &=& R^j_0+\\sum_{k=1}^{\\infty}R^j_k \\\\\n&=& (1-|\\tilde{f}_j|^{2\\lambda})|_{\\lambda=0}+\\sum_{k=1}^{\\infty}\n\\left.\\bar{\\partial}|\\tilde{f}_j|^{2\\lambda}\\wedge\\frac{s_j\\wedge (\\bar{\\partial} s_j)^{k-1}}{|f_j|^{2k}}\\right|_{\\lambda=0}, \\nonumber\n\\end{eqnarray} \nwhere, as above, $\\tilde{f}_j$ is a holomorphic section such that $\\{\\tilde{f}_j=0\\}=\\{f_j=0\\}$.\n\n\\begin{remark}\nNotice that if $E_j$ has rank $1$, then $U_j$ simply equals $1\/f_j$ and \n$R^j=1-\\nabla_{f_j} (1\/f_j)=1-f_j\\cdot (1\/f_j)+\\bar{\\partial} (1\/f_j)=\\bar{\\partial} (1\/f_j)$.\n\\end{remark}\n\nWe now define a non-commutative calculus for the currents $U^i_k$ and $R^j_{\\ell}$ recursively as follows.\n\\begin{definition}\\label{proddef}\nIf $T$ is a product of some $U^i_k$:s and $R^j_{\\ell}$:s, then we define\n\\begin{equation*}\n\\bullet \\,\\,\\, U^j_k\\wedge T=\n\\left.|\\tilde{f}_j|^{2\\lambda}\\frac{s_j\\wedge (\\bar{\\partial} s_j)^{k-1}}{|f_j|^{2k}}\\wedge T \\right|_{\\lambda=0} \\hspace{.6cm}\n\\end{equation*}\n\n\\begin{equation*}\n\\bullet \\,\\,\\, \\left.R^j_0\\wedge T=(1-|\\tilde{f}_j|^{2\\lambda})T \\right|_{\\lambda=0} \\hspace{2.5cm}\n\\end{equation*}\n\n\\begin{equation*}\n\\bullet \\,\\,\\, R^j_k\\wedge T=\n\\left.\\bar{\\partial}|\\tilde{f}_j|^{2\\lambda}\\wedge \\frac{s_j\\wedge (\\bar{\\partial} s_j)^{k-1}}{|f_j|^{2k}}\\wedge T \\right|_{\\lambda=0},\n\\end{equation*}\nwhere $\\tilde{f}_j$ is any holomorphic section of $E^*_j$ with $\\{\\tilde{f}_j=0\\}=\\{f_j=0\\}$.\n\n\\end{definition}\n\nNote first that $U^j$ and $R^j$ are pseudomeromorphic. 
Hence, just as the analytic continuations in \eqref{Udef} and \eqref{residydef} exist, the analytic continuations in the definition
of the currents in Definition \ref{proddef} exist, and the currents so defined are pseudomeromorphic as well.

\begin{remark}
Under complete intersection assumptions these products have the suggestive
commutation properties; e.g., if $\textrm{codim}\, \{f_i=f_j=0\}= \rank E_i + \rank E_j$,
then $R^i_k\wedge R^j_{\ell}=R^j_{\ell}\wedge R^i_k$, $R^i_k\wedge U^j_{\ell}=U^j_{\ell}\wedge R^i_k$, and
$U^i_k\wedge U^j_{\ell}=-U^j_{\ell}\wedge U^i_k$ (see, e.g., \cite{MatsAArk}).
In general, there are no simple relations.
However, products involving only $U$:s are always anti-commutative.
\end{remark}

Now, consider collections $U=\{U^q_{k_q},\ldots,U^{p+1}_{k_{p+1}}\}$ and
$R=\{R^p_{k_p},\ldots,R^1_{k_1}\}$ and put
$(P_q,\ldots,P_1)=(U^q_{k_q},\ldots,R^p_{k_p},\ldots,R^1_{k_1})$. For a permutation $\nu$ of
$\{1,\ldots,q\}$ we define
\begin{equation}\label{URdef}
(UR)^{\nu}=P_{\nu(q)}\wedge \cdots \wedge P_{\nu(1)}.
\end{equation}
We will describe various natural ways to regularize products of this kind. For a single factor, we see
from \eqref{Udef} and \eqref{residydef} that there is a natural $\lambda$-regularization, $P^{\lambda}_j$,
of $P_j$, and from Definition \ref{proddef} we have
$(UR)^{\nu}=P^{\lambda_q}_{\nu(q)}\wedge \cdots \wedge P^{\lambda_1}_{\nu(1)}|_{\lambda_1=0}\cdots |_{\lambda_q=0}$.
We have the following result, which is proved in a forthcoming paper by M.\ Andersson, the second author, E.\ Wulcan, and
A.\ Yger.

\begin{theorem}\label{aswy}
Let $a_1>\cdots >a_q>0$ be integers and $\lambda$ a complex variable. Then we have
\begin{equation*}
(UR)^{\nu}=\left.P^{\lambda^{a_q}}_{\nu(q)}\wedge \cdots \wedge P^{\lambda^{a_1}}_{\nu(1)} \right|_{\lambda=0}.
\end{equation*}
\end{theorem}

We see that one does not need to put $\lambda_1=0$ first, then $\lambda_2=0$, etc.; one just has to
ensure that $\lambda_1$ tends to zero much faster than $\lambda_2$, and so on.
The current $(UR)^{\nu}$ can thus be obtained as the value at zero of a one-variable $\zeta$-type
function. From an algebraic point of view, this is desirable since one can derive functional equations
and use Bernstein-Sato theory to study $(UR)^{\nu}$.

\smallskip

There are also natural $\epsilon$-regularizations of the currents
$U^i_k$ and $R^j_{\ell}$ inspired by \cite{CH} and \cite{PCrelle}. Let $\chi={\bf 1}_{[1,\infty)}$, or
a smooth approximation thereof that is $0$ close to $0$ and $1$ close to $\infty$.
It follows from \cite{hasamJFA}, or after principalization from Lemma \ref{pmmultlemma}, that
\begin{equation}\label{Uepsilon}
U^j_k=\lim_{\epsilon\to 0^+}\chi(|\tilde{f}_j|^2/\epsilon) \frac{s_j\wedge (\bar{\partial} s_j)^{k-1}}{|f_j|^{2k}}
\end{equation}
and
\begin{equation}\label{Repsilon}
R^j_k=\lim_{\epsilon\to 0^+}\bar{\partial}\chi(|\tilde{f}_j|^2/\epsilon)\wedge
\frac{s_j\wedge (\bar{\partial} s_j)^{k-1}}{|f_j|^{2k}},\,\,
k>0,
\end{equation}
and similarly for $k=0$; as usual, $\{\tilde{f}_j=0\}=\{f_j=0\}$.
Of course, the limits are in the current sense, and if $\chi={\bf 1}_{[1,\infty)}$,
then $\epsilon$ is supposed to be a regular value for $|\tilde{f}_j|^2$ and $\bar{\partial}\chi(|\tilde{f}_j|^2/\epsilon)$ is to be
interpreted as integration over the manifold $|\tilde{f}_j|^2=\epsilon$.
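To illustrate, if $E_j$ has rank $1$ and we take $\tilde{f}_j=f_j$, then \eqref{Uepsilon} and \eqref{Repsilon} read
\begin{equation*}
\frac{1}{f_j}=\lim_{\epsilon\to 0^+}\frac{\chi(|f_j|^2/\epsilon)}{f_j},\qquad
\bar{\partial}\frac{1}{f_j}=\lim_{\epsilon\to 0^+}\frac{\bar{\partial}\chi(|f_j|^2/\epsilon)}{f_j};
\end{equation*}
for $\chi={\bf 1}_{[1,\infty)}$ these are the classical principal value and residue regularizations of $1/f_j$ and $\bar{\partial}(1/f_j)$.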
We denote the regularizations
given by \eqref{Uepsilon} and \eqref{Repsilon} by $P_j^{\epsilon}$.
\begin{definition}\label{limitdef}
Let $\vartheta$ be a function defined on $(0,\infty)^q$. We let
\begin{equation*}
\lim_{\epsilon_1 \ll \cdots \ll \epsilon_q\to 0}\vartheta(\epsilon_1,\ldots,\epsilon_q)
\end{equation*}
denote the limit (if it exists and is well-defined)
of $\vartheta$ along any path $\delta\mapsto \epsilon (\delta)$ towards the origin
such that for all $k\in \mathbb{N}$ and $j=2,\ldots,q$ there are positive constants $C_{jk}$ such that
$\epsilon_{j-1}(\delta) \leq C_{jk}\, \epsilon_j^k(\delta)$.
Here, we extend the domain of definition of $\vartheta$ to points $(0,\dots,0,\epsilon_{m+1},\dots,\epsilon_q)$,
where $\epsilon_{m+1},\dots,\epsilon_q > 0$, by defining recursively
\begin{equation*}
\vartheta(0,\dots,0,\epsilon_{m+1},\dots,\epsilon_q) = \lim_{\epsilon_{m} \to 0} \vartheta(0,\dots,0,\epsilon_m,\epsilon_{m+1},\dots,\epsilon_q),
\end{equation*}
if the limits exist.
\end{definition}

\begin{remark}\label{rem1}
The paths considered here are very similar to the admissible paths of Coleff-Herrera, but we also allow
paths where, e.g., $\epsilon_1$ attains the value $0$ before the other parameters tend to zero.
\end{remark}

We have the following analogue of Theorem \ref{aswy}.

\begin{theorem}\label{main}
Let $U=\{U^q_{k_q},\ldots,U^{p+1}_{k_{p+1}}\}$ and
$R=\{R^p_{k_p},\ldots,R^1_{k_1}\}$ be collections of currents defined in \eqref{Udef} and \eqref{residydef}.
Let $\nu$ be a permutation of $\{1,\ldots,q\}$ and let $(UR)^{\nu}$ be the product defined in \eqref{URdef}.
Then
\begin{equation*}
(UR)^{\nu}=\lim_{\epsilon_1 \ll \cdots \ll \epsilon_q \to 0}P_{\nu(q)}^{\epsilon_q}\wedge \cdots \wedge P_{\nu(1)}^{\epsilon_1},
\end{equation*}
where, as above, $(P_q,\ldots,P_1)=(U^q_{k_q},\ldots,R^p_{k_p},\ldots,R^1_{k_1})$ and
$P_{\nu(j)}^{\epsilon_j}$ is an $\epsilon$-regularization of $P_{\nu(j)}$ defined in \eqref{Uepsilon} and \eqref{Repsilon}.
If $\chi={\bf 1}_{[1,\infty)}$, we require that $\epsilon\to 0$ along an admissible path.
\end{theorem}

\smallskip

\subsection{The complete intersection case}

Now assume that $f_1,\ldots,f_q$ define a complete intersection, i.e., that
$\textrm{codim}\, \{f_1=\cdots =f_q=0\}=e_1+\cdots +e_q$, where $e_j=\rank E_j$. Then we know that
the calculus defined in Definition \ref{proddef} satisfies the suggestive commutation properties, but
we have in fact the following much stronger results.

\begin{theorem}\label{epsilon-main}
Assume that $f_1,\ldots,f_q$ define a complete intersection on $Z$, let
$(P_1,\ldots,P_q)=(R^1_{k_1},\ldots,R^p_{k_p},U^{p+1}_{k_{p+1}},\ldots,U^q_{k_q})$, and let
$P^{\epsilon_j}_{j}$ be an $\epsilon$-regularization of $P_j$ defined by \eqref{Uepsilon} and \eqref{Repsilon}
with smooth $\chi$-functions.
Then we have
\begin{equation*}
\left| \int_Z P^{\epsilon_1}_1\wedge \cdots \wedge P_q^{\epsilon_q}\wedge \varphi -
P_1\wedge \cdots \wedge P_q .\, \varphi \right| \leq C \|\varphi\|_M (\epsilon_1^{\omega} + \dots + \epsilon_q^\omega),
\end{equation*}
where $M$ and $\omega$ only depend on $f_1,\ldots, f_q$, $Z$, and $\supp \varphi$, while
$C$ also depends on the $C^M$-norm of the $\chi$-functions.
\end{theorem}

\begin{theorem}\label{lambda-main}
Assume that $f_1,\ldots,f_q$ define a complete intersection on $Z$, let
$(P_1,\ldots,P_q)=(R^1_{k_1},\ldots,R^p_{k_p},U^{p+1}_{k_{p+1}},\ldots,U^q_{k_q})$, and let
$P^{\lambda_j}_{j}$ be the $\lambda$-regularization of $P_j$ given by \eqref{Udef} and \eqref{residydef}.
Then the current-valued function
\begin{equation*}
\lambda \mapsto P_1^{\lambda_1}\wedge \cdots \wedge P_q^{\lambda_q},
\end{equation*}
a priori defined for $\mathfrak{Re}\, \lambda_j \gg 1$, has an analytic continuation
to a neighborhood of $\cap_1^q \{\mathfrak{Re}\, \lambda_j \geq 0\}$.
\end{theorem}

\begin{remark}
In case the $E_j$:s are trivial with trivial metrics, Theorems \ref{epsilon-main} and \ref{lambda-main} follow
quite easily from, respectively, Theorem 1 in \cite{JebHs} and Theorem 1 in \cite{HasamArkiv} by taking averages.
As an illustration, let
$\varepsilon_1,\ldots,\varepsilon_r$ be a nonsense basis and let $f_1,\ldots,f_r$ be holomorphic functions.
Then we can write $s=\bar{f}\cdot \varepsilon$ and so
$u_k=(\bar{f}\cdot \varepsilon)\wedge (d\bar{f}\cdot \varepsilon)^{k-1}/|f|^{2k}$.
A standard computation shows that
\begin{equation*}
\int_{\alpha \in \mathbb{CP}^{r-1}}\frac{|\alpha \cdot f|^{2\lambda}\alpha \cdot \varepsilon}{
(\alpha\cdot f)|\alpha|^{2\lambda}}dV
=A(\lambda)|f|^{2\lambda}\frac{\bar{f}\cdot \varepsilon}{|f|^2},
\end{equation*}
where $dV$ is the (normalized) Fubini-Study volume form and $A$ is holomorphic with $A(0)=1$. It follows that
\begin{equation*}
\int_{\alpha_1,\ldots,\alpha_k\in \mathbb{CP}^{r-1}}\bigwedge_1^k\frac{\bar{\partial} |\alpha_j\cdot f|^{2\lambda}}{\alpha_j\cdot f}
\wedge \frac{\alpha_j \cdot \varepsilon}{|\alpha_j|^{2\lambda}}dV(\alpha_j)=
A(\lambda)^k\bar{\partial} (|f|^{2k\lambda}u_k).
\end{equation*}
Elaborating this formula and using Theorem 1 in \cite{HasamArkiv}, one can show Theorem \ref{lambda-main} in the case
of trivial $E_j$:s with trivial metrics. The general case can probably also be handled in a similar manner, but the
computations become more involved and we prefer to give direct proofs.
\end{remark}

\section{Proof of Theorem \ref{main}}\label{bevis}
We start by making a Hironaka resolution of singularities, \cite{Hiro}, of $Z$ such that the pre-image of
$\cup_j\{f_j=0\}$ has normal crossings. We then make further toric resolutions (e.g., as in \cite{PTY})
such that, in local charts, the pullback of each $f_i$ is a monomial, $x^{\alpha_i}$, times a
non-vanishing holomorphic tuple.
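For instance, for $f_i=(z_1,z_2)$ in $\mathbb{C}^2$, in the chart $(x_1,x_2)\mapsto (x_1,x_1x_2)$ of the blow-up of the origin, the pullback of $f_i$ is $x_1\,(1,x_2)$, so that $\alpha_i=(1,0)$ and the tuple $(1,x_2)$ is non-vanishing.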
One checks that the pullback of $P_j^{\epsilon}$ is of one of the following
forms:
\begin{equation*}
\frac{\chi(|x^{\tilde{\alpha}}|^2\xi/\epsilon)}{x^{\alpha}}\, \vartheta, \quad
1-\chi(|x^{\tilde{\alpha}}|^2\xi/\epsilon),\quad
\frac{\bar{\partial} \chi(|x^{\tilde{\alpha}}|^2\xi/\epsilon)}{x^{\alpha}}\wedge \vartheta,
\end{equation*}
where $\xi$ is smooth and positive, $\supp \tilde{\alpha}= \supp \alpha$,
and $\vartheta$ is a smooth bundle-valued form; by localizing on the blow-up we may also suppose that
$\vartheta$ has as small support as we wish. If the $\chi$-functions are smooth, the following special
case of Theorem \ref{main} now follows immediately from Lemma \ref{pmmultlemma}:
\begin{equation}\label{eq1}
(UR)^{\nu}=\lim_{\epsilon_q\to 0}\cdots \lim_{\epsilon_1\to 0}P^{\epsilon_q}_{\nu(q)}\wedge \cdots \wedge
P^{\epsilon_1}_{\nu(1)}.
\end{equation}

\bigskip

For smooth $\chi$-functions we put
\begin{equation*}
\mathcal{I}(\epsilon)=
\int \frac{\bar{\partial} \chi_1^{\epsilon}\wedge \cdots \wedge \bar{\partial} \chi_p^{\epsilon}\,
\chi_{p+1}^{\epsilon}\cdots \chi_q^{\epsilon}}{x^{\alpha_1+\cdots +\alpha_p+\cdots +\alpha_{q'}}}\wedge \varphi,
\end{equation*}
where $q'\leq q$, $\varphi$ is a smooth $(n,n-p)$-form with support close to the origin, and
$\chi_j^{\epsilon}=\chi (|x^{\tilde{\alpha}_j}|^2\xi_j/\epsilon_j)$ for smooth positive $\xi_j$.
We note that we may replace the $\bar{\partial}$ in $\mathcal{I}(\epsilon)$ by $d$ for bidegree reasons.
In case $\chi={\bf 1}_{[1,\infty)}$ we denote the corresponding integral by $I(\epsilon)$.
We also put $\mathcal{I}^{\nu}(\epsilon_1,\ldots,\epsilon_q)=\mathcal{I}(\epsilon_{\nu(1)},\ldots,\epsilon_{\nu(q)})$
and similarly for $I^{\nu}$. In view of \eqref{eq1}, the special case of Theorem \ref{main}
when the $\chi$-functions are smooth will be proved if we can show that
\begin{equation}\label{eq2}
\lim_{\epsilon_1 \ll \cdots \ll \epsilon_q \to 0}\mathcal{I}^{\nu}(\epsilon)
\end{equation}
exists.
The case with $\chi={\bf 1}_{[1,\infty)}$ will then follow if we can show
\begin{equation}\label{eq3}
\lim_{\delta\to 0} (\mathcal{I}^{\nu}(\epsilon(\delta))-I^{\nu}(\epsilon(\delta)))=0,
\end{equation}
where $\delta \mapsto \epsilon(\delta)$ is any admissible path.

For notational convenience, we will consider $\mathcal{I}^\nu(\epsilon)$ (unless otherwise stated), but our
arguments apply just as well to $I^\nu(\epsilon)$ until we arrive at the integral \eqref{Iepsilon}.

Denote by $\tilde{A}$ the $q\times n$-matrix with rows $\tilde{\alpha}_i$.
We will first show that we can assume that $\tilde{A}$ has full rank. The idea is the same as in \cite{CH} and \cite{PCrelle};
however, because of the paths along which our limits are taken, we have to modify the argument slightly.
The following lemma follows from the proof of Lemma III.12.1 in \cite{TsikhBook}.
\begin{lemma}\label{ranklemma}
Assume that $\alpha$ is a $q \times n$-matrix with rows $\alpha_i$ such that there exists $(v_1,\dots,v_q) \neq 0$ with $\sum v_i\alpha_i = 0$.
Let $j = \min \{ i; v_i \neq 0 \}$.
Then there exist constants $C,c > 0$ such that if $\epsilon_{j} < C(\epsilon_{j+1}\dots\epsilon_q)^c$,
then $\chi(|x^{\alpha_j}|^2\xi_j/\epsilon_j) \equiv 1$ and $\bar{\partial}\chi(|x^{\alpha_j}|^2\xi_j/\epsilon_j) \equiv 0$ for
all $x \in \Delta \cap \{ |x^{\alpha_i}|^2 \geq C_i\epsilon_i, i = j+1,\dots,q \}$, where $\Delta$ is the unit polydisc.
\end{lemma}
Assume that $\tilde{A}$ does not have full rank, and let $v \neq 0$ be a column vector such that $v^t \tilde{A} = 0$. Since
$(\epsilon_1,\dots,\epsilon_q)$ is replaced by $(\epsilon_{\nu(1)},\dots,\epsilon_{\nu(q)})$ in $\mathcal{I}^\nu(\epsilon)$,
we choose instead $j_0$ with $v_{j_0} \neq 0$ such that $\nu(j_0) \leq \nu(i)$ for all $i$ such that $v_i \neq 0$.
If $j_0 \leq p$, we let $\widetilde{\mathcal{I}}^\nu(\epsilon) = 0$, and if $j_0 \geq p+1$, we let $\widetilde{\mathcal{I}}^\nu(\epsilon)$
be $\mathcal{I}^\nu(\epsilon)$ but with $\chi_{j_0}^\epsilon$ replaced by $1$.
If $\epsilon = \epsilon(\delta)$ is such that $\epsilon_{\nu(j_0)} > 0$, then $\mathcal{I}^\nu(\epsilon)$ is the action on a test form
of a current with support in a set of the form
\begin{equation*}
\Delta \cap \{ |x^{\alpha_i}|^2 \geq C_i\epsilon_{\nu(i)} \text{ for all $i$ such that } \nu(i) \geq \nu(j_0) \}.
\end{equation*}
In particular, if $\epsilon_{\nu(j_0)}(\delta)$ is sufficiently small compared to $(\epsilon_{\nu(j_0)+1}(\delta),\dots,$
$\epsilon_q(\delta))$,
then by Lemma \ref{ranklemma}, if $j_0 \leq p$, the factor $\bar{\partial}\chi_{j_0}^\epsilon$ is identically $0$, and if $j_0 \geq p+1$,
the factor $\chi_{j_0}^\epsilon$ is identically $1$;
thus $\mathcal{I}^\nu(\epsilon)=\widetilde{\mathcal{I}}^\nu(\epsilon)$ for such $\epsilon$.
Similarly, if $\epsilon_{\nu(j_0)} = 0$, then $\mathcal{I}^\nu(\epsilon)$ is defined as a limit along $\epsilon_{\nu(j_0)} \to 0$,
with $\epsilon_{\nu(j_0)+1},\dots,\epsilon_q$ fixed, and in the limit we again get that, for sufficiently small $\epsilon_{\nu(j_0)}$,
we can replace $\mathcal{I}^\nu(\epsilon)$ by $\widetilde{\mathcal{I}}^\nu(\epsilon)$.
Thus we have
\begin{equation*}
\lim_{\epsilon_1 \ll \dots \ll \epsilon_q \to 0} \mathcal{I}^\nu(\epsilon) =
\lim_{\epsilon_1 \ll \dots \ll \epsilon_q \to 0} \widetilde{\mathcal{I}}^\nu(\epsilon),
\end{equation*}
and we have reduced to the case that $\tilde{A}$ is a $(q-1)\times n$-matrix of the same rank. We continue
this procedure until $\tilde{A}$ has full rank.

\smallskip

By re-numbering the coordinates, we may suppose that the minor
$A=(\tilde{\alpha}_{ij})_{1\leq i,j\leq q}$ of $\tilde{A}$ is invertible and we put $A^{-1}=B=(b_{ij})$.
We now use complex notation to
make a non-holomorphic, but smooth, change of variables:
\begin{equation}\label{varbyte}
y_1=x_1\,\xi^{b_1/2},\ldots, y_q=x_q\,\xi^{b_q/2}, \quad y_{q+1}=x_{q+1},\ldots, y_n=x_n,
\end{equation}
\begin{equation*}
\hspace{.7cm} \bar{y}_1=\bar{x}_1\,\xi^{b_1/2},\ldots, \bar{y}_q=\bar{x}_q\,\xi^{b_q/2},
\quad \bar{y}_{q+1}=\bar{x}_{q+1},\ldots, \bar{y}_n=\bar{x}_n,
\end{equation*}
where $\xi^{b_i/2}=\xi_1^{b_{i1}/2}\cdots \xi_q^{b_{iq}/2}$. One easily checks that
$dy\wedge d\bar{y}=\xi^{b_1}\cdots \xi^{b_q}\, dx\wedge d\bar{x}+ O(|x|)$, so \eqref{varbyte} defines a
smooth change of variables between neighborhoods of the origin.
\nA simple linear algebra computation then shows that\n$|x^{\\tilde{\\alpha_i}}|^2\\xi_i=|y^{\\tilde{\\alpha_i}}|^2$. Of course, this change of variables does not preserve \nbidegrees so $\\varphi(y)$ is merely a smooth compactly supported $(2n-p)$-form.\nWe thus have\n\\begin{equation}\\label{I1(y)}\n\\mathcal{I}^\\nu(\\epsilon)=\n\\int_{\\Delta} \\frac{d\\chi_1^{\\epsilon}\\wedge \\cdots \\wedge d \\chi_p^{\\epsilon}\n\\chi_{p+1}^{\\epsilon}\\cdots \\chi_q^{\\epsilon}}{y^{\\alpha_1+\\cdots +\\alpha_p+\\cdots +\\alpha_{q'}}}\\wedge \\varphi'(y),\n\\end{equation}\nwhere $\\chi_j^{\\epsilon}=\\chi(|y^{\\tilde{\\alpha}_j}|^2\/\\epsilon_{\\nu(j)})$ and \n$\\varphi'(y)=\\sum_{|I|+|J|=2n-p}\\psi_{IJ}dy_I\\wedge d\\bar{y}_J$. By linearity we may assume that the sum only consists\nof one term $\\varphi'(y)=\\psi dy_K\\wedge d\\bar{y}_L$, and by scaling, we may assume that \n$\\supp \\psi \\subseteq \\Delta$, $\\Delta$ being the unit polydisc.\nBy Lemma 2.4 in \\cite{CH}, we can write the function $\\psi$ as\n\\begin{equation}\\label{taylor}\n\\psi(y)=\\sum_{I+J<\\sum_1^{q'}\\alpha_j-{\\bf 1}}\\psi_{IJ}\\, y^I\\bar{y}^J +\n\\sum_{I+J=\\sum_1^{q'}\\alpha_j-{\\bf 1}}\\psi_{IJ}\\, y^I\\bar{y}^J,\n\\end{equation}\nwhere $a 1$. Let $\\epsilon^k$ be any sequence satisfying the conditions\nin Definition~\\ref{limitdef}.\nConsider a fixed $k$, and let $m$ be such that $\\epsilon^k = (0,\\dots,0,\\epsilon^k_{m+1},\\dots,\\epsilon^k_{q})$ with $\\epsilon_{m+1}^k > 0$.\nLet $I_1 = \\nu^{-1}(\\{1,\\dots,m\\})\\cap\\{1,\\dots,p\\}$ and $I_2 = \\nu^{-1}(\\{1,\\dots,m\\})\\cap\\{p+1,\\dots,q\\}$.\nWe consider $\\epsilon^k_{m+1},\\dots,\\epsilon^k_{q}$ fixed in $\\mathcal{I}^\\nu(\\epsilon)$, and define\n\\begin{equation*}\n \\mathcal{I}_k(\\epsilon_1,\\dots,\\epsilon_m) = \\int_{[0,1]^n} \\bigwedge_{i \\in I_1} d\\chi(r^{\\alpha_i}\/\\epsilon_{\\nu(i)})\n \\prod_{i \\in I_2} \\chi(r^{\\alpha_i}\/\\epsilon_{\\nu(i)})\\mathscr{J}_k(r)dr_M,\n\\end{equation*}\noriginally defined on $(0,\\infty)^p$, but extended according to Definition \\ref{limitdef}, where\n\\begin{equation*}\n \\mathscr{J}_k(r) = \\pm \\bigwedge_{i \\in \\{ 1,\\dots,p \\} \\setminus I_1} d\\chi(r^{\\alpha_i}\/\\epsilon^k_{\\nu(i)})\n \\prod_{i \\in \\{ p+1,\\dots,q\\} \\setminus I_2} \\chi(r^{\\alpha_i}\/\\epsilon^k_{\\nu(i)})\\mathscr{J}(r)\n\\end{equation*}\n(where the sign is chosen such that $\\mathcal{I}_k(0) = \\mathcal{I}^\\nu(\\epsilon^k)$).\nSince $m < q$ and $\\mathscr{J}_k$ is smooth, we have by induction that\n\\begin{equation*}\n \\mathcal{I}_k(0) = \\lim_{\\epsilon_m \\to 0} \\dots \\lim_{\\epsilon_1 \\to 0} \\mathcal{I}_k(\\epsilon_1,\\dots,\\epsilon_m) =\n \\lim_{\\delta \\to 0} \\mathcal{I}_k(\\epsilon'(\\delta)),\n\\end{equation*}\nwhere $\\epsilon'(\\delta)$ is any admissible path, and the first equality follows by definition of $\\mathcal{I}_k(0)$.\nWe fix an admissible path $\\epsilon'(\\delta)$. 
For each $k$ we can choose $\delta_k$ such that
if $\epsilon^{k'} = (\epsilon'_1(\delta_k),\dots,\epsilon'_m(\delta_k))$, then
$\lim_{k \to \infty} (\mathcal{I}_k(\epsilon^{k'}) - \mathcal{I}_k(0)) = 0$, and such that
if $\tilde{\epsilon}^k = (\epsilon^{k'},\epsilon^k_{m+1},\dots,\epsilon^k_{q})$, then $\tilde{\epsilon}^k$ forms
a subsequence of an admissible path.
Since $\mathcal{I}_k(0) = \mathcal{I}^\nu(\epsilon^k)$ and $\mathcal{I}_k(\epsilon^{k'}) = \mathcal{I}^\nu(\tilde{\epsilon}^k)$, we thus have
\begin{equation*}
\lim_{k \to \infty} \mathcal{I}^\nu(\epsilon^k) = \lim_{k \to \infty} \mathcal{I}^\nu(\tilde{\epsilon}^k) =
\lim_{\delta \to 0} \mathcal{I}^\nu(\epsilon(\delta)),
\end{equation*}
where the second equality follows from the existence and uniqueness of the limit of $\mathcal{I}^\nu(\epsilon(\delta))$
along any admissible path.
Hence we have shown that the limit in \eqref{eq2} exists and is well-defined.

Finally, if we start from \eqref{Iepsilon}, as (23) in \cite{PCrelle} shows, either
\begin{equation*}
\lim_{\epsilon_1 \ll \dots \ll \epsilon_q \to 0} \mathcal{I}^\nu(\epsilon) = \pm \int_{r_M \in (0,1)^{n-p}}
\mathscr{J}(0,r_M) dr_M,
\end{equation*}
or the limit is $0$, depending only on $\alpha$. If we consider $I^{\nu}(\epsilon)$ instead, we get the same limit,
see \cite[p. 79--80]{TsikhBook}, and \eqref{eq3} follows.

\section{Proof of Theorems \ref{epsilon-main} and \ref{lambda-main}}\label{bevis2}

Recall that $(P_1,\ldots,P_q)=(R^1_{k_1},\ldots,R^p_{k_p},U^{p+1}_{k_{p+1}},\ldots,U^q_{k_q})$ and that
$P_j^{\epsilon_j}$ and $P_j^{\lambda_j}$ are the $\epsilon$-regularizations with smooth $\chi$
(given by \eqref{Uepsilon}, \eqref{Repsilon})
and the $\lambda$-regularizations (cf., \eqref{Udef}, \eqref{residydef}) respectively of $P_j$.
We will consider the following two integrals:
\begin{equation*}
\mathcal{I}(\epsilon)=\int_Z
P_1^{\epsilon_1}\wedge \cdots \wedge P_q^{\epsilon_q}\wedge \varphi
\end{equation*}
and
\begin{equation*}
\Gamma(\lambda) =\int_Z P_1^{\lambda_1}\wedge \cdots \wedge P_q^{\lambda_q}\wedge \varphi,
\end{equation*}
where $\varphi$
is a test form on $Z$, supported close to a point in $\{f_1=\cdots=f_q=0\}$,
of bidegree $(n,n-k_1-\cdots- k_q +q-p)$ with values in $\Lambda (E_1^*\oplus \cdots \oplus E_q^*)$.
In the arguments below, we will assume for notational convenience that $\tilde{f}_j=f_j$
(cf., e.g., \eqref{Udef}); the modifications to the general case are straightforward.

The crucial parts of the proofs of Theorems \ref{epsilon-main} and \ref{lambda-main}
are contained in the following propositions.

\begin{proposition}\label{epsilonpropp}
Assume that $f_1,\ldots,f_q$ define a complete intersection. Then there are constants $M$ and $\omega>0$,
depending only on $f_1,\ldots,f_q$, $Z$, and $\supp \varphi$, and a constant $C>0$, depending in addition on the
$C^M$-norms of the $\chi$-functions, such that
\begin{equation*}
\big|\mathcal{I}(\epsilon)-\lim_{\epsilon'\to 0}\mathcal{I}(\epsilon')\big|\leq
C\|\varphi\|_M\,(\epsilon_1^{\omega}+\cdots +\epsilon_q^{\omega}).
\end{equation*}
\end{proposition}

\begin{proposition}\label{lambdapropp}
Assume that $f_1,\ldots,f_q$ define a complete intersection. Then $\Gamma(\lambda)$, a priori defined for
$\mathfrak{Re}\,\lambda_j\gg 1$, has a meromorphic continuation to a neighborhood of
$\cap_1^q\{\mathfrak{Re}\,\lambda_j\geq 0\}$, and its possible polar hyperplanes there are of the form
$\sum_1^p\lambda_j a_{ji}=0$, where the $a_{ji}$ are non-negative integers and, for each $i$, $a_{ji}\neq 0$
for at least two $j$:s.
\end{proposition}

Let us indicate how Theorem \ref{epsilon-main} follows; we argue by induction over the number $p$ of
$\epsilon$-regularized residue factors. Let $R^*$ denote the product of some $R^j$:s with $j>p$ and let $U^*$ and $U^*_{\epsilon}$
denote the product of some $U^j$:s and $U^j_{\epsilon}$:s respectively, also with $j>p$ but only $j$:s not
occurring in $R^*$. We prove
\begin{equation*}
\big|R^1_{\epsilon}\wedge \cdots \wedge R^p_{\epsilon}\wedge R^*\wedge U^*_{\epsilon}-
R^1\wedge \cdots \wedge R^p\wedge R^*\wedge U^*\big|\lesssim \epsilon^{\omega},
\end{equation*}
i.e., we prove Theorem \ref{epsilon-main} {\em on} the current $R^*$.
The induction start, $p=0$, follows immediately from Proposition \ref{epsilonpropp}.
If we add and subtract
$R^1_\epsilon\wedge \dots \wedge R^p_\epsilon \wedge R^*\wedge U^*$, the induction step follows easily
from the relation $R^j=1-\nabla_{f_j}U^j$ (construed in the setting of $\epsilon$-regularizations) and estimates
like the one in Proposition \ref{epsilonpropp}.

\begin{proof}[Proof of Propositions \ref{epsilonpropp} and \ref{lambdapropp}]
We may assume that $\varphi$ has arbitrarily small support. Hence, we may assume that $Z$ is an analytic subset
of a domain $\Omega\subseteq \mathbb{C}^N$ and that all bundles are trivial,
and thus make the identification $f_j=(f_{j1},\ldots,f_{je_j})$, where the $f_{ji}$ are holomorphic in $\Omega$.
We choose a Hironaka resolution $\hat{Z}\rightarrow Z$ such that the pulled-back ideals $\langle\hat{f}_j\rangle$
are all principal, and moreover, so that in a fixed chart with coordinates $x$
on $\hat{Z}$ (and after a possible re-numbering),
$\langle\hat{f}_j\rangle$ is generated by $\hat{f}_{j1}$ and $\hat{f}_{j1}=x^{\alpha_j}h_j$, where $h_j$ is holomorphic
and non-vanishing. We then have
\begin{equation*}
|\hat{f}_j|^2=|\hat{f}_{j1}|^2\xi_j, \quad \hat{u}^j_{k_j}=v^j/\hat{f}^{k_j}_{j1},
\end{equation*}
where $\xi_j$ is smooth and positive and $v^j$ is a smooth (bundle-valued) form. We thus get
\begin{equation*}
\bar{\partial} \chi_j(|\hat{f}_j|^2/\epsilon_j)=
\tilde{\chi}_j(|\hat{f}_j|^2/\epsilon_j)\left(\frac{d\bar{\hat{f}}_{j1}}{\bar{\hat{f}}_{j1}}+
\frac{\bar{\partial} \xi_j}{\xi_j}\right),
\end{equation*}
where $\tilde{\chi}_j(t)=t \chi_j'(t)$, and
\begin{equation*}
\bar{\partial} |\hat{f}_j|^{2\lambda_j}=\lambda_j |\hat{f}_j|^{2\lambda_j} \left(\frac{d\bar{\hat{f}}_{j1}}{\bar{\hat{f}}_{j1}}+
\frac{\bar{\partial} \xi_j}{\xi_j}\right).
\end{equation*}
It follows that $\mathcal{I}(\epsilon)$ and $\Gamma(\lambda)$ are finite sums of integrals which
we without loss of generality can assume to be of the form
\begin{equation}\label{eq4}
\pm \int_{\mathbb{C}^n_x}\prod_1^p \tilde{\chi}_j^{\epsilon} \prod_{p+1}^q\chi_j^{\epsilon}
\bigwedge_1^m \frac{d\bar{\hat{f}}_{j1}}{\bar{\hat{f}}_{j1}}\wedge \bigwedge_{m+1}^p
\frac{\bar{\partial}\xi_j}{\xi_j}\wedge \bigwedge_1^q\frac{v^j}{\hat{f}^{k_j}_{j1}}\wedge \varphi \rho,
\end{equation}
\begin{equation}\label{eq4'}
\pm \lambda_1\cdots \lambda_p \int_{\mathbb{C}^n_x}
\prod_1^q |\hat{f}_j|^{2\lambda_j}
\bigwedge_1^m \frac{d\bar{\hat{f}}_{j1}}{\bar{\hat{f}}_{j1}}\wedge \bigwedge_{m+1}^p
\frac{\bar{\partial}\xi_j}{\xi_j}\wedge \bigwedge_1^q\frac{v^j}{\hat{f}^{k_j}_{j1}}\wedge \varphi \rho,
\end{equation}
where $\rho$ is a cutoff function.

\smallskip

Recall that $\hat{f}_{j1}=x^{\alpha_j}h_j$ and let $\mu$ be the number of vectors in a maximal
linearly independent subset of $\{\alpha_1,\ldots,\alpha_m\}$; say that
$\alpha_1,\ldots,\alpha_{\mu}$ are linearly independent.
We can then define new holomorphic coordinates
(still denoted by $x$) so that $\hat{f}_{j1}=x^{\alpha_j}$, $j=1,\ldots,\mu$; see \cite[p.~46]{PCrelle} for details
(if, e.g., $\mu=1$ and $\hat{f}_{11}=x_1^2h_1$, one can replace $x_1$ by $x_1h_1^{1/2}$ for a local branch $h_1^{1/2}$ of the square root).
Then we get
\begin{eqnarray}\label{hack}
\bigwedge_1^m d\hat{f}_{j1} &=& \bigwedge_1^{\mu}dx^{\alpha_j} \wedge
\bigwedge_{\mu+1}^m(x^{\alpha_j}dh_j+h_jdx^{\alpha_j}) \\
&=& x^{\sum_{\mu+1}^m \alpha_j}\bigwedge_1^{\mu}dx^{\alpha_j}\wedge \bigwedge_{\mu+1}^m dh_j, \nonumber
\end{eqnarray}
where the last equality follows because $dx^{\alpha_1}\wedge \cdots \wedge dx^{\alpha_{\mu}}\wedge dx^{\alpha_j}=0$,
$\mu+1\leq j \leq m$, since $\alpha_1,\ldots,\alpha_{\mu},\alpha_j$ are linearly dependent.
From the beginning we could also have assumed that $\varphi=\varphi_1\wedge \varphi_2$, where
$\varphi_1$ is an anti-holomorphic $(n-\sum_1^q k_j +q-p)$-form and $\varphi_2$ is a (bundle-valued)
$(n,0)$-test form on $Z$. We now define
\begin{equation*}
\Phi=\bigwedge_{\mu+1}^m \frac{d\bar{h}_j}{\bar{h}_j}\wedge \bigwedge_{m+1}^p\frac{\bar{\partial} \xi_j}{\xi_j}\wedge
\bigwedge_1^q v^j \wedge \hat{\varphi}_1.
\end{equation*}
Using \eqref{hack} we can now write \eqref{eq4} and \eqref{eq4'} as
\begin{equation}\label{eq5}
\pm \int_{\mathbb{C}^n_x}\frac{\prod_1^p \tilde{\chi}_j^{\epsilon} \prod_{p+1}^q\chi_j^{\epsilon}}{\prod_1^q \hat{f}^{k_j}_{j1}}
\frac{d\bar{x}^{\alpha_1}}{\bar{x}^{\alpha_1}} \wedge \cdots \wedge \frac{d\bar{x}^{\alpha_{\mu}}}{\bar{x}^{\alpha_{\mu}}}
\wedge \Phi \wedge \hat{\varphi}_2\rho,
\end{equation}
\begin{equation}\label{eq5'}
\pm \lambda_1\cdots \lambda_p \int_{\mathbb{C}^n_x}
\frac{\prod_1^q |\hat{f}_j|^{2\lambda_j}}{\prod_1^q \hat{f}^{k_j}_{j1}}
\frac{d\bar{x}^{\alpha_1}}{\bar{x}^{\alpha_1}} \wedge \cdots \wedge \frac{d\bar{x}^{\alpha_{\mu}}}{\bar{x}^{\alpha_{\mu}}}
\wedge \Phi \wedge \hat{\varphi}_2\rho.
\end{equation}

\begin{lemma}\label{divlemma}
Let $\mathcal{K}=\{i;\, x_i \, \big| \, x^{\alpha_j} \, \textrm{for some} \,\, p+1\leq j \leq q\}$.
For any fixed $r\in \mathbb{N}$, one can replace $\Phi$ in \eqref{eq5} and \eqref{eq5'} by
\begin{equation*}
\Phi':=\Phi -
\sum_{\emptyset \neq J\subseteq \mathcal{K}}(-1)^{|J|+1}\sum_{k_1,\dots,k_{|J|} = 0}^{r+1}
\left.\frac{\partial^{|k|}\Phi}{\partial x_J^k}\right|_{x_J=0}
\frac{x_J^k}{k!}
\end{equation*}
without affecting the integrals.
Moreover, for any $I\\subseteq \\mathcal{K}$, we have that \n$\\Phi'\\wedge \\Lambda_{i\\in I}(d\\bar{x}_i\/\\bar{x}_i)$ is $C^r$-smooth.\n\\end{lemma}\n\nWe replace $\\Phi$ by $\\Phi'$ in \\eqref{eq5} and \\eqref{eq5'} and we \nwrite $d=d_{\\mathcal{K}}+d_{\\mathcal{K}^c}$, where $d_{\\mathcal{K}}$ differentiates with respect to the \nvariables $x_i$, $\\bar{x}_i$ for $i\\in \\mathcal{K}$ and $d_{\\mathcal{K}^c}$ differentiates with respect to the rest.\nThen we can write \n$(d\\bar{x}^{\\alpha_1}\/\\bar{x}^{\\alpha_1})\\wedge \\cdots \\wedge (d\\bar{x}^{\\alpha_{\\mu}}\/\\bar{x}^{\\alpha_{\\mu}})\\wedge \\Phi'$ \nas a sum of terms, which we without loss of generality can assume to be of the form \n\\begin{equation*}\n\\frac{d_{\\mathcal{K}^c}\\bar{x}^{\\alpha_1}}{\\bar{x}^{\\alpha_1}}\\wedge \\cdots \\wedge \n\\frac{d_{\\mathcal{K}^c}\\bar{x}^{\\alpha_{\\nu}}}{\\bar{x}^{\\alpha_{\\nu}}}\\wedge\n\\frac{d_{\\mathcal{K}}\\bar{x}^{\\alpha_{\\nu+1}}}{\\bar{x}^{\\alpha_{\\nu+1}}}\\wedge \\cdots \\wedge \n\\frac{d_{\\mathcal{K}}\\bar{x}^{\\alpha_{\\mu}}}{\\bar{x}^{\\alpha_{\\mu}}}\\wedge \\Phi'\n\\end{equation*}\n\\begin{equation*}\n=\\frac{d_{\\mathcal{K}^c}\\bar{x}^{\\alpha_1}}{\\bar{x}^{\\alpha_1}}\\wedge \\cdots \\wedge \n\\frac{d_{\\mathcal{K}^c}\\bar{x}^{\\alpha_{\\nu}}}{\\bar{x}^{\\alpha_{\\nu}}}\\wedge\n\\Phi''\\wedge d\\bar{x}_{\\mathcal{K}},\n\\end{equation*}\nwhere $\\Phi''$ is $C^r$-smooth and of bidegree $(0,n-\\nu-|\\mathcal{K}|)$ (possibly, $\\Phi''=0$).\nThus, \\eqref{eq5} and \\eqref{eq5'} are finite sums of of integrals of the following type\n\\begin{equation}\\label{eq6}\n\\int_{\\mathbb{C}^n_x}\\frac{\\prod_1^p \\tilde{\\chi}_j^{\\epsilon} \\prod_{p+1}^q\\chi_j^{\\epsilon}}{\\prod_1^q \\hat{f}^{k_j}_{j1}}\n\\frac{d\\bar{x}^{\\alpha_1}}{\\bar{x}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{x}^{\\alpha_{\\nu}}}{\\bar{x}^{\\alpha_{\\nu}}}\n\\wedge \\psi \\wedge d\\bar{x}_{\\mathcal{K}}\\wedge dx,\n\\end{equation}\n\\begin{equation}\\label{eq6'}\n\\lambda_1\\cdots \\lambda_p \\int_{\\mathbb{C}^n_x}\n\\frac{\\prod_1^q |\\hat{f}_j|^{2\\lambda_j}}{\\prod_1^q \\hat{f}^{k_j}_{j1}}\n\\frac{d\\bar{x}^{\\alpha_1}}{\\bar{x}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{x}^{\\alpha_{\\nu}}}{\\bar{x}^{\\alpha_{\\nu}}}\n\\wedge \\psi \\wedge d\\bar{x}_{\\mathcal{K}}\\wedge dx,\n\\end{equation}\nwhere $\\psi$ is $C^r$-smooth and compactly supported.\n\n\\bigskip\n\nWe now first finish the proof of Proposition \\ref{lambdapropp}. First of all, it is well known that\n$\\Gamma(\\lambda)$ has a meromorphic continuation to $\\mathbb{C}^q$. We have\n\\begin{equation*}\n\\frac{d\\bar{x}^{\\alpha_1}}{\\bar{x}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{x}^{\\alpha_{\\nu}}}{\\bar{x}^{\\alpha_{\\nu}}}\n\\wedge d\\bar{x}_{\\mathcal{K}} =\n\\sum_{\\stackrel{|I|=\\nu}{I\\subseteq \\mathcal{K}^c}}C_I\\frac{d\\bar{x}_I}{\\bar{x}_I}\\wedge d\\bar{x}_{\\mathcal{K}}.\n\\end{equation*}\nLet us assume that $I=\\{1,\\ldots,\\nu\\}\\subseteq \\mathcal{K}^c$ and consider the contribution to \n\\eqref{eq6'} corresponding to this subset. 
This contribution equals
\begin{equation}\label{eq7'}
C_I\lambda_1\cdots \lambda_p \int_{\mathbb{C}^n_x}
\frac{|x^{\sum_1^q \lambda_j\alpha_j}|^2}{x^{\sum_1^q k_j\alpha_j}} \bigwedge_1^{\nu}\frac{d\bar{x}_j}{\bar{x}_j}\wedge
\Psi(\lambda,x)\wedge d\bar{x}_{\mathcal{K}}\wedge dx
\end{equation}
\begin{eqnarray*}
&=& \frac{C_I\prod_1^p\lambda_j}{\prod_{i=1}^{\nu}(\sum_1^q\lambda_j\alpha_{ji})}
\int_{\mathbb{C}^n_x} \frac{\bigwedge_{i=1}^{\nu}\bar{\partial} |x_i|^{2\sum_1^q\lambda_j\alpha_{ji}}
\prod_{i=\nu+1}^n|x_i|^{2\sum_1^q\lambda_j\alpha_{ji}}}{x^{\sum_1^q k_j\alpha_j}}\wedge \\
& & \hspace{7cm}\wedge \Psi(\lambda,x)\wedge d\bar{x}_{\mathcal{K}}\wedge dx,
\end{eqnarray*}
where $\Psi(\lambda,x)=\psi(x)\prod_1^q(\xi_j^{\lambda_j}/h_j^{k_j})$.
It is well known (and not hard to prove, e.g., by integration by parts as in \cite{MatsAB}, Lemma 2.1) that the
{\em integral} on the right-hand side of \eqref{eq7'} has an analytic continuation in $\lambda$ to
a neighborhood of $\cap_1^q\{\mathfrak{Re}\, \lambda_j \geq 0\}$.
(We thus choose $r$ in Lemma \ref{divlemma} large enough so that we can integrate by parts.)
If $p=0$, then the coefficient in front of
the integral is to be interpreted as $1$ and Proposition \ref{lambdapropp} follows in this case.
For $p>0$, we see that the poles of \eqref{eq7'}, and consequently
of $\Gamma(\lambda)$, in a neighborhood of $\cap_1^q\{\mathfrak{Re}\, \lambda_j \geq 0\}$ are along
hyperplanes
of the form $0=\sum_1^q\lambda_j\alpha_{ji}$, $1\leq i \leq \nu$. But if $j>p$ and $i\leq \nu$, then $\alpha_{ji}=0$
since $\{1,\ldots,\nu\}\subseteq \mathcal{K}^c=\{i;\, x_i \nmid x^{\alpha_j},\, \forall j=p+1,\ldots,q\}$.
Thus, the hyperplanes are of the form $0=\sum_1^p\lambda_j\alpha_{ji}$ and Proposition \ref{lambdapropp} is proved
except for the statement that the $\alpha_{ji}$ are non-zero for at least two $j$:s. However, we see from
\eqref{eq7'} that if for some $i$ we have $\alpha_{ji}=0$ for all $j$ but one, then the factor $\lambda_j$ appearing in
the denominator is canceled by the numerator. Moreover, we may assume that
the constant $C_I=\det (\alpha_{ji})_{1\leq i,j\leq \nu}$ is non-zero, which implies that we cannot have any
$\lambda_j^2$ in the denominator.

\bigskip

We now prove Proposition \ref{epsilonpropp}. Consider \eqref{eq6}. We have that $\alpha_1,\ldots,\alpha_{\nu}$
are linearly independent, so we may assume that $A=(\alpha_{ij})_{1\leq i,j\leq \nu}$ is invertible
with inverse $B=(b_{ij})$. We make the non-holomorphic change of variables \eqref{varbyte}, where the ``$q$'' of
\eqref{varbyte} now should be understood as $\nu$. Then we get $x^{\alpha_j}=y^{\alpha_j}\eta_j$, where
$\eta_j$ is smooth and positive and $\eta_j^2=1/\xi_j$, $j=1,\ldots,\nu$.
\nHence, $|\\hat{f}_j|^2=|y^{\\alpha_j}|^2$, $j=1,\\ldots,\\nu$.\nExpressed in the $y$-coordinates we get that \n$\\Lambda_1^{\\nu}(d\\bar{x}^{\\alpha_j}\/\\bar{x}^{\\alpha_j})\\wedge \\psi \\wedge d\\bar{x}_{\\mathcal{K}}\\wedge dx$ \nis a finite sum of terms of the form \n\\begin{equation}\\label{hack2}\n\\frac{d\\bar{y}^{\\alpha_1}}{\\bar{y}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{y}^{\\alpha_{\\nu'}}}{\\bar{y}^{\\alpha_{\\nu'}}}\n\\wedge \\bar{y}_{\\mathcal{K}'}\\, d\\bar{y}_{\\mathcal{K}''} \\wedge \\psi_1,\n\\end{equation}\nwhere $\\nu'\\leq \\nu$, $\\psi_1$ is a $C^r$-smooth compactly supported form, and\n$\\mathcal{K}'$ and $\\mathcal{K}''$ are disjoint sets such that $\\mathcal{K}'\\cup \\mathcal{K}''=\\mathcal{K}$.\nIn order to give a contribution to \\eqref{eq6} we see that $\\psi_1$ must contain $dy$. \nIn \\eqref{hack2} we write $d=d_{\\mathcal{K}}+d_{\\mathcal{K}^c}$, and arguing as we did\nimmediately after Lemma \\ref{divlemma}, \\eqref{hack2} is a finite sum of terms of the form\n\\begin{equation*}\n\\frac{d\\bar{y}^{\\alpha_1}}{\\bar{y}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{y}^{\\alpha_{\\nu''}}}{\\bar{y}^{\\alpha_{\\nu''}}}\n\\wedge \\psi_2 \\wedge d\\bar{y}_{\\mathcal{K}}\\wedge dy,\n\\end{equation*}\nwhere $\\nu''\\leq \\nu$ and $\\psi_2$ is $C^r$-smooth and compactly supported.\nWith abuse of notation we thus have that \\eqref{eq6} is a finite sum of integrals of the form\n\\begin{equation}\\label{eq7}\n\\int_{\\mathbb{C}^n_x}\\frac{\\prod_1^p \\tilde{\\chi}_j^{\\epsilon} \\prod_{p+1}^q\\chi_j^{\\epsilon}}{\\prod_1^q \\hat{f}^{k_j}_{j1}}\n\\frac{d\\bar{y}^{\\alpha_1}}{\\bar{y}^{\\alpha_1}} \\wedge \\cdots \\wedge \\frac{d\\bar{y}^{\\alpha_{\\nu}}}{\\bar{y}^{\\alpha_{\\nu}}}\n\\wedge \\psi \\wedge d\\bar{y}_{\\mathcal{K}}\\wedge dy\n\\end{equation}\n\\begin{equation*}\n=\\int_{\\mathbb{C}^n_x}\\frac{\\bigwedge_1^{\\nu}d\\chi_j^{\\epsilon}\n\\prod_{\\nu+1}^p \\tilde{\\chi}_j^{\\epsilon} \\prod_{p+1}^q\\chi_j^{\\epsilon}}{y^{\\sum_1^q k_j\\alpha_j}}\n\\wedge \\Psi \\wedge d\\bar{y}_{\\mathcal{K}}\\wedge dy,\n\\end{equation*}\nwhere $\\Psi$ is a $C^r$-smooth compactly supported $(n-|\\mathcal{K}|-\\nu)$-form; the equality follows since\n$\\chi_j^{\\epsilon}=\\chi_j(|y^{\\alpha_j}|^2\/\\epsilon_j)$, $j=1,\\ldots,\\nu$. Now, \\eqref{eq7} is essentially equal\nto equation (24) of \\cite{JebHs} and the proof of Proposition \\ref{epsilonpropp} is concluded as in the \nproof of Proposition 8 in \\cite{JebHs}.\n\\end{proof}\n\n\\bigskip\n\n\\begin{proof}[Proof of Lemma \\ref{divlemma}]\nThe proof is similar to the proof of Lemma 9 in \\cite{JebHs} but some modifications have to be done.\nFirst, it is easy to check by induction over $|\\mathcal{K}|$ that \n$\\Phi'\\wedge \\Lambda_{i\\in I}(d\\bar{x}_i\/\\bar{x}_i)$ is $C^r$-smooth for any $I\\subseteq \\mathcal{K}$; for\n$|\\mathcal{K}|=1$ this is just Taylor's formula for forms. It thus suffices to show that\n\\begin{equation*}\nd\\bar{x}^{\\alpha_1}\\wedge \\cdots \\wedge d\\bar{x}^{\\alpha_{\\mu}}\\wedge \\left.\\frac{\\partial^{|k|} \\Phi}{\\partial x_I^k}\\right|_{x_I=0}=0,\\quad\n\\forall I\\subseteq \\mathcal{K}, \\, k=(k_{i_1},\\ldots,k_{i_{|I|}}).\n\\end{equation*}\nTo show this, fix an $I\\subseteq \\mathcal{K}$ and let $L=\\{j; \\, x_i \\nmid x^{\\alpha_j}\\,\\, \\forall i\\in I\\}$. \nSay for simplicity that \n\\begin{equation*}\nL=\\{1,\\ldots,\\mu',\\mu+1,\\ldots,m',m+1,\\ldots,p',p+1,\\ldots,q'\\},\n\\end{equation*}\nwhere $\\mu'\\leq \\mu$, $m'\\leq m$, $p'\\leq p$, and $q'