diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgatf" "b/data_all_eng_slimpj/shuffled/split2/finalzzgatf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgatf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe Ap star $\\gamma$ Equ (HD 201601, BS 8097) is one of the brightest\nobjects of this class, with the apparent luminosity $V=4.66$ mag. The exact \nspectral type of this object\nis A9p (SrCrEu subclass). The magnetic field of $\\gamma$ Equ has been\nstudied for more than 50 years, starting from October 1946 (see Babcock\n1958). The longitudinal magnetic field $B_e$ of this star does not exhibit\nperiodic variations in time scales typical of stellar rotation,\n$0.5 - 30$ days. Such a variability of the $B_e$ field was observed in most\nAp stars. The above effect is commonly interpreted as the result of\nstellar rotation (oblique dipole model).\n\nThe first measurements by Babcock (1958) showed that the value of the\nlongitudinal magnetic field $B_e$ of $\\gamma$ Equ was positive in 1946--52,\nand approached nine hundred G. From that time on the value of $B_e$ slowly\ndecreased and even changed sign in 1970\/71. One could interpret the magnetic \nbehavior of $\\gamma$ Equ either as secular variations, or variations\ncaused by extremely slow rotation. If the latter picture is correct,\nthen the corresponding magnetic and rotational periods are in the range\nfrom 72 to 110 years (Bonsack \\& Pilachowski 1974; Leroy et al. 1994; \nBychkov \\& Shtol' 1997; Scholz et al. 1997).\n\nThe behavior of the $B_e$ field in $\\gamma$ Equ was investigated by many\nauthors in the second half of the twenty{\\sl th} century. For this research\nwe compiled $B_e$ observations published by Bonsack \\& Pilachowski (1974),\nScholz (1975; 1979), Borra \\& Landstreet (1980), Zverko et al. (1989), \nMathys (1991), Bychkov et al. (1991), Bychkov \\& Shtol' (1997), Scholz et\nal. (1997), Mathys \\& Hubrig (1997), Hildebrandt et al. (2000),\nLeone \\& Kurtz (2003) and Hubrig et al. (2004).\n\nWe included in this paper our unpublished magnetic $B_e$ measurements which\nwere obtained during the past seven years. All the new magnetic observations\nshowed, that the slow decrease of the $B_e$ field in $\\gamma$ Equ apparently\nreached the minimum in 1996--2002 and has actually started to increase. \n\nIn this paper we determined the accurate parameters of secular\nvariability of $\\gamma$ Equ: the period $P_{mag}$, the amplitude and the\ntime of zero phase for $B_e$ variations, which were approximated by a sine\nwave. We support the hypothesis that the long-term $B_e$ \nvariation in $\\gamma$ Equ is a periodic feature. Possible origin of this\nvariation cannot be uniquely determined, see discussion in \nSection ~\\ref{sec:discussion} of this paper.\n\n\\section{ Observations and data processing }\n\nWe have performed spectropolarimetric observations of Zeeman line splitting\nfor $\\gamma$ Equ at the Coude focus of the 1-m optical telescope (Special\nAstrophysical Observatory, Russian Academy of Sciences).\nZeeman spectra were obtained with the echelle spectrograph GECS (Musaev\n1996). We have put the achromatic analyser of circularly polarised light\nin front of the spectrometer slit. 
Images of the Zeeman echelle spectra\nwere recorded with CCD detectors in standard FITS format.\nFinal reduction of the archived spectra was performed with the standard\nMIDAS software (Monin 1999).\n\nEffects of instrumental polarisation on $B_e$ measurements obtained with\nthis instrument were investigated by Bychkov et al. (1998, 2000).\n\nTable~\\ref{tab:saores} presents the full set of our $B_e$ measurements\nof $\\gamma$ Equ (total 33 $B_e$ points). \nThe meaning of the first 3 columns is self-explanatory. The fourth column\ngives the number $N$ of spectral lines which were\nused for the measurement of $B_e$ for a given exposure. \nThe time length $\\Delta t$ of the exposure (in min) is given in the last column\nof Table~\\ref{tab:saores}.\n\nA single $B_e$ value listed in \nTable~\\ref{tab:saores} was obtained by averaging the $B_e$ measurements\nfrom 500-1300 spectral lines. The standard deviation $\\sigma_{B_e}$\nfor the resulting value of $B_e$ was computed in the standard manner as\nthe error of an arithmetic mean value.\n\nErrors $\\sigma_{B_e}$ determined in the above way reached rather low values\nin several observations listed in Table~\\ref{tab:saores}. In 2005\/2006\nwe plan to verify the reality of such $\\sigma_{B_e}$ by a special program\nof $B_e$ observations. For now we accept these errors {\\sl bona fide}\nand note the following properties of our $B_e$ measurements.\n\nThe referee pointed out that a few pairs of $B_e$ measurements from one night\nin Table~\\ref{tab:saores} differ by only a few G, which is substantially\nless than the corresponding standard deviation $\\sigma_{B_e}$. \nWe can explain this only as a purely random effect, and do not see\nany reason for it either in the acquisition of the observational data or\ntheir reduction.\n\nSecondly, series of measurements taken within a few nights generally \nshow a scatter of the order of 100 G, which is much higher than the \nstandard errors $\\sigma_{B_e}$ in Table~\\ref{tab:saores}. The latter are \nof the order of $20-30$ G, and such a discrepancy suggests that our\nstandard deviations are systematically underestimated, and are in fact\nof the order of 100 G. On the other hand,\nsuch a scatter of $\\approx$ 100 G is not inconsistent with the\nshort-term variability of the light and the longitudinal magnetic field\n$B_e$ of $\\gamma$ Equ on time scales of minutes or longer. \n\nLeone \\& Kurtz (2003) recently discovered periodic variations of the\nlongitudinal magnetic field $B_e$ in $\\gamma$ Equ over the pulsation\nperiod of this star, $P_{puls} = 12.1$ min. The estimated amplitude\nfor this period is $\\Delta B_e = 240$ G; therefore, these variations \ncan at least contribute to the scatter of our $B_e$ points collected\nin Table~\\ref{tab:saores}. \n\nA study of the rapid periodic $B_e$ variations of $\\gamma$ Equ on a time scale \nof minutes was also presented in Bychkov et al. (2005b). They\ndid not find conclusive evidence of such variations above the noise\nlevel of $\\approx 240$ G. \n\nWe also performed a spectral analysis of the full time series of 298 $B_e$\npoints from the years 1946--2004. We concluded that there are no short-period\nfield variations with periods above ca. 
1 day, but were not able to extend\nour analysis for shorter periods, see Section 4 of this paper.\n\n\n{\n\\newdimen\\digitwidth\n\\setbox0=\\hbox{\\rm0}\n\\digitwidth=\\wd0\n\\catcode`?=\\active\n\\def?{\\kern\\digitwidth}\n\n\\newcommand{\\displaystyle}{\\displaystyle}\n\\begin{table}\n\\caption{Measurements of $B_e$ in $\\gamma$ Equ (HD 201601). }\n\\label{tab:saores}\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{tabular}{|l|r|c|r|c|}\n\\noalign{\\vskip 2 mm}\n\\hline\nJD \\hskip1mm 2400000.+ & $B_e$ (G) & $\\sigma_{B_e}$ (G)&\\hskip-2mm $N$ &\n\\hskip2mm $\\Delta t$ (min)\\\\\n\\hline\n49648.323 & --1045 & 21 & 706 & 30\\\\[-1.0pt]\n49648.345 & --1315 & 26 & 755 & 30\\\\[-1.0pt]\n49649.229 & --1463 & 37 & 576 & 30\\\\[-1.0pt]\n49649.257 & --1159 & 31 & 656 & 30\\\\[-1.0pt]\n49932.424 & --1317 & 26 & 691 & 60\\\\[-1.0pt]\n49932.469 & --1317 & 26 & 675 & 60\\\\[-1.0pt]\n49933.460 & --1316 & 26 & 700 & 60\\\\[-1.0pt]\n49933.507 & --1317 & 29 & 704 & 60\\\\[-1.0pt]\n50023.158 & --1291 & 22 & 501 & 40\\\\[-1.0pt]\n50023.189 & --1380 & 23 & 650 & 40\\\\[-1.0pt]\n50066.128 & --1539 & 26 & 718 & 40\\\\[-1.0pt]\n50066.157 & --1611 & 62 & 532 & 40\\\\[-1.0pt]\n51533.1229 & --1014 & 16 & 966 & 30\\\\[-1.0pt]\n51533.1451 & --1011 & 14 & 701 & 30\\\\[-1.0pt]\n51535.1847 & -- 902 & 16 & 955 & 40\\\\[-1.0pt]\n51535.2153 & -- 901 & 19 & 855 & 40\\\\[-1.0pt]\n51536.1069 & -- 670 & 18 & 821 & 30\\\\[-1.0pt]\n51536.1285 & -- 642 & 24 & 508 & 30\\\\[-1.0pt]\n51888.166 & --1069 & 18 & 847 & 30\\\\[-1.0pt]\n51888.190 & --1092 & 20 &1353 & 30\\\\[-1.0pt]\n51889.103 & -- 890 & 20 & 847 & 30\\\\[-1.0pt]\n51889.126 & -- 865 & 20 & 817 & 30\\\\[-1.0pt]\n51890.142 & -- 742 & 21 & 770 & 30\\\\[-1.0pt]\n52163.3000 & -- 845 & 19 & 833 & 30\\\\[-1.0pt]\n52163.3201 & -- 855 & 19 & 732 & 30\\\\[-1.0pt]\n52164.2861 & -- 956 & 16 & 947 & 30\\\\[-1.0pt]\n52164.3076 & -- 967 & 16 & 914 & 30\\\\[-1.0pt]\n52165.2812 & --1061 & 17 & 835 & 40\\\\[-1.0pt]\n52165.3111 & --1029 & 16 & 991 & 40\\\\[-1.0pt]\n52186.2229 & -- 922 & 17 &1085 & 30\\\\[-1.0pt]\n52186.2451 & -- 942 & 17 &1055 & 30\\\\[-1.0pt]\n52187.2673 & -- 882 & 16 &1072 & 30\\\\[-1.0pt]\n52188.2395 & -- 908 & 18 & 838 & 30\\\\\n\\hline\n\\end{tabular}\n\\end{table} }\n\n\\section{ Magnetic period of $\\gamma$ Equ }\n\\label{sec:magnet}\n\nMagnetic observations presented in Table~\\ref{tab:saores} represent \ncompletely new data. They cover time span of ca. 7 years \nand include the\nphase when the effective magnetic field $B_e$ in $\\gamma$ Equ apparently\nreached its minimum value, and then the slow decrease of $B_e$ observed\nin the recent $\\approx$ 50 years has been reversed. This fact is of \nextraordinary importance, because it allows one for a fairly accurate \ndetermination of the magnetic period and the amplitude of $B_e$ variations\nin $\\gamma$ Equ.\n\nWe have compiled the set of 298 observations of the $B_e$ field in\n$\\gamma$ Equ, scattered in the literature, and appended our measurements.\nThese data cover the time period 1946--2004 (58 years). They are\ndisplayed in Fig.~\\ref{fig:long}. \nNote, that the $B_e$ measurements obtained by Babcock (1958) apparently\ncover the phase of the maximum longitudinal magnetic field in $\\gamma$ Equ.\n\nThe set of $B_e$ measurements analysed in this paper is rather \nheterogeneous. 
The data have been obtained by several different observers\nover a long time period using various instruments and techniques, and it\nis impossible to estimate or test credibly their systematic and random errors,\nparticularly for the earliest observations of the longitudinal magnetic\nfield in $\\gamma$ Equ.\n\nTherefore, we arbitrarily assumed that systematic errors of the $B_e$\nobservations are equal to zero. In other words, all the $B_e$ points for\n$\\gamma$ which were found in the literature are fully compatible.\n\nRandom errors of individual $B_e$ points frequently were given in the\nsource papers, and are denoted by vertical bars in Fig.~\\ref{fig:long}.\nThese errors\nwere not directly available for the earliest \nphotographic measurements by H.W. Babcock (1958) and Bonsack \\& Pilachowski \n(1974). We adopted here the estimated error for Babcock's data equal\n238 G, and 151 G for Bonsack \\& Pilachowski. These numbers were obtained \nin our thorough reanalysis of the earliest papers dealing with measurements\nof stellar magnetic fields, cf. Section 3.1 in Bychkov et al. (2003).\n\nDetermination of the period and other parameters of the apparent magnetic\nvariability for $\\gamma$ Equ was performed in the following manner.\nAssuming that the run of the observed longitudinal field $B_e$ with time\n$T$ can be approximated by a sine wave \n\\begin{equation}\n B_e (T)=B_0+B_1 \\sin\\left[{2\\pi (T-T_0)\\over P}-{\\pi\\over 2}\\right] \\, ,\n\\label{equ:sigma1}\n\\end{equation} \nwe determined all four parameters: the period $P$, the average field $B_0$,\nthe amplitude $B_1$ and the time of zero phase $T_0$\nusing the iterative technique of nonlinear fitting.\n\nStarting values of $P$, $B_0$, $B_1$, $T_0$ and their standard deviations\nwere found by our computer code for the nonlinear least squares method\n(Bychkov et al. 2003). \nThe final values and their errors were then computed with the public domain\ncode ``nlfit.f'', which is designed for curve and surface fitting with the \nLevenberg-Marquardt procedure ({\\sc ODRPACK v. 2.01} subroutines). The code\nis available at the site {\\tt www.netlib.org}.\n\n\nFitting of a sine wave to all the 298 $B_e$ points with errors as in\nFig.~\\ref{fig:long} gave very poor results with the $\\chi^2$ for\na single degree of freedom $\\chi^2\/\\nu = 18.0420$. Such fits are unacceptable,\nand in case of $\\gamma$ Equ the poor fit is the result of underestimated\nerrors of many $B_e$ points. Many $B_e$ observations presented in \nFig.~\\ref{fig:long} have very low errors, which sometimes are less than 20 G.\nOur new $B_e$ points, which are collected in Table~\\ref{tab:saores}, also\nare of such a high formal accuracy.\n\nWe cannot judge, whether an apparent scatter of $B_e$ points in\nFig.~\\ref{tab:saores} is due to unrealistic error estimates or the intrinsic\nshort-term variability of the longitudinal magnetic field in $\\gamma$ Equ.\nThe estimated random error of $B_e$ points about the starting sine wave\nequals to 213 G. For the final fitting of a sine we assumed that all the\n298 points have identical errors of 213 G. \n\nFinal\nvalues of the fitted parameters and their standard deviations $\\sigma$\nfor the sine phase curve are given below. \n\\halign {\\hskip 1 cm #\\hfil\\hskip1mm &#\\hfil \\cr\n\\noalign{\\vskip 5 mm}\n $P_{mag}$ & = $33278 \\pm 1327$ days $= 91.1 \\pm 3.6$ years \\cr\n $T_0 $ & = JD $2417795.0 \\pm 1057. 
$ \\cr\n $B_0 $ & = $-\\, 262 \\pm 22.4 $ G \\cr\n $B_1 $ & = $+\\, 839 \\pm 22.1 $ G \\cr\n $r $ & = $-\\, 0.524 \\pm 0.043$ \\cr\n\\noalign{\\vskip 5 mm}\n}\n\\noindent\nIn other words, a parameter range from $-\\sigma $ to $+\\sigma$ is just\nthe true 68\\% confidence interval for this parameter. \n\nThe above fit of a sine wave with uniform errors of 213 G is very good, with\n$\\chi^2\/\\nu = 1.0134$. The effect of inhomogeneity in the $B_e$ time series\nplus the possible existence of rapid magnetic variability in $\\gamma$ Equ\nwere compensated by the increase of the random error, and neither should\ninfluence the above parameters of secular magnetic variability in \n$\\gamma$ Equ.\n\nThe standard parameter $r$ was defined for the oblique rotator model\nof an Ap star. It is related to the angle $\\beta$ between the magnetic\ndipole axis and the rotational axis, and the angle $i$ between the rotational\naxis and the line of sight (Preston 1967):\n\\begin{equation}\nr = {{\\cos\\beta \\cos i - \\sin\\beta \\sin i} \\over\n {\\cos\\beta \\cos i + \\sin\\beta \\sin i}} \n = {B_e (\\min) \\over {B_e (\\max)}} \\, .\n\\label{eqn:rrr}\n\\end{equation}\nParameters $B_e (\\min)$ and $B_e (\\max)$ of the $B_e$ sine wave for \n$\\gamma$ Equ are given by\n\\halign {\\hskip 1 cm #\\hfil\\hskip1mm &#\\hfil\\hskip1mm &# \\hfil \\cr\n\\noalign{\\vskip 2 mm}\n $B_e({\\rm max}) $ & $=B_0+B_1$ & = $+\\,\\,\\, 577 \\pm 31.4$ G \\cr\n $B_e({\\rm min}) $ & $=B_0-B_1$ & = $ -1101 \\pm 31.4$ G \\cr\n\\noalign{\\vskip 2 mm}\n}\nNote, that the meaning of $B_e({\\rm max}) $ and $B_e({\\rm min}) $ for\nuse in Eq.~\\ref{eqn:rrr} is different: $B_e({\\rm max}) $ denotes there\nthe value of magnetic intensity which has the higher absolute value, and\n$B_e({\\rm min})$ has the lower absolute value. In this way we obtained the\nvalue of $r$ for $\\gamma$ Equ equal to $r=577 \/ (-1101) = -0.524 $.\n\nBychkov et al. (2005a) presented an extensive catalog of the magnetic\nphase curves and their parameters for 136 stars on the main sequence and\nabove it. We quoted there the \npreviously estimated period for $\\gamma$ Equ, $P_{mag}=27027^d$,\nwhich was obtained on the basis of a shorter series of $B_e$ data.\nThis paper and the new, more accurate $P_{mag} = 33278^d$ represents a \nmajor revision of the previously known magnetic period of $\\gamma$ Equ.\n\n\\begin{figure}\n\\resizebox{\\hsize}{0.8\\hsize}{\\rotatebox{0}{\\includegraphics{fig1.eps}}}\n\\caption[]{The longitudinal magnetic field $B_e$ for $\\gamma$ Equ in years\n 1946--2004. }\n\\label{fig:long}\n\\end{figure}\n\n \n\\section{Search for additional magnetic periods in $\\gamma$ Equ}\n\nSignificant scatter of the observed points in the long-term run of $B_e (T)$\nin Fig.~\\ref{fig:long} suggests the search for short-term periodicities.\nWe applied the strategy of prewhitening to the set of available $B_e$\nmeasurements, and removed the principal sine-wave variations from the data.\nPrewhitened data were then analysed with the method developed by Kurtz \n(1985), and with his Fortran code (Kurtz 2004). \n\nSuch a search for peaks in the $B_e$ amplitude spectrum of $\\gamma$ Equ\nin this paper was restricted to trial periods higher than 1 day. \nThis is because many of the earlier magnetic observations for this star \neither have poorly determined the time of measurement, or have \nlong times of exposure (see e.g. Babcock 1958). 
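For reproducibility, the sine-wave fit of Section~\\ref{sec:magnet} and the prewhitening step used in this period search can be sketched with standard Python tools. The fragment below is only an illustrative sketch under stated assumptions: the compiled measurements are assumed to be available as arrays {\\tt jd} and {\\tt be} (with the adopted uniform error of 213 G), the Levenberg--Marquardt fit is performed with {\\tt scipy.optimize.curve\\_fit} instead of the {\\sc ODRPACK}-based {\\tt nlfit.f} code, and a Lomb--Scargle periodogram stands in for the Kurtz (1985) Fortran code actually used in this work.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import lombscargle

# jd, be : arrays with the 298 compiled measurements (JD, B_e in G)

def sine_wave(t, P, T0, B0, B1):
    # Sine model of Section 3: B_e(T) = B0 + B1*sin(2*pi*(T - T0)/P - pi/2)
    return B0 + B1 * np.sin(2.0 * np.pi * (t - T0) / P - np.pi / 2.0)

# Levenberg-Marquardt fit of the four parameters P, T0, B0, B1
p0 = [33000.0, 2417800.0, -260.0, 840.0]          # illustrative starting values
sigma = np.full(be.shape, 213.0)                  # adopted uniform error (G)
popt, pcov = curve_fit(sine_wave, jd, be, p0=p0, sigma=sigma)
perr = np.sqrt(np.diag(pcov))                     # 1-sigma parameter errors

# Prewhitening: remove the principal sine wave before the period search
residuals = be - sine_wave(jd, *popt)

# Amplitude spectrum of the residuals for trial periods longer than 1 day
periods = np.linspace(1.5, 1000.0, 20000)         # trial periods (days)
omega = 2.0 * np.pi / periods                     # angular frequencies
power = lombscargle(jd, residuals - residuals.mean(), omega)
amplitude = np.sqrt(4.0 * power / len(jd))        # semi-amplitude in G
\\end{verbatim}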
\nThe star $\\gamma$ Equ exhibits rapid nonradial pulsations and the\ncorresponding $B_e$ with the period $P_{mag}=12.1$ min (Leone \\& Kurtz\n2003) and, possibly, with simultaneous shorter periods \n(Bychkov et al. 2005b). None of them were analysed in this paper.\n\nWe have identified two additional periods of statistically low significance\nin the range $P_{mag} > 1^d$, see Fig.~\\ref{fig:short}:\n\n\\vskip 3 mm\n$P_1 = 348.07$ days, amplitude $=122$ G \\par\n$P_2 = 23.44$ days, amplitude $=110$ G \\par\n\\vskip 3 mm\n\n\\noindent\nBoth peaks in the amplitude spectrum in Fig.~\\ref{fig:short} exhibit low\nsignal to noise ratio, with noise level at ca. 80 G. The period $P_1$ \nis close to 1 year. Since most of the existing $B_e$ observations for $\\gamma$\nEqu were performed in months July-November, then the peak $P_1$ in the\namplitude spectrum represents a false period which most likely reflects the\naverage 1-year repetition time in the acquisition of the existing magnetic\nmeasurements.\n\nWe believe that the peak $P_2$ in the amplitude spectrum of the $B_e$ field\nof $\\gamma$ Equ is the random effect of a pure noise. The peak is very\nnarrow, in fact, it only appears in a single bin of a very dense discrete\nfrequency mesh.\n\nKurtz (1983) discussed the possible existence of the period of $\\approx 38$\ndays in his photometric observations of $\\gamma$ Equ\nin 1981. That period was of low probability, but \npossibly could be identified with the real rotational period in\nthis star. We do not confirm the existence of the 38 day period in \nlong-term $B_e$ observations of $\\gamma$ Equ, see Fig.~\\ref{fig:short}.\n\n\n\n\\begin{figure}\n\\resizebox{\\hsize}{0.8\\hsize}{\\rotatebox{0}{\\includegraphics{fig2.eps}}}\n\\caption[]{Amplitude spectrum of the $B_e$ time series for \n $\\gamma$ Equ, years 1946--2004. }\n\\label{fig:short}\n\\end{figure}\n\n\n\\section{ Discussion }\n\\label{sec:discussion}\n\nThere exist three possible explanations for the observed long-term behavior \nof the longitudinal magnetic field in $\\gamma$ Equ:\n\n\\vskip 3 mm\n\\noindent\n{\\bf 1.} Precession of the rotational axis (Lehmann 1987). \\par\n\\noindent\n{\\bf 2.} Solar-like magnetic cycle (Krause \\& Scholz 1981), \\par\n\\noindent\n{\\bf 3.} Rotation with the period of 91.2 years. \n\\vskip 3 mm\n\nThe Ap star $\\gamma =$ HD 201601 in fact is a binary system. One can\nassume, that the gravitational force from the secondary companion can cause\nprecession of the Ap star. As the result, the angle between the rotational\naxis and the direction towards the Earth varies periodically. Therefore,\nchanges of the aspect can in principle cause apparent variations of the\nlongitudinal magnetic field $B_e$ or the amplitude of its variations.\n\nEffects of precession in long-period Ap stars were studied by Lehmann \n(1987), who showed that the oblateness of stars caused by the rotational\nor magnetic flattening is not adequate to produce observable precession \neffects. The only exception was 52 Her, where the observed behavior of \nthe star could be interpreted as a precessional motion. \n\nThe above considerations indicate that the precession theory does not\nconvincingly explain $B_e$ variations in this star. \n\nThe idea by Krause \\& Scholz (1981) that we actually observe the solar-like\nmagnetic cycle in $\\gamma$ Equ in which the global magnetic field\nreverses its polarity, cannot be easily verified by the existing \nobservations of the global longitudinal magnetic field $B_e$. 
Moreover,\none can note that such an idea requires the existence of a mechanism in the\ninterior of $\\gamma$ Equ which ensures the transfer of huge magnetic\nenergy into electric currents and vice versa. Note that the required\nefficiency of such a mechanism and the amplitude of magnetic field \nvariations in $\\gamma$ Equ are ca. four orders of magnitude larger than\nthose in the Sun on a similar timescale.\n\nFollowing the widely accepted picture of an Ap\nstar, we believe that the magnetic field of $\\gamma$\nEqu can be approximated by a dipole located in the center\nof the star. The dipole is inclined to the rotational axis of $\\gamma$ Equ.\nWe assume that the magnetic field is stable and remains frozen in the \ninterior of the rotating star at the time of the observations, i.e. for at\nleast 58 years. Therefore, the slow variations of the $B_e$ field\nin $\\gamma$ Equ are caused by an extremely slow rotation, in which case our\n$P_{mag} = P_{rot} = 33278^d$. Such an explanation is supported to some \nextent by the polarimetric measurements of Leroy et al. (1994).\n\nWe plan to perform high-accuracy polarimetric \nmeasurements of $\\gamma$ Equ with the new version of MINIPOL. The device\nwas constructed to measure the angles and the degree of linear polarisation\nof stellar radiation, and will be operational at the Special\nAstrophysical Observatory in 2006. We also expect that we shall be able to\nverify the extremely slow rotation of $\\gamma$ Equ by measuring the rate of \nchange of the polarisation angle of the stellar radiation.\n\n\n\\section{Summary }\n\nThe Ap star $\\gamma$ Equ (HD 201601) exhibited a slow and systematic\ndecrease of the longitudinal magnetic field $B_e$ starting from 1946,\nwhen the global magnetic field of this star was discovered (Babcock 1958).\nWe have compiled the full set of 298 existing $B_e$ measurements, which\nconsists of the $B_e$ data published in the literature and our observations\nobtained during the recent 7 years. The latter magnetic data (33 $B_e$ points)\nwere measured with the echelle spectrograph at the Coude focus of the 1-m\ntelescope at the Special Astrophysical Observatory.\nOur newest observations showed that the longitudinal magnetic field $B_e$\nof $\\gamma$ Equ reached its local minimum and started to rise in 1998-2004.\n\nAll the available data cover the time period of 58 years (1946-2004) and\ninclude both the phases of maximum and minimum $B_e$. \nAssuming that the secular variability of the $B_e$ field is a periodic\nfeature, we determined the parameters of the magnetic field curve of\n$\\gamma$ Equ and give the value of its period, $P=91.1 \\pm 3.6$ years,\nwith the zero phase (maximum of $B_e$) at \n$T_0 =$ JD $2417795.0 \\pm 1057$. A sine-wave fit to the $B_e$ phase curve \nyields $B_e({\\rm max}) =+577 \\pm 31$ G and $B_e({\\rm min}) =-1101 \\pm 31$ G.\n\nSpectral analysis of the 58-year long $B_e$ time series essentially does not\nshow the existence of shorter periods, down to trial periods of $\\approx$\n1 day. More specifically, there are no real shorter periods in the run of the\nlongitudinal magnetic field $B_e$ with amplitudes exceeding the noise\nlevel of 80 G.\n\n\n\\section*{Acknowledgments}\n\nWe are grateful to John D. Landstreet, the referee, for his criticism\nand suggestions regarding our computations and the manuscript. \nWe thank Don Kurtz for providing his Fortran software used\nhere to compute the amplitude spectrum of $\\gamma$ Equ. \nThis research was supported by the Polish Committee for Scientific\nResearch grant No. 
1 P03D 001 26.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix}\n\n\\renewcommand{\\thesubsection}{\\Alph{subsection}}\n\n\\subsection{Results under more attacks}\n\nIn order to verify the effectiveness of the proposed method, in this section, we further evaluate the robustness of our method under a broader range of powerful attacks: $1)$ AutoAttack~\\cite{croce2020reliable} (an ensemble of four strong diverse attacks, which is widely considered as the strongest attack for robustness evaluation), $2)$ CW attack~\\cite{carlini2017towards} (CW-200), $3)$ PGD attack with restart~\\cite{madry2018towards} (PGD-200), $4)$ One-pixel attack~\\cite{su2019one}, $5)$ Spatial Transformation attack~\\cite{xiao2018spatially}, as well as $6)$ Color Channel attack~\\cite{kantipudi2020color}. PGD-200 and CW-200 both restart 5 times with 40 optimization steps each restart.\n\nIn Table~\\ref{tab:other_attacks-1}, we report the robust accuracy under these attacks with AdvCL serving as baseline on CIFAR100.\nThe results show that our methods can improve robustness under all different attacks across almost all settings, \\textit{e.g.}, 21.43\\% vs. 19.57\\% under AutoAttack and 29.56\\% vs. 27.13\\% under PGD-200 attack, with loss function $\\mathcal{L}^{IP+HN}$ (Equation~\\ref{eq:ip+hn}), under Linear Probing.\n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.5}\n\\centering\n\\caption{Robustness evaluation under diverse attacks on CIFAR100 with AdvCL as baseline.}\n\\vspace{5pt}\n\\label{tab:other_attacks-1}\n\\scalebox{0.88}{\n\\begin{tabular}{clcccccc}\n\\toprule\n\\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}} Training Methods\\end{tabular}} & PGD-200 & CW-200 & AA & One-pix. & Spatial-Tr. & Color-Ch. \\\\ \\hline\\hline\n\\multirow{4}{*}{Linear Probing} & AdvCL & 27.13 & 21.85 & 19.57 & 72.10 & 47.94 & 25.62 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 27.87 & 22.10 & 19.80 & 69.60 & 49.31 & 25.88 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 29.43 & 23.10 & 21.23 & \\textbf{73.20} & 51.57 & 28.01 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{29.56} & \\textbf{23.60} & \\textbf{21.43} & 73.00 & \\textbf{52.62} & \\textbf{28.94} \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & AdvCL & 27.29 & 22.01 & 20.09 & \\textbf{72.80} & 47.31 & 24.98 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 27.84 & 22.37 & 20.06 & 71.60 & 46.22 & 24.23 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{29.79} & \\textbf{23.79} & 21.52 & 70.80 & \\textbf{51.04} & \\textbf{27.84} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 29.58 & 23.64 & \\textbf{21.66} & 71.70 & 49.87 & 27.14 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & AdvCL & 29.48 & 25.73 & 24.46 & \\textbf{72.20} & 57.86 & 25.12 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 30.10 & 26.05 & 24.73 & 71.00 & 58.95 & 25.55 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{30.46} & \\textbf{26.60} & \\textbf{25.22} & 69.30 & 59.04 & \\textbf{26.02} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{30.46} & 26.54 & 25.06 & 69.00 & \\textbf{59.33} & 25.70 \\\\ \\toprule\n\\end{tabular}}\n\\end{table}\n\nTable~\\ref{tab:other_attacks-2} provides results on CIFAR10 under canonical optimization-based attack methods: PGD-200, CW-200 and AutoAttack. Our methods also yield robustness gain in almost all settings. 
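To make the evaluation protocol concrete, the robust accuracies reported in these tables can be computed with a standard white-box evaluation loop. The snippet below is only a minimal PyTorch-style sketch of an $\\ell_\\infty$ PGD evaluation; the perturbation budget $\\epsilon=8\/255$ and step size are illustrative assumptions, and the stronger attacks (CW, AutoAttack, One-pixel, etc.) would typically be run with their reference implementations.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # Minimal untargeted l_inf PGD with random start (sketch).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def robust_accuracy(model, loader, device, steps=20):
    # Fraction of test samples still classified correctly under attack.
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y, steps=steps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
\\end{verbatim}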
\n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.5}\n\\centering\n\\vspace{10pt}\n\\caption{Robustness evaluation under optimization-based attacks on CIFAR10, with AdvCL as baseline.}\n\\vspace{5pt}\n\\label{tab:other_attacks-2}\n\\begin{tabular}{clccc}\n\\toprule\n\\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Training Methods\\end{tabular}} & PGD-200 & CW-200 & AutoAttack \\\\ \\hline\\hline\n\\multirow{4}{*}{Linear Probing} & AdvCL & 51.05 & 45.65 & 43.48 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 51.99 & 46.02 & 43.57 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{52.36} & \\textbf{46.09} & \\textbf{43.68} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 52.01 & 45.35 & 42.92 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & AdvCL & 52.30 & 46.04 & 43.93 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 52.77 & 46.60 & \\textbf{44.22} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & \\textbf{53.22} & \\textbf{46.44} & 44.15 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 52.77 & 45.55 & 43.01 \\\\ \\hline\n\\multirow{4}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & AdvCL & 52.90 & 50.92 & 49.58 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & \\textbf{53.61} & 51.25 & 49.90 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 53.25 & 51.11 & 49.93 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 53.51 & \\textbf{51.46} & \\textbf{50.28} \\\\ \\toprule\n\\end{tabular}\n\\end{table}\n\\vspace{5pt}\n\nBesides, we also report results compared with RoCL under PGD-200, CW-200 and AutoAttack in Table~\\ref{tab:rocl-other_attacks}, which further validate the effectiveness of the proposed methods. For instance, 25.09\\% vs. 23.51\\% under CW-200 attack, Adversarial Full Finetuning scheme, on CIFAR100.\n\n\\begin{table}[h]\n\\fontsize{8}{9}\\selectfont \n\\renewcommand\\arraystretch{1.7}\n\\centering\n\\vspace{10pt}\n\\caption{Robustness evaluation under optimization-based attacks, with RoCL as baseline, on CIFAR-10 and CIFAR-100.}\n\\vspace{5pt}\n\\label{tab:rocl-other_attacks}\n\\begin{tabular}{cclccc}\n\\toprule\n\\multicolumn{1}{l}{Dataset} & \\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Training Methods\\end{tabular}} & PGD-200 & CW-200 & AutoAttack \\\\ \\hline\\hline\n\\multirow{6}{*}{CIFAR10} & \\multirow{2}{*}{Linear Probing} & RoCL & 32.47 & 33.33 & 24.11 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{34.13} & \\textbf{34.59} & \\textbf{24.58} \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & RoCL & 42.58 & 40.21 & \\textbf{31.81} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{43.54} & \\textbf{41.26} & 30.37 \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & RoCL & 50.33 & 47.57 & 46.69 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{51.47} & \\textbf{48.26} & \\textbf{47.05} \\\\ \\hline\n\\multirow{6}{*}{CIFAR100} & \\multirow{2}{*}{Linear Probing} & RoCL & 14.93 & 14.75 & 7.58 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{17.95} & \\textbf{16.57} & \\textbf{8.58} \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & RoCL & 22.59 & 18.99 & \\textbf{11.93} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{24.46} & \\textbf{20.69} & 11.69 \\\\ \\cline{3-6} \n & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} & RoCL & 27.95 & 
23.51 & 22.70 \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{29.37} & \\textbf{25.09} & \\textbf{24.01} \\\\ \\toprule\n\\end{tabular}\n\\end{table}\n\\vspace{5pt}\n\\end{document}\n\n\\section{Introduction}\n\n\nThough Deep Neural Networks (DNNs) have exhibited highly expressive performance and even surpass humans on many tasks, they are rigorously challenged by their vulnerability to \\textit{adversarial examples} \\cite{szegedy2014intriguing,goodfellow2014explaining}, which are artificially crafted to mislead the state-of-the-art models into incorrect predictions.\nDNN's weakness in the face of adversarial examples poses severe threats to their applications in security-critical systems, e.g. autonomous driving, surveillance and face recognition~\\cite{akhtar2018threat,kurakin2018adversarial,cao2019adversarial,vakhshiteh2020threat}.\nOne of the most powerful defense strategies against adversarial attacks is adversarial training (AT)~\\cite{kannan2018adversarial,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb}, built upon a min-max optimization problem where the inner one constructs adversaries by maximizing the loss and the outer one fits the network's parameters to them by minimizing the expected loss~\\cite{madry2018towards}. AT is effective yet highly depends on labeled data~\\cite{chen2020adversarial,kim2020adversarial}.\n\n\nHow to leverage unlabeled data to perform AT is an important and worth-exploring problem~\\cite{gowal2020self,chen2020adversarial}.\nIn real-world practice, we can easily acquire huge amounts of domain-related unlabeled data, while it is often highly expensive to get them labeled.\nMoreover, to emphasize the value of unlabeled data, some recent works claim that robustness should not inherently require labels because we essentially ask predictors to be stable around naturally occurring inputs~\\cite{carmon2019unlabeled}, and they resort to semi-supervised learning to boost model's robustness \\cite{alayrac2019labels,carmon2019unlabeled}. \nNevertheless, these methods still require labeled images to generate pseudo-supervision for consequent adversarial training, and can be in essence regarded as fully-supervised adversarial training. How to train robust models totally from unlabeled data remains an important but less explored question.\nMost recently, some works attempted to combine contrastive and adversarial training to perform AT on unlabeled data. 
The main idea is to generate adversaries against the contrastive loss first, then maximize the similarity between clean views and their adversarial counterparts~\\cite{kim2020adversarial,fan2021does,jiang2020robust}.\nRoCL\\cite{kim2020adversarial} first proposed the above unlabeled AT scheme against the contrastive loss, and AdvCL\\cite{fan2021does} proposes to minimize the learning task gap between unlabeled contrast and labeled finetuning by introducing pseudo-supervision in the pretraining stage, achieving state-of-the-art performance.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.41]{images\/intro\u56fe4.png}\n\\caption{\nSchematic illustration of the conflict brought by adversarial contrastive learning, and of our proposed new views on adversaries that reduce such conflicts.\n}\n\\label{fig:intro}\n\\end{figure}\n\nAlthough existing \\textit{Adversarial Contrastive Learning} works have achieved satisfactory robustness in unlabeled settings, we argue that the current simple\nextension from contrastive learning to adversarial learning\ndoes not treat adversaries correctly and brings severe conflicts into the pretraining objective. The adversaries in ACL are generated against the contrastive loss, i.e., they are driven as far away from their anchor as possible and pushed close to other data points, contrary to CL's objective. Conventional CL pushes an anchor away from other data points; when we introduce adversarial learning, the contrast between adversaries and their anchors attracts the anchors towards the other data points that the adversaries are close to, which, from a geometric perspective, is exactly the opposite of the direction of CL's exertion, as shown in Figure \\ref{fig:intro}(b), engendering absolute conflicts in the training objective. Moreover, we will show by a limiting case that the current introduction of adversaries actually sets a potential objective of shrinking all images into one point. The problem is rooted in the fact that current adversarial contrastive learning treats adversaries the same as other clean augmentations and asks the anchors to head towards them.\n\nSo how should adversaries be treated correctly in adversarial contrastive learning? We present two new treatments of adversaries, from the perspectives of both positives and negatives, to alleviate this conflict. Firstly, as positives, we propose to view adversaries as \\textit{inferior positives} that have asymmetric similarity with normal positives (clean augmentations), to directly reduce the conflict, as shown in Figure~\\ref{fig:intro}(c). On the other hand, we propose to treat them as \\textit{hard negatives} that should be upweighted when pushed away from other data points, to further help relieve the conflict and make the model more robustness-aware, as shown in Figure~\\ref{fig:intro}(d).\n\nWhen viewed as inferior positives, we propose that the similarity between clean views and adversarial views should not be symmetric, i.e., we want adversarial views to be similar to clean ones and to be pulled back, but we do not want the model to learn to represent clean images as adversaries, which are intentionally perturbed. We put forward an adaptive gradient stopping strategy to model this asymmetry.\nMoreover, we argue that adversarial views have excellent intrinsic properties for becoming hard negatives, in which case they should be upweighted to make the model more robustness-aware. As proposed in \\cite{robinson2020contrastive}, two principles to make a good negative sample for $x$ are: 1. 
\"True negative\" $x^-$ whose label differs from that of the anchor $x$; 2. The embedding currently believes to be similar to $x$.\nAdversaries, as stated above, are often close to some other data points that could have different labels, satisfying our desire for hard negatives perfectly, what we need to do is to distinguish each data point from its surrounding other classes' adversarial views. Here we combine ideas of positive-unlabeled learning~\\cite{du2014analysis,elkan2008learning} with adversarial and resort to the prevalent practice for reweighting negatives in contrastive learning \\cite{robinson2020contrastive,chuang2020debiased}, to effectively sample true adversarial negatives and reweight each sample according to the current similarity.\n\nTo sum up, our contributions are as follows: \n\\begin{itemize}\n \\item We are the first to consider modeling the asymmetric property of \\textit{Adversarial Contrastive Learning} and propose to perform asymmetric gradient stopping, which can boost both standard accuracy and robustness.\n \\item We provide a new perspective for adversaies in contrastive learning, i.e., view them as hard negatives, which can also be extended to other adversarial scenarios.\n \\item We present a generalized asymmetric InfoNCE loss: A-InfoNCE that unifies all current contrastive learning methods and can integrate our two proposed new views conveniently.\n \\item Our methods are compatible with current adversarial contrastive learning methods, outperform the chosen baselines and achieve new state-of-the-art performance.\n \n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn this work, we study enhancing model robustness using unlabeled data and investigate the \\textit{identity confusion} issue in Adversarial CL, \\textit{i.e.}, adversaries with different identities attract their anchors together, contradicting to the objective of CL. \nWe present a generic asymmetric objective \\textit{A-InfoNCE}, and treat adversaries discriminatingly as \\textit{inferior positives} or \\textit{hard negatives}, which can overcome the identify confusion challenge.\nComprehensive experiments with quantitative and qualitative analysis show that our methods can enhance existing Adversarial CL methods effectively.\nFurther, it lies in our future work to extend the proposed asymmetric form to other CL settings to take into consideration the asymmetric characteristics between different views.\n\n\\section*{Acknowledgement}\n\nThis work was supported in part by the National Key R\\&D Program of China under Grant 2021ZD0112100, partly by Baidu Inc. through Apollo-AIR Joint Research Center. 
We would also like to thank the anonymous reviewers for their insightful comments.\n\\section{Asymmetric InfoNCE}\n\n\\subsection{Notations}\n\n\n\\paragraph{Contrastive Learning (CL) }\n\nCL aims to learn generalizable features by maximizing agreement between self-created positive samples while contrasting to negative samples.\nIn typical contrastive learning, each instance $x$ will be randomly transformed into two views $(x_1, x_2)$, then fed into a feature encoder $f$ with parameters $\\theta$ to acquire normalized projected features, \\textit{i.e.}, $z_i = f(x_i;\\theta)$.\nLet $\\mathcal{P}(i)$ denote the set of positive views of $x_i$, containing the views transformed from $x$ with the same instance-level \\textit{identity} (\\textit{e.g.}, augmentations of the original image $x_i$); $\\mathcal{N}(i)$ denotes the set of negative views of $x_i$, containing all the views from other instances.\nThe conventional InfoNCE loss function~\\cite{oord2018representation} used in CL for a positive pair $(x_i,x_j)$ is defined as:\n\\begin{align}\n \\mathcal{L}_{\\rm CL}(x_i,x_j) \n =\n - \\log \\frac\n {\\exp({\\rm sim}(z_i, z_j)\/t)}\n {\n \\exp({\\rm sim}(z_i, z_j)\/t) + \n \\sum_{k\\in \\mathcal{N}(i)} \\exp({\\rm sim}(z_i, z_k)\/t)\n }\n \n\\end{align}\nwhere $x_i$ serves as the anchor, ${\\rm sim}(z_i, z_j)$ denotes a similarity metric (\\textit{e.g.}, cosine similarity) between $z_i$ and $z_j$, and $t$ is a temperature parameter.\nThe final loss of the CL problem is averaged over all positive pairs of instances.\n\n\\paragraph{Adversarial CL}\nAdversarial CL can be regarded as an extension of CL by adding adversarial samples into the positive sets $\\mathcal{P}(\\cdot)$ to contrast. Adversarial CL is typically modeled as the following min-max optimization formulation to incorporate instance-wise attack~\\cite{madry2018towards,fan2021does}:\n\\begin{align}\n \\min_\\theta \\mathbb{E}_{x\\in \\mathcal{X}} \\max_{||\\delta||_\\infty \\leq \\epsilon} \\sum_i\\sum_{j\\in \\mathcal{P}(i)}\n \\mathcal{L}_{\\rm CL}(x_i, x_j),\\quad \\mathcal{P}(i)\\leftarrow \\mathcal{P}(i) \\cup \\{\\hat{x}_i+\\delta\\}\n\\end{align}\nwhere $\\hat{x}_i$ is the view of $x_i$ used to generate adversarial samples, $\\delta$ is the adversarial perturbation whose infinity norm is constrained as less than $\\epsilon$.\nIn the above formulation, the inner maximization problem constructs adversarial samples by maximizing the contrastive loss, and the outer minimization problem optimizes the expected worst-case loss w.r.t. the feature encoder $f$. \n\n\\subsection{Asymmetric InfoNCE: A Generic Learning Objective}\nCurrent Adversarial CL frameworks directly inherit CL's conventional contrastive loss (\\textit{e.g.}, InfoNCE) to evaluate the similarity between adversarial and clean views in a symmetric fashion. 
This can result in ineffective or even conflicting updates during CL training as aforementioned.\nTo address this challenge, we propose a generic Asymmetric InfoNCE loss (\\textit{A-InfoNCE}) to incorporate the asymmetric influences between different contrast instances, given by:\n\\begin{equation} \\label{eq:1}\n\\resizebox{\\textwidth}{!}{\n $\\mathcal{L}^{\\rm asym}_{\\rm CL} (x_i, x_j;{\\alpha},\\lambda^p, \\lambda^n) \n = \n - \\log \\frac\n {\\lambda^p_j \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t)}\n {\\lambda^p_j \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t) + {\\sum_{k\\in \\mathcal{N}(i)} \\lambda^n_k\\cdot\\exp({\\rm sim^{\\alpha}}(z_i, z_k)\/t)}}\n \n $\n}\n\\end{equation}\nwhere $\\rm sim^{\\alpha}(\\cdot)$ is a generalized similarity metric that enables the incorporation of asymmetric relationships (a concrete instantiation is described in the next section); $\\lambda^p$ and $\\lambda^n$ are asymmetric weighting factors for positive and negative pairs, respectively.\nIt is worth noting that although A-InfoNCE is proposed to address the \\textit{identity confusion} issue in Adversarial CL, it can be easily extended to other CL settings when the asymmetric characteristics between different views need to be captured.\nA-InfoNCE can also generalized to many existing CL methods, for example, $\\mathcal{P}(i)$ and $\\mathcal{N}(i)$ can be altered to different choices of positive and negative views; ${\\rm sim^{\\alpha}}(z_i, z_j)$ is also changeable to a symmetric similarity metric for $z_i$ and $z_j$. $\\lambda^p$ and $\\lambda^n$ control the weights of different positive\/negative pairs. Generalization strategies are itemized below:\n\\begin{itemize}\n \\item If ${\\rm sim^{\\alpha}}(z_i, z_j)$ is a symmetric similarity metric and $\\lambda^p, \\lambda^n = 1$, it degrades to the conventional InfoNCE loss used in CL~\\cite{pmlr-v119-chen20j}.\n \\item If $\\mathcal{P}(i)$ is altered, it corresponds to positives sampling~\\cite{tian2020contrastive,bachman2019learning,tian2020makes}\n . When we add adversaries into $\\mathcal{P}(i)$, it degenerates to the conventional Adversarial CL objectives, where $\\lambda^p, \\lambda^n = 1$ with symmetric ${\\rm sim^{\\alpha}}(z_i, z_j)$~\\cite{kim2020adversarial,jiang2020robust,fan2021does}.\n \\item If we seek better $\\mathcal{N}(i)$, it echos negative sampling methods~\\cite{robinson2020contrastive,kalantidis2020hard} such as Moco~\\cite{he2020momentum}, which maintains a queue of consistent negatives; or mimics DCL~\\cite{chuang2020debiased} that debiases $\\mathcal{N}(i)$ into true negatives. 
\n \\item If we change $\\lambda^p$ and $\\lambda^n$, it mirrors the pair reweighting works~\\cite{chuang2020debiased,robinson2020contrastive} that assign different weights to each pair according to a heuristic measure of importance such as similarity.\n\\end{itemize}\nWhile most existing methods adopt a symmetric similarity metric, we claim that in some scenarios the asymmetric similarity perspective needs to be taken into account, especially when the quality and property of different views vary significantly.\nIn this paper, we focus on the study of Adversarial CL, and demonstrate the benefits of \ncapturing the asymmetric relationships between adversaries and clean views.\nSpecifically, we design two instantiations to model the asymmetric relationships between adversarial and clean samples, as detailed in next section.\nBoth instantiations \ncan be integrated into the proposed \\textit{A-InfoNCE} framework.\n\n\n\n\n\n\n\\section{Related Work}\n\n\\paragraph{Contrastive Learning}\n\nCL has been widely applied to learn generalizable features from unlabeled data~\\cite{pmlr-v119-chen20j,he2020momentum,tian2020contrastive,grill2020bootstrap,chen2020improved,caron2020unsupervised,bachman2019learning,oord2018representation,chen2020big,chen2021large,khosla2020supervised}. The basic idea is instance discrimination~\\cite{wu2018unsupervised}.\nRepresentative works include CMC~\\cite{tian2020contrastive}, SimCLR\\cite{pmlr-v119-chen20j}\n, MoCo\\cite{he2020momentum}, SwAV\\cite{caron2020unsupervised}, BYOL\\cite{grill2020bootstrap}.\nThere is also a stream of work focusing on refined sampling on different views for improved performance ~\\cite{tian2020contrastive,kalantidis2020hard,chuang2020debiased,robinson2020contrastive,tao2021clustering}.\nFor example, DCL\\cite{chuang2020debiased} proposed to \\textit{debias} the assumption that all negative pairs are true negatives. HCL\\cite{robinson2020contrastive} extended DCL and proposed to mine hard negatives for contrastive learning, whose embeddings are uneasy to discriminate. \n\n\\paragraph{Adversarial Training}\n\nAdversarial training (AT) stems from \\cite{goodfellow2014explaining} and adopts a min-max training regime that optimizes the objective over adversaries generated by maximizing the loss~\\cite{madry2018towards,zhang2019theoretically,shafahi2019adversarial,zhang2019you,wong2020fast,zhu2019freelb,gan2020large,pang2019rethinking}. Some recent work introduced unlabeled data into AT~\\cite{hendrycks2019using,chen2020adversarial,carmon2019unlabeled,alayrac2019labels,kim2020adversarial}. By leveraging a large amount of unlabeled data, \\cite{carmon2019unlabeled,alayrac2019labels} performed semi-supervised self-training to first generate pseudo-supervisions, then conducted conventional supervised AT. Our work explores how to learn robust models without any class labels.\n\n\n\\paragraph{Adversarial Contrastive Learning}\n\nSome recent studies applied CL on adversarial training~\\cite{kim2020adversarial,jiang2020robust,fan2021does,gowal2020self}, by considering adversaries as positive views for contrasting, such that the learned encoder renders robust data representations. RoCL~\\cite{kim2020adversarial} was the first to successfully show robust models can be learned in an unsupervised manner.\nAdvCL~\\cite{fan2021does} proposed to empower CL with pseudo-supervision stimulus.\nSame as CL, these Adversarial CL methods perform symmetric contrast for all pairs, which could potentially induces conflicts in CL and AT training objectives. 
We are the first to investigate the asymmetric properties of Adversarial CL, by treating adversaries discriminatingly.\n\\section{Adversarial Asymmetric Contrastive Learning}\n\nThis section explains the instantiations of the \\textit{A-InfoNCE} loss for Adversarial CL. From the \\textit{inferior-positive} perspective, to reduce the impact of identity confusion, we first design a new asymmetric similarity metric ${\\rm sim^{\\alpha}}(z_i, z_j^{adv})$ for modeling the asymmetric relationships and weakening the learning signals from adversarial examples. From the \\textit{hard-negative} perspective, we view adversaries as hard negatives for other negative samples, and reweight each negative pair by assigning similarity-dependent weights to ease the identity confusion. \n\n\\subsection{Adversarial Samples as Inferior Positives}\n\nAdversarial samples with different identities may attract their anchors (clean samples) in a manner that contradicts the exertion of CL. By weakening the learning signal from these adversarial examples in the positive contrast (as \\textit{inferior positives} that attract the anchors less), we can effectively mitigate this undesired pull on the clean samples via an adaptive gradient stopping strategy.\n\n\\subsubsection{Asymmetric Similarity Function.}\n\nAs the symmetric nature of InfoNCE can bring conflicts in Adversarial CL, we design a new asymmetric similarity function ${\\rm sim^{\\alpha}}(z_i, z_j)$ for \\textit{A-InfoNCE}, by manipulating the scale of the gradient for each contrasted branch. We decompose it into two parts, one for each branch:\n\\begin{align} \\label{eq:2}\n {\\rm sim^{\\alpha}}(z_i, z_j) \n = \n \\alpha \\cdot {\\rm \\overline{sim}}(z_i, z_j) \n +\n (1 - \\alpha) \\cdot {\\rm \\overline{sim}}(z_j, z_i)\n\\end{align}\nwhere ${\\overline{\\rm sim}(a, b)}$ denotes the one-sided similarity of $a$ to $b$, \\textit{i.e.}, when maximizing ${\\overline{\\rm sim}(a, b)}$, we freeze $b$ and only move $a$ towards $b$. This can be implemented by stopping the gradient back-propagation for $b$ and only optimizing $a$.\n\nWe use a hyperparameter $\\alpha$ to control how much $z_i$ and $z_j$ head towards each other. For a clean sample and an adversarial sample, we let $\\alpha$ denote the coefficient of the clean branch's movement. If $\\alpha$ is 0, it performs total gradient freezing on the clean branch and only the adversarial representations are optimized through training. Our empirical analysis finds that $\\alpha$ is relatively easy to tune for boosted performance. We show that any value lower than 0.5 brings a reasonable performance boost (see Figure~\\ref{fig:ablation1}), as clean samples then move less towards adversaries, following the intrinsic asymmetric property of Adversarial CL.\n\n\\subsubsection{Adaptive $\\alpha$-annealing.}\n\nWhen the \\textit{identity confusion} is at play, it is necessary to treat adversarial samples as inferior to ensure model robustness. But as training progresses, when the model learns robust representations and the negative identity-changing impact of the adversarial perturbation wanes, we can consider adversarial perturbations as strong augmentations, equal to other typical transformations~\\cite{pmlr-v119-chen20j}. \n\nThe question is how to measure the reduction of the identity confusion effect. Here we take a geometric perspective and propose to adaptively tune the proportional coefficient $\\alpha$ on-the-fly based on the Euclidean distance. 
Let $d_{i,j} = ||z_i - z_j||_2$ denote the distance between an original image and its adversarial view in the representation space.\nGiven $\\alpha_{min}$, $d_{max}$, $\\alpha_{max}$, $d_{min}$,\nthe goal is for $\\alpha$ to be $\\alpha_{max}$ when the distance approximates $d_{min}$, and to be $\\alpha_{min}$ when the distance is close to $d_{max}$. During training, we first compute the current representation distance $d$, then use a simple linear annealing strategy to compute $\\alpha$:\n\\begin{align}\n \\alpha = \\alpha_{min} + (d_{max}-d)\\frac{\\alpha_{max}-\\alpha_{min}}{d_{max}-d_{min}}\n\\end{align}\n$d_{min}$ and $\\alpha_{min}$ can be treated as hyperparameters. $\\alpha_{max}$ is 0.5, indicating that the adversarial perturbation is treated equally to other transformations and that ${\\rm sim}^\\alpha(z_i, z_j)$ degrades to the symmetric similarity. Moreover, we use the first $N$ epochs as a warm-up period to compute the average distance as $d_{max}$; during this period $\\alpha$ is fixed.\n\n\\subsubsection{Adversarial CL Loss with Inferior Positives.}\nWith the above asymmetric similarity function $\\rm sim^\\alpha(\\cdot)$ and the \\textit{A-InfoNCE} loss function $\\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j;{\\alpha},\\lambda^p, \\lambda^n)$, the complete Adversarial CL loss with \\textit{inferior positives} (IP) can be written as: \n\\begin{small}\n\\begin{align} \\label{eq:ip}\n \\mathcal{L}^{\\rm IP} \n = \n \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; 0.5, 1, 1) \n +\n \\gamma \\cdot \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j^{adv}; \\alpha, 1, 1)\n\\end{align}\n\\end{small}\nwhere the first part stands for the standard CL loss that maximizes the similarity between two clean views, which is symmetric ($\\alpha=0.5$) with $\\lambda^p = \\lambda^n = 1$, degrading to the conventional InfoNCE loss. The second part is a robust CL loss that maximizes the agreement between clean and adversarial views, but uses the asymmetric similarity function (\\ref{eq:2}) with a hyperparameter $\\alpha$ that gives weaker learning signals to the counterparts of the inferior adversarial samples. The hyperparameter $\\gamma$ balances the robustness and accuracy objectives.\n\n\\subsection{Adversarial Samples as Hard Negatives}\n\nBesides inferior positives, we also propose an alternative view of adversaries as \\textit{hard negatives}~\\cite{robinson2020contrastive} that should be pushed away from surrounding data points with higher weights. This can potentially assuage the confusion brought by adversarial samples of the current instance residing too close to the negative samples of the same instance (as illustrated in Figure~\\ref{fig:intro}(d)). Furthermore, this strategy encourages the model to be more robustness-aware, by giving higher weights in the pretraining stage to adversarial samples possessing undiscriminating features, further enhancing Adversarial CL.\n\nIn practice, we assign a similarity-based weight to each pair. To set a basis for the weight assignment, we adopt the simple and adaptive weighting strategy used in~\\cite{robinson2020contrastive}, \\textit{i.e.}, taking each pair's similarity as its weight, with $w_{i,j} = \\exp({\\rm {sim}}(z_i, z_j)\/t)$. By doing so, adversaries with a bad instance-level identity (greater similarity to negative samples) are automatically assigned higher weights. 
The weights can adaptively decay as the instance identity recovers during training.\n\nHowever, as the commonly-used $\\mathcal{N}(i)$ is uniformly sampled from the entire data distribution $p(x)$~\\cite{chuang2020debiased} (\\textit{e.g.}, SimCLR~\\cite{pmlr-v119-chen20j} uses other instances in the current batch as negative samples), simply taking similarities as weights may heavily repel semantically-similar instances whose embeddings should be close. To estimate the true negatives distribution $p^-(x)$ , we take advantage of PU-learning~\\cite{du2014analysis,elkan2008learning} and resort to DCL,HCL~\\cite{chuang2020debiased,robinson2020contrastive} to debias negative sampling.\n\nPU-learning~\\cite{du2014analysis} decomposes the data distribution as: $p(x) = \\tau p^+ (x) + (1-\\tau) p^- (x)$, where $p^+(x), p^-(x)$ denote the distribution of data from the same or different class of $x$, and $\\tau$ is the class prior. Thus\n$p^{-}(x)$ can be rearranged as $p^{-}(x)=\\big(p(x) - \\tau p^+ (x)\\big)\/(1-\\tau)$. We can use all instances and positive augmentations containing adversarial samples of $x$ to estimate $p(x)$ and $p^+(x)$, respectively. Following~\\cite{chuang2020debiased}, we debias the negative contrast part in (\\ref{eq:1}) as:\n\\begin{small}\n\\begin{align}\n \\frac{1}{1-\\tau}\n \\Big(\n \\sum_{k\\in \\mathcal{N}(i)} w_{i,k}^n \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_k)\/t)\n -\n \\frac{N}{M} \\cdot \\tau \n \\sum_{j\\in \\mathcal{P}(i)} w_{i,j}^p \\cdot \\exp({\\rm sim^{\\alpha}}(z_i, z_j)\/t)\n \\Big)\n\\end{align}\n\\end{small}\nwhere $M, N$ are the numbers of postives and negatives, $w_{i,k}^n$ is the aforementioned weights for negatives, $w_{i,j}^p$ is a expandable weight for positives (set as 1 in our implementation, other choices can be further explored in the future work).\n\n\\subsubsection{Adversarial CL Loss with Hard Negatives.}\nWe substitute (7) into the \\textit{A-InfoNCE} loss function (\\ref{eq:1}) and rearrange it, acquiring the instantiation of \\textit{A-InfoNCE} loss with \\textit{hard negatives} (HN), with concrete forms of $\\lambda^p$ and $\\lambda^n$ as:\n\\begin{small}\n\\begin{align} \\label{eq:hn}\n \\mathcal{L}^{HN} \n = \n \\sum_i\\sum_{j\\in \\mathcal{P}(i)}\n \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; \\alpha, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n),\\quad k\\in\\mathcal{N}(i)\n\\end{align}\n\\end{small}\nDue to the lack of class information, we treat $\\tau$ as a hyperparameter and set as~\\cite{chuang2020debiased} suggested. \n\n\\subsubsection{Combined Adversarial CL Loss.}\nFinally, we can view adversaries both as inferior positives and hard negatives for other negative samples. This leads to following combined Adversarial CL loss:\n\\begin{small}\n\\begin{align} \\label{eq:ip+hn}\n \\mathcal{L}^{IP+HN}\n &=\n \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j; 0.5, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n) \\ \n + \\nonumber \\\\\n & \\gamma \\cdot \\sum_i\\sum_{j\\in \\mathcal{P}(i)} \\mathcal{L}^{\\rm asym}_{\\rm CL}(x_i, x_j^{adv}; \\alpha, \\frac{M-(M+N)\\tau}{M-M\\tau}w_{i,j}^p, \\frac{1}{1-\\tau}w_{i,k}^n),\\quad k\\in\\mathcal{N}(i)\n\\end{align}\n\\end{small}\n\\section{Experiments}\n\nTo demonstrate the effectiveness and generalizability of the proposed approach, we present experimental results across different datasets and model training strategies. 
Our methods are compatible with existing Adversarial CL frameworks, and can be easily incorporated by replacing their CL loss. \nWe choose two baselines and replace their loss with $\\mathcal{L}^{IP}$(in Equation~\\ref{eq:ip}), $\\mathcal{L}^{HN}$(\\ref{eq:hn}) and $\\mathcal{L}^{IP+HN}$(\\ref{eq:ip+hn}) for evaluation.\n\n\\textbf{Datasets.} We mainly use CIFAR-10 and CIFAR-100 for our experiments. Each dataset has 50,000 images for training and 10,000 for test. STL-10 is also used for transferability experiments. Following previous work~\\cite{fan2021does}, we use ResNet-18~\\cite{he2016deep} as the encoder architecture in all experiments.\n\n\\setlength{\\tabcolsep}{0pt}\n\\setlength{\\arrayrulewidth}{0.2mm}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[t]\n\\caption{Results for replacing the objectives of the two baselines with $\\mathcal{L}^{IP}$, $\\mathcal{L}^{HN}$ and $\\mathcal{L}^{IP+HN}$, in Standard Accuracy (SA) and Robust Accuracy (RA).\nThe pre-trained methods are evaluated under the Linear Probing (LP), Adversarial Linear Finetuning (ALF) and Adversarial Full Finetuning (AFF) strategies.\nSupervised methods are trained under conventional adversarial training scheme\n}\n\\centering\n\\fontsize{8.5}{9}\\selectfont \n\\scalebox{0.9}{\n\\begin{tabularx}{1\\textwidth}\n{ m{1.2cm}\n m{1.6cm}\n m{2.2cm}\n \n \n \n \n \n \n P{1.1cm}\n P{1.2cm}\n P{1.1cm}\n P{1.3cm}\n P{1.1cm}\n P{1.1cm}\n \n \n \n \n \n \n P{0.5cm}\n }\n\\toprule\n\\multicolumn{1}{l}{\\multirow{3}{*}{Dataset}} & \\multicolumn{2}{c}{\\multirow{3}{*}{\\makecell[c]{Pre-training \\\\ Methods}}} & \\multicolumn{6}{c}{Finetuning Strategies} \\\\ \\cmidrule(l){4-9} \n\\multicolumn{1}{l}{} & \\multicolumn{2}{c}{} & \\multicolumn{2}{c}{Linear Probing} & \\multicolumn{2}{l}{\\begin{tabular}[c]{@{}c@{}}Adversarial Linear\\\\ Finetuning\\end{tabular}} & \\multicolumn{2}{c}{\\begin{tabular}[c]{@{}c@{}}Adversarial Full\\\\ Finetuning\\end{tabular}} \\\\ \\cmidrule(l){4-9} \n\\multicolumn{1}{c}{} & \\multicolumn{2}{c}{} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} & \\multicolumn{1}{c}{SA} & \\multicolumn{1}{c}{RA} \\\\\n\\hline\\hline\n\\multirow{10}{*}{\\makecell[c]{CIFAR \\\\ 10}} & \\multirow{2}{*}{Supervised} & AT~\\cite{madry2018towards} &-&-&-&-& 78.99 & 47.41 & \\scriptsize{1}\\\\\n & & TRADES~\\cite{zhang2019theoretically} &-&-&-&-& 81.00 & 53.27 & \\scriptsize{2} \\\\ \\cmidrule(l){2-9} \n & \\multirow{8}{*}{\\begin{tabular}[c]{@{}c@{}}Self-\\\\ Supervised \\end{tabular}} & RoCL~\\cite{kim2020adversarial} & 83.84 & 38.98 & 79.23 & 47.82 & 77.83 & 50.54 & \\scriptsize{3} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & \\textbf{87.63} & 41.46 & \\textbf{84.15} & 50.08 & 78.97 & 50.29 & \\scriptsize{4} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 84.14 & 40.00 & 79.40 & 48.31 & 78.84 & 51.73 & \\scriptsize{5} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & 85.69 & \\textbf{42.96} & 81.91 & \\textbf{50.90} & \\textbf{80.06} & \\textbf{52.95} & \\scriptsize{6} \\\\ \\cmidrule(l){3-9}\n & & AdvCL~\\cite{fan2021does} & 81.35 & 51.00 & 79.24 & 52.38 & 83.67 & 53.35 & \\scriptsize{7} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 82.37 & 52.33 & 80.05 & \\textbf{53.22} & \\textbf{84.12} & 53.56 & \\scriptsize{8} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 81.34 & 52.61 & 78.69 & 53.20 & 83.44 & \\textbf{54.07} & \\scriptsize{9} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{83.15} & \\textbf{52.65} & \\textbf{80.41} & 53.19 & 83.93 & 53.74 & \\scriptsize{10} \\\\ 
\\midrule\n\\multirow{10}{*}{\\makecell[c]{CIFAR \\\\ 100}} & \\multirow{2}{*}{Supervised} & AT~\\cite{madry2018towards} & -&-&-&- & 49.49 & 23.00 & \\scriptsize{11} \\\\\n & & TRADES~\\cite{zhang2019theoretically} & -&-&-&- & 54.59 & 28.43 & \\scriptsize{12} \\\\ \\cmidrule(l){2-9} \n & \\multirow{8}{*}{\\begin{tabular}[c]{@{}c@{}}Self-\\\\ Supervised\\end{tabular}} & RoCL~\\cite{kim2020adversarial} & 55.71 & 18.49 & 49.30 & 25.84 & 51.19 & 26.69 & \\scriptsize{13} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 59.30 & 21.34 & 54.49 & \\textbf{30.33} & 52.39 & 27.84 & \\scriptsize{14} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 58.77 & 21.17 & 56.38 & 28.03 & \\textbf{55.85} & 29.57 & \\scriptsize{15} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{59.74} & \\textbf{22.54} & \\textbf{57.57} & 29.22 & 55.79 & \\textbf{29.92} & \\scriptsize{16} \\\\ \\cmidrule(l){3-9}\n & & AdvCL~\\cite{fan2021does} & 47.98 & 27.99 & \\textbf{47.45} & 28.29 & 57.87 & 29.48 & \\scriptsize{17} \\\\ \n & & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 49.48 & 28.84 & 45.39 & 28.40 & \\textbf{59.44} & 30.49 & \\scriptsize{18} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 49.44 & 29.01 & 47.32 & \\textbf{28.69} & 58.41 & 29.93 & \\scriptsize{19} \\\\\n & & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{50.59} & \\textbf{29.12} & 45.72 & 28.45 & 58.70 & \\textbf{30.66} & \\scriptsize{20} \\\\ \\bottomrule\n\\end{tabularx}}\n\\label{table:1}\n\\end{table}\n\n\\textbf{Baselines.} We compare with two baselines: RoCL~\\cite{kim2020adversarial}, the first method to combine CL and AL; and AdvCL~\\cite{fan2021does}, the current state-of-the-art framework. During experiments, we observe severe overfitting of AdvCL when training 1000 epochs (experiment setting in the original paper), with performance inferior to training for 400 epochs.\nThus, we pre-train 400 epochs on AdvCL at its best-performance setting. All other settings are the same as original papers except for some hyperparameter tuning. Our methods are also compatible with some recent work like SwARo~\\cite{wahed2022adversarial} and CLAF~\\cite{rahamim2022robustness}, by modeling the asymmetry between clean and adversarial views as aforementioned.\n\n\\textbf{Evaluation.}\nFollowing \\cite{jiang2020robust} and \\cite{fan2021does}, we adopt three finetuning strategies to evaluate the effectiveness of contrastive pre-training: $1)$ Linear Probing (LP): fix the encoder and train the linear classifier; $2)$ Adversarial Linear Finetuning (ALF): adversarially train the linear classifier; $3)$ Adversarial Full Finetuning (AFF): adversarially train the full model. We consider two evaluation metrics: $1)$ Standard Accuracy (SA): classification accuracy over clean images; $2)$ Robust Accuracy (RA): classification accuracy over adversaries via PGD-20 attacks~\\cite{madry2018towards}. \nRobustness evaluation under more diverse attacks is provided in the appendix.\n\n\n\\subsection{Main Results}\nIn Table \\ref{table:1}, we report standard accuracy and robust accuracy of each model, learned by different pre-training methods over CIFAR-10 and CIFAR-100. Following previous works~\\cite{kim2020adversarial,jiang2020robust,fan2021does} and common practice in contrastive learning~\\cite{pmlr-v119-chen20j,he2020momentum}, we first use unlabeled images in CIFAR-10\/-100 to pre-train, then introduce labels to finetune the model. 
As shown in Table \\ref{table:1}, our methods achieve noticeable performance improvement over baselines in almost all scenarios, when replacing the original loss with our proposed adversarial CL loss.\n\n\nIn comparison with RoCL, $\\mathcal{L}^{IP}$ brings significant performance boost on both standard and robust accuracy consistently across different training methods (row 4 vs. 3, row 14 vs. 13) (except for RA of AFF on CIFAR10).\nComparing to AdvCL, $\\mathcal{L}^{IP}$ also brings noticeable margin (row 8 vs. 7, row 18 vs. 17). This can be attributed to that $\\mathcal{L}^{IP}$ aims to lower the priority of adversaries and prevent clean samples moving towards other instances, which results in better instance discrimination and improves clean~\\cite{wu2018unsupervised} and robust accuracy.\n$\\mathcal{L}^{HN}$ also yields substantial boost on robust and standard accuracy (\\textit{e.g.}, row 15 vs. 13).\nWe hypothesize this is due to that $\\mathcal{L}^{HN}$ helps\nalert the model to adversarial samples by assigning higher weights for adversaries in negative contrast. \nWhen combined together, in most settings both standard and robust accuracy are further boosted, especially for Linear Probing. This is because directly mitigating the negative impact of \\textit{identity confusion} by $\\mathcal{L}^{IP}$ and helping adversarial get rid of false identities by $\\mathcal{L}^{HN}$ can complement each other, bringing further performance boost.\n\\subsection{Transferring Robust Features}\n\n\\setlength{\\tabcolsep}{0pt}\n\\setlength{\\arrayrulewidth}{0.2mm}\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{table}[t]\n\\fontsize{8.5}{10}\\selectfont \n\\centering\n\\caption{Transferring results from CIFAR-10\/100 to STL-10, compared with AdvCL~\\cite{fan2021does}, evaluated in Standard accuracy (SA) and Robust accuracy (RA) across different finetuning methods with ResNet-18\n}\n\\scalebox{0.9}{\n\\begin{tabularx}{1\\textwidth}\n{\n P{2cm}\n m{2.4cm}\n \n \n \n \n \n \n P{1.2cm}\n P{1.3cm}\n P{1.2cm}\n P{1.4cm}\n P{1.2cm}\n P{1.3cm}\n}\n\\toprule\n\\multirow{3}{*}{Dataset} & \\multirow{3}{*}{\\makecell[c]{Pre-training \\\\ Methods}} & \\multicolumn{6}{c}{Finetuning Strategies} \\\\ \\cline{3-8} \n & & \\multicolumn{2}{c}{Linear Probing} & \\multicolumn{2}{c}{\\makecell[c]{Adversarial Linear \\\\ Finetuning}} & \\multicolumn{2}{c}{\\makecell[c]{Adversarial Full \\\\ Finetuning}} \\\\ \\cline{3-8} \n & & SA & RA & SA & RA & SA & RA \\\\\n \\hline\\hline\n\\multirow{4}{*}{\\makecell[c]{CIFAR10\\\\$\\downarrow$\\\\STL10}} & AdvCL~\\cite{fan2021does} & 64.45 & 37.25 & 60.86 & 38.84 & 67.89 & 38.78 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 64.83 & 37.30 & 61.95 & 38.90 & \\textbf{68.25} & 39.03 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 65.24 & \\textbf{38.18} & \\textbf{62.83} & \\textbf{39.70} & 67.88 & \\textbf{39.75} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{67.19} & 37.00 & 61.34 & 39.35 & 67.95 & 39.12 \\\\ \\hline\n\\multirow{4}{*}{\\makecell[c]{CIFAR100\\\\$\\downarrow$\\\\STL10}} & AdvCL~\\cite{fan2021does} & 52.28 & 30.01 & 49.84 & 32.14 & 63.13 & 35.24 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP}$ & 52.65 & \\textbf{31.33} & 50.18 & 33.15 & 63.26 & \\textbf{35.34} \\\\\n & \\ \\ w\/ $\\mathcal{L}^{HN}$ & 51.88 & 31.29 & \\textbf{50.73} & \\textbf{33.62} & 62.91 & 34.88 \\\\\n & \\ \\ w\/ $\\mathcal{L}^{IP+HN}$ & \\textbf{53.41} & 31.30 & 51.10 & 33.23 & \\textbf{63.69} & 35.09 \\\\ \\bottomrule\n\\end{tabularx}}\n\\label{table:2}\n\\end{table}\n\nLearning robust features that are transferable 
is a main goal in self-supervised adversarial learning. It is of great value if models pre-trained on a large amount of unlabeled data possess good transferability after merely light-weight finetuning. For example, Linear Probing is often 10$\\times$ quicker than conventional adversarial training, with only a linear classifier trained.\n\nHere we evaluate the robust transferability of the proposed approach by transferring from CIFAR-10 and CIFAR-100 to STL-10, \\textit{i.e.}, we use unlabeled images in CIFAR-10\/-100 to pretrain, then use STL-10 to finetune and evaluate the learned models. As shown in Table~\\ref{table:2}, our methods yield both clean and robust accuracy gains in most settings, up to 1.48\\% (33.62\\% vs. 32.14\\%) in robust accuracy and 2.74\\% (67.19\\% vs. 64.45\\%) in clean accuracy.\n\n\\subsection{Ablation studies}\n\nWe design a basic adversarial contrastive model, named CoreACL, to study the effect of each component in our proposed methods.\nCoreACL only contains the contrastive component with three positive views: two clean augmented views and one adversarial view of the original image.\n\n\\subsubsection{Fixed $\\alpha$ for Asymmetric Similarity Function.}\nWe first use a fixed $\\alpha$ without adaptive annealing to explore the effectiveness of \\textit{inferior positives}. Figure~\\ref{fig:ablation1}\n\\setlength{\\intextsep}{12pt}\n\\begin{wrapfigure}[14]{r}{8.0cm}\n \\centering\n \\includegraphics[scale=0.27]{images\/ab2.png}\n \\caption{Deep probing for the asymmetric similarity function with different $\\alpha$.}\n \\label{fig:ablation1}\n\\end{wrapfigure}\npresents the results with different $\\alpha$ values when training models for 200 epochs.\nRecall that $\\alpha$ represents the tendency of the clean sample to move towards the adversarial sample. $\\alpha < 0.5$ means clean samples move less toward the adversaries\n(vice versa for $\\alpha > 0.5$), and $\\alpha = 0.5$ degenerates to the original symmetric similarity function. \n\n\nCompared with symmetric CoreACL ($\\alpha=0.5$), our approach achieves better robustness and accuracy when $\\alpha<0.5$ (adversarial examples are treated as \\textit{inferior positives}). Intriguingly, when $\\alpha=1.0$, the extreme case in which only clean samples are attracted by adversaries, we observe the presence of a trivial solution~\\cite{chen2021exploring}, that is, all images collapse into one point. This validates our observation that adversaries with false identities are indeed pulling their positives towards other instances in the positive contrasts, with the risk of drawing all samples together.\nIt is also worth noting that when $\\alpha < 0.2$, performance begins to drop, showing that a small but non-zero $\\alpha$ is the optimal setting empirically.\n\n\\subsubsection{Fixed $\\alpha$ vs. $\\alpha$-Annealing.}\nAs shown in Table 3, compared to CoreACL, fixed $\\alpha$ obtains higher clean accuracy (81.29\\% vs. 78.90\\%) but with no gain on robust accuracy. Adaptive annealing of $\\alpha$ achieves both higher robust accuracy (51.37\\% vs. 50.27\\%) and better clean accuracy (79.46\\% vs. 78.90\\%).\n\n\\setlength{\\intextsep}{3pt} \n\\begin{wraptable}[12]{r}{5.5cm}\n\t\\centering \n\t\\fontsize{8.5}{9}\\selectfont \n\t\\begin{threeparttable}\n\t\t\\caption{Ablation studies, evaluated in SA, RA and time cost.
Trained for 400 epochs on 2 Tesla V100 GPUs.}\n\t\t\\label{tab:headings} \n\t\t\\begin{tabular}\n {\n m{1.95cm}\n P{1cm}\n P{1cm}\n P{1.5cm}\n }\n\t\t\t\\toprule \n\t\t\t\\makecell[c]{Methods} & SA & RA & Time Cost (s\/epoch)\\cr \n\t\t\t\\midrule \n\t\t\t\\noalign{\\smallskip}\n\t\t\tCoreACL & 78.90 & 50.27 & 96 \\\\\n \\ w\/fixed $\\alpha$ & 81.29 & 50.24 & 96\\\\\n \\ w\/annealing $\\alpha$ & 79.46 & 51.37 & 101 \\\\ \\hline\n \\ w\/$\\mathcal{L}^{IP+HN}$ & 81.19 & 51.31 & 101 \\\\\n AdvCL & 81.35 & 51.00 & 182 \\\\\n\t\t\t\\bottomrule \n\t\t\\end{tabular} \n\t\\end{threeparttable} \n\t\\label{table:3}\n\\end{wraptable}\n\\subsubsection{Comparison with AdvCL.}\nTable 3 reports the performance and computation cost comparisons with AdvCL.\nCoreACL with $\\mathcal{L}^{IP+HN}$ achieves similar performance to AdvCL, which is equivalent to integrate additional components (high frequency view and pseudo-supervision) into CoreACL. The computation time of AdvCL is almost twice than that of $\\rm w\/\\mathcal{L}^{IP+HN}$, which could due to extra computation on contrasting high frequency views and the pseudo-labeled adversarial training.\nOur methods only need to compute pair-wise Euclidean distance for $\\alpha$-annealing in $\\mathcal{L}^{IP}$, and no extra cost introduced in $\\mathcal{L}^{HN}$. \n\n\\subsubsection{Effect of Hard Negatives.}\nTo investigate the effect of hard negatives, we\n\\setlength{\\tabcolsep}{4pt}\n\\setlength{\\intextsep}{6pt} \n\\begin{wraptable}[12]{r}{8cm}\n\\renewcommand\\arraystretch{1.5}\n\t\\centering \n\t\\fontsize{9}{9}\\selectfont \n\t\\scalebox{0.9}{\n\t\\begin{threeparttable}\n\t\t\\caption{Ablation studies for AdvCL with hard negatives (AdvCL-HN), evaluated under Linear Probing (LP), Adversarial Linear Finetuning (ALF) and Adversarial Full Finetuning (AFF)} \n\t\t\\label{tab:performance_comparison} \n\t\t\\begin{tabular}{ccccccc} \n\t\t\t\\toprule \n\t\t\t\\multirow{2}{*}{Methods}& \n\t\t\t\\multicolumn{2}{c}{LP}&\\multicolumn{2}{c}{ALF}&\\multicolumn{2}{c}{AFF}\\cr \n\t\t\t\\cmidrule(lr){2-3} \\cmidrule(lr){4-5} \\cmidrule(lr){6-7} \n\t\t\t&SA&RA&SA&RA&SA&RA\\cr \n\t\t\t\\midrule \n \n\t\t\t{AdvCL-HN}&{81.34}&{\\bf 52.96}&{78.69}&{\\bf 53.20}&83.44&{\\bf 54.07}\\cr \n\t\t\t{w\/o debias}&{\\bf 81.52}&51.61&{\\bf 78.89}&52.34&{\\bf 83.73}&{54.01}\\cr \n\t\t{w\/o reweight}&76.93&50.01&73.49&49.86&81.74&52.60\\cr \n\t\t\t\\bottomrule \n\t\t\\end{tabular} \n\t\\end{threeparttable}} \n\\end{wraptable}\nevaluate each component (negatives debiasing~\\cite{chuang2020debiased}, reweighting~\\cite{robinson2020contrastive}) as shown in Table 4. With negatives-debiasing removed, we observe decrease in robust accuracy, with slightly increased standard accuracy. We hypothesize that without debiasing, semantically similar adversarial representations that should be mapped closely are pushed away instead. \nIn addition, the removal of negatives reweighting results in a sharp performance drop, showing that viewing adversarial views as \\textit{hard negatives} with higher weights plays a key role in discriminating adversarial samples.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.42]{images\/hist-2.png}\n \\caption{Histograms of Euclidean distance (normalized) distribution of all negative pairs learned by different objectives in (a) CIFAR10 (first row) and (b) CIFAR100 (second row). Baseline is AdvCL~\\cite{fan2021does}; IP: baseline with Inferior Positives; HN: baseline with Hard Negatives. 
On each dataset, our methods are better at differentiating different instances (with larger distance between negative pairs)\n }\n \\label{fig:ablation2}\n\\end{figure}\n\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[scale=0.39]{images\/rocl-tsne3.png}\n \\caption{t-SNE visualizations in a global view on CIFAR-10 validation set. The embeddings are learned by different self-supervised pre-training methods (SimCLR(a), RoCL(b) and RoCL-IP(c)) (\\textit{colored figure})\n }\n \\label{fig:global tsne}\n\\end{figure}\n\n\\subsection{Qualitative Analysis}\n\nFigure \\ref{fig:ablation2} shows the distribution of normalized Euclidean distance over all negative pairs. We take AdvCL~\\cite{fan2021does} as the baseline and compare it with its enhanced versions with our methods.\nGenerally, our methods can shift the original distribution curve right (larger distance), meaning that treating adversaries as inferior positives or hard negatives encourages the model to separate negative pairs further apart and induce better instance discrimination. This suggests that our proposed methods effectively mitigate the negative impacts of \\textit{identity confusion}.\n\nFigure \\ref{fig:global tsne} provides 2-D visualization (t-SNE~\\cite{van2008visualizing} on CIFAR-10) for the embeddings learnt by SimCLR~\\cite{pmlr-v119-chen20j}, RoCL~\\cite{kim2020adversarial} and RoCL enhanced by $\\mathcal{L}^{IP}$ (RoCL-IP). Each class is represented in one color.\nCompared to SimCLR, RoCL representations are corrupted by adversaries and exhibit poor class discrimination. RoCL-IP yields better class separation compared with RoCL.\nThis shows that asymmetric similarity consideration eases instance-level identity confusion.\n\n\n\\section{Introduction}\n\nWell-performed models trained on clean data can suffer miserably when exposed to simply-crafted adversarial samples~\\cite{szegedy2014intriguing,goodfellow2014explaining,carlini2017towards,dong2018boosting}.\nThere has been many adversarial defense mechanisms designed to boost model robustness using labeled data~\\cite{kannan2018adversarial,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb,athalye2018obfuscated}. In practice, however, obtaining large-scale annotated data can be far more difficult and costly than acquiring unlabeled data. Leveraging easily-acquired unlabeled data for adversarial learning, thus becomes particularly attractive.\n\nContrastive Learning (CL)~\\cite{hadsell2006dimensionality}, which performs instance discrimination~\\cite{wu2018unsupervised} (Figure 1 (a)) by maximizing agreement between augmentations of the same instance in the learned latent features while minimizing the agreement between different instances,\nhas made encouraging progress in self-supervised learning~\\cite{pmlr-v119-chen20j,he2020momentum,chen2020improved,grill2020bootstrap}. 
Due to its effectiveness in learning rich representations and competitive performance over fully-supervised methods, CL has seen a surge of research in recent years, such as\npositive sampling~\\cite{pmlr-v119-chen20j,tian2020contrastive,bachman2019learning,tian2020makes}, negative sampling~\\cite{he2020momentum,kalantidis2020hard,chuang2020debiased,wu2018unsupervised}, \npair reweighting~\\cite{chuang2020debiased,robinson2020contrastive}, and different contrast methods~\\cite{grill2020bootstrap,caron2020unsupervised,li2020prototypical}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.43]{images\/intro-18.png}\n\\caption{\nIllustrations of ($a$) Contrastive Learning; ($b$) Adversarial Contrastive Learning; and our proposed methods for viewing adversarial samples asymmetrically as: ($c$) Inferior Positives (asymmetric contrast), and ($d$) Hard Negatives. In each circle, data points are augmentations of the same instance, sharing the same \\textit{Identity}. In ($b$), the Adversarial sample (\\textit{A}) shares the same Identity (\\textit{ID:2}) as the current Instance (\\textit{I}), but resides close to a different Identity (\\textit{ID:1}), thus \\textit{Identity Confusion} problem occurs. Specifically, the Adversarial sample (\\textit{A}) of Instance \\textit{(I)} exhibits similar representations to the Negative sample (\\textit{N}) of (\\textit{I}), which makes the positive contrast (\\textit{A}$\\leftrightarrow$\\textit{I}) and negative contrast (\\textit{N}$\\leftrightarrow$\\textit{I}) undermine each other in the training process (\\textit{colored figure}). \n}\n\\label{fig:intro}\n\\end{figure}\n\nRecently, contrastive learning has been extended to adversarial learning tasks in a self-supervised manner, leading to a new area of \\textit{adversarial contrastive learning} (Adversarial CL) ~\\cite{kim2020adversarial,fan2021does,jiang2020robust,gowal2020self}. \nThe main idea is to generate adversarial samples as additional positives of the same instance~\\cite{kim2020adversarial,fan2021does,jiang2020robust} for instance-wise attack, and maximize the similarity between clean views of the instance and their adversarial counterparts as in CL, while also solving the min-max optimization problem following canonical adversarial learning objective~\\cite{madry2018towards,shafahi2019adversarial,zhang2019you,wong2020fast,zhang2019theoretically,zhu2019freelb}.\nFor example, RoCL\\cite{kim2020adversarial} first proposed an attack mechanism against contrastive loss to confuse the model on instance-level identity, in a self-supervised adversarial training framework. AdvCL\\cite{fan2021does} proposed to minimize the gap between unlabeled contrast and labeled finetuning by introducing pseudo-supervision in the pre-training stage.\n\n\nAlthough these Adversarial CL methods showed improvement on model robustness, we observe that a direct\nextension from CL to adversarial learning (AL) can introduce ineffective CL updates during training.\nThe core problem lies in that they add worst-case perturbations $\\delta$ that no longer guarantee the preservation of instance-level identity~\\cite{kim2020adversarial} (\\textit{i.e.}, different from other data augmentation methods, adversarial samples can reside faraway from the current instance in the feature space after several attack iterations, because the attack objective is to make adversaries away from the current instance while approximating other instances, against the CL objective). 
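To make the notion of an instance-wise adversarial view concrete, the following is an illustrative sketch of the common recipe (projected gradient ascent on an InfoNCE-style loss with in-batch negatives); the exact attack and loss used by each method may differ, and the function names and hyperparameter values here are our own assumptions.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_adv_view(encoder, x, x_pos, eps=8/255, alpha=2/255, steps=5, t=0.5):
    """Generate adversarial views by maximising a simple InfoNCE-style loss.

    encoder maps images (assumed to lie in [0, 1]) to embeddings; x_pos is a
    clean augmented view serving as the positive, and the other samples in
    the batch act as negatives. Schematic only.
    """
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        z = F.normalize(encoder(x_adv), dim=1)
        z_pos = F.normalize(encoder(x_pos), dim=1)
        logits = z @ z_pos.t() / t                    # (B, B) similarity matrix
        labels = torch.arange(z.size(0), device=z.device)
        loss = F.cross_entropy(logits, labels)        # InfoNCE with in-batch negatives
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend to *increase* the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
\\end{verbatim}
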
As illustrated in Figure~\\ref{fig:intro}(b), when the adversarial sample (\\textit{A}) of the current instance (\\textit{I}) is in close proximity to negative samples (\\textit{N}), the CL objective minimizes the agreement between negative samples and the current instance (\\textit{I} and \\textit{N} are pushed away from each other), while the AL objective maximizes the agreement between adversarial samples and the current instance (\\textit{A} and \\textit{I} are pulled together, as \\textit{A} is considered an augmented view of \\textit{I}). Meanwhile, \\textit{A} and \\textit{N} share similar representations, which renders the two objectives contradictory to each other. We term this conflict ``\\textit{identity confusion}'': $A$ attracts and `confuses' $I$ with a false identity induced by $N$, which impedes both CL and AL from achieving their respective best performance.\n\nTo address this issue of \\textit{identity confusion}, we propose to treat adversarial samples unequally and discriminatively, and design a generic asymmetric InfoNCE objective (\\textit{A-InfoNCE}) to model the asymmetric contrast strengths between positive\/negative samples.\nFirstly, to mitigate the direct pull between the adversarial sample (\\textit{A}) and the current instance (\\textit{I}) (Figure~\\ref{fig:intro} (c)) that might dampen the effectiveness of CL, we propose to treat adversarial samples as \\textit{inferior positives} that induce weaker learning signals and attract their counterparts to a lesser degree when performing positive contrasts.\nThis asymmetric treatment in AL provides a trade-off and reduces the conflicting impact on the CL loss.\n\nSecondly, to encourage adversarial samples (\\textit{A}) to escape from false identities induced by negative samples (\\textit{N}) that share similar representations to \\textit{A} (pushing \\textit{A} away from \\textit{N}) (Figure~\\ref{fig:intro}(d)), we consider adversarial samples (\\textit{A}) as \\textit{hard negatives}~\\cite{robinson2020contrastive} of other negative samples (\\textit{N}), by strengthening the negative contrast between \\textit{A} and \\textit{N} in the CL computation.\nTo effectively sample true adversarial negatives and re-weight each sample, we follow positive-unlabeled learning~\\cite{du2014analysis,elkan2008learning} and contrastive negative reweighting~\\cite{robinson2020contrastive,chuang2020debiased} practices. \n\nOur contributions are summarized as follows: \n$1)$\n We propose a generic asymmetric InfoNCE loss, \\textit{A-InfoNCE}, to address the \\textit{identity confusion} problem in Adversarial CL, by viewing adversarial samples \n as \\textit{inferior positives} or \\textit{hard negatives}.\n 2) Our approach is compatible with existing Adversarial CL methods, by simply replacing the standard CL loss with \\textit{A-InfoNCE}.
\n 3) Experiments on CIFAR-10, CIFAR-100 and STL-10 show that our approach consistently outperforms existing Adversarial CL methods.\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{\\large\\textsf{\\refname}}%\n \\@mkboth{\\MakeUppercase\\refname}{\\MakeUppercase\\refname}%\n \\list{\\@biblabel{\\@arabic\\c@enumiv}}%\n {\\settowidth\\labelwidth{\\@biblabel{#1}}%\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\n \\@openbib@code\n \\usecounter{enumiv}%\n \\let\\p@enumiv\\@empty\n \\renewcommand\\theenumiv{\\@arabic\\c@enumiv}}%\n \\sloppy\n \\clubpenalty4000\n \\@clubpenalty \\clubpenalty\n \\widowpenalty4000%\n \\sfcode`\\.\\@m}\n {\\def\\@noitemerr\n {\\@latex@warning{Empty `thebibliography' environment}}%\n \\endlist}\n\\makeatother\n\n\\topmargin -2.0cm\n\\oddsidemargin -1.0cm\n\\textwidth 18.5cm\n\\textheight 24cm\n\\footskip 1.0cm\n\n\n\n\\newenvironment{sciabstract}{%\n\\begin{quote} \\bf}\n{\\end{quote}}\n\n\\title{\\textsf{\\textbf{A quantum Fredkin gate}}}\n\n\n\\author\n{Raj B. Patel,$^{1,\\ast}$ Joseph Ho,$^{1}$ Franck Ferreyrol,$^{1,2}$ Timothy C. Ralph,$^{3}$ \\\\\n\\& Geoff J. Pryde,$^{1,\\ast}$\\\\\n\\\\\n\\normalsize{$^{1}$CQC2T and Centre for Quantum Dynamics, Griffith University,}\\\\\n\\normalsize{Brisbane 4111, Australia}\\\\\n\\normalsize{$^{2}$ Laboratoire Photonique, Numerique et Nanosciences, Institut d'Optique,}\\\\\n\\normalsize{CNRS and Universit\\'{e} de Bordeaux, Talence, France}\\\\\n\\normalsize{$^{3}$ CQC2T and School of Mathematics and Physics, University of Queensland, }\\\\\n\\normalsize{Brisbane 4072, Australia}\\\\\n\\normalsize{$^\\ast$To whom correspondence should be addressed; E-mail: r.patel@griffith.edu.au}\\\\\n\\normalsize{or g.pryde@griffith.edu.au}\n}\n\n\n\\date{}\n\n\n\n\n\n\\begin{document}\n\n\n\\baselineskip12pt\n\n\\twocolumn[\n \\begin{@twocolumnfalse}\n\\maketitle\n\n\\begin{sciabstract}\nKey to realising quantum computers is minimising the resources required to build logic gates into useful processing circuits. While the salient features of a quantum computer have been shown in proof-of-principle experiments, difficulties in scaling quantum systems have made more complex operations intractable. This is exemplified in the classical Fredkin (controlled-SWAP) gate for which, despite theoretical proposals, no quantum analogue has been realised. By adding control to the SWAP unitary, we use photonic qubit logic to demonstrate the first quantum Fredkin gate, which promises many applications in quantum information and measurement. We implement example algorithms and generate the highest-fidelity three-photon GHZ states to-date. The technique we use allows one to add a control operation to a black-box unitary, something impossible in the standard circuit model. Our experiment represents the first use of this technique to control a two-qubit operation and paves the way for larger controlled circuits to be realised efficiently.\n\\end{sciabstract}\n\\end{@twocolumnfalse}\n]\n\n\\section*{\\large\\textsf{Introduction}}\nOne of the greatest challenges in modern science is the realisation of quantum computers\\cite{Kok2007,OBrien2009,Ladd2010} which, as they increase in scale, will allow enhanced performance of tasks in secure networking, simulations, distributed computing and other key tasks where exponential speedups are available. 
Processing circuits to realise these applications are built up from logic gates that harness quantum effects such as superposition and entanglement. At present, even small-scale and medium-scale quantum computer circuits are hard to realise because of the requirement to control enough quantum systems sufficiently well in order to chain together many gates into circuits. One example of this is the quantum Fredkin gate, which requires at least five two-qubit gates\\cite{Smolin1996} to be implemented in the standard circuit model. Thus, despite featuring prominently in schemes for quantum computation\\cite{Vandersypen2001,Lopez2012,Lanyon2007}, error-correction\\cite{Chuang1996,Barenco1997}, cryptography\\cite{Buhrman2001,Horn2005,Gottesman2001}, and measurement\\cite{Ekert2002,FuiasekFilip2002}, no such gate has been realised to date.\n\nThe quantum Fredkin gate, shown in Fig. 1A, is a three-qubit gate whereby, conditioned on the state of the control qubit, the quantum states of the two target qubits are swapped. The original, classical version of the gate first proposed by Fredkin \\cite{Fredkin1982} also serves as one of the first examples of a reversible logic operation where the number of bits are conserved and no energy is dissipated as a result of erasure. In the framework of universal quantum computation, gates are also reversible, so it may seem natural to ask whether it is possible to construct a quantum version of the Fredkin gate. The first design of the quantum Fredkin gate was proposed by Milburn \\cite{Milburn1989} and was to use single photons as qubits and cross-Kerr nonlinearities to produce the necessary coherent interactions. Further schemes utilising linear optics developed these ideas further \\cite{Chau1995,Smolin1996,Fiurasek2006,Fiurasek2008,Gong2008} by using ancilla photons, interference, and multiple two-qubit\\cite{OBrien2003,Pooley2012} and single-qubit gates. However concatenating multiple probabilistic gates in this fashion typically leads to a multiplicative reduction in the overall probability of success of $<1\/100$. Hence it would be desirable to be able to construct a quantum Fredkin gate directly without decomposition and avoid the associated resource overhead.\n\nWe begin by describing the concept of our experiment. We perform the controlled-SWAP operation by adding control to the SWAP unitary $U_{SWAP}$ applying the technique in Zhou \\textit{et al.} \\cite{Zhou2011}, to greatly reduce the complexity of quantum circuits. The notion of adding control to a black-box unitary is forbidden or difficult in many architectures\\cite{Araujo2014,Thompson2013} -- optics lends itself well to this approach because the optical implementation of the unitary leaves the vacuum state unchanged. Here we utilise this method to simplify a controlled multi-qubit operation.\n \\begin{figure*}\n \\begin{center}\n \\includegraphics[width=\\textwidth]{Fig1_S.pdf}\n \\end{center}\n\\noindent\\small{{\\bf Fig. 1.} Experimental arrangement and truth table measurements. \\textbf{A}, The quantum Fredkin gate circuit. The states of the target qubits are either swapped or not swapped depending on the state of the control qubit. \\textbf{B}, Concept of our experiment. Two SPDC photon sources allow production of path entanglement such that modes $R$ and $Y$ are entangled with modes $B$ and $G$. 
The SWAP operation is carried out on the path modes, depending on the control photon's state, such that arrival of the control photon indicates a system state of $\\alpha|H\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2} + \\beta|V\\rangle^{C}|\\varphi\\rangle^{T1}|\\psi\\rangle^{T2}$. \\textbf{C}, The experimental arrangement. Entangled photons are produced via SPDC (see Materials and Methods). Entering the gate via single-mode fiber, the two target photons are sent through a PBS. The path-entangled state in Eq. \\ref{4GHZ} is produced after each target photon enters a displaced Sagnac interferometer and the which-path information is erased on an NPBS. QWPs and HWPs encode the polarisation state in Eq. \\ref{4GHZU}. The control consists of a polarisation beam displacer interferometer. The desired control state is encoded onto modes $1R$ and $1B$ and coherently recombined. A tilted HWP is used to set the phase of the output state. Successful operation is heralded by four-fold coincidence events between the control, target, and trigger detectors. \\textbf{D}, Ideal (transparent bars) and measured (solid bars) truth table data for our gate. A total of 620 four-fold events were measured for each of the eight measurements, giving $\\left\\langle\\mathcal{O}\\right\\rangle = 96\\pm4\\%$.}\n \\end{figure*}\nA key idea in our demonstration is to use entanglement in a non-qubit degree of freedom (we use the photon's path mode) to drive the operation of the gate. This path entanglement can be produced in different ways. In our demonstration (Fig. 1B), it is generated from spontaneous parametric down-conversion (SPDC). Given the physical arrangement of the circuit and that we only accept detection events where a single photon is counted at each of the four outputs simultaneously, the optical quantum state produced by SPDC is converted to the required four-photon path-mode entangled state (see Materials and Methods) and has the form\n\\begin{equation}\\label{4GHZ}\n \\left(|11\\rangle_{B}|11\\rangle_{G}|00\\rangle_{R}|00\\rangle_{Y} + |00\\rangle_{B}|00\\rangle_{G}|11\\rangle_{R}|11\\rangle_{Y}\\right)\/\\sqrt{2}\n\\end{equation}\n where $B$, $R$, $Y$, and $G$ refer to path-modes and, for example, $|11\\rangle_{B}$ indicates a photon occupying mode $1B$ and another occupying $2B$. The path-modes are distributed throughout the circuit such that $U_{SWAP}$ is applied only to the $B$ and $G$ modes. The qubit state is encoded on the polarisation of the photon. Because the photons are in a spatial superposition, polarisation preparation optics must be applied to both path-modes of each photon. Hence, an arbitrary, separable, three-qubit state $|\\xi\\rangle|\\psi\\rangle|\\varphi\\rangle$ can be prepared as an input to the gate. In particular, the control qubit is encoded on modes $1R$ and $1B$, target 1 is encoded on modes $2R$ and $2B$, and target 2 is encoded on modes $1G$ and $1Y$, yielding\n\\begin{equation}\\label{4GHZU}\n \\left(|\\xi\\rangle^{C}_{1B}|\\psi\\rangle^{T1}_{2B}|\\varphi\\rangle^{T2}_{1G}|H\\rangle^{Tr}_{2G} + |\\xi\\rangle^{C}_{1R}|\\psi\\rangle^{T1}_{2R}|\\varphi\\rangle^{T2}_{1Y}|V\\rangle^{Tr}_{2Y}\\right)\/\\sqrt{2}\n\\end{equation}\nThe two control modes $1R$ and $1B$ are mixed on a polarising beam splitter (PBS), wheras a 50:50 non-polarising beam splitter (NPBS) is used to erase the path information in the target and trigger arms. The SWAP is implemented via rearrangement of the path-modes such that the target modes $2B$ and $1G$ are swapped wheras $2R$ and $1Y$ are not. 
Successful operation of the gate occurs when photons are detected at the control, target 1, and target 2 detectors (simultaneously with a photon detection at either trigger detector). The polarisation state of the three-qubit system, given the required modes are occupied, is $\\alpha|H\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2} + \\beta|V\\rangle^{C}|\\varphi\\rangle^{T1}|\\psi\\rangle^{T2}$ as expected from application of the Fredkin gate on the state $|\\xi\\rangle^{C}|\\psi\\rangle^{T1}|\\varphi\\rangle^{T2}$ where $|\\xi\\rangle = \\alpha|H\\rangle + \\beta|V\\rangle$. Taking into consideration the probability of recording a four-fold coincidence, successful execution of the gate occurs one-sixteenth of\n the time, on average. This can be increased to one-fourth of the time by collecting the target photons from both NPBS outputs.\n\n \\section*{\\large\\textsf{Results}}\nThe experimental arrangement of the quantum Fredkin gate is shown in Fig. 1C and consists of three interferometers designed to be inherently phase-stable. Pairs of polarisation entangled photons, produced by two SPDC crystals (see Materials and Methods), impinge on a PBS. Two orthogonally polarised photons, one from each source, are sent to separate displaced Sagnac interferometers. Initially, they are incident on a beam splitter where one half of the interface acts as a PBS and the other half acts as an NPBS. Entering at the PBS side, photons may travel along counterpropagating path modes where the polarisation state $|\\psi\\rangle$ is encoded onto one mode and the state $|\\varphi\\rangle$ is encoded on the other. The two paths are then recombined on the NPBS side of the beam splitter where the path information is erased (see Methods and Materials), giving the path-mode entangled state in equation \\ref{4GHZ} whilst the polarisation encoding procedure leads to the state in Eq. \\ref{4GHZU}. The control of the gate is realised in a polarisation interferometer consisting of two calcite beam displacers. The desired polarisation state of the control is encoded onto modes $1R$ and $1B$, which are coherently recombined in the second beam displacer. Given successful operation (arrival of a photon at the control detector), the preparation of the control photon in $|H\\rangle = |1\\rangle$ projects the target photons onto path modes $1G$ and $2B$ which undergo SWAP; conversely, preparing $|V\\rangle = |0\\rangle$ projects the target photons onto path modes $2R$ and $1Y$, which undergo the identity operation. In practice, the trigger arm consists of a half-wave plate (HWP) whose optic axis (OA) is set to $22.5^{\\circ}$, producing diagonal $|D\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle + |V\\rangle)$ or anti-diagonal $|A\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle - |V\\rangle)$ polarised photons, and a PBS. Successful operation is heralded by measuring four-fold coincidences across the trigger, control and two target detectors.\n\nThe logical operation of the gate was measured by performing eight measurements, one for each of the possible logical inputs. For each input we measure a total of 620 four-fold events distributed across the eight possible output states. Under ideal operation, for a given input, there is a single output. The solid bars in Fig. 1D depict the experimentally measured truth table data, $M_{exp}$ , whereas the transparent bars represent the ideal truth table $M_{ideal}$. 
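For reference, the ideal truth table $M_{ideal}$ is simply the permutation matrix of the controlled-SWAP operation on three bits; the short script below (our own illustrative code, with basis ordering $|c\, t_1 t_2\rangle$ and the convention that the control value 1 triggers the swap) constructs it.
\\begin{verbatim}
import numpy as np

def fredkin_truth_table():
    """8x8 ideal truth table of the controlled-SWAP (Fredkin) gate.

    Rows are logical inputs |c t1 t2>, columns are outputs,
    entries are the ideal detection probabilities (0 or 1).
    """
    M = np.zeros((8, 8))
    for c in (0, 1):
        for t1 in (0, 1):
            for t2 in (0, 1):
                o1, o2 = (t2, t1) if c == 1 else (t1, t2)   # swap iff control is 1
                M[c * 4 + t1 * 2 + t2, c * 4 + o1 * 2 + o2] = 1.0
    return M

print(fredkin_truth_table())
\\end{verbatim}
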
To quantify the mean overlap between $M_{exp}$ and $M_{ideal}$, we calculate $\\left\\langle\\mathcal{O}\\right\\rangle = Tr \\left(M_{exp}M_{ideal}^{T}\/M_{ideal}M_{ideal}^{T}\\right)= 96\\pm4\\%$ which confirms excellent performance in the logical basis. The slight reduction in fidelity is most likely due to the imperfect extinction of our polarisation optics.\n \\begin{figure*}\n \\begin{center}\n \\includegraphics{Fig2_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 2.} Real (left) and imaginary (right) parts of the reconstructed density matrices for our four GHZ states. Fidelity and purity were calculated for each state. \\textbf{A}, $|GHZ_{1}^{+}\\rangle$: $F = 0.88.\\pm 0.01$ and $P = 0.79 \\pm 0.02$. \\textbf{B}, $|GHZ_{1}^{-}\\rangle$: $F = 0.90 \\pm 0.01$ and $P = 0.83 \\pm 0.02$. \\textbf{C}, $|GHZ_{2}^{+}\\rangle$: $F = 0.93 \\pm 0.01$ and $P = 0.87 \\pm 0.02$. \\textbf{D}, $|GHZ_{2}^{-}\\rangle$: $F = 0.92 \\pm 0.01$ and $P = 0.85 \\pm 0.02$.}\n \\end{figure*}\n\nWe demonstrate the full quantum nature of our gate by preparing the control in a superposition $|\\xi\\rangle = \\frac{1}{\\sqrt{2}}(|0\\rangle \\pm |1\\rangle)$ which places the gate in a superposition of the SWAP and identity operations. Using our gate, we produce four of the eight maximally entangled three-photon Greenberger-Horne-Zeilinger (GHZ) states, namely\n\\begin{align}\\label{3GHZa}\n \\frac{1}{\\sqrt{2}}(|0\\rangle &\\pm |1\\rangle)^{C}|1\\rangle^{T1}|0\\rangle^{T2}\\rightarrow |GHZ_1^{\\pm}\\rangle \\nonumber\\\\\n &= \\frac{1}{\\sqrt{2}}\\left(|0\\rangle^{C}|1\\rangle^{T1}|0\\rangle^{T2} \\pm e^{i(\\phi + \\theta(\\vartheta))}|1\\rangle^{C}|0\\rangle^{T1}|1\\rangle^{T2}\\right)\n\\end{align}\nand,\n \\begin{align}\\label{3GHZb}\n \\frac{1}{\\sqrt{2}}(|0\\rangle &\\pm |1\\rangle)^{C}|0\\rangle^{T1}|1\\rangle^{T2}\\rightarrow |GHZ_2^{\\pm}\\rangle \\nonumber\\\\\n &= \\frac{1}{\\sqrt{2}}\\left(|0\\rangle^{C}|0\\rangle^{T1}|1\\rangle^{T2} \\pm e^{i(\\phi + \\theta(\\vartheta))}|1\\rangle^{C}|1\\rangle^{T1}|0\\rangle^{T2}\\right)\n\\end{align}\nHere $\\phi$ is a phase shift intrinsic to the gate, and $ \\theta(\\vartheta)$ is a corrective phase shift that can be applied by tilting a HWP at OA by an angle $\\vartheta$, such that $\\phi + \\theta(\\vartheta) = 2n\\pi$ (see Materials and Methods). In doing so, we are able to test the coherent interaction of all three qubits in the gate, which is a key requirement for constructing universal quantum computers. For each of the four states in Eqs. \\ref{3GHZa} and \\ref{3GHZb}, we perform three-qubit quantum state tomography (QST) to fully characterise the state. The control and target qubits are measured independently in the $D\/A$ basis, which we denote as $\\sigma_x$; in the $R\/L$ basis $(\\sigma_y)$, where $|R\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle + i|V\\rangle)$ and $|L\\rangle=\\frac{1}{\\sqrt{2}}(|H\\rangle - i|V\\rangle)$; and in the $H\/V$ basis $(\\sigma_z)$. Therefore full state reconstruction can be carried out by a set of 27 measurements settings $(\\sigma_x\\sigma_x\\sigma_x, \\sigma_x\\sigma_x\\sigma_y...)$ effectively resulting in an over-complete set of 216 projective measurements as each measurement setting has eight possible outcomes. Figure 2 shows the real (left) and imaginary (right) parts of the reconstructed density matrices of the four GHZ states, each of which was calculated from $\\sim5000$ four-fold events using a maximum-likelihood algorithm. 
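For completeness, a minimal sketch of how the fidelity and purity quoted below can be evaluated from a reconstructed density matrix is given here (illustrative code only; the maximum-likelihood reconstruction itself is not shown). For a pure target state $|\psi\rangle$ the fidelity is $F=\langle\psi|\rho|\psi\rangle$ and the purity is $P={\rm Tr}[\rho^2]$.
\\begin{verbatim}
import numpy as np

def fidelity_and_purity(rho, target):
    """Fidelity F = <target|rho|target> and purity P = Tr[rho^2]."""
    F = np.real(np.conj(target) @ rho @ target)
    P = np.real(np.trace(rho @ rho))
    return F, P

# target |GHZ_2^+> = (|001> + |110>)/sqrt(2), basis ordering |c t1 t2>
ghz2 = np.zeros(8, dtype=complex)
ghz2[0b001] = ghz2[0b110] = 1 / np.sqrt(2)

# toy check with the ideal (pure) GHZ density matrix: F = P = 1
rho_ideal = np.outer(ghz2, ghz2.conj())
print(fidelity_and_purity(rho_ideal, ghz2))
\\end{verbatim}
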
We measure fidelities and purities of $F = 0.88 \\pm 0.01$ and $P = 0.79 \\pm 0.02$ for $|GHZ_{1}^{+}\\rangle$, $F = 0.90 \\pm 0.01$ and $P = 0.83 \\pm 0.02$ for $|GHZ_{1}^{-}\\rangle$, $F = 0.93 \\pm 0.01$, and $P = 0.87 \\pm 0.02$ for $|GHZ_{2}^{+}\\rangle$, and $F = 0.92\\pm 0.01$ and $P = 0.85 \\pm 0.02$ for $|GHZ_{2}^{-}\\rangle$. The errors were calculated from 500 samples of a Monte-Carlo simulation. These values are most likely limited by imperfect mode overlap at the NPBS in each displaced Sagnac interferometer. Nevertheless, to the best of our knowledge, these values are the highest reported for photonic GHZ states surpassing the previous values reported in Hamel \\textit{et al.} \\cite{Hamel2014}.\n \\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=\\linewidth]{Fig3_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 3.} Measured correlations for violations of Mermin's and Svetlichny's inequalities. \\textbf{A}, Mermin's inequality resulting in $S_M = 3.58 \\pm 0.06$, a violation by 24 standard deviations. \\textbf{B}, Svetlichny's inequality with $S_{Sv} = 4.88 \\pm 0.13$, a violation by 7 standard deviations. Error bars were calculated from Poissonian counting statistics.}\n \\end{figure}\n\nWe perform further measurements to characterise the quality of the $|GHZ_{2}^{+}\\rangle$ state. GHZ states can show a strong contradiction between local hidden-variable theories and quantum mechanics \\cite{Pan2000}. Mermin \\cite{Mermin1990} derived a Bell-like inequality by imposing locality and realism for three particles, which holds for any local hidden-variable theory\n\\begin{align}\\label{Mermin}\nS_M &= |E(a',b,c') + E(a,b',c') + E(a,b,c) - E(a',b',c)|\\nonumber\\\\\n &\\leq 2\n\\end{align}\n\\begin{figure}[H]\n \\begin{center}\n \\includegraphics[width=\\linewidth]{Fig4_S.pdf}\n \\end{center}\n \\noindent\\small{{\\bf Fig. 4.} Estimations of nonlinear functionals of a single-qubit state with the quantum Fredkin gate. \\textbf{A}, Circuit diagram of the network. \\textbf{B}, Measurements of the overlap of two single qubit states, $|\\langle{T1}|{T2}\\rangle|^2$. The fringe visibility or overlap was measured for states $|0\\rangle^{T1}|0\\rangle^{T2}$ (black), $\\frac{1}{\\sqrt{2}}\\left(|0\\rangle \\pm 1\\rangle\\right)^{T1}|0\\rangle^{T2}$ (red), and $|0\\rangle^{T1}|1\\rangle^{T2}$ (blue) with values $0.82 \\pm 0.02$, $0.52 \\pm 0.02$, and $0.05 \\pm 0.01$, respectively. \\textbf{C}, Measurements of the state purity. We measure a visibilities ranging from $0.82 \\pm 0.02$ for a pure state to $0.03 \\pm 0.02$ for a maximally mixed state.}\n \\end{figure}\nThis inequality can be violated by performing measurements with settings $a = b = c = \\sigma_x$ and $a' = b' = c' = \\sigma_y$ with a maximum violation of $S_M = 4$. From the QST of $|GHZ_{2}^{+}\\rangle$, 747 of the total 5029 four-fold events can be used to calculate the correlation functions $E$ in Eq. \\ref{Mermin}; these results are shown in Fig. 3A. This leads to $S_M = 3.58 \\pm 0.06$ which is a violation by 24 standard deviations. The implication of using these particular measurement settings is that the state exhibits genuine tripartite entanglement.\n\nAn additional test, namely, the violation of Svetlichny's inequality, is required to test whether the state is capable of displaying tripartite non-locality \\cite{Svetlichny1987,Lavoie2009}. 
Non-local hidden variable theories cannot be ruled out with Mermin's inequality, as they can be violated for arbitrarily strong correlations between two of the three particles. Svetlichny's inequality takes the form\n\\begin{align}\\label{Svet}\nS_{Sv} =& |E(a,b,c) + E(a,b,c') + E(a,b',c) - E(a,b',c')\\nonumber\\\\\n+& E(a',b,c) - E(a',b,c') - E(a',b',c) - E(a',b',c')|\\nonumber\\\\\n&\\leq 4\n\\end{align}\nwith settings $a = Sv1_\\pm$ (where $|Sv1_\\pm\\rangle = \\frac{1}{\\sqrt{2}}(|H\\rangle \\pm e^\\frac{i3\\pi}{4}|V\\rangle)$), $a' = Sv2_\\pm$ (where $|Sv2_\\pm\\rangle = \\frac{1}{\\sqrt{2}}(|H\\rangle \\pm e^\\frac{i\\pi}{4}|V\\rangle)$), $b' = c =\\sigma_x$, and $b = c' = \\sigma_y$. The maximum violation allowed by quantum mechanics is $S_{Sv} = 4\\sqrt{2}$. Figure 3B shows the correlations calculated from 2348 four-fold events leading to $S_{Sv} = 4.88 \\pm 0.13$, which is a violation by 7 standard deviations.\n\nAn application of the quantum Fredkin gate is the direct estimation of non-linear functionals\\cite{Ekert2002} of a quantum state, described by a density matrix $\\rho$, without recourse to QST. Here $\\rho = \\varrho_{T1} \\otimes \\varrho_{T2}$ is the density matrix of two separable subsystems. The circuit we employ is shown in Fig. 4A, where an interferometer is formed using two Hadamard gates and a variable phase shift $\\theta(\\vartheta)$. This interferometer is coupled to the controlled-SWAP operation of our quantum Fredkin gate such that measuring the control in the logical basis leads to an interference pattern given by $\\textrm{Tr}[U_{SWAP}\\varrho_{T1} \\otimes \\varrho_{T2}] = \\textrm{Tr}[\\varrho_{T1}\\varrho_{T2}] = ve^{i\\theta(\\vartheta)}$. If $\\varrho_{T1} \\neq \\varrho_{T2}$ then measurement of the fringe visibility provides, for pure states, a direct measure of the state overlap $|\\langle{T1}|{T2}\\rangle|^2$, where $\\varrho_{T1} = |{T1}\\rangle\\langle{T1}|$ and $\\varrho_{T2} = |{T2}\\rangle\\langle{T2}|$. Conversely, if $\\varrho_{T1} = \\varrho_{T2}$ then the fringe visibility provides an estimate of the length of the Bloch vector ( that is, the purity $P = \\textrm{Tr}[\\varrho^2]$). We realise the Hadamard operations in Fig. 4A by setting the quarter wave plate (QWP) and HWP combinations to prepare or measure $\\sigma_x$.\n\nFigure 4B shows the results of preparing the target qubits in the states $|0\\rangle^{T1}|0\\rangle^{T2}$, $\\frac{1}{\\sqrt{2}}\\left(|0\\rangle + 1\\rangle\\right)^{T1}|0\\rangle^{T2}$, and $|0\\rangle^{T1}|1\\rangle^{T2}$, corresponding to ideal (measured) overlaps and visibilities of 1 ($0.82 \\pm 0.02$), 0.5 ($0.52 \\pm 0.02$), and 0 ($0.05 \\pm 0.01$), respectively. Although the maximum visibility we are able to measure is limited by the performance of the three interferometers in the circuit, our measurements show a clear reduction in visibility as the single qubit states are made orthogonal. Figure 4C shows the result of setting $\\varrho_{T1} = \\varrho_{T2}$. As we increase the degree of mixture (see Materials and Methods), we observe a reduction in visibility from $0.82 \\pm 0.02$ for a pure state to $0.03 \\pm 0.02$ for a maximally mixed state.\n\n\\section*{\\large\\textsf{Discussion}}\nIn conclusion, we have used linear optics to perform the first demonstration of the quantum Fredkin gate. This is achieved by exploiting path-mode entanglement to add control to the SWAP operation. 
Our implementation has an improved success rate of more than one order of magnitude compared to previous proposals and does not require ancilla photons or decomposition into two-qubit gates. Our gate performs with high accuracy in the logical basis and operates coherently on superposition states. We have used the gate to generate genuine tripartite entanglement with the highest fidelities to date for photonic GHZ states and have implemented a small-scale algorithm to characterise quantum states without QST.\n\nAn alternative method for generating the polarisation-path entanglement that drives the gate is the use of C-path gates\\cite{Zhou2011} at the input. Our implementation varies from a fully heralded quantum Fredkin gate (see Materials and Methods), which does not require preexisting entanglement; however it demonstrates the key properties of a quantum Fredkin gate. For completely general quantum circuits that incorporate Fredkin (or similar controlled-arbitrary-unitary) gates at arbitrary circuit locations, the C-path methodology may be necessary at the cost of some additional resources and success probability (see Materials and Methods), though we conjecture that specific circuits comprising multiple Fredkin gates might be optimised using similar techniques to those that allow us to simplify the Fredkin down from a circuit of five two-qubit gates. Nevertheless, for small algorithms or operations and whenever possible, it is significantly favourable to directly generate path entanglement.\n\nThe quantum Fredkin gate has many applications across quantum information processing. Our demonstration should stimulate the design and implementation of even more complex quantum logic circuits. Later we became aware of related work carried out by Takeuchi\\cite{Takeuchi2015}.\n\n\\section*{\\large\\textsf{Materials and Methods}}\n\n\\paragraph*{Source\\\\}\nOur source consisted of a $150\\textrm{ fs}$ pulsed Ti-Sapphire laser operating at a rate of $80\\textrm{ MHz}$ and at a wavelength of $780\\textrm{ nm}$, which was frequency doubled using a $2\\textrm{ mm}$ LBO crystal. Two dispersion-compensating ultrafast prisms spatially filter any residual $780\\textrm{ nm}$ laser light. The frequency- doubled light (with power $100\\textrm{ mW}$) pumped two $2\\textrm{ mm}$ type-II $\\beta$ barium borate (BBO) crystals in succession. Entangled photons, generated via SPDC, were collected at the intersection of each set of emission cones. They then encountered an HWP with its OA at $45^{\\circ}$ and an additional $1\\textrm{ mm}$ type-II BBO crystal used to compensate for spatial and temporal walk-offs. The single photons were coupled into single-mode fiber and delivered to the gate. This configuration gave, on average, a four-fold coincidence rate of 2.2 per minute at the output of the gate.\n\\paragraph*{Entangled state preparation\\\\}\nEach SPDC source emitted pairs of entangled photons of the form $|\\psi^{+}_{1}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|H\\rangle_{1B}|V\\rangle_{2B}+|V\\rangle_{1R}|H\\rangle_{2R}\\right)$ and $|\\psi^{+}_{2}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|H\\rangle_{1Y}|V\\rangle_{2Y}+|V\\rangle_{1G}|H\\rangle_{2G}\\right)$. 
Polarisation optics were used to distribute the path-modes throughout the circuit and thus convert this state into the path-entangled states $|\\psi^{+}_{1}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|1\\rangle_{1B}|1\\rangle_{2B}|0\\rangle_{1R}|0\\rangle_{2R}+|0\\rangle_{1B}|0\\rangle_{2B}|1\\rangle_{1R}|1\\rangle_{2R}\\right)$ and \\\\$|\\psi^{+}_{2}\\rangle = \\frac{1}{\\sqrt{2}}\\left(|1\\rangle_{1Y}|1\\rangle_{2Y}|0\\rangle_{1G}|0\\rangle_{2G}+|0\\rangle_{1Y}|0\\rangle_{2Y}|1\\rangle_{1G}|1\\rangle_{2G}\\right)$. Path modes from $|\\psi^{+}_{1}\\rangle$ and $|\\psi^{+}_{2}\\rangle$ were combined on a PBS (Fig 1C, PBS with outputs $2R$, $1G$, $1Y$, and $2B$); along with post-selection of four-fold coincidence events at the outputs of the control, target, and trigger outputs, this led to Eq. \\eqref{4GHZ} in the main text. Each qubit was encoded using photon polarisation: using Eq. \\eqref{4GHZ}, considering that each photon exists in a superposition of path-modes and omitting the unoccupied modes, an arbitrary polarisation state can be encoded onto each qubit by performing a local unitary operation on each mode, giving equation \\eqref{4GHZU}. The state encoding was performed inside the beam displacer (control qubit) and displaced Sagnac (target qubits) interferometers.\n\\paragraph*{Tuning the phase\\\\}\nThe phase was tuned by tilting an HWP set to its OA. To set the correct phase for each of the four GHZ states, we varied the tilt of the HWP and measured fringes in the four-fold coincidences with our measurement apparatus in the $\\sigma_x\\sigma_y\\sigma_y$ basis. For the $|GHZ_{1,2}^{+}\\rangle$ $\\left(|GHZ_{1,2}^{-}\\rangle\\right)$ we set the tilt to maximise (minimise) the occurrence of the $|DRR\\rangle$, $|DLL\\rangle$, $|ARL\\rangle$, and $|ALR\\rangle$ events.\n\\paragraph*{Mixed state preparation\\\\}\nThe mixed states of the form $\\varrho = m|0\\rangle\\langle{0}| + \\frac{(1-m)}{2}\\left(|0\\rangle\\langle{0}| + |1\\rangle\\langle{1}|\\right)$ were obtained by measuring output statistics for a combination of pure input states. The input states of the target were prepared, in varying proportions given by the parameter $m$, as $0.25 (1 + m)^2|0\\rangle^{T1}|0\\rangle^{T2}$, $0.25(1 - m^2)|0\\rangle^{T1}|1\\rangle^{T2}$, $0.25(1 - m^2)|1\\rangle^{T1}|0\\rangle^{T2}$, and $0.25 (1 - m)^2|1\\rangle^{T1}|1\\rangle^{T2}$. The aggregated data resulted in a fringe pattern which reflects the purity of the mixed single-qubit state.\n\\paragraph*{Erasing the which-path information\\\\}\nGeneration of path-mode entanglement and successful operation of the gate in the quantum regime relied on the erasure of the which-path information in the two displaced Sagnac interferometers. We tested this by performing a Hong-Ou-Mandel (HOM) two-photon interference measurement after each interferometer. After overlapping path modes $2R$ and $1G$ on an NPBS, an HWP with its OA set to $22.5^{\\circ}$ rotated the polarisation of the photons to $|D\\rangle$ and $|A\\rangle$, respectively. Sending these photons into the same port of a PBS led to bunching at the output if the path-modes were indistinguishable. Doing the same for modes $2B$ and $1Y$ gave two separate HOM dips (see Materials and Methods) with visibilities of $90 \\pm 5\\%$ and $91 \\pm 6\\%$.\n\\paragraph*{Heralding the quantum Fredkin gate\\\\}\nIn order to use a quantum Fredkin gate as part of a much larger quantum circuit (with gates in series), it is preferable for the gate to be heralded. 
Realising our gate in this manner involves adding C-path gates\\cite{Zhou2011} to each input. For the best probability of success $P_{success}$, each C-path gate requires two heralded C-NOT gates\\cite{Pittman2001} which, in turn, require two entangled pair ancillae. Execution of the C-path gate succeeds with $P_{success} = (1\/4)^2$\\cite{Zhou2011,Pittman2001}. C-path gates are not a necessity at the output if successful execution is heralded by non-detections at the relevant NPBS ports, at an additional probability cost of factor $1\/4$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDue to the increasing growth of 3D point cloud data, point cloud semantic segmentation has been receiving more and more attention in the 3D computer vision community. Most of these segmentation methods focus on fully supervised segmentation with manually annotated points \\cite{hu2020randla, thomas2019kpconv, lei2020seggcn, zhao2019pointweb, wang2019graph}. However, annotating large-scale 3D point clouds is a cumbersome process, which is costly in labor and time. Particularly, the number of point clouds in some real scenes such as the indoor scene can often reach the order of magnitude to millions. Therefore, it is difficult to obtain the accurate labels of these million points for full-supervised segmentation.\n\n\nDifferent from full-supervised point cloud segmentation, semi-supervised segmentation aims to learn a good label prediction for point clouds with partially annotated points. Recent works have been dedicated to the semi-supervised point cloud segmentation task.\nGuinard~\\emph{et al.}~\\cite{guinard2017weakly} propose a weakly supervised conditional random field classifier for 3D LiDAR point cloud segmentation.\nHowever, it converts the segmentation task into an optimization problem, and the contextual information in point clouds is ignored. Mei~\\emph{et al.} propose a semi-supervised 3D LiDAR point cloud segmentation method~\\cite{mei2019semantic}, where the 3D data is projected to range images for feature embedding, and the inter-frame constraints are combined with some labeled samples to encourage feature consistency. Nonetheless, the constraints along the LiDAR sequential frames are not available in general 3D segmentation datasets. Lately,~\\cite{XuLee_CVPR20} proposes a semi-supervised point cloud segmentation method, which employs three constraints to enhance the feature learning of unlabeled points, including block-level label penalization, data augmentation with rotation and flipping for prediction consistency, and a spatial and color smoothness constraint in local regions. Although it can obtain effective segmentation results, the long-range relations are ignored in this method.\n\n\nAlthough some efforts have been made on semi-supervised point cloud segmentation, how to accurately predict the labels of unannotated points for segmentation is still a challenging problem. Particularly, since point clouds are irregular, it is difficult to exploit the geometry structures of point clouds to accurately infer pseudo labels of unannotated points for label propagation. In addition, the uncertainty of inferred pseudo labels of unannotated points hinders the network from learning discriminative features of point clouds, leading to inaccurate label prediction.\n\nAiming at the aforementioned two problems, in this paper, we propose a novel semi-supervised semantic point cloud segmentation network, named SSPC-Net. 
We first divide the point clouds into superpoints and build the superpoint graph, where a superpoint is a set of points with isotropic geometric features. Thus, we can convert the point-level label prediction problem in the point cloud segmentation task into the superpoint-level label prediction problem. Following the method in~\\cite{landrieu2018large}, we employ the gated graph neural network (GNN)~\\cite{li2015gated} for superpoint feature embedding.\nIn order to fully exploit the local geometry structure of the constructed superpoint graph, we then develop a dynamic label propagation method to accurately infer pseudo labels for unsupervised superpoints. Specifically, the labels of supervised superpoints are gradually extended to the adjacent superpoints with high semantic similarity along the edges of the superpoint graph.\nWe also adopt a superpoint dropout strategy to obtain the high-quality pseudo labels during the label propagation process, where the extended superpoints with low confidences are dynamically pruned.\nFurthermore, we propose a coupled attention mechanism to learn the discriminative context features of superpoints. We alternatively perform attention on the supervised and extended superpoints so that the discriminative power of the features of the supervised and extended superpoints can be mutually boosted, alleviating the uncertainty of the inferred pseudo labels of the unsupervised superpoints.\nFinally, we employ a combined cross-entropy loss to train the segmentation network.\nExtensive results on various indoor and outdoor datasets demonstrate that our method can yield good performance with only a few point-level annotations.\n\n\nThe main contributions of this paper are summarized as follows: \\textbf{(1)} We develop a dynamic superpoint label propagation method to accurately infer the pseudo labels of unsupervised superpoints. We also present a superpoint dropout strategy to select the high-quality pseudo labels. \\textbf{(2)} We propose a coupled attention mechanism on the supervised and extended superpoints to learn the discriminative features of the superpoints. \\textbf{(3)} Our proposed method can yield better performance than the current semi-supervised point cloud semantic segmentation method with fewer labels.\n\n\\section{Related Work}\n\\textbf{Deep learning on 3D point clouds.}\nRecently, many deep learning methods have been proposed to tackle point cloud classification and segmentation.\nSome methods~\\cite{wu20153d,maturana2015voxnet,sedaghat2016,qi2016volumetric} voxelize point clouds and employ 3D CNNs for feature embedding. However, the voxel-based methods suffer from a large memory cost due to the high-resolution voxels. By projecting point clouds into 2D views, \\cite{su15mvcnn,boulch2017unstructured,tatarchenko2018tangent} use classic CNNs to extract features from point clouds. However, the view-based methods are sensitive to the density of 3D data.\nTo reduce memory cost and additional preprocessing, Qi~\\emph{et al.} propose PointNet~\\cite{qi2017pointnet}, which directly processes the unordered point clouds and uses multi-layer perceptrons (MLPs) and the maxpooling function for feature embedding. Following PointNet, many efforts~\\cite{qi2017pointnet++,klokov2017escape,wang2019graph,hua2018pointwise,li2018pointcnn,zhao2019pointweb,wang2019dynamic,thomas2019kpconv,wu2019point,liu2019point2sequence,han2020point2node,zhao2020jsnet,feng2018gvcnn,ma2018learning} have been proposed for point cloud processing. 
Although these methods have achieved decent performance, their models depend on fully annotated 3D point clouds for training.\nHowever, in this paper, we focus on the semi-supervised point cloud semantic segmentation.\n\n\\textbf{Semi-\/Weakly supervised deep learning on 3D point clouds.}\nMany efforts~\\cite{mei2019semantic,wei2020multi,XuLee_CVPR20} have been proposed to tackle semi-\/weakly supervised point cloud semantic segmentation. In \\cite{mei2019semantic}, Mei~\\emph{et al.} introduce a semi-supervised 3D LiDAR data segmentation method. It first converts the 3D data to depth maps and then applies CNNs for feature embedding. In addition to a small part of supervised data, it also leverages the temporal constraints along the LiDAR scans sequence to boost feature consistency. Therefore, it is not practicable for general point cloud segmentation cases. Inspired by CAM~\\cite{zhou2016learning}, Wei~\\emph{et al.} propose MPRM~\\cite{wei2020multi} with scene-level and subcloud-level labels for weakly supervised segmentation. Specifically, it leverages a point class activation map (PCAM) to obtain the localization of each class and then generates point-wise pseudo labels with a multi-path region mining module. In this way, the segmentation network can be trained in a fully supervised manner. However, in practice, generating the subcloud-level annotation is still time-consuming.\nLately, in~\\cite{XuLee_CVPR20}, Xu~\\emph{et al.} propose a semi-supervised algorithm, which uses three constraints on the unlabeled points, $i.e.$, the block level labels for penalizing the negative categories in point clouds, data augmentation with random in-plane rotation and flipping for feature consistency and a spatial and color smoothness constraint in point clouds. \n\n\\begin{figure*}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.9\\linewidth]{.\/img\/framework_final_v2.pdf}\n\t\\end{center}\n\t\\caption{Overview of the proposed semi-supervised semantic point cloud segmentation network (SSPC-Net). We first leverage the gated GNN to extract superpoints features. Then based on the superpoint graph, we conduct the dynamic label propagation strategy to generate pseudo labels. Next, based on the supervised superpoints and the extended superpoints, we perform a coupled attention mechanism to further boost the extraction of discriminative contextual features in the point cloud.}\n\t\\label{fig_outline}\n\\end{figure*}\n\n\n\\section{Our Method}\nIn this section, we present our semi-supervised point cloud segmentation network and the outline of our framework is shown in Fig. \\ref{fig_outline}. We first introduce the superpoint graph embedding module. Then we propose a dynamic label propagation approach combined with a superpoint dropout strategy. Next, we propose a coupled attention mechanism to learn discriminative contextual features of superpoints. Finally, we depict the framework of our method.\n\n\n\\subsection{Superpoint Graph Embedding}\\label{sec_graph_embeddding}\n\nTo obtain the superpoints and learn the superpoint features, following \\cite{landrieu2018large}, we perform an unsupervised superpoints partition approach to generate superpoints and then build superpoint graphs combined with graph neural network (GNN) for superpoints feature embedding.\nDenote $\\mathcal{G=(V, E)}$ as the superpoint graph built upon superpoints, where $\\mathcal{V}$ is the node set and $\\mathcal{E}$ is the edge set. Edge $(i,j) \\in \\mathcal{E}$ links node $i \\in \\mathcal{V}$ with $j \\in \\mathcal{V}$. 
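\nFor concreteness, a superpoint graph of this form can be assembled from a point-to-superpoint assignment as in the following minimal sketch. The snippet is a simplified, hypothetical illustration only (the actual partition and graph construction follow~\\cite{landrieu2018large}); it links two superpoints whenever one of them contains a $k$--nearest neighbour of a point lying in the other, and uses superpoint centroids as a placeholder node descriptor.\n\\begin{verbatim}
# Simplified sketch (hypothetical): assemble a superpoint graph G = (V, E)
# from a point cloud and a precomputed point-to-superpoint assignment.
import numpy as np
from collections import defaultdict

def build_superpoint_graph(points, sp_index, k=10):
    # points: (P, 3) float array; sp_index: (P,) superpoint id per point,
    # assumed to take every value in {0, ..., n_sp - 1}.
    n_sp = int(sp_index.max()) + 1
    # Node descriptors: centroid of each superpoint (placeholder feature).
    centroids = np.stack([points[sp_index == s].mean(axis=0)
                          for s in range(n_sp)])
    # Edges: link superpoints containing mutually close points
    # (brute-force k-nearest neighbours, for clarity only).
    edges = set()
    for p in range(points.shape[0]):
        d = np.linalg.norm(points - points[p], axis=1)
        for q in np.argsort(d)[1:k + 1]:
            a, b = int(sp_index[p]), int(sp_index[q])
            if a != b:
                edges.add((min(a, b), max(a, b)))
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    return centroids, adjacency
\\end{verbatim}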
\nWe first perform a lightweight PointNet-like structure on the superpoints to obtain superpoint features. After that, we learn the superpoint embedding with the gated GNN used in~\\cite{li2015gated}.\nGiven the superpoint embeddings and the semi-supervision, we can penalize the model with incomplete supervision. For a point cloud consisting of $N$ superpoints, we define an indicator $a_i \\in \\{0,1\\}$ for each superpoint to indicate whether the $i$-th superpoint has supervision. Then the segmentation loss $\\mathcal{L}_{s}$ on the superpoint graph embedding module can be formulated as:\n\\begin{equation}\n\\mathcal{L}_{s} = \\frac{1}{A} \\sum \\nolimits _{i=1}^{N}a_i \\cdot \\mathcal{F}_{loss}\\left(z_i, \\bm{y}_i\\right)\n\\end{equation}\nwhere $\\mathcal{F}_{loss}$ is the loss function and we choose the cross-entropy loss in experiments, $A=\\sum \\nolimits _{i=1}^{N} a_i$ is adopted for normalization, $z_i$ represents the superpoint-level label of the $i$-th superpoint and $\\bm{y}_i$ is the prediction logit.\n\nThe reason why we choose the superpoint graph as the representation of the point cloud is twofold. On the one hand, the superpoint is geometrically isotropic and therefore we can directly extend the point-level label to the superpoint-level label, which alleviates the lack of supervision. On the other hand, the superpoint graph is rooted in the geometric structure of the point cloud, and the linking edges between the superpoints greatly facilitate feature propagation. Thus, we can obtain more discriminative contextual features of superpoints. \n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.95\\linewidth]{.\/img\/extension_v3.pdf}\n\t\\end{center}\n\t\\caption{The procedure of our dynamic label propagation. We progressively propagate the superpoint-level label and discard the extended superpoint with low confidence.}\n\t\\label{fig_extension}\n\\end{figure}\n\n\n\\subsection{Dynamic Label Propagation}\\label{sec_extension_growing}\nTo propagate superpoint labels, we propose a dynamic label propagation strategy to generate pseudo labels. \nSuppose we have constructed three sets: the supervised superpoints set $S$, unsupervised superpoints set $U$, and extended superpoints set $E$. Note that at the beginning we set $E=\\varnothing$. The elements of each set are indices of superpoints. \n\n\nFor $\\forall ~i \\in T$, where $T=S\\cup E$, we use the adjacent superpoints to construct the candidate set $\\mathcal{N}_i$, within which we consider propagating labels. Suppose $z_i$ is the label of the $i$-th superpoint. Note that $\\forall ~j \\in\\mathcal{N}_i$ must satisfy two constraints: $j\\in U$ and the predicted category of the $j$-th superpoint should be the same as that of the $i$-th superpoint, that is, $z_i$. Compared with other unsupervised superpoints, elements in $\\mathcal{N}_i$ are more likely to be assigned pseudo labels, due to their close geometric relations and close distances to the $i$-th superpoint. To generate high-quality pseudo labels, we assess the confidence scores of the superpoints in $\\mathcal{N}_i$ and denote the scores as $\\bm{m}_i\\in\\mathbb{R}^{|\\mathcal{N}_i|}$. Then, we enumerate all the superpoints in $\\mathcal{N}_i$ and select the superpoint with the highest confidence score. 
The operation can be formulated as:\n\\begin{equation}\nj^*=\\mathop{\\arg\\max}\\limits_{j=1,2,\\ldots, |\\mathcal{N}_i|}({m}_{i,j})\n\\label{eqn:constraint}\n\\end{equation}\nwhere $j^*$ represents the index of the superpoint with the highest confidence score in $\\mathcal{N}_i$. To further ensure the high quality of pseudo labels, we set a threshold $\\tau$ to filter out the selected superpoints with unsatisfactory confidence values. When the confidence score $m_{i,j^*}\\geqslant \\tau$, the $j^*$-th superpoint is selected and assigned the pseudo label $z_i$. Then $j^*$ will be removed from the unsupervised superpoints set $U$ and added to the extended superpoints set $E$. Conversely, if no superpoint satisfies the constraint, no superpoint will be extended from $\\mathcal{N}_{i}$. In the experiments, $\\tau$ is empirically set to 0.9. Note that for each extension procedure, we merge the supervised superpoints set $S$ and the extended superpoints set $E$ to the new set $T=S\\cup E$ for further extension, because the extended superpoints with pseudo labels can also be treated as superpoints with supervision for further label propagation. In this way, we can progressively propagate the labels of the supervised superpoints and generate more high-quality pseudo labels for unsupervised superpoints in $U$. Algorithm \\ref{algorithm_extension} details the graph-based supervision extension procedure.\n\n\n\\begin{algorithm}[t]\n\t\\DontPrintSemicolon\n\t\\KwIn{Supervised superpoints set $S$, unsupervised superpoints set $U$, extended superpoints set $E$, threshold $\\tau$}\n\t\\KwOut{Updated sets $U$ and $E$}\n\t$T = S\\cup E$ \\\\\n\t\\For {$~i \\in T$}\n\t{\n\t\tDenote $z_i$ as the label of the $i$-th superpoint \\\\\n\t\tConstruct the candidate superpoints set $\\mathcal{N}_i$ \\\\\n\t\t\\If{$\\mathcal{N}_i\\neq \\varnothing $}\n\t\t{\n\t\t\tGenerate the confidence scores $\\bm{m}_i$ \\\\\n\t\t\t$j^*=\\mathop{\\arg\\max}\\limits_{j=1,2,\\ldots, |\\mathcal{N}_i|}({m}_{i,j})$ \\\\\n\t\t\t\\If{$m_{i,j^*}\\geqslant \\tau$}\n\t\t\t{\n\t\t\t\tAssign pseudo label $z_i$ to the $j^*$-th superpoint \\\\\n\t\t\t\t$U := U \\setminus \\{j^*\\} \\quad E := E \\cup \\{j^*\\}$\n\t\t\t}\n\t\t}\n\t\t\n\t}\n\t\\caption{Graph-based supervision extension}\\label{algorithm_extension}\n\\end{algorithm}\n\n\nSince our extension strategy is performed progressively, we consider removing the low-confidence superpoints in the extended superpoints set $E$. Hence, we propose a superpoint dropout strategy assessing the reliability of the extended superpoints in the embedding space. In the superpoints set $T=S\\cup E$, we cluster the superpoints into $c$ classes according to the superpoint labels or pseudo labels, where $c$ is the number of categories. Suppose $\\mathcal{C}_i$ is the $i$-th cluster set that contains the indices of the superpoints belonging to the $i$-th category. In addition, we denote $\\bm{v}_i$ as the feature of the cluster center of $\\mathcal{C}_i$, which is computed by averaging the features of all the superpoints in $\\mathcal{C}_i$. We assess the confidence of each extended superpoint by considering its distance to the corresponding cluster center in the feature space, as detailed below. 
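\nBefore detailing the distance-based dropout criterion, we give a minimal illustrative sketch of one pass of the propagation procedure in Algorithm~\\ref{algorithm_extension}. The snippet is a simplified, hypothetical rendering rather than our exact implementation; the adjacency structure, the per-superpoint prediction \\texttt{predict} and the confidence score \\texttt{confidence} are assumed to be supplied by the superpoint graph embedding module.\n\\begin{verbatim}
# One pass of graph-based supervision extension (simplified sketch).
def propagate_once(S, U, E, labels, adjacency, predict, confidence, tau=0.9):
    # S, U, E: sets of superpoint indices (supervised/unsupervised/extended);
    # labels[i]: (pseudo) label of superpoint i; adjacency[i]: neighbours of i.
    for i in list(S | E):
        z_i = labels[i]
        # Candidate set N_i: unlabeled neighbours predicted as class z_i.
        N_i = [j for j in adjacency[i] if j in U and predict(j) == z_i]
        if not N_i:
            continue
        j_star = max(N_i, key=confidence)
        if confidence(j_star) >= tau:    # keep only high-confidence extensions
            labels[j_star] = z_i         # assign the pseudo label z_i
            U.discard(j_star)
            E.add(j_star)
    return U, E, labels
\\end{verbatim}\nExtended superpoints produced in this way re-enter the loop through $T=S\\cup E$ at the next pass, which is what makes the propagation progressive.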
For $\\forall ~j\\in E\\cap \\mathcal{C}_i$, its Euclidean distance to the cluster center in the feature space is formulated as: \n\\begin{equation}\nd_i^j = \\left\\|\\bm{f}_{j}-\\bm{v}_i \\right \\|_2\n\\end{equation} \nwhere $\\bm{f}_j\\in\\mathbb{R}^{D}$ is the feature of the $j$-th superpoint, and $\\bm{v}_i\\in\\mathbb{R}^{D}$ is the feature of the cluster center. A smaller distance indicates higher reliability of the extended superpoint, whereas a larger distance means higher uncertainty. Therefore, in each cluster, we discard the $k$ extended superpoints that are furthest from the cluster center, where $k$ is set to $0.05\\cdot|E\\cap\\mathcal{C}_i|$. In other words, we retain the most reliable 95\\% of the superpoints and drop the 5\\% least reliable superpoints in the set $E\\cap\\mathcal{C}_i$. Our superpoint dropout strategy is explained in Algorithm \\ref{algorithm_dropout}.\n\nConcretely, as shown in Fig. \\ref{fig_extension}, we perform our graph-based dynamic label propagation strategy every ${M}$ epochs; therefore, the extended superpoints gradually ``grow'' on the graph from the supervised superpoints. The reason why we conduct the extension operation in a multi-stage manner instead of every epoch is that our extension strategy is a cumulative one, which means that too many extension operations would cause redundant extended superpoints and aggravate the memory cost. Meanwhile, the model is not stable at the beginning, which is not conducive to generating extended superpoints. \n\n\\begin{algorithm}[t]\n\t\\DontPrintSemicolon\n\t\\KwIn{Number of classes $c$, supervised superpoints set $S$, unsupervised superpoints set $U$, extended superpoints set $E$}\n\t\\KwOut{Updated sets $U$ and $E$}\n\t$T = S\\cup E$ \\\\\n\tCluster on $T$ and obtain $c$ cluster sets: $\\mathcal{C}_1,\\mathcal{C}_2, \\ldots,\\mathcal{C}_c$ \\\\\n\t\\For {$i = 1:c $}\n\t{\n\t\tCompute the feature $\\bm{v}_i$ of the cluster center of $\\mathcal{C}_i$ \\\\\n\t\t\\For{each $j \\in E\\cap\\mathcal{C}_i$}\n\t\t{\n\t\t\tGenerate the feature $\\bm{f}_j$ of the $j$-th superpoint \\\\\n\t\t\tCompute the distance $d_i^j = \\left\\|\\bm{f}_{j}-\\bm{v}_i \\right \\|_2$\n\t\t}\n\t\tFind the farthest 5\\% superpoints (set as $\\mathcal{C}_{drop}$) in $E\\cap\\mathcal{C}_i$ from the cluster center according to the distance $\\bm{d}_i$ \\\\\n\t\t$E := E \\setminus \\mathcal{C}_{drop}\\quad U := U \\cup \\mathcal{C}_{drop}$\n\t}\n\t\\caption{Superpoint dropout strategy}\\label{algorithm_dropout}\n\\end{algorithm}\n\n\n\n\\subsection{Coupled Attention for Feature Enhancement}\\label{sec_feature_enhance}\nAiming to learn more discriminative contextual features in point clouds, we propose a coupled attention mechanism.\nFor $\\forall ~i\\in S$, we denote the corresponding embedding as $\\bm{h}_i \\in \\mathbb{R}^{D}$. Similarly, for $\\forall ~j\\in E$, we denote the corresponding embedding as $\\bm{h}_j$. 
By weighing all the extended superpoints, we extract the novel contextual feature of $i$-th superpoint with attention mechanism:\n\\begin{equation}\n\\bm{x}_i=\\sum \\nolimits_{j\\in E} g\\left(\\phi(\\bm{h}_i, \\bm{h}_j) \\right)\\odot \\alpha(\\bm{h}_j)\n\\end{equation}\nwhere $\\phi(\\bm{h}_i, \\bm{h}_j)=\\mathop{MLP}(\\bm{h}_i-\\bm{h}_j)$ embeds the channel-wise relations between superpoints, $\\alpha(\\bm{h}_j)=\\mathop{MLP}(\\bm{h}_j)$ is a unary function for individual superpoint embedding, $\\phi(\\cdot,\\cdot):\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$ and $\\alpha:\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$, $\\odot$ is the Hadamard product. $g$ is a normalization function and is defined as:\n\\begin{equation}\ng\\left(\\phi_l(\\bm{h}_i, \\bm{h}_j) \\right) = \\frac{\\exp(\\phi_{l}({\\bm h}_{i}, {\\bm h}_{j}))}{\\sum\\nolimits_{r \\in E}\\exp(\\phi_{l}({\\bm h}_{i}, {\\bm h}_{r}))}\n\\end{equation}\nwhere $l=1,2,\\ldots,D$, represents $l$-th element of embedding $\\phi_{l}(\\cdot,\\cdot)$. Consequently, the matrix representation of the attention operation on the supervised superpoints in $S$ can be formulated as:\n\\begin{equation}\n\\bm{X}_s = \\sum \\nolimits_{j\\in E}\\bm{W}_{es,j} \\odot \\bm{H}_{e,j}\n\\end{equation}\nwhere $\\bm{X}_s\\in\\mathbb{R}^{|S|\\times D}$, $\\bm{W}_{es,j}\\in\\mathbb{R}^{|S|\\times D}$, $\\bm{H}_{e,j} \\in \\mathbb{R}^{|S|\\times D}$ and $j$ enumerates the extended superpoints in $E$. Note that $\\bm{W}_{es}\\in\\mathbb{R}^{|S|\\times|E|\\times D}$ represents the channel-wise weights from the extended superpoints to the supervised superpoints.\n\n\nOnce we obtain the attention embedding $\\bm{X}_s\\in\\mathbb{R}^{|S|\\times D}$, we can derive new segmentation logits of supervised superpoints and formulate the loss as:\n\\begin{equation}\n\\mathcal{L}_{es} = \\frac{1}{|S|} \\sum \\nolimits _{i \\in S}\\mathcal{F}_{loss}\\left(z_i, \\mathop{FC} \\left(\\bm{X}_{s,i}\\right)\\right)\n\\end{equation}\nwhere $z_i$ is the superpoint-level label, $\\bm{X}_{s,i}$ is the attention feature of the corresponding supervised superpoint, and $\\mathcal{F}_{loss}$ is the cross-entropy loss adopted in experiments. Note that $\\mathop\n{FC}$ is the fully connected layer, which maps $\\bm{X}_{s,i}\\in\\mathbb{R}^{|S|\\times D}$ from the $D$-dim to the dimension of the categories.\n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.95\\linewidth]{.\/img\/coupled_attention_v6.pdf}\n\t\\end{center}\n\t\\caption{The coupled attention for feature enhancement.\n\t}\n\t\\label{fig_attention}\n\\end{figure}\n\n\n\nSimilarly, to promote the feature characterization of the extended superpoints, we then perform attention on the extended superpoints in reverse. By weighting the new features enhanced by the attention operation of the supervised superpoints, we boost the context feature propagation and thus enhance the robustness of the features of the extended superpoints. Thus, for $\\forall j \\in E$, the new embedding of the corresponding superpoint can be calculated as:\n\\begin{equation}\n\\bm{y}_j=\\sum \\nolimits_{i\\in S} g\\left(\\psi(\\bm{h}_j, \\bm{x}_i) \\right)\\odot \\beta(\\bm{x}_i)\n\\end{equation}\nwhere $\\psi(\\bm{h}_j, \\bm{x}_i)=\\mathop{MLP}(\\bm{h}_j-\\bm{x}_i)$ characterizes the dependencies of the extended superpoints on the attention embeddings of the supervised superpoints. 
$\\beta(\\bm{x}_i)=\\mathop{MLP}(\\bm{x}_i)$ is a unary function similar to $\\alpha$, $\\psi(\\cdot,\\cdot):\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$ and $\\beta:\\mathbb{R}^{D} \\rightarrow \\mathbb{R}^{D}$. $g$ is a normalization function defined as:\n\\begin{equation}\ng\\left(\\psi_{l}(\\bm{h}_j, \\bm{x}_i) \\right) = \\frac{\\exp(\\psi_{l}({\\bm h}_{j}, {\\bm x}_{i}))}{\\sum\\nolimits_{r \\in S}\\exp(\\psi_{l}({\\bm h}_{j}, {\\bm x}_{r}))}\n\\end{equation}\nwhere $l=1,2,\\ldots,D$, denotes $l$-th element of embedding $\\psi(\\cdot,\\cdot)$. Then the matrix representation of the attention operation on the extended superpoints in $E$ can be defined as:\n\\begin{equation}\n\\bm{Y}_e = \\sum \\nolimits_{i\\in S}\\bm{W}_{ese,i} \\odot \\bm{\\mathcal{X}}_{s,i}\n\\end{equation}\nwhere $\\bm{Y}_e\\in\\mathbb{R}^{|E|\\times D}$, $\\bm{W}_{ese,i}\\in\\mathbb{R}^{|E|\\times D}$, $\\bm{\\mathcal{X}}_{s,i}\\in\\mathbb{R}^{|E|\\times D}$ and $i$ enumerates superpoints in $S$. Note that $\\bm{\\mathcal{X}}_{s}$ is the feature after employing function $\\beta(\\cdot)$ on attention feature $\\bm{X}_{s}$. In this way, we develop the coupled attention, $i.e.$, $\\bm{W}_{ese}\\in\\mathbb{R}^{|S|\\times|E|\\times D}$ denotes the channel-wise weights from the attentional supervised superpoints to extended superpoints.\n\nThen the loss $\\mathcal{L}_{ese}$ on the extended superpoints with enhanced attention features can be formulated as:\n\\begin{equation}\n\\mathcal{L}_{ese} = \\frac{1}{|E|} \\sum \\nolimits _{j \\in E}\\mathcal{F}_{loss}\\left(z^{p}_j, \\mathop{FC} \\left(\\bm{Y}_{e,j}\\right)\\right)\n\\end{equation}\nwhere $z^{p}_j$ is the pseudo label and $\\mathcal{F}_{loss}$ is the cross-entropy loss as well. $\\mathop{FC}$ maps the feature to the category space.\n\nSpecifically, as shown in Fig. \\ref{fig_attention}, our coupled attention considers the intra- and inter-relations concurrently. To encourage the feature consistency in different point clouds, \nwe integrate the supervised superpoints and extended superpoints in various point clouds into sets $S$ and $E$, respectively.\nThe connections between $S$ and $E$ are constructed within and across various point cloud samples, and superpoints with the same labels are encouraged to have more similar semantic embeddings compared to those with diverse classes. \nAs a result, by alternatively performing attention on the supervised and extended superpoints, more long-range dependencies between superpoints are built. Hence, the model learns more discriminative and robust contextual features of the supervised and unsupervised superpoints. \n\n\n\n\n\\subsection{Framework}\\label{sec_framework}\nThe framework of our model is illustrated in Fig. \\ref{fig_outline}. In our framework, the superpoint graph embedding module is the basis of our point cloud feature embedding. Based on this module, the dynamic label propagation method assesses the semantic similarity between the superpoints and propagates the superpoint-level supervision along the edges of the superpoint graph. Then, with the extended superpoints searched by the dynamic label propagation module, we propose a coupled attention mechanism to boost the contextual feature learning of the point cloud. \n\nThe final objective function is a combination of the three objectives $\\mathcal{L}_{final}=\\mathcal{L}_{s} + \\lambda_1\\cdot \\mathcal{L}_{es} +\\lambda_2\\cdot \\mathcal{L}_{ese}$ and we empirically set $\\lambda_1, \\lambda_2$ to 1.\nAs shown in Fig. 
\\ref{fig_outline}, the dynamic label propagation module and coupled attention module are only conducted in the training stage. For testing, we obtain the inferred prediction directly from the superpoint graph embedding module. \n\n\n\\section{Experiments}\n\\subsection{Implementation Details}\nTo train our model, we adopt Adam optimizer with a base learning rate of 0.01. For the S3DIS~\\cite{armeni20163d}, ScanNet~\\cite{dai2017scannet} and vKITTI~\\cite{Gaidon2016Virtual} dataset, we employ the mini-batch size of 4, 8, 8, respectively. We empirically implement the dynamic label propagation module every $M=40$ epochs. \n\n\n\n\\textbf{Semi-supervision generation.}\nTo produce the semi-supervision of point clouds, we randomly select a part of the points with annotations in each class. For example, given a point cloud containing $n$ points with $c$ classes, suppose the supervision rate be $r$, then we evenly distribute the supervision budget $r\\cdot n$ and randomly sample $ (r\\cdot n)\/c$ points in each category as the supervised points.\nThe label of superpoint is the category with the most annotated points.\nIf there is no supervised point contained, then the superpoint will be unsupervised. \nNote that compared with the sampling strategy of random sampling annotated points directly in point clouds, our labeling mechanism is more in coincident with the human annotation behavior, since the random sampling strategy will result in that most of the supervised points will be occupied by the areas with simple geometric structure but more points, $e.g.$, walls, roads, etc.\nFor evaluation, all the quantitative results are computed at the point level.\n\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\begin{adjustbox}{width=0.88\\linewidth}\n\t\t\\large\n\t\t\\begin{tabular}{c|l|c|ccc}\n\t\t\n\t\t\t\\toprule\n\t\t\t\\multicolumn{2}{c|}{{\\bf Method}} &\\multicolumn{1}{c|}{Rate}&mIoU&mAcc&OA\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multicolumn{6}{c}{6-fold cross validation} \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{7}{*}{Full}\n\t\t\t&PointNet&100\\%&47.6&66.2&78.5\\\\\t\t\t\n\t\t\t&SPGraph &100\\%& 62.1 &73.0 &85.5 \\\\\n\t\t\t&PointCNN &100\\%&{\\bf65.3} &{\\bf75.6} &{\\bf88.1} \\\\\n\t\t\n\t\t\n\t\t\t&RSNet&100\\%&56.4&66.4&-\\\\\n\t\t\n\t\t\t& G+RCU2 &100\\%&49.7 &66.4 &81.1 \\\\\n\t\t\t& 3P-RNN &100\\%&56.3 &73.6 &86.9 \\\\\n\t\t\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{3}{*}{Semi-}\n\t\t\n\t\t\n\t\t\n\t\t\t&Baseline &0.002\\% &45.1 &63.7 &73.9 \\\\\n\t\t\t&{\\bf SSPC-Net} &0.002\\% &48.5 &68.3 &79.1\\\\\t\n\t\t\t&{\\bf SSPC-Net} &0.01\\%&{\\bf54.5}&{\\bf70.8} &{\\bf 80.4}\\\\\t\t\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multicolumn{6}{c}{Fold 5} \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{5}{*}{Full}\n\t\t\t&PointNet&100\\%&41.1&49.0&-\\\\\n\t\t\t&PointNet++&100\\%&47.8&-&-\\\\\n\t\t\t&SPGraph &100\\%&{\\bf58.0} &{\\bf66.5} &{\\bf86.3} \\\\\n\t\t\n\t\t\t&SegCloud&100\\%&48.9&57.3&-\\\\\n\t\t\t&PointCNN&100\\%&57.2&63.8&85.9\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\t\\multirow{6}{*}{Semi-}\n\t\t\n\t\t\t&Semi-Seg&1pt &44.5&-&-\\\\\n\t\t\t&Semi-Seg&10\\% &48.0&-&-\\\\\t\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t&Baseline &0.002\\% &39.6 &52.1 &72.4 \\\\\t \t\n\t\t\n\t\t\t&{\\bf SSPC-Net}&0.002\\% &43.0 &56.4 &76.2\\\\\n\t\t\t&{\\bf SSPC-Net}&0.01\\%&51.5 &63.8 &82.0\\\\\n\t\t\t&{\\bf SSPC-Net}&1pt &{\\bf53.8} &{\\bf63.9} &{\\bf83.8}\\\\\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Evaluation on the S3DIS dataset.}\n\t\\label{tab_results_s3dis}\n\\end{table}\n\n\n\\subsection{Semi-supervised 
Semantic Segmentation}\n\\textbf{S3DIS.}\nS3DIS~\\cite{armeni20163d} dataset is an indoor 3D dataset including 6 areas and 13 categories. Three metrics are adopted for quantitative evaluation: mean IoU (mIoU), mean class accuracy (mAcc), and overall accuracy (OA).\n\nThe quantitative and visual results are shown in Tab. \\ref{tab_results_s3dis} and Fig. \\ref{fig_visual_results}, respectively. For a fair comparison, we test our framework with the ``1pt'' labeling strategy adopted in \\cite{XuLee_CVPR20} (dubbed ``Semi-Seg'' in Tab. \\ref{tab_results_s3dis}) as well, which samples one point in each category of each block as the supervised point. It can be seen that our SSPC-Net achieves a significant gain of 9.3\\% in terms of mIoU with the ``1pt'' labeling strategy. In \\cite{XuLee_CVPR20}, Xu {\\em et~al.} split the point cloud into blocks and then train and test their model on each block separately. Nonetheless, our model learns the embeddings of superpoints in the whole point cloud, therefore we can obtain more discriminative contextual features and yield better performance. Note that in Tab.~\\ref{tab_results_s3dis}, ``Baseline'' represents our method without the label propagation strategy and coupled attention mechanism. One can see that our SSPC-Net improves the performance from 39.6\\% to 43.0\\% in terms of mIoU with the supervision rate of 0.002\\% on Area 5 of the S3DIS dataset, benefiting from the pseudo labels generated from the label propagation and the discriminative contextual features extracted by the coupled attention mechanism.\n\n\\textbf{ScanNet.}\nScanNet~\\cite{dai2017scannet} is an indoor scene dataset containing 1513 point clouds with 20 categories. We split the dataset into a training set with 1201 scenes and a testing set with 312 scenes following \\cite{qi2017pointnet++}. We adopt overall semantic voxel labeling accuracy (OA) and mean IoU (mIoU) for evaluation.\n\n\nWe list the quantitative results on the testing set in Tab.~\\ref{tab_scannet_vkitti}. Similar to S3DIS, ScanNet is also an indoor dataset, but the point cloud of ScanNet is much sparser than that of S3DIS. This brings greater challenges to the propagation of supervised labels. However, the proposed model can still achieve good segmentation results and even outperform some fully supervised methods like PointNet \\cite{qi2017pointnet} with semi-supervision. 
Furthermore, the performance of the proposed model is much better than the baseline method, which further validates the effectiveness of our method.\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\LARGE\n\t\\begin{adjustbox}{width=0.99\\linewidth}\n\t\t\\begin{tabular}{c|l|c|cc|ccc}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{2}{c|}{\\multirow{2}{*}{{\\bf Method}}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{Rate}} &\n\t\t\t\\multicolumn{2}{c|}{ScanNet} & \\multicolumn{3}{c}{vKITTI} \\\\\n\t\t\t\\multicolumn{2}{c|}{} & & mIoU & OA & mIoU & mAcc & OA \\\\ \n\t\t\t\\midrule\n\t\t\t\\multirow{7}{*}{Full}\n\t\t\t&PointNet&100\\%\t&-&73.9&34.4&47.0&79.7\\\\\n\t\t\t&PointNet++&100\\%&-&{\\bf84.5} &-&-&-\\\\\n\t\t\t\n\t\t\t&SSP + SPG&100\\% &- &- &{\\bf52.0} &{\\bf67.3} &84.3 \\\\\n\t\t\n\t\t\t&G+RCU&100\\% &-&- &35.6&57.6&79.7\\\\\n\t\t\n\t\t\n\t\t\t&RSNet&100\\% &{\\bf39.3} &79.2 &-&-&- \\\\\n\t\t\t&3P-RNN&100\\%&-&- &41.6&54.1&{\\bf87.8} \\\\\n\t\t\n\t\t\n\t\t\t&3DCNN&100\\%&-&73.0 &-&-&- \\\\\n\t\t\t\\midrule\n\t\t\t\\multirow{3}{*}{Semi-}\n\t\t\n\t\t\n\t\t\t&Baseline&0.01\\% &24.1&38.2 &35.7 &53.4 &79.2 \\\\\n\t\t\t&{\\bf SSPC-Net}&0.01\\% &27.1&66.6 &41.0 &55.7 &81.2 \\\\\n\t\t\t&{\\bf SSPC-Net}&0.05\\% &{\\bf39.3}&{\\bf77.1} &{\\bf50.6} &{\\bf64.8} &{\\bf85.4} \\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Evaluation on the ScanNet and vKITTI datasets.}\n\t\\label{tab_scannet_vkitti} \t \n\\end{table}\n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.87\\linewidth]{.\/img\/visual_v4.pdf}\n\t\\end{center}\n\t\\caption{The visual results on the S3DIS dataset with supervision rate of 0.002\\%.}\n\t\\label{fig_visual_results}\n\\end{figure}\n\n\n\\textbf{vKITTI.}\nvKITTI~\\cite{Gaidon2016Virtual} dataset mimics the real-world KITTI dataset and contains the synthetic outdoor scenes with 13 classes (including road, tree, terrain, car, etc.). For evaluation, we split the dataset into 6 non-overlapping sub-sequences and employ 6-fold cross validation following \\cite{ye20183d}. Mean IoU (mIoU), mean class accuracy (mAcc)\nand overall accuracy (OA) are employed for evaluation.\n\n\nThe quantitative results are presented in Tab.~\\ref{tab_scannet_vkitti}. With the 0.01\\% point-level annotations, compared with the baseline method, our model achieves better segmentation results due to the dynamic label propagation strategy and the discriminative contextual features generated from the coupled attention module. In addition, our model can achieve better or comparable performance than some fully supervised methods with only 0.01\\% and 0.05\\% of the supervised points.\n\n\n\\subsection{Ablation Study}\n\\textbf{Contribution of individual components.}\nIn this section, we investigate the contribution of the proposed components to model performance. The evaluation results on Area5 of the S3DIS dataset of different components with the supervision ratio of 0.002\\% and 0.01\\% are shown in Tab. \\ref{tab_components}, where the components are the graph embedding (Graph Emb.), dynamic label propagation (Label Prop.), coupled attention for feature enhancement (Coup. Attn.). 
It can be observed that there is an obvious promotion on the performance with the addition of dynamic label propagation and coupled attention module, which further demonstrates the effectiveness of these strategies for the semi-supervision.\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\huge\n\t\\begin{adjustbox}{width=1.0\\linewidth}\n\t\n\t\n\t\t\\begin{tabular}{ccc|ccc|ccc}\n\t\t\t\\toprule\n\t\t\t\\multicolumn{3}{c|}{{\\bf Components}} & \\multicolumn{3}{c|}{Rate$=$0.002\\%} & \\multicolumn{3}{c}{Rate$=$0.01\\%} \\\\\n\t\t\t\\midrule\n\t\t\n\t\t\tGraph & Label & Coup. & \\multirow{2}{*}{mIoU} & \\multirow{2}{*}{mAcc} & \\multirow{2}{*}{OA} & \\multirow{2}{*}{mIoU} & \\multirow{2}{*}{mAcc} & \\multirow{2}{*}{OA} \\\\\n\t\t\tEmb. & Prop. & Attn. & & & & & & \\\\\t\t\n\t\t\t\\midrule\n\t\t\t\\checkmark\t&\t\t\t&\t\t\t&39.6 &52.1 &72.4 &48.5 &61.2 &80.3 \\\\\n\t\t\t\\checkmark\t&\\checkmark &\t &40.9 &55.8 &73.6 &50.0 &60.6 &80.8 \\\\\n\t\t\t\\checkmark\t&\\checkmark\t&\\checkmark\t&{\\bf 43.0} &{\\bf 56.4} &{\\bf 76.2} &{\\bf 51.5} &{\\bf 63.8} &{\\bf 82.0}\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{The contribution of different components on Area5 of the S3DIS dataset with different annotation rates.}\n\t\\label{tab_components}\t\n\\end{table}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.78\\linewidth]{.\/img\/curve_v4_cropped.pdf}\n\t\\caption{The percentage of supervised superpoints (ss) and extended superpoints (es) during training. Note that ``all'' means the overall superpoints.}\n\t\\label{fig_ext_ablation}\n\\end{figure}\n\n\n\n\n\\textbf{Supervision rate.} The number of supervised points plays an important role in the segmentation performance. The more labeled points, the smaller gap of data distribution between the semi-supervision and full supervision. To discuss the effect of various labeling rates on model performance, we test our method on Area5 of the S3DIS dataset. The results are shown in Tab. \\ref{tab_comparison}. Combined with Tab. \\ref{tab_results_s3dis}, it can be observed that with only few labeled points, our model has already achieved effective segmentation results. With the growth of supervision, the performance of our model further increases. 
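\nFor reference, the labeling procedure used to generate these supervision rates (described in the implementation details: the budget of $r\\cdot n$ annotated points is evenly distributed over the $c$ categories, and each superpoint takes the majority label of its annotated points) can be sketched as follows. The helper below is a simplified, hypothetical version of our sampling script.\n\\begin{verbatim}
# Simplified sketch (hypothetical) of semi-supervision generation at rate r.
import numpy as np

def sample_supervision(point_labels, sp_index, rate, rng=np.random):
    # point_labels: (n,) int class per point; sp_index: (n,) superpoint id.
    n = len(point_labels)
    c = int(point_labels.max()) + 1
    budget_per_class = int(rate * n) // c   # evenly distribute the budget
    supervised = np.zeros(n, dtype=bool)
    for cls in range(c):
        idx = np.where(point_labels == cls)[0]
        if len(idx) == 0:
            continue
        pick = rng.choice(idx, size=min(budget_per_class, len(idx)),
                          replace=False)
        supervised[pick] = True
    # A superpoint takes the majority class of its annotated points (if any);
    # superpoints without annotated points remain unsupervised.
    sp_labels = {}
    for s in np.unique(sp_index):
        ann = point_labels[(sp_index == s) & supervised]
        if len(ann) > 0:
            sp_labels[int(s)] = int(np.bincount(ann).argmax())
    return supervised, sp_labels
\\end{verbatim}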
It is worth noting that we pay more attention to the cases of extremely few supervision signals, which is more challenging for the point cloud segmentation task.\n\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\large\n\t\\begin{adjustbox}{width=0.83\\linewidth}\n\t\t\\begin{tabular}{c|ccc|c}\n\t\t\t\\toprule\n\t\t\t\\;\\;\\;\\;\\;{\\bf Rate}\\;\\;\\;\\;\\;&\\;\\;mIoU & mAcc&OA\\;\\;&\\;OA of es\\; \\\\\n\t\t\t\\midrule\n\t\t\t0.002\\% &43.0 &56.4 &76.2 & 87.3 \\\\\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t0.01\\% &51.5 &63.8 &82.0 &90.9 \\\\\n\t\t\n\t\t\n\t\t\t0.1\\% &56.2 &66.1 &84.6 & 91.0 \\\\\n\t\t\n\t\t\n\t\t\t1.0\\% &58.3 &66.5 &85.7 &90.1 \\\\\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Comparison of various supervision rates on Area5 of the S3DIS dataset, where ``es'' represents the extended superpoints.} \n\t\\label{tab_comparison}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\t\\centering\n\t\\LARGE\n\t\\begin{adjustbox}{width=0.82\\linewidth}\n\t\t\\begin{tabular}{c|ccc}\n\t\t\t\\toprule\n\t\t\t\\;\\;\\;\\;{\\bf Interval} \\bm{$M$}\\;\\;\\;\\; &\\;\\;\\;mIoU\\;\\;\\; &\\;\\;\\;mAcc\\;\\;\\; &\\;\\;\\;OA\\;\\;\\; \\\\\n\t\t\t\\midrule\n\t\t\t20 &50.2 &61.1 &81.2 \\\\\n\t\t\t30 &50.8 &63.3 &81.5 \\\\\t\t\t\n\t\t\t40 &{\\bf 51.5} &{\\bf 63.8} &{\\bf 82.0} \\\\\n\t\t\t50 &49.6 &61.5 &80.7 \\\\\n\t\t\t60 &49.9 &62.2 &81.0 \\\\\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{adjustbox}\n\t\\caption{Comparison of segmentation results with various interval $M$ of the dynamic label propagation method in the case of the supervision rate of 0.01\\%.} \n\t\\label{tab_extension_M}\n\\end{table}\n\n\n\\textbf{Number of the extended superpoints.} The dynamic label propagation strategy plays an important role in our model. As shown in Fig. \\ref{fig_ext_ablation}, we show the proportion of the supervised superpoints and extended superpoints in the training set when testing on Area 5 of the S3DIS dataset. With the increase of the annotated points, the proportion of the supervised superpoints increases rapidly. Because the probability of a superpoint containing a supervised point is getting higher as well. However, when there are fewer supervised points, the percentage of extended superpoints is obviously larger.\nThis demonstrates the importance of pseudo labels facing extremely few point annotations.\n\n\n\n\\textbf{Quality of the extended superpoints.} To analyze the quality of the extended superpoints, we evaluate the overall accuracy of the extended superpoints (OA of es) in Tab. \\ref{tab_comparison}. Noted that, similar to the aforementioned metrics, the quantitative results of the extended superpoints are conducted at the point level as well. \nFrom Tab. \\ref{tab_comparison}, one can see that the overall accuracy of extended superpoints is around 90\\%, which demonstrates the high quality of extended superpoints. This further proves the effectiveness of our label propagation strategy which generates high-quality pseudo labels. In addition, the high quality of pseudo labels of the extended superpoints further reveals the reason for the improved performance based on the label propagation module.\n\n\n\\textbf{Epoch interval in dynamic label propagation.}\nDuring the training, we perform the dynamic label propagation method every $M$ epochs. For comparison, we train our model with various interval $M$ while keeping other parameters unchanged with the supervision rate of 0.01\\%. The evaluation results on Area 5 of the S3DIS dataset are shown in Tab. 
\\ref{tab_extension_M}. It can be observed that when $M=40$, our model achieves the best performance.\n\n\\section{Conclusion}\nIn this paper, we proposed a semi-supervised point cloud segmentation network. We first partitioned the point cloud into superpoints and built superpoint graphs to explore the long-range relations in the point cloud. Then based on superpoint graphs, we proposed a dynamic label propagation method combined with a superpoint dropout strategy to generate high-quality pseudo labels for the unsupervised superpoints. Next, we proposed a coupled attention module to learn discriminative contextual features of superpoints and fully exploit the generated pseudo labels. Our method can achieve better performance than the current semi-supervised point cloud segmentation methods with fewer labels.\n\n\\section{Acknowledgments}\nThis work was supported by the National Science Fund of China (Grant Nos.\nU1713208, 61876084), Program for Changjiang Scholars.\n\n{\n\t\\bibliographystyle{IEEEtran}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Relative systoles}\n\n\nWe prove a systolic inequality for a~$\\phi$--relative systole of a\n\\mbox{$\\phi$--essential}\n$2$--complex~$X$, where~$\\phi \\colon\\thinspace \\pi_1(X) \\to G$ is a homomorphism to a\nfinitely presented group~$G$. Thus, we show that universally for\nany~$\\phi$--essential Riemannian~$2$--complex~$X$, and any~$G$, we\nhave~$\\sys(X, \\phi)^2 \\leq 8 \\, \\area(X)$. Combining our results with\na method of L.~Guth, we obtain new quantitative results for\ncertain~$3$--manifolds: in particular for the Poincar\\'e\nhomology sphere~$\\Sigma$, we have~$\\sys(\\Sigma)^3 \\leq 24 \\, \\vol(\\Sigma)$. To\nstate the results more precisely, we need the following definition.\n\nLet~$X$ be a finite connected~$2$--complex. Let~$\\phi \\colon\\thinspace \\pi_{1}(X) \\to\nG$ be a group homomorphism. Recall that~$\\phi$ induces a classifying\nmap (defined up to homotopy)~$X \\to K(G,1)$.\n\n\\begin{definition}\nThe complex~$X$ is called~$\\phi$--{\\em essential\\\/} if the classifying\nmap~$X \\to K(G,1)$ cannot be homotoped into the~$1$--skeleton\nof~$K(G,1)$.\n\\end{definition}\n\n\n\n\\begin{definition}\nGiven a piecewise smooth Riemannian metric on~$X$, the~$\\phi$--relative\nsystole of~$X$, denoted~$\\sys(X,\\phi)$, is the least length of a loop\nof~$X$ whose free homotopy class is mapped by~$\\phi$ to a nontrivial\nclass.\n\\end{definition}\n\nWhen~$\\phi$ is the identity homomorphism of the fundamental group, the\nrelative systole is simply called the systole, and denoted~$\\sys(X)$.\n\n\\begin{definition} \n\\label{def:sigma}\nThe~$\\phi$--systolic area~$\\sigma_\\phi(X)$ of~$X$ is defined as\n\\begin{equation*}\n\\sigma_{\\phi}(X) = \\frac{\\area(X)}{\\sys(X,\\phi)^{2}}.\n\\end{equation*}\nFurthermore, we set\n\\begin{equation*}\n\\sigma_{*}(G) = \\inf_{X, \\phi} \\sigma_{\\phi}(X),\n\\end{equation*}\nwhere the infimum is over all~$\\phi$--essential piecewise Riemannian\nfinite connected \\mbox{$2$--complexes}~$X$, and homomorphisms~$\\phi$\nwith values in~$G$.\n\\end{definition}\n\nIn the present text, we prove a systolic inequality for\nthe~$\\phi$--relative systole of a~$\\phi$--essential~$2$--complex~$X$.\nMore precisely, in the spirit of Guth's text \\cite{Gu09}, we prove a\nstronger, {\\em local\\\/} version of such an inequality, for almost\nextremal complexes with minimal first Betti number. 
Namely, if~$X$\nhas a minimal first Betti number among all~$\\phi$--essential piecewise\nRiemannian \\mbox{$2$--complexes} satisfying~$\\sigma_{\\phi}(X) \\leq\n\\sigma_*(G) +\\varepsilon$ for an~$\\varepsilon>0$, then the area of a\nsuitable disk of~$X$ is comparable to the area of a Euclidean disk of\nthe same radius, in the sense of the following result.\n\n\n\\begin{theorem} \n\\label{13}\nLet~$\\varepsilon >0$. Suppose~$X$ has a minimal first Betti number\namong all~$\\phi$--essential piecewise Riemannian\n$2$--complexes satisfying~$\\sigma_{\\phi}(X) \\leq \\sigma_*(G)\n+\\varepsilon$. Then each ball centered at a point~$x$ on\na~$\\phi$--systolic loop in~$X$ satisfies the area lower bound\n\\begin{equation*}\n\\area \\, B(x,r) \\geq\n\\frac{\\left(r-\\varepsilon^{1\/3}\\right)^2}{2+\\varepsilon^{1\/3}}\n\\end{equation*}\nwhenever~$r$ satisfies~$\\varepsilon^{1\/3} \\leq r \\leq\n\\frac{1}{2}\\sys(X,\\phi)$.\n\\end{theorem}\n\nA more detailed statement appears in~Proposition~\\ref{prop:minB}. The\ntheorem immediately implies the following systolic inequality.\n\n\\begin{corollary} \n\\label{coro:A}\nEvery finitely presented group~$G$ satisfies\n\\begin{equation*}\n\\sigma_*(G) \\geq \\frac{1}{8}, \n\\end{equation*}\nso that every piecewise Riemannian~$\\phi$--essential~$2$--complex~$X$\nsatisfies the inequality\n\\begin{equation*}\n\\sys(X,\\phi)^{2} \\leq 8 \\, \\area(X).\n\\end{equation*}\n\\end{corollary}\n\nIn the case of the absolute systole, we prove a similar lower bound\nwith a Euclidean exponent for the area of a suitable disk, when the\nradius is smaller than half the systole, without the assumption of\nnear-minimality. Namely, we will prove the following theorem.\n\n\\begin{theorem} \n\\label{theo:B}\nEvery piecewise Riemannian essential~$2$--complex~$X$ admits a\npoint~$x\\in X$ such that the area of the~$r$--ball centered at~$x$ is\nat least~$r^2$, that is,\n\\begin{equation}\n\\label{11c}\n\\area ( B(x,r)) \\geq r^2,\n\\end{equation}\nfor all~$r \\leq \\frac{1}{2} \\sys(X)$.\n\\end{theorem}\n\nWe conjecture a bound analogous to \\eqref{11c} for the area of a\nsuitable disk of a \\mbox{$\\phi$--essential}~$2$--complex~$X$, with\nthe~$\\phi$--relative systole replacing the systole, {\\it cf.}~the GG-property\nbelow. The application we have in mind is in the case\nwhen~$\\phi \\colon\\thinspace \\pi_1(X)\\to \\Z_p$ is a homomorphism from the fundamental\ngroup of~$X$ to a finite cyclic group. Note that the conjecture is\ntrue in the case when~$\\phi$ is a homomorphism to~$\\Z_2$, by Guth's\nresult \\cite{Gu09}.\n\n\n\\begin{definition}[GG-property%\n\\footnote{GG-property stands for the property analyzed by M.~Gromov\nand L.~Guth}\n] \n\\label{def:GG}\nLet~$C>0$. Let~$X$ be a finite connected~$2$--complex,\nand~$\\phi \\colon\\thinspace \\pi_1(X) \\to G$, a group homomorphism. 
We say that~$X$ has\nthe~$\\rm{GG}_{C}$-property for~$\\phi$ if\nevery piecewise smooth Riemannian metric on~$X$ admits a point~$x \\in\nX$ such that the~$r$--ball of~$X$ centered at~$x$ satisfies the bound\n\\begin{equation} \n\\label{eq:ball}\n\\area \\, B(x,r) \\geq C r^2,\n\\end{equation}\nfor every~$r \\leq \\frac{1}{2} \\sys(X,\\phi)$.\n\\end{definition}\n\nNote that if the~$2$--complex~$X$ is~$\\varepsilon$--almost minimal,\ni.e., satisfies the bound $\\sigma_{\\phi}(X) \\leq \\sigma_*(G) +\n\\varepsilon$, and has least first Betti number among all such\ncomplexes, then it satisfies~\\eqref{eq:ball} for some~$C>0$ and\nfor~$r\\geq \\varepsilon^{1\/3}$ by Theorem~\\ref{13}.\n\nModulo such a conjectured bound, we prove a systolic inequality for\nclosed~$3$--manifolds with finite fundamental group.\n\n\n\\begin{theorem}\n\\label{theo:main}\nLet~$p\\geq 2$ be a prime. Assume that\nevery~$\\phi$--essential~$2$--complex has the~$\\rm{GG}_C$-property\n\\eqref{eq:ball} for each homomorphism~$\\phi$ into~$\\Z_p$ and for some\nuniversal constant~$C>0$. Then every orientable closed\nRiemannian~$3$--manifold~$M$ with finite fundamental group of order\ndivisible by~$p$, satisfies the bound\n\\begin{equation*}\n\\sys(M)^3 \\leq 24 \\, C^{-1} \\; \\vol(M).\n\\end{equation*}\nMore precisely, there is a point~$x\\in M$ such that the volume of\nevery~$r$--ball centered at~$x$ is at least~$\\frac{C}{3}r^{3}$, for all\n$r \\leq \\frac{1}{2} \\sys(M)$.\n\\end{theorem}\n\n\nA slightly weaker bound can be obtained modulo a weaker GG-property,\nwhere the point~$x$ is allowed to depend on the radius~$r$.\n\nSince the GG-property is available for~$p=2$ and~$C=1$ by Guth's article\n\\cite{Gu09}, we obtain the following corollary.\n\n\\begin{corollary}\nEvery closed Riemannian~$3$--manifold~$M$ with fundamental group of\neven order satisfies\n\\begin{equation}\n\\label{Poincare}\n\\sys(M)^3 \\leq 24 \\; \\vol(M).\n\\end{equation}\n\\end{corollary}\n\nFor example, the Poincar\\'e homology~$3$--sphere satisfies the systolic\ninequality \\eqref{Poincare}. \\\\\n\nIn the next section, we present related developments in systolic\ngeometry and compare some of our arguments in the proof of\nTheorem~\\ref{theo:main} to Guth's in~\\cite{Gu09},\n{\\it cf.}~Remark~\\ref{rem:compare}. Additional recent developments in\nsystolic geometry include \\cite{AK, BB10, Bal08, e7, BT, Be08, Bru,\nBru2, Bru3, DKR, DR09, Elm10, EL, Gu09, KK, KK2, Ka4, KR2, KSh, NR,\nPar10, Ro, RS08, Sa08, Sa10}.\n\n\n\n\n\\section{Recent progress on Gromov's inequality}\n\nM.~Gromov's upper bound for the~$1$--systole of an essential\nmanifold~$M$ \\cite{Gr1} is a central result of systolic geometry.\nGromov's proof exploits the Kuratowski imbedding of~$M$ in the Banach\nspace~$L^\\infty$ of bounded functions on~$M$. A complete analytic\nproof of Gromov's inequality \\cite{Gr1}, but still using the\nKuratowski imbedding in~$L^\\infty$, was recently developed by\nL.~Ambrosio and the second-named author~\\cite{AK}. See\nalso~\\cite{AW}.\n\nS.~Wenger~\\cite{wen} gave a complete analytic proof of an\nisoperimetric inequality between the volume of a manifold~$M$, and its\nfilling volume, a result of considerable independent interest. On the\nother hand, his result does not directly improve or simplify the proof\nof Gromov's main filling inequality for the filling radius. 
Note that\nboth the filling inequality and the isoperimetric inequality are\nproved simultaneously by Gromov, so that proving the isoperimetric\ninequality by an independent technique does not directly simplify the\nproof of either the filling radius inequality, or the systolic\ninequality.\n\nL.~Guth \\cite{Gu11} gave a new proof of Gromov's systolic inequality\nin a strengthened {\\em local\\\/} form. Namely, he proved Gromov's\nconjecture that every essential manifold with unit systole contains a\nball of unit radius with volume uniformly bounded away from zero.\n\nMost recently, Guth \\cite {Gu09} re-proved a significant case of\nGromov's systolic inequality \\cite{Gr1} for essential manifolds,\nwithout using Gromov's filling invariants.\n\nActually, in the case of surfaces, Gromov himself had proved better\nestimates, without using filling invariants, by sharpening a technique\nindependently due to Y.~Burago and V.~Zalgaller \\cite[p.~43]{BZ}, and\nJ.~Hebda \\cite{Hebda}. Here the essential idea is the following.\n\nLet~$\\gamma(s)$ be a minimizing non-contractible closed geodesic of\nlength~$L$ in a surface~$S$, where the arclength parameter~$s$ varies\nthrough the interval~$[-\\frac{L}{2}, \\frac{L}{2}]$. We consider\nmetric balls (metric disks)~$B(p,r) \\subset S$ of radius~$r<\n\\frac{L}{2}$ centered at~$p=\\gamma(0)$. The two points~$\\gamma(r)$\nand~$\\gamma(-r)$ lie on the boundary sphere (boundary curve)~$\\partial\nB(p,r)$ of the disk. If the points lie in a common connected\ncomponent of the boundary (which is necessarily the case if~$S$ is a\nsurface and~$L=\\sys(S)$, but may fail if~$S$ is a more\ngeneral~$2$--complex), then the boundary curve has length at\nleast~$2r$. Applying the coarea formula\n\\begin{equation}\n\\label{11b}\n\\area \\, B(p,r)=\\int_0^r \\length \\, \\partial B(p,\\rho) \\, d\\rho,\n\\end{equation}\nwe obtain a lower bound for the area which is quadratic in~$r$.\n\nGuth's idea is essentially a higher-dimensional analogue of Hebda's,\nwhere the minimizing geodesic is replaced by a minimizing\nhypersurface. Some of Guth's ideas go back to the even earlier texts\nby Schoen and Yau \\cite{SY78, SY79}.\n\nThe case handled in \\cite{Gu09} is that of~$n$--dimensional manifolds\nof maximal~$\\Z_2$--cuplength, namely~$n$. Thus, Guth's theorem covers\nboth tori and real projective spaces, directly generalizing the\nsystolic inequalities of Loewner and Pu, see \\cite{Pu} and \\cite{SGT}\nfor details.\n\n\n\n\n\\begin{remark} \\label{rem:compare}\nTo compare Guth's argument in his text~\\cite{Gu09} and our proof of\nTheorem~\\ref{theo:main}, we observe that the topological ingredient of\nGuth's technique exploits the multiplicative structure of the\ncohomology ring~$H^*(\\Z_2;\\Z_2)=H^*(\\R {\\mathbb P}^\\infty; \\Z_2)$.\nThis ring is generated by the~$1$--dimensional class. Thus, every\n$n$--dimensional cohomology class decomposes into the cup product\nof~$1$--dimensional classes. This feature enables a proof by\ninduction on~$n$.\n\nMeanwhile, for~$p$ odd, the cohomology ring~$H^*(\\Z_p;\\Z_p)$ is not\ngenerated by the~$1$--dimensional class; see Proposition~\\ref{42} for\na description of its structure. Actually, the square of\nthe~$1$--dimensional class is zero, which seems to yield no useful\ngeometric information.\n\nAnother crucial topological tool used in the proof of~\\cite{Gu09} is\nPoincar\\'e duality which can be applied to the manifolds representing\nthe homology classes in~$H_*(\\Z_2;\\Z_2)$. 
For~$p$ odd, the homology\nclasses of~$H_{2k}(\\Z_p;\\Z_p)$ cannot be represented by manifolds.\nOne could use D.~Sullivan's notion of~$\\Z_p$--manifolds,\n{\\it cf.}~\\cite{Su,MS}, to represent these homology class, but they do not\nsatisfy Poincar\\'e duality.\n\nFinally, we mention that, when working with cycles representing\nhomology classes with torsion coefficients in~$\\Z_p$, we exploit a\nnotion of volume which ignores the multiplicities in~$\\Z_p$,\n{\\it cf.}~Definition~\\ref{def:Vol}. This is a crucial feature in our proof. \nNote that minimal cycles with torsion coefficients were studied by\nB.~White \\cite{Wh2}.\n\\end{remark}\n\n\n\n\\section{Area of balls in~$2$--complexes}\n\nIt was proved in~\\cite{Gr1} and~\\cite{KRS} that a finite~$2$--complex\nadmits a systolic inequality if and only if its fundamental group is\nnonfree, or equivalently, if it is~$\\phi$--essential for~$\\phi= {\\rm\nId}$.\n\n\n\nIn~\\cite{KRS}, we used an argument by contradiction, relying on an\ninvariant called {\\em tree energy\\\/}, to prove a bound for the\nsystolic ratio of a~$2$--complex. We present an alternative short proof\nwhich yields a stronger result and simplifies the original argument.\n\n\\begin{theorem} \n\\label{theo:r2}\nLet~$X$ be a piecewise Riemannian finite essential~$2$--complex. There\nexists~$x \\in X$ such that the area of every~$r$--ball centered at~$x$\nis at least~$r^2$ for every~$r \\leq \\frac{1}{2} \\sys(X)$.\n\\end{theorem}\n\nAs mentioned in the introduction, we conjecture that this result still\nholds for~$\\phi$--essential complexes and with the~$\\phi$--relative\nsystole in place of~$\\sys$.\n\n\\begin{proof}\nWe can write the Grushko decomposition of the fundamental group of~$X$\nas\n\\begin{equation*}\n\\pi_1(X) = G_1*\\cdots*G_r*F,\n\\end{equation*}\nwhere~$F$ is free, while each group~$G_i$ is nontrivial,\nnon-isomorphic to~$\\Z$, and not decomposable as a nontrivial free\nproduct.\n\nConsider the equivalence class~$[G_1]$ of~$G_1$ under external\nconjugation in~$\\pi_1(X)$. Let~$\\gamma$ be a loop of least length\nrepresenting a nontrivial class~$[\\gamma]$ in~$[G_1]$. Fix~$x \\in\n\\gamma$ and a copy of~$G_1 \\subset \\pi_1(X,x)$ containing the homotopy\nclass of~$\\gamma$. Let~$\\overline{X}$ be the cover of~$X$ with\nfundamental group~$G_1$.\n\n\\begin{lemma}\nWe have~$\\sys(\\overline{X}) = \\length(\\gamma)$.\n\\end{lemma}\n\n\\begin{proof}\nThe loop~$\\gamma$ lifts to~$\\overline{X}$ by construction of the subgroup\n$G_1$. Thus,~$\\sys(\\overline{X}) \\leq \\length(\\gamma)$. Now, the\ncover~$\\overline{X}$ does not contain noncontractible loops~$\\delta$\nshorter than~$\\gamma$, because such loops would project to~$X$ so that\nthe nontrivial class~$[\\delta]$ maps into~$[G_1]$, contradicting our\nchoice of~$\\gamma$.\n\\end{proof}\n\nContinuing with the proof of the theorem, let~$\\bar{x} \\in \\overline{X}$\nbe a lift of~$x$. Consider the level curves of the distance function\nfrom~$\\bar{x}$. Note that such curves are necessarily connected, for\notherwise one could split off a free-product-factor~$\\Z$\nin~$\\pi_1(\\overline{X})=G_1$, {\\it cf.}~\\cite[Proposition 7.5]{KRS},\ncontradicting our choice of~$G_1$. In particular, the\npoints~$\\gamma(r)$ and~$\\gamma(-r)$ can be joined by a path contained\nin the curve at level~$r$. 
Applying the coarea formula~\\eqref{11b},\nwe obtain a lower bound~$\\area \\, B(\\bar{x},r)\\geq r^2$ for the area\nof an~$r$--ball~$B(\\bar{x},r) \\subset \\overline{X}$, for all~$r \\leq\n\\frac{1}{2} \\length(\\gamma)=\\frac{1}{2} \\sys(\\overline{X})$.\n\nIf, in addition, we have~$r \\leq \\frac{1}{2}\\sys(X)$ (which apriori\nmight be smaller than~$\\frac{1}{2} \\sys(\\overline{X})$), then the ball\nprojects injectively to~$X$, proving that\n\\begin{equation*}\n\\area(B(x,r)\\subset X) \\geq r^2\n\\end{equation*}\nfor all~$r\\leq \\frac{1}{2}\\sys(X)$.\n\\end{proof}\n\n\n\\section{Outline of argument for relative systole}\n\\label{three}\n\nLet~$X$ be a piecewise Riemannian connected~$2$--complex, and\nassume~$X$ is~$\\phi$--essential for a group homomorphism\n$\\phi \\colon\\thinspace \\pi_1(X)\\to G$. We would like to prove an area lower bound\nfor~$X$, in terms of the~$\\phi$--relative systole as in\nTheorem~\\ref{theo:r2}. Let~$x \\in X$. Denote by~$B=B(x,r)$ and\n$S=S(x,r)$ the open ball and the sphere (level curve) of radius~$r$\ncentered at~$x$ with~$r < \\frac{1}{2} \\sys(X,\\phi)$. Consider the\ninterval~$I=[0,\\frac{L}{2}]$, where~$L=\\length(S)$.\n\n\\begin{definition} \n\\label{def:Y}\nWe consider the complement~$X \\setminus B$, and attach to it a buffer\ncylinder along each connected component~$S_i$ of~$S$. Here a buffer\ncylinder with base~$S_i$ is the quotient\n\\begin{equation*}\nS_i \\times I\/ \\!\\! \\sim\n\\end{equation*}\nwhere the relation~$\\sim$ collapses each subset~$S_i \\times \\{ 0 \\}$\nto a point~$x_i$. We thus obtain the space\n\\[\n\\left( S_i \\times I\/ \\!\\! \\sim \\right) \\cup_f \\left( X \\setminus B\n\\right),\n\\]\nwhere the attaching map~$f$ identifies\n$S_i\\times\\left\\{\\tfrac{L}{2}\\right\\}$ with~$S_i\\subset X\\setminus B$.\nTo ensure the connectedness of the resulting space, we attach a\ncone~$CA$ over the set of points~$A=\\{ x_i \\}$. We set the length of the\nedges of the cone~$CA$ equal to~$\\sys(X,\\phi)$. We will denote by\n\\begin{equation} \\label{eq:Y}\nY=Y(x,r)\n\\end{equation}\nthe resulting~$2$--complex. The natural metrics on~$X \\setminus B$ and\non the buffer cylinders induce a metric on~$Y$.\n\\end{definition}\n\nIn the next section, we will show that~$Y$ is~$\\psi$--essential for\nsome homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$ derived from~$\\phi$. The\npurpose of the buffer cylinder is to ensure that the relative systole\nof~$Y$ is at least as large as the relative systole of~$X$. Note that\nthe area of the buffer cylinder is~$L^2\/2$.\n\n\nWe normalize~$X$ to unit relative systole and take a point~$x$ on a\nrelative systolic loop of~$X$.\nSuppose~$X$ has a minimal first Betti number among the complexes\nessential in~$K(G,1)$ with almost minimal systolic area (up to\nepsilon). We sketch below the proof of the local relative systolic\ninequality satisfied by~$X$.\n\nIf for every~$r$, the space~$Y=Y(x,r)$ has a greater area than~$X$,\nthen\n\\begin{equation*}\n\\area \\, B(r) \\leq \\tfrac{1}{2}(\\length \\, S(r))^2\n\\end{equation*}\nfor every~$r < \\frac{1}{2} \\sys(X,\\phi)$. Using the coarea\ninequality, this leads to the differential inequality~$y(r) \\leq\n\\tfrac{1}{2} y'(r)^2$. Integrating this relation shows that the area\nof~$B(r)$ is at least~$\\frac{r^2}{2}$, and the conclusion follows.\n\nIf for some~$r$, the space~$Y$ has a smaller area than~$X$, we argue\nby contradiction. 
We show that a~$\\phi$--relative systolic loop of~$X$\n(passing through~$x$) meets at least two connected components of the\nlevel curve~$S(r)$. These two connected components project to two\nendpoints of the cone~$CA$ connected by an arc of~$Y \\setminus CA$.\nUnder this condition, we can remove an edge~$e$ from~$CA$ so that the\nspace~$Y'=Y \\setminus e$ has a smaller first Betti number than~$X$.\nHere~$Y'$ is still essential in~$K(G,1)$, and its relative systolic\narea is better than the relative systolic area of~$X$, contradicting\nthe definition of~$X$.\n\n\n\\section{First Betti number and essentialness of~$Y$} \\label{sec:remov}\n\nLet~$G$ be a fixed finitely presented group. We are mostly interested\nin the case of a finite group~$G=\\Z_p$. Unless specified otherwise,\nall group homomorphisms have values in~$G$, and all complexes are\nassumed to be finite. Consider a homomorphism~$\\phi \\colon\\thinspace \\pi_1(X) \\to\nG$ from the fundamental group of a piecewise Riemannian finite\nconnected~$2$--complex~$X$ to~$G$.\n\n\n\\begin{definition}\nA loop~$\\gamma$ in~$X$ is said to be~$\\phi$--contractible if the image\nof the homotopy class of~$\\gamma$ by~$\\phi$ is trivial, and\n$\\phi$--noncontractible otherwise. Thus, the~$\\phi$--systole of~$X$,\ndenoted by~$\\sys(X,\\phi)$, is defined as the least length of a\n$\\phi$--noncontractible loop in~$X$. Similarly, the\n\\mbox{$\\phi$--systole} based at a point~$x$ of~$X$, denoted\nby~$\\sys(X,\\phi,x)$, is defined as the least length of\na~$\\phi$--noncontractible loop based at~$x$.\n\\end{definition}\n\n\\forget\n\\begin{definition}\n\\label{42b}\nThe~$\\phi$--systolic area of~$X$ is defined as\n$$\n\\sigma_{\\phi}(X) = \\frac{\\area(X)}{\\sys(X,\\phi)^{2}}.\n$$\n\\end{definition}\n\\forgotten\n\nThe following elementary result will be used repeatedly in the sequel.\n\n\\begin{lemma} \n\\label{lem:trivial}\nIf~$r < \\frac{1}{2} \\sys(X,\\phi,x)$, then\nthe~$\\pi_{1}$--homomorphism~$i_{*}$ induced by the inclusion~$B(x,r)\n\\subset X$ is trivial when composed with~$\\phi$, that is~$\\phi \\circ\ni_{*}=0$. More specifically, every loop in~$B(x,r)$ is homotopic to a\ncomposition of loops based at~$x$ of length at most~$2r+\\varepsilon$, for\nevery~$\\varepsilon>0$.\n\\end{lemma}\n\n\n\nWithout loss of generality, we may assume that the piecewise\nRiemannian metric on~$X$ is piecewise flat. Let~$x_{0} \\in X$. The\npiecewise flat~$2$--complex~$X$ can be embedded into some~$\\R^N$ as a\nsemialgebraic set and the distance function~$f$ from~$x_0$ is a\ncontinuous semialgebraic function on~$X$, {\\it cf.}~\\cite{BCR98}.\nThus,~$(X,B)$ is a CW-pair when~$B$ is a ball centered at~$x_0$ (see\nalso \\cite[Corollary~6.8]{KRS}). Furthermore, for almost every~$r$,\nthere exists a~$\\eta >0$ such that the set\n\\begin{equation*}\n\\{ x \\in X \\mid r-\\eta < f(x) < r+\\eta \\}\n\\end{equation*}\nis homeomorphic to~$S(x_0,r) \\times (r-\\eta,r+\\eta)$ where~$S(x_0,r)$\nis the~$r$--sphere centered at~$x_{0}$ and the~$t$--level curve of~$f$\ncorresponds to~$S(x_0,r) \\times \\{t\\}$, {\\it cf.}~\\cite[\\S~9.3]{BCR98}\nand~\\cite{KRS} for a precise description of level curves on~$X$. \nIn such case, we say that~$r$ is a \\emph{regular value} of~$f$. 
\\\\\n\n\\forget \n\nSince the function~$\\ell(r) = \\length \\, f^{-1}(r)$ is piecewise\ncontinuous, {\\it cf.}~\\cite[\\S~9.3]{BCR98}, the condition~$\\area \\, B >\n\\lambda \\, (\\length \\, S)^{2}$ is open (see ~\\eqref{eq:lambda} below).\nTherefore, slightly changing the value of~$r$ if necessary, we can\nassume that~$r$ is regular. \n\n\\forgotten\n\nConsider the connected~$2$--complex~$Y=Y(x_0,r)$ introduced in\nDefinition~\\ref{def:Y}, with~$r < \\frac{1}{2} \\sys(X,\\phi)$ and~$r$\nregular. Since~$r$ is a regular value, there exists~$r_- \\in (0,r)$\nsuch that~$B \\setminus B(x_0,r_-)$ is homeomorphic to the product\n\\[\nS \\times [r_-,r) = \\coprod_i S_i \\times [r_-,r).\n\\]\nConsider the map\n\\begin{equation} \n\\label{eq:XY}\n\\pi \\colon\\thinspace X \\to Y\n\\end{equation}\nwhich leaves~$X \\setminus B$ fixed, takes~$B(x_0,r_-)$ to the vertex\nof the cone~$CA$, and sends~$B \\setminus B(x_0,r_-)$ to the union of\nthe buffer cylinders and~$CA$.\nThis map induces an epimorphism between the first\nhomology groups. In particular,\n\\begin{equation} \\label{eq:b1}\nb_1(Y) \\leq b_1(X).\n\\end{equation}\n\n\\medskip\n\n\\forget\n\\begin{lemma}\n\\label{lem:betti}\nWe have\n\\[\nb_1(Y) \\leq b_1(X).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nSince~$r$ is a regular value, there exists~$r_- \\in (0,r)$ such that~$B \\setminus B(x,r_-)$ is homeomorphic to~$S \\times [r_-,r) = \\coprod_i S_i \\times [r_-,r)$.\nThe map~$X \\to Z$ which leaves~$X \\setminus B$ fixed and takes~$B(x,r_-)$ to the vertex of the cone~$C$ of~$Z$ induces an epimorphism between the first homology groups.\nHence the result.\n\\end{proof}\n\\forgotten\n\n\n\\forget\n\\begin{proof}\nLet~$f$ be the distance function from~$x_0$. It is convenient to\nintroduce the Reeb space~$\\widehat{X}$ obtained from~$X$ by collapsing to\npoints the connected components of the level curves~$f^{-1}(t)$, for\nevery~$t \\in [0,r]$ (level curves for~$t>r$ are unaffected). Note\nthat the lower bound for the systole no longer holds for~$\\widehat{X}$ due\nto possible ``shortcuts'' created in the graph~$T\\subset\\widehat{X}$\ncorresponding to~$t\\leq r$.\n\nSince the fibers of the map~$X\\to \\widehat{X}$ are connected, by the\ncovering homotopy property, we obtain that every closed path in\n$\\widehat{X}$ lifts to a closed path in~$X$, proving the surjectivity of\n$\\pi_1(X)\\to \\pi_1(\\widehat{X})$.\n\nWe first assume that~$X\\setminus B$ is connected. Then the Reeb\nspace~$\\widehat{X}$ is homotopy equivalent%\n\\footnote{The fact that the ``Reeb graph'' is indeed a finite graph\nfollows from semialgebraicity; see \\cite{KRS} for a detailed\ndiscussion.}\nto the union~$Y \\cup T$ obtained by attaching a finite graph~$T$ to\nthe finite set~$\\{x_1,\\ldots,x_n\\} \\subset Y$,\ncf.~Definition~\\eqref{def:Y}. By van Kampen's theorem, the removal of\nthe graph leads to a further decrease in the Betti number. The\nnon-closed path~$\\alpha$ closes up to a loop in~$X$ but not in~$Y$.\n\nIf~$X\\setminus B$ is not connected, our space~$Y$ is homotopy\nequivalent to a connected component of~$\\widehat{X}$ with the graph~$T$\nremoved, proving the lemma.\n\\end{proof}\n\\forgotten\n\n\n\\forget\nLet~$A=\\{x_1,\\ldots, x_n\\}$ be the finite set formed by the\npoints~$x_i$. Let~$Y \\cup CA$ be the space obtained by attaching a\ncone over~$A$ to~$Y$. 
Consider the map\n\\[\n\\widehat{X} \\to Y \\cup CA\n\\]\nwhich leaves~$Y$ fixed and takes~$T \\setminus (\\cup_i e_i)$ to the\nvertex of~$CA$, where the~$e_i$ are the semi-edges of~$T$ with\nendpoints~$x_i$. The composite\n\\[\nX \\to \\widehat{X} \\to Y \\cup CA,\n\\]\nwhere~$X \\to \\widehat{X}$ is the quotient map, leaves~$X \\setminus\n\\overline{B}$ fixed and induces an epimorphism between the first\nhomology groups. Hence,\n$$\nb_1(Y) \\leq b_1(Y \\cup CA) \\leq b_1(X).\n$$\n\nNow, suppose that the projection of some arc~$\\alpha$ of~$X \\setminus\nB$ to~$Y$ connects two points of~$A$. Then the space~$Y \\cup CA$ is\nhomotopy equivalent to~$(Y \\cup CA') \\vee S^{1}$, where~$A'\n\\subset A$. That is,\n$$\nY \\cup CA \\simeq (Y \\cup CA') \\vee S^{1}.\n$$\nWe deduce that\n$$\nb_1(Y) < b_1(Y \\cup CA) \\leq b_1(X). \n$$\n\\forgotten\n\n\n\\begin{lemma} \\label{lem:class}\nIf~$r < \\frac{1}{2} \\sys(X,\\phi)$, then~$Y$ is~$\\psi$--essential for\nsome homomorphism~$\\psi \\colon\\thinspace \\pi_{1}(Y) \\to G$ such that\n\\begin{equation} \\label{eq:circ}\n\\psi \\circ \\pi_* =\\phi\n\\end{equation}\nwhere~$\\pi_*$ is the~$\\pi_1$--homomorphism induced by \\mbox{$\\pi \\colon\\thinspace X \\to Y$}.\n\\end{lemma}\n\n\\begin{proof}\nConsider the CW-pair~$(X,B)$ where~$B=B(x_0,r)$. By\nLemma~\\ref{lem:trivial}, the restriction of the classifying\nmap~$\\varphi \\colon\\thinspace X \\to K(G,1)$ induced by~$\\phi$ to~$B$ is homotopic to a\nconstant map. Thus, the classifying map~$\\varphi$ extends to~$X \\cup\nCB$ and splits into\n\\[\nX \\hookrightarrow X \\cup CB \\to K(G,1),\n\\]\nwhere~$CB$ is a cone over~$B \\subset X$ and the first map is the\ninclusion map. Since~$X \\cup CB$ is homotopy equivalent to the\nquotient~$X\/B$, {\\it cf.}~\\cite[Example~0.13]{Hat}, we obtain the following\ndecomposition of~$\\varphi$ up to homotopy:\n\\begin{equation} \\label{eq:XB}\nX \\stackrel{\\pi}{\\longrightarrow} Y \\to X\/B \\to K(G,1).\n\\end{equation}\n\nHence,~$\\psi \\circ \\pi_* = \\phi$ for the~$\\pi_1$--homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$ induced by the map~$Y \\to K(G,1)$. \nIf the map~$Y \\to K(G,1)$ can be homotoped into the~$1$--skeleton of~$K(G,1)$, the same\nis true for\n\\[\nX \\to Y \\to K(G,1)\n\\]\nand so for the homotopy equivalent map~$\\varphi$, which contradicts\nthe~$\\phi$--essentialness of~$X$.\n\\end{proof}\n\n\n\\section{Exploiting a ``fat\" ball}\n\nWe normalize the~$\\phi$--relative systole of~$X$ to one, i.e.~$\\sys(X,\\phi)=1$. \nChoose a fixed~$\\delta \\in (0,\\frac{1}{2})$\n(close to~$0$) and a real parameter~$\\lambda > \\frac{1}{2}$ (close\nto~$\\frac{1}{2}$).\n\n\\begin{proposition} \n\\label{prop:reeb}\nSuppose there exist a point~$x_{0} \\in X$ and a value~$r_{0} \\in\n(\\delta,\\frac{1}{2})$ regular for~$f$ such that\n\\begin{equation} \n\\label{eq:lambda}\n\\area \\, B > \\lambda \\, (\\length \\, S)^{2}\n\\end{equation}\nwhere~$B=B(x_{0},r_{0})$ and~$S=S(x_{0},r_{0})$. 
\nThen there exists a\npiecewise flat metric on~$Y=Y(x_{0},r_{0})$\nsuch that the systolic areas ({\\it cf.}~Definition~\\ref{def:sigma}) satisfy\n$$\n\\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n$$\n\\end{proposition}\n\n\\begin{proof}\nConsider the metric on~$Y$ described in Definition~\\ref{def:Y}.\nStrictly speaking, the metric on~$Y$ is not piecewise flat since the\nconnected components of~$S$ are collapsed to points, but it can be\napproximated by piecewise flat metrics.\n\nDue to the presence of the buffer cylinders, every loop of~$Y$ of\nlength less than~$\\sys(X,\\phi)$ can be deformed into a loop of~$X\n\\setminus B$ without increasing its length. Thus, by~\\eqref{eq:circ},\none obtains\n\\begin{equation*}\n\\sys(Y,\\psi) \\geq \\sys(X,\\phi) = 1.\n\\end{equation*}\nFurthermore, we have\n\\begin{equation*}\n\\area \\, Y \\leq \\area \\, X - \\area \\, B + \\tfrac{1}{2} (\\length \\,\nS)^{2}.\n\\end{equation*}\nCombined with the inequality~\\eqref{eq:lambda}, this leads to\n\\begin{equation} \\label{eq:wh}\n\\sigma_{\\psi}(Y) < \\sigma_{\\phi}(X) - \\left( \\lambda - \\tfrac{1}{2}\n\\right) (\\length \\, S)^{2}.\n\\end{equation}\nHence,~$\\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X)$,\nsince~$\\lambda > \\frac{1}{2}$.\n\\end{proof}\n\n\n\n\\section{An integration by separation of variables}\n\nLet~$X$ be a piecewise Riemannian finite connected~$2$--complex. Let\n\\mbox{$\\phi \\colon\\thinspace \\pi_{1}(X) \\to G$} be a nontrivial homomorphism to a\ngroup~$G$. We normalize the metric to unit relative systole:\n$\\sys(X,\\phi)=1$. The following area lower bound appeared\nin~\\cite[Lemma~7.3]{RS08}.\n\n\\begin{lemma} \\label{lem:BS}\nLet~$x \\in X$,~$\\lambda >0$ and~$\\delta \\in (0,\\frac{1}{2})$.\nIf \n\\begin{equation} \\label{eq:BS}\n\\area \\, B(x,r) \\leq \\lambda \\, (\\length \\, S(x,r))^{2}\n\\end{equation}\nfor almost every~$r \\in (\\delta,\\frac{1}{2})$, then \n$$\n\\area \\, B(x,r) \\geq \\frac{1}{4\\lambda} (r-\\delta)^{2}\n$$\nfor every~$r \\in (\\delta,\\frac{1}{2})$.\n\nIn particular,~$\\displaystyle \\area(X) \\geq \\frac{1}{16 \\lambda} \\,\n\\sys(X,\\phi)^{2}$.\n\\end{lemma}\n\n\\begin{proof}\nBy the coarea formula, we have\n\\begin{equation*}\na(r) := \\area \\, B(x,r) = \\int_0^r \\ell(s) \\, ds\n\\end{equation*}\nwhere~$\\ell(s)=\\length \\, S(x,s)$. Since the function~$\\ell(r)$ is\npiecewise continuous, the function~$a(r)$ is continuously\ndifferentiable for all but finitely many~$r$ in~$(0,\\frac{1}{2})$\nand~$a'(r)=\\ell(r)$ for all but finitely many~$r$\nin~$(0,\\frac{1}{2})$. 
By hypothesis, we have\n$$\na(r) \\leq \\lambda \\, a'(r)^2\n$$\nfor all but finitely many~$r$ in~$(\\delta,\\frac{1}{2})$.\nThat is,\n$$ \\left( \\sqrt{a(r)} \\right)' = \\frac{a'(r)}{2 \\sqrt{a(r)}} \\geq\n\\frac{1}{2\\sqrt{\\lambda}}.~$$\nWe now integrate this differential inequality from~$\\delta$ to~$r$, to\nobtain\n$$\n \\sqrt{a(r)} \\geq \\frac{1}{2\\sqrt{\\lambda}} (r-\\delta).\n$$\nHence, for every~$r \\in (\\delta, \\frac{1}{2})$, we obtain\n\\[\n a(r) \\geq \\frac{1}{4 \\lambda} (r-\\delta)^{2},\n\\]\ncompleting the proof.\n\\end{proof}\n\\forget\n, or\n\\begin{equation*}\ndr \\leq \\lambda^{1\/2} a^{-1\/2} da.\n\\end{equation*}\nWe now integrate this differential inequality from~$\\delta$ to~$r$, to\nobtain\n\\begin{equation*}\nr-\\delta \\leq \\int_{a(\\delta)}^{a(r)} \\lambda^{1\/2} a^{-1\/2} da,\n\\end{equation*}\nand hence\n\\begin{equation*}\nr-\\delta \\leq 2 \\lambda^{1\/2} \\left( a(r)^{1\/2} - a(\\delta^{1\/2})\n\\right) \\leq 2 \\lambda^{1\/2} a(r)^{1\/2},\n\\end{equation*}\nproving the result so long as we have the inequality for all the\nintermediate values of~$r$. \n\nWhy is that?\n\\forgotten\n\n\n\n\\section{Proof of relative systolic inequality}\n\nWe prove that if~$X$ is a~$\\phi$--essential piecewise\nRiemannian~$2$--complex which is almost minimal (up to~$\\varepsilon$),\nand has least first Betti number among such complexes, then~$X$ possesses\nan~$r$--ball of large area for each~$r< \\tfrac{1}{2} \\sys(X, \\phi)$.\nWe have not been able to find such a ball for an\narbitrary~$\\phi$--essential complex (without the assumption of almost\nminimality), but at any rate the area lower bound for almost minimal\ncomplexes suffices to prove the~$\\phi$--systolic inequality for\nall~$\\phi$--essential complexes, as shown below.\n\n\\forget\n\\begin{definition}\nLet~$G$ be a group. We set\n\\begin{equation*}\n\\sigma_{*}(G) = \\inf_{X} \\sigma_{\\phi}(X),\n\\end{equation*}\nwhere the infimum is over all~$\\phi$--essential piecewise Riemannian\nfinite~$2$--complexes~$X$, where the homomorphism~$\\phi$ has values\nin~$G$.\n\\end{definition}\n\\forgotten\n\n\\begin{remark}\nWe do not assume at this point that~$\\sigma_{*}(G)$ is nonzero,\n{\\it cf.}~Definition~\\ref{def:sigma}. In fact, the proof of\n$\\sigma_{*}(G)>0$ does not seem to be any easier than the explicit\nbound of Corollary~\\ref{coro:A}.\n\\end{remark}\n\nTheorem~\\ref{13} and Corollary~\\ref{coro:A} are consequences of the\nfollowing result.\n\n\\begin{proposition} \n\\label{prop:minB}\nLet~$\\varepsilon >0$. Suppose~$X$ has a minimal first Betti number\namong all~$\\phi$--essential piecewise Riemannian~$2$--complexes\nsatisfying \n\\begin{equation} \\label{eq:eps}\n\\sigma_{\\phi}(X) \\leq \\sigma_*(G) +\\varepsilon.\n\\end{equation} \nThen each ball centered at a point~$x$ on a~$\\phi$--systolic loop in~$X$\nsatisfies the area lower bound\n\\begin{equation*}\n\\area \\, B(x,r) \\geq\n\\frac{(r-\\delta)^2}{2+\\frac{\\varepsilon}{\\delta^2}}\n\\end{equation*}\nfor every~$r \\in \\left(\\delta,\\frac{1}{2}\\sys(X,\\phi) \\right)$, where\n$\\delta \\in \\left(0,\\frac{1}{2}\\sys(X,\\phi)\\right)$. 
In particular,\nwe obtain the bound\n\\begin{equation*}\n\\sigma_*(G) \\geq \\frac{1}{8}.\n\\end{equation*}\n\\end{proposition}\n\n\\forget\n\\begin{proof}\nIf for each~$r$ we have~$a(r) \\leq a'(r)$ then we separate variables\nas in the previous section to obtain the area lower bound.\n\nIf for some~$r$, we have~$a(r) > a'(r)$, then there are two\npossibilities: either~$S(r)$ is connected, and we obtain a lower bound\nof~$r^2$ for the area by Hebda's trick, or~$S(r)$ is disconnected.\nBut the latter case is impossible by the hypothesis of minimality of\nthe Betti number.\n\\end{proof}\n\nWe can now proceed with the proof of the relative systolic inequality\nfor essential~$2$--complexes.\n\\forgotten\n\n\\begin{proof}\nWe will use the notation and results of the previous sections.\nChoose~$\\lambda > 0$ such that\n\\begin{equation}\n\\label{52}\n\\varepsilon < 4 \\left(\\lambda - \\tfrac{1}{2} \\right) \\delta^{2}.\n\\end{equation}\nThat is,\n\\[\n\\lambda > \\frac{1}{2} + \\frac{\\varepsilon}{4 \\delta^2} \\quad \\mbox{\n (close to } \\frac{1}{2} + \\frac{\\varepsilon}{4 \\delta^2}).\n\\]\nWe normalize the metric on~$X$ so that its~$\\phi$--systole is equal to one.\nChoose a point~$x_{0} \\in X$ on a~$\\phi$--systolic loop~$\\gamma$ of~$X$. \n\nIf the balls centered at~$x_0$ are too ``thin'',\ni.e., the inequality~\\eqref{eq:BS} is satisfied for~$x_{0}$ and almost\nevery~$r \\in (\\delta,\\frac{1}{2})$, then the result follows from\nLemma~\\ref{lem:BS}. \n\nWe can therefore assume that there exists a ``fat'' ball centered at~$x_0$, i.e., the hypothesis of Proposition~\\ref{prop:reeb} holds\nfor~$x_{0}$ and some regular~$f$--value~$r_{0} \\in\n(\\delta,\\frac{1}{2})$, where~$f$ is the distance function from~$x_0$.\n(Indeed, almost every~$r$ is regular for~$f$.)\nArguing by contradiction, we show that the assumption on the minimality of the first Betti number rules out this case.\n\nWe would like to construct a~$\\psi$--essential piecewise\nflat~$2$--complex~$Y'$ with~$b_1(Y') < b_1(X)$ such that\n$\\sigma_{\\psi}(Y') \\leq \\sigma_{\\phi}(X)$ and therefore\n\\begin{equation}\n\\sigma_{\\psi}(Y') \\leq \\sigma_{*}(G) + \\varepsilon\n\\end{equation}\nfor some homomorphism~$\\psi \\colon\\thinspace \\pi_{1}(Y') \\to G$.\n\nBy Lemma~\\ref{lem:class} and Proposition~\\ref{prop:reeb}, the space~$Y = Y(x_{0},r_{0})$, endowed with the piecewise Riemannian metric of Proposition~\\ref{prop:reeb}, satisfies\n\\begin{equation*}\n\\sigma_{*}(G) \\leq \\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n\\end{equation*}\nCombined with the inequalities~\\eqref{eq:wh} in the proof of\nProposition~\\ref{prop:reeb} and~\\eqref{eq:eps}, this yields\n\\begin{equation*}\n\\left( \\lambda - \\frac{1}{2} \\right) (\\length \\, S)^{2} < \\varepsilon.\n\\end{equation*}\nFrom~$\\varepsilon < 4 (\\lambda - \\frac{1}{2}) \\delta^{2}$\nand~$\\delta \\leq r_{0}$, we deduce that\n$$\n\\length \\, S < 2 r_{0}.\n$$\n\nNow, by Lemma~\\ref{lem:trivial}, the~$\\phi$--systolic\nloop~$\\gamma\\subset X$ does not entirely lie in~$B$. Therefore, there\nexists an arc~$\\alpha_0$ of~$\\gamma$ passing through~$x_{0}$ and lying\nin~$B$ with endpoints in~$S$. We have\n\\[\n\\length(\\alpha_0) \\geq 2r_{0}.\n\\]\nIf the endpoints of~$\\alpha_0$ lie in the same connected component\nof~$S$, then we can join them by an arc~$\\alpha_1 \\subset S$ of length\nless than~$2r_{0}$. By Lemma~\\ref{lem:trivial}, the loop~$\\alpha_0\n\\cup \\alpha_1$, lying in~$B$, is~$\\phi$--contractible. 
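To make the step which follows explicit: with suitable orientations of the arcs, the class of~$\\gamma$, based at an endpoint of~$\\alpha_0$, decomposes in~$\\pi_1(X)$ as\n\\begin{equation*}\n[\\gamma] = [\\alpha_0 \\cup \\alpha_1] \\, [\\alpha_1 \\cup (\\gamma \\setminus \\alpha_0)],\n\\end{equation*}\nso that~$\\gamma$ and the loop~$\\alpha_1 \\cup (\\gamma \\setminus \\alpha_0)$ have the same image under~$\\phi$.\n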
Therefore, the\nloop~$\\alpha_1 \\cup (\\gamma \\setminus \\alpha_0)$, which is shorter\nthan~$\\gamma$, is~$\\phi$--noncontractible. Hence a contradiction.\n\nThis shows that the~$\\phi$--systolic loop~$\\gamma$ of~$X$ meets two\nconnected components of~$S$. \n\nSince a~$\\phi$--systolic loop is length-minimizing, the loop~$\\gamma$\nintersects~$S$ exactly twice. Therefore, the complementary\narc~$\\alpha=\\gamma \\setminus \\alpha_0$, joining two connected\ncomponents of~$S$, lies in~$X \\setminus B$.\nThe two endpoints of~$\\alpha$ are connected by a length-minimizing arc\nof~$Y \\setminus (X \\setminus \\overline{B})$ passing exactly through\ntwo edges of the cone~$CA$.\n\nLet~$Y'$ be the~$2$--complex obtained by removing the interior of one\nof these two edges from~$Y$. The complex~$Y'=Y \\setminus e$ is\nclearly connected and the space~$Y$, obtained by gluing back the\nedge~$e$ to~$Y'$, is homotopy equivalent to~$Y' \\vee S^1$. That is,\n\\begin{equation} \\label{eq:Y'}\nY \\simeq Y' \\vee S^1.\n\\end{equation}\nThus,~$Y'$ is~$\\psi$--essential if we still denote by~$\\psi$ the\nrestriction of the homomorphism~$\\psi \\colon\\thinspace \\pi_1(Y) \\to G$\nto~$\\pi_1(Y')$. Furthermore, we clearly have\n\\[\n\\sigma_{\\psi}(Y') = \\sigma_{\\psi}(Y) \\leq \\sigma_{\\phi}(X).\n\\]\nCombined with~\\eqref{eq:b1}, the homotopy equivalence~\\eqref{eq:Y'}\nalso implies\n$$\nb_1(Y') < b_1(Y) \\leq b_1(X).\n$$\nHence the result.\n\\end{proof}\n\n\n\\begin{remark}\nWe could use round metrics (of constant positive Gaussian curvature)\non the ``buffer cylinders\" of the space~$Y$ in the proof of\nProposition~\\ref{prop:reeb}. This would allow us to choose~$\\lambda$\nclose to~$\\frac{1}{2 \\pi}$ and to derive the lower bound\nof~$\\frac{\\pi}{8}$ for~$\\sigma_{\\phi}(X)$ in Corollary~\\ref{coro:A}.\nWe chose to use flat metrics for the sake of simplicity.\n\\end{remark}\n\n\n\\section{Cohomology of Lens spaces}\n\nLet~$p$ be a prime number. The group~$G=\\Z_p$ acts freely on the\ncontractible sphere~$S^{2\\infty+1}$ yielding a model for the\nclassifying space\n\\begin{equation*}\nK = K(\\Z_{p},1) = S^{2\\infty+1}\/\\Z_{p}.\n\\end{equation*}\nThe following facts are well-known, {\\it cf.}~\\cite{Hat}.\n\n\\begin{proposition}\n\\label{42}\n\n\nThe cohomology ring~$H^*(\\Z_p;\\Z_p)$ for~$p$ an odd prime is the\nalgebra~$\\Z_p(\\alpha)[\\beta]$ which is exterior on one\ngenerator~$\\alpha$ of degree~$1$, and polynomial with one\ngenerator~$\\beta$ of degree~$2$. Thus,\n\\begin{itemize}\n\\item\n$\\alpha$ is a generator of~$H^1(\\Z_p;\\Z_p)\\simeq \\Z_p$,\nsatisfying~$\\alpha^2=0$;\n\\item\n$\\beta$ is a generator of~$H^2(\\Z_p;\\Z_p)\\simeq \\Z_p$.\n\\end{itemize}\n\\end{proposition}\n\nHere the~$2$--dimensional class is the image under the Bockstein\nhomomorphism of the~$1$--dimensional class. The cohomology of the\ncyclic group is generated by these two classes. The cohomology is\nperiodic with period~$2$ by Tate's theorem. Every even-dimensional\nclass is proportional to~$\\beta^n$. Every odd-dimensional class is\nproportional to~$\\alpha \\cup \\beta^n$.\n\nFurthermore, the reduced integral homology is~$\\Z_p$ in odd dimensions\nand vanishes in even dimensions. The integral cohomology is~$\\Z_p$ in\neven positive dimensions, generated by a lift of the class~$\\beta$\nabove to~$H^2(\\Z_p;\\Z)$.\n\n\n\\begin{proposition}\n\\label{41}\n\\label{33}\nLet~$M$ be a closed~$3$--manifold with~$\\pi_1(M)=\\Z_{p}$. 
Then its\nclassifying map~$\\varphi \\colon\\thinspace M \\to K$ induces an\nisomorphism\n\\[\n\\varphi_i \\colon\\thinspace H_i(M;\\Z_p)\\simeq H_i(K;\\Z_p)\n\\]\nfor~$i=1,2,3$.\n\\end{proposition}\n\n\\begin{proof}\nSince~$M$ is covered by the sphere, for~$i=2$ the isomorphism is a\nspecial case of Whitehead's theorem. Now consider the exact sequence\n(of Hopf type)\n\\begin{equation*}\n\\pi_3(M) \\overset{\\times p}{\\longrightarrow}H_3(M;\\Z)\\to\nH_3(\\Z_p;\\Z)\\to 0\n\\end{equation*}\nsince~$\\pi_2(M)=0$. Since the homomorphism~$H_3(M;\\Z) \\to\nH_3(\\Z_p;\\Z)$ is onto, the result follows by reduction modulo~$p$.\n\\end{proof}\n\n\n\n\n\\section{Volume of a ball}\n\nOur Theorem \\ref{theo:main} is a consequence of the following result.\n\n\\begin{theorem} \n\\label{theo:ball}\nAssume the~$\\rm{GG}_C$-property~\\eqref{eq:ball} is satisfied for some universal constant\n$C>0$ and every homomorphism~$\\phi$ into a finite group~$G$. \nThen every closed\nRiemannian~$3$--manifold~$M$ with fundamental group~$G$ contains a\nmetric ball~$B(R)$ of radius~$R$ satisfying\n\\begin{equation}\n\\label{24}\n\\vol \\, B(R) \\geq \\frac{C}{3} R^3,\n\\end{equation}\nfor every~$R\\leq\\frac{1}{2}\\sys(M)$.\n\\end{theorem}\n\n\\forget\nRecall the following result.\n\n\\begin{proposition}\n\\label{006}\nIn an orientable~$3$--manifold, cup product on~$H^1\\otimes H^2$ in\ncohomology with~$\\Z_p$ coefficients is dual to intersection between\na~$2$--cycle and a ~$1$--cycle with coefficients in~$\\Z_p$.\n\\end{proposition}\n\nHere the global orientation allows one to count an integer\nintersection index, which is then reduced modulo~$p$. \\\\\n\\forgotten\n\nWe will first prove Theorem~\\ref{theo:ball} for a\nclosed~$3$--manifold~$M$ of fundamental group~$\\Z_{p}$, with~$p$\nprime. We assume that~$p$ is odd (the case~$p=2$ was treated by\nL.~Guth). In particular,~$M$ is orientable. Let~$D$ be a~$2$--cycle\nrepresenting a nonzero class~$[D]$ in\n\\begin{equation*}\nH_2(M;\\Z_p) \\simeq H_{1}(M;\\Z_{p}) \\simeq \\Z_p.\n\\end{equation*}\nDenote by~$D_0$ the finite~$2$--complex of~$M$ given by the support\nof~$D$. Without loss of generality, we can assume that~$D_0$ is\nconnected. The restriction of the classifying map~$\\varphi \\colon\\thinspace M \\to\nK$ to~$D_0$ induces a homomorphism~$\\phi \\colon\\thinspace \\pi_{1}(D_0) \\to \\Z_{p}$.\n\n\\begin{lemma} \\label{lem:DB}\nThe cycle~$D$ induces a trivial relative class in the homology of\nevery metric~$R$--ball~$B$ in~$M$ relative to its boundary, with~$R <\n\\frac{1}{2} \\sys(M)$. That is,\n$$\n[D \\cap B] = 0 \\in H_{2}(B,\\partial B;\\Z_{p}).\n$$\n\\end{lemma}\n\n\\begin{proof}\nSuppose the contrary. By the Lefschetz-Poincar\\'e duality theorem,\nthe relative~$2$--cycle~$D \\cap B$ in~$B$ has a nonzero intersection\nwith an (absolute)~$1$--cycle~$c$ of~$B$. Thus, the intersection\nbetween the~$2$--cycle~$D$ and the~$1$--cycle~$c$ is nontrivial\nin~$M$. Now, by Lemma~\\ref{lem:trivial}, the~$1$--cycle~$c$ is\nhomotopically trivial in~$M$. Hence a contradiction.\n\\end{proof}\n\nWe will exploit the following notion of volume for cycles with torsion\ncoefficients.\n\n\\begin{definition} \n\\label{def:Vol}\nLet~$D$ be a~$k$--cycle with coefficients in~$\\Z_p$ in a Riemannian\nmanifold~$M$. We have\n\\begin{equation}\n\\label{11}\nD= \\sum_i n_i \\sigma_i\n\\end{equation}\nwhere each~$\\sigma_i$ is a~$k$--simplex, and each~$n_i\\in \\Z_p^*$ is\nassumed nonzero. 
We define the notion of~$k$--area~$\\area$ for cycles\nas in \\eqref{11} by setting\n\\begin{equation}\n\\label{12}\n\\area(D)= \\sum_i |\\sigma_i|,\n\\end{equation}\nwhere~$|\\sigma_i|$ is the~$k$--area induced by the Riemannian metric\nof~$M$.\n\\end{definition}\n\n\\begin{remark}\nThe non-zero coefficients~$n_i$ in \\eqref{11} are ignored in defining\nthis notion of volume.\n\\end{remark}\n\n\\begin{proof}[Proof of Theorem~\\ref{theo:ball}]\nWe continue the proof of Theorem~\\ref{theo:ball} when the fundamental\ngroup of~$M$ is isomorphic to~$\\Z_p$, with~$p$ an odd prime. We will\nuse the notation introduced earlier. Suppose now that~$D$ is a\npiecewise smooth~$2$--cycle area minimizing in its homology\nclass~$[D]\\not=0\\in H_2(M;\\Z_p)$ up to an arbitrarily small error\nterm~$\\varepsilon>0$, for the notion of volume (area) as defined\nin~\\eqref{12}.\n\n\nRecall that~$\\phi \\colon\\thinspace \\pi_1(D_0)\\to \\Z_p$ is the homomorphism induced\nby the restriction of the classifying map~$\\varphi \\colon\\thinspace M \\to K$ to the\nsupport~$D_0$ of~$D$. By Proposition~\\ref{33}, the~$2$--complex~$D_0$\nis~$\\phi$--essential. Thus, by hypothesis of Theorem~\\ref{theo:ball},\nwe can choose a point~$x \\in D_0$ satisfying\nthe~$\\rm{GG}_C$-property~\\eqref{eq:ball}, i.e., the area of~$R$--balls\nin~$D_0$ centered at~$x$ grows at least as~$C R^2$ for~$R <\n\\frac{1}{2} \\sys(D_0,\\phi)$.\nTherefore, the intersection of~$D_0$ with the~$R$--balls of~$M$\ncentered at~$x$ satisfies\n\\begin{equation}\n\\label{111}\n\\area(D_0\\cap B(x,R)) \\geq CR^2\n\\end{equation}\nfor every~$R < \\frac{1}{2} \\sys(D_0,\\phi)$.\nThe idea of the proof is to control the area of distance spheres\n(level surfaces of the distance function) in~$M$, in terms of the\nareas of the distance disks in~$D_0$.\n\nLet~$B=B(x,R)$ be the metric~$R$--ball in~$M$ centered at~$x$\nwith~$R<\\frac{1}{2} \\sys(M)$. We subdivide and slightly perturb~$D$\nfirst, to make sure that~$D \\cap \\bar B$ is a subchain of~$D$. Write\n\\[\nD=D_- + D_+,\n\\]\nwhere~$D_-$ is a relative~$2$--cycle of~$\\bar B$, and~$D_+$ is a\nrelative~$2$--cycle of~$M\\setminus B$. By Lemma~\\ref{lem:DB},~$D_-$\nis homologous to a~$2$--chain~$\\mathcal{C}$ contained in the distance\nsphere~$\\partial B = S(x,R)$ with\n\\[\n\\partial \\mathcal{C} = \\partial D_- = - \\partial D_+.\n\\]\nWe subdivide and perturb~$\\mathcal{C}$ in~$S(x,R)$ so that the\ninteriors of its~$2$--simplices either agree or have an empty\nintersection. Here the simplices of the~$2$--chain~$\\mathcal{C}$ may\nhave nontrivial multiplicities.\nSuch multiplicities necessarily affect the volume of a chain if one\nworks with integer coefficients.\nHowever, these multiplicities are ignored for the notion\nof~$2$--volume~\\eqref{12}. This special feature allows us to derive\nthe following: the~$2$--volume~\\eqref{12} of the chain~$\\mathcal{C}$\nis a lower bound for the usual area of the distance sphere~$S(x,R)$.\n\nNote that the homology class~$[\\mathcal{C}+D_+]=[D] \\in H_2(M;\\Z_p)$\nstays the same. We chose~$D$ to be area minimizing up\nto~$\\varepsilon$ in its homology class in~$M$ for the notion of\nvolume~\\eqref{12}. Hence we have the following bound:\n\\begin{equation}\n\\label{112}\n\\area(S(x,R)) \\geq \\area(\\mathcal{C}) \\geq \\area(D_-) - \\varepsilon\n\\geq \\area (D_0 \\cap B) - \\varepsilon.\n\\end{equation}\nNow, clearly~$\\sys(M) \\leq \\sys(D_0,\\phi)$. Indeed, a~$\\phi$--noncontractible\nloop in~$D_0$ has nontrivial image in~$\\pi_1(M) \\simeq \\Z_p$, hence is\nnoncontractible in~$M$, while its length in~$M$ coincides with its length\nin~$D_0$. 
Combining the\nestimates~\\eqref{111} and~\\eqref{112}, we obtain\n\\begin{equation}\n\\label{113}\n\\area ( S(x,R)) \\geq C R^2 - \\varepsilon\n\\end{equation}\nfor every~$R<\\frac{1}{2} \\sys(M)$. Integrating the\nestimate~\\eqref{113} with respect to~$R$ and letting~$\\varepsilon$ go\nto zero, we obtain a lower bound of~$\\frac{C}{3} R^3$ for\nthe~$3$--volume of some~$R$--ball in the closed manifold~$M$, proving\nTheorem~\\ref{theo:ball} for closed~$3$--manifolds with fundamental\ngroup~$\\Z_{p}$. \\\\\n\nSuppose now that~$M$ is a closed~$3$--manifold with finite (nontrivial)\nfundamental group. Choose a prime~$p$ dividing the\norder~$|\\pi_1(M)|$ and consider a cover~$N$ of~$M$ with fundamental group cyclic\nof order~$p$.\nThis cover satisfies~$\\sys(N) \\geq \\sys(M)$, and we apply the\nprevious argument to~$N$.\n\nNote that the reduction to a cover could not have been done in the\ncontext of M.~Gromov's formulation of the inequality in terms of the\nglobal volume of the manifold. Meanwhile, in our formulation using a\nmetric ball, following L.~Guth, we can project injectively the ball of\nsufficient volume, from the cover to the original manifold. Namely,\nthe proof above exhibits a point~$x \\in N$ such that the volume of\nthe~$R$--ball~$B(x,R)$ centered at~$x$ is at least~$\\frac{C}{3} R^3$\nfor every~$R < \\frac{1}{2} \\sys(M)$. Since~$R$ is less than half the\nsystole of~$M$, the ball~$B(x,R)$ of~$N$ projects injectively to an\n$R$--ball in~$M$ of the required volume, completing the proof of\nTheorem~\\ref{theo:ball}.\n\\end{proof}\n