diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhtvd" "b/data_all_eng_slimpj/shuffled/split2/finalzzhtvd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhtvd" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{sec:intro}\n\nQuiescent galaxies (QGs) are defined as galaxies with their star formation rates (SFR) lower than the average SFR of star-forming galaxies (SFGs) of similar stellar masses at similar redshifts \\citep[i.e., below the ``main sequence'';][]{Daddi2007,Elbaz2007,Noeske2007,Wuyts2011,Ciambur2013}. The quenching mechanism of QGs is an important topic, especially for high-redshift ones. Observations showed that populations of massive QGs exist at $z \\sim$ 1.0--2.0 \\citep{Belli2017, Carnall2019, Newman2018}, and that half of the most massive QGs were formed at $z \\sim$ 1.5 \\citep{Ilbert2013, Muzzin2013}. In recent studies, the QG population has been extended to $z \\sim$ 4.0 \\citep{Straatman2014, Glazebrook2017, Merlin2018, Schreiber2018a, Carnall2020, Forrest2020a, Forrest2020b, Valentino2020}. Among these, \\citet{Valentino2020} reported three QGs with stellar masses around $10^{11}$ M$_{\\odot}$ and with SFR ranging from 1.1 to 24.0 $M_{\\odot}$ year$^{-1}$ (1.0 to 2.1 $\\sigma$ below the main sequence) at $z$ = 3.775, 4.012, and 3.767. How these high-$z$ QGs can increase their mass and quench the star formation in a short period is still not fully understood. \n\nActive galactic nucleus (AGN) feedback is one of the proposed scenarios to explain the rapid quenching \\citep{Bower2006, Croton2006, Somerville2008, Fabian2012, Man2018}. AGN activities may either provide kinetic energy to the interstellar medium in the host galaxies and reduce the star formation efficiency, or heat up the gas to prevent the gas from cooling \\citep[radio mode AGN feedback;][]{Croton2006, Bower2006, Fabian2012, Somerville2008}. AGN activities may also remove the gas and terminate the star formation \\citep[quasar mode AGN feedback;][]{Fabian2012, Somerville2008}.\n\nHowever, there are also other theoretical scenarios for the quenching mechanism. For example, the existence of turbulence may also provide kinetic energy to the system and result in morphological quenching \\citep{Martig2009,Dekel2009}. Another example would be positive AGN feedback. In this case, AGN activities enhance star formation but rapidly consume the gas, resulting in a quenched galaxy \\citep{Ishibashi2012, Zhuang2020, Shangguan2020}. Other theoretical scenarios include mergers, stellar feedback, virial shock heating, etc \\citep[see discussion in][and references therein]{Man2018}, and the quenching of massive galaxies could be a combination of some of these scenarios. Which of the mentioned scenarios is the dominant channel of the quenching mechanism remains unclear. \n\nHere we would like to first focus on the foundation of the QG studies, the selection criteria of QGs. Rest-frame color-color diagrams are widely applied to selecting QGs. They are often composed of two rest-frame colors: a UV-to-optical color (typically as the $y$-axis) to distinguish blue SFGs from red QGs using the strong UV emission from young stars, and an optical-to-near-infrared color (typically as the $x$-axis) to distinguish old passive stellar populations from dusty\/reddened young stellar populations. Such photometric selections are very convenient since only photometric data are needed. 
There are various kinds of color-color diagrams proposed for QG selections based on rest-frame absolute magnitudes, such as the $U$--$V$--$J$ diagram \\citep{Wuyts2007,Williams2009,Muzzin2013}, the $NUV$--$r$--$J$ diagram \\citep{Ilbert2013}, and the $NUV$--$r$--$K$ diagram \\citep{Arnouts2013}. \n\nDespite the great success of using color-color diagrams to select large samples of QGs, there are potential issues in such selections. First, the selection boundary that separates QGs and SFGs in a color-color diagram was decided empirically \\citep[e.g.,][]{Ilbert2013, Muzzin2013, Williams2009}. Whether a given set of selection criteria is still applicable to a different dataset should be examined. Second, either because of the intrinsic properties of SFGs and QGs, or because of photometric errors, the distributions of the two groups of galaxies can have unknown levels of overlap around the selection boundaries. It was found that adjusting the position of the boundary by $\\pm$0.1 mag could greatly change the selection efficiency and the subsequent analyses based on the selected samples \\citep[][Appendix B therein]{Muzzin2013}.\n\nOne important factor here is that our understanding of dusty galaxies is limited. The spectral energy distribution (SED) of high-$z$ dusty galaxies may be more complicated than what was initially assumed when the color selections were set up. The selected QG ``candidates'' may therefore still suffer from contamination by red dusty SFGs at high $z$. For instance, a $z$ = 3.717 QG candidate \\citep{Straatman2014,Glazebrook2017} was detected at 450 and 870 $\\mu$m \\citep{Simpson2017}. This implies that the target is a dusty SFG, or is an interacting system consisting of a QG and a dusty galaxy \\citep[e.g.,][]{Schreiber2018a}, or at least contains a significant dusty star-forming component. Such contamination may lead to an overestimated number density of QGs, especially at high $z$, which may have consequences in our pursuit of an understanding of the quenching mechanism.\n\nTherefore, in this paper, we revisit the selection of QGs using color-color diagrams. We will examine the quiescence of our color-selected QG candidates using submillimeter observations. Various previous studies were carried out to analyze the star formation of color-selected QGs. Some studies measured the SFR by either applying SED fittings from the UV to mid-infrared bands \\citep{Fumagalli2014, Merlin2018, Carnall2019, Toba2019}, using spectroscopic data \\citep{Schreiber2018,Belli2017}, or measuring H$\\alpha$ emission \\citep{Belli2017b}. Others searched for far-infrared dust emission using $Spitzer$ observations \\citep{Fumagalli2014, Man2016, Gobat2018, Magdis2021}, $Herschel$ observations \\citep{Viero2013, Man2016,Straatman2014, Merlin2018, Gobat2018, Magdis2021}, or Atacama Large Millimeter\/submillimeter Array (ALMA) observations \\citep{Santini2019, Schreiber2018, Simpson2017}. Others also searched for gas content in QG candidates \\citep{Sargent2015,Young2011}. In particular, \\citet{Man2016} measured the dust emission of their color-selected QGs by stacking $Herschel$ SPIRE data at 250, 350, and 500 $\\mu$m (FWHM $\\simeq$ $18\\arcsec$.2, $24\\arcsec$.9, and $36\\arcsec$.3), and they claimed that the contamination by dusty SFGs is $\\sim$ 15 \\% among their QG candidates. 
We will follow the approach in \\citet{Man2016} but re-examine the issue with higher angular resolutions using JCMT SCUBA-2 450 and 850 $\\mu$m data (FWHM = $7\\arcsec$.9 and $13\\arcsec$) and ALMA data.\n\nIn this study, we selected 18,304 QG candidates using the $NUV$--$r$--$J$ diagram with deep galaxy samples from the COSMOS field \\citep{Laigle2016} and analyzed the properties of the selected QG candidates. In the first part of our study, we estimated the contamination of dusty SFGs among the QG candidates by cross-matching them to the multi-wavelength catalogs as well as performing stacking analyses in the submillimeter images. We also estimated the effect of chance projection in the cross-matching and estimated the degrees of small-scale clustering. In the second part of our study, we further investigated the AGN feedback as a potential quenching mechanism among QG candidates. We examined the relation between various AGNs and the QG candidates in our data by calculating the QG fractions in the AGN samples.\n\nWe describe our data in Section~\\ref{sec:data} and introduce the QG color-color selection in Section~\\ref{sec:QG_selection}. In Sections~\\ref{sec:bright_SMG} and \\ref{sec:faint_SMG}, we analyze the contamination of dusty SFGs among the $NUV$--$r$--$J$ selected QG candidates. We also examine the small-scale clustering between dusty SFGs and QG candidates in Section~\\ref{subsec:blind_matching}. In Section~\\ref{sec:AGN_properites}, we discuss the QG fractions among our AGN samples. Section~\\ref{sec:summary} gives a summary of our results. We use the \\citet{Chabrier2003} initial mass function (IMF) and an H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\\Omega_\\Lambda$ = 0.7, and $\\Omega_m$ = 0.3 cosmology throughout this study.\n\n\\section{Multi-wavelength Data} \\label{sec:data}\n\n\\subsection{COSMOS2015 Catalog} \\label{subsec:COSMOS2015}\n\nWe selected galaxies from the multi-wavelength band-merged COSMOS2015 catalog \\citep{Laigle2016}. To use the rest-frame absolute magnitudes for our color selection, we excluded those labeled as failure in SED fitting to avoid bad absolute magnitudes. We also excluded samples that are labeled as stars. We excluded galaxies with extreme values in the catalog ($NUV$, $r$, and $J$ absolute magnitudes that are $<-30$ or $>0$, and negative redshifts), which are likely caused by either catastrophic failures in SED fitting or problematic photometry. These selection criteria reject $\\sim57\\%$ (677,085\/1,182,108) of the initial sample.\n\n We further limited the errors of the magnitudes in $K_S$ band to be lower than 0.2. This uniform selection ensures that our sample has a robust set of photometry and avoids biasing against high-$z$ sample. The limiting magnitude is 23.7 for $K_S$ band. The selection criterion of the $K_S$ band magnitude error further rejects $\\sim29\\%$ (344,806\/1,182,108) of the initial sample. Overall, the majority of the rejections are caused by their faintness. They either are not detected at $K_S$ or have $K_S>24$.\n\nWith the above selection criteria, we obtained a total sample size of 160,217 galaxies from the COSMOS2015 catalog. They all have high-quality SED fitting results; all of them have SED fitting based on at least nine filters, and 98\\% of them more than 28 filters. The sample covers an area of 1.58 deg$^2$ in the COSMOS field (Fig.~\\ref{fig:footprint}). 
We used stellar mass $M_*$, photometric redshift $z$, and rest-frame absolute magnitudes $M_{NUV}$, $M_r$, and $M_J$ from the COSMOS2015 catalog, which were derived from SED fittings. The sample has stellar masses up to $M_* = 10^{12}~M_{\\odot}$ and redshifts over $z\\sim4$ (Fig.~\\ref{fig:data} (b) and (c)). The absolute magnitudes will be used for our QG selection in the next section.\n\n\\begin{figure}[ht!]\n\\epsscale{1.15}\n\\plotone{footprint.png}\n\\caption{\\label{fig:footprint}Coverage maps of the COSMOS field. The background shows the S2COSMOS 850 $\\mu$m image. The black polygon corresponds to the coverage of our $K_s$ band selected COSMOS2015 sample, while the white circle corresponds to the 151-arcmin$^2$ coverage of the STUDIES 450 $\\mu$m image. The MIPS 24 $\\mu$m and VLA 3 GHz catalogs cover the whole area of the black polygon and are not shown in this figure.}\n\\end{figure}\n\n\\begin{figure*}[ht!]\n\\epsscale{1.15}\n\\plottwo{NUVrJ_1.png} {mass_z.png}\n\\plottwo{hist_all.png}{hist_QG.png} \n\\caption{\\label{fig:data}The full galaxy sample selected from the COSMOS2015 catalog. Panel (a) shows the distribution in the $NUV$--$r$--$J$ diagram, and the QG sub-sample is selected in the upper-left corner. The fringe structure in the QG population may be caused by either the lack of QG templates or by certain procedures in the SED fitting, but the structure does not affect our results. The reddening vector derived from \\citet{Calzetti2000} extinction is shown in the lower-right corner, while the typical (median) error in the two colors for all the sources is shown in the lower-left corner. Panels (b) and (c) show the stellar mass and the redshift distributions of the full galaxy sample (colored in blue), while panels (b) and (d) show those of the QG sub-sample (colored in black).}\n\\end{figure*}\n\n\n\\subsection{Submillimeter Data} \\label{subsec:submm}\n\nWe used submillimeter data in the COSMOS field from JCMT SCUBA-2 \\citep{Holland2013, Holland1999} at 450 $\\mu$m (STUDIES, \\citealt{Wang2017}; final data release in Gao et al.\\ 2021, in prep.) and 850 $\\mu$m (S2COSMOS, \\citealt{Simpson2019}), in order to search for dusty SFGs that contaminate the QG sample. The 450 $\\mu$m map covers the central 151 arcmin$^2$ of COSMOS, while the 850 $\\mu$m map covers the whole COSMOS field (Fig.~\\ref{fig:footprint}). The 450 $\\mu$m and 850 $\\mu$m maps have detection limits of about 3.5 mJy and 2 mJy, respectively. The detection limits are all substantially higher than the confusion noise ($\\sigma_{\\rm c}\\sim0.7$ mJy at 450 $\\mu$m, e.g., \\citealp{Lim2020}; $\\sigma_{\\rm c}\\sim0.5$ mJy at 850 $\\mu$m, e.g., \\citealt{Simpson2019}).\n\nIn total, we selected 357 objects with 450 $\\mu$m detection and 1,147 objects with 850 $\\mu$m detection from the SCUBA-2 maps. Four of the 450 $\\mu$m sources and 166 of the 850 $\\mu$m sources are located outside the region occupied by our optically selected sample, because of the difference in area coverage and the masks in the COSMOS2015 catalog (Fig.~\\ref{fig:footprint}). Among the remaining 353 objects with 450 $\\mu$m detection and 981 objects with 850 $\\mu$m detection, 77 and 370, respectively, have ALMA observations from the AS2COSMOS and A3COSMOS catalogs (Section~\\ref{subsec:auxiliary}).\n\nSince the SCUBA-2 maps have relatively low angular resolution, we could not reliably identify the optical counterparts to the submillimeter sources. 
We therefore include the auxiliary data in Section~\\ref{subsec:auxiliary} for the process of cross-matching COSMOS2015 galaxies to SCUBA-2 sources, and the details will be described in Section~\\ref{subsec:traditional_matching}.\n\n\n\n\\subsection{Auxiliary Data} \\label{subsec:auxiliary}\n\nWe included $Spitzer$ MIPS 24 $\\mu$m, VLA 3 GHz, and ALMA catalogs in our study for their better astrometry when our submillimeter data do not have sufficient angular resolution, and for analyzing QG properties.\n\nFor the 24 $\\mu$m data, we used the $Spitzer$ MIPS S-COSMOS image from \\citet{Sanders2007}. In order to generate a catalog deeper than the archival MIPS catalog of \\citet{Sanders2007}, 24 $\\mu$m sources were extracted using \\texttt{SExtractor} \\citep{Bertin1996}, and their fluxes were re-calibrated to their $Spitzer$ General Observer Cycle 3 total fluxes. Our MIPS 24 $\\mu$m catalog has a 3.5$\\sigma$ detection limit of 57 $\\mu$Jy, in contrast to the flux cut at 150 $\\mu$Jy in \\citet{Sanders2007}. Our catalog is very similar to the catalog of \\citet{LeFloch2009} in terms of total numbers of detections. The fluxes are also consistent within 6\\% \\citep{Lim2020}, as we calibrated our fluxes to those of \\citet{Sanders2007}.\n\nWe cross-matched the COSMOS2015 catalog with our MIPS 24 $\\mu$m catalog using a search radius of $2\\arcsec$, which corresponds to about 1\/3 of the beam size at 24 $\\mu$m. 26,999 galaxies (16.9\\%) are matched to the MIPS 24 $\\mu$m sources.\n\nFor the 3 GHz data, we directly adopted the identification of the COSMOS2015 objects in the VLA catalog of \\citet{Smolcic2017a}, which used a search radius of $0\\farcs8$. The 5$\\sigma$ detection limit of the VLA catalog of \\citet{Smolcic2017a} is 2.3 $\\mu$Jy beam$^{-1}$. 6,002 galaxies (3.7\\%) are matched to the VLA 3-GHz sources.\n\nWe also used catalogs derived from ALMA observations, including the AS2COSMOS \\citep{Simpson2020} and A3COSMOS \\citep{Liu2019} catalogs. The AS2COSMOS catalog was derived from the follow-up 343 GHz observations of 186 bright 850 $\\mu$m sources in the S2COSMOS catalog. The AS2COSMOS sources are essentially complete for the S2COSMOS sources above 6.2 mJy; only one S2COSMOS source does not have an ALMA detection. The A3COSMOS catalog collects ALMA archival data in the COSMOS field, at frequencies from 90.2 to 671 GHz. We cross-matched our optical sample with the ALMA catalogs using a search radius of $1\\arcsec$. This search radius should allow us to overcome the intrinsic offsets between starlight and submillimeter emission from dusty galaxies (e.g., 1$\\sigma$ offset of $0\\farcs55$ in \\citealp{Chen2015}).\n\n\n\\subsection{AGN Sample} \\label{subsec:AGN_samples}\n\nWe also examined the AGN properties of our sample. We cross-matched our sample with radio AGNs from the VLA catalog of \\citet{Smolcic2017}, color-selected mid-IR AGNs from \\citet{Chang2017}, and X-ray AGNs selected from $Chandra$ data by \\citet{Civano2016} and \\citet{Marchesi2016}.\n\nThe radio AGNs were selected by comparing the observed radio emission to the radio emission expected from the IR-derived SFR. Those exceeding 3$\\sigma$ in $\\log(L_{1.4GHz}\/\\mathrm{SFR}_{IR})$ are classified as radio AGNs \\citep[see the details in][]{Smolcic2017, Delvecchio2017}. The mid-IR AGNs were selected in the rest-frame mid-IR color-color diagram. Those that exhibit red power-law SEDs in the mid-IR are classified as mid-IR AGNs \\citep[see the details in][]{Chang2017,Lacy2004, Lacy2007, Donley2012}. 
The X-ray AGNs were selected with X-ray luminosity of $L_{X(2-10keV)}>10^{42}$ ergs s$^{-1}$ \\citep{Zezas1998,Ranalli2003,Szokoly2004}. We note that if such an X-ray luminosity is produced purely by X-ray binaries rather than an AGN, the inferred SFR would be $>200$~$M_{\\odot}$~yr$^{-1}$ using the conversion between SFR and $L_{X}$ \\citep[e.g.,][]{Ranalli2003}. Such a high SFR would be detected in our submillimeter analyses, but we do not observe it. Therefore, the majority of the $L_{X(2-10keV)}>10^{42}$ ergs s$^{-1}$ sources in our samples should be AGN-dominated.\n\n\\section{Color-Color Diagram} \\label{sec:QG_selection}\n\nWe applied the rest-frame $NUV$--$r$--$J$ color-color diagram to our sample in order to select QG candidates. Various color-color diagrams have been used for QG selection, including the $U$--$V$--$J$ diagram \\citep{Williams2009} and the $NUV$--$r$--$J$ diagram \\citep{Ilbert2013}. Although the $U$--$V$--$J$ diagram is more widely used than the $NUV$--$r$--$J$ diagram, there are advantages of using the $NUV$ and $r$ bands instead of the $U$ and $V$ bands \\citep{Ilbert2013}. The $NUV$ band is at a shorter wavelength, so it is more sensitive to emission from young stars and to extinction. The $NUV-r$ color has a wider wavelength span than the $U-V$ color, so it is less vulnerable to photometric errors. The rest-frame $NUV$ band can be obtained from optical data toward higher redshifts, around $z >$ 2, where the $U$ band starts to enter the near-IR. This leads to better sensitivities. Because of the above, we adopted the $NUV$--$r$--$J$ diagram in this study. We note that the selection results of the two color-color diagrams are similar to each other. About 85\\% of our $NUV$--$r$--$J$ selected QG candidates overlap with the $U$--$V$--$J$ selected QG candidates, and the overlapping fraction slightly varies with redshift and the position of the selection boundary.\n\nOn the $NUV$--$r$--$J$ color-color diagram (Fig.~\\ref{fig:data} (a)), a blue color on the $y$-axis indicates starlight from young stars, while the color on the $x$-axis breaks the degeneracy between age and dust reddening. QGs tend to lie in the upper-left corner of the diagram. We adopted the criteria proposed by \\citet{Ilbert2013}:\n$$M_{NUV}-M_{r}> 3(M_{r}-M_{J})+1,$$ \n$$M_{NUV}-M_{r}> 3.1.$$\n\nWe selected 18,304 galaxies to be our QG candidates, which are 11.4$\\pm$0.1 \\% of the total (Fig.~\\ref{fig:data}(a)). The selected QG candidates have a redshift distribution peaking at $z\\sim1.0$ and extending to $\\sim3.0$ (Fig. \\ref{fig:data} (d)). Our selection result is consistent with the flag ``CLASS=0'' in the COSMOS2015 catalog \\citep{Laigle2016}, which applied the same $NUV$--$r$--$J$ selection method. Among the 24 $\\mu$m detected galaxies, 5.9$\\pm$0.1 \\% enter the QG selection region and therefore are QG candidates (Fig. \\ref{fig:NUVrJ243} (a)). Among the 3 GHz detected galaxies, 17.8$\\pm$0.5 \\% are QG candidates (Fig. \\ref{fig:NUVrJ243} (b)). The redshift distributions of the 24 $\\mu$m and 3 GHz detected QGs also peak at $z\\sim1.0$ but have a larger fraction of QGs at high $z$ (Fig. \\ref{fig:NUVrJ243} (c)). The QG selection of submillimeter-detected galaxies will be described in Section \\ref{subsec:traditional_matching}. 
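\n\nFor reference, the two $NUV$--$r$--$J$ cuts above can be applied to a catalog with a few lines of \\texttt{numpy}. The following is only a minimal sketch; the function and column names are ours for illustration and are not part of the COSMOS2015 release.\n\n\\begin{verbatim}
import numpy as np

def select_nuvrj_quiescent(m_nuv, m_r, m_j):
    # Rest-frame absolute magnitudes in; returns a boolean mask for the
    # quiescent region of Ilbert et al. (2013):
    #   M_NUV - M_r > 3 (M_r - M_J) + 1   and   M_NUV - M_r > 3.1
    nuv_r = np.asarray(m_nuv) - np.asarray(m_r)
    r_j = np.asarray(m_r) - np.asarray(m_j)
    return (nuv_r > 3.0 * r_j + 1.0) & (nuv_r > 3.1)

# Hypothetical usage on catalog columns:
# qg = select_nuvrj_quiescent(cat["M_NUV"], cat["M_R"], cat["M_J"])
# print(qg.sum(), "QG candidates out of", qg.size)
\\end{verbatim}\n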
The numbers of selected QG candidates are summarized in Table \\ref{tab:data}.\n\nTo better understand the diagrams, we show the reddening vector and the typical (median) errors of the two colors in Fig.~\\ref{fig:data} (a), Fig.~\\ref{fig:NUVrJ243} (a), and Fig.~\\ref{fig:NUVrJ243} (b). The reddening vector is derived from \\citet{Calzetti2000} extinction. For the magnitude errors, unfortunately the COSMOS2015 catalog does not provide errors in the absolute magnitudes. To have a rough idea of the errors, we followed the COSMOS2015 procedure (O.\\ Ilbert \\& I.\\ Davidzon, private communication) to select the nearest broad-band filter in the rest frame that has a photometric error of $<0.3$, and we used the photometric error of that filter as the absolute-magnitude error. This clearly does not account for the errors in the $K$-corrections derived from the fitted SEDs, nor the errors propagated from the photo-$z$ errors, but should still include a substantial part of the error budget.\n\nWith the above-estimated photometric errors, we could further estimate the fraction of the QG candidates that may originate from the SFG color space and be scattered into the QG color space by the photometric errors. For each QG candidate, we generated 1,000 randomly perturbed $NUV$, $r$, and $J$ band absolute magnitudes that follow Gaussian distributions according to the photometric errors. We then calculated the percentage of the perturbed colors that are located in the SFG region, i.e., the probability that the QG candidate would be classified as an SFG if there were no photometric errors. The average probability among our QG candidates is $\\sim$7.5\\%, meaning that $\\sim$7.5\\% of our selected QGs may have moved from the SFG color space across the boundary into the QG color space due to their photometric errors. The probabilities can help us to understand the nature of the dusty SFG contamination in the color-selected QG population. We will further discuss this in Sections \\ref{sec:bright_SMG} and \\ref{sec:faint_SMG}.\n\n\\begin{figure}[ht!]\n\\epsscale{1.05}\n\\plotone{NUVrJ_24.png}\n\\epsscale{1.05}\n\\plotone{NUVrJ_3.png}\n\\epsscale{1.05}\n\\plotone{hist_243.png}\n\\caption{\\label{fig:NUVrJ243}Distribution of the 24-$\\mu$m detected sample (a) and the 3-GHz detected sample (b) in the $NUV$--$r$--$J$ diagram. The reddening vector derived from \\citet{Calzetti2000} extinction and the typical errors in the two colors are also shown, as in Fig.~\\ref{fig:data} (a). The typical errors are smaller than those of the full galaxy sample (Fig.~\\ref{fig:data}(a)). This results from the higher fractions of bright galaxies among the two subgroups (median $r$ $\\sim0.5$ magnitudes brighter). The QG candidates are selected in the upper-left corner of the panels, and the redshift distributions of the two QG subgroups are shown in (c). 
We note that the two subgroups partially overlap with each other.}\n\\end{figure}\n\n\\begin{deluxetable*}{l|llccc}\n\\tablecaption{\\label{tab:data}Sample sizes and results of multi-wavelength cross-matching.}\n\\tablehead{\n\\colhead{} & \\colhead{SFGs+QGs} & \\colhead{QGs} & \\colhead{QGs\/(all SFGs+QGs)} & \\colhead{QGs\/(all QGs)} & \\colhead{QGs by chance projection}\n}\n\\startdata\ntotal in the COSMOS field & 160217 & 18304 & 11.4$\\pm$0.1 \\% & 100 \\% & - \\\\\n24 $\\mu$m detected & 26999 & 1596 & 5.9$\\pm$0.1 \\% & 8.72$\\pm$0.22 \\% & 382 \\\\\n3 GHz detected & 6002 & 1066 & 17.8$\\pm$0.5 \\% & 5.82$\\pm$0.18 \\% & - \\\\\n850 $\\mu$m detected & 653 & 30 & 4.6$\\pm$0.8 \\% & 0.16$\\pm$0.03 \\% & 7.0\\\\\n~~~850 $\\mu$m + ALMA & 289 & 11 & 3.8$\\pm$1.1 \\% & - & 1.3\\\\\n~~~850 $\\mu$m + 24 $\\mu$m + 3 GHz & 364 & 19 & 5.2$\\pm$1.2 \\% & - & 5.8\\\\\n\\hline\ntotal in the STUDIES field & 15296 & 1846 & 12.1$\\pm$0.3 \\% & 100 \\% & - \\\\\n450 $\\mu$m detected & 239 & 8 & 3.3$\\pm$1.2 \\% & 0.43$\\pm$0.15 \\% & 2.5\\\\\n~~~450 $\\mu$m + ALMA & 58 & 2 & 3.4$\\pm$2.4 \\% & - & 0.3\\\\\n~~~450 $\\mu$m + 24 $\\mu$m + 3 GHz & 181 & 6 & 3.3$\\pm$1.4 \\% & - & 2.1\\\\\n\\hline\nradio AGN & 1378 & 563 & 40.9$\\pm$1.7 \\% & 3.08$\\pm$0.13 \\% & - \\\\\nmid-IR AGN & 791 & 95 & 12.0$\\pm$1.2 \\% & 0.52$\\pm$0.05 \\% & - \\\\\nX-ray AGN & 2267 & 413 & 18.2$\\pm$0.9 \\% & 2.26$\\pm$0.11 \\% & - \\\\\n\\enddata\n\\tablecomments{The errors are set to be Poissonian, and only reflect the uncertainties caused by the finite sample sizes. The 850 $\\mu$m and 450 $\\mu$m detected samples are determined through both the low-resolution SCUBA-2 data and the high-resolution auxiliary data. The auxiliary data are either ALMA data, or 24 $\\mu$m and 3 GHz data (see Section \\ref{subsec:traditional_matching} for details).}\n\\end{deluxetable*}\n\nFrom Fig. \\ref{fig:NUVrJ243} (a), we can see that most of the 24 $\\mu$m detected QG candidates tend to lie close to the selection boundary in the diagram. They can be dusty galaxies entering the QG color space because of atypical SED shapes, regular SFGs scattered into the QG color space because of photometric errors (cross in Fig. \\ref{fig:NUVrJ243} (a)), or chance projections in the cross-matching. By measuring the search area of the matching with the $2\\arcsec$ search radius, we estimated that 382$\\pm$20 out of the 1596 matches (23.9$\\pm$1.2\\%) can be chance projections. In Table \\ref{tab:data}, the 24 $\\mu$m detected galaxies have a lower QG fraction than that of the full COSMOS2015 sample. 24 $\\mu$m sources are sensitive to dust emission, and the low fraction suggests a low dusty-galaxy contamination in the QG color selection.\n\nOn the other hand, the 3 GHz detected QG candidates are distributed well inside the QG selection region in Fig. \\ref{fig:NUVrJ243} (b). If we calculate their vertical distances to the selection boundary, we obtain median values of 0.4 and 0.7 for the 24 $\\mu$m and 3 GHz detected QG candidates, respectively. The median distance for the 3 GHz detected QGs is much larger than the typical photometric error (cross in Fig. \\ref{fig:NUVrJ243} (b)), so they are not SFGs scattered into the QG color space. A large fraction of them should be real QGs harboring radio AGNs (see Section \\ref{sec:AGN_properites} and Fig.~\\ref{fig:NUVrJ_AGN} for further evidence). In Table \\ref{tab:data}, the QG fraction among them is considerably higher than those of all the other subgroups. 
This gives us a hint about the correlation between radio AGN and QG candidates, which will be discussed in Section \\ref{sec:AGN_properites}.\n\n\\section{Bright Submillimeter Galaxies Among QG Candidates} \\label{sec:bright_SMG}\n\nIn this section, we conduct a thorough analysis on the contamination of bright submillimeter galaxies among our QG candidates. In Section \\ref{subsec:traditional_matching}, we cross-matched our sample with the SCUBA-2 450 $\\mu$m and 850 $\\mu$m catalogs using the positions of MIPS 24 $\\mu$m, VLA 3 GHz, and ALMA submillimeter sources. In Section \\ref{subsec:blind_matching}, we further performed a blind cross-matching and reported the finding of small-scale clustering between QG candidates and SCUBA-2 sources.\n\n\\subsection{Traditional Cross Matching} \\label{subsec:traditional_matching}\n\n\\subsubsection{Counterpart Identification using Auxiliary Data} \\label{subsubsec:traditional_matching_process}\n\nWe have searched for MIPS 24 $\\mu$m, VLA 3 GHz, and ALMA counterparts in the COSMOS2015 catalog with data presented in Section \\ref{subsec:auxiliary}. We can therefore search for the optical counterparts to the low-resolution SCUBA-2 submillimeter sources by including the high-resolution multi-wavelength information. Such a two-step counterpart identification method is traditionally used on SCUBA-2 sources. In general, this method was shown to be able to pick up some 2\/3 of SCUBA-2 source counterparts \\citep[e.g.,][]{Casey2013,Koprowski2016,Cowie2017,Michalowski2017,An2018,Simpson2020,Lim2020}, but the exact fractions depend on the sensitivity of the high-resolution observations in the mid-IR, submillimeter, or radio. \n\nWe first cross-matched our optical sample with the SCUBA-2 450 $\\mu$m and 850 $\\mu$m sources using a search radius of $4\\arcsec$ and $7\\arcsec$, respectively. The search radii are approximately half of the full width at half maximum of the beams (FWHM = $7\\arcsec$.9 at 450 $\\mu$m and $13\\arcsec$ at 850 $\\mu$m). Such larger search radii (cf.\\ 1\/3 FWHM for the 24 $\\mu$m matching) are required as the SCUBA-2 positional accuracy is more impacted by confusion effects and telescope pointing errors, rather than just the beam sizes. Then, from the matched sample, we narrowed down the optical counterparts by searching for ALMA detected galaxies from the AS2COSMOS and A3COSMOS catalogs (described in Section \\ref{subsec:auxiliary}). For the remaining sources without ALMA detection, we identified their optical counterparts by searching for 24 $\\mu$m and 3 GHz detected galaxies (described in Section \\ref{subsec:auxiliary}). Those without MIPS and VLA counterparts are likely to be at higher redshifts \\citep[$z\\gtrsim3$, see Section 3.3 in][]{Lim2020} and are not the main targets of interest in this paper given the redshift distributions in Fig.~\\ref{fig:data}. We note that when there are multiple sources within the search radius, we consider all of the sources and narrow down the possible counterparts only with multi-wavelength information without considering their distances to the SCUBA-2 position.\n\nThe results of the cross-matching are summarized in Table \\ref{tab:data}. For the SCUBA-2 450 $\\mu$m sources, we matched 58 COSMOS2015 galaxies through ALMA observations and 181 through the MIPS and VLA catalogs. We defined them as 450 $\\mu$m detected galaxies. Two out of the 58 galaxies and six out of the 181 galaxies are selected as QG candidates in the $NUV$--$r$--$J$ diagram (Fig.~\\ref{fig:NUVrJ_submm} (a)). 
For the SCUBA-2 850 $\\mu$m sources, there are 289 and 364 matches when using the ALMA catalogs and the MIPS and VLA catalogs, respectively. We defined them as 850 $\\mu$m detected galaxies. 11 out of the 289 galaxies and 19 out of the 364 galaxies are selected as QG candidates in the $NUV$--$r$--$J$ diagram (Fig.~\\ref{fig:NUVrJ_submm} (b)).\n\nOne thing worth noting is the distribution of the 450 $\\mu$m and 850 $\\mu$m detected QG candidates in the $NUV$--$r$--$J$ diagrams in Fig.~\\ref{fig:NUVrJ_submm}. Although the sample sizes are small here, these QG candidates do not show a tendency to lie near the selection boundaries (cf.\\ the 24 $\\mu$m case in Fig.~\\ref{fig:NUVrJ243} (a)), compared to the typical color errors (crosses in Fig.~\\ref{fig:NUVrJ_submm}). This suggests that most of them are systems that consist of a quiescent component that dominates the rest-frame UV\/optical emission and a dusty component that shows up in the far-IR. This can be either an interacting system like the one in \\citet{Simpson2017} and \\citet{Schreiber2018a}, or a foreground quiescent galaxy which lenses a background dusty galaxy. Indeed, one of the QG candidates is matched to both 450 and 850 $\\mu$m sources through ALMA observations. This target is likely to be a lensed system (star symbol in Fig.~\\ref{fig:NUVrJ_submm}), and we will further discuss it at the end of this section and in Appendix~\\ref{appendix:lensed}.\n\n\\begin{figure}[ht!]\n\\epsscale{1.15}\n\\plotone{NUVrJ_450.png}\n\\plotone{NUVrJ_850.png}\n\\caption{Distributions of the 450 $\\mu$m (a) and 850 $\\mu$m (b) detected samples in the $NUV$--$r$--$J$ diagram. The filled circles are samples matched to ALMA sources, while the empty circles are samples matched to 24 $\\mu$m or 3 GHz sources. The star symbols show the position of the lensed system described in Appendix~\\ref{appendix:lensed}. The reddening vector derived from \\citet{Calzetti2000} extinction and the typical errors in the two colors for the submillimeter sources are also shown with arrows and crosses, respectively. The color errors of the 850 $\\mu$m sources are larger because these sources are generally fainter in the optical than the 450 $\\mu$m sources. \\label{fig:NUVrJ_submm}}\n\\end{figure}\n\nFrom the numbers of QG candidates that have 450 or 850 $\\mu$m detections, we could estimate the fraction of the bright submillimeter galaxies among our QG candidates. The results show that 0.43$\\pm$0.15\\% (8\/1,846) and 0.16$\\pm$0.03\\% (30\/18,304) of our QG candidates are bright 450 $\\mu$m and 850 $\\mu$m sources, respectively (Table \\ref{tab:data}). The fraction of 450 $\\mu$m detected QGs is slightly ($\\sim1.8\\sigma$) larger than that of 850 $\\mu$m ones. This may be a result of either the better luminosity sensitivity or the higher source density at 450 $\\mu$m. The former allows us to detect more QGs at 450 $\\mu$m, while the latter increases the probability of chance projection between unrelated QGs and 450 $\\mu$m sources. If we remove the expected number of chance projections (Section~\\ref{subsubsec:chance_projection}), the difference reduces to $\\sim1.3\\sigma$. The reason for the difference between the fractions at 450 and 850 $\\mu$m therefore remains unclear given our sample sizes.\n\nWe further split the populations into redshift bins (Table~\\ref{tab:brightSMG} and Fig.~\\ref{fig:BSMGfraction}). We can see that the fraction of 850 $\\mu$m detected QG candidates increases with redshift and rises up to 3.51$\\pm$2.48\\% at $z>$ 2. 
This higher contamination rate at $z>2$ could come from either a real redshift evolution, or simply larger photometric uncertainties on high-redshift sources. Nevertheless, this few-percent contamination rate is still quite low. In conclusion, our QG candidates could be contaminated by bright dusty SFGs at a 0.16\\% to 0.43\\% level, and the contamination rises up to $\\sim$ 1.7\\% to 3.5\\% at higher redshift. We note that the contamination rates may be underestimated since we may not pick up all SCUBA-2 source counterparts in the two-step counterpart identification. We will perform a ``blind'' cross-matching in Section \\ref{subsec:blind_matching} to provide a different estimate of the contamination.\n\nWe analyze the role of photometric errors in the bright SMG contamination. In Section~\\ref{sec:QG_selection}, we estimated the probability of intrinsically being in the SFG color space but scattered into the QG color space by photometric errors for each QG candidate. The mean probabilities are 9.6\\% and 6.2\\% for 450 and 850 $\\mu$m detected QGs, respectively. These both account for less than 10\\% of the SMG contaminations. Therefore, the bright SMG contamination is mainly due to intrinsic properties of the QGs rather than photometric errors.\n\n\\begin{deluxetable*}{l|ccc|ccc}\n\\tablecaption{\\label{tab:brightSMG}Percentage of bright submillimeter galaxies (sub-mm detected QGs) among COSMOS2015 QGs.}\n\\tablehead{\n\\colhead{} & \\colhead{total in the} & \\colhead{850 $\\mu$m detected} & \\colhead{percentage} & \\colhead{total in the} & \\colhead{450 $\\mu$m detected} & \\colhead{percentage} \\\\\n\\colhead{} & \\colhead{COSMOS field} & \\colhead{} & \\colhead{} & \\colhead{STUDIES field} & \\colhead{} & \\colhead{}\n}\n\\startdata\nall & 18304 & 30 & 0.16$\\pm$0.03 \\% & 1846 & 8 & 0.43$\\pm$0.15 \\% \\\\\n$z\\leq$ 1 & 11562 & 8 & 0.07$\\pm$0.02 \\% & 1314 & 5 & 0.38$\\pm$0.17 \\% \\\\\n1$< z\\leq$ 2 & 6045 & 10 & 0.17$\\pm$0.05 \\% & 475 & 1 & 0.21$\\pm$0.21 \\% \\\\\n$z>$ 2 & 697 & 12 & 1.72$\\pm$0.50 \\% & 57 & 2 & 3.51$\\pm$2.48 \\% \\\\\n\\enddata\n\\tablecomments{The errors are set to be Poissonian. The 850 $\\mu$m and 450 $\\mu$m detected samples are determined through both the low-resolution SCUBA-2 data and the high resolution auxiliary data.}\n\\end{deluxetable*}\n\n\\begin{figure}[ht!]\n\\epsscale{1.15}\n\\plotone{BSMGfraction.png}\n\\caption{Percentage of bright submillimeter galaxies (450 and 850 $\\mu$m detected QGs) among COSMOS2015 QGs (Table \\ref{tab:brightSMG}) in logarithmic scale. The data points of 450 $\\mu$m detected QGs are slightly offset along $x$-axis for clarity. The error bars of the 450 $\\mu$m detected QGs are larger because of the smaller coverage of the STUDIES map. The errors are Poissonian.\\label{fig:BSMGfraction}}\n\\end{figure}\n\nIn the above, we used both the AS2COSMOS and the A3COSMOS catalogs during the cross-matching. We can also estimate the contamination by matching QG candidates to only the AS2COSMOS catalog, which contains a homogeneous selection and complete observations of SCUBA-2 850 $\\mu$m sources with $S_{850 \\mu m}>$ 6.2 mJy in the S2COSMOS map. If we match our QG candidates to 850 $\\mu$m sources through only the AS2COSMOS catalog, we found 7 galaxies to be 850 $\\mu$m detected QG candidates. If we assume the same QG fraction for all SCUBA-2 850 $\\mu$m sources, we estimate that there should be 36.9$\\pm$14.0 QG candidates. This accounts for 0.2$\\pm$0.1\\% among all the QG candidates. 
This agrees with the 0.16$\\pm$0.03\\% contamination mentioned above. \n\nFurthermore, the SCUBA-2 catalog has a detection limit of 2 mJy but is not complete for sources above 2 mJy. The complete number of 850 $\\mu$m sources can be estimated from the source counts corrected for completeness, from \\citet{Simpson2019}. If we estimate the complete number of sources above 2 mJy and assume the same QG fraction as in the AS2COSMOS catalog, we obtain a dusty galaxy contamination rate of 0.6$\\pm$0.3\\%. The relative uncertainty here is slightly larger than that simply propagated from the number of QGs in the AS2COSMOS catalog, since the source counts also contain an uncertainty. Nevertheless, this value is larger than the above-estimated value of 0.16$\\pm$0.03\\% and would probably be a more realistic estimate if we had a deeper and more complete survey at 850 $\\mu$m.\n\nWe note that one out of the 11 QG candidates (ID = 659416 in the COSMOS2015 catalog) that are matched to SCUBA-2 850 $\\mu$m sources through ALMA catalogs is likely to be a lensed system because of its unusual submillimeter\/radio flux ratio (see Appendix~\\ref{appendix:lensed} for details). This example demonstrates that when matching QG candidates to the submillimeter sources, a match does not imply that the QG candidate and the long-wavelength source are the same object. They could be physical associations such as the lensed system here, or an interacting galaxy pair consisting of a QG and a dusty object \\citep[e.g.,][]{Schreiber2018a}. Based on our small sample size, the probability of such an association is about 9\\% (1\/11). Such spatial correlation effects caused by lensing or galaxy interaction will be further discussed in Section \\ref{subsec:blind_matching}.\n\n\\subsubsection{Effect of Chance Projection}\n\\label{subsubsec:chance_projection}\n\nGiven the small numbers of matched objects in the previous section, we would like to examine whether the matches between our QG candidates and bright submillimeter sources are caused by chance projection or by real spatial correlation. We could estimate the effect of chance projection by simple calculations.\n\nFirst, we calculated the search area in our two-step cross-matching. For the SCUBA-2 sources with ALMA observations, we used a search radius of $1\\arcsec$ for the ALMA sources. For the remaining ones, we used search radii of $2\\arcsec$ and $0.8\\arcsec$ for the 24 $\\mu$m and 3 GHz sources, respectively. If a 3 GHz source is located within the $2\\arcsec$ search radius of a 24 $\\mu$m source, we only adopted the search area of the 3 GHz source. We then calculated the expected fraction of randomly distributed QG candidates located in the search area as $1-e^{-na\/A}$, where $A$ is the survey area, $n$ is the number of searched sources in the high-resolution catalogs, and $a$ is the search area per source. The estimated numbers of chance projections are given in the last column of Table \\ref{tab:data}. When we matched through the ALMA catalogs, the number of chance projections is significantly lower. When we matched through the 24 $\\mu$m and 3 GHz catalogs, the probability of chance projections is about 1\/3; 2.1 out of 6 (35.0\\%) and 5.8 out of 19 (30.5\\%) matches can be chance projections for the 450 and 850 $\\mu$m detected QG candidates, respectively.\n\nWe conclude that among the submillimeter detected QG candidates mentioned in the previous section, which account for 0.16\\% to 0.43\\% of our QG candidates, the majority are real physical associations. The estimated bright dusty SFG contamination is therefore not mainly driven by chance projections.
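\n\nAs a concrete illustration of the $1-e^{-na\/A}$ estimate above, the chance-projection probability can be evaluated as in the following minimal sketch. The numbers and function name are illustrative only and are not the exact inputs used in this work.\n\n\\begin{verbatim}
import numpy as np

def chance_projection_fraction(n_src, radius_arcsec, area_deg2):
    # Probability that a randomly placed galaxy falls inside the union
    # of circular search areas: 1 - exp(-n a / A).
    a = np.pi * radius_arcsec**2        # search area per source [arcsec^2]
    A = area_deg2 * 3600.0**2           # survey area [arcsec^2]
    return 1.0 - np.exp(-n_src * a / A)

# e.g., a 1" radius around ~450 ALMA sources in a ~1.6 deg^2 field:
p = chance_projection_fraction(450, 1.0, 1.6)
print(p)            # ~7e-5 per galaxy
print(p * 18304)    # expected chance-projected QGs, of order unity
\\end{verbatim}\n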
\n\\subsection{Blind Cross-Matching} \\label{subsec:blind_matching}\n\n\\subsubsection{Matching with Large Radii and Estimate of Chance Projection}\n\nIn the cross-matching aided by 3 GHz and 24 $\\mu$m astrometry described in Section \\ref{subsec:traditional_matching}, there is a possibility that the real optical counterparts of the submillimeter sources are undetected at 3 GHz and\/or 24 $\\mu$m. The different redshift dependences of the sensitivities in the submillimeter, radio, and mid-IR may introduce such a bias. To avoid this, we can perform a ``blind'' cross-matching to the SCUBA-2 sources. We directly match the QG candidates with the SCUBA-2 450 $\\mu$m and 850 $\\mu$m sources using $4\\arcsec$ and $7\\arcsec$ search radii, respectively, without relying on radio and mid-IR positions. The large matching radii here will unavoidably lead to larger numbers of chance projections, so we need to estimate the number of chance projections more precisely.\n\nTo do this, we simulated the matching results using SCUBA-2 submillimeter sources with random positions. Here we do not apply the $1-e^{-na\/A}$ method, because the distribution of QGs may not be random at the scale of the relatively large search radii for SCUBA-2 sources, and therefore the dispersion in the mean cannot be estimated. We calculated the expected number of QG candidates located within a search radius from the randomly distributed submillimeter sources and compared the results with the actual number of matched QG candidates. The simulation is repeated 1,000 times. The estimated number of matches and its error are set to be the mean and the 68\\% interval of the 1,000 results. The results are summarized in Table \\ref{tab:spatial}, and the fractional difference between the expected matches (chance projections) and the actual matches is also shown in Fig. \\ref{fig:spatial_plot}. We note that the detection limits of the SCUBA-2 450 $\\mu$m, 850 $\\mu$m, and ALMA sources are different. We also estimated the probability that the expected number is equal to or larger than the actual number (Table \\ref{tab:spatial}). 
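\n\nThe random-position test can be sketched as follows. This is a schematic, flat-sky version with hypothetical inputs; the actual calculation uses the real map footprints and masks.\n\n\\begin{verbatim}
import numpy as np

def expected_random_matches(qg_ra, qg_dec, n_smg, radius_arcsec,
                            ra_range, dec_range, n_trials=1000, seed=0):
    # Drop n_smg fake submillimeter sources at random positions and
    # count QG candidates within the search radius; repeat n_trials
    # times and return the mean and the 68% interval of the counts.
    rng = np.random.default_rng(seed)
    cosd = np.cos(np.deg2rad(np.mean(dec_range)))
    counts = np.empty(n_trials)
    for i in range(n_trials):
        ra = rng.uniform(*ra_range, n_smg)
        dec = rng.uniform(*dec_range, n_smg)
        n_match = 0
        for r0, d0 in zip(ra, dec):
            sep = 3600.0 * np.hypot((qg_ra - r0) * cosd, qg_dec - d0)
            n_match += np.count_nonzero(sep < radius_arcsec)
        counts[i] = n_match
    lo, hi = np.percentile(counts, [16, 84])
    return counts.mean(), lo, hi
\\end{verbatim}\n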
We show the results of SFGs for comparison in Table \\ref{tab:spatialSFG}.\n\n\\begin{deluxetable*}{cccc|ccccc}\n\\tablecaption{Cross-Matches and Expected Chance Projections between QGs and Submillimeter Sources \\label{tab:spatial}}\n\\tablehead{\n\\colhead{SCUBA-2} & \\colhead{Group} & \\colhead{Match} & \\colhead{Number}\n& \\colhead{Expected} & \\colhead{Actual}\n& \\colhead{Difference} & \\colhead{Fractional} & \\colhead{Probability\\tablenotemark{b}}\\\\\n\\colhead{sources} & \\colhead{} & \\colhead{Radius\\tablenotemark{a}} & \\colhead{}\n& \\colhead{Matched QG} & \\colhead{Matched QG}\n& \\colhead{} & \\colhead{Difference} & \\colhead{}\n}\n\\startdata\n & all & $4\\arcsec$ & 353 & \n20.3$^{+ 4.7 }_{- 4.3 }$ & 29 & 8.7$^{+ 4.7 }_{- 4.3 }$ & 42.6$^{+ 22.9 }_{- 21.3 }$\\% & 0.05 \\\\\n450 $\\mu$m & without ALMA & $4\\arcsec$ & 276 & \n16.1$^{+ 3.9 }_{- 4.1 }$ & 24 & 7.9$^{+ 3.9 }_{- 4.1 }$ & 48.9$^{+ 24.1 }_{- 25.5 }$\\% & 0.046 \\\\\nsources & with ALMA & $4\\arcsec$ & 77 &\n4.6$^{+ 2.4 }_{- 2.6 }$ & 5 & 0.4$^{+ 2.4 }_{- 2.6 }$ & 9.6$^{+ 53.5 }_{- 56.1 }$\\% & 0.475 \\\\\n & ALMA sources & $1\\arcsec$ & 85 &\n0.3$^{+ 0.7 }_{- 0.3 }$ & 2 & 1.7$^{+ 0.7 }_{- 0.3 }$ & 534.9$^{+ 217.5 }_{- 100.0 }$\\% & 0.042 \\\\\n\\hline\n & all & $7\\arcsec$ & 981 &\n135.0$^{+ 12.0 }_{- 12.0 }$ & 206 & 71.0$^{+ 12.0 }_{- 12.0 }$ & 52.6$^{+ 8.9 }_{- 8.9 }$\\% & 0 \\\\\n850 $\\mu$m & without ALMA & $7\\arcsec$ & 611 &\n84.6$^{+ 10.4 }_{- 9.6 }$ & 120 & 35.4$^{+ 10.4 }_{- 9.6 }$ & 41.8$^{+ 12.2 }_{- 11.4 }$\\% & 0 \\\\\nsources & with ALMA & $7\\arcsec$ & 370 &\n50.6$^{+ 7.4 }_{- 7.6 }$ & 86 & 35.4$^{+ 7.4 }_{- 7.6 }$ & 69.8$^{+ 14.5 }_{- 15.1 }$\\% & 0 \\\\\n & ALMA sources & $1\\arcsec$ & 452 &\n1.3$^{+ 0.7 }_{- 1.3 }$ & 11 & 9.7$^{+ 0.7 }_{- 1.3 }$ & 771.6$^{+ 58.5 }_{- 100.0 }$\\% & 0 \\\\\n\\enddata\n\\tablenotetext{a}{The radius of $1\\arcsec$ to $7\\arcsec$ correspond to 8--56 kpc at $z=1$ and 8--57 kpc at $z=2.5$.}\n\\tablenotetext{b}{The probability that the expected number of matches (based on random spatial distribution) is equal to or larger than that of the actual matches.}\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{cccc|ccccc}\n\\tablecaption{Cross-Matches and Expected Chance Projections between SFGs and Submillimeter Sources \\label{tab:spatialSFG}}\n\\tablehead{\n\\colhead{SCUBA-2} & \\colhead{Group} & \\colhead{Match} & \\colhead{Number}\n& \\colhead{Expected} & \\colhead{Actual}\n& \\colhead{Difference} & \\colhead{Fractional} & \\colhead{Probability\\tablenotemark{b}}\\\\\n\\colhead{sources} & \\colhead{} & \\colhead{Radius\\tablenotemark{a}} & \\colhead{}\n& \\colhead{Matched SFG} & \\colhead{Matched SFG}\n& \\colhead{} & \\colhead{Difference} & \\colhead{}\n}\n\\startdata\n & all & $4\\arcsec$ & 353 & \n149.7$^{+ 12.3 }_{- 12.7 }$ & 466 & 316.3$^{+ 12.3 }_{- 12.7 }$ & 211.3$^{+ 8.2 }_{- 8.5 }$\\% & 0 \\\\\n450 $\\mu$m & without ALMA & $4\\arcsec$ & 276 & \n116.4$^{+ 10.6 }_{- 10.4 }$ & 366 & 249.6$^{+ 10.6 }_{- 10.4 }$ & 214.3$^{+ 9.1 }_{- 9.0 }$\\% & 0 \\\\\nsources & with ALMA & $4\\arcsec$ & 77 &\n32.5$^{+ 5.5 }_{- 5.5 }$ & 100 & 67.5$^{+ 5.5 }_{- 5.5 }$ & 207.6$^{+ 16.9 }_{- 16.9 }$\\% & 0 \\\\\n & ALMA sources & $1\\arcsec$ & 85 &\n2.2$^{+ 1.8 }_{- 1.2 }$ & 56 & 53.8$^{+ 1.8 }_{- 1.2 }$ & 2443.1$^{+ 81.7 }_{- 54.6 }$\\% & 0 \\\\\n\\hline\n & all & $7\\arcsec$ & 981 &\n1045.7$^{+ 35.5 }_{- 35.9 }$ & 2087 & 1041.3$^{+ 35.5 }_{- 35.9 }$ & 99.6$^{+ 3.4 }_{- 3.4 }$\\% & 0 \\\\\n850 $\\mu$m & without ALMA & $7\\arcsec$ & 611 & \n652.9$^{+ 28.1 }_{- 27.9 }$ & 1204 & 551.1$^{+ 28.1 
}_{- 27.9 }$ & 84.4$^{+ 4.3 }_{- 4.3 }$\\% & 0 \\\\\nsources & with ALMA & $7\\arcsec$ & 370 &\n394.0$^{+ 23.0 }_{- 23.0 }$ & 883 & 489.0$^{+ 23.0 }_{- 23.0 }$ & 124.1$^{+ 5.8 }_{- 5.8 }$\\% & 0 \\\\\n & ALMA sources & $1\\arcsec$ & 452 &\n9.9 $^{+ 3.1 }_{- 2.9 }$ & 277 & 267.1$^{+ 3.1 }_{- 2.9 }$ & 2687.3$^{+ 30.8 }_{- 29.6 }$\\% & 0 \\\\\n\\enddata\n\\tablenotetext{ab}{~The parameters follow those in Table \\ref{tab:spatial}.}\n\\end{deluxetable*}\n\n\\begin{figure}[ht!]\n\\epsscale{1.15}\n\\plotone{figures\/spatial_4.png}\n\\plotone{figures\/spatial_7.png}\n\\plotone{figures\/spatial_1.png}\n\\caption{Fractional differences between the actual matches and the expected matches based on random spatial distributions (see also Table~\\ref{tab:spatial}). Panels (a), (b), and (c) show results for SCUBA-2 450 $\\mu$m, 850 $\\mu$m, and ALMA sources, where $4\\arcsec$, $7\\arcsec$, and $1\\arcsec$ search radii are used, respectively. It can be seen that the QG results for the SCUBA-2 850 $\\mu$m sources and ALMA sources are all significantly above zero, showing that these extra matches in the actual sample are caused by real physical connections between the QG candidates and the submillimeter sources, rather than chance projection.\\label{fig:spatial_plot}}\n\\end{figure}\n\nThe first and fifth rows of Tables \\ref{tab:spatial} and \\ref{tab:spatialSFG} present simple cross-matching between the QGs\/SFGs and the single-dish submillimeter samples without information from ALMA. For the SCUBA-2 450 $\\mu$m sources, the actual matches are 42.6$^{+22.9}_{-21.3}$\\% larger than the expected random matches. Although the significance is only about 2$\\sigma$ (Fig. \\ref{fig:spatial_plot} (a)), the estimated probability that the expected number is equal to or larger than the actual number is 0.05, which is quite low. This result suggests that there are $8.7^{+4.7}_{-4.3}$ QGs physically associated with the 353 sources detected at 450 $\\mu$m. This corresponds to $0.47^{+0.25}_{-0.23}\\%$ of the 1,846 QGs in the 450 $\\mu$m map area. The statistically derived number of $8.7^{+4.7}_{-4.3}$ matches also nicely agrees with the 8 matches found with high-resolution data (Section~\\ref{subsec:traditional_matching} and Table~\\ref{tab:data}). Also, for comparison, $316.3^{+12.3}_{-12.7}$ SFGs are physically associated with the 353 sources detected at 450 $\\mu$m. This is as expected, since dusty submillimeter sources should be dominated by SFGs.\n\nFor the SCUBA-2 850 $\\mu$m sources, the actual matches are 52.6$^{+8.9}_{-8.9} \\%$ larger than the expected random matches (see also Fig. \\ref{fig:spatial_plot} (b)), which is significant ($\\sim6\\sigma$). The estimated probability that the expected number is equal to or larger than the actual number is nearly zero. This means that among the 206 matches between the QGs and the 850 $\\mu$m sources, $71\\pm12$ are real physical associations. These $\\sim71$ sources account for $0.39\\pm0.07\\%$ of the 18,304 QGs. We note that the number of 71 is significantly different from the number of 30 quoted in Table~\\ref{tab:data}, or 23 after removing the expected number of chance projections. This is because here we do not require high-resolution multi-wavelength data to pin down the cross-matching. This suggests that a significant fraction of the QG--850 $\\mu$m associations in our data do not have 24 $\\mu$m, 3 GHz, or ALMA counterparts. This is perhaps partially because of the insufficient sensitivity (24 $\\mu$m and 3 GHz cases) and incomplete coverage (ALMA cases). 
However, we will soon show that a large fraction of these 71 sources are clustered around the SCUBA-2 sources at a $\\sim7\\arcsec$ scale, but they are not the submillimeter, mid-IR, or radio emitters. Finally, as in the case for 450 $\\mu$m cross-matching, the overlap between 850 $\\mu$m sources and SFGs is much larger than that for QG, which is expected.\n\n\\subsubsection{Verification and Comparison with ALMA Sources}\n\nThe above ``blind'' matching between QG candidates and SCUBA-2 sources using large matching radii (1\/2 of the single-dish beam FWHM) and the comparison between actual matches and simulations allows us to statistically assess the numbers of real physical associations. Now with the ALMA data, we can pin down the cross-matching with a much smaller matching radius, with a smaller subsample. We split the SCUBA-2 sources into those with and without ALMA observations. The results are listed in the remaining rows of Table \\ref{tab:spatial} and \\ref{tab:spatialSFG}.\n\nAs a simple sanity check, we ran cross-matching with large matching radii over the subsamples. The fractional differences between actual and expected random matches do not change significantly between the ALMA and non-ALMA subsamples for QGs (Fig. \\ref{fig:spatial_plot} (a) and (b)). This is even true for the 450 $\\mu$m sample, albeit the small sample sizes and therefore the large errors. This implies that there is no special selection bias in the ALMA observations regarding their QG-submillimeter properties.\n\nFor the SCUBA-2 sources with ALMA observations, the expected numbers for matches under a $1\\arcsec$ search radius and random spatial distributions are always small comparing to the actual matches. This is reflected on the large fractional differences between the actual matches and expected random matches, which are 534.9$^{+217.5}_{-100.0}$\\% for the 450 $\\mu$m sources and 771.6$^{+58.5}_{-100.0}$\\% for the 850 $\\mu$m sources (Fig.~\\ref{fig:spatial_plot}(c), the fourth and eighth rows). This means that the majority of the observed matches between QGs and ALMA sources under a $1\\arcsec$ matching radius are real physical associations.\n\nAn interesting comparison is to see if the $1\\arcsec$ matching pinned down by ALMA agrees with the statistical estimates of real physical associations derived from the large-radius blind matching. Table~\\ref{tab:spatial} and \\ref{tab:spatialSFG} show that if we match the 77 SCUBA-2 450 $\\mu$m sources to QGs and SFGs using a $4\\arcsec$ matching radius, we expect $0.4^{+2.4}_{-2.6}$ out of the 5 QG matches and $67.5^{+5.5}_{-5.5}$ out of the 100 SFG matches to be real associations. These can be compared with the ALMA results for the same sub-sample: 1.7$^{+ 0.7 }_{- 0.3 }$ and 53.8$^{+ 1.8 }_{- 1.2 }$ real associations for the QGs and the SFGs. The values for QG--SMG associations agree nicely, albeit the small sample size. This probably validates the statistical method for estimating the number of real associations and chance projections using simulations and random distributions. On the other hand, the values for SFG--SMG associations (67.5 and 53.8) differ by 25\\% and the difference is about $2\\sigma$. The excess in the number of SFGs around SMGs within $4\\arcsec$ comparing to the number of true associations pinpointed by ALMA suggests a weak clustering of SFGs around SMGs. This excess is only $2\\sigma$ and is not statistically significant. 
However, if we look at the 850 $\\mu$m values, the excesses for both QG--SMG and SFG--SMG associations become highly significant.\n\nWe make a similar comparison for the 850 $\\mu$m ALMA subsample. The expected numbers of real associations under a $7\\arcsec$ matching radius for the 370 sources detected at 850 $\\mu$m are $35.4^{+7.4}_{-7.6}$ and $489.0^{+23.0}_{-23.0}$ for QGs and SFGs, respectively. However, the numbers revealed by the ALMA observations are much smaller: 9.7$^{+ 0.7 }_{- 1.3 }$ and 267.1$^{+ 3.1 }_{- 2.9 }$. The differences between the two sets of numbers are both significant. This implies that once we increase the matching radius from $1\\arcsec$ to $7\\arcsec$ ($\\lesssim 60$ kpc at $z=1$--2), additional clustering effects kick in, i.e., there are QGs and SFGs physically associated with the submillimeter sources at such large scales, but they are not the submillimeter sources themselves nor arcsec-scale galaxy-galaxy lensing pairs. This effect becomes undetectable (QGs) or much weaker (SFGs) under the $4\\arcsec$ matching for the 450 $\\mu$m sources, either because of the small sample sizes for the 450 $\\mu$m analysis or because of the different spatial distribution of low-dust-luminosity sources.\n\nPrevious studies of QG autocorrelation functions found that QGs show a stronger clustering signal than SFGs at arcminute scales \\citep{Williams2009}, but there are no existing QG--SFG or QG--SMG cross-correlation analyses. Our results suggest that QGs and SMGs are clustered, and detailed cross-correlation studies between these two distinct populations will be an interesting future topic.\n\nIn summary, with direct cross-matching to SCUBA-2 sources and statistical analyses of chance projection effects, we do not find evidence for a different dusty galaxy contamination rate among QGs compared to what we found with counterpart identifications using ALMA, 24 $\\mu$m, and 3 GHz data. Instead, we found a clustering effect between the bright submillimeter sources and our QG candidates at scales from $1\\arcsec$ to $7\\arcsec$ ($\\sim8$--60 kpc at $z=1$--2).\n\nWe note that our studies in Sections \\ref{subsubsec:traditional_matching_process} and \\ref{subsec:blind_matching} thus far imply several possibilities for the submillimeter detected QG candidates obtained from the cross-matching process in Section \\ref{subsubsec:traditional_matching_process}, depending on the angular scales to which the observations are sensitive. They could be the correct submillimeter counterparts to the QG candidates. There are also situations where the QG candidates are not submillimeter emitters, but are physically associated with submillimeter galaxies through effects like galaxy-galaxy lensing (e.g., Fig.~\\ref{fig:lensing}), galaxy interaction, or clustering effects at scales of a few arcsec.\n\n\\section{Faint Submillimeter Galaxies Among QG Candidates} \\label{sec:faint_SMG}\n\nIn Section \\ref{sec:bright_SMG}, we matched our QG candidates to submillimeter sources and demonstrated that a fraction of the matched QG candidates are physically related to the submillimeter sources. However, the 450 $\\mu$m and 850 $\\mu$m sources have detection limits of about 3.5 mJy and 2 mJy, respectively, which correspond to SFRs of roughly 60 and 180 $M_{\\odot}$ year$^{-1}$ at $z = 1$. 
Therefore, we further perform a stacking analysis in order to search for fainter submillimeter emission among the QG candidates.\n\n\\subsection{Stacking Analysis}\n\nWe measured the submillimeter emission from the SCUBA-2 maps at the positions of our selected QG candidates and calculated the error-weighted average of their fluxes. As our sources are point-like under JCMT's resolution and the SCUBA-2 maps were beam-matched to produce maximum-likelihood fluxes for point sources, fluxes are measured by directly reading the map values in Jy beam$^{-1}$ at the positions of the QGs. We excluded QG candidates that we matched to the bright submillimeter sources in Section \\ref{subsec:traditional_matching}, as well as QG candidates whose measured SCUBA-2 fluxes exceed $3\\sigma$, in order to prevent our results from being biased by the small number of bright submillimeter sources. To estimate the bias and uncertainty in such a stacked flux, we then stacked at 1,000 random positions and repeated this 10,000 times. In this process, bright submillimeter sources are removed according to the same criteria as above. The mean from these random stacks is considered as the bias in stacking. It is consistent with zero, because of the zero-sum nature of the match-filtered SCUBA-2 maps. Nevertheless, this small bias is subtracted from the mean of the QGs. The dispersion among the 10,000 measurements of the random samples is considered as the uncertainty of stacking 1,000 sources. It is scaled by 1\/$\\sqrt{N}$ to be the uncertainty of the QG stacking. We also stacked different numbers of random sources to verify this 1\/$\\sqrt{N}$ dependence.\n\nThe stacking results are shown in Table \\ref{tab:450stack} and Table \\ref{tab:850stack}. The first rows of the two tables show that we can reach a $6.3\\sigma$ statistical detection at 850 $\\mu$m if we simply stack all QG candidates, but not a significant detection at 450 $\\mu$m. The non-detection at 450 $\\mu$m may be due to the smaller coverage of the STUDIES map. Furthermore, we can divide the QG candidates into subgroups according to their properties, to see if there is a particular group of QGs that contributes the majority of the stacked signal. First, we classified QG candidates either with 24 $\\mu$m counterparts or with 3 GHz counterparts labeled with SFG flags in the VLA catalog \\citep{Smolcic2017} as ``IR-radio-bright'' QGs, and the rest as ``IR-radio-faint'' QGs. Our terminology is similar to, but slightly different from, that in \\citet{Man2016}. \\citet{Man2016} defined QG candidates with SFRs derived from 24 $\\mu$m above 100 $M_{\\odot}$ year$^{-1}$ as ``IR-bright'' QGs, and the rest as ``IR-faint'' QGs. They used 24 $\\mu$m data and SFR constraints to classify the subgroups, while we used 24 $\\mu$m data, 3 GHz data, and radio AGN classification in our work. 
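\n\nSchematically, the stacking measurement described at the beginning of this subsection amounts to an inverse-variance-weighted mean of beam-matched map values, with the bias and uncertainty calibrated on random positions. The following is a simplified sketch with hypothetical variable names; it stacks the random positions directly at the sample size of interest rather than rescaling a 1,000-source stack by 1\/$\\sqrt{N}$.\n\n\\begin{verbatim}
import numpy as np

def stack_flux(flux_map, noise_map, x, y, clip_sigma=3.0):
    # Inverse-variance-weighted mean of map values (Jy/beam) read at
    # integer pixel positions, excluding >clip_sigma pixels.
    f = flux_map[y, x]
    n = noise_map[y, x]
    keep = np.abs(f) < clip_sigma * n
    w = 1.0 / n[keep] ** 2
    return np.sum(w * f[keep]) / np.sum(w)

def random_stack(flux_map, noise_map, valid_mask, n_src,
                 n_real=10000, seed=1):
    # Stack n_src random positions inside the valid area, n_real times;
    # the mean approximates the stacking bias and the scatter the
    # 1-sigma uncertainty for a stack of n_src sources.
    rng = np.random.default_rng(seed)
    yy, xx = np.nonzero(valid_mask)
    vals = np.empty(n_real)
    for i in range(n_real):
        idx = rng.integers(0, len(xx), n_src)
        vals[i] = stack_flux(flux_map, noise_map, xx[idx], yy[idx])
    return vals.mean(), vals.std()
\\end{verbatim}\n\nThe stacked QG signal would then be the weighted mean at the QG positions minus the random-stack mean, with the random-stack scatter as its uncertainty.\n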
Then, for the 850 $\\mu$m stacking, because of the larger area of the SCUBA-2 map and therefore more available QGs, we can further divide the QG sample into various redshift and stellar mass bins.\n\n\\begin{deluxetable*}{lcccccccc}\n\\tablecaption{450 $\\mu$m QG Stacking Results\\label{tab:450stack}}\n\\tablehead{\n\\colhead{Groups} & \\colhead{log$M_*$\\tablenotemark{a}} & \\colhead{$z$\\tablenotemark{b}} & \n\\colhead{Number} & \\colhead{$S_{450\\rm \\mu m}$} & \\colhead{SNR} &\n\\colhead{log($L_{\\rm IR}$)} & \\colhead{SFR$_{450\\rm \\mu m}$} & \\colhead{SFR$_{\\rm optical}$\\tablenotemark{c}}\\\\\n& \\colhead{(log($M_{\\odot}$))} & \\colhead{} & \\colhead{} & \\colhead{(mJy)} & \\colhead{} & \\colhead{(log($L_{\\odot}$))} & \\colhead{($M_{\\odot}$ yr$^{-1}$)} & \\colhead{($M_{\\odot}$ yr$^{-1}$)}\n}\n\\startdata\nAll QG & 10.7$^{+0.3}_{-1.1}$ & 0.9 & 1799 & 0.06$\\pm$0.05 & 1.3 & 9.3$^{+0.3}_{-0.7}$ & 0.2$\\pm$0.2 & 2.5$^{+0.0}_{-0.0}$ \\\\\n24-$\\mu$m counterpart & 10.9$^{+0.3}_{-0.7}$ & 0.8 & 155 & 0.66$\\pm$0.16 & 4.1 & 10.6$^{+0.1}_{-0.1}$ & 3.6$\\pm$0.9 & 11.2$^{+0.4}_{-0.3}$ \\\\\n3-GHz counterpart & 11.2$^{+0.2}_{-0.4}$ & 0.9 & 103 & 0.60$\\pm$0.20 & 3.0 & 10.5$^{+0.1}_{-0.2}$ & 3.5$\\pm$1.2 & 9.5$^{+0.4}_{-0.4}$ \\\\\n3-GHz counterpart: SFG & 11.1$^{+0.2}_{-0.4}$ & 0.9 & 45 & 0.85$\\pm$0.30 & 2.8 & 10.8$^{+0.1}_{-0.2}$ & 6.4$\\pm$2.3 & 10.3$^{+0.8}_{-0.7}$ \\\\\n3-GHz counterpart: AGN & 11.2$^{+0.2}_{-0.3}$ & 0.9 & 58 & 0.39$\\pm$0.26 & 1.5 & 10.2$^{+0.2}_{-0.5}$ & 1.5$\\pm$1.0 & 8.9$^{+0.5}_{-0.6}$ \\\\\nIR-radio-faint QG & 10.6$^{+0.3}_{-1.1}$ & 0.9 & 1620 & 0.00$\\pm$0.05 & 0.0 & 0.0$^{+9.3}_{-0.0}$ & 0.0$\\pm$0.2 & 1.7$^{+0.0}_{-0.0}$ \\\\\nIR-radio-bright QG & 11.0$^{+0.3}_{-0.6}$ & 0.8 & 179 & 0.65$\\pm$0.15 & 4.3 & 10.6$^{+0.1}_{-0.1}$ & 3.6$\\pm$0.8 & 10.2$^{+0.4}_{-0.2}$ \\\\\n\\enddata\n\\tablenotetext{a}{Mean and 68\\% interval of stellar mass in logarithmic scale.}\n\\tablenotetext{b}{Median of redshift.}\n\\tablenotetext{c}{Mean of SFRs from COSMOS2015. The error shows the typical error in COSMOS2015 scaled by 1\/$\\sqrt{N}$. 
Uncertainty of template fitting is not included, which may be large for QG population.}\n\\end{deluxetable*}\n\n\\begin{deluxetable*}{lcccccccc}\n\\tablecaption{850 $\\mu$m QG Stacking Results\\label{tab:850stack}}\n\\tablehead{\n\\colhead{Groups} & \\colhead{log$M_*$\\tablenotemark{a}} & \\colhead{$z$\\tablenotemark{b}} & \n\\colhead{Number} & \\colhead{$S_{850\\rm \\mu m}$} & \\colhead{SNR} &\n\\colhead{log($L_{\\rm IR}$)} & \\colhead{SFR$_{850\\rm \\mu m}$} & \\colhead{SFR$_{\\rm optical}$\\tablenotemark{c}}\\\\\n& \\colhead{(log($M_{\\odot}$))} & \\colhead{} & \\colhead{} & \\colhead{(mJy)} & \\colhead{} & \\colhead{(log($L_{\\odot}$))} & \\colhead{($M_{\\odot}$ yr$^{-1}$)} & \\colhead{($M_{\\odot}$ yr$^{-1}$)}\n}\n\\startdata\nAll QG & 10.7$^{+0.3}_{-1.0}$ & 0.9 & 18011 & 0.06$\\pm$0.01 & 6.3 & 10.0$^{+0.1}_{-0.1}$ & 1.0$\\pm$0.2 & 3.0$^{+0.0}_{-0.0}$\\\\\n24-$\\mu$m counterpart & 10.9$^{+0.3}_{-0.6}$ & 0.8 & 1538 & 0.27$\\pm$0.03 & 8.2 & 11.1$^{+0.0}_{-0.1}$ & 11.7$\\pm$1.4 & 6.2$^{+0.1}_{-0.1}$\\\\\n3-GHz counterpart & 11.1$^{+0.2}_{-0.4}$ & 0.9 & 1028 & 0.14$\\pm$0.04 & 3.5 & 10.8$^{+0.1}_{-0.1}$ & 6.8$\\pm$1.9 & 6.1$^{+0.1}_{-0.1}$\\\\\n3-GHz counterpart: SFG & 11.1$^{+0.2}_{-0.5}$ & 0.9 & 473 & 0.30$\\pm$0.06 & 5.0 & 11.1$^{+0.1}_{-0.1}$ & 13.9$\\pm$2.7 & 7.6$^{+0.2}_{-0.1}$\\\\\n3-GHz counterpart: AGN & 11.2$^{+0.2}_{-0.4}$ & 0.9 & 555 & 0.01$\\pm$0.05 & 0.1 & 9.2$^{+1.0}_{-9.2}$ & 0.2$\\pm$1.4 & 4.9$^{+0.1}_{-0.1}$\\\\\n\\hline\nIR-radio-faint QG & 10.7$^{+0.3}_{-1.0}$ & 0.9 & 16242 & 0.04$\\pm$0.01 & 3.8 & 9.8$^{+0.1}_{-0.1}$ & 0.7$\\pm$0.2 & 2.6$^{+0.0}_{-0.0}$\\\\\n~~~$z\\leq$ 0.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 9.8$^{+0.4}_{-1.6}$ & 0.3 & 2399 &-0.01$\\pm$0.03 &-0.4 & 0.0$^{+9.3}_{-0.0}$ & 0.0$\\pm$0.2 & 0.0$^{+0.0}_{-0.0}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.1}_{-0.3}$ & 0.4 & 632 & 0.07$\\pm$0.05 & 1.4 & 9.8$^{+0.2}_{-0.6}$ & 0.7$\\pm$0.5 & 0.1$^{+0.0}_{-0.0}$\\\\\n~~~0.5 $< z\\leq$ 1.0 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.1$^{+0.3}_{-0.6}$ & 0.8 & 3739 & 0.01$\\pm$0.02 & 0.4 & 9.2$^{+0.6}_{-9.2}$ & 0.2$\\pm$0.4 & 0.4$^{+0.0}_{-0.0}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.2}_{-0.3}$ & 0.8 & 3375 & 0.05$\\pm$0.02 & 2.2 & 9.9$^{+0.2}_{-0.3}$ & 0.8$\\pm$0.4 & 0.3$^{+0.0}_{-0.0}$\\\\\n~~~1.0 $< z\\leq$ 1.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.2$^{+0.2}_{-0.3}$ & 1.2 & 1461 &-0.02$\\pm$0.03 &-0.5 & 0.0$^{+9.8}_{-0.0}$ & 0.0$\\pm$0.6 & 2.1$^{+0.0}_{-0.0}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.2}_{-0.3}$ & 1.2 & 2351 & 0.09$\\pm$0.03 & 3.4 & 10.6$^{+0.1}_{-0.2}$ & 3.6$\\pm$1.0 & 1.1$^{+0.0}_{-0.0}$\\\\\n~~~1.5 $< z\\leq$ 2.0 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.2}_{-0.2}$ & 1.7 & 469 & 0.07$\\pm$0.06 & 1.3 & 10.3$^{+0.3}_{-0.7}$ & 2.1$\\pm$1.7 & 12.6$^{+0.5}_{-0.3}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.2}_{-0.3}$ & 1.7 & 1234 & 0.08$\\pm$0.04 & 2.3 & 10.6$^{+0.2}_{-0.2}$ & 3.7$\\pm$1.6 & 8.9$^{+0.1}_{-0.1}$\\\\\n~~~2.0 $< z\\leq$ 2.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.2}_{-0.2}$ & 2.3 & 107 & 0.11$\\pm$0.12 & 0.9 & 10.5$^{+0.3}_{-10.5}$ & 2.9$\\pm$3.2 & 25.5$^{+2.5}_{-1.5}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.2}_{-0.3}$ & 2.3 & 260 & 0.25$\\pm$0.08 & 3.2 & 11.1$^{+0.1}_{-0.2}$ & 12.8$\\pm$4.0 & 29.1$^{+1.3}_{-0.8}$\\\\\n~~~$z>$ 2.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.1}_{-0.1}$ & 2.7 & 47 &-0.05$\\pm$0.19 &-0.3 & 0.0$^{+11.0}_{-0.0}$ & 0.0$\\pm$10.0 & 25.5$^{+4.4}_{-2.1}$\\\\\n~~~log$M_*>$ 10.5 & 10.9$^{+0.2}_{-0.3}$ & 2.7 & 168 & 0.11$\\pm$0.10 & 1.1 & 10.8$^{+0.3}_{-0.9}$ & 6.0$\\pm$5.3 & 37.3$^{+2.5}_{-1.4}$\\\\\n\\hline\nIR-radio-bright QG & 
11.0$^{+0.2}_{-0.6}$ & 0.9 & 1769 & 0.26$\\pm$0.03 & 8.6 & 11.1$^{+0.0}_{-0.1}$ & 11.7$\\pm$1.4 & 6.4$^{+0.1}_{-0.1}$\\\\\n~~~$z\\leq$ 0.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.1$^{+0.3}_{-1.5}$ & 0.3 & 90 & 0.28$\\pm$0.13 & 2.0 & 10.8$^{+0.2}_{-0.3}$ & 6.8$\\pm$3.3 & 0.2$^{+0.0}_{-0.0}$\\\\\n~~~log$M_*>$ 10.5 & 11.1$^{+0.2}_{-0.4}$ & 0.4 & 214 & 0.14$\\pm$0.09 & 1.6 & 10.3$^{+0.2}_{-0.4}$ & 2.0$\\pm$1.2 & 0.4$^{+0.0}_{-0.0}$\\\\\n~~~0.5 $< z\\leq$ 1.0 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.2$^{+0.2}_{-0.3}$ & 0.8 & 220 & 0.24$\\pm$0.09 & 2.8 & 11.0$^{+0.1}_{-0.2}$ & 10.5$\\pm$3.8 & 5.1$^{+0.1}_{-0.1}$\\\\\n~~~log$M_*>$ 10.5 & 11.1$^{+0.2}_{-0.4}$ & 0.8 & 730 & 0.13$\\pm$0.05 & 2.7 & 10.5$^{+0.1}_{-0.2}$ & 3.0$\\pm$1.1 & 1.7$^{+0.0}_{-0.0}$\\\\\n~~~1.0 $< z\\leq$ 1.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.2}_{-0.3}$ & 1.2 & 44 & 0.39$\\pm$0.19 & 2.0 & 11.3$^{+0.2}_{-0.3}$ & 21.1$\\pm$10.5 & 8.7$^{+1.0}_{-0.5}$\\\\\n~~~log$M_*>$ 10.5 & 11.1$^{+0.2}_{-0.4}$ & 1.2 & 260 & 0.40$\\pm$0.08 & 5.1 & 11.4$^{+0.1}_{-0.1}$ & 22.6$\\pm$4.5 & 3.6$^{+0.2}_{-0.1}$\\\\\n~~~1.5 $< z\\leq$ 2.0 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.1}_{-0.1}$ & 1.6 & 27 & 0.62$\\pm$0.25 & 2.5 & 11.9$^{+0.1}_{-0.2}$ & 80.9$\\pm$32.3 & 28.6$^{+4.9}_{-2.2}$\\\\\n~~~log$M_*>$ 10.5 & 11.0$^{+0.2}_{-0.4}$ & 1.7 & 108 & 0.81$\\pm$0.12 & 6.6 & 12.0$^{+0.1}_{-0.1}$ & 102.3$\\pm$15.6 & 21.1$^{+0.8}_{-0.7}$\\\\\n~~~2.0 $< z\\leq$ 2.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.3$^{+0.1}_{-0.2}$ & 2.2 & 6 & 1.25$\\pm$0.52 & 2.4 & 12.1$^{+0.2}_{-0.2}$ & 128.8$\\pm$54.0 & 65.8$^{+14.3}_{-15.2}$\\\\\n~~~log$M_*>$ 10.5 & 11.1$^{+0.2}_{-0.4}$ & 2.2 & 39 & 0.48$\\pm$0.20 & 2.4 & 11.6$^{+0.2}_{-0.2}$ & 38.6$\\pm$16.3 & 61.1$^{+5.7}_{-3.5}$\\\\\n~~~$z>$ 2.5 &&&&&&&\\\\\n~~~log$M_*\\leq$ 10.5 & 10.5$^{+0.0}_{-0.0}$ & 2.7 & 2 & 0.42$\\pm$0.90 & 0.5 & 11.4$^{+0.5}_{-11.4}$ & 28.1$\\pm$60.4 & 27.8$^{+17.4}_{-9.9}$\\\\\n~~~log$M_*>$ 10.5 & 11.0$^{+0.2}_{-0.4}$ & 2.8 & 27 & 0.44$\\pm$0.25 & 1.8 & 11.5$^{+0.2}_{-0.4}$ & 30.8$\\pm$17.1 & 62.5$^{+11.6}_{-6.6}$\\\\\n\\enddata\n\\tablenotetext{abc}{~~~The parameters follow those in Table \\ref{tab:450stack}.}\n\\end{deluxetable*}\n\nOverall, we see that QG candidates with 24 $\\mu$m counterparts and QG candidates with 3 GHz counterparts that are not radio AGNs (i.e., IR-radio-bright QGs) exhibit the strongest stacking signal at both 450 $\\mu$m and 850 $\\mu$m. These IR-radio-bright QGs account for 9.7$\\pm$0.2\\% (1769\/18304) of all the QG candidates. In general, we do not reach significant detections of IR-radio-faint QGs. However, even with the low SNR, the stacked 850 $\\mu$m fluxes for high-mass ($>10^{10.5}~M_{\\odot}$) IR-radio-faint QGs are consistently higher than those for low-mass IR-radio-faint QGs. This suggests that even the IR-radio-faint QGs have dust emission in the rest-frame far-IR, or they are clustered around dusty objects (see below). On the other hand, among IR-radio-bright QGs, it is not apparent that the high-mass ones show consistently higher 850 $\\mu$m fluxes than the low-mass ones. This suggests that we are not seeing a population of well-behaved galaxies who follow the star-formation main sequence. This is expected for QGs.\n\nWe can compare our stacked 450 $\\mu$m fluxes with the \\emph{Herschel} 500 $\\mu$m stacked fluxes in \\citet{Man2016}. 
Our mean 450 $\\mu$m flux of IR-radio-faint QGs is 0.00$\\pm$0.02 mJy, whose 1$\\sigma$ upper limit is over 10 times lower than their stacked 500 $\\mu$m fluxes of IR-faint QGs, which range from 0.2 to 2.5 mJy in different mass and redshift bins. The difference in how the two QG subsamples are defined (an SFR derived from 24 $\\mu$m under or over 100 $M_{\\odot}$ year$^{-1}$ in \\citet{Man2016}) may be one possible explanation. However, our mean 450 $\\mu$m flux of IR-radio-bright QGs, 0.65$\\pm$0.15 mJy, is still lower than most of the above mean 500 $\\mu$m fluxes of IR-faint QGs (0.2 to 2.5 mJy) except for some of those with $M_*<10^{10.6}~M_{\\odot}$.\n\nOur stacked 450 $\\mu$m fluxes are also lower than the results in \\citet{Magdis2021}. They selected QG candidates with several color-color diagrams and stacked samples without 24 $\\mu$m detection, so we compare our stacked fluxes of IR-radio-faint QGs with their results. Their stacked \\emph{Herschel} 500 $\\mu$m fluxes, ranging from 0.12 to 0.59 mJy in various redshift bins, are also much higher than our stacked 450 $\\mu$m flux, 0.00$\\pm$0.02 mJy. On the other hand, their stacked SCUBA-2 850 $\\mu$m fluxes, ranging from 0.04 to 0.1 mJy, are at a similar level to our stacked 850 $\\mu$m fluxes.\n\nA possible explanation for the differences between the SCUBA-2 450 $\\mu$m and \\emph{Herschel} 500 $\\mu$m stacked fluxes is that the stacked \\emph{Herschel} fluxes were biased by source clustering at the scale of the large $35\\arcsec$ \\emph{Herschel} 500 $\\mu$m beam (e.g., \\citealt{Viero2013}, also see discussion in \\citealt{Bethermin2017}). Although \\citet{Magdis2021} modeled the emission of all the stacked images to separate surrounding sources, it appears that their 500 $\\mu$m stacked fluxes are higher. Although \\citet{Man2016} applied \\texttt{SIMSTACK} to stack and deblend simultaneously, it remains possible that the effects of source blending and clustering were not completely removed.\n\nThe above comparison confirms the well-known bias in submillimeter stacking analyses: when sources are clustered at scales comparable to the beam size, the stacked flux is overestimated. This bias becomes quite severe under \\emph{Herschel}'s large beams in the two longest wavebands. What about our SCUBA-2 stacked fluxes? In the previous section, we showed that QG candidates are clustered around 850 $\\mu$m sources at scales comparable to the SCUBA-2 beam. So if we blindly stack these QGs in the 850 $\\mu$m image, the stacked flux will be overestimated. Fortunately, the majority of our 850 $\\mu$m stacking signal comes from the IR-radio-bright subsample. Their 24 $\\mu$m and 3 GHz counterparts are likely to be 850 $\\mu$m sources themselves, and the bias caused by clustering should therefore be negligible. On the other hand, the IR-radio-faint sample does not have deep high-resolution data to confirm that the QGs are responsible for the detected 850 $\\mu$m fluxes. Therefore, strictly speaking, our stacked 850 $\\mu$m fluxes for IR-radio-faint QGs should be considered upper limits. 
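Schematically, for a match-filtered map the value measured at the position of a stacked QG is approximately $$S_{\\rm meas}\\approx S_{\\rm QG}+\\sum_{j}S_{j}\\,B(\\theta_{j}),$$ where $S_{j}$ are the fluxes of nearby submillimeter emitters, $\\theta_{j}$ their angular separations from the QG, and $B(\\theta)$ the effective beam\/filter response; physical clustering at separations comparable to or smaller than the beam therefore adds a positive contribution to the stacked flux. 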
Even if a detection is reached for a certain subsample of IR-radio-faint QGs, the detected flux should only be taken as an indication that these QG candidates are physically related to faint submillimeter emitters, rather than evidence for in situ star formation in the QG candidates.\n\nFinally, we can examine whether the strong 850 $\\mu$m detection (8.6$\\sigma$) of the IR-radio-bright QGs really comes from galaxies in the QG color-color space in the $NUV$--$r$--$J$ diagram, or from galaxies originally in the SFG color space scattered by photometric errors across the selection boundary. In Section \\ref{sec:QG_selection}, we show that such SFG contamination caused by photometric errors accounts for about 7.5\\% of the selected QGs. With the same method, the estimated fraction of misidentified IR-radio-bright QGs caused by photometric errors is slightly higher, 8.9\\%. Moreover, we identified individual IR-radio-bright QGs whose probability of being scattered from the SFG color space is $>0.05$. These sources account for 33\\% of our IR-radio-bright QGs (584\/1769). We excluded them and redid the stacking on the remaining IR-radio-bright QGs, and still obtained a strong detection of $0.22\\pm0.04$ mJy (5.9$\\sigma$) despite the very generous probability cut of $>0.05$. These results imply that misidentified QG candidates due to photometric errors account for only $<$10\\% of our estimated dusty SFG contamination, and this minor population does not dominate our stacking results. The majority of the dusty SFG contamination is caused by the intrinsic properties of the contaminants rather than by photometric errors.\n\n\n\n\\subsection{Examining the Quiescence}\n\nTo examine whether our subsamples are consistent with a quiescent population, we need to derive their SFRs and compare them with their stellar masses.\n\nWe calculated the IR luminosity from the mean submillimeter fluxes and the median redshift of each group of the stacking sample. Since we only conducted measurements at 450 and 850 $\\mu$m, we performed single-band SED ``fitting'' by assuming that there is a unique relation between SED shape and IR luminosity. To do so, we adopted the luminosity-dependent dust SED templates of J.\\ K.\\ Chu et al.\\ (in preparation), which are based on the latest \\emph{WISE} and \\emph{Herschel} photometry for 201 local IR-selected galaxies \\citep{Chu2017}. This set of templates covers IR luminosities of $7\\times10^9$ to $1.7\\times10^{12} L_{\\odot}$. We further supplemented these with the submillimeter galaxy SED from the ALESS program \\citep{Danielson2017}, which has an IR luminosity of $5.2\\times10^{12} L_{\\odot}$. We redshifted these SEDs to the redshifts of our targets and calculated their observed 450 $\\mu$m or 850 $\\mu$m fluxes. We picked the templates with redshifted fluxes closest to our stacked flux and interpolated between the template fluxes to obtain the IR luminosity of our targets. We scaled the IR luminosity by 1\/SNR to estimate the $1\\sigma$ error on the IR luminosity. For groups with negative mean flux, we calculated the IR luminosity corresponding to the flux error to estimate the $1\\sigma$ upper limit on the IR luminosity. The results are presented in the seventh columns of Tables \\ref{tab:450stack} and \\ref{tab:850stack}.\n\nTo verify the results based on the local SEDs of Chu et al., we also repeated the calculations using the SED library of \\citet{Schreiber2018c}. Overall, we find no systematic differences if we assume main sequence galaxies ($R_{\\rm SB}=1$) for the Schreiber et al.\\ library. 
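\n\nFor concreteness, the single-band estimate described above can be sketched as follows. This is a schematic illustration rather than the exact implementation: it assumes each template is represented by its total IR luminosity together with a function returning its observed-frame flux density at a given wavelength and redshift, and the interface and names are placeholders.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef lir_from_single_band(s_obs_mjy, band_um, z, templates):\n    # templates: list of (L_IR in Lsun, predict_flux) pairs, where\n    # predict_flux(band_um, z) gives the observed-frame flux density\n    # in mJy of that template redshifted to z.\n    lir = np.array([t[0] for t in templates])\n    flux = np.array([t[1](band_um, z) for t in templates])\n    order = np.argsort(flux)\n    # interpolate log L_IR between the templates bracketing the stacked flux\n    log_lir = np.interp(np.log10(s_obs_mjy),\n                        np.log10(flux[order]), np.log10(lir[order]))\n    return 10.0**log_lir\n\\end{verbatim}\n\nThe $1\\sigma$ error or upper limit then follows by scaling with 1\/SNR or by passing the flux uncertainty through the same relation.\n\n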
The mean difference in the calculated $L_{\\rm IR}$ is less than 0.1 dex for the non-zero entries in Tables \\ref{tab:450stack} and \\ref{tab:850stack}, while the rms dispersion is within 0.25 dex. This small difference can be further reduced if we assume a sub-main-sequence $R_{\\rm SB}$ for the IR-radio-faint subsamples in Table~\\ref{tab:850stack} and a starburst $R_{\\rm SB}$ for the IR-radio-bright subsamples. This tuning of the $R_{\\rm SB}$ parameter is consistent with our interpretation of these two subgroups (see below). In our subsequent analyses, we adopt the calculations based on the SEDs of Chu et al.\n\nAfter calculating the IR luminosity, we followed the $L_{\\rm IR}$--SFR calibration applied in \\citet{Man2016}. We estimated the SFR by applying the relation applicable to SFGs \\citep{Kennicutt1998}:\n$$\\mathrm{SFR} (M_{\\odot}\\, \\mathrm{yr^{-1}})=1.7\\times10^{-10} L_{\\rm IR} (L_{\\odot}).$$\nWe then adjusted the obtained SFR to the \\citet{Chabrier2003} IMF by applying the calibration used in \\citet{Man2016}:\n$$\\mathrm{SFR}_{\\rm Chabrier}=\\mathrm{SFR}_{\\rm Salpeter}\/1.7.$$\n\nThe results are presented in the eighth columns of Table \\ref{tab:450stack} and Table \\ref{tab:850stack} as SFR$_{\\rm 450 \\mu m}$ and SFR$_{\\rm 850 \\mu m}$, respectively. A few observations can be made here. First, the 850-$\\mu$m derived SFR is in general higher than the 450-$\\mu$m derived SFR. This is partly caused by the much deeper sensitivity, in terms of luminosity, of the SCUBA-2 450 $\\mu$m imaging and the $<3\\sigma$ thresholds we imposed in the stacking procedure. If we remove this threshold in the 450 $\\mu$m imaging, the difference reduces to within a factor of 2, which is not very significant if we consider the overall low S\/N of the 450 $\\mu$m stacked fluxes and the small number of available sources for the 450 $\\mu$m stacking.\n\nWe compare our results with SFRs derived from optical SED fitting in COSMOS2015 (last column in Tables \\ref{tab:450stack} and \\ref{tab:850stack}). Their mean SFRs of IR-radio-faint QGs are higher than ours at high $z$, but their mean SFRs of IR-radio-bright QGs are lower. This can be explained by the age-extinction degeneracy in SED fitting in the absence of far-IR photometry.\n\nWe compare the submillimeter-derived SFRs with the stellar masses of the galaxies in Fig.~\\ref{fig:SSFR}. We show the star-formation ``main sequence'' of \\citet{Speagle2014} with black solid lines and the $\\pm$0.9 dex ranges with shaded areas. \n\nWe show the results from \\citet{Man2016} for comparison (Fig.~\\ref{fig:SSFR}). Our results are in broad agreement with theirs, but tend to have slightly lower SFRs for the low-$z$ samples, which lie further below the main sequence. Other than the low-$z$ samples, our derived SFRs are fairly consistent. We note that the SFRs of \\citet{Man2016} were derived from SED fitting using stacked fluxes across the entire far-IR range. This explains why their 500 $\\mu$m stacked fluxes are much higher than ours, but their SFRs are not. \n\nWe also show the results from \\citet{Magdis2021} in Fig.~\\ref{fig:SSFR}. Their SFRs are about the same as or slightly lower than our SFRs, in contrast to the comparison with \\citet{Man2016}. We note that their IR luminosities were derived from SED fitting using stacked fluxes from mid-IR to radio data. They then also obtained SFRs by applying the relation in \\citet{Kennicutt1998}, but they used a Salpeter IMF and added the SFR derived from the optical photometry. 
We converted their IR luminosity to SFR using the same process as in \\citet{Man2016} and this work for a fair comparison. We also show, for reference, their SFRs obtained with the original conversion in their work, which are in general closer to the SFRs from \\citet{Man2016}.\n\nThe conclusion we can draw from Fig.~\\ref{fig:SSFR} is that the IR-radio-faint QGs are in general below the star-formation main sequence, while the majority of the IR-radio-bright QGs are consistent with the main sequence, probably except for the high-mass end in the two low-redshift bins and in the highest redshift bin.\n\n\\begin{figure*}[ht!]\n\\epsscale{1.15}\n\\plotone{SFRvsM_IMF.png}\n\\caption{SFR derived from 850 $\\mu$m fluxes versus stellar mass. The purple diamonds represent IR-radio-bright QGs, while the red circles represent IR-radio-faint QGs. The smaller semi-transparent symbols show results from other works: purple diamonds, red circles, and blue squares represent the IR-bright QGs, IR-faint QGs, and SFGs in \\citet{Man2016}, respectively, and red triangles represent the QGs in \\citet{Magdis2021}. IR-radio-bright QGs in our work are defined as QG candidates either with 24 $\\mu$m counterparts or with 3 GHz counterparts labeled with SFG flags in the VLA catalog \\citep{Smolcic2017}, while IR-bright QGs in \\citet{Man2016} are defined as QG candidates with SFRs derived from 24 $\\mu$m over 100 $M_{\\odot}$ year$^{-1}$. QGs in \\citet{Magdis2021} are defined as QG candidates without 24 $\\mu$m detection. The filled triangles are derived with the same $L_{\\rm IR}$--SFR conversion as the other two works, while the open triangles are derived with the conversion described in their work. The black solid lines show the redshift-dependent main sequence of \\citet{Speagle2014}, and the shaded areas show its 1--3 $\\times$ 0.3 dex scatter.\\label{fig:SSFR}}\n\\end{figure*}\n\nTo sum up, our stacking results show that only the IR-radio-bright QGs have SFRs similar to main-sequence galaxies. These are likely to be faint dusty SFGs that contaminate the QG color selection. However, the population of IR-radio-bright QGs is small, accounting for 9.7$\\pm$0.7\\% (179\/1846) and 9.7$\\pm$0.2\\% (1769\/18304) of all the QG candidates in the 450 and 850 $\\mu$m images, respectively. The fractions range from 7\\% to 12\\% in different redshift bins and do not have a strong redshift dependence. We conclude that the contamination by dusty SFGs is $\\sim$10\\% among the color-selected QG candidates, and that the contamination can be removed using multi-wavelength data such as the 24 $\\mu$m and 3 GHz data for the COSMOS field.\n\nFor comparison, \\citet{Man2016} suggested that the maximum contamination is 15\\% and could be removed by using 24 $\\mu$m observations. In this study, we used submillimeter data with better sensitivities and resolutions, and our estimate of the contamination is somewhat tighter (10\\%) than that in \\citet{Man2016}. \n\nAs in Section~\\ref{subsubsec:traditional_matching_process}, if we assume the same fraction of QGs among AS2COSMOS sources, we can estimate the number of faint submillimeter sources that are QGs. For the number of faint submillimeter sources, we again applied the 850 $\\mu$m number counts in \\citet{Simpson2019} and extrapolated them to a flux level of S$_{850 \\mu m}= 0.5$ mJy. This leads to 707$\\pm$462 QGs among faint submillimeter sources, and a dusty galaxy contamination rate among QGs of 3.9$\\pm$2.5\\%. 
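This rate follows directly from the numbers quoted above: dividing the expected $707\\pm462$ QGs among faint submillimeter sources by the $\\sim1.8\\times10^{4}$ QG candidates in the 850 $\\mu$m footprint gives $\\approx$0.039, i.e., 3.9\\%. 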
We can further extrapolate the counts to 0.26 mJy, the stacked 850 $\\mu$m flux of IR-radio-bright QGs. This will further increase the estimated contamination rate. However, such an extrapolation is probably too aggressive given the uncertainty in the faint-end counts. Nevertheless, considering the unknown uncertainty of extrapolating the number counts to a flux level lower than the detection limit, we conclude that the above estimate is not inconsistent with the $\\sim10\\%$ contamination derived from the stacking of IR-radio-bright QGs.\n\nFinally, using either 24 $\\mu$m or 3 GHz data to pinpoint star-forming contaminants among color-selected QG candidates may not work well at high redshifts ($z>3$ or 4). This is because the mid-IR and radio suffer from strong $K$-corrections and are not sensitive to high-redshift SFGs. Our $3.4\\sigma$ detection in the 850 $\\mu$m stacking of the IR-radio-faint QGs at $z>2$ seems to agree with this, i.e., there may exist dusty contaminants that are faint in the mid-IR and radio. In other words, the effectiveness of QG color selection at high redshift remains untested in this framework. Since the formation of QGs at higher redshifts requires both rapid growth of the stellar population and rapid quenching, the identification of high-redshift QGs is of great interest \\citep{Merlin2018,Straatman2014,Carnall2020, Valentino2020}. Removing dusty contaminants among high-redshift QGs is beyond the sensitivities of \\emph{Spitzer}, \\emph{Herschel}, and the current VLA, and will require deep ALMA data.\n\n\\section{AGN Properties} \\label{sec:AGN_properites}\n\nIn this section, we discuss the properties of the AGNs among our QG candidates. Fig.~\\ref{fig:NUVrJ_AGN} shows the distribution of three different classes of AGN samples in the $NUV$--$r$--$J$ diagram, including radio AGNs, mid-IR AGNs, and X-ray AGNs (Section~\\ref{subsec:AGN_samples}), in two mass bins. The stellar masses of the samples were limited to above $10^{10.5}~M_{\\odot}$. This is because the samples are likely to be incomplete below $10^{10.5}~M_{\\odot}$ at $z\\sim2$ (see Fig.~\\ref{fig:data} (b)). This mass limit is also consistent with the 90\\% completeness limit found by \\citet{Laigle2016} for QGs at high redshifts. We therefore applied this stellar mass cut for fair comparisons of the QG fractions. In Fig.~\\ref{fig:NUVrJ_AGN}, we can see that the distribution of the radio AGNs is different from those of the other two. A similar distinction also exists between radio-selected sources and 24 $\\mu$m selected sources in Fig.~\\ref{fig:NUVrJ243}. Fig.~\\ref{fig:NUVrJ_AGN} provides evidence that the difference in Fig.~\\ref{fig:NUVrJ243} is driven by radio AGNs.\n\n\\begin{figure*}[!ht]\n\\epsscale{1.15}\n\\plotone{NUVrJAGN.png}\n\\caption{The distribution of radio AGNs (left panels), mid-IR AGNs (middle panels), and X-ray AGNs (right panels) on the $NUV$--$r$--$J$ diagram, for $M_*>10^{11}~M_{\\odot}$ (top panels) and $10^{10.5}~M_{\\odot}<M_*\\leq10^{11}~M_{\\odot}$ (bottom panels).\\label{fig:NUVrJ_AGN}}\n\\end{figure*}\n\n%% Figs.~\\ref{fig:QG_fraction} and \\ref{fig:AGN_fraction}: QG fractions among the AGN samples and AGN fractions among the QG candidates, in two mass bins, $M_*>10^{11}~M_{\\odot}$ (a) and $10^{10.5}~M_{\\odot}<M_*\\leq10^{11}~M_{\\odot}$ (b).\n\nIn Fig.~\\ref{fig:QG_fraction}, the QG fractions among X-ray AGNs are $\\sim0.6$ to $1.5\\sigma$ larger than those among non-AGNs in the $z>2.5$ redshift bins, with respect to their own error bars. In Fig.~\\ref{fig:AGN_fraction}, the AGN fraction among QGs is $\\sim0.9\\sigma$ larger than that among the full sample in the $z>2.5$ and $M_*>10^{11}~M_{\\odot}$ bin. This may be caused by either selection bias or a real evolution trend. 
The evolution trend could be explained by the role of quasar-mode AGN feedback \\citep{Fabian2012, Somerville2008}, i.e., gas outflows driven by X-ray AGNs that remove gas and quench star formation. One possibility is that the mode of AGN quenching may change from quasar-mode to radio-mode from high $z$ to low $z$. Another possibility could be that X-ray AGNs are related to the initial quenching, while radio AGNs are responsible for the maintenance of the quiescence. This could also explain the increasing radio AGN fraction among QG candidates in Fig.~\\ref{fig:AGN_fraction} at lower redshift. Nevertheless, the rises in the X-ray AGNs in Fig.~\\ref{fig:QG_fraction} and \\ref{fig:AGN_fraction} only occur in the highest redshift bins, where the sample sizes are the smallest and the selection completeness is less well understood. This has to be further tested with more data and careful examination of various selection biases at the high-redshift end.\n\nTo sum up, our data show a strong correlation between radio AGNs and QGs but do not single out which of these scenarios is at work. Our data also do not show whether radio AGNs are related to the initial quenching, or just related to the maintenance of the quiescence.\n\n\\section{Summary} \\label{sec:summary}\n\nIn this study, we examined the submillimeter properties of $NUV$--$r$--$J$ selected QG candidates at $z\\lesssim3$. We cross-matched the QG candidates with bright submillimeter sources detected by JCMT SCUBA-2 and ALMA. For the former, we used \\emph{Spitzer} 24 $\\mu$m and VLA 3 GHz data to refine their positions to overcome the low angular resolution of JCMT. This way, we found that 0.16$\\pm$0.03\\% and 0.43$\\pm$0.15\\% of our QG candidates are likely to be bright 850 and 450 $\\mu$m submillimeter galaxies, respectively. The contamination increases to 1.72$\\pm$0.50\\% and 3.51$\\pm$2.48\\% at $z>2$. We further performed a stacking analysis of QG candidates in the JCMT 450 and 850 $\\mu$m images. We obtained strong stacking detections for a subsample of QGs with \\emph{Spitzer} 24 $\\mu$m and VLA 3 GHz counterparts that are not radio AGNs. This special class of ``IR-radio-bright'' QGs accounts for about 10\\% of the entire QG sample, and they are likely to be faint submillimeter sources with SFRs of a few tens to about a hundred $M_\\sun$ yr$^{-1}$. These results are broadly consistent with the contamination rates derived from a small sample of ALMA-detected QGs and the 850 $\\mu$m number counts. We conclude that the dusty star-forming galaxy contamination rate among $NUV$--$r$--$J$ selected QG candidates is up to $\\sim10\\%$, but such contamination can be removed by 24 $\\mu$m, submillimeter, or 3 GHz observations at current sensitivity levels.\n\nWhen we cross-matched the QG candidates with JCMT SCUBA-2 850 $\\mu$m SMGs without relying on high-resolution data, we adopted a large matching radius of $7\\arcsec$ because of the large SCUBA-2 beam size. This leads to a large fraction of chance projections among the matched QGs. We estimated the number of chance projections with simulations by assuming random spatial distributions for SCUBA-2 sources. After statistically subtracting the chance projections, we found that, on average, 0.096 QGs (35.4\/370) per 850 $\\mu$m selected SMG are physically related to the SMG, while ALMA observations indicate that only 0.026 QGs (9.7\/370) per SMG really coincide with the SMG within $1\\arcsec$. 
This implies a clustering between these two populations at a scale of $1\\arcsec$ to $7\\arcsec$, and should be a future topic of investigation.\n\nFinally, we examined the QG fractions among our AGN samples and found a correlation between our QG candidates and radio AGNs. When we limited our studies to galaxies with stellar masses larger than $10^{10.5}M_\\sun$, we found that the QG fraction of radio AGNs are larger than those of the non-AGN samples, IR AGNs, and X-ray AGNs at $z<$ 1.5. This suggests a connection between the radio jets and the quenching or the maintenance of the quiescence of the QGs, or the so-called radio-mode AGN feedback. However, our data do not rule out the possibility that radio AGNs are just more easily triggered in quenched galaxies, rather than being responsible for the initial quenching.\n\n\\acknowledgments\n\nThe authors thank Bau-Ching Hsieh, Ian Smail, Iary Davidzon, and Olivier Ilbert for the discussion and comments, the anonymous referee for the comments that greatly improve the manuscript, and JCMT staff for the observational support. Y.H.H., W.H.W., Y.Y.C., C.F.L., and Z.K.G. acknowledge grant support from the Ministry of Science and Technology of Taiwan (MoST, 105-2112-M-001-029-MY3, 108-2112-M-001-014-, and 109-2112-M-001-011-). C.C.C. acknowledges MoST grant 109-2112-M-001-016-MY3. M.J.M. acknowledges the support of the National Science Centre, Poland through the SONATA BIS grant 2018\/30\/E\/ST9\/00208. M.P.K. acknowledges support from the First TEAM grant of the Foundation for Polish Science No. POIR.04.04.00-00-5D21\/18-00. L.C.H. was supported by the National Science Foundation of China (11721303, 11991052) and the National Key R\\&D Program of China (2016YFA0400702). Y.G. acknowledges National Science Foundation of China (NSFC) grants \\#11861131007, 12033004, and 11420101002, and Chinese Academy of Sciences Key Research Program of Frontier Sciences (Grant No. QYZDJ-SSW-SLH008). The submillimeter data used in this work include observations from the JCMT Large and Legacy Programs: S2COSMOS (M16AL002), STUDIES (M16AL006), and S2CLS (MJLSC01), the JCMT PI program of Casey et al.\\ (M11BH11A, M12AH11A, and M12BH21A), the ALMA program AS2COSMOS (ADS\/JAO.ALMA \\#2016.1.00463.S), and various ALMA archival data. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of the National Astronomical Observatory of Japan; the Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; and the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance (MOF) of China and administrated by the Chinese Academy of Sciences (CAS), as well as the National Key R\\&D Program of China (No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. 
The Joint ALMA Observatory is operated by ESO, AUI\/NRAO, and NAOJ.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Details on Small-world Inter-clique Topology}\n\\label{app:small_world}\n\n We present a more detailed and precise explanation of the algorithm to establish a small-world\n inter-clique topology (Algorithm~\\ref{Algorithm:Smallworld}). Algorithm~\\ref{Algorithm:Smallworld} instantiates the function \n\\texttt{inter} with a\nsmall-world inter-clique topology as described in Section~\\ref{section:interclique-topologies}. It adds a\nlinear number of inter-clique edges by first arranging cliques on a ring. It then adds a logarithmic number of ``finger'' edges to other cliques on the ring chosen such that there is a constant number of edges added per set, on sets that are exponentially bigger the further away on the ring. ``Finger'' edges are added symmetrically on both sides of the ring to the cliques in each set that are closest to a given set. ``Finger`` edges are added for each clique on the ring, therefore adding in total a linear-logarithmic number of edges.\n\n\\begin{algorithm}[h]\n \\caption{$\\textit{smallworld}(DC)$: adds $O(\\# N \\log(\\# N))$ edges}\n \\label{Algorithm:Smallworld}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Require:} set of cliques $DC$ (set of set of nodes)\n \\STATE ~~size of neighborhood $ns$ (default 2)\n \\STATE ~~function $\\textit{least\\_edges}(S, E)$ that returns one of the nodes in $S$ with the least number of edges in $E$\n \\STATE $E \\leftarrow \\emptyset$ \\COMMENT{Set of Edges}\n \\STATE $L \\leftarrow [ C~\\text{for}~C \\in DC ]$ \\COMMENT{Arrange cliques in a list}\n \\FOR{$i \\in \\{1,\\dots,\\#DC\\}$}\n \\FOR{$\\textit{offset} \\in \\{ 2^x~\\text{for}~x~\\in \\{ 0, \\dots, \\lceil \\log_2(\\#DC) \\rceil \\} \\}$} \n \\FOR{$k \\in \\{0,\\dots,ns-1\\}$}\n \\STATE $n \\leftarrow \\textit{least\\_edges}(L_i, E)$\n \\STATE $m \\leftarrow \\textit{least\\_edges}(L_{(i+\\textit{offset}+k) \\% \\#DC}, E)$\n \\STATE $E \\leftarrow E \\cup \\{ \\{n,m\\} \\}$\n \\STATE $n \\leftarrow \\textit{least\\_edges}(L_i, E)$\n \\STATE $m \\leftarrow \\textit{least\\_edges}(L_{(i-\\textit{offset}-k)\\% \\#DC} , E)$\n \\STATE $E \\leftarrow E \\cup \\{ \\{n,m\\} \\}$\n \\ENDFOR\n \\ENDFOR\n \\ENDFOR\n \\RETURN E\n \\end{algorithmic}\n\\end{algorithm}\n\nAlgorithm~\\ref{Algorithm:Smallworld} expects a set of cliques $DC$, previously computed by \nAlgorithm~\\ref{Algorithm:greedy-swap}; a size of neighborhood $ns$,\nwhich is the number of finger edges to add per set of cliques, and a function \n\\textit{least\\_edges}, which given a set of nodes $S$ and an existing set of\nedges $E = \\{\\{i,j\\}, \\dots \\}$, returns one of the nodes in $E$ with the least number of edges. It returns a new set of edges $\\{\\{i,j\\}, \\dots \\}$ with all edges added by the small-world topology.\n\nThe implementation first arranges the cliques of $DC$ in a list, which\nrepresents the ring. Traversing the list with increasing indices is equivalent\nto traversing the ring in the clockwise direction, and inversely. Then, for every clique $i$ on the ring from which we are computing the distance to others, a number of edges are added. All other cliques are implicitly arranged in mutually exclusive sets, with size and at offset exponentially bigger (doubling at every step). Then for every of these sets, $ns$ edges are added, both in the clockwise and counter-clockwise directions, always on the nodes with the least number of edges in each clique. 
The ring edges are implicitly added to the cliques at offset $1$ in both directions.\n \n\n\n\n\n\n \n \n \n\n\n\n\n\n\n\\section{Additional Experiments on Scaling Behavior with Increasing Number of\nNodes}\n\\label{app:scaling}\n\nSection~\\ref{section:scaling} compares the convergence speed of various inter-clique topologies at a scale of 1000 nodes. In this section, we show the effect of scaling the number of nodes, by comparing the convergence speed with 1, 10, 100, and 1000 nodes, and adjusting the batch size to maintain a constant number of updates per epoch. We present results for Ring, Fractal, Small-world, and Fully-Connected inter-clique topologies.\n \nFigure~\\ref{fig:d-cliques-mnist-scaling-fully-connected} shows the results for\nMNIST. For all topologies, we notice a perfect scaling up to 100 nodes, i.e.\nthe accuracy curves overlap, with low variance between nodes. Starting at 1000\nnodes, there is a significant increase in variance between nodes and the\nconvergence is slower, only marginally for Fully-Connected but\nsignificantly so for Fractal and Ring. Small-world has higher variance between nodes but maintains a convergence speed close to that of Fully-Connected.\n\n\n\n\n\\begin{figure}[htbp]\n \\centering \n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-scaling-fully-connected-cst-updates}\n \\caption{Fully-Connected}\n \\end{subfigure}\n \\quad\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-scaling-smallworld-cst-updates}\n \\caption{Small-world}\n \\end{subfigure}\n \\quad\n\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-scaling-fractal-cliques-cst-updates}\n \\caption{Fractal}\n \\end{subfigure} \n \\quad\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-scaling-ring-cliques-cst-updates}\n \\caption{Ring}\n \\end{subfigure} \n \n \\caption{\\label{fig:d-cliques-mnist-scaling-fully-connected} MNIST:\n D-Cliques scaling behavior (constant updates per epoch and 10 nodes per clique) for different\n inter-clique topologies.} \n\\end{figure}\n \nFigure~\\ref{fig:d-cliques-cifar10-scaling-fully-connected} shows the results\nfor CIFAR10. When increasing from 1 to 10 nodes (resulting in a single\nfully-connected clique), there is actually a small increase both in final\naccuracy and convergence speed. We believe this increase is due to the\ngradient being computed with better representation of examples from all\nclasses with 10 fully-connected non-IID nodes, while the gradient for a single\nnon-IID node may have a slightly larger bias because the random sampling \nmay allow more bias in the representation of classes in each batch. At a\nscale of 100 nodes, there is no difference between Fully-Connected and\nFractal, as the connections are the same; however, a Ring already shows a\nsignificantly slower convergence. At 1000 nodes, the convergence significantly\nslows down for Fractal and Ring, while remaining close, albeit with a larger\nvariance, to Fully-Connected. 
Similar to MNIST, Small-world has\nhigher variance and slightly lower convergence speed than Fully-Connected but\nremains very close.\n\nWe therefore conclude that Fully-Connected and Small-world have good scaling\nproperties in terms of convergence speed, and that the\nlinear-logarithmic number of edges of Small-world makes it the best compromise\nbetween convergence speed and connectivity, and thus the best choice for\nefficient large-scale decentralized learning in practice.\n\n\n\\begin{figure}[htbp]\n \\centering\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-scaling-fully-connected-cst-updates}\n \\caption{Fully-Connected}\n \\end{subfigure}\n \\quad\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-scaling-smallworld-cst-updates}\n \\caption{Small-world}\n \\end{subfigure}\n \n \n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-scaling-fractal-cst-updates}\n \\caption{Fractal}\n \\end{subfigure} \n \\quad\n\n \n \n \n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-scaling-ring-cst-updates}\n \\caption{Ring}\n \\end{subfigure} \n \n \\caption{\\label{fig:d-cliques-cifar10-scaling-fully-connected} CIFAR10: D-Cliques scaling behavior (constant updates per epoch and 10 nodes per clique) for different\n inter-clique topologies.} \n\\end{figure}\n\n\\section{Additional Experiments with Extreme Label Skew}\n\\label{app:extreme-local-skew} \n\nIn this section, we present additional results for similar experiments as in\nSection~\\ref{section:evaluation} but in the presence of\n \\textit{extreme label distribution skew}: we consider that each node only has examples from a single class. This extreme partitioning case provides an upper bound on the effect of label distribution skew suggesting that D-Cliques should perform similarly or better in less extreme cases, as long as a small-enough average skew can be obtained on all cliques. In turn, this helps to provide insights on why D-Cliques work well, as well as to quantify the loss in convergence speed\nthat may result from using construction algorithms that generate cliques with higher skew.\n\n\\subsection{Data Heterogeneity Assumptions}\n\\label{section:non-iid-assumptions}\n\nTo isolate the effect of label distribution skew from other potentially compounding\nfactors, we make the following simplifying assumptions: (1) All classes are\nequally represented in the global dataset; (2) All classes are represented on\nthe same number of nodes; (3) All nodes have the same number of examples.\n\nWhile less realistic than the assumptions used Section~\\ref{section:evaluation}, \nthese assumptions are still reasonable because: (1) Global class imbalance equally\naffects the optimization process on a single node and is therefore not\nspecific to the decentralized setting; (2) Our results do not exploit specific\npositions in the topology; (3) Imbalanced dataset sizes across nodes can be\naddressed for instance by appropriately weighting the individual loss\nfunctions.\n\nThese assumptions do make the construction of cliques slightly easier by \nmaking it easy to build cliques that have zero skew, as shown in \nSection~\\ref{section:ideal-cliques}. 
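\n\nFor illustration, a partition satisfying assumptions (1)--(3), with a single class per node, can be generated as in the following sketch. This is a minimal example with illustrative names, not the exact experimental setup.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef single_class_partition(labels, n_nodes, seed=0):\n    # Assign each node examples of exactly one class, with every class\n    # placed on the same number of nodes and the same number of\n    # examples per node (assumptions 1-3 above).\n    rng = np.random.default_rng(seed)\n    classes = np.unique(labels)\n    assert n_nodes % len(classes) == 0\n    nodes_per_class = n_nodes \/\/ len(classes)\n    per_node = min(int((labels == c).sum())\n                   for c in classes) \/\/ nodes_per_class\n\n    partition, node = {}, 0\n    for c in classes:\n        idx = rng.permutation(np.where(labels == c)[0])\n        for k in range(nodes_per_class):\n            partition[node] = idx[k * per_node:(k + 1) * per_node]\n            node += 1\n    return partition  # node id -> indices of its local examples\n\\end{verbatim}\n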
\n\n\\subsection{Constructing Ideal Cliques}\n\\label{section:ideal-cliques}\n \n Algorithm~\\ref{Algorithm:D-Clique-Construction} shows the overall approach\n for constructing a D-Cliques topology under the assumptions of Section~\\ref{section:non-iid-assumptions}.\\footnote{An IID\n version of D-Cliques, in which each node has an equal number of examples of\n all classes, can be implemented by picking $\\#L$ nodes per clique at random.}\n It expects the following inputs: $L$, the set of all classes present in the global distribution $D = \\bigcup_{i \\in N} D_i$; $N$, the set of all nodes; a function $classes(S)$, which given a subset $S$ of nodes in $N$ returns the set of classes in their joint local distributions ($D_S = \\bigcup_{i \\in S} D_i$); a function $intra(DC)$, which given $DC$, a set of cliques (set of set of nodes), creates a set of edges ($\\{\\{i,j\\}, \\dots \\}$) connecting all nodes within each clique to one another; a function $inter(DC)$, which given a set of cliques, creates a set of edges ($\\{\\{i,j\\}, \\dots \\}$) connecting nodes belonging to different cliques; and a function $weigths(E)$, which given a set of edges, returns the weighted matrix $W_{ij}$. Algorithm~\\ref{Algorithm:D-Clique-Construction} returns both $W_{ij}$, for use in D-SGD (Algorithm~\\ref{Algorithm:D-PSGD} and~\\ref{Algorithm:Clique-Unbiased-D-PSGD}), and $DC$, for use with Clique Averaging (Algorithm~\\ref{Algorithm:Clique-Unbiased-D-PSGD}).\n \n \\begin{algorithm}[h]\n \\caption{D-Cliques Construction}\n \\label{Algorithm:D-Clique-Construction}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Require:} set of classes globally present $L$, \n \\STATE~~ set of all nodes $N = \\{ 1, 2, \\dots, n \\}$,\n \\STATE~~ fn $\\textit{classes}(S)$ that returns the classes present in a subset of nodes $S$,\n \\STATE~~ fn $\\textit{intra}(DC)$ that returns edges intraconnecting cliques of $DC$,\n \\STATE~~ fn $\\textit{inter}(DC)$ that returns edges interconnecting cliques of $DC$ (Sec.~\\ref{section:interclique-topologies})\n \\STATE~~ fn $\\textit{weights}(E)$ that assigns weights to edges in $E$ \n \n \\STATE $R \\leftarrow \\{ n~\\text{for}~n \\in N \\}$ \\COMMENT{Remaining nodes}\n \\STATE $DC \\leftarrow \\emptyset$ \\COMMENT{D-Cliques}\n \\STATE $\\textit{C} \\leftarrow \\emptyset$ \\COMMENT{Current Clique}\n \\WHILE{$R \\neq \\emptyset$}\n \\STATE $n \\leftarrow \\text{pick}~1~\\text{from}~\\{ m \\in R | \\textit{classes}(\\{m\\}) \\subsetneq \\textit{classes}(\\textit{C}) \\}$\n \\STATE $R \\leftarrow R \\setminus \\{ n \\}$\n \\STATE $C \\leftarrow C \\cup \\{ n \\}$\n \\IF{$\\textit{classes}(C) = L$}\n \\STATE $DC \\leftarrow DC \\cup \\{ C \\}$\n \\STATE $C \\leftarrow \\emptyset$\n \\ENDIF\n \\ENDWHILE\n \\RETURN $(weights(\\textit{intra}(DC) \\cup \\textit{inter}(DC)), DC)$\n \\end{algorithmic}\n\\end{algorithm}\n \nThe implementation builds a single clique by adding nodes with different\nclasses until all classes of the global distribution are represented. Each\nclique is built sequentially until all nodes are parts of cliques.\nBecause all classes are represented on an equal number of nodes, all cliques\nwill have nodes of all classes. Furthermore, since nodes have examples\nof a single class, we are guaranteed a valid assignment is possible in a greedy manner. 
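\n\nA compact Python rendering of this greedy construction, under the single-class-per-node assumption, could look as follows; this is a sketch with illustrative names, and the intra-clique edges, inter-clique edges, and weights are added afterwards exactly as in the last line of Algorithm~\\ref{Algorithm:D-Clique-Construction}.\n\n\\begin{verbatim}\nimport random\n\ndef build_d_cliques(node_classes, all_classes, seed=0):\n    # node_classes: dict node -> class of its local examples.\n    # Greedily fill a clique with nodes of distinct classes until every\n    # class in all_classes is represented, then start a new clique.\n    rng = random.Random(seed)\n    remaining = set(node_classes)\n    cliques, current, covered = [], set(), set()\n    while remaining:\n        candidates = [n for n in remaining\n                      if node_classes[n] not in covered]\n        n = rng.choice(candidates)\n        remaining.discard(n)\n        current.add(n)\n        covered.add(node_classes[n])\n        if covered == set(all_classes):\n            cliques.append(current)\n            current, covered = set(), set()\n    return cliques\n\\end{verbatim}\n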
\nAfter cliques are created, edges are added and weights are assigned to edges, \nusing the corresponding input functions.\n\n\\subsection{Evaluation}\n\\label{section:ideal-cliques-evaluation}\n\nIn this section, we provide figures analogous to those of the main text using the partitioning \nscheme of Section~\\ref{section:non-iid-assumptions}.\n\n\\subsubsection{Data Heterogeneity is Significant at Multiple Levels of Node Skew} \n\n\\autoref{fig:iid-vs-non-iid-problem-1-class-per-node} is consistent with \\autoref{fig:iid-vs-non-iid-problem} albeit\nwith slower convergence speed and higher variance. On the one hand, \\autoref{fig:iid-vs-non-iid-problem-1-class-per-node} shows that an extreme skew amplifies the difficulty of learning. On the other hand, \\autoref{fig:iid-vs-non-iid-problem} shows that the problem is not limited to the most extreme cases and is therefore worthy of consideration in designing decentralized federated learning solutions.\n\n\n\n\\begin{figure*}[htbp]\n \\centering\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/ring-IID-vs-non-IID-eq-classes-1-class-per-node}\n\\caption{\\label{fig:ring-IID-vs-non-IID-eq-classes-1-class-per-node} Ring topology}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/grid-IID-vs-non-IID-eq-classes-1-class-per-node}\n\\caption{\\label{fig:grid-IID-vs-non-IID-eq-classes-1-class-per-node} Grid topology}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/fc-IID-vs-non-IID-eq-classes-1-class-per-node}\n\\caption{\\label{fig:fully-connected-IID-vs-non-IID-eq-classes-1-class-per-node} Fully-connected topology}\n \\end{subfigure}\n \\caption{Convergence speed of decentralized SGD with and without label distribution skew for different topologies on MNIST (Variation of \\autoref{fig:iid-vs-non-iid-problem} using balanced classes and skewed with 1 class\/node).\n \\label{fig:iid-vs-non-iid-problem-1-class-per-node}}\n\\end{figure*}\n\n\\subsubsection{D-Cliques Match the Convergence Speed of Fully-Connected with a Fraction of the Edges}\n\n\\autoref{fig:convergence-speed-dc-vs-fc-1-class-per-node} shows consistent\nresults with \\autoref{fig:convergence-speed-dc-vs-fc-2-shards-per-node}:\nD-Cliques work equally well in more extreme skew. It should therefore work\nwell for other levels of label distribution skew commonly encountered in\npractice.\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-mnist-dc-fc-vs-fc-1-class-per-node}\n \\caption{\\label{fig:convergence-speed-mnist-dc-fc-vs-fc-1-class-per-node} MNIST}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-dc-fc-vs-fc-1-class-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-dc-fc-vs-fc-1-class-per-node} CIFAR10 (with momentum)}\n \\end{subfigure}\n\\caption{\\label{fig:convergence-speed-dc-vs-fc-1-class-per-node} Comparison on 100 heterogeneous nodes\nbetween a fully-connected network and D-Cliques (fully-connected) constructed with Greedy Swap (10 cliques of 10 nodes) using\nClique Averaging. 
(Variation of \\autoref{fig:convergence-speed-dc-vs-fc-2-shards-per-node} with 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n\n\\subsubsection{Clique Averaging and Momentum are Beneficial and Sometimes Necessary}\n\n\\autoref{fig:d-clique-mnist-clique-avg-1-class-per-node} and \\autoref{fig:cifar10-c-avg-momentum-1-class-per-node} show that, compared respectively to \\autoref{fig:d-clique-mnist-clique-avg} and \\autoref{fig:cifar10-c-avg-momentum}, Clique Averaging increases in importance the more extreme the skew is and provides consistent convergence speed at multiple levels.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.23\\textwidth]{figures\/convergence-speed-mnist-dc-no-c-avg-vs-c-avg-1-class-per-node}\n\\caption{\\label{fig:d-clique-mnist-clique-avg-1-class-per-node} MNIST: Effect of Clique Averaging on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes). Y axis starts at 89. (Variation of \\autoref{fig:d-clique-mnist-clique-avg} with balanced classes and 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-wo-c-avg-no-mom-vs-mom-1-class-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-wo-c-avg-no-mom-vs-mom-1-class-per-node} Without Clique Averaging }\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-w-c-avg-no-mom-vs-mom-1-class-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-w-c-avg-no-mom-vs-mom-1-class-per-node} With Clique Averaging}\n \\end{subfigure}\n\\caption{\\label{fig:cifar10-c-avg-momentum-1-class-per-node} CIFAR10: Effect of Clique Averaging, without and with\nmomentum, on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes) (variation of \\autoref{fig:cifar10-c-avg-momentum} with 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n\n\\subsubsection{D-Cliques Clustering is Necessary}\n\\label{section:d-cliques-clustering-is-necessary}\n\nIn this experiment, we compare D-Cliques to different variations of random graphs,\nwith additional variations compared to the experiments of Section~\\ref{section:d-cliques-vs-random-graphs}, \nto show it is actually necessary. Compared to a random graph, D-Cliques enforce additional constraints \nand provide additional mechanisms: they ensure\na diverse representation of all classes in the immediate neighbourhood of all nodes; they enable\n Clique Averaging to debias gradients; and they provide a high-level of clustering, i.e. neighbors \n of a node are neighbors themselves, which tends to lower variance.\nIn order to distinguish the effect of the first two from the last, we compare D-Cliques to other variations \nof random graphs: (1) with the additional constraint that all classes should be represented in the immediate neighborhood of all nodes \n(i.e. 'diverse neighbors'), and (2) in combination with unbiased gradients computed using \nthe average of the gradients of a subset of neighbors of a node such that the skew of that subset is 0.\n\nThe partitioning scheme we use (Section~\\ref{section:non-iid-assumptions}) makes the construction of both D-Cliques and diverse random graphs easy and ensures that in both cases the skew of the cliques or neighborhood subset is exactly 0. 
This removes the challenge of designing topology optimization algorithms for both D-Cliques and random graphs that would guarantee reaching the same level of skews in both cases to make results comparable.\n\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-mnist-random-vs-d-cliques-1-class-per-node}\n \\caption{MNIST}\n \\end{subfigure}\n \\hfill \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-cifar10-random-vs-d-cliques-1-class-per-node}\n \\caption{CIFAR10}\n \\end{subfigure} \n \\caption{\\label{fig:convergence-random-vs-d-cliques-1-class-per-node} Comparison to variations of Random Graph with 10 edges per node on 100 nodes (variation of \\autoref{fig:convergence-random-vs-d-cliques-2-shards} with 1 class\/node instead of 2 shards\/node as well as additional random graphs with more constraints).} \n\\end{figure}\n\n\\autoref{fig:convergence-random-vs-d-cliques-1-class-per-node} compares the convergence speed of D-Cliques with all the variations of random graphs on both MNIST and CIFAR10. In both cases,\nD-Cliques converge faster than all other options. In addition, in the case of CIFAR10, the clustering appears to be critical\nfor good convergence speed: even a random graph with diverse neighborhoods and unbiased gradients \nconverges significantly slower.\n\n\n\n\n\n\\subsubsection{D-Cliques Scale with Sparser Inter-Clique Topologies}\n\n\\autoref{fig:d-cliques-scaling-mnist-1000-1-class-per-node} and \\autoref{fig:d-cliques-scaling-cifar10-1000-1-class-per-node} are consistent with \\autoref{fig:d-cliques-scaling-mnist-1000} and \\autoref{fig:d-cliques-scaling-cifar10-1000}. The less extreme skew enables a slightly faster convergence rate in the case of CIFAR10 (\\autoref{fig:d-cliques-scaling-cifar10-1000}).\n\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-mnist-1000-linear-1-class-per-node}\n \\caption{\\label{fig:d-cliques-scaling-mnist-1000-linear-1-class-per-node} Linear}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-mnist-1000-super-linear-1-class-per-node}\n\\caption{\\label{fig:d-cliques-scaling-mnist-1000-super-linear-1-class-per-node} Super- and Quasi-Linear}\n \\end{subfigure}\n\\caption{\\label{fig:d-cliques-scaling-mnist-1000-1-class-per-node} MNIST: D-Cliques convergence speed with 1000 nodes (10 nodes per clique, same number of updates per epoch as 100 nodes, i.e. batch-size 10x less per node) with different inter-clique topologies. 
(variation of \\autoref{fig:d-cliques-scaling-mnist-1000} with 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-cifar10-1000-linear-1-class-per-node}\n \\caption{\\label{fig:d-cliques-scaling-cifar10-1000-linear-1-class-per-node} Linear}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-cifar10-1000-super-linear-1-class-per-node}\n\\caption{\\label{fig:d-cliques-scaling-cifar10-1000-super-linear-1-class-per-node} Super- and Quasi-Linear}\n \\end{subfigure}\n\\caption{\\label{fig:d-cliques-scaling-cifar10-1000-1-class-per-node} CIFAR10: D-Cliques convergence speed with 1000 nodes (10 nodes per clique, same number of updates per epoch as 100 nodes, i.e. batch-size 10x less per node) with different inter-clique topologies (variation of \\autoref{fig:d-cliques-scaling-cifar10-1000} with 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n\n\\subsubsection{Full Intra-Clique Connectivity is Necessary}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering \n \\includegraphics[width=\\textwidth]{figures\/d-cliques-ideal-wo-clique-avg-impact-of-edge-removal} \n\\caption{\\label{fig:d-cliques-ideal-wo-clique-avg-impact-of-edge-removal} Without Clique Averaging }\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-ideal-w-clique-avg-impact-of-edge-removal}\n\\caption{\\label{fig:d-cliques-ideal-w-clique-avg-impact-of-edge-removal} With Clique Averaging}\n\\end{subfigure}\n\\caption{\\label{fig:d-cliques-ideal-mnist-intra-connectivity} MNIST: Impact of intra-clique edge removal on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes) (variation of \\autoref{fig:d-cliques-mnist-intra-connectivity} with 1 class\/node instead of 2 shards\/node). 
Y axis starts at 89.}\n\\end{figure}\n\n\\autoref{fig:d-cliques-ideal-mnist-intra-connectivity} and \\autoref{fig:d-cliques-ideal-cifar10-intra-connectivity} show higher variance than \\autoref{fig:d-cliques-mnist-intra-connectivity} and \\autoref{fig:d-cliques-cifar10-intra-connectivity}, with a significantly lower convergence speed in the case of CIFAR10 (\\autoref{fig:d-cliques-ideal-cifar10-intra-connectivity}).\n\n\n\n\n\\begin{figure}[t]\n \\centering\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering \n \\includegraphics[width=\\textwidth]{figures\/d-cliques-ideal-cifar10-wo-clique-avg-impact-of-edge-removal} \n\\caption{\\label{fig:d-cliques-ideal-cifar10-wo-clique-avg-impact-of-edge-removal} Without Clique Averaging }\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-ideal-cifar10-w-clique-avg-impact-of-edge-removal}\n\\caption{\\label{fig:d-cliques-ideal-cifar10-w-clique-avg-impact-of-edge-removal} With Clique Averaging}\n\\end{subfigure}\n\\caption{\\label{fig:d-cliques-ideal-cifar10-intra-connectivity} CIFAR10: Impact of intra-clique edge removal (with momentum) on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes) (variation of \\autoref{fig:d-cliques-cifar10-intra-connectivity} with 1 class\/node instead of 2 shards\/node).}\n\\end{figure}\n \n \n\\section{Conclusion}\n\\label{section:conclusion}\n\nWe proposed D-Cliques, a sparse topology that obtains similar convergence\nspeed as a fully-connected network in the presence of label distribution skew.\nD-Cliques is based on assembling subsets of nodes into cliques such\nthat the clique-level class distribution is representative of the global\ndistribution, thereby locally recovering homogeneity of data. Cliques are\nconnected together by a\nsparse inter-clique topology so that\nthey quickly converge to the same model. We proposed Clique\nAveraging to remove the bias in gradient computation due to non-homogeneous\naveraging neighborhood by averaging gradients only with other nodes within the clique. Clique Averaging\ncan in turn be used to implement an effective momentum.\nThrough our extensive set of experiments, we\nshowed that the clique structure of D-Cliques is critical in obtaining these\nresults and that a small-world inter-clique topology with only $O(n \\log n)$ \nedges achieves a very good compromise between\nconvergence speed and scalability with the number of nodes.\n\nD-Cliques thus appears to be very promising to reduce bandwidth\nusage on FL servers and to implement fully decentralized alternatives in a\nwider range of applications where global coordination is impossible or costly.\nFor instance, the relative frequency of classes in each node\ncould be computed using PushSum~\\cite{kempe2003gossip}, and the topology could\nbe constructed in a decentralized and adaptive way with\nPeerSampling~\\cite{jelasity2007gossip}. This will be investigated in future work.\nWe also believe that our ideas can be useful to deal\nwith more general types of data heterogeneity beyond the important case\nof\nlabel distribution skew on which we focused in this paper. 
An important\nexample is\ncovariate shift or feature distribution skew \\cite{kairouz2019advances}, for\nwhich local density estimates could be used as basis to construct cliques that\napproximately recover the global distribution.\n\\section{D-Cliques}\n\\label{section:d-cliques}\n\nIn this section, we introduce D-Cliques, a topology\ndesigned to compensate for data heterogeneity. We also present some\nmodifications of D-SGD that leverage some properties of the proposed\ntopology and allow to implement a successful momentum scheme.\n\n\\subsection{Intuition}\n\nTo give the intuition behind\nour approach, let us consider the neighborhood of a single node in a grid\ntopology represented\non Figure~\\ref{fig:grid-iid-vs-non-iid-neighbourhood}.\nNodes are distributed randomly in the grid and the colors of a node represent\nthe proportion of each class in its local dataset. In the homogeneous\nsetting, the label distribution is the same across\nnodes: in the example shown in Figure~\\ref{fig:grid-iid-neighbourhood}, all classes\nare represented in equal proportions on all nodes. This is not the case in the\nheterogeneous setting: Figure~\\ref{fig:grid-non-iid-neighbourhood} shows an\nextreme case of label distribution skew where each\nnode holds examples of a single class only.\n\nFrom the point of view of the center node in\nFigure~\\ref{fig:grid-iid-vs-non-iid-neighbourhood}, a single training step of\nD-SGD is\nequivalent to sampling a mini-batch five times larger from the union of the\nlocal distributions of neighboring nodes.\nIn the homogeneous case, since gradients are computed from examples of all\nclasses,\nthe resulting averaged gradient points in a direction that tends to reduce\nthe loss across all classes. In contrast, in the heterogeneous case, the\nrepresentation of classes in the immediate neighborhood of the node is\ndifferent from the global label distribution\n(in Figure~\\ref{fig:grid-non-iid-neighbourhood}, only a\nsubset of classes are represented), thus the gradients will\nbe biased.\nImportantly, as the distributed averaging process takes several steps to\nconverge, this variance persists across iterations as the locally computed\ngradients are far from the global average.\\footnote{One could perform a\nsufficiently large number of\naveraging steps between each gradient step, but this is too costly in\npractice.} This can significantly slow down\nconvergence speed to the point of making decentralized optimization\nimpractical.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.18\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/grid-iid-neighbourhood}\n\\caption{\\label{fig:grid-iid-neighbourhood} Homogeneous data}\n \\end{subfigure}\n \\hspace*{.5cm}\n \\begin{subfigure}[b]{0.18\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/grid-non-iid-neighbourhood}\n\\caption{\\label{fig:grid-non-iid-neighbourhood} Heterogeneous data}\n \\end{subfigure}\n \\caption{Neighborhood in a grid.}\n \\label{fig:grid-iid-vs-non-iid-neighbourhood}\n\\end{figure}\n\nWith D-Cliques, we address label distribution skew by\ncarefully designing a\nnetwork topology composed of \\textit{locally representative cliques} while \nmaintaining \\textit{sparse inter-clique connections} only.\n\n\\subsection{Constructing Locally Representative Cliques}\n\nD-Cliques construct a topology in which each node is part of a \\emph{clique} \n(i.e., a subset of nodes whose induced subgraph is fully connected)\nsuch that the label distribution in each 
clique is\nclose to the global label distribution. Formally, for a label $y$ and a\nclique composed of nodes $C\subseteq N$, we denote by $p_C(y)=\n\frac{1}{|C|}\sum_{i\in C} p_i(y)$ the distribution of $y$ in $C$\nand by $p(y)=\frac{1}{n}\sum_{i\in N} p_i(y)$ its global distribution.\nWe measure the \textit{skew} of $C$ by the sum\nof the absolute differences of $p_C(y)$ and $p(y)$:\n\begin{equation}\n\label{eq:skew}\n \textit{skew}(C) =\n \sum_{l=1}^L | p_C(y = l) - p(y = l) |.\n\end{equation}\n\n\n\nTo efficiently construct a set of cliques with small skew, we propose\nGreedy-Swap (Algorithm~\ref{Algorithm:greedy-swap}). The parameter\n$M$ is the maximum size of cliques and controls the\nnumber of intra-clique edges. We start by initializing cliques at\nrandom. Then, for\na certain number of steps $K$, we randomly pick two cliques and swap two of\ntheir nodes so as to decrease the sum of skews of the two cliques. The swap is\nchosen randomly among the ones that decrease the skew, hence\nthis algorithm can be seen as a randomized greedy algorithm.\nWe note that this algorithm only requires\nthe knowledge of the label distribution $p_i(y)$ at each node $i$. For the\nsake of\nsimplicity, we assume that D-Cliques are constructed from the global\nknowledge of these distributions, which can easily be obtained by\ndecentralized averaging in a pre-processing step \citep[e.g.,][]\n{jelasity2005largegossip}.\n\n\begin{algorithm}[t]\n \caption{D-Cliques Construction via Greedy Swap}\n \label{Algorithm:greedy-swap}\n \begin{algorithmic}[1]\n \STATE \textbf{Require:} maximum clique size $M$, max steps $K$, set\n of all nodes $N = \{ 1, 2, \dots, n \}$,\n \n procedure $\texttt{inter}(\cdot)$ to create inter-clique connections\n (see Sec.~\ref{section:interclique-topologies})\n \n \STATE $DC \leftarrow []$\n \WHILE {$N \neq \emptyset$}\n \STATE $C \leftarrow$ sample $M$ nodes from $N$ at random\n \STATE $N \leftarrow N \setminus C$; $DC.\text{append}(C)$\n \ENDWHILE\n \FOR{$k \in \{1, \dots, K\}$}\n \STATE $C_1,C_2 \leftarrow$ random sample of 2 elements from $DC$\n \STATE $s \leftarrow \textit{skew}(C_1) + \textit{skew}(C_2)$\n \STATE $\textit{swaps} \leftarrow []$\n \FOR{$i \in C_1, j \in C_2$}\n \STATE $s' \leftarrow \textit{skew}(C_1\setminus\{i\}\cup\{j\})\n + \textit{skew}(C_2 \setminus\{j\}\cup\{i\})$\hspace*{-.05cm}\n \IF {$s' < s$}\n \STATE \textit{swaps}.append($(i, j)$)\n \ENDIF\n \ENDFOR\n \IF {len(\textit{swaps}) $> 0$}\n \STATE $(i,j) \leftarrow$ random element from $\n \textit{swaps}$ \n \STATE $C_1 \leftarrow C_1 \setminus\{i\}\cup\{j\}; C_2 \leftarrow C_2 \setminus\{j\}\cup\{i\}$\n \ENDIF\n \ENDFOR\n \STATE $E\leftarrow \{(i,j) : C\in DC, i,j\in C, i\neq j\}$\n \n \RETURN topology $G=(N,E \cup \n \texttt{inter}(DC))$\n \end{algorithmic}\n\end{algorithm}\n\n\nThe key idea of D-Cliques is to ensure the clique-level label distribution\n$p_C(y)$\n closely matches the global distribution $p(y)$. As a consequence,\nthe local models of nodes across cliques remain rather close. Therefore, a\nsparse inter-clique topology can be used, significantly reducing the total\nnumber of edges without slowing down the convergence. 
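For concreteness, the skew measure of \eqref{eq:skew} and the Greedy-Swap construction can be sketched in a few lines of Python. This is a schematic illustration rather than the exact code used in our experiments: the function names, the representation of the label distributions $p_i(y)$ as arrays indexed by node identifiers, and the default values of $M$ and $K$ are our own choices here, and the construction of the intra- and inter-clique edges from the returned cliques is omitted.
\begin{verbatim}
import random
import numpy as np

def skew(clique, label_dist, global_dist):
    # Clique-level distribution: average of the members' p_i(y).
    p_C = np.mean([label_dist[i] for i in clique], axis=0)
    # Sum of absolute differences with the global distribution.
    return np.abs(p_C - global_dist).sum()

def greedy_swap(label_dist, M=10, K=1000, seed=0):
    # label_dist: dict node id -> length-L array of class frequencies.
    rng = random.Random(seed)
    nodes = list(label_dist)
    global_dist = np.mean(list(label_dist.values()), axis=0)
    rng.shuffle(nodes)
    # Random initialization: cliques of size at most M.
    cliques = [set(nodes[i:i + M]) for i in range(0, len(nodes), M)]
    for _ in range(K):
        C1, C2 = rng.sample(cliques, 2)
        s = (skew(C1, label_dist, global_dist)
             + skew(C2, label_dist, global_dist))
        # Candidate swaps that decrease the sum of the two skews.
        swaps = [(i, j) for i in C1 for j in C2
                 if skew(C1 - {i} | {j}, label_dist, global_dist)
                 + skew(C2 - {j} | {i}, label_dist, global_dist) < s]
        if swaps:
            i, j = rng.choice(swaps)
            C1.remove(i); C1.add(j)
            C2.remove(j); C2.add(i)
    return cliques
\end{verbatim}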
We discuss some possible\nchoices for this inter-clique topology in the next section.\n\n\\subsection{Adding Sparse Inter-Clique Connections}\n\\label{section:interclique-topologies}\n\nTo ensure a global consensus and convergence, we introduce\n\\textit{inter-clique connections} between a small number of node pairs that\nbelong to different cliques, thereby implementing the \\texttt{inter}\nprocedure called at the end of Algorithm~\\ref{Algorithm:greedy-swap}.\nWe aim to ensure that the degree of each node remains low and balanced so as\nto make the network topology well-suited to decentralized federated learning.\nWe consider several choices of inter-clique topology, which offer\ndifferent scalings for the number of required edges and the average distance\nbetween nodes in the resulting graph.\n\nThe \\textit{ring} has (almost) the fewest possible number of edges for the\ngraph to be connected: in this case, each clique is connected to exactly\ntwo other cliques by a single edge. This topology requires only $O(\\frac{n}\n{M})$ inter-clique edges but suffers an $O(n)$ average distance between nodes.\n\nThe\n\\textit{fractal} topology\nprovides a logarithmic bound on the average distance. In this\nhierarchical scheme, cliques are arranged in larger groups of $M$ cliques that\nare connected\ninternally with one edge per\npair of cliques, but with only one edge between pairs of larger groups. The\ntopology is built recursively such that $M$ groups will themselves form a\nlarger group at the next level up. This results in at most $M$ edges per node \nif edges are evenly distributed: i.e., each group within the same level adds \nat most $M-1$ edges to other groups, leaving one node per group with $M-1$ \nedges that can receive an additional edge to connect with other groups at the next level.\nSince nodes have at most $M$ edges, the total number of inter-clique edges\nis at most $nM$ edges.\n\nWe can also design an inter-clique topology in which the number of edges\nscales in a log-linear fashion by following a\nsmall-world-like topology~\\cite{watts2000small} applied on top of a\nring~\\cite{stoica2003chord}. In this scheme, cliques are first arranged in a\nring. Then each clique adds symmetric edges, both clockwise and\ncounter-clockwise on the ring, with the $c$ closest cliques in sets of\ncliques that are exponentially bigger the further they are on the ring (see\nAlgorithm~\\ref{Algorithm:Smallworld} in Appendix~\\ref{app:small_world} for\ndetails on the construction). This topology ensures a good connectivity with\nother cliques that are close on the ring, while keeping the average\ndistance small. This scheme uses $O(c\\frac{n}{M}\\log\\frac{n}{M})$ edges,\ni.e.\nlog-linear in $n$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.20\\textwidth]{figures\/fully-connected-cliques}\n \\caption{\\label{fig:d-cliques-figure} D-Cliques with $n=100$, $M=10$ and a\nfully connected inter-clique topology on a problem with 1 class\/node.}\n\\end{figure}\n\nFinally, we can consider a \\emph{fully connected} inter-clique topology\n such that each clique has exactly\none edge with each of the other cliques, spreading these additional edges\nequally among the nodes of a clique, as illustrated in Figure~\\ref{fig:d-cliques-figure}. \nThis has the advantage of\nbounding the distance between any pair of nodes to $3$ but requires\n$O(\\frac{n^2}{M^2})$ inter-clique edges, i.e. 
quadratic in $n$.\n\n\n\n\n\n\n\n\\subsection{Optimizing over D-Cliques with Clique Averaging and Momentum}\n\\label{section:clique-averaging-momentum}\n\n\n\nWhile limiting the number of inter-clique connections reduces the\namount of messages traveling on the network, it also introduces a form of\nbias.\nFigure~\\ref{fig:connected-cliques-bias} illustrates the problem on the\nsimple case of two cliques connected by one inter-clique edge (here,\nbetween the green node of the left clique and the pink node of the right\nclique). In this example, each node holds example of a single class. Let us\nfocus on node A. With weights computed as in \\eqref{eq:metro},\nnode A's self-weight is $\\frac{12}\n{110}$, the weight between A and the green node connected to B is\n$\\frac{10}{110}$, and\nall other neighbors of A have a weight of $\\frac{11}{110}$. Therefore, the\ngradient at A is biased towards its own class (pink) and against the green\nclass. A similar bias holds for all other nodes\nwithout inter-clique edges with respect to their respective classes. For node\nB, all its edge weights (including its self-weight) are equal to $\\frac{1}\n{11}$. However, the green class is represented twice (once as a clique\nneighbor and once from the inter-clique edge), while all other classes are\nrepresented only once. This biases the gradient toward the green class. The\ncombined effect of these two sources of bias is to increase the variance\nof the local models across nodes.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figures\/connected-cliques-bias}\n\\caption{\\label{fig:connected-cliques-bias} Illustrating the bias induced by\ninter-clique connections (see main text for details).}\n\\end{figure}\n\n\\paragraph{Clique Averaging.} \nWe address this problem by adding \\emph{Clique\nAveraging} to D-SGD\n(Algorithm~\\ref{Algorithm:Clique-Unbiased-D-PSGD}), which essentially\ndecouples gradient averaging from model averaging. The idea is to use only the\ngradients of neighbors within the same clique to compute the average gradient\nso as to remove the bias due to inter-clique edges. In contrast, all\nneighbors' models (including those in different cliques)\nparticipate in model averaging as in the original version. Adding Clique Averaging\nrequires gradients to be sent separately from the model parameters: the number\nof messages\nexchanged between nodes is therefore twice their number of edges.\n\n\\begin{algorithm}[t]\n \\caption{D-SGD with Clique Averaging, Node $i$}\n \\label{Algorithm:Clique-Unbiased-D-PSGD}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Require} initial model $\\theta_i^{(0)}$, learning\n rate $\\gamma$, mixing weights $W$, mini-batch size $m$, number of\n steps $K$\n \\FOR{$k = 1,\\ldots, K$}\n \\STATE $S_i^{(k)} \\gets \\text{mini-batch of $m$ samples drawn\n from~} D_i$\n \\STATE $g_i^{(k)} \\gets \\frac{1}{|\\textit{Clique}(i)|}\\sum_{j \\in \n \\textit{Clique(i)}} \\nabla F(\\theta_j^{(k-1)}; S_j^{(k)})$\n \\STATE $\\theta_i^{(k-\\frac{1}{2})} \\gets \\theta_i^{(k-1)} - \\gamma g_i^{(k)}$ \n \\STATE $\\theta_i^{(k)} \\gets \\sum_{j \\in N} W_{ji}^{(k)} \\theta_j^{(k-\\frac{1}{2})}$\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}\n\n\n\\paragraph{Implementing momentum with Clique Averaging.}\nEfficiently training high capacity models usually requires additional\noptimization techniques. 
In particular, momentum~\\cite{pmlr-v28-sutskever13}\nincreases the magnitude of the components of the gradient that are shared\nbetween several consecutive steps, and is critical for deep convolutional networks like\nLeNet~\\cite{lecun1998gradient,quagmire} to converge quickly. However, a direct\napplication of momentum in data heterogeneous settings can\nactually be very detrimental and even fail to converge, as we will show in\n our experiments (Figure~\\ref{fig:cifar10-c-avg-momentum} in\n Section~\\ref{section:evaluation}).\nClique Averaging allows us to reduce the bias in the momentum by using the\nclique-level average gradient $g_i^{(k)}$ of\nAlgorithm~\\ref{Algorithm:Clique-Unbiased-D-PSGD}:\n\\begin{equation}\nv_i^{(k)} \\leftarrow m v_i^{(k-1)} + g_i^{(k)}.\n\\end{equation}\nIt then suffices to modify the original gradient step to apply momentum:\n\\begin{equation}\n\\theta_i^{(k-\\frac{1}{2})} \\leftarrow \\theta_i^{(k-1)} - \\gamma v_i^{(k)}.\n\\end{equation}\n\n\n\n\\section{Evaluation}\n\\label{section:evaluation}\n\nIn this section, we first compare D-Cliques to alternative topologies to\nshow the benefits and relevance of our main design choices. Then, \nwe evaluate different inter-clique topologies to further reduce the number of\ninter-clique connections so as to gracefully scale with the number of\nnodes. Then, we show the impact of removing intra-clique edges.\n Finally, we show that Greedy Swap\n(Alg.~\\ref{Algorithm:greedy-swap}) \nconstructs cliques efficiently with consistently lower skew than\nrandom cliques.\n\n\\subsection{Experimental Setup}\n\\label{section:experimental-settings}\n\nOur main goal is to provide a fair comparison of the convergence speed across\ndifferent topologies and algorithmic variations, in order to\nshow that D-Cliques\ncan remove much of the effects of label distribution skew.\n\nWe experiment with two datasets: MNIST~\\cite{mnistWebsite} and\nCIFAR10~\\cite{krizhevsky2009learning}, which both have $L=10$ classes.\nFor MNIST, we use 50k and 10k examples from the original 60k training \nset for training and validation respectively. We use all 10k examples of \nthe test set to measure prediction accuracy. The validation set preserves the\noriginal unbalanced ratio of the classes in the test set, and the remaining\nexamples become the training set.\nFor CIFAR10, classes are evenly balanced: we initially used 45k\/50k images \nof the original training set for training, 5k\/50k for validation, and all 10k examples \nof the test set for measuring prediction accuracy. After tuning hyper-parameters\non initial experiments, we then used all 50k images of the original training set\nfor training for all experiments, as the 45k did not split evenly in 1000 nodes\nwith the partitioning scheme explained in the next paragraph.\n\nFor both MNIST and CIFAR10, we use the heterogeneous data partitioning scheme\nproposed by~\\citet{mcmahan2016communication} \nin their seminal FL work: \nwe sort all training examples by class, then split the list into shards of\nequal size, and randomly assign two shards to each node. When the number of\nexamples of one class does not divide evenly in shards, as is the case for MNIST, some shards may have examples of more than one class and therefore nodes may have examples\nof up to 4 classes. However, most nodes will have examples of 2 classes. 
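For reference, this shard-based assignment can be sketched as follows (a schematic version, not the exact script used in our experiments, assuming the class labels of the training examples are given as a flat list; the function name is ours):
\begin{verbatim}
import random

def partition_in_shards(labels, num_nodes, shards_per_node=2, seed=0):
    # Sort example indices by class, cut the sorted list into shards
    # of equal size, and give shards_per_node shards to each node.
    rng = random.Random(seed)
    order = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_nodes * shards_per_node
    shard_size = len(order) // num_shards
    shards = [order[s * shard_size:(s + 1) * shard_size]
              for s in range(num_shards)]
    rng.shuffle(shards)
    per_node = []
    for i in range(num_nodes):
        chunk = shards[i * shards_per_node:(i + 1) * shards_per_node]
        per_node.append([idx for shard in chunk for idx in shard])
    return per_node  # list of example indices for each node
\end{verbatim}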
The varying number \nof classes, as well as the varying distribution of examples within a single node, makes the task \nof creating cliques with low skew nontrivial.\n\nWe\nuse a logistic regression classifier for MNIST, which\nprovides up to 92.5\\% accuracy in the centralized setting.\nFor CIFAR10, we use a Group-Normalized variant of LeNet~\\cite{quagmire}, a\ndeep convolutional network which achieves an accuracy of $74.15\\%$ in the\ncentralized setting.\nThese models are thus reasonably accurate (which is sufficient to\nstudy the effect of the topology) while being sufficiently fast to train in a\nfully decentralized setting and simple enough to configure and analyze.\nRegarding hyper-parameters, we jointly optimize the learning rate and\nmini-batch size on the\nvalidation set for 100 nodes, obtaining respectively $0.1$ and $128$ for\nMNIST and $0.002$ and $20$ for CIFAR10.\nFor CIFAR10, we additionally use a momentum of $0.9$.\n\nWe evaluate 100- and 1000-node networks by creating multiple models \nin memory and simulating the exchange of messages between nodes.\nTo ignore the impact of distributed execution strategies and system\noptimization techniques, we report the test accuracy of all nodes (min, max,\naverage) as a function of the number of times each example of the dataset has\nbeen sampled by a node, i.e. an \\textit{epoch}. This is equivalent to the classic \ncase of a single node sampling the full distribution.\nTo further make results comparable across different number of nodes, we lower\nthe batch size proportionally to the number of nodes added, and inversely,\ne.g. on MNIST, 128 with 100 nodes vs. 13 with 1000 nodes. This\nensures the same number of model updates and averaging per epoch, which is\nimportant to have a fair comparison.\\footnote{Updating and averaging models\nafter every example can eliminate the impact of label distribution skew. However, the\nresulting communication overhead is impractical.}\n\nFinally, we compare our results against an ideal baseline:\na fully-connected network topology with the same number of nodes. \nThis baseline is essentially equivalent to a centralized (single) IID node using a batch size\n$n$ times bigger, where $n$ is the number of nodes. Both a fully-connected network and a single IID node\n effectively optimize a single model and sample\nuniformly from the global distribution: both therefore remove entirely the\neffect of label distribution skew and of the network topology on the\noptimization. In practice, we prefer a\nfully-connected network because it\n converges slightly faster and obtains slightly \nbetter final accuracy than a single node sampling randomly from the global\ndistribution.\\footnote{We \nconjecture that an heterogeneous data partition in a fully-connected network may force \nmore balanced representation of all classes in the union of all mini-batches, leading to better convergence.}\n\n\\subsection{D-Cliques Match the Convergence Speed of Fully-Connected with a\nFraction of the Edges}\n\\label{section:d-cliques-vs-fully-connected}\n\nIn this first experiment, we show that D-Cliques with Clique Averaging (and\nmomentum when mentioned) converges \nalmost as fast as a fully-connected network on both MNIST and CIFAR10. Figure~\\ref{fig:convergence-speed-dc-vs-fc-2-shards-per-node} \nillustrates the convergence speed of D-Cliques with $n=100$ nodes on MNIST (with Clique Averaging) \nand CIFAR10 (with Clique Averaging and momentum). 
Observe that the convergence speed is\nvery close to that of a fully-connected topology, and significantly better than with\na ring or a grid (see Figure~\\ref{fig:iid-vs-non-iid-problem}). \nIt also has less variance than both the ring and grid. \n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-mnist-dc-fc-vs-fc-2-shards-per-node}\n \\caption{\\label{fig:convergence-speed-mnist-dc-fc-vs-fc-2-shards-per-node} MNIST}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-dc-fc-vs-fc-2-shards-per-node}\n \\caption{\n \\label{fig:convergence-speed-cifar10-dc-fc-vs-fc-2-shards-per-node} CIFAR10 (w\/ momentum)}\n \\end{subfigure}\n\\caption{\\label{fig:convergence-speed-dc-vs-fc-2-shards-per-node} Comparison on 100 heterogeneous nodes (2 shards\/node)\nbetween a fully-connected network and D-Cliques (fully-connected) constructed with Greedy Swap (10 cliques of 10 nodes) using\nClique Averaging. Bold line is the average accuracy over\nall nodes. Thinner upper and lower lines are maximum and minimum accuracy over\nall nodes.}\n\\end{figure}\n\n\n\\subsection{Clique Averaging is Beneficial and Sometimes Necessary}\n\\label{sec:exp:clique_avg}\n\nIn this experiment, we perform an ablation study of the effect of Clique Averaging.\nFigure~\\ref{fig:d-clique-mnist-clique-avg} shows that Clique Averaging\n(Algorithm~\\autoref{Algorithm:Clique-Unbiased-D-PSGD})\n reduces the variance of models across nodes and slightly accelerates the\nconvergence on MNIST. Recall that Clique Averaging induces a small\nadditional cost, as gradients\nand models need to be sent in two separate rounds of messages. \nNonetheless, compared to fully connecting all nodes, the total number \nof messages per round for 100 nodes is reduced by $\\approx 80\\%$.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.23\\textwidth]{figures\/convergence-speed-mnist-dc-no-c-avg-vs-c-avg-2-shards-per-node}\n\\caption{\\label{fig:d-clique-mnist-clique-avg} MNIST: Effect of Clique Averaging on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes). Y axis starts at 89.}\n\\end{figure}\n\n\n\nThe effect of Clique Averaging is much more pronounced on CIFAR10, as can\nbe seen in\nFigure~\\ref{fig:cifar10-c-avg-momentum}, especially when used in combination with momentum.\nWithout Clique Averaging,\nthe use of momentum is actually detrimental. With Clique Averaging, the \nsituation reverses and momentum is again beneficial. The combination\nof both has the fastest convergence speed and the lowest variance among all\nfour possibilities. We believe that the gains obtained with Clique\nAveraging are larger on CIFAR10 than on MNIST because the model we train on\nCIFAR10 (a deep convolutional network) has much higher capacity than the\nlinear model used for MNIST. 
The resulting highly nonconvex objective increases the\nsensitivity of local updates to small differences in the gradients, making\nthem point in different directions, as observed by \\citet{consensus_distance}\neven in the homogeneous setting.\nClique Averaging helps to reduce this effect by reducing the bias in\nlocal gradients.\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-wo-c-avg-no-mom-vs-mom-2-shards-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-wo-c-avg-no-mom-vs-mom-2-shards-per-node} Without Clique Averaging }\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-w-c-avg-no-mom-vs-mom-2-shards-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-w-c-avg-no-mom-vs-mom-2-shards-per-node} With Clique Averaging}\n \\end{subfigure}\n\\caption{\\label{fig:cifar10-c-avg-momentum} CIFAR10: Effect of Clique Averaging, without and with\nmomentum, on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes).}\n\\end{figure}\n\n\n\\subsection{D-Cliques Converge Faster than Random Graphs}\n\\label{section:d-cliques-vs-random-graphs}\n\nIn this experiment, we compare D-Cliques to a random graph that has a similar \nnumber of edges (10) per node to determine\nwhether a simple sparse topology could work equally well. \nTo ensure a fair comparison, because a random graph does not support \nClique Averaging, we do not use it for D-Cliques either.\n\\autoref{fig:convergence-random-vs-d-cliques-2-shards} \nshows that even \\textit{without} Clique Averaging, D-Cliques converge faster and with\nlower variance. 
Furthermore, the use of momentum in a random graph\nis detrimental, similar to D-Cliques without the use of Clique Averaging \n(see \\autoref{fig:convergence-speed-cifar10-wo-c-avg-no-mom-vs-mom-2-shards-per-node}).\nThis shows that a careful design of the topology is indeed necessary.\n\nD-Cliques converge faster even if we were to create diverse neighborhoods \nin a random graph with lower skew and used those to unbias gradients in an analogous \nway to Clique Averaging (details in Annex~\\ref{section:d-cliques-clustering-is-necessary}, as \nthe experiments require a different partitioning scheme for a fair comparison).\nThe clustering provided by D-Cliques therefore provides faster convergence.\n\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-mnist-random-vs-d-cliques-2-shards}\n \\caption{MNIST}\n \\end{subfigure}\n \\hfill \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-cifar10-random-vs-d-cliques-2-shards}\n \\caption{CIFAR10}\n \\end{subfigure} \n \\caption{\\label{fig:convergence-random-vs-d-cliques-2-shards} Comparison on 100 heterogeneous nodes between D-Cliques (fully-connected) with 10 cliques of size 10 and a random graph with 10 edges per node \\textit{without} Clique Averaging or momentum.} \n\\end{figure}\n\n\n\n\\subsection{D-Cliques Scale with Sparser Inter-Clique Topologies}\n\\label{section:scaling}\n\nIn this experiment, we explore the trade-offs between scalability and\nconvergence speed induced by the several sparse inter-clique topologies\nintroduced in Section~\\ref{section:interclique-topologies}.\n\\autoref{fig:d-cliques-scaling-mnist-1000} and \\autoref{fig:d-cliques-scaling-cifar10-1000} \nshow the convergence speed respectively on MNIST and CIFAR10 on a larger network of 1000 nodes, \ncompared to the ideal baseline of a\nfully-connected network representing\nthe fastest convergence speed achievable if topology had no impact. Among the linear schemes, the ring\ntopology converges but is much slower than our fractal scheme. Among the super-linear schemes, the small-world\ntopology has a convergence speed that is almost the same as with a\nfully-connected inter-clique topology but with 22\\% less edges\n(14.5 edges on average instead of 18.9). \n\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-mnist-1000-linear}\n \\caption{\\label{fig:d-cliques-scaling-mnist-1000-linear} Linear}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-mnist-1000-super-linear}\n\\caption{\\label{fig:d-cliques-scaling-mnist-1000-super-linear} Super- and Quasi-Linear}\n \\end{subfigure}\n\\caption{\\label{fig:d-cliques-scaling-mnist-1000} MNIST: D-Cliques convergence\nspeed with 1000 nodes (10 nodes per clique, same number of updates per epoch as 100 nodes, i.e. 
batch-size 10x less per node) and different inter-clique topologies.}\n\\end{figure}\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-cifar10-1000-linear}\n \\caption{\\label{fig:d-cliques-scaling-cifar10-1000-linear} Linear}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-scaling-cifar10-1000-super-linear}\n\\caption{\\label{fig:d-cliques-scaling-cifar10-1000-super-linear} Super- and Quasi-Linear}\n \\end{subfigure}\n\\caption{\\label{fig:d-cliques-scaling-cifar10-1000} CIFAR10: D-Cliques\nconvergence speed with 1000 nodes (10 nodes per clique, same number of updates per epoch as 100 nodes, i.e. batch-size 10x less per node) and different inter-clique topologies.}\n\\end{figure}\n\nWhile the small-world inter-clique topology shows promising scaling behavior, the\nfully-connected inter-clique topology still offers\nsignificant benefits with 1000 nodes, as it represents a 98\\% reduction in the\nnumber of edges compared to fully connecting individual nodes (18.9 edges on\naverage instead of 999) and a 96\\% reduction in the number of messages (37.8\nmessages per round per node on average instead of 999). \nWe refer to Appendix~\\ref{app:scaling} for additional results comparing the convergence speed across different number of nodes. \nOverall, these results show that D-Cliques can gracefully scale with the\nnumber of nodes.\n \n\n\\subsection{Full Intra-Clique Connectivity is Necessary}\n\nIn this experiment, we measure the impact of removing intra-clique edges \n to assess how critical full connectivity is within cliques. We choose edges to remove\n among the 45 undirected edges present in cliques of size 10. The removal of\n an edge removes the connection in both directions. We remove 1 and 5 edges\n randomly, respectively 2.2\\% and 11\\% of intra-clique edges. \\autoref{fig:d-cliques-mnist-intra-connectivity} \n shows that for MNIST, when not using Clique Averaging, \nremoving edges decreases slightly the convergence speed and increases \nthe variance between nodes. When using Clique Averaging, removing up to 5\nedges does not noticeably affect\nthe convergence speed and variance.\n\n\\begin{figure}[htbp]\n \\centering\n\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering \n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-wo-clique-avg-impact-of-edge-removal} \n\\caption{\\label{fig:d-cliques-mnist-wo-clique-avg-impact-of-edge-removal} Without Clique Averaging }\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-mnist-w-clique-avg-impact-of-edge-removal}\n\\caption{\\label{fig:d-cliques-mnist-w-clique-avg-impact-of-edge-removal} With Clique Averaging}\n\\end{subfigure}\n\\caption{\\label{fig:d-cliques-mnist-intra-connectivity} MNIST: Impact of\nintra-clique edge removal on D-Cliques (fully-connected) with 10\ncliques of 10 heterogeneous nodes (100 nodes). Y axis starts at 89.}\n\\end{figure}\n\nIn contrast, \\autoref{fig:d-cliques-cifar10-intra-connectivity} shows that for CIFAR10, the impact is stronger. We show the results with and without Clique Averaging\nwith momentum in both cases, as momentum is critical for obtaining the best\nconvergence speed on CIFAR10. 
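The removal protocol itself is straightforward; a minimal sketch (with our own function name, representing each undirected intra-clique edge by a single node pair) is:
\begin{verbatim}
import random

def remove_intra_clique_edges(cliques, k=1, seed=0):
    # Remove k undirected edges per clique at random; removing an
    # undirected edge removes the connection in both directions.
    rng = random.Random(seed)
    kept = []
    for clique in cliques:
        nodes = sorted(clique)
        edges = [(u, v) for a, u in enumerate(nodes)
                 for v in nodes[a + 1:]]   # 45 edges for size 10
        removed = set(rng.sample(edges, k))
        kept.extend(e for e in edges if e not in removed)
    return kept  # remaining intra-clique edges
\end{verbatim}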
Without Clique Averaging,\nremoving edges has a small effect on convergence speed and variance, but the convergence speed is too slow to be practical.\nWith Clique Averaging, removing a single edge has a small but noticeable\neffect. Strikingly, removing 5 edges per clique significantly damages the\nconvergence and yields a sharp increase in the variance across nodes.\nTherefore, while D-Cliques can tolerate the removal of some intra-clique edges\nwhen training simple linear models and datasets as in MNIST, fast\nconvergence speed and low variance requires full or nearly full connectivity\nwhen using high-capacity models and more difficult datasets. This is\nin line with the observations made in Section~\\ref{sec:exp:clique_avg}\nregarding the effect of Clique Averaging. Again, these results show the\nrelevance of our design choices, including the choice of constructing fully\nconnected cliques.\n\n\\begin{figure}[htbp]\n \\centering\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering \n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-wo-clique-avg-impact-of-edge-removal} \n\\caption{\\label{fig:d-cliques-cifar10-wo-clique-avg-impact-of-edge-removal} Without Clique Averaging }\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[htbp]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/d-cliques-cifar10-w-clique-avg-impact-of-edge-removal}\n\\caption{\\label{fig:d-cliques-cifar10-w-clique-avg-impact-of-edge-removal} With Clique Averaging}\n\\end{subfigure}\n\\caption{\\label{fig:d-cliques-cifar10-intra-connectivity} CIFAR10: Impact of intra-clique edge removal (with momentum) on D-Cliques (fully-connected) with 10 cliques of 10 heterogeneous nodes (100 nodes).}\n\\end{figure}\n\n\\subsection{Greedy Swap Improves Random Cliques at an Affordable Cost}\n\\label{section:greedy-swap-vs-random-cliques}\n\nIn the next two sub-sections, we compare cliques built with Greedy Swap (Alg.~\\ref{Algorithm:greedy-swap})\nto Random Cliques, a simple and obvious baseline, on their quality (skew), the cost \nof their construction, and their convergence speed.\n\n\\subsubsection{Cliques with Low Skew can be Constructed Efficiently with Greedy Swap}\n\\label{section:cost-cliques}\n\nWe compared the final average skew of 10 cliques with 10 nodes each (for\n$n=100$) created either randomly or with Greedy Swap,\nover 100 experiments after 1000 steps. \\autoref{fig:skew-convergence-speed-2-shards}, in the form of an histogram,\n shows that Greedy Swap generates cliques of significantly lower skew, close to 0 in a majority of cases for both MNIST and CIFAR10.\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.2\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/final-skew-distribution-mnist}\n \\caption{\\label{fig:final-skew-distribution-mnist} MNIST }\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.2\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/final-skew-distribution-cifar10}\n \\caption{\\label{fig:final-skew-distribution-cifar10} CIFAR10}\n \\end{subfigure}\n\\caption{\\label{fig:final-skew-distribution} Final quality of cliques (skew) with a maximum size of 10 over 100 experiments in a network of 100 nodes.}\n\\end{figure}\n\n\\autoref{fig:skew-convergence-speed-2-shards} shows such a low skew can be achieved \nin less than 400 steps for both MNIST and CIFAR10. In practice it takes less\nthan 6 seconds in Python 3.7 on a \nMacbook Pro 2020 for a network of 100 nodes and cliques of size 10. 
Greedy Swap \nis therefore fast and efficient. Moreover, it illustrates the fact that a\nglobal imbalance in the number of examples\nacross classes makes the construction of cliques with low skew harder and\nslower.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.25\\textwidth]{figures\/skew-convergence-speed-2-shards}\n \\caption{\\label{fig:skew-convergence-speed-2-shards} Skew decrease during clique construction of 10 cliques of 10 heterogeneous nodes (100 nodes). Bold line is the average over 100 experiments. Thin lines are respectively the minimum and maximum over all experiments. In wall-clock time, 1000 steps take less than 6 seconds in Python 3.7 on a MacBook Pro 2020.}\n\\end{figure}\n\n\\subsubsection{Cliques built with Greedy Swap Converge Faster than Random Cliques}\n\n\\autoref{fig:convergence-speed-dc-random-vs-dc-gs-2-shards-per-node} compares\nthe convergence speed of cliques optimized with Greedy Swap for 1000 steps with cliques built randomly \n(equivalent to Greedy Swap with 0 steps). For both MNIST and CIFAR10, convergence speed\nincreases significantly and variance between nodes decreases dramatically. Decreasing the skew of cliques\nis therefore critical to convergence speed.\n\n\n\\begin{figure}[htbp]\n \\centering \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-mnist-dc-random-vs-dc-gs-2-shards-per-node}\n \\caption{\\label{fig:convergence-speed-mnist-dc-random-vs-dc-gs-2-shards-per-node} MNIST}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/convergence-speed-cifar10-dc-random-vs-dc-gs-2-shards-per-node}\n \\caption{\\label{fig:convergence-speed-cifar10-dc-random-vs-dc-gs-2-shards-per-node} CIFAR10}\n \\end{subfigure}\n\\caption{\\label{fig:convergence-speed-dc-random-vs-dc-gs-2-shards-per-node} Convergence speed of D-Cliques constructed randomly vs Greedy Swap with 10 cliques of 10 heterogeneous nodes (100 nodes).}\n\\end{figure}\n\n\\subsection{Additional Experiments on Extreme Label Distribution Skew}\n\nIn Appendix~\\ref{app:extreme-local-skew}, we replicate experimental\nresults on an extreme case of label distribution skew where each node only has\nexamples of a single class. These results consistently show that our\napproach remains effective even for extremely skewed label distributions\nacross nodes.\n\n\\section{Introduction}\n\nMachine learning is currently shifting from a \\emph{centralized}\nparadigm, where training data is located on a single\nmachine or\nin a data center, to \\emph{decentralized} ones in which data is processed\nwhere it was naturally produced.\nThis shift is illustrated by the rise of Federated\nLearning\n(FL)~\\cite{mcmahan2016communication}. FL allows\nseveral parties (hospitals, companies, personal\ndevices...) to collaboratively train machine learning models\non their joint\ndata without centralizing it. Not only does FL\navoid the costs of moving data, but it also mitigates privacy and\nconfidentiality concerns~\\cite{kairouz2019advances}.\nYet, working with natural data distributions introduces new challenges for\nlearning systems, as\nlocal datasets\nreflect the usage and production patterns specific to each participant: in\nother words, they are\n\\emph{heterogeneous}. 
An important type of data heterogeneity encountered in\nfederated classification problems, known as \\emph{label distribution skew} \n\\cite{kairouz2019advances,quagmire}, occurs when the frequency of different\nclasses of examples varies significantly across local datasets.\nOne of the key challenges in FL is to design algorithms that\ncan efficiently deal with such heterogeneous data distributions\n\\cite{kairouz2019advances,fedprox,scaffold,quagmire}.\n\nFederated learning algorithms can be classified into two categories depending\non the underlying network topology they run on. In server-based FL, the\nnetwork is organized according to a star topology: a central server orchestrates the training process by\niteratively aggregating model updates received from the participants\n(\\emph{clients}) and sending back the aggregated model \\cite{mcmahan2016communication}. In contrast,\nfully decentralized FL algorithms operate over an arbitrary network topology\nwhere participants communicate only with their direct neighbors\nin the network. A classic example of such algorithms is Decentralized\nSGD (D-SGD) \\cite{lian2017d-psgd}, in which participants alternate between\nlocal SGD updates and model averaging with neighboring nodes.\n\nIn this paper, we focus on fully decentralized algorithms as they can\ngenerally scale better to the large number of participants seen in ``cross-device''\napplications \\cite{kairouz2019advances}. Effectively, while a central\nserver may quickly become a bottleneck as the number of participants increases, the topology used in fully decentralized algorithms can remain sparse\nenough such that all participants need only to communicate with a small number of other participants, i.e. nodes have small (constant or logarithmic) degree \n\\cite{lian2017d-psgd}. In the homogeneous setting where data is\nindependent and identically distributed (IID) across nodes, recent work\nhas shown both empirically\n\\cite{lian2017d-psgd,Lian2018} and theoretically \\cite{neglia2020} that sparse\ntopologies like rings or grids\ndo not significantly affect the convergence\nspeed compared to using denser topologies.\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/ring-IID-vs-non-IID-uneq-classes}\n\\caption{\\label{fig:ring-IID-vs-non-IID-uneq-classes} Ring topology}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/grid-IID-vs-non-IID-uneq-classes}\n\\caption{\\label{fig:grid-IID-vs-non-IID-uneq-classes} Grid topology}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/fc-IID-vs-non-IID-uneq-classes}\n\\caption{\\label{fig:fully-connected-IID-vs-non-IID-uneq-classes} Fully-connected topology}\n \\end{subfigure}\n \\caption{Convergence speed of decentralized\n SGD with and without label distribution skew for different topologies.\n The task is logistic regression on MNIST (see\n Section~\\ref{section:experimental-settings} for details on\n the experimental setup). Bold lines show the\n average test\n accuracy across nodes\n while thin lines show the minimum\n and maximum accuracy of individual nodes. While the effect of topology\n is negligible for homogeneous data, it is very significant in the\n heterogeneous case. 
On a fully-connected network, both cases converge\n similarly.}\n \\label{fig:iid-vs-non-iid-problem}\n\\end{figure*}\n\nIn contrast to the homogeneous case however, our experiments demonstrate that \n\\emph{the impact of topology is extremely significant for heterogeneous data}.\nThis phenomenon is illustrated in Figure~\\ref{fig:iid-vs-non-iid-problem}: we observe that under\nlabel distribution skew, using a\nsparse topology (a ring or\na grid) clearly jeopardizes the convergence speed of decentralized SGD.\nWe stress the fact\nthat, unlike in centralized FL\n\\cite{mcmahan2016communication,scaffold,quagmire}, this\nhappens even when nodes perform a single local update before averaging the\nmodel with their neighbors. In this paper, we thus address the following\nquestion:\n\n\\textit{Can we design sparse topologies with convergence\n speed similar to a fully connected network for problems involving\n many participants with label distribution skew?}\n\nSpecifically, we make the following contributions:\n(1) We propose D-Cliques, a sparse topology in which nodes are organized in\ninterconnected cliques (i.e., locally fully-connected sets of nodes) such that\nthe joint label distribution of each clique is close to that of the global \ndistribution; (2) We design Greedy Swap, a randomized greedy algorithm for\nconstructing such cliques efficiently;\n (3) We introduce Clique Averaging, a modified version of \nthe standard D-SGD algorithm which decouples gradient averaging, used for\noptimizing local models, from distributed averaging, used to ensure that all\nmodels converge, thereby reducing the bias introduced by inter-clique\nconnections; \n(4) We show how Clique Averaging can be used to implement unbiased momentum\nthat would otherwise be detrimental in the heterogeneous setting; (5) We \ndemonstrate\nthrough an extensive experimental study that our approach removes the effect\nof label distribution skew when training a linear\nmodel and a deep\nconvolutional network on the MNIST\nand CIFAR10\ndatasets respectively; (6) Finally, we demonstrate the scalability of our\napproach by considering up to 1000-node networks, in contrast to most\nprevious work on fully decentralized learning which performs empirical\nevaluations on networks with\nat most a few tens\nof nodes\n\\cite{tang18a,neglia2020,momentum_noniid,cross_gradient,consensus_distance}.\n\nFor instance, our results show that under strong label distribution shift,\nusing D-Cliques in a 1000-node network\nrequires 98\\% less edges ($18.9$ vs $999$ edges per participant on average) to obtain a similar convergence speed as a fully-connected topology,\nthereby yielding a 96\\% reduction in the total number of required messages \n(37.8 messages per round per node on average instead of 999). Furthermore an additional 22\\% improvement\nis possible when using a small-world inter-clique topology, with further\npotential gains at larger scales through a quasilinear $O(n\n\\log n)$ scaling in the number of nodes $n$.\n\nThe rest of this paper is organized as follows.\nWe first describe the problem setting in Section~\\ref{section:problem}. We\nthen present the design of D-Cliques in Section~\\ref{section:d-cliques}.\nSection~\\ref{section:evaluation}\ncompares D-Cliques to different topologies \nand algorithmic variations to demonstrate their benefits, constructed with and without Greedy Swap\nin an extensive experimental study. 
Finally, we review some related work\nin Section~\\ref{section:related-work}, and conclude with promising directions\nfor future work in Section~\\ref{section:conclusion}.\n\n\\section{Related Work}\n\\label{section:related-work}\n\nIn this section, we review some related work on dealing with heterogeneous\ndata in federated learning, and on the role of topology in fully decentralized\nalgorithms.\n\n\\paragraph{Dealing with heterogeneity in server-based FL.}\nData heterogeneity is not much of an issue in server-based FL if\nclients send their parameters to the server after each gradient update.\nProblems arise when one seeks to reduce\nthe number of communication rounds by allowing each participant to perform\nmultiple local updates, as in the popular FedAvg algorithm \n\\cite{mcmahan2016communication}. Indeed, data heterogeneity can prevent\nsuch algorithms from\nconverging to a good solution \\cite{quagmire,scaffold}. This led to the design\nof algorithms that are specifically designed to mitigate the impact\nof heterogeneity while performing\nmultiple local updates, using adaptive client sampling \\cite{quagmire}, update\ncorrections \\cite{scaffold} or regularization in the local objective \n\\cite{fedprox}. Another direction is to embrace the heterogeneity by\nlearning personalized models for each client \n\\cite{smith2017federated,perso_fl_mean,maml,moreau,Marfoq2021a}.\nWe note that recent work explores rings of server-based topologies \n\\cite{tornado}, but the focus is not on dealing with heterogeneous data but\nto make server-based FL more scalable to a large number of clients.\n\n\\paragraph{Dealing with heterogeneity in fully decentralized FL.}\nData heterogeneity is known to negatively impact the convergence speed\nof fully decentralized FL algorithms in practice \\cite{jelasity}. Aside from approaches that aim to learn personalized models \\cite{Vanhaesebrouck2017a,Zantedeschi2020a}, this\nmotivated the design of algorithms with modified updates based on variance\nreduction \\cite{tang18a}, momentum correction \\cite{momentum_noniid},\ncross-gradient\naggregation \\cite{cross_gradient}, or multiple averaging steps\nbetween updates \\citep[see][and references therein]{consensus_distance}. These\nalgorithms\ntypically require significantly more communication and\/or computation, and\nhave only been evaluated on small-scale networks with a few tens of\nnodes.\\footnote{We\nalso observed that \\cite{tang18a} is subject to numerical\ninstabilities when run on topologies other than rings. When\nthe rows and columns of $W$ do not exactly\nsum to $1$ (due to finite precision), these small differences get amplified by\nthe proposed updates and make the algorithm diverge.}\nIn contrast, D-Cliques focuses on the design of a sparse topology which is\nable to compensate for the effect of heterogeneous data and scales to large\nnetworks. We do not modify the simple\nand efficient D-SGD\nalgorithm \\cite{lian2017d-psgd} beyond removing some neighbor\ncontributions\nthat otherwise bias the gradient direction.\n\n\\paragraph{Impact of topology in fully decentralized FL.} It is well\nknown\nthat the choice of network topology can affect the\nconvergence of fully decentralized algorithms. 
In theoretical convergence\nrates, this is typically accounted\nfor by a dependence on the spectral gap of\nthe network, see for instance \n\\cite{Duchi2012a,Colin2016a,lian2017d-psgd,Nedic18}.\nHowever, for homogeneous (IID) data, practice contradicts these classic\nresults as fully decentralized algorithms have been observed to converge\nessentially as fast\non sparse topologies like rings or grids as they do on a fully connected\nnetwork \\cite{lian2017d-psgd,Lian2018}. Recent work \n\\cite{neglia2020,consensus_distance} sheds light on this phenomenon with refined convergence analyses based on differences between gradients or parameters across nodes, which are typically\nsmaller in the homogeneous case. However, these results do not give any clear insight\nregarding the role of the topology in the presence of heterogeneous data. \nWe note that some work\nhas gone into designing efficient topologies to optimize the use of\nnetwork resources \\citep[see e.g.,][]{marfoq}, but the topology is chosen\nindependently of how data is distributed across nodes. In summary, the role\nof topology in the heterogeneous data scenario is not well understood and we are not\naware of prior work focusing on this question. Our work is the first\nto show that an\nappropriate choice of data-dependent topology can effectively compensate for\nheterogeneous data.\n\\section{Problem Setting}\n\n\\label{section:problem}\n\n\\paragraph{Objective.} We consider a set $N = \\{1, \\dots, n \\}$ of $n$ nodes\nseeking to\ncollaboratively solve a classification task with $L$ classes. We denote a\nlabeled data point by a tuple $(x,y)$ where $x$ represents the data point \n(e.g., a feature vector) and $y\\in\\{1,\\dots,L\\}$ its label.\nEach\nnode has\naccess to a local dataset that\n follows its own local distribution $D_i$ which may differ from that of other\n nodes.\nIn this work, we tackle \\emph{label distribution skew}: formally, this means\nthat the\nprobability of $(x,y)$ under the local distribution $D_i$ of node $i$, denoted\nby $p_i(x,y)$,\ndecomposes as $p_i(x,y)=p(x|y)p_i(y)$, where $p_i(y)$ may vary across nodes.\nWe\nrefer to \n\\cite{kairouz2019advances,quagmire} for concrete examples of problems\nwith label distribution skew.\n\nThe objective is to find the parameters\n$\\theta$ of a global model that performs well on the union of the local\n distributions by\n minimizing\n the average training loss:\n\\begin{equation}\n\\min_{\\theta} \\frac{1}{n}\\sum_{i=1}^{n} \\mathds{E}_\n{(x_i,y_i) \\sim D_i} [F_i(\\theta;x_i,y_i)],\n\\label{eq:dist-optimization-problem}\n\\end{equation}\nwhere $(x_i,y_i)$ is a data point drawn from $D_i$ and $F_i$ is the loss\nfunction\non node $i$. Therefore, $\\mathds{E}_{(x_i,y_i) \\sim D_i} F_i(\\theta;x_i,y_i)$\ndenotes \nthe\nexpected loss of model $\\theta$ over $D_i$.\n\n\n\n\nTo collaboratively solve Problem \\eqref{eq:dist-optimization-problem}, each\nnode can exchange messages with its neighbors in an undirected network graph\n$G=(N,E)$ where $\\{i,j\\}\\in E$ denotes an edge (communication channel)\nbetween nodes $i$ and $j$.\n\n\\paragraph{Training algorithm.}\nIn this work, we use the popular Decentralized Stochastic\nGradient Descent algorithm, aka D-SGD~\\cite{lian2017d-psgd}. 
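In matrix form, one synchronous round of D-SGD over all nodes can be sketched as follows (a schematic NumPy illustration with our own names; the stochastic gradient and mini-batch sampling routines are assumed to be supplied by the learning task):
\begin{verbatim}
import numpy as np

def dsgd_round(theta, W, grad_fn, sample_batch, lr):
    # theta: (n, d) array with one local model per node.
    # W: (n, n) symmetric, doubly stochastic mixing matrix with
    #    W[i, j] = 0 whenever {i, j} is not an edge of the topology.
    # grad_fn(theta_i, batch): stochastic gradient of the local loss.
    # sample_batch(i): mini-batch drawn from node i's distribution D_i.
    n = theta.shape[0]
    # Each node first takes a local SGD step on its own mini-batch...
    half_step = np.stack(
        [theta[i] - lr * grad_fn(theta[i], sample_batch(i))
         for i in range(n)])
    # ...and then averages the resulting models over its neighborhood
    # (itself included) with the mixing weights.
    return W @ half_step
\end{verbatim}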
As\nshown in Algorithm~\\ref{Algorithm:D-PSGD},\na single iteration of D-SGD at node $i$ consists in sampling a mini-batch\nfrom its local distribution\n$D_i$, updating its local model $\\theta_i$ by taking a stochastic gradient\ndescent\n(SGD) step according to the mini-batch, and performing a weighted average of\nits local model with those of its\nneighbors.\nThis weighted average is defined by a\nmixing matrix $W$, in which $W_{ij}$ corresponds to the weight of\nthe outgoing connection from node $i$ to $j$ and $W_{ij} = 0$ for $\n\\{i,j\\}\\notin\nE$. To ensure that the local models converge on average to a stationary\npoint\nof Problem\n\\eqref{eq:dist-optimization-problem}, $W$\nmust be doubly\nstochastic ($\\sum_{j \\in N} W_{ij} = 1$ and $\\sum_{j \\in N} W_{ji} = 1$) and\nsymmetric, i.e. $W_{ij} = W_{ji}$~\\cite{lian2017d-psgd}.\nGiven a network topology $G=(N,E)$, we generate a valid $W$ by computing\nstandard\nMetropolis-Hasting weights~\\cite{xiao2004fast}:\n\\begin{equation}\n W_{ij} = \\begin{cases}\n \\frac{1}{\\max(\\text{degree}(i), \\text{degree}(j)) + 1} & \\text{if}~i \\neq\n j \\text{ and } \\{i,j\\}\\in E,\\\\\n 1 - \\sum_{j \\neq i} W_{ij} & \\text{if } i = j, \\\\\n 0 & \\text{otherwise}.\n \\end{cases}\n \\label{eq:metro}\n\\end{equation}\n\n\\begin{algorithm}[t]\n \\caption{D-SGD, Node $i$}\n \\label{Algorithm:D-PSGD}\n \\begin{algorithmic}[1]\n \\STATE \\textbf{Require:} initial model $\\theta_i^{(0)}$,\n learning rate $\\gamma$, mixing weights $W$, mini-batch size $m$,\n number of steps $K$\n \\FOR{$k = 1,\\ldots, K$}\n \\STATE $S_i^{(k)} \\gets \\text{mini-batch of $m$ samples drawn\n from~} D_i$\n \\STATE $\\theta_i^{(k-\\frac{1}{2})} \\gets \\theta_i^{(k-1)} - \\gamma\n \\nabla F(\\theta_i^{(k-1)}; S_i^{(k)})$ \n \\STATE $\\theta_i^{(k)} \\gets \\sum_{j \\in N} W_{ji}^{(k)} \\theta_j^{(k-\\frac{1}{2})}$\n \\ENDFOR\n \\end{algorithmic}\n\\end{algorithm}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction \\label{Introduction}}\n\nThe processes of interaction of charged particles with matter, in particular, crystalline\nsolids, have been long studied both experimentally and theoretically. \nThe goal of these studies is to determine such characteristics of the interaction as the mean free path traveled\nby particles in the material, their energy losses, emission spectra, and others.\n\nChanneling in crystals, when charged particles falling into a potential channel shaped by electrostatic forces propagate along\ncrystallographic planes or axes, has become the focus of much attention in recent years.\nThe particles trapped in a channel of a straight crystal can travel long distances exceeding the mean free path in an \namorphous target, since such particles lose considerably less energy along their path \\cite{Lindhard_KDan_v34_p1_1965}. \nFor electrons, the channel lies along atomic rows or ion chains of the crystal, while for positrons it lies in the space between\natomic rows. \nThe stability of particle motion along the channels depends on the energy of the transverse motion that is low compared\nwith the height of the potential barrier.\n\nA particle trapped in the channel experiences oscillations in a plane transverse\nto the direction of the particle's propagation, inducing radiation during its channeling \\cite{ChRad:Kumakhov1976}.\nThis radiation is determined by the transverse energy of the channeled particle, and its intensity\nvaries depending on the type of crystal and its orientation. 
\nOscillatory radiation is incoherent and has a broad energy spectrum\n\\cite{ChRad:AndersenEtAl1983,BakEtAl_NPB_v254_p491_1985,\nBakEtAl_NPB_v525_p302_1988,BazylevZhevago:Uspekhi-v28-p565-1982,KumakhovKomarov-AIP}.\n\nChanneling can also occur in bent crystals, which are often used to bend charged particle\nbeams accelerated to relativistic energies \\cite{Tsyganov_TM-682_1976}. \nThe motion of a particle consists of two components: its oscillatory motion in the\nchannel and its propagation along the centerline of the bent channel. \nThe stability of the second component of motion in such a bent channel is provided by an additional condition, namely,\nthat the bending radius $R$ should significantly exceed the critical value $R_c$ determined by the\nenergy of the particle \\cite{Tsyganov_TM-682_1976}. \nThis motion of a relativistic particle trapped in a bent channel induces additional synchrotron radiation. \nThe intensity and frequency of synchrotron radiation depend on the type and energy of the channeled particles, as well as on\nthe characteristics of the crystal \\cite{KaplinVorobev1978,Bashmakov1981,TaratinVorobiev1989,ArutyunovEtAl_NP_1991,\nTaratin_PhysPartNucl_v29_p1063_1998-English,KSG1998,KSG_review_1999,ChannelingBook2014}.\n\nUndulator radiation is certainly an interesting subject to explore in connection with the concept of the crystal undulator (see,\nfor example, Ref. \\cite{ChannelingBook2014} and references therein). \nChanneling of charged relativistic particles in a periodically bent crystal (a crystal undulator)\ncan produce a new source of monochromatic radiation with energies ranging from hundreds\nof keV to several MeV.\n\nThere has been a number of experiments in the recent years with a view to create the crystal\nundulator, measuring the channeling parameters and the characteristics of the emission spectra\nof ultrarelativistic positrons \\cite{BaranovEtAl_CU_2006,Backe_EtAl_NuovoCimC_v34_p175_2011,Backe_EtAl_2008}\nand electrons \\cite{Backe_EtAl_2011,Backe_EtAl_2013} \nin straight and bent crystals of silicon and diamond. \nTheoretical studies on channeling in these crystals are carried out using the newly developed MBN Explorer\npackage \\cite{MBN_Explorer_Paper,MBN_Explorer_Site}. \nSimulations for amorphous and crystalline silicon have verified that this package is applicable for describing \nthe channeling of electrons and positrons \n\\cite{MBN_ChannelingPaper_2013,Sub_GeV_2013,PolozkovEtAl:NTV_v1_p212_2015,Korol_EtAl_NIMB_v387_p41_2016}. \n\nSince experiments are currently being carried out to measure the emission spectra of electrons in a periodically bent \ndiamond crystal \\cite{BadEms_p58}, theoretical interpretation of the experimental results is clearly an interesting problem.\n\nIn view of the above, the goal of this study is theoretical analysis of channeling of ultrarelativistic \nelectrons and positrons with an energy of 270 MeV both in a straight diamond crystal oriented along the (110) \ncrystallographic plane and in a periodically bent diamond crystal.\n\nWe have performed simulations of electron and positron channeling in straight, bent and\nperiodically bent channels using the versatile MBN Explorer software package.\n\n\n\\section{Simulation procedure with the MBN Explorer package \\label{Procedure}}\n\nThree-dimensional simulation of ultrarelativistic particles passing through a\ncrystalline medium is carried out using a molecular dynamics algorithm implemented in\nthe MBN Explorer software package \\cite{MBN_ChannelingPaper_2013}. 
\nThe characteristics of the motion of high-energy particles inside the crystal were obtained by\nintegrating the relativistic equations of motion.\nStep-by-step dynamic simulation of the crystalline medium was performed to construct\nthe particle trajectory.\n\nA quasi-classical approximation is applicable to describing the motion of ultrarelativistic\nparticles, and, since the quantum corrections are small, it is sufficient to use the equations of\nclassical relativistic mechanics:\n\\begin{eqnarray}\n\\dot{\\bfp} = q \\bfE(\\bfr)\n\\label{eq.01}\n\\end{eqnarray}\nHere $\\bfE(\\bfr)$ is the external electrostatic field, \n$q$ is the particle charge, and $\\bfp$ is its relativistic\nmomentum $\\bfp = m\\gamma \\bfv$,\nwhere $m$ and $v$ are the mass and velocity of the particle, respectively,\n$\\gamma = \\left(1 -v^2\/c^2\\right)^{-1\/2} \\gg1 $ is\nthe relativistic factor ($c$ is the speed of light).\n\nInitial values of the coordinates and velocity of the particle\nare used to integrate Eq. (\\ref{eq.01}).\n\nIn the MBN Explorer channeling module, the force $q \\bfE(\\bfr)$ is calculated as the gradient of \nthe electrostatic potential $U(\\bfr)$ equal to the sum of atomic potentials $U_{\\rm at}$:\n\\begin{eqnarray}\nU(\\bfr) = \\sum_{j} U_{\\rm at}(\\bfrho)\n\\label{eq.02}\n\\end{eqnarray}\nwhere $\\bfrho_j = \\bfr - \\bfR_j$ with $\\bfR_j$ standing for the position vector of a $j$th atom.\n\nFormally, the sum in (\\ref{eq.02}) accounts for all crystal atoms. \nHowever, given a rapid decrease of $U_{\\rm at}(\\bfrho)$ with distance, one can introduce the maximum \ndistance $\\rho_{\\max}$, beyond which the contribution of the atomic potential is negligible. \nTherefore, for a given observation point $\\bfr$, the sum can be limited to the atoms located inside a \nsphere with the radius $\\rho_{\\max}$. \nThe linked cell algorithm implemented in the MBN Explorer is used to search for such atoms. \nThis algorithm involves dividing the crystal into cells and considering only\nthe atoms closest to the particle. \nThe described scheme is used to calculate the force $q \\bfE(\\bfr)$ acting on the projectile \nat each step of integration.\n\nThe motion of particles along a crystallographic plane with the Miller indices\n$(k l m)$ is simulated by the following procedure \\cite{ChannelingBook2014,MBN_ChannelingPaper_2013}. \nA simulation box with the dimensions $L_x\\times L_y \\times L_z$ \nis introduced, containing a crystal lattice. \nThe $z$ axis is oriented along the incident beam and is parallel to the\n$(k l m)$ plane, the $y$ axis is perpendicular to this plane. 
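To illustrate this propagation scheme, the following simplified Python sketch advances a projectile by one explicit time step, using the relativistic relation $\bfp = m\gamma \bfv$ and a force obtained by summing the gradients of the atomic potentials of the atoms lying within the cutoff distance $\rho_{\max}$. It is a schematic outline under simplifying assumptions (a generic user-supplied potential gradient, a plain Euler step, the charge factor absorbed into the potential) and not the actual integrator implemented in MBN Explorer.
\begin{verbatim}
import numpy as np

def force(r, atom_positions, grad_U_at, rho_max):
    # Sum the contributions -grad U_at(r - R_j) of atoms within rho_max;
    # grad_U_at is a user-supplied gradient of the atomic potential.
    f = np.zeros(3)
    for R_j in atom_positions:
        rho = r - R_j
        if np.linalg.norm(rho) < rho_max:
            f -= grad_U_at(rho)
    return f

def euler_step(r, p, dt, mass, c, atom_positions, grad_U_at, rho_max):
    # One explicit step of dp/dt = F(r), with v = p / (m * gamma)
    # and gamma = sqrt(1 + |p|^2 / (m c)^2).
    gamma = np.sqrt(1.0 + np.dot(p, p) / (mass * c) ** 2)
    v = p / (mass * gamma)
    return r + v * dt, p + force(r, atom_positions, grad_U_at, rho_max) * dt
\end{verbatim}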
\nThe position vectors of the lattice sites are generated in accordance with the type of the\nBravais cell of the crystal, using predefined values of the translation vectors.\n\nOnce the nodes inside the simulation box are determined, the position vectors of the\natomic nuclei are generated taking into account the thermal vibrations of these nuclei,\nwhich result in random displacements from the nodal positions; these displacements are drawn\nfrom the normal distribution with the root-mean-square amplitude of thermal vibrations \\cite{Gemmel}.\n\nIntegration of the equations of motion begins at the instant $t = 0$, when the particle enters the crystal \nat $z = 0$.\nThe initial transverse coordinates $x_0$ and $y_0$ are chosen with a random number generator.\nFor a beam with zero emittance, the initial velocity $\\bfv_0$ is oriented along the $z$ axis.\nParticle propagation through a crystal with a finite thickness $L$ is simulated in MBN Explorer using \nthe so-called dynamic simulation box \\cite{ChannelingBook2014,MBN_ChannelingPaper_2013} as a new type of boundary conditions. \nA particle moving inside the box interacts with atoms lying inside the cutoff sphere.\nTo optimize the numerical procedure, the dimensions of the box are chosen to be\n3 to 5 times larger than $\\rho_{\\max}$.\nAt the instant when the distance $l$ from the particle to the nearest face of the box becomes close to $\\rho_{\\max}$, \na new simulation box of the same size is generated, with its geometric center approximately coinciding with the position of\nthe particle. \nThe atoms located at the intersection of the old and the new simulation boxes are left intact.\nThe positions of the atoms in the rest of the new box are generated anew.\nSimulation is interrupted when the $z$ coordinate of the particle\nbecomes equal to the crystal thickness $L$.\n\n\\section{Simulation of electron and positron trajectories \\label{Trajectories}}\n\nThe MBN Explorer package was used to simulate the trajectories of 270 MeV electrons\nand positrons incident on diamond crystals along the (110) crystallographic planes. \nThe calculations were performed for a straight crystal and for a crystal with periodic cosine-like bending. \nIn both cases the crystal length was set to $L=20$ $\\mu$m. \nThe periodic bending was taken with the amplitude $a=2.5$ \\AA{} and \nperiod $\\lambda_{\\rm u}=5$ $\\mu$m. \nEach set of calculations included the simulation of 6000 trajectories of projectiles, \nwhich were analyzed further to calculate the channeling parameters and the radiation emission.\n\nAn ordinary diamond crystal has straight channels due to the periodic arrangement of its atoms. \nThe width of the channel is determined by the interplanar distance, which is $d = 1.26$ \\AA{}. \nParticles trapped in straight channels with a low transverse energy rarely leave such\nchannels. \nSince the crystal is short enough, positrons most often move through the entire\nstraight crystal while staying in the channel, whereas electrons are more likely to collide with\nlattice atoms and leave the channel. \nThis is because positrons move between the crystal atoms, where they are confined by the repulsive\ninteraction with the lattice ions. 
\nOn the other hand, electrons move along helical trajectories in the immediate vicinity of the nuclei, so they\nare much more likely to collide with them and escape the channel.\n\n\\begin{figure} [h]\n\\centering\n\\includegraphics[width=7.7cm,clip]{Figure1a_v02.eps}\n\\includegraphics[width=7.7cm,clip]{Figure1b_v02.eps}\n\\caption{\nRepresentative trajectories of electrons (left) and positrons (right) with energies of 270 MeV\nin a periodically bent 20 $\\mu$m thick oriented diamond(110) crystal. \nChanneling (curves 1), dechanneling (2) and rechanneling (3) modes are indicated.\n}\n\\label{Figure1.fig}\n\\end{figure}\n\nThe trajectories of charged particles channeled in bent crystals become more complex and diverse. \nAs an example, Fig. \\ref{Figure1.fig} shows several typical trajectories of electrons (left panel) \nand positrons (right panel) in periodically bent diamond. \nThin solid lines in the figure indicate the boundaries of the channels; the distance\n$y$ is plotted along the vertical axis in a plane perpendicular to the direction of motion\n(the distance is measured in units of the interatomic spacing $d$). \nThe main features and characteristics of particle motion in a crystal, such as the channeling, \ndechanneling, and rechanneling modes, are shown in the figures. \nRechanneling is a process when a particle moving outside a channel can experience a\ncollision and get trapped into some channel as a result.\n\nFigure \\ref{Figure1.fig} left presents the trajectory of the only electron that propagated through \na crystal staying in the same channel. \nStatistically, such trajectories are an exception, as the rest of the trajectories\npresented correspond to the more typical motion of electrons in dechanneling and\nirregular rechanneling modes in short segments of different channels.\nComparison of the trajectories shown Fig. 1 left and right, indicates that positrons channel\nmuch better than electrons, and this pattern is observed for both straight and bent crystals.\nOnly a small part of the positrons originally trapped in the channel escapes it, while most\nof them move through the entire crystal while staying in one channel.\nTherefore, the intensity of synchrotron radiation should be higher in periodically bent crystal.\n\nNotably, positrons may have different oscillation amplitudes inside the channel,\nbut transverse oscillations are practically isochronous and their period remains almost\nunchanged, which corresponds to harmonic oscillations. \nConsequently, all positrons emit energy at approximately the same wavelength,\nand their channeling radiation peak is narrower and more intense, in contrast to the maximum\nradiation intensity for electrons.\n\nStatistical analysis of the calculated trajectories allowed to obtain the main\nparameters characterizing the channeling of charged particles (given in the table).\n\nThe particle trapping coefficient $A$ (acceptance) is the ratio of the number $N_{\\rm acc}$\nof the particles trapped in the channel upon entering the crystal to the number $N_0$ of all\nincident particles: $A=N_{\\rm acc}\/N_0$.\n\nThe values given in the table refer to the acceptance for the particles falling along the $z$\naxis.\nThe remaining parameters are related to the mean distances or the times during which\nthe charged particles stay in one or several channels. \nThe channeling length $L_{\\rm ch}$ is defined as the mean total distance traveled by a particle in\nthe channeling mode throughout the crystal. 
\nThe rechanneling length $L_{\\rm rech}$ is the mean distance covered by a particle from the moment \nwhen it dechannels until the opposite event of rechanneling, i.e., capture into the channeling mode\nas a result of collisions with the crystal atoms.\nTwo more parameters are listed in the Table.\nThese are the so-called penetration lengths \\cite{ChannelingBook2014,MBN_ChannelingPaper_2013}. \nThe first one, denoted by $L_{\\rm p1}$, is the mean distance traveled by a particle, accepted into the \nchanneling mode at the entrance, until it dechannels at some point in the bulk.\nThe penetration length $L_{\\rm p2}$ is calculated as the arithmetic mean length of all channeling segments \n(initial and secondary), i.e., their total length divided by the total number of channeling segments in all simulated trajectories.\nThus, it characterizes the average distance traveled by a particle in the channeling mode.\n\n\\begin{table}\n\\caption{\nChanneling parameters of $270$ MeV positrons ($e^+$) and electrons ($e^-$) in\nstraight and periodically bent (PB) $20$ $\\mu$m thick oriented diamond(110) crystal:\nacceptance $A$, \nchanneling length $L_{\\rm ch}$,\nrechanneling length $L_{\\rm rech}$,\npenetration lengths $L_{\\rm p1}$ and $L_{\\rm p2}$\n(all in $\\mu$m). \n}\n\\footnotesize\\rm\n\\begin{tabular}{@{}rrrrrrr}\n\\br\nParameter & \\multicolumn{2}{c}{straight crystal}& \\ & \\multicolumn{2}{c}{PB crystal}\\\\ \n & $e^-$ & $e^+$ & \\ & $e^-$ & $e^+$ \\\\\n\\br \n $A$ & 0.70 & 0.96 & \\ & 0.51 & 0.89 \\\\\n$L_{\\rm ch}$ & 9.04 & 18.7 & \\ & 6.06 & 17.2 \\\\\n$L_{\\rm rech}$ & 4.18 & 6.08 & \\ & 5.98 & 7.53 \\\\\n$L_{\\rm p1}$ & 5.43 & 19.1 & \\ & 4.30 & 18.8 \\\\\n$L_{\\rm p2}$ & 4.55 & 18.0 & \\ & 3.60 & 16.4 \\\\\n\\br \n\\end{tabular}\n\\label{Table_ep-data.C}\n\\end{table}\n\nSince the crystal is rather short (20 $\\mu$m), the positrons accepted in the channeling mode travel through\nalmost the entire crystal staying in the same channel, and, thus, they have greater penetration,\nchanneling and rechanneling lengths.\nElectrons experience collisions with lattice ions at a higher rate, since their trajectories\npass in the immediate vicinity of the ions, and thus the dechanneling events are more frequent.\n\n\\section{Emission spectra of electrons and positrons}\n\nFor each projectile, the simulated dependences $\\bfr = \\bfr(t)$ and $\\bfv = \\bfv(t)$ \nallow one to calculate the spectral characteristics of the radiation emitted by the particle.\n\nThe spectral angular distribution of the radiated energy $\\d^3 E \/ (\\d\\hbar\\om \\d \\Om)$ \n($\\om$ and $\\Om$ stand for the frequency of radiation and the emission solid angle, respectively) is\ncalculated following the general formula derived within the quasi-classical approximation \\cite{Baier}:\n\\begin{eqnarray} \n\\fl\n{\\d^3 E \\over \\hbar\\d\\om\\, \\d \\Om}\n=\n\\alpha \\,\n{ q^2\\omega^2 \\over 8\\pi^2 }\n\\int\\limits_{-\\infty}^{\\infty} \\d t_1\\!\n\\int\\limits_{-\\infty}^{\\infty} \\d t_2\\,\n\\ee^{\\i \\,\\omega^{\\prime} \\left(\\psi(t_1) -\\psi(t_2)\\right)}\n\\left[\n\\left( 1+(1+u)^2 \\right)\n\\left(\n{\\bfv_1\\cdot\\bfv_2 \\over c^2} -1\n\\right)\n+{u^2 \\over \\gamma^2}\n\\right]\\,.\n\\label{eq.03} \n\\end{eqnarray}\nHere $\\alpha= e^2\/ \\hbar\\, c$ is the fine structure constant,\n$q$ is measured in units of the elementary charge, $\\bfv_{1,2} =\\bfv(t_{1,2})$, \nand\n$\\psi(t) = t - \\bfn\\cdot\\bfr(t)\/ c$, with $\\bfn$ being the unit vector in the \ndirection of radiation emission.\nOther quantities, which account for the radiative recoil, 
are as follows:\n$\\om^{\\prime} = (1+u)\\, \\om$ and $u = \\hbar \\om\/(\\E - \\hbar \\om)$.\n\nFor each individual trajectory $j$, the spectral distribution is calculated by \nnumerically integrating the values of $\\d^3 E_j \/ (\\d\\hbar\\om \\d \\Om)$ \nover the ranges $\\phi\\in[0,2\\pi]$ and $\\theta\\in[0,\\theta_0]$, where \n$\\theta_0$ is related to the detector aperture.\nThe resulting distribution is calculated by averaging $\\d^3 E_j$ \nover the ensemble of the trajectories.\n\nThe results presented below refer to the emission within the cone\n$\\theta \\leq \\theta_0 = 0.2$ mrad. \n\nFigure 2 left shows the emission spectra of electrons in the straight and periodically bent crystal. \nThe broad peak (curve 1) at $\\hbar \\om \\geq 0.4$ MeV is due to the channeling radiation \n(ChR). \nThe decrease in the intensity of this peak in a periodically bent crystal (curve 2)\nis associated with the decrease in the number of channeling electrons.\n\n\\begin{figure} [h]\n\\centering\n\\includegraphics[width=7.7cm,clip]{Figure2a.eps}\n\\includegraphics[width=7.7cm,clip]{Figure2b.eps}\n\\caption{\nEmission spectra of 270 MeV electrons (left) and positrons (right) channeled in the straight (curves 1)\nand periodically bent (curves 2) 20 $\\mu$m thick oriented diamond(110) crystal.\n}\n\\label{CLS.fig}\n\\end{figure}\n\nFigure 2 right presents the corresponding emission spectra of positrons.\nHere, the ChR maximum (curve 1) is narrower and higher because the channeling oscillations\nof positrons are much more harmonic than those of electrons and, thus, the radiation emitted is \nconcentrated in a narrower bandwidth $\\Delta \\om$.\n\nIt can be seen from Fig. 2 left and right (curves 2) that a radiation intensity peak is observed for\nchanneling in the PB crystal at a photon energy of the order of 130 keV, which is absent in\nthe straight crystal. \nThis peak appears due to the motion of channeling particles along\nthe centerline of the periodically bent channel. \nThe particle radiation frequency is related to the period of the channel bending and the \nlongitudinal energy of the charged particle. \nThis radiation, termed crystalline undulator radiation, has a narrow spectral width and bears the features of \nradiation emitted by projectiles moving in magnetic undulators.\nSince the study deals with electrons and positrons of the same energy, the position\nof the undulator peak in the emission spectra is the same. \nHowever, the radiation intensity is higher for positrons than for electrons by an order of magnitude, because positrons\nexperience harmonic oscillations and longer channeling.\n\n\\section{Conclusion}\n\nWe have numerically simulated the trajectories of ultrarelativistic charged particles\nin straight and bent diamond crystals, with electrons and positrons incident on the (110)\ncrystallographic plane, using the MBN Explorer software package \\cite{MBN_Explorer_Paper,MBN_Explorer_Site}.\nThe coordinates of the particles upon entering the crystal in the transverse plane were chosen with a random\nnumber generator.
\nStatistical processing of the obtained trajectories made it possible to determine the channeling \nparameters of electrons and positrons with an energy of 270 MeV in a $20$ $\\mu$m thick diamond crystal.\nWe have established that channeled positrons have a larger acceptance and run substantially\nlonger distances in the crystalline channel as compared to electrons.\n\nThe calculated emission spectra of electrons and positrons channeled in a periodically bent crystal \ncontain two main regions. \nThe high-energy intensity peak is associated with ChR induced by oscillatory motion of the\nparticles in the channel; the same peak was obtained under channeling in a straight crystal.\n\nA low-energy peak in the 130 keV region occurs when particles move in a periodically\nbent channel and has an undulatory nature.\nThis radiation is coherent and, even though the bent crystal has a small number of periods\n(only 4), the radiation is characterized by a noticeable intensity, which is significant for\npotential applications in lasers \\cite{KSG_review_1999,ChannelingBook2014,KSG_review2004}.\n\nThe obtained channeling parameters and the calculated emission spectra are of interest in\nview of the experiments on electron channeling in straight and bent crystals currently under way\nat the University of Mainz (Germany) \\cite{BadEms_p58}.\n\n\\ack\n\nThis work has been supported by the European Commission (the PEARL Project within the H2020-MSCA-RISE-2015 call, GA 690991).\nWe acknowledge the Supercomputing Center of Saint Petersburg Polytechnic University \n(SPbPU) for providing the opportunities to carry out large-scale simulations.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:1}\nThis paper is dedicated to the comparison \nof multivariate probability \ndistributions with respect to extreme portfolio losses. \nA new notion of stochastic ordering \nnamed \\emph{asymptotic portfolio loss order} ($\\mathrel{\\preceq_{\\mathrm{apl}}}$) is introduced. \nSpecially designed for the ordering of stochastic risk models \nwith respect to extreme portfolio losses, \nthis notion allows to compare the inherent extreme portfolio risks \nassociated with different model parameters such as correlations, \nother kinds of dependence coefficients, or diffusion parameters. \n\\par\nIn a recent paper of \\cite{Mainik\/Rueschendorf:2010} the notion of \n\\emph{extreme risk index} has been introduced in the framework of \nmultivariate regular variation. This index, denoted by $\\gamma_\\xi$, \nis a functional of the vector $\\xi$ of portfolio weights and of the \ncharacteristics of the multivariate regular variation of $X$ given \nby the tail index $\\alpha$ and the spectral measure $\\Psi$. \nIt measures the sensitivity of the portfolio loss\nto extremal \nevents and characterizes the probability distribution of extreme losses. \nIn particular, it serves to determine the optimal portfolio diversification \nwith respect to extreme losses. \nWithin the framework of multivariate regular variation \nthe notion of asymptotic portfolio loss ordering introduced in this paper \nis tightly related to model comparison in terms of \nthe extreme risk index $\\gamma_\\xi$. \nThus this paper can be seen as a supplement of the previous one, \nallowing to order multivariate risk models with respect to \ntheir extremal portfolio loss behaviour. 
\n\\par\nIn Section \\ref{sec:2} of the present paper we introduce the \nasymptotic portfolio loss order $\\mathrel{\\preceq_{\\mathrm{apl}}}$ and highlight some relationships \nto further well-known ordering notions. \nIt turns out that even strong dependence and convexity \norders do not imply the asymptotic portfolio loss order in general. \nWe present counter-examples, based on the the inversion of diversification \neffects in models with infinite loss expectations. \nAnother example of particular interest discussed here is given \nby the elliptical distributions. \nIn this model family we establish a precise criterion for the asymptotic \nportfolio loss order, which perfectly accords with the classical \nresults upon other well-known order relations. \nSection \\ref{sec:3} is devoted to multivariate regularly varying models. \nWe discuss the relationship between the asymptotic portfolio loss order \nand the comparison of the extreme risk index and characterize $\\mathrel{\\preceq_{\\mathrm{apl}}}$ \nin terms of a suitable ordering of the canonical spectral measures. \nThese findings allow to establish sufficient conditions for \n$\\mathrel{\\preceq_{\\mathrm{apl}}}$ in terms of spectral measures, which can be verified \nby analytical or numerical methods. \nIn particular, we characterize the dependence structures that yield \nthe best and the worst possible diversification effects for a multivariate \nregularly varying risk vector $X$ in $\\Rplus^{d}$ with tail index $\\alpha$.\nFor $\\alpha\\ge1$ the best case is given by the asymptotic independence and \nthe worst case is the asymptotic comonotonicity. \nThe result for $\\alpha\\le 1$ is exactly the opposite \n(cf.\\ Theorem~\\ref{thm:3.8} and Corollary~\\ref{cor:3.10}).\nRestricting $X$ to $\\Rplus^{d}$ means that $X$ represents only the losses, whereas \nthe gains are modelled separately. \nThis modelling approach is particularly suitable for applications in \ninsurance, operational risk, and credit risk.\nIf $X$ represents both losses and gains, these results remain valid if the \nextremal behaviour of the gains is weaker than that of the losses, so \nthat there is no loss-gain compensation for extremal events.\nIn Section \\ref{sec:4} we discuss the interconnections between $\\mathrel{\\preceq_{\\mathrm{apl}}}$ \nor ordered canonical spectral measures and other well-known \nnotions of stochastic ordering. \nOrdering of canonical spectral measures allows to conclude $\\mathrel{\\preceq_{\\mathrm{apl}}}$\nfrom the (directionally) \nconvex or the supermodular order. \nIt is not obvious how to obtain this implication in a general setting.\nFinally, in Section~\\ref{sec:5} we present a series of examples \nwith graphics illustrating the numerical results upon the ordering \nof spectral measures.\nThe relationship to spectral measures provides a useful numerical tool to \nestablish $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in practical applications.\n\\section{Asymptotic portfolio loss ordering}\\label{sec:2}\nTo compare stochastic risk models with respect to extreme portfolio losses, \nwe introduce the asymptotic portfolio loss order $\\mathrel{\\preceq_{\\mathrm{apl}}}$. \nThis order relation is designed\nfor the analysis of the asymptotic \ndiversification effects and the identification of models that generate \nportfolio risks with stronger extremal behaviour. \n\\par\nBefore stating the definition, some basic notation is needed. 
\nFocusing on risks, let $X$ be a \\emph{random loss vector} with values in \n$\\R^{d}$, i.e., let positive values of the components $X^{(i)}$, $i=1,\\ldots,d$,\nrepresent losses and let negative values of $X^{(i)}$ represent gains \nof some risky assets. \nFollowing the intuition of diversifying a unit capital over several assets,\nwe restrict the set of portfolios to the unit simplex in $\\R^{d}$: \n\\[\n\\Simp^d := \\cubrfl{\\xi\\in\\Rplus^{d}: \\sum_{i=1}^{d}\\xi_i=1 }\n\\ldotp\n\\]\nThe portfolio loss resulting from a random vector $X$ and the portfolio $\\xi$ \nis given by the scalar product of $\\xi$ and $X$. \nIn the sequel it will be denoted by $\\xi^{\\top} X$.\n\\begin{definition}\nLet $X$ and $Y$ be $d$-dimensional random vectors. \nThen $X$ is called smaller than $Y$ in \n\\emph{asymptotic portfolio loss order}, $X\\mathrel{\\preceq_{\\mathrm{apl}}} Y$, \nif \n\\begin{equation}\\label{eq:2.1}\n\\forall\\xi\\in\\Simp^d\n\\quad\n\\limsup_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubr{\\xi^{\\top} X> t}}{\\mathrm{P}\\cubr{\\xi^{\\top} Y\\ge t}} \\le 1\n\\ldotp\n\\end{equation}\nHere, $\\frac00$ is defined to be 1.\n\\end{definition}\n\\begin{remark}\\label{rem:2.1}\n\\begin{enumerate}[(a)]\n\\item\nAlthough designed for random vectors, $\\mathrel{\\preceq_{\\mathrm{apl}}}$ is also defined for \nrandom variables. In this case, the portfolio set has only one element, \n$\\Sigma^1=\\cubr{1}$. \n\\item\\label{item:rem:2.1.b}\nIt is obvious that $\\mathrel{\\preceq_{\\mathrm{apl}}}$ is invariant under componentwise rescaling. \nLet $vx$ denote the componentwise product of $v,x\\in \\R^{d}$: \n\\begin{equation}\\label{eq:apl.2}\nvx:= (v^{(i)} x^{(i)},\\dots,v^{(d)} x^{(d)}),\n\\end{equation}\nThen it is easy to see that $ X\\mathrel{\\preceq_{\\mathrm{apl}}} Y$ implies $vX \\mathrel{\\preceq_{\\mathrm{apl}}} vY$ for \nall $v\\in\\Rplus^{d}$.\nHence condition~\\eqref{eq:2.1} can be equivalently stated for $\\xi\\in\\Rplus^{d}$. \n\\end{enumerate}\n\\end{remark}\n\\par\nThe ordering statement $X\\mathrel{\\preceq_{\\mathrm{apl}}} Y$ means that for all portfolios \n$\\xi\\in\\Simp^d$\nthe portfolio loss $\\xi^{\\top} X$ is asymptotically smaller $\\xi^{\\top} Y$. \nThus $\\mathrel{\\preceq_{\\mathrm{apl}}}$ concerns only the extreme portfolio losses. \nIn consequence, this order relation is weaker than the (usual) \nstochastic ordering $\\mathrel{\\preceq_{\\mathrm{st}}}$ of the portfolio losses:\n\\begin{equation}\\label{eq:2.2}\n\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{st}}} \\xi^{\\top} Y \\text{ for all } \\xi\\in\\Simp^d \\text{ implies } X\\mathrel{\\preceq_{\\mathrm{apl}}} Y.\n\\end{equation}\nHere, for real random variables $U$, $V$ the \\emph{stochastic ordering} \n$U \\mathrel{\\preceq_{\\mathrm{st}}} V$ is defined by \n\\begin{equation}\\label{eq:2.3a}\n\\forall t\\in\\R\\quad \\mathrm{P}\\cubr{U>t}\\le\\mathrm{P}\\cubr{V>t}.\n\\end{equation}\n\\par\nSome related, well-known stochastic orderings \n\\citep[cf.][]{Mueller\/Stoyan:2002,Shaked\/Shanthikumar:1997} \nare collected in the following list. Remind that $f:\\R^{d}\\to \\R$ \nis called \\emph{supermodular} if\n\\begin{equation}\\label{eq:2.3b}\n\\forall x,y\\in\\R^{d}\n\\quad\nf(x\\wedge y)+f(x\\vee y)\\ge f(x)+f(y)\n\\ldotp\n\\end{equation}\n\\begin{definition}\\label{def:2.2}\nLet $X$, $Y$ be random vectors in $\\R^{d}$. 
Then $X$ is said to be smaller than $Y$ in\n\\begin{enumerate}[(a)]\n\\item \\emph{(increasing) convex order}, \n$X \\mathrel{\\preceq_{\\mathrm{cx}}} Y$ ($X\\mathrel{\\preceq_{\\mathrm{icx}}} Y$), if $\\mathrm{E} f(X) \\le \\mathrm{E} f(Y)$ for all (increasing) convex functions $f:\\R^{d}\\mapsto \\R$ such that the expectations exist; \n\\item\n\\emph{linear convex order}, $X \\mathrel{\\preceq_{\\mathrm{lcx}}} Y$, if \n$\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{cx}}} \\xi^{\\top} Y$ for all $\\xi\\in\\R^{d}$;\n\\item\n\\emph{positive linear convex order}, $X \\mathrel{\\preceq_{\\mathrm{plcx}}} Y$, \nif $\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{cx}}} \\xi^{\\top} Y$ for all $\\xi\\in\\Rplus^{d}$;\n\\item \\emph{supermodular order} $X\\mathrel{\\preceq_{\\mathrm{sm}}} Y$, if \n$\\mathrm{E} f(X) \\le \\mathrm{E} f(Y)$ for all supermodular functions $f:\\R^{d}\\to\\R$ such\nthat the expectations exist;\n\\item\n\\emph{directionally convex order}, $X\\mathrel{\\preceq_{\\mathrm{dcx}}} Y$, if \n$\\mathrm{E} f(X) \\le \\mathrm{E} f(Y)$ for all directionally convex, i.e., supermodular and componentwise convex functions \n$f:\\R^{d}\\to\\R$ such that the expectations exist.\n\\end{enumerate}\n\\end{definition}\n\\par\nThe stochastic orderings listed in Definition \\ref{def:2.2} are useful \nfor describing the risk induced by larger diffusion (convex risk) as well as \nthe risk induced by positive dependence \n(supermodular and directionally convex). \nThe following implications are known to hold generally for random \nvectors $X$, $Y$ in $\\R^{d}$: \n\\begin{enumerate}[(a)]\n\\item $(X\\mathrel{\\preceq_{\\mathrm{sm}}} Y)_{\\phantom{icx}\\kern-2ex} \\Rightarrow\n(X\\mathrel{\\preceq_{\\mathrm{dcx}}} Y)_{\\phantom{l}\\kern-.5ex} \\Rightarrow (X\\mathrel{\\preceq_{\\mathrm{plcx}}} Y)$\n\\item $(X\\mathrel{\\preceq_{\\mathrm{cx}}} Y)_{\\phantom{ism}\\kern-2ex} \\Rightarrow %\n(X\\mathrel{\\preceq_{\\mathrm{lcx}}} Y)_{\\phantom{d}\\kern-.5ex} \\Rightarrow (X\\mathrel{\\preceq_{\\mathrm{plcx}}} Y)$\n\\item $(X\\mathrel{\\preceq_{\\mathrm{icx}}} Y)_{\\phantom{sm}\\kern-2ex} \\Rightarrow %\n(X\\mathrel{\\preceq_{\\mathrm{plcx}}} Y)$\n\\end{enumerate}\n\\begin{remark}\\label{rem:apl.1}\n\\begin{enumerate}[(a)]\n\\item\\label{item:apl.1}\nIt is easy to see that the usual stochastic order $\\mathrel{\\preceq_{\\mathrm{st}}}$ implies \n$\\mathrel{\\preceq_{\\mathrm{apl}}}$ in the univariate case.\n\\item\\label{item:apl.2}\nIn spite of being strong risk comparison orders, the order relations \noutlined in Definition~\\ref{def:2.2} do not imply $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in general. \nFor instance, it is known that the comonotonic dependence structure is \nthe worst case with respect to the strong supermodular ordering $\\mathrel{\\preceq_{\\mathrm{sm}}}$,\nwhereas it is not necessarily the worst case with respect to $\\mathrel{\\preceq_{\\mathrm{apl}}}$\n(cf.\\ Examples~\\ref{ex:6} and \\ref{ex:2}). \n\\end{enumerate}\n\\end{remark}\n\\par\nThe following proposition helps to establish sufficient criteria \nfor $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in the univariate case. \nTo obtain multivariate results, \nit can be separately applied to each portfolio loss $\\xi^{\\top} X$ for \n$\\xi\\in\\Simp^d$. 
\n\\par\n\\begin{proposition}\\label{prop:2.3}\nLet $R_1$, $R_2\\ge 0$ be real random variables and let $V$ be a real random variable independent of $R_i$, $i=1,2$.\n\\begin{enumerate}[(a)]\n\\item\n\\label{item:prop2.3b} \nIf $R_1\\mathrel{\\preceq_{\\mathrm{apl}}} R_2$ and $V < K$ \nfor some constant $K$, then \n\\begin{equation}\\label{eq:2.7}\nR_1V\\mathrel{\\preceq_{\\mathrm{apl}}} R_2V\n\\ldotp\n\\end{equation}\n\\item \\label{item:prop2.3a} \nIf $R_1\\mathrel{\\preceq_{\\mathrm{st}}} R_2$, then\n\\begin{equation}\n\\label{eq:2.5}\n\\robr{R_1V}_{+} \\mathrel{\\preceq_{\\mathrm{st}}} \\robr{R_2V}_{+} \n\\quad \n\\text{and}\n\\quad\n\\robr{R_2V}_{-} \\mathrel{\\preceq_{\\mathrm{st}}} \\robr{R_1V}_{-}\n\\ldotp\n\\end{equation}\nIn addition, if $V$ and $R_i$ are integrable and $EV \\ge 0$, then \n\\begin{equation}\\label{eq:2.6}\nR_1V \\mathrel{\\preceq_{\\mathrm{icx}}} R_2V\n\\ldotp\n\\end{equation}\nMoreover, if $EV=0$, then $R_1V \\mathrel{\\preceq_{\\mathrm{cx}}} R_2V$.\n\\end{enumerate}\n\\end{proposition}\n\\par\n\\begin{myproof}\n\\par Part~(\\ref{item:prop2.3b}). \nSince $R_1V \\mathrel{\\preceq_{\\mathrm{apl}}} R_2V$ is trivial for $V \\le 0$, we \nassume that $\\mathrm{P}\\cubr{V>0}>0$. \nHence $V\\le K$ implies for all $t>0$\n\\begin{align} \n\\mathrm{P}\\cubrfl{R_1V>t} \n&= \\nonumber\n\\int_{(0,K)} \\mathrm{P}\\cubrfl{R_1>{t}\/{v}} \\mathrm{d}\\mathrm{P}^V(v) \\\\\n&=\\label{eq:2.10b}\n\\int_{(0,K)} f\\robrfl{{t}\/{v}} \\mathrm{P}\\cubrfl{R_2>{t}\/{v}} \\mathrm{d}\\mathrm{P}^V(v), \n\\end{align}\nwhere \n\\[\nf(z):=\\frac{\\mathrm{P}\\cubr{R_1>z}}{\\mathrm{P}\\cubr{R_2>z}}\n\\ldotp\n\\]\nAn obvious consequence of \\eqref{eq:2.10b} is the inequality \n\\begin{equation}\n\\mathrm{P}\\cubr{R_1V > t} \n\\le\\label{eq:2.10c}\n\\sup \\cubrfl{ f(z): z > {t}\/{K}} \\cdot \\mathrm{P}\\cubrfl{R_2V > t}\n\\end{equation}\nSince $R_1\\mathrel{\\preceq_{\\mathrm{apl}}} R_2$ is equivalent to $\\limsup_{z\\to\\infty} f(z)\\le 1$,\nwe obtain\n\\[\n\\limsup_{t\\to\\infty}\\frac{\\mathrm{P}\\cubr{R_1V>t}}{\\mathrm{P}\\cubr{R_2V>t}}\\le 1\n\\ldotp\n\\] \n\\par\nPart (\\ref{item:prop2.3a}). \nBy the well-known coupling principle for the stochastic ordering $\\mathrel{\\preceq_{\\mathrm{st}}}$\nwe may assume without loss of generality that $R_1 \\le R_2$ \npointwise on the underlying probability space.\nThis implies \n\\[\n\\mathrm{P}\\cubr{R_1 V > t} \\le \\mathrm{P}\\cubr{R_2 V > t}\n,\\quad t \\ge 0,\n\\]\nand, similarly,\n\\[\n\\mathrm{P}\\cubr{R_1 V \\le t} \\le \\mathrm{P}\\cubr{R_2 V \\le t}\n,\\quad t \\le 0\n\\ldotp\n\\]\nIn consequence we obtain~\\eqref{eq:2.5}.\n\\par\nFrom the proof of \\eqref{eq:2.5}\nit follows that the distribution functions\nof the products $R_iV$, $i=1,2$, satisfy the cut criterion of Karlin--Novikov \n(cf.\\ \\citealp{Shaked\/Shanthikumar:1994}, Theorem 2.A.17 and \n\\citealp{Mueller\/Stoyan:2002}, Theorem 1.5.17) \nHence we obtain\n\\begin{equation}\\label{eq:2.9}\nR_1V \\mathrel{\\preceq_{\\mathrm{icx}}} R_2V\n\\ldotp\n\\end{equation}\nIf $EV=0$, then $E\\sqbr{R_1V}=E\\sqbr{R_2V}$ and therefore\n\\begin{equation}\\label{eq:2.10a}\nR_1V\\mathrel{\\preceq_{\\mathrm{cx}}} R_2V\n\\ldotp\n\\end{equation}\n\\end{myproof}\n\\begin{remark}\\label{rem:2.3}\n\\begin{enumerate}[(a)]\n\\item %\nNote that\n\\eqref{eq:2.5}\nimplies (without assuming the existence of moments) \nthat $\\robr{R_2V}_{+} \\mathrel{\\preceq_{\\mathrm{decx}}} \\robr{R_1V}_{+}$ \nwhere $\\mathrel{\\preceq_{\\mathrm{decx}}}$ denotes the \\emph{decreasing convex order}. 
\nSimilarly one obtains \n$\\robr{R_2V}_{-} \\mathrel{\\preceq_{\\mathrm{icx}}} \\robr{R_1V}_{-}$\n\\item \nIf $f(t):={\\mathrm{P}\\cubr{R_1>t}}\/{\\mathrm{P}\\cubr{R_2>t}} \\le C < \\infty$ and \n$R_1\\mathrel{\\preceq_{\\mathrm{apl}}} R_2$, then $R_1V\\mathrel{\\preceq_{\\mathrm{apl}}} R_2V$. \n\\item \nA related problem is the ordering of products $RV_i$ for $R\\ge 0$ with \n$V_1$ and $V_2$ independent of $R$.\nIn the special case when $R$ is \\emph{regularly varying} with \\emph{tail index} \n$\\alpha>0$, i.e., \n\\begin{equation}\\label{eq:apl.5}\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubr{R>tx}}{\\mathrm{P}\\cubr{R>t}} \n=x^{-\\alpha}\n,\\quad \nx>0,\n\\end{equation}\nexact criteria for $\\mathrel{\\preceq_{\\mathrm{apl}}}$ can be obtained from Breiman's Theorem \n\\citep[cf.][Proposition 7.5]{Resnick:2007}. \nIf $\\mathrm{E} \\robr{V_i}_{+}^{\\alpha+\\varepsilon} <\\infty$ for $i=1,2$ and \nsome $\\varepsilon>0$, then \n\\[\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubr{RV_i>t}}{\\mathrm{P}\\cubr{R>t}} \n= \nE\\sqbrfl{\\robr{V_i}_{+}^{\\alpha}}\n\\ldotp\n\\]\nThis yields \n\\[\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubr{RV_1>t}}{\\mathrm{P}\\cubr{RV_2>t}} \n= \n\\frac\n{\\mathrm{E}\\sqbrfl{\\robr{V_1}_{+}^{\\alpha}}}\n{\\mathrm{E}\\sqbrfl{\\robr{V_2}_{+}^{\\alpha}}}\n\\ldotp\n\\]\n\\end{enumerate}\n\\end{remark}\nAn important class of stochastic models with various applications are \n\\emph{elliptical distributions}, \nwhich are natural generalizations of multivariate \nnormal distributions. \nA random vector $X\\in\\R^{d}$ is called elliptically distributed, \nif there exist $\\mu\\in\\R^{d}$ and a $d\\times d$ matrix $A$ such that \n$X$ has a representation of the form\n\\begin{equation}\\label{eq:2.11}\nX \\mathrel{\\stackrel{\\mathrm{d}}{=}} \\mu + RAU, \n\\end{equation}\nwhere $U$ is uniformly distributed on the Euclidean unit sphere $\\Sbb^d_2$, \n\\[\n\\Sbb^d_2=\\cubrfl{x\\in\\R^{d} : \\norm{x}_2 =1},\n\\] \nand $R$ is a non-negative random variable independent of $U$. \nBy definition we have \n\\begin{equation}\\label{eq:2.12}\nE\\norm{X}_2^2 <\\infty \\Leftrightarrow E R^2<\\infty,\n\\end{equation}\nand in this case \n\\begin{equation}\\label{eq:2.13}\n\\mathrm{Cov}(X)=\\mathrm{Var}(R) A A^{\\top}\n\\ldotp \n\\end{equation}\nThe matrix $C:= A A^{\\top}$ is unique except for a constant factor and \nis also called the \\emph{generalized covariance matrix} of $X$. 
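As a small numerical illustration of this construction, the following Python sketch samples from an elliptical distribution via the stochastic representation above ($U$ uniform on the Euclidean unit sphere, $R\ge 0$ an independent radial factor) and estimates a portfolio tail probability by Monte Carlo. The concrete choices of a Pareto-type radial factor and of the matrix $A$ are arbitrary and serve only as an example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_elliptical(n, mu, A, radial_sampler):
    # X = mu + R * A U with U ~ unif(S^d_2) and R >= 0 independent of U.
    d = len(mu)
    g = rng.standard_normal((n, d))
    U = g / np.linalg.norm(g, axis=1, keepdims=True)
    R = radial_sampler(n)
    return mu + R[:, None] * (U @ A.T)

alpha = 3.0                                     # illustrative tail index
radial = lambda n: rng.pareto(alpha, size=n) + 1.0
mu = np.zeros(2)
C = np.array([[1.0, 0.5], [0.5, 1.0]])          # generalized covariance C = A A^T
A = np.linalg.cholesky(C)

X = sample_elliptical(10**6, mu, A, radial)
xi = np.array([0.5, 0.5])                       # equally weighted portfolio
print((X @ xi > 10.0).mean())                   # Monte Carlo estimate of P{xi^T X > t}
\end{verbatim}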
\nWe denote the elliptical distribution constructed \naccording to~\\eqref{eq:2.11} by $\\Ecal(\\mu,C,F_R)$, \nwhere $F_R$ is the distribution of $R$.\n\\par\nA classical stochastic ordering result going back to \n\\cite{Anderson:1955} and \\cite{Fefferman\/Jodeit\/Perlman:1972} \n\\citep[cf.][p.~70]{Tong:1980} \nsays that \\emph{positive semidefinite ordering}\nof the generalized covariance matrices $C_1 \\mathrel{\\preceq_{\\mathrm{psd}}} C_2$, defined as\n\\begin{equation}\n\\label{eq:2.13a}\n\\forall \\xi\\in\\R^{d} \\quad \\xi^{\\top} C_1 \\xi \\le \\xi^{\\top} C_2 \\xi,\n\\end{equation}\nimplies symmetric convex ordering if \nthe location parameter $\\mu$ and the distribution $F_R$ of the radial \nfactor are fixed: \n\\begin{equation}\\label{eq:2.14}\n\\Ecal(\\mu, C_1,F_R) \\mathrel{\\preceq_{\\mathrm{symmcx}}} \\Ecal(\\mu, C_2, F_R)\n\\ldotp\n\\end{equation}\nIt is also known that for elliptical random vectors $X\\sim \\Ecal(\\mu,C,F_R)$ \nthe multivariate distribution function \n$F(x):=\\mathrm{P}\\cubr{X_1\\le x_1,\\dots,X_d\\le x_d}$ is increasing in $C_{i,j}$ for $i\\not=j$, where $C=(C_{i,j})$ \\citep[see, e.g.,][Theorem 2.21]{Joe:1997}.\n\\par\nThe following result is concerned \nwith the asymptotic portfolio loss ordering $\\mathrel{\\preceq_{\\mathrm{apl}}}$ for elliptical \ndistributions. \n\\begin{theorem}\\label{theo:2.4}\nLet $X\\mathrel{\\stackrel{\\mathrm{d}}{=}} \\mu_1+R_1A_1U$, $Y\\mathrel{\\stackrel{\\mathrm{d}}{=}}\\mu_2+R_2A_2U$ be elliptically distributed \nwith generalized covariances $C_i:=A_iA_i^{\\top}$. If \n\\begin{equation}\\label{eq:2.15}\n\\mu_1\\le \\mu_2, \\enskip \nR_1\\mathrel{\\preceq_{\\mathrm{apl}}} R_2,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:2.15a}\n\\forall \\xi\\in\\Simp^d \\quad \\xi^{\\top} C_1\\xi \\le \\xi^{\\top} C_2 \\xi,\n\\end{equation}\nthen\n\\begin{equation}\\label{eq:2.16}\nX\\mathrel{\\preceq_{\\mathrm{apl}}} Y.\n\\end{equation}\n\\end{theorem}\n\\par\n\\begin{myproof}\nIt suffices to show that $\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{apl}}} \\xi^{\\top} Y$ for an \narbitrary portfolio $\\xi\\in\\Simp^d$. \nFurthermore, without loss of generality we can assume $\\mu_1=\\mu_2=0$.\nFor $i=1,2$ and $\\xi\\in\\Simp^d$ denote \n\\[\na_i = a_i(\\xi):= \\robrfl{\\xi ^{\\top} C_i \\xi}^{1\/2}\n\\] \nand \n\\[\nv_i = v_i(\\xi) := \\frac{\\xi^{\\top} A_i}{a_i}\n\\ldotp\n\\]\nThen, by definition of elliptical distributions, we have\n\\begin{equation}\\label{eq:2.17}\n\\xi^{\\top} X \\mathrel{\\stackrel{\\mathrm{d}}{=}} R_1 a_1 v_1 U\n\\quad\\text{and}\\quad\n\\xi^{\\top} Y \\mathrel{\\stackrel{\\mathrm{d}}{=}} R_2 a_2 v_2 U\n\\ldotp\n\\end{equation}\nSince the vectors $v_i=v_i(\\xi)$ have unit length by construction, \nthe random variables $v_i U$ are orthogonal projections of $U\\sim \\mathrm{unif}(S_2^d)$ \non vectors of unit length. \nSymmetry arguments yield that the distribution of $v_i U$ is independent of \n$v_i$ and that $v_iU \\mathrel{\\stackrel{\\mathrm{d}}{=}} (1,0,\\ldots,0)^{\\top} U=U^{(1)}$. \n\\par\nThus we have \n\\[\n\\xi^{\\top} X \\mathrel{\\stackrel{\\mathrm{d}}{=}} a_1 R_1 V\n\\quad\\text{and}\\quad\n\\xi^{\\top} Y\\mathrel{\\stackrel{\\mathrm{d}}{=}} a_2 R_2 V\n\\] \nwith $V:=U^{(1)}$.\nBy assumption we have $a_1\\le a_2$ and $R_1\\mathrel{\\preceq_{\\mathrm{apl}}} R_2$. \nApplying Proposition \\ref{prop:2.3}(\\ref{item:prop2.3b}) \nwe obtain $\\xi^{\\top} X\\mathrel{\\preceq_{\\mathrm{apl}}} \\xi^{\\top} Y$. 
\n\\end{myproof}\n\\par\n\\begin{remark}\\label{rem:2.6}\n\\begin{enumerate}[(a)]\n\\item\\label{item:rem:2.6.a}\nIt should be noted that condition~\\eqref{eq:2.15a} is indeed weaker than \n\\eqref{eq:2.13a}. \nLet $-1 < \\rho_1 < \\rho_2 <1$ and consider covariance matrices \n\\[\nC_i:=\n\\robrfl{\n\\begin{array}{cc}\n1 & \\rho_i\\\\\n\\rho_i & 1\n\\end{array}\n}\n,\\quad \ni=1,2\n\\ldotp\n\\]\nStraightforward calculations show that $C_i$ satisfy~\\eqref{eq:2.15a}, but \nnot~\\eqref{eq:2.13a}.\n\\item\nFor subexponentially distributed \n$R_i$ the assumption $\\mu_1\\le \\mu_2$ in \\eqref{eq:2.15} can be omitted.\n\\end{enumerate}\n\\end{remark}\n\\section{Multivariate regular variation: $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in terms of spectral measures}\n\\label{sec:3}\n\\par\nThis section is concerned with the characterization of the asymptotic \nportfolio loss order $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in the framework of multivariate regular \nvariation. The results obtained here highlight the influence of the tail \nindex $\\alpha$ and the spectral measure $\\Psi$ on $\\mathrel{\\preceq_{\\mathrm{apl}}}$, \nwith primary focus put on dependence structures captured by $\\Psi$. \nIt is shown that $\\mathrel{\\preceq_{\\mathrm{apl}}}$ corresponds to a family of order relations \non the set of canonical spectral measures and that these order relations \nare intimately related to the extreme risk index $\\gamma_\\xi$ introduced \nin \\citet{Mainik\/Rueschendorf:2010} and \\citet{Mainik:2010}. \n\\par\nThe main result of this section is stated in Theorem~\\ref{theo:3.4}, \nproviding criteria for $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$ in terms of componentwise ordering \n$X^{(i)} \\mathrel{\\preceq_{\\mathrm{apl}}} Y^{(i)}$ for $i=1,\\ldots,d$ \nand ordering of canonical spectral measures. \nA particular consequence of these criteria is the \ncharacterization of the dependence structures that \nyield the best and the worst possible diversification effects for \nrandom vectors in $\\Rplus^{d}$ \n(cf.\\ Theorem~\\ref{thm:3.8} and Corollary~\\ref{cor:3.10}).\nAnother application concerns elliptical distributions. Combining \nTheorem~\\ref{theo:3.4} with results on $\\mathrel{\\preceq_{\\mathrm{apl}}}$ obtained in \nTheorem~\\ref{theo:2.4}, we obtain ordering of the corresponding \ncanonical spectral measures. \n\\par\nRecall the notions of regular variation. \nIn the univariate case it can be defined separately for the lower \nand the upper tail of a random variable via~\\eqref{eq:apl.5}. \nA random vector $X$ taking values in $\\R^{d}$ is called \n\\emph{multivariate regularly varying} with tail index $\\alpha\\in(0,\\infty)$\nif there exist a sequence $a_n\\to\\infty$ and a (non-zero) Radon measure $\\nu$ on \nthe Borel $\\sigma$-field $\\mathcal{B}\\robr{[-\\infty,\\infty]^d\\setminus\\cubr{0}}$ \nsuch that $\\nu\\robr{[-\\infty,\\infty]^d \\setminus \\R^{d}}=0$ and, \nas $n\\to\\infty$,\n\\begin{equation}\n\\label{eq:29}\n\\index{$\\nu$}\nn \\mathrm{P}^{\\,a_n^{-1} X} \\stackrel{\\mathrm{v}}\\rightarrow \\nu\n\\text{ on }\\mathcal{B}\\robr{[-\\infty,\\infty]^d\\setminus\\cubr{0}},\n\\end{equation}\nwhere $\\stackrel{\\mathrm{v}}\\rightarrow$ denotes the \n\\emph{vague convergence} of Radon measures \nand $\\mathrm{P}^{\\,a_n^{-1} X}$ is the probability distribution of\n$a_n^{-1} X$.\n\\par\nIt should be noted that random vectors with non-negative components \nyield limit measures $\\nu$ that are concentrated on \n$[0,\\infty]^d\\setminus\\cubr{0}$. 
\nTherefore multivariate regular variation in this special case can also \nbe defined by vague convergence on $\\mathcal{B}([0,\\infty]^d\\setminus\\cubr{0})$.\n\\par\nMany popular distribution models are multivariate regularly varying. \nIn particular, according to \\citet{Hult\/Lindskog:2002}, \nmultivariate regular variation of an elliptical distribution \n$\\Ecal\\robr{\\mu,C,F_R}$ is equivalent to the regular variation of the \nradial factor $R$ and the tail index $\\alpha$ is inherited from $R$.\nOther popular examples are obtained by endowing regularly varying margins \n$X^{(i)}$ with an appropriate copula \n\\citet[cf.][]{Wuethrich:2003, Alink\/Loewe\/Wuethrich:2004, Barbe\/Fougeres\/Genest:2006}\n\\par\nFor a full account of technical details related to the notion of \nmultivariate regular variation, vague convergence, and \nthe Borel $\\sigma$-fields on the punctured spaces \n$[-\\infty,\\infty]^d\\setminus\\cubr{0}$ and $[0,\\infty]^d\\setminus\\cubr{0}$ \nthe reader is referred to \\citet{Resnick:2007}.\n\\par\nIt is well known that the limit measure $\\nu$ obtained in~\\eqref{eq:29}\nis unique except for a constant factor, has a singularity in the origin\nin the sense that \n$\\nu\\robr{(-\\varepsilon,\\varepsilon)^d}=\\infty$ for any $\\varepsilon>0$, \nand exhibits the scaling property \n\\begin{equation}\n\\label{eq:30}\n\\nu(tA)=t^{-\\alpha}\\nu(A)\n\\end{equation} \nfor all sets $A\\in\\mathcal{B}\\robrfl{[-\\infty,\\infty]^d\\setminus\\cubr{0}}$ that\nare bounded away from $0$. \n\\par\nIt is also well known that~\\eqref{eq:29} implies that the random variable \n$\\norm{X}$ with an arbitrary norm $\\norm{\\cdot}$ on $\\R^{d}$ is \nunivariate regularly varying with tail index $\\alpha$.\nMoreover,\nthe sequence $a_n$ can always be chosen as \n\\begin{equation}\n\\label{eq:181}\na_n:=F_{\\norm{X}}^{\\leftarrow}(1-1\/n),\n\\end{equation}\nwhere $F_{\\norm{X}}^{\\leftarrow}$ is the quantile function of\n$\\norm{X}$. The resulting limit measure $\\nu$ \nis normalized on the set $A_{\\norm{\\cdot}}:=\\cubr{x\\in\\R^{d}: \\norm{x}>1}$ by \n\\begin{equation}\n\\label{eq:182}\n\\nu\\robrfl{A_{\\norm{\\cdot}}}=1\n\\ldotp\n\\end{equation}\n\\par\nThus, after normalizing $\\nu$ by~\\eqref{eq:182}, \nthe scaling relation~\\eqref{eq:30} yields an equivalent rewriting of \nthe multivariate regular variation condition~\\eqref{eq:29} \nin terms of weak convergence:\n\\begin{equation}\n\\label{eq:34}\n\\mathcal{L}\\cubrfl{t^{-1} X\\,|\\,\\norm{X}>t}\n\\stackrel{\\mathrm{w}}{\\rightarrow}\n\\nu|_{A_{\\norm{\\cdot}}}\n\\text{ on } \n\\mathcal{B}\\robrfl{A_{\\norm{\\cdot}}}\n\\end{equation}\nfor $t\\to\\infty$,\nwhere $\\nu|_{A_{\\norm{\\cdot}}}$ is the restriction of $\\nu$ to the set $A_{\\norm{\\cdot}}$. \n\\par\nAdditionally to~\\eqref{eq:29}\nit is assumed that the limit measure $\\nu$ is \nnon-degen\\-erate\nin the \nfollowing sense:\n\\begin{equation}\n\\label{eq:4}\n\\nu\\robrfl{\\cubrfl{x\\in\\R^{d}: \\absfl{x^{(i)}}> 1}} >0\n,\\quad i=1,\\ldots,d\n\\ldotp\n\\end{equation}\nThis assumption ensures that\nall asset losses $X^{(i)}$ are relevant for the extremes of the portfolio loss \n$\\xi^{\\top} X$. 
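In empirical work, the weak-convergence form stated above suggests a simple diagnostic: fix a high threshold $t$, keep the observations whose norm exceeds $t$, and study the rescaled exceedances $t^{-1}X$. A minimal sketch of this step is given below; choosing $t$ as an empirical quantile of the norms (here with the $\norm{\cdot}_1$-norm) is only one common convention and is not prescribed by the theory.
\begin{verbatim}
import numpy as np

def rescaled_exceedances(X, q=0.99):
    # Rescaled exceedances t^{-1} X on the event {||X||_1 > t}, with t the
    # empirical q-quantile of ||X||_1; for multivariate regularly varying X
    # their empirical law approximates the (normalized) limit measure
    # restricted to {x : ||x||_1 > 1}.
    norms = np.abs(X).sum(axis=1)
    t = np.quantile(norms, q)
    return X[norms > t] / t
\end{verbatim}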
If~\\eqref{eq:4} is satisfied in the upper tail region, i.e., if \n\\begin{equation}\n\\label{eq:4a}\n\\nu\\robrfl{\\cubrfl{x\\in\\R^{d}: x^{(i)}> 1}} >0 \n,\\quad i=1,\\ldots,d,\n\\end{equation} \nthen $\\nu$ also characterizes the asymptotic distribution\nof the componentwise maxima\n$M_n:=\\robr{M^{(1)},\\ldots,M^{(d)}}$ with \n$M^{(i)}:=\\max\\cubr{X_1^{(i)},\\ldots,X_n^{(i)}}$\nby the limit relation \n\\begin{equation}\n\\label{eq:164}\n\\mathrm{P}\\cubrfl{a_n^{-1} M_n\\in[-\\infty,x]} \\stackrel{\\mathrm{w}}{\\rightarrow}\n\\exp\\robrfl{-\\nu\\robrfl{[-\\infty,\\infty]^d\\setminus[-\\infty,x]}}\n\\end{equation}\nfor $x\\in(0,\\infty]^d$. \nTherefore $\\nu$ is called \n\\emph{exponent measure}. \nFor more details concerning the asymptotic distributions of maxima\nthe reader is referred to~\\citet{Resnick:1987} \nand \\citet{de_Haan\/Ferreira:2006}.\n\\par\nAnother consequence of the scaling property~\\eqref{eq:30} is the \nproduct representation of $\\nu$ in polar coordinates \n\\[\n(r,s):=\\tau(x):=(\\norm{x},\\norm{x}^{-1} x)\n\\] \nwith respect to an arbitrary norm $\\norm{\\cdot}$ on $\\R^{d}$.\nThe induced \nmeasure $\\nu^\\tau:=\\nu\\circ\\tau^{-1}$ necessarily satisfies\n\\begin{equation}\n\\label{eq:28}\n\\nu^\\tau=c\\cdot\\rho_\\alpha\\otimes\\Psi\n\\end{equation}\nwith the constant factor \n\\[\nc=\\nu\\robrfl{A_{\\norm{\\cdot}}}\n>0,\n\\] \nthe measure $\\rho_\\alpha$ on $(0,\\infty]$ defined by \n\\begin{equation}\n\\label{eq:176}\n\\rho_\\alpha((x,\\infty]):=x^{-\\alpha},\n\\quad \nx\\in(0,\\infty],\n\\end{equation} \nand a probability measure $\\Psi$ on the unit sphere $\\Sbb^d_{\\norm{\\cdot}}$\nwith respect to $\\norm{\\cdot}$,\n\\[ \n\\Sbb^d_{\\norm{\\cdot}}:=\\cubrfl{s\\in\\R^{d} : \\norm{s} = 1}\n\\ldotp\n\\]\nThe measure $\\Psi$ is called \n\\emph{spectral measure} \nof $\\nu$ or $X$.\nSince the term \\enquote{spectral measure} is already used in other areas, \n$\\Psi$ is also referred to as \n\\emph{angular measure}.\nIn the special case of $\\Rplus^{d}$-valued random vectors $X$ it\nmay be convenient to reduce the domain of $\\Psi$ to \n$\\Sbb^d_{\\norm{\\cdot}}\\cap\\Rplus^{d}$. \n\\par\nAlthough the domain of the spectral measure $\\Psi$ depends on the\nnorm $\\norm{\\cdot}$ underlying the polar coordinates, the \nrepresentation~\\eqref{eq:28} is norm-independent in the following sense:\nif~\\eqref{eq:28} holds for some norm $\\norm{\\cdot}$, then it also holds for \nany other norm $\\norm{\\cdot}_\\diamond$ that is equivalent to $\\norm{\\cdot}$. 
\nThe tail index $\\alpha$ is the same and the spectral measure $\\Psi_\\diamond$ \non the unit sphere $\\Sbb^d_\\diamond$ corresponding to $\\norm{\\cdot}_\\diamond$ \nis obtained from $\\Psi$ by the following transformation:\n\\[\n\\Psi_\\diamond=\\Psi^T,\\quad T(s):=\\norm{s}_\\diamond^{-1} s\n\\ldotp\n\\]\n\\par\nFinally, it should be noted that multivariate regular variation of \nthe loss vector $X$ is intimately related with the univariate regular variation \nof portfolio losses $\\xi^{\\top} X$.\nAs shown in \\citet{Basrak\/Mikosch\/Davis:2002}, \nmultivariate regular variation of $X$ \nimplies existence of a portfolio vector $\\xi_0\\in\\R^{d}$ such that $\\xi_0 ^{\\top} X$ \nis regularly varying with tail index $\\alpha$ and any \nportfolio loss $\\xi^{\\top} X$ satisfies\n\\begin{equation}\n\\label{eq:192}\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\xi_0^{\\top} X >t}} \n=c(\\xi,\\xi_0)\n\\in [0,\\infty)\n\\ldotp\n\\end{equation}\nThis means that all portfolio losses $\\xi^{\\top} X$ are either regularly\nvarying with tail index $\\alpha$ or asymptotically negligible \ncompared to $\\xi_0^{\\top} X$. \n\\par\nMoreover, it is also worth a remark \nthat for $\\Rplus^{d}$-valued random vectors $X$ \nthe converse implication is true in the sense that~\\eqref{eq:192} \nand univariate regular variation of $\\xi_0^{\\top} X$ \nimply multivariate regular variation of the random vector $X$.\nThis sort of Cram\\'er-Wold theorem was established in \n\\citet{Basrak\/Mikosch\/Davis:2002} and \\citet{Boman\/Lindskog:2009}.\n\\par\nUnder the assumption of multivariate regular variation of $X$ \nthe \\emph{extreme risk index} $\\gamma_\\xi = \\gamma_\\xi(X)$ \nis defined as \n\\begin{equation}\\label{eq:3.2}\n\\gamma_\\xi(X)=\\lim_{t\\to\\infty} \\frac{\\mathrm{P}\\cubr{\\xi^{\\top} X>t}}{\\mathrm{P}\\cubr{\\norm{X}_1>t}}.\n\\end{equation}\nIn \\citet{Mainik\/Rueschendorf:2010} the random vector $X$ is restricted to \n$\\Rplus^{d}$ and the portfolio vector $\\xi$ is restricted to $\\Simp^d$. \nThe general case with $X$ in $\\R^{d}$ and possible negative portfolio \nweights, i.e., short positions, is considered in \\citet{Mainik:2010}. \nNormalizing the exponent measure $\\nu$ by~\\eqref{eq:182},\none obtains \n\\begin{equation}\\label{eq:3.1}\n\\gamma_\\xi(X)=\\nu\\robrfl{\\cubrfl{x\\in\\R^{d}: \\xi^{\\top} x > 1}}\n\\ldotp\n\\end{equation}\nRewriting this representation in terms of the spectral measure $\\Psi$ \nand the tail index $\\alpha$ yields\n\\begin{equation}\\label{eq:apl.1}\n\\gamma_\\xi\n=\n\\int_{\\Sbb^d_1}\\robrfl{\\xi^{\\top} s}_{+}^{\\alpha} \\mathrm{d} \\Psi(s)\n\\ldotp\n\\end{equation}\nDenoting the integrand by $f_{\\xi,\\alpha}$, we will write this representation \nas $\\gamma_\\xi=\\Psif_{\\xi,\\alpha}$. \n\\par\nThe extreme risk index $\\gamma_\\xi(X)$ allows to compare the risk of different \nportfolios. It is easy to see that \\eqref{eq:3.2} implies\n\\begin{equation}\\label{eq:3.4}\n\\lim_{t\\to\\infty} \\frac{\\mathrm{P}\\cubr{\\xi_1^{\\top} X>t}}{\\mathrm{P}\\cubr{\\xi_2^{\\top} X>t}} = \\frac{\\gamma_{\\xi_1}(X)}{\\gamma_{\\xi_2}(X)}.\n\\end{equation}\nThus, by construction, \nordering of the extreme risk index $\\gamma_\\xi$ is related to the \nasymptotic portfolio loss order $\\mathrel{\\preceq_{\\mathrm{apl}}}$. \n\\par\nHowever, designed for the comparison of different portfolio risks within one \nmodel, the extreme risk index $\\gamma_\\xi$ cannot be directly applied \nto the comparison of different models. 
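\par
To illustrate this representation, consider a two-dimensional example with the norm $\norm{\cdot}_1$, the normalization $\nu\cubrfl{x\in\Rplus^2 : \norm{x}_1>1}=1$ (so that $\Psi$ is a probability measure), and $\xi\in\Simp^2$. The two spectral measures below are the standard ones associated with asymptotic independence and asymptotic comonotonicity of the components (for identically standardized margins); the computation is meant only as an illustration of the integral representation of $\gamma_\xi$ above. Asymptotic independence corresponds to
\[
\Psi_{\mathrm{ind}} = \tfrac12\,(\delta_{e_1}+\delta_{e_2}),
\qquad
\gamma_\xi = \tfrac12\left((\xi^{(1)})^{\alpha}+(\xi^{(2)})^{\alpha}\right),
\]
whereas asymptotic comonotonicity corresponds to
\[
\Psi_{\mathrm{com}} = \delta_{(1/2,\,1/2)},
\qquad
\gamma_\xi = \left(\tfrac{\xi^{(1)}+\xi^{(2)}}{2}\right)^{\alpha} = 2^{-\alpha}
\ldotp
\]
Comparing the equally weighted portfolio with a single-asset portfolio within each model gives $\gamma_{(1/2,1/2)}/\gamma_{e_1}=2^{1-\alpha}$ under $\Psi_{\mathrm{ind}}$ and $\gamma_{(1/2,1/2)}/\gamma_{e_1}=1$ under $\Psi_{\mathrm{com}}$. Thus diversification reduces the extreme portfolio risk under asymptotic independence for $\alpha>1$ and increases it for $\alpha<1$, while it has no effect under comonotonicity, in accordance with the best and worst case results announced in the introduction. Note that the values of $\gamma_\xi$ are comparable across the two models only after an appropriate marginal standardization; this is precisely the issue addressed in the remainder of this section.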
\nThe major problem is the standardization by $\\mathrm{P}\\cubr{\\norm{X}_1>t}$ in \n\\eqref{eq:3.2}. Indeed, since $\\mathrm{P}\\cubr{\\norm{X}_1>t}$ also depends on the \nspectral measure $\\Psi_X$ of $X$, criteria for $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in terms of \n$\\gamma_\\xi$ demand the specification of the limit\n\\[\n\\lim_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubr{\\norm{X}_1>t}}{\\mathrm{P}\\cubr{\\norm{Y}_1>t}}\n\\ldotp\n\\]\n\\par\nAnother technical issue arises from the invariance of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ under \ncomponentwise rescalings. Since the spectral measure $\\Psi$ does not exhibit \nthis property, ordering of spectral measures needs additional normalization \nof margins that makes it consistent with $\\mathrel{\\preceq_{\\mathrm{apl}}}$. To solve these problems,\n we use an alternative representation of $\\gamma_\\xi$ in terms of the \nso-called canonical spectral measure $\\Psi^\\ast$, \nwhich has standardized marginal weights.\n\\par\nThis representation is closely related to the asymptotic risk aggregation \ncoefficient discussed by \\cite{Barbe\/Fougeres\/Genest:2006}. \nFurthermore, the link between the canonical spectral measure and \nextreme value copulas \nallows to transfer ordering results for copulas into the $\\mathrel{\\preceq_{\\mathrm{apl}}}$ \nsetting. These results are presented in Section~\\ref{sec:4}.\n\\par\nTo reduce the problem to the essentials, \nwe start with the observation that $\\mathrel{\\preceq_{\\mathrm{apl}}}$ is trivial for \nmultivariate regularly varying random vectors with different \ntail indices and non-degenerate portfolio losses. \n\\par\n\\begin{proposition}\\label{prop:3.1}\nLet $X$ and $Y$ be multivariate regularly varying on $\\R^{d}$ and assume that $\\gamma_\\xi(Y)>0$ for all $\\xi\\in\\Simp^d$. \n\\begin{enumerate}[(a)]\\label{item:prop.3.1a}\n\\item If\n\\begin{equation}\\label{eq:3.5}\n\\lim_{t\\to\\infty} \\frac{\\mathrm{P}\\cubr{\\norm{X}_1>t}}{\\mathrm{P}\\cubr{\\norm{Y}_1>t}} = 0,\n\\end{equation}\nthen $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\vspace{0.5em\n\\item If $\\alpha_X>\\alpha_Y$, then $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\end{enumerate}\n\\end{proposition}\n\\par\n\\begin{myproof}\n\\begin{enumerate}[(a)]\n\\item %\nUsing relation \\eqref{eq:3.2} we obtain \n\\begin{align*}\n\\hspace{2em}&\\hspace{-2em}\n\\limsup_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y > t}}\\\\\n&=\n\\limsup_{t\\to\\infty}\n\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t}}{\\mathrm{P}\\cubrfl{\\norm{X}_1>t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\norm{Y}_1>t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y > t}} \n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\norm{X}_1 > t}}{\\mathrm{P}\\cubrfl{\\norm{Y}_1>t}}\n}\\\\\n&=\n\\frac{\\gamma_\\xi(X)}{\\gamma_\\xi(Y)}\n\\cdot \n\\limsup_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\norm{X}_1 > t}}{\\mathrm{P}\\cubrfl{\\norm{Y}_1>t}}\\\\\n&=0\n\\ldotp\n\\end{align*}\n\\item %\nRecall that multivariate regular variation of $X$ implies regular variation of $\\norm{X}_1$ with tail index $\\alpha_X$. Analogously, $\\norm{Y}_1$ is regularly varying with tail index $\\alpha_Y$. 
Finally, $\\alpha_X > \\alpha_Y$ yields \\eqref{eq:3.5} and by~(\\ref{item:prop.3.1a}) we obtain $X\\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\qed\n\\end{enumerate}\n\\end{myproof}\n\\par\nThus the primary setting for studying the influence of dependence \nstructures on the ordering of extreme portfolio losses is the case of \nrandom variables $X$ and $Y$ with equal tail indices:\n\\[\n\\alpha_X=\\alpha_Y=:\\alpha\n\\ldotp\n\\] \nIn the framework of multivariate regular variation, asymptotic dependence in the tail region \nis characterized by the spectral measure $\\Psi$ or its canonical version $\\Psi^\\ast$. \nThe \\emph{canonical exponent measure} $\\nu^{\\ast}$ of $X$ is obtained from the exponent \nmeasure $\\nu$ as \n\\[\n\\nu^{\\ast}=\\nu\\circ T\n\\]\nwith the transformation $T:\\R^{d}\\to\\R^{d}$ defined by\n\\begin{equation}\n\\quad\nT(x)\n:=\\label{eq:3.7}\n\\robrfl{T_\\alpha\\robrfl{\\nu\\robr{B_1}\\cdot x^{(1)}},\\ldots, T_\\alpha\\robrfl{\\nu\\robr{B_d}\\cdot x^{(d)}}},\n\\end{equation}\nwhere \n\\begin{equation} \nT_\\alpha(t)\n:=\\label{eq:3.8}\n\\robrfl{t_{+}^{1\/\\alpha} - t_{-}^{1\/\\alpha}} \\text{ and }\nB_i := \\cubrfl{x\\in\\R^{d}: \\absfl{x^{(i)}} > 1}\n\\ldotp\n\\end{equation}\nFurthermore, $\\nu^{\\ast}$ exhibits the scaling property \n\\[\n\\nu^{\\ast}(tA)=t^{-1}\\nu^{\\ast}(A),\n\\quad t>0,\n\\] \nand, analogously to~\\eqref{eq:28}, has a product structure in polar \ncoordinates:\n\\begin{equation}\\label{eq:apl.3}\n\\nu^{\\ast}\\circ\\tau^{-1} = \\rho_1 \\otimes \\Psi^\\ast,\n\\end{equation}\nThe measure $\\Psi^\\ast$ is the \\emph{canonical spectral measure} of $X$. \n\\par \nSince $\\mathrel{\\preceq_{\\mathrm{apl}}}$ and $\\Psi^\\ast$ are invariant under componentwise rescalings, \nthe canonical spectral measure $\\Psi^\\ast$ is more suitable for the \ncharacterization of $\\mathrel{\\preceq_{\\mathrm{apl}}}$. \nThe following lemma provides a representation of the extreme risk index \n$\\gamma_\\xi$ in terms of $\\Psi^\\ast$. Note that the formulation makes use of \nthe componentwise product notation~\\eqref{eq:apl.2}.\n\\par\n\\begin{proposition}\n\\label{prop:3.2}\nLet $X$ be multivariate regularly varying on $\\R^{d}$ with tail index \n$\\alpha\\in(0,\\infty)$. \nIf $X$ satisfies the non-degeneracy condition~\\eqref{eq:4}, \nthen \n\\begin{equation}\\label{eq:3.9}\n\\gamma_\\xi(X)=\\int_{\\Sbb^d_1}g_{\\xi,\\alpha}\\robrfl{v s} \\, \\mathrm{d} \\Psi^\\ast(s),\n\\end{equation}\nwhere $\\Psi^\\ast$ denotes the canonical spectral measure of $X$, \nthe rescaling vector $v=\\robr{v^{(1)},\\ldots,v^{(d)}}$ \nis defined by \n\\begin{equation}\\label{eq:3.10}\nv^{(i)}:=\\robr{\\gamma_{\\ei}(X)+\\gamma_{-\\ei}(X)},\n\\end{equation} \nand the function $g_{\\xi,\\alpha}:\\R^{d}\\to\\R$ is defined as \n\\begin{equation}\\label{eq:3.11}\ng_{\\xi,\\alpha}(x):=%\n\\robrfl{\\sum_{i=1}^d\\xi^{(i)}\\cdot\\robrfl{\\robrfl{x^{(i)}}_{+}^{1\/\\alpha} - \\robrfl{x^{(i)}}_{-}^{1\/\\alpha}}}_{+}^{\\alpha}\n\\ldotp\n\\end{equation}\n\\end{proposition}\n\\par\n\\begin{myproof}\nDenote $A_{\\xi,1}:=\\{x\\in\\R^{d}: \\xi^{\\top} x\\ge 1\\}$. 
Then, by definition of $\\nu^{\\ast}$,\n\\begin{align}\n\\gamma_\\xi(X)\n&=\\nonumber\n\\nu(A_{\\xi,1})\\\\\n&=\\nonumber\n\\nu^{\\ast}\\robr{T^{-1}(A_{\\xi,1})}\\\\\n&=\\nonumber\n\\nu^{\\ast}\\cubrfl{x\\in\\R^{d}: T(x) \\in A_{\\xi,1}}\\\\\n&=\\label{eq:3.12}\n\\int_{\\Sbb^d_1}\\int_{(0,\\infty)} \n1\\cubrfl{\\xi^{\\top} T(rs) > 1} \\, \\mathrm{d} \\rho_1(r) \\, \\mathrm{d} \\Psi^\\ast(s)\n\\ldotp\n\\end{align}\nIt is easy to see that~\\eqref{eq:3.8} implies \n$T_\\alpha(rt)=r^{1\/\\alpha}T_\\alpha(t)$ for $r>0$ and $t\\in\\R$. \nConsequently, \\eqref{eq:3.7} yields\n\\begin{equation}\n\\label{eq:3.13}\nT(rx) = r^{1\/\\alpha} T(x)\n\\end{equation}\nfor $r>0$ and $x\\in\\R^{d}$. \nApplying~\\eqref{eq:3.13} to~\\eqref{eq:3.12}, one obtains \n\\begin{align}\n\\gamma_\\xi(X)\n&=\\nonumber\n\\int_{\\Sbb^d_1}\\int_{(0,\\infty)} \n1\\cubrfl{r^{1\/\\alpha} \\xi^{\\top} T(s) > 1} \\, \\mathrm{d} \\rho_1(r) \\, \\mathrm{d} \\Psi^\\ast(s)\\\\\n&=\\nonumber\n\\int_{\\Sbb^d_1}\\int_{(0,\\infty)} \n1\\cubrfl{\\xi^{\\top} T(s) > 0}\n1\\cubrfl{r > \\robrfl{\\xi^{\\top} T(s)}^{-\\alpha}} \\, \\mathrm{d} \\rho_1(r) \\, \\mathrm{d} \\Psi^\\ast(s)\\\\\n&=\\nonumber\n\\int_{\\Sbb^d_1}\n1\\cubrfl{\\xi^{\\top} T(s) > 0} \\robrfl{\\xi^{\\top} T(s)}^{\\alpha} \\, \\mathrm{d} \\Psi^\\ast(s)\\\\\n&=\\label{eq:3.14}\n\\int_{\\Sbb^d_1}\n\\robrfl{\\xi^{\\top} T(s)}^{\\alpha}_{+} \\, \\mathrm{d} \\Psi^\\ast(s)\n\\ldotp\n\\end{align}\nFinally, consider the sets $B_i$ defined in~\\eqref{eq:3.8}. \nIt is easy to see that \n\\begin{equation*}\n\\nu(B_i) \n= \n\\gamma_{\\ei}(X)+\\gamma_{-\\ei}(X) = v^{(i)}\n\\ldotp \n\\end{equation*}\nHence \n\\begin{align*}\n\\robrfl{\\xi^{\\top} T(s)}_{+}^{\\alpha} \n&= \n\\robrfl{\n\\sum_{i=1}^d\\xi^{(i)}\\cdot\\robrfl{T_\\alpha\\robrfl{v^{(i)} s^{(i)}}}\n} _{+}^{\\alpha}\\\\\n&=\ng_{\\xi,\\alpha}\\robrfl{vs}\n\\ldotp\n\\qed\n\\end{align*}\n\\end{myproof}\n\n\nAs already mentioned above, $\\mathrel{\\preceq_{\\mathrm{apl}}}$ and $\\Psi^\\ast$ are invariant under \nrescaling of components. \nConsequently, characterization of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ can be reduced to the case when the marginal weights $v^{(i)}=\\gamma_{\\ei}(X)+\\gamma_{-\\ei}(X)$ in~\\eqref{eq:3.9} are standardized by\n\\begin{equation}\n\\label{eq:3.15}\n\\forall i,j\\in\\cubr{1,\\ldots,d}\n\\quad \n\\lim_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubr{\\abs{X^{(i)}}> t}}{\\mathrm{P}\\cubr{\\abs{X^{(j)}}>t}}\n=1\n\\ldotp\n\\end{equation}\nThis condition will be referred to as the \n\\emph{balanced tails condition}. \nThe following result shows that this condition significantly simplifies the representation~\\eqref{eq:3.9}. \n\\par\n\\begin{proposition}\n\\label{prop:3.3}\nSuppose that $X$ is multivariate regularly varying on $\\R^{d}$ with tail index\n$\\alpha\\in(0,\\infty)$. \n\\begin{enumerate}[(a)]\n\\item \n\\label{item:L39.3}%\nIf $X$ has balanced tails in the sense of~\\eqref{eq:3.15}, then \n\\begin{equation}\n\\label{eq:3.17}\n\\frac{\\gamma_\\xi(X)}{\\gamma_{e_1}(X) + \\gamma_{-e_1}(X)} = \\Psi^\\ast g_{\\xi,\\alpha}\n\\ldotp\n\\end{equation}\n\\item\n\\label{item:L39.1}%\nThe non-degeneracy condition~\\eqref{eq:4} is equivalent to \nthe existence of a vector $w\\in(0,\\infty)^d$ \nsuch that $wX$ has balanced tails. 
\n\\item\n\\label{item:L39.2}%\nThe extreme risk index $\\gamma_\\xi$ of the rescaled vector $wX$ obtained \nin part~(\\ref{item:L39.1}) satisfies\n\\begin{equation}\n\\label{eq:3.18}\n\\frac{\\gamma_\\xi(wX)}{\\gamma_{e_1}(wX) + \\gamma_{-e_1}(wX)} = \\Psiast_X g_{\\xi,\\alpha}\n\\ldotp\n\\end{equation}\n\\end{enumerate}\n\\end{proposition}\n\\par\n\\begin{myproof}\nPart~(\\ref{item:L39.3}). \nConsider the integrand $g_{\\xi,\\alpha}(vs)$ in the representation~\\eqref{eq:3.9}:\n\\[\ng_{\\xi,\\alpha}(vs)\n=\n\\robrfl{\\sum_{i=1}^{d} \\xi^{(i)} \\cdot \n\\robrfl{\\robrfl{v^{(i)} s^{(i)}}_{+}^{1\/\\alpha} - \\robrfl{v^{(i)} s^{(i)}}_{-}^{1\/\\alpha}}\n}_{+}^{\\alpha}\n\\ldotp\n\\]\nThe balanced tails condition~\\eqref{eq:3.15} implies that $X$ is \nnon-degenerate in the sense of~\\eqref{eq:4}. \nFurthermore, all weights $v^{(i)}$ in the representation~\\eqref{eq:3.9} \nare equal: \n\\begin{align*}\n1\n&=\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}}>t} \/ \\mathrm{P}\\cubrfl{\\normfl{X}_1 >t}} \n{\\mathrm{P}\\cubrfl{\\absfl{X^{(j)}}>t} \/ \\mathrm{P}\\cubrfl{\\normfl{X}_1 >t}}\n=\n\\frac{\\gamma_{\\ei}(X) + \\gamma_{-\\ei}(X)}{\\gamma_{\\ej}(X) + \\gamma_{-\\ej}(X)}\\\\\n&=\n\\frac{v^{(i)}}{v^{(j)}}\n,\\quad i,j\\in\\cubr{1,\\ldots,d}\n\\ldotp\n\\end{align*}\nHence $g_{\\xi,\\alpha}(vs)$ simplifies to \n\\begin{align*}\ng_{\\xi,\\alpha}(vs)\n&=\nv^{(1)} g_{\\xi,\\alpha}(s)\\\\\n&=\n\\robrfl{\\gamma_{e_1}(X) + \\gamma_{-e_1}(X)} g_{\\xi,\\alpha}(s)\n\\ldotp\n\\end{align*} \n\\par\nPart~(\\ref{item:L39.1}). \nSuppose that $X$ satisfies~\\eqref{eq:4}. Then the sets $B_i$ defined in~\\eqref{eq:3.8} satisfy $\\nu(B_i)>0$ for $i=1,\\ldots,d$. Consequently, the random variables $\\abs{X^{(i)}}$ are regularly varying with tail index $\\alpha$. Denoting \n\\begin{equation}\n\\label{eq:3.19}\nw^{(i)}:=\\robr{\\nu(B_i)}^{-1\/\\alpha},\n\\end{equation}\none obtains \n\\begin{align*}\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\absfl{w^{(i)} X^{(i)}} > t}}{\\mathrm{P}\\cubrfl{\\norm{X}_1>t}} \n&=\n\\lim_{t\\to\\infty}\n\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t\/w^{(i)}}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t}}{\\mathrm{P}\\cubrfl{\\norm{X}_1>t}} \n}\\\\\n&= \\robrfl{w^{(i)}}^{\\alpha}\\cdot\\nu(B_i)\\\\\n&= 1\n\\end{align*}\nfor $i=1,\\ldots,d$. Hence, for any $i,j\\in\\cubr{1,\\ldots,d}$,\n\\begin{align*}\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{w^{(i)} X^{(i)}} > t}}{\\mathrm{P}\\cubrfl{\\absfl{w^{(j)} X^{(j)}} > t}}\n&=\n1\n\\ldotp\n\\end{align*}\n\\par\nTo prove the inverse implication, suppose that $Z:=wX$ has balanced tails \nfor some $w\\in(0,\\infty)^d$. 
Then the the exponent measure $\\nu$ of $X$\nsatisfies \n\\begin{align*}\n\\frac{\\nu(B_i)}{\\nu(B_1)}\n&=\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}} > t}}\\\\\n&=\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\absfl{Z^{(i)}}>w^{(i)} t}}{\\mathrm{P}\\cubrfl{\\absfl{Z^{(1)}} > w^{(1)} t}}\\\\\n&=\n\\lim_{t\\to\\infty}\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Z^{(i)}}> w^{(i)} t}}{\\mathrm{P}\\cubrfl{\\absfl{Z^{(i)}} >t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Z^{(1)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{Z^{(1)}} > w^{(1)} t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Z^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{Z^{(1)}} >t}}\n}\\\\\n&=\n\\robrfl{\\frac{w^{(i)}}{w^{(1)}}}^{-\\alpha}\n\\in(0,\\infty)\n,\\quad i\\in\\cubrfl{1,\\ldots,d}\n\\ldotp\n\\end{align*}\nSince multivariate regular variation of $X$ implies $\\nu(B_j)>0$ for at least \none index $j\\in\\cubr{1,\\ldots,d}$, this yields $\\nu(B_i)>0$ for all $i$.\n\\par\n\nPart~(\\ref{item:L39.2}). This is an immediate consequence of \npart (\\ref{item:L39.3}) and the invariance of canonical spectral measures \nunder componentwise rescaling.\n\\end{myproof}\n\\par\nRepresentation~\\eqref{eq:3.17} suggests that ordering of \nthe normalized extreme risk indices $\\gamma_\\xi\/(\\gamma_{e_1} + \\gamma_{-e_1})$ \nin the balanced tails setting can be considered \nas an \\emph{integral order relation} \nfor canonical spectral measures with respect to the function class\n\\begin{equation}\\label{eq:3.20}\n\\Gcal_{\\alpha}:=\\cubrfl{g_{\\xi,\\alpha}:\\xi\\in\\Simp^d}\n\\ldotp\n\\end{equation}\nThis justifies the following definition.\n\\par\n\\begin{definition}\n\\label{def:3.4}\nLet $\\Psi^\\ast$ and $\\Phi^\\ast$ be canonical spectral measures on $\\Sbb^d_1$\nand let $\\alpha>0$. \nThen the order relation $\\Psi^\\ast \\mathrel{\\preceq_{\\Gcalalpha}} \\Phi^\\ast$ is defined by\n\\begin{equation}\\label{eq:3.21}\n\\forall g\\in\\Gcal_{\\alpha}\n\\quad\n\\Psi^\\ast g \\le\\Phi^\\ast g\n\\ldotp\n\\end{equation}\n\\end{definition}\n\\par\n\\begin{remark}\n\\label{rem:3.1}\n\\begin{enumerate}[(a)]\n\\item\n\\label{item:r14.1}%\nFor $\\alpha=1$ and spectral measures on $\\Simp^d$ the extreme risk index \n$\\gamma_\\xi(X)$ is linear in $\\xi$ \\citep[cf.][Lemma~3.2]{Mainik\/Rueschendorf:2010}.\nConsequently, $\\mathrel{\\preceq_{\\Gcalalpha}}$ is indifferent in this case, \ni.e., \nany $\\Psi^\\ast$ and $\\Phi^\\ast$ on $\\mathcal{B}\\robr{\\Simp^d}$ satisfy \n\\begin{equation}\n\\label{eq:3.22}\n\\Psi^\\ast \\mathrel{\\preceq}_{\\mathcal{G},1} \\Phi^\\ast \n\\quad\\text{and}\\quad\n\\Phi^\\ast \\mathrel{\\preceq}_{\\mathcal{G},1} \\Psi^\\ast\n\\ldotp\n\\end{equation}\n\\item\n\\label{item:r14.2}%\nThe order relation $\\mathrel{\\preceq_{\\Gcalalpha}}$ is mixing invariant \nin the sense that \nuniform ordering of two parametric families \n$\\cubr{\\Psi^\\ast_\\theta:\\theta\\in\\Theta}$ and \n$\\cubr{\\Phi^\\ast_\\theta:\\theta \\in\\Theta}$, \n\\[\n\\forall \\theta\\in\\Theta\n\\quad\n\\Psi^\\ast_\\theta\\mathrel{\\preceq_{\\Gcalalpha}}\\Phi^\\ast_\\theta\n,\n\\]\nimplies \n\\[\n\\int_\\Theta\\Psi^\\ast_\\theta \\, \\mathrm{d} \\mu(\\theta) \n\\mathrel{\\preceq_{\\Gcalalpha}} \n\\int_\\Theta\\Phi^\\ast_\\theta \\, \\mathrm{d} \\mu(\\theta)\n\\]\nfor any probability measure $\\mu$ on $\\Theta$. 
\n\\end{enumerate}\n\\end{remark}\n\\par\nThe following theorem states that $\\mathrel{\\preceq_{\\mathrm{apl}}}$ is in a certain sense \nequivalent to the ordering of canonical spectral measures \nand allows to reduce the verification of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ to the verification \nof $\\mathrel{\\preceq_{\\Gcalalpha}}$. \nSome exemplary applications are given in Section~\\ref{sec:5}. \nFurthermore, given explicit representations of spectral measures or their \ncanonical versions, \nthis result allows to verify $\\mathrel{\\preceq_{\\mathrm{apl}}}$ numerically, which is very \nuseful in practice. \n\\par\n\\begin{theorem}\n\\label{theo:3.4}\nLet $X$ and $Y$ be multivariate regularly varying random vectors on $\\R^{d}$ with tail index $\\alpha\\in(0,\\infty)$ and canonical spectral measures $\\Psiast_X$ and $\\Psiast_Y$. Further, suppose that $X$ and $Y$ satisfy the balanced tails condition~\\eqref{eq:3.15}. \n\\begin{enumerate}[(a)]\n\\item\n\\label{item:t4.1}%\nIf $\\absfl{X^{(1)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\absfl{Y^{(1)}}$,\nthen $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$ implies $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\vspace{0.5em\n\\item\n\\label{item:t4.2}%\nIf \n$\\absfl{X^{(1)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\absfl{Y^{(1)}}$ \nand \n$\\absfl{Y^{(1)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\absfl{X^{(1)}}$,\nthen \n$\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$ is equivalent to $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\end{enumerate}\n\\end{theorem}\n\\par\n\\begin{myproof}\n(\\ref{item:t4.1})\nSince $X$ has balanced tails, Proposition \\ref{prop:3.3}(\\ref{item:L39.3}) yields\n\\begin{align*}\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\n&=\n\\lim_{t\\to\\infty}\n\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\norm{X}_1>t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\norm{X}_1>t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\n}\\\\\n&=\n\\frac{\\gamma_\\xi(X)}{\\gamma_{e_1}(X) + \\gamma_{-e_1}(X)}\\\\\n&=\n\\Psiast_X g_{\\xi,\\alpha}\n\\ldotp\n\\end{align*}\nAnalogously one obtains\n\\[\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}\n=\n\\Psiast_Y g_{\\xi,\\alpha}\n\\ldotp\n\\]\nMoreover, $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$ implies \n\\begin{equation}\n\\label{eq:3.23}\n\\frac{\\PsiastXg_{\\xi,\\alpha}}{\\PsiastYg_{\\xi,\\alpha}}\n\\le\n1\n\\ldotp\n\\end{equation}\nConsequently, \n\\begin{align}\n\\hspace{2em}&\\hspace{-2em}\\nonumber\n\\limsup_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}\\\\\n&=\\nonumber\n\\limsup_{t\\to\\infty}\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}\n}\\\\\n&=\\label{eq:3.24}\n\\frac{\\PsiastXg_{\\xi,\\alpha}}{\\PsiastYg_{\\xi,\\alpha}}\n\\cdot\n\\limsup_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}}>t}}\\\\\n&\\le\\nonumber\n1\n\\end{align}\ndue to~\\eqref{eq:3.23} and $\\abs{X^{(i)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\abs{Y^{(i)}}$.\n\\par \n\\medskip\n\\noindent %\n(\\ref{item:t4.2})\nBy part~(\\ref{item:t4.1}), it suffices to show that \n$X 
\\mathrel{\\preceq_{\\mathrm{apl}}} Y$ implies $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$. \nBy assumption $\\abs{X^{(1)}}$ and $\\abs{Y^{(1)}}$ have asymptotically equivalent tails,\n\\[\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}} >t}} \n= \n1\n\\ldotp\n\\]\nThus~\\eqref{eq:3.24} yields\n\\[\n\\frac{\\PsiastXg_{\\xi,\\alpha}}{\\PsiastYg_{\\xi,\\alpha}}\n=\n\\limsup_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}\n\\]\nand $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$ implies $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$.\n\\end{myproof}\n\\par\nThe following result answers the question for dependence structures \ncorresponding to the best and the worst possible diversification effects \nfor multivariate regularly varying random vectors in $\\Rplus^{d}$.\nAccording to Theorem~\\ref{theo:3.4}, it suffices to find the upper and \nthe lower elements with respect to $\\mathrel{\\preceq_{\\Gcalalpha}}$ in the set \nof all canonical spectral measures on $\\Simp^d$. \nIt turns out that for $\\alpha > 1$ \nthe best diversification effects are obtained in case of asymptotic \nindependence, i.e., the $\\mathrel{\\preceq_{\\Gcalalpha}}$-maximal element is given by \n\\begin{equation}\\label{eq:apl.8}\n\\Psi^\\ast_0 := \\sum_{i=1}^{d} \\Dirac{e_i},\n\\end{equation}\nwhereas the worst diversification effects are obtained in case of \nthe asymptotic comonotonicity, represented by \n\\begin{equation}\\label{eq:apl.9}\n\\Psi^\\ast_1 := d \\cdot \\Dirac{(1\/d,\\ldots,1\/d)}\n\\ldotp\n\\end{equation}\nFor $\\alpha < 1$ the situation is inverse. \n\\begin{theorem}\\label{thm:3.8}\nLet $\\Psi^\\ast$ be an arbitrary canonical spectral measure on $\\Simp^d$ and let \n$\\Psi^\\ast_0$ and $\\Psi^\\ast_1$ be defined according to~\\eqref{eq:apl.8} \nand~\\eqref{eq:apl.9}. Then\n\\begin{enumerate}[(a)]\n\\item\\label{item:thm:3.8.a}\n$\\Psi^\\ast_0 \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_1$ \nfor $\\alpha \\ge 1$.\n\\vspace{0.5em}\n\\item\\label{item:thm:3.8.b}\n$\\Psi^\\ast_1 \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_0$ \nfor $\\alpha \\in (0,1]$. \n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nLet $X$ be multivariate regularly varying on $\\Rplus^{d}$ with canonical spectral \nmeasure $\\Psi^\\ast$. Without loss of generality we can assume that $X$ \nsatisfies the balanced tails condition~\\eqref{eq:3.15}. \nThen, according to~\\eqref{eq:3.17}, we have \n\\begin{equation}\n\\label{eq:apl.6}\n\\Psiastg_{\\xi,\\alpha}=\\frac{\\gamma_\\xi(X)}{\\gamma_{e_1}(X)}\n\\ldotp\n\\end{equation}\nFurthermore, \nwe have $\\Psi^\\ast g_{e_i,\\alpha} = 1$ for $i=1,\\ldots,d$. \nRecall that the mapping $\\xi\\mapsto\\gamma_\\xi$ is convex for $\\alpha \\ge 1$ \n\\citep[cf.][Lemma~3.2]{Mainik\/Rueschendorf:2010}. \nDue to~\\eqref{eq:apl.6} this behaviour is inherited by the mapping \n$\\xi\\mapsto\\Psiastg_{\\xi,\\alpha}$. \nThus for $\\alpha \\ge 1$ \nwe have $\\Psiastg_{\\xi,\\alpha} \\le 1 = \\Psi^\\ast_1g_{\\xi,\\alpha}$ for all $\\xi\\in\\Simp^d$, \nwhich exactly means $\\Psi^\\ast \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_1$ for $\\alpha \\ge 1$. 
\n\\par\nTo complete the proof of part~(\\ref{item:thm:3.8.a}), note that \nthe normalization of canonical spectral measures yields \n\\begin{equation}\\label{eq:apl.7}\n\\forall\\xi\\in\\Simp^d\n\\quad\n\\Psi^\\ast_0g_{\\xi,\\alpha} \n= \n\\sum_{i=1}^{d}\\robrfl{\\xi^{(i)}}^{\\alpha}\n=\n\\int_{\\Simp^d} \\sum_{i=1}^{d} \\robrfl{\\xi^{(i)}}^{\\alpha} s ^{(i)} \n\\,\\Psi^\\ast(\\mathrm{d} s)\n\\end{equation}\nComparing the integrand on the right side of \\eqref{eq:apl.7} with \nthe function $g_{\\xi,\\alpha}(s)=\\robr{\\xi^{\\top} s^{1\/\\alpha}}^{\\alpha}$, \nwe see that \n\\[\n\\sum_{i=1}^{d} \\robrfl{\\xi^{(i)}}^{\\alpha} s^{(i)}\n=\ng_{\\xi,\\alpha}(s)\n\\cdot\n\\sum_{i=1}^{d} z_i^{\\alpha}\n\\]\nwith\n\\[\nz_i := \\frac{\\xi^{(i)} \\cdot\\robrfl{s^{(i)}}^{1\/\\alpha}} \n{\\xi^{\\top} s^{1\/\\alpha}}\n\\ldotp\n\\]\nThus it suffices to demonstrate that $\\sum_{i=1}^{d} z_i^{\\alpha} \\le 1$, \nwhich follows from $z_i\\in[0,1]$, $z_i^{\\alpha} \\le z_i$ for $\\alpha \\ge 1$, and \n$\\sum_{i=1}^{d} z_i=1$.\n\\par\nThe inverse result for $\\alpha\\in(0,1]$ stated in~(\\ref{item:thm:3.8.b}) \nfollows from \nthe concavity of the mapping $\\xi\\mapsto\\Psiastg_{\\xi,\\alpha}$ \nand the inequality $z_i^{\\alpha} \\ge z_i$. \n\\end{proof}\n\\par\nDue to Theorem~\\ref{theo:3.4}, an analogue of the foregoing \nresult for $\\mathrel{\\preceq_{\\mathrm{apl}}}$ is straightforward.\n\\begin{corollary}\\label{cor:3.10}\nLet $X$ be multivariate regularly varying in $\\Rplus^{d}$ with tail index \n$\\alpha\\in(0,\\infty)$ and identically \ndistributed margins $X^{(i)}\\sim F$, $i=1,\\ldots,d$.\nFurther, let $Y$ be a random vector with independent margins \n$Y ^{(i)}\\sim F$, and let $Z$ be a random vector with totally dependent \nmargins $Z^{(i)}=Z^{(1)}$ $\\mathrm{P}$-a.s.\\ and $Z^{(1)}\\sim F$. \nThen\n\\begin{enumerate}[(a)]\n\\item\n$Y \\mathrel{\\preceq_{\\mathrm{apl}}} X \\mathrel{\\preceq_{\\mathrm{apl}}} Z$ for $\\alpha \\ge 1$\n\\item\n$Z \\mathrel{\\preceq_{\\mathrm{apl}}} X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$ for $\\alpha \\in (0,1]$.\n\\end{enumerate}\n\\end{corollary}\n\\par\n\\begin{remark}\nThe strict assumptions of Corollary~\\ref{cor:3.10} are chosen for clearness and simplicity.\nThe independence of $Y^{(i)}$ and the total dependence of $Z^{(i)}$ \nare needed only in the tail region, i.e., it suffices for $Y$ and $Z$ \nto be multivariate regularly varying with canonical spectral measures \n$\\Psi^\\ast_0$ and $\\Psi^\\ast_1$, respectively.\nFurthermore, the assumption of identically distributed margins \ncan be replaced by equivalent tails:\n\\[\n1=\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{Y^{(i)} >t}}{\\mathrm{P}\\cubrfl{X^{(i)} >t}}\n=\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{Z^{(i)} >t}}{\\mathrm{P}\\cubrfl{X^{(i)} >t}}\n,\\quad i=1,\\ldots,d\n\\ldotp\n\\]\nFinally, the non-negativity of $X^{(i)}$, $Y^{(i)}$, and $Z^{(i)}$ \nis needed only in the asymptotic sense. The ordering results remain true \nif the spectral measure of $X$ is restricted to the unit simplex $\\Simp^d$. \n\\end{remark}\n\\par\nCombining Theorem~\\ref{theo:3.4} with Theorem~\\ref{theo:2.4}, one obtains an \nordering result for the canonical spectral measures of multivariate regularly \nvarying elliptical distributions. \nThe notation $\\Psi^\\ast=\\Psi^\\ast(\\alpha,C)$ is justified by the fact that \nspectral measures of elliptical distributions depend only on the tail \nindex $\\alpha$ and the generalized covariance matrix $C$. 
\nAn explicit representation of spectral densities for bivariate elliptical \ndistributions was obtained by \\citet{Hult\/Lindskog:2002}. \nAlternative representations that are valid for all dimensions $d\\ge2$ \nare given in \\citet{Mainik:2010}, Lemma~2.8.\n\\par\n\\begin{proposition}\n\\label{prop:3.7}\nLet $C$ and $D$ be $d$-dimensional covariance matrices satisfying \n\\begin{equation}\n\\label{eq:3.25}\nC_{i,i} = D_{i,i} > 0\n,\\quad\ni = 1,\\ldots,d,\n\\end{equation}\nand \n\\begin{equation}\n\\label{eq:3.26}\n\\forall \\xi\\in\\Simp^d \n\\quad\n\\xi^{\\top} C \\xi \\le \\xi^{\\top} D \\xi\n\\ldotp\n\\end{equation}\nThen \n\\[\n\\forall \\alpha>0 \n\\quad\n\\Psi^\\ast\\robr{\\alpha,C} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast\\robr{\\alpha,D}\n\\ldotp\n\\]\n\\end{proposition}\n\\par\n\\begin{myproof}\nFix $\\alpha\\in(0,\\infty)$ and consider random vectors \n\\[\nX\\mathrel{\\stackrel{\\mathrm{d}}{=}} R A U\n,\\quad\nY\\mathrel{\\stackrel{\\mathrm{d}}{=}} R B U, \n\\]\nwhere $A$ and $B$ are square roots of the matrices $C$ and $D$ \nin~\\eqref{eq:3.26}, i.e., \n\\[ \nC=A A^{\\top}\n,\\quad \nD=B B^{\\top},\n\\]\nand $R$ is an arbitrary regularly varying non-negative \nrandom variable with tail index $\\alpha$.\n\\par\nAs a consequence of Theorem \\ref{theo:2.4} one obtains $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$. \nFurthermore, invariance of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ under componentwise rescaling \nyields $wX \\mathrel{\\preceq_{\\mathrm{apl}}} wY$ for $w=\\robr{w^{(1)},\\ldots,w^{(d)}}$ with \n\\[\nw^{(i)}:={C_{i,i}}^{-1\/2}={D_{i,i}}^{-1\/2}\n,\\quad \ni=1,\\ldots,d\n\\ldotp\n\\]\nMoreover, as a particular consequence of arguments \nunderlying \\eqref{eq:2.17}, one obtains \n\\[\nw^{(i)} X^{(i)} \\mathrel{\\stackrel{\\mathrm{d}}{=}} w^{(j)} Y^{(j)}\n,\\quad\ni,j\\in\\cubr{1,\\ldots,d}\n\\ldotp\n\\]\nHence the random vectors $wX$ and $wY$ satisfy the balanced tails condition \\eqref{eq:3.15}, whereas their components are mutually ordered with respect to $\\mathrel{\\preceq_{\\mathrm{apl}}}$.\nFinally, Theorem~\\ref{theo:3.4}(\\ref{item:t4.2}) and invariance of canonical spectral measures under componentwise rescalings yield\n\\begin{equation*}\n\\Psi^\\ast(\\alpha,C) = \\Psi^\\ast_{wX} \n\\mathrel{\\preceq_{\\Gcalalpha}} \n\\Psi^\\ast_{wY} = \\Psi^\\ast(\\alpha,D)\n\\ldotp\n\\qed\n\\end{equation*}\n\\end{myproof}\n\\par\nThe subsequent result extends Theorem~\\ref{theo:3.4} to random vectors that do not have balanced tails.\n\\par\n\\begin{theorem}\n\\label{theo:3.8}\nLet $X$ and $Y$ be multivariate regularly varying random vectors on $\\R^{d}$ \nwith tail index $\\alpha\\in(0,\\infty)$ \nand canonical spectral measures $\\Psiast_X$ and $\\Psiast_Y$. 
\nFurther, assume that $\\abs{X^{(i)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\abs{Y^{(i)}}$ with \n\\begin{equation}\n\\label{eq:3.27}\n\\lambda_i\n:=\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}}>t}} \n\\in(0,1]\n\\end{equation}\nfor $i=1,\\ldots,d$ and that the vector $v=\\robr{v^{(1)},\\ldots,v^{(d)}}$ defined by\n\\begin{equation}\n\\label{eq:3.28}\nv^{(i)}:=\\lambda_i^{-1\/\\alpha}\n\\end{equation} \nsatisfies \n\\begin{equation}\n\\label{eq:3.29}\nX \\mathrel{\\preceq_{\\mathrm{apl}}} v X\n\\quad\\text{or}\\quad\nv^{-1} Y \\mathrel{\\preceq_{\\mathrm{apl}}} Y\n\\ldotp\n\\end{equation}\nThen $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$ implies $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\end{theorem}\n\\par\n\\begin{myproof}\nAccording to Proposition~\\ref{prop:3.3}(\\ref{item:L39.1}), there exists $w\\in\\Rplus^{d}$ \nsuch that $wY$ satisfies the balanced tails condition~\\eqref{eq:3.15}. \nFurthermore, the tails of the random vector \n\\[\nvwX:=\\robrfl{v^{(1)} w^{(1)} X^{(1)} ,\\ldots, v^{(d)} w^{(d)} X^{(d)}}\n\\]\nwith $v$ defined in~\\eqref{eq:3.27} are also balanced. Indeed, it is easy to see that \n\\[\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{w^{(i)} Y^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}} >t}} \n=\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} w^{(i)} X^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} X^{(i)}} >t}} \n=\n\\robrfl{w^{(i)}}^{\\alpha}\n\\]\nfor $i=1,\\ldots,d$. Analogously one obtains \n\\[\n\\lim_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} X^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} >t}} \n=\n\\robrfl{v^{(i)}}^{\\alpha} = \\lambda_i^{-1}\n\\]\nand, as a result,\n\\begin{align*}\n\\hspace{2em}&\\hspace{-2em}\n\\lim_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} w^{(i)} X^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{w^{(i)} Y^{(i)}} > t}}\\\\\n&=\n\\lim_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} X^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}} > t}}\\\\\n&=\n\\lim_{t\\to\\infty}\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{v^{(i)} X^{(i)}} >t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}} > t}}\n}\\\\\n&=\n\\lambda_i^{-1} \n\\cdot\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}} > t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}} > t}}\\\\\n&=\n1\n\\end{align*}\nfor $i=1,\\ldots,d$. 
Hence the balanced tails condition for $wY$ implies that the tails of $vwX$ are also balanced.\n\\par\nFurthermore, invariance of canonical spectral measures under \ncomponentwise rescaling yields\n\\[\n\\Psi^\\ast_{vwX} = \\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y = \\Psi^\\ast_{wY}\n\\ldotp\n\\]\nThus, applying Theorem~\\ref{theo:3.4}(\\ref{item:t4.1}), one obtains\n\\begin{equation}\n\\label{eq:3.30}\nvwX \\mathrel{\\preceq_{\\mathrm{apl}}} wY\n\\ldotp\n\\end{equation}\nSince $v^{(i)}=\\lambda_i^{-1\/\\alpha} > 0$ for $i=1,\\ldots,d$, \ncondition~\\eqref{eq:3.30} is equivalent to \n\\begin{equation}\n\\label{eq:3.31}\nwX \\mathrel{\\preceq_{\\mathrm{apl}}} v^{-1} wY\n\\ldotp\n\\end{equation}\nMoreover, assumption~\\eqref{eq:3.29} implies\n\\begin{equation}\n\\label{eq:3.32}\nwX \\mathrel{\\preceq_{\\mathrm{apl}}} vwX\n\\quad\\text{or}\\quad \nv^{-1}wY\\mathrel{\\preceq_{\\mathrm{apl}}} wY.\n\\end{equation}\nCombining this ordering statement \nwith \\eqref{eq:3.30} and~\\eqref{eq:3.31}, \none obtains \n\\[\nwX \\mathrel{\\preceq_{\\mathrm{apl}}} wY\n\\ldotp\n\\]\nFinally, \ninvariance of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ with respect to componentwise rescaling yields \n$X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\end{myproof}\n\\par\nIn the special case of random vectors in $\\Rplus^{d}$ Theorem~\\ref{theo:3.8} \ncan be simplified to the following result. \n\\par\n\\begin{corollary}\n\\label{cor:3.9} \nLet $X$ and $Y$ be multivariate regularly varying random vectors on $\\Rplus^{d}$ \nwith tail index $\\alpha\\in(0,\\infty)$ and canonical \nspectral measures $\\Psiast_X$ and $\\Psiast_Y$. \nFurther, suppose that \n\\begin{equation}\n\\label{eq:3.33}\n\\lambda_i\n:=\n\\limsup_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(i)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(i)}}>t}} \\in (0,1], \n\\quad\ni = 1,\\ldots,d\n\\ldotp\n\\end{equation}\nThen $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$ implies $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$.\n\\end{corollary}\n\\par\n\\begin{myproof}\nAssumption~\\eqref{eq:3.33} yields that the rescaling vector $v$ \ndefined in \\eqref{eq:3.28} is an element of $[1,\\infty)^d$. \nThus $v-(1,\\ldots,1)\\in\\Rplus^{d}$ and, since $X$ takes values in $\\Rplus^{d}$, \nwe have \n\\[\nX \\mathrel{\\preceq_{\\mathrm{apl}}} X + \\robr{v - (1,\\ldots,1)}X = vX\n\\ldotp\n\\]\nSimilar arguments yield $v^{-1} Y\\mathrel{\\preceq_{\\mathrm{apl}}} Y$. \nHence condition~\\eqref{eq:3.29} of Theorem~\\ref{theo:3.8} is satisfied.\n\\end{myproof}\n\\par\nThe final result of this section is due to the indifference of \n$\\mathrel{\\preceq_{\\Gcalalpha}}$ for $\\alpha=1$ mentioned in \nRemark~\\ref{rem:3.1}(\\ref{item:r14.1}).\nThis special property of spectral measures on $\\Simp^d$ allows to reduce \n$\\mathrel{\\preceq_{\\mathrm{apl}}}$ to the ordering of components. \nIt should be noted that this result cannot be extended to the \ngeneral case of spectral measures on $\\Sbb^d_1$. \n\\par\n\\begin{lemma}\n\\label{lem:3.10}\nLet $X$ and $Y$ be multivariate regularly varying on $\\Rplus^{d}$ with tail \nindex $\\alpha=1$. \nFurther, suppose that $Y$ satisfies the non-degeneracy \ncondition~\\eqref{eq:4} and that \n$X^{(i)} \\mathrel{\\preceq_{\\mathrm{apl}}} Y^{(i)}$ for $i=1,\\ldots,d$. Then $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$. \n\\end{lemma}\n\\par\n\\begin{myproof}\nAccording to Proposition \\ref{prop:3.3}(\\ref{item:L39.1}), \nthere exists $w\\in(0,\\infty)^d$ such that $wY$ satisfies the balanced tails \ncondition~\\eqref{eq:3.15}. 
\nFurthermore, due to the invariance of $\\mathrel{\\preceq_{\\mathrm{apl}}}$ under componentwise \nrescaling, $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$ is equivalent to $wX \\mathrel{\\preceq_{\\mathrm{apl}}} wY$. \n\\par\nThus it can be assumed without loss of generality that $Y$ has balanced tails. \nThis yields \n\\[\n\\lambda_i\n:=\n\\limsup_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{X^{(i)} >t}}{\\mathrm{P}\\cubrfl{Y^{(i)} >t}}\n=\n\\limsup_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{X^{(i)} >t}}{\\mathrm{P}\\cubrfl{Y^{(1)} >t}}\n,\\quad\ni=1,\\ldots,d\n\\ldotp\n\\]\nHence the assumption $X^{(i)} \\mathrel{\\preceq_{\\mathrm{apl}}} Y^{(i)}$ for $i=1,\\ldots,d$ \nimplies $\\lambda_i\\in[0,1]$ for all $i$.\nMoreover, the balanced tails condition for $Y$ yields\n\\begin{equation}\n\\label{eq:3.34}\n\\gamma_{e_1}(Y)=\\ldots=\\gamma_{e_d}(Y)\n\\ldotp\n\\end{equation}\n\\par\n\nNow consider the random vector $X$ and denote \n\\[\nj:=\\mathop{\\mathrm{arg\\,max}}\\displaylimits_{i\\in\\cubr{1,\\ldots,d}} \\gamma_{\\ei}(X)\n\\ldotp\n\\]\nRecall that $\\gamma_{\\ei}(X)=\\nu_X\\robr{\\cubr{x\\in\\Rplus^{d}: x^{(i)}>1}}$ \nwith $\\nu_X$ \ndenoting the exponent \nmeasure of $X$ and that $\\nu_X$ is non-zero. This yields $\\gamma_{\\ej}(X)>0$ \neven if $X$ does not satisfy the non-degeneracy condition~\\eqref{eq:4}. \nMoreover, for $\\alpha=1$, the mapping \n$\\xi\\mapsto\\gamma_\\xi(X)$ is linear. This implies\n\\begin{equation}\n\\label{eq:3.35}\n\\gamma_\\xi(X)\n=\n\\sum_{i=1}^{d} \\xi^{(i)} \\cdot \\gamma_{\\ei}(X)\n\\le \n\\gamma_{\\ej}(X)\n,\\quad \n\\xi\\in\\Simp^d\n\\end{equation}\nand \\eqref{eq:3.34} yields\n\\begin{equation}\n\\label{eq:3.36}\n\\gamma_\\xi(Y) = \\sum_{i=1}^{d} \\xi^{(i)} \\cdot \\gamma_{\\ei}(Y) = \\gamma_{e_1}(Y)\n,\\quad\n\\xi\\in\\Simp^d\n\\ldotp\n\\end{equation}\nHence\n\\begin{align*}\n\\hspace{2em}&\\hspace{-2em}\n\\limsup_{t\\to\\infty}\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}\\\\\n&=\n\\limsup_{t\\to\\infty}\\robrfl{\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t}}{\\mathrm{P}\\cubrfl{X^{(j)} > t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{X^{(j)} > t}}{\\mathrm{P}\\cubrfl{Y^{(1)} > t}} \n\\cdot\n\\frac{\\mathrm{P}{\\cubrfl{Y^{(1)} > t}}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y >t}}\n}\\\\\n&=\n\\frac{\\gamma_\\xi(X)}{\\gamma_{\\ej}(X)}\n\\cdot\n\\lambda_j\n\\cdot\n\\frac{\\gamma_{e_1}(Y)}{\\gamma_\\xi(Y)} \\\\\n&\\le\n1\n\\end{align*}\ndue to $\\lambda_j\\le 1$, \\eqref{eq:3.35}, and \\eqref{eq:3.36}.\n\\end{myproof}\n\\section{Relations to convex and supermodular orders}\\label{sec:4}\nAs mentioned in Remark~\\ref{rem:apl.1}(\\ref{item:apl.2}), \ndependence orders $\\mathrel{\\preceq_{\\mathrm{sm}}}$, $\\mathrel{\\preceq_{\\mathrm{dcx}}}$ \nand convexity orders $\\mathrel{\\preceq_{\\mathrm{cx}}}$, $\\mathrel{\\preceq_{\\mathrm{icx}}}$, $\\mathrel{\\preceq_{\\mathrm{plcx}}}$ \ndo not imply $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in general. \nHowever, it turns out that the relationship between $\\mathrel{\\preceq_{\\mathrm{apl}}}$ and\nthe ordering of canonical spectral measures by $\\mathrel{\\preceq_{\\Gcalalpha}}$ allows \nto draw conclusions of this type \nin the special case of multivariate regularly varying models. \nThe core result of this section is stated in Theorem~\\ref{thm:5}. \nIt entails \na collection of sufficient \ncriteria for $\\mathrel{\\preceq_{\\mathrm{apl}}}$ in terms of convex and supermodular order relations, \nwith particular interest paid to the \ninversion of diversification effects for $\\alpha<1$. 
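\nIn its simplest form this inversion can be seen for two i.i.d.\\ non-negative losses $X^{(1)}$ and $X^{(2)}$ \nthat are regularly varying with tail index $\\alpha$: \nsubexponentiality of regularly varying distributions gives \n$\\mathrm{P}\\cubr{X^{(1)}+X^{(2)}>t}\\sim 2\\,\\mathrm{P}\\cubr{X^{(1)}>t}$, \nand hence \n$\\mathrm{P}\\cubrfl{\\frac{1}{2}\\robrfl{X^{(1)}+X^{(2)}}>t}\\sim 2^{1-\\alpha}\\,\\mathrm{P}\\cubr{X^{(1)}>t}$, \nwhich asymptotically exceeds $\\mathrm{P}\\cubr{X^{(1)}>t}$ if and only if $\\alpha<1$. 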
\nAn application to copula based models is given in Proposition~\\ref{prop:4.4}.\n\\par \nThis approach was applied by \\citet{Embrechts\/Neslehova\/Wuethrich:2009} \nto the ordering of risks for the portfolio vector \n$\\xi=(1,\\ldots,1)$ and for a specific family of multivariate \nregularly varying models with identically distributed, non-negative margins \n$X^{(i)}$ (cf. Example~\\ref{ex:2} in Section~\\ref{sec:5}). \n\\par\nThe next theorem is the core element of this section. \nIt generalizes the arguments of \\citet{Embrechts\/Neslehova\/Wuethrich:2009} \nto multivariate regularly varying random vectors in $\\R^{d}$ with balanced \ntails and tail index $\\alpha\\ne 1$. \nThe case $\\alpha=1$ is not included for two reasons.\nFirst, this case is partly trivial due to the indifference of $\\mathrel{\\preceq_{\\Gcalalpha}}$ \nfor spectral measures on $\\Simp^d$ (cf.\\ Remark~\\ref{rem:3.1}(\\ref{item:r14.1})).\nSecond, Karamata's theorem used in the proof of the integrable case $\\alpha>1$ does not yield the desired result for random variables with tail index $\\alpha=1$. \n\\par\n\\begin{theorem}\n\\label{thm:5}\nLet $X$ and $Y$ be multivariate regularly varying on $\\R^{d}$ with identical \ntail index $\\alpha\\ne 1$. Further, assume that $X$ and $Y$ satisfy \nthe balanced tails condition~\\eqref{eq:3.15}. \n\\begin{enumerate}[(a)]\n\\item\n\\label{item:t5.1}\nFor $\\alpha>1$ let\n\\begin{equation}\n\\label{eq:309}\n\\limsup_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}\n= 1\n\\end{equation}\nand let there exist $u_0>0$ such that with $h_u(t):=\\robr{t-u}_{+}$ \n\\begin{equation}\n\\label{eq:292a}\n\\forall u \\ge u_0 \\ \\forall \\xi\\in\\Simp^d \n\\quad\n\\mathrm{E} h_u\\robrfl{\\xi^{\\top} X} \n\\le \n\\mathrm{E} h_u\\robrfl{\\xi^{\\top} Y}\n\\ldotp \n\\end{equation}\nThen $\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$. \n\\vspace{0.5em}\n\\item\n\\label{item:t5.2}\nFor $\\alpha<1$ suppose that $\\abs{X^{(1)}}$ and $\\abs{Y^{(1)}}$ are \nequivalent with respect to $\\mathrel{\\preceq_{\\mathrm{apl}}}$, i.e.,\n\\begin{equation}\n\\label{eq:310a}\n\\absfl{X^{(1)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\absfl{Y^{(1)}}\n\\quad\\text{and}\\quad \n\\absfl{Y^{(1)}} \\mathrel{\\preceq_{\\mathrm{apl}}} \\absfl{X^{(1)}}, \n\\end{equation}\nand let there exist $u_0 >0$ such that with $f_u(t):=-(t \\wedge u)$, \n\\begin{equation}\n\\label{eq:292b}\n\\forall u \\ge u_0 \\ \\forall \\xi\\in\\Simp^d \n\\quad\n\\mathrm{E} f_u\\robrfl{\\robrfl{\\xi^{\\top} X}_{+}} \n\\le \n\\mathrm{E} f_u \\robrfl{\\robrfl{\\xi^{\\top} Y}_{+}}\n\\ldotp \n\\end{equation}\nThen $\\Psiast_Y \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_X$. \n\\end{enumerate}\n\\end{theorem}\n\\par\nThe proof will be given after some conclusions and remarks. 
\nIn particular, it should be noted that the relation between \n$\\mathrel{\\preceq_{\\Gcalalpha}}$ and $\\mathrel{\\preceq_{\\mathrm{apl}}}$ established in Theorem~\\ref{theo:3.4} \nimmediately yields the following result.\n\\begin{corollary}\n\\label{cor:8}\n\\begin{enumerate}[(a)] \n\\item \n\\label{item:c8.1}%\nIf random vectors $X$ and $Y$ satisfy conditions of Theorem~\\ref{thm:5}(\\ref{item:t5.1}), \nthen $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$;\n\\item\n\\label{item:c8.2}%\nIf $X$ and $Y$ satisfy conditions of Theorem~\\ref{thm:5}(\\ref{item:t5.2}), \nthen $Y \\mathrel{\\preceq_{\\mathrm{apl}}} X$.\n\\end{enumerate} \n\\end{corollary}\n\\par\nIt should also be noted that \nconditions \\eqref{eq:292a} and \\eqref{eq:292b} are asymptotic forms of \nthe increasing convex ordering $\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{icx}}} \\xi^{\\top} Y$ \nand the decreasing convex ordering $\\xi^{\\top} X \\mathrel{\\preceq_{\\mathrm{decx}}} \\xi^{\\top} Y$, \nrespectively. \nThe consequences can be outlined as follows. \n\\begin{remark}\n\\label{rem:12}\n\\begin{enumerate}[(a)]\n\\item \nThe following criteria are sufficient for~\\eqref{eq:292a} \nand \\eqref{eq:292b} to hold: \n\\begin{enumerate}[(i)]\n\\vspace{0.5em\n\\item \n$\\robr{\\xi^{\\top} X}_{+} \\mathrel{\\preceq_{\\mathrm{cx}}} \\robr{\\xi^{\\top} Y}_{+}$ \nfor all $\\xi\\in\\Simp^d$,\n\\item\n$X$ and $Y$ are restricted to $\\Rplus^{d}$ and \n$X \\mathrel{\\preceq} Y$ with $\\mathrel{\\preceq}$ denoting either \n$\\mathrel{\\preceq_{\\mathrm{plcx}}}$, $\\mathrel{\\preceq_{\\mathrm{lcx}}}$, $\\mathrel{\\preceq_{\\mathrm{cx}}}$, $\\mathrel{\\preceq_{\\mathrm{dcx}}}$, or $\\mathrel{\\preceq_{\\mathrm{sm}}}$.\n\\end{enumerate}\n\\vspace{0.5em}\n\\item\nAdditionally, condition~\\eqref{eq:292a} follows from \n$X \\mathrel{\\preceq} Y$ with $\\mathrel{\\preceq}$ denoting either \n$\\mathrel{\\preceq_{\\mathrm{plcx}}}$, $\\mathrel{\\preceq_{\\mathrm{lcx}}}$, $\\mathrel{\\preceq_{\\mathrm{cx}}}$, $\\mathrel{\\preceq_{\\mathrm{dcx}}}$, or $\\mathrel{\\preceq_{\\mathrm{sm}}}$.\n\\end{enumerate}\n\\end{remark}\nFinally, a comment should be made upon \nconvex ordering of non-integrable random variables and \ndiversification for $\\alpha<1$.\nThe so-called \\emph{phase change} at $\\alpha=1$, \ni.e., the inversion of diversification effects \ntaking place when the tail index $\\alpha$ crosses this critical value, \ndemonstrates that the implications of convex ordering are essentially different \nfor integrable and non-integrable random variables.\nIndeed, it is easy to see that if a random variable $Z$ on $\\R$ \nsatisfies $\\mathrm{E} \\sqbr{Z_{+}} = \\mathrm{E} \\sqbr{Z_{-}}=\\infty$,\nthen the only integrable convex functions of $Z$ are the constant ones.\nMoreover, if $Z$ is restricted to $\\R_{+}$ and $\\mathrm{E} Z =\\infty$,\nthen any integrable convex function of $Z$ is necessarily non-increasing. 
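\nFor instance, if $Z$ is Pareto distributed with $\\mathrm{P}\\cubr{Z>t}=t^{-\\alpha}$ for $t\\ge 1$ and $\\alpha<1$, \nthen $\\mathrm{E} h_u(Z)=\\int_{(u,\\infty)}\\mathrm{P}\\cubr{Z>t}\\,\\mathrm{d} t=\\infty$ for every $u>0$, \nso that conditions of the type~\\eqref{eq:292a} carry no information, \nwhereas \n$\\mathrm{E} f_u(Z)=-\\mathrm{E}\\sqbr{Z\\wedge u}=-\\robrfl{1+\\frac{u^{1-\\alpha}-1}{1-\\alpha}}$ \nis finite for all $u\\ge 1$, \nwhich is in line with the use of the decreasing functionals $f_u$ in part~(\\ref{item:t5.2}) of Theorem~\\ref{thm:5}. 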
\n\\vspace{0.5em}\n\\par\n\\begin{myproofx}{of Theorem~\\ref{thm:5}}(\\ref{item:t5.1})\nConsider the expectations in~\\eqref{eq:292a}.\nIt is easy to see that for $u>0$ \n\\begin{align*}\n\\oneby{u}\\mathrm{E} h_u\\robrfl{\\xi^{\\top} X} \n&=\n\\oneby{u}\\int_{(u,\\infty)}\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t} \\mathrm{d} t\\\\\n&=\n\\int_{(1,\\infty)} \\mathrm{P}\\cubrfl{\\xi^{\\top} X > tu} \\mathrm{d} t\n\\end{align*}\nand, as a consequence, \n\\[\n\\frac{u^{-1}\\mathrm{E} h_u\\robrfl{\\xi^{\\top} X}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}\n=\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > u}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}\n\\int_{(1,\\infty)} \n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > tu}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > u}}\n\\, \\mathrm{d} t\n\\ldotp\n\\]\nMoreover, Proposition~\\ref{prop:3.3}(\\ref{item:L39.3}) implies\n\\begin{equation}\n\\label{eq:313}\n\\lim_{u\\to\\infty} \n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > u}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}\n=\n\\frac{\\gamma_\\xi(X)}{\\gamma_{e_1}(X)+\\gamma_{-e_1}(X)} \n=\n\\PsiastXg_{\\xi,\\alpha}\n\\end{equation}\nand Karamata's theorem \n\\citep[cf.][Theorem B.1.5]{de_Haan\/Ferreira:2006} \nyields\n\\[\n\\lim_{u\\to\\infty} \n\\int_{(1,\\infty)} \n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > tu}}{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > u}}\n\\, \\mathrm{d} t\n=\n\\int_{(1,\\infty)}\nt^{-\\alpha} \\mathrm{d} t\n=\n\\oneby{\\alpha-1}\n\\ldotp\n\\]\nAs a result one obtains\n\\begin{equation*}\n\\lim_{u\\to\\infty}\n\\frac{u^{-1} \\mathrm{E} h_u\\robrfl{\\xi^{\\top} X}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}\n=\n\\oneby{\\alpha-1}\\Psiast_X g_{\\xi,\\alpha}\n\\end{equation*}\nand, analogously,\n\\begin{equation*}\n\\lim_{u\\to\\infty}\n\\frac{u^{-1}\\mathrm{E} h_u\\robrfl{\\xi^{\\top} Y}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>u}}\n=\n\\oneby{\\alpha-1}\\Psiast_Y g_{\\xi,\\alpha}\n\\ldotp\n\\end{equation*}\nHence \\eqref{eq:292a} and \\eqref{eq:309} yield\n\\begin{align*}\n1\n&\\ge\n\\limsup_{u\\to\\infty}\\frac{u^{-1}\\mathrm{E}h_u\\robrfl{\\xi^{\\top} X}}{u^{-1}\\mathrm{E} h_u\\robrfl{\\xi^{\\top} Y}}\\\\\n&=\n\\limsup_{u\\to\\infty}\\robrfl{\n\\frac{u^{-1} \\mathrm{E}h_u\\robrfl{\\xi^{\\top} X}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>u}}{u^{-1} \\mathrm{E} h_u\\robrfl{\\xi^{\\top} Y}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>u}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>u}}\n}\\\\\n&=\n\\frac{\\PsiastXg_{\\xi,\\alpha}}{\\PsiastYg_{\\xi,\\alpha}}\n\\end{align*}\nfor all $\\xi\\in\\Simp^d$, which exactly means \n$\\Psiast_X \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_Y$. 
\n\\par\n\\medskip\n(\\ref{item:t5.2})\nNote that~\\eqref{eq:310a} implies \n\\begin{equation}\n\\label{eq:310}\n\\lim_{t\\to\\infty} \n\\frac{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}\n= 1\n\\end{equation}\nand that~\\eqref{eq:292b} yields\n\\begin{equation}\n\\label{eq:311}\n\\forall u>u_0 \\ \\forall v\\ge 0\n\\quad\n\\mathrm{E} f_{u+v}\\robrfl{\\xi^{\\top} X} - \\mathrm{E} f_{u+v}\\robrfl{\\xi^{\\top} Y} \\le 0\n\\ldotp\n\\end{equation}\nFurthermore, it is easy to see that any random variable $Z$ in $\\R_{+}$ \nsatisfies\n\\begin{align*}\n\\mathrm{E}\\sqbrfl{Z\\wedge u} \n&= \n\\int_{(0,\\infty)} \\robrfl{t \\wedge u} \\mathrm{d} \\mathrm{P}^Z(t)\\\\\n&=\n\\int_{(0,\\infty)}\\int_{(0,\\infty)} 1\\cubr{s < t\\wedge u} \\, \\mathrm{d} \\mathrm{P}^Z(t) \\, \\mathrm{d} s\\\\\n&=\n\\int_{(0,u)} \\mathrm{P}\\cubrfl{Z>s} \\, \\mathrm{d} s\n\\ldotp\n\\end{align*}\nThis implies\n\\[\n\\mathrm{E} f_{u+v}(Z) = \\mathrm{E} f_u(Z) - \\int_{(u,u+v)} \\mathrm{P}\\cubrfl{Z>t} \\mathrm{d} t\n\\ldotp\n\\]\nConsequently, \\eqref{eq:311} yields\n\\begin{equation}\n\\label{eq:312}\n\\forall u\\ge u_0\\ \\forall v>0\n\\quad \n\\mathrm{E} f_u\\robrfl{\\xi^{\\top} X} - \\mathrm{E}f_u\\robrfl{\\xi^{\\top} Y} \n\\le \nI(u,v)\n\\end{equation}\nwhere \n\\begin{align*}\nI(u,v)\n&:=\n\\int_{(u, u+v)}\n\\robrfl{\\mathrm{P}\\cubrfl{\\xi^{\\top} X >t } - \\mathrm{P}\\cubrfl{\\xi^{\\top} Y > t}} \n\\, \\mathrm{d} t\\\\\n&\\phantom{:}=\n\\int_{(u,u+v)} \n\\phi(t) \\cdot \\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}\n\\,\\mathrm{d} t\n\\end{align*}\nwith \n\\[\n\\phi(t)\n:=\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t} - \\mathrm{P}\\cubrfl{\\xi^{\\top} Y > t}}\n{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\n\\ldotp\n\\]\nMoreover, \\eqref{eq:310}, \\eqref{eq:313}, and an \nanalogue of~\\eqref{eq:313} for $Y$ yield \n\\begin{align}\n\\phi(t)\n&=\\nonumber\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} X > t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\n-\n\\frac{\\mathrm{P}\\cubrfl{\\xi^{\\top} Y > t}}{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}\n\\cdot\n\\frac{\\mathrm{P}\\cubrfl{\\absfl{Y^{(1)}}>t}}{\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}>t}}\\\\\n&\\to\\label{eq:314}\n\\PsiastXg_{\\xi,\\alpha} - \\PsiastYg_{\\xi,\\alpha}\n,\\quad\nt\\to\\infty\n\\ldotp\n\\end{align}\n\\par\nNow suppose that $\\Psiast_Y \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_X$ is not satisfied, \ni.e., there exists $\\xi\\in\\Simp^d$ such that \n$\\PsiastYg_{\\xi,\\alpha} > \\PsiastXg_{\\xi,\\alpha}$.\nThen~\\eqref{eq:314} yields $\\phi(t) \\le - \\varepsilon$ \nfor some $\\varepsilon>0$ and sufficiently large $t$. \nThis implies \n\\begin{equation} \n\\label{eq:315}\nI(u,v)\n\\le \n-\\varepsilon \n\\int_{(u, u+v)}\n\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}> t} \\, \\mathrm{d} t\n\\end{equation}\nfor sufficiently large $u$ and all $v\\ge0$. \nMoreover, regular variation of $\\absfl{X^{(1)}}$ with tail index \n$\\alpha<1$ implies $\\mathrm{E}\\absfl{X^{(1)}}=\\infty$. \nConsequently, the integral on the right side of~\\eqref{eq:315} tends to \ninfinity for $v\\to\\infty$:\n\\[\n\\forall u>0\n\\quad\n\\lim_{v\\to\\infty}\n\\int_{(u, u+v)}\n\\mathrm{P}\\cubrfl{\\absfl{X^{(1)}}> t} \\, \\mathrm{d} t\n=\n\\infty\n\\ldotp\n\\]\nHence, choosing $u$ and $v$ sufficiently large, one can achieve \n$I(u,v) < \\mathrm{E} f_u\\robrfl{\\xi^{\\top} X} - \\mathrm{E} f_u\\robrfl{\\xi^{\\top} Y}$, \nwhich contradicts~\\eqref{eq:312}. \nThus the relation $\\PsiastYg_{\\xi,\\alpha} > \\PsiastXg_{\\xi,\\alpha}$ cannot be true \nand therefore it necessarily holds that $\\Psiast_Y \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_X$.\n\\end{myproofx}\n\\par\nNow let us return to the ordering criterion in terms of the supermodular \norder $\\mathrel{\\preceq_{\\mathrm{sm}}}$ stated in Remark~\\ref{rem:12}. 
The invariance of $\\mathrel{\\preceq_{\\mathrm{sm}}}$ \nunder non-decreasing component transformations \nallows to transfer these criteria to copula models. \nFurthermore, since we are interested in the ordering of the asymptotic \ndependence structures represented by the canonical spectral measures, \n$\\Psiast_1$ and $\\Psiast_2$, we can take \nany copulas that yield $\\Psiast_1$ and $\\Psiast_2$ as asymptotic \ndependence structures. \n\\par\nA natural choice is given by the \\emph{extreme value copulas}, defined \nas the copulas of \\emph{simple max-stable distributions} corresponding to \n$\\Psi^\\ast_i$, i.e., the distributions \n\\begin{equation}\\label{eq:apl.4}\nG^\\ast_i(x):=\\exp\\robrfl{-\\nu^{\\ast}_i\\robrfl{-[\\infty,x]^\\mathrm{c}}}\n,\\quad\nx\\in\\Rplus^{d}\n\\end{equation}\nwhere $\\nu^{\\ast}_i$ is the canonical exponent associated with $\\Psi^\\ast_i$ \nvia~\\eqref{eq:apl.3}. For further details on max-stable and simple max-stable \ndistributions we refer to \\citet{Resnick:1987}.\nSince extreme value copulas and canonical spectral measures can be \nconsidered as \nalternative parametrizations of the same asymptotic dependence structures, \nwe obtain the following result. \n\\par\n\\begin{proposition}\n\\label{prop:4.4}\nLet $\\Psiast_1$ and $\\Psiast_2$ be canonical spectral measures on $\\Simp^d$. \nFurther, for $i=1,2$, let $C_i$ denote the copula of the \nsimple max-stable distribution $G^\\ast_i$ induced by $\\Psi^\\ast_i$ \naccording to~\\eqref{eq:apl.4} and~\\eqref{eq:apl.3}. \nThen $C_1 \\mathrel{\\preceq_{\\mathrm{sm}}} C_2$ implies\n\\begin{enumerate}[(a)]\n\\item\n$\\Psiast_1 \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_2$ for $\\alpha\\in(1,\\infty)$;\n\\vspace{0.5em}\n\\item\n$\\Psiast_2 \\mathrel{\\preceq_{\\Gcalalpha}} \\Psiast_1$ for $\\alpha\\in(0,1)$.\n\\end{enumerate}\n\\end{proposition}\n\\par\n\\begin{myproof}\nLet $\\nu^{\\ast}_i$ denote the canonical exponent measures corresponding\nto $\\Psi^\\ast_i$ and $G^\\ast_i$.\nIt is easy to see that the transformed measures \n\\[\n\\nu_{\\alpha,i}:=\\nu^{\\ast}_i\\circ T^{-1}\n,\\quad i=1,2,\n\\]\nwith $\\alpha>0$ and the transformation $T$ defined as\n\\[\nT : x \\mapsto \n\\robrfl{\\robrfl{x^{(i)}}^{1\/\\alpha},\\ldots,\\robrfl{x^{(d)}}^{1\/\\alpha}}\n,\n\\quad\nx\\in\\Rplus^{d}, \n\\]\nexhibit the scaling property with index $-\\alpha$:\n\\begin{align*}\n\\nu_{\\alpha,i}\\robr{tA} \n&=\nt^{-\\alpha} \\nu_{\\alpha,i}(A)\n,\\quad\nA\\in\\mathcal{B}\\robr{\\Rplus^{d}\\setminus\\cubr{0}}\n\\ldotp\n\\end{align*}\nHence the transformed distributions \n\\begin{equation}\n\\label{eq:320}\nG_{\\alpha,i}(x)\n:=\nG^\\ast_i\\circ T^{-1}(x) \n= \n\\exp\\robrfl{-\\nu_{\\alpha,i}\\robrfl{[0,x]^c}}\n\\end{equation}\nare max-stable with exponent measures $\\nu_{\\alpha,i}$.\n\\par\nIt is well known that max-stable distributions with identical heavy-tailed margins are multivariate regularly varying \\citep[cf.][]{Resnick:1987}.\nMoreover, the limit measure $\\nu$ in the multivariate regular variation condition can be chosen equal to the exponential measure associated with the property of max-stability. \nConsequently, the probability distributions $G_{\\alpha,i}$ for $i=1,2$ and $\\alpha>0$ are multivariate regularly varying with tail index $\\alpha$ and canonical spectral measures $\\Psi^\\ast_i$. 
\n\\par\nFurthermore, it is easy to see that $X\\simG_{\\alpha,1}$ and $Y\\simG_{\\alpha,2}$ \nhave identical margins:\n\\[\nX^{(i)} \\mathrel{\\stackrel{\\mathrm{d}}{=}} Y^{(j)}\n,\\quad\ni,j\\in\\cubr{1,\\ldots,d}\n\\ldotp\n\\]\nMoreover, due to the invariance of $\\mathrel{\\preceq_{\\mathrm{sm}}}$ under non-decreasing marginal \ntransformations, $C_1 \\mathrel{\\preceq_{\\mathrm{sm}}} C_2$ implies \n\\[\nG_{\\alpha,1} \\mathrel{\\preceq_{\\mathrm{sm}}} G_{\\alpha,2}\n\\]\nfor all $\\alpha>0$. \nThus an application of the ordering criteria from Remark~\\ref{rem:12} \nto $X\\simG_{\\alpha,1}$ and $Y\\simG_{\\alpha,2}$ completes the proof.\n\\end{myproof}\n\\section{Examples}\n\\label{sec:5}\nThis section concludes the paper by a series of examples with parametric models illustrating the results from the foregoing sections.\nExamples~\\ref{ex:6} and \\ref{ex:2} demonstrate application of Proposition~\\ref{prop:4.4} to copula based models and the phenomenon of phase change for random vectors in $\\Rplus^{d}$.\nThe fact that the phase change does not necessarily occur in the general case is demonstrated by multivariate Student-t distributions in Example~\\ref{ex:3}.\n\\par\n\\begin{example}\n\\label{ex:6}\nRecall the family of Gumbel copulas given by\n\\begin{equation}\nC_\\theta(u)\n:=\n\\exp\\robrfl{- \\robrfl{\\sum_{i=1}^{d}\\robrfl{-\\log u^{(i)}}^\\theta}^{1\/\\theta}}\n,\\quad\n\\theta\\in[1,\\infty)\n\\ldotp\n\\end{equation}\nGumbel copulas are extreme value copulas, i.e., they are copulas of simple max-stable distributions. \nAccording to \\citet{Hu\/Wei:2002}, Gumbel copulas with dependence parameter $\\theta\\in[1,\\infty)$ are ordered by $\\mathrel{\\preceq_{\\mathrm{sm}}}$:\n\\begin{equation}\n\\label{eq:322}\n\\forall \\theta_1,\\theta_2\\in[1,\\infty) \n\\quad\n\\theta_1 \\le \\theta_2\n\\Rightarrow\nC_{\\theta_1} \\mathrel{\\preceq_{\\mathrm{sm}}} C_{\\theta_2}\n\\ldotp\n\\end{equation}\nConsequently, Proposition \\ref{prop:4.4} applies to the family of \ncanonical spectral measures $\\Psi^\\ast_\\theta$ corresponding to \nthe Gumbel copulas $C_\\theta$.\nThus $1\\le\\theta_1\\le\\theta_2<\\infty$ implies \n$\\Psi^\\ast_{\\theta_1} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_{\\theta_2}$ for $\\alpha>1$ \nand there is a phase change when $\\alpha$ crosses the value $1$, i.e., \nfor $\\alpha\\in(0,1)$ there holds \n$\\Psi^\\ast_{\\theta_2} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_{\\theta_1}$.\n\\par\nApplying Theorem~\\ref{theo:3.4}, one obtains ordering with respect to $\\mathrel{\\preceq_{\\mathrm{apl}}}$ for random vectors $X$ and $Y$ on $\\Rplus^{d}$ that are multivariate regularly varying with canonical spectral measures \nof Gumbel type and have balanced tails ordered by $\\mathrel{\\preceq_{\\mathrm{apl}}}$. \nIn particular, \nthis is the case if $X$ and $Y$ have identical \nregularly varying marginal distributions and \nArchimedean copulas that satisfy appropriate \nregularity conditions \n\\citep[cf.][]{Genest\/Rivest:1989, Barbe\/Fougeres\/Genest:2006}.\n\\par \nMoreover, it is also worth a remark that \nmultivariate regularly varying random vectors with Archimedean copulas \ncan only induce extreme value copulas of Gumbel type \n\\citep[cf.][]{Genest\/Rivest:1989}. \n\\par\nFigure~\\ref{figure:32} illustrates the resulting diversification effects \nin the bivariate case, \nincluding indifference to portfolio diversification for $\\alpha=1$ and \nthe phase change occurring when $\\alpha$ crosses this critical value. 
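\nThe boundary cases can be made explicit: \nfor $\\theta=1$ the Gumbel copula is the independence copula, \nso that $\\Psi^\\ast_\\theta=\\Psi^\\ast_0$ and, by~\\eqref{eq:apl.7}, \n$\\Psi^\\ast_\\theta\\,g_{\\xi,\\alpha}=\\robrfl{\\xi^{(1)}}^{\\alpha}+\\robrfl{\\xi^{(2)}}^{\\alpha}$, \nwhich equals $2^{1-\\alpha}$ for $\\xi=(1\/2,1\/2)$, \nwhereas the limit $\\theta\\to\\infty$ corresponds to comonotonicity with \n$\\Psi^\\ast_\\theta\\,g_{\\xi,\\alpha}\\to\\Psi^\\ast_1 g_{\\xi,\\alpha}=1$; \nfor $\\alpha=1$ both expressions are equal to $1$. 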
\nThe graphics show the function \n$\\xi^{(1)}\\mapsto\\Psi^\\ast_\\theta\\,g_{\\xi,\\alpha}$ for selected values of \n$\\theta$ and $\\alpha$. \nDue to $X\\in\\Rplus^{d}$, representation \n$\\Psi^\\ast_\\theta\\,g_{\\xi,\\alpha}=\\gamma_\\xi\/(\\gamma_{e_1} + \\gamma_{-e_1})$ simplifies to \n$\\Psi^\\ast_\\theta\\,g_{\\xi,\\alpha}=\\gamma_\\xi\/\\gamma_{e_1}$\nand therefore \n\\[\n\\Psi^\\ast_\\theta \\, g_{e_1,\\alpha} = \\Psi^\\ast_\\theta \\, g_{e_2,\\alpha} = 1\n\\ldotp\n\\]\n\\par\n\\begin{figure\n\\centering\n\\subfigure[Varying $\\alpha$ for $\\theta=1.4$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig1a-gumbel-eri-1}}\n\\subfigure[Varying $\\alpha$ for $\\theta=2$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig1b-gumbel-eri-2}}\n\\\\\n\\subfigure[Varying $\\theta$ for $\\alpha>1$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig1c-gumbel-eri-3}}\n\\subfigure[Varying $\\theta$ for $\\alpha<1$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig1d-gumbel-eri-4}}\n\\caption{Bivariate Gumbel copulas: Diversification effects represented by functions $\\xi^{(1)}\\mapsto\\Psi^\\ast_\\theta \\,g_{\\xi,\\alpha}$ for selected values of $\\theta$ and $\\alpha$.}\n\\label{figure:32\n\\end{figure}\n\\end{example}\nAs already mentioned above, Theorem~\\ref{thm:5} generalizes some \narguments from \\citet{Embrechts\/Neslehova\/Wuethrich:2009}. \nThe next example concerns Galambos copulas as addressed in \nthat original publication. \n\\par\n\\begin{example}\n\\label{ex:2}\nAnother family of extreme value copulas that are ordered by $\\mathrel{\\preceq_{\\mathrm{sm}}}$ \nis the family of $d$-dimensional \n\\emph{Galambos copulas} \nwith parameter $\\theta\\in(0,\\infty)$:\n\\begin{equation}\nC_\\theta(u)\n:=\n\\exp\\robrfl{\\sum_{I\\subset\\cubr{1,\\ldots,d}} \n(-1)^{\\abs{I}}\\robrfl{\\sum_{i\\in I} \\robrfl{-\\log u^{(i)}}^{-\\theta}}^{-1\/\\theta}\n}\n\\ldotp\n\\end{equation}\n\nAccording to \\citet{Hu\/Wei:2002}, \n$\\theta_1 \\le \\theta_2$ implies $C_{\\theta_1} \\mathrel{\\preceq_{\\mathrm{sm}}} C_{\\theta_2}$.\nThus Proposition~\\ref{prop:4.4} yields ordering of the corresponding canonical spectral measures $\\Psi^\\ast_\\theta$ with respect to $\\mathrel{\\preceq_{\\Gcalalpha}}$. \nSimilarly to the case of Gumbel copulas, $\\theta_1\\le\\theta_2$ implies $\\Psi^\\ast_{\\theta_1} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_{\\theta_2}$ for $\\alpha>1$ \nand $\\Psi^\\ast_{\\theta_2} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_{\\theta_1}$ for $\\alpha\\in(0,1)$. \n\\par\nFinally, it should be noted that Galambos copulas correspond to the \ncanonical exponent measures of random vectors $X$ in $\\Rplus^{d}$ with \nidentically distributed regularly varying margins $X^{(i)}$ and \ndependence structure of $-X$ given by an Archimedean copula with a \nregularly varying generator $\\phi(1-1\/t)$. Models of this type were \ndiscussed in recent studies of aggregation effects for extreme risks %\n\\citep[cf.][]{Alink\/Loewe\/Wuethrich:2004, %\nAlink\/Loewe\/Wuethrich:2005, %\nEmbrechts\/Neslehova\/Chavez-Demoulin:2006, %\nBarbe\/Fougeres\/Genest:2006, %\nEmbrechts\/Lambrigger\/Wuethrich:2008, %\nEmbrechts\/Neslehova\/Wuethrich:2009}.\n\\end{example}\n\\par\nThe final example illustrates results established in Proposition~\\ref{prop:3.7} and Theorem~\\ref{theo:2.4}. In particular, it shows that elliptical distributions do not exhibit a phase change at $\\alpha=1$. 
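\nHeuristically, the absence of a phase change can be traced back to the elliptical structure itself: \nif $X\\mathrel{\\stackrel{\\mathrm{d}}{=}}\\mu+RAU$ with $U$ uniformly distributed on the Euclidean unit sphere \nand $R$ regularly varying with tail index $\\alpha$, \nthen $\\xi^{\\top} X\\mathrel{\\stackrel{\\mathrm{d}}{=}}\\xi^{\\top}\\mu+\\robrfl{\\xi^{\\top} C\\xi}^{1\/2}R\\,U^{(1)}$ \nwith $C=AA^{\\top}$, \nso that $\\gamma_\\xi$ is proportional to $\\robrfl{\\xi^{\\top} C\\xi}^{\\alpha\/2}$. \nSince this expression is monotone in $\\xi^{\\top} C\\xi$ for every $\\alpha>0$, \nthe resulting ordering of diversification effects does not depend on whether $\\alpha$ lies above or below $1$. 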
\n\\par\n\\begin{example}\n\\label{ex:3}\nRecall the multivariate Student-t distributions and \nconsider the case with equal degrees of freedom, i.e., \n\\begin{equation}\nX\\mathrel{\\stackrel{\\mathrm{d}}{=}} \\mu_X + R A_X U,\n\\quad\nY\\mathrel{\\stackrel{\\mathrm{d}}{=}} \\mu_Y + R A_Y U,\n\\end{equation}\nwhere $R\\mathrel{\\stackrel{\\mathrm{d}}{=}}\\abs{Z}$ for a Student-t distributed random variable $Z$ with degrees of freedom equal to $\\alpha\\in(0,\\infty)$. \nFurther, let the generalized covariance matrices $C_X=C(\\rho_X)$ and $C_Y=C(\\rho_Y)$ be defined as \n\\begin{equation}\n\\label{eq:146}\nC(\\rho):=\n\\left(\n\\begin{array}{cc}\n1 &\\rho\\\\\n\\rho & 1\n\\end{array}\n\\right)\n\\end{equation} \nand assume that $\\rho_X \\le \\rho_Y$.\n\\par\nAs already mentioned in Remark~\\ref{rem:2.6}(\\ref{item:rem:2.6.a}), \n$C_X$ and $C_Y$ satisfy condition~\\eqref{eq:3.26} and \nProposition~\\ref{prop:3.7} yields $X \\mathrel{\\preceq_{\\mathrm{apl}}} Y$. Moreover, \nProposition~\\ref{prop:3.7} implies a uniform ordering of diversification \neffects in the sense that \n\\[\n\\Psiast_X=\\Psi^\\ast_{\\alpha,\\rho_X} \\mathrel{\\preceq_{\\Gcalalpha}} \\Psi^\\ast_{\\alpha,\\rho_Y}=\\Psiast_Y\n\\]\nfor all $\\alpha\\in(0,\\infty)$.\n\\par\nFigure~\\ref{figure:7} shows functions $\\xi^{(1)} \\mapsto \\Psi^\\ast_{\\alpha,\\rho}\\,g_{\\xi,\\alpha}$ for selected parameter values $\\rho$ and $\\alpha$ that illustrate the ordering of asymptotic portfolio losses by $\\rho$ and the missing phase change at $\\alpha=1$. The indifference to portfolio diversification for $\\alpha=1$ is also absent. \nMoreover, symmetry of elliptical distributions implies $\\gamma_{-e_1} = \\gamma_{e_1}$ and, as a result,\n\\[\n\\Psi^\\ast_{\\alpha,\\rho}\\, g_{e_1,\\alpha} \n= \n\\Psi^\\ast_{\\alpha,\\rho}\\, g_{e_2,\\alpha} \n=\n1\/2\n\\ldotp\n\\] \nThus the standardization of the plots in Figure~\\ref{figure:7} is different from that in Figure~\\ref{figure:32}. \n\\par\n\\begin{figure}\n\\centering\n\\subfigure[Varying $\\alpha$ for $\\rho>0$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig2a-ellipt-1}}\n\\subfigure[Varying $\\alpha$ for $\\rho<0$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig2b-ellipt-2}}\n\\\\\n\\subfigure[Varying $\\rho$ for $\\alpha>1$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig2c-ellipt-3}}\n\\subfigure[Varying $\\rho$ for $\\alpha<1$]\n{\\includegraphics[width=.45\\textwidth]{Graphics\/Fig2d-ellipt-4}}\n\\caption{Bivariate elliptical distributions with generalized covariance matrices defined in~\\eqref{eq:146}: Diversification effects represented by functions $\\xi^{(1)}\\mapsto\\Psi^\\ast_{\\alpha,\\rho}\\, g_{\\xi,\\alpha}$ for selected values of $\\rho$ and $\\alpha$.}\n\\label{figure:7}\n\\end{figure}\n\\end{example}\n\\par\n\\begin{remark}\n\\label{rem:15}%\nAll examples the authors are aware of suggest that the diversification \ncoefficient $\\Psiast g_{\\xi,\\alpha}$ is decreasing in $\\alpha$.\nThis means that risk diversification is stronger for lighter component tails \nthan for heavier ones.\n\\par\nHowever, it should be noted that the influence of the tail index $\\alpha$ \non risk aggregation is different from its influence on diversification.
The asymptotic risk aggregation coefficient\n\\[\nq_d := \\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{X^{(1)}+\\ldots+X^{(d)} > t}}{\\mathrm{P}\\cubrfl{X^{(1)}>t}}\n\\]\nintroduced by %\n\\citet{Wuethrich:2003} is known to be increasing in $\\alpha$ when the loss components \n$X^{(i)}$ are non-negative %\n\\citep[cf.][]{Barbe\/Fougeres\/Genest:2006}. \nIt is easy to see that the restriction to non-negative $X^{(i)}$ implies \n\\[\nq_d\n=\n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\norm{X}_1 > t}}{\\mathrm{P}\\cubrfl{X^{(1)}>t}}\n=\n\\oneby{\\gamma_{e_1}}\n\\ldotp\n\\]\nMoreover, denoting the uniformly diversified portfolio by $\\eta$, \n\\[\n\\eta:=d^{-1}\\robr{1,\\ldots,1}\n,\n\\]\none obtains \n\\[\nq_d \n= \n\\lim_{t\\to\\infty}\\frac{\\mathrm{P}\\cubrfl{\\eta^{\\top} X > d^{-1} t}}{\\mathrm{P}\\cubrfl{X^{(1)}>t}}\n=\nd^{\\alpha} \\frac{\\gamma_\\eta}{\\gamma_{e_1}}\n\\ldotp\n\\]\nThus $q_d$ is a product of the factor $d^{\\alpha}$, which is \nincreasing in $\\alpha$, and the ratio $\\gamma_\\eta\/\\gamma_{e_1}$, \nwhich is closely related to the diversification coefficient \n$\\Psiast g_{\\xi,\\alpha}$. \n\\par\nIn particular, given equal marginal weights, i.e., \n\\[\n\\gamma_{e_1}=\\ldots=\\gamma_{e_d},\n\\]\nProposition~\\ref{prop:3.3}(\\ref{item:L39.3}) yields \n\\[\n\\frac{\\gamma_\\eta}{\\gamma_{e_1}} = \\Psi^\\ast g_{\\eta,\\alpha}\n\\ldotp\n\\]\nAs already mentioned above, the coefficients $\\Psiast g_{\\xi,\\alpha}$ \nwith $\\xi\\in\\Simp^d$ are decreasing in $\\alpha$ in all examples considered here.\nThis means that the aggregation and the diversification of risks are \ninfluenced by the tail index $\\alpha$ in different, maybe even always \ncontrary ways.\n\\par\nThe question of how general this contrary influence is remains open. \nOne can easily prove that the extreme risk index $\\gamma_\\xi= \\Psi f_{\\xi,\\alpha}$ is decreasing in $\\alpha$ for $\\xi\\in\\Simp^d$. However, this result cannot be extended to $\\Psiast g_{\\xi,\\alpha}$ directly since $\\Psiast g_{\\xi,\\alpha}$ is related to $\\Psi f_{\\xi,\\alpha}$ by the normalizations~\\eqref{eq:3.17} and~\\eqref{eq:3.18}. \nThe question of \nwhether $\\Psiast g_{\\xi,\\alpha}$ with arbitrary $\\xi\\in\\Simp^d$ or at least \n$\\Psi^\\ast g_{\\eta,\\alpha}$ is generally decreasing in $\\alpha$ \nis an interesting subject for further research. \n\\end{remark}\n\\section{Acknowledgements}\nThe research underlying this paper was done at the University of Freiburg. \nGeorg Mainik would also like to thank RiskLab for financial support. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction} \nThere are several notions that describe the relative position of two subalgebras of operator algebras. \nAs one such notion for two subalgebras of a finite von Neumann \nalgebra, Popa introduced the notion of {\\it mutually orthogonal subalgebras} (definition below) in \\cite{Po}. \nUnder the terminology {\\it complementarity}, \nthe same notion is investigated in the theory of quantum systems (see \\cite{Pe2} for example). \n\nThe most basic case of interest is that of two subalgebras of a full matrix algebra, both of \nwhich are either maximal abelian or themselves isomorphic to a full matrix algebra. \nIn such cases, the two subalgebras are conjugate by a unitary.
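\nFor illustration, consider the simplest instance in $M_2(\\mathbb{C})$ with the normalized trace $\\tau = {\\rm Tr}\/2$: let $D$ be the diagonal (maximal abelian) subalgebra and let $w$ be the $2 \\times 2$ Fourier unitary with entries $w_{ij} = \\frac 1{\\sqrt 2}(-1)^{(i-1)(j-1)}$. \nA direct computation gives \n$$\\tau(p \\, wqw^*) = \\frac 14 = \\tau(p) \\tau(wqw^*), \\quad \\text{for all minimal projections} \\quad p, q \\in D,$$ \nand since such projections span $D$ linearly, the trace condition $\\tau(ab) = \\tau(a)\\tau(b)$ recalled in Section 2 holds for all $a \\in D$ and $b \\in wDw^*$; that is, $D$ and $wDw^*$ are mutually orthogonal (complementary). 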
\n\nOur motivation for this work arises from the following fact: \n\nIn the previous paper \\cite{Ch1}, we defined a constant $h(A|B)$ \nfor two subalgebras $A$ and $B$ of a finite von Neumann algebra, \nand described the relative position of maximal abelian subalgebras $A$ and $B$ of $M_n(\\mathbb{C})$ \nin terms of the values of $h(A|B)$. \nThis $h(A|B)$ is a slight modification of the Connes-St\\o rmer relative entropy $H(A|B)$ in \\cite{CS} (cf. \\cite{NS}). \nIf $A_1$ and $A_2$ are maximal abelian subalgebras of $M_n(\\mathbb{C}),$ \nthen there exists a unitary $u$ such that $A_2 = uA_1u^*,$ and then \n$h(A_1| A_2)$ coincides with the entropy $H(b(u))$, defined in \\cite{ZSKS}, of the unistochastic matrix $b(u)$ \ninduced by the unitary $u$. \nAs a consequence, we showed that $A_1$ and $A_2$ are mutually orthogonal if and only if $h(A_1| A_2) = H(b(u)) = \\log n.$ \nThis means that $A_1$ and $A_2$ are mutually orthogonal if and only if $h(A_1| A_2)$ is maximal and \nequals the logarithm of the dimension of the subalgebras. \nRelated results for subfactors of type II$_1$ factors were obtained in \\cite {Ch2}. \nHere, it does not hold in general that $H(A_1| A_2) = H(b(u))$ \n(see, for example, \\cite [Appendix] {PSW} ). \n\nOn the other hand, Petz showed in \\cite{Pe2} for subalgebras $A$ and $B$ of $M_n(\\mathbb{C})$ \nthat if $A$ is homogeneous and abelian, then $H(A|B)$ is \nmaximal if and only if $A$ and $B$ are complementary. \nHere homogeneous means that all minimal projections of $A$ have the same trace. \nHe also remarked that the Connes-St\\o rmer relative entropy cannot characterize the complementarity of \nsubalgebras in the general case. \n\\smallskip \n\nIn this paper, we study the case where the subalgebras $A$ and $B$ in question are isomorphic to \nsome $M_n(\\mathbb{C})$. \nWe introduce a density matrix arising from the pair $\\{A, B\\},$ and \nwe show that the von Neumann entropy of this density matrix gives a characterization of \nthe mutual orthogonality (that is, the complementarity). \n\\smallskip \n\nIn order to define the entropy of automorphisms of operator algebras, \ntwo kinds of notions of {\\it a finite partition of unity} have played an important role. \nOne was used by Connes and St\\o rmer, and it corresponds to a finite measurable partition of \na given space in ergodic theory (see \\cite {NS} \\cite {OP} for example). \nThe other was used by Alicki and Fannes in \\cite{AF}, and it is called a {\\it finite operational \npartition of unity}. \nHere, we apply the latter, that is, the operational partition of unity, \nand we give a numerical characterization of pairs of mutually orthogonal \n subalgebras that are both isomorphic to a full matrix algebra of the same size. \n\\smallskip \n\nThe paper is organized as follows. After preliminaries on \nbasic notions in Section 2, in Section 3 we define a density matrix \nassociated with subfactors $A$ and $B$ that are both isomorphic to some \n$M_n(\\mathbb{C})$, and we show that $A$ and $B$ are mutually orthogonal \nif and only if the von Neumann entropy of \nthis density matrix attains the maximum value $2\\log n$, \nwhich is the logarithm of the dimension of the subfactors. \n\\vskip 0.3cm\n\n\\section{Preliminaries}\nHere we summarize notation, terminology, and \nbasic facts. \n\nLet $M$ be a finite von Neumann algebra acting on a separable Hilbert space, \nand let $\\tau$ be a fixed normal faithful tracial state.
\nIn the case where $M$ is the algebra $M_n(\\mathbb{C})$ of $n \\times n$ matrices, \n$\\tau (x) = {\\rm Tr}(x)\/ n, $ where ${\\rm Tr}$ is the standard trace on $M_n(\\mathbb{C})$. \nThe norm $\\Vert x \\Vert_\\tau$ is given by $\\Vert x \\Vert_\\tau = \\tau(x^*x)^{1\/2}$ for all $x \\in M$. \nBy a von Neumann subalgebra $A$ of $M,$ we mean a $*$-subalgebra that is closed \nin the weak operator topology and whose unit coincides with the unit of $M$. \nA conditional expectation of $M$ onto \na von Neumann subalgebra $A$ of $M$ is \na completely positive linear map $E_A : M \\to A$ with \n$E_A(axb) = aE_A(x)b$ for all $x \\in M$ and $a, b \\in A$. \nIn the case of a finite von Neumann algebra $M$ with \na faithful normal tracial state $\\tau$, \nthere always exists a unique faithful normal conditional expectation $E_A$ \nof $M$ onto a von Neumann subalgebra $A$ of $M$ \nsuch that $\\tau(xa) = \\tau(E_A(x)a)$ for all $x \\in M$ and $a \\in A$. \nIt is called the conditional expectation with respect to $\\tau$. \n\n\\subsection{\\bf Mutually orthogonal (or complementary) subalgebras.} \nLet $A$ and $B$ be von Neumann subalgebras of $M$. \nIn \\cite [Lemma 2.1] {Po}, Popa showed that the following conditions are equivalent. \n\\begin{enumerate}\n\\item$\\tau(ab) = 0$ for $a \\in A, b \\in B$ with $\\tau(a) = \\tau(b) = 0$;\n\\item $\\tau(ab) = \\tau(a) \\tau(b)$ for all $a \\in A, b \\in B$;\n\\item $\\Vert ab \\Vert_{\\tau} = \\Vert a \\Vert_{\\tau} \\Vert b \\Vert_{\\tau} $ for all $a \\in A, b \\in B$;\n\\item $E_A E_B(x) = \\tau(x) 1_M,$ for all $x \\in M$; \n\\item $E_A (B) \\subset \\mathbb{C} 1_M$. \n\\end{enumerate}\nMoreover, (1) - (5) are equivalent to the analogous conditions obtained by interchanging $A$ and $B$. \n\n\\smallskip\nTwo von Neumann subalgebras $A$ and $B$ of $M$ are called {\\it mutually orthogonal} \nif one of the above conditions (1) - (5) is satisfied (\\cite [Definition 2.2] {Po}). \n\nMutually orthogonal subalgebras are also called {\\it complementary subalgebras} \n(see \\cite{Pe1}, \\cite{Pe2} for example). \n\n\\subsection{\\bf Density matrix and von Neumann entropy.} \nBy a density matrix, we mean a positive semidefinite matrix $\\rho$ \nsuch that ${\\rm Tr}(\\rho) = 1$. \nFor a density matrix $\\rho$, the von Neumann entropy $S(\\rho)$ is given by \n$S(\\rho) = {\\rm Tr}(\\eta(\\rho))$. Here, $\\eta$ is defined on the interval $[0,1]$ by \n$$\\eta(t) = -t \\log t \\quad (0 < t \\leq 1) \\quad \\text{and} \\quad \\eta(0) = 0.$$ \n\\smallskip \n\n\\section{Main results} \nLet $ M_n(\\mathbb{C}) $ be the algebra of $n\\times n$ complex matrices, and let \n${\\rm Tr}$ be the trace of $ M_n(\\mathbb{C}) $ with ${\\rm Tr}(p) = 1 $ for every minimal projection $p$. \nLet $L$ be a finite von Neumann algebra, and \n let $\\tau_L$ be a fixed normal faithful tracial state. \n \nWe let $M = M_n(\\mathbb{C}) \\otimes L,$ and let $\\tau_M = {\\rm Tr}\/ n \\otimes \\tau_L$. \n\\smallskip\n\n\\subsection{} We consider the subalgebra $N = M_n(\\mathbb{C}) \\otimes 1_L$ of $M$. \nIn this case, the conditional expectation $E_N$ with respect to $\\tau_M$ satisfies \n$$E_N(x \\otimes y) = \\tau_L(y)x \\otimes 1_L, \\quad x \\in M_n(\\mathbb{C}), \\quad y \\in L.$$\n\nThe following lemma is an easy consequence of the definition, and it is essential to our study. \n\n\\subsubsection{\\bf Lemma} \n{\\it \nLet $N = M_n(\\mathbb{C}) \\otimes 1_L$ and let $u \\in M$ be a unitary operator.
Then \n$N$ and $uNu^*$ are mutually orthogonal if and only if \n$$E_N(u^*(a \\otimes 1_L)u) = \\tau_M(a \\otimes 1_L) 1_M \n = \\frac{{\\rm Tr}(a)} n 1_M,\\quad \\text{for all} \\quad a \\in M_n(\\mathbb{C}).$$} \n\n\\begin{proof} \nAssume that $N$ and $uNu^*$ are mutually orthogonal, that is, \n$$E_NE_{uNu^*}(x) = E_{uNu^*}E_N(x) = \\tau_M(x) 1_M, \\quad \\text{for all} \\quad x \\in M.$$ \nThen, since $E_{uNu^*}(y) = uE_N(u^*yu)u^*$ for all $y \\in M$ and $E_N(x) = x$ for $x \\in N$, we obtain \n$uE_N(u^*xu)u^* = E_{uNu^*}(E_N(x)) = \\tau_M(x) 1_M$, for all $x \\in N$. \nThis implies that \n$$E_N(u^*xu) = \\tau_M(x) 1_M, \\quad \\text{for \\ all} \\quad x \\in N.$$\n\nConversely, \nassume that $E_N(u^*xu) = \\tau_M(x) 1_M$, for all $x \\in N$. Then \n$$E_{uNu^*}(x) = uE_N(u^*xu)u^* = \\tau_M(x) 1_M$$ \nfor all $x \\in N$. \nHence \n$$E_{uNu^*}E_N(x) = \\tau_M(x) 1_M \\quad \\text{ for \\ all} \\ x \\in M$$ \nso that $N$ and $uNu^*$ are mutually orthogonal.\n\\end{proof}\n\\smallskip\n\n\\subsection{}\nLet $\\{e_{ij} ; i,j = 1, \\cdots, n \\}$ be a system of matrix units of $M_n(\\mathbb{C}),$ \nso that \n$$e_{ij}^* = e_{ji}, \\quad e_{ij} e_{st} = \\delta_{js} e_{it}, \\quad \\sum_{i = 1}^n e_{ii} = 1_{M_n(\\mathbb{C})}.$$ \nThen each $x$ in $M = M_n(\\mathbb{C}) \\otimes L$ is written in the unique form: \n$$x = \\sum_{i,j = 1}^n e_{ij} \\otimes x_{ij}, \\quad x_{ij} \\in L,$$\nand $u = \\sum_{i,j = 1}^n e_{ij} \\otimes u_{ij}$ is a unitary in $M$ if and only if \n$$\\sum_{j=1}^n u_{ij} u_{kj}^* = \\delta_{ik} 1_{L}\\quad \\text{and} \\quad \n\\sum_{i=1}^n u_{ij}^* u_{ik} = \\delta_{jk} 1_L . $$\n\\smallskip\n\nWe now characterize the unitaries $u \\in M$ for which $N$ and $uNu^*$ are mutually orthogonal. \n\\smallskip\n\n\\subsubsection{\\bf Theorem.} \n{\\it \nAssume that a von Neumann subalgebra $N$ of $M$ is given by $N = M_n(\\mathbb{C}) \\otimes 1_L$ \nand let $ u \\in M$ be unitary. \nThen $N$ and $uNu^*$ are mutually orthogonal if and only if \n$$\\tau_L(u_{ij}^* u_{kl}) = \\delta_{ik}\\delta_{jl} \\frac 1n, \\quad \\text{for all} \\quad i,j,k,l = 1, \\cdots, n.$$\n}\n\n\\begin{proof} \nAssume that $N$ and $uNu^*$ are mutually orthogonal. \nThen by Lemma 3.1.1 \n$$ E_N(u^*(e_{ij} \\otimes 1_L) u ) = \\delta_{ij}\\frac 1n 1_M.$$ \nOn the other hand, since\n$$u^*(e_{ij} \\otimes 1_L) u = \\sum_{l, t = 1}^n e_{lt} \\otimes u_{il}^* u_{jt}, \\ \\text{for \\ all} \\ i, j = 1, \\cdots, n,$$ \nby applying $E_N(x \\otimes y) = \\tau_L(y)x \\otimes 1_L,$ \nwe have that \n$$ E_N(u^*(e_{ij} \\otimes 1_L) u )\n = \\sum_{l, t = 1}^n \\tau_L( u_{il}^* u_{jt}) e_{lt} \\otimes 1_L.$$\nHence \n$\\sum_{l, t = 1}^n \\tau_L( u_{il}^* u_{jt}) e_{lt} = \\delta_{ij}\\frac 1n 1_{M_n(\\mathbb{C})}$. \nThis means that \n$$\\tau_L(u_{ij}^* u_{kl}) = \\delta_{ik}\\delta_{jl} \\frac 1n \\quad \\text{for all} \\quad i,j,k,l = 1, \\cdots, n.$$\n\nConversely, assume that $\\tau_L(u_{ij}^* u_{kl}) = \\delta_{ik}\\delta_{jl} \\frac 1n$ for all $i,j,k,l = 1, \\cdots, n$. \nThen we have that \n$${E_N (u^*(e_{ij} \\otimes 1_L)u) } = \\sum_{l, t = 1}^n e_{lt} \\otimes \\tau_L(u_{il}^*u_{jt}) 1_L \n = \\sum_{l = 1}^n e_{ll} \\otimes \\delta_{ij} \\frac 1n 1_L = \\delta_{ij} \\frac 1n 1_M\n$$\nfor all $i, j = 1, \\cdots, n.$ \nHence $N$ and $uNu^*$ are mutually orthogonal, by Lemma 3.1.1.
\n\\end{proof}\n\n\\subsubsection{\\bf Note} \nTheorem 3.2.1 implies that if $N = M_n(\\mathbb{C}) \\otimes 1_L$ and if $N$ and $uNu^*$ are mutually orthogonal \nfor some unitary $u \\in M = M_n(\\mathbb{C}) \\otimes L,$ then \nthe set $\\{ u_{ij} \/ {\\sqrt n} \\ ; i,j = 1, \\cdots, n\\} \\subset L$ has to be an orthonormal system with respect to \nthe inner product induced by $\\tau_L$, so that $\\dim(L) \\geq n^2$.\n\n\\subsection{ Entropy associated to an inner conjugate pair of subfactors} \nIn order to give a numerical characterization for mutually orthogonal subalgebras which are all isomorphic to \n$M_n(\\mathbb{C})$, \nwe apply the notions of a finite operational partition of unity $X$ of size $k$ and of the density matrix $\\rho_\\phi[X]$, \nwhich were introduced by Alicki and Fannes in \\cite {AF}. \n\n\\subsubsection{\\bf Finite operational partition} \nLet $A$ be a unital $C^*$-algebra. A {\\it finite operational partition of unity of size $k$ } is a set \n$X = \\{x_1, ..., x_k \\}$ of elements of $A$ satisfying \n$$\\sum_{i=1}^k x_i^* x_i = 1_A.$$\n\nWe remark that the similar terminology ``finite partition'' is usually used in the following different form: \na finite subset $ \\{x_1, ..., x_k \\}$ of $A$ is called a finite partition of unity if \nits elements are nonnegative operators in $A$ such that $1_A = \\sum_{i=1}^k x_i$. See \\cite{NS} or \\cite{OP}. \n\n\\subsubsection{{\\bf Density matrix} $\\rho[X]$} \n\nLet $\\phi$ be a state of $A$. \nTo a finite operational partition $X$ of unity of size $k$, \nwe associate a $k \\times k$ density matrix $\\rho_\\phi[X]$ such that \nthe $(i,j)$-coefficient $\\rho_\\phi[X] (i,j)$ of $\\rho_\\phi[X]$ is given by \n$$ \\rho_\\phi[X] (i,j) = \\phi(x_j^*x_i), \\quad i,j = 1, \\cdots, k.$$ \nIn the case where $A$ is a finite von Neumann algebra and $\\phi$ is a given tracial state $\\tau$ of $A,$ \nwe denote $\\rho_\\tau[X]$ simply by $\\rho[X].$\n\\smallskip\n\n\\subsubsection{{\\bf Finite operational partition induced by a unitary} $u$} \nNow let $ M_n(\\mathbb{C}) $ be the algebra of $n\\times n$ complex matrices and let \n${\\rm Tr}$ be the trace with ${\\rm Tr}(p) = 1 $ for every minimal projection $p$. \nLet $L$ be a finite von Neumann algebra, and\n let $\\tau_L$ be a fixed normal faithful tracial state. \nLet $M = M_n(\\mathbb{C}) \\otimes L,$ and let $\\tau_M = {\\rm Tr}\/ n \\otimes \\tau_L$. \nLet $u$ be a unitary in $M_n(\\mathbb{C}) \\otimes L,$ \nand let $u = \\sum_{i,j} e_{ij} \\otimes u_{ij}, \\ (u_{ij} \\in L),$ \nwhere $\\{e_{ij}\\}_{i,j = 1, \\cdots, n}$ is a set of matrix units of $M_n(\\mathbb{C})$. \nWe consider the set \n$$U = \\{\\frac 1{\\sqrt n} {u_{ij}} \\ ; \\ i, j = 1, \\cdots, n\\}.$$ \nAlthough it is not essential, we renumber the elements of $U$ for convenience. \nNamely, if $kn+1 \\leq i \\leq (k+1)n, $ for some $k = 0, 1, \\cdots, n-1,$ then we put \n$$u_i = \\frac 1{\\sqrt n} {u_{i-kn, \\ k+1}}.$$\n(For instance, if $n = 2$, then $u_1 = u_{11}\/\\sqrt 2$, $u_2 = u_{21}\/\\sqrt 2$, $u_3 = u_{12}\/\\sqrt 2$ and $u_4 = u_{22}\/\\sqrt 2$.) \nIt is clear that the correspondence $i \\longleftrightarrow (i-kn, k+1)$, $k = 0, 1, \\cdots, n-1$, \nis one-to-one. \nSince $u$ is a unitary, clearly the set $U$ is a finite operational partition of unity of size $n^2$. \nWe call this set $U$ the {\\it finite operational partition of unity induced by} $u$. \n\\smallskip\n\n\\subsubsection{{\\bf von Neumann entropy} $S(\\rho[U] )$} \nWe consider \nthe von Neumann entropy $S(\\rho_\\phi[U] )$ of the density operator $\\rho_\\phi[U] $ \nin order to characterize the mutual orthogonality for subfactors.
\nSo, we assume that our state $\\phi$ is the given normalized trace $\\tau_L$ and \n$$ S(\\rho[U] ) = {\\rm Tr}(\\eta(\\rho[U] )).$$ \n\\smallskip\n\n\\subsubsection{\\bf Theorem.} \n{\\it \nLet $L$ be a finite von Neumann algebra and let $\\tau_L$ be a normalized trace of $L$. \nWe let $ M = M_n(\\mathbb{C}) \\otimes L$ and $\\tau = {\\rm Tr} \/ n \\otimes \\tau_L$.\nAssume that $N = M_n(\\mathbb{C}) \\otimes 1_L$ \nand that $u$ is a unitary operator in $M$. \nThen the following conditions are equivalent: \n\\begin{enumerate}\n \\item $N$ and $uNu^*$ are mutually orthogonal; \n \\item $n^2 \\rho[U] $ is the $n^2 \\times n^2$ identity matrix; \n \\item $ S(\\rho[U] ) = 2 \\log n = \\log\\dim N. $ \n\\end{enumerate}\nHere $U$ is the finite operational partition of unity induced by $u$. \n}\n\\smallskip\n\n\\begin{proof} \nFirst we remark that \n$$\\rho[U](i,j) = \\frac 1n \\tau_L(u_{j-ln, \\ l+1}^* u_{i-kn, \\ k+1})$$ \nwhere $u_i = (1 \/ {\\sqrt n}) u_{i-kn, \\ k+1} $, \nfor some $k = 0, 1, \\cdots, n-1$ with $kn+1 \\leq i \\leq (k+1)n, $ \nand \n$u_j = ( 1 \/ {\\sqrt n}) {u_{j-ln, \\ l+1}}$, \nfor some $l = 0, 1, \\cdots, n-1$ with \n$ln+1 \\leq j \\leq (l+1)n $. \n\\smallskip\n\n(1) $\\Rightarrow$ (2): \nAssume that $N$ and $uNu^*$ are mutually orthogonal. \nThen by Theorem 3.2.1 and by the definition of $\\rho[U],$ \nthe $n^2 \\times n^2$ density matrix $\\rho[U]$ is the diagonal \nmatrix such that \n$$\\rho[U] (i,i) = \\frac 1{n^2} \\quad \\text{for} \\ i = 1, 2, \\cdots, n^2.$$\n\\smallskip\n\n(2) $\\Rightarrow$ (3): \nClearly, the von Neumann entropy is $S(\\rho [U] ) = 2 \\log n$, which equals $\\log\\dim N$. \n\\smallskip\n\n(3) $\\Rightarrow$ (2): \nAssume that $S(\\rho[U] ) = \\log n^2$. \nLet $(\\lambda_1, \\cdots, \\lambda_{n^2})$ be an eigenvalue sequence of $\\rho[U] $ and let \n$(p_1, \\cdots, p_{n^2})$ be the corresponding sequence of the minimal projections. \nThen there exists an $n^2 \\times n^2$ unitary matrix $w$ so that \n$$w\\rho[U] w^* = \\sum_{i = 1} ^{n^2} \\lambda_i p_i.$$ \nSince \n$$ \\log n^2 = S(\\rho[U] )= \\sum_{i = 1} ^{n^2} \\eta(\\lambda_i ), $$\nthe strict concavity of the function $\\eta$ implies that \n$$\\lambda_i = \\frac 1{n^2} \\quad \\text{for all} \\quad i = 1, 2, \\cdots, n^2$$\nso that \n$$w\\rho[U] w^* = \\frac 1{n^2} 1_{M_{n^2}(\\mathbb{C})}.$$\nHence (2) holds. \n\\smallskip\n\n(2) $\\Rightarrow$ (1): \nBy the definition of $\\rho[U]$ and the condition (2), we have that \n$$\\delta_{ij} \\frac 1{n^2} = \\rho[U](i,j) = \\frac 1n \\tau_L(u_{j-ln, \\ l+1}^* u_{i-kn, \\ k+1}).$$ \n\nThis relation means precisely that \n$\\tau_L(u_{ij}^* u_{kl}) = \\delta_{ik}\\delta_{jl} \\frac 1n$. \nHence by Theorem 3.2.1, $N$ and $uNu^*$ are mutually orthogonal. \n\\end{proof}\n\\vskip 0.3cm\n\n\\subsubsection{\\bf Note.} \nTheorem 3.3.5 means that the mutual orthogonality of inner conjugate \nsubfactors is characterized by the entropy attaining the maximum value \n$\\log(\\dim N)$, the logarithm of the dimension of the subfactors. \n\nIn fact, since the density matrix $\\rho[U]$ is an $n^2 \\times n^2$ matrix and \nthe function $\\eta$ is operator concave, the value $2 \\log n$ is the maximum. \n\n\\subsubsection{\\bf Note.} \nThe proof shows that the statement of Theorem 3.3.5 does not depend on the choice of matrix units. \n\n\n\\subsection{\\bf Subfactors of matrix algebras}\nLet $A$ and $B$ be subalgebras of $M_k(\\mathbb{C})$ and assume that both subalgebras are isomorphic to \n$M_n(\\mathbb{C})$. Then $k = mn$ for some positive integer $m$.
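\nIn this matrix setting, the criterion of Theorem 3.3.5 is also easy to test numerically. The following sketch is only an illustration: it uses numpy, takes $n = m = 2$, and compares the flip unitary $u = \\sum_{i,j} e_{ij} \\otimes e_{ji}$, for which $uNu^* = 1 \\otimes M_m(\\mathbb{C})$ is mutually orthogonal to $N = M_n(\\mathbb{C}) \\otimes 1$, with the identity unitary, for which $uNu^* = N$. \n\\begin{verbatim}
import numpy as np

def rho_of_unitary(u, n, m):
    # Density matrix rho[U] of the operational partition U = {u_ij over sqrt(n)},
    # where the u_ij are the m x m blocks of u and the state is tau_L,
    # the normalized trace of M_m(C).
    blocks = [u[i*m:(i+1)*m, j*m:(j+1)*m] for j in range(n) for i in range(n)]
    k = n * n
    rho = np.zeros((k, k), dtype=complex)
    for p in range(k):
        for q in range(k):
            # rho[U](p, q) = (1 over n) * tau_L(u_q^* u_p)
            rho[p, q] = np.trace(blocks[q].conj().T @ blocks[p]) * (n * m)**-1
    return rho

def von_neumann_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

n = m = 2
# flip unitary sum_{i,j} e_{ij} (x) e_{ji}  (this construction needs n == m)
flip = np.zeros((n * m, n * m))
for i in range(n):
    for j in range(m):
        flip[i * m + j, j * m + i] = 1.0

for name, u in [("flip", flip), ("identity", np.eye(n * m))]:
    S = von_neumann_entropy(rho_of_unitary(u, n, m))
    print(name, S, 2 * np.log(n))
\\end{verbatim}
\nFor the flip unitary the computed entropy is $2 \\log 2$, and for the identity it is $0$, in accordance with Theorem 3.3.5. \n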
We can assume that \n$M_k(\\mathbb{C}) = M_n(\\mathbb{C}) \\otimes M_m(\\mathbb{C})$ and \n$A = M_n(\\mathbb{C}) \\otimes \\mathbb{C}1$. \nThere exists a unitary matrix $u \\in M_k(\\mathbb{C})$ such that $B = uAu^*$. \nWe denote by $u(A,B)$ this unitary and also by $U(A,B)$ the finite operational \npartition of unity induced by $u(A,B)$. \nThen we have the following: \n\n\\subsubsection{} \nPetz's characterization of complementarity was given in \\cite[Theorem 4]{Pe1}: \nThe subalgebra $u(1 \\otimes M_m(\\mathbb{C}) ) u^*$ is complementary to $1 \\otimes M_m(\\mathbb{C})$ \nif and only if \n$$\\frac mn \\sum_{i,j = 1}^n |u_{ij} \\rangle \\langle u_{ij}| = 1.$$ \nWhen $n = m$, this condition means that $\\{u_{ij}\\}_{ij}$ is an orthonormal basis of $M_n(\\mathbb{C})$ \nwith respect to the inner product given by ${\\rm Tr}$. \n\\vskip 0.3cm\n\n\\smallskip\nOur characterization is the following corollary, obtained from Theorem 3.3.5 by letting $L = M_m(\\mathbb{C})$. \n\n\\subsubsection{\\bf Corollary.}\n{\\it\nLet $A$ and $B$ be subalgebras of $M_{k}(\\mathbb{C})$ and assume that both subalgebras are isomorphic to \n$M_n(\\mathbb{C})$. Then $A$ and $B$ are mutually orthogonal if and only if \n$$ S(\\rho[U(A,B)] ) = 2 \\log n = \\log(\\dim A). $$ \n}\n\\smallskip\n\\subsubsection{\\bf Note} \nIn 3.4.1 and 3.4.2 above, the numbers $m$ and $n$ must satisfy $m \\geq n.$ \n\\smallskip\n\n\\subsubsection{\\bf Comparison with the case of maximal abelian subalgebras.} \nWe remark that Corollary 3.4.2 corresponds to \\cite [Corollary 3.2, Corollary 3.3] {Ch1}: \n\\smallskip\n\nAssume that $A$ and $B$ are maximal abelian subalgebras of $M_{n}(\\mathbb{C})$. \nThen there exists a unitary $u$ in $M_{n}(\\mathbb{C})$ with $uAu^* = B$, \nand we have that \n\\begin{enumerate}\n \\item $h(A \\mid B) = H(b(u)).$ \n \\item \n $A$ and $B$ are mutually orthogonal if and only if \n $$h(A \\mid B) = \\log n = \\log(\\dim A).$$ \n\\end{enumerate}\n\nHere, $h(A \\mid B) $ is the conditional relative entropy for $A$ and $B$ in \\cite{Ch1} and \n$ H(b(u)) $ is the entropy for the unistochastic matrix $b(u)$ induced by the unitary $u$ in \\cite{ZSKS}. \n\\smallskip\n\nThis means that $A$ and $B$ are mutually orthogonal if and only if \n$h(A \\mid B)$ takes the maximum value $\\log(\\dim A)$, \nbecause $\\log n$ is the maximum value by the definition of $H(b(u))$ and by \nthe property of the function $\\eta$. \n\\smallskip\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}