diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlizx" "b/data_all_eng_slimpj/shuffled/split2/finalzzlizx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlizx" @@ -0,0 +1,5 @@ +{"text":"\\section*{Acknowledgments}\n\nThis work received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7\/20072013\/ERC grant agreement no. [319456] dHCP project). The research was supported by the Wellcome\/EPSRC Centre for Medical Engineering at King's College London [WT 203148\/Z\/16\/Z]; the Medical Research Council [MR\/K006355\/1]; and the National Institute for Health Research (NIHR) Biomedical Research Centre based at Guy's and St Thomas' NHS Foundation Trust and King's College London. The views expressed are those of the author and not necessarily those of the NHS, the NIHR or the Department of Health. The author also acknowledges the Department of Perinatal Imaging \\& Health at King's College London and advice and support from Edgar Dobriban, Jo V. Hajnal and Daan Christiaens.\n\n\\clearpage\n\n\\bibliographystyle{apalike}\n\n\\section{Introduction}\n\n\\label{sec:INTR}\n\nRandom matrix theory is at the core of modern high dimensional statistical inference~\\citep{Yao15} with applications in physics, biology, economics, communications, computer science or imaging~\\citep{Couillet13,Paul14,Bun17}. In the large dimensional scenario, classical asymptotics where the available number of samples of a given population $N$ is much larger than the dimension $M$ are no longer valid. A key contribution in this setting is the Mar\\v{c}enko-Pastur theorem~\\citep{Marcenko67,Silverstein95a}, which characterizes the limiting behaviour of the empirical spectral distribution (ESD) for matrices with random entries when $N\\to\\infty$. Namely, there exists a fixed point equation relating the eigenvalues of the empirical and population distributions, which can be used for inference under the mediation of appropriate numerical techniques. Using this characterization, subsequent asymptotics-based inference can be performed, for instance, for dimensionality reduction, hypothesis testing, signal retrieval, classification or covariance estimation~\\citep{Yao15}. In addition,~\\cite{Silverstein95b} develops on ideas outlined in~\\cite{Marcenko67} to analyze the support of the ESD based on the monotonicity of the inverse of the function involved in the fixed point equation. Emerging from statistical models in array processing, the extension in~\\cite{Wagner12} focuses on generalizing previous results to the case where the matrix rows are independent but drawn from a collection of population distributions. Here, the relation between the population covariances and the limiting behaviour of the sample eigenvalues is governed by a system of non-linear equations and, although asymptotic sample eigenvalue confinement has also been proved~\\citep{Kammoun16}, a simple description of the support is no longer at hand.\n\nDespite the variety of potential applications, not many works have practically confronted the numerical issues involved in computing the ESD from a given population distribution. The most flexible package that we have identified has been described in~\\cite{Dobriban15} (SPECTRODE), where the fixed point equation in~\\cite{Silverstein95a} is transformed into an ordinary differential equation (ODE) with starting point obtained by the solution of the fixed point equation on a single point within the support. 
In this work the author shows that the proposed method compares favourably with straightforward fixed point solvers, both in terms of accuracy and computational efficiency. In addition, accuracy limitations of Monte Carlo simulations~\\citep{Jing10} are showcased and, while recognizing their independent interest, arguments are provided about the practical limitations~\\citep{Rao08} or limited applicability~\\citep{Olver13} of other approaches. In our experiments, this package has revealed an exquisite accuracy and efficiency in computing the ESD and determining its support. However, the applicability or efficiency of an ODE approach may be compromised for more general models such as~\\cite{Wagner12}. This is mainly due to the increased computational complexity of evaluating the Jacobian of the system of fixed point equations, which no longer depends on the eigenvalues of the populations but on a combination of their covariances. Another interesting tool, conceived to solve the more general problem of recovering the population distribution from the observed eigenvalues by means of the so called QuEST function, has been described in~\\cite{Ledoit17}. Solving this problem is required, for instance, for covariance estimation. The literature review in this work points to a systematic limitation of most precedent methods, as they are only capable to obtain estimates for very particular forms of the population distribution. In addition, it introduces an interesting feature not present in~\\cite{Dobriban15}, the use of a non-uniform grid with increased resolution near the support edges, which allows more efficient approximations. However, once again, this technique is designed for cases where a single nonlinear equation is to be solved while generalizations to systems of equations do not seem straightforward.\n\nThis technical note describes a series of tools that have been developed to compute the ESD for the mixture of populations case in~\\citep{Wagner12}. This model or certain analogous forms, has attracted interest in areas such as telecommunications~\\citep{Moustakas07,Couillet11,Dupuy11}, machine learning~\\citep{Benaych-Georges16,Couillet16}, medical imaging~\\citep{Cordero-Grande19} or genetics~\\citep{Fan19}. Our method is based on directly solving the system of nonlinear equations. However, the results in~\\cite{Dobriban15} and our own analysis, have identified certain limitations in commonly reported algorithms based on fixed point iteration solvers, so a set of technical refinements are proposed in this note. These include the use of Anderson mixing to accelerate the fixed point iterations, an homotopy continuation method to prevent non-admissible solutions, a set of heuristics to detect the support of the distribution and to adapt the approximation grid to the ESD shape, and a formulation that allows for efficient computations in graphical processing units (GPU). We validate our method by comparison with~\\cite{Dobriban15} and~\\cite{Ledoit17}, both in terms of efficiency and accuracy. As our methods are envisaged to operate on more general models than those contemplated in~\\cite{Dobriban15,Ledoit17}, they do not make use of any precomputed information about the distribution support. Nevertheless, we show that they are reliable enough and their efficiency and accuracy is comparable or superior to that in~\\cite{Dobriban15,Ledoit17}. In addition, comparisons with Monte Carlo simulations show that they are also capable of providing accurate results for more general models. 
A \\textsmaller{\\textsc{MATLAB}} implementation of our approach, that we will refer to as the MIXANDMIX (Mixtures by Anderson Mixing) method, including the scripts required to replicate the experiments in this note, has been made publicly available at \\url{https:\/\/github.com\/mriphysics\/MixAndMix\/releases\/tag\/1.1.0}. This note is organized as follows: in~\\S~\\ref{sec:THEO} we review different random matrix models, in~\\S~\\ref{sec:METH} we describe the main features of our method, in~\\S~\\ref{sec:RESU} we validate the proposed technique, in~\\S~\\ref{sec:DISC} we discuss the implications of the obtained results and in~\\S~\\ref{sec:CONC} we end up with some conclusions.\n\\section{Theory}\n\n\\label{sec:THEO}\n\nConsider a complex random matrix $\\mathbf{X}$ of size $N\\times M$. We are interested in the eigenvalue distribution of the sample covariance $\\mathbf{Y}=\\displaystyle N^{-1}\\mathbf{X}^H\\mathbf{X}$ in the asymptotic regime where both $M\\to\\infty$ and $N\\to\\infty$ but they maintain a fixed ratio $\\gamma=M\/N$ with $\\gamma>0$. For simplicity we assume the entries of the matrix are zero mean Gaussian distributed, but keeping in mind that the literature contemplates different generalizations. Three main scenarios are contemplated:\n\n\\subsection{IID standard entries}\n\n\\label{sec:IIDS}\n\nWithin this model, that we call \\emph{standard} model, we can write $\\mathbf{X}^H\\sim\\boldsymbol{\\mathcal{CN}}\\displaystyle\\left(\\mathbf{0}_{MN},\\mathbf{I}_{MN}\\right)$, with $\\boldsymbol{\\mathcal{CN}}$ denoting a circularly symmetric complex Gaussian distribution, $\\mathbf{0}_{MN}$ the $M\\times N$ matrix with zero entries, and $\\mathbf{I}_{MN}$ the $MN\\times MN$ identity matrix.~\\cite{Marcenko67} showed that in this case the distribution of eigenvalues of the sample covariance asymptotically converges to\n\\begin{equation}\n\\label{ec:MPLA}\nf_{\\gamma}^{\\mathbf{I}}(x)=\\begin{cases}\\displaystyle\\frac{\\sqrt{(\\gamma_{+}-x)(x-\\gamma_{-})}}{2\\pi\\gamma x} & \\mbox{if }\\gamma_{-}\\leq x\\leq \\gamma_{+}\\\\ 0 & \\mbox{otherwise,}\\end{cases}\n\\end{equation}\nwith $\\gamma_{-}=(1-\\sqrt\\gamma)^2$ and $\\gamma_{+}=(1+\\sqrt\\gamma)^2$ defining the support of the distribution. Note that if $\\gamma\\geq 1$, the distribution has a $1-\\gamma^{-1}$ point mass ($\\gamma>1$) or is locally unbounded ($\\gamma=1$) at $x=0$. 
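For later reference, the following self-contained Python sketch (ours, not part of the distributed package; the matrix sizes and bin counts are illustrative) evaluates the closed form~\eqref{ec:MPLA} and compares it with the spectrum of a finite sample covariance drawn from the standard model:
\begin{verbatim}
import numpy as np

def mp_density(x, g):
    # Marcenko-Pastur density f_gamma^I of (ec:MPLA) for aspect ratio g
    gm, gp = (1 - np.sqrt(g))**2, (1 + np.sqrt(g))**2
    f = np.zeros_like(x)
    s = (x >= gm) & (x <= gp)
    f[s] = np.sqrt((gp - x[s]) * (x[s] - gm)) / (2 * np.pi * g * x[s])
    return f

M, N = 500, 1000  # gamma = M/N = 0.5
X = (np.random.randn(N, M) + 1j * np.random.randn(N, M)) / np.sqrt(2)
ev = np.linalg.eigvalsh(X.conj().T @ X / N)          # spectrum of Y
hist, edges = np.histogram(ev, bins=50, density=True)
xc = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - mp_density(xc, M / N))))  # small for large M, N
\end{verbatim}
As recalled above, the accuracy of such Monte Carlo estimates is limited, which is one of the motivations for computing the ESD numerically from the population distribution.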
\subsection{IID rows}

\label{sec:IIDC}

In this scenario, we can write $\mathbf{X}^H\sim\boldsymbol{\mathcal{CN}}\displaystyle\left(\mathbf{0}_{MN},\boldsymbol{\Lambda}_M\otimes\mathbf{I}_N\right)$, with $\boldsymbol{\Lambda}_M$ a given population covariance matrix.~\cite{Marcenko67} and~\cite{Silverstein95a} respectively derived expressions for probabilistic and almost sure limits of the sample covariance spectrum using its Stieltjes transform
\begin{equation}
m(z)=\int_{\mathbb{R}}\frac{f_{\gamma}^{\boldsymbol{\Lambda}_M}(x)}{x-z}dx,\quad z\in\mathbb{C}\setminus\mathbb{R},
\end{equation}
for which the inversion formula gives back the ESD by
\begin{equation}
\label{ec:INST}
f_{\gamma}^{\boldsymbol{\Lambda}_M}(x)=\frac{1}{\pi}\lim_{\epsilon\to 0^{+}}\Im\left\{m(x+i\epsilon)\right\}.
\end{equation}
If, in a discrete setting, we denote the increasingly sorted eigenvalues of $\boldsymbol{\Lambda}_M$ by $\{\lambda_1,\ldots,\lambda_M\}$, when $N\to\infty$ the eigenvalue distribution of the matrix $\tilde{\mathbf{Y}}=\displaystyle N^{-1}\mathbf{X}\mathbf{X}^H$ converges to a function whose Stieltjes transform $\tilde{m}(z)$ is related to the population distribution by the fixed point equation
\begin{equation}
\tilde{m}(z)=\displaystyle\left(-z+\frac{\gamma}{M}\sum_{m=1}^M\frac{\lambda_m}{1+\lambda_m \tilde{m}(z)}\right)^{-1}
\end{equation}
and to the Stieltjes transform of the corresponding limiting function for the spectral distribution of the sample covariance matrix $\mathbf{Y}$, $m(z)$, by $\tilde{m}(z)=\gamma m(z)+(\gamma-1)/z$. In addition,~\cite{Silverstein95b}, following the guidelines in~\cite{Marcenko67}, showed that the support of the ESD can be characterized by studying the zeros of the derivative of the inverse map, $z'(\tilde{m})$, in appropriate domains. This property, together with the analyticity of the empirical density within its support, are the keys to the SPECTRODE approach in~\cite{Dobriban15}.

To clarify the connections between the different models, it is more convenient to express the previous relations in terms of the auxiliary function $e(z)=-\displaystyle\frac{1}{\gamma z\tilde{m}(z)}-\frac{1}{\gamma}$, giving an equivalent fixed point equation,
\begin{equation}
\label{ec:CFST}
e(z)=\frac{1}{M}\sum_{m=1}^M\lambda_m\left(\frac{\lambda_m}{1+\gamma e(z)}-z\right)^{-1},
\end{equation}
and an expression for the Stieltjes transform of the ESD,
\begin{equation}
\label{ec:STSD}
m(z)=\frac{1}{M}\sum_{m=1}^M\left(\frac{\lambda_m}{1+\gamma e(z)}-z\right)^{-1}.
\end{equation}
We refer to this model as the \emph{single population} model.
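As a concrete illustration of~\eqref{ec:CFST}-\eqref{ec:STSD} and of the inversion formula~\eqref{ec:INST}, the following Python sketch (ours; the grid, tolerance and iteration count are illustrative) approximates the ESD of a single population model by plain fixed point iteration on the grid $z=x+i\epsilon^2$; the slow convergence of this naive scheme is precisely what motivates the refinements of~\S~\ref{sec:METH}:
\begin{verbatim}
import numpy as np

def esd_single_population(lam, g, x, eps=1e-3, iters=5000):
    # Plain fixed point iteration for e(z) in (ec:CFST) at z = x + i*eps^2,
    # followed by (ec:STSD) and the inversion formula (ec:INST)
    z = x + 1j * eps**2
    e = np.full(x.shape, 1j)                  # start in the upper half plane
    for _ in range(iters):                    # slow; see Sec. Methods
        e = np.mean(lam[:, None] / (lam[:, None] / (1 + g * e) - z), axis=0)
    m = np.mean(1.0 / (lam[:, None] / (1 + g * e) - z), axis=0)
    return np.imag(m) / np.pi

lam = np.array([1.0, 8.0])                    # two-delta population
x = np.linspace(1e-3, 16.0, 2000)
f = esd_single_population(lam, 0.5, x)
\end{verbatim}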
\subsection{Independent rows}

\label{sec:INCO}

In this \emph{mixture of populations} model, the matrix is drawn from $\mathbf{X}^H\mathbin{\sim}\boldsymbol{\mathcal{CN}}\displaystyle\left(\mathbf{0}_{MN},\sum_{k=1}^{K}\boldsymbol{\Lambda}^k_M\otimes\mathbf{D}^k_N\right)$, where $\mathbf{D}^k_N$ is a diagonal indicator matrix with ones in the diagonal elements corresponding to those rows sampled according to the population covariance $\boldsymbol{\Lambda}^k_M$ and zeros otherwise. The equations relating the empirical and population distributions for $N\to\infty$ have been derived in~\cite{Wagner12}, also making use of the Stieltjes transform of the limiting spectral distribution $f_\gamma^{\prescript{}{K}{\boldsymbol{\Lambda}}^{}_{M}}$, $m(z)$, and the auxiliary functions $e_j(z)$, $1\leq j\leq K$. These functions are related to the population covariances by a system of nonlinear equations
\begin{equation}
\label{ec:CSST}
e_j(z)=\frac{1}{M}\tr\left(\boldsymbol{\Lambda}_j\left(\sum_{k=1}^{K}\frac{\alpha_k\boldsymbol{\Lambda}_k}{1+\gamma e_k(z)}-z\mathbf{I}_M\right)^{-1}\right),
\end{equation}
for which there is a unique solution in $\mathbb{C}\setminus\mathbb{R}^{+}$. $m(z)$ is expressed in terms of these functions as
\begin{equation}
\label{ec:SSSD}
m(z)=\frac{1}{M}\tr\left(\left(\sum_{k=1}^{K}\frac{\alpha_k\boldsymbol{\Lambda}_k}{1+\gamma e_k(z)}-z\mathbf{I}_M\right)^{-1}\right),
\end{equation}
with $\alpha_k=\tr(\mathbf{D}^k)/N$. Note that~\eqref{ec:CSST} and~\eqref{ec:SSSD} reduce to~\eqref{ec:CFST} and~\eqref{ec:STSD} when $K=1$. There are two main limitations to extending the SPECTRODE method to this setting. First, we are unaware of studies characterizing the support of the measures inducing $e_j(z)$ by means of some analogue to~\cite{Silverstein95b}. Second, both potential extensions of support characterizations and the usage of ODE solvers would require the Jacobian of~\eqref{ec:CSST}, which involves additional matrix multiplications, with a penalty in computational efficiency. Thus, we have focused on developing a reliable method to solve the system~\eqref{ec:CSST} without requiring the Jacobian.
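In our implementation the basic computational kernel is the evaluation of the right hand side of~\eqref{ec:CSST}, which already requires a matrix inversion per grid point; a minimal Python sketch of this map and of~\eqref{ec:SSSD} (ours, with hypothetical inputs) reads:
\begin{verbatim}
import numpy as np

def mixture_map(e, z, Lams, alphas, g):
    # One evaluation of the fixed point map (ec:CSST):
    # e_j <- tr(Lam_j (sum_k alpha_k Lam_k / (1 + g e_k) - z I)^{-1}) / M
    M = Lams[0].shape[0]
    A = sum(a * L / (1 + g * ek) for a, L, ek in zip(alphas, Lams, e))
    R = np.linalg.inv(A - z * np.eye(M))
    return np.array([np.trace(L @ R) / M for L in Lams])

def mixture_stieltjes(e, z, Lams, alphas, g):
    # Stieltjes transform (ec:SSSD) at a solution e of (ec:CSST)
    M = Lams[0].shape[0]
    A = sum(a * L / (1 + g * ek) for a, L, ek in zip(alphas, Lams, e))
    return np.trace(np.linalg.inv(A - z * np.eye(M))) / M
\end{verbatim}
Note that a Jacobian-based solver would additionally require derivatives of this map with respect to each $e_k$, involving extra matrix products on top of the inversion, which is the efficiency penalty alluded to above.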
\section{Methods}

\label{sec:METH}

In this Section we describe our numerical solver for the system of equations in~\eqref{ec:CSST}.

\subsection{Support detection}

\label{sec:SUDE}

When $\gamma\to 0$ the ESD tends to the population distribution for the standard and single population models, while for the mixture of populations it is governed by an effective single population distribution $f_\gamma^{\prescript{}{K}{\boldsymbol{\Lambda}}^{}_{M}}=f_0^{\overline{\boldsymbol{\Lambda}}_M}$ with $\overline{\boldsymbol{\Lambda}}_M=\displaystyle\sum_{k=1}^K\alpha_k\boldsymbol{\Lambda}_M^k$, so that we have $f_0^{\overline{\boldsymbol{\Lambda}}_M}(x)=\displaystyle\frac{1}{M}\sum_{m=1}^M\delta(x-\overline{\lambda}_m)$, with $\overline{\lambda}_m$ the $m$-th eigenvalue of $\overline{\boldsymbol{\Lambda}}_M$. Thus, in this limiting case the ESD support only includes the eigenvalues of an equivalent single population distribution. On the other hand, for $\gamma>0$, $[a,b]=\left [t^{-1}(1-\sqrt{\gamma})^2\displaystyle\min_k\lambda_1^k,t(1+\sqrt{\gamma})^2\max_k\lambda_M^k\right ]$, with $\lambda_m^k$ denoting the $m$-th eigenvalue of $\boldsymbol{\Lambda}_M^k$ and $t>1$, provides lower and upper bounds on the limiting support.

To consider these two extreme cases, we group and sort the set of $P=M(K+1)$ eigenvalues $\lambda_p=\{\lambda_m^k,\overline{\lambda}_m\}$, $1\leq p\leq P$, and search for the $S$ segments of the support on a logarithmic grid of $M^{\text{(i)}}$ points spanning the bounds above on $x>0$, with each detected segment approximated on a grid of $M^{\text{(u)}}$ points. In our experiments $S$ has been selected by defining a minimum number of grid points $M^{\text{(m)}}$ to approximate the densities, and making $M=\max(M^{\text{(u)}},M^{\text{(m)}})$ with $M^{\text{(m)}}=100$.

\subsection{Adaptive regridding}

\label{sec:ADRE}

Additional points are added to the grid by pursuing $g'(x)\sqrt{xf''(x)}=c$, with $g(x)$ the grid mapping function and $c$ a constant. This non-uniform grid construction criterion is based on both the second order derivative of the density $f''(x)$ and the grid value $x$. The first feature, $f''(x)$, favours the allocation of grid points near the support edges, in accordance with the $\sqrt{x-x_0}$ behaviour of the distribution at the boundaries~\citep{Silverstein95b}, as well as in those areas where linear interpolation results in larger approximation errors. This is similar in spirit to the arcsine criterion in~\cite{Ledoit17} but does not use any prior information about the support edges, as such information is not available in the mixture of populations model. The second feature, $x$, favours the allocation of grid points near the upper edge of the support, which could be important for applications related to signal detection~\citep{Nadakuditi14,Dobriban19}. After $P_l=R_lP_0$ points are added to the grid, the solver for $f(\mathbf{x})$ is called on the new set of points to allow for an update of the $f''(x)$ values to be used at the next iterative regridding step. This whole process is repeated $L$ times, so $R_l$, $1\leq l\leq L$, control the final resolution of the distribution computations. The default parameters in our implementation are $R_l=1$ $\forall l$ and $L=1$.

\subsection{Homotopy continuation}

\label{sec:HOMO}

The calculation of the ESD involves a pass to the limit in~\eqref{ec:INST}, as the Stieltjes transform does not converge on the real line. In addition, the solution of~\eqref{ec:CSST} is not unique on the real line. Numerically, this may provoke spurious fixed point convergence when the current solution is far away from the optimum and the computations are being performed at locations that are close to the real line. To prevent these situations, we have emulated~\eqref{ec:INST} by homotopy continuation. We start by obtaining an approximate solution of~\eqref{ec:CSST} on a grid given by $\mathbf{z}=\mathbf{x}+\boldsymbol{\xi}^2i$, with $\boldsymbol{\xi}=\xi^0\mathbf{1}_{P_l}$ for a sufficiently large $\xi^0$ common to all $1\leq p\leq P_l$. At each iteration $i$ we compute $\varepsilon^i_p=\displaystyle\max_k|e^i_k(x_p)-e^{i-1}_k(x_p)|$, where the updates on $e$ are described in~\S~\ref{sec:ANDE}. Considering that, as discussed in~\cite{Dobriban15}, to obtain an accuracy of at least $\epsilon$ for the distribution $f(x)$ we need to solve the system of equations on a complex grid given by $x+i\epsilon^2$, we can perform the update $\xi^{j+1}_p=\max(\xi^j_p/\beta,\epsilon)$ whenever $\varepsilon^i_p\leq\varepsilon^{i-1}_p$. The iterations at the grid location indexed by $p$ are terminated when the prescribed accuracy is reached, namely when $\xi^j_p=\epsilon$ and $\varepsilon^i_p<\epsilon$. In our experiments we have used $\xi^0=1$ and $\beta=10$, for which we have observed robust and efficient performance.
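The continuation strategy can be summarized by the following Python sketch (a simplification of our actual implementation, which interleaves the $\xi$ updates with the fixed point iterations of~\S~\ref{sec:ANDE}; the routine \texttt{solve\_at} standing for those iterations is hypothetical):
\begin{verbatim}
import numpy as np

def homotopy_solve(x, solve_at, xi0=1.0, beta=10.0, eps=1e-5):
    # Solve the system on z = x + i*xi^2, driving xi towards eps only
    # when the fixed point residual decreases, as in Sec. Homotopy
    xi = np.full(x.shape, xi0)
    e, res_prev = None, np.inf
    while True:
        z = x + 1j * xi**2
        e, res = solve_at(z, e)                # warm-started iterations
        if np.all((xi <= eps) & (res < eps)):  # prescribed accuracy met
            return e
        shrink = res <= res_prev               # residual decreased
        xi = np.where(shrink, np.maximum(xi / beta, eps), xi)
        res_prev = res
\end{verbatim}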
\subsection{Anderson acceleration}

\label{sec:ANDE}

The experiments in~\cite{Dobriban15} showed that when solving the IID rows problem in~\S~\ref{sec:IIDC} by a straightforward fixed point algorithm, in our context when performing the updates on $e(z)$ directly using~\eqref{ec:CFST}, convergence is often very slow; they reported situations where their SPECTRODE code could be $1000\times$ quicker while simultaneously obtaining $1000\times$ higher accuracy. In this note we show that this apparent limitation of the fixed point iterates can be overcome by using techniques to accelerate their convergence. As discussed in~\S~\ref{sec:INCO}, the accelerated convergence achievable by methods requiring the Jacobian of the fixed point identities may not compensate for the increased cost per iteration involved in computing the Jacobian. Thus, we have resorted to Anderson mixing~\citep{Anderson65}, a technique not requiring explicit Jacobian calculations that has demonstrated good practical performance, on occasion providing competitive results when compared to gradient-based approaches~\citep{Ramiere15}.

Considering a given multidimensional fixed point mapping $\mathbf{g}(\mathbf{e})$ such as~\eqref{ec:CSST}, Anderson iterations are computed as
\begin{equation}
\mathbf{e}^{i+1}=\mathbf{g}(\mathbf{e}^{i})-\sum_{q=1}^{Q_i}(\mathbf{g}(\mathbf{e}^{i-Q_i+q})-\mathbf{g}(\mathbf{e}^{i-Q_i+q-1}))\nu^i_q,
\end{equation}
with $Q_i$ denoting the number of iterations whose history is used for the update at iteration $i$ and $\boldsymbol{\nu}^i=(\nu^i_1,\ldots,\nu^i_{Q_i})^T$ obtained by solving a linear least squares problem involving the fixed point updates $\mathbf{h}^i(\mathbf{e}^i)=\mathbf{g}(\mathbf{e}^i)-\mathbf{e}^i$ and their differences $\Delta\boldsymbol{h}^i=\boldsymbol{h}^{i}-\boldsymbol{h}^{i-1}$ arranged in a $K\times Q_i$ matrix $\Delta\boldsymbol{H}^i=[\Delta\boldsymbol{h}^{i-Q_i+1},\ldots,\Delta\boldsymbol{h}^i]$. Due to the potential ill-posedness of this system, we have actually solved a damped version~\citep{Scieur19,Henderson19},
\begin{equation}
\boldsymbol{\nu}^i=\argmin_{\boldsymbol{\nu}}\|\boldsymbol{h}^i-\Delta\boldsymbol{H}^i\boldsymbol{\nu}\|_2^2+\lambda^i\|\boldsymbol{\nu}\|_2^2,
\end{equation}
with damping parameter given by $\lambda^i=\displaystyle 0.1\max_{k,q}|\Delta H^i_{k,q}|$. We have set $Q_i=\min(2,i-1)$ on the basis of our empirical testing and in agreement with the experimental results in~\cite{Ramiere15}.
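A minimal Python sketch of the damped Anderson update (ours; it treats a single grid point, with the history matrices stored column-wise) is:
\begin{verbatim}
import numpy as np

def anderson_step(G, H):
    # G: K x (Q+1) map values g(e^{i-Q}),...,g(e^i)
    # H: K x (Q+1) residuals h^{i-Q},...,h^i, with h(e) = g(e) - e
    dG, dH = np.diff(G, axis=1), np.diff(H, axis=1)  # K x Q differences
    lam = 0.1 * np.max(np.abs(dH))                   # damping lambda^i
    Q = dH.shape[1]
    # nu = argmin ||h^i - dH nu||^2 + lam ||nu||^2 (normal equations)
    nu = np.linalg.solve(dH.conj().T @ dH + lam * np.eye(Q),
                         dH.conj().T @ H[:, -1])
    return G[:, -1] - dG @ nu                        # e^{i+1}
\end{verbatim}
With $Q_i=\min(2,i-1)$, at most two columns of history are kept, so the extra cost per iteration is negligible compared with the matrix inversion in~\eqref{ec:CSST}.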
\subsection{GPU acceleration}

\label{sec:GPUA}

GPU-based implementations of the SPECTRODE method~\citep{Dobriban15} appear involved due to the sequential nature of ODE solvers. In contrast, acceleration of the ESD computation in the QuEST method~\citep{Ledoit17} seems more plausible, as it solves for a zero of a function independently at the different grid locations, but the authors have not discussed this aspect. Our code has been architected so that the most demanding routines support both CPU and GPU based parallel computations. This includes the parallel computation of the solutions of the system of equations in~\eqref{ec:CSST} at the different grid locations, but also the parallel computation of different ESDs, required, for instance, in patch-based image denoising applications~\citep{Cordero-Grande19}.

\section{Results}

\label{sec:RESU}

In this Section we first validate the beneficial effects of Anderson acceleration and homotopy continuation in ESD computations (\S~\ref{sec:BEFU}), then compare our method to the SPECTRODE and QuEST proposals in those regimes in which they can operate (\S~\ref{sec:COLI}), and finally provide some results on the application of our technique to the mixture of populations model (\S~\ref{sec:MIPO}). Unless otherwise stated, experiments are performed using CPU computations.

\subsection{Validation of introduced refinements}

\label{sec:BEFU}

In~\cite{Dobriban15} an experiment was performed illustrating the limitations of fixed point iterations to obtain accurate results in ESD calculations. Their method is compared with a fixed point iteration solver for the standard model in~\S~\ref{sec:IIDS}, using the closed form density in~\eqref{ec:MPLA} with $\gamma=0.5$ to assess the accuracy. Here we repeat that experiment adding our Anderson acceleration technique to the fixed point solver. The results are presented in Fig.~\ref{fig:FIG1}. First, we have been able to replicate the results in~\cite{Dobriban15}; the fixed point algorithm, roughly equivalent to the MIXANDMIX implementation with $Q=0$, i.e., without Anderson mixing, is only able to provide very moderate accuracies, despite being run for $1/\epsilon$ iterations. However, when introducing the Anderson acceleration scheme, the results are dramatically better, with MIXANDMIX and SPECTRODE demonstrating comparable performance. In this experiment the curves show a slightly better accuracy (Fig.~\ref{fig:FIG1}a) and worse computational efficiency (Fig.~\ref{fig:FIG1}b) for MIXANDMIX, but this should be taken with caution as these tests have been conducted without considering the influence of grid sizes on the approximation, which will be taken into account in the experiments in~\S~\ref{sec:COLI}. In addition, we show (Fig.~\ref{fig:FIG1}c) that although the accuracy obtained by a straightforward fixed point algorithm is poor everywhere within the support, the accuracy curve after Anderson mixing remains below the SPECTRODE curve almost everywhere.
\begin{figure}[!htb]
\begin{minipage}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/Fig01a}
\end{minipage}
\begin{minipage}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/Fig01b}
\end{minipage}
\begin{minipage}{.32\textwidth}
\includegraphics[width=\textwidth]{figs/Fig01c}
\end{minipage}
\caption{\textbf{a)} Averaged accuracy of different methods $\overline{\Delta f}=\displaystyle\sum_{p=1}^{P}|\hat{f}_{0.5}^{\mathbf{I}}(x_p)-f_{0.5}^{\mathbf{I}}(x_p)|/P$ with $\hat{f}$ denoting the computed density. \textbf{b)} Computation times $t$. \textbf{c)} Accuracy $\Delta f(x)=|\hat{f}_{0.5}^{\mathbf{I}}(x)-f_{0.5}^{\mathbf{I}}(x)|$ throughout the support (case $\epsilon=10^{-5}$).}
\label{fig:FIG1}
\end{figure}

In Fig.~\ref{fig:FIG2}a we compare the results of our method without and with homotopy continuation to those of SPECTRODE. The SPECTRODE method and ours with homotopy continuation are observed to overlap at the scale of the plot. However, when running MIXANDMIX without homotopy continuation, we can see that there exist some grid points for which the computations spuriously converge to the zero solution. We know this solution is infeasible because it provokes discontinuities in the distribution, which contradicts its expected analyticity properties~\citep{Silverstein95b}. To illustrate the reasons for these numerical issues, we have taken a grid point corresponding to one of these infeasible results, $x=2.2$. For this point, Figs.~\ref{fig:FIG2}b-e show the squared magnitude of the residuals of the fixed point maps, $|h(e)|^2$, at different complex plane values of the auxiliary function $e(z)=e(x+\delta i)$ as we approach the real line with $\delta=\{1,0.1,0.01,0.001\}$. First, for $z=x+1i$ there is a unique minimum in $\mathbb{C}^{+}$ whose basin of attraction covers the whole of $\mathbb{C}^{+}$. As we decrease $\delta$ (see for instance $z=x+0.1i$) we can track this minimum in a neighborhood of its previous location and check that it is still the only one in the upper half of the complex plane. However, a new local minimum has emerged in the lower half of the plane, but so close to the real line that its basin of attraction extends to the upper half. As we keep decreasing $\delta$, the basin of attraction of this minimum in $\mathbb{C}^{+}$ gets bigger; however, by analyticity there has to be an area around the global optimum for which the method should still converge to the global optimum, which can be ensured by homotopy continuation. This explains the problem we are observing on the left hand side: the fixed point algorithm has entered the basin of attraction of the minimum in the lower half and has not been able to escape from this area. In addition, the location of the attractor explains why the spurious distribution value obtained by the fixed point algorithm is generally pushed to $0$ when failing to converge to the right optimum.
\begin{figure}[!htb]
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig02a}
\end{minipage}
\begin{minipage}{.66\textwidth}
\includegraphics[width=0.49\textwidth]{figs/Fig02b.png}
\includegraphics[width=0.49\textwidth]{figs/Fig02c.png}\\
\includegraphics[width=0.49\textwidth]{figs/Fig02d.png}
\includegraphics[width=0.49\textwidth]{figs/Fig02e.png}
\end{minipage}
\caption{\textbf{a)} Calculated ESDs when using the SPECTRODE method, MIXANDMIX including homotopy continuation and MIXANDMIX without homotopy continuation. \textbf{b-e)} Base $10$ logarithm of the squared magnitude of the fixed point update, i.e., $\log_{10}(|h(e)|^2)$, together with corresponding isolines at \textbf{b)} $z=2.2+1i$, \textbf{c)} $z=2.2+0.1i$, \textbf{d)} $z=2.2+0.01i$ and \textbf{e)} $z=2.2+0.001i$.}
\label{fig:FIG2}
\end{figure}

\subsection{Comparison with the literature}

\label{sec:COLI}

In Fig.~\ref{fig:FIG3} we compare the accuracy and computational efficiency of MIXANDMIX with the SPECTRODE and QuEST approaches for two population distributions that admit a closed form expression for the ESD. In Figs.~\ref{fig:FIG3}a,b, we show respectively the averaged accuracy and computation times for a set of aspect ratios ranging from $\gamma=0.05$ to $\gamma=0.95$ in steps of $0.1$, $\gamma=1$, and reciprocals $1/\gamma$ ranging from $\gamma=0.05$ to $\gamma=0.95$ in steps of $0.1$, for the standard distribution (MP) in~\eqref{ec:MPLA}. Corresponding accuracies throughout the support, together with the gold standard density (in a logarithmic scale), are shown in Fig.~\ref{fig:FIG3}c for the $\gamma=0.5$ case. Analogous plots are provided in Figs.~\ref{fig:FIG3}d-f for a two-delta ($\delta\delta$) distribution with equiprobable eigenvalues at $\lambda_1=1$ and $\lambda_2=8$, for which the ESD can be obtained by solving a third order polynomial equation~\citep{Dobriban15,Rao08}. The SPECTRODE and MIXANDMIX approaches have been run with an accuracy parameter providing similar computation times to those of the QuEST method using $100$ grid points, which corresponds to $\epsilon=10^{-6}/\epsilon=10^{-5}$ (MP, $\gamma<1/\gamma\geq 1$) and $\epsilon=10^{-5}/\epsilon=10^{-4}$ ($\delta\delta$, $\gamma<1/\gamma\geq 1$) for SPECTRODE and $\epsilon=10^{-5}$, $L=3$ (both) for MIXANDMIX. To account for the relative grid complexities of the different methods, linearly interpolated densities are compared with closed-form solutions on a uniform grid comprising $10000$ evenly distributed points along the support. MIXANDMIX is roughly $2$ and $1$ orders of magnitude more accurate than QuEST and SPECTRODE respectively. We observe that the computation times of SPECTRODE largely depend on the aspect ratio, with increments of several orders of magnitude as $\gamma\to 1$, while they are much more uniform for MIXANDMIX and QuEST. As for the accuracy distributions, they are generally satisfactory for all methods, but MIXANDMIX seems to provide improved results near the lower edge for the MP case and throughout the support for the $\delta\delta$ case. The grid sizes used by each method for $\gamma=0.5$ have been $102/104$ for QuEST ($100$ plus some additional points to localize the support limits), $2930/5734$ for SPECTRODE and $1200/1200$ for MIXANDMIX, for the MP / $\delta\delta$ problems.
\begin{figure}[!htb]
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03a}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03b}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03c}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03d}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03e}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig03f}
\end{minipage}
\caption{\textbf{a,d)} Averaged accuracy, \textbf{b,e)} computation times, and \textbf{c,f)} accuracy throughout the domain ($\gamma=0.5$) for \textbf{a-c)} the MP problem and \textbf{d-f)} the $\delta\delta$ problem with equiprobable $\boldsymbol{\lambda}=\{1,8\}$.}
\label{fig:FIG3}
\end{figure}

MIXANDMIX is potentially limited by the risk of failing to detect all the segments comprising the distribution support. Therefore, we have conducted some experiments to test the support detection reliability in challenging scenarios. In Fig.~\ref{fig:FIG4} we cover analogous experiments to those in Fig.~\ref{fig:FIG3} for skewed $\delta\delta$ problems with $\boldsymbol{\lambda}=\{1,100\}$ with respective multiplicity ratios $\mathbf{w}=\{0.99,0.01\}$ (Figs.~\ref{fig:FIG4}a-c), the left skewed (LS) problem, and $\mathbf{w}=\{0.01,0.99\}$ (Figs.~\ref{fig:FIG4}d-f), the right skewed (RS) problem. Note that due to the scale differences in the population eigenvalue locations, results on the accuracy throughout the domain are more conveniently represented in a logarithmic scale. SPECTRODE parameters for approximately matched average computation times have now been modified to $\epsilon=10^{-4}/\epsilon=10^{-3}$ (LS, $\gamma<1/\gamma\geq 1$) and $\epsilon=10^{-3}/\epsilon=10^{-2}$ (RS, $\gamma<1/\gamma\geq 1$), while MIXANDMIX parameters could be kept the same as in previous experiments while still matching QuEST computation times. Once again, results in Figs.~\ref{fig:FIG4}a,d show an improvement of averaged accuracy by several orders of magnitude when using the MIXANDMIX method. Results in Figs.~\ref{fig:FIG4}c,f show that MIXANDMIX has been capable of approximating the support for both problems and of providing more accurate results than the other two methods almost everywhere within the support. In this case the grid sizes have been $104/104$ for QuEST, $6963/9130$ for SPECTRODE and $1248/1248$ for MIXANDMIX for the LS / RS problems.
\begin{figure}[!htb]
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04a}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04b}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04c}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04d}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04e}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig04f}
\end{minipage}
\caption{\textbf{a,d)} Averaged accuracy, \textbf{b,e)} computation times, and \textbf{c,f)} accuracy throughout the domain ($\gamma=0.5$) for the $\delta\delta$ problem with $\boldsymbol{\lambda}=\{1,100\}$ skewed towards \textbf{a-c)} the smallest ($\mathbf{w}=\{0.99,0.01\}$) and \textbf{d-f)} largest ($\mathbf{w}=\{0.01,0.99\}$) eigenvalue.}
\label{fig:FIG4}
\end{figure}

In Fig.~\ref{fig:FIG5} we apply the three methods to a challenging comb-like~\citep{Dobriban15} problem. In this case the population eigenvalues come from $100$ equiprobable point masses uniformly distributed in the interval $[0.1,10]$, thus with a large number of components that differ by as much as two orders of magnitude. We show the results at selected areas of the support for $\gamma=0.025$, $\gamma=0.5$ and $\gamma=0.975$ respectively in Figs.~\ref{fig:FIG5}a-c, again using logarithmic scaling for improved visualization. When tuning the SPECTRODE parameter for similar computation times to those of QuEST and MIXANDMIX, which turned out to happen at $\epsilon=10^{-3}$, the visual impression is of limited performance. In contrast, MIXANDMIX appears to be powerful, showing strengths not only in detecting all the support intervals, but also in picking up the density oscillations observed for $\gamma=0.025$ and the steep lower edge for $\gamma=0.975$. We have quantitatively assessed this perception by running the final experiment including also the SPECTRODE results at the accuracy level where we could not visually detect any differences from the MIXANDMIX results, $\epsilon=10^{-6}$ for both $\gamma=0.025$ and $\gamma=0.5$, or the maximum code accuracy, $\epsilon=10^{-8}$, for $\gamma=0.975$. Grid sizes of the different methods were $1950/1113/1216$ for SPECTRODE with matched computation times, $140/106/102$ for QuEST, $1248/1200/1200$ for MIXANDMIX, and $11678/19156/244296$ for SPECTRODE with improved accuracy. Results show no visual differences between MIXANDMIX and SPECTRODE with improved accuracy for $\gamma=0.025$ and $\gamma=0.5$, and a more plausible functional shape of the former around the left edge for $\gamma=0.975$, even though approximately $200\times$ fewer computational resources were used. Finally, QuEST results show a remarkable ability to correctly determine the support intervals, but limitations in accurately approximating the spiked density areas or capturing the density oscillations.
\begin{figure}[!htb]
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig05a}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig05b}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig05c}
\end{minipage}
\caption{ESD for the Comb problem with $100$ equiprobable point masses evenly distributed in the interval $[0.1,10]$. \textbf{a)} $\gamma=0.025$, \textbf{b)} $\gamma=0.5$ and \textbf{c)} $\gamma=0.975$.}
\label{fig:FIG5}
\end{figure}

\subsection{ESD of population mixtures}

\label{sec:MIPO}

In this Section we provide an illustration of the MIXANDMIX results for the mixture of populations model and the benefits of the GPU architecture. Fig.~\ref{fig:FIG6}a shows the calculated ESD for a mixture of $K=6$ equiprobable populations, so $\alpha_k=1/K$, drawn from population covariances $\boldsymbol{\Lambda}_{M\text{(DIAG)}}^{k}$ with entries $\Lambda_{mn\text{(DIAG)}}^k=(((m+k)\bmod K)+1)\delta[m,n]$, i.e., diagonal matrices whose diagonal elements grow from $1$ to $K$ with period $K$, with this pattern being shifted across the populations. Fig.~\ref{fig:FIG6}b extends this DIAG problem to non-diagonal population covariances using $\boldsymbol{\Lambda}_{M\text{(CORR)}}^{k}$ with entries $\Lambda_{mn\text{(CORR)}}^k=\rho^{|m-n|^l}\sqrt{\Lambda_{mm\text{(DIAG)}}^k\Lambda_{nn\text{(DIAG)}}^k}$, $\rho<1$, $l>0$, so this CORR problem reduces to the DIAG problem when $l\to\infty$. Namely, Fig.~\ref{fig:FIG6}b shows the results for $\rho=0.2$, $l=0.25$ and $\gamma=0.5$, here with $M=120$ for convenience. We can see that the calculations are in agreement with the simulations (the latter obtained using the biggest matrix sizes that we could fit in our GPU memory) for both the DIAG and CORR problems, with the CORR results showing a larger spectral dispersion due to the larger condition number of the non-diagonal matrix. Finally, Fig.~\ref{fig:FIG6}c shows the GPU computation times for the CORR problem for a number of populations ranging from $K=1$ to $K=6$. First, we can appreciate a significant penalty when moving from $K=1$ to $K=2$, as that switches the problem from solving a single equation based only on the eigenvalues to a system of equations based on the whole structure of the covariance matrices. Second, we observe that for $K\geq 2$ the computation times grow sublinearly with $K$. This is to be attributed to an increased degree of parallelization of the GPU implementation for bigger problems and a stable fixed point Anderson acceleration in the multidimensional case.
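For reproducibility, the two families of population covariances can be generated as in the following Python sketch (ours; the released \textsmaller{\textsc{MATLAB}} scripts remain the reference implementation):
\begin{verbatim}
import numpy as np

def diag_cov(M, K, k):
    # Lambda^k_(DIAG): diagonal entries cycling through 1..K, shifted by k
    return np.diag((((np.arange(M) + k) % K) + 1).astype(float))

def corr_cov(M, K, k, rho=0.2, l=0.25):
    # Lambda^k_(CORR) = rho^{|m-n|^l} sqrt(d_m d_n); recovers DIAG as l -> inf
    d = (((np.arange(M) + k) % K) + 1).astype(float)
    m, n = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
    return rho ** (np.abs(m - n).astype(float) ** l) * np.sqrt(np.outer(d, d))

Lams = [corr_cov(120, 6, k) for k in range(6)]  # K = 6, M = 120, as in Fig. 6b
\end{verbatim}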
\begin{figure}[!htb]
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig06a}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig06b}
\end{minipage}
\begin{minipage}{.33\textwidth}
\includegraphics[width=\textwidth]{figs/Fig06c}
\end{minipage}
\caption{Calculated ESDs and simulations for the \textbf{a)} DIAG and \textbf{b)} CORR problems. \textbf{c)} Computation times for solving the CORR problem for different values of $\gamma$ and $K$.}
\label{fig:FIG6}
\end{figure}

\section{Discussion}

\label{sec:DISC}

We have presented a set of numerical techniques to aid in the computation of the ESD in a mixture of populations model. These include the use of Anderson mixing to accelerate the fixed point iterations, homotopy continuation for robust convergence to the right optimum, adaptive grid construction to efficiently detect the support and approximate the distribution, and a parallel architecture to tackle the increased computational demands in this setting. Results have shown that our method offers favorable practical efficiency-accuracy tradeoffs when compared with related approaches, while being able to address more general models than available tools.

Our tool has focused on the model described in~\cite{Wagner12}, but it could straightforwardly be adapted and optimized to address several analogous models in the literature, such as those mentioned in~\S~\ref{sec:INTR}. Nevertheless, we have not contemplated even more general models such as the Kronecker model~\citep{Zhang13}, with a recent contribution for the separable case in~\cite{Leeb19}, or more general couplings between the matrix elements~\citep{Wen11,Lu16}. These models generally involve the solution of more intricate systems of nonlinear equations with more auxiliary functions, but with strong functional resemblances to the system we have studied here, so there is potential to reuse or adapt our tools to tackle them. The population model discussed in this paper admits a free probability~\citep{Mingo17} based formulation given by
\begin{equation}
\label{ec:FPFP}
f_\gamma^{\prescript{}{K}{\boldsymbol{\Lambda}}^{}_{M}}=\displaystyle\op_{k=1}^K\alpha_k(f_0^{\boldsymbol{\Lambda}_M^k}\boxtimes f^{\mathbf{I}}_{\gamma /\alpha_k}),
\end{equation}
where $\boxtimes$ represents the free multiplicative convolution and $\op$ the free additive convolution over the summands. This implies that the machinery described in~\cite{Belinschi17} could be used to obtain the system of equations in~\S~\ref{sec:INCO}. However, the models in~\cite{Belinschi17} are more general, including self-adjoint polynomials not necessarily built from a combination of Mar\v{c}enko-Pastur and atomic distributions, but simply from asymptotically freely independent matrices. Thus, investigations are required to discern under what conditions our numerical tools can be extended to cope with these models.

The proposed technique does not fully exploit the existing descriptions used for support detection in the single population model because we are not aware of any such descriptions for mixtures of populations (although recent results such as those in~\cite{Bao19,Ji19} may pave the way for them). Anyhow, the gridding procedure has shown robust behaviour in challenging practical scenarios, even when compared with methods that exploit the properties of the support. The main reason is the introduction of a logarithmic grid, which enables efficient support searches at different spectral scales. This has been synergistically combined with an adaptive grid refinement making use of the second order derivative of the density (with a bias towards the upper edge information), generally yielding more efficient approximations than provided by previous methods. Although this grid refinement criterion has proven effective for all the tested cases, generally providing finer spectral resolvability than previously proposed methods, other criteria may be more appropriate for different applications. In this regard, we should mention that our software is built on top of a generic grid refinement method that allows testing other possibilities by simply defining different criteria for grid cell subdivision, with some exemplary alternatives already included in the code. In addition, support detection correctness can generally be inferred from the numerical integration of the estimated measures, which is provided as an output.

Another difference with respect to previous approaches is the dependence of the method on more parameters. Although this may add an extra degree of complexity for users, we have observed good behaviour for all test cases without resorting to parameter tuning, so the tool should be readily usable for many applications. From a different perspective, the combined inspection of simulations and manipulation of these parameters may allow fine-tuning of the method in the most challenging scenarios, some of which, as shown in some of the experiments in~\S~\ref{sec:COLI}, are not adequately covered by the reduced flexibility of related approaches. In summary, grid detection robustness can be improved by increasing $M^{\text{(i)}}$, robustness of the approximation by increasing $\xi^0$ and/or decreasing $\beta$, and accuracy by decreasing $\epsilon$ and/or increasing $L$. However, a line for future research could involve the incorporation of refined techniques for parameter selection. A possibility could be to investigate more efficient and robust interplays between homotopy continuation and nonlinear acceleration, for instance based on adaptive regularization schemes for nonlinear acceleration~\citep{Scieur19}.

\section{Conclusions}

\label{sec:CONC}

This work has introduced a set of techniques to compute the ESD in a mixture of populations model. A generic procedure using only the functional form of the fixed point equations relating the population and limiting empirical distributions has been proposed. Efficient convergence is achieved by Anderson acceleration and homotopy continuation, and novel strategies for grid construction have been provided. The method has compared well with related proposals in the literature which, to our knowledge, are only capable of addressing more restricted models. By providing this detailed description of our solution, we expect our distributed tool to be of practical interest for statisticians working in the field.

\section{Introduction}

In \cite{GO}, Spatzier communicated the conjecture that any irreducible higher-rank ${\mathbb Z}^d$ Anosov action is $C^\infty$-conjugate to an algebraic action.
Later, in \cite{KS}, Kalinin and Spatzier stated a refinement of this conjecture that contends that any irreducible higher-rank ${\mathbb Z}^d$ Anosov action on any compact manifold has a finite cover $C^\infty$-conjugate to an algebraic action. The asserted global rigidity was motivated in part by earlier results of Katok and Lewis in \cite{KL1} and \cite{KL2}, and a more recent result by Rodriguez Hertz in \cite{RH}. In the latter, global rigidity has been shown for any higher-rank ${\mathbb Z}^d$ Anosov action on ${\mathbb T}^n$ whose action on homology has simple eigenvalues and whose coarse Lyapunov spaces are one or two dimensional, plus additional conditions. A partial confirmation of the refined assertion of global rigidity is provided in \cite{KS} for higher-rank ${\mathbb Z}^d$ Anosov $C^\infty$ actions each of whose coarse Lyapunov spaces is one-dimensional, plus additional conditions. If a higher-rank ${\mathbb Z}^d$ Anosov action on ${\mathbb T}^n$ is $C^\infty$-conjugate to an algebraic one and that algebraic action has a common real eigenvector, then that higher-rank ${\mathbb Z}^d$ Anosov action preserves a one-dimensional $C^\infty$ foliation of ${\mathbb T}^n$ determined by that common real eigenvector, i.e., generated by an equilibrium-free $C^\infty$ flow.

This paper intertwines the global rigidity of higher-rank ${\mathbb Z}^d$ Anosov $C^\infty$ actions on ${\mathbb T}^n$ with the classification of equilibrium-free $C^\infty$ flows on ${\mathbb T}^n$ that possess nontrivial generalized symmetries. The intertwining centers on the presence of a single one-dimensional $C^\infty$ distribution determined by an equilibrium-free $C^\infty$ flow that is invariant under a ${\mathbb Z}^d$ Anosov $C^\infty$ action, without a priori conditions on all the coarse Lyapunov spaces. As shown in Section 3, any generalized symmetry of an equilibrium-free flow is nontrivial if it is Anosov (see Theorem \ref{multiplierAnosov}). Furthermore, any equilibrium-free flow that possesses a nontrivial generalized symmetry does not have any uniformly hyperbolic compact invariant sets (see Corollary \ref{nonhyper}). The intertwining juxtaposes an equilibrium-free $C^\infty$ flow that is not Anosov with a smooth ${\mathbb Z}^d$ action that is Anosov. In the $C^\infty$ topology, this is a counterpoint to the result of Palis and Yoccoz in \cite{PY} on the triviality of centralizers for an open and dense subset of Anosov diffeomorphisms on ${\mathbb T}^n$, and also to the result of Sad in \cite{Sa} on the local triviality of centralizers for an open and dense subset of Axiom A vector fields that satisfy the strong transversality condition (as applied to vector fields on ${\mathbb T}^n$). In particular, it is quite rare for an equilibrium-free $C^\infty$ flow on ${\mathbb T}^n$ (or more generally, on a closed Riemannian manifold) to possess a nontrivial generalized symmetry (see Corollary \ref{rare}).

The first aspect of the intertwining on ${\mathbb T}^n$ relates the global rigidity of a ${\mathbb Z}^d$ Anosov $C^\infty$ action to an equilibrium-free $C^\infty$ flow that is quasiperiodic. As detailed in Section 2, the generalized symmetry group of a $C^\infty$ flow $\Phi$ is the subgroup $S_\Phi$ of ${\rm Diff}^\infty({\mathbb T}^n)$ each of whose elements $R$ sends (via the pushforward) the generating vector field $X_\Phi$ of $\Phi$ to a uniform scalar multiple $\rho_\Phi(R)$ of itself. The multiplier group $M_\Phi$ of $\Phi$ is the abelian group of these scalars. As shown in Section 4, when $\Phi$ is quasiperiodic (or more generally, minimal) the elements of $M_\Phi\setminus\{1,-1\}$ are algebraic integers of degree between $2$ and $n$ inclusively (see Corollary \ref{algebraicnature}).

\begin{theorem}\label{Anosovalgebraic} On ${\mathbb T}^n$, suppose $\alpha$ is a\, ${\mathbb Z}^d$ Anosov $C^\infty$ action, and $\Phi$ is an equilibrium-free $C^\infty$ flow. If $\alpha({\mathbb Z}^d)\subset S_\Phi$ and $\Phi$ is quasiperiodic $($i.e., $C^\infty$-conjugate to an irrational flow$)$, then $\alpha({\mathbb Z}^d)$ is $C^\infty$-conjugate to an affine action, a finite index subgroup of $\alpha({\mathbb Z}^d)$ is\, $C^\infty$-conjugate to an algebraic action, and $M_\Phi$ contains a ${\mathbb Z}^d$ subgroup.
\end{theorem}

\noindent Relevant definitions and the proof are given in Section 6. The proof holds not only for $d\geq 2$ but also for $d=1$. It uses a semidirect product characterization of the structure of the generalized symmetry group of an irrational flow (as shown in Section 5). It also uses the existence of a common fixed point for a finite index subgroup of the ${\mathbb Z}^d$ Anosov action, a device used in other global rigidity results (for example, see \cite{KL2}).

The second aspect of the intertwining on ${\mathbb T}^n$ relates the classification of an equilibrium-free $C^\infty$ flow to a ${\mathbb Z}^d$ Anosov $C^\infty$ action that is topologically irreducible. As detailed in Section 7, an irrational flow $\phi$ on ${\mathbb T}^n$ is of Koch type if a uniform scalar multiple of its frequencies forms a ${\mathbb Q}$-basis for a real algebraic number field of degree $n$ (also see \cite{KO} and \cite{LD}). For an $R\in S_\Phi$ of an equilibrium-free $C^\infty$ flow $\Phi$, the quantity $\log\vert \rho_\Phi(R)\vert$ is the value of the Lyapunov exponent $\chi_R$ of $R$ in the direction of $X_\Phi$ (see Theorem \ref{Lyapunov}).

\begin{theorem}\label{flowKoch} On ${\mathbb T}^n$, suppose $\alpha$ is a higher-rank\, ${\mathbb Z}^d$ Anosov $C^\infty$ action, and $\Phi$ is an equilibrium-free $C^\infty$ flow. If $\alpha({\mathbb Z}^d)\subset S_\Phi$, and $\alpha$ is topologically irreducible and $C^\infty$-conjugate to an algebraic ${\mathbb Z}^d$ action, and for an Anosov element $R\in\alpha({\mathbb Z}^d)$, the multiplicity of the value $\log\vert \rho_\Phi(R)\vert$ of $\chi_R$ is one at some point of\, ${\mathbb T}^n$, then $\Phi$ is projectively $C^\infty$-conjugate to an irrational flow of Koch type.
\end{theorem}

\noindent Relevant definitions and the proof are given in Section 7. The proof uses the Oseledets decomposition for an Anosov diffeomorphism (see \cite{BP} and \cite{KH}) to show that the flow is $C^\infty$-conjugate to one generated by a constant vector field. Then, by the topological irreducibility and results of Wallace in \cite{WA}, the components of a scalar multiple of the constant vector field are shown to form a ${\mathbb Q}$-basis for a real algebraic number field.

\section{Flows with Nontrivial Generalized Symmetries}

Generalized symmetries extend the classical notions of time-preserving and time-reversing symmetries of flows.
To simplify notation for these and for the proofs of results, it is assumed throughout the remainder of the paper that all manifolds, flows, vector fields, diffeomorphisms, distributions, etc., are smooth, i.e., of class $C^\infty$. Let $P$ be a closed (i.e., compact without boundary) manifold. Let ${\rm Flow}(P)$ denote the set of flows on $P$. Following \cite{BC}, a {\it generalized symmetry}\, of $\psi\in{\rm Flow}(P)$ is an $R\in{\rm Diff}(P)$ such that there is $\mu\in{\mathbb R}^\times = {\mathbb R}\setminus\{0\}$ (the multiplicative real group) for which
\[ R\psi(t,p) = \psi(\mu t, R(p)){\rm\ for\ all\ } t\in{\mathbb R}{\rm\ and\ all\ } p\in P.\]
It is easy to show that $R$ being a generalized symmetry of $\psi$ is equivalent to $R$ satisfying
\[ R_*X_\psi = \mu X_\psi{\rm\ for\ some\ }\mu\in{\mathbb R}^\times.\]
Here $X_\psi(p)=(d/dt)\psi(t,p)\vert_{t=0}$ is the vector field that generates $\psi$, and $R_*X_\psi = {\bf T}R X_\psi R^{-1}$ is the push-forward of $X_\psi$ by $R$, where ${\bf T}R$ is the derivative map. The {\it generalized symmetry group}\, of $\psi$ is the set $S_\psi$ of all the generalized symmetries that $\psi$ possesses. There is a homomorphism $\rho_\psi:S_\psi \to {\mathbb R}^\times$ taking $R\in S_\psi$ to its unique multiplier $\rho_\psi(R) = \mu$. The multiplier group of $\psi$ is $M_\psi = \rho_\psi(S_\psi)$.

The generalized symmetry group and the multiplier group of a flow are invariants for the equivalence relation of projective conjugacy. Two $\psi,\phi\in{\rm Flow}(P)$ are {\it projectively conjugate}\, if there are $h\in{\rm Diff}(P)$ and $\vartheta\in{\mathbb R}^\times$ such that $h_*X_\psi = \vartheta X_\phi$. Projective conjugacy is an equivalence relation on ${\rm Flow}(P)$, and it reduces to smooth conjugacy when $\vartheta = 1$. For $h\in{\rm Diff}(P)$, let $\Delta_h$ be the inner automorphism of ${\rm Diff}(P)$ given by $\Delta_h(R) = h^{-1}Rh$ for $R\in{\rm Diff}(P)$. If $h_*X_\psi = \vartheta X_\phi$, then $\Delta_h(S_\phi) = S_\psi$ (see Theorem 4.1 in \cite{BC}, which states that $S_\psi$ is conjugate to the generalized symmetry group of the flow determined by $\vartheta X_\phi$, which is exactly the same as $S_\phi$). Furthermore, if $\psi$ and $\phi$ are projectively conjugate, then $M_\psi=M_\phi$ (see Theorem 2.2 in \cite{BA5}), i.e., the multiplier group is an absolute invariant of projective conjugacy.

Any $R\in S_\psi$ is a trivial generalized symmetry of $\psi$ if $\rho_\psi(R)=1$ (i.e., $R$ is time-preserving), or if $\rho_\psi(R)=-1$ (i.e., $R$ is time-reversing). Any $R\in S_\psi$ with $\vert\rho_\psi(R)\vert\ne 1$ is a {\it nontrivial generalized symmetry}\, of $\psi$. A flow $\psi$ (or equivalently its generating vector field $X_\psi$) is said to possess a nontrivial generalized symmetry when $M_\psi\setminus\{1,-1\}\ne\emptyset$.

\begin{theorem}\label{noperiodic} Let $\psi$ be a flow on a closed Riemannian manifold $P$. If $\psi$ has a periodic orbit and $M_\psi\setminus\{1,-1\}\ne\emptyset$, then $\psi$ has a nonhyperbolic equilibrium.
\end{theorem}

\begin{proof} Suppose for $p_0\in P$ that ${\mathcal O}_\psi(p_0)=\{\psi_t(p_0):t\in{\mathbb R}\}$ is a periodic orbit whose fundamental period is $T_0>0$. (Here $\psi_t(p) = \psi(t,p)$.)
The assumption $M_\psi\setminus\{1,-1\}\ne\emptyset$ implies there is $Q\in S_\psi$ such that $\rho_\psi(Q)\ne \pm 1$. If $\vert \rho_\psi(Q)\vert >1$, then $\vert\rho_\psi(Q^{-1})\vert<1$, since $\rho_\psi$ is a homomorphism. Hence there is $R\in S_\psi$ such that $\vert\rho_\psi(R)\vert<1$. Then
\[ R(p_0) = R\psi(T_0,p_0) = \psi\big(\rho_\psi(R)T_0,R(p_0)\big).\]
For $p_1= R(p_0)$, this implies that ${\mathcal O}_\psi(p_1)$ is a periodic orbit with period $T_1 = \vert \rho_\psi(R)\vert T_0$. Suppose that $T_1$ is not a fundamental period for ${\mathcal O}_\psi(p_1)$, i.e., there is $0<T_2<T_1$ with $\psi(T_2,p_1)=p_1$. Applying $R^{-1}$ gives $\psi\big(\rho_\psi(R)^{-1}T_2,p_0\big)=p_0$, so that ${\mathcal O}_\psi(p_0)$ has a period $\vert\rho_\psi(R)\vert^{-1}T_2<T_0$, contradicting the choice of $T_0$. Thus $T_1$ is a fundamental period for ${\mathcal O}_\psi(p_1)$. Iterating this argument shows that each $p_m=R^m(p_0)$ lies on a periodic orbit whose fundamental period is $\vert\rho_\psi(R)\vert^m T_0\to 0$ as $m\to\infty$. By the compactness of $P$, a subsequence of the $p_m$ converges to a point $p_\infty\in P$, and since $\Vert X_\psi\Vert$ is bounded on $P$, the diameters of the orbits ${\mathcal O}_\psi(p_m)$ tend to $0$, whence $\psi(t,p_\infty)=p_\infty$ for all $t\in{\mathbb R}$, i.e., $p_\infty$ is an equilibrium of $\psi$. Since every neighborhood of $p_\infty$ contains a nonconstant periodic orbit, while a hyperbolic equilibrium has a neighborhood containing no such orbit, the equilibrium $p_\infty$ is nonhyperbolic.
\end{proof}

Recall that an $R\in{\rm Diff}(P)$ is {\it Anosov}\, if there is a continuous ${\bf T}R$-invariant splitting $E^s(p)\oplus E^u(p)$ of the tangent bundle of $P$, and there are constants $c>0$ and $\lambda\in(0,1)$ independent of $p\in P$, such that
\[ \Vert {\bf T}_p R^m(v)\Vert \leq c \lambda^m \Vert v\Vert {\rm \ for\ }v\in E^s(p){\rm\ and \ }m\geq 0,\]
and
\[ \Vert {\bf T}_p R^{-m}(v)\Vert \leq c \lambda^m \Vert v\Vert {\rm \ for\ }v\in E^u(p){\rm\ and \ }m\geq 0,\]
with the angle between $E^s(p)$ and $E^u(p)$ bounded away from $0$ (see \cite{BP}).

\begin{theorem}\label{multiplierAnosov} Let $\psi$ be an equilibrium-free flow on a closed Riemannian manifold $P$. If $R\in S_\psi$ is Anosov, then $\vert\rho_\psi(R)\vert \ne 1$.
\end{theorem}

\begin{proof} Suppose $R\in S_\psi$ is Anosov and $\vert\rho_\psi(R)\vert=1$. If $\rho_\psi(R)=-1$, then replacing $R$ with $R^2$, which is also Anosov, gives $\rho_\psi(R)=1$. Let $E^s(p)\oplus E^u(p)$ be the continuous ${\bf T}R$-invariant splitting with its associated contraction estimates. The generating vector field for $\psi$ has a continuous decomposition $X_\psi(p) = v^s(p) + v^u(p)$ for $v^s(p)\in E^s(p)$ and $v^u(p)\in E^u(p)$. From the contraction estimates of ${\bf T}_pR$ on $E^s(p)$ and $E^u(p)$ it follows for all $p\in P$ that
\[ \Vert {\bf T}_pR^m(v^s(p))\Vert \to 0, \ \ \Vert {\bf T}_p R^{-m}(v^u(p))\Vert\to 0{\rm\ as\ } m\to\infty.\]
With $\rho_\psi(R) = 1$ and $X_\psi(p) = v^s(p) + v^u(p)$, the equation $R_*X_\psi = \rho_\psi(R)X_\psi$ becomes
\[ {\bf T}_p R^m (v^s(p) + v^u(p)) = X_\psi(R^m(p)) = v^s(R^m(p))+ v^u(R^m(p))\]
for all $p\in P$ and $m\in {\mathbb Z}$. The ${\bf T}R$-invariance of $E^u(p)$ implies that
\[ {\bf T}_p R^m(v^u(p)) = v^u(R^m(p)).\]
For a fixed $p\in P$, there is by the compactness of $P$, a subsequence ${\rm R}^{m_i}(p)$ converging to a point, say $p_\infty$, as $m_i\to \infty$. Hence
\begin{align*}
X_\psi(p_\infty)
& = \lim_{i\to\infty} X_\psi(R^{m_i}(p)) \\
& = \lim_{i\to\infty} {\bf T}_pR^{m_i}(v^s(p)+v^u(p)) \\
& = \lim_{i\to\infty} \big[{\bf T}_p R^{m_i}(v^s(p)) + v^u(R^{m_i}(p))\big] \\
& = v^u(p_\infty).
\end{align*}
It now follows that
\[ \lim_{m\to\infty}X_\psi(R^{-m}(p_\infty)) = \lim_{m\to\infty}{\bf T}_{p_\infty} R^{-m}(X_\psi(p_\infty)) = \lim_{m\to\infty}{\bf T}_{p_\infty} R^{-m}(v^u(p_\infty))=0.\]
Compactness of $P$ implies that $X_\psi$ has a zero. But $\psi$ is an equilibrium-free flow, and therefore $\vert\rho_\psi(R)\vert\ne 1$.
\end{proof}

Any equilibrium-free flow with a generalized symmetry that is Anosov does not have any periodic orbits, according to Theorem \ref{noperiodic} and Theorem \ref{multiplierAnosov}. On the other hand, for an equilibrium-free flow that is without nontrivial generalized symmetries, Theorem \ref{multiplierAnosov} implies that none of its generalized symmetries can be Anosov. However, the converse of Theorem \ref{multiplierAnosov} is false.
As illustrated next, a partially hyperbolic diffeomorphism can be a nontrivial generalized symmetry of an equilibrium-free flow without periodic orbits.\n\n\\begin{example}\\label{nonquasi}{\\rm Let $P={\\mathbb T}^n={\\mathbb R}^n\/{\\mathbb Z}^n$, the $n$-torus, equipped with global coordinates $(\\theta_1,\\theta_2,\\dots,\\theta_n)$. The flow $\\psi$ generated by\n\\[ X_\\psi = \\frac{\\partial}{\\partial\\theta_1} + \\frac{\\partial}{\\partial\\theta_2} + \\cdot\\cdot\\cdot + \\frac{\\partial}{\\partial\\theta_{n-2}} + \\frac{\\partial}{\\partial\\theta_{n-1}} + \\sqrt 2 \\frac{\\partial}{\\partial\\theta_n}\\]\nis equilibrium-free and without periodic orbits. Let $R\\in{\\rm Diff}({\\mathbb T}^n)$ be induced by the ${\\rm GL}(n,{\\mathbb Z})$ matrix\n\\[ B = \\begin{bmatrix} 1 & 0 & \\hdots & 0 & 0 & 1 \\\\ 0 & 1 & \\hdots & 0 & 0 & 1 \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots & \\vdots \\\\ 0 & 0 & \\hdots & 1 & 0 & 1 \\\\ 0 & 0 & \\hdots & 0 & 1 & 1 \\\\ 0 & 0 & \\hdots & 0 & 2 & 1\\end{bmatrix},\\]\nthat is, ${\\bf T}R = B$. This $R$ is a nontrivial generalized symmetry of $\\psi$ because $R_*X_\\psi = (1+\\sqrt 2)X_\\psi$, i.e., $\\rho_\\psi(R) = 1+\\sqrt 2$. However $B$ has an eigenvalue of $1$ of multiplicity $n-2$, and the eigenspace corresponding to this eigenvalue is an $(n-2)$-dimensional center distribution for $R$. The other two eigenvalues of $B$ are $1\\pm\\sqrt 2$ and the corresponding eigenspaces are the $1$-dimensional unstable and stable distributions for $R$ respectively. Thus $R$ is partially hyperbolic.\n}\\end{example}\n\n\\section{Restrictions on Multipliers}\n\nAlgebraic restrictions may occur on $\\rho_\\Phi(R)$ for a nontrivial generalized symmetry $R$ of an equilibrium-free flow $\\Phi$ on a compact manifold $P$ without boundary. This happens when there are interactions beyond $R_*X_\\Phi = \\rho_\\Phi(R)X_\\Phi$ between the dynamics of $R$ and $\\Phi$ on submanifolds diffeomorphic to ${\\mathbb T}^k$ for some $2\\leq k\\leq {\\rm dim}(P)$. A {\\it real algebraic integer}\\, is the root of a monic polynomial with integer coefficients, and its degree is the degree of its minimal polynomial. As shown by Wilson in \\cite{WI}, there is for any integer $k$ with $2\\leq k\\leq {\\rm dim}(P)-2$, an equilibrium-free flow $\\Phi$ on $P$ which has an invariant submanifold $N$ diffeomorphic to ${\\mathbb T}^k$ on which $\\Phi$ is {\\it minimal}, i.e., every orbit of $\\Phi$ in $N$ is dense in $N$. The well-known prototype of a minimal flow on ${\\mathbb T}^k$ is an {\\it irrational flow}, i.e., a $\\psi$ on ${\\mathbb T}^k$ for which $X_\\psi$ is a constant vector field whose components (or frequencies) are rationally independent (see \\cite{FA}), i.e., linearly independent over ${\\mathbb Q}$. According to Basener in \\cite{BA}, any minimal flow on ${\\mathbb T}^n$ is topologically conjugate to an irrational flow.\n\n\\begin{lemma}\\label{subset} If a flow $\\phi$ on a compact manifold $N$ without boundary is topologically conjugate to a flow $\\psi$ on ${\\mathbb T}^k$ $(k={\\rm dim}(N))$ and $\\psi$ is minimal, then $M_\\phi\\subset M_\\psi$ and each $\\mu\\in M_\\phi\\setminus\\{1,-1\\}$ is an algebraic integer of degree between $2$ and $k$ inclusively.\n\\end{lemma}\n\n\\begin{proof} Suppose there is a homeomorphism $h:N\\to{\\mathbb T}^k$ such that $h\\phi_t = \\psi_t h$ for all $t\\in{\\mathbb R}$. 
Without loss of generality, it is assumed that $\\psi$ is an irrational flow, since any minimal flow on ${\\mathbb T}^k$ is topologically conjugate to an irrational flow.\n\nLet $V\\in S_\\phi$ and set $\\mu=\\rho_\\phi(V)$. In terms of the homeomorphism $Q=hVh^{-1}$ on ${\\mathbb T}^k$, the multiplier $\\mu$ passes through $h$ to $\\psi$:\n\\[ Q\\psi(t,\\theta) = hV\\phi(t,h^{-1}(\\theta)) = h\\phi(\\mu t,Vh^{-1}(\\theta)) = \\psi(\\mu t, Q(\\theta)), {\\rm\\ for\\ all\\ }t\\in{\\mathbb R}, \\theta \\in{\\mathbb T}^k,\\] \ni.e., $Q\\psi_t = \\psi_{\\mu t}Q$. This does not yet say that $Q$ is a generalized symmetry of $\\psi$ with multiplier $\\mu$ because $Q$ is only a homeomorphism at the moment.\n\nFollowing \\cite{AP}, the homeomorphism $Q$ of ${\\mathbb T}^k$ lifts uniquely to $\\hat Q(x)=\\hat L(x) + \\hat U(x) +\\hat c$ on ${\\mathbb R}^k$, i.e., $\\pi \\hat Q = Q\\pi$ where $\\pi:{\\mathbb R}^k\\to{\\mathbb T}^k$ is the covering map. Here the linear part of this lift is $\\hat L(x) = Bx$ for $B\\in {\\rm GL}(k,{\\mathbb Z})$; the periodic part is $\\hat U(x)$, i.e., $\\hat U(x+\\nu) = \\hat U(x)$ for all $x\\in{\\mathbb R}^k$ and all $\\nu\\in {\\mathbb Z}^k$, is continuous and satisfies $\\hat U(0)=0$; and the constant part is $\\hat c\\in[0,1)^k$. A lift of the irrational flow $\\psi$ to ${\\mathbb R}^k$ is $\\hat \\psi(t,x) = x+td$ where $d=X_\\psi$ and $x\\in{\\mathbb R}^k$. For all $t\\in{\\mathbb R}$, a lift of $Q\\psi_t$ to ${\\mathbb R}^k$ is $\\hat Q\\hat \\psi_t$ and a lift of $\\psi_{\\mu t}Q$ is $\\hat \\psi_{\\mu t}\\hat Q$. These two lifts differ by a constant $m\\in{\\mathbb Z}^k$ since $Q\\psi_t = \\psi_{\\mu t}Q$ for all $t\\in{\\mathbb R}$:\n\\[ \\hat Q\\hat \\psi(t,x) = \\hat \\psi(\\mu t,\\hat Q(x)) + m {\\rm\\ for\\ all\\ }t\\in{\\mathbb R}, x\\in{\\mathbb R}^k.\\]\nSince\n\\[ \\hat Q\\hat \\psi(t,x) = B(x+td) + \\hat U(x+td) + \\hat c = Bx + tBd + \\hat U(x+td) + \\hat c\\]\nand \n\\[ \\hat\\psi(\\mu t,\\hat Q(x)) = Bx + \\hat U(x) + \\hat c + \\mu t d,\\]\nit follows that\n\\[ \\hat U(x+td) - \\hat U(x) = - t(B-\\mu I)d + m {\\rm\\ for\\ all\\ }t\\in{\\mathbb R}, x\\in{\\mathbb R}^k,\\]\nwhere $I$ is the identity matrix. Evaluation of this at $x=0$ gives\n\\[ \\hat U(td) = -t(B-\\mu I)d + m {\\rm\\ for\\ all\\ }t\\in{\\mathbb R}.\\]\nHowever, $\\hat U$ is bounded since it is continuous and periodic. This boundedness implies that $(B-\\mu I)d = 0$, and so $\\hat U(td) = m$ for all $t\\in{\\mathbb R}$. Evaluation of this at $t=0$ shows that $m=0$ because $\\hat U(0)=0$. Thus\n\\[ 0=\\hat U(td) = \\hat U (\\hat \\psi_t(0)){\\rm \\ for\\ all\\ }t\\in{\\mathbb R}.\\] Since $\\hat U$ is periodic and continuous on ${\\mathbb R}^k$, it is a lift of a continuous map $U$ on ${\\mathbb T}^k$. A lift of $U\\psi_t$ is $\\hat U\\hat\\psi_t$, and so\n\\[ 0=\\pi \\hat U(\\hat \\psi_t(0)) = U(\\psi_t(0)) {\\rm\\ for\\ all\\ }t\\in{\\mathbb R}.\\]\nThe minimality of $\\psi$ implies that $U$ is $0$ on a dense subset of ${\\mathbb T}^k$. By continuity, $U = 0$, which implies that $\\hat U=0$ because $\\hat U$ is continuous with $\\hat U(0)=0$. Thus $\\hat Q(x) = Bx + \\hat c$ is $C^\\infty$, and so $Q$ is a diffeomorphism, whence $Q\\in S_\\psi$ with $\\rho_\\psi(Q) = \\mu$. Since $\\mu$ was an arbitrary element of $M_\\phi$, it follows that $M_\\phi\\subset M_\\psi$.\n\nThe multiplier $\\mu \\in M_\\psi$ is a real algebraic integer of degree at most $k$ because it is an eigenvalue of $B$: it satisfies $(B-\\mu I)d = 0$ for the nonzero vector $d$, and the characteristic polynomial of $B$ is a monic polynomial of degree $k$ with integer coefficients. 
Since $\\psi$ is an irrational flow, $M_\\psi\\cap{\\mathbb Q}=\\{1,-1\\}$ (see Corollary 4.4 in \\cite{BA2}), so the only rational values available to $\\mu$ are $\\pm 1$. Hence, if $\\mu \\ne \\pm 1$, then the minimal polynomial for $\\mu$ has degree between $2$ and $k$ inclusively.\n\\end{proof}\n\n\\begin{theorem}\\label{algebraicmultiplier} Suppose for an equilibrium-free flow $\\Phi$ on $P$ there is a $\\Phi$-invariant compact submanifold $N$ without boundary and $R\\in S_\\Phi$ with $\\vert\\rho_\\Phi(R)\\vert \\ne 1$ such that $R(N)\\cap N\\ne \\emptyset$. If ${\\rm dim}(N)=2$ and $N$ is orientable, then $\\rho_\\Phi(R)$ is a real algebraic integer of degree $2$. If ${\\rm dim}(N)\\geq 3$ with $N$ diffeomorphic to ${\\mathbb T}^{{\\rm dim}(N)}$ and $\\Phi\\vert N$ is a minimal flow, then $\\rho_\\Phi(R)$ is a real algebraic integer of degree between $2$ and ${\\rm dim}(N)$ inclusively.\n\\end{theorem}\n\n\\begin{proof} Let $\\mu=\\rho_\\Phi(R)$ with $\\vert\\mu\\vert\\ne 1$ and $k={\\rm dim}(N)$. By Lemma \\ref{noperiodic} there are no periodic orbits for the equilibrium-free flow $\\Phi\\vert N$. If $k=2$ and $N$ is orientable, the Poincar\\'e-Bendixson Theorem implies that $N$ is diffeomorphic to ${\\mathbb T}^2$ and that $\\Phi\\vert N$ is minimal. If $k\\geq 3$, it is assumed that $N$ is diffeomorphic to ${\\mathbb T}^k$ and that $\\Phi\\vert N$ is minimal. \n\nThe submanifold $R(N)$ is $\\Phi$-invariant because $N$ is $\\Phi$-invariant and $R\\in S_\\Phi$, i.e., for $p\\in R(N)$ and $q= R^{-1}(p)\\in N$,\n\\[ \\Phi(t,p) = \\Phi(t,R(q)) = R\\Phi(t\/\\mu,q) \\in R(N) {\\rm \\ for\\ all\\ }t\\in{\\mathbb R}.\\]\nSince $R$ maps orbits of $\\Phi$ in $N$ onto time-rescaled orbits of $\\Phi$ in $R(N)$, the minimality of $\\Phi\\vert N$ implies that $\\Phi\\vert R(N)$ is minimal as well. By hypothesis, there is $\\tilde p\\in R(N)\\cap N$. By the $\\Phi$-invariance of $N$ and $R(N)$ and the minimality of $\\Phi\\vert N$ and $\\Phi\\vert R(N)$ it follows that $\\overline{{\\mathcal O}_\\Phi(\\tilde p)} = N$ and $\\overline{{\\mathcal O}_\\Phi(\\tilde p)} = R(N)$. This gives $R(N) = N$, i.e., that $N$ is $R$-invariant.\n\nThe nontrivial generalized symmetry $R$ restricts to a nontrivial generalized symmetry of $\\Phi\\vert N$ because $N$ is $\\Phi$-invariant and $R$-invariant. If $V=R\\vert N$ and $\\phi$ is the flow on $N$ determined by $X_\\phi = X_\\Phi\\vert N$, then $R_*X_\\Phi = \\mu X_\\Phi$ becomes \n\\[ {\\bf T}_p V (X_\\phi (p)) = \\mu X_\\phi (V(p)){\\rm\\ for\\ }p\\in N.\\]\nSince $V\\in{\\rm Diff}(N)$, then $V\\in S_\\phi$ with $\\rho_\\phi(V) = \\mu$.\n\nBy \\cite{BA}, minimality of $\\phi=\\Phi\\vert N$ with $N$ diffeomorphic to ${\\mathbb T}^k$ implies that $\\phi$ is topologically conjugate to an irrational flow $\\psi$. Applying Lemma \\ref{subset} shows that $\\mu$ is an algebraic integer of degree between $2$ and $k$ inclusively.\n\\end{proof}\n\n\\begin{corollary}\\label{algebraicnature} Suppose $\\Phi$ is a minimal flow on $\\mathbb T^n$, and $R\\in S_\\Phi$. If $\\vert \\rho_\\Phi(R)\\vert \\ne 1$, then $\\rho_\\Phi(R)$ is an algebraic integer of degree between $2$ and $n$ inclusively.\n\\end{corollary}\n\n\\begin{proof} A minimal flow on ${\\mathbb T}^n$ is equilibrium-free, and so any $R\\in S_\\Phi$ with $\\vert\\rho_\\Phi(R)\\vert\\ne 1$, together with $N=\\mathbb T^n$, satisfies the conditions of Theorem \\ref{algebraicmultiplier}.\n\\end{proof}\n\n\\section{A Group-Theoretical Characterization of Irrational Flows}\n\nMinimality of an equilibrium-free flow on ${\\mathbb T}^n$ places a semidirect product structure on the generalized symmetry group of that flow. 
A group ${\\mathcal S}$ is the {\\it semidirect product}\\, of two subgroups ${\\mathcal N}$ and ${\\mathcal H}$ if ${\\mathcal N}$ is a normal subgroup of ${\\mathcal S}$, if ${\\mathcal S}={\\mathcal N}{\\mathcal H}$, and if ${\\mathcal N}\\cap {\\mathcal H}$ consists of the identity element of ${\\mathcal S}$ alone. Notationally, this is written\n\\[ {\\mathcal S} = {\\mathcal N}\\rtimes_\\Gamma {\\mathcal H},\\]\nwhere $\\Gamma: {\\mathcal H}\\to {\\rm Aut}({\\mathcal N})$ is the conjugating homomorphism of the semidirect product, i.e., $\\Gamma({\\mathfrak h})({\\mathfrak n}) = {\\mathfrak h}{\\mathfrak n}{\\mathfrak h}^{-1}$ for ${\\mathfrak h}\\in {\\mathcal H}$ and ${\\mathfrak n}\\in {\\mathcal N}$. A normal subgroup of $S_\\psi$ is $\\ker \\rho_\\psi$ for any flow $\\psi$. A normal subgroup of ${\\rm Diff}(\\mathbb T^n)$ is the abelian group ${\\rm Trans}(\\mathbb T^n)$ of translations. Each {\\it translation}\\, on ${\\mathbb T}^n$ is of the form ${\\mathcal T}_c(\\theta) = \\theta + c$ for $c\\in{\\mathbb T}^n$. If $\\psi$ is a flow on ${\\mathbb T}^n$ with $X_\\psi$ a constant, then ${\\rm Trans}({\\mathbb T}^n)\\subset {\\rm ker}\\rho_\\psi$ because $({\\mathcal T}_c)_* X_\\psi = X_\\psi$ for all $c\\in{\\mathbb T}^n$.\n\n\\begin{lemma}\\label{kernel} Let $\\psi$ be a flow on ${\\mathbb T}^n$ for which $X_\\psi$ is a constant. If ${\\rm ker}\\rho_\\psi = {\\rm Trans}({\\mathbb T}^n)$, then $\\psi$ is irrational.\n\\end{lemma}\n\n\\begin{proof} Suppose that $\\psi$ is not irrational. Then the components of $X_\\psi$ are not rationally independent. Up to a permutation of the coordinates $\\theta_1,\\dots,\\theta_n$ on ${\\mathbb T}^n$, i.e., a smooth conjugacy, it can be assumed that the first $l$ components of $X_\\psi$ form the smallest subset of the components that is linearly dependent over ${\\mathbb Q}$. Specifically, writing $X_\\psi = [a_1,a_2,\\dots,a_n]^T$, there is a least integer $l$ with $1\\leq l\\leq n$ such that\n\\[ k_1a_1+\\cdots + k_la_l =0\\]\nwith $k_i\\in{\\mathbb Z}\\setminus\\{0\\}$ for all $i=1,\\dots,l$. The existence of an $R\\in{\\rm ker}\\rho_\\psi\\setminus{\\rm Trans}({\\mathbb T}^n)$ will be exhibited separately in the cases of $l<n$ and $l=n$. In either case one obtains a matrix $B\\in{\\rm GL}(n,{\\mathbb Z})$ and a hyperplane $E$ in ${\\mathbb R}^n$ such that the hyperplane $U=B^{-1}(E)$ contains $X_\\psi$ and is spanned by vectors $u_1,\\dots,u_{n-1}$.\n\nFor $\\epsilon>0$, $V$ a nonempty subset of ${\\mathbb R}^n$, and $m\\in{\\mathbb Z}^n$, define $N_\\epsilon(V)$ to be the set of points in ${\\mathbb R}^n$ less than a distance of $\\epsilon$ from $V$, and define $V+m$ to be the translation of $V$ by $m$. By the definition of $E$, if $E+m\\ne E$, then $N_{1\/2}(E+m)\\cap N_{1\/2}(E)=\\emptyset$. Since $B(U)=E$ and $B({\\mathbb Z}^n)\\subset {\\mathbb Z}^n$, there exists $\\epsilon>0$ such that if $U+m\\ne U$, then $N_\\epsilon(U+m)\\cap N_\\epsilon(U)=\\emptyset$.\n\nThe vector $e_n\\not \\in U$. Let $x_1,\\dots, x_n$ be the coordinates on ${\\mathbb R}^n$ that correspond to the basis $u_1,\\dots,u_{n-1},e_n$. Let $f:{\\mathbb R}\\to{\\mathbb R}$ be a smooth bump function with $f(0)=1$ and whose support has length smaller than $\\epsilon$. In terms of the coordinates $x_1,\\dots, x_n$, define a smooth vector field on $N_\\epsilon(U)$ by\n\\[ Y = f(x_n)\\frac{\\partial}{\\partial x_n}.\\]\nExtend this vector field to all of ${\\mathbb R}^n$ by translation to $N_\\epsilon(U+m)$ for those $m\\in{\\mathbb Z}^n$ for which $U+m\\ne U$, and to the remainder of ${\\mathbb R}^n$ as $0$. 
The extended vector field is globally Lipschitz, and so determines a flow $\\xi$ on ${\\mathbb R}^n$.\n\nSince the vector field generating $\\xi$ is invariant under translations by $m\\in{\\mathbb Z}^n$, the time-one map $\\xi_1$ is also invariant under these translations. Thus $\\xi_1$ is a lift of an $R\\in{\\rm Diff}({\\mathbb T}^n)$. In terms of the coordinates $x_1,\\dots, x_n$, the derivative of $\\xi_1$ at any point $x\\in{\\mathbb R}^n$ is of the form\n\\[ {\\bf T}_x\\xi_1 = \\begin{bmatrix} 1 & 0 & \\dots & 0 & 0 \\\\ 0 & 1 & \\dots & 0 & 0 \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ 0 & 0 & \\dots & 1 & 0 \\\\ 0 & 0 & \\dots & 0 & \\ast \\end{bmatrix},\\]\nand so ${\\bf T}_x\\xi_1(u) = u$ for $u\\in U$. Since $X_\\psi\\in U$, this means that ${\\bf T}_x \\xi_1(X_\\psi) = X_\\psi$. Since $X_\\psi$ is a constant vector field, then $(\\xi_1)_* X_\\psi = X_\\psi$. Since $\\xi_1$ is a lift of $R$, it now follows that $R_*X_\\psi = X_\\psi$, so that $R\\in{\\rm ker}\\rho_\\psi$. On the other hand, $\\xi_1$ moves the points of $N_\\epsilon(U)$ with $x_n=0$ because $f(0)=1$, while it fixes every point outside the support of the extended vector field, so $R$ is neither the identity nor a fixed-point-free translation. Therefore $R\\in {\\rm ker}\\rho_\\psi$ but $R\\not\\in{\\rm Trans}({\\mathbb T}^n)$.\n\\end{proof}\n\nThe group ${\\rm Aut}({\\mathbb T}^n)$ of automorphisms of $\\mathbb T^n$ is naturally identified with ${\\rm GL}(n,{\\mathbb Z})$. For $\\mathcal T_c\\in {\\rm Trans}(\\mathbb T^n)$ and $B\\in {\\rm GL}(n,\\mathbb Z)$, the composition of $\\mathcal T_c$ with $B$ is written $\\mathcal T_c B= B+c$. \n\n\\begin{theorem}\\label{characterization} Let $\\psi$ be a flow on ${\\mathbb T}^n$ with $X_\\psi$ a nonzero constant vector. Then $\\psi$ is irrational if and only if there exists a subgroup $H$ of ${\\rm GL}(n,{\\mathbb Z})$ isomorphic to $M_\\psi$ such that $S_\\psi = {\\rm Trans}({\\mathbb T}^n) \\rtimes_\\Gamma H$.\n\\end{theorem}\n\n\\begin{proof} Suppose that $\\psi$ is irrational. This implies (by Theorem 5.5 in \\cite{BA2}) that\n\\[ S_\\psi = {\\rm ker}\\rho_\\psi \\rtimes_\\Gamma H,\\]\nwhere $H$ is a subgroup of ${\\rm GL}(n,{\\mathbb Z})$ isomorphic to $M_\\psi$. Furthermore, the irrationality of $\\psi$ implies (by Corollary 4.7 in \\cite{BA2}) that ${\\rm ker}\\rho_\\psi={\\rm Trans}({\\mathbb T}^n)$, so that $S_\\psi={\\rm Trans}({\\mathbb T}^n)\\rtimes_\\Gamma H$.\n\nNow suppose that $\\psi$ is not irrational. By Lemma \\ref{kernel}, there is $R\\in {\\rm ker}\\rho_\\psi\\setminus{\\rm Trans}({\\mathbb T}^n)$. If there were a subgroup $H$ of ${\\rm GL}(n,{\\mathbb Z})$ isomorphic to $M_\\psi$ such that $S_\\psi = {\\rm Trans}(\\mathbb T^n)\\rtimes_\\Gamma H$, then $R=\\mathcal T_c B$ for some $c\\in{\\mathbb T}^n$ and $B\\in H$. Hence $1=\\rho_\\psi(R) = \\rho_\\psi(\\mathcal T_c)\\rho_\\psi(B)= \\rho_\\psi(B)$. However, since $H$ is isomorphic to $M_\\psi$, there is only one element of $H$ which corresponds to the multiplicative identity $1$ of $\\mathbb R^\\times$, and that is $I$, the identity matrix. This means that $B=I$, and so $R=\\mathcal T_c$, a contradiction.\n\\end{proof}\n\n\n\\section{Global Rigidity for Certain ${\\mathbb Z}^d$ Anosov Actions}\n\nGlobal rigidity concerns when a ${\\mathbb Z}^d$ Anosov action, which is well known to be topologically conjugate to an algebraic ${\\mathbb Z}^d$ action (see \\cite{KK}), is smoothly conjugate to an algebraic ${\\mathbb Z}^d$ action. A ${\\mathbb Z}^d$ action on ${\\mathbb T}^n$ is a monomorphism $\\alpha:{\\mathbb Z}^d\\to{\\rm Diff}({\\mathbb T}^n)$. 
It is Anosov if there is $m\\in{\\mathbb Z}^d\\setminus\\{0\\}$ with $\\alpha(m)$ Anosov, is {\\it algebraic}\\, if $\\alpha({\\mathbb Z}^d)\\subset {\\rm GL}(n,{\\mathbb Z})$, and more generally is {\\it affine}\\, if $\\alpha({\\mathbb Z}^d)\\subset{\\rm Trans}({\\mathbb T}^n)\\rtimes_\\Gamma{\\rm GL}(n,{\\mathbb Z})$. Algebraic ${\\mathbb Z}^d$-actions are found in algebraic number theory (see \\cite{KKS} and \\cite{Sc}). \n\n\\vskip0.2cm\n\\noindent{\\it Proof of Theorem \\ref{Anosovalgebraic}.} Although well-known, the argument for the existence of a common fixed point of a finite index subgroup of a ${\\mathbb Z}^d$ Anosov action is included for completeness. Let $m_0\\in{\\mathbb Z}^d$ be such that $\\alpha(m_0)$ is Anosov. Then $\\alpha(m_0)$ is topologically conjugate to a hyperbolic automorphism of ${\\mathbb T}^n$ (see \\cite{Ma}), and so $\\alpha(m_0)$ has a finite number of fixed points, $f_1,\\dots,f_l$. Let $\\alpha(m_1),\\dots,\\alpha(m_d)$ be a generating set for $\\alpha({\\mathbb Z}^d)$. Since $\\alpha({\\mathbb Z}^d)$ is abelian, for all $i=1,\\dots,d$ and $j=1,\\dots,l$,\n\\[ \\alpha(m_0)\\alpha(m_i)(f_j) = \\alpha(m_i)\\alpha(m_0)(f_j) = \\alpha(m_i)(f_j).\\]\nThis means that $\\alpha(m_i)(f_j)$ is one of the finitely many fixed points of $\\alpha(m_0)$. Since each $\\alpha(m_i)$ is invertible, it permutes the finite set $\\{f_1,\\dots,f_l\\}$, and so there is a positive integer $r_i$ such that $\\alpha(m_i)^{r_i}(f_1) = f_1$. Thus the finite index subgroup of $\\alpha({\\mathbb Z}^d)$ generated by $\\alpha(m_1)^{r_1}, \\dots,\\alpha(m_d)^{r_d}$ has $f_1$ as a common fixed point.\n\nBy hypothesis, there is a $g\\in{\\rm Diff}({\\mathbb T}^n)$ and an irrational flow $\\phi$ on ${\\mathbb T}^n$ for which $X_\\phi = g_* X_\\Phi$. By Theorem \\ref{characterization}, there is a subgroup $H_\\phi$ of ${\\rm GL}(n,\\mathbb Z)$ isomorphic to $M_\\phi$ such that $S_\\phi={\\rm Trans}({\\mathbb T}^n)\\rtimes_\\Gamma H_\\phi$. For $h\\in{\\rm Diff}({\\mathbb T}^n)$ given by $h={\\mathcal T}_{-g(f_1)}\\circ g$ it follows that $h_*X_\\Phi = X_\\phi$ because $({\\mathcal T}_c)_*X_\\phi = X_\\phi$ for any $c\\in{\\mathbb T}^n$. Then $\\Phi$ and $\\phi$ are projectively conjugate, and so, as mentioned in Section 2, $\\Delta_h(S_\\phi) = S_\\Phi$. The inclusion $\\alpha({\\mathbb Z}^d)\\subset S_\\Phi$ implies that\n\\[ \\Delta_{h^{-1}}(\\alpha({\\mathbb Z}^d))\\subset {\\rm Trans}({\\mathbb T}^n)\\rtimes_\\Gamma H_\\phi.\\]\nThis means that $\\alpha$ is $C^\\infty$-conjugate to an affine ${\\mathbb Z}^d$-action.\n\nFor each $\\alpha(m)$ in the finite index subgroup generated by $\\alpha(m_1)^{r_1},\\dots,\\alpha(m_d)^{r_d}$ there are $B_m\\in H_\\phi$ and $c_m\\in {\\mathbb T}^n$ such that $\\Delta_{h^{-1}}(\\alpha(m)) = B_m + c_m$. Since $h(f_1) = 0$, then\n\\[ c_m=(B_m+c_m)(0) = h\\circ\\alpha(m)\\circ h^{-1}(0) = h \\circ \\alpha(m)(f_1) = h(f_1) = 0.\\]\nThis means that $\\Delta_{h^{-1}}(\\alpha(m))\\in H_\\phi$. Hence the finite index subgroup of $\\alpha({\\mathbb Z}^d)$ generated by $\\alpha(m_1)^{r_1},\\dots,\\alpha(m_d)^{r_d}$, which is isomorphic to ${\\mathbb Z}^d$, is $C^\\infty$-conjugate to an algebraic ${\\mathbb Z}^d$-action. Since $H_\\phi$ contains a ${\\mathbb Z}^d$ subgroup, and $M_\\phi$ is isomorphic to $H_\\phi$, then $M_\\phi$ contains a ${\\mathbb Z}^d$ subgroup. By the absolute invariance of the multiplier group under projective conjugacy, $M_\\Phi$ contains a ${\\mathbb Z}^d$ subgroup. 
\\hfill $\\Box$\n\n\\vskip0.2cm\nAny quasiperiodic flow on ${\\mathbb T}^n$ whose generalized symmetry group contains a ${\\mathbb Z}^d$ Anosov action must possess nontrivial generalized symmetries since its multiplier group contains a ${\\mathbb Z}^d$ subgroup by Theorem \\ref{Anosovalgebraic} (cf.\\,Theorem \\ref{multiplierAnosov}). This puts a necessary condition on the quasiperiodic flows to which Theorem \\ref{Anosovalgebraic} does apply. The quasiperiodic flows of Koch type mentioned in the next section satisfy this necessary condition. However, there are quasiperiodic flows that do not, as is illustrated next.\n\n\\begin{example}\\label{trivialquasi}{\\rm On ${\\mathbb T}^n$, let $\\psi$ be the flow generated by\n\\[ X_\\psi = \\frac{\\partial}{\\partial\\theta_1} + \\pi\\frac{\\partial}{\\partial\\theta_2} + \\cdot\\cdot\\cdot + \\pi^{n-1} \\frac{\\partial}{\\partial\\theta_n}.\\]\nThis flow is quasiperiodic, since if its frequencies $1,\\pi,\\dots,\\pi^{n-1}$ were linearly dependent over ${\\mathbb Q}$ then $\\pi$ would be algebraic. Quasiperiodicity of $\\psi$ implies that each $\\mu\\in M_\\psi$ is a real algebraic integer of degree at most $n$, and moreover, $M_\\psi\\cap{\\mathbb Q}=\\{1,-1\\}$ (see Corollary 4.4 in \\cite{BA2}). Suppose $\\mu\\in M_\\psi\\setminus\\{1,-1\\}$. Then there is $R\\in S_\\psi$ such that $R_*X_\\psi = \\mu X_\\psi$. Quasiperiodicity of $\\psi$ implies that ${\\bf T}R=B\\in{\\rm GL}(n,{\\mathbb Z})$ (see Theorem 4.3 in \\cite{BA2}). Then $R_*X_\\psi = \\mu X_\\psi$ becomes $BX_\\psi = \\mu X_\\psi$, and for $B=(b_{ij})$, it follows that\n\\[ \\mu = b_{11} + b_{12}\\pi + \\cdot\\cdot\\cdot + b_{1n}\\pi^{n-1}.\\]\nIf $b_{12}=\\cdot\\cdot\\cdot = b_{1n} = 0$, then $\\mu = b_{11}\\in{\\mathbb Z}$. But $M_\\psi\\cap{\\mathbb Q}=\\{1,-1\\}$, and so $\\mu = \\pm 1$. This contradiction means that one of $b_{12},\\dots,b_{1n}$ is nonzero. Because each multiplier of $\\psi$ is a real algebraic integer of degree at most $n$, there is a monic polynomial $l(z)$ in the polynomial ring ${\\mathbb Z}[z]$ such that $l(\\mu)=0$. But this implies that $\\pi$ is a root of a polynomial in ${\\mathbb Z}[z]$, making $\\pi$ algebraic. This shows that $M_\\psi=\\{1,-1\\}$, and so $\\psi$ does not possess nontrivial generalized symmetries.\n}\\end{example}\n\n\n\\section{Classification of Certain Equilibrium-Free Flows}\n\nQuasiperiodic flows of Koch type are algebraic in nature and provide foliations which are often preserved by a topologically irreducible ${\\mathbb Z}^d$ Anosov action (see \\cite{BA5}, \\cite{BA6}, and \\cite{KKS} for such examples). A flow on ${\\mathbb T}^n$ is {\\it quasiperiodic of Koch type}\\, if it is projectively conjugate to an irrational flow whose frequencies form a ${\\mathbb Q}$-basis for a real algebraic number field ${\\mathbb F}$ of degree $n$ over ${\\mathbb Q}$. For a quasiperiodic flow $\\Phi$ of Koch type, the real algebraic number field ${\\mathbb F}$ of degree $n$ associated to it is unique, and its multiplier group is a finite index subgroup of the group of units ${\\mathfrak o}_{\\mathbb F}^\\times$ in the ring of integers of ${\\mathbb F}$ (see Theorem 3.3 in \\cite{BA5}). 
By Dirichlet's Unit Theorem (see \\cite{SD}), there is $d\\geq 1$ such that ${\\mathfrak o}_{\\mathbb F}^\\times$ is isomorphic to ${\\mathbb Z}_2\\oplus{\\mathbb Z}^d$, and so every quasiperiodic flow of Koch type always possesses nontrivial generalized symmetries.\n\nTopological irreducibility of a ${\\mathbb Z}^d$ action $\\alpha$ is a condition on the topological factors that $\\alpha$ has. A ${\\mathbb Z}^d$ action $\\alpha^\\prime$ on ${\\mathbb T}^{n^\\prime}$ is a {\\it topological factor}\\, of $\\alpha$ if there is a continuous surjection $h:{\\mathbb T}^n\\to{\\mathbb T}^{n^\\prime}$ such that $h\\circ \\alpha = \\alpha^\\prime\\circ h$. A topological factor $\\alpha^\\prime$ of $\\alpha$ is {\\it finite}\\, if the continuous surjection $h$ is finite-to-one everywhere. A ${\\mathbb Z}^d$ action $\\alpha$ is {\\it topologically irreducible}\\, if every topological factor $\\alpha^\\prime$ of $\\alpha$ is finite.\n\nFor an algebraic ${\\mathbb Z}^d$ action $\\alpha$, there is a stronger sense of irreducibility, one that uses the group structure of ${\\mathbb T}^n$. An algebraic ${\\mathbb Z}^d$ action $\\alpha^\\prime$ on ${\\mathbb T}^{n^\\prime}$ is an {\\it algebraic factor}\\, of $\\alpha$ if there is a continuous homomorphism $h:{\\mathbb T}^n\\to{\\mathbb T}^{n^\\prime}$ such that $h\\circ \\alpha = \\alpha^\\prime\\circ h$. An algebraic factor $\\alpha^\\prime$ of $\\alpha$ is {\\it finite} if the continuous homomorphism $h$ is finite-to-one everywhere. An algebraic ${\\mathbb Z}^d$ action $\\alpha$ is {\\it algebraically irreducible}\\, if every algebraic factor $\\alpha^\\prime$ of $\\alpha$ is finite. Algebraic irreducibility of a higher rank algebraic $\\mathbb Z^d$ action $\\alpha$ is equivalent to there being an $m\\in \\mathbb Z^d$ such that $\\alpha(m)$ has an irreducible characteristic polynomial (see Proposition 3.1 on p.\\,726 in \\cite{KKS}; cf.\\,\\cite{BE}).\n\n\n\\vskip0.2cm\n\\noindent{\\it Proof of Theorem \\ref{flowKoch}.} Identify ${\\bf T}{\\mathbb T}^n$ with ${\\mathbb T}^n\\times{\\mathbb R}^n$, and place on the fiber the standard Euclidean norm $\\Vert\\cdot\\Vert$. By the hypotheses, there is $h\\in{\\rm Diff}({\\mathbb T}^n)$ and a hyperbolic $B\\in{\\rm GL}(n,{\\mathbb Z})$ such that $\\Delta_{h^{-1}}(\\alpha(m_0)) = B$. Every point of ${\\mathbb T}^n$ is Lyapunov regular for $B$. The Oseledets decomposition associated with $\\chi_B$ is\n\\[ {\\bf T}_\\theta {\\mathbb T}^n = \\bigoplus_{i=1}^k E^i_B,\\]\nwhere $E_B^i$, $i=1,\\dots,k$, are the invariant subspaces of $B$ which are independent of $\\theta$. Since $\\Delta_{h^{-1}}(\\alpha(m_0))=B$, every point of ${\\mathbb T}^n$ is Lyapunov regular for $\\alpha(m_0)$. 
Set\n\\[ E^i_{\\alpha(m_0)}(\\theta) = {\\bf T}_{h(\\theta)}h^{-1}\\big( E^i_B), \\ \\ \\theta\\in{\\mathbb T}^n.\\]\nThe Oseledets decomposition associated with $\\chi_{\\alpha(m_0)}$ is then\n\\[ {\\bf T}_\\theta {\\mathbb T}^n = \\bigoplus_{i=1}^k E^i_{\\alpha(m_0)}(\\theta).\\]\n\nFor $\\mu = \\rho_\\Phi(\\alpha(m_0))$, the hypothesis that the multiplicity of the value $\\log\\vert \\mu \\vert$ of $\\chi_{\\alpha(m_0)}$ is one at a point $\\hat \\theta\\in{\\mathbb T}^n$ implies that there is $1\\leq l\\leq k$ such that ${\\rm dim}\\big(E^l_{\\alpha(m_0)}(\\hat\\theta)\\big) = 1$ and\n\\[ \\chi_{\\alpha(m_0)}(\\hat\\theta,v) = \\log\\vert \\mu \\vert {\\rm\\ for\\ } v\\in E^l_{\\alpha(m_0)}(\\hat\\theta)\\setminus\\{0\\}.\\]\nBy the definition $E^l_{\\alpha(m_0)}(\\hat\\theta) = {\\bf T}_{h(\\hat\\theta)}h^{-1}\\big( E^l_B)$, it follows that ${\\rm dim}\\big( E^l_B\\big) = 1$. Furthermore, since $\\Delta_{h^{-1}}(\\alpha(m_0))=B$ and $\\chi_B$ is independent of $\\theta$, it follows that $\\chi_B(\\theta,v) = \\log\\vert\\mu\\vert$ for all $v\\in E^l_B\\setminus\\{0\\}$ and for all $\\theta\\in{\\mathbb T}^n$. The definition $E^l_{\\alpha(m_0)}(\\theta) = {\\bf T}_{h(\\theta)}h^{-1}\\big( E^l_B)$ and the independence of $E^l_B$ from $\\theta$ imply for all $\\theta\\in{\\mathbb T}^n$ that ${\\rm dim} \\big(E^l_{\\alpha(m_0)}(\\theta)\\big) = 1$ and $\\chi_{\\alpha(m_0)}(\\theta,v) = \\log\\vert\\mu\\vert$ for all $v\\in E^l_{\\alpha(m_0)}(\\theta)\\setminus\\{0\\}$. Hence, the multiplicity of $\\log\\vert\\mu\\vert$ for $\\chi_{\\alpha(m_0)}$ is one for all $\\theta\\in{\\mathbb T}^n$.\n\nBy Theorem \\ref{Lyapunov}, the $\\alpha(m_0)$-invariant one-dimensional distribution $E$ given by $E(\\theta) = {\\rm Span}(X_\\Phi(\\theta))$ satisfies $\\chi_{\\alpha(m_0)}(\\theta,X_\\Phi(\\theta))=\\log\\vert\\mu\\vert$ for all $\\theta\\in{\\mathbb T}^n$. If $E(\\theta)\\ne E^l_{\\alpha(m_0)}(\\theta)$ at some $\\theta\\in{\\mathbb T}^n$, then $E(\\theta)+E^l_{\\alpha(m_0)}(\\theta)$ is a two-dimensional subspace of ${\\bf T}_\\theta {\\mathbb T}^n$ for which $\\chi_{\\alpha(m_0)}(\\theta, v) = \\log\\vert\\mu\\vert$ for all $v\\in \\big(E(\\theta)+ E^l_{\\alpha(m_0)}(\\theta)\\big)\\setminus\\{0\\}$. This contradicts the multiplicity of $\\log\\vert\\mu\\vert$ for $\\chi_{\\alpha(m_0)}$ being one at every $\\theta$. Thus $E^l_{\\alpha(m_0)}(\\theta) = E(\\theta) = {\\rm Span}(X_\\Phi(\\theta))$ for all $\\theta\\in{\\mathbb T}^n$.\n\nThe vector field $h_*X_\\Phi$ satisfies $h_*X_\\Phi(\\theta)\\in E^l_B$ for all $\\theta\\in{\\mathbb T}^n$ because ${\\rm Span}(X_\\Phi(\\theta)) = {\\bf T}_{h(\\theta)} h^{-1}(E^l_B)$ for all $\\theta\\in{\\mathbb T}^n$. Let $\\psi$ be the flow for which $X_\\psi = h_*X_\\Phi$. Since $E^l_B$ is a one-dimensional invariant subspace of $B$ and $\\chi_B(\\theta,v) = \\log\\vert\\mu\\vert$ for all $v\\in E^l_B\\setminus\\{0\\}$, it follows for all $\\theta\\in{\\mathbb T}^n$ that\n\\[ \\Vert B^kX_\\psi(\\theta)\\Vert = \\vert \\mu\\vert^k \\Vert X_\\psi(\\theta)\\Vert {\\rm \\ for\\ all\\ }k\\in{\\mathbb Z}.\\]\n\nHyperbolicity of $B$ implies that there is $\\bar\\theta\\in{\\mathbb T}^n$ such that ${\\mathcal O}_B(\\bar\\theta)=\\{ B^k(\\bar\\theta):k\\in{\\mathbb Z}\\}$ is dense in ${\\mathbb T}^n$. Since $\\alpha(m_0)_*X_\\Phi = \\mu X_\\Phi$ and $X_\\psi = h_*X_\\Phi$, the matrix $B$ satisfies $BX_\\psi =\\mu X_\\psi B$. Then $B^k X_\\psi = \\mu^k X_\\psi B^k$, and so $X_\\psi B^k = \\mu^{-k} B^k X_\\psi$. 
Thus,\n\\[ \\Vert X_\\psi(B^k\\bar\\theta)\\Vert = \\vert \\mu\\vert^{-k} \\Vert B^k X_\\psi(\\bar\\theta)\\Vert = \\vert\\mu\\vert^{-k}\\vert\\mu\\vert^k \\Vert X_\\psi(\\bar\\theta)\\Vert = \\Vert X_\\psi(\\bar\\theta)\\Vert {\\rm \\ for\\ all\\ }k\\in{\\mathbb Z}.\\]\nDenseness of ${\\mathcal O}_B(\\bar\\theta)$ and continuity of $X_\\psi$ imply that $\\Vert X_\\psi(\\theta)\\Vert = \\Vert X_\\psi(\\bar\\theta)\\Vert$ for all $\\theta\\in{\\mathbb T}^n$. The one-dimensionality of $E^l_B$ to which $X_\\psi$ belongs implies that $X_\\psi$ is a constant vector. Thus $X_\\psi$ is an eigenvector of $B$.\n\nThe assumed topological irreducibility of $\\alpha$ and the inclusion $\\Delta_{h^{-1}}(\\alpha({\\mathbb Z}^d))\\subset {\\rm GL}(n,{\\mathbb Z})$ imply that $\\Delta_{h^{-1}}(\\alpha(\\mathbb Z^d))$ is algebraically irreducible. Thus there is $B^\\prime\\in \\Delta_{h^{-1}}(\\alpha({\\mathbb Z}^d))$ with an irreducible characteristic polynomial. Since $BB^\\prime = B^\\prime B$ and $B^\\prime$ has an irreducible characteristic polynomial, the eigenvector $X_\\psi$ of $B$ is an eigenvector of $B^\\prime$ too. Then there is $\\vartheta\\in{\\mathbb R}^\\times$ such that the components of $\\vartheta^{-1}X_\\psi$ form a ${\\mathbb Q}$-basis for an algebraic number field of degree $n$ over ${\\mathbb Q}$ (see Propositions 1 and 8 in \\cite{WA}). Thus the flow $\\phi$ determined by $X_\\phi = \\vartheta^{-1}X_\\psi$ is irrational of Koch type for which $h_*X_\\Phi = X_\\psi = \\vartheta X_\\phi$. \\hfill $\\Box$\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusions}\n\\vskip -0.39em\nWe introduce a general method to train a CNN which can switch the resolution during inference, and thus its running speed can be selected to fit different computational constraints. Specifically, we propose the parallel training to handle multi-resolution images, using shared parameters and private BNs. We analyze the interaction effects of resolutions from the perspective of train-test discrepancy. And we propose to learn an ensemble distillation based on the same image instances with different resolutions, which improves accuracies to a large extent.\n\n\n\\section*{Acknowledgement}\n\\vskip -0.35em\nThis work is jointly supported by the National Science Foundation of China (NSFC) and the German Research Foundation (DFG) in project Cross Modal Learning, NSFC 61621136008\/DFG TRR-169. We thank Aojun Zhou for the insightful discussions. \n\n\n\\clearpage\n\n\n\n\\section*{\\LARGE Appendix}\n\n\\section{Related Work}\n\\label{rw}\n\\subsubsection{Image Recognition.} Image recognition acts as a benchmark to evaluate models and is a core task for computer vision. Advances on the large-scale image recognition datasets, like ImageNet \\cite{DBLP:conf\/cvpr\/DengDSLL009}, can translate to improved results on a number of other applications \\cite{DBLP:journals\/ijcv\/GordoARL17,DBLP:conf\/cvpr\/KornblithSL19,DBLP:conf\/cvpr\/OquabBLS14}. \nIn order to enhance the model generalization for image recognition, data augmentation strategies, e.g., random-size crop, are adopted during training \\cite{DBLP:conf\/cvpr\/HeZRS16,DBLP:journals\/corr\/abs-1812-01187,DBLP:conf\/cvpr\/SzegedyLJSRAEVR15}. Besides, common models are usually trained and tested with fixed-resolution inputs. 
\\cite{DBLP:journals\/corr\/abs-1906-06423} shows that for a model trained with the default $224\\times224$ resolution and tested at lower resolutions, the accuracy quickly deteriorates (e.g., drops 11.6\\% at test resolution $128\\times128$ on ResNet50). \n\n\\subsubsection{Accuracy-Efficiency Trade-Off.} \n\\vskip-1em There have been many attempts to balance accuracy and efficiency by model scaling. Some of them adjust the structural configurations of networks. For example, \nResNets \\cite{DBLP:conf\/cvpr\/HeZRS16} provide several choices of network depths from shallow to deep. MobileNets \\cite{DBLP:journals\/corr\/HowardZCKWWAA17,DBLP:conf\/cvpr\/SandlerHZZC18} and ShuffleNets \\cite{DBLP:conf\/cvpr\/ZhangZLS18} can reduce network widths by using smaller width multipliers. Some other works \\cite{DBLP:journals\/corr\/abs-1905-02244,DBLP:journals\/corr\/HowardZCKWWAA17,DBLP:journals\/corr\/abs-1908-03888,DBLP:conf\/cvpr\/SandlerHZZC18} reduce the computational complexity by decreasing the image resolution at input, which is also our focus. Modifying the resolution usually does not change the number of network parameters, but significantly affects the computational complexity \\cite{DBLP:journals\/corr\/HowardZCKWWAA17}. \n\n\n\\subsubsection{Knowledge Distillation.} \n\\vskip-1em \nA student network can be improved by imitating feature representations or soft targets of a larger teacher network \\cite{DBLP:journals\/corr\/HintonVD15,DBLP:conf\/eccv\/LiH16,DBLP:conf\/cvpr\/ZhangXHL18}. The teacher is usually pre-trained beforehand and fixed, and the knowledge is transferred in one direction \\cite{DBLP:journals\/corr\/RomeroBKCGB14}. Yet \\cite{DBLP:conf\/cvpr\/ZhangXHL18} introduces a two-way transfer between two peer models. \\cite{DBLP:conf\/cvpr\/SunYZZ19} performs mutual learning within one single network assisted by intermediate classifiers. \\cite{DBLP:conf\/nips\/LanZG18} learns a native ensemble design based on multiple models for distillation. \\cite{DBLP:journals\/corr\/abs-1903-05134} conducts the knowledge distillation between the whole model and each split smaller model. Regarding supervised image recognition, existing distillation works rely on different models, usually requiring a separate teacher network of higher capacity than the student. In contrast, our design is data-driven and applied within a shared model, collecting complementary knowledge from the same image instances with different resolutions.\n\n\\section{Introduction}\n\\label{intro}\nConvolutional Neural Networks (CNNs) have achieved great success on image recognition tasks \\cite{DBLP:conf\/cvpr\/HeZRS16,DBLP:conf\/nips\/KrizhevskySH12}, and well-trained recognition models usually need to be deployed on mobile phones, robots or autonomous vehicles \\cite{DBLP:conf\/iclr\/CaiZH19,DBLP:journals\/corr\/HowardZCKWWAA17}. To fit the resource constraints of devices, extensive research efforts have been devoted to balancing accuracy and efficiency by reducing the computational complexities of models. Some of these methods adjust the structural configurations of networks, e.g., by adjusting the network depths \\cite{DBLP:conf\/cvpr\/HeZRS16}, widths \\cite{DBLP:journals\/corr\/HowardZCKWWAA17,DBLP:conf\/iclr\/YuYXYH19} or the convolutional blocks \\cite{DBLP:conf\/eccv\/MaZZS18,DBLP:conf\/cvpr\/ZhangZLS18}. 
Besides that, adjusting the image resolution is another widely-used method for the accuracy-efficiency trade-off \\cite{DBLP:journals\/corr\/abs-1905-02244,DBLP:journals\/corr\/HowardZCKWWAA17,DBLP:journals\/corr\/abs-1908-03888,DBLP:conf\/cvpr\/SandlerHZZC18}. If input images are downsized, all feature resolutions at different convolutional layers are reduced correspondingly by the same ratio, and the computational cost of a model is nearly proportional to the image resolution ($H\\times W$) \\cite{DBLP:journals\/corr\/HowardZCKWWAA17}. However, for a common image recognition model, when the test image resolution differs from the resolution used for training, the accuracy quickly deteriorates \\cite{DBLP:journals\/corr\/abs-1906-06423}. To address this issue, existing works \\cite{DBLP:journals\/corr\/abs-1905-02244,DBLP:conf\/cvpr\/SandlerHZZC18,DBLP:conf\/icml\/TanL19} train an individual model for each resolution. As a result, the total number of models to be trained and saved is proportional to the number of resolutions considered at runtime. Besides the high storage costs, each adjustment of the resolution is accompanied by the additional latency of loading another model trained with the target resolution.\n\nThe ability to switch the image resolution at inference meets a common need for real-life model deployments. By switching resolutions, the running speeds and costs are adjustable to flexibly handle the real-time latency and power requirements for different application scenarios or workloads. Besides, the flexible latency compatibility allows such a model to be deployed on a wide range of resource-constrained platforms, which is friendly for application developers. In this paper, we focus on switching input resolutions for an image recognition model, and propose a general and economical method to improve overall accuracies. Models trained with our method are called \\textbf{Resolution Switchable Networks (RS-Nets)}. Our contribution is composed of three parts.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.5]{pics\/mflops.pdf}\n\\vskip-0.8em\n\\caption{ImageNet accuracy vs. FLOPs (Multiply-Adds) of our \\textbf{single} models and the corresponding \\textbf{sets} of individual models. A single RS-Net model is executable at each of the resolutions, and even achieves significantly higher accuracies than individual models. The results of two state-of-the-art switchable networks (which switch by varying network widths), S-MobileNetV2 \\cite{DBLP:conf\/iclr\/YuYXYH19} and US-MobileNetV2 \\cite{DBLP:journals\/corr\/abs-1903-05134}, are provided for comparison.\nDetails are in Table \\ref{tabs:main_results} and Table \\ref{tabs:mflops}.}\n\\label{mflops_all_nets}\n\\vskip-1.4em\n\\end{figure}\n\n\nFirst, we propose a parallel training framework where images with different resolutions are trained within a single model. As the resolution difference usually leads to a difference in activation statistics in a network \\cite{DBLP:journals\/corr\/abs-1906-06423}, we adopt shared network parameters but privatized Batch Normalization layers (BNs) \\cite{DBLP:conf\/icml\/IoffeS15} for each resolution. Switching BNs enables the model to flexibly switch image resolutions, without needing to adjust other network parameters.\n\nSecond, we associate the multi-resolution interaction effects with a kind of train-test discrepancy (details in Section \\ref{sec:discrepancy}). 
Both our analysis and empirical results reach the interesting conclusion that the parallel training framework tends to enlarge the accuracy gaps over different resolutions. On the one hand, accuracy gains at high resolutions make a stronger teacher potentially available. On the other hand, the accuracy drop at the lower resolution indicates that the benefits of parallel training itself are limited. Both reasons encourage us to further propose a design of ensemble distillation to improve overall performance.\n\nThird, to the best of our knowledge, we are the first to propose that a data-driven ensemble distillation can be learnt on the fly for image recognition, based on the same image instances with different resolutions. Regarding supervised image recognition, the structure of our design is also different from existing ensemble or knowledge distillation works, as they focus on the knowledge transfer among different models, e.g., by stacking multiple models \\cite{DBLP:conf\/cvpr\/ZhangXHL18}, multiple branches \\cite{DBLP:conf\/nips\/LanZG18}, pre-training a teacher model, or splitting the model into sub-models \\cite{DBLP:journals\/corr\/abs-1903-05134}, while our model is single and shared, with few extra parameters.\n\nExtensive experiments on the ImageNet dataset validate that RS-Nets are executable given different image resolutions at runtime, and achieve significant accuracy improvements at a wide range of resolutions compared with individually trained models. Illustrative results are provided in Fig. \\ref{mflops_all_nets}, which also verify that our proposed method can be generally applied to modern recognition backbones.\n\n\\section{Proposed Method}\n\\vskip-0.1em\nA schematic overview of our proposed method is shown in Fig. \\ref{framework}. In this section, we detail the insights and formulations of parallel training, interaction effects and ensemble distillation based on the multi-resolution setting.\n\n\\subsection{Multi-Resolution Parallel Training}\n\\label{sec:parallel_training}\n\nTo make the description self-contained, we begin with the basic training of a CNN model. Given training samples, we crop and resize each sample to a fixed-resolution image $\\bm{x}^i$. We denote network inputs as $\\{(\\bm{x}^i,y^i)|i\\in\\{1,2,\\cdots,N\\}\\}$, where $y^i$ is the ground truth which belongs to one of the $C$ classes, and $N$ is the number of samples. Given network configurations with parameters $\\bm{\\theta}$, the predicted probability of the class $c$ is denoted as $p(c|\\bm{x}^i, \\bm{\\theta})$. The model is optimized with a cross-entropy loss defined as:\n\\vskip-0.3em\n\\begin{equation}\n\\label{ce}\n\\mathcal{H}(\\bm{x},\\bm{y})=-\\frac{1}{N}\\sum_{i=1}^N\\sum_{c=1}^C\\delta(c,y^i)\\log\\big(p(c|\\bm{x}^i, \\bm{\\theta})\\big),\n\\end{equation}\nwhere $\\delta(c,y^i)$ equals $1$ when $c=y^i$, and $0$ otherwise.\n\n\\begin{figure}[t]\n\\centering\n\\hskip-0.3em\n\\resizebox{\\linewidth}{!}{\n\\includegraphics[scale=0.42]{pics\/framework_frame.pdf}}\n\\caption{Overall framework of training a RS-Net. Images with different resolutions are trained in parallel with shared Conv\/FC layers and private BNs. The ensemble logit ($\\bm{z}_0$) is learnt on the fly as a weighted mean of logits ($\\bm{z}_1,\\bm{z}_2,\\cdots,\\bm{z}_S$), shown as green arrows. Knowledge distillations are shown as red arrows. 
For inference, one of the $S$ forward paths is selected (according to the image resolution), with its corresponding BNs, to obtain the corresponding prediction $\\bm{p}_s$, $s\\in\\{1,2,\\cdots,S\\}$. \\emph{The ensemble and knowledge distillation are not needed during inference.}}\n\\label{framework}\n\\vskip-0.7em\n\\end{figure}\n\nIn this part, we propose \\textbf{multi-resolution parallel training}, called \\textbf{parallel training} for brevity, to train a single model which can switch image resolutions at runtime. During training, each image sample is randomly cropped and resized to several duplicate images with different resolutions. Supposing that there are $S$ resolutions in total, the inputs can be written as $\\{(\\bm{x}_1^i, \\bm{x}_2^i, \\cdots, \\bm{x}_S^i, y^i)|i\\in\\{1,2,\\cdots,N\\}\\}$. Recent CNNs for image recognition follow similar structures, all stacking Convolutional (Conv) layers, a Global Average Pooling (GAP) layer and a Fully-Connected (FC) layer. In CNNs, if input images have different resolutions, the corresponding feature maps in all Conv layers will also vary in resolution. Thanks to GAP, features are transformed to a unified spatial dimension ($1\\times1$) with an equal number of channels, making it possible for them to be followed by the same FC layer. During our parallel training, we share the parameters of the Conv layers and the FC layer, and therefore the training for multiple resolutions can be realized in a single network. The loss function for parallel training is calculated as a summation of the cross-entropy losses:\n\\vskip-2mm\n\\begin{equation}\\label{loss_cls}\n\\mathcal{L}_{cls}=\\sum_{s=1}^S \\mathcal{H}(\\bm{x}_s,\\bm{y}).\n\\end{equation}\n\nSpecializing Batch Normalization layers (BNs) \\cite{DBLP:conf\/icml\/IoffeS15} has proved to be effective for efficient model adaptation \\cite{DBLP:conf\/cvpr\/ChangYSKH19,DBLP:conf\/iclr\/MudrakartaSZH19,DBLP:conf\/iclr\/YuYXYH19,DBLP:journals\/corr\/abs-1909-00182}. In image recognition tasks, resizing images results in different activation statistics in a network \\cite{DBLP:journals\/corr\/abs-1906-06423}, including the means and variances used in BNs. Thus during parallel training, we privatize BNs for each resolution. Results in the left panel of Fig. \\ref{share} verify the necessity of privatizing BNs. For the $s^{\\text{th}}$ resolution, each corresponding BN layer normalizes the channel-wise feature as follows:\n\\begin{equation}\\label{sbn}\n\\bm{y}'_s=\\bm{\\gamma}_s\\frac{\\bm{y}_s-\\bm{\\mu}_s}{\\sqrt{\\bm{\\sigma}_s^2+\\epsilon}}+\\bm{\\beta}_s, s\\in\\{1,2,\\cdots,S\\},\n\\end{equation}\nwhere $\\bm{\\mu}_s$ and $\\bm{\\sigma}_s^2$ are the running mean and variance, and $\\bm{\\gamma}_s$ and $\\bm{\\beta}_s$ are the learnable scale and bias. Switching these parameters enables the model to switch resolutions.
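\nTo make the switching mechanism concrete, a minimal PyTorch-style sketch of a resolution-switchable BN layer is given below. It is an illustrative sketch of Eq. \\ref{sbn} rather than an excerpt of our implementation; the module name \\texttt{SwitchableBN} and the explicit resolution index are notational choices of this note.\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass SwitchableBN(nn.Module):\n    # One private BatchNorm2d per candidate resolution; all Conv and\n    # FC parameters outside this layer stay shared.\n    def __init__(self, num_features, num_resolutions):\n        super().__init__()\n        self.bns = nn.ModuleList(\n            [nn.BatchNorm2d(num_features) for _ in range(num_resolutions)])\n\n    def forward(self, x, res_idx):\n        # res_idx selects the BN whose running statistics and affine\n        # parameters match the resolution of the current input x.\n        return self.bns[res_idx](x)\n\\end{verbatim}\nDuring parallel training, each of the $S$ forward passes calls such layers with its own index, and at deployment the index is fixed once according to the chosen test resolution.\n\n\\subsection{Multi-Resolution Interaction Effects}\n\\label{sec:discrepancy}\n\n\n\\begin{figure}[t]\n\\centering\n\\resizebox{115mm}{!}{\n\\includegraphics[scale=0.13]{pics\/crop_224_96.pdf}}\n\\vskip-0.5em\n\\caption{\\textbf{Left:} An illustration of interaction effects for the parallel training with two resolutions. Each red box indicates the region to be cropped, and the size of each blue dotted box is the apparent object size (in this sample, a cup). For this example, in either one of the models, the apparent object size at testing is smaller than at training. 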
\\cite{DBLP:journals\/corr\/abs-1906-06423} reveals that this relation still holds when averaging all data, which is called the train-test discrepancy. The data pre-processing for training or testing follows the standard image recognition method, which will be described in Section \\ref{details}. \\textbf{Right:} CDF curves for comparing the value distributions of feature activations. All curves are plotted on the validation dataset of ImageNet, but are based on different data pre-processing methods as annotated by (train) or (test).}\n\\label{pic_crop}\n\\vskip-1.4em\n\\end{figure}\n\n\nIn this section, restricted to large-scale image datasets with fine-resolution images, we analyze the interaction effects of different resolutions under the parallel training framework. We start by posing a question: compared with individually trained models, how does parallel training affect test accuracies at different resolutions? As multi-resolution can be seen as a kind of data augmentation, we analyze from two aspects as follows.\n \n\nThe first aspect is straightforward. The model meets a wide range of image resolutions, which improves the generalization and reduces over-fitting. Thus if the setting of resolutions is suitable (e.g., not too diverse), the parallel training tends to bring overall accuracy gains at testing, especially for a high-capacity network such as ResNet50.\n\n\nThe second aspect is based on the specialty of large-scale image recognition tasks, where objects of interest randomly occupy different portions of image areas and thus the random-size crop augmentation is used during training. \\cite{DBLP:journals\/corr\/abs-1906-06423} reveals that for common recognition works, as the random-size crop is used for training but not for testing, there exists a train-test \\textbf{discrepancy} that the average ``apparent object size\" at testing is smaller than that at training. Besides, \\cite{DBLP:journals\/corr\/abs-1906-06423} achieves accuracy improvements by alleviating such discrepancy, but is accompanied with the costs of test-time resolution augmentations and finetuning. Note that we do not aim to modify the data pre-processing method to alleviate the discrepancy. Instead, we are inspired to use the concept of the discrepancy to analyze multi-resolution interaction effects. We take the parallel training with two resolutions $224\\times224$ and $96\\times96$ for example. According to the left panel of Fig. \\ref{pic_crop} (the analysis in colors), compared with the model using only $224\\times224$ images, the parallel training can be seen as augmenting $96\\times96$ images, which reduces the average apparent object size at training and thus alleviates the discrepancy. On the contrary, compared with the individual model using $96\\times96$ images, augmenting $224\\times224$ images increases the discrepancy. Thus in this aspect, this parallel training tends to increase the test accuracy at $224\\times224$ (actually +1.6\\% for ResNet18, as shown in Fig. \\ref{parallel}), while tends to reduce the test accuracy at $96\\times96$ (-0.8\\%). The right panel of Fig. \\ref{pic_crop} plots the Cumulative Distribution Function (CDF)\\footnote{The CDF of a random variable $X$ is defined as $F_X(x)=P(X\\le x)$, for all $x\\in \\mathbb{R}$.} of output components of the Global Average Pooling (GAP) layer for a well-trained ResNet18 (as a ReLU layer is before the GAP, all components are nonnegative). 
We plot CDF to compare the value distributions of feature activations when using training or testing data pre-processing method. The parallel training seems to narrow the train-test gap for the $224\\times224$ model, but widen the gap for the $96\\times96$ model.\n\n\nWe take ResNet18 as an example and summarize the two aforementioned aspects. For the parallel training with two resolutions, the test accuracy at the high resolution increases compared with its individual model, as the two aspects reach an agreement to a large degree. As for the lower resolution, we find that the accuracy slightly increases if the two resolutions are close, otherwise decreases. Similarly, when multiple resolutions are used for parallel training, test accuracies increase at high resolutions but may decrease at lower resolutions, compared with individual models. Results in Table \\ref{tabs:main_results} show that for the parallel training with five resolutions, accuracies only decrease at $96\\times96$ but increase at the other four resolutions. Detailed results in Fig. \\ref{parallel} also verify our analysis.\n\nFor image recognition, although testing at a high resolution already tends to achieve a good accuracy, using the parallel training makes it even better. This finding opens up a possibility that a stronger teacher may be available in this framework, and seeking a design based on such teacher could be highly effective.\n\n\n\n\\subsection{Multi-Resolution Ensemble Distillation}\n\\label{mred}\n\nIn this section, we propose a new design of ensemble distillation. Regarding the supervised image recognition, unlike conventional distillation works that rely on transferring knowledge among different models, ours is data-driven and can be applied in a shared model. Specifically, our design is learnt on the fly and the distillation is based on the same image instances with different resolutions.\n\n\n\\input{tabs\/samples.tex}\n\nAs is commonly known, for image recognition tasks, models given a high resolution image are easy to capture fine-grained patterns, and thus achieve good performance \\cite{DBLP:conf\/cvpr\/SzegedyVISW16,DBLP:conf\/icml\/TanL19}. However, according to the sample statistics in the middle column of Table \\ref{tab_proportion}, we find that there always exists a proportion of samples which are correctly classified at a low resolution but wrongly classified at another higher resolution. Such results indicate that model predictions at different image resolutions are complementary, and not always the higher resolution is better for each image sample. Therefore, we propose to learn a teacher on the fly as an ensemble of the predictions w.r.t. all resolutions, and conduct knowledge distillation to improve the overall performance. Our design is called \\textbf{Multi-Resolution Ensemble Distillation (MRED)}. \n\n\nDuring the training process of image recognition, for each input image $\\bm{x}^i$, the probability of the class $c$ is calculated using a softmax function:\\vskip-0.2em\n\\begin{equation}\\label{p}\np(c|\\bm{x}^i, \\bm{\\theta})=p(c|\\bm{z}^i)=\\frac{\\exp(z_c^i)}{\\sum_{j=1}^{C}\\exp(z_j^i)}, c\\in\\{1,2,\\cdots,C\\},\n\\end{equation}\nwhere $\\bm{z}^i$ is the logit, the unnormalized log probability outputted by the network, and probabilities over all classes can be denoted as the model prediction $\\bm{p}$.\n\nIn the parallel training framework, each image is randomly cropped and resized to $S$ images with different resolutions. 
To better benefit from MRED, these $S$ images need to be resized from a same random crop, as illustrated in the left-most part of Fig. \\ref{framework}. The necessity will be verified in Section \\ref{veri_kd}.\n\nAs each image sample is resized to $S$ resolutions, there are $S$ corresponding logits $\\bm{z}_1,\\bm{z}_2,\\cdots,\\bm{z}_S$. We learn a group of importance scores $\\bm{\\alpha}\\!=[\\alpha_1\\,\\alpha_2\\,\\cdots\\,\\alpha_S]$, satisfying $\\bm{\\alpha}\\!\\ge\\!0$, $\\sum_{s=1}^S\\alpha_s\\!=\\!1$, which can be easily implemented with a softmax function. We then calculate an ensemble logit $\\bm{z}_0$ as the weighted summation of the $S$ logits:\n\\begin{equation}\\label{ens_logits}\n\\hskip-0.8em\n\\bm{z}_0=\\small{\\sum_{s=1}^S}\\alpha_s \\bm{z}_s.\n\\end{equation}\n\nTo optimize $\\bm{\\alpha}$, we temporally froze the gradients of the logits $\\bm{z}_1,\\bm{z}_2,\\cdots,\\bm{z}_S$. Based on the ensemble logit $\\bm{z}_0$, the corresponding prediction $\\bm{p}_0$, called ensemble prediction, can be calculated via Eq. \\ref{p}. Then $\\bm{\\alpha}$ is optimized using a cross-entropy loss between $\\bm{p}_0$ and the ground truth, which we call the ensemble loss $\\mathcal{L}_{ens}$:\n\\begin{equation}\\label{loss_ens}\n\\mathcal{L}_{ens}=-\\frac{1}{N}\\sum_{i=1}^N\\sum_{c=1}^C\\delta(c,y^i)\\log\\big(p(c|\\bm{z}_0^i)\\big).\n\\end{equation}\n\n\nIn knowledge distillation works, to quantify the alignment between a teacher prediction $\\bm{p}_t$ and a student prediction $\\bm{p}_s$, Kullback Leibler (KL) divergence is usually used:\n\\begin{equation}\\label{kl}\n\\mathcal{D}_{kl}\\big(\\bm{p}_t\\|\\bm{p}_s\\big)=\\frac{1}{N}\\sum_{i=1}^N\\sum_{c=1}^C p(c|\\bm{z}_t^i)\\log\\frac{p(c|\\bm{z}_t^i)}{p(c|\\bm{z}_s^i)}.\n\\end{equation}\n\n\nWe force predications at different resolutions to mimic the learnt ensemble prediction $\\bm{p}_0$, and thus the distillation loss $\\mathcal{L}_{dis}$ could be obtained as:\n\\begin{equation}\\label{loss_kd_v1}\n\\mathcal{L}_{dis}=\\sum_{s=1}^S\\mathcal{D}_{kl}\\big(\\bm{p}_0\\|\\bm{p}_s\\big).\n\\end{equation}\n\nFinally, the overall loss function is a summation of the classification loss, the ensemble loss and the distillation loss, without needing to tune any extra weighted parameters:\n\\begin{equation}\\label{loss}\n\\mathcal{L}=\\mathcal{L}_{cls}+\\mathcal{L}_{ens}+\\mathcal{L}_{dis},\n\\end{equation}\nwhere in practical, optimizing $\\mathcal{L}_{ens}$ only updates $\\bm{\\alpha}$, with all network weights temporally frozen; optimizing $\\mathcal{L}_{cls}$ and $\\mathcal{L}_{dis}$ updates network weights. \n\nWe denote the method with Eq. \\ref{loss_kd_v1} as our vanilla-version MRED. Under the parallel training framework, as the accuracy at a high resolution is usually better than at a lower resolution, accuracies can be further improved by offering dense guidance from predications at high resolutions toward predictions at lower resolutions. Thus the distillation loss can be extended to be a generalized one:\\vskip-0.2em\n\\begin{equation}\\label{loss_kd_v2}\n\\mathcal{L}_{dis}=\\frac{2}{S+1}\\sum_{t=0}^{S-1}\\sum_{s=t+1}^S\\mathcal{D}_{kl}\\big(\\bm{p}_t\\|\\bm{p}_s\\big),\n\\end{equation}\nwhere the index $t$ starts from $0$ referring to the ensemble term; as the summation results in $S(S+1)\/2$ components in total, we multiply $\\mathcal{L}_{dis}$ by a constant ratio $2\/(S+1)$ to keep its range the same as $\\mathcal{L}_{cls}$. We denote the method with Eq. 
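\nTo make the overall training objective concrete, a minimal PyTorch-style sketch of Eqs. \\ref{loss_cls} and \\ref{ens_logits}--\\ref{loss_kd_v2} is given below. It is an illustrative sketch rather than our exact implementation: the function name, the parameterization of $\\bm{\\alpha}$ by a learnable vector \\texttt{alpha\\_logits} through a softmax, and the choice to detach all teacher probabilities are assumptions of this note.\n\\begin{verbatim}\nimport torch.nn.functional as F\n\ndef mred_losses(logits_list, labels, alpha_logits):\n    # logits_list: S logit tensors of shape (N, C), one per resolution,\n    # ordered from the highest resolution to the lowest.\n    S = len(logits_list)\n    alpha = F.softmax(alpha_logits, dim=0)\n    # Classification loss: cross-entropy summed over the S resolutions.\n    loss_cls = sum(F.cross_entropy(z, labels) for z in logits_list)\n    # Ensemble logit: detach() freezes the S logits, so the ensemble\n    # loss below updates only alpha.\n    z0 = sum(a * z.detach() for a, z in zip(alpha, logits_list))\n    loss_ens = F.cross_entropy(z0, labels)\n    # Full-version distillation: the teacher index t runs over the\n    # ensemble (t = 0) and all higher resolutions; teachers are detached.\n    all_logits = [z0] + logits_list\n    loss_dis = 0.0\n    for t in range(S):\n        p_t = F.softmax(all_logits[t].detach(), dim=1)\n        for s in range(t + 1, S + 1):\n            log_p_s = F.log_softmax(all_logits[s], dim=1)\n            loss_dis = loss_dis + F.kl_div(log_p_s, p_t,\n                                           reduction='batchmean')\n    loss_dis = loss_dis * 2.0 \/ (S + 1)\n    return loss_cls + loss_ens + loss_dis\n\\end{verbatim}\nIn the training loop, \\texttt{alpha\\_logits} can be a \\texttt{torch.nn.Parameter} of size $S$ optimized jointly with the network weights; by construction, $\\mathcal{L}_{ens}$ is the only loss term that reaches it.\n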
\ref{loss_kd_v2} as our full-version MRED, which is used in our experiments by default. \n\n\nThe proposed design involves negligible extra parameters (only $S$ scalars), without needing extra models. Models trained with the parallel training framework and MRED are named Resolution Switchable Networks (RS-Nets). An overall framework of training a RS-Net is illustrated in Fig. \ref{framework}. \textbf{During inference}, the network only performs one forward calculation at a given resolution, without ensemble or distillation, and thus both the computational complexity and the number of parameters are equal to those of a conventional image recognition model. \n\n\n\section{Experiments}\nWe perform experiments on ImageNet (ILSVRC12) \cite{DBLP:conf\/cvpr\/DengDSLL009,DBLP:journals\/ijcv\/RussakovskyDSKS15}, a widely-used image recognition dataset containing about 1.2 million training images and 50 thousand validation images, where each image is annotated as one of 1000 categories. Experiments are conducted with prevailing CNN architectures, including a lightweight model MobileNetV2 \cite{DBLP:conf\/cvpr\/SandlerHZZC18} and ResNets \cite{DBLP:conf\/cvpr\/HeZRS16}, where a basic-block model ResNet18 and a bottleneck-block model ResNet50 are both considered. Besides, we also evaluate our method in handling network quantization problems, where we consider different kinds of bit-widths.\n\n\n\subsection{Implementation Details}\n\label{details}\nOur basic experiments are implemented with PyTorch \cite{DBLP:conf\/nips\/PaszkeGMLBCKLGA19}. For quantization tasks, we apply our method to LQ-Nets \cite{DBLP:conf\/eccv\/ZhangYYH18}, which show state-of-the-art performance in training CNNs with low-precision weights, or with both low-precision weights and activations. \n\nWe set $\mathbb{S}=\{224\times224,192\times192,160\times160,128\times128,96\times96\}$, as commonly adopted in a number of existing works \cite{DBLP:journals\/corr\/HowardZCKWWAA17,DBLP:journals\/corr\/abs-1908-03888,DBLP:conf\/cvpr\/SandlerHZZC18}. During training, we pre-process the data for augmentation with an area ratio ($\text{cropped area}\/\text{original area}$) uniformly sampled in $[0.08, 1.0]$, an aspect ratio in $[3\/4, 4\/3]$, and random horizontal flipping. We resize images with bilinear interpolation. Note that both $[0.08, 1.0]$ and $[3\/4, 4\/3]$ follow the standard data augmentation strategies for ImageNet \cite{DBLP:conf\/cvpr\/HuangLMW17,DBLP:conf\/cvpr\/SzegedyLJSRAEVR15,DBLP:journals\/corr\/abs-1906-06423}; e.g., \emph{RandomResizedCrop} in PyTorch uses this setting by default. During validation, we first resize images with bilinear interpolation to each resolution in $\mathbb{S}$ divided by 0.875 \cite{DBLP:journals\/corr\/abs-1908-08986,DBLP:journals\/corr\/abs-1908-03888}, and then feed the central regions to the models. \n\n\nNetworks are trained from scratch with random initializations. We set the batch size to 256, and use an SGD optimizer with momentum 0.9. For standard ResNets, we train 120 epochs and the learning rate is annealed from 0.1 to 0 with cosine scheduling \cite{DBLP:journals\/corr\/abs-1812-01187}. For MobileNetV2, we train 150 epochs and the learning rate is annealed from 0.05 to 0 with cosine scheduling. For quantized ResNets, we follow the settings in LQ-Nets \cite{DBLP:conf\/eccv\/ZhangYYH18}, which train 120 epochs with a learning rate initialized to 0.1 and divided by 10 at the 30th, 60th, 85th, 95th and 105th epochs. A compact sketch of one MRED training step, combining the losses of Section \ref{mred}, is given below. 
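In this sketch (ours, for exposition; not the verbatim implementation), \texttt{model(x, s)} denotes the shared network evaluated with the private BN branch of resolution index $s$, and \texttt{alpha\_logits} parameterizes $\bm{\alpha}$ through a softmax; both names are our assumptions:

\begin{verbatim}
import torch
import torch.nn.functional as F

def mred_step(model, alpha_logits, views, target, optimizer):
    # views: S batches of the same images, ordered high -> low resolution.
    logits = [model(x, s) for s, x in enumerate(views)]
    loss_cls = sum(F.cross_entropy(z, target) for z in logits)

    # Ensemble logit z_0 (Eq. ens_logits); logits are detached so that
    # L_ens only updates alpha, with network weights frozen.
    alpha = torch.softmax(alpha_logits, dim=0)
    z0 = sum(a * z.detach() for a, z in zip(alpha, logits))
    loss_ens = F.cross_entropy(z0, target)

    # Full-version distillation (Eq. loss_kd_v2): teacher index t runs
    # over the ensemble (t = 0) and all higher resolutions; teachers are
    # detached, as in standard knowledge distillation.
    preds = [z0] + logits
    S = len(logits)
    loss_dis = 0.
    for t in range(0, S):
        p_t = torch.softmax(preds[t].detach(), dim=1)
        for s in range(t + 1, S + 1):
            log_p_s = torch.log_softmax(preds[s], dim=1)
            loss_dis = loss_dis + F.kl_div(log_p_s, p_t,
                                           reduction='batchmean')
    loss_dis = 2. / (S + 1) * loss_dis

    loss = loss_cls + loss_ens + loss_dis   # Eq. loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}

Here the optimizer holds both the network weights and \texttt{alpha\_logits}; detaching the teachers ensures that $\mathcal{L}_{ens}$ only updates $\bm{\alpha}$ while $\mathcal{L}_{cls}$ and $\mathcal{L}_{dis}$ update the network weights, matching Eq. \ref{loss}.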
The weight decay rate is set to 1e-4 for all ResNets and 4e-5 for MobileNetV2.\n\n\subsection{Results}\nAs mentioned in Section \ref{intro}, common works with multi-resolution settings train and deploy multiple individual models separately for different resolutions. We denote these individual models as \textbf{I-Nets}, which are set as baselines. We use I-\{resolution\} to represent each individual model, e.g., I-$224$.\n\n\n\input{tabs\/main_results.tex}\n\n\subsubsection{Basic Results.} \n\vskip-0.3em\nIn Table \ref{tabs:main_results}, we report results on ResNet18, ResNet50 and MobileNetV2 (M-NetV2 for short). Besides I-Nets, we also report accuracies at five resolutions using the individual model which is trained at the largest resolution (I-$224$). For our proposed method, we provide separate results of the parallel training (parallel) and the overall design (RS-Net). As mentioned in Section \ref{intro}, I-Nets require several times the number of parameters and incur high latencies when switching across models. We also cannot rely on a single individual model to switch the image resolutions, as the accuracies of I-$224$ are much lower than those of I-Nets at the other resolutions (e.g., a 15\%$\sim$20\% accuracy drop at the resolution $96\times96$). Similarly, each of the other individual models also suffers from serious accuracy drops, as can be seen in the right panel of Fig. \ref{share}. Our parallel training brings accuracy improvements at the four larger resolutions, while accuracies at $96\times96$ decrease, as analyzed in Section \ref{sec:discrepancy}. Compared with I-Nets, our RS-Net achieves large improvements at all resolutions with only 1$\/$5 of the parameters. For example, the RS-Net with ResNet50 obtains about 2.4\% absolute top-1 accuracy gains on average across the five resolutions. Note that the number of FLOPs (Multiply-Adds) is nearly proportional to the image resolution \cite{DBLP:journals\/corr\/HowardZCKWWAA17}. Regarding ResNet18 and ResNet50, the accuracies of our RS-Nets at $160\times160$ even surpass the accuracies of I-Nets at $224\times224$, significantly reducing the FLOPs at runtime by about 49\%. Similarly, for MobileNetV2, the accuracy of the RS-Net at $192\times192$ surpasses the accuracy of I-Nets at $224\times224$, reducing the FLOPs by about 26\%.\n\n\n\input{tabs\/quanti_results.tex}\n\input{tabs\/mflops.tex}\n\n\subsubsection{Quantization.}\n\vskip -1em\nWe further explore the generalization of our method to more challenging quantization problems, and we apply our method to LQ-Nets \cite{DBLP:conf\/eccv\/ZhangYYH18}. Experiments are performed under two typical kinds of quantization settings, including the quantization of weights only (2$\/$32) and the more extremely compressed quantization of both weights and activations (2$\/$2). Results of I-Nets and each RS-Net based on quantized ResNets are reported in Table \ref{tabs:quan_results}. Again, each RS-Net outperforms the corresponding I-Nets at all resolutions. For quantization problems, as I-Nets cannot force the quantized parameter values of the individual models to be the same, 2-bit weights in I-Nets are in practice stored with more digits than those of a 4-bit model\footnote{The total bit number of five models with individual 2-bit weights is $\log_2(2^2\times5)\approx4.3$.}, while the RS-Net avoids this issue. 
For quantization, the accuracy gains of the RS-Net on ResNet50 are more obvious than those on ResNet18; we conjecture that under compressed conditions, ResNet50 better bridges the network capacity and the augmented data resolutions. \n\n\n\n\n\subsubsection{Switchable Models Comparison.} \n\vskip -1em\nBased on MobileNetV2, results in Table \ref{tabs:mflops} indicate that under comparable FLOPs (Multiply-Adds), adjusting image resolutions (see I-Nets-r) achieves higher accuracies than adjusting network widths (see I-Nets-w). For example, adjusting the resolution to $96\times96$ brings 2.5\% higher absolute accuracy than adjusting the width to $0.35\times$, with even lower FLOPs. Results of S-MobileNetV2 (S) \cite{DBLP:conf\/iclr\/YuYXYH19} and US-MobileNetV2 (US) \cite{DBLP:journals\/corr\/abs-1903-05134} are also provided for comparison. As we can see, our RS-Net significantly outperforms S and US at all given FLOPs, achieving 1.5\%$\sim$4.4\% absolute gains. Although both S and US can also adjust the accuracy-efficiency trade-off, they have marginal gains or even accuracy drops (e.g., at width $1.0\times$) compared with their baseline I-Nets-w. Our model shows large gains compared with our baseline I-Nets-r (even though I-Nets-r are mostly stronger than I-Nets-w). Note that adjusting resolutions does not conflict with adjusting widths, and both methods can potentially be combined.\n\n\n\n\begin{figure}[t]\n\hskip-0.1em\n\includegraphics[scale=0.39]{pics\/basic_ablation.pdf}\n\vskip -1em\n\caption{\textbf{Left:} Comparison of parallel trainings with shared BNs and unshared (private) BNs, based on MobileNetV2. Individual models and Parallel (unshared BNs) + MRED (i.e., RS-Net) are provided for reference. \textbf{Right:} Comparison of our RS-Net and each individual model (from I-$224$ to I-$96$) tested at denser resolutions (the interval is 16), based on ResNet18. Each individual model suffers from serious accuracy drops at other resolutions, but our RS-Net avoids this issue.}\n\label{share}\n\vskip -0.5em\n\end{figure}\n\n\subsection{Ablation Study}\n\n\subsubsection{Importance of Using Private BNs.}\nA natural question is: why not share the BNs as well? Based on MobileNetV2, the left panel of Fig. \ref{share} shows that during the parallel training, privatizing BNs achieves higher accuracies than sharing BNs, especially at both the highest resolution (+1.7\%) and the lowest (+2.3\%). When BNs are shared, the activation statistics of different image resolutions are averaged, which differ from the true statistics, especially at the two ends of the resolution range.\n\n\n\subsubsection{Tested at New Resolutions.} \n\vskip -1em\nOne may wonder whether a RS-Net can be tested at a new resolution. In the right panel of Fig. \ref{share}, we test models at different resolutions with a smaller interval, using ResNet18. We follow a simple method to handle a resolution which is not involved in training. Suppose the resolution is sandwiched between two of the five training resolutions, which correspond to two groups of BN parameters; we then apply a linear interpolation to these two groups and obtain a new group of parameters, which is used for the given resolution. We observe that the RS-Net maintains high accuracies, suppressing the serious accuracy drops which exist in every individual model. A minimal sketch of this BN interpolation is given below. 
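In the sketch (illustrative Python; the \texttt{bn\_params} bookkeeping is our assumption, not code from the paper), each BN layer's parameters are linearly weighted between the two neighboring resolution branches:

\begin{verbatim}
TRAIN_RES = [96, 128, 160, 192, 224]

def interp_bn_params(bn_params, res):
    # bn_params[r] maps a BN layer name to its (weight, bias,
    # running_mean, running_var) tensors learnt at resolution r.
    lo = max(r for r in TRAIN_RES if r <= res)
    hi = min(r for r in TRAIN_RES if r >= res)
    if lo == hi:
        return bn_params[lo]
    w = (res - lo) / float(hi - lo)   # linear interpolation weight
    out = {}
    for name in bn_params[lo]:
        out[name] = tuple(
            (1. - w) * a + w * b
            for a, b in zip(bn_params[lo][name], bn_params[hi][name]))
    return out
\end{verbatim}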
\n\n\n\\begin{figure}[t]\n\\centering\n\\hskip-0.3em\\includegraphics[scale=0.36]{pics\/joint_all.pdf}\n\\vskip-0.6em\n\\caption{Absolute top-1 accuracy variations (\\%) (compared with individual models) of parallel trainings, based on ResNet18. The result of each individual model (from I-$96$ to I-$224$) is used as the baseline. We use single numbers to represent the image resolutions.}\n\\label{parallel}\n\\vskip-1em\n\\end{figure}\n\n\n\\subsubsection{Multi-Resolution Interaction Effects.} \n\\vskip-0.7em\nThis part is for verifying the analysis in Section \\ref{sec:discrepancy}, and we do not apply MRED here. Parallel training results of ResNet18 are provided in Fig. \\ref{parallel}, which show top-1 accuracy variations compared with each individual model. The left panel of Fig. \\ref{parallel} illustrates the parallel training with two resolutions, where all accuracies increase at $224\\times224$. As the gap of the two resolutions increases, the accuracy variation at the lower resolution decreases. For example, compared with I-$224$ and I-$96$ respectively, the parallel training with $224\\times224$ and $96\\times96$ images increases the accuracy at $224\\times224$ from 71.0\\% to 72.6\\%, but decreases the accuracy at $96\\times96$ from 62.6\\% to 61.8\\%. The right panel of Fig. \\ref{parallel} illustrates the parallel training with multiple resolutions, where we observe that most accuracies are improved and accuracies at the lowest resolutions may decrease. These results verify our analysis. \n\n\n\n\\subsubsection{Verification of Ensemble Distillation.} \n\\vskip-0.7em\n\\label{veri_kd}\nIn Section \\ref{mred}, we define two versions of MRED. The vanilla-version has only the distillation paths starting from the ensemble prediction $\\bm{p}_0$ toward all the other predictions, while the full-version has additional paths from predications at high resolutions toward predictions at lower resolutions. In Table \\ref{tabs:kd_results}, we compare the performance of the two versions as well as two other variants that omit the distillations from $\\bm{p}_0$, based on ResNet18. Results indicate that all kinds of the proposed distillations are indispensable. Besides, we also emphasize in Section \\ref{mred} that for training a RS-Net, each image should be randomly cropped only once (called single-crop) and then resized to multiple resolutions (called multi-resolution). Results in Table \\ref{tabs:kd_crop_results} indicate that using multi-crop (applying a random crop individually for each resolution) will weaken the benefits of MRED, as accuracies at low resolutions are lower compared with using single-crop. We also verify the importance of multi-resolution by replacing the multi-resolution setting by five identical resolutions, called single-resolution. As resolutions are identical, an individual experiment is needed for each resolution. Results indicate that applying ensemble distillation to predictions of different crops has very limited benefits compared with applying it to predictions of different resolutions.\n\n\n\\input{tabs\/kd.tex}\n\\input{tabs\/kd_crop.tex}\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction}\n\t\nThroughout, let $n$ be a positive integer, let $[n]$ denote the set $\\{1,2,\\dots,n\\}$, and let\n$B_n$ be the power set of $[n]$, ordered by inclusion. An \\emph{antichain} is a family of subsets\nof $[n]$ none of which contains any other. 
An antichain $\\mathcal{A}$ such that no\n$B \\subseteq [n]$ with $B \\not\\in \\mathcal{A}$ can be added to $\\mathcal{A}$ without destroying the\nantichain property is called a \\emph{maximal antichain}.\nBy Sperner's Theorem \\cite{Sperner1928}, any antichain in $B_n$ has at most $\\binom{n}{\\lceil n\/2\\rceil}$ elements.\nThe question studied here is: For which\nintegers $m$ with $1\\leq m\\leq\\binom{n}{\\lceil n\/2\\rceil}$ does there exist a maximal antichain of\nsize $m$?\nIn~\\cite{Griggs2021a}, we answered this question for $m$ close to $\\binom{n}{\\lceil\n n\/2\\rceil}$, the largest possible size of an antichain.\n \n\\begin{theorem}[Theorem 1 in~\\cite{Griggs2021a}]\\label{thm:large_sizes}\n Let $k=\\lceil n\/2\\rceil$, and let $m$ be a positive integer with\n $\\binom{n}{k}-k^2\\leq m\\leq\\binom{n}{k}$. There exists a maximal antichain of size $m$ in $B_n$ if\n and only if\n \\[m=\\binom{n}{k}-tl+\\binom{a}{2}+\\binom{b}{2}+c\\] for some integers $l\\in\\{k,n-k\\}$,\n $t\\in\\{0,\\dots,k\\}$, $a\\geq b\\geq c\\geq 0$, $1\\leq a+b\\leq t$.\n\\end{theorem}\n\nIn this paper, we complete the solution by constructing maximal antichains of all sizes $m$ less than those \naddressed in Theorem \\ref{thm:large_sizes}. Following \\cite{Griggs2021a} let us introduce the notation\n\\[w(n)=\\binom{n}{\\lceil n\/2 \\rceil} - \\left\\lceil\\frac{n}{2}\\right\\rceil \\left\\lceil\\frac{n+2}{4}\n\\right\\rceil.\\]\nHere is the answer to our question:\n\n\\begin{theorem}\\label{cor:main_result}\n Let $m$ be an integer with\n $1 \\le m \\le \\binom{n}{\\lceil n\/2\\rceil}$. If $m \\le w(n)$, then there is a maximal antichain of size\n $m$ in $B_n$. For $m>w(n)$, there is a maximal antichain of size $m$ if and only if $m$ is of the\n form that is stated in Theorem~\\ref{thm:large_sizes}.\n\\end{theorem}\n\nFor given $n$, denote by $\\phi(n)$ the smallest value of $m$ for which there is no maximal antichain of size $m$ in $B_n$.\nIn view of Theorem \\ref{cor:main_result} here and Theorem 3 in \\cite{Griggs2021a} we immediately obtain:\n\n\\begin{theorem}\\label{thm:asymptotics}\nFor $n\\to\\infty$, $\\displaystyle\\phi(n)=\\binom{n}{\\lceil n\/2\\rceil}-\\left(\\frac12+o(1)\\right)n^{3\/2}$.\n\\end{theorem}\n\nLet us mention a related problem of interest. A family of subsets of $[n]$ is \\emph{flat}\\\/ if all sets in the \nfamily have the same size within one. So a flat antichain in $B_n$ is completely contained in some two \nconsecutive levels of $B_n$. For what values $m$ does there exist a maximal antichain in $B_n$ that is flat?\nIn \\cite{Griggs2021a} it is shown that values $m$ covered in Theorem \\ref{thm:large_sizes} (which are large) \nhave this property. Our third paper \\cite{Griggs2021c} will take us down all the way below $n^2$, by constructing \nflat maximal antichains for all possible sizes at least $\\binom n2 - \\lfloor (n+1)^2\/8 \\rfloor$.\nIt turns out that not for all very small sizes flat maximal antichains exist.\n\nThe rest of the paper consists of the proof of our main result, Theorem \\ref{cor:main_result}. \nIn the next section we set up the actual result we need to prove, by induction on $n$.\nIt is helpful to define $S(n)=\\{\\abs{\\mathcal A}\\,:\\,\\mathcal A\\text{ is a maximal antichain in }B_n\\}$.\nIn \\Cref{sec:construction} we present our general construction of maximal antichains on three consecutive\nlevels, and derive that $S(n)$ contains certain intervals. 
In \\Cref{sec:large_n}, we verify that the\nintervals of sizes obtained from this construction are sufficient for $n\\geq 20$. In\n\\Cref{sec:small_n} we provide an additional construction to fill the gaps for $7\\leq n\\leq 19$.\n\n\n\\section{Inductive reduction}\\label{sec:reduction}\n\nIn view of Theorem \\ref{thm:large_sizes}, to prove Theorem \\ref{cor:main_result} it is enough to construct \nmaximal antichains of all sizes below $\\binom nk-k^2$. Note that this gets below the maximum antichain size\n$\\binom nk$ by $k^2\\sim n^2\/4$. However, for our induction to work nicely, we only go below the maximum\nantichain size by $\\sim n^2\/8$. Specifically, Theorem \\ref{cor:main_result} is implied by this statement:\n\n\\begin{theorem}\\label{thm:main_theorem}\n Let $m$ be an integer with $1\\leq m\\leq w(n)$.\n Then there exists a maximal antichain in $B_n$ with size $m$.\n\\end{theorem}\n\nTo prove Theorem \\ref{thm:main_theorem}, it will suffice by induction to construct antichains of larger sizes $m$,\nspecifically consider:\n\n\\begin{claim}\\label{lem:induction_step}\n If $n\\geq 7$, then $[w(n-1)+2,w(n)]\\subseteq S(n)$.\n\\end{claim}\n\nAssuming this claim, the proof of \\Cref{thm:main_theorem} is easy. In the proof, we shall use\nsome more notation. For a set $M$ and a positive integer $k$ we denote the\nset of all $k$-subsets of $M$ by $\\binom{M}{k}$, and we call $\\binom{[n]}{k}$ the $k$-th level of\n$B_n$. The \\emph{shadow} of a family $\\mathcal F$ of $k$-sets, denoted by $\\Delta\\mathcal F$, is the\nfamily of $(k-1)$-sets which are a subset of some member of $\\mathcal F$. Similarly, the\n\\emph{shade} (or \\emph{upper shadow}) of $\\mathcal F$ (with respect to the fixed ground set $[n]$), denoted by\n$\\nabla\\mathcal F$, is the family of all\n$(k+1)$-subsets of $[n]$ which are a superset of some member of $\\mathcal F$:\n\\begin{align*}\n \\Delta\\mathcal F &= \\left\\{A\\,:\\,\\abs{A}=k-1,\\,A\\subseteq X\\text{ for some }X\\in\\mathcal F\\right\\},\\\\\n \\nabla\\mathcal F &= \\left\\{A\\,:\\,A\\subseteq[n],\\,\\abs{A}=k+1,\\,X\\subseteq A\\text{ for some }X\\in\\mathcal F\\right\\}.\n\\end{align*}\n\n\\begin{proof}[Proof of \\Cref{thm:main_theorem} (assuming Claim \\ref{lem:induction_step})]\n We proceed by induction on $n$. As we want to use Claim \\ref{lem:induction_step} for the induction\n step, we have to establish the result for $n\\leq 6$ as the base case.\n For $1\\leq m\\leq n$, a maximal antichain of size $m$ is given by\n $\\{\\{1\\},\\{2\\},\\dots,\\{m-1\\},\\{m,\\dots,n\\}\\}$. For $n\\leq 5$, this is already sufficient, because\n then $w(n)\\leq n$. As $w(6)=14$, we need maximal antichains of sizes\n $m=7,8,\\dots,14$ for $n=6$. 
Such are given by:\n \\begin{description}\n \\item[$m=7$] $\\mathcal A=\\{\\{1,2,3\\},\\{1,2,4\\},\\{1,2,5\\},\\{3,4\\},\\{3,5\\},\\{4,5\\},\\{6\\}\\}$,\n \\item[$m=8$] $\\mathcal A=\\binom{[4]}{2}\\cup\\{\\{5\\},\\{6\\}\\}$,\n \\item[$m=9$]\n $\\mathcal\n A=\\{\\{1,2,5\\},\\{1,2,6\\},\\{3,4,5\\},\\{3,4,6\\},\\{5,6\\},\\{1,3\\},\\{1,4\\},\\{2,3\\},\\{2,4\\}\\}$,\n \\item[$m=10$]\n $\\mathcal\n A=\\{\\{1,2\\},\\{1,3\\},\\{1,4\\},\\{2,5\\},\\{3,6\\}\\}\\cup\\binom{[6]}{3}\\setminus\\nabla\\{\\{1,2\\},\\{1,3\\},\\{1,4\\},\\{2,5\\},\\{3,6\\}\\}$,\n \\item[$m=11$]\n $\\mathcal\n A=\\{\\{1,2\\},\\{1,3\\},\\{1,4\\},\\{2,5\\},\\{3,5\\}\\}\\cup\\binom{[6]}{3}\\setminus\\nabla\\{\\{1,2\\},\\{1,3\\},\\{1,4\\},\\{2,5\\},\\{3,5\\}\\}$,\n \\item[$m=12$]\n $\\mathcal\n A=\\{\\{1,2\\},\\{2,3\\},\\{4,5\\}\\}\\cup\\binom{[6]}{3}\\setminus\\nabla\\{\\{1,2\\},\\{2,3\\},\\{4,5\\}\\}$,\n \\item[$m=13$]\n $\\mathcal\n A=\\{\\{1,2\\},\\{2,3\\},\\{3,4\\}\\}\\cup\\binom{[6]}{3}\\setminus\\nabla\\{\\{1,2\\},\\{2,3\\},\\{3,4\\}\\}$,\n \\item[$m=14$]\n $\\mathcal A=\\{\\{1,2\\},\\{3,4\\}\\}\\cup\\binom{[6]}{3}\\setminus\\nabla\\{\\{1,2\\},\\{3,4\\}\\}$.\n \\end{description}\n For the induction step, we assume $n\\geq 7$ and $[1,w(n-1)]\\subseteq S(n-1)$. Noting that\n $1\\in S(n)$ for every $n$, and adding the singleton $\\{n\\}$ to each of the maximal antichains with\n sizes $m\\in[1,w(n-1)]$ in $B_{n-1}$, we obtain $[1,w(n-1)+1]\\subseteq S(n)$, and we use \n Claim \\ref{lem:induction_step} to conclude $[1,w(n)]\\subseteq S(n)$ as required.\n\\end{proof}\n\nThe rest of the paper is devoted to the proof of Claim \\ref{lem:induction_step}.\n\n\n\\section{The main construction}\\label{sec:construction}\nBefore going into the details of the construction we outline the general idea. A crucial role is\nplayed by \\emph{maximal squashed flat antichains}. These are maximal antichains of the form\n$\\mathcal{A} = \\mathcal{I} \\cup\\binom{[n]}{k-1} \\setminus \\Delta \\mathcal{I}$ with\n$\\mathcal{I} \\subseteq \\binom{[n]}{k}$ for some $k$ such that $\\mathcal{I}$ is an initial segment\nof $\\binom{[n]}k$ in \\emph{squashed order} (also known as \\emph{colexicographic order}), i.e.,\n$\\mathcal{I}$ consists of the first $\\abs{\\mathcal{I}}$ sets of $\\binom{[n]}{k}$, where $A \\subseteq [n]$\nprecedes $B \\subseteq [n]$ if $\\max \\left[ (A\\setminus B) \\cup (B \\setminus A)\\right] \\in B$.\nBy the Kruskal-Katona Theorem \\cite{Kruskal1963, Katona1968}, the squashed order is known to solve the\nShadow Minimization Problem for families of $k$-sets.\n\nFrom a maximal squashed flat antichain $\\mathcal A'\\subseteq\\binom{[n]}{k}\\cup\\binom{[n]}{k-1}$ with\n$\\binom{[k+3]}{k}\\subseteq\\mathcal A'$, we can obtain a new maximal antichain $\\mathcal A$ in the\nfollowing way. 
We replace $\\binom{[k+3]}{k}$ in $\\mathcal{A}'$ by a maximal antichain\n$\\mathcal{F}'\\subseteq \\binom{[k+3]}{k+1}\\cup\\binom{[k+3]}{k}$, where\n$\\mathcal{F}:=\\mathcal{F}'\\cap \\binom{[k+3]}{k+1}$ contains only few sets (about $k\/2$).\nClearly, $\\mathcal{F}'=\\mathcal{F}\\cup\\binom{[k+3]}{k}\\setminus\\Delta\\mathcal{F}$.\nBy results from \\cite{Griggs2021a} about the shadow spectrum, we can vary $\\mathcal F$ without changing\nits cardinality to get $k$ consecutive values for the size of $\\Delta \\mathcal{F}$.\n(For a few small values of $n$ we only get $k-1$ or $k-2$ consecutive shadow sizes,\nand we close the resulting gaps by extra constructions.)\nHence, from every maximal squashed $\\mathcal A'\\subseteq\\binom{[n]}{k}\\cup\\binom{[n]}{k-1}$,\nwe obtain an interval of $k$ consecutive sizes of maximal antichains in $B_n$. We then\nvary $\\mathcal A'$.\nWe say that two distinct maximal squashed antichains\n$\\mathcal{A}_1,\\mathcal{A}_2\\subseteq\\binom{[n]}{k}\\cup\\binom{[n]}{k-1}$ are \\emph{consecutive}\\\/\nif there is no maximal squashed antichain $\\mathcal{A}_3\\subseteq\\binom{[n]}{k}\\cup\\binom{[n]}{k-1}$\nwith $|\\mathcal{A}_1\\cap\\binom{[n]}{k}|< |\\mathcal{A}_3\\cap\\binom{[n]}{k}| < |\\mathcal{A}_2\\cap\\binom{[n]}{k}|$.\n(Note that if $\\mathcal{A}_1,\\mathcal{A}_2$ are consecutive, their sizes might not, i.e., there can be an\n$\\mathcal{A}_3$ with $|\\mathcal{A}_1|<|\\mathcal{A}_3|<|\\mathcal{A}_2|$.)\nObserving that the sizes of consecutive maximal squashed flat antichains differ\nby at most $k+1$, we deduce that the maximal antichains on levels $k+1$, $k$ and $k-1$ obtained from\nall these $\\mathcal A'$ yield an interval $I(n,k)$ of sizes of maximal antichains. We check\nthat these intervals for consecutive values of $k$ overlap, so that their union is an interval containing the\nrequired $[w(n-1)+2,w(n)]$.\n\nFor an integer $k$ with $\\lfloor n\/2 \\rfloor \\le k \\le n-3$ we denote by $\\mathcal{M}(n,k)$ the set\nof all maximal squashed flat antichains $\\mathcal{A}$ with\n$\\binom{[k+3]}{k} \\subseteq \\mathcal{A} \\subseteq \\binom{[n]}{k}\\cup \\binom{[n]}{k-1}$. Moreover, we\nlet $Y(n,k)=\\{\\abs{\\mathcal{A}}\\,:\\,\\mathcal{A} \\in \\mathcal{M}(n,k)\\}$ be the set of their\nsizes. The \\emph{shadow spectrum} $\\sigma(t,k)$ has been introduced in~\\cite{Griggs2021a} as the set\nof shadow sizes for a $t$-family of $k$-sets:\n$\\sigma(t,k) = \\left\\{\\abs{\\Delta \\mathcal{F}}\\,:\\, \\mathcal{F} \\text{ is a family of }k\\text{-sets with }\n \\abs{\\mathcal{F}} = t \\right\\}$.\n\nThe following lemma was used in \\cite{Griggs2021a} already.\n\\begin{lemma}\\label{lem:standard_ac_is_max}\n Let $1\\le k \\le n$. For every $\\mathcal{F} \\subseteq \\binom{[n]}{k}$ with $\\abs{\\mathcal{F}} \\frac{n+1}{2}$ then\n \\[I(n,k) \\supseteq \\left[ \\binom{n}{k} - C_{n-k} -tk,\\,\\binom{n}{k-1} + \\binom{k+3}{k} -\n \\binom{k+3}{k-1} -tk + \\binom{t-j}{2} + \\binom{j+1}{2} \\right].\\]\n\\item If $k \\le \\frac{n+1}{2}$ then $\\displaystyle I(n,k) = \\left[ \\binom{n}{k-1} - C_{k-1} -tk,\\,\\binom{n}{k} -tk + \\binom{t-j}{2} + \\binom{j+1}{2}\\right]$.\n\\end{enumerate}\nMoreover, $I(7,3)=[16,29]$, $I(8,4)=[41,61]$, and $I(9,4)=[69,117]$.\n\\end{lemma}\n\\begin{proof}\n For $k\\geq 5$, by \\Cref{lem:min_Y},\n \\[\\min I(n,k)=\\min\\left\\{\\binom{n}{k-1},\\,\\binom{n}{k}\\right\\}-C_{\\min\\{k-1,n-k\\}}-tk,\\]\n and this gives the left ends of the intervals in (i) and (ii). 
For $k>\\frac{n+1}{2}$, the right\n end of the interval comes from\n $\\binom{[k+3]}{k}\\cup\\binom{[n]}{k-1}\\setminus\\binom{[k+3]}{k-1}\\in\\mathcal M(n,k)$. For\n $k\\leq\\frac{n+1}{2}$, $\\binom{[n]}{k}\\in\\mathcal M(n,k)$ implies\n \\[\\max I(n,k)= \\binom{n}{k} -tk + \\binom{t-j}{2} + \\binom{j+1}{2}.\\]\n This is also valid for $k\\in\\{3,4\\}$, and gives the right ends of the intervals for\n $(n,k)\\in\\{(7,3),(8,4),(9,4)\\}$. For the left ends of these intervals we do an exhaustive search\n over $\\mathcal M(n,k)$ and find the following antichains. For $(n,k)=(7,3)$, the maximal squashed\n flat antichain with $21$ $3$-sets has size $25$. For $(n,k)=(8,4)$, the maximal squashed flat\n antichain with $37$ $4$-sets has size $53$. For $(n,k)=(9,4)$, the maximal squashed flat antichain\n with $37$ $4$-sets has size $81$.\n\\end{proof}\n\n\\section{Proof of Claim \\ref{lem:induction_step} for large $n$}\\label{sec:large_n}\nIn this section, we prove Claim \\ref{lem:induction_step} for $n\\geq 20$.\n\\begin{lemma}\\label{lem:induction_large_n}\n If $n\\geq 20$ then $[w(n-1)+2,w(n)]\\subseteq S(n)$.\n\\end{lemma}\n\\begin{proof}\n For $20\\leq n\\leq 199$, we use \\Cref{lem:size_of_I(nk),lem:interval_n_k} (and a computer) to\n verify the statement. For $n\\geq 200$, we conclude using the following inequalities which are\n proved below:\n \\begin{itemize}\n \\item For $k=\\left\\lceil\\frac{9n}{10}\\right\\rceil$, $\\min I(n,k)1.4^n$.\n\\end{proof}\n\\begin{lemma}\\label{lem:upper_bound}\n For $n\\geq 200$ and $k=\\lceil n\/2\\rceil$, $\\max I(n,k)\\geq w(n)$.\n\\end{lemma}\n\\begin{proof}\n Set $t=\\left\\lfloor\\frac{k+3}{2}\\right\\rfloor$ and $j=j^*(t)$. By \\Cref{lem:size_of_I(nk)}, we\n have to show\n \\[\\binom{n}{k}-tk+\\binom{t-j}{2}+\\binom{j+1}{2}\\geq \\binom{n}{k}-k\\left\\lceil\\frac{k+1}{2}\\right\\rceil.\\]\n We prove the stronger inequality $\\displaystyle\\binom{n}{k}-tk+\\binom{t-j}{2}\\geq\\binom{n}{k}-k\\left\\lceil\\frac{k+1}{2}\\right\\rceil$,\n or, equivalently,\n \\[\\binom{t-j}{2}\\geq\n k\\left(\\left\\lfloor\\frac{k+3}{2}\\right\\rfloor-\\left\\lceil\\frac{k+1}{2}\\right\\rceil\\right).\\]\n If $k$ is even then the right-hand side is $0$, and there is nothing to do. For odd $k$, the\n right-hand side is equal to $k$, and we can show $\\binom{t-j}{2}\\geq k$ as in the proof of\n \\Cref{lem:last_shadow_interval}. The assumption $k\\geq 33$ made in this proof follows from $n\\geq 200$.\n\\end{proof}\n\\begin{lemma}\\label{lem:bridges}\n For $n\\geq 200$ and $\\left\\lceil\\frac{n+2}{2}\\right\\rceil\\leq k\\leq\\left\\lceil\\frac{9n}{10}\\right\\rceil$, $\\max I(n,k)\\geq\\min I(n,k-1)$.\n\\end{lemma}\n\\begin{proof}\n Set $t=\\left\\lfloor\\frac{k+3}{2}\\right\\rfloor$, $t'=\\left\\lfloor\\frac{k+2}{2}\\right\\rfloor$ and\n $j=j^*(t)$. By \\Cref{lem:size_of_I(nk)}, we have to show that\n \\[\\binom{n}{k-1} + \\binom{k+3}{k} -\n \\binom{k+3}{k} -tk + \\binom{t-j}{2} + \\binom{j+1}{2}\\geq\\min I(n,k-1).\\]\n For $k=\\frac{n+2}{2}$, this is\n \\[\\binom{n}{k-1}+\\binom{k+3}{k}-\\binom{k+3}{k-1}-tk+\\binom{t-j}{2}+\\binom{j+1}{2}\\geq\\binom{n}{k-2}-C_{k-2}-t'(k-1).\\]\n From $n^5\\leq\\binom{n}{k-1}$, we obtain\n \\[n^4\\leq\\frac1k\\binom{n}{k-1}=\\binom{n}{k-1}-\\binom{n}{k-2},\\]\n and this can be used to bound the left-hand side:\n \\[\\binom{n}{k-1}+\\binom{k+3}{k}-\\binom{k+3}{k-1}-tk+\\binom{t-j}{2}+\\binom{j+1}{2}\\geq\\binom{n}{k-1}-n^4\\geq\\binom{n}{k-2},\\]\n which is obviously larger than the right-hand side. 
Now we assume $k\\geq\\left\\lfloor\\frac{n+4}{2}\\right\\rfloor$, and the claim will follow from\n \\[\\binom{n}{k-1}+\\binom{k+3}{k}-\\binom{k+3}{k-1}-tk+\\binom{t-j}{2}+\\binom{j+1}{2}\n \\geq\\binom{n}{k-1}-C(n-k+1)-t'(k-1).\\]\n The left-hand side is larger than $\\binom{n}{k-1}-n^4$ and the right-hand side is smaller than\n \\[\\binom{n}{k-1}-C_{n-k+1}\\leq\\binom{n}{k-1}-C_{\\lfloor n\/10\\rfloor}\\leq\\binom{n}{k-1}-\\frac{1}{\\lfloor n\/10\\rfloor+1}\\binom{2\\lfloor n\/10\\rfloor}{\\lfloor n\/10\\rfloor}.\\]\n Therefore, it is sufficient to\n verify $\\frac{1}{l+1}\\binom{2l}{l}\\geq n^4$ for $l=\\lfloor n\/10\\rfloor$. We use the bound\n \\[\\frac{1}{l+1}\\binom{2l}{l}\\geq \\frac{4^l}{(l+1)\\left(\\pi l\\frac{4l}{4l-1}\\right)^{1\/2}}\\]\n from~\\cite{Dutton1986}. From $n\\geq 200$, it follows that $l\\geq 20$, and this implies $4^l\\geq\n 25000l^{11\/2}$. Bounding the denominator by\n \\[(l+1)\\left(\\pi l\\frac{4l}{4l-1}\\right)^{1\/2}\\leq\\frac{23l}{22}\\left(\\pi\n l\\frac{88}{87}\\right)^{1\/2}\\leq 2l^{3\/2},\\]\n we obtain\n \\[\\frac{4^l}{(l+1)\\left(\\pi l\\frac{4l}{4l-1}\\right)^{1\/2}}\\geq 12500 l^4\\geq (11l)^4\\geq n^4,\\]\n which concludes the proof.\n\\end{proof}\n\n\n\\section{Proof of Claim \\ref{lem:induction_step} for small $n$}\\label{sec:small_n}\nIn this Section, we prove Claim \\ref{lem:induction_step} for $7\\leq n\\leq 19$. We need a few auxiliary results. Let\n\\[S(n,k)=\\left\\{\\abs{\\mathcal A}\\,:\\,\\mathcal A\\subseteq\\binom{[n]}{k-1}\\cup\\binom{[n]}{k}\\text{ is\n a maximal antichain in }B_n\\right\\}.\\]\n\n\\begin{lemma}\\label{lem:recursion}\n For $k\\geq 2$ and $n\\geq k+1$, $\\displaystyle S(n,k)\\supseteq S(n-1,k)+\\binom{n-1}{k-2}$ and\n $\\displaystyle S(n,k)\\supseteq S(n-1,k-1)+\\binom{n-1}{k}$.\n\\end{lemma}\n\\begin{proof}\n let $\\mathcal A'\\subseteq\\binom{[n-1]}{[k-1]}\\cup\\binom{[n-1]}{[k]}$ be a maximal antichain in\n $B_{n-1}$. Then\n \\[\\mathcal A=\\mathcal\n A'\\cup\\left\\{A\\cup\\{n\\}\\,:\\,A\\in\\binom{[n-1]}{k-2}\\right\\}\\subseteq\\binom{[n]}{k-1}\\cup\\binom{[n]}{k}\\]\n is a maximal antichain in $B_n$, and this implies the first inclusion. For the second one, let\n $\\mathcal A'\\subseteq\\binom{[n-1]}{[k-2]}\\cup\\binom{[n-1]}{[k-1]}$ be a maximal antichain in\n $B_{n-1}$. Then\n \\[\\mathcal A=\\left\\{A\\cup\\{n\\}\\,:\\,A\\in\\mathcal\n A'\\right\\}\\cup\\binom{[n-1]}{k}\\subseteq\\binom{[n]}{k-1}\\cup\\binom{[n]}{k}\\]\n is a maximal antichain in $B_n$.\n\\end{proof}\n\n\\begin{definition}\n A flat antichain $\\mathcal A\\subseteq\\binom{[n]}{k-1}\\cup\\binom{[n]}{k}$ is called\n \\emph{$\\{1,2\\}$-separated} if $\\{1,2\\}\\subseteq A$ for every $k$-set $A\\in\\mathcal A$\n $\\lvert A\\cap\\{1,2\\}\\rvert\\leq 1$ for every $(k-1)$-set $A\\in\\mathcal A$.\n\\end{definition}\n\n\\begin{observation}\\label{obs:12_separated_recursion}\n Let $\\mathcal A'\\subseteq\\binom{[n-1]}{k-1}\\cup\\binom{[n-1]}{k}$ and\n $\\mathcal A''\\subseteq\\binom{[n-1]}{k-2}\\cup\\binom{[n-1]}{k-1}$ be $\\{1,2\\}$-separated\n maximal antichains in $B_{n-1}$. 
Then\n $\\mathcal A=\\mathcal A'\\cup\\{A\\cup\\{n\\}\\,:\\,A\\in\\mathcal\n A''\\}\\subseteq\\binom{[n-1]}{k-1}\\cup\\binom{[n-1]}{k}$ is a $\\{1,2\\}$-separated maximal antichain\n in~$B_n$.\n\\end{observation}\nThis can be used in an induction to establish the following result.\n\\begin{lemma}\\label{lem:separated_intervals}\n Let $n$, $k$ and $m$ be integers with $k\\geq 2$, $n\\geq k+1$, and\n \\[\\binom{n-1}{k-1}\\leq m\\leq \\binom{n}{k-1}-2\\binom{n-3}{k-3}-\\binom{n-4}{k-5}.\\]\n Then there exists a $\\{1,2\\}$-separated antichain\n $\\mathcal A\\subseteq\\binom{[n]}{k-1}\\cup\\binom{[n]}{k}$ with $\\abs{\\mathcal A}=m$.\n\\end{lemma}\n\\begin{proof}\n For $k=2$, we have $m\\in\\{n-1,n\\}$. Then $\\binom{[n]}{1}$ is a $\\{1,2\\}$-separated maximal\n antichain of size $n$, and $\\{\\{1,2\\}\\}\\cup\\left(\\binom{[n]}{1}\\setminus\\{\\{1\\},\\,\\{2\\}\\}\\right)$\n is a $\\{1,2\\}$-separated maximal antichain of size $n-1$. For $k\\geq 3$, $n=k+1$, we have\n $k\\leq m\\leq 2k-2$. For $l\\in\\{3,4,\\dots,n\\}$,\n \\[\\mathcal A= \\left\\{[n]\\setminus\\{j\\}\\,:\\,3\\leq j\\leq l\\right\\}\\ \\cup\\ \\{\\{3,4,\\dots,n\\}\\}\\ \\cup\\\n \\{\\{i\\}\\cup([3,n]\\setminus\\{j\\})\\,:\\,i\\in\\{1,2\\},\\,j\\in\\{l+1,\\dots,n\\}\\}.\\] is a maximal\n $\\{1,2\\}$-separated antichain with $\\abs{\\mathcal A}=l-2+2(n-l)+1=2k-l+1$. For $n\\geq k+2$, we\n proceed by induction, and note that \\Cref{obs:12_separated_recursion} implies that we\n get all sizes $m$ with\n \\[\\binom{n-2}{k-1}+\\binom{n-2}{k-2}\\leq\n m\\leq\\binom{n-1}{k-1}-2\\binom{n-4}{k-3}-\\binom{n-5}{k-5}+\\binom{n-1}{k-2}-2\\binom{n-4}{k-4}-\\binom{n-5}{k-6},\\]\n and this simplifies to the claimed range.\n\\end{proof}\n\\begin{lemma}\\label{lem:bridges_2}\n For $\\displaystyle 3\\leq k\\le 10$ and $2k\\leq n\\leq 20$, $\\displaystyle\\left[\\binom{n-1}{k-1},\\binom{n}{k-1}\\right]\\subseteq S(n,k)$.\n\\end{lemma}\n\\begin{proof}\n We proceed by induction on $k$, and for fixed $k$ by induction on $n$. For the base case\n $(k,n)=(3,6)$, we have to verify $[10,15]\\subseteq S(6,3)$. For $m\\in[10,14]$ we can use the\n maximal antichains on levels $2$ and $3$ in $B_6$ that are given in the proof of\n \\Cref{thm:main_theorem} at the end of \\Cref{sec:introduction}, and for $m=15$ we use the maximal\n antichain $\\binom{[6]}{2}$. Next, we look at the induction step from $(k,n-1)$ to $(k,n)$ for\n $n\\geq 2k+1$. From the induction hypothesis and \\Cref{lem:recursion},\n \\[S(n,k)\\supseteq\\left[\\binom{n-2}{k-1},\\,\\binom{n-1}{k-1}\\right]+\\binom{n-1}{k-2}=\\left[\\binom{n-2}{k-1}+\\binom{n-1}{k-2},\\,\\binom{n}{k-1}\\right],\\]\n and the claim follows with \\Cref{lem:separated_intervals} and\n \\[\\binom{n}{k-1}-2\\binom{n-3}{k-3}-\\binom{n-4}{k-5}\\geq\\binom{n-2}{k-1}+\\binom{n-1}{k-2}\\]\n for all $(k,n)$ with $4\\leq k\\leq 10$ and $2k+1\\leq n\\leq 20$. To complete the induction argument, we check that the claim for $k\\geq 4$ and $n=2k$ is implied by\n the statement for $k-1$. 
We need four ingredients:\n \\begin{itemize}\n \\item By \\Cref{lem:separated_intervals},\n \\[\\left[\\binom{2k-1}{k-1},\\,\n \\binom{2k}{k-1}-2\\binom{2k-3}{k-3}-\\binom{2k-4}{k-5}\\right]\\subseteq S(2k,k).\\]\n \\item By \\Cref{lem:recursion,lem:separated_intervals},\n \\[S(2k,k)\\supseteq\\left[\\binom{2k-2}{k-1},\\,\\binom{2k-1}{k-1}-2\\binom{2k-4}{k-3}-\\binom{2k-5}{k-5}\\right]+\\binom{2k-1}{k-2}.\\]\n \\item By \\Cref{lem:recursion}, $S(2k,k) \\supseteq S(2k-1,k) + \\binom{2k-1}{k-2} \\supseteq S(2k-2,k-1)+\\binom{2k-2}{k}+\\binom{2k-1}{k-2}$,\n and with induction,\n \\[S(2k,k)\\supseteq\\left[\\binom{2k-3}{k-2},\\,\\binom{2k-2}{k-2}\\right]+\\binom{2k-2}{k}+\\binom{2k-1}{k-2}.\\]\n \\item By \\Cref{lem:recursion} and induction,\n \\[S(2k,k)\\supseteq\\left[\\binom{2k-2}{k-2},\\,\\binom{2k-1}{k-2}\\right]+\\binom{2k-1}{k}=\\left[\\binom{2k-2}{k-2}+\\binom{2k-1}{k},\\,\\binom{2k}{k-1}\\right].\\]\n \\end{itemize}\n We check that for $4\\leq k\\leq 10$, the union of these four intervals is\n $[\\binom{2k-1}{k-1},\\,\\binom{2k}{k-1}]$, as required.\n\\end{proof}\nWe need one more simple construction before we can complete the argument for $n\\leq 19$.\n\\begin{lemma}\\label{lem:final_case}\n Let $n \\ge 2$ be an integer. If $m \\in S(n-1)$ and $m > \\binom{n-2}{\\lfloor \\frac{n-2}{2} \\rfloor} +1$, then $m+n-1 \\in S(n)$.\n\\end{lemma}\n\\begin{proof}\n Let $m \\in S(n-1)$ with $m > \\binom{n-2}{\\lfloor \\frac{n-2}{2} \\rfloor} +1$, and $\\mathcal{A}$ be\n a maximal antichain in $B_{n-1}$ with size $m$. Then, by Sperner's Theorem, $\\mathcal{A}$ cannot\n contain a singleton. Thus, $\\mathcal{A} \\cup \\{\\{i,n\\}\\,:\\, i \\in [n-1]\\}$ is a maximal antichain in\n $B_n$ of size $m+n-1$.\n\\end{proof}\n\nFinally, we combine \\Cref{lem:interval_n_k,lem:size_of_I(nk),lem:bridges_2,lem:final_case} to prove\nClaim \\ref{lem:induction_step} for $7\\leq n\\leq 19$.\n\\begin{lemma}\\label{lem:induction_step_small_n}\n For all $n\\in\\{7,\\dots,19\\}$, $\\left[w(n-1)+2,w(n)\\right] \\subseteq S(n)$.\n\\end{lemma}\n\\begin{proof} For each value of $n$ we list the sizes coming from the various lemmas.\n \\begin{description}\n \\item[$n=7$] We need the interval $[w(6)+2,w(7)]=[16,23]$, and \\Cref{lem:size_of_I(nk)} yields\n $I(7,3)=[16,29]$.\n \\item[$n=8$] We need $[w(7)+2,w(8)]=[25,58]$. From \\Cref{lem:size_of_I(nk)} we have $\\mathcal\n I(8,4)=[41,61]$, and from \\Cref{lem:bridges_2},\n $\\left[\\binom{7}{2},\\binom{8}{2}\\right]\\cup\\left[\\binom{7}{3},\\binom{8}{3}\\right]=[21,28]\\cup[35,56]$. Finally,\n Lemma~\\ref{lem:final_case} with $[22,27]\\subseteq S(7)$ yields $[29,34]\\subseteq S(8)$.\n \\item[$n=9$] We need $[w(8)+2,w(9)]=[60,111]$. From \\Cref{lem:size_of_I(nk)} we have $\\mathcal\n I(9,4)=[69,117]$, and from \\Cref{lem:bridges_2},\n $\\left[\\binom{8}{3},\\binom{9}{3}\\right]=[56,84]$.\n \\item[$n=10$] We need $[w(9)+2,w(10)]=[113,237]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(10,5)\\cup I(10,6)=[164,236],\\]\n and from \\Cref{lem:bridges_2},\n $\\left[\\binom{9}{3},\\binom{10}{3}\\right]\\cup\\left[\\binom{9}{4},\\binom{10}{4}\\right]=[84,120]\\cup[126,210]$. From\n \\Cref{lem:final_case} and $[112,116]\\subseteq S(9)$, we deduce $[121,125]\\subseteq S(10)$, and\n finally, a maximal antichain of size $237$ is given by $\\mathcal A=\\mathcal\n F\\cup\\binom{[10]}{5}\\setminus\\Delta\\mathcal F$ where $\\mathcal F=\\{\\{1, 2, 3, 4, 5, 6\\},\\,\\{1, 2, 3, 4, 7, 8\\},\\,\\{1, 2, 3, 4, 9, 10\\}\\}$.\n \\item[$n=11$] We need $[w(10)+2,w(11)]=[239,438]$. 
From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(11,5)\\cup I(11,7)=[273,446],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{10}{4},\\binom{11}{4}\\right]=[210,330]$.\n \\item[$n=12$] We need $[w(11)+2,w(12)]=[440,900]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(12,6)\\cup I(12,7)=[693,904],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{11}{4},\\binom{12}{4}\\right]\\cup\\left[\\binom{11}{5},\\binom{12}{5}\\right]=[330,792]$.\n \\item[$n=13$] We need $[w(12)+2,w(13)]=[902,1688]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(13,7)\\cup I(13,8)=[1138,1688],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{12}{5},\\binom{13}{5}\\right]=[792,1287]$.\n \\item[$n=14$] We need $[w(13)+2,w(14)]=[1690,3404]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(14,7)\\cup I(14,8)=[2767,3404],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{13}{5},\\binom{14}{5}\\right]\\cup\\left[\\binom{13}{6},\\binom{14}{6}\\right]=[1287,3003]$.\n \\item[$n=15$] We need $[w(14)+2,w(15)]=[3406,6395]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(15,8)\\cup I(15,9)=[4755,6402],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{14}{6},\\binom{15}{6}\\right]=[3003,5005]$.\n \\item[$n=16$] We need $[w(15)+2,w(16)]=[\\num{6397},\\num{12830}]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[I(16,8)\\cup I(16,10)=[\\num{7752},\\num{12837}],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{15}{6},\\binom{16}{6}\\right]=[\\num{5005}, \\num{8008}]$.\n \\item[$n=17$] We need $[w(16)+2,w(17)]=[\\num{12832},\\num{24265}]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[ I(17,9)\\cup I(17,10)=[\\num{18763},\\num{24267}],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{16}{7},\\binom{17}{7}\\right]=[\\num{11440}, \\num{19448}]$.\n \\item[$n=18$] We need $[w(17)+2,w(18)]=[\\num{24267},\\num{48575}]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[ I(18,9)\\cup I(18,10)\\cup I(18,11)=[\\num{31122},\\num{48577}],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{17}{7},\\binom{18}{7}\\right]=[\\num{19448}, \\num{31824}]$.\n \\item[$n=19$] We need $[w(18)+2,w(19)]=[\\num{48577},\\num{92318}]$. From \\Cref{lem:size_of_I(nk)}\n we have\n \\[ I(19,10)\\cup I(19,11)\\cup I(19,12)=[\\num{49679},\\num{92329}],\\]\n and from \\Cref{lem:bridges_2}, $\\left[\\binom{18}{7},\\binom{19}{7}\\right]=[\\num{31824}, \\num{50388}]$.\\qedhere\n \\end{description}\n\\end{proof}\n\n\n\\printbibliography\n\n\n\n\\end{document}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:introduction}\nQuasars are among the most powerful objects in the Universe, found from low redshift to redshifts beyond 7 \\citep{Mortlock2011}. Tracing the properties of quasars can help understand supermassive black holes in massive galaxies and the coevolution of black holes and their host galaxies \\citep{Kormendy2013}. Large quasar surveys are important for finding the clustering of quasars and lensed quasars, and for probing the galaxy merger scenario and measuring the mass distribution of halos \\citep[e.g.,][]{Oguri2006, Hennawi2010}. So far, more than 346,000 quasars have been spectroscopically identified in the SDSS \\citep{Schneider2010, Paris2017}.\n\nMassive spectroscopic surveys require a large amount of telescope time, so it is usually very expensive to obtain spectroscopic redshifts for large quasar samples. Photometric redshifts (photo-$z$s), derived from photometric data, provide an alternative technique to measure redshifts. 
Photometric quasar samples have been used to do many important studies, such as the clustering of quasars \\citep{Myers2006, Myers2007a, Myers2007b}, quasar number count statistics \\citep[e.g.,][]{Richards2009b, Richards2015}, cosmic magnification \\citep{Scranton2005}, and the Integrated Sachs-Wolfe effect \\citep{Giannantonio2006}. Besides, photo-$z$ estimation is very useful for quasar candidate selection in spectroscopic redshift surveys \\citep{Richards2004, Richards2009a, Richards2009b, Richards2015}.\n\nNowadays, more and more photometric data are being acquired. For example, the Pan-STARRS1 Telescope \\citep[PS1;][]{Kaiser2002} carried out a distinct set of imaging synoptic sky surveys that are useful for quasar searches in the southern sky. In the near future, the Large Synoptic Survey Telescope \\citep[LSST;][]{Tyson2002} will bring more opportunities for photo-$z$ estimates and cosmology research based on photo-$z$ quasar samples. The Dark Energy Spectroscopic Instrument \\citep[DESI;][]{DESI1,DESI2} is the successor to the Stage-III BOSS redshift survey, and will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions (RSD) with a wide-area galaxy and quasar redshift survey. High efficiency quasar candidate selection would save a lot of follow-up observation time. We aim to improve the photo-$z$ accuracy of quasars and develop an efficient quasar candidate selection algorithm for a wide range of redshift and magnitude. With carefully defined selection completeness and efficiency correction, a photometrically selected quasar sample has the potential to be used to derive the quasar luminosity function (QLF), and reach a fainter magnitude limit than a spectroscopically identified sample. Moreover, photometrically selected quasars combined with multi-epoch and multi-band LSST data will be powerful for studies such as measuring black hole mass through photometric reverberation mapping \\citep[e.g.,][]{Hernitschek2015, Zu2016}; detecting changing-look quasars \\citep[e.g.,][]{Gezari2017}; and characterizing the variability of quasars \\citep[e.g.,][]{MacLeod2010, Zuo2012}.\n\nDifferent methods have been put forward to estimate the photo-$z$s of quasars, including quasar template fitting \\citep[e.g.,][]{Budavari2001, Babbedge2004, Salvato2009}, the empirical color-redshift relation (CZR) \\citep[e.g.,][]{Richards2001a, Wu2004, Weinstein2004, Wu2010, Wu2012}, Machine Learning \\citep[e.g.,][]{Ball2007, Yeche2010, Laurino2011, Brescia2013, Zhang2013} and the XDQSOz method \\citep{Bovy2012}. In the COSMOS field, the template fitting method is efficient with the photometry from 30 bands. But there are few fields with such rich photometry available. Apart from the template fitting method, the photo-$z$ regression method needs a training sample, usually a spectroscopically identified quasar sample. The redshift and magnitude distributions of spectroscopically identified quasars are affected by their target selection methods and the incompleteness of spectroscopic observations. So, dividing the spectroscopically identified quasar training sample into a grid of redshift and magnitude is helpful, considering the dependence of quasar colors on redshift and luminosity. Quasars are usually bluer when brighter, and the equivalent width (EW) of their emission lines are anti-correlated with the continuum flux \\citep[Baldwin effect;][]{Baldwin1997}. 
The slope of the power law continuum, the EW and FWHM of emission lines span wide ranges \\citep[e.g.,][]{Vanden2001, Telfer2002}. In addition, the redward flux of the Lyman-$\\alpha$ emission profile in a quasar spectrum is affected by the absorption lines of the Lyman-$\\alpha$ forest from neutral hydrogen along the line-of-sight to the quasar. The color distribution of quasars, even in a narrow redshift and magnitude bin, differs from a Gaussian distribution. It is obviously skewed and shows tails even when excluding broad absorption line (BAL) quasars. A significant population of red quasars exists \\citep[e.g.,][]{Webster1995, Richards2001b, Richards2003, Hopkins2004}. \\citet{Richards2003} defined a quasar to be dust-reddened with relative color $\\Delta (g^*-i^*)$ redder than 0.2, corresponding to $E(B-V)=0.04$, and find 6\\% quasars fall into the redder quasar category. Dust reddening at the redshift of the quasar is the primary explanation for the red tail in quasar color distribution. \\citet{Hopkins2004} modeled the color distribution as a Gaussian convolved with an exponential function to represent the dust. The Skew-t function can be used to describe data with skewed and tail features. The Skew-t distribution is widely used for multivariate skew distributions in statistics, quantitative risk management, and insurance. We choose skew functions instead of Gaussian functions to model the posterior distributions of quasars. Details about the Skew-t function will be provided in \\ref{subsec:skewt}.\n\nIn addition to the systematics of photo-$z$, quasar candidate selection is also a key issue. There are diverse methods used to select quasars. For example, the ultraviolet excess (UVX) method \\citep{Sandage1965, Green1986} for $z<2.2$ quasars; X-ray sources \\citep[e.g.,][]{Trump2009}; radio sources such as from the VLA FIRST survey \\citep[e.g.,][]{Becker2000}; quasar variability \\citep[e.g.,][]{NPD2011}; optical color box selection for the 2dF-SDSS LRG and QSO Survey \\citep[2SLAQ,][]{Croom2009}, and for the SDSS target selection \\citep[e.g.,][]{Richards2002}; more complex methods with optical (and infrared photometry), including non-parametric Bayesian classification and Kernel Density Estimator \\citep[KDE,][]{Richards2004, Richards2009b}, XDQSO \\citep{Bovy2011}, the neural network approach \\citep{Yeche2010}, the Bayesian likelihood approach \\citep{Kirkpatrick2011}; and selection combining different methods \\citep[e.g.,][]{Ross2012}. When a survey goes fainter, the contamination of point-like galaxies becomes significant, with the contamination rate as a function of magnitude. Fitting a training sample with all point-like objects is not efficient with regard to quasar selection at different magnitudes. A training sample consisting of all point-like objects will include stars, quasars, and point-like galaxies, thus it is hard to fit their posterior distribution all together. To separate quasars from stars, we estimate the number counts and colors (relative fluxes) of stars from a Milky Way synthetic simulation with the Besan\\c{c}on model. We also do galaxy template fitting to help distinguish galaxies from quasars.\n\nThe paper is organized as follows. In Section \\ref{sec:data}, we introduce the spectroscopically identified quasar sample and photometric data used in this work. In Section \\ref{sec:method}, we describe the photo-$z$ regression algorithm. 
We compare the photo-$z$ results obtained by different photo-$z$ methods using the same optical photometric data in Section \ref{sec:results}. We also present photo-$z$ results using SDSS, SDSS-WISE, PS1-WISE and DECaLS-WISE photometry in Section \ref{sec:results}. We present the classification method in Section \ref{sec:classification}, including the stellar simulation, the galaxy template fitting, and the Bayesian classification method. Quasar candidate selection using the DECaLS and WISE photometry is presented in Section \ref{sec:discussion}. We test the results in some deep fields and present the quasar number count statistics in the SDSS Stripe 82 (S82) region. We summarize the paper in Section \ref{sec:summary}. We will make the photo-$z$ and classification code publicly available\footnote{\url{https:\/\/github.com\/qian-yang\/Skewt-QSO}} with the current version archived in Zenodo \citep{SkewtQSOv1}. In the paper, all magnitudes are expressed in the AB system. The Galactic extinction of extragalactic objects is corrected using the dust reddening map of \citet{Schlegel1998}. We discuss only type 1 quasars (or AGNs) in this work. We use a $\Lambda$CDM cosmology with $\Omega_{\Lambda}=0.728$, $\Omega_m=0.272$, $\Omega_b=0.0456$, and $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ \citep{Komatsu2009}.\n\n\section{The Data} \label{sec:data}\n\subsection{Spectroscopically Identified Quasar Sample} \label{subsec:sample}\nWe use a sample of spectroscopically identified quasars consisting of quasars from the SDSS Data Release 7 Quasar catalog (DR7Q) \citep{Schneider2010} and the Data Release 12 Quasar catalog (DR12Q) \citep{Paris2017}. There are 105,783 quasars in the DR7Q, and 297,301 quasars in the DR12Q, including 25,275 quasars in both catalogs. BAL quasars are anomalously redder than most quasars and are excluded from our analysis. There are 29,580 quasars identified as BAL quasars in the DR12Q, and 6,214 quasars in the DR7Q identified as BAL quasars by \citet{Shen2011}. After removal of the BAL quasars, there are 346,464 quasars in our quasar sample (DR7\&12). Since, in comparison with the SDSS photometric bands, there are more high redshift quasars detected by the redder PS1 $y$ band and deeper DECaLS $z$ band, it is now possible to construct color models for high redshift quasars. We also include some quasars that are not in the SDSS DR7 or DR12 catalog. A high redshift quasar catalog with 437 $z>4.5$ quasars (called the BONUS high redshift sample) was constructed from the literature (Table 1 and Table 3 in \citet{Wang2015} and references therein; Table 7 in \citet{Banados2016} and references therein; \citealt{Jiang2016}; \citealt{Yang2017}; \citealt{Wang2017}).\n\n\subsection{SDSS Photometry} \label{subsec:sdss}\nWe use the point spread function \citep[PSF;][]{Lupton1999} photometry in the five SDSS bands $ugriz$ \citep{Fukugita1996}. The magnitude limits\n(95\% completeness for point sources) in the five bands are 22.0, 22.2, 22.2, 21.3, and 20.5 mag, respectively. We queried the photoObjAll table in the SDSS CASJOB, and obtained the SDSS photometry for 304,241 quasars with restrictions on mode and flags \citep{Stoughton2002, Bovy2011, Richards2015}. The Galactic extinction coefficients for E(B-V) used are $Au, Ag, Ar, Ai, Az = 5.155, 3.793, 2.751, 2.086, 1.479$. 
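As a minimal illustration (a sketch using the coefficients above; the function name is ours), the extinction correction amounts to subtracting $A_b\,E(B-V)$ from each observed magnitude:

\begin{verbatim}
# Extinction coefficients A_b such that the correction is A_b * E(B-V),
# for the SDSS u, g, r, i, z bands.
SDSS_COEFF = {'u': 5.155, 'g': 3.793, 'r': 2.751, 'i': 2.086, 'z': 1.479}

def deredden_sdss(mags, ebv):
    # mags: dict band -> observed PSF magnitude; ebv: E(B-V) from the
    # Schlegel, Finkbeiner & Davis (1998) dust map at the object position.
    return {b: m - SDSS_COEFF[b] * ebv for b, m in mags.items()}
\end{verbatim}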
The $u$ band and $z$ band are converted to the AB system using $u_{AB} = u_{SDSS} - 0.04$ mag and $z_{AB} = z_{SDSS} + 0.02$ mag \citep{Fukugita1996}.\n\n\subsection{PS1 Photometry} \label{subsec:ps1}\nWe use the PSF photometry in the PS1 survey. The 5$\sigma$ median limiting AB magnitudes in the five PS1 bands $grizy$ are 23.2, 23.0, 22.7, 22.1, and 21.1 mag, respectively. We queried the StackObjectThin table in the PS1 CASJOB with restrictions on primaryDetection and infoFlag, and obtained PS1 photometry for 344,318 quasars. Due to the difference between the absorbing columns of the atmosphere at the two survey sites, the extinction coefficients for the SDSS and PS1 filters are different. The Galactic extinction coefficients for E(B-V) are $Ag, Ar, Ai, Az, Ay = 3.172, 2.271, 1.682, 1.322, 1.087$ \citep{Schlafly2011}.\n\n\subsection{WISE Photometry} \label{subsec:wise}\nWISE \citep{Wright2010} mapped the sky at 3.4, 4.6, 12, and 22 $\mu$m (W1, W2, W3, W4). The 5$\sigma$ limiting magnitudes of the ALLWISE catalog in the W1, W2, W3 and W4 bands are 19.6, 19.3, 16.7 and 14.6 mag, respectively. We use only the WISE W1 and W2 photometric data, because the other two bands are much shallower. Out of 346,464 quasars in the DR7\&12 spectroscopic quasar catalog, 261,614 (76\%) and 256,606 (74\%) quasars are detected within 2 arcseconds in the ALLWISE W1 and W2 bands, respectively. The WISE magnitudes are converted from Vega magnitudes to AB magnitudes with $\Delta m = $ 2.699 and 3.339 for the W1 and W2 bands, respectively.\n\n\subsection{DECaLS Photometry} \label{subsec:decals}\nThe DESI Legacy imaging survey (DELS; Dey et al. 2017, in preparation) will provide images for target selection, including the DECam Legacy Survey (DECaLS) in the $g$, $r$ and $z$ bands, the Beijing-Arizona Sky Survey \citep[BASS;][]{Zou2017} in the $g$ and $r$ bands, and the Mayall $z$-band Legacy Survey (MzLS). The 5$\sigma$ point-source magnitude limits in $g$, $r$, and $z$ will be roughly 24.7, 23.9, and 23.0 mag. With depths of $1.5-2.5$ mag fainter than in the SDSS, the DELS will be useful in searching for quasars fainter than the SDSS spectroscopic quasars, and also for high redshift quasars \citep{Wang2017}. In this work, we use the three-band ($grz$) photometry from the DECaLS DR3. There are 194,529 known quasars detected in the DECaLS DR3 catalogs, and 98,481 quasars observed in all three bands in DR3. There are 235 quasars in the BONUS high redshift sample detected in the DECaLS DR3, and 149 of them were observed in the $g$, $r$, and $z$ bands. The unWISE coadds the WISE imaging and has better resolution \citep{Lang2014}. The unWISE $5\sigma$ detection rates for our spectroscopic quasar sample are higher than those for WISE, 87.6\% and 77.1\% for the W1 and W2 bands, respectively. The unWISE photometry is available in the DECaLS catalogs. For objects with detections lower than 5$\sigma$, the unWISE data are still included in the DECaLS catalogs with correspondingly larger photometric errors. We use the unWISE W1 and W2 band photometry, instead of ALLWISE, when using the DECaLS optical photometry.\n\n\begin{figure}[htbp]\n\hspace*{-0.3cm}\n\epsscale{1.2}\n\plotone{f1_flux_ratio.pdf}\n\caption{\label{fig:skewt}Examples of one dimensional relative flux $f_u\/f_r$, $f_g\/f_r$, $f_i\/f_r$ and $f_z\/f_r$ distributions for quasars in a narrow redshift and magnitude bin ($1.5<z<1.55$).}\n\end{figure}\n\n\section{Photometric Redshift Regression Algorithm} \label{sec:method}\n\subsection{The Skew-t Model} \label{subsec:skewt}\nWe model the posterior distribution of the relative fluxes of quasars in each redshift and magnitude bin with a multivariate Skew-t distribution $ST_n(\bm{\mu}, \bm{\Sigma}, \bm{\lambda}, \nu)$, with location $\bm{\mu}$, scale matrix $\bm{\Sigma}$, skewness $\bm{\lambda}$ and degrees of freedom $\nu$. Compared with a Gaussian, the Skew-t distribution describes both the skewed shape and the tails of the observed relative flux distributions, as illustrated in Figure \ref{fig:skewt}.\n\nQuasar colors depend on the magnitude as well as on the redshift, and the fluxes in the bands blueward of the Ly$\alpha$ emission line of $z>4.6$ quasars become faint. 
Quasars are therefore divided into magnitude bins based on the $r$-band magnitude, and we use the relative fluxes of the other bands with respect to the $r$-band flux, for example $f_u\/f_r$, $f_g\/f_r$, $f_i\/f_r$ and $f_z\/f_r$ when using the SDSS five-band photometry. Each relative flux is a dimension in the multi-dimensional Skew-t model. The covariance between relative fluxes is accounted for by the covariance matrix $\bm{\Sigma}$.\n\nTo calculate the PDF, we weight the relative fluxes by their photometric uncertainties as follows. For example, for the four relative fluxes of the SDSS photometry $f_u\/f_r$, $f_g\/f_r$, $f_i\/f_r$ and $f_z\/f_r$, with flux uncertainties $e_{u}$, $e_{g}$, $e_{r}$, $e_{i}$ and $e_{z}$ in the five SDSS bands, the relative flux covariance matrix can be derived from the error propagation equations as\n\begin{equation} \label{eq:4}\n\bm{\Sigma}_0 =\n\left(\n\begin{array}{cccc}\n\frac{e_u^2 f_r^2 + e_r^2 f_u^2}{f_r^4} & \frac{f_u f_g e_r^2}{f_r^4} & \frac{f_u f_i e_r^2}{f_r^4} & \frac{f_u f_z e_r^2}{f_r^4} \\\n\frac{f_u f_g e_r^2}{f_r^4} & \frac{e_g^2 f_r^2 + e_r^2 f_g^2}{f_r^4} & \frac{f_g f_i e_r^2}{f_r^4} & \frac{f_g f_z e_r^2}{f_r^4} \\\n\frac{f_u f_i e_r^2}{f_r^4} & \frac{f_g f_i e_r^2}{f_r^4} & \frac{e_i^2 f_r^2 + e_r^2 f_i^2}{f_r^4} & \frac{f_i f_z e_r^2}{f_r^4} \\\n\frac{f_u f_z e_r^2}{f_r^4} & \frac{f_g f_z e_r^2}{f_r^4} & \frac{f_i f_z e_r^2}{f_r^4} & \frac{e_z^2 f_r^2 + e_r^2 f_z^2}{f_r^4}\\\n\end{array}\n\right)\n.\n\end{equation}\n\nWhen combining optical photometry and mid-infrared photometry that are taken years apart, quasar variability introduces extra uncertainties into the relative fluxes, such as $f_{W1}\/f_r$ and $f_{W2}\/f_r$. To reduce the uncertainties from quasar variability, we use $f_{W1}\/f_r$ and $f_{W2}\/f_{W1}$ for the quasar photo-$z$ estimation. In the case of using the DECaLS $g$, $r$, $z$ and WISE W1, W2 photometry, the relative fluxes used are $f_g\/f_r$, $f_z\/f_r$, $f_{W1}\/f_r$ and $f_{W2}\/f_{W1}$, and the covariance matrix $\bm{\Sigma}_0$ is written as\n\begin{equation} \label{eq:5}\n\left(\n\begin{array}{cccc}\n\frac{e_g^2 f_r^2 + e_r^2 f_g^2}{f_r^4} & \frac{f_g f_z e_r^2}{f_r^4} & \frac{f_g f_{W1} e_r^2}{f_r^4} & 0\\\n\frac{f_g f_z e_r^2}{f_r^4} & \frac{e_z^2 f_r^2 + e_r^2 f_z^2}{f_r^4} & \frac{f_z f_{W1} e_r^2}{f_r^4} & 0\\\n\frac{f_g f_{W1} e_r^2}{f_r^4} & \frac{f_z f_{W1} e_r^2}{f_r^4} & \frac{e_{W1}^2 f_r^2 + e_r^2 f_{W1}^2}{f_r^4} & -\frac{e_{W1}^2 f_{W2}}{f_r f_{W1}^2}\\\n0 & 0 & -\frac{e_{W1}^2 f_{W2}}{f_r f_{W1}^2} & \frac{e_{W2}^2 f_{W1}^2 + e_{W1}^2 f_{W2}^2}{f_{W1}^4} \\\n\end{array}\n\right).\n\end{equation}\n\nThen the covariance matrix is $\bm{\Sigma}^*(z, m) = \bm{\Sigma}(z, m) + \bm{\Sigma}_0$. The posterior probability is expressed as\n\begin{equation} \label{eq:6}\nP_{\rm QSO}(\bm{f}|z, m) = ST_n(\bm{\mu}(z, m), \bm{\Sigma^*}(z, m), \bm{\lambda}(z, m), \nu(z, m)),\n\end{equation}\nwhere $\bm{f}$ represents the relative fluxes.\n\nThe PS1, DECaLS, and WISE photometry are based on multiple epochs of imaging data, so the effect of variability is mitigated. It happens that PS1, DECaLS, and WISE are all averages over a roughly similar timeframe (although DECaLS is mostly a couple of years after PS1), whereas SDSS is about a decade earlier than the others. Therefore, combinations of SDSS and the other surveys will be the most impacted by long-term variability. A minimal sketch of assembling $\bm{\Sigma}_0$ by error propagation is given below. 
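The following Python sketch (ours, for illustration; not part of the released code) builds $\bm{\Sigma}_0$ of Eq. \ref{eq:5} by first-order error propagation, and reproduces the matrix entries above:

\begin{verbatim}
import numpy as np

def sigma0_decals_wise(f, e):
    # f, e: fluxes and 1-sigma flux errors for the bands (g, r, z, W1, W2).
    g, r, z, w1, w2 = f
    # Jacobian of the relative fluxes (g/r, z/r, W1/r, W2/W1)
    # with respect to the five band fluxes.
    J = np.array([
        [1 / r, -g / r**2,  0,     0,           0],
        [0,     -z / r**2,  1 / r, 0,           0],
        [0,     -w1 / r**2, 0,     1 / r,       0],
        [0,      0,         0,     -w2 / w1**2, 1 / w1]])
    # Sigma_0 = J diag(e^2) J^T; the band fluxes are assumed independent.
    return J @ np.diag(np.asarray(e)**2) @ J.T
\end{verbatim}

The SDSS case of Eq. \ref{eq:4} is obtained analogously from the ratios $(f_u\/f_r, f_g\/f_r, f_i\/f_r, f_z\/f_r)$.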
PS1, DECaLS, and WISE photometry are based on multiple epochs of imaging data, which mitigates the effect of variability. PS1, DECaLS, and WISE are all averaged over roughly similar timeframes (although DECaLS was mostly observed a couple of years after PS1), whereas SDSS was observed about a decade earlier than the others. Therefore, combinations of SDSS and the other surveys will be the most impacted by long-term variability.

\begin{figure}[htbp]
\hspace*{-0.5cm}
\epsscale{1.3}
\plotone{f2_QLF_part.pdf}
\vspace{-0.5cm}
\caption{\label{fig:qlf}The quasar number prior $N_{\rm QSO}(z, m)$ per deg$^2$ as a function of redshift and magnitude, with $\Delta z = 0.05$ and $\Delta g = 0.1$, derived from the QLF in \citet{NPD2016}. The curves, from top to bottom, are for $g = $ 22.0, 21.5, 21.0, 19.5, 19.0, 18.5, 18.0 mag, respectively.}
\end{figure}

\subsection{Prior Probability from the QLF} \label{subsec:qlf}
The number density of quasars depends on redshift and luminosity \citep[e.g.,][]{Ross2013, NPD2016}. The QLF characterizes the evolution of the quasar number density with luminosity and redshift. \citet{NPD2016} present the QLF using quasars from the extended Baryon Oscillation Spectroscopic Survey of the Sloan Digital Sky Survey (SDSS-IV/eBOSS). Their quasar sample is 80\% complete to $g = 20$ mag and 50\% complete to $g = 22.5$ mag, and the QLF has been corrected for incompleteness. We derive the quasar number prior $N_{\rm QSO}(z, m)$ per deg$^2$ as a function of redshift and magnitude, with $\Delta z = 0.05$ and $\Delta g = 0.1$, from the QLF in \citet{NPD2016}, which is measured in the SDSS $g$ band, using k-corrections that depend on both redshift and luminosity \citep{McGreer2013}. Figure \ref{fig:qlf} shows the number distribution as a function of redshift for quasars with $g = $ 22.0, 21.5, 21.0, 19.5, 19.0, 18.5, 18.0 mag, from top to bottom.

Thus the PDF is obtained from the posterior probability and the number prior as
\begin{equation} \label{eq:7}
 P_{\rm QSO}(z) = P_{\rm QSO}(\bm{f}|z, m) N_{\rm QSO}(z, m).
\end{equation}
Using the PDF as a function of redshift, $P_{\rm QSO}(z)$, the photo-$z$ can be estimated either by maximizing the probability or by identifying the peak with the maximum integrated probability. We identify peaks in a PDF curve using the $findpeaks$ function in the R $pracma$ package\footnote{https://cran.r-project.org/web/packages/pracma/index.html}, and take as the photo-$z$ the peak with the largest integrated PDF within a redshift range $(z_1, z_2)$. The parameter $P_{\rm prob}$, the probability that the redshift lies within $(z_1, z_2)$, is
\begin{equation} \label{eq:8}
P_{\rm prob} = \frac{\int_{z_1}^{z_2} P_{\rm QSO}(z) dz}{\int P_{\rm QSO}(z) dz}.
\end{equation}

The logarithmic likelihood ($L$) of an object being a quasar over the whole redshift range is written as
\begin{equation} \label{eq:9}
L_{\rm QSO} = {\rm log}(P_{\rm QSO}) = {\rm log}\int P_{\rm QSO}(z) dz .
\end{equation}
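The peak-based photo-$z$ estimate of Equations (\ref{eq:7}) and (\ref{eq:8}) can be sketched as follows, using SciPy's \texttt{find\_peaks} as a stand-in for the R \texttt{findpeaks} function. The integration window around each peak is an illustrative choice, as is anything else not defined in the text.

\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks
from scipy.integrate import trapezoid

def photoz_from_pdf(z, pdf, half_width=0.5):
    """Photo-z as the PDF peak with the largest integrated probability;
    P_prob follows Equation (8). The window (z1, z2) around each peak,
    here z[p] +/- half_width, is an illustrative choice."""
    peaks, _ = find_peaks(pdf)
    if peaks.size == 0:                  # monotonic PDF: fall back to max
        peaks = np.array([np.argmax(pdf)])
    total = trapezoid(pdf, z)
    best_z, best_prob = None, -1.0
    for p in peaks:
        sel = np.abs(z - z[p]) <= half_width
        prob = trapezoid(pdf[sel], z[sel]) / total   # P_prob, Equation (8)
        if prob > best_prob:
            best_z, best_prob = z[p], prob
    return best_z, best_prob

# The PDF on a redshift grid is the posterior times the number prior
# (Equation 7); N_qso and the Skew-t parameters are assumed given:
# z_grid = np.arange(0.05, 6.0, 0.05)
# pdf = np.array([skewt_pdf(relflux, mu(zz, m), Sigma_star(zz, m),
#                           lam(zz, m), nu(zz, m)) * N_qso(zz, m)
#                 for zz in z_grid])
# zp, p_prob = photoz_from_pdf(z_grid, pdf)
\end{verbatim}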
To assess the impact of the prior distribution on the photo-$z$ regression results, we also present results with the photo-$z$ derived only from the posterior distribution. The PDF from the posterior probability alone is
\begin{equation} \label{eq:10}
 P_{\rm QSO}'(z) = P_{\rm QSO}(\bm{f}|z, m).
\end{equation}
The corresponding logarithmic likelihood of an object being a quasar over the whole redshift range is
\begin{equation} \label{eq:11}
L'_{\rm QSO} = {\rm log}(P_{\rm QSO}') = {\rm log}\int P_{\rm QSO}'(z) dz .
\end{equation}
The influence of the prior distribution on the photo-$z$ regression and quasar candidate selection is discussed in Sections \ref{subsec:photoz_infrared} and \ref{subsec:selection}.

\begin{figure}[htbp]
\hspace*{-0.5cm}
\epsscale{1.3}
\plotone{f3_compare.pdf}
\vspace{-0.5cm}
\caption{\label{fig:compare}The photo-$z$ distributions from different photo-$z$ methods, including Skew-t (red solid), XDQSOz (blue dashed) and CZR (blue dotted), compared with the spectroscopic redshift $z_s$ distribution (gray shade), using the SDSS five-band photometry for the same quasar test sample. The photo-$z$ distribution from the Skew-t model is the most similar to the $z_s$ distribution, while the CZR method identifies more $z\sim 0.8$ quasars and the XDQSOz method identifies more $z \sim 2.2$ quasars.}
\end{figure}

\begin{figure*}
 \centering
 \hspace{0cm}
 \subfigure{
 \includegraphics[width=3.6in]{f4a_sdss.pdf}}
 \hspace{-1.4cm}
 \subfigure{
 \includegraphics[width=3.6in]{f4b_sdss_wise.pdf}}\\
 \vspace{-0.3cm}
 \hspace{0cm}
 \subfigure{
 \includegraphics[width=3.6in]{f4c_ps1_wise.pdf}}
 \hspace{-1.4cm}
 \subfigure{
 \includegraphics[width=3.6in]{f4d_decals_wise.pdf}}\\
 \caption{\label{fig:photoz}Photo-$z$ ($z_p$) compared with spectroscopic redshifts ($z_s$) for the SDSS, SDSS-WISE, PS1-WISE and DECaLS-WISE photometry, respectively. The degeneracy between $z\sim 0.8$ and $z\sim2.2$ is obvious when using only the SDSS photometry, and is alleviated by combining optical data with mid-infrared photometry.}
 \vspace{0.5cm}
\end{figure*}

\begin{figure}[htbp]
\hspace*{-1cm}
\epsscale{1.4}
\plotone{f5_offset.pdf}
\vspace{-0.5cm}
\caption{\label{fig:offset}The normalized distributions of $\Delta z$ for the different sources of photometry listed in Table \ref{tab:photoz_infrared}. The histograms are SDSS (S, gray shade), SDSS-WISE (SW, blue solid), PS1-WISE (PW, black dashed), and DECaLS-WISE (DW, red dotted), respectively.}
\end{figure}

\begin{figure*}
 \centering
 \hspace{-0.7cm}
 \subfigure{
 \includegraphics[width=3.7in]{f6a_ratio_redshift.pdf}}
 \hspace{-1.2cm}
 \subfigure{
 \includegraphics[width=3.7in]{f6b_ratio_mag.pdf}}
 \caption{\label{fig:raito}Photo-$z$ accuracy $R_{0.1}$ as a function of redshift (left panel) and magnitude (right panel) for different combinations of photometry, including SDSS (gray open dot-line), SDSS-WISE (black pentagon-line), PS1-WISE (black diamond-dashed line) and DECaLS-WISE (red triangle-dotted line).
The photo-$z$ accuracies of PS1-WISE and DECaLS-WISE are lower than that of SDSS-WISE, mainly due to the lack of $u$-band photometry in PS1 and DECaLS.}
\end{figure*}

\begin{table}
\small
\begin{center}
\tablewidth{1pt}
\caption{Photo-$z$ results for different methods with the same test sample of SDSS photometry} \label{tab:photoz}
\begin{tabular}{lcrccc}
\tableline\tableline
${\rm Method}$ & $\sigma(\Delta z)$ & $\overline{\Delta z}$ & $R_{0.1}$ & $R_{0.2}$ & ${\rm Time}$\\
\tableline
Skew-t & 0.27 & -0.02 & 74.2\% & 81.5\% & 1 \\
XDQSOz & 0.31 & -0.04 & 72.8\% & 79.3\% & 17 \\
KDE & 0.35 & -0.06 & 70.6\% & 77.9\% & 0.002 \\
CZR & 0.29 & 0.05 & 68.0\% & 73.9\% & 0.005 \\
\tableline
\end{tabular}
\tablecomments{$R_{0.1}$ ($R_{0.2}$) is the fraction of quasars with $|\Delta z|$ smaller than 0.1 (0.2). Time is measured on the same machine and normalized so that the time used by the Skew-t method to obtain the photo-$z$ results for the test sample is 1. A test calculation of 100,000 objects with SDSS five-band data using the Skew-t method took 23 minutes (on a computer with a 3.1 GHz Intel Core i7 CPU, using a single processor).}
\end{center}
\end{table}

\begin{table*}
\small
\begin{center}
\tablewidth{1pt}
\caption{Photo-$z$ results for different sources of photometry} \label{tab:photoz_infrared}
\begin{tabular}{ccccrccc}
\tableline\tableline
${\rm Photometry}$ & $N_{\rm bands}$ & ${\rm PDF}$ & $\sigma(\Delta z)$ & $\overline{\Delta z}$ & $R_{0.1}$ & $R_{0.2}$ & ${\rm Number}$ \\
\tableline
SDSS & 5 & posterior \& prior & 0.27 & -0.024 & 74.9\% & 82.0\% & 304,241 \\
- & - & posterior & 0.29 & -0.025 & 73.5\% & 81.1\% & - \\
\tableline
SDSS-WISE & 7 & posterior \& prior & 0.15 & -0.005 & 87.0\% & 93.3\% & 229,653 \\
- & - & posterior & 0.16 & -0.007 & 85.8\% & 92.5\% & - \\
\tableline
PS1-WISE & 7 & posterior \& prior & 0.18 & -0.006 & 79.1\% & 89.5\% & 254,349 \\
- & - & posterior & 0.22 & -0.03 & 77.0\% & 87.6\% & - \\
\tableline
DECaLS-WISE & 5 & posterior \& prior & 0.17 & 0.020 & 72.4\% & 88.0\% & 98,450 \\
- & - & posterior & 0.23 & -0.002 & 72.3\% & 87.5\% & - \\
\tableline
\end{tabular}
\tablecomments{The photo-$z$ results are calculated with the PDF derived from the posterior and prior distributions in Equation (\ref{eq:7}), or from the posterior distribution alone in Equation (\ref{eq:10}).}
\end{center}
\end{table*}

\section{Photo-$z$ Results} \label{sec:results}
\subsection{Comparing Photo-$z$ Results using the SDSS Photometric Data} \label{subsec:photoz_sdss}
We compare the performance of our photo-$z$ regression algorithm with that of other methods by testing them on the same sample of photometric data. We randomly divide the quasar sample with SDSS photometric data into two subsamples, one as the training sample and the other as the test sample. We also try a KDE method mapping the two-dimensional color-color distributions of $u-g$ versus $g-r$, $g-r$ versus $r-i$, and $r-i$ versus $i-z$ in redshift bins. The KDE photo-$z$ code is based on the KDE method in \citet{Silverman1986}. The CZR photo-$z$ is calculated based on the CZR method in \citet{Weinstein2004}. The XDQSOz photo-$z$ is calculated with the XDQSOz code \citep{Bovy2012}.
The photo-$z$ results of the Skew-t, XDQSOz, KDE and CZR methods are listed in Table \ref{tab:photoz}.

The Skew-t photo-$z$ algorithm performs well compared with the other photo-$z$ methods. The difference between the photo-$z$ ($z_p$) and the spectroscopic redshift ($z_s$) is expressed as $\Delta z = (z_s-z_p)/(1+z_s)$. $R_{0.1}$ ($R_{0.2}$) is the fraction of quasars with $|\Delta z|$ smaller than 0.1 (0.2). The standard deviation of $\Delta z$, $\sigma(\Delta z)$, from the Skew-t photo-$z$ is 0.27, slightly better than the 0.31 and 0.29 from the XDQSOz and CZR methods. The KDE method is memory consuming when the number of dimensions is high, and it also depends strongly on its training sample: when the test sample is the same as the training sample, $R_{0.1}$ is as high as 85\% if 4 SDSS colors are used, but it decreases to 70\% when the test sample is different from the training sample. The KDE photo-$z$ method is therefore easily over-trained. For the Skew-t method, when the test sample is the same as the training sample, the accuracy $R_{0.1}$ changes by less than 1\% (74.2\% in Table \ref{tab:photoz} and 74.9\% in Table \ref{tab:photoz_infrared}).
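The accuracy statistics reported in Tables \ref{tab:photoz} and \ref{tab:photoz_infrared} follow directly from these definitions; a minimal Python sketch (the function name is ours):

\begin{verbatim}
import numpy as np

def photoz_metrics(z_spec, z_phot):
    """Delta z = (z_s - z_p)/(1 + z_s), its dispersion and mean,
    and the fractions R_0.1 / R_0.2 of |Delta z| below 0.1 / 0.2."""
    dz = (z_spec - z_phot) / (1.0 + z_spec)
    return {"sigma_dz": np.std(dz),
            "mean_dz": np.mean(dz),
            "R0.1": np.mean(np.abs(dz) < 0.1),
            "R0.2": np.mean(np.abs(dz) < 0.2)}
\end{verbatim}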
\begin{figure}[htbp]
\hspace*{-1cm}
\epsscale{1.4}
\plotone{f7_hist_simulation.pdf}
\vspace{-0.5cm}
\caption{\label{fig:hist}The number distribution as a function of the $r$-band magnitude for the DECaLS ``PSF" type objects (black), ``PSF" type objects with $\Delta \chi^2>40$ (gray dashed), and the stellar simulation (blue), within a 20 deg$^2$ test region in S82 ($340^{\circ}<{\rm R.A.}<350^{\circ}$, $-1^{\circ}<{\rm Decl.}<1^{\circ}$). The contamination by point-like galaxies becomes prominent at the faint end.}
\end{figure}

\begin{figure*}
 \centering
 \hspace{-0.7cm}
 \subfigure{
 \includegraphics[width=3.5in]{f8a_gr_rz.pdf}}
 \hspace{-0.2cm}
 \subfigure{
 \includegraphics[width=3.5in]{f8b_zW1_W12.pdf}}
 \caption{\label{fig:colorcolor}Color-color diagrams of $g-r$ versus $r-z$ (left panel) and $z-W1$ versus $W1-W2$ (right panel) for stars.}
\end{figure*}

\subsection{Photo-$z$ Results using Optical and Mid-infrared Photometry} \label{subsec:photoz_infrared}
At $z>3.4$, the Lyman limit moves out of the $u$ band, so the $u$-band photometry is no longer important for the photo-$z$ regression; optical and mid-infrared data are sufficient for photo-$z$ regression at $z>3.4$.

Figure \ref{fig:hist} shows the number distribution of the DECaLS ``PSF" type objects, including those with $\Delta \chi^2>40$, where $\Delta \chi^2$ is the $\chi^2$ difference between fitting a PSF model and a simple galaxy model. A smaller $\Delta \chi^2$ means that the object is more likely to have a galaxy-like morphology; a point-like galaxy may be classified with a ``PSF" morphology while retaining a smaller $\Delta \chi^2$ than objects with true ``PSF" morphologies. The difference between the simulation and the observations at $r \sim 24$ is mainly caused by contamination from point-like galaxies, which becomes more and more prominent at fainter magnitudes. At the faint end, the observations are not consistent with the simulation because of the magnitude limit of the imaging survey, which is roughly 23.9 mag in the $r$ band for DELS. Figure \ref{fig:colorcolor} shows the $g-r$ versus $r-z$ (left panel) and $z-W1$ versus $W1-W2$ (right panel) color-color diagrams of stars.

\subsection{Quasar Candidate Selection} \label{subsec:flowchart}
To select quasar candidates, we require that the probability of an object being a quasar exceeds a threshold $x$, $P_{\rm QSO}/(P_{\rm QSO} + P_{\rm Star} + P_{\rm Galaxy}) > x$, namely
\begin{equation} \label{eq:22}
\frac{P_{\rm Star}}{P_{\rm QSO}} + \frac{P_{\rm Galaxy}}{P_{\rm QSO}} < \frac{1}{x} - 1.
\end{equation}
Here we suggest three Bayesian probability criteria,
\begin{equation} \label{eq:23}
L_{\rm QSO} > x_1,
\end{equation}
\begin{equation} \label{eq:24}
L_{\rm QSO} - L_{\rm Star} > x_2,
\end{equation}
\begin{equation} \label{eq:25}
L_{\rm QSO} - L'_{\rm Galaxy} > x_3,
\end{equation}
which correspond to
\begin{equation} \label{eq:26}
P_{\rm QSO} > 10^{x_1},
\end{equation}
\begin{equation} \label{eq:27}
\frac{P_{\rm Star}}{P_{\rm QSO}} < 10^{-x_2},
\end{equation}
\begin{equation} \label{eq:28}
\frac{P'_{\rm Galaxy}}{P_{\rm QSO}} < 10^{-x_3}.
\end{equation}
These criteria mean that (1) the object has relative fluxes similar to those of quasars, (2) the object is more likely to be a quasar than a star, and (3) the object is more likely to be a quasar than a galaxy. The quasar candidate selection flowchart is shown in Figure \ref{fig:flowchart}. For a given object, we measure its relative fluxes and magnitudes, and then apply a morphology criterion, since most quasars are point-like objects. We then calculate the probability of the object being (1) a quasar, with a prior probability derived from the QLF and a posterior probability modeled with a multivariate Skew-t distribution as a function of magnitude and redshift; (2) a star, with a prior probability from the number counts and the distribution of stellar parameters in a Milky Way synthetic simulation, and a posterior distribution modeled by a multivariate Gaussian distribution with relative fluxes from the Padova isochrones; (3) a galaxy, with a prior probability from the BPZ prior distribution for point-like galaxies, and a posterior probability modeled by a multivariate Gaussian distribution with relative fluxes from galaxy templates. We obtain quasar candidates, as well as their photo-$z$s, with the three Bayesian probability criteria in Equations (\ref{eq:23})-(\ref{eq:25}).

For fainter objects, the posterior probabilities decrease due to larger photometric uncertainties. The left panel of Figure \ref{fig:prior} shows the logarithmic likelihoods of objects being quasars integrated from the posterior probability ($L'_{\rm QSO}$ in Equation (\ref{eq:11})). According to the QLF, a fainter object is more likely to be a quasar than a brighter object. The right panel of Figure \ref{fig:prior} shows the logarithmic likelihoods integrated from the posterior and prior probabilities ($L_{\rm QSO}$ in Equation (\ref{eq:9})). $L_{\rm QSO}$ is more uniform than $L'_{\rm QSO}$ over the range of magnitudes. A criterion based on an $L_{\rm QSO}$ cut is more reasonable than a simple $\chi^2$ cut or a probability cut that ignores photometric uncertainties. As a consequence, the selection completeness will be affected by the prior distribution from the QLF. For example, if the bright end of the QLF is underestimated, some bright quasars with colors deviating from those of normal bright quasars may be missed.
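In code, the criteria of Equations (\ref{eq:23})-(\ref{eq:25}) reduce to simple threshold cuts on the integrated log-likelihoods; a minimal sketch, with the thresholds left as parameters (their adopted values are determined below):

\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def log_likelihood(z, pdf):
    """L = log10 of the PDF integrated over the whole redshift
    range (Equations 9 and 11)."""
    return np.log10(trapezoid(pdf, z))

def is_quasar_candidate(L_qso, L_star, L_gal, x1, x2, x3):
    """Bayesian selection criteria of Equations (23)-(25)."""
    return (L_qso > x1) & (L_qso - L_star > x2) & (L_qso - L_gal > x3)
\end{verbatim}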
\begin{figure*}
\centering
\hspace{0cm}
\epsscale{1.1}
\plotone{f12_x123.pdf}
\caption{\label{fig:roc} Trade-off between completeness and efficiency in the classification procedure. Left panel: the dots are results with $x_1 \in (-3.5, -0.5)$, $x_2 \in (0, 15)$ and $x_3 \in (0, 10)$, and the color map shows the value of $x_1$. Black pentagons mark $x_1 = -2$, which lie at the edge of the PR diagram. Right panel: results with $x_1 = -2.0$, $x_2 \in (0, 15)$ and $x_3 \in (0, 10)$. The blue dot-line denotes $x_2 = 4.4$, which lies at the edge of the curves for a wide range of $x_3$ values. The magenta open dot-line denotes $x_3 = 5.4$, which lies at the edge of the curves when the completeness is in the range of $\sim 79\%$ to $\sim 82\%$. The black unfilled star marks the point $x_1 = -2$, $x_2 = 4.4$, and $x_3 = 5.4$.}
\end{figure*}

\begin{figure*}
\centering
\hspace{0cm}
\epsscale{1.1}
\plotone{f11_L1_L2.pdf}
\caption{\label{fig:ll} The $L_{\rm QSO}-L'_{\rm Galaxy}$ versus $L_{\rm QSO}-L_{\rm Star}$ diagram. The gray-scale hexagons show the density of point sources (left panel) and quasars (right panel). The blue vertical line is the cut $L_{\rm QSO}-L_{\rm Star} = 4.4$, and the blue horizontal line is the cut $L_{\rm QSO}-L'_{\rm Galaxy} = 5.4$. Quasars span a much larger space in this diagram; 11\% of known quasars with ``PSF" type and $L_{\rm QSO} >-2$ lie below these two cuts, while 85\% of point sources with $L_{\rm QSO}>-2$ are excluded by them.}
\end{figure*}

\section{Discussion} \label{sec:discussion}
\subsection{Quasar Candidate Selection using DECaLS and WISE Photometry} \label{subsec:selection}
We test the quasar candidate selection algorithm described in Section \ref{subsec:flowchart} with the DECaLS $g$, $r$, $z$ and WISE W1 and W2 photometry. There is a 15 deg$^2$ region ($36^{\circ}<{\rm R.A.}<42^{\circ}$ and $-1.25^{\circ}<{\rm Decl.}<1.25^{\circ}$) in S82 where the surface density of spectroscopically identified quasars is as high as 167 per deg$^2$. We exclude the quasars in this region from the quasar training sample, and model the quasar relative flux posterior distribution using the method in Section \ref{subsec:skewt}. For the selection criteria, there is a trade-off between completeness and efficiency. We define the completeness as the fraction of spectroscopically identified quasars at $r<23$ mag and redshift $z<5.4$ that are selected. The efficiency is defined as
\begin{equation} \label{eq:29}
{\rm efficiency} = \frac{N_{\rm QLF}(r) \times {\rm completeness}(r)}{N_{{\rm photo}-z}(r)},
\end{equation}
where $N_{\rm QLF}(r)$ is calculated from the QLF \citep{NPD2016} (PLE+LEDE model) and $N_{{\rm photo}-z}(r)$ is the number of photometrically selected candidates. The completeness also includes the $\sim 5\%$ incompleteness from the ``PSF" morphological criterion. It is worth noting that the completeness is probably overestimated, because the spectroscopic sample is not complete \citep{Ross2012, Ross2013} at $r<23$ mag even in this dense S82 region. Therefore, the efficiency might also be overestimated. Figure \ref{fig:roc} shows the efficiency versus completeness, i.e., the Precision-Recall (PR) diagram \citep{Davis2006}, for parameters $x_1 \in (-3.5, -0.5)$, $x_2 \in (0, 15)$ and $x_3 \in (0, 10)$. These parameter ranges are large enough to cover a wide range of the completeness-efficiency space.
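The PR diagram of Figure \ref{fig:roc} corresponds to a grid search over the three thresholds. The sketch below records the completeness and the Equation (\ref{eq:29}) efficiency for each triple, aggregated over magnitude for brevity (the text evaluates both quantities per magnitude bin); the function name and grid steps are illustrative.

\begin{verbatim}
import numpy as np
from itertools import product

def pr_sweep(L_qso, L_star, L_gal, is_qso, n_qlf, x1s, x2s, x3s):
    """Completeness and efficiency (Equation 29) over a grid of
    (x1, x2, x3); is_qso flags spectroscopically identified quasars,
    n_qlf is the quasar count expected from the QLF over the same
    area and magnitude range."""
    rows = []
    for x1, x2, x3 in product(x1s, x2s, x3s):
        sel = (L_qso > x1) & (L_qso - L_star > x2) & (L_qso - L_gal > x3)
        completeness = np.sum(sel & is_qso) / np.sum(is_qso)
        efficiency = n_qlf * completeness / max(np.sum(sel), 1)
        rows.append((x1, x2, x3, completeness, efficiency))
    return np.array(rows)

# e.g. pr_sweep(L_qso, L_star, L_gal, is_qso, n_qlf,
#               x1s=np.arange(-3.5, -0.5, 0.5),
#               x2s=np.arange(0.0, 15.0, 1.0),
#               x3s=np.arange(0.0, 10.0, 1.0))
\end{verbatim}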
We determine $x_1$, $x_2$, and $x_3$ sequentially. First, for the given ranges of $x_2$ and $x_3$ above, the best value of $x_1$ is $-2$. The black pentagons mark $x_1 = -2$, which lies at the edge of the PR diagram, with relatively higher efficiency at the same completeness. With the criterion $L_{\rm QSO} >-2$, 97\% of the known quasars (``PSF" type) are selected, and 87\% of the point sources are excluded. For the fixed $x_1=-2$ and the $x_3$ range given above, the best $x_2$ value is 4.4. The blue dot-line denotes $x_2 = 4.4$, which lies at the edge of the PR diagram over a wide range of $x_3$ values. Finally, with the best values of $x_1$ and $x_2$ determined above, we find the best $x_3$ to be 5.4. The magenta open dot-line shows where $x_3 = 5.4$, which lies at the edge when the completeness is in the range $\sim 79\%$ to $\sim 82\%$. The black star marks $x_1 = -2$, $x_2 = 4.4$, and $x_3 = 5.4$. Figure \ref{fig:ll} shows the $L_{\rm QSO}-L'_{\rm Galaxy}$ versus $L_{\rm QSO}-L_{\rm Star}$ diagram. Quasars span a much larger space in this diagram, and 11\% of the known quasars with ``PSF" type and $L_{\rm QSO} >-2$ lie below these two cuts. Meanwhile, 85\% of the point sources with $L_{\rm QSO}>-2$ are excluded by these two cuts. Larger cuts result in lower selection completeness and higher efficiency; for example, an $L_{\rm QSO}-L_{\rm Star} > 10.0$ cut excludes 86\% of the point sources with $L_{\rm QSO}>-2$, but causes 12\% more selection incompleteness at $z\sim 2.8$. Therefore, we suggest the following criteria for quasar selection when using the DECaLS $g$, $r$, $z$, and WISE W1 and W2 photometry:
\begin{equation} \label{eq:30}
{\rm type} = ``{\rm PSF}",
\end{equation}
\begin{equation} \label{eq:31}
{\rm flux}(r)>0,
\end{equation}
\begin{equation} \label{eq:32}
L_{\rm QSO}>-2,
\end{equation}
\begin{equation} \label{eq:33}
L_{\rm QSO} - L_{\rm Star}>4.4,
\end{equation}
\begin{equation} \label{eq:34}
L_{\rm QSO} - L'_{\rm Galaxy}>5.4.
\end{equation}

\begin{figure*}[htbp]
\hspace*{-0.5cm}
\epsscale{1.2}
\plotone{f13_completeness_all.pdf}
\caption{\label{fig:completeness}The completeness of the classification method for the spectroscopic quasar sample as a function of redshift (left panels) and $r$-band magnitude (right panels). The top panels show the completeness when applying the logarithmic likelihood criteria, including the criterion $L_{\rm QSO}>-2$ in Equation (\ref{eq:32}) (blue diamonds), the criterion $L_{\rm QSO} - L_{\rm Star}>4.4$ in Equation (\ref{eq:33}) (orange dots), the criterion $L_{\rm QSO} - L'_{\rm Galaxy}>5.4$ in Equation (\ref{eq:34}) (magenta open pentagons), and all of the above criteria in Equations (\ref{eq:32})-(\ref{eq:34}) (black open stars). The bottom panels show the completeness when applying the ``PSF" morphology criterion in Equation (\ref{eq:30}) (blue dot-line), and when combining all logarithmic likelihood criteria with the ``PSF" morphology criterion in Equations (\ref{eq:30})-(\ref{eq:34}) (red star-line). The incompleteness at $z<1$ is probably caused by quasar variability, non-PSF morphology, and host galaxy contamination. $z\sim 2.8$ quasars lie close to the stellar locus, and the completeness at $z\sim2.8$ decreases to $\sim 70\%$.
The completeness decreases to below 50\% at $r>22.3$, as the WISE data are shallower than the DECaLS data.}
\end{figure*}

\begin{figure*}
 \centering
 \hspace{-0.5cm}
 \subfigure{
 \includegraphics[width=3.7in]{f14a_vvds.pdf}}
 \hspace{-1.2cm}
 \subfigure{
 \includegraphics[width=3.7in]{f14b_cosmos.pdf}}
 \caption{\label{fig:deep}Objects in fields with deep spectroscopic surveys: the VVDS deep field (left panel) and the COSMOS field (right panel). The y-axis is the $r$-band magnitude. The black stars are spectroscopically identified AGNs that are selected by our classification method, and the blue open circles are AGNs missed by our method; most missed AGNs are fainter than 23 mag. The gray dots are photo-$z$ selected quasars without spectra. The red crosses are AGNs missed by our method because their morphology types are not ``PSF". The orange diamonds show non-AGN objects selected by our method, which mainly show up at $z<1$ or at the faint end.}
\vspace{0.5cm}
\end{figure*}
\begin{table*}
\small
\begin{center}
\tablewidth{1pt}
\caption{Quasar Candidate Selection Test in Spectroscopic Surveys} \label{tab:deep}
\begin{threeparttable}
\begin{tabular}{lcrcccccc}
\tableline\tableline
${\rm Field}$ & ${\rm Area}$ & Spec\tnote{a} & photo\tnote{b} & QSO\tnote{c} & Star\tnote{d} & Galaxy\tnote{e} & completeness\tnote{f} & efficiency\tnote{g} \\
& /deg$^2$ & QSO & QSO & selected & selected & selected & spec & spec\\
\tableline
VVDS (deep) & 0.7 & 70 & 273 & 53 & 0 & 17 & 71\% & 76\% \\
COSMOS & 2.1 & 156 & 255 & 119 & 1 & 28 & 76\% & 77\% \\
S82 (2.5h) & 15 & 153 & 271 & 129 & 8 & 3 & 84\% & 91\%\\
S82 (22.7h-3h) & 162.5 & 71 & 251 & 64 & 2 & 1 & 90\% & 95\%\\
\tableline
\end{tabular}
\tablecomments{Columns are as follows (all for $r<22.5$):}
\begin{tablenotes}
 \item[a] number of spectroscopically identified quasars per deg$^2$.
 \item[b] number of photometrically selected quasars per deg$^2$.
 \item[c/d/e] photometrically selected objects that are spectroscopically identified as quasars/stars/galaxies, per deg$^2$.
 \item[f] the completeness, calculated from the spectroscopically identified quasars at $r<22.5$.
 \item[g] the efficiency, calculated as the ratio of the photometrically selected, spectroscopically identified quasars to all photometrically selected objects with spectroscopic identifications (quasars, stars, and galaxies) at $r<22.5$.
 \end{tablenotes}
 \end{threeparttable}
 \end{center}
\end{table*}

With the criteria in Equations (\ref{eq:30})-(\ref{eq:34}), the selection completeness for spectroscopically identified quasars in the dense region is 81\%. Of the 98,450 quasars with DECaLS photometry, we recover 84,639 (86\%). Figure \ref{fig:completeness} shows the completeness for the spectroscopically identified quasar sample as a function of redshift (left panels) and $r$-band magnitude (right panels). In the top panels, the blue diamonds show the completeness after applying the criterion in Equation (\ref{eq:32}). The completeness decreases at redshifts below 1; one possible reason is the uncertainty introduced by variability, because the DECaLS and WISE images were not taken simultaneously. The incompleteness at $z>4.5$ is mainly caused by the limited number of high redshift quasars and the larger photometric uncertainty in the $r$ band.
The Bayesian probability selection of $z>4.5$ quasars could potentially be improved by using simulated quasar fluxes \citep{McGreer2013}, relative fluxes normalized by the $z$-, $y$- or $J$-band flux, and a QLF measured at high redshift. The orange dots show the completeness using the criterion in Equation (\ref{eq:33}); the completeness at $z \sim 2.8$ decreases because quasars there lie close to the stellar locus \citep[e.g.,][]{Fan1999}. This criterion also causes incompleteness at $r>21$ mag. The magenta open pentagons show the completeness with the criterion in Equation (\ref{eq:34}); the completeness decreases rapidly at $r>21.5$ mag, as the WISE photometric uncertainties increase dramatically. The black open stars show the completeness when applying all the criteria in Equations (\ref{eq:32})-(\ref{eq:34}). The blue dot-line in Figure \ref{fig:completeness} (bottom panels) shows the completeness when applying the ``PSF" morphology criterion as a function of redshift (bottom left panel) and magnitude (bottom right panel). The completeness with the morphology criterion decreases rapidly with decreasing redshift at $z<1$; the fraction of known quasars satisfying the morphology criterion decreases from 92\% toward lower redshifts. Because the colors of $2.5